\begin{document} \author[Dos Santos, H.S.]{Helen Samara Dos Santos} \address{Department of Mathematics and Statistics, Memorial University of Newfoundland, St. John's, NL, A1C5S7, Canada} \email{[email protected]} \author[Hornhardt, C.]{Caio De Naday Hornhardt} \address{Department of Mathematics and Statistics, Memorial University of Newfoundland, St. John's, NL, A1C5S7, Canada} \email{[email protected]} \author[Kochetov, M.]{Mikhail Kochetov} \address{Department of Mathematics and Statistics, Memorial University of Newfoundland, St. John's, NL, A1C5S7, Canada} \email{[email protected]} \thanks{The authors acknowledge financial support by Discovery Grant 341792-2013 of the Natural Sciences and Engineering Research Council (NSERC) of Canada and by Memorial University of Newfoundland.} \date{} \title[Group gradings on the superalgebras $M(m,n)$, $A(m,n)$ and $P(n)$]{Group gradings on the superalgebras\\ $M(m,n)$, $A(m,n)$ and $P(n)$} \subjclass[2010]{Primary 17B70; Secondary 16W50, 16W55, 17A70} \keywords{Graded algebra, associative superalgebra, simple Lie superalgebra, classical Lie superalgebra, classification} \begin{abstract} We classify gradings by arbitrary abelian groups on the classical simple Lie superalgebras $P(n)$, $n \geq 2$, and on the simple associative superalgebras $M(m,n)$, $m, n \geq 1$, over an algebraically closed field: fine gradings up to equivalence and $G$-gradings, for a fixed group $G$, up to isomorphism. As a corollary, we also classify up to isomorphism the $G$-gradings on the classical Lie superalgebra $A(m,n)$ that are induced from $G$-gradings on $M(m+1,n+1)$. In the case of Lie superalgebras, the characteristic is assumed to be $0$. \end{abstract} \maketitle \section{Introduction} In the past two decades, gradings on Lie algebras by arbitrary abelian groups have been extensively studied. 
For finite-dimensional simple Lie algebras over an algebraically closed field $\FF$, the classification of fine gradings up to equivalence has recently been completed (assuming $\Char \FF = 0$) by efforts of many authors --- see the monograph \cite[Chapters 3--6]{livromicha} and the references therein, and also \cite{YuExc} and \cite{E14}. For a fixed abelian group $G$, the classification of $G$-gradings up to isomorphism is also known (assuming $\Char \FF \neq 2$), except for types $E_6$, $E_7$ and $E_8$ --- see \cite{livromicha} and \cite{EK_d4}. This paper is devoted to gradings on finite-dimensional simple Lie superalgebras. Over an algebraically closed field of characteristic $0$, such superalgebras were classified by V.~G.~Kac in \cite{kacZ,artigokac} (see also \cite{livrosuperalgebra}). In \cite{kacZ}, there is also a classification of $\ZZ$-gradings on these superalgebras. More recently, gradings by arbitrary abelian groups have been considered. Fine gradings on the exceptional simple Lie superalgebras, namely, $D(2,1;\alpha)$, $G(3)$ and $F(4)$, were classified in \cite{artigoelduque} and all gradings on the series $Q(n)$, $n\geq 2$, were classified in \cite{paper-Qn}. A description of gradings on matrix superalgebras, here denoted by $M(m,n)$ (see Section \ref{sec:Mmn}), was given in \cite{BS}, but the isomorphism problem was left open and fine gradings were not considered. The initial goal of this work was to classify abelian group gradings on the series $P(n)$, $n\geq 2$, and thereby complete the classification of gradings on the so-called ``strange Lie superalgebras''. Our approach led us to the study of gradings on the associative superalgebras $M(m,n)$ and the closely related Lie superalgebras $A(m,n)$. Throughout this work, the canonical $\ZZ_2$-grading of a superalgebra will be denoted by superscripts, reserving subscripts for the components of other gradings. 
Thus, a $G$-grading on a superalgebra $A = A\even \oplus A\odd$ is a vector space decomposition $\Gamma:\,A = \bigoplus_{g \in G} A_g$ such that $A_g A_h\subseteq A_{gh}$, for all $g,h\in G$, and each $A_g$ is compatible with the superalgebra structure, i.e., $A_g=A_g^\bz \oplus A_g^\bo$. Note that $G$-gradings on a superalgebra can be seen as $G\times \ZZ_2$-gradings on the underlying algebra. For the superalgebras under consideration, namely, $M(m,n)$, $A(m,n)$ and $P(n)$, the canonical $\ZZ_2$-grading can be refined to a canonical $\ZZ$-grading, whose components will be denoted by superscripts $-1, 0, 1$. Only gradings by abelian groups are discussed in this work, which is no loss of generality in the case of simple Lie superalgebras, because the support always generates an abelian group. All the (super)algebras and vector (super)spaces are assumed to be finite-dimensional over a fixed algebraically closed field $\FF$. When dealing with the Lie superalgebras $A(m,n)$ and $P(n)$, we will also assume $\Char \FF = 0$. The paper is structured as follows. Sections \ref{sec:generalities} and \ref{sec:gradings-on-matrix-algebras} have no original results. In the former, we introduce all basic definitions and a few general results for future reference, and the latter is a review of the classification of gradings on matrix algebras closely following \cite[Chapter 2]{livromicha}, with a slight change in notation. Section \ref{sec:Mmn} is devoted to the associative superalgebras $M(m,n)$, which have two kinds of gradings: the \emph{even gradings} are compatible with the canonical $\ZZ$-grading and the \emph{odd gradings} are not. (The latter can occur only if $m=n$.) The classification results for even gradings are Theorems \ref{thm:even-assc-iso} ($G$-gradings up to isomorphism) and \ref{thm:class-fine-even} (fine gradings up to equivalence). 
We present two descriptions of odd gradings: one as $G\times \ZZ_2$-gradings on the underlying matrix algebra (see Subsection \ref{ssec:grds-on-superalgebras}) and the other purely in terms of the group $G$ (see Subsection \ref{ssec:second-odd}). We classify odd gradings in Theorems \ref{thm:first-odd-iso} and \ref{thm:2nd-odd-iso} ($G$-gradings up to isomorphism) and in Theorem \ref{thm:class-fine-odd} (fine gradings up to equivalence). In Section \ref{sec:Amn}, we consider gradings on the Lie superalgebras $A(m,n)$, but only those that are induced from $M(m+1, n+1)$ (see Definition \ref{def:Type-I}). We classify them up to isomorphism in Theorem \ref{thm:even-Lie-iso} (even gradings) and in Theorem \ref{thm:first-odd-Lie-iso} and Corollary \ref{cor:2nd-odd-Lie-iso} (odd gradings). In Section \ref{sec:Pn}, we classify gradings on the Lie superalgebras $P(n)$: see Theorem \ref{thm:Pn-iso} for $G$-gradings up to isomorphism and Theorem \ref{thm:class-fine-Pn} for fine gradings up to equivalence. \section{Generalities on gradings}\label{sec:generalities} The purpose of this section is to fix notation and recall definitions concerning graded algebras and graded modules. \subsection{Gradings on vector spaces and (bi)modules}\label{subsec:graded-bimodules} Let $G$ be a group. By a \emph{$G$-grading} on a vector space $V$ we mean simply a vector space decomposition $\Gamma:\,V = \bigoplus_{g \in G} V_g$ where the summands are labeled by elements of $G$. If $\Gamma$ is fixed, $V$ is referred to as a {\em $G$-graded vector space}. A subspace $W \subseteq V$ is said to be \emph{graded} if $W = \bigoplus_{g \in G} (W \cap V_g)$. We will refer to $\ZZ_2$-graded vector spaces as \emph{superspaces} and their graded subspaces as \emph{subsuperspaces}. An element $v$ in a graded vector space $V = \bigoplus_{g \in G} V_g$ is said to be \emph{homogeneous} if $v\in V_g$ for some $g\in G$. If $0\ne v\in V_g$, we will say that $g$ is the \emph{degree} of $v$ and write $\deg v = g$. 
In reference to the canonical $\ZZ_2$-grading of a superspace, we will instead speak of the \emph{parity} of $v$ and write $|v| = g$. Every time we write $\deg v$ or $|v|$, it should be understood that $v$ is a nonzero homogeneous element. \begin{defi} Given two $G$-graded vector spaces, $V=\bigoplus_{g\in G} V_g$ and $W=\bigoplus_{g\in G} W_g$, we define their tensor product to be the vector space $V\otimes W$ together with the $G$-grading given by $(V \otimes W)_g = \bigoplus_{ab=g} V_{a} \otimes W_{b}$. \end{defi} The concept of grading on a vector space is connected to gradings on algebras by means of the following: \begin{defi} If $V=\bigoplus_{g\in G} V_{g}$ and $W=\bigoplus_{g\in G} W_{g}$ are two graded vector spaces and $T: V\rightarrow W$ is a linear map, we say that $T$ is \emph{homogeneous of degree $t$}, for some $t\in G$, if $T(V_g)\subseteq W_{tg}$ for all $g\in G$. \end{defi} If $S: U\rightarrow V$ and $T: V\rightarrow W$ are homogeneous linear maps of degrees $s$ and $t$, respectively, then the composition $T\circ S$ is homogeneous of degree $ts$. We define the {\em space of graded linear transformations} from $V$ to $W$ to be: \[ \Hom^{\text{gr}} (V,W) = \bigoplus_{g\in G} \Hom (V,W)_{g}\] where $\Hom (V,W)_{g}$ denotes the set of all linear maps from $V$ to $W$ that are homogeneous of degree $g$. If we assume $V$ to be finite-dimensional then we have $\Hom(V,W)=\Hom^{\gr}(V,W)$ and, in particular, $\End (V) = \bigoplus_{g\in G} \End (V)_g$ is a graded algebra. We also note that $V$ becomes a graded module over $\End(V)$ in the following sense: \begin{defi} Let $A$ be a $G$-graded algebra (associative or Lie) and let $V$ be a (left) module over $A$ that is also a $G$-graded vector space. We say that $V$ is a \emph{graded $A$-module} if $A_g \cdot V_h \subseteq V_{gh}$, for all $g$,$h\in G$. The concept of $G$-\emph{graded bimodule} is defined similarly. 
\end{defi} If we have a $G$-grading on a Lie superalgebra $L=L\even \oplus L\odd$ then, in particular, we have a grading on the Lie algebra $L\even$ and a grading on the space $L\odd$ that makes it a graded $L\even$-module. If we have a $G$-grading on an associative superalgebra $C=C\even \oplus C\odd$, then $C\odd$ becomes a graded bimodule over $C\even$. If $ \Gamma$ is a $G$-grading on a vector space $V$ and $g\in G$, we denote by $\Gamma^{[g]} $ the grading given by relabeling the component $V_h$ as $V_{hg}$, for all $h \in G$. This is called the \emph{(right) shift of the grading $\Gamma$ by $g$}. We denote the graded space $(V, \, \Gamma^{[g]})$ by $V^{[g]}$. From now on, we assume that $G$ is abelian. If $V$ is a graded module over a graded algebra (or a graded bimodule over a pair of graded algebras), then $V^{[g]}$ is also a graded (bi)module. We will make use of the following partial converse (see e.g. \cite[Proposition 3.5]{paper-Qn}): \begin{lemma}\label{lemma:simplebimodule} Let $A$ and $B$ be $G$-graded algebras and let $V$ be a finite-dimensional (ungraded) simple $A$-module or $(A,B)$-bimodule. If $\Gamma$ and $\Gamma'$ are two $G$-gradings that make $V$ a graded (bi)module, then $\Gamma'$ is a shift of $\Gamma$.\qed \end{lemma} Certain shifts of grading may be applied to graded $\ZZ$- or $\ZZ_2$-superalgebras. In the case of a $\ZZ$-superalgebra $L=L^{-1}\oplus L^{0}\oplus L^{1}$, we have the following: \begin{lemma}\label{lemma:opposite-directions} Let $L=L^{-1}\oplus L^0\oplus L^1$ be a $\ZZ$-superalgebra such that $L^1\, L^{-1}\neq 0$. If we shift the grading on $L^1$ by $g\in G$ and the grading on $L^{-1}$ by $g' \in G$, then we have a grading on $L$ if and only if $g' = g^{-1}$. \qed \end{lemma} We will describe this situation as \emph{shift in opposite directions}. \subsection{Universal grading group, equivalence and isomorphism of gradings} There is a concept of grading not involving groups. 
A \emph{set grading} on a (super)algebra $A$ is a decomposition $\Gamma:\,A=\bigoplus_{s\in S}A_s$ as a direct sum of sub\-(su\-per)\-spa\-ces indexed by a set $S$ and having the property that, for any $s_1,s_2\in S$ with $A_{s_1}A_{s_2}\ne 0$, there exists $s_3\in S$ such that $A_{s_1}A_{s_2}\subseteq A_{s_3}$. The \emph{support} of $\Gamma$ (or of $A$) is defined to be the set $\supp(\Gamma) := \{s\in S \mid A_s \neq 0\}$. Similarly, $\supp_\bz(\Gamma) := \{s\in S \mid A_s^\bz \neq 0\}$ and $\supp_\bo(\Gamma) := \{s\in S \mid A_s^\bo \neq 0\}$. For a set grading $\Gamma:\,A=\bigoplus_{s\in S}A_s$, there may or may not exist a group $G$ containing $\supp(\Gamma)$ that makes $\Gamma$ a $G$-grading. If such a group exists, $\Gamma$ is said to be a {\em group grading}. (As already mentioned, we only consider abelian group gradings in this paper.) However, $G$ is usually not unique even if we require that it should be generated by $\supp(\Gamma)$. The {\em universal (abelian) grading group} of $\Gamma$ is generated by $\supp(\Gamma)$ and has the defining relations $s_1s_2=s_3$ for all $s_1,s_2,s_3\in S$ such that $0\neq A_{s_1}A_{s_2}\subseteq A_{s_3}$. This group is universal among all (abelian) groups that realize the grading $\Gamma$ (see e.g. \cite[Chapter 1]{livromicha} for details). Let $\Gamma:\,A=\bigoplus_{g\in G} A_g$ and $\Delta:\,B=\bigoplus_{h\in H} B_h$ be two group gradings on the (super)algebras $A$ and $B$, with supports $S$ and $T$, respectively. We say that $\Gamma$ and $\Delta$ are {\em equivalent} if there exists an isomorphism of (super)algebras $\vphi: A\to B$ and a bijection $\alpha: S\to T$ such that $\vphi(A_s)=B_{\alpha(s)}$ for all $s\in S$. If $G$ and $H$ are universal grading groups then $\alpha$ extends to an isomorphism $G\to H$. 
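To illustrate the universal grading group (a standard example, added here for concreteness), consider the Cartan grading on $M_2(\FF)$:

```latex
% Cartan grading on M_2(F), with support {-1, 0, 1} inside Z:
\Gamma:\; M_2(\FF) \;=\; \FF E_{21} \,\oplus\, \left(\FF E_{11}\oplus\FF E_{22}\right) \,\oplus\, \FF E_{12},
\qquad \deg E_{21} = -1,\quad \deg E_{11} = \deg E_{22} = 0,\quad \deg E_{12} = 1.
% The nonzero products, e.g. E_{12}E_{21} = E_{11}, impose exactly the
% relations of the free abelian group on one generator, so the universal
% (abelian) grading group is Z. Since the components of degrees 1 and -1
% square to zero, the same decomposition is also a Z_3-grading (relabel
% the components by -1, 0, 1 modulo 3), so the realizing group is not unique.
```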
In the case $G=H$, the $G$-gradings $\Gamma$ and $\Delta$ are {\em isomorphic} if $A$ and $B$ are isomorphic as $G$-graded (super)algebras, i.e., if there exists an isomorphism of (super)algebras $\vphi: A\to B$ such that $\vphi(A_g)=B_g$ for all $g\in G$. If $\Gamma:\,A=\bigoplus_{g\in G} A_g$ and $\Gamma':\,A=\bigoplus_{h\in H} A'_h$ are two gradings on the same (super)algebra $A$, with supports $S$ and $T$, respectively, then we will say that $\Gamma'$ is a {\em refinement} of $\Gamma$ (or $\Gamma$ is a {\em coarsening} of $\Gamma'$) if, for any $t\in T$, there exists (unique) $s\in S$ such that $A'_t\subseteq A_s$. If, moreover, $A'_t\ne A_s$ for at least one $t\in T$, then the refinement is said to be {\em proper}. A grading $\Gamma$ is said to be {\em fine} if it does not admit any proper refinements. Note that if $A$ is a superalgebra then $A=\bigoplus_{(g,i)\in G\times\mathbb{Z}_2}A_g^i$ is a refinement of $\Gamma$. It follows that if $\Gamma$ is fine then the sets $\supp_\bz(\Gamma)$ and $\supp_\bo(\Gamma)$ are disjoint. If, moreover, $G$ is the universal group of $\Gamma$, then the superalgebra structure on $A$ is given by the unique homomorphism $p: G \to \ZZ_2$ that sends $\supp_\bz(\Gamma)$ to $\bar 0$ and $\supp_\bo(\Gamma)$ to $\bar 1$. \begin{defi} Let $G$ and $H$ be groups, $\alpha:G\to H$ be a group homomorphism and $\Gamma:\,A=\bigoplus_{g\in G} A_g$ be a $G$-grading. The \emph{coarsening of $\Gamma$ induced by $\alpha$} is the $H$-grading ${}^\alpha \Gamma: A= \bigoplus_{h\in H} B_h$ where $ B_h = \bigoplus_{g\in \alpha\inv (h)} A_g$. (This coarsening is not necessarily proper.) \end{defi} The following result appears to be ``folklore''. We include a proof for completeness. \begin{lemma}\label{lemma:universal-grp} Let $\mathcal{F}=\{\Gamma_i\}_{i\in I}$, be a family of pairwise nonequivalent fine (abelian) group gradings on a (super)algebra $A$, where $\Gamma_i$ is a $G_i$-grading and $G_i$ is generated by $\supp(\Gamma_i)$. 
Suppose that $\mathcal{F}$ has the following property: for any grading $\Gamma$ on $A$ by an (abelian) group $H$, there exists $i\in I$ and a homomorphism $\alpha:G_i\to H$ such that $\Gamma$ is isomorphic to ${}^\alpha\Gamma_i$. Then \begin{enumerate}[(i)] \item every fine (abelian) group grading on $A$ is equivalent to a unique $\Gamma_i$; \item for all $i$, $G_i$ is the universal (abelian) group of $\Gamma_i$. \end{enumerate} \end{lemma} \begin{proof} Let $\Gamma$ be a fine grading on $A$, realized over its universal group $H$. Then there is $i\in I$ and $\alpha: G_i \to H$ such that ${}^\alpha \Gamma_i \iso \Gamma$. Writing $\Gamma_i: A = \bigoplus_{g\in G_i} A_g$ and $\Gamma: A = \bigoplus_{h\in H} B_h$, we then have $\vphi \in \Aut(A)$ such that \[ \vphi\,\big( \bigoplus_{g\in \alpha\inv (h)} A_g \big) = B_h \] for all $h\in H$. Since $\Gamma$ is fine, for each $h\in H$ with $B_h \neq 0$ there must be a unique $g\in G_i$ such that $\alpha(g) = h$ and $A_g\neq 0$, and then $\vphi(A_g) = B_h$. Equivalently, $\alpha$ restricts to a bijection $\supp(\Gamma_i) \to \supp(\Gamma)$ and $\vphi(A_g) = B_{\alpha(g)}$ for all $g \in S_i:= \supp (\Gamma_i)$. This proves assertion $(i)$. Let $G$ be the universal group of $\Gamma_i$. It follows that, for all $s_1, s_2, s_3 \in S_i$, \begin{equation*} \label{eq:relations-unvrsl-grp} \begin{split} & s_1s_2 = s_3 \text{ is a defining relation of } G \\ \iff & 0 \neq A_{s_1} A_{s_2} \subseteq A_{s_3}\\ \iff & 0 \neq B_{\alpha(s_1)} B_{\alpha(s_2)} \subseteq B_{\alpha (s_3)}\\ \iff & \alpha(s_1)\alpha(s_2) = \alpha(s_3) \text{ is a defining relation of } H. \end{split} \end{equation*} Therefore, the bijection $\alpha\restriction_{S_i}$ extends uniquely to an isomorphism $\beta: G\rightarrow H$. By the universal property of $G$, there is a unique homomorphism $\gamma: G\to G_i$ that restricts to the identity on $S_i$. 
Hence, the following diagram commutes: \begin{center} \begin{tikzcd} G \arrow[to=Gi, "\gamma"] \arrow[to = H, "\beta"]&&\\ && |[alias=H]|H\\ |[alias=Gi]|G_i \arrow[to=H, "\alpha"]&& \end{tikzcd} \end{center} Since $\beta$ is an isomorphism, $\gamma$ must be injective. But $\gamma$ is also surjective since $S_i$ generates $G_i$. Hence $G_i$ is isomorphic to $G$. Since $\Gamma$ was an arbitrary fine grading, for each given $j\in I$, we can take $\Gamma = \Gamma_j$ (hence, $i=j$ and $H=G$). This concludes the proof of $(ii)$. \end{proof} \begin{defi} Let $\Gamma$ be a grading on an algebra $A$. We define $\Aut(\Gamma)$ as the group of all self-equivalences of $\Gamma$, i.e., automorphisms of $A$ that permute the components of $\Gamma$. Let $\operatorname{Stab}(\Gamma)$ be the subgroup of $\Aut(\Gamma)$ consisting of the automorphisms that fix each component of $\Gamma$. Clearly, $\operatorname{Stab}(\Gamma)$ is a normal subgroup of $\Aut(\Gamma)$, so we can define the \emph{Weyl group} of $\Gamma$ by $\operatorname W (\Gamma) := \frac{\Aut(\Gamma)}{\operatorname{Stab}(\Gamma)}$. The group $\operatorname W (\Gamma)$ can be seen as a subgroup of the permutation group of the support and also as a subgroup of the automorphism group of the universal group of $\Gamma$. \end{defi} \subsection{Correspondence between $G$-gradings and $\widehat G$-actions}\label{ssec:G-hat-action} One of the most important tools for dealing with gradings by abelian groups on (super)algebras is to translate a $G$-grading into a $\widehat G$-action, where $\widehat G$ is the algebraic group of characters of $G$, \ie, group homomorphisms $G \rightarrow \FF^{\times}$. The group $\widehat{G}$ acts on any $G$-graded (super)algebra $A = \bigoplus_{g\in G} A_g$ by $\chi \cdot a = \chi(g) a$ for all $a\in A_g$ (extended to arbitrary $a\in A$ by linearity). The map given by the action of a character $\chi \in \widehat{G}$ is an automorphism of $A$. 
If $\FF$ is algebraically closed and $\Char \FF = 0$, then $A_g = \{ a\in A \mid \chi \cdot a = \chi (g) a\}$, so the grading can be recovered from the action. For example, if $A=A\even \oplus A\odd$ is a superalgebra, the action of the nontrivial character of $\ZZ_2$ yields the \emph{parity automorphism} $\upsilon$, which acts as the identity on $A\even$ and as the negative identity on $A\odd$. If $A$ is a $\ZZ$-graded algebra, we get a representation $\widehat \ZZ = \FF^\times \rightarrow \Aut (A)$ given by $\lambda \mapsto \upsilon_\lambda$ where $\upsilon_{\lambda}$ acts as $\lambda^i \id$ on $A^i$. A grading on a (super)algebra over an algebraically closed field of characteristic $0$ is said to be \emph{inner} if it corresponds to an action by inner automorphisms. For example, the inner gradings on $\Sl(n)$ (also known as Type I gradings) are precisely the restrictions of gradings on the associative algebra $M_n(\FF)$. \section{Gradings on matrix algebras} \label{sec:gradings-on-matrix-algebras} In this section we will recall the classification of gradings on matrix algebras. We will follow \cite[Chapter 2]{livromicha} but use slightly different notation, which we will extend to superalgebras in Section \ref{sec:Mmn}. The following is the graded version of a classical result (see e.g. \cite[Theorem 2.6]{livromicha}). We recall that a \emph{graded division algebra} is a graded unital associative algebra such that every nonzero homogeneous element is invertible. \begin{thm}\label{thm:End-over-D} Let $G$ be a group and let $R$ be a $G$-graded associative algebra that has no nontrivial graded ideals and satisfies the descending chain condition on graded left ideals. Then there is a $G$-graded division algebra $\D$ and a graded (right) $\D$-module $\mc{V}$ such that $R \simeq \End_{\D} (\mc{V})$ as graded algebras.\qed \end{thm} We apply this result to the algebra $R=M_n(\FF)$ equipped with a grading by an abelian group $G$. 
We will now introduce the parameters that determine $\mc D$ and $\mc V$, and give an explicit isomorphism $\End_{\D} (\mc{V})\simeq M_n(\FF)$ (see Definition \ref{def:explicit-grd-assoc}). Let $\D$ be a finite-dimensional $G$-graded division algebra. It is easy to see that $T= \supp \D$ is a finite subgroup of $G$. Also, since we are over an algebraically closed field, each homogeneous component $\D_t$, for $t\in T$, is one-dimensional. We can choose a generator $X_t$ for each $\D_t$. It follows that, for every $u,v\in T$, there is a unique nonzero scalar $\beta (u,v)$ such that $X_u X_v = \beta (u,v) X_v X_u$. Clearly, $\beta (u,v)$ does not depend on the choice of $X_u$ and $X_v$. The map $\beta: T\times T \rightarrow \FF^{\times}$ is a \emph{bicharacter}, \ie, both maps $\beta(t,\cdot)$ and $\beta(\cdot,t)$ are characters for every $t \in T$. It is also \emph{alternating} in the sense that $\beta (t,t) = 1$ for all $t\in T$. We define the \emph{radical} of $\beta$ as the set $\rad \beta = \{ t\in T \mid \beta(t, T) = 1 \}$. In the case we are interested in, where $\D$ is simple as an algebra, the bicharacter $\beta$ is \emph{nondegenerate}, \ie, $\rad \beta = \{e\} $. The isomorphism classes of $G$-graded division algebras that are finite-dimensional and simple as algebras are in one-to-one correspondence with the pairs $(T,\beta)$ where $T$ is a finite subgroup of $G$ and $\beta$ is an alternating nondegenerate bicharacter on $T$ (see e.g. \cite[Section 2.2]{livromicha} for a proof). Using that the bicharacter $\beta$ is nondegenerate, we can decompose the group $T$ as $A\times B$, where the restriction of $\beta$ to each of the subgroups $A$ and $B$ is trivial, and hence $A$ and $B$ are in duality by $\beta$. We can choose the elements $X_t\in \D_t$ in a convenient way (see \cite[Remark 2.16]{livromicha} and \cite[Remark 18]{EK15}) such that $X_{ab}=X_aX_b$ for all $a\in A$ and $b\in B$. 
Using this choice, we can define an action of $\D$ on the vector space underlying the group algebra $\FF B$, by declaring $X_a\cdot e_{b'} = \beta(a, b') e_{b'}$ and $X_b\cdot e_{b'} = e_{bb'}$. This action allows us to identify $\D$ with $\End{(\FF B)}$. Using the basis $\{e_{b}\mid b\in B\}$ in $\FF B$, we can see it as a matrix algebra, where \[X_{ab}= \sum_{b'\in B} \beta(a, bb') E_{bb', b'}\] and $E_{b'', b'}$ with $b'$, $b'' \in B$, is a matrix unit, namely, the matrix of the operator that sends $e_{b'}$ to $e_{b''}$ and sends all other basis elements to zero. \begin{defi} We will refer to these matrix models of $\mc D$ as its \emph{standard realizations}. \end{defi} \begin{remark}\label{rmk:2-grp-transp} The matrix transposition is always an involution of the algebra structure. As to the grading, we have \[ X_{ab}\transp = \sum_{b'\in B} \beta(a, bb') E_{b',bb'} = \beta(a,b) \sum_{b''\in B} \beta(a, b^{-1}b'') E_{b^{-1}b'', b''} = \beta(a,b) X_{ab^{-1}}. \] It follows that if $T$ is an elementary 2-group, then the transposition preserves the degree. In this case, we will use it to fix an identification between the graded algebras $\D$ and $\D\op$. \end{remark} Graded modules over a graded division algebra $\mc D$ behave similarly to vector spaces. The usual proof that every vector space has a basis, with obvious modifications, shows that every graded $\mc D$-module has a \emph{homogeneous basis}, \ie, a basis formed by homogeneous elements. Let $\mc V$ be such a module of finite rank $k$, fix a homogeneous basis $\mc B = \{v_1, \ldots, v_k\}$ and let $g_i := \operatorname{deg} v_i$. We then have $\mc{V}\iso \ \D^{[g_1]}\oplus\cdots\oplus\D^{[g_k]}$, so, the graded $\mc D$-module $\mc V$ is determined by the $k$-tuple $\gamma = (g_1,\ldots, g_k)$. The tuple $\gamma$ is not unique. 
To capture the precise information that determines the isomorphism class of $\mc V$, we use the concept of \emph{multiset}, \ie, a set together with a map from it to the set of positive integers. If $\gamma = (g_1,\ldots, g_k)$ and $T=\supp \D$, we denote by $\Xi(\gamma)$ the multiset whose underlying set is $\{g_1 T,\ldots, g_k T\} \subseteq G/T$ and the multiplicity of $g_i T$, for $1\leq i\leq k$, is the number of entries of $\gamma$ that are congruent to $g_i$ modulo $T$. Using $\mc B$ to represent the linear maps by matrices in $M_k(\D) = M_k(\FF)\tensor \D$, we now construct an explicit matrix model for $\End_{\D}(\mc V)$. \begin{defi}\label{def:explicit-grd-assoc} Let $T \subseteq G$ be a finite subgroup, $\beta$ a nondegenerate alternating bicharacter on $T$, and $\gamma = (g_1, \ldots, g_k)$ a $k$-tuple of elements of $G$. Let $\D$ be a standard realization of a graded division algebra associated to $(T, \beta)$. Identify $M_k(\FF)\tensor \D \iso M_n(\FF)$ by means of the Kronecker product, where $n=k\sqrt{|T|}$. We will denote by $\Gamma(T, \beta, \gamma)$ the grading on $M_n(\FF)$ given by $\deg (E_{ij} \tensor d) := g_i (\deg d) g_j\inv$ for $i,j\in \{1, \ldots , k\}$ and homogeneous $d\in \D$, where $E_{ij}$ is the $(i,j)$-th matrix unit. \end{defi} If $\End(V)$, equipped with a grading, is isomorphic to $M_n(\FF)$ with $\Gamma(T, \beta, \gamma)$, we may abuse notation and also denote the grading on $\End(V)$ by $\Gamma(T,\beta,\gamma)$. 
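For instance (an added illustration), taking $T=\{e\}$, so that $\D=\FF$ and $n=k$, yields the \emph{elementary} grading $\deg E_{ij} = g_i g_j\inv$; with $k=2$ and $\gamma=(e,g)$, the degrees of the matrix units are:

```latex
% Degrees of the matrix units in Gamma({e}, 1, (e, g)) on M_2(F):
\left(\begin{matrix} \deg E_{11} & \deg E_{12}\\ \deg E_{21} & \deg E_{22} \end{matrix}\right)
= \left(\begin{matrix} e & g^{-1}\\ g & e \end{matrix}\right).
% At the other extreme, a tuple gamma of length k = 1 together with
% T = Z_2 x Z_2 and the nondegenerate beta recovers the Pauli grading on M_2(F).
```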
We restate \cite[Theorem 2.27]{livromicha} using our notation: \begin{thm}\label{thm:classification-matrix} Two gradings, $\Gamma(T,\beta,\gamma)$ and $\Gamma(T',\beta',\gamma')$, on the algebra $M_n(\FF)$ are isomorphic if, and only if, $T=T'$, $\beta=\beta'$ and there is an element $g\in G$ such that $g \Xi(\gamma)=\Xi(\gamma')$.\qed \end{thm} The proof of this theorem is based on the following result (see Theorem 2.10 and Proposition 2.18 from \cite{livromicha}), which will also be needed: \begin{prop}\label{prop:inner-automorphism} If $\phi: \End_\D (\mc V) \rightarrow \End_\D (\mc V')$ is an isomorphism of graded algebras, then there is a homogeneous invertible $\D$-linear map $\psi: \mc V\rightarrow \mc V'$ such that $\phi(r)=\psi \circ r \circ \psi\inv$, for all $r\in \End_\D (\mc V)$.\qed \end{prop} \section{Gradings on $M(m,n)$}\label{sec:Mmn} \subsection{The associative superalgebra $M(m,n)$}\label{M(m,n)} Let $U = U\even \oplus U\odd$ be a superspace. The algebra of endomorphisms of $U$ has an induced $\Zmod2$-grading, so it can be regarded as a superalgebra. It is convenient to write it in matrix form: \begin{equation}\label{eq:End_U} \End(U) = \left(\begin{matrix} \End(U\even) & \Hom(U\odd,U\even)\\ \Hom(U\even,U\odd) & \End(U\odd)\\ \end{matrix} \right). \end{equation} Choosing bases, we may assume that $U\even=\FF^m$ and $U\odd=\FF^n$, so the superalgebra $\End(U)$ can be seen as a matrix superalgebra, which is denoted by $M(m,n)$. We may also regard $U$ as a $\ZZ$-graded vector space, putting $U^0=U\even$ and $U^1=U\odd$. 
By doing so, we obtain an induced $\ZZ$-grading on $M(m,n) = \End (U)$ such that \[(\End\, U)\even =(\End\, U)^0 = \left(\begin{matrix} \End(U\even) & 0\\ 0 & \End(U\odd)\\ \end{matrix} \right) \] and $(\End\, U)\odd = (\End\, U)^{-1}\oplus (\End\, U)^1$ where \[(\End U)^{1}= \left(\begin{matrix} 0 & 0\\ \Hom(U\even,U\odd) & 0\\ \end{matrix} \right) \,\text{ and }\, (\End U)^{-1}= \left(\begin{matrix} 0 & \Hom(U\odd,U\even)\\ 0 & 0 \\ \end{matrix} \right). \] This grading will be called the \emph{canonical $\ZZ$-grading} on $M(m,n)$. \subsection{Automorphisms of $M(m,n)$} It is known that the automorphisms of the superalgebra $\End(U)$ are conjugations by invertible homogeneous operators. (This follows, for example, from Proposition \ref{prop:inner-automorphism}.) The invertible even operators are of the form $\left( \begin{matrix} a&0\\ 0&d\\ \end{matrix}\right)$ where $a\in \GL(m)$ and $d\in \GL(n)$. The corresponding inner automorphisms of $M(m,n)$ will be called \emph{even automorphisms}. They form a normal subgroup of $\Aut(M(m,n))$, which we denote by $\mc E$. The inner automorphisms given by odd operators will be called \emph{odd automorphisms}. Note that an invertible odd operator must be of the form $\left( \begin{matrix} 0&b\\ c&0\\ \end{matrix}\right)$ where both $b$ and $c$ are invertible, and this forces $m=n$. In this case, the set of odd automorphisms is a coset of $\mc E$, namely, $\pi \mc E$, where $\pi$ is the conjugation by the matrix $\left( \begin{matrix} 0_n & I_n\\ I_n & 0_n\\ \end{matrix}\right)$. This automorphism is called the \emph{parity transpose} and is usually denoted by superscript: \begin{equation*} \left( \begin{matrix} a&b\\ c&d\\ \end{matrix}\right)^\pi = \left( \begin{matrix} d&c\\ b&a\\ \end{matrix}\right). \end{equation*} Thus, $\Aut (M(m,n)) = \mc E$ if $m\neq n$, and $\Aut (M(n,n)) = \mc E \rtimes \langle \pi \rangle$. 
\begin{remark}\label{rmk:Aut-ZZ-superalgebra} It is worth noting that $\mc E$ is the automorphism group of the $\ZZ$-su\-per\-al\-gebra structure of $M(m,n)$, regardless of the values of $m$ and $n$. Indeed, the elements of this group are conjugations by homogeneous matrices with respect to the canonical $\ZZ$-grading, but all the matrices of degree $-1$ or $1$ are degenerate. \end{remark} \subsection{Gradings on matrix superalgebras}\label{ssec:grds-on-superalgebras} We are now going to generalize the results of Section \ref{sec:gradings-on-matrix-algebras} to the superalgebra $\M(m,n)$. It is clear that a $G$-grading on an associative superalgebra amounts to a $(G \times \ZZ_2)$-grading on the underlying associative algebra, so one might think that there is no new problem. But the description of gradings on matrix algebras presented in Section \ref{sec:gradings-on-matrix-algebras} does not allow us to readily see the gradings on the even and odd components of the superalgebra, so we are going to refine that description. We will denote the group $G\times \ZZ_2$ by $G^\#$ and the projection onto the second factor by $p\colon G^\# \rightarrow \ZZ_2$. Also, we will abuse notation and identify $G$ with $G\times \{\barr 0\} \subseteq G^\#$. \begin{remark} If the canonical $\ZZ_2$-grading is a coarsening of the $G$-grading by means of a homomorphism $p\colon G\rightarrow \ZZ_2$ (referred to as the \emph{parity homomorphism}), then we have another isomorphic copy of $G$ in $G^\#$, namely, the image of the embedding $g\mapsto (g, p (g))$, which contains the support of the $G^\#$-grading. In this case, we do not need $G^\#$ and can work with the original $G$-grading. \end{remark} A $G$-graded superalgebra $\mathcal D$ is called a \emph{graded division superalgebra} if every nonzero homogeneous element in $\D\even \cup \D\odd$ is invertible --- in other words, $\D$ is a $G^\#$-graded division algebra. 
We separate the gradings on $\M(m,n)$ into two classes depending on the superalgebra structure on $\D$: if $\D\odd = 0$, we say that we have an \emph{even grading} and, if $\D\odd \ne 0$, we have an \emph{odd grading}. To see the difference between even and odd gradings, consider the $G^\#$-graded algebra $E=\End_\D (\mc U)$, where $\D$ is a $G^{\#}$-graded division algebra and $\mc U$ is a graded module over $\D$. Define \[ \mc U\even = \bigoplus_{g\in p\inv(\barr 0)} \mc U_g \quad \text{and} \quad \mc U\odd = \bigoplus_{g\in p\inv(\barr 1)} \mc U_g. \] Then $\mc U\even$ and $\mc U\odd$ are $\D\even$-modules, but they are $\D$-modules if and only if $\D\odd=0$. So, in the case of an even grading, $\mc U$ is the direct sum of the $\D$-modules $\mc U\even$ and $\mc U\odd$, and all the information related to the canonical $\ZZ_2$-grading on $\End_\D (\mc U)$ comes from the decomposition $\mc U=\mc U\even \oplus \mc U\odd$. \begin{defi}\label{def:even-grd-on-Mmn} Similarly to Definition \ref{def:explicit-grd-assoc}, we will parametrize the even gradings on $M(m,n)$ as $\Gamma(T,\beta, \gamma_0, \gamma_1)$, where the pair $(T,\beta)$ characterizes $\D$ and $\gamma_0$ and $\gamma_1$ are tuples of elements of $G$ corresponding to the degrees of homogeneous bases for $\mc U\even$ and $\mc U\odd$, respectively. Here $\gamma_0$ is a $k_0$-tuple and $\gamma_1$ is a $k_1$-tuple, with $k_0\sqrt{|T|}=m$ and $k_1\sqrt{|T|}=n$. \end{defi} On the other hand, in the case of an odd grading, the information about the canonical $\ZZ_2$-grading is encoded in $\D$. To see that, take a homogeneous $\D$-basis of $\mc U$ and multiply all the odd elements by some nonzero homogeneous element in $\D\odd$. This way we get a homogeneous $\D$-basis of $\mc U$ such that the degrees are all in the subgroup $G$ of $G^\#$. If we denote the $\FF$-span of this new basis by $\widetilde U$, then $E\iso \End (\widetilde U)\tensor \D$ where the first factor has the trivial $\ZZ_2$-grading. 
\begin{defi}\label{def:odd-grd-on-Mmn-1} We parametrize the odd gradings by $\Gamma(T, \beta, \gamma)$ where $T\subseteq G^\#$ but $T\not\subseteq G$, the pair $(T,\beta)$ characterizes $\D$, and $\gamma$ is a tuple of elements of $G = G\times \{\bar 0\}$ corresponding to the degrees of a homogeneous basis of $\mc U$ consisting of even elements only. \end{defi} Clearly, it is impossible for an even grading to be isomorphic to an odd grading. The classification of even gradings is the following: \begin{thm}\label{thm:even-assc-iso} Every even $G$-grading on the superalgebra $M(m,n)$ is isomorphic to some $\Gamma(T,\beta, \gamma_0, \gamma_1)$ as in Definition \ref{def:even-grd-on-Mmn}. Two even gradings, $\Gamma = \Gamma(T,\beta, \gamma_0, \gamma_1)$ and $\Gamma' = \Gamma(T',\beta', \gamma_0', \gamma_1')$, are isomorphic if, and only if, $T=T'$, $\beta=\beta'$, and there is $g\in G$ such that \begin{enumerate}[(i)] \item for $m\neq n$: $g \Xi(\gamma_0)=\Xi(\gamma_0')$ and $g \Xi(\gamma_1)=\Xi(\gamma_1')$; \item for $m = n$: either $g \Xi(\gamma_0)=\Xi(\gamma_0')$ and $g \Xi(\gamma_1)=\Xi(\gamma_1')$ or $g\Xi(\gamma_0)=\Xi(\gamma_1')$ and $g \Xi(\gamma_1)=\Xi(\gamma_0')$. \end{enumerate} \end{thm} \begin{proof} We have already proved the first assertion. For the second assertion, we consider $\Gamma$ and $\Gamma'$ as $G^\#$-gradings on the algebra $M(m+n)$ and use Theorem \ref{thm:classification-matrix} to conclude that they are isomorphic if, and only if, $T=T'$, $\beta=\beta'$ and there is $(g,s)\in G^\#$ such that $(g,s)\Xi(\gamma)=\Xi(\gamma')$, where $\gamma$ is the concatenation of $\gamma_0$ and $\gamma_1$, regarded as a tuple of elements of $G^\# = G\times \ZZ_2$ by appending $\barr{0}$ to the entries of $\gamma_0$ and $\barr 1$ to the entries of $\gamma_1$. If $m\neq n$, the condition $(g,s)\Xi(\gamma)=\Xi(\gamma')$ forces $s=\barr0$, since the size of $\gamma_0$ is different from the size of $\gamma_1$.
If $m=n$, the condition $(g,s)\Xi(\gamma)=\Xi(\gamma')$ becomes $g \Xi(\gamma_1)=\Xi(\gamma_1')$ if $s=\barr0$ and $g \Xi(\gamma_1)=\Xi(\gamma_0')$ if $s=\barr1$. \end{proof} We now turn to the classification of odd gradings. Recall that here we choose the tuple $\gamma$ to consist of elements of $G$. The corresponding multiset $\Xi(\gamma)$ is contained in $\frac{G^\#}{T} \iso \frac{G}{T \cap G}$. \begin{thm}\label{thm:first-odd-iso} Every odd $G$-grading on the superalgebra $M(m,n)$ is isomorphic to some $\Gamma(T,\beta, \gamma)$ as in Definition \ref{def:odd-grd-on-Mmn-1}. Two odd gradings, $\Gamma = \Gamma(T,\beta, \gamma)$ and $\Gamma' = \Gamma(T',\beta', \gamma')$, are isomorphic if, and only if, $T=T'$, $\beta=\beta'$, and there is $g\in G$ such that $g \Xi(\gamma)=\Xi(\gamma')$. \end{thm} \begin{proof} We have already proved the first assertion. For the second assertion, we again consider $\Gamma$ and $\Gamma'$ as $G^\#$-gradings and use Theorem \ref{thm:classification-matrix}: they are isomorphic if, and only if, $T=T'$, $\beta=\beta'$ and there is $(g,s)\in G^\#$ such that $(g,s)\Xi(\gamma)=\Xi(\gamma')$. Since $T$ contains an element $t_1$ with $p(t_1) = \barr 1$, we may assume $s=\barr 0$. \end{proof} In Subsection \ref{subsec:odd-gradings}, we will show that odd gradings can exist only if $m=n$. It may be desirable to express the classification in terms of $G$ rather than $G^\#$ (as we did for even gradings). We will return to this in Subsection \ref{ssec:second-odd}. \subsection{Even gradings and Morita context.}\label{subsec:even-gradings} First we observe that every grading on $M(m,n)$ compatible with the $\ZZ$-superalgebra structure is an even grading. This follows from the fact that $T=\supp \D$ is a finite group, and if a finite group is contained in $G\times \ZZ$, then it must be contained in $G\times \{0\}$. Hence, when we look at the corresponding $(G\times\ZZ_2)$-grading, we have that $T\subseteq G$, so no element of $\D$ has an odd degree. 
The converse is also true. Actually, we can prove a stronger assertion: if we write $\M(m,n)$ as in Equation \eqref{eq:End_U}, the subspaces given by each of the four blocks are graded. To capture this information, it is convenient to use the concepts of Morita context and Morita algebra. Recall that a \emph{Morita context} is a sextuple $\mathcal{C} = (R, S, M, N, \vphi, \psi )$ where $R$ and $S$ are unital associative algebras, $M$ is an $(R,S)$-bimodule, $N$ is an $(S,R)$-bimodule and $\vphi: M\tensor_{S} N\rightarrow R$ and $\psi: N\tensor_{R} M\rightarrow S$ are bilinear maps satisfying the necessary and sufficient conditions for \begin{equation*} C = \left(\begin{matrix}\label{eq:morita-algebra} R & M\\ N & S\\ \end{matrix} \right) \end{equation*} to be an associative algebra, \ie, \[\vphi(m_1\tensor n_1)\cdot m_2 = m_1\cdot \psi(n_1\tensor m_2) \text{ and }\psi(n_1\tensor m_1)\cdot n_2 = n_1\cdot \vphi(m_1\tensor n_2)\] \noindent for all $m_1,m_2\in M$ and $n_1,n_2\in N$. We can associate a Morita context to a superspace $U = U\even \oplus U\odd$ by taking $R = \End(U\even)$, $S = \End(U\odd)$, $M = \Hom(U\odd, U\even)$, $N = \Hom (U\even, U\odd)$, with $\vphi$ and $\psi$ given by composition of operators. Given an algebra $C$ as above and the idempotent $ \epsilon = \left(\begin{matrix} 1 & 0\\ 0 & 0\\ \end{matrix} \right) $, we can recover all the data of the Morita context (up to isomorphism): $R \iso \epsilon C \epsilon$, $S \iso (1 - \epsilon) C (1 - \epsilon)$, $M \iso \epsilon C (1-\epsilon)$, $N \iso (1-\epsilon) C \epsilon$, and $\vphi$ and $\psi$ are given by multiplication in $C$. In other words, the concept of Morita context is equivalent to the concept of \emph{Morita algebra}, which is a pair $(C,\epsilon)$ where $C$ is a unital associative algebra and $\epsilon\in C$ is an idempotent.
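To see this correspondence in the smallest case, let $C=M_2(\FF)$ and $\epsilon = E_{11}$. Then $R=\FF E_{11}$, $S=\FF E_{22}$, $M=\FF E_{12}$ and $N=\FF E_{21}$, while $\vphi$ and $\psi$ are determined by $\vphi(E_{12}\tensor E_{21}) = E_{11}$ and $\psi(E_{21}\tensor E_{12}) = E_{22}$; the two compatibility conditions above amount to the associativity of matrix multiplication.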
For example, we may consider $\M(m,n)$ as a Morita algebra by fixing the idempotent $ \epsilon = \left(\begin{matrix} I_m & 0_{m\times n}\\ 0_{n\times m} & 0_n\\ \end{matrix} \right) $, \ie, $M(m,n)$ is the Morita algebra corresponding to the Morita context associated to the superspace $U = \FF^m \oplus \FF^n$. \begin{defi} A Morita context $(R, S, M, N, \vphi, \psi )$ is said to be $G$-\emph{graded} if the algebras $R$ and $S$ are graded, the bimodules $M$ and $N$ are graded, and the maps $\vphi$ and $\psi$ are homogeneous of degree $e$. A Morita algebra $(C,\epsilon)$ is said to be $G$-\emph{graded} if $C$ is $G$-graded and $\epsilon$ is a homogeneous element (necessarily of degree $e$). \end{defi} Clearly, a Morita context is graded if, and only if, the corresponding Morita algebra is graded. \begin{remark}\label{remarkk} For every graded Morita algebra $(C,\epsilon)$, we can define a $\ZZ$-grading by taking $C^{-1} = \epsilon C (1-\epsilon )$, $C^0 = \epsilon C \epsilon \oplus (1-\epsilon)C(1-\epsilon)$ and $C^1=(1-\epsilon)C\epsilon$. In the case of $M(m,n)$, this is precisely the canonical $\ZZ$-grading. \end{remark} \begin{prop}\label{prop:3-equiv-even-morita-action} Let $\Gamma$ be a grading on the superalgebra $M(m,n)$. The following are equivalent: \begin{enumerate}[(i)] \item $\Gamma$ is compatible with the canonical $\ZZ$-grading; \item $\Gamma$ is even; \item $M(m,n)$ equipped with $\Gamma$ is a graded Morita algebra. \par\vbox{\parbox[t]{\linewidth}{Further, if we assume $\Char\FF=0$, the above statements are also equivalent to:}} \item $\Gamma$ corresponds to a $\widehat G$-action by even automorphisms. \end{enumerate} \end{prop} \begin{proof} ~\\ \textit{(i) $\Rightarrow$ (ii):} See the beginning of this subsection. \textit{(ii) $\Rightarrow$ (iii):} Regard $\Gamma$ as a $G^\#$-grading. By Theorem \ref{thm:End-over-D}, there is a graded division algebra $\D$ and a graded right $\D$-module $\mc U$ such that $\End_{\mc D} (\mc U) \simeq M(m,n)$. 
Take an isomorphism of graded algebras $\phi: \End_{\mc D} (\mc U) \rightarrow M(m,n)$. Since $\Gamma$ is even, $\mc U\even$ and $\mc U\odd$ are graded $\mc D$-submodules. Take $\epsilon' \in \End_{\mc D} (\mc U)$ to be the projection onto $\mc U\even$ associated to the decomposition $\mc U= \mc U\even \oplus \mc U\odd$. Clearly, $\epsilon'$ is a central idempotent of $\End_{\mc D} (\mc U)\even$, hence $\phi(\epsilon')$ is a central idempotent of $M(m,n)\even$, so either $\phi(\epsilon')=\epsilon$ or $\phi(\epsilon')=1-\epsilon$. Either way, $\phi^{-1}(\epsilon)\in\{\epsilon', 1-\epsilon'\}$ is homogeneous, hence so is $\epsilon$. \textit{(iii) $\Rightarrow$ (i):} Follows from Remark \ref{remarkk}. \textit{(i) $\Leftrightarrow$ (iv):} This follows from the fact that the group of even automorphisms is precisely the group of automorphisms of the $\ZZ$-superalgebra structure on $M(m,n)$ (see Remark \ref{rmk:Aut-ZZ-superalgebra}). \end{proof} \begin{remark} It follows from Proposition \ref{prop:3-equiv-even-morita-action} that, if $\Char \FF = 0$, odd gradings exist only if $m=n$. In Subsection \ref{subsec:odd-gradings} we will give a characteristic-independent proof of this fact. \end{remark} We now know that the gradings on the $\ZZ$-superalgebra $M(m,n)$ are precisely the even gradings, but since the automorphism group is different from the $\ZZ_2$-superalgebra case, the classification of gradings up to isomorphism is also different. The proof of the next result is similar to the proof of Theorem \ref{thm:even-assc-iso}. \begin{thm} Let $\Gamma = \Gamma(T,\beta,\gamma_0,\gamma_1)$ and $\Gamma' = \Gamma(T',\beta',\gamma_0',\gamma_1')$ be $G$-gradings on the $\ZZ$-superalgebra $M(m,n)$. Then $\Gamma$ and $\Gamma'$ are isomorphic if, and only if, $T=T'$, $\beta=\beta'$, and there is $g\in G$ such that $g\Xi (\gamma_i) = \Xi (\gamma_i')$ for $i=0,1$. \qed \end{thm} As we mentioned in Subsection \ref{subsec:graded-bimodules}, we can always shift the grading on a graded (bi)module and still have a graded (bi)module.
In a graded Morita context, as in the case of a graded superalgebra (see Lemma \ref{lemma:opposite-directions}), we have more structure to preserve: if we shift one of the bimodules by an element $g\in G$ and at least one of the bilinear maps is nonzero, then we are forced to shift the other bimodule by $g^{-1}$. As in the superalgebra case, we will refer to this situation as \emph{shift in opposite directions}. \begin{thm}\label{thm:graded-morita} Let $\mathcal{C}=(R, S, M, N, \vphi, \psi )$ be the Morita context associated with a superspace $U$ and fix gradings on $R$ and $S$ making them graded algebras. The bimodules $M$ and $N$ admit $G$-gradings so that $\mathcal{C}$ becomes a graded Morita context if, and only if, there exists a graded division algebra $\D$ and graded right $\D$-modules $\mc V$ and $\mc W$ such that $R\iso \End_{\D}(\mc V)$ and $S\iso \End_{\D} (\mc W)$ as graded algebras. Moreover, all such gradings on $M$ and $N$ have the form $M\iso \Hom_{\D}(\mc W, \mc V)^{[g]}$ and $N\iso \Hom_{\D}(\mc V, \mc W)^{[g^{-1}]}$ as graded bimodules, where $g\in G$ is arbitrary. \end{thm} \begin{proof} Suppose $M$ and $N$ admit $G$-gradings so that the Morita algebra $(C, \epsilon)$ associated to $\mc C$ becomes $G$-graded. By Theorem \ref{thm:End-over-D} there exists a graded division algebra $\D$ and a graded $\D$-module $\mc U$ such that $C \iso \End_{\D} (\mc U)$. Denote the image of $\epsilon$ under this isomorphism by $\epsilon'$ and let $\mc V = \epsilon'(\mc U)$ and $\mc W = (1 -\epsilon')(\mc U)$. Since $\epsilon$ is homogeneous, so is $\epsilon'$, hence $\mc V$ and $\mc W$ are graded $\D$-modules. It follows that $R \iso \epsilon M(m,n) \epsilon \iso \epsilon' \End_{\D} (\mc U) \epsilon' \iso \End_{\D} (\mc V)$ and, analogously, $S \iso \End_{\D} (\mc W)$. For the converse, write $C$ in matrix form by fixing a basis in $U$ and identify $\End_{\D} (\mc V)$ and $\End_{\D} (\mc W)$ with matrix algebras as in Definition \ref{def:explicit-grd-assoc}. 
Suppose there exist isomorphisms of graded algebras $\theta_1 \colon R\rightarrow \End_{\D} (\mc V) $ and $\theta_2 \colon S\rightarrow \End_{\D} (\mc W)$. Then there are $x\in \GL (m)$ and $y\in \GL (n)$ such that $\theta_1$ is the conjugation by $x$ and $\theta_2$ is the conjugation by $y$. It follows that the conjugation by $\begin{pmatrix} x & 0\\ 0 & y \end{pmatrix}$ \noindent is an isomorphism of algebras between $C\iso M(m,n)$ and \[\End_{\D} (\mc V \oplus \mc W) = \begin{pmatrix} \End_\D(\mc V) & \Hom_\D(\mc W, \mc V)\\ \Hom_\D(\mc V, \mc W) & \End_\D(\mc W) \end{pmatrix},\] \noindent hence we transport the gradings on $\Hom_\D(\mc W, \mc V)$ and $\Hom_\D(\mc V, \mc W)$ to $M$ and $N$, respectively. It remains to prove that the gradings on $M$ and $N$ are determined up to shift in opposite directions. Since in our case the Morita algebra $C$ is simple, $M$ and $N$ are simple bimodules. By Lemma \ref{lemma:simplebimodule}, the gradings on $M$ and $N$ are determined up to shifts, and the shifts have to be in opposite directions in order for $\vphi$ and $\psi$ to be degree-preserving. \end{proof} \subsection{Odd gradings}\label{subsec:odd-gradings} Let $\Gamma$ be an odd grading on $M(m,n)$. We saw in Subsection \ref{ssec:grds-on-superalgebras} that, as a $G^\#$-graded algebra, $M(m,n)$ is isomorphic to $E\iso \End(\tilde U)\tensor \D$ where the first factor has the trivial $\ZZ_2$-grading and $\D=\D\even\oplus \D\odd$, with $\D\odd\neq 0$, is a $G^\#$-graded division algebra that is simple as an algebra. Let $T\subseteq G^\#$ be the support of $\D$ and $\beta: T\times T \rightarrow \FF^\times$ be the associated bicharacter. We write $T^+ = \{t\in T \mid p(t)=\barr 0\} = T \cap G$ and $T^- = \{t\in T \mid p(t)=\barr 1\}$, and denote the restriction of $\beta$ to $T^+\times T^+$ by $\beta^+$. Note that there are no odd gradings if $\Char \FF =2$. 
Indeed, in this case, there is no nondegenerate bicharacter on $T$ because the characteristic of the field divides $|T|=2|T^+|$. From now on, we suppose $\Char \FF \neq 2$. For a subgroup $A\subseteq T$, we denote by $A'$ its orthogonal complement in $T$ with respect to $\beta$, i.e., $A' = \{t\in T\mid \beta(t, A) =1\}$. This is the inverse image of the subgroup $A^\perp\subseteq \widehat T$ under the isomorphism $T\rightarrow \widehat T$ given by $t\mapsto \beta(t,\cdot)$. In particular, $|A'| = [T:A]$. From these considerations, we have $|(T^+)'| = [T:T^+] = 2$, so $(T^+)' = \langle t_0 \rangle$ where $t_0$ is an element of order 2. It follows that $\beta(t_0, t) = 1$ if $t\in T^+$ and $\beta(t_0, t) = -1$ if $t\in T^-$. For this reason, we call $t_0$ the \emph{parity element} of the odd grading $\Gamma$. Note that $\rad \beta^+ = T^+\cap (T^+)' = \langle t_0 \rangle$. Fix an element $0\neq d_0\in \D$ of degree $t_0$. By the definition of $\beta$, $d_0$ commutes with all elements of $\D\even$ and anticommutes with all elements of $\D\odd$. Since $d_0^2\in \D_e = \FF$, we may rescale $d_0$ so that $d_0^2=1$. Then $\epsilon := \frac{1}{2}(1+d_0)$ is a central idempotent of $\D\even$. Take a homogeneous element $0\neq d_1\in\D\odd$. Then $d_1\epsilon d_1\inv = \frac{1}{2}(1-d_0)=1-\epsilon$, which is another central idempotent of $\D\even$ and must have the same rank as $\epsilon$. Hence, $\D\even = \epsilon\D\even\oplus (1-\epsilon)\D\even$ (direct sum of ideals) and, consequently, $E\even \iso \End(\tilde U)\tensor \D\even = \End(\tilde U)\tensor \epsilon\D\even \oplus \End(\tilde U)\tensor (1-\epsilon)\D\even$, where the two summands have the same dimension. Therefore, odd gradings exist only if $m=n$. Also note that we have \begin{equation}\label{eq:D1eps} \D\odd \epsilon = (1-\epsilon) \D\odd. \end{equation} We are now going to construct an even grading by coarsening a given odd grading. The reverse of this construction will be used in Subsection \ref{ssec:second-odd}.
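Before carrying this out, it may help to see the above elements in the smallest example. Suppose $t_0\in G$ has order $2$ and let $T=\{e,t_0\}\times\ZZ_2\subseteq G^\#$ with the nondegenerate alternating bicharacter $\beta$ determined by $\beta((t_0,\barr 0),(e,\barr 1))=-1$. Then $T^+=\langle (t_0,\barr 0)\rangle$ and $t_0$ is the parity element. A standard realization of $\D$ is $M_2(\FF)$ with $d_0 = \left(\begin{smallmatrix} 1 & 0\\ 0 & -1 \end{smallmatrix}\right)$ of degree $(t_0,\barr 0)$ and $d_1 = \left(\begin{smallmatrix} 0 & 1\\ 1 & 0 \end{smallmatrix}\right)$ of degree $(e,\barr 1)$, so that $d_0 d_1 d_0^{-1} = -d_1$. Here $\epsilon = \frac{1}{2}(1+d_0) = E_{11}$ and $d_1\epsilon d_1\inv = E_{22} = 1-\epsilon$; moreover, $\D\even$ consists of the diagonal matrices, $\D\odd$ of the off-diagonal ones, and both sides of Equation \eqref{eq:D1eps} are equal to $\FF E_{21}$.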
Let $H$ be a group and suppose we have an even grading $\Gamma'$ on $M(n,n)$ that is the coarsening of $\Gamma$ induced by a group homomorphism $\alpha: G\rightarrow H$. Since $\Gamma'$ is even, the idempotent $\id_{\tilde U}\tensor\epsilon$ must be homogeneous with respect to $\Gamma'$. This means that $\alpha(t_0)=e$, so $\alpha$ factors through $\barr G := G/\langle t_0 \rangle$. This motivates the following definition: \begin{defi} Let $\Gamma$ be an odd $G$-grading on $M(n,n)$ with parity element $t_0$. The \emph{finest even coarsening of $\Gamma$} is the $\barr G$-grading ${}^\theta \Gamma$, where $\barr G := G/\langle t_0 \rangle$ and $\theta: G \to \barr G$ is the natural homomorphism. \end{defi} \begin{thm} Let $\Gamma = \Gamma(T, \beta, \gamma)$ be an odd grading on $M(n,n)$ with parity element $t_0$. Then its finest even coarsening is isomorphic to $\barr \Gamma = \Gamma(\barr T, \barr \beta, \barr \gamma, \barr u\barr \gamma)$, where $\barr T= \frac{T^+}{\langle t_0 \rangle}$, $\barr\beta$ is the nondegenerate bicharacter on $\barr T$ induced by $\beta^+$, $\barr\gamma$ is the tuple whose entries are the images of the entries of $\gamma$ under $\theta$, and $u \in G$ is any element such that $(u, \barr 1) \in T^-$. \end{thm} \begin{proof} Let us focus our attention on the graded division algebra $\D$. We now consider it as a $\barr G$-graded algebra, which has a decomposition $\D=\D\epsilon \oplus \D(1-\epsilon)$ as a graded left module over itself. \setcounter{claim}{0} \begin{claim} The $\D$-module $\D\epsilon$ is simple as a graded module. \end{claim} To see this, consider a nontrivial graded submodule $V\subseteq \D\epsilon$ and take a homogeneous element $0\neq v\in V$. Then we can write $v=d\epsilon$ where $d$ is a $\barr G$-homogeneous element of $\D$, so $d = d' + \lambda d' d_0$ where $d'$ is a $G^\#$-homogeneous element and $\lambda\in \FF$. Hence, $v = d'\epsilon + \lambda d'd_0\epsilon = (1+\lambda)d'\epsilon$.
Clearly, $(1+\lambda)d'\neq 0$, so it has an inverse in $\D$. We conclude that $\epsilon\in V$, hence $V=\D\epsilon$.\qedclaim Let $\barr \D := \epsilon \D \epsilon \iso \End_{\D}(\D\epsilon)$, where we are using the convention of writing endomorphisms of a left module on the right. By Claim 1 and the graded analog of Schur's Lemma (see \eg \cite[Lemma 2.4]{livromicha}), $\barr \D$ is a $\barr G$-graded division algebra. \begin{claim} The support of $\barr \D$ is $\barr T= \frac{T^+}{\langle t_0 \rangle}$ and the bicharacter $\barr \beta: \barr T\times \barr T\rightarrow \FF^\times$ is induced by $\beta^+: T^+\times T^+ \rightarrow \FF^\times$. \end{claim} We have $\barr \D = \epsilon \D\even \epsilon + \epsilon \D\odd \epsilon$ and $\epsilon \D\odd \epsilon = 0$ by Equation \eqref{eq:D1eps}, so $\supp \barr \D \subseteq \barr T$. On the other hand, for every $0\neq d\in \D\even$ with $G$-degree $t\in T^+$, we have that $\epsilon d\epsilon = d\epsilon = \frac{1}{2}(d+dd_0)\neq 0$, since the component of degree $t$ is different from zero. Hence $\supp \barr \D = \barr T$. Since $\epsilon$ is central in $\D\even$, we obtain $\barr\beta (\barr t,\barr s) = \beta (s, t) = \beta^+ (s, t)$ for all $t, s\in T^+$.\qedclaim We now consider $\D\epsilon$ as a graded right $\barr \D$-module. Then we have the decomposition $\D\epsilon = \epsilon \D\epsilon \oplus (1-\epsilon) \D\epsilon$. The set $\{\epsilon\}$ is clearly a basis of $\epsilon \D\epsilon$. To find a basis for $(1-\epsilon)\D\epsilon$, fix any $G$-homogeneous $0\neq d_1\in \D\odd$ with $\deg d_1 = t_1\in T^-$. Then we have $(1-\epsilon)\D\epsilon = (1-\epsilon)\D\even \epsilon + (1-\epsilon)\D\odd \epsilon = (1-\epsilon)\D\odd \epsilon = \D\odd \epsilon$ by Equation \eqref{eq:D1eps}. Since $d_1$ is invertible, $\{d_1\epsilon\}$ is a basis for $(1-\epsilon) \D\epsilon$. We conclude that $\{\epsilon, d_1\epsilon\}$ is a basis for $\D\epsilon$. Using the graded analog of the Density Theorem (see e.g. 
\cite[Theorem 2.5]{livromicha}), we have $\D\iso \End_{\barr \D}(\D\epsilon)\iso \End(\FF\epsilon\oplus \FF d_1\epsilon)\tensor \barr\D$. Hence, \[ \begin{split} \End_\D(\mc U)&\iso\End (\tilde U) \tensor \D \iso \End (\tilde U) \tensor \End(\FF\epsilon \oplus \FF d_1\epsilon) \tensor \barr\D \\ &\iso \End(\tilde U\tensor \epsilon \oplus \tilde U\tensor d_1\epsilon) \tensor \barr\D \end{split} \] as $\barr G$-graded algebras. The result follows. \end{proof} In the next section, we will show how to recover $\Gamma$ from $\barr\Gamma$ and some extra data. The following definition and result will be used there. \begin{defi} For every abelian group $A$ we put $A^{[2]} = \{a^2 \mid a\in A\}$ and $A_{[2]} = \{a\in A \mid a^2 = e \}$. \end{defi} Note that $T^{[2]}\subseteq T^+$, but $T^{[2]}$ can be larger than $(T^+)^{[2]}$ since it also includes the squares of elements of $T^-$. Also, the subgroup $\barr S = \{\barr t \in \barr T \mid t \in T^{[2]}\}$ of $\barr T$ can be larger than $\barr T^{[2]}$, but we will show that, surprisingly, it does not depend on $T^-$. \begin{lemma}\label{lemma:square-subgroup} Let $\theta: T^+\rightarrow \barr T=\frac{T^+}{\langle t_0 \rangle}$ be the natural homomorphism. Consider the subgroups $\barr S = \theta(T^{[2]})$ and $\barr R=\theta(T^+_{[2]})$ of $\barr T$. Then $\barr S$ is the orthogonal complement of $\barr R$ with respect to the nondegenerate bicharacter $\barr\beta$. \end{lemma} \begin{proof} We claim that $\barr S' = \barr R$. Indeed, \[ \begin{split} \barr S' & = \{ \theta(t) \mid t\in T^+ \AND \barr\beta(\theta (t), \theta (s^2)) =1 \text{ for all }s\in T\}\\ & = \{\theta(t) \mid t\in T^+ \AND \beta (t, s^2) =1 \text{ for all }s\in T\}\\ & = \{ \theta(t) \mid t\in T^+ \AND \beta (t^2, s) =1 \text{ for all }s\in T\}\\ & = \{ \theta(t) \mid t\in T^+ \AND t^2=e \}\\ & = \barr R\,. \end{split} \] It follows that $\barr S = \barr R'$, as desired. 
\end{proof} \subsection{A description of odd gradings in terms of $G$}\label{ssec:second-odd} Our second description of an odd grading consists of its finest even coarsening and the data necessary to recover the odd grading from this coarsening. All parameters will be obtained in terms of $G$ rather than its extension $G^\#=G\times \ZZ_2$. Let $t_0\in G$ be an arbitrarily fixed element of order 2 and set $\barr G = \frac{G}{\langle t_0 \rangle}$. Let $\barr T \subseteq \barr G$ be a finite subgroup and let $\barr \beta: \barr T \times \barr T \rightarrow \FF^\times$ be a nondegenerate alternating bicharacter. We define $T^+\subseteq G$ to be the inverse image of $\barr T$ under the natural homomorphism $\theta: G\rightarrow \barr G$. Note that $\barr \beta$ gives rise to a bicharacter $\beta^+$ on $T^+$ whose radical is generated by the element $t_0$. We wish to define $T^-\subseteq G\times \{\barr 1\}$ so that $T=T^+\cup T^-$ is a subgroup of $G^\#$ and $\beta^+$ extends to a nondegenerate alternating bicharacter on $T$. From Lemma \ref{lemma:square-subgroup}, we have a necessary condition for the existence of such $T^-$, namely, for $\barr R=\frac{T^+_{[2]}}{\langle t_0 \rangle}$, we need $\barr R' \subseteq \barr G^{[2]}$ (indeed, by Lemma \ref{lemma:square-subgroup}, $\barr R' = \barr S = \theta(T^{[2]})$, which is a subgroup of $\theta(G^{[2]}) = \barr G^{[2]}$). We will now prove that this condition is also sufficient. \begin{prop}\label{prop:square-subgroup-converse} If $\left( \frac{T^+_{[2]} }{\langle t_0 \rangle}\right)'\subseteq \barr G^{[2]}$, then there exists an element $t_1\in G\times \{\barr 1\} \subseteq G^\#$ such that $T= T^+ \cup t_1\, T^+$ is a subgroup of $G^\#$ and $\beta^+$ extends to a nondegenerate alternating bicharacter $\beta:T\times T\rightarrow \FF^\times$. \end{prop} \begin{proof} Let $\chi\in \widehat {T^+}$ be such that $\chi(t_0) = -1$.
Since $\chi^2(t_0)=1$, we can consider $\chi^2$ as a character of the group $\barr T = \frac{T^+}{\langle t_0 \rangle}$, hence there is $a\in T^+$ such that $\chi^2(\barr t) = \barr\beta(\barr a, \barr t)$ for all $\barr t\in \barr T$. Note that $\chi (a) = \pm 1$ and hence, changing $a$ to $a t_0$ if necessary, we may assume $\chi (a) = 1$. \textit{(i) Existence of $t_1$}: As before, let $\barr R = \frac{T^+_{[2]}}{\langle t_0 \rangle}$. Then $\barr a \in \barr R'$. Indeed, if $b\in T^+_{[2]}$, then $\barr\beta(\barr a,\barr b) = \chi^2 (\barr b) = \chi (b^2) = \chi (e) =1$. By our assumption, we conclude that $\barr a\in \barr G^{[2]}$. We are going to prove that, actually, $a\in G^{[2]}$. Pick $u\in G$ such that $\barr u^2 = \barr a$. Then, either $a=u^2$ or $a=u^2t_0$. If $t_0 = c^2$ for some $c\in G$, then replacing $u$ by $uc$ if necessary, we can make $u^2 = a$. Otherwise, $t_0$ has no square root in $T^+$, which implies that $\barr R=\barr T_{[2]}$. Hence $\barr R' = (\barr T_{[2]})' = \barr T^{[2]} = \theta ((T^+)^{[2]})$. Thus, in this case, we can assume $u\in T^+$. Then $\chi(u^2) = \chi^2(u) = \barr \beta (\barr a, \barr u) = \barr \beta (\barr u^2, \barr u) =1$, hence $u^2 = a$. Finally, we set $t_1=(u,\barr 1) \in G^\#$. \textit{(ii) Existence of $\beta$}: We wish to extend $\beta^+$ to $T=T^+ \cup t_1\, T^+$ by setting $\beta(t_1, t) = \chi (t)$ for all $t\in T^+$. It is clear that there is at most one alternating bicharacter on $T$ with this property that extends $\beta^+$. To show that it exists and is nondegenerate, we will first introduce an auxiliary group $\widetilde T$ and a bicharacter $\tilde\beta$. Let $\widetilde T$ be the direct product of $T^+$ and the infinite cyclic group generated by a new symbol $\tau$. We define $\tilde\beta:\widetilde T\times \widetilde T \rightarrow \FF^\times$ by $ \tilde\beta(s\tau^i,t\tau^j) = \beta^+(s,t)\, \chi (s)^{-j}\, \chi (t)^i$, where $s,t\in T^+$. 
It is clear that $\tilde\beta$ is an alternating bicharacter. \begin{claim*} $\langle a\tau^{-2} \rangle = \rad \tilde \beta\,$. \end{claim*} Let $t\in T^+$ and $\ell\in \ZZ$. Then \[ \tilde \beta (a\tau^{-2},t\tau^\ell) = \beta^+(a, t)\,\, \chi(t)^{-2} \, \chi(a)^{-\ell} = \barr\beta(\barr a, \barr t)\,\, \chi(t)^{-2} = \chi(t)^2 \, \chi(t)^{-2} = 1, \] hence, $\langle a\tau^{-2} \rangle \subseteq \rad \tilde \beta$. Conversely, if $s\tau^k \in \rad \tilde\beta$, then, $1 = \tilde \beta (s\tau^k, t_0) = \beta^+(s,t_0)\, \chi(t_0)^k = (-1)^k$, hence $k$ is even. From the previous paragraph, we know that $a\tau^{-2} \in \rad \tilde\beta$, hence $a^\frac{k}{2} \tau^{-k} \in \rad \tilde\beta$ and $s a^\frac{k}{2} = (s \tau^k) (a^\frac{k}{2} \tau^{-k}) \in \rad \tilde\beta$. Since $s a^\frac{k}{2} \in T^+$, we get $s a^\frac{k}{2} \in \rad \beta^+ = \{ e, t_0 \}$. But, if $sa^\frac{k}{2} = t_0$, we have $1 = \tilde\beta (sa^\frac{k}{2}, \tau) = \tilde\beta (t_0, \tau) = \chi(t_0)\inv = -1$, a contradiction. It follows that $sa^\frac{k}{2} = e$ and, hence, $s\tau^k = a^{-\frac{k}{2}}\tau^k = (a\tau^{-2})^{\frac{k}{2}}$, concluding the proof of the claim. \qedclaim We have a homomorphism $\vphi:\widetilde T\rightarrow T$ that is the identity on $T^+$ and sends $\tau$ to $t_1$. Clearly, $\ker \vphi = \langle a\tau^{-2} \rangle$. By the above claim, $\tilde\beta$ induces a nondegenerate alternating bicharacter on $\frac{\widetilde T}{\langle a\tau^{-2} \rangle}$, which can be transferred via $\vphi$ to a nondegenerate alternating bicharacter on $T$ that extends $\beta^+$. \end{proof} Now fix $\chi\in \widehat {T^+}$ with $\chi(t_0)=-1$ and let $a$ be the unique element of $T^+$ such that $\chi(a)=1$ and $\chi^2(\barr t) = \barr\beta (\barr a, \barr t)$ for all $t\in T^+$. Suppose that the condition of Proposition \ref{prop:square-subgroup-converse} is satisfied. Then part (i) of the proof shows that there exists $u\in G$ such that $u^2=a$. 
Moreover, part (ii) shows that there exists an extension of $\beta^+$ to a nondegenerate alternating bicharacter $\beta$ on $T=T^+\cup t_1T^+$, where $t_1=(u,\bar 1)$, such that $\beta(t_1,t)=\chi(t)$ for all $t\in T^+$. Clearly, such an extension is unique. We will denote it by $\beta_u$ and its domain by $T_u$. \begin{prop}\label{prop:roots-of-a} For every $T\subseteq G^\#$ such that $T\not\subseteq G$ and $T\cap G=T^+$ and for every extension of $\beta^+$ to a nondegenerate alternating bicharacter $\beta$ on $T$, there exists $u\in G$ such that $u^2=a$, $T=T_u$ and $\beta=\beta_u$. Moreover, $\beta_u=\beta_{\tilde{u}}$ if, and only if, $u \equiv \tilde{u} \pmod{\langle t_0 \rangle}$. \end{prop} \begin{proof} We have $T=T^+ \cup T^-$ where $T^-\subseteq G\times \{\barr 1\}$ is a coset of $T^+$. We can extend $\chi$ to a character of $T$, which we still denote by $\chi$, and, since $\beta$ is nondegenerate, there is $t_1\in T$ such that $\beta(t_1, t) = \chi(t)$ for all $t\in T$. We have $t_1\in T^-$ since $\beta(t_1,t_0)=\chi(t_0)=-1$, so $t_1=(u,\bar 1)$ for some $u\in G$, and hence $T=T^+\cup t_1\, T^+$. We claim that $t_1^2=a$. Indeed, $\chi(t_1^2) = \beta(t_1,t_1^2)=1$ and, for every $t\in T^+$, \[ \chi^2(\barr t) = \chi(t)^2 = \beta (t_1, t)^2 = \beta (t_1^2, t) = \barr\beta (\,\overline {(t_1^2)},\, \barr t)\,, \] so $t_1^2$ satisfies the definition of the element $a$, i.e., $u^2 = t_1^2 = a$. Hence $T=T_u$ and, since $\beta(t_1, t) = \chi(t)$ for all $t\in T^+$, the uniqueness of the extension gives $\beta = \beta_u$. This completes the proof of the first assertion. Now suppose $\beta_u=\beta_{\tilde{u}}$, so in particular $t_1\,T^+=\tilde{t}_1\,T^+$ where $t_1 = (u, \barr 1)$ and $\tilde{t}_1 = (\tilde u, \barr 1)$. Then there is $r\in T^+$ such that $\tilde{t}_1 = t_1\,r$. Also, for every $t\in T^+$, \[ \chi(t) = \beta_{\tilde{u}}(\tilde{t}_1,t) = \beta_u (t_1\,r, t) = \beta_u(t_1, t)\,\beta_u(r,t) = \chi(t) \beta^+(r, t) \] and, hence, $\beta^+(r, t)=1$ for all $t\in T^+$. This means that $r = u\inv \tilde{u} \in \langle t_0 \rangle$.
Conversely, if $\tilde u = u r$ for some $r\in \langle t_0 \rangle$, then $t_1\, T^+ = \tilde t_1\, T^+$. Also, for all $t\in T^+$, \[ \beta_u(t_1, t) = \chi(t) = \beta_{\tilde{u}}(\tilde{t}_1, t) = \beta_{\tilde{u}}(t_1r, t) = \beta_{\tilde{u}}(t_1, t)\, \beta^+(r, t) = \beta_{\tilde{u}}(t_1, t). \] It follows that $\beta_u=\beta_{\tilde{u}}$. \end{proof} Note that, keeping the character $\chi \in \widehat {T^+}$ with $\chi(t_0) = -1$ fixed, we have a surjective map from the square roots of $a$ to all possible pairs $(T,\beta)$. If we had started with a different character above, we would have obtained a different surjective map. Hence, for parametrization purposes, $\chi$ (and, hence, $a$) will be fixed. We are now in a position to give a classification of odd gradings in terms of $G$ only. We already have the following parameters: an element $t_0\in G$ of order $2$ and a pair $(\barr T, \barr\beta)$. For each $t_0$ and $\barr T$, we fix a character $\chi\in \widehat {T^+}$ satisfying $\chi(t_0) = -1$. The next parameter is an element $u\in G$ such that $u^2 = a$, where $a$ is the unique element of $T^+$ such that $\chi(a)=1$ and $\chi^2(\barr t) = \barr\beta (\barr a, \barr t)$ for all $t\in T^+$. Finally, let $\gamma = (g_1, \ldots, g_k)$ be a $k$-tuple of elements of $G$. With these data, we construct the grading $\Gamma (t_0, \barr T, \barr \beta, u, \gamma)$ as follows: \begin{defi}\label{def:odd-grd-on-Mmn-2} Let $\D$ be a standard realization of the $G^\#$-graded division algebra with parameters $(T_u,\beta_u)$. Take the graded $\D$-module $\mathcal U = \D^{[g_1]}\oplus \cdots \oplus \D^{[g_k]}$. Then $\End_\D (\mathcal U)$ is a $G^\#$-graded algebra, hence a superalgebra by means of $p:G^\# \rightarrow \ZZ_2$. As a superalgebra, it is isomorphic to $M(n,n)$ where $n=k\sqrt{|\barr T|}$. We define $\Gamma (t_0, \barr T, \barr \beta, u, \gamma)$ as the corresponding $G$-grading on $M(n,n)$. 
\end{defi} Theorem \ref{thm:first-odd-iso} together with Proposition \ref{prop:roots-of-a} gives the following result: \begin{thm}\label{thm:2nd-odd-iso} Every odd $G$-grading on the superalgebra $M(n,n)$ is isomorphic to some $\Gamma (t_0, \barr T, \barr \beta, u, \gamma)$ as in Definition \ref{def:odd-grd-on-Mmn-2}. Two odd gradings, $\Gamma (t_0, \barr T, \barr \beta, u, \gamma)$ and $\Gamma (t_0', \barr T', \barr \beta', u', \gamma')$, are isomorphic if, and only if, $t_0=t_0'$, $\barr T = \barr T'$, $\barr\beta = \barr\beta'$, $u \equiv u' \pmod{\langle t_0 \rangle}$, and there is $g\in G$ such that $g\, \Xi(\gamma) = \Xi(\gamma')$.\qed \end{thm} \subsection{Fine gradings up to equivalence} We start by investigating the gradings on the superalgebra $M(m,n)$ that are fine in the class of even gradings. By Proposition \ref{prop:3-equiv-even-morita-action}, this is the same as fine gradings on $M(m,n)$ as a $\ZZ$-superalgebra and, by the discussion in Subsection \ref{subsec:odd-gradings}, the same as fine gradings if $m\neq n$ or $\Char \FF = 2$. We will use the following notation. Let $H$ be a finite abelian group whose order is not divisible by $\Char \FF$. Set $T_H = H\times \widehat H$ and define $\beta_H: T_H\times T_H \to \FF^\times$ by \[ \beta_H((h_1, \chi_1), (h_2, \chi_2)) = \chi_1(h_2)\, \chi_2 (h_1)\inv. \] Then $\beta_H$ is a nondegenerate alternating bicharacter on $T_H$. \begin{defi}\label{def:even-fine-grd-on-Mmn} Let $\ell \mid \operatorname{gcd}(m,n)$ be a natural number such that $\Char \FF\nmid\ell$ and put $k_0 := \frac{m}{\ell}$ and $k_1 := \frac{n}{\ell}$. Let $\Theta_\ell$ be a set of representatives of the isomorphism classes of abelian groups of order $\ell$. For every $H$ in $\Theta_\ell$, we define $\Gamma(H, k_0, k_1)$ to be the even $T_H\times \ZZ^{k_0 + k_1}$-grading $\Gamma(T_H, \beta_H, (e_1, \ldots, e_{k_0}), (e_{k_0 + 1}, \ldots, e_{k_0 + k_1}))$ on $M(m,n)$, where $\{e_1, \ldots, e_{k_0+k_1}\}$ is the standard basis of $\ZZ^{k_0 + k_1}$.
If $m$ and $n$ are clear from the context, we will simply write $\Gamma(H)$. \end{defi} Let $G_H$ be the subgroup of $T_H\times \ZZ^{k_0 + k_1}$ generated by the support of $\Gamma(H,k_0,k_1)$, i.e., $G_H = T_H\times \ZZ^{k_0 + k_1}_0$, where $\ZZ^k_0 := \{ (x_1, \ldots, x_k) \in \ZZ^k \mid x_1 + \cdots + x_k = 0\}$. \begin{thm}\label{thm:class-fine-even} The fine gradings on $M(m,n)$ as a $\ZZ$-superalgebra are precisely the even fine gradings. Every such grading is equivalent to a unique $\Gamma(H)$ as in Definition \ref{def:even-fine-grd-on-Mmn}. Moreover, every grading $\Gamma(H)$ is fine, and $G_H$ is its universal group. \end{thm} \begin{proof} By \cite[Proposition 2.35]{livromicha}, if we consider $\Gamma(H)$ as a grading on the algebra $M_{n+m}(\FF)$, it is a fine grading and $G_H$ is its universal group. It follows that the same is true of $\Gamma(H)$ as a grading on the superalgebra $M(m,n)$. Let $\Gamma = \Gamma(T, \beta, \gamma_0, \gamma_1)$ be any even $G$-grading on $M(m,n)$. We can write $T = A\times B$ where the restrictions of $\beta$ to the subgroups $A$ and $B$ are trivial and, hence, there is an isomorphism $\alpha: T_A \to T$ such that $\beta_A=\beta\circ(\alpha\times\alpha)$. We can extend $\alpha$ to a homomorphism $G_A \to G$ (also denoted by $\alpha$) by sending the elements $e_1, \ldots, e_{k_0}$ to the entries of $\gamma_0$ and the elements $e_{k_0+1}, \ldots, e_{k_0+k_1}$ to the entries of $\gamma_1$. It follows that ${}^\alpha \Gamma(A) \iso \Gamma$. Since all $\Gamma(H)$ are fine and pairwise nonequivalent (because their universal groups are pairwise nonisomorphic), we can apply Lemma \ref{lemma:universal-grp}, concluding that every fine grading on $M(m,n)$ as a $\ZZ$-superalgebra is equivalent to a unique $\Gamma(H)$. \end{proof} We now consider odd fine gradings on $M(n,n)$, so $\Char\FF\ne 2$. We first define some gradings on the algebra $M_{2n}(\FF)$ and then impose a superalgebra structure. 
\begin{defi}\label{def:param-fine-odd} Let $\ell\mid n$ be a natural number such that $\Char \FF\nmid\ell$ and put $k:= \frac{n}{\ell}$. Let $\Theta_{2\ell}$ be a set of representatives of the isomorphism classes of abelian groups of order $2\ell$. For every $H$ in $\Theta_{2\ell}$, we consider the $T_H\times \ZZ^k$-grading $\Gamma = \Gamma(T_H, \beta_H, (e_1, \ldots, e_k))$ on $M_{2n}(\FF)$, where $\{e_1, \ldots, e_k\}$ is the standard basis of $\ZZ^k$. Then we choose an element $t_0 \in T_H$ of order $2$ and define a group homomorphism $p: T_H\times\ZZ^k \to \ZZ_2$ by \[ p(t, x_1, \ldots, x_k) = \begin{cases*} \bar 0 & if $\beta_H(t_0, t) = 1$,\\ \bar 1 & if $\beta_H(t_0, t) = -1$. \end{cases*} \] This defines a superalgebra structure on $M_{2n}(\FF)$: a $\Gamma$-homogeneous element of degree $g$ is declared even or odd according to the value $p(g)$. By construction, $\Gamma$ is odd as a grading on this superalgebra $(M_{2n}(\FF),p)$, and this forces the superalgebra to be isomorphic to $M(n,n)$. We denote by $\Gamma(H, t_0, k)$ the grading $\Gamma$ considered as a grading on $M(n,n)$. If $n$ is clear from the context, we will simply write $\Gamma(H, t_0)$. \end{defi} Note that the parameter $t_0$ of $\Gamma(H, t_0, k)$ does not affect the grading on the algebra $M_{2n}(\FF)$, but, as we will see in Proposition \ref{prop:equiv-with-same-H}, different choices of $t_0$ can yield nonequivalent gradings on the superalgebra $M(n,n)$. \begin{prop}\label{prop:all-fine-odd} Each grading $\Gamma(H, t_0)$ on $M(n,n)$ is fine, and its universal group is $G_H = T_H\times \ZZ^k_0$. Every odd fine grading on $M(n,n)$ is equivalent to at least one $\Gamma(H, t_0)$. \end{prop} \begin{proof} As in the proof of Theorem \ref{thm:class-fine-even}, the first assertion follows from \cite[Proposition 2.35]{livromicha}. Let $\Gamma(T,\beta, \gamma)$ be an odd $G$-grading on $M(n,n)$ and let $t_0$ be its parity element. Then we can find subgroups $A$ and $B$ such that $T=A\times B$ and there exists an isomorphism $\alpha: T_A \to T$ such that $\beta_A=\beta\circ(\alpha\times\alpha)$.
We define $t_0' := \alpha\inv(t_0)$ and extend $\alpha$ to a homomorphism $G_A \to G$ (also denoted by $\alpha$) by sending the elements $e_1, \ldots, e_k$ to the entries of $\gamma$. Then ${}^\alpha \Gamma(A, t_0') \iso \Gamma$. Selecting a representative from each equivalence class of gradings of the form $\Gamma(H, t_0)$, we can apply Lemma \ref{lemma:universal-grp}, which proves the second assertion. \end{proof} It remains to determine which of the gradings $\Gamma(H, t_0)$ are equivalent to each other. \begin{prop}\label{prop:equiv-with-same-H} The gradings $\Gamma = \Gamma(H, t_0)$ and $\Gamma' = \Gamma(H, t_0')$ on $M(n,n)$ are equivalent if, and only if, there is $\alpha \in \Aut(T_H, \beta_H)$ such that $\alpha(t_0) = t_0'$. \end{prop} \begin{proof} We will denote by $p: G_H \to \ZZ_2$ the parity homomorphism associated to the grading $\Gamma$ and by $p': G_H\to \ZZ_2$ the one associated to $\Gamma'$. If $\Gamma$ is equivalent to $\Gamma'$, there is an isomorphism $\vphi: (M_{2n}(\FF),p) \to (M_{2n}(\FF),p')$ of superalgebras that is a self-equivalence of the grading on $M_{2n}(\FF)$. Hence, we have the corresponding group automorphism $\alpha: G_H \to G_H$ in the Weyl group of the grading, and the following diagram commutes: \begin{equation}\label{diag:parity} \begin{tikzcd} G_H \arrow[to = 2G, "\alpha"] \arrow[to = Z2, "p"'] && |[alias = 2G]| G_H \arrow[to = Z2, "p'"]\\ & |[alias = Z2]|\ZZ_2 & \end{tikzcd} \end{equation} By the definition of $p$ and $p'$, the commutativity of this diagram is equivalent to $\alpha(t_0) = t_0'$. The automorphism $\alpha$ must send the torsion subgroup of $G_H$ to itself, so we can consider the restriction $\alpha\restriction_{T_H}$. By \cite[Corollary 2.45]{livromicha}, this restriction is in $\Aut(T_H, \beta_H)$. For the converse, we use the same \cite[Corollary 2.45]{livromicha} to extend $\alpha$ to an automorphism $G_H\to G_H$ in the Weyl group.
Hence, there is an automorphism $\vphi$ of the algebra $M_{2n}(\FF)$ that permutes the components of the grading according to $\alpha$. The condition $\alpha(t_0) = t_0'$ is equivalent to Diagram \eqref{diag:parity} being commutative, which shows that $\vphi: (M_{2n}(\FF),p) \to (M_{2n}(\FF),p')$ is an isomorphism of superalgebras. \end{proof} Combining Propositions \ref{prop:all-fine-odd} and \ref{prop:equiv-with-same-H}, we obtain: \begin{thm}\label{thm:class-fine-odd} Every odd fine grading on $M(n,n)$ is equivalent to some $\Gamma(H,t_0)$ as in Definition \ref{def:param-fine-odd}. Every grading $\Gamma(H,t_0)$ is fine, and $G_H$ is its universal group. Two gradings, $\Gamma(H, t_0)$ and $\Gamma(H', t_0')$, are equivalent if, and only if, $H=H'$ and $t_0'$ lies in the orbit of $t_0$ under the natural action of $\Aut(T_H, \beta_H)$.\qed \end{thm} For a matrix description of the group $\Aut(T_H, \beta_H)$, we refer the reader to \cite[Remark 2.46]{livromicha}. \section{Gradings on $A(m,n)$}\label{sec:Amn} Throughout this section it will be assumed that $\Char \FF = 0$. \subsection{The Lie superalgebra $A(m,n)$}\label{ssec:def-A} Let $U = U\even \oplus U\odd$ be a finite-dimensional superspace. Recall that the \emph{general linear Lie superalgebra}, denoted by $\gl\, (U)$, is the superspace $\End(U)$ with product given by the \emph{supercommutator}: \[ [a,b] = ab - (-1)^{\abs{a}\abs{b}}ba. \] If $U\even=\FF^m$ and $U\odd=\FF^n$, then $\gl (U)$ is also denoted by $\gl(m|n)$. The \emph{special linear Lie superalgebra}, denoted by $\Sl (U)$, is the derived algebra of $\gl(U)$. As in the Lie algebra case, we describe it as an algebra of ``traceless'' operators. The analog of trace in the ``super'' setting is the so-called \emph{supertrace}: \[ \str \left(\begin{matrix} a & b\\ c & d\\ \end{matrix}\right) = \tr a - \tr d, \] and we have $\Sl (U) = \{ T\in \gl (U) \mid \str T = 0\}$.
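The inclusion of the derived algebra in the set of supertraceless operators comes down to the identity $\str\,[a,b]=0$ for the supercommutator. The following numerical sketch (our addition, not part of the original text; the helper names \texttt{supertrace}, \texttt{homogeneous} and \texttt{scomm} are ours) checks this on random homogeneous elements of $\gl(2|3)$:

```python
import numpy as np

m, n = 2, 3  # gl(2|3); any block sizes work
rng = np.random.default_rng(0)

def supertrace(X):
    """str(X) = tr(a) - tr(d) for the block matrix X = (a b; c d)."""
    return np.trace(X[:m, :m]) - np.trace(X[m:, m:])

def homogeneous(parity):
    """Random homogeneous element of gl(m|n): the even part is
    block-diagonal, the odd part is block-off-diagonal."""
    X = np.zeros((m + n, m + n))
    if parity == 0:
        X[:m, :m] = rng.standard_normal((m, m))
        X[m:, m:] = rng.standard_normal((n, n))
    else:
        X[:m, m:] = rng.standard_normal((m, n))
        X[m:, :m] = rng.standard_normal((n, m))
    return X

def scomm(X, pX, Y, pY):
    """Supercommutator [X, Y] = XY - (-1)^{|X||Y|} YX."""
    return X @ Y - (-1) ** (pX * pY) * Y @ X

# str vanishes on all supercommutators, so [gl(m|n), gl(m|n)] lies
# inside the set of supertraceless operators:
for pX in (0, 1):
    for pY in (0, 1):
        assert abs(supertrace(scomm(homogeneous(pX), pX,
                                    homogeneous(pY), pY))) < 1e-9
```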
Again, if $U\even=\FF^m$ and $U\odd=\FF^n$ then $\Sl (U)$ is also denoted by $\Sl(m|n)$. If one of the parameters $m$ or $n$ is zero, we get a Lie algebra, so we assume this is not the case. If $m\neq n$ then $\Sl(m|n)$ is a simple Lie superalgebra. If $m=n$, the identity map $I_{2n}\in \Sl(n|n)$ is a central element and hence $\Sl(n|n)$ is not simple, but the quotient $\mathfrak{psl}(n|n) := \Sl(n|n)/ \FF I_{2n}$ is simple if $n>1$. For $m, n\geq 0$ (not both zero), the simple Lie superalgebra $A(m,n)$ is $\Sl(m+1|n+1)$ if $m\neq n$, and $\mathfrak{psl}(n+1|n+1)$ if $m=n$. \begin{defi}\label{def:Type-I} If $\Gamma$ is a $G$-grading on $M(m,n)$, then, since $G$ is abelian, it is also a grading on $\gl(m|n)$ and, hence, restricts to its derived superalgebra $\Sl(m|n)$. If $m=n$, then the grading on $\Sl(m|n)$ induces a grading on $\mathfrak{psl}(n|n)$. If a grading on $\Sl(m|n)$ or $\mathfrak{psl}(n|n)$ is obtained in this way, we will call it a \emph{Type I} grading and, otherwise, a \emph{Type II} grading. \end{defi} \subsection{Automorphisms of $A(m,n)$}\label{ssec:auto-Amn} As in the Lie algebra case, the group of automorphisms of the Lie superalgebra $A(m,n)$ is bigger than the group of automorphisms of the associative superalgebra $\End(U)$. We define the \emph{supertranspose} of a matrix in $\End(U)$ by \begin{equation*} \left( \begin{matrix} a&b\\ c&d\\ \end{matrix}\right)^{s\top} = \left( \begin{matrix} a\transp & -c\transp\\ b\transp & d\transp\\ \end{matrix}\right). \end{equation*} The supertranspose map $\End(U) \to \End(U)$ is an example of a \emph{super-anti-automorphism}, \ie, it is $\FF$-linear and \[ (XY)^{s\top} = (-1)^{|X||Y|} Y^{s\top} X^{s\top}. \] Hence, the map $\tau:\Sl(m+1|n+1)\rightarrow \Sl(m+1|n+1)$ given by $\tau(X) = - X^{s\top}$ is an automorphism.
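The defining identity of a super-anti-automorphism can likewise be checked numerically. The sketch below (our addition; the names \texttt{supertranspose}, \texttt{homogeneous} and \texttt{scomm} are ours) verifies $(XY)^{s\top} = (-1)^{|X||Y|}\, Y^{s\top} X^{s\top}$ on random homogeneous matrices and, as a consequence, that $\tau(X) = -X^{s\top}$ preserves the supercommutator:

```python
import numpy as np

m, n = 2, 3
rng = np.random.default_rng(0)

def supertranspose(X):
    """Blockwise: (a b; c d) -> (a^T  -c^T; b^T  d^T)."""
    a, b, c, d = X[:m, :m], X[:m, m:], X[m:, :m], X[m:, m:]
    return np.block([[a.T, -c.T], [b.T, d.T]])

def homogeneous(parity):
    """Random homogeneous matrix: even = block-diagonal, odd = off-diagonal."""
    X = np.zeros((m + n, m + n))
    if parity == 0:
        X[:m, :m] = rng.standard_normal((m, m))
        X[m:, m:] = rng.standard_normal((n, n))
    else:
        X[:m, m:] = rng.standard_normal((m, n))
        X[m:, :m] = rng.standard_normal((n, m))
    return X

def scomm(X, pX, Y, pY):
    """Supercommutator [X, Y] = XY - (-1)^{|X||Y|} YX."""
    return X @ Y - (-1) ** (pX * pY) * Y @ X

for pX in (0, 1):
    for pY in (0, 1):
        X, Y = homogeneous(pX), homogeneous(pY)
        # (XY)^{sT} = (-1)^{|X||Y|} Y^{sT} X^{sT}
        assert np.allclose(supertranspose(X @ Y),
                           (-1) ** (pX * pY)
                           * supertranspose(Y) @ supertranspose(X))
        # tau(X) = -X^{sT} respects the supercommutator
        assert np.allclose(-supertranspose(scomm(X, pX, Y, pY)),
                           scomm(-supertranspose(X), pX,
                                 -supertranspose(Y), pY))
```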
By \cite[Theorem 1]{serganova}, the group of automorphisms of $A(m,n)$ is generated by $\tau$ and the automorphisms of $\End(U)$, restricted to traceless operators and, if necessary, taken modulo the center. In other words, if $m\neq n$, $\Aut(A(m,n))$ is generated by $\mc E \cup \{\tau\}$ and, if $m=n$, by $\mc E \cup \{\pi, \tau\}$. In both cases, $\mc E$ is a normal subgroup of $\Aut(A(m,n))$. Note that $\pi^2 = \id$, $\tau^2=\upsilon$ (the parity automorphism) and $\pi \tau = \upsilon \tau \pi$. Hence $\Aut(A(m,n))/\mc E$ is isomorphic to $\ZZ_{2}$ if $m\neq n$ and $\ZZ_2\times\ZZ_2$ if $m=n$. Note that a $G$-grading on $A(m,n)$ is of Type I if, and only if, it corresponds to a $\widehat{G}$-action on $A(m,n)$ by automorphisms that belong to the subgroup $\mc E$ if $m\ne n$ and to $\mc E\rtimes\langle\pi\rangle$ if $m=n$. If $\widehat{G}$ acts by automorphisms that belong to $\mc E$ then the Type I grading is said to be \emph{even} and, otherwise, \emph{odd}.
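For the reader's convenience, we include the (routine) verification of the relation $\tau^2=\upsilon$. Applying the supertranspose twice gives
\[
\left(\left(\begin{matrix} a&b\\ c&d\\ \end{matrix}\right)^{s\top}\right)^{s\top}
= \left(\begin{matrix} a\transp & -c\transp\\ b\transp & d\transp\\ \end{matrix}\right)^{s\top}
= \left(\begin{matrix} a & -b\\ -c & d\\ \end{matrix}\right),
\]
so $\tau^2(X) = \bigl((-1)^2 X^{s\top}\bigr)^{s\top} = (X^{s\top})^{s\top}$ fixes the even part of $X$ and negates the odd part, that is, $\tau^2 = \upsilon$.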
The left $\D$-module $\U\Star$ can be considered as a right $\D\sop$-module by means of the action defined by $f\cdot d := (-1)^{|d||f|} df$, for every $\ZZ_2$-homogeneous $d\in \D$ and $f\in \U\Star$. \begin{lemma}\label{lemma:Dsop} If $\D$ is a graded division superalgebra associated to the pair $(T,\beta)$, then $\D\sop$ is associated to the pair $(T,\beta\inv)$.\qed \end{lemma} If $\U$ has a homogeneous $\D$-basis $\mc B = \{e_1, \ldots, e_k\}$, we can consider its \emph{superdual basis} $\mc B\Star = \{e_1\Star, \ldots, e_k\Star\}$ in $\U\Star$, where $e_i\Star : \U \rightarrow \D$ is defined by $e_i\Star (e_j) = (-1)^{|e_i||e_j|} \delta_{ij}$. \begin{remark}\label{rmk:gamma-inv} The superdual basis is a homogeneous basis of $\U\Star$, with $\deg e_i\Star = (\deg e_i)\inv$. So, if $\gamma = (g_1, \ldots, g_k)$ is the $k$-tuple of degrees of $\mc B$, then $\gamma\inv = (g_1\inv, \ldots, g_k\inv)$ is the $k$-tuple of degrees of $\mc B\Star$. \end{remark} For graded right $\D$-modules $\U$ and $\V$, we consider $\U\Star$ and $\V\Star$ as right $\D\sop$-modules as defined above. If $L:\U \rightarrow \V$ is a $\ZZ_2$-homogeneous $\D$-linear map, then the \emph{superadjoint} of $L$ is the $\D\sop$-linear map $L\Star: \V\Star \rightarrow \U\Star$ defined by $L\Star (f) = (-1)^{|L||f|} f \circ L$. We extend the definition of superadjoint to any map in $\Hom_\D (\U, \V)$ by linearity. \begin{remark} In the case $\D=\FF$, if we denote by $[L]$ the matrix of $L$ with respect to the homogeneous bases $\mc B$ of $\U$ and $\mc C$ of $\V$, then the supertranspose $[L]\sT$ is the matrix corresponding to $L\Star$ with respect to the superdual bases $\mc C\Star$ and $\mc B\Star$. \end{remark} We denote by $\vphi: \End_\D (\U) \rightarrow \End_{\D\sop} (\U\Star)$ the map $L \mapsto L\Star$. It is clearly a degree-preserving super-anti-isomorphism. 
It follows that, if we consider the Lie superalgebras $\End_\D (\U)^{(-)}$ and $\End_{\D\sop} (\U\Star)^{(-)}$, the map $-\vphi$ is an isomorphism. We summarize these considerations in the following result: \begin{lemma}\label{lemma:iso-inv} If $\Gamma = \Gamma(T,\beta,\gamma)$ and $\Gamma' = \Gamma(T,\beta\inv,\gamma\inv)$ are $G$-gradings (considered as $G^\#$-gradings) on the associative superalgebra $M(m,n)$, then, as gradings on the Lie superalgebra $M(m,n)^{(-)}$, $\Gamma$ and $\Gamma'$ are isomorphic via an automorphism of $M(m,n)^{(-)}$ that is the negative of a super-anti-automorphism of $M(m,n)$. \end{lemma} \begin{proof} Let $\D$ be a graded division superalgebra associated to $(T,\beta)$ and let $\U$ be the graded right $\D$-module associated to $\gamma$. The grading $\Gamma$ is obtained by an identification $\psi: M(m, n) \xrightarrow{\sim} \End_\D (\U)$. By Lemma \ref{lemma:Dsop} and Remark \ref{rmk:gamma-inv}, $\Gamma'$ is obtained by an identification $\psi': M(m, n) \xrightarrow{\sim} \End_{\D\sop} (\U\Star)$. Hence we have the diagram: \begin{center} \begin{tikzcd} & \End_\D (\U) \arrow[to=3-2, "-\vphi"]\\ M(m, n) \arrow[ur, "\psi"] \arrow[dr, "\psi'"]\\ & \End_{\D\sop} (\U\Star) \end{tikzcd} \end{center} Thus, the composition $(\psi')\inv \, (-\vphi) \, \psi$ is an automorphism of the Lie superalgebra $M(m,n)^{(-)}$ sending $\Gamma$ to $\Gamma'$. \end{proof} \subsection{Type I gradings on $A(m,n)$} In this work, we only classify the gradings on $A(m,n)$ that are induced from gradings on the associative superalgebra $M(m+1, n+1)$. \begin{defi}\label{def:grd-on-Amn-I} If $\Gamma(T, \beta, \gamma_0,\gamma_1)$ is an even grading on $M(m+1,n+1)$ (see Definition \ref{def:even-grd-on-Mmn}), we denote by $\Gamma_A (T, \beta, \gamma_0,\gamma_1)$ the induced grading on $A(m,n)$.
Analogously, if $\Gamma(T, \beta, \gamma)$, or alternatively $\Gamma(t_0, \barr T, \barr\beta, u, \gamma)$, is an odd grading on $M (n+1,n+1)$ (see Definitions \ref{def:odd-grd-on-Mmn-1} and \ref{def:odd-grd-on-Mmn-2}), we denote by $\Gamma_A (T, \beta, \gamma)$, respectively $\Gamma_A (t_0, \barr T, \barr\beta, u, \gamma)$, the induced grading on $A(n,n)$. (Recall that odd gradings can occur only if $m=n$.) \end{defi} \begin{thm}\label{thm:even-Lie-iso} If a $G$-grading of Type I on the Lie superalgebra $A(m,n)$ is even, then it is isomorphic to some $\Gamma_A(T,\beta, \gamma_0, \gamma_1)$ as in Definition \ref{def:grd-on-Amn-I}. Two such gradings, $\Gamma=\Gamma_A(T,\beta, \gamma_0, \gamma_1)$ and $\Gamma'=\Gamma_A (T',\beta', \gamma_0', \gamma_1')$, are isomorphic if, and only if, $T=T'$ and there are $\delta\in \{\pm 1\}$ and $g\in G$ such that $\beta^\delta=\beta'$ and \begin{enumerate}[(i)] \item for $m\neq n$: $g \Xi (\gamma_0^\delta) =\Xi(\gamma_0')$ and $g \Xi (\gamma_1^\delta) =\Xi(\gamma_1')$; \item for $m = n$: either $g \Xi(\gamma_0^\delta)=\Xi(\gamma_0')$ and $g \Xi(\gamma_1^\delta)=\Xi(\gamma_1')$ or $g\Xi(\gamma_0^\delta)=\Xi(\gamma_1')$ and $g \Xi(\gamma_1^\delta)=\Xi(\gamma_0')$. \end{enumerate} \end{thm} \begin{proof} Let $M = M(m+1, n+1)$. Since any automorphism of $M$ induces an automorphism of $A(m,n)$, the first assertion follows from Theorem \ref{thm:even-assc-iso} and the definition of Type I grading. We know from Subsection \ref{ssec:auto-Amn} that every automorphism of $A(m, n)$ arises from an automorphism of $M$ or the negative of a super-anti-automorphism of $M$. Moreover, this automorphism or super-anti-automorphism is uniquely determined and, hence, any Type I grading on $A(m,n)$ is induced by a unique grading on $M$. 
It follows that $\Gamma$ and $\Gamma'$ are isomorphic if, and only if, there exists either $(a)$ an automorphism or $(b)$ a super-anti-automorphism of $M$ sending $\Gamma (T,\beta, \gamma_0, \gamma_1)$ to $\Gamma (T',\beta', \gamma_0', \gamma_1')$. From Theorem \ref{thm:even-assc-iso}, we know that case $(a)$ holds if, and only if, the above conditions are satisfied with $\delta = 1$. From Lemma \ref{lemma:iso-inv}, there is an automorphism of $A(m,n)$ coming from a super-anti-automorphism of $M$ that sends $\Gamma (T,\beta, \gamma_0, \gamma_1)$ to $\Gamma (T,\beta\inv, \gamma_0\inv, \gamma_1\inv)$. It follows that case $(b)$ holds if, and only if, the above conditions are satisfied with $\delta = -1$. \end{proof} \begin{thm}\label{thm:first-odd-Lie-iso} If a $G$-grading of Type I on the Lie superalgebra $A(n,n)$ is odd, then it is isomorphic to some $\Gamma_A(T,\beta,\gamma)$ as in Definition \ref{def:grd-on-Amn-I}. Two such gradings, $\Gamma_A (T,\beta, \gamma)$ and $\Gamma_A (T',\beta', \gamma')$, are isomorphic if, and only if, $T=T'$, and there are $\delta \in \{\pm 1\}$ and $g\in G$ such that $\beta^\delta=\beta'$ and $g \Xi(\gamma^\delta)=\Xi(\gamma')$. \end{thm} \begin{proof} The same as for Theorem \ref{thm:even-Lie-iso}, but referring to Theorem \ref{thm:first-odd-iso} instead of Theorem \ref{thm:even-assc-iso}. \end{proof} The parameters $T$, $\beta$ and $\gamma$ in Theorem \ref{thm:first-odd-Lie-iso} are associated to the group $G^\#$, not $G$. Below we use parameters associated to $G$, as we did in Subsection \ref{ssec:second-odd}. \begin{cor}\label{cor:2nd-odd-Lie-iso} If a $G$-grading of Type I on the Lie superalgebra $A(n,n)$ is odd, then it is isomorphic to some $\Gamma_A (t_0, \barr T, \barr\beta, u, \gamma)$. 
Two such gradings, $\Gamma_A (t_0, \barr T, \barr \beta, u, \gamma)$ and $\Gamma_A (t_0', \barr T', \barr \beta', u', \gamma')$, are isomorphic if, and only if, $t_0=t_0'$, $\barr T = \barr T'$, and there are $\delta \in \{\pm 1\}$ and $g\in G$ such that $\barr\beta^\delta = \barr\beta'$, $u^\delta \equiv u' \pmod{\langle t_0 \rangle}$ and $g\, \Xi(\gamma^\delta) = \Xi(\gamma')$. \end{cor} \begin{proof} Follows from Theorems \ref{thm:first-odd-Lie-iso} and \ref{thm:2nd-odd-iso}. \end{proof} \section{Gradings on $P(n)$}\label{sec:Pn} Throughout this section it will be assumed that $\Char \FF = 0$. \subsection{The Lie superalgebra $P(n)$}\label{subseq:Pn} Let $U = U\even \oplus U\odd$ be a superspace and let $\langle\, , \rangle: U\times U\rightarrow \FF$ be a bilinear form that is homogeneous with respect to the $\ZZ_2$-grading, i.e., either even or odd as a linear map $U\tensor U \rightarrow \FF$. We say that $\langle\, , \rangle$ is \emph{supersymmetric} if $\langle x,y\rangle = (-1)^{\abs{x}\abs{y}} \langle y,x \rangle$ for all homogeneous elements $x,y\in U$. From now on, we suppose that $\langle\, , \rangle$ is supersymmetric, nondegenerate, and odd. The \emph{periplectic Lie superalgebra} $\mathfrak{p}(U)$ is defined as $\mathfrak{p}(U)\even \oplus \mathfrak{p}(U)\odd$ where \[ \mathfrak{p}(U)^{i} = \{L\in \gl(U)^i\mid \langle L(x),y\rangle = - (-1)^{i\abs{x}} \langle x,L(y)\rangle\} \] for all $i\in\Zmod2$. The superalgebra $\mathfrak{p}(U)$ is not simple, but its derived superalgebra $P(U) = [\mathfrak{p}(U),\mathfrak{p}(U)]$ is simple if $\dim U \geq 6$. Since $\langle\, , \rangle$ is nondegenerate and odd, it is clear that $U\odd$ is isomorphic to $(U\even)^*$ by $u \mapsto \langle u, \cdot\rangle $. Writing $U\even = V$, we can identify $U$ with $V\oplus V^*$.
Since $\langle \, , \rangle$ is supersymmetric, with this identification we have \[ \langle v_1+v^*_1,v_2 + v_2^* \rangle = v_1^* (v_2) + v_2^*(v_1) \] for all $v_1, v_2\in V$ and $v_1^*, v_2^*\in V^*$. Hence, $P(U)$ is a subsuperspace of \[ \End(U) = \End(V \oplus V^*) = \begin{pmatrix} \End (V) & \Hom (V^*, V)\\ \Hom (V, V^*) & \End(V^*) \end{pmatrix} \] given by \[ P(U) = \left\{\left(\begin{matrix} a & b \\ c & -a^*\\ \end{matrix} \right)\Big| \,\tr a = 0,\, b=b^* \AND c=-c^*\right\}. \] In the case $V=\FF^{n+1}$, we write $\mathfrak{p}(n)$ for $\mathfrak{p}(U)$ and define $P(n) = [\mathfrak{p}(n),\mathfrak{p}(n)]$, where $n\geq 2$. Using the standard basis of $V$, we can identify $P(n)$ with the following subsuperalgebra of $M(n+1,n+1)^{(-)}$: \begin{equation}\label{eq:Pn-abstract} P(n) = \left\{\left(\begin{matrix} a & b \\ c & -a\transp\\ \end{matrix} \right)\Big| \,\tr a = 0,\, b=b\transp \AND c=-c\transp\right\}. \end{equation} One can readily check that $P(U)$ is a graded subspace of $\End (U)$ equipped with its canonical $\ZZ$-grading, so we have $P(U) = P(U)\inv \oplus P(U)^0 \oplus P(U)^1$. Also, the map $\iota: \Sl(n+1) \rightarrow P(n)^0$ given by $ \iota(a) = \left(\begin{matrix} a & 0 \\ 0 & -a\transp\\ \end{matrix} \right) $ is an isomorphism of Lie algebras. If we identify $\Sl(n+1)$ and $P(n)^0$ via this map, then $P(n)^{-1} \iso \mathrm{S}^2 (U\even) \iso V_{2\pi_1}$ and $P(n)^1 \iso \Exterior^2 (U\odd) \iso V_{\pi_{n-1}}$ as modules over $P(n)^0$, where $\pi_i$ denotes the $i$-th fundamental weight of $\Sl(n+1)$. \subsection{Automorphisms of $P(n)$} The automorphisms of $P(n)$ were originally described by V. Serganova (see \cite[Theorem 1]{serganova}). We give a more explicit description of the automorphism group that we will use for our purposes. \begin{lemma}\label{lemma:Pn-generates-Mmn} Let $U$ be a finite-dimensional superspace equipped with a supersymmetric nondegenerate odd bilinear form. 
The subset $P(U)$ generates $\End (U)$ as an associative superalgebra. \end{lemma} \begin{proof} Denote by $R$ the associative superalgebra generated by $P(U)$. We claim that $U$ is a simple $R$-module. Indeed, since $P(U)\even\iso \Sl(n+1)$, we have that $U\even \iso V_{\pi_1}$ and $U\odd \iso V_{\pi_n}$ are simple non-isomorphic modules over the Lie algebra $P(U)\even$. Also, the action of $P(U)\odd$ moves elements from $U\even$ to $U\odd$ and vice versa, so $U$ does not have nonzero proper subspaces invariant under $P(U)$. By the Density Theorem, since we are over an algebraically closed field, we conclude that $R = \End (U)$. \end{proof} \begin{prop}\label{prop:Aut-Pn} The group of automorphisms of $P(n)$ is $\GL (n+1)/\{\pm 1\}$, where $a\in \GL(n+1)$ acts as the conjugation by $\left( \begin{matrix} a&0\\ 0&(a\transp)^{-1}\\ \end{matrix}\right)$. \end{prop} \begin{proof} Let $P=P(n)$ and $\vphi: P \rightarrow P$ be a Lie superalgebra automorphism. Since $\vphi$ preserves the canonical $\ZZ_2$-grading, taking its restrictions, we obtain a Lie algebra automorphism $\vphi\subeven : P\even \rightarrow P\even$ and an invertible linear map $\vphi\subodd: P\odd \rightarrow P\odd$. \setcounter{claim}{0} \begin{claim} The components $P^{-1}$ and $P^1$ of the canonical $\ZZ$-grading are invariant under $\vphi$. \end{claim} We denote by $(P\odd)^{\vphi\subeven}$ the $P\even$-module $P\odd$ twisted by $\vphi\subeven$, \ie, the space $P\odd$ with a new action given by $\ell \cdot x = \vphi\subeven (\ell)x$ for all $\ell \in P\even$ and $x \in P\odd$. Clearly, the map $\vphi\subodd: P\odd \rightarrow (P\odd)^{\vphi\subeven}$ is a $P\even$-module isomorphism. In particular, $(P\odd)^{\vphi\subeven} = \vphi\subodd (P^{-1}) \oplus \vphi\subodd (P^{1})$, where $\vphi\subodd (P^{-1})$ and $\vphi\subodd (P^{1})$ are simple and non-isomorphic. It follows that either $(P^{-1})^{\vphi\subeven} = \vphi\subodd (P^{-1})$ or $(P^{-1})^{\vphi\subeven} = \vphi\subodd (P^{1})$.
By dimension count, we have $(P^{-1})^{\vphi\subeven} = \vphi\subodd (P^{-1})$ and, similarly, $(P^{1})^{\vphi\subeven} = \vphi\subodd (P^{1})$. \begin{claim} The automorphism $\vphi\subeven$ is inner. \end{claim} If we identify $\Sl(n+1)$ with $P^0$ via the map $\iota$ defined in Subsection \ref{subseq:Pn}, we have $P^{-1}\iso V_{2\pi_1}$ as an $\Sl(n+1)$-module. By Claim 1, we know that $\vphi\subodd \restriction_{P\inv} : P\inv \rightarrow (P\inv)^{\vphi\subeven}$ is an isomorphism of modules, but if $\vphi\subeven$ were an outer automorphism, we would have $(V_{2\pi_1})^{\vphi\subeven} \simeq V_{2\pi_n}$, which would force $n=1$, a contradiction. \begin{claim} If $\vphi\subeven = \id$, then $\vphi = \upsilon_\lambda$ for some $\lambda\in \FF^\times$. \end{claim} Recall from Subsection \ref{ssec:G-hat-action} that $\upsilon_{\lambda}$ acts as $\lambda^i \id$ on $P^i$. Since $\vphi\subeven = \id$, $\vphi\subodd: P\odd \rightarrow P\odd$ is a $P^0$-module automorphism. By Claim 1 and Schur's Lemma, $\vphi\subodd\restriction_{P^{-1}}$ and $\vphi\subodd\restriction_{P^{1}}$ are scalar operators. Due to the superalgebra structure, these two scalars must be inverses of each other (since $[P^{-1}, P^{1}]$ is a nonzero subspace of $P^0$, on which $\vphi$ acts as the identity), concluding the proof of the claim.\qedclaim By Claim 2, we know that there is an invertible $a\in \End(U\even)$ such that $\vphi\subeven$ is the conjugation by $A=\left( \begin{matrix} a&0\\ 0&(a\transp)^{-1}\\ \end{matrix}\right)$. By Claim 3, $\vphi$ must be this conjugation composed with $\upsilon_\lambda$ for some $\lambda\in \FF^\times$. But $\upsilon_\lambda$ is the conjugation by $\left( \begin{matrix} \mu\inv\id &0\\ 0&\mu \id\\ \end{matrix} \right)$ where $\mu^2=\lambda$, so we can adjust $a$ and assume that $\vphi$ is the conjugation by $A$. Since $P$ generates $M(n+1,n+1)$ as an associative superalgebra (Lemma \ref{lemma:Pn-generates-Mmn}), $A$ is determined up to scalar and, clearly, the only possible scalars are $-1$ and $1$.
\end{proof} \begin{remark} The classes of the automorphisms $\upsilon_\lambda$, $\lambda \in \FF^\times$, exhaust the group of outer automorphisms of $P(n)$ (see \cite[Theorem 1]{serganova}). \end{remark} \subsection{Restriction of gradings from $M(n+1,n+1)$ to $P(n)$} We start with a consequence of Proposition \ref{prop:Aut-Pn}. \begin{cor}\label{cor:automorphisms-Pn} Every automorphism of $P(n)$ is the restriction of a unique even automorphism of $M(n+1, n+1)$ and every grading on $P(n)$ is the restriction of a unique even grading on $M(n+1, n+1)$. \end{cor} \begin{proof} Consider the embedding $\Aut(P(n))\rightarrow \Aut(M(n+1,n+1))$ that follows from Proposition \ref{prop:Aut-Pn}. The image consists of even automorphisms, so Proposition \ref{prop:3-equiv-even-morita-action}(iv) implies that every $G$-grading on $P(n)$ extends to an even grading on $M(n+1, n+1)$. The uniqueness follows from Lemma \ref{lemma:Pn-generates-Mmn}. \end{proof} Of course, not every even grading on $M(n+1,n+1)$ restricts to $P(n)$. We are going to obtain necessary and sufficient conditions for such a restriction to be possible. Let $\D$ be a finite-dimensional graded division algebra. The concept of \emph{dual of a graded $\D$-module} is a special case of the concept of \emph{superdual} discussed in Subsection \ref{ssec:superdual}, which arises when the gradings on $\D$ and its graded modules are even. Furthermore, in our situation $T$ must be an elementary $2$-group (see Theorem \ref{thm:Pn-elem-2-grp}). Let us recall the definitions and specialize them to the case at hand. Let $\mc V$ be a \emph{right} graded $\D$-module. Then $\mc V^{\star}=\Hom_{\D} (\mc V, \D)$ is a \emph{left} $\D$-module with the action given by $(d\cdot f) (v) = d f(v)$ for all $d\in \D$, $f\in\mc V^\star$ and $v\in\mc V$.
If $\mc B = \{v_1, \ldots, v_k\}$ is a homogeneous basis for $\mc V$, the dual basis $\mc B\Star \subseteq \mc V \Star$ consists of the elements $v_i\Star$, $1\leq i \leq k$, defined by $v_i\Star (v_j) = \delta_{ij}$. Note that $\deg v_i\Star = (\deg v_i)\inv$. Given two right $\D$-modules, $\mc V$ and $\mc W$, and a $\D$-linear map $L:\mc V\rightarrow \mc W$, we have the adjoint $L^\star: \mc W^\star\rightarrow \mc V^\star$ defined by $L^\star(f) = f\circ L$, for every $f\in\mc W^\star$. We now assume that $\D$ is a standard realization of a graded division algebra associated to a pair $(T, \beta)$ such that $T$ is an elementary $2$-group. With this we can identify $\D$ with $\D\op$ via transposition (see Remark \ref{rmk:2-grp-transp}) and, hence, we can regard left $\D$-modules as right $\D$-modules. In particular, if $\mc V$ is a graded right $\D$-module, then $\mc V\Star$ is a graded right $\D$-module via $(f \cdot d)(v) = d\transp f(v)$ for all $f\in \mc V\Star$, $d\in\D$ and $v\in \mc V$. Consider the space $\Hom_\D(\mc V, \mc W)$. Fixing homogeneous $\D$-bases $\mc B = \{v_1, \ldots, v_k\}$ and $\mc C = \{w_1, \ldots, w_\ell\}$ for $\mc V$ and $\mc W$, respectively, we obtain an isomorphism between $\Hom_\D(\mc V, \mc W)$ and $\M_{\ell \times k} (\D)$. The latter is naturally isomorphic to $\M_{\ell \times k} (\FF) \tensor \D$, so we will identify them. \begin{lemma}\label{lemma:D-transp} Let $L: \mc V\rightarrow \mc W$ be a $\D$-linear map. We fix homogeneous $\D$-bases $\mc B$ and $\mc C$ on $\mc V$ and $\mc W$, respectively, and their dual bases in $\mc V\Star$ and $\mc W\Star$. If $A\tensor d \in \M_{\ell \times k} (\FF) \tensor \D$ represents $L$, then $A\transp \tensor d\transp$ represents $L\Star$.\qed \end{lemma} We can regard the elements of $\M_{\ell \times k} (\FF) \tensor \D$ as matrices over $\FF$ via Kronecker product (as in Definition \ref{def:explicit-grd-assoc}). Then we have $A\transp \tensor d\transp = (A\tensor d)\transp$.
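For illustration (our addition, with arbitrarily chosen sizes), the compatibility of transposition with the Kronecker-product identification can be confirmed numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))  # the M_{l x k}(F) factor
d = rng.standard_normal((4, 4))  # stand-in for an element of D

# A^T (x) d^T = (A (x) d)^T under the Kronecker product:
assert np.allclose(np.kron(A.T, d.T), np.kron(A, d).T)
```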
\begin{thm}\label{thm:Pn-elem-2-grp} Let $U$ be a finite-dimensional superspace and let $\Gamma = \Gamma (T, \beta, \gamma_0, \gamma_1)$ be an even $G$-grading on $\End(U)$. The superspace $U$ admits a supersymmetric nondegenerate odd bilinear form such that $P(U)$ is a $G$-graded subsuperalgebra of $\End(U)^{(-)}$ if, and only if, $T$ is an elementary $2$-group and there is $g_0\in G$ such that $\Xi(\gamma_1) = g_0 \, \Xi(\gamma_0\inv)$. Moreover, if there are two supersymmetric nondegenerate odd bilinear forms on $U$ such that the corresponding $P_1(U)$ and $P_2(U)$ are $G$-graded subsuperalgebras, then $P_1(U)$ and $P_2(U)$ are isomorphic up to shifts in opposite directions. \end{thm} \begin{proof} Assume that, for some form, $P(U)$ is a $G$-graded subsuperalgebra. Let $V=U\even$ and consider the identification of $U\odd$ with $V^*$ presented in Subsection \ref{subseq:Pn}. This way $\Gamma = \Gamma (T, \beta, \gamma_0, \gamma_1)$ is an even grading on \[ \End(U) = \End(V \oplus V^*) = \begin{pmatrix} \End (V) & \Hom (V^*, V)\\ \Hom (V, V^*) & \End(V^*) \end{pmatrix}. \] In particular, $\End(V)$ and $\End(V^*)$ are graded subspaces of $\End(U)\even$, with gradings $\Gamma (T, \beta, \gamma_0)$ and $\Gamma (T, \beta, \gamma_1)$, respectively. If \[ x = \left(\begin{matrix} a & 0 \\ 0 & -a^*\\ \end{matrix} \right) \] is a homogeneous element in $P(U)\even$, then both $u(x) := a \in \Sl(V) \subseteq \End(V)$ and $v(x) := -a^* \in \Sl(V^*) \subseteq \End(V^*)$ are homogeneous elements of the same degree. In other words, the maps $u: P(U)\even \rightarrow \Sl(V)$ and $v: P(U)\even \rightarrow \Sl(V^*)$ are homogeneous of degree $e$. Consider the algebra isomorphism $\vphi: \End(V)\op \rightarrow \End(V^*)$ associating to each operator its adjoint. Clearly, $\vphi(a) = - (v \circ u\inv) (a)$ for all $a\in \Sl(V)$. Since $\End(V) = \Sl(V)\, \oplus\, \FF\id_V$ and $\vphi(\id_V) = \id_{V^*}$, we see that $\vphi$ is homogeneous of degree $e$.
From Lemma \ref{lemma:Dsop} and Remark \ref{rmk:gamma-inv}, we conclude that $\Gamma (T, \beta\inv, \gamma_0\inv) \iso \Gamma (T, \beta, \gamma_1)$, and hence, by Theorem \ref{thm:classification-matrix}, $\beta\inv = \beta$ and there is $g_0\in G$ such that $g_0\,\Xi(\gamma_0\inv) = \Xi(\gamma_1)$. Since $\beta$ is nondegenerate, $\beta\inv = \beta$ if, and only if, $T$ is an elementary $2$-group. Note that the $G$-graded algebra $P(U)\even$ is isomorphic (via the map $u$) to the $G$-graded subalgebra $\Sl(V)$ of $\End(V)^{(-)}$, where the grading on $\End(V)$ is $\Gamma(T,\beta, \gamma_0)$. Therefore, if we have two forms such that the corresponding $P_1(U)$ and $P_2(U)$ are $G$-graded subsuperalgebras, then their even parts are isomorphic as $G$-graded algebras. Using Lemmas \ref{lemma:simplebimodule} and \ref{lemma:opposite-directions}, we conclude the ``moreover'' part. Now assume, conversely, that $T$ is an elementary $2$-group and $\Xi(\gamma_1) = g_0 \, \Xi(\gamma_0\inv)$. We can adjust $\gamma_1$, if necessary, so that $\gamma_1 = g_0\, \gamma_0\inv$ and the isomorphism class of $\Gamma$ does not change. Let $\D$ be a standard realization of a graded division algebra associated to $(T,\beta)$ and let $\mc V$ be a graded right $\D$-module with a homogeneous basis $\mc B$ whose degrees are given by $\gamma_0$. Define $\mc U = \mc U\even \oplus \mc U\odd$ with $\mc U\even = \mc V$ and $\mc U\odd = (\mc V\Star)^{[g_0]}$. The $G$-grading $\Gamma$ on $\End(U)$ is defined by means of an isomorphism: \[ \begin{split} \End(U) \iso \End_\D (\mc U) &= \begin{pmatrix} \End_\D (\mc V) & \Hom_\D ((\mc V\Star)^{[g_0]}, \mc V)\\ \Hom_\D (\mc V, (\mc V\Star)^{[g_0]}) & \End_\D ((\mc V\Star)^{[g_0]}) \end{pmatrix}\\ &= \begin{pmatrix} \End_\D (\mc V) & \Hom_\D (\mc V\Star, \mc V)^{[g_0\inv]}\\ \Hom_\D (\mc V, \mc V\Star)^{[g_0]} & \End_\D (\mc V\Star) \end{pmatrix}.
\end{split} \] Using the homogeneous $\D$-bases $\mc B$ for $\mc V$ and $\mc B\Star$ for $\mc V\Star$ to represent $\D$-linear maps by matrices in $M_k(\D) = M_k(\FF) \tensor \D$ and using the Kronecker product to identify the latter with $M_{n+1}(\FF)$, we obtain an isomorphism $\End(U) \xrightarrow{\sim} M(n+1, n+1)$, and $M(n+1, n+1)$ contains $\mathfrak{p}(n)$ and $P(n) = [\mathfrak{p}(n), \mathfrak{p}(n)]$ as in Equation \eqref{eq:Pn-abstract}. The above isomorphism $\End(U)\xrightarrow{\sim} M(n+1,n+1)$ of superalgebras is given by an isomorphism of superspaces $U \xrightarrow{\sim} \FF^{n+1} \oplus \FF^{n+1}$. Hence, there exists a supersymmetric nondegenerate odd bilinear form on $U$ such that $P(U)$ corresponds to $P(n)$ under the above isomorphism. Finally, we have to show that $P(U)$ is a $G$-graded subsuperspace of $\End(U)$. Clearly, it is sufficient to prove the same for $\mathfrak{p}(U)$. But $\mathfrak{p}(U)$ corresponds to \begin{equation*} \mathfrak{p}(n) = \left\{\left(\begin{matrix} a & b \\ c & -a\transp\\ \end{matrix} \right)\Big| \,a,b,c\in M_{n+1}(\FF),\, b=b\transp \AND c=-c\transp\right\} \subseteq M(n+1,n+1), \end{equation*} which, in view of Lemma \ref{lemma:D-transp}, corresponds to the subsuperspace \[\begin{split} \bigg \{\left(\begin{matrix} a & b \\ c & -a\Star\\ \end{matrix} \right)\Big| \,a\in \End_\D(\mc V),\, b=b\Star\in \Hom_\D(\mc V\Star, \mc V),\, c=-c\Star\in \Hom_\D(\mc V, \mc V\Star) \bigg\} \end{split} \] of $\End_\D(\mc U)$, which is clearly a $G$-graded subsuperspace. \end{proof} \subsection{$G$-gradings up to isomorphism} \begin{defi}\label{def:grd-Pn} Let $T\subseteq G$ be a finite elementary $2$-subgroup, $\beta$ be a nondegenerate alternating bicharacter on $T$, $\gamma$ be a $k$-tuple of elements of $G$, and $g_0\in G$.
We will denote by $\Gamma_P (T, \beta, \gamma, g_0)$ the grading on the superalgebra $P(n)$ obtained by restricting the grading $\Gamma(T,\beta,\gamma,g_0\gamma\inv)$ on $M(n+1,n+1)$ as in the proof of Theorem \ref{thm:Pn-elem-2-grp}. Explicitly, write $\gamma = (g_1, \ldots, g_k)$ and take a standard realization of a graded division algebra $\D$ associated to $(T, \beta)$. Then $M_{n+1}(\FF)\iso M_k(\FF) \tensor \D$ by means of Kronecker product, and \[ M(n+1, n+1) \iso \begin{pmatrix} M_k (\FF)\tensor \D & M_k (\FF)\tensor \D\\ M_k (\FF)\tensor \D & M_k (\FF)\tensor \D \end{pmatrix}. \] Denote by $E_{ij}$ the $(i,j)$-th matrix unit in $M_k (\FF)$. The grading $\Gamma(T,\beta,\gamma,g_0\gamma\inv)$ is given by: \begin{center} \begin{tabular}{@{$\bullet$ }ll} $\deg (E_{ij}\tensor d) = g_i (\deg d) g_j\inv$ & in the upper left corner;\\ $\deg (E_{ij}\tensor d) = g_i (\deg d) g_j \, g_0\inv$ & in the upper right corner;\\ $\deg (E_{ij}\tensor d) = g_i\inv (\deg d) g_j\inv g_0$ & in the lower left corner;\\ $\deg (E_{ij}\tensor d) = g_i\inv (\deg d) g_j$ & in the lower right corner. \end{tabular} \end{center} \end{defi} Note that the restriction of $\Gamma_P(T, \beta, \gamma, g_0)$ to the even part is the inner grading on $\Sl(n+1)$ with parameters $(T, \beta, \gamma)$. \begin{thm}\label{thm:Pn-iso} Every $G$-grading on the Lie superalgebra $P(n)$ is isomorphic to some $\Gamma_P (T, \beta, \gamma, g_0)$ as in Definition \ref{def:grd-Pn}. Two gradings, $\Gamma = \Gamma_P (T,\beta,\gamma,g_0)$ and $\Gamma' = \Gamma_P (T',\beta',\gamma',g_0')$, are isomorphic if and only if $T=T'$, $\beta = \beta'$, and there is $g\in G$ such that $g^2 g_0 = g_0'$ and $g\,\Xi(\gamma)=\Xi(\gamma')$. \end{thm} \begin{proof} The first assertion follows from Corollary \ref{cor:automorphisms-Pn} and Theorem \ref{thm:Pn-elem-2-grp}.
For the second assertion, recall that $\Gamma$ and $\Gamma'$ are, respectively, the restrictions of the gradings $\widetilde \Gamma = \Gamma (T, \beta, \gamma, g_0 \gamma\inv)$ and $\widetilde \Gamma' = \Gamma (T', \beta', \gamma', g_0' (\gamma')\inv)$ on $M(n+1, n+1)$. $(\Rightarrow)$: Suppose $\Gamma \iso \Gamma'$. Since every automorphism of $P(n)$ extends to an automorphism of $M(n+1, n+1)$ (Corollary \ref{cor:automorphisms-Pn}), we have $\widetilde \Gamma \iso \widetilde \Gamma'$, which implies $T=T'$ and $\beta = \beta'$ by Theorem \ref{thm:even-assc-iso}. Let $\D$ be a standard realization associated to $(T, \beta)$ and let $\mc V$ be a right $\D$-module with basis $\mc B = \{v_1, \ldots, v_k\}$, which is graded by assigning $\deg v_i = g_i$. The same module, but with $\deg v_i = g_i'$, will be denoted by $\mc W$. Then $E = \End_\D (\mc V \oplus (\mc V\Star)^{[g_0]})$ and $E' = \End_\D (\mc W \oplus (\mc W\Star)^{[g_0']})$ are graded superalgebras. Using the bases $\mc B$ and $\mc B\Star$ and the Kronecker product, we can identify them with $M(n+1, n+1)$. The first identification gives the grading $\widetilde \Gamma$ on $M(n+1, n+1)$ and the second gives $\widetilde \Gamma'$. Let $\Phi$ be an automorphism of $M(n+1, n+1)$ that sends $\widetilde\Gamma$ to $\widetilde\Gamma'$. By Proposition \ref{prop:Aut-Pn}, $\Phi$ is the conjugation by \[ A = \begin{pmatrix} a & 0\\ 0 & (a\transp)\inv \end{pmatrix} \] for some $a\in \GL(n+1)$. By Lemma \ref{lemma:D-transp}, $\Phi$ corresponds to the isomorphism $E\rightarrow E'$ that is the conjugation by \[ \phi = \begin{pmatrix} \alpha & 0\\ 0 & (\alpha\Star)\inv \end{pmatrix} \] where $\alpha: \mc V\rightarrow \mc W $ and $(\alpha\Star)\inv: (\mc V\Star)^{[g_0]} \rightarrow (\mc W\Star)^{[g_0']} $ are $\D$-linear maps. 
On the other hand, by Proposition \ref{prop:inner-automorphism}, this isomorphism $E\to E'$ is the conjugation by a homogeneous bijective $\D$-linear map \[ \psi= \left( \begin{matrix} \psi_{11}&\psi_{12}\\ \psi_{21}&\psi_{22}\\ \end{matrix}\right). \] It follows that there is a nonzero $\lambda\in \FF$ such that $\phi = \lambda\psi$, and, hence, $\phi$ is homogeneous. Let us denote its degree by $g$. Then both $\alpha$ and $(\alpha\Star)\inv$ must be homogeneous of degree $g$. Hence, $\alpha: \mc V^{[g]}\to\mc W $ is an isomorphism of graded $\D$-modules, so we conclude that $g \Xi(\gamma) = \Xi(\gamma')$. Considered as a map $\mc V\Star \rightarrow \mc W\Star$, $(\alpha\Star)\inv$ would have degree $g\inv$, so taking into account the shifts, it has degree $g\inv g_0\inv g_0'$, which must be equal to $g$, so $g_0' = g^2 g_0$. $(\Leftarrow)$: We may suppose $\D=\D'$. Since $g\Xi(\gamma) = \Xi(\gamma')$, we have an isomorphism of graded $\D$-modules $\alpha: \mc V^{[g]} \rightarrow \mc W$. As a map from $\mc V$ to $\mc W$, $\alpha$ is homogeneous of degree $g$, hence $(\alpha\Star)\inv: \mc V\Star \rightarrow \mc W\Star$ has degree $g\inv$. It follows that, as a map from $(\mc V\Star)^{[g_0]}$ to $(\mc W\Star)^{[g_0']}$, $(\alpha\Star)\inv$ has degree $g\inv g_0\inv g_0' = g$. The desired automorphism of $P(n)$ that sends $\Gamma$ to $\Gamma'$ is the conjugation by the matrix $\psi=\left( \begin{matrix} \alpha&0\\ 0&(\alpha\Star)\inv\\ \end{matrix}\right).$ \end{proof} \subsection{Fine gradings up to equivalence} For every integer $\ell\ge 0$, we set $T_{(\ell)}=\ZZ_2^{2\ell}$ and fix a nondegenerate alternating bicharacter $\beta_{(\ell)}$, say, \[ \beta_{(\ell)} (x,y)=(-1)^{\sum_{i=1}^{2\ell} x_i y_{2\ell-i+1}}. \] \begin{defi}\label{def:fine-grd-Pn} For every $\ell$ such that $2^\ell$ is a divisor of $n+1$, put $k:=\frac{n+1}{2^\ell}$ and $\tilde{G}_{(\ell)}=T_{(\ell)}\times \ZZ^k$.
Let $\{e_1,\ldots,e_k\}$ be the standard basis of $\ZZ^k$ and let $\langle e_0\rangle$ be the infinite cyclic group generated by a new symbol $e_0$. We define $\Gamma_P(\ell,k)$ to be the $\tilde{G}_{(\ell)}\times\langle e_0 \rangle$-grading $\Gamma_P (T_{(\ell)},\beta_{(\ell)},(e_1, \ldots, e_k), e_0)$ on $P(n)$. If $n$ is clear from the context, we will simply write $\Gamma_P(\ell)$. \end{defi} The subgroup of $\tilde{G}_{(\ell)} \times \langle e_0 \rangle$ generated by the support of $\Gamma_P(\ell,k)$ is \[ G_{(\ell)} := (T_{(\ell)}\times \ZZ^k_0)\oplus \langle 2e_1 - e_0 \rangle \iso \ZZ_2^{2\ell}\times \ZZ^k. \] \begin{prop}\label{prop:P-fine} The gradings $\Gamma_P(\ell)$ on $P(n)$ are fine and pairwise nonequivalent. \end{prop} \begin{proof} We can write $\Gamma_P (\ell) = \Gamma^{-1} \oplus \Gamma^0 \oplus \Gamma^1$ where $\Gamma^i$ is the restriction of $\Gamma_P(\ell)$ to the $i$-th component of the canonical $\ZZ$-grading of $P(n)$. We identify $ P(n)^0 = P(n)\even$ with $ \Sl(n+1)$ via the map $\iota$ defined in Subsection \ref{subseq:Pn}. Then the grading $\Gamma^0$ on $P(n)^0$ is the restriction to $\Sl(n+1)$ of a fine grading on $M_{n+1}(\FF)$ with universal group $T_{(\ell)}\times\ZZ^k_0$ (\cite[Proposition 2.35]{livromicha}), so it has no proper refinements among the inner gradings on $\Sl(n+1)$. Also, $\Gamma_P(\ell)$ and $\Gamma_P(\ell')$ are nonequivalent if $\ell\ne\ell'$, because their restrictions to $P(n)^0$ are nonequivalent. Note that the supports of $\Gamma^{-1}$, $\Gamma^0$ and $\Gamma^1$ are pairwise disjoint since they project to, respectively, $-e_0$, $0$, and $e_0$ in the direct summand $\langle e_0 \rangle $ of $\tilde{G}_{(\ell)}\times\langle e_0\rangle$. Suppose that the grading $\Gamma_P (\ell)$ admits a refinement $\Delta = \Delta^{-1} \oplus \Delta^0 \oplus \Delta^1$. Then $\Delta^0$ is an inner grading that is a refinement of $\Gamma^0$, hence they are the same grading (up to relabeling).
Using Lemma \ref{lemma:simplebimodule}, we conclude that $\Gamma_P(\ell)$ and $\Delta$ are the same grading, proving that $\Gamma_P(\ell)$ is fine. \end{proof} \begin{thm}\label{thm:class-fine-Pn} Every fine grading on $P(n)$ is equivalent to a unique $\Gamma_P(\ell)$ as in Definition \ref{def:fine-grd-Pn}. Moreover, every grading $\Gamma_P(\ell)$ is fine, and $G_{(\ell)}$ is its universal group. \end{thm} \begin{proof} Let $\Gamma=\Gamma_P(T,\beta,\gamma,g_0)$ be any $G$-grading on $P(n)$. Since $T$ is an elementary $2$-group of even rank, we have an isomorphism $\alpha:T_{(\ell)}\to T$, for some $\ell$, such that $\beta_{(\ell)}=\beta\circ(\alpha\times\alpha)$. We can extend $\alpha$ to a homomorphism $G_{(\ell)}\rightarrow G$ (also denoted by $\alpha$) by sending the elements $e_1,\ldots,e_k$ to the entries of $\gamma$, and $e_0$ to $g_0$. By construction, ${}^\alpha\Gamma_P {(\ell)}\iso\Gamma$. It remains to apply Proposition \ref{prop:P-fine} and Lemma \ref{lemma:universal-grp}. \end{proof} \section*{Acknowledgments} The first two authors were Ph.D. students at Memorial University of Newfoundland while working on this paper. Helen Samara Dos Santos would like to thank her co-supervisor, Yuri Bahturin, for help and guidance during her Ph.D. program. All authors are grateful to Yuri Bahturin and Alberto Elduque for useful discussions. \end{document}
\begin{document} \title{Irrational Exuberance: Correcting Bias in Probability Estimates} \author{Gareth M. James$^1$, Peter Radchenko$^2$ and Bradley Rava$^{1,3}$} \date{} \footnotetext[1]{Department of Data Sciences and Operations, University of Southern California.} \footnotetext[2]{University of Sydney.} \footnotetext[3]{Research is generously supported by the NSF GRFP in Mathematical Statistics.} \maketitle \begin{abstract} We consider the common setting where one observes probability estimates for a large number of events, such as default risks for numerous bonds. Unfortunately, even with unbiased estimates, selecting events corresponding to the most extreme probabilities can result in systematically underestimating the true level of uncertainty. We develop an empirical Bayes approach ``Excess Certainty Adjusted Probabilities'' (ECAP), using a variant of Tweedie's formula, which updates probability estimates to correct for selection bias. ECAP is a flexible non-parametric method, which directly estimates the score function associated with the probability estimates, so it does not need to make any restrictive assumptions about the prior on the true probabilities. ECAP also works well in settings where the probability estimates are biased. We demonstrate through theoretical results, simulations, and an analysis of two real world data sets, that ECAP can provide significant improvements over the original probability estimates. \end{abstract} \textbf{Keywords:\/} Empirical Bayes; selection bias; excess certainty; Tweedie's formula. \section{Introduction} We are increasingly facing a world where automated algorithms are used to generate probabilities, often in real time, for thousands of different events.
Just a small handful of examples include finance, where rating agencies provide default probabilities on thousands of different risky assets \citep{Keal03, Hull05}; sporting events, where each season ESPN and other sites estimate win probabilities for all the games occurring in a given sport \citep{LEUNG2014710}; politics, where pundits estimate the probabilities of candidates winning in congressional and state races during a given election season \citep{Silver18, Soumbatiants2006}; or medicine, where researchers estimate the survival probabilities of patients undergoing a given medical procedure \citep{Poses97, Smeenk07}. Moreover, with the increasing availability of enormous quantities of data, there are more and more automated probability estimates being generated and consumed by the general public. Many of these probabilities have significant real world implications. For example, the rating given to a company's bonds will impact their cost of borrowing, or the estimated risk of a medical procedure will affect the patient's likelihood of undertaking the operation. This leads us to question the accuracy of these probability estimates. Let $p_i$ and $\tilde p_i$ respectively represent the true and estimated probability of $A_i$ occurring for a series of events $A_1,\ldots, A_n$. Then, we often seek an unbiased estimator such that $E(\tilde p_i|p_i)=p_i,$ so $\tilde p_i$ is neither systematically too high nor too low. Of course, there are many recent examples where this unbiasedness assumption has not held. For example, prior to the financial crisis of 2008, rating agencies systematically underestimated the risk of default for mortgage-backed securities, so $E(\tilde p_i|p_i)<p_i$. Similarly, in the lead-up to the 2016 US presidential election, political pundits significantly underestimated the uncertainty about which candidate would win. However, even when unbiasedness does hold, using $\tilde p_i$ as an estimate for $p_i$ can cause significant problems.
Consider, for example, a conservative investor who only purchases bonds with extremely low default risk. When presented with $n$ estimated bond default probabilities $\tilde p_1,\ldots, \tilde p_n$ from a rating agency, she only invests when $\tilde p_i=0.001$. Let us suppose that the rating agency has done a careful risk assessment, so their probability estimates are unbiased for all $n$ bonds. What then is the fraction of the investor's bonds that will actually default? Given that the estimates are unbiased, one might imagine (and the investor is certainly hoping) that the rate would be close to $0.001$. Unfortunately, the true default rate may be much higher. \begin{figure} \caption{Selection bias with unbiased probability estimates. Left: a random sample of estimated probabilities $\tilde p_i$ and true probabilities $p_i$. Middle: $E(p_i|\tilde p_i)$ (orange) versus $\tilde p_i$ (dashed). Right: the ratio of $E(p_i|\tilde p_i)$ to $\tilde p_i$.} \label{intro.plot} \end{figure} Figure~\ref{intro.plot} provides an illustration. We first generated a large number of probabilities $p_i$ from a uniform distribution and then produced corresponding $\tilde p_i$ in such a way that $E(\tilde p_i|p_i)=p_i$ for $i=1,\ldots, n$. In the left panel of Figure~\ref{intro.plot} we plotted a random sample of $100$ of these probabilities, concentrating on values less than 10\%. While there is some variability in the estimates, there is no evidence of bias in $\tilde p_i$. In the middle panel we used the simulated data to compute the average value of $p_i$ for any given value of $\tilde p_i$, i.e., $E(p_i|\tilde p_i)$. A curious effect is observed. At every point the average value of $p_i$ (orange line) is systematically higher than $\tilde p_i$ (dashed line), i.e., $E(p_i|\tilde p_i)>\tilde p_i$. Finally, in the right panel we have plotted the ratio of $E(p_i|\tilde p_i)$ to $\tilde p_i$. Ideally, this ratio should be approximately one, which would, for example, correspond to the true risk of a set of bonds equalling the estimated risk. However, for small values of $\tilde p_i$ we observe ratios far higher than one.
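The effect shown in Figure~\ref{intro.plot} is easy to reproduce. The following simulation sketch (ours; it uses a beta sampling model of the kind introduced in Section~\ref{method.sec} as one convenient unbiased choice, with an arbitrary value of the noise parameter) draws true probabilities uniformly, generates conditionally unbiased estimates, and then conditions on estimates near $0.001$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, gamma = 400_000, 0.005  # gamma controls Var(p_tilde | p); value is arbitrary

# True probabilities and conditionally unbiased estimates: E(p_tilde | p) = p.
p = rng.uniform(1e-4, 1.0, n)
p_tilde = rng.beta(p / gamma, (1 - p) / gamma)

# Marginally there is no bias at all...
print(abs(p_tilde.mean() - p.mean()))  # close to 0

# ...but among the most extreme estimates the true risk is far higher.
sel = (p_tilde > 0.0005) & (p_tilde < 0.0015)  # "estimated default risk ~ 0.001"
print(p[sel].mean() / p_tilde[sel].mean())     # substantially above 1
```

The exact ratio depends on the prior and the noise level, but the qualitative conclusion is the same as in the figure: conditioning on the smallest estimates selects observations whose true probabilities are systematically larger.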
So, for example, our investor who only purchases bonds with an estimated default risk of $\tilde p_i=0.001$ will in fact find that $0.004$ of her bonds end up defaulting, four times the risk level she intended to take! These somewhat surprising results are not a consequence of this particular simulation setting. They are in fact an instance of selection bias, a well-known issue that occurs when observations are selected in a non-random fashion, e.g., by taking only the most extreme ones, so that they can no longer be considered random samples from the underlying population. If this bias is not taken into account, then any future analyses will provide a distorted estimate of the population. Consider the setting where we observe $X_1,\ldots, X_n$ with $E(X_i)=\mu_i$ and wish to estimate $\mu_i$ based on an observed $X_i$. Then it is well known that the conditional expectation $E(\mu_i|X_i)$ corrects for any selection bias associated with choosing $X_i$ in a non-random fashion \citep{efron2011}. Numerous approaches have been suggested to address selection bias, with most methods imposing some form of shrinkage to estimate $E(\mu_i|X_i)$, either explicitly or implicitly. Among linear shrinkage methods, the James-Stein estimator \citep{james1961} is the most well known, although many others exist \citep{efron1975, ikeda2016}. There are also other popular classes of methods, including: non-linear approaches utilizing sparse priors \citep{donoho1994, abramovich2006, bickel2008, ledoit2012}, Bayesian estimators \citep{gelman2012} and empirical Bayes methods \citep{jiang2009, brown2009nonparametric, petrone2014}. For Gaussian data, Tweedie's formula \citep{Rob1956} provides an elegant empirical Bayes estimate for $E(\mu_i|X_i)$, using only the marginal distribution of $X_i$. While less well known than the James-Stein estimator, it has been shown to be an effective non-parametric approach for addressing selection bias \citep{efron2011}.
The approach can be automatically adjusted to lean more heavily on parametric assumptions when little data is available, but in settings such as ours, where large quantities of data have been observed, it provides a highly flexible non-parametric shrinkage method \citep{benjamini2005, henderson2015}. However, the standard implementation of Tweedie's formula assumes that, conditional on $\mu_i$, the observed data follow a Gaussian distribution. Most shrinkage methods make similar distributional assumptions or else model the data as unbounded, which makes little sense for probabilities. What then would be a better estimator for low-probability events? In this paper we propose an empirical Bayes approach, called ``Excess Certainty Adjusted Probability'' (ECAP), specifically designed for probability estimation in settings with a large number of observations. ECAP uses a variant of Tweedie's formula which models $\tilde p_i$ as coming from a beta distribution, automatically ensuring the estimate is bounded between $0$ and $1$. We provide theoretical and empirical evidence demonstrating that the ECAP estimate is generally significantly more accurate than $\tilde p_i$. This paper makes three key contributions. First, we convincingly demonstrate that even an unbiased estimator $\tilde p_i$ can provide a systematically sub-optimal estimate for $p_i$, especially in situations where large numbers of probability estimates have been generated. This leads us to develop the oracle estimator for $p_i$, which results in a substantial improvement in expected loss. Second, we introduce the ECAP method which estimates the oracle. ECAP does not need to make any assumptions about the distribution of $p_i$. Instead, it relies on estimating the marginal distribution of $\tilde p_i$, a relatively easy problem in the increasingly common situation where we observe a large number of probability estimates.
Finally, we extend ECAP to the biased data setting where $\tilde p_i$ represents a biased observation of $p_i$ and show that even in this setting we are able to recover systematically superior estimates of $p_i$. The paper is structured as follows. In Section~\ref{method.sec} we first formulate a model for $\tilde p_i$ and a loss function for estimating $p_i$. We then provide a closed-form expression for the corresponding oracle estimator and its associated reduction in expected loss. We conclude Section~\ref{method.sec} by proposing the ECAP estimator for the oracle and deriving its theoretical properties. Section~\ref{extension.sec} provides two extensions. First, we propose a bias corrected version of ECAP, which can detect situations where $\tilde p_i$ is a biased estimator for $p_i$ and automatically adjust for the bias. Second, we generalize the ECAP model from Section~\ref{method.sec}. Next, Section~\ref{sim.sec} contains results from an extensive simulation study that examines how well ECAP works to estimate $p_i$, in both the unbiased and biased settings. Section~\ref{emp.sec} illustrates ECAP on two interesting real world data sets. The first is a unique set of probabilities from ESPN predicting, in real time, the winner of various NCAA football games, and the second contains the win probabilities of all candidates in the 2018 US midterm elections. We conclude with a discussion and possible future extensions in Section~\ref{discussion.sec}. Proofs of all theorems are provided in the appendix. \section{Methodology} \label{method.sec} Let $\tilde p_1,\ldots, \tilde p_n$ represent initial estimates of the probabilities of events $A_1,\ldots, A_n$ occurring. In practice, we assume that $\tilde p_1,\ldots, \tilde p_n$ have already been generated, by previous analysis or externally, say, by an outside rating agency in the case of the investment example.
Our goal is to construct estimators $\hat p_1(\tilde p_1), \ldots, \hat p_n(\tilde p_n)$ which provide more accurate estimates for $p_1,\ldots, p_n$. In order to derive the estimator we first choose a model for $\tilde p_i$ and select a loss function for $\hat p_i$, which allows us to compute the corresponding oracle estimator $p_{i0}$. Finally, we provide an approach for generating an estimator for the oracle $\hat p_i$. In this section we only consider the setting where $\tilde p_i$ is assumed to be an unbiased estimator for $p_i$. We extend our approach to the more general setting where $\tilde p_i$ may be a biased estimator in Section~\ref{biased.sec}. \subsection{Modeling $\tilde p_i$ and Selecting a Loss Function} Given that $\tilde p_i$ is a probability, we model its conditional distribution using the beta distribution\footnote{We consider a more general class of distributions for $\tilde p_i$ in Section~\ref{sec.mixture}}. In particular, we model \begin{equation} \label{beta.model} \tilde p_i|p_i \sim Beta(\alpha_i, \beta_i),\quad\text{where} \quad \alpha_i=\frac{p_i}{\gamma^*}, \quad\beta_i=\frac{1-p_i}{\gamma^*}, \end{equation} and $\gamma^*$ is a fixed parameter which influences the variance of $\tilde p_i$. Under \eqref{beta.model}, \begin{equation} \label{pt.mean.var} E(\tilde p_i|p_i)=p_i \quad\text{and} \quad Var(\tilde p_i|p_i)=\frac{\gamma^*}{1+\gamma^*}p_i(1-p_i), \end{equation} so $\tilde p_i$ is an unbiased estimate for $p_i$, which becomes more accurate as $\gamma^*$ tends to zero. Figure~\ref{beta.plot} provides an illustration of the density function of $\tilde p_i$ for three different values of $p_i$. In principle, this model could be extended to incorporate observation specific variance terms $\gamma_i^*$. Unfortunately, in practice $\gamma^*$ needs to be estimated, which would be challenging if we assumed a separate term for each observation. 
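The mean and variance in \eqref{pt.mean.var} follow from the standard moment formulas for a $Beta(\alpha,\beta)$ distribution, $E=\alpha/(\alpha+\beta)$ and $Var=\alpha\beta/[(\alpha+\beta)^2(\alpha+\beta+1)]$, using $\alpha_i+\beta_i=1/\gamma^*$. A quick numerical sanity check of this algebra (ours, over an arbitrary grid of parameter values):

```python
import numpy as np

# Plug alpha = p/gamma, beta = (1-p)/gamma into the standard Beta moments
# and compare with E(p_tilde|p) = p and Var(p_tilde|p) = gamma/(1+gamma) p(1-p).
p = np.linspace(0.01, 0.99, 99)
for gamma in (0.001, 0.01, 0.1, 1.0):
    a, b = p / gamma, (1 - p) / gamma
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    assert np.allclose(mean, p)                                  # unbiased
    assert np.allclose(var, gamma / (1 + gamma) * p * (1 - p))   # matches the model
print("moment formulas agree")
```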
However, in some settings it may be reasonable to model $\gamma_i^*=w_i \gamma^*$, where $w_i$ is a known weighting term, in which case only one parameter needs to be estimated. \begin{figure} \caption{Density of $\tilde p_i$ under model \eqref{beta.model} for three different values of $p_i$.} \label{beta.plot} \end{figure} Next, we select a loss function for our estimator to minimize. One potential option would be to use a standard squared error loss, $L(\hat p_i) = E\left(p_i-\hat p_i\right)^2$. However, this loss function is not the most reasonable approach in this setting. Consider for example the event corresponding to a bond defaulting, or a patient dying during surgery. If the bond has junk status, or the surgery is highly risky, the true probability of default or death might be $p_i=0.26$, in which case an estimate of $\hat p_i=0.25$ would be considered very accurate. It is unlikely that an investor or patient would have made a different decision if they had instead been provided with the true probability of $0.26$. However, if the bond or surgery is considered very safe, we might provide an estimated probability of $\hat p_i=0.0001$, when the true probability is somewhat higher at $p_i=0.01$. The absolute error in the estimate is actually slightly lower in this case, but the patient or investor might well make a very different decision when given a $1\%$ probability of a negative outcome versus a one in ten thousand chance. In this sense, the error between $p_i$ and $\hat p_i$ {\it as a percentage of $\hat p_i$} is a far more meaningful measure of precision. In the first example we have a percentage error of only 4\%, while in the second instance the percentage error is almost 10,000\%, indicating a far more risky proposition. To capture this concept of relative error we introduce as our measure of accuracy a quantity we call the ``Excess Certainty'', which is defined as \begin{equation} \label{EC} \text{EC}(\hat p_i)= \frac{p_i-\hat p_i}{\min(\hat p_i,1-\hat p_i)}.
\end{equation} In the first example $\text{EC}=0.04$, while in the second example $\text{EC}=99$. Note that we include $\hat p_i$ in the denominator rather than $p_i$ because we wish to more heavily penalize settings where the estimated risk is far lower than the true risk (irrational exuberance) compared to the alternative where the true risk is much lower. Ideally, the excess certainty of any probability estimate should be very close to zero. Thus, we adopt the following expected loss function, \begin{equation} \label{loss.fn} L(\hat p_i, \tilde p_i)=E_{p_i}\left(\text{EC}(\hat p_i)^2|\tilde p_i\right), \end{equation} where the expectation is taken over $p_i$, conditional on $\tilde p_i$. Our aim is to produce an estimator $\hat p_i$ that minimizes \eqref{loss.fn} conditional on the observed value of~$\tilde p_i$. It is worth noting that if our goal were solely to remove selection bias then we could simply compute $E(p_i|\tilde p_i)$, which would be equivalent to minimizing $E\left[\left(p_i-\hat p_i\right)^2|\tilde p_i\right]$. Minimizing \eqref{loss.fn} generates a similar shrinkage estimator, which also removes the selection bias, but, as we discuss in the next section, it actually provides additional shrinkage to account for the fact that we wish to minimize the relative, or percentage, error. \subsection{The Oracle Estimator} We now derive the oracle estimator, $p_{i0}$, which minimizes the loss function given by \eqref{loss.fn}, \begin{equation} \label{argmin} p_{i0}=\arg\min_a E_{p_i}\left[\text{EC}(a)^2|\tilde p_i\right]. \end{equation} Our ECAP estimate aims to approximate the oracle. Theorem~\ref{oracle.thm} below provides a relatively simple closed-form expression for $p_{i0}$ and a bound on the minimum reduction in loss from using $p_{i0}$ relative to any other estimator.
\begin{theorem} \label{oracle.thm} For any distribution of $\tilde p_i$, \begin{equation} \label{oracle.general} p_{i0}=\begin{cases} \min\left(E(p_i|\tilde p_i)+ \frac{Var(p_i|\tilde p_i)}{E(p_i|\tilde p_i)}\,,\,0.5\right), & E(p_i|\tilde p_i)\le 0.5\\ \max\left(0.5\,,\,E(p_i|\tilde p_i)- \frac{Var(p_i|\tilde p_i)}{1-E(p_i|\tilde p_i)}\right), & E(p_i|\tilde p_i)>0.5. \end{cases} \end{equation} Furthermore, for any $p'_i\ne p_{i0}$, \begin{equation} \label{L.diff} L(p'_i,\tilde p_i) - L(p_{i0},\tilde p_i) \ge \begin{cases} E\left(p_i^2|\tilde p_i\right)\left[\frac{1}{p'_i}-\frac{1}{p_{i0}}\right]^2, &p_{i0}\le 0.5\\ E\left([1-p_i]^2|\tilde p_i\right)\left[\frac{1}{1-p'_i}-\frac{1}{1-p_{i0}}\right]^2,&p_{i0}\ge0.5. \end{cases} \end{equation} \end{theorem} \begin{remark} Note that both bounds in \eqref{L.diff} are valid when $p_{i0}=0.5$. \end{remark} We observe from this result that the oracle estimator starts with the conditional expectation $E(p_i|\tilde p_i)$ and then shifts the estimate towards $0.5$ by an amount $\frac{Var(p_i|\tilde p_i)}{\min(E(p_i|\tilde p_i),1-E(p_i|\tilde p_i))}$. However, if this would move the estimate past $0.5$ then the estimator simply becomes $0.5$. Figure~\ref{EC.plot} plots the average excess certainty \eqref{EC} from using $\tilde p_i$ to estimate $p_i$ (orange lines) and from using $p_{i0}$ to estimate $p_i$ (green lines), for three different values of $\gamma^*$. Recall that an ideal EC should be zero, but the observed values for $\tilde p_i$ are far larger, especially for higher values of $\gamma^*$ and lower values of $\tilde p_i$. Note that, as a consequence of the minimization of the expected squared loss function~\eqref{loss.fn}, the oracle is slightly conservative with a negative EC, which is due to the variance term in~\eqref{oracle.general}. 
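In code, the oracle formula \eqref{oracle.general} is a one-liner. The sketch below (ours, with $E(p_i|\tilde p_i)$ and $Var(p_i|\tilde p_i)$ taken as given inputs) makes the shift towards $0.5$ and the cap at $0.5$ explicit:

```python
import numpy as np

def oracle(mean, var):
    """Oracle p_i0 given mean = E(p | p_tilde) and var = Var(p | p_tilde)."""
    if mean <= 0.5:
        return min(mean + var / mean, 0.5)        # shift towards 0.5, cap at 0.5
    return max(0.5, mean - var / (1 - mean))

assert np.isclose(oracle(0.3, 0.01), 0.3 + 0.01 / 0.3)       # shift of Var/E
assert oracle(0.49, 0.05) == 0.5                             # capped at 0.5
assert np.isclose(oracle(0.7, 0.01), 1 - oracle(0.3, 0.01))  # symmetric about 0.5
```

When the variance correction would push the estimate past $0.5$, the oracle simply returns $0.5$, matching the two cases in \eqref{oracle.general}.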
\begin{figure} \caption{Average excess certainty \eqref{EC} of $\tilde p_i$ (orange) and of the oracle estimator $p_{i0}$ (green), for three different values of $\gamma^*$.} \label{EC.plot} \end{figure} It is worth noting that Theorem~\ref{oracle.thm} applies for any distribution of $\tilde p_i|p_i$ and does not rely on our model \eqref{beta.model}. If we further assume that \eqref{beta.model} holds, then Theorem~\ref{EandVar} provides explicit forms for $E(p_i|\tilde p_i)$ and $Var(p_i|\tilde p_i)$. \begin{theorem} \label{EandVar} Under~\eqref{beta.model}, \begin{eqnarray} \label{mui}E(p_i|\tilde p_i) &=&\mu_i \ \equiv \ \tilde p_i + \gamma^*\left[g^*(\tilde p_i)+1-2\tilde p_i\right]\\ \label{sigmai} Var(p_i|\tilde p_i)&=& \sigma^2_i\ \equiv \ \gamma^*\tilde p_i(1-\tilde p_i) + {\gamma^*}^2\tilde p_i(1-\tilde p_i)\left[{g^*}'(\tilde p_i)-2\right], \end{eqnarray} where $g^*(\tilde p_i) = \tilde p_i(1-\tilde p_i) v^*(\tilde p_i)$, $v^*(\tilde p_i)=\frac{\partial}{\partial \tilde p_i} \log f^*(\tilde p_i)$ is the score function of $\tilde p_i$ and $f^*(\tilde p_i)$ is the marginal density of $\tilde p_i$. \end{theorem} If we also assume that the distribution of $p_i$ is symmetric then further simplifications are possible. \begin{corollary} \label{cor.EandVar} If the prior distribution of $p_i$ is symmetric about $0.5$, then \begin{equation} \label{oracle} p_{i0}=\begin{cases} \min\left(E(p_i|\tilde p_i)+ \frac{Var(p_i|\tilde p_i)}{E(p_i|\tilde p_i)}\,,\,0.5\right), & \tilde p_i\le 0.5\\ \max\left(0.5\,,\,E(p_i|\tilde p_i)- \frac{Var(p_i|\tilde p_i)}{1-E(p_i|\tilde p_i)}\right), & \tilde p_i>0.5, \end{cases} \end{equation} \begin{equation} \label{g} g^*(0.5)=0,\quad \text{and} \quad g^*(\tilde p_i)=-g^*(1-\tilde p_i). \end{equation} \end{corollary} A particularly appealing aspect of Theorem~\ref{EandVar} and its corollary is that $g^*(\tilde p_i)$ is only a function of the marginal distribution of $\tilde p_i$, so that it can be estimated directly using the observed probabilities~$\tilde p_i$.
In particular, we do not need to make any assumptions about the distribution of~$p_i$ in order to compute $g^*(\tilde p_i)$. \subsection{Estimation} \label{score.sec} In order to estimate $p_{i0}$ we must form estimates for $g^*(\tilde p_i)$, its derivative ${g^*}'(t)$, and $\gamma^*$. \subsubsection{Estimation of $g$} \label{sec.estimation} Let $\hat g(\tilde p)$ represent our estimator of $g^*(\tilde p)$. Since $g^*(\tilde p_i)$ is a function of the marginal density $f^*(\tilde p_i)$, one could estimate $g^*(\tilde p_i)$ by $\tilde p_i(1-\tilde p_i)\hat f'(\tilde p_i)/\hat f(\tilde p_i)$, where $\hat f(\tilde p_i)$ and $\hat f'(\tilde p_i)$ are respectively estimates for the marginal density of $\tilde p_i$ and its derivative. However, this approach requires dividing by the estimated density function, which can produce a highly unstable estimate near the boundary of the unit interval, precisely the region we are most interested in. Instead we directly estimate $g^*(\tilde p)$ by choosing $\hat g(\tilde p)$ so as to minimize the risk function, defined as $R(g)=E[g(\tilde p)-g^*(\tilde p)]^2$ for every candidate function~$g$. The following result provides an explicit form for the risk. \begin{theorem} \label{risk.lemma} Suppose that model~\eqref{beta.model} holds, and the prior for~$p$ has a bounded density. Then, \begin{equation} R(g) = E g(\tilde p)^2+2E\left[ g(\tilde p)(1-2\tilde p)+\tilde p(1-\tilde p) g'(\tilde p)\right] +C\label{full.risk} \end{equation} for all bounded and differentiable functions~$g$, where~$C$ is a constant that does not depend on~$g$. \end{theorem} \begin{remark} We show in the proof of Theorem~\ref{risk.lemma} that $g^*$ is bounded and differentiable, so \eqref{full.risk} holds for $g=g^*$.
\end{remark} Theorem~\ref{risk.lemma} suggests that we can approximate the risk, up to an irrelevant constant, by \begin{equation} \label{emp.risk} \hat R(g) = \frac1n \sum_{i=1}^n g(\tilde p_i)^2 + 2 \frac1n \sum_{i=1}^n \left[ g(\tilde p_i)(1-2\tilde p_i)+\tilde p_i(1-\tilde p_i) g'(\tilde p_i)\right]. \end{equation} However, simply minimizing \eqref{emp.risk} would provide a poor estimate for $g^*(\tilde p)$ because, without any smoothness constraints, $\hat R(g)$ can be trivially minimized. Hence, we place a smoothness penalty on our criterion by minimizing \begin{equation} \label{risk.criterion} Q(g) = \hat R(g) + \lambda \int g''(\tilde p)^2d\tilde p, \end{equation} where $\lambda>0$ is a tuning parameter which adjusts the level of smoothness in $g(\tilde p)$. We show in our theoretical analysis in Section~\ref{theory.sec} (see the proof of Theorem~\ref{g.thm}) that, much as in the more standard curve-fitting setting, the solution to criteria of the form in \eqref{risk.criterion} can be well approximated using a natural cubic spline, which provides a computationally efficient approach to compute $g(\tilde p)$. Let ${\bf b}(x)$ represent the vector of basis functions for a natural cubic spline, with knots at $\tilde p_1, \ldots, \tilde p_n$, restricted to satisfy ${\bf b}(0.5)={\bf 0}$. Then, in minimizing $Q(g)$ we only need to consider functions of the form $g(\tilde p) = {\bf b}(\tilde p)^T\boldsymbol \eta$, where $\boldsymbol \eta$ is the vector of basis coefficients. Thus, \eqref{risk.criterion} can be re-expressed as \begin{equation} \label{qn.ncs.probs} Q_n(\boldsymbol \eta) = \frac1n \sum_{i=1}^n \boldsymbol \eta^T{\bf b}(\tilde p_i){\bf b}(\tilde p_i)^T \boldsymbol \eta + 2\frac1n \sum_{i=1}^n \left[(1-2\tilde p_i){\bf b}(\tilde p_i)^T+\tilde p_i(1-\tilde p_i) {\bf b}'(\tilde p_i)^T\right]\boldsymbol \eta + \lambda \boldsymbol \eta^T\Omega\boldsymbol \eta, \end{equation} where $\Omega=\int {\bf b}''(\tilde p){\bf b}''(\tilde p)^Td\tilde p$.
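For completeness, we note that the minimizer of \eqref{qn.ncs.probs} follows from the first-order condition $\nabla_{\boldsymbol \eta}Q_n(\boldsymbol \eta)={\bf 0}$, namely
\begin{equation*}
\frac2n\sum_{i=1}^n {\bf b}(\tilde p_i){\bf b}(\tilde p_i)^T\boldsymbol \eta
+\frac2n\sum_{i=1}^n \left[(1-2\tilde p_i){\bf b}(\tilde p_i)+\tilde p_i(1-\tilde p_i){\bf b}'(\tilde p_i)\right]
+2\lambda\Omega\boldsymbol \eta={\bf 0},
\end{equation*}
which, after multiplying through by $n/2$ and collecting the terms in $\boldsymbol \eta$, is a linear system whose solution is \eqref{ls.eta.probs}.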
Standard calculations show that \eqref{qn.ncs.probs} is minimized by setting \begin{equation} \label{ls.eta.probs} \hat{\boldsymbol\eta} = -\left(\sum_{i=1}^n{\bf b}(\tilde p_i){\bf b}(\tilde p_i)^T+n\lambda\Omega\right)^{-1} \sum_{i=1}^n \left[(1-2\tilde p_i){\bf b}(\tilde p_i)+\tilde p_i(1-\tilde p_i) {\bf b}'(\tilde p_i)\right]. \end{equation} If the prior distribution of $p_i$ is not assumed to be symmetric, then $g^*(\tilde p_i)$ should be directly estimated for $0\le \tilde p_i\le 1$. However, if the prior is believed to be symmetric this approach is inefficient, because it does not incorporate the identity $g^*(\tilde p_i)=-g^*(1-\tilde p_i)$. Hence, a superior approach involves flipping all of the $\tilde p_i>0.5$ across~$0.5$, thus converting them into $1-\tilde p_i$, and then using both the flipped and the unflipped~$\tilde p_i$ to estimate $g(\tilde p_i)$ between $0$ and $0.5$. Finally, the identity $\hat g(\tilde p_i)=-\hat g(1-\tilde p_i)$ can be used to define~$\hat g$ on $(0.5,1]$. This is the approach we use for the remainder of the paper. Equation~(\ref{ls.eta.probs}) allows us to compute estimates for $E(p_i|\tilde p_i)$ and $\text{Var}(p_i|\tilde p_i)$: \begin{eqnarray} \label{mu.hat}\hat \mu_i &=& \tilde p_i + \hat\gamma({\bf b}(\tilde p_i)^T\hat{\boldsymbol\eta}+1-2\tilde p_i) \\ \label{sigma.hat}\hat \sigma^2_i &=& \hat\gamma\tilde p_i(1-\tilde p_i)+ {\hat\gamma}^2\tilde p_i(1-\tilde p_i)[{\bf b}'(\tilde p_i)^T\hat{\boldsymbol\eta}-2]. \end{eqnarray} Equations~\eqref{mu.hat} and \eqref{sigma.hat} can then be substituted into \eqref{oracle} to produce the ECAP estimator $\hat p_i$. \subsubsection{Estimation of $\lambda$ and $\gamma^*$} \label{gamma.sec} In computing \eqref{mu.hat} and \eqref{sigma.hat} we need to provide estimates for $\gamma^*$ and $\lambda$. We choose $\lambda$ so as to minimize a cross-validated version of the estimated risk \eqref{emp.risk}.
In particular, we randomly partition the probabilities into $K$ roughly even groups: $G_1,\ldots, G_{K}$. Then, for given values of $\lambda$ and $k$, $\hat{\boldsymbol\eta}_{k\lambda}$ is computed via \eqref{ls.eta.probs}, with the probabilities in $G_k$ excluded from the calculation. We then compute the corresponding estimated risk on the probabilities in $G_k$: $$R_{k\lambda}=\sum_{i\in G_k} \hat h_{ik}^2 + 2\sum_{i\in G_k} \left[(1-2\tilde p_i)\hat h_{ik}+\tilde p_i(1-\tilde p_i) \hat h'_{ik}\right],$$ where $\hat h_{ik}={\bf b}(\tilde p_i)^T \hat{\boldsymbol\eta}_{k\lambda}$ and $\hat h'_{ik}={\bf b}'(\tilde p_i)^T \hat{\boldsymbol\eta}_{k\lambda}$. This process is repeated $K$ times for $k=1,\ldots, K$, and $$ R_\lambda=\frac1n\sum_{k=1}^K R_{k\lambda}$$ is computed as our cross-validated risk estimate. Finally, we choose $\hat \lambda = \arg\min_\lambda R_\lambda$. To estimate $\gamma^*$ we need a measure of the accuracy of $\tilde p_i$ as an estimate of $p_i$. In some cases that information may be available from previous analyses. For example, if the estimates~$\tilde p_i$ were obtained by fitting a logistic regression model, we could compute the standard errors on the estimated coefficients and hence form a variance estimate for each~$\tilde p_i$. We would estimate~$\gamma^*$ by matching the computed variance estimates to the expression~(\ref{pt.mean.var}) for the conditional variance under the ECAP model. Alternatively, we can use previously observed outcomes of $A_i$ to estimate $\gamma^*$. Suppose that we observe $$Z_i=\begin{cases}1 & A_i\text{ occurred},\\ 0&A_i\text{ did not occur}, \end{cases}$$ for $i= 1,\ldots, n$. Then a natural approach is to compute the (log) likelihood function for $Z_i$.
Namely, \begin{equation} \label{log.like} l_\gamma = \sum_{i:Z_i=1} \log(\hat p^\gamma_i) + \sum_{i:Z_i=0} \log(1-\hat p^\gamma_i), \end{equation} where $\hat p^\gamma_i$ is the ECAP estimate generated by substituting a particular value of $\gamma$ into~\eqref{mu.hat} and~\eqref{sigma.hat}. We then choose the value of~$\gamma$ that maximizes~\eqref{log.like}. As an example of this approach, consider the ESPN data recording probabilities of victory for various NCAA football teams throughout each season. To form an estimate for~$\gamma^*$ we can take the observed outcomes of the games from last season (or the first couple of weeks of this season if there are no previous games available), use these results to generate a set of $Z_i$, and then choose the~$\gamma$ that maximizes \eqref{log.like}. One could then form ECAP estimates for future games during the season, possibly updating the~$\gamma$ estimate as new games are played. \subsection{Large sample results} \label{theory.sec} In this section we investigate the large sample behavior of the ECAP estimator. More specifically, we show that, under smoothness assumptions on the function~$g^*$, the ECAP adjusted probabilities are consistent estimators of the corresponding oracle probabilities, defined in~\eqref{argmin}. We establish an analogous result for the corresponding values of the loss function, defined in~\eqref{loss.fn}. In addition to demonstrating consistency we also derive the rates of convergence. Our method of proof takes advantage of the theory of empirical processes; however, the corresponding arguments go well beyond a simple application of the existing results. We let~$f^*$ denote the marginal density of the observed $\tilde p_i$ and define the $L_2(\tilde P)$ norm of a given function~$u(\tilde p)$ as $\|u\|=[\int_0^1 u^2(\tilde p)f^*(\tilde p)d\tilde p]^{1/2}$. We denote the corresponding empirical norm, $[(1/n)\sum_{i=1}^n u^2(\tilde p_i)]^{1/2}$, by $\|u\|_n$.
To simplify the presentation of the results, we define \begin{equation*} r_n=n^{-4/7}\lambda_n^{-1}+n^{-2/7}+\lambda_n \qquad \text{and} \qquad s_n=1+n^{-4/7}\lambda_n^{-2}. \end{equation*} We write~$\hat g$ for the minimizer of criterion~(\ref{risk.criterion}) over all natural cubic spline functions~$g$ that correspond to the sequence of~$n$ knots located at the observed~$\tilde p_i$. For concreteness, we focus on the case where criterion~(\ref{risk.criterion}) is computed over the entire interval $[0,1]$. However, all of the results in this section continue to hold if $\hat g$ is determined by only computing the criterion over $[0,0.5]$, according to the estimation approach described in Section~\ref{sec.estimation}. The following result establishes consistency and rates of convergence for $\hat g$ and $\hat g'$. \begin{theorem} \label{g.thm} If $g^*$ is twice continuously differentiable on $[0,1]$, $f^*$ is bounded away from zero and $n^{-8/21}\ll\lambda_n\ll 1$, then \begin{equation*} \|\hat g - g^*\|_n = O_p\big(r_n\big), \qquad \|\hat g' - {g^*}'\|_n= O_p\big(\sqrt{r_ns_n}\big). \end{equation*} The above bounds also hold for the $\|\cdot\|$ norm. \end{theorem} \begin{remark} The assumption $n^{-8/21}\ll\lambda_n\ll 1$ implies that the error bounds for $\hat g$ and $\hat g'$ are of order $o_p(1)$. \end{remark} When $\lambda_n\asymp n^{-2/7}$, Theorem~\ref{g.thm} yields an $n^{-2/7}$ rate of convergence for $\hat{g}$. This rate matches the optimal rate of convergence for estimating the derivative of a density under the corresponding smoothness conditions \citep{stone1980optimal}. Given a value $\tilde p$ in the interval $(0,1)$, we define the ECAP estimator, $\hat p=\hat p(\tilde p)$, by replacing $\tilde p_i$, $\gamma^*$, and $g$ with~$\tilde p$, $\hat\gamma$ and~$\hat g$, respectively, in the expression for the oracle estimator provided by formulas~(\ref{mui}), (\ref{sigmai}) and~(\ref{oracle}).
Thus, we treat~$\hat p$ as a random function of~$\tilde p$, where the randomness comes from the fact that $\hat p$ depends on the training sample of the observed probabilities~$\tilde p_i$. By analogy, we define $p_0$ via~(\ref{oracle}), with~$\tilde p_i$ replaced by~$\tilde p$, and view~$p_0$ as a (deterministic) function of~$\tilde p$. We define the function $W_0(\tilde p)$ as the expected loss for the oracle estimator: \begin{equation*} W_0(\tilde p)=E_p\big[EC\left(p_0(\tilde p)\right)^2 |\tilde p\big], \end{equation*} where the expected value is taken over the true~$p$ given the corresponding observed probability~$\tilde p$. Similarly, we define the random function $\widehat W(\tilde p)$ as the expected loss for the ECAP estimator: \begin{equation*} \widehat W(\tilde p)=E_p\big[EC\left(\hat p(\tilde p)\right)^2 |\tilde p\big], \end{equation*} where the expected value is again taken over the true~$p$ given the corresponding~$\tilde p$. The randomness in the function $\widehat W(\tilde p)$ is due to the dependence of~$\hat p$ on the training sample $\tilde p_1,...,\tilde p_n$. To state the asymptotic results for $\hat p$ and $\hat W$, we implement a minor technical modification in the estimation of the conditional variance via formula (\ref{sigmai}). After computing the value of $\hat\sigma^2$, we set it equal to $\max\{\hat\sigma^2,c\sqrt{r_ns_n}\}$, where~$c$ is allowed to be any fixed positive constant. This ensures that, as the sample size grows, $\hat\sigma^2$ does not approach zero too fast. We note that this technical modification is only used to establish consistency of $\widehat W(\tilde p)$ in the next theorem; all the other results in this section hold both with and without this modification. 
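The truncation just described is straightforward to apply in practice; a minimal sketch, in which the constant $c$ and the choice of $\lambda_n$ are illustrative assumptions:

```python
import math

def truncated_sigma2(sigma2_hat, n, lam, c=1.0):
    """Apply the variance floor max(sigma2_hat, c * sqrt(r_n * s_n)),
    with r_n and s_n as defined in the text."""
    r_n = n ** (-4 / 7) / lam + n ** (-2 / 7) + lam
    s_n = 1 + n ** (-4 / 7) / lam ** 2
    return max(sigma2_hat, c * math.sqrt(r_n * s_n))
```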
\begin{theorem} \label{cons.thm} If $g^*$ is twice continuously differentiable on $[0,1]$, $f^*$ is bounded away from zero, $n^{-8/21}\ll\lambda_n\ll 1$ and $|\hat \gamma-\gamma^*|=o_p(1)$, then \begin{equation*} \|\hat{p} - p_0\|= o_p(1) \qquad\text{and}\qquad \|\hat{p} - p_0\|_n = o_p(1). \end{equation*} If, in addition, $|\hat \gamma-\gamma^*|=O_p(\sqrt{r_ns_n})$, then \begin{equation*} \int\limits_0^1\big|\widehat{W}(\tilde p) - W_0(\tilde p)\big|f^*(\tilde p)d\tilde p = o_p(1) \qquad\text{and}\qquad \frac1n\sum_{i=1}^n\big|\widehat{W}(\tilde p_i) - W_0(\tilde p_i)\big| = o_p(1). \end{equation*} \end{theorem} The next result provides the rates of convergence for $\hat p$ and $\hat W$. \begin{theorem} \label{rate.thm} If $g^*$ is twice continuously differentiable on $[0,1]$, $f^*$ is bounded away from zero, $n^{-8/21}\ll\lambda_n\ll 1$ and $|\hat \gamma-\gamma^*|=O_p\big(\sqrt{r_ns_n}\big)$, then \begin{eqnarray*} \int\limits_{\epsilon}^{1-\epsilon}\big|\hat{p}(\tilde p) - p_0(\tilde p)\big|^2f^*(\tilde p)d\tilde p = O_p\big(r_ns_n\big), &\quad& \int\limits_{\epsilon}^{1-\epsilon}\big|\widehat{W}(\tilde p) - W_0(\tilde p)\big|f^*(\tilde p)d\tilde p = O_p\big(r_ns_n\big),\\ \\ \sum_{i:\, \epsilon\le\tilde p_i\le 1-\epsilon}\frac1n\big|\hat{p}(\tilde p_i) - p_0(\tilde p_i)\big|^2 = O_p\big(r_ns_n\big)&\,\quad\text{and}\,\quad& \sum_{i:\, \epsilon\le\tilde p_i\le 1-\epsilon}\frac1n\big|\widehat{W}(\tilde p_i) - W_0(\tilde p_i)\big| = O_p\big(r_ns_n\big), \end{eqnarray*} for each fixed positive~$\epsilon$. \end{theorem} \begin{remark} The assumption $n^{-8/21}\ll\lambda_n\ll 1$ ensures that all the error bounds are of order $o_p(1)$. \end{remark} In Theorem~\ref{rate.thm} we bound the integration limits away from zero and one, because the rate of convergence changes as~$\tilde p$ approaches those values. However, we note that~$\epsilon$ can be set to an arbitrarily small value. The optimal rate of convergence for $\widehat{W}$ is provided in the following result.
\begin{corollary}\label{W.rate.crl} Suppose that $\lambda_n$ decreases at the rate~$n^{-2/7}$ and $|\hat \gamma-\gamma^*|=O_p(n^{-1/7})$. If~$f^*$ is bounded away from zero and~$g^*$ is twice continuously differentiable on $[0,1]$, then \begin{equation*} \int\limits_{\epsilon}^{1-\epsilon}\big|\widehat{W}(\tilde p) - W_0(\tilde p)\big|d\tilde p = O_p\big(n^{-2/7}\big)\qquad\text{and}\qquad \sum_{i:\, \epsilon\le\tilde p_i\le 1-\epsilon}\frac1n\big|\widehat{W}(\tilde p_i) - W_0(\tilde p_i)\big|= O_p\big(n^{-2/7}\big), \end{equation*} for every positive~$\epsilon$. \end{corollary} Corollary~\ref{W.rate.crl} follows directly from Theorem~\ref{rate.thm} by balancing out the components in the expression for~$r_n$. \section{ECAP Extensions} \label{extension.sec} In this section we consider two possible extensions of \eqref{beta.model}, the model for $\tilde p_i$. In particular, in the next subsection we discuss the setting where~$\tilde p_i$ can no longer be considered an unbiased estimator for~$p_i$, while in the following subsection we suggest a generalization of the beta model. \subsection{Incorporating Bias in $\tilde p_i$} \label{biased.sec} So far, we have assumed that $\tilde p_i$ is an unbiased estimate for $p_i$. In practice probability estimates~$\tilde p_i$ may exhibit some systematic bias. For example, in Section~\ref{emp.sec} we examine probability predictions from the \href{https://fivethirtyeight.com}{FiveThirtyEight.com} website on US House, Senate, and governors' races during the 2018 midterm election. Comparing the actual election results with the predicted probabilities of a candidate being elected reveals clear evidence of bias in the estimates \citep{Silver18}. In particular, the leading candidate won many more races than would be suggested by the probability estimates.
This indicates that the \href{https://fivethirtyeight.com}{FiveThirtyEight.com} probabilities were overly conservative, i.e., that in comparison to $p_i$ the estimate~$\tilde p_i$ was generally closer to $0.5$; for example, $E(\tilde p_i|p_i)<p_i$ when $p_i>0.5$. In this section we generalize \eqref{beta.model} to model situations where $E(\tilde p_i|p_i)\ne p_i$. To achieve this goal we replace \eqref{beta.model} with \begin{equation} \label{bias.alpha} \tilde p_i|p_i \sim Beta(\alpha_i, \beta_i),\quad\text{where} \quad p_i=h_\theta(\alpha_i\gamma^*)=h_\theta(1-\beta_i\gamma^*), \end{equation} $h_\theta(\cdot)$ is a prespecified function, and $\theta$ is a parameter which determines the level of bias of $\tilde p_i$. In particular, \eqref{bias.alpha} implies that for any invertible $h_\theta$, \begin{equation} p_i= h_\theta(E(\tilde p_i|p_i)), \end{equation} so that if $h_\theta(x)=x$, i.e., $h_\theta(\cdot)$ is the identity function, then \eqref{bias.alpha} reduces to \eqref{beta.model}, and $\tilde p_i$ is an unbiased estimate for $p_i$. To produce a valid probability model, $h_\theta(\cdot)$ needs to satisfy several criteria: \begin{enumerate} \item $h_0(x)=x$, so that \eqref{bias.alpha} reduces to \eqref{beta.model} when $\theta=0$. \item $h_\theta(1-x)=1-h_\theta(x)$, ensuring that the probabilities of events~$A_i$ and~$A_i^c$ sum to~$1$. \item $h_\theta(x)=x$ for $x=0, x=0.5$ and $x=1$. \item $h_\theta(x)$ is invertible for values of $\theta$ in a region around zero, so that $E(\tilde p_i|p_i)$ is unique. \end{enumerate} The simplest polynomial function that satisfies all these constraints is $$h_\theta(x) = (1-0.5\theta)x - \theta[x^3 - 1.5 x^2],$$ which is invertible for $-4 \le \theta \le 2.$ Note that for $\theta=0$, we have $h_0(x)=x$, which corresponds to the unbiased model~\eqref{beta.model}.
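The required properties of this polynomial choice of $h_\theta$ are easy to verify numerically; a short self-contained check:

```python
def h(x, theta):
    """h_theta(x) = (1 - 0.5*theta)*x - theta*(x^3 - 1.5*x^2)."""
    return (1 - 0.5 * theta) * x - theta * (x ** 3 - 1.5 * x ** 2)

grid = [i / 100 for i in range(101)]
for theta in (-4, -1, 0, 2):
    # Symmetry: h_theta(1 - x) = 1 - h_theta(x).
    assert all(abs(h(1 - x, theta) - (1 - h(x, theta))) < 1e-12 for x in grid)
    # Fixed points at 0, 0.5 and 1.
    assert all(abs(h(x, theta) - x) < 1e-12 for x in (0.0, 0.5, 1.0))
# theta = 0 recovers the identity, i.e. the unbiased model.
assert all(abs(h(x, 0) - x) < 1e-12 for x in grid)
```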
However, if $\theta>0$, then $\tilde p_i$ tends to overestimate small $p_i$ and underestimate large $p_i$, so the probability estimates are overly conservative. Alternatively, when $\theta<0$, $\tilde p_i$ tends to underestimate small $p_i$ and overestimate large $p_i$, so the probability estimates exhibit excess certainty. Figure~\ref{ebias.plot} provides examples of $E(\tilde p_i|p_i)$ for three different values of $\theta$, with the green line representing probabilities resulting in excess certainty, the orange line overly conservative probabilities, and the black line unbiased probabilities. \begin{figure} \caption{Examples of $E(\tilde p_i|p_i)$ for three different values of $\theta$: excess certainty (green), overly conservative (orange), and unbiased (black).} \label{ebias.plot} \end{figure} One of the appealing aspects of this model is that the ECAP oracle \eqref{oracle} can still be used to generate an estimator for $p_i$. The only change is in how $E(p_i|\tilde p_i)$ and $Var(p_i|\tilde p_i)$ are computed. The following result allows us to generalize Theorem~\ref{EandVar} to the biased setting to compute $E(p_i|\tilde p_i)$ and $Var(p_i|\tilde p_i)$. \begin{theorem} \label{bias.thm} Suppose that model~\eqref{bias.alpha} holds, $p_i$ has a bounded density, and $\mu_i$ and $\sigma_i^2$ are respectively defined as in~\eqref{mui} and~\eqref{sigmai}. Then, \begin{eqnarray} \label{bias.e} E(p_i|\tilde p_i) &=& \mu_i+0.5\theta\left[3\sigma_i^2-6\mu_i\sigma_i^2+3\mu_i^2-\mu_i-2\mu_i^3\right]+O\big(\theta{\gamma^*}^{3/2}\big)\\ \label{bias.v}Var(p_i|\tilde p_i) &=& (1-0.5\theta)^2\sigma_i^2 +\theta\sigma_i^2\big[3\mu_i(1-\mu_i)(3\theta\mu_i(1-\mu_i)-0.5\theta+1) \big] +O\big(\theta{\gamma^*}^{3/2}\big). \end{eqnarray} \end{theorem} The remainder terms in the above approximations are of smaller order than the leading terms when~$\gamma^*$ is small, which is typically the case in practice. As we demonstrate in the proof of Theorem~\ref{bias.thm}, explicit expressions can be provided for the remainder terms.
However, the approximation error involved in estimating these expressions is likely to be much higher than any bias from excluding them. Hence, we ignore these terms when estimating $E(p_i|\tilde p_i)$ and $Var(p_i|\tilde p_i)$: \begin{eqnarray} \widehat{E(p_i|\tilde p_i)} &=& \hat\mu_i+0.5\theta\left[3\hat\sigma_i^2-6\hat\mu_i\hat\sigma_i^2+3\hat\mu_i^2-\hat\mu_i-2\hat\mu_i^3\right]\\ \widehat{Var(p_i|\tilde p_i)} &=& (1-0.5\theta)^2\hat\sigma_i^2 +\theta\hat\sigma_i^2\big[3\hat\mu_i(1-\hat\mu_i)(3\theta\hat\mu_i(1-\hat\mu_i)-0.5\theta+1) \big]. \end{eqnarray} The only remaining issue in implementing this approach involves producing an estimate for~$\theta$. However, this can be achieved using exactly the same maximum likelihood approach as the one used to estimate~$\gamma^*$, which is described in Section~\ref{gamma.sec}. Thus, we now choose both $\theta$ and $\gamma$ to jointly maximize the likelihood function \begin{equation} \label{bias.log.like} l_{\theta,\gamma} = \sum_{i:Z_i=1} \log(\hat p^{\theta,\gamma}_{i}) + \sum_{i:Z_i=0} \log(1-\hat p^{\theta,\gamma}_{i}), \end{equation} where $\hat p^{\theta,\gamma}_{i}$ is the bias corrected ECAP estimate generated by substituting in particular values of~$\gamma$ and~$\theta$. In all other respects, the bias corrected version of ECAP is implemented in an identical fashion to the unbiased version. \subsection{Mixture Distribution} \label{sec.mixture} We now consider another possible extension of \eqref{beta.model}, where we believe that~$\tilde p_i$ is an unbiased estimator for~$p_i$ but find the beta model assumption to be unrealistic. In this setting one could potentially model~$\tilde p_i$ using a variety of members of the exponential family. 
However, one appealing alternative is to extend \eqref{beta.model} to a mixture of beta distributions: \begin{equation} \label{beta.mixture.model} \tilde p_i|p_i \sim \sum_{k=1}^K w_k Beta(\alpha_{ik}, \beta_{ik}),\quad\text{where} \quad \alpha_{ik}=\frac{c_k p_i}{\gamma^*}, \quad\beta_{ik}=\frac{1-c_k p_i}{\gamma^*}, \end{equation} and $w_k$ and $c_k$ are predefined weights such that $\sum_k w_k=1$ and $\sum_k w_kc_k=1$. Note that \eqref{beta.model} is a special case of \eqref{beta.mixture.model} with $K=w_1=c_1=1$. As $K$ grows, the mixture model can provide as flexible a model as desired, but it also has a number of other appealing characteristics. In particular, under this model it is still the case that $E(\tilde p_i|p_i)=p_i$. In addition, Theorem~\ref{EandVar.mixture} demonstrates that simple closed form solutions still exist for $E(p_i|\tilde p_i)$ and $Var(p_i|\tilde p_i)$, and, hence, also the oracle ECAP estimator $p_{i0}$. \begin{theorem} \label{EandVar.mixture} Under~\eqref{beta.mixture.model}, \begin{eqnarray} \label{mui.mixture}E(p_i|\tilde p_i) &=&\mu_i\sum_{k=1}^K \frac{w_k}{c_k}\\ \label{sigmai.mixture} Var(p_i|\tilde p_i)&=& (\sigma_i^2+\mu_i^2)\sum_{k=1}^K \frac{w_k}{c_k^2}-\mu_i^2\left(\sum_{k=1}^K\frac{w_k}{c_k}\right)^2, \end{eqnarray} where $\mu_i$ and $\sigma_i^2$ are defined in \eqref{mui} and \eqref{sigmai}. \end{theorem} The generalized ECAP estimator can thus be generated by substituting $\hat \mu_i$ and $\hat \sigma^2_i$, given by formulas~(\ref{mu.hat}) and~(\ref{sigma.hat}), into~\eqref{mui.mixture} and~\eqref{sigmai.mixture}. The only additional complication involves computing values for $w_k$ and $c_k$. For settings with a large enough sample size, this could be achieved using a variant of the maximum likelihood approach discussed in Section~\ref{gamma.sec}. However, we do not explore that approach further in this paper. 
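The closed forms in Theorem~\ref{EandVar.mixture} are immediate to implement; a minimal sketch (function name is illustrative), including a sanity check that $K=1$ with $w_1=c_1=1$ recovers the single-beta formulas:

```python
def mixture_moments(mu, sigma2, w, c):
    """E(p | p~) and Var(p | p~) under the beta-mixture model, given the
    single-beta quantities mu and sigma2, weights w and constants c."""
    s1 = sum(wk / ck for wk, ck in zip(w, c))
    s2 = sum(wk / ck ** 2 for wk, ck in zip(w, c))
    return mu * s1, (sigma2 + mu ** 2) * s2 - mu ** 2 * s1 ** 2

# K = 1 with w_1 = c_1 = 1 reduces to the single-beta model:
mean, var = mixture_moments(0.3, 0.01, [1.0], [1.0])
# mean equals mu and var equals sigma2 (up to floating-point roundoff)
```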
\section{Simulation Results} \label{sim.sec} In Section~\ref{sec.unbiased} we compare ECAP to competing methods under the assumption of unbiasedness in $\tilde p_i$. We further extend this comparison to the setting where $\tilde p_i$ represents a potentially biased estimate in Section~\ref{sec.biased}. \subsection{Unbiased Simulation Results} \label{sec.unbiased} In this section our data consists of $n=$ 1,000 triplets $(p_i,\tilde p_i, Z_i)$ for each simulation. The $p_i$ are generated from one of three possible prior distributions: Beta$(4,4)$, an equal mixture of Beta$(6,2)$ and Beta$(2,6)$, or Beta$(1.5,1.5)$. The corresponding density functions are displayed in Figure~\ref{fig:figure3}. \begin{figure} \caption{Distributions of $p$ used in the simulation} \label{fig:figure3} \end{figure} Recall that ECAP models $\tilde p_i$ as coming from a beta distribution, conditional on $p_i$. However, in practice there is no guarantee that the observed data will exactly follow this distribution. Hence, we generate the observed data according to: \begin{equation} \label{sim.model} \tilde p_i = p_i + p^q_i (\tilde p^{\text{o}}_i-p_i), \end{equation} where $\tilde p_i^{\text{o}}|p_i\sim$ Beta$(\alpha,\beta)$ and $q$ is a tuning parameter. In particular, for $q=0$ \eqref{sim.model} generates observations directly from the ECAP model, while larger values of $q$ provide a greater deviation from the beta assumption. In practice we found that setting $q=0$ can result in $\tilde{p}$'s that are so small they are effectively zero ($\tilde p_i = 10^{-20}$, for example). ECAP is not significantly impacted by these probabilities but, as we show, other approaches can perform extremely poorly in this scenario. Setting $q>0$ prevents pathological scenarios and allows us to more closely mimic what practitioners will see in real life. We found that $q=0.05$ typically gives a reasonable amount of dispersion, so we consider settings where either $q=0$ or $q=0.05$.
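The data-generating step \eqref{sim.model} can be sketched as follows (assuming the parameterization $\alpha=p/\gamma^*$, $\beta=(1-p)/\gamma^*$ of the beta model; the seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_tilde_p(p, gamma, q):
    """tilde_p = p + p^q * (tilde_p_o - p), tilde_p_o | p ~ Beta(alpha, beta).
    q = 0 draws exactly from the beta model; q > 0 perturbs the draws."""
    tilde_o = rng.beta(p / gamma, (1 - p) / gamma)
    return p + p ** q * (tilde_o - p)

p = rng.beta(4, 4, size=1000)                       # Beta(4,4) prior on p_i
tilde_p = generate_tilde_p(p, gamma=0.005, q=0.05)  # observed probabilities
z = rng.binomial(1, p)                              # outcomes Z_i ~ Bernoulli(p_i)
```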
We also consider different levels of the conditional variance for~$\tilde p_i$, by taking~$\gamma^*$ as either~$0.005$ or~$0.03$. Finally, we generate $Z_i$, representing whether event $A_i$ occurs, from a Bernoulli distribution with probability $p_i$. We implement the following five approaches: the {\it Unadjusted} method, which simply uses the original probability estimates~$\tilde p_i$, two implementations of the proposed {\it ECAP} approach (ECAP Opt and ECAP MLE), and two versions of the James-Stein approach (JS Opt and JS MLE). For the proposed ECAP methods, we select~$\lambda$ via the cross-validation procedure in Section~\ref{gamma.sec}. ECAP Opt is an oracle-type implementation of the ECAP methodology, in which we select~$\gamma$ to minimize the average expected loss, defined in \eqref{loss.fn}, over the training data. Alternatively, ECAP MLE makes use of the $Z_i$'s and estimates~$\gamma^*$ using the maximum likelihood approach described in Section~\ref{gamma.sec}. The James-Stein method we use is similar to its traditional formulation. In particular, the estimated probability is computed using \begin{equation} \label{js.sim} \hat p^{JS}_i = \bar{\tilde p} + (1-c)\left(\tilde p_i - \bar{\tilde p}\right), \end{equation} where $\bar{\tilde p}=\frac{1}{n}\sum_{j=1}^n \tilde p_j$ and $c$ is a tuning parameter chosen to optimize the estimates.\footnote{To maintain consistency with ECAP we flip all $\tilde p_i>0.5$ across $0.5$ before forming $\hat p^{JS}_i$ and then flip the estimate back.} Equation \eqref{js.sim} is a convex combination of $\tilde p_i$ and the average observed probability $\bar{\tilde p}$. The JS Opt implementation selects~$c$ to minimize the average expected loss in the same fashion as for ECAP Opt, while the JS MLE implementation selects~$c$ using the maximum likelihood approach described in Section~\ref{gamma.sec}.
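For reference, the James-Stein adjustment \eqref{js.sim} is a one-line computation (the flipping across $0.5$ described in the footnote is omitted here for brevity):

```python
import numpy as np

def js_estimate(tilde_p, c):
    """Shrink each tilde_p_i towards the overall mean by a factor c."""
    p_bar = tilde_p.mean()
    return p_bar + (1 - c) * (tilde_p - p_bar)

probs = np.array([0.1, 0.5, 0.9])
# c = 0 leaves the probabilities unchanged; c = 1 collapses them to the mean.
```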
Note that ECAP Opt and JS Opt represent optimal situations that can not be implemented in practice because they require knowledge of the true distribution of $p_i$. \begin{table}[t] \captionof{table}{Average expected loss for different methods over multiple unbiased simulation scenarios. Standard errors are provided in parentheses.} \label{tab:title} \begin{center} {\small \begin{tabular}{c|c||l|l|l|l} $\gamma^*$ & \textbf{q} & \textbf{Method Type} & $\text{Beta}(4, 4)$ & \begin{tabular}[c]{@{}l@{}}$0.5$*Beta($6$,$2$) +\\ $0.5$*Beta($2$,$6$)\end{tabular} & $\text{Beta}(1.5, 1.5)$ \\ \hline \multirow{10}{*}{0.005} & \multirow{5}{*}{0} & Unadjusted & 0.0116 (0.0001) & 44.9824 (43.7241) & 3.9$\times 10^{12}$ (3.9$\times 10^{12}$) \\ \cline{3-6} & & ECAP Opt & 0.0095 (0.0001) & 0.0236 (0.0002) & 0.0197 (0.0001) \\ \cline{3-6} & & JS Opt & 0.0100 (0.0001) & 0.0241 (0.0002) & 0.0204 (0.0002) \\ \cline{3-6} & & ECAP MLE & 0.0109 (0.0002) & 0.0302 (0.0006) & 0.0263 (0.0007) \\ \cline{3-6} & & JS MLE & 0.0121 (0.0003) & 1.1590 (0.8569) & 4.8941 (4.7526) \\ \cline{2-6} & \multirow{5}{*}{0.05} & Unadjusted & 0.0100 (0.0001) & 0.0308 (0.0006) & 0.0273 (0.0006) \\ \cline{3-6} & & ECAP Opt & 0.0085 (0.0000) & 0.0196 (0.0001) & 0.0166 (0.0001) \\ \cline{3-6} & & JS Opt & 0.0090 (0.0000) & 0.0201 (0.0001) & 0.0172 (0.0001) \\ \cline{3-6} & & ECAP MLE & 0.0098 (0.0002) & 0.0238 (0.0005) & 0.0197 (0.0004) \\ \cline{3-6} & & JS MLE & 0.0105 (0.0002) & 0.0265 (0.0006) & 0.0245 (0.0007) \\ \hline \hline \multirow{10}{*}{0.03} & \multirow{5}{*}{0} & Unadjusted & 2.1$\times 10^{8}$ (2.1$\times 10^{8}$) & 2.4$\times 10^{14}$ (1.6$\times 10^{14}$) & 1.6$\times 10^{15}$ (5.5$\times 10^{14}$) \\ \cline{3-6} & & ECAP Opt & 0.0391 (0.0002) & 0.0854 (0.0004) & 0.0740 (0.0004) \\ \cline{3-6} & & JS Opt & 0.0537 (0.0002) & 0.0986 (0.0005) & 0.0899 (0.0005) \\ \cline{3-6} & & ECAP MLE & 0.0452 (0.0010) & 0.1607 (0.0187) & 0.1477 (0.0187) \\ \cline{3-6} & & JS MLE & 0.0636 (0.0019) & 1.4$\times 
10^{13}$ (1.4$\times 10^{13}$) & 1.2$\times 10^{14}$ (1.1$\times 10^{14}$) \\ \cline{2-6} & \multirow{5}{*}{0.05} & Unadjusted & 0.0887 (0.0010) & 0.3373 (0.0047) & 0.2780 (0.0043) \\ \cline{3-6} & & ECAP Opt & 0.0364 (0.0002) & 0.0765 (0.0004) & 0.0665 (0.0004) \\ \cline{3-6} & & JS Opt & 0.0488 (0.0002) & 0.0874 (0.0005) & 0.0801 (0.0005) \\ \cline{3-6} & & ECAP MLE & 0.0411 (0.0008) & 0.1035 (0.0050) & 0.0896 (0.0036) \\ \cline{3-6} & & JS MLE & 0.0558 (0.0011) & 0.1213 (0.0066) & 0.1235 (0.0071) \\ \end{tabular}} \end{center} \end{table} In each simulation run we generate both training and test data sets. Each method is fit on the training data. We then calculate $EC(\hat p_i)^2$ for each point in the test data and average over these observations. The results for the three prior distributions, two values of $\gamma^*$, and two values of $q$, averaged over 100 simulation runs, are reported in Table~\ref{tab:title}. Since the ECAP Opt and JS Opt approaches both represent oracle-type methods, they should be compared with each other. The ECAP Opt method statistically significantly outperforms its JS counterpart in each of the twelve settings, with larger improvements in the noisy setting where $\gamma^*=0.03$. The ECAP MLE method is statistically significantly better than the corresponding JS approach in all but four settings. However, those four settings correspond to $q=0$ and actually represent situations where JS MLE has failed because it has extremely large excess certainty, which impacts both the mean and standard error. Alternatively, the performance of the ECAP approach remains stable even in the presence of extreme outliers. Similarly, the ECAP MLE approach statistically significantly outperforms the Unadjusted approach, often by large amounts, except for the five settings with large outliers, which result in extremely bad average performance for the latter method.
\subsection{Biased Simulation} \label{sec.biased} In this section we extend the results to the setting where the observed probabilities may be biased, i.e., $E(\tilde p_i|p_i)\ne p_i$. To do this we generate $\tilde p_i$ according to \eqref{bias.alpha} using four different values for $\theta$: $\{-3,-1,0,2\}$. Recall that $\theta<0$ corresponds to anti-conservative data, where~$\tilde p_i$ tends to be too close to $0$ or $1$, $\theta=0$ represents unbiased observations, and $\theta>0$ corresponds to conservative data, where~$\tilde p_i$ tends to be too far from $0$ or $1$. In all other respects our data is generated in an identical fashion to that of the unbiased setting.\footnote{Because the observed probabilities are now biased, we replace $p_i$ in \eqref{sim.model} with $E(\tilde p_i|p_i)$.} To illustrate the biased setting we focus on the setting with $q=0.05$ and $\gamma^*=0.005$. We also increased the sample size to $n=$ 5,000 because of the increased difficulty of the problem. The two ECAP implementations now require us to estimate three parameters: $\lambda$, $\gamma$ and $\theta$. We estimate $\lambda$ in the same fashion as previously discussed, while $\gamma$ and $\theta$ are now chosen over a two-dimensional grid of values, with $\theta$ restricted to lie between $-4$ and $2$. The two JS methods remain unchanged.
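The two-dimensional grid search for $(\gamma,\theta)$ can be sketched as follows, where \texttt{ecap\_fn} is a stand-in (an assumption of this sketch) for the routine that returns the bias-corrected ECAP estimates for given parameter values:

```python
import numpy as np

def grid_mle(z, ecap_fn, gammas, thetas):
    """Choose (gamma, theta) maximizing the Bernoulli log-likelihood
    of the observed outcomes z under the corresponding ECAP estimates."""
    best, best_ll = None, -np.inf
    for g in gammas:
        for t in thetas:
            p_hat = np.clip(ecap_fn(g, t), 1e-10, 1 - 1e-10)
            ll = np.sum(z * np.log(p_hat) + (1 - z) * np.log(1 - p_hat))
            if ll > best_ll:
                best, best_ll = (g, t), ll
    return best
```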
\begin{table}[t] \centering \captionof{table}{Average expected loss for different methods over multiple biased simulation scenarios.} \label{table:biasedtable} {\small \begin{tabular}{l|l||l|l|l} \textbf{} & \textbf{Method Type} & Beta($4$,$4$) & \begin{tabular}[c]{@{}l@{}}$0.5$*Beta($6$,$2$) +\\ $0.5$*Beta($2$,$6$)\end{tabular} & Beta($1.5$, $1.5$) \\ \hline \hline \multirow{5}{*}{$\theta=-3$} & Unadjusted & 0.1749 (0.0005) & 0.7837 (0.0025) & 0.6052 (0.0030) \\ \cline{2-5} & ECAP Opt & 0.0019 (0.0000) & 0.0109 (0.0000) & 0.0086 (0.0000) \\ \cline{2-5} & JS Opt & 0.0609 (0.0002) & 0.2431 (0.0005) & 0.1526 (0.0003) \\ \cline{2-5} & ECAP MLE & 0.0026 (0.0001) & 0.0126 (0.0001) & 0.0100 (0.0001) \\ \cline{2-5} & JS MLE & 0.0633 (0.0003) & 0.2712 (0.0014) & 0.1707 (0.0011) \\ \hline \multirow{5}{*}{$\theta=-1$} & Unadjusted & 0.0319 (0.0001) & 0.1389 (0.0007) & 0.1130 (0.0008) \\ \cline{2-5} & ECAP Opt & 0.0051 (0.0000) & 0.0150 (0.0000) & 0.0124 (0.0001) \\ \cline{2-5} & JS Opt & 0.0142 (0.0000) & 0.0477 (0.0001) & 0.0361 (0.0001) \\ \cline{2-5} & ECAP MLE & 0.0059 (0.0001) & 0.0171 (0.0002) & 0.0149 (0.0003) \\ \cline{2-5} & JS MLE & 0.0155 (0.0002) & 0.0541 (0.0008) & 0.0413 (0.0010) \\ \hline \multirow{5}{*}{$\theta=0$} & Unadjusted & 0.0099 (0.0000) & 0.0305 (0.0002) & 0.0275 (0.0003) \\ \cline{2-5} & ECAP Opt & 0.0084 (0.0000) & 0.0195 (0.0001) & 0.0164 (0.0001) \\ \cline{2-5} & JS Opt & 0.0088 (0.0000) & 0.0199 (0.0001) & 0.0171 (0.0001) \\ \cline{2-5} & ECAP MLE & 0.0093 (0.0001) & 0.0224 (0.0003) & 0.0200 (0.0004) \\ \cline{2-5} & JS MLE & 0.0094 (0.0001) & 0.0233 (0.0005) & 0.0219 (0.0005) \\ \hline \multirow{5}{*}{$\theta=2$} & Unadjusted & 0.0652 (0.0001) & 0.2419 (0.0003) & 0.1776 (0.0003) \\ \cline{2-5} & ECAP Opt & 0.0240 (0.0001) & 0.0614 (0.0002) & 0.0502 (0.0001) \\ \cline{2-5} & JS Opt & 0.0652 (0.0001) & 0.2419 (0.0003) & 0.1776 (0.0003) \\ \cline{2-5} & ECAP MLE & 0.0255 (0.0002) & 0.0744 (0.0012) & 0.0599 (0.0009) \\ \cline{2-5} & JS MLE & 0.0652 
(0.0001) & 0.2419 (0.0003) & 0.1776 (0.0003) \\ \end{tabular}} \end{table} The results, again averaged over $100$ simulation runs, are presented in Table~\ref{table:biasedtable}. In the two settings where $\theta<0$ we note that the unadjusted and JS methods all exhibit significant deterioration in their performance relative to the unbiased $\theta=0$ scenario. By comparison, the two ECAP methods significantly outperform the JS and unadjusted approaches. A similar pattern is observed for $\theta>0$. In this setting all five methods deteriorate, but ECAP is far more robust to the biased setting than unadjusted and JS. It is perhaps not surprising that the bias corrected version of ECAP outperforms the other methods when the data is indeed biased. However, just as interestingly, even in the unbiased setting ($\theta=0$) we still observe that ECAP outperforms its JS counterpart, despite the fact that ECAP must estimate $\theta$. This is likely a result of the fact that ECAP is able to accurately estimate $\theta$. Over all simulation runs and settings, ECAP Opt and ECAP MLE respectively averaged absolute errors of only~$0.0681$ and $0.2666$ in estimating $\theta$. \section{Empirical Results} \label{emp.sec} In this section we illustrate ECAP on two real world data sets. Section~\ref{sec.espn} contains our results analyzing ESPN's probability estimates from NCAA football games, while Section~\ref{sec.538} examines probability estimates from the 2018 US midterm elections. Given that for real data $p_i$ is never observed, we need to compute an estimate of $EC(\hat p_i)$. 
Hence, we choose a small window $\delta$, for example $\delta =[0,0.02]$, and consider all observations for which $\tilde p_i$ falls within $\delta$.\footnote{In this section, for simplicity of notation, we have flipped all probabilities greater than $0.5$, and the associated $Z_i$, around $0.5$, so $\delta=[0,0.02]$ also includes probabilities between $0.98$ and $1$.} We then estimate $p_i$ via $\bar p_\delta=\frac{1}{n_\delta}\sum_{i=1}^n Z_i \delta_i$, where $\delta_i=I(\tilde p_i\in \delta)$ and $n_\delta=\sum_{i=1}^n \delta_i$. We can then estimate EC using \begin{equation} \label{se.ec} \widehat{EC}_\delta(\bar{\hat{p_\delta}}) = \frac{\bar p_\delta - \bar{\hat{p_\delta}}}{\bar{\hat{p_\delta}}}, \end{equation} where $\bar{\hat{p_\delta}}=\frac{1}{n_\delta}\sum_{i=1}^n \hat p_i \delta_i$. \subsection{ESPN NCAA Football Data} \label{sec.espn} \begin{figure} \caption{ESPN's win probability estimates over the course of the 2017 game between USC and TEX.} \label{espn_web.plot} \end{figure} Each year there are approximately 1,200 Division 1 NCAA football games played within the US. For the last several seasons ESPN has been producing automatic win probability estimates for every game. These probabilities update in real time after every play. Figure~\ref{espn_web.plot} provides an example of a fully realized game between the University of Southern California (USC) and the University of Texas at Austin (TEX) during the 2017 season. For most of the game the probability of a USC win hovers around 75\% but towards the end of the game the probability starts to oscillate wildly, with both teams having high win probabilities, before USC ultimately wins.\footnote{The game was not chosen at random.} These gyrations are quite common and occasionally result in a team with a high win probability ultimately losing. Of course, even a team with a 99\% win probability will end up losing 1\% of the time, so these unusual outcomes do not necessarily indicate an error, or a selection bias issue, with the probability estimates.
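The window-based estimate \eqref{se.ec} amounts to the following computation; the array names and the toy inputs are illustrative:

```python
import numpy as np

def ec_delta(p_tilde, z, p_hat, lo=0.0, hi=0.02):
    """Window-based excess certainty: compare the average outcome Z_i to the
    average estimate p_hat over observations with p_tilde in delta = [lo, hi]."""
    in_delta = (p_tilde >= lo) & (p_tilde <= hi)
    p_bar = z[in_delta].mean()          # estimates E(p | p_tilde in delta)
    p_hat_bar = p_hat[in_delta].mean()  # average adjusted estimate in delta
    return (p_bar - p_hat_bar) / p_hat_bar

# toy check: three observations, two of which fall in the window
ec = ec_delta(np.array([0.010, 0.015, 0.500]),
              np.array([0.020, 0.000, 1.000]),
              np.array([0.010, 0.015, 0.500]))
```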
To assess the accuracy of ESPN's estimation procedure we collected data from the 2016 and 2017 NCAA football seasons. We obtained this unique data set by scraping the win probabilities, and ultimate winning team, for a total of 1,722 games (about 860 per season), involving an average of approximately 180 probabilities per game. Each game runs for 60 minutes, although the clock is often stopped. For any particular time point $t$ during these 60 minutes, we took the probability estimate closest to $t$ in each of the individual games. We used the entire data set, 2016 and 2017, to compute $\bar{p}_\delta$, which represents the ideal gold standard. However, this estimator is infeasible in practice because we would need to collect data over two full years to implement it. By comparison, we used only the 2016 season to fit ECAP and ultimately to compute $\bar{\hat{p}}_\delta$. We then calculated $\widehat{EC}_\delta(\bar{\hat{p_\delta}},t)$ for both the raw ESPN probabilities and the adjusted ECAP estimates. The intuition here is that $\widehat{EC}_\delta(\bar{\hat{p_\delta}},t)$ provides a comparison of these estimates to the ideal, but unrealistic, $\bar{p}_\delta$. In general we found that $\widehat{EC}_\delta(\bar{\hat{p_\delta}},t)$ computed on the ESPN probabilities was not systematically different from zero, suggesting ESPN's probabilities were reasonably accurate. However, we observed that, for extreme windows $\delta$, $\widehat{EC}_\delta(\bar{\hat{p_\delta}},t)$ was well above zero towards the end of the games. Consider, for example, the solid orange line in Figure~\ref{espn.plot}, which plots $\widehat{EC}_\delta(\bar{\hat{p_\delta}},t)$ using $\delta=[0,0.02]$ at six different time points during the final minute of these games. We observe that excess certainty is consistently well above zero.
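Pointwise uncertainty in $\widehat{EC}_\delta$ can be quantified with a standard percentile bootstrap over the observations whose $\tilde p_i$ fall in $\delta$. A minimal sketch, using hypothetical win indicators:

```python
import numpy as np

rng = np.random.default_rng(1)

def boot_ci(values, stat=np.mean, n_boot=2000, level=0.90):
    """Percentile bootstrap CI: resample with replacement, recompute the statistic."""
    values = np.asarray(values)
    stats = np.array([stat(rng.choice(values, size=values.size, replace=True))
                      for _ in range(n_boot)])
    return tuple(np.quantile(stats, [(1 - level) / 2, (1 + level) / 2]))

# hypothetical win indicators for observations that landed inside delta
lo, hi = boot_ci(np.r_[np.zeros(90), np.ones(10)])
```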
The $90\%$ bootstrap confidence intervals (dashed lines), generated by sampling with replacement from the probabilities that landed inside $\delta$, demonstrate that the difference from zero is statistically significant for most time points. This suggests that towards the end of the game ESPN's probabilities are too extreme, i.e., there are more upsets than would be predicted by their estimates. \begin{figure} \caption{Estimated excess certainty $\widehat{EC}_\delta(\bar{\hat{p_\delta}},t)$, with $\delta=[0,0.02]$, at six time points during the final minute of the games, for the raw ESPN probabilities (solid orange) and the ECAP-adjusted probabilities (solid green), with $90\%$ bootstrap confidence intervals (dashed lines).} \label{espn.plot} \label{plot:espn_ecap} \end{figure} Next we applied the unbiased implementation of ECAP, i.e., with $\theta=0$, separately to each of these six time points and computed $\widehat{EC}_\delta(\bar{\hat{p_\delta}},t)$ for the associated ECAP probability estimates. To estimate the out-of-sample performance of our method, we randomly picked half of the 2016 games to estimate $\gamma^*$, and then used ECAP to produce probability estimates on the other half. We repeated this process 100 times and averaged the resulting $\widehat{EC}_\delta(\bar{\hat{p_\delta}},t)$ independently for each time point. The solid green line in Figure~\ref{espn.plot} provides the estimated excess certainty. ECAP appears to work well on this data, with excess certainty estimates close to zero and confidence intervals that contain zero at most time points. Notice also that ECAP is consistently producing a slightly negative excess certainty, which is actually necessary to minimize the expected loss function \eqref{loss.fn}, as demonstrated in Figure~\ref{EC.plot}. Interestingly, this excess certainty pattern in the ESPN probabilities is no longer apparent in data for the 2018 season, suggesting that ESPN also identified this as an issue and applied a correction to their estimation procedure. \subsection{Election Data} \label{sec.538} Probabilities have increasingly been used to predict election results.
For example, news organizations, political campaigns, and others often attempt to predict the probability of a given candidate winning a governor's race, or a seat in the House or Senate. Among other uses, political parties can use these estimates to optimize their funding allocations across hundreds of different races. In this section we illustrate ECAP using probability estimates produced by the \href{www.FiveThirtyEight.com}{FiveThirtyEight.com} website during the 2018 US midterm election cycle. FiveThirtyEight used three different methods, {\it Classic, Deluxe}, and {\it Lite}, to generate probability estimates for every governor, House, and Senate seat up for election, resulting in 506 probability estimates for each of the three methods. Interestingly, a previous analysis of this data \citep{Silver18} showed that the FiveThirtyEight probability estimates appeared to be overly conservative, i.e., the leading candidate won more often than would have been predicted by their probabilities. Hence, we should be able to improve the probability estimates using the bias corrected version of ECAP from Section~\ref{biased.sec}. We first computed $\widehat{EC}_\delta(\bar{\hat{p_\delta}})$ on the unadjusted FiveThirtyEight probability estimates using two different values for $\delta$, namely $\delta_1=[0,0.1]$ and $\delta_2=[0.1,0.2]$. We used wider windows for $\delta$ in comparison to the ESPN data because we only had one-third as many observations. The results for the three methods used by FiveThirtyEight are shown in Table~\ref{table:538table}. Notice that for all three methods and both values of $\delta$ the unadjusted estimates are far below zero and several are close to $-1$, the minimum possible value. These results validate the previous analysis suggesting the FiveThirtyEight estimates are systematically conservatively biased. \begin{table}[t] \centering \captionof{table}{Bias corrected ECAP adjustment of FiveThirtyEight's 2018 election probabilities.
Reported values are the average $\widehat{EC}_\delta$.} \label{table:538table} {\small \begin{tabular}{ll||rr} Method & Adjustment & \textbf{$\delta_1$} & \textbf{$\delta_2$} \\ \hline \hline \multirow{2}{*}{\textbf{Classic}} & Unadjusted & -0.6910 & -0.8361 \\ \cline{2-4} & ECAP & -0.2880 & -0.0734 \\ \hline \multirow{2}{*}{\textbf{Deluxe}} & Unadjusted & -0.4276 & -0.8137 \\ \cline{2-4} & ECAP & -0.0364 & 0.1802 \\ \hline \multirow{2}{*}{\textbf{Lite}} & Unadjusted & -0.8037 & -0.8302 \\ \cline{2-4} & ECAP & -0.3876 & -0.1118 \\ \end{tabular}} \end{table} Next we applied ECAP separately to each of the three sets of probability estimates, with the value of $\theta$ chosen using the MLE approach previously described. Again the results are provided in Table~\ref{table:538table}. ECAP appears to have significantly reduced the level of bias, with most values of $\widehat{EC}_\delta(\bar{\hat{p_\delta}})$ close to zero, and in one case actually slightly above zero. For the Deluxe method with $\delta_1$, ECAP has an almost perfect level of excess certainty. For the Classic and Lite methods, $\theta=2$ was chosen by ECAP for both values of~$\delta$, representing the largest possible level of bias correction. For the Deluxe method, ECAP selected $\theta=1.9$. Figure~\ref{plot:ecap_v_538unadj} demonstrates the significant level of correction that ECAP applies to the FiveThirtyEight Classic method estimates. For example, ECAP adjusts probability estimates of $0.8$ to $0.89$ and estimates of $0.9$ to $0.97$. \section{Discussion} \label{discussion.sec} In this article, we have convincingly demonstrated both theoretically and empirically that probability estimates are subject to selection bias, even when the individual estimates are unbiased. Our proposed ECAP method applies a novel non-parametric empirical Bayes approach to adjust both biased and unbiased probabilities, and hence produces more accurate estimates.
The results in both the simulation study and on real data sets demonstrate that ECAP can successfully correct for selection bias, allowing us to use the probabilities with a higher level of confidence when selecting extreme values. \begin{figure} \caption{ECAP bias corrected probabilities versus the original FiveThirtyEight probabilities from the Classic method.} \label{plot:ecap_v_538unadj} \end{figure} There are a number of possible areas for future work. For example, the ESPN data contains an interesting time series structure to the probabilities, with each game consisting of a probability function measured over 60 minutes. Our current method treats each time point independently and adjusts the probabilities accordingly. However, one may be able to leverage more power by incorporating all time points simultaneously using some form of functional data analysis. Another potential area of exploration involves the type of data on which ECAP is implemented. For example, consider a setting involving a large number of hypothesis tests and associated p-values, $\tilde p_1,\ldots, \tilde p_n$. There has been much discussion recently of the limitations around using p-values. A superior approach would involve thresholding based on the posterior probability of the null hypothesis being true, i.e., $p_i=P(H_{0i}|X_i)$. Of course, in general, $p_i$ is difficult to compute, which is why we use the p-value $\tilde p_i$. However, if we were to treat $\tilde p_i$ as a, possibly biased, estimate of $p_i$, then it may be possible to use a modified version of ECAP to estimate $p_i$. If such an approach could be implemented it would likely have a significant impact in the area of multiple hypothesis testing.
\appendix \section{Proof of Theorem~\ref{oracle.thm}} We begin by computing the derivative of the loss function, $$L(x)=\begin{cases} \frac{1}{x^2}E(p_i^2|\tilde p_i)-\frac2x E(p_i|\tilde p_i)+1&x\le 0.5\\ \frac{1}{(1-x)^2}E(p_i^2|\tilde p_i)-\frac{2x}{(1-x)^2} E(p_i|\tilde p_i)+\left(\frac{x}{1-x}\right)^2&x> 0.5. \end{cases}$$ We have \begin{eqnarray*} \frac{\partial L}{\partial x} &=& \begin{cases} -\frac{2}{x^3}E(p_i^2|\tilde p_i)+\frac2{x^2} E(p_i|\tilde p_i)&x< 0.5\\ \frac{2}{(1-x)^3}E(p_i^2|\tilde p_i)-\frac{2(1+x)}{(1-x)^3} E(p_i|\tilde p_i)+\frac{2x}{(1-x)^3}&x> 0.5 \end{cases}\\ &\propto& \begin{cases} -E(p_i^2|\tilde p_i)+x E(p_i|\tilde p_i)&x< 0.5\\ E(p_i^2|\tilde p_i)-E(p_i|\tilde p_i)+x(1-E(p_i|\tilde p_i)) &x> 0.5. \end{cases} \end{eqnarray*} Note that~$L$ is a continuous function. If $E(p_i|\tilde p_i)\le 0.5$ and $x^*=E(p_i^2|\tilde p_i)/E(p_i|\tilde p_i)\le0.5$ then algebraic manipulations show that $\partial L/\partial x$ is negative for all $x<x^*$ and positive for $x>x^*$. Hence, since $E(p_i^2|\tilde p_i)=E(p_i|\tilde p_i)^2+Var(p_i|\tilde p_i)$, we conclude that $p_{i0}=x^*=E(p_i|\tilde p_i) + Var(p_i|\tilde p_i)/E(p_i|\tilde p_i)$ minimizes~$L$. Alternatively, if $E(p_i|\tilde p_i)\le0.5$ and $x^*=E(p_i^2|\tilde p_i)/E(p_i|\tilde p_i)\ge0.5$ then $\partial L/\partial x$ is negative for all $x<0.5$ and positive for all $x>0.5$, so $L$ is minimized by $p_{i0}=0.5$. Analogous arguments show that if $E(p_i|\tilde p_i)>0.5$ and $x^*=(E(p_i|\tilde p_i)-E(p_i^2|\tilde p_i))/(1-E(p_i|\tilde p_i))>0.5$, then $\partial L/\partial x$ is negative for all $x<x^*$, zero at $x=x^*$ and positive for $x>x^*$. Hence, $p_{i0}=x^*=E(p_i|\tilde p_i) - Var(p_i|\tilde p_i)/(1-E(p_i|\tilde p_i))$ will minimize $L$. Alternatively, if $E(p_i|\tilde p_i)>0.5$ and $x^*=(E(p_i|\tilde p_i)-E(p_i^2|\tilde p_i))/(1-E(p_i|\tilde p_i))<0.5$ then $\partial L/\partial x$ is negative for all $x<0.5$ and positive for all $x>0.5$, so $L$ is minimized by $p_{i0}=0.5$.
To prove the second result, first suppose $E(p_i|\tilde p_i)\le0.5$ and $p_{i0}<0.5$, in which case $L(p_{i0}) = 1 - E(p_i^2|\tilde p_i)/p_{i0}^2$. Now let $\tilde L(p_i') = E\left(\left(\frac{p_i-p_i'}{p_i'}\right)^2|\tilde p_i\right)= \frac{1}{{p_i'}^2}E(p_i^2|\tilde p_i) - \frac{2}{p_i'}E(p_i|\tilde p_i)+1$. Note that $\tilde L(p_i')\le L(p_i')$ with equality for $p_i'\le 0.5$. Hence, $$L(p_i')-L(p_{i0})\ge \tilde L(p_i')-L(p_{i0}) = E(p_i^2|\tilde p_i)\left(\frac{1}{{p_i'}^2} + \frac{1}{p_{i0}^2}\right)- \frac{2}{p_i'}E(p_i|\tilde p_i) = E(p_i^2|\tilde p_i)\left( \frac{1}{p_i'}-\frac{1}{p_{i0}}\right)^2.$$ Now consider the case $E(p_i|\tilde p_i)\le0.5$ and $p_{i0}=0.5$. Note that this implies $2E(p_i^2|\tilde p_i)>E(p_i|\tilde p_i)$. If~$p_i'\le0.5$, then $$L(p_i')-L(p_{i0})= E(p_i^2|\tilde p_i)\left(\frac{1}{{p_i'}^2} -4\right)- 2E(p_i|\tilde p_i)\left(\frac{1}{{p_i'}} -2\right) \ge E(p_i^2|\tilde p_i)\left( \frac{1}{p_i'}-\frac{1}{0.5}\right)^2.$$ Also note that $$\left( \frac{1}{p_i'}-\frac{1}{0.5}\right)^2\ge \left( \frac{1}{1-p_i'}-\frac{1}{0.5}\right)^2. $$ Alternatively, if~$p_i'>0.5$, then $$L(p_i')-L(p_{i0})\ge \tilde{L}(1-p_i')-L(p_{i0})\ge E(p_i^2|\tilde p_i)\left( \frac{1}{1-p_i'}-\frac{1}{0.5}\right)^2.$$ Observe that $$\left( \frac{1}{1-p_i'}-\frac{1}{0.5}\right)^2\ge \left( \frac{1}{p_i'}-\frac{1}{0.5}\right)^2. $$ Consequently, we have shown that $$L(p_i')-L(p_{i0})\ge E(p_i^2|\tilde p_i) \max\left(\left[ \frac{1}{p_i'}-\frac{1}{0.5}\right]^2, \left[\frac{1}{1-p_i'}-\frac{1}{0.5}\right]^2\right) $$ when $E(p_i|\tilde p_i)\le0.5$ and $p_{i0}=0.5$. Thus, we have established the result for the case $E(p_i|\tilde p_i)\le0.5$. Finally, consider the case $E(p_i|\tilde p_i)>0.5$. The result follows by repeating the argument from the case $E(p_i|\tilde p_i)\le0.5$ while replacing all of the probabilities with their complements, i.e., by replacing~$p_i$, $p_{i0}$ and $p_i'$ with $1-p_i$, $1-p_{i0}$ and $1-p_i'$, respectively.
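The interior case of this argument is easy to spot-check numerically. The conditional moments below are hypothetical; the check confirms both that $E(p_i^2|\tilde p_i)/E(p_i|\tilde p_i)$ minimizes the loss and that the gap lower bound holds on a grid:

```python
import numpy as np

# hypothetical conditional moments with E(p | p_tilde) = 0.3 <= 0.5
m1, m2 = 0.30, 0.10          # m1 = E(p | p_tilde), m2 = E(p^2 | p_tilde)
p0 = m2 / m1                 # interior oracle: E(p^2|.)/E(p|.) = m1 + Var/m1 < 0.5

def L(x):
    # the two branches of the loss function from the proof
    return np.where(x <= 0.5,
                    m2 / x**2 - 2 * m1 / x + 1,
                    m2 / (1 - x)**2 - 2 * x * m1 / (1 - x)**2 + (x / (1 - x))**2)

x = np.linspace(0.05, 0.95, 1801)
x_num = x[np.argmin(L(x))]              # numerical minimizer of L
L0 = float(L(p0))
gap_ok = bool(np.all(L(x) - L0 >= m2 * (1 / x - 1 / p0)**2 - 1e-9))
```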
\section{Proof of Theorem~\ref{EandVar} and Corollary~\ref{cor.EandVar}} Throughout the proof, we omit the subscript~$i$, for the simplicity of notation. We let $f_{\tilde p}(\tilde p|p)$ denote the conditional density of $\tilde p$ given that the corresponding true probability equals~$p$, and define $f_{X}(x|p)$ by analogy for the random variable~$X=\log(\tilde p/[1-\tilde p])$. We will slightly abuse the notation and not distinguish between the random variable and its value in the case of $\tilde p$, $p$ and~$\eta$. According to model~\eqref{beta.model}, we have $f_{\tilde p}(\tilde p|p)=B(p/{\gamma^*},(1-p)/{\gamma^*})^{-1}\tilde p^{p/{\gamma^*}-1}(1-\tilde p)^{(1-p)/{\gamma^*}-1}$ where~$B(\cdot)$ denotes the beta function. Hence, writing~$B$ for $B(p/{\gamma^*},(1-p)/{\gamma^*})$, we derive \begin{eqnarray} \log(f_{\tilde p}(\tilde p|p)) &=& -\log B + \left(\frac{p}{{\gamma^*}}-1\right)[\log \tilde p - \log(1-\tilde p)] + (1/{\gamma^*}-2)\log(1-\tilde p)\nonumber\\ \label{eq.exp.fam} &=& -\log B + \eta x + (1/{\gamma^*}-2)\log(1-\tilde p) \end{eqnarray} where $\eta=\frac{p}{{\gamma^*}}-1$ and $x=\log \frac{\tilde p}{1-\tilde p}$. Standard calculations show that \begin{equation} \label{ptilde.dist} f_X(x|p)= f_{\tilde p}(\tilde p|p) \frac{e^x}{(1+e^x)^2} = f_{\tilde p}(\tilde p|p) \tilde p(1-\tilde p). \end{equation} Note that $\log(1-\tilde p)=-\log(1+e^x)$, and hence \begin{eqnarray*} \log(f_{X}(x|p))&=&-\log B + \eta x - (1/{\gamma^*}-2)\log(1+e^x)+x - 2\log(1+e^x)\\ &=& -\log B + \eta x +x- 1/{\gamma^*}\log(1+e^x)\\ &=& -\log B + \eta x +l_h(x), \end{eqnarray*} where $l_h(x)=x- 1/{\gamma^*}\log(1+e^x)$. Consequently, we can apply Tweedie's formula \citep{efron2011} to derive $$E(p/{\gamma^*}-1|\tilde p)=E(\eta|x)=v_X(x)-l_h'(x)=v_X(x)-1 + \frac{1}{{\gamma^*}}\frac{e^x}{1+e^x}= v_X(x)+\frac{\tilde p}{{\gamma^*}}-1,$$ where $v_{X}(x)=(d f_{X}(x)/dx)/f_{X}(x)$ and $f_X$ is the density of~$X$. 
This implies $$E(p|\tilde p) = \tilde p+{\gamma^*} v_X(x).$$ In addition, we have \begin{eqnarray*} \frac{d f_X(x)}{dx}&=& \frac{d f_{\tilde p}(\tilde p)}{d\tilde p}\frac{d\tilde p}{dx}\frac{e^x}{(1+e^x)^2}+ f_{\tilde p}(\tilde p) \frac{e^x(1+e^x)^2-2e^{2x}(1+e^x)}{(1+e^x)^4}\\ &=& \frac{d f_{\tilde p}(\tilde p)}{d\tilde p}\left(\frac{e^x}{(1+e^x)^2}\right)^2+ f_{\tilde p}(\tilde p)\frac{e^x}{(1+e^x)^2}\frac{1-e^x}{1+e^x}. \end{eqnarray*} Using the unconditional analog of formula~\eqref{ptilde.dist}, we derive \begin{eqnarray*} v_X(x)&=&\frac{d f_X(x)/dx}{f_X(x)}\\ &=& \frac{d f_{\tilde p}(\tilde p)/d\tilde p}{f_{\tilde p}(\tilde p)}\frac{e^x}{(1+e^x)^2}+\frac{1-e^x}{1+e^x}\\ &=& v_{\tilde p}(\tilde p) \tilde p(1-\tilde p) +1-2\tilde p, \end{eqnarray*} where $v_{\tilde p}(\tilde p)=(d f_{\tilde p}(\tilde p)/d\tilde p)/f_{\tilde p}(\tilde p).$ Thus, $$E(p|\tilde p) = \tilde p + {\gamma^*}(\tilde p(1-\tilde p)v_{\tilde p}(\tilde p)+ 1-2\tilde p).$$ Similarly, again by Tweedie's formula, $$Var(p/{\gamma^*}-1|\tilde p)=Var(\eta|x)=v'_X(x)-l_h''(x)=v'_X(x)+\frac{1}{{\gamma^*}}\frac{e^x}{(1+e^x)^2}= v'_X(x)+\frac{\tilde p(1-\tilde p)}{{\gamma^*}},$$ which implies $$Var(p|\tilde p) = {\gamma^*}\tilde p(1-\tilde p)+{\gamma^*}^2 v'_X(x).$$ Noting that $$v'_X(x) =\tilde p(1-\tilde p)[v'_{\tilde p}(\tilde p) \tilde p(1-\tilde p) + v_{\tilde p}(\tilde p)(1-2\tilde p)-2],$$ we derive $$Var(p|\tilde p) = {\gamma^*}^2\tilde p(1-\tilde p)[v'_{\tilde p}(\tilde p) \tilde p(1-\tilde p) + v_{\tilde p}(\tilde p)(1-2\tilde p)-2]+ {\gamma^*}\tilde p(1-\tilde p).$$ If we define $g^*(\tilde p)=\tilde p(1-\tilde p)v_{\tilde p}(\tilde p)$, then \begin{eqnarray*} E(p|\tilde p) &=& \tilde p + {\gamma^*}(g^*(\tilde p)+1-2\tilde p)\\ Var(p|\tilde p) &=& {\gamma^*}\tilde p(1-\tilde p)+{\gamma^*}^2\tilde p(1-\tilde p)[{g^*}'(\tilde p)-2]. \end{eqnarray*} This completes the proof of Theorem~\ref{EandVar}. Finally, we establish some properties of $g^*(\tilde p)$ and prove Corollary~\ref{cor.EandVar}.
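The change-of-variables identity $v_X(x)=\tilde p(1-\tilde p)v_{\tilde p}(\tilde p)+1-2\tilde p$ derived above can be verified numerically by finite differences; the Beta$(3,2)$-shaped density below is an arbitrary choice:

```python
import numpy as np

sig = lambda x: 1.0 / (1.0 + np.exp(-x))   # logistic map x -> p_tilde

# an arbitrary smooth (unnormalized) density on (0,1); scores f'/f do not
# depend on the normalizing constant
f_p = lambda p: p**2 * (1.0 - p)
f_X = lambda x: f_p(sig(x)) * sig(x) * (1.0 - sig(x))  # density of X = logit

x, h = 0.4, 1e-6
p = sig(x)
v_X = (np.log(f_X(x + h)) - np.log(f_X(x - h))) / (2 * h)  # d log f_X / dx
v_p = (np.log(f_p(p + h)) - np.log(f_p(p - h))) / (2 * h)  # d log f_p / dp
rhs = p * (1.0 - p) * v_p + 1.0 - 2.0 * p
```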
We denote the marginal density of~$\tilde p$ by~$f$. First note that $g^*(1-\tilde p)=\tilde p(1-\tilde p)f'(1-\tilde p)/f(1-\tilde p)$. If $h(p)$ represents the prior density for~$p$, then \begin{equation} \label{f.marginal} f(\tilde p)=\int_0^{1} B(\alpha,\beta)^{-1}\tilde p^{\alpha-1}(1-\tilde p)^{\beta-1}h(p)dp. \end{equation} Because the function~$h$ is bounded, differentiation under the integral sign is justified, and hence \begin{equation} \label{f.prime} f'(\tilde p)=\int_0^{1} B(\alpha,\beta)^{-1}\left\{(\alpha-1)\tilde p^{\alpha-2}(1-\tilde p)^{\beta-1}-(\beta-1)\tilde p^{\alpha-1}(1-\tilde p)^{\beta-2}\right\}h(p)dp, \end{equation} where $\alpha=p/{\gamma^*}$ and $\beta=(1-p)/{\gamma^*}$. Substituting $p^*=1-p$ we get $$f(1-\tilde p)=\int_0^{1} B(\beta,\alpha)^{-1}\tilde p^{\alpha-1}(1-\tilde p)^{\beta-1}h(1-p^*)dp^*=f(\tilde p)$$ and $$f'(1-\tilde p)=-\int_0^{1} B(\beta,\alpha)^{-1}\left\{(\alpha-1)\tilde p^{\alpha-2}(1-\tilde p)^{\beta-1}-(\beta-1)\tilde p^{\alpha-1}(1-\tilde p)^{\beta-2}\right\}h(1-p^*)dp^*=-f'(\tilde p),$$ provided $h(p)=h(1-p)$. Hence, $g^*(1-\tilde p)=-\tilde p(1-\tilde p)f'(\tilde p)/f(\tilde p)=-g^*(\tilde p)$. By continuity of $g^*(\tilde p)$ this result also implies $g^*(0.5)=0$. To complete the proof of Corollary~\ref{cor.EandVar}, we note that under the assumption that the distribution of~$p_i$ is symmetric, the conditional expected value $E(p_i|\tilde p_i)$ lies on the same side of~$0.5$ as~$\tilde p_i$. \section{Proof of Theorem~\ref{risk.lemma}} As before, we denote the marginal density of~$\tilde p$ by~$f$. First, we derive a bound for~${g^*}$. Note that $-1\le \alpha-1\le \frac{1}{{\gamma^*}}$ and, similarly, $-1\le \beta-1\le \frac{1}{{\gamma^*}}$.
Hence, by \eqref{f.marginal} and \eqref{f.prime}, \begin{equation*} \nonumber -\left(1-\tilde p+\frac{\tilde p}{{\gamma^*}}\right)f(\tilde p)\le \tilde p(1-\tilde p) f'(\tilde p)\le \left((1-\tilde p)\frac{1}{{\gamma^*}}+\tilde p\right)f(\tilde p), \end{equation*} which implies \begin{equation} \label{g.bound} |{g^*}(\tilde p)|\le \frac{1}{{\gamma^*}}. \end{equation} Next, note that \begin{eqnarray} \lim_{\tilde p\rightarrow 0} \tilde p(1-\tilde p) f(\tilde p) &=& 0\label{lim1}\qquad\text{and}\\ \lim_{\tilde p\rightarrow 1} \tilde p(1-\tilde p) f(\tilde p) &=& 0.\label{lim2} \end{eqnarray} Observe that \begin{eqnarray} R( g(\tilde p)) &=& E\left( g(\tilde p)- {g^*}(\tilde p)\right)^2\nonumber\\ &=& E g(\tilde p)^2-2E\left\{ g(\tilde p) {g^*}(\tilde p)\right\}+C\nonumber\\ &=& E g(\tilde p)^2-2\int_0^1 \left\{ g(\tilde p) \tilde p(1-\tilde p) \frac{f'(\tilde p)}{f(\tilde p)}\right\} f(\tilde p)d\tilde p+C\nonumber\\ &=& E g(\tilde p)^2-2\left[ g(\tilde p)\tilde p(1-\tilde p) f(\tilde p)\right]^1_0+2\int_0^1 \left[ g(\tilde p)(1-2\tilde p)+\tilde p(1-\tilde p) g'(\tilde p)\right]f(\tilde p)d\tilde p +C \nonumber\\ &=& E g(\tilde p)^2+2\int_0^1 \left[ g(\tilde p)(1-2\tilde p)+\tilde p(1-\tilde p) g'(\tilde p)\right]f(\tilde p)d\tilde p +C\label{full.risk.prf} \end{eqnarray} where $C$ is a constant that does not depend on~$g$, and the second-to-last line follows via integration by parts. Note that the last line holds when~$g$ is bounded, because by \eqref{lim1}, $$\lim_{\tilde p\rightarrow 0} g(\tilde p)\tilde p(1-\tilde p) f(\tilde p) = 0,$$ and by \eqref{lim2}, $$\lim_{\tilde p\rightarrow 1} g(\tilde p)\tilde p(1-\tilde p) f(\tilde p) = 0.$$ In particular, due to the inequality~(\ref{g.bound}), the relationship~(\ref{full.risk.prf}) holds when~$g$ is the true function~${g^*}$.
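Relationship \eqref{full.risk.prf} can be spot-checked in the degenerate case of a uniform marginal~$f$ (not itself arising from the beta model, but convenient): then $g^*\equiv0$, the constant $C$ vanishes, and the correction term must integrate to zero. The choice $g(\tilde p)=\tilde p^2$ is arbitrary:

```python
import numpy as np

def trapezoid(y, x):
    """Simple trapezoidal rule (avoids NumPy version differences)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x) / 2.0))

# uniform marginal f on (0,1): g*(p) = p(1-p) f'/f = 0, so R(g) = E g^2
# and the correction term in the risk identity must integrate to zero
p = np.linspace(1e-6, 1.0 - 1e-6, 200001)
g = p**2
g_prime = 2.0 * p

risk = trapezoid(g**2, p)                                        # E g^2 = 1/5
corr = 2.0 * trapezoid(g * (1 - 2 * p) + p * (1 - p) * g_prime, p)
```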
\section{Proof of Theorem~\ref{g.thm}} \label{sec:proof.asympt} We write $\mathcal{G}_N$ for the class of all natural cubic spline functions $g$ on $[0,1]$ that correspond to the sequence of~$n$ knots located at the observed~$\tilde p_i$. Given a function~$g$, we define $s_g(\tilde p)=2[g(\tilde p)(1-2\tilde p)+\tilde p(1-\tilde p)g'(\tilde p)]$ and $I^2(g) = \int_0^1 [g''(\tilde p)]^2 d\tilde p$. We also denote $(1/n)\sum_{i=1}^n g^2(\tilde p_i)$ and $\int_0^1 g(\tilde p)f^*(\tilde p)d\tilde p$ by~$\|g\|^2_n$ and~$\|g\|^2$, respectively. By Lemma~\ref{lem1} in Appendix~\ref{append.sup.res}, there exists $g^*_N\in\mathcal{G}_N$, such that $\|g^*_N-g^*\|^2=O_p(\lambda_n^2)$ and \begin{equation*} \|\hat g - g^*_N\|^2 + \lambda_n^2 I^2(\hat g)\le O_p\Big(n^{-2/7}\|\hat g - g^*_N\|\Big) + O_p\Big(n^{-4/7} I(\hat g) \Big)+O_p\Big(n^{-4/7} +\lambda_n^2\Big). \end{equation*} We consider two possible cases (a) $n^{-4/7} I(\hat g)\le n^{-2/7}\|\hat g - g^*_N\|+n^{-4/7}+\lambda_n^2$ and (b) $n^{-4/7} I(\hat g)> n^{-2/7}\|\hat g - g^*_N\|+n^{-4/7}+\lambda_n^2$. Under (a) we have \begin{equation} \|\hat g - g^*_N\|^2 +\lambda^2_n I^2(\hat g)\le O_p\Big(n^{-2/7}\|\hat g - g^*_N\|\Big) +O_p\Big(n^{-4/7}+\lambda^2_n\Big). \end{equation} It follows that $\|\hat g - g^*_N\|=O_p(n^{-2/7}+\lambda_n)$ and $I^2(\hat g )=O_p(n^{-4/7}\lambda_n^{-2}+1)$. However, taking into account the case (a) condition, we also have $I^2(\hat g )=O_p(n^{4/7}\lambda^2_n+1)$, thus leading to $I(\hat g )=O_p(1)$. Under (b) we have \begin{equation} \label{case.b.ineq} \|\hat g - g^*_N\|^2 + \lambda_n^2 I^2(\hat g)\le O_p\Big(n^{-4/7}I(\hat g )\Big). \end{equation} It follows that $I(\hat g )=O_p(n^{-4/7}\lambda_n^{-2})$ and $\|\hat g - g^*_N\|=O_p(n^{-4/7}\lambda_n^{-1})$. 
Collecting all the stochastic bounds we derived, and using the fact that~$f^*$ is bounded away from zero, we deduce \begin{equation*} \|\hat g - g^*_N\| = O_p(n^{-4/7}\lambda_n^{-1}+n^{-2/7}+\lambda_n) \qquad\text{and}\qquad I(\hat g)=O_p(1+n^{-4/7}\lambda_n^{-2}). \end{equation*} Using the bound $\|g^*_N-g^*\|^2=O_p(\lambda_n^2)$, together with the definitions of $r_n$ and~$s_n$, we derive \begin{equation} \label{g.bounds} \|\hat g - g^*\| = O_p(r_n) \qquad\text{and}\qquad I(\hat g - g^*)=O_p(1+n^{-4/7}\lambda_n^{-2}). \end{equation} Applying Lemma 10.9 in \cite{van2000applications}, which builds on the interpolation inequality of \cite{agmon1965lectures}, we derive $\|\hat{g}' - {g^*}'\| = O_p(\sqrt{r_n s_n})$. This establishes the error bounds for $\hat g$ and $\hat g'$ with respect to the $\|\cdot\|$ norm. To derive the corresponding results with respect to the $\|\cdot\|_n$ norm, we first apply bound~(\ref{nu_n_dg2}), in which we replace~$g^*_N$ with~$g^*$. It follows that \begin{equation*} \|\hat g - g^*\|^2_n-\|\hat g - g^*\|^2 = ({\tilde P}_n-{\tilde P})[\hat g-{g^*}]^2= o_p\Big(\|\hat g - g^*\|^2\Big) + O_p\Big(n^{-1}I^2(\hat g - g^*)\Big), \end{equation*} where we use the notation from the proof of Lemma~\ref{lem1}. Because bounds~(\ref{g.bounds}) together with the assumption $\lambda_n\gg n^{-8/21}$ imply \begin{equation*} I(\hat g - g^*)=O_p\Big(n^{-4/7}n^{16/21}\Big)=O_p\Big(n^{4/21}\Big), \end{equation*} we can then derive \begin{equation*} \|\hat g - g^*\|^2_n=O_p\Big(\|\hat g - g^*\|^2\Big) + O_p\Big(n^{-13/21}\Big). \end{equation*} Because $r_n\ge n^{-2/7}$, we have $r_n^2\ge n^{-13/21}$. Consequently, $ \|\hat g - g^*\|^2_n=O_p(r_n^2)$, which establishes the analog of the first bound in~(\ref{g.bounds}) for the $\|\cdot\|_n$ norm. It is only left to derive $\|\hat{g}' - {g^*}'\|_n=O_p(\sqrt{r_ns_n})$.
Applying Lemma~17 in \cite{meier2009high}, in conjunction with Corollary~5 from the same paper, in which we take ${\gamma^*}=2/3$ and $\lambda=n^{-3/14}$, we derive \begin{equation*} ({\tilde P}_n-{\tilde P})[\hat{g}' - {g^*}']^2 =O_p\Big(n^{-5/14}\Big[ \|\hat{g}' - {g^*}'\|^2 + n^{-2/7} I^2(\hat{g}' - {g^*}')\Big] \Big). \end{equation*} Consequently, \begin{equation*} \|\hat{g}' - {g^*}'\|^2_n =O_p\Big(\|\hat{g}' - {g^*}'\|^2\Big)+O_p\Big(n^{-9/14}I^2(\hat{g}' - {g^*}') \Big). \end{equation*} Taking into account bound~(\ref{g.bounds}), the definition of~$s_n$, the assumption $\lambda_n\gg n^{-8/21}$ and the inequality $r_n\ge n^{-2/7}$, we derive \begin{equation*} n^{-9/14}I^2(\hat{g}' - {g^*}')=O_p\Big(n^{-9/14}s_n^2\Big)=O_p\Big(n^{-19/42}s_n\Big)=O_p\Big(r_ns_n\Big). \end{equation*} Thus, $\|\hat{g}' - {g^*}'\|^2_n=O_p(\|\hat{g}' - {g^*}'\|^2+r_ns_n)=O_p(r_ns_n)$, which completes the proof. \section{Proof of Theorem~\ref{cons.thm}} We will take advantage of the results in Theorem~\ref{rate.thm}, which are established independently of Theorem~\ref{cons.thm}. We will focus on proving the results involving integrals, because the results for the averages follow by an analogous argument with minimal modifications. We start by establishing consistency of $\hat p$. Fixing an arbitrary positive~$\tilde\epsilon$, identifying a positive~$\epsilon$ for which $\tilde P(0,\epsilon)+\tilde P(1-\epsilon,1)\le \tilde\epsilon/2$, and noting that $\hat p$ and $p_0$ fall in $[0,1]$ for every~$\tilde p$, we derive \begin{equation*} \|\hat p - p_0 \|^2 \le \tilde\epsilon/2 + \int_{\epsilon}^{1-\epsilon}|\hat p(\tilde p)-p_0(\tilde p)|^2f^*(\tilde p)d\tilde p. \end{equation*} By Theorem~\ref{rate.thm}, the second term on the right-hand side of the above display is $o_p(1)$. Consequently, \begin{equation*} P\Big(\|\hat p - p_0 \|^2 >\tilde\epsilon\Big)\rightarrow0 \quad\text{as}\quad n\rightarrow\infty.
\end{equation*} As the above statement holds for every fixed positive~$\tilde\epsilon$, we have established that $\|\hat p - p_0 \|=o_p(1)$. We now focus on showing consistency for~$\hat W$. Note that \begin{equation*} [\hat\mu^2(\tilde p)+\hat\sigma^2(\tilde p)]^2/\hat\mu^2(\tilde p) \ge [2\hat\mu(\tilde p)\hat\sigma(\tilde p)]^2/[\hat\mu^2(\tilde p)]=4\hat\sigma^2(\tilde p). \end{equation*} Thus, the definition of~$\hat p$ implies $\hat p^2(\tilde p)\ge \hat\sigma^2(\tilde p) \wedge 0.25$, and also $\hat p(\tilde p)\ge \hat\mu(\tilde p)\wedge 0.5$, for every~$\tilde p\in(0,1)$. Writing~$p$ for the true probability corresponding to the observed~$\tilde p$, we then derive \begin{eqnarray} \widehat W(\tilde p)=\frac{E_p\Big[(p-\hat p(\tilde p))^2|\tilde p\Big]}{\hat p^2(\tilde p)}&\le& \frac{\sigma^2(\tilde p)}{\hat p^2(\tilde p)}+ \frac{[\hat p(\tilde p) -\mu(\tilde p)]^2}{\hat p^2(\tilde p)}\nonumber\\ \nonumber\\ &\le& \frac{|\hat\sigma^2(\tilde p)-\sigma^2(\tilde p)|}{4\hat\sigma^2(\tilde p)}+ \frac{[\hat \mu(\tilde p) -\mu(\tilde p)]^2}{\hat\sigma^2(\tilde p)}+7. \label{W.hat.bnd} \end{eqnarray} By Theorem~\ref{rate.thm}, we have $\|\hat\sigma^2-\sigma^2\|=O_p(\sqrt{r_ns_n})=o_p(1)$ and $\|\hat\mu-\mu\|=O_p(\sqrt{r_ns_n})=o_p(1)$. Fix an arbitrary positive~$\epsilon$ and define $A_{\epsilon}=(0,\epsilon)\cup(1-\epsilon,1)$. Applying the Cauchy-Schwarz inequality, and using the imposed technical modification of the ECAP approach to bound $\hat\sigma^2$ below, we derive \begin{equation*} \int_{A_{\epsilon}} \frac{|\hat\sigma^2(\tilde p)-\sigma^2(\tilde p)|}{\hat\sigma^2(\tilde p)}f^*(\tilde p)d\tilde p\le [\tilde PA_{\epsilon}]^{1/2} \frac{\|\hat\sigma^2-\sigma^2\|}{c\sqrt{r_ns_n}}= [\tilde PA_{\epsilon}]^{1/2}O_p(1)=O_p(\epsilon^{1/2}). 
\end{equation*} Similarly, since $|\hat\mu(\tilde p)-\mu(\tilde p)|\le 1$ for every~$\tilde p$, we derive \begin{equation*} \int_{A_{\epsilon}} \frac{|\hat\mu(\tilde p)-\mu(\tilde p)|^2}{\hat\sigma^2(\tilde p)}f^*(\tilde p)d\tilde p\le \int_{A_{\epsilon}} \frac{|\hat\mu(\tilde p)-\mu(\tilde p)|}{\hat\sigma^2(\tilde p)}f^*(\tilde p)d\tilde p= O_p(\epsilon^{1/2}). \end{equation*} Note that $|W_0(\tilde p)|\le 1$ for every~$\tilde p$. Thus, combining the bounds for the terms in~(\ref{W.hat.bnd}) with the corresponding bound for $|\hat W - W_0|$ in Theorem~\ref{rate.thm}, we derive \begin{equation*} \int_0^1 \big|\widehat W(\tilde p) - W_0(\tilde p)\big|f^*(\tilde p)d\tilde p=O_p(\epsilon^{1/2})+o_p(1). \end{equation*} As this bound holds for every positive~$\epsilon$, we deduce that $\int_0^1 \big|\widehat W(\tilde p) - W_0(\tilde p)\big|f^*(\tilde p)d\tilde p=o_p(1)$. \section{Proof of Theorem~\ref{rate.thm}} We build on the results of Theorem~\ref{g.thm} to derive the rate of convergence for~$\hat\mu$ and~$\widehat{W}$ for a fixed positive~$\epsilon$. Continuity and positivity of $\mu(\tilde p)$ and $p_0(\tilde p)$ imply that both functions are bounded away from zero on the interval $[\epsilon,1-\epsilon]$. Applying Lemma 10.9 in \cite{van2000applications}, we derive $\|\hat g - g^*\|_{\infty} = O_p(r_n^{3/4} s_n^{1/4})$. Because $n^{-8/21}\ll\lambda_n\ll 1$, we have $\|\hat g - g^*\|_{\infty} = o_p(1)$, which implies $\sup_{[\epsilon,1-\epsilon]}|\hat \mu(\tilde p) - \mu(\tilde p)| = o_p(1)$. Also note that $\hat p(\tilde p)\ge \hat \mu(\tilde p)$ for all~$\tilde p$. Consequently, there exists an event with probability tending to one, on which the random functions~$\hat p(\tilde p)$ and $\hat\mu(\tilde p)$ are bounded away from zero on the interval $[\epsilon,1-\epsilon]$. The stated error bounds for~$\hat p$ then follow directly from this observation and the error bounds for~$\hat g$ and~$\hat g'$ in Theorem~\ref{g.thm}.
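As an aside, the Cauchy-Schwarz step used in the consistency argument above is elementary and can be spot-checked on a grid; the density shape and test function below are arbitrary stand-ins chosen for illustration, not objects from the paper:

```python
import numpy as np

# Discretised spot-check of the Cauchy-Schwarz step: for any h and region A,
#   int_A |h| f* dp~  <=  (int_A f* dp~)^{1/2} (int h^2 f* dp~)^{1/2}.
grid = np.linspace(1e-4, 1 - 1e-4, 10_000)
dt = grid[1] - grid[0]
fstar = 0.5 + grid                     # illustrative density shape
fstar /= (fstar * dt).sum()            # normalise on the grid
h = np.sin(7 * grid) * np.exp(-grid)   # arbitrary test function

eps = 0.05
A = (grid < eps) | (grid > 1 - eps)    # the region A_eps near the endpoints

lhs = (np.abs(h) * fstar * dt)[A].sum()
rhs = np.sqrt((fstar * dt)[A].sum()) * np.sqrt((h**2 * fstar * dt).sum())
assert lhs <= rhs + 1e-12
print(lhs, rhs)
```

The inequality is exact for the discretised sums as well, so the check passes for any choice of density and test function.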
For the remainder of the proof we restrict our attention to the event~$A$ (whose probability tends to one), on which the functions~$p_0$ and~$\hat p$ are both bounded away from zero on $[\epsilon,1-\epsilon]$. We write~$p$ for the true probability corresponding to the observed~$\tilde p$ and define $G(q)=E[(p-q)^2/q^2|\tilde p]$. Let $p^*$ be the minimizer of $G$, given by $p^*=E[p|\tilde p]+{Var[p|\tilde p]}/{E[p|\tilde p]}$, and note that $G'(q)={2(q-p^*)E(p|\tilde p)}/{q^3}$. Denote by $\hat p^*$ our estimator of~$p^*$, which is obtained by replacing the conditional expected value and variance in the above formula by their ECAP estimators. While~$p^*$ and $\hat p^*$ depend on~$\tilde p$, we will generally suppress this dependence in the notation for simplicity. Note that for~$\tilde p\in[\epsilon,1-\epsilon]$, the functions~$p^*$ and~$\hat p^*$ are both bounded away from zero on the set~$A$. Fix an arbitrary~$\tilde p\le0.5$. Define events $A_1=A\cap\{p^*\le0.5,\hat p^*\le0.5\}$, $A_2=A\cap\{p^*>0.5,\hat p^*\le0.5\}$, $A_3=A\cap\{p^*\le0.5,\hat p^*>0.5\}$ and $A_4=A\cap\{p^*>0.5,\hat p^*>0.5\}$. Note that $A_4$ implies $\hat p=p_0=0.5$. Writing Taylor expansions for the function~$G$ near~$p^*$ and~$0.5$, we derive the following bounds, which hold for some constant~$c$ that depends only on~$\epsilon$: \begin{eqnarray*} \big|W_0(\tilde p)-\widehat{W}(\tilde p)\big|1_{\{A\}}&=&\big|G(p^*)-G(\hat p^*)\big|1_{\{A_1\}}+\big|G(0.5)-G(\hat p^*)\big|1_{\{A_2\}}+\big|G(p^*)-G(0.5)\big|1_{\{A_3\}}\\ &\le& c\big(p^*-\hat p^*\big)^21_{\{A_1\}}+c\big(0.5-\hat p^*\big)^21_{\{A_2\}}+c\big(p^*-0.5\big)^21_{\{A_3\}}\\ &\le& c\big(p^*-\hat p^*\big)^2. \end{eqnarray*} Analogous arguments yield the above bound for $\tilde p>0.5$. The rate of convergence for~$\widehat{W}$ then follows directly from the error bounds for~$\hat g$ and~$\hat g'$ in Theorem~\ref{g.thm}. \section{Proof of Theorem~\ref{bias.thm}} \label{prf.bias.thm} Throughout the proof we drop the subscript~$i$ for the simplicity of notation.
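Before proceeding, we record a quick symbolic check of the proof of Theorem~\ref{rate.thm} above: that $p^*=E[p|\tilde p]+Var[p|\tilde p]/E[p|\tilde p]$ minimises $G$ and that the displayed formula for $G'$ is correct. This is a sketch using sympy, with $m$ and $v$ standing for the conditional mean and variance:

```python
import sympy as sp

q, m, v = sp.symbols('q m v', positive=True)  # m = E(p|p~), v = Var(p|p~)

# G(q) = E[(p - q)^2 / q^2 | p~] = (v + (m - q)^2) / q^2
G = (v + (m - q)**2) / q**2
p_star = m + v / m                            # claimed minimiser

# G'(q) = 2 (q - p*) E(p|p~) / q^3, and G'(p*) = 0
assert sp.simplify(sp.diff(G, q) - 2 * (q - p_star) * m / q**3) == 0
assert sp.simplify(sp.diff(G, q).subs(q, p_star)) == 0
print("verified: p* minimises G and G'(q) = 2(q - p*) m / q^3")
```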
First note that the derivations in the proof of Theorem~\ref{EandVar} also give $E({\gamma^*}\alpha|\tilde p)=\mu$ and $Var({\gamma^*}\alpha|\tilde p)=\sigma^2$, where $\mu$ and $\sigma^2$ are respectively defined in \eqref{mui} and \eqref{sigmai}. These identities hold for both the unbiased and biased versions of the model. The only difference is in how ${\gamma^*}\alpha$ relates to $p$. Note that \begin{eqnarray} E(p|\tilde p) &=& E(h({\gamma^*}\alpha)|\tilde p)=(1-0.5\theta) E({\gamma^*}\alpha|\tilde p) - \theta[E({\gamma^*}^3\alpha^3|\tilde p)-1.5E({\gamma^*}^2\alpha^2|\tilde p)]\nonumber\\ \label{bias.e.proof}&=& (1-0.5\theta)\mu -\theta[s_3 + 3\mu\sigma^2 + \mu^3 - 1.5\sigma^2-1.5\mu^2], \end{eqnarray} where we use~$s_k$ to denote the $k$-th conditional central moment of ${\gamma^*}\alpha$ given~$\tilde p$. By Lemma~\ref{lem.mom.appr} in Appendix~\ref{append.sup.res}, the~$s_3$ term in~\eqref{bias.e.proof} is $O({\gamma^*}^{3/2})$, which leads to the stated approximation for $E(p|\tilde p)$. We also have $$Var(p|\tilde p) = Var(h({\gamma^*}\alpha)|\tilde p) = (1-0.5\theta)^2\sigma^2 + \theta a,$$ where $a = \theta Var({\gamma^*}^3 \alpha^3 - 1.5 {\gamma^*}^2\alpha^2|\tilde p) - (1-0.5\theta)Cov({\gamma^*}\alpha,{\gamma^*}^3\alpha^3 -1.5{\gamma^*}^2\alpha^2|\tilde p)$. It remains to approximate~$a$ up to an $O({\gamma^*}^{3/2})$ error. A routine calculation yields \begin{equation*} a=\sigma^2\big[3\mu(1-\mu)(3\theta\mu(1-\mu)-0.5\theta+1) \big]+O\Big(\sum_{k=3}^6[\sigma^k+s_k]\Big). \end{equation*} By Lemma~\ref{lem.mom.appr}, the remainder term is $O({\gamma^*}^{3/2})$, which completes the proof. \section{Proof of Theorem~\ref{EandVar.mixture}} We use the notation from the proof of Theorem \ref{EandVar}. In particular, we omit the subscript~$i$ throughout most of the proof, for the simplicity of the exposition.
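Before continuing, note that the raw-to-central moment conversions used in the proof of Theorem~\ref{bias.thm} above are purely algebraic, so they can be spot-checked on any toy distribution; the numbers below are arbitrary:

```python
import numpy as np

# An arbitrary small discrete distribution for X (standing in for gamma* alpha).
x = np.array([0.1, 0.4, 0.7])   # support points
w = np.array([0.2, 0.5, 0.3])   # probabilities

mu = w @ x                      # mean
sigma2 = w @ (x - mu)**2        # variance
s3 = w @ (x - mu)**3            # third central moment

# Conversions used in the expansion of E(p|p~):
#   E X^3 = s_3 + 3 mu sigma^2 + mu^3,   E X^2 = sigma^2 + mu^2
assert np.isclose(w @ x**3, s3 + 3 * mu * sigma2 + mu**3)
assert np.isclose(w @ x**2, sigma2 + mu**2)
print("moment conversions verified")
```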
We represent $\tilde p$ as $\sum_{k=1}^KI_{\{\mathcal{I}=k\}}\xi_k$, where $\xi_k|p\sim Beta(\alpha_k,\beta_k)$, $\alpha_k=c_kp/\gamma^*$, $\beta_k=(1-c_kp)/\gamma^*$, and~$\mathcal{I}$ is a discrete random variable independent of~$p$ and $\xi_k$, whose probability distribution is given by $P(\mathcal{I}=k)=w_k$ for $k=1,...,K$. Note that \begin{equation*} f_{\xi_k}(\tilde p|p)=B\Big(\frac{c_kp}{\gamma^*}, \frac{1-c_kp}{\gamma^*}\Big)^{-1}\tilde p^{\frac{c_kp}{\gamma^*} -1 }(1-\tilde p)^{\frac{1-c_kp}{\gamma^*}-1}. \end{equation*} Hence, writing $B$ for $B(\frac{c_kp}{\gamma^*}, \frac{1-c_kp}{\gamma^*})$, we derive \begin{equation*} \begin{split} \log(f_{\xi_k}(\tilde p|p) )&=-\log B + \Big( \frac{c_kp}{\gamma^*} -1 \Big) \log\tilde p + \Big( \frac{1-c_kp}{\gamma^*} -1 \Big) \log(1-\tilde p) \\ &= -\log B +\frac{c_kp}{\gamma^*}\log\tilde p -\log\tilde p+\frac{1-c_kp}{\gamma^*} \log(1-\tilde p)-\log(1-\tilde p)\\ &= -\log B + p\frac{c_k}{\gamma^*} \log\Big( \frac{\tilde p}{1-\tilde p}\Big) -\log\tilde p +\frac{1}{\gamma^*} \log(1-\tilde p)-\log(1-\tilde p) \\ &= -\log B +\eta x - \log\tilde p +\Big(\frac{1-\gamma^*}{\gamma^*}\Big)\log(1-\tilde p), \end{split} \end{equation*} where we define $\eta=p\frac{c_k}{\gamma^*}$ and $x=\log \big({\tilde p}/[{1-\tilde p}]\big)$. Repeating the derivations in the proof of Theorem \ref{EandVar} directly below display~(\ref{eq.exp.fam}), we derive \begin{equation*} \begin{split} E(p|\xi_k=\tilde p)&=\frac{1}{c_k} \Big[ \gamma^*\Big(g^*(\tilde p)+1-2\tilde p\Big)+\tilde p\Big] \\ Var(p|\xi_k=\tilde p) &= \frac{1}{c_k^2} \Big[ \gamma^{*2} \tilde p(1-\tilde p) \big({g^*}'(\tilde p)-2\big) +\gamma^*\tilde p(1-\tilde p) \Big].
\end{split} \end{equation*} Consequently, \begin{equation} \begin{split} E(p|\tilde p, \mathcal{I}=k)&=E(p|\xi_k=\tilde p)=\frac{1}{c_k} \Big[ \gamma^*\Big(g^*(\tilde p)+1-2\tilde p\Big)+\tilde p\Big] \\ Var(p|\tilde p, \mathcal{I}=k) &= Var(p|\xi_k=\tilde p)=\frac{1}{c_k^2} \Big[ \gamma^{*2} \tilde p(1-\tilde p) \big({g^*}'(\tilde p)-2\big) +\gamma^*\tilde p(1-\tilde p) \Big]. \end{split} \end{equation} Applying the law of total expectation and using the fact that~$\mathcal{I}$ and~$\tilde p$ are independent, we derive $$E(p|\tilde p)= \sum_{k=1}^Kw_kE(p|\tilde p,\mathcal{I}=k)= \sum_{k=1}^K \frac{w_k}{c_k} \Big[ \gamma^*\Big( g^{*}(\tilde p)+1-2\tilde p \Big) +\tilde p \Big]. $$ By the law of total variance, we also have \begin{eqnarray*} Var(p|\tilde p)&=& \sum_{k=1}^Kw_k Var(p|\tilde p,\mathcal{I}=k)+\sum_{k=1}^Kw_kE^2(p|\tilde p,\mathcal{I}=k)-\big[\sum_{k=1}^Kw_kE(p|\tilde p,\mathcal{I}=k)\big]^2\\ &=& \sum_{k=1}^K \frac{w_k}{c_k^2} \big[ \gamma^{*2} \tilde p(1-\tilde p) ({g^*}'(\tilde p)-2) +\gamma^*\tilde p(1-\tilde p) \big] + \big[ \gamma^*(g^*(\tilde p)+1-2\tilde p)+\tilde p\big] ^2 \big[ \sum_{k=1}^K\frac{w_k}{c_k^2}- \big( \sum_{k=1}^K\frac{w_k}{c_k} \big)^2\big]. \end{eqnarray*} To complete the proof, we use formulas $$\mu_i=\tilde p_i+\gamma^*[g^*(\tilde p_i)+1-2\tilde p_i]\qquad\text{and}\qquad \sigma_i^2=\gamma^*\tilde p_i(1-\tilde p_i)+\gamma^{*2}\tilde p_i(1-\tilde p_i)[g^{*'}(\tilde p_i)-2]$$ to rewrite the above expressions as $E(p_i|\tilde p_i) = \mu_i \sum_{k=1}^K {w_k}/{c_k}$ and \begin{equation*} Var(p_i|\tilde p_i)=\sum_{k=1}^K \frac{w_k}{c_k^2} \sigma_i^2 + \mu_i^2\sum_{k=1}^K \frac{w_k}{c_k^2} - \mu_i^2\Big( \sum_{k=1}^K \frac{w_k}{c_k} \Big)^2 = (\sigma_i^2+\mu_i^2)\sum_{k=1}^K \frac{w_k}{c_k^2} -\mu_i^2 \Big( \sum_{k=1}^K \frac{w_k}{c_k} \Big)^2.
\end{equation*} \section{Supplementary Results} \label{append.sup.res} \begin{lemma} \label{lem1} Under the conditions of Theorem~\ref{rate.thm}, there exists a function $g^*_N\in\mathcal{G}_N$, such that $\|g^*_N-g^*\|^2=O_p(\lambda_n^{2})$ and \begin{equation*} \|\hat g - g^*_N\|^2 + \lambda_n^2 I^2(\hat g)\le O_p\Big(n^{-2/7}\|\hat g - g^*_N\|\Big) + O_p\Big(n^{-4/7} I(\hat g) \Big)+O_p\Big(n^{-4/7} +\lambda_n^2\Big). \end{equation*} \end{lemma} \noindent\textbf{Proof of Lemma~\ref{lem1}}. We will use the empirical process theory notation and write ${\tilde P}_n g$ and~${\tilde P}g$ for $(1/n)\sum_{i=1}^n g(\tilde p_i)$ and $\int_0^1 g(\tilde p)f^*(\tilde p)d\tilde p$, respectively. Using the new notation, criterion~(\ref{risk.criterion}) can be written as follows: \begin{equation*} Q_n(g) = {\tilde P}_n g^2 + {\tilde P}_n s_g + \lambda_n^2 I^2(g). \end{equation*} As we showed in the proof of Theorem~\ref{risk.lemma}, equality ${\tilde P}g^2 + {\tilde P} s_g = \|g-g^*\|^2$ holds for every candidate function~$g\in\mathcal{G}_N$. Consequently, \begin{equation*} Q_n(g) = \|g-g^*\|^2 + ({\tilde P}_n-{\tilde P})g^2 + ({\tilde P}_n-{\tilde P}) s_g + \lambda_n^2 I^2(g). \end{equation*} Let $g^*_N$ be a function in $\mathcal{G}_N$ that interpolates~$g^*$ at points $\{0,\tilde p_1,...,\tilde p_n,1\}$, with two additional constraints: ${g^*_N}'(0)={g^*}'(0)$ and ${g^*_N}'(1)={g^*}'(1)$. A standard partial integration argument \citep[similar to that in ][for example]{green1993nonparametric} shows that $I(g^*_N)\le I(g^*)$, which also implies that ${g^*_N}'$ is uniformly bounded. Furthermore, we have $\|g^*_N-g^*\|_{\infty}=O_p(\log(n)/n)$ by the maximum spacing results for the uniform distribution \citep[for example]{shorack2009empirical}, the boundedness away from zero assumption on~$f^*$ and the boundedness of ${g^*_N}'$. Consequently, $\|g^*_N-g^*\|^2=O_p(\lambda_n^2)$. 
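As a quick illustration of the maximum-spacing rate invoked above (a simulation sketch, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(1)

# The maximum spacing of n uniform points (including the endpoints 0 and 1)
# concentrates at the log(n)/n scale, matching the O_p(log(n)/n) rate above.
for n in [10**3, 10**4, 10**5]:
    max_gaps = []
    for _ in range(100):
        u = np.sort(rng.uniform(size=n))
        gaps = np.diff(np.concatenate(([0.0], u, [1.0])))
        max_gaps.append(gaps.max())
    ratio = np.mean(max_gaps) / (np.log(n) / n)
    print(n, round(ratio, 2))   # the ratio stays bounded as n grows
```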
Because $Q_n(\hat g)\le Q_n(g^*_N)$, we then have \begin{equation} \label{basic_ineq} \|\hat g - g^*_N\|^2 + \lambda_n^2 I^2(\hat g)\le ({\tilde P}_n-{\tilde P})[{g^*_N}^2-\hat g^2] + ({\tilde P}_n-{\tilde P})[s_{g^*_N}-s_{\hat g}] + \lambda_n^2 [I^2(g^*_N)+1]. \end{equation} Note that \begin{equation} \label{g-sq.ineq} ({\tilde P}_n-{\tilde P})[{g^*_N}^2-\hat g^2] =-({\tilde P}_n-{\tilde P})[\hat g-{g^*_N}]^2 - 2({\tilde P}_n-{\tilde P}){g^*_N}[\hat g-{g^*_N}]. \end{equation} Applying Lemma~17 in \cite{meier2009high}, in conjunction with Corollary~5 from the same paper, in which we take $\gamma=2/5$ and $\lambda=n^{-1/2}$, we derive \begin{equation} \label{nu_n_dg2} ({\tilde P}_n-{\tilde P})[{g^*_N}-\hat g]^2 = O_p\Big(n^{-1/5}\|\hat g - g^*_N\|^2\Big) + O_p\Big(n^{-1} I^2(\hat g - g^*_N) \Big). \end{equation} Applying Corollary 5 in \cite{meier2009high} with the same~$\gamma$ and~$\lambda$ yields $$({\tilde P}_n-{\tilde P}){g^*_N}[\hat g-{g^*_N}] = O_p\Big(n^{-2/5}\sqrt{\|{g^*_N}[\hat g - g^*_N]\|^2 + n^{-4/5} I^2({g^*_N}[\hat g - g^*_N])}\Big).$$ Using Lemma 10.9 in \cite{van2000applications} to express the $L_2$ norm of the first derivative in terms of the norms of the second derivative and the original function, we derive \begin{equation} \label{nu_n_dg} ({\tilde P}_n-{\tilde P}){g^*_N}[\hat g-{g^*_N}] = O_p\Big(n^{-2/5}\|\hat g - g^*_N\|\Big) + O_p\Big(n^{-4/5} I(\hat g - g^*_N) \Big). \end{equation} Applying Corollary 5 in \cite{meier2009high} with $\gamma=2/3, \lambda=n^{-3/14}$ and using Lemma 10.9 in \cite{van2000applications} again, we derive \begin{equation*} ({\tilde P}_n-{\tilde P})[s_{g^*_N}-s_{\hat g}] = O_p\Big(n^{-3/7}\|s_{g^*_N}-s_{\hat g}\|\Big) + O_p\Big(n^{-4/7} I(\hat g - g^*_N) \Big).
\end{equation*} Hence, by Lemma 10.9 in \cite{van2000applications}, \begin{equation*} ({\tilde P}_n-{\tilde P})[s_{g^*_N}-s_{\hat g}] = O_p\Big(n^{-3/7}\|\hat g - g^*_N\|^{1/2}I^{1/2}(\hat g - g^*_N)\Big) + O_p\Big(n^{-4/7} I(\hat g - g^*_N) \Big), \end{equation*} which leads to \begin{equation} \label{nu_n_ds}({\tilde P}_n-{\tilde P})[s_{g^*_N}-s_{\hat g}] = O_p\Big(n^{-2/7}\|\hat g - g^*_N\|\Big) + O_p\Big(n^{-4/7} I(\hat g) \Big)+O_p\Big(n^{-4/7}I(g^*_N)\Big). \end{equation} Combining (\ref{basic_ineq})-(\ref{nu_n_ds}), and noting the imposed assumptions on~$\lambda_n$, we arrive at \begin{equation*} \|\hat g - g^*_N\|^2 + \lambda_n^2 I^2(\hat g)\le O_p\Big(n^{-2/7}\|\hat g - g^*_N\|\Big) + O_p\Big(n^{-4/7} I(\hat g) \Big)+O_p\Big(n^{-4/7} +\lambda_n^2\Big), \end{equation*} which completes the proof of Lemma~\ref{lem1}. \begin{lemma} \label{lem.mom.appr} Under the conditions of Theorem~\ref{bias.thm}, \begin{equation*} \sigma^2=O({\gamma^*}) \quad\text{and}\quad s_k=O({\gamma^*}^{3/2}),\;\;\text{for}\;\; k\ge3. \end{equation*} \end{lemma} \noindent \textbf{Proof of Lemma~\ref{lem.mom.appr}}. We first show that $E(t-\tilde p|\tilde p)=O(\sqrt{\gamma^*})$ as $\gamma^*$ tends to zero, where we write~$t$ for the quantity $\alpha\gamma^*=h_{\theta}^{-1}(p)$. This result will be useful for establishing the stated bound for~$s_k$. Throughout the proof we use the expression $\gtrsim$ to denote inequality~$\ge$ up to a multiplicative factor equal to a positive constant that does not depend on~$\gamma^*$. We adopt an analogous convention for the expression~$\lesssim$. We write $f_{t}(\tilde p)$ for the conditional density of $\tilde p$ given~$t$, write $f(\tilde p)$ for the marginal density of $\tilde p$, and write $m_{\theta}(t)$ for the marginal density of $t$. In the new notation, we have \begin{equation*} E\big(|t-\tilde p|\big|\tilde p\big)={\int\limits_{0}^{1}|t-\tilde p|f_{t}(\tilde p)m_{\theta}(t)[f(\tilde p)]^{-1}dt}.
\end{equation*} Using Stirling's approximation for the Gamma function, $\Gamma(x)=e^{-x}x^{x-1/2}(2\pi)^{1/2}[1+O(1/x)]$, and applying the bound $x\Gamma(x)=O(1)$ when~$t$ is close to zero or one, we derive the following bounds as $\tau=1/\gamma^*$ tends to infinity, where we write $q=1-t$ and $\tilde q=1-\tilde p$: \begin{eqnarray*} \sqrt{\tau}E\big(|t-\tilde p|\big|\tilde p\big) &=&\int\limits_0^1 \sqrt{\tau}|t-\tilde p|\frac{\Gamma(\tau)}{\Gamma(t\tau)\Gamma(q\tau)}\tilde p^{t\tau-1}\tilde q^{q\tau-1}m_{\theta}(t)[f(\tilde p)]^{-1}dt\\ &\lesssim&\int\limits_0^1 \sqrt{\tau}|t-\tilde p|\frac1{\sqrt{2\pi}}[\tilde p/t]^{t\tau}(\tilde q/q)^{q\tau}\sqrt{tq\tau}m_{\theta}(t)[f(\tilde p)]^{-1}[\tilde p\tilde q]^{-1}dt\\ &\lesssim& \int\limits_{0}^{1} \sqrt{\tau}|t-\tilde p|e^{-\frac{\tau(t-\tilde p)^2}{18}}\sqrt{\tau}dt. \end{eqnarray*} Implementing a change of variable, $v=\sqrt{\tau}(t-\tilde p)$, we derive \begin{equation*} \sqrt{\tau}\big|E\big(t-\tilde p\big|\tilde p\big)\big|\lesssim \int\limits_{\mathbb{R}} |v|e^{-v^2/18}dv=O(1). \end{equation*} Consequently, $E(t-\tilde p|\tilde p)=O(1/\sqrt{\tau})=O(\sqrt{\gamma^*})$. We now bound $E\big([t-\tilde p]^2\big|\tilde p\big)$ using a similar argument. Following the arguments in the derivations above, we arrive at \begin{eqnarray*} \tau E\big([t-\tilde p]^2\big|\tilde p\big)&\lesssim& \int\limits_{0}^{1} \tau(t-\tilde p)^2\frac1{\sqrt{2\pi}}e^{-\frac{\tau(t-\tilde p)^2}{18}}\sqrt{\tau}m_{\theta}(t)dt. \end{eqnarray*} Implementing a change of variable, $v=\sqrt{\tau}(t-\tilde p)$, we conclude that \begin{equation*} \tau E\big([t-\tilde p]^2\big|\tilde p\big)\lesssim \int\limits_{\mathbb{R}} v^2e^{-v^2/18}dv = O(1). \end{equation*} Thus, we have established \begin{equation} \label{s_k.bnds.12} E\big([t-\tilde p]^k\big|\tilde p\big)=O({\gamma^*}^{k/2}), \quad\text{for}\quad k\in\{1,2\}. \end{equation} Analogous arguments lead to bounds $E\big([t-\tilde p]^k\big|\tilde p\big)=O({\gamma^*}^{3/2})$ for $k\ge3$.
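The two Gaussian-type integrals above have closed forms; a symbolic check (using symmetry for the first one):

```python
import math
import sympy as sp

v = sp.symbols('v', positive=True)

# int_R |v| e^{-v^2/18} dv = 2 int_0^oo v e^{-v^2/18} dv = 18
I1 = 2 * sp.integrate(v * sp.exp(-v**2 / 18), (v, 0, sp.oo))

# int_R v^2 e^{-v^2/18} dv = 27 sqrt(2 pi)
w = sp.symbols('w', real=True)
I2 = sp.integrate(w**2 * sp.exp(-w**2 / 18), (w, -sp.oo, sp.oo))

assert I1 == 18
assert abs(float(I2) - 27 * math.sqrt(2 * math.pi)) < 1e-9
print(I1, float(I2))
```

Both integrals are finite constants, which is all the $O(1)$ bounds above require.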
We complete the proof of the lemma by noting that \begin{equation*} s_k=O\Big(E\Big([t-\tilde p]^k\Big|\tilde p\Big)\Big)+O\big({\gamma^*}^{3/2}\big), \quad\text{for}\quad k\ge 2. \end{equation*} When $k=2$ the above approximation follows from $\sigma^2\le E\big([t-\tilde p]^2\big|\tilde p\big)$, and when $k=3$ it follows from~(\ref{s_k.bnds.12}) and \begin{eqnarray*} s_3&=&E\Big([t-\tilde p]^3\Big|\tilde p\Big)+3E\Big([t-\tilde p]^2\Big|\tilde p\Big)E\Big(\tilde p-t\Big|\tilde p\Big) +3E\Big(t-\tilde p\Big|\tilde p\Big)E^2\Big(\tilde p-t\Big|\tilde p\Big)+E^3\Big(\tilde p-t\Big|\tilde p\Big)\\ &=&E\Big([t-\tilde p]^3\Big|\tilde p\Big)+O\big({\gamma^*}^{3/2}\big). \end{eqnarray*} The derivations for~$k\ge4$ are analogous. \end{document}
\begin{document} \begingroup \centering {\Large\textbf{Oracles and query lower bounds in generalised probabilistic theories} \\[1.5em] \normalsize Howard Barnum\textsuperscript{$\dagger,\mathsection$,} \footnote{Electronic address: [email protected]}, Ciar\'{a}n M. Lee\textsuperscript{$\ddagger$,}\footnote{ Electronic address: [email protected]}, John H. Selby\textsuperscript{$\ast,\bullet$,}\footnote{Electronic address: [email protected]} } \\[1em] \it \textsuperscript{$\dagger$} Mathematical Sciences, University of Copenhagen, Denmark.\\ \textsuperscript{$\mathsection$} Department of Physics and Astronomy, University of New Mexico, Albuquerque, USA. \\ \textsuperscript{$\ddagger$} Department of Physics and Astronomy, University College London, UK. \\ \textsuperscript{$\ast$} University of Oxford, Department of Computer Science, OX1 3QD, UK. \\ \textsuperscript{$\bullet$} Imperial College London, London SW7 2AZ, UK. \endgroup \begin{abstract} \small{ We investigate the connection between interference and computational power within the operationally defined framework of generalised probabilistic theories. To compare the computational abilities of different theories within this framework, we show that any theory satisfying four natural physical principles possesses a well-defined oracle model. Indeed, we prove a subroutine theorem for oracles in such theories, which is a necessary condition for the oracle model to be well-defined. The four principles are: causality (roughly, no signalling from the future), purification (each mixed state arises as the marginal of a pure state of a larger system), strong symmetry (existence of a rich set of nontrivial reversible transformations), and informationally consistent composition (roughly: the information capacity of a composite system is the sum of the capacities of its constituent subsystems).
Sorkin has defined a hierarchy of conceivable interference behaviours, where the order in the hierarchy corresponds to the number of paths that have an irreducible interaction in a multi-slit experiment. Given our oracle model, we show that if a classical computer requires at least $n$ queries to solve a learning problem (because fewer queries provide no information about the solution), then the corresponding ``no-information'' lower bound in theories lying at the $k$th level of Sorkin's hierarchy is $\lceil{n/k}\rceil$. This lower bound leaves open the possibility that quantum oracles are less powerful than general probabilistic oracles, although it is not known whether the lower bound is achievable in general. Hence searches for higher-order interference are not only foundationally motivated, but constitute a search for a computational resource that might have power beyond that offered by quantum computation.} \end{abstract} Landauer's Principle \cite{landauer1961irreversibility} states that any logically irreversible processing or manipulation of information, such as the erasure of a bit, must always be accompanied by an entropy increase in the environment of the system processing the information. As this illustrates, information is intimately tied to the physical system that embodies it and is hence bound by physical law---alternatively, \emph{information is physical}. If information processing---or computation---is bound by physical law, then the ultimate limits of computation should be derivable from natural physical principles. Indeed, the advent of quantum computation demonstrated that different physical principles lead to different limits on computational power. This naturally leads to the question of what general relationships hold between computational power and physical principles.
This question has recently been studied in the framework of generalised probabilistic theories \cite{paterek2010theories,lee2015computation,lee2015proofs,lee2016deriving,landscape,lee2016information}, which contains operationally-defined physical theories that generalise the probabilistic formalism of quantum theory. By studying how computational power varies as the underlying physical theory is changed, one can determine the connection between physical principles and computational power in a manner not tied to the specific mathematical manifestation of a particular principle within a theory. Most previous research into computation within the generalised probabilistic theory framework has focused on deriving general \emph{limitations} on computational ability from natural physical principles\footnote{With the exception of \cite{landscape}, which constructs a theory capable of post-quantum computation. However, whether this computational advantage is directly tied to a simple physical principle remains unclear.}. No work to date has tied a computational \emph{advantage} directly to a physical principle. For instance, it is known that quantum interference between computational paths is a resource for post-classical computation \cite{stahlke2014quantum}, but it is not clear whether the presence of interference in a general theory entails post-classical computation \cite{lee2016generalised}, nor whether post-quantum interference behaviour is in general a resource for post-quantum computation \cite{lee2016deriving}. The former point concerns whether it is just the particular mathematical description of interference in Hilbert space which can be exploited to provide an advantage over classical computation or whether such a statement can be seen to directly follow from the observation of interference in nature, and the latter concerns whether ``more'' interference implies, or at least can sometimes allow, ``more'' computational power. 
Indeed, as first noted by Rafael Sorkin \cite{sorkin1994quantum,sorkin1995quantum}, there is a limit to quantum interference---at most \emph{pairs} of computational paths can ever interact in a fundamental way. Sorkin has defined a hierarchy of operationally conceivable interference behaviours---currently under experimental investigation \cite{sinha2008testing,park2012three,sinha2010ruling,kauten2015obtaining}---where classical theory is at the first level of the hierarchy and quantum theory belongs to the second. Informally, the order in the hierarchy corresponds to the number of paths that have an irreducible interaction in a multi-slit experiment. Given this definition, one can investigate the role interference plays in computation in a theory-independent manner by asking whether theories at level $k$ possess a computational advantage over theories at level $k-1, k-2, \dots$. One usually demonstrates the existence of a quantum advantage over classical computation using \emph{oracles}. Indeed, the Deutsch-Jozsa problem \cite{nielsen2010quantum} is such an example: given a function $f:\{0,1\}^n\rightarrow\{0,1\}$, one is asked to determine whether it is constant (same output for all inputs) or balanced ($0$ on exactly half of the inputs)---promised that it is one of these cases. Performance is quantified by the number of queries needed to solve the problem, where each query asks an oracle to evaluate $f$ on a desired input. A deterministic classical computer requires $2^{n-1}+1$ queries, but Deutsch and Jozsa showed that a quantum computer can solve the problem in a single query to an appropriately defined \emph{quantum} oracle. Hence, to compare the computational abilities of different theories within the framework, a well-defined oracle model is needed.
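The single-query quantum solution just described is easy to reproduce numerically. The sketch below simulates the standard phase-oracle form of the Deutsch-Jozsa algorithm with numpy; the function names are ours:

```python
import numpy as np

def deutsch_jozsa_zero_prob(f, n):
    """Simulate one query of Deutsch-Jozsa for f: {0,...,2^n - 1} -> {0,1}.
    Returns the probability of measuring the all-zeros string on the query
    register: 1 for constant f, 0 for balanced f."""
    N = 2 ** n
    amp = np.ones(N) / np.sqrt(N)                           # Hadamards on |0...0>
    amp = amp * np.array([(-1) ** f(x) for x in range(N)])  # phase oracle
    a0 = amp.sum() / np.sqrt(N)   # amplitude of |0...0> after the final Hadamards
    return abs(a0) ** 2

n = 3
constant = lambda x: 0
balanced = lambda x: bin(x).count("1") % 2   # parity is balanced on n bits

print(deutsch_jozsa_zero_prob(constant, n))  # close to 1
print(deutsch_jozsa_zero_prob(balanced, n))  # close to 0
```

A single oracle call thus distinguishes the two promised cases with certainty, whereas a deterministic classical algorithm needs $2^{n-1}+1$ evaluations in the worst case.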
We show that such a model can be defined in any theory satisfying the following physical principles: \emph{causality} (roughly, no signalling from the future), \emph{purification} (each mixed state arises as the marginal of a pure state of a larger system), and \emph{strong symmetry} (existence of a rich group of non-trivial reversible transformations){\color{black}; additionally, we demand \emph{informationally consistent composition} (the act of bringing systems together cannot create or destroy information)}. Moreover, we prove a subroutine theorem for theories satisfying these principles. That is, we show that having access to an oracle for a particular decision problem which can be efficiently solved in a given theory does not provide any more computational power than just using the efficient algorithm itself. Such a result was proved for quantum theory by Bennett et al. in \cite{bennett1997strengths}, and is a necessary condition for a well-defined oracle model. Given this oracle model, we investigate whether lower bounds on the number of queries needed by a quantum computer to solve certain computational problems can be reduced in theories which possess higher-order interference and satisfy the principles discussed above. Indeed, we generalise results due to Meyer and Pommersheim \cite{meyer2011uselessness} and show that if a particular type of lower bound to a given query problem (from a fairly large class of such problems) is $n$ using a classical computer, then the corresponding lower bound for the same problem in theories with $k$th order interference is $\lceil{n/k}\rceil$. As quantum theory only exhibits second-order interference, this leaves open the possibility that theories with post-quantum interference allow for post-quantum computation.
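The statement that quantum theory sits at the second level of Sorkin's hierarchy can be checked directly: for any amplitudes assigned to three slits, the two-slit interference term is generically nonzero, while the three-slit term vanishes identically. A minimal numerical sketch (our notation):

```python
import numpy as np

rng = np.random.default_rng(0)

def prob(psi, slits):
    """Detection probability with the given subset of slits open:
    amplitudes of open paths add coherently (Born rule)."""
    return abs(sum(psi[i] for i in slits)) ** 2

# Random complex amplitudes for three slits.
psi = rng.normal(size=3) + 1j * rng.normal(size=3)
P = lambda *s: prob(psi, s)

# Second-order interference: generically nonzero in quantum theory.
I2 = P(0, 1) - P(0) - P(1)

# Third-order (Sorkin) interference: vanishes identically in quantum theory.
I3 = P(0, 1, 2) - P(0, 1) - P(0, 2) - P(1, 2) + P(0) + P(1) + P(2)

assert abs(I2) > 1e-8    # pairs of paths do interfere
assert abs(I3) < 1e-10   # no irreducible three-path interference
print(I2, I3)
```

Expanding $|a+b+c|^2$ shows that all cross terms are pairwise, which is exactly why $I_3$ cancels; a theory at a higher level of the hierarchy would produce a nonzero $I_3$.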
For example, in the problem where we are asked to determine the parity of a function $f:\{1,\dots,k\}\rightarrow\{0,1\}$ (which generalises the special case of the Deutsch-Jozsa problem for functions $f:\{0,1\} \rightarrow \{0,1\}$), $\lceil{k/2}\rceil$ quantum queries are needed, but we show that in any theory satisfying our principles which has $k$th order interference, our generalisation of the Meyer-Pommersheim useless-queries bound leaves open the possibility that the parity can be determined with a single query. Thus the possibility remains open that the power of computation can be improved by modifying interference behaviour in an operationally conceivable manner. Hence searches for higher-order interference are not only foundationally motivated, but constitute a search for a computational resource potentially beyond that offered by quantum computation. {\color{black} An important direction for future work is to determine whether in theories satisfying these principles it is possible to find an algorithm that reaches this lower bound. We discuss potential ways to do this in the conclusion. } Other authors have considered computation beyond the usual quantum formalism from a different perspective to the one employed here. For example, Aaronson has considered alternate modifications of quantum theory, such as a hidden variable model in which the history of hidden states can be read out by the observer, and---together with collaborators in \cite{aaronson2016space}---a model in which one is given the ability to perform certain unphysical non-collapsing measurements. Both of these models have been shown to entail computational speed-ups over the usual quantum formalism. Additionally, Bao et al.
\cite{bao2015grover} have investigated computation in modifications of quantum theory suggested by the black hole information loss paradox and have shown that the ability to signal faster than light in such theories is intimately linked to a speed-up over standard quantum theory in searching an unstructured database. In contrast, the generalised probabilistic theory framework employed here allows for an investigation of query lower bounds and computational advantages in alternate theories that are physically reasonable and which, for instance, do not allow for superluminal signalling \cite{barrett2007information}, cloning \cite{barnum2007generalized}, or other phenomena that are arguably undesirable features of a theory. The paper is organised as follows. In section~\ref{framework}, the generalised probabilistic theory framework will be introduced along with our physical principles and the definition of higher-order interference. In section~\ref{section:oracles} the oracle model will be defined and the subroutine theorem will be stated, with the proof presented in appendix~\ref{Proof of subroutine}. Finally, in section~\ref{lower bound} we derive lower bounds on query problems that directly follow from our principles. \section{The framework} \label{framework} One of the fundamental requirements of a physical theory is that it should provide a consistent account of experimental observations. This viewpoint underlies the framework of generalised probabilistic theories \cite{barrett2007information,chiribella2010probabilistic,hardy2011reformulating,lee2017no}. A generalised probabilistic theory specifies a set of laboratory devices that can be connected together in different ways to form experiments and specifies probability distributions over experimental outcomes. A device comes equipped with input ports, output ports, and a classical pointer.
When a device is used in an experiment, the classical pointer comes to rest in one of a number of positions (``values''), indicating an outcome has occurred. Intuitively, one envisages \emph{physical systems} passing between the ports of these devices. Such physical systems come in different types, denoted $A,B,\dots$. One constructs experiments by composing devices both sequentially and in parallel, and when composed sequentially, types must match. In this framework, closed circuits---those with no unconnected ports and no cycles---are associated with probabilities: a probability for each assignment of values to all classical pointers appearing in the circuit. These add to unity when summed over all assignments of values to pointers in the circuit. We use the term ``element'' for a pair of device and pointer value; circuits may also be constructed of such elements, and closed circuits of this type are associated with individual probabilities. Equivalence classes of elements with no input ports are called \emph{states}, those with no output ports \emph{effects}, and those with both input and output ports \emph{transformations}. The set of all states of system $A$ is denoted $\mathrm{St(A)}$, the set of all effects on $B$ is denoted $\mathrm{Eff(B)}$ and the set of transformations between systems $A$ and $B$ is denoted $\mathrm{Transf(A,B)}$. Using standard operational assumptions and arguments \cite{lee2015computation,chiribella2010probabilistic,barrett2007information}, one can show that the sets of states, effects and transformations each give rise to a real vector space with transformations and effects acting linearly on the real vector space of states. We assume in this work that all vector spaces are finite dimensional\footnote{Operationally this can be seen as saying that one does not need to perform an infinite number of distinct experiments to characterise states.}.
A state is said to be \emph{pure} if it does not arise as a \emph{coarse-graining} of other states\footnote{The process $\{\mathcal{U}_j\}_{j\in{Y}}$, where $j$ indexes the positions of the classical pointer, is a coarse-graining of the process $\{\mathcal{E}_i\}_{i\in{X}}$ if there is a disjoint partition $\{X_j\}_{j\in{Y}}$ of $X$ such that $\mathcal{U}_j=\sum_{i\in{X_j}}\mathcal{E}_i$.}; a pure state is one for which we have maximal information. A state is \emph{mixed} if it is not pure. A state $\omega$ is \emph{completely mixed} if for every pure state $\chi$, $\omega$ can be expressed as a coarse-graining of a set of states that includes $\chi$. Similarly, one says a transformation, respectively an effect, is pure if it does not arise as a coarse-graining of other transformations, respectively effects. It can be shown that reversible transformations preserve pure states \cite{chiribella2015entanglement}. We say that a measurement is pure if all of its outcomes are pure effects. A state is \emph{maximally mixed} if, when expressed as a convex combination $\omega = \sum_{i \in S} p_i \omega_i$ of perfectly distinguishable pure states, as is always possible given our assumptions, the probabilities $p_i$ are uniform ($p_i = 1/|S|$). The following `Dirac-like' notation $_A|s_i)$ will be used to represent a state\footnote{Or, more accurately, the real vector corresponding to the state.} of system $A$, and $(e_{r}|_B$ to represent an effect on $B$. Here $i$ and $r$ represent the positions of the classical pointers on the device that prepares the state and the device that performs the measurement, respectively. The full measurement is defined by the collection $\{(e_r|\}_r$.
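In the quantum case, for instance, the pairing $(e_r|s_i)$ is realised as $\mathrm{Tr}(E_r\rho_i)$ for a POVM $\{E_r\}$; a minimal sketch (the state and measurement are illustrative choices of our own):

```python
import numpy as np

# Quantum realisation of the pairing (e_r|s_i): probabilities are Tr(E_r rho)
# for a POVM {E_r}.
rho = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)   # the pure state |+><+|
E = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]          # computational-basis POVM

probs = [float(np.real(np.trace(Er @ rho))) for Er in E]

# The full measurement {(e_r|}_r is the whole collection {E_r}; its elements
# sum to the identity (the quantum deterministic effect), so probs sums to 1.
assert np.allclose(E[0] + E[1], np.eye(2))
```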
States, effects, and transformations can be represented diagrammatically: \[\begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=cpoint] (0) at (-0.75, -0) {\huge{${s_i}$}}; \node [style=small box,minimum size=0.75cm] (1) at (1.5, -0) {$T$}; \node [style=cocpoint] (2) at (3.75, -0) {\huge{$e_r$}}; \node [style=none] (3) at (5.5, -0) {$:=$}; \node [style=none] (4) at (8.5, -0) {$(e_r|_B\,T\,{}_A|s_i)$}; \node [style=none] (5) at (0.25, 0.5) {$A$}; \node [style=none] (6) at (2.75, 0.5) {$B$}; \node [style=none] (7) at (10.5, -0) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0) to (1); \draw (1) to (2); \end{pgfonlayer} \end{tikzpicture}\] The above diagram represents the joint probability of preparing state $|s_i)$, acting with transformation $T$ and registering outcome $r$ for the measurement $\{(e_r|\}_r$. In the above, the wires represent physical systems, with their type denoted by the letter above them. This diagrammatic representation was inspired by categorical quantum mechanics \cite{coecke2016picturing,coecke2010quantum}. Note that in the ``bra-ket''-like notation, the time-ordering (first states, then transformations, then effects) ``flows'' from right to left, while the reverse is true in the diagrammatic notation. In the rest of the paper, it will be assumed that all theories satisfy the following physical principles. \begin{define}[Deterministic effect, Causality \cite{chiribella2010probabilistic}] An effect is called \emph{deterministic} if it has probability $1$ in all states. A theory is said to be \emph{causal} if there exists a unique deterministic effect for every system. We denote this effect by $\detEff $. In a causal theory, $\sum_r (e_r|=\detEff $ for all measurements $\{(e_r|\}_r$ on a given system. \end{define} Mathematically, the principle of causality is equivalent to the statement: ``Probabilities of outcomes of present experiments are independent of future measurement choices''.
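A small quantum sketch of this statement (the state and measurements are illustrative choices of our own): for a bipartite state, the probability of a present outcome on one system, summed over the outcomes of any future measurement on the other, is independent of which future measurement is chosen, precisely because every full measurement sums to the deterministic effect.

```python
import numpy as np

# Quantum sketch of causality: Alice's present statistics do not depend on
# which measurement Bob will later choose, because any full measurement of
# Bob's sums to his deterministic effect (the identity).
phi = np.zeros(4, dtype=complex)
phi[0] = phi[3] = 1 / np.sqrt(2)                  # a maximally entangled state
rho_AB = np.outer(phi, phi.conj())

EA = np.diag([1.0, 0.0])                          # one of Alice's effects
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

for UB in (np.eye(2, dtype=complex), H):          # two future choices for Bob
    FB = [UB.conj().T @ np.diag(e) @ UB for e in ([1.0, 0.0], [0.0, 1.0])]
    assert np.allclose(sum(FB), np.eye(2))        # full measurement = identity
    p_alice = sum(np.real(np.trace(np.kron(EA, F) @ rho_AB)) for F in FB)
    assert np.isclose(p_alice, 0.5)               # independent of Bob's choice
```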
In causal theories, all states are \emph{normalised} \cite{chiribella2010probabilistic}. That is, $\detEff|s)=1$ for all $|s)$. Moreover, the unique deterministic effect allows one to define a notion of \emph{marginalisation} for multi-partite states. \begin{define}[Purification \cite{chiribella2010probabilistic}] \label{Pure} Given a state $_A|s)$ there exists a system $B$ and a pure state $_{AB}|\psi)$ on $AB$ such that $_A|s)$ is the marginalisation of $_{AB}|\psi)$: \[\begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=none] (0) at (1, -0) {}; \node [style=none] (1) at (1, 1.5) {}; \node [style=none] (2) at (1, 2) {}; \node [style=none] (3) at (1, -0.5) {}; \node [style=trace] (4) at (2.5, -0) {}; \node [style=none] (5) at (2.5, 1.5) {}; \node [style=none] (6) at (1, -0.5) {}; \node [style=none] (7) at (0.5, 0.75) {$\psi$}; \node [style=none] (8) at (4, 1) {$=$}; \node [style=cpoint] (9) at (5.5, 1.5) {$s$}; \node [style=none] (10) at (7, 1.5) {}; \node [style=none] (11) at (1.75, 0.5) {$B$}; \node [style=none] (12) at (1.75, 2) {$A$}; \node [style=none] (13) at (6.25, 2) {$A$}; \node [style=none] (14) at (7.5, -0) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [bend right=90, looseness=1.25] (2.center) to (3.center); \draw (2.center) to (3.center); \draw (1.center) to (5.center); \draw (0.center) to (4); \draw (9) to (10.center); \end{pgfonlayer} \end{tikzpicture} \] Moreover, the purification $_{AB}|\psi)$ is unique up to reversible transformations on the purifying system, $B$. That is, if two states $_{AB}|\psi)$ and $_{AB}|\psi')$ purify $_A|s)$, then there exists a reversible transformation $T_B$ on system $B$ such that $_{AB}|\psi')=(\mathbb{I}_A\otimes{T_B})~_{AB}|\psi)$.
\end{define} As pure states are those in which we have maximal information about a system,\footnote{In the following sense: a state that is not pure can be written as a convex combination of other states, which can be thought of as a lack of information about which of these alternative states describes the system; of course in general in a nonclassical theory there are many, incompatible, such convex decompositions of the state. But a pure state admits no such decomposition.} the purification principle formalises the statement that each state of incomplete information about a system can arise in an essentially unique way from a lack of full access to a larger system that it is part of. In \cite{chiribella2015conservation} it was argued that purification can be thought of as a statement of ``information conservation''. In a theory with purification, any missing information about the state of a given system can always be traced back to a lack of access to some environment system. We introduce one final principle which ensures that the information stored in a system is compatible with composition, that is, we demand that the mere act of bringing systems together should not create or destroy information.\footnote{Some may prefer another way of glossing this principle: that the capacity of a pair of systems to store classical information should be the same whether they are accessed separately or jointly.} If this were not the case then one could potentially use this new global degree of freedom, representing the increase of information capacity, to hide solutions to a hard computational problem, allowing one to solve problems that could not be solved by using the systems independently \cite{lee2015computation}. We formalise this as follows: \begin{define}[Informationally consistent composition] This consists of two constraints on parallel composition: i) the product of pure states is pure, ii) the product of maximally mixed states is maximally mixed.
\end{define} The first of these formalises the intuitive idea that if one has maximal information about each of two systems, then one has maximal information about the composite of the two systems. The existence of a maximally mixed state, that is, a state about which we have minimal information, is guaranteed for each system by purification \cite{chiribella2010probabilistic}. The purification principle, in conjunction with causality and the constraints on parallel composition discussed above, implies many quantum information processing \cite{chiribella2010probabilistic} and computational primitives \cite{lee2016deriving}. Examples include teleportation, no information without disturbance, and no-bit commitment \cite{chiribella2010probabilistic}. Moreover, purification also leads to a well-defined notion of thermodynamics \cite{chiribella2015entanglement,chiribella2016entanglement,chiribella2016purity}. Quantum theory---on both complex and real Hilbert spaces---satisfies purification, as does Spekkens' toy model \cite{Disilvestro,spekkens2007evidence}. Theories with purification that are distinct from quantum theory include fermionic quantum theory \cite{Fermionic1,Fermionic2}, a superselected version of quantum theory known as double quantum theory \cite{chiribella2016purity}, and a recent extension of classical theory to the theory of coherent $d$-level systems, or codits \cite{chiribella2015entanglement}. Pure states $\{|s_i)\}_{i=1}^n$ are called \emph{perfectly distinguishable} if there exists a measurement, corresponding to effects $\{(e_j|\}_{j=1}^n$, with the property that $(e_j|s_i)=\delta_{ij}$ for all $i,j$. \begin{define}[Strong symmetry \cite{barnum2014higher,muller2012structure}] A theory satisfies \emph{strong symmetry} if, for any two $n$-tuples of pure and perfectly distinguishable states $\{|\rho_i)\},\{|\sigma_i)\},$ there exists a reversible transformation $T$ such that $T|\rho_i)=|\sigma_i)$ for $i=1,\dots,n$.
\end{define} Although complex and real quantum theory satisfy all of the above principles, double quantum theory and codit theory do not satisfy strong symmetry. Whether any other theories satisfy all of the principles is an important open question. The following consequences of the above principles, proved in \cite{lee2016generalised}, will be required to define oracles in section~\ref{section:oracles}. \begin{define} Given a set of pure and perfectly distinguishable states $\{|i)\}$, and a set of transformations $\{T_i\}$, define a controlled transformation $C\{T_i\}$ as one that satisfies: \begin{equation}\label{Control} \begin{tikzpicture}[scale=1.5] \begin{pgfonlayer}{nodelayer} \node [style=cpoint] (0) at (0, 1) {$i$}; \node [style=none] (1) at (1, 1.5) {}; \node [style=none] (2) at (2.5, 1.5) {}; \node [style=none] (3) at (1, -0.75) {}; \node [style=none] (4) at (2.5, -0.75) {}; \node [style=none] (5) at (1, 1) {}; \node [style=none] (6) at (2.5, 1) {}; \node [style=none] (7) at (1, -0.25) {}; \node [style=none] (8) at (2.5, -0.25) {}; \node [style=cpoint] (9) at (0, -0.25) {$\sigma$}; \node [style=none] (10) at (3.5, -0.25) {}; \node [style=none] (11) at (3.5, 1) {}; \node [style=none] (12) at (1.75, 1) {$C$}; \node [style=none] (13) at (5, 0.5) {$=$}; \node [style=cpoint] (14) at (6.5, 1) {$i$}; \node [style=none] (15) at (9.5, 1) {}; \node [style=cpoint] (16) at (6.5, -0.25) {$\sigma$}; \node [style={small box}] (17) at (8, -0.25) {$T_i$}; \node [style=none] (18) at (9.5, -0.25) {}; \node [style=none] (19) at (1.75, -0.25) {$\{T_i\}$}; \node [style=none] (20) at (11.5, -0) {$\forall i, |\sigma)$}; \node [style=none] (21) at (12.5, -0) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0) to (5.center); \draw (9) to (7.center); \draw (6.center) to (11.center); \draw (8.center) to (10.center); \draw (1.center) to (3.center); \draw (3.center) to (4.center); \draw (4.center) to (2.center); \draw (2.center) to (1.center); \draw (16) to (17); \draw (17) 
to (18.center); \draw (14) to (15.center); \end{pgfonlayer} \end{tikzpicture} \end{equation} The top and bottom systems are referred to as the \emph{control} and \emph{target}, respectively. \end{define} Note that classically-controlled transformations---those in which the control is measured and, conditioned on the outcome, a transformation is applied to the target---exist in any causal theory with sufficiently many distinguishable states \cite{chiribella2010probabilistic}. However, such transformations are in general not reversible \cite{lee2016generalised}. \begin{thm}[\cite{lee2016generalised} Theorem 2] \label{Reversible-Control} In any theory satisfying i) causality, ii) purification, iii) strong symmetry, iv) product of pure states is pure, and in which there exists a set of $n$ pure and perfectly distinguishable states $|i)$ indexed by $i \in \{1,...,n\}$, for any collection of $n$ reversible transformations $\{T_i\}_{i=1}^n$ there exists a \emph{reversible} controlled transformation $C\{T_i\}$ in which the $T_i$ are controlled by the states $|i)$. \end{thm} Every controlled unitary transformation in quantum theory has a \emph{phase kick-back} mechanism \cite{nielsen2010quantum, cleve1998quantum}. Such mechanisms form a vital component of most quantum algorithms \cite{cleve1998quantum}. It was shown in \cite{lee2016generalised} that a \emph{generalised} phase kick-back mechanism exists in any theory satisfying the above physical principles.
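In quantum theory the reversible controlled transformation of the theorem can be realised as the block-diagonal unitary $\sum_i \ket{i}\bra{i}\otimes T_i$; a minimal sketch (the helper name `controlled` is our own):

```python
import numpy as np

# Quantum instance of a reversible controlled transformation:
# C{T_i} = sum_i |i><i| (x) T_i, block-diagonal and unitary whenever each T_i is.
def controlled(Ts):
    n, d = len(Ts), Ts[0].shape[0]
    C = np.zeros((n * d, n * d), dtype=complex)
    for i, T in enumerate(Ts):
        ket_i = np.zeros(n)
        ket_i[i] = 1
        C += np.kron(np.outer(ket_i, ket_i), T)
    return C

X = np.array([[0, 1], [1, 0]], dtype=complex)
C = controlled([np.eye(2, dtype=complex), X])      # this choice is the familiar CNOT
assert np.allclose(C.conj().T @ C, np.eye(4))      # reversible (unitary)

# Acting on |i>|sigma> applies T_i to the target:
sigma = np.array([1, 0], dtype=complex)
out = C @ np.kron([0, 1], sigma)
assert np.allclose(out, np.kron([0, 1], X @ sigma))
```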
\begin{define}\label{def: phase transformation} A \emph{phase transformation}, relative to a given measurement of effects $(i|$, is a transformation $Q$ such that: \[\begin{tikzpicture}[scale=1.5] \begin{pgfonlayer}{nodelayer} \node [style={small box}] (0) at (1, -0) {$Q$}; \node [style=none] (1) at (5.5, -0) {}; \node [style=none] (2) at (0, -0) {}; \node [style=none] (3) at (8, -0) {$\forall i$}; \node [style=none] (4) at (4, -0) {$=$}; \node [style=cocpoint] (5) at (2.5, -0) {$i$}; \node [style=cocpoint] (6) at (6.5, -0) {$i$}; \node [style=none] (7) at (8.5, -0) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (5) to (0); \draw (0) to (2.center); \draw (6) to (1.center); \end{pgfonlayer} \end{tikzpicture} \] \end{define} \begin{thm}[\cite{lee2016generalised} Lemma 2] \label{Generalised-Kick-Back} In a theory satisfying Causality, Strong Symmetry, and Purification, for any set $T_i$ of reversible transformations and state $|s)$ such that for all $i$ $T_i|s)=|s)$, there exists a reversible transformation $Q$ such that \begin{equation}\label{KB} \begin{tikzpicture}[scale=1.5] \begin{pgfonlayer}{nodelayer} \node [style=cpoint] (0) at (0, -0.25) {$s$}; \node [style=none] (1) at (1.5, 1) {$C$}; \node [style=cpoint] (2) at (6, 1) {$\sigma$}; \node [style=none] (3) at (2.25, 1.5) {}; \node [style=none] (4) at (9, -0.25) {}; \node [style={small box}] (5) at (7.5, 1) {$Q$}; \node [style=none] (6) at (0.75, -0.75) {}; \node [style=none] (7) at (10.5, -0) {$\forall |\sigma)$}; \node [style=none] (8) at (0.75, 1.5) {}; \node [style=none] (9) at (0.75, -0.25) {}; \node [style=none] (10) at (1.5, -0.25) {$\{T_i\}$}; \node [style=none] (11) at (4.5, 0.5) {$=$}; \node [style=none] (12) at (9, 1) {}; \node [style=none] (13) at (3, 1) {}; \node [style=none] (14) at (2.25, -0.75) {}; \node [style=none] (15) at (0.75, 1) {}; \node [style=cpoint] (16) at (6, -0.25) {$s$}; \node [style=none] (17) at (2.25, -0.25) {}; \node [style=none] (18) at (3, -0.25) {}; \node 
[style=none] (19) at (2.25, 1) {}; \node [style=cpoint] (20) at (0, 1) {$\sigma$}; \node [style=none] (21) at (11, -0) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (20) to (15.center); \draw (8.center) to (6.center); \draw (6.center) to (14.center); \draw (14.center) to (3.center); \draw (3.center) to (8.center); \draw (0) to (9.center); \draw (17.center) to (18.center); \draw (19.center) to (13.center); \draw (2) to (5); \draw (16) to (4.center); \draw (5) to (12.center); \end{pgfonlayer} \end{tikzpicture} \end{equation} where $Q$ is a phase transformation. Moreover, every phase transformation can arise via such a generalised phase kick-back mechanism. \end{thm} \subsection{Post-quantum interference} In the sections that follow, we will connect post-quantum, or higher-order, probabilistic interference to post-quantum computation, investigating whether ``more'' interference implies ``more'' computational power. The definition of higher-order interference that we present here takes its motivation from the set-up of multi-slit interference experiments. In such experiments a particle (a photon or electron, say) passes through slits in a physical barrier and is detected at a screen placed behind the barrier. By blocking some (or none) of the slits and repeating the experiment many times, one can build up an interference pattern on the screen. The ``intensity'' of the pattern in a small area of the screen is proportional to the probability that the particle arrives there. Informally, a theory has ``$n$th order interference'' if one can generate interference patterns in an $n$-slit experiment which cannot be created in any experiment with only $m$ slits, for all $m<n$. More precisely, this means that the interference pattern created on the screen cannot be written as a particular linear combination of the interference patterns generated when different subsets of slits are open and closed.
In the standard two slit experiment, quantum interference corresponds to the statement that the interference pattern can't be written as the sum of single slit patterns: \[\begin{tikzpicture}[scale=.6] \begin{pgfonlayer}{nodelayer} \node [style=none] (0) at (0, 1.25) {}; \node [style=none] (1) at (0, 1.25) {}; \node [style=none] (2) at (0, -1.25) {}; \node [style=none] (3) at (0, -1.75) {}; \node [style=none] (4) at (0, -3) {}; \node [style=none] (5) at (0, 1.75) {}; \node [style=none] (6) at (0, 3) {}; \node [style=none] (7) at (2, -0) {$\neq$}; \node [style=none] (8) at (4, 1.25) {}; \node [style=none] (9) at (4, 1.25) {}; \node [style=none] (10) at (4, -1.25) {}; \node [style=none] (11) at (4, -1.75) {}; \node [style=none] (12) at (4, -3) {}; \node [style=none] (13) at (4, 1.75) {}; \node [style=none] (14) at (4, 3) {}; \node [style=none] (15) at (4, 2) {}; \node [style=none] (16) at (4, 1) {}; \node [style=none] (17) at (6, -0) {$+$}; \node [style=none] (18) at (8, 1.25) {}; \node [style=none] (19) at (8, 1.25) {}; \node [style=none] (20) at (8, -1.25) {}; \node [style=none] (21) at (8, -1.75) {}; \node [style=none] (22) at (8, -3) {}; \node [style=none] (23) at (8, 1.75) {}; \node [style=none] (24) at (8, 3) {}; \node [style=none] (25) at (8, -2) {}; \node [style=none] (26) at (8, -0.9999998) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=block] (15.center) to (16.center); \draw [style=block] (25.center) to (26.center); \draw [style=slit] (1.center) to (2.center); \draw [style=slit] (6.center) to (5.center); \draw [style=slit] (3.center) to (4.center); \draw [style=slit] (9.center) to (10.center); \draw [style=slit] (14.center) to (13.center); \draw [style=slit] (11.center) to (12.center); \draw [style=slit] (19.center) to (20.center); \draw [style=slit] (24.center) to (23.center); \draw [style=slit] (21.center) to (22.center); \end{pgfonlayer} \end{tikzpicture} \] It was first shown by Sorkin \cite{sorkin1994quantum,sorkin1995quantum} 
that---at least for ideal experiments \cite{sinha2015superposition}---quantum theory is limited to the $n=2$ case. That is, the interference pattern created in a three---or more---slit experiment \emph{can} be written in terms of the two and one slit interference patterns obtained by blocking some of the slits. Schematically: \[\begin{tikzpicture}[scale=.65] \begin{pgfonlayer}{nodelayer} \node [style=none] (0) at (0, -0.25) {}; \node [style=none] (1) at (0, 0.25) {}; \node [style=none] (2) at (0, 1.25) {}; \node [style=none] (3) at (0, -1.25) {}; \node [style=none] (4) at (0, -1.75) {}; \node [style=none] (5) at (0, 1.75) {}; \node [style=none] (6) at (0, 3) {}; \node [style=none] (7) at (0, -3) {}; \node [style=none] (8) at (4, -1.25) {}; \node [style=none] (9) at (4, -1.75) {}; \node [style=none] (10) at (4, -0.25) {}; \node [style=none] (11) at (4, -3) {}; \node [style=none] (12) at (4, 1.75) {}; \node [style=none] (13) at (4, 3) {}; \node [style=none] (14) at (4, 1.25) {}; \node [style=none] (15) at (4, 0.25) {}; \node [style=none] (16) at (12, -3) {}; \node [style=none] (17) at (8, 1.75) {}; \node [style=none] (18) at (8, -3) {}; \node [style=none] (19) at (12, 1.25) {}; \node [style=none] (20) at (12, -0.25) {}; \node [style=none] (21) at (8, 3) {}; \node [style=none] (22) at (8, -0.25) {}; \node [style=none] (23) at (12, -1.75) {}; \node [style=none] (24) at (8, 1.25) {}; \node [style=none] (25) at (12, 3) {}; \node [style=none] (26) at (8, 0.25) {}; \node [style=none] (27) at (12, 0.25) {}; \node [style=none] (28) at (8, -1.75) {}; \node [style=none] (29) at (8, -1.25) {}; \node [style=none] (30) at (12, -1.25) {}; \node [style=none] (31) at (12, 1.75) {}; \node [style=none] (32) at (20, -1.75) {}; \node [style=none] (33) at (24, 3) {}; \node [style=none] (34) at (24, -1.25) {}; \node [style=none] (35) at (16, -3) {}; \node [style=none] (36) at (20, -0.2500001) {}; \node [style=none] (37) at (20, -1.25) {}; \node [style=none] (38) at (20, 3) {}; \node 
[style=none] (39) at (24, 0.2500001) {}; \node [style=none] (40) at (16, -1.75) {}; \node [style=none] (41) at (16, -1.25) {}; \node [style=none] (42) at (24, -0.2500001) {}; \node [style=none] (43) at (24, 1.75) {}; \node [style=none] (44) at (16, -0.2500001) {}; \node [style=none] (45) at (16, 0.2500001) {}; \node [style=none] (46) at (20, 1.75) {}; \node [style=none] (47) at (20, 0.2500001) {}; \node [style=none] (48) at (20, 1.25) {}; \node [style=none] (49) at (20, -3) {}; \node [style=none] (50) at (24, -1.75) {}; \node [style=none] (51) at (24, -3) {}; \node [style=none] (52) at (24, 1.25) {}; \node [style=none] (53) at (16, 1.25) {}; \node [style=none] (54) at (16, 3) {}; \node [style=none] (55) at (16, 1.75) {}; \node [style=none] (56) at (2, -0) {$=$}; \node [style=none] (57) at (6, -0) {$+$}; \node [style=none] (58) at (10, -0) { $+$}; \node [style=none] (59) at (14, -0) { $-$}; \node [style=none] (60) at (18, -0) { $-$}; \node [style=none] (61) at (22, -0) { $-$}; \node [style=none] (62) at (4, 1) {}; \node [style=none] (63) at (4, 2) {}; \node [style=none] (64) at (8, 0.4999999) {}; \node [style=none] (65) at (8, -0.4999999) {}; \node [style=none] (66) at (12, -0.9999998) {}; \node [style=none] (67) at (12, -2) {}; \node [style=none] (68) at (16, 2) {}; \node [style=none] (69) at (16, 1) {}; \node [style=none] (70) at (20, 2) {}; \node [style=none] (71) at (20, 1) {}; \node [style=none] (72) at (20, -0.9999998) {}; \node [style=none] (73) at (20, -2) {}; \node [style=none] (74) at (24, -2) {}; \node [style=none] (75) at (24, -0.7500001) {}; \node [style=none] (76) at (24, -0.9999998) {}; \node [style=none] (77) at (16, -0.4999999) {}; \node [style=none] (78) at (16, 0.4999999) {}; \node [style=none] (79) at (24, -0.4999999) {}; \node [style=none] (80) at (24, 0.4999999) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [style=block] (62.center) to (63.center); \draw [style=block] (65.center) to (64.center); \draw [style=block] (67.center) to 
(66.center); \draw [style=block] (69.center) to (68.center); \draw [style=block] (71.center) to (70.center); \draw [style=block] (73.center) to (72.center); \draw [style=block] (74.center) to (76.center); \draw [style=block] (77.center) to (78.center); \draw [style=block] (79.center) to (80.center); \draw [style=slit] (5.center) to (6.center); \draw [style=slit] (1.center) to (2.center); \draw [style=slit] (3.center) to (0.center); \draw [style=slit] (7.center) to (4.center); \draw [style=slit] (12.center) to (13.center); \draw [style=slit] (15.center) to (14.center); \draw [style=slit] (8.center) to (10.center); \draw [style=slit] (11.center) to (9.center); \draw [style=slit] (17.center) to (21.center); \draw [style=slit] (26.center) to (24.center); \draw [style=slit] (29.center) to (22.center); \draw [style=slit] (18.center) to (28.center); \draw [style=slit] (31.center) to (25.center); \draw [style=slit] (27.center) to (19.center); \draw [style=slit] (30.center) to (20.center); \draw [style=slit] (16.center) to (23.center); \draw [style=slit] (55.center) to (54.center); \draw [style=slit] (45.center) to (53.center); \draw [style=slit] (41.center) to (44.center); \draw [style=slit] (35.center) to (40.center); \draw [style=slit] (46.center) to (38.center); \draw [style=slit] (47.center) to (48.center); \draw [style=slit] (37.center) to (36.center); \draw [style=slit] (49.center) to (32.center); \draw [style=slit] (43.center) to (33.center); \draw [style=slit] (39.center) to (52.center); \draw [style=slit] (34.center) to (42.center); \draw [style=slit] (51.center) to (50.center); \end{pgfonlayer} \end{tikzpicture} \] The terms with minus signs in the above correct for over-counting of the open slits. If a theory does not have $n$th order interference then one can show it will not have $m$th order interference, for any $m>n$ \cite{sorkin1994quantum}. Therefore, one can classify theories according to their maximal order of interference, $k$. 
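Both facts can be checked directly in the quantum case, where blocking slits acts on a density matrix by deleting the rows and columns of the closed slits (the states and effects below are illustrative choices of our own): the two-slit pattern is not the sum of the single-slit patterns, while every three-slit pattern decomposes into two- and one-slit patterns.

```python
import numpy as np

def pattern(E, rho, keep):
    """Detection probability with only the slits in `keep` open (quantum case)."""
    P = np.zeros_like(rho)
    for i in keep:
        P[i, i] = 1
    return float(np.real(np.trace(E @ P @ rho @ P)))

# Two slits: second-order interference, I_{12} != I_1 + I_2.
rho2 = 0.5 * np.ones((2, 2), dtype=complex)   # the superposition (|1>+|2>)/sqrt(2)
E2 = 0.5 * np.ones((2, 2), dtype=complex)     # a "bright fringe" effect
assert not np.isclose(pattern(E2, rho2, [0, 1]),
                      pattern(E2, rho2, [0]) + pattern(E2, rho2, [1]))

# Three slits: no third-order interference in quantum theory (k = 2).
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
rho3 = A @ A.conj().T
rho3 /= np.trace(rho3)                        # random qutrit state
v = rng.normal(size=3) + 1j * rng.normal(size=3)
v /= np.linalg.norm(v)
E3 = np.outer(v, v.conj())                    # random rank-one effect

pairs = [(0, 1), (0, 2), (1, 2)]
lhs = pattern(E3, rho3, [0, 1, 2])
rhs = (sum(pattern(E3, rho3, list(p)) for p in pairs)
       - sum(pattern(E3, rho3, [i]) for i in range(3)))
assert np.isclose(lhs, rhs)
```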
For example, quantum theory lies at $k=2$ and classical theory at $k=1$. Consider the state of the particle just before it passes through the slits. For every slit, there should exist states such that the particle would definitely be found at that slit, if one were to measure it. Mathematically, this means that there exists a face\footnote{A face is a convex set with the property that if $px+(1-p)y$, for some $p\in(0,1)$, is an element then $x$ and $y$ are also both elements.} \cite{barnum2014higher} of the state space, such that all states in this face give unit probability for the ``yes'' outcome of the two-outcome ``is the particle at this slit?'' measurement. These faces will be labelled $F_i$, one for each of the $n$ slits $i\in\{1,\dots, n\}$. As the slits should be perfectly distinguishable, the faces associated to the slits should be mutually orthogonal. This can be achieved by letting the slits be in one-to-one correspondence with a set of pure and perfectly distinguishable states. One can additionally ask coarse-grained questions of the form ``Is the particle found among a certain subset of slits, rather than somewhere else?''. The set of states that give outcome ``yes'' with probability one must contain all the faces associated with each slit in the subset. Hence the face associated to the subset of slits $I\subseteq\{1,\dots, n\}$ is the smallest face containing each face in this subset, $F_I:=\bigvee_{i\in I} F_i$. That is, $F_I$ is the face generated by the pure and perfectly distinguishable states identified by the subset $I$. The face $F_I$ contains all those states which can be found among the $I$ slits. The experiment is ``complete'' if all states in the state space (of a given type $A$) can be found among some subset of slits. That is, if $F_{12\cdots n}=\mathrm{St(A)}$.
Higher-order interference was initially formalised by Rafael Sorkin in the framework of Quantum Measure Theory \cite{sorkin1994quantum} but has more recently been adapted to the setting of generalised probabilistic theories in \cite{ududec2011three,ududec2012perspectives,barnum2014higher,lee2016generalised,lee2016higher}. A straightforward translation to this setting describes the order of interference in terms of probability distributions corresponding to interference patterns generated in the different experimental setups (which slits are open, etc.) \cite{lee2016generalised,lee2016higher}. However, given the principles imposed in the previous section, it is possible to define physical transformations that correspond to the action of opening and closing certain subsets of slits. In this case, there is a more convenient (and equivalent \cite{barnum2014higher}, given our principles) definition in terms of such transformations (such a definition was also used in \cite{ududec2011three, ududec2012perspectives}). Given $N$ slits, labelled $1, \dots, N$, these transformations will be denoted $P_I$, where $I \subseteq \{1, \dots, N\}$ corresponds to the subset of slits which are not closed. In general one expects that $P_I P_J = P_{I\cap J}$, as only those slits belonging to both $I$ and $J$ will not be closed by either $P_I$ or $P_J$. This intuition suggests that these transformations should correspond to projectors (i.e. idempotent transformations $P_IP_I=P_I$). Given the principles imposed in this paper, this is indeed the case. \begin{thm} \label{existence of projector} In any theory satisfying the principles introduced in the previous section, the projector onto a face generated by a subset of pure and perfectly distinguishable states is an allowed transformation in the theory. If $F$ and $G$ are faces generated by subsets of the same pure and perfectly distinguishable set of states, one has $P_FP_G=P_{F\cap G}$. 
\end{thm} The proof of theorem~\ref{existence of projector} is presented in appendix~\ref{Proof of projector}. Given this structure, one can define the maximal order of interference as follows \cite{barnum2014higher,lee2016deriving}. \begin{define} A theory satisfying the principles imposed in this section has maximal order of interference $k$ if, for any $N \geq k$, one has: \[\mathds{1}_N = \sum_{{ \begin{array}{c} I\subseteq \mathbf{N} \\ |I|\leq k \end{array}}}\mathcal{C}\left(k,|I|,N\right)P_I\] where $\mathds{1}_N $ is the identity on a system with $N$ pure and perfectly distinguishable states and \[\mathcal{C}\left(k,|I|,N\right):=(-1)^{k-|I|}\left(\begin{array}{c}N-|I|-1\\ k-|I|\end{array}\right)\] \end{define} The factor $\mathcal{C}\left(k,|I|,N\right)$ in the above definition corrects for the overlaps that occur when different combinations of slits are open and closed. For $k=N$, the above reduces to the expected expression $\mathds{1}_N=P_{\{1,\dots,N\}}$, that is, the identity is given by the projector with all slits open. The case $N=k+1$ gives $\mathcal{C}\left(k,|I|,k+1\right) = (-1)^{k-|I|}$, corresponding to the situation depicted in the above diagrams, as well as the one most commonly discussed in the literature \cite{sorkin1994quantum,ududec2011three}. Instead of working directly with these physical projectors, it is mathematically convenient to work with the (generally) unphysical transformations corresponding to projecting onto the ``coherences'' of a state.
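In the quantum case ($k=2$) the decomposition in this definition can be verified numerically; the sketch below (a check of our own, with $P_I$ the projector that keeps the rows and columns indexed by $I$) confirms it for $N=3$ and $N=4$ on a random state:

```python
import numpy as np
from itertools import combinations
from math import comb

def P(I, N):
    """Slit projector keeping only the rows/columns in I (quantum case)."""
    D = np.zeros((N, N))
    D[list(I), list(I)] = 1
    return D

def C(k, m, N):
    """The correction factor C(k, |I|, N) from the definition above."""
    return (-1) ** (k - m) * comb(N - m - 1, k - m)

rng = np.random.default_rng(1)
for N in (3, 4):
    A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    rho = A @ A.conj().T
    rho /= np.trace(rho)                        # random N-level state
    total = np.zeros((N, N), dtype=complex)
    for m in (1, 2):                            # k = 2 for quantum theory
        for I in combinations(range(N), m):
            D = P(I, N)
            total += C(2, m, N) * (D @ rho @ D)
    # The weighted sum of projected states reconstructs the state exactly.
    assert np.allclose(total, rho)
```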
Consider the example of a qutrit in quantum theory: the projector $P_{\{0,1\}}$ projects onto a two-dimensional subspace: \[ P_{\{0,1\}}::\left(\begin{array}{ccc} \rho_{00} & \rho_{01} & \rho_{02} \\ \rho_{10} & \rho_{11} & \rho_{12} \\\rho_{20} & \rho_{21} & \rho_{22} \end{array}\right)\mapsto \left(\begin{array}{ccc} \rho_{00} & \rho_{01} & 0 \\ \rho_{10} & \rho_{11} & 0 \\0 & 0 & 0 \end{array}\right)\] whilst the coherence-projector $\omega_{\{0,1\}}$ projects only onto the coherences in that two-dimensional subspace: \[ \omega_{\{0,1\}}::\left(\begin{array}{ccc} \rho_{00} & \rho_{01} & \rho_{02} \\ \rho_{10} & \rho_{11} & \rho_{12} \\\rho_{20} & \rho_{21} & \rho_{22} \end{array}\right)\mapsto \left(\begin{array}{ccc} 0 & \rho_{01} & 0 \\ \rho_{10} & 0 & 0 \\0 & 0 & 0 \end{array}\right).\] That is, $\omega_{\{0,1\}}$ corresponds to the linear combination of projectors: $P_{\{0,1\}}-P_{\{0\}}-P_{\{1\}}$. There is a coherence-projector $\omega_I$ for each subset of slits $I \subseteq \{1,\dots, N\}$, defined in terms of the physical projectors: $$\omega_I:=\sum_{\tilde{I}\subseteq I}(-1)^{|I|+|\tilde{I}|}P_{\tilde{I}}.$$ These have the following useful properties, which were proved in \cite{lee2016deriving,barnum2014higher}. \begin{lem} \label{lemma: decomposition of the identity into coherences} An equivalent definition of the maximal order of interference, $k$, is: $$\mathds{1}_N=\sum_{I,|I|=1}^{k}\omega_I, \text{ for all } N \geq k.$$ \end{lem} The above lemma implies that any state (indeed, any vector in the vector space generated by the states) in a theory with maximal order of interference $k$ can be decomposed in a form reminiscent of a rank $k$ tensor: \begin{equation} \label{coherence decomposition} |s)=\sum_{I,|I|=1}^k \omega_I|s)=\sum_{I,|I|=1}^k |s_I).
\end{equation} This decomposition can be thought of as a generalised superposition, as it manifestly describes the coherences between different subsets of perfectly distinguishable states (the analogue of a basis in quantum theory) present in a given state. This will be important in discussing the power of oracle queries in the following section, since it allows oracles to be queried not just on states corresponding to definite inputs or probabilistic mixtures of them, but on superpositions of them. \section{Oracles} \label{section:oracles} In classical computation, an \emph{oracle} is usually defined as a total function $f: S \rightarrow T$ from a finite or (more usually) countably infinite set $S$ to a finite set $T$. Most commonly, we have $f:\mathbb{N}\rightarrow\{0,1\}$, or $f: \{0,1\}^* \rightarrow \{0,1\}$. The latter two are essentially equivalent since the set $\{0,1\}^*$ of finite binary strings can be identified with $\mathbb{N}$ via the usual binary encoding. The string $x$ is said to be \emph{in} an oracle $O$ if $f(x)=1$; hence oracles can decide membership in a language (defined as a subset of the set of finite binary strings). In a classical oracle model of computation, some baseline computational model, e.g. a circuit or Turing machine model, is augmented by the ability to ``query'' the oracle, i.e. obtain the value of $f$ on one of its inputs. A query is assigned some cost in units commensurate with those of the baseline model, and multiple queries may be made in the course of a computation, on inputs provided in terms of the baseline model (normally, bit strings on a tape or in some set of registers). Oracle outputs are also provided in terms of the baseline model, and may be further processed by means of the baseline model's resources and/or as input to additional queries.
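A minimal classical sketch of this query-counting picture (all names here are illustrative, not from the text): the oracle is wrapped so that queries are the only metered resource, and exhaustive search over $n$-bit inputs witnesses the trivial $2^n$ upper bound on the number of queries needed to find a marked string.

```python
from itertools import product

# Illustrative query-counting wrapper: the oracle is a function f; only calls
# to `query` are counted, all other classical processing is free.
class Oracle:
    def __init__(self, f):
        self._f = f
        self.queries = 0

    def query(self, x):
        self.queries += 1
        return self._f(x)

# Unstructured search on n-bit strings: classically one may need to check
# every input, i.e. up to 2^n queries.
n = 3
marked = (1, 0, 1)
oracle = Oracle(lambda x: 1 if x == marked else 0)

found = next(x for x in product((0, 1), repeat=n) if oracle.query(x) == 1)
assert found == marked and oracle.queries <= 2 ** n
```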
Sometimes a model is considered in which the resources of the baseline model are taken to be free, and the \emph{only} cost is the number of queries to the oracle; this is usually termed a \emph{query model}. In quantum computation oracle queries to a function $f: \{0,1\}^* \rightarrow \{0,1\}$ are usually represented by a family $\{G_n\}$ of quantum gates, one for each length $n$ of the ``input'' string $x$.\footnote{Other models of quantum oracle queries have been investigated, but this one is by far the most common.} Each $G_n$ is a unitary transformation acting on $n+1$ qubits, whose effect on the computational basis is in general given by \begin{equation}\label{quantum oracle version 1} G_n\ket{x,a}= \ket{x,a\oplus{f_n(x)}} \end{equation} for all $x\in\{0,1\}^n$ and $a\in\{0,1\}$, where $f_n$ are a family of Boolean functions that represent the specific oracle under consideration. Since the family $f_n$ determines (and is determined by) a function $f: \{0,1\}^* \rightarrow \{0,1\}$, where $f_n$ is $f$'s restriction to inputs of length $n$, a quantum oracle may be thought of as a ``coherent'' version of the corresponding classical oracle $f$; we call it a ``quantum oracle for $f$''. Slightly more generally, one could define a quantum oracle (``for $f$'') as a family (indexed by $|x|$) of controlled unitary transformations which, when queried by state $\ket{x}$ in the control register, applies a unitary---chosen from a set of two unitaries according to the value $f_n(x)$---to the target register. A specific example of a quantum oracle of this sort is the following controlled unitary: \begin{equation} \label{Quantum-Oracle} U_f= \ket{0}\bra{0}\otimes Z^{f(0)} + \ket{1}\bra{1}\otimes Z^{f(1)}, \end{equation} with $Z$ the Pauli $Z$ matrix, $f:\{0,1\}\rightarrow\{0,1\}$ a function encoding some decision problem and $Z^{0}:=\mathds{I}$. (If we had used the Pauli $X$ matrix, in place of $Z$, this oracle would be of the type described by Eq. 
\ref{quantum oracle version 1}.) As was briefly mentioned in \cite{lee2016generalised}, the results of theorem~\ref{Reversible-Control} provide a way to define computational oracles in any theory satisfying our assumptions. \begin{define}\label{def:oracle} In a theory satisfying our assumptions, an oracle for a function $f: S \rightarrow T$, with $S$ and $T$ finite sets, is a reversible controlled transformation\footnote{There could be many distinct transformations that have the same behaviour on a set of control states. As long as one fixes which transformation corresponds to the oracle, this is not a problem.} $C\{T_i\}$ where the set of transformations $\{T_i \}$ being controlled depends on $i \in S$ \emph{only} through the value $f(i)$. If $S = A^*$, the countably infinite set of strings from a finite alphabet $A$, then an oracle for $f$ is defined to be a family $G_n$, indexed by the length $n$ of strings, of such controlled transformations, one for each $f_n$, where $f_n$ is defined to be $f$ restricted to strings of length $n$. If $T = \{0,1\}$, this is a decision problem. \end{define} In quantum theory there is an equivalent view of oracles in terms of phase transformations. This can be seen as a result of the phase kick-back algorithm \cite{cleve1998quantum,nielsen2010quantum}. In the quantum example above, the phase kick-back for $U_f$ amounts to {\color{black} first rewriting $U_f$ as:} \begin{equation} \label{Quantum-kick-back} U_f=\mathds{I}\otimes\ket{0}\bra{0}+Z^{f(0)\oplus f(1)}\otimes\ket{1}\bra{1}. \end{equation} (This equality holds up to a phase of $(-1)^{f(0)}$ on the $\ket{1}\bra{1}$ term, which has no observable effect in the protocol below.) {\color{black} Inputting $\ket{1}$ into the second qubit results in a `kicked-back' phase of $Z^{f(0)\oplus f(1)}$ on the first qubit. The value of $f(0)\oplus f(1)$ can then be measured by preparing the first qubit in the state $\ket{+}$ and then} measuring {\color{black}it} in the $\{\ket{+},\ket{-}\}$ basis.
{\color{black} This provides} the value $f(0)\oplus f(1)$ in a single query of the oracle---a feat impossible on a classical computer \cite{meyer2011uselessness}. In our more general setting, an analogue of the above holds via Theorem \ref{Generalised-Kick-Back}. That is, in theories satisfying our assumptions, as the transformations $T_{i}$ depend on the value $f(i)$, so {\color{black} too can} the controlled transformation and the kicked-back phase. In particular, in {\color{black} theories with non-trivial phases, i.e. non-classical theories, the phase kick-back of an oracle can encode} information about the value $f(i)$ for all $i$. Indeed, it is also the case that if one has available as one circuit element the generalised phase kick-back transformation constructed out of the controlled transformation $C\{T_i\}$, then one can construct a circuit for $C\{T_i\}$ \cite{lee2016generalised}. Hence---as in the quantum example above---from the point of view of querying the oracle, one can reduce all considerations involving the controlled transformation to those involving the phase kick-back transformation, which shall be denoted $\mathcal{O}_f$. (This justifies our use of phase transformations as oracles in Definition \ref{def: oracle system} below.) As was shown in section~\ref{framework}, all states in theories satisfying our principles can be decomposed as $|s)=\sum_I |s_I)$, with $I\subseteq\{1,\dots, n\}$, where $\{1,\dots, n\}$ labels the set of pure and perfectly distinguishable states defining the action of a given oracle. Hence, oracles can not only be queried using a set of pure and perfectly distinguishable states, but also using generalised superposition states---those with non-trivial coherences between different subsets of slits.
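The one-query parity extraction just described can be reproduced in a few lines (a Python/NumPy sketch of the quantum case; not part of the general formalism). It constructs $U_f$ of Eq.~(\ref{Quantum-Oracle}) for each of the four functions $f:\{0,1\}\rightarrow\{0,1\}$, queries it once with the control in $\ket{+}$ and the target in $\ket{1}$, and reads $f(0)\oplus f(1)$ off a $\{\ket{+},\ket{-}\}$-basis measurement of the control.

```python
import numpy as np

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])          # Pauli Z
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)
minus = (ket0 - ket1) / np.sqrt(2)

def U(f):
    """Oracle U_f = |0><0| (x) Z^f(0) + |1><1| (x) Z^f(1)."""
    Zpow = lambda b: Z if b else I2
    return (np.kron(np.outer(ket0, ket0), Zpow(f[0]))
            + np.kron(np.outer(ket1, ket1), Zpow(f[1])))

for f in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    out = U(f) @ np.kron(plus, ket1)                # one query
    p_minus = abs(np.kron(minus, ket1) @ out) ** 2  # measure control in {+,-}
    assert round(p_minus) == (f[0] ^ f[1])          # outcome "-" iff f(0) XOR f(1) = 1
```

The measurement outcome is deterministic: the control ends in $\ket{-}$ exactly when $f(0)\oplus f(1)=1$, so a single query suffices.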
{\color{black} In fact, the quantum speed-up in the above example came precisely from the fact that one can query in superposition, hence extracting the value $f(0)\oplus f(1)$ in a single query.} To ensure that answers to hard-to-solve problems are not smuggled into the definition of oracles in generalised theories, we must put conditions on which phase transformations correspond to `reasonable' oracles. We remark that in some definitions of oracle, the possibility of a \emph{null query} is included. This is an input to the oracle transformation conditional on which nothing happens to the target register. Although we did not explicitly include the possibility above, we could define an oracle for $f: S \rightarrow T$ with the possibility of a null query, as above but with an additional distinguishable state of the control register, indexed by some symbol (say $\bullet$) not in $S$, and with $T_{\bullet} = \mathrm{id}$, the identity transformation. Our results still hold for this notion of oracle (which can be viewed as a special case of an oracle of the type defined above, for the slight extension of the function $f$ to have one more input, on which it takes a fixed value). {\color{black} \begin{define}[Oracle system] \label{def: oracle system} A \emph{system of oracles} for a family $\mathcal{C}$ of functions $S \rightarrow T$ is defined as a family of phase transformations (which we call ``oracles'' because they are a particular case of the oracles of Definition \ref{def:oracle}) $\{\mathcal{O}_f\}_{f \in \mathcal{C}}$ such that whenever $f(i)=g(i)$ for all $i\in I$, the oracles corresponding to $f$ and $g$ satisfy $$\mathcal{O}_f|s_I)=\mathcal{O}_g|s_I)$$ for all $|s_I)$ of the form $\omega_I |s)$ (for arbitrary $|s)$).
\end{define} An equivalent, and perhaps more intuitively motivated, definition substitutes the condition ``for all states $|s_I)$ such that $|s_I) = P_I |s_I)$'' for the condition ``for all $|s_I)$ of the form $|s_I) = \omega_I|s)$''. This ensures one cannot learn about the value $f(j)$ when querying using a state with no probability of being found in $|j)$.} That is, $\mathcal{O}_f$ and $\mathcal{O}_g$ act identically on states in the face determined by a subset of inputs on which $f$ and $g$ have the same value, so that we cannot, for example, just write \emph{arbitrary} information about which function is being queried into phase degrees of freedom. One can schematically represent the problems that can be solved by a specific computational model with access to an oracle using the language of complexity classes. Let $\mathbf{C}$ and $\mathbf{B}$ be complexity classes; then $\mathbf{C}^{\mathbf{B}}$ denotes the class $\mathbf{C}$ with an oracle for $\mathbf{B}$ (see \cite{papadimitriou2003computational} for formal definitions). We can think of $\mathbf{C}^{\mathbf{B}}$ as the class of languages decided by a computation which is subject to the restrictions and acceptance criteria of $\mathbf{C}$, but allowing an extra kind of computational step: an oracle for any desired language $\mathcal{L}\in\mathbf{B}$ that may be queried during the computation, where each query counts as a single computational step. Here $\mathcal{L}$ is fixed in any given computation, though different computations may use different $\mathcal{L}$. A natural question is whether or not having access to an oracle for a particular decision problem which can be efficiently solved in a given theory provides any more computational power than just using the efficient algorithm.
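Before addressing that question, the defining condition of Definition \ref{def: oracle system} can be illustrated in the quantum case (a Python/NumPy sketch with hypothetical helper names; not part of the formal development): for diagonal phase oracles on a three-slit system, two functions that agree on $I=\{0,1\}$ act identically on $\omega_I$-states, while they generally differ on $\omega_{I'}$-states when $I'$ contains a slit on which they disagree.

```python
import numpy as np
from itertools import combinations

N = 3  # three slits

def omega(I, rho):
    """Coherence-projector applied to rho: sum over J subseteq I of (-1)^(|I|+|J|) Pi_J rho Pi_J."""
    out = np.zeros_like(rho)
    for r in range(len(I) + 1):
        for J in combinations(I, r):
            Pi = np.zeros((N, N))
            for i in J:
                Pi[i, i] = 1.0
            out = out + (-1.0) ** (len(I) + len(J)) * (Pi @ rho @ Pi)
    return out

def oracle(f):
    """Diagonal phase oracle O_f: rho -> U_f rho U_f with U_f = diag((-1)^f(i))."""
    U = np.diag([(-1.0) ** f[i] for i in range(N)])
    return lambda rho: U @ rho @ U

rho = np.full((N, N), 1.0 / N)   # uniform superposition: all coherences present
f, g = (0, 1, 0), (0, 1, 1)      # agree on slits {0, 1}, differ on slit 2

# the oracle-system condition holds on omega_{0,1}-states ...
assert np.allclose(oracle(f)(omega((0, 1), rho)), oracle(g)(omega((0, 1), rho)))
# ... but the two oracles differ on coherences involving slit 2
assert not np.allclose(oracle(f)(omega((0, 2), rho)), oracle(g)(omega((0, 2), rho)))
```

The phase on the coherence $\rho_{ij}$ depends only on $f(i)$ and $f(j)$, which is exactly why the condition holds automatically for this family of oracles.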
If we schematically denote the class of problems efficiently solvable by a particular theory \textbf{G} by\footnote{See references \cite{lee2015computation,lee2015proofs,landscape} for a rigorous definition of this class.} \textbf{BGP}, this question can be phrased as: ``is \textbf{BGP} closed under \emph{subroutines}''? Here \textbf{BGP} is the analogue of the well-known class of problems efficiently solvable by a quantum computer, \textbf{BQP}. Another way to pose this question is to ask whether $\textbf{BGP}^\textbf{BGP}=\textbf{BGP}$ for \textbf{G} satisfying our principles. There exist complexity classes for which this is probably not the case, for example, \textbf{NP}\footnote{If one assumes that the polynomial hierarchy doesn't collapse.}. But, intuitively, one would expect it to hold in a sensible physical theory where computation is performed with circuits. In such a situation, one might consider it a kind of conceptual consistency check on one's definition of oracle: it is not reasonable for an oracle for a problem to give more power than would be afforded by a circuit for that problem in the model. A potential issue arises when one compares the performance of the oracle implementation to that of the efficient algorithm when both are used as subroutines in another computational algorithm\footnote{Here, an algorithm consists of a poly-size uniform circuit. See \cite{lee2015computation} for the formal definitions.}. 
As we noted in Sections \ref{framework} and \ref{section:oracles}, an oracle can be queried on a superposition of inputs, but one does not normally query an algorithm for a particular decision problem in superposition for the purpose of solving that decision problem; one merely prepares the state corresponding to a particular bit string and uses the algorithm to determine whether or not that bit string is in the language in question.\footnote{However, a quantum algorithm for one problem can sometimes be used as a subroutine in a quantum algorithm for another problem; in this case, the subroutine sometimes \emph{is} run on a superposition of inputs.} For simplicity, we say the efficient algorithm accepts an input if a measurement of the first outcome system yields outcome $(0|$ with probability\footnote{This can be amplified to $1-2^{-q}$, where $q$ is a polynomial in the size of the circuit, by running the circuit in parallel a polynomial number of times. Again, see \cite{lee2015computation}.} greater than $2/3$. This is the same acceptance condition imposed in quantum computation. {\color{black} We therefore need to know whether} every \textbf{BGP} algorithm for a decision problem admits a subroutine having the characteristics of an oracle for that decision problem. Such a result was proved in the quantum case by Bennett et al. in \cite{bennett1997strengths}. The following theorem shows that it is also true for theories satisfying our principles. (See \cite{lee2015computation} for the definition of circuit and circuit family in the GPT context.) \begin{thm}\label{subroutine} Consider a theory \textbf{G} which satisfies the principles outlined in section~\ref{framework}.
Given an algorithm (poly-size family of circuits in $\textbf{G}$), $\{A_{|x|}\}$, for a decision problem in \textbf{BGP}, one can always construct a family $\{G_{|x|}\}$ of polynomial-size circuits implementing reversible transformations from \textbf{G}, which, with high probability (greater than $1 - 2^{-q(|x|)}$, for some polynomial $q$), functions as an oracle for that particular decision problem (in the sense of Definition \ref{def:oracle}). (Here, poly-size means polynomial in the length $|x|$ of the input $x$, and the family $G_{|x|}$ comprises a fixed circuit for each input size.) Schematically, we have $\textbf{BGP}^\textbf{BGP}=\textbf{BGP}$. \end{thm} \proof See Appendix \ref{Proof of subroutine}. \endproof Given our definition of an oracle, we can consider how the computational power of oracles depends on the order of interference of the theory. \section{Lower bounds from useless queries} \label{lower bound} In this section we generalise results of Ref. \cite{meyer2011uselessness}, in which Meyer and Pommersheim derived a relation between quantum and classical query complexity lower bounds via the concept of a ``useless'' quantum query, to the setting of GPTs satisfying our principles. They considered \emph{learning problems} in which one is given an element from a class of functions with the same domain and range, chosen with some arbitrary---but known---prior distribution, where the task is to determine to which specific subclass the chosen function belongs. More formally, we can define a learning problem as follows: \begin{define}[Learning problems] Given a set of functions $\mathcal{C}\subseteq \{0,1\}^X$ where $X$ is some finite set\footnote{One could alternatively consider replacing $\{0,1\}$ with a different finite set $Y$.
The result proved in this section, and the result proved in \cite{meyer2011uselessness}, also hold in the more general case.}, a partitioning of $\mathcal{C}$ into disjoint subsets $\mathcal{C}=\bigsqcup_{j\in J}\mathcal{C}_j$ labeled by $j\in J$, and a probability measure $\mu$ over $\mathcal{C}$. The aim of the learning problem is to determine which subclass $\mathcal{C}_j$ a particular function $f\in \mathcal{C}$ belongs to; this is to be done with low \emph{ex ante} probability of error, with respect to the probability measure $\mu$ with which the function is chosen from among the $\mathcal{C}_j$'s. A particular learning problem is therefore defined by the triple $(\mathcal{C},\{\mathcal{C}_j\},\mu)$. \end{define} One can only access information about the function by querying an oracle, which, when presented with an element from the domain, outputs the corresponding element of the range assigned by the chosen function. Typically one specifies some upper bound to the error probability, and is interested in the minimal number of queries needed to ensure that the \emph{ex ante} probability of error is below the bound, with respect to the measure $\mu$ that gives the ``prior probability'' of functions $f \in \mathcal{C}$. Meyer and Pommersheim showed that if $n$ queries to a classical oracle reveal no information about which function was chosen\footnote{That is, if the probability that the chosen function belongs to a given subclass after $n$ classical queries is the same as the known prior probability with which it was originally chosen.} then neither do $\lfloor{n/2}\rfloor$ queries to a quantum oracle. Hence $\lfloor{n/2}\rfloor +1$ quantum queries constitute a lower bound. Many important query problems are examples of learning problems. For instance, \texttt{PARITY}, a generalisation of the special case of Deutsch's problem in which the input to $f$ is a single bit \cite{nielsen2010quantum}, which asks for the parity of a function\footnote{i.e.
the value $f(1)\oplus\cdots\oplus f(N) \text{ mod }2$.} $f:\{1,\dots,N\}\rightarrow\{0,1\}$, can be written as a learning problem. Indeed, partition the class of all such functions into two subclasses, one in which all functions have parity $0$ and the other in which they have parity $1$, and choose the function uniformly at random, so that each subclass has prior probability $1/2$. In this case, $N-1$ classical queries do not provide any information about the parity, hence at least $\lfloor{(N-1)/2}\rfloor +1 = \lceil N/2 \rceil$ quantum queries are needed to solve the problem. In fact $\lceil N/2 \rceil$ quantum queries are also sufficient\footnote{$\lceil N/2 \rceil$ applications of the solution to Deutsch's problem.}. In this section we generalise Meyer and Pommersheim's result to the case of oracle queries in the generalised probabilistic theory framework presented in the previous section. We prove that if $n$ queries to a classical oracle reveal no information about which function was chosen then neither do $\lfloor{n/k}\rfloor$ queries in a generalised theory satisfying the principles introduced in section~\ref{framework} and which has maximal order of interference $k$. Hence a lower bound to determining the function is $\lfloor{n/k}\rfloor+1$ queries in theories with $k$th order interference. So in the specific generalisation of Deutsch's problem where we are asked to determine the parity of a function $f:\{1,\dots,n\}\rightarrow\{0,1\}$, $\lceil{n/2}\rceil$ quantum queries are needed, but in a theory with $n$th order interference, the ``no-information'' or ``useless-queries'' bound does not rule out the possibility that the parity can be determined with a single query. We can now formally define what it means for $n$ classical queries to be useless \cite{meyer2011uselessness}. \begin{define}[Useless classical queries \cite{meyer2011uselessness}] Let $\left(\mathcal{C}, \{\mathcal{C}_j : j\in J\}, \mu \right)$ be a learning problem.
$n$ classical queries are said to be \emph{useless}, or to convey no information, if for any $x_1, \dots, x_n\in X$ and $y_1,\dots, y_n \in \{0,1\}$ the following holds: $$\mu\left(f\in\mathcal{C}_j \text{ } |\text{ } f(x_i)=y_i, i=1,\dots,n\right)=\mu\left(f\in\mathcal{C}_j\right), \text{ for all } j\in J.$$ \end{define} Here expressions like $\mu\left(f\in\mathcal{C}_j \text{ } |\text{ } f(x_i)=y_i, i=1,\dots,n\right)$ are to be understood simply as conditional probabilities of events like $f \in \mathcal{C}_j$, conditional on events like $f(x_1)= y_1~ \& ~f(x_2) = y_2 ~\& ~ \cdots ~ \& ~ f(x_n) = y_n$. A general $n$-query algorithm in a generalised theory satisfying our principles corresponds to the following: an arbitrary initial state $|\sigma)$ is prepared and input to the oracle $\mathcal{O}_f$, the output state is acted upon by an arbitrary transformation $G_1$ independent of $f$, and the process is repeated. After the $n$th oracle query, the state is $$|\rho_f)=G_n\mathcal{O}_f G_{n-1}\cdots G_1\mathcal{O}_f|\sigma).$$ The final step consists of measuring this state with an arbitrary measurement denoted as $\{(s|\}_{s\in S}$. The final\footnote{As was noted in \cite{meyer2011uselessness}, the final transformation $G_n$ is unnecessary, as it could be incorporated into the measurement.} output of the algorithm is given by a map, independent of $f$, from the set $S$ indexing the measurement outcomes to the set $J$ indexing the subclasses to which the function could belong. The probability of outcome $(s|$ being observed in the measurement, when the unknown function is $f$, is therefore defined to be $(s|\rho_f)$.
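For intuition, the definition of useless classical queries can be verified exhaustively for the \texttt{PARITY} problem discussed earlier (a brute-force Python sketch; the choice $N=4$ and the uniform prior are illustrative assumptions): conditioning on any $N-1$ query outcomes leaves the posterior probability of each parity class at its prior value $1/2$, while $N$ outcomes pin the parity down completely.

```python
from itertools import product, combinations

N = 4
funcs = list(product([0, 1], repeat=N))   # all f: {0,...,N-1} -> {0,1}, uniform prior

def posterior(parity, xs, ys):
    """mu(f in C_parity | f(x_i) = y_i for all i) under the uniform prior."""
    consistent = [f for f in funcs if all(f[x] == y for x, y in zip(xs, ys))]
    return sum(sum(f) % 2 == parity for f in consistent) / len(consistent)

# any N-1 classical queries are useless for PARITY ...
for xs in combinations(range(N), N - 1):
    for ys in product([0, 1], repeat=N - 1):
        assert posterior(0, xs, ys) == 0.5

# ... while N queries determine the parity exactly
assert posterior(0, tuple(range(N)), (1, 0, 0, 1)) == 1.0
```

By Meyer and Pommersheim's result quoted above, $\lfloor (N-1)/2 \rfloor$ quantum queries are then useless for this problem as well.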
So there is a joint probability distribution, which we will denote also by the letter $\mu$, over the outcome $s$ and the function $f$: $$\mu(s,f) = (s|\rho_f)\mu(f).$$ Bayes' rule gives the posterior probability that the function was $f$, given the observed measurement outcome $s$: \begin{equation}\label{applying Bayes rule} \mu(f|s) = (s|\rho_f)\mu(f) / \sum_{g \in \mathcal{C}} (s|\rho_g)\mu(g). \end{equation} The posterior probability that $f \in \mathcal{C}_j$ given this outcome is $$\mu(f \in \mathcal{C}_j|s) = \sum_{f \in \mathcal{C}_j} \mu(f|s).$$ Similarly we define $\mu(f \in \mathcal{C}_j) = \sum_{f \in \mathcal{C}_j} \mu(f)$, the prior probability that $f \in \mathcal{C}_j$. We can now generalise the definition of a useless quantum query from \cite{meyer2011uselessness} to the case of generalised theories satisfying our principles. \begin{define}[Useless generalised queries] Let $\left(\mathcal{C}, \{\mathcal{C}_j \}_{j\in J}, \mu \right)$ be a learning problem. $n$ generalised queries are said to be \emph{useless}, or to convey no information, if for any $n$-query generalised algorithm with initial state $|\sigma)$, transformations $G_n,\dots, G_1$, and measurement $\{(s|\}_{s\in S}$ the following holds: $$\mu\left(f\in\mathcal{C}_j \text{ } |\text{ } s\right)= \mu\left(f\in\mathcal{C}_j\right), \text{ for all possible } s\in S, j\in J.$$ \end{define} We now present our main result, which generalises Theorem $1$ from \cite{meyer2011uselessness}. \begin{thm} Let $\left(\mathcal{C}, \{\mathcal{C}_j : j\in J\}, \mu \right)$ be a learning problem. Suppose $kn$ classical queries are useless. Then in any theory which satisfies our principles and has maximal order of interference $k$, $n$ generalised queries are useless. \end{thm} We present the formal proof below, but first provide a rough proof sketch. In a theory with $k$th order interference, each state can be decomposed as in Eq.~(\ref{coherence decomposition}).
Hence each state is explicitly indexed by subsets---of size at most $|I|=k$---of the set of pure and perfectly distinguishable states defining the oracle. Thus, after a single generalised query, the state is indexed by the valuation of the chosen function on at most $k$ inputs. After $n$ generalised queries, it is indexed by $kn$ valuations. Therefore, a measurement can reveal at most $kn$ valuations of the chosen function. But, as $kn$ classical queries are useless, it must be that $n$ generalised queries are also useless. The intuition behind this result is that, as a given state can have coherence between at most $k$ basis states, one can use generalised superposition states to extract at most $k$ valuations of a given function in a single query. \begin{proof} Our proof is essentially a slight generalisation of the original quantum one presented in \cite{meyer2011uselessness}. We need to show that the probability of $f$ being in $\mathcal{C}_j$ does not change if outcome $s$ is observed after $n$ queries. That is, we must show $$\sum_{f\in\mathcal{C}_j} \mu (f\text{ }|\text{ } s) = \mu\left(\mathcal{C}_j\right), \text{ for any } s\in S \text{ and } j\in J.$$ Recalling Eq.~(\ref{applying Bayes rule}), obtained via Bayes' rule, we have $$ \mu(f|s)= \frac{(s|\rho_f)\mu(f)}{\sum_{g\in \mathcal{C}} (s|\rho_g)\mu(g)} $$ and summing over $f$ in $\mathcal{C}_j$, we have \begin{equation} \label{proof} \sum_{f\in\mathcal{C}_j}\mu (f\text{ }|\text{ } s) = \frac{(s| \sum_{f\in\mathcal{C}_j} \mu(f)|\rho_f)}{(s| \sum_{g\in\mathcal{C}} \mu(g)|\rho_g)}. \end{equation} Let us focus on $|\rho_f)$. Given the decomposition in Eq.~(\ref{coherence decomposition}), every state can be written as $$|\sigma)=\sum_{I,|I|=1}^k\omega_I|\sigma)=: \sum_{I,|I|=1}^k\sigma_I.$$ Now, each $\mathcal{O}_f\left(\sigma_I\right)$ can depend on all $f(i)$ with $i\in I$.
By padding out those $I$ with $|I|<k$ with dummy indices, after a single query one can write $$\mathcal{O}_f |\sigma)=\sum_I \mathcal{O}_f(\sigma_I)=\sum_{T_1} Q_{T_1}\left( f(x_1^1), f(x^2_1),\dots, f(x^k_1)\right),$$ where the second equality is just a relabeling of the terms, where $T_1=\{x_1^1, x^2_1,\dots, x^k_1\}$ is the padded version of $I$, and hence each $Q_{T_1}$ is a vector in the real vector space of states that depends on $f(x_1^1), f(x^2_1),\dots, f(x^k_1)$. Therefore, after $n$ queries one can write the state as $$|\rho_f)= \sum_{T_n} Q_{T_n}\left( f(x_1^1),\dots, f(x^1_n), f(x^2_1),\dots, f(x^2_n),\dots, f(x^k_1),\dots,f(x^k_n) \right). $$ A change of variables gives $$\begin{aligned} &\sum_{f\in\mathcal{C}_j} \mu(f)|\rho_f)= \\ &\sum_{T_n}\sum_{\{y_i^1\},\dots,\{y^k_i\}} \mu\left(f\in\mathcal{C}_j \text{ and } f(x_i^m)=y_i^m, \text{ for } i=1,\dots,n \text{ and } m=1,\dots, k\right) Q_{T_n}\left(y_1^1,\dots y_n^k \right). \end{aligned}$$ As $kn$ classical queries are useless, $$\begin{aligned} \mu & \left(f\in\mathcal{C}_j \text{ and } f(x_i^m)=y_i^m, \text{ for } i=1,\dots,n \text{ and } m=1,\dots, k\right)= \\ & \qquad\qquad\qquad\qquad\qquad\qquad\qquad \mu(\mathcal{C}_j) \mu\left( f(x_i^m)=y_i^m, \text{ for } i=1,\dots,n \text{ and } m=1,\dots, k\right). \end{aligned}$$ Inputting this into the above we obtain \begin{equation}\label{eq:proofEdit} \begin{aligned} &\sum_{f\in\mathcal{C}_j} \mu(f)|\rho_f)= \\ & \mu(\mathcal{C}_j) \sum_{T_n}\sum_{\{y_i^1\},\dots,\{y^k_i\}}\mu\left( f(x_i^m)=y_i^m, \text{ for } i=1,\dots,n \text{ and } m=1,\dots, k\right) Q_{T_n}\left(y_1^1,\dots y_n^k \right). \end{aligned}\end{equation} Summing over $j\in J$ then results in $$\begin{aligned} &\sum_{f\in\mathcal{C}} \mu(f)|\rho_f)= \\ &\sum_{T_n}\sum_{\{y_i^1\},\dots,\{y^k_i\}}\mu\left( f(x_i^m)=y_i^m, \text{ for } i=1,\dots,n \text{ and } m=1,\dots, k\right) Q_{T_n}\left(y_1^1,\dots y_n^k \right).
\end{aligned}$$ Substituting this back into Eq.~(\ref{eq:proofEdit}) immediately gives $$\sum_{f\in\mathcal{C}_j} \mu(f)|\rho_f) = \mu(\mathcal{C}_j)\sum_{f\in\mathcal{C}}\mu(f)|\rho_f).$$ Finally, substituting this into Eq.~(\ref{proof}) completes the proof. \end{proof} \section{Conclusion} In this work we have introduced a well-defined oracle model for generalised probabilistic theories, and shown it to be well-behaved in the sense given by our subroutine theorem: an oracle of our type for a given problem is not more powerful than an algorithm for that problem, since an algorithm permits high-probability simulation of an oracle. This allowed us to compare the computational power imposed by different physical principles through the lens of query complexity. Our main result in this regard was to show that the ``zero-information'' lower bound on the number of queries to a quantum oracle needed to solve certain problems is not optimal in the space of generalised theories satisfying the principles introduced in section~\ref{framework}. Our result highlights the role of interference in computational advantages in a theory-independent manner, allowing the possibility that ``more interference could permit more computational power''. Previous work by the authors in \cite{lee2016deriving} derived Grover's lower bound for the search problem from simple physical principles. The search problem asks one to find a certain ``marked item'' from among a collection of items in an unordered database. The only access to the database is through an oracle; when asked if item $i$ is the marked one, the oracle outputs ``yes'' or ``no''. The figure of merit in this problem is how the minimum number of queries required to find the marked item scales with the size of the database. It was shown---subject to strong assumptions close to those used in the present paper---that, asymptotically, higher-order interference does not provide an advantage over quantum theory in this case.
As opposed to the asymptotic behaviour of the number of queries needed to solve the search problem, the current work was concerned with whether a fixed number of queries yielded any information about the solution of a particular query problem, where the problem could be any of a large class of ``learning problems''. In this case we were able to show that the ``useless-queries'' or ``zero-information'' lower bound on the number of queries is lower, the higher the order of interference in the theory, leaving open the possibility that higher-order interference could lead to a computational speedup (although we did not show that such a speedup is achievable). Note that a specific oracle model for the search problem was introduced in \cite{lee2016deriving}. However, this is just a special case of the general model introduced in section~\ref{section:oracles} of the current work. Moreover, the subroutine theorem proved here shows that our general oracle model is well-defined. Our derivation of query lower bounds raises the question of whether the physical principles we have discussed are sufficient for the existence of algorithms which achieve these lower bounds. In the specific case of the search problem, a quantum search algorithm based on Hamiltonian simulation, due to Farhi and Gutmann \cite{FarhiGutmann} and also presented in chapter $6$ of the well-known textbook by Nielsen and Chuang \cite{nielsen2010quantum}, may be more directly generalisable to theories satisfying our principles than Grover's original construction \cite{grover1997quantum}. This approach may also be applicable to many other query algorithms. In the algorithm as presented in \cite{nielsen2010quantum}, one considers a Hamiltonian $H$ consisting of the projectors onto the marked item $\ket{x}$ and onto the initial input state $\ket{\psi}=\alpha\ket{x}+\beta\ket{y}$, where $\ket{y}$ is orthogonal to $\ket{x}$ and $\alpha^2 +\beta^2=1$.
That is, they consider the Hamiltonian $H=\ket{x}\bra{x}+\ket{\psi}\bra{\psi}$. Evolving the initial input state under this Hamiltonian for time $t$ results in, up to an irrelevant global phase, $$\exp(-itH)\ket{\psi}=\cos(\alpha t)\ket{\psi} - i\sin(\alpha t)\ket{x}.$$ Hence, measuring the system in the $\{\ket{x}, \ket{y}\}$ basis at time $t=\pi/(2\alpha)$ yields outcome $\ket{x}$ with probability one. If the initial state is a uniform superposition over the orthonormal basis containing $\ket{x}$, then the required evolution time is $t=\pi\sqrt{N}/2$, where $N$ is the size of the system (or equivalently, the number of elements in the database being searched). One might wonder why there is no mention of an oracle in the above discussion. The oracle comes into play when constructing a quantum circuit to simulate the above Hamiltonian evolution. As the above Hamiltonian depends on the marked item, the quantum circuit simulating it must query the search oracle a number of times proportional to the evolution time \cite{nielsen2010quantum}. In this specific case, an efficient Hamiltonian simulation requires $O(\sqrt{N})$ queries to the oracle, yielding an optimal quantum algorithm (up to constant factors) for the search problem. Recently, the authors of \cite{barnum2014higher} have introduced a physical principle, termed ``energy observability'', which implies the existence of a continuous reversible time evolution and ensures that the generator of any such evolution---a generalised ``Hamiltonian''---is associated to an appropriate observable, which is a conserved quantity under the evolution---the generalised ``energy'' of the evolving system. Recall from section~\ref{framework} that the principles we have discussed were sufficient to ensure that projectors onto arbitrary states correspond to allowed transformations.
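The evolution just described is easily checked numerically (a Python/NumPy sketch; the database size $N=64$ and the marked index are arbitrary illustrative choices): diagonalising $H=\ket{x}\bra{x}+\ket{\psi}\bra{\psi}$ and evolving the uniform superposition for $t=\pi\sqrt{N}/2$ concentrates essentially all probability on the marked item.

```python
import numpy as np

N = 64                                    # database size (illustrative)
marked = 3                                # marked item index (illustrative)
psi = np.full(N, 1.0 / np.sqrt(N))        # uniform superposition, <x|psi> = 1/sqrt(N)
x = np.zeros(N)
x[marked] = 1.0

H = np.outer(x, x) + np.outer(psi, psi)   # H = |x><x| + |psi><psi|

# evolve for t = pi/(2 alpha) = pi sqrt(N)/2 via the spectral decomposition of H
t = np.pi * np.sqrt(N) / 2
evals, evecs = np.linalg.eigh(H)
state = evecs @ (np.exp(-1j * evals * t) * (evecs.conj().T @ psi))

p_marked = abs(x @ state) ** 2
assert abs(p_marked - 1.0) < 1e-6         # marked item found with probability ~ 1
```

The evolution takes place entirely in the two-dimensional span of $\ket{x}$ and $\ket{\psi}$, which is why a full diagonalisation reproduces the closed-form rotation above.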
Hence, our previous principles, together with energy observability as introduced in \cite{barnum2014higher}, might be sufficient to run the above quantum search algorithm, hence providing a theory-independent description of an optimal (up to constant factors) search algorithm. Similar constructions based on Hamiltonian simulation might also show that theories satisfying the above physical principles can reach the query lower bounds derived in this paper. \begin{appendices} \section{Proof of theorem~\ref{existence of projector}} \label{Proof of projector} This is an adaptation of the proof of Theorem 8 from \cite{barnum2014higher}. Consider a self-dual cone $\mathbf{C}$ with self-dualising inner product $\left<\cdot,\cdot\right>$. Now consider a set of pure and perfectly distinguishable states $\phi_i$ which are distinguished by the effects $e_i$ such that $(e_i|\phi_j)=\delta_{ij}$. We define a face $F$ of $\mathbf{C}$ as the minimal face generated by the set of states $\{\phi_i\}$; moreover, we define the dual face $F^*:=\left\{x\in \mathbf{C}\ | \left<x,s\right> \geq 0 \ \ \forall s\in F\right\}$. Appendix A in \cite{barnum2014higher} shows that if $F=F^*$ then there exists a positive projector onto the face $F$. Consider some $t\in F$. Self-duality of $\mathbf{C}$ implies that $\left<t,s\right>\geq 0 \ \forall s\in F$; hence $t\in F^*$, and so $F\subseteq F^*$. We therefore just need to prove the converse inclusion and we are done. To prove this, consider a normalised extremal $x\in F^*$; there must be some $s\in F$ such that $\left<s,x\right>=0$ and, moreover, such that $\left<s,y\right>=0$ implies $y\propto x$. Next we prove two simple results: i) $s$ is not internal to $F$---assume, for the sake of contradiction, that $s$ is internal. Then $\left<x,t\right>=0 \ \forall t\in F$, so $x=0$ and hence is not normalised, a contradiction. ii) There exists $t\in F$ such that $s$ and $t$ are perfectly distinguishable---assume, again to reach a contradiction, that there is no such $t$.
This means that, for any pure and perfectly distinguishing measurement $\{\epsilon_i\}$, we have $(\epsilon_i|s)>0$ for all $i$. Due to strong symmetry, any pure effect appears in such a measurement, therefore $(e|s)>0$ for all pure effects $(e|$. This suffices for tomography; hence $|s)$ is an internal state, in contradiction with (i). Theorem 1 from \cite{muller2012structure} implies that if $s$ and $t$ are perfectly distinguishable states then $\left<s,t\right>=0$; hence the state $t$ provided by (ii) satisfies $\left<s,t\right>=0$, so $t\propto x$ and therefore $x\in F$. This is true for all extremal normalised $x\in F^*$; it therefore follows from convexity that it is true for all $x\in F^*$, and so $F^*\subseteq F$, which concludes the proof. Hence, projectors $P_F$ onto $F$ are positive transformations. It was shown in \cite{chiribella2010probabilistic} that in any theory satisfying causality, purification and informationally consistent composition, mathematically well-defined transformations are physical, i.e.\ they are allowed in the theory. Hence the projectors $P_F$ are physically allowed transformations. Moreover, given two faces $F$ and $G$ generated by different subsets of the same pure and perfectly distinguishable set of states, one has $P_FP_G=P_{F\cap G}$. \section{Useful consequences of our principles} \subsection{Uniqueness of distinguishing measurement} \label{Sym} Strong symmetry (together with the no-restriction hypothesis, which says that all mathematically well-defined measurements are physical) implies that, given any set of pure and perfectly distinguishable states $\{|i)\}$, there exists a unique measurement $\{(j|\}$ such that \[(i|j)=\delta_{ij}.\] See \cite{muller2012structure,barnum2014higher} for details.
Moreover, for every set $\{(e_j|\}$ such that $(e_j|i)=\alpha_j \delta_{ij}$, it holds that \[(e_j|=\alpha_j(j|.\] \subsection{Purifications of completely mixed states are dynamically faithful} As mentioned in section~\ref{framework}, purification implies that \emph{completely mixed} states exist; it moreover implies that there is a state $|\psi)$ purifying any such completely mixed state: \[\begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=none] (0) at (1, -0.25) {}; \node [style=none] (1) at (1, 1.25) {}; \node [style=none] (2) at (1, 1.75) {}; \node [style=none] (3) at (1, -0.75) {}; \node [style=trace] (4) at (2.5, -0.25) {}; \node [style=none] (5) at (2.5, 1.25) {}; \node [style=none] (6) at (1, -0.75) {}; \node [style=none] (7) at (0.5, 0.5) {$\psi$}; \node [style=none] (8) at (3.75, 0.75) {$=$}; \node [style=traceState] (9) at (5, 0.75) {}; \node [style=none] (10) at (6.5, 0.75) {}; \node [style=none] (11) at (7.25, -0) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [bend right=90, looseness=1.25] (2.center) to (3.center); \draw (2.center) to (3.center); \draw (1.center) to (5.center); \draw (0.center) to (4); \draw (9) to (10.center); \end{pgfonlayer} \end{tikzpicture} \] This purification is unique up to a reversible transformation on the purifying system.
We denote a particular choice of this purification as, \[\begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=none] (0) at (4.75, -0.25) {}; \node [style=none] (1) at (4.75, 1.25) {}; \node [style=none] (2) at (4.75, 1.75) {}; \node [style=none] (3) at (4.75, -0.75) {}; \node [style=none] (4) at (6.25, -0.25) {}; \node [style=none] (5) at (6.25, 1.25) {}; \node [style=none] (6) at (4.75, -0.75) {}; \node [style=none] (7) at (4.25, 0.5) {$\psi$}; \node [style=none] (8) at (2.5, 0.5) {$:=$}; \node [style=none] (9) at (0.75, 1.25) {}; \node [style=none] (10) at (1.25, -0.25) {}; \node [style=none] (11) at (0.75, -0.25) {}; \node [style=none] (12) at (1.25, 1.25) {}; \node [style=none] (13) at (7, -0) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw [bend right=90, looseness=1.25] (2.center) to (3.center); \draw (2.center) to (3.center); \draw (1.center) to (5.center); \draw (0.center) to (4.center); \draw (9.center) to (12.center); \draw (11.center) to (10.center); \draw [bend right=90, looseness=1.75] (9.center) to (11.center); \end{pgfonlayer} \end{tikzpicture} \] Purifications of completely mixed states are called \emph{dynamically faithful} states \cite{chiribella2010probabilistic} and, due to the constraints on parallel composition imposed in section~\ref{framework}, must satisfy the following important condition \cite{chiribella2010probabilistic}: \[\begin{tikzpicture} \begin{pgfonlayer}{nodelayer} \node [style=none] (0) at (5.5, 7.25) {$=$}; \node [style=none] (1) at (0.7499999, 7) {}; \node [style=none] (2) at (1.25, 5.25) {}; \node [style=none] (3) at (0.7499997, 5.25) {}; \node [style=none] (4) at (1.25, 7) {}; \node [style=none] (5) at (1.25, 9) {}; \node [style=none] (6) at (1.25, 6) {}; \node [style=none] (7) at (2.75, 6) {}; \node [style=none] (8) at (2.75, 9) {}; \node [style=none] (9) at (2, 7.5) {$T$}; \node [style=none] (10) at (1.25, 8.5) {}; \node [style=none] (11) at (1.25, 7.5) {}; \node [style=none] (12) at (0, 8.5) {}; \node 
[style=none] (13) at (3.75, 7) {}; \node [style=none] (14) at (3.75, 8.5) {}; \node [style=none] (15) at (2.75, 8.5) {}; \node [style=none] (16) at (2.75, 7) {}; \node [style=none] (17) at (3.75, 5.25) {}; \node [style=none] (18) at (8.25, 6) {}; \node [style=none] (19) at (8.25, 5.25) {}; \node [style=none] (20) at (9.75, 7) {}; \node [style=none] (21) at (8.25, 8.5) {}; \node [style=none] (22) at (8.25, 9) {}; \node [style=none] (23) at (9.75, 8.5) {}; \node [style=none] (24) at (10.75, 5.25) {}; \node [style=none] (25) at (9.75, 6) {}; \node [style=none] (26) at (9.000001, 7.5) {$T'$}; \node [style=none] (27) at (6.999999, 8.5) {}; \node [style=none] (28) at (7.75, 7) {}; \node [style=none] (29) at (8.25, 7.5) {}; \node [style=none] (30) at (10.75, 8.5) {}; \node [style=none] (31) at (9.75, 9) {}; \node [style=none] (32) at (8.25, 7) {}; \node [style=none] (33) at (7.75, 5.25) {}; \node [style=none] (34) at (10.75, 7) {}; \node [style=none] (35) at (2, 3.5) {$\implies$}; \node [style=none] (36) at (3.75, -0.7499997) {}; \node [style=none] (37) at (2.749999, -0) {}; \node [style=none] (38) at (13.25, 1.75) {}; \node [style=none] (39) at (11.5, 0.7499999) {$T'$}; \node [style=none] (40) at (12.25, 0.25) {}; \node [style=none] (41) at (5.25, 1.75) {}; \node [style=none] (42) at (5.25, 2.25) {}; \node [style=none] (43) at (12.25, -0.7499999) {}; \node [style=none] (44) at (3.75, 1.75) {}; \node [style=none] (45) at (3.75, -0) {}; \node [style=none] (46) at (12.25, 2.25) {}; \node [style=none] (47) at (9.500001, 1.75) {}; \node [style=none] (48) at (3.75, 0.7499997) {}; \node [style=none] (49) at (10.75, 1.75) {}; \node [style=none] (50) at (5.25, 0.25) {}; \node [style=none] (51) at (2.5, 1.75) {}; \node [style=none] (52) at (3.75, 2.25) {}; \node [style=none] (53) at (8, 0.5000001) {$=$}; \node [style=none] (54) at (13.25, 0.2499996) {}; \node [style=none] (55) at (10.75, -0) {}; \node [style=none] (56) at (9.750001, -0) {}; \node [style=none] (57) at (6.25, 1.75) 
{}; \node [style=none] (58) at (4.5, 0.7499999) {$T$}; \node [style=none] (59) at (12.25, 1.75) {}; \node [style=none] (60) at (10.75, -0.7499997) {}; \node [style=none] (61) at (6.249999, 0.2499996) {}; \node [style=none] (62) at (5.25, -0.7499999) {}; \node [style=none] (63) at (10.75, 0.7499997) {}; \node [style=none] (64) at (10.75, 2.25) {}; \node [style=none] (66) at (2.749999, -0) {}; \node [style=none] (67) at (15.75, -0) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (1.center) to (4.center); \draw (3.center) to (2.center); \draw [bend right=90, looseness=1.75] (1.center) to (3.center); \draw (5.center) to (6.center); \draw (6.center) to (7.center); \draw (7.center) to (8.center); \draw (8.center) to (5.center); \draw (12.center) to (10.center); \draw (15.center) to (14.center); \draw (16.center) to (13.center); \draw (2.center) to (17.center); \draw (28.center) to (32.center); \draw (33.center) to (19.center); \draw [bend right=90, looseness=1.75] (28.center) to (33.center); \draw (22.center) to (18.center); \draw (18.center) to (25.center); \draw (25.center) to (31.center); \draw (31.center) to (22.center); \draw (27.center) to (21.center); \draw (23.center) to (30.center); \draw (20.center) to (34.center); \draw (19.center) to (24.center); \draw (37) to (45.center); \draw (52.center) to (36.center); \draw (36.center) to (62.center); \draw (62.center) to (42.center); \draw (42.center) to (52.center); \draw (51.center) to (44.center); \draw (41.center) to (57.center); \draw (50.center) to (61.center); \draw (64.center) to (60.center); \draw (60.center) to (43.center); \draw (43.center) to (46.center); \draw (46.center) to (64.center); \draw (47.center) to (49.center); \draw (56) to (55.center); \draw (59.center) to (38.center); \draw (40.center) to (54.center); \end{pgfonlayer} \end{tikzpicture} \] As a special case, of course the purification of the \emph{maximally mixed} state is dynamically faithful. 
In our applications, however, any dynamically faithful state, purifying some completely mixed state, will do. \section{Proof of theorem~\ref{subroutine}} \label{Proof of subroutine} \begin{proof} It was shown in \cite{chiribella2010probabilistic} that the purification principle implies the ability to dilate any transformation to a reversible one. We use this fact in the construction of the circuit $\{G_{|x|}\}$. Our construction is equivalent to the one employed in the quantum case by \cite{bennett1997strengths}. In the construction, each $G_{|x|}$ is given by: \[\begin{tikzpicture}[scale=1.25] \begin{pgfonlayer}{nodelayer} \node [style=none] (0) at (-0.5, 4) {}; \node [style=none] (1) at (-0.5, 2.5) {}; \node [style=none] (2) at (-0.5, 1) {}; \node [style=none] (3) at (-0.5, -0.5) {}; \node [style=none] (4) at (0.25, 0.5) {$\vdots$}; \node [style=none] (5) at (0.75, 3) {}; \node [style=none] (6) at (0.75, -1) {}; \node [style=none] (7) at (2.25, -1) {}; \node [style=none] (8) at (2.25, 3) {}; \node [style=none] (9) at (1.5, 1) {$U_{|x|}$}; \node [style=none] (10) at (0.75, 2.5) {}; \node [style=none] (11) at (0.75, 1) {}; \node [style=none] (12) at (0.75, -0.5) {}; \node [style=none] (13) at (2.25, -0.5) {}; \node [style=none] (14) at (2.25, 1) {}; \node [style=none] (15) at (2.25, 2.5) {}; \node [style=none] (16) at (3.25, 2.5) {}; \node [style=none] (17) at (3.25, 2) {}; \node [style=none] (18) at (4.25, 2) {}; \node [style=none] (19) at (3.25, 4) {}; \node [style=none] (20) at (4.25, 4) {}; \node [style=none] (21) at (4.25, 2.5) {}; \node [style=none] (22) at (4.25, 4.5) {}; \node [style=none] (23) at (3.25, 4.5) {}; \node [style=none] (24) at (3.75, 3.25) {$C$}; \node [style=none] (25) at (5.25, -0.5) {}; \node [style=none] (26) at (6.75, 1) {}; \node [style=none] (27) at (6, 1) {$U^{-1}_{|x|}$}; \node [style=none] (28) at (3.75, 0.5) {$\vdots$}; \node [style=none] (29) at (5.25, -1) {}; \node [style=none] (30) at (6.75, 3) {}; \node [style=none] (31) at 
(6.75, 2.5) {}; \node [style=none] (32) at (7.25, 0.5) {$\vdots$}; \node [style=none] (33) at (6.75, -0.5) {}; \node [style=none] (34) at (5.25, 1) {}; \node [style=none] (35) at (5.25, 2.5) {}; \node [style=none] (36) at (5.25, 3) {}; \node [style=none] (37) at (6.75, -1) {}; \node [style=none] (38) at (8, 1) {}; \node [style=none] (39) at (8, 2.5) {}; \node [style=none] (40) at (8, -0.5) {}; \node [style=none] (41) at (8, 4) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (5.center) to (8.center); \draw (8.center) to (7.center); \draw (7.center) to (6.center); \draw (6.center) to (5.center); \draw (36.center) to (30.center); \draw (30.center) to (37.center); \draw (37.center) to (29.center); \draw (29.center) to (36.center); \draw (0) to (19.center); \draw (23.center) to (17.center); \draw (17.center) to (18.center); \draw (18.center) to (22.center); \draw (22.center) to (23.center); \draw (15.center) to (16.center); \draw (10.center) to (1); \draw (2) to (11.center); \draw (3) to (12.center); \draw (13.center) to (25.center); \draw (14.center) to (34.center); \draw (21.center) to (35.center); \draw (20.center) to (41); \draw (31.center) to (39); \draw (26.center) to (38); \draw (33.center) to (40); \end{pgfonlayer} \end{tikzpicture}\] where $U_{|x|}$ is the reversible transformation which dilates\footnote{Recall that the circuit family $\{U_{|x|}\}$, with $U_{|x|}$ a reversible transformation which dilates $A_{|x|}$ for each ${|x|}$, consists of poly-size uniform circuits.} the \textbf{BGP} algorithm $A_{|x|}$ \[\begin{tikzpicture}[scale=1.6] \begin{pgfonlayer}{nodelayer} \node [style=none] (0) at (0, 2) {}; \node [style=none] (1) at (1, 2) {}; \node [style=none] (2) at (0, 0.5) {}; \node [style=none] (3) at (1, 0.5) {}; \node [style=none] (4) at (1, 2.5) {}; \node [style=none] (5) at (0.5, 1.5) {$\vdots$}; \node [style=none] (6) at (1, -0) {}; \node [style=none] (7) at (2, -0) {}; \node [style=none] (8) at (2, 2.5) {}; \node [style=none] (9) at
(2, 2) {}; \node [style=none] (10) at (3, 2) {}; \node [style=none] (11) at (2, 0.5) {}; \node [style=none] (12) at (3, 0.5) {}; \node [style=none] (13) at (2.5, 1.5) {$\vdots$}; \node [style=none] (14) at (1.5, 1.25) {$A_{|x|}$}; \node [style=none] (15) at (4.5, 1.25) {$=$}; \node [style=none] (16) at (8.5, 2) {}; \node [style=none] (17) at (7.5, 0.5) {}; \node [style=none] (18) at (7.5, 2.5) {}; \node [style=none] (19) at (8.5, -1) {}; \node [style=none] (20) at (9, 1.5) {$\vdots$}; \node [style=none] (21) at (10, 2) {}; \node [style=none] (22) at (10, 0.5) {}; \node [style=none] (23) at (6, 2) {}; \node [style=none] (24) at (7.5, 2) {}; \node [style=none] (25) at (7.5, -1) {}; \node [style=none] (26) at (6, 0.5) {}; \node [style=none] (27) at (8, 0.75) {$U_{|x|}$}; \node [style=none] (28) at (8.5, 2.5) {}; \node [style=none] (29) at (7, 1.5) {$\vdots$}; \node [style=none] (30) at (8.5, 0.5) {}; \node [style=none] (31) at (7.5, -0.5) {}; \node [style=cpoint] (32) at (6.5, -0.5) {$0$}; \node [style=none] (33) at (8.5, -0.5) {}; \node [style=trace] (34) at (9.5, -0.5) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0.center) to (1.center); \draw (4.center) to (6.center); \draw (6.center) to (7.center); \draw (7.center) to (8.center); \draw (8.center) to (4.center); \draw (2.center) to (3.center); \draw (11.center) to (12.center); \draw (9.center) to (10.center); \draw (23.center) to (24.center); \draw (18.center) to (25.center); \draw (25.center) to (19.center); \draw (19.center) to (28.center); \draw (28.center) to (18.center); \draw (26.center) to (17.center); \draw (30.center) to (22.center); \draw (16.center) to (21.center); \draw (32) to (31.center); \draw (33.center) to (34); \end{pgfonlayer} \end{tikzpicture}\] and $C$ is ``controlled bit-flip'', or ``generalised CNOT'': a reversible controlled transformation with the lower system as the control \[ \begin{tikzpicture}[scale=1.7] \begin{pgfonlayer}{nodelayer} \node [style=none] (0) at (0.9999999, 
-0) {}; \node [style=none] (1) at (2, -0) {}; \node [style=none] (2) at (0.9999999, 1.5) {}; \node [style=none] (3) at (2, 1.5) {}; \node [style=none] (4) at (2, 2) {}; \node [style=none] (5) at (0.9999999, 2) {}; \node [style=none] (6) at (1.5, 0.9999999) {$C$}; \node [style=none] (7) at (5, 0.9999999) {$=$}; \node [style=none] (8) at (0, 1.5) {}; \node [style=none] (9) at (3, 1.5) {}; \node [style=none] (10) at (0.9999999, 0.4999999) {}; \node [style=none] (11) at (2, 0.4999999) {}; \node [style=none] (12) at (3, 0.4999999) {}; \node [style=cpoint] (13) at (0, 0.4999999) {$i$}; \node [style=none] (14) at (6.499999, 1.5) {}; \node [style=none] (15) at (9.500001, 0.4999999) {}; \node [style=cpoint] (16) at (6.999999, 0.4999999) {$i$}; \node [style=none] (17) at (9.500001, 1.5) {}; \node [style={small box}] (18) at (7.999999, 1.5) {$T_i$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (5.center) to (0.center); \draw (0.center) to (1.center); \draw (1.center) to (4.center); \draw (4.center) to (5.center); \draw (8.center) to (2.center); \draw (3.center) to (9.center); \draw (11.center) to (12.center); \draw (13) to (10.center); \draw (16) to (15.center); \draw (14.center) to (18); \draw (18) to (17.center); \end{pgfonlayer} \end{tikzpicture} \] with $|i)\in\{|0), |1)\}$, $T_0=\mathds{I}$, and where $T_1$ acts as $T_1|i)=|i\oplus 1)$. 
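In the simplest special case, where the algorithm is deterministic rather than bounded-error, this compute--copy--uncompute construction can be made concrete with permutation matrices. The decision function, register sizes, and marked input in the following sketch are illustrative choices, not part of the construction:

```python
import numpy as np

# Hypothetical deterministic decision algorithm (accepts exactly x = 2).
f = lambda x: int(x == 2)
nx = 4                                   # 2-bit input register

def basis(i, dim):
    v = np.zeros(dim)
    v[i] = 1.0
    return v

# Registers ordered (input x, ancilla a, target b); total dimension 4*2*2 = 16.
def idx(x, a, b):
    return (x * 2 + a) * 2 + b

# U: reversible dilation of the algorithm, writing f(x) into the ancilla.
U = np.zeros((16, 16))
for x in range(nx):
    for a in range(2):
        for b in range(2):
            U[idx(x, a ^ f(x), b), idx(x, a, b)] = 1.0

# C: controlled bit-flip ("generalised CNOT") copying the ancilla into the target.
C = np.zeros((16, 16))
for x in range(nx):
    for a in range(2):
        for b in range(2):
            C[idx(x, a, b ^ a), idx(x, a, b)] = 1.0

G = U.T @ C @ U        # U^{-1} C U; U is a permutation matrix, so U^{-1} = U^T

# Net effect on |x, 0, b>: the ancilla is returned to 0 and the target picks
# up f(x), i.e. G acts exactly as the oracle |x, b> -> |x, b XOR f(x)>.
for x in range(nx):
    for b in range(2):
        out = G @ basis(idx(x, 0, b), 16)
        assert out[idx(x, 0, b ^ f(x))] == 1.0
```

For a bounded-error algorithm the ancilla is only approximately uncomputed, and quantifying how close $G_{|x|}$ then comes to an exact oracle is precisely the content of the probability estimate in the proof.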
To show that the family $G_{|x|}$ functions as an oracle with high probability, thereby proving theorem~\ref{subroutine}, we will show that the probability corresponding to the following closed circuit \[\begin{tikzpicture}[scale=1.4] \begin{pgfonlayer}{nodelayer} \node [style=cpoint] (0) at (-0.5, 4) {$0$}; \node [style=cpoint] (1) at (-0.5, 2.5) {$x$}; \node [style=cpoint] (2) at (-0.5, 1) {$0$}; \node [style=cpoint] (3) at (-0.5, -0.5) {$0$}; \node [style=none] (4) at (0.25, 0.5) {$\vdots$}; \node [style=none] (5) at (0.75, 3) {}; \node [style=none] (6) at (0.75, -1) {}; \node [style=none] (7) at (2.25, -1) {}; \node [style=none] (8) at (2.25, 3) {}; \node [style=none] (9) at (1.5, 1) {$U_{|x|}$}; \node [style=none] (10) at (0.75, 2.5) {}; \node [style=none] (11) at (0.75, 1) {}; \node [style=none] (12) at (0.75, -0.5) {}; \node [style=none] (13) at (2.25, -0.5) {}; \node [style=none] (14) at (2.25, 1) {}; \node [style=none] (15) at (2.25, 2.5) {}; \node [style=none] (16) at (3.25, 2.5) {}; \node [style=none] (17) at (3.25, 2) {}; \node [style=none] (18) at (4.25, 2) {}; \node [style=none] (19) at (3.25, 4) {}; \node [style=none] (20) at (4.25, 4) {}; \node [style=none] (21) at (4.25, 2.5) {}; \node [style=none] (22) at (4.25, 4.5) {}; \node [style=none] (23) at (3.25, 4.5) {}; \node [style=none] (24) at (3.75, 3.25) {$C$}; \node [style=none] (25) at (5.25, -0.5) {}; \node [style=none] (26) at (6.75, 1) {}; \node [style=none] (27) at (6, 1) {$U^{-1}_{|x|}$}; \node [style=none] (28) at (3.75, 0.5) {$\vdots$}; \node [style=none] (29) at (5.25, -1) {}; \node [style=none] (30) at (6.75, 3) {}; \node [style=none] (31) at (6.75, 2.5) {}; \node [style=none] (32) at (7.25, 0.5) {$\vdots$}; \node [style=none] (33) at (6.75, -0.5) {}; \node [style=none] (34) at (5.25, 1) {}; \node [style=none] (35) at (5.25, 2.5) {}; \node [style=none] (36) at (5.25, 3) {}; \node [style=none] (37) at (6.75, -1) {}; \node [style=cocpoint] (38) at (8, 1) {$0$}; \node [style=cocpoint] (39) 
at (8, 2.5) {$x$}; \node [style=cocpoint] (40) at (8, -0.5) {$0$}; \node [style=cocpoint] (41) at (8, 4) {$0$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (5.center) to (8.center); \draw (8.center) to (7.center); \draw (7.center) to (6.center); \draw (6.center) to (5.center); \draw (36.center) to (30.center); \draw (30.center) to (37.center); \draw (37.center) to (29.center); \draw (29.center) to (36.center); \draw (0) to (19.center); \draw (23.center) to (17.center); \draw (17.center) to (18.center); \draw (18.center) to (22.center); \draw (22.center) to (23.center); \draw (15.center) to (16.center); \draw (10.center) to (1); \draw (2) to (11.center); \draw (3) to (12.center); \draw (13.center) to (25.center); \draw (14.center) to (34.center); \draw (21.center) to (35.center); \draw (20.center) to (41); \draw (31.center) to (39); \draw (26.center) to (38); \draw (33.center) to (40); \end{pgfonlayer} \end{tikzpicture}\] is greater than or equal to $1-2^{-q(|x|)}$, for some polynomial $q(|x|)$, when the algorithm $A_{|x|}$ accepts\footnote{That is, when $x$ is in the language decided by the algorithm.} the input $x$. 
We choose the dynamically faithful state to satisfy \[\begin{tikzpicture}[scale=1.8] \begin{pgfonlayer}{nodelayer} \node [style=none] (0) at (0.4999999, 0.9999999) {}; \node [style=cocpoint] (1) at (1.250001, 0.9999999) {$i$}; \node [style=none] (2) at (3.5, 0.25) {$=$}; \node [style=none] (7) at (4.3, 0.25) {$p_i$}; \node [style=cpoint] (3) at (5, 0.4999999) {$i$}; \node [style=none] (4) at (6.25, 0.4999999) {}; \node [style=none] (5) at (2, -0) {}; \node [style=none] (6) at (0.4999999, -0) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0.center) to (1); \draw (3) to (4.center); \draw [bend right=90, looseness=1.75] (0.center) to (6.center); \draw (6.center) to (5.center); \end{pgfonlayer} \end{tikzpicture}\] where $p_i\in (0,1]$ and $\sum_i p_i=1$, which can always be achieved without loss of generality (see theorem $6$ and corollary $9$ from \cite{chiribella2010probabilistic}). We first show that $C$ satisfies \begin{equation}\label{equation1}\begin{tikzpicture}[scale=1.7] \begin{pgfonlayer}{nodelayer} \node [style=cpoint] (0) at (0.750001, 1.5) {$0$}; \node [style=none] (1) at (1.5, 0.4999999) {}; \node [style=none] (2) at (1.5, -0) {}; \node [style=none] (3) at (2.5, -0) {}; \node [style=none] (4) at (1.5, 1.5) {}; \node [style=none] (5) at (2.5, 1.5) {}; \node [style=none] (6) at (2.5, 0.4999999) {}; \node [style=none] (7) at (2.5, 2) {}; \node [style=none] (8) at (1.5, 2) {}; \node [style=none] (9) at (2, 0.9999999) {$C$}; \node [style=cocpoint] (10) at (3.250001, 1.5) {$0$}; \node [style=none] (11) at (0, 0.4999999) {}; \node [style=none] (12) at (4, 0.4999999) {}; \node [style=none] (13) at (5.499999, 0.7499999) {$=$}; \node [style=none] (14) at (6.999999, 0.7499999) {}; \node [style=cocpoint] (15) at (8.25, 0.7499999) {$0$}; \node [style=cpoint] (16) at (9.25, 0.7499999) {$0$}; \node [style=none] (17) at (10.5, 0.7499999) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0) to (4.center); \draw (8.center) to (2.center); \draw (2.center) 
to (3.center); \draw (3.center) to (7.center); \draw (7.center) to (8.center); \draw (5.center) to (10); \draw (11.center) to (1.center); \draw (6.center) to (12.center); \draw (14.center) to (15); \draw (16) to (17.center); \end{pgfonlayer} \end{tikzpicture}. \end{equation} To see this note that uniqueness of measurement (both of the following states give probability $p_0$ for $(0|(0|$, and probability zero for each of $(0|(1|, (1|(0|, $ and $(1|(1|$) implies \[\begin{tikzpicture}[scale=1.5] \begin{pgfonlayer}{nodelayer} \node [style=cpoint] (0) at (0.2499996, 2) {$0$}; \node [style=none] (1) at (1, 1) {}; \node [style=none] (2) at (1, 2.5) {}; \node [style=none] (3) at (2, 2.5) {}; \node [style=none] (4) at (1, 2) {}; \node [style=none] (5) at (2, 2) {}; \node [style=none] (6) at (2, 1) {}; \node [style=none] (7) at (2, 0.5000001) {}; \node [style=none] (8) at (1, 0.5000001) {}; \node [style=none] (9) at (1.5, 1.5) {$C$}; \node [style=cocpoint] (10) at (2.749999, 2) {$0$}; \node [style=none] (11) at (0.5000001, 1) {}; \node [style=none] (12) at (3.5, 1) {}; \node [style=none] (13) at (4.5, 1.25) {$=$}; \node [style=none] (20) at (5.5, 1.25) {$p_0$}; \node [style=cpoint] (14) at (6.5, 0.2500001) {$0$}; \node [style=none] (15) at (7.75, 0.2500001) {}; \node [style=cpoint] (16) at (6.5, 1.75) {$0$}; \node [style=none] (17) at (7.75, 1.75) {}; \node [style=none] (18) at (0.5000001, -0) {}; \node [style=none] (19) at (3.5, -0) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0) to (4.center); \draw (8.center) to (2.center); \draw (2.center) to (3.center); \draw (3.center) to (7.center); \draw (7.center) to (8.center); \draw (5.center) to (10); \draw (11.center) to (1.center); \draw (6.center) to (12.center); \draw (14) to (15.center); \draw (16) to (17.center); \draw (18.center) to (19.center); \draw [bend right=90, looseness=1.75] (11.center) to (18.center); \end{pgfonlayer} \end{tikzpicture}\] From our choice of dynamically faithful state, it then follows 
that \[\begin{tikzpicture}[scale=1.5] \begin{pgfonlayer}{nodelayer} \node [style=cpoint] (0) at (0.2499996, 2) {$0$}; \node [style=none] (1) at (1, 1) {}; \node [style=none] (2) at (1, 2.5) {}; \node [style=none] (3) at (2, 2.5) {}; \node [style=none] (4) at (1, 2) {}; \node [style=none] (5) at (2, 2) {}; \node [style=none] (6) at (2, 1) {}; \node [style=none] (7) at (2, 0.5000001) {}; \node [style=none] (8) at (1, 0.5000001) {}; \node [style=none] (9) at (1.5, 1.5) {$C$}; \node [style=cocpoint] (10) at (2.749999, 2) {$0$}; \node [style=none] (11) at (0.5000001, 1) {}; \node [style=none] (12) at (3.5, 1) {}; \node [style=none] (13) at (5, 1.25) {$=$}; \node [style=none] (14) at (9.750001, 0.5000001) {}; \node [style=cpoint] (15) at (8.499999, 1.75) {$0$}; \node [style=none] (16) at (9.750001, 1.75) {}; \node [style=none] (17) at (0.5000001, -0) {}; \node [style=none] (18) at (3.5, -0) {}; \node [style=cocpoint] (19) at (7.500001, 1.75) {$0$}; \node [style=none] (20) at (6.75, 1.75) {}; \node [style=none] (21) at (6.75, 0.5000001) {}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (0) to (4.center); \draw (8.center) to (2.center); \draw (2.center) to (3.center); \draw (3.center) to (7.center); \draw (7.center) to (8.center); \draw (5.center) to (10); \draw (11.center) to (1.center); \draw (6.center) to (12.center); \draw (15) to (16.center); \draw (17.center) to (18.center); \draw [bend right=90, looseness=1.75] (11.center) to (17.center); \draw (20.center) to (19); \draw [bend right=90, looseness=1.50] (20.center) to (21.center); \draw (21.center) to (14.center); \end{pgfonlayer} \end{tikzpicture} \] Dynamical faithfulness then gives equation (\ref{equation1}). 
Secondly, we write \[\begin{tikzpicture}[scale=1.4] \begin{pgfonlayer}{nodelayer} \node [style=cpoint] (0) at (-0.5, 2.5) {$x$}; \node [style=cpoint] (1) at (-0.5, 1) {$0$}; \node [style=cpoint] (2) at (-0.5, -0.5) {$0$}; \node [style=none] (3) at (0.25, 0.5) {$\vdots$}; \node [style=none] (4) at (0.75, 3) {}; \node [style=none] (5) at (0.75, -1) {}; \node [style=none] (6) at (2.25, -1) {}; \node [style=none] (7) at (2.25, 3) {}; \node [style=none] (8) at (1.5, 1) {$U_{|x|}$}; \node [style=none] (9) at (0.75, 2.5) {}; \node [style=none] (10) at (0.75, 1) {}; \node [style=none] (11) at (0.75, -0.5) {}; \node [style=none] (12) at (2.25, -0.5) {}; \node [style=none] (13) at (2.25, 1) {}; \node [style=none] (14) at (2.25, 2.5) {}; \node [style=none] (15) at (4, -0.5) {}; \node [style=none] (16) at (2.75, 0.5) {$\vdots$}; \node [style=none] (17) at (4, 1) {}; \node [style=cocpoint] (18) at (3.5, 2.5) {$0$}; \node [style=none] (19) at (5.5, 1) {$=$}; \node [style=none] (20) at (8.25, 1.75) {}; \node [style=none] (21) at (8.25, 1.25) {}; \node [style=none] (22) at (8.25, -0.25) {}; \node [style=none] (23) at (8.25, -0.75) {}; \node [style=none] (24) at (7.75, 0.5) {$\sigma$}; \node [style=none] (25) at (6.75, 0.5) {$\alpha$}; \node [style=none] (26) at (9.25, 1.25) {}; \node [style=none] (27) at (9.25, -0.25) {}; \node [style=none] (28) at (8.75, 0.75) {$\vdots$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (4.center) to (7.center); \draw (7.center) to (6.center); \draw (6.center) to (5.center); \draw (5.center) to (4.center); \draw (9.center) to (0); \draw (1) to (10.center); \draw (2) to (11.center); \draw (12.center) to (15.center); \draw (13.center) to (17.center); \draw (14.center) to (18); \draw [bend right=90, looseness=1.25] (20.center) to (23.center); \draw (20.center) to (23.center); \draw (21.center) to (26.center); \draw (22.center) to (27.center); \end{pgfonlayer} \end{tikzpicture}\] where $|\sigma)$ is a normalised state and $\alpha\in (0,1]$. 
Our choice of acceptance condition, together with the fact that $U$ is a dilation of the algorithm $A$, results in \begin{equation} \label{equation2} \begin{tikzpicture}[scale=1.4] \begin{pgfonlayer}{nodelayer} \node [style=cpoint] (0) at (-0.5, 2.5) {$x$}; \node [style=cpoint] (1) at (-0.5, 1) {$0$}; \node [style=cpoint] (2) at (-0.5, -0.5) {$0$}; \node [style=none] (3) at (0.25, 0.5) {$\vdots$}; \node [style=none] (4) at (0.75, 3) {}; \node [style=none] (5) at (0.75, -1) {}; \node [style=none] (6) at (2.25, -1) {}; \node [style=none] (7) at (2.25, 3) {}; \node [style=none] (8) at (1.5, 1) {$U_{|x|}$}; \node [style=none] (9) at (0.75, 2.5) {}; \node [style=none] (10) at (0.75, 1) {}; \node [style=none] (11) at (0.75, -0.5) {}; \node [style=none] (12) at (2.25, -0.5) {}; \node [style=none] (13) at (2.25, 1) {}; \node [style=none] (14) at (2.25, 2.5) {}; \node [style=none] (15) at (2.75, 0.5) {$\vdots$}; \node [style=cocpoint] (16) at (3.5, 2.5) {$0$}; \node [style=none] (17) at (5.5, 1) {$=$}; \node [style=trace] (18) at (3.5, 1) {}; \node [style=trace] (19) at (3.5, -0.5) {}; \node [style=none] (20) at (6.5, 1) {$\alpha$}; \node [style=none] (21) at (7.5, 1) {$=$}; \node [style=none] (22) at (9.25, 1) {$P_x(acc)$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (4.center) to (7.center); \draw (7.center) to (6.center); \draw (6.center) to (5.center); \draw (5.center) to (4.center); \draw (9.center) to (0); \draw (1) to (10.center); \draw (2) to (11.center); \draw (14.center) to (16); \draw (13.center) to (18); \draw (12.center) to (19); \end{pgfonlayer} \end{tikzpicture} \end{equation} Combining (\ref{equation1}) and (\ref{equation2}) gives \[\begin{tikzpicture}[scale=1.2] \begin{pgfonlayer}{nodelayer} \node [style=cpoint] (0) at (-0.5, 4) {$0$}; \node [style=cpoint] (1) at (-0.5, 2.5) {$x$}; \node [style=cpoint] (2) at (-0.5, 1) {$0$}; \node [style=cpoint] (3) at (-0.5, -0.5) {$0$}; \node [style=none] (4) at (0.25, 0.5) {$\vdots$}; \node [style=none] (5) at 
(0.75, 3) {}; \node [style=none] (6) at (0.75, -1) {}; \node [style=none] (7) at (2.25, -1) {}; \node [style=none] (8) at (2.25, 3) {}; \node [style=none] (9) at (1.5, 1) {$U_{|x|}$}; \node [style=none] (10) at (0.75, 2.5) {}; \node [style=none] (11) at (0.75, 1) {}; \node [style=none] (12) at (0.75, -0.5) {}; \node [style=none] (13) at (2.25, -0.5) {}; \node [style=none] (14) at (2.25, 1) {}; \node [style=none] (15) at (2.25, 2.5) {}; \node [style=none] (16) at (3.25, 2.5) {}; \node [style=none] (17) at (3.25, 2) {}; \node [style=none] (18) at (4.25, 2) {}; \node [style=none] (19) at (3.25, 4) {}; \node [style=none] (20) at (4.25, 4) {}; \node [style=none] (21) at (4.25, 2.5) {}; \node [style=none] (22) at (4.25, 4.5) {}; \node [style=none] (23) at (3.25, 4.5) {}; \node [style=none] (24) at (3.75, 3.25) {$C$}; \node [style=none] (25) at (5.25, -0.5) {}; \node [style=none] (26) at (6.75, 1) {}; \node [style=none] (27) at (6, 1) {$U^{-1}_{|x|}$}; \node [style=none] (28) at (3.75, 0.5) {$\vdots$}; \node [style=none] (29) at (5.25, -1) {}; \node [style=none] (30) at (6.75, 3) {}; \node [style=none] (31) at (6.75, 2.5) {}; \node [style=none] (32) at (7.25, 0.5) {$\vdots$}; \node [style=none] (33) at (6.75, -0.5) {}; \node [style=none] (34) at (5.25, 1) {}; \node [style=none] (35) at (5.25, 2.5) {}; \node [style=none] (36) at (5.25, 3) {}; \node [style=none] (37) at (6.75, -1) {}; \node [style=cocpoint] (38) at (8, 1) {$0$}; \node [style=cocpoint] (39) at (8, 2.5) {$x$}; \node [style=cocpoint] (40) at (8, -0.5) {$0$}; \node [style=cocpoint] (41) at (8, 4) {$0$}; \node [style=none] (42) at (10, 2) {$=$}; \node [style=none] (43) at (17.25, -0.2500001) {}; \node [style=none] (44) at (15.75, 2.75) {}; \node [style=none] (45) at (17.25, 1.25) {}; \node [style=none] (46) at (17.25, 2.75) {}; \node [style=none] (47) at (15.25, 0.7500001) {$\vdots$}; \node [style=none] (48) at (17.75, 0.7500001) {$\vdots$}; \node [style=none] (49) at (15.75, 3.25) {}; \node [style=none] (50) at 
(15.75, -0.7500001) {}; \node [style=cocpoint] (51) at (18.5, 2.75) {$x$}; \node [style=none] (52) at (15.75, 1.25) {}; \node [style=none] (53) at (17.25, 3.25) {}; \node [style=cocpoint] (54) at (18.5, -0.2500001) {$0$}; \node [style=none] (55) at (15.75, -0.2500001) {}; \node [style=none] (56) at (16.5, 1.25) {$U^{-1}_{|x|}$}; \node [style=none] (57) at (17.25, -0.7500001) {}; \node [style=cocpoint] (58) at (18.5, 1.25) {$0$}; \node [style=cpoint] (59) at (14.5, 2.75) {$0$}; \node [style=none] (60) at (14.5, 1.25) {}; \node [style=none] (61) at (14.5, -0.2500001) {}; \node [style=none] (62) at (14.5, 1.75) {}; \node [style=none] (63) at (14.5, -0.7500001) {}; \node [style=none] (64) at (14, 0.4999999) {$\sigma$}; \node [style=none] (65) at (12, 2) {$P_x(acc)$}; \node [style=none] (66) at (15.75, -4.25) {$\vdots$}; \node [style=none] (67) at (14.5, -4.5) {$\sigma$}; \node [style=none] (68) at (15, -5.75) {}; \node [style=none] (69) at (15, -3.75) {}; \node [style=none] (70) at (12, -3.75) {$P_x(acc)^2$}; \node [style=none] (71) at (10, -3.75) {$=$}; \node [style=none] (72) at (16.25, -3.75) {}; \node [style=none] (73) at (15, -3.25) {}; \node [style=none] (74) at (16.25, -5.25) {}; \node [style=none] (75) at (15, -5.25) {}; \node [style=none] (76) at (16.25, -5.75) {}; \node [style=none] (77) at (16.25, -5.25) {}; \node [style=none] (78) at (16.25, -3.75) {}; \node [style=none] (79) at (16.25, -3.25) {}; \node [style=none] (80) at (16.75, -4.5) {$\sigma$}; \end{pgfonlayer} \begin{pgfonlayer}{edgelayer} \draw (5.center) to (8.center); \draw (8.center) to (7.center); \draw (7.center) to (6.center); \draw (6.center) to (5.center); \draw (36.center) to (30.center); \draw (30.center) to (37.center); \draw (37.center) to (29.center); \draw (29.center) to (36.center); \draw (0) to (19.center); \draw (23.center) to (17.center); \draw (17.center) to (18.center); \draw (18.center) to (22.center); \draw (22.center) to (23.center); \draw (15.center) to (16.center); \draw 
(10.center) to (1); \draw (2) to (11.center); \draw (3) to (12.center); \draw (13.center) to (25.center); \draw (14.center) to (34.center); \draw (21.center) to (35.center); \draw (20.center) to (41); \draw (31.center) to (39); \draw (26.center) to (38); \draw (33.center) to (40); \draw (49.center) to (53.center); \draw (53.center) to (57.center); \draw (57.center) to (50.center); \draw (50.center) to (49.center); \draw (46.center) to (51); \draw (45.center) to (58); \draw (43.center) to (54); \draw (59) to (44.center); \draw [bend right=90, looseness=1.25] (62.center) to (63.center); \draw (60.center) to (52.center); \draw (61.center) to (55.center); \draw (62.center) to (63.center); \draw [bend right=90, looseness=1.25] (73.center) to (68.center); \draw (69.center) to (72.center); \draw (75.center) to (74.center); \draw (73.center) to (68.center); \draw [bend left=90, looseness=1.25] (79.center) to (76.center); \draw (79.center) to (76.center); \end{pgfonlayer} \end{tikzpicture}\] where the last line follows from self-duality. By amplifying the acceptance probability of the original algorithm $A$ (see \cite{lee2015computation} for an in-depth discussion of bounded-error efficient computation and amplifying acceptance probabilities), we can ensure that when $x$ is in the language we have $P_x(acc)\geq 1-2^{-p(|x|)}$ for an arbitrary polynomial $p(|x|)$. Hence it follows that $P_x(acc)^2\geq 1-2^{-p(|x|)+1}$. If $(\sigma |\sigma)=1$, choosing $p(|x|)=q(|x|)+1$ completes the proof. The case $(\sigma |\sigma) < 1$ can be easily dealt with. As $|\sigma)$ and $(\sigma|$ can be efficiently prepared by a poly-size circuit, the factor $(\sigma | \sigma)$ can be approximated by a rational number to high accuracy (this is a consequence of the computational uniformity condition required to define computation in arbitrary physical theories, including quantum theory; see \cite{lee2015computation} and \cite{lee2015proofs} for an expanded discussion of this point).
Hence one can write $(\sigma |\sigma)=1-c2^{-w(|x|)}$, for $w$ a polynomial in the size of the circuit and $c$ a constant natural number. One can always find a polynomial $q$ such that $\left(1-c2^{-w(|x|)}\right)\left(1-2^{-p(|x|)+1}\right) \geq 1-2^{-q(|x|)}$. This completes the proof. \end{proof} \end{appendices} \end{document}
\begin{document} \title{A property of diagrams of the trivial knot} \author{Makoto Ozawa} \address{Department of Natural Sciences, Faculty of Arts and Sciences, Komazawa University, 1-23-1 Komazawa, Setagaya-ku, Tokyo, 154-8525, Japan} \email{[email protected]} \subjclass{Primary 57M25; Secondary 57Q35} \keywords{trivial knot, diagram} \begin{abstract} In this paper, we give a necessary condition for a diagram to represent the trivial knot. \end{abstract} \maketitle \section{Introduction} \subsection{What can we say when a diagram represents the trivial knot?} Let $K$ be a knot in the 3-sphere $S^3$ and consider a diagram $\pi(K)$ of $K$ on the 2-sphere $S^2$. We say that a diagram $\pi(K)$ is {\it $\rm{I}$-reduced} (resp. {\it $\rm{II}$-reduced}) if the crossing number of $\pi(K)$ cannot be reduced by a Reidemeister move $\rm{I}$ (resp. a Reidemeister move $\rm{II}$). We say that a diagram $\pi(K)$ is {\it prime} if it contains at least one crossing and for any loop $l$ intersecting $\pi(K)$ in two points except for crossings, there exists a disk $D$ in $S^2$ bounded by $l$ such that $D\cap \pi(K)$ consists of an embedded arc. We position $K$ in Menasco's manner (\cite{M}), as follows. For each crossing $c_i$ of $\pi(K)$, we insert a small 3-ball ``bubble'' $B_i$ between the over crossing and the under crossing of $c_i$ and isotope the over arc of $N(c_i;K)$ onto the upper hemisphere $\partial_+ B_i$ of $\partial B_i$ and the under arc onto the lower hemisphere $\partial_- B_i$. See Figure \ref{bubble1}. Let $S^2_+$ (resp. $S^2_-$) be the 2-sphere obtained from $S^2$ by replacing each equatorial disk $B_i\cap S^2$ with the upper (resp. lower) hemisphere $\partial_+ B_i$ (resp. $\partial_- B_i$). See Figure \ref{bubble2}. Put $P=S^2_+\cap S^2_-$. Then, $K$ is contained in $S^2_+\cup S^2_-=P\cup \bigcup \partial B_i$. We call each component of $P-\pi(K)$ a {\it region}. Let $B$ be the union of all bubbles and $R$ be the union of all regions.
\begin{figure} \caption{a bubble between the over arc and the under arc} \label{bubble1} \end{figure} \begin{figure} \caption{the upper hemisphere and the lower hemisphere} \label{bubble2} \end{figure} A loop $l$ embedded in $S^2_+-K$ (resp. $S^2_--K$) is called a {\it $+$-Menasco loop} (resp. {\it $-$-Menasco loop}) if for each region $R_j$, each component of $l\cap R_j$ is an arc connecting different arc components of $\partial B\cap R_j$ and for each bubble $B_i$, each component of $l\cap \partial B_i$ is an arc connecting two different regions. The number of crossings which a Menasco loop passes through is called its {\it length}. \begin{xca} Show that there is always a length-two $+$- or $-$-Menasco loop if a diagram is not prime and contains at least one crossing. \end{xca} Two crossings $c_i$ and $c_j$ are {\it adjacent} if there exists an arc $\gamma$ of $K\cap P$ connecting the two bubbles $B_i$ and $B_j$, and {\it $+$-adjacent} (resp. {\it $-$-adjacent}) if $\gamma$ connects the two over arcs $K\cap \partial_+ B_i$ and $K\cap \partial_+ B_j$ (resp. the two under arcs $K\cap \partial_- B_i$ and $K\cap \partial_- B_j$). See Figure \ref{adjacent}. \begin{figure} \caption{adjacent, $+$-adjacent, $-$-adjacent} \label{adjacent} \end{figure} \begin{xca} Show that if there exists a length-two $+$-Menasco loop passing through two $+$-adjacent crossings, then the diagram is not prime or not $\rm{II}$-reduced. \end{xca} The following is a special case of Corollary \ref{trivial knot}, and it captures the essence of this paper. \begin{corollary}\label{trivial} Any $\rm{I}$-reduced, $\rm{II}$-reduced, prime diagram of the trivial knot has a $\pm$-Menasco loop passing through $2n$ crossings $c_1,c_2,\ldots,c_{2n}$, where $n\ge 2$ and $c_{2i-1}$ is $\pm$-adjacent to $c_{2i}$ for $i=1,\ldots,n-1$. \end{corollary} \begin{example} Consider a $\rm{I}$-reduced, $\rm{II}$-reduced, prime, 4-crossing diagram of the right-handed trefoil. See Figure \ref{example1}.
There exists a $+$-Menasco loop passing through $c_1, c_2, c_4, c_3, c_1, c_3$, where $c_1$ is $+$-adjacent to $c_2$ and $c_4$ is $+$-adjacent to $c_3$. After Theorem \ref{main}, we will see that any $\rm{I}$-reduced, $\rm{II}$-reduced diagram of the trefoil except for the 3-crossing diagram has a $\pm$-Menasco loop satisfying the condition in Corollary \ref{trivial}. \begin{figure} \caption{a $\rm{I}$-reduced, $\rm{II}$-reduced, prime, 4-crossing diagram of the right-handed trefoil} \label{example1} \end{figure} \end{example} \begin{example} The next example is borrowed from Ochiai's book \cite{O2}. This diagram of the trivial knot has no $r$-wave for any $r\ge 0$. See Figure \ref{example2}. At each stage, there exists a $\pm$-Menasco loop satisfying the condition in Corollary \ref{trivial} or the diagram is not $\rm{I}$-reduced or not $\rm{II}$-reduced. In the former case, a $\pm$-Menasco loop can be used to simplify the diagram if it has three successive adjacent crossings, and in the latter case, the crossing number can be reduced by a Reidemeister move $\rm{I}$ or $\rm{II}$. \begin{figure} \caption{Ochiai's ``non-trivial'' diagram of the trivial knot} \label{example2} \end{figure} \end{example} \begin{example} The final example is somewhat artificial. This diagram is $2$-almost alternating, that is, obtained from an alternating diagram by two crossing changes on it. See Figure \ref{example3}. There does not exist a $\pm$-Menasco loop satisfying the condition in Corollary \ref{trivial}. Hence, this knot is non-trivial. Note that Tsukamoto characterized almost alternating diagrams of the trivial knot (\cite{T}). \begin{figure} \caption{2-almost alternating diagram} \label{example3} \end{figure} \end{example} \subsection{Where do you untie it from?} We position a knot $K$ in Menasco's manner mentioned above. In \cite{MT}, Menasco and Thistlethwaite defined a standard position of a spanning surface $F$ for $K$ as follows.
\begin{enumerate} \item[(i)] $\rm{int}F$ meets each of $S_+,S_-$ transversely in a pairwise disjoint collection of simple closed curves and arcs; \item[(ii)] $F$ meets each $B_i$ in a collection of saddle-shaped disks; \item[(iii)] there is a collar $C\cong I\times \partial F$ of $\partial F$ in $F$ and a projection $p:C\to \partial F$ such that for each $x\in \partial F\cap \partial B_i$ the fibre $p^{-1}(x)$ is a straight line segment which is normal to $\partial B_i$ and which does not meet the interior of $B_i$. \end{enumerate} Moreover, they showed: \begin{proposition}[Proposition 2 in \cite{MT}] The disk $F$ spanning $K$ may be replaced with a spanning disk $F'$ such that each circle $C$ in $F'\cap S_+$ satisfies the following: \begin{enumerate} \item[(i)] $C$ bounds a disk in $F'$ whose interior lies entirely above $S_+$; \item[(ii)] if $C\subset \rm{int}F'$, then $C$ meets at least one bubble; \item[(iii)] $C$ does not meet any bubble in more than one arc $($whether or not $C\subset \rm{int}F'$$)$. \end{enumerate} Moreover, the corresponding conditions for $F\cap S_-$ can be achieved simultaneously. \end{proposition} Hereafter, let $K$ be the trivial knot and $D$ be a disk bounded by $K$. We assume the above conditions on $D$. Then, there exists an outermost arc $\alpha$ on $D$ which bounds an outermost disk $\delta$ in $D$ with a subarc $\beta$ of $K$. We call an arc of $D\cap S_{\pm}$ a {\it $\pm$-Menasco arc}. See Figure \ref{outermost arc}. \begin{figure} \caption{an outermost Menasco arc} \label{outermost arc} \end{figure} Let $\delta'$ be a subdisk of $D$ which forms $\delta$ together with all saddle-shaped disks meeting $\delta$. We say that $K'$ is obtained by a {\it move along an outermost $\pm$-Menasco arc} if $K'$ is obtained by isotoping $K$ along $\delta'$. See Figure \ref{outermost arc2}. Note that $K'$ is not in Menasco's position, but there exists a next outermost $\pm$-Menasco arc $\alpha'$ on $D'$ and we can move $K'$ along $\alpha'$.
\begin{figure} \caption{Moving $K$ along the outermost Menasco arc} \label{outermost arc2} \end{figure} We can summarize our observation as follows. \begin{observation} Let $K_1$ be the trivial knot and $\pi(K_1)$ be a diagram of $K_1$ with at least one crossing. Then, there exists a sequence $\alpha_1,\ldots,\alpha_n$ of $\pm$-Menasco arcs such that $K_{i+1}$ is obtained from $K_i$ by a move along $\alpha_i$ and $K_{n+1}$ has a diagram without crossings. \end{observation} \section{Main Theorem} In this section, we consider diagrams of a knot on a closed surface. Let $F$ be a closed surface embedded in $S^3$ and $K$ a knot contained in $F\times [-1,1]$. Suppose that $\pi(K)$ is a regular projection on $F$, where $\pi : F\times [-1,1]\to F\times \{0\}=F$ is the projection. Then, we have a regular diagram on $F$ obtained from $\pi(K)$ by adding the over/under information to each double point, and we denote it by the same symbol $\pi(K)$ in this article. We say that a diagram $\pi(K)$ on $F$ is {\em reduced} if there is no disk region of $F-\pi(K)$ which meets only one crossing. We say that a diagram $\pi(K)$ on $F$ is {\em prime} if it contains at least one crossing and for any loop $l$ intersecting $\pi(K)$ in two points except for crossings, there exists a disk $D$ in $F$ bounded by $l$ such that $D\cap \pi(K)$ consists of an embedded arc. Let $S$ be a closed surface of positive genus in $S^3$ and $K$ a knot contained in $S$. The {\em representativity} $r(S,K)$ of a pair $(S,K)$ is defined as the minimal number of intersection points of $K$ and $\partial D$, where $D$ ranges over all compressing disks for $S$ in $S^3$. It follows from Lemma 3 in \cite{MO} that $r(S,K)\ge1$ if and only if $S\cap E(K)$ is incompressible in $E(K)$, and $r(S,K)\ge2$ if and only if $S\cap E(K)$ is incompressible and $\partial$-incompressible in $E(K)$, where $E(K)$ denotes the exterior of $K$ in $S^3$.
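As an illustration of representativity, here is a standard example (added for concreteness): let $S$ be a standardly embedded torus in $S^3$ and let $K$ be a $(p,q)$-torus knot contained in $S$, where $p,q\ge 2$. The boundary of any compressing disk for $S$ is isotopic in $S$ to a meridian of one of the two solid tori bounded by $S$, and a meridian disk on one side meets $K$ in $|p|$ points, while a meridian disk on the other side meets $K$ in $|q|$ points. One can check that no compressing disk intersects $K$ in fewer points, so that
$$r(S,K)=\min(|p|,|q|)\ge 2.$$
In particular, by Lemma 3 in \cite{MO}, $S\cap E(K)$ is incompressible and $\partial$-incompressible in $E(K)$.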
In the previous paper, the author showed the non-triviality of generalized alternating knots by the following theorem. \begin{theorem}[\cite{MO}]\label{alternating} Let $F$ be a closed surface embedded in $S^3$, $K$ a knot contained in $F\times [-1,1]$ which has a reduced, prime, alternating diagram on $F$. Then, we have the following. \begin{enumerate} \item $F-\pi(K)$ consists of open disks. \item $F-\pi(K)$ admits a checkerboard coloring. \item $K$ bounds a non-orientable surface $H$ coming from the checkerboard coloring. \item $K$ can be isotoped into $\partial N(H)$ so that $\partial N(H)-K$ is connected. \item $r(\partial N(H),K)\ge 2$. \end{enumerate} \end{theorem} We call the closed surface $\partial N(H)$ an {\it interpolating surface} obtained from the checkerboard coloring, where $H$ is one of the checkerboard surfaces. Theorem \ref{alternating} assures the existence of an incompressible and $\partial$-incompressible separating orientable surface of integral boundary slope in the exterior of a generalized alternating knot. Hence, the knot is non-trivial. Conversely, suppose that an interpolating surface obtained from the checkerboard coloring is compressible. Then, we know from Theorem \ref{alternating} that the diagram is not alternating. Are there other properties of such a diagram? This is the main subject of this paper. \begin{example} Consider the 4-crossing diagram of the right-handed trefoil without nugatory crossings. The interpolating surface obtained from a checkerboard surface is compressible. See Figure \ref{comp_check}. Here, the compressing disk intersects the union of regions in 5 arcs. \end{example} \begin{figure} \caption{a compressing disk for the interpolating surface} \label{comp_check} \end{figure} As in the Introduction, we position $K$ in Menasco's manner with respect to a closed surface $F$.
For each crossing $c_i$ of $\pi(K)$, we insert a small 3-ball ``bubble'' $B_i$ between the over crossing and the under crossing of $c_i$ and isotope the over arc of $N(c_i;K)$ onto the upper hemisphere $\partial_+ B_i$ of $\partial B_i$ and the under arc onto the lower hemisphere $\partial_- B_i$. Let $F_+$ (resp. $F_-$) be the closed surface obtained from $F$ by replacing each equatorial disk $B_i\cap F$ with the upper (resp. lower) hemisphere $\partial_+ B_i$ (resp. $\partial_- B_i$). Put $P=F_+\cap F_-$. Then, $K$ is contained in $F_+\cup F_-=P\cup \bigcup \partial B_i$. We call each component of $P-\pi(K)$ a {\it region}. Let $B$ be the union of all bubbles and $R$ be the union of all regions. The following is the main theorem. \begin{theorem}\label{main} Let $F$ be a closed surface embedded in $S^3$, $K$ a knot contained in $F\times [-1,1]$ which has a $\rm{I}$-reduced, $\rm{II}$-reduced, prime, checkerboard colorable diagram on $F$. If at least one of the two interpolating surfaces obtained from the checkerboard coloring is compressible in the complement of $K$, then there exists a compressing disk $\delta$ for $F_{\pm}-K$ in $V_{\pm}$ such that $\partial \delta$ is a $\pm$-Menasco loop passing through $2n$ crossings $c_1,c_2,\ldots,c_{2n}$, where $n\ge 2$ and $c_{2i-1}$ is $\pm$-adjacent to $c_{2i}$ for $i=1,\ldots,n-1$. \end{theorem} \begin{remark} Theorem \ref{main} still holds in the case of links. If a link is split but the diagram is connected, then both of the checkerboard surfaces are compressible in the link complement and there exists a $\pm$-Menasco loop satisfying the condition in Theorem \ref{main}. \end{remark} \begin{remark}\label{once} In Theorem \ref{main}, we can take a compressing disk $D$ so that $\partial D$ does not pass through one side of a crossing {\em more than once}. \end{remark} \begin{xca} Show the statement of Remark \ref{once}.
\end{xca} \begin{remark} It is possible to determine, for a checkerboard surface $H$, whether the interpolating surface $\partial N(H)$ is compressible by means of {\em all $\pm$-Menasco loops} coming from {\em all subdisks} of $D$. \end{remark} \section{Proof} \begin{proof} (of Theorem \ref{main}) The following claim is Claim 6 in \cite{MO}. \begin{claim}\label{open disk} $F-\pi(K)$ consists of open disks. \end{claim} Let $\partial N(H)$ be an interpolating surface obtained from the checkerboard coloring such that $\partial N(H)-K$ is compressible in $S^3-K$. The following claim is Claim 9 in \cite{MO}. \begin{claim}\label{I-bundle} $\partial N(H)-K$ is incompressible in $N(H)$. \end{claim} Hence, $\partial N(H)-K$ is compressible in the outside of $N(H)$, and let $D$ be a compressing disk for $\partial N(H)-K$. We regard $N(H)$ as follows. For each crossing $c_i$ of $\pi(K)$, we insert a small 3-ball $B_i$ as a regular neighborhood of $c_i$. In the complement of these 3-balls, we consider the product $R_i\times I$ for each region $R_i$ of $F-\pi(K)$. Then, the union of the $B_i$'s and the $R_i\times I$'s is homeomorphic to $N(H)$. See Figure \ref{N(H)}. \begin{figure} \caption{$B_i$, $R_i\times I$ and $K$} \label{N(H)} \end{figure} Let $\Delta=\Delta_1\cup \cdots \cup \Delta_n$ be the union of the components of $F-int N(H)$, where each $\Delta_i$ is a disk by Claim \ref{open disk}. Then, each component of $(\partial N(H)-K)-\partial \Delta$ is an open disk containing $R_i\times\{0\}$ or $R_i\times\{1\}$ for some $i$, and its closure is denoted by $R_i^-$ or $R_i^+$ respectively. Put $R=(\bigcup_i R_i^-)\cup (\bigcup_i R_i^+)$ and $B=\bigcup_i B_i$. Note that $D\cap \Delta\ne \emptyset$, since otherwise $\partial D$ would be entirely contained in a region and $D$ would not be a compressing disk for $\partial N(H)-K$. The following claim is Claim 10 in \cite{MO}. \begin{claim}\label{standard} We may assume the following.
\begin{enumerate} \item $\partial D\cap R$ consists of arcs that connect different arc components of $\partial B\cap \partial \Delta$. \item $D\cap \Delta$ consists of arcs that connect different arc components of $\partial B\cap \partial \Delta$. \end{enumerate} \end{claim} Next, we concentrate on an outermost arc $\alpha$ of $D\cap \Delta$ in $D$ and the corresponding outermost disk $\delta$ in $D$. Put $\delta\cap \partial D=\beta$. \begin{claim}\label{outermost} Any outermost arc $\alpha$ connects $\pm$-adjacent crossings. \end{claim} \begin{proof} By Claim \ref{standard}, we have two configurations. \begin{description} \item[Case 1] Both endpoints of $\beta$ lie on the same crossing ball $B_i$ (Figure \ref{alternate1}). \item[Case 2] The endpoints of $\beta$ lie on different crossing balls $B_i$ and $B_j$ (Figure \ref{alternate2}). \end{description} \begin{figure} \caption{Configuration of Case 1} \label{alternate1} \end{figure} \begin{figure} \caption{Configuration of Case 2} \label{alternate2} \end{figure} In Case 1, by connecting $\partial \beta$ on $\partial B_i$ and projecting to $F$, we have a loop $l_\beta$ on $F$ which intersects $\pi(K)$ in one crossing point $c_i$. Similarly, we obtain a loop $l_\alpha$ on $F$ which intersects $\pi(K)$ in one crossing point $c_i$. Since $l_\beta$ intersects $l_\alpha$ in the one point $c_i$, $l_\beta$ is essential in $F$. Isotope $l_\beta$ slightly so that it avoids $c_i$. Then we have an essential loop in $F$ which intersects $\pi(K)$ in two points except for crossings. This contradicts that $\pi(K)$ is prime. In Case 2, we have a loop $\pi(\alpha\cup\beta)$ in $F$ which intersects $\pi(K)$ in two points except for crossings. In Case 2-a, it does not bound a disk $D'$ in $F$ such that $D'\cap \pi(K)$ is an arc, since there are crossings $c_i$ and $c_j$ on both sides of the loop. This contradicts that $\pi(K)$ is prime. In Case 2-b, the loop $\pi(\alpha\cup\beta)$ bounds a disk $D'$ such that $D'\cap \pi (K)$ is an arc as in Figure \ref{alternate2}, since $\pi(K)$ is prime.
This shows that $B_i$ and $B_j$ are $\pm$-adjacent. \end{proof} If $|D\cap \Delta|=1$, then $\pi(K)$ is $\rm{II}$-reducible. See Figure \ref{II-reducible}. This contradicts the supposition of Theorem \ref{main}. \begin{figure} \caption{a $\rm{II}$-reducible configuration} \label{II-reducible} \end{figure} Otherwise, there exists an outermost fork in the graph on $D$ obtained from $D\cap \Delta$, where we regard the closure of each component of $D-\Delta$ as a vertex, and connect two vertices by an edge if the corresponding components are adjacent across an arc of $D\cap \Delta$. See Figure \ref{outermost fork}. Then, the region $\delta$ corresponding to an outermost fork gives a compressing disk for $F_{\pm}-K$ in $V_{\pm}$ such that $\partial \delta$ is a $\pm$-Menasco loop passing through $2n$ crossings $c_1,c_2,\ldots,c_{2n}$. By Claim \ref{outermost}, the condition that $n\ge 2$ and $c_{2i-1}$ is $\pm$-adjacent to $c_{2i}$ for $i=1,\ldots,n-1$ is satisfied. See Figure \ref{M-loop}. \begin{figure} \caption{outermost fork} \label{outermost fork} \end{figure} \begin{figure} \caption{$+$-Menasco loop} \label{M-loop} \end{figure} \end{proof} Here, we state Corollary \ref{trivial} in a more general form. \begin{corollary}\label{trivial knot} Let $F$ be a closed surface embedded in $S^3$, $K$ a knot contained in $F\times [-1,1]$ which has a $\rm{I}$-reduced, $\rm{II}$-reduced, prime, checkerboard colorable diagram on $F$. If $K$ is trivial, then there exists a compressing disk $\delta$ for $F_{\pm}-K$ in $V_{\pm}$ such that $\partial \delta$ is a $\pm$-Menasco loop passing through $2n$ crossings $c_1,c_2,\ldots,c_{2n}$, where $n\ge 2$ and $c_{2i-1}$ is $\pm$-adjacent to $c_{2i}$ for $i=1,\ldots,n-1$. \end{corollary} \begin{proof} (of Corollary \ref{trivial knot}) Let $\pi(K)$ be a $\rm{I}$-reduced, $\rm{II}$-reduced, prime, checkerboard colorable diagram on $F$ and $S$ an interpolating surface obtained from a checkerboard surface $H$.
If $S\cap E(K)$ is incompressible and $\partial$-incompressible in $E(K)$, then by Lemma 1 in \cite{MO}, each component of $S\cap E(K)$ is a disk. Hence $S$ is a 2-sphere and $H$ is a disk. Then, $\pi(K)$ is $\rm{I}$-reducible or it has no crossing. The former contradicts that $\pi(K)$ is $\rm{I}$-reduced and the latter contradicts that $\pi(K)$ is prime. If $S\cap E(K)$ is compressible in $E(K)$, then by Theorem \ref{main}, the conclusion of Corollary \ref{trivial knot} is satisfied. Otherwise, $S\cap E(K)$ is incompressible and $\partial$-compressible in $E(K)$. By Lemma 2 in \cite{MO}, each component of $S\cap E(K)$ is a $\partial$-parallel annulus. Hence $S$ is a torus and $H$ is a M\"{o}bius band. Since $\pi(K)$ is $\rm{I}$-reduced and $\rm{II}$-reduced, $\pi(K)$ is a standard $(2,n)$-torus knot diagram, where $n$ is an odd integer. If $|n|\ge3$, then $r(S,K)=2$ and $S\cap E(K)$ is $\partial$-incompressible in $E(K)$. This contradicts the assumption. Otherwise, $n=\pm1$. This shows that $\pi(K)$ is $\rm{I}$-reducible, a contradiction. \end{proof} \end{document}
\begin{document} \begin{abstract} A point $P$ in projective space is said to be Galois with respect to a hypersurface if the function field extension induced by the projection from $P$ is Galois. We present a hyperplane section theorem for Galois points. Precisely, if $P$ is a Galois point for a hypersurface, then $P$ is Galois for a general hyperplane section passing through $P$. As an application, we determine hypersurfaces of dimension $n$ with $n$-dimensional sets of Galois points. \end{abstract} \title{A hyperplane section theorem for Galois points and its application} \section{Introduction} Let the base field $K$ be an algebraically closed field of characteristic $p \ge 0$ and let $X \subset \mathbb P^{n+1}$ be an irreducible and reduced hypersurface of dimension $n$ and of degree $d$. H. Yoshihara introduced the notion of a {\it Galois point} (see \cite{fukasawa1, miura-yoshihara, yoshihara1, yoshihara2, yoshihara3}). If the function field extension $K(X)/K(\mathbb P^n)$ induced by the projection $\pi_P:X \dashrightarrow \mathbb P^n$ from a point $P \in \mathbb P^{n+1}$ is Galois, then the point $P$ is said to be Galois. Galois point theory has given a new viewpoint on the classification of algebraic varieties by the distribution of Galois points (see \cite{fukasawa1, fukasawa2, fukasawa-hasegawa, fukasawa-takahashi, miura-yoshihara, takahashi1, yoshihara1, yoshihara2, yoshihara3}). For hypersurfaces whose singular locus $S_X$ has dimension at most $n-2$, Fukasawa and Takahashi \cite{fukasawa-takahashi} presented upper bounds for the number of Galois points, as a generalization of a result of Yoshihara for smooth hypersurfaces \cite{yoshihara3}. To do this, they showed a ``hyperplane section theorem'' (see \cite[Theorem 1.3]{fukasawa-takahashi}). In this article, we prove this theorem for {\it arbitrary} Galois points (which may be singular points of $X$) with respect to {\it arbitrary} hypersurfaces. The intersection of (almost) all tangent spaces of $X$ is denoted by $T_X$.
\begin{theorem} \label{HyperplaneMain} Let $X \subset \mathbb P^{n+1}$ be a hypersurface of dimension $n \ge 2$ and degree $d \ge 3$ in characteristic $p \ge 0$, and let $P$ be a Galois point for $X$ with multiplicity $m$, where $0 \le m \le d-2$. Then: \begin{itemize} \item[(i)] A general hyperplane $H$ passing through $P$ satisfies the following condition: \begin{itemize} \item[$(\star)$] the hyperplane section $X_H:=X \cap H$ is an irreducible hypersurface in $H \cong \mathbb P^n$ of degree $d$ such that $S_{X_H}=S_X \cap H$, $P \not\in T_{X_H}$, and the multiplicity of $X_H$ at $P$ is equal to $m$. \end{itemize} \item[(ii)] Let $H$ be a hyperplane passing through $P$ and satisfying the condition $(\star)$. Then, the point $P$ is Galois for $X_H$. \item[(iii)] In this case, the Galois groups are isomorphic: $G_P(X) \cong G_P(X_H)$. \end{itemize} \end{theorem} As an application, we generalize results of Fukasawa and Hasegawa \cite{fukasawa2, fukasawa-hasegawa} for plane curves with infinitely many Galois points. Let $\Delta(X)$ (resp. $\Delta'(X)$) be the set of all Galois points contained in $X \setminus S_X$ (resp. $\mathbb P^{n+1} \setminus X$). \begin{theorem} \label{InnerMain} Let $X \subset \mathbb P^{n+1}$ be a hypersurface of dimension $n \ge 1$ and degree $d \ge 4$ in characteristic $p \ge 0$. Then, the following conditions are equivalent. \begin{itemize} \item[(i)] There exists a non-empty Zariski open subset $U$ of $X$ such that $U \subset \Delta(X)$. \item[(ii)] $p>0$, $d=p^e$ for some $e>0$, and $X$ is projectively equivalent to the hypersurface defined by $X_0^{p^e-1}X_1-X_2^{p^e}=0$. \end{itemize} In this case, $\Delta(X)=X \setminus \{X_0=X_2=0\}$, and the induced Galois group $G_P$ is a cyclic group of order $p^e-1$ for any point $P \in \Delta(X)$. \end{theorem} \begin{theorem} \label{OuterMain} Let $X \subset \mathbb P^{n+1}$ be a hypersurface of dimension $n \ge 1$ and degree $d \ge 3$ in characteristic $p \ge 0$. 
Then, the following conditions are equivalent. \begin{itemize} \item[(i)] There exist an irreducible Zariski closed subset $Y \subset \mathbb P^{n+1}$ of dimension $n$ and a non-empty open subset $U_Y$ of $Y$ such that $U_Y \subset \Delta'(X)$. \item[(ii)] $p>0$, $d=p^e$ for some $e>0$, and $X$ is projectively equivalent to an irreducible hypersurface whose equation is of the form $$\sum_{j=0}^{e}\sum_{i=1}^{n+1}\alpha_{ij}X_0^{p^e-p^j}X_i^{p^{j}}=0, $$ where $\alpha_{ij} \in K$. \end{itemize} In this case, $\Delta'(X)$ is a Zariski open set of a hyperplane (see Proposition \ref{distribution}), and the induced Galois group $G_P$ is isomorphic to $(\mathbb Z/p\mathbb Z)^{\oplus e}$ for any point $P \in \Delta'(X)$. \end{theorem} \section{Preliminaries and Lemmas} Let $X \subset \mathbb P^{n+1}$ be an irreducible hypersurface, let $S_X$ be the singular locus of $X$, and let $X_{\rm sm}=X \setminus S_X$ be the smooth locus. The projective tangent space at a smooth point $P \in X\setminus S_X$ is denoted by $T_PX \subset \mathbb P^{n+1}$. Let $\check{\mathbb P}^{n+1}$ be the dual projective space which parameterizes hyperplanes of $\mathbb P^{n+1}$. The Gauss map $\gamma$ of $X$ is a rational map from $X$ to $\check{\mathbb P}^{n+1}$ which sends a smooth point $P$ to the tangent space $T_PX$. If $F$ is the defining polynomial of $X$, then $\gamma$ is given by $(\partial F/\partial X_0: \cdots: \partial F/\partial X_{n+1})$. Let $T_X:=\bigcap_{P \in X_{\rm sm}}T_PX$. If $T_X \ne \emptyset$, then $X$ is said to be strange and $T_X$ is called a strange center. A strange center is a linear space. It is well known that (the function field extension induced by) the projection from $T_X$ is separable if and only if $X$ is a cone with center $T_X$. Therefore, any strange variety is a cone with center $T_X$ if $p=0$. For distinct points $P, Q \in \mathbb P^{n+1}$, the line passing through $P$ and $Q$ is denoted by $\overline{PQ}$. 
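As an illustration of strangeness, consider the following standard example in positive characteristic (added for concreteness). Let $p>0$ and let $X \subset \mathbb P^2$ be the plane curve with defining polynomial $F=X_0^{p-1}X_2-X_1^{p}$, that is, $y=x^p$ in the affine coordinates $x=X_1/X_0$, $y=X_2/X_0$. Since $\partial F/\partial X_1=-pX_1^{p-1}=0$, the tangent line at every smooth point has coefficients $(\partial F/\partial X_0: 0: \partial F/\partial X_2)$ and hence passes through the point $(0:1:0)$. Thus
$$T_X=\{(0:1:0)\}, $$
so $X$ is strange, but $X$ is not a cone: the projection from $T_X$ induces the purely inseparable extension $K(x)/K(x^p)$. Up to a coordinate change, this is the hypersurface appearing in Theorem \ref{InnerMain} with $n=1$ and $e=1$.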
For a point $P \in \mathbb P^{n+1}$, the projective space $\mathbb P^{n}$ parameterizes lines passing through $P$. Then, we can identify $\pi_P(Q)=\overline{PQ}$ for any point $Q \in X \setminus \{P\}$. If $P=(0:\cdots:0:1)$, then $\pi_P(X_0:\cdots:X_{n+1})=(X_0:\cdots:X_n)$, up to the projective equivalence of $\mathbb P^n$. We note the following Bertini theorem (see \cite[Theorem 1.1]{fulton-lazarsfeld}, or \cite[II. 6.1, Theorem 1]{shafarevich} and \cite[Lemma 5]{zariski}). \begin{lemma} \label{hyperplane section} Let $P \not\in T_X$ (i.e., the projection $\pi_P: X \dashrightarrow \mathbb P^n$ is generically finite and separable). Then, for a general hyperplane $H \subset \mathbb P^{n+1}$ with $H \ni P$, the hyperplane section $X_H:=X \cap H$ is an integral scheme. \end{lemma} Let $P$ be a Galois point for $X$. The Galois group induced by the Galois extension is denoted by $G_P(X)$, or simply $G_P$. Then, any element of $G_P(X)$ corresponds to a birational map from $X$ to itself. Let ${\rm Bir}(X)$ be the group consisting of all birational maps from $X$ to itself. We can consider $G_P(X)$ as a subgroup of ${\rm Bir}(X)$. For $\sigma \in G_P(X)$, the maximal open subset of $X$ on which $\sigma$ is defined is denoted by $U_{\sigma}$. We set $U[P]=\bigcap_{\sigma \in G_P}U_{\sigma}$. \begin{lemma} \label{rigion} Let $P \in \mathbb P^{n+1}$ be a Galois point and let $\sigma \in G_P$ be an induced birational map from $X$ to itself. Suppose that $H$ is a hyperplane such that $P \in H$ and $X_H$ is an integral scheme. Then, $X_H \cap U_{\sigma} \ne \emptyset$. Furthermore, if the multiplicity of $X_H$ at $P$ is less than $d$, then the restriction map $\sigma|_{X_H}$ is a birational map from $X_H$ to itself. \end{lemma} \begin{proof} For the first assertion, see the proof of \cite[V. Lemma 5.1]{hartshorne}. Now, $X_H$ corresponds to a regular point of codimension one in the scheme $X$. We can prove the first assertion by using a valuative criterion of properness for $X$.
We prove the second assertion. Let $\tau$ be the inverse of $\sigma$. Then, $\sigma$ is an isomorphism from $U[P] \cap \sigma^{-1}U[P]$ to $U[P] \cap \tau^{-1}U[P]$. By definitions of $\pi_P$ and $\sigma$, $\sigma(X_H) \subset X_H$. If $U[P] \cap \sigma^{-1}U[P] \cap H \ne \emptyset$, we have the conclusion. Assume that $U[P] \cap \sigma^{-1}U[P] \cap H =\emptyset$. Then, $\sigma(U[P] \cap H) \subset (X \cap H) \setminus (U[P] \cap H)$. Since $U[P] \cap H$ is of dimension $n-1$ and $(X \cap H) \setminus (U[P] \cap H)$ is of dimension $\le n-2$, $\sigma^{-1}(\sigma(Q))$ is one-dimensional for a general point $Q$ of $X \cap H$. By definitions of $\pi_P$ and $\sigma$, $\overline{PQ} \subset X \cap H$. Then, $X_H$ is a cone with center $P$. Therefore, the multiplicity of $X_H$ at $P$ is equal to $d$. This is a contradiction. \end{proof} \begin{lemma} \label{transitive} Let $P$ be a Galois point for $X$ with multiplicity $m$ and let $Q, R \in X$ be points such that $\pi_P(Q)=\pi_P(R)$ and the intersection multiplicity of $X$ and $\overline{QR}$ at $P$ is equal to $m$. Assume that $Q \in U[P]$. Then, there exists $\sigma \in G_P$ such that $\sigma(Q)=R$. \end{lemma} \begin{proof} Let $(X_0:\cdots:X_{n+1})$ be a system of homogeneous coordinates, let $x_i=X_i/X_0$ for $i=1, \ldots, n+1$, and let $F$ be the defining homogeneous polynomial of $X$. We can assume that $P=(0:\cdots:0:1)$ and $R=(1:0:\cdots:0)$ for a suitable system of coordinates. Then, the line passing through $P, Q, R$ is given by $X_1=\cdots=X_{n}=0$. Then, we can take $\pi_P(1:x_1:\cdots:x_{n+1})=(1:x_1:\cdots:x_n)$ and $\pi_P(Q)=\pi_P(R)=(1:0:\cdots:0)$. We have $$F(X_0, \ldots, X_{n+1})=A_{d-m}X_{n+1}^{d-m}+\cdots+A_1X_{n+1}+A_0, $$ where $A_i \in K[X_0, \ldots, X_n]$. We define $f(x_1, \ldots, x_{n+1}):=F(1, x_1, \ldots, x_{n+1})$ and $a_i(x_1, \ldots, x_n):=A_i(1, x_1, \ldots, x_n)$. Since $f(R)=0$, $a_0(0, \ldots, 0)=0$. 
Since the intersection multiplicity of $X$ and $\overline{QR}$ at $P$ is $m$, $a_{d-m} (0, \ldots, 0)=A_{d-m}(1, 0, \ldots, 0) \ne 0$. We consider the function $x_{n+1}$. Let $g=\prod_{\sigma \in G_P} \sigma^{*}x_{n+1}$. Since the extension induced by $\pi_P$ is Galois of degree $d-m$, $g$ is the product of all roots of $a_{d-m}t^{d-m}+\cdots+a_1t+a_0 \in K(x_1, \ldots, x_n)[t]$, so that $g=(-1)^{d-m}a_0/a_{d-m}$. Then, we have $g(Q)=((-1)^{d-m}a_0/a_{d-m})(Q)=0$. Assume that $\sigma(Q) \ne R$ for any $\sigma \in G_P$. Then, $x_{n+1}(\sigma(Q)) \ne 0$ for any $\sigma \in G_P$. Therefore, $g(Q) \ne 0$. This is a contradiction. \end{proof} \section{Proof of Theorem \ref{HyperplaneMain}} \begin{proof}[Proof of Theorem \ref{HyperplaneMain}] Suppose that $P$ is Galois for $X$. Then, $P \not\in T_X$. By Lemma \ref{hyperplane section}, for a general hyperplane $H \ni P$, $X_H$ is an integral scheme. Let $W_P \subset \check{\mathbb{P}}^{n+1}$ be the set of such hyperplanes. Since $P \not\in T_X$, $W_P \setminus \gamma(X_{\rm sm}) \ne \emptyset$. We have $S_{X_H}=S_X \cap H$ for any $H \in W_P \setminus \gamma(X_{\rm sm})$, because, for a point $Q \in X_{\rm sm} \cap H$, $T_QX=H$ if and only if $X \cap H$ is singular at $Q$. Let $U_P \subset X_{\rm sm}$ be the set of all smooth points $Q$ such that the differential map of the projection $\pi_P$ at $Q$ is surjective, and let $\Sigma_P \subset \check{\mathbb P}^{n+1}$ be the set of all hyperplanes $H$ such that $X \cap H \subset X \setminus U_P$. Since $\pi_P$ is separable, $W_P \setminus \Sigma_P \ne \emptyset$. We have $P \not\in T_{X_H}$ for any $H \in W_P\setminus (\gamma(X_{\rm sm}) \cup \Sigma_P)$. Let $\Gamma_P$ be the (finite) set of all hyperplanes $H$ such that the multiplicity of $X \cap H$ at $P$ is greater than $m$. We have $W_P \setminus (\gamma(X_{\rm sm}) \cup \Sigma_P \cup \Gamma_P) \ne \emptyset$, and any hyperplane $H$ in this set satisfies $(\star)$. This proves assertion (i). Let $H$ be a hyperplane passing through $P$ and satisfying $(\star)$.
We consider a homomorphism of groups $$ \phi: G_P \rightarrow G; \sigma \mapsto \sigma|_{X_H}, $$ where $G=\{ \sigma \in {\rm Bir}(X_H)|\sigma(X_H \cap l \setminus \{P\}) \subset X_H \cap l \mbox{ for a general line } l \mbox{ such that } P \in l \subset H \}$. Since the multiplicity of $X_H$ at $P$ is $m <d$, it follows from Lemma \ref{rigion} and the definition of a Galois point that $\phi$ is well-defined. In addition, by the condition $P \not\in T_{X_H}$, $X_H \cap l \setminus \{P\}$ consists of $d-m$ points for a general line $l \subset H$ containing $P$. It follows from Lemma \ref{transitive} that $\phi$ is injective. Since the order of $G$ is at most $d-m$, $\phi$ is an isomorphism. Then, $P$ is Galois for $X_H$. \end{proof} \begin{remark} A general hyperplane section for a Galois point which is {\it singular} was studied by T. Takahashi in his Ph.D. thesis \cite{takahashi2}. He proved that a Galois point $P \in S_X$ is also Galois for a general hyperplane section $X_H \ni P$, under the assumption that $p=0$ and $X \subset \mathbb{P}^3$ is a normal surface. \end{remark} \section{Case of inner Galois points} As an application of Theorem \ref{HyperplaneMain}, we have the following. \begin{corollary} \label{reduction1} Assume that the condition (i) in Theorem \ref{InnerMain} holds. Then, for a general hyperplane $H$, there exists a non-empty Zariski open set $U_{X_H} \subset X_H$ such that $U_{X_H} \subset \Delta(X_H)$. \end{corollary} \begin{proof} Let $U$ be an open set as in Theorem \ref{InnerMain}(i) and let $H$ be a general hyperplane. Since $H$ is general, we can assume that $U \cap H \ne \emptyset$. By Lemma \ref{hyperplane section}, $X_H$ is an integral scheme and $H$ satisfies the condition $(\star)$ for a general point $P \in X_H$. By Theorem \ref{HyperplaneMain}(ii), we can take $U_{X_H}=(U \cap X_H) \setminus (S_{X_H} \cup T_{X_H})$. \end{proof} \begin{proof}[Proof of Theorem \ref{InnerMain}] We prove the implication (i) $\Rightarrow$ (ii). 
We use induction on the dimension $n$. If $n=1$, then the assertion is nothing but a result of Fukasawa and Hasegawa \cite{fukasawa-hasegawa}. We consider the case where $n \ge 2$. Let $H \subset \mathbb P^{n+1}$ be a general hyperplane. It follows from Corollary \ref{reduction1} that there exists a Zariski open set $U_{X_H} \subset X_H$ such that $U_{X_H} \subset \Delta(X_H)$. By the induction hypothesis, $p>0$, $d$ is a power of $p$, and $X_H \subset H \cong \mathbb P^{n}$ is projectively equivalent to $X_0^{d-1}X_1-X_2^d=0$. The Gauss map $\gamma_{X_H}$ for $X_H$ is given by $(-X_0^{d-2}X_1:X_0^{d-1}:0:\cdots:0)$, if $X_H \subset \mathbb P^n$ is given by $X_0^{d-1}X_1-X_2^d=0$. Therefore, we find that $T_{X_H}$ is a linear space of dimension $n-2$ with $T_{X_H} \not\subset X_H$, and a general fiber of $\gamma_{X_H}$ is a linear space of dimension $n-2$. Since $H$ is general and $T_QX_H=T_QX \cap H$ for any smooth point $Q \in X_H$, $T_X$ is a linear space of dimension $n-1$ with $T_X \not\subset X$, $\gamma_{X_H}$ coincides with the restriction map $\gamma|_{X_H}$, and a general fiber of $\gamma$ is a linear space of dimension $n-1$. Let $P \in X$ be a general point. Since the linear spaces $\gamma^{-1}(\gamma (P))$ of dimension $n-1$ and $T_X$ of dimension $n-1$ are contained in the projective space $T_PX$ of dimension $n$, they intersect along a linear space $L_P$ of dimension $n-2$. If $L_{P'} \ne L_{P}$ for a general point $P' \in X$, then $T_X$ is contained in $X$, since the linear spaces $L_{P'} \subset T_X$ are contained in $X$ and sweep out $T_X$ as $P'$ varies. This is a contradiction. Therefore, $L_{P'}=L_P$. Then, $X$ is a cone with an $(n-2)$-dimensional center $L$. For a suitable system of coordinates, we can assume that $L$ is defined by $X_0=X_1=X_2=0$ and $X$ is defined by $F(X_0, X_1, X_2)=0$. We can assume that $H$ is defined by $X_{n+1}-(a_0X_0+\cdots+a_{n}X_n)=0$.
Then, $X_H$ is given by the same equation $F=0$ and there exists a linear transformation $\phi:H \rightarrow H$ such that $\phi(X_H)$ is defined by $F_1:=X_0^{d-1}X_1-X_2^d=0$. Then, $\phi(L \cap H)=L \cap H$. Therefore, $\phi$ gives an automorphism of the sublinear system $\langle X_0, X_1, X_2 \rangle$ of $H^0(\mathbb P^n, \mathcal{O}(1))$ which is determined by $L \cap H$. This implies that $X$ is projectively equivalent to the hypersurface defined by $X_0^{d-1}X_1-X_2^d=0$. We consider the implication (ii) $\Rightarrow$ (i). Let $F=X_0^{p^e-1}X_1-X_2^{p^e}$ be the defining polynomial. It is not difficult to check that the singular locus $S_X$ of $X$ is given by $X_0=X_2=0$. Let $P \in X \setminus S_X$. Then, $P=(1:b_1: \cdots: b_{n+1})$ for some $b_1, \ldots, b_{n+1} \in K$ with $b_1=b_2^{p^e}$. The projection $\pi_P$ is given by $(X_1-b_1X_0:\cdots:X_{n+1}-b_{n+1}X_0)$. Let $\hat{X}_i=X_i-b_iX_0$. Then, $F(X_0, \hat{X}_1+b_1X_0, \ldots, \hat{X}_{n+1}+b_{n+1}X_0)=(\hat{X}_1+b_1X_0)X_0^{p^e-1}-(\hat{X}_2+b_2X_0)^{p^e}=\hat{X}_1X_0^{p^e-1}-\hat{X}_2^{p^e}=F(X_0, \hat{X}_1, \ldots, \hat{X}_{n+1})$. Then, in the coordinates $(X_0:\hat{X}_1:\cdots:\hat{X}_{n+1})$, the projection $\pi_P$ is given by $(\hat{X}_1:\cdots:\hat{X}_{n+1})$. Therefore, we have a field extension $K(x_0, x_2, \ldots, x_{n+1})/K(x_2, \ldots, x_{n+1})$ with a relation $F(x_0, 1, x_2, \ldots, x_{n+1})=x_0^{p^e-1}-x_2^{p^e}=0$. It is not difficult to check that this is a Galois extension, which is cyclic of degree $p^e-1$. Therefore, we have $\Delta(X)=X \setminus \{X_0=X_2=0\}$. \end{proof} \section{Case of outer Galois points} As an application of Theorem \ref{HyperplaneMain}, we have the following. \begin{corollary} \label{reduction2} Assume that the condition (i) in Theorem \ref{OuterMain} holds. Then, for a general hyperplane $H$, there exists a non-empty Zariski open set $U_{Y_H} \subset Y_H$ such that $U_{Y_H} \subset \Delta'(X_H)$. \end{corollary} \begin{proof} Let $Y, U_Y$ be as in Theorem \ref{OuterMain}(i) and let $H$ be a general hyperplane.
Since $H$ is general, we can assume that $U_Y \cap H \ne \emptyset$. By Lemma \ref{hyperplane section}, $X_H$ and $Y_H$ are integral and $H$ satisfies the condition $(\star)$ for a general point $P \in Y_H$. By Theorem \ref{HyperplaneMain}(ii), we can take $U_{Y_H}=(U_Y \cap H) \setminus T_{X_H}$. \end{proof} \begin{lemma} \label{n=1} Let $n=1$. Assume that there exist an irreducible plane curve $Y \subset \mathbb P^2$ and a non-empty open set $U_Y \subset Y$ such that $U_Y \subset \Delta'(X)$. Then, \begin{itemize} \item[(1)] $Y$ is a line, and \item[(2)] if we take a linear transformation $\phi$ such that $\phi(Y)$ is defined by $X_0=0$, then the defining polynomial of $\phi(X)$ is of the form $\sum_{j=0}^{e}\sum_{i=1}^{2}\alpha_{ij}x_i^{p^{j}}+c=0$, where $\alpha_{ij}, c \in K$. \end{itemize} \end{lemma} \begin{proof} Let $(X_0:X_1:X_{2})$ be a system of homogeneous coordinates and let $x_i=X_{i}/X_0$ for $i=1, 2$. By a result of Fukasawa \cite{fukasawa2}, $Y$ is a line and there exists a linear transformation $\psi$ such that $\psi(X)$ is defined by $f:=\sum_{j=0}^{e}\sum_{i=1}^{2}\alpha_{ij}x_i^{p^{j}}=0$. Then $\psi(Y)$ is defined by $X_0=0$. Let $\phi$ be a linear transformation as in assumption (2). Then, $\phi(X)$ is given by $(\psi \circ \phi^{-1})^*f=0$. Let $\tilde{x}_i:=(\psi\circ\phi^{-1})^*x_i=\beta_{i0}+\beta_{i1}x_1+\beta_{i2}x_2$, where $\beta_{ij} \in K$, for $i=1, 2$. Then, $\phi(X)$ is given by $f(\tilde{x}_1, \tilde{x}_2)=0$, where $f(\tilde{x}_1, \tilde{x}_2)=\sum_{j=0}^{e}\sum_{i=1}^{2}\gamma_{ij}x_i^{p^{j}}+c$ for some $\gamma_{ij}, c \in K$.
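For the reader's convenience, we spell out the last computation (an elementary verification added here, using only that in characteristic $p$ one has $(x+y)^{p^j}=x^{p^j}+y^{p^j}$ and $(\beta x)^{p^j}=\beta^{p^j}x^{p^j}$):
\[
f(\tilde{x}_1, \tilde{x}_2)=\sum_{j=0}^{e}\sum_{i=1}^{2}\alpha_{ij}\bigl(\beta_{i0}+\beta_{i1}x_1+\beta_{i2}x_2\bigr)^{p^{j}}=\sum_{j=0}^{e}\sum_{k=1}^{2}\gamma_{kj}x_k^{p^{j}}+c,
\]
where $\gamma_{kj}=\sum_{i=1}^{2}\alpha_{ij}\beta_{ik}^{p^{j}}$ and $c=\sum_{j=0}^{e}\sum_{i=1}^{2}\alpha_{ij}\beta_{i0}^{p^{j}}$.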
\end{proof} \begin{proof}[Proof of Theorem \ref{OuterMain}] We consider the following condition ($P_n$): If $X \subset \mathbb P^{n+1}$ is an irreducible hypersurface, and there exist an irreducible hypersurface $Y \subset \mathbb P^{n+1}$ and a non-empty open set $U_Y$ of $Y$ such that $U_Y \subset \Delta'(X)$, then \begin{itemize} \item[(1)] $Y$ is a hyperplane, and \item[(2)] if we take a linear transformation $\phi$ such that $\phi(Y)$ is defined by $X_0=0$, then the defining polynomial of $\phi(X)$ is of the form $\sum_{j=0}^{e}\sum_{i=1}^{n+1}\alpha_{ij}x_i^{p^{j}}+c=0$, where $\alpha_{ij}, c \in K$. \end{itemize} We prove ($P_n$). We use induction on the dimension $n$. If $n=1$, then the assertion is nothing but Lemma \ref{n=1}. We consider the case where $n \ge 2$. Let $H \subset \mathbb P^{n+1}$ be a general hyperplane. It follows from Corollary \ref{reduction2} that there exists a non-empty Zariski open set $U_{Y_H} \subset Y_H$ with $U_{Y_H} \subset \Delta'(X_H)$. By the induction hypothesis, $p>0$, $d$ is a power of $p$, and $X_H \subset H \cong \mathbb P^{n}$ is projectively equivalent to $\sum_{j=0}^{e}\sum_{i=1}^n\alpha_{ij}x_i^{p^j}+c=0$. Since $H$ is general, $Y$ is a hyperplane. This proves result (1) of $(P_n)$. We take a linear transformation $\phi$ such that $\phi(Y)$ is defined by $X_{0}=0$. Let $P=(1:a_1:\ldots:a_{n+1}) \in X$ be a general point and let $H$ be a general hyperplane passing through $P$. Then, there exists an open set $U_{Y_H} \subset Y_H$ such that $U_{Y_H} \subset \Delta'(X_H)$. If we take a linear transformation $\psi=(X_0:X_1-a_1X_0:\cdots:X_{n+1}-a_{n+1}X_{0})$, then $\psi(P)=(1:0:\cdots:0)$, $\psi(\phi(Y))$ is defined by $X_0=0$ and $\psi(H)$ is defined by $X_{n+1}-(b_1X_1+\cdots+b_nX_n)=0$ for some $b_i \in K$. Let $F(X_0, \ldots, X_{n+1})$ be the defining polynomial of $\psi(X)$, let $f=F(1, x_1, \ldots, x_{n+1})$ and let $\tilde{x}=b_1x_1+\cdots+b_nx_n$. Then, $f(0, \ldots, 0)=0$ since $(1:0:\cdots:0) \in \psi(X)$.
Since $\psi(X) \cap \psi(H)$ satisfies the condition $(P_{n-1})$ by induction, $g(x_1, \ldots, x_n):=f(x_1, \ldots, x_{n}, \tilde{x})=\sum_{i,j}\beta_{ij}x_i^{p^j}$ for some $\beta_{ij} \in K$, similar to the proof of Lemma \ref{n=1}. If $f$ has a term of degree not equal to some power of $p$, then $g$ has such a term for a general hyperplane $H$ with $P \in H$. Therefore, $f$ has only terms of degree equal to some power of $p$. Let $f_{p^i}$ be the component of $f$ of degree $p^i$ for $i=0, \ldots, e$. Then, $f_{p^i}(x_1, \ldots, x_n, \tilde{x})$ is the component of $g$ of degree $p^i$. By condition (2) of ($P_{n-1}$), $f_{p^i}(x_1, \ldots, x_n, \tilde{x})$ must be of the form $h^{p^i}$ for some linear polynomial $h(x_1, \ldots, x_n)$. Since $H$ is general, $f_{p^i}(x_1, \ldots, x_{n+1})$ must be of the form $h^{p^i}$ for some linear polynomial $h(x_1, \ldots, x_{n+1})$. Therefore, $\psi(X)$ is given by a polynomial as in condition (2) of ($P_n$), and hence so is $\phi(X)$. The implication (ii) $\Rightarrow$ (i) is derived from Proposition \ref{distribution} below. \end{proof} \begin{proposition}[cf. \cite{fukasawa2}, Propositions 2 and 3] \label{distribution} Let $X$ be an irreducible hypersurface defined by the equation in Theorem \ref{OuterMain}(ii) and let $H_0$ be the hyperplane defined by $X_0=0$. Then, we have the following. \begin{itemize} \item[(i)] $S_X$ and $T_X$ are linear spaces of dimension $n-1$ which are contained in $H_0$. \item[(ii)] $\Delta'(X)=H_0\setminus (S_X \cup T_X)$, and all points in $S_X \setminus T_X$ are Galois. (Here, we consider a point $P$ with $\pi_P^*K(\mathbb{P}^n)=K(X)$ as a Galois point.) \item[(iii)] For any Galois point $P \in H_0 \setminus T_X$, any birational map induced by $G_P$ is a restriction of a linear transformation of $\mathbb P^{n+1}$. \item[(iv)] For any Galois point $P \in H_0 \setminus T_X$, the Galois group $G_P$ is isomorphic to $(\mathbb Z/p\mathbb Z)^{\oplus m}$ for some $m \le e$.
\item[(v)] $\Delta(X)=X_{\rm sm}$ if $X$ is projectively equivalent to the hypersurface defined by $X_0^{p^e-1}X_1-X_2^{p^e}=0$, and $\Delta(X)=\emptyset$ otherwise. \end{itemize} \end{proposition} \begin{proof} Let $F=\sum_{j=0}^{e}\sum_{i=1}^{n+1}\alpha_{ij}X_0^{p^e-p^j}X_i^{p^j}$ be the defining polynomial. It is not difficult to check that $S_X$ is given by $X_0=\sum_{i=1}^{n+1}\alpha_{ie}X_i^{p^e}=0$ and $T_X$ is given by $X_0=\sum_{i=1}^{n+1}\alpha_{i0}X_i=0$. Therefore, we have (i). We prove that all points in $H_0 \setminus T_X$ are Galois. There exists $i$ such that $\alpha_{i0} \ne 0$. We can assume that $i=1$. Let $\hat{X}_1=\sum_{i=1}^{n+1}\alpha_{i0}X_i$. Then, $\phi(X_0, \ldots, X_{n+1})=(X_0, \hat{X}_1, X_2, \ldots, X_{n+1})$ is a linear transformation and $\phi(T_X)$ is given by $X_0=X_1=0$. By considering $\phi(X)$ as $X$, we can assume that $T_X$ is given by $X_0=X_1=0$. Let $P \in H_0 \setminus T_X$. Then, $P=(0:1:b_2:\cdots: b_{n+1})$ for some $b_2, \ldots, b_{n+1} \in K$. The projection $\pi_P$ is given by $(1:x_2-b_2x_1:\cdots:x_{n+1}-b_{n+1}x_1)$. Let $\hat{x}_i=x_i-b_ix_1$. Then, we have a field extension $K(x_1, \hat{x}_2, \ldots, \hat{x}_{n+1})/K(\hat{x}_2, \ldots, \hat{x}_{n+1})$ with a relation $g(x_1, \hat{x}_2, \ldots, \hat{x}_{n+1})=F(1, x_1, \hat{x}_2+b_2x_1, \ldots, \hat{x}_{n+1}+b_{n+1}x_1)=0$. By the form of $F$, $g$ is of the form $\sum_{i,j}\beta_{ij} x_i^{p^j}$ for some $\beta_{ij} \in K$. Since $P \not\in T_X$, this extension is Galois of degree $p^m$ for some $m \le e$ and the Galois group is isomorphic to $(\mathbb Z/p\mathbb Z)^{\oplus m}$ (see \cite[pp. 117--118]{stichtenoth}). Therefore, we have $H_0 \setminus (S_X \cup T_X) \subset \Delta'(X)$, and any point in $S_X \setminus T_X$ is Galois. By considering the form of $g$, we find that assertions (iii) and (iv) hold for all Galois points in $H_0\setminus T_X$. We prove that $\Delta'(X) \subset H_0 \setminus (S_X \cup T_X)$. If this is proved, then we have (ii), (iii) and (iv).
If $n=1$, then this is a result of Fukasawa (see \cite[Proposition 2]{fukasawa2}). Assume that $n \ge 2$ and there exists an outer Galois point $P \not\in H_0$ for $X$. By Theorem \ref{HyperplaneMain}, there exists a hyperplane $H \ni P$ such that $\{P\} \cup (\Delta'(X) \cap H_0 \cap H) \setminus T_{X_H} \subset \Delta'(X_H)$. When $n=2$, $X_H$ is a curve and this is a contradiction. By using induction, we have a contradiction for any $n \ge 2$. We prove (v). Assume that $P \in \Delta(X)$. Since $X \cap H_0=S_X$, $P \in X \setminus H_0$. Let $P' \in X \setminus H_0$ be a point such that the line $\overline{PP'}$ intersects the set $\Delta'(X)$. Let $R \in \overline{PP'} \cap \Delta'(X)$. Since any element of $G_R$ is a linear transformation, it follows from Lemma \ref{transitive} that there exists $\sigma \in G_R$ such that $\sigma(P)=P'$. Since $P$ is Galois, $P'$ is also Galois. Therefore, if one inner Galois point exists, then almost all points of $X$ are inner Galois points. It follows from Theorem \ref{InnerMain} that $X$ is projectively equivalent to the hypersurface defined by $X_0^{p^e-1}X_1-X_2^{p^e}=0$ and $\Delta(X)=X\setminus \{X_0=X_2=0\}=X_{\rm sm}$. \end{proof} \end{document}
\begin{document} \begin{abstract} For each link $L\subset S^3$ and every quantum grading $j$, we construct a stable homotopy type $\khoo^j(L)$ whose cohomology recovers Ozsv\'ath-Rasmussen-Szab\'o's odd Khovanov homology, ${\widetilde{H}}^i(\khoo^j(L))=\kho^{i,j}(L)$, following a construction of Lawson-Lipshitz-Sarkar of the even Khovanov stable homotopy type. Furthermore, the odd Khovanov homotopy type carries a $\mathbb{Z}/2$ action whose fixed point set is a desuspension of the even Khovanov homotopy type. We also construct a $\mathbb{Z}/2$ action on an even Khovanov homotopy type, with fixed point set a desuspension of $\khoo^j(L)$. \end{abstract} \title{An odd Khovanov homotopy type} \tableofcontents \section{Introduction} \label{sec:intro} \subsection{Khovanov homologies} In \cite{kho1} Khovanov categorified the Jones polynomial: to a link diagram $L$, he associated a bigraded chain complex, whose graded Euler characteristic is (a certain normalization of) the Jones polynomial of $L$, and whose (graded) chain homotopy type is an invariant of the underlying link. Several generalizations were soon constructed, such as invariants for tangles \cite{khovtanglefunctor,natantangle}, various perturbations \cite{lee,natantangle}, versions for other polynomials \cite{kr1,kr2}, and many others. The categorified invariant carried structure that was not visible at the decategorified level. To wit, to a link cobordism in $\mathbb{R}^3\times[0,1]$, there is an associated map of Khovanov chain complexes \cite{jacobsson,khinvartangle,natantangle,cmw-disoriented}; and this map, along with Lee's perturbation, was used by Rasmussen in \cite{rasmus-s} to define a numerical concordance invariant $s$ and to give a combinatorial proof of a theorem due to Kronheimer and Mrowka \cite{km1} on the four-ball genus of torus knots (popularly known as the Milnor conjecture).
Khovanov homology itself turns out to be a more powerful invariant than the Jones polynomial. Indeed, Khovanov homology is known to detect the unknot \cite{km2}, while the corresponding question for the Jones polynomial remains wide open. In \cite{os-double}, Ozsv\'ath and Szab\'o constructed the first relation between Khovanov homology and Floer-theoretic invariants---Heegaard Floer homology \cite{os-main} to be specific---in the form of a spectral sequence from reduced Khovanov homology of a link to the Heegaard Floer homology of its branched double cover. The spectral sequence was originally constructed over $\mathbb{Z}/2$, but it was soon realized that its integral lift does not start from the usual Khovanov homology, but rather from a different homology theory which has the same $\mathbb{Z}/2$ reduction; this new chain complex was constructed by Ozsv\'ath-Rasmussen-Szab\'o~\cite{ors} and is usually called the odd Khovanov complex, and this is the version that seems closely related to Heegaard Floer type invariants. (The even theory was later discovered to be related to certain other Floer theories, such as instanton Floer homology \cite{km2} and symplectic Khovanov homology \cite{seidel-smith}.) The two versions were combined by a pullback into a single unified theory by Putyra \cite{putyra}, cf.~\cite{putyrashumakovitch}: the unified Khovanov complex is a chain complex over $\mathbb{Z}[\xi]/(1-\xi^2)$ which recovers the even (respectively, odd) Khovanov chain complex upon setting $\xi=1$ (respectively, $\xi=-1$). In this paper, we will typically decorate the objects from the even theory by the subscript $e$, the ones from the odd theory by the subscript $o$, and the ones from the unified theory by the subscript $u$. In particular, we will denote the even, odd, and the unified Khovanov complexes as $\mathit{Kh}Cx_e(L)$, $\mathit{Kh}Cx_o(L)$, and $\mathit{Kh}Cx_u(L)$, respectively.
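In other words (a reformulation added for emphasis, with notation as above): setting $\xi=\pm1$ amounts to base change along the two ring homomorphisms $\mathbb{Z}[\xi]/(1-\xi^2)\to\mathbb{Z}$ sending $\xi\mapsto\pm1$, so that
\[
\mathit{Kh}Cx_e(L)\cong\mathit{Kh}Cx_u(L)\otimes_{\mathbb{Z}[\xi]/(1-\xi^2)}\mathbb{Z}_{\xi=1}, \qquad \mathit{Kh}Cx_o(L)\cong\mathit{Kh}Cx_u(L)\otimes_{\mathbb{Z}[\xi]/(1-\xi^2)}\mathbb{Z}_{\xi=-1},
\]
where $\mathbb{Z}_{\xi=\pm1}$ denotes $\mathbb{Z}$ viewed as a $\mathbb{Z}[\xi]/(1-\xi^2)$-algebra via $\xi\mapsto\pm1$.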
\subsection{Khovanov homotopy types} In \cite{lshomotopytype}, Lipshitz and Sarkar associated to a link diagram $L$ a finite CW spectrum $\khoh(L)$, whose reduced cellular cochain complex is isomorphic to the Khovanov complex $\mathit{Kh}Cx_e(L)$, taking the (non-basepoint) cells of $\khoh(L)$ to the standard generators of $\mathit{Kh}Cx_e(L)$. The (stable) homotopy type of $\khoh(L)$ is an invariant of the underlying link; specifically, Reidemeister moves from the diagram $L$ to a diagram $L'$ induce stable homotopy equivalences $\khoh(L) \to \khoh(L')$. A different construction of an even Khovanov homotopy type was given independently by Hu-Kriz-Kriz~\cite{hkk}, and the two versions were later shown to be equivalent~\cite{lls1}. A stable homotopy refinement of Khovanov homology endowed it with extra structure, such as an action by the Steenrod algebra~\cite{lssteenrod}, which was then used to construct a family of additional $s$-type concordance invariants~\cite{lsrasmussen}, as well as to show that the Khovanov homotopy type is a strictly stronger invariant than Khovanov homology~\cite{Seed-Kh-square}. One could ask for a spectrum invariant $\khoo(L)$ satisfying analogous properties, but with Khovanov homology replaced with odd Khovanov homology. The original Lipshitz-Sarkar construction using the Cohen-Jones-Segal framed flow category machinery from \cite{cjs} does not seem to admit an easy generalization: on account of the signs that appear in the definition of odd Khovanov homology, there is no framed flow category for the odd theory covering the framed cube flow category. However, in \cite{lls1}, Lawson-Lipshitz-Sarkar provided several more abstract constructions of $\khoh(L)$---similar to the one from~\cite{hkk}---in order to understand the behavior of the Khovanov spectrum under disjoint union and connected sum.
In this paper, we will give a slight generalization of their machinery to construct a finite $\mathbb{Z}_2$-equivariant CW spectrum $\khoo(L)=\bigvee_j\khoo^j(L)$ for each oriented link diagram $L$ (Definition~\ref{def:oddkhmain}). \begin{thm}\label{thm:oddkhmain} The (stable) homotopy type of the odd Khovanov spectrum $\khoo(L)=\bigvee_j\khoo^j(L)$ from Definition~\ref{def:oddkhmain} is independent of the choices in its construction and is an invariant of the isotopy class of the link corresponding to $L$. Its reduced cellular cochain complex agrees with the odd Khovanov complex $\KhCx_o(L)$, \[ \widetilde{C}_{\mathrm{cell}}^i(\khoo^j(L))=\KhCx_o^{i,j}(L), \] with the cells mapping to the distinguished generators of $\KhCx_o(L)$. \end{thm} We also construct a reduced theory: a finite $\mathbb{Z}_2$-equivariant CW spectrum $\khor(L,p)=\bigvee_j\khor^j(L,p)$ for each oriented link diagram $L$ with basepoint $p$ (Definition~\ref{def:redkh}). \begin{thm}\label{thm:redkh} The (stable) homotopy type of the reduced odd Khovanov spectrum $\khor(L,p)=\bigvee_j \khor^j(L,p)$ from Definition~\ref{def:redkh} is independent of the choices in its construction and is an invariant of the isotopy class of the pointed link corresponding to $(L,p)$. Its reduced cellular cochain complex agrees with the reduced odd Khovanov complex $\widetilde{\mathit{Kh}Cx}_o(L)$, \[ \widetilde{C}_{\mathrm{cell}}^i(\khor^j(L,p))= \widetilde{\mathit{Kh}Cx}_o^{i,j}(L), \] with the cells mapping to the distinguished generators of $\widetilde{\mathit{Kh}Cx}_o(L)$. There is a cofibration sequence \[\khor^{j-1}(L,p) \to \khoo^j(L)\to\khor^{j+1}(L,p).\] \end{thm} We introduce concordance invariants built from this construction, in analogy with \cite{lsrasmussen}. 
To do so, we show that associated to a cobordism of links, there exists a map of odd Khovanov spectra (we do not attempt to show that the map is well-defined); the map induces a map on the odd Khovanov chain complex, and reduces mod-2 to the usual cobordism map on $\mathit{Kh}Cx(L;\mathbb{Z}_2)$. Therefore: \begin{thm}\label{thm:cobordism-maps} The Khovanov cobordism map $\mathit{Kh}(L;\mathbb{Z}_2)\to\mathit{Kh}(L';\mathbb{Z}_2)$ associated to a link cobordism $L\to L'$ from \cite{jacobsson,khinvartangle,natantangle} is a map of $\mathcal{A}^\sigma$-modules, where $\mathcal{A}^\sigma$ is the free product of two copies of the mod-2 Steenrod algebra, and the first (respectively, second) copy acts on the mod-2 Khovanov homology by viewing it as the mod-2 cohomology of the even (respectively, odd) Khovanov homotopy type. \end{thm} Moreover, in Definition~\ref{def:evenaction}, we construct an even stable homotopy type $\khoh'(L)$ (that is, a finite CW spectrum whose cellular chain complex is the even Khovanov chain complex), equipped with a $\mathbb{Z}_2$-action. This $\mathbb{Z}_2$-action is not visible from the Burnside functor constructed in \cite{lls1}, so in some sense this $\mathbb{Z}_2$-action arises from the odd theory. We conjecture that the even space constructed here is stable homotopy equivalent to the construction of \cite{lshomotopytype}. \begin{thm}\label{thm:evenintro} The (stable) homotopy type of the even Khovanov spectrum $\khoh'(L)$ from Definition~\ref{def:evenaction} is independent of the choices in its construction and is an invariant of the isotopy class of $L$. Its reduced cellular cochain complex agrees with the even Khovanov complex $\mathit{Kh}Cx_e(L)$, \[ \widetilde{C}_{\mathrm{cell}}^i(\khoh^{\prime\,j}(L))=\mathit{Kh}Cx_e^{i,j}(L), \] with the cells mapping to the distinguished generators of $\mathit{Kh}Cx_e(L)$.
\end{thm} Finally, similar to unified Khovanov homology, we combine $\khoh(L)$ and $\khoo(L)$ into a single finite $\mathbb{Z}_2 \times \mathbb{Z}_2$-equivariant CW spectrum $\unis(L)=\bigvee_j \unis^j(L)$, which we think of as a `unified Khovanov spectrum' (Definition~\ref{def:unifiedintro}): \begin{thm}\label{thm:unifiedintro} The (stable) homotopy type of the unified Khovanov spectrum $\unis(L)$ from Definition~\ref{def:unifiedintro} is independent of the choices in its construction and is an invariant of the isotopy class of $L$. Its reduced cellular cochain complex agrees with the unified Khovanov complex $\unic(L)$, \[ \widetilde{C}_{\mathrm{cell}}^i(\unis^j(L))= \unic^{i,j}(L), \] with the cells mapping to the distinguished generators of $\unic(L)$, and the two $\mathbb{Z}_2$ actions correspond to multiplication by $\xi$ and $-\xi$, respectively. \end{thm} There is also a reduced unified spectrum $\widetilde{\X}_u(L)$ for which the analogue of Theorem \ref{thm:redkh} (Proposition \ref{prop:reduced-unified}) holds. The different spectra and the different actions admit the following relationship: \begin{thm}\label{thm:equivariance} Let $L$ be a link diagram. \begin{enumerate}[leftmargin=*] \item\label{itm:thm-equivariance-1} The action of the two $\mathbb{Z}_2$-factors is free away from the basepoint on $\unis(L)$: $\khoh(L)$ is the geometric quotient under the action of the first factor (sometimes called $\mathbb{Z}_2^+$) and $\khoo(L)$ is the geometric quotient under the second factor (sometimes called $\mathbb{Z}_2^-$); moreover, the $\mathbb{Z}_2$-action on $\khoo(L)$ is the quotient of the $\mathbb{Z}_2^+$-action on $\unis(L)$. \item\label{itm:thm-equivariance-2} The geometric fixed-point set of $\khoo^j(L)$ under $\mathbb{Z}_2$ is precisely $\Sigma^{-1}\khoh^j(L)$, and quotienting by the fixed point set produces $\unis(L)$. The induced $\mathbb{Z}_2$-action on $\unis(L)$ agrees with the $\mathbb{Z}_2^+$-action. 
This produces a cofibration sequence \[ \Sigma^{-1}\khoh(L) \to \khoo(L) \to \unis(L), \] and the induced long exact sequence on cohomology agrees with the one constructed in \cite{putyrashumakovitch}. \item\label{itm:thm-equivariance-3} The Puppe map $\unis(L)\to\khoh(L)$ from the previous cofibration sequence is homotopic to the quotient map $\unis(L)\to\unis(L)/\mathbb{Z}_2^+$. \item\label{itm:thm-equivariance-4} The geometric fixed-point set of $\khoh^{\prime\,j}(L)$ under $\mathbb{Z}_2$ is precisely $\Sigma^{-1}\khoo^j(L)$, and quotienting by the fixed point set produces $\unis(L)$. The induced $\mathbb{Z}_2$-action on $\unis(L)$ agrees with the $\mathbb{Z}_2^-$-action. This produces a cofibration sequence \[ \Sigma^{-1}\khoo(L) \to \khoh^\prime(L) \to \unis(L), \] and the induced long exact sequence on cohomology agrees with the one constructed in \cite{putyrashumakovitch}. \item\label{itm:thm-equivariance-5} The Puppe map $\unis(L)\to\khoo(L)$ from the previous cofibration sequence is homotopic to the quotient map $\unis(L)\to\unis(L)/\mathbb{Z}_2^-$. \end{enumerate} \end{thm} \subsection{Burnside categories and functors}\label{subsec:tech} This paper uses the machinery of Burnside functors from \cite{hkk,lls1}. There, the dual of the Khovanov chain complex of a link diagram with $n$ (ordered) crossings is viewed as a diagram of abelian groups: \[ \mathfrak{F}_e\colon (\underline{2}^n)^\mathrm{op} \to \mathbb{Z}\text{-Mod}, \] where $\underline{2}^n$ is the category with objects elements of $\{0,1\}^n$ and a unique arrow $a \to b$ if $a \geq b$. In order to construct a stable homotopy type, one considers a certain $2$-category $\mathscr{B}$, \emph{the Burnside category}, whose objects are finite sets and whose $1$-morphisms are finite correspondences.
The $2$-category $\mathscr{B}$ naturally comes with a forgetful functor to abelian groups $\mathscr{B} \to \mathbb{Z}\text{-Mod}$ by sending a set $S$ to the free abelian group $\mathbb{Z}\basis{S}$ generated by $S$. The Khovanov stable homotopy type arises from a lift: \[ \begin{tikzpicture}[baseline={([yshift=-.8ex]current bounding box.center)},xscale=2.5,yscale=1.5] \node (a0) at (0,0) {$\underline{2}^n$}; \node (a1) at (1,0) {$\mathbb{Z}\text{-Mod}$}; \node (b1) at (1,1) {$\mathscr{B}$}; \draw[->] (a0) -- (a1) node[pos=0.5,anchor=north] {\scriptsize $\mathfrak{F}^\mathrm{op}_e$}; \draw[->] (b1) -- (a1) node[pos=0.2,anchor=east] {}; \draw[->,dashed] (a0) -- (b1) node[pos=0.5,anchor=south east]{\scriptsize $F_e$}; \end{tikzpicture} \] The \emph{realization} of any functor $F_e\colon \underline{2}^n \to \mathscr{B}$ is then defined as a finite CW spectrum $\Realize{F_e}$ associated to $F_e$. Indeed, as shown in \cite{lls2}, the stable equivalence class of the functor $F_e$, modulo shifting by the number of negative crossings $n_-$, is itself an invariant (which recovers the even Khovanov homotopy type $\khoh$---the appropriately shifted stable homotopy type of $\Realize{F_e}$). In this paper, we first review, in \S\ref{sec:khovanovhomology}, the odd Khovanov chain complex, viewing it as a diagram: \[ \mathfrak{F}_o\colon (\underline{2}^n)^\mathrm{op}\to \mathbb{Z}\text{-Mod}. \] Indeed, $\mathfrak{F}_o$ and $\mathfrak{F}_e$ can be combined by a pullback into a unified functor \[ \mathfrak{F}_u\colon (\underline{2}^n)^\mathrm{op}\to \mathbb{Z}_u\text{-Mod}, \] where $\mathbb{Z}_u=\mathbb{Z}[\xi]/(\xi^2-1)$, and $\mathfrak{F}_e$ (respectively, $\mathfrak{F}_o$) is obtained by setting $\xi=+1$ (respectively, $\xi=-1$).
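For concreteness, we spell out how the forgetful functor mentioned above acts on $1$-morphisms (a standard point from \cite{hkk,lls1}, recorded here for convenience): a correspondence $S \xleftarrow{s} A \xrightarrow{t} T$ from $S$ to $T$ is sent to the homomorphism $\mathbb{Z}\basis{S}\to\mathbb{Z}\basis{T}$ given on generators by
\[
x \longmapsto \sum_{a\in s^{-1}(x)} t(a),
\]
so that the matrix entry at $(x,y)\in S\times T$ is the (finite) cardinality of $s^{-1}(x)\cap t^{-1}(y)$.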
Then in \S\ref{sec:burn}, we move to some slight generalizations of the Burnside $2$-category: $\mathscr{B}_{\sigma}$, the \emph{signed Burnside category}, in order to take account of the signs appearing in odd Khovanov homology, and $\mathscr{B}_\xi$, the \emph{free $\mathbb{Z}_2$-equivariant Burnside category}, in order to take account of the $\xi$-action. In \S\ref{sec:box} we show how the realization construction of \cite{lls1} generalizes to $\mathscr{B}_{\sigma}$ and $\mathscr{B}_\xi$. Roughly, the realization process of a functor to $\mathscr{B}_{\sigma}$ is comparable to the realization process of a functor to $\mathscr{B}$, except that where a sign appears, the corresponding cell is glued in by a fixed orientation-reversing homeomorphism. And then in \S\ref{sec:oddkh} we construct lifts \[ \begin{tikzpicture}[ cross line/.style={preaction={draw=white, -, line width=4pt}}, baseline={([yshift=-.8ex]current bounding box.center)},xscale=3,yscale=1.5] \node (ze) at (0,0.5) {$\mathbb{Z}\text{-Mod}$}; \node (zo) at (1.4,1) {$\mathbb{Z}\text{-Mod}$}; \node (zu) at (2,0) {$\mathbb{Z}_u\text{-Mod}$}; \node (two) at ($(ze)!0.5!(zo)$) {$\underline{2}^n$}; \node (be) at ($(ze)+(0,2)$) {$\mathscr{B}$}; \node (bo) at ($(zo)+(0,2)$) {$\mathscr{B}_{\sigma}$}; \node (bu) at ($(zu)+(0,2)$) {$\mathscr{B}_\xi$}; \foreach \t/\d/\a/\b in {e//south east/east,o/dashed/north west/east,u/dashed/south west/south}{ \draw[->] (two) -- (z\t) node[pos=0.5,inner sep=0,outer sep=1pt,anchor=\a] {\scriptsize $\mathfrak{F}^\mathrm{op}_\t$}; \draw[cross line, ->,\d] (two) -- (b\t) node[pos=0.4,anchor=\b] {\scriptsize $F_\t$} ; \draw[->] (b\t) -- (z\t); } \draw[->] (zu) edge node[pos=0.8,anchor=west,align=left] {\scriptsize $\xi=-1$} (zo) edge node[pos=0.5,anchor=north] {\scriptsize $\xi=+1$} (ze); \draw[cross line, ->] (bu) -- (be) node[pos=0.6,anchor=south] {\scriptsize $\mathcal{Q}$};
\draw[->] (bo) edge node[pos=0.5,anchor=south east] {\scriptsize $\mathcal{F}$} (be) edge node[pos=0.5,anchor=south west] {\scriptsize $\mathcal{D}$} (bu); \end{tikzpicture} \] where the arrows among the various versions of Burnside categories are those from Figure~\ref{fig:burnsidecategories}. Note that the lift \[ F_o\colon\underline{2}^n\to\mathscr{B}_\sigma \] recovers all the other lifts; it decomposes along quantum gradings $F_o=\amalg_jF_o^j$, and its equivariant equivalence class, after shifting by $n_-$, recovers all the (correctly shifted) stable homotopy types obtained by the above-mentioned realization procedure, and is itself an invariant: \begin{thm}\label{thm:odd-functor-invariant} The equivariant equivalence class of the shifted functor $\Sigma^{-n_-}F^j_o$ from Definition~\ref{def:kh-odd-burnside-functor} is independent of all the choices in its construction and is a link invariant. \end{thm} \subsection*{Acknowledgement} We are grateful to Anna Beliakova, Mike Hill, Tyler Lawson, Francesco Lin, Robert Lipshitz, and Ciprian Manolescu for many helpful conversations. \section{Khovanov homologies}\label{sec:khovanovhomology} In this section we review the definitions and basic properties of three versions of Khovanov homology for an oriented link $L$: ordinary or {\emph{even}} Khovanov homology $\mathit{Kh}(L)=\mathit{Kh}_e(L)$, defined by Khovanov \cite{kho1}; {\emph{odd}} Khovanov homology $\kho(L)$ defined by Ozsv\'{a}th, Rasmussen and Szab\'{o} \cite{ors}; and $\unih(L)$, the {\emph{unified}} theory of Putyra and Putyra-Shumakovitch \cite{putyra,putyrashumakovitch}, which generalizes the previous two theories. These three homological invariants will be upgraded to Burnside functors in \S\ref{sec:oddkh}. \subsection{The cube category}\label{subsec:prelim} We first recall the cube category.
Call $\underline{2}=\{0,1\}$ the one-dimensional cube, viewed as a partially ordered set by setting $1>0$, or as a category with a single non-identity morphism from $1$ to $0$. Call $\underline{2}^n=\{0,1\}^n$ the $n$-dimensional cube, with the partial order given by \[ u=(u_1,\dots,u_n) \geq v=(v_1,\dots,v_n) \text{ if and only if } \forall \; i \; (u_i \geq v_i). \] It has the categorical structure induced by the partial order, where $\mathrm{Hom}_{\underline{2}^n}(u,v)$ has a single element if $u \geq v$ and is empty otherwise. Write $\phi_{u,v}$ for the unique morphism $u \to v$ if it exists. The cube carries a grading given by $|v|=\sum_i v_i$. Write $u\geqslant_k v$ if $u\geq v$ and $|u|-|v|=k$. When $u\geqslant_1 v$, call the corresponding morphism $\phi_{u,v}$ an {\emph{edge}}. \begin{defn}\label{def:signassign} The {\emph{standard sign assignment}} $s$ is the following function from edges of $\underline{2}^n$ to $\mathbb{Z}_2$. For $u\geqslant_1 v$, let $k$ be the unique element in $\{1,\dots,n\}$ with $u_k > v_k$. Then \[ s_{u,v} \; := \; \sum^{k-1}_{i=1} u_i \bmod{2}. \] \end{defn} For instance, when $n=2$ we have $s_{(1,1),(0,1)}=s_{(0,1),(0,0)}=s_{(1,0),(0,0)}=0$ while $s_{(1,1),(1,0)}=1$, so the two paths of edges from $(1,1)$ to $(0,0)$ carry opposite total signs. Note that $s$ may be viewed as a 1-cochain in $C_{\mathrm{cell}}^*([0,1]^n;\mathbb{Z}_2)$. In general, $s+c$ is called a \emph{sign assignment} for any $1$-cocycle $c$ in $C_{\mathrm{cell}}^*([0,1]^n;\mathbb{Z}_2)$. \subsection{Some rings and modules} We will often write $\mathbb{Z}_2$ multiplicatively as $\{1,\xi\}$. The integral group ring of $\mathbb{Z}_2$ then has the presentation $\mathbb{Z}[\xi]/(\xi^2-1)$, which we abbreviate to $\mathbb{Z}_u$.
There are two basic $\mathbb{Z}_u$-modules $\mathbb{Z}_e$ and $\mathbb{Z}_o$ obtained from $\mathbb{Z}_u$ by setting $\xi=+1$ and $\xi=-1$, which fit into the following diagram: \begin{equation} \begin{tikzpicture}[baseline={([yshift=-.8ex]current bounding box.center)},xscale=2.5,yscale=1.5] \node (a0) at (0,0) {$\mathbb{Z}_u$}; \node (b1) at (-0.5,-0.75) {$\mathbb{Z}_e$}; \node (b2) at (0.5,-0.75) {$\mathbb{Z}_o$}; \node (c0) at (0,-1.5) {$\mathbb{Z}_2$}; \draw[->] (a0) -- (b1) node[pos=0.2,anchor=east] {\tiny$\xi=+1\;\;$}; \draw[->] (a0) -- (b2) node[pos=0.2,anchor=west] {\tiny$\;\;\xi=-1$}; \draw[->] (b1) -- (c0) node[pos=0.5,anchor=south] {}; \draw[->] (b2) -- (c0) node[pos=0.5,anchor=east] {}; \end{tikzpicture}\label{eq:pullback} \end{equation} The modules $\mathbb{Z}_e$ and $\mathbb{Z}_o$ are both infinite cyclic groups, on which $\xi\in \mathbb{Z}_u$ acts as $+1$ and as $-1$, respectively. All maps in the above diagram are surjections, and in fact $\mathbb{Z}_u$ is a pullback for this diagram in the category of rings. Equivalently, $\mathbb{Z}_u$ is isomorphic to the subring of $\mathbb{Z}\oplus \mathbb{Z}$ consisting of pairs $(a,b)$ with $a\equiv b \bmod{2}$. Note that the kernel of the map $\xi=+1$ (resp. $\xi=-1$) in the diagram is isomorphic to $\mathbb{Z}_o$ (resp. $\mathbb{Z}_e$). In particular, we have a short exact sequence $0\to \mathbb{Z}_e \to \mathbb{Z}_u\to \mathbb{Z}_o\to 0$, and an analogous exact sequence with $e$ and $o$ swapped. Now let $S$ be any finite set, and let $T(S)$ be the tensor algebra generated by $S$ over $\mathbb{Z}_u$. Let $I$ be the two-sided ideal of $T(S)$ generated by the elements $x\otimes x$ and $x\otimes y - \xi y\otimes x$ for $x,y\in S$. \begin{defn}\label{def:uext} Given a finite set $S$, we define the $\mathbb{Z}_u$-module $\Lambda_u(S):= T(S)/I$.
\end{defn} We will abuse notation and write $x_1\otimes \cdots \otimes x_n\in \Lambda_u(S)$ for the equivalence class of the element $x_1\otimes \cdots \otimes x_n\in T(S)$, whenever each $x_i\in S$. We have the fundamental relation \[ x_1\otimes \cdots \otimes x_n \; = \; \xi^{|\sigma|}x_{\sigma(1)}\otimes \cdots \otimes x_{\sigma(n)} \] for any permutation $\sigma$ of $\{1,\dots,n\}$, where $|\sigma|\in\{0,1\}$ denotes the parity of $\sigma$. Upon setting $\xi=+1$, we recover $\Lambda_e(S)$, the symmetric algebra on the set $S$ modulo the ideal generated by squares of elements in $S$. If we set $\xi=-1$, we recover $\Lambda_o(S)$, the usual exterior algebra on the set $S$. If we write $\Lambda_2(S)$ for the $\mathbb{Z}_2$-exterior algebra on the set $S$, then these four algebras fit into a pullback diamond analogous to Diagram~\eqref{eq:pullback}. For example, if $S=\{a,b\}$, then $\Lambda_u(S)$ is a free $\mathbb{Z}_u$-module with basis $1$, $a$, $b$, $a\otimes b$, subject to the relations $a\otimes a=b\otimes b=0$ and $b\otimes a=\xi\, a\otimes b$; in general, $\Lambda_u(S)$ is free of rank $2^{|S|}$ over $\mathbb{Z}_u$. \subsection{Three Khovanov homology theories}\label{sec:threekhfunc} We will now recall the definition of odd Khovanov homology, as well as the definition of the unified theory (in the spirit of odd Khovanov homology). Let $L$ be a link diagram with $n$ ordered crossings. Each crossing $\begin{tikzpicture}[scale=0.04,baseline={([yshift=-.8ex]current bounding box.center)}] \draw (0,10) -- (10,0); \node[crossing] at (5,5) {}; \draw (0,0) -- (10,10); \end{tikzpicture}$ can be resolved as the \emph{$0$-resolution} $\begin{tikzpicture}[scale=0.04,baseline={([yshift=-.8ex]current bounding box.center)}] \draw (0,0) .. controls (4,4) and (4,6) .. (0,10); \draw (10,0) .. controls (6,4) and (6,6) .. (10,10); \end{tikzpicture}$ or the \emph{$1$-resolution} $\begin{tikzpicture}[scale=0.04,baseline={([yshift=-.8ex]current bounding box.center)}] \draw (0,0) .. controls (4,4) and (6,4) .. (10,0); \draw (0,10) .. controls (4,6) and (6,6) .. (10,10); \end{tikzpicture}$.
We assume that $L$ is decorated by an {\emph{orientation of crossings}}, i.e., a choice of an arrow at each crossing, $\begin{tikzpicture}[scale=0.04,baseline={([yshift=-.8ex]current bounding box.center)}] \draw (0,10) -- (10,0); \node[crossing] at (5,5) {}; \draw (0,0) -- (10,10); \draw[thick,->,red] (0,5) --(10,5); \end{tikzpicture}$ or $\begin{tikzpicture}[scale=0.04,baseline={([yshift=-.8ex]current bounding box.center)}] \draw (0,10) -- (10,0); \node[crossing] at (5,5) {}; \draw (0,0) -- (10,10); \draw[thick,<-,red] (0,5) --(10,5); \end{tikzpicture}$, connecting the two arcs of the $0$-resolution at that crossing. Rotating the arrows $90^\circ$ clockwise (this requires choosing an orientation of the plane) produces an arrow joining the two arcs of the $1$-resolution at that crossing as well. That is, a crossing $\begin{tikzpicture}[scale=0.04,baseline={([yshift=-.8ex]current bounding box.center)}] \draw (0,10) -- (10,0); \node[crossing] at (5,5) {}; \draw (0,0) -- (10,10); \draw[thick,->,red] (0,5) --(10,5); \end{tikzpicture}$ (respectively, $\begin{tikzpicture}[scale=0.04,baseline={([yshift=-.8ex]current bounding box.center)}] \draw (0,10) -- (10,0); \node[crossing] at (5,5) {}; \draw (0,0) -- (10,10); \draw[thick,<-,red] (0,5) --(10,5); \end{tikzpicture}$) has $0$-resolution $\begin{tikzpicture}[scale=0.04,baseline={([yshift=-.8ex]current bounding box.center)}] \draw (0,0) .. controls (4,4) and (4,6) .. (0,10); \draw (10,0) .. controls (6,4) and (6,6) .. (10,10); \draw[thick,->,red] (3,5) --(7,5); \end{tikzpicture}$ (respectively, $\begin{tikzpicture}[scale=0.04,baseline={([yshift=-.8ex]current bounding box.center)}] \draw (0,0) .. controls (4,4) and (4,6) .. (0,10); \draw (10,0) .. controls (6,4) and (6,6) .. (10,10); \draw[thick,<-,red] (3,5) --(7,5); \end{tikzpicture}$) and \emph{$1$-resolution} $\begin{tikzpicture}[scale=0.04,baseline={([yshift=-.8ex]current bounding box.center)}] \draw (0,0) .. controls (4,4) and (6,4) .. (10,0); \draw (0,10) ..
controls (4,6) and (6,6) .. (10,10); \draw[thick,red,->] (5,7)--(5,3); \end{tikzpicture}$ (respectively, $\begin{tikzpicture}[scale=0.04,baseline={([yshift=-.8ex]current bounding box.center)}] \draw (0,0) .. controls (4,4) and (6,4) .. (10,0); \draw (0,10) .. controls (4,6) and (6,6) .. (10,10); \draw[thick,red,<-] (5,7)--(5,3); \end{tikzpicture}$). We will recall the `top-down' construction of three functors \[ \mathfrak{F}_u : (\underline{2}^n)^\mathrm{op} \longrightarrow \mathbb{Z}_u\text{-Mod}, \qquad \mathfrak{F}_e : (\underline{2}^n)^\mathrm{op} \longrightarrow \mathbb{Z}\text{-Mod}, \qquad \mathfrak{F}_o : (\underline{2}^n)^\mathrm{op} \longrightarrow \mathbb{Z}\text{-Mod}, \] by first defining the unified functor $\mathfrak{F}_u$, and then defining $\mathfrak{F}_e$ and $\mathfrak{F}_o$ by setting $\xi=+1$ and $\xi=-1$, respectively. (Alternatively, one can carry out a bottom-up approach, first defining $\mathfrak{F}_e$ and $\mathfrak{F}_o$, and then defining $\mathfrak{F}_u$ as their pullback.) For each $v\in\underline{2}^n$, let $L_v$ be the complete resolution diagram formed by taking the $0$-resolution at the $i^{\text{th}}$ crossing if $v_i=0$, or the $1$-resolution otherwise. The diagram $L_v$ is a planar diagram of embedded circles and oriented arcs. Write $Z(L_v)$ for the set of circles in $L_v$. For objects $v\in\underline{2}^n$, set $\mathfrak{F}_u(v) = \Lambda_u(Z(L_v))$. For morphisms, first consider the following assignment $\mathfrak{F}_u'$ on the edges; the actual functor $\mathfrak{F}_u$ will be a slight modification of this assignment. Let $\phi_{v,w}$ be an edge in $\underline{2}^n$, so that its reverse $\phi_{w,v}^\mathrm{op}$ is a morphism in the opposite category. Suppose $\phi_{w,v}^\mathrm{op}$ corresponds to a split cobordism, so that some circle $a\in Z(L_w)$ splits into two circles $a_1,a_2\in Z(L_v)$, and that the other elements of these two sets of circles are naturally identified.
Suppose further that the arc in $L_v$ associated to this splitting points from $a_1$ to $a_2$. Define \[ \mathfrak{F}_u'(\phi^\mathrm{op}_{w,v})(x) \; = \; (a_1 + \xi a_2)\otimes x \] where $\Lambda_u(Z(L_w))$ is viewed as embedded in $\Lambda_u(Z(L_v))$ by sending $a$ to either $a_1$ or $a_2$. Now suppose instead we have a merge cobordism, so that two circles $a_1,a_2\in Z(L_w)$ merge into one circle $a\in Z(L_v)$, and that the other elements in these two sets of circles are naturally identified. Define $\mathfrak{F}'_u(\phi_{w,v}^\mathrm{op})$ to be the $\mathbb{Z}_u$-algebra map $\Lambda_u(Z(L_w))\to \Lambda_u(Z(L_v))$ determined by sending $a_1$ and $a_2$ to $a$, and by the identity map on the other circle generators. The assignment $\mathfrak{F}'_u$ on the edges does not commute across the 2-dimensional faces; rather, it does so only up to possible multiplication by $\xi$. We correct the assignment $\mathfrak{F}'_u$ on morphisms as follows. The two-dimensional configurations can be divided into four categories as follows (with unoriented arcs being orientable in either direction).
\begin{equation}\label{diagram:type} \begin{split} A:\quad& \begin{tikzpicture}[scale=0.06,baseline={([yshift=-.8ex]current bounding box.center)}] \draw (0,0) circle (5cm); \draw (11,0) circle (5cm); \draw[thick,red] (-5,0) -- (5,0); \draw[thick,red] (11-5,0) -- (11+5,0); \end{tikzpicture}, \begin{tikzpicture}[scale=0.06,baseline={([yshift=-.8ex]current bounding box.center)}] \draw (0,0) circle (5cm); \clip (0,0) circle (5cm); \draw[thick,red] (-5,0) circle (4cm); \draw[thick,red] (5,0) circle (4cm); \end{tikzpicture}, \begin{tikzpicture}[scale=0.06,baseline={([yshift=-.8ex]current bounding box.center)}] \node[inner sep=0pt, outer sep=0pt,draw,shape=circle,minimum width=0.6cm,style={transform shape=False}] (a) at (0,0) {}; \node[inner sep=0pt, outer sep=0pt,draw,shape=circle,minimum width=0.6cm,style={transform shape=False}] (b) at (15,0) {}; \draw[thick,red,->] (a) to[out=30,in=150] (b); \draw[thick,red,->] (a) to[out=-30,in=-150] (b); \end{tikzpicture}, \begin{tikzpicture}[scale=0.06,baseline={([yshift=-.8ex]current bounding box.center)}] \draw[thick,red] (-5,0) circle (4cm); \draw[fill=white] (0,0) circle (5cm); \draw[thick,red] (0,5) -- (0,-5); \end{tikzpicture}. 
\\ C:\quad& \begin{tikzpicture}[scale=0.06,baseline={([yshift=-.8ex]current bounding box.center)}] \draw (0,0) circle (5cm); \draw (15,0) circle (5cm); \draw (30,0) circle (5cm); \draw[thick,red] (5,0) -- (10,0); \draw[thick,red] (20,0) -- (25,0); \end{tikzpicture}, \begin{tikzpicture}[scale=0.06,baseline={([yshift=-.8ex]current bounding box.center)}] \draw (0,0) circle (5cm); \draw (15,0) circle (5cm); \draw[thick,red] (5,0) -- (10,0); \draw[thick,red] (15,5) -- (15,-5); \end{tikzpicture}, \begin{tikzpicture}[scale=0.06,baseline={([yshift=-.8ex]current bounding box.center)}] \node[inner sep=0pt, outer sep=0pt,draw,shape=circle,minimum width=0.6cm,style={transform shape=False}] (a) at (0,0) {}; \node[inner sep=0pt, outer sep=0pt,draw,shape=circle,minimum width=0.6cm,style={transform shape=False}] (b) at (15,0) {}; \draw[thick,red,<-] (a) to[out=30,in=150] (b); \draw[thick,red,->] (a) to[out=-30,in=-150] (b); \end{tikzpicture}, \begin{tikzpicture}[scale=0.06,baseline={([yshift=-.8ex]current bounding box.center)}] \draw (0,0) circle (5cm); \draw (15,0) circle (5cm); \draw (26,0) circle (5cm); \draw (41,0) circle (5cm); \draw[thick,red] (5,0) -- (10,0); \draw[thick,red] (31,0) -- (36,0); \end{tikzpicture}, \begin{tikzpicture}[scale=0.06,baseline={([yshift=-.8ex]current bounding box.center)}] \draw (0,0) circle (5cm); \draw (11,0) circle (5cm); \draw (26,0) circle (5cm); \draw[thick,red] (16,0) -- (21,0); \draw[thick,red] (-5,0) -- (5,0); \end{tikzpicture}. \\ X:\quad& \begin{tikzpicture}[scale=0.06,baseline={([yshift=-.8ex]current bounding box.center)}] \node[inner sep=0pt, outer sep=0pt,draw,shape=circle,minimum width=0.6cm,style={transform shape=False}] (a) at (0,0) {}; \draw[thick,red,<-] (-5,0) --(5,0); \draw[thick,red,->] (a) to[out=150,in=90] (-10,0) to[out=-90,in=210] (a); \end{tikzpicture}. 
\\ Y:\quad& \begin{tikzpicture}[scale=0.06,baseline={([yshift=-.8ex]current bounding box.center)}] \node[inner sep=0pt, outer sep=0pt,draw,shape=circle,minimum width=0.6cm,style={transform shape=False}] (a) at (0,0) {}; \draw[thick,red,<-] (-5,0) --(5,0); \draw[thick,red,<-] (a) to[out=150,in=90] (-10,0) to[out=-90,in=210] (a); \end{tikzpicture}. \end{split} \end{equation} For the type-A faces, $\mathfrak{F}'_u$ commutes after multiplication by $\xi$; for the type-C faces, $\mathfrak{F}'_u$ commutes directly; while for the type-X and type-Y faces, $\mathfrak{F}'_u$ commutes both directly and after multiplication by $\xi$. Define $\psi_X$ (respectively, $\psi_Y$), an element of $C_{\mathrm{cell}}^2([0,1]^n;\{1,\xi\})$, to be $\xi$ for the type-A or -X faces (respectively, type-A or -Y faces), and $1$ for the type-C or -Y faces (respectively, type-C or -X faces). \begin{defn}\label{def:edgeass} A type-X (respectively, type-Y) {\emph{edge assignment}} for the diagram $L$ with oriented crossings is a (multiplicative) cochain $\epsilon\in C_{\mathrm{cell}}^1([0,1]^n;\{1,\xi\})$ such that $\delta\epsilon=\psi_X$ (respectively, $\delta\epsilon=\psi_Y$). \end{defn} Fix an edge assignment $\epsilon$, either of type X or of type Y. For an edge $\phi_{w,v}^\mathrm{op}$, set \[ \mathfrak{F}_u(\phi_{w,v}^\mathrm{op}) = \epsilon(\phi_{w,v}^\mathrm{op})\mathfrak{F}'_u(\phi_{w,v}^\mathrm{op}) \] and this defines the functor $\mathfrak{F}_u$. Setting $\xi=+1$ and $\xi=-1$ throughout the above construction defines the even and odd functors $\mathfrak{F}_e$ and $\mathfrak{F}_o$, respectively. The three Khovanov homology theories are then defined from these functors as follows.
First, we define for $\bullet \in \{u,e,o\}$ a chain complex: \[ \mathit{Kh}Cx_\bullet(L) \; = \; \bigoplus_{v\in\underline{2}^n} \mathfrak{F}_\bullet(v) , \qquad\;\; \partial_\bullet \; = \; \sum_{v\geqslant_1 w} (-1)^{s_{v,w}}\,\mathfrak{F}_\bullet(\phi_{w,v}^\mathrm{op}). \] (Here $s$ is the standard sign assignment from Definition~\ref{def:signassign}.) This complex depends on the ordering of the crossings, the choice of crossing orientations, and the choice of edge assignment, as does the corresponding functor. However, these choices do not affect the resulting chain homotopy type \cite[Theorem 1]{kho1}, \cite[Theorem 1.3]{ors}, \cite[\S7]{putyra}. \begin{defn} For $\bullet\in\{u,e,o\}$ define $\mathit{Kh}_\bullet(L) = H_\ast(\mathit{Kh}Cx_\bullet(L),\partial_\bullet)$. \end{defn} The unified homology group $\unih(L)$ is a $\mathbb{Z}_u$-module, while the even and odd theories are abelian groups. Each theory carries a bigrading, defined in the usual way, as follows. Let $n_-$ be the number of negative crossings $\begin{tikzpicture}[scale=0.04,baseline={([yshift=-.8ex]current bounding box.center)}] \draw[->] (0,10) -- (10,0); \node[crossing] at (5,5) {}; \draw[->] (0,0) -- (10,10); \end{tikzpicture}$ in the diagram $L$. The homological and quantum gradings, denoted $i$ and $j$ respectively, are defined on $x=a_1\otimes \cdots \otimes a_k\in\Lambda_\bullet(Z(L_v))$ with $a_1,\dots,a_k\in Z(L_v)$ to be \[ i(x) \; = \; |v|-n_-,\qquad j(x) \; = \; |Z(L_v)|-2k+|v|+n-3n_-. \] We write $\mathit{Kh}_\bullet^{i,j}(L)$ for the corresponding bigraded module. We note that the unified theory in \cite{putyra} is called the {\emph{covering homology}}, and is more specifically obtained from Example 10.7 of loc.~cit.~by setting $X=Z=1$ and $Y=\xi$. The terminology {\emph{unified}} is used in \cite{putyrashumakovitch}.
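As a simple sanity check on these grading conventions, consider the crossingless diagram $U$ of the unknot, so that $n=n_-=0$ and the cube $\underline{2}^0$ has a single vertex $v$ with $Z(U_v)=\{a\}$. The chain complex has the two generators $1$ and $a$, the differential vanishes, and the formulas above give
\[
i(1)=i(a)=0, \qquad j(1)=1, \qquad j(a)=-1,
\]
so $\mathit{Kh}_e(U)$ and $\mathit{Kh}_o(U)$ are free abelian of rank one in bidegrees $(0,\pm 1)$, and $\mathit{Kh}_u(U)$ is the free $\mathbb{Z}_u$-module on the same two generators.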
\begin{rmk}\label{rmk:edgeass} Our definition of an edge assignment is non-standard, but the standard type-X (respectively, type-Y) edge assignment from \cite{ors,putyra} may be obtained from our type-X (respectively, type-Y) edge assignment by multiplying by $(-1)^{s_{v,w}}$. \end{rmk} \subsection{Relations between the theories} From the definitions, it is clear that the chain complexes $\mathit{Kh}Cx_\bullet(L)$ above fit into a pull-back diagram just as in Diagram~\eqref{eq:pullback}, \begin{equation} \begin{tikzpicture}[baseline={([yshift=-.8ex]current bounding box.center)},xscale=2.5,yscale=1.5] \node (a0) at (0,0) {$\mathit{Kh}Cx_u(L)$}; \node (b1) at (-0.5,-0.75) {$\mathit{Kh}Cx_e(L)$}; \node (b2) at (0.5,-0.75) {$\mathit{Kh}Cx_o(L)$}; \node (c0) at (0,-1.5) {$\mathit{Kh}Cx_2(L)$}; \draw[->] (a0) -- (b1) node[pos=0.2,anchor=east] {\tiny$\xi=+1\;\;$}; \draw[->] (a0) -- (b2) node[pos=0.2,anchor=west] {\tiny$\;\;\xi=-1$}; \draw[->] (b1) -- (c0) node[pos=0.5,anchor=south] {}; \draw[->] (b2) -- (c0) node[pos=0.5,anchor=east] {}; \end{tikzpicture} \end{equation} where $\mathit{Kh}Cx_2(L)$ denotes the Khovanov chain complex with $\mathbb{Z}_2$ coefficients. Indeed, we may define $\mathit{Kh}Cx_u(L)$ to be the pullback of $\mathit{Kh}Cx_e(L)$ and $\mathit{Kh}Cx_o(L)$ over $\mathit{Kh}Cx_2(L)$, \[ \mathit{Kh}Cx_u(L)=\{(a,b)\in\mathit{Kh}Cx_e(L)\oplus\mathit{Kh}Cx_o(L)\mid a\equiv b\bmod{2}\}, \] which then naturally inherits a $\mathbb{Z}_2$-action $\xi(a,b)=(a,-b)$. We also have a short exact sequence of chain complexes $0\to \mathit{Kh}Cx_e(L)\to \mathit{Kh}Cx_u(L)\to \mathit{Kh}Cx_o(L)\to 0$. This may be viewed as arising from tensoring the short exact sequence $0\to \mathbb{Z}_e\to\mathbb{Z}_u\to\mathbb{Z}_o\to 0$ by the unified chain complex $\mathit{Kh}Cx_u(L)$ over $\mathbb{Z}_u$. There is a similar exact sequence with $e$ and $o$ swapped.
Passing to homology yields the following long exact sequences, cf.~\cite{putyrashumakovitch}: \begin{equation}\label{eq:les_oe} \cdots\to \mathit{Kh}_e^{i,j}(L) \longrightarrow \mathit{Kh}^{i,j}_u(L) \longrightarrow \mathit{Kh}_o^{i,j}(L) \xrightarrow{\phi_{oe}} \mathit{Kh}_e^{i+1,j}(L) \to\cdots \end{equation} \begin{equation}\label{eq:les_eo} \cdots\to \mathit{Kh}_o^{i,j}(L) \longrightarrow \mathit{Kh}^{i,j}_u(L) \longrightarrow \mathit{Kh}_e^{i,j}(L) \xrightarrow{\phi_{eo}} \mathit{Kh}_o^{i+1,j}(L) \to\cdots \end{equation} \subsection{Reduced theories} Let $p$ be a basepoint on the planar diagram $L$. Then the reduced complex $\widetilde{\mathit{Kh}Cx}_\bullet(L,p)$ for $\bullet\in \{u,e,o\}$ is the subcomplex of $\mathit{Kh}Cx_\bullet(L)$ consisting of elements of the form $a\otimes y$, where $a$ is the circle containing $p$ in a resolution diagram and $y$ is any element. The complex $\widetilde{\mathit{Kh}Cx}_\bullet(L,p)$ is homologically graded as a subcomplex of $\mathit{Kh}Cx_\bullet(L)$, but there is a shift in its quantum grading, which is defined as \emph{one more} than the formula for $j(x)$ above. The reduced functors $\widetilde{\mathfrak{F}}_\bullet$ are defined in the same way. The chain homotopy type depends on the isotopy type of $L$ and on which component of the link $p$ lies. In the odd case, the basepoint does not matter, and the chain complex is a direct sum \cite[Prop 1.7]{ors}: \begin{equation} \mathit{Kh}Cx_o^{*,j}(L) \; = \; \widetilde{\mathit{Kh}Cx}_o^{*,j-1}(L) \oplus \widetilde{\mathit{Kh}Cx}_o^{*,j+1}(L) \label{eq:splitting_hom} \end{equation} In contrast to the odd case, the unified and even theories do not split into a direct sum of their reduced theories.
Instead, for $\bullet\in\{u,e\}$, there is a short exact sequence of chain complexes \[ 0\longrightarrow \widetilde{\mathit{Kh}Cx}_\bullet^{*,j+1}(L,p) \longrightarrow \mathit{Kh}Cx^{*,j}_\bullet(L) \longrightarrow \widetilde{\mathit{Kh}Cx}_\bullet^{*,j-1}(L,p) \longrightarrow 0. \] The reduced Khovanov homology is then defined as follows. \begin{defn} For $\bullet\in\{u,e,o\}$ define $\widetilde{\mathit{Kh}}^{i,j}_\bullet(L,p) = H_i(\widetilde{\mathit{Kh}Cx}^{\ast,j}_\bullet(L,p),\partial_\bullet)$. \end{defn} \subsection{Khovanov generators}\label{subsec:kg} In the sequel, we will need to fix bases of these chain complexes to facilitate the construction of the various Khovanov spectra. For even Khovanov homology, there is a natural basis of generators: the elements $a_1\otimes \cdots \otimes a_k \in \Lambda_e(Z(L_v))$ where the $a_i\in Z(L_v)$ are distinct. Since $\Lambda_e(Z(L_v))$ is the quotient of a symmetric algebra on these generators, their order does not matter. For the unified and odd cases, however, order matters: recall that $a_1\otimes a_2 = \xi a_2 \otimes a_1$ in the unified case, and $a_1\otimes a_2 = - a_2 \otimes a_1$ in the odd case. To fix generators, we will thus fix at every vertex $v\in\underline{2}^n$ a total ordering $>$ of the set $Z(L_v)$. Once this is done, we write \[ \mathit{Kh}Gen(v) \; = \; \mathit{Kh}Gen_\bullet(v) \; := \; \{ a_{1}\otimes \cdots \otimes a_{k} : \; a_i \in Z(L_v), \; a_1 > \cdots > a_k \} \qquad \bullet \in \{u,e,o\} \] for the set of {\emph{Khovanov generators at $v$}}. As indicated, we will often omit the subscript $\bullet$ from the notation, since, for fixed $v$, the three sets are naturally identified (once the circles in $Z(L_v)$ are totally ordered). Note that in the unified case, the set of Khovanov generators over all $v\in\underline{2}^n$ gives a $\mathbb{Z}_u$-basis for the chain complex.
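The $\xi$-bookkeeping involved in reordering generators, and the edge maps $\mathfrak{F}'_u$ of \S\ref{sec:threekhfunc}, are straightforward to make concrete. The following Python sketch is purely illustrative (the function names and the encoding of elements are ours, not taken from the references): it stores an element of $\Lambda_u(S)$ as a dictionary keyed by pairs (power of $\xi$, sorted monomial).

```python
from itertools import combinations

# An element of Lambda_u(S) is a dict {(p, mono): coeff}, where mono is a
# sorted tuple of circle labels and p in {0, 1} is the power of xi (xi^2 = 1).

def normalize(mono):
    """Sort a monomial, recording xi^(parity of the sorting permutation).
    Returns (p, sorted_mono), or None if a label repeats (x tensor x = 0)."""
    if len(set(mono)) != len(mono):
        return None
    inversions = sum(1 for i, j in combinations(range(len(mono)), 2)
                     if mono[i] > mono[j])
    return (inversions % 2, tuple(sorted(mono)))

def _add(elt, p, mono, coeff):
    key = (p, mono)
    elt[key] = elt.get(key, 0) + coeff
    if elt[key] == 0:
        del elt[key]

def mult(x, y):
    """Multiply in Lambda_u: concatenate monomials, then renormalize."""
    out = {}
    for (p1, m1), c1 in x.items():
        for (p2, m2), c2 in y.items():
            n = normalize(m1 + m2)
            if n is not None:
                _add(out, (p1 + p2 + n[0]) % 2, n[1], c1 * c2)
    return out

def rename(x, subst):
    """The Z_u-algebra map induced by relabelling circles (e.g. a merge)."""
    out = {}
    for (p, m), c in x.items():
        n = normalize(tuple(subst.get(g, g) for g in m))
        if n is not None:
            _add(out, (p + n[0]) % 2, n[1], c)
    return out

def split(x, a, a1, a2):
    """The split edge map: x maps to (a1 + xi*a2) times iota(x), iota: a -> a1."""
    return mult({(0, (a1,)): 1, (1, (a2,)): 1}, rename(x, {a: a1}))
```

For instance, `mult` confirms the relation $b\otimes a=\xi\,a\otimes b$, `rename` kills $a_1\otimes a_2$ under a merge (since $a\otimes a=0$), and `split` applied to $1$ returns $a_1+\xi a_2$.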
On the other hand, a $\mathbb{Z}$-basis for the unified chain complex at $v\in\underline{2}^n$ is given by \[ \mathit{Kh}Gen(v) \;\amalg\; \xi \mathit{Kh}Gen(v) \] where $\xi \mathit{Kh}Gen(v)$ is the set of $\xi x$ with $x\in \mathit{Kh}Gen(v)$. Note that $\mathit{Kh}Gen(v)$ has $2^{|Z(L_v)|}$ elements. Given a basepoint $p$ on our diagram $L$, we can also form the set of reduced generators $\widetilde{\mathit{Kh}Gen}(v;p)$ at the vertex $v$, the subset of $\mathit{Kh}Gen(v)$ whose elements each include the circle containing the basepoint. This set has half the number of elements of $\mathit{Kh}Gen(v)$, and, running over all $v\in\underline{2}^n$, forms a basis for the reduced complex $\widetilde{\mathit{Kh}Cx}_\bullet(L,p)$ where $\bullet \in \{e,o\}$, and a $\mathbb{Z}_u$-basis for $\widetilde{\mathit{Kh}Cx}_u(L,p)$. \section{Burnside categories and functors} \label{sec:burn} In this section, we review the definition of the Burnside category $\mathscr{B}$ from \cite{lls1,lls2}, and define some slight modifications: the signed Burnside category $\mathscr{B}_\sigma$ and the $\mathbb{Z}_2$-equivariant Burnside category $\mathscr{B}_\xi$. We then discuss functors from the cube category $\underline{2}^n$ to these Burnside categories. \subsection{The Burnside category}\label{subsec:burnside} Given finite sets $X$ and $Y$, a correspondence from $X$ to $Y$ is a triple $(A,s,t)$ consisting of a finite set $A$ and set maps $s \colon A\to X$ and $t \colon A \to Y$; $s$ and $t$ are called the \emph{source} and \emph{target} maps, respectively.
The correspondence $(A,s,t)$ is depicted: \[ \begin{tikzpicture}[scale=1] \node (X) at (-2,0) {$X$}; \node (Y) at (2,0) {$Y$}; \node (A) at (0,1) {$A$}; \draw[->] (A) -- (X) node[midway, above] {$s_A\;\;\;$}; \draw[->] (A) -- (Y) node[midway,above] {$\;\;t_A$}; \end{tikzpicture} \] For correspondences $(A,s_A,t_A)$ and $(B,s_B,t_B)$ from $X$ to $Y$ and $Y$ to $Z$, respectively, define the composition $(B,s_B,t_B)\circ (A,s_A,t_A)$ to be the correspondence $(C,s,t)$ from $X$ to $Z$ given by the fiber product $C=B \times_Y A =\{ (b,a) \in B \times A \mid t(a) = s(b)\}$ with source and target maps $s(b,a)=s_A(a)$ and $t(b,a)=t_B(b)$. There is also the identity correspondence from a set $X$ to itself, namely $(X,\mathrm{Id}_X,\mathrm{Id}_X)$. Given correspondences $(A,s_A,t_A)$, $(B,s_B,t_B)$ from $X$ to $Y$, a morphism of correspondences from $(A,s_A,t_A)$ to $(B,s_B,t_B)$ is a bijection $f \colon A \to B$ commuting with the source and target maps. There is also the identity morphism from a correspondence to itself. Composition (of set maps) gives the set of correspondences from $X$ to $Y$ the structure of a category. Define the \emph{Burnside category} $\mathscr{B}$ to be the weak $2$-category whose objects are finite sets, morphisms are finite correspondences, and $2$-morphisms are maps of correspondences. Recall that in a weak $2$-category, composition of arrows need only be associative up to a $2$-isomorphism, and similarly the identity axiom holds only after composing with a $2$-morphism. To be explicit, for finite sets $X,Y$ and $(A,s,t)$ a correspondence from $X$ to $Y$, neither $(Y,\mathrm{Id}_Y,\mathrm{Id}_Y) \circ (A,s,t)$, nor $(A,s,t)\circ (X,\mathrm{Id}_X,\mathrm{Id}_X)$, equals $(A,s,t)$. Rather, there are natural $2$-morphisms, the left and right unitors, \[ \lambda\colon Y \times_Y A \to A, \qquad \rho\colon A \times_X X \to A \] given by $\lambda(y,a)=a$ and $\rho(a,x)=a$.
Further, the composition $C\circ (B\circ A)$, for $A$ from $W$ to $X$, $B$ from $X$ to $Y$, and $C$ from $Y$ to $Z$, is not identical to $(C \circ B) \circ A$; rather, there is an associator \[ \alpha \colon (C \times_Y B)\times_X A \to C \times_Y (B \times_X A) \] given by $\alpha((c,b),a)=(c,(b,a))$. The categories to follow are slight variations of this one. The total diagram of Burnside categories that we will consider in this article is depicted in Figure \ref{fig:burnsidecategories}. \subsection{The signed Burnside category} \label{subsec:oddburn} Given sets $X$ and $Y$, a \emph{signed correspondence} is a correspondence $(A,s_A,t_A)$ equipped with a map $\sigma_A \colon A \to \{+1,-1\}$, regarded as a tuple $(A,s_A,t_A,\sigma_A)$; we call $\sigma_A$ the ``sign'' or ``sign map'' of the signed correspondence: \[ \begin{tikzpicture}[scale=1] \node (X) at (-2,0) {$X$}; \node (Y) at (2,0) {$Y$}; \node (A) at (0,1) {$A$}; \node (S) at (0,2.5) {$\{+1,-1\}$}; \draw[->] (A) -- (X) node[midway, above] {$s_A\;\;\;$}; \draw[->] (A) -- (Y) node[midway,above] {$\;\;t_A$}; \draw[->] (A) -- (S) node[midway,left] {$\sigma_A$}; \end{tikzpicture} \] In the sequel we often write ``correspondence'' for ``signed correspondence'', where it will not cause any confusion. We define the composition $(B,s_B,t_B,\sigma_B)\circ (A,s_A,t_A,\sigma_A)$ of signed correspondences $(A,s_A,t_A,\sigma_A)$ from $X$ to $Y$ and $(B,s_B,t_B,\sigma_B)$ from $Y$ to $Z$ to be $(C,s,t,\sigma)$, where $(C,s,t)$ is the composition $(B,s_B,t_B) \circ (A,s_A,t_A)$ and $\sigma(b,a)=\sigma_B(b)\sigma_A(a)$. Also, we define the identity (signed) correspondence to be $(X, \mathrm{Id}_X, \mathrm{Id}_X, 1)$ (i.e., the identity correspondence takes value $1$ on all elements).
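Concretely, composition of signed correspondences is just a fiber product with multiplied signs. The following Python sketch is an illustration of ours (the names and the encoding as lists of triples are not from the references): it records a signed correspondence as a list of (source, target, sign) triples, and also implements a signed-count linearization in the spirit of the functor $\mathcal{A}$ of \S\ref{subsec:2funcs}.

```python
def compose(B, A):
    """Compose signed correspondences A: X -> Y and B: Y -> Z.
    The composite is the fiber product over Y, with signs multiplied."""
    return [(a_src, b_tgt, a_sgn * b_sgn)
            for (b_src, b_tgt, b_sgn) in B
            for (a_src, a_tgt, a_sgn) in A
            if a_tgt == b_src]

def identity(X):
    """The identity signed correspondence on X: sign +1 everywhere."""
    return [(x, x, +1) for x in X]

def linearize(A, X, Y):
    """Entries of a Y-by-X matrix of signed point counts over each fiber."""
    return {(y, x): sum(sgn for (src, tgt, sgn) in A if src == x and tgt == y)
            for x in X for y in Y}
```

Up to the identifying bijection (the unitors $\lambda,\rho$), composing with `identity` returns the same signed correspondence, and `linearize` turns composition of correspondences into matrix multiplication of signed counts.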
We define maps of signed correspondences $f \colon (A,s_A,t_A,\sigma_A) \to (B,s_B,t_B,\sigma_B)$ to be morphisms of correspondences $f \colon (A,s_A,t_A) \to (B,s_B,t_B)$ such that $\sigma_B\circ f=\sigma_A$. We may then define the \emph{signed Burnside category} $\mathscr{B}_\sigma$ to be the weak $2$-category with objects finite sets, morphisms given by signed correspondences, and $2$-morphisms given by maps of signed correspondences. The structure maps $\lambda, \rho,\alpha$ of \S\ref{subsec:burnside} are easily seen to respect the sign, confirming that $\mathscr{B}_\sigma$ is indeed a weak $2$-category. There is a forgetful 2-functor $\mathcal{F}\colon \mathscr{B}_\sigma \to \mathscr{B}$ which forgets signs. There is also an inclusion-induced 2-functor $\mathcal{I}\colon\mathscr{B} \to \mathscr{B}_\sigma$. We will usually refer to such 2-functors simply as functors. \subsection{The $\mathbb{Z}_2$-equivariant Burnside category} \label{subsec:equivburn} We let $\mathscr{B}_\xi$ denote the $2$-category whose objects are finite, free $\mathbb{Z}_2$-sets, with morphisms $\mathbb{Z}_2$-equivariant correspondences, and $2$-morphisms $\mathbb{Z}_2$-equivariant bijections of correspondences. (Recall that we write $\mathbb{Z}_2=\{1,\xi\}$.) The 2-category $\mathscr{B}_\xi$ is a subcategory of the Burnside 2-category for the group $\mathbb{Z}_2$. There is a forgetful functor $\mathcal{F}\colon \mathscr{B}_\xi \to \mathscr{B}$. There is also a ``quotient'' functor $\mathcal{Q}\colon \mathscr{B}_\xi \to \mathscr{B}$, which simply takes the quotient by the action of $\mathbb{Z}_2$ on objects, $1$- and $2$-morphisms. There is also a strictly unitary ``doubling'' 2-functor $\mathcal{D}\colon \mathscr{B}_\sigma \to \mathscr{B}_\xi$ consisting of the following data.
\begin{enumerate}[leftmargin=*]
\item For each object $X$ of $\mathscr{B}_\sigma$, we need to specify an object $\mathcal{D}(X)$ of $\mathscr{B}_\xi$. Define $\mathcal{D}(X)=\{1,\xi\}\times X$, with the $\mathbb{Z}_2$-action on $\{1,\xi\}\times X$ being $\xi(1,x)=(\xi,x)$ for all $x\in X$.
\item For any two objects $X,Y$ of $\mathscr{B}_\sigma$, we need to specify a functor, also denoted $\mathcal{D}$, from $\mathrm{Hom}_{\mathscr{B}_\sigma}(X,Y)$ to $\mathrm{Hom}_{\mathscr{B}_\xi}(\mathcal{D}(X),\mathcal{D}(Y))$ that sends $\mathrm{Id}_X$ to $\mathrm{Id}_{\mathcal{D}(X)}$. This functor sends a signed correspondence $A$ from $X$ to $Y$ to the correspondence $\{1,\xi\}\times A$ from $\{1,\xi\}\times X$ to $\{1,\xi\}\times Y$ (with the analogous $\mathbb{Z}_2$-action). The source and target maps on $\{1\}\times A$ are defined as
\begin{align*}
s(1,a)&=(1,s(a))\\
t(1,a)&=(\sigma(a),t(a)).
\end{align*}
(The sign $\sigma(a)$ takes values in $\mathbb{Z}_2=\{+1,-1\}$, which has been identified with $\mathbb{Z}_2=\{1,\xi\}$.) The source and target maps are then extended equivariantly to $\{\xi\}\times A$. Finally, $\mathcal{D}$ sends a $2$-morphism $f\colon A\to B$ to the $2$-morphism $(\mathrm{Id},f)\colon\{1,\xi\}\times A\to\{1,\xi\}\times B$.
\item Finally, for any correspondences $A$ from $X$ to $Y$ and $B$ from $Y$ to $Z$ in $\mathscr{B}_\sigma$, we need to specify a 2-morphism in $\mathscr{B}_\xi$ from $\mathcal{D}(B)\circ\mathcal{D}(A)=(\{1,\xi\}\times B)\times_{(\{1,\xi\}\times Y)}(\{1,\xi\}\times A)$ to $\mathcal{D}(B\circ A)=\{1,\xi\}\times (B\times_Y A)$ that is natural in $A$ and $B$. Define it to be
\[
((\sigma(a),b),(1,a))\mapsto (1,(b,a)),
\]
extended $\mathbb{Z}_2$-equivariantly.
It is not hard to check that these maps satisfy the required coherence relations with the structure maps $\lambda,\rho,\alpha$ of $\mathscr{B}_\sigma$ and $\mathscr{B}_\xi$, as described in \cite[Definition 4.1 and Remark 4.2]{Benabou-other-bicategories}. Therefore, the above is indeed a $2$-functor.
\end{enumerate}

\begin{figure}
\caption{\textbf{The Burnside categories and some functors between them.}}
\label{fig:burnsidecategories}
\end{figure}

\subsection{Functors from Burnside categories} \label{subsec:2funcs}
For $\bullet\in \{\varnothing,\sigma\}$ we define a functor $\mathcal{A}\colon \mathscr{B}_\bullet \to\mathbb{Z}\text{-Mod}$ by sending a set $X\in \mathscr{B}_\bullet$ to the free abelian group generated by $X$, denoted $\mathcal{A}(X)$. For a signed correspondence $\phi = (A,s,t,\sigma)$ from $X$ to $Y$ (for $\bullet=\varnothing$, regard each correspondence as signed with $\sigma\equiv 1$), we define $\mathcal{A}(\phi)\colon\mathcal{A}(X)\to \mathcal{A}(Y)$ by
\begin{equation}
\mathcal{A}(\phi)(x)=\sum_{a\in A\,\mid\,s(a)=x}\sigma(a)\,t(a)\label{eq:amorph}
\end{equation}
for elements $x \in X$, extended linearly over $\mathbb{Z}$. Similarly, we have a functor $\mathcal{A}\colon\mathscr{B}_\xi\to \mathbb{Z}_u\text{-Mod}$ that sends a free $\mathbb{Z}_2$-set $X$ to $\mathcal{A}(X)$, which is a free $\mathbb{Z}_u$-module; for a $\mathbb{Z}_2$-equivariant correspondence $\phi$, we use the same formula as Equation~\eqref{eq:amorph}, but omit $\sigma$, and extend linearly over $\mathbb{Z}_u$.

\subsection{Functors to Burnside categories}
We now consider functors from the cube category $\underline{2}^n$ to the Burnside categories thus far introduced. We let $\mathscr{B}_\bullet$ be one of the Burnside categories introduced above, appearing in Figure \ref{fig:burnsidecategories}, so that $\bullet\in\{\varnothing,\sigma,\xi\}$.
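Before proceeding, we illustrate the functor $\mathcal{A}$ of \S\ref{subsec:2funcs} on a small (hypothetical) signed correspondence: if $\phi=(A,s,t,\sigma)$ goes from $X=\{x\}$ to $Y=\{y\}$ with $A=\{a_+,a_-\}$, $s\equiv x$, $t\equiv y$, and $\sigma(a_{\pm})=\pm1$, then Equation~\eqref{eq:amorph} gives
\[
\mathcal{A}(\phi)(x)=\sigma(a_+)\,t(a_+)+\sigma(a_-)\,t(a_-)=y-y=0,
\]
whereas the underlying unsigned correspondence induces $x\mapsto 2y$.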
The functors $F\colon\underline{2}^n\to \mathscr{B}_\bullet$ we consider will be strictly unitary 2-functors; that is, they will consist of the following data:
\begin{enumerate}[leftmargin=*]
\item For each vertex $v$ of $\underline{2}^n$, an object $F(v)$ of $\mathscr{B}_\bullet$.
\item For any $u\geq v$, a 1-morphism $F(\phi_{u,v})$ in $\mathscr{B}_\bullet$ from $F(u)$ to $F(v)$, such that $F(\phi_{u,u})$ is the identity morphism $\mathrm{Id}_{F(u)}$.
\item Finally, for any $u\geq v\geq w$, a 2-morphism $F_{u,v,w}$ in $\mathscr{B}_\bullet$ from $F(\phi_{v,w})\circ F(\phi_{u,v})$ to $F(\phi_{u,w})$ that agrees with $\lambda$ (respectively, $\rho$) when $v=w$ (respectively, $u=v$), and that satisfies, for any $u\geq v\geq w\geq z$,
\[
F_{u,w,z}\circ_2 (\mathrm{Id}\circ F_{u,v,w})=(F_{v,w,z}\circ\mathrm{Id})\circ_2 F_{u,v,z}
\]
(with $\circ$ denoting composition of 1-morphisms and $\circ_2$ denoting composition of 2-morphisms; and we have suppressed the associator $\alpha$).
\end{enumerate}
We will usually use the characterization of these functors in the lemma to follow.
\begin{lem}\label{lem:212}
Let $\mathscr{B}_\bullet$ be any of the Burnside categories with $\bullet\in \{\varnothing,\sigma,\xi\}$.
Consider the data of objects $F(v)$ for $v \in \underline{2}^n$, 1-morphisms $F(\phi_{v,w})$ for edges $v\geqslant_1 w$, and 2-morphisms $F_{u,v,v',w} \colon F(\phi_{v,w}) \circ F(\phi_{u,v}) \to F(\phi_{v',w}) \circ F (\phi_{u,v'})$ for each 2d face described by $u\geqslant_1 v,v'\geqslant_1 w$, such that the following compatibility conditions are satisfied:
\begin{enumerate}[leftmargin=*]
\item For any 2d face $u,v,v',w$ as above, $F_{u,v,v',w}=F^{-1}_{u,v',v,w}$;
\item For any 3d face of $\underline{2}^n$, depicted on the left, the hexagon on the right commutes: \label{cond:c2}
\end{enumerate}
\[
\begin{tikzpicture}[node distance=2.5cm, back line/.style={densely dotted}, cross line/.style={preaction={draw=white, -, line width=6pt}},baseline={(current bounding box.center)}]
\node (u) {$u$};
\node (v') [right of=u] {$v'$};
\node [below of=u] (v'') {$v''$};
\node [right of=v''] (w) {$w$};
\node (v) [right of=u, above of=u, node distance=1cm] {$v$};
\node (w'') [right of=v] {$w''$};
\node (w') [below of=v] {$w'$};
\node (z) [right of=w'] {$z$};
\draw[back line, ->] (v) to (w');
\draw[back line, ->] (v'') to (w');
\draw[back line, ->] (w') to (z);
\draw[->] (w'') to (z);
\draw[cross line, ->] (u) to (v);
\draw[cross line, ->] (u) to (v');
\draw[cross line, ->] (u) to (v'');
\draw[cross line, ->] (v') to (w'');
\draw[cross line, ->] (v'') to (w);
\draw[cross line, ->] (v) to (w'');
\draw[cross line, ->] (w) to (z);
\draw[cross line, ->] (v') to (w);
\end{tikzpicture}\qquad\qquad
\begin{tikzpicture}[scale=0.6,baseline={(current bounding box.center)}]
\node (h0A) at (60:3.7cm) {$\circ$};
\node (h0C) at (0:3.7cm) {$\circ$};
\node (h1B) at (-60:3.7cm) {$\circ$};
\node (h1A) at (-120:3.7cm) {$\circ$};
\node (h1C) at (180:3.7cm) {$\circ$};
\node (h0B) at (120:3.7cm) {$\circ$};
\draw[->] (h0A) edge node[auto] {$\mathrm{Id}\times F_{u,v,v'',w'}$} (h0C)
(h1B) edge node[right] {$F_{v'',w',w,z}\times\mathrm{Id}$} (h0C)
(h1A) edge node[auto] {$\mathrm{Id}\times F_{u,v'',v',w}$} (h1B)
(h1C) edge node[left] {$F_{v',w,w'',z}\times\mathrm{Id}$} (h1A)
(h1C) edge node[auto] {$\mathrm{Id}\times F_{u,v',v,w''}$} (h0B)
(h0B) edge node[auto] {$F_{v,w'',w',z}\times\mathrm{Id}$} (h0A);
\end{tikzpicture}
\]
This collection of data can be extended to a strictly unitary functor $F\colon\underline{2}^n\to\mathscr{B}_\bullet$, uniquely up to natural isomorphism, so that $F_{u,v,v',w}=F_{u,v',w}^{-1}\circ_2 F_{u,v,w}$.
\end{lem}
\begin{proof}
The proof is the same as that of \cite[Lemma 2.12]{lls1} and \cite[Lemma 4.2]{lls2}.
\end{proof}

\subsection{Totalization}
Given a functor $F\colon\underline{2}^n \to \mathscr{B}_\bullet$ we construct a chain complex denoted $\mathrm{Tot}(F)$, called the \emph{totalization} of the functor $F$. The underlying chain group of $\mathrm{Tot}(F)$ is given by
\[
\mathrm{Tot}(F) \;= \; \bigoplus_{v \in \underline{2}^n} \mathcal{A}(F(v)).
\]
We set the homological grading of the summand $\mathcal{A}(F(v))$ to be $|v|$. The differential is given by defining the components $\partial_{u,v}$ from $\mathcal{A}(F(u))$ to $\mathcal{A}(F(v))$ by
\[
\partial_{u,v}=\begin{cases} (-1)^{s_{u,v}}\mathcal{A}(F(\phi_{u,v})) & \mbox{if } u\geqslant_1 v \\ 0 &\mbox{otherwise.} \end{cases}
\]
Note that for a functor $F\colon\underline{2}^n\to \mathscr{B}_\xi$, the totalization $\mathrm{Tot}(F)$ is a chain complex over $\mathbb{Z}_u$.

\subsection{Natural transformations}\label{sec:nat-transform-burn}
The following will serve as the basic relation between functors from the cube to a Burnside category. As before, $\bullet\in\{\varnothing,\sigma,\xi\}$.
\begin{defn}\label{def:nattrans}
A \emph{natural transformation} $\eta \colon F_1 \to F_0$ between $2$-functors $F_1,F_0\colon \underline{2}^n \to \mathscr{B}_\bullet$ is a strictly unitary $2$-functor $\eta\colon \underline{2}^{n+1} \to \mathscr{B}_\bullet$ so that $\eta|_{\{1\}\times \underline{2}^n}=F_1$ and $\eta|_{\{0\}\times \underline{2}^n}=F_0$.
\end{defn}
A natural transformation (functorially) induces a chain map between the chain complexes of Burnside functors, which we write as $\mathrm{Tot}(\eta)\colon\mathrm{Tot}(F_1)\to \mathrm{Tot}(F_0)$. Many of the natural transformations we will encounter will be sub-functor inclusions or quotient functor surjections. Given a functor $F\colon\underline{2}^n\to\mathscr{B}_\bullet$, a \emph{sub-functor} (respectively, \emph{quotient functor}) $G\colon\underline{2}^n\to\mathscr{B}_\bullet$ is a functor that satisfies:
\begin{enumerate}[leftmargin=*]
\item $G(v)\subset F(v)$ for all $v\in\underline{2}^n$; if $\bullet=\xi$, the $\mathbb{Z}_2$-action on $G(v)$ is induced from the $\mathbb{Z}_2$-action on $F(v)$.
\item $G(\phi_{u,v})\subset F(\phi_{u,v})$ for all $u\geq v$, with the source and target maps (and the sign map if $\bullet=\sigma$ or the $\mathbb{Z}_2$-action if $\bullet=\xi$) being restrictions of the corresponding ones on $F(\phi_{u,v})$.
\item $s^{-1}(x)\subset G(\phi_{u,v})$ (respectively, $t^{-1}(y)\subset G(\phi_{u,v})$) for all $u\geq v$ and for all $x\in G(u)$ (respectively, $y\in G(v)$).
\end{enumerate}
If $G$ is a sub (respectively, quotient) functor of $F$, then there is a natural transformation $G\to F$ (respectively, $F\to G$), and the induced chain map $\mathrm{Tot}(G)\to \mathrm{Tot}(F)$ (respectively, $\mathrm{Tot}(F)\to\mathrm{Tot}(G)$) is an inclusion (respectively, a quotient map).
\begin{defn}\label{def:burn-cofib-sequence}
If $G$ is a sub-functor of $F\colon\underline{2}^n\to\mathscr{B}_\bullet$, then the functor $H$ defined as $H(v)=F(v)\setminus G(v)$ and $H(\phi_{u,v})=\cup_{y\in H(v)}t^{-1}(y)\subset F(\phi_{u,v})\setminus G(\phi_{u,v})$ is a quotient functor of $F$ (and vice-versa). Such a sequence
\[
G\to F\to H
\]
is called a \emph{cofibration sequence} of Burnside functors; it induces the short exact sequence
\[
0\to\mathrm{Tot}(G)\to\mathrm{Tot}(F)\to\mathrm{Tot}(H)\to0
\]
of chain complexes.
\end{defn}
The following is another particular example of a natural transformation which will appear later. Suppose we are given a functor $F\colon\underline{2}^n\to\mathscr{B}_\sigma$. For each object $v\in\underline{2}^n$, choose a function $\zeta_v\colon F(v)\to \{+1,-1\}$. Define a new functor $F'\colon\underline{2}^n\to\mathscr{B}_\sigma$ that is equal to $F$ except that in the correspondence $F(\phi_{v,w})=(A,s,t,\sigma)$ for $v\geqslant_1 w$ we change the sign function $\sigma$ to $\sigma'(x)=\zeta_v(s(x))\zeta_w(t(x))\sigma(x)$. There is an induced natural transformation $\eta\colon F\to F'$:
\begin{defn}\label{def:signchange}
A \emph{sign reassignment} $\eta$ of $F\colon \underline{2}^n \to \mathscr{B}_\sigma$ is a natural transformation $\eta\colon F\to F'$ as described above, induced by a function $\zeta_v\colon F(v) \to \{ \pm 1\}$ for each $v\in \underline{2}^n$.
\end{defn}
In the context of Morse theory, a sign reassignment as above corresponds to changing the orientation on the stable tangent bundle to a critical point. In the sequel, the appearance of sign reassignments will be specific to odd Khovanov homology.
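As a check on the formula for $\sigma'$, note that $\mathrm{Tot}(F)$ and $\mathrm{Tot}(F')$ are isomorphic via the diagonal map sending a generator $x\in F(v)$ to $\zeta_v(x)\,x$: for an edge $v\geqslant_1 w$ and $x\in F(v)$, since $\zeta_v(x)^2=1$,
\[
\zeta_v(x)\sum_{s(a)=x}\sigma'(a)\,t(a)
=\sum_{s(a)=x}\zeta_w(t(a))\,\sigma(a)\,t(a),
\]
which is the image of $\sum_{s(a)=x}\sigma(a)\,t(a)$ under the diagonal map.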
Such reassignments are not necessary in the (even) Khovanov setting, since in that case there is a preferred choice of signs: the (even) Khovanov complex comes equipped with a choice of generators for which all signs in the differentials, apart from the standard sign assignment, are positive, cf.~\S\ref{subsec:kg}. In the odd Khovanov setting, in which there are generally no such positive bases, sign reassignments inevitably appear.

\subsection{Stable equivalence of functors}
In the sequel, we will be interested not just in functors $F\colon \underline{2}^n\to \mathscr{B}_\bullet$, but in \emph{stable} functors, which are pairs $(F,r)$ with $r\in\mathbb{Z}$. We will write $\Sigma^r F$ for the pair $(F,r)$; its totalization is defined to be $\Sigma^r\mathrm{Tot}(F)=\mathrm{Tot}(F)[r]$, that is, the chain complex $\mathrm{Tot}(F)$ shifted up by $r$. In this section we describe when two such stable functors are equivalent, only slightly modifying \cite[Definition 5.9]{lls2} by allowing $\bullet\in\{\varnothing,\sigma,\xi\}$. A \emph{face inclusion} $\iota$ is a functor $\underline{2}^n \to \underline{2}^N$ that is injective on objects and preserves the relative gradings. We remark that self-equivalences $\iota\colon \underline{2}^n \to \underline{2}^n$ are face inclusions. Now consider a face inclusion $\iota\colon \underline{2}^n \to \underline{2}^N$ and a functor $F \colon \underline{2}^n \to \mathscr{B}_\bullet$. The induced functor $F_\iota\colon \underline{2}^N \to \mathscr{B}_\bullet$ is uniquely determined by requiring that $F=F_\iota\circ \iota$ and that $F_\iota(v)=\varnothing$ for $v\in \underline{2}^N\setminus \iota(\underline{2}^n)$. For a face inclusion $\iota$, we define $|\iota|=|\iota(v)|-|v|$ for any $v\in\underline{2}^n$, which is independent of $v$ because $\iota$ is assumed to preserve relative gradings.
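For example, the map $\iota\colon\underline{2}^n\to\underline{2}^{n+1}$ given by $\iota(v)=(1,v)$ is a face inclusion with
\[
|\iota|=|(1,v)|-|v|=1,
\]
and the induced functor $F_\iota$ agrees with $F$ on the sub-cube $\{1\}\times\underline{2}^n$ while assigning the empty set to every vertex of $\{0\}\times\underline{2}^n$.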
As observed in \cite[\S5]{lls2}, for any face inclusion $\iota$ and functor $F$ as above,
\[
\mathrm{Tot}(F_\iota) \; \cong \; \Sigma^{|\iota|}\mathrm{Tot}(F)
\]
where the isomorphism is natural up to certain sign choices. With this background, we state the relevant notion of equivalence for stable functors.
\begin{defn}\label{def:stableq}
Two stable functors $(E_1 \colon \underline{2}^{m_1} \to \mathscr{B}_\bullet, q_1)$ and $(E_2\colon \underline{2}^{m_2} \to \mathscr{B}_\bullet, q_2)$ are \emph{stably equivalent} if there is a sequence of stable functors $\{(F_i \colon \underline{2}^{n_i} \to \mathscr{B}_\bullet, r_i)\}$ ($0\leq i \leq \ell$) with $\Sigma^{q_1}E_1=\Sigma^{r_0}F_0$ and $\Sigma^{q_2}E_2=\Sigma^{r_\ell}F_\ell$ such that for each pair $\{ \Sigma^{r_i}F_i, \Sigma^{r_{i+1}}F_{i+1}\}$, one of the following holds:
\begin{enumerate}[leftmargin=*]
\item $(n_i,r_i)=(n_{i+1},r_{i+1})$ and there is a natural transformation $\eta\colon F_i\to F_{i+1}$ or $\eta\colon F_{i+1} \to F_i$ such that the induced map $\mathrm{Tot}(\eta)$ is a chain homotopy equivalence.
\item There is a face inclusion $\iota\colon \underline{2}^{n_i} \hookrightarrow \underline{2}^{n_{i+1}}$ such that $r_{i+1}=r_i-|\iota|$ and $F_{i+1}=(F_i)_\iota$; or a face inclusion $\iota\colon \underline{2}^{n_{i+1}} \hookrightarrow \underline{2}^{n_{i}}$ such that $r_{i}=r_{i+1}-|\iota|$ and $F_{i}=(F_{i+1})_\iota$.
\end{enumerate}
We call such a sequence, along with the arrows between the $\Sigma^{r_i}F_i$, a \emph{stable equivalence} between the stable functors $\Sigma^{q_1}E_1$ and $\Sigma^{q_2}E_2$.
If $\bullet=\sigma$, and if the sequence is such that the induced maps $\mathrm{Tot}(\mathcal{D} \eta)$ are chain homotopy equivalences over $\mathbb{Z}_u$ (where $\mathcal{D}\colon\mathscr{B}_\sigma\to\mathscr{B}_\xi$ is the doubling functor from Figure \ref{fig:burnsidecategories}), we call it an \emph{equivariant (stable) equivalence}, and say that the $\Sigma^{q_i}E_i$ are \emph{equivariantly equivalent}.
\end{defn}
We note that a stable equivalence from $\Sigma^{q_1}E_1$ to $\Sigma^{q_2}E_2$ induces a chain homotopy equivalence $\mathrm{Tot}(\Sigma^{q_1}E_1)\to \mathrm{Tot}(\Sigma^{q_2}E_2)$, well-defined up to choices of inverses of the chain homotopy equivalences involved in its construction, and an overall sign. Note that for $\mathscr{B}_\xi$, the category of chain complexes under consideration is over $\mathbb{Z}_u$. We will also need the notion of a map of Burnside functors:
\begin{defn}\label{def:map-of-burnside-functors}
A \emph{map} $\Sigma^{q_1}E_1 \to \Sigma^{q_2}E_2$ of Burnside functors $\Sigma^{q_i}E_i \colon \underline{2}^{m_i} \to \mathscr{B}_\bullet$ consists of a sequence of stable functors $\{(F_i \colon \underline{2}^{n_i} \to \mathscr{B}_\bullet, r_i)\}$ ($0\leq i \leq \ell$) with $\Sigma^{q_1}E_1=\Sigma^{r_0}F_0$ and $\Sigma^{q_2}E_2=\Sigma^{r_\ell}F_\ell$, along with the following:
\begin{enumerate}[leftmargin=*]
\item For $i$ even, a stable equivalence from $\Sigma^{r_i}F_i$ to $\Sigma^{r_{i+1}}F_{i+1}$.
\item For $i$ odd, a natural transformation $F_i\to F_{i+1}$, and we require $r_i=r_{i+1}$.
\end{enumerate}
Similarly, for $\bullet=\sigma$, an \emph{equivariant map} $\Sigma^{q_1}E_1\to \Sigma^{q_2}E_2$ will consist of the same information, but where the stable equivalences are required to be equivariant equivalences.
\end{defn}

\subsection{Coproducts}\label{subsec:prod}
Finally, let us describe the elementary coproduct operation on functors $\underline{2}^n\to \mathscr{B}_\bullet$, generalizing that from \cite{lls2}. The authors of \cite{lls2} also define a product operation, but we have no need for it here. Given two 2-functors $F_1, F_2\colon \underline{2}^n \to \mathscr{B}_\bullet$, the \emph{coproduct} 2-functor $F_1\amalg F_2\colon\underline{2}^n\to\mathscr{B}_\bullet$ is defined as follows. On objects and 1-morphisms, $F_1\amalg F_2$ is just the disjoint union: $(F_1 \amalg F_2) (v) = F_1(v) \amalg F_2(v)$ for $v\in\underline{2}^n$, with the $\mathbb{Z}_2$-action defined component-wise if $\bullet=\xi$, and $(F_1 \amalg F_2)(\phi_{u,v})=F_1(\phi_{u,v})\amalg F_2(\phi_{u,v})$ for $u\geq v$, with the source map, target map, sign map if $\bullet=\sigma$, and $\mathbb{Z}_2$-action if $\bullet=\xi$, defined component-wise. For $u\geq v\geq w$, the associated 2-morphism may be viewed as a map
\[
(F_1\amalg F_2)_{u,v,w}\colon\coprod_{i=1,2}F_i(\phi_{v,w})\times_{F_i(v)} F_i(\phi_{u,v}) \longrightarrow \coprod_{i=1,2} F_i(\phi_{u,w})
\]
and it is defined component-wise using the bijections $(F_i)_{u,v,w}$ for $i=1,2$. We have the following immediate property for chain complexes:
\[
\mathrm{Tot}(F_1 \amalg F_2)=\mathrm{Tot}(F_1)\oplus \mathrm{Tot}(F_2).
\]

\section{Realizations of Burnside functors}\label{sec:box}
In this section, given a functor $F\colon \underline{2}^n\to\mathscr{B}_\bullet$ to any of the previously defined Burnside categories, we construct an essentially well-defined finite CW spectrum $\Realize{F}$, which in the $\mathbb{Z}_2$-equivariant case $\bullet=\xi$ is a $\mathbb{Z}_2$-equivariant spectrum.
As a first step, we construct finite CW complexes $\CRealize{F}_{k}$ for sufficiently large $k$, so that increasing the parameter $k$ corresponds to suspending the CW complex $\CRealize{F}_{k}$. The finite CW spectrum $\Realize{F}$ is then defined from this sequence of spaces. The construction of $\CRealize{F}_{k}$ depends on some auxiliary choices, but its stable homotopy type does not. Moreover, the spectra constructed from two stably equivalent Burnside functors will be homotopy equivalent. For signed Burnside functors, i.e., when $\bullet=\sigma$, we can carry out our construction with a $\mathbb{Z}_2$-action. For ordinary Burnside functors, i.e., when $\bullet = \varnothing$, our construction recovers the ``little boxes'' realization of \cite[\S5]{lls1}, cf.~\cite[\S7]{lls2}; but if the functor comes from a signed Burnside functor, we produce an alternate construction with an extra $\mathbb{Z}_2$-action. After reviewing the notion of ``box maps'' used in the little box realization construction of \cite[\S5]{lls2}, we introduce ``signed box maps.'' After providing the necessary background on homotopy colimits, we then use signed box maps to construct the realization $\Realize{\cdot}$ for functors to the signed Burnside category. This is all that is needed to construct the odd Khovanov homotopy type. We then indicate the modifications needed to define the other realization functors and to construct the various extra $\mathbb{Z}_2$-actions.

\subsection{Signed box maps}\label{sec:signed-box-maps}
We start with the construction of (ordinary) box maps, following \cite[\S2.10]{lls1}. Fix an identification $S^k=[0,1]^k/\partial$, which we maintain throughout the sequel, and view $S^k$ as a pointed space. Let $B$ be a box in $\mathbb{R}^k$ with edges parallel to the coordinate axes, that is, $B=[a_1,b_1] \times \dots \times [a_k,b_k]$ for some $a_i,b_i$.
Then there is a standard homeomorphism from $B$ to $[0,1]^k$, via $(x_1,\dots,x_k ) \mapsto (\frac{x_1-a_1}{b_1-a_1},\dots, \frac{x_k-a_k}{b_k-a_k})$. We thus obtain an identification $S^k\cong B/\partial B$. Given a collection of sub-boxes $B_1,\dots, B_\ell \subset B$ with disjoint interiors, there is an induced map
\begin{equation}\label{eq:signedphi}
S^k = B/\partial B \to B /(B \backslash (\mathring{B_1} \cup \dots \cup \mathring{B_\ell}))= \bigvee^\ell_{a=1} B_a /\partial B_a = \bigvee^\ell_{a=1}S^k \to S^k.
\end{equation}
The last map is the identity on each summand, so that the composition has degree $\ell$. As pointed out in \cite{lls1}, this construction is continuous in the position of the sub-boxes. We let $E(B,\ell)$ denote the space of $\ell$ sub-boxes of $B$ with disjoint interiors; the above construction gives a continuous map $E(B,\ell) \to \mathrm{Map}(S^k,S^k)$. We can generalize the above procedure to associate a map of spheres to a map of sets $A \to Y$, as follows. Say we have chosen sub-boxes $B_a \subset B$ with disjoint interiors, for $a\in A$. Then we have a map:
\begin{equation}\label{eq:mapfromset}
S^k = B /\partial B \to B /(B\backslash (\bigcup_{a\in A}\mathring{B_a} )) =\bigvee_{a\in A} B_a /\partial B_a = \bigvee_{a\in A}S^k \to \bigvee_{y\in Y} S^k
\end{equation}
where the last map is built using the map of sets $A \to Y$. More generally, we can also assign a box map to a correspondence of sets, as follows. Fix a correspondence $A$ from $X$ to $Y$ with source map $s$ and target map $t$. Say that we also have a collection of boxes $B_x$ for $x\in X$. Finally, we also choose a collection of sub-boxes $B_a \subset B_{s(a)}$ with disjoint interiors, for $a\in A$. We then have an induced map
\begin{equation}\label{eq:mapfromset2}
\bigvee_{x\in X} S^k \to \bigvee_{y\in Y} S^k,
\end{equation}
obtained by applying, on $B_x$, the map associated to the set map $s^{-1}(x) \to Y$.
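For instance (with hypothetical data), let $X=\{x\}$, $Y=\{y_1,y_2\}$, and let $A=\{a_1,a_2\}$ be the correspondence with $s(a_i)=x$ and $t(a_i)=y_i$. Choosing sub-boxes $B_{a_1},B_{a_2}\subset B_x$ with disjoint interiors, the map of Equation~\eqref{eq:mapfromset2} takes the form
\[
S^k\to S^k\vee S^k,
\]
collapsing the complement of the two sub-boxes; on reduced homology $\widetilde{H}_k$ it sends the generator $x$ to $y_1+y_2$.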
A map as in Equation~\eqref{eq:mapfromset2} is said to \emph{refine} the correspondence $A$. Let $E(\{B_x\},s)$ be the space of collections of labeled sub-boxes $\{B_a \subset B_{s(a)} \mid a \in A\}$ with disjoint interiors. Then, choosing a correspondence $(A,s,t)$ (so that $A$ and $s$ are those appearing in the definition of $E(\{B_x \},s)$; note that the definition of $E(\{B_x\},s)$ does not involve the target map $t$), Equation~\eqref{eq:mapfromset2} gives a map $E(\{B_x\},s) \to \mathrm{Map}(\vee_{x\in X} S^k ,\vee_{y\in Y} S^k)$. We write
\begin{equation}
\Phi(e,A) \in\mathrm{Map}(\bigvee_{x\in X} S^k , \bigvee_{y\in Y}S^k)\label{eq:phi}
\end{equation}
for the map associated to $e\in E(\{B_x\},s)$ and a compatible correspondence $(A,s,t)$. The main point is that, for any box map $\Phi(e,A)$ refining $A$, the induced map on the $k^{\text{th}}$ homology agrees with the abelianization map
\[
\mathcal{A}(A)\colon\mathcal{A}(X)=\widetilde{H}_k(\vee_{x\in X}S^k)\to\mathcal{A}(Y)=\widetilde{H}_k(\vee_{y\in Y}S^k).
\]
We now indicate a further generalization of box maps to cover signed correspondences. Fix a signed correspondence $(A,s,t,\sigma)$ from $X$ to $Y$, and let $B_x$, $x \in X$, be a collection of boxes. Fix a collection of sub-boxes $B_{a} \subset B_{s(a)}$ for $a\in A$. There is an induced map just as in Equation~\eqref{eq:mapfromset2}, but whose construction depends on the sign map $\sigma$, as follows. For $x\in X$, we have a set map $s^{-1}(x) \to Y$, along with a sign for each element of $s^{-1}(x)$.
We modify the box map refining $s^{-1}(x) \to Y$ (without signs) by precomposing with $\mathfrak{r}$, reflection in the first coordinate, in the boxes with non-trivial sign:
\[
S^k = B/\partial B \to B /(B\backslash (\bigcup_{a\in A}\mathring{B_a} )) = \bigvee_{a\in A} B_a /\partial B_a\xrightarrow{\bigvee \mathfrak{r}_{a}} \bigvee_{a\in A} B_a /\partial B_a = \bigvee_{a\in A}S^k \to \bigvee_{y\in Y}S^k.
\]
Here $\mathfrak{r}_a=\mathfrak{r}$ if $\sigma(a)=-1$ and $\mathfrak{r}_a=\mathrm{Id}$ if $\sigma(a)=+1$. This defines the map on the factor on the left of Equation~\eqref{eq:mapfromset2} indexed by the element $x\in X$. We say that a map constructed this way \emph{refines} the signed correspondence $(A,s,t,\sigma)$. As before, we can regard it as a map
\[
\Phi(e,A)\in \mathrm{Map}( \bigvee_{x\in X} S^k ,\bigvee_{y\in Y}S^k),
\]
where $e \in E(\{B_x\},s)$ and $(A,s,t,\sigma)$ is a compatible signed correspondence. Once again, the induced map on the $k^{\text{th}}$ homology agrees with the abelianization map. Similarly, for a signed correspondence $(A,s,t,\sigma)$, we can consider box maps refining the (unsigned) correspondence, and then precompose with $\mathfrak{d}$, the reflection in the first two coordinates, if the sign is nontrivial. We call a map obtained this way a \emph{doubly signed} refinement of the tuple $(A,s,t,\sigma)$, and denote it $\widehat{\Phi}(e,A)$. We next record two lemmas from \cite{lls1} about the spaces $E(\{B_x\},s)$ that we need later. For a map $s\colon A \to X$, define $E_{\mathrm{sym}}(\{B_x\},s)$ to be the subspace of $E(\{ B_x\},s)$ in which we require the box $B_a$ to lie symmetrically in $B_{s(a)}$ with respect to the reflection $\mathfrak{r}$ in the first coordinate, for all $a\in A$, and $E_{2\mathrm{sym}}(\{B_x\},s)$ by requiring symmetry in the first two coordinates.
\begin{lem}\label{lem:connctd}
Consider $s\colon A\to X$.
If the boxes are $k$-dimensional, then $E(\{ B_x\},s)$ is $(k-2)$-connected, $E_{\mathrm{sym}}(\{B_x\},s)$ is $(k-3)$-connected, and $E_{2\mathrm{sym}}(\{B_x\},s)$ is $(k-4)$-connected.
\end{lem}
\begin{proof}
The first statement is \cite[Lemma 2.29]{lls1}, and the second statement follows from the first since the space of symmetric $k$-dimensional boxes, $E^k_{\mathrm{sym}}(\{B_x\},s)$, is homotopy equivalent to the space of $(k-1)$-dimensional boxes, $E^{k-1}(\{B_x\},s)$, from which the third statement also follows.
\end{proof}
\begin{lem}\label{lem:box-composition}
If $e\in E(\{B_x\},s_A)$ is compatible with a signed correspondence $A$ from $X$ to $Y$, and $f\in E(\{B_y\},s_B)$ is compatible with a signed correspondence $B$ from $Y$ to $Z$, then there is a unique $f\circ_{\sigma_A} e\in E(\{B_x\},s_{B\circ A})$ compatible with $B\circ A$, depending only on $e$, $f$, and the sign map $\sigma_A$, so that $\Phi(f\circ e,B\circ A)=\Phi(f,B)\circ\Phi(e,A)$; we will sometimes drop the subscript $\sigma_A$ (as we did just now). Similarly, there is a unique $f\,\widehat{\circ}_{\sigma_A}\, e\in E(\{B_x\},s_{B\circ A})$ compatible with $B\circ A$, so that $\widehat{\Phi}(f\,\widehat{\circ}\,e,B\circ A)=\widehat{\Phi}(f,B)\circ\widehat{\Phi}(e,A)$. Both of these assignments $\circ,\widehat{\circ}\colon E(\{B_y\},s_B)\times E(\{B_x\},s_A)\to E(\{B_x\},s_{B\circ A})$ are continuous and send $E_{\mathrm{sym}}(\{B_y\},s_B)\times E_{\mathrm{sym}}(\{B_x\},s_A)$ to $E_{\mathrm{sym}}(\{B_x\},s_{B\circ A})$ (and similarly for $E_{2\mathrm{sym}}$). Moreover, $\circ$ agrees with $\widehat{\circ}$ when restricted to $E_{2\mathrm{sym}}(\{B_y\},s_B)\times E_{2\mathrm{sym}}(\{B_x\},s_A)$.
\end{lem}
\begin{proof}
This `composition map' $E(\{B_y\},s_B)\times E(\{B_x\},s_A)\to E(\{B_x\},s_{B\circ A})$, as discussed in \cite[\S2.10]{lls1}, is constructed in the unsigned case as follows: given sub-boxes $B_b\subset B_{s_B(b)}$ for all $b\in B$ and sub-boxes $B_a\subset B_{s_A(a)}$ for all $a\in A$, define, for all $(b,a)\in B\times_Y A$, the sub-box $B_{(b,a)}\subset B_{s_A(a)}$ to be the `sub-box of the sub-box', $B_b\subset B_{s_B(b)=t_A(a)}\cong B_a\subset B_{s_A(a)}$. In the signed case, one needs to precompose with a reflection if $\sigma_A(a)=-1$. That is, define $B_{(b,a)}$ to be the sub-box $\mathfrak{r}_a(B_b)\subset B_{s_B(b)}\cong B_a$ of the sub-box $B_a\subset B_{s_A(a)}$, where, as before, $\mathfrak{r}_a=\mathfrak{r}$ if $\sigma_A(a)=-1$ and $\mathfrak{r}_a=\mathrm{Id}$ if $\sigma_A(a)=+1$. It is clear that $\Phi(f\circ e,B\circ A)=\Phi(f,B)\circ\Phi(e,A)$. The slight subtlety lies in the case when $(b,a)\in B\circ A$ has sign $+1$ but each of $a$ and $b$ has sign $-1$; then each of the box maps $\Phi(e,A)$ and $\Phi(f,B)$ requires reflecting along the first coordinate, and so their composition does not. The case of $\widehat{\Phi}$ is similar. It is clear that both $\circ$ and $\widehat{\circ}$ send $E_{\mathrm{sym}}(\{B_y\},s_B)\times E_{\mathrm{sym}}(\{B_x\},s_A)$ to $E_{\mathrm{sym}}(\{B_x\},s_{B\circ A})$ and $E_{2\mathrm{sym}}(\{B_y\},s_B)\times E_{2\mathrm{sym}}(\{B_x\},s_A)$ to $E_{2\mathrm{sym}}(\{B_x\},s_{B\circ A})$. In both cases, the definition of $\circ$ does not require any reflections, while in the latter case, the definition of $\widehat{\circ}$ does not. So in the latter case, $\circ$ and $\widehat{\circ}$ agree.
\end{proof}

\subsection{Homotopy coherence}\label{subsec:homtpy}
In this section, we briefly review homotopy colimits and homotopy coherent diagrams, following \cite[\S2.9]{lls1}.
Let $\mathrm{Top}_*$ be the category of well-based topological spaces; we will usually work with finite CW complexes. A \emph{weak equivalence} $X\to Y$ is a map that induces isomorphisms on all homotopy groups; typically our spaces will all be simply connected, in which case this is equivalent to inducing isomorphisms on all homology groups. A homotopy equivalence is a special case of a weak equivalence, and for CW complexes (the case at hand), the two notions are equivalent. We will sometimes also work with spaces equipped with an action by a fixed finite group $G$ (which is $\mathbb{Z}_2$ or $\mathbb{Z}_2\times\mathbb{Z}_2$ in our case), with all maps $G$-equivariant, forming a category $G\text{-}\mathrm{Top}_*$. We also require that the inclusions of fixed points $X^{H}\to X^{H'}$, for all subgroups $H'<H$ of $G$, are cofibrations; in our case, all the spaces will carry CW structures so that the actions are CW actions, that is, each group element simply permutes the cells and respects the attaching maps. A map $X\to Y$ is called a \emph{weak equivalence} if the induced map $X^H\to Y^H$ is a weak equivalence for all subgroups $H$ of $G$. A homotopy equivalence in $G\text{-}\mathrm{Top}_*$ induces a weak equivalence. For $G$-CW complexes (the case at hand), the two notions are equivalent by the $G$-Whitehead theorem, see \cite[Theorem 2.4]{Greenlees-May}. For $G$-CW complexes, a weak equivalence $X\to Y$ induces a weak equivalence between quotients of fixed points, $X^{H'}/X^H\to Y^{H'}/Y^H$, for all subgroups $H'<H$ of $G$, and between orbit spaces, $X/H\to Y/H$, for all subgroups $H$ of $G$. Now we recall the notion of a homotopy coherent diagram, which is the data from which a homotopy colimit is constructed. Fix a finite group $G$ (typically trivial, $\mathbb{Z}_2$, or $\mathbb{Z}_2\times\mathbb{Z}_2$).
A homotopy coherent diagram is intuitively a diagram $F\From \mathscr{C}\to G\text{-}\mathrm{Top}_*$ which is not commutative, but commutative up to homotopy, where the homotopies themselves commute up to higher homotopy, and so on, and for which all the homotopies and higher homotopies are viewed as part of the data of the diagram. Precisely, we have the following. \begin{defn}[{\cite[Definition 2.3]{vogt}}]\label{def:homco} A homotopy coherent diagram $F\From\mathscr{C}\to G\text{-}\mathrm{Top}_*$ assigns to each $x \in \mathscr{C}$ a space $F(x) \in G\text{-}\mathrm{Top}_*$, and to each $n \geq 1$ and each sequence \[ x_0 \xrightarrow{f_1} x_1 \xrightarrow{f_2} \cdots \xrightarrow{f_n} x_n \] of composable morphisms in $\mathscr{C}$ a continuous $G$-map \[ F(f_n,\dots,f_1)\From [0,1]^{n-1} \times F(x_0) \to F(x_n)\phantom{\xrightarrow{f}} \] with $F(f_n,\dots,f_1)([0,1]^{n-1}\times \{ *\})=*$. These maps are required to satisfy the following compatibility conditions: \begin{align} \nonumber F(f_n,\dots,f_1)(t_1,& \dots,t_{n-1})= \\ & \begin{cases} F(f_n,\dots,f_2)(t_2,\dots,t_{n-1}), &f_1 = \mathrm{Id}\\ F(f_n ,\dots,\hat{f}_i,\dots,f_1)(t_1,\dots,t_{i-1}\cdot t_i,\dots,t_{n-1}),&f_i=\mathrm{Id}, 1 < i < n\\ F(f_{n-1},\dots,f_1)(t_1,\dots,t_{n-2}),&f_n = \mathrm{Id}\\ F(f_n,\dots,f_{i+1})(t_{i+1},\dots, t_{n-1}) \circ F(f_i,\dots,f_1)(t_1,\dots,t_{i-1}), & t_i=0 \\ F(f_n,\dots,f_{i+1}\circ f_i,\dots,f_1)(t_1,\dots,\hat{t}_{i},\dots,t_{n-1}), & t_i=1. \end{cases}\label{eq:compat} \end{align} When $\mathscr{C}$ does not contain any non-identity isomorphisms, homotopy coherent diagrams may be defined only in terms of non-identity morphisms and the last two compatibility conditions.
\end{defn} Given a homotopy coherent diagram, we can define its \emph{homotopy colimit} in $G\text{-}\mathrm{Top}_*$, quite concretely, as follows: \begin{defn}[{\cite[\S5.10]{vogt}}]\label{def:hoco} Given a homotopy coherent diagram $F\From \mathscr{C} \to G\text{-}\mathrm{Top}_*$, the \emph{homotopy colimit} of $F$ is defined by \begin{equation}\label{eq:hoco} \hoco \; F = \{ * \} \amalg \coprod_{n\geq 0} \coprod_{x_0\xrightarrow{f_1} \cdots\xrightarrow{f_n}x_n} [0,1]^n \times F(x_0) /\sim, \end{equation} where the equivalence relation $\sim$ is given as follows: \[ (f_n,\dots,f_1;t_1,\dots,t_n;p)\sim\begin{cases} (f_n,\dots,f_2 ; t_2,\dots,t_n;p),&f_1=\mathrm{Id}\\ (f_n,\dots,\hat{f}_i,\dots,f_1;t_1,\dots,t_{i-1}\cdot t_i,\dots,t_n;p),&f_i=\mathrm{Id},i>1\\ (f_{n},\dots,f_{i+1};t_{i+1},\dots,t_n;F(f_i,\dots,f_1)(t_1,\dots,t_{i-1},p)), &t_i=0\\ (f_n,\dots,f_{i+1}\circ f_i,\dots,f_1;t_1,\dots,\hat{t}_{i},\dots,t_n;p), &t_i=1,i<n\\ (f_{n-1},\dots,f_1; t_1,\dots,t_{n-1};p), & t_n=1\\ *, & p=*. \end{cases} \] When $\mathscr{C}$ does not contain any non-identity isomorphisms, homotopy colimits may be defined only in terms of non-identity morphisms and the last four equivalence relations. \end{defn} We will need the following properties, most of which are immediate consequences of the above formulas: \begin{enumerate}[leftmargin=*,label=(ho-\arabic*)] \item \label{itm:ho1} Suppose that $F_0,F_1\From \mathscr{C} \to G\text{-}\mathrm{Top}_*$ are homotopy coherent diagrams and $\eta\From F_1 \to F_0$ is a natural transformation, that is, a homotopy coherent diagram \[\eta\From\underline{2}\times\mathscr{C}\to G\text{-}\mathrm{Top}_* \] with $\eta|_{\{i\}\times\mathscr{C}}=F_i$, $i=0,1$. Then $\eta$ induces a map $\hoco\,\eta \From \hoco\,F_1 \to \hoco\,F_0$.
If $\eta(x)$ is a weak equivalence for each $x\in \mathscr{C}$---we will call such an $\eta$ a weak equivalence $F_1\to F_0$---then $\hoco\,\eta$ is a weak equivalence as well. When the spaces involved are $G$-CW complexes (the case at hand), a weak equivalence $\eta\From F_1\to F_0$ is also a homotopy equivalence \cite[Proposition 4.6]{vogt}, that is, there exist $\zeta,\zeta'\From F_0\to F_1$ and \[ \mathfrak{h},\mathfrak{h}'\From\{2\to1\to0\}\times\mathscr{C}\to G\text{-}\mathrm{Top}_*, \] with $\mathfrak{h}|_{\{2\to1\}\times\mathscr{C}}=\eta$, $\mathfrak{h}|_{\{1\to0\}\times\mathscr{C}}=\zeta$, $\mathfrak{h}|_{\{2\to0\}\times\mathscr{C}}=\mathrm{Id}_{F_0}$, and $\mathfrak{h}'|_{\{2\to1\}\times\mathscr{C}}=\zeta'$, $\mathfrak{h}'|_{\{1\to0\}\times\mathscr{C}}=\eta$, $\mathfrak{h}'|_{\{2\to0\}\times\mathscr{C}}=\mathrm{Id}_{F_1}$. \item \label{itm:ho2} Suppose that $F_0,F_1\From \mathscr{C} \to G\text{-}\mathrm{Top}_*$ are diagrams and that $F_0 \vee F_1 \From \mathscr{C} \to G\text{-}\mathrm{Top}_*$ is the diagram obtained by wedge sum: $(F_0 \vee F_1)(x)=F_0(x) \vee F_1(x)$ for all $x\in\mathscr{C}$, and \[(F_0\vee F_1)(f_n,\dots,f_1)(t_1,\dots,t_{n-1},p)=F_i(f_n,\dots,f_1)(t_1,\dots,t_{n-1},p)\] for all $i=0,1$, $x_0 \xrightarrow{f_1} x_1 \xrightarrow{f_2} \cdots \xrightarrow{f_n} x_n$, and $p\in F_i(x_0)$. Then $\hoco (F_0\vee F_1)$ and $(\hoco F_0) \vee (\hoco F_1)$ are naturally homeomorphic. \item \label{itm:ho4} For any normal subgroup $H$ of $G$, define the fixed point diagram $F^H\From\mathscr{C}\to G/H\text{-}\mathrm{Top}_*$ by setting $F^H(x)$ to be the fixed points ${F(x)}^H$. Define the quotient diagram $F/F^H\From\mathscr{C}\to G\text{-}\mathrm{Top}_*$ by setting $(F/F^H)(x)$ to be the quotient $F(x)/\{p\sim *\text{ for all }p\in F^H(x)\}$.
Then there are natural homeomorphisms \[ \begin{tikzpicture}[xscale=4,yscale=1.5] \node (hofix) at (0,0) {$\hocolim(F^H)$}; \node (ho1) at (1,0) {$\hocolim F$}; \node (hoquot) at (2,0) {$\hocolim(F/F^H)$}; \node (fixho) at (0,1) {$\hocolim(F)^H$}; \node (ho2) at (1,1) {$\hocolim F$}; \node (quotho) at (2,1) {$\hocolim(F)/\hocolim(F)^H$}; \draw[->] (hofix) -- (ho1); \draw[->] (ho1) -- (hoquot); \draw[->] (fixho) -- (ho2); \draw[->] (ho2) -- (quotho); \draw[<-] (hofix) -- (fixho) node[midway,anchor=west] {$\cong$}; \draw[<-] (hoquot) -- (quotho) node[midway,anchor=west] {$\cong$}; \draw[<-] (ho1) -- (ho2) node[midway,anchor=west] {$=$}; \end{tikzpicture} \] with the arrows on the bottom row being induced from the natural transformations $F^H\to F \to F/F^H$. For diagrams of $G$-CW complexes, the arrows on the top row form a cofibration sequence since $(\hocolim F,\hocolim(F)^H)$ forms a CW pair; moreover, a weak equivalence $F_1\to F_0$ induces weak equivalences $F^H_1\to F^H_0$ and $F_1/F^H_1\to F_0/F^H_0$ by the $G$-Whitehead theorem. \item \label{itm:ho5} For any normal subgroup $H$ of $G$, let $F/H\From\mathscr{C}\to G/H\text{-}\mathrm{Top}_*$ be the orbit diagram, obtained by setting $(F/H)(x)$ to be the orbit $F(x)/\{p\sim h(p)\text{ for all }p\in F(x),h\in H\}$. Then $\hocolim(F)/{H}$ and $\hocolim(F/{H})$ are naturally homeomorphic, and the map $\hocolim F\to\hocolim(F/{H})$ induced from the natural transformation $F\to F/{H}$ is identified with the quotient map $\hocolim F\to\hocolim(F)/{H}$. For diagrams of $G$-CW complexes, a weak equivalence $F_1\to F_0$ induces a weak equivalence $F_1/H\to F_0/H$ by the $G$-Whitehead theorem.
\end{enumerate} \subsection{Little boxes refinement} With this background, we are ready to review the little box realization construction of \cite[\S5]{lls1} and generalize it to the other kinds of Burnside functors introduced here, i.e., functors to $\mathscr{B}_\bullet$ for $\bullet\in\{\varnothing,\sigma,\xi\}$. \begin{defn}\label{def:spacref} Fix a small category $\mathscr{C}$ and a strictly unitary $2$-functor $F \From \mathscr{C} \to \mathscr{B}_\bullet$. A $k$-dimensional \emph{spatial refinement} of $F$ is a homotopy coherent diagram $\widetilde{F}_k \From \mathscr{C} \to \mathrm{Top}_*$ such that \begin{enumerate}[leftmargin=*] \item For any $u\in \mathscr{C}$, $\widetilde{F}_k(u)=\vee_{x\in F(u)} S^k$, where $S^k=[0,1]^k/\partial$. When $\bullet=\xi$, this space has an additional $\mathbb{Z}_2$-action---denoted $\xi$---induced from the $\mathbb{Z}_2$-action on the set $F(u)$. \item For any sequence of morphisms $u_0 \xrightarrow{f_1} \cdots \xrightarrow{f_n} u_n$ in $\mathscr{C}$ and any $(t_1,\dots,t_{n-1})\in [0,1]^{n-1}$, the map \[ \widetilde{F}_k(f_n,\dots,f_1) (t_1,\dots,t_{n-1})\From \bigvee_{x\in F(u_0)} S^k \to \bigvee_{x\in F(u_n)}S^k \] is a box map refining the (possibly signed) correspondence $F(f_n \circ \dots \circ f_1)$, which is naturally isomorphic to $F(f_n) \times_{F(u_{n-1})} \dots \times_{F(u_1)} F(f_1)$; when $\bullet=\xi$, we require each of these maps to be $\xi$-equivariant. \end{enumerate} Note that for $\bullet=\xi$, the $\xi$-action is a CW action, with the CW complex structure given by the boxes. \end{defn} This definition reduces to \cite[Definition 5.1]{lls1} when $\bullet=\varnothing$. The additions here are the signed box map refinements introduced in \S\ref{sec:signed-box-maps}, which are needed for functors to $\Burn_{\sigma}$ with non-trivial signs in the correspondences, and the equivariance conditions.
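To illustrate the signed case (an illustrative example only): take $\mathscr{C}=\underline{2}$, $\bullet=\sigma$, $F(1)=\{x\}$, $F(0)=\{y\}$, and let $F(f)$ be the signed correspondence $\{a_+,a_-\}$ with both elements having source $x$ and target $y$, and with signs $\sigma(a_{\pm})=\pm1$. A $k$-dimensional spatial refinement then amounts to a single signed box map $S^k\to S^k$: choose disjoint sub-boxes $B_{a_+},B_{a_-}\subset B_x$, collapse the complement of their interiors to the basepoint, and identify each sub-box with $B_y$, precomposing with the reflection $\mathfrak{r}$ on $B_{a_-}$. On $H_k(S^k)\cong\mathbb{Z}$, this map has degree $(+1)+(-1)=0$, the signed count of the elements of $F(f)$.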
The following is the main technical result in this section. As usual, $\bullet\in \{\varnothing, \sigma,\xi\}$ in the statement. For $\bullet=\varnothing$, the ordinary Burnside category, this reduces to \cite[Proposition 5.2]{lls1}. \begin{prop}\label{prop:cube} Let $\mathscr{C}$ be a small category in which every sequence of composable non-identity morphisms has length at most $n$, and let $F\From \mathscr{C}\to\mathscr{B}_\bullet$ be a strictly unitary 2-functor. \begin{enumerate}[leftmargin=*] \item If $k \geq n$, there is a $k$-dimensional spatial refinement of $F$ (which is $\xi$-equivariant if $\bullet=\xi$). \label{itm:real1} \item If $k \geq n+1$, then any two $k$-dimensional spatial refinements of $F$ are weakly equivalent ($\xi$-equivariantly if $\bullet=\xi$). \label{itm:real2} \item If $\widetilde{F}_k$ is a $k$-dimensional spatial refinement of $F$, then the result of suspending each $\widetilde{F}_k(u)$ and $\widetilde{F}_k(f_n, \dots, f_1)$ gives a $(k+1)$-dimensional spatial refinement of $F$. \label{itm:real3} \end{enumerate} \end{prop} \begin{proof} The proof is parallel to that of \cite[Proposition 5.2]{lls1}. For all values of $\bullet$, the third point is clear. The case $\bullet=\sigma$ is an immediate generalization of the case $\bullet=\varnothing$ as worked out in \cite{lls1}. We sketch the main details. For point (\ref{itm:real1}), given $u\in \mathscr{C}$, set $\widetilde{F}_k(u) =\vee_{x\in F(u)} S^k$, where the $x$-summand is $B_x/\partial$. We choose for each $f\From u \to v$ in $\mathscr{C}$ a signed box map $\widetilde{F}_k(f)$ refining the signed correspondence $F(f)$; for the identity morphism $\mathrm{Id}_u$, choose $\widetilde{F}_k(\mathrm{Id}_u)$ to be the identity map $\mathrm{Id}_{\widetilde{F}_k(u)}$, which indeed is a box map refining the identity correspondence $\mathrm{Id}_{F(u)}=F(\mathrm{Id}_u)$.
Let $e_f \in E(\{B_x \mid x \in F(u)\},s_{F(f)})$ be the collection of little boxes corresponding to $F(f)$. This gives a definition of $\widetilde{F}_k$ on vertices and arrows. We now need to define the appropriate coherences among these maps, which will be done inductively. Assume that for all sequences $v_0 \xrightarrow{f_1} \cdots \xrightarrow{f_m} v_m$ with $m\leq\ell$, we have defined continuous maps \[ e_{f_m ,\dots, f_1}\From [0,1]^{m-1}\to E(\{B_x \mid x \in F(v_0)\}, s_{F(f_m\circ \dots \circ f_1)}), \] and that these maps satisfy Equation~\eqref{eq:compat} (with the composition map from Lemma \ref{lem:box-composition} playing the role of $\circ$). Then for the induction step, given $v_0 \xrightarrow{f_1} \cdots \xrightarrow{f_{\ell+1}} v_{\ell+1}$, we have a continuous map \[ S^{\ell-1}=\partial ([0,1]^\ell) \to E(\{B_x \mid x \in F(v_0)\},s_{F(f_{\ell+1}\circ\dots\circ f_1)}) \] defined by \begin{equation}\label{eq:homconds3} \begin{split} (t_1,\dots,t_{i-1},0,t_{i+1},\dots,t_\ell) &\mapsto e_{f_{\ell+1},\dots,f_{i+1}}(t_{i+1},\dots,t_{\ell})\circ e_{f_i,\dots,f_1}(t_1,\dots,t_{i-1}) \\ (t_1,\dots,t_{i-1},1,t_{i+1},\dots,t_\ell) &\mapsto e_{f_{\ell+1},\dots,f_{i+1}\circ f_i,\dots, f_1}(t_1,\dots,t_{i-1},t_{i+1},\dots, t_{\ell}). \end{split} \end{equation} By Lemma \ref{lem:connctd}, this map extends to a map, call it $e_{f_{\ell+1},\dots,f_1}$, from $[0,1]^\ell$. By definition, the maps \[ \Phi(e_{f_m,\dots,f_1})\From [0,1]^{m-1} \times \bigvee_{x\in F(v_0)} S^k \to \bigvee_{x\in F(v_{m})}S^k \] assemble to form a homotopy coherent diagram. Next, we address point (\ref{itm:real2}). Say that we have spatial refinements $\widetilde{F}^0_k$ and $\widetilde{F}^1_k$ of $F$. Consider the functor $G\From\underline{2}\times\mathscr{C}\to\mathscr{B}_\bullet$ defined as the composition $\underline{2}\times\mathscr{C}\xrightarrow{\pi_2}\mathscr{C}\xrightarrow{F}\mathscr{B}_\bullet$.
We need only define a spatial refinement $\widetilde{G}_k$ of $G$ that restricts to $\widetilde{F}^i_k$ at $\{i\} \times \mathscr{C}$ for $i=0,1$. The construction of $\widetilde{G}_k$ proceeds uneventfully by induction as before. Then for each $u\in\mathscr{C}$, $G(\phi_{1,0}\times \mathrm{Id}_u)$ refines the identity correspondence $\mathrm{Id}_{F(u)}$ (and indeed, may be chosen to be the identity map), and hence is a weak equivalence; therefore, by \ref{itm:ho1}, $\widetilde{G}_k\From \widetilde{F}^1_k\to\widetilde{F}^0_k$ is a weak equivalence as well. Next, we consider the case $\bullet=\xi$. Construct a functor $G\From\mathscr{C}\to\Burn_{\sigma}$ so that $\mathcal{D}\circ G$ is naturally isomorphic to $F$ by choosing a section of the $\mathbb{Z}_2$-quotient map $\amalg_{u\in\mathscr{C}}F(u)\to \amalg_{u\in\mathscr{C}}F(u)/\mathbb{Z}_2$; we leave the details to the reader. Then construct a box map refinement of $G$; let $B_x$ be the box assigned to $x$, for $x\in\amalg_{u\in\mathscr{C}}G(u)$, and let $e_{f_m ,\dots, f_1}\From [0,1]^{m-1}\to E(\{B_x \mid x \in G(v_0)\}, s_{G(f_m\circ \dots \circ f_1)})$ denote the $[0,1]^{m-1}$-parameter family chosen for the sequence $v_0 \xrightarrow{f_1} \cdots \xrightarrow{f_m} v_m$ during the construction. We may then define $B_{\{1\}\times x}=\{1\}\times B_x$ and $B_{\{\xi\}\times x}=\{\xi\}\times B_x$, for $x\in\amalg_{u\in\mathscr{C}}G(u)$.
For any $v_0 \xrightarrow{f_1} \cdots \xrightarrow{f_m} v_m$ in $\mathscr{C}$, and any $(t_1,\dots,t_{m-1})\in[0,1]^{m-1}$, the configuration of disjoint boxes $e_{f_m ,\dots, f_1}(t_1,\dots,t_{m-1})\in E(\{B_x \mid x \in G(v_0)\}, s_{G(f_m\circ \dots \circ f_1)})$ refining the signed correspondence $G(f_m\circ \dots \circ f_1)$ can be doubled to get a configuration of disjoint boxes \[ d_{f_m,\dots,f_1}(t_1,\dots,t_{m-1})\in E(\{B_y \mid y \in F(v_0)=\{1,\xi\}\times G(v_0)\}, s_{F(f_m\circ \dots \circ f_1)}=\mathrm{Id}_{\{1,\xi\}}\times s_{G(f_m\circ\dots\circ f_1)}) \] refining the unsigned correspondence $F(f_m\circ \dots \circ f_1)=\{1,\xi\}\times G(f_m\circ \dots \circ f_1)$. Use these $d_{f_m,\dots,f_1}$'s to construct the refinement, which is automatically $\xi$-equivariant. \end{proof} \subsection{Realization of cube-shaped diagrams}\label{sec:cube-shaped-realize} Finally, in this section we will discuss how to construct a CW complex $\CRealize{F}_{k}$, and then a CW spectrum $\Realize{F}$, from a given diagram $F\From \underline{2}^n\to\mathscr{B}_\bullet$. Let $\underline{2}_+$ be the category with objects $\{0,1,*\}$ and unique non-identity morphisms $1\to0$ and $1\to *$, and let $\underline{2}^n_+=(\underline{2}_+)^n$. Let $\widetilde{F}_k\From \underline{2}^n\to \mathrm{Top}_*$ be a spatial refinement of $F$ using $k$-dimensional boxes, and let $\widetilde{F}^+_k\From \underline{2}^n_+ \to \mathrm{Top}_*$ be the diagram obtained from $\widetilde{F}_k$ by setting $\widetilde{F}_k^+(x)$ to be a point for all $x\in\underline{2}^n_+\setminus\underline{2}^n$. Let $\CRealize{F}_{k}$ be the homotopy colimit of $\widetilde{F}_k^+$. We call $\CRealize{F}_{k}$ a \emph{realization} of $F\From \underline{2}^n \to\mathscr{B}_\bullet$ for $\bullet\in\{\varnothing,\sigma,\xi\}$.
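In the simplest case $n=1$ (a single morphism $f\From 1\to 0$ in $\underline{2}$), this realization recovers a familiar construction: $\widetilde{F}^+_k$ consists of the map $\widetilde{F}_k(f)\From\widetilde{F}_k(1)\to\widetilde{F}_k(0)$ together with $\widetilde{F}_k(1)\to\mathrm{pt}$, and its homotopy colimit glues the mapping cylinder of $\widetilde{F}_k(f)$ to the reduced cone on $\widetilde{F}_k(1)$. That is, $\CRealize{F}_{k}$ is the mapping cone of the box map $\widetilde{F}_k(f)$.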
\begin{cor}\label{cor:burnlimwelldef} If $k\geq n+1$, then $\CRealize{F}_{k}$ is well-defined up to weak equivalence in $\mathrm{Top}_*$ (or $\mathbb{Z}_2\text{-}\mathrm{Top}_*$ if $\bullet=\xi$). In each case, $\CRealize{F}_{k+1}=\Sigma\CRealize{F}_{k}$. \end{cor} \begin{proof} As in \cite[Corollary 5.6]{lls1}, this follows from Proposition \ref{prop:cube} and the properties of homotopy colimits \ref{itm:ho1}. \end{proof} The homotopy colimit $\CRealize{F}_{k}$ may be given several CW structures. First, from Definition \ref{def:hoco}, there is the \emph{standard} CW structure, with cells $[0,1]^m \times B_x$, parameterized by tuples $(f_m,\dots,f_1)$ subject to some relations. We have a second CW structure on $\CRealize{F}_{k}$, the \emph{fine} structure, which is obtained from the standard structure by subdividing each cell $[0,1]^m\times B_x$ along the central $(m+k-1)$-dimensional box $[0,1]^m\times B^\mathfrak{r}_{x}$, where $B^\mathfrak{r}_x\subset B_x$ is the fixed-point set of the reflection $\mathfrak{r}\From B_x\to B_x$ along the first coordinate. The fine CW complex structure will be of relevance in \S\ref{subsec:equiv-constructions}. There is also the \emph{coarse} cell structure of \cite[Section 6]{lls1}. There they construct a CW structure on $\CRealize{F}_{k}$ for $F$ an unsigned Burnside functor, with cells formed from unions of standard cells, so that there is exactly one (non-basepoint) cell $\mathcal{C}(x)$ for each $x\in\amalg_u F(u)$. In more detail, if $F_x$ denotes the Burnside sub-functor of $F$ generated by $x$, then the subcomplex $\CRealize{F_x}_k$ of $\CRealize{F}_k$ is the image of the cell $\mathcal{C}(x)$. The construction generalizes without changes to give a CW structure on $\CRealize{F}_{k}$ for $F\From\underline{2}^n\to\mathscr{B}_\bullet$, with the same set of cells; when $\bullet=\xi$, this produces a $\mathbb{Z}_2$-CW complex.
Unless otherwise specified, this is the default CW complex structure that we consider on $\CRealize{F}_k$. \begin{lem}\label{lem:cofib} A cofibration sequence $G\to F\to H$ of functors $\underline{2}^n\to \mathscr{B}_\bullet$ (cf.~Definition~\ref{def:burn-cofib-sequence}), upon realization, induces a cofibration sequence of spaces. In general, any natural transformation $\eta\From F_1\to F_0$ of Burnside functors $\underline{2}^n\to\mathscr{B}_\bullet$ induces a map on the realizations. \end{lem} \begin{proof} Consider the standard CW complex structures. For the first statement, a spatial refinement $\widetilde{F}_k$ of $F$ produces spatial refinements $\widetilde{G}_k$ of $G$ and $\widetilde{H}_k$ of $H$; working with those refinements, it is an immediate consequence of the definitions that $\CRealize{G}_k$ is a CW subcomplex of $\CRealize{F}_k$ with quotient complex $\CRealize{H}_k$. For the second part, if $\eta\From\underline{2}^{n+1}\to\mathscr{B}_\bullet$ is the natural transformation, then $(F_0)_{\iota_0}$ is a subfunctor and $(F_1)_{\iota_1}$ is the corresponding quotient functor, where $\iota_i\From\underline{2}^n\to\underline{2}^{n+1}$ is the face inclusion to $\{i\}\times\underline{2}^n$. Therefore, we get a cofibration sequence \[ \CRealize{(F_0)_{\iota_0}}_k\to\CRealize{\eta}_k\to\CRealize{(F_1)_{\iota_1}}_k. \] However, $\CRealize{(F_0)_{\iota_0}}_k=\CRealize{F_0}_k$, while $\CRealize{(F_1)_{\iota_1}}_k=\Sigma\CRealize{F_1}_k$ since $\CRealize{F_1}$ is constructed as a hocolim over $\underline{2}_+^n$, while $\CRealize{(F_1)_{\iota_1}}$ is constructed as a hocolim over $\underline{2}_+^{n+1}$. Therefore, the Puppe map \[ \CRealize{(F_1)_{\iota_1}}_k=\Sigma\CRealize{F_1}_k=\CRealize{F_1}_{k+1}\to\Sigma\CRealize{(F_0)_{\iota_0}}_k=\Sigma\CRealize{F_0}_k=\CRealize{F_0}_{k+1} \] is the required map.
\end{proof} \begin{prop}\label{prop:totalization} If $F\From\underline{2}^n\to\mathscr{B}_\bullet$, then its shifted reduced cellular complex $\widetilde{C}_{\mathrm{cell}}(\CRealize{F}_{k})[-k]$ is isomorphic to the totalization $\mathrm{Tot}(F)$, with the cells mapping to the corresponding generators. If $\eta\From F_1\to F_0$ is a natural transformation of Burnside functors, then the map $\CRealize{F_1}_{k}\to\CRealize{F_0}_{k}$ is cellular, and the induced cellular chain map agrees with $\mathrm{Tot}(\eta)$. \end{prop} \begin{proof} The first statement is an immediate generalization of the corresponding statement for unsigned Burnside functors from \cite[Theorem 6]{lls1}. The second statement is also clear from the form of the map constructed in Lemma~\ref{lem:cofib}, using similar arguments. \end{proof} We can then package all these spaces together to construct a \emph{finite CW spectrum}, by which we mean a pair $(X,r)$ (sometimes written $\Sigma^r X$), where $X$ is a finite CW complex and $r\in\mathbb{Z}$; one may view it as an object in the Spanier--Whitehead category, or as $\Sigma^r (\Sigma^\infty X)$, the $r^{\text{th}}$ suspension of the suspension spectrum of the finite CW complex $X$. One can take the reduced cellular chain complex of a finite CW spectrum, whose chain homotopy type is an invariant of the (stable) homotopy type of the CW spectrum. Then, for a stable Burnside functor $(F\From\underline{2}^n\to\mathscr{B}_\bullet,r)$, after fixing a $k$-dimensional spatial refinement $\widetilde{F}_k$, we may define its \emph{realization} as the finite CW spectrum $\Realize{\Sigma^r F}=(\CRealize{F}_k,r-k)$. \begin{lem}\label{lem:functor-map-gives-space-map} Let $\Sigma^{r_1}F_1\to \Sigma^{r_2}F_2$ be a map of Burnside functors $F_1,F_2$. Then there is an induced map of realizations \[ \Realize{\Sigma^{r_1}F_1}\to \Realize{\Sigma^{r_2}F_2} \] well-defined up to homotopy equivalence.
If the map of Burnside functors is a stable equivalence, then the induced map is a homotopy equivalence. If $\bullet=\xi$, the induced map and the homotopy equivalence may be taken to be $\mathbb{Z}_2$-equivariant. \end{lem} \begin{proof} First, associated to a natural transformation of Burnside functors, there is a well-defined space map by Lemma \ref{lem:cofib}. Since all maps of Burnside functors are obtained as compositions of natural transformations and stable equivalences, we need only show that there is a well-defined (up to homotopy) homotopy equivalence of the realizations associated to a stable equivalence. For this, we must check that maps as in the items of Definition \ref{def:stableq} preserve the stable homotopy type of $\CRealize{F}_{k}$. For a natural transformation $\eta\From F_i \to F_{i+1}$, Proposition~\ref{prop:totalization} produces a cellular map $\CRealize{F_i}_k \to \CRealize{F_{i+1}}_k$, which induces the map $\mathrm{Tot}(\eta)$ on the cellular chain complex. The condition that $\mathrm{Tot}(\eta)$ is a chain homotopy equivalence implies that the map of spaces is a homotopy equivalence by Whitehead's theorem (we assume $k$ is sufficiently large so that all relevant spaces are simply connected), so it has a homotopy inverse well-defined up to homotopy. When $\bullet=\xi$, the $\mathbb{Z}_2$-action is free (away from the basepoint), so by the $G$-Whitehead theorem, the homotopy equivalences may be taken to be equivariant. \end{proof} \subsection{Equivariant constructions}\label{subsec:equiv-constructions} We now explain how to make the constructions of the previous sections equivariant. Recall that each $S^k=[0,1]^k/\partial$ carries a natural $\mathbb{Z}_2$-action by $\mathfrak{r}$---reflection in the first coordinate---as well as a $\mathbb{Z}_2$-action by $\mathfrak{d}$---the composition of the reflections in the first two coordinates (that is, a $180^\circ$ rotation in the plane of the first two coordinates).
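It may be helpful to record the effect of these two actions on top homology: on $H_k(S^k)\cong\mathbb{Z}$, the reflection $\mathfrak{r}$ acts by $-1$, while $\mathfrak{d}$, being a composition of two reflections, acts by $(-1)\cdot(-1)=+1$.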
\begin{defn}\label{def:spacref-equivariant} Fix a small category $\mathscr{C}$ and a strictly unitary $2$-functor $F \From \mathscr{C} \to \mathscr{B}_\bullet$. A \emph{reflection-equivariant or $\mathfrak{r}$-equivariant $k$-dimensional spatial refinement} of $F$ is a $\mathbb{Z}_2$-equivariant $k$-dimensional spatial refinement $\widetilde{F}_k \From \mathscr{C} \to \mathrm{Top}_*$ such that for any sequence of morphisms $u_0 \xrightarrow{f_1} \cdots \xrightarrow{f_n} u_n$ in $\mathscr{C}$ and any $(t_1,\dots,t_{n-1})\in [0,1]^{n-1}$, the map \[ \widetilde{F}_k(f_n,\dots,f_1) (t_1,\dots,t_{n-1})\From \bigvee_{x\in F(u_0)} S^k \to \bigvee_{x\in F(u_n)}S^k \] is $\mathfrak{r}$-equivariant. When $\bullet=\xi$, we require each $\widetilde{F}_k$ to be $\xi$-equivariant as well; in this case, $\widetilde{F}_k$ is a $\mathbb{Z}_2\times\mathbb{Z}_2$-equivariant diagram, with the first factor (denoted $\mathbb{Z}^+_2$) acting by $\xi$ and the second factor (denoted $\mathbb{Z}^-_2$) acting by $\mathfrak{r}\circ\xi$. \end{defn} \begin{defn}\label{def:spacref-del-equivariant} Fix a small category $\mathscr{C}$ and a strictly unitary $2$-functor $F \From \mathscr{C} \to \Burn_{\sigma}$. A \emph{doubly-equivariant $k$-dimensional spatial refinement} of $F$ is a $\mathbb{Z}_2$-equivariant $k$-dimensional \emph{doubly signed} spatial refinement $\widehat{F}_k \From \mathscr{C} \to \mathrm{Top}_*$ such that for any sequence of morphisms $u_0 \xrightarrow{f_1} \cdots \xrightarrow{f_n} u_n$ in $\mathscr{C}$ and any $(t_1,\dots,t_{n-1})\in [0,1]^{n-1}$, the map \[ \widehat{F}_k(f_n,\dots,f_1) (t_1,\dots,t_{n-1})\From \bigvee_{x\in F(u_0)} S^k \to \bigvee_{x\in F(u_n)}S^k \] is equivariant with respect to the reflections in the first two coordinates; the $\mathbb{Z}_2$-action is given by $\mathfrak{r}$ (there is also a second $\mathbb{Z}_2$-action by reflection in the second coordinate, which we will ignore).
Note that since we require the box map to be a \emph{doubly signed} refinement of $F(f_n\circ\dots\circ f_1)$---defined using $\widehat{\Phi}$ instead of $\Phi$ from \S\ref{sec:signed-box-maps}---a doubly-equivariant spatial refinement is \emph{not} a spatial refinement of $F$ in the usual sense. We will always denote doubly-equivariant spatial refinements by $\widehat{F}_k$ to distinguish them from $\mathfrak{r}$-equivariant spatial refinements $\widetilde{F}_k$, to which they are not homotopy equivalent (even nonequivariantly). \end{defn} We next record the equivariant version of Proposition \ref{prop:cube}: \begin{prop}\label{prop:cube-equivariant} Let $\mathscr{C}$ be a small category in which every sequence of composable non-identity morphisms has length at most $n$, and let $F\From \mathscr{C}\to\mathscr{B}_\bullet$ be a strictly unitary 2-functor. \begin{enumerate}[leftmargin=*] \item If $k\geq n+1$, the refinement of Proposition \ref{prop:cube} may also be constructed $\mathfrak{r}$-equivariantly, while if $k\geq n+2$, it may be constructed doubly equivariantly. \label{itm:real1-equivariant} \item If $k\geq n+2$, the weak equivalence from Proposition \ref{prop:cube}(\ref{itm:real2}) may be constructed $\mathfrak{r}$-equivariantly, while if $k\geq n+3$, it may be constructed doubly equivariantly. \label{itm:real2-equivariant} \end{enumerate} Moreover, if $\bullet=\xi$, there is a $\xi$- and $\mathfrak{r}$-equivariant refinement of $F$ for $k\geq n+1$, and the weak equivalence from Proposition \ref{prop:cube}(\ref{itm:real2}) may be constructed $\xi$- and $\mathfrak{r}$-equivariantly for $k\geq n+2$. \end{prop} \begin{proof} The proof is essentially identical to the proof of Proposition \ref{prop:cube}.
To make each $\widetilde{F}_k(f_n,\dots,f_1)$ $\mathfrak{r}$- or doubly-equivariant, simply stipulate that each map $e_{f_m,\dots,f_1}$ has image contained in $E_{\mathrm{sym}}(\{B_x\mid x\in F(v_0)\},s_{F(f_m\circ\dots\circ f_1)})$ or $E_{2\mathrm{sym}}(\{B_x\mid x\in F(v_0)\},s_{F(f_m\circ\dots\circ f_1)})$, respectively. \end{proof} \begin{prop}\label{prop:equivdiag-equivariant} If $\widetilde{F}$ is an $\mathfrak{r}$-equivariant $k$-dimensional spatial refinement of $F\From\mathscr{C}\to\Burn_{\sigma}$, then the following hold: \begin{enumerate}[leftmargin=*] \item The fixed point diagram $\widetilde{F}^{\mathbb{Z}_2}$ is a $(k-1)$-dimensional refinement of $\Forgot\circ F\From\mathscr{C}\to\mathscr{B}$. \item The orbit diagram $(\widetilde{F}/\widetilde{F}^{\mathbb{Z}_2})/{\mathbb{Z}_2}$ is a $k$-dimensional refinement of $\Forgot\circ F\From\mathscr{C}\to\mathscr{B}$. \item The quotient diagram $\widetilde{F}/\widetilde{F}^{\mathbb{Z}_2}$ is a $k$-dimensional refinement of $\mathcal{D}\circ F\From\mathscr{C}\to\Burn_\xi$, with the $\mathfrak{r}$-action on $\widetilde{F}$ inducing the $\mathbb{Z}_2^+$-action on $\widetilde{F}/\widetilde{F}^{\mathbb{Z}_2}$. \end{enumerate} If $\widehat{F}$ is a doubly-equivariant $k$-dimensional spatial refinement of $F\From\mathscr{C}\to\Burn_{\sigma}$, then the following hold: \begin{enumerate}[leftmargin=*] \item The fixed point diagram $\widehat{F}^{\mathbb{Z}_2}$ is a $(k-1)$-dimensional refinement of $F\From\mathscr{C}\to\Burn_{\sigma}$. \item The orbit diagram $(\widehat{F}/\widehat{F}^{\mathbb{Z}_2})/{\mathbb{Z}_2}$ is a $k$-dimensional refinement of $F\From\mathscr{C}\to\Burn_{\sigma}$.
\item The quotient diagram $\widehat{F}/\widehat{F}^{\mathbb{Z}_2}$ is a $k$-dimensional refinement of $\mathcal{D}\circ F\From\mathscr{C}\to\Burn_\xi$, with the $\mathfrak{r}$-action on $\widehat{F}$ inducing the $\mathbb{Z}_2^-$-action on $\widehat{F}/\widehat{F}^{\mathbb{Z}_2}$. \end{enumerate} If $\widetilde{F}$ is a $k$-dimensional $\mathfrak{r}$-equivariant spatial refinement of $F\From\mathscr{C}\to\Burn_\xi$, the following hold: \begin{enumerate}[resume*] \item The orbit diagram $\widetilde{F}/{\mathbb{Z}_2^+}$ is a $k$-dimensional spatial refinement of $\mathcal{Q}\circ F\From\mathscr{C}\to\mathscr{B}$. \label{itm:real4} \item Say $F=\mathcal{D}\circ G$, and $\widetilde{F}$ is $\xi$-invariant. Then the orbit diagram $\widetilde{F}/{\mathbb{Z}_2^-}$ is an $\mathfrak{r}$-equivariant $k$-dimensional spatial refinement of $G\From\mathscr{C}\to\Burn_{\sigma}$, with the $\mathbb{Z}_2^+$-action on $\widetilde{F}$ inducing the $\mathfrak{r}$-action on $\widetilde{F}/{\mathbb{Z}_2^-}$. \label{itm:real5} \end{enumerate} \end{prop} \begin{proof} This is an immediate consequence of the definitions. \end{proof} As in \S\ref{sec:cube-shaped-realize}, given any such equivariant spatial refinement $\widetilde{F}_k$ for $F\From\underline{2}^n\to\mathscr{B}_\bullet$, we construct a $G$-equivariant CW complex $\CRealize{F}_k$ as $\hocolim(\widetilde{F}_k^+)$, where $G=\mathbb{Z}_2$ if $\bullet=\sigma$ and $G=\mathbb{Z}_2^+\times\mathbb{Z}_2^-$ if $\bullet=\xi$.
By a \emph{$G$-equivariant CW complex}, we mean a $G$-space carrying two different CW structures, so that it is a $G$-CW complex with respect to the first structure, and we use the second structure to define its cellular chain complex; equivalently, it is a pair of a $G$-CW complex and a CW complex, along with a homeomorphism connecting the two, and its cellular chain complex is defined to be the cellular chain complex of the second CW complex (and so does not automatically carry a $G$-action). In our construction, the fine structure from \S\ref{sec:cube-shaped-realize} is the first CW structure, while the standard structure is the second one. A caveat: our definition of a $G$-equivariant CW complex is quite non-standard. As before, for any stable functor $(F,r)$, after fixing an equivariant spatial refinement $\widetilde{F}_k$, we let $\Realize{\Sigma^r F}=(\CRealize{F}_k,r-k)$ denote the corresponding $G$-equivariant finite CW spectrum. Similarly, if $\widehat{F}_k$ is a $k$-dimensional doubly-equivariant spatial refinement of $F\From\underline{2}^n\to\Burn_{\sigma}$, we let $\widehat{\CRealize{F}}_{k}$ denote the $\mathbb{Z}_2$-equivariant CW complex $\hocolim(\widehat{F}_k^+)$. For any stable functor $(F,r)$, we let $\widehat{\Realize{\Sigma^rF}}$ denote the corresponding finite $\mathbb{Z}_2$-equivariant CW spectrum. We call $\widehat{\Realize{\Sigma^rF}}$ the \emph{doubly-equivariant realization} of $(F,r)$, to avoid confusion with the ordinary realization of $(F,r)$. \begin{cor}\label{cor:burnlimwelldef-equivariant} Let $F\From\underline{2}^n\to\mathscr{B}_\bullet$.
\begin{enumerate}[leftmargin=*] \item If $\bullet=\sigma$, then $\Realize{F}^{\mathbb{Z}_2}=\Sigma^{-1}\Realize{\mathcal{F}\circ F}$, $\Realize{F}/\Realize{F}^{\mathbb{Z}_2}=\Realize{\mathcal{D} \circ F}$ (with the $\mathbb{Z}_2$-action on the left equaling the $\mathbb{Z}_2^+$-action on the right), and $(\Realize{F}/\Realize{F}^{\mathbb{Z}_2})/{\mathbb{Z}_2}=\Realize{\mathcal{F}\circ F}$. \item If $\bullet=\xi$, then $\Realize{F}/\mathbb{Z}_2^+=\Realize{\mathcal{Q}\circ F}$; if $F=\mathcal{D}\circ G$, then $\Realize{F}/\mathbb{Z}_2^-=\Realize{G}$ (with the $\mathbb{Z}_2^+$-action on the left equaling the $\mathbb{Z}_2$-action on the right). \item If $\bullet=\sigma$, then $\widehat{\Realize{F}}^{\mathbb{Z}_2}=\Sigma^{-1}\Realize{F}$, $\widehat{\Realize{F}}/\widehat{\Realize{F}}^{\mathbb{Z}_2}=\Realize{\mathcal{D} \circ F}$ (with the $\mathbb{Z}_2$-action on the left equaling the $\mathbb{Z}_2^-$-action on the right), and $(\widehat{\Realize{F}}/\widehat{\Realize{F}}^{\mathbb{Z}_2})/{\mathbb{Z}_2}=\Realize{F}$. \end{enumerate} \end{cor} \begin{proof} This follows from Proposition \ref{prop:cube-equivariant} and the properties \ref{itm:ho4}--\ref{itm:ho5} of homotopy colimits. \end{proof} \begin{prop}\label{prop:totalization-equivariant} If $F\colon\underline{2}^n\to\mathscr{B}_\xi$, then the reduced cellular complex of its realization, $\widetilde{C}_{\mathrm{cell}}(\Realize{F})$, is isomorphic, as a $\mathbb{Z}_u$-module, to the totalization $\mathrm{Tot}(F)$, with the cells mapping to the corresponding generators. If $F\colon\underline{2}^n\to\mathscr{B}_\sigma$, then the reduced cellular complex of its doubly-equivariant realization, $\widetilde{C}_{\mathrm{cell}}(\widehat{\Realize{F}})$, is isomorphic to the totalization $\mathrm{Tot}(\mathcal{F}\circ F)$, with the cells mapping to the corresponding generators.
\end{prop} \begin{proof} Nonequivariantly, the first statement is just Proposition \ref{prop:totalization}; the isomorphism as $\mathbb{Z}_u$-modules follows from an inspection of the proof of \cite[Theorem 6]{lls1}. The second statement is proved as in Proposition \ref{prop:totalization}. \end{proof} \begin{lem}\label{lem:functor-map-gives-space-map-equivariant} Let $\Sigma^{r_1}F_1\to \Sigma^{r_2}F_2$ be an equivariant map between stable odd Burnside functors $(F_1\colon\underline{2}^{n_1}\to\mathscr{B}_{\sigma},r_1)$ and $(F_2\colon\underline{2}^{n_2}\to\mathscr{B}_{\sigma},r_2)$. Then there is an induced $\mathbb{Z}_2$-equivariant map of equivariant realizations \[ \Realize{\Sigma^{r_1}F_1}\to \Realize{\Sigma^{r_2}F_2}, \] an induced $\mathbb{Z}_2$-equivariant map of doubly-equivariant realizations \[ \widehat{\Realize{\Sigma^{r_1}F_1}}\to \widehat{\Realize{\Sigma^{r_2}F_2}}, \] and an induced $\mathbb{Z}_2^+\times\mathbb{Z}_2^-$-equivariant map of equivariant realizations \[ \Realize{\Sigma^{r_1}\mathcal{D} F_1}\to \Realize{\Sigma^{r_2}\mathcal{D} F_2}, \] all well-defined up to homotopy equivalence. If the map of Burnside functors is an equivariant equivalence, then these induced maps are equivariant homotopy equivalences. \end{lem} \begin{proof} Let us sketch the argument in the first case; the other two cases are similar. First, note that Lemma \ref{lem:cofib} can be made to hold equivariantly, so that associated to a natural transformation of signed Burnside functors there is an equivariant map. So we only need to show that associated to an equivariant equivalence of Burnside functors there is a well-defined equivariant stable homotopy equivalence of their realizations. It suffices to show that each of the moves in Definition~\ref{def:stableq} induces an equivariant stable homotopy equivalence. For the stabilization move this is clear.
For the first move, assume that we have a natural transformation $\eta$ with $\mathrm{Tot}(\mathcal{D} \eta)$ a homotopy equivalence over $\mathbb{Z}_u$. By Proposition \ref{prop:totalization} and Lemma \ref{lem:functor-map-gives-space-map}, the induced map between realizations is cellular and is a non-equivariant homotopy equivalence. The induced map on fixed-point sets is induced by the underlying natural transformation $\mathcal{F} \eta$, using the identification of Corollary \ref{cor:burnlimwelldef-equivariant}; since $\mathrm{Tot}(\mathcal{D} \eta)$ is a homotopy equivalence of chain complexes, so is $\mathrm{Tot}(\mathcal{F} \eta)$. By the proof of Lemma \ref{lem:functor-map-gives-space-map}, and using Corollary \ref{cor:burnlimwelldef-equivariant} again, this induced map on fixed-point sets is a homotopy equivalence. By the $G$-Whitehead theorem, the induced map is a $\mathbb{Z}_2$-homotopy equivalence. \end{proof} We record the behavior under coproducts. \begin{lem}\label{lem:product-realization-fixed} Let $F_1,F_2 \colon \underline{2}^{n} \to \mathscr{B}_\bullet$. Then $\Realize{F_1 \amalg F_2}$ is equivariantly homeomorphic to $\Realize{F_1}\vee\Realize{F_2}$. If $\bullet=\sigma$, then $\widehat{\Realize{F_1 \amalg F_2}}$ is equivariantly homeomorphic to $\widehat{\Realize{F_1}}\vee\widehat{\Realize{F_2}}$. \end{lem} \begin{proof} The statement is an immediate consequence of \ref{itm:ho2} and \S\ref{subsec:prod}. \end{proof} \begin{rmk}\label{rmk:other-homotopy-types} In fact, for $F\colon\underline{2}^n\to\mathscr{B}_{\sigma}$ and any $\ell\geq 0$, the constructions of the present section may be carried out using reflection in the first $\ell$ coordinates to produce a $\mathbb{Z}_2^\ell$-equivariant CW spectrum $\Realize{F}^\ell$; we have already encountered the first few cases: $\Realize{\mathcal{F} F}=\Realize{F}^0$, $\Realize{F}=\Realize{F}^1$, $\widehat{\Realize{F}}=\Realize{F}^2$.
Its cellular complex equals the totalization $\mathrm{Tot}(\mathcal{F} F)$ if $\ell$ is even and $\mathrm{Tot}(F)$ if $\ell$ is odd. Proposition \ref{prop:equivdiag-equivariant}, as well as its corollaries above, readily generalizes to these realizations, and the entire family is related by iterated quotients (or fixed-point sets) under the various actions. \end{rmk} \section{Khovanov homotopy types}\label{sec:oddkh} In this section, we construct the odd Khovanov Burnside functor, and the odd Khovanov homotopy type as its realization. We also construct a reduced odd Khovanov homotopy type and the unified Khovanov homotopy type. We establish various properties, such as fixed-point constructions and cofibration sequences, and construct several concordance invariants following the standard procedure. \subsection{The odd Khovanov Burnside functor}\label{subsec:def} In this section, we define a functor to the signed Burnside category associated to an oriented link diagram $L$ with oriented crossings. After ordering the $n$ crossings of $L$, we will identify the vertices of the hypercube of resolutions of $L$ with the objects of $\underline{2}^n$, and the edges with the length one arrows of $\underline{2}^n$. To define the odd Khovanov Burnside functor $F_o\colon\underline{2}^n\to\mathscr{B}_{\sigma}$, following Lemma \ref{lem:212}, we need only define it on objects, on length one morphisms, and across two-dimensional faces of the cube $\underline{2}^n$. On objects we set \[ F_{o}(u)=\KhGen(u). \] For each edge $u\geqslant_1 v$ in $\underline{2}^n$ and each element $y \in F_{o}(v)$, write \[ \mathfrak{F}_{o}(\phi^{\mathrm{op}}_{v,u})(y)=\sum_{x\in F_{o}(u) } \epsilon_{x,y} x, \] where $\mathfrak{F}_o$ is the odd Khovanov functor from \S\ref{sec:threekhfunc}. Note that each $\epsilon_{x,y}\in \{ -1,0,1\}$.
Define \[ F_{o}(\phi_{u,v})=\{ (y,x) \in F_{o}(v) \times F_{o}(u) \mid \epsilon_{x,y}=\pm 1 \}, \] where the sign on an element of $F_{o}(\phi_{u,v})$ is given by the $\epsilon_{x,y}$ of the pair, and the source and target maps are the natural ones. We need only define the $2$-morphisms across $2$-dimensional faces. In fact, in contrast to the case of even Khovanov homology, where a global choice is necessary in order to define the $2$-morphisms \cite{lshomotopytype}, in odd Khovanov homology there is a unique choice of $2$-morphisms compatible with the preceding data. To be more specific, for any $2$-dimensional face $u\geqslant_1v,v'\geqslant_1w$ and any pair $(x,y) \in F_{o}(u) \times F_{o}(w)$, there is a unique bijection between \[ A_{x,y}:=s^{-1}(x)\cap t^{-1}(y)\subset F_{o}(\phi_{v,w})\times_{F_{o}(v)} F_{o}(\phi_{u,v}) \] and \[ A_{x,y}':=s^{-1}(x)\cap t^{-1}(y)\subset F_{o}(\phi_{v',w})\times_{F_{o}(v')} F_{o}(\phi_{u,v'}) \] that preserves the signs. (That is, the signed sets $A_{x,y}$ and $A_{x,y}'$ both have at most one element of any given sign.) This assertion may be checked on a case-by-case basis, using \cite[Figure 2]{ors}. Away from ladybug configurations (i.e., configurations of types X and Y), the two sets have at most one element each (and $\mathfrak{F}_o$ commutes across $2$-dimensional faces), so the result is automatic. For ladybug configurations, there are sets $A_{x,y}$, $A_{x,y}'$ with two elements, but the elements have opposite signs, and so there is still a unique matching. Next, we observe that the compatibility relation demanded by Lemma \ref{lem:212} is satisfied by $F_{o}$ (using the unique bijections across $2$-faces). For this, we must consider $3$-dimensional faces $\iota \colon \underline{2}^3 \to \underline{2}^n$, a choice of elements $x \in F_{o}(\iota(1,1,1))$ and $y\in F_{o}(\iota(0,0,0))$, and the correspondence $A_{x,y}$.
There are six distinct decompositions of the arrow $(1,1,1) \to (0,0,0)$ in $\underline{2}^3$ into a composition of nonidentity arrows, corresponding to permutations of $\{1,2,3\}$. Specifically, if $e_i$ denotes the arrow $1\to 0$ in the $i^{\text{th}}$ factor of $\underline{2}^3$, the permutation $\sigma$ corresponds to the composition $e_{\sigma(3)}\circ e_{\sigma(2)}\circ e_{\sigma(1)}$. These compositions are in turn related by $2$-morphisms \[ F_{i,j}\colon F(e_{i})\circ F(e_{j}) \to F(e_{j})\circ F(e_{i}). \] The compatibility relation of Lemma \ref{lem:212} boils down to the condition that the following diagram commutes: \begin{center} \begin{tikzpicture}[xscale=0.8,yscale=0.7] \node (h0A) at (60:3.7cm) {$F_{e_3}\!\!\circ\! F_{e_2}\!\!\circ\! F_{e_1}$}; \node (h0C) at (0:3.7cm) {$F_{e_2}\!\!\circ\! F_{e_3} \!\!\circ\! F_{e_1}$}; \node (h1B) at (-60:3.7cm) {$F_{e_2}\!\!\circ\! F_{e_1} \!\!\circ\! F_{e_3}$}; \node (h1A) at (-120:3.7cm) {$F_{e_1} \!\!\circ\! F_{e_2} \!\!\circ\! F_{e_3}$}; \node (h1C) at (180:3.7cm) {$F_{e_1} \!\!\circ\! F_{e_3} \!\!\circ\! F_{e_2}$}; \node (h0B) at (120:3.7cm) {$F_{e_3} \!\!\circ\! F_{e_1}\!\!\circ\! F_{e_2}$}; \draw[->] (h0A) edge node[auto] {\tiny $F_{32} \times \mathrm{Id}$} (h0C) (h0C) edge node[auto] {\tiny $\mathrm{Id} \times F_{31}$} (h1B) (h1B) edge node[auto] {\tiny $F_{21} \times \mathrm{Id}$} (h1A) (h1A) edge node[auto] {\tiny $\mathrm{Id} \times F_{23}$} (h1C) (h1C) edge node[auto] {\tiny $F_{13} \times \mathrm{Id}$} (h0B) (h0B) edge node[auto] {\tiny $\mathrm{Id} \times F_{12}$} (h0A); \end{tikzpicture} \end{center} However, it turns out that for any choice of $x,y$ as above, there is at most one element of a given sign in $A^\sigma_{x,y}:=s^{-1}(x)\cap t^{-1}(y)\subset F(e_{\sigma(3)})\circ F(e_{\sigma(2)})\circ F(e_{\sigma(1)})$, and therefore the coherence check is automatic.
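The uniqueness of sign-preserving matchings that underlies both the $2$-face and $3$-face checks is elementary and can be illustrated concretely. The following Python sketch (a toy encoding with hypothetical generator labels, not the actual Khovanov data) models a signed correspondence as a set of triples, composes two correspondences by taking the fiber product and multiplying signs, and produces the unique sign-preserving bijection between two signed sets having at most one element of each sign:

```python
# Toy encoding (hypothetical labels).  A signed correspondence from S to T
# is a set of triples (s, t, sign) with sign in {+1, -1}.

def compose(A, B):
    """Composite of A: S -> T with B: T -> U: fiber product over the middle
    set, with the signs of matching pairs multiplied."""
    return {(s, u, e1 * e2) for (s, t1, e1) in A for (t2, u, e2) in B
            if t1 == t2}

def sign_matching(A, B):
    """Unique sign-preserving bijection between two signed sets (elements are
    pairs (label, sign)), each with at most one element of any given sign."""
    out = {}
    for sign in (+1, -1):
        xs = [a for (a, e) in A if e == sign]
        ys = [b for (b, e) in B if e == sign]
        assert len(xs) <= 1 and len(ys) <= 1 and len(xs) == len(ys)
        if xs:
            out[xs[0]] = ys[0]
    return out

# A ladybug-like situation: each side has one element of each sign, so the
# matching pairs the positive elements and pairs the negative elements.
A = {("p", +1), ("q", -1)}
B = {("r", +1), ("s", -1)}
print(sign_matching(A, B))  # {'p': 'r', 'q': 's'}
```

In the ladybug case both sets have exactly one element of each sign, so there is exactly one sign-preserving matching, as in the example.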
This can be seen by a simple enumeration of all possible cases. In more detail, following \cite{lshomotopytype}, for $3$-dimensional configurations that do not contain ladybug configurations on any of their $2$-dimensional faces, each of the six sets $A^\sigma_{x,y}$ contains at most one element. For the remaining configurations, it is shown in \cite{lshomotopytype} that each of the six sets $A^\sigma_{x,y}$ contains at most two elements. However, since these remaining configurations contain ladybugs, these two elements must have opposite signs. (Recall that if $u\geqslant_1 v,v'\geqslant_1w$ is a ladybug configuration and $s^{-1}(x)\cap t^{-1}(y)\subset F_o(\phi_{v,w})\circ F_o(\phi_{u,v})$ is non-empty, then it consists of two oppositely signed points.) Therefore, each of the six sets $A^\sigma_{x,y}$ contains at most one element of each sign for the remaining configurations as well. \begin{defn}\label{def:kh-odd-burnside-functor} Define the stable signed Burnside functor associated to an oriented link diagram $L$ with $n$ oriented crossings and a choice of edge assignment to be $(F_o,-n_-)$, where $F_o\colon\underline{2}^n\to\mathscr{B}_{\sigma}$ is the functor defined above, and $n_-$ is the number of negative crossings in $L$. Since the differential on the Khovanov chain complex respects the quantum grading, cf.~\S\ref{sec:khovanovhomology}, the odd Khovanov Burnside functor splits as a coproduct of functors, one in each quantum grading: \[ F_{o}=\coprod_j F^j_{o}. \] The total complex of the odd Khovanov Burnside functor $\Sigma^{-n_-}F^j_{o}$ agrees with the dual of the odd Khovanov chain complex: \[ (\mathrm{Tot}(\Sigma^{-n_-}F^j_{o}))^*=\KhCx^{*,j}_o(L).
\] \end{defn} \begin{defn}\label{def:oddkhmain} We define the \emph{odd Khovanov spectrum} $\khoo(L)=\bigvee_j\khoo^j(L)$ as a $\mathbb{Z}_2$-equivariant finite CW spectrum, where $\khoo^j(L)$ is a realization of the stable signed Burnside functor $\Sigma^{-n_-}F^j_{o}$. \end{defn} This odd functor recovers the even functor $F_e\colon\underline{2}^n\to\mathscr{B}$ from \cite{lls1,lls2} as follows. \begin{prop}\label{prop:odd-recovers-even} The functors $F_o$ and $F_e$ satisfy $\mathcal{F}\circ F_{o}=F_{e}$, where $\mathcal{F}\colon\mathscr{B}_{\sigma}\to\mathscr{B}$ is the forgetful functor from Figure~\ref{fig:burnsidecategories}. \end{prop} \begin{proof} The generators of even and odd Khovanov homology are canonically identified, cf.~\S\ref{subsec:kg}, so on objects we have a canonical identification $\mathcal{F} F_{o}(u)=F_{e}(u)=\KhGen(u)$. Similarly, since the differentials agree up to sign (under this identification), $\mathcal{F} F_{o}(\phi_{u,v})$ is canonically identified with $F_{e}(\phi_{u,v})$ for $u\geqslant_1 v$. So we just need to show that the $2$-morphism $\mathcal{F} F_{o}(\phi_{v,w}) \circ \mathcal{F} F_{o}(\phi_{u,v})\to\mathcal{F} F_{o}(\phi_{v',w})\circ \mathcal{F} F_{o}(\phi_{u,v'})$ agrees with the $2$-morphism $F_e(\phi_{v,w}) \circ F_e(\phi_{u,v})\to F_e(\phi_{v',w})\circ F_e(\phi_{u,v'})$ for all $2$-dimensional faces $u\geqslant_1 v,v'\geqslant_1 w$. \begin{figure} \caption{\textbf{The odd functor for the type-X assignment recovers the even functor for the right ladybug matching.}} \label{fig:odd-to-even} \end{figure} For $2$-dimensional configurations other than ladybugs, for any $x\in\KhGen(u)$ and $y\in\KhGen(w)$, the subset $s^{-1}(x)\cap t^{-1}(y)$ in each of the two correspondences contains at most one element, and so the two $2$-morphisms agree.
For ladybugs, one may check directly that the $2$-morphism for $\mathcal{F} F_{o}$ agrees with that for $F_{e}$. To be more specific, the $2$-morphism for $\mathcal{F} F_{o}$ specified by a type-X sign assignment agrees (using the above identifications of objects and $1$-morphisms) with the right ladybug matching for $F_{e}$, while a type-Y assignment corresponds to the left ladybug matching; see Figure~\ref{fig:odd-to-even} for details. By Lemma \ref{lem:212}, $\mathcal{F} F_{o}$ is isomorphic to $F_{e}$. \end{proof} Therefore, the even Khovanov spectrum from \cite{lshomotopytype} is $\khoh(L)=\bigvee_j\khoh^{j}(L)$ with $\khoh^{j}(L)=\Realize{\Sigma^{-n_-}\mathcal{F} F^j_{o}}$. Using the doubly-equivariant realizations, we have a related spectrum: \begin{defn}\label{def:evenaction} We define a second even Khovanov spectrum, denoted $\khoh'(L)=\bigvee_j\khoh^{\prime\,j}(L)$, as a $\mathbb{Z}_2$-equivariant finite CW spectrum, where $\khoh^{\prime\,j}(L)=\widehat{\Realize{\Sigma^{-n_-}F^j_{o}}}$, a doubly-equivariant realization of the stable signed Burnside functor $\Sigma^{-n_-}F^j_{o}$. \end{defn} \begin{defn}\label{def:unifiedintro} We define the \emph{unified Khovanov spectrum} $\unis(L)=\bigvee_j\unis^j(L)$ as a $\mathbb{Z}^+_2\times\mathbb{Z}^-_2$-equivariant finite CW spectrum, where $\unis^j(L)$ is a realization of the stable $\mathbb{Z}_2$-equivariant Burnside functor $\Sigma^{-n_-}\mathcal{D} F^j_{o}$. \end{defn} \begin{rmk}\label{rmk:khovanov-spaces-with-extra-action} Following Remark \ref{rmk:other-homotopy-types}, there is in fact a family of Khovanov spaces $\mathcal{X}_\ell(L)$, for $\ell\geq 0$, whose cellular chain complexes agree with the even Khovanov chain complex $\KhCx_e(L)$ if $\ell$ is even and with the odd Khovanov chain complex $\KhCx_o(L)$ if $\ell$ is odd. (We have already encountered $\mathcal{X}_0(L)=\khoh(L)$, $\mathcal{X}_1(L)=\khoo(L)$, and $\mathcal{X}_2(L)=\khoh'(L)$.)
There is a natural generalization of Theorem \ref{thm:equivariance} to this family of spaces. We conjecture that $\mathcal{X}_\ell(L)$, up to homotopy equivalence, depends only on the parity of $\ell$. \end{rmk} \subsection{Relations among the three theories}\label{sec:uni} In this section, we find relations among the three Khovanov homotopy types, in terms of geometric fixed points, geometric quotients, and cofibration sequences. \begin{proof}[Proof of Theorem~\ref{thm:equivariance}] The first statement and the first parts of statements (\ref{itm:thm-equivariance-2}) and (\ref{itm:thm-equivariance-4}) follow from Corollary \ref{cor:burnlimwelldef-equivariant}. The exact sequences are a consequence of the $\mathbb{Z}_2$-actions on $\khoh'(L)$ and $\khoo(L)$, for which $\Sigma^{-1}\khoo(L)$ and $\Sigma^{-1}\khoh(L)$ are, respectively, the fixed-point sets, and $\unis(L)$ is the quotient in both cases. The inclusions of the fixed-point sets are cofibrations in both cases, giving the desired exact sequences. The agreement with the exact sequences of \cite{putyrashumakovitch} at the level of cohomology is a consequence of (\ref{itm:thm-equivariance-3}) and (\ref{itm:thm-equivariance-5}). So it remains to prove (\ref{itm:thm-equivariance-3}) and (\ref{itm:thm-equivariance-5}). The proofs are similar, so let us only consider (\ref{itm:thm-equivariance-3}). Consider the Puppe sequence associated to the inclusion $\Sigma^{-1}\khoh(L) \hookrightarrow \khoo(L)$. For concreteness, assume $\khoo(L)$ has been constructed equivariantly using $k$-dimensional boxes, and that all the sub-boxes of $[0,1]^k$ involved in the construction are of the form $[0,1]\times B$; that is, they extend the full length in the first coordinate. Let $X$ denote $\khoo(L)$, let $Y$ denote the fixed set $\Sigma^{-1}\khoh(L)$, and let $Z$ denote the quotient $X/Y=\unis(L)$.
The Puppe sequence takes the form \[ Y \hookrightarrow X \to X\cup C(Y)\xrightarrow{P}\Sigma Y, \] with $C$ denoting the cone, where the last map $P$ quotients by $X$. The term $X\cup C(Y)$ is homotopy equivalent to $Z$ via quotienting by $C(Y)$: \[ Q\colon X\cup C(Y)\to X/Y =Z. \] So the Puppe map $Z\to\Sigma Y$ is the homotopy inverse of $Q$, composed with $P$. Meanwhile, we have the map $R \colon \unis(L)=Z \to \khoh(L)=\Sigma Y$ given by quotienting by $\mathbb{Z}_2^+$. Recall that the $\mathbb{Z}_2^+$-action on $Z=X/Y$ is induced from the $\mathbb{Z}_2$-action on $X$. We wish to show that these two maps from $Z$ to $\Sigma Y$ are homotopic. Since $Q$ is a homotopy equivalence, it is enough to show that the two maps $P,R\circ Q\colon X\cup C(Y)\to \Sigma Y$ are homotopic. Consider the quotient of $X$ by the $\mathbb{Z}_2$-action. Since $X$ has been constructed using boxes that stretch the full length along the first coordinate, it is not hard to see that this quotient is $C(Y)$. This produces a quotient map $S\colon X\cup C(Y)\to C(Y)\cup C(Y)=\Sigma Y$. Both $P$ and $R\circ Q$ factor through $S$: the first map quotients the first $C(Y)$ factor, while the second map quotients the second factor. Either is homotopic to the identity map $C(Y)\cup C(Y)\to\Sigma Y$, and so the claim follows. \end{proof} \subsection{Invariance}\label{subsec:invar} The main aim of this section is to prove that changes of the orientations of the crossings, as well as Reidemeister moves, result in equivariantly equivalent signed Burnside functors.
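As an aside, the cochain manipulation that appears in the edge-assignment step of the proof below can be verified mechanically: a $\{\pm1\}$-valued $1$-cochain on the cube $[0,1]^n$ whose product around every $2$-face is trivial is the coboundary of a $0$-cochain, which may be constructed by propagating signs along paths to the base vertex. A minimal Python sketch under these assumptions (the encoding is ours, for illustration only):

```python
from itertools import product

def edges(n):
    # Edges of the cube {0,1}^n: (v, i) lowers coordinate i of v from 1 to 0.
    return [(v, i) for v in product((0, 1), repeat=n) for i in range(n)
            if v[i] == 1]

def lower(v, i):
    return v[:i] + (0,) + v[i + 1:]

def zero_cochain(n, c):
    """Given a 1-cochain c: edges -> {+1,-1} with trivial 2-face products,
    build alpha with alpha[v]*alpha[lower(v,i)] == c[(v,i)] by propagating
    from (0,...,0); path independence follows from the 2-face condition."""
    alpha = {}
    for v in sorted(product((0, 1), repeat=n), key=sum):
        if sum(v) == 0:
            alpha[v] = 1
        else:
            i = v.index(1)  # lower the first coordinate equal to 1
            alpha[v] = c[(v, i)] * alpha[lower(v, i)]
    return alpha

# Example: c is itself the coboundary of a 0-cochain beta, so its 2-face
# products are automatically trivial; recover alpha and check the defining
# relation on every edge.
n = 3
beta = {v: (-1) ** (v[0] * v[1] + v[2]) for v in product((0, 1), repeat=n)}
c = {(v, i): beta[v] * beta[lower(v, i)] for (v, i) in edges(n)}
alpha = zero_cochain(n, c)
assert all(alpha[v] * alpha[lower(v, i)] == c[(v, i)] for (v, i) in edges(n))
```

Here the recovered $\alpha$ agrees with $\beta$ since both are normalized to $+1$ at the base vertex.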
\begin{proof}[Proof of Theorem~\ref{thm:odd-functor-invariant}] We will now prove that the equivariant equivalence class of the odd Khovanov Burnside stable functor from Definition~\ref{def:kh-odd-burnside-functor} is independent of the choices made in its construction, namely the choice of diagram $\diagram$, the orientations of the crossings, the edge assignment, and the ordering of the circles at each resolution. We first check that, for a fixed diagram $L$, changing the other auxiliary choices results in sign reassignments (which are sometimes isomorphisms) of functors from $\underline{2}^n$ to $\mathscr{B}_{\sigma}$. \begin{itemize}[leftmargin=*] \item \textbf{Edge assignment:} Let $\epsilon,\epsilon'$ be two different edge assignments of the same type for the same oriented knot diagram $L$. As noted in \cite[Lemma 2.2]{ors}, $\epsilon\epsilon'$ is a (multiplicative) cocycle in $C_{\mathrm{cell}}^1([0,1]^n;\mathbb{Z}_2)$, and since $H^1([0,1]^n;\mathbb{Z}_2)=0$, it is the coboundary of a $0$-cochain $\alpha$ on the cube of resolutions. That is, there is a map $\alpha\colon \underline{2}^n\to \{ \pm 1\}$ such that, for any $v\geqslant_1w$, $\alpha(v)\alpha(w)=\epsilon(\phi^\mathrm{op}_{w,v})\epsilon'(\phi^\mathrm{op}_{w,v})$. If $F_o$ and $F'_{o}$ are the corresponding functors $\underline{2}^n\to\mathscr{B}_{\sigma}$, then $F'_o$ is obtained from $F_o$ by the sign reassignment associated to $\alpha$. \item \textbf{Orientations at crossings:} Recall that \cite[Lemma 2.3]{ors} asserts that for oriented diagrams $(L,o)$ and $(L,o')$ and an edge assignment $\epsilon$ for $(L,o)$, there exists an edge assignment $\epsilon'$ of the same type for $(L,o')$ such that $\KhCx_o(L,o,\epsilon)\cong \KhCx_o(L,o',\epsilon')$. The isomorphism constructed in that lemma respects the Khovanov generators, and so induces an isomorphism of signed Burnside functors.
To be more specific, note that the Khovanov generators $\KhGen(L)$ of $\KhCx_o(L,o)$ are independent of the orientation $o$ (which only changes the differential). The choice of edge assignment $\epsilon'$ is such that the identity morphism on generators $\KhCx_o(L,o,\epsilon)\to \KhCx_o(L,o',\epsilon')$ commutes with the differentials. The corresponding Burnside functors are then naturally isomorphic. (Independence of the orientations of crossings can also be proved by performing Reidemeister II moves twice, as in \cite[Figure~4.5]{SSS-geometric-perturb}.) \item \textbf{Type of edge assignment:} \cite[Lemma 2.4]{ors} proves that an edge assignment $\epsilon$ of a decorated link diagram $(L,o)$ of type X can also be viewed as a type-Y edge assignment for some orientation $o'$. That is, the type-X Burnside functor associated to $(L,o,\epsilon)$ is already the type-Y Burnside functor associated to $(L,o',\epsilon)$. (Independence of the type of edge assignment can also be established by Viro's trick of reflecting the knot diagram along a vertical line, which switches the X and the Y ladybug matchings, and then using a sequence of Reidemeister moves to come back to the original diagram, cf.~\cite[Proposition 6.5]{lshomotopytype}.) \item \textbf{Ordering of circles at each resolution:} Finally, we must check that reordering the circles of a resolution results in an equivariantly equivalent signed Burnside functor. For this, let $\KhGen(u)$ and $\KhGen'(u)$ denote the Khovanov generators for two different orderings of the circles for a fixed link diagram. These orderings are related by a bijection from $\KhGen(u)$ to $\KhGen'(u)$, and it is simple to check that these bijections relate the two functors $F_o,F'_o\colon\underline{2}^n\to\mathscr{B}_{\sigma}$ by a sign reassignment.
\end{itemize} Next we move on to the main issue in proving well-definedness of the stable equivalence class of the odd Khovanov Burnside functor: Reidemeister moves. The argument mostly follows the proof of \cite[Theorem 1]{lls2}, which lifts Khovanov's invariance proof \cite{kho1} to the level of Burnside functors. For two diagrams differing by a Reidemeister move, Khovanov's invariance proof---see also Bar-Natan~\cite{natancat}---is built from chain maps between Khovanov complexes which are either subcomplex inclusions or quotient complex projections that send Khovanov generators to Khovanov generators and are chain homotopy equivalences, or their chain homotopy inverses. Since these maps send Khovanov generators to Khovanov generators, it is easy to see that the argument lifts to the Burnside category level~\cite{lls2}; we similarly sketch how, in the odd case, the invariance proof from \cite{ors} lifts to the odd Burnside functor. (The astute reader will observe that in Khovanov's proof of Reidemeister III invariance, the chain maps do not send Khovanov generators to Khovanov generators. This issue is already faced in \cite{lshomotopytype}. It seems possible to carry through the approach of \cite{kho1,natancat,ors}, but at the expense of considering functors from categories other than cube categories. We will instead follow the proof of Reidemeister III invariance of \cite{lshomotopytype} by considering the braid-like Reidemeister III move.) The standard way to prove Reidemeister invariance---applicable in the even, odd, and unified theories---is the following.
Start with the Khovanov chain complex of one diagram, and perform a sequence of replacements to arrive at the Khovanov chain complex of the other diagram, where each replacement is one of the following: \begin{enumerate}[leftmargin=*,label=(c-\arabic*)] \item\label{item:merge-cancel} \emph{Replacing the complex with a quotient complex associated to a merge.} Namely, for a merge taking circles $a_1,a_2$ in $\diagram_0$ to $a$ in $\diagram_1$, there is an acyclic subcomplex spanned, at $\diagram_0$, by the generators that do not contain $a_1$, together with all generators at $\diagram_1$; replace the complex by the quotient by this subcomplex. \item\label{item:split-cancel} \emph{Replacing the complex with a subcomplex associated to a split.} Namely, for a split taking one circle $a$ in $\diagram_0$ to $a_1,a_2$ in $\diagram_1$, there is a subcomplex spanned by all the generators from $\diagram_1$ that do not contain an $a_1$ factor, and the corresponding quotient is acyclic; replace the complex by this subcomplex. \end{enumerate} It is easy to check that the relevant maps---the quotient complex projection in the first case and the subcomplex inclusion in the second case---are chain homotopy equivalences in the unified theory over $\mathbb{Z}_u$, and hence also in the odd and the even theories. (These cancellations are parametrized by the \emph{cancellation data} of \cite[Definition 4.4]{SSS-geometric-perturb}.) To lift this argument to the Burnside functor level, in the first case we replace the functor by a sub-functor, and in the second case by a quotient functor, in the sense of \S\ref{sec:nat-transform-burn}. (Recall that the Khovanov complex is the \emph{dual} of the totalization of the Burnside functor; hence sub-functors correspond to quotient complexes and quotient functors correspond to subcomplexes.)
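Both replacement moves are instances of the Gaussian elimination (cancellation) lemma for based chain complexes: a pair of generators $(b,a)$ such that $a$ appears in $\partial b$ with coefficient $\pm1$ may be cancelled without changing the chain homotopy type. A minimal Python sketch for a two-term complex with the differential stored as an integer matrix (the example data is hypothetical, not Khovanov data):

```python
def cancel(D, i, j):
    """Cancel the generator pair (b_i, a_j), where D[k][l] is the
    coefficient of a_l in the differential of b_k and D[i][j] is +1 or -1.
    Returns the differential of the reduced complex: for e = D[i][j],
    D'[k][l] = D[k][l] - D[k][j] * e * D[i][l] (note e**-1 == e)."""
    e = D[i][j]
    assert e in (1, -1)
    return [[D[k][l] - D[k][j] * e * D[i][l]
             for l in range(len(D[0])) if l != j]
            for k in range(len(D)) if k != i]

# Cancelling (b_0, a_0) in D = [[1, 1], [1, 1]] leaves the zero map on the
# remaining generators, so one generator on each side survives to homology,
# matching the homology of the original rank-one differential.
print(cancel([[1, 1], [1, 1]], 0, 0))  # [[0]]
```

In the replacement moves above, the cancelled pairs span the acyclic piece, and the reduced complex is the quotient complex (for a merge) or the subcomplex (for a split).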
\begin{itemize}[leftmargin=*] \item \textbf{RI:} Consider the Reidemeister I move from a diagram $L=\begin{tikzpicture}[baseline={([yshift=-.8ex]current bounding box.center)}] \node[thick,inner sep=0,outer sep=0,draw,dotted] at (0,0) {\begin{tikzpicture}[scale=0.04]\draw[solid] (-2,-2) to[out=45,in=-45,looseness=2] (-2,12);\end{tikzpicture}}; \end{tikzpicture}$ to the diagram $L'=\begin{tikzpicture}[baseline={([yshift=-.8ex]current bounding box.center)}] \node[thick,inner sep=0,outer sep=0,draw,dotted] at (0,0) {\begin{tikzpicture}[scale=0.04]\draw[solid] (-2,12) to (8,2) to[out=-45,in=-90] (12,5) to[out=90,in=45] (8,8); \node[draw=none,crossing] at (5,5) {}; \draw[solid] (-2,-2) to (8,8);\end{tikzpicture}}; \end{tikzpicture}$. We have that $\KhGen(L')=\KhGen(L'_0) \amalg \KhGen(L'_1)$, where $L'_0=\begin{tikzpicture}[baseline={([yshift=-.8ex]current bounding box.center)}] \node[thick,inner sep=0,outer sep=0,draw,dotted] at (0,0) {\begin{tikzpicture}[scale=0.04]\draw[solid] (-2,-2) to[out=45,in=-45,looseness=2] (-2,12);\draw[solid] (8,8) to[out=-135,in=135,looseness=2] (8,2) to[out=-45,in=-90] (12,5) to[out=90,in=45] (8,8);\end{tikzpicture}}; \end{tikzpicture}$ (respectively, $L'_1=\begin{tikzpicture}[baseline={([yshift=-.8ex]current bounding box.center)}] \node[thick,inner sep=0,outer sep=0,draw,dotted] at (0,0) {\begin{tikzpicture}[scale=0.04]\draw[solid] (-2,12) to[out=-45,in=-135,looseness=1.5] (8,8) to[out=45,in=90] (12,5) to[out=-90,in=-45] (8,2) to[out=135,in=45,looseness=1.5] (-2,-2);\end{tikzpicture}}; \end{tikzpicture}$) is obtained from $L'$ by resolving the new crossing by the $0$-resolution (respectively, $1$-resolution). Let $a$ be the new circle in $L'_0$, as shown in the picture.
We may perform a replacement of Type~\ref{item:merge-cancel} by cancelling the subcomplex of $\uniKhCx(L')$ spanned by all the generators in $\KhGen(L'_1)$ and only the generators in $\KhGen(L'_0)$ that do not contain $a$, after which we are left with a quotient complex that is naturally isomorphic to $\uniKhCx(L)$. That is, we have a quotient complex projection $\uniKhCx(L')\to\uniKhCx(L)$ that is a chain homotopy equivalence over $\mathbb{Z}_u$. This is induced from a subfunctor inclusion $F_o(L)\to F_o(L')$; that is, the dual map on the totalizations \[ (\mathrm{Tot}(\mathcal{D}\circ F_o(L')))^*\to(\mathrm{Tot}(\mathcal{D}\circ F_o(L)))^* \] agrees with the map on the unified Khovanov complex, where $\mathcal{D}\colon\mathscr{B}_{\sigma}\to\mathscr{B}_\xi$ is the doubling functor from Figure~\ref{fig:burnsidecategories}. Therefore, the functors $F_o(L)$ and $F_o(L')$ are equivariantly equivalent. \item\textbf{RII:} The proof for the Reidemeister II move is similar, except that now we have to use both types of cancellations. Say we are performing a Reidemeister II move from $L=\begin{tikzpicture}[baseline={([yshift=-.8ex]current bounding box.center)}] \node[thick,inner sep=0,outer sep=0,draw,dotted] at (0,0) {\begin{tikzpicture}[scale=0.04]\draw[solid] (-2,12) to[out=-45,in=-135] (22,12); \draw[solid] (-2,-2) to[out=45,in=135] (22,-2);\end{tikzpicture}}; \end{tikzpicture}$ to $L'=\begin{tikzpicture}[baseline={([yshift=-.8ex]current bounding box.center)}] \node[thick,inner sep=0,outer sep=0,draw,dotted] at (0,0) {\begin{tikzpicture}[scale=0.04]\draw[solid] (-2,12) to (8,2) to[out=-45,in=-135] (12,2) to (22,12); \node[draw=none,crossing] at (5,5) {};\node[draw=none,crossing] at (15,5) {}; \draw[solid] (-2,-2) to (8,8) to[out=45,in=135] (12,8) to (22,-2);\end{tikzpicture}}; \end{tikzpicture}$.
Once again, $\KhGen(L')$ decomposes as $\amalg_{(i,j)\in\{0,1\}^2}\KhGen(L'_{ij})$, where the $L'_{ij}$ are the partial $(i,j)$-resolutions of $L'$ at the new crossings. Let $a$ be the new circle in $L'_{01}=\begin{tikzpicture}[baseline={([yshift=-.8ex]current bounding box.center)}] \node[thick,inner sep=0,outer sep=0,draw,dotted] at (0,0) {\begin{tikzpicture}[scale=0.04]\draw[solid] (-2,12) to[out=-45,in=45,looseness=2] (-2,-2); \draw[solid] (22,12) to[out=-135,in=135,looseness=2] (22,-2); \draw[solid] (12,8) to[out=135,in=45] (8,8) to[out=-135,in=135,looseness=2] (8,2) to[out=-45,in=-135] (12,2) to[out=45,in=-45,looseness=2] (12,8);\end{tikzpicture}}; \end{tikzpicture}$. For the merge $L'_{01}\to L'_{11}$, we may cancel the subcomplex spanned by $\KhGen(L'_{11})$ and the generators in $\KhGen(L'_{01})$ that do not contain $a$. The remaining quotient complex has an acyclic subcomplex corresponding to the split $L'_{00}\to L'_{01}$, spanned by $\KhGen(L'_{00})$ and the remaining generators in $\KhGen(L'_{01})$. This produces a chain homotopy equivalence between $\uniKhCx(L')$ and $\uniKhCx(L'_{10})$ (modulo shifting the homological grading by one), and the latter is naturally identified with $\uniKhCx(L)$. Since these subquotient complexes come from Burnside sub-functors and Burnside quotient functors, it is automatic that the two stable Burnside functors $F_o(L)=F_o(L'_{10})$ and $\Sigma^{-1}F_o(L')$ are equivariantly equivalent. \item \textbf{RIII:} The proof of Reidemeister III invariance proceeds along the same lines. As discussed earlier, we deviate from the standard proofs of \cite{kho1,natancat,ors} and instead follow the proof from \cite[Proposition 6.4]{lshomotopytype}. Let $L'$ be obtained from $L$ by performing a braid-like Reidemeister III move, as in \cite[Figure 6.1c]{lshomotopytype}.
Then in the six-dimensional partial cube of resolutions of $L'$, one can perform a sequence of cancellations---see \cite[Figure 6.4]{lshomotopytype} and the subsequent table---of Types~\ref{item:merge-cancel} and \ref{item:split-cancel} to produce a chain homotopy equivalence between $\uniKhCx(L')$ and $\uniKhCx(L'_{000111})$ (modulo shifting gradings by three), and the latter is naturally identified with $\uniKhCx(L)$. The subquotient complexes come from Burnside subfunctors and Burnside quotient functors, and once again it follows that the two stable Burnside functors $F_o(L)=F_o(L'_{000111})$ and $\Sigma^{-3}F_o(L')$ are equivariantly equivalent. \end{itemize} We leave it to the reader to convince themselves that the above equivalences automatically respect the decomposition of the Burnside functors according to quantum gradings. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:oddkhmain}] Recall that $\khoo(L)=|\Sigma^{-n_-}F_o|=(\CRealize{F_o}_k,-n_--k)$. By Lemma \ref{lem:functor-map-gives-space-map}, $|\Sigma^{-n_-}F_o|$ depends, up to (nonequivariant) stable homotopy equivalence, only on the stable equivalence class of $\Sigma^{-n_-}F_o$. Then by Theorem \ref{thm:odd-functor-invariant}, the stable homotopy class of $|\Sigma^{-n_-}F_o|$ is an invariant of $L$. Proposition \ref{prop:totalization} identifies the cellular chain complex of $|\Sigma^{-n_-}F_o|$ with the totalization of $\Sigma^{-n_-}F_o$, which is the dual of the Khovanov complex (see the discussion after Definition \ref{def:kh-odd-burnside-functor}), so Theorem \ref{thm:oddkhmain} follows (nonequivariantly). To see that $\khoo(L)$ is well-defined up to equivariant stable homotopy equivalence, we use Lemma \ref{lem:functor-map-gives-space-map-equivariant} in place of Lemma \ref{lem:functor-map-gives-space-map}.
\end{proof} \begin{proof}[Proof of Theorem \ref{thm:evenintro}] As in the proof of Theorem \ref{thm:oddkhmain}, we see that $\khoh'(L)$, up to equivariant stable homotopy equivalence, depends only on the equivariant equivalence class of $\Sigma^{-n_-}F_o$, by Lemma \ref{lem:functor-map-gives-space-map-equivariant}. Theorem \ref{thm:odd-functor-invariant} establishes that the equivariant equivalence class of $\Sigma^{-n_-}F_o$ is a link invariant, and the theorem follows. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:unifiedintro}] The well-definedness follows, as in Theorems \ref{thm:oddkhmain} and \ref{thm:evenintro}, from Lemma \ref{lem:functor-map-gives-space-map-equivariant}. For the CW description, we use Proposition \ref{prop:totalization-equivariant}, which establishes that the equality in Theorem \ref{thm:unifiedintro} is an isomorphism of $\mathbb{Z}_u$-modules. The statement about the $\mathbb{Z}_2$-actions on the reduced cellular chain complex follows from the construction. \end{proof} \subsection{Reduced odd Khovanov homotopy type}\label{subsec:reduced} We briefly address the reduced theory. Given a (generic) point $p$ on a link diagram $L$, there is a natural subfunctor of $F^j_o(L)$ generated by only those Khovanov generators that do not contain the circle $c_p$ containing $p$. Let $\widetilde{F}^{j-1}_{o,+}(L,p)$ denote this subfunctor and $\widetilde{F}^{j+1}_{o,-}(L,p)$ denote the corresponding quotient functor. Next we show that the two reduced functors $\widetilde{F}^j_{o,+}$ and $\widetilde{F}^j_{o,-}$ are canonically identified. Arranging for convenience that, in the ordering of the circles at each resolution, $c_p$---the circle containing $p$---is always last, we obtain a canonical bijection between $\widetilde{F}_{o,-}$ and $\widetilde{F}_{o,+}$. This bijection is compatible with the $1$-morphisms of the even Burnside functor; however, we must also check that it respects the sign map.
To be specific, for $u\geqslant_1 v$, the bijection $\widetilde{F}_{o,-}(\phi_{u,v})\to\widetilde{F}_{o,+}(\phi_{u,v})$ preserves all signs (cf.~\S\ref{sec:threekhfunc}), as the reader may check. We will refer to either functor as $\widetilde{F}_o$. We would expect, based on what happens for the odd Khovanov chain complex, that the unreduced functor $F^j_{o}$ should be stably equivalent to two copies of the reduced functor, $\widetilde{F}^{j-1}_o\amalg\widetilde{F}^{j+1}_o$. However, the chain level splitting from \cite{ors} does not generalize. Indeed, any such stable equivalence cannot be an equivariant equivalence, cf.~Definition \ref{def:stableq}, since by Proposition~\ref{prop:odd-recovers-even}, $\Forget F_o=F_e$ (and similarly, $\Forget \widetilde{F}_o = \widetilde{F}_e$, where $\widetilde{F}_e$ is the reduced even Burnside functor), and the even Burnside functor (and indeed, the even Khovanov chain complex) does not split as two copies of its reduced version. \begin{defn}\label{def:redkh} We define the \emph{reduced odd Khovanov spectrum} $\widetilde\khoo(L,p)=\bigvee_j\widetilde\khoo^j(L,p)$ as a $\mathbb{Z}_2$-equivariant finite CW spectrum, where $\widetilde\khoo^j(L,p)$ is a realization of the stable signed Burnside functor $\Sigma^{-n_-}\widetilde{F}^j_{o}$. \end{defn} \begin{defn}\label{def:reduced-unified} We define the \emph{reduced unified Khovanov spectrum} $\widetilde{\X}_u(L,p)=\bigvee_j\widetilde{\X}_u^j(L,p)$ as a $\mathbb{Z}_2\times \mathbb{Z}_2$-equivariant finite CW spectrum, where $\widetilde{\X}_u^j(L,p)$ is a realization of the stable signed Burnside functor $\Sigma^{-n_-}\mathcal{D} \widetilde{F}^j_{o}$. \end{defn} \begin{proof}[Proof of Theorem \ref{thm:redkh}] Well-definedness of $\khor(L)$, up to equivariant stable homotopy equivalence, will follow from showing that $\Sigma^{-n_-}\widetilde{F}_o$ is well-defined up to equivariant equivalence, depending only on the isotopy class of $(L,p)$.
Isotopy invariance is immediate for Reidemeister moves away from the basepoint (using the maps induced by Reidemeister moves on $\Sigma^{-n_-}F_o$, and observing that they preserve $\Sigma^{-n_-}\widetilde{F}_o$). As observed in \cite{Kho-kh-patterns}, any two diagrams for isotopic pointed links can be related by Reidemeister moves not crossing the basepoint and isotopies in $S^2$, from which well-definedness follows. The cofibration sequence is a consequence of Lemma \ref{lem:cofib}, using the cofibration sequence of Burnside functors: \[ \widetilde{F}^{j-1}_{o,+}(L,p)\to F^j_o(L) \to \widetilde{F}^{j+1}_{o,-}(L,p). \] Finally, the description of the reduced cellular cochain complex is a consequence of Proposition \ref{prop:totalization}, as in the proof of Theorem \ref{thm:oddkhmain}. \end{proof} \begin{prop}\label{prop:reduced-unified} The (stable) homotopy type of the reduced unified Khovanov spectrum $\widetilde{\X}_u(L,p)=\bigvee_j \widetilde{\X}_u^j(L,p)$ from Definition~\ref{def:reduced-unified} is independent of the choices in its construction and is an invariant of the isotopy class of the pointed link corresponding to $(L,p)$. Its reduced cellular cochain complex agrees with the reduced unified Khovanov complex $\widetilde{\mathit{Kh}Cx}_u(L)$, \[ \widetilde{C}_{\mathrm{cell}}^i(\widetilde{\X}_u^j(L,p))= \widetilde{\mathit{Kh}Cx}_u^{i,j}(L), \] with the cells mapping to the distinguished generators of $\widetilde{\mathit{Kh}Cx}_u(L)$. There is a cofibration sequence \[\widetilde{\X}_u^{j-1}(L,p) \to \unis^j(L)\to\widetilde{\X}_u^{j+1}(L,p).\] \end{prop} \begin{proof} The proof of Theorem \ref{thm:redkh} goes through mostly unchanged. The only new observation necessary is that the double of a cofibration sequence of Burnside functors is again a cofibration sequence.
\end{proof} \subsection{Cobordism maps}\label{subsec:cobordism-maps} For every smooth link cobordism $L\to L'$ embedded in $\mathbb{R}^3\times[0,1]$, there is an induced map on the even Khovanov complex $\mathit{Kh}Cx(L)\to\mathit{Kh}Cx(L')$ \cite{jacobsson,natantangle,khinvartangle}, well-defined up to chain homotopy and an overall sign. (The dependence on the overall sign can be removed; see~\cite{cmw-disoriented}.) These maps were lifted in \cite{lsrasmussen} to the even Khovanov homotopy types, $\mathit{Kh}space(L')\to\mathit{Kh}space(L)$, so that the induced map on the cellular cochain complex is the map above, but well-definedness was not checked there. It is fairly easy to check that the map $\mathit{Kh}space(L')\to\mathit{Kh}space(L)$ defined in \cite{lsrasmussen} comes from a map of the even Burnside functors $F_e(L')\to F_e(L)$, so that the dual of the map on their totalizations is the map $\mathit{Kh}Cx(L)\to\mathit{Kh}Cx(L')$. In this section, we will further lift these to maps of the odd Burnside functors $F_o(L')\to F_o(L)$, so that the even Burnside functor map is obtained by applying the forgetful functor $\Forget\colon\Burn_\sigma\to\mathscr{B}$. In particular, we will get maps on the odd Khovanov homotopy type, $\oddKhspace(L')\to\oddKhspace(L)$, and on the odd Khovanov complex, $\KhCx_o(L)\to\KhCx_o(L')$. We will not check the well-definedness of any of these maps. The standard way to define these maps is by decomposing the cobordism as a movie, that is, a sequence of knot diagrams in which each diagram is obtained from the previous one by a planar isotopy, a Reidemeister move, or a Morse critical point, which can be a birth, a death, or a saddle. In \S\ref{subsec:invar}, we have already constructed maps of odd Burnside functors corresponding to the Reidemeister moves (which were also equivariant equivalences). So we only need to construct maps associated to the three Morse singularities.
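The Reidemeister maps recalled above were built from the two cancellation types of \S\ref{subsec:invar}; the underlying algebraic step is ordinary Gaussian elimination of chain complexes. The following toy sketch over $\mathbb{Z}_2$ is our own illustration (all names are ours, not taken from the cited sources): cancelling a pair of generators $(x,y)$ with $y$ appearing in $dx$ reroutes every arrow into $y$ through the boundary of $x$, producing a homotopy-equivalent complex on the remaining generators.

```python
# Gaussian elimination ("cancellation") in a chain complex over F_2.
# The differential d maps a generator label to the set of generator labels
# appearing in its boundary (coefficients in F_2). Toy illustration only.

def d_squared_is_zero(d):
    """Check d∘d = 0 over F_2: each generator must occur an even number
    of times in the double boundary."""
    for _, bd in d.items():
        count = {}
        for y in bd:
            for z in d.get(y, set()):
                count[z] = count.get(z, 0) + 1
        if any(c % 2 for c in count.values()):
            return False
    return True

def cancel(d, x, y):
    """Cancel the pair (x, y), assuming y ∈ d(x); return the induced
    differential on the remaining generators (the zig-zag formula over F_2:
    d'(a) = d(a) + <d(a), y> * (d(x) \\ {y}), restricted away from x, y)."""
    assert y in d.get(x, set())
    everything = set(d) | {g for bd in d.values() for g in bd}
    rest = [g for g in everything if g not in (x, y)]
    new_d = {}
    for a in rest:
        bd = {g for g in d.get(a, set()) if g not in (x, y)}
        if y in d.get(a, set()):  # a hits y: reroute through d(x)
            bd ^= {g for g in d.get(x, set()) if g not in (x, y)}
        new_d[a] = bd
    return new_d
```

Iterating `cancel` over the pairs listed in the proofs above is exactly how the partial cubes of resolutions are reduced to the complexes of the simplified diagrams.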
First we consider the cup and cap cobordisms. Let $L$ be a link diagram, and let $L'=L\amalg U$ be the disjoint union of $L$ and an unknot, introducing no new crossings. The elementary cobordism from $L$ to $L'$ is called a \emph{cup} or a \emph{birth}, while that from $L'$ to $L$ is a \emph{cap} or a \emph{death}. We construct natural transformations \begin{align*} \Phi_o^\cup & \colon F_o(L') \to F_o(L) \\ \Phi_o^\cap & \colon F_o(L) \to F_o(L'), \end{align*} decreasing the quantum grading by $1$, lifting the natural transformations \begin{align*} \Phi_e^\cup & \colon F_e(L') \to F_e(L) \\ \Phi_e^\cap & \colon F_e(L) \to F_e(L') \end{align*} for the even Burnside functors from \cite{lsrasmussen}. In each resolution of $L'$ there is a component corresponding to $U$. We can write $\mathit{Kh}Gen(L')=\mathit{Kh}Gen(L)_+\amalg \mathit{Kh}Gen(L)_-$, where $\mathit{Kh}Gen(L)_-$ (respectively, $\mathit{Kh}Gen(L)_+$) is the subset of generators in $\mathit{Kh}Gen(L')$ which contain $U$ (respectively, do not contain $U$); either is canonically identified with $\mathit{Kh}Gen(L)$ by ordering the circles at each resolution so that $U$ is last. Let $F_o(L)_-$ (respectively, $F_o(L)_+$) be the subfunctor of $F_o(L')$ generated by $\mathit{Kh}Gen(L)_-$ (respectively, $\mathit{Kh}Gen(L)_+$); either is isomorphic to $F_o(L)$, modulo a quantum grading shift of $\pm 1$. Then $F_o(L')=F_o(L)_-\amalg F_o(L)_+$, and so there is a subfunctor inclusion $F_o(L)_-\to F_o(L')$ and a quotient functor projection $F_o(L')\to F_o(L)_+$. We then define the cobordism maps as \begin{align*} \Phi_o^\cup &\colon F_o(L')\to F_o(L)_+ \cong F_o(L)\\ \Phi_o^\cap &\colon F_o(L) \cong F_o(L)_- \to F_o(L'). \end{align*} Next, we handle the saddle case.
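Before doing so, we note that the index-level splitting $\mathit{Kh}Gen(L')=\mathit{Kh}Gen(L)_+\amalg\mathit{Kh}Gen(L)_-$ used above can be sketched concretely. In the toy model below (our own illustration, with made-up names), a Khovanov generator at a fixed resolution is encoded as the set of its circles carrying the label $x$, and $U$ is the new unknot component.

```python
# Toy model of the cup/cap splitting: a Khovanov generator at a fixed
# resolution is a frozenset of circle names (the circles labelled x).
# Illustration only; all names are ours.

def split_by_unknot(gens_Lp, U="U"):
    """Split KhGen(L') into KhGen(L)_- (generators containing U) and
    KhGen(L)_+ (generators not containing U)."""
    minus = {g for g in gens_Lp if U in g}
    plus = {g for g in gens_Lp if U not in g}
    return minus, plus

def cup(g, U="U"):
    """Phi^cup on generators: the quotient projection KhGen(L') ->
    KhGen(L)_+ = KhGen(L); generators containing U are sent to zero."""
    return None if U in g else g

def cap(g, U="U"):
    """Phi^cap on generators: the inclusion KhGen(L) = KhGen(L)_- ->
    KhGen(L'), adjoining U to every generator."""
    return frozenset(g | {U})
```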
Let $L_0,L_1$ be $n$-crossing link diagrams before and after the saddle, as in \cite[Figure 3.2]{lsrasmussen}, and let $F_e(L_0),F_e(L_1)\colon\underline{2}^n\to\mathscr{B}$ be the two even Burnside functors (we have implicitly identified the crossings in $L_0$ with the crossings in $L_1$). Associated to the saddle cobordism, \cite{lsrasmussen} constructs a natural transformation $\Phi^s_e\colon F_e(L_1) \to F_e(L_0)$ as follows. There is an $(n+1)$-crossing diagram $L$ so that $L_i$ is the $i$-resolution of $L$ at the new crossing, for $i=0,1$. Then $\Phi^s_e\colon\underline{2}^{n+1}\to\mathscr{B}$ is simply defined to be $F_e(L)\colon\underline{2}^{n+1}\to\mathscr{B}$---the even Burnside functor associated to $L$. One easily checks that the natural transformation increases the quantum grading by $1$. The generalization to the signed Burnside functor version is immediate, and we obtain a natural transformation $\Phi^s_o\colon F_o(L_1) \to F_o(L_0)$. \begin{lem}\label{lem:main-cobordism} Associated to a (movie presentation of a) link cobordism $L\to L'$, the map on the odd Burnside functors $\Phi_o\colon F^{j+\chi(S)}_o(L')\to F^j_o(L)$ lifts the map on the even Burnside functors $\Phi_e\colon F^{j+\chi(S)}_e(L')\to F^j_e(L)$ that is (implicitly) constructed in \cite{lsrasmussen}: \[ \Forget\Phi_o= \Phi_e. \] In particular, the induced map on the $\mathbb{Z}_2$ chain complexes, $\mathit{Kh}Cx^{*,j}(L;\mathbb{Z}_2)\to\mathit{Kh}Cx^{*,j+\chi(S)}(L';\mathbb{Z}_2)$, agrees with the Khovanov map (up to chain homotopy). \end{lem} \begin{proof} This is immediate from the definitions of each of the maps---the Reidemeister invariance maps, as well as the cups, caps, and the saddles. \end{proof} Let $\mathcal{A}_2$ denote the mod-2 Steenrod algebra. Let $\mathcal{A}^\sigma$ denote the free product of two copies of $\mathcal{A}_2$.
It acts on $\mathit{Kh}(L;\mathbb{Z}_2)$ as follows: the first (respectively, second) copy acts by viewing $\mathit{Kh}(L;\mathbb{Z}_2)$ as the mod-2 cohomology of $\khoh(L)$ (respectively, $\khoo(L)$). \begin{proof}[Proof of Theorem \ref{thm:cobordism-maps}] We follow the proof of \cite[Corollary 5]{lsrasmussen}. Let $S\colon L\to L'$ be the link cobordism and $\mathit{Kh}_S\colon\mathit{Kh}(L;\mathbb{Z}_2)\to\mathit{Kh}(L';\mathbb{Z}_2)$ the induced map. By Lemma~\ref{lem:main-cobordism}, $\mathit{Kh}_S$ comes from a map of Burnside functors $\Phi^S_o\colon F_o(L')\to F_o(L)$. The even and odd realizations give two actions of $\mathcal{A}_2$ on the mod-2 Khovanov homology, and, in particular, by naturality of Steenrod operations on either the odd or even spatial realization, we have a commutative diagram \[ \begin{tikzpicture}[xscale=5,yscale=1.5] \node (a0) at (0,0) {$\mathit{Kh}^{i,j}(L;\mathbb{F})$}; \node (a1) at (1,0) {$\mathit{Kh}^{i+n,j}(L;\mathbb{F})$}; \node (b0) at (0,-1) {$\mathit{Kh}^{i,j+\chi(S)}(L';\mathbb{F})$}; \node (b1) at (1,-1) {$\mathit{Kh}^{i+n,j+\chi(S)}(L';\mathbb{F})$}; \draw[->] (a0) -- (a1) node[pos=0.5,anchor=south] {\scriptsize $\alpha$}; \draw[->] (a0) -- (b0) node[pos=0.5,anchor=east] {\scriptsize $\mathit{Kh}_S$}; \draw[->] (b0) -- (b1) node[pos=0.5,anchor=south] {\scriptsize $\alpha$}; \draw[->] (a1) -- (b1) node[pos=0.5,anchor=west] {\scriptsize $\mathit{Kh}_S$}; \end{tikzpicture} \] for $\alpha$ a stable cohomology operation of degree $n$ coming from either copy of $\mathcal{A}_2$. It is then clear that the diagram also commutes for $\alpha$ a linear combination or composition of elements of the two copies of $\mathcal{A}_2$. The theorem follows directly from these diagrams.
\end{proof} \subsection{Concordance invariants}\label{sec:conc} Theorem~\ref{thm:cobordism-maps} allows one to define knot concordance invariants, once again borrowing arguments directly from \cite{lsrasmussen}. In this section, we work only with knots, not links. For a knot diagram $K$ and a field $\mathbb{F}$, there is a spectral sequence with $E^2$-page the Khovanov homology $\mathit{Kh}^{i,j}(K;\mathbb{F})$ converging to $\mathbb{F}^2$, coming from a descending filtration $\mathcal{F}$ of the Khovanov chain complex $\mathit{Kh}Cx(K)$ so that \[ \mathit{Kh}Cx^{*,j}(K;\mathbb{F})=\mathcal{F}_j/\mathcal{F}_{j+2}. \] This was defined by Lee \cite{lee} for fields not of characteristic $2$, and for all fields by Bar-Natan \cite{natantangle}; see also \cite{turner,naot}. From now on, fix the field $\mathbb{F}=\mathbb{F}_2$, the field with two elements, and work with their variant. The Rasmussen $s$ invariant of $K$ from \cite{rasmus-s}, cf.~\cite{lsrasmussen}, is defined by \begin{align*} s^\mathbb{F}(K)&= \max \{ q\in 2\mathbb{Z}+1 \mid H^*(\mathcal{F}_q)\to H^*(\mathcal{F}_{-\infty}) \cong \mathbb{F}^2 \text{ surjective } \} +1,\\ &= \max\{ q\in 2\mathbb{Z}+1 \mid H^*(\mathcal{F}_q)\to H^*(\mathcal{F}_{-\infty}) \cong \mathbb{F}^2 \text{ nonzero } \} -1. \end{align*} \begin{defn}\label{def:full} Fix $\alpha\in\mathcal{A}^\sigma$ of grading $n>0$. Call $q \in 2\mathbb{Z}+1$ $\alpha$-\emph{half-full} if there exist elements $\widetilde{a} \in \mathit{Kh}^{-n,q}(K;\mathbb{F})$, $\widehat{a} \in \mathit{Kh}^{0,q}(K;\mathbb{F})$, $a\in H^0(\mathcal{F}_q ; \mathbb{F})$, $\bar{a}\in H^0(\mathcal{F}_{-\infty};\mathbb{F})$ such that \begin{enumerate}[leftmargin=*] \item\label{itm:full1} the map $\alpha \colon \mathit{Kh}^{-n,q}(K;\mathbb{F}) \to \mathit{Kh}^{0,q}(K;\mathbb{F})$ from Theorem~\ref{thm:cobordism-maps} sends $\widetilde{a}$ to $\widehat{a}$.
\item The map $H^0(\mathcal{F}_q;\mathbb{F})\to \mathit{Kh}^{0,q}(K;\mathbb{F})=H^0(\mathcal{F}_q/\mathcal{F}_{q+2};\mathbb{F})$ sends $a$ to $\widehat{a}$. \item\label{itm:full3} The map $H^0(\mathcal{F}_q;\mathbb{F})\to H^0(\mathcal{F}_{-\infty};\mathbb{F})$ sends $a$ to $\bar{a}$. \item $\bar{a}\in H^0(\mathcal{F}_{-\infty};\mathbb{F})=\mathbb{F}\oplus \mathbb{F}$ is nonzero. \end{enumerate} Call $q$ $\alpha$-\emph{full} if there exist tuples $(\widetilde{a},\widehat{a},a,\bar{a})$ and $(\widetilde{b},\widehat{b},b,\bar{b})$ as above, with properties (\ref{itm:full1})--(\ref{itm:full3}), and so that $\bar{a},\bar{b}$ form a basis of $H^0(\mathcal{F}_{-\infty};\mathbb{F})$. \end{defn} \begin{defn}\label{defn:concinv} For a knot $K$, define $r^{\alpha}_+(K)=\max \{ q \in 2\mathbb{Z}+1 \mid q \; \text{is }\alpha\text{-half-full} \}+1$, and $s^{\alpha}_+(K)=\max \{ q \in 2\mathbb{Z}+1 \mid q \; \text{is } \alpha\text{-full} \}+3$. If $m(K)$ is the mirror of $K$, let $r^{\alpha}_-(K)=-r^{\alpha}_+(m(K))$ and $s^{\alpha}_-(K)=-s^{\alpha}_+(m(K))$. \end{defn} \begin{thm}\label{thm:conc} Let $\alpha \in \mathcal{A}^\sigma$ and let $S$ be a connected, embedded cobordism in $\mathbb{R}^3\times [0,1]$ from $K$ to $K'$ of genus $g$. Then \begin{align*} | r^{\alpha}_{\pm}(K)-r^{\alpha}_{\pm}(K') | &\leq 2g\\ | s^{\alpha}_{\pm}(K)-s^{\alpha}_{\pm}(K') | &\leq 2g. \end{align*} In particular, $|r^{\alpha}_{\pm}(K)|/2$ and $|s^{\alpha}_{\pm}(K)|/2$ are concordance invariants and lower bounds for the slice genus $g_4(K)$. \end{thm} \begin{proof} This follows from Theorem~\ref{thm:cobordism-maps}, arguing as in \cite[Theorem 1]{lsrasmussen}.
\end{proof} \subsection{Questions} We conclude with some structural questions about the odd Khovanov space: \begin{enumerate}[leftmargin=*,label=(q-\arabic*)] \item In \S\ref{sec:conc} we constructed concordance invariants using the action of the mod-$2$ Steenrod algebra on $\widetilde{H}^*(\khoo(L);\mathbb{Z}_2)$. It is natural to ask for concordance invariants defined from homology with different coefficient fields. Indeed, \cite{lsrasmussen} defines such invariants using stable cohomology operations with any coefficient field. For this, perhaps one needs an analogue of the Lee spectral sequence for odd Khovanov homology. \item\label{item:reduced-2copies} Ozsv\'ath-Rasmussen-Szab\'o~\cite{ors} showed that $\kho^{*,j}(L)=\kor^{*,j-1}(L)\oplus \kor^{*,j+1}(L)$ for any link $L$. Is it the case that $\khoo^j(L) \simeq \khor^{j-1}(L)\vee \khor^{j+1}(L)$? More specifically, is there a stable equivalence between the signed Burnside functors $F^j_o(L)$ and $\widetilde{F}^{j-1}_o(L)\amalg\widetilde{F}^{j+1}_o(L)$? (Such an equivalence has to be non-equivariant.) \item So far, calculations of the odd Khovanov homotopy type are limited. Is it always a wedge sum of Moore spaces? Do there exist links $L$ for which $\khoo(L)$ is not a wedge sum of smash products of Moore spaces? \item In \cite{putyrashumakovitch} there are short exact sequences: \begin{equation*} \mathit{Kh}Cx_e(L) \to \unic(L) \to \KhCx_o(L) \qquad \text{ and } \qquad \KhCx_o(L) \to \unic(L) \to \mathit{Kh}Cx_e(L) \end{equation*} At the level of cohomology, these exact sequences are induced from the cofibration sequences in Theorem \ref{thm:equivariance}. However, the maps in that theorem were not cellular for the coarse CW structure.
That leads to the question: are there CW cofibration sequences \begin{equation*} \khoo(L) \to \unis(L) \to \khoh(L) \qquad \text{ and } \qquad \khoh(L) \to \unis(L) \to \khoo(L) \end{equation*} (with respect to the coarse CW structure) inducing the maps of \cite{putyrashumakovitch} on cellular chain complexes? \item One of the applications of the technology of \cite{lls1} was to show that \[ \khoh(m(L))\simeq \khoh(L)^\vee, \] where $m(L)$ is the mirror of $L$ and $\vee$ denotes the Spanier-Whitehead dual. We conjecture, similarly, that $\khoo(m(L))\simeq \khoo(L)^\vee$. The proof of the statement for even Khovanov homology involved the TQFT structure of even Khovanov homology, and does not immediately generalize to odd Khovanov homology. \item It would be desirable to understand the behavior of the odd Khovanov spectra under disjoint unions and connected sums. Is it possible to express the (equivariant) homotopy type $\khoo(L_1 \amalg L_2)$ in terms of the (equivariant) odd Khovanov spectra of $L_1,L_2$? (In the even theory, it is merely the smash product.) Is it possible to express the odd and unified (unreduced) Khovanov spectra of $L_1\#L_2$ in terms of the spectra of the component links? For the even Khovanov homotopy type, this was dealt with in \cite[Theorem 8]{lls1}: $\khoh(L_1\# L_2)$ is the derived tensor product of the $\khoh(L_i)$ over the even Khovanov spectrum of the unknot. \item Is the old even Khovanov spectrum $\khoh(L)$ stably homotopy equivalent to the new even Khovanov spectrum $\khoh'(L)$? More generally, does the Khovanov spectrum $\mathcal{X}_\ell(L)$ from Remark~\ref{rmk:khovanov-spaces-with-extra-action} depend only on the parity of $\ell$? \end{enumerate} \end{document}
\begin{document} \begin{tikzpicture}[->, >=stealth', auto, semithick, node distance=3cm] \tikzstyle{every state}=[fill=white,draw=black,thick,text=black,scale=0.7] \node[coordinate] (beginpoint) {}; \node[smallsquare,right of=beginpoint,node distance=1.45cm] (sw1) {}; \node[smallCir,above right of=sw1,node distance=1cm] (sw2) {}; \node[smallCir,below right of=sw1,node distance=1cm] (sw3) {}; \node[ctrlBlk, right of=sw2,node distance=1.9cm] (stochasticCtrl) {$\hat{\kappa}_{(\theta_k)}(y)$}; \node[ctrlBlk, right of=sw3,node distance=1.9cm] (detCtrl) {$\hat{\kappa}_{(-1)}(y)=0y$}; \node[below of= detCtrl,node distance=0.5cm] (detsetup) {\scriptsize \spaceskip0pt \it Deterministic controller (open-loop)}; \node[coordinate, above of=stochasticCtrl,node distance=2.5cm] (markovChain) {$\hat{\kappa}(\theta_k)$}; \node[smallsquare,right of=beginpoint,node distance=6.55cm] (intpoint) {}; \node[ctrlBlk, right of=intpoint,node distance=1.9cm] (plant) {Plant $y_k$}; \node[coordinate,right of=plant,node distance=1.75cm] (endpoint) {}; \node[coordinate,below of=intpoint,node distance=1.7cm] (feedbackpoint) {}; \node[coordinate,right of=sw1,node distance=0.2cm] (foo1) {}; \node[coordinate,above of=foo1,node distance=0.2cm] (foo2) {}; \node[coordinate,below right of=foo1,node distance=0.5cm] (foo3) {}; \node[coordinate,right of=beginpoint,node distance=0.95cm] (foo7) {}; \node[above of=foo7,node distance=0.75cm,align=center] (event1) { \small $\left |y_k(1) \right |> d$\\ {\scriptsize ($\beta_k=1$)}}; \node[below of=foo7,node distance=0.95cm,align=center] (event2) { \small $\left |y_k(1) \right |\leq d$ \\ {\scriptsize ($\beta_k=0$)}}; \node[right of=markovChain,node distance=3.1cm] (legen) {Markov switching}; \draw[dashed] (markovChain) circle (1.55cm); \node[coordinate,below of=markovChain,node distance=1.55cm] (mcpoint){}; \node[below right of=mcpoint,node distance=0.4cm] (theta){$\theta_k$}; \node[below of= stochasticCtrl,node distance=0.5cm,align=left] (stoSetup) {\scriptsize
\it Stochastic controller}; \node[state,above of= markovChain,node distance=1.7cm] (S1) {$S_2$}; \node[state,below left of= S1,node distance=1.6cm] (S0) {$S_1$}; \node[state,below right of= S1,node distance=1.8cm] (S2) {$S_3$}; \node[state,below of= S2,node distance=1.3cm] (S3) {$\cdots$}; \node[state,below of= S0,node distance=1.7cm] (S4) {$S_{...}$}; \node[coordinate,right of=beginpoint,node distance=-0.1cm] (boundint){}; \node[coordinate,above of=boundint,node distance=1.3cm] (bound1){}; \node[coordinate,right of=bound1,node distance=2.5cm] (bound2){}; \node[coordinate,below of=bound2,node distance=2.4cm] (bound3){}; \node[coordinate,left of=bound3,node distance=2.5cm] (bound4){}; \draw[dashed,-] (bound1)--(bound2)--(bound3)--(bound4)--(bound1); \node[coordinate,right of=bound1,node distance=0.9cm] (mid){}; \node[above of=mid,node distance=0.3cm] (legenbound){\scriptsize Event-triggering}; \node[smallCir, right of=stochasticCtrl,node distance=1.8cm] (swS) {}; \node[smallCir, right of=detCtrl,node distance=1.8cm] (swD) {}; \draw [->,line width=2pt](intpoint) to (swS); \draw[-] (stochasticCtrl) --(swS); \draw[-] (detCtrl) --(swD); \node[coordinate,left of=swS,node distance=0.3cm] (fooS){}; \node[coordinate,above of=fooS,node distance=0.3cm] (boundS1){}; \node[coordinate,right of=boundS1,node distance=1.2cm] (boundS2){}; \node[coordinate,below of=boundS2,node distance=2cm] (boundS3){}; \node[coordinate,left of=boundS3,node distance=1.2cm] (boundS4){}; \draw[-,dashed] (boundS1)--(boundS2)--(boundS3)--(boundS4)--(boundS1); \node[coordinate,right of=boundS1,node distance=0.6cm] (legendS1){}; \node[above of=legendS1,node distance=0.2cm] (legendS2){}; \node[coordinate,left of= intpoint,node distance=0.4cm] (apivot){}; \node[coordinate,above right of= apivot,node distance=0.2cm] (swsync1){}; \node[coordinate,below of= apivot,node distance=0.4cm] (swsync2){}; \draw [bend right,<->] (swsync1) to (swsync2); \draw [bend left,->] (S0) to (S1); \draw [bend left,->] (S1) to 
(S2); \draw [->] (S0) to (S2); \draw [->] (S0) to (S3); \draw [->] (S1) to (S0); \draw [bend right,<->] (S1) to (S3); \draw [<->] (S2) to (S4); \draw [->] (S2) to (S1); \draw [bend right,->] (S2) to (S3); \draw [bend right,->] (S3) to (S2); \draw [bend right,->] (S3) to (S4); \draw [bend right,->] (S4) to (S3); \draw [bend right,->] (S0) to (S4); \draw[->] (intpoint) -- node{$u_k$}(plant); \draw[-] (plant) -- node{$y_k$}(endpoint); \draw[-] (endpoint) |-(feedbackpoint); \draw[-] (feedbackpoint) -|(beginpoint); \draw[->] (beginpoint) --(sw1); \draw[->,line width=2pt] (sw1) --(sw2); \draw[->] (sw2) --(stochasticCtrl); \draw[->] (sw3) --(detCtrl); \draw[bend left,<->] (foo2) to (foo3); \draw[->,line width=1.2pt] (mcpoint) to (stochasticCtrl); \end{tikzpicture} \end{document}
\begin{document} \title{On the global existence and qualitative behavior of one-dimensional solutions to a model for urban crime} \author{ Nancy Rodriguez\footnote{[email protected]}\\ {\small CU Boulder, Department of Applied Mathematics, Engineering Center, ECOT 225 }\\ {\small Boulder, CO 80309-0526} \and Michael Winkler\footnote{[email protected]}\\ {\small Institut f\"ur Mathematik, Universit\"at Paderborn,}\\ {\small 33098 Paderborn, Germany} } \date{} \maketitle \begin{abstract} \noindent We consider the no-flux initial-boundary value problem for the cross-diffusive evolution system \begin{eqnarray*} \left\{ \begin{array}{ll} u_t = u_{xx} - \chi \big(\frac{u}{v} \partial_x v \big)_x - uv +B_1(x,t), \qquad & x\in \Omega, \ t>0, \\[1mm] v_t = v_{xx} +uv - v + B_2(x,t), \qquad & x\in \Omega, \ t>0, \end{array} \right. \end{eqnarray*} which was introduced by Short {\it et al.} in \cite{Short2008} with $\chi=2$ to describe the dynamics of urban crime.\\[5pt] In bounded intervals $\Omega\subset\mathbb{R}$ and with prescribed suitably regular nonnegative functions $B_1$ and $B_2$, we first prove the existence of global classical solutions for any choice of $\chi>0$ and all reasonably regular nonnegative initial data. \\[5pt] We next address the issue of determining the qualitative behavior of solutions under appropriate assumptions on the asymptotic properties of $B_1$ and $B_2$. Indeed, for arbitrary $\chi>0$ we obtain boundedness of the solutions given strict positivity of the average of $B_2$ over the domain; moreover, it is seen that imposing a mild decay assumption on $B_1$ implies that $u$ must decay to zero in the long-term limit.
Our final result, valid for all $\chi\in\left(0,\frac{\sqrt{6\sqrt{3}+9}}{2}\right),$ which contains the relevant value $\chi=2$, states that under the above decay assumption on $B_1$, if furthermore $B_2$ appropriately stabilizes to a nontrivial function $B_{2,\infty}$, then $(u,v)$ approaches the limit $(0,v_\infty)$, where $v_\infty$ denotes the solution of \begin{eqnarray*} \left\{ \begin{array}{l} -\partial_{xx}v_\infty + v_\infty = B_{2,\infty}, \qquad x\in \Omega, \\[1mm] \partial_x v_{\infty}=0, \qquad x\in\partial\Omega. \end{array} \right. \end{eqnarray*} We conclude with some numerical simulations exploring possible effects that may arise when considering large values of $\chi$ not covered by our qualitative analysis. We observe that when $\chi$ increases, solutions may grow substantially on short time intervals, whereas only on large time scales will diffusion dominate and enforce equilibration.\\[5pt] {\bf Keywords:} urban crime, global existence, decay estimates, long-time behavior\\ {\bf MSC (2010):} 35Q91 (primary); 35B40, 35K55 (secondary) \end{abstract} \section{Introduction}\label{intro} Driven by the need to understand the spatio-temporal dynamics of crime {\it hotspots}, which are regions in space with a disproportionately high level of crime, Short and collaborators introduced a reaction-advection-diffusion system in \cite{Short2008} to describe the evolution of urban crime. When posed on one-dimensional spatial domains $\Omega$, this system reads \begin{equation} \label{0} \left\{ \begin{array}{ll} u_t = u_{xx} - \chi \big(\frac{u}{v} v_x\big)_x - uv +B_1(x,t), \qquad & x\in \Omega, \ t>0, \\[1mm] v_t = v_{xx} +uv - v + B_2(x,t), \qquad & x\in \Omega, \ t>0, \end{array} \right. \end{equation} with the parameter $\chi$ fixed as \begin{equation} \label{chi} \chi=2, \end{equation} and with given source functions $B_1$ and $B_2$.
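For readers who wish to experiment with \eqref{0}, the following is a minimal explicit finite-difference sketch with homogeneous Neumann (no-flux) boundary conditions, imposed via mirrored ghost nodes. This is our own illustration with arbitrary, untuned step sizes; it is not the scheme used for the simulations reported in this paper, and an explicit scheme requires small time steps for stability.

```python
import numpy as np

# Explicit finite-difference sketch of system (0) on a uniform grid,
# with chi = 2 as in (chi) and no-flux boundary conditions.
# Illustration only: parameters and step sizes are arbitrary.

def step(u, v, B1, B2, dx, dt, chi=2.0):
    """Advance (u, v) by one explicit Euler step; v must stay positive."""
    def neumann(w):
        # mirrored ghost nodes enforcing w_x = 0 at both boundaries
        return np.concatenate(([w[1]], w, [w[-2]]))

    U, V = neumann(u), neumann(v)
    uxx = (U[2:] - 2 * u + U[:-2]) / dx**2
    vxx = (V[2:] - 2 * v + V[:-2]) / dx**2

    # taxis flux  chi * (u/v) * v_x  evaluated at the cell interfaces
    Um = 0.5 * (U[1:] + U[:-1])
    Vm = 0.5 * (V[1:] + V[:-1])
    flux = chi * (Um / Vm) * (V[1:] - V[:-1]) / dx
    div_flux = (flux[1:] - flux[:-1]) / dx

    u_new = u + dt * (uxx - div_flux - u * v + B1)
    v_new = v + dt * (vxx + u * v - v + B2)
    return u_new, v_new
```

With spatially constant data $u\equiv v\equiv 1$ and $B_1=B_2=0$, both diffusion and taxis vanish, so a single step reproduces the reaction terms $u_t=-uv$ and $v_t=uv-v=0$, which provides a quick sanity check of the implementation.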
In \eqref{0}, $u(x,t)$ represents the {\it density of criminal agents} and $v(x,t)$ the {\it attractiveness value}, which provides a measure of how susceptible a given location $x$ is to crime at time $t.$ System \eqref{0} was derived from an agent-based model rooted in the assumption of ``routine activity theory'', a criminological theory stating that opportunity is the most important factor leading to crime \cite{Cohen1979, Felson1987}. The system models two sociological effects: the `repeat and near-repeat victimization' effect and the `broken-windows theory'. The former has been observed in residential burglary data and refers to the fact that the burglarization of a house increases the probability that the same house, as well as neighboring houses, will be burgled again within a short period of time following the original burglary \cite{Johnson1997, Short2009}. The latter is the theory that, in a sense, crime is self-exciting -- crime tends to lead to more crime \cite{Kelling1982}.\\[5pt] From the first equation in \eqref{0} we see that criminal agents move according to a combination of conditional and unconditional diffusion. The conditional diffusion is a biased movement toward high concentrations of the attractiveness value, which leads to the taxis term in the first equation. We stress that the coefficient $\chi = 2$ in front of the taxis term, which as we shall see adds a challenge, comes from the first-principles derivation of system \eqref{0}, and thus it is important that our theory cover this case -- see \cite{Short2008} for more details. The assumption that criminal agents abstain from committing a second crime leads to the decay term $-uv.$ Indeed, roughly speaking, the expected number of crimes is given by $uv$, and so the expected number of criminal agents removed is $uv$. The prescribed nonnegative term $B_1(x,t)$ describes the introduction of criminal agents into the system.
Furthermore, the repeat victimization effect assumes that each criminal activity increases the attractiveness value, leading to the $+uv$ term in the second equation of \eqref{0}, while the near-repeat victimization effect leads to the unconditional diffusion also observed in that equation. Finally, the assumption that certain neighborhoods tend to be more crime-prone than others, whatever the reasons may be, is included in the prescribed non-negative term $B_2(x).$\\[5pt] The introduction of system \eqref{0} has generated a great deal of activity related to the analysis of \eqref{0}, which has contributed to the mathematical theory as well as to the understanding of crime dynamics. For example, the emergence and suppression of {\it hotspots} was studied by Short {\it et al.} in \cite{Short2010}, providing insight into the effectiveness of {\it hotspot policing}. The existence and stability of localized patterns representing hotspots have been studied in various works -- see \cite{Berestycki2014a, Cantrell2012, Gu2016, Kolokolnikiv2014,Tse2008}. A more general class of systems was proposed for the dynamics of criminal activity by Berestycki and Nadal in \cite{Berestycki2010} -- see also \cite{Berestycki2013c} for an analysis of these models. The system \eqref{0} has also been generalized in various directions. For example, the incorporation of law enforcement has been proposed and analyzed in \cite{Jones2010, Ricketson2010, Zipkin2014}; the movement of {\em commuter} criminal agents was modeled in \cite{Chaturapruek2013} through the use of L\'evy flights. The dynamics of crime have also been studied with the use of dynamical systems; we refer the reader to \cite{McMillon2014, Nuno2011}. It is also important to note that the work in \cite{Short2008} has been the impetus for the use of PDE-type models to gain insight into various other social phenomena -- see for example \cite{Barbaro2013, Rodriguez2014a, Smith2012}.
Interested readers are referred to the comprehensive review of mathematical models and theory for criminal activity in \cite{DOrsogna2015}.\\[5pt] From a perspective of mathematical analysis, (\ref{0}) shares essential ingredients with the celebrated Keller-Segel model for chemotaxis processes in biology, which in its simplest form can be obtained on considering the constant sensitivity function $S\equiv 1$ in \begin{equation} \label{KS} \left\{ \begin{array}{ll} u_t = \Delta u - \nabla \cdot (uS(v)\nabla v), \qquad & x\in \Omega, \ t>0, \\[1mm] v_t = \Delta v - v + u, \qquad & x\in \Omega, \ t>0. \end{array} \right. \end{equation} Here the interplay of such cross-diffusive terms with the linear production mechanism expressed in the second equation is known to have a strongly destabilizing potential in multi-dimensional situations: when posed under no-flux boundary conditions in bounded domains $\Omega\subset\mathbb{R}^n$, $n\ge 1$, (\ref{KS}) is globally well-posed in the case $n=1$ (\cite{osaki_yagi}), whereas some solutions may blow up in finite time when either $n= 2$ and the conserved quantity $\int_\Omega u(\cdot,t)$ is suitably large (\cite{herrero_velazquez}), or when $n\ge 3$ (\cite{win_JMPA}; cf.~also the recent survey \cite{BBTW}).\\[5pt] That, in contrast to this, decaying sensitivities may exert a substantial regularizing effect is indicated by the fact that if e.g.~$S(v)=\frac{a}{(1+bv)^\alpha}$ for all $v\ge 0$ and some $a>0$, $b>0$ and $\alpha>1$, then actually for arbitrary $n\ge 1$ global bounded solutions to (\ref{KS}) always exist (\cite{win_MANA}).
However, in the particular case of the so-called logarithmic sensitivity given by $S(v)=\frac{\chi}{v}$ for $v>0$ with $\chi>0$, as present in (\ref{0}), the situation seems less clear in that global bounded solutions so far have been constructed only under smallness conditions of the form $\chi<\sqrt{\frac{2}{n}}$ (\cite{biler}, \cite{win_chemosing}), with a slight extension up to the weaker condition $\chi<\chi_0$ with some $\chi_0\in (1.015,2)$ possible when $n=2$ (\cite{lankeit_MMAS}); for larger values of $\chi$ including the choice in (\ref{chi}), in the case $n\ge 2$ only certain global weak solutions to (\ref{KS}), possibly becoming unbounded in finite time, are known to exist in various generalized frameworks (\cite{win_chemosing}, \cite{stinner_win}, \cite{lw2}).\\[5pt] With regard to issues of regularity and boundedness, the situation in (\ref{0}) seems yet more delicate than in the latter version of (\ref{KS}): In (\ref{0}), namely, the production of the attractiveness value occurs in a nonlinear manner, which in comparison to (\ref{KS}) may further stimulate the self-enhanced generation of large cross-diffusive gradients. To the best of our knowledge, no results on global existence are available so far for any version of (\ref{KS}) in which such reaction terms are introduced, even in spatially one-dimensional cases, and it seems far from obvious to which extent such mechanisms can be compensated by the supplementary absorptive term $-uv$ in the first equation of (\ref{0}).\\[5pt] Accordingly, the literature on initial-value problems for (\ref{0}) is still at quite an early stage and actually limited to a first local existence and uniqueness result achieved in \cite{Rodriguez2010}. Statements on global existence have been obtained only for certain modified versions which contain additional regularizing ingredients (\cite{Manasevich2012}, \cite{Rodriguez2013}).
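Before turning to our main results, we note that the dynamics of \eqref{0} can readily be explored numerically. The following minimal explicit finite-difference sketch is our own ad hoc illustration (the grid, time step, sources $B_1$, $B_2$ and initial data are arbitrary choices, not taken from \cite{Short2008}); it integrates \eqref{0} on $\Omega=(0,1)$ under the homogeneous Neumann boundary conditions used below.

```python
import numpy as np

# Minimal explicit finite-volume sketch of system (0) on Omega = (0,1) with
# homogeneous Neumann (zero-flux) boundary conditions.  All parameter choices
# below (grid, time step, B1, B2, initial data) are ad hoc illustrations.
N, dx, dt, chi = 100, 1.0 / 100, 1.0e-5, 2.0
x = (np.arange(N) + 0.5) * dx                 # cell centers
u = 1.0 + 0.5 * np.cos(np.pi * x)             # density of criminal agents
v = 1.0 + 0.2 * np.cos(2 * np.pi * x)         # attractiveness value (positive)
B1 = 0.1 * np.ones(N)                         # source of criminal agents
B2 = 0.5 * (1.0 + x)                          # intrinsic attractiveness source

def pad(w):
    # ghost cells mirroring the boundary values enforce the zero-flux condition
    return np.concatenate(([w[0]], w, [w[-1]]))

for _ in range(2000):
    up, vp = pad(u), pad(v)
    lap_u = (up[2:] - 2.0 * u + up[:-2]) / dx**2
    lap_v = (vp[2:] - 2.0 * v + vp[:-2]) / dx**2
    # conservative discretization of the taxis flux chi * (u/v) * v_x
    ratio = up / vp
    flux = chi * 0.5 * (ratio[1:] + ratio[:-1]) * (vp[1:] - vp[:-1]) / dx
    div_flux = (flux[1:] - flux[:-1]) / dx
    u_new = u + dt * (lap_u - div_flux - u * v + B1)
    v = v + dt * (lap_v + u * v - v + B2)
    u = u_new
```

On this short time horizon both components stay positive; for larger $\chi$ or steeper gradients of $v$, an upwind or implicit treatment of the taxis term would be advisable.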
\\[5pt] {\bf Main results.} \quad In the present work we attempt to undertake a first step into a qualitative theory for the full original model from \cite{Short2008} by developing an approach capable of analyzing the spatially one-dimensional system (\ref{0}) in a range of parameters including the choice given in (\ref{chi}). Here we will first concentrate on establishing a result on global existence of classical solutions under mild assumptions on $\chi$, $B_1$ and $B_2$. Our second focus will be on the derivation of qualitative solution properties under additional assumptions.\\[5pt] In order to specify the setup for our analysis, for a given parameter $\chi>0$ let us consider (\ref{0}) along with the boundary conditions \begin{equation} \label{0b} u_x=v_x=0, \qquad x\in \partial\Omega, \ t>0, \end{equation} and the initial conditions \begin{equation} \label{0i} u(x,0)=u_0(x), \quad v(x,0)=v_0(x), \qquad x\in\Omega, \end{equation} in a bounded open interval $\Omega\subset \mathbb{R}$. We assume throughout the sequel that \begin{equation} \label{B} \mbox{$B_1$ and $B_2$ are nonnegative bounded functions belonging to $C^\vartheta_{loc}(\overline{\Omega}\times [0,\infty))$ for some $\vartheta \in (0,1)$,} \end{equation} and that \begin{equation} \label{init} \left\{ \begin{array}{ll} u_0\in C^0(\overline{\Omega}),& \quad \mbox{with } u_0\ge 0 \mbox{ in } \Omega, \\[1mm] v_0\in W^{1,\infty}(\Omega),& \quad \mbox{with } v_0> 0 \mbox{ in } \overline{\Omega}. \end{array} \right. \end{equation} In this general framework, we shall see that in fact for arbitrary $\chi>0$, the problem \eqref{0}, \eqref{0b}, \eqref{0i} is globally well-posed in the following sense. \begin{theo}\label{theo5} Let $\chi>0$ and suppose that $B_1$ and $B_2$ satisfy \eqref{B}.
Then for any choice of $u_0$ and $v_0$ fulfilling \eqref{init}, the problem \eqref{0}, \eqref{0b}, \eqref{0i} possesses a global classical solution which, for each $r>1$, is uniquely determined by the inclusions \begin{equation} \label{5.1} \left\{ \begin{array}{l} u\in C^0(\overline{\Omega}\times [0,\infty)) \cap C^{2,1}(\overline{\Omega}\times (0,\infty)), \\[1mm] v\in C^0([0,\infty);W^{1,r}(\Omega)) \cap C^{2,1}(\overline{\Omega}\times (0,\infty)), \end{array} \right. \end{equation} for which $u,v>0$ in $\overline{\Omega}\times (0,\infty)$. \end{theo} The qualitative behavior of these solutions, especially on large time scales, will evidently depend on respective asymptotic properties of the parameter functions $B_1$ and $B_2$. Our efforts in this direction will particularly make use of either suitable assumptions on large-time decay of $B_1$ or of certain weak but temporally uniform positivity properties of $B_2$. Specifically, in our analysis we will alternately refer to the hypotheses \begin{equation} \label{H1} \int_0^\infty \int_\Omega B_1 < \infty, \tag{H1} \end{equation} and, in a weaker form, \begin{equation} \label{H1'} \int_t^{t+1} \int_\Omega B_1(x,s) dxds \to 0 \qquad \mbox{as } t\to\infty, \tag{H1'} \end{equation} on decay of $B_1$, and \begin{equation} \label{H2} \inf_{t>0} \int_\Omega B_2(x,t)dx > 0, \tag{H2} \end{equation} on the positivity of $B_2$. In some places we will also assume that $B_2$ stabilizes in the sense that \begin{equation} \label{H3} \int_t^{t+1} \int_\Omega \Big(B_2(x,s)-B_{2,\infty}(x)\Big)^2 dxds \to 0 \qquad \mbox{as } t\to\infty, \tag{H3} \end{equation} holds with some $B_{2,\infty}\in L^2(\Omega)$.\\[5pt] Indeed, the assumption \eqref{H2} implies boundedness of both solution components, and under the additional requirement that \eqref{H1'} be valid, $u$ must even decay in the large time limit. \begin{theo}\label{theo51} Let $\chi>0$ and suppose that \eqref{B} and \eqref{init} are fulfilled.
If moreover \eqref{H2} holds, then there exists $C>0$ with the property that the solution $(u,v)$ of \eqref{0}, \eqref{0b}, \eqref{0i} satisfies \begin{equation} \label{51.1} u(x,t) \le C \qquad \mbox{for all $x\in\Omega$ and } t>0, \end{equation} and \begin{equation} \label{51.2} \frac{1}{C} \le v(x,t) \le C \qquad \mbox{for all $x\in\Omega$ and } t>0. \end{equation} If additionally \eqref{H1'} is valid, then \begin{equation} \label{51.3} u(\cdot,t) \to 0 \quad \mbox{in } L^\infty(\Omega) \qquad \mbox{as } t\to\infty. \end{equation} \end{theo} We shall secondly see that for all $\chi$ within an appropriate range, including the relevant value $\chi=2$, also the mere assumption \eqref{H1} is sufficient for boundedness, at least of the second solution component, and that moreover the latter even stabilizes when additionally \eqref{H3} is satisfied. \begin{theo}\label{theo52} Let $\chi>0$ be such that \begin{equation} \label{52.1} \chi < \frac{\sqrt{6\sqrt{3}+9}}{2}=2.201834..., \end{equation} and let $B_1$ and $B_2$ be such that besides \eqref{B}, also \eqref{H1} holds. Then for each pair $(u_0,v_0)$ fulfilling \eqref{init}, one can find $C>0$ such that the solution $(u,v)$ of \eqref{0}, \eqref{0b}, \eqref{0i} satisfies \begin{equation} \label{52.2} v(x,t) \le C \qquad \mbox{for all $x\in\Omega$ and } t>0. \end{equation} Furthermore, if \eqref{H3} is valid with some $B_{2,\infty}\in L^2(\Omega)$, then \begin{equation} \label{52.3} v(\cdot,t) \to v_\infty \quad \mbox{in } L^\infty(\Omega) \qquad \mbox{as } t\to\infty, \end{equation} where $v_\infty$ denotes the solution to the boundary value problem \begin{equation} \label{vinfty} \left\{ \begin{array}{l} -\partial_{xx}v_\infty + v_\infty = B_{2,\infty}, \qquad x\in \Omega, \\[1mm] \partial_xv_{\infty}=0, \qquad x\in\partial\Omega. \end{array} \right.
\end{equation} \end{theo} Let us finally state an essentially immediate consequence of Theorem \ref{theo51} and Theorem \ref{theo52} under slightly sharper but yet quite practicable assumptions. \begin{cor}\label{cor53} Let $\chi\in (0,\frac{\sqrt{6\sqrt{3}+9}}{2})$, and suppose that the functions $B_1$ and $B_2$ are such that beyond \eqref{B} and \eqref{H1} we have \begin{equation} \label{53.2} B_2(\cdot,t) \to B_{2,\infty} \quad \mbox{a.e.~in } \Omega \qquad \mbox{as $t\to\infty$}, \end{equation} with some $0\not\equiv B_{2,\infty} \in L^1(\Omega)$. Then for each $u_0$ and $v_0$ satisfying \eqref{init}, the corresponding solution $(u,v)$ of \eqref{0}, \eqref{0b}, \eqref{0i} has the properties that \begin{eqnarray*} u(\cdot,t) \to 0 \quad \mbox{in } L^\infty(\Omega) \qquad \mbox{as } t\to\infty, \end{eqnarray*} and \begin{eqnarray*} v(\cdot,t) \to v_\infty \quad \mbox{in } L^\infty(\Omega) \qquad \mbox{as } t\to\infty, \end{eqnarray*} where $v_\infty$ solves \eqref{vinfty}. \end{cor} {\sc Proof.} \quad In view of the dominated convergence theorem, \eqref{53.2} along with the boundedness of $B_2$ entails that actually $B_{2,\infty}\in L^\infty(\Omega)$, that \eqref{H3} holds and that moreover $\int_\Omega B_2(\cdot,t)\to \int_\Omega B_{2,\infty}\ne 0$ as $t\to\infty$, whence for some $t_0>0$ we have $\inf_{t>t_0} \int_\Omega B_2(\cdot,t)>0$. The claim therefore results on applying Theorem \ref{theo51} and Theorem \ref{theo52} with $(u,v,B_1,B_2)(x,t)$ replaced by $(u,v,B_1,B_2)(x,t_0+t)$ for $(x,t)\in\overline{\Omega}\times [0,\infty)$.
$\Box$ \vskip.2cm {\bf Outline.} \quad After asserting local existence of solutions and some of their basic features in Section \ref{sec:local}, in Section \ref{sec:fund} we will derive some fundamental estimates resulting from an analysis of the coupled functional $\int_\Omega u^p v^q$ which indeed enjoys a certain entropy-type property if, in dependence on the size of $\chi$, the crucial exponent $p$ therein is small enough and $q$ belongs to an appropriate range. Accordingly implied consequences on regularity features will thereafter enable us to verify Theorem \ref{theo5} and Theorem \ref{theo51} in Section \ref{sec:global}. Finally, Section \ref{sec:longtime} will contain our proof of Theorem \ref{theo52}, where we highlight already here that particular challenges will be linked to the derivation of $L^\infty$ bounds for $v$, and that these will be accomplished on the basis of a recursive argument available under the assumption (\ref{52.1}). \mysection{Local existence and basic estimates}\label{sec:local} Let us first make sure that our overall assumptions warrant local-in-time solvability of \eqref{0}, \eqref{0b}, \eqref{0i}, along with a convenient extensibility criterion. \begin{lem}\label{lem_loc} Under the assumptions of Theorem \ref{theo5}, there exist $T_{max}\in (0,\infty]$ and a uniquely determined pair $(u,v)$ of functions \begin{eqnarray*} \left\{ \begin{array}{l} u \in C^0(\overline{\Omega}\times [0,T_{max})) \cap C^{2,1}(\overline{\Omega}\times (0,T_{max})), \\[1mm] v \in \bigcap\limits_{r>1} C^0([0,T_{max}); W^{1,r}(\Omega)) \cap C^{2,1}(\overline{\Omega}\times (0,T_{max})), \end{array} \right.
\end{eqnarray*} which solve \eqref{0}, \eqref{0b}, \eqref{0i} classically in $\overline{\Omega}\times [0,T_{max}).$ Moreover, $u>0$ and $v>0$ in $\overline{\Omega}\times (0,T_{max})$ and \begin{equation} \label{ext_crit} \mbox{either $T_{max}=\infty$, \quad or \quad} \limsup_{t\nearrow T_{max}} \Big\{ \|u(\cdot,t)\|_{L^\infty(\Omega)} + \Big\|\frac{1}{v(\cdot,t)}\Big\|_{L^\infty(\Omega)} + \|v_x(\cdot,t)\|_{L^r(\Omega)} \Big\} = \infty \quad \mbox{for all } r>1. \end{equation} \end{lem} {\sc Proof.} \quad The result is a straightforward application of well-established techniques from the theory of tridiagonal cross-diffusive systems (\cite{amann}, specifically applied to chemotaxis systems in \cite{horstmann_win}). $\Box$ \vskip.2cm Throughout the sequel, without explicit further mentioning we shall assume the requirements of Theorem \ref{theo5} to be met, and let $u, v$ and $T_{max}$ be as provided by Lemma \ref{lem_loc}.\\[5pt] In order to derive some basic features of this solution, let us recall the following well-known pointwise positivity property of the Neumann heat semigroup $(e^{t\Delta})_{t\ge 0}$ on the bounded real interval $\Omega$ (cf.~e.g.~\cite[Lemma 3.1]{hpw}). \begin{lem}\label{lem24} Let $\tau>0$. Then there exists a constant $C>0$ such that for all nonnegative $\varphi\in C^0(\overline{\Omega})$, \begin{eqnarray*} e^{t\Delta} \varphi \ge C \int_\Omega \varphi \quad \mbox{in } \Omega \qquad \mbox{for all } t> \tau. \end{eqnarray*} \end{lem} Using the previous lemma along with a parabolic comparison argument, we obtain a basic but important pointwise lower estimate for the second solution component. This lower bound is local-in-time for arbitrary $B_1$ and $B_2$ and global-in-time when \eqref{H2} is satisfied.
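To illustrate Lemma \ref{lem24} concretely (a numerical cross-check of ours, not part of any proof), one can realize $e^{t\Delta}$ on $\Omega=(0,\pi)$ through the cosine eigenfunction expansion of the Neumann Laplacian and observe that, beyond a fixed waiting time $\tau$, the pointwise minimum of $e^{t\Delta}\varphi$ stays above a fixed positive multiple of $\int_\Omega \varphi$ for various nonnegative $\varphi$:

```python
import numpy as np

# Numerical cross-check (not part of the proof) of the lower bound
# e^{t*Delta} phi >= C * int_Omega phi for t >= tau from Lemma lem24,
# on Omega = (0, pi) with Neumann boundary conditions, via the cosine
# eigenfunction expansion of the Neumann Laplacian.

def trap(y, x):
    # simple trapezoidal quadrature
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def neumann_heat(phi, x, t, modes=60):
    """Evaluate (e^{t*Delta} phi)(x) by spectral synthesis."""
    L = np.pi
    out = (trap(phi, x) / L) * np.ones_like(x)     # k = 0 mode (spatial mean)
    for k in range(1, modes):
        ck = (2.0 / L) * trap(phi * np.cos(k * x), x)
        out += ck * np.exp(-k**2 * t) * np.cos(k * x)
    return out

x = np.linspace(0.0, np.pi, 801)
tau = 0.5
samples = [np.ones_like(x), np.cos(x)**2, np.sin(x)**2, x / np.pi, (np.pi - x) / np.pi]
# ratio of the pointwise minimum at time tau to the initial mass
ratios = [neumann_heat(phi, x, tau).min() / trap(phi, x) for phi in samples]
```

For these test data the ratios remain above a fixed positive constant, as the lemma predicts; the admissible constant $C$ of course degenerates as $\tau\searrow 0$.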
\begin{lem}\label{lem001} For all $T>0$ there exists $C(T)>0$ such that with $T_{max}$ from Lemma \ref{lem_loc}, for $\widehat{T}_{max}:=\min\{T,T_{max}\}$ we have \begin{equation} \label{001.1} v(x,t) \ge C(T), \qquad \mbox{for all $x\in\Omega$ and } t\in (0,\widehat{T}_{max}), \end{equation} with \begin{equation} \label{001.2} \inf_{T>0} C(T)>0, \qquad \mbox{if \eqref{H2} is valid.} \end{equation} \end{lem} {\sc Proof.} \quad We represent $v$ according to \begin{equation} \label{001.3} v(\cdot,t)= e^{t(\Delta-1)} v_0 + \int_0^t e^{(t-s)(\Delta-1)} u(\cdot,s) v(\cdot,s) ds + \int_0^t e^{(t-s)(\Delta-1)} B_2(\cdot,s) ds, \qquad t\in (0,T_{max}), \end{equation} and observe that here by the comparison principle for the Neumann problem associated with the heat equation, the second summand on the right is nonnegative, whereas \begin{equation} \label{001.33} e^{t(\Delta-1)} v_0 \ge \Big\{ \inf_{x\in\Omega} v_0(x) \Big\} \cdot e^{-t} \qquad \mbox{for all } t>0. \end{equation} To gain a pointwise lower estimate for the rightmost integral in \eqref{001.3}, we invoke Lemma \ref{lem24} to find $c_1>0$ such that with $\tau:=\min\{1,\frac{1}{3}T_{max}\}$, for any nonnegative $\varphi\in C^0(\overline{\Omega})$ we have \begin{eqnarray*} e^{t\Delta}\varphi \ge c_1 \int_\Omega \varphi \quad \mbox{in } \Omega \qquad \mbox{for all } t>\frac{\tau}{2}, \end{eqnarray*} which implies that \begin{eqnarray*} \int_0^t e^{(t-s)(\Delta-1)} B_2(\cdot,s) ds &\ge& \int_0^{t-\frac{\tau}{2}} e^{-(t-s)} e^{(t-s)\Delta} B_2(\cdot,s) ds \\ &\ge& \int_0^{t-\frac{\tau}{2}} e^{-(t-s)} \cdot \Big\{ c_1 \int_\Omega B_2(\cdot,s) \Big\} ds \\ &\ge& c_1 c_2 \int_0^{t-\frac{\tau}{2}} e^{-(t-s)} ds \\ &=& c_1 c_2 \cdot \Big(e^{-\frac{\tau}{2}} - e^{-t}\Big) \\ &\ge& c_3:= c_1 c_2 \cdot \Big( e^{-\frac{\tau}{2}} - e^{-\tau}\Big) \qquad \mbox{for all } t>\tau \end{eqnarray*} with $c_2:=\inf_{t>0} \int_\Omega B_2(\cdot,t) \ge 0$.
Together with \eqref{001.33} and \eqref{001.3}, this entails that \begin{eqnarray*} v(\cdot,t) \ge \Big\{ \inf_{x\in\Omega} v_0(x) \Big\} \cdot e^{-T} + c_3 \quad \mbox{in } \Omega \qquad \mbox{for all } t\in (\tau,\widehat{T}_{max}), \end{eqnarray*} and that \begin{eqnarray*} v(\cdot,t) \ge \Big\{ \inf_{x\in\Omega} v_0(x) \Big\} \cdot e^{-\tau} \quad \mbox{in } \Omega \qquad \mbox{for all } t\in (0,\tau], \end{eqnarray*} and thereby establishes both \eqref{001.1} and \eqref{001.2}. $\Box$ \vskip.2cm Further fundamental properties of \eqref{0} are connected to the evolution of the total mass $\int_\Omega u$ and the associated total absorption rate $\int_\Omega uv$. We formulate these properties in such a way that important dependences of the appearing constants are accounted for in order to provide statements that will be useful for our asymptotic analysis in Theorem \ref{theo51} and Theorem \ref{theo52}. \begin{lem}\label{lem01} For all $T>0$ there exists $C(T)>0$ such that with $\widehat{T}_{max}:=\min\{T,T_{max}\}$, \begin{equation} \label{01.1} \int_\Omega u(\cdot,t) \le C(T) \qquad \mbox{for all } t\in (0,\widehat{T}_{max}), \end{equation} where \begin{equation} \label{01.11} \sup_{T>0} C(T) < \infty \qquad \mbox{if either \eqref{H1} or \eqref{H2} holds.} \end{equation} Moreover, for all $T>0$ and each $\xi\in (0,T)$ there exists $K(T,\xi)>0$ with the properties that \begin{equation} \label{01.2} \int_t^{t+\xi} \int_\Omega uv \le K(T,\xi) \qquad \mbox{for all } t\in (0,\widehat{T}_{max}-\xi) \end{equation} and \begin{equation} \label{01.21} \sup_{T>\xi} K(T,\xi) < \infty \quad \mbox{for all } \xi>0 \qquad \mbox{if \eqref{H2} holds} \end{equation} as well as \begin{equation} \label{01.22} \sup_{T>0} \sup_{\xi\in (0,T)} K(T,\xi) < \infty \qquad \mbox{if \eqref{H1} holds.} \end{equation} \end{lem} {\sc Proof.} \quad Integrating the first equation in (\ref{0}) yields \begin{equation} \label{01.3} \frac{d}{dt} \int_\Omega u = - \int_\Omega uv +
\int_\Omega B_1 \qquad \mbox{for all } t\in (0,T_{max}) \end{equation} and hence \begin{equation} \label{01.4} \int_\Omega u(\cdot,t) \le c_1(T):=\int_\Omega u_0 + \int_0^T \int_\Omega B_1 \qquad \mbox{for all } t\in (0,\widehat{T}_{max}) \end{equation} as well as \begin{equation} \label{01.5} \int_t^{t+\xi} \int_\Omega uv \le \int_\Omega u(\cdot,t) + \int_t^{t+\xi} \int_\Omega B_1 \le c_1(T) + c_1(2T) \qquad \mbox{for all $t \in (0,\widehat{T}_{max}-\xi)$ and any } \xi \in (0,T). \end{equation} For general $B_1$ and $B_2$, (\ref{01.4}) and (\ref{01.5}) directly imply (\ref{01.1}) and (\ref{01.2}) with $C(T):=c_1(T)$ and $K(T,\xi):=c_1(T)+c_1(2T)$ for $T>0$ and $\xi\in (0,T)$, and if in addition (\ref{H1}) holds, then $c_1(T)\le c_2:=\int_\Omega u_0 + \int_0^\infty \int_\Omega B_1$ for all $T>0$ and thus (\ref{01.4}) and (\ref{01.5}) moreover show that $C(T)\le c_2$ in this case.\\[5pt] Assuming the hypothesis (\ref{H2}) henceforth, we recall that thanks to the latter, Lemma \ref{lem001} implies the existence of $c_3>0$ fulfilling $v\ge c_3$ in $\Omega\times (0,T_{max})$, whence going back to (\ref{01.3}) we see that then \begin{equation} \label{01.6} \frac{d}{dt} \int_\Omega u + \frac{1}{2} \int_\Omega uv \le - \frac{c_3}{2} \int_\Omega u + c_4 \qquad \mbox{for all } t\in (0,T_{max}) \end{equation} with $c_4:=|\Omega|\cdot\|B_1\|_{L^\infty(\Omega\times (0,\infty))}$. By an ODE comparison, this firstly ensures that \begin{eqnarray*} \int_\Omega u(\cdot,t) \le c_5:=\max \Big\{ \int_\Omega u_0, \frac{2c_4}{c_3}\Big\} \qquad \mbox{for all } t\in (0,T_{max}), \end{eqnarray*} whereupon an integration in (\ref{01.6}) shows that furthermore \begin{eqnarray*} \frac{1}{2} \int_t^{t+\xi} \int_\Omega uv \le \int_\Omega u(\cdot,t) + c_4\xi \le c_5 + c_4\xi \qquad \mbox{for all } t\in (0,T_{max}-\xi) \end{eqnarray*} and that hence the constants in (\ref{01.1}) and (\ref{01.2}) can indeed be chosen independently of $T$ also when (\ref{H2}) holds.
$\Box$ \vskip.2cm The previous lemma has the following consequence for the time evolution of $\int_\Omega v$. \begin{lem}\label{lem02} For all $T>0$ there exists $C(T)>0$ such that with $\widehat{T}_{max}:=\min\{T,T_{max}\}$ and $\tau:=\min\{1,\frac{1}{3}T_{max}\}$ we have \begin{equation} \label{02.1} \int_\Omega v(\cdot,t) \le C(T) \qquad \mbox{for all } t\in (0,\widehat{T}_{max}) \end{equation} and \begin{equation} \label{02.2} \sup_{T>0} C(T) < \infty \qquad \mbox{if either \eqref{H1} or \eqref{H2} holds.} \end{equation} \end{lem} {\sc Proof.} \quad From the second equation in (\ref{0}) we obtain that \begin{equation} \label{02.3} \frac{d}{dt} \int_\Omega v + \int_\Omega v = \int_\Omega uv + \int_\Omega B_2 \qquad \mbox{for all } t\in (0,T_{max}). \end{equation} Here we only need to observe that thanks to Lemma \ref{lem01} and the boundedness of $B_2$ we can find $c_1(T)>0$ such that for $h(t):=\int_\Omega u(\cdot,t)v(\cdot,t) + \int_\Omega B_2(\cdot,t)$, $t\in (0,T_{max})$, we have \begin{eqnarray*} \int_t^{t+\tau} h(s) ds \le c_1(T) \qquad \mbox{for all } t\in (0,\widehat{T}_{max}-\tau), \end{eqnarray*} and that \begin{eqnarray*} \sup_{T>0} c_1(T)<\infty \qquad \mbox{if either (\ref{H1}) or (\ref{H2}) holds.} \end{eqnarray*} Therefore, extending $h$ by zero to all of $(0,\infty)$ we may apply Lemma \ref{lem_ssw} from the appendix below so as to derive (\ref{02.1}) and (\ref{02.2}) from (\ref{02.3}). $\Box$ \vskip.2cm \mysection{Fundamental estimates resulting from an analysis of $\int_\Omega u^p v^q$} \label{sec:fund} The main goal of this section consists of deriving spatio-temporal $L^2$ bounds for both $u_x$ and $v_x$ with appropriate solution-dependent weight functions. This will be accomplished in Lemma \ref{lem1} through an analysis of the functional $\int_\Omega u^p v^q$ for adequately small $p\in (0,1)$ and certain positive $q<1-p$ taken from a suitable interval.
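The admissible interval $(q^-(p),q^+(p))$ for $q$ is specified in Lemma \ref{lem1} below. The following elementary numerical check (ours, not part of the argument) confirms that its endpoints $q^\pm(p)=\frac{1-p}{2}\big(1\pm\sqrt{1-p\chi^2}\big)$ are precisely the roots of the quadratic $4q^2-4(1-p)q+p(1-p)^2\chi^2$ arising in the proof of that lemma, that this quadratic is negative between them, and that they are compatible with the constraint $q<1-p$; the numerical value of the constant in \eqref{52.1} is recomputed along the way.

```python
import math
import random

# Numerical cross-check (ours, not part of the paper's argument) that
# q^{pm}(p) = (1-p)/2 * (1 +- sqrt(1 - p*chi^2)) are exactly the roots of
#     Q(q) = 4*q^2 - 4*(1-p)*q + p*(1-p)^2*chi^2,
# so Q < 0 precisely on (q^-(p), q^+(p)); real distinct roots need p < 1/chi^2.
random.seed(0)
for _ in range(1000):
    chi = random.uniform(0.1, 3.0)
    # sample p in (0, 1) with p < 1/chi^2, as required in Lemma lem1
    p = 0.999 * min(1.0, 1.0 / chi**2) * random.uniform(0.01, 1.0)
    s = math.sqrt(1.0 - p * chi**2)
    qm = 0.5 * (1.0 - p) * (1.0 - s)
    qp = 0.5 * (1.0 - p) * (1.0 + s)
    Q = lambda q: 4.0 * q**2 - 4.0 * (1.0 - p) * q + p * (1.0 - p)**2 * chi**2
    assert abs(Q(qm)) < 1e-9 and abs(Q(qp)) < 1e-9   # q^pm are the roots
    assert Q(0.5 * (qm + qp)) < 0.0                  # Q negative between them
    assert 0.0 < qm < qp < 1.0 - p                   # consistent with q < 1 - p

# numerical value of the bound in (52.1): sqrt(6*sqrt(3) + 9)/2
threshold = math.sqrt(6.0 * math.sqrt(3.0) + 9.0) / 2.0
```

The check also shows why $p<\frac{1}{\chi^2}$ is needed: for $p\chi^2\ge 1$ the square root degenerates and the interval for $q$ becomes empty.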
Entropy-like properties of functionals containing multiplicative couplings of both solution components have played important roles in the analysis of several chemotaxis problems at various stages of existence and regularity theory, but in most previously studied cases the respective dependence on the unknown is either of strictly convex type with respect to both solution components separately (\cite{tao_small}, \cite{taowin_consumption}, \cite{win_ARMA}, \cite{win_MANA}), or at least exhibits some superlinear growth with respect to the full solution couple when viewed as a whole (\cite{biler}, \cite{Manasevich2012}). In addition, contrary to related situations addressing singular sensitivities of the form in (\ref{0}) (\cite{win_chemosing}, \cite{stinner_win}), the additional zero-order nonlinearities $uv$ appearing in the present context of (\ref{0}) will require adequately coping with respectively occurring superlinear terms (cf.~e.g.~(\ref{1.11}) below). In preparation for a corresponding testing procedure, we will therefore independently derive a regularity property of $v$ by using a quasi-entropy property of the functional $-\int_\Omega v^q$ for arbitrary $q\in (0,1)$. \subsection{A spatio-temporal bound for $v$ in $L^r$ for $r<3$} By means of a standard testing procedure solely involving the second equation in (\ref{0}), thanks to Lemma \ref{lem02} and the nonnegativity of $B_2$ we can derive the following. \begin{lem}\label{lem21} Let $q\in (0,1)$. Then for each $T>0$ one can find $C(T)>0$ with the properties that \begin{equation} \label{21.1} \int_t^{t+\tau} \int_\Omega v^{q-2} v_x^2 \le C(T) \qquad \mbox{for all } t\in (0,\widehat{T}_{max}-\tau) \end{equation} and that \begin{equation} \label{21.11} \sup_{T>0} C(T)<\infty \qquad \mbox{if either \eqref{H1} or \eqref{H2} holds,} \end{equation} where again $\widehat{T}_{max}:=\min\{T,T_{max}\}$ and $\tau:=\min\{1,\frac{1}{3}T_{max}\}$.
\end{lem} {\sc Proof.} \quad As $v>0$ in $\overline{\Omega}\times [0,T_{max})$ by Lemma \ref{lem001}, we may test the second equation in (\ref{0}) by $v^{q-1}$ to see that \begin{eqnarray*} \frac{1}{q} \frac{d}{dt} \int_\Omega v^q &=& (1-q) \int_\Omega v^{q-2} v_x^2 + \int_\Omega uv^q - \int_\Omega v^q + \int_\Omega B_2 v^{q-1} \\ &\ge& (1-q)\int_\Omega v^{q-2} v_x^2 - \int_\Omega v^q \qquad \mbox{for all } t\in (0,T_{max}), \end{eqnarray*} which on further integration yields that \begin{equation} \label{21.2} (1-q)\int_t^{t+\tau} \int_\Omega v^{q-2} v_x^2 \le \frac{1}{q} \int_\Omega v^q(\cdot,t+\tau) + \int_t^{t+\tau} \int_\Omega v^q \qquad \mbox{for all } t\in (0,T_{max}-\tau). \end{equation} Since with $c_1:=|\Omega|^{1-q}$ we have \begin{eqnarray*} \int_\Omega v^q \le c_1 \bigg\{ \int_\Omega v\bigg\}^q \qquad \mbox{for all } t\in (0,T_{max}) \end{eqnarray*} by the H\"older inequality, from (\ref{21.2}) we thus obtain that \begin{eqnarray*} (1-q)\int_t^{t+\tau} \int_\Omega v^{q-2} v_x^2 \le \Big(\frac{1}{q}+1\Big) \cdot c_1 \cdot \bigg\{ \sup_{s\in (0,\widehat{T}_{max})} \int_\Omega v(\cdot,s) \bigg\}^q \qquad \mbox{for all } t\in (0,\widehat{T}_{max}-\tau), \end{eqnarray*} which in view of Lemma \ref{lem02} implies (\ref{21.1}) and (\ref{21.11}). $\Box$ \vskip.2cm Thanks to the fact that the considered spatial setting is one-dimensional, an interpolation of the above result with the outcome of Lemma \ref{lem02} has a natural consequence on space-time integrability of $v$. \begin{lem}\label{lem22} Given $r\in (1,3)$, for any $T>0$ one can fix $C(T)>0$ such that \begin{equation} \label{22.1} \int_t^{t+\tau} \int_\Omega v^r \le C(T) \qquad \mbox{for all } t\in (0,\widehat{T}_{max}-\tau) \end{equation} and that \begin{equation} \label{22.11} \sup_{T>0} C(T)<\infty \qquad \mbox{if either \eqref{H1} or \eqref{H2} holds,} \end{equation} where $\widehat{T}_{max}:=\min\{T,T_{max}\}$ and $\tau:=\min\{1,\frac{1}{3}T_{max}\}$.
\end{lem} {\sc Proof.} \quad We may assume that $r\in (2,3)$ and then let $q:=r-2\in (0,1)$ to obtain from Lemma \ref{lem21} that there exists $c_1(T)>0$ such that \begin{equation} \label{22.2} \int_t^{t+\tau} \int_\Omega [(v^\frac{q}{2})_x]^2 \le c_1(T) \qquad \mbox{for all } t\in (0,\widehat{T}_{max}-\tau), \end{equation} while Lemma \ref{lem02} provides $c_2(T)>0$ fulfilling \begin{equation} \label{22.3} \|v^\frac{q}{2}\|_{L^\frac{2}{q}(\Omega)}^\frac{2}{q} = \int_\Omega v \le c_2(T) \qquad \mbox{for all } t\in (0,\widehat{T}_{max}), \end{equation} where \begin{equation} \label{22.33} \sup_{T>0} \Big(c_1(T)+ c_2(T)\Big) < \infty \qquad \mbox{if either (\ref{H1}) or (\ref{H2}) holds.} \end{equation} Now, from the Gagliardo-Nirenberg inequality we know that there exists $c_3>0$ satisfying \begin{eqnarray*} \int_t^{t+\tau} \int_\Omega v^r &=& \int_t^{t+\tau} \|v^\frac{q}{2}(\cdot,s)\|_{L^\frac{2r}{q}(\Omega)}^\frac{2r}{q} ds \\ &\le& c_3 \int_t^{t+\tau} \bigg\{ \Big\| (v^\frac{q}{2})_x(\cdot,s) \Big\|_{L^2(\Omega)}^\frac{2(r-1)}{q+1} \|v^\frac{q}{2}(\cdot,s)\|_{L^\frac{2}{q}(\Omega)}^\frac{2(q+r)}{q(q+1)} + \|v^\frac{q}{2}(\cdot,s)\|_{L^\frac{2}{q}(\Omega)}^\frac{2r}{q} \bigg\} ds \end{eqnarray*} for all $t\in (0,\widehat{T}_{max}-\tau)$, so that since \begin{eqnarray*} \frac{2(r-1)}{q+1}=2 \qquad \mbox{and} \qquad \frac{2(q+r)}{q(q+1)} = \frac{4}{r-2}=\frac{4}{q}, \end{eqnarray*} due to our choice of $q$ we obtain from (\ref{22.2}) and (\ref{22.3}) that \begin{eqnarray*} \int_t^{t+\tau} \int_\Omega v^r &\le& c_3 \int_t^{t+\tau} \bigg\{ \Big\|(v^\frac{q}{2})_x(\cdot,s)\Big\|_{L^2(\Omega)}^2 \|v^\frac{q}{2}(\cdot,s)\|_{L^\frac{2}{q}(\Omega)}^\frac{4}{q} + \|v^\frac{q}{2}(\cdot,s)\|_{L^\frac{2}{q}(\Omega)}^\frac{2r}{q} \bigg\} ds \\[2mm] &\le& c_3 \cdot \Big\{ c_1(T) c_2^2(T) + c_2^r(T) \Big\} \qquad \mbox{for all } t\in (0,\widehat{T}_{max}-\tau), \end{eqnarray*} which implies (\ref{22.1}) with (\ref{22.11}) being valid due to (\ref{22.33}). 
$\Box$ \vskip.2cm \subsection{Analysis of the functional $\int_\Omega u^p v^q$ for small positive $p$ and certain $q>0$} We can now proceed to the following lemma which provides some regularity information that will be fundamental for our subsequent analysis. \begin{lem}\label{lem1} Let $p\in (0,1)$ be such that $p<\frac{1}{\chi^2}$ and suppose that $q\in (q^-(p),q^+(p))$, where \begin{equation} \label{1.1} q^\pm(p):=\frac{1-p}{2} \Big(1\pm\sqrt{1-p\chi^2}\Big). \end{equation} Then for all $T>0$ there exists $C(T)>0$ such that with $\widehat{T}_{max}:=\min\{T,T_{max}\}$ and $\tau:=\min\{1,\frac{1}{3}T_{max}\}$ we have \begin{equation} \label{1.2} \int_t^{t+\tau} \int_\Omega u^{p-2} v^q u_x^2 \le C(T) \qquad \mbox{for all } t\in (0,\widehat{T}_{max}-\tau), \end{equation} as well as \begin{equation} \label{1.3} \int_t^{t+\tau} \int_\Omega u^p v^{q-2} v_x^2 \le C(T) \qquad \mbox{for all } t\in (0,\widehat{T}_{max}-\tau), \end{equation} and \begin{equation} \label{1.33} \sup_{T>0} C(T) < \infty \qquad \mbox{if either \eqref{H1} or \eqref{H2} holds.} \end{equation} \end{lem} {\sc Proof.} \quad Using that $u$ and $v$ are both positive in $\overline{\Omega}\times (0,T_{max})$, on the basis of (\ref{0}) and several integrations by parts we compute \begin{eqnarray} \label{1.4} \frac{d}{dt} \int_\Omega u^p v^q &=& p \int_\Omega u^{p-1} v^q \cdot \Big\{ u_{xx} - \chi \Big(\frac{u}{v} v_x\Big)_x - uv + B_1 \Big\} + q\int_\Omega u^p v^{q-1} \cdot \Big\{ v_{xx} + uv - v + B_2\Big\} \nonumber\\[1mm] &=& p(1-p) \int_\Omega u^{p-2} v^q u_x^2 - pq \int_\Omega u^{p-1} v^{q-1} u_x v_x \nonumber\\ & & - p(1-p)\chi \int_\Omega u^{p-1} v^{q-1} u_x v_x + pq\chi \int_\Omega u^p v^{q-2} v_x^2 \nonumber\\ & & - p\int_\Omega u^p v^{q+1} + p \int_\Omega B_1 u^{p-1} v^q \nonumber\\ & & - pq\int_\Omega u^{p-1} v^{q-1} u_x v_x + q(1-q) \int_\Omega u^p v^{q-2} v_x^2 \nonumber\\ & & + q \int_\Omega u^{p+1} v^q - q\int_\Omega u^p v^q + q\int_\Omega B_2 u^p v^{q-1} \nonumber\\[1mm] &=& p(1-p) \int_\Omega u^{p-2} v^q u_x^2 + q(p\chi+1-q) \int_\Omega u^p v^{q-2} v_x^2 \nonumber\\ & & - p(\chi-p\chi+2q) \int_\Omega u^{p-1} v^{q-1} u_x v_x \nonumber\\ & & -p\int_\Omega u^p v^{q+1} + p\int_\Omega B_1 u^{p-1} v^q \nonumber\\ & & + q\int_\Omega u^{p+1} v^q - q\int_\Omega u^p v^q + q\int_\Omega B_2 u^p v^{q-1} \qquad \mbox{for all } t\in (0,T_{max}). \end{eqnarray} Here in order to estimate the third summand on the right, we note that our assumption (\ref{1.1}) on $q$ warrants that \begin{eqnarray*} 4q^2 - 4(1-p)q + p(1-p)^2 \chi^2 < 0 \end{eqnarray*} and hence \begin{eqnarray*} \frac{p(\chi-p\chi+2q)^2}{4(1-p)} - q(p\chi+1-q) &=& \frac{1}{4(1-p)} \cdot \Bigg\{ \Big\{ p\chi^2 + p^3\chi^2 + 4pq^2 -2p^2 \chi^2 + 4pq\chi - 4p^2 q\chi\Big\} \\ & & - \Big\{ 4pq\chi-4p^2 q\chi + 4q - 4pq - 4q^2 + 4pq^2 \Big\} \Bigg\} \\ &=& \frac{1}{4(1-p)} \cdot \Big\{ 4q^2 - 4(1-p)q + p(1-p)^2 \chi^2 \Big\} \\[2mm] &<& 0, \end{eqnarray*} so that it is possible to pick $\eta\in (0,1)$ suitably close to $1$ such that still \begin{equation} \label{1.5} \frac{p(\chi-p\chi+2q)^2}{4(1-p)\eta} < q(p\chi+1-q).
\end{equation} Therefore, by Young's inequality we can estimate \begin{eqnarray*} & & \hspace*{-20mm} p(1-p) \int_\Omega u^{p-2} v^q u_x^2 + q(p\chi+1-q) \int_\Omega u^p v^{q-2} v_x^2 - p(\chi-p\chi+2q) \int_\Omega u^{p-1} v^{q-1} u_x v_x \\ &\ge& p(1-p) \int_\Omega u^{p-2} v^q u_x^2 + q(p\chi+1-q) \int_\Omega u^p v^{q-2} v_x^2 \\ & & - \eta p(1-p) \int_\Omega u^{p-2} v^q u_x^2 - \frac{p(\chi-p\chi+2q)^2}{4(1-p)\eta} \int_\Omega u^p v^{q-2} v_x^2 \\ &=& c_1 \int_\Omega u^{p-2} v^q u_x^2 + c_2 \int_\Omega u^p v^{q-2} v_x^2 \qquad \mbox{for all } t\in (0,T_{max}), \end{eqnarray*} where $c_1:=(1-\eta)p(1-p)$ is positive since $\eta<1$, and where \begin{eqnarray*} c_2:=q(p\chi+1-q) - \frac{p(\chi-p\chi+2q)^2}{4(1-p)\eta} >0 \end{eqnarray*} thanks to (\ref{1.5}).\\[5pt] By dropping four nonnegative summands, on integrating (\ref{1.4}) we thus infer that \begin{eqnarray}\label{1.6} c_1 \int_t^{t+\tau} \int_\Omega u^{p-2} v^q u_x^2 + c_2 \int_t^{t+\tau} \int_\Omega u^p v^{q-2} v_x^2 &\le& \int_\Omega u^p(\cdot,t+\tau) v^q(\cdot,t+\tau) \nonumber\\ & & + p \int_t^{t+\tau} \int_\Omega u^p v^{q+1} + q \int_t^{t+\tau} \int_\Omega u^p v^q \end{eqnarray} for all $t\in (0,\widehat{T}_{max}-\tau)$.
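The sign condition underlying \eqref{1.5} can be spot-checked numerically. The following Python snippet (a sanity check only, not part of the proof; the helper names are ours) samples admissible $(\chi,p)$ and $q$ strictly between the roots $q^\pm(p)$ from \eqref{1.1}, and verifies that the left-hand side of \eqref{1.5} with $\eta=1$ stays strictly below the right-hand side, so that some $\eta<1$ as in \eqref{1.5} indeed exists.

```python
import math
import random

def q_pm(p, chi):
    # the roots q^-(p), q^+(p) = (1-p)/2 * (1 -+ sqrt(1 - p*chi^2)) from (1.1)
    s = math.sqrt(1.0 - p * chi ** 2)
    return (1.0 - p) / 2.0 * (1.0 - s), (1.0 - p) / 2.0 * (1.0 + s)

def gap(p, q, chi):
    # q*(p*chi+1-q) - p*(chi-p*chi+2q)^2/(4(1-p)); by the computation above this
    # equals -(4q^2 - 4(1-p)q + p(1-p)^2*chi^2)/(4(1-p)), hence is positive
    # precisely for q strictly between q^-(p) and q^+(p)
    return q * (p * chi + 1.0 - q) - p * (chi - p * chi + 2.0 * q) ** 2 / (4.0 * (1.0 - p))

random.seed(0)
for _ in range(10000):
    chi = random.uniform(0.1, 5.0)
    p = 0.99 * random.random() * min(1.0, 1.0 / chi ** 2)  # 0 < p < min{1, 1/chi^2}
    if p == 0.0:
        continue
    qm, qp = q_pm(p, chi)
    q = qm + random.uniform(0.05, 0.95) * (qp - qm)        # strictly between the roots
    assert gap(p, q, chi) > 0.0
print("ok")
```

At the roots themselves the quantity vanishes, consistent with $q^\pm(p)$ being exactly the zeros of the quadratic $4q^2-4(1-p)q+p(1-p)^2\chi^2$.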
Since (\ref{1.1}) particularly requires that \begin{equation} \label{1.66} q<1-p, \end{equation} we may use the H\"older inequality to see that \begin{eqnarray*} \int_\Omega u^p v^q \le |\Omega|^{1-p-q} \bigg\{ \int_\Omega u\bigg\}^p \bigg\{ \int_\Omega v\bigg\}^q \qquad \mbox{for all } t\in (0,T_{max}), \end{eqnarray*} which in view of Lemma \ref{lem01} and Lemma \ref{lem02} implies that there exists $c_3(T)>0$ such that \begin{equation} \label{1.7} \int_\Omega u^p(\cdot,t+\tau)v^q(\cdot,t+\tau) + q \int_t^{t+\tau} \int_\Omega u^p v^q \le c_3(T) \qquad \mbox{for all } t\in (0,\widehat{T}_{max}-\tau), \end{equation} where \begin{equation} \label{1.8} \sup_{T>0} c_3(T)<\infty \qquad \mbox{if either (\ref{H1}) or (\ref{H2}) holds.} \end{equation} To estimate the second to last summand in (\ref{1.6}), we recall that Lemma \ref{lem01} moreover yields $c_4(T)>0$ satisfying \begin{equation} \label{1.9} \int_t^{t+\tau} \int_\Omega uv \le c_4(T) \qquad \mbox{for all } t\in (0,\widehat{T}_{max}-\tau), \end{equation} where \begin{equation} \label{1.10} \sup_{T>0} c_4(T)<\infty \qquad \mbox{if either (\ref{H1}) or (\ref{H2}) is satisfied.} \end{equation} Therefore, once again by the H\"older inequality, \begin{eqnarray}\label{1.11} p\int_t^{t+\tau} \int_\Omega u^p v^{q+1} &=& p \int_t^{t+\tau} \int_\Omega (uv)^p v^{q+1-p} \nonumber\\ &\le& p \bigg\{ \int_t^{t+\tau} \int_\Omega uv \bigg\}^p \bigg\{ \int_t^{t+\tau} \int_\Omega v^\frac{q+1-p}{1-p}\bigg\}^{1-p} \nonumber\\ &\le& pc_4^p(T) \bigg\{ \int_t^{t+\tau} \int_\Omega v^\frac{q+1-p}{1-p}\bigg\}^{1-p} \qquad \mbox{for all } t\in (0,\widehat{T}_{max}-\tau), \end{eqnarray} and again by (\ref{1.66}) we see that $\frac{q+1-p}{1-p} < 2 < 3$ and thus Lemma \ref{lem22} becomes applicable to yield $c_5(T)>0$ such that \begin{equation} \label{1.12} \int_t^{t+\tau} \int_\Omega v^\frac{q+1-p}{1-p} \le c_5(T) \qquad \mbox{for all } t\in (0,\widehat{T}_{max}-\tau), \end{equation} with \begin{equation} \label{1.13} \sup_{T>0}
c_5(T)<\infty \qquad \mbox{if either (\ref{H1}) or (\ref{H2}) is valid.} \end{equation} In summary, (\ref{1.6}), (\ref{1.7}), (\ref{1.11}) and (\ref{1.12}) entail that \begin{eqnarray*} c_1 \int_t^{t+\tau} \int_\Omega u^{p-2} v^q u_x^2 + c_2 \int_t^{t+\tau} \int_\Omega u^p v^{q-2} v_x^2 \le C(T):=c_3(T) + pc_4^p(T) c_5^{1-p}(T) \quad \mbox{for all } t\in (0,\widehat{T}_{max}-\tau), \end{eqnarray*} where $C(T)$ satisfies (\ref{1.33}) due to (\ref{1.8}), (\ref{1.10}) and (\ref{1.13}). $\Box$ \vskip.2cm \mysection{Global existence. $L^\infty$ bounds for $u$ and $v$ when (\ref{H2}) holds}\label{sec:global} As a first application of Lemma \ref{lem1}, merely relying on the first inequality (\ref{1.2}) therein and the pointwise positivity properties of $v$ from Lemma \ref{lem001}, we shall derive a bound for the first solution component in some superquadratic space-time Lebesgue norm. \begin{lem}\label{lem2} Let $p\in (0,1)$ be such that $p<\frac{1}{\chi^2}$. Then for all $T>0$ there exists $C(T)>0$ such that \begin{equation} \label{2.1} \int_t^{t+\tau} \int_\Omega u^{p+2} \le C(T) \qquad \mbox{for all } t\in (0,\widehat{T}_{max}-\tau), \end{equation} where $\widehat{T}_{max}:=\min\{T,T_{max}\}$ and $\tau:=\min\{1,\frac{1}{3}T_{max}\}$.
Moreover, \begin{equation} \label{2.2} \sup_{T>0} C(T)<\infty \qquad \mbox{if \eqref{H2} holds.} \end{equation} \end{lem} {\sc Proof.} \quad We fix any $q\in (q^-(p),q^+(p))$, with $q^\pm(p)$ taken as in (\ref{1.1}), and invoke Lemma \ref{lem1} and Lemma \ref{lem01} to obtain $c_1(T)>0$ and $c_2(T)>0$ such that \begin{equation} \label{2.3} \int_t^{t+\tau} \int_\Omega u^{p-2} v^q u_x^2 \le c_1(T) \qquad \mbox{for all } t\in (0,\widehat{T}_{max}-\tau) \end{equation} and \begin{equation} \label{2.4} \int_\Omega u(\cdot,t) \le c_2(T) \qquad \mbox{for all } t\in (0,\widehat{T}_{max}), \end{equation} with \begin{equation} \label{2.5} \sup_{T>0} \Big(c_1(T)+c_2(T)\Big) <\infty \qquad \mbox{if (\ref{H2}) holds.} \end{equation} To exploit (\ref{2.3}), we moreover invoke Lemma \ref{lem001} to find $c_3(T)>0$ such that \begin{equation} \label{2.6} v(x,t) \ge c_3(T) \qquad \mbox{for all $x\in\Omega$ and } t\in (0,\widehat{T}_{max}) \end{equation} with \begin{equation} \label{2.7} \inf_{T>0} c_3(T)>0 \qquad \mbox{if (\ref{H2}) is valid.} \end{equation} Therefore, (\ref{2.3}) together with (\ref{2.6}) entails that \begin{eqnarray*} \int_t^{t+\tau} \int_\Omega [(u^\frac{p}{2})_x]^2 \le c_4(T):=\frac{p^2}{4} \cdot \frac{c_1(T)}{c_3^q(T)} \qquad \mbox{for all } t\in (0,\widehat{T}_{max}-\tau), \end{eqnarray*} and since the Gagliardo-Nirenberg inequality says that with some $c_5>0$ we have \begin{eqnarray*} \int_t^{t+\tau} \int_\Omega u^{p+2} &=& \int_t^{t+\tau} \|u^\frac{p}{2}(\cdot,s)\|_{L^\frac{2(p+2)}{p}(\Omega)}^\frac{2(p+2)}{p} ds \\ &\le& c_5 \int_t^{t+\tau} \bigg\{ \Big\| (u^\frac{p}{2})_x(\cdot,s)\Big\|_{L^2(\Omega)}^2 \|u^\frac{p}{2}(\cdot,s)\|_{L^\frac{2}{p}(\Omega)}^\frac{4}{p} + \|u^\frac{p}{2}(\cdot,s)\|_{L^\frac{2}{p}(\Omega)}^\frac{2(p+2)}{p} \bigg\} ds \end{eqnarray*} for all $t\in (0,\widehat{T}_{max}-\tau)$, by using (\ref{2.4}) we infer that \begin{eqnarray*} \int_t^{t+\tau} \int_\Omega u^{p+2} &\le& c_5 c_2^2(T) \int_t^{t+\tau} \int_\Omega (u^\frac{p}{2})_x^2 + c_5 c_2^{p+2}(T) \\
&\le& c_5 c_2^2(T) c_4(T) + c_5 c_2^{p+2}(T) \qquad \mbox{for all } t\in (0,\widehat{T}_{max}-\tau). \end{eqnarray*} Combined with (\ref{2.5}) and (\ref{2.7}) this establishes (\ref{2.1}) and (\ref{2.2}). $\Box$ \vskip.2cm In the considered spatially one-dimensional case, the latter property turns out to be sufficient for the derivation of bounds for $v_x$ in $L^r(\Omega)$ for suitably small $r>1$. \begin{lem}\label{lem3} Let $r\in (1,\frac{3}{2})$ be such that $r<1+\frac{1}{2\chi^2}$. Then for all $T>0$ there exists $C(T)>0$ such that with $\widehat{T}_{max}:=\min\{T,T_{max}\}$ we have \begin{equation} \label{3.1} \|v_x(\cdot,t)\|_{L^r(\Omega)} \le C(T) \qquad \mbox{for all } t\in (0,\widehat{T}_{max}), \end{equation} where \begin{equation} \label{3.2} \sup_{T>0} C(T)<\infty \qquad \mbox{if \eqref{H2} holds.} \end{equation} \end{lem} {\sc Proof.} \quad Once more writing $\tau:=\min\{1,\frac{1}{3}T_{max}\}$, from Lemma \ref{lem_loc} we know that \begin{eqnarray*} c_1:=\sup_{t\in (0,\tau]} \|v_x(\cdot,t)\|_{L^r(\Omega)} \end{eqnarray*} is finite, whence for estimating \begin{eqnarray*} M(T'):=\sup_{t\in (0,T')} \|v_x(\cdot,t)\|_{L^r(\Omega)} \qquad \mbox{for } T'\in (\tau,\widehat{T}_{max}) \end{eqnarray*} it will be sufficient to derive appropriate bounds for $v_x(\cdot,t)$ in $L^r(\Omega)$ for $t\in (\tau,T')$ only.
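As a side check (not part of the argument; the helper name \verb|pick_p| is ours), the restrictions $r<\frac{3}{2}$ and $r<1+\frac{1}{2\chi^2}$ in Lemma \ref{lem3} leave room for the auxiliary exponent $p\in (0,1)$ with $p<\frac{1}{\chi^2}$ and $p>2(r-1)$ that will be selected further below; e.g.\ the midpoint of the admissible window always works:

```python
import random

def pick_p(r, chi):
    # returns some p in (0,1) with p < 1/chi^2 and p > 2(r-1),
    # assuming 1 <= r < min{3/2, 1 + 1/(2 chi^2)}
    p_hi = min(1.0, 1.0 / chi ** 2)
    p_lo = 2.0 * (r - 1.0)
    assert p_lo < p_hi          # the window (2(r-1), min{1, 1/chi^2}) is nonempty
    return 0.5 * (p_lo + p_hi)  # e.g. its midpoint

random.seed(1)
for _ in range(10000):
    chi = random.uniform(0.05, 10.0)
    r_hi = min(1.5, 1.0 + 1.0 / (2.0 * chi ** 2))
    r = 1.0 + 0.999 * (r_hi - 1.0) * random.random()   # 1 <= r < r_hi
    p = pick_p(r, chi)
    assert 0.0 < p < 1.0 and p < 1.0 / chi ** 2 and p > 2.0 * (r - 1.0)
print("ok")
```

The nonemptiness simply reflects that $2(r-1)<1$ is equivalent to $r<\frac32$ and $2(r-1)<\frac{1}{\chi^2}$ to $r<1+\frac{1}{2\chi^2}$.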
To this end, given any such $t$ we represent $v_x(\cdot,t)$ according to \begin{equation} \label{3.3} v_x(\cdot,t) = \partial_x e^{\tau(\Delta-1)} v(\cdot,t-\tau) + \int_{t-\tau}^t \partial_x e^{(t-s)(\Delta-1)} u(\cdot,s)v(\cdot,s) ds + \int_{t-\tau}^t \partial_x e^{(t-s)(\Delta-1)} B_2(\cdot,s) ds \end{equation} and recall that due to known smoothing properties of the Neumann heat semigroup (\cite{win_JDE}) we can find $c_2>0$ such that for all $\varphi\in C^0(\overline{\Omega})$, \begin{equation} \label{3.4} \|\partial_x e^{\sigma\Delta}\varphi\|_{L^r(\Omega)} \le c_2 \sigma^{-\frac{1}{2}-\frac{1}{2}(1-\frac{1}{r})} \|\varphi\|_{L^1(\Omega)} \qquad \mbox{for all } \sigma\in (0,1). \end{equation} Therefore, \begin{eqnarray}\label{3.5} \Big\|\partial_x e^{\tau(\Delta-1)} v(\cdot,t-\tau)\Big\|_{L^r(\Omega)} &\le& c_2 e^{-\tau} \cdot \tau^{-\frac{1}{2}-\frac{1}{2}(1-\frac{1}{r})} \|v(\cdot,t-\tau)\|_{L^1(\Omega)} \nonumber\\ &\le& c_2 c_3(T) \tau^{-\frac{1}{2}-\frac{1}{2}(1-\frac{1}{r})}, \end{eqnarray} where $c_3(T)>0$ has been chosen in such a way that in accordance with Lemma \ref{lem02} we have \begin{equation} \label{3.6} \|v(\cdot,s)\|_{L^1(\Omega)} \le c_3(T) \qquad \mbox{for all } s\in (0,\widehat{T}_{max}), \end{equation} and such that \begin{equation} \label{3.7} \sup_{T>0} c_3(T)<\infty \qquad \mbox{if (\ref{H2}) holds.} \end{equation} Next, again by (\ref{3.4}), \begin{eqnarray}\label{3.8} \bigg\| \int_{t-\tau}^t \partial_x e^{(t-s)(\Delta-1)} B_2(\cdot,s) ds \bigg\|_{L^r(\Omega)} &\le& c_2 \int_{t-\tau}^t (t-s)^{-\frac{1}{2}-\frac{1}{2}(1-\frac{1}{r})} \|B_2(\cdot,s)\|_{L^1(\Omega)} ds \nonumber\\ &\le& c_2|\Omega| \|B_2\|_{L^\infty(\Omega\times (0,\infty))} \int_0^\tau \sigma^{-\frac{1}{2}-\frac{1}{2}(1-\frac{1}{r})} d\sigma \nonumber\\[2mm] &=& c_2|\Omega| \|B_2\|_{L^\infty(\Omega\times (0,\infty))} \cdot 2r \tau^\frac{1}{2r} \end{eqnarray} as well as \begin{equation} \label{3.9} \bigg\| \int_{t-\tau}^t \partial_x e^{(t-s)(\Delta-1)}
u(\cdot,s) v(\cdot,s) ds \bigg\|_{L^r(\Omega)} \le c_2 \int_{t-\tau}^t (t-s)^{-\frac{1}{2}-\frac{1}{2}(1-\frac{1}{r})} \|u(\cdot,s)v(\cdot,s)\|_{L^1(\Omega)} ds. \end{equation} In order to further estimate the latter integral, we make use of our restrictions $r<\frac{3}{2}$ and $r<1+\frac{1}{2\chi^2}$ which enable us to pick some $p\in (0,1)$ satisfying $p<\frac{1}{\chi^2}$ and $p>2(r-1)$. Then by means of the H\"older inequality we see that \begin{equation} \label{3.10} \|u(\cdot,s)v(\cdot,s)\|_{L^1(\Omega)} \le \|u(\cdot,s)\|_{L^{p+2}(\Omega)} \|v(\cdot,s)\|_{L^\frac{p+2}{p+1}(\Omega)} \qquad \mbox{for all } s\in (0,T_{max}), \end{equation} where the Gagliardo-Nirenberg inequality provides $c_4>0$ and $a\in (0,1)$ fulfilling \begin{eqnarray*} \|v(\cdot,s)\|_{L^\frac{p+2}{p+1}(\Omega)} \le c_4\|v_x(\cdot,s)\|_{L^r(\Omega)}^a \|v(\cdot,s)\|_{L^1(\Omega)}^{1-a} + c_4\|v(\cdot,s)\|_{L^1(\Omega)} \qquad \mbox{for all } s\in (0,T_{max}). \end{eqnarray*} In light of (\ref{3.6}) and the definition of $M(T')$, from (\ref{3.10}) we thus obtain that \begin{eqnarray*} \|u(\cdot,s)v(\cdot,s)\|_{L^1(\Omega)} \le \|u(\cdot,s)\|_{L^{p+2}(\Omega)} \cdot \Big\{ c_4 c_3^{1-a}(T) M^a(T') + c_4 c_3(T)\Big\}, \end{eqnarray*} so that once again invoking the H\"older inequality we infer from (\ref{3.9}) that \begin{eqnarray}\label{3.11} & & \hspace*{-20mm} \bigg\| \int_{t-\tau}^t \partial_x e^{(t-s)(\Delta-1)} u(\cdot,s) v(\cdot,s) ds \bigg\|_{L^r(\Omega)} \nonumber\\ &\le& c_2 \cdot \Big\{ c_4 c_3^{1-a}(T) M^a(T') + c_4 c_3(T)\Big\} \cdot \int_{t-\tau}^t (t-s)^{-\frac{1}{2}-\frac{1}{2}(1-\frac{1}{r})} \|u(\cdot,s)\|_{L^{p+2}(\Omega)} ds \nonumber\\ &\le& c_2 \cdot \Big\{ c_4 c_3^{1-a}(T) M^a(T') + c_4 c_3(T)\Big\} \cdot \bigg\{ \int_{t-\tau}^t (t-s)^{-[\frac{1}{2}+\frac{1}{2}(1-\frac{1}{r})] \cdot \frac{p+2}{p+1}} ds \bigg\}^\frac{p+1}{p+2} \times \nonumber\\ & & \hspace*{58mm} \times \bigg\{ \int_{t-\tau}^t \|u(\cdot,s)\|_{L^{p+2}(\Omega)}^{p+2} ds \bigg\}^\frac{1}{p+2}. \end{eqnarray} Since herein our assumption $p>2(r-1)$ warrants that \begin{eqnarray*} \Big[\frac{1}{2}+\frac{1}{2}\Big(1-\frac{1}{r}\Big)\Big] \cdot \frac{p+2}{p+1} = \frac{2r-1}{2r} \cdot \Big(1+\frac{1}{p+1}\Big) < \frac{2r-1}{2r} \cdot \Big(1+\frac{1}{2r-1}\Big) = 1, \end{eqnarray*} and since from Lemma \ref{lem2} we know that \begin{eqnarray*} \int_{t-\tau}^t \|u(\cdot,s)\|_{L^{p+2}(\Omega)}^{p+2} ds \le c_5(T) \end{eqnarray*} with some $c_5(T)>0$ satisfying \begin{eqnarray*} \sup_{T>0} c_5(T)<\infty \qquad \mbox{if (\ref{H2}) holds,} \end{eqnarray*} it follows from (\ref{3.11}) that with a certain $c_6(T)>0$ we have \begin{eqnarray*} \bigg\| \int_{t-\tau}^t \partial_x e^{(t-s)(\Delta-1)} u(\cdot,s) v(\cdot,s) ds \bigg\|_{L^r(\Omega)} \le c_6(T) \cdot \Big\{ M^a(T')+1\Big\}, \end{eqnarray*} where \begin{equation} \label{3.13} \sup_{T>0} c_6(T)<\infty \qquad \mbox{if (\ref{H2}) is valid.} \end{equation} Together with (\ref{3.5}), (\ref{3.8}) and (\ref{3.3}), this shows that \begin{equation} \label{3.14} \|v_x(\cdot,t)\|_{L^r(\Omega)} \le c_7(T) \cdot \Big\{ M^a(T')+1 \Big\} \qquad \mbox{for all } t\in (\tau,T') \end{equation} with some $c_7(T)>0$ which due to (\ref{3.7}) and (\ref{3.13}) is such that \begin{equation} \label{3.15} \sup_{T>0} c_7(T)<\infty \qquad \mbox{if (\ref{H2}) holds.} \end{equation} In view of our definition of $c_1$, (\ref{3.14}) entails that if we let $c_8(T):=\max\{c_7(T),1\}$ then \begin{eqnarray*} M(T') \le c_8(T) \cdot \Big\{ M^a(T')+1\Big\} \qquad \mbox{for all } T'\in (\tau,\widehat{T}_{max}) \end{eqnarray*} and thus, since $a<1$, \begin{eqnarray*} M(T') \le \max \Big\{ 1 \, , \, (2c_8(T))^\frac{1}{1-a}\Big\} \qquad \mbox{for all } T'\in (\tau,\widehat{T}_{max}). \end{eqnarray*} Combined with (\ref{3.15}), this establishes (\ref{3.1}) and (\ref{3.2}).
$\Box$ \vskip.2cm Now, the latter provides sufficient regularity of the inhomogeneity $h$ appearing in the identity $u_t=u_{xx}+h$ in (\ref{0}), that is, of $h:=-\chi(\frac{u}{v}v_x)_x -uv+B_1$, and especially of the crucial cross-diffusive first summand therein. This is exploited in the following statement, which beyond boundedness of $u$, as required for extending the solution via Lemma \ref{lem_loc}, moreover asserts a favorable equicontinuity feature of $u$ that will be useful in verifying the uniform decay property claimed in Theorem \ref{theo51}. \begin{lem}\label{lem4} Let $\gamma\in (0,\frac{1}{3})$ be such that $\gamma<\frac{1}{1+2\chi^2}$. Then for all $T>0$ there exists $C(T)>0$ with the properties that with $\widehat{T}_{max}:=\min\{T,T_{max}\}$ and $\tau:=\min\{1,\frac{1}{3}T_{max}\}$ we have \begin{equation} \label{4.1} \|u(\cdot,t)\|_{C^\gamma(\overline{\Omega})} \le C(T) \qquad \mbox{for all } t\in (\tau,\widehat{T}_{max}) \end{equation} and \begin{equation} \label{4.2} \sup_{T>0} C(T)<\infty \qquad \mbox{if \eqref{H2} holds.} \end{equation} \end{lem} {\sc Proof.} \quad Since $\gamma<\frac{1}{3}$ and $\gamma<\frac{1}{1+2\chi^2}$, it is possible to fix $r>1$ such that $r<\frac{3}{2}$ and $r<1+\frac{1}{2\chi^2}$, and such that $1-\frac{1}{r}>\gamma$. This enables us to choose some $\alpha\in (0,\frac{1}{2})$ sufficiently close to $\frac{1}{2}$ such that still $2\alpha-\frac{1}{r}>\gamma$, which in turn ensures that the sectorial realization of $A:=-(\cdot)_{xx}+1$ under homogeneous Neumann boundary conditions in $L^r(\Omega)$ has the domain of its fractional power $A^\alpha$ satisfy $D(A^\alpha) \hookrightarrow C^\gamma(\overline{\Omega})$ (\cite{henry}), meaning that \begin{equation} \label{4.3} \|\varphi\|_{C^\gamma(\overline{\Omega})} \le c_1 \|A^\alpha \varphi\|_{L^r(\Omega)} \qquad \mbox{for all } \varphi \in C^1(\overline{\Omega}) \end{equation} with some $c_1>0$.
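As an aside (again not part of the proof; the helper name \verb|pick_r_alpha| is ours), the feasibility of this parameter selection for every admissible $\gamma$ can be verified numerically: for $\gamma<\frac13$ with $\gamma<\frac{1}{1+2\chi^2}$, the interval $\big(\frac{1}{1-\gamma},\min\{\frac32,1+\frac{1}{2\chi^2}\}\big)$ of admissible $r$ is nonempty, and then any $\alpha\in \big(\frac{\gamma+1/r}{2},\frac12\big)$ satisfies $2\alpha-\frac1r>\gamma$.

```python
import random

def pick_r_alpha(gamma, chi):
    # returns (r, alpha) with r < 3/2, r < 1 + 1/(2 chi^2), 1 - 1/r > gamma,
    # and alpha < 1/2 with 2*alpha - 1/r > gamma,
    # assuming 0 <= gamma < min{1/3, 1/(1 + 2 chi^2)}
    r_lo = 1.0 / (1.0 - gamma)
    r_hi = min(1.5, 1.0 + 1.0 / (2.0 * chi ** 2))
    assert r_lo < r_hi                               # admissible r exist
    r = 0.5 * (r_lo + r_hi)
    alpha = 0.5 * ((gamma + 1.0 / r) / 2.0 + 0.5)    # midpoint of ((gamma+1/r)/2, 1/2)
    return r, alpha

random.seed(2)
for _ in range(10000):
    chi = random.uniform(0.05, 10.0)
    gamma = 0.999 * random.random() * min(1.0 / 3.0, 1.0 / (1.0 + 2.0 * chi ** 2))
    r, alpha = pick_r_alpha(gamma, chi)
    assert 1.0 < r < 1.5 and r < 1.0 + 1.0 / (2.0 * chi ** 2) and 1.0 - 1.0 / r > gamma
    assert 0.0 < alpha < 0.5 and 2.0 * alpha - 1.0 / r > gamma
print("ok")
```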
Moreover, combining known regularization estimates for the associated semigroup $(e^{-tA})_{t\ge 0}\equiv (e^{-t} e^{t\Delta})_{t\ge 0}$ (\cite{friedman}, \cite{win_JDE}) we can find positive constants $c_2$ and $c_3$ such that for all $t\in (0,1)$ we have \begin{equation} \label{4.4} \|A^\alpha e^{-tA} \varphi\|_{L^r(\Omega)} \le c_2 t^{-\alpha-\frac{1}{2}(1-\frac{1}{r})} \|\varphi\|_{L^1(\Omega)} \qquad \mbox{for all } \varphi\in C^0(\overline{\Omega}) \end{equation} and \begin{equation} \label{4.5} \|A^\alpha e^{-tA}\varphi_x\|_{L^r(\Omega)} \le c_3 t^{-\alpha-\frac{1}{2}} \|\varphi\|_{L^r(\Omega)} \qquad \mbox{for all } \varphi\in C^1(\overline{\Omega}) \mbox{ such that $\varphi_x=0$ on } \partial\Omega. \end{equation} Now to estimate \begin{eqnarray*} M(T'):=\sup_{t\in (\tau,T')} \|u(\cdot,t)\|_{C^\gamma(\overline{\Omega})} \qquad \mbox{for } T' \in (\tau,\widehat{T}_{max}), \end{eqnarray*} we use a variation-of-constants representation associated with the identity \begin{eqnarray*} u_t=-Au - \chi \Big(\frac{u}{v} v_x\Big)_x - uv + B_1(x,t) + u, \qquad x\in\Omega, \ t\in (0,T_{max}), \end{eqnarray*} to see that thanks to (\ref{4.3}), \begin{eqnarray}\label{4.6} \frac{1}{c_1} \|u(\cdot,t)\|_{C^\gamma(\overline{\Omega})} &\le& \|A^\alpha u(\cdot,t)\|_{L^r(\Omega)} \nonumber\\ &\le& \Big\| A^\alpha e^{-\tau A} u(\cdot,t-\tau)\Big\|_{L^r(\Omega)} + \chi \int_{t-\tau}^t \Big\| A^\alpha e^{-(t-s)A} \Big(\frac{u(\cdot,s)}{v(\cdot,s)} v_x(\cdot,s)\Big)_x \Big\|_{L^r(\Omega)} ds \nonumber\\ & & + \int_{t-\tau}^t \Big\| A^\alpha e^{-(t-s)A} u(\cdot,s) v(\cdot,s) \Big\|_{L^r(\Omega)} ds + \int_{t-\tau}^t \Big\| A^\alpha e^{-(t-s)A} B_1(\cdot,s)\Big\|_{L^r(\Omega)} ds \nonumber\\ & & + \int_{t-\tau}^t \Big\| A^\alpha e^{-(t-s)A} u(\cdot,s)\Big\|_{L^r(\Omega)} ds \qquad \mbox{for all } t\in (2\tau,T_{max}). \end{eqnarray} Here by (\ref{4.4}) we see that \begin{eqnarray}\label{4.7} \|A^\alpha e^{-\tau A} u(\cdot,t-\tau)\|_{L^r(\Omega)} &\le& c_2 \tau^{-\alpha-\frac{1}{2}(1-\frac{1}{r})} \|u(\cdot,t-\tau)\|_{L^1(\Omega)} \nonumber\\ &\le& c_2 c_4(T) \tau^{-\alpha-\frac{1}{2}(1-\frac{1}{r})} \qquad \mbox{for all } t\in (2\tau,\widehat{T}_{max}), \end{eqnarray} where according to Lemma \ref{lem01} we have taken $c_4(T)>0$ such that \begin{equation} \label{4.8} \|u(\cdot,t)\|_{L^1(\Omega)} \le c_4(T) \qquad \mbox{for all } t\in (0,\widehat{T}_{max}) \end{equation} and that \begin{equation} \label{4.9} \sup_{T>0} c_4(T)<\infty \qquad \mbox{if (\ref{H2}) holds.} \end{equation} Moreover, in view of our restrictions on $r$ we see that Lemma \ref{lem3} applies so as to yield $c_5(T)>0$ satisfying \begin{equation} \label{4.10} \|v_x(\cdot,t)\|_{L^r(\Omega)} \le c_5(T) \qquad \mbox{for all } t\in (0,\widehat{T}_{max}) \end{equation} and \begin{equation} \label{4.11} \sup_{T>0} c_5(T)<\infty \qquad \mbox{if (\ref{H2}) is valid,} \end{equation} which combined with the outcome of Lemma \ref{lem02} and the continuity of the embedding $W^{1,r}(\Omega) \hookrightarrow L^\infty(\Omega)$ shows that there exists $c_6(T)>0$ such that \begin{equation} \label{4.12} \|v(\cdot,t)\|_{L^\infty(\Omega)} \le c_6(T) \qquad \mbox{for all } t\in (0,\widehat{T}_{max}) \end{equation} with \begin{equation} \label{4.13} \sup_{T>0} c_6(T)<\infty \qquad \mbox{if (\ref{H2}) holds.} \end{equation} Therefore, in the third integral on the right of (\ref{4.6}) we may use (\ref{4.4}) and again (\ref{4.8}) to estimate \begin{eqnarray}\label{4.14} \int_{t-\tau}^t \Big\|A^\alpha e^{-(t-s)A} u(\cdot,s)v(\cdot,s)\Big\|_{L^r(\Omega)} ds &\le& c_2 \int_{t-\tau}^t (t-s)^{-\alpha-\frac{1}{2}(1-\frac{1}{r})} \|u(\cdot,s)v(\cdot,s)\|_{L^1(\Omega)} ds \nonumber\\ &\le& c_2 \int_{t-\tau}^t (t-s)^{-\alpha-\frac{1}{2}(1-\frac{1}{r})} \|u(\cdot,s)\|_{L^1(\Omega)} \|v(\cdot,s)\|_{L^\infty(\Omega)} ds \nonumber\\ &\le& c_2 c_4(T) c_6(T) \int_{t-\tau}^t (t-s)^{-\alpha-\frac{1}{2}(1-\frac{1}{r})} ds \nonumber\\ &=& c_2 c_4(T) c_6(T) c_7 \qquad \mbox{for all } t\in (2\tau,\widehat{T}_{max}), \end{eqnarray} with $c_7:=\int_0^\tau \sigma^{-\alpha-\frac{1}{2}(1-\frac{1}{r})}d\sigma$ being finite since clearly $\alpha+\frac{1}{2}(1-\frac{1}{r})<\alpha+\frac{1}{2}<1$.\\ Likewise, upon two further applications of (\ref{4.4}) we obtain from the boundedness of $B_1$ and (\ref{4.8}) that \begin{eqnarray}\label{4.15} \int_{t-\tau}^t \Big\| A^\alpha e^{-(t-s)A} B_1(\cdot,s)\Big\|_{L^r(\Omega)} ds &\le& c_2 \int_{t-\tau}^t (t-s)^{-\alpha-\frac{1}{2}(1-\frac{1}{r})} \|B_1(\cdot,s)\|_{L^1(\Omega)} ds \nonumber\\ &\le& c_2 |\Omega| \|B_1\|_{L^\infty(\Omega\times (0,\infty))} \int_{t-\tau}^t (t-s)^{-\alpha-\frac{1}{2}(1-\frac{1}{r})} ds \nonumber\\ &=& c_2 |\Omega| \|B_1\|_{L^\infty(\Omega\times (0,\infty))} \cdot c_7 \qquad \mbox{for all } t\in (2\tau,\widehat{T}_{max}) \end{eqnarray} and that \begin{eqnarray}\label{4.16} \int_{t-\tau}^t \Big\| A^\alpha e^{-(t-s)A} u(\cdot,s)\Big\|_{L^r(\Omega)} ds &\le& c_2 \int_{t-\tau}^t (t-s)^{-\alpha-\frac{1}{2}(1-\frac{1}{r})} \|u(\cdot,s)\|_{L^1(\Omega)} ds \nonumber\\ &\le& c_2 c_4(T) c_7 \qquad \mbox{for all } t\in (2\tau,\widehat{T}_{max}). \end{eqnarray} Finally, in the second summand on the right-hand side in (\ref{4.6}) we use that due to Lemma \ref{lem001}, \begin{eqnarray*} v(x,t) \ge c_8(T) \qquad \mbox{for all $x\in\Omega$ and } t\in (0,\widehat{T}_{max}) \end{eqnarray*} with some $c_8(T)>0$ fulfilling \begin{equation} \label{4.17} \inf_{T>0} c_8(T)>0 \qquad \mbox{if (\ref{H2}) holds.} \end{equation} From (\ref{4.5}) and (\ref{4.10}) we therefore obtain that \begin{eqnarray}\label{4.18} & & \hspace*{-20mm} \chi \int_{t-\tau}^t \Big\| A^\alpha e^{-(t-s)A} \Big(\frac{u(\cdot,s)}{v(\cdot,s)} v_x(\cdot,s)\Big)_x \Big\|_{L^r(\Omega)} ds \nonumber\\ &\le& \chi c_3 \int_{t-\tau}^t (t-s)^{-\alpha-\frac{1}{2}} \Big\|\frac{u(\cdot,s)}{v(\cdot,s)} v_x(\cdot,s) \Big\|_{L^r(\Omega)} ds \nonumber\\ &\le& \chi c_3 \int_{t-\tau}^t (t-s)^{-\alpha-\frac{1}{2}} \|u(\cdot,s)\|_{L^\infty(\Omega)} \Big\|\frac{1}{v(\cdot,s)}\Big\|_{L^\infty(\Omega)} \|v_x(\cdot,s)\|_{L^r(\Omega)} ds \nonumber\\ &\le& \frac{\chi c_3 c_5(T)}{c_8(T)} \|u\|_{L^\infty(\Omega\times (\tau,T'))} \int_{t-\tau}^t (t-s)^{-\alpha-\frac{1}{2}} ds \nonumber\\ &=& \frac{\chi c_3 c_5(T)}{c_8(T)} \|u\|_{L^\infty(\Omega\times (\tau,T'))} \cdot \frac{\tau^{\frac{1}{2}-\alpha}}{\frac{1}{2}-\alpha} \qquad \mbox{for all } t\in (2\tau,T'). \end{eqnarray} In conclusion, (\ref{4.7}), (\ref{4.14}), (\ref{4.15}), (\ref{4.16}) and (\ref{4.18}) show that (\ref{4.6}) leads to the inequality \begin{equation} \label{4.188} \|u(\cdot,t)\|_{C^\gamma(\overline{\Omega})} \le c_9(T) \|u\|_{L^\infty(\Omega\times (\tau,T'))} + c_9(T) \qquad \mbox{for all } t\in (2\tau,T') \end{equation} with some $c_9(T)>0$ about which due to (\ref{4.9}), (\ref{4.11}), (\ref{4.13}) and (\ref{4.17}) we know that \begin{equation} \label{4.19} \sup_{T>0} c_9(T)<\infty \qquad \mbox{if (\ref{H2}) holds.} \end{equation} Now, by compactness of the first of the embeddings $C^\gamma(\overline{\Omega})\hookrightarrow L^\infty(\Omega) \hookrightarrow L^1(\Omega)$, according to an associated Ehrling lemma it is possible to pick $c_{10}(T)>0$ such that \begin{eqnarray*} \|\varphi\|_{L^\infty(\Omega)} \le \frac{1}{2c_9(T)} \|\varphi\|_{C^\gamma(\overline{\Omega})} + c_{10}(T) \|\varphi\|_{L^1(\Omega)} \qquad \mbox{for all } \varphi\in C^\gamma(\overline{\Omega}), \end{eqnarray*} where thanks to (\ref{4.19}) it can clearly be achieved that \begin{equation} \label{4.20} \sup_{T>0} c_{10}(T)<\infty, \qquad \mbox{provided that (\ref{H2}) holds.} \end{equation} Therefore, (\ref{4.188}) together with (\ref{4.8}) implies that \begin{eqnarray*} \|u(\cdot,t)\|_{C^\gamma(\overline{\Omega})} &\le& \frac{1}{2} \sup_{s\in (\tau,T')} \|u(\cdot,s)\|_{C^\gamma(\overline{\Omega})} + c_{10}(T) c_4(T) + c_9(T) \\ &\le& \frac{1}{2} M(T') + c_{10}(T) c_4(T) + c_9(T) \qquad \mbox{for all } t\in (2\tau,T') \end{eqnarray*} and that hence with $c_{11}:=\sup_{t\in (\tau,2\tau]} \|u(\cdot,t)\|_{C^\gamma(\overline{\Omega})}$ we have \begin{eqnarray*} M(T') &\le& c_{11} + \sup_{t\in (2\tau,T')} \|u(\cdot,t)\|_{C^\gamma(\overline{\Omega})} \\ &\le& c_{11} + \frac{1}{2} M(T') + c_{10}(T) c_4(T) + c_9(T).
\end{eqnarray*} Thus, \begin{eqnarray*} M(T') \le 2\cdot \Big( c_{11} + c_{10}(T) c_4(T) + c_9(T)\Big) \qquad \mbox{for all } T'\in (\tau,\widehat{T}_{max}), \end{eqnarray*} which on letting $T'\nearrow \widehat{T}_{max}$ yields (\ref{4.1}) with some $C(T)>0$ satisfying (\ref{4.2}) because of (\ref{4.20}), (\ref{4.8}) and (\ref{4.19}). $\Box$ \vskip.2cm \subsection{Proof of Theorem \ref{theo5}} By collecting the above positivity and regularity information, we immediately obtain global extensibility of our local-in-time solution:\\[5pt] {\sc Proof} of Theorem \ref{theo5}.\quad Combining Lemma \ref{lem4} with Lemma \ref{lem001} and Lemma \ref{lem3} shows that in (\ref{ext_crit}), the second alternative cannot occur, so that actually $T_{max}=\infty$ and hence all statements result from Lemma \ref{lem_loc}. $\Box$ \vskip.2cm \subsection{Proof of Theorem \ref{theo51}} In view of the above statements on independence of all essential estimates from $T$ when (\ref{H2}) holds, for the verification of the qualitative properties in Theorem \ref{theo51} only one further ingredient is needed, which can be obtained by a refined variant of an argument from Lemma \ref{lem01}. \begin{lem}\label{lem33} If \eqref{H2} and \eqref{H1'} are satisfied, then \begin{equation} \label{33.1} \int_\Omega u(\cdot,t) \to 0 \qquad \mbox{as } t\to\infty. \end{equation} \end{lem} {\sc Proof.} \quad Since Lemma \ref{lem001} provides $c_1>0$ such that $v\ge c_1$ in $\Omega\times (0,\infty)$, once more integrating the first equation in (\ref{0}) we obtain that \begin{eqnarray*} \frac{d}{dt} \int_\Omega u = - \int_\Omega uv + \int_\Omega B_1 \le -c_1 \int_\Omega u + \int_\Omega B_1 \qquad \mbox{for all } t>0. \end{eqnarray*} In view of the hypothesis (\ref{H1'}), the claim therefore results by an application of Lemma \ref{lem_espejo_win}.
$\Box$ \vskip.2cm We can thereby prove our main result on large time behavior in (\ref{0}), (\ref{0b}), (\ref{0i}) in the presence of the hypothesis (\ref{H2}).\\[5pt] {\sc Proof} of Theorem \ref{theo51}.\quad Assuming that (\ref{H2}) is valid, from Lemma \ref{lem4} we obtain $\gamma>0$ and $c_1>0$ such that \begin{equation} \label{51.5} \|u(\cdot,t)\|_{C^\gamma(\overline{\Omega})} \le c_1 \qquad \mbox{for all } t>1. \end{equation} This immediately implies (\ref{51.1}), whereas the inequalities in (\ref{51.2}) result from Lemma \ref{lem001}, Lemma \ref{lem02} and Lemma \ref{lem3}, again because $W^{1,r}(\Omega) \hookrightarrow L^\infty(\Omega)$ for arbitrary $r>1$. Finally, as the Arzel\`a-Ascoli theorem says that (\ref{51.5}) implies precompactness of $(u(\cdot,t))_{t>1}$ in $L^\infty(\Omega)$, the outcome of Lemma \ref{lem33}, asserting that (\ref{H1'}) entails decay of $u(\cdot,t)$ in $L^1(\Omega)$ as $t\to\infty$, actually means that we must even have $u(\cdot,t)\to 0$ in $L^\infty(\Omega)$ as $t\to\infty$ in this case. $\Box$ \vskip.2cm \mysection{Bounds for $v$ under the assumption (\ref{H1}). Proof of Theorem \ref{theo52}} \label{sec:longtime} In order to prove Theorem \ref{theo52}, we evidently may no longer rely on any global positivity property of $v$, which in view of the singular taxis term in (\ref{0}) apparently reduces our information on regularity of $u$ to a substantial extent.
Our approach will therefore alternatively focus on the derivation of further bounds for $v$ by merely using the second equation in (\ref{0}) together with the class of fundamental estimates from Lemma \ref{lem1}, taking essential advantage of the freedom to choose the parameters $p$ and $q$ there within a suitably large range.\\[5pt] Our argument will at its core be quite simple in that it is built on a straightforward $L^r$ testing procedure (see Lemma \ref{lem13}); however, for adequately estimating the crucial integrals $\int_\Omega uv^r$ appearing therein we will create an iterative setup which eventually allows the choice of an arbitrarily large $r$ whenever $\chi$ satisfies the smallness condition from Theorem \ref{theo52}.\\[5pt] Let us first reformulate the outcome of Lemma \ref{lem1} in a version convenient for our purpose. \begin{lem}\label{lem10} Assume that \eqref{H1} holds, and let $p\in (0,1)$ and $q>0$ be such that $p<\frac{1}{\chi^2}$ and $q\in (q^-(p),q^+(p))$ with $q^\pm(p)$ as given by \eqref{1.1}. Then there exists $C>0$ such that \begin{equation} \label{10.1} \int_t^{t+1} \int_\Omega \left[\Big(u^\frac{p}{2} v^\frac{q}{2}\Big)_x\right]^2 \le C \qquad \mbox{for all } t>0. \end{equation} \end{lem} {\sc Proof.} \quad Since \begin{eqnarray*} \left[\Big(u^\frac{p}{2} v^\frac{q}{2}\Big)_x\right]^2 &=& \Big(\frac{p}{2} u^\frac{p-2}{2} v^\frac{q}{2} u_x + \frac{q}{2} u^\frac{p}{2} v^\frac{q-2}{2} v_x\Big)^2 \\ &\le& \frac{p^2}{2} u^{p-2} v^q u_x^2 + \frac{q^2}{2} u^p v^{q-2} v_x^2 \qquad \mbox{in } \Omega\times (0,\infty), \end{eqnarray*} by Young's inequality, this is an immediate consequence of Lemma \ref{lem1}. $\Box$ \vskip.2cm A zero-order estimate for the coupled quantities appearing in the preceding lemma can be achieved by combining Lemma \ref{lem01} with a supposedly known bound for $v$ in $L^{r_\star}(\Omega)$ in a straightforward manner.
\begin{lem}\label{lem11} Assume that (\ref{H1}) holds, and let $r_\star\ge 1$, $p>0$ and $q>0$. Then there exists $C>0$ with the property that if with some $K>0$ we have \begin{equation} \label{11.1} \|v(\cdot,t)\|_{L^{r_\star}(\Omega)} \le K \qquad \mbox{for all } t>0, \end{equation} then \begin{equation} \label{11.2} \Big\|u^\frac{p}{2}(\cdot,t)v^\frac{q}{2}(\cdot,t)\Big\|_{L^\frac{2r_\star}{pr_\star+q}(\Omega)} \le CK^\frac{q}{2} \qquad \mbox{for all } t>0. \end{equation} \end{lem} {\sc Proof.} \quad According to the hypothesis (\ref{H1}), from Lemma \ref{lem01} we know that \begin{eqnarray*} \|u(\cdot,t)\|_{L^1(\Omega)} \le c_1 \qquad \mbox{for all } t>0 \end{eqnarray*} with some $c_1>0$. By the H\"older inequality we therefore obtain that \begin{eqnarray*} \big\|u^\frac{p}{2}v^\frac{q}{2}\big\|_{L^\frac{2r_\star}{pr_\star+q}(\Omega)} = \bigg\{ \int_\Omega u^\frac{pr_\star}{pr_\star+q} v^\frac{qr_\star}{pr_\star+q} \bigg\}^\frac{pr_\star+q}{2r_\star} \le \bigg\{ \int_\Omega u\bigg\}^\frac{p}{2} \cdot \bigg\{ \int_\Omega v^{r_\star} \bigg\}^\frac{q}{2r_\star} \cdot |\Omega|^0 \le c_1^\frac{p}{2} K^\frac{q}{2} \qquad \mbox{for all } t>0 \end{eqnarray*} due to (\ref{11.1}). $\Box$ \vskip.2cm We can thereby achieve the following estimate for the crucial term $\int_\Omega uv^r$ appearing in Lemma \ref{lem13} below, for certain $r$ depending on the invested integrability parameter $r_\star$. \begin{lem}\label{lem12} Assume \eqref{H1} and suppose that there exists $r_\star\ge 1$ such that \begin{equation} \label{12.01} \sup_{t>0} \|v(\cdot,t)\|_{L^{r_\star}(\Omega)} < \infty. \end{equation} Moreover, let $p\in (0,1)$ be such that $p<\frac{1}{\chi^2}$, and with $q^\pm(p)$ as given by (\ref{1.1}), let $q\in (q^-(p),q^+(p))$ satisfy \begin{equation} \label{12.1} q \le \frac{p(p+1)}{1-p} \cdot r_\star. \end{equation} Then there exists $C>0$ such that \begin{equation} \label{12.2} \int_t^{t+1} \int_\Omega uv^\frac{q}{p} \le C \qquad \mbox{for all } t>0.
\end{equation} \end{lem} {\sc Proof.} \quad From Lemma \ref{lem11} we know that due to (\ref{12.01}) we can pick $c_1>0$ such that \begin{equation} \label{12.3} \Big\|u^\frac{p}{2}(\cdot,t) v^\frac{q}{2}(\cdot,t)\Big\|_{L^\frac{2r_\star}{pr_\star+q}(\Omega)} \le c_1 \qquad \mbox{for all } t>0, \end{equation} and since (\ref{12.1}) warrants that \begin{eqnarray*} \frac{2q}{p[(p+1)r_\star+q]} = \frac{2}{p\cdot \Big[\frac{(p+1)r_\star}{q}+1\Big]} \le \frac{2}{p\cdot\Big[\frac{1-p}{p}+1\Big]} =2, \end{eqnarray*} we may combine the outcome of Lemma \ref{lem10} with Young's inequality to obtain $c_2>0$ fulfilling \begin{equation} \label{12.4} \int_t^{t+1} \Big\| \Big( u^\frac{p}{2}(\cdot,s) v^\frac{q}{2}(\cdot,s)\Big)_x \Big\|_{L^2(\Omega)}^\frac{2q}{p[(p+1)r_\star+q]} ds \le c_2 \qquad \mbox{for all } t>0. \end{equation} As the Gagliardo-Nirenberg inequality provides $c_3>0$ such that \begin{eqnarray*} \|\varphi\|_{L^\frac{2}{p}(\Omega)}^\frac{2}{p} \le c_3\|\varphi_x\|_{L^2(\Omega)}^\frac{2q}{p[(p+1)r_\star+q]} \|\varphi\|_{L^\frac{2r_\star}{pr_\star+q}(\Omega)}^\frac{2(p+1)r_\star}{p[(p+1)r_\star+q]} + c_3 \|\varphi\|_{L^\frac{2r_\star}{pr_\star+q}(\Omega)}^\frac{2}{p} \qquad \mbox{for all } \varphi\in W^{1,2}(\Omega), \end{eqnarray*} combining (\ref{12.4}) with (\ref{12.3}) we thus infer that \begin{eqnarray*} \int_t^{t+1} \int_\Omega uv^\frac{q}{p} &=& \int_t^{t+1} \Big\|u^\frac{p}{2}(\cdot,s) v^\frac{q}{2}(\cdot,s)\Big\|_{L^\frac{2}{p}(\Omega)}^\frac{2}{p} ds \\ &\le& c_3 \int_t^{t+1} \Big\| \Big(u^\frac{p}{2}(\cdot,s)v^\frac{q}{2}(\cdot,s)\Big)_x \Big\|_{L^2(\Omega)}^\frac{2q}{p[(p+1)r_\star+q]} \Big\| u^\frac{p}{2}(\cdot,s)v^\frac{q}{2}(\cdot,s) \Big\|_{L^\frac{2r_\star}{pr_\star+q}(\Omega)}^\frac{2(p+1)r_\star}{p[(p+1)r_\star+q]} ds \\ & & + c_3 \int_t^{t+1} \Big\|u^\frac{p}{2}(\cdot,s)v^\frac{q}{2}(\cdot,s) \Big\|_{L^\frac{2r_\star}{pr_\star+q}(\Omega)}^\frac{2}{p} ds \\[2mm] &\le& c_3 \cdot c_2 c_1^\frac{2(p+1)r_\star}{p[(p+1)r_\star+q]} + c_3 \cdot 
c_1^\frac{2}{p} \end{eqnarray*} for all $t>0$. $\Box$ \vskip.2cm We are now prepared for the announced testing procedure. \begin{lem}\label{lem13} Suppose that (\ref{H1}) holds and that \begin{equation} \label{13.1} \sup_{t>0} \int_\Omega v^{r_\star}(\cdot,t)<\infty \end{equation} for some $r_\star\ge 1$, and let $p\in (0,1)$ be such that $p<\frac{1}{\chi^2}$. Then with $q^\pm(p)$ taken from (\ref{1.1}), for all $q\in (q^-(p),q^+(p))$ fulfilling \begin{equation} \label{13.11} q\le \frac{p(p+1)}{1-p}\cdot r_\star \end{equation} one can find $C>0$ such that \begin{equation} \label{13.4} \int_\Omega v^\frac{q}{p}(\cdot,t) \le C \qquad \mbox{for all } t>0. \end{equation} \end{lem} {\sc Proof.} \quad Since $\Omega$ is bounded, in view of (\ref{13.1}) it is sufficient to consider the case when $r:=\frac{q}{p}$ satisfies $r>1$, and then testing the second equation in (\ref{0}) against $v^{r-1}$ shows that \begin{eqnarray*} \frac{1}{r} \frac{d}{dt} \int_\Omega v^r + (r-1) \int_\Omega v^{r-2} v_x^2 + \int_\Omega v^r = \int_\Omega uv^r + \int_\Omega B_2 v^{r-1} \qquad \mbox{for all } t>0. \end{eqnarray*} Here, Young's inequality and the boundedness of $B_2$ show that there exists $c_1>0$ such that \begin{eqnarray*} \int_\Omega B_2 v^{r-1} \le \frac{1}{2} \int_\Omega v^r + c_1 \qquad \mbox{for all } t>0, \end{eqnarray*} so that $y(t):=\int_\Omega v^r(\cdot,t)$, $t\ge 0$, satisfies \begin{equation} \label{13.5} y'(t) + \frac{r}{2} y(t) \le h(t):=c_1 r + r\int_\Omega u(\cdot,t)v^r(\cdot,t) \qquad \mbox{for all } t>0. \end{equation} Now, thanks to our assumptions on $p$ and $q$, we may apply Lemma \ref{lem12} to conclude from (\ref{13.1}) that there exists $c_2>0$ fulfilling \begin{eqnarray*} \int_t^{t+1} h(s)ds \le c_2 \qquad \mbox{for all } t>0, \end{eqnarray*} and therefore Lemma \ref{lem_ssw} ensures that (\ref{13.4}) is a consequence of (\ref{13.5}).
$\Box$ \vskip.2cm \subsection{Preparations for a recursive argument} As Lemma \ref{lem13} suggests, our strategy toward improved estimates for $v$ will consist in a bootstrap-type procedure, in a first step choosing $r_\star:=1$ in Lemma \ref{lem13} and in each step seeking to maximize the exponent $\frac{q}{p}$ appearing in (\ref{13.4}) according to our overall restrictions on $p$ and $q$ as well as (\ref{13.11}). In order to create an appropriate framework for our iteration, let us introduce certain auxiliary functions, and summarize some of their elementary properties, in the following lemma. \begin{lem}\label{lem14} Let $p_\star:=\min\{1,\frac{1}{\chi^2}\}$ as well as \begin{equation} \label{14.1} \varphi_1(p):=\frac{p+1}{1-p}, \quad \varphi_2(p):=\frac{1-p}{2p} \Big(1+\sqrt{1-p\chi^2}\Big) \quad \mbox{and} \quad \varphi_3(p):=\frac{1-p}{2p} \Big(1-\sqrt{1-p\chi^2}\Big) \end{equation} for $p\in (0,p_\star)$. Then \begin{equation} \label{14.2} \varphi_1'>0 \quad \mbox{and} \quad \varphi_2'<0 \qquad \mbox{on } (0,p_\star), \end{equation} and we have \begin{equation} \label{14.3} \varphi_1(p) > \lim_{s\searrow 0} \varphi_1(s)=1 \qquad \mbox{for all } p\in (0,p_\star), \end{equation} and \begin{equation} \label{14.5} \varphi_2(p)\to + \infty \qquad \mbox{as } p\searrow 0, \end{equation} as well as \begin{equation} \label{14.4} \varphi_2(p) >\varphi_3(p) \qquad \mbox{for all } p\in (0,p_\star). \end{equation} \end{lem} {\sc Proof.} \quad All statements can be verified by elementary computations. $\Box$ \vskip.2cm Now the following observation explains the role of our smallness condition on $\chi$ from Theorem \ref{theo52}. \begin{lem}\label{lem15} Suppose that $\chi<\frac{\sqrt{6\sqrt{3}+9}}{2}$.
Then \begin{equation} \label{15.1} \varphi_1(p_0)>\varphi_3(p_0) \end{equation} is valid for the number \begin{equation} \label{15.2} p_0:=\frac{2\sqrt{3}-3}{3} \in (0,1) \end{equation} satisfying \begin{equation} \label{15.3} p_0<\frac{1}{\chi^2}. \end{equation} \end{lem} {\sc Proof.} \quad We only need to observe that our assumption on $\chi$ warrants that \begin{eqnarray*} p_0\chi^2 < \frac{2\sqrt{3}-3}{3} \cdot \frac{6\sqrt{3}+9}{4} = \frac{3}{4}, \end{eqnarray*} which in particular yields (\ref{15.3}) and moreover implies that by (\ref{14.1}), \begin{eqnarray*} \frac{\varphi_3(p_0)}{\varphi_1(p_0)} -1 &=& \frac{(1-p_0)^2}{2p_0(p_0+1)} \cdot \Big(1-\sqrt{1-p_0\chi^2}\Big) -1 \\ &<& \frac{(1-p_0)^2}{2p_0(p_0+1)} \cdot \frac{1}{2} -1 \\[1mm] &=& 0, \end{eqnarray*} as claimed. $\Box$ \vskip.2cm Indeed, the latter property allows us to construct an increasing divergent sequence $(r_k)_{k\in\mathbb{N}}$ of exponents to be used in Lemma \ref{lem13}. \begin{lem}\label{lem151} Suppose that $\chi<\frac{\sqrt{6\sqrt{3}+9}}{2}$, and that $p_0$ is as in Lemma \ref{lem15}. Then for each $r\ge 1$, the set \begin{equation} \label{151.1} S(r):=\Big\{ p\in (0,p_0) \ \Big| \ \varphi_2(p) \ge \varphi_1(p)\cdot r \Big\} \end{equation} is not empty, and letting $r_0:=1$ as well as \begin{equation} \label{151.2} p_k:=\sup S(r_{k-1}), \qquad k\in\mathbb{N}, \end{equation} and \begin{equation} \label{151.3} r_k:=\varphi_1(p_k) \cdot r_{k-1}, \qquad k\in\mathbb{N}, \end{equation} recursively defines sequences $(p_k)_{k\in\mathbb{N}} \subset (0,p_0]$ and $(r_k)_{k\in\mathbb{N}} \subset (1,\infty)$ satisfying \begin{equation} \label{151.4} p_k \le p_{k-1}, \qquad \mbox{for all } k\in\mathbb{N} \end{equation} and \begin{equation} \label{151.5} r_k>r_{k-1}, \qquad \mbox{for all } k\in\mathbb{N} \end{equation} as well as \begin{equation} \label{151.6} r_k\to\infty, \qquad \mbox{as } k\to\infty.
\end{equation} Moreover, writing \begin{equation} \label{151.7} q_k:=p_k r_k, \qquad k\in\mathbb{N}, \end{equation} we have \begin{equation} \label{151.8} q^-(p_k) < q_k \le q^+(p_k) \qquad \mbox{for all } k\in\mathbb{N} \end{equation} as well as \begin{equation} \label{151.9} q_k \le \frac{p_k(p_k+1)}{1-p_k} \cdot r_{k-1} \qquad \mbox{for all } k\in\mathbb{N}. \end{equation} \end{lem} {\sc Proof.} \quad Observing that $\varphi_1$ and $\varphi_2$ are well-defined on $(0,p_0)$ due to the fact that $p_0<\min\{1,\frac{1}{\chi^2}\}=p_\star$ by (\ref{15.2}) and (\ref{15.3}), from (\ref{14.3}) and (\ref{14.5}) we see that \begin{eqnarray*} \frac{\varphi_2(p)}{\varphi_1(p)} \to + \infty \qquad \mbox{as } p\searrow 0, \end{eqnarray*} implying that indeed $S(r)\ne\emptyset$ for all $r\ge 1$ and that hence the definitions of $(p_k)_{k\in\mathbb{N}}$ and $(r_k)_{k\in\mathbb{N}}$ are meaningful. Moreover, from (\ref{151.2}) and (\ref{151.1}) it is evident that $p_k\in (0,p_0]$ for all $k\in\mathbb{N}$, whereas (\ref{151.3}) together with (\ref{14.3}) guarantees (\ref{151.5}) and that thus also the inclusion $(r_k)_{k\in\mathbb{N}} \subset (1,\infty)$ holds; as therefore $S(r_k) \subset S(r_{k-1})$ for all $k\in\mathbb{N}$, it is also clear that (\ref{151.4}) is valid.\\ In order to verify (\ref{151.6}), assuming on the contrary that \begin{equation} \label{151.99} r_k\to r_\infty \qquad \mbox{as } k\to\infty \end{equation} with some $r_\infty \in (1,\infty)$, we would firstly obtain from (\ref{151.4}) that \begin{equation} \label{151.10} p_k\searrow 0 \qquad \mbox{as } k\to\infty, \end{equation} for otherwise there would exist $p_\infty \in (0,p_0]$ such that $p_k\ge p_\infty$ for all $k\in\mathbb{N}$, which by (\ref{14.1}) would imply that $\varphi_1(p_k)\ge c_1:=\varphi_1(p_\infty)>1$ for all $k\in\mathbb{N}$ and that hence $r_k\ge c_1 r_{k-1}$ for all $k\in\mathbb{N}$ due to (\ref{151.3}), clearly contradicting the assumed boundedness property of $(r_k)_{k\in\mathbb{N}}$.
In particular, (\ref{151.10}) entails the existence of $k_0\in\mathbb{N}$ such that \begin{equation} \label{151.11} \varphi_2(p_k)=\varphi_1(p_k) \cdot r_{k-1} \qquad \mbox{for all } k\ge k_0, \end{equation} because if this were false, then for infinitely many $k\in\mathbb{N}$ we would have $\varphi_2(p)\ge\varphi_1(p)\cdot r_{k-1}$ for all $p\in (0,p_0)$ and thus $p_k=p_0$ for these $k$ by (\ref{151.2}), which is impossible due to (\ref{151.10}). Now combining (\ref{151.11}) with (\ref{151.10}), however, again using (\ref{14.5}) we could infer that \begin{eqnarray*} \varphi_1(p_k) \cdot r_{k-1} = \varphi_2(p_k)\to + \infty \qquad \mbox{as } k\to\infty, \end{eqnarray*} which is incompatible with the observation that \begin{eqnarray*} \varphi_1(p_k)\cdot r_{k-1} \to r_\infty<\infty \qquad \mbox{as } k\to\infty, \end{eqnarray*} as asserted by (\ref{151.10}), (\ref{14.3}) and (\ref{151.99}).\\[5pt] To see that the numbers $q_k$ in (\ref{151.7}) have the claimed properties, we firstly use their definition along with those of $r_k$ and $\varphi_1$ to find that \begin{eqnarray*} q_k=p_k r_k = p_k \varphi_1(p_k) r_{k-1} = \frac{p_k(p_k+1)}{1-p_k} \cdot r_{k-1} \qquad \mbox{for all } k\in\mathbb{N}, \end{eqnarray*} while from (\ref{151.2}) and (\ref{151.1}) it follows that $\varphi_1(p_k)\cdot r_{k-1} \le \varphi_2(p_k)$ and thus \begin{eqnarray*} q_k= p_k \varphi_1(p_k) r_{k-1} \le p_k \varphi_2(p_k) = \frac{1-p_k}{2} \cdot \Big(1+\sqrt{1-p_k\chi^2}\Big) = q^+(p_k) \qquad \mbox{for all } k\in\mathbb{N}.
\end{eqnarray*} Finally, for the derivation of the left inequality in (\ref{151.8}) we make use of the property (\ref{15.1}) of $p_0$: Namely, if $k\in\mathbb{N}$ is such that $\varphi_2(p)\ge \varphi_1(p) \cdot r_{k-1}$ for all $p\in (0,p_0)$, then (\ref{151.2}) says that $p_k=p_0$ and therefore, by (\ref{151.7}), (\ref{151.3}), (\ref{151.5}), (\ref{15.1}) and (\ref{14.1}), \begin{eqnarray*} q_k &=& p_k r_k = p_k \varphi_1(p_k) r_{k-1} = p_0 \varphi_1(p_0) r_{k-1} \ge p_0 \varphi_1(p_0) \\ &>& p_0 \varphi_3(p_0) = p_k \varphi_3(p_k) = \frac{1-p_k}{2}\Big( 1-\sqrt{1-p_k\chi^2}\Big) = q^-(p_k). \end{eqnarray*} On the other hand, in the case when $k\in\mathbb{N}$ is such that $\inf_{p\in (0,p_0)} \big\{ \varphi_2(p)-\varphi_1(p)\cdot r_{k-1}\big\}$ is negative, (\ref{151.2}) implies that necessarily $\varphi_2(p_k)=\varphi_1(p_k)\cdot r_{k-1}$, so that \begin{eqnarray*} q_k=p_k \varphi_1(p_k) r_{k-1} = p_k \varphi_2(p_k) = \frac{1-p_k}{2}\Big(1+\sqrt{1-p_k\chi^2}\Big) > \frac{1-p_k}{2}\Big(1-\sqrt{1-p_k\chi^2}\Big), \end{eqnarray*} because the restriction $p_k\le p_0$ together with (\ref{15.3}) ensures that $\sqrt{1-p_k \chi^2}$ must be positive. $\Box$ \vskip.2cm \subsection{Boundedness of $v$ in $L^r(\Omega)$ for arbitrary $r<\infty$} A straightforward induction on the basis of Lemma \ref{lem13} and Lemma \ref{lem151} leads to the following. \begin{lem}\label{lem152} Let $\chi<\frac{\sqrt{6\sqrt{3}+9}}{2}$ and suppose that \eqref{H1} holds, and let $(r_k)_{k\in\mathbb{N}_0} \subset [1,\infty)$ be as in Lemma \ref{lem151}. Then for all $k\in\mathbb{N}_0$ and any $r\in (1,r_k)\cup \{1\}$ there exists $C>0$ such that \begin{equation} \label{152.1} \int_\Omega v^r(\cdot,t) \le C \qquad \mbox{for all } t>0.
\end{equation} \end{lem} {\sc Proof.} \quad Since for $k=0$ this has been asserted by Lemma \ref{lem02}, in view of an inductive argument we only need to make sure that if for some $k\in\mathbb{N}$ we have \begin{equation} \label{152.2} \sup_{t>0} \int_\Omega v^r(\cdot,t) <\infty \qquad \mbox{for all } r\in (1,r_{k-1}) \cup \{1\}, \end{equation} then \begin{equation} \label{152.3} \sup_{t>0} \int_\Omega v^r(\cdot,t) <\infty \qquad \mbox{for all } r\in (1,r_k). \end{equation} In verifying this, by boundedness of $\Omega$ we may concentrate on values of $r\in (1,r_k)$ which are sufficiently close to $r_k$ such that with $p_k$ as in Lemma \ref{lem151} and $q^-(p_k)$ taken from (\ref{1.1}) we have \begin{equation} \label{152.4} r>\frac{q^-(p_k)}{p_k}, \end{equation} which is possible since from (\ref{151.7}) and (\ref{151.8}) we know that \begin{eqnarray*} p_k r \to q_k= p_k r_k>q^-(p_k) \qquad \mbox{as } r\to r_k. \end{eqnarray*} We now let \begin{equation} \label{152.44} q:=p_k r \end{equation} and \begin{equation} \label{152.5} r_\star:=\max \Big\{ 1 \, , \, \frac{(1-p_k)q}{p_k(p_k+1)} \Big\} \end{equation} and observe that then \begin{equation} \label{152.6} q>q^-(p_k) \end{equation} by (\ref{152.4}) and \begin{equation} \label{152.7} q< p_k r_k \le q^+(p_k) \end{equation} by (\ref{151.7}) and (\ref{151.8}), whereas (\ref{152.5}) ensures that \begin{equation} \label{152.8} q \le \frac{p_k(p_k+1)}{1-p_k} \cdot r_\star. \end{equation} From (\ref{152.5}) it moreover follows that if $r_\star>1$ then since $r<r_k$ implies that $q<q_k$, we have \begin{eqnarray*} r_\star=\frac{(1-p_k)q}{p_k(p_k+1)} <\frac{(1-p_k)q_k}{p_k(p_k+1)} \le r_{k-1} \end{eqnarray*} according to (\ref{151.9}). 
As thus (\ref{152.2}) warrants that \begin{eqnarray*} \sup_{t>0} \int_\Omega v^{r_\star}(\cdot,t)<\infty, \end{eqnarray*} in view of (\ref{152.6}), (\ref{152.7}) and (\ref{152.8}) we may apply Lemma \ref{lem13} to find $c_1>0$ such that \begin{eqnarray*} \int_\Omega v^\frac{q}{p_k}(\cdot,t) \le c_1 \qquad \mbox{for all } t>0, \end{eqnarray*} which thanks to (\ref{152.44}) yields (\ref{152.3}), because $r$ was an arbitrary number in the range described in (\ref{152.3}) and (\ref{152.4}). $\Box$ \vskip.2cm In particular, $v$ remains bounded in $L^r(\Omega)$ for arbitrarily large finite $r$: \begin{cor}\label{cor153} Let $\chi<\frac{\sqrt{6\sqrt{3}+9}}{2}$, and assume \eqref{H1}. Then for all $r\ge 1$ there exists $C>0$ such that \begin{eqnarray*} \int_\Omega v^r(\cdot,t) \le C \qquad \mbox{for all } t>0. \end{eqnarray*} \end{cor} {\sc Proof.} \quad Since Lemma \ref{lem151} asserts that the sequence $(r_k)_{k\in\mathbb{N}}$ introduced there has the property that $r_k\to\infty$ as $k\to\infty$, this is an immediate consequence of Lemma \ref{lem152}. $\Box$ \vskip.2cm \subsection{H\"older regularity of $v$} Once more relying on the first-order estimate provided by Lemma \ref{lem10} and the basic property $\int_0^\infty \int_\Omega uv<\infty$ asserted by Lemma \ref{lem01}, from Corollary \ref{cor153} we can now derive boundedness, and even a certain temporal decay, of the forcing term $uv$ from the second equation in (\ref{0}) with respect to some superquadratic space-time Lebesgue norm. \begin{lem}\label{lem17} Let $\chi<\frac{\sqrt{6\sqrt{3}+9}}{2}$, and assume \eqref{H1}. Then for all $p\in (0,\frac{1}{3})$ fulfilling $p<\frac{1}{\chi^2}$ we have \begin{equation} \label{17.1} \int_t^{t+1} \int_\Omega (uv)^{p+2} \to 0 \qquad \mbox{as } t\to\infty.
\end{equation} \end{lem} {\sc Proof.} \quad We first note that taking $\xi\to\infty$ in Lemma \ref{lem01} shows that our hypothesis (\ref{H1}) warrants that $\int_0^\infty \int_\Omega uv<\infty$ and hence \begin{eqnarray*} \int_t^{t+1} \int_\Omega uv \to 0 \qquad \mbox{as } t\to\infty. \end{eqnarray*} In view of an interpolation argument, it is therefore sufficient to make sure that for all $\widetilde{p} \in (0,\frac{1}{3})$ satisfying $\widetilde{p}<\frac{1}{\chi^2}$ we can find $c_1>0$ such that \begin{equation} \label{17.2} \int_t^{t+1} \int_\Omega (uv)^{\widetilde{p}+2} \le c_1 \qquad \mbox{for all } t>0. \end{equation} For this purpose, given any such $\widetilde{p}$ we can fix $p\in (\widetilde{p},\frac{1}{3})$ such that still $p<\frac{1}{\chi^2}$, and then observe that \begin{eqnarray*} \frac{3p-1}{1-p} < 0 < \sqrt{1-p\chi^2}. \end{eqnarray*} This ensures that with the numbers $q^\pm(p)$ from (\ref{1.1}) we have $q^+(p)>p$, whence it is possible to pick $q\in (q^-(p),q^+(p))$ such that $q>p$.
Writing $r:=\widetilde{p}+2$, by means of the H\"older inequality we can thus estimate \begin{eqnarray*} \int_\Omega (uv)^r &=& \int_\Omega \Big( u^\frac{p}{2} v^\frac{q}{2}\Big)^\frac{2r}{q} \cdot u^\frac{(q-p)r}{q} \\ &\le& \bigg\{ \int_\Omega \Big(u^\frac{p}{2} v^\frac{q}{2}\Big)^\frac{2r}{q-(q-p)r} \bigg\}^\frac{q-(q-p)r}{q} \cdot \bigg\{ \int_\Omega u \bigg\}^\frac{(q-p)r}{q} \\ &\le& c_2 \bigg\{ \int_\Omega \Big(u^\frac{p}{2} v^\frac{q}{2}\Big)^\frac{2r}{q-(q-p)r} \bigg\}^\frac{q-(q-p)r}{q} \\ &=& c_2 \Big\|u^\frac{p}{2} v^\frac{q}{2}\Big\|_{L^\frac{2r}{q-(q-p)r}(\Omega)}^\frac{2r}{q} \qquad \mbox{for all } t>0 \end{eqnarray*} with $c_2:=\sup_{t>0} \|u(\cdot,t)\|_{L^1(\Omega)}^\frac{(q-p)r}{q}$ being finite according to Lemma \ref{lem01} and our assumption that (\ref{H1}) be valid.\\ Consequently, using the Gagliardo-Nirenberg inequality we see that with some $c_3>0$ we have \begin{eqnarray} \label{17.3} \int_t^{t+1} \int_\Omega (uv)^r &\le& c_3 \int_t^{t+1} \Big\| \Big(u^\frac{p}{2}(\cdot,s) v^\frac{q}{2}(\cdot,s)\Big)_x \Big\|_{L^2(\Omega)}^2 \Big\| u^\frac{p}{2}(\cdot,s) v^\frac{q}{2}(\cdot,s)\Big\|_{L^\frac{2}{p+\varepsilon q}(\Omega)}^\frac{2(r-q)}{q} ds \nonumber\\ & & + c_3 \int_t^{t+1} \Big\| u^\frac{p}{2}(\cdot,s) v^\frac{q}{2}(\cdot,s)\Big\|_{L^\frac{2}{p+\varepsilon q}(\Omega)}^\frac{2r}{q} ds \qquad \mbox{for all } t>0, \end{eqnarray} where we have abbreviated \begin{eqnarray*} \varepsilon:=\frac{p+2-r}{r-q}.
\end{eqnarray*} Now since $\varepsilon$ is positive because $r<p+2$ and $r>2>1>q^+(p)>q$, and since thus $\frac{2}{p+\varepsilon q}<\frac{2}{p}$, an application of Lemma \ref{lem11} readily yields $c_4>0$ such that \begin{eqnarray*} \Big\| u^\frac{p}{2}(\cdot,s) v^\frac{q}{2}(\cdot,s)\Big\|_{L^\frac{2}{p+\varepsilon q}(\Omega)} \le c_4 \qquad \mbox{for all } s>0, \end{eqnarray*} whereas the inequalities $p<\min \{1,\frac{1}{\chi^2}\}$ and $q^-(p)<q<q^+(p)$ ensure that due to Lemma \ref{lem10} we can find $c_5>0$ such that \begin{eqnarray*} \int_t^{t+1} \Big\| \Big(u^\frac{p}{2}(\cdot,s) v^\frac{q}{2}(\cdot,s)\Big)_x \Big\|_{L^2(\Omega)}^2 ds \le c_5 \qquad \mbox{for all } t>0. \end{eqnarray*} Therefore, (\ref{17.3}) implies that \begin{eqnarray*} \int_t^{t+1} \int_\Omega (uv)^r \le c_3 c_4^\frac{2(r-q)}{q} c_5 + c_3 c_4^\frac{2r}{q} \qquad \mbox{for all } t>0 \end{eqnarray*} and hence proves (\ref{17.2}) due to our definition of $r$. $\Box$ \vskip.2cm Thanks to the fact that the integrability exponent appearing therein is larger than $2$, the boundedness property implied by the decay statement in Lemma \ref{lem17} allows us to derive boundedness of $v$ even in a space compactly embedded into $L^\infty(\Omega)$. \begin{lem}\label{lem18} Let $\chi<\frac{\sqrt{6\sqrt{3}+9}}{2}$ and assume \eqref{H1}.
Then there exist $\gamma\in (0,1)$ and $C>0$ such that \begin{equation} \label{18.1} \|v(\cdot,t)\|_{C^\gamma(\overline{\Omega})} \le C \qquad \mbox{for all $t>1$.} \end{equation} \end{lem} {\sc Proof.} \quad We fix $\beta \in (\frac{1}{4},\frac{1}{2})$ and any $\gamma\in (0,2\beta-\frac{1}{2})$ and then once more refer to known embedding results (\cite{henry}) to recall that the sectorial realization $A$ of $-(\cdot)_{xx}+1$ under homogeneous Neumann boundary conditions in $L^2(\Omega)$ has the property that its fractional power $A^\beta$ satisfies $D(A^\beta) \hookrightarrow C^\gamma(\overline{\Omega})$. Therefore, writing \begin{eqnarray*} v(\cdot,t)=e^{-A} v(\cdot,t-1) + \int_{t-1}^t e^{-(t-s)A} h(\cdot,s) ds \qquad \mbox{for } t>1 \end{eqnarray*} with \begin{eqnarray*} h(\cdot,t):=u(\cdot,t)v(\cdot,t)+B_2(\cdot,t), \qquad t>0, \end{eqnarray*} we can estimate \begin{eqnarray*} \|v(\cdot,t)\|_{C^\gamma(\overline{\Omega})} \le c_1 \Big\|A^\beta e^{-A} v(\cdot,t-1)\Big\|_{L^2(\Omega)} + c_1 \int_{t-1}^t \Big\| A^\beta e^{-(t-s)A} h(\cdot,s)\Big\|_{L^2(\Omega)} ds \qquad \mbox{for all } t>1, \end{eqnarray*} with some $c_1>0$.
As well-known regularization features of $(e^{-tA})_{t\ge 0}$ (\cite{friedman}) warrant the existence of $c_2>0$ fulfilling \begin{eqnarray*} \Big\| A^\beta e^{-tA} \varphi\Big\|_{L^2(\Omega)} \le c_2 t^{-\beta} \|\varphi\|_{L^2(\Omega)} \qquad \mbox{for all $\varphi\in C^0(\overline{\Omega})$ and any } t>0, \end{eqnarray*} by using the Cauchy-Schwarz inequality we infer that \begin{eqnarray} \label{18.2} \|v(\cdot,t)\|_{C^\gamma(\overline{\Omega})} &\le& c_1 c_2\|v(\cdot,t-1)\|_{L^2(\Omega)} + c_1 c_2 \int_{t-1}^t (t-s)^{-\beta} \|h(\cdot,s)\|_{L^2(\Omega)} ds \nonumber\\ &\le& c_1 c_2\|v(\cdot,t-1)\|_{L^2(\Omega)} + c_1 c_2 \bigg\{ \int_{t-1}^t (t-s)^{-2\beta} ds \bigg\}^\frac{1}{2} \bigg\{ \int_{t-1}^t \|h(\cdot,s)\|_{L^2(\Omega)}^2 ds \bigg\}^\frac{1}{2} \end{eqnarray} for all $t>1$, where we note that \begin{eqnarray*} \int_{t-1}^t (t-s)^{-2\beta} ds = \frac{1}{1-2\beta} \qquad \mbox{for all } t>1 \end{eqnarray*} thanks to our restriction $\beta<\frac{1}{2}$. Since Corollary \ref{cor153} provides $c_3>0$ such that \begin{eqnarray*} \|v(\cdot,t-1)\|_{L^2(\Omega)} \le c_3 \qquad \mbox{for all } t>1, \end{eqnarray*} and since Lemma \ref{lem17} along with the boundedness of $B_2$ in $\Omega\times (0,\infty)$ implies that \begin{eqnarray*} \int_{t-1}^t \|h(\cdot,s)\|_{L^2(\Omega)}^2 ds \le c_4 \qquad \mbox{for all } t>1 \end{eqnarray*} with some $c_4>0$, the inequality in (\ref{18.1}) is thus a consequence of (\ref{18.2}). $\Box$ \vskip.2cm \subsection{Stabilization of $v$ under the hypotheses (\ref{H1}) and (\ref{H3}).
Proof of Theorem \ref{theo52}} As a final preparation for the proof of Theorem \ref{theo52}, let us now make use of the $L^2$ decay property of $uv$ entailed by Lemma \ref{lem17} in order to assert that under the additional assumption (\ref{H3}), $v$ indeed stabilizes toward the desired limit, at least with respect to the topology in $L^2(\Omega)$. \begin{lem}\label{lem19} Let $\chi<\frac{\sqrt{6\sqrt{3}+9}}{2}$, and assume that \eqref{H1} and \eqref{H3} hold with some $B_{2,\infty} \in L^2(\Omega)$. Then \begin{equation} \label{19.2} v(\cdot,t) \to v_\infty \quad \mbox{in } L^2(\Omega) \qquad \mbox{as } t\to\infty, \end{equation} where $v_\infty$ denotes the solution of \eqref{vinfty}. \end{lem} {\sc Proof.} \quad Using (\ref{0}) and (\ref{vinfty}) we compute \begin{eqnarray*} \frac{1}{2} \frac{d}{dt} \int_\Omega (v-v_\infty)^2 &=& \int_\Omega (v-v_\infty)\cdot (v_{xx}+uv-v+B_2) \\ &=& \int_\Omega (v-v_\infty)\cdot \Big\{ (v-v_\infty)_{xx} - (v-v_\infty) + uv + (B_2-B_{2,\infty}) \Big\} \\ &=& - \int_\Omega (v-v_\infty)_x^2 - \int_\Omega (v-v_\infty)^2 + \int_\Omega (v-v_\infty) \cdot \Big\{ uv+ (B_2-B_{2,\infty}) \Big\} \qquad \mbox{for all } t>0, \end{eqnarray*} where the first summand on the right is nonpositive, and where the rightmost integral can be estimated by Young's inequality according to \begin{eqnarray*} \int_\Omega (v-v_\infty) \cdot \Big\{ uv+ (B_2-B_{2,\infty}) \Big\} &\le& \frac{1}{2} \int_\Omega (v-v_\infty)^2 + \frac{1}{2} \int_\Omega \Big\{ uv+(B_2-B_{2,\infty}) \Big\}^2 \\ &\le& \frac{1}{2} \int_\Omega (v-v_\infty)^2 + \int_\Omega (uv)^2 + \int_\Omega (B_2-B_{2,\infty})^2 \qquad \mbox{for all } t>0.
\end{eqnarray*} Therefore, $y(t):=\int_\Omega (v(\cdot,t)-v_\infty)^2$ and $h(t):=2\int_\Omega (u(\cdot,t)v(\cdot,t))^2 + 2\int_\Omega (B_2(\cdot,t)-B_{2,\infty})^2$, \ $t\ge 0$, satisfy \begin{eqnarray*} y'(t)+y(t) \le h(t) \qquad \mbox{for all } t>0, \end{eqnarray*} so that since Lemma \ref{lem17} entails that \begin{eqnarray*} \int_t^{t+1} \int_\Omega (uv)^2 \to 0 \qquad \mbox{as } t\to\infty \end{eqnarray*} and that thus \begin{eqnarray*} \int_t^{t+1} h(s)ds \to 0 \qquad \mbox{as } t\to\infty \end{eqnarray*} thanks to (\ref{H3}), the claimed property (\ref{19.2}) results from Lemma \ref{lem_espejo_win}. $\Box$ \vskip.2cm Collecting all the above, we can easily derive our main result on asymptotic behavior under the assumptions that (\ref{H1}) and possibly also (\ref{H3}) hold.\\[5pt] {\sc Proof of Theorem \ref{theo52}.} \quad Supposing that $\chi < \frac{\sqrt{6\sqrt{3}+9}}{2}$ and that (\ref{H1}) is valid, we obtain the boundedness property (\ref{52.2}) of $v$ in $\Omega\times (0,\infty)$ as a consequence of Lemma \ref{lem18} and Lemma \ref{lem_loc}. If moreover (\ref{H3}) is fulfilled with some $B_{2,\infty}\in L^2(\Omega)$, then from Lemma \ref{lem19} we know that $v(\cdot,t)\to v_\infty$ in $L^2(\Omega)$ as $t\to\infty$. Since Lemma \ref{lem18} actually even warrants precompactness of $(v(\cdot,t))_{t>1}$ in $L^\infty(\Omega)$ by means of the Arzel\`a-Ascoli theorem, this already implies the uniform convergence property claimed in (\ref{52.3}). $\Box$ \vskip.2cm \mysection{Numerical results} In this section we explore the growth of solutions to \eqref{0} as $\chi$ increases on small time scales. The effect of large chemotaxis sensitivities on the growth of the solutions has been observed in Keller-Segel-type systems. From numerical simulations we observe that the $L^\infty$ norm of the criminal density increases sharply with $\chi$ on short time scales before relaxing to the steady-state solution.
Indeed, the solution quickly relaxes to a steady-state solution once the dissipation is able to dominate. For all numerical experiments we consider initial data $u(x,0)= e^{-x}$ and $v(x,0)= e^{-x},$ $B_1 = B_2 = 1$ and vary the parameter $\chi$. All numerical computations were made using Matlab's {\it pdepe} function. In Figure \ref{fig:short} we observe the rapid growth on the short time scale ($t \in [0,.05]$ with time step $\delta t = .001$). This figure illustrates the fact that the criminal density reaches a higher value as $\chi$ increases. \begin{figure}[H] \centering \subfloat[$\chi=20$]{\includegraphics[width=0.35\textwidth]{Max_u_chi20_05.jpg}}\; \subfloat[$\chi=50$]{\includegraphics[width=0.35\textwidth]{Max_u_chi50_05.jpg}}\\ \subfloat[$\chi=100$]{\includegraphics[width=0.35\textwidth]{Max_u_chi100_05.jpg}}\; \subfloat[$\chi=150$]{\includegraphics[width=0.35\textwidth]{Max_u_chi150_05.jpg}}\\ \subfloat[$\chi=500$]{\includegraphics[width=0.35\textwidth]{Max_u_chi500_05.jpg}}\; \subfloat[$\chi=1000$]{\includegraphics[width=0.35\textwidth]{Max_u_chi1000_05.jpg}} \caption{The evolution of the maximum concentration of criminals $\norm{u(\cdot,t)}_{\infty}$ at a short time scale $t\in [0,.05]$ with initial condition given by $(u(x,0),v(x,0))=(e^{-x},e^{-x})$ and $B_1 = B_2 =1$. } \label{fig:short} \end{figure} On the other hand, at longer time scales (though still moderate: $t\in [0,5]$ with $\delta t = .05$) the dissipation dominates and in all cases we see eventual decay. This is illustrated in Figure \ref{fig:long}, where we can see that by time $t=5$ the maximum density of criminals has reached a steady state.
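For readers who wish to reproduce such experiments without Matlab's {\it pdepe}, the setup above can be approximated with a naive explicit finite-difference scheme. The following Python sketch is an illustration only and rests on assumptions not fixed in this section: the spatial domain is taken to be $[0,1]$ with homogeneous Neumann boundary conditions, and the first equation of \eqref{0} is assumed to have the logarithmic-chemotaxis form $u_t = u_{xx} - \chi (u v_x/v)_x - uv + B_1$; only the second equation, $v_t = v_{xx} + uv - v + B_2$, is pinned down by the testing procedure in Lemma \ref{lem13}.

```python
import numpy as np

def solve_crime_model(chi=20.0, L=1.0, N=100, T=0.002, dt=1e-6, B1=1.0, B2=1.0):
    """Explicit Euler / centered-difference sketch with zero-flux (Neumann)
    boundary conditions on [0, L].  The u-equation below is an ASSUMED form;
    only v_t = v_xx + u*v - v + B2 is determined by the text."""
    x = np.linspace(0.0, L, N)
    h = x[1] - x[0]
    u = np.exp(-x)                       # initial data u(x,0) = e^{-x}
    v = np.exp(-x)                       # initial data v(x,0) = e^{-x}

    def ghost(w):                        # reflection enforces w'(0) = w'(L) = 0
        return np.concatenate(([w[1]], w, [w[-2]]))

    def lap(w):                          # discrete Laplacian
        we = ghost(w)
        return (we[2:] - 2.0 * we[1:-1] + we[:-2]) / h ** 2

    def dx1(w):                          # centered first derivative
        we = ghost(w)
        return (we[2:] - we[:-2]) / (2.0 * h)

    for _ in range(int(round(T / dt))):
        flux = u * dx1(v) / v            # chemotactic flux u (ln v)_x
        du = lap(u) - chi * dx1(flux) - u * v + B1
        dv = lap(v) + u * v - v + B2
        u, v = u + dt * du, v + dt * dv  # simultaneous explicit update
    return x, u, v
```

For the larger values of $\chi$ used in the figures, the explicit time step has to be reduced accordingly to keep the advection term stable; an implicit or method-of-lines solver (as {\it pdepe} employs) is preferable for production runs.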
\begin{figure}[H] \centering \subfloat[$\chi=20$]{\includegraphics[width=0.35\textwidth]{Max_u_chi20_5.jpg}}\; \subfloat[$\chi=50$]{\includegraphics[width=0.35\textwidth]{Max_u_chi50_5.jpg}}\\ \subfloat[$\chi=100$]{\includegraphics[width=0.35\textwidth]{Max_u_chi100_5.jpg}}\; \subfloat[$\chi=150$]{\includegraphics[width=0.35\textwidth]{Max_u_chi150_5.jpg}}\\ \subfloat[$\chi=500$]{\includegraphics[width=0.35\textwidth]{Max_u_chi500_5.jpg}}\; \subfloat[$\chi=1000$]{\includegraphics[width=0.35\textwidth]{Max_u_chi1000_5.jpg}} \caption{The evolution of the maximum concentration of criminals $\norm{u(\cdot,t)}_{\infty}$ at a longer time scale ($t\in [0,5]$) with initial condition given by $(u(x,0),v(x,0))=(e^{-x},e^{-x})$ and $B_1 = B_2 =1$. } \label{fig:long} \end{figure} Another interesting observation is that the steady-state value of the maximum density of criminals increases with $\chi$. Thus, we do not see a relaxation to the constant steady states, which in this case are $u\equiv \frac{1}{2}$ and $v \equiv 2$. In fact, relaxation to the homogeneous steady states occurs only for small $\chi$; as $\chi$ increases we instead observe convergence to a non-constant hump-shaped solution with its maximum at the origin, as illustrated in Figure \ref{fig:soln}. \begin{figure}[H] \centering \subfloat[$\chi=12$]{\includegraphics[width=0.35\textwidth]{Soln_chi_5_t_20.jpg}}\; \subfloat[$\chi=13$]{\includegraphics[width=0.35\textwidth]{Soln_chi_13_t_20.jpg}}\\ \subfloat[$\chi=20$]{\includegraphics[width=0.35\textwidth]{Soln_chi_20_t_20.jpg}}\; \subfloat[$\chi=50$]{\includegraphics[width=0.35\textwidth]{Soln_chi_50_t_20.jpg}}\\ \caption{Criminal density $u(x,t)$ at $t =20$ for various values of $\chi$.
} \label{fig:soln} \end{figure} \mysection{Appendix: Two ODE lemmata} Let us separately formulate two auxiliary statements on boundedness and decay in linear ODIs with inhomogeneities enjoying certain averaged boundedness and decay properties. \begin{lem}\label{lem_ssw} Let $T\in (0,\infty]$ and $\tau\in (0,T)$, and let $y\in C^1([0,T))$ and $h\in L^1_{loc}([0,\infty))$ be nonnegative and such that with some $a>0$ and $b>0$ we have \begin{eqnarray*} y'(t) + ay(t) \le h(t) \qquad \mbox{for all } t\in (0,T), \end{eqnarray*} as well as \begin{eqnarray*} \frac{1}{\tau} \int_t^{t+\tau} h(s)ds \le b \qquad \mbox{for all } t\in (0,T). \end{eqnarray*} Then \begin{eqnarray*} y(t) \le y(0) + \frac{b\tau}{1-e^{-a\tau}} \qquad \mbox{for all } t\in [0,T). \end{eqnarray*} \end{lem} {\sc Proof.} \quad This can be found e.g.~in \cite[Lemma 3.4]{win_ks_nasto_exist}. $\Box$ \vskip.2cm \begin{lem}\label{lem_espejo_win} Let $y\in C^1([0,\infty))$ and $h\in L^1_{loc}([0,\infty))$ be nonnegative functions satisfying \begin{eqnarray*} y'(t) + a y(t) \le h(t) \qquad \mbox{for all } t>0 \end{eqnarray*} with some $a>0$. Then if \begin{eqnarray*} \int_t^{t+1} h(s) ds \to 0 \qquad \mbox{as } t\to\infty, \end{eqnarray*} we have \begin{eqnarray*} y(t) \to 0 \qquad \mbox{as } t\to\infty. \end{eqnarray*} \end{lem} {\sc Proof.} \quad An elementary derivation of this has been given in \cite[Lemma 4.6]{espejo_win1}, for instance. $\Box$ \vskip.2cm \vspace*{5mm} {\bf Acknowledgement.} \quad The second author acknowledges support of the Deutsche Forschungsgemeinschaft in the context of the project {\em Emergence of structures and advantages in cross-diffusion systems} (No.~411007140, GZ: WI 3707/5-1). The first author was supported by the NSF Grant DMS-1516778. \end{document}
\begin{document} \begin{abstract} This paper is a sequel to our preceding paper (N. Kita: Constructive characterization for signed analogue of critical graphs I: Principal classes of radials and semiradials. arXiv preprint, arXiv:2001.00083, 2019). In the preceding paper, the concepts of radials and semiradials are introduced, and constructive characterizations for five principal classes of radials and semiradials are provided. Radials are a common analogue of critical graphs from matching theory and of a class of directed graphs called flowgraphs, whereas semiradials are a relaxed concept of radials. Five classes of radials and semiradials, that is, absolute semiradials, strong and almost strong radials, linear semiradials, and sublinear radials, were defined and characterized in that paper. In this paper, we use these characterizations to provide a constructive characterization of general radials and semiradials. \end{abstract} \maketitle \section{Definitions} We use the notation introduced in our preceding paper~\cite{kita2019constructive}. For basic notation for sets and graphs, we mostly follow Schrijver~\cite{schrijver2003}. In the following, we list some newly introduced notation. Let $\alpha \in \{+, -\}$. Let $G$ be a bidirected graph, and let $S \subseteq V(G)$. We denote by $\ocut{G}{S}{\alpha}$ the set of edges from $\parcut{G}{S}$ in which the end in $S$ has the sign $\alpha$. The set $\oNei{G}{S}{\alpha}$ denotes the set of vertices from $\parNei{G}{S}$ that are joined with vertices in $S$ by an edge from $\ocut{G}{S}{\alpha}$. Let $X\subseteq V(G)$. The bidirected graph obtained from $G$ by contracting $X$ is denoted by $G/X$. That is, $G/X$ is the bidirected graph obtained from $G$ by regarding $X$ as a single vertex $[X]$ and deleting the arising loops over $[X]$. \begin{definition} Let $\alpha, \beta \in\{+, -\}$, let $G$ be a bidirected graph, and let $r\in V(G)$.
The set of vertices that can reach $r$ by an $(\alpha, \beta)$-ditrail is denoted by $\reachset{G}{r}{\alpha}{\beta}$. The set of vertices that can reach $r$ by an $\alpha$-ditrail is denoted by $\reachsets{G}{r}{\alpha}$. \end{definition} \begin{definition} Let $\alpha\in\{+, -\}$, let $G$ be a bidirected graph, and let $r\in V(G)$. A vertex $x\in V(G)$ is {\em absolute} with respect to $r$ if $x \in \reachsets{G}{r}{\alpha} \cap \reachsets{G}{r}{-\alpha}$ holds. By contrast, $x$ is {\em $\alpha$-linear} with respect to $r$ if $x \in \reachsets{G}{r}{\alpha}\setminus \reachsets{G}{r}{-\alpha}$ holds. A vertex $x\in V(G)$ is {\em $\alpha$-strong} with respect to $r$ if $x \in \reachset{G}{r}{\alpha}{-\alpha}\cap \reachset{G}{r}{-\alpha}{-\alpha}$ holds. By contrast, $x$ is {\em $\alpha$-sublinear} with respect to $r$ if $x\in \reachset{G}{r}{\alpha}{-\alpha} \setminus \reachset{G}{r}{-\alpha}{-\alpha}$ holds. If $G$ is an $\alpha$-semiradial with root $r$, then an absolute or $\alpha$-linear vertex with respect to $r$ can simply be called an absolute or linear vertex, respectively. If $G$ is an $\alpha$-radial with root $r$, then an $\alpha$-strong or sublinear vertex with respect to $r$ can simply be called a strong or sublinear vertex, respectively. \end{definition} \begin{definition} Let $G$ and $H$ be disjoint bidirected graphs. Let $s \in V(G)$ and $T\subseteq V(H)$. A gluing sum of $G$ and $H$ with respect to $s$ and $T$ is a bidirected graph obtained by identifying $s$ and $T$. More precisely, a bidirected graph $\hat{G}$ is a {\em gluing sum} of $G$ and $H$ with respect to $s$ and $T$ if it satisfies the following properties: \begin{rmenum} \item $V(\hat{G}) = ( V(G)\setminus \{s\} ) \cup V(H)$. \item Let $F \subseteq E(G)$ be the union of $\parcut{G}{s}$ and the set of loops over $s$. Then, $( E(G)\setminus F) \cup E(H) \subseteq E(\hat{G})$. 
Also, there is a one-to-one mapping $f$ from $F$ to $E(\hat{G})\setminus (E(G)\setminus F) \setminus E(H)$ that satisfies the following properties: \begin{rmenum} \item If $e \in F$ is an edge between $s$ and $v\in \parNei{G}{s}$, then $f(e)$ is an edge between a vertex $t \in T$ and $v$ such that the signs of $t$ and $v$ over $f(e)$ are equal to the signs of $s$ and $v$ over $e$, respectively; and, \item if $e\in F$ is a loop, then $f(e)$ is an $(\alpha_1, \alpha_2)$-edge between (possibly identical) vertices from $T$, where $\alpha_1, \alpha_2 \in \{+ , -\}$ are the signs of $s$ over $e$. \end{rmenum} \end{rmenum} We denote such $\hat{G}$ by $(G; s)\oplus (H; T)$. \end{definition} \begin{remark} Note that a gluing sum is not always uniquely determined. However, for convenience, we may write $\hat{G} = (G; s)\oplus (H; T)$ to claim that $\hat{G}$ is a gluing sum of $G$ and $H$ with respect to $s$ and $T$. \end{remark} \section{Edge Adding and Deleting Lemmas} \label{sec:edgelem} \subsection{Edge Adding Lemmas} \label{sec:edgelem:add} In Section~\ref{sec:edgelem:add}, we introduce Lemmas~\ref{lem:edgeaddr} and \ref{lem:edgeaddsr} to be used in later sections. For proving Lemma~\ref{lem:edgeaddr}, we first provide and prove Lemmas~\ref{lem:nobypass} and \ref{lem:oneadd}. \begin{lemma} \label{lem:nobypass} Let $\alpha \in \{+, -\}$. Let $G$ be a bidirected graph, and let $r\in V(G)$. Let $x, y\in V(G)$. If either $x, y \in \reachset{G}{r}{\alpha}{-\alpha} \setminus \reachset{G}{r}{-\alpha}{-\alpha}$ or $x, y \in \reachsets{G}{r}{\alpha} \setminus \reachsets{G}{r}{-\alpha}$ holds, then there is no $(-\alpha, -\alpha)$-ditrail from $x$ to $y$. \end{lemma} \begin{proof} Suppose the contrary, and let $P$ be a $(-\alpha, -\alpha)$-ditrail from $x$ to $y$. First, consider the case $x, y \in \reachsets{G}{r}{\alpha} \setminus \reachsets{G}{r}{-\alpha}$. Let $Q$ be an $\alpha$-ditrail from $y$ to $r$. 
Trace $Q^{-1}$ from $r$, and let $z$ be the first encountered vertex from $P$. Then, either $xPz + zQr$ or $yP^{-1}z + zQr$ is a $-\alpha$-ditrail that contradicts $x\not\in \reachsets{G}{r}{-\alpha}$ or $y\not\in \reachsets{G}{r}{-\alpha}$, respectively. The case $x, y \in \reachset{G}{r}{\alpha}{-\alpha} \setminus \reachset{G}{r}{-\alpha}{-\alpha}$ can be proved in a similar way. The lemma is proved. \end{proof} Lemma~\ref{lem:nobypass} implies Lemma~\ref{lem:oneadd}. \begin{lemma} \label{lem:oneadd} Let $\alpha \in \{+, -\}$. Let $G$ be a bidirected graph, and let $H$ be an induced subgraph of $G$ that is an $\alpha$-radial with root $r \in V(H)$. Assume $\parNei{G}{V(G)\setminus V(H)} \cap \reachset{G}{r}{-\alpha}{-\alpha} = \emptyset$. Let $u \in V(H)\setminus \reachset{G}{r}{-\alpha}{-\alpha}$ and $v\in V(G)\setminus V(H)$. Let $G'$ be a bidirected graph obtained from $G$ by adding an edge $e$ between $u$ and $v$ in which the sign of $u$ is $\alpha$. Let $x\in V(G)$ and $\beta\in\{+, -\}$. If $G'$ has a $(\beta, -\alpha)$-ditrail $P$ from $x$ to $r$ with $e\in E(P)$, then $G'$ has a $(\beta, -\alpha)$-ditrail $P'$ from $x$ to $r$ with $E(P') \cap \parcut{G}{H} \subseteq E(P)\cap \parcut{G}{H} \setminus \{e\}$. \end{lemma} \begin{proof} Let $P$ be a $(\beta, -\alpha)$-ditrail from $x$ to $r$ with $e\in E(P)$. If $P$ contains $(v, e, u)$ as a subditrail, then $uPr$ is a $(-\alpha, -\alpha)$-ditrail from $u$ to $r$, which contradicts $u\not\in \reachset{G}{r}{-\alpha}{-\alpha}$. Therefore, $P$ contains $(u, e, v)$. Thus, $xPu$ is a $(\beta, -\alpha)$-ditrail. First, consider the case $x\in V(G)\setminus V(H)$. Trace $P$ from $x$, and let $y$ be the first encountered vertex that is in $V(H)$. Note $e\not\in E(xPy)$. Note also that $xPy$ is a $(\beta, -\alpha)$-ditrail; for, otherwise, $yPu$ is a $(-\alpha, -\alpha)$-ditrail, which contradicts Lemma~\ref{lem:nobypass}. Let $Q$ be an $(\alpha, -\alpha)$-ditrail of $H$ from $y$ to $r$.
Then, $xPy + Q$ is the desired ditrail $P'$. Next, consider the case $x\in V(H)$. First, suppose $\beta = \alpha$. Because $H$ is an $\alpha$-radial with root $r$, there is clearly an $(\alpha, -\alpha)$-ditrail $P'$ in $G'$ that satisfies the condition. Now, suppose $\beta = -\alpha$. Then, $xPu$ is a $(-\alpha, -\alpha)$-ditrail. Let $R$ be an $(\alpha, -\alpha)$-ditrail of $H$ from $x$ to $r$. Trace $R^{-1}$ from $r$, and let $z$ be the first encountered vertex from $xPu$. Because there is no $(-\alpha, -\alpha)$-ditrail between $u$ and $r$, it follows that $xPz + zRr$ is a $(-\alpha, -\alpha)$-ditrail from $x$ to $r$. This ditrail is the desired ditrail $P'$. Thus, the claim is proved for the case $x\in V(H)$. This proves the lemma. \end{proof} Lemma~\ref{lem:edgeaddr} can be derived from Lemma~\ref{lem:oneadd}. \begin{lemma} \label{lem:edgeaddr} Let $\alpha \in \{+, -\}$, let $G$ be a bidirected graph, and let $H$ be an induced subgraph of $G$ that is an $\alpha$-radial with root $r \in V(H)$. Assume $\parNei{G}{V(G)\setminus V(H)} \cap \reachset{G}{r}{-\alpha}{-\alpha} = \emptyset$. Let $G'$ be a bidirected graph obtained from $G$ by adding some edges between $V(H) \setminus \reachset{G}{r}{-\alpha}{-\alpha}$ and $V(G)\setminus V(H)$ in which the ends in $V(H)$ have the sign $\alpha$. Then, $\reachset{G'}{r}{\beta}{-\alpha} = \reachset{G}{r}{\beta}{-\alpha}$ holds for each $\beta\in\{+, -\}$. \end{lemma} \begin{proof} Let $F := E(G')\setminus E(G)$. That is, $G' = G + F$. We prove the lemma by induction on $|F|$. If $|F| = 1$, then Lemma~\ref{lem:oneadd} proves the claim. Let $|F| > 1$, and assume that the claim holds for every case where $|F|$ is smaller. Let $e\in F$, and let $G'' := G + (F\setminus \{e\})$. Hence, $G' = G'' + e$. From the induction hypothesis, $\reachset{G''}{r}{\beta}{-\alpha} = \reachset{G}{r}{\beta}{-\alpha}$ holds for each $\beta\in\{+, -\}$.
Thus, $H$ is an induced subgraph of $G''$ that is an $\alpha$-radial with root $r \in V(H)$ such that $\parNei{G''}{V(G'') \setminus V(H)} \cap \reachset{G''}{r}{-\alpha}{-\alpha} = \emptyset$, and the end of $e$ in $V(H)$ is not contained in $\reachset{G}{r}{-\alpha}{-\alpha}$. Therefore, Lemma~\ref{lem:oneadd} implies that $\reachset{G'}{r}{\beta}{-\alpha} = \reachset{G''}{r}{\beta}{-\alpha}$ holds for each $\beta\in\{+, -\}$. Consequently, $\reachset{G'}{r}{\beta}{-\alpha} = \reachset{G}{r}{\beta}{-\alpha}$ is obtained for each $\beta\in\{+, -\}$. This proves the lemma. \end{proof} It can easily be confirmed by similar arguments that the semiradial versions of Lemmas~\ref{lem:oneadd} and \ref{lem:edgeaddr} also hold. As such, Lemma~\ref{lem:edgeaddsr} is obtained. \begin{lemma} \label{lem:edgeaddsr} Let $\alpha \in \{+, -\}$, let $G$ be a bidirected graph, and let $H$ be an induced subgraph of $G$ that is an $\alpha$-semiradial with root $r \in V(H)$. Assume $\parNei{G}{V(G)\setminus V(H)} \cap \reachsets{G}{r}{-\alpha} = \emptyset$. Let $G'$ be a bidirected graph obtained from $G$ by adding some edges between $V(H) \setminus \reachsets{G}{r}{-\alpha}$ and $V(G)\setminus V(H)$ in which the ends in $V(H)$ have the sign $\alpha$. Then, $\reachsets{G'}{r}{\beta} = \reachsets{G}{r}{\beta}$ holds for each $\beta\in\{+, -\}$. \end{lemma} \subsection{Edge Deleting Lemmas} \label{sec:edgelem:del} In Section~\ref{sec:edgelem:del}, we provide Lemmas~\ref{lem:r2delete} and \ref{lem:sr2delete} to be used in later sections. Lemma~\ref{lem:edgeaddr} easily implies Lemma~\ref{lem:r2delete}. \begin{lemma} \label{lem:r2delete} Let $\alpha \in \{+, -\}$, let $G$ be a bidirected graph, and let $H$ be an induced subgraph of $G$ that is an $\alpha$-radial with root $r \in V(H)$ such that $\parNei{G}{V(G)\setminus V(H)} \cap \reachset{G}{r}{-\alpha}{-\alpha} = \emptyset$. Let $F \subseteq \ocut{G}{H}{\alpha}$.
Then, $\reachset{G - F}{r}{\beta}{-\alpha} = \reachset{G}{r}{\beta}{-\alpha}$ holds for each $\beta\in\{+, -\}$. \end{lemma} \begin{proof} Let $G' := G - F$. That is, $G = G' + F$. It is clear that $H$ is an induced subgraph of $G'$ that is an $\alpha$-radial with root $r$. Because $\reachset{G'}{r}{-\alpha}{-\alpha} \subseteq \reachset{G}{r}{-\alpha}{-\alpha}$ clearly holds, we have $\parNei{G'}{V(G')\setminus V(H)} \cap \reachset{G'}{r}{-\alpha}{-\alpha} = \emptyset$. Hence, Lemma~\ref{lem:edgeaddr} implies the claim. \end{proof} It is easily observed that the semiradial version of Lemma~\ref{lem:r2delete} also holds, which is stated as Lemma~\ref{lem:sr2delete}. \begin{lemma} \label{lem:sr2delete} Let $\alpha \in \{+, -\}$, let $G$ be a bidirected graph, and let $H$ be an induced subgraph of $G$ that is an $\alpha$-semiradial with root $r$ such that $\parNei{G}{V(G)\setminus V(H)} \cap \reachsets{G}{r}{-\alpha} = \emptyset$. Let $F \subseteq \ocut{G}{H}{\alpha}$. Then, $\reachsets{G -F}{r}{\beta} = \reachsets{G}{r}{\beta}$ holds for each $\beta\in\{+, -\}$. \end{lemma} \section{Neighbor Lemma} \begin{lemma} \label{lem:neigh2ear} Let $G$ be a bidirected graph, let $r\in V(G)$, and let $S$ be a set of vertices with $\{r\} \subseteq S \subsetneq V(G)$. Let $x\in V(G)\setminus S$ and $y \in S$ be adjacent vertices, and let $\beta \in \{+, -\}$ be the sign of $x$ over $xy$. If $G$ has a $-\beta$-ditrail from $x$ to $r$, then there is a diear relative to $S$ that is either \begin{rmenum} \item a simple diear that starts with subditrail $(y, yx, x)$ or ends with subditrail $(x, xy, y)$, or \item a scoop diear whose grip is $xy$. \end{rmenum} \end{lemma} \begin{proof} Let $P$ be a $-\beta$-ditrail from $x$ to $r$. Trace $P$ from $x$, and let $z$ be the first encountered vertex in $S$. If $xPz$ does not contain $xy$, then $(y, yx, x) + xPz$ is a simple diear relative to $S$. 
If $xPz$ contains $xy$, then $xPz$ ends with the sequence $(x, xy, y)$; this further implies that $(y, yx, x) + xPz$ forms a scoop diear relative to $S$ whose grip is $xy$. \end{proof} \section{Grounds for Semiradials} In this section, we introduce the concepts of absolute and linear grounds for semiradials and provide some properties to be used in later sections. Lemma~\ref{lem:sr2union} can easily be confirmed. \begin{lemma}\label{lem:sr2union} Let $\alpha\in\{+, -\}$. Let $G$ be a bidirected graph, and let $r\in V(G)$. Let $H_1$ and $H_2$ be subgraphs of $G$ that are $\alpha$-semiradials with root $r$. Then, $H_1 + H_2$ is also an $\alpha$-semiradial with root $r$. Furthermore, \begin{rmenum} \item if $H_1$ and $H_2$ are both absolute, then $H_1 + H_2$ is also absolute; and, \item if all vertices of $H_1$ and $H_2$ except $r$ are $\alpha$-linear in $G$ with respect to $r$, which implies that $H_1$ and $H_2$ are both linear, then $H_1 + H_2$ is also linear. \end{rmenum} \end{lemma} Under Lemma~\ref{lem:sr2union}, absolute and linear grounds for semiradials can be defined. \begin{definition} Let $\alpha\in\{+, -\}$. Let $G$ be a bidirected graph, and let $r\in V(G)$. The maximum subgraph of $G$ that is an absolute $\alpha$-semiradial with root $r$ is called the {\em absolute ground}. By contrast, the {\em linear ground} of $G$ is the maximum subgraph that is a linear $\alpha$-semiradial with root $r$ in which every vertex except the root is linear in $G$. \end{definition} Every $\alpha$-semiradial has absolute and linear grounds because the subgraph induced by $r$ is trivially an absolute or linear $\alpha$-semiradial. An absolute or linear ground is said to be {\em trivial} if it comprises only one vertex, that is, the root, and no edge. The intersection of the absolute and linear grounds comprises only one vertex, that is, the root. \begin{lemma} \label{lem:rnei2alt} Let $\alpha\in\{+, -\}$. Let $G$ be an $\alpha$-semiradial with root $r\in V(G)$.
Then, every neighbor of $r$ is contained in either the absolute or linear ground of $G$. If $v\in \parNei{G}{r}$ is linear in $G$, then it is contained in the linear ground of $G$. Otherwise, that is, if $v$ is not linear in $G$, then it is contained in the absolute ground of $G$. \end{lemma} The absolute ground of an $\alpha$-semiradial is trivial if and only if every neighbor of $r$ is linear. The linear ground of an $\alpha$-semiradial is trivial if and only if no neighbor of $r$ is linear. \begin{definition} Let $\alpha\in\{+, -\}$. We say that an $\alpha$-semiradial with root $r$ is {\em sharp} if every neighbor of $r$ is linear. \end{definition} That is, by Lemma~\ref{lem:rnei2alt}, an $\alpha$-semiradial with root $r$ is sharp if and only if its absolute ground is trivial. We use this fact throughout this paper, sometimes without explicitly mentioning it. \begin{lemma} \label{lem:abgnei2lin} Let $\alpha\in\{+, -\}$. Let $G$ be an $\alpha$-semiradial with root $r\in V(G)$. Let $H$ be the absolute ground of $G$. Then, every neighbor of $H$ is linear in $G$. \end{lemma} \begin{proof} Suppose that $x\in\parNei{G}{H}$ is absolute in $G$. Then, Lemma~\ref{lem:neigh2ear} implies that there is a diear relative to $H$. By Theorem~\ref{thm:asr}, this contradicts the maximality of $H$. Thus, the lemma is proved. \end{proof} \begin{lemma} \label{lem:lg2neigh} Let $\alpha\in\{+, -\}$, and let $G$ be a sharp $\alpha$-semiradial with root $r\in V(G)$. Let $H$ be the linear ground of $G$. If $V(G)\setminus V(H) \neq \emptyset$, then the following hold: \begin{rmenum} \item \label{item:noear} There is no simple $(-\alpha, -\alpha)$-diear relative to $H$. \item \label{item:scoop} $\ocut{G}{H}{-\alpha}\neq \emptyset$. For each $e \in \ocut{G}{H}{-\alpha}$, the end $u \in V(G)\setminus V(H)$ of $e$ is absolute in $G$, and there is a $-\alpha$-scoop diear relative to $H$ whose grip is $e$.
\end{rmenum} \end{lemma} \begin{proof} For proving \ref{item:noear}, suppose the contrary, and let $P$ be a simple $(-\alpha, -\alpha)$-diear relative to $H$. Let $y$ and $z$ be the first and last vertices of $P$, respectively. Let $Q$ be an $\alpha$-ditrail of $H$ from $z$ to $r$. Then, $P + Q$ is a $-\alpha$-ditrail from $y$ to $r$, which contradicts the linearity of $y\in V(H)$. Therefore, \ref{item:noear} follows. Let $x\in V(G)\setminus V(H)$. Then, it is easily observed that there is an $(\alpha, -\alpha)$-ditrail from $x$ to a vertex in $H$ whose edges are disjoint from $E(H)$. This implies $\ocut{G}{H}{-\alpha} \neq \emptyset$. If $u$ is linear in $G$, then $H + e$ forms a linear $\alpha$-semiradial in which every vertex is linear. This contradicts the maximality of $H$. As such, $u$ is absolute in $G$. This further implies from Lemma~\ref{lem:neigh2ear} and the statement \ref{item:noear} that there is a scoop diear relative to $H$ whose grip is $e$. This completes the proof. \end{proof} \section{Grounds for Radials} \label{sec:ground4r} \subsection{Strong and Almost Strong Grounds} \label{sec:ground4r:ground} In Section~\ref{sec:ground4r:ground}, we introduce the concept of strong and almost strong grounds for radials and provide some properties to be used in later sections. Lemma~\ref{lem:r2union} is easily confirmed. \begin{lemma} \label{lem:r2union} Let $\alpha\in\{+, -\}$. Let $G$ be a bidirected graph, and let $r\in V(G)$. Let $H_1$ and $H_2$ be subgraphs of $G$ that are $\alpha$-radials with root $r$. Then, $H_1 + H_2$ is also an $\alpha$-radial with root $r$. Furthermore, if $H_1$ and $H_2$ are both strong or almost strong, then $H_1 + H_2$ is also strong or almost strong. \end{lemma} Under Lemma~\ref{lem:r2union}, strong and almost strong grounds of radials can be defined. \begin{definition} Let $\alpha\in\{+, -\}$. Let $G$ be an $\alpha$-radial with root $r$. Let $H$ be the maximum subgraph of $G$ that is a strong or almost strong $\alpha$-radial with root $r$.
We call $H$ the {\em strong} or {\em almost strong ground} if it is a strong or almost strong $\alpha$-radial with root $r$, respectively. \end{definition} If the root $r$ is strong, then $G$ has a strong ground but never has an almost strong ground. If $r$ is sublinear, then $G$ never has a strong ground but has an almost strong ground. A strong or almost strong ground in a radial is said to be {\em trivial} if it comprises a single vertex, that is, the root, and no edge. Strong grounds can never be trivial, whereas almost strong grounds can be trivial. We use these facts in the remainder of this paper, sometimes without explicitly mentioning them. \begin{lemma} \label{lem:abgneigh2sublinear} \label{lem:astgneigh2sublinear} Let $\alpha \in \{+, -\}$. Let $G$ be an $\alpha$-radial with root $r$. \begin{rmenum} \item \label{item:str} Assume that $r$ is strong in $G$, and let $H$ be the strong ground of $G$. Then, every vertex from $\parNei{G}{H}$ is sublinear in $G$. \item \label{item:sublinr} Assume that $r$ is sublinear in $G$, and let $H$ be the almost strong ground of $G$. Then, every neighbor of $V(H)\setminus \{r\}$ is sublinear in $G$. \end{rmenum} \end{lemma} \begin{proof} We first prove \ref{item:sublinr}. Let $v\in \parNei{G}{ V(H)\setminus \{r\} }$. If $v = r$, then $v$ is sublinear by assumption. Let $v \neq r$. Let $w\in V(H)\setminus \{r\}$ be a vertex that is adjacent to $v$ with an edge $e$, and let $\beta\in\{+, -\}$ be the sign of $v$ over $e$. Suppose that $v$ is strong in $G$. Then, $G$ has a $(-\beta, -\alpha)$-ditrail $P$ from $v$ to $r$. Trace $P$ from $v$, and let $x$ be the first encountered vertex that is in $V(H)$. First, consider the case $x = r$. Because $r$ is sublinear, $vPx$ is a $(-\beta, -\alpha)$-ditrail. Let $\gamma$ be the sign of $w$ over $e$, and let $Q$ be a $(-\gamma, -\alpha)$-ditrail of $H$ from $w$ to $r$. Then, $xP^{-1}v + (v, e, w) + Q$ is a closed $(-\alpha, -\alpha)$-ditrail over $r$, which is a contradiction.
Next, consider the case $x \in V(H)\setminus \{r\}$. Then, $(w, e, v) + vPx$ is a diear relative to $H$ that does not contain $r$. By Theorem~\ref{thm:asr2char}, this contradicts the maximality of $H$. The statement \ref{item:str} can be proved in a similar way using Theorem~\ref{thm:str}. The lemma is proved. \end{proof} \subsection{Extended Grounds for Radials with Sublinear Root} In this section, we introduce the concept of extended grounds for radials with sublinear root and provide their properties. Lemma~\ref{lem:shell2sum} can easily be confirmed. \begin{lemma} \label{lem:shell2sum} Let $\alpha \in \{+, -\}$. Let $G$ be an $\alpha$-radial with sublinear root $r$. Let $H$ be the almost strong ground of $G$. Let $I_1$ and $I_2$ be subgraphs of $G$ such that, for each $i\in \{1,2\}$, $I_i$ is an $\alpha$-radial with root $r$, $V(H)\subseteq V(I_i)$ holds, and every vertex in $V(I_i)\setminus V(H)$ is sublinear in $G$. Then, $I_1 + I_2$ is also an $\alpha$-radial with root $r$ that contains $V(H)$ and for which every vertex in $V(I_1 + I_2)\setminus V(H)$ is sublinear in $G$. \end{lemma} From Lemma~\ref{lem:shell2sum}, the concept of extended grounds can be introduced. \begin{definition} Let $\alpha \in \{+, -\}$. Let $G$ be an $\alpha$-radial with sublinear root $r$. Let $H$ be the almost strong ground of $G$. The {\em extended ground} of $G$ is the maximum subgraph $I$ of $G$ such that $I$ is an $\alpha$-radial with root $r$ and contains $H$, and every vertex in $V(I)\setminus V(H)$ is sublinear in $G$. We call $V(I)\setminus V(H)$ the {\em shell} of $I$. Let $S_1$ be the subset of the shell defined as follows: a vertex $x$ from the shell is in $S_1$ if $I$ has an $(\alpha, -\alpha)$-ditrail from $x$ to $r$ that is disjoint from $V(H)\setminus \{r\}$; let $S_2 := (V(I)\setminus V(H)) \setminus S_1$. We call $S_1$ and $S_2$ the {\em first} and {\em second} shells, respectively. We call $G$ a {\em triplex} if $G$ is equal to its extended ground.
\end{definition} Lemmas~\ref{lem:rrneigh2g} and \ref{lem:shellneigh2strong} are provided to be used in later sections. \begin{lemma} \label{lem:rrneigh2g} Let $\alpha\in\{+, -\}$. Let $G$ be an $\alpha$-radial with sublinear root $r$. Let $v \in \oNei{G}{r}{-\alpha}$. If $v$ is strong, then it is contained in the almost strong ground. If $v$ is sublinear, then it is contained in the first shell. \end{lemma} \begin{proof} Let $H$ and $I$ be the almost strong and extended grounds, respectively. If $v$ is a strong vertex, then Lemma~\ref{lem:neigh2ear} implies that there is a simple $(-\alpha, -\alpha)$-diear or $-\alpha$-scoop diear with grip $vr$ that is relative to $\{r\}$; clearly, there is a $-\alpha$-scoop diear with grip $vr$. Hence, Theorem~\ref{thm:asr2char} implies $v \in V(H)$. If $v$ is a sublinear vertex, then the sign of $v$ over $vr$ is $\alpha$. Therefore, $G.vr$ is a subgraph of $I$ that is a sublinear $\alpha$-radial with root $r$. Consequently, $v$ is contained in the first shell. This completes the proof of this lemma. \end{proof} Lemma~\ref{lem:shellneigh2strong} can be easily confirmed from Lemma~\ref{lem:neigh2ear} by a discussion similar to that in the proof of Lemma~\ref{lem:lg2neigh}. \begin{lemma} \label{lem:shellneigh2strong} Let $\alpha \in \{+, -\}$. Let $G$ be an $\alpha$-radial with sublinear root $r$. Let $I$ be the extended ground of $G$. Then, the following properties hold: \begin{rmenum} \item \label{item:shellneigh2strong:r2sublin} There is no simple $(-\alpha, -\alpha)$-diear relative to $I$. \item \label{item:shellneigh2strong:round} $\oNei{G}{I}{-\alpha} \neq \emptyset$, and every vertex from $\oNei{G}{I}{-\alpha}$ is strong in $G$. \end{rmenum} \end{lemma} Observation~\ref{obs:shell} can easily be confirmed from the definition of shells. \begin{observation} \label{obs:shell} Let $\alpha \in \{+, -\}$. Let $G$ be an $\alpha$-radial with sublinear root $r$. Let $H$ and $I$ be the almost strong and extended grounds of $G$, respectively.
Let $S_1$ and $S_2$ be the first and second shells of $I$, respectively. Then, the following properties hold: \begin{rmenum} \item If $S_1 = S_2 = \emptyset$, then $G = I = H$. \item If $V(H) = \{r\}$, then $S_2 = \emptyset$. \end{rmenum} \end{observation} In the following two lemmas, we investigate the internal structure of extended grounds and show that an extended ground is a combination of two radials and one semiradial. Lemma~\ref{lem:shell2path} is provided for proving Lemma~\ref{lem:shell}. \begin{lemma} \label{lem:shell2path} Let $\alpha \in \{+, -\}$. Let $G$ be an $\alpha$-radial with sublinear root $r$. Let $H$ and $I$ be the almost strong and extended grounds of $G$, respectively. Let $S\subseteq V(G)$ be the shell of $I$, and let $S_1$ and $S_2$ be the first and second shells, respectively. Then, the following properties hold: \begin{rmenum} \item \label{item:shell2path:r} If $P$ is an $(\alpha, -\alpha)$-ditrail from $x\in S_1$ to $r$ with $V(P)\setminus \{r\} \subseteq S$, then $V(P)\setminus \{r\} \subseteq S_1$ holds. \item \label{item:shell2path:sr} If $P$ is an $(\alpha, -\alpha)$-ditrail from $x\in S_2$ to $r$, then there exists a vertex $y\in V(H)\setminus \{r\}$ such that $V(xPy)\setminus \{y\} \subseteq S_2$. There is no $-\alpha$-ditrail from any $x \in S_2$ to any $y\in V(H)\setminus \{r\}$ whose vertices except $y$ are contained in $S_2$. \end{rmenum} \end{lemma} \begin{proof} First, we prove \ref{item:shell2path:r}. From the definition of the shell, every vertex from $S_1$ is sublinear. Therefore, for every $y\in V(P)\setminus \{r\}$, $yPr$ is an $(\alpha, -\alpha)$-ditrail with $V(yPr)\setminus \{r\} \subseteq S$. That is, $y\in S_1$. This implies $V(P) \setminus \{r\} \subseteq S_1$. Thus, \ref{item:shell2path:r} is proved. We next prove \ref{item:shell2path:sr}. Trace $P$ from $x$, and let $y$ be the first encountered vertex that is in $S_1 \cup V(H)$.
If $y = r$, then $xPy$ is an $(\alpha, -\alpha)$-ditrail from $x$ to $r$ with $V(xPy)\setminus \{r\} \subseteq S$, which contradicts $x\in S_2$. Hence, $y \neq r$. Suppose $y\in S_1$. Because $y$ is sublinear in $G$, $xPy$ is an $(\alpha, -\alpha)$-ditrail. By \ref{item:shell2path:r}, there is an $(\alpha, -\alpha)$-ditrail $Q$ from $y$ to $r$ with $V(Q)\setminus \{r\} \subseteq S_1$. Therefore, $xPy + Q$ is an $(\alpha, -\alpha)$-ditrail from $x$ to $r$ with $V(xPy + Q)\setminus \{r\} \subseteq S$. This contradicts $x\in S_2$. Therefore, $y\in V(H)\setminus \{r\}$ follows, and $V(xPy)\setminus \{y\} \subseteq S_2$ is proved. The remaining claim of \ref{item:shell2path:sr} can easily be proved by considering the concatenation of ditrails. The statement \ref{item:shell2path:sr} is proved. This completes the proof of the lemma. \end{proof} Lemma~\ref{lem:shell} is derived from Lemma~\ref{lem:shell2path} and is used in later sections. \begin{lemma} \label{lem:shell} Let $\alpha \in \{+, -\}$. Let $G$ be an $\alpha$-radial with sublinear root $r$. Let $H$ and $I$ be the almost strong and extended grounds of $G$, respectively. Let $S_1$ and $S_2$ be the first and second shells of $I$, respectively. Then, the following properties hold: \begin{rmenum} \item \label{item:shell:r} $G[S_1 \cup \{r\} ]$ is a sublinear $\alpha$-radial with root $r$. \item \label{item:shell:sr} $G[S_2 \cup V(H) ]/V(H) - E_G[r, S_2]$ is a linear $\alpha$-semiradial with root $s$, where $s$ denotes the contracted vertex that corresponds to $V(H)$. \item \label{item:shell:lgcut} For every edge of the form $uv$ such that $u\in S_1$ and $v\in V(H)\cup S_2$, the sign of $u$ over $uv$ is $\alpha$. \item \label{item:shell:rcut} For every edge of the form $rv$ such that $v\in S_2$, the sign of $r$ over $rv$ is $\alpha$. \end{rmenum} \end{lemma} \begin{proof} We first prove \ref{item:shell:r}.
Lemma~\ref{lem:shell2path} \ref{item:shell2path:r} easily implies that $G[S_1 \cup \{r\}]$ is an $\alpha$-radial with root $r$. From the definition of the first shell, every vertex from $S_1$ is sublinear in $G$. Hence, the radial $G[S_1 \cup \{r\}]$ is sublinear. The statement \ref{item:shell:r} is proved. Lemma~\ref{lem:shell2path} \ref{item:shell2path:sr} easily implies \ref{item:shell:sr}. We next prove \ref{item:shell:lgcut}. Suppose, to the contrary, that the sign of $u$ over $uv$ is $-\alpha$. Let $\beta \in \{+, -\}$ be the sign of $v$ over $uv$. By Lemma~\ref{lem:shell2path} \ref{item:shell2path:r}, there is an $(\alpha, -\alpha)$-ditrail $Q$ from $u$ to $r$ with $V(Q)\setminus \{r\} \subseteq S_1$. Then, $(v, uv, u) + Q$ is a $(\beta, -\alpha)$-ditrail from $v$ to $r$. If $v\in S_2$ holds, this contradicts either $v\not\in S_1$ or the sublinearity of $v$ in $G$. For the case $v\in V(H)\setminus \{r\}$, let $R$ be a $(-\beta, -\alpha)$-ditrail in $H$ from $v$ to $r$. Then, $Q^{-1} + (u, uv, v) + R$ is a closed $(-\alpha, -\alpha)$-ditrail over $r$. This contradicts the assumption that $r$ is sublinear. Thus, \ref{item:shell:lgcut} follows. The statement \ref{item:shell:rcut} can be proved in a similar way by considering the concatenation of ditrails. The lemma is proved. \end{proof} \section{Round Radials} \begin{definition} Let $\alpha\in\{+, -\}$. An $\alpha$-radial $G$ with sublinear root $r$ is said to be {\em round} if every vertex from $\oNei{G}{r}{-\alpha}$ is strong. \end{definition} Lemma~\ref{lem:asg2neigh} follows easily from Lemma~\ref{lem:rrneigh2g}. \begin{lemma} \label{lem:asg2neigh} Let $\alpha\in\{+, -\}$, and let $G$ be a round $\alpha$-radial with root $r\in V(G)$. Then, every vertex from $\oNei{G}{r}{-\alpha}$ is contained in the almost strong ground. \end{lemma} By Lemma~\ref{lem:asg2neigh}, an $\alpha$-radial with sublinear root $r$ is round if and only if every vertex from $\oNei{G}{r}{-\alpha}$ is contained in the almost strong ground.
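Many of the preceding proofs reduce to tracing ditrails and checking the signs at their two ends. As a computational companion (ours, not part of the paper's formalism), the sketch below computes sign-reachability in a small bidirected graph. It assumes the conventions that consecutive edges of a diwalk carry opposite signs at their common vertex, and that an $(\alpha, \beta)$-diwalk from $x$ to $r$ leaves $x$ through an edge end of sign $\alpha$ and enters $r$ through an end of sign $\beta$; the authoritative definitions are in the preceding paper. The search is over walks, which over-approximates ditrail (trail) reachability, and the encoding and names are ours.

```python
from collections import deque

def reachable(edges, x, r, alpha, beta):
    """Is there an (alpha, beta)-diwalk from x to r?

    edges: list of (u, su, v, sv) with su, sv in {+1, -1}, the signs of the
    two ends of a bidirected edge.  Search states are pairs
    (vertex, sign of the edge end through which it was just entered).
    """
    inc = {}  # vertex -> list of (sign here, neighbour, sign there)
    for (u, su, v, sv) in edges:
        inc.setdefault(u, []).append((su, v, sv))
        inc.setdefault(v, []).append((sv, u, su))
    # start: leave x through an end of sign alpha
    queue = deque((w, sw) for (s, w, sw) in inc.get(x, []) if s == alpha)
    seen = set()
    while queue:
        w, s = queue.popleft()
        if (w, s) in seen:
            continue
        seen.add((w, s))
        if w == r and s == beta:
            return True
        # continue the walk: the next edge must have the opposite sign at w
        for (s_here, z, s_there) in inc.get(w, []):
            if s_here == -s:
                queue.append((z, s_there))
    return False

# a 3-vertex example: x --(+,+)-- m --(-,+)-- r
E = [("x", +1, "m", +1), ("m", -1, "r", +1)]
print(reachable(E, "x", "r", +1, +1))  # True: r is entered through an end of sign +
print(reachable(E, "x", "r", +1, -1))  # False: r can only be entered with sign +
```

The state doubling (vertex, entering sign) mirrors how the reachability sets used above distinguish the signs at both ends of a ditrail.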
\section{Contraction Lemmas} In this section, we provide Lemmas~\ref{lem:const4l} and \ref{lem:const4s} to be used in proving lemmas in Section~\ref{sec:decomposition}. \begin{lemma} \label{lem:const4l} Let $\alpha\in\{+, -\}$. Let $G$ be a bidirected graph, and let $H$ be an induced subgraph of $G$ with $r\in V(G)\cap V(H)$. Let $x\in V(G)\setminus V(H)$ and $\beta\in\{+, -\}$. \begin{rmenum} \item \label{item:const4l:r} Assume $\parNei{G}{V(G)\setminus V(H)} \subseteq \reachset{H}{r}{\alpha}{-\alpha}\setminus \reachset{G}{r}{-\alpha}{-\alpha}$. Then, $G$ has a $(\beta, -\alpha)$-ditrail from $x$ to $r$ if and only if $G$ has a $(\beta, -\alpha)$-ditrail from $x$ to a vertex in $V(H)$ whose edges are disjoint from $E(H)$. \item \label{item:const4l:sr} Assume $\parNei{G}{V(G)\setminus V(H)} \subseteq \reachsets{H}{r}{\alpha} \setminus \reachsets{G}{r}{-\alpha}$. Then, $G$ has a $\beta$-ditrail from $x$ to $r$ if and only if $G$ has a $(\beta, -\alpha)$-ditrail from $x$ to a vertex in $V(H)$ whose edges are disjoint from $E(H)$. \end{rmenum} \end{lemma} \begin{proof} First, we prove \ref{item:const4l:r}. For proving the sufficiency, let $P$ be a $(\beta, -\alpha)$-ditrail in $G$ from $x$ to $r$. Trace $P$ from $x$, and let $y$ be the first encountered vertex that is in $V(H)$. If $xPy$ is a $(\beta, \alpha)$-ditrail, then $yPr$ is a $(-\alpha, -\alpha)$-ditrail, which contradicts $y\not\in \reachset{G}{r}{-\alpha}{-\alpha}$. Hence, $xPy$ is a $(\beta, -\alpha)$-ditrail that satisfies the condition. The necessity can easily be proved by considering the concatenation of ditrails. This proves \ref{item:const4l:r}. The statement \ref{item:const4l:sr} can be proved by a similar discussion. \end{proof} \begin{lemma} \label{lem:const4s} Let $\alpha\in\{+, -\}$. Let $G$ be a bidirected graph, and let $H$ be an induced subgraph of $G$ with $r\in V(G)\cap V(H)$. Let $x\in V(G)\setminus V(H)$ and $\beta\in\{+, -\}$.
\begin{rmenum} \item \label{item:const4s:r} Assume $\parNei{G}{V(G)\setminus V(H)} \subseteq \reachset{H}{r}{\alpha}{-\alpha} \cap \reachset{H}{r}{-\alpha}{-\alpha}$. Then, $G$ has a $(\beta, -\alpha)$-ditrail from $x$ to $r$ if and only if $G$ has a $\beta$-ditrail from $x$ to a vertex in $V(H)$ whose edges are disjoint from $E(H)$. \item \label{item:const4s:sr} Assume $\parNei{G}{V(G)\setminus V(H)} \subseteq \reachsets{H}{r}{\alpha} \cap \reachsets{H}{r}{-\alpha}$. Then, $G$ has a $\beta$-ditrail from $x$ to $r$ if and only if $G$ has a $\beta$-ditrail from $x$ to a vertex in $V(H)$ whose edges are disjoint from $E(H)$. \end{rmenum} \end{lemma} \begin{proof} We first prove the sufficiency of \ref{item:const4s:r}. Let $P$ be a $(\beta, -\alpha)$-ditrail from $x$ to $r$. Trace $P$ from $x$, and let $y$ be the first encountered vertex that is in $V(H)$. Then, $xPy$ is the desired ditrail. The necessity can be proved by considering the concatenation of ditrails. This proves \ref{item:const4s:r}. The statement \ref{item:const4s:sr} can be proved in a similar way. \end{proof} \section{Decomposition of Radials and Semiradials} \label{sec:decomposition} \subsection{From General Semiradials to Sharp Semiradials} Lemma~\ref{lem:const4s} \ref{item:const4s:sr} easily implies Lemma~\ref{lem:srpath2abg}. \begin{lemma} \label{lem:srpath2abg} Let $\alpha\in\{+, -\}$. Let $G$ be an $\alpha$-semiradial with root $r\in V(G)$. Let $H$ be the absolute ground of $G$. Let $x\in V(G)$ and $\beta\in\{+, -\}$. Then, there is a $\beta$-ditrail from $x$ to $r$ if and only if there is a $\beta$-ditrail from $x$ to a vertex in $V(H)$ whose edges are disjoint from $E(H)$. \end{lemma} Lemmas~\ref{lem:abgnei2lin} and \ref{lem:srpath2abg} easily imply Lemma~\ref{lem:abg2lg}. \begin{lemma} \label{lem:abg2lg} Let $\alpha\in\{+, -\}$. Let $G$ be an $\alpha$-semiradial with root $r\in V(G)$. Let $H$ be the absolute ground of $G$.
Then, $G/H$ is a sharp $\alpha$-semiradial with root $h$, where $h$ is the contracted vertex that corresponds to $H$. \end{lemma} \begin{proof} Lemma~\ref{lem:srpath2abg} easily implies that $G/H$ is an $\alpha$-semiradial with root $h$. Lemma~\ref{lem:abgnei2lin} further implies that it is sharp. The lemma is proved. \end{proof} \subsection{From General Radials with Strong Root to Sharp Semiradials} Lemma~\ref{lem:const4s} \ref{item:const4s:r} easily implies Lemma~\ref{lem:rpath2stg}. \begin{lemma} \label{lem:rpath2stg} Let $\alpha \in \{+, -\}$. Let $G$ be an $\alpha$-radial with strong root $r$. Let $H$ be the strong ground of $G$. Let $x\in V(G)$ and $\beta \in \{+, -\}$. Then, $G$ has a $(\beta, -\alpha)$-ditrail from $x$ to $r$ if and only if $G$ has a $\beta$-ditrail from $x$ to a vertex in $V(H)$ whose edges are disjoint from $E(H)$. \end{lemma} Lemmas~\ref{lem:abgneigh2sublinear} and \ref{lem:rpath2stg} imply Lemma~\ref{lem:stg2lg}. \begin{lemma} \label{lem:stg2lg} Let $\alpha \in \{+, -\}$. Let $G$ be an $\alpha$-radial with strong root $r$. Let $H$ be the strong ground of $G$. Then, $G/H$ is a sharp $\alpha$-semiradial with root $h$, where $h$ is the contracted vertex that corresponds to $H$. \end{lemma} \begin{proof} Lemma~\ref{lem:rpath2stg} implies that $G/H$ is an $\alpha$-semiradial with root $h$. Lemma~\ref{lem:abgneigh2sublinear} implies that every neighbor of $h$ is linear in $G/H$. Hence, $G/H$ is sharp. \end{proof} \subsection{From General Radials with Sublinear Root to Sharp Semiradials} Lemma~\ref{lem:rrdel} follows easily from Lemma~\ref{lem:r2delete}. We use Lemma~\ref{lem:rrdel} to prove Lemmas~\ref{lem:path2shell} and \ref{lem:rpath2asg}. \begin{lemma} \label{lem:rrdel} Let $\alpha\in\{+, -\}$, and let $G$ be an $\alpha$-radial with sublinear root $r\in V(G)$. Let $F \subseteq \ocut{G}{r}{\alpha}$. Then, $\reachset{G}{r}{\beta}{-\alpha} = \reachset{G - F}{r}{\beta}{-\alpha}$ for each $\beta\in\{+, -\}$. 
\end{lemma} \begin{proof} Because $\{r\} \cap \reachset{G}{r}{-\alpha}{-\alpha} = \emptyset$ holds, Lemma~\ref{lem:r2delete} implies the claim. \end{proof} Lemmas~\ref{lem:rrneigh2g}, \ref{lem:const4s} \ref{item:const4s:r}, and \ref{lem:rrdel} imply Lemma~\ref{lem:path2shell}. \begin{lemma} \label{lem:path2shell} Let $\alpha \in \{+, -\}$. Let $G$ be an $\alpha$-radial with sublinear root $r$. Let $I$ be the extended ground of $G$. Let $x\in V(G)\setminus V(I)$, and let $\beta\in \{+, -\}$. Then, the following statements are equivalent: \begin{rmenum} \item \label{item:path2shell:r} There is a $(\beta, -\alpha)$-ditrail from $x$ to $r$ in $G$; \item \label{item:path2shell:del} there is a $(\beta, -\alpha)$-ditrail from $x$ to $r$ in $G - E_G[r, V(G)\setminus V(I)]$; and, \item \label{item:path2shell:cut} there is a $(\beta, -\alpha)$-ditrail from $x$ to a vertex in $V(I)\setminus \{r\}$ whose edges are disjoint from $E(I) \cup E_G[r, V(G)\setminus V(I)]$. \end{rmenum} \end{lemma} \begin{proof} Lemma~\ref{lem:rrneigh2g} implies $E_G[r, V(G)\setminus V(I)] \subseteq \ocut{G}{r}{\alpha}$. Hence, by Lemma~\ref{lem:rrdel}, the statements~\ref{item:path2shell:r} and \ref{item:path2shell:del} are equivalent. It is easily confirmed from Lemma~\ref{lem:const4s} \ref{item:const4s:r} that \ref{item:path2shell:del} and \ref{item:path2shell:cut} are equivalent. The lemma is proved. \end{proof} Lemmas~\ref{lem:shellneigh2strong} and \ref{lem:path2shell} imply Lemma~\ref{lem:shell2astg}. \begin{lemma} \label{lem:shell2astg} Let $\alpha \in \{+, -\}$. Let $G$ be an $\alpha$-radial with sublinear root $r$. Let $I$ be the extended ground of $G$. Then, $G/I$ is a round $\alpha$-radial with sublinear root $i$, where $i$ is the contracted vertex that corresponds to $I$. \end{lemma} \begin{proof} Lemmas~\ref{lem:shellneigh2strong} \ref{item:shellneigh2strong:r2sublin} and \ref{lem:path2shell} imply that $G/I$ is an $\alpha$-radial with sublinear root $i$. 
Lemma~\ref{lem:shellneigh2strong} \ref{item:shellneigh2strong:round} further implies that every vertex from $\oNei{G/I}{i}{-\alpha}$ is strong. Hence, $G/I$ is round. The lemma is proved. \end{proof} \subsection{From Sharp Semiradials to Round Radials} Lemma~\ref{lem:srpath2lg} can easily be confirmed from Lemma~\ref{lem:const4l} \ref{item:const4l:sr}. \begin{lemma} \label{lem:srpath2lg} Let $\alpha\in\{+, -\}$, and let $G$ be a sharp $\alpha$-semiradial with root $r\in V(G)$. Let $H$ be the linear ground of $G$. Let $x\in V(G)\setminus V(H)$ and $\beta \in \{+, -\}$. Then, $G$ has a $\beta$-ditrail from $x$ to $r$ if and only if $G$ has a $(\beta, -\alpha)$-ditrail from $x$ to a vertex in $V(H)$ whose edges are disjoint from $E(H)$. \end{lemma} Lemmas~\ref{lem:lg2neigh} and \ref{lem:srpath2lg} imply Lemma~\ref{lem:lg2asg}. \begin{lemma} \label{lem:lg2asg} Let $\alpha\in\{+, -\}$, and let $G$ be a sharp $\alpha$-semiradial with root $r\in V(G)$. Let $H$ be the linear ground of $G$. Then, $G/H$ is a round $\alpha$-radial with root $h$, where $h$ is the contracted vertex that corresponds to $H$. \end{lemma} \begin{proof} It easily follows from Lemma~\ref{lem:srpath2lg} that $G/H$ is an $\alpha$-radial with root $h$. Combined with Lemma~\ref{lem:srpath2lg}, Lemma~\ref{lem:lg2neigh} \ref{item:noear} further implies that $h$ is sublinear in $G/ H$. Additionally, Lemma~\ref{lem:lg2neigh} \ref{item:scoop} implies that every vertex from $\oNei{G/H}{h}{-\alpha}$ is strong in $G/H$. Thus, it follows that $G/H$ is round. \end{proof} \subsection{From Round Radials to Sharp Semiradials} Lemmas~\ref{lem:asg2neigh}, \ref{lem:const4s}, and \ref{lem:rrdel} imply Lemma~\ref{lem:rpath2asg}. \begin{lemma} \label{lem:rpath2asg} Let $\alpha\in\{+, -\}$, and let $G$ be a round $\alpha$-radial with root $r\in V(G)$. Let $H$ be the almost strong ground of $G$. Let $x\in V(G)\setminus V(H)$ and $\beta \in \{+, -\}$. 
Then, the following statements are equivalent: \begin{rmenum} \item \label{item:rpath2asg:original} $G$ has a $(\beta, -\alpha)$-ditrail from $x$ to $r$. \item \label{item:rpath2asg:cut} $G - E_G[r, V(G)\setminus V(H)]$ has a $(\beta, -\alpha)$-ditrail from $x$ to $r$. \item \label{item:rpath2asg:ground} $G - E_G[r, V(G)\setminus V(H)]$ has a $\beta$-ditrail from $x$ to a vertex in $V(H)\setminus \{r\}$ whose edges are disjoint from $E(H)$. \end{rmenum} \end{lemma} \begin{proof} From Lemma~\ref{lem:asg2neigh}, we have $E_G[r, V(G)\setminus V(H)] \subseteq \ocut{G}{r}{\alpha}$. Hence, Lemma~\ref{lem:rrdel} implies that \ref{item:rpath2asg:original} and \ref{item:rpath2asg:cut} are equivalent. Lemma~\ref{lem:const4s} \ref{item:const4s:r} implies that \ref{item:rpath2asg:cut} and \ref{item:rpath2asg:ground} are equivalent. Hence, the lemma is proved. \end{proof} Lemmas~\ref{lem:astgneigh2sublinear} and \ref{lem:rpath2asg} imply Lemma~\ref{lem:asg2lg}. \begin{lemma} \label{lem:asg2lg} Let $\alpha\in\{+, -\}$, and let $G$ be a round $\alpha$-radial with root $r\in V(G)$. Let $H$ be the almost strong ground of $G$. Let $G' := ( G - E_G[r, V(G)\setminus V(H)] )/H$. Then, $G'$ is a sharp $\alpha$-semiradial with root $h$, where $h$ is the contracted vertex that corresponds to $H$. \end{lemma} \begin{proof} Lemma~\ref{lem:rpath2asg} easily implies that $G'$ is an $\alpha$-semiradial with root $h$. Combined with Lemma~\ref{lem:rpath2asg}, Lemma~\ref{lem:astgneigh2sublinear} further implies that every neighbor of $h$ is linear in $G'$. Thus, $G'$ is sharp. \end{proof} \section{Gluing Lemmas} \label{sec:constlem} \subsection{Gluing Sharp Semiradials onto Sets of Absolute and Strong Vertices} \label{sec:constlem:abst} In Section~\ref{sec:constlem:abst}, we provide Lemmas~\ref{lem:construct2r} and \ref{lem:construct2sr} to be used in Section~\ref{sec:construct}. \begin{lemma} \label{lem:construct2r} Let $\alpha \in \{+, -\}$. 
Let $G$ and $H$ be disjoint bidirected graphs such that $G$ is a sharp $\alpha$-semiradial with root $s$, and $H$ is an $\alpha$-radial with root $r$. Let $S\subseteq V(H)$ be a set of strong vertices in $H$. Let $\hat{G} := (G; s) \oplus (H; S)$. Then, for each $\beta\in \{+, -\}$, \begin{rmenum} \item $\reachsets{G}{s}{\beta} \setminus \{s\}$ is equal to $\reachset{\hat{G}}{r}{\beta}{-\alpha} \cap ( V(G)\setminus \{s\} )$; and, \item $\reachset{H}{r}{\beta}{-\alpha}$ is equal to $\reachset{\hat{G}}{r}{\beta}{-\alpha} \cap V(H)$. \end{rmenum} \end{lemma} \begin{proof} It can easily be confirmed that $\reachsets{G}{s}{\beta} \setminus \{s\}$ is contained in $\reachset{\hat{G}}{r}{\beta}{-\alpha} \cap ( V(G)\setminus \{s\} )$ by considering the concatenation of ditrails. It is also clear that $\reachset{H}{r}{\beta}{-\alpha}$ is a subset of $\reachset{\hat{G}}{r}{\beta}{-\alpha} \cap V(H)$. Hence, in the following, we prove $\reachsets{G}{s}{\beta}\setminus \{s\} \supseteq \reachset{\hat{G}}{r}{\beta}{-\alpha} \cap (V(G)\setminus \{s\})$ and $\reachset{H}{r}{\beta}{-\alpha} \supseteq \reachset{\hat{G}}{r}{\beta}{-\alpha} \cap V(H)$. Let $x \in \reachset{\hat{G}}{r}{\beta}{-\alpha}$, and let $P$ be a $(\beta, -\alpha)$-ditrail in $\hat{G}$ from $x$ to $r$. First, consider the case $x\in V(G)\setminus \{s\}$. Trace $P$ from $x$, and let $y$ be the first encountered vertex that is in $V(H)$. Then, $xPy$ corresponds to a $\beta$-ditrail from $x$ to $s$ in $G$. This proves $x\in \reachsets{G}{s}{\beta}$. Next, consider the case $x\in V(H)$. Because $G$ is sharp, Theorem~\ref{thm:asr} implies that $P$ cannot contain a simple diear relative to $H$ as its subditrail. This implies that $P$ is a ditrail of $H$ in this case. Therefore, $x\in \reachset{H}{r}{\beta}{-\alpha}$ holds. This completes the proof. \end{proof} Lemma~\ref{lem:construct2sr} is an analogue of Lemma~\ref{lem:construct2r} and can easily be confirmed by a similar argument. 
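The reachability sets $\reachset{G}{r}{\beta}{\gamma}$ and $\reachsets{G}{r}{\beta}$ that these lemmas manipulate can be experimented with on small instances. The following Python sketch is an illustration only: it assumes one plausible sign convention for ditrails (no repeated edges, opposite signs of consecutive edge-ends at internal vertices, end signs $\beta$ at $x$ and $\gamma$ at $r$), which may differ in detail from the definitions of the source paper, and all function names and the edge encoding are ours.

```python
# Toy model of bidirected graphs and (beta, gamma)-ditrails.  An edge is a
# tuple (u, v, sign_at_u, sign_at_v) with signs in {+1, -1}.  The trivial
# (edgeless) ditrail from r to r is deliberately not modeled here.

def has_ditrail(edges, x, r, beta, gamma):
    """Brute-force test for a (beta, gamma)-ditrail from x to r."""
    inc = {}
    for i, (u, v, su, sv) in enumerate(edges):
        inc.setdefault(u, []).append((i, v, su, sv))
        inc.setdefault(v, []).append((i, u, sv, su))

    def search(v, last_sign, used):
        # last_sign is the sign at v of the edge just traversed.
        if v == r and last_sign == gamma:
            return True
        # Continue with an unused edge whose end at v has the opposite sign.
        return any(search(w, s_far, used | {i})
                   for i, w, s_near, s_far in inc.get(v, [])
                   if i not in used and s_near == -last_sign)

    # The first edge must have sign beta at the start vertex x.
    return any(search(w, s_far, {i})
               for i, w, s_near, s_far in inc.get(x, []) if s_near == beta)

def reach_set(edges, r, beta, gamma):
    """All vertices x != r admitting a (beta, gamma)-ditrail to r."""
    verts = {w for u, v, _, _ in edges for w in (u, v)}
    return {x for x in verts - {r} if has_ditrail(edges, x, r, beta, gamma)}
```

The enumeration is exponential in the number of edges, so it is only suitable for checking the lemmas above on toy examples, not as an efficient algorithm.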
\begin{lemma} \label{lem:construct2sr} Let $\alpha \in \{+, -\}$. Let $G$ and $H$ be disjoint bidirected graphs such that $G$ is a sharp $\alpha$-semiradial with root $s$, and $H$ is an $\alpha$-semiradial with root $r$. Let $S\subseteq V(H)$ be a set of absolute vertices in $H$. Let $\hat{G} := (G; s) \oplus (H; S)$. Then, for each $\beta\in \{+, -\}$, \begin{rmenum} \item $\reachsets{G}{s}{\beta} \setminus \{s\}$ is equal to $\reachsets{\hat{G}}{r}{\beta} \cap ( V(G)\setminus \{s\} )$; and, \item $\reachsets{H}{r}{\beta}$ is equal to $\reachsets{\hat{G}}{r}{\beta}\cap V(H)$. \end{rmenum} \end{lemma} \subsection{Gluing Round Radials onto Sets of Linear and Sublinear Vertices} \label{sec:constlem:linsublin} In Section~\ref{sec:constlem:linsublin}, we introduce more lemmas to be used in Section~\ref{sec:construct}. First, we provide Lemma~\ref{lem:trimmedr2const} to be used for proving Lemma~\ref{lem:construct2lin}. \begin{lemma} \label{lem:trimmedr2const} Let $\alpha \in \{+, -\}$. Let $G$ and $H$ be disjoint bidirected graphs such that $G$ is an $\alpha$-radial with sublinear root $s$ for which $\ocut{G}{s}{\alpha} = \emptyset$, and $H$ is an $\alpha$-semiradial with root $r$. Let $S \subseteq V(H)\setminus \reachsets{H}{r}{-\alpha}$. Let $\hat{G} := (G; s) \oplus (H; S)$. Then, for each $\beta\in \{+, -\}$, \begin{rmenum} \item $\reachset{G}{s}{\beta}{-\alpha} \setminus \{s\}$ is equal to $\reachsets{\hat{G}}{r}{\beta} \cap ( V(G)\setminus \{s\} )$; and, \item $\reachsets{H}{r}{\beta}$ is equal to $\reachsets{\hat{G}}{r}{\beta} \cap V(H)$. \end{rmenum} \end{lemma} \begin{proof} By considering the concatenation of ditrails, it can easily be confirmed that $\reachset{G}{s}{\beta}{-\alpha} \setminus \{s\}$ and $\reachsets{H}{r}{\beta}$ are subsets of $\reachsets{\hat{G}}{r}{\beta} \cap ( V(G)\setminus \{s\} )$ and $\reachsets{\hat{G}}{r}{\beta} \cap V(H)$, respectively. 
In the following, we prove that $\reachset{G}{s}{\beta}{-\alpha} \setminus \{s\}$ and $\reachsets{H}{r}{\beta}$ contain $\reachsets{\hat{G}}{r}{\beta} \cap ( V(G)\setminus \{s\} )$ and $\reachsets{\hat{G}}{r}{\beta} \cap V(H)$, respectively. Let $x\in V(\hat{G})$, and let $P$ be a $\beta$-ditrail in $\hat{G}$ from $x$ to $r$. The assumption on $G$ implies that $P$ cannot contain a simple diear relative to $H$ as a subditrail. This further implies that $P$ contains at most one edge of $\parcut{\hat{G}}{H}$. Therefore, if $x$ is a vertex from $V(G) \setminus \{s\}$, then there is a vertex $y\in V(P)$ such that $E(xPy) \subseteq E(G)$ and $E(yPr) \subseteq E(H)$ hold. Because $y \in S$ holds, $yPr$ is an $\alpha$-ditrail. Hence, $xPy$ is a $(\beta, -\alpha)$-ditrail. Thus, we obtain $x \in \reachset{G}{s}{\beta}{-\alpha} \setminus \{s\}$. By contrast, if $x$ is a vertex from $V(H)$, then $P$ is a ditrail of $H$, and $x$ is accordingly a vertex from $\reachsets{H}{r}{\beta}$. Thus, the lemma is proved. \end{proof} Lemma~\ref{lem:construct2lin} is derived from Lemmas~\ref{lem:edgeaddsr}, \ref{lem:sr2delete}, and \ref{lem:trimmedr2const}. \begin{lemma} \label{lem:construct2lin} Let $\alpha \in \{+, -\}$. Let $G$ and $H$ be disjoint bidirected graphs such that $G$ is an $\alpha$-radial with sublinear root $s$, and $H$ is an $\alpha$-semiradial with root $r$. Let $S \subseteq V(H)\setminus \reachsets{H}{r}{-\alpha}$. Let $\hat{G} := (G; s) \oplus (H; S)$. Then, for each $\beta\in \{+, -\}$, \begin{rmenum} \item $\reachset{G}{s}{\beta}{-\alpha} \setminus \{s\}$ is equal to $\reachsets{\hat{G}}{r}{\beta} \cap ( V(G)\setminus \{s\} )$; and, \item $\reachsets{H}{r}{\beta}$ is equal to $\reachsets{\hat{G}}{r}{\beta} \cap V(H)$. \end{rmenum} \end{lemma} \begin{proof} Let $G' := G - \ocut{G}{s}{\alpha}$. Let $\hat{G}' := (G'; s) \oplus (H; S)$. Note that $\hat{G}' = \hat{G} - F$, where $F \subseteq E(\hat{G})$ is the set of edges that corresponds to $\ocut{G}{s}{\alpha}$. 
Lemma~\ref{lem:sr2delete} implies $\reachset{G}{s}{\beta}{-\alpha} = \reachset{G'}{s}{\beta}{-\alpha}$ for each $\beta\in\{+, -\}$. Therefore, Lemma~\ref{lem:trimmedr2const} can be applied to $G'$, by which we obtain that $\reachset{G'}{s}{\beta}{-\alpha} \setminus \{s\}$ is equal to $\reachsets{\hat{G}'}{r}{\beta} \cap ( V(G)\setminus \{s\} )$, and $\reachsets{H}{r}{\beta}$ is equal to $\reachsets{\hat{G}'}{r}{\beta} \cap V(H)$. This also implies that Lemma~\ref{lem:edgeaddsr} can be applied to $\hat{G}'$ for adding $F$ to $\hat{G}'$, by which we obtain $\reachsets{\hat{G}}{r}{\beta} = \reachsets{\hat{G}'}{r}{\beta}$ for each $\beta\in\{+, -\}$. Thus, the lemma is proved. \end{proof} Lemma~\ref{lem:construct2sublin} is an analogue of Lemma~\ref{lem:construct2lin} and can easily be confirmed by a similar argument. \begin{lemma} \label{lem:construct2sublin} Let $\alpha \in \{+, -\}$. Let $G$ and $H$ be disjoint bidirected graphs such that $G$ is an $\alpha$-radial with sublinear root $s$, and $H$ is an $\alpha$-radial with root $r$. Let $S \subseteq V(H)\setminus \reachset{H}{r}{-\alpha}{-\alpha}$. Let $\hat{G} := (G; s) \oplus (H; S)$. Then, for each $\beta\in \{+, -\}$, \begin{rmenum} \item $\reachset{G}{s}{\beta}{-\alpha} \setminus \{s\}$ is equal to $\reachset{\hat{G}}{r}{\beta}{-\alpha} \cap ( V(G)\setminus \{s\} )$; and, \item $\reachset{H}{r}{\beta}{-\alpha}$ is equal to $\reachset{\hat{G}}{r}{\beta}{-\alpha} \cap V(H)$. \end{rmenum} \end{lemma} \section{Construction of Radials and Semiradials} \label{sec:construct} \subsection{Construction of Round Radials and Sharp Semiradials} \label{sec:construct:rrlsr} In Section~\ref{sec:construct:rrlsr}, we provide Lemmas~\ref{lem:const2srlg} and \ref{lem:const2rasg} to be used in Section~\ref{sec:characterization}. These lemmas are considered the converses of Lemmas~\ref{lem:lg2asg} and \ref{lem:asg2lg}, respectively. Lemma~\ref{lem:const2srlg} follows easily from Lemma~\ref{lem:construct2lin}. 
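The gluing sum $(G; s) \oplus (H; S)$ is used repeatedly in these constructions. The sketch below encodes one plausible reading, which is an assumption on our part since the operation is defined in the preceding paper: the root $s$ of $G$ is removed, and each edge of $G$ formerly incident to $s$ is reattached, with its signs unchanged, to a vertex of $S$ selected by a caller-supplied `choice` map; the phrase ``any gluing sum'' in the text suggests that this choice is arbitrary. The function name and the edge encoding are illustrative.

```python
# A sketch of one plausible reading of the gluing sum (G; s) + (H; S).
# Edges are tuples (u, v, sign_at_u, sign_at_v); edges_g and edges_h are
# the edge lists of the disjoint graphs G and H, and `choice` maps an
# edge index of G incident to s to the vertex of S it is reattached to.

def gluing_sum(edges_g, s, edges_h, choice):
    glued = []
    for i, (u, v, su, sv) in enumerate(edges_g):
        if u == s and v == s:
            raise ValueError("loop at the root is not handled in this sketch")
        if u == s:                      # reattach the end at s, keeping signs
            glued.append((choice(i), v, su, sv))
        elif v == s:
            glued.append((u, choice(i), su, sv))
        else:                           # edge untouched by the gluing
            glued.append((u, v, su, sv))
    return glued + list(edges_h)
```

By construction, the vertex set of the result is $(V(G)\setminus \{s\}) \cup V(H)$, matching the vertex sets that appear in the statements of the gluing lemmas above.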
\begin{lemma} \label{lem:const2srlg} Let $\alpha \in \{+, -\}$. Let $G$ be a round $\alpha$-radial with root $s$. Let $H$ be a linear $\alpha$-semiradial with root $r$. Assume that $G$ and $H$ are disjoint. Let $\hat{G} := (G; s) \oplus (H; V(H)\setminus \{r\})$. Then, $\hat{G}$ is a sharp $\alpha$-semiradial with root $r$ whose linear ground is $H$. \end{lemma} For proving Lemma~\ref{lem:const2rasg}, we first provide Lemma~\ref{lem:rradd}. Lemma~\ref{lem:rradd} follows easily from Lemma~\ref{lem:edgeaddr} and is also used for proving Lemma~\ref{lem:const2rrtriplex}. \begin{lemma} \label{lem:rradd} Let $\alpha \in \{+, -\}$. Let $G$ be an $\alpha$-radial with sublinear root $r$. Let $G'$ be a bidirected graph obtained from $G$ by adding some edges between $r$ and other vertices in which the sign of $r$ is $\alpha$. Then, $\reachset{G}{r}{\beta}{-\alpha} = \reachset{G'}{r}{\beta}{-\alpha}$ holds for each $\beta\in \{+, -\}$. \end{lemma} \begin{proof} Because $G.\{r\}$ is a trivial $\alpha$-radial, and $\{r\}\cap \reachset{G}{r}{-\alpha}{-\alpha} = \emptyset$, Lemma~\ref{lem:edgeaddr} implies the claim. \end{proof} Lemmas~\ref{lem:construct2r} and \ref{lem:rradd} imply Lemma~\ref{lem:const2rasg}. \begin{lemma} \label{lem:const2rasg} Let $\alpha \in \{+, -\}$. Let $G$ be a sharp $\alpha$-semiradial with root $s$. Let $H$ be an almost strong $\alpha$-radial with root $r$. Assume that $G$ and $H$ are disjoint. Let $\hat{G}$ be a gluing sum $(G; s) \oplus (H; V(H)\setminus \{r\})$ or a bidirected graph obtained from this graph by adding some edges joining $r$ and vertices in $V(G)\setminus \{s\}$ in which $r$ has the sign $\alpha$. Then, $\hat{G}$ is a round $\alpha$-radial with root $r$ whose almost strong ground is $H$. \end{lemma} \begin{proof} Let $\hat{G}'$ be the gluing sum $(G; s) \oplus (H; V(H)\setminus \{r\})$. Then, Lemma~\ref{lem:construct2r} implies that $\hat{G}'$ is a round $\alpha$-radial with root $r$ in which $H$ is the almost strong ground. 
Lemma~\ref{lem:rradd} further implies the claim for $\hat{G}$. \end{proof} \subsection{Construction of General Radials and Semiradials} \label{sec:construct:rsr} In Section~\ref{sec:construct:rsr}, we provide Lemmas~\ref{lem:const2srag}, \ref{lem:const2rstg}, \ref{lem:triplex2const}, and \ref{lem:const2rrtriplex} to be used in Section~\ref{sec:characterization}. These lemmas are the converses of Lemmas~\ref{lem:abg2lg}, \ref{lem:stg2lg}, \ref{lem:shell}, and \ref{lem:shell2astg}, respectively. Lemma~\ref{lem:const2srag} follows easily from Lemma~\ref{lem:construct2sr}. \begin{lemma} \label{lem:const2srag} Let $\alpha \in \{+, -\}$. Let $G$ be a sharp $\alpha$-semiradial with root $s$. Let $H$ be an absolute $\alpha$-semiradial with root $r$. Assume that $G$ and $H$ are disjoint. Let $\hat{G} := (G; s) \oplus (H; V(H))$. Then, $\hat{G}$ is an $\alpha$-semiradial with root $r$ whose absolute ground is $H$. \end{lemma} Lemma~\ref{lem:const2rstg} follows easily from Lemma~\ref{lem:construct2r}. \begin{lemma} \label{lem:const2rstg} Let $\alpha \in \{+, -\}$. Let $G$ and $H$ be disjoint bidirected graphs such that $G$ is a sharp $\alpha$-semiradial with root $s$, and $H$ is a strong $\alpha$-radial with root $r$. Then, $(G; s)\oplus (H; V(H))$ is an $\alpha$-radial with strong root $r$ whose strong ground is $H$. \end{lemma} Finally, we prove two lemmas for radials with sublinear root $r$. Lemma~\ref{lem:triplex2const} follows from Lemmas~\ref{lem:edgeaddr} and \ref{lem:construct2r}. \begin{lemma} \label{lem:triplex2const} Let $\alpha \in \{+, -\}$. Let $H_1$ be an almost strong $\alpha$-radial with root $r$, let $H_2$ be a sublinear $\alpha$-radial with root $r$ such that $V(H_2)\cap V(H_1) = \{r\}$, and let $H_3$ be a linear $\alpha$-semiradial with root $s$ such that $V(H_3)\cap V(H_i) = \emptyset$ for every $i \in \{1, 2\}$. Assume that if $V(H_1) = \{r\}$ then $V(H_3) = \{s\}$. Let $\hat{H}$ be a gluing sum $(H_3; s) \oplus (H_1 + H_2; V(H_1)\setminus \{r\})$. 
Let $I$ be $\hat{H}$ or a bidirected graph obtained from $\hat{H}$ by adding some edges between \begin{rmenum} \item $V(H_2)\setminus \{r\}$ and $( V(H_1)\setminus \{r\}) \cup ( V(H_3)\setminus \{s\})$ in which the end in $V(H_2)$ has sign $\alpha$, or \item $r$ and $V(H_3)\setminus \{s\}$ in which $r$ has sign $\alpha$. \end{rmenum} Then, $I$ is a triplex $\alpha$-radial with root $r$ in which $H_1$ is the almost strong ground, and $V(H_2) \setminus \{r\}$ and $ V(H_3)\setminus \{s\}$ are the first and second shells, respectively. \end{lemma} \begin{proof} It can easily be confirmed that $H_1 + H_2$ is an $\alpha$-radial with root $r$ for which $V(H_1)\setminus \{r\} = \reachset{H_1 + H_2}{r}{-\alpha}{-\alpha}$. Hence, Lemma~\ref{lem:construct2r} implies that $V(H_1)\setminus \{r\} = \reachset{\hat{H}}{r}{-\alpha}{-\alpha}$, and $( V(H_2)\setminus \{r\} ) \cup ( V(H_3)\setminus \{s\} ) = \reachset{\hat{H}}{r}{\alpha}{-\alpha}\setminus \reachset{\hat{H}}{r}{-\alpha}{-\alpha}$. Therefore, Lemma~\ref{lem:edgeaddr} implies $\reachset{I}{r}{\beta}{-\alpha} = \reachset{\hat{H}}{r}{\beta}{-\alpha}$ for every $\beta \in \{+, -\}$. Thus, the claim follows. \end{proof} Lemma~\ref{lem:const2rrtriplex} follows from Lemmas~\ref{lem:construct2sublin} and \ref{lem:rradd}. \begin{lemma} \label{lem:const2rrtriplex} Let $\alpha \in \{+, -\}$. Let $G$ and $H$ be disjoint bidirected graphs such that $G$ is a round $\alpha$-radial with sublinear root $s$, and $H$ is a triplex $\alpha$-radial with root $r$ whose shell $S$ is nonempty. Let $\hat{G}$ be a gluing sum $(G; s) \oplus (H; S)$ or a bidirected graph obtained from the gluing sum by adding some edges joining $r$ and $V(G)\setminus \{s\}$ in which $r$ has the sign $\alpha$. Then, $\hat{G}$ is an $\alpha$-radial with sublinear root $r$ whose extended ground is $H$. \end{lemma} \begin{proof} Let $\hat{G}'$ be $(G; s) \oplus (H; S)$. 
Then, Lemma~\ref{lem:construct2sublin} implies that $\hat{G}'$ is an $\alpha$-radial with sublinear root $r$ in which $H$ is the extended ground and $S$ is its shell. Lemma~\ref{lem:rradd} further implies the claim for $\hat{G}$. \end{proof} \section{Characterization} \label{sec:characterization} \subsection{Characterizations of Round Radials and Sharp Semiradials} \label{sec:characterization:rrssr} In Section~\ref{sec:characterization:rrssr}, we provide constructive characterizations of round radials and sharp semiradials. \begin{definition} Let $\alpha\in\{+, -\}$. Two sets $\rrset{r}{\alpha}$ and $\lsrset{r}{\alpha}$ of bidirected graphs with vertex $r$ are defined as follows: \begin{rmenum} \item Every almost strong $\alpha$-radial with root $r$ is a member of $\rrset{r}{\alpha}$. \item Every linear $\alpha$-semiradial with root $r$ is a member of $\lsrset{r}{\alpha}$. \item Let $G\in \rrset{r}{\alpha}$. Then, a bidirected graph obtained from $G$ by adding some edges of the form $rv$, where $v\in V(G)$, in which $r$ has sign $\alpha$, is a member of $\rrset{r}{\alpha}$. \item Let $G \in \lsrset{s}{\alpha}$, and let $H$ be an almost strong $\alpha$-radial with root $r$ such that $V(G)\cap V(H) = \emptyset$. Then, any gluing sum $(G; s) \oplus (H; V(H)\setminus \{r\})$ is a member of $\rrset{r}{\alpha}$. \item Let $G \in \rrset{s}{\alpha}$, and let $H$ be a linear $\alpha$-semiradial with root $r$ such that $V(G)\cap V(H) = \emptyset$. Then, any gluing sum $(G; s) \oplus (H; V(H)\setminus \{r\})$ is a member of $\lsrset{r}{\alpha}$. \end{rmenum} \end{definition} From Lemmas~\ref{lem:lg2asg}, \ref{lem:asg2lg}, \ref{lem:const2srlg}, and \ref{lem:const2rasg}, Theorem~\ref{thm:rrssr} is easily obtained. \begin{theorem} \label{thm:rrssr} Let $\alpha \in \{+, -\}$. A bidirected graph with vertex $r$ is a member of $\rrset{r}{\alpha}$ if and only if it is a round $\alpha$-radial with root $r$. 
A bidirected graph with vertex $r$ is a member of $\lsrset{r}{\alpha}$ if and only if it is a sharp $\alpha$-semiradial with root $r$. \end{theorem} \subsection{Characterizations of General Radials and Semiradials} \label{sec:characterization:rsr} In Section~\ref{sec:characterization:rsr}, we finally present constructive characterizations for general radials and semiradials using results from Sections~\ref{sec:decomposition}, \ref{sec:construct}, and \ref{sec:characterization:rrssr}. Lemmas~\ref{lem:abg2lg} and \ref{lem:const2srag} imply Theorem~\ref{thm:sr}. \begin{theorem} \label{thm:sr} Let $\alpha \in \{+, -\}$. A bidirected graph with vertex $r$ is an $\alpha$-semiradial with root $r$ if and only if it is a gluing sum $(G; s) \oplus (H; V(H))$, where $G$ is a member of $\lsrset{s}{\alpha}$ and $H$ is an absolute semiradial with root $r$. \end{theorem} Lemmas~\ref{lem:stg2lg} and \ref{lem:const2rstg} imply Theorem~\ref{thm:strr}. \begin{theorem} \label{thm:strr} Let $\alpha \in \{+, -\}$. A bidirected graph with vertex $r$ is an $\alpha$-radial with strong root $r$ if and only if it is a gluing sum $(G; s) \oplus (H; V(H))$, where $G$ is a member of $\lsrset{s}{\alpha}$ and $H$ is a strong $\alpha$-radial with root $r$. \end{theorem} Lemmas~\ref{lem:shell} and \ref{lem:triplex2const} imply Theorem~\ref{thm:triplex}. \begin{theorem} \label{thm:triplex} Let $\alpha \in \{+, -\}$. The following two statements are equivalent: \begin{rmenum} \item A bidirected graph $G$ is a triplex $\alpha$-radial with root $r$. \item There exist an almost strong $\alpha$-radial $H_1$ with root $r$, a sublinear $\alpha$-radial $H_2$ with root $r$, and a linear $\alpha$-semiradial $H_3$ with root $s$ such that $V(H_2)\cap V(H_1) = \{r\}$, $V(H_3)\cap V(H_i) = \emptyset$ for every $i \in \{1, 2\}$, and if $V(H_1) = \{r\}$ then $V(H_3) = \{s\}$. 
Moreover, $G$ is a gluing sum $(H_3; s) \oplus (H_1 + H_2; V(H_1)\setminus \{r\})$ or a bidirected graph obtained from this sum by adding some edges between \begin{rmenum} \item $V(H_2)\setminus \{r\}$ and $( V(H_1)\setminus \{r\}) \cup ( V(H_3)\setminus \{s\})$ in which the end in $V(H_2)$ has the sign $\alpha$, or \item $r$ and $V(H_3)\setminus \{s\}$ in which $r$ has the sign $\alpha$. \end{rmenum} \end{rmenum} \end{theorem} Lemmas~\ref{lem:shell2astg} and \ref{lem:const2rrtriplex} imply Theorem~\ref{thm:sublinrr}. \begin{theorem} \label{thm:sublinrr} Let $\alpha \in \{+, -\}$. A bidirected graph with vertex $r$ is an $\alpha$-radial with sublinear root $r$ if and only if it is a triplex $\alpha$-radial with root $r$ or a gluing sum $(G; s) \oplus (H; S)$, where $G$ is a member of $\rrset{s}{\alpha}$ and $H$ is a triplex $\alpha$-radial with root $r$ whose shell is $S$. \end{theorem} Combined with Theorem~\ref{thm:rrssr}, Theorems~\ref{thm:sr} and \ref{thm:strr} provide constructive characterizations of semiradials and radials with strong root, respectively. Also, Theorems~\ref{thm:triplex} and \ref{thm:sublinrr} provide a constructive characterization of radials with sublinear root. Here, absolute semiradials, strong and almost strong radials, linear semiradials, and sublinear radials serve as fundamental building blocks. \setcounter{theorem}{0} \renewcommand{\thetheorem}{A.\arabic{theorem}} \section*{Appendix} In the Appendix, we present some preliminary definitions and results from our preceding paper~\cite{kita2019constructive}. \begin{definition} Let $G$ be a bidirected graph, let $r\in V(G)$, and let $\alpha\in\{+, -\}$. We call $G$ an {\em $\alpha$-radial} with {\em root} $r$ if, for every $v\in V(G)$, there is an $(\alpha, -\alpha)$-ditrail from $v$ to $r$. We call $G$ an {\em $\alpha$-semiradial} with {\em root} $r$ if, for every $v\in V(G)$, there is an $\alpha$-ditrail from $v$ to $r$. 
\end{definition} \begin{definition} Let $G$ be a bidirected graph, and let $r\in V(G)$. We call $G$ an {\em absolute semiradial} with root $r$ if $G$ is an $\alpha$-semiradial with root $r$ for each $\alpha\in\{+, -\}$. \end{definition} \begin{definition} Let $\alpha\in\{+, -\}$. An $\alpha$-radial $G$ with root $r\in V(G)$ is said to be {\em strong} if, for every $v\in V(G)$, there is a $(-\alpha, -\alpha)$-ditrail from $v$ to $r$. An $\alpha$-radial $G$ with root $r$ is said to be {\em almost strong} if, for every $v\in V(G)\setminus \{r\}$, there is a $(-\alpha, -\alpha)$-ditrail from $v$ to $r$, but there is no closed $(-\alpha, -\alpha)$-ditrail over $r$. \end{definition} \begin{definition} Let $\alpha\in\{+, -\}$. Let $G$ be an $\alpha$-semiradial with root $r$. We say that $G$ is {\em linear} if $G$ has no loop edge over $r$ and no $-\alpha$-ditrail from any $x\in V(G)$ to $r$ except for the trivial $(-\alpha, \alpha)$-ditrail from $r$ to $r$ with no edge. We say that $G$ is {\em sublinear} if $G$ has no $(-\alpha, -\alpha)$-ditrail from any $x\in V(G)$ to $r$. \end{definition} \begin{definition} We define a set $\sqrset{r}$ of bidirected graphs with vertex $r$ as follows: \begin{rmenum} \item The graph that consists of a single vertex $r$ and no edge is a member of $\sqrset{r}$. \item Let $H\in\sqrset{r}$, and let $P$ be a diear relative to $H$. Then, $H + P$ is a member of $\sqrset{r}$. \end{rmenum} \end{definition} \begin{theorem} \label{thm:asr} Let $r$ be a vertex symbol. Then, $\sqrset{r}$ is the set of absolute semiradials with root $r$. \end{theorem} \begin{definition} Let $r$ be a vertex symbol, and let $\alpha \in \{+, -\}$. We define a set $\acset{r}{\alpha}$ of bidirected graphs that have vertex $r$ as follows: \begin{rmenum} \item \label{item:base} A simple $(-\alpha, -\alpha)$-diear relative to $r$ is an element of $\acset{r}{\alpha}$. 
\item \label{item:inductive} If $G\in \acset{r}{\alpha}$ holds and $P$ is a diear relative to $G$, then $G + P \in \acset{r}{\alpha}$ holds. \end{rmenum} \end{definition} \begin{theorem} \label{thm:str} Let $r$ be a vertex symbol, and let $\alpha \in \{+, -\}$. Then, $\acset{r}{\alpha}$ is the set of strong $\alpha$-radials with root $r$. \end{theorem} \begin{definition} Let $r$ be a vertex symbol, and let $\alpha\in\{+, -\}$. We define a set $\nacset{r}{\alpha}$ of bidirected graphs with vertex $r$ as follows: \begin{rmenum} \item \label{item:const:base} Let $\beta \in \{+, -\}$, let $r'$ be a vertex symbol distinct from $r$, let $G\in \acset{r'}{\beta}$ be a bidirected graph with $r\not\in V(G)$, and let $rr'$ be an edge for which the signs of $r$ and $r'$ are $-\alpha$ and $\beta$, respectively. Then, $G + rr'$ is a member of $\nacset{r}{\alpha}$. \item \label{item:const:edge} Let $G\in \nacset{r}{\alpha}$, let $v\in V(G)$, and let $rv$ be an edge in which the sign of $r$ is $\alpha$. Then, $G + rv$ is a member of $\nacset{r}{\alpha}$. \item \label{item:const:vertex} Let $G_1, G_2\in \nacset{r}{\alpha}$ be bidirected graphs with $V(G_1)\cap V(G_2) = \{r\}$. Then, $G_1 + G_2$ is a member of $\nacset{r}{\alpha}$. \end{rmenum} \end{definition} The following theorem is our constructive characterization of almost strong radials. \begin{theorem} \label{thm:asr2char} Let $r$ be a vertex symbol, and let $\alpha \in \{+, -\}$. Then, $\nacset{r}{\alpha}$ is the set of almost strong $\alpha$-radials with root $r$. \end{theorem} \end{document}
\begin{document} \date{} \newtheorem{thm}{Theorem} \newtheorem{cor}{Corollary} \newtheorem{lem}{Lemma} \newtheorem{claim}{Claim} \newtheorem{dfn}{Definition} \newtheorem{prop}{Proposition} \def\vf{\varphi} \def\lra{\longrightarrow} \def\ra{\rightarrow} \def\cn{\mathbb C} \def\eps{\epsilon} \maketitle \parskip=2mm \begin{abstract} {We consider a parallelizable 
$2n$-manifold $F$ that has the homotopy type of the wedge product of $n$-spheres and show that the group of pseudo-isotopy classes of orientation preserving diffeomorphisms that keep the boundary $\partial F$ pointwise fixed and induce the trivial variation operator is a central extension of the group of all homotopy $(2n+1)$-spheres by $H_n\bigl(F; S\pi_n(SO(n))\bigr)$. Then we apply this result to study the periodicity properties of branched cyclic covers of manifolds with simple open book decompositions and extend the previous results of Durfee, Kauffman and Stevens to dimensions 7 and 15.} \end{abstract} \noindent {\bf Keywords}: Isotopy classes of diffeomorphisms; Cyclic branched covers \\ {\bf 2000 Mathematics Subject Classification}: 57N15; 57N37; 14J17 \section{Introduction and the Results} An open book decomposition of a manifold $M^{m+1}$ is a presentation of this manifold as the union of the mapping torus $F_{\vf}$ and the product $\partial F\times D^2$ glued along the boundary $\partial F\times S^1$, where $\vf: F^m\lra F^m$ is an orientation preserving diffeomorphism which fixes the boundary $\partial F$ pointwise. Open book structures have been used in the study of various topological problems (for short historical overviews see \S2 of \cite{Quinn} or the Appendix by E. Winkelnkemper in \cite{Ranicki}) and, in particular, in the study of isolated complex hypersurface singularities. Let $f:~(\cn^{n+1},0)\lra (\cn,0)$ be a polynomial mapping whose only singular point is the origin, with zero locus $V=\{z\in\cn^{n+1}|f(z)=0\}$. Consider $K:=V\cap S_{\eps}^{2n+1}$, the intersection of $V$ with a small sphere centered at the origin. J.
Milnor has shown in \cite{Mil2} that the mapping $$ \Phi:~ S_{\eps}^{2n+1}\setminus K\lra S^1,~~~~\Phi(z):=f(z)/|f(z)|, $$ is the projection map of a smooth fibration such that the fiber $F:=\Phi^{-1}(1)$ is a smooth $(n-1)$-connected parallelizable $2n$-manifold homotopy equivalent to the wedge product of $n$-spheres and $\partial F=K$ is $(n-2)$-connected. This gives an open book structure to the sphere $$ S^{2n+1}=F_{\vf}\cup (K \times D^2). $$ Such an open book decomposition of $S^{2n+1}$ is called a simple fibered knot, and the periodicity in $k$ of the $k$-fold cyclic covers of $S^{2n+1}$ branched along $K$ has been studied by A. Durfee and L. Kauffman in \cite{DK}. Later, J. Stevens (see \cite{Stev}, Theorem 7 and Proposition 8) generalized Theorems 4.5 and 5.3 of \cite{DK} to a wider class of manifolds with simple open book decompositions $M^{2n+1}=F_{\vf}\cup (\partial F \times D^2)$ (an open book $M^{2n+1}$ is called {\it simple} if both $M$ and $F$ are $(n-1)$-connected and $M$ bounds a parallelizable manifold). {\bf Theorem I} (Stevens). {\it Let $M_k$ denote the $k$-fold cyclic cover of $M^{2n+1}$ branched along $\partial F$, and let $n$ be odd, $n\ne 1, 3, 7$. If~ ${\rm Var}(\vf^d)=0$, then $M_k$ and $M_{k+d}$ are (orientation preserving) homeomorphic, while $M_k$ and $M_{d-k}$, $k<d$, are orientation reversing homeomorphic. Furthermore, $M_{k+d}$ is diffeomorphic to $(\sigma_d/8)\Sigma~\#~M_k$.} \noindent Here $\sigma_k$ is the signature of a parallelizable manifold $N_k$ with boundary $\partial N_k=M_k$, and $\Sigma$ is the generator of the finite cyclic group $bP_{2n+2}$ of homotopy $(2n+1)$-spheres that bound parallelizable manifolds. ${\rm Var}(h)$ denotes the variation homomorphism of a diffeomorphism $h: F\lra F$ which keeps the boundary $\partial F$ pointwise fixed, and is defined as follows.
If $[z]\in H_n(F,\partial F)$ is the homology class of a relative cycle $z$, then ${\rm Var}(h): H_n(F,\partial F)\lra H_n(F)$ is defined by the formula ${\rm Var}(h)[z]:=[h(z)-z]$ (cf. \S 1 of \cite{Stev} or \S1.1 of \cite{AGV}). Stevens also proved topological as well as smooth periodicity for $n$ even (see \cite{Stev}, Theorem 9): {\bf Theorem II}. {\it Let $M_k$ be the branched cyclic covers of a $(2n+1)$-manifold $M$ with a simple open book decomposition. If ${\rm Var}(\vf^d)=0$, then $M_k$ and $M_{k+2d}$ are homeomorphic and $M_k$ and $M_{k+4d}$ are diffeomorphic. Moreover, if $n=2$ or $6$, then $M_k$ and $M_{k+d}$ are diffeomorphic.} Both papers viewed the open book $M^{2n+1}$ as the boundary of a $(2n+2)$-manifold and used the results of C.T.C. Wall \cite{W1} on the classification of $(n-1)$-connected $(2n+1)$-manifolds. Here, in the third section, we deal with the same periodicity problems from a different point of view, based on the results of M. Kreck \cite{Kreck} on the group of isotopy classes of diffeomorphisms of $(n-1)$-connected almost-parallelizable $2n$-manifolds. We give different proofs of these two theorems of Stevens, including the cases $n=3$ and $n=7$ (see Corollaries 2, 3, and 4 below). As we have just mentioned, our approach is based on the results of Kreck, who computed the group of isotopy classes of diffeomorphisms of closed $(n-1)$-connected almost-parallelizable $2n$-manifolds in terms of exact sequences. In the first part of this paper we use these results to obtain a similar exact sequence for the diffeomorphisms $f$ of a parallelizable handlebody $F\in\H(2n,\mu,n),~n\geq 2$, that preserve the boundary $\partial F$ pointwise and induce the trivial variation operator $\V(f) : H_*(F,\partial F)\lra H_*(F)$.
We will denote the group of pseudo-isotopy classes of such diffeomorphisms by $\K(V) $ and prove the following \noindent {\bf Theorem 3.}~{\it If $n\geq 3$ then the following sequence is exact $$ 0\lra \Theta_{2n+1} \lra \K(V) \lra {{\rm Hom}\Bigl(H_n(F,\partial F),~S\pi_n(SO(n))\Bigr)} \lra 0 $$ If $n=2$ then $\K(V) = 0$.} \noindent Here, by $S\pi_n(SO(n))$ we mean the image of $\pi_n(SO(n))$ in $\pi_n(SO(n+1))$ under the natural inclusion $SO(n)\hra SO(n+1)$ and by $\Theta_{2n+1}$ the group of all homotopy $(2n+1)$-spheres (see \S2.2 for the details). \noindent \underline {Remark}: Recently D. Crowley \cite{Crowl} extended results of D. Wilkins on the classification of closed $(n-1)$-connected $(2n+1)$-manifolds, $n=3,7$. One could use these results together with the technique of Durfee, Kauffman and Stevens to complete the periodicity theorems for $n=3,7$. However our intention was to show how one can apply the higher dimensional analogs of the mapping class group in studying this kind of problem. At the end we briefly mention the cyclic coverings of $S^3$ branched along the trefoil knot as an example which shows that there is no topological periodicity in the case $n=1$. Let $F$ be a manifold with boundary $\partial F$ and consider two diffeomorphisms $\vf,~\psi$ of $F$ that are identities on the boundary (in this paper we consider only orientation preserving diffeomorphisms). As usual, two such diffeomorphisms are called {\it pseudo-isotopic relative to the boundary} if there is a diffeomorphism ${\cal H}:~F\times I\lra F\times I$ which satisfies the following properties: $$ 1)~{\cal H}|_{F\times \{0\}}=\vf,~~~2)~{\cal H}|_{F\times \{1\}}=\psi,~~~3)~{\cal H}|_{\partial F\times I}=id $$ We will denote the group of pseudo-isotopy classes of such diffeomorphisms by $\mcgF$. The group of pseudo-isotopy classes of orientation preserving diffeomorphisms on a closed manifold $M$ will be denoted by $\mcgM$. There is a deep result of J. 
Cerf \cite{Cerf}, which allows one to replace pseudo-isotopy by isotopy provided that the manifold is simply connected and of dimension at least six. All our manifolds are simply connected here, so $n=2$ is the only case when we actually use pseudo-isotopy. For all other $n\geq 3$ we will use the same notations (where the tilde \~{} stands for ``pseudo'') but mean the usual isotopy. We will call these groups {\it the mapping class groups}. If $M$ is embedded into $W$ as a submanifold, then the normal bundle of $M$ in $W$ will be denoted by $\nu(M;W)$. Integer coefficients are understood for all homology and cohomology groups, unless otherwise stated, and the symbols $\simeq$ and $\cong$ are used to denote diffeomorphism and isomorphism, respectively. \noindent {\bf Acknowledgement}: We thank the referee for helpful comments and suggestions. The second author would also like to express his gratitude to Professors Matthias Kreck and Anatoly Libgober for stimulating discussions during the preparation of this paper. \section{Kernel of the variation operator} \subsection{Double of a pair (X,A)} Let $(X,A)$ be a pair of CW complexes, and consider the pair $(X\times I,A\times I)$ (here and later $I=[0,1]$, and we denote the boundary of $I$ by $\partial I$). \begin{dfn} The subspace $(X\times\partial I)\cup(A\times I)$ of $X\times I$ will be called the double of the pair $(X,A)$, and denoted by $\DblX$. \end{dfn} We will denote the pair $(X\times\{0\},A\times\{0\})$ by $(X_0,A_0)$, the product $A\times I$ by $A_+$ and the union $(X\times\{1\})\cup A_+$ by $X_+$. Thus we can write $\DblX=X_0\cup X_+$ and $X_0\cap X_+ = A\times\{0\}$. \noindent \underline{Remark}: If we take the pair $(X,A)$ to be a manifold with boundary, then the double $\DblX$ will be the boundary of the product $X\times I$, which is a closed manifold with the canonically defined smooth structure (see \cite{Munkres}). In this case we will denote the double simply by ${\cal D}X$.
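For instance, for the pair $(D^n,S^{n-1})$ we have $$ {\cal D}D^n=(D^n\times\partial I)\cup(S^{n-1}\times I)=\partial(D^n\times I)\cong S^n, $$ the $n$-sphere presented as the union of the two disks $D^n_0$ and $D^n_+$.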
Now we construct a natural homomorphism $d_*: H_*(X,A)\lra H_*(\DblX)$. Consider the reduced suspensions of $X$ and $A$ (the common base point is chosen outside of $X$) and the induced isomorphism between $H_*(X,A)$ and $H_{*+1}(\Sigma X^+,\Sigma A^+)$. The excision property induces a natural isomorphism between $H_{*+1}(\Sigma X^+,\Sigma A^+)$ and $H_{*+1}(X\times I, \DblX)$, and we define the homomorphism $d_*$ as the composition of these two isomorphisms with the boundary map $\delta_{*+1}$ from the exact sequence of the pair $(X\times I,\DblX)$: \begin{dfn} $$ d_q:=\delta_{q+1}\circ iso :~ H_q(X,A) \stackrel{\cong}{\lra} H_{q+1}(X\times I, \DblX) \stackrel{\delta_{q+1}}{\lra} H_q (\DblX) $$ \end{dfn} The groups $H_*(X,A)$ and $H_*(\DblX,X)$ are naturally isomorphic and we can rewrite the exact sequence of the pair $(\DblX,X)$ in the following form: $$ \cdots H_{q+1}(\DblX)\to H_{q+1}(X,A) \to H_q(X) \stackrel{i_q}{\to} H_q(\DblX) \stackrel{j_q}{\to} H_q(X,A)\cdots $$ \begin{lem} For each $q\geq 1$ the homomorphism $d_q$ is a splitting homomorphism of the above exact sequence and we have the following short exact sequence that splits: $$ 0\lra H_q(X) \stackrel{i_q}{\lra} H_q(\DblX) \stackrel{j_q}{\lra} H_q(X,A)\lra 0$$ \end{lem} \begin{proof} It follows rather easily from our definition of $d_q$ that for each $q\geq 1$ the composition $j_q\circ d_q$ is the identity map of the group $H_q(X,A)$. This property entails our lemma (cf. \cite{RF}, chap. 5, \S 1.5). \end{proof} Let us consider now a homeomorphism $f: X\lra X$ which is the identity on $A$, i.e. $f(x)=x$ for all $x\in A$. For such a map the variation homomorphism $\V(f) : H_q(X,A)\lra H_q(X)$ is defined for all $q\geq 1$ by the formula $\V(f) [z]:=[f(z)-z]$ for any relative cycle $z\in H_q(X,A)$ (cf. \S 1 of \cite{Stev} or \S1.1 of \cite{AGV}). 
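Note that, by Lemma 1, every class $y\in H_q(\DblX)$ can be written uniquely in the form $$ y=i_q(a)+d_q(b),\qquad a\in H_q(X),~~b\in H_q(X,A), $$ where $b=j_q(y)$; this decomposition will be used in the proof of the next theorem.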
The map $f$ also induces the map $f^{(r)}: (X,A)\lra (X,A)$ and a map $\tilde{f}: \DblX\lra \DblX$ defined as follows: $$\tilde{f}(x):=\left\{ \begin{array}{rcl} f(x)& {\rm if}& x\in X_0\\ x~& {\rm if}& x\in X_+\\ \end{array} \right. $$ \noindent If we denote the corresponding induced maps in homology by $f_*,~f^{(r)}_*,~\tilde{f_*}$ then we have the following commutative diagram: $$ \begin{CD} 0 @>>> H_q(X) @>i_q>> H_q(\DblX) @>j_q>> H_q(X,A) @>>>0\\ @. @VVf_*V @VV\tilde{f_*}V @VVf^{(r)}_*V @.\\ 0 @>>> H_q(X) @>i_q>> H_q(\DblX) @>j_q>> H_q(X,A) @>>>0 \end{CD} $$ \begin{thm} ~\\ If~ $\V(f) = 0$, then $\tilde{f_*}$ is the identity map of $H_q(\DblX)$ for all $q$. \end{thm} \begin{proof} It follows right from the definition of $\V(f) $ that $f_* - Id = \V(f) \circ l_*$ and $f_*^{(r)} - Id = l_*\circ\V(f) $, where $l_*: H_*(X)\lra H_*(X,A)$ is induced by the inclusion $(X,\emptyset) \hra (X,A)$ (cf. \S1.1 of \cite{AGV}). It is also easy to check that the homomorphisms $\tilde{f_*}$ and $d_q$ are connected with the variation homomorphism via the formula $$\tilde{f_*}\circ d_q = d_q\circ Id + i_q\circ \V(f) $$ Hence if $\V(f) = 0$, then $f_*=Id,~f_*^{(r)}=Id$ and $\tilde{f_*}\circ d_q = d_q\circ Id$. These three identities together with $j_q\circ d_q = Id$ imply the statement. \end{proof} Now we restrict our attention to the case when $X$ is a smooth, simply connected manifold of dimension at least four and $A=\partial X$ is the boundary. Let $\varphi\in {{\rm Diff}(X,{\rm rel}~\partial)}$ and $\tilde{\varphi} \in {{\rm Diff}(\DX)}$ be the extension by the identity to the second half of the double. Define the map $\omega: {{\rm Diff}(X,{\rm rel}~\partial)}\lra {\rm Diff}(\DX)$ by the formula $\omega(\varphi):=\tilde{\varphi}$. \begin{thm}~\\ The map $\omega$ induces a monomorphism $\tilde{\pi}_0{{\rm Diff} (X,{\rm rel}~\partial)}\lra \tilde{\pi}_0{\rm Diff}(\DX)$. 
\end{thm} \begin{proof} It is easy to see that $\omega$ induces a well-defined map of groups of pseudo-isotopy classes of diffeomorphisms, i.e., if $\varphi'$ is pseudo-isotopic relative to the boundary to $\varphi$ then $\omega(\varphi')$ is pseudo-isotopic to $\omega(\varphi)$. It is obvious that for any two diffeomorphisms $\varphi,\psi\in \rm Diff(X,rel~\partial)$, $\omega(\varphi\cdot\psi)=\omega(\varphi)\cdot\omega(\psi)$, that is $\omega$ induces a homomorphism which we also denote by $\omega$. To show that $\omega$ is actually a monomorphism we use Proposition 1 of Kreck (see \cite{Kreck}, p. 650 for the details): {\it Let $A^m$ be a simply-connected manifold with $m\geq 5$ and $h\in {\rm Diff}(\partial A)$. $h$ can be extended to a diffeomorphism on $A$ if and only if the twisted double $A\cup_h -A$ bounds a 1-connected manifold $B$ such that all relative homotopy groups $\pi_k(B,A)$ and $\pi_k(B,-A)$ are zero, where $A$ and $-A$ mean the two embeddings of $A$ into the twisted double.} Suppose now that $\omega(\varphi)=\tilde{\varphi}$ is pseudo-isotopic to the identity. Then the mapping torus $\DX_{\tilde{\varphi}}$ is diffeomorphic to the product $\DX\times S^1 =\partial(X\times I\times S^1)$. On the other hand we can present $\DX_{\tilde{\varphi}}$ as the union of $X_{\varphi}$ and $-X\times S^1$ along the boundary $\partial X\times S^1$. Since $\partial(X\times D^2)=X\times S^1\bigcup -\partial X\times D^2$ we can paste together $X\times I\times S^1$ and $X\times D^2$ along the common sub-manifold $X\times S^1$ to obtain a new manifold $W$, which cobounds $X_{\varphi}\bigcup-\partial X\times D^2$. Now note that $X_{\varphi}\bigcup-\partial X\times D^2$ is diffeomorphic to the twisted double $X\times I\bigcup_h -X\times I$ where the diffeomorphism $h: \partial(X\times I)\rightarrow \partial(X\times I)$ is defined by the identities: $\left.h\right|_{X_0}=id$, and $\left.h\right|_{X_+}=\varphi$ (cf. \cite{Kreck}, property 1) of $\tilde{W}$ on page 657). 
The theorem of Seifert and Van Kampen entails that $\pi_1(W)\cong\{1\}$, and hence $\pi_1(W,X\times I)\cong\{1\}$. To show that the other homotopy groups are trivial it is enough to show that $H_*(W,X)\cong \{0\}$ for all $*\ge 2$. This can be seen from the relative Mayer-Vietoris exact sequence of pairs $(X\times I\times S^1, X)$ and $(X\times D^2, X)$ where by $X$ we mean a fiber of the product $X\times S^1$: $H_*(X\times S^1,X)\stackrel{\cong}{\rightarrow} H_*(X\times I\times S^1,X)\oplus H_*(X\times D^2,X)\rightarrow H_*(W,X)$. Thus by Proposition 1 of \cite{Kreck}, there is a diffeomorphism of $X\times I$ to itself that gives the required pseudo-isotopy between $\varphi$ and $id$. \end{proof} \subsection{$\K(V) $ as an extension} We now let $F\in\H(2n,\mu,n)$ be a parallelizable handlebody, that is, a parallelizable manifold which is obtained by gluing $\mu$ $n$-handles to the $2n$-disk and rounding the corners: $$ F=D^{2n}\cup\bigsqcup_{i=1}^{\mu} (D_i^n\times D^n)$$ We assume here that $n\geq 2$. For the classification of handlebodies in general, see \cite{Wall1}. Obviously $F$ has the homotopy type of the wedge product of $n$-spheres and nonempty boundary $\partial F$ which is $(n-2)$-connected. The Milnor fibre of an isolated complex hypersurface singularity is an example of such a manifold. Let us consider now $\vf\in\mcgF$ and the induced variation homomorphism $\Va(\vf): H_n(F,\partial F)\lra H_n(F)$. This correspondence gives a well defined map $$\Va: \mcgF \lra \Im(var) $$ which is a derivation (1-cocycle) with respect to the natural action of the group $\mcgF$ on $\Im(var) $ (cf. \cite{Stev}, \S2) $$ \Va(h\circ g) = \Va(h) + h_*\circ \Va(g)$$ This formula implies that the isotopy classes of diffeomorphisms that give trivial variation homomorphisms form a subgroup of $\mcgF$. \begin{dfn} The subgroup $$\K(V) :=\{f\in\mcgF~|~\Va(f)[z]=0,~ \forall [z]\in H_n(F,\partial F)\}$$ will be called the kernel of the variation operator. 
\end{dfn} In order to describe the algebraic structure of this kernel we will use the results of Kreck \cite{Kreck}, who computed the group of isotopy classes of diffeomorphisms of closed oriented $(n-1)$-connected almost-parallelizable $2n$-manifolds in terms of exact sequences. First we note that the double of our handlebody $F$ is such a manifold. \begin{lem} Let $F\in\H(2n,\mu,n)$ be a parallelizable handlebody $(n\geq 2)$. Then the double $\DblF$ is a closed $(n-1)$-connected stably-parallelizable $2n$-manifold. \end{lem} \begin{proof} Since $F$ is simply connected and $\DblF=F_0\cup F_+$, we have $\pi_1(\DblF)=0$. Then, using the exact homology sequence of the pair $(F\times I, \partial(F\times I))$, it can easily be seen that $\DblF$ is an $(n-1)$-connected manifold. Since $F$ is parallelizable, the double is stably parallelizable. \end{proof} Next we recall the result of Kreck \cite{Kreck}. Let $M$ be a smooth, closed, oriented $(n-1)$-connected almost-parallelizable $2n$-manifold, $n\geq 2$. Denote by $\Aut$ the group of automorphisms of $H_n(M,\zint)$ preserving the intersection form on $M$ and (for $n\geq3$) commuting with the function $\alpha:~H_n(M)\lra \pi_{n-1}(SO(n))$, which is defined as follows. Represent $x\in H_n(M)$ by an embedded sphere $S^n\hookrightarrow M$. Then the function $\alpha$ assigns to $x$ the classifying map of the corresponding normal bundle. Any diffeomorphism $f\in {\rm Diff}(M)$ induces a map $f_*$ which lies in $\Aut$. This gives a homomorphism $$ \kappa:~ \mcgM \lra \Aut,~~~[f]\longmapsto f_* $$ The kernel of $\kappa$ is denoted by $\kerM$ and to each element $f$ from this kernel Kreck assigns a homomorphism $H_n(M)\lra S\pi_n(SO(n))$, where $S: \pi_n(SO(n))\lra \pi_n(SO(n+1))$ is induced by the inclusion, in the following way. Represent $x\in H_n(M)$ by an embedded sphere $S^n\subset M$ and use an isotopy to make $f|_{S^n}=Id$.
The stable normal bundle $\nu(S^n)\oplus \varepsilon^1$ of this sphere in $M$ is trivial and therefore the differential of $f$ gives an element of $\pi_n(SO(n+1))$. It is easy to see that this element lies in the image of $S$. This construction leads to a well defined homomorphism (cf. Lemma 1 of \cite{Kreck}) $$\chi:~\kerM\lra {{\rm Hom}\bigl(H_n(M),~S\pi_n(SO(n))\bigr)}$$ \noindent If $n=6$ we have $S\pi_n(SO(n))=0$, and for all other $n\geq 3$ the groups $S\pi_n(SO(n))$ are given in the following table (\cite{Kreck}, p. 644): \parskip=5mm \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline $n$ (mod 8) & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7\\ \hline $S\pi_n(SO(n))$ & ~$\zint_2\oplus\zint_2$~ & ~$\zint_2$~ & ~$\zint_2$ ~ & ~$\zint$ ~ & ~ $\zint_2$~ & ~ 0 ~ & ~ $\zint_2$ ~ & ~$\zint$ \\ \hline \end{tabular} In particular, when $n\equiv 3\pmod{4}$ the homomorphism $\chi(f)$ can be defined using the Pontryagin class $p_{\frac{n+1}{4}}(M_f)$ of the mapping torus $M_f$: Take a diffeomorphism $f\in \kerM$ and consider the projection $$ \pi:~M_f\lra \left. M_f \right/ \{0\}\times M =\Sigma M^+ $$ It is clear from the exact sequence of Wang that the map $i^*:~H^n(M_f)\lra H^n(M)$ is surjective (recall that $f_*=id$) and therefore we obtain an isomorphism $\pi^*:~H^n(M)\cong H^{n+1}(\Sigma M^+)\lra H^{n+1}(M_f)$. Next define an element $p'(f)\in H^n(M)$ by $p'(f):={\pi^*}^{-1}(p_{\frac{n+1}{4}}(M_f))$. It can be shown (cf. \cite{Kerv}) that the map $f\lmt p'(f)$ is a homomorphism and $c:=a_{\frac{n+1}{4}} (\frac{n-1}{2})!$ divides $p'(f)$, where as always, $a_m=2$ if $m$ is odd and $a_m=1$ if $m$ is even. 
This defines a map $$ \chi':~\kerM\lra H^n(M),~~~with~~~\chi'(f):= p'(f)/c $$ \parskip=2mm \noindent \underline{Remark}: These two elements $\chi'(f)$ and $\chi(f)$ belong to the isomorphic groups ${{\rm Hom}(H_n(M),~\pi_n(SO))}~~{\rm and}~~{{\rm Hom}(H_n(M),~ S\pi_n(SO(n)))}$ respectively, and they are connected through $\tau^*(\chi(f))=\chi'(f)$ via the homomorphism $$ \tau^* : {{\rm Hom}\bigl(H_n(M),~S\pi_n(SO(n))\bigr)} \lra {{\rm Hom}(H_n(M),~\pi_n(SO))} $$ induced by the natural homomorphism $\tau: \pi_n(SO(n+1))\lra \pi_n(SO(n+2))$. For the details the reader is referred to Lemma 2 of \cite{Kreck}. If $M^{2n}$ bounds a parallelizable manifold and $n\geq 3$, then Theorem 2 of \cite{Kreck} gives two short exact sequences: \begin{equation} \label{firstseq} 0\lra \kerM \lra \mcgM \stackrel{\kappa}{\lra} \Aut \lra 0 \end{equation} \begin{equation} \label{secondseq} 0\lra \Theta_{2n+1} \stackrel{\iota}{\lra} \kerM \stackrel{\chi}{\lra} {{\rm Hom}\bigl(H_n(M),~S\pi_n(SO(n))\bigr)} \lra 0 \end{equation} \noindent where the map $\iota$ is induced by the identification of each homotopy $(2n+1)$-sphere with the element of the mapping class group $\tilde{\pi}_0 {\rm Diff}(D^{2n},{\rm rel}~\partial)$. If $M$ is a simply connected manifold of dimension 4, Kreck has proved that $\kappa$ is a monomorphism (\cite{Kreck}, Theorem 1). Let $F\in\H(2n,\mu,n)$ be a parallelizable handlebody as above, and $\DblF$ be the corresponding double. First assume that $n=2$ and $\vf\in \K(V) $, then it follows from our Theorems 1 and 2 and Theorem 1 of Kreck \cite{Kreck} that $\tilde{\vf}$ is the trivial element of $\mcgDF$, and therefore $\vf$ is the identity of $\mcgF$. \noindent \underline{Remark}: In this case, the handlebody $F$ doesn't have to be parallelizable and the kernel of the variation operator $\K(V) $ will be trivial for any simply connected 4-manifold $F$. \noindent Next we consider the case when $n\geq 3$ and denote the group $S\pi_n(SO(n))$ by $\G$. 
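In particular, in the cases $n=3$ and $n=7$ excluded in Theorem I (that is, the dimensions $7$ and $15$ mentioned in the abstract), the table gives $\G=S\pi_3(SO(3))\cong\zint$ and $\G=S\pi_7(SO(7))\cong\zint$.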
Recall also that we can assume that $\DblF=F\cup F_+$. Since $F$ is $(n-1)$-connected and the boundary $\partial F$ is $(n-2)$-connected, the universal coefficient theorem together with the cohomology exact sequence of the pair $(\DblF,F_+)$ and the excision property give us the following short exact sequence: \begin{equation} \label{thirdseq} 0\to \Rhom \stackrel{j^*}{\to} \Dhom \stackrel{i^*}{\to} \Fhom \to 0 \end{equation} where $i:F_+\hra \DblF$, $j: (\DblF,\emptyset)\hra (\DblF,F_+)$ are inclusions and $i^*$ and $j^*$ are the corresponding induced maps. \begin{lem} $i^*(\chi(\tilde{\vf}))$ is the trivial map for any $\vf\in\mcgF$. \end{lem} \begin{proof} Take any $[z]\in H_n(F_+)$; then $i^*(\chi(\tilde{\vf}))[z]=\chi(\tilde{\vf})[i_*(z)]$. Since $F_+$ is homotopy equivalent to $F$ and $H_n(F)\cong\pi_n(F)$, we can represent the $n$-cycle $[z]$ by an embedded $S^n\hra F_+$, and we can also assume that the normal bundle of such a sphere is contained in $F_+$. We have defined $\tilde{\vf}$ as the identity on $F_+$, and this implies $\chi(\tilde{\vf})[i_*(z)]=0$, as required. \end{proof} \noindent Now we define a homomorphism $\chi_r: \K(V) \to \Rhom$. Take any $\vf\in \K(V) $; then $\tilde{\vf}\in \kerDF$ (recall Theorem 1 above) and $\chi(\tilde{\vf})\in \Dhom$. Since $i^*(\chi(\tilde{\vf}))=0$, there exists a unique $h\in \Rhom$ such that $j^*(h)=\chi(\tilde{\vf})$. \begin{dfn} We define the map $\chi_r: \K(V) \to \Rhom $ by the formula $\chi_r(\vf):=h$. \end{dfn} \noindent It is clear that $\chi_r$ is a homomorphism.
Here we also consider the map $\iota_r: \Theta_{2n+1}\lra \mcgF$ defined as in (\ref{secondseq}) above: present any homotopy $(2n+1)$-sphere $\Sigma'$ as the union of two disks via a diffeomorphism $\psi\in \tilde{\pi}_0 {\rm Diff}(S^{2n})\cong \tilde{\pi}_0 {\rm Diff}(D^{2n},{\rm rel}~\partial ) \cong \Theta_{2n+1}$ then take a disk $D^{2n}$ embedded into ${\rm int}(F)$ and define the diffeomorphism of $F$ by the formula $$ \iota_r(\Sigma')(x):=\left\{ \begin{array}{rcl} \psi(x)& {\rm if}& x\in D^{2n}\hra F\\ x~& ~ & {\rm otherwise} \\ \end{array} \right. $$ \noindent It is obvious that ${\rm Im(\iota_r)}\subset \K(V) $. Now we describe $\K(V) $ as a central extension of the group $\Theta_{2n+1}$ by $H^n(F,\partial F;\G)\cong H_n(F;\G)$. \begin{thm} ~\\ If $n=2$ then $\K(V) = 0$, and for all $n\geq 3$ the following sequence is exact \begin{equation} \label{kervar} 0\lra \Theta_{2n+1} \stackrel{\iota_r}{\lra} \K(V) \stackrel{\chi_r}{\lra} \Rhom \lra 0 \end{equation} \end{thm} \begin{proof} We have mentioned already that if $n=2$, the kernel of the variation operator is trivial. Assume now that $n\geq 3$. It follows from Theorems 1 and 2 above that the inclusion map $\omega: {\rm Diff}(F,{\rm rel}~\partial)\to {\rm Diff} (\DblF)$ induces a monomorphism $s_{\omega}: \K(V) \to \kerDF$. Since the composition $s_{\omega}\cdot \iota_r$ coincides with the injective map $\iota$ from the exact sequence (\ref{secondseq}), we see that our $\iota_r$ is injective too. It is also clear that $\rm Im(\iota_r)\subset Ker(\chi_r)$. Consider now any $\vf\in \rm Ker(\chi_r)$, then $\chi( s_{\omega}(\vf)) = j^*(\chi_r(\vf)) = 0$, where $j^*$ is as in (\ref{thirdseq}). Thus $s_{\omega}(\vf) \in \rm Ker(\chi)\cong\Theta_{2n+1}\cong Im(\iota)$ and since $s_{\omega}$ is a monomorphism we have $\vf\in \rm Im(\iota_r)$ as required. 
To prove that $\chi_r$ is an epimorphism, it is enough to show that for a set of generators $\{g_1,\ldots, g_m\}$ of $\Rhom$ the group $\K(V) $ contains diffeomorphisms $\{\vf_1,\ldots, \vf_m\}$ such that $\chi_r(\vf_j)=g_j, ~ j\in\{1,\ldots,m\}$. Recall that $F=D^{2n}\cup\bigsqcup_{i=1}^{\mu} (D_i^n\times D^n)$ and $H_n(F,\partial F)\cong \zint^{\mu}$. We can choose the following embedded disks $d_i\hra F,~ i\in\{1,\ldots,\mu\}$, as a basis of this homology group: $$ d_i:= \{0\}_i\times D^n\hra D^n_i\times D^n \hra F$$ (here $\{0\}_i$ is the center of the $i^{th}$ handle core disk $D^n_i$). Take a generator $x$ of $G$ and consider the homomorphism $ g_{xi}: H_n(F,\partial F)\lra G$ defined on this basis by the formula $$ g_{xi}[d_k]:=\left\{ \begin{array}{rcl} x & {\rm if} & k=i\\ 0 & {\rm if} & k\ne i \end{array} \right. \qquad k\in\{1,\ldots,\mu\}, $$ and extended linearly to the whole group. The set of such homomorphisms obviously generates $\Rhom$. Now we will use a higher-dimensional analog of the Dehn twist to construct the diffeomorphism $\vf_{xi}$ (cf. \cite{Wall1}, Lemma 12). For each disk $d_k$ consider the ``half-handle'' $(\frac{1}{2}D_k^n)\times D^n$ and notice that the closure of the complement of all these ``half-handles'' in $F$, $$ \CCF:=cl(F\setminus\bigsqcup_{k=1}^{\mu}(\frac{1}{2}D_k^n)\times D^n),$$ is diffeomorphic to the closed $2n$-disk $D^{2n}$, and the intersection of each ``half-handle'' with the boundary $\partial \CCF\simeq S^{2n-1}$ is $\partial (\frac{1}{2}D_k^n)\times D^n\simeq S_k^{n-1}\times D^n$.
We take a smooth map $\vf_x: (D^n,S^{n-1})\lra (SO(n),id)$ that sends a neighborhood of $S^{n-1}$ to $id$ and represents an element $[\vf_x]\in \pi_n(SO(n))$ such that $S([\vf_x])=x$ and define the diffeomorphism $\vf_{xi}|_{\bigsqcup_{k=1}^{\mu}(\frac{1}{2}D_k^n)\times D^n}$ by the formula \begin{equation} \label{twist} \vf_{xi}(t,s):=\left\{ \begin{array}{ccl} (\vf_x(s)\circ t,s) & {\rm if} & (t,s)\in (\frac{1}{2}D_i^n)\times D^n\\ (t,s) & {\rm if} & (t,s)\in (\frac{1}{2}D_k^n)\times D^n{\rm ~and~}k\ne i \end{array} \right. \end{equation} In particular, this gives a diffeomorphism $\phi\in {\rm Diff}(\partial \CCF)$ which is defined on $S_i^{n-1}\times D^n\hra \partial \CCF$ by restricting $t$ to the boundary of $\frac{1}{2}D_i^n$ (see (\ref{twist}) above) and by the identity everywhere else. We will now show that $\phi$ is isotopic to the identity. Consider the handlebody $$ F_i:=D^{2n}\cup (D_i^n\times D^n)=cl(F\setminus\bigsqcup_{k=1,k\ne i }^{\mu}(\frac{1}{2}D_k^n)\times D^n) $$ and denote by $\hat{F_i}$ the manifold obtained from $F_i$ by removing the open disk $\frac{1}{2}D^{2n}$ from $D^{2n}$. Hence $\partial \hat{F_i} \simeq \partial F_i\sqcup S^{2n-1}$. The first equation of (\ref{twist}) together with the identity map defines a diffeomorphism $\Phi$ of $\hat{F_i}$ such that $\Phi|_{S^{2n-1}} = \phi$ and $\Phi|_{\partial F_i} = Id$. We use the identity again to extend this $\Phi$ to a diffeomorphism $\tilde{\Phi}$ of $\DblFi$ where $$ \DblFi:=\DblF_i\setminus\frac{1}{2}D^{2n}\simeq \hat{F_i}\bigcup_{\partial F_i} F_i\quad{\rm and}\quad\tilde{\Phi}|_{F_i}=Id,~\tilde{\Phi}|_{\hat{F_i}}=\Phi $$ Thus $\phi$ is the restriction of $\tilde{\Phi}$ to the boundary $\partial \DblFi=S^{2n-1}$ and hence can be considered as an element of the inertia group of ${\cal D}F_i$ (cf. \cite{Kreck}, Proposition 3). Now it follows from Lemma 2 above and results of Kosinski (\cite{Kos}, see \S3) and Wall \cite{W0} that $\phi$ is isotopic to the identity.
In particular, we can use this isotopy on $S^{2n-1}\times [\frac{1}{4},\frac{1}{2}]\subset \frac{1}{2}D^{2n}$ to extend the diffeomorphism $\vf_{xi}|_{\bigsqcup_{k=1}^{\mu}(\frac{1}{2}D_k^n)\times D^n}$ to a diffeomorphism of the whole handlebody $F$. Denote the result of this extension by $\vf_{xi}$. Clearly $\vf_{xi}\in \mcgF$, and we leave it to the reader to check that $\chi_r(\vf_{xi})=g_{xi}$. \end{proof} \begin{cor} We have the following commutative diagram $$ \begin{array}{ccccccccc} &&&&0&&0&&\\ &&&&\da&&\da&&\\ 0 &\ra&\Theta_{2n+1}&\stackrel{\iota_r}{\to}&\K(V) &\stackrel{\chi_r}{\lra} &\Rhom &\ra &0\\ && \updownarrow\lt{\equiv} &&\da\lt{s_{\omega}} &&\da\lt{j^*} &\\ 0 &\ra &\Theta_{2n+1} &\stackrel{\iota}{\to} &\kerDF &\stackrel{\chi}{\lra} & \Dhom &\ra &0\\ &&&&\da\lt{i^*\cdot\chi} &&\da\lt{i^*} &\\ &&&&\Fhom &\stackrel{\equiv}{\longleftrightarrow} &\Fhom &\\ &&&&\da&&\da&&\\ &&&&0&&0&& \end{array} $$ where all horizontal and vertical sequences are exact. \end{cor} \begin{proof} The standard diagram chasing procedure is left to the reader. \end{proof} \noindent {\tt Example:} Consider the case when $F=S^3\times D^3$. Then $\DblF=S^3\times S^3$, $\Rhom\cong G\cong \zint$, $\Theta_{7}\cong\zint_{28}$ and $\kerDF\cong{\cal H}_{28}$, that is, the factor group of the group $\cal H$ of upper unitriangular $3\times 3$ integer matrices modulo the cyclic subgroup $28\zint$ of its center $\zint$ (cf. \cite{Fried} or \S1.3 of \cite{Kryl}). Thus $\K(V) \cong S\pi_3(SO(3))\oplus\Theta_7\cong \zint\oplus\zint_{28}$ and the first vertical short exact sequence from the previous corollary can be written as follows $$ 0\lra S\pi_3(SO(3))\oplus\Theta_7 \lra \tilde{\pi}_0S{{\rm Diff}(S^3\times S^3)} \lra S\pi_3(SO(3)) \lra 0. $$ Such an exact sequence was obtained by J. Levine (\cite{Levine}, Theorems 2.4 and 3.3) and H. Sato (\cite{Sato}, Theorem II) for the group $\tilde{\pi}_0S{{\rm Diff}(S^p\times S^p)}$.
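Note that the splitting of $\K(V) $ in this example holds for a general reason: since $\Rhom\cong\zint$ is free abelian, the central extension (\ref{kervar}) splits, and hence $\K(V) \cong \Theta_7\oplus\zint$.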
\section{Manifolds with open book decompositions} \subsection{Periodicity in higher dimensions} In this section we will apply our exact sequence (\ref{kervar}) to study the periodicity of branched cyclic covers of manifolds with open book decompositions. \begin{dfn} We will say that a smooth closed $(m+1)$-dimensional manifold $M$ has an open book decomposition if it is diffeomorphic to the union $$ M\simeq F_{\vf}\cup_r (\partial F\times D^2)$$ where $F$ is an $m$-dimensional manifold with boundary $\partial F$, $\vf \in {{\rm Diff}(F,{\rm rel}~\partial)}$ is an orientation preserving diffeomorphism of $F$ that keeps the boundary pointwise fixed, $F_{\vf}$ is the mapping torus of $\vf$ $$ F_{\vf}:= \left. F\times[0,1]\right/ (x,0)\sim (\vf(x),1) $$ and $r:\partial F_{\vf}\lra \partial F\times S^1$ is a diffeomorphism that makes the following diagram commute $$\begin{CD} \partial F_{\vf} @>in>> F_{\vf}\\ @VVrV @VV\pi V\\ \partial F\times S^1 @>p_2>> S^1 \end{CD} $$ (here $p_2$ is the projection onto the second factor and $\pi$ is the bundle projection of the mapping torus onto the base circle). \end{dfn} Such a union is also called the relative mapping torus with page $F$ and binding $\partial F$ (cf. \cite{Quinn} or \cite{Stev}). When $M$ has dimension $(2n+1)$ and $F$ has the homotopy type of an $n$-dimensional CW complex, the page is said to be {\it almost canonical}. The diffeomorphism $\vf$ is called the geometric monodromy and the induced map $\vf_*:~H_n(F)\ra H_n(F)$ is the (algebraic) monodromy. If instead of $\vf$ we take some positive power of this diffeomorphism, say $\vf^k$, we obtain the $k$-fold cyclic cover $M_k$ of $M$, branched along $\partial F$, i.e. $$ M_k=F_{\vf^k}\cup_r (\partial F \times D^2) $$ It was shown in \cite{DK} (Theorem 4.5) that if a fibered knot $\partial F$ is a rational homology sphere and $\vf^d=id$ for some $d>0$, then the $k$-fold cyclic covers $M_k$ of $S^{2n+1}$ branched along $\partial F$ are periodic in $k$ with period $d$.
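For instance, for the Milnor fibration of the trefoil knot (the case $n=1$) the algebraic monodromy acts on $H_1(F)\cong\zint^2$ and is given, in a suitable basis, by a matrix of order six: $$ h=\left(\begin{array}{rr} 1 & -1\\ 1 & 0 \end{array}\right),\qquad h^3=-I,\qquad h^6=I, $$ so that $\vf^6_*=id$; nevertheless, as mentioned in the introduction, there is no topological periodicity in the case $n=1$.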
In the case of the links of isolated complex polynomial singularities these restrictions on $\partial F$ and $\vf$ are equivalent to the condition ${\rm Var}(\vf^d)=0$. \noindent \underline{Remarks}:\\ i) Notice that the conditions that $\vf^d_*=id$ and that $\partial F$ is a rational homology sphere imply that ${\rm Var}(\vf^d)=0$, but the converse is not true (see \cite{Stev}, p. 231).\\ ii) Proposition 3.3 of \cite{Kauffman} proves that an open book $M^{2n+1}$ with page $F$ and monodromy $\vf$ is a homotopy sphere if and only if ${\rm Var}(\vf)$ is an isomorphism. In addition to the almost canonical page requirement we will need to assume more about $M$ (cf. \cite{Stev}, \S3 p.232): we assume from now on that $M$ has a {\it simple open book decomposition}. This implies, in particular, that $M$ bounds a simply connected parallelizable manifold. We will also assume that $n\geq 3$, $\pi_1(\partial F)=1$ and ${\rm Var}(\vf^d)=0$ for some $d\ge 1$ (where $\vf$ is the diffeomorphism that gives $M$ the open book structure). A parallelizable simply connected manifold bounded by $M$ will be denoted by $N$. Before we give proofs of the periodicity theorems (Corollaries 2, 3 and 4 below) we will first obtain some auxiliary results. It is clear that $F$ is also a parallelizable manifold. Take now any $z\in H_n(F,\partial F)\cong \pi_n(F,\partial F)$ and choose an embedded disk $(D^n,\partial D^n)\hra (F,\partial F)$ that represents this class $z$. Inside $\DblF=F_0\cup F_+$ we consider the double $\Dbl D^n= D^n_0\cup D^n_+\simeq S^n\hra \DblF$, and since the boundary $\partial D^n=S^{n-1}\subset \partial F$ has trivial normal bundle in $\partial F$, we can add to $F_0$ one $n$-handle along this sphere to obtain the manifold $F_0(z):= F_0\cup (D^n_+\times d^n)$. As we have done above, we can extend a diffeomorphism $\vf\in {{\rm Diff}(F,{\rm rel}~\partial)}$ to a diffeomorphism $\vf_z\in {{\rm Diff}(}F_0(z),{\rm rel}~\partial)$ using the identity on $D^n_+\times d^n$.
Then we obviously have $\Dbl D^n\hra F_0(z)\hra \DblF$ and $\vf_z = \tilde{\vf}|_{F_0(z)}$. \begin{lem} The mapping torus $F_{\vf}$ of $\vf$ is framed if and only if the mapping torus ${F_0(z)}_{\vf_z}$ of $\vf_z$ is framed. \end{lem} \begin{proof} We will show that any framing of $F_{\vf}$ can be extended to a framing of ${F_0(z)}_{\vf_z}$; the other direction is trivial. Since $\vf$ is the identity on the boundary, we have $S^{n-1}\times S^1\hra \partial F_{\vf} = (\partial F)\times S^1$, where $S^{n-1}$ is the boundary of our relative homology class $z$. We can assume that $F$ has a collar $\partial F\times [0,1]$ and that $\vf$ is the identity map on this collar. Now we have $D^n\hra F\hra F\cup (\partial F\times [0,1])$ and we use the disk theorem to change $\vf$ by an isotopy to a diffeomorphism $\vf'$ such that $\vf'|_{D^{2n}}=\vf'|_{\partial F\times 1} =id$ and $D^n\subset {\rm int}(D^{2n})\subset F\cup(\partial F\times [0,\frac{1}{2}])$. Then clearly $F_{\vf'}\simeq F_{\vf}$ and $D^n\times S^1\hra F_{\vf'}$ with the trivial normal bundle. Furthermore, since $S^{n-1}\times [0,1]$ also embeds into $\partial F\times [0,1]$ with trivial normal bundle, we can connect $\partial D^n\times S^1 \hra F_{\vf'}$ with $S^{n-1}\times S^1 \hra (\partial F\times 1)\times S^1=\partial (F_{\vf'})$, using the collar $(S^{n-1}\times [0,1])\times S^1$. This implies that the trivial normal bundle of $S^{n-1}\times S^1$ in $\partial (F_{\vf})$ comes from the trivial normal bundle of $D^n\times S^1$ embedded into $F_{\vf}$. Now notice that the mapping torus ${F_0(z)}_{\vf_z}$ is the union of $F_{\vf}$ and $D_+^n\times d^n\times S^1$ along $S^{n-1}\times d^n \times S^1 \hra \partial (F_{\vf})$. Therefore the restriction of the framing of $F_{\vf}$ to $S^{n-1}\times d^n \times S^1 = (\partial D^n)\times d^n \times S^1$ (where $D^n\times d^n \times S^1\hra F_{\vf}$) can be extended to a framing of ${F_0(z)}_{\vf_z}$.
\end{proof} \begin{thm} ($n$ is odd, $n\ne 1$)\\ Suppose $[\psi]\in \K(V) $ and $M^{2n+1}\simeq F_{\psi}\cup_r (\partial F\times D^2)$ bounds a parallelizable manifold $N$. Then $\chi_r(\psi) = 0$. \end{thm} \begin{proof} It is enough to show that $\chi_r(\psi)[z] = 0$ for an arbitrary relative homology class $z\in H_n(F,\partial F)$. As above, we represent such a class by an embedded disk $(D^n,\partial D^n) \hra (F,\partial F)$ and take the double $\Dbl D^n = S^n\hra \DblF$. We will denote this double by $dz$ ({\it to avoid cumbersome notation we denote by $dz$ both the homology class and the embedded sphere $\Dbl D^n$ that represents this class}) and its normal bundle in $\DblF$ by $\nu(dz;\DblF)$. Note that $\nu(dz;\DblF)$ is trivial. The proof now splits into two parts.\\ 1)~ $n\geq 5$: It is clear that ${\psi_z}_* = id$ on $H_n(F_0(z))$ and we can isotope $\psi_z$ to a diffeomorphism $\psi'_z$ such that ${\psi'_z}|_{dz} = id$ (see \cite{Haef}). Extending this new diffeomorphism by the identity to a diffeomorphism ${\tilde\psi}'\in {{\rm Diff}(\DblF)}$ we obtain an element of $\kerDF$ which pointwise fixes $dz$ and maps $F_0(z)$ to itself. Now it follows from the commutative diagram of Corollary 1 that it is enough to show that $\chi({\tilde \psi}')[dz]=0$. Since by Lemma 4 the mapping torus ${F_0(z)}_{\psi'_z}$ is framed, the normal bundle $\nu(dz\times S^1;\DblF_{{\tilde\psi}'})$ is stably trivial. Since $n$ is odd, the map $\G = S\pi_n(SO(n)) \hra \pi_n(SO(n+1))\lra \pi_n(SO(n+2))$ is a monomorphism (see \cite{Wall3}) and therefore the map $$ l^*: {{\rm Hom}(H_n(\DblF),~\G)} \lra {{\rm Hom}(H_n(\DblF),~\pi_n(SO))}$$ is a monomorphism too.
Hence $l^*(\chi({\tilde \psi}'))[dz]$ is the obstruction to triviality of the stable normal bundle $\nu(dz\times S^1;\DblF_{{\tilde\psi}'})$ and since this bundle is trivial we have $\chi({\tilde \psi}')[dz] =0$, as required.\\ 2)~ $n\equiv 3\pmod{8}$: Since $\partial N = M^{2n+1}\simeq F_{\psi}\cup_r (\partial F\times D^2)$ and $\partial(F\times D^2) = (\partial F \times D^2)\cup (F\times S^1)$, we can paste together the manifolds $N$ and $F\times D^2$ along the common part of the boundary $\partial F\times D^2$ (respecting orientations, of course) to obtain a manifold (after smoothing the corner) $$ W^{2n+2}:= N\bigcup_{\partial F\times D^2} (F\times D^2)~~~ {\rm with}~~~ \partial W= F_{\psi}\cup (F\times S^1)\simeq {\cal D}F_{\tp }$$ We use elementary obstruction theory to show that this $W$ is stably parallelizable. Fix a frame field of the stable tangent bundle of $N\subset W$. Obstructions to the extension of this frame field over the whole manifold lie in the groups $H^{q+1}(W,N;\pi_q(SO))\cong H^{q+1}(F,\partial F;\pi_q(SO))\cong H_{2n-q-1}(F;\pi_q(SO))$. If $q\ne n-1$ and $q\ne 2n-1$ then $H_{2n-q-1}(F;\pi_q(SO))\cong 0$ (since $F$ has the homotopy type of a wedge of $n$-spheres), while if $q=n-1$ or $q=2n-1$ then $\pi_q(SO)\cong 0$ because $n\equiv 3\pmod{8}$; so all obstructions lie in trivial groups. This implies that ${\cal D}F_{\tp}$ is stably parallelizable and the Pontryagin class $p_{\frac{n+1}{4}}({\cal D}F_{\tp})$ vanishes. Thus $\chi(\tp)=0$ (recall Lemma 2 of \cite{Kreck}) and hence $\chi_r(\psi)=0$. \end{proof} Now we can prove the following theorem of Stevens, including the cases $n=3,7$ (cf. \cite{Stev}, Theorem 7). \begin{cor} Let $M_k$ be the $k$-fold branched cyclic cover of a $(2n+1)$-manifold $M = F_{\vf}\cup_r (\partial F\times D^2)$ with simple open book decomposition, where $n$ is odd, $n\ne 1$.
Suppose ${\rm Var}(\vf^d)=0$. Then $M_k$ and $M_{k+d}$ are homeomorphic via an orientation preserving homeomorphism, while $M_k$ and $M_{d-k}$, $d>k$, are homeomorphic via an orientation reversing homeomorphism. \end{cor} \begin{proof} Since ${\rm Var}(\vf^d)=0$ and $M_d$ bounds a parallelizable manifold (see Lemma 5 of \cite{Stev}), we have $\chi_r(\vf^d)=0$ by the previous theorem. The exact sequence (\ref{kervar}) implies that $\vf^d$ is isotopic to a diffeomorphism which belongs to the image $\iota(\Theta_{2n+1})$ and therefore $F_{\vf^{d+k}}$ is diffeomorphic to $F_{\vf^k}\# \Sigma'$ (cf. Lemma 1 of \cite{Browd1}) for some $\Sigma'\in \Theta_{2n+1}$. In particular, this means that $F_{\vf^{d+k}}$ is homeomorphic (via some homeomorphism that preserves orientation) to $F_{\vf^{k}}$, and hence $M_{d+k}$ is homeomorphic to $M_k$. To see the orientation reversing case, notice that the mapping torus $F_g$ is diffeomorphic to $F_{g^{-1}}$ via an orientation reversing diffeomorphism induced, for instance, by the map $(x,t)\lmt (g(x),1-t)$ from $F\times I$ to $F\times I$. This diffeomorphism extends to an orientation reversing homeomorphism of the corresponding open books $M$ and $M_{-1}$. Hence in our situation $M_k=F_{\vf^{k}}\cup_r (\partial F\times D^2)$ is homeomorphic (orientation reversingly) to $F_{\vf^{-k}}\cup_r (\partial F \times D^2)$, which is homeomorphic (orientation preservingly) to $F_{\vf^{-k}}\#\Sigma'\cup_r (\partial F\times D^2)\simeq F_{\vf^{d-k}}\cup_r (\partial F\times D^2)=M_{d-k}$. \end{proof} \noindent \underline{Remark}: If one defines $M_k := F_{\vf^k}\cup_r (\partial F\times D^2)$ for any $k\in \int$, then the first statement that $M_k$ is homeomorphic to $M_{k+d}$ remains true, and the restriction $d>k$ in the second part can be omitted. To show diffeomorphism type periodicity we will use essentially the same argument plus the fact that the homotopy sphere $\Sigma'$ bounds a parallelizable manifold. We start with proving this fact.
Thus for $n\in\nat,~n\geq2$, we consider a diffeomorphism $h$ with $[h]\in\K(V) $ such that our simple open book $M^{2n+1}=F_h\cup_r (\partial F\times D^2)$ bounds a simply connected parallelizable manifold $N$ and $\chi_r(h)=0$. In particular, we can assume that $h\in {\rm Im}(\iota)$ is the identity except on a small closed disk ${\cal D}^{2n}\hra {\rm int}(F)$ embedded into the interior of $F$. \begin{lem} The natural inclusions $i_1:F\hra F_h$ and $i_2: F_h\hra M$ induce isomorphisms $i_{1*}:H_n(F) \to H_n(F_h)$ and $i_{2*}:H_n(F_h) \to H_n(M)$ respectively, and every $[z]\in H_n(M)$ can be represented by an embedded sphere $S^n\hra M$ with trivial normal bundle $\nu(S^n;M)$. In addition, $H_n(M)\cong H_{n+1}(M)$. \end{lem} \begin{proof} That $i_{1*}$ is an isomorphism follows immediately from the Wang exact sequence, and the other two isomorphisms follow from the exact sequence of Stevens: $$ 0\lra H_{n+1}(M) \lra H_n(F,\partial F)\stackrel{\rm Var(h)}{\lra} H_n(F)\lra H_n(M) \lra 0$$ which arises from the exact sequence of the pair $(M,F)$ (see Proposition 1 of \cite{Stev}). Since the normal bundle of any $S^n\hra M$ is in the stable range and $M$ bounds a parallelizable manifold, the bundle $\nu(S^n;M)$ must be trivial. \end{proof} Now we would like to kill $H_n(M)$ using surgery, and as a result obtain a homotopy sphere $\Sigma_h\in \Theta_{2n+1}$ (we again assume $n\geq 3$). We will show first that $\Sigma_h$ belongs to $bP_{2n+2}$, and second that $\iota(\Sigma_h) =[h]$. Our construction will follow Kreck's construction of the isomorphism $\sigma: {\rm ker}(\chi) \lra \Theta_{2n+1}$ (see \cite{Kreck}, proof of Proposition 3). For each generator $[z_i]\in H_n(F)$ we fix an embedding $\phi_i: S^n_i\times d^{n+1}_i\hra F\times (0,1)\hra M$ disjoint from ${\cal D}^{2n}\times (0,1)$.
Then we attach handles $D^{n+1}_i\times d^{n+1}_i$ to the product $M\times I$ along these embeddings into $M\times\{1\}$ to obtain a cobordism $W$ between $M=M\times\{0\}$ and the homotopy sphere $\Sigma_h$ which is the result of these $\phi_i$-surgeries on $M$. Furthermore, we can choose the embeddings $\phi_i$ compatible with the framing of $M$ that comes from the framing of $N$ (see Lemma 6.2 of \cite{KerMil}), and hence we get $W$ as a framed manifold. Taking the union of $N$ and $W$ along $M$ we obtain a parallelizable manifold with boundary $\Sigma_h$, and hence $\Sigma_h\in bP_{2n+2}$. In the next lemma we show that $\Sigma_h$ is well defined (it depends only on the isotopy class of $h$) and that $\iota(\Sigma_h)=[h]$, which implies $[h]\in \iota(bP_{2n+2})$ (cf. the properties of $W$ from \cite{Kreck}, pp. 655-656). \begin{lem}~ \begin{enumerate} \item The manifold $W$ is parallelizable, $n$-connected and $\partial W=M\sqcup -\Sigma_h$. \item The embedding ${\cal L}: M\times\{0\}\hra W$ induces an isomorphism\\ ${\cal L}_*: H_{n+1}(M) \lra H_{n+1}(W)$ and all elements of $H_{n+1}(W)$ can be represented by embedded spheres with trivial normal bundle. \item If $W'$ is another $n$-connected manifold that also satisfies property 2 and $\partial W'=M\sqcup -\Sigma$ for some $\Sigma\in\Theta_{2n+1}$, then $\Sigma \simeq \Sigma_h$. \item $\iota(\Sigma_h)=[h]$, where $\iota: \Theta_{2n+1}\hra \K(V) $. \end{enumerate} \end{lem} \begin{proof} The $n$-connectivity follows immediately from the $(n-1)$-connectivity of $M$ and the Mayer-Vietoris exact sequence of the union $$ W:=M\times [0,1] \bigcup (\sqcup_i D^{n+1}_i\times d^{n+1}_i) $$ Hence, by the Hurewicz theorem we can represent every $[z]\in H_{n+1}(W)$ by an embedded sphere $S^{n+1}\hra W$. The same exact sequence implies that the embedding ${\cal L}: M=M\times\{0\} \hra W$ induces an isomorphism between $H_{n+1}(M)$ and $H_{n+1}(W)$, and to finish part 2 we just need to show that $\nu(S^{n+1};W)$ is trivial.
To see this, notice that every $[z]\in H_{n+1}(M)\cong H_{n+1}(W)$ can be represented by an embedded $S^n\times S^1\hra M$ (see the lemma above) with trivial normal bundle in $M\times (0,1)$. Since we can take the diffeomorphism $h$ to be the identity on $\nu(S^n;F)$, it is not hard to see that there is an embedded $S^{n+1}\hra M$ which is cobordant to $S^n\times S^1$ in $M$ and hence framed cobordant in $M\times(0,1)\hra W$. This proves 2. Suppose now we have $W'$ as in part 3. Take the union of $W$ and $W'$ along $M$ to obtain an $n$-connected cobordism $\cal W$ between $\Sigma$ and $\Sigma_h$. We will show that we can make it into an $h$-cobordism. The Mayer-Vietoris exact sequence implies that $H_{n+1}({\cal W})\cong H_{n+1}(W)\oplus H_n(M)$, and since $H_{n+1}(W)\cong H_{n+1}(W,M)$ (Poincar\'e duality plus the universal coefficient theorem) and also $H_{n+1}(W,M)\cong H_n(M)$, we see that $H_{n+1}({\cal W})$ has the direct summand $H_{n+1}(W)$ with the properties: \\ i) ${\rm dim}(H_{n+1}(W))=\frac{1}{2}{\rm dim}(H_{n+1}({\cal W}))$;\\ ii) every homology class of $H_{n+1}(W)$ can be represented by an embedded sphere $S^{n+1}\hra W$ with trivial normal bundle;\\ iii) for all $[z_1],~[z_2]\in H_{n+1}(W)$ the intersection number $z_1\circ z_2$ vanishes (this follows from part 2 of this lemma). Therefore we can use surgery to kill $H_{n+1}({\cal W})$ and obtain an $h$-cobordism between $\Sigma$ and $\Sigma_h$. Part 4 follows from \cite{Browd1}, Lemma 1. Indeed, if $\Sigma'=D^{2n+1}_1\cup_h D^{2n+1}_2$ then by our definition $\iota(\Sigma')=[h]\in \K(V) $ and hence $M\simeq\partial(F\times D^2)\# \Sigma'$. Since $\partial(F\times D^2)$ is framed cobordant to the standard $(2n+1)$-sphere, we can use this cobordism (namely $F\times D^2$ minus an embedded disk $D^{2n+2}$) to produce an $n$-connected cobordism between $M$ and $\Sigma'$ that satisfies property 2. As we have just seen above, this means that $\Sigma'\simeq \Sigma_h$, i.e. $\iota(\Sigma_h) = [h]$.
\end{proof} Let us denote by $\sigma_k$ the signature of a parallelizable manifold $N_k$ with boundary $\partial N_k= M_k = F_{\vf^k}\cup_r (\partial F\times D^2)$, and by $\Sigma$ the generator of $bP_{2n+2}$. \begin{cor}(cf. \cite{Stev}, Proposition 8) Let $M^{2n+1}= F_{\vf}\cup_r (\partial F\times D^2)$ be a manifold with simple open book decomposition, where $n$ is odd, $n\ne 1$, and let $M_k$ be the $k$-fold branched cyclic cover of $M$. If ${\rm Var}(\vf^d)=0$ then $M_{k+d}$ is diffeomorphic to $(\frac{\sigma_d}{8}\cdot\Sigma)\# M_k$. \end{cor} \begin{proof} We have just seen above that $M_{k+d}\simeq \Sigma'\# M_k$ with $\Sigma'=m\cdot\Sigma\in bP_{2n+2}$ for some $m\in\nat$. Since $M_d=F_{\vf^d}\cup_r (\partial F\times D^2)\simeq \partial(F\times D^2)\# m\Sigma$, and $m\Sigma$ bounds a parallelizable manifold, say $W_m$, with signature $\sigma(W_m)=8m$, while $\partial(F\times D^2)$ bounds $F\times D^2$ (which is also parallelizable) with signature zero, the connected sum of $W_m$ and $F\times D^2$ along the boundary (cf.~\S 2~of \cite{KerMil}) will give us a parallelizable manifold $N_d := W_m\# (F\times D^2)$ with boundary $\partial N_d = M_d$ and signature $\sigma(N_d)=\sigma(W_m)+\sigma(F\times D^2)=8m$. Thus $m=\frac{\sigma(N_d)}{8}\equiv\frac{\sigma_d}{8} \pmod{{\rm order~of~}bP_{2n+2}}$ and the corollary follows. \end{proof} When $n$ is even, the periodicity of $M_k$ is more complicated. Consider the link of the singularity $z^2_0+z^2_1+\ldots +z^2_n=0$ with $n=2m$ and denote by $\Sigma$ the $(4m+1)$-dimensional Kervaire sphere and by $T$ the tangent $S^n$-sphere bundle to $S^{n+1}$.
Then $M_{k+8}$ is diffeomorphic to $M_k$ and the diffeomorphism types are listed in the table (see \cite{DK}, Proposition 6.1) \parskip=5mm \begin{tabular}{|c|c|c|c|c|c|} \hline $M_1\simeq M_7$ & $M_2$ & $M_3\simeq M_5$ & $M_4$ & $M_6$ & $M_8$ \\ \hline $S^{2n+1}$ & ~$T$~ & ~$\Sigma$~ & ~$(S^n\times S^{n+1})\#\Sigma$~ & ~ $T\#\Sigma$~ & ~ $S^n\times S^{n+1}$\\ \hline \end{tabular} \noindent The following result is due to Stevens (\cite{Stev}, Theorem 9). \begin{cor} Let $M_k$ be the branched cyclic covers of a $(2n+1)$-manifold $M$ with simple open book decomposition and suppose ${\rm Var}(\vf^d)=0$. Then $M_k$ and $M_{k+2d}$ are homeomorphic and $M_k$ and $M_{k+4d}$ are diffeomorphic. Moreover, if $n=2$ or $6$, then $M_k$ and $M_{k+d}$ are diffeomorphic. \end{cor} \begin{proof} When $n=2$ the mapping class group is trivial and $[\vf^d]=Id$. If $n=6$ then $G\cong 0$ and $bP_{14}\cong 0$ (see \cite{KerMil}, Lemma 7.2), which implies that $[\vf^d]=Id$. For the other even $n$ we know that the group $G$ is isomorphic either to $\zint_2$ or to $\zint_2\oplus \zint_2$ and hence $\chi_r(\vf^d)$ has order at most two. Therefore $[\vf^{2d}]\in \iota(bP_{4m+2})$, i.e. $M_k$ is homeomorphic to $M_{k+2d}$, and since the group $bP_{4m+2}$ is either trivial or $\zint_2$ (see \cite{KerMil}), $\vf^{4d}$ must be pseudo-isotopic to the identity. \end{proof} \noindent {\tt Example 2:} (The authors are indebted to the referee for suggesting this example.) Consider again the singularity $z^2_0+z^2_1+\ldots +z^2_n=0$ with $n=2m$. Assume in addition that $m\not\equiv 0 \pmod 4$ and that the Kervaire sphere $\Sigma \in bP_{4m+2}$ is exotic, e.g. when $4m+2\ne 2^l-2$ (see \cite{Browder}). Here the Milnor fiber $F$ is the tangent disc bundle to the sphere $S^{2m}$ and hence $\DblF\simeq S^{2m}\times S^{2m}$. It is also well known that the geometric monodromy $\vf$ of this singularity satisfies the properties $\vf_*=-Id$, ${\rm Var}(\vf^2)=0$ and ${\rm Var}(\vf)\ne 0$ (cf. \cite{Looijenga}, Chapter 3).
Since $M_0$ is not diffeomorphic to $M_2$ and $M_1$ is not diffeomorphic to $M_5$, $\chi_r([\vf^2])$ will be a generator of $\Rhom\cong\int_2$ and $[\vf^4]$ will be a generator of $\iota(bP_{4m+2})\cong\int_2$. Since $\Theta_{4m+1}\cong bP_{4m+2}\oplus {\rm Coker}(J_{4m+1})$ (cf. \cite{Brumfiel}) we see that in this case $\K(V) \cong\int_4\oplus {\rm Coker}(J_{4m+1})$ and the exact sequence (\ref{kervar}) does not split. \subsection{Periodicity in dimension 3} It is known that if the dimension of the open book $M^{2n+1}$ is three, then there is homological periodicity (see references in \cite{DK}) but no topological periodicity. For the sake of completeness we illustrate this with the following classical example (cf. \cite{Rolfsen}, Chapter 10.D). Let $f(z_0,z_1)=z_0^2+z_1^3$ be the complex polynomial which defines the curve $V=\{f(z_0,z_1)=0\}$ in $\cn^2$ with a cusp at the origin. The corresponding Milnor fibration has monodromy $\vf$ of order six, the boundary of the fiber $F$ is the trefoil knot $K$, and ${\rm Var}(\vf^6)=0$. This fibration gives the open book structure to the standard 3-sphere $S^3=M_1=F_{\vf}\cup (K\times D^2)$. We show that $M_7\ne M_1$ and $M_6\ne M_0=(F\times S^1)\cup (K\times D^2)$. Let us first compare $\pi_1(M_0)$ with $\pi_1(M_6)$. The Seifert-Van Kampen theorem implies that $\pi_1(M_0)\cong \pi_1(F)$, which is the free group on two generators. As for $M_6$, using the Reidemeister-Schreier method one can easily find a presentation for $\pi_1(F_{\vf^6})$ and then show that $\pi_1(M_6)$ admits the following presentation: $$ \left\langle Z_1,Z_2,\ldots,Z_6~|~Z_1=Z_6Z_2,~\ldots,~Z_j=Z_{j-1}Z_{j+1},~ \ldots,~ Z_6=Z_5Z_1\right\rangle $$ It takes a bit more effort to show that this group is isomorphic to the group of upper unitriangular $3\times 3$ matrices with integer coefficients (cf.
\cite{Mil3}, \S8) $$ {\cal H}\cong \left\{ \left. \begin{pmatrix} 1 & a & c\\ 0 & 1 & b\\ 0 & 0 & 1 \end{pmatrix} \right|~a,b,c\in\int \right\} $$ Suppose now that $M_7$ were homeomorphic to the sphere. Then we could take the union of $N_7$ and $D^4$ (recall that $N_7$ is the cyclic covering of $D^4$ branched along the fiber $(F,K)\hra (D^4,S^3)$ where $F\cap S^3=K$): $$ W^4:=N_7\bigcup_{S^3} D^4 $$ Since $N_7$ is parallelizable (see \cite{Cappell}, Theorem 5, or \cite{Kauf-book}, Chapter XII), $W^4$ would be a closed spin manifold. Hence its signature $\sigma(W^4)=\sigma(N_7)$ would have to be a multiple of 16 by the theorem of Rokhlin \cite{Rokh}. But $\sigma(N_7)=-8$, as one can compute using the Seifert pairing on $H_1(F)$ (cf. \cite{Cappell}; \cite{Kauf-book}), and hence $M_7\ne M_1$. Actually much more is known. Milnor in \cite{Mil3} proved that $\pi_1(M_r)$ is isomorphic to the commutator subgroup $[\Gamma,\Gamma]$ of the centrally extended triangle group $\Gamma$, which has the presentation $$ \Gamma\cong \langle \gamma_1,~\gamma_2,~\gamma_3~|~\gamma_1^2=\gamma_2^3= \gamma_3^r=\gamma_1\cdot \gamma_2\cdot \gamma_3 \rangle $$ This group $\Gamma$ is infinite when $r\geq 6$ (see \cite{Mil3}, \S2,3) and hence $[\Gamma,\Gamma]$, which has index $r-6$, is infinite too. In particular, none of the cyclic coverings of $S^3$ branched along the trefoil knot can be simply connected. \parskip=1mm \noindent University of Illinois at Chicago \\ Department of Mathematics, Statistics and Computer Science\\ 851 S. Morgan St., Chicago, IL 60607 \noindent {\small E-mail address:[email protected]} ~ \noindent International University Bremen\\ School of Engineering \& Science\\ P.O. Box 750 561\\ 28725 Bremen, Germany \noindent {\small E-mail address:[email protected]} \end{document}
\begin{document} \title{Range of applicability of the Hu-Paz-Zhang master equation} \author{G. Homa} \email{[email protected]} \affiliation{Department of Physics of Complex Systems, E\"{o}tv\"{o}s Lor\'and University, ELTE, P\'azm\'any P\'eter s\'et\'any 1/A, H-1117 Budapest, Hungary} \author{A. Csord\'as} \email{[email protected]} \affiliation{Department of Physics of Complex Systems, E\"{o}tv\"{o}s Lor\'and University, ELTE, P\'azm\'any P\'eter s\'et\'any 1/A, H-1117 Budapest, Hungary} \author{M. A. Csirik} \email{[email protected]} \affiliation{Hylleraas Centre for Quantum Molecular Sciences, Department of Chemistry, University of Oslo, P.O. Box 1033 Blindern, N-0315 Oslo, Norway} \affiliation{Institute for Solid State Physics and Optics, Wigner Research Centre, Hungarian Academy of Sciences, P.O. Box 49, H-1525 Budapest, Hungary} \author{J. Z. Bern\'ad} \email{[email protected]} \affiliation{Department of Physics, University of Malta, Msida MSD 2080, Malta} \affiliation{Institut f\"{u}r Angewandte Physik, Technische Universit\"{a}t Darmstadt, D-64289 Darmstadt, Germany} \date{\today} \begin{abstract} We investigate a case of the Hu-Paz-Zhang master equation of the Caldeira-Leggett model that is not of Lindblad form, obtained in the weak-coupling limit up to second-order perturbation theory. In our study, we use Gaussian initial states in order to employ a necessary and sufficient condition, which can expose positivity violations of the density operator during the time evolution. We demonstrate that the time evolution generated by the non-Markovian master equation becomes problematic when the stationary solution is not a positive operator, i.e., has no physical interpretation. We also show that solutions always remain physical for short times of evolution. Moreover, we identify a strong anomalous behavior, in which the trace of the solution diverges.
We also provide results for the corresponding Markovian master equation and show that positivity violations occur for various types of initial conditions even when the stationary solution is a positive operator. Based on our numerical results, we conclude that this non-Markovian master equation is superior to the corresponding Markovian one. \end{abstract} \maketitle \section{Introduction} \label{I} A density operator completely describes the state of a quantum mechanical system and is defined as a positive trace class operator of trace one \cite{Neumann}. A quantum system under study may be subject to interactions with its environment, in which case it is colloquially referred to as an open quantum system. It is expected that the whole system evolves unitarily and that, by tracing out the environment's degrees of freedom, one obtains a positive trace-preserving map acting on the states of the open system \cite{Davies}. If one further assumes an initially uncorrelated joint state, then a stronger kind of positivity, called complete positivity, is obtained \cite{Kraus}. Some particular aspects of this assumption have been discussed in Refs. \cite{Pechukas,Hakim,Romero,Buzek,Salgado}. In physical applications, these maps are subject to further approximations, which lead to either Markovian or non-Markovian master equations \cite{book1}. However, the positivity of the approximation-free map may be violated by these approximations, implying that complete positivity fails as well. A known case is the Caldeira-Leggett model \cite{CL} of quantum Brownian motion \cite{Grabert,Weiss}, where different approaches may result in a master equation that may not preserve the positivity of the density operator for short times \cite{Ambegaokar,Hu_Paz,Diosi,Gnutzmann}. In the model of Unruh and Zurek \cite{Unruh} (where the environment is modeled differently from the Caldeira-Leggett model), issues with rapid decoherence for short time evolutions have also been found.
The well-known master equation of Caldeira and Leggett has been extended by Hu, Paz, and Zhang (HPZ), who obtained an exact non-Markovian master equation \cite{Hu_Paz}, \begin{eqnarray} &&i \hbar \frac{\partial \hat{\rho}}{ \partial t}=\left [\hat{H}_0,\hat{\rho}\right]-i D_{pp}(t)[\hat{x},[\hat{x},\hat{\rho}]] \nonumber \\ && \qquad + \lambda(t) [\hat{x},\{\hat{p},\hat{\rho}\}]+2 i D_{px}(t)[\hat{x},[\hat{p},\hat{\rho}]], \nonumber \end{eqnarray} where $\hat{H}_0$ is the Hamiltonian of the open quantum system, and $D_{pp}(t)$, $\lambda(t)$ and $D_{px}(t)$ are time-dependent coefficients for which one has explicit expressions (see \cite{Hu_Paz} or \cite{yuhal}). A particular case of this master equation arises when the interaction between the system and the environment is weak; the explicit expressions of the time-dependent coefficients are then given by Eq. \eqref{diffcoeff}. This case covers both the Caldeira-Leggett master equation \cite{CL}, valid at high temperatures, and its extension to lower temperatures \cite{CLLT}. Despite the weak-coupling approximation, the master equation has found applications even decades later in several areas of quantum mechanics, such as quantum optomechanics \cite{Eisert} and quantum estimation theory \cite{Paris}. These works consider the perturbative approach in the weak coupling up to second order, which is also the first non-vanishing order in the perturbation series \cite{book1}. In fact, this version of the HPZ master equation has drawn much attention in the last decade and it is therefore worthwhile to investigate, in detail, the circumstances under which the time evolution is able to preserve the positivity of the density operator. The main parameters of the Caldeira-Leggett model are the temperature of the thermal bath and the spectral density of the environment. In phenomenological modeling, one expects the spectral density to go to zero for very high frequencies.
A special case is when the spectral density is proportional to the frequency for small frequencies, i.e., the Ohmic spectral density, which gives rise to a frequency-independent damping rate. Other spectral densities have also been subject to investigations; see, e.g., \cite{Hu_Paz,Fleming,Garg}. In this paper, we choose the Ohmic spectral density with a Lorentz-Drude cutoff function. Furthermore, we consider the open quantum system to be a quantum harmonic oscillator. Recently, questions related to the positivity preservation of several Markovian master equations were investigated with the help of purities of density operators \cite{HBL}. The authors exploited the fact that the purity indicates a positivity violation when it takes values larger than one, and they were able to identify cases where positivity violations occur. Unfortunately, the purity yields a necessary but not sufficient condition for the positivity of a self-adjoint operator with trace one. In this paper, we consider a non-Markovian master equation together with its Markovian counterpart, which is obtained from the non-Markovian one by taking the $t\rightarrow \infty$ limits of the coefficients. Both the Markovian and the non-Markovian master equation can be formally solved \cite{Ford,Fleming,HBL} for all possible initial conditions. However, the obtained solutions in the phase-space representation cannot determine, in general, the positivity of the associated Weyl operators \cite{Kastler,Nicola}, because one has to verify either an uncountable or a countable set of inequalities. In the special case of Gaussian density operators, all the eigenvalues can be determined analytically \cite{Joos,BCSH}, and furthermore their structure implies that the Gaussian solution is positive if and only if the purity is between zero and one. In particular, results in \cite{Fleming} imply that these types of master equations preserve the Gaussian form of any initially Gaussian state for all times.
Therefore, in the case of a Gaussian ansatz, we are able to use a necessary and sufficient condition to monitor the positivity of the evolving density operator. Furthermore, both master equations can be transformed into a system of ordinary differential equations. The paper is organized as follows. In Sec.~\ref{II} we introduce the non-Markovian master equation and derive the system of linear differential equations for the coefficients of the Gaussian ansatz. In Sec.~\ref{III}, we study the positivity of the stationary solution. In parameter space we identify regions where positivity violations can occur. Concrete examples of these violations are given in Sec.~\ref{IV}. Here, we concentrate on the differences between the Markovian and non-Markovian time evolutions of initial Gaussian density operators. Section~\ref{V} summarizes our main results. Technical details are provided in three appendices. \section{Non-Markovian master equation with Gaussian initial conditions} \label{II} In this section we discuss basic features of the HPZ master equation \cite{HR,Hu_Paz,Ford} by focusing on terms up to the second-order expansion in the weak-coupling strength \cite{Breuer-Kappler}. The non-Markovian master equation for a quantum harmonic oscillator with physically observable frequency $\omega_p(t)$ and mass $m$ reads \begin{eqnarray} &&i \hbar \frac{\partial \hat{\rho}}{ \partial t}=\left [\frac{\hat{p}^2}{2m}+\frac{m \omega^2_p(t) \hat{x}^2}{2},\hat{\rho}\right]-i D_{pp}(t)[\hat{x},[\hat{x},\hat{\rho}]] \nonumber \\ && \qquad + \lambda(t) [\hat{x},\{\hat{p},\hat{\rho}\}]+2 i D_{px}(t)[\hat{x},[\hat{p},\hat{\rho}]], \label{NMEQ} \end{eqnarray} where $[\cdot,\cdot]$ stands for the commutator and $\{\cdot,\cdot\}$ for the anticommutator.
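For orientation, we note that the right-hand side of \eqref{NMEQ} is at most quadratic in $\hat{x}$ and $\hat{p}$, so the second moments close among themselves. Taking traces of \eqref{NMEQ} against $\hat{x}^2$, $\{\hat{x},\hat{p}\}$ and $\hat{p}^2$ gives the linear system (this is our own sketch, not quoted from the appendices, and sign conventions should be checked against Appendix \ref{Appuseful}):
\begin{eqnarray}
\frac{d\langle \hat{x}^2\rangle}{dt}&=&\frac{1}{m}\langle\{\hat{x},\hat{p}\}\rangle, \nonumber \\
\frac{d\langle\{\hat{x},\hat{p}\}\rangle}{dt}&=&\frac{2}{m}\langle \hat{p}^2\rangle-2m\omega^2_p(t)\langle \hat{x}^2\rangle-2\lambda(t)\langle\{\hat{x},\hat{p}\}\rangle+4\hbar D_{px}(t), \nonumber \\
\frac{d\langle \hat{p}^2\rangle}{dt}&=&-m\omega^2_p(t)\langle\{\hat{x},\hat{p}\}\rangle-4\lambda(t)\langle \hat{p}^2\rangle+2\hbar D_{pp}(t). \nonumber
\end{eqnarray}
In particular, only $D_{pp}(t)$ and $D_{px}(t)$ enter as inhomogeneities, which is why a Gaussian ansatz reduces the master equation to ordinary differential equations for its time-dependent coefficients.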
In the weak-coupling limit the coefficients in the second-order expansion entering the master equation read \begin{eqnarray}\label{diffcoeff} \omega^2_p(t)&=&\omega^2_b-\frac{2}{m}\int_{0}^t{ds D(s)\cos(\omega_0 s)},\nonumber \\ \lambda(t)&=&\frac{1}{m \omega_0}\int_{0}^t{ds D(s)\sin(\omega_0 s)},\nonumber \\ D_{px}(t)&=&\frac{1}{2 m \omega_0}\int_{0}^t{ds D_1(s)\sin(\omega_0 s)},\nonumber \\ D_{pp}(t)&=&\int_{0}^t{ds D_1(s)\cos(\omega_0 s)}, \end{eqnarray} where $\omega_b$ contains the environment-induced frequency shift of the original oscillator frequency $\omega_0$. We have introduced the following correlation functions: \begin{eqnarray} D(s)&=&\int_0^{\infty}{d \omega J(\omega) \sin(\omega s)}, \label{eq:D(s)_def} \\ D_1(s)&=&\int_0^{\infty}{d \omega J(\omega) \coth \left(\frac{\hbar \omega}{2 k_B T}\right) \cos(\omega s)}, \label{eq:D1(s)_def} \end{eqnarray} where $T$ is the temperature of the thermal bath. Making use of an Ohmic spectral density with a Lorentz-Drude cutoff function with high-frequency cutoff $\Omega$, \begin{equation} J(\omega)=\frac{2 m \gamma }{\pi} \omega \frac{\Omega^2}{\Omega^2+\omega^2}, \nonumber \end{equation} where $\gamma$ is the frequency-independent damping constant, the bath correlation $D(s)$ can be determined analytically as \begin{equation} D(s)= m \gamma \Omega^2 \exp (-\Omega s ), \quad s \ge 0. \end{equation} For the other correlation function $D_1(s)$ see Eq.~(\ref{eq:d1sumshort}) in Appendix \ref{Appuseful}.
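Since $D(s)$ is a pure exponential, the time integrals in \eqref{diffcoeff} reduce to elementary integrals of $e^{-\Omega s}$ against $\sin(\omega_0 s)$ and $\cos(\omega_0 s)$. As a minimal numerical sanity check (ours; the parameter values below are arbitrary), the sine integral behind $\lambda(t)$ can be compared with its closed form:

```python
# Check int_0^t exp(-W*s)*sin(w0*s) ds against the closed form
# [w0 - exp(-W*t)*(w0*cos(w0*t) + W*sin(w0*t))] / (W**2 + w0**2),
# which is the integral behind lambda(t).  Parameter values are arbitrary.
import math

W, w0, t = 3.0, 1.0, 2.0           # cutoff Omega, frequency omega_0, time

# composite Simpson's rule on [0, t]
n = 2000                           # even number of subintervals
hstep = t / n
s_vals = [k * hstep for k in range(n + 1)]
f = [math.exp(-W * s) * math.sin(w0 * s) for s in s_vals]
simpson = (hstep / 3) * (f[0] + f[-1]
                         + 4 * sum(f[1:-1:2]) + 2 * sum(f[2:-1:2]))

closed = (w0 - math.exp(-W * t) * (w0 * math.cos(w0 * t)
                                   + W * math.sin(w0 * t))) / (W**2 + w0**2)

assert abs(simpson - closed) < 1e-10
```

The same antiderivative, with sine and cosine exchanged, yields the closed form of $\omega^2_p(t)$ quoted below.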
Furthermore, for $t>0$, \begin{eqnarray} \omega^2_p(t)&=&\omega^2_0+2 \gamma \Omega-\frac{2}{m}\int_{0}^t{ds D(s)\cos(\omega_0 s)} =\omega^2_0+2 \gamma \Omega \nonumber \\ &-&\frac{2 \gamma \Omega^2 }{\Omega^2+\omega^2_0} e^{-\Omega t } \left[\Omega e^{\Omega t}-\Omega \cos \left(\omega_0 t\right)+\omega_0 \sin \left(\omega _0 t\right)\right], \nonumber \end{eqnarray} where for $\Omega t \gg 1$, $\omega_p(t)$ is approximately equal to $\omega_0$, and \begin{eqnarray} &&\lambda(t)= \nonumber \\ &&=\frac{\gamma}{\omega_0}\frac{\Omega^2 }{\Omega^2+\omega^2_0} e^{-\Omega t } \left[\omega_0 e^{\Omega t}-\omega_0 \cos \left(\omega_0 t\right)-\Omega \sin \left(\omega _0 t\right)\right]. \nonumber \end{eqnarray} Closed formulas for $D_{px}(t)$ and $D_{pp}(t)$ are given in Appendix \ref{Appuseful}. It is important to note that in the high-temperature limit $k_B T \gg \hbar \Omega \gg \hbar \omega_0$, we have $\omega_p(t \to \infty)= \omega_0$, $D_{px}(t \to \infty)=\gamma k_B T/(\hbar \Omega)$, $D_{pp}(t \to \infty)= 2 m \gamma k_B T$, and $\lambda(t \to \infty)=\gamma$, which yields exactly the Caldeira-Leggett master equation, because the term $ \gamma k_B T/(\hbar \Omega) [\hat{x},[\hat{p},\hat{\rho}]]$ is very small compared to the other two terms. Furthermore, these coefficients also cover an extended master equation of Caldeira {\it et al.} \cite{CLLT} for lower temperatures by taking only $\Omega \gg \omega_0$ in \eqref{diffcoeff}, which results in their finding $D_{pp}(t \to \infty)=m \gamma \hbar \omega_0 \coth\left[\hbar \omega_0/(2 k_B T)\right]$. However, in this particular case of the HPZ master equation the weak-damping assumption $\omega_0 \gg \gamma$ is not required. Now, we rewrite Eq.
\eqref{NMEQ} in the position representation \begin{eqnarray} &&i \hbar \frac{\partial}{\partial t} \rho(x,y,t)=\Big[ \frac{\hbar^2 }{2m}\left(\frac{\partial^2}{\partial y^2}- \frac{\partial^2}{\partial x^2} \right) +\frac{m \omega^2_p(t)}{2} \left(x^2-y^2\right) \nonumber \\ && \qquad -i D_{pp}(t) (x-y)^2 -i \hbar\lambda(t) (x-y) \left(\frac{\partial}{\partial x}- \frac{\partial}{\partial y} \right) \nonumber \\ && \qquad +2\hbar D_{px}(t) (x-y) \left(\frac{\partial}{\partial x}+ \frac{\partial}{\partial y} \right) \Big] \rho(x,y,t). \label{CROME} \end{eqnarray} Naively, the non-Markovian master equation starts at $t=0$ as a von Neumann equation, because all the time-dependent coefficients in \eqref{diffcoeff} are zero for $t=0$, except for $\omega_p(t)$. This would imply that positivity violations never occur around $t=0$. We prove this fact rigorously for an arbitrary Gaussian initial state in Appendices \ref{sec:dpp_app} and \ref{sec:initial_jolt}. For longer times, however, positivity is not guaranteed. Another property of \eqref{CROME} is that a Gaussian initial state remains Gaussian during the whole evolution. In \cite{Fleming}, the time evolution of a Wigner function [see Eq. (78) of their paper] starting from an arbitrary initial condition is given. If this initial Wigner function is Gaussian, then this result shows that at an arbitrary time $t>0$, the solution is also a Gaussian with time-dependent coefficients in the exponent. The Wigner function and $\rho(x,y,t)$ are connected by the Wigner-Weyl transformation, which maps Gaussian functions to Gaussian functions. Consequently, if we choose $\rho(x,y,t=0)$ to be Gaussian, it will be Gaussian at later times too, but with time-dependent coefficients.
More concretely, we consider the following Gaussian in the position representation: \begin{eqnarray} &&\rho(x,y,t)=\exp\{-A (t) \left( x-y \right)^2-iB(t) \left( x^2-y^2 \right) \nonumber \\ && ~ -C(t) \left( x+y \right) ^{2} -iD(t) (x-y)-E(t)(x+y) -N(t)\}, \label{xyform} \nonumber \\ \end{eqnarray} where the time-dependent parameters $A$, $B$, $C$, $D$, $E$, and $N$ are real because $\hat{\rho}$ is self-adjoint. Assuming positive $A(t)$ and $C(t)$, the eigenvalue problem in the position representation for a fixed $t$, \begin{equation} \int_{-\infty}^\infty \rho(x,y) \phi_n(y)\,dy=\lambda_n \phi_n(x) \end{equation} has been considered in detail in Ref. \cite{BCSH}. The spectrum $\{\lambda_n\}_{n \in \mathbb{N}_0}$ of \eqref{xyform} depends only on $A$ and $C$ for all $t\geqslant 0$: \begin{eqnarray} \lambda_n&=&\lambda_0 \lambda^n, \nonumber \\ \lambda_0&=&\frac{2\sqrt{C}}{\sqrt{A}+\sqrt{C}},\quad \lambda=\frac{\sqrt{A}-\sqrt{C}}{\sqrt{A}+\sqrt{C}}. \nonumber \end{eqnarray} If $0<A<C$, then $\lambda<0$ and the self-adjoint Gaussian operator fails to be positive. Clearly, all eigenvalues are in the interval $[0,1]$ if and only if \begin{equation} A \geq C \geq 0. \label{good_spectrum} \end{equation} If Eq.~\eqref{good_spectrum} fails at a given time $t$, then the Gaussian function $\rho(x,y,t)$ has no physical interpretation; thus Eq.~\eqref{good_spectrum} is a necessary and sufficient condition for physical behavior during the time evolution. We are going to test its validity by investigating $A/C$. Note that the purity is given by $\textrm{Tr}\,\hat{\rho}^2=\sqrt{C/A}$. The time-dependent coefficients $A$, $B$, $C$, $D$, and $E$ obey a system of nonlinear nonautonomous differential equations.
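The spectral formulas above are easy to check numerically. The following standalone sketch (with illustrative values of $A$ and $C$, normalized so that $\lambda_0$ is the largest eigenvalue) verifies that the eigenvalues sum to $\mathrm{Tr}\,\hat{\rho}=1$ and that $\sum_n \lambda_n^2=\sqrt{C/A}$ reproduces the purity:

```python
import math

A, C = 2.0, 0.5                      # illustrative values with A >= C > 0 (physical case)
sA, sC = math.sqrt(A), math.sqrt(C)
lam0 = 2 * sC / (sA + sC)            # largest eigenvalue lambda_0
lam = (sA - sC) / (sA + sC)          # ratio of the geometric spectrum lambda_n = lambda_0 lam^n

trace = sum(lam0 * lam**n for n in range(2000))          # Tr rho = sum of eigenvalues
purity = sum((lam0 * lam**n)**2 for n in range(2000))    # Tr rho^2

assert abs(trace - 1.0) < 1e-12                 # normalization holds for any A, C > 0
assert abs(purity - math.sqrt(C / A)) < 1e-12   # purity = sqrt(C/A)
```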
However, using the transformation \begin{equation} \rho(k,\Delta,t)=\int_{-\infty}^\infty dx \, e^{ikx} \rho\Big( x+\frac{\Delta}{2},x-\frac{\Delta}{2},t\Big), \label{transform} \end{equation} given in \cite{Unruh,BCSH}, we obtain the equation of motion for $\rho(k,\Delta,t)$, \begin{eqnarray} \frac{\partial}{\partial t} \rho(k,\Delta,t)&=& \biggl(\frac{\hbar k}{m} \frac{\partial}{\partial \Delta} -\frac{m \omega_p^2(t)}{\hbar} \Delta \frac{\partial}{\partial k}-\frac{D_{pp}(t)}{\hbar} \Delta^2 \biggr. \nonumber\\ && \biggl.-2\lambda(t) \Delta \frac{\partial}{\partial \Delta}-2D_{px}(t)k \Delta \biggr) \rho(k,\Delta,t). \nonumber \end{eqnarray} Note that the above equation of motion contains only first-order derivatives and therefore it is easier to construct its solutions. In this representation, the Gaussian form of \eqref{xyform} is also preserved and reads \begin{eqnarray} \rho(k,\Delta,t)&=&\exp\bigl\{-c_1(t)k^2-c_2(t)k \Delta -c_3(t) \Delta^2 \bigr.\nonumber\\ && \bigl.\quad \quad-ic_4(t) k-ic_5(t) \Delta-c_6(t)\bigr\}, \label{Ga} \end{eqnarray} where the time-dependent coefficients $c_1,c_2,c_3,c_4,c_5,$ and $c_6$ are real and obey the following system of \emph{linear} differential equations: \begin{eqnarray} \dot{c}_1&=&\frac{\hbar c_2}{m}, \quad \dot{c}_2=2D_{px}(t)+\frac{2\hbar c_3}{m}-2 \frac{m \omega_p^2(t)}{\hbar} c_1-2\lambda(t) c_2, \nonumber\\ \dot{c}_3&=&\frac{D_{pp}(t)}{\hbar}-\frac{m \omega_p^2(t)}{\hbar} c_2-4\lambda(t) c_3, \quad \dot{c}_4=\frac{\hbar c_5}{m}, \nonumber\\ \dot{c}_5&=&-\frac{m \omega_p^2(t)}{\hbar} c_4-2\lambda(t) c_5, \quad \dot{c}_6=0. \label{difftosolve} \end{eqnarray} The first three and the last three equations decouple. 
The first three can be written compactly as follows: \begin{equation} \dot{\mathbf{c}}(t)=\mathbf{M}(t)\mathbf{c}(t)+\mathbf{v}(t), \label{eq:main_diffeq} \end{equation} where $\mathbf{c}^T(t)=(c_1,c_2,c_3)$ (the superscript $T$ denotes the transposition), \begin{eqnarray} \mathbf{M}(t)=\begin{pmatrix} 0 & \frac{\hbar}{m} & 0 \\ -2 \frac{m \omega_p^2(t)}{\hbar} & -2\lambda(t) & \frac{2\hbar}{m} \\ 0 & - \frac{m \omega_p^2(t)}{\hbar} & -4\lambda(t) \end{pmatrix}, \label{eq:m(t)_matrix} \end{eqnarray} and \begin{equation} \mathbf{v}(t)=\begin{pmatrix} 0 \\ 2 D_{px}(t) \\ D_{pp}(t)/\hbar \end{pmatrix}.\nonumber \end{equation} The coefficients $A$, $B$, and $C$ are related to $\mathbf{c}$ through the transformation \eqref{transform} as \begin{equation} A= c_3-\frac{c^2_2}{4c_1}, \hspace{0.5em} B=-\frac{c_2}{4c_1}, \hspace{0.5em} C=\frac{1}{16 c_1}. \label{eq:A_and_C_for_cs} \end{equation} We can already see the advantage of the new phase-space representation $\rho(k, \Delta)$: the linear system \eqref{eq:main_diffeq} is well suited for our subsequent investigation of the ratio $A/C$. However, the solution of \eqref{eq:main_diffeq} is still not simple because the matrices $\mathbf{M}(t)$ and $\mathbf{M}(t')$ do not commute at different times $t \neq t'$ and the vector $\mathbf{v}(t)$ is also time-dependent. A formal solution with the help of a time-ordered exponential can be given, but it is not helpful for our purposes. Therefore, we are going to focus on the numerical solutions of \eqref{eq:main_diffeq} and to carry out a brief analysis of the stationary state. \section{A brief analytical study of the stationary state} \label{III} In this section, we investigate the positivity of the stationary state.
After a long time, a Markovian limit is obtained, which yields \begin{eqnarray}\label{diffcoeffM} \omega^2_p(t \to \infty)&=&\left(\omega^{(M)}_p\right)^2, \quad \lambda(t\to\infty)=\lambda^{(M)},\nonumber \\ D_{px}(t\to \infty)&=&D^{(M)}_{px},\quad D_{pp}(t\to \infty)=D^{(M)}_{pp}, \end{eqnarray} where the details about the Markovian values (denoted by the superscript $M$) are given in Appendix \ref{Appuseful}. Thus, $\mathbf{M}(t)$ and $\mathbf{v}(t)$ tend to constants $\mathbf{M}^{(M)}$ and $\mathbf{v}^{(M)}$. The stationary solution of $\mathbf{c}(t)$ can be expressed as \begin{equation} \mathbf{c}^{(M)}=-[\mathbf{M}^{(M)}]^{-1}\mathbf{v}^{(M)}. \nonumber \end{equation} The approach to the stationary state is governed by the three eigenvalues of $\mathbf{M}$, which are $-2 \lambda(t)$ and $ -2\left(\lambda(t)\pm\sqrt{\lambda^2(t)-\omega_p^2(t)}\right) $. For $t>0$ the real parts of all three eigenvalues are negative, and thus $\mathbf{M}(t)$ is contractive, which ensures that starting from arbitrary initial conditions $\mathbf{c}(0)$, the trajectory $\mathbf{c}(t)$ tends to its Markovian limit. In the asymptotic region, $\lambda(t)$ and $\omega_p(t)$ must be replaced by their respective Markovian values. \begin{figure} \caption{Parameter space plot of $k_B T/(\hbar\omega_0)$ vs $\gamma/\omega_0$ at fixed $\Omega/\omega_0=20$. The solid thick line shows the critical line $\gamma=\gamma_\mathrm{crit}$.} \label{density} \end{figure} In the asymptotic regime, where all the time-dependent coefficients of the equation of motion have already reached their stationary values, the test $A/C \geq 1$ can be written as \begin{equation} \frac{A^{(M)}}{C^{(M)}}=\frac{\left(D_{pp}^{(M)}\right)^2+4 m \lambda^{(M)} D_{pp}^{(M)} D_{px}^{(M)}}{m^2\left(\lambda^{(M)}\right)^2 \left(\omega_p^{(M)}\right)^2}.
\end{equation} On the critical line, $A^{(M)}/C^{(M)}=1$, the damping factor $\gamma$ can be expressed as \begin{equation} \gamma =\gamma_\mathrm{crit}(\Omega,k_B T,\omega_0)=\frac{\Omega^2+\omega_0^2}{\Omega}\cdot \frac{\coth^2 \left(\frac{\hbar\omega_0}{2k_BT}\right)-1}{Z(\Omega,k_BT,\omega_0)}, \label{eq:gamma_crit_line} \end{equation} where \begin{eqnarray} &&Z(\Omega,k_BT,\omega_0)= 2-4\frac{k_BT}{\hbar\omega_0}\coth\frac{\hbar\omega_0}{2k_BT}\biggl[-1 +\frac{\hbar\Omega}{2\pi k_BT} \times \biggr.\nonumber \\ &&\biggl. \times\biggl( \!\! \Psi\left( \frac{i\hbar\omega_0}{2\pi k_B T} \right) +\Psi\left( \frac{-i\hbar\omega_0}{2\pi k_B T} \right)-2\Psi\left( \frac{\hbar\Omega}{2\pi k_B T}\right) \!\!\biggr)\!\! \biggr] \label{eq:full_Z} \end{eqnarray} and $\Psi$ is the digamma function \cite{Stegun}. The denominator $Z(\Omega,k_BT,\omega_0)$ has a zero as $k_BT$ is varied, and thus there exists a certain temperature $\tilde{T}$ at which the damping factor $\gamma$ tends to infinity on the critical line; see Fig.~\ref{density}. Clearly, above $\tilde{T}$, the stationary solution is a density operator for any damping factor $\gamma$; see region III in Fig.~\ref{density}. The stationary solution is not a density operator in region I, i.e., for $T<\tilde{T}$ and $\gamma> \gamma_\mathrm{crit}$. In this parameter region, for any initial condition the time evolution of $\hat{\rho}(t)$ eventually violates the positivity of the density operator. In regions II and III of Fig.~\ref{density} the asymptotic state is guaranteed to be physically allowed, but this does not guarantee that the full time evolution is physical. We can also observe that very weak damping $\gamma \ll \omega_0$ allows us to choose the temperature $T$ arbitrarily. This is in accordance with the result in Ref. \cite{CLLT}.
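The stationary solution and the spectrum of $\mathbf{M}$ are easy to verify numerically. In units $\hbar=m=1$, Eq.~\eqref{difftosolve} gives $c_2^{(M)}=0$, $c_3^{(M)}=D_{pp}^{(M)}/(4\lambda^{(M)})$ and $c_1^{(M)}=(c_3^{(M)}+D_{px}^{(M)})/(\omega_p^{(M)})^2$. A minimal standalone sketch (with illustrative Markovian coefficient values) checks that this vector solves $\mathbf{M}\mathbf{c}+\mathbf{v}=0$ and that $-2\lambda$ and $-2(\lambda\pm\sqrt{\lambda^2-\omega_p^2})$ are roots of the characteristic polynomial of $\mathbf{M}$:

```python
import cmath

lam, w, Dpx, Dpp = 0.3, 1.0, 0.2, 0.8   # illustrative Markovian values; hbar = m = 1

# stationary solution of M c + v = 0 (c2 = 0 follows from the first row)
c3 = Dpp / (4 * lam)
c1 = (c3 + Dpx) / w**2
c = (c1, 0.0, c3)

M = ((0.0, 1.0, 0.0),
     (-2 * w**2, -2 * lam, 2.0),
     (0.0, -w**2, -4 * lam))
v = (0.0, 2 * Dpx, Dpp)
residual = [sum(M[i][j] * c[j] for j in range(3)) + v[i] for i in range(3)]
assert all(abs(r) < 1e-12 for r in residual)

# det(x I - M) = x^3 + 6 lam x^2 + (8 lam^2 + 4 w^2) x + 8 lam w^2
def charpoly(x):
    return x**3 + 6 * lam * x**2 + (8 * lam**2 + 4 * w**2) * x + 8 * lam * w**2

root = cmath.sqrt(lam**2 - w**2)     # imaginary for underdamped parameters
for x in (-2 * lam, -2 * (lam + root), -2 * (lam - root)):
    assert abs(charpoly(x)) < 1e-12
```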
However, a positive stationary solution is still not a guarantee of a meaningful time evolution, because issues might appear for several kinds of initial conditions, especially if we choose the parameters of the master equation close to the critical line $\gamma_\mathrm{crit}$. Analytical approximations for the critical line can be made in two cases. If $\Omega \gg \omega_0$, one can expect (see Fig.~\ref{density}) that $k_B\tilde{T}$ is on the $\hbar\Omega$ scale. Let us introduce the quantity $x= {k_B \tilde{T}}/({\hbar \Omega})$. Looking for the zeros of $Z$ in Eq.~\eqref{eq:full_Z} for $\Omega \gg \omega_0$, the leading terms are \begin{equation} 0=\pi x + \gamma_\mathrm{EM} + \Psi\left( \frac{1}{2 \pi x} \right) , \label{eq:eq_for_univ_number} \end{equation} where $\gamma_\mathrm{EM} \approx 0.577$ is the Euler-Mascheroni constant. Solving \eqref{eq:eq_for_univ_number} for $x$, one gets $k_B \tilde{T}\approx 0.240395 \cdot \hbar \Omega$ for large $\Omega$. It should be noted that this result has been previously found in Ref. \cite{Lampo2}, where the stationary state was investigated from the point of view of the Heisenberg uncertainty principle. In the case of Gaussian density operators the Heisenberg uncertainty principle and our test condition $A/C\geqslant1$ are the same constraints on the parameter space of the master equation. \begin{figure*} \caption{The parameters used here are $\gamma=\omega_0$, $\Omega=20\omega_0$, and $k_BT=10\hbar\omega_0$. The initial conditions are $w=1$, $(c_1(0),c_2(0),c_3(0))=(d_0^2/4,0,1/\left(4 d_0^2\right))$. Left panel: $A d_0^2$ and $C d_0^2$ as a function of $\omega_0t$, where $d_0$ is the width of the quantum harmonic oscillator's ground state. The main figure shows the non-Markovian time evolution and the inset shows the Markovian time evolution. Right panel: $A/C$ as a function of $\omega_0t$.
The solid and dash-dotted lines show this ratio for the non-Markovian and the Markovian case, respectively. The horizontal thin lines indicate the asymptotic values in both panels.} \label{case1} \end{figure*} \begin{figure*} \caption{The same as for Fig.~\ref{case1}.} \label{case2} \end{figure*} \begin{figure*} \caption{The same as for Fig.~\ref{case1}.} \label{case3} \end{figure*} A different approximation is possible for $\gamma_\mathrm{crit}$ at very low temperature. Keeping the leading-order terms in Eq.~\eqref{eq:gamma_crit_line} for $k_B T \ll \hbar\omega_0$ and $k_B T \ll \hbar\Omega$, one gets the limiting behavior \begin{eqnarray} \gamma_\mathrm{crit} \cong 4\frac{\frac{\Omega^2+\omega_0^2}{\Omega}\exp{\left\lbrace - \frac{\hbar \omega_0}{k_B T}\right\rbrace }}{2-\frac{4}{\pi}\frac{\Omega}{\omega_0}\ln\left( \frac{\omega_0}{\Omega}\right) } \equiv C \cdot e^{- \frac{a}{T}}, \label{eq:gamma_for_small_T} \end{eqnarray} where $C$ and $a$ are constants. Clearly, this function is nonanalytic in $T$ and approaches the origin in Fig.~\ref{density} with infinite slope. Inverting \eqref{eq:gamma_for_small_T} one has on the critical line \begin{equation} T \cong \frac{a}{\ln(\frac{C}{\gamma_\mathrm{crit}})}, \end{equation} for small $\gamma_\mathrm{crit}$. \begin{figure*} \caption{The same as for Fig.~\ref{case1}.} \label{case4} \end{figure*} As we indicated earlier, one can encounter positivity violations during the time evolution. In the following, we show a few representative time evolutions. In the numerics, we limit ourselves to Gaussian density operators, which means that we have to follow only the time evolution of $\mathbf{c}(t)$, from which we extract $A$ and $C$ via \eqref{eq:A_and_C_for_cs} and check the validity of \eqref{good_spectrum} numerically.
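Returning for a moment to Eq.~\eqref{eq:eq_for_univ_number}: the universal number $k_B\tilde{T}\approx 0.240395\,\hbar\Omega$ quoted above can be reproduced with a few lines of root finding. The standalone sketch below (an illustration, not the code used for the figures) implements the digamma function via the standard recurrence plus asymptotic series and solves Eq.~\eqref{eq:eq_for_univ_number} by bisection:

```python
import math

def digamma(x):
    # shift the argument up with psi(x) = psi(x+1) - 1/x, then use the
    # asymptotic series psi(x) ~ ln x - 1/(2x) - 1/(12x^2) + 1/(120x^4) - 1/(252x^6)
    s = 0.0
    while x < 8.0:
        s -= 1.0 / x
        x += 1.0
    inv2 = 1.0 / (x * x)
    return s + math.log(x) - 0.5 / x - inv2 * (1 / 12 - inv2 * (1 / 120 - inv2 / 252))

EULER = 0.5772156649015329  # Euler-Mascheroni constant

def f(x):
    # right-hand side of Eq. (eq_for_univ_number); zero at x = k_B T~/(hbar Omega)
    return math.pi * x + EULER + digamma(1.0 / (2 * math.pi * x))

a, b = 0.1, 0.5   # bracket with f(a) > 0 > f(b); f is decreasing here
for _ in range(200):
    m = 0.5 * (a + b)
    if f(m) > 0:
        a = m
    else:
        b = m

assert abs(0.5 * (a + b) - 0.240395) < 1e-4
```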
\begin{figure*} \caption{The same as for Fig.~\ref{case1}.} \label{case5} \end{figure*} \begin{figure} \caption{The same as for Fig.~\ref{case1}.} \label{case20} \end{figure} \begin{figure*} \caption{The same as for Fig.~\ref{case1}.} \label{case11} \end{figure*} \begin{figure} \caption{$(A-C)d_0^2$ as a function of $\omega_0t$. The parameters used here are $\gamma=0.1\omega_0$, $\Omega=20\omega_0$, $k_BT=0.397055\hbar \omega_0$. The initial conditions are $w=1$, $(c_1(0),c_2(0),c_3(0))= (d_0^2/4,0,1/\left(4 d_0^2\right))$. Solid line: non-Markovian case; dash-dotted line: Markovian case. The horizontal line is drawn at zero. One can observe several positivity violations for the Markovian case.} \label{case11_minus} \end{figure} \begin{figure} \caption{Positivity violations in the Markovian runs for different squeezed initial conditions characterized by the complex $\zeta$. White region: no positivity violations; dark region: positivity violations. The parameters used here are $\gamma=0.755\omega_0$, $\Omega=20\omega_0$, $k_BT=\hbar \omega_0$, $w=1$. Time evolutions for points at $|\zeta|=1$, $\phi=\pm \pi/4$ are shown in Fig.~\ref{case25}.} \label{fig:zeta_phi} \end{figure} \begin{figure*} \caption{$A(t)/C(t)$ for selected squeezed initial states. Left panel: Markovian time evolutions; right panel: non-Markovian time evolutions. The parameters used here are $\gamma=0.755\omega_0$, $\Omega=20\omega_0$, and $k_BT=\hbar \omega_0$. The initial conditions are $w=1$, $\zeta=1$, $\phi=\pm \pi/4$, $(c_1(0),c_2(0),c_3(0))=(0.299405\,d_0^2,\pm 1.282289,1.581693/d_0^2)$. Positivity violation occurs in the interval $0<\omega_0 t<1.85$ for the Markovian case with $\phi=-\pi/4$.
There is no violation in the right panel.} \label{case25} \end{figure*} \begin{figure} \caption{Behavior of $\min_{t\ge 0} A(t)/C(t)$ for Markovian runs starting from thermal initial states; different curves belong to different damping factors $\gamma$.} \label{thermal_min} \end{figure} \section{Numerical results} \label{IV} In the previous section, we have discussed the validity of the stationary solution, which gives a constraint on the parameters of the master equation. We consider three different types of initial conditions of \eqref{eq:main_diffeq}, namely, coherent, squeezed, and thermal states. For the sake of completeness, we hereby reformulate these well-known initial states in our representation. {\it Coherent state}. This state is defined through the complex parameter $\alpha$, \begin{equation} \label{coherent} \ket{\alpha}=\sum_{n=0}^\infty e^{-\frac{|\alpha|^2}{2}} \frac{\alpha^n}{\sqrt{n!}} \ket n, \quad\alpha=|\alpha|e^{i\phi}, \end{equation} where $\ket{n}$ ($n\in {\mathbb N}_0$) are the number states and $\phi$ is the phase of $\alpha$. The Wigner function of this coherent state reads \begin{equation} W(x,p)=\frac{1}{\pi \hbar} e^{-\left(x/d-\sqrt{2} \mathrm{Re}(\alpha) \right)^2-\left(pd/\hbar-\sqrt{2} \mathrm{Im}(\alpha) \right)^2} \nonumber \end{equation} where $d$ is a length and can be taken as \[ d=w\sqrt{\frac{\hbar}{m\omega_0}}\equiv w d_0, \] where $w$ is a dimensionless positive number and $d_0$ is the width of the quantum harmonic oscillator's ground state. Due to the relation \begin{equation} W(x,p)=\left( \frac{1}{2 \pi}\right) ^2\int^{\infty}_{-\infty} dk \int^{\infty}_{-\infty} d \Delta e^{-i(kx+\Delta p/\hbar)} \rho(k, \Delta) \label{transform2} \end{equation} we obtain \begin{equation} \mathbf{c}_\mathrm{coh}(0)=\left(\frac{d^2}{4},0,\frac{1}{4 d^2}\right). \label{eq:coherent_init_cond} \end{equation} {\it Squeezed state}. In this case the state is characterized by two complex parameters $\alpha$ and $\zeta=|\zeta|e^{i\phi}$.
Introducing the creation $\hat{a}^\dagger$ and annihilation $\hat{a}$ operators of the quantum harmonic oscillator, a squeezed state is given by \begin{equation} \ket{\alpha,\zeta}=\hat{D}(\alpha)\hat{S}(\zeta)\ket{0} \label{squeezed} \end{equation} where $\hat{D}(\alpha)=\exp\bigl(\alpha\hat{a}^\dagger-\alpha^\ast\hat{a}\bigr)$ is the displacement and $\hat{S}(\zeta)=\exp\bigl[\tfrac{1}{2} \bigl(\zeta^\ast{}\hat{a}^2-\zeta{}\hat{a}^{\dagger 2}\bigr)\bigr]$ is the squeezing operator. After a lengthy but standard calculation, the Wigner function reads \begin{eqnarray} W(x,p)&=&\frac{1}{\pi \hbar} e^{-\left(x/d-\sqrt{2} \mathrm{Re}(\alpha) \right)^2 t_1-\left(pd/\hbar-\sqrt{2} \mathrm{Im}(\alpha) \right)^2 t_2} \nonumber \\ &\times& e^{\left(x/d-\sqrt{2} \mathrm{Re}(\alpha) \right)\left(pd/\hbar-\sqrt{2} \mathrm{Im}(\alpha) \right)t_3 } \nonumber \end{eqnarray} where \begin{eqnarray} t_1&=&\frac{e^{2|\zeta|}}{2}(1+\cos \phi)+\frac{e^{-2|\zeta|}}{2}(1-\cos \phi), \nonumber \\ t_2&=&\frac{e^{2|\zeta|}}{2}(1-\cos \phi)+\frac{e^{-2|\zeta|}}{2}(1+\cos \phi), \nonumber \\ t_3&=&\left( e^{2|\zeta|}-e^{-2|\zeta|}\right) \sin \phi. \label{eq:squeezed_params} \end{eqnarray} Finally, with the help of \eqref{transform2}, we get \begin{equation} \mathbf{c}_\mathrm{sq}(0)=\left(\frac{d^2}{4}t_2,\frac{1}{4}t_3,\frac{1}{4 d^2}t_1\right). \end{equation} {\it Thermal state}. This is a Gibbs state characterized by the thermal equilibrium temperature $T'$, which in the number state representation reads \begin{equation} \hat{\rho}=\sum_n \frac{ n^n_{\text{th}}}{(1+ n_{\text{th}})^{n+1}} \ket{n}\bra{n} \label{thermal} \end{equation} with the mean excitation number \begin{equation} n_{\text{th}} = \biggl[\exp \biggl( \frac{\hbar\omega_0}{k_\text{B}T'} \biggr)-1 \biggr]^{-1}.
\nonumber \end{equation} We have for the Wigner function \begin{equation} W(x,p)=\frac{1}{\pi \hbar} e^{-\frac{x^2}{d^2 (2 n_{\text{th}}+1)}-\frac{p^2 d^2}{\hbar^2 (2 n_{\text{th}}+1)} } \nonumber \end{equation} which yields \begin{equation}\label{thermalstate} \mathbf{c}_\mathrm{th}(0)=\coth\left(\frac{\hbar\omega_0}{2k_B T'} \right)\left(\frac{d^2}{4},0,\frac{1}{4 d^2}\right). \end{equation} Note that the coherent state with $w=1$ corresponds to the ground state of the quantum harmonic oscillator and is contained as a trivial special case of both the thermal and the squeezed states. In all subsequent numerical cases we will compare the time evolution of \eqref{eq:main_diffeq} with its Markovian version, which is obtained by replacing all time-dependent coefficient functions with their respective limits as $t \to \infty$, i.e., \begin{eqnarray} &&\omega_p(t) \rightarrow \omega^{(M)}_p, \quad \lambda(t) \rightarrow \lambda^{(M)},\nonumber \\ &&D_{px}(t)\rightarrow D^{(M)}_{px},\quad D_{pp}(t)\rightarrow D^{(M)}_{pp}. \nonumber \end{eqnarray} The result of a typical, physically valid time evolution can be seen in Fig.~\ref{case1}. Here the parameters are chosen so that the density operator is physical for any time, i.e., $A$ and $C$ are positive and $A \geq C$. One can observe very similar behavior if one starts from a squeezed or a thermal state, except that $A/C$ starts from a value larger than $1$ for a thermal state. In the figures we use dimensionless units: $A$ and $C$ are multiplied by $d_0^2$, where $d_0$ is the width of the quantum harmonic oscillator's ground state. In Fig.~\ref{case2}, the parameters are chosen from region I. It promptly follows that the asymptotic behavior must be unphysical for both the Markovian and non-Markovian cases. In Fig.~\ref{case2}(b), both curves are already below the horizontal line at $\omega_0 t \approx 2$; however, at the beginning the physical behavior lasts longer in the non-Markovian case.
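The initial-condition vectors above are straightforward to tabulate. A short standalone sketch (illustrative, with $d_0=1$; the squeezed-state values $|\zeta|=1$, $\phi=\pi/4$ are those quoted in the caption of Fig.~\ref{case25}) evaluates $\mathbf{c}_\mathrm{coh}(0)$ and $\mathbf{c}_\mathrm{sq}(0)$ and confirms via Eq.~\eqref{eq:A_and_C_for_cs} that both correspond to pure states, $A/C=1$ at $t=0$:

```python
import math

d0 = 1.0   # width of the oscillator ground state

def A_over_C(c1, c2, c3):
    # A = c3 - c2^2/(4 c1), C = 1/(16 c1)  =>  A/C = 16 c1 c3 - 4 c2^2
    return 16 * c1 * c3 - 4 * c2**2

# coherent state, Eq. (eq:coherent_init_cond): pure, so A/C = 1 for any width d
for w in (1.0, 1 / math.sqrt(10.0)):
    d = w * d0
    assert abs(A_over_C(d**2 / 4, 0.0, 1 / (4 * d**2)) - 1.0) < 1e-12

# squeezed state with |zeta| = 1, phi = pi/4
r, phi = 1.0, math.pi / 4
t1 = 0.5 * math.exp(2 * r) * (1 + math.cos(phi)) + 0.5 * math.exp(-2 * r) * (1 - math.cos(phi))
t2 = 0.5 * math.exp(2 * r) * (1 - math.cos(phi)) + 0.5 * math.exp(-2 * r) * (1 + math.cos(phi))
t3 = (math.exp(2 * r) - math.exp(-2 * r)) * math.sin(phi)
c_sq = (d0**2 * t2 / 4, t3 / 4, t1 / (4 * d0**2))

# matches the numbers quoted in the caption of Fig. case25
assert all(abs(x - y) < 1e-5 for x, y in zip(c_sq, (0.299405, 1.282289, 1.581693)))
assert abs(A_over_C(*c_sq) - 1.0) < 1e-12   # squeezed states are pure as well
```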
The parameters in Fig.~\ref{case3} are also from region I; however, the comparison with the previous case shows that for smaller temperature $k_BT/(\hbar \omega_0)$ and damping factor $\gamma/\omega_0$ we can see a few oscillations. The non-Markovian evolution is physical up to $\omega_0t \approx 3.28$ and later it becomes unphysical because $A/C$ becomes smaller than one. The Markovian evolution promptly becomes unphysical at $t=0$ and remains so for all times. We note that the parameters $\gamma/\omega_0$, $\Omega/\omega_0$, and $k_BT/(\hbar \omega_0)$ are chosen to be the same as for the bottom subfigure of Fig.~10.7 in the book by Breuer and Petruccione \cite{book1}. In Fig.~\ref{case4}, we choose a bigger $\gamma$ than in Fig.~\ref{case3}. All of the other parameters and initial conditions are the same. The parameters still belong to region I. Here, something more drastic happens in both cases. First the ratio $A/C$ goes below 1 (indicating positivity violation); at a later time, $A$ changes sign, and at an even later time, $A$ and $C$ diverge, changing signs anew. The Markovian evolution is still unphysical for the whole time evolution, while the non-Markovian evolution shows physical behavior until $A/C$ goes below one. If either $A$ or $C$ becomes negative, the corresponding Wigner function and $\textrm{Tr}\,\hat{\rho}$ do not exist. In Fig.~\ref{case5}, we used the same parameters as in Fig.~\ref{case2}, except that $\gamma$ has been decreased in such a way that the parameters are now in region II. The non-Markovian evolution is physical for all time. The Markovian evolution becomes unphysical but bounces back into the $A/C \geqslant 1$ region and remains physical at later times. For Fig.~\ref{case5}, the initial condition is a coherent state with $w=1$ in Eq.~(\ref{eq:coherent_init_cond}).
It is interesting to note that if we vary $w$, for example to $w=1/\sqrt{10}$, the initial behavior of the Markovian run is completely different (see Fig.~\ref{case20}): the positivity is promptly violated at $t=0^+$ and, at a later time, the system returns to a physically allowed state. The non-Markovian time evolution remains physical for all times even for this initial condition. An interesting regime is when $\gamma/\omega_0$ and $k_BT/(\hbar \omega_0)$ are small. Here we expect a few damped oscillations. In Figs.~\ref{case11} and \ref{case11_minus}, our parameters are close to the critical line, but are still in region II. The non-Markovian time evolution is physical at all times. However, the Markovian run shows several time intervals where the curve of $A/C$ attains values smaller than one. The same can also be monitored in the quantity $A-C$ (see Fig.~\ref{case11_minus}). Let us discuss a few facts about squeezed initial states. Choosing $\gamma$, $\Omega$, and $k_BT$ as in Fig.~\ref{case5}, we have found strong dependence of the positivity violation on the initial conditions. In Fig.~\ref{fig:zeta_phi}, a large dark region corresponds to the complex $\zeta$'s for which positivity violations can happen for the Markovian runs. This is further supported by Fig.~\ref{case25}, where two individual time evolutions are shown with the same $|\zeta|$, but opposite sign of $\phi$. For $\phi=-\pi/4$, the quotient $A/C$ shows a strong positivity violation; namely, in a small time interval it becomes negative. There is no violation for $\phi=\pi/4$. This particular situation is explained by inequality \eqref{eq:jolt_mar} at $t=0$ (see Appendix \ref{sec:initial_jolt}). In fact, $c_2(0)$ flips sign under the change $\phi\to-\phi$. In the non-Markovian case, we found no positivity violations at all for this family of initial conditions if the stationary solution is physical.
Next, we discuss what can happen if one starts from a thermal state (which is not a pure state for $T'>0$). Let us consider Fig.~\ref{thermal_min}. We plot the minimal values of the quotient $A/C$ for individual Markovian runs starting from thermal initial states. Different curves belong to different damping factors $\gamma$. At $T'=0$, we start from a coherent state. All relevant parameters belong to region II. The figure clearly supports the expectation that if one increases the width of the initial Gaussian, one can avoid positivity violations. Curves with decreasing $\gamma$ are further away from the critical line. Choosing $\gamma$ to be bigger than $0.72\,\omega_0$, there is no positivity violation even for $T'=0$. These numerical investigations suggest that the non-Markovian evolution becomes unphysical, i.e., $A/C <1$, only when the stationary state is unphysical. This has been investigated in detail in Sec. \ref{III} and results in constraints on the choice of the parameters of the model. However, this is not true for the Markovian evolution, which may show unphysical behavior for certain periods of the evolution. In this sense the non-Markovian evolution is more reliable than the Markovian one. \section{Summary and final remarks} \label{V} Summarizing, we have investigated the HPZ master equation of the Caldeira-Leggett model with a quantum harmonic oscillator, where we have considered the weak-coupling limit up to second order in the coupling parameter and an Ohmic spectral density with a Lorentz-Drude cutoff function. The restriction to weak coupling does not necessarily mean that the influence of the bath on the system is weak, i.e., weak damping. The large number of bath modes may act collectively and thereby have a strong influence on the open system even if each mode is perturbatively weakly coupled to it; see, for example, \cite{GT}. Therefore, we have begun our analysis without any restriction on the parameters of the model.
Our goal has been to identify unphysical behavior of this master equation by following the time evolution of initial density operators and examining whether the evolving density operators lose their positivity. This is a very delicate problem for general initial density operators, because the time evolution is usually followed in a phase-space representation and the study of positivity properties of the Weyl-transformed operators is still an open problem \cite{Nicola}. Therefore, we have focused only on Gaussian states, where the spectrum can be completely identified from the phase-space solutions of the master equation. As a first step, in Sec. \ref{II}, we have transformed the whole problem into a phase-space representation where the evolution is described by a system of linear differential equations. Then, we have identified algebraic relations between the evolving coefficients of this phase-space representation and the spectrum of the evolving operator, which need not always be a density operator. We have used numerical simulations to follow the evolving spectrum. We have compared the non-Markovian evolution to a Markovian one, which we have obtained by taking the coefficients in the $t \to \infty$ limit; see Eq.~\eqref{diffcoeffM}. We have shown for coherent, squeezed, and thermal initial conditions that positivity violations in the non-Markovian evolution occur only when the stationary solution is no longer a physical state. Therefore, a positivity check on the stationary solution is necessary, which puts important constraints on the parameters of our theory. Accordingly, we have carried out an analysis of the stationary solution in Sec. \ref{III}, where we have also recovered results known in the community; see \cite{CLLT} or \cite{Lampo2}. However, it is worthwhile to mention that not all published material handles this positivity issue very carefully; see, for example, Fig.~10.7 in \cite{book1}.
In contrast to the non-Markovian evolution, in Sec.~\ref{IV} we have found positivity violations in the Markovian case both for short times (occurring at $t=0^+$) and for intermediate times (occurring at finite $t>0$). Our numerical investigations suggest that the rapid growth of the diffusion coefficient $D_{pp}(t)$ compared to the growth of $D_{px}(t)$ is the reason why the non-Markovian master equation avoids positivity violations for short evolution times. We have only considered an Ohmic spectral density with a Lorentz-Drude cutoff function, but one may ask what can happen for other types of spectral densities. At least we know from \cite{Hu_Paz} that in the cases of so-called supra-Ohmic and sub-Ohmic spectral densities, $D_{pp}(t)$ grows faster than $D_{px}(t)$ for short times; together with our results, this leads us to conjecture that non-Markovian evolutions for these spectral densities also cannot exhibit positivity violations for Gaussian initial states and physical stationary states. If one considers the time evolution (\ref{NMEQ}) starting from an \emph{arbitrary}, not necessarily Gaussian, initial density operator, then one can state the following: for parameters belonging to region I of Fig.~\ref{density} and starting from any initial condition, there must be a positivity violation for both the non-Markovian and the Markovian master equations. This can be explained as follows. For parameters in region I the asymptotic state is non-physical. However, this state is unique and corresponds to the asymptotic Gaussian state of any initially physical state; see, e.g., \cite{Fleming,Lampo2} and the discussion in their Sec. III. If this state is non-physical, then a positivity violation must occur at least asymptotically. For parameters in regions II and III one should not rule out the possibility of finding positivity violations for appropriately chosen general initial density operators, as in the case of Gaussian initial states and the Markovian master equation.
Numerically, the non-Markovian evolution does not seem to show any signs of positivity violations for physical stationary states. Unfortunately, this is not always the case for the Markovian evolution. Therefore, we may say that the non-Markovian evolution is superior to the Markovian one, which is, vaguely speaking, due to the rapid growth of $D_{pp}(t)$ compared to that of $D_{px}(t)$. We managed to prove in Appendix~\ref{sec:initial_jolt} that there is no short-time positivity violation for arbitrary Gaussian initial states and parameters of the model. This remains true even when the stationary solution is unphysical. This finding seems to be connected to the so-called initial ``jolt'' found in Refs. \cite{Unruh,Hu_Paz}. A few generic comments on the purity of the evolving solutions are in order. In our whole investigation, we have focused on the ratio $A/C$, which, in turn, is the squared inverse of the purity. Thus, all figures implicitly describe the purity as well, which is a measure of mixedness. Many figures show that purities are non-monotonic in time, and therefore states undergo a certain amount of purification or mixing during the time evolution. An easy way to understand this effect is to consider an initial pure state and a different pure stationary state. As the dynamics is clearly not unitary, the stationary state is reached through not necessarily pure states, and thus the purity in this example cannot be monotonic; see our Fig. \ref{case11}. Several questions concerning this subject remain open, even though applications of these master equations are very frequent. Here, we have thoroughly investigated a Markovian and a non-Markovian master equation of the Caldeira-Leggett model for initial Gaussian density operators and identified the boundaries of the physically interpretable solutions of the time evolutions. Therefore, our results provide a key step in establishing the range of applicability of these master equations.
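The small-time analysis in Appendix~\ref{sec:initial_jolt} rests on solving a linear equation of the form $\dot{Q}+4\lambda Q = g$ by variation of constants. A toy numerical cross-check of that step, with an assumed constant rate $\lambda$ and source $g_0$ (stand-ins, not the model's actual time-dependent coefficients), compares the closed-form solution against a Runge--Kutta integration:

```python
import math

lam, g0, Q0 = 0.3, 2.0, 1.0  # toy constants standing in for lambda(t) and the source

def exact(t):
    # variation-of-constants solution of  Q' + 4*lam*Q = g0  with Q(0) = Q0
    return g0 / (4 * lam) + (Q0 - g0 / (4 * lam)) * math.exp(-4 * lam * t)

def rk4(f, y0, t1, n=4000):
    """Classical fourth-order Runge-Kutta integration of y' = f(t, y) on [0, t1]."""
    h, t, y = t1 / n, 0.0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return y

num = rk4(lambda t, Q: g0 - 4 * lam * Q, Q0, 2.0)
assert abs(num - exact(2.0)) < 1e-8
```

With time-dependent $\lambda(t)$ and source, the same comparison goes through with the integrating factor $\Lambda(t)=\exp(4\int_0^t\lambda)$ evaluated by quadrature.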
\section{Expressions for the coefficients \texorpdfstring{$D_{pp}(t)$}{TEXT} and \texorpdfstring{$D_{px}(t)$}{TEXT}} \label{Appuseful} Expanding the $\coth$ function in Eq.~(\ref{eq:D1(s)_def}) as \[ \coth \pi x=\sum_{n=-\infty}^\infty \frac{x}{\pi(x^2+n^2)}=\frac{1}{\pi x}+\frac{2x}{\pi}\sum_{n=1}^\infty \frac{1}{(x^2+n^2)} \] and integrating term by term, one gets for $s>0$ \begin{eqnarray} D_1(s)&=& \frac{4 m \gamma k_B T \Omega^2}{\hbar} \left[\frac{e^{-\Omega s}}{\Omega} +2 \sum_{n=1}^{\infty}\frac{\Omega e^{-\Omega s}- \nu_n e^{-\nu_n s} }{\Omega^2-\nu_n^2} \right], \nonumber\\ \label{eq:d1sumshort} \end{eqnarray} where the $\nu_n$'s are the bosonic Matsubara frequencies: \begin{equation} \nu_n=2\pi n k_B T/\hbar. \end{equation} The first part in the square brackets of (\ref{eq:d1sumshort}) can be transformed using the identity \[ e^{-\Omega s}\left[\frac{1}{\Omega}+2\sum_{n=1}^\infty \frac{\Omega}{\Omega^2-\nu_n^2}\right]=\frac{\pi}{\nu_1}\cot\left( \frac{\Omega \pi}{\nu_1}\right)e^{-\Omega s}, \] where $\nu_1=2\pi k_BT/\hbar$ is the first bosonic Matsubara frequency. The other part in the square brackets of (\ref{eq:d1sumshort}) can be expressed as \begin{eqnarray} &&\sum_{n=1}^{\infty}\frac{ \nu_n e^{-\nu_n s} }{\Omega^2-\nu_n^2}=-\frac{e^{-\nu_1 s}}{2\nu_1} \times\nonumber \\ \!\!\!\!&&\!\!\!\!\!\left( G\left(e^{-\nu_1 s},1,1-\frac{\Omega}{\nu_1} \right)+G\left(e^{-\nu_1 s},1,1+\frac{\Omega}{\nu_1} \right)\right), \end{eqnarray} where $G(z,a,b)$ denotes the so-called Lerch transcendent, HurwitzLerchPhi$[z, a, b]$ in Mathematica \cite{hurwitzlerchpi}. A similar but different sum also appears later: \begin{eqnarray} &&\sum_{n=1}^{\infty}\frac{ \nu_n e^{-\nu_n s} }{\omega_0^2+\nu_n^2}= \nonumber\\ \!\!\!\!&&\!\!\!\!\!
\frac{ e^{-i\omega_0 s}F\left(e^{-\nu_1 s},\frac{\nu_1-i\omega_0}{\nu_1},-1 \right)+e^{i\omega_0 s}F\left(e^{-\nu_1 s},\frac{\nu_1+i\omega_0}{\nu_1},-1 \right)}{2i\omega_0}, \nonumber \end{eqnarray} where $F(z,a,b)$ is the so-called incomplete beta function, Beta$[z,a,b]$ in Wolfram Mathematica. We also need two more sums over the Matsubara frequencies; these can be calculated via the useful formulas \[ \sum_{n=1}^{\infty}\frac{ \nu_n^2 e^{-\nu_n s} }{\Omega^2-\nu_n^2}=-\frac{\partial}{\partial s} \left( \sum_{n=1}^{\infty}\frac{ \nu_n e^{-\nu_n s} }{\Omega^2-\nu_n^2}\right), \] \[ \sum_{n=1}^{\infty}\frac{ \nu_n^2 e^{-\nu_n s} }{\omega_0^2+\nu_n^2}=-\frac{\partial}{\partial s} \left( \sum_{n=1}^{\infty}\frac{ \nu_n e^{-\nu_n s} }{\omega_0^2+\nu_n^2}\right). \] Inserting the series (\ref{eq:d1sumshort}) into Eq.~(\ref{diffcoeff}), the integral over $s$ is trivial, but the final forms of the diffusion coefficients are lengthy: \begin{widetext} \begin{eqnarray} D^{(2)}_{px}(t)=\frac{k_B T \gamma\Omega^2}{\hbar \omega_0\left( \omega^2_0+\Omega^2\right) } \Biggl\{ && \omega_0 \cdot \left( \frac{1}{\Omega}+ 2\sum_{n=1}^\infty \frac{\Omega}{\Omega^2-\nu_n^2}- 2\sum_{n=1}^\infty \left[ \frac{ \nu_n }{\Omega^2-\nu_n^2}+ \frac{ \nu_n}{\omega_0^2+\nu_n^2}\right] \right) \Biggr. \nonumber \\ &&-\omega_0\cos{(\omega_0 t)} \cdot \left(\frac{e^{-\Omega t}}{\Omega}+ 2\sum_{n=1}^\infty \frac{\Omega e^{-\Omega t}}{\Omega^2-\nu_n^2}- 2\sum_{n=1}^\infty \left[ \frac{ \nu_n e^{-\nu_n t}}{\Omega^2-\nu_n^2}+ \frac{ \nu_n e^{-\nu_n t}}{\omega_0^2+\nu_n^2}\right]\right) \nonumber \\ && \Biggl.
-\sin{(\omega_0 t)} \cdot\left( e^{-\Omega t}+ 2\sum_{n=1}^\infty \frac{\Omega^2 e^{-\Omega t}}{\Omega^2-\nu_n^2}- 2\sum_{n=1}^\infty \left[ \frac{ \nu_n^2 e^{-\nu_n t}}{\Omega^2-\nu_n^2}+ \frac{ \nu_n^2 e^{-\nu_n t}}{\omega_0^2+\nu_n^2}\right]\right) \Biggr\} \label{eq:D_px(2)} \end{eqnarray} \begin{eqnarray} D^{(2)}_{pp}(t)=\frac{2k_B T m\gamma\Omega^2}{\hbar\left( \omega^2_0+\Omega^2\right) } \Biggl\{ && \left( 1 + 2\sum_{n=1}^\infty \frac{\Omega^2}{\Omega^2-\nu_n^2}- 2\sum_{n=1}^\infty \left[ \frac{ \nu_n^2 }{\Omega^2-\nu_n^2}+ \frac{ \nu_n^2}{\omega_0^2+\nu_n^2}\right] \right) \Biggr. \nonumber \\ &&+\omega_0\sin{(\omega_0 t)} \cdot \left(\frac{e^{-\Omega t}}{\Omega}+ 2\sum_{n=1}^\infty \frac{\Omega e^{-\Omega t}}{\Omega^2-\nu_n^2}- 2\sum_{n=1}^\infty \left[ \frac{ \nu_n e^{-\nu_n t}}{\Omega^2-\nu_n^2}+ \frac{ \nu_n e^{-\nu_n t}}{\omega_0^2+\nu_n^2}\right]\right) \nonumber \\ && \Biggl. -\cos{(\omega_0 t)} \cdot\left( e^{-\Omega t}+ 2\sum_{n=1}^\infty \frac{\Omega^2 e^{-\Omega t}}{\Omega^2-\nu_n^2}- 2\sum_{n=1}^\infty \left[ \frac{ \nu_n^2 e^{-\nu_n t}}{\Omega^2-\nu_n^2}+ \frac{ \nu_n^2 e^{-\nu_n t}}{\omega_0^2+\nu_n^2}\right]\right) \Biggr\} \label{eq:D_pp(2)}. \end{eqnarray} \end{widetext} We used the above formulas in our numerical work. The Markovian values for $\omega_p^2$ and $\lambda$ are \begin{equation} \left(\omega_p^{(M)}\right)^2=\omega^2_0+2 \gamma \Omega-\frac{2\gamma \Omega^3}{\Omega^2+\omega^2_0}, \quad \lambda^{(M)}=\frac{\gamma\Omega^2}{\Omega^2+\omega_0^2}. \label{eq:Markovian_lambda_omega} \end{equation} The asymptotic Markovian values for the diffusion coefficients can be read off from the first lines of Eqs. (\ref{eq:D_px(2)}) and (\ref{eq:D_pp(2)}).
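All of the Matsubara sums in this appendix originate from the partial-fraction expansion of $\coth$ quoted at its beginning; a quick numerical sanity check of that expansion, with an arbitrary test point and finite truncation (both assumptions of this sketch), is:

```python
import math

def coth(y):
    return math.cosh(y) / math.sinh(y)

def coth_partial_fractions(x, nmax):
    """coth(pi*x) = 1/(pi*x) + (2x/pi) * sum_{n>=1} 1/(x^2 + n^2), truncated at nmax."""
    tail = sum(1.0 / (x * x + n * n) for n in range(1, nmax + 1))
    return 1.0 / (math.pi * x) + (2.0 * x / math.pi) * tail

x = 0.7  # arbitrary test point
exact = coth(math.pi * x)
approx = coth_partial_fractions(x, 10_000)
# the truncation error decays like 2x/(pi*nmax)
assert abs(exact - approx) < 1e-3
```

The slow $1/n_{\max}$ decay of the truncation error is why the closed forms via the Lerch transcendent are preferable to direct summation in numerical work.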
Performing the Matsubara sums, these asymptotic values can be written as \begin{equation} D_{pp}^{(M)}=m\gamma \omega_0 \frac{\Omega^2}{ \omega^2_0+\Omega^2}\coth\left(\frac{\hbar\omega_0} {2k_BT}\right), \end{equation} \begin{eqnarray} D_{px}^{(M)}&=&\frac{\gamma\Omega^2}{\Omega^2+\omega_0^2}\Biggl[ -\frac{k_B T}{\hbar\Omega}-\frac{1}{2\pi}\biggl\{2\Psi\left( \frac{\hbar\Omega}{2\pi k_B T}\right) \biggr.\Biggr. \nonumber\\ && \Biggl.\biggl. \quad-\Psi\left( \frac{i\hbar\omega_0}{2\pi k_B T} \right)-\Psi\left( \frac{-i\hbar\omega_0}{2\pi k_B T} \right) \biggr\} \Biggr], \label{eq:Markovian_Dpx} \end{eqnarray} where $\Psi(x)$ is the digamma function. The Markovian values \eqref{eq:Markovian_lambda_omega}--\eqref{eq:Markovian_Dpx} fully determine the asymptotic matrix $\mathbf{M}^{(M)}$ and the asymptotic vector $\mathbf{v}^{(M)}$. \section{Behavior of \texorpdfstring{$D_{pp}(t)$}{TEXT} and \texorpdfstring{$D_{px}(t)$}{TEXT} for small time \texorpdfstring{$t$}{TEXT}} \label{sec:dpp_app} At very low temperatures the hyperbolic cotangent factor in Eq.~(\ref{eq:D1(s)_def}) can be well approximated by one: \begin{eqnarray} &&D_1(s)|_{T=0} = \frac{2\gamma m \Omega^2}{\pi} \cdot\int_0^\infty \frac{\omega}{\Omega^2+\omega^2}\cos(\omega s) d\omega = \nonumber \\ &&= \frac{2\gamma m \Omega^2}{\pi} \Bigl(\sinh(\Omega s)\textrm{Shi}\,(\Omega s)-\cosh(\Omega s)\textrm{Chi}\,(\Omega s) \Bigr), \qquad\label{eq:d1_at_T_eq_0} \end{eqnarray} where \begin{equation} \textrm{Chi}\,(z)=\gamma_{\mathrm{EM}}+\ln(z)+\int_0^z\frac{(\cosh(t)-1)}{t}dt, \label{eq:chi} \end{equation} is the function CoshIntegral[z] and \begin{equation} \textrm{Shi}\,(z)=\int_0^z \frac{\sinh(t)}{t}dt \label{eq:shi} \end{equation} is the function SinhIntegral[z] in Mathematica. For short times $s$, the dominant behavior of $D_1(s)$ is given by the logarithm.
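As a sanity check of the definitions of $\textrm{Shi}$ and $\textrm{Chi}$, the integral forms above can be compared against their standard power series. The test point and quadrature resolution below are arbitrary choices of this sketch:

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

# standard power series of Chi and Shi, used as an independent check
def chi_series(z, kmax=30):
    return EULER_GAMMA + math.log(z) + sum(
        z**(2 * k) / (2 * k * math.factorial(2 * k)) for k in range(1, kmax + 1))

def shi_series(z, kmax=30):
    return sum(
        z**(2 * k + 1) / ((2 * k + 1) * math.factorial(2 * k + 1))
        for k in range(kmax + 1))

def simpson(f, a, b, n=2000):
    """Composite Simpson quadrature on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

z = 1.3  # arbitrary test point
# integral definitions; the integrands have removable singularities at t = 0
chi_int = EULER_GAMMA + math.log(z) + simpson(
    lambda t: (math.cosh(t) - 1.0) / t if t > 1e-12 else 0.0, 0.0, z)
shi_int = simpson(lambda t: math.sinh(t) / t if t > 1e-12 else 1.0, 0.0, z)

assert abs(chi_series(z) - chi_int) < 1e-9
assert abs(shi_series(z) - shi_int) < 1e-9
```

The $\ln(z)$ term of $\textrm{Chi}$ is precisely the logarithm responsible for the short-time behavior noted above.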
By Eqs.~(\ref{diffcoeff}), (\ref{eq:chi}) and (\ref{eq:shi}), the coefficients $D_{pp}(t)$ and $D_{px}(t)$ behave as \begin{equation}\label{eq:Dpplead} D_{pp}(t) = \frac{2\gamma m\Omega^2}{\pi}\left(1-\gamma_{\mathrm{EM}}-\ln\Omega t \right)t+\mathcal{O}(t^3), \end{equation} \begin{equation}\label{eq:Dpxlead} D_{px}(t) = \frac{\gamma \Omega^2}{4\pi}\left(1-2\gamma_{\mathrm{EM}}-2\ln\Omega t \right)t^2+\mathcal{O}(t^4) \end{equation} for small $t$ and $T=0$. At finite temperature one can make the decomposition \begin{eqnarray} &&D_1(s)=D_1(s)|_{T=0} \nonumber \\ &&+\frac{2\gamma m \Omega^2}{\pi} \int_0^\infty \frac{\omega}{\Omega^2+\omega^2}\left[ \coth\left(\frac{\hbar\omega}{2k_BT}\right)-1 \right]d\omega, \nonumber \end{eqnarray} where the first term on the right-hand side is discussed above and behaves as $\sim\ln(\Omega s)$, while the second is finite even for $s=0$. By Eq.~(\ref{diffcoeff}), at finite temperature the short-time dominant behavior of $D_{pp}(t)$ and $D_{px}(t)$ is still \begin{eqnarray} D_{pp}(t) &\simeq& -\frac{2\gamma m\Omega^2}{\pi}t \ln(\Omega t), \label{eq:Dpptruelead} \\ D_{px}(t) &\simeq& -\frac{\gamma\Omega^2}{2\pi}t^2 \ln(\Omega t). \label{eq:Dpxtruelead} \end{eqnarray} \section{Analysis of small-time behavior} \label{sec:initial_jolt} In this appendix, we show how a differential equation for the quotient $A(t)/C(t)$ can be used to prove small-time positivity violation or non-violation. We begin with the non-Markovian case. Using the notations of Section \ref{III}, we set $Q(t)=A(t)/C(t)=16c_1(t)c_3(t)-4c_2^2(t)$, and via the system \eqref{eq:main_diffeq} we arrive at \[ \dot{Q}+4\lambda(t)Q=16 \frac{D_{pp}(t)}{\hbar}c_1(t) - 16D_{px}(t)c_2(t).
\] Its general solution is given by the variation-of-constants formula \begin{eqnarray} &&Q(t)=\frac{Q(0)}{\Lambda(t)}+ \nonumber\\ &&+\frac{16}{\Lambda(t)} \int_0^t \Lambda(s) \biggl[\frac{D_{pp}(s)}{\hbar}c_1(s)-D_{px}(s)c_2(s)\biggr] \, ds, \nonumber \end{eqnarray} where we have set $\Lambda(t)= \exp\left(4\int_0^t\lambda\right)$ for convenience. Using this, the condition $Q(t)\ge 1$ is clearly equivalent to $F(t)\ge 0$, where \begin{eqnarray} F(t)&=&\int_0^t \Lambda(s)\biggl[\frac{D_{pp}(s)}{\hbar}c_1(s)-D_{px}(s)c_2(s)\biggr] \, ds \nonumber \\ && - \frac{\Lambda(t)-Q(0)}{16}.\nonumber \end{eqnarray} Note that $F(0)=\frac{Q(0)-1}{16}\ge 0$. Therefore, a sufficient condition for $F(t)\ge 0$ to hold for small $t$ is simply $F'(t)\ge 0$, i.e., \begin{equation} \frac{D_{pp}(t)}{\hbar}c_1(t)-D_{px}(t)c_2(t)\ge \frac{\lambda(t)}{4}. \label{eq:jolt_eq} \end{equation} We note in passing that for pure initial states $F(0)=0$, so $F'(t)\ge 0$ is actually equivalent to $Q(t)\ge 1$ for sufficiently small $t$. Using the expressions \eqref{eq:Dpptruelead} and \eqref{eq:Dpxtruelead} and the short-time dominant behavior $\lambda(t)\simeq \tfrac{1}{2}\gamma\Omega^2t^2$, we find \begin{equation} -\frac{2m}{\pi\hbar}\ln(\Omega t)\cdot c_1(t)+ \frac{1}{2\pi}t\ln(\Omega t) \cdot c_2(t) \gtrsim \frac{t}{8},\nonumber \end{equation} which indeed holds for any trajectory $(c_1,c_2,c_3)$ for sufficiently small $t$: $c_1$ is always positive, the first term on the left-hand side dominates the second in modulus, and the divergent logarithm ensures the inequality for small $t$ for any positive $c_1$. This shows that the non-Markovian time evolution never violates positivity at $t=0^+$. In the Markovian case, a completely analogous condition to \eqref{eq:jolt_eq} can be derived with $D_{pp}(t)$, $D_{px}(t)$ and $\lambda(t)$ replaced by their Markovian counterparts $D_{pp}^{(M)}$, $D_{px}^{(M)}$ and $\lambda^{(M)}$, viz.
\begin{equation}\label{eq:jolt_mar} \frac{D_{pp}^{(M)}}{\hbar}c_1(t)-D_{px}^{(M)}c_2(t)\ge \frac{\lambda^{(M)}}{4}. \end{equation} Now consider squeezed initial states $\mathbf{c}_\mathrm{sq}(0)$, for which clearly $Q(0)=1$. Evaluating the preceding inequality at $t=0$, we obtain a set of initial states $\mathbf{c}_\mathrm{sq}(0)$ that surely violate positivity at $t=0^+$. This set constitutes a subset of the gray set in Figure \ref{fig:zeta_phi}. Hence, we have shown that in the Markovian case it is always possible to find a pure state that violates positivity at $t=0^+$. \end{document}
\begin{document} \begin{center} {\bfseries\Large On aggregation of subcritical Galton--Watson branching \\[2mm] processes with regularly varying immigration} \vskip0.5cm {\sc\large M\'aty\'as Barczy$^{*,\diamond}$, Fanni K. Ned\'enyi$^{*}$, Gyula Pap$^{**}$} \end{center} \vskip0.2cm \noindent * MTA-SZTE Analysis and Stochastics Research Group, Bolyai Institute, University of Szeged, Aradi v\'ertan\'uk tere 1, H--6720 Szeged, Hungary. \noindent ** Bolyai Institute, University of Szeged, Aradi v\'ertan\'uk tere 1, H--6720 Szeged, Hungary. \noindent e--mails: [email protected] (M. Barczy), [email protected] (F. K. Ned\'enyi). \noindent $\diamond$ Corresponding author. \vskip0.2cm \renewcommand{\thefootnote}{} \footnote{\textit{2020 Mathematics Subject Classifications\/} 60J80, 60F05, 60G10, 60G52, 60G70.} \footnote{\textit{Key words and phrases\/}: Galton--Watson branching processes with immigration, temporal and contemporaneous aggregation, multivariate regular variation, stable distribution, limit measure, tail process.} \vspace*{0.2cm} \footnote{M\'aty\'as Barczy is supported by the J\'anos Bolyai Research Scholarship of the Hungarian Academy of Sciences. Fanni K. Ned\'enyi is supported by the UNKP-18-3 New National Excellence Program of the Ministry of Human Capacities. Gyula Pap was supported by the Ministry for Innovation and Technology, Hungary grant TUDFO/47138-1/2019-ITM.} \vspace*{-10mm} \begin{abstract} We study an iterated temporal and contemporaneous aggregation of \ $N$ \ independent copies of a strongly stationary subcritical Galton--Watson branching process with regularly varying immigration having index \ $\alpha \in (0, 2)$.
\ Limits of finite dimensional distributions of appropriately centered and scaled aggregated partial sum processes are shown to exist when first taking the limit as \ $N \to \infty$ \ and then the time scale \ $n \to \infty$. \ The limit process is an \ $\alpha$-stable process if \ $\alpha \in (0, 1) \cup (1, 2)$, \ and a deterministic line with slope \ $1$ \ if \ $\alpha = 1$. \end{abstract} \section{Introduction} \label{intro} The field of temporal and contemporaneous (also called cross-sectional) aggregations of independent stationary stochastic processes is an important and very active research area in empirical and theoretical statistics and in other areas as well. Robinson \cite{Rob} and Granger \cite{Gra} started to investigate the scheme of contemporaneous aggregation of random-co\-ef\-fi\-cient autoregressive processes of order 1 in order to obtain the long memory phenomenon in aggregated time series. For surveys on aggregation of different kinds of stochastic processes, see, e.g., Pilipauskait{\.e} and Surgailis \cite{PilSur}, Jirak \cite[page 512]{Jir} or the arXiv version of Barczy et al.\ \cite{BarNedPap}. Recently, Puplinskait{\.e} and Surgailis \cite{PupSur1, PupSur2} studied iterated aggregation of random coefficient autoregressive processes of order 1 with common innovations and with so-called idiosyncratic innovations, respectively, belonging to the domain of attraction of an $\alpha$-stable law. Limits of finite dimensional distributions of appropriately centered and scaled aggregated partial sum processes are shown to exist when first the number of copies \ $N \to \infty$ \ and then the time scale \ $n \to \infty$.
\ Very recently, Pilipauskait{\.e} et al.\ \cite{PilSkoSur} extended the results of Puplinskait{\.e} and Surgailis \cite{PupSur2} (idiosyncratic case), deriving limits of finite dimensional distributions of appropriately centered and scaled aggregated partial sum processes when first the time scale \ $n \to \infty$ \ and then the number of copies \ $N \to \infty$, \ and when \ $n \to \infty$ \ and \ $N \to \infty$ \ simultaneously with possibly different rates. The above listed references are all about aggregation procedures for time series, mainly for randomized autoregressive processes. The present paper investigates aggregation schemes for some branching processes under a low moment condition; according to our knowledge, this question has not been studied before in the literature. Branching processes, especially Galton--Watson branching processes with immigration, have attracted a lot of attention due to the fact that they are widely used in mathematical biology for modelling the growth of a population in time. In Barczy et al.\ \cite{BarNedPap2}, we started to investigate the limit behavior of temporal and contemporaneous aggregations of independent copies of a stationary multitype Galton--Watson branching process with immigration under third order moment conditions on the offspring and immigration distributions in the iterated and simultaneous cases as well. In both cases, the limit process is a zero mean Brownian motion with the same covariance function. As of 2020, modelling the COVID-19 contamination of the population of a certain region or country is of great importance. Multitype Galton--Watson processes with immigration have been frequently used to model the spreading of a number of diseases, and they can be applied to this new disease as well. For example, Yanev et al. \cite{YanStoAta} applied a two-type Galton--Watson process with immigration to model the number of detected, COVID-19-infected and undetected, COVID-19-infected people in a population.
The temporal and contemporaneous aggregation of the first coordinate process of the two-type branching process in question would mean the total number of detected, infected people up to some given time point across several regions. In this paper we study the limit behavior of temporal and contemporaneous aggregations of independent copies of a strongly stationary Galton--Watson branching process \ $(X_k)_{k\geqslant0}$ \ with regularly varying immigration having index in \ $(0, 2)$ \ (yielding infinite variance) in an iterated, idiosyncratic case, namely, when first the number of copies \ $N \to \infty$ \ and then the time scale \ $n \to \infty$. \ Our results are analogous to those of Puplinskait{\.e} and Surgailis \cite{PupSur2}. The present paper is organized as follows. In Section \ref{results}, first we collect our assumptions that are valid for the whole paper, namely, we consider a sequence of independent copies of \ $(X_k)_{k\geqslant0}$ \ such that the expectation of the offspring distribution is less than \ $1$ \ (so-called subcritical case). In case of \ $\alpha\in[1,2)$, \ we additionally suppose the finiteness of the second moment of the offspring distribution. Under our assumptions, by Basrak et al. \cite[Theorem 2.1.1]{BasKulPal} (see also Theorem \ref{Xtail}), the unique stationary distribution of \ $(X_k)_{k\geqslant0}$ \ is also regularly varying with the same index \ $\alpha$. In Theorem \ref{simple_aggregation1_stable_fd}, we show that the appropriately centered and scaled partial sum process of finite segments of independent copies of \ $(X_k)_{k\geqslant0}$ \ converges to an \ $\alpha$-stable process. The characteristic function of the \ $\alpha$-stable limit process is given explicitly as well. In Remarks \ref{mu_properties1} and \ref{mu_properties2}, we collect some properties of the \ $\alpha$-stable limit process in question, such as the support of its L\'evy measure. 
The proof of Theorem \ref{simple_aggregation1_stable_fd} is based on a slight modification of Theorem 7.1 in Resnick \cite{Res}, namely, on a result of weak convergence of partial sum processes towards L\'evy processes, see Theorem \ref{7.1}, where we consider a different centering. In the course of the proof of Theorem \ref{simple_aggregation1_stable_fd} one needs to verify that the so-called limit measures of finite segments of \ $(X_k)_{k\geqslant 0}$ \ are in fact L\'evy measures. We determine these limit measures explicitly (see part (i) of Proposition \ref{Pro_limit_meaure}) applying an expression for the so-called tail measure of a strongly stationary regularly varying sequence based on the corresponding (whole) spectral tail process given in Planini\'c and Soulier \cite[Theorem 3.1]{PlaSou}. While the centering in Theorem \ref{simple_aggregation1_stable_fd} is the so-called truncated mean, in Corollary \ref{simple_aggregation1_stable_centering_fd} we consider no-centering if \ $\alpha\in(0,1)$, \ and centering with the mean if \ $\alpha\in(1,2)$. \ In both cases the limit process is an \ $\alpha$-stable process, the same one as in Theorem \ref{simple_aggregation1_stable_fd} plus some deterministic drift depending on \ $\alpha$. \ Theorem \ref{simple_aggregation1_stable_fd} and Corollary \ref{simple_aggregation1_stable_centering_fd} together yield the weak convergence of finite dimensional distributions of appropriately centered and scaled contemporaneous aggregations of independent copies of \ $(X_k)_{k\geqslant0}$ \ towards the corresponding finite dimensional distributions of a strongly stationary, subcritical autoregressive process of order 1 with \ $\alpha$-stable innovations as the number of copies tends to infinity, see Corollary \ref{aggr_copies1} and Proposition \ref{Pro_AR1}. 
Theorem \ref{iterated_aggr_1} contains our main result, namely, we determine the weak limit of appropriately centered and scaled finite dimensional distributions of temporal and contemporaneous aggregations of independent copies of \ $(X_k)_{k\geqslant0}$, \ where the limit is taken in a way that first the number of copies tends to infinity and then the time corresponding to temporal aggregation tends to infinity. It turns out that the limit process is an \ $\alpha$-stable process if \ $\alpha \in (0, 1) \cup (1, 2)$, \ and a deterministic line with slope \ $1$ \ if \ $\alpha = 1$. \ We consider different kinds of centerings, and we give the explicit characteristic function of the limit process as well. In Remark \ref{Rem_char_spectral_proc}, we rewrite this characteristic function in case of \ $\alpha\in(0,1)$ \ in terms of the spectral tail process of \ $(X_k)_{k\geqslant 0}$. We close the paper with five appendices. In Appendix \ref{App_cont_map_theorem} we recall a version of the continuous mapping theorem due to Kallenberg \cite[Theorem 3.27]{Kal}. Appendix \ref{App_vague} is devoted to some properties of the underlying punctured space \ $\mathbb{R}^d\setminus\{\boldsymbol{0}\}$ \ and vague convergence. In Appendix \ref{App_reg_var_distr} we recall the notion of a regularly varying random vector and its limit measure, and, in Proposition \ref{Pro_mapping}, the limit measure of an appropriate positively homogeneous real-valued function of a regularly varying random vector. In Appendix \ref{App_Resnick_gen} we formulate a result on weak convergence of partial sum processes towards L\'evy processes by slightly modifying Theorem 7.1 in Resnick \cite{Res} with a different centering. In the end, we recall a result on the tail behavior and forward tail process of \ $(X_k)_{k\geqslant0}$ \ due to Basrak et al.\ \cite{BasKulPal}, and we determine the limit measures of finite segments of \ $(X_k)_{k\geqslant0}$, \ see Appendix \ref{App_tail}.
Finally, we summarize the novelties of the paper. According to our knowledge, studying aggregation of regularly varying Galton--Watson branching processes with immigration has not been considered before. In the proofs we make use of the explicit form of the (whole) spectral tail process and a very recent result of Planini\'c and Soulier \cite[Theorem 3.1]{PlaSou} about the tail measure of strongly stationary sequences. We explicitly determine the limit measures of finite segments of \ $(X_k)_{k\geqslant0}$, \ see part (i) of Proposition \ref{Pro_limit_meaure}. In a companion paper, we will study the other iterated, idiosyncratic aggregation scheme, namely, when first the time scale \ $n \to \infty$ \ and then the number of copies \ $N \to \infty$. \section{Main results} \label{results} Let \ $\mathbb{Z}_+$, \ $\mathbb{N}$, \ $\mathbb{Q}$, \ $\mathbb{R}$, \ $\mathbb{R}_+$, \ $\mathbb{R}_{++}$, \ $\mathbb{R}_-$, \ $\mathbb{R}_{--}$ \ and \ $\mathbb{C}$ \ denote the set of non-negative integers, positive integers, rational numbers, real numbers, non-negative real numbers, positive real numbers, non-positive real numbers, negative real numbers and complex numbers, respectively. For each \ $d \in \mathbb{N}$, \ the natural basis in \ $\mathbb{R}^d$ \ will be denoted by \ ${\boldsymbol{e}}_1$, \ldots, ${\boldsymbol{e}}_d$. \ Put \ ${\boldsymbol{1}}_d := (1, \ldots, 1)^\top$ \ and \ $\mathbb{S}^{d-1} := \{{\boldsymbol{x}} \in \mathbb{R}^d : \|{\boldsymbol{x}}\| = 1\}$, \ where \ $\|{\boldsymbol{x}}\|$ \ denotes the Euclidean norm of \ ${\boldsymbol{x}}\in\mathbb{R}^d$, \ and denote by \ ${\mathcal B}(\mathbb{S}^{d-1})$ \ the Borel \ $\sigma$-field of \ $\mathbb{S}^{d-1}$. 
\ For a probability measure \ $\mu$ \ on \ $\mathbb{R}^d$, \ $\widehat{\mu}$ \ will denote its characteristic function, i.e., \ $\widehat{\mu}({\boldsymbol{\theta}}) := \int_{\mathbb{R}^d} \mathrm{e}^{\mathrm{i}\langle{\boldsymbol{\theta}},{\boldsymbol{x}}\rangle} \, \mu(\mathrm{d}{\boldsymbol{x}})$ \ for \ ${\boldsymbol{\theta}} \in \mathbb{R}^d$. \ Convergence in distribution and almost sure convergence of random variables, and weak convergence of probability measures will be denoted by \ $\distr$, \ $\as$ \ and \ $\distrw$, \ respectively. Equality in distribution will be denoted by \ $\distre$. \ We will use \ $\distrf$ \ or \ ${\mathcal D}_\mathrm{f}\text{-}\hspace*{-1mm}\lim$ \ for weak convergence of finite dimensional distributions. A function \ $f : \mathbb{R}_+ \to \mathbb{R}^d$ \ is called \emph{c\`adl\`ag} if it is right continuous with left limits. Let \ $\mathbb{D}(\mathbb{R}_+, \mathbb{R}^d)$ \ and \ $\mathbb{C}(\mathbb{R}_+, \mathbb{R}^d)$ \ denote the space of all \ $\mathbb{R}^d$-valued c\`adl\`ag and continuous functions on \ $\mathbb{R}_+$, \ respectively. Let \ ${\mathcal B}(\mathbb{D}(\mathbb{R}_+, \mathbb{R}^d))$ \ denote the Borel \ $\sigma$-algebra on \ $\mathbb{D}(\mathbb{R}_+, \mathbb{R}^d)$ \ for the metric defined in Chapter VI, (1.26) of Jacod and Shiryaev \cite{JacShi}. With this metric \ $\mathbb{D}(\mathbb{R}_+, \mathbb{R}^d)$ \ is a complete and separable metric space and the topology induced by this metric is the so-called Skorokhod topology.
For \ $\mathbb{R}^d$-valued stochastic processes \ $(\mathcal{Y}_t)_{t \in \mathbb{R}_+}$ \ and \ $(\mathcal{Y}^{(n)}_t)_{t \in \mathbb{R}_+}$, \ $n \in \mathbb{N}$, \ with c\`adl\`ag paths we write \ $\mathcal{Y}^{(n)} \distr \mathcal{Y}$ \ as \ $n\to\infty$ \ if the distribution of \ $\mathcal{Y}^{(n)}$ \ on the space \ $(\mathbb{D}(\mathbb{R}_+, \mathbb{R}^d), {\mathcal B}(\mathbb{D}(\mathbb{R}_+, \mathbb{R}^d)))$ \ converges weakly to the distribution of \ $\mathcal{Y}$ \ on the space \ $(\mathbb{D}(\mathbb{R}_+, \mathbb{R}^d), {\mathcal B}(\mathbb{D}(\mathbb{R}_+, \mathbb{R}^d)))$ \ as \ $n \to \infty$. Let \ $(X_k)_{k\in\mathbb{Z}_+}$ \ be a Galton--Watson branching process with immigration. For each \ $k, j \in \mathbb{Z}_+$, \ the number of individuals in the \ $k^\mathrm{th}$ \ generation will be denoted by \ $X_k$, \ the number of offspring produced by the \ $j^\mathrm{th}$ \ individual belonging to the \ $(k-1)^\mathrm{th}$ \ generation will be denoted by \ $\xi_{k,j}$, \ and the number of immigrants in the \ $k^\mathrm{th}$ \ generation will be denoted by \ $\varepsilon_k$. \ Then we have \[ X_k = \sum_{j=1}^{X_{k-1}} \xi_{k,j} + \varepsilon_k , \qquad k \in \mathbb{N} , \] where we define \ $\sum_{j=1}^0 := 0$. \ Here \ $\bigl\{X_0, \, \xi_{k,j}, \, \varepsilon_k : k, j \in \mathbb{N}\bigr\}$ \ are supposed to be independent non-negative integer-valued random variables. Moreover, \ $\{\xi_{k,j} : k, j \in \mathbb{N}\}$ \ and \ $\{\varepsilon_k : k \in \mathbb{N}\}$ \ are supposed to consist of identically distributed random variables, respectively. For notational convenience, let \ $\xi$ \ and \ $\varepsilon$ \ be independent random variables such that \ $\xi \distre \xi_{1,1}$ \ and \ $\varepsilon \distre \varepsilon_1$.
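For intuition, one trajectory of the recursion $X_k=\sum_{j=1}^{X_{k-1}}\xi_{k,j}+\varepsilon_k$ can be simulated directly. The Poisson offspring law and Pareto-tailed immigration below are illustrative choices only; the paper assumes merely $m_\xi<1$ and regularly varying $\varepsilon$:

```python
import math
import random

def gw_immigration_path(n_steps, m_xi=0.5, alpha=1.5, x0=0, seed=42):
    """One path of X_k = sum_{j=1}^{X_{k-1}} xi_{k,j} + eps_k.
    Offspring: Poisson(m_xi), subcritical since m_xi < 1 (toy choice).
    Immigration: integer part of Pareto(alpha), so P(eps > x) ~ x^(-alpha)."""
    rng = random.Random(seed)

    def poisson(lam):
        # Knuth's multiplication method; adequate for small lam
        L, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= L:
                return k
            k += 1

    x, path = x0, []
    for _ in range(n_steps):
        x = sum(poisson(m_xi) for _ in range(x)) + int(rng.paretovariate(alpha))
        path.append(x)
    return path

path = gw_immigration_path(200)
```

The heavy-tailed immigration produces occasional large jumps that the subcritical branching then damps geometrically, which is the mechanism behind the $\alpha$-stable limits derived below.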
If \ $m_\xi := \operatorname{\mathbb{E}}(\xi) \in [0, 1)$ \ and \ $\sum_{\ell=1}^\infty \log(\ell) \operatorname{\mathbb{P}}(\varepsilon = \ell) < \infty$, \ then the Markov chain \ $(X_k)_{k\in\mathbb{Z}_+}$ \ admits a unique stationary distribution \ $\pi$, \ see, e.g., Quine \cite{Qui}. Note that if \ $m_\xi \in [0, 1)$ \ and \ $\operatorname{\mathbb{P}}(\varepsilon = 0) = 1$, \ then \ $\sum_{\ell=1}^\infty \log(\ell) \operatorname{\mathbb{P}}(\varepsilon = \ell) = 0$ \ and \ $\pi$ \ is the Dirac measure \ $\delta_0$ \ concentrated at the point \ $0$. \ In fact, \ $\pi = \delta_0$ \ if and only if \ $\operatorname{\mathbb{P}}(\varepsilon = 0) = 1$. \ Moreover, if \ $m_\xi = 0$ \ (which is equivalent to \ $\operatorname{\mathbb{P}}(\xi = 0) = 1$), \ then \ $\pi$ \ is the distribution of \ $\varepsilon$. In what follows, we formulate our assumptions valid for the whole paper. We assume that \ $m_\xi \in [0, 1)$ \ (so-called subcritical case) and \ $\varepsilon$ \ is regularly varying with index \ $\alpha \in (0, 2)$, \ i.e., \ $\operatorname{\mathbb{P}}(\varepsilon > x)\in\mathbb{R}_{++}$ \ for all \ $x\in\mathbb{R}_{++}$ \ and \[ \lim_{x\to\infty} \frac{\operatorname{\mathbb{P}}(\varepsilon > qx)}{\operatorname{\mathbb{P}}(\varepsilon > x)} = q^{-\alpha} \qquad \text{for all \ $q \in \mathbb{R}_{++}$.} \] Then \ $\operatorname{\mathbb{P}}(\varepsilon = 0) < 1$ \ and \ $\sum_{\ell=1}^\infty \log(\ell) \operatorname{\mathbb{P}}(\varepsilon = \ell) < \infty$, \ see, e.g., Barczy et al.\ \cite[Lemma E.5]{BarBosPap}, hence the Markov process \ $(X_k)_{k\in\mathbb{Z}_+}$ \ admits a unique stationary distribution \ $\pi$. \ We suppose that \ $X_0 \distre \pi$, \ yielding that the Markov chain \ $(X_k)_{k\in\mathbb{Z}_+}$ \ is strongly stationary. In case of \ $\alpha \in [1, 2)$, \ we suppose additionally that \ $\operatorname{\mathbb{E}}(\xi^2) < \infty$.
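The defining limit of regular variation can be checked empirically for a concrete heavy-tailed law; here $\varepsilon$ is taken Pareto with index $\alpha$, an illustrative choice consistent with, but not taken from, the assumptions above:

```python
import random

alpha, q, x = 1.5, 2.0, 5.0  # illustrative tail index and test points
rng = random.Random(0)
samples = [rng.paretovariate(alpha) for _ in range(400_000)]

# empirical tail probabilities P(eps > x) and P(eps > q*x)
p_x = sum(s > x for s in samples) / len(samples)
p_qx = sum(s > q * x for s in samples) / len(samples)

# regular variation with index alpha: P(eps > q*x) / P(eps > x) -> q^(-alpha)
ratio = p_qx / p_x
assert abs(ratio - q ** (-alpha)) < 0.05
```

For the exact Pareto law the ratio equals $q^{-\alpha}$ for every $x$; the Monte Carlo estimate merely illustrates the tail-ratio criterion that the assumptions impose asymptotically.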
\ By Basrak et al. \cite[Theorem 2.1.1]{BasKulPal} (see also Theorem \ref{Xtail}), \ $X_0$ \ is regularly varying with index \ $\alpha$, \ yielding the existence of a sequence \ $(a_N)_{N\in\mathbb{N}}$ \ in \ $\mathbb{R}_{++}$ \ with \ $N \operatorname{\mathbb{P}}(X_0 > a_N) \to 1$ \ as \ $N \to \infty$, \ see, e.g., Lemma \ref{a_n}. Let us fix an arbitrary sequence \ $(a_N)_{N\in\mathbb{N}}$ \ in \ $\mathbb{R}_{++}$ \ with this property. In fact, \ $a_N = N^{\frac{1}{\alpha} } L(N)$, \ $N \in \mathbb{N}$, \ for some slowly varying continuous function \ $L : \mathbb{R}_{++} \to \mathbb{R}_{++}$, \ see, e.g., Araujo and Gin\'e \cite[Exercise 6 on page 90]{AraGin}. Let \ $X^{(j)} = (X^{(j)}_k)_{k\in\mathbb{Z}_+}$, \ $j \in \mathbb{N}$, \ be a sequence of independent copies of \ $(X_k)_{k\in\mathbb{Z}_+}$. \ We mention that we consider so-called idiosyncratic immigrations, i.e., the immigrations \ $(\varepsilon^{(j)}_k)_{k\in\mathbb{N}}$, \ $j \in \mathbb{N}$, \ belonging to \ $(X^{(j)}_k)_{k\in\mathbb{Z}_+}$, \ $j \in \mathbb{N}$, \ are independent. One could study the case of common immigrations as well, i.e., when \ $(\varepsilon^{(j)}_k)_{k\in\mathbb{N}} = (\varepsilon^{(1)}_k)_{k\in\mathbb{N}}$, \ $j \in \mathbb{N}$.
\begin{Thm}\label{simple_aggregation1_stable_fd}
For each \ $k \in \mathbb{Z}_+$,
\begin{equation}\label{help6_simple_aggregation1_stable_fd}
\begin{aligned}
&\biggl(\frac{1}{a_N} \sum_{j=1}^{\lfloor Nt\rfloor} \Bigl(X^{(j)}_0 - \operatorname{\mathbb{E}}\bigl(X^{(j)}_0 \mathbbm{1}_{\{X^{(j)}_0\leqslant a_N\}}\bigr), \ldots, X^{(j)}_k - \operatorname{\mathbb{E}}\bigl(X^{(j)}_k \mathbbm{1}_{\{X^{(j)}_k\leqslant a_N\}}\bigr)\Bigr)^\top\biggr)_{t\in\mathbb{R}_+} \\
&= \biggl(\frac{1}{a_N} \sum_{j=1}^{\lfloor Nt\rfloor} \bigl(X^{(j)}_0, \ldots, X^{(j)}_k\bigr)^\top - \frac{\lfloor Nt\rfloor}{a_N} \operatorname{\mathbb{E}}\bigl(X_0 \mathbbm{1}_{\{X_0\leqslant a_N\}}\bigr) {\boldsymbol{1}}_{k+1}\biggr)_{t\in\mathbb{R}_+} \distr \bigl(\boldsymbol{\mathcal{X}}_t^{(k,\alpha)}\bigr)_{t\in\mathbb{R}_+}
\end{aligned}
\end{equation}
as \ $N\to\infty$, \ where \ $\bigl(\boldsymbol{\mathcal{X}}_t^{(k,\alpha)}\bigr)_{t\in\mathbb{R}_+}$ \ is a \ $(k + 1)$-dimensional \ $\alpha$-stable process such that the characteristic function of the distribution \ $\mu_{k,\alpha}$ \ of \ $\boldsymbol{\mathcal{X}}_1^{(k,\alpha)}$ \ has the form
\begin{align*}
&\widehat{\mu_{k,\alpha}}({\boldsymbol{\theta}}) \\
&= \exp\biggl\{(1 - m_\xi^\alpha) \sum_{j=0}^k \int_0^\infty \biggl(\mathrm{e}^{\mathrm{i}\langle{\boldsymbol{\theta}},{\boldsymbol{v}}_j^{(k)}\rangle u} - 1 - \mathrm{i} u \sum_{\ell=j+1}^{k+1} \langle{\boldsymbol{e}}_\ell, {\boldsymbol{\theta}}\rangle \langle{\boldsymbol{e}}_\ell, {\boldsymbol{v}}_j^{(k)}\rangle \mathbbm{1}_{(0,1]}(u\langle{\boldsymbol{e}}_\ell, {\boldsymbol{v}}_j^{(k)}\rangle)\biggr) \alpha u^{-1-\alpha} \, \mathrm{d} u\biggr\}
\end{align*}
for \ ${\boldsymbol{\theta}} \in \mathbb{R}^{k+1}$ \ with the \ $(k + 1)$-dimensional vectors
\[
{\boldsymbol{v}}_0^{(k)} := (1 - m_\xi^\alpha)^{-\frac{1}{\alpha}} \begin{bmatrix} 1 \\ m_\xi \\ m_\xi^2 \\ \vdots \\ m_\xi^k \end{bmatrix} , \quad
{\boldsymbol{v}}_1^{(k)} := \begin{bmatrix} 0 \\ 1 \\ m_\xi \\ \vdots \\ m_\xi^{k-1} \end{bmatrix} , \quad {\boldsymbol{v}}_2^{(k)} := \begin{bmatrix} 0 \\ 0 \\ 1 \\ \vdots \\ m_\xi^{k-2} \end{bmatrix} , \quad \ldots , \quad {\boldsymbol{v}}_k^{(k)} := \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix} .
\]
Moreover, for \ ${\boldsymbol{\theta}} \in \mathbb{R}^{k+1}$,
\[
\widehat{\mu_{k,\alpha}}({\boldsymbol{\theta}}) = \begin{cases}
\exp\Bigl\{- C_\alpha (1 - m_\xi^\alpha) \sum_{j=0}^k |\langle{\boldsymbol{\theta}},{\boldsymbol{v}}_j^{(k)}\rangle|^\alpha \left(1 - \mathrm{i} \tan\left(\frac{\pi\alpha}{2}\right) \operatorname{sign}(\langle{\boldsymbol{\theta}},{\boldsymbol{v}}_j^{(k)}\rangle)\right) \\[2mm]
\phantom{\exp\Bigl\{} - \mathrm{i} \frac{\alpha}{1-\alpha} \langle{\boldsymbol{\theta}}, {\boldsymbol{1}}_{k+1}\rangle\Bigr\} , & \text{if \ $\alpha \ne 1$,} \\[4mm]
\exp\Bigl\{- C_1 (1 - m_\xi) \sum_{j=0}^k |\langle{\boldsymbol{\theta}},{\boldsymbol{v}}_j^{(k)}\rangle| \bigl(1 + \mathrm{i} \frac{2}{\pi} \operatorname{sign}(\langle{\boldsymbol{\theta}},{\boldsymbol{v}}_j^{(k)}\rangle) \log(|\langle{\boldsymbol{\theta}},{\boldsymbol{v}}_j^{(k)}\rangle|)\bigr) \\[2mm]
\phantom{\exp\Bigl\{} + \mathrm{i} C \langle{\boldsymbol{\theta}}, {\boldsymbol{1}}_{k+1}\rangle \\[2mm]
\phantom{\exp\Bigl\{} + \mathrm{i} (1 - m_\xi) \sum_{j=0}^k \sum_{\ell=j+1}^{k+1} \langle{\boldsymbol{e}}_\ell, {\boldsymbol{\theta}}\rangle \langle{\boldsymbol{e}}_\ell, {\boldsymbol{v}}_j^{(k)}\rangle \log(\langle{\boldsymbol{e}}_\ell, {\boldsymbol{v}}_j^{(k)}\rangle)\Bigr\} , & \text{if \ $\alpha = 1$,}
\end{cases}
\]
with the convention \ $0 \log(0) := 0$,
\[
C_\alpha := \begin{cases} \frac{\Gamma(2 - \alpha)}{1-\alpha} \cos\left(\frac{\pi\alpha}{2}\right) , & \text{if \ $\alpha \ne 1$,} \\ \frac{\pi}{2} , & \text{if \ $\alpha = 1$,} \end{cases}
\]
and
\begin{equation}\label{c}
C := \int_1^\infty u^{-2} \sin(u) \, \mathrm{d} u + \int_0^1 u^{-2} (\sin(u) - u) \, \mathrm{d} u .
\end{equation}
\end{Thm}

Note that \ $C$ \ exists and is finite, since \ $\int_1^\infty u^{-2} |\sin(u)| \, \mathrm{d} u \leqslant \int_1^\infty u^{-2} \, \mathrm{d} u = 1$, \ and, by L'H\^ospital's rule, \ $\lim_{u\to0} u^{-2} (\sin(u) - u) = 0$, \ hence the integrand \ $u^{-2} (\sin(u) - u)$ \ extends continuously to \ $[0, 1]$, \ so its integral over \ $[0, 1]$ \ is finite.

Note also that the scaling and the centering in \eqref{help6_simple_aggregation1_stable_fd} do not depend on \ $j$ \ or \ $k$, \ since the copies are independent and the process \ $(X_k)_{k\in\mathbb{Z}_+}$ \ is strongly stationary; in particular, \ $\operatorname{\mathbb{E}}\bigl(X^{(j)}_k \mathbbm{1}_{\{X^{(j)}_k\leqslant a_N\}}\bigr) = \operatorname{\mathbb{E}}(X_0 \mathbbm{1}_{\{X_0\leqslant a_N\}})$ \ for all \ $j \in \mathbb{N}$ \ and \ $k \in \mathbb{Z}_+$.

The next two remarks are devoted to some properties of \ $\mu_{k,\alpha}$.

\begin{Rem}\label{mu_properties1}
By the proof of Theorem \ref{simple_aggregation1_stable_fd} (see \eqref{fint}), the L\'evy measure of \ $\mu_{k,\alpha}$ \ is
\[
\nu_{k,\alpha}(B) = (1 - m_\xi^\alpha) \sum_{j=0}^k \|{\boldsymbol{v}}_j^{(k)}\|^\alpha \int_0^\infty \mathbbm{1}_B\left(u \frac{{\boldsymbol{v}}_j^{(k)}}{\|{\boldsymbol{v}}_j^{(k)}\|}\right) \alpha u^{-\alpha-1} \, \mathrm{d} u , \qquad B \in {\mathcal B}(\mathbb{R}^{k+1}_0) ,
\]
where the space \ $\mathbb{R}^{k+1}_0 := \mathbb{R}^{k+1} \setminus \{\boldsymbol{0}\}$ \ and its topological properties are discussed in Appendix \ref{App_vague}.
The radial part of \ $\nu_{k,\alpha}$ \ is \ $u^{-\alpha-1} \, \mathrm{d} u$, \ and the spherical part of \ $\nu_{k,\alpha}$ \ is any positive constant multiple of the measure \ $\sum_{j=0}^k \|{\boldsymbol{v}}_j^{(k)}\|^\alpha \epsilon_{{\boldsymbol{v}}_j^{(k)}/\|{\boldsymbol{v}}_j^{(k)}\|}$ \ on \ $\mathbb{S}^k$, \ where for any \ ${\boldsymbol{x}} \in \mathbb{R}^{k+1}$, \ $\epsilon_{{\boldsymbol{x}}}$ \ denotes the Dirac measure concentrated at the point \ ${\boldsymbol{x}}$. \ In particular, the support of \ $\nu_{k,\alpha}$ \ is \ $\bigcup_{j=0}^k (\mathbb{R}_{++} {\boldsymbol{v}}_j^{(k)})$. \ The vectors \ ${\boldsymbol{v}}_0^{(k)}$, \ldots, ${\boldsymbol{v}}_k^{(k)}$ \ form a basis of \ $\mathbb{R}^{k+1}$, \ hence no proper linear subspace \ $V$ \ of \ $\mathbb{R}^{k+1}$ \ covers the support of \ $\nu_{k,\alpha}$. \ Consequently, \ $\mu_{k,\alpha}$ \ is nondegenerate in the sense that there are no \ ${\boldsymbol{a}} \in \mathbb{R}^{k+1}$ \ and proper linear subspace \ $V$ \ of \ $\mathbb{R}^{k+1}$ \ such that \ ${\boldsymbol{a}} + V$ \ covers the support of \ $\mu_{k,\alpha}$, \ see, e.g., Sato \cite[Proposition 24.17 (ii)]{Sato}. \mbox{$\Box$}
\end{Rem}

\begin{Rem}\label{mu_properties2}
If \ $\alpha \in (0, 1)$, \ then, for each \ ${\boldsymbol{\theta}} \in \mathbb{R}^{k+1}$,
\[
\widehat{\mu_{k,\alpha}}({\boldsymbol{\theta}}) = \exp\biggl\{(1 - m_\xi^\alpha) \sum_{j=0}^k \int_0^\infty \bigl(\mathrm{e}^{\mathrm{i}\langle{\boldsymbol{\theta}},{\boldsymbol{v}}_j^{(k)}\rangle u} - 1\bigr) \alpha u^{-1-\alpha} \, \mathrm{d} u - \mathrm{i} \frac{\alpha}{1-\alpha} \langle{\boldsymbol{\theta}}, {\boldsymbol{1}}_{k+1}\rangle\biggr\} ,
\]
see the proof of Theorem \ref{simple_aggregation1_stable_fd}. Consequently, the drift of \ $\mu_{k,\alpha}$ \ is \ $- \frac{\alpha}{1-\alpha} {\boldsymbol{1}}_{k+1}$, \ see, e.g., Sato \cite[Remark 14.6]{Sato}.
\ This drift is nonzero, hence \ $\mu_{k,\alpha}$ \ is not strictly \ $\alpha$-stable, see, e.g., Sato \cite[Theorem 14.7 (iv) and Definition 13.2]{Sato}. The \ $1$-stable probability measure \ $\mu_{k,1}$ \ is not strictly \ $1$-stable, since the spherical part of its nonzero L\'evy measure \ $\nu_{k,1}$ \ is concentrated on \ $\mathbb{R}_+^{k+1} \cap \mathbb{S}^k$, \ and hence condition (14.12) in Sato \cite[Theorem 14.7 (v)]{Sato} is not satisfied. If \ $\alpha \in (1, 2)$, \ then, for each \ ${\boldsymbol{\theta}} \in \mathbb{R}^{k+1}$,
\[
\widehat{\mu_{k,\alpha}}({\boldsymbol{\theta}}) = \exp\biggl\{(1 - m_\xi^\alpha) \sum_{j=0}^k \int_0^\infty \bigl(\mathrm{e}^{\mathrm{i}\langle{\boldsymbol{\theta}},{\boldsymbol{v}}_j^{(k)}\rangle u} - 1 - \mathrm{i} \langle{\boldsymbol{\theta}},{\boldsymbol{v}}_j^{(k)}\rangle u\bigr) \alpha u^{-1-\alpha} \, \mathrm{d} u + \mathrm{i} \frac{\alpha}{\alpha-1} \langle{\boldsymbol{\theta}}, {\boldsymbol{1}}_{k+1}\rangle\biggr\} ,
\]
see the proof of Theorem \ref{simple_aggregation1_stable_fd}. Consequently, the center of \ $\mu_{k,\alpha}$ \ is \ $\frac{\alpha}{\alpha-1} {\boldsymbol{1}}_{k+1}$, \ which is in fact the expectation of \ $\mu_{k,\alpha}$; \ it is nonzero, and hence \ $\mu_{k,\alpha}$ \ is not strictly stable, see, e.g., Sato \cite[Theorem 14.7 (vi) and Definition 13.2]{Sato}. All in all, \ $\mu_{k,\alpha}$ \ is \ $\alpha$-stable but not strictly \ $\alpha$-stable for any \ $\alpha \in (0, 2)$. \ We also note that \ $\mu_{k,\alpha}$ \ is absolutely continuous, see, e.g., Sato \cite[Theorem 27.4 and Proposition 14.5]{Sato}. \mbox{$\Box$}
\end{Rem}

The centering in Theorem \ref{simple_aggregation1_stable_fd} can be simplified in case of \ $\alpha \ne 1$.
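Before turning to that simplification, we note that the constant \ $C$ \ in \eqref{c} is easy to evaluate numerically. The following sketch uses plain composite Simpson quadrature: the integrand of the second integral is extended by \ $0$ \ at the origin, and the first integral is truncated at \ $u = 500$, \ which is harmless since the tail beyond is of negligible size. The result, \ $C \approx 0.4228$, \ is numerically indistinguishable from \ $1 - \gamma$ \ with the Euler--Mascheroni constant \ $\gamma \approx 0.5772$.

```python
import math

def simpson(f, a, b, n):
    # composite Simpson rule with n (even) subintervals
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4.0 if i % 2 else 2.0) * f(a + i * h)
    return total * h / 3.0

# C = \int_1^\infty u^{-2} sin(u) du + \int_0^1 u^{-2} (sin(u) - u) du
part1 = simpson(lambda u: math.sin(u) / u**2, 1.0, 500.0, 500_000)
part2 = simpson(lambda u: (math.sin(u) - u) / u**2 if u > 0 else 0.0, 0.0, 1.0, 10_000)
C = part1 + part2
print(round(C, 4))  # 0.4228
```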
\ Namely, if \ $\alpha \in (0, 1]$, \ then for each \ $t \in \mathbb{R}_{++}$, \ by Lemma \ref{truncated_moments},
\begin{equation}\label{centering01T}
\begin{aligned}
\frac{\lfloor Nt\rfloor}{a_N} \operatorname{\mathbb{E}}(X_0 \mathbbm{1}_{\{X_0\leqslant a_N\}}) &= \frac{\lfloor Nt\rfloor}{N} \frac{\operatorname{\mathbb{E}}(X_0 \mathbbm{1}_{\{X_0\leqslant a_N\}})}{a_N\operatorname{\mathbb{P}}(X_0 > a_N)} N \operatorname{\mathbb{P}}(X_0> a_N) \\
&\to \begin{cases} \frac{\alpha}{1-\alpha} t & \text{for \ $\alpha \in (0, 1)$,} \\ \infty & \text{for \ $\alpha = 1$} \end{cases} \qquad \text{as \ $N \to \infty$.}
\end{aligned}
\end{equation}
In a similar way, if \ $\alpha \in (1, 2)$, \ then for each \ $t \in \mathbb{R}_{++}$,
\[
\frac{\lfloor Nt\rfloor}{a_N} \operatorname{\mathbb{E}}(X_0 \mathbbm{1}_{\{X_0\leqslant a_N\}}) = \frac{\lfloor Nt\rfloor}{a_N} \operatorname{\mathbb{E}}(X_0) - \frac{\lfloor Nt\rfloor}{a_N} \operatorname{\mathbb{E}}(X_0 \mathbbm{1}_{\{X_0>a_N\}}) ,
\]
where \ $\lim_{N\to\infty} \frac{\lfloor Nt\rfloor}{a_N} = \lim_{N\to\infty} t N^{1-\frac{1}{\alpha}} L(N)^{-1} = \infty$, \ and, by Lemma \ref{truncated_moments},
\begin{equation}\label{centering12T}
\frac{\lfloor Nt\rfloor}{a_N} \operatorname{\mathbb{E}}(X_0 \mathbbm{1}_{\{X_0>a_N\}}) \to \frac{\alpha}{\alpha-1} t \qquad \text{as \ $N \to \infty$.}
\end{equation}
This shows that in case of \ $\alpha \in (0, 1)$ \ there is no need for centering, and in case of \ $\alpha \in (1, 2)$ \ one can center with the expectation as well, while in case of \ $\alpha = 1$ \ neither omitting the centering nor centering with the expectation works, even if the expectation exists. More precisely, without centering in case of \ $\alpha\in(0,1)$, \ or with centering with the expectation in case of \ $\alpha\in(1,2)$, \ we have the following convergences.
\begin{Cor}\label{simple_aggregation1_stable_centering_fd}
In case of \ $\alpha \in (0, 1)$, \ for each \ $k \in \mathbb{Z}_+$, \ we have
\[
\biggl(\frac{1}{a_N} \sum_{j=1}^{\lfloor Nt\rfloor} \bigl(X^{(j)}_0, \ldots, X^{(j)}_k\bigr)^\top\biggr)_{t\in\mathbb{R}_+} \distr \Bigl(\boldsymbol{\mathcal{X}}_t^{(k,\alpha)} + \frac{\alpha}{1-\alpha} t {\boldsymbol{1}}_{k+1}\Bigr)_{t\in\mathbb{R}_+}
\]
as \ $N\to\infty$, \ and, in case of \ $\alpha \in (1, 2)$, \ for each \ $k \in \mathbb{Z}_+$, \ we have
\begin{equation}\label{help6_simple_aggregation1_stable_centering2_fd}
\begin{aligned}
&\biggl(\frac{1}{a_N} \sum_{j=1}^{\lfloor Nt\rfloor} \bigl(X^{(j)}_0 - \operatorname{\mathbb{E}}(X^{(j)}_0), \ldots, X^{(j)}_k - \operatorname{\mathbb{E}}(X^{(j)}_k)\bigr)^\top\biggr)_{t\in\mathbb{R}_+} \\
&= \biggl(\frac{1}{a_N} \sum_{j=1}^{\lfloor Nt\rfloor} \bigl(X^{(j)}_0, \ldots, X^{(j)}_k\bigr)^\top - \frac{\lfloor Nt\rfloor}{a_N} \operatorname{\mathbb{E}}(X_0) {\boldsymbol{1}}_{k+1}\biggr)_{t\in\mathbb{R}_+} \distr \Bigl(\boldsymbol{\mathcal{X}}_t^{(k,\alpha)} + \frac{\alpha}{1-\alpha} t {\boldsymbol{1}}_{k+1}\Bigr)_{t\in\mathbb{R}_+}
\end{aligned}
\end{equation}
as \ $N\to\infty$.
\ Moreover, \ $\bigl(\boldsymbol{\mathcal{X}}_t^{(k,\alpha)} + \frac{\alpha}{1-\alpha} t {\boldsymbol{1}}_{k+1}\bigr)_{t\in\mathbb{R}_+}$ \ is a \ $(k + 1)$-dimensional \ $\alpha$-stable process such that the characteristic function of \ $\boldsymbol{\mathcal{X}}_1^{(k,\alpha)} + \frac{\alpha}{1-\alpha} {\boldsymbol{1}}_{k+1}$ \ has the form
\begin{align*}
&\operatorname{\mathbb{E}}\Bigl(\exp\Bigl\{\mathrm{i}\Bigl\langle{\boldsymbol{\theta}},\boldsymbol{\mathcal{X}}_1^{(k,\alpha)} + \frac{\alpha}{1-\alpha} {\boldsymbol{1}}_{k+1}\Bigr\rangle\Bigr\}\Bigr) \\
&= \begin{cases}
\exp\biggl\{(1 - m_\xi^\alpha) \sum_{j=0}^k \int_0^\infty \bigl(\mathrm{e}^{\mathrm{i}\langle{\boldsymbol{\theta}},{\boldsymbol{v}}_j^{(k)}\rangle u} - 1\bigr) \alpha u^{-1-\alpha} \, \mathrm{d} u\biggr\} , &\text{if \ $\alpha \in (0, 1)$,} \\[3mm]
\exp\biggl\{(1 - m_\xi^\alpha) \sum_{j=0}^k \int_0^\infty \bigl(\mathrm{e}^{\mathrm{i}\langle{\boldsymbol{\theta}},{\boldsymbol{v}}_j^{(k)}\rangle u} - 1 - \mathrm{i} \langle{\boldsymbol{\theta}},{\boldsymbol{v}}_j^{(k)}\rangle u\bigr) \alpha u^{-1-\alpha} \, \mathrm{d} u\biggr\} , &\text{if \ $\alpha \in (1, 2)$,}
\end{cases} \\
&=\exp\biggl\{- C_\alpha (1 - m_\xi^\alpha) \sum_{j=0}^k |\langle{\boldsymbol{\theta}}, {\boldsymbol{v}}_j^{(k)}\rangle|^\alpha \left(1 - \mathrm{i} \tan\left(\frac{\pi\alpha}{2}\right) \operatorname{sign}(\langle{\boldsymbol{\theta}}, {\boldsymbol{v}}_j^{(k)}\rangle)\right)\biggr\} , \quad \text{if \ $\alpha \ne 1$,}
\end{align*}
for \ ${\boldsymbol{\theta}} \in \mathbb{R}^{k+1}$.
\end{Cor}

Note that in case of \ $\alpha \in (1, 2)$, \ the scaling and the centering in \eqref{help6_simple_aggregation1_stable_centering2_fd} do not depend on \ $j$ \ or \ $k$, \ since the copies are independent and the process \ $(X_k)_{k\in\mathbb{Z}_+}$ \ is strongly stationary; in particular, \ $\operatorname{\mathbb{E}}\bigl(X^{(j)}_k\bigr) = \operatorname{\mathbb{E}}(X_0) = \frac{m_\varepsilon}{1-m_\xi}$ \ for all \ $j \in \mathbb{N}$ \ and \ $k \in \mathbb{Z}_+$ \ with \ $m_\varepsilon := \operatorname{\mathbb{E}}(\varepsilon)$, \ see, e.g., Barczy et al.\ \cite[formula (14)]{BarNedPap2}.

The next remark is devoted to some distributional properties of the \ $\alpha$-stable process \ $\bigl(\boldsymbol{\mathcal{X}}_t^{(k,\alpha)} + \frac{\alpha}{1-\alpha} t{\boldsymbol{1}}_{k+1}\bigr)_{t\in\mathbb{R}_+}$ \ in case of \ $\alpha \ne 1$.

\begin{Rem}\label{varrho_properties}
The L\'evy measure of the distribution of \ $\boldsymbol{\mathcal{X}}_1^{(k,\alpha)} + \frac{\alpha}{1-\alpha} {\boldsymbol{1}}_{k+1}$ \ is the same as that of \ $\boldsymbol{\mathcal{X}}_1^{(k,\alpha)}$, \ namely, \ $\nu_{k,\alpha}$ \ given in Remark \ref{mu_properties1}. If \ $\alpha \in (0, 1)$, \ then the drift of the distribution of \ $\boldsymbol{\mathcal{X}}_1^{(k,\alpha)} + \frac{\alpha}{1-\alpha} {\boldsymbol{1}}_{k+1}$ \ is \ $\boldsymbol{0}$, \ hence the process \ $\bigl(\boldsymbol{\mathcal{X}}_t^{(k,\alpha)} + \frac{\alpha}{1-\alpha} t{\boldsymbol{1}}_{k+1}\bigr)_{t\in\mathbb{R}_+}$ \ is strictly \ $\alpha$-stable, see, e.g., Sato \cite[Theorem 14.7 (iv)]{Sato}. If \ $\alpha \in (1, 2)$, \ then the center, i.e., the expectation of \ $\boldsymbol{\mathcal{X}}_1^{(k,\alpha)} + \frac{\alpha}{1-\alpha} {\boldsymbol{1}}_{k+1}$ \ is \ $\boldsymbol{0}$, \ hence the process \ $\bigl(\boldsymbol{\mathcal{X}}_t^{(k,\alpha)} + \frac{\alpha}{1-\alpha} t{\boldsymbol{1}}_{k+1}\bigr)_{t\in\mathbb{R}_+}$ \ is strictly \ $\alpha$-stable, see, e.g., Sato \cite[Theorem 14.7 (vi)]{Sato}.
All in all, \ $\bigl(\boldsymbol{\mathcal{X}}_t^{(k,\alpha)} + \frac{\alpha}{1-\alpha} t{\boldsymbol{1}}_{k+1}\bigr)_{t\in\mathbb{R}_+}$ \ is strictly \ $\alpha$-stable for any \ $\alpha \ne 1$. \ We also note that for each \ $t \in \mathbb{R}_{++}$, \ the distribution of \ $\boldsymbol{\mathcal{X}}_t^{(k,\alpha)} + \frac{\alpha}{1-\alpha} t {\boldsymbol{1}}_{k+1}$ \ is absolutely continuous, see, e.g., Sato \cite[Theorem 27.4 and Proposition 14.5]{Sato}. \mbox{$\Box$}
\end{Rem}

Let \ $\bigl({\mathcal Y}^{(\alpha)}_k\bigr)_{k\in\mathbb{Z}_+}$ \ be a strongly stationary process such that
\begin{align}\label{calY_def}
\bigl({\mathcal Y}^{(\alpha)}_k\bigr)_{k\in\{0,\ldots,K\}} \distre \boldsymbol{\mathcal{X}}_1^{(K,\alpha)} \qquad \text{for each \ $K \in \mathbb{Z}_+$.}
\end{align}
The existence of \ $\bigl({\mathcal Y}^{(\alpha)}_k\bigr)_{k\in\mathbb{Z}_+}$ \ follows from the Kolmogorov extension theorem. Its strong stationarity is a consequence of \eqref{help6_simple_aggregation1_stable_fd} together with the strong stationarity of \ $(X_k)_{k\in\mathbb{Z}_+}$.
\ We note that the common distribution of \ ${\mathcal Y}^{(\alpha)}_k$, \ $k \in \mathbb{Z}_+$, \ depends only on \ $\alpha$ \ and not on \ $m_\xi$, \ since its characteristic function has the form
\begin{align*}
&\operatorname{\mathbb{E}}\bigl(\mathrm{e}^{\mathrm{i}\vartheta{\mathcal Y}^{(\alpha)}_0}\bigr) = \operatorname{\mathbb{E}}\bigl(\mathrm{e}^{\mathrm{i}\vartheta\boldsymbol{\mathcal{X}}_1^{(0,\alpha)}}\bigr) \\
&= \exp\biggl\{(1 - m_\xi^\alpha) \int_0^\infty \Bigl(\mathrm{e}^{\mathrm{i}\vartheta(1-m_\xi^\alpha)^{-\frac{1}{\alpha}}u} - 1 - \mathrm{i} u \vartheta (1-m_\xi^\alpha)^{-\frac{1}{\alpha}} \mathbbm{1}_{(0,1]}(u (1-m_\xi^\alpha)^{-\frac{1}{\alpha}})\Bigr) \alpha u^{-1-\alpha} \, \mathrm{d} u\biggr\} \\
&= \exp\biggl\{\int_0^\infty \bigl(\mathrm{e}^{\mathrm{i}\vartheta v} - 1 - \mathrm{i} \vartheta v \mathbbm{1}_{(0,1]}(v)\bigr) \alpha v^{-1-\alpha} \, \mathrm{d} v\biggr\} , \qquad \vartheta \in \mathbb{R} .
\end{align*}

\begin{Pro}\label{Pro_AR1}
For each \ $\alpha \in (0, 2)$, \ the strongly stationary process \ $\bigl({\mathcal Y}^{(\alpha)}_k\bigr)_{k\in\mathbb{Z}_+}$ \ is a subcritical autoregressive process of order 1 with autoregressive coefficient \ $m_\xi$ \ and with \ $\alpha$-stable innovations, namely,
\[
{\mathcal Y}^{(\alpha)}_k = m_\xi {\mathcal Y}^{(\alpha)}_{k-1} + \widetilde{\varepsilon}^{(\alpha)}_k , \qquad k \in \mathbb{N} ,
\]
where
\[
\widetilde{\varepsilon}^{(\alpha)}_k := {\mathcal Y}^{(\alpha)}_k - m_\xi {\mathcal Y}^{(\alpha)}_{k-1}, \qquad k \in \mathbb{N},
\]
is a sequence of independent, identically distributed \ $\alpha$-stable random variables such that for all \ $k\in\mathbb{N}$, \ $\widetilde{\varepsilon}^{(\alpha)}_k$ \ is independent of \ $({\mathcal Y}^{(\alpha)}_0, \ldots, {\mathcal Y}^{(\alpha)}_{k-1})^\top$.
\ Therefore, \ $\bigl({\mathcal Y}^{(\alpha)}_k\bigr)_{k\in\mathbb{Z}_+}$ \ is a strongly stationary, time homogeneous Markov process.
\end{Pro}

Theorem \ref{simple_aggregation1_stable_fd} and Corollary \ref{simple_aggregation1_stable_centering_fd} have the following consequences for a contemporaneous aggregation of independent copies with different centerings.

\begin{Cor}\label{aggr_copies1}
\renewcommand{\labelenumi}{{\rm(\roman{enumi})}}
\begin{enumerate}
\item
For each \ $\alpha \in (0, 2)$,
\begin{align*}
\biggl(\frac{1}{a_N} \sum_{j=1}^N \Bigl(X^{(j)}_k - \operatorname{\mathbb{E}}\bigl(X^{(j)}_k \mathbbm{1}_{\{X^{(j)}_k\leqslant a_N\}}\bigr)\Bigr)\biggr)_{k\in\mathbb{Z}_+} &= \biggl(\frac{1}{a_N} \sum_{j=1}^N X^{(j)}_k - \frac{N}{a_N} \operatorname{\mathbb{E}}\bigl(X_0 \mathbbm{1}_{\{X_0\leqslant a_N\}}\bigr)\biggr)_{k\in\mathbb{Z}_+} \\
&\distrf \bigl({\mathcal Y}^{(\alpha)}_k\bigr)_{k\in\mathbb{Z}_+} \qquad \text{as \ $N \to \infty$,}
\end{align*}
\item
in case of \ $\alpha \in (0, 1)$,
\[
\biggl(\frac{1}{a_N} \sum_{j=1}^N X^{(j)}_k\biggr)_{k\in\mathbb{Z}_+} \distrf \Bigl({\mathcal Y}^{(\alpha)}_k + \frac{\alpha}{1-\alpha}\Bigr)_{k\in\mathbb{Z}_+} \qquad \text{as \ $N \to \infty$,}
\]
\item
in case of \ $\alpha \in (1, 2)$,
\begin{align*}
\biggl(\frac{1}{a_N} \sum_{j=1}^N \bigl(X^{(j)}_k - \operatorname{\mathbb{E}}\bigl(X^{(j)}_k\bigr)\bigr)\biggr)_{k\in\mathbb{Z}_+} &= \biggl(\frac{1}{a_N} \sum_{j=1}^N X^{(j)}_k - \frac{N}{a_N} \operatorname{\mathbb{E}}\bigl(X_0\bigr)\biggr)_{k\in\mathbb{Z}_+} \\
&\distrf \Bigl({\mathcal Y}^{(\alpha)}_k + \frac{\alpha}{1-\alpha}\Bigr)_{k\in\mathbb{Z}_+} \qquad \text{as \ $N \to \infty$,}
\end{align*}
\end{enumerate}
where \ $\bigl({\mathcal Y}^{(\alpha)}_k\bigr)_{k\in\mathbb{Z}_+}$ \ is given by \eqref{calY_def}.
\end{Cor}

Limit theorems will be presented for the aggregated stochastic process \ $\bigl(\sum_{k=1}^{\lfloor nt \rfloor} \sum_{j=1}^N X^{(j)}_k\bigr)_{t\in\mathbb{R}_+}$ \ with different centerings and scalings. We provide the limit theorems in an iterated manner, letting first \ $N$ \ and then \ $n$ \ converge to infinity.

\begin{Thm}\label{iterated_aggr_1}
In case of \ $\alpha \in (0, 1)$, \ we have
\begin{equation}\label{iterated_aggr_1_1}
\begin{aligned}
&{\mathcal D}_\mathrm{f}\text{-}\hspace*{-1mm}\lim_{n\to\infty} \, {\mathcal D}_\mathrm{f}\text{-}\hspace*{-1mm}\lim_{N\to\infty} \, \biggl(\frac{1}{n^{\frac{1}{\alpha}}a_N} \sum_{k=1}^{\lfloor nt\rfloor} \sum_{j=1}^N \Bigl(X^{(j)}_k - \operatorname{\mathbb{E}}\bigl(X^{(j)}_k \mathbbm{1}_{\{X^{(j)}_k\leqslant a_N\}}\bigr)\Bigr)\biggr)_{t\in\mathbb{R}_+} \\
&={\mathcal D}_\mathrm{f}\text{-}\hspace*{-1mm}\lim_{n\to\infty} \, {\mathcal D}_\mathrm{f}\text{-}\hspace*{-1mm}\lim_{N\to\infty} \, \biggl(\frac{1}{n^{\frac{1}{\alpha}}a_N} \sum_{k=1}^{\lfloor nt\rfloor} \sum_{j=1}^N X^{(j)}_k - \frac{{\lfloor nt\rfloor} N}{n^{\frac{1}{\alpha}}a_N} \operatorname{\mathbb{E}}\bigl(X_0 \mathbbm{1}_{\{X_0\leqslant a_N\}}\bigr)\biggr)_{t\in\mathbb{R}_+} \\
&= \Bigl({\mathcal Z}_t^{(\alpha)} + \frac{\alpha}{1-\alpha} t\Bigr)_{t\in\mathbb{R}_+} ,
\end{aligned}
\end{equation}
and
\begin{equation}\label{iterated_aggr_1_2}
{\mathcal D}_\mathrm{f}\text{-}\hspace*{-1mm}\lim_{n\to\infty} \, {\mathcal D}_\mathrm{f}\text{-}\hspace*{-1mm}\lim_{N\to\infty} \, \biggl(\frac{1}{n^{\frac{1}{\alpha}}a_N} \sum_{k=1}^{\lfloor nt\rfloor} \sum_{j=1}^N X^{(j)}_k\biggr)_{t\in\mathbb{R}_+} = \Bigl({\mathcal Z}_t^{(\alpha)} + \frac{\alpha}{1-\alpha} t\Bigr)_{t\in\mathbb{R}_+} ,
\end{equation}
in case of \ $\alpha = 1$, \ we have
\begin{equation}\label{iterated_aggr_1_3}
\begin{aligned}
&{\mathcal D}_\mathrm{f}\text{-}\hspace*{-1mm}\lim_{n\to\infty} \, {\mathcal D}_\mathrm{f}\text{-}\hspace*{-1mm}\lim_{N\to\infty} \, \biggl(\frac{1}{n\log(n)a_N} \sum_{k=1}^{\lfloor nt\rfloor} \sum_{j=1}^N \Bigl(X^{(j)}_k - \operatorname{\mathbb{E}}\bigl(X^{(j)}_k \mathbbm{1}_{\{X^{(j)}_k\leqslant a_N\}}\bigr)\Bigr)\biggr)_{t\in\mathbb{R}_+} \\
&={\mathcal D}_\mathrm{f}\text{-}\hspace*{-1mm}\lim_{n\to\infty} \, {\mathcal D}_\mathrm{f}\text{-}\hspace*{-1mm}\lim_{N\to\infty} \, \biggl(\frac{1}{n\log(n)a_N} \sum_{k=1}^{\lfloor nt\rfloor} \sum_{j=1}^N X^{(j)}_k - \frac{{\lfloor nt\rfloor} N}{n\log(n)a_N} \operatorname{\mathbb{E}}\bigl(X_0 \mathbbm{1}_{\{X_0\leqslant a_N\}}\bigr)\biggr)_{t\in\mathbb{R}_+} \\
&= (t)_{t\in\mathbb{R}_+} ,
\end{aligned}
\end{equation}
and in case of \ $\alpha \in (1, 2)$, \ we have
\begin{equation}\label{iterated_aggr_1_4}
\begin{aligned}
&{\mathcal D}_\mathrm{f}\text{-}\hspace*{-1mm}\lim_{n\to\infty} \, {\mathcal D}_\mathrm{f}\text{-}\hspace*{-1mm}\lim_{N\to\infty} \, \biggl(\frac{1}{n^{\frac{1}{\alpha}}a_N} \sum_{k=1}^{\lfloor nt\rfloor} \sum_{j=1}^N \bigl(X^{(j)}_k - \operatorname{\mathbb{E}}(X^{(j)}_k)\bigr)\biggr)_{t\in\mathbb{R}_+} \\
&={\mathcal D}_\mathrm{f}\text{-}\hspace*{-1mm}\lim_{n\to\infty} \, {\mathcal D}_\mathrm{f}\text{-}\hspace*{-1mm}\lim_{N\to\infty} \, \biggl(\frac{1}{n^{\frac{1}{\alpha}}a_N} \sum_{k=1}^{\lfloor nt\rfloor} \sum_{j=1}^N X^{(j)}_k - \frac{{\lfloor nt\rfloor} N}{n^{\frac{1}{\alpha}}a_N} \operatorname{\mathbb{E}}(X_0)\biggr)_{t\in\mathbb{R}_+} \\
&= \Bigl({\mathcal Z}_t^{(\alpha)} + \frac{\alpha}{1-\alpha} t\Bigr)_{t\in\mathbb{R}_+} ,
\end{aligned}
\end{equation}
\noindent
where \ $\bigl({\mathcal Z}_t^{(\alpha)}\bigr)_{t\in\mathbb{R}_+}$ \
is an \ $\alpha$-stable process such that the characteristic function of the distribution of \ ${\mathcal Z}_1^{(\alpha)}$ \ has the form
\[
\operatorname{\mathbb{E}}\bigl(\mathrm{e}^{\mathrm{i}\vartheta{\mathcal Z}_1^{(\alpha)}}\bigr) = \exp\biggl\{\mathrm{i} b_\alpha \vartheta + \frac{1-m_\xi^\alpha}{(1-m_\xi)^\alpha} \int_0^\infty (\mathrm{e}^{\mathrm{i}\vartheta u} - 1 - \mathrm{i} \vartheta u \mathbbm{1}_{(0,1]}(u)) \alpha u^{-1-\alpha} \, \mathrm{d} u\biggr\} , \qquad \vartheta \in \mathbb{R} ,
\]
where
\[
b_\alpha := \biggl(\frac{1-m_\xi^\alpha}{(1-m_\xi)^\alpha} - 1\biggr) \frac{\alpha}{1-\alpha} , \qquad \alpha \in (0, 1) \cup (1, 2) ,
\]
and \ $\bigl({\mathcal Z}_t^{(\alpha)} + \frac{\alpha}{1-\alpha} t\bigr)_{t\in\mathbb{R}_+}$ \ is an \ $\alpha$-stable process such that the characteristic function of the distribution of \ ${\mathcal Z}_1^{(\alpha)} + \frac{\alpha}{1-\alpha}$ \ has the form
\begin{align*}
&\operatorname{\mathbb{E}}\Bigl(\exp\Bigl\{\mathrm{i}\vartheta\Bigl({\mathcal Z}_1^{(\alpha)}+\frac{\alpha}{1-\alpha}\Bigr)\Bigr\}\Bigr) = \begin{cases}
\exp\Bigl\{\frac{1-m_\xi^\alpha}{(1-m_\xi)^\alpha} \int_0^\infty (\mathrm{e}^{\mathrm{i}\vartheta u} - 1) \alpha u^{-1-\alpha} \, \mathrm{d} u\Bigr\} , & \text{if \ $\alpha \in (0, 1)$,} \\[3mm]
\exp\Bigl\{\frac{1-m_\xi^\alpha}{(1-m_\xi)^\alpha} \int_0^\infty (\mathrm{e}^{\mathrm{i}\vartheta u} - 1 - \mathrm{i} \vartheta u) \alpha u^{-1-\alpha} \, \mathrm{d} u\Bigr\} , & \text{if \ $\alpha \in (1, 2)$,}
\end{cases} \\
&= \exp\biggl\{- C_\alpha \frac{1-m_\xi^\alpha}{(1-m_\xi)^\alpha} |\vartheta|^\alpha \left(1 - \mathrm{i} \tan\left(\frac{\pi\alpha}{2}\right) \operatorname{sign}(\vartheta)\right)\biggr\} , \quad \text{if \ $\alpha \in (0, 1) \cup (1, 2)$,}
\end{align*}
for \
$\vartheta \in \mathbb{R}$.
\end{Thm}

\begin{Rem}\label{Rem_char_spectral_proc}
Note that, in accordance with Basrak and Segers \cite[Remark 4.8]{BasSeg} and Mikosch and Wintenberger \cite[page 171]{MikWin}, in case of \ $\alpha \in (0, 1)$, \ we have
\begin{align}\label{help_kar_theta}
\begin{split}
&\operatorname{\mathbb{E}}\Bigl(\exp\Bigl\{\mathrm{i} \vartheta \Bigl({\mathcal Z}_1^{(\alpha)} + \frac{\alpha}{1-\alpha}\Bigr)\Bigr\}\Bigr) \\
&= \exp\left\{ - \int_0^\infty \operatorname{\mathbb{E}}\left[ \exp\Big( \mathrm{i} u \vartheta \sum_{\ell=1}^\infty \Theta_\ell \Big) - \exp\Big( \mathrm{i} u \vartheta \sum_{\ell=0}^\infty \Theta_\ell \Big) \right] \alpha u^{-\alpha-1}\,\mathrm{d} u \right\}
\end{split}
\end{align}
for \ $\vartheta \in \mathbb{R}$, \ where \ $(\Theta_\ell)_{\ell\in\mathbb{Z}_+}$ \ is the (forward) spectral tail process of \ $(X_\ell)_{\ell\in\mathbb{Z}_+}$ \ given in \eqref{spectral_tail_process} and \eqref{spectral_tail_process0}.
Indeed, by \eqref{help_Z1},
\begin{align*}
&\exp\left\{ - \int_0^\infty \operatorname{\mathbb{E}}\left[ \exp\Big( \mathrm{i} u \vartheta \sum_{\ell=1}^\infty \Theta_\ell \Big) - \exp\Big( \mathrm{i} u \vartheta \sum_{\ell=0}^\infty \Theta_\ell \Big) \right] \alpha u^{-\alpha-1}\,\mathrm{d} u \right\} \\
& = \exp\left\{ - \int_0^\infty \operatorname{\mathbb{E}}\left[ \exp\Big( \mathrm{i} u \vartheta \sum_{\ell=1}^\infty m_\xi^\ell \Big) - \exp\Big( \mathrm{i} u \vartheta \sum_{\ell=0}^\infty m_\xi^\ell \Big) \right] \alpha u^{-\alpha-1}\,\mathrm{d} u \right\} \\
& = \exp\left\{ - \int_0^\infty \left( \exp\Big( \mathrm{i} u \vartheta \frac{m_\xi}{1-m_\xi} \Big) - \exp\Big( \mathrm{i} u \vartheta \frac{1}{1-m_\xi} \Big) \right) \alpha u^{-\alpha-1}\,\mathrm{d} u \right\} \\
&= \exp\left\{ - \int_0^\infty \left( \exp\Big( \mathrm{i} u \frac{\vartheta m_\xi}{1-m_\xi} \Big) - 1 \right) \alpha u^{-\alpha-1}\,\mathrm{d} u + \int_0^\infty \left( \exp\Big( \mathrm{i} u \frac{\vartheta}{1-m_\xi} \Big) - 1 \right) \alpha u^{-\alpha-1}\,\mathrm{d} u \right\} \\
& = \exp\Bigg\{ C_\alpha \left\vert \frac{\vartheta m_\xi}{1-m_\xi} \right\vert^\alpha \left(1 - \mathrm{i} \tan\left(\frac{\pi\alpha}{2}\right) \operatorname{sign}\left(\frac{\vartheta m_\xi}{1-m_\xi}\right)\right) \\
&\phantom{= \exp\Bigg\{ \;} - C_\alpha \left\vert \frac{\vartheta}{1-m_\xi} \right\vert^\alpha \left(1 - \mathrm{i} \tan\left(\frac{\pi\alpha}{2}\right) \operatorname{sign}\left(\frac{\vartheta}{1-m_\xi}\right)\right) \Bigg\} \\
& = \exp\left\{ - C_\alpha \frac{1-m_\xi^\alpha}{(1-m_\xi)^\alpha} \vert\vartheta\vert^\alpha \left(1 - \mathrm{i} \tan\left(\frac{\pi\alpha}{2}\right) \operatorname{sign}\left(\frac{\vartheta}{1-m_\xi}\right)\right) \right\} ,
\end{align*}
as
desired. We also remark that, using \eqref{help_Z2}, one can check that \eqref{help_kar_theta} does not hold in case of \ $\alpha\in(1,2)$, \ which is somewhat unexpected in view of page 171 in Mikosch and Wintenberger \cite{MikWin}. \mbox{$\Box$}
\end{Rem}

\begin{Rem}\label{cZ_properties}
If \ $\alpha \in (0, 1)$, \ then the drift of the distribution of \ ${\mathcal Z}_1^{(\alpha)} + \frac{\alpha}{1-\alpha}$ \ is \ $0$, \ hence the process \ $\bigl({\mathcal Z}_t^{(\alpha)} + \frac{\alpha}{1-\alpha} t\bigr)_{t\in\mathbb{R}_+}$ \ is strictly \ $\alpha$-stable, see, e.g., Sato \cite[Theorem 14.7 (iv) and Definition 13.2]{Sato}. If \ $\alpha \in (1, 2)$, \ then the center, i.e., the expectation of \ ${\mathcal Z}_1^{(\alpha)} + \frac{\alpha}{1-\alpha}$ \ is \ $0$, \ hence the process \ $\bigl({\mathcal Z}_t^{(\alpha)} + \frac{\alpha}{1-\alpha} t\bigr)_{t\in\mathbb{R}_+}$ \ is strictly \ $\alpha$-stable, see, e.g., Sato \cite[Theorem 14.7 (vi) and Definition 13.2]{Sato}. All in all, the process \ $\bigl({\mathcal Z}_t^{(\alpha)} + \frac{\alpha}{1-\alpha}t\bigr)_{t\in\mathbb{R}_+}$ \ is strictly \ $\alpha$-stable for any \ $\alpha \ne 1$. \mbox{$\Box$}
\end{Rem}

\section{Proofs}
\label{Proofs}

\noindent{\bf Proof of Theorem \ref{simple_aggregation1_stable_fd}.}
Let \ $k \in \mathbb{Z}_+$. \ We are going to apply Theorem \ref{7.1} with \ $d = k + 1$ \ and \ ${\boldsymbol{X}}_{N,j} := a_N^{-1} (X_0^{(j)}, \ldots, X_k^{(j)})^\top$, \ $N, j \in \mathbb{N}$. \ The aim of the following discussion is to check condition \eqref{(7.5)} of Theorem \ref{7.1}, namely
\begin{equation}\label{vague_fd}
N \operatorname{\mathbb{P}}({\boldsymbol{X}}_{N,1} \in \cdot) = N \operatorname{\mathbb{P}}\bigl(a_N^{-1} (X_0^{(1)}, \ldots, X_k^{(1)})^\top \in \cdot\bigr) \distrv \nu_{k,\alpha}(\cdot) \qquad \text{on \ $\mathbb{R}_0^{k+1}$ \ as \ $N \to \infty$,}
\end{equation}
where \ $\nu_{k,\alpha}$ \ is a L\'evy measure on \ $\mathbb{R}_0^{k+1}$.
\ For each \ $N \in \mathbb{N}$ \ and \ $B \in {\mathcal B}(\mathbb{R}_0^{k+1})$, \ we can write
\[
N \operatorname{\mathbb{P}}({\boldsymbol{X}}_{N,1} \in B) = N \operatorname{\mathbb{P}}(X_0 > a_N) \frac{\operatorname{\mathbb{P}}(a_N^{-1} (X_0, \ldots, X_k)^\top \in B)} {\operatorname{\mathbb{P}}(X_0 > a_N)} .
\]
By the assumption, we have \ $N \operatorname{\mathbb{P}}(X_0 > a_N) \to 1$ \ as \ $N \to \infty$, \ yielding also \ $a_N \to \infty$ \ as \ $N \to \infty$, \ consequently, it is enough to show that
\begin{equation}\label{nu_alpha^k_old}
\frac{\operatorname{\mathbb{P}}(x^{-1} (X_0, \ldots, X_k)^\top \in \cdot)}{\operatorname{\mathbb{P}}(X_0 > x)} \distrv \nu_{k,\alpha}(\cdot) \qquad \text{on \ $\mathbb{R}_0^{k+1}$ \ as \ $x \to \infty$,}
\end{equation}
where \ $\nu_{k,\alpha}$ \ is a L\'evy measure on \ $\mathbb{R}_0^{k+1}$. \ In fact, by Theorem \ref{Xtailprocess}, \ $(X_0, \ldots, X_k)^\top$ \ is regularly varying with index \ $\alpha$, \ hence, by Proposition \ref{vague}, we know that
\begin{equation}\label{tnu_alpha^k_old}
\frac{\operatorname{\mathbb{P}}(x^{-1} (X_0, \ldots, X_k)^\top \in \cdot)}{\operatorname{\mathbb{P}}(\|(X_0, \ldots, X_k)^\top\| > x)} \distrv \widetilde{\nu}_{k,\alpha}(\cdot) \qquad \text{on \ $\mathbb{R}_0^{k+1}$ \ as \ $x \to \infty$,}
\end{equation}
where \ $\widetilde{\nu}_{k,\alpha}$ \ is the so-called limit measure of \ $(X_0, \ldots, X_k)^\top$.
\ Applying Proposition \ref{Pro_mapping} for the canonical projection \ $p_0 : \mathbb{R}^{k+1} \to \mathbb{R}$ \ given by \ $p_0({\boldsymbol{x}}) := x_0$ \ for \ ${\boldsymbol{x}} = (x_0, \ldots, x_k)^\top \in \mathbb{R}^{k+1}$, \ which is continuous and positively homogeneous of degree 1, we obtain
\[
\frac{\operatorname{\mathbb{P}}(X_0 > x)}{\operatorname{\mathbb{P}}(\|(X_0, \ldots, X_k)^\top\| > x)} \to \widetilde{\nu}_{k,\alpha}(T_1) \qquad \text{as \ $x \to \infty$,}
\]
with \ $T_1 := \{{\boldsymbol{x}} \in \mathbb{R}^{k+1}_0 : p_0({\boldsymbol{x}}) > 1\}$, \ where we have \ $\widetilde{\nu}_{k,\alpha}(T_1) \in (0, 1]$. \ Indeed, \ $\operatorname{\mathbb{P}}(X_0 > x) \leqslant \operatorname{\mathbb{P}}(\|(X_0, \ldots, X_k)^\top\| > x)$, \ hence \ $\widetilde{\nu}_{k,\alpha}(T_1) \leqslant 1$. \ Moreover, by the strong stationarity of \ $(X_k)_{k\in\mathbb{Z}_+}$, \ we have
\[
\operatorname{\mathbb{P}}(\|(X_0, \ldots, X_k)^\top\| > x) \leqslant \sum_{j=0}^k \operatorname{\mathbb{P}}(X_j > x / \sqrt{k+1}) = (k + 1) \operatorname{\mathbb{P}}(X_0 > x / \sqrt{k+1}) ,
\]
thus
\[
\frac{\operatorname{\mathbb{P}}(X_0 > x)}{\operatorname{\mathbb{P}}(\|(X_0, \ldots, X_k)^\top\| > x)} \geqslant \frac{\operatorname{\mathbb{P}}(X_0 > x)}{(k+1)\operatorname{\mathbb{P}}(X_0 > x / \sqrt{k+1})} \to (k + 1)^{-1-\frac{\alpha}{2}} \qquad \text{as \ $x \to \infty$,}
\]
since \ $X_0$ \ is regularly varying with index \ $\alpha$, \ hence \ $\widetilde{\nu}_{k,\alpha}(T_1) \in (0, 1]$, \ as desired. Consequently, \eqref{nu_alpha^k_old} holds with \ $\nu_{k,\alpha} = \widetilde{\nu}_{k,\alpha}/\widetilde{\nu}_{k,\alpha}(T_1)$. \ In general, one does not know whether \ $\nu_{k,\alpha}$ \ is a L\'evy measure on \ $\mathbb{R}_0^{k+1}$ \ or not. So, additional work is needed. We will determine \ $\nu_{k,\alpha}$ \ explicitly, using a result of Planini\'{c} and Soulier \cite{PlaSou}.
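As an illustrative sanity check (not part of the proof), when the marginal tail is exactly Pareto, \ $\operatorname{\mathbb{P}}(X_0 > y) = y^{-\alpha}$ \ for \ $y \geqslant 1$, \ the lower-bound ratio above is constant in \ $x$ \ and equals its limit \ $(k+1)^{-1-\alpha/2}$:

```python
import math

def pareto_ratio(alpha, k, x):
    """P(X_0 > x) / ((k + 1) * P(X_0 > x / sqrt(k + 1))) when the marginal
    tail is exactly Pareto: P(X_0 > y) = y**(-alpha) for y >= 1."""
    tail = lambda y: y ** (-alpha)
    return tail(x) / ((k + 1) * tail(x / math.sqrt(k + 1)))

alpha, k = 0.8, 3
limit = (k + 1) ** (-1 - alpha / 2)   # the claimed limit (k+1)^{-1-alpha/2}
for x in (10.0, 100.0, 1000.0):
    assert abs(pareto_ratio(alpha, k, x) - limit) < 1e-12
```

For a general regularly varying tail the ratio only converges to this value; the exact Pareto case makes the cancellation of the slowly varying part visible.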
The aim of the following discussion is to apply Theorem 3.1 in Planini\'{c} and Soulier \cite{PlaSou} in order to determine \ $\nu_{k,\alpha}$, \ namely, we will prove that for each Borel measurable function \ $f : \mathbb{R}_0^{k+1} \to \mathbb{R}_+$,
\begin{equation}\label{fint}
\int_{\mathbb{R}_0^{k+1}} f({\boldsymbol{x}}) \, \nu_{k,\alpha}(\mathrm{d}{\boldsymbol{x}}) = (1 - m_\xi^\alpha) \sum_{j=0}^k \int_0^\infty f(u{\boldsymbol{v}}_j^{(k)}) \alpha u^{-\alpha-1} \, \mathrm{d} u .
\end{equation}
Let \ $(X_\ell)_{\ell\in\mathbb{Z}}$ \ be a strongly stationary extension of \ $(X_\ell)_{\ell\in\mathbb{Z}_+}$. \ For each \ $i, j \in \mathbb{Z}$ \ with \ $i \leqslant j$, \ by Theorem \ref{Xtailprocess}, \ $(X_i, \ldots, X_j)^\top$ \ is regularly varying with index \ $\alpha$, \ hence, by the strong stationarity of \ $(X_k)_{k\in\mathbb{Z}}$ \ and the discussion above, we know that
\[
\frac{\operatorname{\mathbb{P}}(x^{-1} (X_i, \ldots, X_j)^\top \in \cdot)}{\operatorname{\mathbb{P}}(X_0 > x)} = \frac{\operatorname{\mathbb{P}}(x^{-1} (X_0, \ldots, X_{j-i})^\top \in \cdot)}{\operatorname{\mathbb{P}}(X_0 > x)} \distrv \nu_{i,j,\alpha}(\cdot) \qquad \text{on \ $\mathbb{R}_0^{j-i+1}$ \ as \ $x \to \infty$,}
\]
where \ $\nu_{i,j,\alpha} := \nu_{j-i,\alpha}$ \ is a non-null locally finite measure on \ $\mathbb{R}_0^{j-i+1}$. \ According to Basrak and Segers \cite[Theorem 2.1]{BasSeg}, there exists a sequence \ $(Y_\ell)_{\ell\in\mathbb{Z}}$ \ of random variables, called the (whole) tail process of \ $(X_\ell)_{\ell\in\mathbb{Z}}$, \ such that
\[
\operatorname{\mathbb{P}}(x^{-1} (X_i, \ldots, X_j)^\top \in \cdot \,|\, X_0 > x) \distrw \operatorname{\mathbb{P}}( (Y_i, \ldots, Y_j)^\top \in \cdot) \qquad \text{as \ $x \to \infty$.}
\]
Let \ $K$ \ be a random variable with geometric distribution
\[
\operatorname{\mathbb{P}}(K = k) = m_\xi^{\alpha k} (1 - m_\xi^\alpha) , \qquad k \in \mathbb{Z}_+ .
\]
Especially, if \ $m_\xi = 0$, \ then \ $\operatorname{\mathbb{P}}(K = 0) = 1$. \ If \ $m_\xi \in (0, 1)$, \ then we have
\begin{equation}\label{tail_process}
Y_\ell = \begin{cases}
m_\xi^\ell Y_0 , & \text{if \ $\ell \geqslant 0$,} \\
m_\xi^\ell Y_0 \mathbbm{1}_{\{K\geqslant-\ell\}} , & \text{if \ $\ell < 0$,}
\end{cases}
\end{equation}
where \ $Y_0$ \ is a random variable independent of \ $K$ \ with Pareto distribution
\[
\operatorname{\mathbb{P}}(Y_0 > y) = \begin{cases}
y^{-\alpha} , & \text{if \ $y \in [1, \infty)$,} \\
1 , & \text{if \ $y \in (-\infty, 1)$.}
\end{cases}
\]
Indeed, as shown in Basrak et al.~\cite[Lemma 3.1]{BasKulPal}, $(Y_\ell)_{\ell\in\mathbb{Z}_+}$ is the forward tail process of \ $(X_\ell)_{\ell\in\mathbb{Z}}$. \ On the other hand, by Janssen and Segers \cite[Example 6.2]{JanSeg}, \ $(Y_\ell)_{\ell\in\mathbb{Z}}$ \ is the tail process of the stationary solution \ $(X_\ell')_{\ell\in \mathbb{Z}}$ \ to the stochastic recurrence equation \ $X_\ell'=\mu_A X_{\ell-1}' + B_\ell$, \ $\ell \in \mathbb{Z}$. \ Since the distribution of the forward tail process determines the distribution of the (whole) tail process (see Basrak and Segers~\cite[Theorem 3.1 (ii)]{BasSeg}), it follows that \ $(Y_\ell)_{\ell\in\mathbb{Z}}$ \ represents the tail process of \ $(X_\ell)_{\ell\in\mathbb{Z}}$. \ If \ $m_\xi = 0$, \ then one can easily check that
\begin{equation}\label{tail_process0}
Y_\ell = \begin{cases}
Y_0 , & \text{if \ $\ell = 0$,} \\
0 , & \text{if \ $\ell \ne 0$.}
\end{cases}
\end{equation}
By \eqref{tail_process} and \eqref{tail_process0}, we have \ $Y_\ell \as 0$ \ as \ $\ell \to \infty$ \ or \ $\ell \to -\infty$, \ hence condition (3.1) in Planini\'{c} and Soulier \cite{PlaSou} is satisfied.
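The structure of the tail process \eqref{tail_process} can be illustrated by direct simulation; the sketch below (not from the paper, and using hypothetical function names) draws \ $Y_0$ \ by inverse transform from the Pareto law and \ $K$ \ by inverting the geometric cdf:

```python
import random

def sample_tail_process(m_xi, alpha, ells, rng):
    """One illustrative draw of (Y_ell) following the display above:
    Y_0 is Pareto(alpha), K is geometric with P(K = k) = m_xi**(alpha*k) * (1 - m_xi**alpha),
    and Y_ell = m_xi**ell * Y_0 for ell >= 0, Y_ell = m_xi**ell * Y_0 * 1{K >= -ell} for ell < 0."""
    y0 = (1.0 - rng.random()) ** (-1.0 / alpha)   # inverse-transform Pareto sample
    p = 1.0 - m_xi ** alpha                        # P(K = 0)
    u, k, cdf = rng.random(), 0, p
    while u > cdf:                                 # invert the geometric cdf
        k += 1
        cdf += p * m_xi ** (alpha * k)
    return {ell: (m_xi ** ell * y0 if (ell >= 0 or k >= -ell) else 0.0)
            for ell in ells}

rng = random.Random(42)
path = sample_tail_process(0.5, 1.2, range(-5, 6), rng)
assert path[0] >= 1.0                  # the Pareto support is [1, infinity)
assert path[5] == 0.5 ** 5 * path[0]   # forward part: Y_ell = m_xi**ell * Y_0
```

The forward part decays geometrically, while the backward part is cut off at \ $\ell = -K$, \ matching the two cases of \eqref{tail_process}.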
Moreover, there exists a unique measure \ $\nu_\alpha$ \ on \ $\mathbb{R}^\mathbb{Z}$ \ endowed with the cylindrical $\sigma$-algebra \ ${\mathcal B}(\mathbb{R})^{\otimes\mathbb{Z}}$ \ such that \ $\nu_\alpha(\{{\boldsymbol{0}}\}) = 0$ \ and for each \ $i, j \in \mathbb{Z}$ \ with \ $i \leqslant j$, \ we have \ $\nu_\alpha \circ p_{i,j}^{-1} = \nu_{i,j,\alpha}$ \ on \ $\mathbb{R}^{j-i+1}_0$, \ where \ $p_{i,j}$ \ denotes the canonical projection \ $p_{i,j} : \mathbb{R}^\mathbb{Z} \to \mathbb{R}^{j-i+1}$ \ given by \ $p_{i,j}({\boldsymbol{y}}) := (y_i, \ldots, y_j)$ \ for \ ${\boldsymbol{y}} = (y_\ell)_{\ell\in\mathbb{Z}} \in \mathbb{R}^\mathbb{Z}$, \ see, e.g., Planini\'{c} and Soulier \cite{PlaSou}. The measure \ $\nu_\alpha$ \ is called the tail measure of \ $(X_\ell)_{\ell\in\mathbb{Z}}$. If \ $m_\xi \in (0, 1)$, \ then, by \eqref{tail_process}, the (whole) spectral tail process \ ${\boldsymbol{\Theta}} = (\Theta_\ell)_{\ell\in\mathbb{Z}}$ \ of \ $(X_\ell)_{\ell\in\mathbb{Z}}$ \ is given by
\begin{equation}\label{spectral_tail_process}
\Theta_\ell := \frac{Y_\ell}{|Y_0|} = \begin{cases}
m_\xi^\ell , & \text{if \ $\ell \geqslant 0$,} \\
m_\xi^\ell \mathbbm{1}_{\{K\geqslant-\ell\}} , & \text{if \ $\ell < 0$.}
\end{cases}
\end{equation}
If \ $m_\xi = 0$, \ then, by \eqref{tail_process0},
\begin{equation}\label{spectral_tail_process0}
\Theta_\ell := \frac{Y_\ell}{|Y_0|} = \begin{cases}
1 , & \text{if \ $\ell = 0$,} \\
0 , & \text{if \ $\ell \ne 0$.}
\end{cases}
\end{equation}
Let us introduce the so-called infargmax functional \ $I : \mathbb{R}^\mathbb{Z} \to \mathbb{Z} \cup \{-\infty, \infty\}$.
\ For \ ${\boldsymbol{y}} = (y_\ell)_{\ell\in\mathbb{Z}} \in \mathbb{R}^\mathbb{Z}$, \ the value \ $I({\boldsymbol{y}})$ \ is the first time when the supremum \ $\sup_{\ell\in\mathbb{Z}} |y_\ell|$ \ is achieved, more precisely,
\[
I({\boldsymbol{y}}) := \begin{cases}
\ell \in \mathbb{Z} , & \text{if \ $\sup\limits_{m\leqslant\ell-1} |y_m| < |y_\ell|$ \ and \ $\sup\limits_{m\geqslant\ell+1} |y_m| \leqslant |y_\ell|$,} \\
-\infty , & \text{if \ $\sup\limits_{m\leqslant\ell} |y_m| = \sup\limits_{m\in\mathbb{Z}} |y_m|$ \ for all \ $\ell \in \mathbb{Z}$,} \\
\infty , & \text{if \ $\sup\limits_{m\leqslant\ell} |y_m| < \sup\limits_{m\in\mathbb{Z}} |y_m|$ \ for all \ $\ell \in \mathbb{Z}$.}
\end{cases}
\]
We have \ $\operatorname{\mathbb{P}}(I({\boldsymbol{\Theta}}) = -K) = 1$, \ hence the condition \ $\operatorname{\mathbb{P}}(I({\boldsymbol{\Theta}}) \in \mathbb{Z}) = 1$ \ of Theorem 3.1 in Planini\'{c} and Soulier \cite{PlaSou} is satisfied. Consequently, we may apply Theorem 3.1 in Planini\'{c} and Soulier \cite{PlaSou} for the nonnegative measurable function \ $H : \mathbb{R}^\mathbb{Z} \to \mathbb{R}_+$ \ given by \ $H := f \circ p_{0,k}$, \ where \ $f : \mathbb{R}^{k+1} \to \mathbb{R}_+$ \ is a measurable function with \ $f({\boldsymbol{0}}) = 0$.
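The identity \ $I({\boldsymbol{\Theta}}) = -K$ \ can be checked concretely on finitely supported sequences: for \ $m_\xi \in (0,1)$ \ the nonzero entries of \ ${\boldsymbol{\Theta}}$ \ are \ $m_\xi^\ell$ \ for \ $\ell \geqslant -K$, \ so the unique maximum of \ $|\Theta_\ell|$ \ sits at \ $\ell = -K$. A minimal sketch (function names are ours, not the paper's):

```python
def infargmax(y):
    """Infargmax functional I restricted to finitely supported sequences,
    given as a dict {ell: y_ell}; entries outside the dict are treated as 0.
    Returns the smallest index attaining sup |y_ell|."""
    sup = max(abs(v) for v in y.values())
    return min(ell for ell, v in y.items() if abs(v) == sup)

def spectral_theta(m_xi, K, ells):
    """Spectral tail process of the display above, for a fixed value K:
    Theta_ell = m_xi**ell for ell >= -K and 0 otherwise."""
    return {ell: (m_xi ** ell if (ell >= 0 or K >= -ell) else 0.0)
            for ell in ells}

for K in range(4):
    theta = spectral_theta(0.5, K, range(-10, 11))
    assert infargmax(theta) == -K   # consistent with P(I(Theta) = -K) = 1
```

Since \ $m_\xi^{-K} > m_\xi^\ell$ \ for every other nonzero index \ $\ell > -K$, \ the supremum is attained for the first (and only) time at \ $-K$.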
\ By (3.2) in Planini\'{c} and Soulier \cite{PlaSou}, we obtain
\begin{align*}
\int_{\mathbb{R}_0^{k+1}} f({\boldsymbol{x}}) \, \nu_{k,\alpha}(\mathrm{d}{\boldsymbol{x}}) &= \int_{\mathbb{R}^{k+1}} f({\boldsymbol{x}}) \, \nu_{0,k,\alpha}(\mathrm{d}{\boldsymbol{x}}) = \int_{\mathbb{R}^{k+1}} f({\boldsymbol{x}}) \, (\nu_\alpha \circ p_{0,k}^{-1})(\mathrm{d}{\boldsymbol{x}}) = \int_{\mathbb{R}^\mathbb{Z}} f(p_{0,k}({\boldsymbol{y}})) \, \nu_\alpha(\mathrm{d}{\boldsymbol{y}}) \\
&= \int_{\mathbb{R}^\mathbb{Z}} H({\boldsymbol{y}}) \, \nu_\alpha(\mathrm{d}{\boldsymbol{y}}) = \sum_{\ell\in\mathbb{Z}} \int_0^\infty \operatorname{\mathbb{E}}(H(uL^\ell({\boldsymbol{\Theta}})) \mathbbm{1}_{\{I({\boldsymbol{\Theta}})=0\}}) \alpha u^{-\alpha-1} \, \mathrm{d} u ,
\end{align*}
where \ $L$ \ denotes the backshift operator \ $L : \mathbb{R}^\mathbb{Z} \to \mathbb{R}^\mathbb{Z}$ \ given by \ $L({\boldsymbol{y}}) = (L({\boldsymbol{y}})_k)_{k\in\mathbb{Z}} := (y_{k-1})_{k\in\mathbb{Z}}$ \ for \ ${\boldsymbol{y}} = (y_k)_{k\in\mathbb{Z}} \in \mathbb{R}^\mathbb{Z}$. \ Using \ $\operatorname{\mathbb{P}}(I({\boldsymbol{\Theta}}) = -K)=1$, \ we obtain
\[
\int_{\mathbb{R}_0^{k+1}} f({\boldsymbol{x}}) \, \nu_{k,\alpha}(\mathrm{d}{\boldsymbol{x}}) = \sum_{\ell\in\mathbb{Z}} \int_0^\infty \operatorname{\mathbb{E}}(f(p_{0,k}(uL^\ell({\boldsymbol{\Theta}}))) \mathbbm{1}_{\{K=0\}}) \alpha u^{-\alpha-1} \, \mathrm{d} u .
\]
For each \ $k \in \mathbb{Z}_+$ \ and \ $u \in \mathbb{R}_+$, \ on the event \ $\{K = 0\}$, \ by \eqref{spectral_tail_process} and \eqref{spectral_tail_process0}, we have
\[
p_{0,k}(uL^\ell({\boldsymbol{\Theta}})) = \begin{cases}
{\boldsymbol{0}} \in \mathbb{R}^{k+1} , & \text{if \ $\ell > k$,} \\
u {\boldsymbol{v}}_\ell^{(k)} , & \text{if \ $\ell \in \{1, \ldots, k\}$,} \\
(1 - m_\xi^\alpha)^{\frac{1}{\alpha}} m_\xi^{-\ell} u {\boldsymbol{v}}_0^{(k)} , & \text{if \ $\ell \leqslant 0$,}
\end{cases}
\]
hence, using \ $\operatorname{\mathbb{P}}(K = 0) = 1 - m_\xi^\alpha$, \ we obtain
\begin{align*}
&\int_{\mathbb{R}_0^{k+1}} f({\boldsymbol{x}}) \, \nu_{k,\alpha}(\mathrm{d}{\boldsymbol{x}}) \\
&= (1 - m_\xi^\alpha) \sum_{\ell\leqslant0} \int_0^\infty f((1 - m_\xi^\alpha)^{\frac{1}{\alpha}} m_\xi^{-\ell} u {\boldsymbol{v}}_0^{(k)}) \alpha u^{-\alpha-1} \, \mathrm{d} u + (1 - m_\xi^\alpha) \sum_{\ell=1}^k \int_0^\infty f(u {\boldsymbol{v}}_\ell^{(k)}) \alpha u^{-\alpha-1} \, \mathrm{d} u \\
&= (1 - m_\xi^\alpha)^2 \sum_{\ell\leqslant0} m_\xi^{-\ell\alpha} \int_0^\infty f(u {\boldsymbol{v}}_0^{(k)}) \alpha u^{-\alpha-1} \, \mathrm{d} u + (1 - m_\xi^\alpha) \sum_{\ell=1}^k \int_0^\infty f(u {\boldsymbol{v}}_\ell^{(k)}) \alpha u^{-\alpha-1} \, \mathrm{d} u \\
&= (1 - m_\xi^\alpha) \sum_{\ell=0}^k \int_0^\infty f(u {\boldsymbol{v}}_\ell^{(k)}) \alpha u^{-\alpha-1} \, \mathrm{d} u .
\end{align*}
The measure \ $\nu_{k,\alpha}$ \ is a L\'evy measure on \ $\mathbb{R}_0^{k+1}$, \ since \eqref{fint} implies
\begin{align*}
&\int_{\mathbb{R}_0^{k+1}} \min\{1, \|{\boldsymbol{x}}\|^2\} \, \nu_{k,\alpha}(\mathrm{d}{\boldsymbol{x}}) = (1-m_\xi^\alpha) \sum_{j=0}^k \int_0^\infty \min\{1, \|u {\boldsymbol{v}}_j^{(k)}\|^2\} \alpha u^{-\alpha-1} \, \mathrm{d} u \\
&= (1-m_\xi^\alpha) \sum_{j=0}^k \|{\boldsymbol{v}}_j^{(k)}\|^\alpha \int_0^\infty \min\{1, w^2\} \alpha w^{-\alpha-1} \, \mathrm{d} w = \frac{2(1-m_\xi^\alpha)}{2-\alpha} \sum_{j=0}^k \|{\boldsymbol{v}}_j^{(k)}\|^\alpha < \infty .
\end{align*}
Consequently, we obtain \eqref{nu_alpha^k_old}, and hence \eqref{vague_fd}, so condition \eqref{(7.5)} is satisfied.

The aim of the following discussion is to check condition \eqref{(7.6)} of Theorem \ref{7.1}, namely
\begin{equation}\label{Res_cond}
\lim_{\varepsilon\downarrow0} \limsup_{N\to\infty} N \operatorname{\mathbb{E}}(a_N^{-2} (X_\ell^{(j)})^2 \mathbbm{1}_{\{X_\ell^{(j)}\leqslant a_N\varepsilon\}}) = \lim_{\varepsilon\downarrow0} \limsup_{N\to\infty} N \operatorname{\mathbb{E}}(a_N^{-2} X_0^2 \mathbbm{1}_{\{X_0\leqslant a_N\varepsilon\}}) = 0
\end{equation}
for each \ $j\in\mathbb{N}$ \ and \ $\ell \in \{0, \ldots, k\}$.
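The scalar integral \ $\int_0^\infty \min\{1, w^2\} \alpha w^{-\alpha-1} \, \mathrm{d} w = \frac{2}{2-\alpha}$ \ used just above splits at \ $w = 1$ \ into \ $\frac{\alpha}{2-\alpha} + 1$; \ a crude midpoint-rule quadrature (an illustration only, truncating the tail at $10^4$) confirms the constant:

```python
alpha = 1.3

def integrand(w):
    # min(1, w^2) * alpha * w^{-alpha-1}: w^{1-alpha} growth near 0, w^{-alpha-1} tail
    return min(1.0, w * w) * alpha * w ** (-alpha - 1.0)

def midpoint(f, a, b, n):
    """Plain midpoint rule on [a, b] with n equal cells."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# split at w = 1 where min(1, w^2) switches branch; truncate the tail at 10^4
total = midpoint(integrand, 0.0, 1.0, 200_000) + midpoint(integrand, 1.0, 1e4, 200_000)
assert abs(total - 2 / (2 - alpha)) < 1e-2
```

Both pieces also have closed forms, \ $\frac{\alpha}{2-\alpha}$ \ and \ $1$, \ so the quadrature only serves as an independent cross-check of the exponent bookkeeping.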
\ By Lemma \ref{truncated_moments} with \ $\beta = 2$, \ we have
\[
\lim_{x\to\infty} \frac{x^2\operatorname{\mathbb{P}}(X_0>x)}{\operatorname{\mathbb{E}}(X_0^2\mathbbm{1}_{\{X_0\leqslant x\}})} = \frac{2-\alpha}{\alpha} ,
\]
hence, for all \ $\varepsilon \in \mathbb{R}_{++}$, \ using again that \ $X_0$ \ is regularly varying with index \ $\alpha$, \ we have
\begin{align*}
N \operatorname{\mathbb{E}}(a_N^{-2} X_0^2 \mathbbm{1}_{\{X_0\leqslant a_N\varepsilon\}}) = \frac{\operatorname{\mathbb{E}}\bigl(X_0^2 \mathbbm{1}_{\{X_0\leqslant a_N\varepsilon\}}\bigr)} {(a_N\varepsilon)^2\operatorname{\mathbb{P}}(X_0>a_N\varepsilon)} \frac{\operatorname{\mathbb{P}}(X_0>a_N\varepsilon)}{\operatorname{\mathbb{P}}(X_0>a_N)} \varepsilon^2 N \operatorname{\mathbb{P}}(X_0>a_N) \to \frac{\alpha}{2-\alpha} \varepsilon^{2-\alpha}
\end{align*}
as \ $N \to \infty$, \ and, as \ $\varepsilon \downarrow 0$, \ we conclude \eqref{Res_cond}. Consequently, we may apply Theorem \ref{7.1}, and we obtain \eqref{help6_simple_aggregation1_stable_fd}, where \ $(\mathcal{X}_t^{(k,\alpha)})_{t\in\mathbb{R}_+}$ \ is an $\alpha$-stable process such that the characteristic function of the distribution \ $\mu_{k,\alpha}$ \ of \ $\mathcal{X}_1^{(k,\alpha)}$ \ has the form given in Theorem \ref{simple_aggregation1_stable_fd}. Indeed, \eqref{fint} is valid for each Borel measurable function \ $f : \mathbb{R}^{k+1}_0 \to \mathbb{C}$ \ as well, for which the real and imaginary parts of the right hand side of \eqref{fint} are well defined.
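For the exact Pareto law \ $\operatorname{\mathbb{P}}(X_0 > x) = x^{-\alpha}$, \ $x \geqslant 1$, \ the truncated second moment is \ $\operatorname{\mathbb{E}}(X_0^2\mathbbm{1}_{\{X_0\leqslant x\}}) = \frac{\alpha}{2-\alpha}(x^{2-\alpha}-1)$, \ so the limit from Lemma \ref{truncated_moments} above can be checked in closed form (an illustrative sanity check, not part of the proof):

```python
alpha = 0.7

def moment_ratio(x):
    """x^2 P(X_0 > x) / E(X_0^2 1{X_0 <= x}) for an exact Pareto(alpha) law."""
    tail = x ** (-alpha)
    trunc_2nd = alpha / (2 - alpha) * (x ** (2 - alpha) - 1.0)
    return x * x * tail / trunc_2nd

target = (2 - alpha) / alpha
for x in (1e4, 1e6, 1e8):
    # ratio = target * x^{2-alpha} / (x^{2-alpha} - 1) -> target as x -> infinity
    assert abs(moment_ratio(x) - target) < 1e-4 * target
```

The relative deviation is \ $1/(x^{2-\alpha}-1)$, \ which makes the speed of convergence explicit in this special case.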
Hence for all \ ${\boldsymbol{\theta}} \in \mathbb{R}^{k+1}$, \ by \eqref{hmu},
\begin{align*}
&\widehat{\mu_{k,\alpha}}({\boldsymbol{\theta}}) = \exp\biggl\{ \int_{\mathbb{R}_0^{k+1}} \biggl(\mathrm{e}^{\mathrm{i} \langle{\boldsymbol{\theta}}, {\boldsymbol{y}}\rangle} - 1 - \mathrm{i} \sum_{\ell=1}^{k+1} \langle{\boldsymbol{e}}_\ell, {\boldsymbol{\theta}}\rangle \langle{\boldsymbol{e}}_\ell, {\boldsymbol{y}}\rangle \mathbbm{1}_{(0,1]}(|\langle{\boldsymbol{e}}_\ell,{\boldsymbol{y}}\rangle|)\biggr) \nu_{k,\alpha}(\mathrm{d}{\boldsymbol{y}}) \biggr\} \\
&= \exp\biggl\{ (1 - m_\xi^\alpha) \sum_{j=0}^k \int_0^\infty \biggl(\mathrm{e}^{\mathrm{i}\langle {\boldsymbol{\theta}}, {\boldsymbol{v}}_j^{(k)}\rangle u} - 1 - \mathrm{i} u \sum_{\ell=j+1}^{k+1} \langle{\boldsymbol{e}}_\ell, {\boldsymbol{\theta}}\rangle \langle{\boldsymbol{e}}_\ell, {\boldsymbol{v}}_j^{(k)}\rangle \mathbbm{1}_{(0,1]}(u\langle{\boldsymbol{e}}_\ell,{\boldsymbol{v}}_j^{(k)}\rangle)\biggr) \alpha u^{-1-\alpha} \, \mathrm{d} u \biggr\} ,
\end{align*}
since it will turn out that the real and imaginary parts of the exponent in the last expression are well defined.
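The case analysis below repeatedly rewrites \ $\Gamma(-\alpha)\mathrm{e}^{\mp\mathrm{i}\pi\alpha/2}$ \ via \ $\Gamma(-\alpha) = \frac{\Gamma(2-\alpha)}{(1-\alpha)(-\alpha)}$ \ and \ $\mathrm{e}^{\mp\mathrm{i}\pi\alpha/2} = \cos(\frac{\pi\alpha}{2})(1 \mp \mathrm{i}\tan(\frac{\pi\alpha}{2}))$. \ Writing \ $C_\alpha = \frac{\Gamma(2-\alpha)}{1-\alpha}\cos(\frac{\pi\alpha}{2})$, \ as it enters the displays below, this purely algebraic step can be confirmed numerically:

```python
import cmath
import math

def lhs(alpha, s):
    # Gamma(-alpha) * e^{-s*i*pi*alpha/2}; Gamma(-alpha) is recovered from
    # the recursion Gamma(-alpha) = Gamma(2 - alpha) / ((1 - alpha) * (-alpha))
    gamma_neg = math.gamma(2.0 - alpha) / ((1.0 - alpha) * (-alpha))
    return gamma_neg * cmath.exp(-s * 1j * math.pi * alpha / 2)

def rhs(alpha, s):
    # -(C_alpha / alpha) * (1 - s*i*tan(pi*alpha/2)) with
    # C_alpha = Gamma(2 - alpha) * cos(pi*alpha/2) / (1 - alpha)
    c_alpha = math.gamma(2.0 - alpha) * math.cos(math.pi * alpha / 2) / (1.0 - alpha)
    return -(c_alpha / alpha) * (1 - s * 1j * math.tan(math.pi * alpha / 2))

for a in (0.3, 0.7, 1.4, 1.9):      # both regimes alpha in (0,1) and (1,2)
    for s in (1, -1):               # the formula and its complex conjugate
        assert abs(lhs(a, s) - rhs(a, s)) < 1e-10
```

The identity holds for both signs, covering the integral formulas for positive and negative arguments used in the two cases.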
If \ $\alpha \in (0, 1)$, \ then
\[
\int_0^\infty (\mathrm{e}^{\pm\mathrm{i} x} - 1) x^{-1-\alpha} \, \mathrm{d} x = \Gamma(-\alpha) \mathrm{e}^{\mp\mathrm{i}\pi\alpha/2} ,
\]
see, e.g., (14.18) in Sato \cite{Sato} and its complex conjugate, thus for each \ $\vartheta \in \mathbb{R}_{++}$,
\begin{align*}
&\int_0^\infty (\mathrm{e}^{\mathrm{i}\vartheta u} - 1) u^{-1-\alpha} \, \mathrm{d} u = \vartheta^\alpha \int_0^\infty (\mathrm{e}^{\mathrm{i} v} - 1) v^{-1-\alpha} \, \mathrm{d} v = \vartheta^\alpha \Gamma(-\alpha) \mathrm{e}^{-\mathrm{i}\pi\alpha/2} \\
&= \vartheta^\alpha \frac{\Gamma(2-\alpha)}{(1-\alpha)(-\alpha)} \cos\left(\frac{\pi\alpha}{2}\right) \left(1 - \mathrm{i} \tan\left(\frac{\pi\alpha}{2}\right)\right) = - \frac{C_\alpha}{\alpha} \vartheta^\alpha \left(1 - \mathrm{i} \tan\left(\frac{\pi\alpha}{2}\right)\right) .
\end{align*}
In a similar way, for each \ $\vartheta \in \mathbb{R}_{--}$,
\begin{align*}
&\int_0^\infty (\mathrm{e}^{\mathrm{i}\vartheta u} - 1) u^{-1-\alpha} \, \mathrm{d} u = (-\vartheta)^\alpha \int_0^\infty (\mathrm{e}^{-\mathrm{i} v} - 1) v^{-1-\alpha} \, \mathrm{d} v = (-\vartheta)^\alpha \Gamma(-\alpha) \mathrm{e}^{\mathrm{i}\pi\alpha/2} \\
&= (-\vartheta)^\alpha \frac{\Gamma(2-\alpha)}{(1-\alpha)(-\alpha)} \cos\left(\frac{\pi\alpha}{2}\right) \left(1 + \mathrm{i} \tan\left(\frac{\pi\alpha}{2}\right)\right) = - \frac{C_\alpha}{\alpha} (-\vartheta)^\alpha \left(1 + \mathrm{i} \tan\left(\frac{\pi\alpha}{2}\right)\right) .
\end{align*}
Thus, for each \ $\vartheta \in \mathbb{R}$,
\begin{align}\label{help_Z1}
\int_0^\infty (\mathrm{e}^{\mathrm{i}\vartheta u} - 1) u^{-1-\alpha} \, \mathrm{d} u = - \frac{C_\alpha}{\alpha} |\vartheta|^\alpha \left(1 - \mathrm{i} \tan\left(\frac{\pi\alpha}{2}\right) \operatorname{sign}(\vartheta)\right) ,
\end{align}
and hence, for each \ ${\boldsymbol{\theta}} \in \mathbb{R}^{k+1}$ \ and \ $j \in \{0, \ldots, k\}$,
\begin{align*}
&\int_0^\infty \biggl(\mathrm{e}^{\mathrm{i}\langle {\boldsymbol{\theta}}, {\boldsymbol{v}}_j^{(k)}\rangle u} - 1 - \mathrm{i} u \sum_{\ell=j+1}^{k+1} \langle{\boldsymbol{e}}_\ell, {\boldsymbol{\theta}}\rangle \langle{\boldsymbol{e}}_\ell, {\boldsymbol{v}}_j^{(k)}\rangle \mathbbm{1}_{(0,1]}(u\langle{\boldsymbol{e}}_\ell,{\boldsymbol{v}}_j^{(k)}\rangle)\biggr) \alpha u^{-1-\alpha} \, \mathrm{d} u \\
&= - C_\alpha |\langle {\boldsymbol{\theta}}, {\boldsymbol{v}}_j^{(k)}\rangle|^\alpha \left(1 - \mathrm{i} \tan\left(\frac{\pi\alpha}{2}\right) \operatorname{sign}(\langle {\boldsymbol{\theta}}, {\boldsymbol{v}}_j^{(k)}\rangle)\right) - \mathrm{i} \alpha \sum_{\ell=j+1}^{k+1} \langle{\boldsymbol{e}}_\ell, {\boldsymbol{\theta}}\rangle \langle{\boldsymbol{e}}_\ell, {\boldsymbol{v}}_j^{(k)}\rangle \int_0^{1/\langle{\boldsymbol{e}}_\ell,{\boldsymbol{v}}_j^{(k)}\rangle} u^{-\alpha} \, \mathrm{d} u \\
&= - C_\alpha |\langle {\boldsymbol{\theta}}, {\boldsymbol{v}}_j^{(k)}\rangle|^\alpha \left(1 - \mathrm{i} \tan\left(\frac{\pi\alpha}{2}\right) \operatorname{sign}(\langle {\boldsymbol{\theta}}, {\boldsymbol{v}}_j^{(k)}\rangle)\right) - \mathrm{i} \frac{\alpha}{1-\alpha} \sum_{\ell=j+1}^{k+1} \langle{\boldsymbol{e}}_\ell, {\boldsymbol{\theta}}\rangle \langle{\boldsymbol{e}}_\ell, {\boldsymbol{v}}_j^{(k)}\rangle \biggl(\frac{1}{\langle{\boldsymbol{e}}_\ell,{\boldsymbol{v}}_j^{(k)}\rangle}\biggr)^{1-\alpha} \\
&= - C_\alpha
|\langle {\boldsymbol{\theta}}, {\boldsymbol{v}}_j^{(k)}\rangle|^\alpha \left(1 - \mathrm{i} \tan\left(\frac{\pi\alpha}{2}\right) \operatorname{sign}(\langle {\boldsymbol{\theta}}, {\boldsymbol{v}}_j^{(k)}\rangle)\right) - \mathrm{i} \frac{\alpha}{1-\alpha} \sum_{\ell=j+1}^{k+1} \langle{\boldsymbol{e}}_\ell, {\boldsymbol{\theta}}\rangle \langle{\boldsymbol{e}}_\ell, {\boldsymbol{v}}_j^{(k)}\rangle^\alpha .
\end{align*}
Consequently,
\begin{equation}\label{mu_k}
\begin{aligned}
\widehat{\mu_{k,\alpha}}({\boldsymbol{\theta}}) &= \exp\Bigl\{- C_\alpha (1 - m_\xi^\alpha) \sum_{j=0}^k |\langle{\boldsymbol{\theta}},{\boldsymbol{v}}_j^{(k)}\rangle|^\alpha \left(1 - \mathrm{i} \tan\left(\frac{\pi\alpha}{2}\right) \operatorname{sign}(\langle{\boldsymbol{\theta}},{\boldsymbol{v}}_j^{(k)}\rangle)\right) \\
&\phantom{= \exp\Bigl\{} - \mathrm{i} \frac{\alpha}{1-\alpha} (1 - m_\xi^\alpha) \sum_{j=0}^k \sum_{\ell=j+1}^{k+1} \langle{\boldsymbol{e}}_\ell, {\boldsymbol{\theta}}\rangle \langle{\boldsymbol{e}}_\ell, {\boldsymbol{v}}_j^{(k)}\rangle^\alpha\Bigr\}
\end{aligned}
\end{equation}
for all \ ${\boldsymbol{\theta}} \in \mathbb{R}^{k+1}$, \ where
\begin{align*}
\sum_{j=0}^k \sum_{\ell=j+1}^{k+1} \langle{\boldsymbol{e}}_\ell, {\boldsymbol{\theta}}\rangle \langle{\boldsymbol{e}}_\ell, {\boldsymbol{v}}_j^{(k)}\rangle^\alpha = \sum_{\ell=1}^{k+1} \langle{\boldsymbol{e}}_\ell, {\boldsymbol{\theta}}\rangle \sum_{j=0}^{\ell-1} \langle{\boldsymbol{e}}_\ell, {\boldsymbol{v}}_j^{(k)}\rangle^\alpha = \frac{1}{1-m_\xi^\alpha} \sum_{\ell=1}^{k+1} \langle{\boldsymbol{e}}_\ell, {\boldsymbol{\theta}}\rangle ,
\end{align*}
since \ $\langle{\boldsymbol{e}}_1, {\boldsymbol{v}}_0^{(k)}\rangle^\alpha = (1-m_\xi^\alpha)^{-1}$, \ and, for each \ $\ell \in \{2, \ldots, k + 1\}$, \ we have
\[
\sum_{j=0}^{\ell-1} \langle{\boldsymbol{e}}_\ell, {\boldsymbol{v}}_j^{(k)}\rangle^\alpha = \frac{m_\xi^{(\ell-1)\alpha}}{1-m_\xi^\alpha} +
\sum_{j=1}^{\ell-1} m_\xi^{(\ell-j-1)\alpha} = \frac{m_\xi^{(\ell-1)\alpha}}{1-m_\xi^\alpha} + \frac{1-m_\xi^{(\ell-1)\alpha}}{1-m_\xi^\alpha} = \frac{1}{1-m_\xi^\alpha} .
\]
Hence we obtain
\begin{equation}\label{drift_center}
(1 - m_\xi^\alpha) \sum_{j=0}^k \sum_{\ell=j+1}^{k+1} \langle{\boldsymbol{e}}_\ell, {\boldsymbol{\theta}}\rangle \langle{\boldsymbol{e}}_\ell, {\boldsymbol{v}}_j^{(k)}\rangle^\alpha = \langle{\boldsymbol{\theta}}, {\boldsymbol{1}}_{k+1}\rangle ,
\end{equation}
yielding the statement in case of \ $\alpha \in (0, 1)$. \ Note that the above calculation shows that \eqref{drift_center} is valid for each \ $\alpha\in(0,2)$.

If \ $\alpha \in (1, 2)$, \ then
\[
\int_0^\infty (\mathrm{e}^{\pm\mathrm{i} x} - 1 \mp \mathrm{i} x) x^{-1-\alpha} \, \mathrm{d} x = \Gamma(-\alpha) \, \mathrm{e}^{\mp\mathrm{i}\pi\alpha/2} ,
\]
see, e.g., (14.19) in Sato \cite{Sato} and its complex conjugate, thus for each \ $\vartheta \in \mathbb{R}_{++}$,
\begin{align*}
&\int_0^\infty (\mathrm{e}^{\mathrm{i}\vartheta u} - 1 - \mathrm{i} \vartheta u) u^{-1-\alpha} \, \mathrm{d} u = \vartheta^\alpha \int_0^\infty (\mathrm{e}^{\mathrm{i} x} - 1 - \mathrm{i} x) x^{-1-\alpha} \, \mathrm{d} x = \vartheta^\alpha \Gamma(-\alpha) \mathrm{e}^{-\mathrm{i}\pi\alpha/2} \\
&= \vartheta^\alpha \frac{\Gamma(2-\alpha)}{(1-\alpha)(-\alpha)} \cos\left(\frac{\pi\alpha}{2}\right) \left(1 - \mathrm{i} \tan\left(\frac{\pi\alpha}{2}\right)\right) = - \frac{C_\alpha}{\alpha} \vartheta^\alpha \left(1 - \mathrm{i} \tan\left(\frac{\pi\alpha}{2}\right)\right) .
\end{align*}
In a similar way, for each \ $\vartheta \in \mathbb{R}_{--}$,
\begin{align*}
&\int_0^\infty (\mathrm{e}^{\mathrm{i}\vartheta u} - 1 - \mathrm{i} \vartheta u) u^{-1-\alpha} \, \mathrm{d} u = (-\vartheta)^\alpha \int_0^\infty (\mathrm{e}^{-\mathrm{i} x} - 1 + \mathrm{i} x) x^{-1-\alpha} \, \mathrm{d} x = (-\vartheta)^\alpha \Gamma(-\alpha) \mathrm{e}^{\mathrm{i}\pi\alpha/2} \\
&= (-\vartheta)^\alpha \frac{\Gamma(2-\alpha)}{(1-\alpha)(-\alpha)} \cos\left(\frac{\pi\alpha}{2}\right) \left(1 + \mathrm{i} \tan\left(\frac{\pi\alpha}{2}\right)\right) = - \frac{C_\alpha}{\alpha} (-\vartheta)^\alpha \left(1 + \mathrm{i} \tan\left(\frac{\pi\alpha}{2}\right)\right) .
\end{align*}
Thus, for each \ $\vartheta \in \mathbb{R}$,
\begin{align}\label{help_Z2}
\int_0^\infty (\mathrm{e}^{\mathrm{i}\vartheta u} - 1 - \mathrm{i} \vartheta u) u^{-1-\alpha} \, \mathrm{d} u = - \frac{C_\alpha}{\alpha} |\vartheta|^\alpha \left(1 - \mathrm{i} \tan\left(\frac{\pi\alpha}{2}\right) \operatorname{sign}(\vartheta)\right) ,
\end{align}
and hence, for each \ ${\boldsymbol{\theta}} \in \mathbb{R}^{k+1}$ \ and \ $j \in \{0, \ldots, k\}$,
\begin{align*}
&\int_0^\infty \biggl(\mathrm{e}^{\mathrm{i}\langle {\boldsymbol{\theta}}, {\boldsymbol{v}}_j^{(k)}\rangle u} - 1 - \mathrm{i} u \sum_{\ell=j+1}^{k+1} \langle{\boldsymbol{e}}_\ell, {\boldsymbol{\theta}}\rangle \langle{\boldsymbol{e}}_\ell, {\boldsymbol{v}}_j^{(k)}\rangle \mathbbm{1}_{(0,1]}(u\langle{\boldsymbol{e}}_\ell,{\boldsymbol{v}}_j^{(k)}\rangle)\biggr) \alpha u^{-1-\alpha} \, \mathrm{d} u \\
&= - C_\alpha |\langle {\boldsymbol{\theta}}, {\boldsymbol{v}}_j^{(k)}\rangle|^\alpha \left(1 - \mathrm{i} \tan\left(\frac{\pi\alpha}{2}\right) \operatorname{sign}(\langle {\boldsymbol{\theta}},
{\boldsymbol{v}}_j^{(k)}\rangle)\right) + \mathrm{i} \alpha \sum_{\ell=j+1}^{k+1} \langle{\boldsymbol{e}}_\ell, {\boldsymbol{\theta}}\rangle \langle{\boldsymbol{e}}_\ell, {\boldsymbol{v}}_j^{(k)}\rangle \int_{1/\langle{\boldsymbol{e}}_\ell,{\boldsymbol{v}}_j^{(k)}\rangle}^\infty u^{-\alpha} \, \mathrm{d} u \\
&= - C_\alpha |\langle {\boldsymbol{\theta}}, {\boldsymbol{v}}_j^{(k)}\rangle|^\alpha \left(1 - \mathrm{i} \tan\left(\frac{\pi\alpha}{2}\right) \operatorname{sign}(\langle {\boldsymbol{\theta}}, {\boldsymbol{v}}_j^{(k)}\rangle)\right) - \mathrm{i} \frac{\alpha}{1-\alpha} \sum_{\ell=j+1}^{k+1} \langle{\boldsymbol{e}}_\ell, {\boldsymbol{\theta}}\rangle \langle{\boldsymbol{e}}_\ell, {\boldsymbol{v}}_j^{(k)}\rangle \biggl(\frac{1}{\langle{\boldsymbol{e}}_\ell,{\boldsymbol{v}}_j^{(k)}\rangle}\biggr)^{1-\alpha} \\
&= - C_\alpha |\langle {\boldsymbol{\theta}}, {\boldsymbol{v}}_j^{(k)}\rangle|^\alpha \left(1 - \mathrm{i} \tan\left(\frac{\pi\alpha}{2}\right) \operatorname{sign}(\langle {\boldsymbol{\theta}}, {\boldsymbol{v}}_j^{(k)}\rangle)\right) + \mathrm{i} \frac{\alpha}{\alpha-1} \sum_{\ell=j+1}^{k+1} \langle{\boldsymbol{e}}_\ell, {\boldsymbol{\theta}}\rangle \langle{\boldsymbol{e}}_\ell, {\boldsymbol{v}}_j^{(k)}\rangle^\alpha .
\end{align*}
Consequently, we obtain \eqref{mu_k} for all \ ${\boldsymbol{\theta}} \in \mathbb{R}^{k+1}$, \ and, applying again \eqref{drift_center}, we conclude the statement in case of \ $\alpha \in (1, 2)$.

Finally, we consider the case \ $\alpha = 1$. \ For each \ $\vartheta \in \mathbb{R}_{++}$,
\[
\int_0^\infty (\mathrm{e}^{\mathrm{i}\vartheta u} - 1 - \mathrm{i} \vartheta u \mathbbm{1}_{(0,1]}(u)) u^{-2} \, \mathrm{d} u = - \frac{\pi\vartheta}{2} - \mathrm{i} \vartheta \log(\vartheta) + \mathrm{i} C \vartheta ,
\]
where \ $C$ \ is given in \eqref{c}, see, e.g., (14.20) in Sato \cite{Sato}.
Its complex conjugate has the form
\[
\int_0^\infty (\mathrm{e}^{-\mathrm{i}\vartheta u} - 1 + \mathrm{i} \vartheta u \mathbbm{1}_{(0,1]}(u)) u^{-2} \, \mathrm{d} u = - \frac{\pi\vartheta}{2} + \mathrm{i} \vartheta \log(\vartheta) - \mathrm{i} C \vartheta , \qquad \vartheta \in \mathbb{R}_{++} ,
\]
thus
\[
\int_0^\infty (\mathrm{e}^{\mathrm{i}(-\vartheta)u} - 1 - \mathrm{i} (-\vartheta) u \mathbbm{1}_{(0,1]}(u)) u^{-2} \, \mathrm{d} u = - \frac{\pi(-(-\vartheta))}{2} - \mathrm{i} (-\vartheta) \log(-(-\vartheta)) + \mathrm{i} C (-\vartheta)
\]
for \ $\vartheta \in \mathbb{R}_{++}$, \ and hence
\begin{align*}
&\int_0^\infty (\mathrm{e}^{\mathrm{i}\vartheta u} - 1 - \mathrm{i} \vartheta u \mathbbm{1}_{(0,1]}(u)) u^{-2} \, \mathrm{d} u = - \frac{\pi|\vartheta|}{2} - \mathrm{i} \vartheta \log(|\vartheta|) + \mathrm{i} C \vartheta \\
&= - C_1 |\vartheta| \left(1 + \mathrm{i} \frac{2}{\pi} \operatorname{sign}(\vartheta) \log(|\vartheta|)\right) + \mathrm{i} C \vartheta , \qquad \vartheta \in \mathbb{R} .
\end{align*}
Consequently, for each \ ${\boldsymbol{\theta}} \in \mathbb{R}^{k+1}$ \ and \ $j \in \{0, \ldots, k\}$,
\begin{align*}
&\int_0^\infty \biggl(\mathrm{e}^{\mathrm{i}\langle {\boldsymbol{\theta}}, {\boldsymbol{v}}_j^{(k)}\rangle u} - 1 - \mathrm{i} u \sum_{\ell=j+1}^{k+1} \langle{\boldsymbol{e}}_\ell, {\boldsymbol{\theta}}\rangle \langle{\boldsymbol{e}}_\ell, {\boldsymbol{v}}_j^{(k)}\rangle \mathbbm{1}_{(0,1]}(u\langle{\boldsymbol{e}}_\ell,{\boldsymbol{v}}_j^{(k)}\rangle)\biggr) u^{-2} \, \mathrm{d} u \\
&= - C_1 |\langle {\boldsymbol{\theta}}, {\boldsymbol{v}}_j^{(k)}\rangle| \left(1 + \mathrm{i} \frac{2}{\pi} \operatorname{sign}(\langle {\boldsymbol{\theta}}, {\boldsymbol{v}}_j^{(k)}\rangle) \log(|\langle {\boldsymbol{\theta}}, {\boldsymbol{v}}_j^{(k)}\rangle|)\right) + \mathrm{i} C \langle {\boldsymbol{\theta}}, {\boldsymbol{v}}_j^{(k)}\rangle \\
&\quad + \mathrm{i} \sum_{\ell=j+1}^{k+1} \langle{\boldsymbol{e}}_\ell, {\boldsymbol{\theta}}\rangle \langle{\boldsymbol{e}}_\ell, {\boldsymbol{v}}_j^{(k)}\rangle \int_{1/\langle{\boldsymbol{e}}_\ell,{\boldsymbol{v}}_j^{(k)}\rangle}^1 u^{-1} \, \mathrm{d} u \\
&= - C_1 |\langle {\boldsymbol{\theta}}, {\boldsymbol{v}}_j^{(k)}\rangle| \left(1 + \mathrm{i} \frac{2}{\pi} \operatorname{sign}(\langle {\boldsymbol{\theta}}, {\boldsymbol{v}}_j^{(k)}\rangle) \log(|\langle {\boldsymbol{\theta}}, {\boldsymbol{v}}_j^{(k)}\rangle|)\right) + \mathrm{i} C \langle {\boldsymbol{\theta}}, {\boldsymbol{v}}_j^{(k)}\rangle \\
&\quad - \mathrm{i} \sum_{\ell=j+1}^{k+1} \langle{\boldsymbol{e}}_\ell, {\boldsymbol{\theta}}\rangle \langle{\boldsymbol{e}}_\ell, {\boldsymbol{v}}_j^{(k)}\rangle \log\biggl(\frac{1}{\langle{\boldsymbol{e}}_\ell,{\boldsymbol{v}}_j^{(k)}\rangle}\biggr) \\
&= - C_1 |\langle {\boldsymbol{\theta}}, {\boldsymbol{v}}_j^{(k)}\rangle| \left(1 + \mathrm{i} \frac{2}{\pi} \operatorname{sign}(\langle {\boldsymbol{\theta}}, {\boldsymbol{v}}_j^{(k)}\rangle) \log(|\langle {\boldsymbol{\theta}},
{\boldsymbol{v}}_j^{(k)}\rangle|)\right) + \mathrm{i} C \langle {\boldsymbol{\theta}}, {\boldsymbol{v}}_j^{(k)}\rangle \\ &\quad + \mathrm{i} \sum_{\ell=j+1}^{k+1} \langle{\boldsymbol{e}}_\ell, {\boldsymbol{\theta}}\rangle \langle{\boldsymbol{e}}_\ell, {\boldsymbol{v}}_j^{(k)}\rangle \log(\langle{\boldsymbol{e}}_\ell,{\boldsymbol{v}}_j^{(k)}\rangle) . \end{align*} Applying again \eqref{drift_center}, we have the statement in case of \ $\alpha = 1$. \mbox{$\Box$} \noindent{\bf Proof of Corollary \ref{simple_aggregation1_stable_centering_fd}.} In case of \ $\alpha \in (0, 1)$, \ by \eqref{centering01T} with \ $t = 1$, \ we have {\boldsymbol{e}}gin{equation}\label{centering01} \lim_{N\to\infty} \frac{N}{a_N} \operatorname{\mathbb{E}}\bigl(X_0 {\boldsymbol{b}}one_{\{X_0\leqslant a_N\}}\bigr) = \frac{\alpha}{1-\alpha} . \end{equation} Next, we may apply Lemma \ref{Conv2Funct} with {\boldsymbol{e}}gin{gather*} {\boldsymbol{c}}U_t^{(N)} := \frac{1}{a_N} \sum_{j=1}^{\lfloor Nt\rfloor} \bigl(X^{(j)}_0, \ldots, X^{(j)}_k\bigr)^\top - \frac{\lfloor Nt\rfloor}{a_N} \operatorname{\mathbb{E}}\bigl(X_0 {\boldsymbol{b}}one_{\{X_0\leqslant a_N\}}\bigr) {\boldsymbol{1}}_{k+1} , \qquad N \in \mathbb{N} , \\ \Phi_N(f)(t) := f(t) + \frac{\lfloor Nt\rfloor}{a_N} \operatorname{\mathbb{E}}(X_0 {\boldsymbol{b}}one_{\{X_0\leqslant a_N\}} ) {\boldsymbol{1}}_{k+1}, \qquad N \in \mathbb{N} , \\ {\boldsymbol{c}}U_t := {\boldsymbol{c}}X_t^{(k,\alpha)} , \qquad \Phi(f)(t) := f(t) + \frac{\alpha}{1-\alpha} t {\boldsymbol{1}}_{k+1} \end{gather*} for \ $t \in \mathbb{R}_+$ \ and \ $f \in \mathbb{D}(\mathbb{R}_+,\mathbb{R}^{k+1})$. 
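As an aside to the \ $\alpha = 1$ \ computation above: the scalar integral identity can be verified numerically. The sketch below (Python, assuming \texttt{scipy} is available; the helper name \texttt{levy\_integral} is ours) checks that the real part equals \ $-\pi|\vartheta|/2$ \ and that \ $\operatorname{Im}/\vartheta + \log\vartheta$ \ is the same constant $C$ for every \ $\vartheta > 0$; the value of $C$ is recovered empirically rather than assumed.

```python
import numpy as np
from scipy.integrate import quad

def levy_integral(theta):
    """Numerically evaluate int_0^infty (e^{i*theta*u} - 1 - i*theta*u*1_{(0,1]}(u)) u^{-2} du, theta > 0."""
    # real part: (cos(theta*u) - 1)/u^2, integrable at 0 (~ -theta^2/2) and at infinity
    re1, _ = quad(lambda u: (np.cos(theta * u) - 1.0) / u**2, 0.0, 1.0)
    re2, _ = quad(lambda u: (np.cos(theta * u) - 1.0) / u**2, 1.0, np.inf, limit=500)
    # imaginary part: compensated on (0,1], uncompensated on (1, infinity)
    im1, _ = quad(lambda u: (np.sin(theta * u) - theta * u) / u**2, 0.0, 1.0)
    im2, _ = quad(lambda u: np.sin(theta * u) / u**2, 1.0, np.inf, limit=500)
    return (re1 + re2) + 1j * (im1 + im2)

vals = {t: levy_integral(t) for t in (0.5, 1.0, 2.0)}
# real part should be -pi*theta/2; Im/theta + log(theta) should not depend on theta (it equals C)
C_emp = {t: v.imag / t + np.log(t) for t, v in vals.items()}
```

The empirical constancy of \texttt{C\_emp} across the tested values of $\vartheta$ reflects the $\mathrm{i}C\vartheta$ drift term in the identity.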
\ Indeed, in order to show \ $\Phi_N(f_N) \to \Phi(f)$ \ in \ $\mathbb{D}(\mathbb{R}_+, \mathbb{R}^{k+1})$ \ as \ $N \to \infty$ \ whenever \ $f_N \to f$ \ in \ $\mathbb{D}(\mathbb{R}_+, \mathbb{R}^{k+1})$ \ as \ $N \to \infty$ \ with \ $f, f_N \in \mathbb{D}(\mathbb{R}_+, \mathbb{R}^{k+1})$, \ $N \in \mathbb{N}$, \ by Propositions VI.1.17 and VI.1.23 in Jacod and Shiryaev \cite{JacShi}, it is enough to check that for each \ $T \in \mathbb{R}_{++}$, \ we have
\begin{align*}
\sup_{t\in[0,T]} \biggl\|\frac{\lfloor Nt\rfloor}{a_N} \operatorname{\mathbb{E}}\bigl(X_0 \mathbf{1}_{\{X_0\leqslant a_N\}}\bigr) {\boldsymbol{1}}_{k+1} - \frac{\alpha}{1-\alpha} t {\boldsymbol{1}}_{k+1}\biggr\| \to 0 \qquad \text{as \ $N\to\infty$.}
\end{align*}
This follows since, by \eqref{centering01}, we obtain
\begin{align*}
&\sup_{t\in[0,T]} \biggl\|\frac{\lfloor Nt\rfloor}{a_N} \operatorname{\mathbb{E}}\bigl(X_0 \mathbf{1}_{\{X_0\leqslant a_N\}}\bigr) {\boldsymbol{1}}_{k+1} - \frac{\alpha}{1-\alpha} t {\boldsymbol{1}}_{k+1}\biggr\| \\
&\leqslant \sup_{t\in[0,T]} \biggl\|\frac{\lfloor Nt\rfloor}{N} \biggl(\frac{N}{a_N} \operatorname{\mathbb{E}}\bigl(X_0 \mathbf{1}_{\{X_0\leqslant a_N\}}\bigr) - \frac{\alpha}{1-\alpha}\biggr) {\boldsymbol{1}}_{k+1}\biggr\| + \sup_{t\in[0,T]} \Biggl\|\frac{\alpha}{1-\alpha} \biggl(\frac{\lfloor Nt\rfloor}{N} - t\biggr) {\boldsymbol{1}}_{k+1}\Biggr\| \\
&\leqslant T \sqrt{k + 1} \biggl|\frac{N}{a_N} \operatorname{\mathbb{E}}\bigl(X_0 \mathbf{1}_{\{X_0\leqslant a_N\}}\bigr) - \frac{\alpha}{1-\alpha}\biggr| + \frac{\sqrt{k+1}}{N} \frac{\alpha}{1-\alpha} \to 0 \qquad \text{as \ $N \to \infty$.}
\end{align*}
Applying Lemma \ref{Conv2Funct}, we obtain
\[
\biggl(\frac{1}{a_N} \sum_{j=1}^{\lfloor Nt\rfloor} \bigl(X_0^{(j)}, \ldots, X_k^{(j)}\bigr)^\top\biggr)_{t\in\mathbb{R}_+} = \Phi_N(\mathcal{U}^{(N)}) \distr \Phi(\mathcal{U}) \qquad \text{as \ $N \to \infty$,}
\]
where \ $\Phi(\mathcal{U})_t = \mathcal{X}_t^{(k,\alpha)} + \frac{\alpha}{1-\alpha} t {\boldsymbol{1}}_{k+1}$, \ $t \in \mathbb{R}_+$, \ is a $(k + 1)$-dimensional \ $\alpha$-stable process.
By Theorem \ref{simple_aggregation1_stable_fd} and Remark \ref{mu_properties2}, the characteristic function of \ $\mathcal{X}_1^{(k,\alpha)} + \frac{\alpha}{1-\alpha} {\boldsymbol{1}}_{k+1}$ \ has the form given in the corollary, and hence we conclude the statement in case of \ $\alpha \in (0, 1)$.

In case of \ $\alpha \in (1, 2)$, \ by \eqref{centering12T} with \ $t = 1$, \ we have
\begin{equation}\label{centering12}
\lim_{N\to\infty} \frac{N}{a_N} \operatorname{\mathbb{E}}\bigl(X_0 \mathbf{1}_{\{X_0>a_N\}}\bigr) = \frac{\alpha}{\alpha-1} .
\end{equation}
Next, we may apply Lemma \ref{Conv2Funct} with \ $\mathcal{U}$, \ $\Phi$ \ and \ $\mathcal{U}^{(N)}$, \ $N \in \mathbb{N}$, \ as defined above, and with
\[
\Phi_N(f)(t) := f(t) - \frac{\lfloor Nt\rfloor}{a_N} \operatorname{\mathbb{E}}\bigl(X_0 \mathbf{1}_{\{X_0>a_N\}}\bigr) {\boldsymbol{1}}_{k+1}, \qquad N \in \mathbb{N} , \quad f \in \mathbb{D}(\mathbb{R}_+,\mathbb{R}^{k+1}) , \quad t \in \mathbb{R}_+ .
\]
Indeed, in order to show \ $\Phi_N(f_N) \to \Phi(f)$ \ in \ $\mathbb{D}(\mathbb{R}_+, \mathbb{R}^{k+1})$ \ as \ $N \to \infty$ \ whenever \ $f_N \to f$ \ in \ $\mathbb{D}(\mathbb{R}_+, \mathbb{R}^{k+1})$ \ as \ $N \to \infty$ \ with \ $f, f_N \in \mathbb{D}(\mathbb{R}_+, \mathbb{R}^{k+1})$, \ $N \in \mathbb{N}$, \ by Propositions VI.1.17 and VI.1.23 in Jacod and Shiryaev \cite{JacShi}, it is enough to check that for each \ $T \in \mathbb{R}_{++}$, \ we have
\begin{align*}
\sup_{t\in[0,T]} \biggl\|\frac{\lfloor Nt\rfloor}{a_N} \operatorname{\mathbb{E}}\bigl(X_0 \mathbf{1}_{\{X_0 > a_N\}}\bigr) {\boldsymbol{1}}_{k+1} - \frac{\alpha}{\alpha-1} t {\boldsymbol{1}}_{k+1}\biggr\| \to 0 \qquad \text{as \ $N\to\infty$.}
\end{align*}
This follows since, by \eqref{centering12}, we obtain
\begin{align*}
&\sup_{t\in[0,T]} \biggl\|\frac{\lfloor Nt\rfloor}{a_N} \operatorname{\mathbb{E}}\bigl(X_0 \mathbf{1}_{\{X_0>a_N\}}\bigr) {\boldsymbol{1}}_{k+1} - \frac{\alpha}{\alpha-1} t {\boldsymbol{1}}_{k+1}\biggr\| \\
&\leqslant \sup_{t\in[0,T]} \biggl\|\frac{\lfloor Nt\rfloor}{N} \biggl(\frac{N}{a_N} \operatorname{\mathbb{E}}\bigl(X_0 \mathbf{1}_{\{X_0>a_N\}}\bigr) - \frac{\alpha}{\alpha-1}\biggr) {\boldsymbol{1}}_{k+1}\biggr\| + \sup_{t\in[0,T]} \Biggl\|\frac{\alpha}{\alpha-1} \biggl(\frac{\lfloor Nt\rfloor}{N} - t\biggr) {\boldsymbol{1}}_{k+1}\Biggr\| \\
&\leqslant T \sqrt{k + 1} \biggl|\frac{N}{a_N} \operatorname{\mathbb{E}}\bigl(X_0 \mathbf{1}_{\{X_0>a_N\}}\bigr) - \frac{\alpha}{\alpha-1}\biggr| + \frac{\sqrt{k+1}}{N} \frac{\alpha}{\alpha-1} \to 0 \qquad \text{as \ $N \to \infty$.}
\end{align*}
Applying Lemma \ref{Conv2Funct}, we obtain
\[
\biggl(\frac{1}{a_N} \sum_{j=1}^{\lfloor Nt\rfloor} \bigl(X_0^{(j)}, \ldots, X_k^{(j)}\bigr)^\top - \frac{\lfloor Nt\rfloor}{a_N} \operatorname{\mathbb{E}}(X_0) {\boldsymbol{1}}_{k+1}\biggr)_{t\in\mathbb{R}_+} = \Phi_N(\mathcal{U}^{(N)}) \distr \Phi(\mathcal{U}) \qquad \text{as \ $N \to \infty$,}
\]
where \ $\Phi(\mathcal{U})_t = \mathcal{X}_t^{(k,\alpha)} - t \frac{\alpha}{\alpha-1} {\boldsymbol{1}}_{k+1}$, \ $t \in \mathbb{R}_+$, \ is a $(k + 1)$-dimensional \ $\alpha$-stable process.
By Theorem \ref{simple_aggregation1_stable_fd} and Remark \ref{mu_properties2}, the characteristic function of \ $\mathcal{X}_1^{(k,\alpha)} - \frac{\alpha}{\alpha-1} {\boldsymbol{1}}_{k+1}$ \ has the form given in the corollary, and hence we conclude the statement in case of \ $\alpha \in (1, 2)$ \ as well.
\mbox{$\Box$}

\noindent{\bf Proof of Proposition \ref{Pro_AR1}.}
The sequence \ $(\widetilde{\varepsilon}^{(\alpha)}_k)_{k\in\mathbb{N}}$ \ consists of identically distributed random variables, since the strong stationarity of \ $\bigl({\mathcal Y}^{(\alpha)}_k\bigr)_{k\in\mathbb{Z}_+}$ \ yields that \ $\bigl({\mathcal Y}^{(\alpha)}_k,{\mathcal Y}^{(\alpha)}_{k-1}\bigr)_{k\in\mathbb{N}}$ \ consists of identically distributed random variables.
In what follows, let \ $k\in\mathbb{N}$ \ be fixed.
By \eqref{calY_def}, the characteristic function of \ $({\mathcal Y}^{(\alpha)}_0, \ldots, {\mathcal Y}^{(\alpha)}_{k-1}, \widetilde{\varepsilon}^{(\alpha)}_k)$ \ has the form
\[
\operatorname{\mathbb{E}}\bigl(\mathrm{e}^{\mathrm{i}(\vartheta_0{\mathcal Y}^{(\alpha)}_0+\cdots+\vartheta_{k-1}{\mathcal Y}^{(\alpha)}_{k-1}+\vartheta_k\widetilde{\varepsilon}^{(\alpha)}_k)}\bigr)
= \operatorname{\mathbb{E}}\bigl(\mathrm{e}^{\mathrm{i}(\vartheta_0{\mathcal Y}^{(\alpha)}_0+\cdots+(\vartheta_{k-1}-m_\xi\vartheta_k){\mathcal Y}^{(\alpha)}_{k-1}+\vartheta_k{\mathcal Y}^{(\alpha)}_k)}\bigr)
= \operatorname{\mathbb{E}}\bigl(\mathrm{e}^{\mathrm{i}\langle{\boldsymbol{\theta}}_k,\mathcal{X}^{(k,\alpha)}_1\rangle}\bigr)
\]
for \ $(\vartheta_0, \ldots, \vartheta_k)^\top \in \mathbb{R}^{k+1}$, \ where
\[
{\boldsymbol{\theta}}_k := (\vartheta_0, \ldots, \vartheta_{k-2}, \vartheta_{k-1} - m_\xi \vartheta_k, \vartheta_k)^\top \in \mathbb{R}^{k+1} .
\]
We can write \ ${\boldsymbol{\theta}}_k = {\boldsymbol{\theta}}_k^{(1)} + {\boldsymbol{\theta}}_k^{(2)}$ \ with
\[
{\boldsymbol{\theta}}_k^{(1)} := (\vartheta_0, \ldots, \vartheta_{k-2}, \vartheta_{k-1}, 0)^\top , \qquad
{\boldsymbol{\theta}}_k^{(2)} := (0, \ldots, 0, - m_\xi \vartheta_k, \vartheta_k)^\top .
\]
We have \ $\langle{\boldsymbol{\theta}}_k^{(2)}, {\boldsymbol{v}}_j^{(k)}\rangle = 0$ \ for each \ $j \in \{0, \ldots, k - 1\}$, \ and \ $\langle{\boldsymbol{\theta}}_k^{(1)}, {\boldsymbol{v}}_k^{(k)}\rangle = 0$, \ hence
\[
\langle{\boldsymbol{\theta}}_k,{\boldsymbol{v}}_j^{(k)}\rangle
= \begin{cases}
\langle{\boldsymbol{\theta}}_k^{(1)},{\boldsymbol{v}}_j^{(k)}\rangle & \text{if \ $j \in \{0, \ldots, k - 1\}$,} \\
\langle{\boldsymbol{\theta}}_k^{(2)},{\boldsymbol{v}}_k^{(k)}\rangle & \text{if \ $j = k$.}
\end{cases}
\]
In case of \ $\alpha \in (0, 1) \cup (1, 2)$, \ by Theorem \ref{simple_aggregation1_stable_fd}, we have
\begin{align*}
&\operatorname{\mathbb{E}}\bigl(\mathrm{e}^{\mathrm{i}(\vartheta_0{\mathcal Y}^{(\alpha)}_0+\cdots+\vartheta_{k-1}{\mathcal Y}^{(\alpha)}_{k-1}+\vartheta_k\widetilde{\varepsilon}^{(\alpha)}_k)}\bigr) \\
&= \exp\biggl\{- C_\alpha (1 - m_\xi^\alpha) \sum_{j=0}^k |\langle{\boldsymbol{\theta}}_k,{\boldsymbol{v}}_j^{(k)}\rangle|^\alpha \left(1 - \mathrm{i} \tan\left(\frac{\pi\alpha}{2}\right) \operatorname{sign}(\langle{\boldsymbol{\theta}}_k,{\boldsymbol{v}}_j^{(k)}\rangle)\right) - \mathrm{i} \frac{\alpha}{1-\alpha} \langle{\boldsymbol{\theta}}_k, {\boldsymbol{1}}_{k+1}\rangle\biggr\} \\
&= \exp\biggl\{- C_\alpha (1 - m_\xi^\alpha) \sum_{j=0}^{k-1} |\langle{\boldsymbol{\theta}}_k^{(1)},{\boldsymbol{v}}_j^{(k)}\rangle|^\alpha \left(1 - \mathrm{i} \tan\left(\frac{\pi\alpha}{2}\right) \operatorname{sign}(\langle{\boldsymbol{\theta}}_k^{(1)},{\boldsymbol{v}}_j^{(k)}\rangle)\right) - \mathrm{i} \frac{\alpha}{1-\alpha} \langle{\boldsymbol{\theta}}_k^{(1)}, {\boldsymbol{1}}_{k+1}\rangle \\
&\phantom{= \exp\biggl\{} - C_\alpha (1 - m_\xi^\alpha) |\langle{\boldsymbol{\theta}}_k^{(2)},{\boldsymbol{v}}_k^{(k)}\rangle|^\alpha \left(1 - \mathrm{i} \tan\left(\frac{\pi\alpha}{2}\right) \operatorname{sign}(\langle{\boldsymbol{\theta}}_k^{(2)},{\boldsymbol{v}}_k^{(k)}\rangle)\right) - \mathrm{i} \frac{\alpha}{1-\alpha} \langle{\boldsymbol{\theta}}_k^{(2)}, {\boldsymbol{1}}_{k+1}\rangle \biggr\} \\
&= \operatorname{\mathbb{E}}\bigl(\mathrm{e}^{\mathrm{i}(\vartheta_0{\mathcal Y}^{(\alpha)}_0+\cdots+\vartheta_{k-1}{\mathcal Y}^{(\alpha)}_{k-1})}\bigr) \operatorname{\mathbb{E}}\bigl(\mathrm{e}^{\mathrm{i}\vartheta_k\widetilde{\varepsilon}^{(\alpha)}_k}\bigr) ,
\end{align*}
where we used that
\begin{align*}
&\langle{\boldsymbol{\theta}}_k^{(1)},{\boldsymbol{v}}_j^{(k)}\rangle = \langle (\vartheta_0, \ldots, \vartheta_{k-1})^\top , {\boldsymbol{v}}_j^{(k-1)} \rangle, \qquad j=0,1,\ldots,k-1,\\
&\langle{\boldsymbol{\theta}}_k^{(1)}, {\boldsymbol{1}}_{k+1}\rangle = \langle (\vartheta_0, \ldots, \vartheta_{k-1})^\top , {\boldsymbol{1}}_k \rangle,\\
&\langle{\boldsymbol{\theta}}_k^{(2)},{\boldsymbol{v}}_k^{(k)}\rangle = \vartheta_k,\qquad \langle{\boldsymbol{\theta}}_k^{(2)}, {\boldsymbol{1}}_{k+1}\rangle = \vartheta_k,
\end{align*}
and that \ $\vartheta_0 = \vartheta_1 = \ldots = \vartheta_{k-1} = 0$ \ yields \ ${\boldsymbol{\theta}}_k^{(1)} = \boldsymbol{0}\in\mathbb{R}^{k+1}$.
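As a numerical sanity check of the orthogonality relations used above, the sketch below (Python; illustrative parameter values) verifies $\langle{\boldsymbol{\theta}}_k^{(2)}, {\boldsymbol{v}}_j^{(k)}\rangle = 0$ for $j < k$, $\langle{\boldsymbol{\theta}}_k^{(1)}, {\boldsymbol{v}}_k^{(k)}\rangle = 0$ and $\langle{\boldsymbol{\theta}}_k^{(2)}, {\boldsymbol{v}}_k^{(k)}\rangle = \vartheta_k$. The coordinates of ${\boldsymbol{v}}_j^{(k)}$ are reconstructed from the formulas $\langle{\boldsymbol{e}}_\ell, {\boldsymbol{v}}_j^{(k)}\rangle = m_\xi^{\ell-1-j}$ for $\ell \geqslant j+1$ ($j \geqslant 1$) and $\langle{\boldsymbol{e}}_\ell, {\boldsymbol{v}}_0^{(k)}\rangle = m_\xi^{\ell-1}/(1-m_\xi^\alpha)^{1/\alpha}$ used later in these proofs, so this reconstruction is an assumption, not the paper's definition verbatim.

```python
import numpy as np

def v(j, k, m, alpha):
    # coordinates <e_l, v_j^{(k)}>, l = 1, ..., k+1 (reconstructed from the proofs; an assumption)
    out = np.zeros(k + 1)
    for l in range(1, k + 2):
        if j == 0:
            out[l - 1] = m ** (l - 1) / (1.0 - m ** alpha) ** (1.0 / alpha)
        elif l - 1 >= j:
            out[l - 1] = m ** (l - 1 - j)
    return out

m, alpha, k = 0.4, 1.5, 6                      # illustrative values
rng = np.random.default_rng(0)
theta = rng.normal(size=k + 1)                 # (vartheta_0, ..., vartheta_k)
theta1 = np.append(theta[:k], 0.0)             # theta_k^{(1)}
theta2 = np.zeros(k + 1)                       # theta_k^{(2)} = (0, ..., 0, -m*vartheta_k, vartheta_k)
theta2[k - 1], theta2[k] = -m * theta[k], theta[k]

max_orth2 = max(abs(theta2 @ v(j, k, m, alpha)) for j in range(k))  # should all vanish
orth1 = abs(theta1 @ v(k, k, m, alpha))                             # should vanish
inner2k = theta2 @ v(k, k, m, alpha)                                # should equal vartheta_k
```

The cancellation for $j < k$ is exactly the geometric relation $-m_\xi \cdot m_\xi^{k-1-j} + m_\xi^{k-j} = 0$.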
\ Thus we obtain the independence of \ $({\mathcal Y}^{(\alpha)}_0, \ldots, {\mathcal Y}^{(\alpha)}_{k-1})^\top$ \ and \ $\widetilde{\varepsilon}^{(\alpha)}_k$, \ and the characteristic function of \ $\widetilde{\varepsilon}^{(\alpha)}_k$ \ has the form
\[
\operatorname{\mathbb{E}}\bigl(\mathrm{e}^{\mathrm{i}\vartheta_k\widetilde{\varepsilon}^{(\alpha)}_k}\bigr)
= \exp\biggl\{- C_\alpha (1 - m_\xi^\alpha) |\vartheta_k|^\alpha \left(1 - \mathrm{i} \tan\left(\frac{\pi\alpha}{2}\right) \operatorname{sign}(\vartheta_k)\right) - \mathrm{i} \frac{\alpha}{1-\alpha} \vartheta_k\biggr\} ,
\qquad \vartheta_k \in \mathbb{R} ,
\]
hence \ $\widetilde{\varepsilon}^{(\alpha)}_k$ \ is \ $\alpha$-stable (see, e.g., Sato \cite[Theorem 14.10]{Sato}).
In fact, \ $\widetilde{\varepsilon}^{(\alpha)}_k + \frac{\alpha}{1-\alpha} \distre (1 - m_\xi^\alpha)^{\frac{1}{\alpha}} \bigl({\mathcal Y}^{(\alpha)}_0 + \frac{\alpha}{1-\alpha}\bigr)$, \ $k \in \mathbb{N}$, \ since for all \ $\vartheta_k \in \mathbb{R}$, \ we have
\begin{align*}
&\operatorname{\mathbb{E}}\Bigl(\mathrm{e}^{\mathrm{i}\vartheta_k(1-m_\xi^\alpha)^{\frac{1}{\alpha}}\bigl({\mathcal Y}^{(\alpha)}_0+\frac{\alpha}{1-\alpha}\bigr)}\Bigr)
= \mathrm{e}^{\mathrm{i} \frac{\alpha}{1-\alpha}\vartheta_k (1 - m_\xi^\alpha)^{\frac{1}{\alpha}}} \widehat\mu_{0,\alpha} \bigl(\vartheta_k (1 - m_\xi^\alpha)^{\frac{1}{\alpha}}\bigr) \\
&= \exp\Bigl\{- C_\alpha (1 - m_\xi^\alpha) |\langle\vartheta_k (1 - m_\xi^\alpha)^{\frac{1}{\alpha}}, (1 - m_\xi^\alpha)^{-\frac{1}{\alpha}}\rangle|^\alpha \\
&\phantom{= \exp\Bigl\{-} \times \left(1 - \mathrm{i} \tan\left(\frac{\pi\alpha}{2}\right) \operatorname{sign}(\langle \vartheta_k (1 - m_\xi^\alpha)^{\frac{1}{\alpha}}, (1 - m_\xi^\alpha)^{- \frac{1}{\alpha}}\rangle)\right)\Bigr\} \\
&= \exp\Bigl\{- C_\alpha (1 - m_\xi^\alpha) |\vartheta_k|^\alpha \left(1 - \mathrm{i} \tan\left(\frac{\pi\alpha}{2}\right) \operatorname{sign}(\vartheta_k)\right)\Bigr\} .
\end{align*}
Further, \ $\bigl({\mathcal Y}^{(\alpha)}_k + \frac{\alpha}{1-\alpha}\bigr)_{k\in\mathbb{Z}_+}$ \ is also a strongly stationary \ $\alpha$-stable time-homogeneous Markov process.
In fact, it is a subcritical autoregressive process of order 1 with autoregressive coefficient \ $m_\xi$ \ such that the distribution of its innovations satisfies
\begin{align*}
\biggl({\mathcal Y}^{(\alpha)}_k + \frac{\alpha}{1-\alpha}\biggr) - m_\xi \biggl({\mathcal Y}^{(\alpha)}_{k-1} + \frac{\alpha}{1-\alpha}\biggr)
&= \widetilde{\varepsilon}^{(\alpha)}_k + (1-m_\xi)\frac{\alpha}{1-\alpha} \\
&\distre (1 - m_\xi^\alpha)^{\frac{1}{\alpha}} \biggl({\mathcal Y}^{(\alpha)}_0 + \frac{\alpha}{1-\alpha}\biggr) - m_\xi\frac{\alpha}{1-\alpha} , \qquad k \in \mathbb{N} .
\end{align*}
In case of \ $\alpha = 1$, \ by Theorem \ref{simple_aggregation1_stable_fd}, we have
\begin{align*}
&\operatorname{\mathbb{E}}\bigl(\mathrm{e}^{\mathrm{i}(\vartheta_0{\mathcal Y}^{(1)}_0+\cdots+\vartheta_{k-1}{\mathcal Y}^{(1)}_{k-1}+\vartheta_k\widetilde{\varepsilon}^{(1)}_k)}\bigr) \\
&= \exp\biggl\{- C_1 (1 - m_\xi) \sum_{j=0}^k |\langle{\boldsymbol{\theta}}_k,{\boldsymbol{v}}_j^{(k)}\rangle| \Bigl(1 + \mathrm{i} \frac{2}{\pi} \operatorname{sign}(\langle{\boldsymbol{\theta}}_k,{\boldsymbol{v}}_j^{(k)}\rangle) \log(|\langle{\boldsymbol{\theta}}_k,{\boldsymbol{v}}_j^{(k)}\rangle|)\Bigr) \\
&\phantom{= \exp\biggl\{} + \mathrm{i} C \langle{\boldsymbol{\theta}}_k, {\boldsymbol{1}}_{k+1}\rangle + \mathrm{i} (1 - m_\xi) \sum_{j=0}^k \sum_{\ell=j+1}^{k+1} \langle{\boldsymbol{e}}_\ell, {\boldsymbol{\theta}}_k\rangle \langle{\boldsymbol{e}}_\ell, {\boldsymbol{v}}_j^{(k)}\rangle \log(\langle{\boldsymbol{e}}_\ell, {\boldsymbol{v}}_j^{(k)}\rangle)\biggr\} .
\end{align*}
If \ $\alpha = 1$ \ and \ $m_\xi = 0$, \ then for each \ $j \in \{0, \ldots, k\}$, \ we have \ ${\boldsymbol{v}}_j^{(k)} = {\boldsymbol{e}}_{j+1}$, \ and hence \ $\langle{\boldsymbol{\theta}}_k,{\boldsymbol{v}}_j^{(k)}\rangle = \vartheta_j$, \ and \ $\langle{\boldsymbol{e}}_\ell, {\boldsymbol{v}}_j^{(k)}\rangle = 1$ \ if \ $\ell = j + 1$ \ and \ $\langle{\boldsymbol{e}}_\ell, {\boldsymbol{v}}_j^{(k)}\rangle = 0$ \ if \ $\ell \ne j + 1$.
\ Consequently,
\begin{align*}
&\operatorname{\mathbb{E}}\bigl(\mathrm{e}^{\mathrm{i}(\vartheta_0{\mathcal Y}^{(1)}_0+\cdots+\vartheta_{k-1}{\mathcal Y}^{(1)}_{k-1}+\vartheta_k\widetilde{\varepsilon}^{(1)}_k)}\bigr) \\
&= \exp\biggl\{- C_1 \sum_{j=0}^k |\vartheta_j| \Bigl(1 + \mathrm{i} \frac{2}{\pi} \operatorname{sign}(\vartheta_j) \log(|\vartheta_j|)\Bigr) + \mathrm{i} C \sum_{j=0}^k \vartheta_j\biggr\} \\
&= \operatorname{\mathbb{E}}\bigl(\mathrm{e}^{\mathrm{i}\vartheta_0{\mathcal Y}^{(1)}_0}\bigr) \cdots \operatorname{\mathbb{E}}\bigl(\mathrm{e}^{\mathrm{i}\vartheta_{k-1}{\mathcal Y}^{(1)}_{k-1}}\bigr) \operatorname{\mathbb{E}}\bigl(\mathrm{e}^{\mathrm{i}\vartheta_k\widetilde{\varepsilon}^{(1)}_k}\bigr) ,
\end{align*}
thus we obtain the independence of \ ${\mathcal Y}^{(1)}_0$, \ldots, ${\mathcal Y}^{(1)}_{k-1}$ \ and \ $\widetilde{\varepsilon}^{(1)}_k$, \ and the characteristic function of \ $\widetilde{\varepsilon}^{(1)}_k$ \ has the form
\[
\operatorname{\mathbb{E}}\bigl(\mathrm{e}^{\mathrm{i}\vartheta_k\widetilde{\varepsilon}^{(1)}_k}\bigr)
= \exp\Bigl\{- C_1 |\vartheta_k| \Bigl(1 + \mathrm{i} \frac{2}{\pi} \operatorname{sign}(\vartheta_k) \log(|\vartheta_k|)\Bigr) + \mathrm{i} C \vartheta_k\Bigr\} ,
\qquad \vartheta_k \in \mathbb{R} ,
\]
hence \ $\widetilde{\varepsilon}^{(1)}_k$ \ is \ $1$-stable (see, e.g., Sato \cite[Theorem 14.10]{Sato}).
In fact, \ $\widetilde{\varepsilon}^{(1)}_k \distre {\mathcal Y}^{(1)}_0$, \ and \ $({\mathcal Y}_k^{(1)})_{k\in\mathbb{Z}_+}$ \ is a sequence of independent, identically distributed \ $1$-stable random variables.

If \ $\alpha = 1$ \ and \ $m_\xi \in (0, 1)$, \ then, by Theorem \ref{simple_aggregation1_stable_fd},
\begin{align*}
&\operatorname{\mathbb{E}}\bigl(\mathrm{e}^{\mathrm{i}(\vartheta_0{\mathcal Y}^{(1)}_0+\cdots+\vartheta_{k-1}{\mathcal Y}^{(1)}_{k-1}+\vartheta_k\widetilde{\varepsilon}^{(1)}_k)}\bigr) \\
&= \exp\biggl\{- C_1 (1 - m_\xi) \sum_{j=0}^{k-1} |\langle{\boldsymbol{\theta}}_k^{(1)},{\boldsymbol{v}}_j^{(k)}\rangle| \Bigl(1 + \mathrm{i} \frac{2}{\pi} \operatorname{sign}(\langle{\boldsymbol{\theta}}_k^{(1)},{\boldsymbol{v}}_j^{(k)}\rangle) \log(|\langle{\boldsymbol{\theta}}_k^{(1)},{\boldsymbol{v}}_j^{(k)}\rangle|)\Bigr) \\
&\phantom{= \exp\biggl\{} + \mathrm{i} C \langle{\boldsymbol{\theta}}_k^{(1)}, {\boldsymbol{1}}_{k+1}\rangle + \mathrm{i} (1 - m_\xi) \sum_{j=0}^{k-1} \sum_{\ell=j+1}^k \langle{\boldsymbol{e}}_\ell, {\boldsymbol{\theta}}_k^{(1)}\rangle \langle{\boldsymbol{e}}_\ell, {\boldsymbol{v}}_j^{(k)}\rangle \log(\langle{\boldsymbol{e}}_\ell, {\boldsymbol{v}}_j^{(k)}\rangle) \\
&\phantom{= \exp\biggl\{} - C_1 (1 - m_\xi) |\langle{\boldsymbol{\theta}}_k^{(2)},{\boldsymbol{v}}_k^{(k)}\rangle| \Bigl(1 + \mathrm{i} \frac{2}{\pi} \operatorname{sign}(\langle{\boldsymbol{\theta}}_k^{(2)},{\boldsymbol{v}}_k^{(k)}\rangle) \log(|\langle{\boldsymbol{\theta}}_k^{(2)},{\boldsymbol{v}}_k^{(k)}\rangle|)\Bigr) \\
&\phantom{= \exp\biggl\{} + \mathrm{i} C \langle{\boldsymbol{\theta}}_k^{(2)}, {\boldsymbol{1}}_{k+1}\rangle + \mathrm{i} (1 - m_\xi) \sum_{j=0}^k \sum_{\ell=j+1}^{k+1} \langle{\boldsymbol{e}}_\ell, {\boldsymbol{\theta}}_k^{(2)}\rangle \langle{\boldsymbol{e}}_\ell, {\boldsymbol{v}}_j^{(k)}\rangle \log(\langle{\boldsymbol{e}}_\ell, {\boldsymbol{v}}_j^{(k)}\rangle)\biggr\} \\
&= \operatorname{\mathbb{E}}\bigl(\mathrm{e}^{\mathrm{i}(\vartheta_0{\mathcal Y}^{(1)}_0+\cdots+\vartheta_{k-1}{\mathcal Y}^{(1)}_{k-1})}\bigr) \operatorname{\mathbb{E}}\bigl(\mathrm{e}^{\mathrm{i}\vartheta_k\widetilde{\varepsilon}^{(1)}_k}\bigr) ,
\end{align*}
where we used that \ $\langle{\boldsymbol{e}}_\ell, {\boldsymbol{\theta}}_k^{(1)}\rangle = 0$ \ if \ $\ell = k + 1$, \ and that \ $\vartheta_0 = \vartheta_1 = \ldots = \vartheta_{k-1} = 0$ \ yields \ ${\boldsymbol{\theta}}_k^{(1)} = \boldsymbol{0}\in\mathbb{R}^{k+1}$.
\ Thus we obtain the independence of \ $({\mathcal Y}^{(1)}_0, \ldots, {\mathcal Y}^{(1)}_{k-1})^\top$ \ and \ $\widetilde{\varepsilon}^{(1)}_k$, \ and the characteristic function of \ $\widetilde{\varepsilon}^{(1)}_k$ \ has the form
\begin{align*}
\operatorname{\mathbb{E}}\bigl(\mathrm{e}^{\mathrm{i}\vartheta_k\widetilde{\varepsilon}^{(1)}_k}\bigr)
&= \exp\biggl\{- C_1 (1 - m_\xi) |\vartheta_k| \Bigl(1 + \mathrm{i} \frac{2}{\pi} \operatorname{sign}(\vartheta_k) \log(|\vartheta_k|)\Bigr) + \mathrm{i} C \vartheta_k \\
&\phantom{= \exp\biggl\{} + \mathrm{i} (1 - m_\xi) \sum_{j=0}^k \sum_{\ell=j+1}^{k+1} \langle{\boldsymbol{e}}_\ell, {\boldsymbol{\theta}}_k^{(2)}\rangle \langle{\boldsymbol{e}}_\ell, {\boldsymbol{v}}_j^{(k)}\rangle \log(\langle{\boldsymbol{e}}_\ell, {\boldsymbol{v}}_j^{(k)}\rangle)\biggr\}
\end{align*}
\begin{align*}
&= \exp\biggl\{- C_1 (1 - m_\xi) |\vartheta_k| \Bigl(1 + \mathrm{i} \frac{2}{\pi} \operatorname{sign}(\vartheta_k) \log(|\vartheta_k|)\Bigr) + \mathrm{i} \vartheta_k (C + m_\xi\log(m_\xi))\biggr\}
\end{align*}
for \ $\vartheta_k \in \mathbb{R}$, \ since
\begin{align*}
&\sum_{j=0}^k \sum_{\ell=j+1}^{k+1} \langle{\boldsymbol{e}}_\ell, {\boldsymbol{\theta}}_k^{(2)}\rangle \langle{\boldsymbol{e}}_\ell, {\boldsymbol{v}}_j^{(k)}\rangle \log(\langle{\boldsymbol{e}}_\ell, {\boldsymbol{v}}_j^{(k)}\rangle)
= \sum_{\ell=1}^{k+1} \langle{\boldsymbol{e}}_\ell, {\boldsymbol{\theta}}_k^{(2)}\rangle \sum_{j=0}^{\ell-1} \langle{\boldsymbol{e}}_\ell, {\boldsymbol{v}}_j^{(k)}\rangle \log(\langle{\boldsymbol{e}}_\ell, {\boldsymbol{v}}_j^{(k)}\rangle) \\
&= (- m_\xi \vartheta_k) \biggl(\frac{m_\xi^{k-1}}{1-m_\xi} \log\biggl(\frac{m_\xi^{k-1}}{1-m_\xi}\biggr) + \sum_{j=1}^{k-1} m_\xi^{k-j-1} \log(m_\xi^{k-j-1})\biggr) \\
&\quad + \vartheta_k \biggl(\frac{m_\xi^k}{1-m_\xi} \log\biggl(\frac{m_\xi^k}{1-m_\xi}\biggr) + \sum_{j=1}^k m_\xi^{k-j} \log(m_\xi^{k-j})\biggr) \\
&= \vartheta_k \biggl(\frac{m_\xi^k}{1-m_\xi} \log(m_\xi) + \sum_{j=1}^{k-1} m_\xi^{k-j} \log(m_\xi)\biggr)
= \vartheta_k \biggl(\frac{m_\xi^k}{1-m_\xi} + \frac{m_\xi-m_\xi^k}{1-m_\xi}\biggr) \log(m_\xi) \\
&= \vartheta_k \frac{m_\xi}{1-m_\xi} \log(m_\xi) .
\end{align*}
Hence \ $\widetilde{\varepsilon}^{(1)}_k$ \ is \ $1$-stable (see, e.g., Sato \cite[Theorem 14.10]{Sato}).
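The geometric-summation identity just established can be double-checked numerically; a minimal sketch with illustrative values of $m_\xi$, $k$ and $\vartheta_k$:

```python
import math

m, k, theta = 0.3, 5, 1.7   # illustrative: m_xi in (0, 1), k >= 2, vartheta_k

# left-hand side: the two bracketed sums, weighted by -m*theta and theta
lhs = (-m * theta) * (m ** (k - 1) / (1 - m) * math.log(m ** (k - 1) / (1 - m))
                      + sum(m ** (k - j - 1) * math.log(m ** (k - j - 1)) for j in range(1, k)))
lhs += theta * (m ** k / (1 - m) * math.log(m ** k / (1 - m))
                + sum(m ** (k - j) * math.log(m ** (k - j)) for j in range(1, k + 1)))

# right-hand side: theta * m/(1-m) * log(m)
rhs = theta * m / (1 - m) * math.log(m)
```

The agreement (up to floating-point error) confirms the telescoping of the $\log(1-m_\xi)$ terms between the two brackets.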
In fact, \[ \widetilde{\operatorname{Var}e}^{(1)}_k - m_\xi(C+\log(m_\xi)) \distre (1 - m_\xi) ({\mathcal Y}^{(1)}_0 + \log(1 - m_\xi)) , \qquad k \in \mathbb{N}, \] since for all \ $\operatorname{Var}theta_k \in \mathbb{R}$, {\boldsymbol{e}}gin{align*} \operatorname{\mathbb{E}}(\mathrm{e}^{\mathrm{i}\operatorname{Var}theta_k(1-m_\xi){\mathcal Y}^{(1)}_0}) &= \exp\Big\{ -C_1 (1-m_\xi) \vert\operatorname{Var}theta_k\vert \Bigl(1 + \mathrm{i} \frac{2}{\pi} \operatorname{sign}(\operatorname{Var}theta_k) \log(|\operatorname{Var}theta_k|)\Bigr) \\ &\phantom{= \exp\Big\{\;} + \mathrm{i} C (1 - m_\xi) \operatorname{Var}theta_k - \mathrm{i} \operatorname{Var}theta_k (1-m_\xi) \log(1-m_\xi) \Big\} . \end{align*} \mbox{$\Box$} \noindent{\bf Proof of Corollary \ref{aggr_copies1}.} It follows from Theorem \ref{simple_aggregation1_stable_fd} and Corollary \ref{simple_aggregation1_stable_centering_fd} using the continuous mapping theorem. \mbox{$\Box$} \noindent{\bf Proof of Theorem \ref{iterated_aggr_1}.} In case of \ $\alpha \in (0, 1)$, \ by \eqref{centering01}, we have \[ \lim_{N\to\infty} \frac{{\lfloor nt\rfloor} N}{n^{\frac{1}{\alpha}}a_N} \operatorname{\mathbb{E}}(X_0 {\boldsymbol{b}}one_{\{X_0\leqslant a_N\}}) = \frac{{\lfloor nt\rfloor}}{n^{\frac{1}{\alpha}}} \frac{\alpha}{1-\alpha} \to 0 \qquad \widetilde{e}xt{as \ $n \to \infty$,} \] hence, by Slutsky's lemma, \eqref{iterated_aggr_1_1} will be a consequence of \eqref{iterated_aggr_1_2}. 
For each \ $n \in \mathbb{N}$, \ by Corollary \ref{aggr_copies1} and by the continuous mapping theorem, we obtain \[ \biggl(\frac{1}{n^{\frac{1}{\alpha}}a_N} \sum_{k=1}^{\lfloor nt\rfloor} \sum_{j=1}^N X^{(j)}_k\biggr)_{t\in\mathbb{R}_+} \distrf \biggl(\frac{1}{n^{\frac{1}{\alpha}}} \sum_{k=1}^{\lfloor nt\rfloor} \biggl({\mathcal Y}^{(\alpha)}_k + \frac{\alpha}{1-\alpha}\biggr)\biggr)_{t\in\mathbb{R}_+} \qquad \widetilde{e}xt{as \ $N \to \infty$} \] in case of \ $\alpha \in (0, 1)$, \ and \[ \biggl(\frac{1}{n^{\frac{1}{\alpha}}a_N} \sum_{k=1}^{\lfloor nt\rfloor} \sum_{j=1}^N \bigl(X^{(j)}_k - \operatorname{\mathbb{E}}(X^{(j)}_k)\bigr)\biggr)_{t\in\mathbb{R}_+} \distrf \biggl(\frac{1}{n^{\frac{1}{\alpha}}} \sum_{k=1}^{\lfloor nt\rfloor} \biggl({\mathcal Y}^{(\alpha)}_k + \frac{\alpha}{1-\alpha} \biggr)\biggr)_{t\in\mathbb{R}_+} \qquad \widetilde{e}xt{as \ $N \to \infty$} \] in case of \ $\alpha \in (1, 2)$. \ Consequently, in order to prove \eqref{iterated_aggr_1_2} and \eqref{iterated_aggr_1_4}, we need to show that for each \ $\alpha \in (0,1) \cup (1,2)$, \ we have {\boldsymbol{e}}gin{equation}\label{conv_Z} \biggl(\frac{1}{n^{\frac{1}{\alpha}}} \sum_{k=1}^{\lfloor nt\rfloor} \biggl({\mathcal Y}^{(\alpha)}_k + \frac{\alpha}{1-\alpha}\biggr)\biggr)_{t\in\mathbb{R}_+} \distrf \Bigl({\mathcal Z}^{(\alpha)}_t + \frac{\alpha}{1-\alpha} t\Bigr)_{t\in\mathbb{R}_+} \qquad \widetilde{e}xt{as \ $n \to \infty$.} \end{equation} For each \ $\alpha\in(0,1)\cup(1,2)$, \ $n \in \mathbb{N}$, \ $d \in \mathbb{N}$, \ $t_1, \ldots, t_d \in \mathbb{R}_{++}$ \ with \ $t_1 < \ldots < t_d$ \ and \ $\operatorname{Var}theta_1, \ldots, \operatorname{Var}theta_d \in \mathbb{R}$, \ we have \[ \operatorname{\mathbb{E}}\biggl(\exp\biggl\{\mathrm{i} \sum_{\ell=1}^d \frac{\operatorname{Var}theta_\ell}{n^{\frac{1}{\alpha}}} \sum_{k=\lfloor nt_{\ell-1}\rfloor+1}^{\lfloor nt_\ell\rfloor} \Bigl({\mathcal Y}^{(\alpha)}_k + \frac{\alpha}{1-\alpha}\Bigr)\biggr\}\biggr) = 
\operatorname{\mathbb{E}}\Bigl(\exp\Bigl\{\mathrm{i} \Bigl\langle n^{-\frac{1}{\alpha}} {\boldsymbol{\theta}}_n, {\boldsymbol{c}}X_1^{(\lfloor nt_d\rfloor,\alpha)} + \frac{\alpha}{1-\alpha} {\boldsymbol{1}}_{\lfloor nt_d\rfloor+1}\Bigr\rangle\Bigr\}\Bigr) \] with \ $t_0:=0$ \ and \[ {\boldsymbol{\theta}}_n := \sum_{\ell=1}^d \operatorname{Var}theta_\ell \sum_{k=\lfloor nt_{\ell-1}\rfloor+1}^{\lfloor nt_\ell\rfloor} {\boldsymbol{e}}_{k+1} \in \mathbb{R}^{\lfloor nt_d\rfloor+1} . \] For each \ $\alpha\in(0,1)\cup(1,2)$, \ by the explicit form of the characteristic function of \ ${\boldsymbol{c}}X_1^{(\lfloor nt_d\rfloor,\alpha)}$ \ given in Theorem \ref{simple_aggregation1_stable_fd}, {\boldsymbol{e}}gin{align*} &\operatorname{\mathbb{E}}\Bigl(\exp\Bigl\{\mathrm{i} \Bigl\langle n^{-\frac{1}{\alpha}} {\boldsymbol{\theta}}_n, {\boldsymbol{c}}X_1^{(\lfloor nt_d\rfloor,\alpha)} + \frac{\alpha}{1-\alpha} {\boldsymbol{1}}_{\lfloor nt_d\rfloor+1}\Bigr\rangle\Bigr\}\Bigr) \\ &= \exp\biggl\{- C_\alpha (1 - m_\xi^\alpha) \sum_{j=0}^{\lfloor nt_d\rfloor} \bigl|\bigl\langle n^{- \frac{1}{\alpha} } {\boldsymbol{\theta}}_n, {\boldsymbol{v}}_j^{(\lfloor nt_d\rfloor)}\bigr\rangle\bigr|^\alpha \left(1 - \mathrm{i} \widetilde{a}n\left(\frac{\pi\alpha}{2}\right) \operatorname{sign}\bigl(\bigl\langle n^{- \frac{1}{\alpha}} {\boldsymbol{\theta}}_n,{\boldsymbol{v}}_j^{(\lfloor nt_d\rfloor)}\bigr\rangle\bigr)\right)\biggr\} \\ &= \exp\biggl\{- C_\alpha (1 - m_\xi^\alpha) n^{-1} \sum_{j=0}^{\lfloor nt_d\rfloor} \bigl|\bigl\langle{\boldsymbol{\theta}}_n, {\boldsymbol{v}}_j^{(\lfloor nt_d\rfloor)}\bigr\rangle\bigr|^\alpha \left(1 - \mathrm{i} \widetilde{a}n\left(\frac{\pi\alpha}{2}\right) \operatorname{sign}\bigl(\bigl\langle{\boldsymbol{\theta}}_n,{\boldsymbol{v}}_j^{(\lfloor nt_d\rfloor)}\bigr\rangle\bigr)\right)\biggr\} . 
\end{align*} We have {\boldsymbol{e}}gin{align*} \bigl\langle{\boldsymbol{\theta}}_n, {\boldsymbol{v}}_0^{(\lfloor nt_d\rfloor)}\bigr\rangle &= \sum_{i=1}^d \operatorname{Var}theta_i \sum_{k=\lfloor nt_{i-1}\rfloor+1}^{\lfloor nt_i\rfloor} \bigl\langle{\boldsymbol{e}}_{k+1}, {\boldsymbol{v}}_0^{(\lfloor nt_d\rfloor)}\bigr\rangle = \sum_{i=1}^d \operatorname{Var}theta_i \sum_{k=\lfloor nt_{i-1}\rfloor+1}^{\lfloor nt_i\rfloor} \frac{m_\xi^k}{(1-m_\xi^\alpha)^{\frac{1}{\alpha}}} \\ &= \frac{1}{(1-m_\xi^\alpha)^{\frac{1}{\alpha}}(1-m_\xi)} \sum_{i=1}^d \operatorname{Var}theta_i (m_\xi^{\lfloor nt_{i-1}\rfloor+1} - m_\xi^{\lfloor nt_i\rfloor+1}) , \end{align*} hence for each \ $\alpha\in(0, 1) \cup (1,2)$, \[ n^{-1} \bigl|\bigl\langle{\boldsymbol{\theta}}_n, {\boldsymbol{v}}_0^{(\lfloor nt_d\rfloor)}\bigr\rangle\bigr|^\alpha \left(1 - \mathrm{i} \widetilde{a}n\left(\frac{\pi\alpha}{2}\right) \operatorname{sign}\bigl(\bigl\langle{\boldsymbol{\theta}}_n,{\boldsymbol{v}}_0^{(\lfloor nt_d\rfloor)}\bigr\rangle\bigr)\right) \to 0 \qquad \widetilde{e}xt{as \ $n \to \infty$.} \] The aim of the following discussion is to show that for each \ $\alpha\in(0, 1)\cup(1,2)$ \ and \ $\ell \in \{1, \ldots, d\}$, {\boldsymbol{e}}gin{equation}\label{conv_alpha} n^{-1} \sum_{j=\lfloor nt_{\ell-1}\rfloor+1}^{\lfloor nt_\ell\rfloor} \bigl|\bigl\langle{\boldsymbol{\theta}}_n, {\boldsymbol{v}}_j^{(\lfloor nt_d\rfloor)}\bigr\rangle\bigr|^\alpha \to \frac{(t_\ell-t_{\ell-1})|\operatorname{Var}theta_\ell|^\alpha}{(1-m_\xi)^\alpha}\qquad \widetilde{e}xt{as \ $n \to \infty$.} \end{equation} Here, for each \ $\ell \in \{1, \ldots, d\}$ \ and \ $j \in \{\lfloor nt_{\ell-1}\rfloor + 1, \ldots, \lfloor nt_\ell\rfloor\}$, {\boldsymbol{e}}gin{align}\label{help_7} {\boldsymbol{e}}gin{split} \bigl\langle{\boldsymbol{\theta}}_n, {\boldsymbol{v}}_j^{(\lfloor nt_d\rfloor)}\bigr\rangle &= \sum_{i=1}^d \operatorname{Var}theta_i \sum_{k=\lfloor nt_{i-1}\rfloor+1}^{\lfloor nt_i\rfloor} 
\bigl\langle{\boldsymbol{e}}_{k+1}, {\boldsymbol{v}}_j^{(\lfloor nt_d\rfloor)}\bigr\rangle = \sum_{i=1}^d \operatorname{Var}theta_i \sum_{k=\lfloor nt_{i-1}\rfloor+1}^{\lfloor nt_i\rfloor} m_\xi^{k-j} {\boldsymbol{b}}one_{\{k\geqslant j\}} \\ &= \frac{1}{1-m_\xi} \biggl(\operatorname{Var}theta_\ell (1 - m_\xi^{\lfloor nt_\ell\rfloor-j+1}) + \sum_{i=\ell+1}^d \operatorname{Var}theta_i (m_\xi^{\lfloor nt_{i-1}\rfloor-j+1} - m_\xi^{\lfloor nt_i\rfloor-j+1})\biggr) . \end{split} \end{align} In case of \ $\alpha \in (0, 1]$, \ we have {\boldsymbol{e}}gin{align}\label{help_normlike_alpha01} |x|^\alpha - |y|^\alpha \leqslant |x+y|^\alpha \leqslant |x|^\alpha + |y|^\alpha , \qquad x, y \in \mathbb{R} . \end{align} In case of \ $\alpha \in (1, 2)$, \ by the mean value theorem and by \eqref{help_normlike_alpha01}, we have \[ \bigl||x + y|^\alpha - |x|^\alpha\bigr| \leqslant \alpha |y| \max\{|x + y|^{\alpha-1}, |x|^{\alpha-1}\} \leqslant \alpha |y| (|x|^{\alpha-1} + |y|^{\alpha-1}) , \qquad x, y \in \mathbb{R} . 
\] Hence for each \ $\alpha \in (0, 2)$ \ and \ $x, y \in \mathbb{R}$, \ we obtain \[ |x|^\alpha - 2 |y| (|x|^{\alpha-1} + |y|^{\alpha-1}) \leqslant |x+y|^\alpha \leqslant |x|^\alpha + 2 |y| (|x|^{\alpha-1} + |y|^{\alpha-1}) , \] so, by \eqref{help_7} and the squeeze theorem, to prove \eqref{conv_alpha}, it is enough to check that \begin{gather} \frac{1}{n} \sum_{j=\lfloor nt_{\ell-1}\rfloor+1}^{\lfloor nt_\ell\rfloor} (1 - m_\xi^{\lfloor nt_\ell\rfloor-j+1})^\alpha \to t_\ell - t_{\ell-1} , \label{conv_alpha_02_1} \\ \frac{1}{n} \sum_{j=\lfloor nt_{\ell-1}\rfloor+1}^{\lfloor nt_\ell\rfloor} \Biggl|\sum_{i=\ell+1}^d \vartheta_i (m_\xi^{\lfloor nt_{i-1}\rfloor-j+1} - m_\xi^{\lfloor nt_i\rfloor-j+1})\Biggr|^\alpha \to 0 , \label{conv_alpha_02_2} \\ \frac{1}{n} \sum_{j=\lfloor nt_{\ell-1}\rfloor+1}^{\lfloor nt_\ell\rfloor} \Biggl|\sum_{i=\ell+1}^d \vartheta_i (m_\xi^{\lfloor nt_{i-1}\rfloor-j+1} - m_\xi^{\lfloor nt_i\rfloor-j+1})\Biggr| (1 - m_\xi^{\lfloor nt_\ell\rfloor-j+1})^{\alpha-1} \to 0 \label{conv_alpha_02_3} \end{gather} as \ $n \to \infty$. \ Since \ $(1 - t)^\alpha = 1 - \alpha t + \operatorname{o}(t)$ \ as \ $t \downarrow 0$, \ there exists \ $j_0\in\mathbb{N}$ \ such that \ $|(1-m_\xi^j)^\alpha - 1 + \alpha m_\xi^j| \leqslant m_\xi^j$ \ for all \ $j\geqslant j_0$.
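As a numerical aside (not part of the proof), the two-sided elementary bound above can be spot-checked in Python; the snippet below is purely illustrative, with arbitrary sample values.

```python
import random

random.seed(0)
for _ in range(10000):
    # alpha in (0, 2); the bound |x+y|^a between |x|^a -/+ 2|y|(|x|^(a-1)+|y|^(a-1))
    alpha = random.choice([0.3, 0.8, 1.0, 1.3, 1.9])
    x = random.uniform(-5.0, 5.0)
    y = random.uniform(-5.0, 5.0)
    if x == 0.0 or y == 0.0:
        continue  # avoid 0**(alpha-1) for alpha < 1
    lhs = abs(abs(x + y) ** alpha - abs(x) ** alpha)
    rhs = 2.0 * abs(y) * (abs(x) ** (alpha - 1) + abs(y) ** (alpha - 1))
    assert lhs <= rhs + 1e-12
```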
\ Hence \begin{align*} &\Biggl|\frac{1}{n} \sum_{j=\lfloor nt_{\ell-1}\rfloor+1}^{\lfloor nt_\ell\rfloor} \bigl(1 - m_\xi^{\lfloor nt_\ell\rfloor-j+1}\bigr)^\alpha - \frac{1}{n} \sum_{j=1}^{\lfloor nt_\ell\rfloor-\lfloor nt_{\ell-1}\rfloor} (1 - \alpha m_\xi^j)\Biggr| \\ &= \Biggl|\frac{1}{n} \sum_{j=1}^{\lfloor nt_\ell\rfloor-\lfloor nt_{\ell-1}\rfloor} (1 - m_\xi^j)^\alpha - \frac{1}{n} \sum_{j=1}^{\lfloor nt_\ell\rfloor-\lfloor nt_{\ell-1}\rfloor} (1 - \alpha m_\xi^j)\Biggr| \end{align*} \begin{align*} &\leqslant \Biggl|\frac{1}{n} \sum_{j=1}^{j_0-1} \bigl((1 - m_\xi^j)^\alpha - 1 + \alpha m_\xi^j\bigr)\Biggr| + \frac{1}{n} \sum_{j=j_0}^{\lfloor nt_\ell\rfloor-\lfloor nt_{\ell-1}\rfloor} m_\xi^j \\ &\leqslant \Biggl|\frac{1}{n} \sum_{j=1}^{j_0-1} \bigl( (1 - m_\xi^j)^\alpha - 1 + \alpha m_\xi^j\bigr) \Biggr| + \frac{1}{n} \frac{m_\xi^{j_0}}{1-m_\xi} \to 0 \qquad \text{as \ $n \to \infty$.} \end{align*} Thus \begin{align*} &\lim_{n\to\infty} \frac{1}{n} \sum_{j=\lfloor nt_{\ell-1}\rfloor+1}^{\lfloor nt_\ell\rfloor} \bigl(1 - m_\xi^{\lfloor nt_\ell\rfloor-j+1}\bigr)^\alpha = \lim_{n\to\infty} \frac{1}{n} \sum_{j=1}^{\lfloor nt_\ell\rfloor - \lfloor nt_{\ell-1}\rfloor} (1 - \alpha m_\xi^j)\\ &= \lim_{n\to\infty} \frac{1}{n} \Biggl(\lfloor nt_\ell\rfloor - \lfloor nt_{\ell-1}\rfloor - \alpha \frac{m_\xi- m_\xi^{\lfloor nt_\ell\rfloor - \lfloor nt_{\ell-1}\rfloor+1}}{1-m_\xi}\Biggr) = t_\ell - t_{\ell-1} , \end{align*} yielding \eqref{conv_alpha_02_1}. In the case \ $\alpha \in (1, 2)$, \ for all \ $x_1, \ldots, x_k \in \mathbb{R}$, \ we have \ $|x_1 + \cdots + x_k|^\alpha \leqslant k^{\alpha-1} (|x_1|^\alpha + \cdots + |x_k|^\alpha)$, \ hence, by \eqref{help_normlike_alpha01}, for each \ $\alpha \in (0, 2)$, \ we obtain \[ |x_1 + \cdots + x_k|^\alpha \leqslant k (|x_1|^\alpha + \cdots + |x_k|^\alpha) , \qquad x_1, \ldots, x_k \in \mathbb{R} .
\] Consequently, we have \begin{align*} &\frac{1}{n} \sum_{j=\lfloor nt_{\ell-1}\rfloor+1}^{\lfloor nt_\ell\rfloor} \Biggl| \sum_{i=\ell+1}^d \vartheta_i (m_\xi^{\lfloor nt_{i-1}\rfloor-j+1} - m_\xi^{\lfloor nt_i\rfloor-j+1})\Biggr|^\alpha \\ &\leqslant \frac{1}{n} \sum_{j=\lfloor nt_{\ell-1}\rfloor+1}^{\lfloor nt_\ell\rfloor} (d - \ell) \sum_{i=\ell+1}^d |\vartheta_i|^\alpha \bigl(m_\xi^{\lfloor nt_{i-1}\rfloor-j+1} - m_\xi^{\lfloor nt_i\rfloor-j+1}\bigr)^\alpha \\ &\leqslant \frac{d}{n} \sum_{i=\ell+1}^d |\vartheta_i|^\alpha \sum_{j=\lfloor nt_{\ell-1}\rfloor+1}^{\lfloor nt_\ell\rfloor} m_\xi^{(\lfloor nt_{i-1}\rfloor-j+1)\alpha} \leqslant \frac{d}{n} \sum_{i=\ell+1}^d |\vartheta_i|^\alpha \sum_{k=0}^\infty m_\xi^{k\alpha} \to 0 \qquad \text{as \ $n \to \infty$,} \end{align*} yielding \eqref{conv_alpha_02_2}. For each \ $n \in \mathbb{N}$ \ and for each \ $j \in \{\lfloor nt_{\ell-1}\rfloor + 1, \ldots, \lfloor nt_\ell\rfloor\}$, \ we have \[ (1 - m_\xi^{\lfloor nt_\ell\rfloor-j+1})^{\alpha-1} \leqslant \begin{cases} (1 - m_\xi)^{\alpha-1} & \text{if \ $\alpha \in (0, 1]$,} \\ 1 & \text{if \ $\alpha \in (1, 2)$,} \end{cases} \] and hence \begin{align*} &\frac{1}{n} \sum_{j=\lfloor nt_{\ell-1}\rfloor+1}^{\lfloor nt_\ell\rfloor} \Biggl| \sum_{i=\ell+1}^d \vartheta_i (m_\xi^{\lfloor nt_{i-1}\rfloor-j+1} - m_\xi^{\lfloor nt_i\rfloor-j+1})\Biggr| (1 - m_\xi^{\lfloor nt_\ell\rfloor-j+1})^{\alpha-1} \\ &\leqslant \frac{(1 - m_\xi)^{\alpha-1} \vee 1}{n} \sum_{j=\lfloor nt_{\ell-1}\rfloor+1}^{\lfloor nt_\ell\rfloor} \sum_{i=\ell+1}^d |\vartheta_i| \bigl(m_\xi^{\lfloor nt_{i-1}\rfloor-j+1} - m_\xi^{\lfloor nt_i\rfloor-j+1}\bigr) \\ &\leqslant \frac{(1 - m_\xi)^{\alpha-1} \vee 1}{n} \sum_{i=\ell+1}^d |\vartheta_i| \sum_{j=\lfloor nt_{\ell-1}\rfloor+1}^{\lfloor nt_\ell\rfloor} m_\xi^{\lfloor
nt_{i-1}\rfloor-j+1} \leqslant \frac{(1 - m_\xi)^{\alpha-1} \vee 1}{n} \sum_{i=\ell+1}^d |\vartheta_i| \sum_{k=0}^\infty m_\xi^k \to 0 \end{align*} as \ $n \to \infty$, \ yielding \eqref{conv_alpha_02_3}. Thus we obtain \eqref{conv_alpha}. Next we show that for each \ $\ell \in \{1, \ldots, d\}$, \ we have \[ n^{-1} \sum_{j=\lfloor nt_{\ell-1}\rfloor+1}^{\lfloor nt_\ell\rfloor} \bigl|\bigl\langle{\boldsymbol{\theta}}_n, {\boldsymbol{v}}_j^{(\lfloor nt_d\rfloor)}\bigr\rangle\bigr|^\alpha \operatorname{sign}\bigl(\bigl\langle{\boldsymbol{\theta}}_n, {\boldsymbol{v}}_j^{(\lfloor nt_d\rfloor)}\bigr\rangle\bigr) \to \frac{(t_\ell-t_{\ell-1})|\vartheta_\ell|^\alpha}{(1-m_\xi)^\alpha} \operatorname{sign}(\vartheta_\ell) \] as \ $n \to \infty$. \ If \ $\vartheta_\ell = 0$, \ then this readily follows from \eqref{help_7} and \eqref{conv_alpha_02_2}. If \ $\vartheta_\ell \ne 0$, \ then we show that there exists \ $\widetilde{C}_\ell \in \mathbb{R}_{++}$ \ such that \begin{equation}\label{sign} \operatorname{sign}\bigl(\bigl\langle{\boldsymbol{\theta}}_n, {\boldsymbol{v}}_j^{(\lfloor nt_d\rfloor)}\bigr\rangle\bigr) = \operatorname{sign}(\vartheta_\ell) \end{equation} for each \ $n \in \mathbb{N}$ \ and for each \ $j \in \{\lfloor nt_{\ell-1}\rfloor + 1, \ldots, \lfloor nt_\ell\rfloor\}$ \ with \ $j < \lfloor nt_\ell\rfloor + 1 - \widetilde{C}_\ell$. \ First, observe that, by \eqref{help_7}, the inequality \begin{equation}\label{sign1} \Biggl|\sum_{i=\ell+1}^d \vartheta_i \bigl(m_\xi^{\lfloor nt_{i-1}\rfloor-j+1} - m_\xi^{\lfloor nt_i\rfloor-j+1}\bigr)\Biggr| < |\vartheta_\ell (1 - m_\xi^{\lfloor nt_{\ell}\rfloor - j+1})| \end{equation} implies \eqref{sign}.
Then we have \begin{align*} &\Biggl|\sum_{i=\ell+1}^d \vartheta_i \bigl(m_\xi^{\lfloor nt_{i-1}\rfloor-j+1} - m_\xi^{\lfloor nt_i\rfloor-j+1}\bigr)\Biggr| \leqslant \left(\max_{\ell+1\leqslant i\leqslant d} |\vartheta_i|\right) \sum_{i=\ell+1}^d \bigl(m_\xi^{\lfloor nt_{i-1}\rfloor-j+1} - m_\xi^{\lfloor nt_i\rfloor-j+1}\bigr) \\ &\leqslant \left(\sum_{i=\ell+1}^d |\vartheta_i|\right) \bigl(m_\xi^{\lfloor nt_\ell\rfloor-j+1} - m_\xi^{\lfloor nt_d\rfloor-j+1}\bigr) \leqslant \left(\sum_{i=\ell+1}^d |\vartheta_i|\right) m_\xi^{\lfloor nt_\ell\rfloor-j+1} , \end{align*} hence \eqref{sign1} is satisfied if \[ \left(\sum_{i=\ell+1}^d |\vartheta_i|\right) m_\xi^{\lfloor nt_\ell\rfloor-j+1} < |\vartheta_\ell| (1 - m_\xi^{\lfloor nt_{\ell}\rfloor - j+1}) , \] which is satisfied if \[ m_\xi^{\lfloor nt_\ell\rfloor - j+1} < \frac{|\vartheta_\ell|}{|\vartheta_\ell|+\cdots+|\vartheta_d|} , \] or equivalently, if \[ j < \lfloor nt_\ell\rfloor + 1 - \widetilde{C}_\ell \qquad \text{with \ $\widetilde{C}_\ell :=\log\biggl(\frac{|\vartheta_\ell|}{|\vartheta_\ell|+\cdots+|\vartheta_d|}\biggr) \bigg/ \log(m_\xi) \in \mathbb{R}_{++}$.} \] Hence, for \ $\vartheta_\ell \ne 0$, \ $n \in \mathbb{N}$, \ and \ $j \in \{\lfloor nt_{\ell-1}\rfloor + 1, \ldots, \lfloor nt_\ell\rfloor\}$ \ with \ $j < \lfloor nt_\ell\rfloor + 1 - \widetilde{C}_\ell$, \ we have \eqref{sign}.
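As an illustrative aside (not part of the proof), the sign condition \eqref{sign} and the threshold $\widetilde{C}_\ell$ can be spot-checked numerically; the Python snippet below uses arbitrary sample values for $m_\xi$ and the coefficients, and a simplified stand-in for the exponents appearing in \eqref{help_7}.

```python
import math
import random

def bracket_sign_matches(m, theta_l, tail, K, exps):
    # Evaluate the bracket from (help_7): theta_l*(1 - m**K) plus the tail terms
    # theta_i*(m**a - m**b) with K <= a < b, and compare its sign with sign(theta_l).
    val = theta_l * (1.0 - m ** K)
    for th, (a, b) in zip(tail, exps):
        val += th * (m ** a - m ** b)
    return math.copysign(1.0, val) == math.copysign(1.0, theta_l)

random.seed(1)
m = 0.7
for _ in range(2000):
    theta_l = random.choice([-1.0, 1.0]) * random.uniform(0.1, 3.0)
    tail = [random.uniform(-3.0, 3.0) for _ in range(3)]
    denom = abs(theta_l) + sum(abs(t) for t in tail)
    # threshold from the proof; any K with m**K below |theta_l|/denom works
    C = math.log(abs(theta_l) / denom) / math.log(m)
    K = math.ceil(C) + 1
    assert m ** K < abs(theta_l) / denom
    a = sorted(random.sample(range(K, K + 50), 6))
    exps = [(a[0], a[1]), (a[2], a[3]), (a[4], a[5])]
    assert bracket_sign_matches(m, theta_l, tail, K, exps)
```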
Moreover, for each \ $n \in \mathbb{N}$ \ and \ $j \in \{\lfloor nt_{\ell-1}\rfloor + 1, \ldots, \lfloor nt_\ell\rfloor\}$, \ by \eqref{help_7}, we have \begin{align*} \bigl|\bigl\langle{\boldsymbol{\theta}}_n, {\boldsymbol{v}}_j^{(\lfloor nt_d\rfloor)}\bigr\rangle\bigr| &\leqslant \frac{1}{1-m_\xi} \biggl(|\vartheta_\ell| (1 - m_\xi^{\lfloor nt_\ell\rfloor-j+1}) + \sum_{i=\ell+1}^d |\vartheta_i| (m_\xi^{\lfloor nt_{i-1}\rfloor-j+1} - m_\xi^{\lfloor nt_i\rfloor-j+1})\biggr) \\ &\leqslant \frac{1}{1-m_\xi} \sum_{i=\ell}^d |\vartheta_i| , \end{align*} yielding that \begin{align*} & \left\vert n^{-1} \sum_{j=\lfloor nt_{\ell}\rfloor+1 - \widetilde{C}_\ell}^{\lfloor nt_\ell\rfloor} \bigl|\bigl\langle{\boldsymbol{\theta}}_n, {\boldsymbol{v}}_j^{(\lfloor nt_d\rfloor)}\bigr\rangle\bigr|^\alpha \operatorname{sign}\bigl(\bigl\langle{\boldsymbol{\theta}}_n, {\boldsymbol{v}}_j^{(\lfloor nt_d\rfloor)}\bigr\rangle\bigr) \right\vert \leqslant n^{-1} \sum_{j= \lfloor nt_\ell\rfloor +1- \widetilde{C}_\ell}^{\lfloor nt_\ell\rfloor} \bigl|\bigl\langle{\boldsymbol{\theta}}_n, {\boldsymbol{v}}_j^{(\lfloor nt_d\rfloor)}\bigr\rangle\bigr|^\alpha\\ &\leqslant \frac{\widetilde{C}_\ell}{n(1-m_\xi)^\alpha} \left(\sum_{i=\ell}^d |\vartheta_i|\right)^\alpha \to 0\qquad \text{as \ $n\to\infty$.} \end{align*} Consequently, by \eqref{conv_alpha}, \begin{align*} &\lim_{n\to\infty} n^{-1} \sum_{j=\lfloor nt_{\ell-1}\rfloor+1}^{\lfloor nt_\ell\rfloor} \bigl|\bigl\langle{\boldsymbol{\theta}}_n, {\boldsymbol{v}}_j^{(\lfloor nt_d\rfloor)}\bigr\rangle\bigr|^\alpha \operatorname{sign}\bigl(\bigl\langle{\boldsymbol{\theta}}_n, {\boldsymbol{v}}_j^{(\lfloor nt_d\rfloor)}\bigr\rangle\bigr) \\ &= \lim_{n\to\infty} n^{-1} \sum_{\lfloor nt_{\ell-1}\rfloor+1\leqslant j<\lfloor nt_\ell\rfloor+1-\widetilde{C}_\ell} \bigl|\bigl\langle{\boldsymbol{\theta}}_n, {\boldsymbol{v}}_j^{(\lfloor
nt_d\rfloor)}\bigr\rangle\bigr|^\alpha \operatorname{sign}\bigl(\bigl\langle{\boldsymbol{\theta}}_n, {\boldsymbol{v}}_j^{(\lfloor nt_d\rfloor)}\bigr\rangle\bigr) \\ &= \lim_{n\to\infty} n^{-1} \sum_{\lfloor nt_{\ell-1}\rfloor+1\leqslant j<\lfloor nt_\ell\rfloor+1-\widetilde{C}_\ell} \bigl|\bigl\langle{\boldsymbol{\theta}}_n, {\boldsymbol{v}}_j^{(\lfloor nt_d\rfloor)}\bigr\rangle\bigr|^\alpha \operatorname{sign}(\vartheta_\ell) \\ &= \lim_{n\to\infty} n^{-1} \sum_{j=\lfloor nt_{\ell-1}\rfloor+1}^{\lfloor nt_\ell\rfloor} \bigl|\bigl\langle{\boldsymbol{\theta}}_n, {\boldsymbol{v}}_j^{(\lfloor nt_d\rfloor)}\bigr\rangle\bigr|^\alpha \operatorname{sign}(\vartheta_\ell) = \frac{(t_\ell-t_{\ell-1})|\vartheta_\ell|^\alpha}{(1-m_\xi)^\alpha} \operatorname{sign}(\vartheta_\ell) , \end{align*} as desired. We conclude that for all \ $\alpha \in (0, 1) \cup (1, 2)$, \begin{align*} &\operatorname{\mathbb{E}}\biggl(\exp\biggl\{\mathrm{i} \sum_{\ell=1}^d \frac{\vartheta_\ell}{n^{\frac{1}{\alpha}}} \sum_{k=\lfloor nt_{\ell-1}\rfloor+1}^{\lfloor nt_\ell\rfloor} \Bigl({\mathcal Y}^{(\alpha)}_k + \frac{\alpha}{1-\alpha}\Bigr)\biggr\}\biggr) \\ &=\operatorname{\mathbb{E}}\Bigl(\exp\Bigl\{\mathrm{i} \Bigl\langle n^{- \frac{1}{\alpha} } {\boldsymbol{\theta}}_n, \boldsymbol{\mathcal{X}}_1^{(\lfloor nt_d\rfloor,\alpha)} + \frac{\alpha}{1-\alpha} {\boldsymbol{1}}_{\lfloor nt_d\rfloor+1}\Bigr\rangle\Bigr\}\Bigr) \\ &\to \exp\biggl\{-C_\alpha \frac{1-m_\xi^\alpha}{(1-m_\xi)^\alpha} \sum_{\ell=1}^d (t_\ell - t_{\ell-1}) |\vartheta_\ell|^\alpha \left(1 - \mathrm{i} \tan\left(\frac{\pi\alpha}{2}\right) \operatorname{sign}(\vartheta_\ell)\right)\biggr\} \\ &= \operatorname{\mathbb{E}}\biggl(\exp\biggl\{\mathrm{i} \sum_{\ell=1}^d \vartheta_\ell \Bigl(\Bigl({\mathcal Z}_{t_\ell}^{(\alpha)} + \frac{\alpha}{1-\alpha} t_\ell\Bigr) - \Bigl({\mathcal Z}_{t_{\ell-1}}^{(\alpha)} +
\frac{\alpha}{1-\alpha} t_{\ell-1}\Bigr)\Bigr)\biggr\}\biggr) \qquad \text{as \ $n \to \infty$.} \end{align*} By the continuity theorem, we obtain for all \ $\alpha \in (0, 1) \cup (1, 2)$, \[ \Biggl(\frac{1}{n^{\frac{1}{\alpha}}} \sum_{k=\lfloor nt_{\ell-1}\rfloor+1}^{\lfloor nt_\ell\rfloor} \Bigl({\mathcal Y}^{(\alpha)}_k + \frac{\alpha}{1-\alpha}\Bigr)\Biggr)_{\ell\in\{1,\ldots,d\}} \distr \Bigl(\Bigl({\mathcal Z}_{t_\ell}^{(\alpha)} + \frac{\alpha}{1-\alpha} t_\ell\Bigr) - \Bigl({\mathcal Z}_{t_{\ell-1}}^{(\alpha)} + \frac{\alpha}{1-\alpha} t_{\ell-1}\Bigr)\Bigr)_{\ell\in\{1,\ldots,d\}} \] as \ $n \to \infty$, \ hence the continuous mapping theorem yields \eqref{conv_Z}, and the proofs of \eqref{iterated_aggr_1_1}, \eqref{iterated_aggr_1_2} and \eqref{iterated_aggr_1_4} are complete. Now we turn to the proof of \eqref{iterated_aggr_1_3}. For each \ $n \in \mathbb{N}$, \ by Corollary \ref{aggr_copies1} and by the continuous mapping theorem, in the case \ $\alpha = 1$, \ we obtain \[ \biggl(\frac{1}{n\log(n)a_N} \sum_{k=1}^{\lfloor nt\rfloor} \sum_{j=1}^N \bigl(X^{(j)}_k - \operatorname{\mathbb{E}}\bigl(X^{(j)}_k \boldsymbol{1}_{\{X^{(j)}_k\leqslant a_N\}}\bigr)\bigr)\biggr)_{t\in\mathbb{R}_+} \distrf \biggl(\frac{1}{n\log(n)} \sum_{k=1}^{\lfloor nt\rfloor} {\mathcal Y}^{(1)}_k \biggr)_{t\in\mathbb{R}_+} \] as \ $N \to \infty$.
\ Consequently, in order to prove \eqref{iterated_aggr_1_3}, we need to show that \begin{equation}\label{conv_Z_1} \biggl(\frac{1}{n\log(n)} \sum_{k=1}^{\lfloor nt\rfloor} {\mathcal Y}^{(1)}_k \biggr)_{t\in\mathbb{R}_+} \distrf (t)_{t\in\mathbb{R}_+} \qquad \text{as \ $n \to \infty$.} \end{equation} Since the limit in \eqref{conv_Z_1} is deterministic, by van der Vaart \cite[Theorem 2.7, part (vi)]{Vaa}, it is enough to show that for each \ $t \in \mathbb{R}_+$, \ we have \begin{equation}\label{conv_Z_1+} \frac{1}{n\log(n)} \sum_{k=1}^{\lfloor nt\rfloor} {\mathcal Y}^{(1)}_k \distr t \qquad \text{as \ $n \to \infty$.} \end{equation} For each \ $n \in \mathbb{N}$, \ $t \in \mathbb{R}_{+}$, \ and \ $\vartheta \in \mathbb{R}$, \ we have \[ \operatorname{\mathbb{E}}\biggl(\exp\biggl\{\mathrm{i} \frac{\vartheta}{n\log(n)} \sum_{k=1}^{\lfloor nt\rfloor} {\mathcal Y}^{(1)}_k \biggr\}\biggr) = \operatorname{\mathbb{E}}\biggl(\exp\biggl\{\mathrm{i} \biggl\langle\frac{\vartheta{\boldsymbol{1}}_{{\lfloor nt\rfloor}+1}}{n\log(n)} , \boldsymbol{\mathcal{X}}_1^{({\lfloor nt\rfloor},1)}\biggr\rangle\biggr\}\biggr) .
\] By the explicit form of the characteristic function of \ $\boldsymbol{\mathcal{X}}_1^{(\lfloor nt\rfloor,1)}$ \ given in Theorem \ref{simple_aggregation1_stable_fd}, \begin{align*} &\operatorname{\mathbb{E}}\biggl(\exp\biggl\{\mathrm{i} \biggl\langle\frac{\vartheta{\boldsymbol{1}}_{{\lfloor nt\rfloor}+1}}{n\log(n)} , \boldsymbol{\mathcal{X}}_1^{({\lfloor nt\rfloor},1)}\biggr\rangle\biggr\}\biggr) \\ &= \exp\biggl\{- C_1 (1 - m_\xi) \sum_{j=0}^{\lfloor nt\rfloor} \biggl|\biggl\langle\frac{\vartheta{\boldsymbol{1}}_{{\lfloor nt\rfloor}+1}}{n\log(n)},{\boldsymbol{v}}_j^{({\lfloor nt\rfloor})}\biggr\rangle\biggr| \biggl(1 + \mathrm{i} \frac{2}{\pi} \operatorname{sign}\biggl(\biggl\langle\frac{\vartheta{\boldsymbol{1}}_{{\lfloor nt\rfloor}+1}}{n\log(n)},{\boldsymbol{v}}_j^{({\lfloor nt\rfloor})}\biggr\rangle\biggr) \\ &\phantom{= \exp\biggl\{- C_1 (1 - m_\xi) \sum_{j=0}^{\lfloor nt\rfloor} \biggl|\biggl\langle\frac{\vartheta{\boldsymbol{1}}_{{\lfloor nt\rfloor}+1}}{n\log(n)},{\boldsymbol{v}}_j^{({\lfloor nt\rfloor})}\biggr\rangle\biggr| \biggl(1 +=} \times \log\biggl(\biggl|\biggl\langle\frac{\vartheta{\boldsymbol{1}}_{{\lfloor nt\rfloor}+1}}{n\log(n)},{\boldsymbol{v}}_j^{({\lfloor nt\rfloor})}\biggr\rangle\biggr|\biggr)\biggr) \\[2mm] &\phantom{= \exp\biggl\{} + \mathrm{i} C \biggl\langle\frac{\vartheta{\boldsymbol{1}}_{{\lfloor nt\rfloor}+1}}{n\log(n)}, {\boldsymbol{1}}_{{\lfloor nt\rfloor}+1}\biggr\rangle\\ &\phantom{= \exp\biggl\{} + \mathrm{i} (1 - m_\xi) \sum_{j=0}^{\lfloor nt\rfloor} \sum_{\ell=j+1}^{{\lfloor nt\rfloor}+1} \biggl\langle{\boldsymbol{e}}_\ell, \frac{\vartheta{\boldsymbol{1}}_{{\lfloor nt\rfloor}+1}}{n\log(n)}\biggr\rangle \bigl\langle{\boldsymbol{e}}_\ell, {\boldsymbol{v}}_j^{({\lfloor nt\rfloor})}\bigr\rangle \log\bigl(\bigl\langle{\boldsymbol{e}}_\ell, {\boldsymbol{v}}_j^{({\lfloor nt\rfloor})}\bigr\rangle\bigr)\biggr\} \end{align*}
\begin{align*} &= \exp\biggl\{- \frac{C_1(1-m_\xi)|\vartheta|}{n\log(n)} \sum_{j=0}^{\lfloor nt\rfloor} \bigl\langle{\boldsymbol{1}}_{{\lfloor nt\rfloor}+1}, {\boldsymbol{v}}_j^{({\lfloor nt\rfloor})}\bigr\rangle \\ &\phantom{= \exp\biggl\{- \frac{C_1(1-m_\xi)|\vartheta|}{n\log(n)} \sum_{j=0}^{\lfloor nt\rfloor}\;} \times \biggl(1 + \mathrm{i} \frac{2}{\pi} \operatorname{sign}(\vartheta) \log\biggl(\frac{|\vartheta|}{n\log(n)} \bigl\langle{\boldsymbol{1}}_{{\lfloor nt\rfloor}+1},{\boldsymbol{v}}_j^{({\lfloor nt\rfloor})}\bigr\rangle\biggr)\biggr) \\[2mm] &\phantom{= \exp\biggl\{} + \mathrm{i} C \frac{\vartheta}{n\log(n)} \bigl\langle{\boldsymbol{1}}_{{\lfloor nt\rfloor}+1} , {\boldsymbol{1}}_{{\lfloor nt\rfloor}+1}\bigr\rangle \\[2mm] &\phantom{= \exp\biggl\{} + \mathrm{i} \frac{(1 - m_\xi)\vartheta}{n\log(n)} \sum_{j=0}^{\lfloor nt\rfloor} \sum_{\ell=j+1}^{{\lfloor nt\rfloor}+1} \bigl\langle{\boldsymbol{e}}_\ell, {\boldsymbol{v}}_j^{({\lfloor nt\rfloor})}\bigr\rangle \log\bigl(\bigl\langle{\boldsymbol{e}}_\ell, {\boldsymbol{v}}_j^{({\lfloor nt\rfloor})}\bigr\rangle\bigr)\biggr\} \\ &\to \mathrm{e}^{\mathrm{i} t\vartheta} \end{align*} as \ $n \to \infty$ \ for each \ $\vartheta\in\mathbb{R}$.
\ Indeed, \[ \frac{1}{n\log(n)} \bigl\langle{\boldsymbol{1}}_{{\lfloor nt\rfloor}+1}, {\boldsymbol{1}}_{{\lfloor nt\rfloor}+1}\bigr\rangle = \frac{{\lfloor nt\rfloor}+1}{n\log(n)} \to 0 \qquad \text{as \ $n \to \infty$,} \] and \begin{align*} &\biggl|\frac{1}{n\log(n)} \sum_{j=0}^{\lfloor nt\rfloor} \sum_{\ell=j+1}^{{\lfloor nt\rfloor}+1} \bigl\langle{\boldsymbol{e}}_\ell, {\boldsymbol{v}}_j^{({\lfloor nt\rfloor})}\bigr\rangle \log\bigl(\bigl\langle{\boldsymbol{e}}_\ell, {\boldsymbol{v}}_j^{({\lfloor nt\rfloor})}\bigr\rangle\bigr)\biggr| = \biggl|\frac{1}{n\log(n)} \sum_{j=0}^{\lfloor nt\rfloor} \sum_{\ell=j+1}^{{\lfloor nt\rfloor}+1} m_\xi^{\ell-j-1} \log\bigl(m_\xi^{\ell-j-1}\bigr)\biggr| \\ &\leqslant \frac{|\log(m_\xi)|}{n\log(n)} \sum_{j=0}^{\lfloor nt\rfloor} \sum_{\ell=j+1}^{{\lfloor nt\rfloor}+1} (\ell-j-1) m_\xi^{\ell-j-1} \leqslant \frac{|\log(m_\xi)|}{n\log(n)} \sum_{j=0}^{\lfloor nt\rfloor} \sum_{\ell=j+1}^\infty (\ell-j-1) m_\xi^{\ell-j-1} \\ &= \frac{|\log(m_\xi)|({\lfloor nt\rfloor}+1)}{n\log(n)} \sum_{k=0}^\infty k m_\xi^k = \frac{m_\xi|\log(m_\xi)|({\lfloor nt\rfloor}+1)}{(1-m_\xi)^2n\log(n)} \to 0 \qquad \text{as \ $n \to \infty$,} \end{align*} and \begin{equation}\label{help} \frac{1}{n\log(n)} \sum_{j=0}^{\lfloor nt\rfloor} \bigl\langle{\boldsymbol{1}}_{{\lfloor nt\rfloor}+1}, {\boldsymbol{v}}_j^{({\lfloor nt\rfloor})}\bigr\rangle = \frac{1}{n\log(n)} \sum_{j=0}^{\lfloor nt\rfloor} \frac{1-m_\xi^{{\lfloor nt\rfloor}-j+1}}{1-m_\xi} \leqslant \frac{{\lfloor nt\rfloor}+1}{(1-m_\xi)n\log(n)} \to 0 \end{equation} as \ $n \to \infty$, \ and \[ - \frac{C_1(1-m_\xi)|\vartheta|}{n\log(n)} \sum_{j=0}^{\lfloor nt\rfloor} \bigl\langle{\boldsymbol{1}}_{{\lfloor nt\rfloor}+1}, {\boldsymbol{v}}_j^{({\lfloor nt\rfloor})}\bigr\rangle \mathrm{i} \frac{2}{\pi} \operatorname{sign}(\vartheta) \log\biggl(\frac{|\vartheta|}{n\log(n)} \bigl\langle{\boldsymbol{1}}_{{\lfloor
nt\rfloor}+1},{\boldsymbol{v}}_j^{({\lfloor nt\rfloor})}\bigr\rangle\biggr) \to \mathrm{i} t \vartheta \] as \ $n \to \infty$, \ since \begin{align*} &\biggl|\log\biggl(\frac{|\vartheta|}{\log(n)} \bigl\langle{\boldsymbol{1}}_{{\lfloor nt\rfloor}+1},{\boldsymbol{v}}_j^{({\lfloor nt\rfloor})}\bigr\rangle\biggr)\biggr| = \biggl|\log\biggl(\frac{|\vartheta|}{\log(n)} \frac{1-m_\xi^{{\lfloor nt\rfloor}-j+1}}{1-m_\xi}\biggr)\biggr| \\ &= \biggl|\log(|\vartheta|) - \log(\log(n)) + \log\biggl(\frac{1-m_\xi^{{\lfloor nt\rfloor}-j+1}}{1-m_\xi}\biggr)\biggr| \leqslant |\log(|\vartheta|)| + \log(\log(n)) + |\log(1-m_\xi)| , \end{align*} hence, by \eqref{help}, \begin{align*} &\biggl|\frac{1}{n\log(n)} \sum_{j=0}^{\lfloor nt\rfloor} \bigl\langle{\boldsymbol{1}}_{{\lfloor nt\rfloor}+1}, {\boldsymbol{v}}_j^{({\lfloor nt\rfloor})}\bigr\rangle \log\biggl(\frac{|\vartheta|}{\log(n)} \bigl\langle{\boldsymbol{1}}_{{\lfloor nt\rfloor}+1},{\boldsymbol{v}}_j^{({\lfloor nt\rfloor})}\bigr\rangle\biggr)\biggr| \\ &\leqslant \frac{{\lfloor nt\rfloor}+1}{(1-m_\xi)n\log(n)} \bigl(|\log(|\vartheta|)| + \log(\log(n)) + |\log(1-m_\xi)|\bigr) \to 0 \qquad \text{as \ $n \to \infty$,} \end{align*} and \begin{align*} \frac{C_1(1-m_\xi)|\vartheta|}{n\log(n)} \sum_{j=0}^{\lfloor nt\rfloor} \bigl\langle{\boldsymbol{1}}_{{\lfloor nt\rfloor}+1}, {\boldsymbol{v}}_j^{({\lfloor nt\rfloor})}\bigr\rangle \mathrm{i} \frac{2}{\pi} \operatorname{sign}(\vartheta) \log(n) = \mathrm{i} \frac{(1-m_\xi)\vartheta}{n} \sum_{j=0}^{\lfloor nt\rfloor} \frac{1-m_\xi^{{\lfloor nt\rfloor}-j+1}}{1-m_\xi} \to \mathrm{i} t \vartheta \end{align*} as \ $n \to \infty$. \ By the continuity theorem, we obtain \eqref{conv_Z_1+}, which finishes the proof of \eqref{iterated_aggr_1_3}.
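Before closing, two of the elementary facts used in the estimates above, the identity $\sum_{k=0}^\infty k m_\xi^k = m_\xi/(1-m_\xi)^2$ and the (slow, logarithmic) decay of $(\lfloor nt\rfloor+1)/(n\log(n))$, can be spot-checked numerically; the Python snippet below is illustrative only, with sample values for $m_\xi$ and $t$.

```python
import math

# identity used above: sum_{k>=0} k*m**k = m/(1-m)**2 for m in (0, 1)
m = 0.5
partial = sum(k * m ** k for k in range(200))  # tail beyond k=200 is negligible
assert abs(partial - m / (1 - m) ** 2) < 1e-9

# (floor(n*t)+1)/(n*log(n)) decays to 0, slowly, like t/log(n)
t = 2.0
vals = [(math.floor(n * t) + 1) / (n * math.log(n)) for n in (10 ** 3, 10 ** 5, 10 ** 7)]
assert vals[0] > vals[1] > vals[2] > 0
```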
\mbox{$\Box$} \vspace*{5mm} \appendix \vspace*{5mm} \noindent{\bf\Large Appendices} \section{A version of the continuous mapping theorem}\label{App_cont_map_theorem} If \ $\xi$ \ and \ $\xi_n$, \ $n \in \mathbb{N}$, \ are random elements with values in a metric space \ $(E, d)$, \ then we also denote by \ $\xi_n \distr \xi$ \ the weak convergence of the distributions of \ $\xi_n$ \ on the space \ $(E, {\mathcal B}(E))$ \ towards the distribution of \ $\xi$ \ on the space \ $(E, {\mathcal B}(E))$ \ as \ $n \to \infty$, \ where \ ${\mathcal B}(E)$ \ denotes the Borel \ $\sigma$-algebra on \ $E$ \ induced by the given metric \ $d$. The following version of the continuous mapping theorem can be found, for example, in Theorem 3.27 of Kallenberg \cite{Kal}. \begin{Lem}\label{Lem_Kallenberg} Let \ $(S, d_S)$ \ and \ $(T, d_T)$ \ be metric spaces and \ $(\xi_n)_{n \in \mathbb{N}}$, \ $\xi$ \ be random elements with values in \ $S$ \ such that \ $\xi_n \distr \xi$ \ as \ $n \to \infty$. \ Let \ $f : S \to T$ \ and \ $f_n : S \to T$, \ $n \in \mathbb{N}$, \ be measurable mappings and \ $C \in {\mathcal B}(S)$ \ such that \ $\operatorname{\mathbb{P}}(\xi \in C) = 1$ \ and \ $\lim_{n \to \infty} d_T(f_n(s_n), f(s)) = 0$ \ whenever \ $\lim_{n \to \infty} d_S(s_n,s) = 0$ \ with \ $s \in C$ \ and \ $s_n\in S$, \ $n\in\mathbb{N}$. \ Then \ $f_n(\xi_n) \distr f(\xi)$ \ as \ $n \to \infty$. \end{Lem} We will use the following corollary of this lemma several times. \begin{Lem}\label{Conv2Funct} Let \ $d \in \mathbb{N}$, \ and let \ $(\boldsymbol{\mathcal{U}}_t)_{t \in \mathbb{R}_+}$, \ $(\boldsymbol{\mathcal{U}}^{(n)}_t)_{t \in \mathbb{R}_+}$, \ $n \in \mathbb{N}$, \ be \ $\mathbb{R}^d$-valued stochastic processes with c\`adl\`ag paths such that \ $\boldsymbol{\mathcal{U}}^{(n)} \distr \boldsymbol{\mathcal{U}}$ \ as $n \to \infty$.
\ Let \ $\Phi : \mathbb{D}(\mathbb{R}_+, \mathbb{R}^d) \to \mathbb{D}(\mathbb{R}_+, \mathbb{R}^d)$ \ and \ $\Phi_n : \mathbb{D}(\mathbb{R}_+, \mathbb{R}^d) \to \mathbb{D}(\mathbb{R}_+, \mathbb{R}^d)$, \ $n \in \mathbb{N}$, \ be measurable mappings such that \ $\Phi_n(f_n) \to \Phi(f)$ \ in \ $\mathbb{D}(\mathbb{R}_+, \mathbb{R}^d)$ \ as \ $n \to \infty$ \ whenever \ $f_n \to f$ \ in \ $\mathbb{D}(\mathbb{R}_+, \mathbb{R}^d)$ \ as \ $n \to \infty$ \ with \ $f, f_n \in \mathbb{D}(\mathbb{R}_+, \mathbb{R}^d)$, \ $n \in \mathbb{N}$. \ Then \ $\Phi_n(\boldsymbol{\mathcal{U}}^{(n)}) \distr \Phi(\boldsymbol{\mathcal{U}})$ \ as \ $n \to \infty$. \end{Lem} \section{The underlying space and vague convergence}\label{App_vague} For each \ $d \in \mathbb{N}$, \ put \ $\mathbb{R}_0^d := \mathbb{R}^d \setminus \{\boldsymbol{0}\}$, \ and denote by \ ${\mathcal B}(\mathbb{R}_0^d)$ \ the Borel \ $\sigma$-algebra of \ $\mathbb{R}_0^d$ \ induced by the metric \ $\varrho : \mathbb{R}^d_0 \times \mathbb{R}^d_0 \to \mathbb{R}_+$, \ given by \begin{align}\label{S_metric} \varrho({\boldsymbol{x}}, {\boldsymbol{y}}) := \min\{\|{\boldsymbol{x}}- {\boldsymbol{y}}\|, 1\} + \biggl|\frac{1}{\|{\boldsymbol{x}}\|} - \frac{1}{\|{\boldsymbol{y}}\|}\biggr| , \qquad {\boldsymbol{x}}, {\boldsymbol{y}} \in \mathbb{R}^d_0 . \end{align} \begin{Lem}\label{Lem_S_top} The set \ $\mathbb{R}^d_0$ \ furnished with the metric \ $\varrho$ \ given in \eqref{S_metric} is a complete separable metric space, and \ $B \subset \mathbb{R}^d_0$ \ is bounded with respect to the metric \ $\varrho$ \ if and only if \ $B$ \ is separated from the origin \ $\boldsymbol{0} \in \mathbb{R}^d$, \ i.e., there exists \ $\varepsilon \in \mathbb{R}_{++}$ \ such that \ $B \subset \{{\boldsymbol{x}} \in \mathbb{R}^d_0 : \|{\boldsymbol{x}}\| > \varepsilon\}$.
Moreover, the topology and the Borel \ $\sigma$-algebra \ ${\mathcal B}(\mathbb{R}_0^d)$ \ on \ $\mathbb{R}_0^d$ \ induced by the metric \ $\varrho$ \ coincide with the topology and the Borel \ $\sigma$-algebra on \ $\mathbb{R}_0^d$ \ induced by the usual metric \ $d({\boldsymbol{x}}, {\boldsymbol{y}}) := \|{\boldsymbol{x}}-{\boldsymbol{y}}\|$, \ ${\boldsymbol{x}}, {\boldsymbol{y}} \in \mathbb{R}_0^d$, \ respectively. \end{Lem} \noindent{\bf Proof.} First, we check that \ $\mathbb{R}^d_0$ \ furnished with the metric \ $\varrho$ \ is a complete separable metric space. If \ $({\boldsymbol{x}}_n)_{n\in\mathbb{N}}$ \ is a Cauchy sequence in \ $\mathbb{R}^d_0$, \ then for all \ $\varepsilon \in (0, 1)$, \ there exists an \ $N_\varepsilon\in\mathbb{N}$ \ such that $\varrho({\boldsymbol{x}}_n, {\boldsymbol{x}}_m)<\varepsilon$ for $n,m\geqslant N_\varepsilon$. Hence $\|{\boldsymbol{x}}_n-{\boldsymbol{x}}_m\|<\varepsilon$ and $\left\vert \frac{1}{\|{\boldsymbol{x}}_n\|} - \frac{1}{\|{\boldsymbol{x}}_m\|}\right\vert<\varepsilon$ for $n,m\geqslant N_\varepsilon$, i.e., $({\boldsymbol{x}}_n)_{n\in\mathbb{N}}$ and $(1/\|{\boldsymbol{x}}_n\|)_{n\in\mathbb{N}}$ are Cauchy sequences in $\mathbb{R}^d$ and in $\mathbb{R}$, respectively. Consequently, there exists an ${\boldsymbol{x}}\in\mathbb{R}^d$ such that $\lim_{n\to\infty} \|{\boldsymbol{x}}_n - {\boldsymbol{x}}\| = 0$, and $(1/\|{\boldsymbol{x}}_n\|)_{n\in\mathbb{N}}$ converges, yielding that ${\boldsymbol{x}}\ne\boldsymbol{0}$, and so ${\boldsymbol{x}}\in\mathbb{R}^d_0$. By the continuity of the norm, $\lim_{n\to\infty} \varrho({\boldsymbol{x}}_n, {\boldsymbol{x}}) = 0$, as desired. The separability of $\mathbb{R}^d_0$ readily follows, since $\mathbb{R}^d_0\cap \mathbb{Q}^d$ is a countable everywhere dense subset of $\mathbb{R}^d_0$.
\noindent Next, we check that $B \subset \mathbb{R}^d_0$ is bounded with respect to the metric $\varrho$ if and only if there exists $\varepsilon\in\mathbb{R}_{++}$ such that $B \subset \{{\boldsymbol{x}}\in \mathbb{R}^d_0 : \|{\boldsymbol{x}}\|>\varepsilon\}$. If $B \subset \mathbb{R}^d_0$ is bounded, then there exists $r>0$ such that $\varrho({\boldsymbol{x}},{\boldsymbol{e}}_1)<r$, ${\boldsymbol{x}}\in B$, yielding that $\vert \frac{1}{\|{\boldsymbol{x}}\|} - 1\vert < r$, ${\boldsymbol{x}}\in B$, and then $\|{\boldsymbol{x}}\|>\frac{1}{1+r}$, ${\boldsymbol{x}}\in B$, so one can choose $\varepsilon=\frac{1}{1+r}$. If there exists $\varepsilon>0$ such that $B \subset \{{\boldsymbol{x}}\in \mathbb{R}^d_0 : \|{\boldsymbol{x}}\|>\varepsilon\}$, then $\varrho({\boldsymbol{x}},{\boldsymbol{e}}_1) = \min\{\|{\boldsymbol{x}}-{\boldsymbol{e}}_1\|,1\} + \vert \frac{1}{\|{\boldsymbol{x}}\|} - 1\vert \leqslant 1+\frac{1}{\varepsilon}+1$, ${\boldsymbol{x}}\in B$. \mbox{$\Box$} Since \ $\mathbb{R}^d_0$ \ is locally compact, second countable and Hausdorff, one could choose a metric such that the relatively compact sets are precisely the bounded ones, see Kallenberg \cite[page 18]{kallenberg:2017}. The metric \ $\varrho$ \ does not have this property, but we do not need it. Write \ $(\mathbb{R}^d_0)\hspace*{.5mm}\widehat{}$ \ for the class of bounded Borel sets with respect to the metric \ $\varrho$ \ given in \eqref{S_metric}. A measure \ $\nu$ \ on \ $(\mathbb{R}_0^d, {\mathcal B}(\mathbb{R}_0^d))$ \ is said to be locally finite if \ $\nu(B) < \infty$ \ for every \ $B \in (\mathbb{R}^d_0)\hspace*{.5mm}\widehat{}$, \ and we write \ ${\mathcal M}(\mathbb{R}^d_0)$ \ for the class of locally finite measures on \ $(\mathbb{R}_0^d, {\mathcal B}(\mathbb{R}_0^d))$.
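The boundedness characterization just proved can be illustrated numerically; the following Python snippet (illustrative only, taking $d=2$ and sample points) evaluates the metric $\varrho$ from \eqref{S_metric}.

```python
import math

def rho(x, y):
    # the metric (S_metric): min(||x - y||, 1) + |1/||x|| - 1/||y|||, for x, y != 0
    nx = math.hypot(x[0], x[1])
    ny = math.hypot(y[0], y[1])
    return min(math.hypot(x[0] - y[0], x[1] - y[1]), 1.0) + abs(1.0 / nx - 1.0 / ny)

e1 = (1.0, 0.0)
eps = 0.25
# points with ||x|| > eps stay within the bound 1 + 1/eps + 1 from e1 (as in the proof)
for x in [(0.25, 0.3), (1.0, 0.3), (7.0, 0.3)]:
    assert math.hypot(x[0], x[1]) > eps
    assert rho(x, e1) <= 1.0 + 1.0 / eps + 1.0
# approaching the origin, the rho-distance to e1 blows up, so such sets are unbounded
assert rho((1e-6, 0.0), e1) > 1e5
```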
Write \ $\widehat{C}_{\mathbb{R}^d_0}$ \ for the class of bounded, continuous functions \ $f : \mathbb{R}^d_0 \to \mathbb{R}_+$ \ with bounded support. Hence, if \ $f \in \widehat{C}_{\mathbb{R}^d_0}$, \ then there exists an \ $\varepsilon \in \mathbb{R}_{++}$ \ such that \ $f({\boldsymbol{x}}) = 0$ \ for all \ ${\boldsymbol{x}} \in \mathbb{R}^d_0$ \ with \ $\|{\boldsymbol{x}}\| \leqslant \varepsilon$. \ The vague topology on \ ${\mathcal M}(\mathbb{R}^d_0)$ \ is constructed as in Chapter 4 of Kallenberg \cite{kallenberg:2017}. The associated notion of vague convergence of a sequence \ $(\nu_n)_{n\in\mathbb{N}}$ \ in \ ${\mathcal M}(\mathbb{R}^d_0)$ \ towards $\nu\in{\mathcal M}(\mathbb{R}^d_0)$, denoted by \ $\nu_n \distrv \nu$ \ as \ $n\to\infty$, \ is defined by the condition \ $\nu_n(f) \to \nu(f)$ \ as \ $n\to\infty$ \ for all \ $f \in \widehat{C}_{\mathbb{R}^d_0}$, \ where \ $\kappa(f) := \int_{\mathbb{R}^d_0} f({\boldsymbol{x}}) \, \kappa(\mathrm{d}{\boldsymbol{x}})$ \ for each \ $\kappa \in {\mathcal M}(\mathbb{R}^d_0)$. If \ $\nu$ \ is a measure on \ $(\mathbb{R}_0^d, {\mathcal B}(\mathbb{R}_0^d))$, \ then \ $B \in {\mathcal B}(\mathbb{R}_0^d)$ \ is called a \ $\nu$-continuity set if \ $\nu(\partial B) = 0$, \ and the class of bounded \ $\nu$-continuity sets will be denoted by \ $(\mathbb{R}^d_0)_\nu\hspace*{-1.2mm}\widehat{}\hspace*{1.2mm}$. \ The following statement is an analogue of the portmanteau theorem for vague convergence, see, e.g., Kallenberg \cite[15.7.2]{kallenberg:1983}. \begin{Lem}\label{portmanteau} Let \ $\nu, \nu_n \in {\mathcal M}(\mathbb{R}^d_0)$, $n\in\mathbb{N}$. \ Then the following statements are equivalent: \renewcommand{\labelenumi}{{\rm(\roman{enumi})}} \begin{enumerate} \item $\nu_n \distrv \nu$ \ as \ $n \to \infty$, \item $\nu_n(B) \to \nu(B)$ \ as \ $n \to \infty$ \ for all \ $B \in (\mathbb{R}^d_0)_\nu\hspace*{-1.2mm}\widehat{}\hspace*{1.2mm}$.
\end{enumerate} \end{Lem} The following statement is an analogue of the continuous mapping theorem for vague convergence, see, e.g., Kallenberg \cite[15.7.3]{kallenberg:1983}. Write \ $\mathrm{D}_f$ \ for the set of discontinuities of a function \ $f : \mathbb{R}^d_0 \to \mathbb{R}$. \begin{Lem}\label{cmt} Let \ $\nu, \nu_n \in {\mathcal M}(\mathbb{R}^d_0)$, \ $n\in\mathbb{N}$, \ with \ $\nu_n \distrv \nu$ \ as \ $n \to \infty$. \ Then \ $\nu_n(f) \to \nu(f)$ \ as \ $n \to \infty$ \ for every bounded measurable function \ $f : \mathbb{R}^d_0 \to \mathbb{R}_+$ \ with bounded support satisfying \ $\nu(\mathrm{D}_f) = 0$. \end{Lem} \section{Regularly varying distributions}\label{App_reg_var_distr} First, we recall the notions of slowly varying and regularly varying functions, respectively. \begin{Def} A measurable function \ $U: \mathbb{R}_{++} \to \mathbb{R}_{++}$ \ is called regularly varying at infinity with index \ $\rho \in \mathbb{R}$ \ if for all \ $c \in \mathbb{R}_{++}$, \[ \lim_{x\to\infty} \frac{U(cx)}{U(x)} = c^\rho . \] In the case \ $\rho = 0$, \ we call \ $U$ \ slowly varying at infinity. \end{Def} \begin{Def} A random variable \ $Y$ \ is called regularly varying with index \ $\alpha \in \mathbb{R}_{++}$ \ if \ $\operatorname{\mathbb{P}}(|Y| > x) \in \mathbb{R}_{++}$ \ for all \ $x \in \mathbb{R}_{++}$, \ the function \ $\mathbb{R}_{++} \ni x \mapsto \operatorname{\mathbb{P}}(|Y| > x) \in \mathbb{R}_{++}$ \ is regularly varying at infinity with index \ $-\alpha$, \ and a tail-balance condition holds: \begin{equation}\label{TB} \lim_{x\to\infty} \frac{\operatorname{\mathbb{P}}(Y>x)}{\operatorname{\mathbb{P}}(|Y|>x)} = p , \qquad \lim_{x\to\infty} \frac{\operatorname{\mathbb{P}}(Y\leqslant-x)}{\operatorname{\mathbb{P}}(|Y|>x)} = q , \end{equation} where \ $p + q = 1$.
\end{Def}

\begin{Rem}\label{tail-balance}
In the tail-balance condition \eqref{TB}, the second convergence can be replaced by
\begin{equation}\label{tail-balance+}
\lim_{x\to\infty} \frac{\operatorname{\mathbb{P}}(Y<-x)}{\operatorname{\mathbb{P}}(|Y|>x)} = q .
\end{equation}
Indeed, if \ $Y$ \ is regularly varying with index \ $\alpha \in \mathbb{R}_{++}$, \ then
\[
\limsup_{x\to\infty} \frac{\operatorname{\mathbb{P}}(Y<-x)}{\operatorname{\mathbb{P}}(|Y|>x)} \leqslant \lim_{x\to\infty} \frac{\operatorname{\mathbb{P}}(Y\leqslant-x)}{\operatorname{\mathbb{P}}(|Y|>x)} = q ,
\]
and
\begin{align*}
\liminf_{x\to\infty} \frac{\operatorname{\mathbb{P}}(Y<-x)}{\operatorname{\mathbb{P}}(|Y|>x)}
&\geqslant \liminf_{x\to\infty} \frac{\operatorname{\mathbb{P}}(Y\leqslant-x-1)}{\operatorname{\mathbb{P}}(|Y|>x)} \\
&= \liminf_{x\to\infty} \frac{\operatorname{\mathbb{P}}(Y\leqslant-x-1)}{\operatorname{\mathbb{P}}(|Y|>x+1)} \frac{\operatorname{\mathbb{P}}(|Y|>x(1+1/x))}{\operatorname{\mathbb{P}}(|Y|>x)} = q ,
\end{align*}
since, by the uniform convergence theorem for regularly varying functions (see, e.g., Bingham et al.\ \cite[Theorem 1.5.2]{BinGolTeu}) together with the fact that \ $1 + 1/x \in [1, 2]$ \ for \ $x \in [1, \infty)$, \ we obtain
\[
\lim_{x\to\infty} \frac{\operatorname{\mathbb{P}}(|Y|>x(1+1/x))}{\operatorname{\mathbb{P}}(|Y|>x)} = 1 ,
\]
and hence, we conclude \eqref{tail-balance+}. On the other hand, if \ $Y$ \ is a random variable such that \ $\operatorname{\mathbb{P}}(|Y| > x) \in \mathbb{R}_{++}$ \ for all \ $x \in \mathbb{R}_{++}$, \ the function \ $\mathbb{R}_{++} \ni x \mapsto \operatorname{\mathbb{P}}(|Y| > x) \in \mathbb{R}_{++}$ \ is regularly varying at infinity with index \ $-\alpha$, \ and \eqref{tail-balance+} holds, then the second convergence in the tail-balance condition \eqref{TB} can be derived in a similar way.
\mbox{$\Box$}
\end{Rem}

\begin{Lem}\label{hom}
\renewcommand{\labelenumi}{{\rm(\roman{enumi})}}
\begin{enumerate}
\item A non-negative random variable \ $Y$ \ is regularly varying with index \ $\alpha \in \mathbb{R}_{++}$ \ if and only if \ $\operatorname{\mathbb{P}}(Y > x) \in \mathbb{R}_{++}$ \ for all \ $x \in \mathbb{R}_{++}$, \ and the function \ $\mathbb{R}_{++} \ni x \mapsto \operatorname{\mathbb{P}}(Y > x) \in \mathbb{R}_{++}$ \ is regularly varying at infinity with index \ $-\alpha$.
\item If \ $Y$ \ is a regularly varying random variable with index \ $\alpha \in \mathbb{R}_{++}$, \ then for each \ $\beta \in \mathbb{R}_{++}$, \ $|Y|^\beta$ \ is regularly varying with index \ $\alpha/\beta$.
\end{enumerate}
\end{Lem}

\begin{Lem}\label{a_n}
If \ $Y$ \ is a regularly varying random variable with index \ $\alpha \in \mathbb{R}_{++}$, \ then there exists a sequence \ $(a_n)_{n\in\mathbb{N}}$ \ in \ $\mathbb{R}_{++}$ \ such that \ $n \operatorname{\mathbb{P}}(|Y| > a_n) \to 1$ \ as \ $n \to \infty$. \ If \ $(a_n)_{n\in\mathbb{N}}$ \ is such a sequence, then \ $a_n \to \infty$ \ as \ $n \to \infty$.
\end{Lem}

\noindent{\bf Proof.}
We are going to show that one can choose \ $a_n := \max\{\widetilde{a}_n, 1\}$, \ $n \in \mathbb{N}$, \ where \ $\widetilde{a}_n$ \ denotes the \ $1 - \frac{1}{n}$ \ lower quantile of \ $|Y|$, \ namely,
\[
\widetilde{a}_n := \inf\biggl\{x \in \mathbb{R} : 1 - \frac{1}{n} \leqslant \operatorname{\mathbb{P}}(|Y| \leqslant x)\biggr\} = \inf\biggl\{x \in \mathbb{R} : \operatorname{\mathbb{P}}(|Y| > x) \leqslant \frac{1}{n}\biggr\} , \qquad n \in \mathbb{N} .
\]
For each \ $n \in \mathbb{N}$, \ by the definition of the infimum, there exists a sequence \ $(x_m)_{m\in\mathbb{N}}$ \ in \ $\mathbb{R}$ \ such that \ $x_m \downarrow \widetilde{a}_n$ \ as \ $m \to \infty$ \ and \ $\operatorname{\mathbb{P}}(|Y| > x_m) \leqslant \frac{1}{n}$, \ $m \in \mathbb{N}$.
\ Letting \ $m \to \infty$, \ using that the distribution function of \ $|Y|$ \ is right-continuous, we obtain \ $\operatorname{\mathbb{P}}(|Y| > \widetilde{a}_n) \leqslant \frac{1}{n}$, \ thus \ $n \operatorname{\mathbb{P}}(|Y| > \widetilde{a}_n) \leqslant 1$, \ and hence
\begin{equation}\label{limsup_a_n}
\limsup_{n\to\infty} n \operatorname{\mathbb{P}}(|Y| > \widetilde{a}_n) \leqslant 1 .
\end{equation}
Moreover, for each \ $n \in \mathbb{N}$, \ again by the definition of the infimum, we have \ $\frac{1}{n} < \operatorname{\mathbb{P}}(|Y| > \widetilde{a}_n - 1)$, \ thus \ $n \operatorname{\mathbb{P}}(|Y| > \widetilde{a}_n - 1) > 1$, \ and hence
\begin{equation}\label{liminf_a_n}
\liminf_{n\to\infty} n \operatorname{\mathbb{P}}(|Y| > \widetilde{a}_n - 1) \geqslant 1 .
\end{equation}
We have \ $\widetilde{a}_n \to \infty$ \ as \ $n \to \infty$, \ since \ $|Y|$ \ is regularly varying with index \ $\alpha \in \mathbb{R}_{++}$ \ (see part (ii) of Lemma \ref{hom}), yielding that \ $|Y|$ \ is unbounded. Thus for each \ $q \in (0, 1)$ \ and for sufficiently large \ $n \in \mathbb{N}$, \ we have \ $\widetilde{a}_n \geqslant \frac{1}{1-q}$, \ and then \ $\widetilde{a}_n - 1 \geqslant q \widetilde{a}_n$, \ and hence \ $\operatorname{\mathbb{P}}(|Y| > \widetilde{a}_n - 1) \leqslant \operatorname{\mathbb{P}}(|Y| > q \widetilde{a}_n)$.
\ Consequently, for each \ $q \in (0, 1)$, \ using \eqref{liminf_a_n} and that \ $|Y|$ \ is regularly varying with index \ $\alpha \in \mathbb{R}_{++}$, \ we obtain
\begin{align*}
1 &\leqslant \liminf_{n\to\infty} n \operatorname{\mathbb{P}}(|Y| > \widetilde{a}_n - 1) \leqslant \liminf_{n\to\infty} n \operatorname{\mathbb{P}}(|Y| > q \widetilde{a}_n) \\
&= \liminf_{n\to\infty} \frac{\operatorname{\mathbb{P}}(|Y| > q \widetilde{a}_n)}{\operatorname{\mathbb{P}}(|Y| > \widetilde{a}_n)} n \operatorname{\mathbb{P}}(|Y| > \widetilde{a}_n) = q^{-\alpha} \liminf_{n\to\infty} n \operatorname{\mathbb{P}}(|Y| > \widetilde{a}_n) .
\end{align*}
Hence for each \ $q \in (0, 1)$, \ we have \ $\liminf_{n\to\infty} n \operatorname{\mathbb{P}}(|Y| > \widetilde{a}_n) \geqslant q^\alpha$. \ Letting \ $q \uparrow 1$, \ we get \ $\liminf_{n\to\infty} n \operatorname{\mathbb{P}}(|Y| > \widetilde{a}_n) \geqslant 1$, \ and hence by \eqref{limsup_a_n}, we conclude \ $\lim_{n\to\infty} n \operatorname{\mathbb{P}}(|Y| > \widetilde{a}_n) = 1$.

If \ $(a_n)_{n\in\mathbb{N}}$ \ is a sequence in \ $\mathbb{R}_{++}$ \ such that \ $n \operatorname{\mathbb{P}}(|Y| > a_n) \to 1$ \ as \ $n \to \infty$, \ then \ $a_n \to \infty$ \ as \ $n \to \infty$, \ since \ $|Y|$ \ is unbounded.
\mbox{$\Box$}

\begin{Lem}[Karamata's theorem for truncated moments]\label{truncated_moments}
Consider a non-negative regularly varying random variable \ $Y$ \ with index \ $\alpha \in \mathbb{R}_{++}$.
\ Then
\begin{align*}
\lim_{x\to\infty} \frac{x^\beta\operatorname{\mathbb{P}}(Y>x)}{\operatorname{\mathbb{E}}(Y^\beta\mathbf{1}_{\{Y\leqslant x\}})} &= \frac{\beta-\alpha}{\alpha} \qquad \text{for \ $\beta \in [\alpha, \infty)$,} \\
\lim_{x\to\infty} \frac{x^\beta\operatorname{\mathbb{P}}(Y>x)}{\operatorname{\mathbb{E}}(Y^\beta\mathbf{1}_{\{Y>x\}})} &= \frac{\alpha-\beta}{\alpha} \qquad \text{for \ $\beta \in (-\infty, \alpha)$.}
\end{align*}
\end{Lem}

For Lemma \ref{truncated_moments}, see, e.g., Bingham et al.\ \cite[pages 26--27]{BinGolTeu} or Buraczewski et al.\ \cite[Appendix B.4]{BurDamMik}.

Next, based on Buraczewski et al.\ \cite[Appendix C]{BurDamMik}, we recall the definition and some properties of regularly varying random vectors.

\begin{Def}
A \ $d$-dimensional random vector \ ${\boldsymbol{Y}}$ \ and its distribution are called regularly varying with index \ $\alpha \in \mathbb{R}_{++}$ \ if there exists a probability measure \ $\psi$ \ on \ $\mathbb{S}^{d-1}$ \ such that for all \ $c \in \mathbb{R}_{++}$,
\[
\frac{\operatorname{\mathbb{P}}\bigl(\|{\boldsymbol{Y}}\| > c x, \, \frac{{\boldsymbol{Y}}}{\|{\boldsymbol{Y}}\|} \in \cdot\bigr)}{\operatorname{\mathbb{P}}(\|{\boldsymbol{Y}}\| > x)} \distrw c^{-\alpha} \psi(\cdot) \qquad \text{as \ $x \to \infty$,}
\]
where \ $\distrw$ \ denotes the weak convergence of finite measures on \ $\mathbb{S}^{d-1}$. \ The probability measure \ $\psi$ \ is called the spectral measure of \ ${\boldsymbol{Y}}$.
\end{Def}

The following equivalent characterization of multivariate regular variation can be derived, e.g., from Resnick \cite[page 69]{Res0}.
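Karamata's theorem for truncated moments admits a quick numerical sanity check, since for a Pareto law the truncated moments are available in closed form. The following Python snippet is purely illustrative (the Pareto distribution and all numerical values are our own choices, not objects from the text):

```python
# Illustration of Karamata's theorem for truncated moments, using the
# standard Pareto example P(Y > x) = x**(-alpha) for x >= 1 (our own
# choice, not a distribution appearing in the text).
alpha, x = 1.5, 1.0e6

# Case beta in [alpha, oo):
#   E(Y^beta 1_{Y <= x}) = alpha/(beta - alpha) * (x**(beta - alpha) - 1)
beta1 = 2.5
trunc = alpha / (beta1 - alpha) * (x ** (beta1 - alpha) - 1.0)
ratio1 = x ** beta1 * x ** (-alpha) / trunc        # tends to (beta1 - alpha)/alpha

# Case beta in (-oo, alpha):
#   E(Y^beta 1_{Y > x}) = alpha/(alpha - beta) * x**(beta - alpha)
beta2 = 0.5
tail_mom = alpha / (alpha - beta2) * x ** (beta2 - alpha)
ratio2 = x ** beta2 * x ** (-alpha) / tail_mom     # equals (alpha - beta2)/alpha exactly

print(ratio1, (beta1 - alpha) / alpha)
print(ratio2, (alpha - beta2) / alpha)
```

At \ $\beta = \alpha$ \ the first limit is \ $0$: \ for the Pareto example, \ $\operatorname{\mathbb{E}}(Y^\alpha\mathbf{1}_{\{Y\leqslant x\}}) = \alpha \log x$ \ is slowly varying, while \ $x^\alpha\operatorname{\mathbb{P}}(Y>x) = 1$.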
\begin{Pro}\label{vague}
A \ $d$-dimensional random vector \ ${\boldsymbol{Y}}$ \ is regularly varying with some index \ $\alpha \in \mathbb{R}_{++}$ \ if and only if there exists a non-null locally finite measure \ $\mu$ \ on \ $\mathbb{R}^d_0$ \ satisfying the limit relation
\begin{equation}\label{vague_Kallenberg}
\mu_x(\cdot) := \frac{\operatorname{\mathbb{P}}(x^{-1} {\boldsymbol{Y}} \in \cdot)}{\operatorname{\mathbb{P}}(\|{\boldsymbol{Y}}\| > x)} \distrv \mu(\cdot) \qquad \text{as \ $x \to \infty$,}
\end{equation}
where \ $\distrv$ \ denotes vague convergence of locally finite measures on \ $\mathbb{R}^d_0$ \ (see Appendix \ref{App_vague} for the notion \ $\distrv$). \ Further, \ $\mu$ \ satisfies the property \ $\mu(cB) = c^{-\alpha}\mu(B)$ \ for any \ $c\in\mathbb{R}_{++}$ \ and \ $B\in{\mathcal B}(\mathbb{R}_0^d)$ \ (see, e.g., Theorems 1.14 and 1.15 and Remark 1.16 in Lindskog \cite{Lin}).
\end{Pro}

The measure \ $\mu$ \ in Proposition \ref{vague} is called the limit measure of \ ${\boldsymbol{Y}}$.

\noindent{\bf Proof of Proposition \ref{vague}.}
Recall that a \ $d$-dimensional random vector \ ${\boldsymbol{Y}}$ \ is regularly varying with some index \ $\alpha\in\mathbb{R}_{++}$ \ if and only if on \ $(\overline{\RR}_0^d, {\mathcal B}(\overline{\RR}_0^d))$, \ furnished with an appropriate metric \ $\overline{\varrho}$ \ (see, e.g., Kallenberg \cite[page 125]{kallenberg:2017}), the vague convergence \ $\mu_x \distrv \overline{\mu}$ \ as \ $x \to \infty$ \ holds with some non-null locally finite measure \ $\overline{\mu}$ \ with \ $\overline{\mu}(\overline{\RR}_0^d\setminus\mathbb{R}_0^d) = 0$, \ where \ $\overline{\RR}_0^d := \overline{\RR}^d \setminus \{\boldsymbol{0}\}$ \ with \ $\overline{\RR} := \mathbb{R} \cup \{-\infty, \infty\}$, \ see, e.g., Resnick \cite[page 69]{Res0}.
It remains to check that \ $\mu_x \distrv \overline{\mu}$ \ as \ $x\to\infty$ \ on \ $(\overline{\RR}_0^d, {\mathcal B}(\overline{\RR}_0^d))$ \ holds if and only if \ $\mu_x \distrv \mu$ \ as \ $x\to\infty$ \ on \ $(\mathbb{R}_0^d, {\mathcal B}(\mathbb{R}_0^d))$ \ with \ $\mu:=\overline{\mu}\big\vert_{\mathbb{R}_0^d}$. \ By Lemma \ref{portmanteau}, \ $\mu_x(B\cap\mathbb{R}_0^d)=\mu_x(B)\to \overline{\mu}(B)=\overline{\mu}(B\cap\mathbb{R}_0^d)$ \ as \ $x\to\infty$ \ for any bounded \ $\overline{\mu}$-continuity Borel set \ $B$ \ of \ $\overline{\RR}_0^d$. \ By Kallenberg \cite[page 125]{kallenberg:2017} and Lemma \ref{Lem_S_top}, a subset \ $B$ \ of \ $\overline{\RR}_0^d$ \ is bounded with respect to the metric \ $\overline{\varrho}$ \ if and only if \ $B\cap \mathbb{R}_0^d$ \ (as a subset of \ $\mathbb{R}_0^d$) \ is bounded with respect to the metric \ $\varrho$. \ Further, for any \ $B\in{\mathcal B}(\overline{\RR}_0^d)$, \ $(\partial_{\overline{\RR}_0^d}B) \cap \mathbb{R}_0^d = \partial_{\mathbb{R}_0^d}(B\cap\mathbb{R}_0^d)$, \ where \ $\partial_{\overline{\RR}_0^d}B$ \ and \ $\partial_{\mathbb{R}_0^d}(B\cap\mathbb{R}_0^d)$ \ denote the boundary of \ $B$ \ in \ $\overline{\RR}_0^d$ \ and that of \ $B\cap\mathbb{R}_0^d$ \ in \ $\mathbb{R}_0^d$, \ respectively, since a set \ $G \subset \overline{\RR}_0^d$ \ is open with respect to \ $\overline{\varrho}$ \ if and only if \ $G \cap \mathbb{R}^d_0$ \ is open with respect to \ $\varrho$. \ Thus \ $\overline{\mu}(\partial_{\overline{\RR}_0^d}B)=\overline{\mu}((\partial_{\overline{\RR}_0^d}B) \cap \mathbb{R}_0^d)=0$ \ if and only if \ $\mu(\partial_{\mathbb{R}_0^d}(B \cap \mathbb{R}_0^d))=0$. \ Hence \ $\mu_x(B)\to \overline{\mu}(B)$ \ as \ $x\to\infty$ \ for any bounded \ $\overline{\mu}$-continuity set \ $B$ \ of \ $\overline{\RR}_0^d$ \ if and only if \ $\mu_x(B) \to \mu(B)$ \ as \ $x\to\infty$ \ for any bounded \ $\mu$-continuity set \ $B$ \ of \ $\mathbb{R}_0^d$.
\ Consequently, by Lemma \ref{portmanteau}, \ $\mu_x \distrv \overline{\mu}$ \ as \ $x \to \infty$ \ on \ $\overline{\RR}^d_0$ \ if and only if \ $\mu_x\distrv \mu$ \ as \ $x\to\infty$ \ on \ $\mathbb{R}_0^d$.
\mbox{$\Box$}

The next statement follows, e.g., from part (i) in Lemma C.3.1 in Buraczewski et al.\ \cite{BurDamMik}.

\begin{Lem}\label{Lem_shift}
If \ ${\boldsymbol{Y}}$ \ is a regularly varying \ $d$-dimensional random vector with index \ $\alpha \in \mathbb{R}_{++}$, \ then for each \ ${\boldsymbol{c}} \in \mathbb{R}^d$, \ the random vector \ ${\boldsymbol{Y}} - {\boldsymbol{c}}$ \ is regularly varying with index \ $\alpha$.
\end{Lem}

Recall that if \ ${\boldsymbol{Y}}$ \ is a regularly varying \ $d$-dimensional random vector with index \ $\alpha \in \mathbb{R}_{++}$ \ and with limit measure \ $\mu$ \ given in \eqref{vague_Kallenberg}, and \ $f: \mathbb{R}^d \to \mathbb{R}$ \ is a continuous function with \ $f^{-1}(\{0\}) = \{\boldsymbol{0}\}$ \ and it is positively homogeneous of degree \ $\beta \in \mathbb{R}_{++}$ \ (i.e., \ $f(c {\boldsymbol{v}}) = c^\beta f({\boldsymbol{v}})$ \ for every \ $c \in \mathbb{R}_{++}$ \ and \ ${\boldsymbol{v}} \in \mathbb{R}^d$), \ then \ $f({\boldsymbol{Y}})$ \ is regularly varying with index \ $\frac{\alpha}{\beta}$ \ and with limit measure \ $\mu(f^{-1}(\cdot))$, \ see, e.g., Buraczewski et al.\ \cite[page 282]{BurDamMik}.

Next we describe the tail behavior of \ $f({\boldsymbol{Y}})$ \ for appropriate positively homogeneous functions \ $f: \mathbb{R}^d \to \mathbb{R}$.
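The homogeneous-mapping fact just recalled can be illustrated by a small Monte Carlo experiment. The snippet below is a hypothetical example (the polar construction of \ ${\boldsymbol{Y}}$ \ and all parameter values are our own choices): with a Pareto$(\alpha)$ radius and \ $f({\boldsymbol{x}}) = \|{\boldsymbol{x}}\|^2$, \ positively homogeneous of degree \ $\beta = 2$, \ the empirical tail of \ $f({\boldsymbol{Y}})$ \ should decay with index \ $\alpha/\beta$:

```python
import math
import random

# Monte Carlo sketch (illustrative only): Y = R*(cos U, sin U) with a
# Pareto(alpha) radius R is regularly varying with index alpha, and
# f(x, y) = x**2 + y**2 is positively homogeneous of degree beta = 2,
# so f(Y) should satisfy P(f(Y) > c*x)/P(f(Y) > x) ~ c**(-alpha/beta).
random.seed(0)
alpha, beta, n = 2.0, 2.0, 200_000

def sample():
    r = (1.0 - random.random()) ** (-1.0 / alpha)   # P(R > t) = t**(-alpha), t >= 1
    u = random.uniform(0.0, 2.0 * math.pi)
    return r * math.cos(u), r * math.sin(u)

f = lambda x, y: x * x + y * y                      # homogeneous of degree 2

vals = [f(*sample()) for _ in range(n)]
x, c = 10.0, 2.0
p_x  = sum(v > x for v in vals) / n
p_cx = sum(v > c * x for v in vals) / n
ratio = p_cx / p_x
print(ratio, c ** (-alpha / beta))   # ratio should be close to 0.5
```

Here the spectral measure of \ ${\boldsymbol{Y}}$ \ is the uniform distribution on the circle, but the tail index of \ $f({\boldsymbol{Y}})$ \ depends only on \ $\alpha$ \ and \ $\beta$.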
\begin{Pro}\label{Pro_mapping}
Let \ ${\boldsymbol{Y}}$ \ be a regularly varying \ $d$-dimensional random vector with index \ $\alpha \in \mathbb{R}_{++}$ \ and let \ $f: \mathbb{R}^d \to \mathbb{R}$ \ be a measurable function which is positively homogeneous of degree \ $\beta \in \mathbb{R}_{++}$, \ continuous at \ $\boldsymbol{0}$ \ and \ $\mu(D_f) = 0$, \ where \ $\mu$ \ is the limit measure of \ ${\boldsymbol{Y}}$ \ given in \eqref{vague_Kallenberg} and \ $D_f$ \ denotes the set of discontinuities of \ $f$. \ Then \ $\mu(\partial_{\mathbb{R}_0^d}(f^{-1}((1, \infty)))) = 0$, \ where \ $\partial_{\mathbb{R}_0^d}(f^{-1}((1, \infty)))$ \ denotes the boundary of \ $f^{-1}((1, \infty))$ \ in \ $\mathbb{R}_0^d$. \ Consequently,
\[
\lim_{x\to\infty} \frac{\operatorname{\mathbb{P}}(f({\boldsymbol{Y}})>x)}{\operatorname{\mathbb{P}}(\|{\boldsymbol{Y}}\|^\beta>x)} = \mu(f^{-1}((1,\infty))) ,
\]
and \ $f({\boldsymbol{Y}})$ \ is regularly varying with tail index \ $\frac{\alpha}{\beta}$.
\end{Pro}

\noindent{\bf Proof.}
For all \ $x \in \mathbb{R}_{++}$, \ we have
\[
\frac{\operatorname{\mathbb{P}}(f({\boldsymbol{Y}})>x)}{\operatorname{\mathbb{P}}(\|{\boldsymbol{Y}}\|^\beta>x)} = \frac{\operatorname{\mathbb{P}}(x^{-1}f({\boldsymbol{Y}})>1)}{\operatorname{\mathbb{P}}(\|{\boldsymbol{Y}}\|>x^{1/\beta})} = \frac{\operatorname{\mathbb{P}}(f(x^{-1/\beta}{\boldsymbol{Y}})>1)}{\operatorname{\mathbb{P}}(\|{\boldsymbol{Y}}\|>x^{1/\beta})} = \frac{\operatorname{\mathbb{P}}(x^{-1/\beta}{\boldsymbol{Y}}\in f^{-1}((1,\infty)))}{\operatorname{\mathbb{P}}(\|{\boldsymbol{Y}}\|>x^{1/\beta})} .
\]
Next, we check that \ $f^{-1}((1,\infty))$ \ is a \ $\mu$-continuity set being bounded with respect to the metric \ $\varrho$ \ given in \eqref{S_metric}.
Since \ $f(\boldsymbol{0}) = 0$ \ (following from the positive homogeneity of \ $f$), \ we have \ $f^{-1}((1,\infty)) \in {\mathcal B}(\mathbb{R}_0^d)$. \ The continuity of \ $f$ \ at \ $\boldsymbol{0}$ \ implies the existence of an \ $\varepsilon \in \mathbb{R}_{++}$ \ such that for all \ ${\boldsymbol{x}} \in \mathbb{R}^d$ \ with \ $\|{\boldsymbol{x}}\| \leqslant \varepsilon$ \ we have \ $|f({\boldsymbol{x}})| \leqslant 1$, \ thus \ ${\boldsymbol{x}} \notin f^{-1}((1, \infty))$, \ hence \ $f^{-1}((1, \infty)) \subset \{{\boldsymbol{x}} \in \mathbb{R}_0^d : \|{\boldsymbol{x}}\| > \varepsilon\}$, \ i.e., \ $f^{-1}((1, \infty))$ \ is separated from \ $\boldsymbol{0}$, \ and hence, by Lemma \ref{Lem_S_top}, \ $f^{-1}((1, \infty))$ \ is bounded in \ $\mathbb{R}_0^d$ \ with respect to the metric \ $\varrho$. \ Further, we have
\[
\partial_{\mathbb{R}_0^d}(f^{-1}((1, \infty))) \subset f^{-1}(\partial_{\mathbb{R}}((1,\infty))) \cup D_f = f^{-1}(\{1\}) \cup D_f ,
\]
and hence
\[
\mu(\partial_{\mathbb{R}_0^d}(f^{-1}((1, \infty)))) \leqslant \mu(f^{-1}(\{1\})) + \mu(D_f) = \mu(f^{-1}(\{1\})) .
\]
Here \ $\mu(f^{-1}(\{1\})) = 0$, \ since if, on the contrary, we suppose that \ $\mu(f^{-1}(\{1\})) \in (0, \infty]$, \ then for all \ $u, v \in \mathbb{R}_{++}$ \ with \ $u < v$, \ we have
\begin{align*}
\mu(f^{-1}((u,v))) \geqslant \mu\left(\bigcup_{q\in\mathbb{Q}\cap(u,v)} f^{-1}(\{q\})\right) &= \sum_{q\in\mathbb{Q}\cap(u,v)} \mu(f^{-1}(\{q\})) = \sum_{q\in\mathbb{Q}\cap(u,v)} \mu(q^{\frac{1}{\beta}} f^{-1}(\{1\})) \\
&= \sum_{q\in\mathbb{Q}\cap(u,v)} q^{-\frac{\alpha}{\beta}} \mu(f^{-1}(\{1\})) = \infty ,
\end{align*}
where we used that \ $\mu(cB) = c^{-\alpha} \mu(B)$, \ $c \in \mathbb{R}_{++}$, \ $B \in {\mathcal B}(\mathbb{R}_0^d)$ \ (see Proposition \ref{vague}), and that
\begin{align*}
f^{-1}(\{q\}) &= \{{\boldsymbol{x}} \in \mathbb{R}_0^d : f({\boldsymbol{x}}) = q\} = \{{\boldsymbol{x}} \in \mathbb{R}_0^d : f(q^{-\frac{1}{\beta}}{\boldsymbol{x}}) = 1\} \\
&= \{q^{\frac{1}{\beta}} {\boldsymbol{y}} \in \mathbb{R}_0^d : f({\boldsymbol{y}}) = 1\} = q^{\frac{1}{\beta}} f^{-1}(\{1\}) , \qquad q \in \mathbb{R}_{++} .
\end{align*}
This leads us to a contradiction, since \ $f^{-1}((u, v))$ \ is separated from \ $\boldsymbol{0}$ \ (as can be seen similarly as for \ $f^{-1}((1, \infty))$), so, by Lemma \ref{Lem_S_top}, it is bounded with respect to the metric \ $\varrho$, \ and hence \ $\mu(f^{-1}((u, v))) < \infty$ \ due to the local finiteness of \ $\mu$. \ Hence \ $\mu(\partial_{\mathbb{R}_0^d}(f^{-1}((1, \infty)))) = 0$, \ as desired.

Consequently, by the portmanteau theorem for vague convergence (see Lemma \ref{portmanteau}), we have
\[
\frac{\operatorname{\mathbb{P}}(x^{-1/\beta}{\boldsymbol{Y}}\in f^{-1}((1,\infty)))}{\operatorname{\mathbb{P}}(\|{\boldsymbol{Y}}\|>x^{1/\beta})} \to \mu(f^{-1}((1, \infty))) \qquad \text{as \ $x \to \infty$,}
\]
as desired.
\mbox{$\Box$}

\section{Weak convergence of partial sum processes towards L\'evy processes}\label{App_Resnick_gen}

We formulate a slight modification of Theorem 7.1 in Resnick \cite{Res} with a different centering.

\begin{Thm}\label{7.1}
Suppose that for each \ $N \in \mathbb{N}$, \ ${\boldsymbol{X}}_{N,j}$, \ $j \in \mathbb{N}$, \ are independent and identically distributed \ $d$-dimensional random vectors such that
\begin{equation}\label{(7.5)}
N \operatorname{\mathbb{P}}({\boldsymbol{X}}_{N,1} \in \cdot) \distrv \nu(\cdot) \qquad \text{on \ $\mathbb{R}_0^d$ \ as \ $N \to \infty$,}
\end{equation}
where \ $\nu$ \ is a L\'evy measure on \ $\mathbb{R}_0^d$ \ such that \ $\nu(\{{\boldsymbol{x}} \in \mathbb{R}^d_0 : |\langle{\boldsymbol{e}}_\ell, {\boldsymbol{x}}\rangle| = 1\}) = 0$ \ for every \ $\ell \in \{1, \ldots, d\}$, \ and that
\begin{equation}\label{(7.6)}
\lim_{\varepsilon\downarrow0} \limsup_{N\to\infty} N \operatorname{\mathbb{E}}\bigl(\langle{\boldsymbol{e}}_\ell,{\boldsymbol{X}}_{N,1}\rangle^2 \mathbf{1}_{\{|\langle{\boldsymbol{e}}_\ell,{\boldsymbol{X}}_{N,1}\rangle|\leqslant\varepsilon\}}\bigr) = 0 , \qquad \ell \in \{1, \ldots, d\} .
\end{equation}
Then we have
\[
\Biggl(\sum_{j=1}^{\lfloor Nt\rfloor} \biggl({\boldsymbol{X}}_{N,j} - \sum_{\ell=1}^d \operatorname{\mathbb{E}}\bigl(\langle{\boldsymbol{e}}_\ell, {\boldsymbol{X}}_{N,j}\rangle \mathbf{1}_{\{|\langle{\boldsymbol{e}}_\ell,{\boldsymbol{X}}_{N,j}\rangle|\leqslant1\}}\bigr) {\boldsymbol{e}}_\ell\biggr)\Biggr)_{t\in\mathbb{R}_+} \distr (\boldsymbol{\mathcal{X}}_t)_{t\in\mathbb{R}_+} \qquad \text{as \ $N \to \infty$,}
\]
where \ $(\boldsymbol{\mathcal{X}}_t)_{t\in\mathbb{R}_+}$ \ is a L\'evy process such that the characteristic function of the distribution \ $\mu$ \ of \ $\boldsymbol{\mathcal{X}}_1$ \ has the form
\begin{equation}\label{hmu}
\widehat{\mu}({\boldsymbol{\theta}}) = \exp\biggl\{\int_{\mathbb{R}^d_0} \biggl(\mathrm{e}^{\mathrm{i}\langle{\boldsymbol{\theta}},{\boldsymbol{x}}\rangle} - 1 - \mathrm{i} \sum_{\ell=1}^d \langle{\boldsymbol{e}}_\ell, {\boldsymbol{\theta}}\rangle \langle{\boldsymbol{e}}_\ell, {\boldsymbol{x}}\rangle \mathbf{1}_{(0,1]}(|\langle{\boldsymbol{e}}_\ell,{\boldsymbol{x}}\rangle|)\biggr) \nu(\mathrm{d}{\boldsymbol{x}})\biggr\} , \quad {\boldsymbol{\theta}} \in \mathbb{R}^d .
\end{equation}
\end{Thm}

\noindent{\bf Proof.}
There exists \ $r \in \mathbb{R}_{++}$ \ such that \ $\nu(\{{\boldsymbol{x}} \in \mathbb{R}^d_0 : \|{\boldsymbol{x}}\| = r\}) = 0$, \ since the function \ $\mathbb{R}_{++} \ni t \mapsto \nu(\{{\boldsymbol{x}} \in \mathbb{R}^d_0 : \|{\boldsymbol{x}}\| > t\})$ \ is decreasing, and hence has at most countably many points of discontinuity.
By an appropriate modification of Theorem 7.1 in Resnick \cite{Res}, we obtain
\[
\Biggl(\sum_{j=1}^{\lfloor Nt\rfloor} \bigl({\boldsymbol{X}}_{N,j} - \operatorname{\mathbb{E}}\bigl({\boldsymbol{X}}_{N,j} \mathbf{1}_{\{\Vert {\boldsymbol{X}}_{N,j} \Vert\leqslant r\}}\bigr)\bigr)\Biggr)_{t\in\mathbb{R}_+} \distr (\widetilde{\boldsymbol{\mathcal{X}}}_t)_{t\in\mathbb{R}_+} \qquad \text{as \ $N \to \infty$,}
\]
where \ $(\widetilde{\boldsymbol{\mathcal{X}}}_t)_{t\in\mathbb{R}_+}$ \ is a L\'evy process such that the characteristic function of \ $\widetilde{\boldsymbol{\mathcal{X}}}_1$ \ has the form
\begin{equation*}
\operatorname{\mathbb{E}}(\mathrm{e}^{\mathrm{i} \langle {\boldsymbol{\theta}}, \widetilde{\boldsymbol{\mathcal{X}}}_1 \rangle}) = \exp\biggl\{\int_{\mathbb{R}^d_0} \biggl(\mathrm{e}^{\mathrm{i}\langle{\boldsymbol{\theta}},{\boldsymbol{x}}\rangle} - 1 - \mathrm{i} \langle {\boldsymbol{\theta}},{\boldsymbol{x}} \rangle \mathbf{1}_{(0,r]}(\Vert {\boldsymbol{x}} \Vert)\biggr) \nu(\mathrm{d}{\boldsymbol{x}})\biggr\} , \qquad {\boldsymbol{\theta}} \in \mathbb{R}^d .
\end{equation*}
Let us consider the decomposition
\begin{align*}
&\sum_{j=1}^{\lfloor Nt\rfloor} \biggl({\boldsymbol{X}}_{N,j} - \sum_{\ell=1}^d \operatorname{\mathbb{E}}\bigl(\langle{\boldsymbol{e}}_\ell, {\boldsymbol{X}}_{N,j}\rangle \mathbf{1}_{\{|\langle{\boldsymbol{e}}_\ell,{\boldsymbol{X}}_{N,j}\rangle|\leqslant1\}}\bigr) {\boldsymbol{e}}_\ell\biggr) \\
&= \sum_{j=1}^{\lfloor Nt\rfloor} \bigl({\boldsymbol{X}}_{N,j} - \operatorname{\mathbb{E}}\bigl({\boldsymbol{X}}_{N,j} \mathbf{1}_{\{\Vert {\boldsymbol{X}}_{N,j} \Vert\leqslant r\}}\bigr)\bigr) + \sum_{\ell=1}^d \sum_{j=1}^{\lfloor Nt\rfloor} \operatorname{\mathbb{E}}\bigl(\langle{\boldsymbol{e}}_\ell, {\boldsymbol{X}}_{N,j}\rangle \bigl( \mathbf{1}_{\{ \Vert {\boldsymbol{X}}_{N,j} \Vert\leqslant r\}} - \mathbf{1}_{\{|\langle{\boldsymbol{e}}_\ell,{\boldsymbol{X}}_{N,j}\rangle|\leqslant1\}} \bigr) \bigr) {\boldsymbol{e}}_\ell
\end{align*}
for each \ $t\in\mathbb{R}_{++}$. Here for each \ $\ell\in\{1,\ldots,d\}$, \ we have
\begin{align*}
&\sum_{j=1}^{\lfloor Nt\rfloor} \operatorname{\mathbb{E}}\bigl(\langle{\boldsymbol{e}}_\ell, {\boldsymbol{X}}_{N,j}\rangle \bigl( \mathbf{1}_{\{ \Vert {\boldsymbol{X}}_{N,j} \Vert\leqslant r\}} - \mathbf{1}_{\{|\langle{\boldsymbol{e}}_\ell,{\boldsymbol{X}}_{N,j}\rangle|\leqslant1\}} \bigr) \bigr) \\
&= \lfloor Nt\rfloor \operatorname{\mathbb{E}}\bigl(\langle{\boldsymbol{e}}_\ell, {\boldsymbol{X}}_{N,1}\rangle \bigl( \mathbf{1}_{\{ \Vert {\boldsymbol{X}}_{N,1} \Vert\leqslant r\}} - \mathbf{1}_{\{|\langle{\boldsymbol{e}}_\ell,{\boldsymbol{X}}_{N,1}\rangle|\leqslant1\}} \bigr) \bigr) = \frac{\lfloor Nt\rfloor }{N} N \operatorname{\mathbb{E}}(g_\ell({\boldsymbol{X}}_{N,1})),
\end{align*}
where \ $g_\ell:\mathbb{R}^d\to\mathbb{R}$, \ $g_\ell({\boldsymbol{x}}):=x_\ell\bigl( \mathbf{1}_{\{ \Vert {\boldsymbol{x}} \Vert\leqslant r\}} - \mathbf{1}_{\{|x_\ell|\leqslant1\}} \bigr)$, \
${\boldsymbol{x}}=(x_1,\ldots,x_d)^\top\in\mathbb{R}^d$. \ For each \ $\ell\in\{1,\ldots,d\}$, \ the positive and negative parts \ $g_\ell^+$ \ and \ $g_\ell^-$ \ of the function \ $g_\ell$ \ are bounded and measurable with bounded support (following from Lemma \ref{Lem_S_top}), and, due to \ $\nu(\{{\boldsymbol{x}} \in \mathbb{R}^d_0 : |\langle{\boldsymbol{e}}_\ell, {\boldsymbol{x}}\rangle| = 1\}) = 0$, \ $\ell \in \{1, \ldots, d\}$, \ and \ $\nu(\{{\boldsymbol{x}} \in \mathbb{R}^d_0 : \Vert {\boldsymbol{x}} \Vert = r\})=0$, \ the sets of discontinuity points \ $D_{g^+_\ell}$ \ and \ $D_{g^-_\ell}$ \ have \ $\nu$-measure \ $0$, \ i.e., \ $\nu(D_{g^+_\ell})=\nu(D_{g^-_\ell})=0$. \ Consequently, by \eqref{(7.5)} and Lemma \ref{cmt}, we have
\[
N \operatorname{\mathbb{E}}(g_\ell({\boldsymbol{X}}_{N,1})) = N \operatorname{\mathbb{E}}(g^+_\ell({\boldsymbol{X}}_{N,1})) - N \operatorname{\mathbb{E}}(g^-_\ell({\boldsymbol{X}}_{N,1})) \to \nu(g^+_\ell) - \nu(g^-_\ell) = \nu(g_\ell) \in \mathbb{R}
\]
as \ $N\to\infty$, \ since \ $\nu(g^+_\ell), \nu(g^-_\ell) \in \mathbb{R}_+$ \ due to the fact that \ $\nu$ \ is a L\'evy measure.

Next, we may apply Lemma \ref{Conv2Funct} with
\begin{gather*}
\boldsymbol{\mathcal{U}}_t^{(N)} := \sum_{j=1}^{\lfloor Nt\rfloor} \bigl({\boldsymbol{X}}_{N,j} - \operatorname{\mathbb{E}}\bigl({\boldsymbol{X}}_{N,j} \mathbf{1}_{\{ \Vert {\boldsymbol{X}}_{N,j}\Vert\leqslant r\}}\bigr) \bigr) , \qquad N \in \mathbb{N} , \\
\Phi_N(f)(t) := f(t) + \lfloor Nt\rfloor \sum_{\ell=1}^d \operatorname{\mathbb{E}}(g_\ell({\boldsymbol{X}}_{N,1})){\boldsymbol{e}}_\ell, \qquad N \in \mathbb{N} , \\
\boldsymbol{\mathcal{U}}_t := \widetilde{\boldsymbol{\mathcal{X}}}_t , \qquad \Phi(f)(t) := f(t) + t \sum_{\ell=1}^d \nu(g_\ell){\boldsymbol{e}}_\ell
\end{gather*}
for \ $t \in \mathbb{R}_+$ \ and \ $f \in \mathbb{D}(\mathbb{R}_+,\mathbb{R}^d)$.
\ Indeed, in order to show \ $\Phi_N(f_N) \to \Phi(f)$ \ in \ $\mathbb{D}(\mathbb{R}_+, \mathbb{R}^d)$ \ as \ $N \to \infty$ \ whenever \ $f_N \to f$ \ in \ $\mathbb{D}(\mathbb{R}_+, \mathbb{R}^d)$ \ as \ $N \to \infty$ \ with \ $f, f_N \in \mathbb{D}(\mathbb{R}_+, \mathbb{R}^d)$, \ $N \in \mathbb{N}$, \ by Propositions VI.1.17 and VI.1.23 in Jacod and Shiryaev \cite{JacShi}, it is enough to check that for each \ $T \in \mathbb{R}_{++}$, \ we have
\begin{align*}
\sup_{t\in[0,T]} \sum_{\ell=1}^d \bigl| \lfloor Nt\rfloor \operatorname{\mathbb{E}}(g_\ell({\boldsymbol{X}}_{N,1})) - t\nu(g_\ell) \bigr| \to 0 \qquad \text{as \ $N\to\infty$.}
\end{align*}
This follows, since for each \ $\ell\in\{1,\ldots,d\}$,
\begin{align*}
&\sup_{t\in[0,T]} \bigl| \lfloor Nt\rfloor \operatorname{\mathbb{E}}(g_\ell({\boldsymbol{X}}_{N,1})) - t\nu(g_\ell) \bigr| \\
&\leqslant \sup_{t\in[0,T]} \biggl|\frac{\lfloor Nt\rfloor}{N} \bigl( N\operatorname{\mathbb{E}}(g_\ell({\boldsymbol{X}}_{N,1})) - \nu(g_\ell) \bigr)\biggr| + \sup_{t\in[0,T]} \biggl| \nu(g_\ell)\biggl( \frac{\lfloor Nt\rfloor}{N} - t \biggr) \biggr| \\
&\leqslant T \vert N\operatorname{\mathbb{E}}(g_\ell({\boldsymbol{X}}_{N,1})) - \nu(g_\ell) \vert + \frac{\vert\nu(g_\ell)\vert}{N} \to 0 \qquad \text{as \ $N \to \infty$.}
\end{align*}
Applying Lemma \ref{Conv2Funct}, we obtain
\[
\biggl(\sum_{j=1}^{\lfloor Nt\rfloor} \biggl({\boldsymbol{X}}_{N,j} - \sum_{\ell=1}^d \operatorname{\mathbb{E}}\bigl(\langle{\boldsymbol{e}}_\ell, {\boldsymbol{X}}_{N,j}\rangle \mathbf{1}_{\{|\langle{\boldsymbol{e}}_\ell,{\boldsymbol{X}}_{N,j}\rangle|\leqslant1\}}\bigr) {\boldsymbol{e}}_\ell\biggr) \biggr)_{t\in\mathbb{R}_+} = \Phi_N(\boldsymbol{\mathcal{U}}^{(N)}) \distr \Phi(\boldsymbol{\mathcal{U}}) \qquad \text{as \ $N \to \infty$,}
\]
where \ $\Phi(\boldsymbol{\mathcal{U}})_t = \widetilde{\boldsymbol{\mathcal{X}}}_t + t \sum_{\ell=1}^d \nu(g_\ell){\boldsymbol{e}}_\ell = \boldsymbol{\mathcal{X}}_t$, \ $t \in \mathbb{R}_+$, \
is a \ $d$-dimensional L\'evy process, since
\begin{align*}
\operatorname{\mathbb{E}}\bigl( \mathrm{e}^{\mathrm{i} \langle {\boldsymbol{\theta}}, \widetilde{\boldsymbol{\mathcal{X}}}_1 + \sum_{\ell=1}^d \nu(g_\ell){\boldsymbol{e}}_\ell \rangle}\bigr)
& = \exp\biggl\{\int_{\mathbb{R}^d_0} \biggl(\mathrm{e}^{\mathrm{i}\langle{\boldsymbol{\theta}},{\boldsymbol{x}}\rangle} - 1 - \mathrm{i} \langle {\boldsymbol{\theta}},{\boldsymbol{x}} \rangle \mathbf{1}_{(0,r]}(\Vert {\boldsymbol{x}} \Vert)\biggr) \nu(\mathrm{d}{\boldsymbol{x}}) + \mathrm{i} \sum_{\ell=1}^d \langle {\boldsymbol{\theta}},{\boldsymbol{e}}_\ell \rangle \nu(g_\ell) \biggr\} \\
& = \exp\biggl\{\int_{\mathbb{R}^d_0} \biggl(\mathrm{e}^{\mathrm{i}\langle{\boldsymbol{\theta}},{\boldsymbol{x}}\rangle} - 1 - \mathrm{i} \langle {\boldsymbol{\theta}},{\boldsymbol{x}} \rangle \mathbf{1}_{(0,r]}(\Vert {\boldsymbol{x}} \Vert)\biggr) \nu(\mathrm{d}{\boldsymbol{x}}) \\
&\phantom{= \exp\biggl\{} + \mathrm{i} \sum_{\ell=1}^d \langle {\boldsymbol{\theta}},{\boldsymbol{e}}_\ell \rangle \int_{\mathbb{R}_0^d} \langle {\boldsymbol{e}}_\ell,{\boldsymbol{x}} \rangle \bigl( \mathbf{1}_{\{\Vert {\boldsymbol{x}} \Vert\leqslant r \}} - \mathbf{1}_{\{ \vert \langle {\boldsymbol{e}}_\ell,{\boldsymbol{x}} \rangle \vert \leqslant1\}}\bigr)\,\nu(\mathrm{d} {\boldsymbol{x}}) \biggr\},
\end{align*}
yielding \eqref{hmu}.
\mbox{$\Box$}

\section{Tail behavior of \ $(X_k)_{k\in\mathbb{Z}_+}$}\label{App_tail}

Due to Basrak et al.\ \cite[Theorem 2.1.1]{BasKulPal}, we have the following tail behavior.

\begin{Thm}\label{Xtail}
We have
\[
\lim_{x\to\infty} \frac{\pi((x, \infty))}{\operatorname{\mathbb{P}}(\varepsilon > x)} = \sum_{i=0}^\infty m_\xi^{i\alpha} = \frac{1}{1-m_\xi^\alpha} ,
\]
where \ $\pi$ \ denotes the unique stationary distribution of the Markov chain \ $(X_k)_{k\in\mathbb{Z}_+}$, \ and consequently, \ $\pi$ \ is also regularly varying with index \ $\alpha$.
\end{Thm}

Note that in case \ $\alpha = 1$ \ and \ $m_\varepsilon = \infty$, \ Basrak et al.\ \cite[Theorem 2.1.1]{BasKulPal} additionally assume that \ $\varepsilon$ \ is consistently varying (or, in other words, intermediate varying); however, this additional assumption in fact follows from the regular variation of \ $\varepsilon$.

Let \ $(X_k)_{k\in\mathbb{Z}}$ \ be a strongly stationary extension of \ $(X_k)_{k\in\mathbb{Z}_+}$. \ Basrak et al.\ \cite[Lemma 3.1]{BasKulPal} described the so-called forward tail process of the strongly stationary process \ $(X_k)_{k\in\mathbb{Z}}$, \ and hence, due to Basrak and Segers \cite[Theorem 2.1]{BasSeg}, the strongly stationary process \ $(X_k)_{k\in\mathbb{Z}}$ \ is jointly regularly varying.

\begin{Thm}\label{Xtailprocess}
The finite dimensional conditional distributions of \ $(x^{-1} X_k)_{k\in\mathbb{Z}_+}$ \ with respect to the condition \ $X_0 > x$ \ converge weakly to the corresponding finite dimensional distributions of \ $(m_\xi^k Y)_{k\in\mathbb{Z}_+}$ \ as \ $x \to \infty$, \ where \ $Y$ \ is a random variable with Pareto distribution \ $\operatorname{\mathbb{P}}(Y \leqslant y) = (1 - y^{-\alpha}) \mathbf{1}_{[1,\infty)}(y)$, \ $y \in \mathbb{R}$. \ Consequently, the strongly stationary process \ $(X_k)_{k\in\mathbb{Z}}$ \ is jointly regularly varying with index \ $\alpha$, \ i.e., all its finite dimensional distributions are regularly varying with index \ $\alpha$. \ The process \ $(m_\xi^k Y)_{k\in\mathbb{Z}_+}$ \ is the so-called forward tail process of \ $(X_k)_{k\in\mathbb{Z}}$. \ Moreover, there exists a (whole) tail process of \ $(X_k)_{k\in\mathbb{Z}}$ \ as well.
\end{Thm}

By the proof of Theorem \ref{simple_aggregation1_stable_fd} and Proposition \ref{Pro_mapping}, we obtain the following results.
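The tail constant \ $1/(1-m_\xi^\alpha)$ \ of Theorem \ref{Xtail} can be illustrated by a small simulation. The snippet below uses, purely as a stand-in, the linear stationary approximation \ $X = \sum_{i} m^i \varepsilon_i$ \ with i.i.d.\ Pareto innovations (the actual chain \ $(X_k)_{k\in\mathbb{Z}_+}$ \ is defined in the body of the paper; the linear model, the Pareto choice and all parameter values are our own assumptions):

```python
import random

# Monte Carlo sketch of the tail constant sum_i m**(i*alpha) = 1/(1 - m**alpha):
# as a purely illustrative stand-in for the chain (X_k), take the truncated
# linear stationary approximation X = sum_{i=0}^{K} m**i * eps_i with i.i.d.
# Pareto(alpha) innovations eps_i; then P(X > x)/P(eps > x) should be close
# to 1/(1 - m**alpha) for large x.
random.seed(1)
alpha, m, K, n, x = 1.5, 0.5, 25, 200_000, 100.0

def pareto():
    return (1.0 - random.random()) ** (-1.0 / alpha)   # P(eps > t) = t**(-alpha), t >= 1

hits_X = hits_eps = 0
for _ in range(n):
    eps0 = pareto()
    X = eps0 + sum(m ** i * pareto() for i in range(1, K + 1))
    hits_eps += eps0 > x
    hits_X += X > x

ratio = hits_X / hits_eps
print(ratio, 1.0 / (1.0 - m ** alpha))   # theoretical constant, about 1.547
```

The single-big-jump heuristic behind the constant is visible here: up to asymptotically negligible terms, the event \ $\{X>x\}$ \ is driven by exactly one summand \ $m^i\varepsilon_i$ \ exceeding \ $x$.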
\begin{Pro}\label{Pro_limit_meaure}
For each \ $k \in \mathbb{Z}_+$,
\renewcommand{\labelenumi}{{\rm(\roman{enumi})}}
\begin{enumerate}
\item
the limit measure \ $\widetilde{\nu}_{k,\alpha}$ \ of \ $(X_0, \ldots, X_k)^\top$ \ given in \eqref{tnu_alpha^k_old} takes the form
\[
 \widetilde{\nu}_{k,\alpha} = \frac{\nu_{k,\alpha}}{\nu_{k,\alpha}(\{{\boldsymbol{x}}\in\mathbb{R}_0^{k+1}:\|{\boldsymbol{x}}\|>1\})} ,
\]
where \ $\nu_{k,\alpha}$ \ is given by \eqref{fint} and
\[
 \nu_{k,\alpha}(\{{\boldsymbol{x}}\in\mathbb{R}_0^{k+1}:\|{\boldsymbol{x}}\|>1\}) = \frac{1-m_\xi^\alpha}{(1-m_\xi^2)^{\alpha/2}} \biggl(\frac{(1-m_\xi^{2(k+1)})^{\alpha/2}}{1-m_\xi^\alpha} + \sum_{j=1}^k (1-m_\xi^{2(k-j+1)})^{\alpha/2}\biggr) ;
\]
\item
the tail behavior of \ $X_0 + \cdots + X_k$ \ is given by
\[
 \lim_{x\to\infty} \frac{\operatorname{\mathbb{P}}(X_0+\cdots+X_k>x)}{\operatorname{\mathbb{P}}(X_0>x)} = \frac{1-m_\xi^\alpha}{(1-m_\xi)^\alpha} \biggl(\frac{ (1-m_\xi^{k+1})^\alpha }{1-m_\xi^\alpha} + \sum_{j=1}^k (1-m_\xi^{k-j+1})^\alpha\biggr) .
\]
\end{enumerate}
\end{Pro}

\noindent{\bf Proof.}
(i). \ In the proof of Theorem \ref{simple_aggregation1_stable_fd}, we derived \ $\nu_{k,\alpha} = \widetilde{\nu}_{k,\alpha}/\widetilde{\nu}_{k,\alpha}(\{{\boldsymbol{x}}\in\mathbb{R}_0^{k+1}:x_0>1\})$.
\ Consequently,
\[
 \widetilde{\nu}_{k,\alpha}(\{{\boldsymbol{x}}\in\mathbb{R}_0^{k+1}:x_0>1\}) = \frac{\widetilde{\nu}_{k,\alpha}(\{{\boldsymbol{x}}\in\mathbb{R}_0^{k+1}:\|{\boldsymbol{x}}\|>1\})}{\nu_{k,\alpha}(\{{\boldsymbol{x}}\in\mathbb{R}_0^{k+1}:\|{\boldsymbol{x}}\|>1\})} ,
\]
where, using Proposition \ref{Pro_mapping} with the 1-homogeneous function \ $\mathbb{R}^{k+1} \ni {\boldsymbol{x}} \mapsto \|{\boldsymbol{x}}\|$, \ we have
\[
 \widetilde{\nu}_{k,\alpha}(\{{\boldsymbol{x}}\in\mathbb{R}_0^{k+1}:\|{\boldsymbol{x}}\|>1\}) = \lim_{x\to\infty} \frac{\operatorname{\mathbb{P}}(\|(X_0,\ldots,X_k)^\top\|>x)}{\operatorname{\mathbb{P}}(\|(X_0,\ldots,X_k)^\top\|>x)} = 1 ,
\]
and, by \eqref{fint},
\begin{align*}
&\nu_{k,\alpha}(\{{\boldsymbol{x}} \in \mathbb{R}_0^{k+1} : \|{\boldsymbol{x}}\| > 1\}) = \int_{\mathbb{R}_0^{k+1}} \mathbbm{1}_{\{\|{\boldsymbol{x}}\|>1\}} \, \nu_{k,\alpha}(\mathrm{d}{\boldsymbol{x}}) \\
&= (1 - m_\xi^\alpha) \sum_{j=0}^k \int_0^\infty \mathbbm{1}_{\{\|u{\boldsymbol{v}}_{j}^{(k)}\|>1\}} \alpha u^{-\alpha-1} \, \mathrm{d} u \\
&= (1 - m_\xi^\alpha) \sum_{j=0}^k \int_{\|{\boldsymbol{v}}_{j}^{(k)}\|^{-1}}^\infty \alpha u^{-\alpha-1} \, \mathrm{d} u = (1 - m_\xi^\alpha) \sum_{j=0}^k \|{\boldsymbol{v}}_j^{(k)}\|^\alpha \\
&= (1 - m_\xi^\alpha) \biggl((1-m_\xi^\alpha)^{-1}(1+m_\xi^2 + \cdots + m_\xi^{2k})^{\alpha/2} + \sum_{j=1}^k (1+m_\xi^2 + \cdots + m_\xi^{2(k-j)})^{\alpha/2}\biggr) \\
&= \frac{1-m_\xi^\alpha}{(1-m_\xi^2)^{\alpha/2}} \biggl(\frac{(1-m_\xi^{2(k+1)})^{\alpha/2}}{1-m_\xi^\alpha} + \sum_{j=1}^k (1-m_\xi^{2(k-j+1)})^{\alpha/2}\biggr) .
\end{align*}

(ii).
\ Applying Proposition \ref{Pro_mapping} for the 1-homogeneous functions \ $\mathbb{R}^{k+1} \ni {\boldsymbol{x}} \mapsto x_0$ \ and \ $\mathbb{R}^{k+1} \ni {\boldsymbol{x}} \mapsto x_0 + \cdots + x_k$ \ and formula \eqref{fint}, we obtain
\begin{align*}
&\lim_{x\to\infty} \frac{\operatorname{\mathbb{P}}(X_0+\cdots+X_k>x)}{\operatorname{\mathbb{P}}(X_0>x)} = \lim_{x\to\infty} \frac{\operatorname{\mathbb{P}}(\|(X_0,\ldots,X_k)^\top\|>x)}{\operatorname{\mathbb{P}}(X_0>x)} \frac{\operatorname{\mathbb{P}}(X_0+\cdots+X_k>x)}{\operatorname{\mathbb{P}}(\|(X_0,\ldots,X_k)^\top\|>x)} \\
&= \frac{\widetilde{\nu}_{k,\alpha}(\{{\boldsymbol{x}}\in\mathbb{R}_0^{k+1}:x_0+\cdots+x_k>1\})}{\widetilde{\nu}_{k,\alpha}(\{{\boldsymbol{x}}\in\mathbb{R}_0^{k+1}:x_0>1\})} = \nu_{k,\alpha}(\{{\boldsymbol{x}} \in \mathbb{R}_0^{k+1} : x_0 + \cdots + x_k > 1\}) \\
&= \nu_{k,\alpha}(\{{\boldsymbol{x}} \in \mathbb{R}_0^{k+1} : \langle{\boldsymbol{1}}_{k+1}, {\boldsymbol{x}}\rangle > 1\}) = \int_{\mathbb{R}_0^{k+1}} \mathbbm{1}_{\{\langle{\boldsymbol{1}}_{k+1}, {\boldsymbol{x}}\rangle > 1\}} \, \nu_{k,\alpha}(\mathrm{d}{\boldsymbol{x}}) \\
&= (1 - m_\xi^\alpha) \sum_{j=0}^k \int_0^\infty \mathbbm{1}_{\{\langle{\boldsymbol{1}}_{k+1}, u{\boldsymbol{v}}_{j}^{(k)}\rangle > 1\}} \alpha u^{-\alpha-1} \, \mathrm{d} u \\
&= (1 - m_\xi^\alpha) \sum_{j=0}^k \int_{\langle{\boldsymbol{1}}_{k+1}, {\boldsymbol{v}}_{j}^{(k)}\rangle^{-1}}^\infty \alpha u^{-\alpha-1} \, \mathrm{d} u = (1 - m_\xi^\alpha) \sum_{j=0}^k \langle {\boldsymbol{1}}_{k+1}, {\boldsymbol{v}}_j^{(k)} \rangle^\alpha \\
&= (1 - m_\xi^\alpha) \biggl((1-m_\xi^\alpha)^{-1}(1+m_\xi + \cdots + m_\xi^k)^\alpha + \sum_{j=1}^k (1+m_\xi + \cdots + m_\xi^{k-j})^\alpha\biggr) \\
& = \frac{1-m_\xi^\alpha}{(1-m_\xi)^\alpha} \biggl(\frac{ (1-m_\xi^{k+1})^\alpha }{1-m_\xi^\alpha} + \sum_{j=1}^k (1-m_\xi^{k-j+1})^\alpha\biggr) ,
\end{align*}
as desired.
\mbox{$\Box$}

\end{document}
\begin{document}

\title{A Note on Modeling Self-Suspending Time as Blocking Time in Real-Time Systems\thanks{This paper is supported by DFG, as part of the Collaborative Research Center SFB876 (http://sfb876.tu-dortmund.de/). This work was also partially supported by National Funds through FCT/MEC (Portuguese Foundation for Science and Technology) and co-financed by ERDF (European Regional Development Fund) under the PT2020 Partnership, within project UID/CEC/04234/2013 (CISTER); also by FCT/MEC and the EU ARTEMIS JU within project(s) ARTEMIS/0003/2012 - JU grant nr. 333053 (CONCERTO) and ARTEMIS/0001/2013 - JU grant nr. 621429 (EMC2).}}
\titlerunning{A Note on Modeling Suspension as Blocking}
\author{Jian-Jia Chen\inst{1}, Wen-Hung Huang\inst{1}, and Geoffrey Nelissen\inst{2}}
\institute{TU Dortmund University, Germany\\ Email: [email protected], [email protected] \and CISTER/INESC-TEC, ISEP, Polytechnic Institute of Porto, Portugal \\ Email: [email protected] }
\maketitle

\begin{abstract}
This report presents a proof to support the correctness of the schedulability test for self-suspending real-time task systems proposed by Jane W. S. Liu in her book titled ``Real-Time Systems'' (Pages 164-165). The same concept was also implicitly used by Rajkumar, Sha, and Lehoczky in RTSS 1988 (Page 267) for analyzing self-suspending behaviour due to synchronization protocols in multiprocessor systems.
\end{abstract}

\section{Introduction}

This report presents a proof to support the correctness of the schedulability test for self-suspending real-time task systems proposed by Jane W. S. Liu in her book titled ``Real-Time Systems'' \cite[Pages 164-165]{Liu:2000:RS:518501}. The same concept was also implicitly used by Rajkumar, Sha, and Lehoczky \cite[Page 267]{DBLP:conf/rtss/RajkumarSL88} for analyzing self-suspending behaviour due to synchronization protocols in multiprocessor systems.
The system model and terminologies are defined as follows: We assume a system composed of $n$ sporadic self-suspending tasks. A sporadic task $\tau_i$ is released repeatedly, with each such invocation called a job. The $j^{th}$ job of $\tau_i$, denoted $\tau_{i,j}$, is released at time $r_{i,j}$ and has an absolute deadline at time $d_{i,j}$. Each job of any task $\tau_i$ is assumed to have a worst-case execution time $C_i$. Each job of task $\tau_i$ suspends for at most $S_i$ time units (across all of its suspension phases). When a job suspends itself, the processor can execute another job. The response time of a job is defined as its finishing time minus its release time. Successive jobs of the same task are required to execute in sequence. Associated with each task $\tau_i$ are a period (or minimum inter-arrival time) $T_i$, which specifies the minimum time between two consecutive job releases of $\tau_i$, and a relative deadline $D_i$, which specifies the maximum amount of time a job can take to complete its execution after its release, i.e., $d_{i,j}=r_{i,j}+D_i$. The worst-case response time $R_i$ of a task $\tau_i$ is the maximum response time among all its jobs. The utilization of a task $\tau_i$ is defined as $U_i=C_i/T_i$. In this report, we focus on constrained-deadline task systems, in which $D_i \leq T_i$ for every task $\tau_i$. We only consider preemptive fixed-priority scheduling on a single processor, in which each task is assigned with a unique priority level. We assume that the priority assignment is given. We assume that the tasks are numbered in a decreasing priority order. That is, a task with a smaller index has higher priority than any task with a higher index, i.e., task $\tau_i$ has a higher-priority level than task $\tau_{i+1}$. 
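The parameters above can be collected in a small record type. The following Python sketch is ours and purely illustrative (the class and field names are not part of the report); it mirrors the notation $C_i$, $S_i$, $T_i$, $D_i$ and $U_i$:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Task:
    """Sporadic self-suspending task, with the parameters defined above."""
    C: float  # worst-case execution time
    S: float  # maximum total self-suspension time per job
    T: float  # period / minimum inter-arrival time
    D: float  # relative deadline (constrained: D <= T)

    @property
    def U(self) -> float:
        # utilization U_i = C_i / T_i
        return self.C / self.T

# tasks listed in decreasing priority order: tasks[0] has the highest priority
tasks = [Task(C=1, S=1, T=6, D=6), Task(C=1, S=6, T=10, D=10)]
assert all(t.D <= t.T for t in tasks)  # constrained-deadline system
```

The two sample tasks are $\tau_1$ and $\tau_2$ of the concrete example given at the end of this report.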
When performing the schedulability analysis of a specific task $\tau_k$, we assume that $\tau_1, \tau_2, \ldots, \tau_{k-1}$ are already verified to meet their deadlines, i.e., that $R_i \leq D_i, \forall \tau_i \mid 1 \leq i \leq k-1$. We also classify the $k-1$ higher-priority tasks into two sets: ${\bf T}_1$ and ${\bf T}_2$. A task $\tau_i$ is in ${\bf T}_1$ if $C_i > S_i$; otherwise (i.e., if $S_i \geq C_i$), it is in ${\bf T}_2$.

\section{Model Suspension Time and Blocking Time}

To analyze the worst-case response time (or the schedulability) of task $\tau_k$, we usually need to quantify the worst-case interference caused by the higher-priority tasks on the execution of any job of task $\tau_k$. In the ordinary sequential sporadic real-time task model, i.e., when $S_i=0$ for every task $\tau_i$, the so-called critical instant theorem by Liu and Layland \cite{Liu_1973} is commonly adopted. That is, the worst-case response time of task $\tau_k$ (if it is less than or equal to its period) happens for the first job of task $\tau_k$ when $\tau_k$ and all the higher-priority tasks release a job synchronously and the subsequent jobs are released as early as possible (i.e., with a rate equal to their period). However, as proven in \cite{ecrts15nelissen}, this definition of the critical instant does not hold for self-suspending sporadic tasks. In \cite[Pages 164-165]{Liu:2000:RS:518501}, Jane W. S. Liu proposed a solution to study the schedulability of self-suspending tasks by modeling the \emph{extra delay} suffered by a task $\tau_k$ due to the self-suspending behavior of the tasks as a blocking time denoted as $B_k$ and defined as follows:
\begin{itemize}
\item The blocking time contributed by task $\tau_k$ itself is $S_k$.
\item A higher-priority task $\tau_i$ can block the execution of task $\tau_k$ for at most $b_i=min(C_i, S_i)$ time units.
\end{itemize}
Therefore,
\begin{equation}
\label{eq:Bk}
B_k = S_k + \sum_{i=1}^{k-1} b_i.
\end{equation} In \cite{Liu:2000:RS:518501}, the blocking time is then used to derive a utilization-based schedulability test for rate-monotonic scheduling. Namely, it is stated that if $\frac{C_k+B_k}{T_k} + \sum_{i=1}^{k-1} U_i \leq k (2^{\frac{1}{k}}-1)$, then task $\tau_k$ can be feasibly scheduled by using rate-monotonic scheduling if $T_i=D_i$ for every task $\tau_i$ in the given task set. If the above argument is correct, we can further prove that a constrained-deadline task $\tau_k$ can be feasibly scheduled by the fixed-priority scheduling if \begin{equation} \label{eq:TDA-suspension} \exists t \mid 0 < t \leq D_k, \qquad C_k + B_k + \sum_{i=1}^{k-1}\ceiling{\frac{t}{T_i}} C_i \leq t. \end{equation} The same concept was also implicitly used by Rajkumar, Sha, and Lehoczky \cite[Page 267]{DBLP:conf/rtss/RajkumarSL88} for analyzing self-suspending behaviour due to synchronization protocols in multiprocessor systems. To account for the self-suspending behaviour, it reads as follows:\footnote{We rephrased the wordings and notation to be consistent with this report.} \begin{quote} \emph{For each higher priority job $J_i$ on the processor that suspends on global semaphores or for other reasons, add the term $min(C_i, S_i)$ to $B_k$, where $S_i$ is the maximum duration that $J_i$ can suspend itself. The sum ... yields $B_k$, which in turn can be used in $\frac{C_k+B_k}{T_k} + \sum_{i=1}^{k-1} U_i \leq k (2^{\frac{1}{k}}-1)$ to determine whether the current task allocation to the processor is schedulable.} \end{quote} However, as there is no proof in \cite{Liu:2000:RS:518501,DBLP:conf/rtss/RajkumarSL88} to support the correctness of the above tests, we present a proof in the next section of this report. \ifpaper \newtheorem{Property}{Property} \newtheorem{Lemma}{Lemma} \newtheorem{Corollary}{Corollary} \fi \section{Our Proof} This section provides the proof to support the correctness of the test in Eq. \eqref{eq:TDA-suspension}. 
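Before turning to the proof, Eq. \eqref{eq:Bk} and the test of Eq. \eqref{eq:TDA-suspension} can be evaluated with the standard fixed-point iteration of time-demand analysis. The following Python sketch is ours and purely illustrative; tasks are given as $(C_i, S_i, T_i, D_i)$ tuples in decreasing priority order, with $0$-based indices:

```python
import math

def blocking_time(tasks, k):
    """B_k = S_k + sum over higher-priority tasks of min(C_i, S_i), Eq. (1)."""
    _, S_k, _, _ = tasks[k]
    return S_k + sum(min(C, S) for (C, S, _, _) in tasks[:k])

def response_time_bound(tasks, k):
    """Smallest t with C_k + B_k + sum_i ceil(t/T_i)*C_i <= t (Eq. (2)),
    or None if no such t exists in (0, D_k]."""
    C_k, _, _, D_k = tasks[k]
    B_k = blocking_time(tasks, k)
    t = C_k + B_k                      # least possible demand
    while t <= D_k:
        demand = C_k + B_k + sum(math.ceil(t / T) * C
                                 for (C, _, T, _) in tasks[:k])
        if demand <= t:
            return t                   # fixed point reached: Eq. (2) holds
        t = demand                     # demand is a step function, so jump
    return None                        # test fails within (0, D_k]

# the 4-task example used later in this report, with D_i = T_i
tasks = [(1, 1, 6, 6), (1, 6, 10, 10), (4, 1, 18, 18), (5, 0, 20, 20)]
```

For this task set the iteration yields $B_4 = 3$ and a bound of $17 \leq D_4 = 20$ for $\tau_4$.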
First, it should be easy to see that we can convert the suspension time of task $\tau_k$ into computation time. This has been proven in many previous works, e.g., Lemma~3 in \cite{Liu_2014} and Theorem~2 in \cite{ecrts15nelissen}. Yet, it remains to formally prove that the additional interference due to the self-suspension of a higher-priority task $\tau_i$ is upper-bounded by $b_i=min(C_i, S_i)$. That the interference is at most $C_i$ has also been proven in the literature, e.g., \cite{Rajkumar_1990,Liu_2014}. However, the argument that a higher-priority task $\tau_i$ can block task $\tau_k$ for at most $S_i$ time units is not straightforward.

From the above discussion, we can greedily convert the suspension time of task $\tau_k$ into computation time. For the sake of notational brevity, let $C_k'$ be $C_k + S_k$. We call this converted version of task $\tau_k$ task $\tau_k'$. Our analysis is based on very simple properties and lemmas, stated as follows:

\begin{Property}
\label{prop:lower-priority}
In a preemptive fixed-priority schedule, the lower-priority jobs do not impact the schedule of the higher-priority jobs.
\end{Property}

\begin{Lemma}
\label{lemma:remove-same-task}
In a preemptive fixed-priority schedule, if the worst-case response time of task $\tau_i$ is no more than its period $T_i$, preventing the release of a job of task $\tau_i$ does not affect the schedule of any other job of task $\tau_i$.
\end{Lemma}
\begin{proof}
Since the worst-case response time of task $\tau_i$ is no more than its period, any job $\tau_{i,j}$ of task $\tau_i$ completes its execution before the release of the next job $\tau_{i,j+1}$. Hence, the execution of $\tau_{i,j}$ does not directly interfere with the execution of any other job of $\tau_i$, which then depends only on the schedule of the higher priority jobs.
Furthermore, as stated in Property~\ref{prop:lower-priority}, the removal of $\tau_{i,j}$ has no impact on the schedule of the higher-priority jobs, thereby implying that the other jobs of task $\tau_i$ are not affected by the removal of $\tau_{i,j}$.
\end{proof}

We can prove the correctness of Eq. \eqref{eq:TDA-suspension} by using a proof similar to that of the critical instant theorem for the ordinary sporadic task model. Let $R_k'$ be the minimum $t$ greater than $0$ such that Eq. \eqref{eq:TDA-suspension} holds; since $C_k + B_k = C'_k + \sum_{i=1}^{k-1} b_i$, this means that $C'_k + \sum_{i=1}^{k-1} b_i + \sum_{i=1}^{k-1}\ceiling{\frac{R_k'}{T_i}} C_i = R_k'$ and $C'_k + \sum_{i=1}^{k-1} b_i + \sum_{i=1}^{k-1}\ceiling{\frac{t}{T_i}} C_i > t$ for all $0 < t < R_k'$. The following theorem shows that $R_k'$ is a safe upper bound if the worst-case response time of task $\tau_k'$ is no more than $T_k$.

\begin{theorem}
\label{theorem:critical}
$R_k'$ is a safe upper bound on the worst-case response time of task $\tau_k'$ if the worst-case response time of $\tau_k'$ is not larger than $T_k$.
\end{theorem}
\begin{proof}
Let us consider the task set $\tau'$ composed of $\left\{\tau_1, \tau_2, \ldots, \tau_{k-1}, \tau_k', \tau_{k+1}, \ldots \right\}$ and let $\Psi$ be a schedule of $\tau'$ that generates the worst-case response time of $\tau_k'$. The proof is built upon the two following steps:
\begin{enumerate}
\item We discard all the jobs that do not contribute to the worst-case response time of $\tau_k'$ in the schedule $\Psi$. We follow an inductive strategy by iteratively inspecting the schedule of the higher priority tasks in $\Psi$, starting with $\tau_{k-1}$ until the highest priority task $\tau_1$. At each iteration, a time instant $t_j$ is identified such that $t_j \leq t_{j+1}$ ($1 \leq j < k$).
Then, all the jobs of task $\tau_j$ released before $t_j$ are removed from the schedule and, if needed, replaced by an artificial job mimicking the interference caused by the residual workload of task $\tau_j$ at time $t_j$ on the worst-case response time of $\tau_{k}'$.
\item The final reduced schedule is analyzed so as to characterize the worst-case response time of $\tau_k'$ in $\Psi$. We then prove that $R_k'$ is indeed an upper bound on the worst-case response time of $\tau_k'$.
\end{enumerate}

\noindent{\bf Step 1: Reducing the schedule $\Psi$}

During this step, we iteratively build an artificial schedule $\Psi^j$ from $\Psi^{j+1}$ (with $1 \leq j < k$) so that the response time of $\tau_{k}'$ remains identical. At each iteration, we define $t_j$ for task $\tau_j$ in the schedule $\Psi^{j+1}$ (with $j=k-1, k-2, \ldots, 1$) and build $\Psi^j$ by removing all the jobs released by $\tau_j$ before $t_j$.

~

\noindent\textit{Basic step (definition of $\Psi^k$ and $t_k$):} Suppose that the job $J_{k}$ of task $\tau_k'$ with the largest response time in $\Psi$ arrives at time $r_k$ and finishes at time $f_k$. We know by Property~\ref{prop:lower-priority} that the lower priority tasks $\tau_{k+1}, \tau_{k+2}, \ldots, \tau_n$ do not impact the response time of $J_{k}$. Moreover, since we assume that the worst-case response time of task $\tau_k'$ is no more than $T_k$, Lemma~\ref{lemma:remove-same-task} proves that removing all the jobs of task $\tau_k'$ but $J_{k}$ has no impact on the schedule of $J_{k}$. Therefore, let $\Psi^k$ be a schedule identical to $\Psi$ but removing all the jobs released by the lower priority tasks $\tau_{k+1}, \ldots, \tau_n$ as well as all the jobs released by $\tau_k'$ with the exception of $J_{k}$. The response time of $J_{k}$ in $\Psi^{k}$ is thus identical to the response time of $J_{k}$ in $\Psi$. We define $t_k$ as the release time of $J_k$ (i.e., $t_k = r_k$).
~ \noindent\textit{Induction step (definition of $\Psi^j$ and $t_j$ with $1 \leq j < k$):} Let $r_j$ be the arrival time of the last job released by $\tau_j$ before $t_{j+1}$ in $\Psi^{j+1}$ and let $J_{j}$ denote that job. Removing all the jobs of task $\tau_j$ arrived before $r_j$ has no impact on the schedule of any other job released by $\tau_j$ (Lemma~\ref{lemma:remove-same-task}) or any higher priority job released by $\tau_1, \ldots, \tau_{j-1}$ (Property \ref{prop:lower-priority}). Moreover, because by the construction of $\Psi^{j+1}$, no task with a priority lower than $\tau_j$ executes jobs before $t_{j+1}$ in $\Psi^{j+1}$, removing the jobs released by $\tau_j$ before $t_{j+1}$ does not impact the schedule of the jobs of $\tau_{j+1}, \ldots, \tau_{k}$. Therefore, we can safely remove all the jobs of task $\tau_j$ arrived before $r_j$ without impacting the response time of $J_{k}$. Two cases must then be considered: \begin{enumerate}[(a)] \item $\tau_j \in {\bf T}_1$, i.e., $S_j < C_j$. In this case, we analyze two different subcases: \begin{itemize} \item $J_{j}$ completed its execution before or at $t_{j+1}$. By Lemma~\ref{lemma:remove-same-task} and Property \ref{prop:lower-priority}, removing all the jobs of task $\tau_j$ arrived before $t_{j+1}$ has no impact on the schedule of the higher-priority jobs (jobs released by $\tau_1, \ldots, \tau_{j-1}$) and the jobs of $\tau_j$ released after or at $t_{j+1}$. Moreover, because no task with lower priority than $\tau_j$ executes jobs before $t_{j+1}$ in $\Psi^{j+1}$, removing the jobs released by $\tau_j$ before $t_{j+1}$ does not impact the schedule of the jobs of $\tau_{j+1}, \ldots, \tau_{k}$. Therefore, $t_j$ is set to $t_{j+1}$ and $\Psi^j$ is generated by removing all the jobs of task $\tau_j$ arrived before $t_{j+1}$. The response time of $J_{k}$ in $\Psi^j$ thus remains unchanged in comparison to its response time in $\Psi^{j+1}$. \item $J_{j}$ did not complete its execution by $t_{j+1}$. 
For such a case, $t_{j}$ is set to $r_j$ and hence $\Psi^j$ is built from $\Psi^{j+1}$ by removing all the jobs released by $\tau_j$ before $r_j$.
\end{itemize}
Note that because, by the construction of $\Psi^{j+1}$ and hence $\Psi^j$, there is no job with priority lower than $\tau_j$ available to be executed before $t_{j+1}$, the maximum amount of time during which the processor remains idle within $[t_j, t_{j+1})$ is at most $S_j$ time units.
\item $\tau_j \in {\bf T}_2$, i.e., $S_j \geq C_j$. For such a case, we set $t_{j}$ to $t_{j+1}$. Let $c_j(t_j)$ be the remaining execution time for the job of task $\tau_j$ at time $t_j$. We know that $c_j(t_j)$ is at most $C_j$. Since, by the construction of $\Psi^j$, all the jobs of $\tau_j$ released before $t_j$ are removed, the job of task $\tau_j$ arrived at time $r_j$ ($< t_j$) is replaced by a new job released at time $t_j$ with execution time $c_j(t_j)$ and the same priority as $\tau_j$. Clearly, this has no impact on the execution of any job executed after $t_j$ and thus on the response time of $J_k$. The remaining execution time $c_j(t_j)$ of $\tau_j$ at time $t_j$ is called the \emph{residual workload} of task $\tau_j$ for the rest of the proof.
\end{enumerate}

The above construction is repeated, producing $\Psi^{k-1}, \Psi^{k-2}, \ldots$, until $\Psi^1$ is obtained. The procedures are well-defined, so it is guaranteed that $\Psi^1$ can be constructed. Note that after each iteration, the number of jobs considered in the schedule has been reduced, yet without affecting the response time of $J_k$.

~

\noindent{\bf Step 2: Analyzing the final reduced schedule $\Psi^1$}

We now analyze the properties of the final schedule $\Psi^1$ in which all the unnecessary jobs have been removed.
The proof is based on the fact that, for any interval $[t_1, t)$, we have
\begin{equation}
\label{eq:exec_plus_idle}
\operatorname{idle}(t_1, t) + \operatorname{exec}(t_1, t) = (t - t_1)
\end{equation}
where $\operatorname{exec}(t_1, t)$ is the amount of time during which the processor executed tasks within $[t_1, t)$, and $\operatorname{idle}(t_1, t)$ is the amount of time during which the processor remained idle within the interval $[t_1, t)$.

Because there is no job released by tasks with lower priority than $\tau_k$ in $\Psi^1$, the workload released by $\tau_1, \ldots, \tau_k$ within any interval $[t_1, t)$ is an upper bound on the workload $\operatorname{exec}(t_1, t)$ executed within $[t_1, t)$. From case (b) of Step 1, the total residual workload that must be considered in $\Psi^1$ is upper bounded by $\sum_{\tau_i \in {\bf T}_2} C_i$. Therefore, considering the fact that no job of $\tau_j$ is released before $t_j$ in $\Psi^1$ ($j=1,2,\ldots,k$), the workload released by the tasks (by treating the residual workload in ${\bf T}_2$ as released workload as well) within any time interval $[t_1, t)$ in schedule $\Psi^1$ such that $t_1 < t \leq f_k$ is upper bounded by
\begin{align*}
\sum_{i=1}^k \left( c_i(t_i) + \max\{0, \ceiling{\frac{t- t_i}{T_i} } C_i \} \right) & \leq \sum_{\tau_i \in {\bf T}_2} C_i + \sum_{i=1}^k \max\{0, \ceiling{\frac{t- t_i}{T_i} } C_i \},
\end{align*}
leading to
\begin{equation}
\label{eq:exec_time}
\forall t \mid t_1 \leq t < f_k,\quad \operatorname{exec}(t_1, t) \leq \sum_{\tau_i \in {\bf T}_2} C_i + \sum_{i=1}^k \max\{0, \ceiling{\frac{t- t_i}{T_i} } C_i\}.
\end{equation}
Furthermore, from case (a) of Step 1, we know that the maximum amount of time during which the processor is idle in $\Psi^1$ within any time interval $[t_1, t)$ such that $t_1 < t \leq t_k$, is upper bounded by $\sum_{\tau_i \in {\bf T}_1} S_i$.
That is, \begin{equation} \label{eq:idle_time} \forall t \mid t_1 \leq t < t_k,\quad \operatorname{idle}(t_1, t) \leq \sum_{\tau_i \in {\bf T}_1} S_i. \end{equation} Hence, injecting Eq. \eqref{eq:exec_time} and Eq. \eqref{eq:idle_time} into Eq. \eqref{eq:exec_plus_idle}, we get \[ \forall t \mid t_1 \leq t < t_k,\qquad \sum_{\tau_i \in {\bf T}_1} S_i + \sum_{\tau_i \in {\bf T}_2} C_i + \sum_{i=1}^{k} \max\{ 0, \ceiling{\frac{t- t_i}{T_i} } C_i\} \geq t-t_1. \] Since $C_k' > 0$ and $\max\{ 0, \ceiling{\frac{t- t_k}{T_k} } C_k'\} = 0$ for any $t$ smaller than $t_k$, it holds that \[ \forall t \mid t_1 \leq t < t_k,\qquad \sum_{\tau_i \in {\bf T}_1} S_i + \sum_{\tau_i \in {\bf T}_2} C_i + C_k' + \sum_{i=1}^{k-1} \max\left\{ 0, \ceiling{\frac{t- t_i}{T_i} } C_i\right\} > t-t_1, \] and using the definition of $b_i$ \begin{equation} \label{eq:eq1_in_proof} \forall t \mid t_1 \leq t < t_k,\qquad C_k'+\sum_{i=1}^{k-1} b_i + \sum_{i=1}^{k-1} \max\left\{ 0, \ceiling{\frac{t- t_i}{T_i} } C_i\right\} > t-t_1. \end{equation} Furthermore, because $J_k$ is released at time $t_k$ and does not complete its execution before $f_k$, it must hold that \begin{equation} \label{eq:eq2_in_proof} \forall t \mid t_k \leq t < f_k,\qquad C_k'+\sum_{i=1}^{k-1} b_i + \sum_{i=1}^{k-1} \max\left\{ 0, \ceiling{\frac{t- t_i}{T_i} } C_i\right\} > t-t_1. \end{equation} Combining Eq. \eqref{eq:eq1_in_proof} and Eq. \eqref{eq:eq2_in_proof}, we get \begin{equation} \label{eq:whole_interval} \forall t \mid t_1 \leq t < f_k,\qquad C_k'+\sum_{i=1}^{k-1} b_i + \sum_{i=1}^{k-1} \max\left\{ 0, \ceiling{\frac{t- t_i}{T_i} } C_i\right\} > t-t_1. \end{equation} Since $t_i \geq t_1$ for $i=1,2,\ldots,k$, there is $$\ceiling{\frac{t- t_i}{T_i} } \leq \ceiling{\frac{t- t_1}{T_i} },$$ thereby leading to \begin{equation} \label{eq:with_t1_only} \forall t \mid t_1 \leq t < f_k,\qquad C_k'+\sum_{i=1}^{k-1} b_i + \sum_{i=1}^{k-1} \max\left\{ 0, \ceiling{\frac{t- t_1}{T_i} } C_i\right\} > t-t_1. 
\end{equation}
By replacing $t-t_1$ with $\theta$, Eq. \eqref{eq:with_t1_only} becomes\footnote{We take $0 < \theta$ instead of $0 \leq \theta$ since $C_k'$ is assumed to be positive.}
\[
\forall \theta \mid 0 < \theta < f_k-t_1, \qquad C_k'+\sum_{i=1}^{k-1} b_i + \sum_{i=1}^{k-1} \ceiling{\frac{\theta}{T_i} } C_i > \theta.
\]
The above inequality implies that the minimum $t$ such that $C_k'+\sum_{i=1}^{k-1} b_i + \sum_{i=1}^{k-1} \ceiling{\frac{t}{T_i} } C_i \leq t$ is larger than or equal to $f_k-t_1$. And because by assumption the worst-case response time of $\tau_k'$ is equal to $f_k-t_k \leq f_k - t_1$, which is obviously smaller than or equal to $R_k'$, it holds that $R_k'$ is a safe upper bound on the worst-case response time of $\tau_k'$.
\end{proof}

For the simplicity of presentation, we assume that $\Psi$ is a schedule of $\tau'$ that generates the worst-case response time of $\tau_k'$ in the proof of Theorem~\ref{theorem:critical}. This can be relaxed to start from an arbitrary job $J_k$ in any fixed-priority schedule by using the same proof flow with similar arguments.

\begin{Corollary}
\label{cor:critical}
$R_k'$ is a safe upper bound on the worst-case response time of task $\tau_k'$ if $R_k'$ is not larger than $T_k$.
\end{Corollary}
\begin{proof}
Directly follows from Theorem~\ref{theorem:critical}.
\end{proof}

\begin{Corollary}
$R'_k$ is a safe upper bound on the worst-case response time of task $\tau_k$ if $R_k'$ is not larger than $T_k$.
\end{Corollary}
\begin{proof}
Since, as proven in \cite{Rajkumar_1990,Liu_2014}, the worst-case response time of $\tau_k'$ is always larger than or equal to the worst-case response time of $\tau_k$, this corollary directly follows from Corollary~\ref{cor:critical}.
\end{proof}

Note that the proof of Theorem~\ref{theorem:critical} does not require starting from the schedule with the worst-case response time for $\tau_k'$.
The analysis still works for any job in any arbitrary fixed-priority schedule.

To illustrate Step 1 in the above proof, we also provide one concrete example. Consider a task system with the following 4 tasks:
\begin{itemize}
\item $T_1 = 6, C_1 = 1, S_1 = 1$,
\item $T_2 = 10, C_2 = 1, S_2 = 6$,
\item $T_3 = 18, C_3 = 4, S_3 = 1$,
\item $T_4 = 20, C_4 = 5, S_4 = 0$.
\end{itemize}
Figure~\ref{fig:example} demonstrates a schedule for the jobs of the above 4 tasks. We assume that the first job of task $\tau_1$ arrives at time $4+\epsilon$ with a very small $\epsilon > 0$. The first job of task $\tau_2$ suspends itself from time $0$ to time $5+\epsilon$, and is blocked by task $\tau_1$ from time $5+\epsilon$ to time $6+\epsilon$. After some very short computation taking an $\epsilon$ amount of time, the first job of task $\tau_2$ suspends itself again from time $6+2\epsilon$ to $7$. In this schedule, $f_k$ is set to $20-\epsilon$. We define $t_4$ as $7$. Then, we set $t_3$ to $6$. When considering task $\tau_2$, since it belongs to ${\bf T}_2$, we greedily set $t_2$ to $t_3=6$ and the residual workload $c_2(t_2)$ is $1$. Then, $t_1$ is set to $4+\epsilon$. In the above schedule, the idle time from $4+\epsilon$ to $20-\epsilon$ is at most $2 = S_1+S_3$. We have to further consider one job of task $\tau_2$ arrived before time $t_1$ with execution time $C_2$.

\begin{figure}
\caption{An illustrative example of the proof.}
\label{fig:example}
\end{figure}

\end{document}
\begin{document}

\preprint{APS/123-QED}

\title{Quantum Brownian motion simulation of the control effect for two harmonic oscillators coupling in position and momentum with general environment}

\author{Hao Jia\thanks{Corresponding author: [email protected]}}
\author{Xiaotong Ding}
\affiliation{Department of Physics, Stevens Institute of Technology, Hoboken, NJ 07030, US}

\date{\today}

\begin{abstract}
In this paper, we study the dynamical properties of two coupled quantum harmonic oscillators interacting with a bosonic non-Markovian environment through both position and momentum. We derive the exact analytical master equation using the quantum state diffusion method and give the quantum trajectory description when control is applied to the system through the interaction between the two harmonic oscillators. With numerical simulations, we compare the evolution of entanglement under different control schemes. Finally, we use the nonlinear QSD method to corroborate the above results by recovering the same evolution.
\end{abstract}

\maketitle

\section{Introduction}
\label{sec:int}

The study of open quantum systems is increasingly important in the field of quantum dynamics, because it is impossible to isolate a system from its environment or to make a measurement without involving other systems. Although the environment and the quantum system of interest are generally independent initially, they become entangled due to their interaction; as a result, the quantum system is no longer in a pure state, which means its evolution is non-unitary. Most experimental physicists face open quantum systems in which a small system of interest is coupled to a large system with many degrees of freedom, which can be described as a heat bath. Traditionally, we describe the open system by a Lindblad master equation, which can be derived under the Born--Markov approximation; this approximation means that the flow of energy or information is unidirectional, in other words, that the bath is memoryless.
However, if the bath memory effects are relevant, for example in the case of a high-Q cavity, an atom laser or a complex structured environment, where the Born--Markov approximation does not work, we have to use a non-Markovian description of the quantum system.

Quantum Brownian motion (QBM) is a paradigm of open quantum systems, motivated by the possible observation of macroscopic effects in quantum systems and by problems in quantum measurement theory. Moreover, quantum Brownian motion, as an exactly solvable model, provides a glimpse of the relationship between different measures of quantum systems, including entanglement, coherence and purity, during the evolution of the quantum system and under external time-dependent control. To study the quantum-to-classical transition in quantum cosmology, Hu, Paz and Zhang obtained the exact master equation with nonlocal dissipation and colored noise in a general environment, opening a new stage in the treatment of this old problem \cite{HPZ}. Later, Chou, Yu and Hu derived an exact master equation for two coupled quantum harmonic oscillators interacting via bilinear coupling with a common environment at arbitrary temperature made up of many harmonic oscillators with a general spectral density function \cite{yu2ho}, which makes it possible to study decoherence and disentanglement in the Brownian motion model.

Traditionally, we use the reduced density matrix to describe an open quantum system when considering the effect of the environment on the system. Recently, there has been tremendous progress in the development of stochastic Schr\"odinger equations to describe open quantum systems. The reduced density matrix is obtained by averaging over the quantum trajectories of a stochastic Schr\"odinger equation, each trajectory representing one possible realization of the influence of the environment on the system.
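This trajectory-averaging idea can be illustrated with a toy numerical sketch (ours, not from the references; the states below are random placeholders rather than solutions of a stochastic Schr\"odinger equation): the estimator $\rho \approx N^{-1}\sum_{j}|\psi^{(j)}\rangle\langle\psi^{(j)}|$ over normalized pure-state trajectories is Hermitian, positive semidefinite, and of unit trace, as a density matrix must be.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_trajectory_state(dim):
    """One normalized pure state, standing in for a single trajectory."""
    psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return psi / np.linalg.norm(psi)

def reduced_density_matrix(trajectories):
    """rho = (1/N) * sum_j |psi_j><psi_j| -- the ensemble average."""
    return sum(np.outer(p, p.conj()) for p in trajectories) / len(trajectories)

trajs = [random_trajectory_state(4) for _ in range(500)]
rho = reduced_density_matrix(trajs)
assert np.isclose(np.trace(rho).real, 1.0)        # unit trace
assert np.allclose(rho, rho.conj().T)             # Hermitian
assert np.all(np.linalg.eigvalsh(rho) > -1e-12)   # positive semidefinite
```

In an actual QSD computation the trajectories are solutions of the stochastic equation for different noise realizations; only the averaging step is shown here.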
The quantum state diffusion method provides not only an efficient route to numerical calculations for open quantum systems, but also a way to describe the system itself, shedding light on the difficulties posed by environmental memory effects. In this paper, we mainly give the numerical simulation and the derivation of the stochastic Schrodinger equation and master equation for an open quantum system consisting of two interacting harmonic oscillators with a time-dependent mutual coupling, in contact with a thermal bath of infinitely many bosonic oscillators at zero temperature. The symmetric position-momentum coupling pattern is used, which can also be regarded as a rotating-wave approximation of the position-position coupling. We also show that, in the zero-temperature case, the symmetric coupling in position and momentum provides a simple but effective way to examine the quantum system under the influence of the environment and an external control field, especially for numerical simulations, because ten coupled differential equations reduce to a single one. Our paper is organized as follows. We first give a brief introduction to the quantum state diffusion method at zero temperature in Section \ref{sec:intr}. The evolution of two harmonic oscillators with the symmetric coupling pattern in position and momentum is considered in Section \ref{sec:rwa}, where the time-local, convolutionless master equation is derived by the non-Markovian quantum state diffusion (NMQSD) method. We then consider the application of quantum control in Section \ref{subsec:qc}, where we control the entanglement and coherence of the system by a time-dependent interaction. In Section \ref{subsec:num}, we simulate these controlled non-Markovian processes, including coherent states, Fock states and cat states under the influence of the environment and the control field.
\section{Theoretical Framework} \subsection{Introduction to QSD at zero temperature} \label{sec:intr} The total Hamiltonian of the system-plus-reservoir model of an open quantum system can always be written as \begin{align} \begin{aligned}H_{tot}=H_{sys}+H_{int}+H_{bath}\\ =H+\hbar\sum_{\lambda}\left(g_{\lambda}^{*}Lb_{\lambda}^{\dagger}+g_{\lambda}L^{\dagger}b_{\lambda}\right)+\sum_{\lambda}\hbar\omega_{\lambda}b_{\lambda}^{\dagger}b_{\lambda} \end{aligned} \end{align} where $L$ is the system operator providing the coupling between the system and the environment and the $g_{\lambda}$ are coupling constants. We set $\hbar=1$ from now on. Using the Schrodinger equation, the non-Markovian quantum state diffusion (NMQSD) equation is \cite{yu1ho} \begin{align} \partial_{t}\psi_{t}=-iH\psi_{t}+Lz_{t}^{*}\psi_{t}-L^{\dagger}\int_{0}^{t}\alpha(t-s)\,\frac{\delta\psi_{t}}{\delta z_{s}^{*}}ds \end{align} where $\alpha(t-s)$ is the bath correlation function, determined by the temperature and the initial bath state, $H$ is the system Hamiltonian, and $z_{t}^{*}$ is a Gaussian random process with correlation functions that mirror the vacuum correlations of the bath operators in the interaction picture. It is possible to replace the functional derivative by a time-dependent operator $O$, which depends on the times $t$ and $s$ and on the entire history of the stochastic process $z_{t}^{*}$.
By making an ansatz, one finds that the evolution equation for the operator $O=O\left(t,s,z^{*}\right)$ is \begin{align} \begin{aligned}\partial_{t}O=\left[-iH+Lz_{t}^{*}-L^{\dagger}\bar{O}\left(t,z^{*}\right),O\right]\\ -L^{\dagger}\frac{\delta\bar{O}\left(t,z^{*}\right)}{\delta z_{s}^{*}} \end{aligned} \end{align} where the time-integrated operator $\bar{O}\left(t,z^{*}\right)$ is the integral of $O\left(t,s,z^{*}\right)$ from $0$ to $t$ with the weight function $\alpha(t-s)$: \begin{align} \bar{O}\left(t,z^{*}\right)=\int_{0}^{t}\alpha(t-s)O\left(t,s,z^{*}\right)ds \end{align} Here $\alpha(t-s)$ is the correlation function describing the influence of the environment on the system, which depends on the spectral density and the temperature: \begin{align} \begin{aligned}\alpha(t-s)=\sum_{n=1}^{N}\frac{G_{n}^{2}}{2m_{n}\omega_{n}}[\coth(\frac{\omega_{n}}{2k_{B}T})\cos\omega_{n}(t-s)\\ -i\sin\omega_{n}(t-s)] \end{aligned} \end{align} Our numerical results focus on the zero-temperature case. We can introduce the spectral density of the bath oscillators: \begin{align} J(\omega) & =\sum_{n=1}^{N}\frac{G_{n}^{2}}{2m_{n}\omega_{n}}\delta(\omega-\omega_{n})=M\gamma\omega^{3}f_{c}(\frac{\omega}{\Lambda}). \end{align} Later we choose a super-Ohmic environment in our numerical computations. Here $f_{c}$ is a cutoff function and $\Lambda$ is the cutoff frequency, the most important parameter controlling the environmental spectrum. In our simulations, the cutoff function is chosen to be $f_{c}(x)=e^{-x}$. Solving the stochastic Schrodinger equation amounts, more or less, to finding the operator $O$, exactly or approximately. The general QSD equation can be simplified to a form which may be simulated numerically, as long as the ansatz satisfies the consistency condition. In practice, even when the operator $O\left(t,s,z^{*}\right)$ cannot be determined exactly, perturbation techniques may be used to build a series expansion of the $O$ operator, allowing for an approximate form of the general QSD equation.
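As a quick numerical sanity check (ours, with purely illustrative parameter values, not taken from the simulations in this paper), one can verify that at zero temperature, where $\coth\to1$ and the correlation function becomes $\alpha(\tau)=\int_{0}^{\infty}J(\omega)e^{-i\omega\tau}d\omega$, the super-Ohmic density with exponential cutoff admits a closed form, since $\int_{0}^{\infty}\omega^{3}e^{-a\omega}d\omega=6/a^{4}$ for $\operatorname{Re}a>0$:

```python
import numpy as np

# Zero-temperature correlation alpha(tau) = int_0^inf J(w) e^{-i w tau} dw for the
# super-Ohmic density J(w) = M*gam*w^3*exp(-w/Lam). Parameters are illustrative only.
M, gam, Lam = 1.0, 0.1, 5.0
tau = 0.7

w = np.linspace(0.0, 200.0, 400001)        # upper cutoff >> Lam, so the tail is negligible
dw = w[1] - w[0]
integrand = M * gam * w**3 * np.exp(-w/Lam) * np.exp(-1j*w*tau)
alpha_num = np.sum((integrand[:-1] + integrand[1:]) * 0.5) * dw   # trapezoid rule

# Closed form with a = 1/Lam + i*tau:  alpha(tau) = 6*M*gam / a^4
a = 1.0/Lam + 1j*tau
alpha_exact = 6.0 * M * gam / a**4

print(abs(alpha_num - alpha_exact))        # quadrature agrees with the closed form
```

The closed form makes explicit how the cutoff frequency $\Lambda$ sets the memory time of the bath: the correlation decays on the scale $\tau\sim1/\Lambda$.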
Fortunately, we can find the exact $O$ for two coupled harmonic oscillators. The convolutionless form of the NMQSD equation appears when we replace the functional derivative by the known operator: \begin{align} \partial_{t}\psi_{t}=\left(-iH+Lz_{t}^{*}-L^{\dagger}\bar{O}\left(t,z^{*}\right)\right)\psi_{t}\left(z^{*}\right) \end{align} Once we have the NMQSD equation, the reduced density operator is given by the ensemble mean over the trajectories of the stochastic Schrodinger equation: \begin{align} \nonumber \dot{\rho}_{t}=-i\left[H,\rho_{t}\right]+\left[L,\mathcal{M}\left\{ P_{t}\bar{O}^{\dagger}\left(t,z^{*}\right)\right\} \right]+\\ \left[\mathcal{M}\left\{ O\left(t,z^{*}\right)P_{t}\right\} ,L\right]\label{eq:fullms} \end{align} If the exact operator $O$ is independent of the noise $z_{t}^{*}$, we find that \begin{align} \dot{\rho}_{t}=-i\left[H,\rho_{t}\right]+\left[L,\rho_{t}\bar{O}^{\dagger}(t)\right]+\left[\bar{O}(t)\rho_{t},L\right]\label{eq:rdeq} \end{align} In this paper, we will show that the exact master equation for two harmonic oscillators at zero temperature takes the form of Eq.~\ref{eq:fullms}, while the exact master equation in the rotating-wave approximation at zero temperature takes the form of Eq.~\ref{eq:rdeq}. \subsection{Quantum Brownian motion with coupling symmetric in position and momentum by quantum state diffusion} Quantum Brownian motion of a damped harmonic oscillator bilinearly coupled to a bath of harmonic oscillators has been studied for decades, and the exact non-Markovian master equation was obtained by path integral techniques \cite{yu2ho}. This part provides another route to the exact master equation, via quantum state diffusion. In the more general form, the masses and frequencies of the oscillators are allowed to differ, so that the master equation can also be applied to non-resonant problems.
Moreover, the coupling between the two harmonic oscillators is a function of time, so that quantum control applications can be studied; if we are not interested in the control, we can simply set $k(t)=k$. The Hamiltonian of the total system, consisting of two mutually coupled harmonic oscillators with different masses and frequencies interacting with a bath of harmonic oscillators in an equilibrium state at zero temperature, is \begin{align} H_{tot} & =H_{sys}+H_{int}+H_{bath} \end{align} where \begin{align} \nonumber H_{sys}=\frac{p_{1}^{2}}{2M_{1}}+\frac{1}{2}M_{1}\Omega_{1}^{2}q_{1}^{2}+\frac{p_{2}^{2}}{2M_{2}}\\+\frac{1}{2}M_{2}\Omega_{2}^{2}q_{2}^{2}+k(t)\left(q_{1}-q_{2}\right){}^{2} \end{align} is the system Hamiltonian of the two oscillators of interest, with displacements $q_{1,2}$, momenta $p_{1,2}$ and time-dependent coupling function $k(t)$, and \begin{align} H_{bath}=\sum_{n=1}^{N_{B}}\left(\frac{\pi_{n}^{2}}{2m_{n}}+\frac{1}{2}m_{n}\omega_{n}{}^{2}b_{n}^{2}\right) \end{align} is the bath Hamiltonian, with displacements $b_{n}$ and conjugate momenta $\pi_{n}$. \label{sec:rwa} \\One solvable model is that in which the system and the environment are coupled through different observables: \begin{align} H_{int}=\left(q_{1}+q_{2}\right)\sum_{n=1}^{N_{B}}C_{n}b_{n}+\left(\frac{p_{1}+p_{2}}{M\Omega}\right)\sum_{n=1}^{N_{B}}\frac{C_{n}}{m_{n}\omega_{n}}\pi_{n} \end{align} For simplicity, we assume that the masses and frequencies of the two system oscillators are the same. It is easy to rewrite this interaction Hamiltonian in terms of creation and annihilation operators \cite{paz2ho}: \begin{align} H_{int}=\sum_{n=1}^{N_{B}}\frac{C_{n}2\sqrt{2}}{\sqrt{Mm_{n}\Omega\omega_{n}}}\left(ab_{n}^{\dagger}+a^{\dagger}b_{n}\right)\label{eq:rwa} \end{align} The same type of coupling can also be derived from the standard position-position coupling via the rotating-wave approximation (RWA).
Expanding $(q_{1}-q_{2})^{2}$, we can rewrite it in terms of creation and annihilation operators as $(a_{1}-a_{2}+a_{1}^{\dagger}-a_{2}^{\dagger})^{2}$. In the RWA, high-frequency oscillating terms like $a^{2}$ and $a^{\dagger2}$ are neglected. Rewriting the system with the RWA interaction gives \begin{align} \begin{aligned}H_{tot}=\Omega\left(a_{1}^{\dagger}a_{1}+a_{2}^{\dagger}a_{2}\right)+k(t)\left(a_{1}-a_{2}+a_{1}^{\dagger}-a_{2}^{\dagger}\right){}^{2}\\+\sum_{n=1}^{N_{B}}\omega_{n}b_{n}^{\dagger}b_{n}+\sum_{n=1}^{N_{B}}(g_{n}a_{1}b_{n}^{\dagger}+g_{n}a_{2}b_{n}^{\dagger})\\ +\sum_{n=1}^{N_{B}}(g_{n}a_{1}^{\dagger}b_{n}+g_{n}a_{2}^{\dagger}b_{n}) \end{aligned}\end{align} In this model, the Lindblad operator is $a_{1}+a_{2}$. We find the $O$ operator without an explicit noise dependence: \begin{align} O\left(t,s,z^{*}\right)=f(t,s)\left(a_{1}+a_{2}\right) \end{align} Define $F(t)$ as the integral of $f$ with the correlation function as weight: \begin{align} F(t)=\int_{0}^{t}f(t,s)\alpha(t-s)ds \end{align} The complex function $f$ obeys the evolution equation and boundary condition \begin{align} \partial_{t}f=i\Omega f+2F(t)f \end{align} \begin{align} f(t,s=t)=1 \end{align} An analytical solution is available when the bath has a Lorentzian spectrum, for which $\alpha(t,s)=\frac{\Gamma\gamma}{2}e^{-\gamma|t-s|}$. Integrating both sides of the evolution equation for $f$ from $0$ to $t$ against $\alpha(t,s)$, we obtain \begin{align} \partial_{t}F=2F^{2}-(\gamma-i\Omega)F+\frac{\Gamma\gamma}{2} \end{align} which is a Riccati equation.
Solving the equation with the transformation $$F(t)=-y'(t)/(2y(t))$$ we obtain \begin{align} y^{\prime\prime}+\left(\gamma-i\Omega\right)y'+\Gamma\gamma y=0 \end{align} Finally, the solution is \begin{align} F(t)=\frac{-\lambda_{1}e^{\lambda_{1}t}+\lambda_{1}e^{\lambda_{2}t}}{2\left(e^{\lambda_{1}t}-\frac{\lambda_{1}}{\lambda_{2}}e^{\lambda_{2}t}\right)} \end{align} where $\lambda_{1,2}$ are the roots of the characteristic equation and the discriminant $\Delta=\left(\gamma-i\Omega\right)^{2}-4\Gamma\gamma$ determines whether $F(t)$ is complex or real. In this case, as long as $\Omega\neq0$ or $\gamma^{2}-4\Gamma\gamma<0$, the system will be non-Markovian. After obtaining the evolution of $O$, the master equation becomes \begin{widetext} \begin{align} \partial_{t}\rho=i[\Omega(a_{1}^{\dagger}a_{1}+a_{2}^{\dagger}a_{2})+k(t)(a_{1}^{\dagger}-a_{2}^{\dagger}+a_{1}-a_{2})^{2},\rho]+[(a_{1}+a_{2}),\rho F^{*}(t)(a_{1}^{\dagger}+a_{2}^{\dagger})]+[F(t)(a_{1}+a_{2})\rho,(a_{1}^{\dagger}+a_{2}^{\dagger})]. \end{align} \end{widetext} \begin{figure} \caption{Dynamics for $\Delta>0$ ($\Gamma=1,\gamma=5,\Omega=0$) and for $\Delta<0$ ($\Gamma=1,\gamma=3,\Omega=0$).} \end{figure} $\Omega$, $\hbar$ and $M$ have been chosen to be $1$ for simplicity. It is remarkable that this case holds only at zero temperature, where the Lindblad operator is not Hermitian: $a_{1}+a_{2}\neq(a_{1}+a_{2})^{\dagger}$. It is a good example to solve numerically, because eight coupled integro-differential equations have been reduced to a single equation for $F(t)$.
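The Riccati equation above can also be integrated directly. As a minimal sketch (ours, with illustrative Lorentzian-bath parameters), the following integrates $\partial_{t}F=2F^{2}-(\gamma-i\Omega)F+\Gamma\gamma/2$ with $F(0)=0$ by Runge-Kutta and compares against the closed-form solution built from the roots $\lambda_{1,2}$ of the characteristic polynomial of the linearized equation:

```python
import numpy as np

# Riccati equation dF/dt = 2F^2 - (gam - i*Om)F + Gam*gam/2, F(0) = 0.
Gam, gam, Om = 1.0, 3.0, 1.0      # illustrative parameters only

def rhs(F):
    return 2*F**2 - (gam - 1j*Om)*F + Gam*gam/2

dt, T = 1e-4, 2.0
F = 0.0 + 0.0j
for _ in range(int(T/dt)):        # standard RK4 for the complex scalar ODE
    k1 = rhs(F)
    k2 = rhs(F + 0.5*dt*k1)
    k3 = rhs(F + 0.5*dt*k2)
    k4 = rhs(F + dt*k3)
    F += (dt/6)*(k1 + 2*k2 + 2*k3 + k4)

# Roots of the characteristic polynomial of y'' + (gam - i*Om) y' + Gam*gam y = 0,
# whose discriminant is Delta = (gam - i*Om)^2 - 4*Gam*gam.
disc = np.sqrt((gam - 1j*Om)**2 - 4*Gam*gam + 0j)
l1 = (-(gam - 1j*Om) + disc) / 2
l2 = (-(gam - 1j*Om) - disc) / 2
F_exact = (-l1*np.exp(l1*T) + l1*np.exp(l2*T)) / (2*(np.exp(l1*T) - (l1/l2)*np.exp(l2*T)))

print(abs(F - F_exact))           # numerical integration matches the analytic formula
```

The initial condition $F(0)=0$ follows from the definition $F(t)=\int_{0}^{t}f(t,s)\alpha(t-s)ds$, and is what fixes the combination of $\lambda_{1}$ and $\lambda_{2}$ in the closed form.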
\\The resulting nonlinear, convolutionless non-Markovian stochastic Schrodinger equation for the Brownian motion of a harmonic oscillator coupled to the environment through position thus reads \begin{align} \begin{aligned}\frac{d|\widetilde{\psi}\rangle}{dt}=[-iH_{s}+\triangle_{t}(L)\widetilde{z}_{t}^{*}]|\widetilde{\psi}(t)\rangle-\\ \triangle_{t}(L^{\dagger})\bar{O}(t,\widetilde{z^{*}})|\widetilde{\psi}(t)\rangle+\langle\triangle_{t}(L^{\dagger})\bar{O}(t,\widetilde{z^{*}})\rangle_{t}|\widetilde{\psi}(t)\rangle \end{aligned} \end{align} Here, we use the notation \begin{align} \triangle_{t}(L) & \equiv L-\left\langle L\right\rangle _{t} \end{align} and the shifted noise \begin{align} \widetilde{z}_{t}^{*} & =z_{t}^{*}+\intop_{0}^{t}\alpha^{*}(t,s)\left\langle L^{\dagger}\right\rangle _{s}ds. \end{align} \section{Application in Quantum Control} \label{sec:qc} In this section, we present a theoretical analysis and numerical results for the open quantum system. \subsection{Quantum control of entanglement for Gaussian states} \label{subsec:qc}The decoherence of two harmonic oscillators has been studied systematically in \cite{yu2ho,paz2ho}, where the coupling constant between the two oscillators is constant. If we instead make the coupling a function of time, we can control the decoherence: with a suitable control function, we can steer the dynamics from one dynamical phase to a different one. For simplicity, we assume that the masses and frequencies of the two oscillators are the same (resonant case).
It is convenient to use the coordinates $x_{\pm}=\left(x_{1}\pm x_{2}\right)/\sqrt{2}$, since $x_{+}$ couples to the environment while $x_{-}$ is controlled: \begin{align} \begin{aligned}H_{tot}=\frac{p_{+}^{2}}{2m}+\frac{m}{2}\Omega^{2}x_{+}^{2}+\sqrt{2}x_{+}\sum_{n=1}^{N_{B}}c_{n}q_{n}\\ +\sum_{n=1}^{N_{B}}\left(\frac{p_{n}{}^{2}}{2m_{n}}+\frac{1}{2}m_{n}\omega_{n}{}^{2}q_{n}^{2}\right)\\ +\frac{p_{-}^{2}}{2m}+\frac{m}{2}\Omega^{2}x_{-}^{2}+2k(t)x_{-}^{2} \end{aligned} \end{align} The system thus reduces to one harmonic oscillator coupled to the environment and another harmonic oscillator controlled through the external time-dependent frequency $\Omega'(t)=(\Omega^{2}+4k(t)/m)^{\frac{1}{2}}$. The master equation for one harmonic oscillator \cite{HPZ} is \begin{align} \begin{aligned}\partial_{t}\rho=-i\left[H_{R},\rho\right]-i\gamma(t)\left[x_{+},\left\{ p_{+},\rho\right\} \right]\\ -D(t)\left[x_{+},\left[x_{+},\rho\right]\right]-f(t)\left[x_{+},\left[p_{+},\rho\right]\right] \end{aligned} \end{align} where the renormalized Hamiltonian is \begin{align} H_{R}=\frac{p_{+}^{2}}{2m}+\frac{m}{2}\Omega^{2}x_{+}^{2}+\frac{p_{-}^{2}}{2m}+\frac{m}{2}\Omega^{2}x_{-}^{2}+\frac{m}{2}\delta\omega^{2}(t)x_{+}^{2} \end{align} and the coefficients depend on the spectral density of the environment.
The second moments of $x_{+}$ and $p_{+}$, according to \cite{paz2ho}, satisfy \begin{align} \partial_{t}\left(\frac{\left\langle p_{+}^{2}\right\rangle }{2m}\right)+\frac{m}{2}\Omega^{2}(t)\partial_{t}\left\langle x_{+}^{2}\right\rangle =-2\frac{\gamma(t)}{m}\left\langle p_{+}^{2}\right\rangle +\frac{D(t)}{m} \end{align} \begin{align} \frac{1}{2}\partial_{t}^{2}\left\langle x_{+}^{2}\right\rangle +\gamma(t)\partial_{t}\left\langle x_{+}^{2}\right\rangle +\Omega^{2}(t)\left\langle x_{+}^{2}\right\rangle =\frac{\left\langle p_{+}^{2}\right\rangle }{m}-\frac{f(t)}{m} \end{align} The system thus separates into two independent harmonic oscillators: one coupled to the environment and the other subject to the control. Similarly, the second moments of $x_{-}$ and $p_{-}$ satisfy \begin{align} \partial_{t}\left(\frac{\left\langle p_{-}^{2}\right\rangle }{2m}\right)+\frac{m}{2}\left(\Omega^{2}+\frac{4k(t)}{m}\right)\partial_{t}\left\langle x_{-}^{2}\right\rangle =0 \end{align} \begin{align} \frac{1}{2}\partial_{t}^{2}\left\langle x_{-}^{2}\right\rangle +\left(\Omega^{2}+\frac{4k(t)}{m}\right)\left\langle x_{-}^{2}\right\rangle =\frac{\left\langle p_{-}^{2}\right\rangle }{m} \end{align} We assume that the initial state of the system is Gaussian. Because the complete evolution is linear and all the operators are quadratic, the Gaussian nature of the state is preserved for all times. The entanglement of Gaussian states is entirely determined by the properties of the covariance matrix. The covariance matrix (CM) $\sigma$ is a real, symmetric and positive $4\times4$ matrix.
\begin{equation} \sigma(t)=\left[\begin{array}{cccc} \sigma_{x_{1}x_{1}}(t) & \sigma_{x_{1}p_{1}}(t) & \sigma_{x_{1}x_{2}}(t) & \sigma_{x_{1}p_{2}}(t)\\ \sigma_{x_{1}p_{1}}(t) & \sigma_{p_{1}p_{1}}(t) & \sigma_{x_{2}p_{1}}(t) & \sigma_{p_{1}p_{2}}(t)\\ \sigma_{x_{1}x_{2}}(t) & \sigma_{x_{2}p_{1}}(t) & \sigma_{x_{2}x_{2}}(t) & \sigma_{x_{2}p_{2}}(t)\\ \sigma_{x_{1}p_{2}}(t) & \sigma_{p_{1}p_{2}}(t) & \sigma_{x_{2}p_{2}}(t) & \sigma_{p_{2}p_{2}}(t) \end{array}\right] \end{equation} where the matrix elements are defined as \begin{align} \sigma_{ij}=\frac{1}{2}\text{Tr}[(\xi_{i}\xi_{j}+\xi_{j}\xi_{i})\rho]-\text{Tr}[\xi_{i}\rho]\text{Tr}[\xi_{j}\rho] \end{align} with $\xi=(x_{1},p_{1},x_{2},p_{2})$. The CM also has the block structure \begin{equation} \sigma(t)=\left[\begin{array}{cc} A & C\\ C^{T} & B \end{array}\right] \end{equation} where $A$, $B$ and $C$ are real $2\times2$ matrices and $T$ denotes transposition. $A$ and $B$ are the symmetric covariance matrices of the individual reduced one-mode states, while the matrix $C$ contains the cross-correlations between the modes. The advantage of Gaussian states is that a good measure of entanglement for such states is the logarithmic negativity $E_{N}$, which can be computed as \begin{equation} E_{N}=\max\{0,-\ln(2\nu_{min})\} \end{equation} where $\nu_{min}$ is the smallest symplectic eigenvalue of the partially transposed covariance matrix. Some expressions for $E_{N}$ based on previous work \cite{paz2ho} are known. For the two-mode squeezed state, obtained from the vacuum by acting with the squeezing operator $\exp(-r(a_{1}^{\dagger}a_{2}^{\dagger}-a_{1}a_{2}))$, we have $E_{N}=2\left|r\right|$. For this squeezed state, the dispersions satisfy the minimum uncertainty condition $\delta x_{+}\delta p_{+}=\delta x_{-}\delta p_{-}=1/2$. The squeezing factor determines the ratio between the variances, since $m\Omega\delta x_{+}/\delta p_{+}=\delta p_{-}/(m\Omega\delta x_{-})=e^{2r}$.
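The recipe above can be checked on the two-mode squeezed vacuum itself. The following sketch (ours, with an illustrative squeezing parameter) builds its covariance matrix in the ordering $\xi=(x_{1},p_{1},x_{2},p_{2})$ with vacuum variances $1/2$, partially transposes it ($p_{2}\to-p_{2}$), extracts the symplectic eigenvalues as the moduli of the eigenvalues of $iJ\sigma$, and should recover $E_{N}=2|r|$:

```python
import numpy as np

r = 0.8                                   # illustrative squeezing parameter
c, s = np.cosh(2*r), np.sinh(2*r)

# Covariance matrix of a two-mode squeezed vacuum, ordering (x1, p1, x2, p2)
A = 0.5*c*np.eye(2)
C = 0.5*s*np.diag([1.0, -1.0])
sigma = np.block([[A, C], [C.T, A]])

# Symplectic form for (x1, p1, x2, p2) and partial transposition p2 -> -p2
J = np.kron(np.eye(2), np.array([[0.0, 1.0], [-1.0, 0.0]]))
P = np.diag([1.0, 1.0, 1.0, -1.0])
sigma_pt = P @ sigma @ P

# Symplectic eigenvalues are the moduli of the eigenvalues of i*J*sigma_pt
nu_min = np.min(np.abs(np.linalg.eigvals(1j * J @ sigma_pt)))
E_N = max(0.0, -np.log(2*nu_min))

print(E_N)                                # close to 2*r for the two-mode squeezed vacuum
```

The same routine applies unchanged to the time-evolved covariance matrix $\sigma(t)$, which is how the entanglement curves in the following section can be produced.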
If $r\rightarrow\infty$, the state becomes localized in the $p_{+}$ and $x_{-}$ variables, approaching an ideal Einstein-Podolsky-Rosen state. We will use numerical simulation to see how a time-dependent external control field influences the evolution of entanglement. \subsection{Numerical results and discussion} \label{subsec:num}Within the framework of the evolution equation, the entanglement, the system energy and the quantum coherence, measured by the $l_{1}$ norm, can be investigated numerically. Initially, we consider the simplest case, the symmetric coupling in position and momentum at zero temperature, to show how an input control can change the entanglement, energy and coherence. We fix the environment parameters in the strongly non-Markovian regime, and keep the coupling between the two harmonic oscillators weak relative to the coupling between the system and the environment. We simulate the system by evolving the master equation. The initial states for all simulations are squeezed states created by the squeezing operator $\exp(-r(a_{1}^{\dagger}a_{2}^{\dagger}-a_{1}a_{2}))$, for which $E_{N}=2\left|r\right|$. Without control, the entanglement, coherence and energy decrease from their initial values with fluctuations, the non-Markovian environment causing the fluctuations through information and energy backflow. In the long-time limit, the entanglement settles at a stable value. \begin{figure} \caption{Entanglement of the non-Markovian process without control} \end{figure} With a control whose frequency is much greater than the response frequency, the entanglement decreases with high-frequency oscillations but reaches a stable value more quickly, which means that a high-frequency input cannot drive the entanglement to values that were unreachable before the control. There are several resonance frequencies, as shown in the figure.
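For the controlled $x_{-}$ mode, which does not see the bath, the second-moment equations of the previous subsection form a closed system, and for a constant control $k(t)=k$ the first of them expresses energy conservation. The sketch below (ours, with illustrative parameters) integrates these moments and verifies that conservation, a useful consistency check for any simulation code:

```python
import numpy as np

# Closed second-moment dynamics of the uncoupled "-" mode (no bath),
# with constant control k(t) = k, so W2 = Omega'^2 = Omega^2 + 4k/m.
m, Omega, k = 1.0, 1.0, 0.2
W2 = Omega**2 + 4*k/m

def rhs(s):
    xx, xp, pp = s            # (<x^2>, <{x,p}>, <p^2>) for a quadratic Hamiltonian
    return np.array([xp/m, 2*pp/m - 2*m*W2*xx, -m*W2*xp])

r = 0.5                       # squeezed initial moments with minimum uncertainty
s = np.array([np.exp(-2*r)/(2*m*Omega), 0.0, m*Omega*np.exp(2*r)/2])
E0 = s[2]/(2*m) + 0.5*m*W2*s[0]

dt = 1e-4
for _ in range(int(5.0/dt)):  # RK4 time stepping
    k1 = rhs(s); k2 = rhs(s + 0.5*dt*k1)
    k3 = rhs(s + 0.5*dt*k2); k4 = rhs(s + dt*k3)
    s = s + (dt/6)*(k1 + 2*k2 + 2*k3 + k4)

E1 = s[2]/(2*m) + 0.5*m*W2*s[0]
print(abs(E1 - E0))           # the "-" mode energy is conserved for constant k
```

With a time-dependent $k(t)$, the same integrator applies and the energy of the $x_{-}$ mode is no longer conserved, which is precisely the handle the control exploits.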
Keeping the strength of the input signal fixed while changing its frequency, the oscillations of the entanglement, energy and coherence grow as the frequency approaches a resonance frequency and decay as it moves away. When the input frequency coincides exactly with the resonance frequency, the energy accumulates to a high level compared with the non-resonant cases and fluctuates, as happens in the classical regime; an analysis of the classical system and a comparison between the classical and quantum systems will be given later. The coherence increases with a trend similar to that of the energy, which can be regarded as information being injected along with the energy; at the resonance frequency, the outgoing flow of information and energy to the environment becomes weak compared with the incoming flow, resulting in the accumulation of both. There is an interesting phenomenon for the entanglement in the resonant case: after reaching its highest point, the entanglement drops quickly with fluctuations and reaches zero, i.e., the two harmonic oscillators disentangle just after the entanglement maximum. Without resonant control, the entanglement between the two harmonic oscillators cannot reach zero because the environment is non-Markovian, which differs from the classical case. In experiments, an input signal with a suitable frequency can protect the coherence and entanglement of the system from a real environment. \begin{figure} \caption{How entanglement evolves in different environments and under different controls} \end{figure} In order to make our results more credible, we calculate the two above-mentioned models using two different methods, one based on the quantum state diffusion equation and the other on the master equation. If our results are valid, these two methods should give the same picture of the evolution of entanglement over time.
Finally, our comparison clearly shows that when the number of trajectories generated by the quantum state diffusion equation is large enough, the result agrees closely with that of the master equation, which supports the reliability of the numerical discussion above. \begin{figure} \caption{Comparison between the quantum state diffusion method and the master equation method} \end{figure} \section{Conclusion} In this paper, we studied the control effect for two interacting oscillators coupled to a bosonic thermal bath within the quantum Brownian motion framework. The non-Markovian master equation for the reduced density matrix was obtained by the quantum state diffusion method, and a symmetric position-momentum coupling pattern was considered. Several attractive questions remain open in this work. We noticed that after interacting with the environment, the two quantum oscillators evolve from a pure state to a final mixed state, and correspondingly physical quantities such as the system energy, quantum coherence and purity remain at a non-zero level after fluctuating at the beginning. The environment has the ability to build up coherence and entanglement, but it cannot totally destroy the correlations or squeeze all the information contained in the system back into the surroundings. Classically, the second law of thermodynamics may help us understand the related phenomena qualitatively, but we aim to find a quantitative interpretation by means of this model. We speculate that there could be some symmetry-protected process which protects the system from complete destruction in the long-time limit. The relevant investigation will be presented in future work. \end{document}
\begin{document} \title[]{Algebraicity of the Bergman Kernel} \begin{abstract} Our main result introduces a new way to characterize two-dimensional finite ball quotients by algebraicity of their Bergman kernels. This characterization is particular to dimension two and fails in higher dimensions, as is illustrated by a counterexample in dimension three constructed in this paper. As a corollary of our main theorem, we prove, e.g., that a smoothly bounded strictly pseudoconvex domain $G$ in $\mathbb{C}^2$ has rational Bergman kernel if and only if there is a rational biholomorphism from $G$ to $\mathbb{B}^2$. \end{abstract} \subjclass[2010]{32A36, 32C20, 32S99} \author [Ebenfelt]{Peter Ebenfelt} \address{Department of Mathematics, University of California at San Diego, La Jolla, CA 92093, USA} \email{{[email protected]}} \author[Xiao]{Ming Xiao} \address{Department of Mathematics, University of California at San Diego, La Jolla, CA 92093, USA} \email{{[email protected]}} \author [Xu]{Hang Xu} \address{Department of Mathematics, University of California at San Diego, La Jolla, CA 92093, USA} \email{{[email protected]}} \thanks{The first and second authors were supported in part by the NSF grants DMS-1900955 and DMS-1800549, respectively.} \maketitle \section{Introduction} The Bergman kernel, introduced by S. Bergman in \cite{Bergman1933, Bergman1935} for domains in $\mathbb{C}^n$ and later cast in differential geometric terms by S. Kobayashi \cite{Ko}, plays a fundamental role in several complex variables and complex geometry. Its biholomorphic invariance properties and intimate connection with the CR geometry of the boundary make it an important tool in the study of open complex manifolds. The use of the Bergman kernel, e.g., in the study of biholomorphic mappings and the geometry of bounded strictly pseudoconvex domains in $\mathbb{C}^n$ was pioneered by C. 
Fefferman \cite{Fe,Fe2, Fe3}, who developed a theory of Bergman kernels in such domains and initiated a now famous program to describe the boundary singularity in terms of the local invariant CR geometry; see also \cite{BaileyEastwoodGraham1994}, \cite{Hirachi2000} for further progress on Fefferman's program. A broad and general problem of foundational importance is that of classifying complex manifolds, or more generally analytic spaces, in terms of their Bergman kernels or Bergman metrics. For example, a well-known result of Q. Lu \cite{Lu} implies that if a relatively compact domain in an $n$-dimensional K\"ahler manifold has a complete Bergman metric with constant holomorphic sectional curvature, then the domain is biholomorphic to the unit ball $\mathbb{B}^n$ in $\mathbb{C}^n$. Another example is the conjecture of S.-Y. Cheng \cite{CH}, which states that the Bergman metric of a smoothly bounded strongly pseudoconvex domain in $\mathbb{C}^n$ is K\"ahler--Einstein (i.e., has Ricci curvature equal to a constant multiple of the metric tensor) if and only if it is biholomorphic to the unit ball $\mathbb{B}^n$. This conjecture was confirmed by Fu--Wong \cite{FuWo} and Nemirovski--Shafikov \cite{NeSh} in the two-dimensional case, and in the higher-dimensional case by X. Huang and the second author \cite{HX}. In this paper, we introduce a new characterization of the two-dimensional unit ball $\mathbb{B}^2\subset \mathbb{C}^2$ and, more generally, two-dimensional finite ball quotients $\mathbb{B}^2/\Gamma$ in terms of algebraicity of the Bergman kernel. It is interesting, and perhaps surprising then, to note that such a characterization fails in the higher dimensional case.
Indeed, in Section \ref{Sec counterexample} below we construct a relatively compact domain $G$ with smooth strongly pseudoconvex boundary in a three-dimensional algebraic variety $V\subset \mathbb{C}^4$, with an isolated normal singularity in the interior of $G$, such that the boundary $\partial G$ is not spherical and, furthermore, $G$ is not biholomorphic to any finite ball quotient; recall that a CR hypersurface $M$ of dimension $2n-1$ is said to be {\em spherical} if near each point $p\in M$, it is locally CR diffeomorphic to an open piece of the unit sphere $S^{2n-1}\subset \mathbb{C}^n$. Nevertheless, in two dimensions it turns out that algebraicity of the Bergman kernel does characterize finite ball quotients: \begin{thm}\label{main theorem intro} Let $V$ be a $2$-dimensional algebraic variety in $\mathbb{C}^N$, and $G$ a relatively compact domain in $V$. Assume that every point in $\overline{G}$ is a smooth point of $V$ except for finitely many isolated normal singularities inside $G$, and that $G$ has a smooth strongly pseudoconvex boundary. Then the Bergman kernel form of $G$ is algebraic if and only if there is an algebraic branched covering map $F$ from $\mathbb{B}^2$ onto $G$, which realizes $G$ as a ball quotient $\mathbb{B}^2/\Gamma$ where $\Gamma$ is a finite unitary group with no fixed points on $\partial \mathbb{B}^2$. \end{thm} \begin{rmk}\label{rmk counterexample intro} We note that in addition to showing that Theorem \ref{main theorem intro} fails in dimension $\geq 3$, our example in Section \ref{Sec counterexample} also shows that the Ramadanov Conjecture for the Bergman kernel fails for higher-dimensional normal Stein spaces.
Recall that the Ramadanov Conjecture (cf.\ \cite{Ramadanov1981}, \cite[Question 3]{EnZh}) proposes that if the logarithmic term in Fefferman's asymptotic expansion \cite{Fe2} of the Bergman kernel vanishes to infinite order at the boundary of a normal reduced Stein space with compact, smooth strongly pseudoconvex boundary, then the boundary is spherical. The Ramadanov Conjecture has been established in two dimensions by the work of D. Burns and R. C. Graham (see \cite{Graham1987b}). The normal reduced Stein space constructed in Section \ref{Sec counterexample} gives a $3$-dimensional counterexample with one isolated singularity. The counterexamples in \cite{EnZh} are smooth, but not Stein. \end{rmk} Theorem \ref{main theorem intro} has two immediate consequences in the non-singular case: \begin{cor}\label{main corollary intro} Let $V$ be a $2$-dimensional algebraic variety in $\mathbb{C}^N$, and let $G$ be a relatively compact domain in $V$ with smooth strongly pseudoconvex boundary. Assume that every point in $\overline{G}$ is a smooth point of $V$. Then the Bergman kernel form of $G$ is algebraic if and only if $G$ is biholomorphic to $\mathbb{B}^2$ by an algebraic map. \end{cor} \begin{cor}\label{main corollary 2 intro} Let $G$ be a bounded domain in $\mathbb{C}^2$ with smooth strongly pseudoconvex boundary. Then the Bergman kernel of $G$ is rational (respectively, algebraic) if and only if there is a rational (respectively, algebraic) biholomorphic map from $G$ to $\mathbb{B}^2$. \end{cor} We remark that although Theorem \ref{main theorem intro} fails in higher dimensions, Corollaries \ref{main corollary intro} and \ref{main corollary 2 intro} might still be true.
For instance, it is clear from the proof below of Theorem \ref{main theorem intro} (see Remark \ref{rmk:Ramadanov}) that if the Ramadanov Conjecture is proved to hold for, e.g., strongly pseudoconvex bounded domains in $\mathbb{C}^n$, which is still a possibility despite Remark \ref{rmk counterexample intro} above, then Corollary \ref{main corollary 2 intro} also holds in $\mathbb{C}^n$. We also remark that the rationality of the biholomorphic map $G\to \mathbb{B}^2$ in Corollary \ref{main corollary 2 intro}, once its existence has been established, follows from the work of S. Bell \cite{Bell}. For the reader's convenience, a self-contained proof of the rationality is given in Section \ref{Sec Algebraic BK}. As a final remark in this introduction, we note that, by Lempert's algebraic approximation theorem \cite{Le}, if $G$ is a relatively compact domain in a reduced Stein space $X$ with only isolated singularities, then there exist an affine algebraic variety $V$, a domain $\Omega \subset V$, and a biholomorphism $F$ from a neighborhood of $\oo{G}$ to a neighborhood of $\oo{\Omega}$ with $F(\Omega)=G$. We shall say such a domain $\Omega$ is an \emph{algebraic realization} of $G$. Theorem \ref{main theorem intro} implies the following corollary. \begin{cor} Let $G$ be a relatively compact domain in a $2$-dimensional reduced Stein space $X$ with smooth strongly pseudoconvex boundary and only isolated normal singularities. If $G$ has an algebraic realization with an algebraic Bergman kernel, then $G$ is biholomorphic to a ball quotient $\mathbb{B}^2/\Gamma$, where $\Gamma$ is a finite unitary group with no fixed point on $\partial \mathbb{B}^2$. \end{cor} To prove the ``only if'' implication in Theorem \ref{main theorem intro}, we use the asymptotic boundary behavior of the Bergman kernel to establish algebraicity and sphericity of the boundary of $G$. Fefferman's asymptotic expansion \cite{Fe2} and the Riemann mapping type theorems due to X. Huang--S.
Ji (\cite{HuJi98}) and X. Huang (\cite{Hu}) play important roles in the proof. To prove the converse (``if'') implication in the theorem, we will need to compute the Bergman kernel forms of finite ball quotients. In order to do so, we shall establish a transformation formula for (possibly branched) covering maps of complex analytic spaces. This formula generalizes a classical theorem of Bell (\cite{Bell81}, \cite{Bell82}): \begin{thm}\label{BK transformation thm} Let $M_1$ and $M_2$ be two complex analytic sets. Let $V_1\subset M_1$ and $V_2 \subset M_2$ be proper analytic subvarieties such that $M_1-V_1, M_2-V_2$ are complex manifolds of the same dimension. Assume that $f:M_1-V_1\rightarrow M_2-V_2$ is a finite ($m$-sheeted) holomorphic covering map. Let $\Gamma$ be the deck transformation group for the covering map (with $|\Gamma|=m$), and denote by $K_i(z,\bar{w})$ the Bergman kernels of $M_i$ for $i=1,2$. Then the Bergman kernel forms transform according to \begin{equation}\label{BK transformation eq} \sum_{\gamma\in \Gamma}(\gamma, \mathrm{id})^*K_1=\sum_{\gamma\in \Gamma}(\mathrm{id}, \gamma)^*K_1=(f,f)^*K_2 \quad \text{on } (M_1-V_1) \times (M_1-V_1), \end{equation} where $\mathrm{id}: M_1\rightarrow M_1$ is the identity map. \end{thm} See Section \ref{Sec background} for the notation used in the formula in Theorem \ref{BK transformation thm}. We expect that this formula will be useful in other applications as well. In an upcoming paper \cite{EbenfeltXiaoXu2020}, the authors apply it to study the question of when the Bergman metric of a finite ball quotient is K\"ahler--Einstein. (This is always the case for finite disk quotients, i.e., one-dimensional ball quotients, by recent work of X. Huang and X. Li \cite{HuLi}.) The paper is organized as follows. Section \ref{Sec background} gives some preliminaries on algebraic functions and Bergman kernels of complex analytic spaces.
Section \ref{Sec transformation law} is devoted to establishing the transformation formula in Theorem \ref{BK transformation thm}. Then in Section \ref{Sec Bergman kernel for finite ball quotient} we apply it to show that every standard algebraic realization (in particular, Cartan's canonical realization) of a finite ball quotient must have an algebraic Bergman kernel, and thus prove the ``if'' implication in Theorem \ref{main theorem intro}. Section \ref{Sec Algebraic BK} gives the proof of the ``only if'' implication in Theorem \ref{main theorem intro}, as well as those of Corollaries \ref{main corollary intro} and \ref{main corollary 2 intro}. In Section \ref{Sec counterexample} and Appendix \ref{Sec Appendix}, we construct the counterexample mentioned above to the corresponding statement of Theorem \ref{main theorem intro} in higher dimensions. {\bf Acknowledgment.} The second author thanks Xiaojun Huang for many inspiring conversations on quotient singularities. \section{Preliminaries}\label{Sec background} \subsection{Algebraic Functions} In this subsection, we will review some basic facts about algebraic functions. For more details, we refer the reader to \cite[Chapter 5.4]{BER} and \cite{HuJi02}. \begin{defn}[Algebraic functions and maps] Let $\mathbb{K}$ be the field $\mathbb{R}$ or $\mathbb{C}$. Let $U\subset \mathbb{K}^n$ be a domain. A $\mathbb{K}$-analytic function $f: U\rightarrow \mathbb{K}$ is said to be $\mathbb{K}$-algebraic (i.e., real/complex-algebraic) on $U$ if there is a non-trivial polynomial $P(x,y)\in \mathbb{K}[x,y]$, with $(x,y)\in \mathbb{K}^n\times \mathbb{K}$, such that $P(x,f(x))=0$ for all $x\in U$. We say that a $\mathbb{K}$-analytic map $F: U \rightarrow \mathbb{C}^N$ is $\mathbb{K}$-algebraic if each of its components is so on $U$.
\end{defn} \begin{rmk}\label{rmk:RvsC} We make two remarks: \begin{itemize} \item[(i)] If $f(x)$ is a $\mathbb{K}$-analytic function in a domain $U\subset\mathbb{K}^n$, then $f$ is $\mathbb{K}$-algebraic if and only if it is $\mathbb{K}$-algebraic in some neighborhood of any point $x_0\in U$. \item[(ii)] If $f(x)$ is an $\mathbb{R}$-analytic function in a domain $U\subset\mathbb{R}^n$, then there is a domain $\hat U\subset \mathbb{C}^n$ containing $U\subset\mathbb{R}^n\subset\mathbb{C}^n$ and a $\mathbb{C}$-analytic (i.e., holomorphic) function $g(x+iy)$ in $\hat U$ such that $f=g|_U$; i.e., $f(x)=g(x)$ for $x\in U$. Moreover, $f$ is $\mathbb{R}$-algebraic if and only if $g$ is $\mathbb{C}$-algebraic. \end{itemize} \end{rmk} We say a differential form on $U\subset\mathbb{C}^n\cong\mathbb{R}^{2n}$ is real-algebraic if each of its coefficient functions is so. We can also define real-algebraicity of a differential form on an affine (algebraic) variety. \begin{defn}\label{algebraic on affine variety def} Let $V\subset \mathbb{C}^N$ be an affine variety and write $\Reg V$ for the set of its regular points. Let $\phi$ be a real analytic differential form on $\Reg V$. We say $\phi$ is real-algebraic on $V$ if for every point $z_0\in \Reg V$, there exists a real-algebraic differential form $\psi$ in a neighborhood $U$ of $z_0$ in $\mathbb{C}^N\cong\mathbb{R}^{2N}$ such that \begin{equation*} \psi|_V=\phi \quad \mbox{ on } U\cap V. \end{equation*} \end{defn} Let $T_{z_0}V\cong T^{1,0}_{z_0}V$ be the complex tangent space of $V$ at a smooth point $z_0\in V$, considered as an affine complex subspace of $\mathbb{C}^N$ through $z_0$, and let $\xi=(\xi_1,\cdots,\xi_n)$ be affine coordinates for $T_{z_0}V$. Since $V$ can be realized locally as a graph over $T_{z_0}V$, the real and imaginary parts of $\xi$ also serve as local real coordinates for $V$ near $z_0$. We call such coordinates the \emph{canonical extrinsic coordinates at $z_0$}.
Then the following statements are equivalent. \begin{itemize} \item[(a)] $\phi$ is real-algebraic on $\Reg V$ (in the sense of Definition \ref{algebraic on affine variety def}). \item[(b)] For any $z_0\in \Reg V$, $\phi$ is real-algebraic in canonical extrinsic coordinates at $z_0$. \end{itemize} If, in addition, there is a domain $G\subset \mathbb{C}^n$ and a $\mathbb{C}$-algebraic (i.e., holomorphic algebraic) immersion $f: G\rightarrow \mathbb{C}^N$ such that $f(G)=\Reg V$, then (a) and (b) are further equivalent to \begin{itemize} \item[(c)] $f^*\phi$ is real-algebraic on $G$. \end{itemize} \begin{rmk} We can define complex-algebraicity of $(p,0)$-forms, $p>0$, on a complex affine (algebraic) variety in a similar manner as in Definition \ref{algebraic on affine variety def}. \end{rmk} \subsection{The Bergman Kernel} In this subsection, we will briefly review some properties of the Bergman kernel on a complex manifold. More details can be found in \cite{KoNo}. Let $M$ be an $n$-dimensional complex manifold. Write $L^2_{(n,0)}(M)$ for the space of $L^2$-integrable $(n,0)$-forms on $M$, which is equipped with the inner product \begin{equation}\label{inner product} (\varphi,\psi)_{L^2(M)}:=i^{n^2}\int_{M}\varphi\wedge\oo{\psi}, \quad \varphi,\psi \in L^2_{(n,0)}(M). \end{equation} Define the {\em Bergman space} of $M$ to be \begin{equation}\label{Bergman space of forms} A^2_{(n,0)}(M):=\bigl\{\varphi \in L^2_{(n,0)}(M): \varphi \mbox{ is a holomorphic $(n,0)$ form on $M$} \bigr\}. \end{equation} Assume $A^2_{(n,0)}(M) \neq \{0\}$. Then $A^2_{(n,0)}(M)$ is a separable Hilbert space. Taking any orthonormal basis $\{\varphi_k\}_{k=1}^{q}$ of $A^2_{(n,0)}(M)$ (here $1 \leq q \leq \infty$), we define the {\em Bergman kernel (form)} of $M$ to be \begin{equation*} K_M(x,\bar{y})=i^{n^2}\sum_{k=1}^{q} \varphi_k(x)\wedge \oo{\varphi_k(y)}.
\end{equation*} Then $K_M(x, \bar{x})$ is a real-valued, real analytic form of degree $(n,n)$ on $M$ and is independent of the choice of orthonormal basis. When $M$ is also (the set of regular points on) an affine variety, we say that the Bergman kernel of $M$ is {\em algebraic} if $K_{M}(x, \bar{x})$ is real-algebraic in the sense of Definition \ref{algebraic on affine variety def}. The following definitions and facts are standard in the literature. \begin{defn}[Bergman projection] Given $g\in L^2_{(n,0)}(M)$, we define for $x\in M$ \begin{equation*} Pg(x)=\int_Mg(\zeta)\wedge K_M(x,\bar{\zeta}):=i^{n^2}\sum_{k=1}^{q}\Bigl(\int_Mg(\zeta)\wedge\oo{\varphi_k(\zeta)}\Bigr)\varphi_k(x). \end{equation*} \end{defn} $P\colon L^2_{(n,0)}(M)\to A^2_{(n,0)}(M)$ is called the {\em Bergman projection}, and is the orthogonal projection onto the Bergman space $A^2_{(n,0)}(M)$. The Bergman kernel form remains unchanged if we remove a proper complex analytic subvariety. The following theorem is from \cite{Ko}. \begin{thm}[\cite{Ko}]\label{Kobayashi thm} If $M'$ is a domain in an $n$-dimensional complex manifold $M$ and if $M-M'$ is a complex analytic subvariety of $M$ of complex dimension $\leq n-1$, then \begin{equation*} K_M(x,\bar{y})=K_{M'}(x,\bar{y}) \quad \mbox{ for any } x, y\in M'. \end{equation*} \end{thm} This theorem suggests the following generalization of the Bergman kernel form to complex analytic spaces. \begin{defn} Let $M$ be a reduced complex analytic space, and let $V\subset M$ denote its set of singular points. The Bergman kernel form of $M$ is defined as \begin{equation*} K_M(x,\bar{y})=K_{M-V}(x,\bar{y})\quad \mbox{ for any } x, y\in M-V, \end{equation*} where $K_{M-V}$ denotes the Bergman kernel form of the complex manifold consisting of the regular points of $M$. \end{defn} Let $N_1, N_2$ be two complex manifolds of dimension $n$. Let $\gamma: N_1\rightarrow M$ and $\tau: N_2\rightarrow M$ be holomorphic maps.
The pullback of the Bergman kernel $K_M(x,\bar{y})$ of $M$ to $N_1 \times N_2$ is defined in the standard way. That is, for any $z\in N_1, w\in N_2$, \begin{equation*} \bigl( (\gamma,\tau)^*K_M\bigr)(z,\bar{w})=\sum_{k=1}^{q} \gamma^*\varphi_k(z)\wedge \oo{\tau^*\varphi_k(w)}. \end{equation*} In terms of local coordinates, writing the Bergman kernel form of $M$ as \begin{equation*} K_M(x,\bar{y})=\widetilde{K}(x,\bar{y})\,dx_1\wedge\cdots\wedge dx_n\wedge d\oo{y_1}\wedge\cdots \wedge d\oo{y_n}, \end{equation*} we have \begin{equation*} \bigl( (\gamma,\tau)^*K_M\bigr)(z,\bar{w})=\widetilde{K}(\gamma(z),\oo{\tau(w)})\,J_{\gamma}(z)\,\oo{J_{\tau}(w)}\,dz_1\wedge\cdots\wedge dz_n\wedge d\oo{w_1}\wedge\cdots \wedge d\oo{w_n}, \end{equation*} where $J_{\gamma}$ and $J_{\tau}$ are the Jacobian determinants of the maps $\gamma$ and $\tau$, respectively. \section{The transformation law for the Bergman kernel}\label{Sec transformation law} In this section, we shall prove Theorem \ref{BK transformation thm}. For this, we shall adapt the ideas in \cite{Bell82} to our situation. More precisely, we shall first prove the following transformation law for the Bergman projections. Then \eqref{BK transformation eq} will follow readily by comparing the associated distributional kernels for the projection operators. \begin{prop}\label{BP transformation prop} Under the assumptions and notation in Theorem $\ref{BK transformation thm}$, we denote by $n$ the complex dimension of $M_1-V_1$ and $M_2-V_2$. Let $P_i: L^2_{(n,0)}(M_i-V_i)\rightarrow A^2_{(n,0)}(M_i-V_i)$ be the Bergman projection for $i=1,2$. Then the Bergman projections transform according to \begin{equation}\label{BP transformation eq} P_1(f^*\phi)=f^*(P_2\phi) \quad \mbox{ for any } \phi\in L^2_{(n,0)}(M_2-V_2). \end{equation} \end{prop} We first check that $f^*\phi\in L^2_{(n,0)}(M_1-V_1)$ if $\phi\in L^2_{(n,0)}(M_2-V_2)$ in the next lemma.
Recall that $f$ is an $m$-sheeted covering map $M_1-V_1\rightarrow M_2-V_2$. \begin{lemma}\label{integral between cover and base I lemma} \begin{equation*} \|f^*\phi\|_{L^2(M_1-V_1)}=m^{\frac{1}{2}}\|\phi\|_{L^2(M_2-V_2)} \quad \mbox{ for any } \phi\in L^2_{(n,0)}(M_2-V_2). \end{equation*} \end{lemma} \begin{proof} Let $\{U_j\}$ be a countable, locally finite open cover of $M_2-V_2$ such that \begin{itemize} \item each $U_j$ is relatively compact; \item $f^{-1}(U_j)=\cup_{k=1}^m V_{j,k}$ for some pairwise disjoint open sets $\{V_{j,k}\}_{k=1}^m$ on $M_1-V_1$; \item $f:V_{j,k}\rightarrow U_j$ is a biholomorphism for each $j$ and each $k=1,2,\cdots, m$. \end{itemize} Let $\{\rho_j\}$ be a partition of unity subordinate to the cover $\{U_j\}$. Then \begin{align*} i^{n^2}\int_{M_2-V_2} \phi\wedge\oo{\phi} =\sum_ji^{n^2}\int_{U_j}\rho_j \phi\wedge\oo{\phi}=\frac{1}{m} \sum_j\sum_{k=1}^m i^{n^2}\int_{V_{j,k}} (f^*\rho_j)\, f^*\phi\wedge\oo{f^*\phi}. \end{align*} Note that $\{f^*\rho_j\}$ is a partition of unity subordinate to the countable, locally finite open cover $\{\cup_{k=1}^m V_{j,k}\}$ of $M_1-V_1$. Thus, \begin{align*} \frac{1}{m} \sum_j\sum_{k=1}^m i^{n^2}\int_{V_{j,k}} (f^*\rho_j)\, f^*\phi\wedge\oo{f^*\phi} =&\frac{1}{m} \sum_j i^{n^2}\int_{\cup_{k=1}^mV_{j,k}} (f^*\rho_j)\, f^*\phi\wedge\oo{f^*\phi} \\ =&\frac{1}{m} i^{n^2}\int_{M_1-V_1} f^*\phi\wedge\oo{f^*\phi}. \end{align*} The result therefore follows immediately. \end{proof} Let $F_1, F_2, \cdots, F_m$ be the $m$ local inverses to $f$ defined locally on $M_2-V_2$. Note that $\sum_{k=1}^mF_k^*$ is a well-defined operator on $L_{(n,0)}^2 (M_1-V_1)$, though each individual $F_k$ is only locally defined. \begin{lemma}\label{integral between cover and base II lemma} Let $v\in L^2_{(n,0)}(M_1-V_1)$ and $\phi\in L^2_{(n,0)}(M_2-V_2)$.
Then $\sum_{k=1}^m F_k^*(v) \in L^2_{(n,0)}(M_2-V_2)$ and \begin{equation}\label{integral between cover and base II eq} \bigl(v,f^*\phi\bigr)_{L^2(M_1-V_1)}=\bigl(\sum_{k=1}^m F_k^*(v), \phi\bigr)_{L^2(M_2-V_2)}. \end{equation} \end{lemma} \begin{proof} We first verify $\sum_{k=1}^m F_k^*(v) \in L^2_{(n,0)}(M_2-V_2)$. For that, we note \begin{equation*} f^*\sum_{k=1}^m F_k^*(v)=\sum_{\gamma\in \Gamma}\gamma^*v. \end{equation*} By the same argument as in Lemma \ref{integral between cover and base I lemma}, we have \begin{equation}\label{eqlemma34} \bigl\|\sum_{\gamma\in \Gamma}\gamma^*v\bigr\|_{L^2(M_1-V_1)}=m^{\frac{1}{2}}\bigl\|\sum_{k=1}^m F_k^*(v)\bigr\|_{L^2(M_2-V_2)}. \end{equation} Since each deck transformation $\gamma: M_1-V_1\rightarrow M_1-V_1$ is biholomorphic, it follows that \begin{align*} \bigl\|\sum_{\gamma\in \Gamma}\gamma^*v\bigr\|_{L^2(M_1-V_1)} \leq \sum_{\gamma\in \Gamma}\bigl\|\gamma^*v\bigr\|_{L^2(M_1-V_1)}=m\|v\|_{L^2(M_1-V_1)}. \end{align*} Therefore, by \eqref{eqlemma34}, $\sum_{k=1}^m F_k^*(v) \in L_{(n,0)}^2(M_2-V_2)$. Now we are ready to prove \eqref{integral between cover and base II eq}. Let $\{U_j\}$, $\{V_{j,k}\}$ and $\{\rho_j\}$ be the open covers and partition of unity as in Lemma \ref{integral between cover and base I lemma}. Then \begin{align*} \bigl(\sum_{k=1}^m F_k^*(v), \phi\bigr)_{L^2(M_2-V_2)}=\sum_{j}i^{n^2}\int_{U_j}\rho_j\sum_{k=1}^m F_k^*(v)\wedge \oo{\phi}. \end{align*} Note that every $F_k: U_j\rightarrow V_{j,k}$ is biholomorphic and is the inverse of $f: V_{j,k}\rightarrow U_j$. Thus, \begin{align*} \sum_{j}i^{n^2}\int_{U_j}\rho_j\sum_{k=1}^m F_k^*(v)\wedge \oo{\phi}=\sum_{j}\sum_{k=1}^mi^{n^2}\int_{V_{j,k}}(f^*\rho_j)\, v\wedge \oo{f^*\phi}=(v,f^*\phi)_{L^2(M_1-V_1)}. \end{align*} The last equality follows from the fact that $\{f^*\rho_j\}$ is a partition of unity subordinate to the countable, locally finite open cover $\{\cup_{k=1}^m V_{j,k}\}$ of $M_1-V_1$.
This proves \eqref{integral between cover and base II eq}. \end{proof} We are now ready to prove Proposition \ref{BP transformation prop}. \begin{proof}[Proof of Proposition \ref{BP transformation prop}] If $\phi\in A^2_{(n,0)}(M_2-V_2)$, then $f^*\phi\in A^2_{(n,0)}(M_1-V_1)$ by Lemma \ref{integral between cover and base I lemma}, whence \eqref{BP transformation eq} holds trivially. It thus suffices to prove \eqref{BP transformation eq} for $\phi\in A^2_{(n,0)}(M_2-V_2)^{\perp}$. In this case, \eqref{BP transformation eq} reduces to \begin{equation*} P_1(f^*\phi)=0 \quad \mbox{ for any } \phi\in A^2_{(n,0)}(M_2-V_2)^{\perp}; \end{equation*} i.e., $\phi\in A^2_{(n,0)}(M_2-V_2)^{\perp}$ implies that $f^*\phi\in A^2_{(n,0)}(M_1-V_1)^{\perp}$. To prove this, we note that for any $v\in A^2_{(n,0)}(M_1-V_1)$, we have by Lemma \ref{integral between cover and base II lemma} \begin{align*} \bigl(v,f^*\phi\bigr)_{L^2(M_1-V_1)}=\bigl(\sum_{k=1}^m F_k^*(v), \phi\bigr)_{L^2(M_2-V_2)}=0. \end{align*} The last equality follows from the fact that $\phi\in A^2_{(n,0)}(M_2-V_2)^{\perp}$. Thus, $f^*\phi\in A^2_{(n,0)}(M_1-V_1)^{\perp}$ and the proof is complete. \end{proof} We are now in a position to prove Theorem \ref{BK transformation thm}. \begin{proof}[Proof of Theorem \ref{BK transformation thm}] Let $\mathrm{id}_{M_i}$ be the identity map on $M_i$ for $i=1,2$. Recall that $\{F_k\}_{k=1}^m$ are the local inverses of $f$. Note that $\sum_{k=1}^m(\mathrm{id}_{M_1}, F_k)^*K_1$ is a well-defined $(n,n)$-form on $(M_1-V_1)\times (M_2-V_2)$ though each $(\mathrm{id}_{M_1}, F_k)^*K_1$ is only locally defined. We shall write out the Bergman projection transformation law \eqref{BP transformation eq} in terms of integrals of the Bergman kernel forms.
For any $\phi\in L^2_{(n,0)}(M_2-V_2)$, by Lemma \ref{integral between cover and base II lemma} we have for any $z\in M_1-V_1$, \begin{align*} P_1(f^*\phi)(z)=\int_{M_1-V_1} f^*\phi(\eta)\wedge K_1(z,\eta)=\int_{M_2-V_2}\phi(\eta)\wedge \sum_{k=1}^m(\mathrm{id}_{M_1}, F_k)^*K_1(z,\eta). \end{align*} On the other hand, \begin{align*} P_2(\phi)(\xi)=\int_{M_2-V_2}\phi(\eta)\wedge K_2(\xi,\eta) \quad \mbox{ for any }\xi \in M_2-V_2. \end{align*} If we pull back the forms on both sides by $f$, then \begin{align*} f^*P_2(\phi)(z)=\int_{M_2-V_2}\phi(\eta)\wedge (f,\mathrm{id}_{M_2})^*K_2(z,\eta) \quad \mbox{ for any }z \in M_1-V_1. \end{align*} Therefore, the Bergman projection transformation law \eqref{BP transformation eq} translates to \begin{align*} \int_{M_2-V_2}\phi(\eta)\wedge \sum_{k=1}^m(\mathrm{id}_{M_1}, F_k)^*K_1(z,\eta) =\int_{M_2-V_2}\phi(\eta)\wedge (f,\mathrm{id}_{M_2})^*K_2(z,\eta). \end{align*} As this equality holds for any $\phi\in L^2_{(n,0)}(M_2-V_2)$, it follows that for any $z \in M_1-V_1$ and $\eta\in M_2-V_2$, \begin{equation} \sum_{k=1}^m(\mathrm{id}_{M_1}, F_k)^*K_1(z,\eta) =(f,\mathrm{id}_{M_2})^*K_2(z,\eta). \end{equation} If we further pull back the forms on both sides by $(\mathrm{id}_{M_1},f): (M_1-V_1) \times (M_1-V_1)\rightarrow (M_1-V_1)\times (M_2-V_2)$, then we obtain for $z,w\in M_1-V_1$, \begin{align} \sum_{k=1}^m(\mathrm{id}_{M_1}, F_k\circ f)^*K_1(z,w) =(f,f)^*K_2(z,w). \end{align} By using the notation $\gamma_k$ for the deck transformation $F_k\circ f$, we may write this as \begin{align}\label{transformation law 1} \sum_{k=1}^m(\mathrm{id}_{M_1}, \gamma_k)^*K_1(z,w) =(f,f)^*K_2(z,w). \end{align} Note that \begin{align*} \sum_{k=1}^m(\mathrm{id}_{M_1}, \gamma_k)^*K_1(z,w)=\sum_{k=1}^m(\gamma_k\circ \gamma_k^{-1}, \gamma_k\circ \mathrm{id}_{M_1})^*K_1(z,w)=\sum_{k=1}^m(\gamma_k^{-1}, \mathrm{id}_{M_1})^* (\gamma_k,\gamma_k)^*K_1(z,w).
\end{align*} Since $\gamma_k$ is a biholomorphism on $M_1-V_1$, we have \begin{equation*} (\gamma_k,\gamma_k)^*K_1(z,w)=K_1(z,w), \end{equation*} and hence \begin{align*} \sum_{k=1}^m(\mathrm{id}_{M_1}, \gamma_k)^*K_1(z,w)=\sum_{k=1}^m(\gamma_k^{-1}, \mathrm{id}_{M_1})^* K_1(z,w)=\sum_{k=1}^m(\gamma_k, \mathrm{id}_{M_1})^* K_1(z,w). \end{align*} Theorem \ref{BK transformation thm} now follows by combining the above identity with \eqref{transformation law 1}. \end{proof} \section{Proof of Theorem \ref{main theorem intro}, part I: Bergman kernels of ball quotients} \label{Sec Bergman kernel for finite ball quotient} In this section, we will apply the transformation law in Theorem \ref{BK transformation thm} to study the Bergman kernel form of a finite ball quotient and prove the ``if'' implication in Theorem \ref{main theorem intro}. For this part, the restriction of the dimension of the algebraic variety to two is not needed, and we shall therefore consider the situation in an arbitrary dimension $n$. Let $\mathbb{B}^n$ denote the unit ball in $\mathbb{C}^n$ and $\Aut(\mathbb{B}^n)$ its (biholomorphic) automorphism group. Let $\Gamma$ be a finite subgroup of $\Aut(\mathbb{B}^n)$. As the unitary group $U(n)$ is a maximal compact subgroup of $\Aut(\mathbb{B}^n)$, by basic Lie group theory, there exists some $\psi\in \Aut(\mathbb{B}^n)$ such that $\Gamma\subset \psi^{-1}\cdot U(n)\cdot \psi$. Thus, without loss of generality, we can assume $\Gamma\subset U(n)$, i.e., $\Gamma$ is a finite unitary group. Note that the origin $0\in \mathbb{C}^n$ is always a fixed point of every element in $\Gamma$. We say $\Gamma$ is \emph{fixed point free} if every $\gamma\in \Gamma-\{\mathrm{id}\}$ has no other fixed point, or equivalently, if every $\gamma\in \Gamma-\{\mathrm{id}\}$ has no fixed point on $\partial\mathbb{B}^n$.
In this case, the action of $\Gamma$ on $\partial\mathbb{B}^n$ is properly discontinuous and $\partial\mathbb{B}^n/\Gamma$ is a smooth manifold. By a theorem of Cartan \cite{Cartan}, the quotient $\mathbb{C}^n/\Gamma$ can be realized as a normal algebraic subvariety $V$ in some $\mathbb{C}^N$. To be more precise, we write $\mathcal{A}$ for the algebra of $\Gamma$-invariant holomorphic polynomials, that is, \begin{equation*} \mathcal{A}:=\big\{p\in \mathbb{C}[z_1,\cdots, z_n]: p\circ\gamma=p \,\mbox{ for all }\gamma \in\Gamma \big\}. \end{equation*} By Hilbert's basis theorem, $\mathcal{A}$ is finitely generated. Moreover, we can find a minimal set of homogeneous polynomials $\{p_1, \cdots, p_N\}$ such that every $p\in \mathcal{A}$ can be expressed in the form \begin{equation*} p(z)=q(p_1(z), \cdots, p_N(z)) \quad \mbox{ for }z\in \mathbb{C}^n, \end{equation*} where $q$ is some holomorphic polynomial in $\mathbb{C}^N$. The map $Q:=(p_1,\cdots,p_N): \mathbb{C}^n\rightarrow \mathbb{C}^N$ is proper and induces a homeomorphism of $\mathbb{C}^n/\Gamma$ onto $V:=Q(\mathbb{C}^n)$. By Remmert's proper mapping theorem (see \cite{GH}), $V$ is an analytic variety. As $Q$ is a polynomial holomorphic map, $V$ is furthermore an algebraic variety. The restriction of $Q$ to the unit ball $\mathbb{B}^n$ maps $\mathbb{B}^n$ properly onto a relatively compact domain $G\subset V$. In this way, $\mathbb{B}^n/\Gamma$ is realized as $G$ by $Q$. Following \cite{Rudin}, we call such $Q$ the \emph{basic map} associated to $\Gamma$. The ball quotient $G=\mathbb{B}^n/\Gamma$ is nonsingular if and only if the group $\Gamma$ is generated by \emph{reflections}, i.e., elements of finite order in $U(n)$ that fix a complex subspace of dimension $n-1$ in $\mathbb{C}^n$ (see \cite{Rudin}); thus, if $\Gamma$ is fixed point free and nontrivial, then $G=\mathbb{B}^n/\Gamma$ must have singularities.
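To illustrate the construction with a standard example (included here for the reader's convenience, and easily verified directly), take $n=2$ and $\Gamma=\{\pm\mathrm{id}\}\subset U(2)$. The algebra $\mathcal{A}$ then consists of the even polynomials and is minimally generated by $p_1=z_1^2$, $p_2=z_1z_2$, $p_3=z_2^2$, so that the basic map and its image are \begin{equation*} Q(z_1,z_2)=(z_1^2,\, z_1z_2,\, z_2^2), \qquad V=Q(\mathbb{C}^2)=\bigl\{(u,v,w)\in\mathbb{C}^3: uw=v^2\bigr\}, \end{equation*} the quadric cone with an isolated ($A_1$) singularity at the origin. Here $\Gamma$ is fixed point free and contains no reflections, and $G=Q(\mathbb{B}^2)$ is singular precisely at the point $Q(0)=0$.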
Moreover, $G$ has smooth boundary if and only if $\Gamma$ is fixed point free (see \cite{Fo86} for more results along this line). We are now in a position to state the following theorem, which implies the ``if'' implication in Theorem \ref{main theorem intro}. \begin{thm} \label{main theorem backward direction} Let $G$ be a domain in an algebraic variety $V$ in $\mathbb{C}^N$ and $\Gamma \subset U(n)$ a finite unitary subgroup with $|\Gamma|=m$. Suppose there exist proper complex analytic varieties $V_1\subset \mathbb{B}^n$, $V_2 \subset G$ and $F: \mathbb{B}^n-V_1\rightarrow G-V_2$ such that $F$ is an $m$-sheeted covering map with deck transformation group $\Gamma$. If $F$ is algebraic, then the Bergman kernel form of $G$ is algebraic. \end{thm} \begin{proof} Note that the Bergman kernel form of $G$ coincides with that of $\widetilde{G}:=G-V_2$ by Theorem \ref{Kobayashi thm}, and likewise the Bergman kernel form $K_{\mathbb{B}^n}$ of $\mathbb{B}^n$ coincides with that of $\widetilde{B}:=\mathbb{B}^n-V_1$. By the transformation law in Theorem \ref{BK transformation thm}, we have \begin{equation*} \sum_{\gamma\in\Gamma}(\mathrm{id}_{\mathbb{B}^n},\gamma)^*K_{\mathbb{B}^n}=(F,F)^*K_G \quad \mbox{ on } \widetilde{B}\times\widetilde{B}. \end{equation*} Since all $\gamma\in \Gamma$ and $K_{\mathbb{B}^n}$ are rational, the left-hand side of the equation, and hence also $(F,F)^*K_G$, is rational. As $F$ is algebraic, this implies that $K_G$ is algebraic (see the equivalent condition (c) of algebraicity in \S 2.1). \end{proof} Theorem \ref{main theorem backward direction} applies in particular to Cartan's canonical realization of ball quotients. \begin{cor} Let $\Gamma\subset U(n)$ be a finite unitary group. Suppose $Q:\mathbb{C}^n\rightarrow \mathbb{C}^N$ is the basic map associated to $\Gamma$. Let $G=Q(\mathbb{B}^n)$, which is a relatively compact domain in the algebraic variety $V=Q(\mathbb{C}^n)$. Then the Bergman kernel form of $G$ is algebraic.
\end{cor} \begin{proof} We let \begin{equation*} Z=\{z\in \mathbb{C}^n: \mbox{the Jacobian of $Q$ at $z$ does not have full rank} \}. \end{equation*} Clearly, $Z$ is a proper complex analytic variety in $\mathbb{C}^n$. By Remmert's proper mapping theorem, $Q(Z)\subset V$ is a proper complex analytic variety. Moreover, $Q:\mathbb{B}^n-Z\rightarrow G-Q(Z)$ is a covering map with $m$ sheets, where $m=|\Gamma|$, and $\Gamma$ is its deck transformation group (note that $Q^{-1}(Q(Z))=Z$; see \cite{Cartan}). The conclusion now follows from Theorem \ref{main theorem backward direction}. \end{proof} \begin{rmk} Note that the ``if'' implication in Theorem \ref{main theorem intro} in fact holds under a much weaker assumption than that stipulated in the theorem: in Theorem \ref{main theorem backward direction} we assume neither that $n=2$ nor that the group $\Gamma$ is fixed point free. We remark that the formula for the Bergman kernel of the finite ball quotient is also obtained by Huang--Li \cite{HuLi}. \end{rmk} \section{Proof of Theorem \ref{main theorem intro}, part II}\label{Sec Algebraic BK} In this section, we prove one of the main results of the paper---the ``only if'' implication in Theorem \ref{main theorem intro}. We also prove Corollaries \ref{main corollary intro} and \ref{main corollary 2 intro}. \begin{proof}[Proof of the ``only if'' implication in Theorem \ref{main theorem intro}] Let $V$ and $G$ be as in Theorem \ref{main theorem intro} and assume that $G$ has an algebraic Bergman kernel. We shall prove that $G$ is a finite ball quotient. We proceed in several steps. \textbf{Step 1.} In this step, we prove that $\partial G$ is real analytic, and furthermore, real algebraic. For this step, we do not need to assume that the dimension of $V$ is two. \begin{prop}\label{boundary algebraic prop} Let $G$ be a relatively compact domain in an $n$-dimensional ($n\geq 2$) algebraic variety $V\subset\mathbb{C}^N$ with smooth strongly pseudoconvex boundary.
If the Bergman kernel $K_G$ of $G$ is algebraic, then the boundary $\partial G$ of $G$ is Nash algebraic, i.e., $\partial G$ is locally defined by a real algebraic function. \end{prop} \begin{proof} Fix a point $p\in \partial G$. Then there exists a neighborhood $U$ of $p$ in $V$ with canonical extrinsic coordinates $z=(z_1,\cdots, z_n)$ on $U$ (see Section \ref{Sec background}). Write the Bergman kernel form $K_G$ of $G$ as \begin{equation*} K_G=K(z,\bar{z})dz\wedge d\oo{z} \quad \mbox{ on } U\cap G, \end{equation*} where $dz=dz_1\wedge\cdots \wedge dz_n$, $d\oo{z}=d\oo{z_1}\wedge\cdots \wedge d\oo{z_n}$ and $K(z,\bar{z})$ is a real algebraic function on $U\cap G$. As $K$ is real algebraic, there exist real-valued polynomials $a_0(z,\bar{z}), \cdots, a_q(z,\bar{z})$ in $\mathbb{C}^n\cong\mathbb{R}^{2n}$ with $a_q\neq 0$ such that \begin{equation}\label{eqnkz} a_q(z,\bar{z})K(z,\bar{z})^q+\cdots+a_1(z,\bar{z})K(z,\bar{z})+a_0(z,\bar{z})=0 \quad \mbox{ on } U\cap G. \end{equation} Note that when $z\rightarrow \partial G$, we have $K(z,\bar{z})\rightarrow \infty$ as $\partial G$ is strictly pseudoconvex. We divide both sides of \eqref{eqnkz} by $K(z,\bar{z})^q$ and let $z \rightarrow \partial G$ to obtain \begin{equation*} a_q(z,\bar{z})=0 \quad \mbox{ on } U\cap \partial G. \end{equation*} Write $z_k=x_k+iy_k$ for $1\leq k\leq n$, $z'=(z_1,\cdots, z_{n-1})$ and $x'=(x_1,y_1,\cdots, x_{n-1},y_{n-1},x_n)$. By a rotation, we can assume that $\partial G$ near $p$ is locally defined by \begin{equation*} y_n=\varphi(x'), \end{equation*} where $\varphi$ is a smooth function. We then have \begin{equation*} a_q\bigl(z',x_n+i\varphi(x'),\oo{z'},x_n-i\varphi(x')\bigr)=0. \end{equation*} By Malgrange's theorem (see \cite{Ne} and references therein), $\varphi$ is real analytic and thus, since $a_q$ is a polynomial, also real algebraic. Hence, $\partial G$ is Nash algebraic. \end{proof} \textbf{Step 2.} We now return to the case where $V$ is two-dimensional.
We shall prove that $\partial G$ is spherical, where $G$ is as in Theorem \ref{main theorem intro}. Fix $p\in \partial G$ and a canonical extrinsic coordinate chart $(U,z)$ of $V$ at $p$, where $z=(z_1,z_2)$. We again write \begin{equation*} K_G(z,\bar{z})=K(z,\bar{z})dz\wedge d\oo{z} \quad \mbox{ on } U\cap G, \end{equation*} where $dz=dz_1\wedge dz_2$ and $d\oo{z}=d\oo{z_1}\wedge d\oo{z_2}$. Choose a strongly pseudoconvex domain $D\Subset U\cap G$ such that \begin{equation*} B(p,\delta)\cap D= B(p,\delta)\cap G \quad \mbox{for some small } \delta>0. \end{equation*} Here $B(p,\delta)=\{z\in U: \|z-p\|<\delta \}$ is the ball centered at $p$ with radius $\delta$ with respect to the coordinates $(U,z)$. Write $K_D$ for the Bergman kernel of $D$, which is now considered as a function. Then $K_D-K$ extends smoothly across $B(p,\delta)\cap \partial D$ (see \cite{Fe, BoSj}; see also \cite{HuLi} for a nice and detailed proof of this fact). Consequently, \begin{equation*} K_D(z,\bar{z})=K(z,\bar{z})+h(z,\bar{z})\quad \mbox{on } D, \end{equation*} where $h(z,\bar{z})$ is real analytic in $D$ and extends smoothly across $B(p,\delta)\cap \partial D$. Let $r$ be a Fefferman defining function of $D$ and express the Fefferman asymptotic expansion of $K_D$ as \begin{equation*} K_D(z,\bar{z})=\frac{\phi(z,\bar{z})}{r(z)^3}+\psi(z,\bar{z})\log r(z) \quad \mbox{ on } D, \end{equation*} where $\phi$ and $\psi$ are smooth functions on $D$ that extend smoothly across $B(p,\delta)\cap \partial D$; see \cite{Fe2}. Thus, \begin{equation}\label{eqnkghr} K(z,\bar{z})=\frac{\phi(z,\bar{z})-h(z,\bar{z})r(z)^3}{r(z)^3}+\psi(z,\bar{z})\log r(z) \quad \mbox{ on } D. \end{equation} As in Step 1, there exist real-valued polynomials $a_0(z,\bar{z}),\cdots, a_q(z,\bar{z})$ in $\mathbb{C}^2\cong\mathbb{R}^4$ with $a_q\neq 0$ for some $q\geq 1$, such that \begin{equation*} a_qK^q+\cdots+a_1K+a_0=0 \quad \mbox{ on } D.
\end{equation*} If we substitute \eqref{eqnkghr} into the above equation and multiply both sides by $r^{3q}$, then \begin{equation}\label{vanishing order eq} a_q\psi^q r^{3q}(\log r)^q+\sum_{j=0}^{q-1}b_j(\log r)^j=0 \quad \mbox{ on } D, \end{equation} where all $b_j$ for $0\leq j\leq q-1$ are smooth on $D$ and extend smoothly across $B(p,\delta)\cap \partial D$. We recall the following lemma from \cite{FuWo}. \begin{lemma}[\cite{FuWo}] Let $f_0(t), \cdots, f_q(t)\in C^{\infty}(-\varepsilon,\varepsilon)$ for $\varepsilon>0$. If \begin{equation*} f_0(t)+f_1(t)\log t+\cdots+f_q(t)(\log t)^q=0 \end{equation*} for all $t\in (0,\varepsilon)$, then each $f_j(t)$, $0\leq j\leq q$, vanishes to infinite order at $0$. \end{lemma} It follows from the above lemma and \eqref{vanishing order eq} that the coefficient $\psi$ of the logarithmic term vanishes to infinite order at $\partial G$ near $p$. Since $G$ is two-dimensional, it follows that $\partial G$ is spherical near $p$ by \cite{Gr} (see page 129, where the result is credited to Burns) and \cite{Bou} (see page 23). \begin{rmk}\label{rmk:Ramadanov} Recall from the introduction that the sphericity near $p$ above follows from the affirmation of the Ramadanov Conjecture in two dimensions. This is also the only place where the fact that $G$ is two-dimensional is essentially used. \end{rmk} \textbf{Step 3.} In this step, we will prove that there is an algebraic branched covering map $F: \mathbb{B}^2\rightarrow G$ with finitely many sheets. Since we have already shown that $\partial G$ is a Nash algebraic and spherical CR submanifold in $\mathbb{C}^N$, by a theorem of Huang (see Corollary 3.3 in \cite{Hu}), it follows that $\partial G$ is CR equivalent to a CR spherical space form $\partial\mathbb{B}^2/\Gamma$ with $\Gamma\subset U(2)$ a finite unitary group with no fixed points on $\partial \mathbb{B}^2$.
In particular, there is a CR covering map $f: \partial\mathbb{B}^2\rightarrow \partial G$ (see the proof of Theorem 3.1 in \cite{Hu} and also \cite{HuJi98}). By Hartogs's extension theorem, $f$ extends to a smooth map $F\colon \overline{\mathbb{B}^2} \to V$, holomorphic in $\mathbb{B}^2$ and sending $\partial \mathbb{B}^2$ onto $\partial G$. The latter implies that $F$ is moreover algebraic by X. Huang's algebraicity theorem \cite{Huang1994}. It is not difficult to see that $F$ sends $\mathbb{B}^2$ into $G$. Since $F$ maps $\partial \mathbb{B}^2$ to $\partial G$, we conclude that $F$ is a proper algebraic mapping $\mathbb{B}^2 \to G$. {\bf Claim 1.} $F\colon\mathbb{B}^2\to G$ is surjective. \begin{proof}[Proof of Claim 1] By the properness of $F$, $F(\mathbb{B}^2)$ is closed in $G$. Set \begin{equation*} Z:=\{z\in \mathbb{B}^2: F \mbox{ is not of full rank at } z \}. \end{equation*} Since $F$ is a local biholomorphism at every point of $\partial \mathbb{B}^2$, $Z$ is a finite set. We also note that if $p\in \mathbb{B}^2-Z$, then $F(p)$ is a smooth point of $V$, and $F(p)$ is an interior point of $F(\mathbb{B}^2)$. Assume, in order to reach a contradiction, that $F(\mathbb{B}^2)\neq G$. Since $F(\mathbb{B}^2)$ is closed in $G$, its complement $G\setminus F(\mathbb{B}^2)$ is then a non-empty open subset of $G$. Note that any boundary point of $F(\mathbb{B}^2)$ in $G$ can only be in $F(Z)$. But $F(Z)$ is a finite set, which cannot separate the (non-empty) interior of $F(\mathbb{B}^2)$ and the (non-empty) open complement $G\setminus F(\mathbb{B}^2)$ in the domain $G$. This is the desired contradiction and, hence, $F(\mathbb{B}^2)=G$. \end{proof} Now, we let $T:=F^{-1}(F(Z)) \supset Z$. Then $T$ is a compact analytic subvariety of $\mathbb{B}^2$ and thus is a finite set. Consider the restriction of $F$: \begin{equation*} F|_{\mathbb{B}^2-T}: \mathbb{B}^2-T\rightarrow G-F(Z), \end{equation*} still denoted by $F$. Clearly, $F$ is a proper surjective map.
Since $F$ is also a local biholomorphism, $F$ is a finite covering map. Note that $\mathbb{B}^2-T$ is simply connected. It follows that the deck transformation group $\widetilde{\Gamma}=\{\widetilde{\gamma}_k \}_{k=1}^m$ of the covering map $F: \mathbb{B}^2-T\rightarrow G-F(Z)$ acts transitively on each fiber. Since each $\widetilde{\gamma}_k$ is a biholomorphism from $\mathbb{B}^2-T$ to $\mathbb{B}^2-T$, it extends to an automorphism of $\mathbb{B}^2$. Consequently, \begin{equation*} \widetilde{\Gamma}=\bigl\{ \widetilde{\gamma}\in \Aut(\mathbb{B}^2): F\circ \widetilde{\gamma}=F \mbox{ on } \mathbb{B}^2\bigr\}. \end{equation*} Recall that $\Gamma$ is the deck transformation group of the original covering map $f:\partial\mathbb{B}^2\rightarrow \partial G$. From this, it is clear that we can identify $\Gamma$ with $\widetilde{\Gamma}$. From now on, we will simply use the notation $\Gamma$ for either group. Note that $Z$ and $T$ are both invariant under the action of $\Gamma$, and $(\mathbb{B}^2-T)/\Gamma$ is biholomorphic to $G-F(Z)$. {\bf Claim 2.} If $z, w \in \mathbb{B}^2$ satisfy $F(z)=F(w),$ then $w=\gamma(z)$ for some $\gamma \in \Gamma.$ Consequently, $T=Z$. \begin{proof}[Proof of Claim 2] We only need to prove the first assertion. If both $z, w$ are in $\mathbb{B}^2 -T,$ then the conclusion is clear, as $\Gamma$ acts transitively on each fiber of the covering map $F: \mathbb{B}^2-T\rightarrow G-F(Z)$. Next we assume one of $z$ and $w$ is in $T$.
Seeking a contradiction, suppose $w \neq \gamma(z)$ for every $\gamma \in \Gamma.$ Writing $q:=F(z)=F(w),$ there are then points in distinct orbits of $\Gamma$ that are mapped to $q.$ Writing $t$ for the number of orbits of $\Gamma$ that are mapped to $q$, we must then have $t \geq 2.$ Pick $p_1, \cdots, p_t$ from these $t$ distinct orbits of $\Gamma.$ Since $T$ is a finite set, we can choose, for each $1 \leq i \leq t,$ pairwise disjoint neighborhoods $U_i$ of $p_i$ such that $U_i \cap T \subseteq \{p_i\}.$ Moreover, we can arrange that $\gamma(U_i) \cap U_j =\emptyset$ for all $\gamma \in \Gamma$ if $i \neq j.$ Consequently, $F(U_i-\{p_i\}) \cap F(U_j-\{p_j\}) =\emptyset.$ Note that there is a small open subset $W$ containing $q$ such that $W \subseteq \bigcup_{i=1}^t F\bigl(\bigcup_{\gamma \in \Gamma} \gamma(U_i)\bigr)=\bigcup_{i=1}^t F(U_i)$. Thus $W-\{q\}\subseteq \bigcup_{i=1}^t F(U_i- \{p_i\}).$ But the sets $F(U_i-\{p_i\}) \cap (W-\{q\})$ are open, disjoint and non-empty, and we can choose $W-\{q\}$ to be connected. This is a contradiction. Thus we must have $t=1$ and $w=\gamma(z)$ for some $\gamma \in \Gamma.$ \end{proof} Recall that $0$ is assumed to be the only fixed point for the non-identity elements of $\Gamma$. We write $q_0:=F(0)$ and prove that $q_0$ is the only possible singularity in $G$. Also, recall that all singularities of $G$ are assumed to be isolated and normal. {\bf Claim 3.} $G$ can only have a singularity at $q_0$. \begin{proof}[Proof of Claim 3] Suppose $q_1$ is a (normal) singular point in $G$ and $q_1\neq q_0$. Since $F$ is onto, there exists some $p_1\in \mathbb{B}^2$ such that $F(p_1)=q_1$. First, note that we can find a small neighborhood $U_0$ of $p_1$, and a small neighborhood $W$ of $q_1$ in $G$ such that $$\text{(i)} ~U_0\cap T=\{p_1\}; \quad \text{(ii)}~F~\text{is injective on}~U_0; \quad \text{(iii)}~W \subseteq F(U_0)~\text{and}~W \cap F(Z)=\{q_1\}.$$ It is easy to see that we can make (i) and (ii) hold.
It is guaranteed by Claim 2 (see its proof) that we can find $W \subseteq F(U_0)$; the second condition in (iii) is then easy to satisfy, since $F(Z)$ is a finite set. Now, we let $U:=U_0\cap F^{-1}(W)$, which is an open subset of $\mathbb{B}^2$ containing $p_1$. Then $F: U-\{p_1\}\rightarrow W-\{q_1\}$ is a biholomorphism. We let $g: W-\{q_1\}\rightarrow U-\{p_1\}$ denote its inverse. By the normality of $q_1$, we can assume that $g$ is the restriction of some holomorphic map $\widehat{g}$ defined on some open set $\widehat{W}\subset \mathbb{C}^N$, where $\widehat{W}$ contains $W$. Since $g\circ F|_{U-\{p_1\}}$ equals the identity map, $\widehat{g}\circ F$ equals the identity on $U$ by continuity. Similarly, $F \circ (\widehat{g}|_W)$ equals the identity on $W$. Therefore, $q_1$ cannot be a singular point. \end{proof} By Claim 2 and Claim 3, we also see that $T=Z=\{0\} \mbox{ or } \varnothing$. Therefore, $F$ gives a holomorphic algebraic branched covering map from $\mathbb{B}^2$ to $G$ with a possible branch point at $0$. This completes the proof of the ``only if'' implication in Theorem \ref{main theorem intro}. \end{proof} \begin{rmk} We reiterate (see Remark \ref{rmk:Ramadanov} above) that in the above proof, the condition that $\dim V=2$ is only used in the second step, where we apply the affirmative solution of the Ramadanov conjecture in $\mathbb{C}^2$ (\cite{Gr}, \cite{Bou}). \end{rmk} We shall now prove Corollaries \ref{main corollary intro} and \ref{main corollary 2 intro}. \begin{proof}[Proof of Corollary \ref{main corollary intro}] By Theorem \ref{main theorem intro}, $G$ can be realized as a finite ball quotient $\mathbb{B}^2/\Gamma$ by an algebraic map for some finite unitary group $\Gamma$ with no fixed point on $\partial\mathbb{B}^2$. We must prove that $\Gamma = \{ \mathrm{id} \}$. Suppose not; then $G$ must have a singular point (see \cite{Rudin}), which is a contradiction.
\end{proof} \begin{proof}[Proof of Corollary \ref{main corollary 2 intro}] The algebraic case follows immediately from Corollary \ref{main corollary intro}. Thus, we only need to consider the rational case. First, as a consequence of the algebraic case, there exists an algebraic biholomorphic map $f: G\rightarrow \mathbb{B}^2$. It remains to establish that $f$ is in fact rational. This follows immediately from a result by Bell \cite{Bell}. For the convenience of the readers, however, we sketch an independent proof here. Denote by $K_G$ and $K_{\mathbb{B}^2}$ the Bergman kernels (now considered as functions) of $G$ and $\mathbb{B}^2$, respectively. By the transformation law, they are related by \begin{align}\label{K_G transformation eq} \begin{split} K_G(z,w) =&\det\bigl(Jf(z)\bigr)\cdot K_{\mathbb{B}^2}\bigl(f(z), f(w)\bigr)\cdot\oo{\det\bigl(Jf(w)\bigr)} \\ =&\frac{2!}{\pi^2}\det\bigl(Jf(z)\bigr)\cdot\oo{\det\bigl(Jf(w)\bigr)}\cdot\frac{1}{\bigl(1-f(z)\cdot \oo{f(w)}\bigr)^3}. \end{split} \end{align} We may assume $0\in G$ by translating $G$ if necessary and, by composing $f$ with an automorphism of $\mathbb{B}^2$, we may also assume $f(0)=0$. Thus, at $w=0$, we have \begin{equation*} K_G(z,0)= \frac{2!}{\pi^2}\det\bigl(Jf(z)\bigr)\cdot\oo{\det\bigl(Jf(0)\bigr)}. \end{equation*} It follows that \begin{equation}\label{det Jf eq} \det\bigl(Jf(z)\bigr)=\det\bigl(Jf(0)\bigr) \frac{K_G(z,0)}{K_G(0,0)}. \end{equation} In particular, this implies that $K_G(z,0)\neq 0$ for any $z\in G$. We evaluate \eqref{K_G transformation eq} on the diagonal $w=z$ and use \eqref{det Jf eq} to obtain \begin{equation*} K_G(z,z) =\frac{2!}{\pi^2}\bigl|\det\bigl(Jf(0)\bigr)\bigr|^2\frac{|K_G(z,0)|^2}{|K_G(0,0)|^2}\frac{1}{\bigl(1-\|f(z)\|^2\bigr)^3}. \end{equation*} Taking the logarithm of both sides yields \begin{equation*} \log K_G(z,z)+3\log\bigl(1-\|f(z)\|^2\bigr) =\log\frac{2!}{\pi^2}+\log\bigl|\det\bigl(Jf(0)\bigr)\bigr|^2+ \log|K_G(z,0)|^2-\log|K_G(0,0)|^2.
\end{equation*} For $j=1,2$, we apply the derivative $\frac{\partial}{\partial \oo{z_j}}$ to both sides and obtain \begin{align*} \frac{1}{K_G(z,z)}\frac{\partial K_G(z,z)}{\partial \oo{z_j}}-\frac{3}{1-\|f(z)\|^2}\sum_{i=1}^2\frac{\partial \oo{f_i(z)}}{\partial \oo{z_j}}f_i(z) =\frac{1}{\oo{K_G(z,0)}}\frac{\partial \oo{K_G(z,0)}}{\partial \oo{z_j}}. \end{align*} Complexifying the above equation and evaluating it at $w=0$, after rearrangement, we obtain \begin{align*} \sum_{i=1}^2\frac{\partial \oo{f_i}}{\partial \oo{z_j}}(0)f_i(z)=\frac{1}{3}\left(\frac{1}{K_G(z,0)}\frac{\partial K_G}{\partial \oo{z_j}}(z,0)-\frac{1}{\oo{K_G(0,0)}}\frac{\partial \oo{K_G}}{\partial \oo{z_j}}(0,0)\right). \end{align*} Note that this is a linear system for $f(z)=(f_1(z),f_2(z))$ whose coefficient matrix $\oo{Jf(0)}$ is non-singular. By solving this linear system for $f$, it is immediately clear that the rationality of $K_G$ implies that of $f$. \end{proof} \begin{rmk} Corollary \ref{main corollary intro} implies, in particular, that the Burns--Shnider domains in $\mathbb{C}^2$ (see page 244 in \cite{BuSh}) cannot have algebraic Bergman kernels. In fact, this holds for any Burns--Shnider domain in $\mathbb{C}^n$ for $n \geq 2$, which can be seen as follows. By Proposition \ref{boundary algebraic prop}, if the Bergman kernel were algebraic, then the boundary would be Nash algebraic. While this can be seen to fail by inspection, a contradiction would also be reached by the Huang--Ji Riemann mapping theorem \cite{HuJi02}, since the boundary of a Burns--Shnider domain is spherical while the domain itself is not biholomorphic to the unit ball. \end{rmk} \section{Counterexample in higher dimension}\label{Sec counterexample} In this section, we construct a $3$-dimensional reduced Stein space $G$ with only one normal singularity and compact, smooth, strongly pseudoconvex boundary, realized as a relatively compact domain in a complex algebraic variety $V$ in $\mathbb{C}^4$.
We will show that its Bergman kernel is algebraic, while $G$ is not biholomorphic to any finite ball quotient $\mathbb{B}^n/\Gamma$, which shows that Theorem \ref{main theorem intro} cannot hold in higher dimensions. Let $G$ be defined as \begin{equation*} G=\bigl\{w=(w_1,w_2,w_3,w_4)\in\mathbb{C}^4: |w_1|^2+|w_2|^2+|w_3|^2+|w_4|^2<1,\quad w_1w_4=w_2w_3 \bigr\}. \end{equation*} Then $G$ is a relatively compact domain in the complex algebraic variety \begin{equation}\label{variety V} V=\bigl\{ w\in \mathbb{C}^4: w_1w_4=w_2w_3\bigr\}. \end{equation} Since $G$ is a closed algebraic subvariety of $\mathbb{B}^4\subset \mathbb{C}^4$, $G$ is a reduced Stein space. Note that $0$ is the only singularity of $V$. Moreover, it is a normal singularity: $V$ is a hypersurface whose singular locus $\{0\}$ has codimension $3>2$ in $V$ (see \cite{sha}). It is also easy to verify that $G$ has smooth strongly pseudoconvex boundary in $V$. \begin{prop}\label{prop:nonspherical} The boundary $M=\partial G$ of $G$ is homogeneous and non-spherical. \end{prop} \begin{proof} Consider the product complex manifold $\mathbb{CP}^1\times \mathbb{CP}^1$. For $j=1,2$, let $\pi_j: \mathbb{CP}^1\times \mathbb{CP}^1\rightarrow \mathbb{CP}^1$ be the projection map to the $j$-th component and let $(L_0,h_0)\rightarrow \mathbb{CP}^1$ be the tautological line bundle $L_0$ with its standard Hermitian metric $h_0$. We set the Hermitian line bundle $(L,h)$ over $\mathbb{CP}^1\times \mathbb{CP}^1$ to be: \begin{equation*} (L,h):=\pi_1^*(L_0, h_0)\otimes \pi_2^*(L_0, h_0). \end{equation*} We begin the proof with the following claim. {\bf Claim 1.} Let $(L,h)\rightarrow \mathbb{CP}^1\times\mathbb{CP}^1$ be as above and let $S(L)\rightarrow \mathbb{CP}^1\times\mathbb{CP}^1$ be its unit circle bundle. Then $M$ is CR diffeomorphic to $S(L)$ via the restriction of a biholomorphic map.
\begin{proof}[Proof of Claim 1] Note that the circle bundle $S(L)\rightarrow\mathbb{CP}^1\times \mathbb{CP}^1$ can be written as \begin{equation}\label{circle bundle of CP1 times CP1} S(L)=\left\{\Bigl(\lambda(\zeta_1,z_1)\otimes(\zeta_2,z_2), [\zeta_1,z_1],[\zeta_2,z_2]\Bigr): \begin{array}{l} [\zeta_1,z_1]\in \mathbb{CP}^1, \quad [\zeta_2,z_2]\in \mathbb{CP}^1\\ |\lambda|^2(|\zeta_1|^2+|z_1|^2)(|\zeta_2|^2+|z_2|^2)=1 \end{array} \right\}. \end{equation} Define $F: L\rightarrow \mathbb{C}^4$ as \begin{equation}\label{eqnltom} F\Bigl(\lambda(\zeta_1,z_1)\otimes(\zeta_2,z_2), [\zeta_1,z_1],[\zeta_2,z_2]\Bigr)=\Bigl( \lambda\zeta_1\zeta_2, \lambda z_1\zeta_2,\lambda\zeta_1z_2,\lambda z_1z_2\Bigr). \end{equation} Then the map $F$ gives a biholomorphism that sends a neighborhood of $S(L)$ in $L$ to a neighborhood of $M$ in $V\subset \mathbb{C}^4$. This proves the claim. \end{proof} Note that $S(L)$ is homogeneous (see \cite{EnZh}) and non-spherical by Theorem 12 in \cite{Wang}. Thus, $M$ is homogeneous and non-spherical. \end{proof} \begin{prop} The Bergman kernel form $K_G$ of $G$ is algebraic. \end{prop} \begin{proof} Set \begin{equation}\label{domain Omega} \Omega:=\bigl\{ (\lambda, z)=(\lambda,z_1,z_2)\in \mathbb{C}^3: |\lambda|^2(1+|z_1|^2)(1+|z_2|^2)<1 \bigr\}. \end{equation} Note that $\Omega$ is an unbounded domain with smooth boundary in $\mathbb{C}^3$. Moreover, $\Omega$ has a rational Bergman kernel form $K_{\Omega}$ (see Appendix \ref{Sec Appendix} for a proof of this fact). Define the map $F: \mathbb{C}^3\rightarrow \mathbb{C}^4$ as \begin{equation*} F(\lambda,z_1,z_2):=(\lambda,\lambda z_1, \lambda z_2, \lambda z_1z_2). \end{equation*} We note that $F(\mathbb{C}^3)$ is contained in the variety $V$ defined in \eqref{variety V}, and that $F$ is a holomorphic embedding on $\mathbb{C}^3-\{\lambda=0\}$.
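The containments $F(S(L))\subset V$ and $F(\mathbb{C}^3)\subset V$ amount to simple polynomial identities, which can be checked symbolically; the following is a minimal sketch with sympy (variable names are ours, with real placeholders standing in for the moduli):

```python
import sympy as sp

lam, z1, z2, zeta1, zeta2 = sp.symbols('lam z1 z2 zeta1 zeta2')

# The circle-bundle map (eqnltom): its image satisfies w1*w4 = w2*w3,
# i.e., it lands in the variety V.
w = (lam*zeta1*zeta2, lam*z1*zeta2, lam*zeta1*z2, lam*z1*z2)
assert sp.expand(w[0]*w[3] - w[1]*w[2]) == 0

# |w1|^2+...+|w4|^2 factors as |lam|^2(|zeta1|^2+|z1|^2)(|zeta2|^2+|z2|^2);
# we check the underlying identity with moduli replaced by real symbols
# l = |lam|, a = |zeta1|, b = |z1|, c = |zeta2|, d = |z2|.
l, a, b, c, d = sp.symbols('l a b c d', nonnegative=True)
lhs = (l*a*c)**2 + (l*b*c)**2 + (l*a*d)**2 + (l*b*d)**2
assert sp.expand(lhs - l**2*(a**2 + b**2)*(c**2 + d**2)) == 0

# The map F(lam, z1, z2) = (lam, lam*z1, lam*z2, lam*z1*z2) also lands in V.
u = (lam, lam*z1, lam*z2, lam*z1*z2)
assert sp.expand(u[0]*u[3] - u[1]*u[2]) == 0
```

The second identity (with $a=c=1$) also gives $\sum_i|u_i|^2=|\lambda|^2(1+|z_1|^2)(1+|z_2|^2)$, which is why the defining inequality of $\Omega$ corresponds to that of $G$ under $F$.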
Moreover, $F(\Omega)\subset G$ and \begin{equation*} F: \widetilde{\Omega}:=\Omega-\{\lambda=0\}\rightarrow \widetilde{G}:=G-\{w_1=0\} \end{equation*} is a biholomorphism. By Theorem \ref{Kobayashi thm}, the Bergman kernel form $K_{\widetilde{\Omega}}$ of $\widetilde{\Omega}$ is the restriction (pullback) of $K_{\Omega}$ to $\widetilde{\Omega}$. Thus, $K_{\widetilde{\Omega}}$ is rational. By the transformation law \eqref{BK transformation eq}, we have \begin{equation*} K_{\widetilde{\Omega}}=(F,F)^*K_{\widetilde{G}}. \end{equation*} This implies that $K_{\widetilde{G}}$ is algebraic (see the equivalent condition (c) in \S 2.1), and thus $K_G$ is also algebraic by Theorem \ref{Kobayashi thm}. \end{proof} Before we prove that $G$ is not biholomorphic to any finite ball quotient, we pause to study the following bounded domain $U$ in $\mathbb{C}^3$: \begin{equation*} U:=\Bigl\{ (w_1,w_2,w_3)\in \mathbb{C}^3: |w_1|^4+|w_1|^2(|w_2|^2+|w_3|^2)+|w_2w_3|^2<|w_1|^2 \Bigr\}. \end{equation*} \begin{prop}\label{propn6} The domain $U$ has algebraic Bergman kernel and its boundary is non-spherical at every smooth boundary point. \end{prop} \begin{proof} Let $\pi: \mathbb{C}^4\rightarrow \mathbb{C}^3$ be the projection map defined by \begin{equation*} \pi(w_1,w_2,w_3,w_4):=(w_1,w_2,w_3). \end{equation*} Let $\oo{G}$ be the closure of $G$ in $\mathbb{C}^4$. Then the image of $\oo{G}$ under the projection $\pi$ is \begin{align*} \widehat{U}:=\pi(\oo{G})=&\bigl\{ (w_1,w_2,w_3)\in \mathbb{C}^3: |w_1|^4+|w_1|^2(|w_2|^2+|w_3|^2)+|w_2w_3|^2 \leq |w_1|^2, w_1\neq 0 \bigr\} \\ &\quad\cup \bigl\{ (0,w_2,w_3)\in \mathbb{C}^3: |w_2|^2+|w_3|^2\leq 1, w_2w_3=0 \bigr\} \\ =&\bigl\{ (w_1,w_2,w_3)\in \mathbb{C}^3: |w_1|^4+|w_1|^2(|w_2|^2+|w_3|^2)+|w_2w_3|^2 \leq |w_1|^2, |w_2|^2+|w_3|^2\leq 1 \bigr\}.
\end{align*} Note that $\widehat{U}^{\mathrm{o}}=U$ and $\widehat{U}=\oo{U}$, where $\widehat{U}^{\mathrm{o}}$ denotes the interior of $\widehat{U}$. But $U \neq \pi(G)$. On the other hand, if we remove the variety $\{w_1=0\}$, then the projection map \begin{equation*} \pi: G-\{w_1=0\}\rightarrow U \end{equation*} is an algebraic biholomorphism. Consequently, by Theorem \ref{Kobayashi thm}, the Bergman kernel form $K_U$ of $U$ is algebraic. This proves the first part of the proposition. To prove the second part of the proposition (i.e., the non-sphericity), we observe that the boundary $\partial U$ of $U$ is given by \begin{align*} \partial U =&\bigl\{ (w_1,w_2,w_3)\in \mathbb{C}^3: |w_1|^4+|w_1|^2(|w_2|^2+|w_3|^2)+|w_2w_3|^2 = |w_1|^2, w_1\neq 0 \bigr\} \\ &\quad\cup \bigl\{ (0,w_2,w_3)\in \mathbb{C}^3: |w_2|^2+|w_3|^2\leq 1, w_2w_3=0 \bigr\} \\ =&\bigl\{ (w_1,w_2,w_3)\in \mathbb{C}^3: |w_1|^4+|w_1|^2(|w_2|^2+|w_3|^2)+|w_2w_3|^2 = |w_1|^2, |w_2|^2+|w_3|^2\leq 1 \bigr\}. \end{align*} Write \begin{equation*} \partial U=\bigl( \partial U \cap \{w_1\neq 0 \} \bigr)\cup \bigl( \partial U \cap \{w_1 = 0 \} \bigr). \end{equation*} Since the projection map $\pi$ is a biholomorphism from $\oo{G}-\{w_1=0\}$ to $\widehat{U}-\{w_1=0\}$, every point $p\in \partial U \cap \{w_1\neq 0\}$ is a smooth point of $\partial U$, and, moreover, $\partial U$ is strictly pseudoconvex and non-spherical at $p$. We note that a defining function for $\partial U$ near $p$ is given by \begin{equation*} \rho=|w_1|^4+|w_1|^2(|w_2|^2+|w_3|^2)+|w_2w_3|^2-|w_1|^2. \end{equation*} Furthermore, it is easy to verify that no point $q \in \partial U \cap \{w_1 = 0 \}$ is a smooth boundary point of $U$. This proves the second part of the assertion. \end{proof} We are now ready to show that $G$ is indeed a counterexample to the conclusion of Theorem \ref{main theorem intro} in three dimensions.
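Before doing so, we record the algebraic identity behind the defining inequality of $U$: on $V\cap\{w_1\neq 0\}$ one has $w_4=w_2w_3/w_1$, and multiplying $\sum_{i=1}^4|w_i|^2<1$ through by $|w_1|^2$ yields exactly the inequality defining $U$. A quick symbolic check (a sketch only; moduli are replaced by positive placeholders $r_1,r_2,r_3$ of our choosing):

```python
import sympy as sp

# r1, r2, r3 stand for |w1|, |w2|, |w3|; on V \ {w1 = 0}, |w4| = r2*r3/r1.
r1, r2, r3 = sp.symbols('r1 r2 r3', positive=True)

# |w1|^2 * (|w1|^2+|w2|^2+|w3|^2+|w4|^2)
lhs = r1**2 * (r1**2 + r2**2 + r3**2 + (r2*r3/r1)**2)
# |w1|^4 + |w1|^2(|w2|^2+|w3|^2) + |w2 w3|^2
rhs = r1**4 + r1**2*(r2**2 + r3**2) + (r2*r3)**2
assert sp.simplify(lhs - rhs) == 0
```

Thus, for $w_1\neq 0$, the inequality $\sum_i|w_i|^2<1$ is equivalent to $|w_1|^4+|w_1|^2(|w_2|^2+|w_3|^2)+|w_2w_3|^2<|w_1|^2$.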
\begin{prop} $G$ is not biholomorphic to any finite ball quotient. \end{prop} \begin{proof} Seeking a contradiction, we suppose $G$ is biholomorphic to a finite ball quotient $\mathbb{B}^3/\Gamma$, where $\Gamma \subset U(3)$ is a finite unitary group. We realize $\mathbb{B}^3/\Gamma$ as the image $G_0\subset \mathbb{C}^N$ of $\mathbb{B}^3$ under the basic map $Q$ associated to $\Gamma$, where $Q=(p_1,\cdots, p_N): \mathbb{C}^3\rightarrow \mathbb{C}^N$ gives a proper map from $\mathbb{B}^3$ to $G_0.$ Let $F$ be a biholomorphism from $G_0\cong \mathbb{B}^3/\Gamma$ to $G$. Then there is an analytic variety $W_0\subset G_0$ such that \begin{equation*} F: G_0-W_0\rightarrow G-\{w_1=0\} \mbox{ is a biholomorphism}. \end{equation*} The set $W:=Q^{-1}(W_0)$ is then an analytic variety, and $Q: \mathbb{B}^3-W\rightarrow G_0-W_0$ is proper and onto. Set \begin{equation*} f:=\pi\circ F\circ Q : \mathbb{B}^3-W \rightarrow U=\pi(G-\{w_1=0\}), \end{equation*} where $\pi$ is the projection defined in the proof of Proposition \ref{propn6} and is a biholomorphism from $G-\{w_1=0\}$ to $U$. Note that $f$ is proper. Since $U\subset \mathbb{C}^3$, we can write $f$ as $(f_1,f_2,f_3)$. {\bf Claim 2.} There is a sequence $\{\zeta_i\}\subset \mathbb{B}^3-W$ with $\zeta_i\rightarrow \zeta^*\in \partial \mathbb{B}^3-\oo{W}$ such that \begin{equation*} f(\zeta_i) \rightarrow p^*\in \partial U \cap \{w_1\neq 0\}. \end{equation*} \begin{proof}[Proof of Claim 2] Suppose not. Then for any $\{\zeta_i\}\subset \mathbb{B}^3-W$ with $\zeta_i\rightarrow \zeta^* \in \partial \mathbb{B}^3-\oo{W}$, every convergent subsequence of $f(\zeta_i)$ converges to some point in $\partial U\cap \{w_1=0\}$. That is to say, if $f(\zeta_{i_k})$ is convergent, then $f_1(\zeta_{i_k})\rightarrow 0$. Note that $U$ is bounded. Thus, $f_1(\zeta_i)\rightarrow 0$ for any $\{\zeta_i\}\subset \mathbb{B}^3-W$ with $\zeta_i\rightarrow\zeta^*\in \partial\mathbb{B}^3-\oo{W}$.
By a standard argument using analytic disks attached to $\partial\mathbb{B}^3-\oo{W}$, we see that \begin{equation*} f_1=0 \quad \mbox{ on } \mathbb{B}^3-W. \end{equation*} This is a contradiction. \end{proof} Let $\zeta_i, \zeta^*$ and $p^*$ be as in Claim 2. Note that $\zeta^*$ is a smooth strictly pseudoconvex boundary point of $\mathbb{B}^3-W$, and $p^*$ is a smooth strictly pseudoconvex boundary point of $U$ (see the proof of Proposition \ref{propn6}). It follows from \cite{FR} (see page 239) that $f$ extends to a H\"older-$\frac{1}{2}$ continuous map on a neighborhood of $\zeta^*$ in $\oo{\mathbb{B}^3}$. Since $f: \mathbb{B}^3-W\to U$ is proper, its extension to the boundary is a (H\"older-$\frac{1}{2}$) continuous, nonconstant CR map sending a piece of $\partial \mathbb{B}^3$ containing $\zeta^*$ to a piece of $\partial U$ containing $p^*$. By \cite{PT}, $f$ extends holomorphically to a neighborhood of $\zeta^*$, since both boundaries are real analytic (in fact, real-algebraic). Now, since $f$ is non-constant and sends a strongly pseudoconvex hypersurface to another, it must be a CR diffeomorphism, which would mean that $\partial U$ is locally spherical near $p^*$. This contradicts Proposition \ref{propn6}. \end{proof} We conclude this section with a couple of remarks. \begin{rmk} Since the Bergman kernel forms of $G$ and $U$ are algebraic, it follows from the proof of Theorem \ref{main theorem intro} in Section 5 (see Step 2) that the coefficients of the logarithmic term in Fefferman's expansions of $K_G$ and $K_U$ both vanish to infinite order at every smooth boundary point. The reduced normal Stein space $G$ gives the counterexample mentioned in Remark \ref{rmk counterexample intro}. The domain $U\subset \mathbb{C}^3$ establishes the following fact, which implies that the Ramadanov conjecture fails for non-smooth domains in higher dimensions.
{\it{There exists a bounded domain in $\mathbb{C}^3$ with smooth, real-algebraic boundary away from a $1$-dimensional complex curve such that every smooth boundary point is strongly pseudoconvex and non-spherical, while the coefficient of the logarithmic term in Fefferman's asymptotic expansion of the Bergman kernel vanishes to infinite order at every smooth boundary point.}} \end{rmk} \begin{rmk} Using the same idea as in the above example, we can actually construct significantly more general examples of higher-dimensional domains in affine algebraic varieties $V\subset \mathbb{C}^N$ with similar properties. Indeed, let $X$ be a compact Hermitian symmetric space of rank at least $2$. Write \begin{equation*} X=X_1\times \cdots \times X_t, \quad t\geq 1, \end{equation*} where $X_1, \cdots, X_t$ are the irreducible factors of $X$. Fix a K\"ahler--Einstein metric $\omega_j$ on $X_j$ and let $(\widehat{L}_j, \widehat{h}_j)$ be the top exterior power $\Lambda^nT^{1,0}$ of the holomorphic tangent bundle of $X_j$ with the metric induced from $\omega_j$. Then there is a homogeneous line bundle $(L_j,h_j)$ with a Hermitian metric $h_j$ such that its $p_j$-th tensor power gives $(\widehat{L}_j, \widehat{h}_j)$, where $p_j$ is the genus of $X_j$ (see \cite{EnZh} for more details). Let $\pi_j$ be the projection from $X$ onto the $j$-th factor $X_j$ for $1\leq j\leq t$. Define the line bundle $L$ over $X$ with a Hermitian metric $h$ to be: \begin{equation*} (L,h):=\pi_1^*(L_1, h_1)\otimes \cdots \otimes \pi_t^*(L_t, h_t). \end{equation*} Let $(L^*, h^*)$ be the dual line bundle of $(L,h)$. Write $D(L^*)$ and $S(L^*)$ for the associated unit disc and unit circle bundles. The specific example above is the special case $t=2$ and $X_1=X_2=\mathbb{CP}^1$. Proceeding as in that example, one finds that there is a canonical way to map $L^*$ to $\mathbb{C}^N$, for some $N$, induced by the minimal embedding of $X$ into some complex projective space (see \cite{FHX}).
If we denote this map $L^*\to \mathbb{C}^N$ by $F$ (in the example above, the map $F$ is as given by \eqref{eqnltom}), then $F$ sends the zero section of $L^*$ to the point $0$ and is a holomorphic embedding away from the zero section. It follows that the image of $D(L^*)$ under the map $F$ is a domain $G$ with a singular point at $0$. The boundary of $G$ is given by the image of $S(L^*)$. It is non-spherical, since $S(L^*)$ is not spherical by \cite{Wang}. Moreover, as the Bergman kernel form of $D(L^*)$ is algebraic by \cite{EnZh}, the Bergman kernel form of $G$ is also algebraic by Theorem \ref{Kobayashi thm}. \end{rmk} \section{Appendix}\label{Sec Appendix} In this section, we will prove the claim that the Bergman kernel of the domain $\Omega$ in $\mathbb{C}^3$ as defined in \eqref{domain Omega} is rational. This fact actually follows from a general theorem in \cite{EnZh} (see Theorem 3.3 in \cite{EnZh} and its proof). We include a proof in this particular example for the convenience of the reader and to keep the paper self-contained. In fact, we shall compute the Bergman kernel of $\Omega$ explicitly (Theorem \ref{thm:OmegaBergman} below). Recall that \begin{equation}\label{eq:Omega} \Omega:=\bigl\{ (z,\lambda)=(z_1,z_2,\lambda)\in \mathbb{C}^3: |\lambda|^2(1+|z_1|^2)(1+|z_2|^2)<1 \bigr\}. \end{equation} We let \begin{equation*} h(z):=(1+|z_1|^2)(1+|z_2|^2), \end{equation*} and denote the defining function by \begin{equation*} \rho(z,\lambda):=|\lambda|^2(1+|z_1|^2)(1+|z_2|^2)-1. \end{equation*} We recall that the Bergman space on $\Omega$ is defined as \begin{equation} A^2(\Omega):=\bigl\{f(z,\lambda) \mbox{ is holomorphic in } \Omega: i\int_{\Omega} |f(z,\lambda)|^2 dz\wedge d\lambda\wedge d\oo{z} \wedge d\oo{\lambda}<\infty \bigr\}, \end{equation} and let \begin{equation} A^2_m(\Omega):=\bigl\{ f(z) \text{ {\rm is holomorphic in $\mathbb{C}^2$}}\colon \lambda^m f(z)\in A^2(\Omega) \bigr\}.
\end{equation} Note that the $L^2$ norm of $\lambda^mf(z)$ is given by \begin{align*} \|\lambda^m f(z)\|^2=&i\int_{\Omega} |\lambda|^{2m}|f(z)|^2 dz\wedge d\lambda \wedge d\oo{z}\wedge d\oo{\lambda} \\ =&\int_{z\in \mathbb{C}^2}\Bigl(\int_{|\lambda|^2<h(z)^{-1}} |\lambda|^{2m} i\,d\lambda \wedge d\oo{\lambda} \Bigr) |f(z)|^2dz\wedge d\oo{z}. \end{align*} We can rewrite the inner integral as follows (substituting $s=r^2$): \begin{align*} \int_{|\lambda|^2<h(z)^{-1}} |\lambda|^{2m} i\,d\lambda \wedge d\oo{\lambda} =\int_0^{2\pi}\int_0^{\frac{1}{\sqrt{h(z)}}} r^{2m}2r \,drd\theta =2\pi\int_0^{\frac{1}{h(z)}} s^{m} ds =\frac{2\pi}{m+1} h(z)^{-(m+1)}. \end{align*} Thus, \begin{equation}\label{norm} \|\lambda^m f(z)\|^2=\frac{2\pi}{m+1} \int_{\mathbb{C}^2} |f(z)|^2 h(z)^{-(m+1)} dz\wedge d\oo{z}. \end{equation} If we introduce the weighted Bergman space \begin{equation*} A^2(\mathbb{C}^2, h^{-(m+1)})=\bigl\{ f(z) \mbox{ is holomorphic in }\mathbb{C}^2: \int_{\mathbb{C}^2} |f(z)|^2 h(z)^{-(m+1)} dz\wedge d\oo{z}<\infty \bigr\}, \end{equation*} then \begin{equation}\label{weighted L2 space} A^2_m(\Omega)=\bigl\{\lambda^m f(z): f(z) \in A^2(\mathbb{C}^2, h^{-(m+1)}) \bigr\}. \end{equation} We note that $A_{m_1}^2(\Omega)$ and $A_{m_2}^2(\Omega)$ are orthogonal to each other if $m_1\neq m_2$. We can therefore orthogonally decompose $A^2(\Omega)$ into a direct sum as follows. \begin{lemma}\label{orthogonal decomposition} \begin{equation*} A^2(\Omega)=\bigoplus_{m=0}^{\infty} A_m^2(\Omega). \end{equation*} \end{lemma} \begin{proof} Let $f(z,\lambda)\in A^2(\Omega)$. If we fix $z\in \mathbb{C}^2$, then $\lambda$ ranges over the disc $\{\lambda\in \mathbb{C}: |\lambda|^2<h(z)^{-1}\}$.
By taking the Taylor expansion at $\lambda=0$, we obtain \begin{equation*} f(z,\lambda)=\sum_{j=0}^{\infty} a_j(z)\lambda^j, \quad \mbox{ for } |\lambda|^2< h(z)^{-1}, \end{equation*} where each $a_j(z)$ is holomorphic on $\mathbb{C}^2$. We shall first write $\|f(z,\lambda)\|^2$ in terms of $\{a_j(z)\}_{j=0}^{\infty}$. We have \begin{align*} \|f(z,\lambda)\|^2=&\int_{z\in \mathbb{C}^2}\left(\int_{|\lambda|^2<h(z)^{-1}} |f(z,\lambda)|^2 i\,d\lambda\wedge d\oo{\lambda} \right) dz\wedge d\oo{z}. \end{align*} The inner integral can be computed as \begin{align*} \int_{|\lambda|^2<h(z)^{-1}} |f(z,\lambda)|^2 i\,d\lambda \wedge d\oo{\lambda} =&\lim_{\varepsilon\rightarrow 0^+}\sum_{s=0}^{\infty}\sum_{t=0}^{\infty}a_s(z)\oo{a_t(z)}\int_{|\lambda|^2<h(z)^{-1}-\varepsilon} \lambda^s\oo{\lambda^t} i\,d\lambda \wedge d\oo{\lambda} \\ =&\lim_{\varepsilon\rightarrow 0^+}\sum_{j=0}^{\infty}\frac{2\pi }{j+1}|a_j(z)|^2\bigl(h(z)^{-1}-\varepsilon\bigr)^{j+1} \\ =&\sum_{j=0}^{\infty}\frac{2\pi }{j+1}|a_j(z)|^2 h(z)^{-(j+1)}, \end{align*} where the last equality follows from the monotone convergence theorem. Therefore, \begin{align*} \|f(z,\lambda)\|^2=&\sum_{j=0}^{\infty}\frac{2\pi }{j+1}\int_{z\in \mathbb{C}^2}|a_j(z)|^2 h(z)^{-(j+1)} dz\wedge d\oo{z}, \end{align*} which immediately implies that each $a_j(z)$ is contained in $A^2(\mathbb{C}^2, h^{-(j+1)})$. Suppose $f(z,\lambda)\perp A^2_m(\Omega)$. Then for any $\lambda^m g(z)\in A^2_m(\Omega)$, \begin{align*} 0=&i\int_{\Omega} f(z,\lambda)\oo{\lambda^m g(z)} dz\wedge d\lambda \wedge d\oo{z}\wedge d\oo{\lambda} \\ =&\int_{z\in \mathbb{C}^2}\left(\int_{|\lambda|^2<h(z)^{-1}} f(z,\lambda)\oo{\lambda^m} \,i\,d\lambda \wedge d\oo{\lambda} \right) \oo{g(z)}dz\wedge d\oo{z}.
\end{align*} The inner integral can be computed as follows: \begin{align*} \int_{|\lambda|^2<h(z)^{-1}} f(z,\lambda)\oo{\lambda^m} i\,d\lambda \wedge d\oo{\lambda} =&\lim_{\varepsilon\rightarrow 0^+}\sum_{j=0}^{\infty}a_j(z)\int_{|\lambda|^2<h(z)^{-1}-\varepsilon} \lambda^j\oo{\lambda^m} i\,d\lambda \wedge d\oo{\lambda} \\ =&\lim_{\varepsilon\rightarrow 0^+}\frac{2\pi }{m+1}a_m(z)\bigl(h(z)^{-1}-\varepsilon\bigr)^{m+1} \\ =&\frac{2\pi}{m+1} a_m(z) h(z)^{-(m+1)}. \end{align*} Therefore, \begin{align*} 0=&\frac{2\pi}{m+1}\int_{z\in \mathbb{C}^2} a_m(z) \oo{g(z)}h(z)^{-(m+1)}dz\wedge d\oo{z}, \quad \mbox{ for any } g\in A^2(\mathbb{C}^2, h^{-(m+1)}). \end{align*} Since $a_m$ belongs to the space $A^2(\mathbb{C}^2, h^{-(m+1)})$, taking $g=a_m$ gives $a_m=0$. Therefore, the direct sum of the $A_m^2(\Omega)$ for $0\leq m<\infty$ generates $A^2(\Omega)$. \end{proof} Since $A^2_m(\Omega)$ can be identified with the weighted Bergman space $A^2(\mathbb{C}^2, h^{-(m+1)})$ as in \eqref{weighted L2 space}, we can find an explicit orthonormal basis and compute its reproducing kernel. \begin{prop}\label{K_m^* CPn} Let $m \geq 1.$ The reproducing kernel of $A^2_m(\Omega)$ is \begin{equation} K_m^*(z,\lambda,\oo{w},\oo{\tau}) =\frac{(m+1)m^2}{(2\pi)^3}\lambda^m\oo{\tau}^m(1+z_1\oo{w_1})^{m-1}(1+z_2\oo{w_2})^{m-1}, \end{equation} where $(z,\lambda), (w,\tau)$ are points in $\Omega$. \end{prop} \begin{proof} Denote \begin{equation*} z^{\alpha}=z_1^{\alpha_1}z_2^{\alpha_2}, \quad \mbox{ for any multi-index } \alpha=(\alpha_1,\alpha_2)\in \mathbb{Z}_{\geq 0}^2. \end{equation*} By \eqref{weighted L2 space}, since $\Omega$ is Reinhardt, it is easy to see that \begin{equation*} \bigl\{ \lambda^m z^{\alpha}: z^{\alpha} \in A^2(\mathbb{C}^2, h^{-(m+1)}) \bigr\} \end{equation*} forms an orthogonal basis of $A^2_m(\Omega)$. We shall compute the norm of each $\lambda^m z^{\alpha}$.
Using \eqref{norm}, we have
\begin{align*}
\|\lambda^m z^{\alpha}\|^2
=&\frac{2\pi}{m+1} \int_{\mathbb{C}^2} |z^{\alpha}|^2 (1+|z_1|^2)^{-(m+1)}(1+|z_2|^2)^{-(m+1)}\, dz\wedge d\oo{z} \\
=&\frac{2\pi}{m+1} \int_{\mathbb{C}} |z_1|^{2\alpha_1} (1+|z_1|^2)^{-(m+1)}\,i\, dz_1\wedge d\oo{z_1} \cdot\int_{\mathbb{C}} |z_2|^{2\alpha_2} (1+|z_2|^2)^{-(m+1)}\,i\, dz_2\wedge d\oo{z_2} \\
=&\frac{(2\pi)^3}{m+1} \int_0^{\infty} r_1^{\alpha_1} (1+r_1)^{-(m+1)}\, dr_1 \cdot\int_0^{\infty} r_2^{\alpha_2} (1+r_2)^{-(m+1)}\, dr_2.
\end{align*}
By the elementary integral identity
\begin{equation}\label{element integral}
\int_0^{\infty} \frac{r^p}{(1+r)^q}\,dr=\frac{(q-p-2)!\,p!}{(q-1)!}, \quad \mbox{ for any nonnegative integers $p, q$ with } q\geq p+2,
\end{equation}
we get
\begin{equation*}
\|\lambda^m z^{\alpha}\|^2=\begin{dcases} \frac{(2\pi)^3}{m+1}\frac{(m-\alpha_1-1)!(m-\alpha_2-1)!\alpha!}{m!^2} & \mbox{if }\alpha_1, \alpha_2\leq m-1, \\ +\infty & \mbox{otherwise}. \end{dcases}
\end{equation*}
Thus, $\bigl\{ \frac{\lambda^m z^{\alpha}}{\|\lambda^mz^{\alpha}\|}: 0\leq \alpha_1, \alpha_2\leq m-1 \bigr\}$ is an orthonormal basis of $A^2_m(\Omega)$, and the reproducing kernel of $A^2_m(\Omega)$ is given by
\begin{align*}
K_m^*(z,\lambda,\oo{w},\oo{\tau})
&=\sum_{0\leq \alpha_1, \alpha_2\leq m-1} \frac{z^{\alpha}\lambda^m\oo{w}^{\alpha}\oo{\tau}^m}{\|z^{\alpha}\lambda^m\|^2}\\
&=\frac{(m+1)m^2}{(2\pi)^3}\lambda^m\oo{\tau}^m\sum_{\alpha_1=0}^{m-1}\binom{m-1}{\alpha_1}z_1^{\alpha_1}\oo{w_1}^{\alpha_1} \sum_{\alpha_2=0}^{m-1}\binom{m-1}{\alpha_2}z_2^{\alpha_2}\oo{w_2}^{\alpha_2} \\
&=\frac{(m+1)m^2}{(2\pi)^3}\lambda^m\oo{\tau}^m(1+z_1\oo{w_1})^{m-1}(1+z_2\oo{w_2})^{m-1}.
\end{align*}
\end{proof}

Now we are ready to compute the Bergman kernel form of $\Omega$.
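Before doing so, note that the elementary identity \eqref{element integral} is a Beta-function evaluation: substituting $r=t/(1-t)$ turns the left-hand side into $\int_0^1 t^p(1-t)^{q-p-2}\,dt$, so a quick numerical confirmation is straightforward. A sketch (illustration only; the sample values of $p$, $q$ below are arbitrary test choices):

```python
import math

def integral_lhs(p, q, n=20000):
    """Midpoint approximation of the integral of r^p (1+r)^(-q) over (0, infinity).

    The substitution r = t/(1-t) maps (0, infinity) onto (0, 1) and turns the
    integrand into the smooth polynomial t^p (1-t)^(q-p-2)."""
    total, dt = 0.0, 1.0 / n
    for i in range(n):
        t = (i + 0.5) * dt
        total += t ** p * (1 - t) ** (q - p - 2) * dt
    return total

def integral_rhs(p, q):
    """Closed form (q-p-2)! p! / (q-1)! from the identity."""
    return math.factorial(q - p - 2) * math.factorial(p) / math.factorial(q - 1)
```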
\begin{thm}\label{thm:OmegaBergman}
The Bergman kernel form of the domain $\Omega\subset\mathbb{C}^3$ in \eqref{eq:Omega} is given by
\begin{equation*}
K_\Omega(z,\lambda,\oo{w},\oo{\tau})=i K^*(z,\lambda,\oo{w},\oo{\tau})\, dz\wedge d\lambda\wedge d\oo{w}\wedge d\oo{\tau},
\end{equation*}
where
\begin{equation*}
K^*(z,\lambda,\oo{w},\oo{\tau})=\sum_{m=1}^{\infty}\frac{(m+1)m^2}{(2\pi)^3}\lambda^m\oo{\tau}^m(1+z_1\oo{w_1})^{m-1}(1+z_2\oo{w_2})^{m-1}.
\end{equation*}
It can be written in terms of the complexified defining function
$$ \rho(z,\lambda,\oo{w},\oo{\tau})=\lambda\oo{\tau}(1+z_1\oo{w_1})(1+z_2\oo{w_2})-1 $$
as
\begin{equation*}
K^*(z,\lambda,\oo{w},\oo{\tau})=\frac{1}{(2\pi)^3}\Bigl(\frac{4\lambda\oo{\tau}}{\rho(z,\lambda,\oo{w},\oo{\tau})^{3}}+\frac{6\lambda\oo{\tau}}{\rho(z,\lambda,\oo{w},\oo{\tau})^{4}}\Bigr).
\end{equation*}
\end{thm}
\begin{proof}
By Lemma \ref{orthogonal decomposition}, we immediately get the reproducing kernel of $A^2(\Omega)$ by adding up the reproducing kernels of $A^2_m(\Omega)$ for all $m$. Since $A^2_0(\Omega)=\{0\}$, we obtain
\begin{align*}
K^*(z,\lambda,\oo{w},\oo{\tau})
=&\sum_{m=1}^{\infty}K_m^*(z,\lambda,\oo{w},\oo{\tau})\\
=&\sum_{m=0}^{\infty}\frac{(m+2)(m+1)^2}{(2\pi)^3}\lambda^{m+1}\oo{\tau}^{m+1}(1+z_1\oo{w_1})^{m}(1+z_2\oo{w_2})^{m}.
\end{align*}
It remains to write $K^*(z,\lambda,\oo{w},\oo{\tau})$ in terms of the defining function $\rho(z,\lambda,\oo{w},\oo{\tau})$. We use the Taylor expansion of $1/(1-x)^{j+1}$ for $0\leq j\leq 3$ to obtain
\begin{equation*}
\frac{1}{(-\rho(z,\lambda,\oo{w},\oo{\tau}))^{j+1}}=\frac{1}{(1-(1+z_1 \oo{w_1})(1+z_2 \oo{w_2})\lambda\oo{\tau})^{j+1}}=\sum_{m=0}^{\infty}\binom{m+j}{j}(1+z_1 \oo{w_1})^m(1+z_2 \oo{w_2})^m\lambda^m\oo{\tau}^m.
\end{equation*}
Note that $(m+2)(m+1)^2$ is a polynomial in $m$ of degree $3$.
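Since $(m+2)(m+1)^2$ and the basis elements $\binom{m+j}{j}$, $0\leq j\leq 3$, are polynomials in $m$ of degree at most $3$, the expansion carried out next is determined by finitely many values of $m$, and its coefficients can be cross-checked by brute force. A quick sketch (purely illustrative; the coefficient values $a_0=a_1=0$, $a_2=-4$, $a_3=6$ are those obtained in the proof):

```python
from math import comb

# Coefficients a_0, ..., a_3 of (m+2)(m+1)^2 in the basis {comb(m+j, j)},
# as found in the proof: a_0 = a_1 = 0, a_2 = -4, a_3 = 6.
a = [0, 0, -4, 6]

def cubic(m):
    """Left-hand side (m+2)(m+1)^2 of the expansion."""
    return (m + 2) * (m + 1) ** 2

def basis_expansion(m):
    """Right-hand side: sum of a_j * comb(m+j, j) for j = 0, ..., 3."""
    return sum(a[j] * comb(m + j, j) for j in range(4))
```

Agreement at four values of $m$ already forces the polynomial identity; checking many values of $m$ leaves no room for doubt.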
Since $\{\binom{m+j}{j}\}_{j=0}^{3}$ is a basis of the polynomials in $m$ of degree $\leq 3$, we can write
\begin{equation*}
(m+2)(m+1)^2=\sum_{j=0}^{3}a_j\binom{m+j}{j}.
\end{equation*}
One can check that the coefficients are given by $a_0=a_1=0$, $a_2=-4$ and $a_3=6$. Therefore,
\begin{align*}
K^*(z,\lambda,\oo{w},\oo{\tau})
=&\frac{1}{(2\pi)^3}\sum_{m=0}^{\infty}\sum_{j=0}^{3}a_j\binom{m+j}{j}\lambda^{m+1}\oo{\tau}^{m+1}(1+z_1\oo{w_1})^{m}(1+z_2\oo{w_2})^{m} \\
=&\frac{1}{(2\pi)^3}\sum_{j=0}^{3}a_j\frac{\lambda\oo{\tau}}{(-\rho(z,\lambda,\oo{w},\oo{\tau}))^{j+1}},
\end{align*}
and the result follows.
\end{proof}
\end{document}
\begin{document}
\author[1]{Serena Dipierro}
\author[2]{Ovidiu Savin}
\author[1]{Enrico Valdinoci}
\affil[1]{\footnotesize Department of Mathematics and Statistics, University of Western Australia, 35 Stirling Highway, WA6009 Crawley, Australia }
\affil[2]{\footnotesize Department of Mathematics, Columbia University, 2990 Broadway, New York NY 10027, USA }
\title{On divergent fractional Laplace equations\thanks{The first and third authors are members of INdAM and are supported by the Australian Research Council Discovery Project DP170104880 NEW ``Nonlocal Equations at Work''. The first author is supported by the Australian Research Council DECRA DE180100957 ``PDEs, free boundaries and applications''. The second author is supported by the National Science Foundation grant DMS-1500438. Emails: {\tt [email protected]}, {\tt [email protected]}, {\tt [email protected]} }}
\date{}
\maketitle
\begin{abstract}
We consider the divergent fractional Laplace operator introduced in~\cite{POLINOMI} and we prove three types of results. Firstly, we show that any given function can be locally shadowed by a solution of a divergent fractional Laplace equation which is also prescribed in a neighborhood of infinity. Secondly, we consider the Dirichlet problem for the divergent fractional Laplace equation, proving the existence of a solution and characterizing its multiplicity. Finally, we consider the case of nonlinear equations, obtaining a new approximation result. These results maintain their interest also in the case of functions for which the fractional Laplacian can be defined in the usual sense.
\end{abstract}

\section{Introduction}

Given~$u:\mathbb{R}^n\to\mathbb{R}$ and~$s\in(0,1)$, to define the fractional Laplacian of~$u$,
\begin{equation}\label{CLfar}
(-\Delta)^s u(x):=\lim_{\rho\searrow0}\int_{\mathbb{R}^n\setminus B_\rho(x)} \frac{u(x)-u(y)}{|x-y|^{n+2s}}\,dy,\end{equation}
one typically needs two main requisites on the function~$u$:
\begin{itemize}
\item $u$ has to be sufficiently smooth in the vicinity of~$x$, for instance~$u\in C^\gamma(B_\delta(x))$ for some~$\delta>0$ and~$\gamma>2s$,
\item $u$ needs to have a controlled growth at infinity, for instance
\begin{equation}\label{GR01d}
\int_{\mathbb{R}^n}\frac{|u(x)|}{1+|x|^{n+2s}}\,dx<+\infty.
\end{equation}
\end{itemize}
Nevertheless, in~\cite{POLINOMI} we have recently introduced a new notion of ``divergent'' fractional Laplacian, which can be used even when condition~\eqref{GR01d} is violated. This notion takes into account the case of functions with polynomial growth, for which the classical definition in~\eqref{CLfar} makes no sense, and it recovers the classical definition for functions with controlled growth such as in~\eqref{GR01d}. The notion of divergent fractional Laplacian possesses several interesting features and technical advantages, including suitable Schauder estimates in which the full H\"older norm of the solution is controlled by a suitable seminorm of the nonlinearity. Moreover, compared to~\eqref{CLfar}, the notion of divergent fractional Laplacian is conceptually closer to the classical case, in the sense that it requires a sufficient degree of regularity of the function~$u$ at a given point, without global conditions (up to a mild control at infinity of polynomial type), thus attempting to make the necessary requests as close as possible to the case of the classical Laplacian.
In this article, we consider the setting of the divergent Laplacian and we obtain the following results:
\begin{itemize}
\item an approximation result with solutions of divergent Laplacian equations: we will show that these solutions can locally shadow any prescribed function, maintaining also a complete prescription at infinity,
\item a characterization of the Dirichlet problem: we will show that the (possibly inhomogeneous) Dirichlet problem is solvable and we determine the multiplicity of the solutions,
\item an approximation result with solutions of nonlinear divergent Laplacian equations, up to a small error also in the forcing term.
\end{itemize}
To state these results in detail, we now recall the precise framework for the divergent fractional Laplacian. Given~$k\in\mathbb{N}$, we consider the space of functions
$$ {\mathcal{U}}_k:= \left\{ u:\mathbb{R}^n\to\mathbb{R}, {\mbox{ s.t. $u$ is continuous in $B_1$ and }} \int_{\mathbb{R}^n}\frac{|u(x)|}{1+|x|^{n+2s+k}}\,dx<+\infty \right\}.$$
Then (see Definition~1.1 in~\cite{POLINOMI}) we use the notation
$$ \chi_R(x):=\begin{cases} 1 & {\mbox{ if }}x\in B_R,\\ 0 & {\mbox{ if }}x\in \mathbb{R}^n\setminus B_R, \end{cases}$$
and we say that
\begin{equation}\label{RGAs}
(-\Delta)^su\ugu f\qquad{\mbox{ in }}B_1\end{equation}
if there exist a family of polynomials~$P_R$, which have degree at most~$k-1$, and functions~$f_R : B_1\to\mathbb{R}$ such that~$(-\Delta)^su=f_R+P_R$ in~$B_1$ in the viscosity sense, with
$$ \lim_{R\to+\infty}f_R(x) = f(x)$$
for any~$ x\in B_1$. Interestingly, one can also think that the right hand side of equation~\eqref{RGAs} is not just a function, but an equivalence class of functions modulo polynomials, since one can freely add to~$f$ a polynomial of degree~$k$ when~$s\in\left(0,\frac12\right]$ and of degree~$k+1$ when~$s\in\left(\frac12,1\right)$ (see Theorem~1.5 in~\cite{POLINOMI}).
The first result that we provide in this setting states that every given function can be modified in an arbitrarily small way in~$B_1$, remaining unchanged outside a large ball, in such a way to become $s$-harmonic with respect to the divergent fractional Laplacian.
\begin{theorem}[All divergent functions are locally $s$-harmonic up to a small error]\label{ALL}
Let~$k$, $m\in\mathbb{N}$ and~$u:\mathbb{R}^n\to\mathbb{R}$ be such that~$u\in C^m(\overline{B_1})$ and
\begin{equation*}
\int_{B_1^c}\frac{|u(x)|}{|x|^{n+2s+k}}\,dx<+\infty.\end{equation*}
Then, for any~$\varepsilon>0$ there exist~$u_\varepsilon$ and~$R_\varepsilon>1$ such that
\begin{eqnarray}\label{DES:ALL1}
&& (-\Delta)^s u_\varepsilon \ugu 0 \quad{\mbox{ in }}B_1,\\
\label{DES:ALL2} && \| u_\varepsilon-u\|_{C^m({B_1})}\leqslant \varepsilon\\
\label{dtsfgyvhoqwfyguywqegfiowel} {\mbox{and }}&& u_\varepsilon=u \quad{\mbox{ in }}B_{R_\varepsilon}^c.
\end{eqnarray}
\end{theorem}
A graphical sketch of Theorem~\ref{ALL} is given in Figure~\ref{fig:1} (notice the possible wild oscillations of~$u_\varepsilon$ in~$B_{R_\varepsilon}\setminus B_1$).
\begin{figure}[htbp]
\centering
\includegraphics[width=0.6\linewidth]{poli.pdf}
\caption{\it {{The approximation result in Theorem~\ref{ALL}.}}}
\label{fig:1}
\end{figure}
\begin{remark}{\rm When~$k=0$ and~$u=0$ outside~$B_2$, Theorem~\ref{ALL} reduces to the main result of~\cite{ALL-FUNCTIONS}.
Interestingly, in the case considered here, one can preserve the values of the given function~$u$ at infinity and, if the growth of~$u$ at infinity is ``too fast'' for the classical fractional Laplacian to be defined, then the result still carries over in the divergent fractional Laplace setting.}
\end{remark}
\begin{remark}{\rm We observe that Theorem~\ref{ALL} does not hold under the additional assumption that
\begin{equation}\label{STdffOA0}
{\mbox{$|u_\varepsilon(x)|\leqslant P(x)$ for all~$x\in\mathbb{R}^n$,}}\end{equation}
for a given polynomial~$P$ (that is, one cannot replace a growth assumption at infinity with a pointwise bound). Indeed, under assumption~\eqref{STdffOA0}, we have that
$$ \int_{\mathbb{R}^n} \frac{|u_\varepsilon(x)|}{1+|x|^{n+2s+d}}\,dx\leqslant \int_{\mathbb{R}^n} \frac{|P(x)|}{1+|x|^{n+2s+d}}\,dx =:J<+\infty,$$
where~$d\in\mathbb{N}$ is the degree of the polynomial~$P$. As a consequence of this and~\eqref{DES:ALL1}, we deduce from Theorem~1.3 of~\cite{POLINOMI} that for any~$\gamma>0$ such that~$\gamma$ and~$\gamma+2s$ are not integers,
$$ \|u_\varepsilon\|_{C^{\gamma+2s}(B_{1/2})}\leqslant C\,J,$$
for some~$C$ depending only on~$n$, $s$, $\gamma$ and~$d$. In particular, if~$\gamma+2s\geqslant m$, we would have from~\eqref{DES:ALL2} that
$$ \varepsilon\geqslant \|u_\varepsilon-u\|_{C^m(B_1)}\geqslant \|u_\varepsilon-u\|_{C^m(B_{1/2})} \geqslant \|u\|_{C^m(B_{1/2})}-\|u_\varepsilon\|_{C^m(B_{1/2})}\geqslant \|u\|_{C^m(B_{1/2})}-C\,J.$$
This set of inequalities would be violated for~$\varepsilon\in(0,1)$ by any function~$u$ satisfying
\begin{equation}\label{STdffOA}
\|u\|_{C^m(B_{1/2})}\geqslant C\,J+1.\end{equation}
That is, functions with a large $C^m$-norm (more specifically, with a norm as in~\eqref{STdffOA}) cannot be approximated arbitrarily well by $s$-harmonic functions (not even ``modulo polynomials'') that satisfy a polynomial bound as in~\eqref{STdffOA0}.
Interestingly, this remark is independent of~$R_\varepsilon$ in~\eqref{dtsfgyvhoqwfyguywqegfiowel} (hence, it is not possible to arbitrarily improve the approximation results if we require an additional polynomial bound, even if we drop the request that the approximating function is compactly supported).
}\end{remark}
Theorem~\ref{ALL} is also related to some recent results in~\cite{MR3716924, MR3774704, MR3935264, KRYL, CAR, CARBOO} (see~\cite{MR3790948} for an elementary exposition in the case of the fractional Laplacian in dimension~1).

The next result focuses on the Dirichlet problem for divergent fractional Laplacians. We show that, given an external datum and a forcing term, the Dirichlet problem has a solution. Differently from the classical case, when~$k\not=0$ such a solution is not unique, and we determine the dimension of the multiplicity space.
\begin{theorem}[Solvability of the Dirichlet problem for divergent fractional Laplacians]\label{DIRI}
Let~$k\in\mathbb{N}$ and~$u_0:B_1^c\to\mathbb{R}$ be such that
\begin{equation*}
\int_{B_1^c}\frac{|u_0(x)|}{|x|^{n+2s+k}}\,dx<+\infty.\end{equation*}
Let~$f$ be continuous in~$B_1$. Then, there exists a function~$u\in{\mathcal{U}}_k$ such that
\begin{equation}\label{DIRI2}
\left\{ \begin{matrix} (-\Delta)^s u \ugu f \quad{\mbox{ in }}\;B_1,\\ u = u_0 \quad{\mbox{ in }}\;B_1^c. \end{matrix} \right.
\end{equation}
Also, the space of solutions of~\eqref{DIRI2} has dimension~$N_k$, with
\begin{equation}\label{NK}
N_k:= \sum_{j=0}^{k-1}\binom{j+n-1}{n-1}.\end{equation}
\end{theorem}
With the aid of Theorems~\ref{ALL} and~\ref{DIRI}, we can also consider the case of nonlinear equations, namely the case in which the right hand side depends also on the solution (as well as on its derivatives, since the result that we provide is general enough to comprise such a case too).
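The number $N_k$ in \eqref{NK} is the dimension of the space of polynomials of degree at most $k-1$ in $n$ variables, and by the hockey-stick identity the sum telescopes to the single binomial coefficient $\binom{n+k-1}{n}$. A quick numerical cross-check of these counts (an illustration only, not part of the paper's argument):

```python
from math import comb
from itertools import product

def N(k, n):
    """N_k from the theorem: sum of comb(j+n-1, n-1) for j = 0, ..., k-1."""
    return sum(comb(j + n - 1, n - 1) for j in range(k))

def poly_space_dim(k, n):
    """Brute-force count of monomials x^alpha in n variables with |alpha| <= k-1.

    Each exponent alpha_i can range over 0, ..., k-1, since the total degree
    constraint already forces alpha_i <= k-1."""
    return sum(1 for alpha in product(range(k), repeat=n) if sum(alpha) <= k - 1)
```

For instance, with $n=2$ and $k=3$ both counts give $6$, matching $\binom{4}{2}$.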
In this setting, we establish that any prescribed function satisfies any prescribed nonlinear (and possibly divergent) fractional Laplace equation, up to an arbitrarily small error, once we are allowed to make arbitrarily small modifications of the given function in a given region, preserving its values at infinity. The precise result that we have is the following one:
\begin{theorem}[All divergent functions almost solve nonlinear equations]\label{NONLI}
Let~$k$, $m\in\mathbb{N}$ and~$u:\mathbb{R}^n\to\mathbb{R}$ be such that~$u\in C^{2m}(\overline{B_1})$ and
\begin{equation*}
\int_{B_1^c}\frac{|u(x)|}{|x|^{n+2s+k}}\,dx<+\infty.\end{equation*}
Let
$$ N(m):= n+\sum_{j=0}^m n^j$$
and let~$F\in C^m(\mathbb{R}^{N(m)})$. Then, for any~$\varepsilon>0$ there exist~$u_\varepsilon$, $\eta_\varepsilon:\mathbb{R}^n\to\mathbb{R}$ and~$R_\varepsilon>1$ such that
\begin{eqnarray}
\label{AMLO-1}&& (-\Delta)^s u_\varepsilon(x) \ugu F\big(x,u_\varepsilon(x),\nabla u_\varepsilon(x),\dots,D^m u_\varepsilon(x)\big) +\eta_\varepsilon(x) \quad{\mbox{ for all }}x\in B_1,\\
\label{AMLO-2} && \| \eta_\varepsilon\|_{L^\infty(B_1)}\leqslant \varepsilon,\\
\label{AMLO-3} && \| u_\varepsilon-u\|_{C^m({B_1})}\leqslant \varepsilon\\
\label{AMLO-4} {\mbox{and }}&& u_\varepsilon=u \quad{\mbox{ in }}B_{R_\varepsilon}^c.
\end{eqnarray}
\end{theorem}
\begin{remark} {\rm We think that it is a very interesting {\em open problem} to determine whether the statement in Theorem~\ref{NONLI} holds true also with~$\eta_\varepsilon:=0$.
This would give that any given function can be locally approximated arbitrarily well by functions which solve exactly (and not only approximately) a nonlinear equation.}\end{remark}
\begin{remark} {\rm All the results presented here maintain their own interest even in the case~$k=0$: in this case, the definition of divergent fractional Laplacian boils down to the usual fractional Laplacian (see Corollary~3.8 in~\cite{POLINOMI}).}\end{remark}
The rest of this article is organized as follows. In Section~\ref{SF-MAJo1} we give the proof of Theorem~\ref{ALL}, in Section~\ref{SF-MAJo2} we deal with the proof of Theorem~\ref{DIRI}, and in Section~\ref{SF-MAJo3} we focus on Theorem~\ref{NONLI}.

\section{Proof of Theorem~\ref{ALL}}\label{SF-MAJo1}

To prove Theorem~\ref{ALL}, we first present an observation on the decay of the divergent fractional Laplacians of functions that vanish on a large ball:
\begin{lemma}\label{LT8}
Let~$k\in\mathbb{N}$ and~$R>3$. Let~$u:\mathbb{R}^n\to\mathbb{R}$ be such that~$u=0$ in~$B_R$ and
\begin{equation}\label{Dh0}
\int_{\mathbb{R}^n}\frac{|u(x)|}{1+|x|^{n+2s+k}}\,dx<+\infty.\end{equation}
Then, there exists~$f:B_1\to \mathbb{R}$ such that~$(-\Delta)^s u\ugu f$ in~$B_1$ and for which the following statement holds true: for any~$\varepsilon>0$ and any~$m\in\mathbb{N}$, there exists~${\bar{R}_\varepsilon}>3$ such that if~$R\geqslant {\bar{R}_\varepsilon}$ then
\begin{equation}\label{Dh}
\| f\|_{C^m(B_{1})}\leqslant \varepsilon.\end{equation}
\end{lemma}
\begin{proof}
{F}rom Remark~3.5 in~\cite{POLINOMI}, we can write that~$(-\Delta)^s u\ugu f$ in~$B_1$, with
\begin{eqnarray*}&& f(x)=f_u(x):= \int_{B_2} \frac{u(x)-u(y)}{|x-y|^{n+2s}}\,dy+ \int_{B_2^c} \frac{u(x)}{|x-y|^{n+2s}}\,dy+ \int_{B_2^c} \frac{u(y)\;\psi(x,y)}{|y|^{n+2s+k}}\,dy\\ &&\qquad\qquad\qquad= \int_{B_R^c} \frac{u(y)\;\psi(x,y)}{|y|^{n+2s+k}}\,dy,\end{eqnarray*}
for some function~$\psi$ satisfying, for any~$j\in
\mathbb{N}$,
$$\sup_{{x\in B_1},\,{y\in B_2^c}} |D^j_x \psi(x,y)| \leqslant C_j,$$
for some~$C_j>0$. In particular, for any~$x\in B_1$,
$$ |D^jf(x)|\leqslant \int_{B_R^c} \frac{|u(y)|\;|D^j_x\psi(x,y)| }{|y|^{n+2s+k}}\,dy\leqslant C_j\,\int_{B_R^c} \frac{|u(y)| }{|y|^{n+2s+k}}\,dy,$$
so the desired claim in~\eqref{Dh} follows from~\eqref{Dh0}.
\end{proof}
With this, we complete the proof of Theorem~\ref{ALL} in the following way.
\begin{proof}[Proof of Theorem~\ref{ALL}]
{F}rom Theorem~1.1 of~\cite{ALL-FUNCTIONS} we know that there exist a function~$v_\varepsilon$ and~$\rho_\varepsilon>1$ such that
\begin{eqnarray}
\label{veps1} && (-\Delta)^s v_\varepsilon = 0 \quad{\mbox{ in }}B_1,\\
\label{90hx:2} && \| v_\varepsilon-u\|_{C^m({B_1})}\leqslant \varepsilon\\
\label{90hx:3} {\mbox{and }}&& v_\varepsilon=0 \quad{\mbox{ in }}B_{\rho_\varepsilon}^c.
\end{eqnarray}
For any~$R>3$, we also set~$\tilde u_R:= (1-\chi_R)\,u$. Notice that
\begin{equation}\label{90hx:9876}
\tilde u_R=u\quad{\mbox{ in }}B_R^c.\end{equation}
In addition,
\begin{equation}\label{90hx}
\tilde u_R=0\quad{\mbox{ in }}B_R,\end{equation}
so, in view of Lemma~\ref{LT8}, there exist a function~$f_{\varepsilon}$ and~${\bar{R}_\varepsilon}>3$ such that
\begin{eqnarray}
\label{u6s8}&& (-\Delta)^s \tilde u_{{\bar{R}_\varepsilon}} \ugu f_\varepsilon \quad{\mbox{ in }}B_2,\\
\label{fduyas} {\mbox{and }}&& \| f_\varepsilon \|_{C^m(B_{2})}\leqslant \varepsilon.
\end{eqnarray}
Now we consider the standard solution of the Dirichlet problem
\begin{equation}\label{weps1}
\left\{ \begin{matrix} (-\Delta)^s w_\varepsilon=f_\varepsilon \quad{\mbox{ in }}B_2,\\ w_\varepsilon=0 \quad{\mbox{ in }}B_2^c.
\end{matrix} \right.\end{equation}
{F}rom Proposition~1.1 in~\cite{ROS-JMPA}, we have that
\begin{eqnarray}
\label{67dff} \|w_\varepsilon\|_{C^s(\mathbb{R}^n)}\leqslant C\, \|f_\varepsilon\|_{L^\infty(B_2)},\end{eqnarray}
for some~$C>0$. Now we take~$\gamma:=m-s$. Notice that~$\gamma\not\in\mathbb{N}$ and~$\gamma+2s=m+s\not\in\mathbb{N}$. Then, by Schauder estimates (see e.g. Theorem~1.3 in~\cite{POLINOMI}, applied here with~$k:=0$), and exploiting~\eqref{fduyas} and~\eqref{67dff}, possibly renaming~$C>0$ line after line, we obtain that
\begin{equation}\label{90hx:1}
\begin{split}
& \|w_\varepsilon\|_{C^{m}(B_1)} \leqslant \|w_\varepsilon\|_{C^{\gamma+2s}(B_1)} \leqslant C\,\left( [f_\varepsilon]_{C^\gamma(B_2)} +\int_{B_1^c}\frac{|w_\varepsilon(y)|}{|y|^{n+2s}}\,dy \right)\\
&\qquad\qquad\qquad\leqslant C\,\left( \|f_\varepsilon\|_{C^m(B_2)}+\|w_\varepsilon\|_{L^\infty(\mathbb{R}^n)} \right)\leqslant C\,\|f_\varepsilon\|_{C^m(B_2)}\leqslant C\varepsilon.
\end{split}\end{equation}
Now we define
$$ u_\varepsilon:= v_\varepsilon +\tilde u_{{\bar{R}_\varepsilon}}-w_\varepsilon.$$
Using~\eqref{veps1}, \eqref{weps1} and the consistency result in Corollary~3.8 of~\cite{POLINOMI}, we see that
$$ (-\Delta)^s v_\varepsilon {\;{\stackrel{0}{=}}\;}0 \quad {\mbox{ and }}\quad (-\Delta)^s w_\varepsilon{\;{\stackrel{0}{=}}\;}f_\varepsilon \quad {\mbox{ in~$B_1$}}.$$
Thus, the consistency result in formula~(1.7) of~\cite{POLINOMI} implies that
$$ (-\Delta)^s v_\varepsilon {\;{\stackrel{k}{=}}\;}0 \quad {\mbox{ and }}\quad (-\Delta)^s w_\varepsilon{\;{\stackrel{k}{=}}\;}f_\varepsilon \quad {\mbox{ in~$B_1$}}.$$
Consequently, from~\eqref{u6s8}, we deduce that $(-\Delta)^s u_\varepsilon\ugu 0+f_\varepsilon-f_\varepsilon$ in~$B_1$, and this establishes~\eqref{DES:ALL1}.
Furthermore, from~\eqref{90hx:2}, \eqref{90hx} and~\eqref{90hx:1}, we see that
$$ \|u_\varepsilon-u\|_{C^m(B_1)}\leqslant \|v_\varepsilon-u\|_{C^m(B_1)}+\|\tilde u_{{\bar{R}_\varepsilon}}\|_{C^m(B_1)}+ \|w_\varepsilon\|_{C^m(B_1)}\leqslant \varepsilon+0+C\varepsilon.$$
This proves~\eqref{DES:ALL2} (up to renaming~$\varepsilon$). Now we take~$R_\varepsilon:=\rho_\varepsilon+{\bar{R}_\varepsilon}$. {F}rom~\eqref{90hx:3}, \eqref{90hx:9876} and~\eqref{weps1}, we have that, in~$B_{R_\varepsilon}^c$, it holds that~$u_\varepsilon=0+u-0$, which establishes~\eqref{dtsfgyvhoqwfyguywqegfiowel}, as desired.
\end{proof}

\section{Proof of Theorem~\ref{DIRI}}\label{SF-MAJo2}

First, we prove the existence result in Theorem~\ref{DIRI}. To this aim, we let~$u_0$ and~$f$ be as in the statement of Theorem~\ref{DIRI} and we define
$$ u_1:= \chi_{B_2^c}\,u_0\quad {\mbox{ and }}\quad u_2:= \chi_{B_2\setminus B_1} \,u_0.$$
We stress that~$u_1$ is smooth in~$B_1$ and~$u_2$ is supported in~$B_2$. {F}rom Remark~3.5 in~\cite{POLINOMI}, we can write~$(-\Delta)^s u_1\ugu f_{u_1}$ in~$B_1$, for a suitable function~$f_{u_1}$. Now we set~$\tilde f:= f-f_{u_1}$ and we consider the solution of the standard problem
\begin{equation*}
\left\{ \begin{matrix} (-\Delta)^s \tilde u = \tilde f & {\mbox{ in }} B_1,\\ \tilde u=u_2 & {\mbox{ in }} B_1^c. \end{matrix} \right.\end{equation*}
Hence, the consistency result in Corollary~3.8 and formula~(1.7) in~\cite{POLINOMI} give that
\begin{equation*}
\left\{ \begin{matrix} (-\Delta)^s \tilde u \ugu \tilde f & {\mbox{ in }} B_1,\\ \tilde u=u_2 & {\mbox{ in }} B_1^c. \end{matrix} \right.\end{equation*}
Then, we define~$u:= u_1+\tilde u$ and we see that~$(-\Delta)^s u\ugu f_{u_1}+\tilde f=f$ in~$B_1$. Moreover, in~$B_1^c$ it holds that~$u=u_1+u_2=u_0$, namely~$u$ is a solution of~\eqref{DIRI2}.
This establishes the existence result in Theorem~\ref{DIRI}. Now, we prove the multiplicity claim in Theorem~\ref{DIRI}. For this, we observe that for any polynomial~$P$ of degree at most~$k-1$ there exists a unique solution~$u_P$ of the standard problem
\begin{equation}\label{UN}
\left\{ \begin{matrix} (-\Delta)^s u_P = P & {\mbox{ in }} B_1,\\ u_P=0 & {\mbox{ in }} B_1^c. \end{matrix} \right.\end{equation}
That is, in view of the consistency result in Corollary~3.8 of~\cite{POLINOMI}, we have that~$(-\Delta)^s u_P {\;{\stackrel{0}{=}}\;} P$ in~$B_1$. Accordingly, from formula~(1.7) in~\cite{POLINOMI}, we get that~$(-\Delta)^s u_P {\;{\stackrel{k}{=}}\;} P$ in~$B_1$. Then, by formula~(1.8) in~\cite{POLINOMI}, it follows that~$u_P$ is a solution of
$$ \left\{ \begin{matrix} (-\Delta)^s u_P \ugu 0 & {\mbox{ in }} B_1,\\ u_P=0 & {\mbox{ in }} B_1^c. \end{matrix} \right.$$
This means that if~$u$ is a solution of~\eqref{DIRI2}, then so is~$u+u_P$. Conversely, if~$u$ and~$v$ are two solutions, then~$w:=v-u$ satisfies
$$ \left\{ \begin{matrix} (-\Delta)^s w \ugu 0 & {\mbox{ in }} B_1,\\ w=0 & {\mbox{ in }} B_1^c. \end{matrix} \right.$$
This and the consistency result in Lemma~3.9 of~\cite{POLINOMI} (used here with~$j:=0$) give that~$ (-\Delta)^s w {\;{\stackrel{0}{=}}\;} P$ in~$B_1$, for some polynomial~$P$ of degree at most~$k-1$. Hence, using the consistency result in Corollary~3.8 of~\cite{POLINOMI}, we can write
$$ \left\{ \begin{matrix} (-\Delta)^s w = P & {\mbox{ in }} B_1,\\ w=0 & {\mbox{ in }} B_1^c. \end{matrix} \right.$$
{F}rom the uniqueness of the solution of the standard problem in~\eqref{UN}, we conclude that~$w=u_P$, and so~$v=u+u_P$.
These observations yield that the space of solutions of~\eqref{DIRI2} is isomorphic to the space of polynomials~$P$ with degree less than or equal to~$k-1$, which in turn has dimension~$N_k$, as given in~\eqref{NK} (see e.g.~\cite{2021arXiv210107941D}).

\section{Proof of Theorem~\ref{NONLI}}\label{SF-MAJo3}

We can extend~$u$ in such a way that~$u\in C^{2m}(B_{1+h})$, for some~$h\in(0,1)$. Then, for all~$x\in B_{1+h}$, we define~$f(x):= F\big(x,u(x),\nabla u(x),\dots,D^m u(x)\big)$. Then~$f\in C^m(B_{1+h})$, and we can exploit Theorem~\ref{DIRI} and obtain a function~$v\in{\mathcal{U}}_k$ such that
\begin{equation*}
\left\{ \begin{matrix} (-\Delta)^s v \ugu f \quad{\mbox{ in }}\;B_{1+h},\\ v = 0 \quad{\mbox{ in }}\;B_{1+h}^c. \end{matrix} \right.
\end{equation*}
By Theorem~1.3 in~\cite{POLINOMI}, we have that~$ v\in C^m(B_1)$. Hence, we can set~$w:=u-v\in C^m(B_1)$ and make use of Theorem~\ref{ALL} to find~$w_\varepsilon$ and~$R_\varepsilon>2$ such that
\begin{eqnarray*}
&& (-\Delta)^s w_\varepsilon \ugu 0 \quad{\mbox{ in }}B_1,\\
&& \| w_\varepsilon-w\|_{C^m({B_1})}\leqslant \varepsilon\\
{\mbox{and }}&& w_\varepsilon=w \quad{\mbox{ in }}B_{R_\varepsilon}^c.
\end{eqnarray*}
Now, we define~$u_\varepsilon:= v+w_\varepsilon$. We observe that
$$ (-\Delta)^s u_\varepsilon(x) \ugu (-\Delta)^s v(x)+ (-\Delta)^s w_\varepsilon(x) \ugu f(x) = F\big(x,u(x),\nabla u(x),\dots,D^m u(x)\big) $$
for all $x\in B_1$. This gives that~\eqref{AMLO-1} is satisfied with
\begin{equation}\label{65654219}
\eta_\varepsilon(x):= F\big(x,u(x),\nabla u(x),\dots,D^m u(x)\big)- F\big(x,u_\varepsilon(x),\nabla u_\varepsilon(x),\dots,D^m u_\varepsilon(x)\big).\end{equation}
Moreover, in~$B_{R_\varepsilon}^c$,
$$ u_\varepsilon=v+w_\varepsilon=v+w=u,$$
and this proves~\eqref{AMLO-4}.
Furthermore,
$$\|u_\varepsilon-u\|_{C^m(B_1)}=\|v+w_\varepsilon-u\|_{C^m(B_1)} =\|w_\varepsilon-w\|_{C^m(B_1)}\leqslant\varepsilon,$$
which establishes~\eqref{AMLO-3}. Then, we take
$$ S:=2+\sum_{j=0}^m \| D^j u\|_{L^\infty(B_1)}$$
and we denote by~$L$ the Lipschitz norm of~$F$ in~$[-S,S]^{N(m)}$. Thus, employing~\eqref{AMLO-3} and~\eqref{65654219}, for all~$x\in B_1$ we have that
\begin{eqnarray*}&& |\eta_\varepsilon(x)|=\Big| F\big(x,u(x),\nabla u(x),\dots,D^m u(x)\big)- F\big(x,u_\varepsilon(x),\nabla u_\varepsilon(x),\dots,D^m u_\varepsilon(x)\big)\Big| \\&&\qquad\leqslant L\,\sum_{j=0}^m |D^j u(x)-D^ju_\varepsilon(x)|\leqslant L(m+1)\,\|u_\varepsilon-u\|_{C^m(B_1)}\leqslant L(m+1)\varepsilon,\end{eqnarray*}
and this gives~\eqref{AMLO-2}, up to renaming~$\varepsilon$.

\section*{References}

\begin{biblist}[\normalsize]
\bib{MR3716924}{article}{ author={Bucur, Claudia}, title={Local density of Caputo-stationary functions in the space of smooth functions}, journal={ESAIM Control Optim. Calc. Var.}, volume={23}, date={2017}, number={4}, pages={1361--1380}, issn={1292-8119}, review={\MR{3716924}}, doi={10.1051/cocv/2016056}, }
\bib{CAR}{article}{ author={Carbotti, Alessandro}, author={Dipierro, Serena}, author={Valdinoci, Enrico}, title = {Local density of Caputo-stationary functions of any order}, journal = {Complex Var. Elliptic Equ.}, doi = {10.1080/17476933.2018.1544631}, URL = {https://doi.org/10.1080/17476933.2018.1544631}, }
\bib{CARBOO}{book}{ author={Carbotti, Alessandro}, author={Dipierro, Serena}, author={Valdinoci, Enrico}, title={Local density of solutions to fractional equations}, series={De Gruyter Studies in Mathematics 74}, publisher={De Gruyter, Berlin}, date={2019}, ISBN={978-3-11-066435-5}, }
\bib{ALL-FUNCTIONS}{article}{ author = {Dipierro, Serena}, author = {Savin, Ovidiu}, author = {Valdinoci, Enrico}, title={All functions are locally $s$-harmonic up to a small error}, journal={J. Eur. Math. Soc.
(JEMS)}, volume={19}, date={2017}, number={4}, pages={957--966}, issn={1435-9855}, review={\MR{3626547}}, doi={10.4171/JEMS/684}, } \bib{MR3935264}{article}{ author={Dipierro, Serena}, author={Savin, Ovidiu}, author={Valdinoci, Enrico}, title={Local approximation of arbitrary functions by solutions of nonlocal equations}, journal={J. Geom. Anal.}, volume={29}, date={2019}, number={2}, pages={1428--1455}, issn={1050-6926}, review={\MR{3935264}}, doi={10.1007/s12220-018-0045-z}, } \bib{POLINOMI}{article}{ author={Dipierro, Serena}, author={Savin, Ovidiu}, author={Valdinoci, Enrico}, title={Definition of fractional Laplacian for functions with polynomial growth}, journal={Rev. Mat. Iberoam.}, volume={35}, date={2019}, number={4}, pages={1079--1122}, issn={0213-2230}, review={\MR{3988080}}, doi={10.4171/rmi/1079}, } \bib{2021arXiv210107941D}{article}{ author = {Dipierro, Serena}, author = {Valdinoci, Enrico}, title = {Elliptic partial differential equations from an elementary viewpoint}, journal = {arXiv e-prints}, date = {2021}, eid = {arXiv:2101.07941}, pages = {arXiv:2101.07941}, archivePrefix = {arXiv}, eprint = {2101.07941}, adsurl = {https://ui.adsabs.harvard.edu/abs/2021arXiv210107941D}, } \bib{KRYL}{article}{ author = {Krylov, N.~V.}, title = {On the paper ``All functions are locally $s$-harmonic up to a small error'' by Dipierro, Savin, and Valdinoci}, journal = {arXiv e-prints}, date = {2018}, archivePrefix = {arXiv}, eprint = {1810.07648}, adsurl = {https://ui.adsabs.harvard.edu/abs/2018arXiv181007648K}, } \bib{ROS-JMPA}{article}{ author={Ros-Oton, Xavier}, author={Serra, Joaquim}, title={The Dirichlet problem for the fractional Laplacian: regularity up to the boundary}, language={English, with English and French summaries}, journal={J. Math. Pures Appl. 
(9)}, volume={101}, date={2014}, number={3}, pages={275--302}, issn={0021-7824}, review={\MR{3168912}}, doi={10.1016/j.matpur.2013.06.003}, }
\bib{MR3774704}{article}{ author={R\"{u}land, Angkana}, author={Salo, Mikko}, title={Exponential instability in the fractional Calder\'{o}n problem}, journal={Inverse Problems}, volume={34}, date={2018}, number={4}, pages={045003, 21}, issn={0266-5611}, review={\MR{3774704}}, doi={10.1088/1361-6420/aaac5a}, }
\bib{MR3790948}{article}{ author={Valdinoci, Enrico}, title={All functions are (locally) $s$-harmonic (up to a small error)---and applications}, conference={ title={Partial differential equations and geometric measure theory}, }, book={ series={Lecture Notes in Math.}, volume={2211}, publisher={Springer, Cham}, }, date={2018}, pages={197--214}, review={\MR{3790948}}, }
\end{biblist}
\end{document}
\begin{document} \setcounter{tocdepth}{1} \title[The Minimum Principle for Convex Subequations]{The Minimum Principle for Convex Subequations} \author{Julius Ross and David Witt Nystr\"om} \begin{abstract} A subequation, in the sense of Harvey-Lawson, on an open subset $X\subset \mathbb R^n$ is a subset $F$ of the space of $2$-jets on $X$ with certain properties. A smooth function is said to be $F$-subharmonic if all of its $2$-jets lie in $F$, and using the viscosity technique one can extend the notion of $F$-subharmonicity to any upper-semicontinuous function. Let $\mathcal P$ denote the subequation consisting of those $2$-jets whose Hessian part is semipositive. We introduce a notion of product subequation $F\#\mathcal P$ on $X\times \mathbb R^{m}$ and prove, under suitable hypotheses, that if $F$ is convex and $f(x,y)$ is $F\#\mathcal P$-subharmonic then the marginal function $$ g(x):= \inf_y f(x,y)$$ is $F$-subharmonic. This generalises the classical statement that the marginal function of a convex function is again convex. We also prove a complex version of this result that generalises the Kiselman minimum principle for the marginal function of a plurisubharmonic function. \end{abstract} \maketitle \tableofcontents \section{Introduction}\label{sec:introduction} Although the maximum of two convex functions is always convex, the same is in general not true for the minimum.
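For instance, the functions $f_1(x)=(x-1)^2$ and $f_2(x)=(x+1)^2$ are convex on $\mathbb R$, but $$\min\{f_1,f_2\}(x) = (|x|-1)^2$$ has local minima at $x=\pm 1$ and a local maximum at $x=0$, so it is not convex.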
However, there is a minimum principle for convex functions which states that if $f(x,y)$ is a convex function of two real variables then its marginal function $$ g(x): = \inf_{y} f(x,y)$$ is again convex. This fundamental property is used throughout the study of convex functions, in particular when considering convex optimisation problems. In fact, since a function is convex if and only if its epigraph is a convex set, this minimum principle can be viewed as precisely the statement that the linear projection of a convex set is again convex. In the complex case, convexity is replaced with the property of being plurisubharmonic, and then the statement becomes that if $f(z,w)$ is a plurisubharmonic function of two complex variables that is independent of the argument of $w$ then the marginal function $g(z) = \inf_{w} f(z,w)$ is again plurisubharmonic. This minimum principle is due to Kiselman, and is a key tool in pluripotential theory. \begin{center} * \end{center} Darvas-Rubinstein \cite{RubinsteinDarvas} were the first to show that this minimum principle extends beyond the above classical setting and asked if it holds even more generally. The natural setting for this question is a huge generalization of convexity and plurisubharmonicity that uses the technique of viscosity subsolutions, as expounded by Harvey-Lawson \cite{HL_Dirichletduality,HL_Dirichletdualitymanifolds,HL_equivalenceviscosity}. To explain this elegant idea, consider the real case of convex functions (the complex case being completely analogous). If $g:X\to \mathbb R$ is a smooth function on an open $X\subset\mathbb R^n$, local convexity of $g$ is equivalent to the statement that for all $x\in X$ the Hessian $\operatorname{Hess}_{x}(g)$ is contained in the set of semipositive matrices.
If $g:X\to \mathbb R\cup \{-\infty\}$ is merely upper-semicontinuous, then being locally convex is equivalent to the statement that for any smooth ``test-function'' $\phi$ that touches $g$ from above at $x\in X$ (Figure \ref{fig:viscosity}) it holds that $\operatorname{Hess}_{x}\phi$ is semipositive. \begin{figure} \caption{The function $\phi$ touching $g$ from above at $x$} \label{fig:viscosity} \end{figure} Now, we are free to replace the cone of semipositive matrices with any other subset $F$ of symmetric matrices, and in doing so we can define what it means to be \emph{$F$-subharmonic} in precisely the same way. In fact, if instead of the Hessian we use the full $2$-jet, we can take $F$ to be any subset of the space of $2$-jets. To have a useful theory we need to make some mild assumptions on $F$. \begin{definition*} Let $X\subset \mathbb R^n$ be open and $F$ be a subset of the space $$J^2(X) = X\times \mathbb R\times \mathbb R^n\times \operatorname{Sym}_n^2$$ of $2$-jets on $X$. We say that $F$ is a \emph{primitive subequation} if \begin{enumerate} \item $F$ is closed. \item If $(x,r,p,A)\in F$ and $P$ is semipositive then $(x,r,p,A+P)\in F$. \end{enumerate} \end{definition*} Somewhat surprisingly, even at this level of generality the space of $F$-subharmonic functions has many properties in common with convex and plurisubharmonic functions. In this paper we define and prove a minimum principle in this setting. \begin{center} * \end{center} For a precise statement, suppose that $F\subset J^2(X)$ is a primitive subequation and let $$\mathcal P = \{ (x,r,p,A) \in J^2(\mathbb R^m) : A \text{ is semipositive} \}$$ be the set of $2$-jets whose Hessian part is semipositive.
We will define a new primitive subequation $$F\#\mathcal P\subset J^2(X\times \mathbb R^m)$$ in such a way that an $F\#\mathcal P$-subharmonic function is an upper-semicontinuous $$f:X\times \mathbb R^m\to\mathbb R\cup \{-\infty\}$$ whose restriction to any non-vertical slice is $F$-subharmonic, and whose restriction to any vertical slice is $\mathcal P$-subharmonic. That is, $f$ is $F\#\mathcal P$-subharmonic if and only if \begin{enumerate}[(i)] \item For each linear $\Gamma:\mathbb R^{n}\to \mathbb R^{m}$ and $y_0\in \mathbb R^{m}$ the function $$ x\mapsto f(x, y_0 +\Gamma x)$$ is $F$-subharmonic, and \item For each fixed $x_0$ the function $y\mapsto f(x_0,y)$ is $\mathcal P$-subharmonic (i.e.\ locally convex). \end{enumerate} We say that a subset $\Omega\subset X$ is \emph{$F$-pseudoconvex} if $\Omega$ admits a continuous and exhaustive $F$-subharmonic function. \begin{definition*}\label{def:minimumprinciple_intro} Let $\pi:X\times \mathbb R^m\to X$ denote the natural projection. We say that a primitive subequation $F\subset J^2(X)$ satisfies the \emph{minimum principle} if the following holds: Suppose $\Omega\subset X\times \mathbb R^{m}$ is an $F\#\mathcal P$-pseudoconvex domain such that the slices $$\Omega_x : = \{ y \in \mathbb R^m : (x,y)\in \Omega\}$$ are connected for each $x\in X$. Then $\pi(\Omega)$ is $F$-pseudoconvex, and for any $F\#\mathcal P$-subharmonic function $f$ on $\Omega$, the marginal function $$g(x) : = \inf_{y\in \Omega_x} f(x,y)$$ is $F$-subharmonic on $\pi(\Omega)$. \end{definition*} In the complex case we take $X\subset \mathbb C^n$, and $F\subset J^{2,\mathbb C}(X)$ a complex primitive subequation (i.e.\ a subset of the space of complex $2$-jets on $X$ with the same properties as above). Denote by $\mathcal P^{\mathbb C}\subset J^{2,\mathbb C}(\mathbb C^m)$ the set of complex $2$-jets whose complex Hessian is semipositive.
We then make an analogous definition of product $F\#_{\mathbb C}\mathcal P^{\mathbb C}$ in which we require the $\Gamma:\mathbb C^n \to \mathbb C^m$ to be $\mathbb C$-linear. The complex version of the minimum principle is similar, but we require both $\Omega\subset X\times \mathbb C^m$ and $f$ to be independent of the argument of the second variable. \begin{definition*}\label{def:minimumprinciple_complex_intro}\ Let $\pi:X\times \mathbb C^m\to X$ denote the natural projection. We say that a complex primitive subequation $F\subset J^{2,\mathbb C}(X)$ satisfies the \emph{minimum principle} if the following holds: Suppose $\Omega\subset X\times \mathbb C^{m}$ is an $F\#_{\mathbb C}\mathcal P^{\mathbb C}$-pseudoconvex domain such that the slices $\Omega_z$ are connected for each $z\in X$ and are independent of the argument of the second variable. Then $\pi(\Omega)$ is $F$-pseudoconvex, and for any $F\#_{\mathbb C}\mathcal P^{\mathbb C}$-subharmonic function $f$ on $\Omega$ that is independent of the argument of the second variable, the marginal function $$g(z) : = \inf_{w\in \Omega_z} f(z,w)$$ is $F$-subharmonic on $\pi(\Omega)$. \end{definition*} We conjecture that the minimum principle holds for a wide class of primitive subequations $F$, and in this paper we prove this in complete generality for constant-coefficient convex primitive subequations that have one additional (mild) property. \begin{definition*} We say that $F\subset J^2(X)$ has the \emph{Negativity Property} if $$(x,r,p,A)\in F\text{ and }r'<r \Rightarrow (x,r',p,A)\in F.$$ We say that $F$ is \emph{constant coefficient} if for all $x,x'\in X$ we have $$ (x,r,p,A)\in F \Leftrightarrow (x',r,p,A)\in F,$$ and $F$ is \emph{convex} if the fibre $F_x= \{ (r,p,A) : (x,r,p,A)\in F\}$ is convex for each $x\in X$.
\end{definition*} \begin{maintheorem}[Minimum Principle in the Convex Case]\label{thm:minimumprincipleconvexI:intro} Let $X\subset \mathbb R^n$ be open and $F\subset J^2(X)$ be a real or complex primitive subequation such that \begin{enumerate} \item $F$ satisfies the Negativity Property, \item $F$ is convex, \item $F$ is constant coefficient. \end{enumerate} Then $F$ satisfies the minimum principle. \end{maintheorem} It is not hard to check that $\mathcal P_{\mathbb R^n}\#\mathcal P_{\mathbb R^m} = \mathcal P_{\mathbb R^{n+m}}$ (and similarly in the complex case). So when $F=\mathcal P_{\mathbb R^n}$, the above Theorem reduces to the classical statement that the marginal function of a convex function is again convex. Similarly when $F=\mathcal P^{\mathbb C}_{\mathbb C^n}$ we get precisely the Kiselman minimum principle.\\ The strategy of proof is as follows. Suppose first that $f$ is $F\#\mathcal P$-subharmonic and smooth, and also that for each $x$ the minimum of $\{ f(x,y) : y\in \Omega_x\}$ is attained at some point $y=\gamma(x)$. Assume also that $\gamma$ is sufficiently smooth. Then a direct calculation, using the Chain Rule, allows one to express the $2$-jet of the marginal function $g$ in terms of the $2$-jet of the function $f$. And from this it becomes clear that $f$ being $F\#\mathcal P$-subharmonic implies that $g$ is $F$-subharmonic. The minimum principle for general $f$ is then reduced to this case through a number of approximations, the most significant being that since $F$ is assumed to be convex one can perform an integral mollification to eventually reduce to the case that $f$ is smooth. One difference between the minimum principle we prove here and the classical case is that being convex (resp.\ plurisubharmonic) can be tested by restricting to lines (resp.\ complex lines), so it is essentially enough to assume that $X$ is one dimensional (i.e.\ that $n=1$). For $F$-subharmonic functions we do not have this tool at our disposal. 
However we will see that we can reduce to the case that the second variable lies in a one dimensional space (i.e.\ that $m=1$), which turns out to be crucial for the argument we give.\\ Notice that the convexity of $F$ is used only in the approximation part of the above argument (namely that the mollification of an $F$-subharmonic function remains $F$-subharmonic). For this reason it seems to the authors that this convexity is a facet of the proof rather than an essential feature. In a sequel to this paper we will take up the minimum principle again for non-convex subequations.\\ \noindent {\bf Comparison with other work: }The viscosity technique arose in the study of fully non-linear degenerate second-order differential equations, as pioneered by the work of Caffarelli--Nirenberg--Spruck \cite{CaffarelliNirenbergSpruckIII} and Lions--Crandall--Ishii (see the User's guide \cite{CILUserGuide} and references therein). We use here the point of view taken by Harvey-Lawson \cite{HL_Dirichletduality,HL_Dirichletdualitymanifolds,HL_equivalenceviscosity} that replaces non-linear operators with certain subsets of the space of $2$-jets, which is in the same spirit as earlier work of Krylov \cite{Krylov}. The minimum principle for convex functions is such a basic property that its origin appears to be lost in time. The minimum principle for plurisubharmonic functions was discovered by Kiselman \cite{KiselmanInvent,Kiselman}, and is often referred to as the Kiselman minimum principle. This minimum principle is a tool used throughout pluripotential theory, and is known to be intimately connected to singularities of plurisubharmonic functions \cite{Demaillybook,Hormander_introductiontocomplexanalysis,KiselmanInvent} as well as the complex Homogeneous Monge-Amp\`ere Equation (see for instance the work of Darvas-Rubinstein \cite{DarvasRubinstein_rooftop}, or previous work of the authors \cite{RossNystrom_harmonicdiscs}).
A stronger form of minimum principle for convex functions was proved by Pr\'ekopa \cite{Prekopa_concave}, which has since been extended to the plurisubharmonic setting by Berndtsson \cite{Berndtsson_Prekopa} (see also \cite{Erausquin}). Other forms of the minimum principle can be found in \cite{Zeriahiminimum,Poletsky_minimumprinciple}, and surveys in \cite{DengZhangZhou_minimumprinciple,Kiselman_plurisubharmonicseveral}. In \cite{RubinsteinDarvas}, Darvas-Rubinstein prove the first ``non-classical'' minimum principle using the viscosity technique. Their interest was in a particular subequation (that is constant coefficient and depends only on the Hessian part) related to the study of Lagrangian graphs. After this work was complete we became aware that a more general minimum principle, with some similarities to the work developed here, appeared in a lecture course of Rubinstein. We expect, but have not proved, that the minimum principle in \cite{RubinsteinDarvas} can be derived from the statement in this paper, as the techniques used are rather similar (in fact both have close similarities with Kiselman's proof in \cite{Kiselman}). The reader may also note that the authors obtain, as an application of the main result in \cite{RN_argmin}, a minimum principle that does not require any convexity of the subequation $F$, but instead requires a certain concavity assumption on the function $f(x,y)$. \\ \noindent {\bf Possible Extensions and Future Directions:} The authors' interest in this minimum principle comes from an ongoing project that extends their work connecting the Hele-Shaw flow and the complex Homogeneous Monge-Amp\`ere Equation \cite{RossNystrom_harmonicdiscs}. In fact it seems apparent that one can generalize the Hele-Shaw flow using $F$-subharmonic functions, and this flow is connected by a Legendre transformation to a Dirichlet problem involving the product subequation. The key step in this is the minimum principle, and we plan to continue this in future work.
It is possible to extend the notion of $F$-subharmonicity to Riemannian manifolds $X$ (see \cite{HL_Dirichletdualitymanifolds}; we note that this requires some additional structure on $X$). Being a local condition, it is clear that the work in this paper generalizes as well, and applies to products $X\times \mathbb R^m$. We expect that one can also extend to the case that the $\mathbb R^m$ factor is replaced by a Riemannian manifold, but have not checked the details as this would go beyond our intended application. We remark finally that the Harvey-Lawson theory also allows for quaternionic subequations. It would be interesting to know if there is a minimum principle in this setting as well.\\ {\bf Organization:} In Section \ref{sec:productsofsubequations} we introduce the new notion of product subequation and make precise what we mean by a minimum principle for $F$-subharmonic functions. Some examples are then given in Section \ref{sec:examples}. In Section \ref{sec:approximations} we prove some general statements about approximating $F$-subharmonic functions by ones that are more regular (for instance the method of sup-convolution allowing approximation by semiconvex functions, and mollification that allows approximation by smooth ones). The main proofs appear in Sections \ref{sec:reductions} through Section \ref{sec:minimumcomplexconvex}, and proceed by a number of ``reductions'' that show that to prove our desired minimum principle for $f:X\times \mathbb R^m\to \mathbb R\cup \{-\infty\}$ we may, without loss of generality, make the following assumptions: \begin{itemize} \item we may assume $m=1$ \item $f$ may be assumed to be bounded from below (Proposition \ref{prop:mcanbeone}) \item $f$ may be assumed to be relatively exhaustive, i.e.
the minimum of $f$ on each fibre is attained strictly away from the boundary (Proposition \ref{prop:reductionfibrewiseminimum}) \item $f$ may be assumed to be semiconvex (Proposition \ref{prop:reductionsemiconvex}) \item $f$ may be assumed to be smooth (Proposition \ref{prop:reductionsmooth}) \end{itemize} After this we are left with the task of proving the minimum principle under all the above assumptions, which is done by direct calculation of the Hessian of $f$. In the real case this can be found in Proposition \ref{prop:hessiancalc} and in the complex case in Proposition \ref{thm:minimumprinciplecomplexconvexI}.\\ \noindent {\bf Acknowledgements: }The authors wish to thank Tristan Collins and Yanir Rubinstein for conversations that stimulated this work. \section{Subequations} In this section we recall the definition of subequations and $F$-subharmonic functions. We start in the real case. Let $M_{n\times m}(\mathbb R)=\operatorname{Hom}(\mathbb R^m,\mathbb R^n)$ denote the space of $n\times m$ real matrices, $\operatorname{Sym}^2_{n}\subset M_{n\times n}(\mathbb R)$ the real symmetric $n\times n$ matrices and $$\operatorname{Pos}_{n} = \{ A\in \operatorname{Sym}^2_{n} : v^tAv \ge 0 \text{ for all }v \}$$ the symmetric semipositive matrices. Assume $X\subset \mathbb R^{n}$ is open and let $$J^2(X):=X\times \mathbb R\times \mathbb R^n \times \operatorname{Sym}^2_{n}$$ be the jet-bundle over $X$, which has fiber $$J^2_{n}: = \mathbb R\times \mathbb R^n \times \operatorname{Sym}^2_{n}$$ over each $x\in X$. For $F\subset J^2(X)$ and $x\in X$ write $$ F_x= \{ (r,p,A)\in J^2_n : (x,r,p,A)\in F\}.$$ \begin{definition}[Subequations] We say that $F\subset J^2(X)$ is a \emph{primitive subequation} if \begin{enumerate} \item (Closedness) $F$ is closed.
\item (Positivity) \begin{equation}\label{eq:positivity} (r,p,A)\in F_x \text{ and } P\in \operatorname{Pos}_{n} \Rightarrow (r,p,A+P)\in F_x.\end{equation} \end{enumerate} We say that $ F\subset J^2(X)$ is a \emph{subequation} if in addition \begin{enumerate}\setcounter{enumi}{2} \item (Negativity) \begin{equation}\label{eq:negativity} (r,p,A)\in F_x \text{ and } r'\le r \Rightarrow (r',p,A)\in F_x.\end{equation} \item (Topological) \begin{equation}\label{eq:topological} F = \overline{\operatorname{Int} F}, \quad F_x = \overline{\operatorname{Int} F_x}, \quad \text{and} \quad \operatorname{Int} F_x = (\operatorname{Int} F)_x \quad \text{for all }x\in X,\end{equation} \end{enumerate} where the bar denotes topological closure. We say that $ F$ is \emph{convex} if each $F_x$ is convex, i.e.\ if $\alpha_1,\alpha_2\in F_x$ and $t\in [0,1]$ then $t\alpha_1 + (1-t)\alpha_2\in F_x$. \end{definition} \begin{remark} The majority of the results of this paper hold for primitive subequations that satisfy the Negativity Property. The extra assumption of being a subequation is important for the Comparison Principle proved in \cite{HL_Dirichletduality,HL_Dirichletdualitymanifolds}. \end{remark} \begin{example} Set $$\mathcal P_{X}:= X\times \mathbb R\times \mathbb R^n \times \operatorname{Pos}_{n},$$ which is a convex subequation. By abuse of notation we will simply write $\mathcal P$ for $\mathcal P_X$ when $X$ is clear from context. \end{example} \subsection{$F$-subharmonic functions} We again let $X$ be an open subset of $\mathbb R^n$. \begin{definition}[Upper contact points, Upper contact jets] Let $$f: X\to \mathbb R\cup\{-\infty\}.$$ We say that $x\in X$ is an \emph{upper contact point} of $f$ if $f(x)\neq -\infty$ and there exists $(p,A)\in \mathbb R^n\times \operatorname{Sym}^2_n$ such that $$f(y)\le f(x) + p.(y-x) + \frac{1}{2} (y-x)^tA(y-x) \text{ for all } y \text{ sufficiently near } x.$$ When this holds we refer to $(p,A)$ as an \emph{upper contact jet} of $f$ at $x$.
\end{definition} \begin{definition}[F-subharmonic function] Suppose $F\subset J^2(X)$. We say that an upper-semicontinuous function $f:X\to \mathbb R\cup \{-\infty\}$ is \emph{F-subharmonic} if $$ (f(x),p,A)\in F_x \text{ for all upper contact jets } (p,A) \text{ of }f \text{ at } x.$$ We let $F(X)$ denote the set of $F$-subharmonic functions on $X$. \end{definition} Observe that by definition any $x$ such that $f(x)=-\infty$ is not an upper contact point, and so the function $f\equiv -\infty$ is trivially $F$-subharmonic. Clearly being $F$-subharmonic is a local condition, by which we mean that if $\{X_{\alpha}\}_{\alpha\in \mathcal A}$ is an open cover of $X$ then $f\in F(X)$ if and only if $f\in F(X_{\alpha})$ for all $\alpha$. \begin{example}[Convex and Plurisubharmonic]\label{example:convexity} Recall $\mathcal P_X = X\times \mathbb R\times \mathbb R^n\times \operatorname{Pos}_n$. Then $\mathcal P_X(X)$ consists of the locally convex functions on $X$ \cite[Example 14.2]{HL_Dirichletdualitymanifolds}. \end{example} For the reader's convenience, we collect in Appendix \ref{appendix:subequations} the properties of subequations and $F$-subharmonic functions from the work of Harvey-Lawson that we will need. The most pertinent is that if $f$ is $\mathcal C^2$, and $F$ satisfies the Positivity assumption, then $f$ is $F$-subharmonic if and only if all its second order jets lie in $F$ (see Lemma \ref{lem:fsubharmonicc2}). For much more depth, including many interesting examples, the reader is referred to the original papers \cite{HL_Dirichletduality,HL_Dirichletdualitymanifolds,HL_equivalenceviscosity}. In particular, in Section \ref{sec:complexsubequations} we discuss how this extends to the complex case (with the upshot being that for $\mathcal C^2$ functions the second order jet is to be understood in the complex sense).
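The following standard one-variable check, included here purely for illustration, shows how the viscosity definition detects convexity in Example \ref{example:convexity}. Consider $\mathcal P$ on $X=\mathbb R$. The function $f(x)=|x|$ has no upper contact jet at $x=0$: if $(p,A)$ were one then $|y|\le py + \tfrac{1}{2}Ay^2$ for all small $y$, and letting $y\to 0^+$ and $y\to 0^-$ forces $p\ge 1$ and $p\le -1$ respectively. Away from $0$ the function is smooth with vanishing second derivative, so $f\in \mathcal P(\mathbb R)$, as it must be since $f$ is convex. By contrast, $g(x)=-|x|$ admits the upper contact jet $(p,A)=(0,-1)$ at $x=0$ (indeed $-|y|\le -\tfrac{1}{2}y^2$ for $|y|\le 2$), whose Hessian part is not semipositive, so $g\notin \mathcal P(\mathbb R)$.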
\section{Products of Primitive Subequations}\label{sec:productsofsubequations} \subsection{Real Products} For $\Gamma\in \operatorname{Hom}(\mathbb R^{n},\mathbb R^{m})=M_{m\times n}(\mathbb R)$ consider \begin{align}i_\Gamma:\mathbb R^{n} &\to \mathbb R^{n+m} \quad i_\Gamma(x) = (x,\Gamma x)\\ j : \mathbb R^{m} &\to \mathbb R^{n + m} \quad j(y) = (0,y).\end{align} These induce natural pullback-maps \begin{align*} i_\Gamma^*: J^2_{n+m}\to J^2_{n}\\ j^* : J^2_{n+m}\to J^2_{m}\end{align*} with the following property. If $f:\mathbb R^{n+m}\to \mathbb R$ is twice differentiable at $(0,0)\in \mathbb R^{n+m}$ and we let \begin{align*}f_1(x) &:= f(x,\Gamma x) \text{ for } x\in \mathbb R^n\\f_2(y) &: = f(0,y)\text{ for } y\in \mathbb R^m\end{align*} then $f_1,f_2$ are twice differentiable at $0\in \mathbb R^{n}$ and $0\in \mathbb R^{m}$ respectively, and \begin{align} J^2_0 (f_1) &= i_\Gamma^* J^2_{(0,0)} (f)\\ J^2_{0} (f_2) &= j^* J^2_{(0,0)} (f). \end{align} Explicitly, suppose $$ p := \left(\begin{array}{c} p_1 \\ p_2 \end{array} \right)\in \mathbb R^{n+ m}$$ and $$ A := \left(\begin{array}{cc} B & C \\ C^t & D \end{array}\right)\in \operatorname{Sym}^2_{n+m}$$ where the latter is in block form, so $B\in \operatorname{Sym}^2_{n}$ and $D\in \operatorname{Sym}^2_{m}$. Then \begin{align}\label{eq:defofistar} i_\Gamma^*(r,p,A) &= \left(r, p_1 + \Gamma^t p_2, B + C\Gamma + \Gamma^t C^t + \Gamma^tD\Gamma\right)\\ j^*(r,p,A) &= (r,p_2,D). \end{align} \begin{comment} \begin{definition}[Slices] Given ${x_0}\in X$ we call $$ i_{x_0} : \{ y\in Y : (x_0,y)\in X\times Y \} \to X\times Y \quad i_{x_0}(y) = (x_0,y)$$ a \emph{vertical slice} of $X\times Y$. Given $y_0\in Y$ and $\Gamma\in \operatorname{Hom}(\mathbb R^{n},\mathbb R^{m})$ we call $$ i_{y_0,\Gamma} : \{ x : (x,y_0 + \Gamma x)\in X\times Y\} \to X\times Y \quad i_{y_0,\Gamma}(x) = (x,y_0 + \Gamma x)$$ a \emph{non-vertical slice} of $X\times Y$.
\end{definition} Vertical and non-vertical slices induce restriction maps $$ i_{y_0,\Gamma}^* : i_{y_0,U}^*J^2(X\times Y) \to J^2(X).$$ $$ i_{x_0}^*: i_{x_0}^*J^2(X\times Y) \to J^2(Y)$$ \end{comment} \begin{definition}[Products] Let $X\subset\mathbb R^n$ and $Y\subset \mathbb R^m$ be open, and $F\subset J^2(X)$ and $G\subset J^2(Y)$. Define $$F\#G \subset J^2(X\times Y)$$ by $$(F\#G)_{(x,y)} = \left \{ \alpha\in J^2_{n+m} : \begin{array}{l} i_{\Gamma}^*\alpha \in F_{x} \text{ and } j^*\alpha \in G_y \\ \text{for all } \Gamma\in \operatorname{Hom}(\mathbb R^{n},\mathbb R^{m})\end{array}\right\}.$$ \end{definition} \begin{lemma}[Basic Properties of Products]\label{lem:productsofsubequations}\ \begin{enumerate} \item If $F$ and $G$ both satisfy any of the following properties then the same is true for $F\# G$: \begin{inparaenum} \item Positivity \item Negativity \item Being constant coefficient \item Being independent of the gradient part \item Convexity \item Being closed \item Being a primitive subequation. \end{inparaenum} \item If $F_1\subset F_2\subset J^2(X) \text{ and } G_1\subset G_2\subset J^2(Y)$ then $F_1\#G_1\subset F_2\#G_2.$ \item For $i=1,2$ let $H_i$ be a subgroup of $GL_{n_i}(\mathbb R)$ and suppose that $F_i\subset J^2(X_i)$ is $H_i$-invariant. Then $F_1\#F_2$ is $H_1\times H_2$-invariant. \end{enumerate} \end{lemma} \begin{proof} Suppose $P\in \operatorname{Pos}_{n+m}$ and $\Gamma\in \operatorname{Hom}(\mathbb R^n,\mathbb R^m)$. Then $i_\Gamma^*(0,0,P) = (0,0,P')$ and $j^*(0,0,P) = (0,0,P'')$ where $P'\in \operatorname{Pos}_{n}$ and $P''\in \operatorname{Pos}_{m}$. Thus (a) follows by linearity of $i_\Gamma^*$ and $j^*$. Statements (b), (c) and (d) are immediate from the definition. Statement (e) follows from linearity of $i_\Gamma^*$ and $j^*$, (f) from continuity of $i_\Gamma^*$ and $j^*$, and (g) combines (a) and (f).
Statement (2) is immediate, and (3) follows from identities such as $$ i^*_{\Gamma} (h^*\alpha) = h_1^* (i_{h_2\Gamma h_1^{-1}}^*\alpha) \text{ for } h= (h_1,h_2)\in H_1\times H_2,$$ the details of which are left to the reader. \end{proof} \begin{remark} Taking $H_1$ and $H_2$ to be the orthogonal group in Lemma \ref{lem:productsofsubequations}(3) allows us to extend the definition of $F_1\#F_2$ to products of Riemannian manifolds. See \cite{HL_Dirichletdualitymanifolds}. \end{remark} \begin{proposition}[Associativity of products]\label{prop:associativityofproducts} Let $X_i\subset \mathbb R^{n_i}$ be open and $F_i\subset J^2(X_i)$ for $i=1,2,3$. Then $$ (F_1\#F_2)\#F_3 = F_1\#(F_2\#F_3).$$ \end{proposition} \begin{proof} This can be proved by elementary manipulations, the details of which can be found in Appendix \ref{sec:associativityproductsubequations}. \end{proof} \subsection{Products of Subequations}\label{subsec:productsofsubequations} We stress that the product of subequations is not necessarily a subequation since the Topological property may not be inherited (see Example \ref{sec:productsofsubequationnotasubequation}). The following two statements give conditions under which this does hold. \begin{definition} We say that $F\subset J^2(X)$ has Property (P\textsuperscript{++}) if the following holds. For all $x\in X$ and all $\epsilon>0$ there exists a $\delta>0$ such that \begin{equation}(r,p,A)\in F_x \Rightarrow (r-\epsilon,p, A+\epsilon \operatorname{Id})\in F_{x'} \text{ for all } \|x'-x\|<\delta. \tag{P\textsuperscript{++}}\label{eq:propertyH_original} \end{equation} \end{definition} Clearly if $F$ is a constant coefficient primitive subequation that has the Negativity Property then \eqref{eq:propertyH_original} holds.
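For the reader's convenience we spell out this elementary verification. Let $(r,p,A)\in F_x$ and $\epsilon>0$. Since $F$ is constant coefficient, $(r,p,A)\in F_{x'}$ for every $x'\in X$; the Negativity Property then gives $(r-\epsilon,p,A)\in F_{x'}$; and since $\epsilon \operatorname{Id}\in \operatorname{Pos}_n$, the Positivity property gives $(r-\epsilon,p,A+\epsilon \operatorname{Id})\in F_{x'}$. Thus \eqref{eq:propertyH_original} holds with any choice of $\delta>0$.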
\begin{lemma}[Products of gradient-independent subequations]\label{lem:productsofgradientindependent} Assume $F\subset J^2(X)$ and $G\subset J^2(Y)$ are subequations with property \eqref{eq:propertyH_original} that are independent of the gradient part. Then $F\#G$ is a subequation. \end{lemma} \begin{proof} The proof is elementary point-set topology, and can be found in Appendix \ref{appendixB}. \end{proof} \begin{corollary}[Products of constant-coefficient gradient-independent subequations] If $F,G$ are both constant-coefficient primitive subequations with the Negativity Property that are independent of the gradient part, then $F\#G$ is a subequation. \end{corollary} \begin{remark} Lemma \ref{lem:productsofgradientindependent} is far from optimal. In fact we will give in Section \ref{sec:examples} examples of products of subequations that depend non-trivially on the gradient part and remain a subequation. \end{remark} \subsection{Complex Products} Essentially the same definition is made in the complex case. Given $\Gamma\in \operatorname{Hom}(\mathbb C^n,\mathbb C^m)$ we let $i_{\Gamma}(z) = (z,\Gamma z)$ and $j(w) = (0,w)$, which induce pullbacks \begin{align*} i_\Gamma^*: J^{2,\mathbb C}_{n+m}\to J^{2,\mathbb C}_{n}\\ j^* : J^{2,\mathbb C}_{n+m}\to J^{2,\mathbb C}_{m}\end{align*} given explicitly by \begin{align}\label{eq:defofistar_complex} i_\Gamma^*(r,p,A) &= \left(r, p_1 + \Gamma^* p_2, B + C\Gamma + \Gamma^* C^* + \Gamma^*D\Gamma\right)\\ j^*(r,p,A) &= (r,p_2,D), \end{align} where $$ p := \left(\begin{array}{c} p_1 \\ p_2 \end{array} \right)\in \mathbb C^{n+ m}$$ and $$ A := \left(\begin{array}{cc} B & C \\ C^* & D \end{array}\right)\in \operatorname{Herm}(\mathbb C^{n+m})$$ is in block form, so $B\in \operatorname{Herm}(\mathbb C^{n})$ and $D\in \operatorname{Herm}(\mathbb C^{m})$. \begin{definition}[Products, Complex case] Suppose $X\subset \mathbb R^{2n}\simeq \mathbb C^{n}$ and $Y\subset \mathbb R^{2m} \simeq \mathbb C^{m}$ are open.
Let $F\subset J^{2, \mathbb C}(X)$ and $G\subset J^{2, \mathbb C}(Y)$. Define $$F\#_{\mathbb C}G \subset J^{2,\mathbb C}(X\times Y)$$ by $$(F\#_{\mathbb C}G)_{(x,y)} = \left \{ \alpha\in J^{2,\mathbb C}_{n+m} : \begin{array}{l} i_{\Gamma}^*\alpha \in F_{x} \text{ and } j^*\alpha \in G_y \\ \text{for all } \Gamma\in \operatorname{Hom}(\mathbb C^{n},\mathbb C^{m})\end{array}\right\}.$$ \end{definition} This definition is compatible with our abuse of notation in thinking of a complex $F\subset J^2(X)$ as being a subset of $J^{2,\mathbb C}(X)$. That is, if $F'$ denotes the subset of $J^{2,\mathbb C}(X)$ we are associating with a complex primitive subequation $F\subset J^2(X)$, then $(F'\#_{\mathbb C} G') = (F\#G)'$. For this reason all the basic properties we prove about real products extend immediately to the complex case (for example the analogue of Lemma \ref{lem:productsofsubequations} and Proposition \ref{prop:associativityofproducts} in the complex case follow immediately from the real case). \\ Just as plurisubharmonic functions remain plurisubharmonic after composition with a biholomorphism, the same is true for $F\#_{\mathbb C}\mathcal P^{\mathbb C}$-subharmonic functions after composition with a biholomorphism in the second variable. The precise statement is as follows: \begin{lemma}[Composition with biholomorphisms in the second variable]\label{lem:compositionbiholomorphism} Let $F\subset J^{2,\mathbb C}(X)$ be a complex primitive subequation, $\Omega\subset X\times \mathbb C^m$ be open and $f$ be $F\#_{\mathbb C}\mathcal P^{\mathbb C}$-subharmonic on $\Omega$. Suppose that $W\subset \mathbb C^m$ is open and $$\zeta:W\to \zeta(W)\subset \mathbb C^m$$ is a biholomorphism. Then the function $$\tilde{f}(z,w) : = f(z,\zeta(w))$$ is $(F\#_{\mathbb C}\mathcal P^{\mathbb C})$-subharmonic on $\{ (z,w)\in X\times W: (z,\zeta(w))\in\Omega\}$. \end{lemma} \begin{proof} Suppose first that $f$ is $\mathcal C^2$, so $J^{2,\mathbb C}_{(z,\zeta(w))} f$ exists.
Then $j^*J^{2,\mathbb C}_{(z,\zeta(w))} f\in \mathcal P^{\mathbb C}$ which implies $j^* J^{2,\mathbb C}_{(z,w)} \tilde{f} \in \mathcal P^{\mathbb C}$. On the other hand if $\Gamma\in \operatorname{Hom}(\mathbb C^n,\mathbb C^m)$ then using the Chain Rule, $$ i_{\Gamma}^* J^{2,\mathbb C}_{(z,w)} \tilde{f} = i_{\frac{\partial\zeta}{\partial w} \Gamma}^* J^{2,\mathbb C}_{(z,\zeta(w))} f$$ which lies in $F_z$ since $f$ is assumed to be $F\#_{\mathbb C}\mathcal P^{\mathbb C}$-subharmonic. This proves the result for $\mathcal C^2$-functions $f$. The case of upper-semicontinuous $f$ follows, since if $\tilde{\phi}$ is a $\mathcal{C}^2$ test-function touching $\tilde{f}$ from above at $(z_0,w_0)$ then ${\phi}(z,w): = \tilde{\phi}(z,\zeta^{-1}(w))$ is a $\mathcal{C}^2$ test function touching $f$ from above at $(z_0,\zeta(w_0))$. Thus the result follows from the first paragraph, combined with Lemma \ref{lem:viscositydefinition}. \end{proof} \subsection{$F\#G$-subharmonicity by restriction to slices} We now prove that $f$ being $F\#G$-subharmonic is equivalent to the restriction of $f$ to each non-vertical slice being $F$-subharmonic and its restriction to each vertical slice being $G$-subharmonic. Again $X\subset \mathbb R^{n}$ and $Y\subset \mathbb R^{m}$ are assumed to be open. \begin{definition}[Slices] Given ${x_0}\in X$ we call $$ j_{x_0} :\mathbb R^{m}\to \mathbb R^{n}\times \mathbb R^{m} \quad j_{x_0}(y) = (x_0,y)$$ a \emph{vertical slice}. Given $y_0\in Y$ and $\Gamma\in \operatorname{Hom}(\mathbb R^{n},\mathbb R^{m})$ we call $$ i_{y_0,\Gamma} : \mathbb R^{n} \to \mathbb R^{n}\times \mathbb R^{m} \quad i_{y_0,\Gamma}(x) = (x,y_0 + \Gamma x)$$ a \emph{non-vertical slice}. \end{definition} \begin{proposition}\label{prop:slices} Assume that $F$ and $G$ are constant coefficient primitive subequations. Let $\Omega\subset X\times Y$ be open and $f:\Omega\to \mathbb R\cup \{-\infty\}$ be upper-semicontinuous.
The following are equivalent: \begin{enumerate} \item $f$ is $F\#G$-subharmonic \item $$i_{y_0,\Gamma}^*f \text{ is } F\text{-subharmonic for all } y_0\in Y\text{ and }\Gamma\in \operatorname{Hom}(\mathbb R^{n},\mathbb R^{m}) \text{ and} $$ $$j_{x_0}^*f \text{ is } G\text{-subharmonic for all } x_0\in X.$$ \end{enumerate} The analogous statement holds in the complex case. \end{proposition} \begin{proof} We prove the real case only as the complex one is the same.\\ (2)$\Rightarrow$ (1) This follows from the definitions. Let $$ \beta:=\left(\left(\begin{array}{c} p_1\\ p_2 \end{array}\right), \left(\begin{array}{cc} B & C \\ C^t & D \end{array}\right)\right)$$ be an upper contact jet for $f$ at a point $(x_0,y_0)$. Recall this means \begin{equation}\label{eq:sliceuppercontact} f(x,y) \le f(x_0,y_0) + \left(\begin{array}{c} p_1\\ p_2 \end{array}\right) . \left(\begin{array}{c} x-x_0\\ y-y_0 \end{array}\right) + \frac{1}{2} \left(\begin{array}{c} x-x_0\\ y-y_0 \end{array}\right) ^t\left(\begin{array}{cc} B & C \\ C^t & D \end{array}\right)\left(\begin{array}{c} x-x_0\\ y-y_0 \end{array}\right)\end{equation} for $(x,y)$ near $(x_0,y_0)$. We wish to show $$ \alpha: = ((x_0,y_0), f(x_0,y_0),\beta) \in F\#G.$$ Fixing $x=x_0$ and letting $y$ vary in \eqref{eq:sliceuppercontact} shows that $(p_2,D)$ is an upper contact jet for $j^*_{x_0}f$ at the point $y_0$. Since $j^*_{x_0}(f)\in G(Y)$ this implies $$j^*\alpha = (y_0,f(x_0,y_0),p_2,D)\in G_{y_0}.$$ Similarly putting $y=y_0+\Gamma (x-x_0)$ into \eqref{eq:sliceuppercontact} and letting $x$ vary gives that $(p_1+\Gamma ^t p_2, B + C\Gamma + \Gamma ^t C^t + \Gamma ^t D\Gamma )$ is an upper-contact jet for $i_{y_0,\Gamma }^*f$ at the point $x_0$.
Since $i_{y_0,\Gamma }^*f\in F(X)$ this gives $$i_\Gamma ^*\alpha = (x_0,f(x_0,y_0),p_1 + \Gamma ^tp_2,B + C\Gamma + \Gamma ^t C^t + \Gamma ^tD\Gamma )\in F_{x_0}.$$ As this holds for all $\Gamma $ we deduce that $\alpha\in F\#G$, and hence $f$ is $F\#G$-subharmonic.\\ (1)$\Rightarrow$ (2) This direction is more substantial, but follows easily from the Restriction Theorem of Harvey-Lawson. (The reader may want to observe that what we are calling a primitive subequation is called a subequation in \cite{HL_Restriction} -- see the note after \cite[Definition 2.4]{HL_Restriction}). Working locally there is no loss in assuming that $\Omega=X\times Y$. If we let $i_{y_0,\Gamma }^*$ and $j^*_{x_0}$ also denote the pullback on second-order jets then, essentially by definition, $$ i_{y_0,\Gamma }^* (F\#G) = F \text{ and } j_{x_0}^*(F\#G) = G$$ which are both closed. Thus as $F$ and $G$ are assumed constant coefficient, so is $F\#G$, and \cite[Theorem 5.1]{HL_Restriction} applies to give (2). \end{proof} \subsection{Boundary convexity and definition of the minimum principle} Recall that a function $u:X\to \mathbb R$ defined on an open $X\subset \mathbb R^{n}$ is \emph{exhaustive} if for each $a\in \mathbb R$ the sublevel set $$\{ x\in X : u(x)<a\}$$ is relatively compact in $X$. \begin{definition}[$F$-pseudoconvexity]\label{def:Fpseudoconvexity} Let $F\subset J^2(X)$. We say that $\Omega\subset X$ is $F$-\emph{pseudoconvex} if there exists a $$u:\Omega\to \mathbb R$$ that is continuous, exhaustive and $F$-subharmonic. \end{definition} \begin{remark} In \cite[Section 5]{HL_Dirichletduality} there is a notion of ``boundary convexity" defined for domains with smooth boundary, which is possibly different from being $F$-pseudoconvex. \end{remark} Let $\pi:\mathbb R^{n+m}\to \mathbb R^{n}$ be the projection onto the first factor.
For a subset $\Omega\subset X\times \mathbb R^m$ and $x\in X$ we write $$\Omega_x: = \{ y\in \mathbb R^m : (x,y) \in \Omega\}.$$ \begin{definition}[Connected fibres] We say that $\Omega$ has \emph{connected fibres} if $\Omega_x$ is connected for each $x\in \pi(\Omega)$. \end{definition} \begin{lemma}\label{lem:slicesareconvex} Assume that $F\subset J^2(X)$ is a constant coefficient primitive subequation, and that $f$ is $F\#\mathcal P$-subharmonic on an open $\Omega\subset X\times \mathbb R^{m}$. Then \begin{enumerate} \item The function $h:\Omega_x\to \mathbb R\cup \{-\infty\}$ given by $h(y) = f(x,y)$ is locally convex. \item If $\Omega_x$ is connected and $h$ is exhaustive then $\Omega_x$ is convex. \end{enumerate} \end{lemma} \begin{proof} Proposition \ref{prop:slices} says that $h$ is $\mathcal P$-sub\-harmonic, which means it is locally convex (Example \ref{example:convexity}) giving (1). Statement (2) follows as a connected open set admitting an exhaustive locally convex function is convex \cite[Theorem 2.1.25]{Hormander_notionsofconvexity}. \end{proof} \begin{corollary}\label{cor:slicesofpseudoconvexareconvex} If $\Omega\subset X\times \mathbb R^m$ is an $F\#\mathcal P$-pseudoconvex domain and $\Omega_x$ is connected then $\Omega_x$ is convex. \end{corollary} \begin{proof}Let $f$ be $F\#\mathcal P$-subharmonic and exhaustive on $\Omega$. Then for each $x\in \pi(\Omega)$ the function $h(y)= f(x,y)$ is exhaustive on $\Omega_x$, so Lemma \ref{lem:slicesareconvex}(2) applies. \end{proof} \begin{definition}[The minimum principle in the real case]\label{def:minimumprinciple} Let $F\subset J^2(X)$ be a primitive subequation. We say that $F$ satisfies the \emph{minimum principle} if the following holds: Let $\Omega\subset X\times \mathbb R^{m}$ be an $(F\#\mathcal P)$-pseudoconvex domain with connected fibres.
Then $\pi(\Omega)$ is $F$-pseudoconvex, and for any $f$ that is $F\#\mathcal P$-subharmonic on $\Omega$, the marginal function $$g(x) : = \inf_{y\in \Omega_x} f(x,y)$$ is $F$-subharmonic on $\pi(\Omega)$. \end{definition} In the complex case we make essentially the same definition, only requiring also that $\Omega$ and $f$ be independent of the argument of the second variable. Consider the real torus $$\mathbb T^m : = \{e^{i\theta}:= (e^{i\theta_1},\ldots,e^{i\theta_m}): \theta = (\theta_1,\ldots, \theta_m)\in \mathbb R^m\},$$ which acts on $\mathbb C^m$ by $$e^{i\theta} w =e^{i\theta} \left(\begin{array}{c} w_1 \\ \vdots \\ w_m\end{array}\right) = \left(\begin{array}{c} e^{i\theta_1} w_1 \\ \vdots \\ e^{i\theta_m} w_m\end{array}\right).$$ \begin{definition}[$\mathbb T^m$-invariance] We say that $\Omega\subset \mathbb C^n\times \mathbb C^m$ is \emph{$\mathbb T^m$-invariant} if $$(z,w)\in \Omega \Rightarrow (z,e^{i\theta} w)\in \Omega \text{ for all } e^{i\theta}\in \mathbb T^m.$$ When this holds we say a function $f:\Omega\to \mathbb R\cup \{-\infty\}$ is \emph{$\mathbb T^m$-invariant} if $$ f(z,w) = f(z,e^{i\theta} w) \text{ for all } e^{i\theta}\in \mathbb T^m.$$ \end{definition} \begin{definition}[Minimum Principle in the Complex Case]\label{def:minimumprinciple_complex} Let $X\subset \mathbb C^n$ be open. We say a complex primitive subequation $F\subset J^{2,\mathbb C}(X)$ satisfies the \emph{minimum principle} if the following holds: Suppose $\Omega\subset X\times \mathbb C^{m}$ is a $\mathbb T^m$-invariant $(F\#_{\mathbb C}\mathcal P^{\mathbb C})$-pseudoconvex domain with connected fibres. Then $\pi(\Omega)$ is $F$-pseudoconvex, and if $f$ is a $\mathbb T^m$-invariant $F\#_{\mathbb C}\mathcal P^{\mathbb C}$-subharmonic function on $\Omega$ then the marginal function $$g(z) : = \inf_{w\in \Omega_z} f(z,w)$$ is $F$-subharmonic on $\pi(\Omega)$. \end{definition} In both the real and complex cases we have expressed this minimum principle for the product $F\#\mathcal P$.
However it is easy to see that one can replace the factor $\mathcal P$ with any smaller primitive subequation, as in the following statement. \begin{lemma} Let $X\subset \mathbb R^{n}$ and $Y\subset \mathbb R^{m}$ be open. Assume that $F\subset J^2(X)$ and $G\subset J^2(Y)$ are primitive subequations such that \begin{enumerate} \item $F$ satisfies the minimum principle. \item $G\subset \mathcal P$. \end{enumerate} Then for any $F\#G$-pseudoconvex domain $\Omega \subset X\times Y$ with connected fibres and any $F\#G$-subharmonic function $f$ on $\Omega$ the marginal function $$g(x) = \inf_{y\in \Omega_x} f(x,y)$$ is $F$-subharmonic on the $F$-pseudoconvex domain $\pi(\Omega)$. The analogous statement holds in the complex case. \end{lemma} \begin{proof} We consider $\Omega \subset X\times Y \subset X\times \mathbb R^{m}$. Since $G\subset \mathcal P$, if $\Omega$ is $F\#G$-pseudoconvex then it is also $F\#\mathcal P$-pseudoconvex, and if $f$ is $F\#G$-subharmonic on $\Omega$ then it is also $F\#\mathcal P$-subharmonic. Hence the statement that we want follows from the minimum principle for $F$. \end{proof} \begin{remark}[Functions independent of the imaginary part of the second variable] The Kiselman minimum principle is often stated for pseudoconvex domains $\Omega\subset \mathbb C^n\times \mathbb C^m$ that are independent of the \emph{imaginary} part of the second variable (rather than, as we have done here, independent of the argument of the second variable). But these are equivalent by considering the map $\phi(z,w) := (z,e^{w})$ and observing that $\Omega$ is pseudoconvex and independent of the imaginary part of the second variable if and only if $\phi(\Omega)$ is pseudoconvex and independent of the argument of the second variable.
When dealing with a general complex subequation $F$ it is not clear if $\Omega$ being $F\#_{\mathbb C}\mathcal P^{\mathbb C}$-pseudoconvex implies that $\phi(\Omega)$ is also $F\#_{\mathbb C}\mathcal P^{\mathbb C}$-pseudoconvex (this would follow if $\Omega$ admitted an exhaustive $F\#_{\mathbb C}\mathcal P^{\mathbb C}$-subharmonic function that was independent of the imaginary part of the second variable, but the authors do not know whether this always holds). However by an averaging argument we will later prove that a $\mathbb T^m$-invariant $F\#_{\mathbb C}\mathcal P^{\mathbb C}$-pseudoconvex domain always admits a $\mathbb T^m$-invariant exhaustive $F\#_{\mathbb C}\mathcal P^{\mathbb C}$-subharmonic function (see Lemma \ref{lem:exhaustionindependent}). For this reason we have stated our minimum principle in terms of the argument of the second variable, rather than its imaginary part (but see also Corollary \ref{cor:minimumimaginary}). \end{remark} \section{Examples}\label{sec:examples} \subsection{Example I}\label{sec:exampleI} It is not hard to check directly from the definitions that \begin{align} \mathcal P_{\mathbb R^{n}}\# \mathcal P_{\mathbb R^{m}}&= \mathcal P_{\mathbb R^{n+m}}\text{ and }\label{eq:productrealpositive}\\ \mathcal P^{\mathbb C}_{\mathbb C^{n}}\#_{\mathbb C} \mathcal P^{\mathbb C}_{\mathbb C^{m}}&= \mathcal P^{\mathbb C}_{\mathbb C^{n+m}}. \label{eq:productcomplexpositive} \end{align} In the following we will give another example with a similar property, which need not be constant coefficient or independent of the gradient part.
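As a quick numerical sanity check on \eqref{eq:productrealpositive} (purely illustrative, and not needed in the sequel), one can test the slice description of Proposition \ref{prop:slices} at the level of jets: a symmetric block matrix $\left(\begin{smallmatrix} B & C\\ C^t & D\end{smallmatrix}\right)$ is positive semidefinite if and only if $D\ge 0$ and $B + C\Gamma + \Gamma^t C^t + \Gamma^t D\Gamma \ge 0$ for all $\Gamma$. The following Python sketch (assuming \texttt{numpy}, and sampling finitely many $\Gamma$ plus one explicit witness in place of ``all $\Gamma$'') is a minimal illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def is_psd(M, tol=1e-8):
    # Positive semidefinite up to numerical tolerance
    return np.linalg.eigvalsh((M + M.T) / 2).min() >= -tol

def slice_hessians(B, C, D, gammas):
    # Hessians of the restrictions to the non-vertical slices y = y0 + Gamma x
    return [B + C @ G + G.T @ C.T + G.T @ D @ G for G in gammas]

n, m = 3, 2
gammas = [rng.standard_normal((m, n)) for _ in range(50)]

# A positive semidefinite block matrix: every slice Hessian is again psd
R = rng.standard_normal((n + m, n + m))
A = R @ R.T
B, C, D = A[:n, :n], A[:n, n:], A[n:, n:]
assert is_psd(D)
assert all(is_psd(H) for H in slice_hessians(B, C, D, gammas))

# A non-psd block matrix is detected by some slice
B2, D2 = np.eye(n), np.eye(m)
C2 = np.zeros((n, m)); C2[0, 0] = 10.0
assert not is_psd(np.block([[B2, C2], [C2.T, D2]]))
G_bad = np.zeros((m, n)); G_bad[0, 0] = -1.0   # explicit witness slice
assert not is_psd(slice_hessians(B2, C2, D2, [G_bad])[0])
```

The random $\Gamma$ only probe one containment; the explicit $\Gamma$ with a single nonzero entry exhibits a slice that detects the failure of positivity of the second block matrix.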
\subsection{Example II} Suppose $X\subset \mathbb R^n$ is open and let \begin{align*} \lambda&:\mathbb R\to \mathbb R\\ \mu&:X\times \mathbb R\to \operatorname{Sym}^2_n(\mathbb R)\end{align*} be given. \begin{definition} Define $$F_{\lambda,\mu} = \{ (x,r,p,A)\in J^2(X) : A \ge \lambda p p^t + \mu \text{ where } \lambda = \lambda(r) \text{ and } \mu= \mu(x,r)\}.$$ \end{definition} \begin{lemma}\ \begin{enumerate} \item $F_{\lambda,\mu}$ has the Positivity Property. \item If $\lambda$ and $\mu$ are continuous then $F_{\lambda,\mu}$ is closed and has the Topological Property. \item If $\mu(x,r)$ and $\lambda(r)$ are monotonic increasing in $r$ then $F_{\lambda,\mu}$ has the Negativity Property. \item If $\lambda(r)=\lambda_0\ge 0$ for all $r$, and $\mu(x,r)$ is convex in $r$ then $F_{\lambda,\mu}$ is convex. \item If $\lambda(r)\ge 0$ and $\mu(x,r)\ge 0$ for all $x,r$ then $F_{\lambda,\mu}\subset\mathcal P$. \end{enumerate} In particular if both conditions (2) and (3) hold then $F_{\lambda,\mu}$ is a subequation. \end{lemma} \begin{proof} (1,3,5) are immediate from the definition, and (2) is simple point-set topology that is left to the reader. (4) follows as the function $p\mapsto pp^t$ is convex. \end{proof} \begin{proposition}\label{prop:characterizationexample} Suppose that for $i=1,2$ we have open $X_i\subset \mathbb R^{n_i}$. Let \begin{align*} \lambda &: \mathbb R\to \mathbb R\\ \mu_1 &: X_1\times \mathbb R\to \operatorname{Sym}^2_{n_1}. \end{align*} Define $\mu: X_1\times X_2\times \mathbb R\to \operatorname{Sym}^2_{n_1+n_2}$ by $$\mu(x_1,x_2,r) := \operatorname{diag}(\mu_1(x_1,r), 0)$$ (here $\operatorname{diag}$ is to be understood as the matrix in block diagonal form). Then $$F_{\lambda,\mu_1} \# F_{\lambda,0} = F_{\lambda,\mu}.$$ \end{proposition} In particular, the previous proposition gives various examples of products that are subequations even though they are not gradient-independent (compare Section \ref{sec:productsofsubequations}).
\begin{proof}[Proof of Proposition \ref{prop:characterizationexample}] Write $F_1: = F_{\lambda,\mu_1}$ and $F_2: = F_{\lambda,0}$. Fix $x_i\in X_i$ and let $$\alpha:=\left(r, p=\left( \begin{array}{c}p_1 \\ p_2\end{array}\right) ,\left( \begin{array}{cc}B & C \\ C^t & D\end{array}\right)\right) \in J^2_{n_1+n_2}.$$ To ease notation set $\lambda: = \lambda(r)$, $\mu_1:=\mu_1(x_1,r)$ and $$\left( \begin{array}{cc}\hat{B} & \hat{C} \\ \hat{C}^t & \hat{D}\end{array}\right) : = \left( \begin{array}{cc}B & C \\ C^t & D\end{array}\right)-\lambda pp^t - \mu(x_1,x_2,r)$$ so \begin{align*} \hat{B} &= B - \lambda p_1p_1^t - \mu_1\\ \hat{C} &= C - \lambda p_1p_2^t\\ \hat{D}&=D - \lambda p_2p_2^t. \end{align*} {\bf Claim: } $\alpha\in (F_1\#F_2)_{(x_1,x_2)}$ if and only if the following three conditions all hold: \begin{align} \left(\operatorname{Id} - \hat{D} \hat{D}^{\dagger}\right)\hat{C}^t &= 0 \label{ex1:condition1}\\ \hat{B} - \hat{C} \hat{D}^{\dagger} \hat{C}^t &\ge 0 \label{ex1:condition2}, \\ \hat{D}&\ge 0 \label{ex1:condition3}, \end{align} (where $\hat{D}^{\dagger}$ denotes the pseudo-inverse of $\hat{D}$). The equivalence between these three conditions and the condition that \begin{equation}\left(\begin{array}{cc} B & C \\ C^t & D \end{array}\right)\ge \lambda pp^t + \mu \label{lem:characterizationexamplepos:repeat}\end{equation} is a standard piece of linear algebra (Proposition \ref{prop:positiveblock}), and this is precisely the condition that $\alpha\in (F_{\lambda,\mu})_{(x_1,x_2)}$.\\ To prove the claim, recall that by definition $\alpha\in (F_1\#F_2)_{(x_1,x_2)}$ if and only if \begin{enumerate}[(i)] \item $j^*\alpha = (r,p_2,D)\in (F_2)_{x_2}$ \item $\iota_{\Gamma}^*(\alpha)\in (F_1)_{x_1}$ for all $\Gamma$. \end{enumerate} Now by definition of $F_2$, (i) is equivalent to $D\ge \lambda p_2p_2^t$ which in turn is equivalent to $\hat{D}\ge 0$.
To better understand (ii) write \begin{align*}\zeta(\Gamma)&:=B + C\Gamma + \Gamma^tC^t + \Gamma^t D\Gamma- \lambda (p_1+\Gamma^tp_2) (p_1+\Gamma^t p_2)^t -\mu_1\\ &=\hat{B} + \hat{C} \Gamma + \Gamma^t \hat{C}^t + \Gamma^t \hat{D} \Gamma. \end{align*} Then $$\iota_\Gamma^*(\alpha)\in (F_1)_{x_1} \Leftrightarrow \zeta(\Gamma)\ge 0.$$ Suppose first that \eqref{ex1:condition1}, \eqref{ex1:condition2} and \eqref{ex1:condition3} all hold. The first and third of these state that $(I-\hat{D}\hat{D}^\dagger) \hat{C}^t =0$ and $\hat{D}\ge 0$. So by Lemma \ref{lem:minquadraticmatrix} $$\zeta(\Gamma) \ge \hat{B} - \hat{C} \hat{D}^\dagger \hat{C}^t\ge 0.$$ As this holds for all $\Gamma$ we deduce $\alpha\in (F_1\#F_2)_{(x_1,x_2)}$. In the other direction, suppose that one of \eqref{ex1:condition1},\eqref{ex1:condition2},\eqref{ex1:condition3} does not hold. If it is \eqref{ex1:condition3} that fails then $j^*\alpha \notin (F_2)_{x_2}$, which immediately implies that $\alpha\notin (F_1\#F_2)_{(x_1,x_2)}$. So we may suppose that $\hat{D}\ge 0$. If \eqref{ex1:condition1} does not hold there is some $w\neq 0$ such that $(I- \hat{D}\hat{D}^{\dagger})\hat{C}^tw\neq 0$. Set $b:=\hat{C}^tw$ and let $$q_{\hat{D},b}(x):=2 x^tb+ x^t \hat{D} x.$$ Then $(I-\hat D\hat D^{\dagger})b\neq 0$, so by Lemma \ref{lem:minquadratic} $q_{\hat D,b}$ is unbounded from below, and there is a sequence $x_j$ such that $q_{\hat{D},b}(x_j)\to -\infty$ as $j\to \infty$. For each $j$ pick a $\Gamma_j\in M_{n_2\times n_1}(\mathbb R)$ so that $x_j = \Gamma_j w$. Then $$ w^t(\hat{C} \Gamma_j +\Gamma_j^t\hat{C}^t + \Gamma_j^t \hat{D} \Gamma_j) w = b^t x_j + x_j^tb + x_j^t\hat{D} x_j = q_{\hat{D},b}(x_j)\to -\infty$$ as $j\to \infty$. But this implies $\zeta(\Gamma_j)$ is not semipositive for $j$ sufficiently large, and so $\alpha\notin (F_1\#F_2)_{(x_1,x_2)}$. Finally assume \eqref{ex1:condition1} and \eqref{ex1:condition3} both hold, but \eqref{ex1:condition2} does not. Then there is a $w\neq 0$ such that $$ w^t(\hat{B} - \hat{C} \hat{D}^{\dagger} \hat{C}^t) w <0.$$ Again set $b:= \hat{C}^tw$.
Then Lemma \ref{lem:minquadratic} implies there is an $x_0\in \mathbb R^{n_2}$ such that $$ q_{\hat{D},b}(x_0) = - b^t \hat{D}^{\dagger} b$$ (note that $(I-\hat D \hat D^{\dagger})b = (I-\hat D\hat D^{\dagger})\hat C^t w = 0$ by \eqref{ex1:condition1}). Now choose $\Gamma\in M_{n_2\times n_1}(\mathbb R)$ so $x_0 = \Gamma w$. Then $$w^t\zeta(\Gamma)w = w^t\hat{B}w + q_{\hat{D},b}(x_0) = w^t(\hat{B} - \hat{C}\hat{D}^{\dagger}\hat{C}^t)w<0.$$ Hence $\zeta(\Gamma)$ is not semipositive, and so $\alpha\notin (F_1\#F_2)_{(x_1,x_2)}$. \end{proof} \begin{lemma}\label{lem:minquadratic} Let $P\in \operatorname{Sym}^2(\mathbb R^n)$ and $b\in \mathbb R^n$. Then the function \begin{equation}\label{def:q} q_{P,b}(x) := 2 x^tb+ x^t P x\end{equation} is bounded from below if and only if $P\ge 0$ and $(I - P P ^{\dagger})b=0$, in which case the minimum value is attained and is equal to $$p^*:=-b^tP^{\dagger} b.$$ \end{lemma} \begin{proof} See \cite[p 420]{Gallier_Geometricmethods} or \cite[Proposition 4.2]{Gallier}. \end{proof} \begin{lemma}\label{lem:minquadraticmatrix} Let $D\in \operatorname{Pos}_{m}$ and $C\in M_{n\times m}$ with $(I - D D ^{\dagger})C^t=0$. For each $\Gamma\in M_{m\times n}$ let $$Q(\Gamma) : = C\Gamma + \Gamma^tC^t + \Gamma^t D \Gamma.$$ Then \begin{equation}\label{lowerboundQ}Q(\Gamma) \ge - C D^{\dagger} C^t \text{ for all } \Gamma.\end{equation} \end{lemma} \begin{proof} Fix a vector $v$ and consider $$ v^t Q(\Gamma) v = v^t C \Gamma v + v^t \Gamma^t C^t v + v^t \Gamma^t D \Gamma v.$$ Defining $x:=\Gamma v$ and $b:=C^t v$ this becomes $$ v^t Q(\Gamma) v = b^t x + x^t b + x^t D x = q_{D,b}(x)$$ where $q_{D,b}$ is as in \eqref{def:q}. Note that $(I-DD^{\dagger}) b = (I-DD^{\dagger})C^t v =0$. Hence Lemma \ref{lem:minquadratic} applies giving $$ v^t Q(\Gamma) v \ge - b^t D^{\dagger} b = -v^t C D^{\dagger} C^t v.$$ As this holds for all $v$ we conclude \eqref{lowerboundQ}.
\end{proof} \begin{proposition}\label{prop:positiveblock} Consider a symmetric block matrix $$ M := \left(\begin{array}{cc} \hat{B} & \hat{C} \\ \hat{C}^t & \hat{D}\end{array}\right).$$ Then $M\ge 0$ if and only if (i) $\hat{D}\ge 0$ (ii) $(I- \hat{D} \hat{D}^{\dagger}) \hat{C}^t=0$ and (iii) $\hat{B} - \hat{C} \hat{D}^{\dagger} \hat{C}^t\ge 0$. \end{proposition} \begin{proof} See \cite[Theorem 16.1]{Gallier_Geometricmethods} or \cite[Theorem 4.3]{Gallier}. \end{proof} \begin{remark}\ \begin{enumerate} \item An analogous statement holds in the complex case; details are left to the reader. \item Putting $\lambda\equiv 0$ and $\mu\equiv 0$ recovers \eqref{eq:productrealpositive} and \eqref{eq:productcomplexpositive}. \end{enumerate} \end{remark} \subsection{Example III}\label{sec:productsofsubequationnotasubequation} Let $F\subset J^2(\mathbb R^n)$ be given by $$F_x = \{ (r,p,A) : \|p\|\le 1\}$$ which is easily checked to be a constant-coefficient subequation. If $(r,\left(\begin{array}{c} p_1 \\ p_2\end{array}\right),A')\in (F\#\mathcal P)_{(x,y)}$ then $\|p_1 + \Gamma^t p_2 \|\le 1$ for all $\Gamma$, which happens if and only if $p_2=0$ and $\|p_1\|\le 1$. Thus $F\#\mathcal P$ is not the closure of its interior (so does not satisfy the Topological Property) and therefore is not a subequation. \section{Approximations}\label{sec:approximations} In this section we describe three approximation techniques. The first gives an approximation of $F\#\mathcal P$-subharmonic functions by $F\#\mathcal P$-sub\-harmonic functions that are bounded from below. The second describes the sup-convolution that approximates bounded $F$-subharmonic functions by semiconvex ones. The third uses smooth mollification to approximate continuous $F$-subharmonic functions by smooth ones. We remark that it is only the third of these that requires $F$ to be convex.
\subsection{Approximation by subharmonic functions bounded from below} If $F$ is such that sufficiently negative constant functions are $F$-subharmonic then any $F\#\mathcal P$-subharmonic function $f$ can be approximated by the $F\#\mathcal P$-sub\-harmonic functions $$ f_j = \max\{ f, -j\}$$ which are bounded from below and decrease to $f$ pointwise as $j$ tends to infinity. In the following three statements we will show that essentially the same can be arranged if $F$ is constant-coefficient, without assuming that sufficiently negative constant functions are $F$-subharmonic. \begin{lemma}\label{lem:boundedbelow} Let $F\subset J^2(X)$ be a constant-coefficient primitive subequation. Assume there exists an $F$-subharmonic function on $X$ that is not identically $-\infty$. Then any $x_0\in X$ is contained in a neighbourhood $x_0\in U\subset X$ on which there exists an $f\in F(U)$ that is bounded from below on $U$. \end{lemma} \begin{proof} Let $u\in F(X)$ be not identically $-\infty$ and choose some $x_0\in X$ such that $a:=u(x_0)\neq -\infty$. As $u$ is upper-semicontinuous the set $U_1:=\{ x\in X: u(x)<a+1\}$ is an open neighbourhood of $x_0$. Choose some $\delta>0$ so that $B_{\delta}(x_0)\subset U_1$ and let $U: =B_{\delta/2}(x_0)$. For $\|\zeta\|<\delta/2$ define $$ u_{\zeta}(x) : = u(x-\zeta) \text{ for } x\in U.$$ Observe that if $x\in U$ then $\|x-\zeta-x_0\|\le \|x-x_0\| + \|\zeta\|<\delta$, so $u_{\zeta}$ is well defined and bounded above by $a+1$. Moreover, as $F$ is constant-coefficient, $u_{\zeta}$ is $F$-subharmonic on $U$. Thus $$ v : = {\sup}^*\{ u_{\zeta} : \|\zeta\|<\delta/2 \}$$ is $F$-subharmonic on $U$. And if $x\in U$ then setting $\zeta':= x-x_0$ we have $\|\zeta'\|<\delta/2$, so that $v(x) \ge u_{\zeta'}(x) = u(x-\zeta') = u(x_0)=a$; hence $v$ is bounded from below by $a$ on $U$. This provides an $F$-subharmonic function bounded from below on a neighbourhood $U$ of the point $x_0$.
But as $F$ is constant coefficient, we can translate $U$ to cover any given point of $X$, completing the proof. \end{proof} \begin{lemma}\label{lem:pullbacksubharmonic} Let $F\subset J^2(X)$ be a constant-coefficient primitive subequation. Then for any $F$-subharmonic function $f$ on $X$, the function $$\pi^*f(x,y) := f(x) \text{ for } (x,y)\in X\times \mathbb R^{m}$$ is $F\#\mathcal P$-subharmonic. \end{lemma} \begin{proof} The vertical slices of $\pi^*f$ are constant (so certainly $\mathcal P$-subharmonic) and the restriction of $\pi^*f$ to any non-vertical slice is equal to $f$ and hence $F$-subharmonic. Thus $\pi^*f$ is $F\#\mathcal P$-subharmonic from Proposition \ref{prop:slices}. \end{proof} \begin{proposition}\label{prop:approxboundedbelow} Let $F\subset J^2(X)$ be a constant coefficient primitive subequation that has the Negativity Property. Let $\Omega\subset X\times \mathbb R^{m}$ be open and let $f$ be an $F\#\mathcal P$-subharmonic function on $\Omega$ that is not identically $-\infty$. Then given any $x_0\in \pi(\Omega)$ there exists a neighbourhood $U$ with $x_0\in U \subset \pi(\Omega)$ and a $v\in F(U)$ that is bounded from below. Moreover, setting $$ \Omega_U := \{ (x,y)\in \Omega : x\in U\}$$ the functions $$ f_j = \max\{ f, \pi^*v -j\} \text{ on } \Omega_U$$ are $F\#\mathcal P$-subharmonic, are bounded below, and decrease pointwise to $f$ on $\Omega_U$ as $j\to\infty$. \end{proposition} \begin{proof} If the only $F$-subharmonic functions on $X$ are identically $-\infty$ then $f$ must also be identically $-\infty$ so there is nothing to prove. Otherwise given $x_0\in \pi(\Omega)$ Lemma \ref{lem:boundedbelow} provides a neighbourhood $U$ with $x_0\in U\subset \pi(\Omega)$ and an $F$-subharmonic function $v$ on $U$ that is bounded from below. Furthermore Lemma \ref{lem:pullbacksubharmonic} says that $\pi^*v$ is $F\#\mathcal P$-subharmonic on $U\times \mathbb R^{m}$, and so in particular is $F\#\mathcal P$-subharmonic on $\Omega_U$.
Thus the $f_j$ are $F\#\mathcal P$-subharmonic, bounded from below, and decrease pointwise to $f$ on $\Omega_U$. \end{proof} \subsection{Sup-convolutions}\label{sec:supconvolution} The sup-convolution is a well-known construction in the theory of viscosity subsolutions that allows approximation of certain $F$-subharmonic functions by semiconvex functions. \begin{definition}[Semiconvexity] Let $\kappa\ge 0$ and $X\subset \mathbb R^n$ be open. We say $f:X\to \mathbb R$ is \emph{$\kappa$-semiconvex} if $f(x) + \frac{\kappa}{2} \|x\|^2$ is convex. If $f$ is $\kappa$-semiconvex for some $\kappa \ge 0$ then we say simply $f$ is \emph{semiconvex}. \end{definition} \begin{lemma}\label{lem:supconvolutionsemiconvex} Let $X\subset \mathbb R^n$ be open and $f:X\to \mathbb R$. Given $\epsilon>0$ and non-empty $X'\subset X$ the function $$g(x):= \sup_{\zeta\in X'}\{ f(\zeta) - \frac{1}{2\epsilon} \|x-\zeta\|^2\} \text{ for } x\in X$$ is $\frac{1}{\epsilon}$-semiconvex on $X$. \end{lemma} \begin{proof} Write $$ g(x) + \frac{1}{2\epsilon} \|x\|^2 = \sup_{\zeta\in X'} \{ f(\zeta) - \frac{1}{2\epsilon} \|\zeta\|^2 + \frac{1}{\epsilon} x\cdot \zeta\}$$ which is a supremum of affine functions in $x$, and thus is convex. \end{proof} \begin{lemma}[Sup-convolution]\label{lem:supconvolution} Let $X\subset \mathbb R^n$ be open. Let $F\subset J^2(X)$ be a constant coefficient primitive subequation that has the Negativity Property. Suppose $f$ is $F$-subharmonic on $X$ and bounded, say $|f(x)|<M$ for $x\in X$. Define the \emph{sup-convolution} $$f^{\epsilon}(x) = \sup_{\zeta\in X}\{ f(\zeta) - \frac{1}{2\epsilon} \|x-\zeta\|^2\} \text{ for } x\in X.$$ Let $\delta = \sqrt{4\epsilon M}$ and $$ X_{\delta} = \{ x : B_{\delta}(x)\subset X\}.$$ Then \begin{enumerate} \item $f^{\epsilon}$ is $\frac{1}{\epsilon}$-semiconvex \item $f^{\epsilon}\searrow f$ pointwise on $X$ as $\epsilon\to 0$.
In fact \begin{equation} f^\epsilon(x) \le \sup_{\|\zeta-x\|<\delta} \{f(\zeta) - \frac{1}{2\epsilon} \|x-\zeta\|^2 \} \text{ for } x\in X_{\delta}.\label{eq:supconvolutionupperbound} \end{equation} \item $f^{\epsilon}$ is $F$-subharmonic on $X_{\delta}$. \end{enumerate} \end{lemma} \begin{proof} This is \cite[Thm 8.2]{HL_Dirichletduality} (we observe that in that paper $F$ is assumed to depend only on the Hessian part, but the proof works for any constant-coefficient primitive subequation $F$ that has the Negativity Property.) For convenience we give the details. (1) follows from Lemma \ref{lem:supconvolutionsemiconvex}. For (2) observe that clearly we always have $f^\epsilon\ge f$, and if $x\in X_{\delta}$ and $\|x-\zeta\|\ge\delta$ then $f(\zeta) - f(x) - \frac{1}{2\epsilon} \|x-\zeta\|^2 \le 2M -\frac{\delta^2}{2\epsilon} =0$, so $f(\zeta)- \frac{1}{2\epsilon} \|x-\zeta\|^2 \le f(x)$, which proves \eqref{eq:supconvolutionupperbound}. By upper-semicontinuity of $f$, this implies $f^{\epsilon}$ decreases pointwise to $f$ as $\epsilon$ tends to zero. For (3) make a change of variables $\tau = \zeta -x$ to get $$f^{\epsilon}(x) = \sup_{\|\tau\|<\delta} \{ f(x+\tau) - \frac{1}{2\epsilon} \|\tau\|^2\} \text{ for } x\in X_{\delta}.$$ The translation $x\mapsto f(x+\tau)$ is $F$-subharmonic (Proposition \ref{prop:basicproperties}(5)), and by the Negativity Property of $F$ so is the function $x\mapsto f(x+\tau) - \frac{1}{2\epsilon} \|\tau\|^2$. Now $f^{\epsilon}$ is semiconvex, hence continuous, so we may replace the supremum with its upper-semicontinuous regularisation. Thus (Proposition \ref{prop:basicproperties}(4)) $f^{\epsilon}$ is $F$-subharmonic on $X_\delta$ as claimed. \end{proof} Thus the sup-convolution allows one to approximate bounded $F$-subharmonic functions by semiconvex ones (albeit on a slightly smaller subset).
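Purely as an illustration (and no part of the argument), the sup-convolution is easy to compute on a grid. The following Python sketch (assuming \texttt{numpy}), for the bounded convex function $f(x)=|x|$ on $(-1,1)$, checks that $f^\epsilon\ge f$, that $f^\epsilon$ decreases with $\epsilon$, and that $f^\epsilon(x) + \frac{1}{2\epsilon}x^2$ is discretely convex:

```python
import numpy as np

def sup_convolution(f_vals, xs, eps):
    # Discrete sup-convolution: f^eps(x_i) = max_j ( f(x_j) - (x_i - x_j)^2 / (2*eps) )
    X, Z = np.meshgrid(xs, xs, indexing="ij")
    return (f_vals[None, :] - (X - Z) ** 2 / (2.0 * eps)).max(axis=1)

xs = np.linspace(-1.0, 1.0, 401)   # grid on the interval (-1, 1)
f_vals = np.abs(xs)                 # f(x) = |x|: bounded and convex

fe1 = sup_convolution(f_vals, xs, 0.1)
fe2 = sup_convolution(f_vals, xs, 0.01)

# f^eps >= f, and f^eps decreases as eps decreases
assert (fe1 >= f_vals).all() and (fe2 >= f_vals).all()
assert (fe1 >= fe2 - 1e-12).all()
assert np.max(fe2 - f_vals) <= 0.01 / 2 + 1e-9  # for f(x)=|x|: f^eps - f <= eps/2

# Semiconvexity: f^eps(x) + x^2/(2*eps) has nonnegative second differences
for fe, eps in [(fe1, 0.1), (fe2, 0.01)]:
    g = fe + xs ** 2 / (2.0 * eps)
    assert (g[2:] - 2 * g[1:-1] + g[:-2] >= -1e-8).all()
```

The bound $f^\epsilon - f \le \epsilon/2$ asserted above is specific to this Lipschitz example, not a general feature.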
For this reason the following terminology will be useful: \begin{definition}[Eventually on relatively compact subsets] Suppose $P$ is a property of functions defined on open subsets of $\Omega$, and let $f_j$ be a sequence of functions defined on $\Omega$. \begin{enumerate} \item We say that $P$ holds \emph{eventually on relatively compact subsets} if given any open $K\Subset \Omega$ there is a $j_0$ such that $f_j|_{K}$ has property $P$ for all $j\ge j_0$. \item We say $f_j\searrow f$ \emph{pointwise eventually on relatively compact subsets} as $j\to \infty$ if for any open $K\Subset \Omega$ the sequence $f_j|_{K}$ decreases pointwise to $f|_{K}$ once $j$ is sufficiently large. \end{enumerate} \end{definition} \begin{proposition}[Exhaustive approximation by semiconvex functions]\label{prop:approxsemi} Let $F\subset J^2(X)$ be a constant-coefficient primitive subequation with the Negativity Property. Suppose that $f:X\to\mathbb R$ is $F$-subharmonic and bounded from below. Then there exists a sequence of functions $$ f_j : X\to \mathbb R$$ such that \begin{enumerate} \item $f_j$ is semiconvex on $X$. \item $f_j \searrow f$ pointwise eventually on relatively compact subsets as $j\to \infty$. \item $f_j$ is $F$-subharmonic eventually on relatively compact subsets. \end{enumerate} \end{proposition} \begin{proof} Let $K_1\Subset K_2\Subset K_3\Subset \cdots$ be open and each relatively compact in the next, with $X = \bigcup_j K_j$. By assumption $f$ is bounded from below on $X$, and since it is also upper-semicontinuous $$ M_j : = \sup_{x\in K_j} |f(x)|$$ is finite. Now for $\epsilon>0$ consider $$ f_{j,\epsilon}(x) : = \sup_{\zeta \in K_j} \{ f(\zeta) - \frac{1}{2\epsilon} \|x-\zeta\|^2\}.$$ Lemma \ref{lem:supconvolutionsemiconvex} implies that $f_{j,\epsilon}$ is semiconvex on $X$.
\\ {\bf Claim: } Given $j\ge 2$ and $\epsilon_j>0$ there exists an $\epsilon_{j+1}<\epsilon_j$ such that \begin{enumerate}[(a)] \item $f_{j+1,\epsilon_{j+1}} \le f_{j,\epsilon_j} \text{ on } K_{j-1}$. \item $f_{j+1,\epsilon_{j+1}}(x)\le \sup_{|x-\zeta|<1/j} f(\zeta) \text{ for all } x\in K_{j}.$ \item $f_{j+1,\epsilon_{j+1}} \text{ is }F\text{-subharmonic on } K_j$. \end{enumerate} To see this, let $M=\inf_{\zeta\in K_{j-1}} f_{j,\epsilon_j}(\zeta)$ (which is finite as $f_{j,\epsilon_j}$ is semiconvex, and so in particular continuous, on $\overline{K_{j-1}}$). Since $K_{j-1}\Subset K_j$, every $\zeta\in K_{j+1}\setminus K_{j}$ is a definite distance away from every $x\in K_{j-1}$: there is a $\delta'>0$ with $\|\zeta-x\|>\delta'$ for all such $\zeta$ and $x$. So for such $\zeta$ and sufficiently small $\epsilon_{j+1}<\epsilon_j$, $$ f(\zeta) - \frac{1}{2\epsilon_{j+1}} \|x-\zeta\|^2 \le M_{j+1} - \frac{1}{2\epsilon_{j+1}} \delta'^2 < M\le f_{j,\epsilon_j}(x).$$ On the other hand for $\zeta\in {K}_j$ and $x\in K_{j-1}$, clearly $$f(\zeta) - \frac{1}{2\epsilon_{j+1}} \|x-\zeta\|^2\le f(\zeta) - \frac{1}{2\epsilon_{j}}\|x-\zeta\|^2 \le f_{j,\epsilon_j}(x).$$ Putting these together, $$ f(\zeta) - \frac{1}{2\epsilon_{j+1}} \|x-\zeta\|^2\le f_{j,\epsilon_j}(x) \text{ for all } \zeta\in {K}_{j+1}, x\in K_{j-1}$$ and taking the supremum over all $\zeta\in K_{j+1}$ proves (a). Furthermore there is no loss in assuming that $\epsilon_{j+1}$ is chosen small enough so that setting $\delta:=\sqrt{4M_{j+1}\epsilon_{j+1}}$ we have $K_j\subset \{ x : B_{\delta}(x) \subset K_{j+1}\}$ and $\delta<1/j$. Thus Lemma \ref{lem:supconvolution}(2,3) imply (b,c) respectively.\\ Now let $\epsilon_1=\epsilon_2:=1$ and $\epsilon_3>\epsilon_4>\cdots$ be defined inductively so the claim holds for all $j$ and set $f_j:=f_{j,\epsilon_j}$. Condition (1) in the statement of the proposition holds by construction, and since any open $K\Subset X$ is eventually contained in $K_j$ for $j$ sufficiently large, conditions (a,b,c) imply (2) and (3).
\end{proof} \begin{corollary}[Invariance] \label{cor:prop:approxsemiTinvariant} In the setting of Proposition \ref{prop:approxsemi} assume that $X\subset \mathbb R^{2n}\simeq \mathbb C^n$ and that $G$ is a subgroup of the group of $n\times n$ unitary matrices. Assume also that both $X$ and $f$ are $G$-invariant. Then the functions $f_j$ can be taken to be $G$-invariant as well. \end{corollary} \begin{proof} In the above proof we can take each of the $K_j$ to be $G$-invariant, and then the sup-convolutions $f_{j,\epsilon}$ will also be $G$-invariant. \end{proof} \subsection{Smooth Mollification}\label{sec:mollification} We next show that the smooth mollification of a continuous $F$-subharmonic function is again $F$-subharmonic, as long as we assume that $F$ is convex. Fix a mollifying function $\phi$ on $\mathbb R^n$, so $\phi$ is smooth, non-negative, compactly supported with $\int_{\mathbb R^n} \phi(t) dt =1$ and so $\phi_{\epsilon}(t): = \epsilon^{-n} \phi(\epsilon^{-1}t)$ tends to a Dirac delta as $\epsilon\to 0$. \begin{proposition}[Smooth mollifications remain $F$-subharmonic]\label{prop:convolution} Let $F$ be a constant coefficient convex primitive subequation, and $f$ be continuous and $F$-subharmonic on $X$. Let $K\Subset X$ be open, and for sufficiently small $\epsilon$ define $$f_{\epsilon}(x) : = f* \phi_{\epsilon}(x) = \int_{\mathbb R^n} \phi_{\epsilon}(t) f(x-t) dt.$$ Then $f_{\epsilon}$ is $F$-subharmonic on $K$ for all $\epsilon$ sufficiently small. \end{proposition} \begin{proof} Note that as $\phi$ is compactly supported, $f_{\epsilon}$ is well-defined for $\epsilon$ sufficiently small.
We may think of the integral in the convolution as a limit of Riemann sums, each of the form $$ \sum_{i=1}^N \frac{\phi_\epsilon(t_i)}{N} f(x-t_i)$$ for some points $\{ t_i\}$ (where the $t_i$ of course also depend on $N$, which is dropped from notation). Note each function $x\mapsto f(x-t_i)$ is a translation of $f$, and hence is $F$-subharmonic on $K$ since $F$ is constant-coefficient. Let $$ S_N : = \sum_{i=1}^N \frac{\phi_\epsilon(t_i)}{N}.$$ Then $S_N\to \int_{\mathbb R^n} \phi_\epsilon(t)\, dt =1$ as $N\to \infty$. Now as $F$ is convex, the convex combination $$ h_N(x):= \sum_{i=1}^N \frac{\phi_\epsilon(t_i)}{N S_N} f(x-t_i)$$ is $F$-subharmonic on $K$ (Proposition \ref{prop:convexcombintation}). And $h_N$ tends to $f_\epsilon$ locally uniformly as $N$ tends to infinity, so $f_{\epsilon}$ is $F$-subharmonic as well. \end{proof} \begin{proposition}[Approximation by smooth functions]\label{prop:smoothapproximation} Let $F\subset J^2(X)$ be a constant-coefficient convex primitive subequation with the Negativity Property. Suppose that $f$ is $F$-subharmonic on $X$ and continuous. Then there exists a sequence of functions $$ h_j : X\to \mathbb R$$ with the following properties \begin{enumerate} \item $h_j$ is smooth. \item $h_j\to f$ locally uniformly. \item $h_j$ is $F$-subharmonic eventually on relatively compact subsets. \end{enumerate} \end{proposition} \begin{proof} Let $K_1\Subset K_2\Subset \cdots \Subset X$ be an exhausting family of open subsets, each relatively compact in the next. Using Proposition \ref{prop:convolution} we may choose $\epsilon_j>0$ such that the function $h_j:= f* \phi_{\epsilon_j}$ is $F$-subharmonic on $K_j$. We can even arrange so that $h_j$ is smooth on $\overline{K}_j$ and then extend it arbitrarily to a smooth function on all of $X$. Further we may as well assume that $\epsilon_j\to 0$ as $j\to \infty$. Properties (1,2,3) then follow as an open $K\Subset X$ is contained in $K_i$ for $i$ sufficiently large.
\end{proof} In the complex case we will want to apply this in a way that preserves the property of being $\mathbb T$-invariant. This can be done with the same kind of mollification argument using polar coordinates in the second variable, as long as we work away from the second variable being zero. The following statement is sufficient for our needs. \begin{proposition}[Invariance]\label{prop:smoothapproximation_complex} Suppose $X\subset \mathbb C^n$ is open and that $F\subset J^{2,\mathbb C}(X)$ is a complex constant-coefficient convex primitive subequation with the Negativity Property. Suppose also $$\Omega\subset X\times \mathbb C^*$$ is open, $\mathbb T$-invariant and that $f:\Omega\to \mathbb R$ is continuous, $F\#_{\mathbb C}\mathcal P^{\mathbb C}$-subharmonic and $\mathbb T$-invariant. Then there exists a sequence of smooth $\mathbb T$-invariant functions $h_j:\Omega\to \mathbb R$ that are eventually $F\#_{\mathbb C}\mathcal P^{\mathbb C}$-subharmonic on compact sets, and converge locally uniformly to $f$ on $\Omega$. \end{proposition} \begin{proof} Exhaust $\Omega$ by open $K_1\Subset K_2\Subset \cdots$ that are each $\mathbb T$-invariant. Consider $\Phi(z,w) = (z,e^w)$ and let $g(z,w) = f(z,e^w)$. As $f$ is $\mathbb T$-invariant we have that $g$ is independent of the imaginary part of $w$ and is $F\#_{\mathbb C}\mathcal P^{\mathbb C}$-subharmonic by Lemma \ref{lem:compositionbiholomorphism}. For each $j$ let $\epsilon_j$ be sufficiently small so that the mollification $$g_j(z,w):= \int_{\mathbb C^n} \int_{\mathbb R} g(z-\zeta,w-t)\, \phi_{\epsilon_j}(\zeta)\, \phi_{\epsilon_j}(t)\, dt\, d\zeta$$ is well-defined, smooth and $F\#_{\mathbb C}\mathcal P^{\mathbb C}$-subharmonic on $\Phi^{-1}(K_j)$. Notice that $g_j$ is independent of the imaginary part of the second variable, and just as in the previous proof we may assume $g_j$ is defined, smooth and independent of the imaginary part on all of $\Phi^{-1}(\Omega)$. Thus $g_j$ descends to a function $h_j$ on $\Omega$ (i.e.
$g_j(z,w) = h_j(z,e^w)$) and the $h_j$ have the desired properties. \end{proof} \section{Reductions}\label{sec:reductions} We now consider four reductions concerning the minimum principle that work in both the real and complex cases. The first allows us to reduce the dimension $m$ of the second variable to $1$ (that is, in the real case we can assume $\Omega\subset X\times \mathbb R$ and in the complex case $\Omega\subset X\times \mathbb C$). In the second we show that once marginal functions of $F\#\mathcal P$-subharmonic functions are known to be $F$-subharmonic, the minimum principle holds for $F$ (i.e.\ the fact that the projection of an $F\#\mathcal P$-pseudoconvex set is $F$-pseudoconvex is then automatic). Our third reduction shows that it is sufficient to consider $F\#\mathcal P$-subharmonic functions $f$ that are both bounded below and attain their fibrewise minimum strictly in the interior of their domain (a property that is strictly weaker than the function being exhaustive). In the fourth we show that we can even assume the function $f$ is semiconvex. \subsection{Reduction in dimension of the second variable} \begin{proposition}\label{prop:mcanbeone} Suppose $F\subset J^2(X)$ is closed, and the minimum principle holds for $F\#\mathcal P$-subharmonic functions $f$ on $F\#\mathcal P$-pseudoconvex domains $\Omega\subset X\times \mathbb R$ with connected fibres. Then the minimum principle holds for $F$ in general. The analogous statement holds in the complex case in which we assume $\Omega\subset X\times \mathbb C$ and that both $f$ and $\Omega$ are $\mathbb T$-invariant. \end{proposition} \begin{proof} We need to show the minimum principle holds for $F\#\mathcal P$-subharmonic functions on $F\#\mathcal P$-pseudoconvex domains $\Omega\subset X\times \mathbb R^m$ with connected fibres for any $m\ge 1$. We will use induction on $m$, the hypothesis of the Proposition being the case $m=1$.
So assume the statement we want holds for all integers up to $m$. For ease of notation set $$\mathcal P_n:= \mathcal P_{\mathbb R^n}.$$ From Proposition \ref{prop:associativityofproducts} and Example \ref{sec:exampleI}, $$ F\#\mathcal P_{m+1} = F\# (\mathcal P_{m} \#\mathcal P_1) = (F\# \mathcal P_{m})\#\mathcal P_1.$$ Now let $$\Omega\subset X\times \mathbb R^{m+1} = X\times \mathbb R^{m}\times \mathbb R$$ be an $F\#\mathcal P_{m+1}$-pseudoconvex domain with connected fibres, and let $f$ be $F\#\mathcal P_{m+1}$-subharmonic on $\Omega$. We will use $(x,\zeta,y)$ as coordinates on $X\times \mathbb R^{m}\times \mathbb R$ and apply the inductive hypothesis twice to the function $f$, first taking the infimum over $y$ and then the infimum over $\zeta$. To this end let $\pi_X:X\times \mathbb R^{m+1}\to X$ and $\pi_1: X\times \mathbb R^m \times \mathbb R \to X\times \mathbb R^{m}$ and $\pi_2: X\times \mathbb R^{m}\to X$ be the natural projections, so $\pi_X = \pi_2\pi_1$. Set \begin{align*} \Omega_{(x,\zeta)} &:= \{ y \in \mathbb R : (x,\zeta,y) \in \Omega\}\\ g_1(x,\zeta) &:= \inf_{y\in \Omega_{(x,\zeta)}} f(x,\zeta, y) \text{ for } (x,\zeta)\in \pi_1(\Omega). \end{align*} {\bf Claim: } $\pi_1(\Omega)\subset X\times \mathbb R^m$ is $F\#\mathcal P_m$-pseudoconvex and $g_1$ is $F\#\mathcal P_m$-subharmonic. \\ In fact this follows from the case $m=1$ (the hypothesis of the Proposition), applied with $F\#\mathcal P_m$ in place of $F$. First we check that for each $(x,\zeta)\in X\times \mathbb R^{m}$ the set $\Omega_{(x,\zeta)}$ is connected. But this is clear since $$\Omega_{(x,\zeta)} = \{ y\in \mathbb R : (\zeta, y) \in \Omega_x\}.$$ Now $\Omega_x$ is convex (Corollary \ref{cor:slicesofpseudoconvexareconvex}), which implies that $\Omega_{(x,\zeta)}$ is convex, so certainly connected. Second, our hypothesis is that $\Omega$ is $F\#\mathcal P_{m+1} = (F\#\mathcal P_m)\#\mathcal P_1$-pseudoconvex and $f$ is $F\#\mathcal P_{m+1} = (F\#\mathcal P_m)\#\mathcal P_1$-subharmonic.
Thus the inductive hypothesis applies to give the claim.\\ Now let $$ g_2(x) := \inf_{\zeta\in \pi_1(\Omega)_x} g_1(x,\zeta) \text{ for } x\in \pi_X(\Omega).$$ We claim the inductive hypothesis (for $m$) implies $\pi_X(\Omega) =\pi_2(\pi_1(\Omega))$ is $F$-pseudoconvex and $g_2$ is $F$-subharmonic. For this we need only verify that $\pi_1(\Omega)_x$ is connected. But this follows easily as $$\pi_1(\Omega)_x = \{ \zeta : (x,\zeta)\in \pi_1(\Omega) \} = \{ \zeta : (\zeta,y)\in \Omega_x \text{ for some } y\}$$ which is convex as $\Omega_x$ is convex. Now elementary considerations show that $$ g_2(x) = \inf_{\zeta\in \pi_1(\Omega)_x} \inf_{y\in \Omega_{(x,\zeta)}} f(x,\zeta, y) = \inf_{(\zeta,y) \in \Omega_x} f(x,\zeta, y) \text{ for } x\in \pi_X(\Omega)$$ which is the marginal function of $f$. Thus the statement we want also holds for integers up to $m+1$, and the induction is complete. \end{proof} \subsection{Projections are automatically pseudoconvex} \begin{proposition}\label{prop:functiononly} Let $F\subset J^2(X)$ be a primitive subequation. Suppose that for any $F\#\mathcal P$-pseudoconvex domain $\Omega\subset X\times \mathbb R$ with connected fibres and any $f:\Omega\to \mathbb R\cup \{-\infty\}$ that is not identically $-\infty$, it holds that $$ f \text{ is } F\#\mathcal P\text{-subharmonic on }\Omega \Rightarrow g(x): = \inf_{y\in \Omega_x} f(x,y) \text{ is } F\text{-subharmonic on }\pi(\Omega).$$ Then $\pi(\Omega)$ is automatically $F$-pseudoconvex, and so the minimum principle holds for $F$. The analogous statement holds in the complex case, if we make the additional hypothesis that $\Omega\subset X\times \mathbb C$ and $f$ are $\mathbb T$-invariant. \end{proposition} \begin{proof} We start with the real case. Suppose that $\Omega\subset X\times \mathbb R$ is $F\#\mathcal P$-pseudoconvex with connected fibres, and we aim to show $\pi(\Omega)$ is $F$-pseudoconvex.
As $\Omega$ is $F\#\mathcal P$-pseudoconvex, there is a continuous exhaustive $F\#\mathcal P$-subharmonic function $f:\Omega\to \mathbb R$. So by hypothesis its marginal function $g(x): = \inf_{y\in \Omega_x} f(x,y)$ is $F$-subharmonic on $\pi(\Omega)$. But $g$ is both continuous and exhaustive (Lemma \ref{lem:marginalsareexhausting}) from which we conclude that $\pi(\Omega)$ is $F$-pseudoconvex. This proves the minimum principle holds when $m=1$. But this is sufficient to prove the minimum principle for all $m$ by Proposition \ref{prop:mcanbeone}, thereby completing the proof in the real case. In the complex case, the fact that $\pi(\Omega)$ is again $F$-pseudoconvex follows in the same way once it is established that any $\mathbb T$-invariant $F\#_{\mathbb C}\mathcal P^{\mathbb C}$-pseudoconvex domain admits a continuous exhaustive $F\#_{\mathbb C}\mathcal P^{\mathbb C}$-subharmonic function that is $\mathbb T$-invariant, which we do in Lemma \ref{lem:exhaustionindependent}. \end{proof} \begin{lemma}[Marginal functions preserve continuity and exhaustiveness]\label{lem:marginalsareexhausting} Let $\Omega\subset X\times \mathbb R$ be open and $f:\Omega\to \mathbb R$ be continuous and exhaustive. Then $g(x): = \inf_{y\in \Omega_x} f(x,y)$ is continuous and exhaustive. \end{lemma} \begin{proof} For any real number $a$ we have $\{ x\in \pi_X(\Omega) : g(x) <a\} \subset \pi_X ( \{ (x,y)\in \Omega : f(x,y)<a\})$. As $f$ is exhaustive the set $\{ (x,y)\in \Omega : f(x,y)<a\}$ is relatively compact, and hence so is $\{ x\in \pi_X(\Omega) : g(x) <a\}$, proving that $g$ is exhaustive. We next prove continuity. If $g(x_0)<a$ for some $x_0$ then there is a $y_0\in \Omega_{x_0}$ such that $f(x_0,y_0)<a$. Then $f<a$ on some neighbourhood $U_0\times U_1$ of $(x_0,y_0)$, and hence $g<a$ on $U_0$. Thus $g$ is upper-semicontinuous. Next suppose $g(x_n)<a$ for a sequence of points $x_n\in \pi_X(\Omega)$ that converge to some $x\in \pi_X(\Omega)$ as $n$ tends to infinity.
For each $n$ there is a $y_n$ with $f(x_n,y_n)<a$. Thus the points $(x_n,y_n)$ lie in the relatively compact set $\{(x,y) : f(x,y)<a\}$, and so after passing to a subsequence we may assume they converge to some point $(x,y)\in \Omega$. Continuity of $f$ yields $f(x,y)\le a$, so $g(x)\le a$. Thus $g$ is also lower-semicontinuous, and hence continuous. \end{proof} \begin{lemma}[Existence of $\mathbb T^m$-invariant exhaustive functions]\label{lem:exhaustionindependent} Let $F\subset J^{2,\mathbb C}(X)$ be a complex primitive subequation and $\Omega\subset X\times \mathbb C^m$ be an $F\#_{\mathbb C}\mathcal P^{\mathbb C}$-pseudoconvex and $\mathbb T^m$-invariant domain. Then there exists a continuous exhaustive $F\#_{\mathbb C}\mathcal P^{\mathbb C}$-subharmonic function on $\Omega$ that is $\mathbb T^m$-invariant. \end{lemma} \begin{proof} As $\Omega$ is $F\#_{\mathbb C}\mathcal P^{\mathbb C}$-pseudoconvex there exists an exhaustive continuous $F\#_{\mathbb C}\mathcal P^{\mathbb C}$-subharmonic function $u$ on $\Omega$. For $e^{i\theta}\in \mathbb T^m$ set $$ u_{\theta}(z,w): = u(z,e^{i\theta}w),$$ which by Lemma \ref{lem:twisttheta} is $F\#_{\mathbb C}\mathcal P^{\mathbb C}$-subharmonic on $\Omega$. Now set $$ v(z,w) := \sup_{e^{i\theta}\in \mathbb T^m} u_{\theta}(z,w).$$ Clearly $v\ge u$, and so $v$ is exhaustive as $u$ is. We claim that $v$ is continuous. Let $\epsilon>0$. By continuity of $u$ and compactness of $\mathbb T^m$ there is a $\delta>0$ such that if $\|(z,w)-(z',w')\|<\delta$ then $|u_{\theta}(z,w) - u_{\theta}(z',w')|<\epsilon$ for all $e^{i\theta}\in \mathbb T^m$. Suppose $\|(z,w)-(z',w')\|<\delta$. There is an $e^{i\theta_0}\in \mathbb T^m$ such that $v(z,w) < u_{\theta_0}(z,w) + \epsilon < u_{\theta_0}(z',w') + 2\epsilon \le v(z',w') + 2\epsilon$. Swapping the roles of $(z,w)$ and $(z',w')$ gives $|v(z,w)-v(z',w')|<2\epsilon$, proving continuity of $v$. Hence $v$ is locally bounded above and equal to its upper-semicontinuous regularisation.
So Proposition \ref{prop:basicproperties}(4) tells us that $v$ is $F\#_{\mathbb C}\mathcal P^{\mathbb C}$-subharmonic on $\Omega$. Note that $v$ is $\mathbb T^m$-invariant by construction. \end{proof} \begin{lemma}\label{lem:twisttheta} Let $f$ be $(F\#_{\mathbb C}\mathcal P^{\mathbb C})$-subharmonic on a $\mathbb T^m$-invariant open $\Omega\subset X\times \mathbb C^m$. Then for any fixed $e^{i\theta}\in \mathbb T^m$ the function $f_{\theta}(z,w):= f(z,e^{i\theta} w)$ is $(F\#_{\mathbb C}\mathcal P^{\mathbb C})$-subharmonic on $\Omega$. \end{lemma} \begin{proof} Apply Lemma \ref{lem:compositionbiholomorphism} to the biholomorphism $\zeta(w) := e^{i\theta} w$. \end{proof} \subsection{Reduction to functions that are relatively exhaustive} We next show how to reduce to functions that are bounded from below, and have the property that on each fibre the minimum is attained strictly away from the boundary. In fact we will need two versions of this, and the following terminology is useful. Let $\Omega'\subset U\times \mathbb R^m$ where $U\subset \mathbb R^n$ is open and $f:\Omega'\to \mathbb R\cup \{-\infty\}$. \begin{definition}[Relatively exhaustive] We say $f$ is \emph{relatively exhaustive} if for all open $V\Subset U$ and all real $a$ the set $$ \{ (x,y)\in \Omega' : f(x,y)<a \text{ and } x\in V\}$$ is relatively compact in $\Omega'$. \end{definition} \begin{definition}[Functions with fibrewise minimum strictly in the interior]\label{def:fibrewisemin} We say $f$ \emph{attains its fibrewise minimum strictly in the interior} if for any $x_0\in U$ there exist a real number $a$ and a neighbourhood $x_0\in V\subset U$ such that $$ K_V:= \{ (x,y)\in \Omega' : f(x,y)<a \text{ and } x\in V\}$$ is relatively compact in $\Omega'$ and $\pi(K_V) = V$. (In other words, the marginal function of $f$ is bounded from above by $a$ on $V$, and the set of points $(x,y)$ over $V$ for which $f$ is less than $a$ is contained strictly away from the boundary of $\Omega'$.) \end{definition} \begin{lemma}\label{lem:fibrewise} If $f$ is upper-semicontinuous and relatively exhaustive then it attains its fibrewise minimum strictly in the interior.
\end{lemma} \begin{proof} Let $g(x) = \inf_{y\in \Omega'_x} f(x,y)$. For any $x_0\in \pi(\Omega')$ pick any $a$ with $a> g(x_0)$, and then $y_0\in \Omega'_{x_0}$ with $f(x_0,y_0)<a$. As $f$ is upper-semicontinuous, $f<a$ on a neighbourhood of $(x_0,y_0)$, so there is an open neighbourhood $V\Subset U$ of $x_0$ such that $g<a$ on $V$. The set $K_V$ is relatively compact in $\Omega'$ as $f$ is relatively exhaustive, and $\pi(K_V) = V$ by construction. \end{proof} We will prove the next two statements simultaneously. \begin{proposition}\label{prop:reductionfibrewiseminimum} Let $F\subset J^2(X)$ be a primitive subequation that is constant-coefficient and has the Negativity Property. Suppose that for any $$ f:\Omega'\to \mathbb R$$ such that \begin{enumerate}[(a)] \item $\Omega'\subset X\times \mathbb R$ is open and has connected fibres, \item There exists an $F$-subharmonic function on $\pi(\Omega')$ that is bounded from below, \item $f$ is $F\#\mathcal P$-subharmonic, \item $f$ is bounded from below, \item $f$ attains its fibrewise minimum strictly in the interior, \end{enumerate} the marginal function $$g(x) = \inf_{y\in \Omega'_x} f(x,y)$$ is $F$-subharmonic on $\pi(\Omega')$. (Note that in the hypothesis we are not assuming that $\Omega'$ is $F\#\mathcal P$-pseudoconvex). Then the minimum principle holds for $F$. The analogous statement holds in the complex case under the additional assumption that $f$ and $\Omega'\subset X\times \mathbb C$ are $\mathbb T$-invariant. \end{proposition} \begin{proposition}\label{prop:reductionlemmarelativelyexhausting} Let $F\subset J^2(X)$ be a constant-coefficient primitive subequation that has the Negativity Property. Then the conclusion of Proposition \ref{prop:reductionfibrewiseminimum} continues to hold if condition (e) is replaced with \begin{enumerate}[(e')] \item $f$ is relatively exhaustive. \end{enumerate} \end{proposition} \begin{proof} Using Lemma \ref{lem:fibrewise} it is clear that Proposition \ref{prop:reductionlemmarelativelyexhausting} implies Proposition \ref{prop:reductionfibrewiseminimum}, so it is sufficient to prove only the former.
Let $\Omega\subset X\times \mathbb R$ be an $F\#\mathcal P$-pseudoconvex domain with connected fibres and $\tilde{f}:\Omega\to \mathbb R\cup \{-\infty\}$ be $F\#\mathcal P$-subharmonic and not identically $-\infty$ with marginal function $$\tilde{g}(x):= \inf_{y\in \Omega_x} \tilde{f}(x,y).$$ We claim that $\tilde{g}$ is $F$-subharmonic on $\pi(\Omega)$. By Proposition \ref{prop:functiononly} this implies the minimum principle holds for $F$. By the hypothesis that $\Omega$ is $F\#\mathcal P$-pseudoconvex, there exists an exhaustive $F\#\mathcal P$-subharmonic function $u$ on $\Omega$. By Proposition \ref{prop:approxboundedbelow} there is an open cover of $\pi(\Omega)$ by open subsets $U\subset \pi(\Omega)$ on which there exists a $v\in F(U)$ that is bounded from below. For such a $U$ consider for each $j\in \mathbb N$ the function $$ f_j := \max\{ \tilde{f}, \pi^*v -j , u-j\} \text{ on } \Omega' := \Omega\cap \pi^{-1}(U).$$ Observe that both $\pi^*v - j$ and $u-j$ are $F\#\mathcal P$-subharmonic (the first of these is Proposition \ref{prop:approxboundedbelow}, and the second follows as $F\#\mathcal P$ has the Negativity Property) and hence so is $f_j$. Thus $(\Omega',f_j)$ satisfy properties (a--d) and (e') (for the latter note that $f_j\ge u-j$ and $u$ is exhaustive on $\Omega$), so by hypothesis $$g_j(x):= \inf_{y\in \Omega'_x} f_j(x,y)$$ is $F$-subharmonic on $\pi(\Omega')$. But $f_j$ decreases to $\tilde{f}|_{\Omega'}$ as $j\to \infty$, and so $g_j$ decreases pointwise to $\tilde{g}|_{\pi(\Omega')}$ as $j\to \infty$. Thus $\tilde{g}|_{\pi(\Omega')}$ is $F$-subharmonic, and since the sets $\pi(\Omega')=\pi(\Omega)\cap U$ cover $\pi(\Omega)$ we conclude that $\tilde{g}$ is $F$-subharmonic on $\pi(\Omega)$ as claimed. The complex case is the same since we can pick $u$ to be $\mathbb T$-invariant by Lemma \ref{lem:exhaustionindependent}, so both $\Omega'$ and $f_j$ are $\mathbb T$-invariant if $\tilde{f}$ and $\Omega$ are. \end{proof} \begin{remark} \begin{enumerate} \item It is possible to prove the minimum principle in the real case only using Proposition \ref{prop:reductionlemmarelativelyexhausting}.
However we will instead use Proposition \ref{prop:reductionfibrewiseminimum} so that we can more easily reuse our statements when proving the complex case. \item The only place so far where we have used that $F$ is constant-coefficient is in ensuring that $X$ is covered by open sets that admit $F$-subharmonic functions bounded from below. There is a version of Proposition \ref{prop:reductionlemmarelativelyexhausting} that does not require $F$ to be constant-coefficient if one removes hypotheses (b) and (d). The proof is essentially the same, and is left to the reader. \end{enumerate} \end{remark} \section{The Minimum Principle in the Real Convex Case}\label{sec:minimumprincipleconvex} We now use approximation arguments to reduce the minimum principle to considering first only semiconvex functions, and then only smooth functions. Note that we are restricting attention for the moment to the real case; the complex case is slightly more involved due to the requirement that all quantities involved be $\mathbb T$-invariant, and will be taken up in the next section. \subsection{Reduction to semiconvex functions} \begin{proposition} \label{prop:reductionsemiconvex} Let $F\subset J^2(X)$ be a constant-coefficient primitive subequation that has the Negativity Property. Then the real case of Proposition \ref{prop:reductionfibrewiseminimum} continues to hold if we assume in addition that \begin{enumerate}[(f)] \item $f$ is semiconvex.
\end{enumerate} \end{proposition} \begin{proof} Let $f:\Omega'\to \mathbb R$ be such that (a--e) hold, and as usual set $g(x) = \inf_{y\in \Omega_x'} f(x,y)$. We will show $g$ is $F$-subharmonic on $\pi(\Omega')$, so Proposition \ref{prop:reductionfibrewiseminimum} applies. Since $f$ satisfies condition (d) (namely $f$ is bounded from below) we can use the sup-convolution to approximate $f$ by $F\#\mathcal P$-subharmonic functions that are semiconvex. In detail, let $$ {f}_j : \Omega'\to \mathbb R$$ be the sequence of such approximations of $f$ provided by Proposition \ref{prop:approxsemi} (applied with $F\#\mathcal P$ in place of $F$), and for convenience of the reader we recall these functions satisfy: \begin{enumerate}[(1)] \item ${f}_j\searrow f$ eventually on relatively compact sets as $j\to \infty$. \item ${f}_j$ is semiconvex on $\Omega'$. \item ${f}_j$ is $F\#\mathcal P$-subharmonic eventually on relatively compact sets of $\Omega'$. \end{enumerate} It is sufficient to prove that $g$ is $F$-subharmonic on some neighbourhood of an arbitrary $x_0\in \pi(\Omega')$. As $f$ satisfies condition (e) (namely that it attains its fibrewise minimum strictly in the interior) we know there is an $a>g(x_0)$ and a neighbourhood $V$ of $x_0$ in $\pi(\Omega')$ such that $$K_V := \{ (x,y) \in \Omega' : f(x,y)<a \text{ and } x\in V\}\Subset \Omega'$$ and $\pi(K_V) = V$. Since $\Omega'$ has connected fibres, we can fix an open $K'$ with $K_V\Subset K'\Subset \Omega'$, each relatively compact in the next, such that $K'$ has connected fibres. Fix also $y_0\in \Omega'_{x_0}$ with $f(x_0,y_0)<a$, which is possible as $g(x_0)<a$. Now choose $j_0$ large enough such that for all $j\ge j_0$, \begin{enumerate}[(i)] \item $f_j(x_0,y_0)<a$. \item $f_j\searrow f$ pointwise on $K'$. \item $f_j$ is $F\#\mathcal P\text{-subharmonic and semiconvex on }K'$.
\end{enumerate} By (i) and continuity of $f_{j_0}$ there are small neighbourhoods $x_0\in U_0\subset \pi(K_V)$ and $y_0\in U_1\subset \Omega'_{x_0}$ such that $U_0\times U_1\subset K'$ and $$ f_{j_0}<a \text{ on } U_0\times U_1.$$ Shrinking $U_0$ if necessary, we may as well assume that there exists an $F$-subharmonic function on $U_0$ that is bounded from below (Lemma \ref{lem:boundedbelow}).\\ Now set $K'_{U_0}: = K'\cap\pi^{-1}(U_0)$ and $$\tilde{f}_j : = f_j|_{K'_{U_0}}.$$ \noindent {\bf Claim I:} If $x\in U_0$ then \begin{equation} \inf_{y\in K'_x} \tilde{f}_j(x,y) \searrow \inf_{y\in \Omega_x'} f(x,y)=g(x) \text{ as } j\to \infty.\label{eq:claimdecreasecontinuous}\end{equation} To prove this, note first that by (ii) we have $$ \inf_{y\in K'_x} \tilde{f}_j(x,y) \searrow \inf_{y\in K'_x} f(x,y) \text{ as } j\to \infty.$$ On the other hand, (ii) also implies $f_j$ is pointwise decreasing on $U_0\times U_1\subset K'$, so \begin{equation} f\le f_{j}\le f_{j_0}<a \text{ on } U_0\times U_1 \text{ for all } j\ge j_0.\label{eq:lessthanonproduct}\end{equation} Thus $$ \inf_{y\in \Omega_x'} f(x,y) \le \inf_{y\in U_1} f(x,y) \le a.$$ But by construction if $y\in \Omega_x'\setminus K'$ then $f(x,y)\ge a$. So $$ \inf_{y\in \Omega_x'} f(x,y) = \inf_{y\in K'_x} f(x,y),$$ proving Claim I.\\ \noindent {\bf Claim II:} For $j\ge j_0$ the function $\tilde{f}_j:K'_{U_0}\to \mathbb R$ satisfies conditions (a--f).\\ We have arranged that (a) holds (namely that $K'_x$ is connected) and also that $\pi(K'_{U_0}) = U_0$, which admits an $F$-subharmonic function bounded from below, giving (b). Conditions (c) and (f) (namely that $\tilde{f}_j$ is $F\#\mathcal P$-subharmonic and semiconvex on $K'_{U_0}$) are given by (iii). In fact $f_j$ is semiconvex on all of $\Omega'$, so in particular continuous on $\overline{K'}$, and thus $\tilde{f}_j$ is bounded from below, giving (d). It remains only to verify (e), namely that $\tilde{f}_j$ attains its fibrewise minimum strictly in the interior.
For this let $\tilde{x}\in U_0$ and fix any neighbourhood $\tilde{x}\in \tilde{V}\Subset U_0$. Let $a$ be the number used above. Then, as $\tilde{f}_j\ge f$, \begin{align*} \tilde{K} &: = \{ (x,y)\in K'_{U_0} : \tilde{f}_j(x,y)<a \text{ and } x\in \tilde{V} \} \\ &\subset K_V\cap \pi^{-1}(\tilde{V}) \end{align*} which implies $\tilde{K}$ is relatively compact in $K'_{U_0}$ (see Lemma \ref{lem:relativelycompacttopological}), and \eqref{eq:lessthanonproduct} implies $\pi({\tilde{K}})=\tilde{V}$. This precisely says that $\tilde{f}_j$ satisfies (e), completing the proof of Claim II.\\ So by the hypothesis of the Proposition, the marginal function of $\tilde{f}_j|_{K'_{U_0}}$ is $F$-subharmonic on $U_0$. That is, $$ \tilde{g}_j(x): = \inf_{y\in K'_x} \tilde{f}_j(x,y) \text{ for } x\in U_0 = \pi(K'_{U_0})$$ is $F$-subharmonic. But Claim I tells us that $\tilde{g}_j$ decreases pointwise to $g$ on $U_0$ as $j$ tends to infinity, and so $g\in F(U_0)$. Since $x_0\in \pi(\Omega')$ was arbitrary this implies $g$ is $F$-subharmonic on $\pi(\Omega')$, completing the proof. \end{proof} \begin{lemma}\label{lem:relativelycompacttopological} Let $K_V\Subset K'\Subset X\times \mathbb R$ be open, and $\tilde{V}\Subset U_0\Subset \pi(K_V)$ also be open. Assume that $\tilde{K}\subset K_V\cap \pi^{-1}(\tilde{V})$. Then $\tilde{K}$ is relatively compact in $K'\cap \pi^{-1}(U_0)$. \end{lemma} \begin{proof} Left to the reader. \end{proof} \subsection{Reduction to the smooth case} \begin{proposition} \label{prop:reductionsmooth} Let $F\subset J^2(X)$ be a constant-coefficient primitive subequation that has the Negativity Property and is convex. Then the real case of Proposition \ref{prop:reductionsemiconvex} continues to hold if we assume in addition that \begin{enumerate}[(g)] \item $f$ is smooth. \end{enumerate} \end{proposition} \begin{proof} The proof of this is almost identical to that of the previous Proposition.
Let $f:\Omega'\to \mathbb R\cup \{-\infty\}$ be such that (a--f) hold, and as usual set $g(x) = \inf_{y\in \Omega_x'} f(x,y)$. We will show that $g$ is $F$-subharmonic on $\pi(\Omega')$, so that Proposition \ref{prop:reductionsemiconvex} applies. Since $F$ is now assumed to be convex, so is $F\#\mathcal P$; moreover $f$ is assumed to be semiconvex (so in particular continuous), and hence we may use smooth mollification to approximate $f$ locally uniformly by a sequence of smooth functions. So let $h_j$ be the sequence of smooth functions furnished by Proposition \ref{prop:convolution}. Recall these satisfy \begin{enumerate}[(1)] \item ${h}_j\to f$ locally uniformly as $j\to \infty$. \item ${h}_j$ is smooth. \item ${h}_j$ is $F\#\mathcal P$-subharmonic eventually on relatively compact sets as $j\to \infty$. \end{enumerate} Then the proof proceeds precisely as in Proposition \ref{prop:reductionsemiconvex}, only we replace \eqref{eq:claimdecreasecontinuous} with the statement that \begin{equation} \inf_{y\in K'_x} h_j(x,y) \to \inf_{y\in \Omega_x'} f(x,y)=g(x) \text{ locally uniformly as } j\to \infty\label{eq:claimdecreasecontinuousII}\end{equation} which follows easily from the fact that $h_j\to f$ locally uniformly. \end{proof} \subsection{The minimum principle in the smooth real case} We next turn our attention to a statement that guarantees that marginal functions of certain sufficiently smooth $F\#\mathcal P$-subharmonic functions are $F$-subharmonic. \begin{proposition}\label{prop:hessiancalc} Let $X\subset \mathbb R^{n}$ be open and $F\subset J^2(X)$ be a primitive subequation. Let $\Omega\subset X\times \mathbb R^{m}$ be open and assume $f:\Omega\to \mathbb R$ is such that \begin{enumerate}[(1)] \item $f$ is $\mathcal{C}^2$. \item $f$ is strictly convex in the second variable.
\item There exists a $\mathcal C^1$ function $\gamma:X\to \mathbb R^m$ such that $$ g(x) :=\inf_{y\in \Omega_x} f(x,y) = f(x,\gamma(x)) \text{ for } x\in \pi(\Omega).$$ \end{enumerate} Then \begin{enumerate}[(a)] \item The derivative of $\gamma$ at a point $x\in \pi(\Omega)$ is given by \begin{equation}\Gamma:=\frac{d\gamma}{dx} = -D^{-t} C^t\label{eq:hessiancalcderivativew}\end{equation} where \begin{equation}\label{eq:hessianblockform} \operatorname{Hess}_{(x,\gamma(x))} (f) = \left( \begin{array}{cc} B&C \\ C^t & D \\ \end{array}\right)\end{equation} is the Hessian matrix of $f$ at $(x,\gamma(x))$ in block form (so $B_{ij} = \frac{\partial^2 f}{\partial x_i \partial x_j}|_{(x,\gamma(x))}$ etc.). \item The marginal function $g$ is $\mathcal C^2$, and its second order jet at a point $x\in \pi(\Omega)$ is given by \begin{equation}\label{eq:eq:hessiancalcderivativejetg} J^2_{x}(g) = i^*_{\Gamma } J^2_{(x,\gamma(x))}(f) \end{equation} where $i^*$ is as defined in \eqref{eq:defofistar}. \item If additionally $f$ is $F\#\mathcal P$-subharmonic on $\Omega$ then $g$ is $F$-subharmonic on $\pi(\Omega)$. \end{enumerate} \end{proposition} \begin{proof} Note first that $D = (\frac{\partial^2 f}{\partial y_i \partial y_j})$ is strictly positive as $f(x,y)$ is assumed to be strictly convex in $y$, so the inverse in \eqref{eq:hessiancalcderivativew} exists. Now, as $\gamma(x)$ is a minimum of the function $y\mapsto f(x,y)$ we have \begin{equation}\label{eq:chainruleatmin} \frac{\partial f}{\partial y} (x,\gamma(x)) =0 \text{ for all } x\in X.\end{equation} Differentiating $g(x) = f(x,\gamma(x))$ with respect to $x$ gives \begin{equation} \frac{\partial g}{\partial x}(x) = \frac{\partial f}{\partial x}(x,\gamma(x)) + \frac{\partial f}{\partial y}(x,\gamma(x))\frac{d\gamma}{dx} = \frac{\partial f}{\partial x}(x,\gamma(x)),\label{eq:chainrule1}\end{equation} from which we see that $\frac{\partial g}{\partial x}$ is $\mathcal C^1$ (i.e.\ $g$ is $\mathcal C^2$).
Then differentiating \eqref{eq:chainruleatmin} with respect to $x$ yields $$0= C + (\frac{d\gamma}{dx})^t D = C + \Gamma^t D$$ giving \eqref{eq:hessiancalcderivativew}. Now differentiating \eqref{eq:chainrule1} with respect to $x$ gives $$\operatorname{Hess}_{x}(g) = B + C \frac{d\gamma}{dx} = B + C\Gamma= B - \Gamma^t D \Gamma.$$ So in terms of second order jets $$ J^2_{x}(g) = (f(x,\gamma(x)), \frac{\partial f}{\partial x}|_{(x,\gamma(x))}, B - \Gamma^t D\Gamma) = i^*_{\Gamma} J^2_{(x,\gamma(x))}(f)$$ which is \eqref{eq:eq:hessiancalcderivativejetg}. Finally assume $f$ is $F\#\mathcal P$-subharmonic. Then since it is also $\mathcal C^2$ we know that $J^2_{(x,\gamma(x))}(f) \in (F\#\mathcal P)_{(x,\gamma(x))}$ and hence (by the definition of product subequations) $J^2_{x}(g) = i_{\Gamma}^* J^2_{(x,\gamma(x))}(f) \in F_{x}$. So as $g$ is $\mathcal C^2$ and $x$ is arbitrary, Lemma \ref{lem:fsubharmonicc2} yields that $g$ is $F$-subharmonic as claimed. \end{proof} A simple argument with the implicit function theorem shows that if $y\mapsto f(x,y)$ is strictly convex and attains its (necessarily unique) minimum at a point $\gamma(x)$ then the function $x\mapsto \gamma(x)$ is $\mathcal C^1$. However in the case we will be interested in, the map $y\mapsto f(x,y)$ is convex but not necessarily strictly convex (so any such minimum need not be unique). If $F$ depends only on the Hessian part, then one can circumvent this problem by approximating $f$ by adding a small multiple of the function $\phi(x,y):= \|y\|^2$. Such an approximation will be strictly convex in the $y$ direction, and remains $F\#\mathcal P$-subharmonic as the Hessian of $\phi$ is positive semi-definite. The next proposition is needed to deal with the possibility of gradient dependence of $F$. We emphasise that among the hypotheses is that $m$ is equal to $1$. \begin{proposition}\label{prop:hessiancalcII} Let $X\subset \mathbb R^{n}$ be open and $F\subset J^2(X)$ be a primitive subequation.
Suppose $$f:\Omega'\to \mathbb R$$ is such that \begin{enumerate}[(a)] \item $\Omega'\subset X\times \mathbb R$ is open and has connected fibres.\addtocounter{enumi}{1} \item $f$ is $F\#\mathcal P$-subharmonic.\addtocounter{enumi}{1} \item $f$ attains its fibrewise minimum strictly in the interior.\addtocounter{enumi}{1} \item $f$ is $\mathcal C^2$. \end{enumerate} Then the marginal function $$ g(x): = \inf_{y\in \Omega'_x} f(x,y) \text{ for } x\in \pi(\Omega')$$ is $F$-subharmonic. \end{proposition} \begin{proof} Fix $x_0\in \pi(\Omega')$. By (e) there is an $a$ with $g(x_0)<a$ and a neighbourhood $V$ of $x_0$ that is relatively compact in $\pi(\Omega')$ such that the set $$ K := \{ (x,y) \in \Omega' : f(x,y)<a \text{ and } x\in V\}$$ is relatively compact, and $g<a$ on $V$. Note that for each $x\in V$ the function $y\mapsto f(x,y)$ is convex, and since $\Omega'_x$ is connected, we see that $K_x$ is also connected. Now write in block form \begin{equation}\label{eq:hessianblockformII} \operatorname{Hess}_{(x,y)} (f) = \left( \begin{array}{cc}\hat{B} &{\hat{C}} \\ {\hat{C}}^t & {\hat{D}} \\ \end{array}\right)\end{equation} (so $\hat{B},\hat{C},\hat{D}$ are functions of $(x,y)$, which are dropped from the notation). Given $\alpha,j\in \mathbb N$ we define \begin{align} \phi &:= \phi(y) := e^{-\alpha y}\\ \hat{\Gamma}&:=\hat{\Gamma}(x,y):= -(\hat{D}+j^{-1} \operatorname{Hess}_{y} \phi)^{-1}\hat{C}^t \end{align} {\bf Claim I:} For any $\delta>0$ it holds that for all $j\gg \alpha \gg 0$, \begin{align} |j^{-1} \phi |&\le \delta \\\label{eq:prophessiancalcnonstrictclaimi} \|j^{-1} \hat{\Gamma}^t \nabla\phi\|&\le \delta\end{align} uniformly over $(x,y)\in K$.\\ The first statement is immediate as $\phi$ is continuous and $K$ is relatively compact. For the second statement observe first that the inverse in the definition of $\hat{\Gamma}$ is well defined: $f$ is convex in the second variable, so $\hat{D}\ge 0$, and $\phi$ is strictly convex, so $\operatorname{Hess}_y\phi>0$.
Now \begin{align*} \|j^{-1} \hat{\Gamma}^t \nabla\phi\| &= \| j^{-1} \alpha \hat{\Gamma}^t e^{-\alpha y}\| \\ &= \| j^{-1} \alpha \hat{C} (\hat{D}+ j^{-1} \alpha^2 e^{-\alpha y})^{-1} e^{-\alpha y}\| \\ &\le \|\hat{C}\| \frac{j^{-1}\alpha e^{-\alpha y}} { \hat{D} + j^{-1}\alpha^2 e^{-\alpha y}}\le \frac{\|\hat{C}\|}{\alpha}, \end{align*} where the last inequality uses that $\hat{D}\ge 0$. Now $\|\hat{C}\|$ is bounded uniformly over the relatively compact set $K$, so Claim I follows.\\ Consider next the function \begin{align*} f_j(x,y)&: = f(x,y) + j^{-1} \phi(y) \end{align*} and set $$g_j(x): = \inf_{y\in K_x} f_j(x,y).$$ Fix a point $y_0$ with $f(x_0,y_0)<a$ and neighbourhoods $x_0\in U_0\subset V$ and $y_0\in U_1\subset \Omega'_{x_0}$ such that $U_0\times U_1\subset K$.\\ {\bf Claim II: } For $x\in U_0$ $$ g_j(x)\searrow g(x) \text{ as } j\to \infty.$$ To see this observe that since $f_j$ decreases to $f$ we have $$ g_j(x) \searrow \inf_{y\in K_x} f(x,y) \text{ as } j\to \infty.$$ On the other hand, as $x\in U_0$ and $g<a$ on $V$ we certainly have $\inf_{y\in K_x} f(x,y)<a$. But $f(x,y)\ge a$ for all $y\notin K_x$ and so in fact $\inf_{y\in K_{x}} f(x,y) = \inf_{y\in \Omega'_x} f(x,y) = g(x)$.\\ Next recall from Lemma \ref{lem:limitsunderpertubationsofsubequation} the primitive subequation $F^\delta\subset J^2(X)$ given by $$ F^{\delta}_x = \{ (r,p,A) : \exists r',p' \text{ such that } (r',p',A)\in F_x \text{ and } |r-r'|<\delta \text{ and } \|p-p'\|<\delta\}.$$ {\bf Claim III: } For given $\delta>0$ it holds that for all $j\gg \alpha \gg 0$ the function $g_j$ is $F^{\delta}$-subharmonic on $U_0$.\\ Assuming this claim for now, fix such an $\alpha$ and let $j$ tend to infinity to deduce from Claim II that $g$ is $F^\delta$-subharmonic on $U_0$. Letting $\delta\to 0$ yields that $g$ is in fact $F$-subharmonic on $U_0$ (Lemma \ref{lem:limitsunderpertubationsofsubequation}(3)).
Since $U_0$ is a neighbourhood of an arbitrary point $x_0\in \pi(\Omega')$ we conclude finally that $g$ is $F$-subharmonic on $\pi(\Omega')$ as needed.\\ {\bf Proof of Claim III:} Let $x\in U_0$. Since $f$ is convex in $y$ (Lemma \ref{lem:slicesareconvex}) the function $y\mapsto f_j(x,y)$ is strictly convex. So as $\Omega'_x$ is connected, this along with (e) implies $y\mapsto f_j(x,y)$ has a unique minimum which we denote by $\gamma_j(x)$. Note that by construction $(x,\gamma_j(x))\in K$. So if $x\in U_0$ then $\gamma_j(x)$ is the unique point that satisfies $$ \frac{\partial f_j}{\partial y} (x,\gamma_j(x)) =0.$$ As $f_j$ is strictly convex in the $y$ direction, the implicit function theorem implies that $\gamma_j$ is $\mathcal C^1$. Observe that by definition, $$g_j(x) = f_j(x,\gamma_j(x)) \text{ for } x\in U_0.$$ Now $$ \operatorname{Hess}_{(x,\gamma_j(x))} (f_j) = \left( \begin{array}{cc} {B}&{C} \\ {C}^t & {D} +j^{-1}\operatorname{Hess}_{\gamma_j(x)} \phi \\ \end{array}\right)$$ where $C = C(x) = \hat{C}(x,\gamma_j(x))$ and similarly for $B$ and $D$. Then Proposition \ref{prop:hessiancalc}(a,b) applies to $f_j|_{K}$, giving \begin{equation}\label{eq:hessianIIcalcU} \frac{d\gamma_j}{dx} = \Gamma_j(x), \text{ where } \Gamma_j(x) : = \hat{\Gamma}(x,\gamma_j(x)) = -(D+j^{-1}\alpha^2 e^{-\alpha \gamma_j(x)})^{-1}C^t\end{equation} (we remark that the transpose from equation \eqref{eq:hessiancalcderivativew} has been dropped as $D$ is a $1\times 1$ matrix).
Moreover $g_j$ is $\mathcal C^2$, and its second order jet at a point $x\in U_0$ is given by \begin{align*} J^2_{x}(g_j) &= i^*_{\Gamma_j} J^2_{(x,\gamma_j(x))}(f_j) \\ &=i_{\Gamma_j}^* J^2_{(x,\gamma_j(x))}(f) + j^{-1} i_{\Gamma_j}^* J^2_{(x,\gamma_j(x))}(\phi)\\ & = i_{\Gamma_j}^*J^2_{(x,\gamma_j(x))}(f) + j^{-1}(\phi(\gamma_j(x)) , \Gamma_j^t \nabla \phi|_{\gamma_j(x)}, \Gamma_j^t (\operatorname{Hess}_{\gamma_j(x)} \phi)\Gamma_j )\\ & \in F_x + j^{-1}(\phi(\gamma_j(x)) , \Gamma_j^t \nabla \phi|_{\gamma_j(x)}, 0 ) \end{align*} where the last line uses that $f$ is $F\#\mathcal P$-subharmonic and the Positivity property of $F$. Then the bounds in Claim I show that for $j\gg \alpha \gg 0$ it holds that $J^2_{x}(g_j) \in F^{\delta}_x$, completing the proof of Claim III and the Proposition. \end{proof} \subsection{Synthesis} The above analysis in the smooth case, combined with our previous reductions, is enough to prove our first minimum principle. \begin{theorem}[Minimum Principle in the Real Convex Case]\label{thm:minimumprincipleconvexI} Let $F\subset J^2(X)$ be a real primitive subequation such that \begin{enumerate} \item $F$ satisfies the Negativity Property, \item $F$ is convex, \item $F$ is constant coefficient. \end{enumerate} Then $F$ satisfies the minimum principle. \end{theorem} \newcounter{step} \begin{proof} This follows by combining Proposition \ref{prop:reductionsmooth} and Proposition \ref{prop:hessiancalcII}. \end{proof} \begin{remark}\ When $F = \mathcal P_{n}$ we have seen in Section \ref{sec:examples} that $F\#\mathcal P_m = \mathcal P_{n+m}$. Then Theorem \ref{thm:minimumprincipleconvexI} becomes the classical statement that if $f(x,y)$ is a function that is convex as $(x,y)$ varies in a convex subset of $\mathbb R^{n}\times \mathbb R^{m}$ whose fibres are connected then the marginal function $g(x) : = \inf_y f(x,y)$ is convex in $x$.
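This classical statement can be sanity-checked numerically; the following minimal sketch (an illustration only, with a hypothetical convex function $f$ not taken from the text) approximates a marginal function by a grid search over the fibre variable and verifies midpoint convexity:

```python
# Illustration only: the marginal function of a convex function is convex.
# Hypothetical example: f(x, y) = (x - y)**2 + y**2 is convex on R x R,
# and its marginal g(x) = inf_y f(x, y) = x**2 / 2 is attained at y = x / 2.

def f(x, y):
    return (x - y) ** 2 + y ** 2

def g(x):
    # crude grid search over y for the fibrewise infimum
    ys = [i / 100.0 for i in range(-500, 501)]
    return min(f(x, y) for y in ys)

# midpoint convexity of the marginal: g((a + b) / 2) <= (g(a) + g(b)) / 2
for a, b in [(-2.0, 3.0), (0.0, 1.0), (-1.5, -0.5)]:
    assert g((a + b) / 2) <= (g(a) + g(b)) / 2 + 1e-9

# closed form of the marginal for this particular f
assert abs(g(2.0) - 2.0) < 1e-9
```

For the points tested the fibrewise minimiser $y = x/2$ lies on the search grid, so the checks hold up to rounding; the general fact is of course the theorem above with $F=\mathcal P_n$.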
\end{remark} \section{The Minimum Principle in the Complex Convex Case}\label{sec:minimumcomplexconvex} We now prove the minimum principle in the convex complex case. The idea of the proof is similar to the real case, but it is made slightly more complicated as we are considering the complex Hessian and have to make sure that all the terms we deal with are $\mathbb T$-invariant. In the following $X\subset \mathbb C^n$ will be open and $F\subset J^{2,\mathbb C}(X)$ a primitive complex subequation. We will let $z$ be a complex coordinate on $\mathbb C^n$ and $w$ a complex coordinate on $\mathbb C$. \subsection{Reduction to domains in the complement of the zero section} We first consider a reduction that allows us to work away from the zero section of the second variable (i.e.\ to work inside $X\times \mathbb C^*$). This is needed to ensure that when we make a smooth mollification we can retain the property of being $\mathbb T$-invariant. \begin{proposition}\label{prop:reductionCstar} Let $F\subset J^{2,\mathbb C}(X)$ be a complex primitive subequation that is constant coefficient and has the Negativity Property. Suppose for any $$ f:\Omega'\to \mathbb R$$ such that \begin{enumerate}[(A)] \item $\Omega'$ is a $\mathbb T$-invariant open subset of $X\times \mathbb C$ with connected fibres, \item There exists an $F$-subharmonic function on $\pi(\Omega')$ that is bounded from below, \item $f$ is $F\#_{\mathbb C}\mathcal P^{\mathbb C}$-subharmonic and $\mathbb T$-invariant, \item $f$ is bounded from below, \item $f$ attains its fibrewise minimum strictly in the interior, \item $\Omega'\subset X\times \mathbb C^*$, \end{enumerate} the marginal function $$g(z) = \inf_{w\in \Omega'_z} f(z,w)$$ is $F$-subharmonic on $\pi(\Omega')$. Then the minimum principle holds for $F$. \end{proposition} \begin{proof} Let $f:\Omega'\to \mathbb R$ be such that (A--D) hold and also \begin{enumerate}[(E')] \item $f$ is relatively exhaustive.
\end{enumerate} Set $g(z) = \inf_{w\in \Omega_z'} f(z,w)$, which we will show is $F$-subharmonic on $\pi(\Omega')$. Then Proposition \ref{prop:reductionlemmarelativelyexhausting} applies to give that the minimum principle holds for $F$. Fixing $z_0\in \pi(\Omega')$, it is sufficient to prove $g$ is $F$-subharmonic in some neighbourhood $V$ of $z_0$. As $\Omega'_{z_0}$ is $\mathbb T$-invariant and connected we can write $$ \Omega'_{z_0} = \{w\in \mathbb C: r < |w| <R\}$$ for some $0\le r<R\le \infty$. The proof then splits into two cases.\\ \noindent {\bf Case 1: } $r=0$.\\ We are assuming $f$ is bounded from below on $\Omega'$, so say $f\ge c$. Then $(z_0,0)\in \Omega'$, so for sufficiently small neighbourhoods $V$ of $z_0$ we have $(z,0)\in \Omega'$ for all $z\in V$. Shrinking $V$ if necessary we may assume $V\Subset \pi(\Omega')$, and there exists a bounded $F$-subharmonic function on $V$. In fact shrinking $V$ further if necessary we can fix real numbers $0<R<R'<R''$ and $C$ such that for all $z\in V$, \begin{enumerate}[(I)] \item if $0\le |w|<R''$ then $(z,w)\in \Omega'$. \item if $|w|<R$ then $f(z,w)<C$. \item if $R'<|w|<R''$ then $f(z,w)>2C-c$. \end{enumerate} (This is possible by using openness and upper-semicontinuity of $f$ to obtain $R$ and $C$ in such a way that $\{(z,w) : z\in V,\ |w|<R\}$ is relatively compact in $\Omega'$, and then using that $f$ is relatively exhaustive to obtain $R'$ and $R''$). Set $$ \lambda: = \frac{R}{R'} <1.$$ {\bf Claim: } For all $j$ sufficiently large there exists $h_j:\Omega_j \to \mathbb R$ such that \begin{enumerate}[(i)] \item $\pi(\Omega_j)=V$. \item $h_j$ has properties (A--F).
\item For all $z\in V$ \begin{equation}\label{eq:infsandwithch} \inf_{w\in \Omega'_z, |w|>\lambda^{j+1}} f(z,w) \le \inf_{w\in \Omega_{j,z}} h_j(z,w) \le \inf_{w\in \Omega'_z, |w|>\lambda^j} f(z,w).\end{equation} \end{enumerate} Given this claim for now, we use the hypothesis of the Proposition to conclude that $k_j(z): = \inf_{w\in \Omega_{j,z}} h_j(z,w)$ is $F$-subharmonic on $V$. But \eqref{eq:infsandwithch} implies also that $k_j$ decreases pointwise to $g(z)=\inf_{w\in \Omega'_z} f(z,w)$, which is thus also $F$-subharmonic.\\ For the claim, choose $j$ large enough that $\lambda^{j}<R$. Define $$ \Omega_j : = \{ (z,w)\in \Omega' : z\in V \text{ and } |w|> (R/R'') \lambda^j \}$$ and set $$ h_j = \max\{ f(z,w), f(z, R\lambda^j/w) +c-C\}.$$ Observe first that if $(z,w)\in \Omega_j$ then $R\lambda^j/|w| < R''$, so $(z,R\lambda^j/w)\in \Omega'$ by (I) and thus $h_j$ is well defined on $\Omega_j$. Moreover if $z\in V$ and $|w|>\lambda^j$ then $f(z,R\lambda^j/w)+c-C<c\le f(z,w)$ so \begin{equation} (z,w)\in \Omega_j \text{ and }|w|>\lambda^j \Rightarrow h_j(z,w) = f(z,w).\label{eq:zone2} \end{equation} On the other hand if $z\in V$ and $(R/R'')\lambda^j < |w|< \lambda^{j+1}$ then $R\lambda^j/|w|>R'$ so using (III), $f(z,R\lambda^j/w) + c-C > C >f(z,w)$ (the last inequality follows from (II) since we are assuming $j$ is large enough so $\lambda^{j+1}<R$). Thus we have \begin{equation} (z,w)\in \Omega_j \text{ and } |w|< \lambda^{j+1} \Rightarrow h_j(z,w) >C. \label{eq:zone1} \end{equation} Now as $\lambda^j<R<R''$ item (I) implies $\pi(\Omega_j) =V$ so (i) holds. Lemma \ref{lem:compositionbiholomorphism} implies that the function $(z,w)\mapsto f(z,R\lambda^j/w)$ is $F\#_{\mathbb C}\mathcal P^{\mathbb C}$-subharmonic, hence so is $h_j$. From this, properties (A--D) and (F) for the function $h_j$ are immediate, and property (E) (namely that $h_j$ attains its minimum strictly in the interior) follows from \eqref{eq:zone1} and \eqref{eq:zone2}, giving (ii).
From \eqref{eq:zone1} and \eqref{eq:zone2}, if $z\in V$ then $$ \inf_{w\in \Omega_{j,z}} h_j(z,w) = \inf_{w\in \Omega'_{z}, |w|>\lambda^{j+1}} h_j(z,w).$$ Then observing that $\Omega' \cap \{ |w|>\lambda^j\} \subset \Omega_{j}\cap \{ |w|>\lambda^{j+1}\}$ we have $$ \inf_{w\in \Omega_{j,z}} h_j(z,w) = \inf_{w\in \Omega_{j,z}, |w|>\lambda^{j+1}} h_j(z,w) \le \inf_{w\in \Omega'_{z}, |w|>\lambda^{j}} f(z,w).$$ On the other hand $h_j\ge f$ everywhere so \begin{align*} \inf_{w\in \Omega_{j,z}} h_j(z,w) &= \inf_{w\in \Omega_{j,z}, |w|>\lambda^{j+1}} h_j(z,w) \ge \inf_{w\in \Omega_{j,z}, |w|>\lambda^{j+1}} f(z,w)\\&\ge \inf_{w\in \Omega'_{z}, |w|>\lambda^{j+1}} f(z,w)\end{align*} where the last inequality uses $\Omega_j\subset \Omega'$. Thus we have (iii), giving the claim and completing the proof in the case $r=0$.\\ \noindent {\bf Case 2: } $r>0$.\\ Fix $a>g(z_0)$. As $f$ is assumed to be relatively exhaustive, there is a relatively compact neighbourhood $V$ of $z_0$ such that the set $ K = \{ (z,w)\in \Omega' : f(z,w)<a \text{ and } z\in V\}$ is relatively compact in $\Omega'$.
Hence (just because it is relatively compact) we can arrange, by shrinking $V$ if necessary, that there is an $r'>r$ such that $$ K \subset \{ (z,w) : |w|>r'\}.$$ Furthermore there is no loss in assuming that there exists an $F$-subharmonic function on $V$ that is bounded from below. Now set $$j:= f|_{\Omega' \cap (V\times \{ w : |w|>r\})}.$$ Then by construction, \begin{enumerate}[(i)] \item $j$ has properties (A--F) \item The marginal function of $j$ equals the marginal function of $f$ on $V$, i.e. $$\inf_{w} j(z,w) = \inf_{w\in \Omega'_z} f(z,w) = g(z) \text{ for } z\in V.$$ \end{enumerate} Given this, our hypothesis applies to $j$, giving that $g$ is $F$-subharmonic on $V$, which completes the case $r>0$, and the proof of the Proposition is finished. \end{proof} \subsection{Reduction to the smooth complex case} \begin{proposition} \label{prop:reductionsemiconvexcomplex} Let $F\subset J^{2,\mathbb C}(X)$ be a complex constant-coefficient convex primitive subequation that has the Negativity Property. Then Proposition \ref{prop:reductionCstar} continues to hold if we assume in addition that \begin{enumerate}[(G)] \item $f$ is semiconvex. \end{enumerate} \end{proposition} \begin{proof} The proof is the same as that of Proposition \ref{prop:reductionsemiconvex}, only using Corollary \ref{cor:prop:approxsemiTinvariant} instead of Proposition \ref{prop:approxsemi} to ensure that the approximating semiconvex functions $f_j$ are $\mathbb T$-invariant. \end{proof} \begin{proposition} \label{prop:reductionsmoothcomplex} Let $F\subset J^{2,\mathbb C}(X)$ be a complex constant-coefficient convex primitive subequation that has the Negativity Property. Then Proposition \ref{prop:reductionsemiconvexcomplex} continues to hold if we assume in addition that \begin{enumerate}[(H)] \item $f$ is smooth.
\end{enumerate} \end{proposition} \begin{proof} The proof is the same as the proof of Proposition \ref{prop:reductionsmooth}, only using Proposition \ref{prop:smoothapproximation_complex} instead of Proposition \ref{prop:smoothapproximation} to ensure that the approximating smooth functions $h_j$ are $\mathbb T$-invariant (and we observe here that we are using in a crucial way condition (F), which says $\Omega'\subset X\times \mathbb C^*$). \end{proof} \subsection{The minimum principle in the complex smooth case} \begin{proposition}\label{prop:hessiancalccomplex} Let $\Omega\subset X\times \mathbb C^{*}$ be open and $\mathbb T^m$-invariant, and assume $f:\Omega\to \mathbb R$ is such that \begin{enumerate}[(1)] \item $f$ is $\mathcal{C}^2$, $\mathbb T^m$-invariant and strictly plurisubharmonic in the second variable. \item There exists a real $\mathcal C^1$ function $\gamma:X\to \mathbb R^m\subset \mathbb C^m$ such that $$ g(z) :=\inf_{w\in \Omega_z} f(z,w) = f(z,\gamma(z)) \text{ for } z\in \pi(\Omega).$$ \end{enumerate} Then \begin{enumerate}[(a)] \item The derivative of $\gamma$ is given by \begin{equation}\Gamma:=\frac{d\gamma}{dz} = -\frac{1}{2{D}} C^*\label{eq:hessiancalcderivativewcomplex}\end{equation} where \begin{equation}\label{eq:hessianblockformcomplex} \operatorname{Hess}^{\mathbb C}_{(z,\gamma(z))} (f) = 2\left( \begin{array}{cc} B&C \\ C^* & D \\ \end{array}\right)\end{equation} is the complex Hessian matrix of $f$ at $(z,\gamma(z))$ in block form (by which we mean $B_{ij} = \frac{\partial^2 f}{\partial z_i \partial \overline{z}_j}|_{(z,\gamma(z))}$, $C_{i} = \frac{\partial^2 f}{\partial z_i \partial \overline{w}}|_{(z,\gamma(z))}$ and $D = \frac{\partial^2 f}{\partial w \partial \overline{w}}|_{(z,\gamma(z))}$).
\item The marginal function $g$ is $\mathcal C^2$, and its second order complex jet at a point $z\in \pi(\Omega)$ is given by \begin{equation}\label{eq:hessiancalcderivativejetgcomplex} J^{2,\mathbb C}_{z}(g) = i^*_{2\Gamma } J^{2,\mathbb C}_{(z,\gamma(z))}(f) \end{equation} where $i^*$ is as defined in \eqref{eq:defofistar}. \item If additionally $f$ is $F\#_{\mathbb C}\mathcal P^{\mathbb C}$-subharmonic on $\Omega$ then $g$ is $F$-subharmonic on $\pi(\Omega)$. \end{enumerate} \end{proposition} \begin{proof} As $\gamma(z)$ is a minimum of the function $w\mapsto f(z,w)$ we have \begin{align}\label{eq:chainruleatmin_complex1} f_{\overline{w}} (z,\gamma(z)) \equiv 0\text{ and} \\ f_{{w}} (z,\gamma(z)) \equiv 0. \nonumber \end{align} Differentiating \eqref{eq:chainruleatmin_complex1} with respect to $z$, $$f_{\overline{w} z} (z,\gamma(z)) + f_{w\overline{w}} (z,\gamma(z)) \gamma_z(z) + f_{ww}(z,\gamma(z)) \gamma_z =0$$ where we have used that $\gamma$ is real so $(\overline{\gamma})_z = \gamma_z$. Now $f$ is $\mathbb T^m$-invariant, so \eqref{eq:chainruleatmin_complex1} implies $$ f_{ww} (z,\gamma(z)) = f_{w\overline{w}}(z,\gamma(z))$$ (this can be seen, for instance, by using polar coordinates). Thus in fact $$f_{\overline{w} z} (z,\gamma(z)) + 2 f_{w\overline{w}} (z,\gamma(z)) \gamma_z(z)=0,$$ giving \eqref{eq:hessiancalcderivativewcomplex}. For the second statement, by definition $$ g(z) = f(z,\gamma(z))$$ so differentiating with respect to $z$ gives \begin{align*} g_z &= f_z(z,\gamma(z)) + f_w(z,\gamma(z)) \gamma_z + f_{\overline{w}}(z,\gamma(z)) \gamma_z\\ &= f_z(z,\gamma(z)).
\label{eq:chainrule1complex}\end{align*} Differentiating with respect to $\overline{z}$ gives $$g_{z\overline{z}}=f_{z\overline{z}}(z,\gamma(z)) + f_{zw}(z,\gamma(z)) \gamma_{\overline{z}} + f_{z\overline{w}}(z,\gamma(z)) \overline{\gamma_z}.$$ After some manipulation, in terms of second order jets we deduce $$ J^{2,\mathbb C}_{z}(g) = (f(z,\gamma(z)), 2 \frac{\partial f}{\partial \overline{z}}|_{(z,\gamma(z))}, 2B - 8\Gamma^* D\Gamma) = i^*_{2\Gamma} J^{2,\mathbb C}_{(z,\gamma(z))}(f)$$ which is \eqref{eq:hessiancalcderivativejetgcomplex}. The final statement follows from this (precisely as in the proof of Proposition \ref{prop:hessiancalc}). \end{proof} The following perturbation argument deals with the fact that $w\mapsto f(z,w)$ may not attain a (unique) minimum. We stress that we use in an essential way that $\Omega'\subset X\times \mathbb C^*$. \begin{proposition}\label{prop:hessiancalcIIcomplex} Let $X\subset \mathbb C^{n}$ be open and $F\subset J^{2,\mathbb C}(X)$ be a complex primitive subequation. Suppose $$f:\Omega'\to \mathbb R$$ is such that \begin{enumerate}[(A)] \item $\Omega'\subset X\times \mathbb C$ is open, $\mathbb T$-invariant and has connected fibres.\addtocounter{enumi}{1} \item $f$ is $F\#_{\mathbb C}\mathcal P^{\mathbb C}$-subharmonic and $\mathbb T$-invariant. \addtocounter{enumi}{1} \item $f$ attains its fibrewise minimum strictly in the interior. \item $\Omega'\subset X\times \mathbb C^*$.\addtocounter{enumi}{1} \item $f$ is $\mathcal C^2$. \end{enumerate} Then the marginal function $$ g(z): = \inf_{w\in \Omega'_z} f(z,w) \text{ for } z\in \pi(\Omega')$$ is $F$-subharmonic. \end{proposition} \begin{proof} This is similar to the proof of Proposition \ref{prop:hessiancalcII}, but made slightly more complicated due to the presence of the complex variable. For completeness we give the entire argument here. Fix $z_0\in \pi(\Omega')$, pick $a$ so $g(z_0)<a$ and let $w_0$ be such that $f(z_0,w_0)<a$.
Fix also an open neighbourhood $z_0\in V\Subset \pi(\Omega')$ and let $$ K := \{ (z,w) \in \Omega' : f(z,w)<a \text{ and } z\in V\}$$ which by hypothesis is relatively compact. Since we are assuming $\Omega'\subset X\times \mathbb C^*$, the set $K$ is bounded away from $w=0$. So we may fix a large real $M$ so \begin{equation}(z,w)\in K \Rightarrow M^{-1}<\|w\|< M.\label{eq:defM}\end{equation} Note that $K$ is also $\mathbb T$-invariant, so shrinking $V$ if necessary we may assume that each $K_z$ is a non-empty annulus in $\mathbb C^*$ (so in particular each $K_z$ is connected). Write in block form \begin{equation}\label{eq:hessianblockformIII} \operatorname{Hess}^{\mathbb C}_{(z,w)} (f) = 2 \left( \begin{array}{cc}\hat{B} &{\hat{C}} \\ {\hat{C}}^* & {\hat{D}} \\ \end{array}\right)\end{equation} (so $\hat{B},\hat{C},\hat{D}$ are functions of $(z,w)$, which are dropped from the notation). Given $\alpha,j\in \mathbb N$ we define \begin{align} \phi &:= \phi(w) := e^{\alpha |w|^2}\\ \hat{\Gamma}&:=\hat{\Gamma}_j(z,w):= -\frac{1}{2(\hat{D}+j^{-1} \phi_{w\overline{w}})}\hat{C}^*. \end{align} {\bf Claim I:} For any $\delta>0$ it holds that for all $j\gg \alpha \gg 0$, \begin{align} |j^{-1} \phi |&\le \delta \\\label{eq:prophessiancalcnonstrictclaimicomplex} \|4 j^{-1} \hat{\Gamma}^* \phi_{\overline{w}} \|&\le \delta\end{align} uniformly over $(z,w)\in K$.\\ The first statement is immediate from the definition of $\phi$ as $K$ is relatively compact. For the second statement observe first that the inverse in the definition of $\hat{\Gamma}$ is well defined: $f$ is $\mathcal P^{\mathbb C}$-subharmonic in the second variable, so $\hat{D}\ge 0$, and $\phi$ is strictly plurisubharmonic, so $\phi_{w\overline{w}}$ is strictly positive.
Now \begin{align} \phi_{\overline w} &= \alpha e^{\alpha |w|^2} w\\ \phi_{w\overline{w}} &= e^{\alpha |w|^2}(\alpha^2 |w|^2 + \alpha) \end{align} so \begin{align*} \| 4j^{-1} \hat{\Gamma}^* \phi_{\overline{w}} \| \le \frac{2j^{-1}\alpha e^{\alpha|w|^2}|w| \|\hat{C}^*\|}{\hat{D} + j^{-1} e^{\alpha |w|^2}(\alpha^2 |w|^2 + \alpha)} \le \frac{ 2\|\hat{C}^*\| }{\alpha |w|} \end{align*} as $\hat{D}\ge 0$. Using \eqref{eq:defM} this is bounded above by $\delta$ as long as $\alpha$ is sufficiently large, since $\|\hat{C}\|$ is bounded uniformly over the relatively compact set $K$. Thus Claim I follows.\\ Consider next the function \begin{align*} f_j(z,w)&: = f(z,w) + j^{-1} \phi(w) \end{align*} and set $$g_j(z): = \inf_{w\in K_z} f_j(z,w).$$ Fix neighbourhoods $z_0\in U_0\subset V$ and $w_0\in U_1\subset \Omega_{z_0}$ such that $U_0\times U_1\subset K$.\\ {\bf Claim II: } For $z\in U_0$ $$ g_j(z)\searrow g(z) \text{ as } j\to \infty.$$ This is proved exactly as in the proof of Proposition \ref{prop:hessiancalcII} so is not repeated.\\ {\bf Claim III: } For given $\delta>0$ it holds that for all $j\gg \alpha \gg 0$ the function $g_j$ is $F^{\delta}$-subharmonic on $U_0$.\\ Assuming this claim for now, fix such an $\alpha$ and let $j$ tend to infinity to deduce from Claim II that $g$ is $F^\delta$-subharmonic on $U_0$. Letting $\delta\to 0$ yields that $g$ is in fact $F$-subharmonic on $U_0$ (Lemma \ref{lem:limitsunderpertubationsofsubequation}(3)). Since $U_0$ is a neighbourhood of an arbitrary point $z_0\in \pi(\Omega)$ we conclude finally that $g$ is $F$-subharmonic on $\pi(\Omega)$ as needed.\\ {\bf Proof of Claim III:} Let $z\in U_0$. Since $f$ is plurisubharmonic in $w$ (Lemma \ref{lem:slicesareconvex}) the function $w\mapsto f_j(z,w)$ is strictly plurisubharmonic and exhaustive.
As it is also $\mathbb T$-invariant this implies $w\mapsto f_j(z,w)$ has a unique minimum which we denote by $\gamma_j(z)$, which for each $z$ is the unique point satisfying $$ \frac{\partial f_j}{\partial w} (z,\gamma_j(z))=0.$$ Again using that $f$ is $\mathbb T$-invariant, the implicit function theorem implies that $\gamma_j$ is $\mathcal C^1$. Note that by construction $(z,\gamma_j(z))\in K$. Thus by definition, $$g_j(z) = f_j(z,\gamma_j(z)) \text{ for } z\in U_0.$$ Now $$ \operatorname{Hess}^{\mathbb C}_{(z,\gamma_j(z))} (f_j) = 2 \left( \begin{array}{cc} {B}&{C} \\ {C}^* & {D} +j^{-1} \phi_{w\overline{w}}\\ \end{array}\right),$$ where $C = C(z) = \hat{C}(z,\gamma_j(z))$ and similarly for $B$ and $D$. Then Proposition \ref{prop:hessiancalccomplex}(1,2) applies to $f_j|_{K}$, giving \begin{equation}\label{eq:hessianIIcalcU_complex} \Gamma_j(z):= \frac{d\gamma_j}{dz} = \hat{\Gamma}_j(z,\gamma_j(z))\end{equation} where $\hat{\Gamma}_j$ is as above (we remark that the transpose from equation \eqref{eq:hessiancalcderivativew} has been dropped as $m=1$). Moreover $g_j$ is $\mathcal C^2$, and its second order jet at a point $z\in U_0$ is given by \begin{align*} J^2_{z}(g_j) &= i^*_{2\Gamma_j} J^2_{(z,\gamma_j(z))}(f_j) \\ &=i^*_{2\Gamma_j} J^2_{(z,\gamma_j(z))}(f) + j^{-1} i^*_{2\Gamma_j} J^2_{(z,\gamma_j(z))}(\phi)\\ & = i^*_{2\Gamma_j}J^2_{(z,\gamma_j(z))}(f) + j^{-1}(\phi(\gamma_j(z)) , 4 \Gamma_j^* \phi_{\overline{w}}|_{\gamma_j(z)}, 8 \Gamma_j^* \phi_{w\overline{w}}|_{\gamma_j(z)}\Gamma_j )\\ & \in F_z + j^{-1}(\phi(\gamma_j(z)) , 4 \Gamma_j^* \phi_{\overline{w}}|_{\gamma_j(z)}, 0 ) \end{align*} where the last line uses that $f$ is $F\#_{\mathbb C}\mathcal P^{\mathbb C}$-subharmonic and the Positivity property of $F$.
Then \eqref{eq:prophessiancalcnonstrictclaimicomplex} applies to show that for $j\gg \alpha \gg 0$ it holds that $J^2_{z}(g_j) \in F^{\delta}_z$, completing the proof of Claim III.\\ From Claim II and Claim III the Proposition follows, exactly as for Proposition \ref{prop:hessiancalcII}. \end{proof} \subsection{Synthesis} \begin{theorem}[Minimum Principle in the Complex Convex Case]\label{thm:minimumprinciplecomplexconvexI} Let $X\subset \mathbb C^n$ be open and $F\subset J^{2,\mathbb C}(X)$ be a complex primitive subequation such that \begin{enumerate} \item $F$ has the Negativity Property, \item $F$ is convex, \item $F$ is constant coefficient. \end{enumerate} Then $F$ satisfies the minimum principle. \end{theorem} \begin{proof} This follows by combining Proposition \ref{prop:reductionsmoothcomplex} and Proposition \ref{prop:hessiancalcIIcomplex}. \end{proof} \begin{remark} When $F = \mathcal P^{\mathbb C}_{\mathbb C^n}$ we have seen in Example \ref{sec:exampleI} that $F\#_{\mathbb C}\mathcal P^{\mathbb C}_{\mathbb C^m} = \mathcal P^{\mathbb C}_{\mathbb C^{n+m}}$. Then Theorem \ref{thm:minimumprinciplecomplexconvexI} is precisely the Kiselman minimum principle. \end{remark} \begin{corollary}\label{cor:minimumimaginary} With $F$ as in Theorem \ref{thm:minimumprinciplecomplexconvexI}, suppose that $\Omega\subset X\times \mathbb C$ is an $F\#_{\mathbb C}\mathcal P^{\mathbb C}$-pseudoconvex domain and $f:\Omega\to \mathbb R\cup \{-\infty\}$ is $F\#_{\mathbb C}\mathcal P^{\mathbb C}$-subharmonic. Assume that \begin{enumerate} \item $\Omega$ is independent of the imaginary part of the second variable. That is, $$ (z,w)\in \Omega \text{ and } \operatorname{Re}(w) = \operatorname{Re}(w') \Rightarrow (z,w')\in \Omega.$$ \item $\Omega$ has connected fibres. \item $f$ is independent of the imaginary part of the second variable.
That is, $$ (z,w)\in \Omega \text{ and } \operatorname{Re}(w) = \operatorname{Re}(w') \Rightarrow f(z,w) = f(z,w').$$ \item $\Omega$ admits an exhaustive continuous $F\#_{\mathbb C}\mathcal P^{\mathbb C}$-subharmonic function $u$ that is independent of the imaginary part of the second variable. \end{enumerate} Then $\pi(\Omega)$ is $F$-pseudoconvex and the marginal function $$ g(z) : = \inf_{w\in \Omega_z} f(z,w)$$ is $F$-subharmonic. \end{corollary} \begin{proof} Let $\phi(z,w) = (z,e^{w})$ and set $\Omega' = \phi(\Omega)$. Then $u$ descends to an exhaustive continuous function $u'$ on $\Omega'$, which by Lemma \ref{lem:compositionbiholomorphism} is $F\#_{\mathbb C}\mathcal P^{\mathbb C}$-subharmonic. Hence $\Omega'$ is $F\#_{\mathbb C}\mathcal P^{\mathbb C}$-pseudoconvex. Moreover $f$ descends to an $F\#_{\mathbb C}\mathcal P^{\mathbb C}$-subharmonic function $f'$ on $\Omega'$. But clearly $\pi(\Omega') = \pi(\Omega)$ and the marginal function of $f'$ is the same as the marginal function of $f$, so the minimum principle for $F$ applied to $(\Omega',f')$ gives the result we want. \end{proof} \appendix \section{F-subharmonic Functions}\label{appendix:subequations} \subsection{Types of Subequations} \begin{definition} Let $F\subset J^2(X)$. \begin{enumerate} \item We say $F$ is \emph{constant coefficient} if $F_x$ is independent of $x$, i.e.\ $$ (r,p,A) \in F_x \Leftrightarrow (r,p,A)\in F_{x'} \text{ for all }x,x',r,p,A.$$ \item We say $F$ is \emph{independent of the gradient part} (or \emph{gradient-independent}) if each $F_x$ is independent of $p$, i.e. $$ (r,p,A)\in F_x \Leftrightarrow (r,p',A) \in F_x \text{ for all } x,r,p,p',A.$$ \item We say $F$ \emph{depends only on the Hessian part} if each $F_x$ is independent of $(r,p)$, i.e.
$$ (r,p,A) \in F_x \Leftrightarrow (r',p',A)\in F_{x} \text{ for all }x,r,r',p,p',A.$$ \end{enumerate} \end{definition} \begin{definition}[$G$-Invariance]The group $GL_n(\mathbb R)$ acts on $J^2(X)$ by $$ g^*(x,r,p,A):= (x,r,g^t p, g^t Ag) \text{ for } g\in GL_n(\mathbb R).$$ If $G$ is a subgroup of $GL_n(\mathbb R)$ we say $F\subset J^2(X) $ is \emph{$G$-invariant} if $g^*\alpha \in F$ for all $\alpha\in F$ and all $g\in G$. \end{definition} \begin{remark} Our action of $GL_n$ comes from thinking of the jet space using the cotangent space to $X$, and differs in convention from that of \cite{HL_Dirichletdualitymanifolds}. \end{remark} \begin{comment} \begin{proposition}\label{prop:quadratic} Assume that $F\subset J^2$ is closed and let $f$ be upper-semicontinuous. Then $f$ is not $F$-subharmonic if and only if there exists a point $x_0$ and $\epsilon>0$ and a quadratic function $$q(y):= r + p^t(x-x_0) +\frac{1}{2}(x-x_0)^t A (x-x_0)$$ such that \begin{enumerate} \item $f(x) - q(x) \le - \epsilon |x-x_0|^2$ \item $f(x_0) - q(x_0)=0$ \item $J^2_{x_0}(q)\notin F$ \end{enumerate} \end{proposition} \begin{proof} \cite[Lemma 2.4]{HLRiemannian} \end{proof} \end{comment} \subsection{Complex Subequations}\label{sec:complexsubequations} Set $$ \mathbb J = \left(\begin{array}{cc} 0& -\operatorname{Id}_{n} \\ \operatorname{Id}_n & 0\end{array} \right)\in M_{2n\times 2n}(\mathbb R).$$ If $A\in M_{2n\times 2n}(\mathbb R)$ commutes with $\mathbb J$ then, making the standard identification $\mathbb C^n\simeq \mathbb R^{2n}$, we may think of $A$ as a complex matrix $\hat{A} \in M_{n\times n}(\mathbb C)$. Explicitly, if in block form $$ A = \left(\begin{array}{cc} a& c \\ b & d \end{array} \right)$$ where $a,b,c,d\in M_{n\times n}(\mathbb R)$ then $A$ commutes with $\mathbb J$ if and only if $a=d$ and $b=-c$, in which case $$\hat{A}: = a+ib \in M_{n\times n}(\mathbb C).$$ Observe $\widehat{AB} = \hat{A} \hat{B}$ and $\widehat{A^t} = \hat{A}^*$.
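The identities above are easy to confirm numerically. The following Python sketch with NumPy (dimensions, random seed and test matrices are our own arbitrary choices) checks that a real matrix of block form $\left(\begin{smallmatrix} a& -b \\ b & a \end{smallmatrix}\right)$ commutes with $\mathbb J$, that $A\mapsto \hat A$ is multiplicative and sends transposition to conjugate transposition, and, anticipating the next paragraph, that $\frac12(A-\mathbb J A\mathbb J)$ is hermitian when $A$ is symmetric:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

# the standard complex structure J on R^{2n}
J = np.block([[np.zeros((n, n)), -np.eye(n)], [np.eye(n), np.zeros((n, n))]])

def hat(A):
    # A in M_{2n}(R) commuting with J has block form [[a, -b], [b, a]]; hat(A) = a + ib
    a, b = A[:n, :n], A[n:, :n]
    return a + 1j * b

def complexify(a, b):
    # the real 2n x 2n matrix corresponding to a + ib
    return np.block([[a, -b], [b, a]])

a, b = rng.standard_normal((n, n)), rng.standard_normal((n, n))
c, d = rng.standard_normal((n, n)), rng.standard_normal((n, n))
A, B = complexify(a, b), complexify(c, d)

# A commutes with J, and hat is multiplicative: (AB)^ = A^ B^
assert np.allclose(A @ J, J @ A)
assert np.allclose(hat(A @ B), hat(A) @ hat(B))
# transposition corresponds to conjugate transposition: (A^t)^ = (A^)^*
assert np.allclose(hat(A.T), hat(A).conj().T)

# for symmetric S, the matrix S_C = (S - J S J)/2 commutes with J and is hermitian
S = rng.standard_normal((2 * n, 2 * n))
S = S + S.T
S_C = 0.5 * (S - J @ S @ J)
assert np.allclose(S_C @ J, J @ S_C)
assert np.allclose(hat(S_C), hat(S_C).conj().T)
```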
Let $\operatorname{Herm}_n$ be the set of hermitian $n\times n$ complex matrices, and $$\operatorname{Pos}_n^{\mathbb C} := \{ \hat{A} \in \operatorname{Herm}_n : v^* \hat{A} v \ge 0 \text{ for all } v\in \mathbb C^n\}$$ the subset of semipositive hermitian matrices. From the above it is easy to check that if $A\mathbb J = \mathbb JA$ then \begin{align*} A\in \operatorname{Sym}^2_{2n} &\Longleftrightarrow \hat{A}\in \operatorname{Herm}_n\text{ and }\\ A \in \operatorname{Pos}_{2n}&\Longleftrightarrow \hat{A}\in \operatorname{Pos}^{\mathbb C}_{n}. \end{align*} Now for any $A\in M_{2n\times 2n}(\mathbb R)$ the matrix $$ A_{\mathbb C} : = \frac{1}{2} ( A - \mathbb J A \mathbb J)$$ commutes with $\mathbb J$ and thus we may think of $A_{\mathbb C}$ as an element of $M_{n\times n}(\mathbb C)$. Observe that if $A$ is symmetric then $A_{\mathbb C}$ is hermitian. \begin{definition} Let $X\subset \mathbb R^{2n}\simeq \mathbb C^n$ be open. We say $F\subset J^2(X)$ is a \emph{complex subequation} if $(x,r,p,A)\in F$ if and only if $(x,r,p,A_{\mathbb C})\in F$. \end{definition} So, by abuse of notation, if $F$ is a complex subequation we may equivalently consider it as a subset $$ F\subset J^{2,\mathbb C}(X) := X\times \mathbb R \times \mathbb C^n \times \operatorname{Herm}_n =: X\times J^{2,\mathbb C}_n$$ without any loss of information. The group $GL_n(\mathbb C)$ acts on $J^{2,\mathbb C}(X)$ by $$ g^*(x,r,p,A) = (x,r,g^* p, g^* Ag).$$ Observe also that if $F$ is complex, then having the Positivity property \eqref{eq:positivity} is equivalent to $$(x,r,p,A)\in F \Longrightarrow (x,r,p,A+P)\in F \text{ for all }P\in \operatorname{Pos}_n^{\mathbb C}.$$ \begin{example} Let $$\mathcal P^{\mathbb C}_X : = X\times \mathbb R\times \mathbb C^n \times \operatorname{Pos}^{\mathbb C}_n$$ which is a convex complex subequation. We will write $\mathcal P^{\mathbb C}$ for $\mathcal P^{\mathbb C}_X$ when $X$ is clear from context.
\end{example} \begin{example}[Convex and Plurisubharmonic]\label{example:convexity:appendix} Recall $\mathcal P_X = X\times \mathbb R\times \mathbb R^n\times \operatorname{Pos}_n$. Then $\mathcal P_X(X)$ consists of the locally convex functions on $X$ \cite[Example 14.2]{HL_Dirichletdualitymanifolds}. Similarly if $X\subset \mathbb C^n$ is open then $\mathcal P^{\mathbb C}_X(X)$ consists of the plurisubharmonic functions on $X$ \cite[p63]{HL_Dirichletdualitymanifolds}. \end{example} \subsection{Basic properties of $F$-subharmonic functions} The following lists some of the basic limit properties satisfied by $F$-subharmonic functions (under very mild assumptions on $F$). \begin{proposition}\label{prop:basicproperties} Let $F\subset J^2(X)$ be closed. Then \begin{enumerate} \item (Maximum Property) If $f,g\in F(X)$ then $\max\{f,g\}\in F(X)$. \item (Decreasing Sequences) If $f_j$ is a decreasing sequence of functions in $F(X)$ (so $f_{j+1}\le f_j$ over $X$) then $f:=\lim_j f_j$ is in $F(X)$. \item (Uniform limits) If $f_j$ is a sequence of functions in $F(X)$ that converge locally uniformly to $f$ then $f\in F(X)$. \item (Families locally bounded above) Suppose $\mathcal F\subset F(X)$ is a family of $F$-subharmonic functions locally uniformly bounded from above. Then the upper-semicontinuous regularisation of the supremum $$ f:= {\sup}^*_{f\in \mathcal F} f$$ is in $F(X)$. \item If $F$ is constant coefficient, $f$ is $F$-subharmonic on $X$ and $x_0\in \mathbb R^{n}$ is fixed, then the function $x\mapsto f(x-x_0)$ is $F$-subharmonic on $X+x_0$. \end{enumerate} \end{proposition} \begin{proof} See \cite[Theorem 2.6]{HL_Dirichletdualitymanifolds} for (1-4). Item (5) is immediate from the definition. \end{proof} \begin{lemma}[Limits under perturbations of subequations]\label{lem:limitsunderpertubationsofsubequation} Let $X$ be open and $F\subset J^2(X)$ be a primitive subequation.
For $\delta>0$ let $F^\delta\subset J^2(X)$ be defined by $$ F^{\delta} = \{ (x,r,p,A) : \exists r',p' \text{ with } (x,r',p',A)\in F \text{ and } |r-r'|\le \delta \text{ and } \|p-p'\|\le \delta\}.$$ Then \begin{enumerate} \item $F^{\delta}$ is a primitive subequation. \item If $F$ satisfies the Negativity property then so does $F^\delta$. \item $\bigcap_{\delta>0} (F^\delta(X))=F(X)$.\end{enumerate} \end{lemma} \begin{proof} That $F^{\delta}$ has the Positivity property is immediate from the definition, and $F^{\delta}$ is closed as $F$ is closed, giving (1). Statement (2) is also immediate from the definition. Finally, using that $F$ is closed, $\bigcap_{\delta>0} F^{\delta}_x = F_x$, and thus $\bigcap_{\delta>0} (F^{\delta}(X)) = F(X).$ \end{proof} \subsection{$F$-subharmonicity in terms of second order jets} It is useful to understand the property of being $F$-subharmonic in terms of second order jets. To do so we first discuss what it means to be twice differentiable at a point. Again let $X\subset \mathbb R^n$ be open. \begin{definition}[Twice differentiability at a point] We say that $f:X\to \mathbb R$ is \emph{twice differentiable} at $x_0\in X$ if there exist $p\in \mathbb R^n$ and $L\in \operatorname{Sym}_n^2$ such that for all $\epsilon>0$ there is a $\delta>0$ such that for $\|x-x_0\|<\delta$ we have \begin{equation}\label{eq:twicediff} |f(x) - f(x_0) - p.(x-x_0) - \frac{1}{2} (x-x_0)^tL (x-x_0) | \le \epsilon \|x-x_0\|^2. \end{equation} \end{definition} When $f$ is twice differentiable at $x_0$ the pair $p,L$ in \eqref{eq:twicediff} is unique; moreover in this case $f$ is differentiable at $x_0$ and $$p = \nabla f|_{x_0}= \left( \begin{array}{c} \frac{\partial f}{\partial x_1} \\ \frac{\partial f}{\partial x_2} \\ \vdots \\ \frac{\partial f}{\partial x_n} \end{array}\right)|_{x_0}\in \mathbb R^n.$$ When $f$ is twice differentiable at $x_0$ we shall refer to $L$ as the \emph{Hessian} of $f$ at $x_0$ and denote it by $\operatorname{Hess}(f)|_{x_0}$.
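As a concrete illustration of the definition, the following Python sketch (the test function, base point and tolerance are our own choices) verifies the defining inequality \eqref{eq:twicediff} numerically for a smooth function, for which the remainder is in fact of order $\|x-x_0\|^3$:

```python
import numpy as np

def f(x):
    # a smooth test function on R^2
    return np.sin(x[0]) + x[0] * x[1] ** 2

x0 = np.array([0.3, -0.8])
# the jet data (p, L) of f at x0, computed by hand:
# grad f = (cos x1 + x2^2, 2 x1 x2), Hess f = [[-sin x1, 2 x2], [2 x2, 2 x1]]
p = np.array([np.cos(x0[0]) + x0[1] ** 2, 2 * x0[0] * x0[1]])
L = np.array([[-np.sin(x0[0]), 2 * x0[1]], [2 * x0[1], 2 * x0[0]]])

rng = np.random.default_rng(1)
worst = 0.0
for h in [1e-1, 1e-2, 1e-3]:
    for _ in range(20):
        v = rng.standard_normal(2)
        x = x0 + h * v / np.linalg.norm(v)
        taylor = f(x0) + p @ (x - x0) + 0.5 * (x - x0) @ L @ (x - x0)
        worst = max(worst, abs(f(x) - taylor) / np.linalg.norm(x - x0) ** 3)

# the remainder in eq. (twicediff) is o(|x - x0|^2); for this C^3 function
# it is O(|x - x0|^3), so the ratio above stays bounded
assert worst < 5.0
```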
Of course, by Taylor's Theorem, when $f$ is $\mathcal C^{2}$ in a neighbourhood of $x_0$ then $\operatorname{Hess}(f)|_{x_0}$ is the matrix with entries $$(\operatorname{Hess}(f)|_{x_0} )_{ij}: = \frac{\partial^2 f}{\partial x_i\partial x_j}|_{x_0}.$$ \begin{definition}[Second order jet] Suppose that $f: X\to \mathbb R$ is twice differentiable at $x_0$. We denote the \emph{second order jet} of $f$ at $x_0$ by \begin{equation}J^2_{x_0}(f):= (f(x_0), \nabla f|_{x_0}, \operatorname{Hess}(f)|_{x_0}) \in J^2_{n} = \mathbb R\times \mathbb R^n\times \operatorname{Sym}_n^2.\label{eq:secondorderjet} \end{equation} \end{definition} The importance of the Positivity property is made apparent by the following, which shows that $F$-subharmonicity behaves as expected for sufficiently smooth functions. \begin{lemma}\label{lem:fsubharmonicc2} Let $F\subset J^2(X)$ satisfy the Positivity assumption \eqref{eq:positivity} and suppose $f:X\to \mathbb R$ is $\mathcal C^2$. Then $f\in F(X)$ if and only if $J^2_{x}(f)\in F_x$ for all $x\in X$. \end{lemma} \begin{proof} The reader may easily verify this, or consult \cite[Equation 2.4 and Proposition 2.3]{HL_Dirichletdualitymanifolds}. \end{proof} The definition of $F$-subharmonicity given above says that at any upper-contact point $x$, with upper-second order jet $(p,A)$, the quadratic function $$ y\mapsto f(x) + p.(y-x) + \frac{1}{2} (y-x)^tA (y-x)$$ has second-order jet lying in $F_x$. The next statement says that this is equivalent to the more classical ``viscosity definition". Given an upper-semicontinuous $f$ we say that $\phi$ is a $\mathcal{C}^2$-\emph{test function touching $f$ from above at $x_0$} if $\phi$ is $\mathcal C^2$ in a neighbourhood of $x_0$ with $\phi\ge f$ on this neighbourhood and $\phi(x_0) = f(x_0)$.
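In the simplest case $F=\mathcal P$, where by Example \ref{example:convexity:appendix} the $F$-subharmonic functions are the convex ones, Lemma \ref{lem:fsubharmonicc2} can be illustrated numerically. In the following Python sketch (grid, step size and test functions are our own choices) the second order jet of a function of one variable is approximated by central differences, and membership in $\mathcal P$ is just non-negativity of the Hessian part:

```python
import numpy as np

def in_P(jet):
    # membership of a one-dimensional jet (r, p, A) in the convexity subequation P:
    # P is independent of (r, p), so the condition is simply A >= 0
    r, p, A = jet
    return A >= 0

def jet(f, x, h=1e-4):
    # second order jet of a smooth function of one variable, by central differences
    r = f(x)
    p = (f(x + h) - f(x - h)) / (2 * h)
    A = (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2
    return (r, p, A)

xs = np.linspace(-1, 1, 201)
# x -> x^4 is convex, so its jet lies in P_x at every point (Lemma fsubharmonicc2)
assert all(in_P(jet(lambda t: t ** 4, x)) for x in xs)
# x -> x^3 is not convex: its jet leaves P wherever x < 0
assert not all(in_P(jet(lambda t: t ** 3, x)) for x in xs)
```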
\begin{lemma}[Viscosity definition of $F$-subharmonicity]\label{lem:viscositydefinition} An upper-semicontinuous $f:X\to \mathbb R\cup \{-\infty\}$ is in $F(X)$ if and only if for all $x_0\in X$ and test-functions $\phi$ touching $f$ from above at $x_0$ it holds that $J^2_{x_0}(\phi)\in F_{x_0}.$ \end{lemma} \begin{proof} See \cite[Lemma 2.4]{HL_Dirichletdualitymanifolds}. \end{proof} It takes some work to understand how $F$-subharmonicity interacts with linearity in the space of functions. However, when $F$ is constant coefficient and convex the following is true: \begin{proposition}[Convex combinations of $F$-subharmonic functions]\label{prop:convexcombintation} Let $F$ be a constant coefficient convex primitive subequation. Then any convex combination of $F$-subharmonic functions is again $F$-sub\-harmonic. \end{proposition} \begin{proof} This is implied by \cite[Theorem 5.1]{HL_AE} (apply the cited theorem to $F_x:=\lambda H_x$ and $G_x:= (1-\lambda)H_x$ for a given $\lambda\in [0,1]$). \end{proof} \section{Associativity of products}\label{sec:associativityproductsubequations} We prove Proposition \ref{prop:associativityofproducts}, which states that if $X_i\subset \mathbb R^{n_i}$ are open and $F_i\subset J^2(X_i)$ for $i=1,2,3$ then $$ (F_1\#F_2)\#F_3 = F_1\#(F_2\#F_3).$$ Let $x,y,z$ be coordinates on $\mathbb R^{n_1},\mathbb R^{n_2},\mathbb R^{n_3}$ respectively. We will consider certain linear mappings \begin{align*} \Gamma&:\mathbb R^{n_1}\to \mathbb R^{n_2+n_3}\\ \Phi&:\mathbb R^{n_1+n_2}\to \mathbb R^{n_3}\\ \Psi&:\mathbb R^{n_1} \to \mathbb R^{n_2}\\ \Upsilon&:\mathbb R^{n_2}\to \mathbb R^{n_3} \end{align*} and write $$ \Phi(x,y) = \Phi_1(x) + \Phi_2(y)$$ where $\Phi_i:\mathbb R^{n_i} \to \mathbb R^{n_3}$ is linear.
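The factorisation of the graph maps that drives the argument below, namely $\iota_\Gamma = \iota_\Phi\circ\iota_\Psi$ when $\Gamma = (\Psi, \Phi_1 + \Phi_2\circ\Psi)$ where $\iota_\Lambda(x):=(x,\Lambda(x))$ (this is Lemma \ref{lem:factoriota} below), can be checked on random matrices. A Python sketch, with arbitrary dimensions and seed:

```python
import numpy as np

rng = np.random.default_rng(2)
n1, n2, n3 = 2, 3, 4

Psi = rng.standard_normal((n2, n1))    # Psi : R^{n1} -> R^{n2}
Phi1 = rng.standard_normal((n3, n1))   # Phi_1 : R^{n1} -> R^{n3}
Phi2 = rng.standard_normal((n3, n2))   # Phi_2 : R^{n2} -> R^{n3}

def iota_Psi(x):
    # iota_Psi(x) = (x, Psi(x))
    return np.concatenate([x, Psi @ x])

def iota_Phi(xy):
    # iota_Phi(x, y) = (x, y, Phi_1(x) + Phi_2(y))
    x, y = xy[:n1], xy[n1:]
    return np.concatenate([xy, Phi1 @ x + Phi2 @ y])

def iota_Gamma(x):
    # Gamma = (Psi, Phi_1 + Phi_2 . Psi)
    return np.concatenate([x, Psi @ x, (Phi1 + Phi2 @ Psi) @ x])

x = rng.standard_normal(n1)
assert np.allclose(iota_Gamma(x), iota_Phi(iota_Psi(x)))
```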
Recall that $\iota_{\Gamma}:\mathbb R^{n_1} \to \mathbb R^{n_1+n_2+n_3}$ is $\iota_{\Gamma}(x) = (x,\Gamma(x))$ and similarly for $\iota_\Phi:\mathbb R^{n_1+n_2}\to \mathbb R^{n_1+n_2+n_3}$ and $\iota_{\Psi}:\mathbb R^{n_1}\to \mathbb R^{n_1+n_2}$. \begin{lemma}\label{lem:factoriota} Suppose \begin{equation}\label{eq:GammaintermsofPhiPsi}\Gamma = (\Psi, \Phi_1 + \Phi_2\circ\Psi). \end{equation} Then $$\iota_{\Gamma} = \iota_{\Phi}\circ\iota_{\Psi}$$ \end{lemma} \begin{proof} \begin{align*} \iota_{\Phi}(\iota_{\Psi}(x)) &= \iota_{\Phi}(x,\Psi(x)) = (x,\Psi(x),\Phi(x,\Psi(x))) \\&= (x,\Psi(x),\Phi_1(x) + \Phi_2\circ\Psi(x)) = \iota_{\Gamma}(x).\end{align*} \end{proof} Now set $$\begin{array}{ll} j_2: \mathbb R^{n_2} \to \mathbb R^{n_1+n_2} &\text{ } j_2(y) = (0,y)\\ j_3: \mathbb R^{n_3} \to \mathbb R^{n_1+n_2+n_3} &\text{ } j_3(z) = (0,0,z)\\ j_{23}:\mathbb R^{n_2+ n_3} \to \mathbb R^{n_1+n_2+n_3} &\text{ } j_{23}(y,z) = (0,y,z)\\ k:\mathbb R^{n_3} \to \mathbb R^{n_2+n_3} &\text{ } k(z) = (0,z). \end{array}$$ Fix $(x,y,z)\in X_1\times X_2\times X_3$. By definition of the product subequation we know $\alpha \in (F_1\#(F_2\#F_3))_{(x,y,z)}$ if and only if \begin{equation}\label{eq:associative1} \forall \Gamma \text{ we have }\iota_\Gamma^*\alpha \in (F_1)_{x} \text{ and } j_{23}^*\alpha \in (F_2\#F_3)_{(y,z)}. \end{equation} Observe for every $\Gamma$ there is a pair $(\Phi,\Psi)$ such that \eqref{eq:GammaintermsofPhiPsi} holds. Thus by Lemma \ref{lem:factoriota}, condition \eqref{eq:associative1} is equivalent to \begin{equation}\label{eq:associative2} \forall \Psi,\Phi \text{ we have }\iota^*_\Psi \iota^*_\Phi \alpha \in (F_1)_{x} \text{ and } j_{23}^*\alpha \in (F_2\#F_3)_{(y,z)}. 
\end{equation} Using the definition of $F_2\#F_3$, condition \eqref{eq:associative2} is in turn equivalent to \begin{equation}\label{eq:associative3} \forall \Psi,\Phi,\Upsilon \text{ we have }\iota^*_\Psi \iota^*_\Phi \alpha \in (F_1)_{x} \text{ and } \iota_{\Upsilon}^* j_{23}^*\alpha \in (F_2)_{y} \text{ and }k^*j_{23}^*\alpha \in (F_3)_{z}. \end{equation} Now $j_{23}\circ k=j_3$, and a simple check yields $j_{23}\circ \iota_{\Phi_2} = \iota_{\Phi}\circ j_2$. Thus \eqref{eq:associative3} is equivalent to \begin{equation}\label{eq:associative4} \forall \Psi,\Phi \text{ we have }\iota^*_\Psi \iota^*_\Phi \alpha \in (F_1)_{x} \text{ and } j_{2}^*\iota_{\Phi}^*\alpha \in (F_2)_{y} \text{ and }j_3^*\alpha \in (F_3)_{z}. \end{equation} So from the definition of $F_1\#F_2$, condition \eqref{eq:associative4} is equivalent to \begin{equation}\label{eq:associative5} \forall \Phi \text{ we have }\iota^*_\Phi \alpha \in (F_1\#F_2)_{x} \text{ and }j_3^*\alpha \in (F_3)_{z} \end{equation} which, by definition, is equivalent to $\alpha\in ((F_1\#F_2)\#F_3)_{(x,y,z)}$. \section{Products of gradient-independent Subequations}\label{appendixB} Recall we say that a subequation $F\subset J^2(X)$ has Property (P\textsuperscript{++}) if the following holds. For all $x\in X$ and all $\epsilon>0$ there exists a $\delta>0$ such that \begin{equation}(x,r,p,A)\in F_x \Rightarrow (x',r-\epsilon,p, A+\epsilon \operatorname{Id})\in F_{x'} \text{ for all } \|x'-x\|<\delta\ \tag{P\textsuperscript{++}}\label{eq:propertyH} \end{equation} or, said another way, \begin{equation}F_x + (x'-x,-\epsilon,0, \epsilon \operatorname{Id}) \subset F_{x'} \text{ for all } \|x'-x\|<\delta. \tag{P\textsuperscript{++}}\label{eq:propertyH:repeat} \end{equation} \begin{lemma}\label{lem:inclusioninterior2} Assume that $F$ and $G$ have property \eqref{eq:propertyH} and are independent of the gradient part.
Then $H: = F\#G$ is a subequation. \end{lemma} \begin{proof} We have already seen in Lemma \ref{lem:productsofsubequations} that $H$ is closed, and satisfies the Positivity and Negativity properties \eqref{eq:positivity} and \eqref{eq:negativity}. It remains to prove the Topological property \eqref{eq:topological}, which we break up into a number of pieces. Since $F,G$ are independent of the gradient part, so is $H$, and thus the only non-trivial part of the topological property is to show \cite[Section 4.8]{HL_Dirichletdualitymanifolds} $$ \operatorname{Int}(H_{(x,y)}) =(\operatorname{Int} H)_{(x,y)}.$$ The fact that $(\operatorname{Int} H)_{(x,y)} \subset \operatorname{Int}(H_{(x,y)})$ is obvious, so the task is to prove the other inclusion. Let $$\alpha \in \operatorname{Int}( H_{(x,y)}),$$ so there exists a $\delta_1>0$ such that \begin{equation} \| \hat{\alpha} - \alpha \| <\delta _1 \text{ and } \hat{\alpha} \in J^2(X\times Y)|_{(x,y)} \Rightarrow \hat{\alpha}\in H_{(x,y)}.\label{eq:int1}\end{equation} By hypothesis there is a $\delta_2>0$ such that \begin{equation}\label{eq:intopen1} F_x +(x'-x, -\frac{\delta_1}{4}, 0,\frac{\delta_1}{4} \operatorname{Id}) \subset F_{x'} \text{ for } \|x-x'\|<\delta_2\end{equation} \begin{equation}\label{eq:intopen2} G_y +(y'-y, -\frac{\delta_1}{4}, 0,\frac{\delta_1}{4} \operatorname{Id}) \subset G_{y'} \text{ for } \|y-y'\|<\delta_2.\end{equation} Set $\delta = \min\{ \delta_1/2,\delta_2/2\}$ and pick any $\alpha'\in J^2(X\times Y)$ with $$ \|\alpha'-\alpha\|<\delta.$$ We will show that $\alpha' \in H$. Denote the space coordinate of $\alpha'$ by $(x',y')$, so $\alpha'\in J^2(X\times Y)|_{(x',y')}$. Thus we certainly have $\|x'-x\|<\delta<\delta_2$ and $\|y-y'\|<\delta_2$.
Define $$\hat{\alpha} : = \alpha' + ((x-x',y-y'),\frac{\delta_1}{2},0, -\frac{\delta_1}{2} \operatorname{Id}_{n+m}).$$ Then $\hat{\alpha} \in J^2(X\times Y)_{(x,y)}$ and $$ \|\hat{\alpha} - \alpha\| \le \| \alpha' - \alpha \| + \| \hat{\alpha} - \alpha'\| < \delta_1.$$ Thus \eqref{eq:int1} applies, so $\hat{\alpha}\in H_{(x,y)}$, which means $$ j^* \hat{\alpha} \in G_{y} \text{ and } i_U^* \hat{\alpha}\in F_{x} \text{ for all } U.$$ Now, using \eqref{eq:intopen2}, $$ j^*\alpha' = j^*\hat{\alpha} + (y'-y,-\frac{\delta_1}{2},0, \frac{\delta_1}{2} \operatorname{Id}_{m}) \in G_y + (y'-y,-\frac{\delta_1}{2},0, \frac{\delta_1}{2} \operatorname{Id}_{m}) \subset G_{y'}.$$ Similarly, using the Positivity property of $F$ and \eqref{eq:intopen1}, \begin{align*} i_U^*\alpha' &= i_U^*\hat{\alpha} + (x'-x,-\frac{\delta_1}{2},0,\frac{\delta_1}{2} \operatorname{Id}_{n} + \frac{\delta_1}{2} U^tU) \\ &\in F_x + (x'-x,-\frac{\delta_1}{2},0,\frac{\delta_1}{2} \operatorname{Id}_{n}) + (0,0,0,\frac{\delta_1}{2} U^tU)\\ &\subset F_{x'}. \end{align*} Thus $\alpha'\in H_{(x',y')}$. As this holds for all such $\alpha'$ we conclude $\alpha \in \operatorname{Int}(H)$, completing the proof. \end{proof} \begin{comment} We now state a theorem of Harvey-Lawson that characterizes $F$-subharmonic semiconvex functions in terms of second order jets almost everywhere. \begin{definition}[Semiconvexity] Let $\kappa\ge 0$. We say $f:X\to \mathbb R$ is \emph{$\kappa$-semiconvex} if $f(x) + \frac{\kappa}{2} \|x\|^2$ is convex. If $f$ is $\kappa$-semiconvex for some $\kappa \ge 0$ then we say simply $f$ is \emph{semiconvex}. \end{definition} \begin{theorem}[Alexandrov's Theorem]\label{thm:alexandrov} Suppose $f:X\to \mathbb R$ is semiconvex. Then $f$ is twice differentiable almost everywhere. \end{theorem} \begin{proof} This originates in \cite{Alexandrov} and for an exposition the reader is referred to \cite{Howard}.
\end{proof} \begin{theorem}[The Almost Everywhere Theorem]\label{thm:ae} Assume that $F\subset J^2(X)$ is a primitive subequation and let $f:X\to \mathbb R$ be locally semiconvex. Then $$ f\in F(X) \Leftrightarrow J^2_{x}(f)\in F_x \text{ for almost all } x\in X.$$ \end{theorem} \begin{proof} See \cite[Theorem 4.1]{HL_AE}. \end{proof} \end{comment} \subsection{The complex case} Let $X\subset \mathbb R^{2n}\simeq \mathbb C^n$ be open. If $f:X\to \mathbb R$ is twice differentiable at a point $z\in X $ its \emph{complex Hessian} is $$ \operatorname{Hess}^{\mathbb C}_z(f) = \frac{1}{2} ( \operatorname{Hess}_z(f) - \mathbb J \operatorname{Hess}_z(f) \mathbb J) \in \operatorname{Herm}(\mathbb C^n).$$ When $f$ is sufficiently smooth we have $$ ( \operatorname{Hess}^{\mathbb C}_z(f))_{jk} = 2 \frac{\partial^2 f}{\partial z_j\partial\overline{z}_k}|_z$$ where, as usual, $$ \frac{\partial}{\partial z_j} = \frac{1}{2} \left( \frac{\partial}{\partial x_j} - i \frac{\partial}{\partial y_j}\right) \text{ for } z_j = x_j + i y_j.$$ In terms of the gradient, under the identification $\mathbb R^{2n}\simeq\mathbb C^n$ we have $$\nabla f|_z = \left(\begin{array}{c} \frac{\partial f}{\partial x}|_z \\ \frac{\partial f}{\partial {y}}|_z \end{array}\right) = 2\frac{\partial f}{\partial \overline{z}}|_z.$$ \begin{definition}[Complex $2$-jet] The complex $2$-jet of $f$ at $z\in X$ is $$ J^{2,\mathbb C}_{z} (f) := ( f(z), 2\frac{\partial f}{\partial \overline{z}}|_z,\operatorname{Hess}^{\mathbb C}_z(f)) \in J^{2,\mathbb C}_{n} = \mathbb R\times \mathbb C^n\times \operatorname{Herm}_n.$$ \end{definition} So if $F\subset J^2(X)$ is complex then $$ J^2_{z} (f)\in F_z \Longleftrightarrow J^{2,\mathbb C}_{z} (f)\in F_z.$$ \let\oldaddcontentsline\addcontentsline \renewcommand{\addcontentsline}[3]{} {} \let\addcontentsline\oldaddcontentsline \small{ \noindent {\sc Julius Ross, Mathematics Statistics and Computer Science, University of Illinois at Chicago, Chicago IL, USA\\ [email protected]}
\noindent{\sc David Witt Nystr\"om, Department of Mathematical Sciences, Chalmers University of Technology and the University of Gothenburg, Sweden \\ [email protected], [email protected]} } \end{document}
\begin{document} \title{Stochastic Bandit Models for Delayed Conversions} \begin{abstract} Online advertising and product recommendation are important domains of applications for multi-armed bandit methods. In these fields, the reward that is immediately available is most often only a proxy for the actual outcome of interest, which we refer to as a \emph{conversion}. For instance, in web advertising, clicks can be observed within a few seconds after an ad display but the corresponding sale --if any-- will take hours, if not days to happen. This paper proposes and investigates a new stochastic multi-armed bandit model in the framework proposed by Chapelle (2014) --based on empirical studies in the field of web advertising-- in which each action may trigger a future reward that will then happen with a stochastic delay. We assume that the probability of conversion associated with each action is unknown while the distribution of the conversion delay is known, distinguishing between the (idealized) case where the conversion events may be observed whatever their delay and the more realistic setting in which late conversions are censored. We provide performance lower bounds as well as two simple but efficient algorithms based on the UCB and KLUCB frameworks. The latter algorithm, which is preferable when conversion rates are low, is based on a Poissonization argument, of independent interest in other settings where aggregation of Bernoulli observations with different success probabilities is required. \end{abstract} \section{INTRODUCTION} \label{sec:introduction} Characterizing the relationship between marketing actions and users' decisions is of prime importance in advertising, product recommendation and customer relationship management.
In online advertising a key aspect of the problem is that whereas marketing actions can be taken very fast --typically in less than a tenth of a second, if we think of an ad display--, users' buying decisions will happen at a much slower rate \cite{ji2016probabilistic,chapelle2014modeling,rosales2012post}. In the following, we refer to a user's decision of interest under the generic term of \emph{conversion}. Chapelle, in \cite{chapelle2014modeling}, while analyzing data from the real-time bidding company Criteo, observed that, on average, only 35\% of the conversions occurred within the first hour. Furthermore, about 13\% of the conversions could be attributed to ad displays that were more than two weeks old. Another interesting observation from this work is the fact that the delay distribution could be reasonably well fitted by an exponential distribution, particularly when conditioning on context variables that are available to the advertiser. The present work addresses the problem of sequentially learning to select relevant items in the context where the feedback happens with long delays. By long we mean in particular that the feedback associated with a fraction of the actions taken by the learner will be practically unobserved because it will happen with an excessive delay. In the example cited above, if we were to run an online algorithm during two weeks, at least 13\% of the actions would not receive an observable feedback because of delays. A related situation occurs if the online algorithm is run during, say, one month, but its memory is limited to a sliding window of two weeks. In Section~\ref{sec:model} below we introduce models suitable for addressing these two related situations in the framework of multi-armed bandits. Delayed feedback is a topic that has been considered before in the reinforcement learning literature and we defer the precise comparison between existing approaches and the proposed framework to Section~\ref{sec:related}.
In a nutshell however, the distinctive feature of our approach is to consider potentially infinite stochastic delays, resulting in some feedback being \emph{censored} (i.e.\ not observable). Existing works on delayed bandits focus on cases where the feedback is observed after some delay, typically assumed to be finite. In contrast, we assume that delays are random with a distribution that may have an unbounded support -- although we require that it has finite expectation. As a result, some conversion events cannot be observed within any finite horizon and the proposed learning algorithm must take this uncertainty into account. In Section~\ref{sec:model}, we propose discrete-time stochastic multi-armed bandit models to address the problem of long delays with possibly censored feedback. We distinguish two situations that correspond to the cases mentioned informally above: In the \emph{uncensored} model, conversions can be assumed to be eventually observed after some possibly arbitrarily long delay; In the \emph{censored} model, it is assumed that the conversions triggered by an action can no longer be observed after a finite window of $m$ time steps. Assuming that the delay distribution is known, we prove problem-dependent lower bounds on the regret of any uniformly efficient bandit algorithm for the censored and uncensored models in Section~\ref{sec:lower-bound}. In Section~\ref{sec:indices}, we describe efficient anytime policies relying on optimistic indices, based on the UCB \cite{auer2002finite} or KLUCB \cite{garivier2011klucb} algorithms. The latter uses a Poissonization argument that can be of independent interest in other bandit models. In typical scenarios where the conversion rates are less than one percent, using the KLUCB variant will ensure much faster learning and provide near-optimal performance in the long run (see Theorem~\ref{th:klucb}).
These algorithms are analyzed in Section~\ref{sec:algorithms}, showing that they reach close-to-optimal asymptotic performance. In Section~\ref{sec:experiments} we discuss the implementation of these methods, showing that it is further simplified in the case of geometrically distributed delays, and we illustrate their performance on simulated data. \section{A STOCHASTIC MODEL FOR THE DELAYS} \label{sec:model} We now describe our bandit setting for delayed conversion events, inspired by \cite{chapelle2014modeling}. We first consider the setting in which delays may be potentially unbounded and then consider the case where censoring occurs. \subsection{GENERAL BANDIT MODEL UNDER DELAYED FEEDBACK} At each round $t \in \mathds{N}^*$, the learner chooses an arm $A_t \in \{1,\dots,K\}$. This action simultaneously triggers two independent random variables: \begin{itemize} \item $C_t \in \{0,1\}$, the \emph{conversion indicator}, which is equal to 1 only if the action $A_t$ will lead to a conversion; \item $D_t \in \mathds{N}$, \emph{the delay}, indicating the number of time steps before the conversion, if any, is disclosed to the learner. \end{itemize} At each round $t$, the agent then receives an integer-valued reward $Y_t$ which corresponds to the number of conversions observed at time $t$: \[ Y_t = \sum_{s=1}^t C_s \mathds{1}\{D_s = t-s \} . \] In the following, we will use the shorthand notation $X_{s,t} = C_s \mathds{1}\{D_s \leq t-s \}$, for $s \leq t$, to denote the possible contribution of the action taken at time $s$ to the conversions observed up to a later time $t$. We emphasize that, even if the actual reward of the learner is the sum of the conversions, we assume that the agent also observes at time $t$ all the individual contributions $(X_{s,t})_{1 \leq s \leq t}$ triggered by actions taken up to time $t$.
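As a concrete illustration, the feedback mechanism above can be simulated in a few lines. This is a minimal sketch: the function names are ours, and the single-arm, deterministic-delay example at the end is purely illustrative.

```python
import random

def simulate_delayed_bandit(policy, theta, delay_sampler, horizon, seed=0):
    """Simulate the delayed-conversion mechanism: the action A_t draws a
    conversion indicator C_t ~ Bernoulli(theta[A_t]) and a delay D_t, and
    the reward Y_t counts the conversions disclosed exactly at time t,
    i.e. Y_t = sum_s C_s * 1{D_s = t - s}."""
    rng = random.Random(seed)
    conversions, delays, actions, rewards = [], [], [], []
    for t in range(horizon):
        a = policy(t)
        actions.append(a)
        conversions.append(1 if rng.random() < theta[a] else 0)
        delays.append(delay_sampler(rng))
        y = sum(1 for s, (c, d) in enumerate(zip(conversions, delays))
                if c == 1 and d == t - s)
        rewards.append(y)
    return actions, rewards

# illustrative run: one arm that always converts, deterministic delay of 2;
# every conversion triggered at time s is disclosed at time s + 2
acts, ys = simulate_delayed_bandit(policy=lambda t: 0, theta=[1.0],
                                   delay_sampler=lambda rng: 2, horizon=5)
# ys == [0, 0, 1, 1, 1]
```

Replacing `delay_sampler` by a draw from any distribution on $\mathds{N}$ recovers the general model; with an unbounded distribution, some conversions fall beyond the horizon and are never observed.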
The above mechanism implies that if $C_t = 1$, the learner observes $D_t$ at time $t+D_t$, whereas if $C_t = 0$, the delay is not directly observable. In particular, if at time $t$ we have $X_{s,u} = 0$ for all $s \leq u \leq t$, it is impossible to decide whether $C_s = 0$ or whether $C_s = 1$ but $D_s > t-s$. Formally, the history of the algorithm is the $\sigma$-field generated by $\mathcal{H}_t:=(X_{s,u})_{1 \leq u \leq t, 1 \leq s \leq u}$. We consider the stochastic model under the following basic assumptions: \begin{align*} & C_t | \mathcal{H}_{t-1} \sim \operatorname{Bernoulli}(\theta_{A_t}) ,\\ & D_t | \mathcal{H}_{t-1} \sim \text{distribution with CDF $\tau$}, \end{align*} and $C_t,D_t$ are conditionally independent given $\mathcal{H}_{t-1}$. \begin{lemma} \label{lemma:regret} Denote by $a^\star \in \{1,\dots,K\}$ an index such that $\theta_{a^\star} \geq \theta_k$ for all $k\in \{1,\dots,K\}$, define by $r(T) = \sum_{t=1}^T Y_t$ the cumulative reward of the learner and by $r^\star(T)$ the cumulative reward obtained by an oracle playing $A_t = a^\star$ at each round. The expected regret of the learner is given by \begin{equation} \label{eq:regret} L(T) = \mathds{E}\left[r^\star(T)-r(T)\right] = \sum_{s=1}^T \mathds{E}\left[\theta_{a^\star}-\theta_{A_s}\right] \tau_{T-s} \end{equation} where by definition $\tau_{T-s} = \mathds{P}(D_s \leq T-s)$. Denoting by $N_k(T) := \sum_{s=1}^{T-1} \mathds{1}\{A_s=k\}$ the number of pulls of arm $k$, it holds that $$ L (T) \leq \sum_{k=1}^K (\theta_{a^\star}-\theta_k)\, \mathds{E}[N_k(T)] $$ and, if $\mu = \mathds{E}[D_s]<\infty$, \begin{equation} \label{eq:regret:relationship} \sum_{k=1}^K (\theta_{a^\star}-\theta_k)\, \mathds{E}[N_k(T)] - L (T) \leq \mu \sum_{k=1}^K (\theta_{a^\star}- \theta_k).
\end{equation} \end{lemma} \begin{proof} The cumulative reward at time $T$ satisfies \begin{multline*} r(T) = \sum_{t=1}^T Y_t = \sum_{t=1}^T \sum_{s=1}^t C_s \boldsymbol{1}\{D_s = t-s \} \nonumber \\ = \sum_{s=1}^T C_s \boldsymbol{1}\{D_s \leq T-s \} = \sum_{s=1}^T X_{s,T}, \end{multline*} where the index $T$ stands for the time at which all past conversions are counted while $s$ is the time at which the action was taken. Hence Eq.~\eqref{eq:regret} is obtained from \begin{align*} \mathds{E} \left[ r(T) \right] &= \mathds{E} \left[ \sum_{t=1}^T Y_t \right]\\ & = \sum_{s=1}^T \mathds{E} \left[ X_{s,T}\right] =\sum_{s=1}^{T} \mathds{E}\left[\theta_{A_s}\right] \tau_{T-s}. \end{align*} The fact that $\tau_{T-s} \leq 1$ implies that $L(T)$ is upper bounded by $\sum_{k=1}^K (\theta_{a^\star}-\theta_k)\, \mathds{E}[N_k(T)]$, which corresponds to the usual regret formula in the bandit model with immediate feedback. To upper bound the difference, note that \begin{align*} \sum_{k=1}^K & (\theta_{a^\star}-\theta_k)\, \mathds{E}[N_k(T)] - L(T) \\ & = \sum_{k=1}^K (\theta_{a^\star}-\theta_k)\, \mathds{E}\left[\sum_{s=1}^T \boldsymbol{1}\{A_s=k\} (1-\tau_{T-s})\right] \\ & \leq \sum_{k=1}^K (\theta_{a^\star}-\theta_k) \sum_{n=0}^{\infty} (1-\tau_{n}) = \mu \sum_{k=1}^K (\theta_{a^\star}-\theta_k). \end{align*} \end{proof} \subsection{THRESHOLDED DELAYS: CENSORED OBSERVATIONS} The model with $m$-thresholded delays takes into account the fact that a conversion can only be observed within $m$ steps after the action occurred. This changes the expression of the instantaneous reward $Y_t$, which becomes \[ Y_t = \sum_{s=t-m}^t C_s \boldsymbol{1}\{D_s = t-s \} \] and the future contributions of each action are capped to the next $m$ time steps: $(X_{s,t})_{t-m \leq s \leq t}$.
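To make the censored reward concrete, the following Monte Carlo sketch checks numerically that, for a single arm, the expected cumulative reward matches $\sum_{s=1}^T \theta\,\tau_{\min(m,\,T-s)}$. The geometric delay distribution and all parameter values are illustrative assumptions, not part of the model.

```python
import random

def censored_cum_reward(theta, q, m, T, rng):
    """One run of r(T) = sum_{s<=T} C_s * 1{D_s <= min(m, T - s)} for a
    single arm with conversion rate `theta` and geometric delays with
    P(D = d) = (1 - q) * q**d, so that tau_d = P(D <= d) = 1 - q**(d + 1)."""
    total = 0
    for s in range(1, T + 1):
        c = rng.random() < theta          # conversion indicator C_s
        d = 0                             # sample the geometric delay D_s
        while rng.random() < q:
            d += 1
        if c and d <= min(m, T - s):      # observed within the m-window
            total += 1
    return total

rng = random.Random(1)
theta, q, m, T, runs = 0.3, 0.5, 3, 20, 20000
mc = sum(censored_cum_reward(theta, q, m, T, rng) for _ in range(runs)) / runs
expected = sum(theta * (1 - q ** (min(m, T - s) + 1)) for s in range(1, T + 1))
# mc and expected agree up to Monte Carlo error
```

The only difference with the uncensored model is the cap at $m$: dropping `min(m, T - s)` in favor of `T - s` recovers the general setting.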
The history of the algorithm now only consists of $\mathcal{H}_t = \sigma((X_{s,u})_{1 \leq u \leq t, u-m \leq s \leq u})$, and the regret expression of Lemma~\ref{lemma:regret} can be split into two terms corresponding to the old pulls and the $m$ most recent pulls: \begin{equation} \label{eq:regret_th} \sum_{s=1}^{T-m} (\theta_{a^\star}-\mathds{E}[\theta_{A_s}]) \tau_{m} + \sum_{s=T-m+1}^T (\theta_{a^\star}-\mathds{E}[\theta_{A_s}]) \tau_{T-s} . \end{equation} In the remainder, for $(p,q)\in [0,1]^2$, we will denote by $d(p,q)=p\log(p/q)+(1-p)\log((1-p)/(1-q))$ the binary relative entropy between $p$ and $q$, that is, the Kullback--Leibler divergence between Bernoulli distributions with parameters $p$ and $q$. Moreover, without loss of generality, we will assume that $a^\star=1$ is the unique optimal arm of the considered bandit problems and denote by $\theta^* = \theta_1$ the optimal conversion rate. \section{RELATED WORK ON DELAYED BANDITS} \label{sec:related} Delayed feedback has recently received increasing attention in the bandit and online learning literature due to its various applications, ranging from online advertising \cite{chapelle2014modeling} to distributed optimization \cite{jun2016top,cesa2016delay}. Delayed feedback has also been extensively considered in the context of Markov Decision Processes (MDPs) \cite{katsikopoulos2003markov,walsh2009learning}. However, the present work focuses on unbounded delays, and the models considered therein would result in an infinite-state MDP for which even the planning problem would be challenging.
In contrast, the lack of memory in bandits makes it possible to propose relatively simple algorithms even in the case where the delays may be very long. For a review of previous work on online learning in the stochastic and non-stochastic settings, see \cite{joulani2013onlinelong} and references therein. The latter work tackles the more general problem of partial monitoring under delayed feedback, with Sections 3.2 and 4 of the paper focusing on the stochastic delayed bandit problem. A key insight from this work is that, in minimax analysis, delay increases the regret multiplicatively in adversarial problems, and additively in stochastic problems. The algorithm of \cite{joulani2013online} relies on a queuing principle termed \textsc{Q-PMD} that uses an optimistic bandit algorithm referred to as ``BASE'' to perform exploration; in~\cite{joulani2013online} UCB is chosen as the BASE strategy, while the follow-up work \cite{joulani2013onlinelong} also considers the use of KLUCB. The idea is to store all the observations that arrive at the same time $t$ in a FIFO buffer and to feed BASE with the information related to an arm $k$ only when this arm is about to be chosen. This means that the number of draws of an arm, as well as the cumulative sum of the subsequent rewards, are only updated when the observation reaches the learner; meanwhile, the algorithm acts as if nothing happened. However, in the setting considered in the present work, updating counts only after the observations are eventually received cannot lead to a practical algorithm: whenever a click is received, the associated reward is 1 by definition; otherwise, the ambiguity between non-received and negative feedback remains. Thus, the empirical average of the rewards for each arm computed by the updating mechanism of \textsc{Q-PMD} sticks to 1 and does not allow the arms to be compared.
As a consequence, the \textsc{Q-PMD} policy cannot be used for the models described in Section~\ref{sec:model}, except in the specific case of the uncensored model with bounded delays: then there is no censoring anymore, as one only needs to wait long enough (longer than the maximal possible delay) to reveal with certainty the exact value of the feedback. Also, \cite{mandel2015queue} notices that the empirical performance of this queuing-based heuristic is not fully satisfactory because of the lack of variability in the decisions made by the policy while waiting for feedback. Their suggestion is to use randomized policies instead of deterministic ones in order to improve the overall exploration. Note that even though we stick to deterministic, history-based policies, this problem is taken care of by our algorithms thanks to the use of the CDF of the delays, which allows us to continuously correct the confidence intervals after a pull has been made. Another possible way to handle bounded delays would be to plan the sequence of pulls ahead by batches, following the principles of Explore Then Commit, see \cite{PerchetAl2016}. With finite delays, a new unnecessary batch of exploration pulls might be started before the algorithm enters the exploitation (or commitment) phase; the extra cost would therefore be the maximal observable delay. Although these techniques are randomized rather than deterministic, they have the same drawback as the other ones: the policy is not updated while waiting for feedback and, as a consequence, cannot handle arbitrarily large delays. An obvious limitation of our work is that we assume that the delay distribution is known. We believe that this is a realistic assumption, however, as the delay distribution can be identified from historical data, as reported in~\cite{chapelle2014modeling}.
In addition, as we assume that the same delay distribution is shared by all actions, it is natural to expect that estimating the delay distribution online can be done at no additional cost in terms of performance. Perhaps more interestingly, it is possible to extend the model so as to include cases where the context of each action is available to the learner and determines the distribution of the corresponding delay, using for instance the generalized linear modeling of~\cite{chapelle2014modeling}. In particular, the same algorithms can be used in this case, by considering the proper CDFs corresponding to the different instances. Of course, the analysis described below would need to be extended to cover this contextual case as well. \section{LOWER BOUND ON THE REGRET} \label{sec:lower-bound} The purpose of this section is to provide lower bounds on the regret of \emph{uniformly efficient} algorithms in the two settings of the stochastic delayed bandit problem that we consider. This class of policies, introduced by \cite{lai1985asymptotically}, refers to algorithms such that, for any bandit model $\nu$ and any $\alpha \in (0,1)$, $\mathds{E}[R(T)]/T^\alpha \to 0$ as $T\to \infty$. Our results rely on change-of-measure arguments that are encapsulated in Lemma~1 of \cite{kaufmann2015complexity}, or more recently, and more generally, in Inequality (F) of \cite{garivier2017explore}. Those results can actually be reformulated as a lower bound on the expected log-likelihood ratio of the observations under the originally considered bandit model $\theta$ and an alternative one $\theta'$: \[ \mathds{E}[\ell_T]=\mathds{E}_\theta\left[ \log \frac{p_\theta((X_{s,t})_{1 \leq t \leq T, 1 \leq s \leq t})}{p_{\theta'}((X_{s,t})_{1 \leq t \leq T, 1 \leq s \leq t})}\right] . \] The following inequality is obtained using proof techniques from Appendix B of \cite{kaufmann2015complexity}, which are detailed in Appendix~\ref{ap:lower_bound}.
\begin{equation} \label{eq:llr_bound} \liminf_{T\to \infty} \frac{\mathds{E}[\ell_T]}{\log(T)} \geq 1 . \end{equation} To obtain explicit regret lower bounds for the models introduced in Section~\ref{sec:model}, we compute below the expected log-likelihood ratio corresponding to these two models. \begin{lemma} \label{lem:llr-uncens} In the censored delayed feedback setting, the expected log-likelihood ratio is given by \begin{align*} \mathds{E}_\theta\left[ \ell_T\right] =& \sum_{s=1}^{T-m} d(\theta_{A_s}\tau_{m}, \theta_{A_s}'\tau_{m}) \\ & + \sum_{s=T-m+1}^T d(\theta_{A_s}\tau_{T-s}, \theta_{A_s}'\tau_{T-s}). \end{align*} In the uncensored setting, the sum is not split and we have \[ \mathds{E}_\theta\left[ \ell_T\right] = \sum_{s=1}^T d(\theta_{A_s}\tau_{T-s}, \theta_{A_s}'\tau_{T-s}). \] \end{lemma} \begin{proof} Given $\mathcal{H}_{s-1}$, $(X_{s,s}, \dots, X_{s,T})$ can be equal to \begin{itemize} \item $(0, \dots, 0)$, with proba.\ $(1-\theta_{A_s}) + \theta_{A_s} (1-\tau_{T-s})$, \item $(0, \dots, 0, 1, 1, \dots, 1)$ with proba.\ $\theta_{A_s} \delta_{u-s}$, for $u=s, \dots, T$ ($u$ denotes the position of the first 1 in the vector), where $\delta_k = \mathds{P}(D_s = k)$. \end{itemize} Hence, \begin{align*} & \mathds{E}_\theta\left[ \left. \log \frac{p_\theta(X_{s,s}, \dots, X_{s,T})}{p_{\theta'}(X_{s,s}, \dots, X_{s,T})} \right| \mathcal{H}_{s-1} \right] \\ & = \log \frac{1-\theta_{A_s}\tau_{T-s}}{1-\theta_{A_s}'\tau_{T-s}} (1-\theta_{A_s}\tau_{T-s}) \\ & \qquad \qquad + \sum_{u=s}^T \log \frac{\theta_{A_s} \delta_{u-s}}{\theta_{A_s}' \delta_{u-s}} \theta_{A_s} \delta_{u-s} \\ & = \log \frac{1-\theta_{A_s}\tau_{T-s}}{1-\theta_{A_s}'\tau_{T-s}} (1-\theta_{A_s}\tau_{T-s}) + \log \frac{\theta_{A_s}}{\theta_{A_s}'} \theta_{A_s}\tau_{T-s} \\ & = d(\theta_{A_s}\tau_{T-s}, \theta_{A_s}'\tau_{T-s}). \end{align*} The corresponding expression for the censored case follows from the same calculations.
\end{proof} \subsection{CENSORED SETTING} With our notation, the following theorem provides a problem-dependent lower bound on the regret. \begin{theorem} \label{th:lb-cens} The regret of any uniformly efficient algorithm is bounded from below by \[ \liminf_{T \to \infty} \frac{R(T)}{\log(T)} \geq \sum_{k\neq k^*} \frac{\tau_m(\theta^* - \theta_k)}{d(\tau_m \theta_k, \tau_m \theta^*)} . \] \end{theorem} \begin{proof} The details of the proof can be found in Appendix~\ref{ap:lower_bound}; we provide here a sketch of the main argument. The log-likelihood ratio is given by Lemma~\ref{lem:llr-uncens}: \begin{align*} \mathds{E}_\theta\left[ \ell_T\right] =& \sum_{s=1}^{T-m} d(\theta_{A_s}\tau_{m}, \theta_{A_s}'\tau_{m}) \\ & + \sum_{s=T-m+1}^T d(\theta_{A_s}\tau_{T-s}, \theta_{A_s}'\tau_{T-s}) , \end{align*} which is bounded from below by Eq.~\eqref{eq:llr_bound}. However, obtaining a lower bound on the regret requires decomposing this quantity into $(K-1)$ terms corresponding to the suboptimal arms. For a fixed arm $k\neq 1$, we consider $\theta'=(\theta_1,\ldots,\theta_{k-1},\theta_1+\epsilon,\theta_{k+1},\ldots,\theta_K)$, for which the expected log-likelihood ratio satisfies \begin{align*} &\mathds{E}[N_k(T)]\, d(\tau_m\theta_k,\tau_m(\theta_1+\epsilon)) \\ & + \sum_{s=T-m+1}^T d(\theta_k\tau_{T-s}, (\theta_1+\epsilon)\tau_{T-s} )\geq \mathds{E}_\theta\left[ \ell_T\right]. \end{align*} Dividing by $\log(T)$, letting $T$ tend to infinity, and then letting $\epsilon \to 0$ gives the result, since the second term on the left-hand side is bounded. \end{proof} This lower bound implies that the delayed bandit problem with observation probability $\tau_m$ is as hard as the scaled bandit problem with expected rewards $(\tau_m\theta_1,\ldots,\tau_m\theta_K)$. In the long run, one cannot learn faster than the heuristic approach that discards the last $m$ observations and considers the fictitious bandit model with parameters $(\tau_m\theta_1,\ldots,\tau_m\theta_K)$.
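For intuition, the constant appearing in Theorem~\ref{th:lb-cens} can be evaluated numerically. The sketch below (helper names are ours, parameter values purely illustrative) shows that the constant grows as $\tau_m$ shrinks, i.e., the problem becomes harder when fewer conversions are observable.

```python
import math

def bernoulli_kl(p, q):
    """Binary relative entropy d(p, q) between Bernoulli(p) and Bernoulli(q)."""
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def censored_lb_constant(theta, tau_m):
    """Constant sum_{k != 1} tau_m (theta* - theta_k) / d(tau_m theta_k,
    tau_m theta*) appearing in the censored regret lower bound."""
    best = max(theta)
    return sum(tau_m * (best - th) / bernoulli_kl(tau_m * th, tau_m * best)
               for th in theta if th < best)

c_full = censored_lb_constant([0.10, 0.05, 0.03], tau_m=1.0)   # no censoring
c_half = censored_lb_constant([0.10, 0.05, 0.03], tau_m=0.5)
# c_half > c_full: a smaller observation probability tau_m makes the
# lower bound, and hence the problem, strictly harder
```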
However, on horizons of the order of $m$ time steps, we will show empirically in Section~\ref{sec:experiments} that taking the delay distribution into account allows for much faster learning. Note also that the convexity of the function $\tau \mapsto d(\tau p, \tau q)$, proved in Lemma~\ref{lem:incrkappa}, implies that the regret lower bound is a monotonically decreasing function of $\tau_m$. Hence, either smaller values of $m$ or larger values of the expected delay $\mu$ make the problem harder. \subsection{UNCENSORED SETTING} In the uncensored model, the same argument shows that the lower bound does not differ from the classical lower bound of Lai and Robbins \cite{lai1985asymptotically}. \begin{theorem} \label{th:lb-uncens} The regret of any uniformly efficient algorithm in the uncensored delays setting is bounded from below by \[ \liminf_{T \to \infty} \frac{R(T)}{\log(T)} \geq \sum_{k\neq k^*} \frac{(\theta^* - \theta_k)}{d(\theta_k, \theta^*)} . \] \end{theorem} The full proof of this result is similar to that of Theorem~\ref{th:lb-cens} and can be found in Appendix~\ref{ap:lower_bound}. \section{DELAY-CORRECTED ESTIMATORS AND CONFIDENCE INTERVALS} \label{sec:indices} In this section, for a fixed arm $k \in \{1,\dots,K\}$, we define a conditionally unbiased estimator of the conversion rate $\theta_k$. Then, based on suitable concentration results, we derive optimistic indices: a delay-corrected UCB as in~\cite{auer2002finite} as well as a delay-corrected KLUCB as in \cite{garivier2011klucb}. \subsection{PARAMETER ESTIMATOR} Define the sum of observed rewards attached to arm $k$ up to time $t$ as $$ S_k(t) = \sum_{s=1}^{t-1} \mathds{1}\{A_s=k\}\, X_{s,t} , $$ that is, the number of past pulls of arm $k$ whose conversion has been observed by time $t$. We recall that the exact number of pulls of arm $k$ up to time $t$ is $N_k(t):=\sum_{s=1}^{t-1}\mathds{1}\{A_s = k\}$.
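In code, these statistics reduce to running counts over the history. A minimal sketch (our own helper, using the conditionally unbiased reading $S_k(t) = \sum_{s<t}\mathds{1}\{A_s=k\}\,X_{s,t}$, with 1-based times and an optional censoring window $m$; the toy history at the end is purely illustrative):

```python
def pull_statistics(actions, conversions, delays, k, t, m=None):
    """S_k(t): number of pulls of arm k before time t whose conversion has
    been observed by time t, i.e. sum_{s<t} 1{A_s = k} C_s 1{D_s <= t - s}
    (window capped at m when feedback is censored);
    N_k(t): raw number of pulls of arm k before time t."""
    S = N = 0
    for s in range(1, t):                # 1-based time indices, as in the text
        if actions.get(s) != k:
            continue
        N += 1
        window = t - s if m is None else min(m, t - s)
        if conversions[s] and delays[s] <= window:
            S += 1
    return S, N

# toy history: arm 0 pulled at times 1, 2, 3; all convert, with delays 0, 5, 1
acts, convs, ds = {1: 0, 2: 0, 3: 0}, {1: 1, 2: 1, 3: 1}, {1: 0, 2: 5, 3: 1}
S, N = pull_statistics(acts, convs, ds, k=0, t=5)
# at t = 5, the pulls at s = 1 and s = 3 have been observed; s = 2 not yet
```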
However, defining an estimator of $\theta_k$ that is unbiased -- when conditioning on the selections of arms -- requires considering a delay-corrected count $\tilde{N}_k(t)$ that takes into account the probability of having eventually observed the reward associated with each previous pull of $k$. The expression of $\tilde{N}_k(t)$ depends on whether feedback is censored or not. \paragraph{Censored model.} When rewards cannot be disclosed later than $m$ rounds after the action, the available information on the pulls splits into two groups: the `oldest' pulls, censored if not observed yet, and the most recent ones. Namely, we define $\tilde{N}_k(t)$ as \[ \tilde{N}_k(t) = \sum_{s=1}^{t-m} \mathds{1}\{A_s=k\} \tau_{m} \, + \; \sum_{s=t-m+1}^{t-1} \mathds{1}\{A_s=k\} \tau_{t-s}. \] Overall, the conversion rate estimator is defined as \begin{equation} \label{eq:estimator} \hat{\theta}_k(t) = \frac{S_k(t)}{\tilde{N}_k(t)} . \end{equation} \begin{remark} In the uncensored case, defining $\tilde{N}_k(t) := \sum_{s=1}^{t-1} \mathds{1}\{A_s=k\} \tau_{t-s}$ leads to an analogous definition of $\hat{\theta}_k(t)$ as a conditionally unbiased estimator. \end{remark} \subsection{UCB INDEX} We first define a delay-corrected UCB index for bounded rewards. \paragraph{Concentration bound.} Applying the self-normalized concentration inequality of Proposition~8 in \cite{lagree2016multiple} yields the following result, which we recall here for completeness. \begin{proposition} \label{prop:controlUCB} Let $k$ be an arm in $\{1,\dots,K\}$. Then, for any $\beta >0$ and for all $t>0$, \[ \mathds{P}\left(\theta_k > \hat{\theta}_k(t) + \sqrt{\frac{N_k(t)}{\tilde{N}_k(t)}} \sqrt{\frac{\beta}{2\tilde{N}_k(t)}}\right) < \beta e \log(t) e^{-\beta}.
\] \end{proposition} \paragraph{Upper confidence bound.} Thus, a UCB index for $\hat{\theta}_k(t)$ may be defined as \[ U^{\textsc{ucb}}_k(t) = \hat{\theta}_k(t) + \sqrt{\frac{N_k(t)}{\tilde{N}_k(t)}} \sqrt{\frac{\beta_\epsilon (t)}{2\tilde{N}_k(t)}}, \] where $\beta_\epsilon(t)$ is a suitable slowly growing exploration function (see below). This confidence bonus is scaled by $\sqrt{N_k(t)/\tilde{N}_k(t)}$ when compared to the classical UCB index. This ratio gets bigger when the $\tau_d$'s are small for large delays $d$, that is, when the median delay is large: the longer we need to wait for observations to come, the larger our uncertainty about the current cumulative reward. \subsection{KLUCB INDEX} \paragraph{Concentration bound.} We first state a concentration inequality that controls the underestimation probability, based on an alternative Chernoff bound for a sum of independent binary random variables (Lemma~\ref{lemma:laplace}, proved in Appendix~\ref{ap:concentration}). This lemma only holds for a sequence of pulls fixed beforehand, independently of the realizations, i.e., the values of $A_t$ do not depend on the sequence of the $X_s$. Although its scope is restrictive, it provides intuition on the construction of the algorithm. \begin{lemma} \label{lemma:KL-concentration} Assume that the sequence of pulls is fixed beforehand and let $k$ be an arm in $\{1,\dots,K\}$. Then, for any $\delta > 0$ and for all $t>0$, \[ \mathds{P}\left( \left\{\hat{\theta}_k(t) < \theta_k \right\} \cap \left\{\tilde{N}_k(t)d_{\mathrm{Pois}}(\hat{\theta}_k(t),\theta_k) > \delta\right\} \right) < e^{-\delta}, \] where $d_{\mathrm{Pois}}(p,q) = p\log (p/q) + q-p$ denotes the Poisson Kullback--Leibler divergence.
\end{lemma} To derive upper confidence bounds for $\theta_k$ from Lemma~\ref{lemma:KL-concentration}, we follow~\cite{garivier2011klucb} and define the KLUCB index by \begin{multline*} U_k^{\textsc{kl}}(t) = \max \Bigl\{ q \in [\hat{\theta}_k(t), 1] \, : \\ \, \tilde{N}_k(t)d_{\mathrm{Pois}}(\hat{\theta}_k(t),q)\leq \beta_\epsilon(t) \Bigr\} . \end{multline*} Using $\beta_\epsilon(t) = \beta$, this KLUCB index satisfies a result analogous to Proposition~\ref{prop:controlUCB} (see Proposition~\ref{th:self-normalized} in Appendix~\ref{ap:Poisson}): \[ \mathds{P}\left( \theta_k > U_k^{\textsc{kl}}(t)\right) \leq e \lceil \beta \log (t) \rceil e^{-\beta} . \] Even though the Kullback--Leibler divergence does not have the same expression for Bernoulli and Poisson random variables, the following lemma (proved in Appendix~\ref{ap:Poisson}) shows that, for a certain range of parameters, they are actually very close. \begin{lemma} \label{lem:div_bound} For $0<p<q<1$, $$ (1-q) d(p,q) \leq d_{\mathrm{Pois}}(p,q) \leq d(p,q). $$ \end{lemma} \section{ALGORITHMS} \label{sec:algorithms} Algorithm~\ref{algo} presents the scheme common to the censored and uncensored cases, which differ only in the definition of the parameter estimator. In both cases, one may also consider either of the UCB and KLUCB indices defined in the previous section, resulting in the \textsc{DelayedUCB} and \textsc{DelayedKLUCB} algorithms. We provide a finite-time analysis of the regret of these algorithms when using an exploration function of the form $\beta_\epsilon(t)=(1+\epsilon)\log(t)$, for some positive $\epsilon$. \begin{algorithm}[hbt] \caption{-- \textsc{DelayedUCB} and \textsc{DelayedKLUCB}.} \label{algo} \begin{algorithmic}\small \REQUIRE{$K$, CDF values $(\tau_d)_{d\geq 0}$, threshold $m>0$ if feedback is censored.} \STATE{Initialization: during the first $K$ rounds, play each arm once.
} \FOR{$t>K$} \STATE{Compute $S_k(t)$ and $\tilde{N}_k(t)$ for all $k$ according to the assumed feedback model (censored or not),} \STATE{Compute $\hat{\theta}_k(t)$ for all $k$,} \STATE{For the chosen index $\mathcal{A} \in \{\textsc{klucb},\textsc{ucb}\}$, set $A_t \gets \arg\max_k U^{\mathcal{A}}_k(t)$,} \STATE{Observe the reward $Y_t$ and all the individual feedback $(X_{s,t})_{s\leq t}$.} \ENDFOR \end{algorithmic} \end{algorithm} \paragraph{Finite-time analysis of \textsc{DelayedUCB}.} \begin{theorem}\label{th:ucb} In the censored setting, the regret of \textsc{DelayedUCB} is bounded from above by \[ L_{\textsc{ucb}}(T)\leq (1+\epsilon)\log(T)\sum_{k > 1} \frac{1}{2\tau_m\Delta_k} + o_{\epsilon,m}(\log (T)). \] \end{theorem} \begin{proof} Outline of the proof (cf.\ Appendix~\ref{ap:ucb}): \begin{enumerate} \item First, upper bound the regret using Lemma~\ref{lemma:regret} in the uncensored case: \[ R(T) \leq \sum_{k>1} \Delta_k \mathds{E}[N_k(T)], \] and, bounding the first $m$ losses by $1$, in the censored case: \[ R(T) \leq m + \sum_{k>1} \tau_m \Delta_k \mathds{E}\left[ \sum_{t>m}^{T}\mathds{1}\{A_t = k\} \right]. \] \item Then, decompose the indicator $\mathds{1}\{A_t = k\}$ as in \cite{auer2002finite}: \begin{multline*} \sum_{t>m}^{T}\mathds{1}\{A_t = k\} \leq \sum_{t>m}^{T}\mathds{1}\left\{ U^{\textsc{ucb}}_1(t) < \theta_1 \right\} \\ + \sum_{t>m}^{T}\mathds{1}\left\{ A_{t+1}=k, U^{\textsc{ucb}}_k(t) \geq \theta_1 \right\}. \end{multline*} \item The first sum is handled by Proposition~\ref{prop:controlUCB}, so it suffices to control the second sum: \begin{align*} &\mathds{E}\left[ \sum_{t>m}^{T} \mathds{1}\left\{ A_{t+1}=k, U^{\textsc{ucb}}_k(t) \geq \theta_1 \right\} \right]\\ & \qquad \leq \frac{(1+\epsilon)\log(T)}{2\tau_m^2 \Delta_k^2} \\ & \qquad \qquad + \sum_{s > \frac{(1+\epsilon)\log(T)}{2\tau_m^2\Delta_k^2} } \mathds{P}\left( U^{\textsc{ucb}}_k(t) \geq \theta_k + \Delta_k\right) .
\end{align*} \end{enumerate} The last term is actually $O(\sqrt{\log(T)})$, which gives the desired result. Details, as well as explicit constants and dependencies, can be found in Appendix~\ref{ap:ucb}. \end{proof} \begin{corollary} \label{cor:uncensUCB} In the uncensored setting, assume additionally that there exists $c >0$ such that $1-\tau_m \leq \frac{c}{m}$ for all $m \geq 1$. Then the regret of \textsc{DelayedUCB} is bounded from above by \[ L_{\textsc{ucb}}(T)\leq \frac{1+\epsilon}{1-\epsilon}\log(T)\sum_{k>1} \frac{1}{2\Delta_k} + o_{\epsilon,m}(\log (T)) . \] \end{corollary} \begin{proof} The analysis of \textsc{DelayedUCB} given in Appendix~\ref{ap:ucb} (in the censored setting) shows that the performance of \textsc{DelayedUCB} in the uncensored setting can be upper bounded by its performance in the censored setting, where the threshold $m$ can be fixed arbitrarily: the choice of $m$ only has an impact on the analysis of the algorithm. The specific choice of $m$ satisfying $\tau_m \geq 1-\epsilon$ gives the claimed result. As indicated in Appendix~\ref{ap:ucb}, the dependency of the $o_{\epsilon,m}(\log (T))$ term is actually only linear in $m$; combined with the assumption on the decay of $1-\tau_m$, this reduces the overall dependency on the parameter $m$ to a factor $1/\epsilon$. \end{proof} We emphasize that the assumption that $1-\tau_m \leq c/m$ is actually rather natural. Indeed, if $1-\tau_m \leq c/m^\gamma$ for some constants $c,\gamma>0$, then the finiteness requirement on the expected delay is satisfied as soon as $\gamma >1$. \paragraph{Finite-time analysis of \textsc{DelayedKLUCB}.} \begin{theorem}\label{th:klucb} For any $\eta >0$, the regret of \textsc{DelayedKLUCB} is bounded in the censored setting as \begin{align*} L_{\textsc{klucb}}(T)\leq &(1+\eta)\frac{\beta_\epsilon (T)}{1-\theta_1}\sum_{k > 1} \frac{\tau_m\Delta_k}{d(\tau_m\theta_k, \tau_m\theta_1)} \\ &+ o_{\epsilon,m,\eta}(\log (T)).
\end{align*} \end{theorem} \begin{proof} Outline of the proof (cf.\ Appendix~\ref{ap:klucb}): \begin{enumerate} \item We start by decomposing the regret according to the different types of unfavorable events. Thanks to the upper bound on the regret provided by Lemma~\ref{lemma:regret}, it suffices to control the expected number of suboptimal pulls $\mathds{E}[N_k(T)]$ for arms $k>1$: \begin{multline*} \mathds{E}[N_k(T)] \leq m + \mathds{E}\left[ \sum_{t=m+1}^T \mathds{1}\{U_1(t)<\theta_1 \} \right] \\ + \mathds{E}\left[ \sum_{t=m+1}^T \mathds{1}\{ A_t=k, U_k(t) \geq \theta_1 \} \right] . \end{multline*} \item The first sum is handled by Theorem~\ref{th:self-normalized} in Appendix~\ref{ap:concentration}, which shows that it is $o(\log(T))$. For the second term, we bound the indices using the fact that $\tilde{N}_k(t) \geq \tau_m N_k(t-m)$ to obtain \begin{align*} & U^{\textsc{kl}}_k(t) \leq U^{\textsc{kl}+}_k(t)\\ &:= \max \bigl\{q \in [\hat{\theta}_k(t),1] : \tau_m d_{\mathrm{Pois}}(\hat{\theta}_k(t),q)\leq \frac{\beta_\epsilon(t)}{N_k(t-m)}\bigr\} . \end{align*} Notice that the $U^{\textsc{kl}+}_k(t)$ indices are well defined for $t>m$. \item Then, we proceed as in the proof of Theorem~10 in Appendix~B.2 of \cite{joulani2013online}. For any $\eta>0$, we define the characteristic number of pulls \[ K_k(T) = \frac{(1+\eta )\beta_\epsilon (T)}{d_{\mathrm{Pois}}(\tau_m\theta_k,\tau_m\theta_1)} , \] and we prove that \begin{align*} \sum_{s\geq K_k(T)} \mathds{P} \left( \tau_m s\, d_{\mathrm{Pois}}(\hat{\theta}_{k,s},\theta_1) \leq\beta_\epsilon (T) \right) \\ = o_{\epsilon,m,\eta} (\log T) \end{align*} using Fact~2 of \cite{cappe2013kullback} for exponential families. \end{enumerate} \end{proof} \begin{corollary} In the uncensored setting, assume, as in Corollary~\ref{cor:uncensUCB}, that there exists a constant $c>0$ such that $1-\tau_m\leq \frac{c}{m}$ for all $m \geq 1$.
Then the regret of \textsc{DelayedKLUCB} is bounded from above as \begin{align*} L_{\textsc{klucb}}(T) & \leq \frac{\beta_\epsilon(T)}{1-\theta_1} \sum_{k > 1} \frac{(1+\eta)(1-\epsilon)\Delta_k}{d((1-\epsilon)\theta_k, (1-\epsilon)\theta_1)} \\ & + o_{\eta,\epsilon}(\log(T)) . \end{align*} \end{corollary} \begin{proof} As in the proof of Corollary~\ref{cor:uncensUCB}, the performance of \textsc{DelayedKLUCB} in the uncensored case can be bounded as in the censored case for a specific choice of $m(\epsilon)$ such that $\tau_{m(\epsilon)} \geq 1-\epsilon$, namely $m(\epsilon)\geq c/\epsilon$. As shown in the proof of Theorem~\ref{th:klucb} in the censored case, the dependency of the remainder term on $m$ is linear, which reduces to a factor $1/\epsilon$. \end{proof} \paragraph{Naive benchmark: the \textsc{Discarding} policy.} An obvious benchmark algorithm in the censored setting is to use the regular UCB and KLUCB policies, using at each time $t$ only the first $t-m$ pulls and the corresponding observed rewards. In that case, the empirical average considered is simply $\hat{\theta}^m_k(t) = S_k(t-m)/(\tau_m N_k(t-m))$ and the corresponding optimistic indices are \begin{align*} & U^{m}_k (t)= \hat{\theta}^m_k(t) + \sqrt{\beta_\epsilon (t)/(2\tau_m N_k(t-m))},\\ & U^{m|\textsc{kl}}_k(t) = \max \Bigl\{q \in [\hat{\theta}_k^m(t), 1] : \, \tau_m d_{\mathrm{Pois}}(\hat{\theta}_k^m(t), q) \leq \frac{\beta_\epsilon (t)}{N_k(t-m)} \Bigr\}. \end{align*} These indices can only be computed after at least $m$ rounds. The proof technique used for the analysis of our algorithms in the censored case actually shows that the resulting \textsc{DiscardingUCB} and \textsc{DiscardingKLUCB} policies are asymptotically optimal. Nonetheless, in practice it is very undesirable to suffer an arbitrarily long linear-regret phase at the beginning of the learning period, until the threshold $m$ is reached. This is especially true if the threshold $m$ is large compared to the horizon $T$.
In that case, we empirically show in Section~\ref{sec:experiments} that our algorithms achieve drastically improved short-horizon performance. \section{EXPERIMENTS} \label{sec:experiments} \begin{figure*} \caption{Expected regret of \textsc{d-ucb} and \textsc{d-klucb} in the censored and uncensored settings, on the problems $\theta_H$ (left) and $\theta_L$ (right).} \label{fig:high-rew} \label{fig:low-rew} \label{fig:opt} \end{figure*} In this section we perform simulations in our two delayed feedback frameworks. The algorithms described in the previous section are denoted \textsc{d-ucb} and \textsc{d-klucb} in the censored setting, and \textsc{ud-ucb} and \textsc{ud-klucb} in the uncensored setting. The computational bottleneck of such policies is the computation of $\tilde{N}_k(t)$, which is in general a weighted sum over all past actions: without any assumption on the weights $(\tau_s)_{s\geq 0}$, it requires storing all previous pulls and recomputing $\tilde{N}_k(t)$ at each iteration. Following the conclusions of \cite{chapelle2014modeling}, we assume throughout this section that the delays follow a geometric distribution with expectation $\mu$, and we denote by $\lambda := 1-1/\mu$ the associated decay rate, so that $(1-\tau_{s+1})=\lambda (1-\tau_s)$ for each $s\geq 0$. This assumption allows us to implement our algorithms in a computationally and memory-efficient manner, as it provides a sequential updating scheme for the quantities $\tilde{N}_k(t)$, $k\in[K]$. In the uncensored setting, we have \[ \tilde{N}_k(t) = \sum_{s=1}^t (1-\lambda^{t-s+1})\mathds{1}\{A_s = k\} = N_k(t) - O_k(t) , \] where $O_k(t)$ is updated after each round as follows: \begin{equation} \label{eq:O} O_k(t+1)\gets \lambda O_k(t) + \mathds{1}\{A_t=k\}. \end{equation} In the censored setting, however, one must still keep track of some of the previous pulls in order to compute \[ \tilde{N}_k(t) = N_k(t-m)\tau_{m} +\sum_{s=t-m+1}^{t-1} \mathds{1}\{A_s=k\} \tau_{t-s}.
\] In practice this can be done by maintaining a buffer of size $m$ containing the last $m$ pulls, each multiplied by the probability of observing a reward with the delay corresponding to its current position in the buffer. In addition to this buffer, older pulls are accumulated in a separate count $N_k(t-m)$, for which the weight remains $\tau_m$. \paragraph{Comparing \textsc DelayedUCB\ and \textsc DelayedKLUCB.} We compare the regret of both delayed bandit policies in the censored and uncensored settings for $T=10000$, $\mu=500$ and $m=1000$. Simulations on Figure~\ref{fig:opt}, for two problems, $\theta_H=(0.5,0.4, 0.3)$ on the left, and $\theta_L=(0.1,0.05, 0.03)$ on the right, display the classical pattern: while UCB-based algorithms perform satisfactorily for central values (close to 0.5) of the conversion rate, they are clearly sub-optimal for more realistic values of the conversion rate. The two right plots also confirm that, for the KLUCB-based algorithms, the loss with respect to the optimal regret growth rate due to the use of the Poisson divergence is -- as expected from Theorem~\ref{th:klucb} -- not significant for low values (here $\theta^*=0.1$) of the conversion rates. \begin{figure} \caption{Expected regret of \textsc{d-ucb} and \textsc{d-klucb} compared to the \textsc{Discarding} policies.} \label{fig:disc} \end{figure} \paragraph{\textsc DelayedUCB\ and \textsc DelayedKLUCB\ vs. \textsc{Discarding}.} In this section, we illustrate the good initial empirical performance of \textsc DelayedUCB\ and \textsc DelayedKLUCB\ compared to the heuristic \textsc{Discarding} approach presented in Section~\ref{sec:algorithms}. Figure~\ref{fig:disc} compares results for both \textsc DelayedUCB\ and \textsc DelayedKLUCB\ with $\theta =(0.1,0.05, 0.03)$, $T=10000$, $\mu=500$ and $m=1000$ in the censored setting. We observe that discarding policies incur a linear regret phase at the beginning of the learning and catch up with the expected regret growth rate only after a large number of rounds.
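The two update schemes described above -- the recursion~\eqref{eq:O} in the uncensored case and the sliding buffer in the censored case -- can be sketched as follows. This is an illustrative sketch only: the class and variable names are ours, and we adopt the convention $\tau_d = 1-\lambda^{d}$, so the indexing may differ by one from the conventions used in the text.

```python
import collections
import numpy as np

class GeometricDelayCounts:
    """Sequential maintenance of the discounted pull counts under geometric
    delays with parameter lam = 1/mu, where 1 - tau_{d+1} = lam * (1 - tau_d).
    This avoids storing and re-weighting the whole history at every round."""

    def __init__(self, n_arms, lam, m=None):
        self.lam = lam                      # geometric parameter, lam = 1/mu
        self.m = m                          # censoring window (None: uncensored)
        self.N = np.zeros(n_arms)           # raw pull counts N_k(t)
        self.O = np.zeros(n_arms)           # geometric corrections O_k(t)
        if m is not None:
            self.buffer = collections.deque(maxlen=m)  # last m pulled arms
            self.N_old = np.zeros(n_arms)   # pulls older than m rounds

    def update(self, arm):
        self.O *= self.lam                  # O_k <- lam * O_k ...
        self.O[arm] += 1.0                  # ... + 1{A_t = k}, cf. Eq. (O)
        self.N[arm] += 1.0
        if self.m is not None:
            if len(self.buffer) == self.m:
                # oldest buffered pull leaves the window: its weight freezes at tau_m
                self.N_old[self.buffer[0]] += 1.0
            self.buffer.append(arm)

    def tau(self, d):
        return 1.0 - self.lam ** d          # delay CDF (our convention)

    def discounted_counts(self):
        if self.m is None:                  # uncensored: N~_k = N_k - O_k
            return self.N - self.O
        # censored: old pulls keep weight tau_m, buffered pulls weight tau_age
        tilde = self.tau(self.m) * self.N_old
        for age, arm in enumerate(reversed(self.buffer)):
            tilde[arm] += self.tau(age)
        return tilde
```

Each round costs $O(K)$ time (plus $O(m)$ when the censored counts are queried) and the memory footprint is $O(K+m)$, instead of the $O(t)$ storage required without the geometric assumption.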
These figures reveal a non-negligible gap in performance between the naive \textsc{Discarding} approach and our delay-adapted quasi-optimal algorithms. \section{CONCLUSION} The stochastic delayed bandit setting introduced in this work addresses an important problem in many applications where the feedback associated to each action is delayed and censored, due to the ambiguity between conversions that will never happen and conversions that will occur at some later -- perhaps unobservable -- time. Under the hypothesis that the distribution of the delay is known, we provided a complete analysis of this model as well as simple and efficient algorithms. An interesting generalization of the present work would be to relax the model hypothesis and estimate the delay distribution on the fly, possibly using context-dependent delay distributions. \appendix \onecolumn \section{Concentration results} \label{ap:concentration} \subsection{Poissonization of the KL indices} \label{ap:Poisson} We require variants of Lemma 9 and Theorem 10 in \cite{garivier2011klucb} adapted to our setting. \begin{lemma} \label{lemma:laplace} For $\theta \in [0,1]$ and $(\tau_i)_{1 \leq i \leq L} \in [0,1]^L$, let $(X_{i,j})_{1\leq i \leq L, j \geq 1}$ be a collection of independent Bernoulli random variables such that $ \mathds{E}(X_{i,j}) = \tau_i\theta$, and let $(\epsilon_{i,j}) \in \{0,1\}$ be associated deterministic indicators. For $1\leq i \leq L$, denote $n_i = \sum_{j=1}^\infty \epsilon_{i,j}$; we shall assume that all $n_i$ are finite and that at least one of them is non-zero. Let $X = \sum_{i=1}^L \sum_{j=1}^\infty \epsilon_{i,j} X_{i,j}$ and denote by $\phi(\lambda) = \log \mathds{E}\left[\exp(\lambda X)\right]$ its log-Laplace transform and by $\phi^*(x) = \sup_{\lambda} x\lambda - \phi(\lambda)$ the associated convex conjugate (Fenchel--Legendre transform).
Then, for all $\lambda \in \mathds{R}$, \begin{equation} \label{eqap:loglaplace-bound:pois} \phi(\lambda) \leq \left(\sum_{i=1}^L \tau_i n_i \right) \theta \left(e^\lambda - 1\right) , \end{equation} and, for all $x \geq 0$, \begin{equation} \label{eqap:divergence-bound:pois} \phi^*(x) \geq \left(\sum_{i=1}^L \tau_i n_i \right) d_{\mathrm{Pois}}\left(\frac{x}{\sum_{i=1}^L \tau_i n_i},\theta\right) , \end{equation} where $d_{\mathrm{Pois}}(p,q) = p\log p/q + q-p$ denotes the Poisson Kullback-Leibler divergence. \end{lemma} \begin{proof} By direct calculation, \[ \phi(\lambda) = \sum_{i=1}^L n_i \log\left(1 - \tau_i\theta + \tau_i\theta e^{\lambda}\right) . \] The function $\tau_i \mapsto \log\left(1 - \tau_i\theta + \tau_i\theta e^{\lambda}\right)$ is strictly concave on $[0,1]$ and we upper bound it by its tangent at $0$, that is, \begin{equation*} \log\left(1 - \tau_i\theta + \tau_i\theta e^{\lambda}\right) \leq \tau_i \theta (e^{\lambda}-1) , \end{equation*} which yields~\eqref{eqap:loglaplace-bound:pois} upon summing over $i$. The r.h.s. of~\eqref{eqap:loglaplace-bound:pois} is easily recognized as the log-Laplace transform of the Poisson distribution with expectation $\left(\sum_{i=1}^L \tau_i n_i \right) \theta$. To obtain~\eqref{eqap:divergence-bound:pois}, we use the observation that $x \lambda - a \left(e^\lambda - 1\right)$ is maximized at $\lambda = \log (x/a)$, where it is equal to $d_{\mathrm{Pois}}(x,a)$, as well as the fact that $d_{\mathrm{Pois}}(\tau x,\tau a) = \tau d_{\mathrm{Pois}}(x, a)$. \end{proof} Lemma~\ref{lemma:laplace} bounds the log-Laplace transform of the Bernoulli distribution by that of the Poisson distribution with the same mean, and uses the stability of the Poisson distribution.
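The two ingredients of this proof -- the tangent upper bound~\eqref{eqap:loglaplace-bound:pois} on the log-Laplace transform and the scaling identity $d_{\mathrm{Pois}}(\tau x, \tau a)=\tau\, d_{\mathrm{Pois}}(x,a)$ -- can be spot-checked numerically. The following sketch (helper names are ours, test values arbitrary) evaluates both:

```python
import math

def d_pois(p, q):
    """Poisson KL divergence d_Pois(p, q) = p log(p/q) + q - p."""
    return p * math.log(p / q) + q - p

def log_laplace(lam, theta, taus, ns):
    """Exact log-Laplace transform phi(lam) of X = sum_i sum_j eps_ij X_ij."""
    return sum(n * math.log(1.0 - t * theta + t * theta * math.exp(lam))
               for t, n in zip(taus, ns))

theta, taus, ns = 0.3, [0.2, 0.7, 1.0], [5, 3, 4]
ntilde = sum(t * n for t, n in zip(taus, ns))
for lam in (-1.0, -0.1, 0.5, 2.0):
    # phi(lam) <= ntilde * theta * (e^lam - 1): the Poissonized bound
    assert log_laplace(lam, theta, taus, ns) <= ntilde * theta * (math.exp(lam) - 1) + 1e-12

# scaling identity used to obtain the divergence bound
x, a, tau = 0.12, 0.3, 0.45
assert abs(d_pois(tau * x, tau * a) - tau * d_pois(x, a)) < 1e-12
```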
Using $d_{\mathrm{Pois}}(p, q)$ instead of $d(p,q)$ -- where $d(p,q) = p\log p/q + (1-p)\log[(1-p)/(1-q)]$ denotes the Bernoulli Kullback-Leibler divergence -- will of course induce a performance gap, which is however not significant for low values of the probabilities, as shown by Lemma~\ref{lem:div_bound}, which we recall below. \newcounter{temp} \setcounter{temp}{\thetheorem} \setcounter{theorem}{7} \begin{lemma}\label{8} For $0<p<q<1$, $$ (1-q) d(p,q) \leq d_{\mathrm{Pois}}(p,q) \leq d(p,q). $$ \end{lemma} \setcounter{theorem}{\thetemp} \begin{proof} For the upper bound, \begin{equation*} d(p,q) - d_{\mathrm{Pois}}(p,q) = (1-p)\log\frac{1-p}{1-q} - (q - p) = -(1-p) \log\left(1+\frac{p-q}{1-p}\right) + (p-q) \geq 0, \end{equation*} using $-\log(1+x) \geq -x$. For the lower bound, \begin{equation} \label{q:sfdjqos} d_{\mathrm{Pois}}(p,q) - (1-q)d(p,q) = qp \log \frac{p}{q} + q -p - (1-q)(1-p)\log\left(\frac{1-p}{1-q}\right) . \end{equation} One has $d_{\mathrm{Pois}}(q,q) = d(q,q) = 0$ and the derivative of~\eqref{q:sfdjqos} w.r.t. $p$ is equal to \[ q \log \frac{p}{q} + (1-q) \log\left(\frac{1-p}{1-q}\right) = - d(q,p) \leq 0. \] Hence, $d_{\mathrm{Pois}}(p,q) - (1-q)d(p,q)$ is nonnegative when $p \leq q$. \end{proof} We can now prove the concentration result stated in Lemma~\ref{lemma:KL-concentration}, which we recall below for the reader's convenience. \setcounter{temp}{\thetheorem} \setcounter{theorem}{6} \begin{lemma} Assume that the sequence of pulls is fixed beforehand and let $k$ be an arm in $\{1,...,K\}$. Then for any $\delta > 0$ and for all $t>0$, \[ \mathds{P}\left( \left\{\hat{\theta}_k(t) < \theta_k \right\} \cap \left\{\tilde{N}_k(t)d_{\mathrm{Pois}}(\hat{\theta}_k(t),\theta_k) > \delta\right\} \right) < e^{-\delta}, \] where $d_{\mathrm{Pois}}(p,q) = p\log p/q + q-p$ denotes the Poisson Kullback-Leibler divergence.
\end{lemma} \setcounter{theorem}{\thetemp} \begin{proof} To bound $\mathds{P}(\hat{\theta}_k(t) < x) = \mathds{P}(S_k(t) < \tilde{N}_k(t)x)$, for $0<x<\theta_k$, apply Chernoff's method using the result of Lemma~\ref{lemma:laplace} to obtain \[ \mathds{P}(\hat{\theta}_k(t) < x) \leq e^{-\tilde{N}_k(t) d_{\mathrm{Pois}}(x, \theta_k)}. \] Using that $x \mapsto d_{\mathrm{Pois}}(x,\theta_k)$ is decreasing on $[0,\theta_k]$, we can apply it to both sides of the inequality in the event on the left-hand side to obtain \[ \mathds{P}\left(\left\{\hat{\theta}_k(t) < \theta_k \right\} \cap \left\{\tilde{N}_k(t)d_{\mathrm{Pois}}(\hat{\theta}_k(t),\theta_k)>\tilde{N}_k(t)d_{\mathrm{Pois}}(x,\theta_k)\right\}\right) \leq e^{-\tilde{N}_k(t) d_{\mathrm{Pois}}(x, \theta_k)}. \] Setting $\delta = \tilde{N}_k(t) d_{\mathrm{Pois}}(x, \theta_k)$ yields the desired result. \end{proof} \begin{theorem} \label{th:self-normalized} Consider $(\tau_i)_{1 \leq i\leq L} \in [0,1]^L$, $\theta \in(0, 1)$, and independent sequences $(X_i(s))_{s\geq1}$ of independent Bernoulli random variables such that $\mathds{E} X_i(s) = \tau_i\theta$. Let $\mathcal{F}_t$ denote an increasing sequence of sigma-fields such that for each $t$ and all $i$, $\sigma(X_i(1),\ldots,X_i(t))\subset \mathcal{F}_t$. Also consider a predictable sequence of indicator variables $\epsilon_i(s) \in \{0,1\}$, that is, such that $\sigma(\epsilon_1(t+1),\ldots,\epsilon_L(t+1))\subset \mathcal{F}_t$. Define \[ S_i(t)=\sum_{s=1}^{t} \epsilon_i(s) X_i(s), \qquad N_i(t) = \sum_{s=1}^t \epsilon_i(s); \] and the pooled quantities \[ S(t) = \sum_{i=1}^{L} S_i(t), \qquad N(t) = \sum_{i=1}^{L} N_i(t), \qquad \tilde{N}(t) = \sum_{i=1}^{L}\tau_i N_i(t), \qquad \hat{\theta}(t)=\frac{S(t)}{\tilde{N}(t)} . \] The KLUCB index, defined as \[ U^{\textsc{kl}}(n) = \max \left\{ q \in \left[\hat{\theta}(n), \theta_M\right] \, : \, \tilde{N}(n)d_{\mathrm{Pois}}(\hat{\theta}(n),q)\leq \delta \right\},
\] satisfies \[ \mathds{P}\left( U^{\textsc{kl}}(n) \leq \theta\right) \leq e \lceil \delta \log (n) \rceil e^{-\delta} . \] \end{theorem} \begin{proof} The proof is analogous to that of Theorem 10 of \cite{garivier2011klucb} and we only detail the step that differs, namely the identification of the supermartingale $W_t^\lambda$. Define $W_0^\lambda = 1$ and, for $t\geq 1$, \[ W^{\lambda}_t = \exp\left(\lambda S(t) - \tilde{N}(t) \theta\left(e^\lambda-1\right) \right). \] \begin{align*} \mathds{E}\left[\exp(\lambda(S(t+1)-S(t))) \left\vert \mathcal{F}_t \right. \right] & = \mathds{E}\left[ \left. \exp\left(\lambda \sum_{i=1}^L \epsilon_i(t+1) X_i(t+1)\right) \right\vert \mathcal{F}_t \right] \\ & \leq \exp\left( \left(\sum_{i=1}^L \tau_i \epsilon_i(t+1) \right) \theta \left(e^\lambda-1\right) \right) \\ & = \exp\left( \left(\tilde{N}(t+1)-\tilde{N}(t) \right) \theta \left(e^\lambda-1\right) \right) , \end{align*} where we have used~\eqref{eqap:loglaplace-bound:pois} and the definition of $\tilde{N}(t)$. Multiplying both sides of the inequality by $\exp\left(\lambda S(t) - \tilde{N}(t+1) \theta \left(e^\lambda-1\right) \right)$ shows that $\mathds{E}\left[ W_{t+1}^\lambda \,\middle\vert\, \mathcal{F}_t \right] \leq W_{t}^\lambda$ and hence that $W_t^\lambda$ is a supermartingale. The rest of the proof is as in \cite{garivier2011klucb}, replacing $N(t)$ by $\tilde{N}(t)$ and $\phi_\mu(\lambda)$ by $\theta\left(e^\lambda-1\right)$. \end{proof} \section{Details on the Lower Bound Results} \label{ap:lower_bound} We provide here the details of the proofs of Theorems~\ref{th:lb-uncens} and~\ref{th:lb-cens}. The key result that we use is a lower bound on the log-likelihood ratio under two alternative bandit models $\theta$ and $\theta'$ that do not have the same best arm. Namely, according to Lemma~1 of \cite{kaufmann2015complexity}, we have \[\liminf_{T\to \infty} \frac{\mathds{E}[\ell_T]}{\log(T)} \geq 1.
\] Now, considering specific changes of measure $\theta'$ that only modify the distribution of one single suboptimal arm, we are going to obtain lower bounds on each expected number of pulls $\mathds{E}[N_k(T)]$ for $k\neq 1$, as in Appendix~B of \cite{kaufmann2015complexity}. \paragraph{Uncensored Setting: } As argued in Section~\ref{sec:lower-bound}, in the uncensored setting the expected log-likelihood ratio of the observations is \[ \mathds{E}_\theta\left[ \ell_T\right] = \sum_{s=1}^T d(\theta_{A_s}\tau_{T-s}, \theta_{A_s}'\tau_{T-s}) . \] Now, fix an arm $k \neq 1$ and, for $\epsilon >0$, consider $\theta'=(\theta_1,\ldots,\theta_{k-1},\theta_1+\epsilon,\ldots,\theta_K)$. For this change of measure, the expected log-likelihood ratio only contains the terms involving arm $k$: \[ \mathds{E}_\theta\left[ \ell_T\right] = \sum_{s=1}^T \mathds{1}\{A_s=k\} d(\theta_k\tau_{T-s}, (\theta_1+\epsilon)\tau_{T-s}) . \] Now, in order to obtain an expression that involves $\mathds{E}[N_k(T)]$, we need to bound this sum from above using Lemma~5 of Appendix~B of \cite{katariya2016stochastic}, which we recall here for completeness. \begin{lemma}\label{lem:incrkappa} Let $p,q$ be any fixed real numbers in $(0,1)$. The function $f:\alpha \mapsto d(\alpha p,\alpha q)$ is convex and increasing on $(0,1)$. As a consequence, for any $\alpha<1$, $d(\alpha p, \alpha q) < d(p,q)$. \end{lemma} Thus, for each $s\geq 1$ we have $\tau_{T-s}\leq 1$ and, according to the above result, \[ d(\theta_k, (\theta_1+\epsilon)) \geq d(\theta_k\tau_{T-s}, (\theta_1+\epsilon)\tau_{T-s}) \] and \[ \mathds{E}[N_k(T)] d(\theta_k,\theta_1+\epsilon) \geq \sum_{s=1}^T \mathds{1}\{A_s=k\} d(\theta_k\tau_{T-s}, (\theta_1+\epsilon)\tau_{T-s}). \] We obtain \[ \liminf_{T\to \infty} \frac{\mathds{E}[N_k(T)] d(\theta_k,\theta_1+\epsilon)}{\log(T)} \geq \liminf_{T\to \infty} \frac{\mathds{E}[\ell_T]}{\log(T)} \geq 1.
\] Letting $\epsilon \to 0$ yields \[ \liminf_{T\to \infty} \frac{\mathds{E}[N_k(T)] }{\log(T)} \geq \frac{1}{d(\theta_k,\theta_1)}. \] In order to bound the expected regret $L_T$, we use the inequality~\eqref{eq:regret:relationship} from Lemma~\ref{lemma:regret}: \[ L_T\geq \sum_{k=2}^K (\theta_{1} - \theta_k) \left(\mathds{E}[N_k(T)] - \mu\right), \] where $\mu = \mathds{E}[D_s]$. We now lower bound each $\mathds{E}[N_k(T)]$ and, under the assumption that $\mathds{E}[D_s]<\infty$, we use that $\mu/\log(T) \to 0$ as $T \to \infty$ to obtain \[ \liminf_{T\to \infty} \frac{L_T }{\log(T)} \geq \liminf_{T\to \infty} \frac{\sum_{k=2}^K (\theta_{1}- \theta_k) \left(\mathds{E}[N_k(T)] - \mu\right)}{\log(T)} \geq \sum_{k=2}^{K}\frac{(\theta_{1}- \theta_k)}{d(\theta_k,\theta_1)}. \] \paragraph{Censored Setting: } The proof in the censored setting follows the same steps as the proof above, except that we do not require Lemma~5 of \cite{katariya2016stochastic} in order to bound the log-likelihood ratio. We directly have \begin{align*} \mathds{E}_\theta\left[ \ell_T\right] =& \sum_{s=1}^{T-m} d(\theta_{A_s}\tau_{m}, \theta_{A_s}'\tau_{m})+ \sum_{s=T-m}^T d(\theta_{A_s}\tau_{T-s}, \theta_{A_s}'\tau_{T-s}). \end{align*} Proceeding as above, considering the adequate change of measure involving only one suboptimal arm $k$ and taking $\epsilon \to 0$, we obtain \[ \liminf_{T\to \infty} \frac{\mathds{E}[N_k(T)] d(\tau_m\theta_k,\tau_m\theta_1) + \sum_{s=T-m}^T d(\theta_k\tau_{T-s}, \theta_1\tau_{T-s})}{\log(T)} = \liminf_{T\to \infty} \frac{\mathds{E}[N_k(T)] d(\tau_m\theta_k,\tau_m\theta_1)}{\log(T)} \geq 1 , \] where we used the fact that the second term of the sum in the left-hand side is finite.
The end of the proof is similar to the uncensored setting treated above, where we can simply bound the regret according to Eq.~\eqref{eq:regret_th} as \[ L_T\geq \sum_{k=2}^K \tau_m(\theta_{1}- \theta_k) \mathds{E}[N_k(T-m)] + \sum_{s=T-m+1}^{T} \tau_{T-s}(\theta_1 - \theta_{A_s}) \] in order to obtain the asymptotic lower bound. \section{Analysis of \textsc DelayedUCB\ and \textsc DelayedKLUCB} In order to control the empirical averages of the rewards of each arm for different values of $N_k(t)$, we introduce the notation $\hat{\theta}_{k,s}:=\sum_{u=1}^s X_{k,u}/s$ for the mean over the first $s$ pulls of $k$. \subsection{\textsc DelayedUCB} \label{ap:ucb} In this section, we provide the complete proof of Theorem~\ref{th:ucb}. We decompose the regret after bounding by $1$ the first $m$ losses of the policy: \[ L_{\textsc{ucb}}(T) \leq m + \sum_{k>1} \tau_m \Delta_k \mathds{E}\left[ \sum_{t=m+1}^{T}\mathds{1}\{A_t = k\} \right]. \] Hence we only need to bound the number of suboptimal pulls, as in the seminal proof by \cite{auer2002finite}. For any suboptimal $k>1$, we have: \begin{align*} \mathds{E}[N_k(T)] \leq 1 & + \sum_{t=K+1}^{T}\mathds{P}\left( U^{\textsc{UCB}}_1(t) < \theta_1 \right) \\ & + \sum_{t=K+1}^{T} \mathds{P}\left( A_{t+1}=k, U^{\textsc{UCB}}_k(t) \geq \theta_1 \right). \end{align*} While the first term is simply handled by Proposition~\ref{prop:controlUCB} and is $O(1/\epsilon^3)=o(\log(T))$, the second one must be controlled as in the original proof of UCB1 by \cite{auer2002finite} using the fact that for all $t>m$, \[ \frac{N_k(t)}{\tilde{N}_k(t)} \leq \frac{N_k(t-m)+m}{\tilde{N}_k(t)} \leq \frac{1}{\tau_m} + \frac{m}{\tau_m N_k(t-m)} , \] which allows us to upper-bound the optimistic indices as \[ \hat{\theta}_k(t) + \left( \frac{1}{\tau_m} + \frac{m}{\tau_m N_k(t-m)} \right) \sqrt{\frac{\beta_\epsilon(t)}{2 N_k(t)}} \geq U^{\text{UCB}}_k(t). \] Then, we use this upper bound on the indices in order to bound the relevant sum of probabilities.
\begin{align*} &\sum_{t=m+1}^{T} \mathds{P}\left(A_{t+1}=k, U^{\text{UCB}}_k(t) \geq \theta_1 \right) \\ & \leq \mathds{E}\left[ \sum_{s\geq 1 } \mathds{1} \left\{ \hat{\theta}_{i,s} + \left( \frac{1}{\tau_m} + \frac{m}{\tau_m s} \right) \sqrt{\frac{\beta_\epsilon(t)}{2s}} \geq \theta_i + \Delta_i \right\} \right] . \end{align*} In order to upper-bound this expectation, we first introduce the quantity $\underline{s}_i >0$ defined by $$ \left( \frac{1}{\tau_m} + \frac{m}{\tau_m \underline{s}_i} \right) \sqrt{\frac{\beta_\epsilon(t)}{2\underline{s}_i}} = \Delta_i, $$ which we rewrite, with the introduction of $\gamma_i>0$, as $$ \underline{s}_i= \frac{\beta_\epsilon(t)}{2\tau_m^2\Delta_i^2} (1+\gamma_i)^2 \quad \text{ so that we get } \quad \left( 1+ \frac{m}{ \underline{s}_i} \right)\frac{1}{1+\gamma_i} = 1. $$ If $\gamma_i \leq 1$, simple computations finally lead to \begin{align*} \frac{2m\tau_m^2\Delta_i^2}{\beta_\epsilon(t)} = \gamma_i(1+\gamma_i)^2 \leq 4 \gamma_i. \end{align*} As a consequence, if $T$ is large enough (so that the left-hand side is smaller than 4), we get that $$ \underline{s}_i \leq \frac{(1+\varepsilon)\log(T)}{2\tau_m^2\Delta^2_i}(1+\gamma_i)^2 \leq \frac{(1+\varepsilon)\log(T)}{2\tau_m^2\Delta^2_i}(1+3\gamma_i) \leq \frac{(1+\varepsilon)\log(T)}{2\tau_m^2\Delta^2_i} + m.
$$ We now focus on the sum to upper-bound: \begin{align*} \mathds{E}\left[ \sum_{s\geq 1 } \mathds{1} \left\{ \hat{\theta}_{i,s} + \left( \frac{1}{\tau_m} + \frac{m}{\tau_m s} \right) \sqrt{\frac{\beta_\epsilon(t)}{2s}} \geq \theta_i + \Delta_i \right\} \right] &\leq \lceil\underline{s}_i\rceil+1 + \sum_{s > \lceil\underline{s}_i\rceil}e^{ -2s \left(\Delta_i - \left( \frac{1}{\tau_m} + \frac{m}{\tau_m s} \right) \sqrt{\frac{\beta_\epsilon(t)}{2s}} \right)^2}\\ & \leq \underline{s}_i+2 + \sum_{s > \lceil\underline{s}_i\rceil}e^{ -2 \left(\sqrt{s}\Delta_i - \left( 1+ \frac{m}{ \underline{s}_i} \right) \sqrt{\frac{\beta_\epsilon(t)}{2\tau_m^2}} \right)^2}, \end{align*} where we used Chernoff's inequality for bounded random variables. Standard computations (comparisons between sums and integrals) give the following: \begin{align*} \sum_{s > \lceil\underline{s}_i\rceil}e^{ -2 \left(\sqrt{s}\Delta_i - \left( 1+ \frac{m}{ \underline{s}_i} \right) \sqrt{\frac{\beta_\epsilon(t)}{2\tau_m^2}} \right)^2} & \leq \int_{\underline{s}_i}^\infty e^{ -2 \left(\sqrt{s}\Delta_i - \left( 1+ \frac{m}{ \underline{s}_i} \right) \sqrt{\frac{\beta_\epsilon(t)}{2\tau_m^2}}\right)^2} ds \\ &\leq \frac{1}{2\Delta_i^2}\left(1+\frac{\sqrt{2\pi}}{4}\left( 1+ \frac{m}{ \underline{s}_i} \right) \sqrt{\frac{\beta_\epsilon(t)}{2\tau_m^2}}\right)\\ & \leq \frac{1}{2\Delta_i^2}\left(1 + \frac{\sqrt{2\pi}}{4}\sqrt{\underline{s}_i}\Delta_i\right) \\ & \leq\frac{1}{2\Delta_i^2}\left(1 + \frac{\sqrt{\pi}}{4}\sqrt{\frac{(1+\epsilon)\log(T)}{\tau_m^2}} + \frac{\sqrt{\pi}}{4}\sqrt{m}\right). \end{align*} As a consequence, we have just proved that $$ \sum_{t=m+1}^{T} \mathds{P}\left(A_{t+1}=k, U^{\text{UCB}}_k(t) \geq \theta_1 \right) \leq \frac{(1+\epsilon)\log(T)}{2\tau_m^2\Delta^2_i} + o(\log(T))\ .
$$ More precisely, combining all our claims yields that $$ L_{\textsc{ucb}}(T) \leq \frac{(1+\epsilon)\log(T)}{2\tau_m^2\Delta_i} + O\left( \frac{1}{\Delta_i}\sqrt{\frac{(1+\epsilon)\log(T)}{2\tau_m^2}}\right) + O\left(\frac{1}{\Delta_i}\frac{1}{\epsilon^3}\right) + O\left(\frac{\sqrt{m}}{\Delta_i}+m\right), $$ and the result follows. \subsection{\textsc DelayedKLUCB} \label{ap:klucb} We follow the steps of \cite{garivier2011klucb} and decompose the regret as \begin{align*} \mathds{E}[N_k(T)] \leq 1 + m - K + \sum_{t=m+1}^{T}\mathds{P}\left( U^{\textsc{kl}}_1(t) < \theta_1 \right) + \sum_{t=m+1}^{T} \mathds{P}\left( A_{t+1}=k, U^{\textsc{kl}}_k(t) \geq \theta_1 \right). \end{align*} The first sum of probabilities above is handled by Theorem~\ref{th:self-normalized}, which shows that it is $o(\log(T))$. We must now bound the second sum, corresponding to the cases when suboptimal indices reach the optimal mean $\theta_1$. To proceed, we simply notice that for all $t$, $\tilde{N}_k(t) \geq \tau_m N_k(t-m)$. We define an alternative optimistic index that upper bounds $U^{\textsc{kl}}_k(t)$ for $t>m$: \[ U^{\textsc{kl}}_k(t) \leq \max_{q \in [\hat{\theta}_k(t),1]} \{q \,|\, \tau_m N_k(t-m) d_{\mathrm{Pois}}(\hat{\theta}_k(t),q)\leq \beta_\epsilon(t)\} =: U^{\textsc{kl}+}_k(t). \] Now we can finish the proof following the steps of the proof of Theorem 2 in \cite{garivier2011klucb}.
First, we denote \[ K_k(T) = \frac{(1+\eta)\beta_\epsilon(t)}{ d_{\mathrm{Pois}}(\tau_m\theta_k,\tau_m\theta_1)} \] and we decompose the second sum after bounding the first $K_k(T)$ terms by 1 and bounding the remaining terms similarly to Lemma 11 of \cite{joulani2013online}: \begin{align*} \sum_{t=m+1}^{T} &\mathds{P}\left( A_{t+1}=k, U^{\textsc{kl}+}_k(t) \geq \theta_1 \right) \leq K_k(T) + \sum_{t \geq K_k(T)+m+1} \mathds{P}\left( A_{t+1}=k, U^{\textsc{kl}+}_k(t) \geq \theta_1 \right)\\ & \leq K_k(T) + \mathds{E}\left[ \sum_{t \geq K_k(T)+m+1} \sum_{s=1}^{t} \mathds{1}\left\{ A_{t+1}=k, N_k(t-m)=s, \tau_m s d_{\mathrm{Pois}}(\hat{\theta}_{k,s},\theta_1) \leq \beta_\epsilon(t) \right\} \right]\\ & \leq K_k(T) + \mathds{E}\left[ \sum_{s=K_k(T)}^{T} \mathds{1}\left\{ \tau_m s d_{\mathrm{Pois}}(\hat{\theta}_{k,s},\theta_1) \leq \beta_\epsilon(t) \right\} \sum_{t = s} ^{T} \mathds{1}\left\{ A_{t+1}=k, N_k(t-m)=s\right\} \right]\\ &\leq K_k(T) + m \sum_{s\geq K_k(T)} \mathds{P} \left(\tau_m s d_{\mathrm{Pois}}(\hat{\theta}_{k,s},\theta_1) \leq \beta_\epsilon(t) \right)\\ &\leq K_k(T) + \frac{m C_2(\eta)}{T^{f(\eta)}}, \end{align*} where the last inequality comes from the fact that for all $s\in \{1,\ldots, T\}$, $\sum_{t = 1} ^{T} \mathds{1}\left\{ A_{t+1}=k, N_k(t-m)=s\right\} \leq m$, and from the proof of Fact 2 for exponential family bandits in \cite{cappe2013kullback}, which proves the existence of constants $C_2(\eta)$ and $f(\eta)$ that achieve the bound. We can now upper bound the regret thanks to the decomposition provided by Eq.~\eqref{eq:regret_th}: \[ L_{\textsc{klucb}}(T) \leq m + \sum_{k>1} \tau_m \Delta_k \mathds{E}\left[ N_k(T)\right] \leq (1+\eta) \beta_\epsilon(t)\sum_{k=2}^{K}\frac{\tau_m \Delta_k}{d_{\mathrm{Pois}}(\tau_m\theta_k,\tau_m\theta_1)} + o(\log(T)).
\] To obtain the final result, we use Lemma~\ref{lem:div_bound}, which shows that for $\theta_k<\theta_1$, $d_{\mathrm{Pois}}(\tau_m\theta_k,\tau_m\theta_1) > (1-\tau_m \theta_1) d(\tau_m \theta_k, \tau_m \theta_1)$. Thus, \[ L_{\textsc{klucb}}(T) \leq m + (1+\eta) \frac{\beta_\epsilon(t)}{1-\tau_m \theta_1} \sum_{k=2}^{K}\frac{\tau_m \Delta_k}{d(\tau_m\theta_k,\tau_m\theta_1)} + o(\log(T)). \] \section{Additional experiments on delay-agnostic policies} As a last additional contribution of this work, we suggest a distribution-agnostic heuristic that estimates the CDF parameters $(\tau_d)_{d\geq 0}$ in an online fashion. Indeed, as the delay distribution is assumed to be shared between actions, each observed reward provides information on the delays that can be exploited to estimate the CDF without having to deal with the exploration-exploitation dilemma. \paragraph{Uncensored setting. } In the uncensored setting and under the geometric assumption on the distribution of the delays, the entire CDF can be retrieved using an estimate of the unique parameter $\lambda = 1/\mu$. To this aim, we build an estimate of the expected delay at round $t$, $\hat{\mu}(t)$, using a stochastic approximation process with decreasing weights $\alpha_t= 1/t^\gamma$ for $0.5 \leq \gamma \leq 1$. When an observation $D_t$ arrives, we update \[ \hat{\mu}(t) \gets (1-\alpha_t) \hat{\mu}(t)+ \alpha_t D_t. \] Then we use this estimator as a plug-in quantity to compute $O_k(t)$, defined in \eqref{eq:O}, for all $k$. \paragraph{Censored setting. } In the censored setting, however, no observation comes after the threshold $m$, which prevents us from directly estimating the expected delay $\mu$, as the longest delays are censored. To circumvent this problem, we choose to estimate biased parameters for $\tau_1,\ldots,\tau_m$. Concretely, we initialize counts for the observed delay values $\delta_0 = (0,\ldots,0)\in \mathds{N}^{m+1}$ (a delay can be null).
Then, after each observation $D_t$, we increment all the counts $\delta_s$ for $s\geq D_t$. The biased empirical CDF is then obtained by normalizing those counts by the total number of observations received up to time $t$, denoted $n_d(t)$. We emphasize that the obtained estimators are biased: for each $s\in \{0,\ldots,m\}$, $\mathds{E}[\delta_s(t)/n_d(t)] = \tau_s/\tau_m$, as all observed delays are smaller than or equal to $m$. Thus, plugging those estimates into $\tilde{N}_k(t)$ actually yields an estimate of $\tilde{N}_k(t)/\tau_m$ instead of $\tilde{N}_k(t)$, and consequently an estimate of $\tau_m \theta_k$ instead of $\theta_k$. \begin{figure} \caption{Expected regret of \textsc DelayedKLUCB~ with and without online estimation of the CDF in both the censored and uncensored settings. Results are averaged over 100 independent runs.} \label{fig:est} \end{figure} Figure~\ref{fig:est} compares both our policies to their delay-agnostic equivalents using the same confidence intervals with plug-in estimates of the $(\tau_d)_{d\geq 0}$. It is clear from these experiments that using delay parameters estimated on the fly does not hurt the overall cumulative regret. \end{document}
\begin{document} \title{Strongly Quasi-local algebras and their $K$-theories} \author{Hengda Bao} \author{Xiaoman Chen} \author{Jiawen Zhang} \address{School of Mathematical Sciences, Fudan University, 220 Handan Road, Shanghai, 200433, China.} \email{[email protected], [email protected], [email protected]} \thanks{} \date{} \keywords{Roe algebras, Quasi-local algebras, Strong quasi-locality, Coarse embeddability} \baselineskip=16pt \begin{abstract} In this paper, we introduce a notion of strongly quasi-local algebras. They are defined for each discrete metric space with bounded geometry, and sit between the Roe algebra and the quasi-local algebra. We show that strongly quasi-local algebras are coarse invariants, hence encoding coarse geometric information of the underlying spaces. We prove that for a discrete metric space with bounded geometry which admits a coarse embedding into a Hilbert space, the inclusion of the Roe algebra into the strongly quasi-local algebra induces an isomorphism in $K$-theory. \end{abstract} \date{\today} \maketitle \parskip 4pt \section{Introduction} Roe algebras are $C^*$-algebras associated to metric spaces which encode coarse geometric information of the underlying spaces. They were introduced by J. Roe in his pioneering work on higher index theory on open manifolds \cite{roe1988:index-thm-on-open-mfds,Roe96}, in which he showed that the $K$-theory of Roe algebras serves as a receptacle for indices of elliptic differential operators. Hence the computation of their $K$-theories becomes a central problem in higher index theory. An efficient and practical approach is to employ the coarse Baum-Connes conjecture, which asserts that the coarse assembly map from the coarse $K$-homology of the space to the $K$-theory of the Roe algebra is an isomorphism \cite{roe1993coarse,Yu95}.
The coarse Baum-Connes conjecture has fruitful and significant applications in geometry and topology, for instance on the Novikov conjecture and the bounded Borel rigidity conjecture (see \emph{e.g.} \cite{guent-tess-yu2012:Borel-stuff,yu1998:Novikov-for-FAD,Yu00}), and on the non-existence of metrics of positive scalar curvature on open Riemannian manifolds (see \emph{e.g.} \cite{schick2014:ICM,yu1997:0-in-the-spec-PSC}). By definition, the Roe algebra $C^*(X)$ of a discrete metric space $(X,d)$ with bounded geometry is defined to be the norm closure of all locally compact operators $T \in \mathfrak{B}(\ell^2(X; \mathcal{H}))$ (where $\mathcal{H}$ is an infinite-dimensional separable Hilbert space) with finite propagation in the following sense: there exists $R>0$ such that for any $f,g \in \ell^\infty(X)$ acting on $\ell^2(X; \mathcal{H})$ by amplified pointwise multiplication, we have $fTg=0$ when their supports are $R$-disjoint (\emph{i.e.}, $d(\mathrm{supp} f, \mathrm{supp} g)>R$). Since general elements in Roe algebras may not have finite propagation, it is usually hard to detect whether a given operator belongs to them or not. To overcome this issue, J. Roe suggested an asymptotic version of finite propagation called quasi-locality in \cite{roe1988:index-thm-on-open-mfds, Roe96}. More precisely, an operator $T \in \mathfrak{B}(\ell^2(X; \mathcal{H}))$ is quasi-local if for any $\varepsilon>0$ there exists $R>0$ such that for any $f,g \in \ell^\infty(X)$ with $R$-disjoint supports, we have $\|fTg\|<\varepsilon$. We form the \emph{quasi-local algebra}\footnote{Note that a uniform version was already introduced in \cite{intro}.} $C^*_q(X)$ of $X$ as the $C^*$-algebra consisting of all locally compact and quasi-local operators in $\mathfrak{B}(\ell^2(X; \mathcal{H}))$, and show that they are coarse invariants. It is clear that operators with finite propagation are quasi-local, and hence the quasi-local algebra $C^*_q(X)$ contains the Roe algebra $C^*(X)$.
A natural question is to ask whether these two algebras coincide, which has been extensively studied over the last few decades \cite{engel2015rough,lange1985noethericity,LWZ2018,Roe96,ST19,SZ20}. Currently the most general result is due to \v{S}pakula and the third author \cite{SZ20}, which states that $C^*(X) = C^*_q(X)$ for any discrete metric space with bounded geometry and having Yu's property A. Here property A is a coarse geometric property introduced by Yu \cite{Yu00} in his study of the coarse Baum-Connes conjecture. However, the question remains widely open outside the world of property A \cite{structure,intro}. On the other hand, the property of quasi-locality is also crucial in the work of Engel \cite{engel2015rough} on index theory of pseudo-differential operators. He discovered that while indices of genuine differential operators on Riemannian manifolds live in the $K$-theory of (appropriate) Roe algebras, the indices of uniform pseudo-differential operators are only known to be in the $K$-theory of quasi-local algebras. Hence it is important to study whether the Roe algebra and the quasi-local algebra have the same $K$-theory. In this paper, we introduce a notion of strong quasi-locality and study the associated strongly quasi-local algebras. Our main focus is to study their $K$-theories, which might be a potential approach to attack the higher indices problem above. To illustrate the idea, let us explain the case when $X$ is uniformly discrete (\emph{i.e.}, there exists $C>0$ such that $d(x,y)>C$ for $x\neq y$). For the general case, see Section \ref{ssec:strong quasi-locality}. Fix an infinite-dimensional separable Hilbert space $\mathcal{H}$ and denote by $\mathfrak{K}(\mathcal{H})_1$ the unit ball of the compact operators on $\mathcal{H}$. We introduce the following: \begin{introdefn}\label{introdefn:strong quasi-locality} Let $X$ be a uniformly discrete metric space with bounded geometry and $T \in \mathfrak{B}(\ell^2(X; \mathcal{H}))$.
We say that $T$ is \emph{strongly quasi-local} if for any $\varepsilon > 0$ there exists $L >0$ such that for any $L$-Lipschitz map $g:X \rightarrow \mathfrak{K}(\mathcal{H})_1$, we have \begin{equation*} \big\| [T\otimes \mathrm{Id}_{\mathcal{H}} , \Lambda(g) ] \big\|< \varepsilon \end{equation*} where $\Lambda(g) \in \mathfrak{B}(\ell^2(X; \mathcal{H} \otimes \mathcal{H}))$ is defined by $\Lambda(g)(\delta_x \otimes \xi \otimes \eta):=\delta_x \otimes \xi \otimes g(x)\eta$ for $\delta_x \otimes \xi \otimes \eta \in \ell^2(X; \mathcal{H} \otimes \mathcal{H}) \cong \ell^2(X) \otimes \mathcal{H} \otimes \mathcal{H}$. \end{introdefn} Definition \ref{introdefn:strong quasi-locality} is inspired by a characterisation of quasi-locality provided in \cite{ST19}, which states that an operator $T \in \mathfrak{B}(\ell^2(X; \mathcal{H}))$ is quasi-local \emph{if and only if} for any $\varepsilon > 0$ there exists $L >0$ such that for any $L$-Lipschitz map $g:X \rightarrow \mathbb{C}$ with $\|g\|_\infty \leq 1$, we have $\|[T,g]\|<\varepsilon$. Hence the notion of strong quasi-locality can be regarded as a compact operator valued version of quasi-locality, and (as the name suggests) it strengthens the original notion. As in the case of quasi-locality, we form the \emph{strongly quasi-local algebra} $C^*_{sq}(X)$ as the $C^*$-algebra consisting of all locally compact and strongly quasi-local operators in $\mathfrak{B}(\ell^2(X; \mathcal{H}))$. We show that the strongly quasi-local algebra $C^*_{sq}(X)$ contains the Roe algebra $C^*(X)$, and is contained in the quasi-local algebra $C^*_q(X)$ (see Proposition \ref{prop:relations between Roe, QL and SQL}). We also study coarse geometric features of strongly quasi-local algebras, and show that they are coarse invariants as in the case of Roe algebras and quasi-local algebras (see Corollary \ref{cor:strong coarse invariance}).
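Let us record two elementary observations about Definition \ref{introdefn:strong quasi-locality} (included only for illustration). First, for a scalar function $g:X\to\mathbb{C}$ with $\|g\|_\infty\le 1$ and a fixed rank-one projection $p\in\mathfrak{K}(\mathcal{H})_1$, the map $\tilde{g}(x):=g(x)p$ is $L$-Lipschitz whenever $g$ is, and a direct computation gives
\[
[T\otimes \mathrm{Id}_{\mathcal{H}}, \Lambda(\tilde{g})] = [T,g]\otimes p,
\]
so strongly quasi-local operators are quasi-local by the characterisation above. Second, the compact operator valued tests are genuinely richer in form: for instance, if $\{\xi_x\}_{x\in X}$ is any family of unit vectors in $\mathcal{H}$ with $\|\xi_x-\xi_y\|\le \frac{L}{2}\, d(x,y)$, then the map $g$ sending $x$ to the rank-one projection onto $\mathbb{C}\xi_x$ is $L$-Lipschitz (as $\|g(x)-g(y)\|\le 2\|\xi_x-\xi_y\|$), but is not of the scalar form considered before.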
Our motivation for introducing strongly quasi-local algebras is that their $K$-theory is relatively easy to handle when the underlying space is coarsely embeddable. More precisely, we prove the following: \begin{introthm}\label{thm:main result} Let $X$ be a discrete metric space of bounded geometry. If $X$ admits a coarse embedding into a Hilbert space, then the inclusion of the associated Roe algebra $C^*(X)$ into the strongly quasi-local algebra $C^*_{sq}(X)$ induces an isomorphism in $K$-theory. \end{introthm} Theorem \ref{thm:main result} is the main result of this paper; it is inspired by the well-known theorem of Yu \cite{Yu00} that the coarse Baum-Connes conjecture holds for discrete bounded geometry spaces admitting a coarse embedding into Hilbert space. The proof of Theorem \ref{thm:main result} follows the outline of \cite[Section 12]{willett2020higher} (which originates in \cite{Yu00}), but is more involved and requires new techniques. We divide the proof into several steps; let us explain several key ingredients here. First we prove a coarse Mayer-Vietoris result for strongly quasi-local algebras (Proposition \ref{prop:M-V sequence for strong quasi-local algebras}), which allows us to cut the space and decompose the associated algebras. Recall that an analogous result for Roe algebras was already established in \cite{HRY93}. This reduces the proof of Theorem \ref{thm:main result} to the case of sequences of finite metric spaces with block-diagonal operators thereon (Lemma \ref{lem:reduce to block-diagonal}). We would like to highlight a technical lemma used to achieve the coarse Mayer-Vietoris result. Recall that for a quasi-local operator $T \in C^*_q(X)$, it is clear from the definition that the restriction $\chi_A T \chi_A$ belongs to $C^*_q(A)$ for any subspace $A$.
However, this is not obvious in the case of strongly quasi-local algebras due to certain obstructions on Lipschitz extension (see Remark \ref{rem:obstructions on Lipschitz extension}). To overcome this issue, we provide a characterisation of strong quasi-locality in terms of compact operator valued Higson functions (Proposition \ref{prop:strong quasi-locality iff Higson}). Note that these functions appeared in \cite[Section 4.2]{Wil09} in the study of the stable Higson corona and the Baum-Connes conjecture. Thanks to the extendability of Higson functions, we obtain a restriction result (Lemma \ref{lem:subspace-wise version}) as required. Moreover, by some delicate analysis, we obtain a ``uniform'' version (Proposition \ref{prop:subspace strong quasi-locality}) which plays a key role in the following steps. Then we construct a twisted version of strongly quasi-local algebras (Definition \ref{defn:twisted quasi-local}) for sequences of finite metric spaces, and show that the identity map on the $K$-theory of the strongly quasi-local algebra factors through the $K$-theory of its twisted counterpart (Proposition \ref{prop:strong quasi-local index map isom.}). To achieve this, we replace several propagation requirements for twisted Roe algebras by different versions of (strong) quasi-locality, and construct an index map in terms of the Bott-Dirac operators. We would like to point out that for the original quasi-local algebras, there is a technical issue in defining the index map (Lemma \ref{lem:commutator}) following the methods either in \cite[Lemma 7.6]{Yu00} or in \cite[Lemma 12.3.9]{willett2020higher}. Hence we have to move to the world of strong quasi-locality. Finally we prove that the inclusion map from the twisted Roe algebra into the twisted strongly quasi-local algebra induces an isomorphism in $K$-theory (Proposition \ref{prop:iso. of twisted algebras in $K$-theory}). Combining this with a diagram-chasing argument, we conclude the proof of Theorem \ref{thm:main result}.
Theorem \ref{thm:main result} should be regarded as a first step towards the problem of whether quasi-local algebras have the same $K$-theory as Roe algebras. More precisely, we pose the following open question: \begin{introques} Let $X$ be a metric space with bounded geometry which admits a coarse embedding into Hilbert space. Do we have $K_\ast(C^*_{sq}(X)) = K_\ast(C^*_q(X))$? \end{introques} The paper is organised as follows: In Section \ref{sec:pre}, we collect notions from coarse geometry and recall the definition of Roe algebras. We also define quasi-local algebras and show that they are coarse invariants. In Section \ref{sec:sq}, we introduce the main concept of this paper---strong quasi-locality---and study the basic properties and coarse geometric features of the associated strongly quasi-local algebras. Section \ref{sec:cMV} is devoted to the coarse Mayer-Vietoris sequence for (strongly) quasi-local algebras, based on which we reduce the proof of Theorem \ref{thm:main result} to the case of sequences of finite metric spaces. We introduce twisted strongly quasi-local algebras in Section \ref{sec:twisted}, and construct the index map in Section \ref{sec:index map}. In Section \ref{sec:loc isom}, we show that twisted Roe algebras and twisted strongly quasi-local algebras have the same $K$-theory, and hence conclude the proof in Section \ref{sec:proof of main thm}. The appendix provides a proof of Proposition \ref{prop:Psi function}, which slightly strengthens \cite[Proposition 12.1.10]{willett2020higher} and is needed for the main theorem; we give a detailed proof for the reader's convenience. \subsection*{Acknowledgments} We wish to thank Jinmin Wang and Rufus Willett for several helpful discussions, and Yijun Yao for useful comments after reading an early version of this paper. \section{Preliminaries}\label{sec:pre} We start with some notions and definitions. \subsection{Notions in coarse geometry} Here we collect several basic notions from coarse geometry.
\begin{defn}\label{defn:basic notions spaces} Let $(X,d_X)$ be a metric space, $A\subseteq X$ and $R\geq 0$. \begin{enumerate} \item $A$ is \emph{bounded} if its diameter $\mathrm{diam}(A):=\sup\{d_X(x,y): x,y\in A\}$ is finite. \item The \emph{$R$-neighbourhood of $A$ in $X$} is $\mathcal{N}_R(A):=\{x\in X: d_X(x,A) \leq R\}$. \item $A$ is a \emph{net} in $X$ if there exists some $C>0$ such that $\mathcal{N}_C(A)=X$. \item For $x_0\in X$, the open \emph{$R$-ball of $x_0$ in $X$} is $B(x_0;R):=\{x\in X: d_X(x_0,x)<R \}$. \item $(X,d_X)$ is said to be \emph{proper} if every closed bounded subset is compact. \item If $(X,d_X)$ is discrete, we say that $X$ has \emph{bounded geometry} if for any $r>0$ there exists an $N\in \mathbb{N}$ such that $|B(x;r)| \leq N$ for any $x\in X$, where $|B(x;r)|$ denotes the cardinality of the set $B(x;r)$. \end{enumerate} \end{defn} \begin{defn}\label{defn:basic notions maps} Let $f: (X,d_X) \to (Y,d_Y)$ be a map between metric spaces. \begin{enumerate} \item $f$ is \emph{uniformly expansive} if there exists a non-decreasing function $\rho_+: [0,\infty) \to [0,\infty)$ such that for any $x,y\in X$, we have: \[ d_Y(f(x),f(y)) \leq \rho_+(d_X(x,y)). \] \item $f$ is \emph{proper} if for any bounded $B\subseteq Y$, the pre-image $f^{-1}(B)$ is bounded in $X$. \item $f$ is \emph{coarse} if it is uniformly expansive and proper. \item $f$ is \emph{effectively proper} if there exists a proper non-decreasing function $\rho_-: [0,\infty) \to [0,\infty)$ such that for any $x,y\in X$, we have: \[ \rho_-(d_X(x,y)) \leq d_Y(f(x),f(y)). \] \item $f$ is a \emph{coarse embedding} if it is uniformly expansive and effectively proper. 
\end{enumerate} \end{defn} Note that $f$ being uniformly expansive is equivalent to the \emph{expansion function} $\rho_f :[0,\infty)\rightarrow [0,\infty]$ of $f$, defined as \begin{equation}\label{EQ:expansion function} \rho_f(s):=\sup\{d_Y(f(x),f(y)) : x,y\in X \mbox{~with~} d_X(x,y)\le s\}, \end{equation} being finite-valued. \begin{defn}\label{defn:basic notions coarse equivalence} Let $(X,d_X)$ and $(Y,d_Y)$ be metric spaces. \begin{enumerate} \item Two maps $f,g: (X,d_X) \to (Y,d_Y)$ are \emph{close} if there exists $R\ge 0$ such that for all $x\in X$, we have $d_Y(f(x),g(x))\le R$. \item A coarse map $f: (X,d_X) \to (Y,d_Y)$ is called a \emph{coarse equivalence} if there exists another coarse map $g: (Y,d_Y) \to (X,d_X)$ such that $f\circ g$ and $g\circ f$ are close to the identities; such a $g$ is called a \emph{coarse inverse} to $f$. It is clear that $f$ is a coarse equivalence \emph{if and only if} it is a coarse embedding and $f(X)$ is a net in $Y$. \item $(X,d_X)$ and $(Y,d_Y)$ are said to be \emph{coarsely equivalent} if there exists a coarse equivalence from $X$ to $Y$. \end{enumerate} \end{defn} For families of metric spaces and maps, we also need the following notions. \begin{defn}\label{defn:basic notions coarse disjoint union} Let $\{(X_n,d_{X_n})\}_{n\in \mathbb{N}}$ be a sequence of finite metric spaces. A \emph{coarse disjoint union} of $\{(X_n,d_{X_n})\}$ is a metric space $(X,d_X)$ where $X$ is the disjoint union of $\{X_n\}$ as a set, and $d_X$ is a metric on $X$ satisfying: \begin{itemize} \item the restriction of $d_X$ to $X_n$ coincides with $d_{X_n}$; \item $d_X(X_n,X\setminus X_n)\rightarrow \infty $ as $n\rightarrow \infty$. \end{itemize} Note that any two such metrics $d_X$ are coarsely equivalent. We say that a sequence $\{(X_n,d_{X_n})\}_{n\in \mathbb{N}}$ has \emph{uniformly bounded geometry} if its coarse disjoint union has bounded geometry.
\end{defn} \begin{defn}\label{defn:basic notions uniformly coarse embedding} A family of maps $\{f_i: X_i \to Y_i\}_{i\in I}$ between metric spaces is called a \emph{uniformly coarse embedding} if there are non-decreasing proper functions $\rho_\pm: [0,\infty) \to [0,\infty)$ such that \[ \rho_-(d_{X_i}(x,y)) \leq d_{Y_i}(f_i(x),f_i(y)) \leq \rho_+(d_{X_i}(x,y)) \] for all $i\in I$ and $x,y\in X_i$. We say that \emph{$\{X_i\}_{i\in I}$ uniformly coarsely embeds into Hilbert spaces} if there exists a uniformly coarse embedding $\{f_i: X_i \to E_i\}_{i\in I}$ where each $E_i$ is a Hilbert space. \end{defn} It is clear that a sequence of finite metric spaces $\{X_n\}_{n\in \mathbb{N}}$ uniformly coarsely embeds into Hilbert spaces \emph{if and only if} its coarse disjoint union $\bigsqcup_{n} X_n$ coarsely embeds into some Hilbert space. \subsection{Roe algebras and quasi-local algebras} For a proper metric space $(X,d_X)$, recall that an \emph{$X$-module} is a non-degenerate $\ast$-representation $C_0(X) \to \mathfrak{B}(\mathcal{H}_X)$ for some infinite-dimensional separable Hilbert space $\mathcal{H}_X$. We also say that $\mathcal{H}_X$ is an $X$-module if the representation is clear from the context. An $X$-module is called \emph{ample} if no non-zero element of $C_0(X)$ acts as a compact operator on $\mathcal{H}_X$. Note that every proper metric space $X$ admits an ample $X$-module. Let $\mathcal{H}_X$ and $\mathcal{H}_Y$ be ample modules of proper metric spaces $X$ and $Y$, respectively. Given an operator $T\in \mathfrak{B}(\mathcal{H}_X,\mathcal{H}_Y)$, the \emph{support} of $T$ is defined to be \[ \mathrm{supp}(T):=\big\{(y,x)\in Y\times X: \chi_V T\chi_U \neq 0 \mbox{~for~all~neighbourhoods~}U\mbox{~of~}x\mbox{~and~}V\mbox{~of~}y\big\}. \] When $X=Y$, the \emph{propagation} of $T \in \mathfrak{B}(\mathcal{H}_X)$ is defined to be \[ \mathrm{prop}(T):=\sup\{d_X(x,y): (x,y)\in \mathrm{supp}(T)\}. 
\] We say that an operator $T\in \mathfrak{B}(\mathcal{H}_X)$ has \emph{finite propagation} if $\mathrm{prop}(T)$ is finite, and that $T$ is \emph{locally compact} if $fT$ and $Tf$ are compact for all $f\in C_0(X)$ (equivalently, $\chi_K T$ and $T\chi_K$ are compact for every compact subset $K \subseteq X$). \begin{defn}\label{defn:Roe alg} For a proper metric space $X$ and an ample $X$-module $\mathcal{H}_X$, the \emph{algebraic Roe algebra} $\mathbb{C}[\mathcal{H}_X]$ of $\mathcal{H}_X$ is defined to be the $*$-algebra of locally compact finite propagation operators on $\mathcal{H}_X$, and the \emph{Roe algebra $C^*(\mathcal{H}_X)$} of $\mathcal{H}_X$ is defined to be the norm-closure of $\mathbb{C}[\mathcal{H}_X]$ in $\mathfrak{B}(\mathcal{H}_X)$. \end{defn} It is a standard result that the Roe algebra $C^*(\mathcal{H}_X)$ does not depend on the chosen ample module $\mathcal{H}_X$ up to $*$-isomorphism; it is hence denoted by $C^*(X)$ and called the \emph{Roe algebra of $X$}. Furthermore, $C^*(X)$ is a coarse invariant of the metric space $X$ (up to non-canonical $*$-isomorphisms), and its $K$-theory is a coarse invariant up to canonical isomorphisms (see, \emph{e.g.}, \cite{roe1993coarse}). Now we move on to the case of quasi-locality. \begin{defn}\label{defn:quasi-locality} Given a proper metric space $(X,d_X)$ and an ample $X$-module $\mathcal{H}_X$, an operator $T\in \mathfrak{B}(\mathcal{H}_X)$ is said to be \emph{quasi-local} if for any $\varepsilon>0$, there exists $R>0$ such that $T$ has \emph{$(\varepsilon,R)$-propagation}, \emph{i.e.}, for any Borel sets $A,B\subseteq X$ with $d_X(A,B)\geq R$, we have $\| \chi_A T\chi_B\| < \varepsilon$.
\end{defn} It is clear that the set of all locally compact quasi-local operators on $\mathcal{H}_X$ forms a $C^*$-subalgebra of $\mathfrak{B}(\mathcal{H}_X)$, which leads to the following: \begin{defn}\label{defn:quasi-local algebra} For a proper metric space $X$ and an ample $X$-module $\mathcal{H}_X$, the set of all locally compact quasi-local operators on $\mathcal{H}_X$ is called the \emph{quasi-local algebra of $\mathcal{H}_X$}, denoted by $C^*_q(\mathcal{H}_X)$. \end{defn} As in the case of Roe algebras, we now show that quasi-local algebras do not depend on the chosen ample modules either. Let $X$ and $Y$ be proper metric spaces with ample modules $\mathcal{H}_X$ and $\mathcal{H}_Y$, respectively, and let $f:X \to Y$ be a coarse map. Recall that a \emph{covering isometry} for $f$ is an isometry $V:\mathcal{H}_X \rightarrow \mathcal{H}_Y$ such that $\mathrm{supp}(V)\subseteq \{(y,x):d_Y(y,f(x))\le C\}$ for some $C\ge 0$. In this case, we also say that $V$ \emph{covers $f$}. It is shown in \cite[Proposition 4.3.4]{willett2020higher} that covering isometries always exist. Following the case of Roe algebras, we have: \begin{prop}\label{prop:coarse invariance} Let $\mathcal{H}_X$ and $\mathcal{H}_Y$ be ample modules for proper metric spaces $X$ and $Y$, respectively, and let $f: X \to Y$ be a coarse map with a covering isometry $V: \mathcal{H}_X\rightarrow \mathcal{H}_Y$. Then $V$ induces the following $\ast$-homomorphism: \[ \mathrm{Ad}_V: C^*_q(\mathcal{H}_X)\longrightarrow C^*_q(\mathcal{H}_Y), \quad T\mapsto VTV^*. \] Furthermore, the induced $K$-theoretic map $(\mathrm{Ad}_V)_*: K_*(C^*_q(\mathcal{H}_X)) \to K_*(C^*_q(\mathcal{H}_Y))$ does not depend on the choice of the covering isometry $V$, and is hence denoted by $f_*$. \end{prop} \begin{proof} Note that there exists a Borel coarse map close to $f$ by \cite[Lemma A.3.12]{willett2020higher}; hence without loss of generality, we can assume that $f$ is Borel and $V$ covers $f$.
Following the same argument as in the Roe case (see, \emph{e.g.}, \cite[Lemma 5.1.12]{willett2020higher}), $VTV^*$ is locally compact. Fix a $t_0>0$ such that $\mathrm{supp}(V)\subseteq\{(y,x) :d_Y(y,f(x))< t_0\}$. For any $\varepsilon>0$, the quasi-locality of $T$ implies that there exists an $R_0>0$ such that $T$ has $(\varepsilon,R_0)$-propagation. We set $R=2t_0 +\rho_f(R_0)+1$, where $\rho_f$ is defined in Equation (\ref{EQ:expansion function}). For any Borel sets $C,D\subseteq Y$ with $d_Y(C,D)\ge R$, it is clear that $d_Y(\mathcal{N}_{t_0}(C),\mathcal{N}_{t_0}(D))\ge \rho_f(R_0)+1> \rho_f(R_0)$ and hence $d_X(f^{-1}(\mathcal{N}_{t_0}(C)),f^{-1}(\mathcal{N}_{t_0}(D)))\ge R_0$. Since $V$ covers $f$, we obtain: \[ \chi_CV=\chi_CV\chi_{f^{-1}(\mathcal{N}_{t_0}(C))} \quad \mathrm{and} \quad V^*\chi_D=\chi_{f^{-1}(\mathcal{N}_{t_0}(D))}V^*\chi_D. \] Hence \[ \| \chi_C VTV^*\chi_D\| = \| \chi_CV\chi_{f^{-1}(\mathcal{N}_{t_0}(C))}T\chi_{f^{-1}(\mathcal{N}_{t_0}(D))}V^*\chi_D\| \leq \|\chi_{f^{-1}(\mathcal{N}_{t_0}(C))}T\chi_{f^{-1}(\mathcal{N}_{t_0}(D))}\| < \varepsilon, \] which implies that $VTV^*$ is quasi-local. The second statement follows from almost the same argument as in the case of Roe algebras (see, \emph{e.g.}, \cite[Lemma 5.1.12]{willett2020higher}), hence is omitted. \end{proof} It is shown in \cite[Proposition 4.3.5]{willett2020higher} that for a coarse equivalence $f: X \to Y$, we can always choose an isometry $V: \mathcal{H}_X \to \mathcal{H}_Y$ covering $f$ which is in fact a unitary. Consequently, we obtain the following: \begin{cor}\label{cor:coarse invariance} Let $\mathcal{H}_X$ and $\mathcal{H}_Y$ be ample modules for proper metric spaces $X$ and $Y$, respectively. If $X$ and $Y$ are coarsely equivalent, then the quasi-local algebra $C_q^*(\mathcal{H}_X)$ is $\ast$-isomorphic to $C_q^*(\mathcal{H}_Y)$.
In particular, for a proper metric space $X$ the quasi-local algebra $C_q^*(\mathcal{H}_X)$ does not depend on the chosen ample $X$-module $\mathcal{H}_X$ up to $\ast$-isomorphisms; it is hence called \emph{the quasi-local algebra of $X$} and denoted by $C^*_q(X)$. \end{cor} \section{Strongly quasi-local algebras}\label{sec:sq} In this section, we introduce a new class of operator algebras called strongly quasi-local algebras. They sit between Roe algebras and quasi-local algebras, and their $K$-theories will be the main focus of the paper. Here we study their basic properties and coarse geometric features. Let us begin with some more notions: \begin{defn}\label{defn:variation} Let $(X,d_X), (Y,d_Y)$ be metric spaces and $g:X\rightarrow Y$ be a map. \begin{enumerate} \item Given $L>0$, we say that $g$ is \emph{$L$-Lipschitz} if $d_Y(g(x),g(y)) \le Ld_X(x,y)$ for any $x,y\in X$. \item Given $A\subseteq X$, $\varepsilon>0$ and $R>0$, we say that $g$ has \emph{$(\varepsilon,R)$-variation on $A$} if for any $x,y\in A$ with $d_X(x,y) < R$, we have $d_Y(g(x),g(y))< \varepsilon$. When $A=X$, we simply say that $g$ has \emph{$(\varepsilon,R)$-variation}. \end{enumerate} \end{defn} \begin{defn}\label{defn:higson function} Let $g: X \to \mathbb{C}$ be a continuous function on a metric space $(X,d_X)$. \begin{enumerate} \item We say that $g$ is \emph{bounded} if its norm $\|g\|_\infty:=\sup_{x\in X}|g(x)| < \infty$. Denote the set of all bounded continuous functions on $X$ by $C_b(X)$, and by $C_b(X)_1$ the subset consisting of functions with norm at most $1$. \item We say that $g$ is a \emph{Higson function} if $g\in C_b(X)$ and for any $\varepsilon>0$ and $R>0$, there exists a compact subset $K\subset X$ such that $g$ has $(\varepsilon,R)$-variation on $X\setminus K$. Denote by $C_h(X)$ the set of all Higson functions on $X$. \end{enumerate} \end{defn} Our notion of strong quasi-locality is inspired by the following result, which is partially taken from \cite[Theorem 2.8]{ST19}.
Recall that for operators $T,S\in \mathfrak{B}(\mathcal{H})$ on some Hilbert space $\mathcal{H}$, their \emph{commutator} is defined to be $[T,S]:=TS-ST$. \begin{prop}\label{prop:pictures of quasi-locality} Let $X$ be a proper metric space, $\mathcal{H}_X$ an ample $X$-module and $T\in\mathfrak{B}(\mathcal{H}_X)$ a locally compact operator. Then the following are equivalent: \begin{enumerate} \item $T$ is quasi-local in the sense of Definition \ref{defn:quasi-locality}; \item For any $\varepsilon>0$, there exists $L>0$ such that for any $L$-Lipschitz function $g\in C_b(X)_1$ we have $\|[T,g]\|<\varepsilon$; \item For any $\varepsilon>0$, there exist $\delta,R>0$ such that for any function $g\in C_b(X)_1$ with $(\delta,R)$-variation we have $\|[T,g]\|<\varepsilon$; \item $[T,h]$ is a compact operator for any $h\in C_h(X)$. \end{enumerate} \end{prop} Note that the equivalence among (1), (2) and (4) is the ``easier'' part of \cite[Theorem 2.8]{ST19}. Also note that the equivalence between (1) and (3) can be proved using the same argument as the one therein showing ``(1) $\Leftrightarrow$ (2)'', hence is omitted. \subsection{Strong quasi-locality}\label{ssec:strong quasi-locality} Now we introduce the notion of strong quasi-locality, where we consider compact operator valued functions instead of the complex valued ones in Proposition \ref{prop:pictures of quasi-locality}(3). \emph{Throughout the rest of the paper, we only consider proper discrete metric spaces to simplify the notation. We also fix an infinite-dimensional separable Hilbert space $\mathcal{H}$.} Let $X$ be a proper discrete metric space and $\mathcal{H}_X$ be an ample $X$-module. For each $x\in X$, denote $\mathcal{H}_x:= \chi_{ \{x\} } \mathcal{H}_X$.
An operator $S\in \mathfrak{B}(\mathcal{H}_X \otimes \mathcal{H})$ can be regarded as an $X$-by-$X$ matrix $(S_{x,y})_{x,y\in X}$, where $S_{x,y} \in \mathfrak{B}(\mathcal{H}_{y} \otimes \mathcal{H} , \mathcal{H}_{x} \otimes \mathcal{H})$. Denote by $\mathfrak{K}(\mathcal{H})$ the $C^*$-algebra of compact operators on $\mathcal{H}$, and by $\mathfrak{K}(\mathcal{H})_1$ its closed unit ball (with respect to the operator norm). Recall that a map $g:X \rightarrow \mathfrak{K}(\mathcal{H})$ is \emph{bounded} if $\|g\|_\infty:=\sup_{x\in X}\|g(x)\|<\infty$. Given a bounded map $g:X \rightarrow \mathfrak{K}(\mathcal{H})$, we define an operator $\Lambda(g)\in \mathfrak{B}(\mathcal{H}_X \otimes \mathcal{H})$ by setting its matrix entries as follows: \begin{eqnarray}\label{EQ:Lambda} \Lambda(g)_{x,y}:= \begin{cases} ~\mathrm{Id}_{\mathcal{H}_x} \otimes g(x), & y=x; \\ ~0, & \mbox{otherwise}. \end{cases} \end{eqnarray} Note that this is a block-diagonal operator with respect to the decomposition $\mathcal{H}_X\otimes \mathcal{H}=\bigoplus_{x\in X} (\mathcal{H}_x\otimes \mathcal{H})$. We also write $\Lambda_{\mathcal{H}_X}(g)$ instead of $\Lambda(g)$ when we want to emphasise the module $\mathcal{H}_X$ involved. The following is the main concept of this paper: \begin{defn}\label{defn:strongly quasi-local algebra} Let $X$ be a proper discrete metric space and $\mathcal{H}_X$ be an ample $X$-module. An operator $T\in \mathfrak{B}(\mathcal{H}_X)$ is called \emph{strongly quasi-local} if for any $\varepsilon > 0$ there exist $\delta, R >0$ such that for any map $g:X \rightarrow \mathfrak{K}(\mathcal{H})_1$ with $(\delta, R)$-variation, we have \begin{equation}\label{EQ:commut strong quasi-local} \big\| [T\otimes \mathrm{Id}_{\mathcal{H}} , \Lambda(g) ] \big\|_{ \mathfrak{B} (\mathcal{H}_X \otimes \mathcal{H}) }< \varepsilon.
\end{equation} It is easy to see that the set of all locally compact strongly quasi-local operators on $\mathcal{H}_X$ forms a $C^*$-algebra, hence called the \emph{strongly quasi-local algebra of $\mathcal{H}_X$} and denoted by $C^*_{sq}(\mathcal{H}_X)$. \end{defn} \begin{rem}\label{rem:entry for strong quasi-local} A direct calculation shows that for $x,y\in X$, the $xy$-matrix entry of the commutator $[T\otimes \mathrm{Id}_{\mathcal{H}} , \Lambda(g) ]$ in Inequality (\ref{EQ:commut strong quasi-local}) is given by: \begin{equation}\label{EQ:entry for strong quasi-local} [T\otimes \mathrm{Id}_{\mathcal{H}} , \Lambda(g) ]_{x,y}=T_{x,y} \otimes \big( g(y)-g(x) \big). \end{equation} \end{rem} The following result records the relations amongst Roe algebras, quasi-local algebras and strongly quasi-local algebras. \begin{prop}\label{prop:relations between Roe, QL and SQL} Let $X$ be a proper discrete metric space and $\mathcal{H}_X$ be an ample $X$-module. Then we have: \begin{enumerate} \item $C^*_{sq}(\mathcal{H}_X) \subseteq C^*_q(\mathcal{H}_X)$; \item If $X$ has bounded geometry, then $C^*(\mathcal{H}_X) \subseteq C^*_{sq}(\mathcal{H}_X)$; \item If $X$ has bounded geometry and Property A, then $C^*(\mathcal{H}_X) = C^*_{sq}(\mathcal{H}_X) = C^*_q(\mathcal{H}_X)$. \end{enumerate} \end{prop} \begin{proof} (1). Fix a rank-one projection $p \in \mathfrak{B}(\mathcal{H})$. For $g\in C_b(X)_1$, we construct $\tilde{g}: X \rightarrow \mathfrak{K}(\mathcal{H})_1$ by $\tilde{g}(x):=g(x)p$. Since $[T\otimes \mathrm{Id}_{\mathcal{H}} , \Lambda(\tilde{g})] = [T,g] \otimes p$, the conclusion follows from the definition of strong quasi-locality and Proposition \ref{prop:pictures of quasi-locality}(3). (2). Assume that $T \in \mathfrak{B}(\mathcal{H}_X)$ has propagation at most $R$.
Then for any $g:X\rightarrow \mathfrak{K}(\mathcal{H})_1$, the commutator $[T\otimes \mathrm{Id}_{\mathcal{H}} , \Lambda(g)]$ has propagation at most $R$ by (\ref{EQ:entry for strong quasi-local}). Since $X$ has bounded geometry, it is well-known (see, \emph{e.g.}, \cite[Lemma 12.2.4]{willett2020higher}) that there exists an $N$ depending on $R$ such that for any $g:X\rightarrow \mathfrak{K}(\mathcal{H})_1$ we have: \[ \big\|[T\otimes \mathrm{Id}_{\mathcal{H}} , \Lambda(g)]\big\| \leq N \cdot \sup_{x,y\in X\atop d(x,y)\leq R} \big\|T_{x,y} \otimes \big( g(y)-g(x) \big)\big\|. \] This concludes the proof. (3). It follows from \cite[Theorem 3.3]{SZ20} that $C^*(\mathcal{H}_X) = C^*_q(\mathcal{H}_X)$ under the given assumption, which (together with (1) and (2)) concludes the proof. \end{proof} Our next aim is to explore characterisations of strong quasi-locality as in Proposition \ref{prop:pictures of quasi-locality}. First note that Definition \ref{defn:strongly quasi-local algebra} is a compact operator valued version of condition (3) therein. Unfortunately, we cannot find an appropriate substitute for condition (1) in Proposition \ref{prop:pictures of quasi-locality}. As for condition (2) therein, it is clear that the compact operator valued version is equivalent to strong quasi-locality provided the underlying space is uniformly discrete (\emph{i.e.}, there exists $C>0$ such that $d(x,y)>C$ for $x\neq y$ in $X$); however, it is unclear whether this holds in general. As for condition (4) in Proposition \ref{prop:pictures of quasi-locality}, we have the following result concerning compact operator valued Higson functions. Recall that a compact operator valued function $h: X \to \mathfrak{K}(\mathcal{H})$ on a metric space $X$ is a \emph{Higson function} if $h$ is bounded and for any $\varepsilon>0$ and $R>0$, there exists a compact subset $K\subset X$ such that $h$ has $(\varepsilon,R)$-variation on $X\setminus K$.
\begin{prop}\label{prop:strong quasi-locality iff Higson} Let $X$ be a discrete metric space of bounded geometry and $\mathcal{H}_X$ an ample $X$-module. Then for a locally compact operator $T\in\mathfrak{B}(\mathcal{H}_X)$, the following are equivalent: \begin{enumerate} \item $T$ is strongly quasi-local; \item $[T\otimes \mathrm{Id}_{\mathcal{H}} , \Lambda(h)]\in \mathfrak{K}(\mathcal{H}_X\otimes\mathcal{H})$ for any Higson function $h: X \to \mathfrak{K}(\mathcal{H})$. \end{enumerate} \end{prop} The proof of Proposition~\ref{prop:strong quasi-locality iff Higson} is almost identical to that of \cite[Theorem 2.8 ``(1) $\Leftrightarrow$ (3)'']{ST19} with minor changes, hence is omitted. \subsection{Strong quasi-locality on subspaces}\label{ssec:Strong quasi-locality on subspaces} In this subsection, we study the behaviour of strong quasi-locality under taking subspaces. First note that in the case of quasi-locality, we have the following observation (which follows directly from Definition \ref{defn:quasi-locality}): given a proper discrete metric space $X$ and an ample module $\mathcal{H}_X$, for any quasi-local operator $T \in \mathfrak{B}(\mathcal{H}_X)$ and any $\varepsilon>0$ there exists $R>0$ such that for any $A\subseteq X$ the operator $\chi_A T \chi_A$ has $(\varepsilon,R)$-propagation. In other words, quasi-locality is preserved ``uniformly'' under taking subspaces. Now we focus on the case of strongly quasi-local operators, and show that they have similar behaviour when taking subspaces. However, the proof is more involved due to the lack of a characterisation in terms of $(\varepsilon,R)$-propagation. \begin{prop}\label{prop:subspace strong quasi-locality} Let $X$ be a discrete metric space with bounded geometry and $\mathcal{H}_X$ an ample $X$-module. Assume $T\in\mathfrak{B}(\mathcal{H}_X)$ is locally compact and strongly quasi-local.
Then for any $\varepsilon >0$, there exist $\delta, R>0$ such that for any $A\subseteq X$ and $g: A\rightarrow \mathfrak{K}(\mathcal{H})_1$ with $(\delta, R)$-variation, we have $\big\| [(\chi_A T\chi_A) \otimes \mathrm{Id}_{\mathcal{H}} , \Lambda(g) ] \big\|_{ \mathfrak{B} (\mathcal{H}_A \otimes \mathcal{H}) }< \varepsilon$, where $\chi_A T \chi_A$ is naturally regarded as an operator on $\mathcal{H}_A:=\chi_A\mathcal{H}_X$. \end{prop} \begin{rem}\label{rem:obstructions on Lipschitz extension} A natural approach to the proof would be to extend a function $g: A\rightarrow \mathfrak{K}(\mathcal{H})_1$ to $X$ while preserving its variation (or at least with controlled variation). However (as pointed out by Rufus Willett \cite{Wil21}), this is at least as hard as finding extensions with values in a Hilbert space. The problem of extending Hilbert space valued functions is fairly well-studied \cite{Kir34}, and there are known obstructions. In the following, we will bypass the problem using Proposition \ref{prop:strong quasi-locality iff Higson}. \end{rem} First we prove a ``subspace-wise'' version of Proposition \ref{prop:subspace strong quasi-locality} (note the difference in the order of quantifiers). To simplify the notation, for $A \subseteq X$ we will regard the characteristic function $\chi_A$ either as the multiplication operator on $\mathcal{H}_X$ or as the amplified multiplication operator $\chi_A \otimes \mathrm{Id}_{\mathcal{H}}$ on $\mathcal{H}_X \otimes \mathcal{H}$ according to the context. \begin{lem}\label{lem:subspace-wise version} Let $X$ be a discrete metric space with bounded geometry and $\mathcal{H}_X$ an ample $X$-module. Assume that $T\in\mathfrak{B}(\mathcal{H}_X)$ is locally compact and strongly quasi-local.
Then for any $A\subseteq X$ and $\varepsilon >0$, there exist $\delta, R>0$ such that for any $g: A\rightarrow \mathfrak{K}(\mathcal{H})_1$ with $(\delta, R)$-variation, we have $\big\| [(\chi_A T\chi_A) \otimes \mathrm{Id}_{\mathcal{H}} , \Lambda(g) ] \big\|_{ \mathfrak{B} (\mathcal{H}_A \otimes \mathcal{H}) }< \varepsilon$. \end{lem} \begin{proof} By Proposition \ref{prop:strong quasi-locality iff Higson}, we know that $[T\otimes \mathrm{Id}_{\mathcal{H}} , \Lambda(h)]\in \mathfrak{K}(\mathcal{H}_X\otimes\mathcal{H})$ for any Higson function $h: X \to \mathfrak{K}(\mathcal{H})$. Now fix a subspace $A \subseteq X$. For any Higson function $g: A \to \mathfrak{K}(\mathcal{H})$, it follows from \cite[Lemma 4.3.4]{Wil09} that $g$ can be extended to a Higson function $\tilde{g}: X \to \mathfrak{K}(\mathcal{H})$. Hence we obtain: \[ [(\chi_A T\chi_A) \otimes \mathrm{Id}_{\mathcal{H}} , \Lambda(g) ]= \chi_A [T \otimes \mathrm{Id}_{\mathcal{H}} , \Lambda(\tilde{g}) ]\chi_A \in \chi_A \mathfrak{K}(\mathcal{H}_X\otimes\mathcal{H}) \chi_A \subseteq \mathfrak{K}(\mathcal{H}_A\otimes\mathcal{H}). \] Using Proposition \ref{prop:strong quasi-locality iff Higson} again, we obtain that $\chi_A T\chi_A$ is strongly quasi-local on $\mathcal{H}_A$. This concludes the proof. \end{proof} \begin{proof}[Proof of Proposition~\ref{prop:subspace strong quasi-locality}] Fix a base point $x_0 \in X$, and write $B_S$ for $B(x_0;S)$ where $S>0$. Assume the contrary; then there exists some $\varepsilon_0>0$ such that for each $n\in\mathbb{N}$, there exist $A_n\subseteq X$ and $g_n:A_n\rightarrow \mathfrak{K}(\mathcal{H})_1$ with $(\frac{1}{n},n)$-variation on $A_n$ such that \begin{equation}\label{EQ:subspace1} \big\|[\chi_{A_n}T\chi_{A_n}\otimes \mathrm{Id}_{\mathcal{H}}, \Lambda(g_n)] \big\|>\varepsilon_0. \end{equation} Without loss of generality, we can assume that each $A_n$ is finite.
For the above $\varepsilon_0$, there exists $R_0>0$ such that $T$ has $(\frac{\varepsilon_0}{8},R_0)$-propagation. \emph{Claim.} For any $R > R_0$, there exists $N\in\mathbb{N}$ such that for any $n\ge N$ we have: \[ \big\|[\chi_{A_n\setminus B_R}T\chi_{A_n\setminus B_R}\otimes \mathrm{Id}_{\mathcal{H}H} , \Lambda(g_n)]\big\| > \frac{\varepsilon_0}{8}. \] Assume the contrary, \emph{i.e.}, that there exist some $R > R_0$ and an increasing sequence $(n_k)_{k=1}^\infty \subseteq \mathbb{N}$ tending to infinity such that \[ \big\|[\chi_{A_{n_k}\setminus B_{R}} T\chi_{A_{n_k}\setminus B_{R}}\otimes \mathrm{Id}_{\mathcal{H}H} ,\Lambda(g_{n_k})]\big\| \le \frac{\varepsilon_0}{8}. \] Since $d_X(B_{R}, X\setminus B_{2R}) \ge R >R_0$ we obtain: \[ \big\|\chi_{A_{n_k}\cap B_{R}}T\chi_{A_{n_k}\setminus B_{2R}}\big\| \le \frac{\varepsilon_0}{8} \quad \mbox{and} \quad \big\|\chi_{A_{n_k}\setminus B_{2R}}T \chi_{A_{n_k}\cap B_{R}}\big\| \le \frac{\varepsilon_0}{8}. \] Now we cut up the operator $\chi_{A_{n_k}}T\chi_{A_{n_k}}$ as follows: \begin{eqnarray*} \chi_{A_{n_k}}T\chi_{A_{n_k}} &=& \chi_{A_{n_k}\cap B_{2R}}T\chi_{A_{n_k}\cap B_{2R}} + \chi_{A_{n_k}\setminus B_{R}} T\chi_{A_{n_k}\setminus B_{2R}} \\ & & + \chi_{A_{n_k}\setminus B_{2R}} T\chi_{A_{n_k}\cap (B_{2R}\setminus B_{R})} +\chi_{A_{n_k}\cap B_{R}}T\chi_{A_{n_k}\setminus B_{2R}} +\chi_{A_{n_k}\setminus B_{2R}}T \chi_{A_{n_k}\cap B_{R}}. \end{eqnarray*} Combining the above inequalities with (\ref{EQ:subspace1}), we obtain: \[ \big\|[\chi_{A_{n_k}\cap B_{2R}}T\chi_{A_{n_k}\cap B_{2R}} \otimes \mathrm{Id}_{\mathcal{H}H} , \Lambda(g_{n_k})]\big\| > \varepsilon_0 - 2 \cdot \frac{\varepsilon_0}{8}- 2 \cdot \frac{\varepsilon_0}{4} =\frac{\varepsilon_0}{4}, \] which is a contradiction since $A_{n_k}\cap B_{2R}$ is contained in a fixed finite subset $B_{2R}$ and $g_{n_k}$ has $(\frac{1}{n_k},n_k)$-variation on $A_{n_k}\cap B_{2R}$. This proves the Claim.
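To spell out the final contradiction (an elementary elaboration using only the definition of $\Lambda$, recorded here for the reader's convenience): write $S_k:=\chi_{A_{n_k}\cap B_{2R}}T\chi_{A_{n_k}\cap B_{2R}}$, fix any $x^\ast \in A_{n_k}\cap B_{2R}$ and let $h$ be the constant function with value $g_{n_k}(x^\ast)$. Since $\Lambda(h)=\mathrm{Id}\otimes g_{n_k}(x^\ast)$ commutes with $S_k\otimes \mathrm{Id}_{\mathcal{H}H}$, once $n_k\ge 4R \ (\ge \mathrm{diam}(B_{2R}))$ the $(\frac{1}{n_k},n_k)$-variation of $g_{n_k}$ yields \[ \big\|[S_k\otimes \mathrm{Id}_{\mathcal{H}H},\Lambda(g_{n_k})]\big\| = \big\|[S_k\otimes \mathrm{Id}_{\mathcal{H}H},\Lambda(g_{n_k}-h)]\big\| \le 2\|T\|\cdot \sup_{x\in A_{n_k}\cap B_{2R}}\big\|g_{n_k}(x)-g_{n_k}(x^\ast)\big\| \le \frac{2\|T\|}{n_k}, \] where only the values of $g_{n_k}-h$ on $A_{n_k}\cap B_{2R}$ contribute, as $S_k=\chi_{A_{n_k}\cap B_{2R}}S_k\chi_{A_{n_k}\cap B_{2R}}$. This is smaller than $\frac{\varepsilon_0}{4}$ for $n_k$ large.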
Now we continue the proof of Proposition~\ref{prop:subspace strong quasi-locality}. Set $\tilde{A}_1:=A_1, n_1:=1$ and choose $R_1>R_0$ such that $\tilde{A}_1 \subseteq B_{R_1-2}$. We recursively choose subsets $\tilde{A}_1, \tilde{A}_2, \ldots$, positive numbers $R_1<R_2<\cdots$ and natural numbers $n_1<n_2<\cdots$ as follows. Suppose that $\tilde{A}_1, \ldots, \tilde{A}_{i-1}$, $R_1< \cdots<R_{i-1}$ and $n_1<\cdots<n_{i-1}$ are chosen for $i\ge 2$. The Claim implies that there exists a natural number $n_i>n_{i-1}$ such that \[ \big\|[\chi_{A_{n_i}\setminus B_{R_{i-1}}}T\chi_{A_{n_i}\setminus B_{R_{i-1}}}\otimes \mathrm{Id}_{\mathcal{H}H} , \Lambda(g_{n_i})]\big\| > \frac{\varepsilon_0}{8}. \] We take $\tilde{A}_i:=A_{n_i}\setminus B_{R_{i-1}}$ (which is non-empty by the above estimate) and choose $R_i>R_{i-1}$ such that $\tilde{A}_1\sqcup\cdots\sqcup\tilde{A}_i \subseteq B_{R_i-2^i}$. In summary, we obtain non-empty subsets $\{\tilde{A}_i\}_{i\in \mathbb{N}}$ and functions $\hat{g_i}:=g_{n_i}|_{\tilde{A}_i}: \tilde{A}_i \to \mathfrak{K}(\mathcal{H}H)_1$ with $(\frac{1}{n_i},n_i)$-variation such that \[ \big\|[\chi_{\tilde{A}_i}T\chi_{\tilde{A}_i}\otimes \mathrm{Id}_{\mathcal{H}H} , \Lambda(\hat{g_i})]\big\| > \frac{\varepsilon_0}{8}. \] Define $A:=\bigsqcup_{i\in \mathbb{N}} \tilde{A}_i$ and extend each $\hat{g_i}$ to $A$ by zero on the complement (still denoted by $\hat{g_i}$). It is clear from the above construction that $d_X(\tilde{A}_i , A\setminus \tilde{A}_i)\ge 2^{i-1}$, and hence $\hat{g_i}$ has $(\frac{1}{i},i)$-variation on $A$. Moreover, we have: \[ \big\|[\chi_AT\chi_A\otimes \mathrm{Id}_{\mathcal{H}H} , \Lambda(\hat{g_i})]\big\| > \frac{\varepsilon_0}{8}. \] This contradicts Lemma \ref{lem:subspace-wise version}, and hence we conclude the proof.
\end{proof} \subsection{Coarse invariance of strongly quasi-local algebras} In this subsection, we show that strongly quasi-local algebras are coarse invariants provided the underlying spaces have bounded geometry. In particular, this implies that strongly quasi-local algebras are independent of the choice of ample modules. The proof follows the outline of that for Proposition \ref{prop:coarse invariance} but is more involved. \begin{prop}\label{prop:strong coarse invariance} Let $X,Y$ be discrete metric spaces with bounded geometry and $\mathcal{H}_X, \mathcal{H}_Y$ be ample modules for $X$ and $Y$, respectively. Let $f: X \to Y$ be a coarse map with a covering isometry $V: \mathcal{H}_X\rightarrow \mathcal{H}_Y$. Then $V$ induces the following $\ast$-homomorphism \[ \mathrm{Ad}_V: C^*_{sq}(\mathcal{H}_X)\longrightarrow C^*_{sq}(\mathcal{H}_Y), T\mapsto VTV^*. \] Furthermore, the induced $K$-theoretic map $(\mathrm{Ad}_V)_*: K_*(C^*_{sq}(\mathcal{H}_X)) \to K_*(C^*_{sq}(\mathcal{H}_Y))$ does not depend on the choice of the covering isometry $V$, and is hence denoted by $f_*$. \end{prop} \begin{proof} We only show that $VTV^* \in C^*_{sq}(\mathcal{H}_Y)$ if $T\in C^*_{sq}(\mathcal{H}_X)$. The ``Furthermore'' part follows from almost the same argument as in the case of Roe algebras, and is hence omitted. First note that $VTV^*$ is locally compact as in Proposition~\ref{prop:coarse invariance}. To see that $VTV^*$ is strongly quasi-local, we assume that $\mathrm{supp} (V) \subseteq \{(y,x):d_Y(f(x),y)<R_0\}$ for some $R_0>0$. Since $Y$ has bounded geometry, there exists $N\in \mathbb{N}$ such that $\big|\{y\in Y:d_Y(f(x),y)<R_0\}\big|\le N$ for any $x\in X$. Hence we can write: \[ V=W_1 + W_2 + \cdots +W_N \] where each $W_i\in \mathfrak{B}(\mathcal{H}_X,\mathcal{H}_Y)$ satisfies $\mathrm{supp}(W_i) \subseteq \mathrm{supp} (V)$, $\mathrm{supp}(W_i) \cap \mathrm{supp}(W_j)$ is empty for any $j\neq i$, and for any pair $(y_1,x_1)\neq (y_2,x_2) \in \mathrm{supp}(W_i)$ we have $x_1 \ne x_2$.
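Such a decomposition can be constructed explicitly (a routine bounded geometry argument; the enumeration below is merely one possible choice): for each $x\in X$, enumerate the finite set $\{y\in Y: (y,x)\in \mathrm{supp}(V)\}$ as $y_1(x),\ldots,y_{N_x}(x)$ with $N_x\le N$, and set \[ W_i:=\sum_{x\in X,\, i\le N_x} \chi_{\{y_i(x)\}}\,V\,\chi_{\{x\}}, \qquad i=1,\ldots,N, \] where the sum converges in the strong operator topology. Then $\sum_{i=1}^N W_i=\sum_{x\in X}V\chi_{\{x\}}=V$, each $\mathrm{supp}(W_i)$ is contained in the graph $\{(y_i(x),x):x\in X\}$ of a partial map $x\mapsto y_i(x)$, and these graphs are pairwise disjoint.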
Set $M:=\max\big\{\|W_i\|:i=1,\ldots,N\big\}$. For later use, we denote $D_i:=\{x\in X: \exists y\in Y \mathrm{\ such\ that\ } (y,x)\in \mathrm{supp}(W_i)\} \subseteq X$. It follows that for each $i$ there exists a map $t_i:D_i \to Y$ such that $(y,x)\in \mathrm{supp}(W_i)$ if and only if $x\in D_i$ and $y=t_i(x)$. It suffices to show that each $W_iTW_j^*$ is strongly quasi-local. Given an $\varepsilon > 0$, there exist $\delta', R'>0$ such that for any $\varphi : X\rightarrow \mathfrak{K}(\mathcal{H}H)_1$ with $(\delta', R')$-variation, we have $\|[T\otimes \mathrm{Id}_{\mathcal{H}H},\Lambda(\varphi)]\|<\frac{\varepsilon}{2M^2}$. Set \[ \delta=\min\big\{\frac{\varepsilon}{4M^2\|T\|},\delta'\big\} \quad \mbox{and} \quad R=R_0+\rho_f(R'), \] where $\rho_f$ is defined in (\ref{EQ:expansion function}). For any $g:Y\rightarrow \mathfrak{K}(\mathcal{H}H)_1$ with $(\delta,R)$-variation and each $i$, we construct $\varphi_i: X\rightarrow \mathfrak{K}(\mathcal{H}H)_1$ as follows: \[ \varphi_i(x):= \begin{cases} ~(g\circ t_i)(x), & \mbox{if~} x\in D_i; \\ ~0, & \mbox{otherwise}. \end{cases} \] It is clear that $(t_i(x),x) \in \mathrm{supp}(W_i) \subseteq \mathrm{supp}(V)\subseteq \{(y,x):d_Y(f(x),y)<R_0\}$ for each $i$ and $x\in D_i$, which implies that $d_Y\big( t_i(x),f(x) \big)<R_0 \leq R$. Hence we obtain \[ \sup_{x\in D_i} \big\|\varphi_i(x) - (g\circ f)(x)\big\| \leq \delta, \] which implies that for each $i$ we have: \begin{equation}\label{EQ:coarse equivalence} \|\Lambda(\varphi_i - g\circ f) (W_i^*\otimes \mathrm{Id}_{\mathcal{H}H})\| \leq \delta M \quad \mbox{and} \quad \|(W_i\otimes \mathrm{Id}_{\mathcal{H}H}) \Lambda(\varphi_i - g\circ f)\| \leq \delta M.
\end{equation} On the other hand, direct calculations show that for each $i$ we have: \[ \Lambda(g)(W_i \otimes \mathrm{Id}_{\mathcal{H}H})=(W_i\otimes \mathrm{Id}_{\mathcal{H}H})\Lambda(\varphi_i)\quad \mbox{and}\quad (W_i^* \otimes \mathrm{Id}_{\mathcal{H}H})\Lambda(g) = \Lambda(\varphi_i)(W_i^*\otimes \mathrm{Id}_{\mathcal{H}H}). \] Hence we obtain: \begin{eqnarray*} &&\big\|[(W_iTW_j^*)\otimes \mathrm{Id}_{\mathcal{H}H}, \Lambda(g)] \big\| \\ &=& \big\|\big((W_iT)\otimes \mathrm{Id}_{\mathcal{H}H}\big) \Lambda(\varphi_j) (W_j^*\otimes \mathrm{Id}_{\mathcal{H}H}) - (W_i\otimes \mathrm{Id}_{\mathcal{H}H}) \Lambda(\varphi_i) \big((TW_j^*)\otimes \mathrm{Id}_{\mathcal{H}H}\big)\big\| \\ &\leq & \big\|\big((W_iT)\otimes \mathrm{Id}_{\mathcal{H}H}\big) \Lambda(g\circ f) (W_j^*\otimes \mathrm{Id}_{\mathcal{H}H}) - (W_i\otimes \mathrm{Id}_{\mathcal{H}H}) \Lambda(g\circ f) \big((TW_j^*)\otimes \mathrm{Id}_{\mathcal{H}H}\big)\big\| + 2M^2\|T\|\delta\\ &\leq & \big\|(W_i\otimes \mathrm{Id}_{\mathcal{H}H}) [T\otimes \mathrm{Id}_{\mathcal{H}H}, \Lambda(g\circ f)](W_j^*\otimes \mathrm{Id}_{\mathcal{H}H})\big\| + \frac{\varepsilon}{2}, \end{eqnarray*} where we use (\ref{EQ:coarse equivalence}) in the first inequality and the choice of $\delta$ in the second. Note that $g\circ f : X\rightarrow \mathfrak{K}(\mathcal{H}H)_1$ has $(\delta',R')$-variation. Hence $\|[T\otimes \mathrm{Id}_{\mathcal{H}H}, \Lambda(g\circ f)]\| < \frac{\varepsilon}{2M^2}$, which implies: \[ \big\|[(W_iTW_j^*)\otimes \mathrm{Id}_{\mathcal{H}H}, \Lambda(g)] \big\| < M^2 \cdot \frac{\varepsilon}{2M^2} + \frac{\varepsilon}{2} = \varepsilon. \] Hence each $W_iTW_j^*$ is strongly quasi-local. \end{proof} As a direct corollary, we obtain: \begin{cor}\label{cor:strong coarse invariance} Let $\mathcal{H}_X$ and $\mathcal{H}_Y$ be ample modules for discrete metric spaces $X$ and $Y$ of bounded geometry, respectively.
If $X$ and $Y$ are coarsely equivalent, then the strongly quasi-local algebra $C^*_{sq}(\mathcal{H}_X)$ is $\ast$-isomorphic to $C^*_{sq}(\mathcal{H}_Y)$. In particular, for a discrete metric space $X$ of bounded geometry the strongly quasi-local algebra $C^*_{sq}(\mathcal{H}_X)$ does not depend on the chosen ample $X$-module $\mathcal{H}_X$ up to $\ast$-isomorphisms, hence called \emph{the strongly quasi-local algebra of $X$} and denoted by $C^*_{sq}(X)$. \end{cor} \subsection{The case for sequences of metric spaces} Here we study the strongly quasi-local algebra for a sequence of metric spaces. This is crucial for analysing the ``building blocks'' in the proof of our main theorem. Let $\{X_n\}_{n\in \mathbb{N}}$ be a sequence of finite metric spaces and $\rho_n: C_0(X_n) \to \mathfrak{B}(\mathcal{H}_n)$ an ample module for $X_n$. Let $X$ be a coarse disjoint union of $\{X_n\}$ and $\mathcal{H}_X:=\bigoplus_n \mathcal{H}_n$. Since $C_0(X) = \bigoplus_n C_0(X_n)$, we can combine the representations $\rho_n$ into a single representation: \[ \rho=\bigoplus_n \rho_n: C_0(X) \to \prod_n \mathfrak{B}(\mathcal{H}_n) \subseteq \mathfrak{B}(\mathcal{H}_X). \] It is clear that $\rho$ is an ample module for $X$. In the following, we also regard a sequence $(T_n)_{n\in \mathbb{N}} \in \prod_n \mathfrak{B}(\mathcal{H}_n)$ as a single operator in $\mathfrak{B}(\mathcal{H}_X)$. For a locally compact operator $T\in \mathfrak{B}(\mathcal{H}_X)$ with finite propagation, it follows directly from the definition that $T$ is block-diagonal up to compact operators. Hence we have the following decomposition for Roe algebras: \begin{lem}\label{lem:cdu to separated case I} Using the same notation as above, we have: \begin{enumerate} \item $\big(C^*(\mathcal{H}_X) \cap \prod_n \mathfrak{B}(\mathcal{H}_n)\big) + \mathfrak{K}(\mathcal{H}_X) = C^*(\mathcal{H}_X)$; \item $\big(C^*(\mathcal{H}_X) \cap \prod_n \mathfrak{B}(\mathcal{H}_n)\big) \cap \mathfrak{K}(\mathcal{H}_X) = \bigoplus_n C^*(\mathcal{H}_n)$.
\end{enumerate} \end{lem} In the case of (strong) quasi-locality, we have similar results as follows. Only the statements concerning strong quasi-locality will be needed later, but we collect them all here for completeness. \begin{lem}\label{lem:cdu to separated case II} Using the same notation as above, we have: \begin{enumerate} \item $\big(C^*_q(\mathcal{H}_X) \cap \prod_n \mathfrak{B}(\mathcal{H}_n)\big) + \mathfrak{K}(\mathcal{H}_X) = C^*_q(\mathcal{H}_X)$; \item $\big(C^*_q(\mathcal{H}_X) \cap \prod_n \mathfrak{B}(\mathcal{H}_n)\big) \cap \mathfrak{K}(\mathcal{H}_X) = \bigoplus_n C^*_q(\mathcal{H}_n)$; \item $\big(C^*_{sq}(\mathcal{H}_X) \cap \prod_n \mathfrak{B}(\mathcal{H}_n)\big) + \mathfrak{K}(\mathcal{H}_X) = C^*_{sq}(\mathcal{H}_X)$; \item $\big(C^*_{sq}(\mathcal{H}_X) \cap \prod_n \mathfrak{B}(\mathcal{H}_n)\big) \cap \mathfrak{K}(\mathcal{H}_X) = \bigoplus_n C^*_{sq}(\mathcal{H}_n)$. \end{enumerate} \end{lem} \begin{proof} The proof is different from that for Roe algebras, and we only prove (3) and (4) since the other two are similar and easier. For (3): note that $\mathfrak{K}(\mathcal{H}_X) \subseteq C^*_{sq}(\mathcal{H}_X)$, hence the left hand side is contained in the right one. For the converse, it follows from \cite[Corollary 4.3]{ST19} that for any $T\in C^*_{sq}(\mathcal{H}_X) \subseteq C^*_q(\mathcal{H}_X)$ and $\varepsilon >0$, there exists some $N\in \mathbb{N}$ such that \[ \big\| T - \big(\sum_{i=1}^N \chi_{X_i}\big) T \big(\sum_{i=1}^N \chi_{X_i}\big) - \sum_{i>N} \chi_{X_i} T \chi_{X_i} \big\| < \varepsilon. \] Since $T$ is locally compact, $(\sum_{i=1}^N \chi_{X_i}) T (\sum_{i=1}^N \chi_{X_i})$ is compact. It suffices to show that $\sum_{i>N} \chi_{X_i} T \chi_{X_i} \in C^*_{sq}(\mathcal{H}_X)$.
Given $\varepsilon>0$, the strong quasi-locality of $T$ implies that there exist $\delta,R>0$ such that for any $g: X \to \mathfrak{K}(\mathcal{H}H)_1$ with $(\delta, R)$-variation, we have $\big\| [T \otimes \mathrm{Id}_{\mathcal{H}H} , \Lambda(g) ] \big\|< \varepsilon$. Now for any such $g$, we have: \begin{eqnarray*} \big\| \big[\big(\sum_{i>N} \chi_{X_i} T \chi_{X_i}\big)\otimes \mathrm{Id}_{\mathcal{H}H}, \Lambda(g) \big] \big\| &=& \sup_{i>N} \big\| [(\chi_{X_i} T\chi_{X_i}) \otimes \mathrm{Id}_{\mathcal{H}H} , \Lambda(g) ] \big\| \\ &=& \sup_{i>N} \big\|\chi_{X_i} [ T \otimes \mathrm{Id}_{\mathcal{H}H} , \Lambda(g) ] \chi_{X_i}\big\| < \varepsilon. \end{eqnarray*} Hence we obtain that $\sum_{i>N} \chi_{X_i} T \chi_{X_i}$ is strongly quasi-local, which concludes (3). For (4): note that $C^*(\mathcal{H}_n)= C^*_q(\mathcal{H}_n)= C^*_{sq}(\mathcal{H}_n)= \mathfrak{K}(\mathcal{H}_n)$ for each $n$ and hence: \begin{align*} \big(C^*_{sq} & (\mathcal{H}_X) \cap \prod_n \mathfrak{B}(\mathcal{H}_n)\big) \cap \mathfrak{K}(\mathcal{H}_X) = C^*_{sq}(\mathcal{H}_X) \cap \big( \prod_n \mathfrak{B}(\mathcal{H}_n) \cap \mathfrak{K}(\mathcal{H}_X) \big) \\ &= C^*_{sq}(\mathcal{H}_X) \cap \bigoplus_n \mathfrak{K}(\mathcal{H}_n) = \bigoplus_n \mathfrak{K}(\mathcal{H}_n) = \bigoplus_n C^*_{sq}(\mathcal{H}_n). \end{align*} Hence we conclude the proof. \end{proof} For later use, we introduce the following notion of (strong) quasi-locality for a sequence of operators. Note that these definitions are nothing but uniform versions of (strong) quasi-locality. \begin{defn}\label{defn:sql for sequence} Let $\{X_n\}_{n\in \mathbb{N}}$ be a sequence of finite metric spaces and $\rho_n: C_0(X_n) \to \mathfrak{B}(\mathcal{H}_n)$ be ample modules.
For a sequence $(T_n)_{n\in \mathbb{N}}$ where $T_n \in \mathfrak{B}(\mathcal{H}_n)$, we say that: \begin{enumerate} \item $(T_n)_{n\in \mathbb{N}}$ is \emph{uniformly quasi-local} if for any $\varepsilon>0$ there exists $R>0$ such that for any $n\in \mathbb{N}$ and $C_n, D_n \subseteq X_n$ with $d(C_n, D_n)\ge R$, we have $\|\chi_{C_n} T_n \chi_{D_n}\|<\varepsilon$. \item $(T_n)_{n\in \mathbb{N}}$ is \emph{uniformly strongly quasi-local} if for any $\varepsilon>0$ there exist $\delta,R >0$ such that for any $n \in \mathbb{N}$ and $g_n:X_n\rightarrow \mathfrak{K}(\mathcal{H}H)_1$ with $(\delta, R)$-variation, we have $\| [T_n \otimes \mathrm{Id}_{\mathcal{H}H} , \Lambda(g_n) ] \| < \varepsilon$. \end{enumerate} \end{defn} \begin{lem}\label{lem:char for sequence of strong quasi-locality} Let $\{X_n\}_{n\in \mathbb{N}}$ be a sequence of finite metric spaces, $\rho_n: C_0(X_n) \to \mathfrak{B}(\mathcal{H}_n)$ be ample modules and $\mathcal{H}_X:=\bigoplus_n \mathcal{H}_n$. For a sequence $(T_n)_{n\in \mathbb{N}} \in \prod_n \mathfrak{K}(\mathcal{H}_n)$, we have: \begin{enumerate} \item $(T_n)\in C^*_q(\mathcal{H}_X)$ \emph{if and only if} $(T_n)$ is uniformly quasi-local. \item $(T_n)\in C^*_{sq}(\mathcal{H}_X)$ \emph{if and only if} $(T_n)$ is uniformly strongly quasi-local. \end{enumerate} Hence if $(T_n)$ is uniformly strongly quasi-local then it is uniformly quasi-local. \end{lem} The proof is straightforward, hence omitted. Analogous to the coarse invariance of Roe algebras, we have the following result concerning sequences of spaces. The proof is similar, hence omitted. \begin{prop}\label{prop:independent of modules} Let $\{X_n\}_{n\in \mathbb{N}}$ be a sequence of finite metric spaces with uniformly bounded geometry, and $\rho_n: C_0(X_n) \to \mathfrak{B}(\mathcal{H}_n)$ be an ample module for $X_n$. Let $X$ be a coarse disjoint union of $\{X_n\}$ and $\mathcal{H}_X:=\bigoplus_n \mathcal{H}_n$.
Then the K-theories $K_\ast\big( C^*(\mathcal{H}_X) \cap \prod_n \mathfrak{B}(\mathcal{H}_n) \big), K_\ast\big( C^*_q(\mathcal{H}_X) \cap \prod_n \mathfrak{B}(\mathcal{H}_n) \big)$ and $K_\ast\big( C^*_{sq}(\mathcal{H}_X) \cap \prod_n \mathfrak{B}(\mathcal{H}_n) \big)$ are independent of $\rho_n$ up to canonical isomorphisms. \end{prop} \section{The coarse Mayer-Vietoris sequence}\label{sec:cMV} The tool of Mayer-Vietoris sequences is widely used in many different areas of mathematics, especially in algebraic topology. It provides a ``cutting and pasting'' procedure, which allows us to obtain global information from local pieces. In coarse geometry, Higson, Roe and Yu introduced a coarse Mayer-Vietoris sequence for K-theories of Roe algebras associated to a suitable decomposition of the underlying metric space in \cite{HRY93}. More precisely, recall that a closed cover $(A,B)$ of a metric space $X$ is said to be \emph{$\omega$-excisive} if for each $r>0$ there is some $s>0$ such that $\mathcal{N}_r(A)\cap \mathcal{N}_r(B) \subseteq \mathcal{N}_s(A\cap B)$. Associated to an $\omega$-excisive closed cover $(A,B)$ of a metric space $X$, we have the following six-term exact sequence (called the \emph{coarse Mayer-Vietoris sequence}): \[ \begin{CD} K_0(C^*(A\cap B)) @>>> K_0(C^*(A))\oplus K_0(C^*(B)) @>>> K_0(C^*(X)) \\ @AAA & & @VVV \\ K_1(C^*(X)) @<<< K_1(C^*(A))\oplus K_1(C^*(B)) @<<< K_1(C^*(A\cap B)). \end{CD} \] In this section, we explore a coarse Mayer-Vietoris sequence for strongly quasi-local algebras and use it to reduce the proof of Theorem \ref{thm:main result} to the case of ``sparse'' spaces. Let $X$ be a discrete metric space with bounded geometry and $\mathcal{H}_X$ be an ample $X$-module. \begin{defn}\label{defn:strong M-V ideals} Let $A$ be a (closed) subset of $X$. Denote by $C^*_{sq}(A,X)$ the norm-closure of the set of all operators $T\in C^*_{sq}(\mathcal{H}_X)$ with support contained in $\mathcal{N}_R(A) \times \mathcal{N}_R(A)$ for some $R\geq 0$.
\end{defn} \begin{lem}\label{lem:strong M-V ideals} $C^*_{sq}(A,X)$ is a closed two-sided $\ast$-ideal in $C^*_{sq}(\mathcal{H}_X)$. \end{lem} \begin{proof} It suffices to show that for any $T,S \in C^*_{sq}(\mathcal{H}_X)$ such that $\mathrm{supp}(T) \subseteq \mathcal{N}_R(A) \times \mathcal{N}_R(A)$ for some $R\geq 0$, both $TS$ and $ST$ belong to $C^*_{sq}(A,X)$. By Proposition \ref{prop:relations between Roe, QL and SQL}(1), we know that $S \in C^*_q(\mathcal{H}_X)$. Hence for any $\varepsilon>0$, there exists $R_0>0$ such that $S$ has $(\frac{\varepsilon}{\|T\|},R_0)$-propagation. It follows that \begin{equation*} \|TS - \chi_{\mathcal{N}_R(A)}TS\chi_{\mathcal{N}_{R+R_0}(A)}\| = \|\chi_{\mathcal{N}_R(A)}T(\chi_{\mathcal{N}_R(A)}S - \chi_{\mathcal{N}_R(A)}S\chi_{\mathcal{N}_{R+R_0}(A)})\| < \varepsilon. \end{equation*} Hence by definition, we obtain that $TS \in C^*_{sq}(A,X)$. A similar argument shows that $ST \in C^*_{sq}(A,X)$ as well, which concludes the proof. \end{proof} Based on an argument similar to the proof of \cite[Section 5/Lemma 1]{HRY93} together with Corollary \ref{cor:strong coarse invariance}, we have the following: \begin{lem}\label{lem:strong M-V ideals have same K-theory} For a (closed) subset $A\subseteq X$, take an isometry $V$ covering the inclusion $i: A \hookrightarrow X$. Then the range of $\mathrm{Ad}_V:C^*_{sq}(A) \to C^*_{sq}(X)$ is contained in $C^*_{sq}(A,X)$. Furthermore, the map $i_*:K_*(C^*_{sq}(A)) \to K_*(C^*_{sq}(A,X))$ is an isomorphism. \end{lem} We also have the following result analogous to \cite[Section 5/Lemma 2]{HRY93}: \begin{lem}\label{lem:strong M-V ideal calculations} Let $(A,B)$ be an $\omega$-excisive (closed) cover of $X$. Then we have \[ C^*_{sq}(A,X)+C^*_{sq}(B,X)=C^*_{sq}(X) \] and \[ C^*_{sq}(A,X)\cap C^*_{sq}(B,X)= C^*_{sq}(A\cap B,X).
\] \end{lem} \begin{proof} Given $T\in C^*_{sq}(X)$ and $\varepsilon>0$, it follows from Proposition \ref{prop:relations between Roe, QL and SQL}(1) that there exists $R>0$ such that $T$ has $(\varepsilon,R)$-propagation. Since $A \cup B=X$, we have $T=\chi_A T + \chi_{B \setminus A} T$, and hence $T$ is $2\varepsilon$-close to $\chi_A T \chi_{\mathcal{N}_R(A)} + \chi_{B \setminus A} T \chi_{\mathcal{N}_R(B \setminus A)}$. Hence we obtain that $C^*_{sq}(A,X)+C^*_{sq}(B,X)$ is dense in $C^*_{sq}(X)$. It follows from a standard argument in $C^*$-algebras (\emph{e.g.}, \cite[Section 3/Lemma 1]{HRY93}) that $C^*_{sq}(A,X)+C^*_{sq}(B,X)=C^*_{sq}(X)$. Concerning the second equation, we only need to show that $C^*_{sq}(A,X) C^*_{sq}(B,X) \subseteq C^*_{sq}(A\cap B,X)$. Fix $T, S \in C^*_{sq}(X)$ with $\mathrm{supp}(T) \subseteq \mathcal{N}_R(A) \times \mathcal{N}_R(A)$ and $\mathrm{supp}(S) \subseteq \mathcal{N}_{R}(B) \times \mathcal{N}_{R}(B)$ for some $R>0$. The assumption of $\omega$-excision implies that there exists an $L>0$ such that $\mathcal{N}_R(A) \cap \mathcal{N}_{R}(B) \subseteq \mathcal{N}_L (A \cap B)$. Hence we have $TS = T \chi_{\mathcal{N}_L (A \cap B)} S$. For any $\varepsilon>0$ there exists an $L'>0$ such that $T$ has $(\frac{\varepsilon}{\|S\|},L')$-propagation and $S$ has $(\frac{\varepsilon}{\|T\|},L')$-propagation. Hence we have: \[ \|TS - \chi_{\mathcal{N}_{L+L'} (A \cap B)} TS \chi_{\mathcal{N}_{L+L'} (A \cap B)}\| \leq 2\varepsilon. \] Therefore we obtain that $TS \in C^*_{sq}(A\cap B,X)$, which concludes the proof.
\end{proof} Applying the Mayer-Vietoris sequence in $K$-theory for $C^*$-algebras (see \cite[Section 3]{HRY93}) to the ideals $C^*_{sq}(A,X), C^*_{sq}(B,X)$ in $C^*_{sq}(X)$, and combining it with Lemmas \ref{lem:strong M-V ideals have same K-theory} and \ref{lem:strong M-V ideal calculations}, we obtain the following coarse Mayer-Vietoris principle for strongly quasi-local algebras: \begin{prop}\label{prop:M-V sequence for strong quasi-local algebras} Let $(A,B)$ be a (closed) $\omega$-excisive cover of $X$. Then there is a six-term exact sequence \[ \begin{CD} K_0(C^*_{sq}(A\cap B)) @>>> K_0(C^*_{sq}(A))\oplus K_0(C^*_{sq}(B)) @>>> K_0(C^*_{sq}(X)) \\ @AAA & & @VVV \\ K_1(C^*_{sq}(X)) @<<< K_1(C^*_{sq}(A))\oplus K_1(C^*_{sq}(B)) @<<< K_1(C^*_{sq}(A\cap B)). \end{CD} \] \end{prop} For completeness, we record that the same argument yields a Mayer-Vietoris principle for quasi-local algebras as well, although it will not be used in this paper. \begin{prop}\label{prop:M-V sequence for quasi-local algebras} Let $(A,B)$ be a (closed) $\omega$-excisive cover of $X$. Then there is a six-term exact sequence \[ \begin{CD} K_0(C^*_q(A\cap B)) @>>> K_0(C^*_q(A))\oplus K_0(C^*_q(B)) @>>> K_0(C^*_q(X)) \\ @AAA & & @VVV \\ K_1(C^*_q(X)) @<<< K_1(C^*_q(A))\oplus K_1(C^*_q(B)) @<<< K_1(C^*_q(A\cap B)).
\end{CD} \] \end{prop} Now we use Proposition \ref{prop:M-V sequence for strong quasi-local algebras} to reduce the proof of Theorem~\ref{thm:main result} to the case of block-diagonal operators: \begin{lem}\label{lem:reduce to block-diagonal} To prove Theorem~\ref{thm:main result} for all bounded geometry metric spaces that coarsely embed into Hilbert space, it suffices to prove that for any sequence of finite metric spaces $\{Y_n\}_{n=1}^\infty$ which has uniformly bounded geometry and uniformly coarsely embeds into Hilbert space, the inclusion $C^*(\mathcal{H}_Y) \cap \prod_n \mathfrak{B}(\mathcal{H}_n) \hookrightarrow C^*_{sq}(\mathcal{H}_Y) \cap \prod_n \mathfrak{B}(\mathcal{H}_n)$ induces isomorphisms in $K$-theory where $\mathcal{H}_n$ is an ample $Y_n$-module, $\mathcal{H}_Y$ is their direct sum and $Y$ is a coarse disjoint union of $\{Y_n\}$. \end{lem} \begin{proof} Lemmas~\ref{lem:cdu to separated case I} and \ref{lem:cdu to separated case II} imply that \[ \frac{C^*(\mathcal{H}_Y)}{\mathfrak{K}(\mathcal{H}_Y)} \cong \frac{C^*(\mathcal{H}_Y) \cap \prod_n \mathfrak{B}(\mathcal{H}_n)}{\bigoplus_n C^*(\mathcal{H}_n)} \quad \mathrm{and} \quad \frac{C^*_{sq}(\mathcal{H}_Y)}{\mathfrak{K}(\mathcal{H}_Y)} \cong \frac{C^*_{sq}(\mathcal{H}_Y) \cap \prod_n \mathfrak{B}(\mathcal{H}_n)}{\bigoplus_n C^*_{sq}(\mathcal{H}_n)}. \] Since $C^*(\mathcal{H}_n)=C^*_{sq}(\mathcal{H}_n)$ for each $n$, we obtain the following commutative diagram:\\ \begin{small} \centerline{ \xymatrix{ \cdots \ar[r] & K_*\big(\bigoplus_n C^*(\mathcal{H}_n)\big) \ar[r] \ar@{=}[d] & K_*(C^*(\mathcal{H}_Y) \cap \prod_n \mathfrak{B}(\mathcal{H}_n)) \ar[d]\ar[r] & K_*(C^*(\mathcal{H}_Y)/\mathfrak{K}(\mathcal{H}_Y)) \ar[d]\ar[r] & \cdots \\ \cdots \ar[r] & K_*\big(\bigoplus_n C^*_{sq}(\mathcal{H}_n)\big) \ar[r] & K_*(C^*_{sq}(\mathcal{H}_Y) \cap \prod_n \mathfrak{B}(\mathcal{H}_n)) \ar[r] & K_*(C^*_{sq}(\mathcal{H}_Y)/\mathfrak{K}(\mathcal{H}_Y)) \ar[r] & \cdots.
}} \end{small} Hence the right vertical map is an isomorphism by the assumption and the Five Lemma. From the following commutative diagram:\\ \centerline{ \xymatrix{ \cdots \ar[r] & K_*(\mathfrak{K}(\mathcal{H}_Y)) \ar[r] \ar@{=}[d] & K_*(C^*(\mathcal{H}_Y)) \ar[d]\ar[r] & K_*(C^*(\mathcal{H}_Y) / \mathfrak{K}(\mathcal{H}_Y)) \ar[d]\ar[r] & \cdots \\ \cdots \ar[r] & K_*(\mathfrak{K}(\mathcal{H}_Y)) \ar[r] & K_*(C^*_{sq}(\mathcal{H}_Y)) \ar[r] & K_*(C^*_{sq}(\mathcal{H}_Y) / \mathfrak{K}(\mathcal{H}_Y)) \ar[r] & \cdots, }} we obtain that $K_*(C^*(\mathcal{H}_Y)) \to K_*(C^*_{sq}(\mathcal{H}_Y))$ is an isomorphism by the Five Lemma. Now for a metric space $X$ satisfying the assumption, we follow the argument in \cite[Lemma 12.5.3]{willett2020higher}. Fix a basepoint $x_0 \in X$ and for each $n \in \mathbb{N}\cup\{0\}$, we set \[ X_n:=\{x\in X: n^3-n\le d_X(x,x_0)\le (n+1)^3+(n+1)\}. \] Let $A:=\bigsqcup_{n: even} X_n$ and $B:=\bigsqcup_{n: odd}X_n$. It is obvious that $(A,B)$ is an $\omega$-excisive cover of $X$. Applying the coarse Mayer-Vietoris sequences for the associated Roe algebras (\cite{HRY93}) and strongly quasi-local algebras (Proposition \ref{prop:M-V sequence for strong quasi-local algebras}), we obtain the following commutative diagram \begin{small} \[ \begin{CD} \cdots @>>>K_*(C^*(A\cap B)) @>>> K_*(C^*(A))\oplus K_*(C^*(B)) @>>> K_*(C^*(X))@>>> \cdots\\ & & @VVV @VVV @VVV \\ \cdots @>>>K_*(C^*_{sq}(A\cap B)) @>>> K_*(C^*_{sq}(A))\oplus K_*(C^*_{sq}(B)) @>>> K_*(C^*_{sq}(X))@>>> \cdots. \end{CD} \] \end{small} The left and middle vertical maps are isomorphisms according to the previous paragraph, hence we conclude the proof by the Five Lemma again. \end{proof} \section{Twisted strongly quasi-local algebras}\label{sec:twisted} In this section, we recall the Bott-Dirac operators which will be used in the next section to construct index maps.
We also recall the notion of twisted Roe algebras from \cite[Section 12.3]{willett2020higher} (originally in \cite[Section 5]{Yu00}) and introduce their strongly quasi-local analogue. \subsection{The Bott-Dirac operators on Euclidean spaces}\label{ssec:The Bott-Dirac operators on Euclidean spaces} Let us start by recalling some elementary properties of the Bott-Dirac operators. Here we only list the necessary notions and facts, and refer the reader to \cite[Section 12.1]{willett2020higher} for details. Let $E$ be a real Hilbert space (also called a \emph{Euclidean space}) with even dimension $d \in \mathbb{N}$. The \emph{Clifford algebra} of $E$, denoted by $\mathrm{Cliff}_\mathbb{C}(E)$, is the universal unital complex algebra containing $E$ as a real subspace and subject to the multiplicative relations $x \cdot x = \|x\|_E^2$ for all $x\in E$. It is natural to treat $\mathrm{Cliff}_\mathbb{C}(E)$ as a graded Hilbert space (see for example \cite[Example E.2.12]{willett2020higher}), and in this case we denote it by $\mathcal{H}_E$. Denote by $\mathcal{L}_E^2$ the graded Hilbert space of square integrable functions from $E$ to $\mathcal{H}_E$, where the grading is inherited from $\mathcal{H}_E$, and by $\mathcal{S}_E$ the dense subspace consisting of Schwartz class functions from $E$ to $\mathcal{H}_E$. Fix an orthonormal basis $\{e_1,\ldots,e_d\}$ of $E$ and let $x_1,\ldots,x_d:E \to \mathbb{R}$ be the corresponding coordinates. Recall that the \emph{Bott operator} $C$ and the \emph{Dirac operator} $D$ are unbounded operators on $\mathcal{L}_E^2$ with domain $\mathcal{S}_E$ defined as: \[ (Cu)(x) = x \cdot u(x), \quad \mbox{and} \quad (Du)(x)=\sum_{i=1}^d \hat{e_i} \cdot \frac{\partial u}{\partial x_i}(x) \] for $u \in \mathcal{S}_E$ and $x\in E$, where $\hat{e_i}: \mathrm{Cliff}_\mathbb{C}(E) \to \mathrm{Cliff}_\mathbb{C}(E)$ is the operator determined by $\hat{e_i} (w)=(-1)^{\partial w} w \cdot e_i$ for any homogeneous element $w\in \mathrm{Cliff}_\mathbb{C}(E)$.
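To illustrate the definitions above in the lowest-dimensional case (a side remark which is not used in the sequel): polarising the relation $x\cdot x=\|x\|_E^2$ gives \[ e_i\cdot e_j+e_j\cdot e_i=2\delta_{ij}, \] so for $E=\mathbb{R}^2$ the algebra $\mathrm{Cliff}_\mathbb{C}(E)$ has linear basis $\{1,e_1,e_2,e_1e_2\}$ (in general $\dim_\mathbb{C}\mathrm{Cliff}_\mathbb{C}(E)=2^d$), graded by even and odd word length. A direct computation with the sign convention above shows that $\hat{e_i}\hat{e_j}+\hat{e_j}\hat{e_i}=-2\delta_{ij}$, whence \[ D^2u=-\sum_{i=1}^d\frac{\partial^2 u}{\partial x_i^2}, \qquad u\in\mathcal{S}_E, \] \emph{i.e.}, $D^2$ acts as the (positive) Laplacian.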
\begin{defn}\label{defn:Bott-Dirac} The \emph{Bott-Dirac operator} is the unbounded operator $B:=D+C$ on $\mathcal{L} _E^2$ with domain $\mathcal{S}_E$. \end{defn} Given $x\in E$, recall that the \emph{left Clifford multiplication operator associated to $x$} is the bounded operator $c_x$ on $\mathcal{L} _E^2$ defined as the left Clifford multiplication by the fixed vector $x$, and the \emph{translation operator associated to $x$} is the unitary operator $V_x$ on $\mathcal{L} _E^2$ defined by $(V_xu)(y):=u(y-x)$. Given $s\in [1,\infty)$, recall that the \emph{shrinking operator associated to $s$} is the unitary operator $S_s$ on $\mathcal{L} _E^2$ defined by $(S_s u)(y):=s^{{-d}/{2}}u(sy)$. \begin{defn}\label{defn:Bott-Dirac modified} For $s\in [1,\infty)$ and $x\in E$, the \emph{Bott-Dirac operator associated to $(x,s)$} is the unbounded operator $B_{s,x}:=s^{-1}D+C-c_x$ on $\mathcal{L} _E^2$ with domain $\mathcal{S}_E$. \end{defn} Note that $B_{1,0}=B$ and $B_{s,x}=s^{{-1}/{2}}~V_xS_{\sqrt{s}}BS^*_{\sqrt{s}}V_x^*$. It is also known that for each $s\in [1,\infty)$ and $x\in E$, the operator $B_{s,x}$ is unbounded, odd, essentially self-adjoint and maps $\mathcal{S}_E$ to itself (see, \emph{e.g.}, \cite[Corollary 12.1.4]{willett2020higher}). \begin{defn}\label{defn:F_{s,x}} Let $s\in [1,\infty)$, $x\in E$ and $B_{s,x}$ be the Bott-Dirac operator associated to $(x,s)$. Define a bounded operator on $\mathcal{L}_E^2$ by: \[ F_{s,x}:= B_{s,x}(1+B^2_{s,x})^{-1/2}. \] \end{defn} We list several important properties of the operator $F_{s,x}$. For simplicity, denote $\chi_{x,R}:=\chi_{B(x;R)}$ for $x\in E$ and $R \geq 0$. 
\begin{prop}[{\cite[Proposition 12.1.10]{willett2020higher}}]\label{prop:Psi function} For each $\varepsilon >0$ there exists an odd function $\Psi :\mathbb{R} \rightarrow [-1,1]$ with $\Psi (t)\rightarrow 1$ as $t\rightarrow +\infty$, satisfying the following: \begin{enumerate} \item For all $s\in [1,\infty)$ and $x\in E$, we have $\|F_{s,x}-\Psi(B_{s,x})\|< \varepsilon$. \item There exists $R_0>0$ such that for all $s\in [1,\infty)$ and $x\in E$, we have $\mathrm{prop}(\Psi(B_{s,x})) \le s^{-1}R_0$. \item For all $s\in [1,\infty)$ and $x\in E$, $\Psi(B_{s,x})^2 - 1$ is compact. \item For all $s\in [1,\infty)$ and $x,y\in E$, $\Psi(B_{s,x}) - \Psi(B_{s,y})$ is compact. \item For all $s\in [1,\infty)$ and $x,y\in E$, $\|F_{s,x}-F_{s,y}\|\le 3\|x-y\|_E$. Moreover, there exists $c>0$ such that for all $s\in [1,\infty)$ and $x,y\in E$, we have \[ \|\Psi(B_{s,x}) - \Psi(B_{s,y})\| \leq c \|x-y\|_E. \] \item For all $x\in E$, the function $$[1,\infty)\rightarrow \mathfrak{B}(\mathcal{L}_E^2),\ s \mapsto \Psi(B_{s,x})$$ is strong-$*$ continuous. \item The family of functions \[ [1,\infty) \to \mathfrak{B}(\mathcal{L}_E^2), \ s \mapsto \Psi(B_{s,x})^2-1 \] is norm equi-continuous as $x$ varies over $E$ and $s$ varies over any fixed compact subset of $[1,\infty)$. \item For any $r \geq 0$, the family of functions \[ [1,\infty) \to \mathfrak{B}(\mathcal{L}_E^2), \ s \mapsto \Psi(B_{s,x}) - \Psi(B_{s,y}) \] is norm equi-continuous as $(x,y)$ varies over the elements of $E \times E$ with $\|x-y\|_E \leq r$, and $s$ varies over any fixed compact subset of $[1,\infty)$.
\item For any $\varepsilon_1>0$, there exists $R_1 > 0$ such that for all $R\ge R_1$, $s\ge 2d$ and $x\in E$, we have $$\| ( \Psi(B_{s,x})^2-1 )( 1-\chi_{x,R} ) \|< \varepsilon_1.$$ \item For any $\varepsilon_2>0$ and $r>0$, there exists $R_2 >0$ such that for all $R\ge R_2$, $s\ge 2d$ and $x,y \in E$ with $\|x-y\|_E\le r$, we have $$\|(\Psi(B_{s,x})-\Psi(B_{s,y}))(1-\chi_{x,R})\|< \varepsilon_2.$$ \end{enumerate} Moreover, we can require that the function $\Psi$ and the constants $R_0$ in (2), $c$ in (5), $R_1$ in (9) and $R_2$ in (10) be independent of the dimension $d$ of the Euclidean space $E$. \end{prop} \begin{rem}\label{rem:Psi function revised} Note that statements (9) and (10) above are slightly stronger than those in \cite[Proposition 12.1.10]{willett2020higher}. For completeness, we give the proofs in Appendix \ref{app:proof2}. \end{rem} \subsection{Twisted Roe and strongly quasi-local algebras}\label{ssec:Twisted Roe and strong quasi-local algebras} Thanks to Lemma \ref{lem:reduce to block-diagonal}, we focus only on sequences of finite metric spaces with uniformly bounded geometry. We fix some notation first. Let $\{X_n\}_{n\in \mathbb{N}}$ be a sequence of finite metric spaces with uniformly bounded geometry which admits a uniformly coarse embedding into Euclidean spaces $\{f_n:X_n\rightarrow E_n\}$, where each $E_n$ is a Euclidean space of even dimension $d_n$. Let $X$ be a coarse disjoint union of $\{X_n\}$ and denote $E:=\{E_n\}_{n\in \mathbb{N}}$. Recall that $\mathcal{H}H$ is a fixed infinite-dimensional separable Hilbert space. Denote $\mathcal{H}_n:=\ell^2(X_n)\otimes \mathcal{H}H$, which is an ample $X_n$-module under the amplified multiplication representation. Similarly, denote $\mathcal{H}_{n,E_n}:=\mathcal{H}_n\otimes \mathcal{L}^2_{E_n}$, which is both an ample $X_n$-module and an ample $E_n$-module.
Also define $\mathcal{H}_X:=\bigoplus_n\mathcal{H}_n$ and $\mathcal{H}_{X,E}:=\bigoplus_n\mathcal{H}_{n,E_n}$, both of which are ample $X$-modules. For $T_n\in \mathfrak{B}(\mathcal{H}_{n,E_n})$, write $\mathrm{prop}_{X_n}(T_n)$ and $\mathrm{prop}_{E_n}(T_n)$ for the propagation of $T_n$ with respect to the $X_n$-module structure and the $E_n$-module structure, respectively. Following Definition \ref{defn:Roe alg} and Definition \ref{defn:strongly quasi-local algebra}, we form the Roe algebras $C^*(\mathcal{H}_{n,E_n})$ of $X_n$ and $C^*(\mathcal{H}_{X,E})$ of $X$, and the strongly quasi-local algebras $C^*_{sq}(\mathcal{H}_{n,E_n})$ of $X_n$ and $C^*_{sq}(\mathcal{H}_{X,E})$ of $X$. To introduce the twisted Roe and strongly quasi-local algebras, we need an extra construction from \cite[Definition 12.3.1]{willett2020higher}, which involves the uniformly coarse embedding, as follows: \begin{defn}\label{defn:V-construction} Given $n\in\mathbb{N}$ and $T \in \mathfrak{B}(\mathcal{L}_{E_n}^2)$, we define a bounded operator $T^V$ on $\mathcal{H}_{n,E_n}=\ell^2(X_n)\otimes \mathcal{H}H\otimes \mathcal{L}^2_{E_n}$ by the formula \[ T^V: \delta_x\otimes \xi\otimes u\mapsto \delta_x\otimes \xi\otimes V_{f_n(x)}TV_{f_n(x)}^*u \] for $x\in X_n$, $\xi\in \mathcal{H}H$ and $u\in \mathcal{L}^2_{E_n}$, where $f_n$ is the uniformly coarse embedding and $V_{f_n(x)}$ is the translation operator defined in Section \ref{ssec:The Bott-Dirac operators on Euclidean spaces}. \end{defn} For each $n$, decompose $\mathcal{H}_n=\bigoplus_{x\in X_n} \mathcal{H}_{n,x}$ where $\mathcal{H}_{n,x} := \chi_{\{x\}} \mathcal{H}_n$ for $x\in X_n$, so that $\mathcal{H}_{n,E_n}=\bigoplus_{x\in X_n} \mathcal{H}_{n,x} \otimes \mathcal{L}_{E_n}^2$.
Hence $T\in \mathfrak{B}(\mathcal{H}_{n,E_n})$ can be regarded as an $X_n$-by-$X_n$ matrix of operators $(T_{x,y})_{x,y\in X_n}$, where $T_{x,y}$ is a bounded operator from $\mathcal{H}_{n,y} \otimes \mathcal{L}_{E_n}^2$ to $\mathcal{H}_{n,x} \otimes \mathcal{L}_{E_n}^2$. It is clear that for $T \in \mathfrak{B}(\mathcal{L}_{E_n}^2)$ we have: \[ T^V_{x,y}= \begin{cases} ~\mathrm{Id}_{\mathcal{H}H} \otimes V_{f_n(x)}TV_{f_n(x)}^*, & y=x; \\ ~0, & \mbox{otherwise}. \end{cases} \] Hence $T^V$ is a block-diagonal operator with respect to the above decomposition. Now we introduce the following twisted Roe algebras from \cite[Section 12.6]{willett2020higher}. \begin{defn}\label{defn:twisted Roe} Let $\prod_{n\in \mathbb{N}}C_b([1,\infty),\mathfrak{K}(\mathcal{H}_{n,E_n}))$ denote the product $C^*$-algebra of all bounded continuous functions from $[1,\infty)$ to $\mathfrak{K}(\mathcal{H}_{n,E_n})$ with the supremum norm. Write elements of this $C^*$-algebra as collections $(T_{n,s})_{n\in \mathbb{N},s\in [1,\infty)}$ with $T_{n,s} \in \mathfrak{K}(\mathcal{H}_{n,E_n})$, whose norm is \[ \|(T_{n,s})\|=\sup_{n\in \mathbb{N},s\in [1,\infty)}\|T_{n,s}\|_{\mathfrak{B}(\mathcal{H}_{n,E_n})}. \] Let $\mathbb{A}(X;E)$ denote the $*$-subalgebra of $\prod_{n\in \mathbb{N}}C_b([1,\infty),\mathfrak{K}(\mathcal{H}_{n,E_n}))$ consisting of elements satisfying the following conditions: \begin{enumerate} \item $\sup\limits_{s\in [1,\infty),n\in \mathbb{N}}\mathrm{prop}_{X_n}(T_{n,s})<\infty$;\\[0.1cm] \item $\lim\limits_{s\rightarrow \infty}\sup\limits_{n\in \mathbb{N}}\mathrm{prop}_{E_n}(T_{n,s})=0$;\\[0.1cm] \item $\lim\limits_{R\rightarrow \infty}\sup\limits_{s\in [1,\infty),n\in \mathbb{N}}\|\chi_{0,R}^VT_{n,s}-T_{n,s}\|=\lim\limits_{R\rightarrow \infty}\sup\limits_{s\in [1,\infty),n\in \mathbb{N}}\|T_{n,s}\chi_{0,R}^V-T_{n,s}\|=0$.
\end{enumerate} The \emph{twisted Roe algebra} $A(X;E)$ of $\{X_n\}_{n\in \mathbb{N}}$ is defined to be the norm-closure of $\mathbb{A}(X;E)$ in $\prod_{n\in \mathbb{N}}C_b([1,\infty),\mathfrak{K}(\mathcal{H}_{n,E_n}))$. \end{defn} \begin{rem}\label{rem:diff between twisted Roe} The above definition appears different from \cite[Definition 12.6.2]{willett2020higher}, but it coincides with the case $r=0$ therein. To see this, note that each $X_n$ is finite, hence $C^*(\mathcal{H}_{n,E_n})=\mathfrak{K}(\mathcal{H}_{n,E_n})$ for $n\in \mathbb{N}$. Then the following lemma shows that we can recover condition (4) in \cite[Definition 12.6.2]{willett2020higher}. \end{rem} \begin{lem}\label{lem:condition (4) recovery} Given $n\in \mathbb{N}$ and a compact operator $T \in \mathfrak{K}(\mathcal{H}_{n,E_n})$, we have \[ \lim_{i\in I}\|p_i^VT-T\|=\lim_{i\in I}\|Tp_i^V-T\|=0, \] where $\{p_i\}_{i\in I}$ is the net of finite rank projections on $\mathcal{L}_{E_n}^2$. \end{lem} \begin{proof} Given $\varepsilon>0$, it suffices to find a finite rank projection $p\in \mathfrak{B}(\mathcal{L}_{E_n}^2)$ such that $\| p^V T-T \| < \varepsilon$; replacing $T$ by its adjoint $T^*$, we obtain the other equality as well. Since $T$ is compact, there exists a finite rank projection $P\in \mathfrak{B}(\mathcal{H}_{n,E_n})$ such that $\|PT-T\| < \frac{\varepsilon}{2}$. Moreover, we can assume that the image of $P$ is contained in the subspace spanned by the finite set \[ \big\{ \delta_{x} \otimes \xi_i \otimes u_j: x \in X_n, \xi_i\in \mathcal{H}H, u_j \in \mathcal{L}_{E_n}^2 \mbox{~for~}i, j=1,2,\ldots,N\big\}. \] Hence for each $x \in X_n$, there exists a finite rank projection $q_x\in \mathfrak{B}(\mathcal{L}_{E_n}^2)$ such that \[ P\le \sum_{x\in X_n} p_x \otimes \mathrm{Id}_{\mathcal{H}H} \otimes q_x, \] where $p_x$ is the orthogonal projection onto $\mathbb{C} \delta_{x} \subseteq \ell^2(X_n)$.
Take an arbitrary finite rank projection $p \in \mathfrak{B}(\mathcal{L}^2_{E_n})$ with $p\geq V_{f_n(x)}^* q_x V_{f_n(x)}$ for each $x\in X_n$. Then we have: \begin{align*} p^V &= \sum_{x\in X_n} p_x \otimes \mathrm{Id}_{\mathcal{H}H} \otimes V_{f_n(x)} p V_{f_n(x)}^* \ge \sum_{x\in X_n} p_x \otimes \mathrm{Id}_{\mathcal{H}H} \otimes q_x \ge P. \end{align*} This implies that $p^VP= P$. Hence we obtain \[ \|p^VT - T\| \le \|p^VT - p^VPT\| + \|PT - T\| \le 2\|PT - T\| < \varepsilon, \] which concludes the proof. \end{proof} \begin{defn}\label{defn:twisted quasi-local} Let $\prod_{n\in \mathbb{N}}C_b([1,\infty),\mathfrak{K}(\mathcal{H}_{n,E_n}))$ be the product $C^*$-algebra from Definition \ref{defn:twisted Roe}, with elements written as collections $(T_{n,s})_{n\in \mathbb{N},s\in [1,\infty)}$ for $T_{n,s} \in \mathfrak{K}(\mathcal{H}_{n,E_n})$. Let $\mathbb{A}_{sq}(X;E)$ denote the $*$-subalgebra of $\prod_{n\in \mathbb{N}}C_b([1,\infty),\mathfrak{K}(\mathcal{H}_{n,E_n}))$ consisting of elements satisfying the following conditions: \begin{enumerate} \item For any $\varepsilon>0$, there exist $\delta, R >0$ such that for any $n\in \mathbb{N}$, $s \in [1,\infty)$ and $g_n:X_n\rightarrow \mathfrak{K}(\mathcal{H}H)_1$ with $(\delta, R)$-variation we have $\|[T_{n,s}\otimes\mathrm{Id}_{\mathcal{H}H},\Lambda_{\mathcal{H}_{n,E_n}}(g_n)]\|<\varepsilon$, where $\mathcal{H}H$ is the fixed Hilbert space and $\Lambda$ is from (\ref{EQ:Lambda}).
\\[0.05cm] \item $\lim\limits_{s\rightarrow \infty}\sup\limits_{n\in \mathbb{N}}\mathrm{prop}_{E_n}(T_{n,s})=0$.\\[0.05cm] \item For any $\varepsilon>0$, there exists $R'>0$ such that for each $n$, $C_n\subseteq X_n$ and Borel set $D_n\subseteq E_n$ with $d_{E_n}(f_n(C_n),D_n)\ge R'$, we have $\|\chi_{C_n}T_{n,s}\chi_{D_n}\|<\varepsilon$ and $\|\chi_{D_n}T_{n,s}\chi_{C_n}\|<\varepsilon$ for all $s \in [1,\infty)$. \end{enumerate} The \emph{twisted strongly quasi-local algebra} $A_{sq}(X;E)$ of $\{X_n\}_{n\in \mathbb{N}}$ is defined to be the norm-closure of $\mathbb{A}_{sq}(X;E)$ in $\prod_{n\in \mathbb{N}}C_b([1,\infty),\mathfrak{K}(\mathcal{H}_{n,E_n}))$. \end{defn} \begin{rem}\label{rem:Asq explanation} We provide some explanation of condition (3) in Definition \ref{defn:twisted quasi-local}. Recall that $\mathcal{H}_{n,E_n}$ is both an $X_n$-module and an $E_n$-module, so we can consider the \emph{$(X_n \times E_n)$-support} of a given operator $T \in \mathfrak{B}(\mathcal{H}_{n,E_n})$, defined as \[ \mathrm{supp}_{X_n \times E_n}(T):=\big\{(x,v) \in X_n \times E_n: \chi_{\{x\}} T\chi_U \neq 0 \mbox{~for~all~neighbourhoods~}U\mbox{~of~}v\big\}. \] We define the associated \emph{$(X_n \times E_n)$-propagation} of $T$ to be \[ \mathrm{prop}_{X_n,E_n}(T):=\sup\big\{\|f_n(x)-v\|_{E_n}: (x,v) \in \mathrm{supp}_{X_n \times E_n}(T)\big\}. \] Definition \ref{defn:twisted quasi-local}(3) says that $T_{n,s}$ and $T_{n,s}^*$ are \emph{uniformly $(X_n \times E_n)$-quasi-local} in the sense that for any $\varepsilon>0$ there exists $R>0$ such that for each $n$, $C_n\subseteq X_n$ and Borel set $D_n \subseteq E_n$ with $C_n \times D_n \subseteq \big\{(x,v) \in X_n \times E_n: \|f_n(x)-v\|_{E_n} \geq R\big\}$, we have $\|\chi_{C_n}T_{n,s}\chi_{D_n}\|<\varepsilon$ and $\|\chi_{C_n}T^*_{n,s}\chi_{D_n}\|<\varepsilon$ for all $s \in [1,\infty)$. It is clear that norm-limits of operators with uniformly finite $(X_n \times E_n)$-propagation are uniformly $(X_n \times E_n)$-quasi-local.
\end{rem} \begin{lem}\label{lem:A and Aq} We have $A(X;E) \subseteq A_{sq} (X;E) \subseteq \prod_{n\in \mathbb{N}}C_b([1,\infty),\mathfrak{K}(\mathcal{H}_{n,E_n}))$. \end{lem} \begin{proof} Given $T=(T_{n,s})_{n\in \mathbb{N},s\in [1,\infty)} \in \mathbb{A}(X;E)$, condition (1) in Definition \ref{defn:twisted quasi-local} follows from Proposition~\ref{prop:relations between Roe, QL and SQL} and Lemma \ref{lem:char for sequence of strong quasi-locality}. We only need to check condition (3). Given $R>0$, Remark \ref{rem:Asq explanation} implies that it suffices to show that $T_{n,s}\chi_{0,R}^V$ and $(T_{n,s}\chi_{0,R}^V)^\ast= \chi_{0,R}^VT_{n,s}^\ast$ have uniformly finite $(X_n \times E_n)$-propagation for $n\in \mathbb{N}$ and $s\in [1,\infty)$. Now Definition \ref{defn:twisted Roe}(1) implies that there exists an $M>0$ such that $\mathrm{prop}_{X_n}(T_{n,s}) \leq M$ and $\mathrm{prop}_{X_n}(T_{n,s}^\ast) \leq M$ for all $n\in \mathbb{N}$ and $s\in [1,\infty)$. Since $\{f_n: X_n \to E_n\}_{n\in \mathbb{N}}$ is a uniformly coarse embedding, there exists some $\rho_+: \mathbb{R}^+ \to \mathbb{R}^+$ such that $\|f_n(x)-f_n(y)\|_{E_n} \leq \rho_+(d_{X_n}(x,y))$ for $n\in \mathbb{N}$ and $x,y\in X_n$. It follows directly from the definitions that $\mathrm{prop}_{X_n,E_n}(T_{n,s}\chi_{0,R}^V) \leq \rho_+(M)+R$ and $\mathrm{prop}_{X_n,E_n}(\chi_{0,R}^VT_{n,s}^\ast) \leq \rho_+(M)+R$ for all $n\in \mathbb{N}$ and $s\in [1,\infty)$. \end{proof} Finally, we introduce the following operators: \begin{defn}[{\cite[Sections 12.3 and 12.6]{willett2020higher}}]\label{defn:F} For each $n\in \mathbb{N}$, $s\in [1,\infty)$ and $x\in E_n$, Definition \ref{defn:F_{s,x}} provides a bounded operator $F_{s,x} \in \mathfrak{B}(\mathcal{L}_{E_n}^2)$, also denoted by $F_{n,s,x}$. Applying Definition \ref{defn:V-construction}, we obtain an operator $F_{n,s}:=F^V_{n,s+2d_n,0}$ in $\mathfrak{B}(\mathcal{H}_{n,E_n})$, where $d_n$ is the dimension of $E_n$.
Let $F_s$ be the block-diagonal operator in $\prod_n \mathfrak{B}(\mathcal{H}_{n,E_n}) \subseteq \mathfrak{B}(\mathcal{H}_{X,E})$ defined by $F_s:=(F^V_{n,s+2d_n,0})_n$. Finally, we define $F$ to be the element in $\prod_n \mathfrak{B}(L^2([1,\infty);\mathcal{H}_{n,E_n})) \subseteq \mathfrak{B}(L^2 ([1,\infty);\mathcal{H}_{X,E}))$ defined by $(F(u))(s):=F_su(s)$. Similarly, given $\varepsilon>0$, let $\Psi$ be a function as in Proposition~\ref{prop:Psi function} and let $F^\Psi_s$ be the bounded diagonal operator on $\mathcal{H}_{X,E}$ defined by $F^\Psi_s:=(F^\Psi_{n,s})_n$, where $F^\Psi_{n,s}=\Psi(B_{n,s+2d_n,0})^V$. Let $F^\Psi$ be the bounded operator on $\bigoplus_n L^2 ([1,\infty), \mathcal{H}_{n,E_n} )$ defined by $(F^\Psi(u))(s):=F^\Psi_su(s)$. \end{defn} \section{The index map}\label{sec:index map} Recall that in \cite[Sections 12.3 and 12.6]{willett2020higher}, Willett and Yu construct an index map (with notation as in Section \ref{ssec:Twisted Roe and strong quasi-local algebras}): \[ \mathrm{Ind}_F: K_* \big( C^*(\mathcal{H}_X)\cap \prod_n\mathfrak{B}(\mathcal{H}_n) \big) \rightarrow K_*(A(X;E)), \] where $F$ is the operator in Definition \ref{defn:F}. They use $\mathrm{Ind}_F$ to transfer $K$-theoretic information from Roe algebras to their twisted counterparts, which allows them to reprove the coarse Baum-Connes conjecture via local isomorphisms. More precisely, they prove the following: \begin{prop}[{\cite[Proposition 12.6.3]{willett2020higher}}]\label{prop:index map isom.
for Roe} With notation as in Section \ref{ssec:Twisted Roe and strong quasi-local algebras}, for each $s\in [1,\infty)$ the composition \[ K_* \big( C^*(\mathcal{H}_X)\cap \prod_n\mathfrak{B}(\mathcal{H}_n) \big) \stackrel{\mathrm{Ind}_F}{\longrightarrow} K_*(A(X;E)) \stackrel{\iota^s_*}{\longrightarrow} K_*\big( C^*(\mathcal{H}_{X,E})\cap \prod_n\mathfrak{B}(\mathcal{H}_{n,E_n}) \big) \] is an isomorphism, where $\iota^s:A(X;E)\rightarrow C^*(\mathcal{H}_{X,E})$ is the evaluation map at $s$. \end{prop} In this section, we construct index maps in the strongly quasi-local setting and prove similar results. This allows us to prove certain isomorphisms in $K$-theory, which will be used to attack Theorem \ref{thm:main result} later. We follow the procedure in \cite[Section 12.3]{willett2020higher}, although more technical analysis is required. We keep the same notation as in Section \ref{ssec:Twisted Roe and strong quasi-local algebras}. Let $\{X_n\}_{n\in \mathbb{N}}$ be a sequence of finite metric spaces with uniformly bounded geometry which admits a uniformly coarse embedding into Euclidean spaces $\{f_n:X_n\rightarrow E_n\}$, where each $E_n$ is a Euclidean space of even dimension $d_n$. Let us start with several lemmas analysing the relations between the operator $F$ from Definition \ref{defn:F} and the twisted strongly quasi-local algebra $A_{sq}(X;E)$. \begin{lem}\label{lem:F multiplier} The operator $F$ is a self-adjoint, norm-one, odd operator in the multiplier algebra of $A_{sq}(X;E)$. \end{lem} \begin{proof} The operator $F$ is self-adjoint, norm one and odd since each $F_{n,s,x}$ is. Given $\varepsilon>0$, let $\Psi:\mathbb{R} \to [-1,1]$ be a function as in Proposition \ref{prop:Psi function} for this $\varepsilon$.
Then Proposition \ref{prop:Psi function}(1) implies: \[ \|F - F^\Psi\| \leq \sup_{n\in \mathbb{N}, s\in [1,\infty)} \|F^V_{n,s,0} - \Psi(B_{n,s,0})^V\| \leq \sup_{n\in \mathbb{N}, s\in [1,\infty)} \sup_{x\in X_n} \|F_{n,s,f_n(x)} - \Psi(B_{n,s,f_n(x)})\| \leq \varepsilon. \] Hence it suffices to show that $(T_{n,s}) F^\Psi=(T_{n,s}F^\Psi_{n,s})$ belongs to $\mathbb{A}_{sq}(X;E)$ for any $(T_{n,s}) \in \mathbb{A}_{sq}(X;E)$. First, it follows from \cite[Lemma 12.3.5]{willett2020higher} that for each $n\in \mathbb{N}$, the map $s\mapsto T_{n,s}F^\Psi_{n,s}$ is norm-continuous. Now we verify conditions (1)-(3) in Definition \ref{defn:twisted quasi-local} for $(T_{n,s}F^\Psi_{n,s})$. Note that condition (2) follows directly from Proposition \ref{prop:Psi function}(2), and condition (3) holds since the propagations $\mathrm{prop}_{E_n}(F^\Psi_{n,s})$ are uniformly finite for all $n\in \mathbb{N}$ and $s \in [1,\infty)$. For condition (1), note that for any $n\in\mathbb{N}$, $s\in [1,\infty)$ and $g:X_n\rightarrow\mathfrak{K}(\mathcal{H}H)_1$, we have \[ \big( F^\Psi_{n,s}\otimes\mathrm{Id}_{\mathcal{H}H} \big) \cdot \Lambda(g) = \Lambda(g) \cdot \big( F^\Psi_{n,s}\otimes\mathrm{Id}_{\mathcal{H}H} \big). \] Hence we obtain: \[ \big\|[(T_{n,s}F^\Psi_{n,s})\otimes\mathrm{Id}_{\mathcal{H}H}, \Lambda(g)]\big\|=\big\|\big([T_{n,s}\otimes\mathrm{Id}_{\mathcal{H}H},\Lambda(g)]\big)\cdot \big(F^\Psi_{n,s}\otimes\mathrm{Id}_{\mathcal{H}H}\big)\big\| \le \big\|[T_{n,s}\otimes\mathrm{Id}_{\mathcal{H}H},\Lambda(g)]\big\|, \] which concludes the proof. \end{proof} \begin{lem}\label{lem:multiplier} Considered as represented on $L^2([1,\infty)) \otimes\mathcal{H}_{X,E}$ via amplification of the identity, $C^*_{sq}(\mathcal{H}_X)\cap \prod_n\mathfrak{B}(\mathcal{H}_n)$ is a subalgebra of the multiplier algebra of $A_{sq}(X;E)$.
\end{lem} \begin{proof} It suffices to show that $(S_nT_{n,s}) \in \mathbb{A}_{sq}(X;E)$ for any $(T_{n,s}) \in \mathbb{A}_{sq}(X;E)$ and $(S_n)\in C^*_{sq}(\mathcal{H}_X)\cap \prod_n\mathfrak{B}(\mathcal{H}_n)$.\footnote{To be more precise, $(S_nT_{n,s})$ stands for $\big((S_n\otimes \mathrm{Id}_{\mathcal{L}^2_{E_n}}) \cdot T_{n,s}\big)$.} It is clear that the map $s\mapsto S_nT_{n,s}$ is norm-continuous and bounded for each $n\in \mathbb{N}$. Now we verify conditions (1)-(3) in Definition \ref{defn:twisted quasi-local} for $(S_nT_{n,s})$. First note that for any $n\in\mathbb{N}$, $s\in [1,\infty)$ and $g:X_n\rightarrow\mathfrak{K}(\mathcal{H}H)_1$ we have \begin{equation}\label{EQ:commutant lemma} \big\| [S_n\otimes \mathrm{Id}_{\mathcal{L}^2_{E_n}}\otimes \mathrm{Id}_{\mathcal{H}H}, \Lambda_{\mathcal{H}_{n,E_n}}(g)] \big\| = \big\| [S_n\otimes \mathrm{Id}_{\mathcal{H}H}, \Lambda_{\mathcal{H}_n}(g)] \big\|. \end{equation} Hence condition (1) follows from direct calculations together with Lemma \ref{lem:char for sequence of strong quasi-locality}. Condition (2) follows from the fact that each $S_n\otimes \mathrm{Id}_{\mathcal{L}^2_{E_n}}$ has zero $E_n$-propagation. Now we check condition (3). Given $\varepsilon>0$, it follows from $(S_n)\in C^*_{sq}(\mathcal{H}_X) \subseteq C^*_q(\mathcal{H}_X)$ that there exists $R_1>0$ such that $S_n$ has $(\frac{\varepsilon}{2\|(T_{n,s})\|},R_1)$-propagation for all $n\in\mathbb{N}$. On the other hand, there exists $R_2>0$ such that for any $n\in\mathbb{N}$, $s\in [1,\infty)$, $C_n\subseteq X_n$ and Borel set $D_n\subseteq E_n$ with $d_{E_n}(f_n(C_n),D_n)\ge R_2$ we have $\|\chi_{C_n}T_{n,s}\chi_{D_n}\|<\frac{\varepsilon}{2\|(S_n)\|}$ and $\|\chi_{D_n}T_{n,s}\chi_{C_n}\|<\frac{\varepsilon}{2\|(S_n)\|}$. Now let $R=\rho_+ (R_1) + R_2$, where $\rho_+$ comes from the uniformly coarse embedding $\{ f_n : X_n \rightarrow E_n \}$.
For any $n\in\mathbb{N}$, $C_n'\subseteq X_n$ and Borel set $D_n'\subseteq E_n$ with $d_{E_n}(f_n(C_n'),D_n')\ge R$, we have $f_n( \mathcal{N}_{R_1}(C_n') ) \subseteq \mathcal{N}_{\rho_+(R_1)}( f_n(C_n') )$, which implies that $d_{E_n}(f_n( \mathcal{N}_{R_1}(C_n') ) , D_n')\ge R_2$. Therefore, we obtain: \begin{align*} \| \chi_{C_n'} S_nT_{n,s}\chi_{D_n'} \| \le & \|\chi_{C_n'} S_n T_{n,s}\chi_{D_n'} - \chi_{C_n'} S_n \chi_{\mathcal{N}_{R_1}(C_n')} T_{n,s}\chi_{D_n'} \| + \| \chi_{C_n'} S_n \chi_{\mathcal{N}_{R_1}(C_n')} T_{n,s}\chi_{D_n'} \| \\ \le & \| \chi_{C_n'} S_n \chi_{(\mathcal{N}_{R_1}(C_n'))^c} \| \cdot \|T_{n,s}\| + \| S_n \| \cdot \| \chi_{\mathcal{N}_{R_1}(C_n')} T_{n,s}\chi_{D_n'} \| \\ < & \frac{\varepsilon}{2\|(T_{n,s})\|} \cdot \| (T_{n,s}) \| + \|(S_n)\| \cdot \frac{\varepsilon}{2\|(S_n)\|} = \varepsilon \end{align*} for all $s\in [1,\infty)$. On the other hand, we have: \[ \| \chi_{D_n'} S_n T_{n,s} \chi_{C_n'} \| = \| S_n \chi_{D_n'} T_{n,s} \chi_{C_n'} \| \le \|S_n\| \cdot \| \chi_{D_n'} T_{n,s} \chi_{C_n'} \| < \varepsilon \] for all $s\in [1,\infty)$. This finishes the proof. \end{proof} Regarding $C^*_{sq}(\mathcal{H}_X)\cap \prod_n\mathfrak{B}(\mathcal{H}_n)$ as a subalgebra of $\mathfrak{B}(L^2([1,\infty)) \otimes\mathcal{H}_{X,E})$ as in Lemma \ref{lem:multiplier}, we have the following: \begin{lem}\label{lem:commutator} For any $(S_n) \in C^*_{sq}(\mathcal{H}_X)\cap \prod_n\mathfrak{B}(\mathcal{H}_n)$, we have $[(S_n),F] \in A_{sq}(X;E)$. \end{lem} \begin{proof} By Proposition~\ref{prop:Psi function}(1), it suffices to show that the map \[ s\mapsto [(S_n),F_s^\Psi] = [(S_n) , (\Psi(B_{n,s+2d_n,0})^V) ] \] belongs to $\mathbb{A}_{sq}(X;E)$ for any $\Psi$ as in Proposition~\ref{prop:Psi function}, \emph{i.e.}, to verify conditions (1)-(3) in Definition \ref{defn:twisted quasi-local}.
First note that for any $n\in\mathbb{N}$, $s\in[1,\infty)$ and $g:X_n\rightarrow \mathfrak{K}(\mathcal{H}H)_1$ we have \begin{align*} &\big[ [S_n,\Psi(B_{n,s+2d_n,0})^V]\otimes\mathrm{Id}_{\mathcal{H}H}, \Lambda_{\mathcal{H}_{n,E_n}}(g) \big] \\ &=[S_n\otimes \mathrm{Id}_{\mathcal{L}^2_{E_n}}\otimes\mathrm{Id}_{\mathcal{H}H}, \Lambda_{\mathcal{H}_{n,E_n}}(g)]\Psi(B_{n,s+2d_n,0})^V+ \Psi(B_{n,s+2d_n,0})^V[\Lambda_{\mathcal{H}_{n,E_n}}(g), S_n\otimes \mathrm{Id}_{\mathcal{L}^2_{E_n}}\otimes\mathrm{Id}_{\mathcal{H}H}], \end{align*} which has norm at most $2\|[S_n\otimes \mathrm{Id}_{\mathcal{H}H}, \Lambda_{\mathcal{H}_{n}}(g)]\|$ according to (\ref{EQ:commutant lemma}). Hence we conclude condition (1) from the strong quasi-locality of $(S_n)$. Condition (2) follows from Proposition~\ref{prop:Psi function}(2) and the fact that $S_n$ has zero $E_n$-propagation. To check condition (3), we fix an $\varepsilon>0$. It follows from Proposition~\ref{prop:subspace strong quasi-locality} that there exist $\delta', R'>0$ such that for any $n\in\mathbb{N}$, $A\subseteq X_n$ and $g:A\rightarrow \mathfrak{K}(\mathcal{H}H)_1$ with $(\delta', R')$-variation we have $\big\|[\chi_{A}S_n\chi_{A}\otimes\mathrm{Id}_{\mathcal{H}H},\Lambda(g)]\big\|<\frac{\varepsilon}{4}$. Moreover, since $C^*_{sq}(\mathcal{H}_X)\subseteq C^*_q(\mathcal{H}_X)$, we may assume that $(S_n)$ has $(\frac{\varepsilon}{4},R')$-propagation. Denote by $\rho_+$ the parameter function from the uniformly coarse embedding $\{ f_n : X_n \rightarrow E_n \}$. By Proposition~\ref{prop:Psi function}(10), there exists $R''>0$ such that for all $n\in\mathbb{N}$, $s\geq 2d_n$ and $x,y\in E_n$ with $\|x-y\|_{E_n} \le\rho_+(R')$, we have $\|(\Psi(B_{n,s,x}) -\Psi(B_{n,s,y}))(1-\chi_{B(x,R'')})\|<\delta'$. Set $R=R''+\rho_+(R')$.
For any $n\in\mathbb{N}$, $s\in [1,\infty)$, $C\subseteq X_n$ and Borel set $D\subseteq E_n$ with $d_{E_n}(f_n(C),D)\ge R$, we are going to estimate the norm $\|\chi_C[S_n,F_{n,s}^\Psi]\chi_D\|$. Denote $C':=\mathcal{N}_{R'}(C) \subseteq X_n$. Since $(S_n)$ has $(\frac{\varepsilon}{4},R')$-propagation, we obtain: \begin{equation}\label{EQ:commutant lem2} \big\|\chi_C[S_n,F_s^\Psi]\chi_D\big\| < 2\cdot \frac{\varepsilon}{4}+ \big\|\chi_C[\chi_{C'}S_n\chi_{C'},F_s^\Psi \chi_D]\big\|. \end{equation} Consider the function \[ g: X_n \to \mathfrak{B}(\mathcal{L}_{E_n}^2)_1 \quad \mbox{defined by} \quad x \mapsto \Psi(B_{n,s+2d_n,f_n(x)}) \chi_D. \] Proposition~\ref{prop:Psi function}(4) implies that $g(x)-g(y) \in \mathfrak{K}(\mathcal{L}_{E_n}^2)$ for any $x,y\in X_n$. Moreover, we claim that $g$ has $(\delta',R')$-variation on $C'$. In fact, for any $x,y\in C'$ with $d_{X_n}(x,y)< R'$ we have $\|f_n(x)-f_n(y)\|_{E_n}\le \rho_+(R')$. Note that $d_{E_n}(f_n(C),D)\ge R$ and $x\in C'=\mathcal{N}_{R'}(C)$, hence $D \subseteq E_n \setminus B(f_n(x),R'')$. Therefore, by the choice of $R''$ above, we obtain that $g$ has $(\delta',R')$-variation on $C'$. Finally, note that each $\mathcal{L}_{E_n}^2$ is separable and infinite-dimensional, hence isomorphic to the fixed Hilbert space $\mathcal{H}H$. Fixing an $x_0 \in X_n$, we define $\hat{g}: X_n \to \mathfrak{K}(\mathcal{L}_{E_n}^2)_1$ by $\hat{g}(x):=\frac{g(x)-g(x_0)}{2}$. It follows from the above analysis that $\hat{g}$ has $(\delta',R')$-variation on $C'$. Hence, by the choice of $\delta'$ and $R'$ at the beginning, we obtain that \[ [\chi_{C'}S_n\chi_{C'},F_s^\Psi \chi_D] = \big[(\chi_{C'}S_n \chi_{C'}) \otimes \mathrm{Id}_{\mathcal{L}_{E_n}^2}, 2\Lambda_{\mathcal{H}_n}(\hat{g})\big] \] has norm at most $\frac{\varepsilon}{2}$.
Combining with (\ref{EQ:commutant lem2}), we obtain: \[ \big\|\chi_C[S_n,F_s^\Psi]\chi_D\big\| < 2\cdot \frac{\varepsilon}{4}+ \big\|\chi_C[\chi_{C'}S_n\chi_{C'},F_s^\Psi \chi_D]\big\| \leq \frac{\varepsilon}{2}+\frac{\varepsilon}{2} = \varepsilon. \] Similarly, we have $\|\chi_D[S_n,F_{n,s}^\Psi]\chi_C\| < \varepsilon$. Hence we conclude the proof. \end{proof} \begin{lem}\label{lem:projection} For any projection $(p_n) \in C^*_{sq}(\mathcal{H}_X)\cap \prod_n\mathfrak{B}(\mathcal{H}_n)$, the function \[ s\mapsto ( (p_n)F_s(p_n) )^2 - (p_n) \] is in $(p_n) A_{sq}(X;E) (p_n)$. \end{lem} \begin{proof} By Lemma~\ref{lem:commutator}, it suffices to show that the function $ s\mapsto (p_n)F_s^2 - (p_n)$ is in $A_{sq}(X;E)$. Moreover, we only need to show that the function \[ s\mapsto (p_n) (F_{n,s}^\Psi)^2 - (p_n) = (p_n (\Psi(B_{n,s+2d_n,0})^V)^2) - (p_n) \] is in $\mathbb{A}_{sq}(X;E)$ for any $\Psi$ as in Proposition~\ref{prop:Psi function}. For $n\in \mathbb{N}$, it follows from Proposition \ref{prop:Psi function}(7) that the function $s \mapsto p_n(F_{n,s}^\Psi)^2 - p_n$ is bounded and continuous. Now we verify conditions (1)-(3) in Definition \ref{defn:twisted quasi-local}.
First note that for any $n\in\mathbb{N}$, $s\in[1,\infty)$ and $g:X_n\rightarrow \mathfrak{K}(\mathcal{H}H)_1$ we have \begin{align*} \big[&(p_n(\Psi(B_{n,s+2d_n,0})^V)^2-p_n)\otimes\mathrm{Id}_{\mathcal{H}H}, \Lambda_{\mathcal{H}_{n,E_n}}(g)\big] \\ &= \big[p_n \otimes\mathrm{Id}_{\mathcal{L}^2_{E_n}}\otimes\mathrm{Id}_{\mathcal{H}H}, \Lambda_{\mathcal{H}_{n,E_n}}(g)\big]\cdot \big((\Psi(B_{n,s+2d_n,0})^V)^2\otimes\mathrm{Id}_{\mathcal{H}H}\big) - \big[p_n \otimes\mathrm{Id}_{\mathcal{L}^2_{E_n}} \otimes\mathrm{Id}_{\mathcal{H}H}, \Lambda_{\mathcal{H}_{n,E_n}}(g)\big], \end{align*} which has norm at most $2\|[p_n\otimes \mathrm{Id}_{\mathcal{H}H}, \Lambda_{\mathcal{H}_{n}}(g)]\|$ according to (\ref{EQ:commutant lemma}). Hence we conclude condition (1) from the strong quasi-locality of $(p_n)$. Condition (2) follows from Proposition~\ref{prop:Psi function}(2) and the fact that $p_n$ has zero $E_n$-propagation. Finally, condition (3) follows from the uniform quasi-locality of $(p_n)$ together with Proposition~\ref{prop:Psi function}(9). Hence we conclude the proof. \end{proof} Having obtained the above essential ingredients, we are now in a position to construct the index map. It follows a standard construction in $K$-theory (see, \emph{e.g.}, \cite[Definition 2.8.5]{willett2020higher}): \begin{defn}\label{defn:index definition} Let $\mathcal{H}=\mathcal{H}^{+}\oplus \mathcal{H}^{-}$ be a graded Hilbert space with grading operator $U$ (\emph{i.e.}, $U$ is a self-adjoint unitary operator in $\mathfrak{B}(\mathcal{H})$ such that $\mathcal{H}^{\pm}$ coincides with the $(\pm 1)$-eigenspace of $U$), and let $A$ be a $C^*$-subalgebra of $\mathfrak{B}(\mathcal{H})$ such that $U$ is in the multiplier algebra of $A$.
Let $F\in \mathfrak{B}(\mathcal{H})$ be an odd operator of the form \[ F = \begin{pmatrix} 0&V \\ W&0 \end{pmatrix} \] for some operators $V: \mathcal{H}^{-} \rightarrow \mathcal{H}^{+}$ and $W: \mathcal{H}^{+} \rightarrow \mathcal{H}^{-}$. Suppose $F$ satisfies: \begin{itemize} \item $F$ is in the multiplier algebra of $A$; \item $F^2-1$ is in $A$. \end{itemize} Then we define the \emph{index class} $\mathrm{Ind}[F] \in K_0(A)$ of $F$ to be \[ \mathrm{Ind}[F] := \begin{bmatrix} (1-VW)^2&V(1-WV) \\ W(2-VW)(1-VW)&WV(2-WV) \end{bmatrix} - \begin{bmatrix} 0&0 \\ 0&1 \end{bmatrix}. \] \end{defn} \, \\[-.7cm] Combining Lemmas \ref{lem:F multiplier}--\ref{lem:projection}, we obtain that for any projection $(p_n) \in C^*_{sq}(\mathcal{H}_X)\cap \prod_n\mathfrak{B}(\mathcal{H}_n)$ the operator $( (p_n)F_s(p_n) )$ is an odd self-adjoint operator on the graded Hilbert space $\bigoplus_n p_n(L^2 ( [1,\infty), \mathcal{H}_{n,E_n} ) )$ satisfying: \begin{itemize} \item $((p_n)F_s(p_n)) $ is in the multiplier algebra of $(p_n) A_{sq}(X;E) (p_n)$; \item $( (p_n)F_s(p_n) )^2 - (p_n) $ is in $(p_n) A_{sq}(X;E) (p_n)$. \end{itemize} Hence Definition~\ref{defn:index definition} produces an index class in $K_0((p_n) A_{sq}(X;E) (p_n))$. Composing with the $K_0$-map induced by the inclusion $(p_n) A_{sq}(X;E) (p_n) \hookrightarrow A_{sq}(X;E)$, we get an element in $K_0( A_{sq}(X;E))$, denoted by $\mathrm{Ind}_{F,sq}[(p_n)]$. Analogously to \cite[Lemma~12.3.11]{willett2020higher}, we obtain the following: \begin{prop}\label{prop:index} Through the process above, together with a suspension argument, we get well-defined $K_\ast$-maps for $\ast=0,1$: \[ \mathrm{Ind}_{F,sq} : K_\ast \big( C^*_{sq}(\mathcal{H}_X)\cap \prod_n\mathfrak{B}(\mathcal{H}_n) \big) \rightarrow K_\ast( A_{sq}(X;E) ), \] which are called the \emph{strongly quasi-local index maps}. \end{prop} Finally, we have the following result (compare with Proposition \ref{prop:index map isom. for Roe}).
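As a sanity check of the index formula in Definition \ref{defn:index definition}, one can compute it in a finite-dimensional toy model (this illustration and the choice of $V$, $W$ are ours, not from the text): when $WV$ and $VW$ are close to identities, the first $2\times 2$ block matrix is an idempotent, and comparing traces with the reference projection recovers an integer index.

```python
import numpy as np

# Toy model (hypothetical choice): H^+ = C^3, H^- = C^2,
# V: H^- -> H^+ the isometric inclusion, W = V^T, so WV = 1 exactly.
V = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])   # 3x2
W = V.T                                              # 2x3
I3, I2 = np.eye(3), np.eye(2)

# The 2x2 block matrix from the definition of Ind[F]
P = np.block([
    [(I3 - V @ W) @ (I3 - V @ W),        V @ (I2 - W @ V)],
    [W @ (2*I3 - V @ W) @ (I3 - V @ W),  W @ V @ (2*I2 - W @ V)],
])
Q = np.block([[np.zeros((3, 3)), np.zeros((3, 2))],
              [np.zeros((2, 3)), I2]])               # reference projection diag(0, 1)

assert np.allclose(P @ P, P)                 # P is an idempotent
index = int(round(np.trace(P) - np.trace(Q)))
assert index == 1                            # = dim ker W - dim ker V in this toy case
```

Here $\ker V = 0$ and $\ker W$ is one-dimensional, and the trace difference of the two projections detects exactly this integer, as expected from a $K_0$-class.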
The proof is almost identical to those of \cite[Proposition~12.3.13 and Proposition~12.6.3]{willett2020higher}, hence omitted. \begin{prop}\label{prop:strong quasi-local index map isom.} Given $s\in [1,\infty)$, let $\iota^s:A_{sq}(X;E)\rightarrow C^*_{sq}(\mathcal{H}_{X,E})\cap \prod_n\mathfrak{B}(\mathcal{H}_{n,E_n})$ be the evaluation map at $s$. Then the composition \[ K_* \big( C^*_{sq}(\mathcal{H}_X)\cap \prod_n\mathfrak{B}(\mathcal{H}_n) \big) \stackrel{\mathrm{Ind}_{F,sq}}{\longrightarrow} K_*(A_{sq}(X;E)) \stackrel{\iota^s_*}{\longrightarrow} K_*\big( C^*_{sq}(\mathcal{H}_{X,E})\cap \prod_n\mathfrak{B}(\mathcal{H}_{n,E_n}) \big) \] is an isomorphism. \end{prop} \section{Isomorphisms between twisted algebras in $K$-theory}\label{sec:loc isom} In this section, we study the $K$-theory of the twisted algebras $A(X;E)$ and $A_{sq}(X;E)$ defined in Section~\ref{ssec:Twisted Roe and strong quasi-local algebras}. The main result is the following: \begin{prop}\label{prop:iso. of twisted algebras in $K$-theory} The inclusion map from $A(X;E)$ to $A_{sq}(X;E)$ induces an isomorphism in $K$-theory. \end{prop} The proof follows the outline of that in \cite[Section 12.4]{willett2020higher}, and the main ingredient is to use appropriate Mayer-Vietoris arguments for twisted algebras (Proposition~\ref{prop:M_V for twisted algebra}). This allows us to chop the space into easily handled pieces, on which we prove the required local isomorphisms (Proposition~\ref{prop:local isom. of inclusion in K-theory}). By saying that $(F_n)_{n \in \mathbb{N}}$ is a sequence of closed subsets in $(E_n)$, we mean that $F_n$ is a closed subset of $E_n$ for each $n$. First, we define the following subalgebras associated to $(F_n)$, inspired by \cite[Definition 6.3.5]{willett2020higher}.
\begin{defn}\label{defn:twisted ideal} For a sequence of closed subsets $(F_n)$ in $(E_n)$, we define $\mathbb{A}_{sq,(F_n)}(X;E)$ to be the set of elements $(T_{n,s}) \in \mathbb{A}_{sq}(X;E)$ satisfying the following: for each $n$ and $\varepsilon >0$ there exists $s_{n, \varepsilon }>0$ such that for $s\ge s_{n, \varepsilon }$ we have \[ \mathrm{supp} _{E_n} (T_{n,s}) \subseteq \mathcal{N} _\varepsilon (F_n) \times \mathcal{N} _\varepsilon (F_n). \] Denote by $A_{sq,(F_n)}(X;E)$ the norm closure of $\mathbb{A}_{sq,(F_n)}(X;E)$ in $A_{sq}(X;E)$. Similarly, we define $A_{(F_n)}(X;E) \subseteq A(X;E)$ in the case of the twisted Roe algebra. \end{defn} It is easy to see that $A_{sq,(F_n)}(X;E)$ and $A_{(F_n)}(X;E)$ are closed two-sided ideals in $A_{sq}(X;E)$ and $A(X;E)$ respectively. Moreover, we have the following: \begin{lem} Let $(F_n)$ and $(G_n)$ be two sequences of compact subsets in $(E_n)$. Then \[ A_{sq,(F_n)}(X;E) \cap A_{sq,(G_n)}(X;E) = A_{sq,(F_n\cap G_n)}(X;E) \] and \[ A_{sq,(F_n)}(X;E) + A_{sq,(G_n)}(X;E) = A_{sq,(F_n\cup G_n)}(X;E). \] The same holds for twisted Roe algebras. \end{lem} \begin{proof} We only prove the case of twisted strongly quasi-local algebras, while the Roe algebra case is similar. The first equality follows from the $C^*$-algebraic fact that the intersection of two closed two-sided ideals coincides with their product, together with the following basic fact for metric spaces: for a compact metric space $K$, a closed cover $(C,D)$ of $K$ and $\varepsilon>0$, there exists $\delta>0$ such that $\mathcal{N}_\delta(C) \cap \mathcal{N}_\delta(D) \subseteq \mathcal{N}_\varepsilon(C\cap D)$. For the second, we fix $(T_{n,s})\in \mathbb{A}_{sq,(F_n\cup G_n)}(X;E)$.
By definition, for each $n$ there is a strictly increasing sequence $(s_{n,k})_{k\in \mathbb{N}}$ in $[1,\infty)$ tending to infinity such that for $s\ge s_{n,k}$ we have \[ \mathrm{supp}_{E_n}(T_{n,s}) \subseteq \mathcal{N}_{\frac{1}{k+1}}(F_n\cup G_n) \times \mathcal{N}_{\frac{1}{k+1}}(F_n \cup G_n). \] For each $n$, we construct an operator $(W_{n,s})_s$ on $L^2([1,\infty)) \otimes \mathcal{H}_{n,E_n}$ as follows, where $W_{n,s}\in \mathfrak{B}(\mathcal{H}_{n,E_n})$. We set: \[ W_{n,s}= \begin{cases} ~\chi_{\mathcal{N}_{1}(F_n)}, & \mbox{if~} 1\le s \le s_{n,1}; \\[0.3cm] ~\frac{s_{n,k+1}-s}{s_{n,k+1} - s_{n,k}}\chi_{\mathcal{N}_{\frac{1}{k}}(F_n)} + \frac{s-s_{n,k}}{s_{n,k+1} - s_{n,k}}\chi_{\mathcal{N}_{\frac{1}{k+1}}(F_n)}, & \mbox{if~} s_{n,k} \leq s \leq s_{n,k+1}, k\in \mathbb{N}. \end{cases} \] Then $(W_{n,s})$ is in the multiplier algebra of $A_{sq}(X;E)$. Now we consider the decomposition: \[ (T_{n,s}) = (W_{n,s}) (T_{n,s}) + (1-W_{n,s}) (T_{n,s}) (W_{n,s}) +(1-W_{n,s}) (T_{n,s}) (1-W_{n,s}) . \] It is clear that $(W_{n,s}) (T_{n,s})$ and $(1-W_{n,s}) (T_{n,s}) (W_{n,s})$ are in $A_{sq,(F_n)}(X;E)$. Also note that from the construction above, for each $n$ and $s\ge s_{n,k}$ we have: \[ \mathrm{supp}_{E_n}((1-W_{n,s}) T_{n,s} (1-W_{n,s})) \subseteq \mathcal{N}_{\frac{1}{k+1}}( G_n) \times \mathcal{N}_{\frac{1}{k+1}}( G_n). \] Hence we obtain that $A_{sq,(F_n)}(X;E) + A_{sq,(G_n)}(X;E)$ is dense in $A_{sq,(F_n\cup G_n)}(X;E)$, which concludes the proof. \end{proof} Consequently, we obtain the following Mayer-Vietoris sequences for twisted algebras: \begin{prop}\label{prop:M_V for twisted algebra} Let $(F_n)$ and $(G_n)$ be two sequences of compact subsets in $(E_n)$.
Then we have the following six-term exact sequence: \begin{small} \[ \begin{CD} K_0(A_{sq,(F_n\cap G_n)}(X;E)) @>>> K_0(A_{sq,(F_n)}(X;E))\oplus K_0(A_{sq,(G_n)}(X;E)) @>>> K_0(A_{sq,(F_n\cup G_n)}(X;E)) \\ @AAA @. @VVV \\ K_1(A_{sq,(F_n\cup G_n)}(X;E)) @<<< K_1(A_{sq,(F_n)}(X;E))\oplus K_1(A_{sq,(G_n)}(X;E)) @<<< K_1(A_{sq,(F_n\cap G_n)}(X;E)). \end{CD} \] \end{small} The same holds in the case of twisted Roe algebras. Furthermore, we have the following commutative diagram:\\ \begin{small} \centerline{ \xymatrix{ \cdots \ar[r] & K_*(A_{(F_n\cap G_n)}(X;E)) \ar[r] \ar[d] & K_*(A_{(F_n)}(X;E))\oplus K_*(A_{(G_n)}(X;E)) \ar[d]\ar[r] & K_*(A_{(F_n\cup G_n)}(X;E)) \ar[d]\ar[r] & \cdots \\ \cdots \ar[r] & K_*(A_{sq,(F_n\cap G_n)}(X;E)) \ar[r] & K_*(A_{sq,(F_n)}(X;E))\oplus K_*(A_{sq,(G_n)}(X;E)) \ar[r] & K_*(A_{sq,(F_n\cup G_n)}(X;E)) \ar[r] & \cdots }} \end{small} where the vertical maps are induced by inclusions. \end{prop} Proposition \ref{prop:M_V for twisted algebra} allows us to chop the space into small pieces, on which we have the following ``local isomorphism'' result. Recall that a family $\{Y_i\}_{i\in I}$ of subspaces in a metric space $Y$ is \emph{mutually $R$-separated} for some $R>0$ if $d(Y_i,Y_j) >R$ for $i \neq j$. \begin{prop}\label{prop:local isom. of inclusion in K-theory} Let $(F_n)$ be a sequence of closed subsets in $(E_n)$ such that $F_n = \bigsqcup_{j=1}^{\infty}F_j^{(n)}$ for a mutually $3$-separated family $\{F_j^{(n)}\}_j$ and there exist $R>0$ and $x^{(n)}_j\in X_n$ such that $F^{(n)}_j \subseteq B(f_n(x^{(n)}_j);R)$. Then the inclusion map from $A_{(F_n)}(X;E)$ to $A_{sq,(F_n)}(X;E)$ induces an isomorphism in $K$-theory. \end{prop} Before we prove Proposition \ref{prop:local isom. of inclusion in K-theory}, let us use it to finish the proof of Proposition \ref{prop:iso. of twisted algebras in $K$-theory}.
To achieve that, we need an extra lemma from \cite[Lemma 12.4.5]{willett2020higher}: \begin{lem}\label{lem:decomposition} For any $s>0$, there exist $M \in \mathbb{N}$ and decompositions \[ X_n=X_{n,1}\sqcup X_{n,2}\sqcup \cdots \sqcup X_{n,M}, \quad \mbox{for all } n\in \mathbb{N}, \] such that the family $\big\{\overline{B(f_n(x);s)}\big\}_{x\in X_{n,i}}$ is mutually $3$-separated for each $n\in\mathbb{N}$ and $i=1,2,\ldots,M$. \end{lem} \begin{proof}[Proof of Proposition \ref{prop:iso. of twisted algebras in $K$-theory}] Given $s>0$, let $M\in \mathbb{N}$ and $\{X_{n,i}\}_{n\in\mathbb{N},1\le i\le M}$ be obtained by Lemma~\ref{lem:decomposition}. Setting $W^s_n:=\mathcal{N}_s(f_n(X_n))$ and $W_{n,i}^s:=\bigsqcup_{x\in X_{n,i}}\overline{B(f_n(x);s)}$, we have $W^s_n=\bigcup_{i=1}^M W_{n,i}^s$. Applying Proposition~\ref{prop:local isom. of inclusion in K-theory} to the sequence of subsets $(W_{n,i}^s)_n$ for each $i$, we obtain that the inclusion map \[ A_{(W_{n,i}^s)}(X;E) \rightarrow A_{sq,(W_{n,i}^s)}(X;E) \] induces an isomorphism in $K$-theory. Applying the Mayer-Vietoris sequence from Proposition~\ref{prop:M_V for twisted algebra} $(M-1)$ times (and Proposition~\ref{prop:local isom. of inclusion in K-theory} again to deal with the intersections) together with the Five Lemma, we obtain that the inclusion map \[ A_{(W^s_n)}(X;E) \rightarrow A_{sq,(W^s_n)}(X;E) \] induces an isomorphism in $K$-theory. Finally, note that condition $(3)$ in Definition~\ref{defn:twisted Roe} and condition $(3)$ in Definition~\ref{defn:twisted quasi-local} imply that \[ A(X;E)=\lim_{s\rightarrow \infty}A_{(W^s_n)}(X;E) \quad \mbox{and} \quad A_{sq}(X;E)=\lim_{s\rightarrow \infty}A_{sq,(W^s_n)}(X;E). \] Hence we conclude the proof. \end{proof} The rest of this section is devoted to the proof of Proposition \ref{prop:local isom. of inclusion in K-theory}. First let us introduce some more notation. Let $(F_n)$ and $(G_n)$ be sequences of closed subsets in $(E_n)$.
We define: \[ A_{sq}( X;(G_n) ):=(1_{\mathcal{H}_n} \otimes \chi_{G_n})_n \cdot A_{sq}(X;E) \cdot (1_{\mathcal{H}_n} \otimes \chi_{G_n})_n \] and \[ A_{ sq,(F_n) }( X;(G_n) ):= A_{sq}( X;(G_n) ) \cap A_{ sq,(F_n) }( X;E ). \] Also define $A( X;(G_n) )$ and $A_{ (F_n) }( X;(G_n) )$ in a similar way. Moreover, given a sequence of subspaces $Z_n \subseteq X_n~ (n\in \mathbb{N})$ we define: \[ A_{sq}( (Z_n) ; (G_n) ):=\big(\chi_{Z_n} \otimes \mathrm{Id}_{\mathcal{L}^2_{E_n}}\big)_n \cdot A_{sq}( X;(G_n) ) \cdot \big(\chi_{Z_n} \otimes \mathrm{Id}_{\mathcal{L}^2_{E_n}}\big)_n \] and \[ A_{sq,(F_n) }( (Z_n) ; (G_n) ):=\big(\chi_{Z_n} \otimes \mathrm{Id}_{\mathcal{L}^2_{E_n}}\big)_n \cdot A_{sq,(F_n) }( X;(G_n) ) \cdot \big(\chi_{Z_n} \otimes \mathrm{Id}_{\mathcal{L}^2_{E_n}}\big)_n. \] Also define $A( (Z_n);(G_n) )$ and $A_{ (F_n) }( (Z_n);(G_n) )$ in a similar way. Now we move back to the setting of Proposition \ref{prop:local isom. of inclusion in K-theory}. Let $(F_n)$ be a sequence of closed subsets in $(E_n)$ such that $F_n = \bigsqcup_{j=1}^{\infty}F_j^{(n)}$ for a mutually $3$-separated family $\{F_j^{(n)}\}_j$. Taking $G_j^{(n)}=\mathcal{N}_{1}(F_j^{(n)})$ for each $j$ and $n$, we define the ``restricted product'': \[ \prod_{j}^{res}A_{sq,(F_j^{(n)}) }( X;(G_j^{(n)}) ) := \big(\prod_{j}A_{sq,(F_j^{(n)}) }( X;(G_j^{(n)}) ) \big) \cap A_{sq,(F_n)}(X;E). \] Similarly, we define $\prod_{j}^{res}A_{ (F_j^{(n)}) }( X;(G_j^{(n)}) )$ in the case of twisted Roe algebra. The following lemma is a key step in the proof of Proposition \ref{prop:local isom. of inclusion in K-theory}: \begin{lem}\label{lem:cluster axiom} Using the same notation as above, the inclusion \[ i: \prod_{j}^{res}A_{sq,(F_j^{(n)}) }( X;(G_j^{(n)}) ) \hookrightarrow A_{sq,(F_n)}(X;E) \] induces an isomorphism in $K$-theory. The same holds for the twisted Roe algebra case. \end{lem} \begin{proof} We only prove the case of twisted strongly quasi-local algebras, and the Roe case is similar. 
The proof follows the outline of \cite[Theorem 6.4.20]{willett2020higher}. Consider the following quotient algebras: \[ A_{sq,(F_n),Q}(X;E) := \frac{A_{sq,(F_n)}(X;E)}{A_{sq,(F_n),0}(X;E)} \quad \mbox{and} \quad \prod_{j}^{res,Q}A_{sq,(F_j^{(n)})}( X;(G_j^{(n)}) ):= \frac{\prod_{j}^{res}A_{sq,(F_j^{(n)})}( X;(G_j^{(n)}) )}{\prod_{j}^{res}A_{sq,(F_j^{(n)}),0 }( X;(G_j^{(n)}) )}, \] where $A_{sq,(F_n),0}(X;E)$ consists of elements $(T_{n,s})\in A_{sq,(F_n)}(X;E)$ such that $\lim\limits_{s\rightarrow \infty}T_{n,s}=0$ for each $n$, and $A_{sq,(F_j^{(n)}),0 }( X;(G_j^{(n)}) )$ is defined in a similar way. From a standard Eilenberg swindle argument (see for example \cite[Lemma 6.4.11]{willett2020higher}), $A_{sq,(F_n),0}(X;E)$ and $\prod_{j}^{res}A_{sq,(F_j^{(n)}),0 }( X;(G_j^{(n)}) )$ both have trivial $K$-theory. Hence the quotient maps $$A_{sq,(F_n)}(X;E) \rightarrow A_{sq,(F_n),Q}(X;E) \quad \mbox{and} \quad \prod_{j}^{res}A_{sq,(F_j^{(n)}) }( X;(G_j^{(n)}) ) \rightarrow \prod_{j}^{res,Q}A_{sq,(F_j^{(n)})}( X;(G_j^{(n)}) )$$ induce isomorphisms in $K$-theory. It is clear that the inclusion $i$ induces a $*$-homomorphism: \[ i_Q:\prod_{j}^{res,Q}A_{sq,(F_j^{(n)})}( X;(G_j^{(n)}) ) \rightarrow A_{sq,(F_n),Q}(X;E). \] We also define a map \[ \gamma :A_{sq,(F_n)}(X;E) \rightarrow \prod_{j}^{res}A_{sq,(F_j^{(n)}) }( X;(G_j^{(n)}) ) \quad \mbox{by} \quad (T_{n,s})\mapsto \prod_{j}(\chi_{G_j^{(n)}}T_{n,s}\chi_{G_j^{(n)}}), \] which induces a $*$-homomorphism \[ \gamma_Q: A_{sq,(F_n),Q}(X;E) \rightarrow \prod_{j}^{res,Q}A_{sq,(F_j^{(n)})}( X;(G_j^{(n)}) ) . \] We can check that the compositions $i_Q\circ \gamma_Q$ and $\gamma_Q\circ i_Q$ are both identity maps, so $i_Q$ is a $*$-isomorphism. Together with the isomorphisms induced by the quotient maps, this implies that the inclusion $i$ induces an isomorphism in $K$-theory. \end{proof} \begin{proof}[Proof of Proposition \ref{prop:local isom.
of inclusion in K-theory}] We use the same notation as above and define $G_j^{(n)}=\mathcal{N}_{1}(F_j^{(n)})$ for each $n\in \mathbb{N}$ and $j$. Then there is a commutative diagram \[ \begin{CD} A_{(F_n)}(X;E) @>>> A_{sq,(F_n)}(X;E) \\ @AAA @AAA \\ \prod_{j}^{res}A_{ (F_j^{(n)}) }( X;(G_j^{(n)}) ) @>>> \prod_{j}^{res}A_{sq,(F_j^{(n)}) }( X;(G_j^{(n)}) ) \end{CD} \] where all maps involved are inclusion maps. It follows from Lemma \ref{lem:cluster axiom} that vertical maps induce isomorphisms in $K$-theory. Hence it suffices to show that the bottom horizontal map induces an isomorphism in $K$-theory. Note that conditions (3) in Definition~\ref{defn:twisted Roe} and \ref{defn:twisted quasi-local} imply that \[ \prod_{j}^{res}A_{ (F_j^{(n)}) }( X;(G_j^{(n)}) )=\lim_{m\rightarrow \infty}\prod_{j}^{res}A_{ (F_j^{(n)}) } \big( (B(x^{(n)}_j;m));(G_j^{(n)}) \big) \] and \[ \prod_{j}^{res}A_{ sq,(F_j^{(n)}) }( X;(G_j^{(n)}) )=\lim_{m\rightarrow \infty}\prod_{j}^{res}A_{ sq,(F_j^{(n)}) } \big( (B(x^{(n)}_j;m));(G_j^{(n)}) \big). \] Hence it suffices to show that for each fixed $m$, the inclusion \[ \prod_{j}^{res}A_{ (F_j^{(n)}) } \big( (B(x^{(n)}_j;m));(G_j^{(n)}) \big) \hookrightarrow \prod_{j}^{res}A_{ sq,(F_j^{(n)}) } \big( (B(x^{(n)}_j;m));(G_j^{(n)}) \big) \] induces an isomorphism in $K$-theory. Note that the inclusion $\{x^{(n)}_j\} \hookrightarrow B(x^{(n)}_j;m)$ induces a commutative diagram \[ \begin{CD} \prod_{j}^{res}A_{ (F_j^{(n)}) } \big( (B(x^{(n)}_j;m));(G_j^{(n)}) \big) @>>> \prod_{j}^{res}A_{ sq,(F_j^{(n)}) } \big( (B(x^{(n)}_j;m));(G_j^{(n)}) \big) \\ @AAA @AAA \\ \prod_{j}^{res}A_{ (F_j^{(n)}) } \big( (\{x^{(n)}_j\});(G_j^{(n)}) \big) @>>> \prod_{j}^{res}A_{ sq,(F_j^{(n)}) } \big( (\{x^{(n)}_j\});(G_j^{(n)}) \big) \end{CD} \] where the vertical maps are $*$-isomorphisms by standard arguments (see for example Proposition \ref{prop:coarse invariance}). 
Also note that the bottom horizontal inclusion map $\prod_{j}^{res}A_{ (F_j^{(n)}) }( (\{x^{(n)}_j\});(G_j^{(n)}) ) \hookrightarrow \prod_{j}^{res}A_{ sq,(F_j^{(n)}) }( (\{x^{(n)}_j\});(G_j^{(n)}) )$ is a $*$-isomorphism as well, since conditions $(1)$ and $(3)$ in Definitions~\ref{defn:twisted Roe} and~\ref{defn:twisted quasi-local} are equivalent in this case. Hence we conclude the proof. \end{proof} \section{Proof of Theorem~\ref{thm:main result}}\label{sec:proof of main thm} In this final section, we finish the proof of the main result. \begin{proof}[Proof of Theorem~\ref{thm:main result}] Consider the following commutative diagram \[ \begin{CD} K_*(C^*(\mathcal{H}_X) \cap \prod_n \mathfrak{B}(\mathcal{H}_n)) @>>> K_*(A(X;E)) @>>> K_*(C^*(\mathcal{H}_{X,E}) \cap \prod_n \mathfrak{B}(\mathcal{H}_{n,E_n})) \\ @VVV @VVV @VVV \\ K_*(C^*_{sq}(\mathcal{H}_X) \cap \prod_n \mathfrak{B}(\mathcal{H}_n)) @>>> K_*(A_{sq}(X;E)) @>>> K_*(C^*_{sq}(\mathcal{H}_{X,E}) \cap \prod_n \mathfrak{B}(\mathcal{H}_{n,E_n})), \end{CD} \] where the horizontal maps come from Proposition~\ref{prop:index map isom. for Roe} and Proposition~\ref{prop:strong quasi-local index map isom.} and all vertical maps are induced by inclusions. From Proposition~\ref{prop:index map isom. for Roe} and Proposition~\ref{prop:strong quasi-local index map isom.} again, we know that the compositions of the horizontal maps are isomorphisms. The middle vertical map is an isomorphism by Proposition~\ref{prop:iso. of twisted algebras in $K$-theory}, and the left vertical map identifies with the right one due to Proposition~\ref{prop:independent of modules}. Hence the inclusion map $$C^*(\mathcal{H}_X) \cap \prod_n \mathfrak{B}(\mathcal{H}_n) \hookrightarrow C^*_{sq}(\mathcal{H}_X) \cap \prod_n \mathfrak{B}(\mathcal{H}_n)$$ induces an isomorphism in $K$-theory by diagram chasing. Finally, combining with Lemma~\ref{lem:reduce to block-diagonal}, we finish the proof.
\end{proof} \appendix \section{Proof of Proposition \ref{prop:Psi function}}\label{app:proof2} This appendix is devoted to the proof of Proposition \ref{prop:Psi function}. We follow the outline of that for \cite[Proposition 12.1.10]{willett2020higher} and use the same notation as in Section \ref{ssec:The Bott-Dirac operators on Euclidean spaces}. Define a function $f: \mathbb{R} \to [-1,1]$ by $f(x)=\frac{x}{\sqrt{1+x^2}}$, $x\in \mathbb{R}$. Also fix a smooth even function $g:\mathbb{R}\rightarrow [0,\infty)$ of integral one and having compactly supported Fourier transform. It follows from the proof of \cite[Proposition 12.1.10]{willett2020higher} that given $\varepsilon>0$ there exists $\delta>0$ such that the convolution $\Psi:= f\ast g_\delta$ satisfies conditions (1)--(8) in Proposition \ref{prop:Psi function}, where $g_\delta(x):=\frac{1}{\delta}g(\frac{x}{\delta})$ for $x\in \mathbb{R}$. In the following, we will prove conditions (9) and (10) therein. Let us recall the following two lemmas, which follow from \cite[Lemmas 12.1.6 and 12.1.8]{willett2020higher}. \begin{lem}\label{lem:appB1} For all $s\in[1,\infty)$, $x\in E$ and $t\in \mathbb{R}$, we have that $$f(B_{s,x}-t)=\frac{2}{\pi} \int_{0}^{\infty}(B_{s,x}-t)(1+\lambda^2+(B_{s,x}-t)^2)^{-1} \mathrm{d}\lambda,$$ where the integral on the right converges in the strong-$\ast$ operator topology. Moreover for any $s\in[1,\infty)$, $x,y\in E$ and $t\in \mathbb{R}$, we have that \begin{align*} f(B_{s,x}-t) - f(B_{s,y}-t)= &c_{x-y}(1+(B_{s,x}-t)^2)^{-\frac{1}{2}} +\frac{2}{\pi} \int_{0}^{\infty} (B_{s,y}-t)(1+\lambda^2+(B_{s,y}-t)^2)^{-1} \\ &\cdot \big( (B_{s,y}-t) c_{y-x} + c_{y-x} (B_{s,x}-t) \big) (1+\lambda^2+(B_{s,x}-t)^2)^{-1} \mathrm{d}\lambda, \end{align*} where the integral on the right again converges in the strong-$\ast$ topology.
\end{lem} \begin{proof} The first formula follows from functional calculus applied to the scalar identity $$\frac{x-t}{\sqrt{1+(x-t)^2}}= \frac{2}{\pi} \int_0^\infty \frac{x-t}{1+\lambda^2 +(x-t)^2} \mathrm{d}\lambda, $$ which holds for any $t\in\mathbb{R}$. The second formula follows by a direct computation as in the proof of \cite[Lemma 12.1.6]{willett2020higher}. \end{proof} \begin{lem}\label{lem:appB2} For any $R\ge 0$, $\lambda\in [0,\infty)$, $x\in E$ and $s\ge 2d$, we have that $$\|(1+\lambda^2+B_{s,x}^2)^{-\frac{1}{2}}(1-\chi_{x,R})\| \le \big(\frac{1}{2}+\lambda^2+R^2 \big)^{-\frac{1}{4}}.$$ \end{lem} \begin{proof}[Proof of Proposition \ref{prop:Psi function}(9).] Given $\varepsilon_1>0$, there exists a compact subset $K\subseteq \mathbb{R}$ and a function $h: \mathbb{R} \to [0,\infty)$ of integral one and support in $K$ such that $\|g_\delta - h\|_1<\frac{\varepsilon_1}{4}$. Setting $\Phi:= f \ast h$, we have: $$\|\Psi - \Phi\|_\infty = \|f\ast g_\delta - f\ast h\|_\infty = \|f\ast (g_\delta - h)\|_\infty \le \|f\|_\infty \cdot \|g_\delta - h\|_1 <\frac{\varepsilon_1}{4},$$ which implies $\|\Phi(B_{s,x}) - \Psi(B_{s,x})\|<\frac{\varepsilon_1}{4}$. Hence it suffices to show that there exists $R_1>0$ such that for all $s\ge 2d$ and $x\in E$, we have $$\|(\Phi(B_{s,x})^2-1)(1-\chi_{x,R_1}) \| < \frac{\varepsilon_1}{4}.$$ Now we define $\omega: \mathbb{R} \to \mathbb{R}$ by $\omega(x):=\frac{1}{1+x^2}$.
For any $R\ge 0$, we have: \begin{align*} \| (\Phi(B_{s,x})^2-1)(1-\chi_{x,R}) \| &= \big \| \big( (f\ast h)^2 -1 \big) (B_{s,x}) \cdot (1-\chi_{x,R}) \big\| \\ &= \big\| \big( \frac{(f\ast h)^2 -1}{\omega} \big)(B_{s,x}) \cdot \omega(B_{s,x})(1-\chi_{x,R}) \big\| \\ &\le \big\| \big( \frac{(f\ast h)^2 -1}{\omega} \big)(B_{s,x}) \big\| \cdot \big\| (1+B_{s,x}^2)^{-\frac{1}{2}} \big\| \cdot \big\| (1+B_{s,x}^2)^{-\frac{1}{2}}(1-\chi_{x,R})\big\| \\ &\le \big\| \big( \frac{(f\ast h)^2 -1}{\omega} \big)(B_{s,x}) \big\| \cdot \big\| (1+B_{s,x}^2)^{-\frac{1}{2}} \big\| \cdot \big(\frac{1}{2}+R^2 \big)^{-\frac{1}{4}}, \end{align*} where the last inequality comes from Lemma~\ref{lem:appB2} for $\lambda=0$. We claim that the function $\frac{(f\ast h)^2 -1}{\omega}$ is bounded on $\mathbb{R}$. Indeed, since $h$ has support on $K$ and integral one we have: \begin{align*} \big( \frac{(f\ast h)^2 -1}{\omega} \big)(x) &= \big(\int_{\mathbb{R}} f(x-t)h(t) \mathrm{d} t +1 \big) \big(\int_{\mathbb{R}} f(x-t)h(t) \mathrm{d} t -1 \big) (1+x^2) \\ &= \big(\int_{K} \big(f(x-t)+1 \big) h(t)\mathrm{d} t \big) \big(\int_{K} \big(f(x-t)-1 \big)h(t)\mathrm{d} t \big) (1+x^2). \end{align*} Direct calculation shows that \begin{align*} \big(f(x-t)-1\big)(1+x^2) &= -\frac{x}{\sqrt{1+(x-t)^2}} \cdot \frac{x}{(x-t)+\sqrt{1+(x-t)^2}} \cdot \frac{1+x^2}{x^2}, \end{align*} which is uniformly bounded on $[0,+\infty)$ for $t\in K$. Similarly, $\big(f(x-t)+1\big)(1+x^2)$ is uniformly bounded on $(-\infty,0]$ for $t\in K$. Hence $\frac{(f\ast h)^2 -1}{\omega}$ is bounded on $\mathbb{R}$. On the other hand, note that $\| (1+B_{s,x}^2)^{-\frac{1}{2}}\| \le 1$ from functional calculus. Hence we obtain that $\| (\Phi(B_{s,x})^2-1)(1-\chi_{x,R}) \|$ tends to zero as $R$ tends to infinity, which concludes the proof. \end{proof} \begin{proof}[Proof of Proposition \ref{prop:Psi function}(10).]
Given $\varepsilon_2>0$, there exists a compact subset $K\subseteq \mathbb{R}$ and a function $h: \mathbb{R} \to [0,\infty)$ of integral one and support in $K$ such that $\|g_\delta - h\|_1< \frac{\varepsilon_2}{3}$. Setting $\Phi:= f \ast h$, we have $\|\Phi(B_{s,x}) - \Psi(B_{s,x})\|<\frac{\varepsilon_2}{3}$. Hence it suffices to show that for any $r>0$ there exists $R_2>0$ such that for any $s\ge 2d$ and $x,y\in E$ with $d_E(x,y)\le r$, we have $$\|(\Phi(B_{s,x})-\Phi(B_{s,y}))(1-\chi_{x,R_2})\|<\frac{\varepsilon_2}{3}.$$ For any $R>0$, we have: \begin{align*} (\Phi&(B_{s,x})-\Phi(B_{s,y}))(1-\chi_{x,R}) = \big((f\ast h)(B_{s,x})-(f\ast h)(B_{s,y}) \big) (1-\chi_{x,R}) \\ &= \int_{\mathbb{R}}\big(f(B_{s,x}-t)-f(B_{s,y}-t)\big)h(t)\mathrm{d} t \cdot (1-\chi_{x,R}). \end{align*} Combining with Lemma~\ref{lem:appB1}, we have \begin{align*} &(\Phi(B_{s,x})-\Phi(B_{s,y}))(1-\chi_{x,R}) \\ &=\int_{\mathbb{R}} \Bigg( c_{x-y}(1+(B_{s,x}-t)^2)^{-\frac{1}{2}} + \frac{2}{\pi} \int_{0}^{\infty} (B_{s,y}-t)(1+\lambda^2+(B_{s,y}-t)^2)^{-1} \\ & \quad \cdot \big( (B_{s,y}-t) c_{y-x} + c_{y-x} (B_{s,x}-t) \big) (1+\lambda^2+(B_{s,x}-t)^2)^{-1} \mathrm{d}\lambda \Bigg) h(t)\mathrm{d} t \cdot (1-\chi_{x,R}) \\ &=\int_{K}c_{x-y}(1+(B_{s,x}-t)^2)^{-\frac{1}{2}}(1-\chi_{x,R})h(t)\mathrm{d} t \\ &\quad + \frac{2}{\pi} \int_{K}\int_{0}^{\infty} (B_{s,y}-t)(1+\lambda^2+(B_{s,y}-t)^2)^{-1} (B_{s,y}-t) c_{y-x} (1+\lambda^2+(B_{s,x}-t)^2)^{-1}(1-\chi_{x,R}) \mathrm{d}\lambda h(t)\mathrm{d} t \\ &\quad + \frac{2}{\pi} \int_{K}\int_{0}^{\infty} (B_{s,y}-t)(1+\lambda^2+(B_{s,y}-t)^2)^{-1} c_{y-x}(B_{s,x}-t) (1+\lambda^2+(B_{s,x}-t)^2)^{-1}(1-\chi_{x,R}) \mathrm{d}\lambda h(t)\mathrm{d} t . \end{align*} Then it suffices to show that each of the three terms on the right-hand side tends to zero as $R$ tends to infinity.
For the first term, note that the following constant \[ N_1:= \sup_{t\in K, x\in \mathbb{R}} \frac{\sqrt{1+x^2}}{\sqrt{1+(x-t)^2}} \] is finite since $K$ is compact. Hence using Lemma~\ref{lem:appB2} for $\lambda=0$, we obtain \begin{align*} \Big\| \int_{K}& c_{x-y}(1+(B_{s,x}-t)^2)^{-\frac{1}{2}}(1-\chi_{x,R})h(t)\mathrm{d} t \Big\| \\ &\le \int_{K} \|c_{x-y}\| \cdot \|(1+(B_{s,x}-t)^2)^{-\frac{1}{2}}(1+B_{s,x}^2)^{\frac{1}{2}}\| \cdot \|(1+B_{s,x}^2)^{-\frac{1}{2}}(1-\chi_{x,R})\|h(t)\mathrm{d} t \\ &\le r\cdot N_1 \cdot \big( \frac{1}{2}+R^2 \big)^{-\frac{1}{4}}, \end{align*} which tends to zero as $R$ tends to infinity. For the second term, note that \begin{align*} &\frac{2}{\pi} \int_{K}\int_{0}^{\infty} (B_{s,y}-t)(1+\lambda^2+(B_{s,y}-t)^2)^{-1} (B_{s,y}-t) c_{y-x} (1+\lambda^2+(B_{s,x}-t)^2)^{-1}(1-\chi_{x,R}) \mathrm{d}\lambda h(t)\mathrm{d} t \\ &= \frac{2}{\pi} \int_{K}\int_{0}^{\infty} (B_{s,y}-t)(1+\lambda^2+(B_{s,y}-t)^2)^{-1}(B_{s,y}-t) \cdot c_{y-x} \cdot (1+\lambda^2+(B_{s,x}-t)^2)^{-\frac{1}{2}} \\ &\quad \cdot (1+\lambda^2+(B_{s,x}-t)^2)^{-\frac{1}{2}} (1+\lambda^2+B_{s,x}^2)^{\frac{1}{2}} \cdot (1+\lambda^2+B_{s,x}^2)^{-\frac{1}{2}} (1-\chi_{x,R}) \mathrm{d}\lambda h(t)\mathrm{d} t. \end{align*} From functional calculus, for any $t\in K$ and $\lambda\in[0,\infty)$ we have $$\|(B_{s,y}-t)(1+\lambda^2+(B_{s,y}-t)^2)^{-1}(B_{s,y}-t)\|\le 1$$ and $$\|(1+\lambda^2+(B_{s,x}-t)^2)^{-\frac{1}{2}}\| \le (1+\lambda^2)^{-\frac{1}{2}}.$$ Also note that the constant \[ N_2:= \sup_{t\in K, x\in \mathbb{R}, \lambda \in [0,\infty)} \frac{\sqrt{1+\lambda^2+x^2}}{\sqrt{1+\lambda^2 + (x-t)^2}} \] is finite since $K$ is compact.
Hence using Lemma~\ref{lem:appB2}, we obtain \begin{align*} &\Big\| \frac{2}{\pi} \int_{K}\int_{0}^{\infty} (B_{s,y}-t)(1+\lambda^2+(B_{s,y}-t)^2)^{-1} (B_{s,y}-t) c_{y-x} (1+\lambda^2+(B_{s,x}-t)^2)^{-1}(1-\chi_{x,R}) \mathrm{d}\lambda h(t)\mathrm{d} t \Big\| \\ &\le \frac{2}{\pi} \cdot 1 \cdot r \cdot N_2 \cdot \int_{0}^{\infty}(1+\lambda^2)^{-\frac{1}{2}}\big( \frac{1}{2}+\lambda^2 + R^2 \big)^{-\frac{1}{4}}\mathrm{d}\lambda, \end{align*} which tends to zero as $R$ tends to infinity. Finally, let us look at the last term. Note that \begin{align*} &\frac{2}{\pi} \int_{K}\int_{0}^{\infty} (B_{s,y}-t)(1+\lambda^2+(B_{s,y}-t)^2)^{-1} c_{y-x}(B_{s,x}-t) (1+\lambda^2+(B_{s,x}-t)^2)^{-1}(1-\chi_{x,R}) \mathrm{d}\lambda h(t)\mathrm{d} t \\ &=\frac{2}{\pi} \int_{K}\int_{0}^{\infty} (B_{s,y}-t)(1+\lambda^2+(B_{s,y}-t)^2)^{-1} \cdot c_{y-x} \cdot (B_{s,x}-t) (1+\lambda^2+(B_{s,x}-t)^2)^{-\frac{1}{2}} \\ &\quad \cdot (1+\lambda^2+(B_{s,x}-t)^2)^{-\frac{1}{2}} (1+\lambda^2+B_{s,x}^2)^{\frac{1}{2}} \cdot (1+\lambda^2+B_{s,x}^2)^{-\frac{1}{2}}(1-\chi_{x,R}) \mathrm{d}\lambda h(t)\mathrm{d} t. \end{align*} It is easy to see that \[ \sup_{x\in \mathbb{R}} \big| \frac{x}{1+\lambda^2 + x^2}\big| \leq \frac{1}{2}(1+\lambda^2)^{-\frac{1}{2}}.
\] Hence functional calculus gives that for any $t\in K$, $$\|(B_{s,y}-t)(1+\lambda^2+(B_{s,y}-t)^2)^{-1}\| \le \frac{1}{2}(1+\lambda^2)^{-\frac{1}{2}}.$$ Note also that functional calculus gives that for any $t\in K$ and $\lambda\in[0,\infty)$, $$\| (B_{s,x}-t) (1+\lambda^2+(B_{s,x}-t)^2)^{-\frac{1}{2}} \| \le 1.$$ Then using Lemma~\ref{lem:appB2}, we have \begin{align*} &\Big\| \frac{2}{\pi} \int_{K}\int_{0}^{\infty} (B_{s,y}-t)(1+\lambda^2+(B_{s,y}-t)^2)^{-1} c_{y-x}(B_{s,x}-t) (1+\lambda^2+(B_{s,x}-t)^2)^{-1}(1-\chi_{x,R}) \mathrm{d}\lambda h(t)\mathrm{d} t \Big\| \\ &\le \frac{2}{\pi} \cdot r \cdot 1 \cdot N_2 \cdot \int_{0}^{\infty}\frac{1}{2}(1+\lambda^2)^{-\frac{1}{2}}\big( \frac{1}{2}+\lambda^2 + R^2 \big)^{-\frac{1}{4}}\mathrm{d}\lambda, \end{align*} which tends to zero as $R$ tends to infinity. Hence we conclude the proof. \end{proof} \end{document}
\begin{document} \title{Lindel\" of hypothesis and the order of the mean-value of $|\zeta(s)|^{2k-1}$ in the critical strip} \author{Jan Moser} \address{Department of Mathematical Analysis and Numerical Mathematics, Comenius University, Mlynska Dolina M105, 842 48 Bratislava, SLOVAKIA} \email{[email protected]} \keywords{Riemann zeta-function} \begin{abstract} The main subject of this paper is the mean-value of the function $|\zeta(s)|^{2k-1}$ in the critical strip. On the Lindel\" of hypothesis we give a solution to this question for some class of disconnected sets. This paper is an English version of our paper \cite{5}. \end{abstract} \maketitle \section{Introduction} \subsection{} E.~C. Titchmarsh began the study of the mean-value of the function \begin{displaymath} \left|\zeta\left(\sigma+it\right)\right|^\omega,\ \frac 12<\sigma\leq 1,\ 0<\omega , \end{displaymath} where $\omega$ is a non-integer number, \cite{6} (comp. \cite{2}, p. 278). Next, Ingham and Davenport obtained the following result (see \cite{1}, \cite{2}, comp. \cite{7}, pp. 132, 133) \begin{equation} \label{1.1} \frac 1T \int_1^T \left|\zeta\left(\sigma+it\right)\right|^{2\omega}{\rm d}t= \sum_{n=1}^\infty \frac{d^2_{\omega}(n)}{n^{2\sigma}}+\mathcal{O}(1),\ \omega\in (0,2],\ T\to\infty. \end{equation} Let us remind that: \begin{itemize} \item[(a)] for $\omega\in\mathbb{N}$ the symbol $d_\omega(n)$ denotes the number of decompositions of $n$ into $\omega$ factors, \item[(b)] in the case $\omega$ is not an integer, we define $d_\omega(n)$ as the coefficient of $n^{-s}$ in the Dirichlet series for the function $\zeta^\omega(s)$ converging for all $\sigma>1$.
\end{itemize} \subsection{} Next, for \begin{displaymath} \omega=\frac 12,\frac 32 \end{displaymath} it follows from (\ref{1.1}) that the orders of the mean-values \begin{displaymath} \frac 1T \int_1^T \left|\zeta\left(\sigma+it\right)\right|{\rm d}t,\ \frac 1T \int_1^T \left|\zeta\left(\sigma+it\right)\right|^3{\rm d}t \end{displaymath} are determined. But a question about the order of the mean-value of \begin{displaymath} |\zeta(\sigma+it)|^{2l+1},\ l=2,3,\dots,\ \frac 12<\sigma<1 \end{displaymath} remains open. In this paper we give a solution to this open question, assuming the truth of the Lindel\" of hypothesis, for some infinite class of disconnected sets. In a particular case we obtain the following result: on the Lindel\" of hypothesis we have \begin{equation} \label{1.2} \begin{split} & 1-|o(1)|<\frac 1H\int_T^{T+H}|\zeta(\sigma+it)|^{2k-1}{\rm d}t < \\ & < \sqrt{F(\sigma,2k-1)}+|o(1)|,\quad H=T^\epsilon,\ k=1,2,\dots,\ 0<\epsilon , \end{split} \end{equation} where \begin{equation} \label{1.3} F(\sigma,\omega)=\sum_{n=1}^\infty \frac{d^2_{\omega}(n)}{n^{2\sigma}}, \end{equation} and $\epsilon$ is an arbitrarily small number. The proof of our main result is based on our method (see \cite{4}) for the proof of a new mean-value theorem for the Riemann zeta-function \begin{displaymath} Z(t)=e^{i\vartheta(t)}\zf \end{displaymath} with respect to two infinite classes of disconnected sets. \section{Main formulas} We use the following formula: on the Lindel\" of hypothesis \begin{equation} \label{2.1} \begin{split} & \zeta^k(s)=\sum_{n\leq t^\delta}\frac{d_k(n)}{n^s}+\mathcal{O}(t^{-\lambda}),\ \lambda=\lambda(k,\delta,\sigma)>0, \\ & s=\sigma+it,\ \frac 12<\sigma<1,\ t>0 \end{split} \end{equation} (see \cite{7}, p. 277) for every natural number $k$, where $\delta$ is any given positive number less than $1$.
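The case $\omega=\frac 12$ above is governed by the coefficients $d_{1/2}(n)$ of item (b): the Dirichlet series of $\zeta^{1/2}(s)$ is the formal square root of that of $\zeta(s)$, so its coefficients are determined recursively by $(d_{1/2}\ast d_{1/2})(n)=1$, where $\ast$ denotes Dirichlet convolution. A short computational sketch, assuming only the definition in item (b) (the array bound $N$ is an arbitrary choice):

```python
from math import isclose

N = 50

def dirichlet_conv(a, b):
    # (a * b)(m) = sum over divisors d of m of a(d) b(m/d); index 0 unused
    c = [0.0] * (N + 1)
    for d in range(1, N + 1):
        for m in range(d, N + 1, d):
            c[m] += a[d] * b[m // d]
    return c

# solve (a * a)(n) = 1 for the coefficients a(n) = d_{1/2}(n)
a = [0.0] * (N + 1)
a[1] = 1.0
for n in range(2, N + 1):
    inner = sum(a[d] * a[n // d] for d in range(2, n) if n % d == 0)
    a[n] = (1.0 - inner) / 2.0

sq = dirichlet_conv(a, a)
assert all(isclose(sq[n], 1.0) for n in range(1, N + 1))
# at prime powers these match the generalized binomial coefficients:
# d_{1/2}(p) = 1/2, d_{1/2}(p^2) = 3/8
assert isclose(a[2], 0.5) and isclose(a[3], 0.5) and isclose(a[4], 0.375)
```

The same recursion with a different right-hand side produces $d_\omega(n)$ for any $\omega$, and these are the values entering the series $F(\sigma,\omega)$ in (\ref{1.3}).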
Let us remind that \begin{equation} \label{2.2} d_k(n)=\mathcal{O}(n^\eta), \end{equation} where $0<\eta$ is an arbitrarily small number. Of course, (see (\ref{2.1}), (\ref{2.2})) \begin{equation} \label{2.3} \begin{split} & \zeta^k(s)=\mathcal{O}\left(\sum_{n\leq t^\delta}d_k(n)n^{-\sigma}\right)= \mathcal{O}\left( t^{\delta\eta+\delta(1-\sigma)}\right)= \\ & = \mathcal{O}\left( t^{(\eta+1/2)\delta}\right). \end{split} \end{equation} Let \begin{equation} \label{2.4} t\in [T,T+H],\ H=T^{\epsilon},\ 2\delta\eta+2\delta<\epsilon. \end{equation} Since \begin{displaymath} \sum_{T^\delta\leq n\leq (T+H)^\delta}1=\mathcal{O}(T^{\delta+\epsilon-1}) \end{displaymath} then \begin{equation} \label{2.5} \begin{split} & \sum_{T^\delta\leq n\leq t^{\delta}}\frac{d_k(n)}{n^s}= \mathcal{O}\left( T^{\delta\eta-\delta\sigma}\cdot \sum_{T^\delta\leq n\leq (T+H)^\delta} 1 \right)= \\ & = \mathcal{O}(T^{\delta\eta-\delta\sigma+\delta+\epsilon-1})=\mathcal{O}(T^{-\lambda_1}), \end{split} \end{equation} where \begin{equation} \label{2.6} \lambda_1=1-\delta-\epsilon+\delta\sigma-\delta\eta>0 , \end{equation} (of course, for sufficiently small $\epsilon$ the inequality (\ref{2.6}) holds true). Next, for \begin{equation} \label{2.7} \lambda_2=\lambda_2(k,\delta,\sigma,\epsilon,\eta)=\min \{\lambda,\lambda_1\}>0 \end{equation} the following formula (see (\ref{2.1}), (\ref{2.5}) -- (\ref{2.7})) \begin{equation} \label{2.8} \zeta^k(s)=\sum_{n<T^\delta}\frac{d_k(n)}{n^s}+\mathcal{O}(T^{-\lambda_2}),\ t\in [T,T+H] \end{equation} holds true.
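For integer $k$, the bound (\ref{2.2}) can be explored concretely: $d_k(n)$, the number of ordered factorizations of $n$ into $k$ factors, is the $k$-fold Dirichlet convolution of the constant sequence $1$ (the coefficients of $\zeta(s)$). A sketch, with the array bound and sample values chosen only for illustration:

```python
def dirichlet_conv(a, b, N):
    # (a * b)(m) = sum over divisors d of m of a(d) b(m/d); index 0 unused
    c = [0] * (N + 1)
    for d in range(1, N + 1):
        for m in range(d, N + 1, d):
            c[m] += a[d] * b[m // d]
    return c

def d_coeffs(k, N):
    one = [0] + [1] * N                  # coefficients of zeta(s)
    out = one[:]
    for _ in range(k - 1):
        out = dirichlet_conv(out, one, N)
    return out

d3 = d_coeffs(3, 30)
# sanity checks: d_3(p) = 3, d_3(p^2) = 6, d_3(pq) = 9, d_3(p^2 q) = 18
assert (d3[1], d3[2], d3[4], d3[6], d3[12]) == (1, 3, 6, 9, 18)
```

Since $d_k$ is multiplicative with $d_k(p^a)=\binom{k+a-1}{a}$, it indeed grows slower than any fixed power of $n$, which is the content of (\ref{2.2}).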
Since \begin{equation} \label{2.9} \zeta^k(s)=U_k(\sigma,t)+iV_k(\sigma,t) \end{equation} then, on the Lindel\" of hypothesis, we obtain from (\ref{2.8}) the following main formula \begin{equation} \label{2.10} \begin{split} & U_k(\sigma,t)=1+\sum_{2\leq n < T^\delta}\frac{d_k(n)}{n^\sigma}\cos(t\ln n)+\mathcal{O}(T^{-\lambda_2}), \\ & V_k(\sigma,t)=-\sum_{2\leq n < T^\delta}\frac{d_k(n)}{n^\sigma}\sin(t\ln n)+\mathcal{O}(T^{-\lambda_2}), \\ & t\in [T,T+H]. \end{split} \end{equation} \section{The first class of lemmas} Let us denote by \begin{displaymath} \{ t_\nu(\tau)\} \end{displaymath} an infinite set of sequences that we defined (see \cite{4}, (1)) by the condition \begin{equation} \label{3.1} \vartheta[t_\nu(\tau)]=\pi\nu+\tau,\ \nu=1,\dots,\ \tau\in [-\pi,\pi], \end{equation} of course, \begin{displaymath} t_\nu(0)=t_\nu, \end{displaymath} where (see \cite{7}, pp. 220, 329) \begin{equation} \label{3.2} \begin{split} & \vartheta(t)=-\frac t2\ln\pi+\mbox{Im}\ln\Gamma\left(\frac 12+i\frac t2\right), \\ & \vartheta'(t)=\frac 12\ln\frac{t}{2\pi}+\mathcal{O}\left(\frac 1t\right), \\ & \vartheta''(t)\sim \frac{1}{2t}. \end{split} \end{equation} \subsection{} The following lemma holds true. \begin{mydef51} If \begin{displaymath} 2\leq m,n < T^{\delta} \end{displaymath} then \begin{equation} \label{3.3} \sum_{T\leq t_\nu\leq T+H}\cos\{ t_\nu(\tau)\ln n\}=\mathcal{O}\left(\frac{\ln T}{\ln n}\right), \end{equation} \begin{equation} \label{3.4} \sum_{T\leq t_\nu\leq T+H}\cos\{ t_\nu(\tau)\ln (mn)\}=\mathcal{O}\left(\frac{\ln T}{\ln (mn)}\right), \end{equation} \begin{equation} \label{3.5} \sum_{T\leq t_\nu\leq T+H}\cos\left\{ t_\nu(\tau)\ln \frac mn\right\}= \mathcal{O}\left(\frac{\ln T}{\ln \frac mn}\right), \ m>n, \end{equation} where the $\mathcal{O}$-estimates are valid uniformly for $\tau\in [-\pi,\pi]$. \end{mydef51} \begin{proof} We use van der Corput's method.
Let (see (\ref{3.3}))
\begin{displaymath}
\varphi_1(\nu)=\frac{1}{2\pi}t_\nu(\tau)\ln n.
\end{displaymath}
Next (see (\ref{2.4}), (\ref{3.1}), (\ref{3.2})),
\begin{displaymath}
\begin{split}
& \varphi_1'(\nu)=\frac{\ln n}{2\vartheta'[t_\nu(\tau)]}, \\
& \varphi_1''(\nu)=-\frac{\pi\ln n}{2\{ \vartheta'[t_\nu(\tau)]\}^3}\vartheta''[t_\nu(\tau)]<0, \\
& 0< A\frac{\ln n}{\ln T}\leq \varphi_1'(\nu)= \frac{\ln n}{\ln\frac{t_\nu(\tau)}{2\pi}+\mathcal{O}(\frac 1t)}=\frac{\ln n}{\ln\frac{T}{2\pi}+\mathcal{O}(\frac HT)}< \\
& < \delta\frac{\ln T}{\ln\frac{T}{2\pi}+\mathcal{O}(\frac HT)}<\frac 14
\end{split}
\end{displaymath}
($A>0$, since $\delta$ may be taken sufficiently small). Hence (see \cite{7}, p. 65 and p. 61, Lemma 4.2),
\begin{displaymath}
\begin{split}
& \sum_{T\leq t_\nu\leq T+H}\cos\{ t_\nu(\tau)\ln n\}= \\
& = \int_{T\leq t_x\leq T+H} \cos\{ 2\pi\varphi_1(x)\}{\rm d}x+\mathcal{O}(1)= \mathcal{O}\left(\frac{\ln T}{\ln n}\right),
\end{split}
\end{displaymath}
i.e. the estimate (\ref{3.3}) holds true. The estimates (\ref{3.4}) and (\ref{3.5}) follow in a similar way.
\end{proof}

\subsection{}

The following lemma holds true.

\begin{mydef52}
On the Lindel\"of hypothesis we have
\begin{equation} \label{3.6}
\sum_{T\leq t_\nu\leq T+H} U_k[\sigma,t_\nu(\tau)]=\frac{1}{2\pi}H\ln\frac{T}{2\pi}+\mathcal{O}(H).
\end{equation}
\end{mydef52}

\begin{proof}
Let us recall that
\begin{equation} \label{3.7}
\sum_{T\leq t_\nu\leq T+H} 1=\frac{1}{2\pi}H\ln\frac{T}{2\pi}+\mathcal{O}(1)
\end{equation}
(see \cite{3}, (23)).
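Both the counting formula (\ref{3.7}) and the cancellation expressed by Lemma 1 can be observed numerically by generating the points $t_\nu$ from the leading asymptotics of $\vartheta$ in (\ref{3.2}). The following sketch is ours and purely illustrative, not part of the paper:

```python
# Illustrative sketch (not from the paper): generate t_nu with
# vartheta(t_nu) = pi*nu, using the leading asymptotics of (3.2), then check
# the counting formula (3.7) and the cancellation in the sums of Lemma 1.
import math

def theta(t):
    # leading terms of the Riemann-Siegel theta function
    return 0.5 * t * math.log(t / (2 * math.pi)) - 0.5 * t - math.pi / 8

def t_nu(nu, lo=100.0, hi=1.0e8):
    # vartheta is increasing on this range, so bisection suffices
    c = math.pi * nu
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if theta(mid) < c else (lo, mid)
    return lo

T, H = 1.0e6, 1.0e3
nu_lo = math.floor(theta(T) / math.pi) + 1
nu_hi = math.floor(theta(T + H) / math.pi)
count = nu_hi - nu_lo + 1
predicted = H * math.log(T / (2 * math.pi)) / (2 * math.pi)      # (3.7)
osc = sum(math.cos(t_nu(nu) * math.log(2)) for nu in range(nu_lo, nu_hi + 1))
print(count, round(predicted))   # the two agree up to O(1)
print(abs(osc))                  # ~1900 summands, yet the sum stays bounded
```

The smallness of `osc` relative to the number of summands is exactly the phenomenon that (\ref{3.3}) captures.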
Next (see (\ref{2.10}), (\ref{3.7})),
\begin{equation} \label{3.8}
\begin{split}
& \sum_{T\leq t_\nu\leq T+H} U_k[\sigma,t_\nu(\tau)]=\frac{1}{2\pi}H\ln\frac{T}{2\pi}+\mathcal{O}(1)+\\
& + \mathcal{O}(T^{-\lambda_2}H\ln T)+\sum_{2\leq n<T^\delta}\frac{d_k(n)}{n^\sigma}\cdot \sum_{T\leq t_\nu\leq T+H}\cos\{ t_\nu(\tau)\ln n\}= \\
& = \frac{1}{2\pi}H\ln\frac{T}{2\pi}+\mathcal{O}(1)+\mathcal{O}(T^{-\lambda_2}H\ln T)+w_1.
\end{split}
\end{equation}
Since (see (\ref{2.2}), (\ref{2.4}), (\ref{3.3}))
\begin{displaymath}
\begin{split}
& w_1=\mathcal{O}\left(T^{\delta\eta}\ln T\sum_{2\leq n\leq T^\delta}\frac{1}{\sqrt{n}\ln n}\right)= \\
& = \mathcal{O}\left\{ T^{\delta\eta}\ln T\left( \sum_{2\leq n<T^{\delta/2}} \ + \ \sum_{T^{\delta/2}\leq n<T^\delta}\right)\frac{1}{\sqrt{n}\ln n}\right\}= \\
& = \mathcal{O}(T^{\delta\eta+\delta/2})=\mathcal{O}(H),
\end{split}
\end{displaymath}
the formula (\ref{3.6}) follows from (\ref{3.8}).
\end{proof}

\section{Theorem 1}

\subsection{}

Next, we define the following class of disconnected sets (comp. \cite{4}, (3)):
\begin{equation} \label{4.1}
G(x)=\bigcup_{T\leq t_\nu\leq T+H}\{ t:\ t_\nu(-x)<t<t_\nu(x)\},\ 0<x\leq \frac{\pi}{2}.
\end{equation}
Let us recall that (see \cite{4}, (7))
\begin{equation} \label{4.2}
\begin{split}
& t_\nu(x)-t_\nu(-x)=\frac{4x}{\ln\frac{T}{2\pi}}+\mathcal{O}\left(\frac{xH}{T\ln^2T}\right), \\
& t_\nu(-x),t_\nu(x)\in [T,T+H].
\end{split}
\end{equation}
Of course,
\begin{equation} \label{4.3}
m\{ G(x)\}=\frac{2x}{\pi}H+\mathcal{O}(x)
\end{equation}
(see (\ref{3.7}), (\ref{4.2})), where $m\{ G(x)\}$ stands for the measure of $G(x)$.

\subsection{}

The following theorem holds true.

\begin{mydef11}
On the Lindel\"of hypothesis
\begin{equation} \label{4.4}
\int_{G(x)} U_k(\sigma,t){\rm d}t=\frac{2x}{\pi}H+o\left(\frac{xH}{\ln T}\right).
\end{equation}
\end{mydef11}

First of all, we obtain from (\ref{4.4}), by (\ref{4.3}), the following

\begin{mydef41}
On the Lindel\"of hypothesis
\begin{equation} \label{4.5}
\frac{1}{m\{ G(x)\}}\int_{G(x)} U_k(\sigma,t){\rm d}t=1+o\left(\frac{1}{\ln T}\right).
\end{equation}
\end{mydef41}

Next, we obtain from (\ref{4.4}) the following

\begin{mydef42}
On the Lindel\"of hypothesis
\begin{equation} \label{4.6}
\int_{G(x)}|U_k(\sigma,t)|{\rm d}t\geq \frac{2xH}{\pi}\{ 1-|o(1)|\}.
\end{equation}
\end{mydef42}

Since (see (\ref{2.9}))
\begin{displaymath}
|\zeta(s)|^{2k-1}=\sqrt{U^2_{2k-1}+V^2_{2k-1}}\geq |U_{2k-1}|,
\end{displaymath}
we obtain from (\ref{4.6}), $k\longrightarrow 2k-1$, the following

\begin{mydef43}
On the Lindel\"of hypothesis
\begin{equation} \label{4.7}
\int_{G(x)}|\zeta(\sigma+it)|^{2k-1}{\rm d}t\geq \frac{2xH}{\pi}\{ 1-|o(1)|\}.
\end{equation}
\end{mydef43}

\subsection{}

In this part we give the proof of Theorem 1.

\begin{proof}
Since (see (\ref{3.1}), (\ref{3.2}))
\begin{displaymath}
\begin{split}
& \left(\frac{{\rm d}t_\nu(\tau)}{{\rm d}\tau}\right)^{-1}=\ln P_0+\mathcal{O}\left(\frac HT\right),\\
& t_\nu(\tau)\in [T,T+H],\ P_0=\sqrt{\frac{T}{2\pi}},
\end{split}
\end{displaymath}
we obtain, using the substitution
\begin{displaymath}
t=t_\nu(\tau)
\end{displaymath}
and the estimate (\ref{2.3}), that
\begin{displaymath}
\begin{split}
& \int_{-x}^{x} U_k[\sigma,t_\nu(\tau)]{\rm d}\tau=\int_{-x}^{x} U_k[\sigma,t_\nu(\tau)] \left(\frac{{\rm d}t_\nu(\tau)}{{\rm d}\tau}\right)^{-1}\cdot \frac{{\rm d}t_\nu(\tau)}{{\rm d}\tau} {\rm d}\tau= \\
& = \ln P_0\int_{-x}^{x} U_k[\sigma,t_\nu(\tau)]\frac{{\rm d}t_\nu(\tau)}{{\rm d}\tau}{\rm d}\tau+ \\
& + \mathcal{O}\left( x\max\{|\zeta^k|\}\frac HT\max\left\{\frac{{\rm d}t_\nu(\tau)}{{\rm d}\tau}\right\}\right)= \\
& = \ln P_0\int_{t_\nu(-x)}^{t_\nu(x)}U_k(\sigma,t){\rm d}t+ \mathcal{O}\left( x\frac{T^{\delta\eta+\delta/2+\epsilon-1}}{\ln T}\right),
\end{split}
\end{displaymath}
where the $\max$ is taken with respect to the segment $[T,T+H]$. Consequently (see (\ref{2.3}), (\ref{3.7}), (\ref{4.1}) and (\ref{4.2})),
\begin{displaymath}
\begin{split}
& \sum_{T\leq t_\nu\leq T+H}\int_{-x}^{x} U_k[\sigma,t_\nu(\tau)]{\rm d}\tau= \\
& = \ln P_0 \int_{G(x)}U_k(\sigma,t){\rm d}t+\mathcal{O}(xT^{\delta\eta+\delta/2+2\epsilon-1})+ \\
& + \mathcal{O}\left(\frac{xT^{(\eta+1/2)\delta}}{\ln T}\right).
\end{split}
\end{displaymath}
Now, integrating (\ref{3.6}) over $\tau\in [-x,x]$ gives the formula
\begin{displaymath}
\begin{split}
& \ln P_0\int_{G(x)} U_k(\sigma,t){\rm d}t+\mathcal{O}(xT^{\delta\eta+\delta/2+2\epsilon-1})= \\
& = \frac x\pi H\ln\frac{T}{2\pi}+\mathcal{O}(xT^{\delta\eta+\delta/2}),
\end{split}
\end{displaymath}
and from this, by (\ref{2.4}), the formula (\ref{4.4}) follows immediately (here $\epsilon$ is an arbitrarily small number).
\end{proof}

\section{The second class of lemmas}

\subsection{}

Let
\begin{equation} \label{5.1}
\begin{split}
& S_1(t)=\sum_{2\leq n< T^\delta}\frac{d_k(n)}{n^\sigma}\cos(t\ln n), \\
& w_2(t)=\{ S_1(t)\}^2.
\end{split}
\end{equation}
The following lemma holds true.

\begin{mydef53}
\begin{equation} \label{5.2}
\begin{split}
& \sum_{T\leq t_\nu\leq T+H} w_2[t_\nu(\tau)]= \\
& = \{ F(\sigma,k)-1\}\cdot \frac{1}{4\pi} H\ln\frac{T}{2\pi}+o(H)
\end{split}
\end{equation}
(on $F(\sigma,k)$ see (\ref{1.3})).
\end{mydef53}

\begin{proof}
First of all we have
\begin{equation} \label{5.3}
\begin{split}
& w_2(t)=\sum_m\sum_n \frac{d_k(m)d_k(n)}{(mn)^\sigma}\cos(t\ln m)\cos(t\ln n)= \\
& = \frac 12\sum_m\sum_n\frac{d_k(m)d_k(n)}{(mn)^\sigma}\cos\{t\ln(mn)\}+\\
& + \ssum_{n<m}\frac{d_k(m)d_k(n)}{(mn)^\sigma}\cos\left(t\ln\frac mn\right)+ \frac 12\sum_n\frac{d_k^2(n)}{n^{2\sigma}}= \\
& = w_{21}(t)+w_{22}(t)+w_{23}(t).
\end{split}
\end{equation}
Now we have: by (\ref{2.2}), (\ref{2.4}) and (\ref{3.4}),
\begin{equation} \label{5.4}
\begin{split}
& \sum_{T\leq t_\nu\leq T+H}w_{21}[t_\nu(\tau)]= \mathcal{O}\left( T^{2\delta\eta}\ln T\cdot \ssum_{2\leq m,n<T^\delta}\frac{1}{\sqrt{mn}\ln(mn)}\right)= \\
& = \mathcal{O}(T^{2\delta\eta+\delta}\ln T)=o(H);
\end{split}
\end{equation}
by (\ref{2.2}), (\ref{2.4}) and (\ref{3.5}) and by \cite{7}, p.
116, Lemma, ($T\longrightarrow T^\delta$),
\begin{equation} \label{5.5}
\begin{split}
& \sum_{T\leq t_\nu\leq T+H}w_{22}[t_\nu(\tau)]= \mathcal{O}\left( T^{2\delta\eta}\ln T\cdot \ssum_{2\leq n<m<T^\delta}\frac{1}{\sqrt{mn}\ln\frac mn}\right)= \\
& = \mathcal{O}(T^{2\delta\eta+\delta}\ln^2T)=o(H);
\end{split}
\end{equation}
by (\ref{1.3}), $\omega\longrightarrow k$, and by (\ref{2.2}),
\begin{equation}\label{5.6}
\begin{split}
& w_{23}=\frac 12\sum_{n=2}^\infty \frac{d_k^2(n)}{n^{2\sigma}}-\frac 12\sum_{n\geq T^{\delta}} \frac{d_k^2(n)}{n^{2\sigma}}= \\
& = \frac 12\{ F(\sigma,k)-1\}+\mathcal{O}\left(\int_{T^\delta}^\infty x^{\eta-2\sigma}{\rm d}x\right)= \\
& = \frac 12\{ F(\sigma,k)-1\}+\mathcal{O}(T^{-\delta(2\sigma-1-\eta)})
\end{split}
\end{equation}
(of course, $2\sigma-1-\eta>0$, since $\eta$ is arbitrarily small). Finally, by (\ref{2.4}), (\ref{3.7}) and (\ref{5.6}) we obtain
\begin{equation} \label{5.7}
\sum_{T\leq t_\nu\leq T+H}w_{23}=\{ F(\sigma,k)-1\}\frac{1}{4\pi}H\ln\frac{T}{2\pi}+o(H).
\end{equation}
Hence, from (\ref{5.3}), by (\ref{5.4}) -- (\ref{5.7}), the formula (\ref{5.2}) follows.
\end{proof}

Next, the following lemma holds true.

\begin{mydef54}
On the Lindel\"of hypothesis
\begin{equation} \label{5.8}
\sum_{T\leq t_\nu\leq T+H}U_k^2[\sigma,t_\nu(\tau)]=\{ F(\sigma,k)+1\}\frac{1}{4\pi}H\ln\frac{T}{2\pi}+ o(H).
\end{equation}
\end{mydef54}

\begin{proof}
Since (see (\ref{2.10}), (\ref{5.1}))
\begin{displaymath}
U_k(\sigma,t)=1+S_1+\mathcal{O}(T^{-\lambda_2}),
\end{displaymath}
we have
\begin{equation} \label{5.9}
U_k^2(\sigma,t)=1+w_2+2S_1+\mathcal{O}(|S_1|T^{-\lambda_2})+\mathcal{O}(T^{-2\lambda_2}).
\end{equation}
Now we have: by (\ref{2.4}) and (\ref{3.3}),
\begin{displaymath}
\sum_{T\leq t_\nu\leq T+H}S_1[t_\nu(\tau)]=\mathcal{O}(T^{\delta\eta+\delta/2})=o(H);
\end{displaymath}
by (\ref{2.4}), (\ref{3.7}) and (\ref{5.2}),
\begin{displaymath}
\begin{split}
& \sum_{T\leq t_\nu\leq T+H}|S_1|T^{-\lambda_2}= \\
& = \mathcal{O}\left\{ T^{-\lambda_2}\sqrt{H\ln T} \left(\sum_{T\leq t_\nu\leq T+H}w_2[t_\nu(\tau)]\right)^{1/2}\right\}=\\
& = \mathcal{O}(T^{-\lambda_2}H\ln T)=o(H).
\end{split}
\end{displaymath}
Consequently, from (\ref{5.9}), by (\ref{3.7}), the formula (\ref{5.8}) follows.
\end{proof}

\subsection{}

Let
\begin{equation} \label{5.10}
\begin{split}
& S_2(t)=\sum_{2\leq n<T^\delta}\frac{d_k(n)}{n^\sigma}\sin(t\ln n), \\
& w_3(t)=\{ S_2(t)\}^2.
\end{split}
\end{equation}
The following lemma holds true.

\begin{mydef55}
\begin{equation} \label{5.11}
\sum_{T\leq t_\nu\leq T+H} w_3[t_\nu(\tau)]=\{ F(\sigma,k)-1\}\frac{1}{4\pi}H\ln\frac{T}{2\pi}+o(H).
\end{equation}
\end{mydef55}

\begin{proof}
Since (comp. (\ref{5.3}))
\begin{equation} \label{5.12}
\begin{split}
& w_3(t)=\ssum_{m,n}\frac{d_k(m)d_k(n)}{(mn)^\sigma}\sin(t\ln m)\sin(t\ln n)= \\
& = -\frac 12\ssum_{m,n}\frac{d_k(m)d_k(n)}{(mn)^\sigma}\cos\{ t\ln(mn)\}+ \\
& + \ssum_{n<m} \frac{d_k(m)d_k(n)}{(mn)^\sigma}\cos\left( t\ln\frac mn\right)+ \\
& + \frac 12 \sum_n \frac{d_k^2(n)}{n^{2\sigma}}=w_{31}(t)+w_{32}(t)+w_{33}(t),
\end{split}
\end{equation}
we obtain formula (\ref{5.11}) in the same way as in (\ref{5.3}) -- (\ref{5.7}).
\end{proof}

Next, the following lemma holds true.

\begin{mydef56}
On the Lindel\"of hypothesis
\begin{equation} \label{5.13}
\begin{split}
& \sum_{T\leq t_\nu\leq T+H} V_k^2[\sigma,t_\nu(\tau)]= \\
& = \{ F(\sigma,k)-1\}\frac{1}{4\pi}H\ln\frac{T}{2\pi}+o(H).
\end{split}
\end{equation}
\end{mydef56}

\begin{proof}
Since (see (\ref{2.10}), (\ref{5.10}))
\begin{displaymath}
V_k(\sigma,t)=-S_2+\mathcal{O}(T^{-\lambda_2}),
\end{displaymath}
we have
\begin{equation} \label{5.14}
V_k^2(\sigma,t)=w_3+\mathcal{O}(T^{-\lambda_2}|S_2|)+\mathcal{O}(T^{-2\lambda_2}).
\end{equation}
Consequently, the proof may be finished in the same way as in the case of Lemma 4 (comp. (\ref{5.12}), (\ref{5.14})).
\end{proof}

Since (see (\ref{2.9}))
\begin{displaymath}
|\zeta(s)|^{2k}=U_k^2+V_k^2,
\end{displaymath}
by (\ref{5.8}) and (\ref{5.13}) we obtain the following.

\begin{mydef57}
On the Lindel\"of hypothesis
\begin{equation} \label{5.15}
\sum_{T\leq t_\nu\leq T+H} |\zeta[\sigma+it_\nu(\tau)]|^{2k}=\frac{1}{2\pi}F(\sigma,k)H\ln\frac{T}{2\pi}+o(H).
\end{equation}
\end{mydef57}

\section{Theorem 2 and main Theorem}

Now we obtain from (\ref{5.15}), in a way very similar to the one used in the proof of Theorem 1, the following.

\begin{mydef12}
On the Lindel\"of hypothesis
\begin{equation} \label{6.1}
\int_{G(x)}|\zeta(\sigma+it)|^{2k}{\rm d}t=\frac{2x}{\pi}F(\sigma,k)H+o\left(\frac{xH}{\ln T}\right).
\end{equation}
\end{mydef12}

Further, from (\ref{6.1}) we obtain

\begin{mydef44}
On the Lindel\"of hypothesis
\begin{equation} \label{6.2}
\int_{G(x)}|\zeta(\sigma+it)|^{2k-1}{\rm d}t<\frac{2xH}{\pi}\sqrt{F(\sigma,2k-1)}\cdot \{1+|o(1)|\}.
\end{equation}
\end{mydef44}

Indeed, by (\ref{4.3}) and (\ref{6.1}), $k\longrightarrow 2k-1$, we have
\begin{displaymath}
\begin{split}
& \int_{G(x)}|\zeta(\sigma+it)|^{2k-1}{\rm d}t< \\
& < \sqrt{m\{ G(x)\}}\left( \int_{G(x)}|\zeta(\sigma+it)|^{4k-2}{\rm d}t\right)^{1/2}< \\
& < \frac{2xH}{\pi}\sqrt{F(\sigma,2k-1)}\cdot\{ 1+|o(1)|\}.
\end{split}
\end{displaymath}
Finally, from (\ref{4.7}) and (\ref{6.2}) we obtain our main result:

\begin{mydef1}
On the Lindel\"of hypothesis
\begin{equation} \label{6.3}
\begin{split}
& 1-|o(1)|<\frac{1}{m\{ G(x)\}}\int_{G(x)}|\zeta(\sigma+it)|^{2k-1}{\rm d}t< \\
& < \sqrt{F(\sigma,2k-1)}+|o(1)|.
\end{split}
\end{equation}
\end{mydef1}

\begin{remark}
The question about the order of the mean-value of the function
\begin{displaymath}
|\zeta(\sigma+it)|^{2k-1},\ k=1,2,\dots
\end{displaymath}
defined on the infinite class of disconnected sets $\{ G(x)\}$ is answered by the inequalities (\ref{6.3}).
\end{remark}

\begin{remark}
The inequalities (\ref{1.2}) follow from (\ref{6.3}) as the special case $x=\pi/2$ (see (\ref{2.3}), (\ref{2.4}), (\ref{4.3})).
\end{remark}

\thanks{I would like to thank Michal Demetrian for helping me with the electronic version of this work.}

\begin{thebibliography}{29}
\bibitem{1} H. Davenport, ``Note on mean-value theorems for the Riemann zeta-function'', J. London Math. Soc., 10 (1935), 136-138.
\bibitem{2} A. E. Ingham, ``Mean-value theorems in the Riemann zeta-function'', Quart. J. Math., 4 (1933), 278-290.
\bibitem{3} J. Moser, ``On the theorem of Hardy-Littlewood in the theory of the Riemann zeta-function'', Acta Arith., 31 (1976), 45-51 (in Russian).
\bibitem{4} J. Moser, ``New consequences of the Riemann-Siegel formula and law of asymptotic equality of signum areas of the $Z(t)$-function'', Acta Arith., 42 (1982), 1-10 (in Russian); arXiv: 1312.4767.
\bibitem{5} J. Moser, ``Lindel\"of hypothesis and the order of the mean-value of $|\zeta(s)|^{2k-1}$ in the critical strip'', Acta Math. Univ. Comen., 48-49 (1986), 53-54 (in Russian).
\bibitem{6} E. C. Titchmarsh, ``Mean-value theorems in the theory of the Riemann zeta-function'', Messenger of Math., 58 (1929), 125-129.
\bibitem{7} E. C.
Titchmarsh, \emph{The theory of the Riemann zeta-function}, Clarendon Press, Oxford, 1951.
\end{thebibliography}
\end{document}
\begin{document} \author{Maria Gualdani and Nestor Guillen} \begin{abstract} In this manuscript we investigate the regularization of solutions for the spatially homogeneous Landau equation. For moderately soft potentials, it is shown that weak solutions become smooth instantaneously and stay so over all times, and the estimates depend only on the initial mass, energy, and entropy. For very soft potentials we obtain a conditional regularity result, hinging on what may be described as a nonlinear Morrey space bound, assumed to hold uniformly over time. This bound always holds in the case of moderately soft potentials, and nearly holds for general potentials, including Coulomb. This latter phenomenon captures the intuition that for moderately soft potentials, the dissipative term in the equation is of the same order as the quadratic term driving the growth (and potentially, singularities). In particular, for the Coulomb case, the conditional regularity result shows a rate of regularization much stronger than what is usually expected for uniformly parabolic equations. The main feature of our proofs is the analysis of the linearized Landau operator around an arbitrary and possibly irregular distribution. This linear operator is shown to be a degenerate elliptic Schr\"odinger operator whose coefficients are controlled by $A_p$-weights. \end{abstract} \title{On $A_p$ weights and the Landau equation} \baselineskip=14pt \pagestyle{headings} \markboth{$A_p$ weights and the homogeneous Landau equation}{M. Gualdani, N. Guillen} \section{Introduction}\label{section:introduction} The Landau equation with Coulomb potential models the evolution of a particle distribution \begin{align*} f(x,v,t): \Omega \times \mathbb{R}^d\times\mathbb{R}_+\to\mathbb{R},\;\; \Omega\subset\mathbb{R}^d, \end{align*} and arises as a limit of the Boltzmann equation in the regime where grazing collisions are predominant.
Generally, the evolution of the particle density is described by the equation \begin{align}\label{eqn:Landau} \partial_tf +v\cdot\nabla_x f =Q(f,f), \end{align} with initial and boundary conditions on $f$. The quadratic term $Q(f,f)$ describes how collisions affect the evolution of the particle distribution. The collision operator $Q(f,f)$ for Landau-Coulomb is given by \begin{align}\label{eqn:Landau Collisional Term Classical Expression} Q(f,f) := \textnormal{div}_v \left (\int_{\mathbb{R}^d} \Phi(v-w)\left (\Pi(v-w)(f(w)\nabla_vf(v)-f(v)\nabla_wf(w)) \right )\;dw \right ), \end{align} with $\Pi(z)$ (for $z\neq 0$) the projection onto the orthogonal complement of $z$, \begin{align*} \Pi(z) := \mathbb{I}- \frac{ z\otimes z}{|z|^{2}}, \end{align*} and \begin{align*} \Phi(z) = C_d|z|^{2-d}, \quad d\geq 3. \end{align*} A great deal of attention from the mathematical community has also been devoted to the analysis of \eqref{eqn:Landau}-\eqref{eqn:Landau Collisional Term Classical Expression} with $\Phi$ of the more general form \begin{align} \Phi(z) = C_\gamma |z|^{2+\gamma},\;\;\;\textrm{for any $\gamma\in[-d,0]$}. \label{gene_kernel} \end{align} Borrowing the terminology from the Boltzmann equation, in the literature $ \Phi(z)$ is called a moderately soft potential when $\gamma \in [-2,0]$ and a very soft potential if $\gamma < -2$. The resulting family of equations \eqref{eqn:Landau}-\eqref{gene_kernel} presents very interesting analytical questions, and their understanding has shed light on the physically relevant Landau-Coulomb equation ($\gamma=-d$). The following fact about the structure of $Q(f,f)$ has been key in a great deal of the literature on the Landau equation: for a smooth $f$, the interaction term $Q(f,f)$ can be expressed in terms of a second order elliptic operator with respect to $v$ and (this is most crucial) its coefficients are given by integral non-local operators of $f$.
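As a quick sanity check of the algebra behind the kernel (this snippet is ours and purely illustrative, not part of the manuscript): $\Pi(z)$ is the orthogonal projection onto $z^\perp$, so $\Pi(z)z = 0$, $\Pi(z)^2 = \Pi(z)$ and $\mathrm{tr}\,\Pi(z) = d-1$; the trace identity is what makes the trace of the matrix coefficient a scalar convolution of $f$.

```python
# Illustrative check (not from the manuscript) of the projection kernel
# Pi(z) = I - z z^T / |z|^2: it annihilates z, is idempotent, and has
# trace d - 1.  Plain Python in dimension d = 3.

def Pi(z):
    d, n2 = len(z), sum(c * c for c in z)
    return [[(1.0 if i == j else 0.0) - z[i] * z[j] / n2 for j in range(d)]
            for i in range(d)]

z = [1.0, -2.0, 0.5]
P = Pi(z)
Pz = [sum(P[i][j] * z[j] for j in range(3)) for i in range(3)]
PP = [[sum(P[i][m] * P[m][j] for m in range(3)) for j in range(3)]
      for i in range(3)]
err_proj = max(abs(c) for c in Pz)                        # Pi z = 0
err_idem = max(abs(PP[i][j] - P[i][j]) for i in range(3) for j in range(3))
trace = sum(P[i][i] for i in range(3))                    # tr Pi = d - 1 = 2
print(err_proj < 1e-12, err_idem < 1e-12, abs(trace - 2.0) < 1e-12)
# prints: True True True
```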
For $\Phi(z) $ as in \eqref{gene_kernel} these integral operators involve Riesz potentials of $f$. Indeed, for such $\Phi(z)$ the operator $Q(f,f)$ takes the form \begin{align*} Q(f,f)= \textnormal{div} \left (A_{f,\gamma}\nabla f-f\nabla a_{f,\gamma} \right), \textnormal{ with } a_{f,\gamma} := \textnormal{tr}(A_{f,\gamma}), \end{align*} where \begin{align}\label{Af_gamma} & A_{f,\gamma} := C(d,\gamma) \int_{\mathbb{R}^d} |v-w|^{2+\gamma}\Pi(v-w)f(w) \;dw. \end{align} The constant $C(d,\gamma)$ is such that \begin{align*} a_{f,\gamma} := \textnormal{tr}(A_{f,\gamma}) = (-\Delta)^{-\frac{d+2+\gamma}{2}}f. \end{align*} In general, for any $\gamma \geq -d$, we will write \begin{align}\label{hf_gamma} h_{f,\gamma}:= -\Delta a_{f,\gamma} = (-\Delta)^{-\frac{(d+\gamma)}{2}}(f) = \left\{ \begin{array}{cl} f, & \quad \textnormal{ if } \gamma=-d,\\ c(d,\gamma) f * |v|^{\gamma}, & \quad \textnormal{ if } \gamma>-d. \end{array} \right. \end{align} The collision operator $Q(f,f) $ then has the ``non-divergence'' representation \begin{align*} Q(f,f) = \textnormal{tr}(A_{f,\gamma}D^2f)+fh_{f,\gamma}. \end{align*} Equation \eqref{eqn:Landau}-\eqref{gene_kernel} may therefore be described as a nonlinear evolution equation involving reaction, diffusion, and transport effects. If the initial distribution is spatially homogeneous, i.e. if $f_{\textnormal{in}}(x,v) = f_{\textnormal{in}}(v)$, one may consider a simpler equation in the velocity space only, where the transport term drops out. This yields the homogeneous Landau equation \begin{align}\label{eqn:Landau homogeneous} \partial_tf = Q(f,f), \end{align} which is the equation studied hereafter. {Let us illustrate the challenges in estimating solutions of \eqref{eqn:Landau homogeneous} in the most physically important case, namely the Coulomb case $\gamma = -d$.
The non-divergence expression for the equation is most helpful; for $\gamma=-d$ we have $h_{f,\gamma} = f$, and the equation becomes \begin{align*} \partial_t f = \textnormal{tr}(A_{f,\gamma}D^2f)+f^2. \end{align*} The term $f^2$ could very well drive the equation to finite time blow up, as with the semilinear heat equation \cite{GigaKohn85}. However, there are no known examples of solutions blowing up in finite time, leaving open the possibility that global $L^\infty$ estimates may hold. Any attempt to obtain $L^\infty$ estimates for the equation must take into account the competition between the term $f^2$ (and $h_{f,\gamma} f$ in general) and the diffusive term $\textnormal{tr}(A_{f,\gamma}D^2f)$. This is discussed further near the end of Section \ref{section: contributions}. The other challenge is the behavior of $A_{f,\gamma}$ for large $v$; this term degenerates as $|v|\to +\infty$, and this weight needs to be taken into account in the coercivity estimates. The current understanding of \eqref{eqn:Landau} is far less advanced than that of equation \eqref{eqn:Landau homogeneous}, although a lot of recent activity on \eqref{eqn:Landau homogeneous} is changing this (this is briefly discussed in the literature overview below)}. As can be inferred, the analysis of \eqref{eqn:Landau homogeneous} can serve as a stepping stone towards understanding the full, spatially inhomogeneous system \eqref{eqn:Landau}. The question of regularity versus breakdown for \eqref{eqn:Landau homogeneous} (that is, the validity of the $L^\infty$ bounds discussed above) remains unresolved not only for the Coulomb case, but also for all very soft potentials $\gamma \in [-d,-2)$. Even more, some authors have proposed considering $\gamma$ below $-d$. Observe that from this point on the operator $h_{f,\gamma}$ becomes a pseudo-differential operator as opposed to a convolution operator.
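The danger posed by the quadratic term in isolation can be seen already at the ODE level: dropping the diffusion leaves $\dot y = y^2$, which blows up at time $1/y(0)$. A toy sketch of this (ours, purely illustrative):

```python
# Toy illustration (not from the manuscript): the reaction term alone,
# y' = y^2, blows up at t* = 1/y(0); any global L^infty bound for the
# Landau equation must therefore come from the competing diffusion term.

def euler(y0, t_end, n_steps):
    """Explicit Euler for y' = y^2 on [0, t_end]."""
    y, dt = y0, t_end / n_steps
    for _ in range(n_steps):
        y += dt * y * y
    return y

y0 = 1.0
exact = lambda t: y0 / (1.0 - y0 * t)     # valid only for t < t* = 1/y0
approx = euler(y0, 0.5, 100_000)
print(approx, exact(0.5))                 # both close to 2.0 at t = 0.5
print(exact(0.999))                       # already ~1000: blow-up at t* = 1
```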
In this manuscript we investigate $L^\infty$ estimates for the homogeneous Landau equation in various situations. First, for \eqref{eqn:Landau homogeneous} in the case $\gamma\in(-2,0]$ (Theorem \ref{thm:main soft potentials}) we show that weak solutions become instantaneously bounded locally in space, and remain so for all later times. The weak solutions considered are only assumed to have finite initial mass, second moment, and entropy. For $\gamma \in [-d,-2]$ we obtain a conditional $L^\infty$ estimate for classical solutions (Theorems \ref{thm:very soft potentials estimate} and \ref{thm:Coulomb case good estimate}). The conditions imposed in this case are rather weak: in fact, as argued in Section \ref{sec:discussion of extra assumptions}, although these conditions may fail to hold in general, they appear to ``almost'' hold. Moreover, the conditional $L^\infty$ estimate we obtain for $\gamma=-d$ comes with a much faster rate of regularization than is expected for uniformly parabolic equations. We review some of the recent relevant literature and provide some context for our results in Sections \ref{section: literature review} and \ref{section: contributions} below. \subsection{Recent results in the regularity theory for kinetic equations}\label{section: literature review} The analysis of \eqref{eqn:Landau} and \eqref{eqn:Landau homogeneous} comprises a vast literature. Here we mention only a few of the results relevant to the question of regularity, with an emphasis on \eqref{eqn:Landau homogeneous}. In general, the question of existence of weak solutions for \eqref{eqn:Landau homogeneous} (arbitrary $\gamma$) was addressed by Villani \cite{V98}, where the notion of $H$-solution --which exploits the $H$-theorem to make sense of the equation-- was introduced.
For the case of Maxwell potentials ($\gamma= 0$) and more general over-Maxwellian potentials, Desvillettes and Villani \cite{DesVil2000a,DesVil2000b} studied thoroughly the issues of existence, uniqueness, and convergence to equilibrium. First, in \cite[Theorem 5]{DesVil2000a} they show that given any initial datum $f_{\textnormal{in}}$ which also lies in $L^2$, there exists at least one weak solution to \eqref{eqn:Landau homogeneous} which becomes smooth for all positive times. The estimates depend only on the known conserved quantities. It is also shown that weak solutions are unique if $f_{\textnormal{in}}$ lies in a weighted $L^2$ space \cite[Theorem 6]{DesVil2000a}. In subsequent work \cite{DesVil2000b}, entropy dissipation estimates are used to prove convergence of solutions to equilibrium, this being exponentially fast for over-Maxwellian potentials, and with an algebraic rate for hard potentials. Analyzing the case of soft potentials ($\gamma \in (-2,0]$) has proved more difficult. Fournier and Guerin \cite{FournierGuerin2009} proved uniqueness of bounded weak solutions in the case of soft potentials. Later work of Fournier \cite{Fournier2010} extends this to the Coulomb potential. Alexandre, Liao, and Lin \cite{AleLinLia2013} obtained estimates on the growth of the $L^p$ norms of a solution, this being later extended to the case $\gamma=-2$ by Wu \cite{Wu13}. These last two results are based on the propagation of $L^p$ bounds, and thus they require initial data in $L^p$. The resulting upper bounds for the $L^p$ norm grow exponentially with time \cite{AleLinLia2013,Wu13}. In \cite{Silvestre2015}, Silvestre obtains $L^\infty$ estimates for classical solutions to the homogeneous Landau equation for $\gamma> -2$, yielding by non-divergence methods estimates for classical solutions that are similar to those in this paper (see discussion after Theorem \ref{thm:main soft potentials}).
In the work of Golse, Imbert, Mouhot, and Vasseur \cite{GIMV16} the inhomogeneous Landau equation is considered (and more generally, kinetic Fokker-Planck equations with rough coefficients) and the authors obtain H\"older regularity for bounded weak solutions for any $\gamma\ge-d$. We also point out earlier work on parabolic kinetic equations by Pascucci and Polidoro \cite{PascucciPolidoro2004}, which is a predecessor, in the Moser iteration flavor, of \cite{GIMV16}, the latter being based on De Giorgi's method. Subsequently, Cameron, Silvestre, and Snelson \cite{CamSilSne2017} applied the H\"older estimates from \cite{GIMV16} to obtain a priori estimates for the $L^\infty$ norm of bounded solutions in the case of moderately soft potentials, $\gamma>-2$ (see Section \ref{section: contributions} where we further discuss this result). For the Boltzmann equation, we mention recent regularity results by Silvestre \cite{Silvestre2014} by non-divergence methods (for the homogeneous case) and by Imbert and Silvestre \cite{Imb_Silv2017} using a combination of variational and non-divergence arguments (for the general case). For the (inhomogeneous) Landau equation near equilibrium, Kim, Guo, and Hwang \cite{KimGuoHwang2016} recently used De Giorgi style iterations following \cite{GIMV16} and showed the existence of global smooth solutions for initial data close enough to equilibrium, the closeness measured just in the $L^\infty$ norm. For very soft potentials much less is known: this includes (i) local existence, and (ii) global existence of a classical solution for initial data close to equilibrium by Guo \cite{Guo02} (covering the spatially inhomogeneous case). For $\gamma \ge -2$, Chen, Desvillettes and He \cite{CDH09} and Carrapatoso, Tristani and Wu \cite{CTW15} showed existence of the so-called $H$-solutions introduced by Villani in \cite{V98}, while existence of weak solutions for $\gamma=-d$ was shown by Desvillettes \cite{Desvillettes14}.
In a related work, Carrapatoso, Desvillettes and He \cite{CDH15} showed that any weak solution to the Landau equation with $\gamma =-3$ converges to the equilibrium function for any initial data with finite mass, energy and higher $L^1$-moments. The question of $L^\infty$ estimates for very soft potentials $\gamma<-2$ (and notably, the Coulomb potential $\gamma=-d$) remains a difficult one. There has been partial but encouraging progress for an equation similar to \eqref{eqn:Landau homogeneous}, introduced by Krieger and Strain in \cite{KriStr2012}. In successive works, Krieger and Strain \cite{KriStr2012} and later Gressman, Krieger and Strain \cite{GreKriStr2012} consider for $d=3$ the equation \begin{align}\label{eqn:Krieger-Strain} \partial_t f = a_f \Delta f + \alpha f^2,\;\;a_f = (-\Delta)^{-1}f, \end{align} and show that for spherically symmetric and radially decreasing initial data there is a global in time solution (enjoying the same symmetries for positive time) which remains bounded, provided that $\alpha \in (0,\tfrac{74}{75})$. The restriction on the parameter $\alpha$ corresponds to the range of $\alpha$ for which solutions to \eqref{eqn:Krieger-Strain} have finite $L^{3/2}$ norm. Observe that for $\alpha=1$ the above equation may be rewritten as \begin{align*} \partial_t f = \textnormal{div}\left ( a_f \nabla f -f\nabla a_f\right ), \end{align*} which resembles \eqref{eqn:Landau homogeneous} in the Coulomb case $\gamma=-d$. In \cite{GuGu15}, the authors of the present manuscript proved that for all $\alpha \in [0,1]$ equation \eqref{eqn:Krieger-Strain} has spherically symmetric, radially decreasing solutions that remain bounded for all positive times, assuming the initial data have high enough integrability. The proof relies on a barrier argument applied to a mass function associated to the solution. Unfortunately, this barrier argument falls short of yielding a result for the Landau-Coulomb equation.
Nevertheless, the approach in \cite{GuGu15} does show that a (spherically symmetric, radially decreasing) solution to the Landau equation with $\gamma=-d$ does not blow up as long as it remains bounded in some $L^p$ for any $p>\frac{d}{2}$. In light of what is known about blow up for semilinear parabolic equations, this last result is to be expected. We note, moreover, that the aforementioned work of Silvestre \cite{Silvestre2015} obtains regularization under the assumption that similar $L^p$ norms remain bounded in time. We remark that the tools in \cite{GuGu15} and \cite{Silvestre2015} are somewhat different: while \cite{Silvestre2015} relies on methods from fully nonlinear equations, \cite{GuGu15} uses a mass function barrier argument to propagate an initial $L^p$ bound with $p>d/2$ over time, ultimately leading to a {\em non-conditional} $L^\infty$ bound for spherically symmetric functions that so far has not been obtained with the methods from \cite{Silvestre2015}. Regarding higher regularity of the solutions: in the homogeneous case, once the $L^\infty$ norm is controlled, one can obtain higher regularity of the solution (see for instance \cite[Section 4]{GuGu15} for the radially symmetric case). For the inhomogeneous case traditional parabolic tools are insufficient. In \cite{HendersonSnelson2017} Henderson and Snelson provide $L^\infty$ estimates for all the derivatives of bounded weak solutions to the inhomogeneous Landau equation, and subsequently in \cite{HendersonSnelsonTarfulea2017} Henderson, Snelson, and Tarfulea improve on the previous result by showing first that solutions spread mass instantaneously to all parts of the domain, which leads to improved estimates on the derivatives.
\subsection{Contributions of the present work}\label{section: contributions} There are two main ideas we would like to highlight in this manuscript: (i) how the theory of $A_p$ weights and weighted Sobolev and Poincar\'e estimates plays a role in the analysis of the Landau equation; and (ii) the observation that there are inequalities that guarantee $L^\infty$ estimates for solutions and these inequalities \textbf{hold for all solutions} in the case of moderately soft potentials and \textbf{nearly hold} in the case of very soft potentials (see discussion in Section \ref{sec:discussion of extra assumptions}). This last claim refers to the fact that the ``reaction'' term in \eqref{eqn:Landau homogeneous}, although possibly very singular for $\gamma<-2$, is of the same order as the diffusion term in the equation (see Proposition \ref{prop:assumption 1 almost holds}). This means that \eqref{eqn:Landau homogeneous} ought to be thought of as a critical equation in some sense. This suggestion is further reinforced by the recent results in \cite{GuGu15} regarding estimates for solutions to \eqref{eqn:Krieger-Strain}. The main {\em non-conditional} result in this work states that weak solutions (Definition \ref{def:solution}) to the Landau equation with $\gamma\in(-2,0]$ become instantaneously bounded. \begin{thm}\label{thm:main soft potentials} Let $f: \mathbb{R}^d\times \mathbb{R}_+\to\mathbb{R}$ be a weak solution of the Landau equation \eqref{eqn:Landau homogeneous} where $\gamma \in(-2,0]$ and the initial data has finite mass, energy and entropy. Then, there is a constant $C_0$ determined by $d$, $\gamma$, the mass, energy, and entropy of $f_{\textnormal{in}}$, such that \begin{align*} f(v,t) \leq C_0\left ( 1 + \frac{1}{t} \right )^{\frac{d}{2}} \left (1+|v| \right)^{-\gamma\tfrac{d}{2}}.
\end{align*} \end{thm} The result of Theorem \ref{thm:main soft potentials} is certainly expected, save for the appearance of the factor $(1+|v|)^{-\gamma\tfrac{d}{2}}$, which only reflects a shortcoming of our arguments. Indeed, the a priori estimate obtained in \cite{Silvestre2015} and \cite{CamSilSne2017} does not contain such a factor. However, the barrier arguments in \cite{Silvestre2015,CamSilSne2017}, although covering weak solutions to the PDE, require these weak solutions to be bounded. Unlike in \cite{Silvestre2015,CamSilSne2017}, the result in Theorem \ref{thm:main soft potentials} makes no boundedness assumption and is therefore new. This subtle but non-trivial distinction would disappear as soon as uniqueness results for unbounded weak solutions are discovered: indeed, the estimate for bounded weak solutions in \cite{Silvestre2015,CamSilSne2017} would be carried to arbitrary (that is, possibly unbounded) weak solutions by a density argument as soon as such weak solutions are known to be unique. The other two main results in this work are conditional estimates, Theorem \ref{thm:very soft potentials estimate} and Theorem \ref{thm:Coulomb case good estimate}, dealing with very soft potentials and the Coulomb potential, respectively. These are discussed at length in the next section. However, we want to highlight the second of these two theorems here (Theorem \ref{thm:Coulomb case good estimate}), and state its assumptions in informal terms before going forward (the assumptions are Assumptions \ref{Assumption:Epsilon Poincare}-\ref{Assumption:Local Doubling} in the following section). We explain what we mean by ``$\varepsilon$-Poincar\'e inequality'' below, while the notion of ``doubling'' is reviewed in Section \ref{section: Main result}.
\begin{thm}\label{thm:main Coulomb conditional} Let $f: \mathbb{R}^d\times [0,T] \to\mathbb{R}$ be a weak solution of the Landau equation \eqref{eqn:Landau homogeneous} with Coulomb potential ($\gamma=-d$) and whose initial data $f_{\textnormal{in}}$ has finite mass, energy and entropy. Assume $f$ is doubling and the $\varepsilon$-Poincar\'e inequality holds uniformly over the time interval $[0,T]$. Then, for every $s\in (0,1)$ there is a constant $C_0$ determined by $s$, the initial data, and the $\varepsilon$-Poincar\'e and doubling constants of $f$, such that \begin{align*} f(v,t) \leq C_0(1+|v|)^s\left ( 1 + \frac{1}{t} \right )^{1+s},\;\;\forall\;t\in [0,T]. \end{align*} \end{thm} The theorem is still a conditional one, but we want to remark that within those conditions it produces a very fast regularization rate (as close to $t^{-1}$ as desired). We believe this reflects how the diffusive power of the equation is particularly strong when $f$ is singular, and that such a rate should be expected if the quadratic term does not overcome the diffusion term in general. Of course, it could turn out that the assumptions of the theorem only hold when the resulting regularization rate becomes trivial; for instance, if $f(t)$ is assumed to be uniformly bounded in $L^p$ when $t\in [0,T]$ for some sufficiently large $p$. However, this is unclear at the moment. In any case, we note that a function $f$ can be doubling without enjoying high integrability, while the $\varepsilon$-Poincar\'e inequality ``almost'' holds for arbitrary $f \in L^1$; see Proposition \ref{prop:assumption 1 almost holds} and the discussion that follows it. The approach taken towards all of our results starts from the tautological observation that a solution to \eqref{eqn:Landau homogeneous} solves a linear equation \begin{align}\label{eqn:linear equation} \partial_t f = Q(g,f), \end{align} where $g = f$.
Then, one aims to prove as much as possible about linear operators of the form $Q(g,\cdot)$ by only using properties of $g$ that are preserved by the equation (i.e. its mass, second moment, and entropy bounds). This is of course not a novel approach at all; the novelty here lies in the observation that the coefficients arising in the differential operator $Q(g,\cdot)$ are bounded by Muckenhoupt weights and that the strength of the term driving growth is of the same order as the term driving the dissipation. Let us elaborate on these two points. {For the latter point, suppose we were interested in controlling the $L^2$ norm of $f$ solving \eqref{eqn:linear equation} for some fixed $g$. Roughly speaking, this entails determining the positivity of the differential operator given by $L(\cdot) = -Q(g,\cdot)$, so \begin{align*} Lf = -\textnormal{div}(A_{g,\gamma} \nabla f - f \nabla a_{g,\gamma}) = -\textnormal{tr}(A_{g,\gamma}D^2f) - fh_{g,\gamma}. \end{align*} Integrating by parts several times, we have (for sufficiently smooth $f$) \begin{align*} \int_{\mathbb{R}^d} fLf\;dv = \int_{\mathbb{R}^d} (A_{g,\gamma}\nabla f,\nabla f)\;dv-\tfrac{1}{2}\int_{\mathbb{R}^d}h_{g,\gamma} f^2\;dv, \end{align*} the second term arising from $\int_{\mathbb{R}^d} f(\nabla a_{g,\gamma},\nabla f)\;dv = \tfrac{1}{2}\int_{\mathbb{R}^d}(\nabla a_{g,\gamma},\nabla f^2)\;dv$ followed by one more integration by parts. The ``potential'' $h_{g,\gamma}$ corresponds to the ``reactive'' part of the equation; in particular, for the actual Landau equation \eqref{eqn:Landau homogeneous} with $\gamma=-d$, we would have $g=f$ and $h_{f,\gamma} = f$. Then, for general $g$, determining the positivity of $L$ would amount to the validity of the weighted Poincar\'e inequality \begin{align*} \tfrac{1}{2}\int_{\mathbb{R}^d}h_{g,\gamma} f^2\;dv\leq \int_{\mathbb{R}^d} (A_{g,\gamma}\nabla f,\nabla f)\;dv. \end{align*} We are in fact content with bounding the growth of the $L^2$ norm instead of showing monotonicity, and a slightly weaker inequality would suffice (for general $g$, monotonicity is not a reasonable expectation).
Therefore, for the purposes of bounding the growth of $f$, it would suffice to show there is some $C = C(g)$ such that for all $f$, \begin{align}\label{eqn:pre epsilon Poincare} \tfrac{1}{2}\int_{\mathbb{R}^d}h_{g,\gamma} f^2\;dv\leq \int_{\mathbb{R}^d} (A_{g,\gamma}\nabla f,\nabla f)\;dv + C\int_{\mathbb{R}^d}f^2\;dv. \end{align} Then, the validity of such an inequality depends on certain properties of the function $g$; moreover, if (\ref{eqn:pre epsilon Poincare}) holds, it would mean that the diffusive term in $Q(g,\cdot)$ overcomes the growth term, represented by the potential $h_{g,\gamma}$. We will show that this inequality (actually a more general version of it) always holds when $\gamma \in (-2,0]$. When $\gamma \in [-d,-2]$ it is not clear at the moment if this inequality holds, but we show that for this range of $\gamma$ it is always very close to holding (see Proposition \ref{prop:assumption 1 almost holds} together with Theorem \ref{thm:local weighted Sobolev inequalities}). } This is where the theory of Muckenhoupt weights comes in. The fact that, for any positive $g \in L^1$ with finite mass, momentum and entropy, the coefficients $A_{g,\gamma}$ and $h_{g,\gamma}$ are controlled by Muckenhoupt weights facilitates the proof of certain weighted Poincar\'e and Sobolev inequalities, under conditions that vary depending on the range of $\gamma$. Concretely, we prove a refined version of \eqref{eqn:pre epsilon Poincare}, which we call the ``$\varepsilon$-Poincar\'e inequality'' and for $\varepsilon \in (0,1)$ reads as \begin{align*} \int_{\mathbb{R}^d} h_{f,\gamma} \phi^2\;dv \leq \varepsilon \int_{\mathbb{R}^d} (A_{f,\gamma} \nabla \phi,\nabla \phi)\;dv + \Lambda(\varepsilon) \int_{\mathbb{R}^d} \phi^2\;dv, \end{align*} for all functions $\phi$ that make the right-hand side finite. Again, we will show that such an inequality always holds for any solution $f$ to the Landau equation with $\gamma \in (-2,0]$ and {\em almost holds} when $\gamma \in [-d,-2]$.
This inequality is the decisive point in the proof. To see how, note that thanks to the above inequality, one can control the growth of the $L^p$ norms of $f$. For instance, for $p=2$, applying the inequality above with $\phi=f$, we are led to \begin{align*} \frac{1}{2}\frac{d}{dt} \int_{\mathbb{R}^d} f^2\;dv & = - \int_{\mathbb{R}^d} (A_{f,\gamma} \nabla f,\nabla f)\;dv +\int_{\mathbb{R}^d} h_{f,\gamma} f^2\;dv \\ & \leq -(1-\varepsilon) \int_{\mathbb{R}^d} (A_{f,\gamma} \nabla f,\nabla f)\;dv + \Lambda(\varepsilon) \int_{\mathbb{R}^d} f^2\;dv. \end{align*} By Gr\"onwall's inequality, this yields at most exponential growth of the $L^2$ norm: $\|f(t)\|_{L^2(\mathbb{R}^d)}^2 \leq e^{2\Lambda(\varepsilon)t}\|f(0)\|_{L^2(\mathbb{R}^d)}^2$. The derivation of $L^\infty$ estimates (using the aforementioned weighted inequalities) follows to a great extent the De Giorgi-Nash-Moser theory for divergence-form parabolic equations. As the respective ``parabolic'' equations have degenerate coefficients, the ideas developed in the 80's in order to deal with singular weights \cite{FKS82, GuWhe91} have a significant role in our arguments (see, in particular, Lemma \ref{lem:weighted Moser estimate}). The validity of weighted inequalities like the one above is directly related to the theory of $A_p$-weights and to the positivity of Schr\"odinger operators with non-constant coefficients. In particular, for a generic distribution $f$, one can think of the linearized Landau operator $L(\cdot) = -Q(g,\cdot)$ as a Schr\"odinger operator with very rough coefficients. The observation that even for a very singular $f$ the resulting coefficients of $L$ satisfy pointwise bounds with respect to certain $A_p$-weights permits us to invoke the extensive literature on weighted integral inequalities (including, but not limited to, \cite{Fefferman1983}). We remark that for the present work it is necessary to obtain weighted inequalities corresponding to mass distributions $f$ which may be far from equilibrium, so we must assume as little about $f$ as possible.
Moreover, these inequalities are intrinsically related to the uncertainty principle and its generalization, as noted by Fefferman in \cite{Fefferman1983}. The relevance of uncertainty principles to kinetic equations has been noted before, and we refer the reader to work of Alexandre, Morimoto, Ukai, Xu and Yang \cite{AMUXY08} for further discussion. As already mentioned, an $\varepsilon$-Poincar\'e inequality appears to hold for any $\gamma\geq -d$ for some universal $\varepsilon \ll 1$, independently of even the mass, second moment, and entropy of $f$. For moderately soft potentials, such an inequality also holds for all small $\varepsilon$. It is helpful to make an analogy with the situation seen for the linear heat equation with a Hardy potential, that is \begin{align}\label{eqn:Heat equation Hardy potential} \partial_tf = \Delta f + \lambda |v|^{-2}f. \end{align} It was shown by Baras and Goldstein \cite{BarasGoldstein1984} that there is a critical value of $\lambda$ (namely $\lambda = ((d-2)/2)^2$) across which \eqref{eqn:Heat equation Hardy potential} does or does not admit solutions. Going back to \eqref{eqn:Landau homogeneous}, the question is whether the linear operator $Q(g,\cdot)$ lies ``on the side'' of regularization for arbitrary $g$. However, a simple calculation shows that for $g$ of the form $|v|^{-m}\chi_{B_1}$ with $m$ close to $d$, the $\varepsilon$-Poincar\'e inequality cannot hold with an arbitrarily small $\varepsilon$, suggesting that an entirely different approach is needed for $\gamma<-2$. \subsection{Organization of the paper} In Section \ref{section:preliminaries and main result} we discuss some preliminary concepts and state the main results. In Section \ref{section:A_p class and weighted inequalities} we review relevant facts from the theory of $A_p$ weights and in Section \ref{section:epsilon Poincare} we apply them to obtain weighted norm inequalities of relevance to the Landau equation.
In Section \ref{section: Regularization in L infinity} we derive an energy inequality adapted to the Landau equation and prove a De Giorgi-Nash-Moser type estimate. Lastly, in Section \ref{section:Coulomb regularization} we focus on the case of the Coulomb potential and show a conditional but anomalous rate of regularization in time. \subsection{Notation} A constant will be said to be universal if it is determined by $d$, $\gamma$ and the mass, energy, and entropy of $f_{\textnormal{in}}$. Universal constants will be denoted by $c,c_0,c_1,C_0,C_1,C$. Vectors in $\mathbb{R}^d$ will be denoted by $v,w,x,y$ and so on; the inner product between $v$ and $w$ will be written $(v,w)$. $B_R(v_0)$ denotes the closed ball of radius $R$ centered at $v_0$; if $v_0=0$, we simply write $B_R$. When talking about a family of balls or a generic ball we will omit the radius and center altogether and write $B,B',B''$. The identity matrix will be denoted by $\mathbb{I}$, and the trace of a matrix $X$ will be denoted $\textnormal{tr}(X)$. The initial condition for the Cauchy problem will always be denoted by $f_{\textnormal{in}}$ and any constant of the form $C(f_{\textnormal{in}})$ is a constant that only depends on the mass, second moment and entropy of $f_{\textnormal{in}}$. If $K$ is a convex set and $\lambda>0$, we will use $\lambda K$ to denote the set of points of the form $\lambda (v-v_c)+v_c$, where $v\in K$ and $v_c$ is the center of mass of $K$. In particular, if $K$ is a Euclidean ball or a cube of radius $r$, then $\lambda K$ is respectively a Euclidean ball or a cube of radius $\lambda r$. \section{Preliminaries and main result}\label{section:preliminaries and main result} \subsection{The matrix $A_{f,\gamma}$ and Riesz potentials}\label{section:the Matrix A and Riesz potentials} We will be interested in how the matrix $A_{f,\gamma}$ defined in \eqref{Af_gamma} acts along a given unit direction. For that we introduce a few related functions.
\begin{DEF}\label{def: the matrix A and its functions} Let $e\in\mathbb{S}^{d-1}$, and define: \begin{align*} a^*_{f,\gamma,e}(v) & := (A_{f,\gamma}(v)e,e),\\ a^*_{f,\gamma}(v) & := \inf \limits_{e\in\mathbb{S}^{d-1}}a^*_{f,\gamma,e}(v). \end{align*} \end{DEF} \begin{rem} Throughout the paper, we will often omit the subindex $f$, $\gamma$ or both and understand $A, a, h,a^*_e,a^*$ to be always determined by $f$ and $\gamma$ as in the above definitions. \end{rem} The following lemma is a standard lower bound for $A_{f,\gamma}$ and $a_{f,\gamma}$ in terms of the mass, energy and entropy of $f$ that will be used later in the manuscript; see Desvillettes and Villani \cite{DesVil2000a}. \begin{lem}\label{lem: A lower bound in terms of conserved quantities} There is a constant $c$ determined by $\gamma$, the dimension $d$, as well as the mass, energy, and entropy of $f$, such that for all $v\in \mathbb{R}^d$ \begin{align*} a_{f,\gamma}(v) &\geq c \langle v\rangle^{\gamma+2}, \\ A_{f,\gamma}(v) &\geq c \langle v\rangle^\gamma\mathbb{I}. \end{align*} \end{lem} \subsection{Basic assumptions: weak solutions} In all that follows we fix an initial condition $f_{\textnormal{in}}$ which is a distribution on $\mathbb{R}^d$. For convenience, we will assume $f_{\textnormal{in}}$ is normalized as follows: \begin{align*} \int_{\mathbb{R}^d} f_{\textnormal{in}}(v) \;dv = 1,\;\;\int_{\mathbb{R}^d} f_{\textnormal{in}}(v) v\;dv =0,\;\;\int_{\mathbb{R}^d} f_{\textnormal{in}}(v) |v|^2\;dv =d, \end{align*} and we shall assume the initial entropy of $f_{\textnormal{in}}$ is finite: \begin{align*} H(f_{\textnormal{in}}) =\int_{\mathbb{R}^d}f_{\textnormal{in}}(v)\log(f_{\textnormal{in}}(v))\;dv < \infty.
\end{align*} \begin{DEF} \label{def:solution} A weak solution of \eqref{eqn:Landau homogeneous} in $(0,T)$ with initial data $f_{\textnormal{in}}$ is a function \begin{align*} f:\mathbb{R}^d\times [0,T)\to \mathbb{R} \end{align*} satisfying the following conditions: \begin{enumerate} \item $f\geq 0$ a.e. in $\mathbb{R}^d\times (0,T)$ and \begin{align*} f\in C([0,T);\mathcal{D}'),\;\; f(t) \in L^1_2\cap L\log(L)\;\;\forall\;t\in(0,T),\;f(0) = f_{\textnormal{in}}. \end{align*} \item For every $t\in (0,T)$ \begin{align*} \int_{\mathbb{R}^d}f(v,t)\;dv = 1,\;\int_{\mathbb{R}^d}f(v,t)v_i\;dv = 0,\;\int_{\mathbb{R}^d}f(v,t)|v|^2\;dv = d. \end{align*} \item The equation is understood in the following weak sense: given $\eta \in C^2_c(\mathbb{R}^d)$ and $\phi\in C^2(\mathbb{R})$ such that $\phi''(s)\equiv 0$ for all large $s$ and $\lim\limits_{s\to 0}s^{-1}\phi(s) = 0$, and times $t_1<t_2$, we have \begin{align} & \int_{\mathbb{R}^d} \eta^2 \phi(f(t_2))\;dv-\int_{\mathbb{R}^d} \eta^2 \phi(f(t_1))\;dv \label{eqn:weak formulation}\\ & = -\int_{t_1}^{t_2}\int_{\mathbb{R}^d} (A_{f,\gamma}\nabla f-f\nabla a_{f,\gamma}, \nabla (\eta^2 \phi'(f))) \;dvdt. \notag \end{align} \item The entropy functional $H(f(t)):=\int_{\mathbb{R}^d} f \log (f)dv$ is non-increasing in $t$ and for any pair of times $0\leq t_1<t_2 <T$ we have \begin{align*} H(f(t_1))-H(f(t_2)) = \int_{t_1}^{t_2} D(f(t))\;dt \ge 0, \end{align*} where $D(f)$ denotes the entropy production, that is \begin{align*} D(f(t)) & = \int_{\mathbb{R}^d}4(A_{f,\gamma}\nabla \sqrt{f}, \nabla \sqrt{f})-f h_{f,\gamma}\;dv = -\int_{\mathbb{R}^d}Q(f,f)\log(f)\;dv. \end{align*} \end{enumerate} \end{DEF} \subsection{Further assumptions for very soft potentials: Local Doubling and an $\varepsilon$-Poincar\'e inequality} For $\gamma\leq -2$ all our results are conditional, that is, the regularity estimates are obtained under two further assumptions on the solution $f$, which may not hold in general. These two assumptions are described below.
Given $\varepsilon \in (0,1)$ we say that ``$f$ satisfies the $\varepsilon$-Poincar\'e inequality'' if there is some positive constant $\Lambda>0$ such that for any Lipschitz, compactly supported $\phi$ we have \begin{align*} \int_{\mathbb{R}^d} \phi^2 h_{f,\gamma}\;dv \leq \varepsilon \int_{\mathbb{R}^d} (A_{f,\gamma} \nabla\phi,\nabla\phi)\;dv + \Lambda\int_{\mathbb{R}^d} \phi^2\;dv. \end{align*} This can be thought of as an ``uncertainty estimate'' for the Landau equation (see the discussion regarding the uncertainty principle at the end of Section \ref{section: contributions}). We will also say that ``$f$ satisfies the strong Poincar\'e inequality'' if there is a decreasing, non-negative function $\Lambda:(0,1)\mapsto \mathbb{R}$ such that for any smooth $\phi$ and $\varepsilon \in (0,1)$ we have \begin{align*} \int_{\mathbb{R}^d} \phi^2 h_{f,\gamma}\;dv \leq \varepsilon \int_{\mathbb{R}^d} (A_{f,\gamma}\nabla\phi,\nabla\phi)\;dv + \Lambda(\varepsilon) \int_{\mathbb{R}^d}\phi^2\;dv. \end{align*} \begin{rem}\label{rem:compactness of supports and inequalities} Although the inequality is stated for compactly supported $\phi$, it is clear (by density) that once it holds for compactly supported $\phi$ it also holds for any $\phi$ with \begin{align*} \int_{\mathbb{R}^d} (A_{f,\gamma} \nabla\phi,\nabla\phi)\;dv<\infty \textnormal{ and } \int_{\mathbb{R}^d} \phi^2\;dv<\infty. \end{align*} Accordingly, throughout the manuscript we may apply the $\varepsilon$-Poincar\'e inequality (and other weighted Sobolev inequalities) to $\phi$ which may not be compactly supported but satisfy these two integrability conditions. \end{rem} The two conditions above might be thought of as bounds on the spectral functional, \begin{align}\label{eqn:Lambda f epsillon definition} \Lambda(\varepsilon)f := \sup \left \{ \int_{\mathbb{R}^d}\phi^2 h_{f,\gamma} - \varepsilon( A_{f,\gamma}\nabla \phi,\nabla \phi)\;dv \;:\; \|\phi\|_{L^2(\mathbb{R}^d)} = 1,\;\; t\in (0,T) \right \}. 
\end{align} This brings us to the first of the extra assumptions needed for very soft potentials. \begin{Assumption}\label{Assumption:Epsilon Poincare} There are constants $C_P,\kappa>0$ such that for $\Lambda(\varepsilon)f$ defined in \eqref{eqn:Lambda f epsillon definition}, \begin{align}\label{eqn:Spectral Assumption on f} \Lambda(\varepsilon)f \leq C_P\varepsilon^{-\kappa}\;\; \forall\;\varepsilon\in (0,1). \end{align} \end{Assumption} \begin{rem}\label{rem: h in Ld/2 implies epsilon Poincare} If $\gamma \in (-2,0]$, any $f$ satisfies the $\varepsilon$-Poincar\'e inequality for all $\varepsilon \in (0,1)$ with universal constants (Theorem \ref{thm:sufficient conditions for the Poincare inequality}). This can be intuited from the fact that for $\gamma \in (-2,0]$ the function $ h_{f,\gamma} $ defined in \eqref{hf_gamma} belongs to $ L^{p}_{\textnormal{loc}}$ for some $p>d/2$, with a norm controlled by a universal constant. Then, at least for functions with compact support, the standard Sobolev inequality and the integrability of $h_{f,\gamma}$ can be used to prove an $\varepsilon$-Poincar\'e inequality. \end{rem} The other important assumption that will be required for $\gamma \leq -2$ is the following ``local doubling'' property. \begin{Assumption}\label{Assumption:Local Doubling} There is a constant $C_D>1$ such that for all times $t\in (0,T)$, we have \begin{align}\label{eqn:doubling local} \int_{B_{2r}(v_0)}f(v,t)\;dv \leq C_D\int_{B_r(v_0)}f(v,t)\;dv,\;\;\forall\;v_0\in\mathbb{R}^d,\;r\in(0,1). \end{align} \end{Assumption} The reason we say this is ``local'' is that it only involves balls with radius less than $1$. A locally integrable function which is globally doubling (meaning the above holds for balls with any radius $r>0$) cannot belong to $L^1(\mathbb{R}^d)$. On the other hand, plenty of functions in $L^1(\mathbb{R}^d)$ can be locally doubling. For a discussion on how reasonable this assumption is, see Section \ref{sec:discussion of extra assumptions}.
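For the reader's convenience, here is a short sketch of the standard argument behind the last claim (the specific radii and centers below are chosen purely for illustration): if a nonzero, locally integrable $f\geq 0$ satisfied \eqref{eqn:doubling local} with a uniform constant $C_D$ for {\em every} radius $r>0$, it would have infinite mass.

```latex
% Assume, after a translation and dilation, that m := \int_{B_1(0)} f\,dv > 0.
% For R \geq 1 and any center v_R with |v_R| = 3R we have B_1(0) \subset B_{8R}(v_R),
% so three applications of the doubling inequality (R -> 2R -> 4R -> 8R) give
\begin{align*}
\int_{B_{R}(v_R)} f\;dv \;\geq\; C_D^{-3}\int_{B_{8R}(v_R)} f\;dv \;\geq\; C_D^{-3}\int_{B_{1}(0)} f\;dv \;=\; C_D^{-3}\,m.
\end{align*}
% Choosing R_k = 4^k and centers v_k with |v_k| = 3\cdot 4^k, the balls B_{R_k}(v_k)
% are pairwise disjoint, since B_{R_k}(v_k) lies in the annulus
% 2\cdot 4^k \leq |v| \leq 4^{k+1}. Summing over k yields
\begin{align*}
\int_{\mathbb{R}^d} f\;dv \;\geq\; \sum_{k\geq 0}\int_{B_{R_k}(v_k)} f\;dv \;\geq\; \sum_{k\geq 0} C_D^{-3}\,m \;=\; \infty.
\end{align*}
```

In contrast, the restriction $r\in(0,1)$ in \eqref{eqn:doubling local} blocks the first step above, which is why local doubling is compatible with finite mass.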
\subsection{Main results}\label{section: Main result} The main results of this manuscript are summarized in the next theorems. First, we present sufficient conditions to guarantee that $f$ satisfies an $\varepsilon$-Poincar\'e inequality. \begin{thm}\label{thm:sufficient conditions for the Poincare inequality} Let $\gamma\in (-2,0]$. Then, given any $f\geq 0$ with mass, second moment, and entropy as $f_{\textnormal{in}}$, the following holds for any Lipschitz $\phi$ with compact support \begin{align}\label{eqn:epsilon_Poincare_inequality gamma>-2} \int_{\mathbb{R}^d} \phi^2 h_{f,\gamma} \;dv \leq \varepsilon \int_{\mathbb{R}^d} (A_{f,\gamma}\nabla\phi,\nabla\phi)\;dv + C(f_{\textnormal{in}},d,\gamma)\varepsilon^{\frac{\gamma}{2+\gamma}}\int_{\mathbb{R}^d}\phi^2\langle v\rangle ^\gamma\;dv. \end{align} Let $\gamma \in [-d,-2]$. Assume $f\geq 0$ is doubling in the sense of \eqref{eqn:doubling local}, and assume there exist $s>1$ and a modulus of continuity $\eta(\cdot)$ such that for any cube $Q$ with side length $r\in(0,1)$ \begin{align*} |Q|^{\frac{1}{d}} \left( \fint_{Q}{h_{f,\gamma}}^s\;dv\right)^{\frac{1}{2s}}\left(\fint_{Q}(a_{f,\gamma}^*)^{-s}\;dv\right)^{\frac{1}{2s}} \leq \eta(r). \end{align*} Then, the following Poincar\'e inequality holds for any $\varepsilon \in (0,1)$ \begin{align}\label{eqn:epsilon_Poincare_inequality strongerII_bounded} \int_{\mathbb{R}^d} \phi^2h_{f,\gamma} \;dv \leq \varepsilon \int_{\mathbb{R}^d} (A_{f,\gamma}\nabla\phi,\nabla\phi)\;dv + C(f_{\textnormal{in}},d,\gamma)\tilde \eta(\varepsilon)\int_{\mathbb{R}^d} \phi^2\;dv, \end{align} where $\tilde \eta:(0,1)\to\mathbb{R}$ is a decreasing function, determined by $\eta$, with $\tilde\eta(0+)=\infty$. \end{thm} A very important consequence of Theorem \ref{thm:sufficient conditions for the Poincare inequality} is the following set of $L^\infty$-estimates for solutions to \eqref{eqn:Landau homogeneous}.
\begin{thm}\label{thm:main_1} Let $\gamma \in (-2,0]$ and $f:\mathbb{R}^d\times [0,\infty)\to\mathbb{R}$ be a weak solution to \eqref{eqn:Landau homogeneous}. There is a constant $C=C(f_{\textnormal{in}},d,\gamma)$ such that for any $R>1$, \begin{align*} \|f(t)\|_{L^\infty(B_R(0))}\leq CR^{-\gamma\tfrac{d}{2}}\left (1+\frac{1}{t}\right )^{d/2},\;\;\forall\;t\in(0,+\infty). \end{align*} \end{thm} \begin{thm}\label{thm:very soft potentials estimate} Let $-d< \gamma \leq -2$. Let $f:\mathbb{R}^d\times [0,T)\to\mathbb{R}$ be a classical solution to \eqref{eqn:Landau homogeneous} for which Assumptions \ref{Assumption:Epsilon Poincare} and \ref{Assumption:Local Doubling} hold. Then, there is a constant $C=C(f_{\textnormal{in}},d,\gamma,C_P,\kappa,C_D)$ such that for any $R>1$, \begin{align*} \|f(t)\|_{L^\infty(B_R(0))} \leq CR^{-\gamma\tfrac{d}{2}}\left (1+\frac{1}{t}\right )^{\frac{d}{2}},\;\;\forall\;t\in(0,T). \end{align*} \end{thm} \begin{thm}\label{thm:Coulomb case good estimate} Let $\gamma =-d$ and $f:\mathbb{R}^d\times [0,T)\to\mathbb{R}$ be a classical solution to \eqref{eqn:Landau homogeneous} for which Assumptions \ref{Assumption:Epsilon Poincare} and \ref{Assumption:Local Doubling} hold. Given any $s \in (0,1)$ there is a constant $C=C(s, f_{\textnormal{in}},d,\gamma,C_P,\kappa,C_D)$ such that for any $R>1$, \begin{align*} \|f(t)\|_{L^\infty(B_R(0))} \leq C R^s\left (1+\frac{1}{t}\right )^{1+s},\;\;\forall\;t\in(0,T). \end{align*} \end{thm} \begin{rem}\label{rem:Coulomb case good estimate discussion} The assumptions in Theorem \ref{thm:Coulomb case good estimate}, particularly \eqref{eqn:epsilon_Poincare_inequality strongerII_bounded}, are not satisfactory from the point of view of a general theory for weak solutions. As such, Theorem \ref{thm:Coulomb case good estimate} should be seen as a conditional result which provides relatively simple conditions that guarantee strong regularization.
Most interesting, however, is the rate of decay in the $L^\infty$ estimate. For any solution to a uniformly parabolic equation one expects an estimate of the form \begin{align*} \|f(t)\|_{L^\infty(\mathbb{R}^d)} \lesssim t^{-d/2}, \end{align*} at least for small times $t$ when the initial data belongs to $L^1(\mathbb{R}^d)$. Instead, for the case of the Coulomb potential we have a much faster (even if conditional) rate, one which can be made arbitrarily close to $t^{-1}$. This faster rate of regularization can be seen as a reflection of the fact that the ``diffusivity'' in \eqref{eqn:Landau homogeneous} (for lack of a better term) has a strength of order comparable to $(-\Delta)^{-1}f$; therefore, near points where $f$ fails to be in $L^{\frac{d}{2}}$, the diffusivity becomes stronger and tends to drive the solution $f$ towards smoothness more dramatically. In our proof, the specific way in which the stronger regularization arises in Theorem \ref{thm:Coulomb case good estimate} is through the fact that $h_{f,\gamma}=f$ when $\gamma=-d$ gives an extra power of $f$ in the $\varepsilon$-Poincar\'e inequality, or alternatively, via a nonlinear integral inequality found first by Gressman, Krieger, and Strain \cite{GreKriStr2012}, \eqref{eqn:GressmannKriegerStrain} in Section \ref{section:Coulomb regularization}. This inequality captures in an analytical way the fact that the diffusivity in the equation (which tends to drive the solution towards smoothness) is at least of the same order as the quadratic term (which tends to drive the solution towards blow-up). \end{rem} A by-product of our study is a family of weighted Sobolev inequalities with weights involving ${a^*}_{f,\gamma}$ and $h_{f,\gamma}$, obtained under (relatively) mild assumptions on $f$ when $\gamma <-2$. These inequalities may be of independent interest.
In this manuscript they are used in Section \ref{section: Regularization in L infinity} in deriving De Giorgi-Nash-Moser type estimates with the proper weights. \begin{thm}\label{lem:Inequalities Sobolev weight aII} Let $\gamma \in [-2,0]$. There is a constant $C=C(d,\gamma,f_{\textnormal{in}})$ such that, given a function $f\geq 0$ with mass, energy, and entropy equal to those of $f_{\textnormal{in}}$, for all Lipschitz, compactly supported functions $\phi:\mathbb{R}^d\to\mathbb{R}$, \begin{align*} \left ( \int_{\mathbb{R}^d} \phi^{\frac{2d}{d-2}}({a^*_{f,\gamma}})^{\frac{d-2}{d}} \;dv\right )^{\frac{d-2}{d}}\leq C\int_{\mathbb{R}^d} {a^*}_{f,\gamma} |\nabla \phi|^2\;dv+C\int_{\mathbb{R}^d}|\phi|^2\;dv. \end{align*} Furthermore, if we have $f:\mathbb{R}^d\times I \to \mathbb{R}$ such that $f(\cdot,t)$ satisfies the previous assumptions for all $t\in I$, then for any $\phi:\mathbb{R}^d\times I\to\mathbb{R}$ which is spatially Lipschitz and spatially compactly supported \begin{align*} \left ( \int_{I}\int_{\mathbb{R}^d} \phi^{2(1+\frac{2}{d})} {a^*}_{f,\gamma} \;dvdt \right )^{\frac{1}{(1+\frac{2}{d})}} & \leq C\left ( \int_{I} \int_{\mathbb{R}^d} {a^*}_{f,\gamma} |\nabla \phi|^2 \;dvdt + \sup \limits_{I} \int_{\mathbb{R}^d} \phi^{2}\;dv \right ). \end{align*} Such inequalities extend to $\phi$'s that might not be Lipschitz in the $v$ variable, as long as they can be approximated in the respective norm by Lipschitz, compactly supported $\phi$'s. \end{thm} \begin{thm}\label{thm:Inequalities Sobolev weight gamma below -2} Let $\gamma \in (-d,-2)$.
Let $f$ and $\phi$ be as in the first part of the previous theorem, and assume further $f$ is locally doubling in the sense of \eqref{eqn:doubling local}. Then there is a constant $C = C(d,\gamma,f_{\textnormal{in}},C_D)$ (with $C_D$ as in \eqref{eqn:doubling local}) such that \begin{align*} \left ( \int_{\mathbb{R}^d} \phi^{\frac{2d}{d-2}}({a^*}_{f,\gamma})^{\frac{d-2}{d}} \;dv\right )^{\frac{d-2}{d}}\leq C\int_{\mathbb{R}^d} {a^*}_{f,\gamma} |\nabla \phi|^2+\phi^2\;dv. \end{align*} Moreover, if $f$ and $\phi$ are as in the second part of the previous theorem, and additionally $f$ satisfies \eqref{eqn:doubling local} uniformly in time, then there is some $C=C(d,\gamma,f_{\textnormal{in}},C_D)$ such that \begin{align*} \left ( \int_{I}\int_{\mathbb{R}^d} \phi^{2(1+\frac{2}{d})} {a^*}_{f,\gamma} \;dvdt \right )^{\frac{1}{(1+\frac{2}{d})}} & \leq C\left ( \int_{I} \int_{\mathbb{R}^d} {a^*}_{f,\gamma} |\nabla \phi|^2 \;dvdt + \sup \limits_{I} \int_{\mathbb{R}^d} \phi^{2}\;dv \right ). \end{align*} Finally, for $\gamma=-d$, if $f,\phi$ are as in the first part of the previous theorem and $f$ satisfies \eqref{eqn:doubling local}, then for any $m\in (0,\tfrac{d}{d-2})$ there is some constant $C=C(d,m,\gamma,f_{\textnormal{in}},C_D)$ such that \begin{align*} \left ( \int_{\mathbb{R}^d} \phi^{2m}({a^*}_{f,\gamma})^{m} \;dv\right )^{\frac{1}{m}}\leq C\int_{\mathbb{R}^d} {a^*}_{f,\gamma} |\nabla \phi|^2+\phi^2\;dv. \end{align*} Last but not least, if $f,\phi:\mathbb{R}^d\times I \to \mathbb{R}$ are as in the second part of the previous theorem and $f$ satisfies \eqref{eqn:doubling local} uniformly in time, then for any $q\in (1,2(1+\frac{2}{d}))$ there is a constant $C = C(d,q,\gamma,f_{\textnormal{in}},C_D)$ such that \begin{align*} \left ( \int_{I}\int_{\mathbb{R}^d} \phi^{q} {a^*}_{f,\gamma} \;dvdt \right )^{\frac{2}{q}} & \leq C\left ( \int_{I} \int_{\mathbb{R}^d} {a^*}_{f,\gamma} |\nabla \phi|^2 \;dvdt + \sup \limits_{I} \int_{\mathbb{R}^d} \phi^{2}\;dv \right ).
\end{align*} \end{thm} \subsection{Regarding the extra assumptions for the case $\gamma\leq -2$}\label{sec:discussion of extra assumptions} Let us make a few remarks about Assumption \ref{Assumption:Epsilon Poincare} and Assumption \ref{Assumption:Local Doubling}, and make the case that although they are rather unsatisfactory, the fact that they ``almost hold'' is encouraging, as it suggests that \eqref{eqn:Landau homogeneous} may enjoy regularity estimates in the regime of very soft potentials. Let us explain and elaborate on this point. In light of Theorems \ref{thm:sufficient conditions for the Poincare inequality}, \ref{thm:very soft potentials estimate} and \ref{thm:Coulomb case good estimate}, regularity for solutions in the case of very soft potentials is guaranteed for any time interval where one can show (i) Assumption \ref{Assumption:Local Doubling}, and (ii) that there are some $s>1$ and $\delta>0$ such that \begin{align}\label{eqn:Sufficient Condition for epsilon Poincare} |Q|^{\frac{1}{d}}\left ( \fint_{Q}h_{f,\gamma}^s\;dv\right )^{\frac{1}{2s}}\left (\fint_{Q} ({a^*}_{f,\gamma})^{-s} dv\right )^{\frac{1}{2s}} \leq \varepsilon, \end{align} for some small enough $\varepsilon$ and all cubes of side length $\leq \delta$. This condition is always true for moderately soft potentials $\gamma \in (-2,0]$ (see Proposition \ref{new_pro_global_space}). It is therefore encouraging that a quantity closely resembling \eqref{eqn:Sufficient Condition for epsilon Poincare} is bounded by a constant depending only on $d$ and $\gamma$, even for $\gamma \in [-d,-2]$, as shown in the next proposition.
\begin{prop}\label{prop:assumption 1 almost holds} For each $\gamma \in [-d,-2]$ there is a $C=C(d,\gamma)$ such that for any non-negative $f\in L^1(\mathbb{R}^d)$ and any cube $Q \subset \mathbb{R}^d$, \begin{align}\label{eqn:assumption 1 almost holds ratio} |Q|^{\frac{1}{d}}\left ( \fint_{Q}h_{f,\gamma}\;dv \right )^{\frac{1}{2}}\left ( \fint_{Q} (a_{f,\gamma})^{-1} dv\right )^{\frac{1}{2}} \leq C(d,\gamma). \end{align} \end{prop} \begin{proof} Let $Q$ be a cube and $v_0$ its center. If $\gamma=-d$, then \begin{align*} \fint_{Q}a_{f,\gamma}\;dv \approx \int_{\mathbb{R}^d}f(w)\max\{|Q|^{\frac{1}{d}},|v_0-w|\}^{2-d}\;dw \gtrsim |Q|^{\frac{2-d}{d}}\int_{Q}f(w)\;dw. \end{align*} Therefore (the implied constants depending only on $d$ and $\gamma$) \begin{align*} |Q|^{\frac{2}{d}}\fint_{Q}f\;dv & \lesssim \fint_{Q}a_{f,\gamma}\;dv. \end{align*} For the case $\gamma \in(-d,-2]$ the computations are somewhat similar; first note that \begin{align*} |Q|^{\frac{2}{d}}\fint_{Q}h\;dv & \approx |Q|^{\frac{2}{d}}\int_{\mathbb{R}^d} f(w) \max \{|Q|^{\frac{1}{d}},|v_0-w|\}^\gamma\;dw\\ & \lesssim |Q|^{\frac{2+\gamma}{d}}\int_{Q}f(w)\;dw+\int_{\mathbb{R}^d\setminus Q} f(w)|v_0-w|^{2+\gamma}\;dw. \end{align*} Next, for $\gamma \in (-d,-2]$ we have \begin{align*} \fint_{Q} a\;dv \gtrsim |Q|^{\frac{2+\gamma}{d}}\int_{Q}f(w)\;dw+\int_{\mathbb{R}^d\setminus Q} f(w)|v_0-w|^{2+\gamma}\;dw. \end{align*} It follows that \begin{align*} |Q|^{\frac{2}{d}} \fint_{Q}h_{f,\gamma}\;dv \lesssim \fint_{Q}a_{f,\gamma}\;dv.
\end{align*} We conclude that both for $\gamma=-d$ and $\gamma\in (-d,-2]$ we have \begin{align*} |Q|^{\frac{1}{d}}\left ( \fint_{Q}h_{f,\gamma}\;dv \right )^{\frac{1}{2}}\left ( \fint_{Q} a_{f,\gamma}^{-1} dv\right )^{\frac{1}{2}} \lesssim \left ( \fint_{Q}a_{f,\gamma}\;dv\right )^{\frac{1}{2}}\left ( \fint_{Q}(a_{f,\gamma})^{-1}\;dv \right )^{\frac{1}{2}}. \end{align*} Finally, as we shall see later (Proposition \ref{prop: Riesz potentials are A1 weights}), $a_{f,\gamma}$ is an $\mathcal{A}_1$ weight with a constant determined by $d$ and $\gamma$, which means that \begin{align*} \left ( \fint_{Q}a_{f,\gamma}\;dv\right )^{\frac{1}{2}}\left ( \fint_{Q}(a_{f,\gamma})^{-1}\;dv \right )^{\frac{1}{2}} \lesssim 1, \end{align*} with the implied constants being determined by $d$ and $\gamma$. \end{proof} The estimate in the last proposition yields a bound similar to \eqref{eqn:Sufficient Condition for epsilon Poincare}, except that ${a^*}$ is replaced with $a$ and there is no power $s>1$. Since $a_{f,\gamma}$ is of class $\mathcal{A}_p$ for any $p\geq 1$ (see Proposition \ref{prop: Riesz potentials are A1 weights}), it is possible to show that for some $s=s(d,\gamma)>1$ we have (again with implied constants determined by $d$ and $\gamma$) \begin{align*} \left(\fint_{Q} a_{f,\gamma}^{-s} \;dv \right)^{1/s} \lesssim \fint_{Q} a_{f,\gamma} \;dv,\;\; \left ( \fint_Q h^s\;dv \right )^{\frac{1}{s}}\lesssim \fint_Q h\;dv, \end{align*} the second inequality holding only in the case $\gamma\neq -d$. So the lack of a power $s$ in Proposition \ref{prop:assumption 1 almost holds} is not the main obstacle to overcoming the need for Assumption \ref{Assumption:Epsilon Poincare}.
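As a numerical illustration of Proposition \ref{prop:assumption 1 almost holds}, the following Python snippet (a hypothetical one-dimensional toy computation, not the $d\geq 3$ setting of the paper) takes a weight $a(v)=\sum_k m_k|v-w_k|^{-1/2}$, i.e. a convolution of point masses with a negative power, and checks that the product $(\fint_Q a)(\fint_Q a^{-1})$ stays of order $1$ on shrinking intervals centered at a singularity of $a$; the masses and locations are made up for illustration.

```python
# Hypothetical 1-D illustration: for a "point-mass" f, the weight
# a(v) = sum_k m_k |v - w_k|^{-1/2} is a convolution of f with a negative
# power.  We check numerically that (avg_Q a) * (avg_Q 1/a) remains O(1)
# uniformly over shrinking intervals Q centered at a singularity of a,
# which is the mechanism behind the proposition above.

MASSES = [(-0.6, 1.0), (0.6, 0.5)]   # (location w_k, mass m_k): illustrative data
H = 1e-9                             # regularization of the kernel at the origin

def a(v):
    return sum(m * max(abs(v - w), H) ** (-0.5) for w, m in MASSES)

def averages_product(center, length, n=4000):
    """(avg_Q a) * (avg_Q 1/a) over Q = [center - length/2, center + length/2],
    computed with the midpoint rule on n nodes."""
    dx = length / n
    lo = center - length / 2
    vals = [a(lo + (j + 0.5) * dx) for j in range(n)]
    avg = sum(vals) / n
    avg_inv = sum(1.0 / x for x in vals) / n
    return avg * avg_inv

# Intervals centered at the singularity w = 0.6, of decreasing length.
products = [averages_product(0.6, L) for L in (0.5, 0.1, 0.02)]
```

By the Cauchy-Schwarz inequality each product is at least $1$; the point of the check is that it also stays bounded (here below $1.6$) as the interval shrinks, instead of blowing up.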
On the other hand, if Assumption \ref{Assumption:Local Doubling} holds, we shall see later that for $\gamma\leq -2$, ${a^*}$ and $a$ are pointwise comparable on compact sets (Proposition \ref{prop:a vs a star gamma below -2}). So, in Proposition \ref{prop:assumption 1 almost holds} the fact that we have $a$ instead of ${a^*}$ is also not a serious shortcoming. This brings us to the crux of the problem for Assumption \ref{Assumption:Epsilon Poincare} and the limits of Proposition \ref{prop:assumption 1 almost holds}: the estimate in \eqref{eqn:assumption 1 almost holds ratio} is in some sense sharp for generic $f$, since there are $f$ for which this quantity remains of order $1$ even as the size of the cube $Q$ goes to zero. This is what underlies the idea mentioned in the Introduction that the strength of the nonlinearity for the Landau equation is of the same order as the diffusion, and therefore obtaining unconditional estimates for the Landau equation (if they exist at all) will require looking in a more precise way at the relationship between these two terms. Lastly, let us make two more points regarding Assumption \ref{Assumption:Local Doubling}. The first one is that it is worthwhile to investigate whether solutions to \eqref{eqn:Landau homogeneous} gain a local doubling property similar to that in Assumption \ref{Assumption:Local Doubling}, since this property is not dissimilar to the weak Harnack inequality for supersolutions to parabolic equations. The second point regarding the local doubling assumption is related to the isotropic equation \eqref{eqn:Krieger-Strain}, and we record it as a remark. \begin{rem} Exchanging ${a^*}_{f,\gamma}$ with $a_{f,\gamma}$, the estimates in Section \ref{section:A_p class and weighted inequalities} can be used to show inequalities akin to those in Theorems \ref{thm:sufficient conditions for the Poincare inequality} and \ref{thm:Inequalities Sobolev weight gamma below -2} \emph{without assuming $f$ is locally doubling}.
In particular, for every $\gamma \in (-d,-2]$ one always has the inequality \begin{align*} \int_{\mathbb{R}^d}|\phi|^2h_{f,\gamma}\;dv \leq C(d,\gamma)\int_{\mathbb{R}^d}a_{f,\gamma}|\nabla \phi|^2\;dv. \end{align*} The above inequality is still far from an $\varepsilon$-Poincar\'e inequality, and it is not clear that for a generic $f$ one can do better than some positive constant. In any case, we still have the corresponding weighted Sobolev inequality, that is, for $\gamma\in(-d,-2]$ there is a constant $C(d,\gamma)$ such that \begin{align*} \left ( \int_{\mathbb{R}^d} |\phi|^{\frac{2d}{d-2}} a_{f,\gamma}^{\frac{d}{d-2}}\;dv\right )^{\frac{d-2}{d}} \leq C(d,\gamma) \int_{\mathbb{R}^d}a_{f,\gamma}|\nabla \phi|^2\;dv. \end{align*} \end{rem} Now that we are done with the preliminaries and the statement of the main results, let us say a word about the location of the proofs in the rest of the paper: Theorem \ref{thm:sufficient conditions for the Poincare inequality} is proved in Section \ref{section:A_p class and weighted inequalities}, Theorem \ref{lem:Inequalities Sobolev weight aII} in Section \ref{Space time inequalities}, Theorem \ref{thm:Coulomb case good estimate} and Theorem \ref{thm:very soft potentials estimate} are proved at the end of Section \ref{section: Regularization in L infinity}, and, lastly, Theorem \ref{thm:main_1} is proved in Section \ref{section:Coulomb regularization}. \section{The $\mathcal{A}_p$ class and weighted inequalities for the Landau equation}\label{section:A_p class and weighted inequalities} We recall the necessary facts from the theory of Muckenhoupt weights. The interested reader should consult the book of Garcia-Cuerva and Rubio De Francia \cite[Chapter IV]{GarciaCuervaRubioDeFrancia1985} and \cite{Turesson} for a thorough discussion.
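Before the formal definitions, here is a small numerical sanity check of the $\mathcal{A}_2$ condition recalled below, for the model weight $w(v)=|v|^{-1/2}$ in dimension one; this is a hypothetical toy computation (the paper works in dimension $d\geq 3$), with all intervals chosen for illustration.

```python
# 1-D sanity check of the A_p condition (here p = 2): for the model weight
# w(v) = |v|^{-1/2}, the quantity (avg_B w) * (avg_B w^{-1/(p-1)})^{p-1}
# should be bounded uniformly over intervals B, including intervals that
# touch or contain the singularity at the origin.

def avg(func, lo, hi, n=20000):
    """Midpoint-rule average of func over [lo, hi]."""
    dx = (hi - lo) / n
    return sum(func(lo + (j + 0.5) * dx) for j in range(n)) / n

def a2_ratio(lo, hi):
    w = lambda v: abs(v) ** (-0.5)
    w_dual = lambda v: abs(v) ** (0.5)   # w^{-1/(p-1)} with p = 2
    return avg(w, lo, hi) * avg(w_dual, lo, hi)

# A sample of intervals: symmetric around 0, off-center, and away from 0.
intervals = [(-1.0, 1.0), (-0.01, 0.01), (0.0001, 0.01), (2.0, 3.0), (-0.5, 0.1)]
ratios = [a2_ratio(*iv) for iv in intervals]
```

For symmetric intervals around the singularity the exact value of the ratio is $4/3$, independently of the interval's length, which is the scale invariance one expects from a pure power weight.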
\begin{DEF} Given $p\in(1,\infty)$ and a ball $B_0$, a nonnegative locally integrable function $\omega:\mathbb{R}^d\to\mathbb{R}$ is said to be an {\bf{$\mathcal{A}_p(B_0)$-weight}} if \begin{align}\label{def_Ap} \sup \limits_{B \subset 8B_0}\left( \fint_{B} \omega(v) \;dv \right) \left(\fint_{B} \omega(v)^{-\frac{1}{p-1}} \;dv \right)^{p-1}<\infty. \end{align} The above supremum will be called the $\mathcal{A}_p$ constant of $\omega$. The class $\mathcal{A}_\infty(B_0)$ will refer to the union of all the classes $\mathcal{A}_p(B_0)$, $p>1$. Finally, the class $\mathcal{A}_1$ refers to weights such that for some $C>0$ one has, for any $B\subset 8B_0$ and almost every $v_0 \in B$, \begin{align*} \fint_{B}\omega(v) \;dv \leq C \omega(v_0). \end{align*} The infimum of the $C$'s for which the above holds will be called the $\mathcal{A}_1$ constant of $\omega$. \end{DEF} The class $\mathcal{A}_p$ appears in many important questions in analysis; here we are mostly interested in the properties of such weights relevant to weighted Sobolev inequalities. As we will see below, natural weights associated to the Landau equation will be in this class. \begin{DEF}A non-negative locally integrable function $w$ is said to satisfy the {\em reverse H\"older inequality} with exponent $m\ge 1 $ and constant $C$ if for any ball $B$ we have \begin{align*} \left (\fint_{B}w^m\;dv \right )^{\frac{1}{m}} \leq C\fint_{B}w\;dv. \end{align*} \end{DEF} As it turns out, $\mathcal{A}_p$ weights satisfy a reverse H\"older inequality. \begin{lem}\label{lem:A_p implies reverse Holder} \emph{($\mathcal{A}_p$ weights satisfy a reverse H\"older inequality.)} Let $\omega$ be an $\mathcal{A}_p$ weight, $1\leq p< +\infty$. There are $\varepsilon >0$ and $C>1$ --determined by $p$ and the $\mathcal{A}_p$ constant of $\omega$-- such that $\omega$ satisfies the reverse H\"older inequality with exponent $1+\varepsilon$ and constant $C$.
\end{lem} \begin{proof} See \cite[Chapter 4, Lemma 2.5]{GarciaCuervaRubioDeFrancia1985} for a proof. \end{proof} The next proposition shows that Riesz potentials are indeed $\mathcal{A}_1$ weights. \begin{prop}\label{prop: Riesz potentials are A1 weights} For $m \in [0,d)$ and $f\in L^1(\mathbb{R}^d)$, $f\geq 0$, the function $F:=f* |v|^{-m}$ is in the class $\mathcal{A}_1$ with an $\mathcal{A}_1$ constant that only depends on $m$ and $\|f\|_{L^1(\mathbb{R}^d)}$. Consequently, the function \begin{align*} h_{f,\gamma}(v)= \int f(w)|v-w|^\gamma \;dw, \end{align*} belongs to the class $\mathcal{A}_1$ for each $-d<\gamma \le 0$, $d\ge3$, and \begin{align*} a_{f,\gamma}(v)= \int f(w)|v-w|^{2+\gamma}\; dw, \end{align*} belongs to $\mathcal{A}_1$ for each $-d\le \gamma \le -2$, $d\ge3$. \end{prop} \begin{proof} Fix $m \in [0,d)$ and $w \in \mathbb{R}^d$; we analyze the function \begin{align*} v \to |v-w|^{-m},\;\;v\in\mathbb{R}^d. \end{align*} Lemma \ref{lem:averages for powers of v} and Remark \ref{rem:averages for powers of v} imply that \begin{align*} \fint_{B_r(v_0)}|v-w|^{-m}\;dv \leq C_m|v_1-w|^{-m},\;\;\forall\; v_1 \in B_r(v_0),\;w\in\mathbb{R}^d. \end{align*} Thus, $|v-w|^{-m}$ belongs to the class $\mathcal{A}_1$. Since the class $\mathcal{A}_1$ is convex, it follows that the convolution of $|v|^{-m}$ with any nonnegative $L^1$ function is also in the class $\mathcal{A}_1$. Indeed, let $f\in L^1(\mathbb{R}^d)$, $f\geq 0$ and consider the function \begin{align*} F(v) = \int_{\mathbb{R}^d}f(w)|v-w|^{-m}\;dw. \end{align*} It is evident that $F\in L^1_{\textnormal{loc}}(\mathbb{R}^d)$ by Fubini's theorem. Moreover, if $v_0\in\mathbb{R}^d$ and $r>0$, \begin{align*} \fint_{B_r(v_0)} F(v)\;dv & = \int_{\mathbb{R}^d}f(w)\fint_{B_r(v_0)}|v-w|^{-m}\;dvdw\\ & \leq C_m\int_{\mathbb{R}^d}f(w)|v_1-w|^{-m}\;dw = C_m F(v_1),\;\;\forall\;v_1\in B_r(v_0).
\end{align*} \end{proof} Proposition \ref{prop: Riesz potentials are A1 weights} says that the function $a_{f,\gamma}$ (respectively $h_{f,\gamma}$) for $\gamma \in [-d,-2]$ (resp. for $\gamma \in (-d,0]$) is an $\mathcal{A}_1$-weight and satisfies a reverse H\"older inequality for some exponent $q=1+\varepsilon$ (Lemma \ref{lem:A_p implies reverse Holder}). However, we can obtain a better, explicit exponent for which these functions satisfy a reverse H\"older inequality. The following result illustrates what can be expected for the special weights we are dealing with. \begin{lem}\label{lem:a is reverse Holder with a nice constant} Let {{$\gamma \in [-d,-2]$}} and $m \in (0,\frac{d}{|2+\gamma|})$. There is a constant $C_{m,\gamma}$, depending only on $d$, $m$ and $\gamma$, such that $a_{f,\gamma}$ satisfies the reverse H\"older inequality with exponent $m$ and constant $C_{m,\gamma}$. In other words, for any ball $B$ (or cube $Q$) we have \begin{align*} \left ( \fint_{B} a_{f,\gamma}^m \;dv \right )^{\frac{1}{m}} \leq C_{m,\gamma} \fint_{B}a_{f,\gamma}\;dv. \end{align*} \end{lem} \begin{proof}[Proof of Lemma \ref{lem:a is reverse Holder with a nice constant}] The lemma follows immediately by applying Remark \ref{rem:reverse Holder for inverse powers of v} and Proposition \ref{prop:reverse Holder is a convex condition} (proved below) with $\Phi=|v|^{2+\gamma}$ and $g = f$. \end{proof} \begin{prop}\label{prop:reverse Holder is a convex condition} Let $g\in L^1(\mathbb{R}^d)$ be non-negative. Let $\Phi(x)$ be a non-negative, locally integrable function that satisfies the reverse H\"older inequality with exponent $m>1$ and constant $C>0$. Then, the convolution $g* \Phi$ also satisfies the reverse H\"older inequality with the same exponent $m$ and the same constant $C$. That is, for any ball $B$ we have \begin{align*} \left (\fint_{B} (g * \Phi (v))^m \;dv \right )^{\frac{1}{m}} \leq C \fint_{B} (g *\Phi)(v)\;dv.
\end{align*} \end{prop} \begin{proof} It is clear that \begin{align*} \left (\fint_{B} (g * \Phi (v))^m \;dv \right )^{\frac{1}{m}} =\left ( \fint_B \left | \int_{\mathbb{R}^d} \Phi(v-w) g(w)\;dw \right |^mdv \right )^{\frac{1}{m}}. \end{align*} Minkowski's integral inequality says that \begin{align*} \left ( \fint_B \left | \int_{\mathbb{R}^d} \Phi(v-w) g(w)\;dw \right |^mdv \right )^{\frac{1}{m}} \leq \int_{\mathbb{R}^d} \left (\fint_B |\Phi(v-w)|^m\;dv \right )^{\frac{1}{m}}g(w)\;dw. \end{align*} By the reverse H\"older assumption for $\Phi$, we have for every $w\in \mathbb{R}^d$, \begin{align*} \left (\fint_B |\Phi(v-w)|^m\;dv \right )^{\frac{1}{m}} \leq C\fint_{B}|\Phi(v-w)|\;dv = C\fint_{B}\Phi(v-w)\;dv. \end{align*} Lastly, we use the positivity of $g$ and apply Fubini's theorem, which yields \begin{align*} \left (\fint_{B} (g * \Phi (v))^m \;dv \right )^{\frac{1}{m}} & \leq C\int_{\mathbb{R}^d} g(w)\fint_{B}\Phi(v-w)\;dvdw = C\fint_B (g* \Phi)(v)\;dv. \end{align*} \end{proof} \begin{rem}\label{rem:reverse Holder for inverse powers of v} As shown in the Appendix, $|v|^{-q}$ is in the class $\mathcal{A}_1$ for all $0<q<d$. In particular, if $m\ge 0$ and $p \ge 0$ are such that $0\le mp<d$, we have \begin{align*} \fint_{B}|w|^{-mp}\;dw \leq C_{mp}|v|^{-mp},\;\;\forall\;v\in B. \end{align*} This may be rewritten as \begin{align*} \left ( \fint_{B}|w|^{-mp}\;dw \right )^{\frac{1}{m}}\leq (C_{mp})^{\frac{1}{m}}|v|^{-p},\;\;\forall\;v\in B. \end{align*} Taking the average for $v\in B$, it follows that \begin{align*} \left (\fint_{B}|w|^{-mp}\;dw \right )^{\frac{1}{m}}\leq (C_{mp})^{\frac{1}{m}}\fint_{B}|w|^{-p}\;dw.
\end{align*} \end{rem} \subsection{Averages of ${a^*}_{f,\gamma}$ and $a_{f,\gamma}$} Recall the definitions of ${a^*}_{f,\gamma}$ and $a_{f,\gamma}$, \begin{align*} {a^*}_{f,\gamma} &= \inf \limits_{e\in\mathbb{S}^{d-1}} C_\gamma \int_{\mathbb{R}^d} |v-w|^{2+\gamma}\langle \Pi(v-w)e,e\rangle f(w)\;dw,\\ a_{f,\gamma} &= c(d,\gamma)\int_{\mathbb{R}^d}f(w)|v-w|^{2+\gamma}\;dw. \end{align*} Since $|\langle \Pi(v-w)e,e\rangle|\leq 1$, it is clear that ${a^*}_{f,\gamma} (v)\le a_{f,\gamma}(v)$ for each $v\in \mathbb{R}^d$. We briefly recall bounds on the functions $a_{f,\gamma}$ and $A_{f,\gamma}$ that will be used later. These estimates are now standard (see Desvillettes and Villani \cite{DesVil2000a} or Silvestre \cite[Lemma 3.2]{Silvestre2015}), but we include a proof for the sake of completeness. \begin{prop}\label{prop:a vs a star gamma larger than -2} If $\gamma \in [-2,0]$ there exist constants that only depend on $d$, $f_{\textnormal{in}}$ and $\gamma$ such that for any $f\geq 0$ and $v\in\mathbb{R}^d$ we have \begin{align*} c_{f_{\textnormal{in}},d,\gamma}\langle v\rangle^{2+\gamma} & \leq a_{f,\gamma}(v) \;\leq C_{f_{\textnormal{in}},d,\gamma}\langle v\rangle^{2+\gamma},\\ c_{f_{\textnormal{in}},d,\gamma}\langle v\rangle^\gamma & \leq {a^*}_{f,\gamma}(v) \leq C_{f_{\textnormal{in}},d,\gamma}\langle v\rangle^\gamma. \end{align*} Moreover, for any $v\neq 0$, we have $(A_{f,\gamma}(v)e,e) \leq C{a^*}_{f,\gamma}(v)$ for $e=v/|v|$. \end{prop} \begin{proof} We drop the subindices $f,\gamma$. The lower bounds were already stated in Lemma \ref{lem: A lower bound in terms of conserved quantities}, and therefore here we only need to prove the upper bounds for $a$ and ${a^*}$. Since $2+\gamma\geq 0$, for any $v,w\in\mathbb{R}^d$ \begin{align*} |v-w|^{2+\gamma} \leq \left ( |v|+|w| \right )^{2+\gamma} \leq 4 \left ( |v|^{2+\gamma}+|w|^{2+\gamma}\right ).
\end{align*} Using conservation of mass and energy, and that $2+\gamma\leq 2$, we have \begin{align*} a_{f,\gamma}(v) = C(d,\gamma)\int_{\mathbb{R}^d} f(w)|v-w|^{2+\gamma}\;dw & \leq 4C(d,\gamma)\int_{\mathbb{R}^d}f(w)\left( |v|^{2+\gamma}+|w|^{2+\gamma} \right )\;dw\\ & = 4C(d,\gamma)|v|^{2+\gamma}+4C(d,\gamma)\int_{\mathbb{R}^d}f(w)|w|^{2+\gamma}\;dw\\ & \le C(f_{\textnormal{in}},d,\gamma) \langle v\rangle^{2+\gamma}. \end{align*} In conclusion, for some constant $C(f_{\textnormal{in}},d,\gamma)$ \begin{align*} a_{f,\gamma}(v) \leq C(f_{\textnormal{in}},d,\gamma)\langle v\rangle^{2+\gamma}. \end{align*} It remains to prove the upper bound for ${a^*}$. Let $v\in\mathbb{R}^d$, $|v|\geq 1$ and $e = {v}/|v|$. We estimate the ``tail'' of the integral defining $a^*_{f,\gamma,e}$, noting that \begin{align*} & \int_{B_{\langle v\rangle/2}(v)^c}f(w)|v-w|^{2+\gamma}(\Pi(v-w)e,e)\;dw\\ & = \int_{B_{\langle v\rangle/2}(v)^c}f(w)|v-w|^{\gamma}(|v-w|^2-(v-w,e)^2 )\;dw. \end{align*} Since $e$ and $v$ are parallel, it follows that the difference of squares in the last integral is just the square of the distance from $w$ to the line spanned by $e$, therefore \begin{align*} |v-w|^2-(v-w,e)^2 \leq |w|^2,\;\;\forall\;w\in\mathbb{R}^d. \end{align*} With this inequality and using that $\gamma\leq 0$, we conclude that \begin{align*} \int_{B_{\langle v\rangle/2}(v)^c}f(w)|v-w|^{2+\gamma}(\Pi(v-w)e,e)\;dw \leq 2^{-\gamma}\langle v\rangle^\gamma \int_{B_{\langle v\rangle/2}(v)^c}f(w)|w|^2\;dw. \end{align*} On the other hand, since $2+\gamma\geq 0$, we have \begin{align*} \int_{B_{\langle v\rangle/2}(v)}f(w)|v-w|^{2+\gamma}(\Pi(v-w)e,e)\;dw & \leq \langle v\rangle^{2+\gamma}\int_{B_{\langle v\rangle/2}(v)}f(w)\;dw\\ & \leq C(\gamma) \langle v\rangle^\gamma\int_{B_{\langle v\rangle/2}(v)}f(w)\langle w\rangle^2\;dw.
\end{align*} This shows that \begin{align*} (A_{f,\gamma}(v)e,e)\leq C(d,\gamma)\langle v\rangle^\gamma,\;\;e=\frac{v}{|v|}, \end{align*} from where it follows that ${a^*}_{f,\gamma}(v)\leq C\langle v\rangle^\gamma$ and $(A_{f,\gamma}(v)e,e)\leq C{a^*}_{f,\gamma}(v)$ for all $v\neq 0$. \end{proof} \begin{prop}\label{prop:a vs a star gamma below -2} Let $\gamma \in [-d,-2)$. Then, under Assumption \ref{Assumption:Local Doubling}, there is a $C$ such that \begin{align*} a_{f,\gamma}(v)\leq C\langle v\rangle^{\max\{-\gamma-2,2\}}{a^*}_{f,\gamma}(v),\;\; C=C(f_{\textnormal{in}},d,\gamma,C_D). \end{align*} Moreover, if one assumes $f$ has a uniformly bounded moment of order $-\gamma-2$, we have \begin{align*} a_{f,\gamma}(v)\leq C\langle v\rangle^{2}{a^*}_{f,\gamma}(v), \end{align*} with the $C$ this time depending also on the bound on the $-\gamma-2$ moment of $f$. \end{prop} \begin{proof} Let $r>0$; then we may write \begin{align*} a_{f,\gamma}(v) = C(d,\gamma)\int_{B_r(v)}f(w)|v-w|^{2+\gamma}\;dw+C(d,\gamma)\int_{\mathbb{R}^d\setminus B_r(v)}f(w)|v-w|^{2+\gamma}\;dw. \end{align*} Choosing $r = \frac{1}{2}\langle v\rangle$ and using that $2+\gamma\leq 0$ it follows that \begin{align*} a_{f,\gamma}(v) \leq C\int_{B_{\langle v\rangle/2}(v)}f(w)|v-w|^{2+\gamma}\;dw+C\langle v\rangle^{2+\gamma}. \end{align*} Again, since $2+\gamma\leq 0$, we have $|v-w|^{2+\gamma}\leq 1$ in $\mathbb{R}^d\setminus B_1(v)$, therefore \begin{align*} \int_{B_{\langle v\rangle/2}(v) \setminus B_1(v)}f(w)|v-w|^{2+\gamma}\;dw \leq \int_{B_{\langle v\rangle/2}(v)}f(w)\;dw . \end{align*} Now, since $|w|\gtrsim \langle v\rangle$ for $w \in B_{\langle v\rangle/2}(v)\setminus B_1(v)$, we have \begin{align*} \int_{B_{\langle v\rangle/2}(v) \setminus B_1(v)}f(w)|v-w|^{2+\gamma}\;dw & \leq C\langle v\rangle^{-2}\int_{\mathbb{R}^d}f(w)|w|^{2}\;dw \leq C\langle v\rangle^{-2-\gamma}{a^*}(v).
\end{align*} By the local doubling property of $f$, for any $e\in\mathbb{S}^{d-1}$ we have \begin{align*} \int_{B_1(v)}f(w)|v-w|^{2+\gamma}\;dw & \leq C\int_{B_1(v)}f(w)|v-w|^{2+\gamma}(\Pi(v-w)e,e)\;dw \leq Ca^*_{f,\gamma,e}(v). \end{align*} Taking the infimum over $e$, and using the estimates above, \begin{align*} a_{f,\gamma}(v) \leq C\langle v\rangle^2 {a^*}_{f,\gamma}(v)+C\langle v\rangle^{-2-\gamma}{a^*}_{f,\gamma}(v) \leq C\langle v\rangle^{\max\{2,-\gamma-2\} }{a^*}_{f,\gamma}(v). \end{align*} Lastly, if $f$ has a finite $-\gamma-2$ moment, then above one has the bound \begin{align*} \int_{B_{\langle v\rangle/2}(v) \setminus B_1(v)}f(w)|v-w|^{2+\gamma}\;dw & \leq \langle v\rangle^{2+\gamma}\int_{\mathbb{R}^d}f(w)|w|^{-\gamma-2}\;dw \\ & \leq C\langle v\rangle^2 {a^*}_{f,\gamma}(v), \end{align*} from where the respective estimate follows. \end{proof} \begin{rem} Observe that if $d=3$, then $\max\{-\gamma-2,2\} =2$ for all $\gamma \geq -4$, so the above proposition yields the pointwise relation $a_{f,\gamma}\leq C\langle v\rangle^2 {a^*}$ for the case of the Coulomb potential when $d=3$, under Assumption \ref{Assumption:Local Doubling}. \end{rem} We now show a result that will be used later to prove Theorem \ref{thm:sufficient conditions for the Poincare inequality}. \begin{prop}\label{new_pro_global_space} If $\gamma \in [-2,0]$ there is $s=s(d,\gamma) >1$ such that for any cube $Q \subset \mathbb{R}^d$ with sides of length $\leq 1$, we have \begin{align*} |Q|^{\frac{1}{d}} \left( \fint_{Q}{h_{f,\gamma}}^{s}\;dv\right)^{\frac{1}{2s}}\left(\fint_{Q}(a_{f,\gamma}^*)^{-s}\;dv\right)^{\frac{1}{2s}} \le C(f_{\textnormal{in}},d,\gamma) |Q|^{\frac{2+\gamma}{2d}}. \end{align*} \end{prop} \begin{proof} This estimate will follow by a kind of interpolation, estimating an integral respectively outside and inside a ball, using in each case either that $\gamma> -2$ or the mass and second moment of $f$.
Fix a cube $Q$ with center $v_0 \in \mathbb{R}^d$ and sides of length $r$ with $2r\leq 1 $. By Proposition \ref{prop:a vs a star gamma larger than -2}, for any $s>1$ we have \begin{align*} \fint_{Q}{(a_{f,\gamma}^*)^{-s}}\;dv \leq C(f_{\textnormal{in}},d,\gamma,s)\langle v_0\rangle^{-s\gamma}. \end{align*} Above we have used that the diameter of $Q$ is at most $1$, along with the lower estimate for $a^*_{f,\gamma}$ from Lemma \ref{lem: A lower bound in terms of conserved quantities}. Now, let us bound the integral of $h_{f,\gamma}$. To do this, we make use of the identity \begin{align*} \fint_{Q}h_{f,\gamma}\;dv = C(d,\gamma)\int_{\mathbb{R}^d}\fint_{Q}f(w)|v-w|^\gamma\;dvdw, \end{align*} and of the estimate \begin{align*} \fint_{Q} |v-w|^\gamma\;dv \leq C(d,\gamma) \left ( \max\{r,|v_0-w|\} \right )^\gamma, \end{align*} (see Lemma \ref{lem:averages for powers of v}). We also use the fact that $h_{f,\gamma}$ is an $\mathcal{A}_1$ weight and hence satisfies a reverse H\"older inequality \begin{align*} \left( \fint_{Q}h_{f,\gamma}^s\;dv\right)^{1/s} \le C \fint_{Q}h_{f,\gamma}\;dv, \end{align*} for some $s=s(d,\gamma)>1$. Next, we split $\mathbb{R}^d$ as the union of the ball $B_{|v_0|/2}(v_0)$ and its complement. Using conservation of mass, the second moment of $f$, and that $-\gamma\leq 2$, it follows that \begin{align*} \int_{B_{|v_0|/2}(v_0)} f(w) \int_Q |v-w|^{\gamma}\;dv\;dw & \leq |Q|^{1+\frac{\gamma}{d}}\langle v_0 \rangle^{\gamma} \int_{\mathbb{R}^d}f(w)\langle w\rangle^{-\gamma}\;dw\\ & \leq C(d,\gamma,f_{\textnormal{in}})|Q|^{1+\frac{\gamma}{d}} \langle v_0\rangle^{\gamma}. \end{align*} For the second integral we distinguish two cases depending on whether $|v_0|$ is greater or less than one.
If $|v_0| \ge 1$ we have \begin{align*} \int_{B_{|v_0|/2}^c(v_0)} f(w) \int_Q |v-w|^{\gamma}\;dv\;dw &\le C(d,\gamma) |Q| \int_{B_{|v_0|/2}^c(v_0)}f(w)|v_0-w|^{\gamma}\;dw \\ & \leq C(d,\gamma)|Q| | v_0|^{\gamma} \int_{B_{|v_0|/2}(v_0)^c}f(w)\;dw\\ & \leq C(f_{\textnormal{in}},d,\gamma)\langle v_0\rangle^{\gamma}|Q|. \end{align*} On the other hand, if $|v_0| < 1$ then $1\leq \langle v_0\rangle \le 2$ and \begin{align*} \int_{B_{|v_0|/2}^c(v_0)} f(w) \int_Q |v-w|^{\gamma}\;dv\;dw & \leq r^\gamma |Q| \int_{B_{|v_0|/2}(v_0)^c} f(w)\;dw \leq C(f_{\textnormal{in}},d,\gamma) |Q|^{1+\frac{\gamma}{d}} \langle v_0\rangle^{\gamma}. \end{align*} Combining these inequalities, we see that \begin{align}\label{hwithr=1} \fint_{Q}h\;dv \le C(f_{\textnormal{in}},d,\gamma)\langle v_0\rangle^{\gamma} |Q|^{\frac{\gamma}{d}} \end{align} and we conclude that \begin{align*} |Q|^{\frac{2}{d}} \left( \fint_{Q}{h_{f,\gamma}}^s\;dv\right)^{\frac{1}{s}}\left(\fint_{Q}(a_{f,\gamma}^*)^{-s}\;dv\right)^{\frac{1}{s}} \leq C(f_{\textnormal{in}},d,\gamma) |Q|^{\frac{2+\gamma}{d}}. \end{align*} \end{proof} The following lemma, dealing with the cases where $\gamma \in [-d,-2)$, is one of the places where Assumption \ref{Assumption:Local Doubling} is key. It guarantees that ${a^*}$ is (locally) an $\mathcal{A}_1$ weight with a constant depending on the constant $C_D$ from Assumption \ref{Assumption:Local Doubling}. \begin{lem}\label{lem:a star is A1 if f is doubling} Let $\gamma \in [-d,-2)$ and assume $f$ satisfies Assumption \ref{Assumption:Local Doubling}. Then, for any $v_0 \in \mathbb{R}^d$ and $r\in(0,1/2)$ we have \begin{align*} \fint_{B_r(v_0)}a^*_{f,\gamma}(v) \;dv\leq Ca^*_{f,\gamma}(v_1), \;\;\forall\;v_1 \in B_r(v_0), \end{align*} with a constant $C$ determined by $\gamma$, $d$, and the constant $C_D$ in Assumption \ref{Assumption:Local Doubling}. \end{lem} \begin{proof} Fix $e\in\mathbb{S}^{d-1}$, $v_0 \in \mathbb{R}^d$ and $r\in (0,1)$.
We are first going to prove that \begin{align}\label{eqn:weight a_e A1c center} \fint_{B_r(v_0)}a_e^*(v)\;dv\leq Ca_e^*(v_0). \end{align} We define the set \begin{align*} S(v_0,r,e) := \{ w \in\mathbb{R}^d \mid \textnormal{dist}(v_0-w,[e]) \geq 2r\}. \end{align*} We make use of the inequality \begin{align*} \fint_{B_r(v_0)}a^*_e(v)\;dv & \leq Ca^*_e(v_0)+r^{2+\gamma}\int_{B_{2r}(v_0)}f(w)\;dw\\ & \;\;\;\;+r^2\int_{S(v_0,r,e)^c \cap B_{2r}(v_0)^c}f(w)|v_0-w|^\gamma\;dw, \end{align*} proved in Proposition \ref{prop:weight a_e is almost A1}. Next, let $K_e(v_0) = \{ w\mid (\Pi(v_0-w)e,e)\geq \tfrac{1}{2} \}$; applying (repeatedly) Assumption \ref{Assumption:Local Doubling} to a cube contained in $B_{2r}(v_0) \cap K_e(v_0)$ it can be seen that \begin{align*} \int_{B_{2r}(v_0)}f(w)\;dw \leq C\int_{B_{2r}(v_0) \cap K_e(v_0) }f(w)\;dw,\;\; C = C(C_D). \end{align*} Then, since $2+\gamma\leq 0$, it follows that \begin{align*} r^{2+\gamma}\int_{B_{2r}(v_0)}f(w)\;dw \leq C\int_{B_{2r}(v_0)}f(w)|v_0-w|^{2+\gamma}(\Pi(v_0-w)e,e)\;dw, \end{align*} and thus \begin{align*} r^{2+\gamma}\int_{B_{2r}(v_0)}f(w)\;dw \leq Ca_e^*(v_0),\;C=C(C_D). \end{align*} An analogous argument using the local doubling property also yields \begin{align*} r^2\int_{S(v_0,r,e)^c \cap B_{2r}(v_0)^c}f(w)|v_0-w|^\gamma\;dw \leq Ca^*_e(v_0),\;\;C=C(C_D). \end{align*} These last two estimates prove \eqref{eqn:weight a_e A1c center}. Taking the minimum over $e\in\mathbb{S}^{d-1}$ on both sides of \eqref{eqn:weight a_e A1c center}, it is immediate that \begin{align*} \fint_{B_r(v_0)}{a^*}(v)\;dv\leq C{a^*}(v_0). \end{align*} To finish the proof, assume now that $r\in (0,1/2)$ and note that if $v_1 \in B_{r}(v_0)$ then $B_{r}(v_0) \subset B_{2r}(v_1)$; thus, we can apply the above estimate in the ball $B_{2r}(v_1)$, so that \begin{align*} \fint_{B_r(v_0)}{a^*}(v)\;dv \leq 2^d \fint_{B_{2r}(v_1)}{a^*}(v)\;dv \leq C2^d{a^*}(v_1).
\end{align*} \end{proof} The following lemma uses the estimates on $a^*_{f,\gamma}$ and the kernel averages in Proposition \ref{prop:kernel averages} to show that $a^*_{f,\gamma}$ satisfies a certain condition that will be used later in Proposition \ref{prop:LpLp via interpolation with weight} for a weighted Sobolev inequality. \begin{lem} \label{this_is_new_cond_Sob} Let $\gamma \in [-d,-2)$, $s>1$, and $m \in (0, \tfrac{d}{|2+\gamma|})$. If Assumption \ref{Assumption:Local Doubling} holds, then for any $t\in (0,T)$, $v_0 \in \mathbb{R}^d$, and $r\in(0,1/2)$, we have \begin{align*} \left ( \fint_{Q}{(a^*_{f,\gamma})}^m\;dv\right )^{\frac{1}{m}} \left ( \fint_{Q}(a^*_{f,\gamma})^{-s}\;dv \right )^{\frac{1}{s}} \leq C. \end{align*} Here $Q=Q_{r}(v_0)$, $C=C(d,\gamma,m,f_{\textnormal{in}},C_D)$, where $C_D$ is the constant from Assumption \ref{Assumption:Local Doubling}. \end{lem} \begin{proof} By Minkowski's inequality, if $K_e(v) := |v|^{2+\gamma} \left( \Pi(v)e,e\right)$ ($e\in\mathbb{S}^{d-1}$ fixed), we have \begin{align*} \left ( \fint_{Q}(a^*_{e})^m\;dv\right )^{\frac{1}{m}} \leq \int_{\mathbb{R}^d}f(w)\left ( \fint_{Q}K_e(v-w)^m\;dv\right )^{\frac{1}{m}}dw. \end{align*} From this inequality and Proposition \ref{prop:kernel averages}, we obtain for any $v_1 \in Q$, \begin{align*} \left ( \fint_{Q}(a^*_e)^m\;dv \right )^{\frac{1}{m}}& \leq C\int_{S(v_1,r,e)^c}f(w)|v_1-w|^{2+\gamma}(\Pi(v_1-w)e,e)\;dw\\ & \;\;\;\;+Cr^2\int_{S(v_1,r,e)}f(w)\max\{2r,|v_1-w|\}^{\gamma }\;dw. \end{align*} We observe that the first term is controlled by $a^*_e(v_1)$ (cf. Proposition \ref{prop:weight a_e is almost A1}), therefore \begin{align*} \left ( \fint_{Q}(a^*_e)^m\;dv\right )^{\frac{1}{m}} & \leq Ca^*_e(v_1)+r^2\int_{S(v_1,r,e)}f(w)\max\{2r,|v_1-w|\}^\gamma\;dw,\;\;\forall\;v_1\in Q.
\end{align*} Next, arguing using Assumption \ref{Assumption:Local Doubling} just as in Lemma \ref{lem:a star is A1 if f is doubling}, and taking the minimum with respect to $e$, we conclude that \begin{align*} \left ( \fint_{Q}({a^*})^m\;dv\right )^{\frac{1}{m}} & \leq C{a^*}(v_1),\;\;\forall\;v_1\in Q, \end{align*} where $C=C(d,\gamma,m,f_{\textnormal{in}},C_D)$. This proves the lemma, since for any $s>1$ we have \begin{align*} \left ( \fint_{Q}({a^*})^m\;dv\right )^{\frac{1}{m}} \left ( \fint_{Q}({a^*})^{-s}\;dv \right )^{\frac{1}{s}} \leq C\left ( \inf \limits_{Q} {a^*} \right )\left ( \fint_{Q}({a^*})^{-s}\;dv \right )^{\frac{1}{s}}\leq C. \end{align*} \end{proof} \subsection{Global weighted inequalities} In this section we make use of known results on Sobolev-type inequalities with weights, namely those of the form \begin{align}\label{eqn:weighted Sobolev} \left( \int_{Q}|\phi|^qw_1\;dv\right)^{1/q}\leq C\left( \int_{Q} |\nabla \phi|^p\;w_2dv\right)^{1/p}, \end{align} where $q\geq p$, $C$ is independent of $\phi$, and $\phi$ either has compact support in $Q$ or its average over $Q$ vanishes. The literature on inequalities of this form is much too vast to discuss properly here, so we shall limit ourselves to highlighting the work of Chanillo and Wheeden \cite{ChanilloWheeden1985,ChanilloWheeden1985II} and the work of Sawyer and Wheeden \cite{SawWhe1992}, since these are the results that we invoke directly here. These results say --roughly speaking-- that \eqref{eqn:weighted Sobolev} is guaranteed to hold if certain averages involving $w_1$ and $w_2$ are bounded over all cubes contained in a cube slightly larger than $Q$.
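To get a feel for the averages in question, the following hypothetical one-dimensional computation evaluates a $\sigma_{q,2,s}$-type quantity (defined formally below) for the model weights $w_1=w_2=|v|^{-1/4}$ with $q=s=2$ and $d=1$, and checks that it shrinks with the side length of the cube; this is the mechanism exploited for moderately soft potentials, and all weights and intervals here are chosen purely for illustration.

```python
# Hypothetical 1-D illustration of the quantity sigma_{q,2,s} defined next,
# with the model weights w1 = w2 = |v|^{-1/4}, q = 2, s = 2, d = 1.  For a
# power weight the two averages balance each other, so sigma scales like the
# side length |Q| and the supremum over cubes of small side length is small.

def avg(func, lo, hi, n=20000):
    """Midpoint-rule average of func over [lo, hi]."""
    dx = (hi - lo) / n
    return sum(func(lo + (j + 0.5) * dx) for j in range(n)) / n

def sigma(lo, hi, q=2.0, s=2.0, d=1):
    w1 = lambda v: abs(v) ** (-0.25)
    w2 = w1
    size = hi - lo                       # |Q| in d = 1
    return (size ** (1.0 / d - 0.5 + 1.0 / q)
            * avg(lambda v: w1(v) ** s, lo, hi) ** (1.0 / (q * s))
            * avg(lambda v: w2(v) ** (-s), lo, hi) ** (1.0 / (2.0 * s)))

big, small = sigma(-0.5, 0.5), sigma(-0.01, 0.01)
```

Here `small` is smaller than `big` by roughly the ratio of the interval lengths, consistent with the linear-in-$|Q|$ scaling computed above.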
For $p=2$ and $2\leq q<\infty$, $s>1$, a cube $Q$, and weights $w_1$ and $w_2$, we define \begin{align*} \sigma_{q,2,s}(Q,w_1,w_2) := |Q|^{\frac{1}{d}-\frac{1}{2}+\frac{1}{q}}\left (\fint_{Q}w_1^s\;dv \right )^{\frac{1}{q s}}\left (\fint_{Q}w_2^{-s}\;dv \right )^{\frac{1}{2 s}}. \end{align*} If $p=q=2$, $w_1$ is in the class $\mathcal{A}_p$ for \emph{some} $p$, and $w_2$ satisfies a reverse H\"older inequality\footnote{so, $w_2$ also being an $\mathcal{A}_p$ weight would suffice, per Lemma \ref{lem:A_p implies reverse Holder}}, then an alternative to $\sigma$ is (see \cite[Theorem 1.2]{ChanilloWheeden1985}) \begin{align*} \sigma_{2,2}(Q,w_1,w_2):= C|Q|^{\frac{2}{d}} \frac{\fint_{Q}w_1\;dv}{\fint_{Q}w_2\;dv}. \end{align*} The theorem we will be using is stated below. For its proof the reader is referred to \cite[Theorem 1]{SawWhe1992}, \cite[Theorem 1.5]{ChanilloWheeden1985II}, and \cite[Theorem 1.2]{ChanilloWheeden1985}. \begin{thm}\label{thm:local weighted Sobolev inequalities} Consider two weights $w_1$, $w_2$, a cube $Q$, $2\leq q<\infty$ and $s>1$. Then, for any Lipschitz function $\phi$ which has compact support in the interior of the cube $Q$ or is such that $\int_Q \phi\;dv = 0$, we have \begin{align}\label{eqn:Local Weighted Sobolev Inequality General Weights} \left( \int_{Q}|\phi|^qw_1\;dv\right)^{1/q}\leq \mathcal{C}_{q,2,s}(Q,w_1,w_2)\left( \int_{Q}|\nabla \phi|^2\; w_2dv\right)^{1/2}, \end{align} where for some constant $C(d,s,q)$, \begin{align*} \mathcal{C}_{q,2,s}(Q,w_1,w_2) := C(d,s,q)\sup \limits_{Q' \subset 8Q} \sigma_{q,2,s}(Q',w_1,w_2). \end{align*} \end{thm} In the next lemma we make use of Theorem \ref{thm:local weighted Sobolev inequalities} to obtain an inequality over $\mathbb{R}^d$.
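As a quick sanity check on the exponents in $\sigma_{q,2,s}$, consider again the unweighted case: for $w_1 = w_2 \equiv 1$ one simply gets \begin{align*} \sigma_{q,2,s}(Q,1,1) = |Q|^{\frac{1}{d}-\frac{1}{2}+\frac{1}{q}}, \end{align*} and the exponent $\frac{1}{d}-\frac{1}{2}+\frac{1}{q}$ is nonnegative precisely when $q \leq \frac{2d}{d-2}$, vanishing at the critical Sobolev exponent. Thus, for constant weights, the constant $\mathcal{C}_{q,2,s}$ in Theorem \ref{thm:local weighted Sobolev inequalities} stays bounded over small cubes exactly in the classical Sobolev range.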
\begin{lem}\label{lem:Global weighted Sobolev from local ones} For $\ell\in(0,1)$, $q\geq 2$ and some $s>1$, define \begin{align*} \mu(\ell) := \sup \{ \mathcal{C}_{q,2,s}(Q,w_1,w_2) \;:\; Q \textnormal{ cube of side length } \leq \ell \} . \end{align*} Then, there is a $C=C(q)$ such that any compactly supported Lipschitz $\phi$ satisfies \begin{align*} \left ( \int_{\mathbb{R}^d}|\phi|^q w_1\;dv\right )^{\frac{1}{q}} \leq C(q)\mu(8\ell)\left ( \int_{\mathbb{R}^d} |\nabla \phi|^2\; w_2 dv \right )^{\frac{1}{2}}+C(q)\left (\int_{\mathbb{R}^d}E_\ell(w_1)|\phi|^2\;dv \right )^{\frac{1}{2}}. \end{align*} Here, $E_{\ell}(w_1)$ denotes the function \begin{align*} E_\ell(w_1) = \sum \limits_{Q\in\mathcal{Q}_\ell} \frac{1}{|Q|}\left ( \int_{Q}w_1\;dv \right )^{\frac{2}{q}}\chi_{Q}, \end{align*} and $\mathcal{Q}_{\ell}$ denotes the class of cubes having the form $[0,\ell]^d+ k\ell$ with $k\in \mathbb{Z}^d$. \end{lem} \begin{proof} The idea is to decompose the integral over $\mathbb{R}^d$ as integrals over the family of cubes $\mathcal{Q}_{\ell}$, which are non-overlapping and cover all of $\mathbb{R}^d$. Then, we apply the weighted Sobolev inequality from Theorem \ref{thm:local weighted Sobolev inequalities} in each cube --making sure to control the extra term involving the average of the function over each cube. Denote by $(\phi)_{Q}$ the average of $\phi$ over $Q$. Young's inequality yields \begin{align*} \int_{\mathbb{R}^d}|\phi|^qw_1\;dv & = \sum \limits_{Q\in\mathcal{Q}_\ell} \int_{Q}|\phi|^qw_1\;dv = \sum \limits_{Q\in\mathcal{Q}_\ell} \int_{Q}|\phi-(\phi)_{Q}+(\phi)_{Q} |^qw_1\;dv \\ & \leq C(q)\sum \limits_{Q\in\mathcal{Q}_{\ell}} \int_{Q}|\phi-(\phi)_{Q}|^qw_1\;dv+C(q)\sum \limits_{Q\in\mathcal{Q}_\ell} |(\phi)_{Q}|^q \int_{Q}w_1\;dv.
\end{align*} On the other hand, for each $Q\in\mathcal{Q}_\ell$ we apply Theorem \ref{thm:local weighted Sobolev inequalities}, and get \begin{align*} \int_{Q}|\phi-(\phi)_{Q}|^qw_1\;dv & \leq \mathcal{C}_{q,2,s}(Q,w_1,w_2)^q\left ( \int_{Q} |\nabla \phi|^2\;w_2 dv \right )^{\frac{q}{2}}\\ & \leq \mu(8\ell)^q \left ( \int_{Q} |\nabla \phi|^2\;w_2 dv \right )^{\frac{q}{2}}. \end{align*} Since $q\geq 2$, \begin{align*} \sum \limits_{Q\in\mathcal{Q}_\ell}\left ( \int_{Q} |\nabla \phi|^2 w_2\;dv \right )^{\frac{q}{2}} & \leq C \left (\sum\limits_{Q\in\mathcal{Q}_\ell} \int_{Q}|\nabla \phi|^2\;w_2 dv \right )^{\frac{q}{2}} = C \left (\int_{\mathbb{R}^d}|\nabla \phi|^2\;w_2 dv \right )^{\frac{q}{2}}. \end{align*} It follows that \begin{align*} \int_{\mathbb{R}^d}|\phi|^qw_1\;dv\leq C(q)\mu(8\ell)^q\left (\int_{\mathbb{R}^d}|\nabla \phi|^2\;w_2 dv \right )^{\frac{q}{2}}+C\sum \limits_{Q\in\mathcal{Q}_\ell} |(\phi)_{Q}|^q \int_{Q}w_1\;dv . \end{align*} To deal with the second term, we apply Jensen's inequality: \begin{align*} |(\phi)_{Q}|^q \int_{Q}w_1\;dv \leq \left( \frac{1}{|Q|}\int_{Q}|\phi|^2\;dv \right )^{\frac{q}{2}}\int_{Q}w_1\;dv ,\;\;\forall\;Q\in\mathcal{Q}_\ell. \end{align*} Adding all these inequalities and using once more that $q/2\geq 1$, it follows that \begin{align*} \sum \limits_{Q\in\mathcal{Q}_\ell} \int_{Q}w_1\;dv |(\phi)_{Q}|^q \leq \left ( \int_{\mathbb{R}^d}E_{\ell}(w_1)|\phi|^2\;dv \right )^{\frac{q}{2}}. \end{align*} In conclusion, \begin{align*} \int_{\mathbb{R}^d}|\phi|^qw_1\;dv \leq C \mu(8\ell)^q \left (\int_{\mathbb{R}^d}|\nabla \phi|^2\;w_2 dv \right )^{\frac{q}{2}}+C\left ( \int_{\mathbb{R}^d}E_{\ell}(w_1)|\phi|^2\;dv \right )^{\frac{q}{2}}; \end{align*} raising both sides to the power $1/q$, the Lemma follows. \end{proof} \section{The $\varepsilon$-Poincar\'e inequality}\label{section:epsilon Poincare} We are now ready to prove Theorem \ref{thm:sufficient conditions for the Poincare inequality}.
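Before doing so, it may help to see what Lemma \ref{lem:Global weighted Sobolev from local ones} gives in the unweighted case $w_1=w_2\equiv 1$: each $Q\in\mathcal{Q}_\ell$ satisfies $\frac{1}{|Q|}\left(\int_Q dv\right)^{2/q} = \ell^{-d(1-\frac{2}{q})}$, so the lemma reads \begin{align*} \left ( \int_{\mathbb{R}^d}|\phi|^q \;dv\right )^{\frac{1}{q}} \leq C(q)\mu(8\ell)\left ( \int_{\mathbb{R}^d} |\nabla \phi|^2\; dv \right )^{\frac{1}{2}}+C(q)\,\ell^{-\frac{d}{2}(1-\frac{2}{q})}\left (\int_{\mathbb{R}^d}|\phi|^2\;dv \right )^{\frac{1}{2}}, \end{align*} a Gagliardo--Nirenberg-type bound in which the parameter $\ell$ trades off the gradient term against the $L^2$ term. This is precisely the mechanism exploited below, with $\ell$ chosen in terms of $\varepsilon$.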
For simplicity, throughout this section we drop the subindices in $h_{f,\gamma}$ and ${a^*}_{f,\gamma}$. \begin{proof}[Proof of Theorem \ref{thm:sufficient conditions for the Poincare inequality}] We consider the cases $\gamma \in (-2,0]$ and $\gamma \in [-d,-2]$ separately. \emph{Case $\gamma \in (-2,0]$}: Proposition \ref{new_pro_global_space} says that for any $Q$ with side length less than or equal to $1$ we have \begin{align*} \sigma_{2,2}(Q,h,a^*) \leq C(d,\gamma,f_{\textnormal{in}})|Q|^{\frac{2+\gamma}{d}}, \end{align*} and therefore $\mu(8\ell) \leq C(d,\gamma,f_{\textnormal{in}})\ell^{2+\gamma}$ for $\ell \in (0,1/8)$. Then, applying Lemma \ref{lem:Global weighted Sobolev from local ones} with $w_1 = h$ and $w_2 = {a^*}$, \begin{align*} \int_{\mathbb{R}^d}|\phi|^2 h\;dv \leq C(d,\gamma,f_{\textnormal{in}})\ell^{2+\gamma}\int_{\mathbb{R}^d} {a^*}|\nabla \phi|^2\;dv+C(2)\int_{\mathbb{R}^d}E_\ell(h)|\phi|^2\;dv. \end{align*} We may use the estimate \eqref{hwithr=1} to bound $E_{\ell}(h)$; indeed, for any $Q\in \mathcal{Q}_\ell$ we have \begin{align*} \fint_{Q} h\;dv &\leq C(d,\gamma,f_{\textnormal{in}})\ell^\gamma \langle v_Q\rangle^\gamma , \end{align*} where $v_Q$ denotes the center of $Q$. Since each $Q$ has side length $\leq 1$, it follows that $\langle v\rangle \approx \langle v_Q\rangle$ for all $v\in Q$, therefore \begin{align*} E_{\ell}(h) \leq C\ell^\gamma\langle v\rangle^\gamma . \end{align*} Taking $\ell = (\min\{C(d,\gamma,f_{\textnormal{in}})^{-1},2^{-|2+\gamma|}\}\varepsilon )^{\frac{1}{2+\gamma}}$ for $\varepsilon \in(0,1)$, it follows that for any $\phi$ with compact support we have \begin{align*} \int_{\mathbb{R}^d} \phi^2 h\;dv \leq \varepsilon \int_{\mathbb{R}^d}a^*|\nabla \phi|^2\;dv + C\varepsilon^{\frac{\gamma}{2+\gamma}}\int_{\mathbb{R}^d} \phi^2\langle v\rangle^\gamma\;dv,\;\;C=C(d,\gamma,f_{\textnormal{in}}). \end{align*} \noindent \emph{Case $\gamma \in [-d,-2]$:} As before, we apply Lemma \ref{lem:Global weighted Sobolev from local ones}.
This time, however, for $\varepsilon \in (0,1)$ we let $\ell = \ell_\mu(\varepsilon)$ be the largest number in $(0,1/10]$ such that $C(2)\mu(8 \ell) \leq \varepsilon$. Observe that \begin{align*} \int_{Q}h\;dv & \leq C(d,\gamma)\int_{\mathbb{R}^d} f(w) \int_{Q}|v-w|^{\gamma} \; dvdw \leq C(d,\gamma)|Q |^{\frac{d+\gamma}{d}}. \end{align*} The above inequality allows us to bound $E_{\ell_\mu(\varepsilon)}(h)$ in this case, since \begin{align*} \fint_{Q} h\;dv\leq C(d,\gamma,f_{\textnormal{in}})\ell_\mu(\varepsilon)^\gamma,\;\;\forall\;Q\in\mathcal{Q}_{\ell_\mu(\varepsilon)}. \end{align*} Then, back in Lemma \ref{lem:Global weighted Sobolev from local ones}, we conclude that for any compactly supported Lipschitz $\phi$, \begin{align*} \int_{\mathbb{R}^d} \phi^2 h\;dv \leq \varepsilon\int_{\mathbb{R}^d}{a^*}|\nabla \phi|^2\;dv + C(d,\gamma,f_{\textnormal{in}})\ell_\mu(\varepsilon)^\gamma\int_{\mathbb{R}^d} \phi^2\;dv. \end{align*} \end{proof} \subsection{Space-time inequalities}\label{Space time inequalities} \begin{lem}\label{lem:Sobolev weight astar astar to the m} Let $\gamma \in [-d,-2)$. If Assumption \ref{Assumption:Local Doubling} holds, then there is a $C=C(d,\gamma,f_{\textnormal{in}},C_D)$ such that for any Lipschitz $\phi$ with compact support we have \begin{align*} \left ( \int_{\mathbb{R}^d} \phi^{2m}({a^*}_{f,\gamma})^m \;dv\right )^{\frac{1}{m}}\leq C\int_{\mathbb{R}^d} a_{f,\gamma}^*|\nabla \phi|^2\;dv+C\int_{\mathbb{R}^d}\phi^2\;dv. \end{align*} Here, $m = \frac{d}{d-2}$ if $\gamma>-d$, while for $\gamma=-d$ the inequality holds for any $m<\frac{d}{d-2}$. \end{lem} \begin{proof} Let us first take $\gamma \in (-d,-2)$ and let $m =d/(d-2)$. This $m$ solves \begin{align}\label{eqn:exponent relation for d over d-2} \frac{1}{d}-\frac{1}{2}+\frac{1}{2m} = 0. \end{align} With this $m$, and since $\gamma>-d$, there is always some $s=s(d,\gamma)>1$ such that $m s |\gamma+2|<d$.
Then Lemma \ref{this_is_new_cond_Sob}, applied with $ms$ in place of $m$, yields \begin{align*} \left( \fint_Q (a^*)^{m s}\;dv \right)^{\frac{1}{ms}}\left( \fint_Q (a^*)^{-s}\;dv\right)^{\frac{1}{s}} \leq C(d,\gamma,f_{\textnormal{in}},C_D) \end{align*} for all $Q$ with side length smaller than $1/2$. Taking the square root of both sides, and using \eqref{eqn:exponent relation for d over d-2}, we conclude that for some $C=C(d,\gamma,f_{\textnormal{in}},C_D)$ \begin{align*} |Q|^{\frac{1}{d}-\frac{1}{2}+\frac{1}{2m}}\left( \fint_Q (a^*)^{ms}\;dv \right)^{\frac{1}{2ms}}\left( \fint_Q (a^*)^{-s}\;dv\right)^{\frac{1}{2s}} \leq C. \end{align*} Again, this holds for all cubes $Q$ with side length smaller than $1/2$. Then, we apply the covering argument of Lemma \ref{lem:Global weighted Sobolev from local ones} with $\ell= 1/16$, and obtain \begin{align*} \left ( \int_{\mathbb{R}^d}|\phi|^{2m}({a^*})^m\;dv\right )^{\frac{1}{m}} \leq C\int_{\mathbb{R}^d}{a^*} |\nabla \phi|^2\;dv+C\int_{\mathbb{R}^d}E_{\frac{1}{16}}(({a^*})^m)|\phi|^2\;dv. \end{align*} It remains to bound $E_{\frac{1}{16}}(({a^*})^m)$, for which it suffices to bound $\frac{1}{|Q|}\left (\int_{Q}({a^*})^m\;dv \right )^{\frac{1}{m}}$ for any cube $Q$ with side length $\leq 1/2$. On the other hand, for any cube $Q$ with side length less than $1$ we have \begin{align*} \frac{1}{|Q|}\left (\int_{Q}({a^*})^m\;dv \right )^{\frac{1}{m}} & = \frac{1}{|Q|^{1-\frac{1}{m}}}\left (\fint_{Q}({a^*})^m\;dv \right )^{\frac{1}{m}} \leq C\frac{1}{|Q|^{1-\frac{1}{m}}}\fint_{Q}{a^*}\;dv. \end{align*} In particular, if $Q$ has side length exactly equal to $1/16$, then \begin{align*} \frac{1}{|Q|}\left (\int_{Q}({a^*})^m\;dv \right )^{\frac{1}{m}} \leq C(d,\gamma,f_{\textnormal{in}},C_D). \end{align*} This implies that $\|E_{1/16}(({a^*})^m)\|_{L^\infty(\mathbb{R}^d)}\leq C$, which finishes the proof for $\gamma \in (-d,-2)$.
Now, for the case $\gamma=-d$, we let $m$ be any number in $(1,\frac{d}{d-2})$. In this case, we have \begin{align*} \frac{1}{d}-\frac{1}{2}+\frac{1}{2m} \geq 0, \end{align*} and there is some $s=s(d,m)>1$ such that $sm<\frac{d}{d-2}$. Then, in light of the above inequality involving $m$, we have for all cubes of side length $\leq 1$, \begin{align*} |Q|^{\frac{1}{d}-\frac{1}{2}+\frac{1}{2m}}\left( \fint_Q (a^*)^{ms}\;dv \right)^{\frac{1}{2ms}}\left( \fint_Q (a^*)^{-s}\;dv\right)^{\frac{1}{2s}} \leq C. \end{align*} From here on, we apply Lemma \ref{lem:Global weighted Sobolev from local ones} with $\ell=1/16$ and bound $\| E_{1/16}(({a^*})^m) \|_{L^\infty(\mathbb{R}^d)}$ just as before, finishing the proof. \end{proof} \begin{rem}\label{rem:Sobolev weight astar to the m case gamma bigger -2} For $\gamma\in [-2,0]$, without requiring any doubling assumption on $f$, we always have the inequality \begin{align*} \left ( \int_{\mathbb{R}^d} \phi^{2m}({a^*}_{f,\gamma})^m \;dv\right )^{\frac{1}{m}}\leq C\int_{\mathbb{R}^d} a_{f,\gamma}^*|\nabla \phi|^2\;dv+C\int_{\mathbb{R}^d}|\phi|^2\;dv,\;\; m = \frac{d}{d-2}. \end{align*} In fact, recall that $m$ as above satisfies \eqref{eqn:exponent relation for d over d-2}. Then using Lemma \ref{lem:averages for powers of bracket v} one can see there are $C=C(d,\gamma)>1$ and $s=s(d,\gamma)>1$ such that for any cube $Q$ with side length $\leq 1$, \begin{align*} \left ( \fint_{Q}\langle v \rangle^{\gamma m s} \;dv \right )^{\frac{1}{2m s}}\left (\fint_{Q}\langle v\rangle^{-\gamma s} \;dv \right )^{\frac{1}{2 s}} \leq C.
\end{align*} Then, Lemma \ref{lem:Global weighted Sobolev from local ones} with $q=2m$, $w_1 = \langle v\rangle^{m\gamma}$, $w_2 = \langle v\rangle^\gamma$, and $\ell = 1/2$ yields the basic weighted inequality \begin{align*} \left ( \int_{\mathbb{R}^d}|\phi|^{\frac{2d}{d-2}}\langle v\rangle^{\gamma m}\;dv\right )^{\frac{d-2}{d}} \leq C_{d,\gamma}\int_{\mathbb{R}^d}\langle v\rangle^\gamma|\nabla \phi|^2\;dv+C_{d,\gamma}\int_{\mathbb{R}^d}|\phi|^2\;dv. \end{align*} Finally, by Proposition \ref{prop:a vs a star gamma larger than -2}, for $\gamma\in [-2,0]$ we have \begin{align*} {a^*} \approx \langle v\rangle^\gamma, \end{align*} and substituting this pointwise bound into the weighted inequality above yields the estimate. \end{rem} With the above tools in hand, we now prove Theorems \ref{lem:Inequalities Sobolev weight aII} and \ref{thm:Inequalities Sobolev weight gamma below -2}. We combine both proofs since they are essentially identical. \begin{proof}[Proof of Theorem \ref{lem:Inequalities Sobolev weight aII} and Theorem \ref{thm:Inequalities Sobolev weight gamma below -2} ] In the following, $m = \frac{d}{d-2}$ if $\gamma>-d$, while for $\gamma=-d$, $m$ can be chosen to be any number less than $\frac{d}{d-2}$. The first part of both theorems has been shown in Remark \ref{rem:Sobolev weight astar to the m case gamma bigger -2} and Lemma \ref{lem:Sobolev weight astar astar to the m}, respectively. For the second part, let $\phi: \mathbb{R}^d \times I \mapsto \mathbb{R}$ and fix $t\in I$. Let $q = 4-\frac{2}{m}$, and note that $2<q<2m$. Interpolation yields \begin{align*} \int_{\mathbb{R}^d}\phi^q {a^*} \;dv & = \int_{\mathbb{R}^d} \phi^{q\theta + q(1-\theta)} {a^*} \;dv \\ & \leq \left ( \int_{\mathbb{R}^d} \phi^{2m} ({a^*})^{\frac{2m}{(1-\theta)q}} \;dv \right )^{\frac{(1-\theta)q}{2m}} \left (\int_{\mathbb{R}^d} \phi^{2}\;dv \right )^{\frac{q \theta}{2}}, \end{align*} where $\frac{1}{q} = \frac{\theta}{2}+\frac{1-\theta}{2m}$.
An elementary calculation shows that \begin{align*} q\theta = 2\left (1-\frac{1}{m} \right ),\;\;(1-\theta)q= 2. \end{align*} Therefore, the last inequality becomes \begin{align*} \int_{\mathbb{R}^d} \phi^q {a^*} \;dv \leq \left ( \int_{\mathbb{R}^d} \phi^{2m} ({a^*})^m\;dv\right )^{\frac{1}{m}} \left (\int_{\mathbb{R}^d} \phi^2\;dv \right )^{1-\frac{1}{m}}. \end{align*} This inequality holds for each $t\in I$; integrating in time, we arrive at \begin{align*} \int_I\int_{\mathbb{R}^d}\phi^q {a^*} \;dvdt & \leq \sup \limits_{I} \left(\int_{\mathbb{R}^d} \phi^{2}\;dv \right )^{1-\frac{1}{m}} \int_{I}\left ( \int_{\mathbb{R}^d} \phi^{2m} ({a^*})^{m} \;dv \right )^{\frac{1}{m}}\;dt. \end{align*} Then, by Lemma \ref{lem:Sobolev weight astar astar to the m} or Remark \ref{rem:Sobolev weight astar to the m case gamma bigger -2}, \begin{align*} \int_{I}\int_{\mathbb{R}^d}\phi^q {a^*} \;dvdt & \leq C \left(\sup \limits_{I} \int_{\mathbb{R}^d} \phi^{2}\;dv \right )^{1-\frac{1}{m}} \int_{I}\int_{\mathbb{R}^d}{a^*} |\nabla \phi|^2 +\phi^2\;dvdt. \end{align*} From the above, it is elementary to see that \begin{align*} \int_{I}\int_{\mathbb{R}^d}\phi^q {a^*} \;dvdt & \leq C'\left ( \int_{I}\int_{\mathbb{R}^d}{a^*} |\nabla \phi|^2\;dvdt +\sup \limits_{I}\int_{\mathbb{R}^d}\phi^2\;dv\right )^{2-\frac{1}{m}}; \end{align*} recalling that $q = 4-\frac{2}{m}$, this inequality becomes \begin{align*} \int_{I} \int\phi^q {a^*} \;dvdt & \leq C'\left ( \int_{I} \int {a^*} |\nabla \phi|^2 \;dvdt + \sup \limits_{I} \int \phi^{2}\;dv \right )^{\frac{q}{2}}. \end{align*} Finally, we note that if $m=\frac{d}{d-2}$ then $q=2\left (1+\frac{2}{d}\right )$. \end{proof} \section{$L^\infty$-Estimates}\label{section: Regularization in L infinity} \subsection{From entropy production to $L^pL^p$ estimates} For the purposes of $L^\infty$ estimates, it is important to have a weighted $L^pL^p$ bound for $f$ for some $p>1$ and a proper weight.
With this aim, we recall the standard and well-known fact that, through the entropy production, one can obtain a space-time integral bound for a derivative of $f$ (see, for instance, \cite{Desvillettes14}). Also, in what follows, recall that the notion of weak solution used here is described in Definition \ref{def:solution}. Let $f$ be a solution to \eqref{eqn:Landau homogeneous}, this being understood as a weak solution if $\gamma \in (-2,0]$ or as a classical solution in the case $\gamma \in [-d,-2]$. Moreover, let $\rho_{f_{\textnormal{in}}}$ denote the Maxwellian with the same mass, center of mass, and energy as $f_{\textnormal{in}}$. We have \begin{align}\label{eqn:entropy production bound} \int_{t_1}^{t_2}\int_{\mathbb{R}^d}4(A_{f,\gamma}\nabla f^{1/2},\nabla f^{1/2})-h_{f,\gamma}f\;dvdt \leq H(f_{\textnormal{in}}) - H(\rho_{f_{\textnormal{in}}}) = C(f_{\textnormal{in}}), \end{align} where $C(f_{\textnormal{in}})$ only depends on $f_{\textnormal{in}}$ through its mass, second moment, and entropy. Moreover, the estimate \eqref{eqn:entropy production bound} implies \begin{align}\label{eqn:entropy production bound_II} \| f \langle v \rangle^{\gamma}\|_{L^1(0,T,L^{d/(d-2)}(\mathbb{R}^d))} \le C(T,f_{\textnormal{in}}), \end{align} (see Theorem 3 and Lemma 3 in \cite{Desvillettes14}). The inequality \eqref{eqn:entropy production bound_II}, combined with the $\varepsilon$-Poincar\'e inequality and the conserved quantities, yields a weighted $L^pL^p$ bound. This is the content of the following proposition. \begin{prop}\label{prop:LpLp via interpolation with weight} If $\gamma \in (-2,0]$, and $f$ is a weak solution, then for any pair of times $T_1,T_2 \in (0,T)$ with $0\leq T_1<T_2\leq T_1+1$ we have \begin{align*} \int_{T_1}^{T_2}\int_{\mathbb{R}^d} f^{1+\frac{2}{d}}a^*_{f,\gamma}\;dvdt \leq C(d,\gamma,f_{\textnormal{in}}).
\end{align*} Meanwhile, if $\gamma\in (-d,-2]$, $f$ is a classical solution, and Assumptions \ref{Assumption:Epsilon Poincare} and \ref{Assumption:Local Doubling} hold, then for $T_1$ and $T_2$ as before \begin{align*} \int_{T_1}^{T_2}\int_{\mathbb{R}^d} f^{1+\frac{2}{d}}a^*_{f,\gamma}\;dvdt \leq C(d,\gamma,f_{\textnormal{in}},C_D,C_P,\kappa_P). \end{align*} The constants $C_D$, $C_P$, and $\kappa_P$ are as in Assumptions \ref{Assumption:Epsilon Poincare} and \ref{Assumption:Local Doubling}. \end{prop} \begin{proof} Let $p$ denote $1+\frac{2}{d}$. Since $p \in (1,\frac{d}{d-2})$, there is a $\theta \in (0,1)$ such that \begin{align*} 1 = \theta p + \frac{(1-\theta)p}{m},\;\; m = \frac{d}{d-2}. \end{align*} Note that $\theta$ entirely determines $p$, and vice versa. H\"older's inequality then yields \begin{align*} \int_{\mathbb{R}^d} f^p a^*\;dv & \leq \left ( \int_{\mathbb{R}^d} f \;dv \right )^{\theta p} \left (\int_{\mathbb{R}^d} f^{m} (a^*)^{\frac{m}{(1-\theta)p}}\;dv \right )^{\frac{(1-\theta)p}{m}} = \left (\int_{\mathbb{R}^d} f^{m} (a^*)^{\frac{m}{(1-\theta)p}}\;dv \right )^{\frac{(1-\theta)p}{m}}. \end{align*} The fact that $p=1+\frac{2}{d}$ guarantees that $(1-\theta) p =1$, and therefore (since $\|f\|_{L^1(\mathbb{R}^d)}\equiv1$) \begin{align*} \int_{\mathbb{R}^d} f^p a^*\;dv \leq \left ( \int_{\mathbb{R}^d} f^{m} (a^*)^{m}\;dv \right )^{\frac{1}{m}}. \end{align*} Integrating in $(T_1,T_2)$, it follows that \begin{align*} \int_{T_1}^{T_2}\int_{\mathbb{R}^d} f^pa^*\;dvdt \leq C \int_{T_1}^{T_2}\left ( \int_{\mathbb{R}^d} f^{m}(a^*)^{m} \;dv \right )^{\frac{1}{m}}\;dt.
\end{align*} Now, the weighted Sobolev inequality in Theorem \ref{lem:Inequalities Sobolev weight aII} (if $\gamma\in [-2,0]$) or in Theorem \ref{thm:Inequalities Sobolev weight gamma below -2} (if $\gamma\in [-d,-2)$) guarantees that \begin{align*} & \left( \int_{\mathbb{R}^d} f^{m} (a^*)^{m} \;dv \right )^{\frac{1}{m}} \leq C\int_{\mathbb{R}^d} a^*|\nabla f^{\frac{1}{2}}|^2\;dv+C \int_{\mathbb{R}^d}f\;dv. \end{align*} Then, \begin{align*} \int_{T_1}^{T_2}\int_{\mathbb{R}^d} f^p a^*\;dvdt \leq C \int_{T_1}^{T_2}\int_{\mathbb{R}^d} a^*|\nabla f^{\frac{1}{2}}|^2\;dvdt+C(T_2-T_1). \end{align*} It remains to bound the right hand side. For this we make use of the bound \begin{align*} 4\int_{T_1}^{T_2}\int_{\mathbb{R}^d}(A_{f,\gamma}\nabla f^{1/2},\nabla f^{1/2})\;dvdt-\int_{T_1}^{T_2}\int_{\mathbb{R}^d}hf\;dvdt \leq H(f_{\textnormal{in}})-H(\rho_{f_{\textnormal{in}}}). \end{align*} Now, the $\varepsilon$-Poincar\'e inequality applied with $\phi = \sqrt{f}$ and $\varepsilon = 1/8$ says that \begin{align*} \int_{T_1}^{T_2}\int_{\mathbb{R}^d}f h_{f,\gamma}\;dvdt \leq 2\int_{T_1}^{T_2}\int_{\mathbb{R}^d}(A_f\nabla f^{1/2},\nabla f^{1/2})\;dvdt+\Lambda_f(1/8)\int_{T_1}^{T_2}\int_{\mathbb{R}^d}f\;dvdt. \end{align*} Combining the last two estimates, and using that $T_2-T_1\leq 1$ and $\|f\|_{L^1(\mathbb{R}^d)}\equiv 1$, we have \begin{align*} 2\int_{T_1}^{T_2}\int_{\mathbb{R}^d} (A_{f,\gamma}\nabla f^{\frac{1}{2}},\nabla f^{\frac{1}{2}}) \;dvdt & \leq H(f_{\textnormal{in}})-H(\rho_{f_{\textnormal{in}}})+\Lambda_f(1/8). \end{align*} Since $a^*$ is comparable to the smallest eigenvalue of $A_{f,\gamma}$, we have $a^*|\nabla f^{1/2}|^2 \leq C(A_{f,\gamma}\nabla f^{1/2},\nabla f^{1/2})$ pointwise, and the desired bound follows. \end{proof} We shall make use of one further $L^pL^p$ bound, but this time there is no weight.
\end{align*} where \begin{align*} p_\gamma := \min \left \{\frac{d(2-\gamma)}{2(d-2)-d\gamma}\;, 1+\frac{2}{d} \right \}, \end{align*} and $C$ does not depend on $T$ if $\gamma \in [-\tfrac{4}{d},0]$ (observe this corresponds to $p_\gamma = 1+\frac{2}{d}$). \end{prop} \begin{proof} The proof is analogous to the previous one. As before, with $p=p_\gamma$ we have \begin{align*} \int_{\mathbb{R}^d} f^p\;dv & \le \left(\int_{\mathbb{R}^d} \left(f^{p(1-\theta)} \langle v \rangle^{-m}\right)^q dv\right)^{1/q} \left(\int_{\mathbb{R}^d} \left(f^{p\theta} \langle v \rangle^{m}\right)^{\frac{1}{p\theta}} dv\right)^{p\theta}, \end{align*} with $ q = \frac{1}{1-p\theta}$. The aim is to estimate $ \int f^p\;dv $ in terms of the quantities $ \int f\langle v \rangle^{2}\;dv$ and $ \int f^{\frac{d}{d-2}}\langle v \rangle^{\frac{d}{d-2}\gamma}\;dv$. First, suppose that $p_\gamma = d(2-\gamma)/ (2(d-2)-d\gamma)$; then we choose \begin{align*} \frac{m}{p\theta} = 2, \quad p(1-\theta)q = \frac{d}{d-2}, \quad mq = -\gamma\frac{d}{d-2}, \end{align*} and it follows that for some $C=C(d,\gamma)$ we have \begin{align*} \int_{\mathbb{R}^d} f^{p_\gamma}\;dv & \leq C\left(\int_{\mathbb{R}^d} f^{\frac{d}{d-2}}\langle v \rangle^{{\frac{d}{d-2}}\gamma}\;dv\right)^{\frac{2(d-2)}{2(d-2)-d\gamma}}\left(\int_{\mathbb{R}^d} f\langle v \rangle^{2}\;dv\right)^{\frac{d\gamma}{d\gamma-2(d-2)}}.
\end{align*} Now we use the above inequality for $f$ and integrate for $t\in (0,T)$: \begin{align*} \int_0^T\left(\int_{\mathbb{R}^d} f^p \;dv\right)\;dt & \leq \int_0^T \|f \langle v\rangle^\gamma\|_{L^{\frac{d}{d-2}}}^{\frac{2d}{2(d-2)-d\gamma}} \|f \langle v\rangle^2\|_{L^1}^{\frac{d\gamma}{d\gamma-2(d-2)}} \;dt \\ & \leq \sup_{(0,T)} \|f \langle v\rangle^2\|_{L^1}^{\frac{d\gamma}{d\gamma-2(d-2)}}\int_0^T \|f \langle v\rangle^\gamma\|_{L^{\frac{d}{d-2}}}^{\frac{2d}{2(d-2)-d\gamma}}\;dt \\ & \leq C(f_{\textnormal{in}}) T^{1- {\frac{2d}{2(d-2)-d\gamma}}}\left(\int_0^T \|f \langle v\rangle^\gamma\|_{L^{\frac{d}{d-2}}}\;dt\right)^{\frac{2d}{2(d-2)-d\gamma}}, \end{align*} thanks to \eqref{eqn:entropy production bound_II} and the conservation of the second moment of $f$. In the last inequality we applied Jensen's inequality, since $\frac{2d}{2(d-2)-d\gamma} <1$ if $\gamma <-\frac{4}{d}$ (which is precisely the regime where $p_\gamma = \frac{d(2-\gamma)}{2(d-2)-d\gamma}$). Second, suppose that $p_\gamma = 1+\frac{2}{d}$; then we take \begin{align*} q = \frac{d}{d-2}, \; p = 1+\frac{2}{d},\; \theta = \frac{2}{2+d}, \end{align*} and $m$ such that \begin{align*} mq \ge -\frac{\gamma d}{d-2},\quad \frac{m}{p\theta} \leq 2, \end{align*} and obtain \begin{align*} \int_0^T \int_{\mathbb{R}^d} f^p\;dvdt & \le \int_0^T\left(\int_{\mathbb{R}^d} f^{\frac{d}{d-2}}\langle v \rangle^{{\frac{\gamma d}{d-2}}}\;dv\right)^{\frac{d-2}{d}}\left(\int_{\mathbb{R}^d} f\langle v \rangle^{2}\;dv\right)^{\frac{2}{d}}\;dt \leq C(f_{\textnormal{in}},d,\gamma), \end{align*} using again \eqref{eqn:entropy production bound_II} and the conservation of the second moment. \end{proof} \subsection{Energy inequality}\label{sec: energy ineq} At the center of the iteration argument are integrals of functions of the distribution $f$.
That is, integrals of the form \begin{align}\label{eqn:energy_inequality basic integrals} \int_{\mathbb{R}^d} \eta^2 \phi(f) \;dv, \end{align} with $\eta$ a generic $C^2_c(\mathbb{R}^d)$-function and $\phi:\mathbb{R}\mapsto\mathbb{R}$ a $C^\infty$ function such that \begin{align}\label{eqn:properties of phi} \phi''(s) \geq 0 \;\;\forall\;s \textnormal{ and } \phi''(s) = 0 \textnormal{ for all large } s, \textnormal{ and } \lim\limits_{s\to 0} s^{-1}\phi(s) = 0. \end{align} The integrals \eqref{eqn:energy_inequality basic integrals} will be controlled using an energy inequality derived from the equation, which is understood in the sense of Definition \ref{def:solution}. We need some preliminaries on the type of functions $\phi$ that will be used, and on certain functions associated to them that will appear in the computations below. \begin{rem}\label{rem:definition of overline phi and underline phi} Let $\phi:\mathbb{R}\mapsto\mathbb{R}$ be as in \eqref{eqn:properties of phi}; for $u\geq 0$, define functions $\overline{\phi}$ and $\underline{\phi}$ by \begin{align}\label{eqn:overline phi and underline phi} \overline{\phi}(u) := \int_0^u (\phi''(s))^{\frac{1}{2}}\;ds,\;\;\underline{\phi}(u) = \int_0^u s\phi''(s)\;ds. \end{align} Observe that \begin{align*} (s\phi'(s)-\underline{\phi}(s)-\phi(s))' = \phi'(s)+s\phi''(s)-s\phi''(s)-\phi'(s) = 0,\;\;\forall\;s, \end{align*} thus $s\phi'(s)-\underline{\phi}(s)-\phi(s) = -\underline{\phi}(0)-\phi(0) = 0$ for all $s$, which means that \begin{align*} s\phi'(s)-\underline{\phi}(s) = \phi(s). \end{align*} \end{rem} In all that follows, we drop the subindices $f,\gamma$ from $A_{f,\gamma}$, $a_{f,\gamma}$, $h_{f,\gamma}$ and $a_{f,\gamma}^*$. The first proposition provides a more workable expression for the right hand side of the integral formulation of the equation (Definition \ref{def:solution}).
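Before stating it, we record a model case of Remark \ref{rem:definition of overline phi and underline phi} (ignoring the smoothness requirement on $\phi''$): taking $\phi''$ to be the indicator of $[0,h]$ gives \begin{align*} \phi(u)=\tfrac{1}{2}\min\{u,h\}^2+h(u-h)_+,\quad \overline{\phi}(u)=\min\{u,h\},\quad \underline{\phi}(u)=\tfrac{1}{2}\min\{u,h\}^2, \end{align*} and the identity $u\phi'(u)-\underline{\phi}(u)=\phi(u)$ can be checked by hand: for $u\leq h$ it reads $u^2-\tfrac{1}{2}u^2 = \tfrac{1}{2}u^2$, while for $u>h$ one has $\phi'(u)=h$ and $uh-\tfrac{1}{2}h^2 = \tfrac{1}{2}h^2+h(u-h)$. The truncated powers $\phi_{p,h}$ of Definition \ref{def:Definitions of approximations to p-th power I} below are smooth versions of this construction for general $p>1$.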
\begin{prop}\label{prop:basic energy inequality} Let $f$ be a weak solution (if $\gamma\in [-2,0]$) or a classical solution (if $\gamma \in [-d,-2)$), $\eta \in C^2_c(\mathbb{R}^d)$, and $\phi$ as in \eqref{eqn:properties of phi}. Then, we have the identity \begin{align*} -\int_{t_1}^{t_2}\int_{\mathbb{R}^d}(A\nabla f-f\nabla a,\nabla (\eta^2\phi'(f)))\;dvdt = (\textnormal{I})+(\textnormal{II})+(\textnormal{III})+(\textnormal{IV}), \end{align*} where, with $\overline{\phi}$ and $\underline{\phi}$ as in \eqref{eqn:overline phi and underline phi}, we have \begin{align*} (\textnormal{I}) & = -\int_{t_1}^{t_2}\int_{\mathbb{R}^d}(A\nabla (\eta \overline{\phi}(f)),\nabla (\eta \overline{\phi}(f)))\;dvdt+2\int_{t_1}^{t_2}\int_{\mathbb{R}^d}\overline{\phi}(f)(A\nabla (\eta \overline{\phi}(f)),\nabla \eta)\;dvdt,\\ (\textnormal{II}) & = -\int_{t_1}^{t_2}\int_{\mathbb{R}^d}\overline{\phi}(f)^2(A\nabla \eta,\nabla \eta)\;dvdt+\int_{t_1}^{t_2}\int_{\mathbb{R}^d}\phi(f)\textnormal{tr}(AD^2(\eta^2))\;dvdt,\\ (\textnormal{III}) & = \int_{t_1}^{t_2}\int_{\mathbb{R}^d} \eta^2\underline{\phi}(f) h\;dvdt,\\ (\textnormal{IV}) & = 2\int_{t_1}^{t_2}\int_{\mathbb{R}^d}\phi(f)(\nabla a,\nabla \eta^2)\;dvdt. \end{align*} \end{prop} \begin{proof} The proof amounts to keeping track of various elementary pointwise identities and a couple of integrations by parts. First, it is clear that \begin{align*} & \int_{\mathbb{R}^d}(A\nabla f,\nabla (\eta^2 \phi'(f)))\;dv = \int_{\mathbb{R}^d}\eta^2 \phi''(f)(A\nabla f,\nabla f)+\phi'(f)(A\nabla f,\nabla \eta^2)\;dv.
\end{align*} A couple of elementary identities: from the definition of $\overline{\phi}$ it follows that \begin{align*} & \eta^2 \phi''(f)(A\nabla f,\nabla f) \\ & = (A\nabla (\eta \overline{\phi}(f)),\nabla (\eta \overline{\phi}(f))) -2\eta \overline{\phi}(f)(A\nabla \eta,\nabla \overline{\phi}(f))-\overline{\phi}(f)^2(A\nabla \eta,\nabla \eta), \end{align*} while, at the same time, \begin{align*} -2\eta \overline{\phi}(f)(A\nabla \eta,\nabla \overline{\phi}(f)) = -2 \overline{\phi}(f)(A\nabla \eta,\nabla (\eta \overline{\phi}(f)) )+2\overline{\phi}(f)^2(A\nabla \eta,\nabla \eta). \end{align*} These two formulas yield \begin{align*} & \int_{\mathbb{R}^d}\eta^2 \phi''(f)(A\nabla f,\nabla f) \;dv\\ & = \int_{\mathbb{R}^d} (A\nabla (\eta \overline{\phi}(f)),\nabla (\eta \overline{\phi}(f))) -2 \overline{\phi}(f)(A\nabla \eta,\nabla (\eta \overline{\phi}(f)))+\overline{\phi}(f)^2(A\nabla \eta,\nabla \eta) \;dv. \end{align*} On the other hand, the identity $\phi'(f)\nabla f = \nabla \phi(f)$ and integration by parts yield \begin{align*} \int_{\mathbb{R}^d}\phi'(f)(A\nabla f,\nabla \eta^2)\;dv & = -\int_{\mathbb{R}^d}\phi(f)(\textnormal{tr}(AD^2(\eta^2))+(\nabla a,\nabla (\eta^2)))\;dv. \end{align*} Integrating these identities in $(t_1,t_2)$ (and recalling the definitions of $\textnormal{(I)}$ and $\textnormal{(II)}$) leads to \begin{align}\label{eqn:energy identity first half} & -\int_{t_1}^{t_2}\int_{\mathbb{R}^d}(A\nabla f,\nabla (\eta^2 \phi'(f)))\;dvdt = \textnormal{(I)}+\textnormal{(II)}+\int_{t_1}^{t_2}\int_{\mathbb{R}^d}\phi(f)(\nabla a,\nabla (\eta^2))\;dvdt. \end{align} To deal with the other term in the integral, we make use of the identity \begin{align*} f \nabla (\eta^2 \phi'(f)) & = \eta^2 f\phi''(f)\nabla f+f\phi'(f)\nabla (\eta^2)\\ & = \eta^2 \nabla \underline{\phi}(f)+f\phi'(f)\nabla (\eta^2)\\ & = \nabla (\eta^2 \underline{\phi}(f) )- \underline{\phi}(f)\nabla (\eta^2)+f\phi'(f)\nabla (\eta^2).
\end{align*} Integrating, this leads to \begin{align*} \int_{\mathbb{R}^d}(f\nabla a,\nabla (\eta^2 \phi'(f)) )\;dv = \int_{\mathbb{R}^d} \eta^2\underline{\phi}(f) h\;dv+\int_{\mathbb{R}^d}(f\phi'(f)-\underline{\phi}(f))(\nabla a,\nabla (\eta^2))\;dv. \end{align*} According to Remark \ref{rem:definition of overline phi and underline phi}, $f\phi'(f)-\underline{\phi}(f) = \phi(f)$, therefore \begin{align*} \int_{t_1}^{t_2}\int_{\mathbb{R}^d}(f\nabla a,\nabla (\eta^2 \phi'(f)) )\;dvdt = \textnormal{(III)}+\int_{t_1}^{t_2}\int_{\mathbb{R}^d}\phi(f)(\nabla a,\nabla (\eta^2))\;dvdt. \end{align*} This identity combined with \eqref{eqn:energy identity first half} proves the Proposition. \end{proof} The following definition introduces an approximation to the power function $\phi(s) = s^p/p$ by functions which grow at most linearly as $s\to\infty$. \begin{DEF}\label{def:Definitions of approximations to p-th power I} Fix $p>1$ and $h>0$. Let $\chi:\mathbb{R}\mapsto\mathbb{R}$ be a $C^\infty$ function such that \begin{align*} & 0\leq \chi(s) \leq 1 \textnormal{ and } 0 \leq -\chi'(s)\leq 2 \;\textnormal{ for all } \;s,\\ &\chi(s) =1 \textnormal{ if } s<0,\;\chi(s) = 0 \textnormal{ if } s>1. \end{align*} Then, for $u\geq 0$ we define \begin{align}\label{eqn:def of approximations to the pth power} \chi_h(u) := \int_{0}^u\chi(s-h)\;ds,\;\;\; \phi_{p,h}(u) = \phi(u) := \int_0^u\chi_h(s)^{p-1}\;ds. \end{align} \end{DEF} One should think of $\chi_h(u)$ as a smooth approximation to $\min\{u,h+1\}$. The following lemma contains useful and straightforward inequalities regarding $\phi_{p,h}$ and the associated functions $\overline{\phi}_{p,h}$ and $\underline{\phi}_{p,h}$. \begin{lem}\label{lem:properties of approximations to p-th power} Let $p>1$ and let $h(p)$ be chosen such that \begin{align*} h\geq h(p) \Rightarrow p h^{-1}(1+h^{-1})^{p-1} \leq 1.
\end{align*} Then, for $h\geq h(p)$ the function $\phi = \phi_{p,h}$ satisfies the following inequalities for all $u\geq 0$: \begin{align} & \underline{\phi}(u) \leq \frac{p}{2}\overline{\phi}(u)^2,\;\; \overline{\phi}(u)^2 \leq \frac{8(p-1)}{p} \phi(u), \label{eqn:upper bounds for under and over bar phi}\\ & |\phi(u)- \frac{p}{4(p-1)}(\overline{\phi}(u))^2| \leq \frac{p+4}{4} (h+1)^{p-1}(u-h)_+. \label{eqn:bound difference phi and overline phi} \end{align} \end{lem} \begin{proof} We fix $p$ and $h$ as in the statement of the Lemma and take $\phi = \phi_{p,h}$ throughout the proof. Note that \begin{align*} & \chi_h(u) = u \textnormal{ if } u<h,\\ & \chi_h(u) \leq h+1 \;\;\forall\;u \geq 0,\\ &\chi_h(u) \equiv \chi_h(h+1)\;\; \forall\;u\geq h+1. \end{align*} Moreover, we have the identities \begin{align*} \phi'(u) = \chi_h(u)^{p-1},\;\;\phi''(u) = (p-1)\chi_h(u)^{p-2}\chi(u-h). \end{align*} Now, let us see what the corresponding functions $\overline{\phi}$ and $\underline{\phi}$ from \eqref{eqn:overline phi and underline phi} look like. If $u\leq h$, we have \begin{align*} \overline{\phi}(u) & = (p-1)^{\frac{1}{2}}\int_{0}^u \chi_h(s)^{\frac{p}{2}-1}\chi(s-h)^{\frac{1}{2}}\;ds = (p-1)^{\frac{1}{2}}\int_{0}^u s^{\frac{p}{2}-1}\;ds,\\ \underline{\phi}(u) & = (p-1)\int_{0}^u s \chi_h(s)^{p-2}\chi(s-h)\;ds = (p-1)\int_{0}^u s^{p-1}\;ds. \end{align*} Then, for $u\leq h$, \begin{align*} \overline{\phi}(u) = \tfrac{2}{p}(p-1)^{\frac{1}{2}}u^{\frac{p}{2}},\;\; \underline{\phi}(u) = \tfrac{1}{p}(p-1)u^p \Rightarrow \underline{\phi}(u) = \frac{p}{4}\overline{\phi}(u)^2,\;\;\forall\;u\leq h.
\end{align*} On the other hand, for $u\geq h$, \begin{align*} \underline{\phi}(u) & \leq \tfrac{p}{4}\overline{\phi}(h)^2+ (p-1)\int_h^{h+1}(h+1)^{p-1}\;ds\\ & \leq \tfrac{p}{4}\overline{\phi}(h)^2+ \frac{p^2}{4} \overline{\phi}(h)^2h^{-1} (1+h^{-1})^{p-1}\\ \overline{\phi}(u) & \leq (p-1)^{\frac{1}{2}}(h+1)^{\frac{p}{2}}, \end{align*} and, using that $\overline{\phi}(u)^2 \geq \overline{\phi}(h)^2 = \tfrac{4}{p^2}(p-1)h^p$ for $u\geq h$, we conclude that \begin{align*} \underline{\phi}(u) \leq \frac{p}{2}\overline{\phi}(u)^2,\;\; \overline{\phi}(u)^2 \leq \frac{8(p-1)}{p} \phi(u)\;\;\forall\;u\geq 0, \end{align*} provided that $h$ is larger than $h(p)$, where $h(p)$ is chosen such that \begin{align*} h\geq h(p) \Rightarrow p h^{-1}(1+h^{-1})^{p-1} \leq 1. \end{align*} It remains to prove the second inequality. From the previous observations, it follows that \begin{align*} \phi(u) - \frac{p}{4(p-1)}(\overline{\phi}(u))^2 = 0 \textnormal{ for } u \in [0,h]. \end{align*} For $u\geq h$, we shall differentiate the difference with respect to $u$, so that \begin{align*} & \phi'(u) - \frac{p}{4(p-1)}\overline{\phi}(u)\overline{\phi}'(u)\\ & = \chi_h(u)^{p-1}-\frac{p}{4(p-1)}\overline{\phi}(u)(p-1)^{\frac{1}{2}}\chi_{h}(u)^{\frac{p}{2}-1}\left (\chi(u-h) \right )^{\frac{1}{2}}. \end{align*} Therefore, \begin{align*} |\phi'(u) - \frac{p}{4(p-1)}\overline{\phi}(u)\overline{\phi}'(u)| \leq \frac{p+4}{4}(h+1)^{p-1}, \end{align*} and hence, integrating this bound from $h$ to $u$ and using that the difference vanishes at $u=h$, \begin{align*} & |\phi(u)- \frac{p}{4(p-1)}\overline{\phi}(u)^2|\\ & = \left |\phi(u)-\frac{p}{4(p-1)}\overline{\phi}(u)^2-\left ( \phi(h)- \frac{p}{4(p-1)}\overline{\phi}(h)^2 \right ) \right |\\ & \leq \frac{p+4}{4}(h+1)^{p-1}|u-h|. \end{align*} In conclusion, \begin{align*} |\phi(u)- \frac{p}{4(p-1)}(\overline{\phi}(u))^2| \leq \frac{p+4}{4} (h+1)^{p-1}(u-h)_+, \end{align*} and the second inequality is proved.
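As a numerical sanity check of the inequalities above (an illustration only, not part of the argument), one can build $\phi_{p,h}$, $\overline{\phi}_{p,h}$ and $\underline{\phi}_{p,h}$ by quadrature. The following Python sketch uses a $C^1$ ``smoothstep'' as a stand-in for the $C^\infty$ cutoff $\chi$ of Definition \ref{def:Definitions of approximations to p-th power I} (it satisfies the same pointwise bounds), and the grid parameters are arbitrary choices of ours:

```python
import numpy as np

def chi(s):
    # C^1 "smoothstep" standing in for the C-infinity cutoff chi:
    # chi = 1 for s < 0, chi = 0 for s > 1, and 0 <= -chi'(s) = 6 s (1 - s) <= 1.5 <= 2.
    s = np.clip(s, 0.0, 1.0)
    return 1.0 - 3.0 * s**2 + 2.0 * s**3

def phi_family(p, h, u_max=40.0, n=200_001):
    # Build phi_{p,h}, overline{phi} and underline{phi} by cumulative
    # trapezoidal quadrature of
    #   phi'(u)            = chi_h(u)^{p-1},
    #   overline{phi}'(u)  = sqrt(phi''(u)) = sqrt(p-1) chi_h^{p/2-1} chi(u-h)^{1/2},
    #   underline{phi}'(u) = u phi''(u)     = (p-1) u chi_h^{p-2} chi(u-h).
    # (Use p >= 2 so chi_h(0)^{p-2} causes no singularity at u = 0.)
    u = np.linspace(0.0, u_max, n)
    du = u[1] - u[0]
    cum = lambda g: np.concatenate(([0.0], np.cumsum((g[1:] + g[:-1]) / 2.0) * du))
    chi_h = cum(chi(u - h))  # smooth version of min(u, h + 1)
    phi = cum(chi_h ** (p - 1))
    obar = cum(np.sqrt(p - 1) * chi_h ** (p / 2 - 1) * np.sqrt(chi(u - h)))
    ubar = cum((p - 1) * u * chi_h ** (p - 2) * chi(u - h))
    return u, phi, obar, ubar
```

For instance, with $p=2$, $h=10$ (so that $ph^{-1}(1+h^{-1})^{p-1}\leq 1$) one can verify on the grid, up to quadrature error, inequalities \eqref{eqn:upper bounds for under and over bar phi} and \eqref{eqn:bound difference phi and overline phi}, as well as the exact identity $\underline{\phi}=\tfrac{p}{4}\overline{\phi}^2$ on $[0,h]$.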
\end{proof} \begin{prop}\label{prop:energy inequality term IV} Let $\phi=\phi_{p,h}$ be as in Definition \ref{def:Definitions of approximations to p-th power I}. Then, using the notation from the previous proposition, we have \begin{align*} (\textnormal{IV}) & = (\textnormal{IV})_1+(\textnormal{IV})_2, \end{align*} where, with $c_p=\frac{p}{4(p-1)}$, \begin{align*} (\textnormal{IV})_1 & = 2c_p\int_{t_1}^{t_2}\int_{\mathbb{R}^d}\overline{\phi}(f)^2 (4(A\nabla \eta,\nabla \eta)-\textnormal{tr}(AD^2\eta^2))\;dvdt\\ &\;\;\;\;-8c_p\int_{t_1}^{t_2}\int_{\mathbb{R}^d}\overline{\phi}(f) (A\nabla (\eta \overline{\phi}(f)),\nabla \eta)\;dvdt,\\ |(\textnormal{IV})_2| & \leq \frac{p+4}{2}\int_{t_1}^{t_2}\int_{\{f >h\} }(h+1)^{p-1}(f-h)|\nabla a| |\nabla \eta^2|\;dvdt. \end{align*} \end{prop} \begin{proof} First, let us write (suppressing, in the next few displays, the time integral $\int_{t_1}^{t_2}\cdots \;dt$) \begin{align*} (\textnormal{IV}) = 2c_p\int_{\mathbb{R}^d}\overline{\phi}(f)^2(\nabla a,\nabla \eta^2)\;dv+2\int_{\mathbb{R}^d}(\phi(f)-c_p\overline{\phi}(f)^2)(\nabla a,\nabla \eta^2)\;dv, \end{align*} and set \begin{align*} (\textnormal{IV})_1 = 2c_p\int_{\mathbb{R}^d}\overline{\phi}(f)^2(\nabla a,\nabla \eta^2)\;dv,\;\;(\textnormal{IV})_2 = 2\int_{\mathbb{R}^d}(\phi(f)-c_p\overline{\phi}(f)^2)(\nabla a,\nabla \eta^2)\;dv. \end{align*} Then, for $(\textnormal{IV})_1$, we integrate by parts, obtaining \begin{align*} (\textnormal{IV})_1 & = 2c_p\int_{\mathbb{R}^d}\textnormal{div}(A \overline{\phi}(f)^2 (\nabla \eta^2 ) ) - \textnormal{tr}(AD(\overline{\phi}(f)^2\nabla \eta^2)) \;dv \\ & = -2c_p\int_{\mathbb{R}^d}\overline{\phi}(f)^2 \textnormal{tr}(AD^2\eta^2)\;dv-2c_p\int_{\mathbb{R}^d}(A\nabla \overline{\phi}(f)^2,\nabla \eta^2)\;dv.
\end{align*} Furthermore, using the elementary identity \begin{align*} (A\nabla \overline{\phi}(f)^2,\nabla \eta^2 ) & = 4\overline{\phi}(f)\eta (A\nabla \overline{\phi}(f),\nabla \eta)\\ & = 4\overline{\phi}(f) (A\nabla (\eta \overline{\phi}(f)),\nabla \eta)-4\overline{\phi}(f)^2 (A\nabla \eta,\nabla \eta), \end{align*} it follows that \begin{align*} (\textnormal{IV})_1 & = -2c_p\int_{\mathbb{R}^d}\overline{\phi}(f)^2 \textnormal{tr}(AD^2\eta^2)\;dv\\ &\;\;\;\;-8c_p\int_{\mathbb{R}^d}\overline{\phi}(f) (A\nabla (\eta \overline{\phi}(f)),\nabla \eta)\;dv+8c_p\int_{\mathbb{R}^d}\overline{\phi}(f)^2 (A\nabla \eta,\nabla \eta)\;dv. \end{align*} As for $(\textnormal{IV})_2$, one has \begin{align*} |(\textnormal{IV})_2|\leq 2\int_{t_1}^{t_2}\int_{\mathbb{R}^d}|\phi(f)-c_p\overline{\phi}(f)^2||\nabla a||\nabla \eta^2|\;dvdt. \end{align*} Then, using the pointwise inequality \eqref{eqn:bound difference phi and overline phi} from Lemma \ref{lem:properties of approximations to p-th power}, we obtain \begin{align*} |(\textnormal{IV})_2|\leq \frac{p+4}{2}\int_{t_1}^{t_2}\int_{\{f>h\}} (h+1)^{p-1}(f-h)|\nabla a||\nabla \eta^2|\;dvdt. \end{align*} \end{proof} In the following Lemma we intend to use Propositions \ref{prop:basic energy inequality} and \ref{prop:energy inequality term IV} with $\phi = \phi_{p,h}$ as above to prove an energy inequality. The $\varepsilon$-Poincar\'e inequality is used as well. \begin{lem}\label{lem:energy inequality with bad term} Let $f$ be a weak solution in $(0,T)$ (if $\gamma \in (-2,0]$) or else a classical solution satisfying Assumption \ref{Assumption:Epsilon Poincare} if $\gamma \in [-d,-2]$. Let $T_1,T_2,T_3$ be times such that $0<T_1<T_2<T_3<T$ and $\phi=\phi_{p,h}$ be as in Definition \ref{def:Definitions of approximations to p-th power I} with $p\geq p_0$ for some $p_0>1$.
Then, we have that the quantity \begin{align*} \sup \limits_{(T_2,T_3)}\int_{\mathbb{R}^d}\eta^2 \phi(f(t))\;dv+\frac{1}{4}\int_{T_2}^{T_3}\int_{\mathbb{R}^d}(A\nabla (\eta \overline{\phi}(f)),\nabla (\eta \overline{\phi}(f)))\;dvdt, \end{align*} is no larger than \begin{align*} & \frac{1}{T_2-T_1}\int_{T_1}^{T_2}\int_{\mathbb{R}^d}\eta^2 \phi(f)\;dvdt +C(p_0)\left (\|\nabla \eta\|_\infty^2+\|D^2\eta^2\|_\infty \right )\int_{T_1}^{T_3}\int_{\textnormal{spt}(\eta)}\phi(f)a\;dvdt\\ & +\tfrac{p}{2}\Lambda_f\left (\tfrac{1}{2p}\right)\int_{T_1}^{T_3}\int_{\mathbb{R}^d}(\eta \overline{\phi}(f))^2\;dvdt+\frac{p+4}{2}\int_{T_1}^{T_3}\int_{\{f>h\}} (h+1)^{p-1}(f-h)|\nabla a||\nabla \eta^2|\;dvdt. \end{align*} \end{lem} \begin{proof} Fix times $t_1,t_2$. From the definition of weak solution and Proposition \ref{prop:basic energy inequality}, \begin{align}\label{eqn:energy inequality basic expression} \int_{\mathbb{R}^d}\eta^2 \phi(f(t_2))\;dv-\int_{\mathbb{R}^d}\eta^2 \phi(f(t_1))\;dv & = \textnormal{(I)}+\textnormal{(II)}+\textnormal{(III)}+\textnormal{(IV)}. \end{align} Let us bound each of the terms $\textnormal{(I)}$--$\textnormal{(IV)}$. For every $\delta>0$ we have the elementary inequality \begin{align*} |\overline{\phi}(f)(A\nabla (\eta \overline{\phi}(f)),\nabla \eta)| \leq \frac{\delta}{2}(A\nabla (\eta \overline{\phi}(f)),\nabla (\eta \overline{\phi}(f)))+ \frac{1}{2\delta}\overline{\phi}(f)^2(A\nabla \eta,\nabla \eta), \end{align*} which we shall use twice to bound the terms $(\textnormal{I})$ and $(\textnormal{IV})_1$ defined in Proposition \ref{prop:basic energy inequality} and Proposition \ref{prop:energy inequality term IV}.
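This elementary inequality is just Cauchy--Schwarz in the inner product induced by the positive semi-definite matrix $A$, followed by Young's inequality. The following Python sketch (an illustration only, with randomly generated data) checks it numerically, with $x$ playing the role of $\nabla(\eta\overline{\phi}(f))$, $y$ of $\nabla\eta$, and $r$ of $\overline{\phi}(f)$ at a point:

```python
import numpy as np

rng = np.random.default_rng(0)

def check_young_cauchy_schwarz(trials=1000):
    # Numerical check (not part of the proof) of the elementary inequality
    #   |r (A x, y)| <= (delta/2)(A x, x) + (1/(2 delta)) r^2 (A y, y)
    # for positive semi-definite A; it follows from Cauchy-Schwarz in the
    # A-inner product plus Young's inequality ab <= a^2/2 + b^2/2.
    for _ in range(trials):
        d = int(rng.integers(2, 6))
        M = rng.standard_normal((d, d))
        A = M @ M.T                      # positive semi-definite matrix
        x, y = rng.standard_normal(d), rng.standard_normal(d)
        r = rng.uniform(0.0, 5.0)
        delta = rng.uniform(0.05, 5.0)
        lhs = abs(r * (A @ x) @ y)
        rhs = delta / 2 * (A @ x) @ x + r**2 / (2 * delta) * (A @ y) @ y
        if lhs > rhs + 1e-9:
            return False
    return True
```

The same two-step mechanism (Cauchy--Schwarz in the $A$-inner product, then Young with a free parameter $\delta$) is what allows the gradient term to be absorbed into the left hand side below.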
Then, for positive $\delta_1$ and $\delta_2$ (to be determined later), we have \begin{align*} (\textnormal{I}) & \leq -(1-\delta_1)\int_{t_1}^{t_2}\int_{\mathbb{R}^d}(A\nabla (\eta \overline{\phi}(f)),\nabla (\eta \overline{\phi}(f)))\;dvdt+\frac{1}{\delta_1}\int_{t_1}^{t_2}\int_{\mathbb{R}^d}\overline{\phi}(f)^2(A\nabla \eta,\nabla \eta)\;dvdt\\ & \leq -(1-\delta_1)\int_{t_1}^{t_2}\int_{\mathbb{R}^d}(A\nabla (\eta \overline{\phi}(f)),\nabla (\eta \overline{\phi}(f)))\;dvdt+\frac{1}{\delta_1}\|\nabla \eta\|_{L^\infty}^2 \int_{t_1}^{t_2}\int_{\textnormal{spt}(\eta)}\overline{\phi}(f)^2a\;dvdt. \end{align*} Similarly, \begin{align*} (\textnormal{IV})_1 & \leq 4c_p\delta_2 \int_{t_1}^{t_2}\int_{\mathbb{R}^d}(A\nabla (\eta \overline{\phi}(f)),\nabla (\eta \overline{\phi}(f)) )\;dvdt\\ &\;\;\;\; +4c_p(2+\delta_2^{-1}) \int_{t_1}^{t_2}\int_{\mathbb{R}^d}\overline{\phi}(f)^2(A\nabla \eta,\nabla \eta)\;dvdt-2c_p\int_{t_1}^{t_2}\int_{\mathbb{R}^d}\overline{\phi}(f)^2\textnormal{tr}(AD^2\eta^2)\;dvdt\\ & \leq 4c_p \delta_2 \int_{t_1}^{t_2}\int_{\mathbb{R}^d}(A\nabla (\eta \overline{\phi}(f)),\nabla (\eta \overline{\phi}(f)) )\;dvdt\\ & \;\;\;\;+4c_p\left ( 2+\delta_2^{-1}\right )\left (\|\nabla \eta\|_{L^\infty}^2+\|D^2\eta^2\|_{L^\infty} \right )\int_{t_1}^{t_2}\int_{\textnormal{spt}(\eta)}\overline{\phi}(f)^2a\;dvdt. \end{align*} On the other hand, applying inequality \eqref{eqn:upper bounds for under and over bar phi} from Lemma \ref{lem:properties of approximations to p-th power}, it follows that for $h$ large enough \begin{align*} (\textnormal{II}) \leq 8\left ( \|\nabla \eta\|_{L^\infty}^2 + \|D^2\eta^2\|_{L^\infty} \right )\int_{t_1}^{t_2}\int_{\textnormal{spt}(\eta)}\phi(f)a\;dvdt.
\end{align*} Then, choosing $\delta_1 = 1/4$ and $\delta_2 = (4c_p)^{-1}/4$, it follows that \begin{align*} (\textnormal{I})+(\textnormal{II})+(\textnormal{IV}) & \leq -\frac{1}{2}\int_{t_1}^{t_2}\int_{\mathbb{R}^d}(A\nabla (\eta \overline{\phi}(f)),\nabla (\eta \overline{\phi}(f)))\;dvdt\\ & \;\;\;\;+8c_p(6+16 c_p )\left (\|\nabla \eta\|_{L^\infty}^2+\|D^2\eta^2\|_{L^\infty} \right )\int_{t_1}^{t_2}\int_{\textnormal{spt}(\eta)}\phi(f)a\;dvdt+(\textnormal{IV})_2. \end{align*} Again by Lemma \ref{lem:properties of approximations to p-th power}, provided $h$ is sufficiently large, we have \begin{align*} \int_{t_1}^{t_2}\int_{\mathbb{R}^d}\eta^2 \underline{\phi}(f)h\;dvdt \leq \frac{p}{2}\int_{t_1}^{t_2}\int_{\mathbb{R}^d} (\eta \overline{\phi}(f))^2h\;dvdt. \end{align*} Next, we apply the $\varepsilon$-Poincar\'e inequality with $\varepsilon = \frac{1}{2p}$, obtaining \begin{align*} (\textnormal{III}) \leq \frac{1}{4}\int_{t_1}^{t_2}\int_{\mathbb{R}^d}(A\nabla (\eta \overline{\phi}(f)),\nabla (\eta \overline{\phi}(f)))\;dvdt+\tfrac{p}{2}\Lambda_f \left ( \tfrac{1}{2p} \right )\int_{t_1}^{t_2}\int_{\mathbb{R}^d}(\eta \overline{\phi}(f))^2\;dvdt. \end{align*} Going back to \eqref{eqn:energy inequality basic expression}, we conclude that for any pair of times $t_1<t_2$, we have \begin{align*} & \int_{\mathbb{R}^d}\eta^2 \phi(f(t_2))\;dv + \frac{1}{4}\int_{t_1}^{t_2}\int_{\mathbb{R}^d}(A\nabla (\eta \overline{\phi}(f)),\nabla (\eta \overline{\phi}(f)))\;dvdt \\ & \leq \int_{\mathbb{R}^d}\eta^2 \phi(f(t_1))\;dv+\tfrac{p}{2}\Lambda_f (\tfrac{1}{2p})\int_{t_1}^{t_2}\int_{\mathbb{R}^d}(\eta \overline{\phi}(f))^2\;dvdt\\ & \;\;\;\;+C(p_0)\left (\|\nabla \eta\|_{L^\infty}^2+\|D^2\eta^2\|_{L^\infty} \right )\int_{t_1}^{t_2}\int_{\textnormal{spt}(\eta)}\phi(f)a\;dvdt\\ &\;\;\;\;+\frac{p+4}{2}\int_{t_1}^{t_2}\int_{\{f>h\}} (h+1)^{p-1}(f-h)|\nabla a||\nabla \eta^2|\;dvdt, \end{align*} where $C(p_0) := 8c_{p_0}(6+16c_{p_0}) \geq 8c_{p}(6+16c_{p})$ for all $p\geq p_0$.
Observe that all terms appearing on both sides of the inequality are non-negative. Fix $t_2 \in (T_2,T_3)$ and take the average of both sides of this inequality with respect to $t_1 \in (T_1,T_2)$; the resulting inequality then says that the quantity \begin{align*} & \int_{\mathbb{R}^d}\eta^2 \phi(f(t_2))\;dv + \frac{1}{4}\int_{T_2}^{t_2}\int_{\mathbb{R}^d}(A\nabla (\eta \overline{\phi}(f)),\nabla (\eta \overline{\phi}(f)))\;dvdt \end{align*} is no larger than \begin{align*} & \frac{1}{T_2-T_1}\int_{T_1}^{T_2}\int_{\mathbb{R}^d}\eta^2 \phi(f)\;dvdt+\tfrac{p}{2}\Lambda_f(\tfrac{1}{2p})\int_{T_1}^{t_2}\int_{\mathbb{R}^d}(\eta \overline{\phi}(f))^2\;dvdt\\ & \;\;\;\;+C(p_0)\left (\|\nabla \eta\|_{L^\infty}^2+\|D^2\eta^2\|_{L^\infty} \right )\int_{T_1}^{t_2}\int_{\textnormal{spt}(\eta)}\phi(f)a\;dvdt\\ &\;\;\;\;+\frac{p+4}{2}\int_{T_1}^{t_2}\int_{\{f>h\}} (h+1)^{p-1}(f-h)|\nabla a||\nabla \eta^2|\;dvdt. \end{align*} Since $t_2 \in (T_2,T_3)$ is arbitrary, this proves the Lemma. \end{proof} Let us note that the last term in the previous Lemma ought to go to zero as $h\to \infty$ when $f$ has enough integrability (the integrability requirement being higher the larger $p$ is). Since our intention is to follow Moser's approach to De Giorgi-Nash-Moser estimates, we shall alternate between obtaining some integrability of $f$ and showing that the expression above goes to zero for some $p$, which in turn yields better integrability, and so on. The next Lemma gives a starting point in the case $\gamma \in (-2,0]$ (for the remaining $\gamma$ we will assume $f$ is bounded and treat the result as an a priori estimate). \begin{lem}\label{lem:Ld/d-1 bound for f unif in time} Let $\gamma \in (-2,0]$. Consider times $0<\tau<\tau'$. Then, \begin{align*} \sup \limits_{(\tau,\tau')} \int_{\mathbb{R}^d} (f(t))^{\frac{d}{d-1}}\;dv < \infty.
\end{align*} \end{lem} \begin{proof} Let $\phi = \phi_{p,h}$ be as in Definition \ref{def:Definitions of approximations to p-th power I} with $p = p_\gamma$ as in Proposition \ref{prop: LpLp bound via interpolation1}. For $R>1$ let $\eta_R$ be a smooth function such that $0\leq \eta_R \leq 1$ in $\mathbb{R}^d$ and such that \begin{align*} \eta_R \equiv 1 \textnormal{ in } B_R, \;\eta_R \equiv 0 \textnormal{ in } \mathbb{R}^d\setminus B_{2R},\;\;|\nabla \eta_R|^2+|D^2\eta_R| \leq CR^{-2}. \end{align*} Lemma \ref{lem:energy inequality with bad term} says that \begin{align*} & \int_{\mathbb{R}^d}\eta_R^2 \phi(f(t_2))\;dv + \frac{1}{4}\int_{T_2}^{t_2}\int_{\mathbb{R}^d}(A\nabla (\eta_R \overline{\phi}(f)),\nabla (\eta_R \overline{\phi}(f)))\;dvdt \\ & \leq \frac{2}{T_2-T_1}\int_{T_1}^{T_2}\int_{\mathbb{R}^d}\eta_R^2 \phi(f)\;dvdt+p \Lambda_f (\tfrac{1}{2p})\int_{T_1}^{T_3}\int_{\mathbb{R}^d}(\eta_R \overline{\phi}(f))^2\;dvdt\\ & \;\;\;\;+2C(p_\gamma)\left (\|\nabla \eta_R\|_{L^\infty}^2+\|D^2\eta_R^2\|_{L^\infty} \right )\int_{T_1}^{T_3}\int_{\textnormal{spt}(\nabla \eta_R)}\phi(f)a\;dvdt\\ &\;\;\;\;+(p+4)\int_{T_1}^{T_3}\int_{\{f>h\}} (h+1)^{p-1}(f-h)|\nabla a||\nabla \eta_R^2|\;dvdt. \end{align*} Since $\gamma \in (-2,0]$, Proposition \ref{prop:a vs a star gamma larger than -2} yields that $R^{-2}a \leq C(d,\gamma,f_{\textnormal{in}}) {a^*}$ in $B_{2R}\setminus B_R$; it follows that the term \begin{align*} \left (\|\nabla \eta_R\|_{L^\infty}^2+\|D^2\eta_R^2\|_{L^\infty} \right )\int_{T_1}^{T_3}\int_{\textnormal{spt}(\nabla \eta_R)}\phi(f)a\;dvdt, \end{align*} is bounded by \begin{align*} C(d,\gamma,f_{\textnormal{in}})\int_{T_1}^{T_3} \int_{\textnormal{spt}(\nabla \eta_R)} \phi(f) {a^*} \;dvdt.
\end{align*} From the integrability of $f{a^*}$ over the whole space (which follows from the upper bound for ${a^*}$ and the second moment bound for $f$) it follows that the above integral converges to zero as $R\to \infty$, therefore \begin{align*} \lim\limits_{R\to\infty }\left (\|\nabla \eta_R\|_{L^\infty}^2+\|D^2\eta_R^2\|_{L^\infty} \right )\int_{T_1}^{T_3}\int_{\textnormal{spt}(\nabla \eta_R)}\phi(f)a\;dvdt = 0. \end{align*} Next, we consider the term \begin{align*} \int_{T_1}^{T_3} \int_{\textnormal{spt}(\nabla \eta_R)} (h+1)^{p-1}(f-h)_+|\nabla a||\nabla \eta_R^2|\;dvdt. \end{align*} Throughout, we keep in mind that \begin{align*} |\nabla a[f] | \le C(d,\gamma)\int_{\mathbb{R}^d} \frac{f(y)}{|v-y|^{-\gamma-1}}\;dy. \end{align*} If $\gamma \geq -1$, then arguing as in Proposition \ref{prop:a vs a star gamma larger than -2} (using the second moment bound for $f$) it is not hard to see that $|\nabla a[f]|\leq C(d,\gamma,f_{\textnormal{in}}) \langle v\rangle^{1+\gamma}$ for all $v$. Since $1+\gamma\leq 2$ in any case, using again the second moment bound for $f$ it follows that (with $h$ fixed) \begin{align*} \lim \limits_{R \to \infty}\int_{T_1}^{T_3} \int_{\textnormal{spt}(\nabla \eta_R)}(h+1)^{p-1}(f-h)_+|\nabla a||\nabla \eta_R^2|\;dvdt = 0. \end{align*} Next, we prove the same limit as above in the case $-2 <\gamma<-1$. Let us show that \begin{align*} \int_{T_1}^{T_3}\int_{\mathbb{R}^d}f|\nabla a| \;dvdt<\infty. \end{align*} From the definition of $\nabla a$, we have $|\nabla a(v)| \leq \tilde a_1+\tilde a_2$, where \begin{align*} \tilde a_1 & := C(d,\gamma)\int_{B_1(v)}f(w)|v-w|^{1+\gamma}\;dw,\\ \tilde a_2 & := C(d,\gamma)\int_{\mathbb{R}^d\setminus B_1(v)}f(w)|v-w|^{1+\gamma}\;dw. \end{align*} Since $1+\gamma\leq 0$ and $|v-w|\geq 1$, we have for all times \begin{align*} \tilde a_2(v,t) = C(d,\gamma)\int_{\mathbb{R}^d\setminus B_1(v)}f(w,t)|v-w|^{1+\gamma}\;dw \leq C(d,\gamma)\|f\|_{L^1(\mathbb{R}^d)}.
\end{align*} From here, it is immediate that \begin{align*} \int_{T_1}^{T_3}\int_{\mathbb{R}^d}f \tilde a_2 \;dvdt<\infty. \end{align*} Let us now deal with the term $\tilde a_1$. Given that $\gamma \in (-2,-1)$, we have \begin{align*} C(d,\gamma)\chi_{B_1(0)}|v|^{1+\gamma} \in L^{q_\gamma}(\mathbb{R}^d) \textnormal{ for any fixed } q_\gamma \in \left ( d, \frac{d}{-\gamma-1} \right ). \end{align*} Therefore, by the preservation of the $L^1(\mathbb{R}^d)$ norm of $f(t)$, we conclude that \begin{align*} \| \tilde a_1 \|_{L^\infty(T_1,T_3,L^{q_\gamma}(\mathbb{R}^d) )} <\infty. \end{align*} With this in mind, we apply Young's inequality, which yields \begin{align*} f\tilde a_1 \leq f^{\frac{q_\gamma}{q_\gamma-1}} + \tilde a_1^{q_\gamma}. \end{align*} Therefore, \begin{align*} & \int_{T_1}^{T_3} \int_{\mathbb{R}^d} f \tilde a_1 \;dvdt \leq \int_{T_1}^{T_3} \int_{\mathbb{R}^d} f^{\frac{q_\gamma}{q_\gamma-1}} \;dvdt + \int_{T_1}^{T_3} \int_{\mathbb{R}^d} \tilde a_1^{q_\gamma }\;dvdt. \end{align*} Now, recall that by Proposition \ref{prop: LpLp bound via interpolation1} we know that $\int_{T_1}^{T_3}\int_{\mathbb{R}^d} f^{p_\gamma}\;dvdt$ is finite. Moreover, from the definition of $p_\gamma$ and the fact that $q_\gamma>d$ we have $\frac{q_\gamma}{q_\gamma-1}<p_\gamma$; it follows that \begin{align*} \int_{T_1}^{T_3} \int_{\mathbb{R}^d} f^{\frac{q_\gamma}{q_\gamma-1}} \;dvdt <\infty. \end{align*} At the same time, from the bound for $\tilde a_1$, we have \begin{align*} \int_{T_1}^{T_3}\int_{\mathbb{R}^d}\tilde a_1^{q_\gamma} \;dvdt = \int_{T_1}^{T_3} \|\tilde a_1(t)\|_{q_\gamma}^{q_\gamma}\;dt <\infty. \end{align*} In either case, for $\gamma \in (-2,0]$ and with $h>0$ fixed, we have \begin{align*} \lim\limits_{R\to\infty} \int_{T_1}^{T_3}\int_{\textnormal{spt}(\nabla \eta_R)}(h+1)^{p-1}(f-h)_+|\nabla a||\nabla \eta_R^2|\;dvdt = 0.
\end{align*} Therefore, \begin{align*} \int_{\mathbb{R}^d}\phi(f(t_2))\;dv \leq \frac{2}{T_2-T_1}\int_{T_1}^{T_2}\int_{\mathbb{R}^d} \phi(f)\;dvdt+p\Lambda_f(\tfrac{1}{2p})\int_{T_1}^{T_3}\int_{\mathbb{R}^d}\phi(f)\;dvdt. \end{align*} Taking the sup with respect to $t_2 \in (T_2,T_3)$ we get \begin{align*} \sup_{t\in(T_2,T_3)} \int_{\mathbb{R}^d}\phi(f)(t)\;dv \leq \left( \frac{2}{T_2-T_1} \right)\int_{T_1}^{T_3}\int_{\mathbb{R}^d}\phi(f)\;dvdt+p\Lambda_f(\tfrac{1}{2p})\int_{T_1}^{T_3}\int_{\mathbb{R}^d} \phi(f)\;dvdt. \end{align*} It remains to take the limit as $h\to \infty$, recalling that $\phi = \phi_{p_\gamma,h}$. Using Proposition \ref{prop: LpLp bound via interpolation1} and Lemma \ref{lem:properties of approximations to p-th power}, we conclude that \begin{align*} & \limsup \limits_{h\to \infty} \left \{ \left( \frac{2}{T_2-T_1} \right)\int_{T_1}^{T_3}\int_{\mathbb{R}^d}\phi_{p_\gamma,h}(f)\;dvdt+p\Lambda_f(\tfrac{1}{2p})\int_{T_1}^{T_3}\int_{\mathbb{R}^d} \phi_{p_\gamma,h}(f)\;dvdt \right \} \\ & \leq \left ( \frac{2}{T_2-T_1}+p\Lambda_{f}(\tfrac{1}{2p})\right ) \int_{T_1}^{T_3} \int_{\mathbb{R}^d}f^{p_\gamma}\;dvdt \leq C(d,\gamma,f_{\textnormal{in}}) \left ( \frac{1}{T_2-T_1}+p\Lambda_{f}(\tfrac{1}{2p})\right ). \end{align*} At the same time, \begin{align*} \sup_{t\in(T_2,T_3)} & \int_{\mathbb{R}^d}\frac{1}{p_\gamma}f^{p_\gamma}(t)\;dv \leq \limsup\limits_{h\to \infty} \sup_{t\in(T_2,T_3)} \int_{\mathbb{R}^d}\phi_{p_\gamma,h}(f)(t)\;dv. \end{align*} In conclusion, \begin{align*} \sup_{t\in(T_2,T_3)} & \int_{\mathbb{R}^d}f^{p_\gamma}(t)\;dv \leq C(d,\gamma,f_{\textnormal{in}}) \left ( \frac{1}{T_2-T_1}+p\Lambda_{f}(\tfrac{1}{2p})\right ) < \infty. \end{align*} It is not hard to see that $p_\gamma \geq \frac{d}{d-1}$ with $p_\gamma$ as in Proposition \ref{prop: LpLp bound via interpolation1}, in the case $\gamma \in (-2,0]$. From here, the lemma follows by interpolation with the $L^1$ norm.
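The splitting $f\tilde a_1 \leq f^{\frac{q_\gamma}{q_\gamma-1}}+\tilde a_1^{q_\gamma}$ used in the proof above is Young's inequality $ab\leq a^r/r+b^{r'}/r'$ for conjugate exponents, with the harmless factors $1/r$, $1/r'$ discarded. The following Python sketch (an illustration only; all names are ours) checks it on random data:

```python
import numpy as np

rng = np.random.default_rng(1)

def check_young_product(trials=10_000):
    # Check of the pointwise splitting  a * b <= a**r + b**qg  with
    # conjugate exponents r = qg/(qg - 1) and qg, for nonnegative a, b.
    # Young's inequality gives a*b <= a**r / r + b**qg / qg, and dropping
    # the factors 1/r, 1/qg only enlarges the right hand side.
    for _ in range(trials):
        a, b = rng.uniform(0.0, 10.0, 2)
        qg = rng.uniform(1.5, 20.0)      # stands for the exponent q_gamma > 1
        r = qg / (qg - 1)
        if a * b > a**r + b**qg + 1e-9:
            return False
    return True
```

Here $a$ stands for $f$ and $b$ for $\tilde a_1$ at a point; the point of the splitting in the proof is that each resulting term is integrable for the stated exponents.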
\end{proof} \begin{prop}\label{prop:energy inequality negligible term} Let $p=1+\tfrac{2}{d}$, let $\eta \in C^2_c(\mathbb{R}^d)$, and let $\tau<\tau'$ be times. Suppose that $f$ is a weak solution and either 1) $\gamma \in (-2,0]$ or 2) $\gamma \in [-d,-2]$ and $f$ is bounded. Then, \begin{align*} \lim\limits_{h\to\infty}\int_{\tau}^{\tau'}\int_{\{f>h\}}(h+1)^{p-1}(f-h)_+|\nabla a||\nabla \eta^2|\;dvdt = 0. \end{align*} \end{prop} \begin{proof} The second case is trivial, since $\{f>h\}$ is empty for large enough $h$, so the integral is exactly zero for all large $h$. Let us prove the proposition in the first case. Taking $h\geq 1$ (without loss of generality), we have \begin{align*} \int_{\tau}^{\tau'}\int_{\{f>h\}}(h+1)^{p-1}(f-h)_+|\nabla a||\nabla \eta^2|\;dvdt \leq 2^p\|\nabla a\|_{L^\infty}\int_{\tau}^{\tau'}\int_{\{f>h\}}f^p|\nabla \eta^2|\;dvdt, \end{align*} where the $L^\infty$ norm is taken over $(\tau,\tau')\times \textnormal{spt}( \nabla \eta^2)$. We observe that the Proposition will be proved once we show that this $L^\infty$ norm of $\nabla a$ is finite. Since we are in the case $\gamma \in (-2,0]$, for any $t\in (\tau,\tau')$ and any ball $B_r$ we have \begin{align*} \|\nabla a(t)\|_{L^\infty(B_r)} \leq C\| f * |v|^{1+\gamma}\|_{L^\infty(B_r)}. \end{align*} From this inequality and the $L^{\frac{d}{d-1}}$ bound from Lemma \ref{lem:Ld/d-1 bound for f unif in time}, the proposition follows. \end{proof} \begin{cor}\label{cor_iterat} Let $f$ be a weak solution in $(0,T)$, $p\geq 1+\tfrac{2}{d}$, $\eta \in C^2_c(\mathbb{R}^d)$, and let $0<T_1<T_2<T_3<T$ be times. Let us assume that either $\gamma \in (-2,0]$ and \begin{align*} \sup \limits_{T_1\leq t\leq T_3} \int_{\textnormal{spt}(\eta)} (f(t))^p\;dv < \infty, \end{align*} or that $\gamma \in [-d,-2]$ and $f$ is a bounded solution for which Assumption \ref{Assumption:Epsilon Poincare} holds.
Then, \begin{align*} \sup \limits_{(T_2,T_3)}\int_{\mathbb{R}^d}\eta^2 f^p\;dv+\frac{p-1}{p}\int_{T_2}^{T_3}\int_{\mathbb{R}^d}(A\nabla (\eta f^{\frac{p}{2}}),\nabla (\eta f^{\frac{p}{2}}))\;dvdt, \end{align*} is no larger than \begin{align*} & \left (\frac{1}{T_2-T_1}+\tfrac{p}{2} \Lambda_f (\tfrac{1}{2p}) \right )\int_{T_1}^{T_3}\int_{\mathbb{R}^d}\eta^2 f^p\;dvdt +C(d)\left (\|\nabla \eta\|_\infty^2+\|D^2\eta^2\|_\infty \right )\int_{T_1}^{T_3}\int_{\textnormal{spt}(\eta)}f^pa\;dvdt. \end{align*} \end{cor} \begin{proof} Let $\tau \in (T_2,T_3)$. Then, by Lemma \ref{lem:energy inequality with bad term}, \begin{align*} \int_{\mathbb{R}^d}\eta^2 \phi(f)(\tau)\;dv+\frac{1}{4}\int_{T_2}^{T_3}\int_{\mathbb{R}^d}(A\nabla (\eta \overline{\phi}(f) ) ,\nabla (\eta \overline{\phi}(f) ) )\;dvdt, \end{align*} is bounded from above by \begin{align*} & \left ( \frac{1}{T_2-T_1} +\tfrac{p}{2}\Lambda_f(\tfrac{1}{2p})\right )\int_{T_1}^{T_3}\int_{\mathbb{R}^d}\eta^2 \phi(f)(t)\;dvdt +C(d)\left (\|\nabla \eta\|_\infty^2+\|D^2\eta^2\|_\infty \right )\int_{T_1}^{T_3}\int_{\textnormal{spt}(\eta)}\phi(f)a\;dvdt\\ & +\frac{p+4}{2}\int_{T_1}^{T_3}\int_{\{f>h\}} (h+1)^{p-1}(f-h)|\nabla a||\nabla \eta^2|\;dvdt. \end{align*} Applying monotone convergence, the lower semicontinuity of the (weighted) $\dot H^1$ seminorm, and the same argument used in Proposition \ref{prop:energy inequality negligible term}, we obtain in the limit $h\to \infty$ the inequality \begin{align*} & \frac{1}{p}\int_{\mathbb{R}^d}\eta^2 f^p(\tau)\;dv+\frac{p-1}{p^2}\int_{T_2}^{T_3}\int_{\mathbb{R}^d}(A\nabla (\eta f^{\frac{p}{2}} ) ,\nabla (\eta f^{\frac{p}{2}} ) )\;dvdt\\ & \leq \left ( \frac{1}{T_2-T_1} +\tfrac{p}{2}\Lambda_f (\tfrac{1}{2p})\right )\frac{1}{p}\int_{T_1}^{T_3}\int_{\mathbb{R}^d}\eta^2 f^p\;dvdt \\ & \;\;\;\;+C(d)\left (\|\nabla \eta\|_\infty^2+\|D^2\eta^2\|_\infty \right )\frac{1}{p} \int_{T_1}^{T_3}\int_{\textnormal{spt}(\eta)}f^p a\;dvdt.
\end{align*} To finish the proof, we multiply both sides of the resulting inequality by $p$ and take the supremum of the left hand side with respect to $\tau \in (T_2,T_3)$. \end{proof} \subsection{De Giorgi-Nash-Moser iteration} Fix $R>1$ and define the sequences \begin{align}\label{eqn:iteration times and radii} T_n = \frac{1}{4}\left ( 2- \frac{1}{2^n} \right )T,\;\;R_n = \frac{1}{2}\left(1 + \frac{1}{2^n}\right)R. \end{align} Associated to $R_n$, we will denote by $\eta_n$ a sequence of functions such that \begin{align} & \eta_n \in C^2_c(B_{R_n}(0)), \;\;0\leq \eta_n \leq 1 \textnormal{ in } \mathbb{R}^d,\;\; \eta_n \equiv 1 \textnormal{ in } B_{R_{n+1}}(0), \notag\\ & \|\nabla \eta_n\|_{L^\infty} \leq CR^{-1}2^n,\;\;\|D^2\eta_n\|_{L^\infty} \leq CR^{-2}4^n. \label{eqn:iteration cut off functions} \end{align} \begin{lem}\label{lem:weighted Moser estimate}{(Moser's Iteration)} Let $f$ be a weak solution in $(0,T)$ if $\gamma \in (-2,0]$, or let $f$ be a bounded weak solution satisfying Assumptions \ref{Assumption:Epsilon Poincare} and \ref{Assumption:Local Doubling} if $\gamma \in [-d,-2]$. Then, with $p=1+\tfrac{2}{d}$ and $R>1$, there is a constant $C$ with $C=C(d,\gamma,f_{\textnormal{in}})$ or $C=C(d,\gamma,f_{\textnormal{in}},C_D,C_P,\kappa_P)$ accordingly, such that \begin{align*} \|f\|_{L^\infty(B_{R/2}(0)\times (T/2,T))} \leq C\left \{ R^{-\gamma}\left (\frac{1}{T}+1 \right ) \right \}^{\frac{1}{p}\frac{q}{q-2}}\left (\int_{T/4}^T \int_{B_R(0)} f^{p}{a^*}_{f,\gamma} \;dvdt \right )^{\frac{1}{p}}. \end{align*} Here, $q = 2+\frac{4}{d}$ if $\gamma>-d$, while $q$ can be any exponent in $(2,2+\frac{4}{d})$ if $\gamma=-d$. \end{lem} \begin{proof} With $p=1+\tfrac{2}{d}$ fixed, we define the sequence \begin{align*} p_n = p \left (\frac{q}{2} \right)^n,\;\;\forall\;n\in\mathbb{N}, \end{align*} with $q$ as above.
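Before running the iteration, the following Python sketch (an illustration only; the constants $K$ and $b_0$ are placeholders for $R^{-\gamma}(1/T+1)$ and the constant appearing later in the proof) records the bookkeeping that the recursion for $E_n$ derived below reduces to, together with the exponent identity $\frac{q}{p(q-2)}=\frac{d}{2}$ used in the proof of the main theorems:

```python
def iterate_moser(E0, K, b0, p, q, n_steps=60):
    # Sketch of the bookkeeping behind Moser's iteration: given the
    # recursion  E_{n+1} <= b0^{n (2/q)^n} * K^{(1/p)(2/q)^n} * E_n
    # (treated here as an equality), the exponents form convergent
    # geometric-type series, so E_n stays bounded by
    #   b0^{2q/(q-2)^2} * K^{q/(p(q-2))} * E0.
    E = E0
    for n in range(n_steps):
        r = (2.0 / q) ** n
        E *= b0 ** (n * r) * K ** (r / p)
    return E
```

For instance, with $d=3$, $p=1+2/d$ and $q=2(1+2/d)$, iterating this recursion numerically stays below the closed-form bound, and the limiting exponent $\frac{q}{p(q-2)}$ equals $\frac{d}{2}$.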
Then, for each $n\ge 0$, let $E_n$ denote the quantity \begin{align*} E_n := \left ( \int_{T_n}^T\int \eta_n^q f^{p_n}{a^*}\;dvdt\right )^{\frac{1}{p_n}}, \end{align*} where the times $T_n$ and the functions $\eta_n$ are as in \eqref{eqn:iteration times and radii} and \eqref{eqn:iteration cut off functions}. As is standard for divergence-form elliptic equations, we are going to derive a recursive relation for $E_n$. To do this, first note that $E_{n+1}^{p_n}$ may be written as \begin{align*} E_{n+1}^{p_{n}} = \left ( \int_{T_{n+1}}^T\int_{B_R(0)} \left ( \eta_{n+1} f^{\frac{p_{n}}{2}}\right )^{q}{a^*}\;dvdt \right )^{\frac{2}{q}}. \end{align*} Therefore, Theorem \ref{lem:Inequalities Sobolev weight aII} (if $\gamma\in(-2,0]$) and Theorem \ref{thm:Inequalities Sobolev weight gamma below -2} (if $\gamma\in[-d,-2]$, under Assumption \ref{Assumption:Local Doubling}) lead to \begin{align*} E_{n+1}^{p_{n}} \leq C \left \{ \sup \limits_{T_{n+1} \leq t\leq T} \left \{ \int \eta_{n+1}^2 f^{p_n}(t)\;dv \right \} + \frac{(p_n-1)}{p_n}\int_{T_{n+1}}^{T}\int a^* |\nabla (\eta_{n+1} f^{\frac{p_n}{2}})|^2\;dvdt \right \}, \end{align*} where (recall $C_D$ denotes the constant in Assumption \ref{Assumption:Local Doubling}) \begin{align*} C & = C(d,\gamma) \textnormal{ if } \gamma \in (-2,0], \; C = C(d,\gamma,C_D) \textnormal{ if } \gamma \in (-d,-2],\textnormal{ and}\\ C & = C(d,\gamma,q,C_D) \textnormal{ if } \gamma=-d. \end{align*} Applying Corollary \ref{cor_iterat} with $T_1 = T_n$, $T_2 = T_{n+1}$ and $T_3 = T$ we have \begin{align*} C^{-1}E_{n+1}^{p_{n}} & \leq \left( \frac{2^{n+2}}{T}+ p_n\Lambda_f (\tfrac{1}{2p_n}) \right) \int_{T_n}^{T}\int_{\mathbb{R}^d} \eta_{n+1}^2f^{p_n}\;dvdt\\ & \;\;\;\;+C(d)\left ( \|\nabla \eta_{n+1}\|_{\infty}^2+\|D^2\eta_{n+1}^2\|_{\infty} \right )\int_{T_n}^{T}\int_{\textnormal{spt}(\eta_{n+1})} f^{p_n}a \;dvdt.
\end{align*} Since $\eta_n \equiv 1$ in the support of $\eta_{n+1}$, and $\nabla \eta_{n+1},D^2\eta_{n+1}$ are supported in $B_{R_{n+1}}\setminus B_{R_{n+2}}$, where we have $|v|\approx R$ (since $R\geq 1$), we have the pointwise inequalities \begin{align*} \eta_{n+1}^2 & \leq \eta_{n}^q,\\ |\nabla \eta_{n+1}| & \leq C2^nR^{-1} \eta_n^q \leq C2^n\langle v\rangle^{-1}\eta_n^q,\\ \eta_{n+1}|D^2\eta_{n+1}| & \leq C4^nR^{-2}\eta_n^q\leq C 4^n\langle v\rangle^{-2}\eta_n^q. \end{align*} Substituting these in the above inequality, we have \begin{align*} C^{-1}E_{n+1}^{p_{n}} & \leq \left ( \frac{2^{n+2}}{T}+p_n\Lambda_f(\tfrac{1}{2p_n}) \right )\int_{T_n}^T \int_{\mathbb{R}^d} \eta_n^q f^{p_n} \;dvdt +C(d)4^n \int_{T_n}^T \int_{\mathbb{R}^d} \eta_n^q f^{p_n} \langle v\rangle^{-2}a\;dvdt. \end{align*} Now we apply Theorem \ref{thm:sufficient conditions for the Poincare inequality} for $\gamma \in (-2,0]$, and for $\gamma \in [-d,-2]$ we make use of Assumption \ref{Assumption:Epsilon Poincare} (or use Theorem \ref{thm:sufficient conditions for the Poincare inequality} with the respective extra Assumption), and obtain \begin{align*} C^{-1}E_{n+1}^{p_{n}} & \leq \left ( \frac{2^{n+2}}{T}+C_P(2p_n)^{1+\kappa_P} \right )\int_{T_n}^T \int_{\mathbb{R}^d} \eta_n^q f^{p_n} \;dvdt +C(d)4^n \int_{T_n}^T \int_{\mathbb{R}^d} \eta_n^q f^{p_n} \langle v\rangle^{-2}a\;dvdt. \end{align*} Let us analyze each integral on the right. First, we have $1\leq CR^{-\gamma}{a^*}$ in $B_{R}(0)$, so that \begin{align*} & \left ( \frac{2^{n+2}}{T}+C_P(2p)^{1+\kappa_P}(q/2)^{n(1+\kappa_P)} \right )\int_{T_n}^T \int_{\mathbb{R}^d} \eta_n^q f^{p_n} \;dvdt \\ & \leq C \max\{2,(q/2)^{1+\kappa_P}\}^n\left ( \frac{1}{T}+1 \right )R^{-\gamma} \int_{T_n}^T \int_{\mathbb{R}^d} \eta_n^q f^{p_n}{a^*} \;dvdt, \end{align*} with $C= C(d,\gamma,f_{\textnormal{in}},C_P,\kappa_P)$.
On the other hand, by Proposition \ref{prop:a vs a star gamma larger than -2} (for $\gamma \in (-2,0]$) and Proposition \ref{prop:a vs a star gamma below -2} (for $\gamma\in [-d,-2]$) we have \begin{align*} C(d)4^n\int_{T_n}^T\int_{\mathbb{R}^d}\eta_n^q f^{p_n}\langle v\rangle^{-2}a\;dvdt \leq C4^n R^{\underline{m}} \int_{T_n}^T\int_{\mathbb{R}^d}\eta_n^q f^{p_n}{a^*} \;dvdt, \end{align*} where, again, we have $C=C(d,\gamma,f_{\textnormal{in}})$ or $C=C(d,\gamma,f_{\textnormal{in}},C_D)$ depending on whether $\gamma>-2$ or not, and where \begin{align*} \underline{m} = 0 \textnormal{ if } \gamma\in (-2,0], \underline{m} = \max\{-\gamma-4,0\} \textnormal{ if } \gamma\in [-d,-2]. \end{align*} Let us write \begin{align*} b := \max\{4,(q/2)^{1+\kappa_P}\}. \end{align*} Since $R\geq 1$, we may add up terms taking the worst factor of $R$ (note that we always have $\underline{m}\leq -\gamma$) therefore we arrive at \begin{align*} & E_{n+1}^{p_{n}} \leq Cb^nR^{-\gamma }\left( \frac{1}{T}+1 \right)\int_{T_n}^{T}\int_{\mathbb{R}^d} \eta_n^q f^{p_n} {a^*} \;\;dvdt \end{align*} Taking the $1/p_n$ root from both sides of the last inequality, it follows that \begin{align*} E_{n+1} & \leq C^{\frac{1}{p}\left ( \frac{2}{q}\right)^n} \left ( b^{\frac{1}{p}} \right )^{n\left (\frac{2}{q} \right )^n}\left \{ R^{-\gamma}\left( \frac{1}{T}+1 \right) \right \}^{\frac{1}{p}\left (\frac{2}{q} \right)^n} E_{n}. \end{align*} Then, for a universal $b_0$, we have \begin{align*} E_{n+1} \leq b_0^{n\left (\frac{2}{q}\right )^n}\left \{ R^{-\gamma}\left (\frac{1}{T}+1 \right ) \right \}^{\frac{1}{p}\left ( \frac{2}{q}\right )^n} E_n. \end{align*} This recursive relation, and a straightforward induction argument, yield that \begin{align*} E_{n} \leq b_0^{ \sum \limits_{k=0}^{n-1}k \left ( \frac{2}{q}\right )^k}\left \{ R^{-\gamma}\left ( \frac{1}{T}+1\right ) \right \}^{\frac{1}{p} \sum \limits_{k=0}^{n-1} \left (\frac{2}{q} \right )^k } E_0. 
\end{align*} Now, since \begin{align*} \sum \limits_{k=0}^{n-1}\left ( \frac{2}{q}\right )^k \leq \sum \limits_{k=0}^\infty \left ( \frac{2}{q}\right )^k = \frac{q}{q-2},\;\;\; \sum \limits_{k=0}^{n-1}k \left ( \frac{2}{q}\right )^k \leq \sum \limits_{k=0}^\infty k \left ( \frac{2}{q}\right )^k = \frac{2q}{(q-2)^2}, \end{align*} we conclude that \begin{align}\label{eqn:Moser iteration Lpn estimate for all n} E_n \leq C\left \{ R^{-\gamma}\left (\frac{1}{T}+1 \right ) \right \}^{\frac{1}{p}\frac{q}{q-2}}E_0, \end{align} where $C=C(d,\gamma,f_{\textnormal{in}})$ or $C=C(d,\gamma,f_{\textnormal{in}},C_D,C_P,\kappa_P)$ accordingly, and \begin{align*} E_0 = \left( \int_{T/4}^T \int_{B_R(0)} f^p {a^*} dvdt \right)^{1/p}. \end{align*} Now, since $\eta_n\equiv 1$ on $B_{R/2}(0)$ and $T_n\leq T/2$ for all $n$, it follows that \begin{align*} E_n \geq \left ( \int_{T/2}^T\int_{B_{R/2}(0)} f^{p_n} {a^*} \;dvdt \right )^{\frac{1}{p_n}}. \end{align*} At the same time, for each $n$ we have \begin{align*} E_n \geq \left ( \inf \limits_{B_R\times (0,T)} {a^*} \right )^{\frac{1}{p_n}} \left ( \int_{T/2}^T\int_{B_{R/2}(0)} f^{p_n}\;dvdt \right )^{\frac{1}{p_n}}. \end{align*} Since the infimum of ${a^*}$ over $B_R\times (0,T)$ is strictly positive for each $R$, it follows that \begin{align*} \limsup \limits_{n \to \infty} E_n \geq \| f\|_{L^\infty(B_{R/2}(0)\times (T/2,T))}. \end{align*} This, together with \eqref{eqn:Moser iteration Lpn estimate for all n}, proves the lemma. \end{proof} \begin{proof}[Proof of Theorems \ref{thm:main_1} and \ref{thm:very soft potentials estimate}.] It is elementary that if $p=1+\frac{2}{d}$ and $q=2(1+\frac{2}{d})$, then \begin{align*} \frac{q}{p(q-2)} = \frac{d}{2}.
\end{align*} In this case, applying Lemma \ref{lem:weighted Moser estimate} with this selection of $p$ and $q$, and $R>0$, we have \begin{align*} \|f\|_{L^\infty(B_{R/2}(0)\times (T/2,T))} \leq C R^{-\gamma \tfrac{d}{2}}\left (\frac{1}{T}+1 \right )^{\frac{d}{2}} \left (\int_{T/4}^T \int_{B_R(0)}f^{p}{a^*} \;dvdt \right )^{\frac{d}{d+2}}. \end{align*} Then, Proposition \ref{prop:LpLp via interpolation with weight} yields \begin{align*} \|f\|_{L^\infty({T}/{2},T; B(0, R/2))} & \leq C(f_{\textnormal{in}},d,\gamma) R^{-\gamma \tfrac{d}{2}} \left (\frac{1}{T}+1 \right )^{\frac{d}{2}}. \end{align*} \end{proof} \section{The Coulomb potential's case}\label{section:Coulomb regularization} Throughout this section we focus on the Coulomb case; accordingly, we set $\gamma=-d$ and make no further reference to the case $\gamma\neq-d$. At the end we will prove Theorem \ref{thm:Coulomb case good estimate}. Since $\gamma=-d$ we have $h_{f,\gamma} = f$, and thus the $\varepsilon$-Poincar\'e inequality, when applied to $\phi = f^p$ for some $p$, yields an inequality that may be used in place of the Sobolev embedding in an iteration procedure akin to the more standard one in Lemma \ref{lem:weighted Moser estimate}. \begin{rem} Related to the $\varepsilon$-Poincar\'e inequality there is a very interesting \emph{nonlinear} inequality proved by Gressman, Krieger, and Strain in \cite{GreKriStr2012}: for each $p>0$ and any Lipschitz non-negative $f \in L^1(\mathbb{R}^d)$ we have the inequality \begin{align}\label{eqn:GressmannKriegerStrain} \int_{\mathbb{R}^d} f^{p+1}\;dv \leq \left (\frac{p+1}{p} \right )^2\int_{\mathbb{R}^d} (A_{f,\gamma}\nabla f^{\frac{p}{2}},\nabla f^{\frac{p}{2}})\;dv. \end{align} As pointed out in \cite{GreKriStr2012}, the explicit constant in the inequality becomes sharp as $p\to 1^+$ (it corresponds to the $H$-theorem).
The difference between the $\varepsilon$-Poincar\'e and \eqref{eqn:GressmannKriegerStrain} is that the former involves an arbitrary Lipschitz, integrable function $\phi$, whereas the latter one involves the function $\phi = f^{\frac{p}{2}}$. \end{rem} In the next two lemmas we prove higher integrability for $f$ and for $a_{f,\gamma}$, always assuming the $\varepsilon$-Poincar\'e inequality. \begin{prop}\label{prop:L2L2 bound for Coulomb case} Let $f(v,t)$ be a classical solution in $(0,T)$ for which the $\varepsilon$-Poincar\'e holds uniformly in time for some $\varepsilon<4$ (this is guaranteed by the much stronger Assumption \ref{Assumption:Epsilon Poincare}). Then, the following estimate holds: \begin{align*} \int_{T_1}^{T_2}\int_{\mathbb{R}^d}f^2\;dvdt \leq \frac{4}{4-\varepsilon}(C(f_{\textnormal{in}})+(T_2-T_1)\Lambda(\varepsilon)). \end{align*} \end{prop} \begin{proof} Applying the $\varepsilon$-Poincar\'e with $\phi=f^{1/2}$ at any time $t\in (T_1,T_2)$ leads to \begin{align*} \int_{\mathbb{R}^d} f^2\;dv \leq \varepsilon \int_{\mathbb{R}^d}(A\nabla f^{1/2},\nabla f^{1/2})\;dv+\Lambda(\varepsilon)\int_{\mathbb{R}^d}f\;dv. \end{align*} On the other hand, \begin{align*} 4\int_{T_1}^{T_2}\int_{\mathbb{R}^d}(A\nabla f^{1/2},\nabla f^{1/2})\;dvdt-\int_{T_1}^{T_2}\int_{\mathbb{R}^d}f^2\;dvdt \leq C(f_{\textnormal{in}}). \end{align*} Combining these two, always keeping in mind that $\|f(t)\|_{L^1}=1$ for all $t$, it follows that \begin{align*} (4-\varepsilon)\int_{T_1}^{T_2}\int_{\mathbb{R}^d}(A\nabla f^{1/2},\nabla f^{1/2})\;dvdt \leq C(f_{\textnormal{in}})+(T_2-T_1)\Lambda(\varepsilon). \end{align*} Then, we apply \eqref{eqn:GressmannKriegerStrain}, and conclude that \begin{align*} \int_{T_1}^{T_2}\int_{\mathbb{R}^d}f^2\;dvdt \leq \frac{4}{4-\varepsilon}\left ( C(f_{\textnormal{in}})+(T_2-T_1)\Lambda(\varepsilon)\right ).
\end{align*} \end{proof} Observe that in the last step of the previous proof we could have just as well used the $\varepsilon$-Poincar\'e inequality in place of \eqref{eqn:GressmannKriegerStrain}; in any case, the result is conditional, as it relies on the $\varepsilon$-Poincar\'e to control the quadratic term in the first place. The next Lemma shows that, with a constant that grows with $p$, one can obtain $L^pL^p$ estimates for $f$ using the energy inequality (which assumes the $\varepsilon$-Poincar\'e inequality) and the inequality \eqref{eqn:GressmannKriegerStrain}. \begin{lem}\label{lem:LpLp iterative bound for Coulomb} Let $f$ be as before, except we now assume the $\varepsilon$-Poincar\'e holds for any sufficiently small $\varepsilon$ (note this is still weaker than Assumption \ref{Assumption:Epsilon Poincare}). Let $n\geq 0$ be any nonnegative integer and $p_0>1$. Then there is a $C$ determined by $n$, $\Lambda_{f}$ (for the whole time interval), and $p_0$, such that \begin{align*} \sup \limits_{T/4\leq t\leq T} \|f(t)\|_{L^{p_0+n}(\mathbb{R}^d)} \leq C\left (\frac{1}{T}+1 \right )^{\frac{n+1}{p_0+n}}\left (\int_{0}^T\int_{\mathbb{R}^d} f^{p_0}\;dvdt \right )^{\frac{1}{p_0+n}}. \end{align*} \end{lem} \begin{proof} Let us write \begin{align*} p_n := p_0+n,\;\;T_{n} = \frac{1}{4}\left ( 1- \frac{1}{2^n} \right )T. \end{align*} We apply Corollary \ref{cor_iterat} with exponent $p_n$ and $\eta=1$; or, more precisely, we apply it to a sequence of $\eta$'s with compact support which converge locally uniformly to the constant $1$ and pass to the limit. It follows that \begin{align*} \frac{p_0-1}{2p_0}\int_{T_{n+1}}^T\int_{\mathbb{R}^d}a_*|\nabla f^{\frac{p_n}{2}}|^2\;dvdt \leq \left ( \frac{2^{n+3}}{T}+ p_n\Lambda_f(\tfrac{1}{2p_n}) \right ) \int_{T_n}^T\int_{\mathbb{R}^d} f^{p_n}\;dvdt.
\end{align*} Applying \eqref{eqn:GressmannKriegerStrain}, it follows that \begin{align*} \int_{T_{n+1}}^T \int_{\mathbb{R}^d} f^{p_{n+1}}\;dvdt \leq C(p_0,\Lambda_f(1/2p_n),n) \left ( \frac{1}{T} +1 \right ) \int_{T_n}^T\int_{\mathbb{R}^d} f^{p_n}\;dvdt. \end{align*} Iterating this estimate, it follows that for some $C=C(p_0,\Lambda_f(\cdot),n)$ \begin{align*} \int_{T_{n+1}}^T \int_{\mathbb{R}^d} f^{p_{n+1}}\;dvdt \leq C \left ( \frac{1}{T} +1 \right )^{n+1} \int_{0}^T\int_{\mathbb{R}^d} f^{p_0}\;dvdt. \end{align*} Combining this bound with the energy inequality for $f^{p_{n+1}}$, we have \begin{align*} \sup_{T_{n+2} \leq t\leq T} \left \{ \int_{\mathbb{R}^d} f^{p_{n+1}}(t)\;dv \right \} &\leq C\left ( \frac{1}{T} +1 \right )^{n+2} \int_{0}^T\int_{\mathbb{R}^d} f^{p_0}\;dvdt, \end{align*} for some even bigger $C=C(p_0,\Lambda_f(\cdot),n)$, which implies \begin{align*} \sup \limits_{T_{n+1}\leq t\leq T} \|f(t)\|_{L^{p_n}(\mathbb{R}^d)} & \leq C\left (\frac{1}{T}+1 \right )^{\frac{n+1}{p_n}}\left (\int_{0}^T\int_{\mathbb{R}^d} f^{p_0}\;dvdt \right )^{\frac{1}{p_n}}. \end{align*} Since $T_n\leq T/4$ for each $n\ge 0$, we conclude that \begin{align*} \sup \limits_{T/4\leq t\leq T} \|f(t)\|_{L^{p_n}(\mathbb{R}^d)} & \leq C\left (\frac{1}{T}+1 \right )^{\frac{n+1}{p_n}}\left (\int_{0}^T\int_{\mathbb{R}^d} f^{p_0}\;dvdt \right )^{\frac{1}{p_n}}, \end{align*} where $C$ is a function of $p_0$, $\Lambda_f(\cdot)$, and $n$. \end{proof} \begin{rem} The constant obtained in Lemma \ref{lem:LpLp iterative bound for Coulomb} goes to infinity as $n$ increases to $\infty$. This is different from the usual Moser-type iteration; however, the way Lemma \ref{lem:LpLp iterative bound for Coulomb} will be used does not involve passing to the limit: instead, we will take a large, but finite, $n$. \end{rem} A consequence of the previous lemma is highlighted in the following corollary.
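Before stating it, the bookkeeping behind Lemma \ref{lem:LpLp iterative bound for Coulomb} can be sanity-checked numerically: each step contributes a factor $C(1/T+1)$, so $n+1$ steps accumulate $C^{n+1}(1/T+1)^{n+1}$, and taking the $p_n$-th root at the end produces the powers $(n+1)/p_n$ and $1/p_n$ in the statement. A toy sketch (all numerical values are placeholders, not from the paper):

```python
# Toy check of the iteration: each step multiplies the space-time integral by
# c*(1/T + 1); after n+1 steps the accumulated factor is c^{n+1}*(1/T+1)^{n+1},
# and the p_n-th root gives the exponents (n+1)/p_n and 1/p_n.
p0, n, T, c, I0 = 2, 5, 0.3, 1.7, 4.2   # placeholder values
I = I0
for _ in range(n + 1):
    I = c * (1 / T + 1) * I
closed_form = c ** (n + 1) * (1 / T + 1) ** (n + 1) * I0
p_n = p0 + n
bound = (c ** (n + 1)) ** (1 / p_n) * (1 / T + 1) ** ((n + 1) / p_n) * I0 ** (1 / p_n)
print(abs(I - closed_form) < 1e-9 * closed_form,
      abs(I ** (1 / p_n) - bound) < 1e-9 * bound)   # True True
```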
\begin{cor}\label{a_bounded} There is a constant $C(f_{\textnormal{in}},d,C_P,\kappa_P,p)$ such that \begin{align*} \|a_{f,\gamma}\|_{L^\infty(T/4,T,L^\infty(\mathbb{R}^d))}\leq C\left (\frac{1}{T}+1 \right )^{1-\frac{2}{d}}. \end{align*} \end{cor} \begin{proof} Since $a = c_d|v|^{2-d} * f$, interpolation yields for $p\in (d/2,\infty)$ \begin{align}\label{eqn:interpolation for Newtonian potential} \|a(t)\|_{L^\infty(\mathbb{R}^d)} \leq C(d,p)\|f(t)\|_{L^1(\mathbb{R}^d)}^{\frac{p}{p-1}\left ( \frac{2}{d}-\frac{1}{p} \right )}\|f(t)\|_{L^p(\mathbb{R}^d)}^{\frac{p}{p-1}\left ( 1- \frac{2}{d} \right )}. \end{align} Indeed, for every $r>0$ and $v\in \mathbb{R}^d$ we have \begin{align*} |a(v)|\leq C(d)\int_{B_r(v)}f(w)|v-w|^{2-d}\;dw +C(d)\int_{\mathbb{R}^d\setminus B_r(v)}f(w)|v-w|^{2-d}\;dw . \end{align*} H\"older's inequality yields \begin{align*} |a(v)|\leq C(d) \|f\|_{L^p(\mathbb{R}^d)}\left (\int_{B_r(v)}|v-w|^{(2-d)p'}\;dw\right )^{\frac{1}{p'}}+C(d) r^{2-d}\|f\|_{L^1(\mathbb{R}^d)} . \end{align*} Note that \begin{align*} \left (\int_{B_r(v)}|v-w|^{(2-d)p'}\;dw\right )^{\frac{1}{p'}} \leq C(d)^{\frac{1}{p'}}r^{\frac{d}{p'}+2-d} = C(d)^{1-\frac{1}{p}}r^{2-\frac{d}{p}}. \end{align*} Therefore \begin{align*} \|a\|_{L^\infty(\mathbb{R}^d)} \leq C(d) \|f\|_{L^p(\mathbb{R}^d)}r^{2-\frac{d}{p}}+C(d) r^{2-d}\|f\|_{L^1(\mathbb{R}^d)} . \end{align*} Optimizing in the parameter $r>0$, we obtain \eqref{eqn:interpolation for Newtonian potential}. Therefore, for $p\in(d/2,\infty)$ \begin{align}\label{a_opt} \|a_{f,\gamma}\|_{L^\infty(T/4,T,L^\infty(\mathbb{R}^d))}\le C(d,p,f_{\textnormal{in}})\|f\|_{L^\infty(T/4,T,L^p(\mathbb{R}^d))}^{\frac{p}{p-1}\left( 1-\frac{2}{d} \right )}.
\end{align} Next, let us apply Proposition \ref{prop:L2L2 bound for Coulomb case} and Lemma \ref{lem:LpLp iterative bound for Coulomb} with $p_0=2$, yielding for $n\geq 1$ \begin{align}\label{sup_f_p+n} \sup \limits_{T/4\leq t\leq T} \|f(t)\|_{L^{p_0+n}(\mathbb{R}^d)} &\leq C\left (\frac{1}{T}+1 \right )^{\frac{n+1}{n+2}}\left (\int_{0}^T\int f^{2}\;dvdt \right )^{\frac{1}{n+2}} \nonumber \\ & \leq C\left (\frac{1}{T}+1 \right )^{\frac{p-1}{p}}, \end{align} where $C = C(f_{\textnormal{in}},d,\gamma,p,C_P,\kappa_P)$, $p=2+n$, and $n$ is chosen so that $p>d/2$. For such $p$'s, \eqref{a_opt} and \eqref{sup_f_p+n} yield \begin{align*} \|a_{f,\gamma}\|_{L^\infty(T/4,T,L^\infty(\mathbb{R}^d))}\leq C(d,p,f_{\textnormal{in}},C_P,\kappa_P)\left ( 1 + \frac{1}{T} \right )^{\left( 1-\frac{2}{d} \right )}. \end{align*} \end{proof} \begin{proof}[Proof of Theorem \ref{thm:Coulomb case good estimate}:] Let $R>0$. Noting that ${a^*}\leq a$ and using Corollary \ref{a_bounded}, we have for $p>1$ a constant $C=C(d,\gamma,f_{\textnormal{in}},p,C_P,\kappa_P)$ such that \begin{align*} \left ( \int_{T/4}^{T}\int f^{p}{a^*}\;dvdt\right )^{\frac{1}{p}} \leq C\left (1+\frac{1}{T} \right )^{\frac{1}{p}\left (1-\frac{2}{d}\right)} \left ( \int_{T/4}^{T}\int f^{p}\;dvdt\right )^{\frac{1}{p}}. \end{align*} This estimate, combined with Lemma \ref{lem:weighted Moser estimate}, leads to the following bound for each $R>1$: \begin{align*} \|f\|_{L^\infty({T}/{2},T; B(0,R))} & \leq CR^{-\frac{\gamma}{p}\frac{q}{q-2}}\left (\frac{1}{T}+1 \right )^{\frac{1}{p}\frac{q}{q-2}}\left (1+\frac{1}{T} \right )^{\frac{1}{p}\left ( 1-\frac{2}{d}\right ) } \|f\|_{L^{p}(T/4,T; L^{p}(\mathbb{R}^d))}, \end{align*} where this time $C=C(d,\gamma,f_{\textnormal{in}},p,C_P,\kappa_P,C_D)$ and $q$ is as in Lemma \ref{lem:weighted Moser estimate} (its exact value is immaterial for what follows).
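As an aside, the exponents in the interpolation bound \eqref{eqn:interpolation for Newtonian potential} can be recovered by balancing the two terms in the optimization in $r$; a quick numerical sketch of this bookkeeping (the test values of $d$ and $p$ are arbitrary, with $p>d/2$):

```python
# Balancing  ||f||_p * r^{2-d/p}  against  ||f||_1 * r^{2-d}  at
# r* = (||f||_1/||f||_p)^{p/(d(p-1))} gives a value ||f||_1^alpha * ||f||_p^beta
# (up to constants); check alpha and beta against the stated exponents.
def alpha(d, p):            # exponent of ||f||_{L^1} from the balancing argument
    return (2 - d / p) * p / (d * (p - 1))

def alpha_claimed(d, p):    # p/(p-1) * (2/d - 1/p)
    return p / (p - 1) * (2 / d - 1 / p)

def beta_claimed(d, p):     # p/(p-1) * (1 - 2/d)
    return p / (p - 1) * (1 - 2 / d)

checks = [(3, 2.0), (3, 5.0), (4, 3.0), (5, 7.5)]
print(all(abs(alpha(d, p) - alpha_claimed(d, p)) < 1e-12 and
          abs(1 - alpha(d, p) - beta_claimed(d, p)) < 1e-12
          for d, p in checks))   # True
```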
On the other hand, Lemma \ref{lem:LpLp iterative bound for Coulomb} with $p_0=2$ and $p=2+n$, and Proposition \ref{prop:L2L2 bound for Coulomb case}, yield \begin{align*} \|f\|_{L^{p}(T/4,T; L^{p}(\mathbb{R}^d))} \leq C(f_{\textnormal{in}},d,p,C_P,\kappa_P)\left (1+\frac{1}{T}\right)^{\frac{p-1}{p}}. \end{align*} Therefore, for every $p$ of the form $p=2+n$, $n\in\mathbb{N}$, we have the estimate \begin{align*} \|f\|_{L^\infty({T}/{2},T; B(0, R))} & \leq CR^{\tilde \delta(p)}\left (1 +\frac{1}{T} \right )^{1+\delta(p)}. \end{align*} Here, $\delta(p)$ and $\tilde \delta(p)$ denote the expressions \begin{align*} \delta(p) := \frac{1}{p}\left(\frac{q}{q-2} -\frac{2}{d}\right) >0,\qquad \tilde \delta(p) := -\frac{\gamma}{p}\frac{q}{q-2} >0, \end{align*} so evidently $\delta(p),\tilde \delta(p) \to 0$ as $p\to\infty$. Therefore, given any $s>0$ there exists a $p$ such that $\delta(p),\tilde \delta(p)\leq s$, and thus for some constant $C$ we have \begin{align*} \|f\|_{L^\infty({T}/{2},T; B(0, R))} & \leq C R^{s}\left (1 +\frac{1}{T} \right )^{1+s},\;\;\forall\;R>1. \end{align*} \end{proof} \appendix \section{Auxiliary computations}\label{sec:Auxiliary Computations} \subsection{Some properties of the weight $|w|^{-m}$} \begin{lem}\label{lem:averages for powers of v} The following inequalities hold for any $m < d $: \begin{align*} \fint_{B_r(v_0)} \frac{1}{|v|^{m}}\;dv \approx \left (\max\{ |v_0|,2r\}\right )^{-m}. \end{align*} The implied constants are determined by $m$ and $d$. The same result holds (with a different implied constant) if one uses cubes instead of balls. \end{lem} \begin{rem}\label{rem:cubes versus balls} In light of the inclusions \begin{align*} B_r(v_0) \subset Q_r(v_0) \subset B_{r\sqrt{d}}(v_0), \end{align*} it is clear that the inequality above holds for cubes if and only if it holds for balls.
\end{rem} \begin{proof}[Proof of Lemma \ref{lem:averages for powers of v}] In light of Remark \ref{rem:cubes versus balls} it suffices to prove the lemma for balls $B_r(v_0)$. In the integral under consideration, take the change of variables $v=r(w_0+w)$, where $w_0 := v_0/r$; then \begin{align*} \fint_{B_{r}(v_0)}|v|^{-m}\;dv = r^{-m}\fint_{B_1(0)}|w_0+w|^{-m}\;dw. \end{align*} If $|v_0|\leq 2r$, that is, if $|w_0|\leq 2$, then an elementary computation shows that \begin{align*} \fint_{B_1(0)}|w_0+w|^{-m}\;dw \approx 1. \end{align*} It follows that if $|v_0|\leq 2r$, \begin{align*} \fint_{B_{r}(v_0)}|v|^{-m}\;dv \approx r^{-m}. \end{align*} If $|v_0|\geq 2r$, that is, if $|w_0|\geq 2$, then $|w_0|/2\leq |w_0+w|\leq 2|w_0|$ for all $w\in B_1(0)$, so \begin{align*} \fint_{B_1(0)}|w_0+w|^{-m}\;dw \approx |w_0|^{-m} = r^{m}|v_0|^{-m}. \end{align*} It follows that when $|v_0|\geq 2r$, \begin{align*} \fint_{B_{r}(v_0)}|v|^{-m}\;dv \approx |v_0|^{-m}. \end{align*} \end{proof} \begin{rem} \label{rem:averages for powers of v} If $m\in [0,d)$, then $|v_1|\leq |v_0|+r\lesssim \max\{|v_0|,2r\}$ for all $v_1\in B_r(v_0)$. Then, Lemma \ref{lem:averages for powers of v} implies that \begin{align*} \fint_{B_r(v_0)}\frac{1}{|v|^m}\;dv \lesssim \frac{1}{|v_1|^m},\;\;\forall\;v_1\in B_r(v_0). \end{align*} The implied constant depends only on $m$ and $d$. In particular, for every $m\in [0,d)$ the function $\frac{1}{|v|^{m}}$ is an $\mathcal{A}_1$-weight. \end{rem} \begin{lem}\label{lem:averages for powers of bracket v} For every $m \in \mathbb{R}$, $v_0 \in \mathbb{R}^d$, and $r\in(0,1)$ we have the estimate \begin{align*} \fint_{B_r(v_0)}(1+|v|)^{-m}\;dv \approx (1+\max\{|v_0|,2r\})^{-m}. \end{align*} The implied constants depend only on $d$ and $m$. The same result holds (with a different implied constant) if one uses cubes instead of balls.
\end{lem} \begin{proof} Again by Remark \ref{rem:cubes versus balls}, it is clear that it suffices to prove the lemma for balls $B_r(v_0)$. Using the same change of variables as in the previous lemma, it follows that \begin{align*} \fint_{B_r(v_0)}(1+|v|)^{-m}\;dv = \fint_{B_1(0)}(1+r|w_0+w|)^{-m}\;dw,\;\;w_0 = v_0/r. \end{align*} If $|v_0|\geq 2r$, which is the same as $|w_0|\geq 2$, we have as in the proof of the previous lemma that $|w_0|/2\leq |w_0+w|\leq 2|w_0|$, thus \begin{align*} (1+r|w_0|)/2 \leq 1+r|w_0+w| \leq 2(1+r|w_0|),\;\;\forall\;w\in B_1(0). \end{align*} Since $r|w_0|=|v_0|$, it follows that for $|v_0|\geq 2r$, \begin{align*} \fint_{B_1(0)}(1+r|w_0+w|)^{-m}\;dw \approx (1+|v_0|)^{-m}. \end{align*} Next, given that $r\in(0,1)$, we have $1+2r \in (1,3)$. Therefore, for $|v_0|\leq 2r$ we have \begin{align*} (1+\max\{|v_0|,2r\})^{-m} = (1+2r)^{-m} \approx 1. \end{align*} At the same time, when $|v_0|\leq 2r$ we have $0\leq |v|\leq 3r$ for all $v\in B_r(v_0)$. Then, we can conclude that for $|v_0|\leq 2r$ and $r\in(0,1)$ we have \begin{align*} \fint_{B_r(v_0)}(1+|v|)^{-m}\;dv \approx 1 \approx (1+2r)^{-m} = (1+\max\{|v_0|,2r\} )^{-m}, \end{align*} with all the implied constants being determined by $d$ and $m$, and the lemma is proved. \end{proof} In what follows, given $e\in \mathbb{S}^{d-1}$, we shall write $[e] := \{ r e \mid r\in\mathbb{R} \}$. \begin{prop} \label{prop:kernel averages} Let $\gamma\geq -d$, $m\geq 0$ with $m|2+\gamma|<d$, $v_0 \in \mathbb{R}^d$, $e\in\mathbb{S}^{d-1}$, and $r>0$. Then for any $v_1 \in B_r(v_0)$ we have that \begin{align*} \fint_{B_r(v_0)}|v|^{(2+\gamma)m}(\Pi(v)e,e)^m\;dv\approx \left \{ \begin{array}{rl} |v_1|^{(2+\gamma)m}(\Pi(v_1)e,e)^m & \textnormal{ if } \textnormal{dist}(v_1,[e]) \geq 2r,\\ \max\{ |v_1|,2r\}^{\gamma m}r^{2m} & \textnormal{ if } \textnormal{dist}(v_1,[e])\leq 2r. \end{array}\right.
\end{align*} The implied constants are determined by $d, m$, and $\gamma$. The same result holds (with different implied constants) if one uses cubes instead of balls. \end{prop} \begin{proof} As before, we shall write the proof for the case of balls, noting that the same arguments yield the respective result for cubes. Let us write $p=-(2+\gamma)m$. The change of variables $v=v_0+rw$ yields \begin{align*} \fint_{B_r(v_0)}|v|^{-p}(\Pi(v)e,e)^m\;dv & = \fint_{B_1(0)}|v_0+rw|^{-p}(\Pi(v_0+rw)e,e)^m\;dw. \end{align*} Then, writing $v_0 = rw_0$, we have \begin{align*} |v_0+rw|^{-p}(\Pi(v_0+rw)e,e)^m = r^{-p} |w_0+w|^{-p}(\Pi(w_0+w)e,e)^m. \end{align*} We consider two cases. First, if $|w_0|\leq 2$, then \begin{align*} \fint_{B_1(0)}|w_0+w|^{-p}(\Pi(w_0+w)e,e)^m\;dw \approx \fint_{B_1(0)}(\Pi(w_0+w)e,e)^m\;dw \approx 1. \end{align*} If instead $|w_0|\geq 2$, then \begin{align*} \frac{1}{2}|w_0| \leq |w_0+w| \leq \frac{3}{2}|w_0|,\;\;\forall\;w\in B_1(0). \end{align*} These inequalities together with $(\Pi(w_0+w)e,e)\geq 0$ lead to \begin{align*} \fint_{B_1(0)}|w_0+w|^{-p}(\Pi(w_0+w)e,e)^m\;dw \approx \max\{|v_0|,2r\}^{-p}\fint_{B_1(0)}(\Pi(w_0+w)e,e)^m\;dw. \end{align*} Next, we make use of the fact that \begin{align*} |v|^2(\Pi(v)e,e) = \textnormal{dist}(v, [e] )^2. \end{align*} By the triangle inequality, if $\textnormal{dist}(w_0,[e]) \geq 2$, we have for every $w\in B_1(0)$, \begin{align*} \frac{1}{2}\textnormal{dist}(w_0, [e] )\leq \textnormal{dist}(w_0+w, [e] )\leq \frac{3}{2}\textnormal{dist}(w_0, [e] ). \end{align*} In this case we also have $|w_0|\geq 2$, so \begin{align*} (\Pi(w_0+w)e,e) \approx (\Pi(w_0)e,e),\;\;\forall\;w\in B_1(0). \end{align*} Therefore, in the case $\textnormal{dist}(v_0,[e])\geq 2r$ we have \begin{align*} \fint_{B_r(v_0)}|v|^{-p}(\Pi(v)e,e)^m\;dv & \approx |v_0|^{-p}(\Pi(w_0)e,e)^m.
\end{align*} It remains to consider the case $|w_0|\geq 2$ and $\textnormal{dist}(w_0,[e])\leq 2$. It is easy to see that \begin{align*} \fint_{B_1(0)}(\Pi(w_0+w)e,e)^m\;dw \approx \min\{1,|w_0|^{-1}\}^{2m} = |w_0|^{-2m}, \end{align*} from where it follows that \begin{align*} \fint_{B_r(v_0)}|v|^{-p}(\Pi(v)e,e)^m\;dv & \approx |v_0|^{-p} (r/|v_0|)^{2m}. \end{align*} From the definition of $p$, $ |v_0|^{-p} \min\{1,r/|v_0|\}^{2m} = |v_0|^{(2+\gamma)m} (r^2|v_0|^{-2})^m$, thus \begin{align*} \fint_{B_r(v_0)}|v|^{-p}(\Pi(v)e,e)^m\;dv & \approx r^{2m}|v_0|^{\gamma m}. \end{align*} This proves the proposition for $v_1 = v_0$. For $v_1 \in B_r(v_0)$, we make use of the inclusions $B_r(v_0)\subset B_{2r}(v_1)$ and $B_{r}(v_1) \subset B_{2r}(v_0)$ to obtain the estimates in the general case. \end{proof} The estimates on averages from Proposition \ref{prop:kernel averages} yield that the function given by $a_e(v)=(A_{f,\gamma}e,e)$ ($e\in\mathbb{S}^{d-1}$ fixed) is \emph{almost} in the $\mathcal{A}_1$ class. \begin{prop}\label{prop:weight a_e is almost A1} Fix $f\geq 0$ with $f\in L^1(\mathbb{R}^d)$, and let $v_0 \in \mathbb{R}^d$, $e\in \mathbb{S}^{d-1}$, and $r>0$. Define \begin{align*} S(v,r,e) = \{ w \in\mathbb{R}^d \mid\; \textnormal{dist}(w-v,[e]) \leq 2r\},\;\;v \in \mathbb{R}^d. \end{align*} Then, for any $v_1 \in B_r(v_0)$ we have the inequality \begin{align*} \fint_{B_r(v_0)}a_e(v)\;dv & \leq Ca_e(v_1)+r^2\int_{S(v_1,r,e)}f(w)\max\{2r,|v_1-w|\}^\gamma\;dw, \end{align*} with $C = C(d,\gamma)$. The same result holds if one uses cubes $Q_r(v_0)$ in place of balls. \end{prop} \begin{proof} Throughout the proof let us write $a_e(v)=(A_{f,\gamma}e,e)$.
Recall that we always have \begin{align*} \fint_{B_r(v_0)}a_e(v)\;dv & = C_{d,\gamma}\int_{\mathbb{R}^d}f(w)\fint_{B_r(v_0)}|v-w|^{2+\gamma}(\Pi(v-w)e,e)\;dvdw\\ & = C_{d,\gamma}\int_{\mathbb{R}^d}f(w)\fint_{B_r(v_0)}K_e(v-w)\;dvdw, \end{align*} where for convenience we are writing \begin{align*} K_e(v) = |v|^{2+\gamma}(\Pi(v)e,e). \end{align*} Now, note that \begin{align*} \fint_{B_r(v_0)}K_e(v-w)\;dv = \fint_{B_r(v_0-w)}K_e(v)\;dv. \end{align*} Observe that if $v_1 \in B_r(v_0)$ then $v_1-w\in B_r(v_0-w)$. Therefore, we may apply Proposition \ref{prop:kernel averages} in $B_r(v_0-w)$ with $m=1$ and with respect to the point $v_1-w\in B_r(v_0-w)$, which leads to a bound according to the location of $w$: if $w\not\in S(v_1,r,e)$, then \begin{align*} \fint_{B_r(v_0)}K_e(v-w)\;dv \approx K_e(v_1-w), \end{align*} while for $w \in S(v_1,r,e)$, \begin{align*} \fint_{B_r(v_0)}K_e(v-w)\;dv \approx r^{2}\max \{2r,|v_1-w|\}^{\gamma}. \end{align*} Combining these two estimates, it follows that \begin{align*} \fint_{B_r(v_0)}a_e(v)\;dv & \approx \int_{S(v_1,r,e)^c}f(w)K_e(v_1-w)\;dw+r^2\int_{S(v_1,r,e)}f(w)\max \{2r,|v_1-w|\}^{\gamma}\;dw. \end{align*} Since the first term is no larger than $Ca_e(v_1)$, for $C=C(d,\gamma)$, the proposition is proved. \end{proof} \end{document}
\begin{document} \hyphenation{Ryd-berg} \title{Rydberg excitation of Bose-Einstein condensates} \author{Rolf Heidemann} \email[Electronic address: ]{[email protected]} \author{Ulrich Raitzsch} \author{Vera Bendkowsky} \author{Bj\"{o}rn Butscher} \author{Robert L\"{o}w} \affiliation{5. Physikalisches Institut, Universit\"{a}t Stuttgart, Pfaffenwaldring 57, 70569 Stuttgart, Germany} \author{Tilman Pfau} \email[Electronic address:]{[email protected]} \affiliation{5. Physikalisches Institut, Universit\"{a}t Stuttgart, Pfaffenwaldring 57, 70569 Stuttgart, Germany} \date{29 October 2007} \begin{abstract} Rydberg atoms provide a wide range of possibilities to tailor interactions in a quantum gas. Here we report on Rydberg excitation of Bose-Einstein condensed $^{87}$Rb atoms. The Rydberg fraction was investigated for various excitation times and temperatures above and below the condensation temperature. The excitation is locally blocked by the van der Waals interaction between Rydberg atoms to a density-dependent limit. Therefore the abrupt change of the thermal atomic density distribution to the characteristic bimodal distribution upon condensation could be observed in the Rydberg fraction. The observed features are reproduced by a simulation based on local collective Rydberg excitations. \end{abstract} \maketitle The ability to tailor interactions among atoms in a quantum gas is one of the main strengths of ultracold atoms as model systems for other research fields such as condensed matter physics. So far mostly effective contact interactions have been used which can be changed over a wide range in strength and even in sign. Two years ago, the long-range magnetic dipole-dipole interaction was observed between atoms in a $^{52}$Cr-BEC \cite{Stuhler:2005}. It was proposed to induce dynamic dipole-dipole interactions \cite{Low:2005} or a $1/r$ interaction potential \cite{O'Dell:2000} between ultracold atoms by light. 
To induce a static electric dipole-dipole interaction, it was suggested to admix a Rydberg state with a static electric dipole moment to the atomic ground state by photo-excitation \cite{Santos:2000}. Rydberg atoms and ground state atoms are predicted to form unusual weakly bound molecular states \cite{Greene:2000,Greene:2006}. As the average interparticle distances in BECs are of the same order of magnitude as the molecular binding distances, a BEC seems to be the ideal starting point for photoassociation of these molecules. A Rydberg atom also constitutes an impurity in the BEC, related to that of Ref.\,\cite{Chikkatur:2000}, which could be manipulated by electric fields. The interaction could lead to an agglomeration of ground state atoms around the impurity, as was calculated for ionic impurities \cite{Massignan:2005}. As the nature and strength of the interaction among Rydberg atoms can be tailored by electric fields and microwave fields, e.g.\ from van der Waals type to dipole-dipole type \cite{Vogt:2007} or an isotropic $1/r^3$ potential \cite{Buchler:2007}, they provide a new tool for many-body quantum physics. In this Letter we describe the excitation of atoms to a Rydberg state while undergoing a phase transition from a thermal gas to a Bose-Einstein condensate. We present a model based on collective excited states to simulate the observed Rydberg fraction across the phase transition, which agrees qualitatively with the observed data. Using the setup described in \cite{Loew:2007}, we magnetically trap $^{87}$Rb-atoms in the 5S$_{1/2}, F=2, m_F=2$ state and produce samples from thermal clouds to Bose-Einstein condensates by means of forced RF-evaporation. We vary the temperature from \unit[5]{$\mu$K} down to \unit[200]{nK}, crossing the condensation temperature at around \unit[700]{nK}. After this preparation, the atoms are subject to a two-photon Rydberg excitation via the 5P$_{3/2}$ state to the 43S$_{1/2}$ state.
The duration of the square pulses of excitation light was varied in this experiment between \unit[170]{ns} and \unit[2]{$\mu$s}. To avoid significant absorption and heating due to spontaneous photon scattering, the light is blue detuned by $\Delta$= \unit[483]{MHz} from the 5$P_{3/2}, F=3$ level. Thus only one photon per 100 atoms is scattered for the longest excitation time. For the following experiments the Rabi frequency $\Omega_1$ on the 5S-5P transition is \unit[11]{MHz}. A Rabi frequency $\Omega_2$ for the 5P-43S transition of \unit[9.7]{MHz} \cite{CommentRabifrequency:2007} results in a two-photon Rabi frequency $\Omega_1\Omega_2/(2\Delta)$ of \unit[110]{kHz}. The thermal motion of the atoms is negligible on the $\mu$s-time scale of the experiments, but interactions among the Rydberg atoms can lead to collisions and ionisation. By choosing a Rydberg state with repulsive interactions and applying an electric extraction field, the unwanted effects of ions are avoided \cite{Heidemann:2007}. The Rydberg excitation is followed immediately by a field-ionisation of the Rydberg atoms and their detection on a microchannel plate. Up to three such sequences of excitation and detection are applied to one sample, followed by absorption imaging of the remaining ground state atoms. From these calibrated detection methods, the total atom number $N_g$ and the fraction $f=N_R/N_g$ of atoms in the Rydberg state are derived. This normalisation of the Rydberg atom number $N_R$ is useful since the total atom number drops significantly from $10^7$ to $10^5$ while reducing the temperature. The temperature of each sample is derived from the size in the absorption image after an expansion of \unit[20]{ms} time of flight. The thermal clouds are fitted using a Bose distribution, neglecting interaction among the ground state atoms. The BEC component is fitted with a Thomas-Fermi distribution.
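As a quick consistency check, the two-photon Rabi frequency quoted above, $\Omega_1\Omega_2/(2\Delta)$, follows directly from the single-photon values:

```python
# Two-photon Rabi frequency Omega1*Omega2/(2*Delta) from the quoted values.
Omega1 = 11.0   # MHz, 5S-5P transition
Omega2 = 9.7    # MHz, 5P-43S transition
Delta = 483.0   # MHz, blue detuning from the 5P_{3/2}, F=3 level
Omega_2ph = Omega1 * Omega2 / (2 * Delta)   # in MHz
print(round(Omega_2ph * 1e3))  # 110 (kHz)
```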
From the parameters of the fit and the known trapping potential, the in-trap density distributions are calculated and used for simulating the Rydberg excitation. The peak density and the condensate fraction are plotted versus temperature in Fig.\,\ref{Fig_density_ratio} where no effect of the different excitation times on the atomic distribution is visible. The bimodality of the density distribution will prove to be crucial for the understanding of the results of the Rydberg excitation. When the temperature is decreased below the critical temperature, the central density increases abruptly by a factor of about 4. When the temperature is reduced further, more atoms are condensed into the small constricted region of the BEC, reducing the density of the thermal component. In the condensate the average atomic distance to the next neighbour is \unit[150]{nm} ($\approx 0.55 n_g^{-\nicefrac{1}{3}}$ \cite{Clark:1954}) which is calculated from the peak atomic density $n_g$. \begin{figure} \caption{\label{Fig_density_ratio} Peak density and condensate fraction versus temperature.} \end{figure} In a recent work, we investigated the density dependence of the Rydberg excitation dynamics in magnetically trapped clouds above the condensation temperature \cite{Heidemann:2007}. The Rydberg atom number as a function of excitation time initially increases linearly with a slope $R$ and saturates to a value $N_{\text{sat}}$ after a time $\tau_{s}=N_{\text{sat}}/R$. From this previous experimental observation we know that the saturation is reached faster with increasing density and that the saturation Rydberg density depends only very weakly on the density of ground state atoms $n_g$. In this Letter we investigate the excitation dynamics across the condensation temperature. Due to the bimodal density distribution in a partially condensed cloud the time scales and saturation Rydberg fractions are different in the condensate and thermal component.
Figure \ref{Fig_Rydberg_fraction}(a) shows the Rydberg fraction after different excitation times as a function of temperature. The lines are the result of a smoothing intended to guide the eye to the main features of the measurement: At medium excitation times (\unit[320 and 370]{ns}) the Rydberg fraction decreases with proceeding formation of the condensate, i.e.\ reduced temperature. For the shortest and longest excitation times, no such kink in the Rydberg fraction is visible at the critical temperature. \begin{figure} \caption{\label{Fig_Rydberg_fraction} Rydberg fraction after different excitation times as a function of temperature.} \end{figure} The excitation times in this experiment are chosen such that for the shortest time (170 ns), both components are just slightly blocked but not saturated. For longer excitation times, the excitation is already saturated in the condensate component while it is not yet in the thermal part. For the longest excitation time (1970 ns), the excitation is saturated in both components. As a start the following intuitive picture may be used to explain the reduction of the total Rydberg fraction at medium excitation times: With increasing BEC fraction, more atoms are condensed into the BEC, where the Rydberg fraction is significantly lower than in the thermal cloud due to its higher density; this also lowers the total Rydberg fraction. This effect dominates over the smaller increase in the Rydberg fraction in the thermal cloud as its density decreases (see Fig.\,\ref{Fig_density_ratio}(a)). A deeper comprehension is obtained by the following model based on collective states. In this experiment the van der Waals interaction among the Rydberg atoms blocks further excitation within a blockade radius $r_b$ around one Rydberg atom \cite{Tong:2004,Singer:2004,Heidemann:2007}. This limitation to one excitation per blockade sphere leads to a collective Rabi oscillation which is enhanced in frequency by a factor $\sqrt{N}$.
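This $\sqrt{N}$ enhancement can be illustrated with a minimal numerical sketch (not part of the analysis; atom number, drive, and interaction strength are placeholder values): $N$ driven two-level atoms with a strong pairwise shift suppressing double excitation oscillate collectively into the symmetric singly excited state at the enhanced frequency $\sqrt{N}\Omega$.

```python
import numpy as np

# Sketch: N two-level atoms, drive (Omega/2)*sum_i sigma_x^i, plus a large
# shift V for every doubly excited pair (perfect-blockade limit). The ground
# state then couples only to the symmetric W state, at frequency sqrt(N)*Omega.
N, Omega, V = 4, 1.0, 300.0                 # placeholder values
dim = 2 ** N

def n_excited(s):
    return bin(s).count("1")

H = np.zeros((dim, dim))
for s in range(dim):
    k = n_excited(s)
    H[s, s] = V * k * (k - 1) / 2           # blockade shift for each excited pair
    for i in range(N):                      # sigma_x on atom i flips bit i
        H[s ^ (1 << i), s] += Omega / 2

# Evolve |g...g> for half a collective Rabi period, t = pi/(sqrt(N)*Omega).
t = np.pi / (np.sqrt(N) * Omega)
w, Q = np.linalg.eigh(H)
psi0 = np.zeros(dim)
psi0[0] = 1.0
psi_t = Q @ (np.exp(-1j * w * t) * (Q.T @ psi0))
p_exc = sum(abs(psi_t[s]) ** 2 for s in range(dim) if n_excited(s) == 1)
print(p_exc > 0.95)   # True: one shared excitation at the enhanced frequency
```

In the limit $V\to\infty$ the dynamics reduces exactly to a two-level oscillation between $|g\cdots g\rangle$ and the $W$ state with Rabi frequency $\sqrt{N}\Omega$.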
In this sense the $N$ atoms per blockade sphere act like a `superatom' \cite{Vuletic:2006} with a $\sqrt{N}$ larger transition matrix element. In the regime where the size of the sample is larger than the blockade radius $r_{b}$, quantum correlations interconnect the whole sample. Thus the system with $N_g\approx10^7$ atoms and $N_R\approx10^3$ excitations is inaccessible to direct ab initio simulations. The model described below reduces the quantum correlations to spatial correlations and locally collective states by assuming $N_{R}$ independent superatoms which oscillate at their respective collective Rabi frequencies $\sqrt{N}\Omega_0$. We assume the superatoms to arrange in the close packing of a face-centered cubic structure with 12 nearest neighbours. In this hypothetical lattice, the saturation density is $n_{R}=\sqrt{2}\,r_{b}^{-3}$ and the interaction energy at the center of one superatom is $Z=14.5$ times the pairwise van der Waals energy; $Z$ is slightly larger than 12 due to the atoms beyond the shell of nearest neighbours. In a local density approximation, the local number of ground state atoms forming one superatom is given by $N(r)=n_{g}(r)/n_{R}(r)$, and density variations on the scale of $r_b$ are neglected. The saturation density of Rydberg atoms can be derived from the blockade condition, which equates the van der Waals interaction energy with the energy of the collective Rabi coupling, where $\kappa$ is a constant expected to be on the order of one: \begin{equation}\label{eqn_blockade}\frac{Z\,C_6}{r_{b}^6(r)}=Z\,C_6\,\frac{1}{2}\,n_{R}^2(r)=\kappa\,\hbar\sqrt{N(r)}\,\Omega_0. \end{equation} The $C_6$-coefficient for the $43S$-state is $-1.7\times10^{19}$ a.u. \cite{Singer:2005}. As shown later, the blockade radius is in the micrometer regime, substantially larger than the mean interatomic distance. At such separations, the use of the pure van der Waals interaction in Eq.~(\ref{eqn_blockade}) is justified.
From this relation, the local saturation density $n_R(r)$ and then the collective Rabi frequency $\Omega_c$ can be derived directly: \begin{eqnarray}\label{eqn_integration1} n_{R}(r)&=&\left(\nicefrac{2\,\kappa\,\hbar}{Z\,C_6} \right)^{\nicefrac{2}{5}}n_{g}^{\nicefrac{1}{5}}(r)\Omega_0^{\nicefrac{2}{5}},\\ \label{eqn_integration2}\Omega_{c}(r)&=&\sqrt{N(r)}\,\Omega_0=\left(\nicefrac{Z\,C_6}{2\,\kappa\,\hbar} \right)^{\nicefrac{1}{5}} n_{g}^{\nicefrac{2}{5}}(r)\Omega_0^{\nicefrac{4}{5}} \end{eqnarray} \begin{figure} \caption{\label{Fig_simulation_distribution} Density distributions $n_g(r)$ and $n_R(r)$ and the local Rydberg fraction $f(r)$.} \end{figure} To illustrate the calculated bimodality in the saturation density of Rydberg atoms $n_R(r)$, Fig.\,\ref{Fig_simulation_distribution} shows the density distributions as well as the distribution of the fraction $f(r)$, which shows a significant reduction at the position of the BEC component. The time dependence of the total number $N_{R}(\tau)$ of Rydberg atoms can be obtained by integrating the oscillations of the superatoms over the sample: \begin{equation} \label{eqn_integration3}N_{R}(\tau)=\int n_{R}(r) \cdot\sin^2 \left(\Omega_c(r)\,\tau/2 \right) d^3r \end{equation} The simulated behaviour of $N_R(\tau)$ nicely reproduces the saturation curves shown in Ref.\,\cite{Heidemann:2007}. The derived scaling $N_{\text{sat}}\propto n_R\propto n_g^{\nicefrac{1}{5}}\Omega_0^{\nicefrac{2}{5}}$ is in excellent agreement with our previous observation of $N_{\text{sat}}\propto n_g^{0.07\pm0.02}\Omega_0^{0.38\pm0.04}$ \cite{Heidemann:2007} as far as the Rabi frequency is concerned, and close for the peak density $n_g$. The calculated scaling of the initial slope of the excitation curve, $R\propto n_R\,\Omega_{c}\propto n_g^{\nicefrac{3}{5}}\Omega_0^{\nicefrac{6}{5}}$, is in good agreement with the previous observation of $R\propto n_g^{0.49\pm0.06}\Omega_0^{1.1\pm0.1}$ \cite{Heidemann:2007}.
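The algebra leading from the blockade condition to the two expressions above is easy to check numerically. In the sketch below, all parameter values ($\kappa$, the magnitude of the combined coefficient $Z\,C_6$, the density and the Rabi frequency) are illustrative placeholders, not the experimental ones; only the exponents matter for the scaling check.

```python
# Saturation Rydberg density and collective Rabi frequency following
# Eqs. (2)-(3); every numeric parameter here is an illustrative
# placeholder, not a value taken from the experiment.

HBAR = 1.054571817e-34   # J s

def n_R(n_g, omega0, kappa=0.3, ZC6=1.0e-58):
    """Local saturation density of Rydberg atoms, Eq. (2). ZC6 = |Z*C6| (assumed)."""
    return (2 * kappa * HBAR / ZC6) ** 0.4 * n_g ** 0.2 * omega0 ** 0.4

def omega_c(n_g, omega0, kappa=0.3, ZC6=1.0e-58):
    """Local collective Rabi frequency, Eq. (3)."""
    return (ZC6 / (2 * kappa * HBAR)) ** 0.2 * n_g ** 0.4 * omega0 ** 0.8

# Scaling check: doubling the ground-state density raises n_R by 2**(1/5)
# and Omega_c by 2**(2/5), independently of the (assumed) constants.
r1 = n_R(2e19, 2e5) / n_R(1e19, 2e5)
r2 = omega_c(2e19, 2e5) / omega_c(1e19, 2e5)
print(round(r1, 4), round(r2, 4))   # 1.1487 1.3195
```

One can also verify directly that $\Omega_c=\sqrt{n_g/n_R}\,\Omega_0$ holds for the two expressions, i.e., that Eq.~(3) is consistent with $N=n_g/n_R$.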
This simplified model, despite its approximate character, explains the observed scalings well. Here we apply it to the situation of a changing density distribution $n_g(r)$ upon condensation. The lines in Fig.\,\ref{Fig_Rydberg_fraction}(b) show the calculated values for the Rydberg fraction using this model. The atomic density distributions were calculated from the information obtained from the absorption images and averaged over several pictures taken at different excitation times. The simulation uses two free parameters: the proportionality factor $\kappa$ in Eq.~(\ref{eqn_blockade}) and the Rabi frequency $\Omega_0$. The factor $\kappa$ was adjusted to 0.3 such that the measurement is best reproduced at high temperatures. The Rabi frequency had to be reduced by a factor of 5.5 to reproduce the characteristic kink at the right excitation times. This reduction is reasonable, as the frequency of the excitation lasers is only stable to within 1.5 MHz between adjacent excitations, which effectively reduces the Rabi frequency in this model. Note that the simulation was also used at large radii $r$, where the model becomes unphysical since $n_{R}(r)>n_{g}(r)$. For our parameters the effect on the total fraction $f$ is negligible, as only a fraction of at most $10^{-4}$ of the atoms are at these large distances, where the collective behaviour ends and the single-atom excitation fraction is negligible. The scaling of the Rydberg fraction $f= N_R/N_g$ with density can be derived from the model for short, moderate and long excitation times: For short excitation times, when the dynamics cannot yet be distinguished from single-atom behaviour, the Rydberg atom number is proportional to $N_g$, therefore $f$ is independent of $n_g$. For intermediate excitation times $\tau$, during the linear increase of $N_{R}(\tau)$, $f$ can be approximated by $R\,\tau/N_g \propto n_R\,\Omega_c\tau/n_g\propto n_g^{-\nicefrac{2}{5}}$.
For long excitation times, in the saturation regime, the dependence of the fraction on the density is stronger: $f\propto N_{\text{sat}}/N_g\propto n_R/n_g\propto n_{g}^{-\nicefrac{4}{5}}$. These considerations hold for a change in density without changing the shape of the distribution, i.e., without crossing the critical temperature, or for the thermal component alone. The simulation is able to reproduce the main features of the experimental data: the dependence of $f$ on the density becomes stronger with increasing excitation time. For intermediate excitation times, $f$ bends down below the condensation temperature. At long excitation times, this kink is masked by the stronger dependence of $f$ on $n_g$ in the saturated thermal cloud. To emphasize the effect of the bimodal density distribution, the dashed lines in Fig.\,\ref{Fig_Rydberg_fraction}(b) show a calculation which includes only the thermal component of the atomic cloud and which does not reproduce the characteristic kink. According to this simulation, the maximum density of Rydberg atoms in the BEC is \unit[9$\times 10^{15}$]{m$^{-3}$}, corresponding to a blockade radius of \unit[5.4]{$\mu$m} and 5 Rydberg excitations within the volume of the BEC. The experimental fraction $f$ shows an overall increase with decreasing temperature which is not sufficiently reproduced by the simulation. We attribute this remaining discrepancy to a failure of the local density approximation (LDA): the radial width of the Gaussian distribution becomes as narrow as \unit[2]{$\mu$m}, while the dynamics depends on a density that is averaged over the size of the blockade radius. For low temperatures this averaging effectively lowers the ground state densities and leads to higher Rydberg fractions than the simulation suggests.
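The exponent bookkeeping behind the short-, intermediate- and long-time regimes discussed above can be made explicit with exact rational arithmetic; this is only a restatement of the scalings already derived, not new physics.

```python
# Exact exponents of n_g in the Rydberg fraction f = N_R/N_g for the
# three excitation-time regimes, assembled from Eqs. (2)-(3).
from fractions import Fraction as F

e_nR = F(1, 5)                 # n_R     ~ n_g**(1/5), Eq. (2)
e_Oc = F(2, 5)                 # Omega_c ~ n_g**(2/5), Eq. (3)

e_short = F(0)                 # single-atom regime: f independent of n_g
e_mid   = (e_nR + e_Oc) - 1    # f ~ R*tau/N_g ~ n_R*Omega_c/n_g
e_long  = e_nR - 1             # f ~ N_sat/N_g ~ n_R/n_g

print(e_short, e_mid, e_long)  # 0 -2/5 -4/5
```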
Note that we do not expect the change in the density-density correlation upon condensation to have a significant effect on the described measurements, as the de Broglie wavelength, the length scale on which the bunching in the thermal cloud occurs, is small ($\lesssim$ \unit[350]{nm}) and therefore much smaller than the blockade radius. To conclude, we have demonstrated the Rydberg excitation of Bose-Einstein condensates and shown that the momentum distribution of the condensate is not significantly affected by the presence of the Rydberg atoms. We have presented measurements of the excitation dynamics which show a significantly smaller Rydberg fraction in the condensed sample than in the thermal sample. A simplified superatom model reproduces the main features of the measurement as a consequence of the changing density distribution upon condensation. The coherence of the condensate was not relevant for the understanding of the overall excitation dynamics. Future studies will focus on making use of the coherence properties of the condensate, e.g., in order to measure the spatial density-density correlation function of the Rydberg atoms by means of matter wave interferometry. We acknowledge fruitful discussions with F. Robicheaux, H.P. B\"{u}chler and L. Santos as well as financial support from the Deutsche Forschungsgemeinschaft within the SFB/TRR21 and under the contract PF 381/4-1. U.R. acknowledges support from the Landesgraduiertenf\"{o}rderung Baden-W\"{u}rttemberg.\\ \end{document}
\begin{document} \global\long\def\mathbf{p}{\mathbf{p}} \global\long\def\mathbf{q}{\mathbf{q}} \global\long\def\mathfrak{C}{\mathfrak{C}} \global\long\def\mathcal{P}{\mathcal{P}} \global\long\def\mathbf{p}r{\operatorname{pr}} \global\long\def\operatorname{im}{\operatorname{im}} \global\long\def\operatorname{otp}{\operatorname{otp}} \global\long\def\operatorname{dec}{\operatorname{dec}} \global\long\def\operatorname{suc}{\operatorname{suc}} \global\long\def\mathbf{p}re{\operatorname{pre}} \global\long\def\mathbf{q}e{\operatorname{qf}} \global\long\def\operatorname{ind}{\operatorname{ind}} \global\long\def\operatorname{Nind}{\operatorname{Nind}} \global\long\def\operatorname{lev}{\operatorname{lev}} \global\long\def\operatorname{Suc}{\operatorname{Suc}} \global\long\def\operatorname{HNind}{\operatorname{HNind}} \global\long\def{\lim}{{\lim}} \global\long\def\frown{\frown} \global\long\def\operatorname{cl}{\operatorname{cl}} \global\long\def\operatorname{tp}{\operatorname{tp}} \global\long\def\operatorname{id}{\operatorname{id}} \global\long\def\left(\star\right){\left(\star\right)} \global\long\def\mathbf{q}f{\operatorname{qf}} \global\long\def\operatorname{ai}{\operatorname{ai}} \global\long\def\operatorname{dtp}{\operatorname{dtp}} \global\long\def\operatorname{acl}{\operatorname{acl}} \global\long\def\operatorname{nb}{\operatorname{nb}} \global\long\def{\lim}{{\lim}} \global\long\def\leftexp#1#2{{\vphantom{#2}}^{#1}{#2}} \global\long\def\operatorname{interval}{\operatorname{interval}} \global\long\def\emph{at}{\emph{at}} \global\long\def\mathfrak{I}{\mathfrak{I}} \global\long\def\operatorname{uf}{\operatorname{uf}} \global\long\def\operatorname{ded}{\operatorname{ded}} \global\long\def\operatorname{Ded}{\operatorname{Ded}} \global\long\def\operatorname{Df}{\operatorname{Df}} \global\long\def\operatorname{Th}{\operatorname{Th}} \global\long\def\operatorname{eq}{\operatorname{eq}} \global\long\def\operatorname{Aut}{\operatorname{Aut}} \global\long\defac{ac} 
\global\long\def\operatorname{Df}One{\operatorname{df}_{\operatorname{iso}}} \global\long\def\modp#1{\mathbf{p}mod#1} \global\long\def\sequence#1#2{\left\langle #1\left|\,#2\right.\right\rangle } \global\long\def\set#1#2{\left\{ #1\left|\,#2\right.\right\} } \global\long\def\operatorname{Diag}{\operatorname{Diag}} \global\long\def\mathbb{N}{\mathbb{N}} \global\long\def\mathrela#1{\mathrel{#1}} \global\long\def\mathord{\sim}{\mathord{\sim}} \global\long\def\mathordi#1{\mathord{#1}} \global\long\def\mathbb{Q}{\mathbb{Q}} \global\long\def\operatorname{dense}{\operatorname{dense}} \global\long\def\operatorname{cof}{\operatorname{cof}} \global\long\def\operatorname{tr}{\operatorname{tr}} \global\long\def\operatorname{tr}eeexp#1#2{#1^{\left\langle #2\right\rangle _{\operatorname{tr}}}} \title{Examples in dependent theories} \author{Itay Kaplan and Saharon Shelah} \thanks{The first author was partially supported by SFB grant 878. The second author would like to thank the Israel Science Foundation for partial support of this research (Grants nos. 710/07 and 1053/11). No. 946 on the second author's list of publications.} \address{Itay Kaplan \\ Institut f\"ur mathematische Logik und Grundlagenforschung \\ Fachbereich Mathematik und Informatik \\ Universit\"at M\"unster \\ Einsteinstra{\ss}e 62 \\ 48149 M\"unster \\ Germany } \email{[email protected]} \urladdr{https://sites.google.com/site/itay80/ } \address{Saharon Shelah\\ The Hebrew University of Jerusalem\\ Einstein Institute of Mathematics \\ Edmond J.
Safra Campus, Givat Ram\\ Jerusalem 91904, Israel} \address{Saharon Shelah \\ Department of Mathematics\\ Hill Center-Busch Campus\\ Rutgers, The State University of New Jersey\\ 110 Frelinghuysen Road\\ Piscataway, NJ 08854-8019 USA} \email{[email protected]} \urladdr{http://shelah.logic.at/} \subjclass[2010]{03C45, 03C95, 03C64} \begin{abstract} In the first part we show a counterexample to a conjecture by Shelah regarding the existence of indiscernible sequences in dependent theories (up to the first inaccessible cardinal). In the second part we discuss generic pairs, and give an example where the pair is not dependent. Then we define the notion of directionality, which deals with counting the number of coheirs of a type, and we give examples of the different possibilities. Then we discuss non-splintering, an interesting notion that appears in the work of Rami Grossberg, Andr\'es Villaveces and Monica VanDieren, and we show that it is not trivial (in the sense that it can differ from splitting) whenever the directionality of the theory is not small. In the appendix we study dense types in RCF. \end{abstract} \maketitle \section{Introduction} This paper gives some examples of dependent theories that exhibit certain phenomena. Recall: \begin{defn} A first order theory $T$ is \emph{dependent} (NIP) if it does not have the independence property, which means: there are no formula $\varphi\left(x,y\right)$ and tuples $\left\langle a_{i},b_{s}\left|\, i<\omega,s\subseteq\omega\right.\right\rangle $ in $\mathfrak{C}$ such that $\varphi\left(a_{i},b_{s}\right)$ holds if and only if $i\in s$. \end{defn} \subsection{Existence of indiscernibles} Indiscernible sequences are very important in model theory. Usually one uses Ramsey's theorem to prove their existence. Sometimes we want a stronger result: for instance, we may want that any large enough set contains an indiscernible sequence, and indeed this was conjectured by Shelah for dependent theories.
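The combinatorial content of the independence property defined above can be illustrated on finite structures: a formula has IP exactly when it shatters arbitrarily large finite sets. The toy sketch below (the divisibility example and the parameter ranges are our own illustrations, not from this paper) checks shattering directly.

```python
# Finite analogue of the independence property: the formula phi(x, y)
# "shatters" a set A if every subset of A is cut out by some parameter b.
from itertools import chain, combinations

def shatters(phi, A, params):
    """Does {a in A : phi(a, b)} range over all subsets of A as b runs over params?"""
    traces = {frozenset(a for a in A if phi(a, b)) for b in params}
    subsets = chain.from_iterable(combinations(A, r) for r in range(len(A) + 1))
    return all(frozenset(s) in traces for s in subsets)

divides = lambda a, b: b % a == 0       # "a divides b"
# products of subsets of {2, 3, 5} realize every subset as a divisibility trace
print(shatters(divides, [2, 3, 5], range(1, 31)))      # True

# x < b on a linear order only cuts initial segments: never shatters a 2-set,
# a toy reflection of the fact that linear orders are dependent
print(shatters(lambda a, b: a < b, [1, 2], range(10))) # False
```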
We will show that, at least in some models of ZFC, one cannot hope for such a result to be true. \subsection{Generic pairs} In a series of papers (\cite{Sh:900,Sh877,Sh906,Sh950}), Shelah has proved (among other things) that dependent theories give rise to a ``generic pair'' of models (and in fact this characterizes dependent theories). The natural question is whether the theory of the pair is again dependent. The answer is no: we present an example of an $\omega$-stable theory all of whose generic pairs have the independence property. \subsection{Directionality} The directionality of a theory measures the number of finitely satisfiable global extensions of a complete type (these are also called coheirs). We say that a theory has \emph{small} directionality if for every type $p$ over a model $M$, the number of complete finitely satisfiable (in $M$) $\Delta$-types which are consistent with $p$ is finite for all finite sets $\Delta$. The theory has \emph{medium} directionality if this number is bounded by $\left|M\right|$, and it has \emph{large} directionality if it is neither small nor medium. We give an equivalent definition (Theorem \ref{thm:trichotomy} below). We provide examples of dependent theories of each kind of directionality, and calculate the directionality of some theories, including RCF and ACVF. \subsection{Splintering} This section is connected to the work of Rami Grossberg, Andr\'es Villaveces and Monica VanDieren. In \cite{GrViVa} they study Shelah's generic pair conjecture (which is now a theorem), and in their analysis they came up with the notion of splintering, which is similar to splitting. We show that in any dependent theory with medium or large directionality, splintering is different from splitting. We also provide an example of such a theory with small directionality, and prove that this cannot happen in the stable realm. \subsection{Dense types} In the appendix, we study dense types in RCF.
Namely, we show that $\operatorname{ded}\lambda$ --- the supremum of the number of cuts of a linear order of size $\lambda$ --- equals the supremum of the number of dense types in a model of RCF of size $\lambda$. This is useful for the calculation of the directionality of RCF. \subsection{Acknowledgment} We would like to thank the anonymous referee for many useful comments and for suggesting that we apply the method used for calculating the directionality of RCF to valued fields (in the previous version it was only shown that certain valued fields are not small). We would also like to thank Marcus Tressl, with whom we discussed the directionality of RCF, and Pierre Simon and Immanuel Halupczok for discussing valued fields with us. \subsection{Notation} When $\alpha$ and $\beta$ are ordinals, we use left exponentiation $\leftexp{\beta}{\alpha}$ to denote the set of functions from $\beta$ to $\alpha$, so as not to confuse it with ordinal (or cardinal) exponentiation. If there is no room for confusion, and $A$ and $B$ are some sets, we use $A^{B}$ instead. The set $\alpha^{<\beta}$ is the set of sequences (functions) $\bigcup\set{\leftexp{\gamma}{\alpha}}{\gamma<\beta}$. We do not distinguish elements and tuples unless we say so explicitly. $\mathfrak{C}$ will be the monster model of the theory. $S_{n}\left(A\right)$ is the set of all complete types in $n$ variables over $A$. $S_{<\omega}\left(A\right)$ is the union $\bigcup_{n<\omega}S_{n}\left(A\right)$. $S\left(A\right)$ is the set of all types (perhaps with infinitely many variables) over $A$. For a set of formulas with a partition of variables, $\Delta\left(x,y\right)$, $L_{\Delta}\left(A\right)$ is the set of formulas of the form $\varphi\left(x,a\right),\neg\varphi\left(x,a\right)$ where $\varphi\left(x,y\right)\in\Delta$ and $a\in A$. $S_{\Delta}\left(A\right)$ is the set of all complete $L_{\Delta}\left(A\right)$-types.
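As a toy instance of these $\Delta$-type spaces: over a finite parameter set $A$ inside a dense linear order, the single formula $x<y$ realizes only $\left|A\right|+1$ complete $\Delta$-types, a miniature of the polynomial bound on $\left|S_{\Delta}\left(A\right)\right|$ that characterizes dependent theories (used later in this paper). The code and the numbers are our own illustration, not from the text.

```python
# Counting realized Delta-types for Delta = {x < y} over a finite
# parameter set A inside the rationals.
from fractions import Fraction

def realized_lt_types(A, realizations):
    """Distinct complete {x<y}-types over A realized by the given elements."""
    return {tuple(x < a for a in sorted(A)) for x in realizations}

A = [1, 3, 7, 9]
# sample densely enough around A to realize every consistent type
samples = [Fraction(k, 2) for k in range(-2, 25)]
types = realized_lt_types(A, samples)
print(len(types), len(A) + 1)   # here |S_Delta(A)| = |A| + 1: prints "5 5"
```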
Similarly we may define $\operatorname{tp}_{\Delta}\left(b/A\right)$ as the set of formulas $\varphi\left(x,a\right)$ such that $\varphi\left(x,y\right)\in\Delta$ and $\mathfrak{C}\models\varphi\left(b,a\right)$. For a partial type $p\left(x\right)$ over $A$, $p\upharpoonright\Delta=p\cap L_{\Delta}\left(A\right)$. Usually we want to consider a set of formulas $\Delta$ without specifying a partition of the variables. In this case, for a tuple of variables $x$, $\Delta^{x}$ is a set of partitioned formulas induced from $\Delta$ by partitioning the formulas in $\Delta$ to $\left(x,y\right)$ in all possible ways. Then $L_{\Delta}^{x}\left(A\right)$ is just $L_{\Delta^{x}}\left(A\right)$ and $S_{\Delta}^{x}\left(A\right)$, $p\upharpoonright\Delta^{x}$ are defined similarly. If $x$ is clear from the context, we omit it. So for instance, when $p$ is a type in $x$ over $A$, then $p\upharpoonright\Delta$ is the set of all formulas $\varphi\left(x,a\right)$ where $\varphi\left(z,w\right)\in\Delta$. \section{\label{sec:Indisc}Few indiscernibles} \subsection{Introduction} \begin{defn} Let $T$ be a theory. For a cardinal $\kappa$, $n\leq\omega$ and an ordinal $\delta$, $\kappa\to\left(\delta\right)_{T,n}$ means: for every set $A\subseteq\mathfrak{C}^{n}$ of size $\kappa$, there is a non-constant sequence of elements of $A$ of length $\delta$ which is indiscernible. \end{defn} This definition was suggested by Grossberg and Shelah in \cite[pg. 208, Definition 3.1(2)]{sh:d} with a slightly different form \footnote{The definition there is: $\kappa\to\left(\delta\right)_{T,n}$ if and only if for each sequence of length $\kappa$ (of $n$-tuples), there is an indiscernible sub-sequence of length $\delta$. For us there is no difference because we are dealing with examples where $\kappa\not\to\left(\mu\right)_{T,n}$. It is also not hard to see that when $\delta$ is an infinite cardinal these two definitions are equivalent. }. 
There it is also conjectured: \begin{conjecture} \label{conj:existence of indiscernibles-1}\cite[pg. 209, Conjecture 3.3]{sh:d} If $T$ is dependent then for every cardinal $\mu$ there is some cardinal $\lambda$ such that $\lambda\to\left(\mu\right)_{T,1}$. \end{conjecture} In stable theories this holds: it is known that for any $\lambda$ satisfying $\lambda=\lambda^{\left|T\right|}$, $\lambda^{+}\to\left(\lambda^{+}\right)_{T,n}$ (proved by Shelah in \cite{Sh:c}; it follows from local character of non-forking). In \cite[pg. 209]{sh:d} it is proved that this conjecture does not hold for simple unstable theories. In \cite{Sh863}, Shelah proved this conjecture for strongly dependent theories: \begin{fact} \label{fac:TStrongly} If $T$ is strongly dependent (see Definition \ref{def:StronglyDep} below), then for all $\lambda\geq\left|T\right|$, $\beth_{\left|T\right|^{+}}\left(\lambda\right)\to\left(\lambda^{+}\right)_{T,n}$ for all $n<\omega$. \end{fact} This conjecture is connected to a result by Shelah and Cohen: in \cite{ShCo919}, they proved that a theory is stable if and only if it can be presented, in some sense, in a free algebra in a fixed vocabulary but allowing function symbols with infinite arity. If this result could be extended to ``a theory is dependent if and only if it can be represented as an algebra with ordering'', then this could be used to prove existence of indiscernibles. In this section, we shall show: \begin{thm} \label{thm:IntoMain}There is a countable dependent theory $T$ such that if $\kappa$ is smaller than the first inaccessible cardinal, then for all $n\in\omega$, $\kappa\not\to\left(\omega\right)_{T,n}$. \end{thm} Thus, Conjecture \ref{conj:existence of indiscernibles-1} fails in a model of ZFC with no inaccessible cardinals. It appears in a more precise form as Theorem \ref{thm:MainThm} below.
An even stronger result can be obtained, namely: \begin{fact} \cite{KaSh975}\label{thm:general result}For every $\theta$ there is a dependent theory $T$ of size $\theta$ such that for all $\kappa$ and $\delta$, \textup{$\kappa\to\left(\delta\right)_{T,1}$ if and only if $\kappa\to\left(\delta\right)_{\theta}^{<\omega}$, } \end{fact} where: \begin{defn} $\kappa\to\left(\delta\right)_{\theta}^{<\omega}$ means: for every function $c:\left[\kappa\right]^{<\omega}\to\theta$ there is a homogeneous sub-sequence of length $\delta$ (i.e., there exist $\left\langle \alpha_{i}\left|\, i<\delta\right.\right\rangle \in\leftexp{\delta}{\kappa}$ and $\left\langle c_{n}\left|\, n<\omega\right.\right\rangle \in\leftexp{\omega}{\theta}$ such that $c\left(\alpha_{i_{0}},\ldots,\alpha_{i_{n-1}}\right)=c_{n}$ for every $i_{0}<\cdots<i_{n-1}<\delta$). \end{defn} By \cite{KaSh975}, whenever $\left|T\right|\leq\theta$, $\kappa\to\left(\delta\right)_{\theta}^{<\omega}$ always implies that $\kappa\to\left(\delta\right)_{T,n}$ for all $n<\omega$, so this is the best possible result. However, the proof of Fact \ref{thm:general result} is considerably harder, so it is given in a subsequent work. The second part of this section is devoted to giving a related example in the field of real numbers. By Fact \ref{fac:TStrongly}, as RCF is strongly dependent, we cannot prove Theorem \ref{thm:IntoMain} for RCF, but instead we show that the requirement that $n<\omega$ is necessary: \begin{thm} \label{thm:IntoMainRCF}If $\kappa$ is smaller than the first strongly inaccessible cardinal, then $\kappa\not\to\left(\omega\right)_{RCF,\omega}$. \end{thm} This is Theorem \ref{thm:MainThmRCF} below. \subsubsection*{Notes} We were unaware that in 2011 Kuda{\u\i}bergenov proved a related result, which refutes a strong version of Conjecture \ref{conj:existence of indiscernibles-1}, namely that $\beth_{\omega+\omega}\left(\mu+\left|T\right|\right)\to\left(\mu\right)_{T,1}$.
He proved that for every ordinal $\alpha$ there exists a dependent theory (we have not checked whether it is strongly dependent) $T_{\alpha}$ such that $\left|T_{\alpha}\right|=\left|\alpha\right|+\aleph_{0}$ and $\beth_{\alpha}\left(\left|T_{\alpha}\right|\right)\not\to\left(\aleph_{0}\right)_{T_{\alpha},1}$, which seems to indicate that the bound in Fact \ref{fac:TStrongly} is tight. See \cite{russianConjecture}. \subsubsection*{The idea of the construction} The counterexample is a ``tree of trees'' with functions connecting the different trees. For every $\eta$ in the tree $2^{<\omega}$ we shall have a predicate $P_{\eta}$ and an ordering $<_{\eta}$ such that $\left(P_{\eta},<_{\eta}\right)$ is a dense tree. In addition we shall have functions $G_{\eta,\eta\frown\left\langle i\right\rangle }:P_{\eta}\to P_{\eta\frown\left\langle i\right\rangle }$ for $i=0,1$. The idea is to prove that $\kappa\not\to\left(\mu\right)_{T,1}$ by induction on $\kappa$. To use the induction hypothesis, we push the counterexamples we already have for smaller $\kappa$'s to deeper levels in the tree $2^{<\omega}$. \subsection{Preliminaries} We shall need the following fact: \begin{fact} \label{fac:DepPolBd} \cite[II, 4]{Sh:c} Let $T$ be any theory. Then for every $n<\omega$, $T$ is dependent if and only if $\square_{n}$ if and only if $\square_{1}$, where:\end{fact} \begin{itemize} \item [$\square_n$] For every finite set of formulas $\Delta\left(x,y\right)$ with $n=\lg\left(x\right)$, there is a polynomial $f$ such that for every finite set $A\subseteq M\models T$, $\left|S_{\Delta}\left(A\right)\right|\leq f\left(\left|A\right|\right)$.
\end{itemize} Since we also discuss strongly dependent theories, here is the definition: \begin{defn} \label{def:StronglyDep}A theory is called \emph{strongly dependent} if there is no sequence of formulas $\left\langle \varphi_{n}\left(x,y_{n}\right)\left|\, n<\omega\right.\right\rangle $ such that the set $\left\{ \varphi_{n}\left(x_{\eta},y_{n,k}\right)^{\eta\left(n\right)=k}\left|\,\eta:\omega\to\omega\right.\right\} $ is consistent with the theory (where $\varphi^{\mbox{True}}=\varphi,\varphi^{\mbox{False}}=\neg\varphi$). \end{defn} See \cite{Sh863} for further discussion of strongly dependent theories. There it is proved that $Th\left(\mathbb{R}\right)$ is strongly dependent, and so is the theory of the $p$-adics. \subsection{The example} Let $S_{n}$ be the finite binary tree $2^{\leq n}$. On a well-ordered tree such as $S_{n}$, we define $<_{\operatorname{suc}}$ as follows: $\eta<_{\operatorname{suc}}\nu$ if $\nu$ is a successor of $\eta$ in the tree. Let $L_{n}$ be the following language: \[ L_{n}=\left\{ P_{\eta},<_{\eta},\wedge_{\eta},G_{\eta,\nu}\left|\,\eta,\nu\in S_{n},\eta<_{\operatorname{suc}}\nu\right.\right\} , \] where: \begin{itemize} \item $P_{\eta}$ is a unary predicate; $<_{\eta}$ is a binary relation symbol; $\wedge_{\eta}$ is a binary function symbol; $G_{\eta,\nu}$ is a unary function symbol. \end{itemize} Let $T_{n}^{\forall}$ be the following theory: \begin{itemize} \item $P_{\eta}\cap P_{\nu}=\emptyset$ for $\eta\neq\nu$. \item $\left(P_{\eta},<_{\eta},\wedge_{\eta}\right)$ is a tree, where $\wedge_{\eta}$ is the meet function on $P_{\eta}$, i.e., \[ x\wedge_{\eta}y=\max\left\{ z\in P_{\eta}\left|\, z\leq_{\eta}x\,\&\, z\leq_{\eta}y\right.\right\} . \] \item $G_{\eta,\nu}:P_{\eta}\to P_{\nu}$, with no further restrictions on it.
\item In all the axioms above, for elements or pairs outside of the domain of any of the functions $\wedge_{\eta}$ or $G_{\eta,\nu}$, these functions are the identity on the leftmost coordinate; so, for example, if $\left(x,y\right)\notin P_{\eta}^{2}$, then $x\wedge_{\eta}y=x$. \end{itemize} Thus we have: \begin{claim} $T_{n}^{\forall}$ is a universal theory. \end{claim} \begin{claim} $T_{n}^{\forall}$ has the joint embedding property (JEP) and the amalgamation property (AP). \end{claim} \begin{proof} Easy to see. \end{proof} From this we deduce, by e.g., \cite[Theorem 7.4.1]{Hod}: \begin{cor} \label{cor:ModelCom}$T_{n}^{\forall}$ has a model completion, $T_{n}$, which eliminates quantifiers, and moreover: if $M\models T_{n+1}^{\forall}$, $M'=M\upharpoonright L_{n}$ and $M'\subseteq N'\models T_{n}^{\forall}$, then $N'$ can be expanded to a model $N$ of $T_{n+1}^{\forall}$ so that $M\subseteq N$. Hence if $M$ is an existentially closed model of $T_{n+1}^{\forall}$, then $M'$ is an e.c. model of $T_{n}^{\forall}$. Hence $T_{n}\subseteq T_{n+1}$ (for more see \cite[Theorem 8.2.4]{Hod}).\end{cor} \begin{proof} The moreover part: for each $\eta\in S_{n+1}\backslash S_{n}$, we define $P_{\eta}^{N}=P_{\eta}^{M}$ and in the same way $\wedge_{\eta}$. The functions $G_{\eta,\nu}^{N}$ for $\eta\in S_{n}$ and $\nu\in S_{n+1}$ will be extensions of $G_{\eta,\nu}^{M}$. \end{proof} Now we show that $T_{n}$ is dependent, but before that, a few easy remarks: \begin{obs} \label{Obs:Bijection} \begin{enumerate} \item If $A\subseteq M\models T_{0}^{\forall}$ is a finite substructure (so just a tree, with no extra structure), then for all $b\in M$, the structure generated by $A$ and $b$ is $A\cup\left\{ b\right\} \cup\left\{ \max\left\{ b\wedge a\left|\, a\in A\right.\right\} \right\} $.
\item If $M\models T_{n}^{\forall}$ and $\eta\in2^{\leq n}$, we can define a new structure $M_{\eta}\models T_{n-\lg\left(\eta\right)}^{\forall}$ whose universe is $\bigcup\left\{ P_{\eta\frown\nu}^{M}\left|\,\nu\in2^{\leq n-\lg\left(\eta\right)}\right.\right\} $ by: $P_{\nu}^{M_{\eta}}=P_{\eta\frown\nu}^{M}$, and in the same way we interpret every other symbol (for instance, $G_{\nu_{1},\nu_{2}}^{M_{\eta}}=G_{\eta\frown\nu_{1},\eta\frown\nu_{2}}^{M}$). For every formula $\varphi\left(x\right)\in L_{n-\lg\left(\eta\right)}$ there is a formula $\varphi'\left(x\right)\in L_{n}$ such that for all $a\in M_{\eta}$, $M\models\varphi'\left(a\right)$ if and only if $M_{\eta}\models\varphi\left(a\right)$ (we get $\varphi'$ by prepending $\eta$ to every symbol). \item For $M$ as before and $\eta\in2^{\leq n}$, for any $k<\omega$ there is a bijection between \[ \left\{ p\left(x_{0},\ldots,x_{k-1}\right)\in S_{k}^{\mathbf{q}e}\left(M\right)\left|\,\forall i<k\left(P_{\eta}\left(x_{i}\right)\in p\right)\right.\right\} \] and \[ \left\{ p\left(x_{0},\ldots,x_{k-1}\right)\in S_{k}^{\mathbf{q}e}\left(M_{\eta}\right)\left|\,\forall i<k\left(P_{\left\langle \right\rangle }\left(x_{i}\right)\in p\right)\right.\right\} . \] \end{enumerate} \end{obs} \begin{proof} (3): The bijection is given by (2). This is well defined, meaning that if $p\left(x_{0},\ldots,x_{k-1}\right)$ is a type over $M_{\eta}$ such that $P_{\left\langle \right\rangle }\left(x_{i}\right)\in p$ for all $i<k$, then $\left\{ \varphi'\left|\,\varphi\in p\right.\right\} $ determines a complete type over $M$ such that $P_{\eta}\left(x_{i}\right)\in p$ for all $i<k$. The point is that all atomic formulas over $M$ which mention elements from $M\backslash M_{\eta}$ or any $\nu\not\geq\eta$ are trivially determined. \end{proof} \begin{prop} $T_{n}$ is dependent.\end{prop} \begin{proof} We use Fact \ref{fac:DepPolBd}.
It is sufficient to find a polynomial $f\left(x\right)$ such that for every finite set $A$, $\left|S_{1}\left(A\right)\right|\leq f\left(\left|A\right|\right)$. First we note that for a set $A$, the size of the structure generated by $A$ is bounded by a polynomial in $\left|A\right|$: it is generated by applying $\wedge_{\left\langle \right\rangle }$ on $P_{\left\langle \right\rangle }\cap A$, applying $G_{\left\langle \right\rangle ,\left\langle 1\right\rangle }$ and $G_{\left\langle \right\rangle ,\left\langle 0\right\rangle }$, then applying $\wedge_{\left\langle 0\right\rangle },\wedge_{\left\langle 1\right\rangle }$, and so on. Every step in the process is polynomial, and it ends after $n$ steps. Hence we can assume that $A$ is a substructure, i.e., $A\models T_{n}^{\forall}$. The proof is by induction on $n$. To ease notation, we shall omit the subscript $\eta$ from $<_{\eta}$ and $\wedge_{\eta}$. First we deal with the case $n=0$. In $T_{0}$, $P_{\left\langle \right\rangle }$ is a tree with no extra structure, while outside $P_{\left\langle \right\rangle }$ there is no structure at all. The number of types outside $P_{\left\langle \right\rangle }$ is bounded by $\left|A\right|+1$ (because there is only one non-algebraic type). In the case that $P_{\left\langle \right\rangle }\left(x\right)\in p$ for some type $p$ over $A$, we can characterize $p$ by characterizing the (tree) order-type of $x':=\max\left\{ a\wedge x\left|\, a\in A\right.\right\} $, i.e., the cut that $x'$ induces on the tree, and by knowing whether $x'=x$ or $x>x'$ (we note that, in general, every theory of a tree is dependent by \cite{Parigot}). Now assume that the claim is true for $n$. Suppose $\eta\in2^{\leq n+1}$ and $1\leq\lg\left(\eta\right)$.
By Observation \ref{Obs:Bijection}(3), there is a bijection between the types $p\left(x\right)$ over $A$ where $P_{\eta}\left(x\right)\in p$ and the types $p\left(x\right)$ in $T_{n+1-\lg\left(\eta\right)}$ over $A_{\eta}$ where $P_{\left\langle \right\rangle }\left(x\right)\in p$. $A_{\eta}\models T_{n+1-\lg\left(\eta\right)}^{\forall}$, and so by the induction hypothesis, the number of types over $A_{\eta}$ is bounded by a polynomial in $\left|A_{\eta}\right|\leq\left|A\right|$. As the number of types $p\left(x\right)$ such that $P_{\eta}\left(x\right)\notin p$ for all $\eta$ is bounded by $\left|A\right|+1$ as in the previous case, we are left with checking the number of types $p\left(x\right)$ such that $P_{\left\langle \right\rangle }\left(x\right)\in p$. In order to describe $p$, we first have to describe $p$ restricted to the language $\left\{ <_{\left\langle \right\rangle },\wedge_{\left\langle \right\rangle }\right\} $, and this is polynomially bounded. Let $x'=\max\left\{ a\wedge x\left|\, a\in A\right.\right\} $. By Observation \ref{Obs:Bijection}(1), if $A\cup\left\{ x\right\} $ is not closed under $\wedge_{\left\langle \right\rangle }$, $x'$ is the only new element in the structure generated by $A\cup\left\{ x\right\} $ in $P_{\left\langle \right\rangle }$. Hence we are left to determine the type of the pairs $\left(G_{\left\langle \right\rangle ,\left\langle i\right\rangle }\left(x\right),G_{\left\langle \right\rangle ,\left\langle i\right\rangle }\left(x'\right)\right)$ over $A$ for $i=0,1$ (if $x'$ is not new, then it is enough to determine the type of $G_{\left\langle \right\rangle ,\left\langle i\right\rangle }\left(x\right)$). The number of these types is equal to the number of types of pairs in $T_{n}$ over $A_{\left\langle i\right\rangle }$. As $T_{n}$ is dependent, we are done by Fact \ref{fac:DepPolBd}.\end{proof} \begin{defn} Let $L=\bigcup_{n<\omega}L_{n}$, $T=\bigcup_{n<\omega}T_{n}$ and $T^{\forall}=\bigcup_{n<\omega}T_{n}^{\forall}$.
\end{defn} We easily have: \begin{cor} $T$ is complete, eliminates quantifiers and is dependent. \end{cor} We shall prove the following theorem (which implies Theorem \ref{thm:IntoMain} from the introduction): \begin{thm} \label{thm:MainThm}For any two cardinals $\mu\leq\kappa$ such that in $\left[\mu,\kappa\right]$ there are no (uncountable) strongly inaccessible cardinals, $\kappa\not\to\left(\mu\right)_{T,1}$. \end{thm} We shall prove a slightly stronger statement, by induction on $\kappa$: \begin{prop} Given $\mu$ and $\kappa$ such that either $\kappa<\mu$ or there are no (uncountable) strongly inaccessible cardinals in $\left[\mu,\kappa\right]$, there is a model $M\models T^{\forall}$ such that $\left|P_{\left\langle \right\rangle }^{M}\right|\geq\kappa$ and $P_{\left\langle \right\rangle }^{M}$ does not contain a non-constant indiscernible sequence (for quantifier-free formulas) of length $\mu$. \end{prop} From now on, indiscernible will only mean ``indiscernible for quantifier-free formulas''. \begin{proof} Fix $\mu$. The proof is by induction on $\kappa$. We divide into cases: \begin{casenv} \item $\kappa<\mu$. Clear. \item $\kappa=\mu=\aleph_{0}$. Denote $\eta_{j}=\left\langle 1,\ldots,1\right\rangle $, i.e., the constant sequence of length $j$ and value $1$. Find $M\models T^{\forall}$ such that its universe contains a set $\left\{ a_{i,j}\left|\, i,j<\omega\right.\right\} $ where $a_{i,j}\neq a_{i',j'}$ for all $\left(i,j\right)\neq\left(i',j'\right)$, $a_{i,j}\in P_{\eta_{j}}$ and in addition $G_{\eta_{j},\eta_{j+1}}\left(a_{i,j}\right)=a_{i,j+1}$ if $j<i$ and $G_{\eta_{j},\eta_{j+1}}\left(a_{i,j}\right)=a_{0,j+1}$ otherwise. We also need that $P_{\left\langle \right\rangle }^{M}=\left\{ a_{i,0}\left|\, i<\omega\right.\right\} $. Any model satisfying these properties will do (so no need to specify what the tree structures are).
Now, if in $P_{\left\langle \right\rangle }^{M}=\left\{ a_{i,0}\left|\, i<\omega\right.\right\} $ there is a non-constant indiscernible sequence, $\left\langle a_{i_{k},0}\left|\, k<\omega\right.\right\rangle $, then for $j\geq i_{0},i_{1}$, \[ G_{\eta_{j},\eta_{j+1}}\circ\cdots\circ G_{\eta_{0},\eta_{1}}\left(a_{i_{0},0}\right)=G_{\eta_{j},\eta_{j+1}}\circ\cdots\circ G_{\eta_{0},\eta_{1}}\left(a_{i_{1},0}\right). \] But for every $k$ such that $i_{k}>j$, $G_{\eta_{j},\eta_{j+1}}\circ\cdots\circ G_{\eta_{0},\eta_{1}}\left(a_{i_{1},0}\right)\neq G_{\eta_{j},\eta_{j+1}}\circ\cdots\circ G_{\eta_{0},\eta_{1}}\left(a_{i_{k},0}\right)$ --- contradiction. \item $\kappa$ is singular. Suppose $\kappa=\bigcup_{i<\sigma}\lambda_{i}$ where $\left\langle \lambda_{i}\left|\, i<\sigma\right.\right\rangle $ is increasing and $\sigma,\lambda_{i}<\kappa$ for all $i<\sigma$. By the induction hypothesis, for $i<\sigma$ there is a model $M_{i}\models T^{\forall}$ such that $\left|P_{\left\langle \right\rangle }^{M_{i}}\right|\geq\lambda_{i}$ and in $P_{\left\langle \right\rangle }^{M_{i}}$ there is no non-constant indiscernible sequence of length $\mu$. Also, there is a model $N$ such that $\left|P_{\left\langle \right\rangle }^{N}\right|\geq\sigma$ and in $P_{\left\langle \right\rangle }^{N}$ there is no non-constant indiscernible sequence of length $\mu$. We may assume that the universes of all these models are pairwise disjoint and disjoint from $\kappa$. Suppose that $\left\{ a_{i}\left|\, i<\sigma\right.\right\} \subseteq P_{\left\langle \right\rangle }^{N}$, and $\left\{ b_{j}\left|\,\sum_{l<i}\lambda_{l}\leq j<\lambda_{i}\right.\right\} \subseteq P_{\left\langle \right\rangle }^{M_{i}}$ witness that $\left|P_{\left\langle \right\rangle }^{N}\right|\geq\sigma$ and $\left|P_{\left\langle \right\rangle }^{M_{i}}\right|\geq\left|\lambda_{i}\backslash\sum_{l<i}\lambda_{l}\right|$. Let $\bar{M}$ be a model extending each $M_{i}$ whose universe contains the disjoint union $\bigcup_{i<\sigma}M_{i}$ (exists by JEP).
Define a new model $M\models T^{\forall}$: $\left(P_{\left\langle \right\rangle }^{M},<_{\left\langle \right\rangle }\right)=\left(\kappa,<\right)$ (so $\wedge_{\left\langle \right\rangle }=\min$); $\left(P_{\left\langle 1\right\rangle \frown\eta}^{M},<_{\left\langle 1\right\rangle \frown\eta}\right)=\left(P_{\eta}^{N},<_{\eta}\right)$ and $\left(P_{\left\langle 0\right\rangle \frown\eta}^{M},<_{\left\langle 0\right\rangle \frown\eta}\right)=\left(P_{\eta}^{\bar{M}},<_{\eta}\right)$. In the same way define $\wedge_{\eta}$ for all $\eta$ of length $\geq1$. The functions are also defined in the same way: $G_{\left\langle 1\right\rangle \frown\eta,\left\langle 1\right\rangle \frown\nu}^{M}=G_{\eta,\nu}^{N}$ and $G_{\left\langle 0\right\rangle \frown\eta,\left\langle 0\right\rangle \frown\nu}^{M}=G_{\eta,\nu}^{\bar{M}}$. We are left to define $G_{\left\langle \right\rangle ,\left\langle 0\right\rangle }$ and $G_{\left\langle \right\rangle ,\left\langle 1\right\rangle }$. So let: $G_{\left\langle \right\rangle ,\left\langle 1\right\rangle }\left(\alpha\right)=a_{\min\left\{ i\left|\,\alpha<\lambda_{i}\right.\right\} }$ and $G_{\left\langle \right\rangle ,\left\langle 0\right\rangle }\left(\alpha\right)=b_{\alpha}$ for all $\alpha<\kappa$. Note that if $I$ is an indiscernible sequence contained in $P_{\left\langle 1\right\rangle }^{M}$ then $I$ is an indiscernible sequence in $N$ contained in $P_{\left\langle \right\rangle }^{N}$, and the same is true for $P_{\left\langle 0\right\rangle }^{M}$ and $\bar{M}$. Assume $\left\langle \alpha_{j}\left|\, j<\mu\right.\right\rangle $ is an indiscernible sequence in $P_{\left\langle \right\rangle }^{M}$. Then $\left\langle G_{\left\langle \right\rangle ,\left\langle 1\right\rangle }\left(\alpha_{j}\right)\left|\, j<\mu\right.\right\rangle $ is a constant sequence (by the choice of $N$). So there is $i<\sigma$ such that $\sum_{l<i}\lambda_{l}\leq\alpha_{j}<\lambda_{i}$ for all $j<\mu$.
So $\left\langle G_{\left\langle \right\rangle ,\left\langle 0\right\rangle }\left(\alpha_{j}\right)=b_{\alpha_{j}}\left|\, j<\mu\right.\right\rangle $ is a constant sequence (it is indiscernible in $P_{\left\langle \right\rangle }^{\bar{M}}$ and in fact contained in $P_{\left\langle \right\rangle }^{M_{i}}$), hence $\left\langle \alpha_{j}\left|\, j<\mu\right.\right\rangle $ is constant, as we wanted. \item $\kappa$ is regular uncountable. By the hypothesis of the proposition, $\kappa$ is not strongly inaccessible, so there is some $\lambda<\kappa$ such that $2^{\lambda}\geq\kappa$. By the induction hypothesis on $\lambda$, there is a model $N\models T^{\forall}$ such that in $P_{\left\langle \right\rangle }^{N}$ there is no non-constant indiscernible sequence of length $\mu$. Let $\left\{ a_{i}\left|\, i\leq\lambda\right.\right\} \subseteq P_{\left\langle \right\rangle }^{N}$ witness that $\left|P_{\left\langle \right\rangle }^{N}\right|\geq\lambda$. Define $M\models T^{\forall}$ as follows: $P_{\left\langle \right\rangle }^{M}=2^{\leq\lambda}$ and the ordering is inclusion (equivalently, the ordering is by initial segment). $\wedge_{\left\langle \right\rangle }$ is defined naturally: $f\wedge_{\left\langle \right\rangle }g=f\upharpoonright\min\left\{ \alpha\left|\, f\left(\alpha\right)\neq g\left(\alpha\right)\right.\right\} $ when $f$ and $g$ are incomparable, and $f\wedge_{\left\langle \right\rangle }g=f$ when $f\subseteq g$. For all $\eta$, let $P_{\left\langle 1\right\rangle \frown\eta}^{M}=P_{\eta}^{N}$, and the ordering and the functions are naturally induced from $N$. The main point is that we set $G_{\left\langle \right\rangle ,\left\langle 1\right\rangle }\left(f\right)=a_{\lg\left(f\right)}$. Now choose $P_{\left\langle 0\right\rangle \frown\eta}^{M}$, $G_{\left\langle 0\right\rangle \frown\eta,\left\langle 0\right\rangle \frown\nu}$, etc. arbitrarily, and let $G_{\left\langle \right\rangle ,\left\langle 0\right\rangle }$ be any function.
Suppose that $\left\langle f_{i}\left|\, i<\mu\right.\right\rangle $ is a non-constant indiscernible sequence: If $f_{1}<f_{0}$ (i.e., $f_{1}<_{\left\langle \right\rangle }f_{0}$), we get an infinite decreasing sequence in a well-ordered tree --- a contradiction. If $f_{0}<f_{1}$, $\left\langle f_{i}\left|\, i<\mu\right.\right\rangle $ is increasing, so $\left\langle G_{\left\langle \right\rangle ,\left\langle 1\right\rangle }^{M}\left(f_{i}\right)=a_{\lg\left(f_{i}\right)}\left|\, i<\mu\right.\right\rangle $ is non-constant --- contradiction (as it is an indiscernible sequence in $M$ and hence in $N$, contained in $P_{\left\langle \right\rangle }^{N}$). Let $h_{i}=f_{0}\wedge f_{i+1}$ for $i<\mu$ (where $\wedge=\wedge_{\left\langle \right\rangle }$). This is an indiscernible sequence, and by the same arguments, it cannot increase or decrease, but as $h_{i}\leq f_{0}$, and $\left(P_{\left\langle \right\rangle },<_{\left\langle \right\rangle }\right)$ is a tree, it follows that $h_{i}$ is constant. Assume $f_{0}\wedge f_{1}<f_{1}\wedge f_{2}$; then $f_{2i}\wedge f_{2i+1}<f_{2\left(i+1\right)}\wedge f_{2\left(i+1\right)+1}$ for all $i<\mu$, and again $\left\langle f_{2i}\wedge f_{2i+1}\left|\, i<\mu\right.\right\rangle $ is an increasing indiscernible sequence and we have a contradiction. By the same reasoning, it cannot be that $f_{0}\wedge f_{1}>f_{1}\wedge f_{2}$. As $\left(P_{\left\langle \right\rangle },<_{\left\langle \right\rangle }\right)$ is a tree, we conclude that $f_{0}\wedge f_{2}=f_{0}\wedge f_{1}=f_{1}\wedge f_{2}$. But that is a contradiction (because if $\alpha=\lg\left(f_{0}\wedge f_{1}\right)$, then $\left|\left\{ f_{0}\left(\alpha\right),f_{1}\left(\alpha\right),f_{2}\left(\alpha\right)\right\} \right|=3$). \end{casenv} \end{proof} \subsection{In RCF there are few indiscernibles of $\omega$-tuples.} Here we will prove Theorem \ref{thm:IntoMainRCF}.
Since RCF is strongly dependent, Fact \ref{fac:TStrongly} (which discusses finite tuples) holds for it, so we will show that a phenomenon similar to the one in the previous section holds for $\omega$-tuples in RCF. So assume $\mathfrak{C}\models RCF$. \begin{notation} The set of all open intervals $\left(a,b\right)$ (where $a<b$ and $a,b\in\mathfrak{C}$) is denoted by $\mathfrak{I}$. \end{notation} \begin{defn} \label{def:arrowInterval}For a cardinal $\kappa$, $n\leq\omega$ and an ordinal $\delta$, $\kappa\to\left(\delta\right)_{n}^{\operatorname{interval}}$ means: for every set $A$ of size $\kappa$ consisting of $n$-tuples of (non-empty, open) intervals (so for each $\bar{I}\in A$, $\bar{I}=\left\langle I^{i}\left|\, i<n\right.\right\rangle \in\mathfrak{I}^{n}$), there is a sequence $\left\langle \bar{I}_{\alpha}\left|\,\alpha<\delta\right.\right\rangle \in A^{\delta}$ of order type $\delta$ such that $\bar{I}_{\alpha}\neq\bar{I}_{\beta}$ for $\alpha<\beta<\delta$, and there is a sequence $\left\langle \bar{b}_{\alpha}\left|\,\alpha<\delta\right.\right\rangle $ such that $\bar{b}_{\alpha}\in\bar{I}_{\alpha}$ (i.e., $\bar{b}_{\alpha}=\left\langle b_{\alpha}^{0},\ldots,b_{\alpha}^{n-1}\right\rangle $ and $b_{\alpha}^{i}\in I_{\alpha}^{i}$) and such that $\left\langle \bar{b}_{\alpha}\left|\,\alpha<\delta\right.\right\rangle $ is an indiscernible sequence. \end{defn} \begin{rem} Note that: \begin{enumerate} \item If $\kappa\to\left(\delta\right)_{n}^{\operatorname{interval}}$ then $\kappa\to\left(\delta\right)_{m}^{\operatorname{interval}}$ for all $m\leq n$. \item If $\kappa\not\to\left(\delta\right)_{n}^{\operatorname{interval}}$ then $\kappa\not\to\left(\delta\right)_{RCF,n}$ (why? if $A$ witnesses that $\kappa\not\to\left(\delta\right)_{n}^{\operatorname{interval}}$, then for each $\bar{I}\in A$, choose $\bar{b}_{\bar{I}}\in\bar{I}$ (as above) in such a way that $\left\{ \bar{b}_{\bar{I}}\left|\,\bar{I}\in A\right.\right\} $ has size $\kappa$.
By definition this set witnesses $\kappa\not\to\left(\delta\right)_{RCF,n}$). \item If $\lambda<\kappa$ and $\kappa\not\to\left(\delta\right)_{n}^{\operatorname{interval}}$ then $\lambda\not\to\left(\delta\right)_{n}^{\operatorname{interval}}$. \end{enumerate} \end{rem} We shall prove the following theorem (which immediately implies Theorem \ref{thm:IntoMainRCF}): \begin{thm} \label{thm:MainThmRCF}For any two cardinals $\mu\leq\kappa$ such that in $\left[\mu,\kappa\right]$ there are no strongly inaccessible cardinals, $\kappa\not\to\left(\mu\right)_{\omega}^{\operatorname{interval}}$. \end{thm} The proof follows from a sequence of claims: \begin{claim} \label{cla:Obvious}If $\kappa<\mu$ then $\kappa\not\to\left(\mu\right)_{n}^{\operatorname{interval}}$ for all $n\leq\omega$.\end{claim} \begin{proof} Obvious.\end{proof} \begin{claim} \label{cla:aleph0}If $\kappa=\mu=\aleph_{0}$ then $\kappa\not\to\left(\mu\right)_{1}^{\operatorname{interval}}$.\end{claim} \begin{proof} For $n<\omega$, let $I_{n}=\left(n,n+1\right)$. If $\left\langle b_{k}\left|\, k<\omega\right.\right\rangle $ were an indiscernible sequence with $b_{k}\in I_{n_{k}}$ for pairwise distinct $n_{k}$, then for every $m<\omega$ the formula $x_{1}-x_{0}<m$ would have the same truth value on all increasing pairs; this is impossible, since every difference $b_{l}-b_{k}$ (for $n_{k}<n_{l}$) is below some standard integer, but no single $m$ bounds all of them. \end{proof} \begin{claim} \label{cla:Singular}Suppose $\kappa=\sum_{i<\sigma}\lambda_{i}$ with $\left\langle \lambda_{i}\left|\, i<\sigma\right.\right\rangle $ increasing and $n\leq\omega$. Then, if $\sigma\not\to\left(\mu\right)_{n}^{\operatorname{interval}}$ and $\lambda_{i}\not\to\left(\mu\right)_{n}^{\operatorname{interval}}$ for all $i<\sigma$, then $\kappa\not\to\left(\mu\right)_{2+2n}^{\operatorname{interval}}$.\end{claim} \begin{proof} By assumption, we have a set of intervals $\left\{ \bar{R}_{i}\left|\, i<\sigma\right.\right\} $ that witness $\sigma\not\to\left(\mu\right)_{n}^{\operatorname{interval}}$ and for each $i<\sigma$ we have $\left\{ \bar{S}_{\beta}\left|\,\sum_{j<i}\lambda_{j}\leq\beta<\lambda_{i}\right.\right\} $ that witness $\left|\lambda_{i}\backslash\sum_{j<i}\lambda_{j}\right|\not\to\left(\mu\right)_{n}^{\operatorname{interval}}$. Fix an increasing sequence of elements $\left\langle b_{i}\left|\, i<\sigma\right.\right\rangle $.
For $\alpha<\kappa$, let $\beta=\beta\left(\alpha\right)=\min\left\{ i<\sigma\left|\,\alpha<\lambda_{i}\right.\right\} $ and for $i<2+2n$, define: \begin{itemize} \item If $i=0$, let $I_{\alpha}^{i}=\left(b_{2\beta\left(\alpha\right)},b_{2\beta\left(\alpha\right)+1}\right)$. \item If $i=1$, let $I_{\alpha}^{i}=\left(b_{2\beta\left(\alpha\right)+1},b_{2\beta\left(\alpha\right)+2}\right)$. \item If $i=2k+2$, let $I_{\alpha}^{i}=R_{\beta\left(\alpha\right)}^{k}$. \item If $i=2k+3$, let $I_{\alpha}^{i}=S_{\alpha}^{k}$. \end{itemize} Suppose $\left\langle \bar{b}_{\varepsilon}\left|\,\varepsilon<\mu\right.\right\rangle $ is an indiscernible sequence such that $\bar{b}_{\varepsilon}\in\bar{I}_{\alpha_{\varepsilon}}$ for $\varepsilon<\mu$. Denote $\bar{b}_{\varepsilon}=\left\langle b_{0}^{\varepsilon},\ldots,b_{2+2n-1}^{\varepsilon}\right\rangle $. Note that $b_{1}^{\varepsilon}<b_{0}^{\varepsilon'}$ if and only if $\beta\left(\alpha_{\varepsilon}\right)<\beta\left(\alpha_{\varepsilon'}\right)$ (we need two intervals for the ``only if'' direction). Hence $\left\langle \beta\left(\alpha_{\varepsilon}\right)\left|\,\varepsilon<\mu\right.\right\rangle $ is increasing or constant. But if it is increasing then we have a contradiction to the choice of $\left\{ \bar{R}_{i}\left|\, i<\sigma\right.\right\} $. So it is constant, and suppose $\beta\left(\alpha_{\varepsilon}\right)=i_{0}$ for all $\varepsilon<\mu$. But then $\alpha_{\varepsilon}\in\lambda_{i_{0}}\backslash\sum_{j<i_{0}}\lambda_{j}$ for all $\varepsilon<\mu$ and we get a contradiction to the choice of $\left\langle \bar{S}_{\beta}\left|\,\sum_{j<i_{0}}\lambda_{j}\leq\beta<\lambda_{i_{0}}\right.\right\rangle $. \end{proof} \begin{claim} \label{cla:Regular}Suppose $\lambda\not\to\left(\mu\right)_{n}^{\operatorname{interval}}$. Then $2^{\lambda}\not\to\left(\mu\right)_{4+2n}^{\operatorname{interval}}$.
\end{claim} \begin{proof} \renewcommand{\qedsymbol}{}Suppose $\left\{ \bar{I}_{\alpha}\left|\,\alpha<\lambda\right.\right\} $ witnesses that $\lambda\not\to\left(\mu\right)_{n}^{\operatorname{interval}}$. By adding two intervals to each $\bar{I}_{\alpha}$, we can ensure that it has the extra property that if $\bar{c}_{1}\in\bar{I}_{\alpha_{1}}$ and $\bar{c}_{2}\in\bar{I}_{\alpha_{2}}$ then $c_{1}^{1}<c_{2}^{0}$ if and only if $\alpha_{1}<\alpha_{2}$ (as in the previous claim). By this we have increased the length of $\bar{I}_{\alpha}$ to $2+n$ (and it now witnesses $\lambda\not\to\left(\mu\right)_{2+n}^{\operatorname{interval}}$). We write $\bar{c}_{1}<^{*}\bar{c}_{2}$ for $c_{1}^{1}<c_{2}^{0}$, but note that it is not really an ordering (it is not transitive in general). We shall find below a four-place definable function $f$ such that: \begin{itemize} \item [$\heartsuit$] For every two ordinals $\delta,\zeta$, if $\left\langle \bar{R}_{\alpha}\left|\,\alpha<\delta\right.\right\rangle $ is a sequence of $\zeta$-tuples of intervals, then there exists a set of $2\zeta$-tuples of intervals, $\left\{ \bar{S}_{\eta}\left|\,\eta\in\leftexp{\delta}2\right.\right\} $ (of size $2^{\left|\delta\right|}$) such that for all $i<\zeta$ and $\eta_{1}\neq\eta_{2}$, if $b_{1}\in S_{\eta_{1}}^{2i},b_{2}\in S_{\eta_{1}}^{2i+1}$ and $b_{3}\in S_{\eta_{2}}^{2i},b_{4}\in S_{\eta_{2}}^{2i+1}$ then $f\left(b_{1},b_{2},b_{3},b_{4}\right)$ is in $R_{\lg\left(\eta_{1}\wedge\eta_{2}\right)}^{i}$. \end{itemize} Apply $\heartsuit$ to our situation to get $\left\{ \bar{J}_{\eta}\left|\,\eta\in\leftexp{\lambda}2\right.\right\} $ such that $\bar{J}_{\eta}=\left\langle J_{\eta}^{i}\left|\, i<4+2n\right.\right\rangle $ and for all $k<2+n$ and $\eta_{1}\neq\eta_{2}$, if $b_{1}\in J_{\eta_{1}}^{2k},b_{2}\in J_{\eta_{1}}^{2k+1}$ and $b_{3}\in J_{\eta_{2}}^{2k},b_{4}\in J_{\eta_{2}}^{2k+1}$ then $f\left(b_{1},b_{2},b_{3},b_{4}\right)$ is in $I_{\lg\left(\eta_{1}\wedge\eta_{2}\right)}^{k}$.
This is enough (the reasons are exactly as in the regular case of the proof of Theorem \ref{thm:MainThm}, but we shall repeat them for clarity): To simplify notation, we regard $f$ as a function on tuples, so that if $\bar{b}_{1}\in\bar{J}_{\eta_{1}},\bar{b}_{2}\in\bar{J}_{\eta_{2}}$ then $f\left(\bar{b}_{1},\bar{b}_{2}\right)$ is in $\bar{I}_{\lg\left(\eta_{1}\wedge\eta_{2}\right)}$ (namely, $f\left(\bar{b}_{1},\bar{b}_{2}\right)=\left\langle a_{k}\left|\, k<2+n\right.\right\rangle $ where $a_{k}=f\left(b_{2k}^{1},b_{2k+1}^{1},b_{2k}^{2},b_{2k+1}^{2}\right)\in I_{\lg\left(\eta_{1}\wedge\eta_{2}\right)}^{k}$ for $k<2+n$). Suppose $\left\langle \eta_{i}\left|\, i<\mu\right.\right\rangle \subseteq\leftexp{\lambda}2$ is without repetitions and $\left\langle \bar{b}_{\eta_{i}}\left|\, i<\mu\right.\right\rangle $ is an indiscernible sequence such that $\bar{b}_{\eta_{i}}\in\bar{J}_{\eta_{i}}$. Let $h_{i}=\eta_{0}\wedge\eta_{i+1}$ for $i<\mu$. If $\lg\left(h_{i}\right)<\lg\left(h_{j}\right)$ for some $i\neq j$ then $f\left(\bar{b}_{\eta_{0}},\bar{b}_{\eta_{i+1}}\right)<^{*}f\left(\bar{b}_{\eta_{0}},\bar{b}_{\eta_{j+1}}\right)$, and so by indiscernibility $\left\langle \lg\left(h_{i}\right)\left|\, i<\mu\right.\right\rangle $ is increasing (it cannot be decreasing), and so the sequence $\left\langle f\left(\bar{b}_{\eta_{0}},\bar{b}_{\eta_{i+1}}\right)\left|\, i<\mu\right.\right\rangle $ contradicts our choice of $\left\langle \bar{I}_{\alpha}\left|\,\alpha<\lambda\right.\right\rangle $. Hence (because $h_{i}\leq\eta_{0}$) $h_{i}$ is constant.
Assume $\eta_{0}\wedge\eta_{1}<\eta_{1}\wedge\eta_{2}$; then $f\left(\bar{b}_{\eta_{0}},\bar{b}_{\eta_{1}}\right)<^{*}f\left(\bar{b}_{\eta_{1}},\bar{b}_{\eta_{2}}\right)$, so by indiscernibility $f\left(\bar{b}_{\eta_{1}},\bar{b}_{\eta_{2}}\right)<^{*}f\left(\bar{b}_{\eta_{2}},\bar{b}_{\eta_{3}}\right)$, and so $\lg\left(\eta_{0}\wedge\eta_{1}\right)<\lg\left(\eta_{1}\wedge\eta_{2}\right)<\lg\left(\eta_{2}\wedge\eta_{3}\right)$, hence $f\left(\bar{b}_{\eta_{0}},\bar{b}_{\eta_{1}}\right)<^{*}f\left(\bar{b}_{\eta_{2}},\bar{b}_{\eta_{3}}\right)$ and it follows that $\left\langle \lg\left(\eta_{2i}\wedge\eta_{2i+1}\right)\left|\, i<\mu\right.\right\rangle $ is increasing. And this is again a contradiction. Similarly, it cannot be that $\eta_{0}\wedge\eta_{1}>\eta_{1}\wedge\eta_{2}$. As both sides are less than or equal to $\eta_{1}$, it must be that $\eta_{0}\wedge\eta_{2}=\eta_{0}\wedge\eta_{1}=\eta_{1}\wedge\eta_{2}$. But that is impossible (because if $\alpha=\lg\left(\eta_{0}\wedge\eta_{1}\right)$, then $\left|\left\{ \eta_{0}\left(\alpha\right),\eta_{1}\left(\alpha\right),\eta_{2}\left(\alpha\right)\right\} \right|=3$). \begin{claim*} $\heartsuit$ is true. \end{claim*} \begin{proof} \renewcommand{\qedsymbol}{$\square$}Let $f\left(x,y,z,w\right)=\left(x-z\right)/\left(y-w\right)$ (do not worry about division by $0$; we shall explain below). It is enough, by the definition of $\heartsuit$, to assume $\zeta=1$. By compactness, we may assume that $\delta$ is finite, and to avoid confusion, denote it by $m$. So we have a finite set, $\leftexp m2$, and a sequence of intervals $\left\langle R_{i}\left|\, i<m\right.\right\rangle $. Each $R_{i}$ is of the form $\left(a_{i},b_{i}\right)$. Let $c_{i}=\left(b_{i}+a_{i}\right)/2$. Let $d\in\mathfrak{C}$ be any element greater than any member of $A:=\operatorname{acl}\left(a_{i},b_{i}\left|\, i<m\right.\right)$. For each $\eta\in\leftexp m2$, let $a_{\eta}=\sum_{i<m}\eta\left(i\right)c_{i}d^{m-i}$, and $b_{\eta}=\sum_{i<m}\eta\left(i\right)d^{m-i}$.
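For orientation, here is a small worked computation (added for illustration; it is not part of the original proof), using only $f$, the $c_{i}$, $d$ and the definitions of $a_{\eta},b_{\eta}$ just given, in the case $m=2$:

```latex
% Worked instance with m = 2 (illustration only; not in the original).
Take $\eta_{1}=\left\langle 1,0\right\rangle $ and $\eta_{2}=\left\langle 1,1\right\rangle $,
so $\eta_{1}\wedge\eta_{2}=\left\langle 1\right\rangle $ and
$\lg\left(\eta_{1}\wedge\eta_{2}\right)=1$. Then
\[
a_{\eta_{1}}-a_{\eta_{2}}=c_{0}d^{2}-\left(c_{0}d^{2}+c_{1}d\right)=-c_{1}d,\qquad
b_{\eta_{1}}-b_{\eta_{2}}=-d.
\]
If $b_{1},b_{3}$ are within distance $1$ of $a_{\eta_{1}},a_{\eta_{2}}$ and $b_{2},b_{4}$
are within distance $1$ of $b_{\eta_{1}},b_{\eta_{2}}$, then
$f\left(b_{1},b_{2},b_{3},b_{4}\right)=\left(b_{1}-b_{3}\right)/\left(b_{2}-b_{4}\right)$
lies between values of the form $\left(-c_{1}d\pm2\right)/\left(-d\pm2\right)$, each of
which differs from $c_{1}$ by less than every positive element of $A$ (as $d$ exceeds
every member of $A$); hence $f\left(b_{1},b_{2},b_{3},b_{4}\right)\in R_{1}$.
```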
Let $S_{\eta}^{0}=\left(a_{\eta}-1,a_{\eta}+1\right)$ and $S_{\eta}^{1}=\left(b_{\eta}-1,b_{\eta}+1\right)$. This works: Assume that $\eta_{1}\neq\eta_{2}$, $b_{1}\in S_{\eta_{1}}^{0},b_{2}\in S_{\eta_{1}}^{1}$ and $b_{3}\in S_{\eta_{2}}^{0},b_{4}\in S_{\eta_{2}}^{1}$. We have to show $\left(b_{1}-b_{3}\right)/\left(b_{2}-b_{4}\right)\in R_{\lg\left(\eta_{1}\wedge\eta_{2}\right)}$. Denote $k=\lg\left(\eta_{1}\wedge\eta_{2}\right)$ (so $k<m$). $a_{\eta_{1}}-a_{\eta_{2}}$ is of the form $\varepsilon c_{k}d^{m-k}+F\left(d\right)$ where $\varepsilon\in\left\{ -1,1\right\} $, and $F\left(d\right)$ is a polynomial over $A$ of degree $\leq m-k-1$. $b_{\eta_{1}}-b_{\eta_{2}}$ is of the form $\varepsilon d^{m-k}+G\left(d\right)$, where $\varepsilon$ is the same for both (and $G$ is a polynomial over $\mathbb{Z}$ of degree $\leq m-k-1$). Now, $b_{1}-b_{3}\in\left(a_{\eta_{1}}-a_{\eta_{2}}-2,a_{\eta_{1}}-a_{\eta_{2}}+2\right)$, and $b_{2}-b_{4}\in\left(b_{\eta_{1}}-b_{\eta_{2}}-2,b_{\eta_{1}}-b_{\eta_{2}}+2\right)$, and hence we know that $b_{2}-b_{4}\neq0$. It follows that $\left(b_{1}-b_{3}\right)/\left(b_{2}-b_{4}\right)$ is inside an interval whose endpoints are $\left\{ \left(\varepsilon c_{k}d^{m-k}+F\left(d\right)\pm2\right)/\left(\varepsilon d^{m-k}+G\left(d\right)\pm2\right)\right\} $. But \[ \left(\varepsilon c_{k}d^{m-k}+F\left(d\right)\pm2\right)/\left(\varepsilon d^{m-k}+G\left(d\right)\pm2\right)\in R_{k} \] by our choice of $d$, and we are done. Note that for $\eta_{1}\neq\eta_{2}$, $S_{\eta_{1}}^{1}\neq S_{\eta_{2}}^{1}$ regardless of the $R_{i}$'s (which may all be the same interval). \end{proof} \end{proof} The proof of Theorem \ref{thm:MainThmRCF} now follows by induction on $\kappa$: fix $\mu$, and let $\kappa$ be the first cardinal for which the theorem fails. Then by Claim \ref{cla:Obvious}, $\kappa\geq\mu$. By Claim \ref{cla:aleph0}, $\aleph_{0}<\kappa$. By Claim \ref{cla:Singular}, $\kappa$ cannot be singular.
By Claim \ref{cla:Regular}, $\kappa$ cannot be regular, because if it were, there would be a $\lambda<\kappa$ such that $2^{\lambda}\geq\kappa$ (because $\kappa$ is not strongly inaccessible). Note that we did use Claim \ref{cla:Obvious} to deal with cases where we could not use the induction hypothesis (for example, in the regular case, it might be that $\lambda<\mu$). \subsubsection*{Further remarks} Theorem \ref{thm:MainThmRCF} can be generalized to allow parameters: Suppose $\mathfrak{C}\models RCF$, and $A\subseteq\mathfrak{C}$. \begin{defn} $\kappa\to_{A}\left(\mu\right)_{\omega}^{\operatorname{interval}}$ means the same as in Definition \ref{def:arrowInterval}, but we require that the indiscernible sequence is indiscernible over $A$. \end{defn} Then we have: \begin{thm} For any set of parameters $A$ and any two cardinals $\mu,\kappa$ such that either $\kappa<\max\left\{ \left|A\right|,\mu\right\} $ or there are no strongly inaccessible cardinals in $\left[\max\left\{ \left|A\right|,\mu\right\} ,\kappa\right]$, we have $\kappa\not\to_{A}\left(\mu\right)_{\omega}^{\operatorname{interval}}$.\end{thm} \begin{proof} The proof goes exactly as the proof of Theorem \ref{thm:MainThmRCF}, but the base case for the induction is different. If $\max\left\{ \left|A\right|,\mu\right\} =\mu$, the proof is exactly the same. Otherwise, we have to deal with the case $\kappa\leq\left|A\right|$: Enumerate $A=\left\{ a_{i}\left|\, i<\mu'\right.\right\} $. Let $\varepsilon\in\mathfrak{C}$ be greater than $0$ but smaller than any positive element of $\operatorname{acl}\left(A\right)$. For $i<\mu'$, let $I_{i}=\left(a_{i},a_{i}+\varepsilon\right)$. Then $\left\{ I_{i}\left|\, i<\kappa\right.\right\} $ witnesses $\kappa\not\to_{A}\left(\mu\right)_{\omega}^{\operatorname{interval}}$.
\end{proof} \section{Generic pair} Here we give an example of an $\omega$-stable theory such that for all weakly generic pairs of structures $M\prec M_{1}$ (see below for the definition) the theory of the pair $\left(M_{1},M\right)$ in an extended language where we name $M$ by a predicate has the independence property. \begin{defn} \label{def:WeaklyGenPair}A pair $\left(M_{1},M\right)$ as above is \emph{weakly generic} if for every formula $\varphi\left(x\right)$ with parameters from $M$, if $\varphi$ has infinitely many solutions in $M$, then it has a solution in $M_{1}\backslash M$. \end{defn} This definition is induced by the well-known ``generic pair conjecture'' (see \cite{Sh:900,Sh950}), and it is worthwhile to give the precise definitions. \begin{defn} \label{def:GenPair}Assume that $\lambda=\lambda^{<\lambda}>\left|T\right|$ (in particular, $\lambda$ is regular) and that $2^{\lambda}=\lambda^{+}$. The \emph{generic pair property }for $\lambda$ says that there exists a saturated model $M$ of cardinality $\lambda^{+}$, an increasing continuous sequence of models $\sequence{M_{\alpha}}{\alpha<\lambda^{+}}$ and a club $E\subseteq\lambda^{+}$ such that $\bigcup_{\alpha<\lambda^{+}}M_{\alpha}=M$ and for all $\alpha<\beta\in E$ of cofinality $\lambda$, the pairs $\left(M_{\beta},M_{\alpha}\right)$ have the same isomorphism type. We call this pair the \emph{generic pair} of $T$ of size $\lambda$.\end{defn} \begin{prop} \label{prop:generic pair depenends only on T}Assume that $\lambda=\lambda^{<\lambda}>\left|T\right|$ and that $2^{\lambda}=\lambda^{+}$. The generic pair property for $\lambda$ holds iff for every saturated model $N$ of cardinality $\lambda^{+}$ and for every increasing continuous sequence of models $\sequence{N_{\alpha}}{\alpha<\lambda^{+}}$ with union $N$ there exists a club $E\subseteq\lambda^{+}$ such that for all $\alpha<\beta\in E$ of cofinality $\lambda$, the pairs $\left(N_{\beta},N_{\alpha}\right)$ have the same isomorphism type.
Moreover, this type does not depend on the particular choice of $N$ or $N_{\alpha}$.\end{prop} \begin{proof} Left to right: Suppose $M$, $\sequence{M_{\alpha}}{\alpha<\lambda^{+}}$ and $E$ witness that $T$ has the generic pair property for $\lambda$. Suppose $N$ is another saturated model of size $\lambda^{+}$ and $\sequence{N_{\alpha}}{\alpha<\lambda^{+}}$ is as in the proposition. Then $N\cong M$, so we may assume $N=M$. Let $E_{0}=\left\{ \delta<\lambda^{+}\left|\, N_{\delta}=M_{\delta}\right.\right\} $. This is a club of $\lambda^{+}$, and so $E\cap E_{0}$ is also a club of $\lambda^{+}$ such that $\left(N_{\beta},N_{\alpha}\right)$ has the same isomorphism type for any $\alpha<\beta\in E\cap E_{0}$ of cofinality $\lambda$. Right to left is clear. \end{proof} Justifying Definition \ref{def:WeaklyGenPair} we have: \begin{claim} Assume that $T$ has the generic pair property for $\lambda$. Then every generic pair of size $\lambda$ is weakly generic.\end{claim} \begin{proof} Suppose that $M$, $\left\langle M_{\alpha}\left|\,\alpha<\lambda^{+}\right.\right\rangle $ and $E$ are as in Definition \ref{def:GenPair}. Suppose $\alpha,\beta\in E$ and $\alpha<\beta$ are of cofinality $\lambda$. We are given a formula $\varphi\left(x\right)$ with parameters from $M_{\alpha}$, such that $\aleph_{0}\leq\left|\varphi\left(M_{\alpha}\right)\right|$. By saturation of $M$, $\lambda^{+}=\left|\varphi\left(M\right)\right|$. Since $M=\bigcup_{\beta'\in E,\operatorname{cf}\left(\beta'\right)=\lambda}M_{\beta'}$ there is some $\alpha<\beta'\in E$ of cofinality $\lambda$ such that $\varphi\left(M_{\beta'}\right)\backslash M_{\alpha}\neq\emptyset$, but as $\left(M_{\beta},M_{\alpha}\right)\cong\left(M_{\beta'},M_{\alpha}\right)$, we are done. \end{proof} Proposition \ref{prop:generic pair depenends only on T} implies that the generic pair property and the generic pair are both natural notions.
It is important in the study of dependent theories as it led to the development of a theory of type decomposition in NIP. Using this theory, the second author proved in \cite{Sh950,Sh:900} that the generic pair property holds for dependent theories and all large enough $\lambda$. On the other hand, it is proved in \cite{Sh877,Sh906} that if $T$ has IP then it lacks the generic pair property for all large enough $\lambda$. Hence it makes sense to ask whether the theory of the pair is dependent. The answer is no: \begin{thm} There exists an $\omega$-stable theory such that for every weakly generic pair of models $M\prec M_{1}$, the theory of the pair $\left(M_{1},M\right)$ has the independence property. \end{thm} We shall describe this theory: Let $L=\left\{ P,R,Q_{1},Q_{2}\right\} $ where $R,P$ are unary predicates and $Q_{1},Q_{2}$ are binary relations. Let $\tilde{M}$ be the following structure for $L$: \begin{enumerate} \item The universe is: \begin{eqnarray*} \tilde{M} & = & \left\{ u\subseteq\omega\left|\,\left|u\right|<\omega\right.\right\} \cup\\ & & \left\{ \left(u,v,i\right)\left|\, u,v\subseteq\omega,\left|u\right|<\omega,\left|v\right|<\omega,i<\omega\,\&\,\left(u\subseteq v\Rightarrow i<\left|v\right|+1\right)\right.\right\} . \end{eqnarray*} \item The predicates are interpreted as follows: \begin{itemize} \item $P^{\tilde{M}}=\left\{ u\subseteq\omega\left|\,\left|u\right|<\aleph_{0}\right.\right\} $. \item $R^{\tilde{M}}=\tilde{M}\backslash P^{\tilde{M}}$. \item $Q_{1}^{\tilde{M}}=\left\{ \left(u,\left(u,v,i\right)\right)\left|\, u\in P^{\tilde{M}}\right.\right\} $. \item $Q_{2}^{\tilde{M}}=\left\{ \left(v,\left(u,v,i\right)\right)\left|\, v\in P^{\tilde{M}}\right.\right\} $. \end{itemize} \end{enumerate} Let $T=Th\left(\tilde{M}\right)$.
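To illustrate the definition of $\tilde{M}$, the following small example (added here for the reader; it is not part of the original text) shows how the bound on the third coordinate applies exactly when $u\subseteq v$:

```latex
% Small illustrative example of elements of \tilde{M} (added; not in the original).
Take $u=\left\{ 0\right\} $ and $v=\left\{ 0,1\right\} $. Then $u\subseteq v$ and
$\left|v\right|+1=3$, so the elements of $\tilde{M}$ with these first two coordinates
are exactly $\left(u,v,0\right),\left(u,v,1\right),\left(u,v,2\right)$, and each of them
is $Q_{1}$-related to $u$ and $Q_{2}$-related to $v$. On the other hand, for
$u'=\left\{ 2\right\} $ we have $u'\not\subseteq v$, so the implication in the definition
of the universe is vacuous and $\left(u',v,i\right)\in\tilde{M}$ for every $i<\omega$:
over such a pair there are infinitely many points of $R^{\tilde{M}}$.
```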
As we shall see in the next claim, $T$ gives rise to the following definition: \begin{defn} \label{def:PBA}We call a structure $\left(B,\cup,\cap,-,\subseteq,0\right)$ a \emph{pseudo Boolean algebra} (PBA) when it satisfies all the axioms of a Boolean algebra except that there is no greatest element $1$ (i.e., we remove all the axioms concerning it). \end{defn} A pseudo Boolean algebra can have atoms, as in Boolean algebras (nonzero elements that do not contain any smaller nonzero elements). \begin{defn} Say that a PBA is of \emph{finite type} if every element is a union of finitely many atoms. \end{defn} \begin{defn} For a PBA $A$, and $C\subseteq A$ a sub-PBA, let $A_{C}:=\left\{ a\in A\left|\,\exists c\in C\left(a\subseteq^{A}c\right)\right.\right\} $, and for a subset $D\subseteq A$, let $\operatorname{at}\left(D\right)$ be the set of atoms contained in $D$.\end{defn} \begin{prop} \label{pro:ISomPBA}Every PBA of finite type is isomorphic to $\left(\mathcal{P}_{<\infty}\left(\kappa\right),\cup,\cap,-,\subseteq,\emptyset\right)$ for some $\kappa$, where $\mathcal{P}_{<\infty}\left(\kappa\right)$ is the set of all finite subsets of $\kappa$. Moreover, assume $A,B$ are PBAs of finite type and $C\subseteq A,B$ is a common sub-PBA. If: \begin{enumerate} \item $\left|\operatorname{at}\left(A\right)\backslash\operatorname{at}\left(A_{C}\right)\right|=\left|\operatorname{at}\left(B\right)\backslash\operatorname{at}\left(B_{C}\right)\right|$, and \item for every $c\in C$, $A$ and $B$ agree on the size of $c$ (the number of atoms it contains), \end{enumerate} then there is an isomorphism of PBAs $f:A\to B$ such that $f\upharpoonright C=\operatorname{id}$.\end{prop} \begin{proof} The first part follows from the easy observation that in a PBA of finite type, every element has a unique presentation as a union of finitely many atoms. So if $A$ is a PBA, and its set of atoms is $\left\{ a_{i}\left|\, i<\kappa\right.\right\} $, then send $a_{i}$ to $\left\{ i\right\} $. 
For the moreover part, first we extend $\operatorname{id}_{C}$ to an isomorphism from $A_{C}$ to $B_{C}$: consider all elements in $C$ of minimal size; these are the atoms of $C$. For each such $c$, map the set of atoms in $A$ contained in $c$ to the set of atoms in $B$ contained in $c$. This is well defined and can be extended to all of $A_{C}$. Now, $\left|\operatorname{at}\left(A\right)\backslash\operatorname{at}\left(A_{C}\right)\right|=\left|\operatorname{at}\left(B\right)\backslash\operatorname{at}\left(B_{C}\right)\right|$, so any bijection between these sets of atoms induces an isomorphism. \end{proof} \begin{claim} \label{cla:omegaStable}$T$ is $\omega$-stable. \end{claim} \begin{proof} We prove that an expansion of $T$ to a larger vocabulary is $\omega$-stable, by adding new relations to the language, which are all definable --- \[ \left\{ S_{n},\subseteq_{n},\pi_{1},\pi_{2},\cap_{n},\cup_{n},-_{n},e\left|\, n\geq1\right.\right\} \] where $S_{n}$ is a unary relation defined on $P$, $\subseteq_{n}$ is a binary relation defined on $P$, $\pi_{1},\pi_{2}$ are two unary functions from $R$ to $P$, $\cap_{n},-_{n}$ are binary functions from $S_{n}$ to $S_{n}$, $\cup_{n}$ is a binary function from $S_{n}$ to $S_{2n}$, and $e$ is a constant in $P$. Their interpretations in $\tilde{M}$ are as follows: \begin{itemize} \item $\pi_{1}\left(\left(u,v,i\right)\right)=u$, $\pi_{2}\left(\left(u,v,i\right)\right)=v$. \item For each $1\leq n<\omega$, $S_{n}\left(v\right)\Leftrightarrow\left|v\right|\leq n$. \item For each $1\leq n$, $u\subseteq_{n}v$ if and only if $\left|u\right|\leq n$, $\left|v\right|\leq n$ and $u\subseteq v$. \item $u\cap_{n}v=u\cap v$ for all $u,v\in S_{n}$. \item $u-_{n}v=u\backslash v$ for $u,v\in S_{n}$. \item $u\cup_{n}v=u\cup v$ for $u,v\in S_{n}$. \item $e=\emptyset$. \end{itemize} Note that they are indeed definable: \begin{enumerate} \item $\pi_{1}\left(x\right)$ is the unique $y$ such that $Q_{1}\left(y,x\right)$, and similarly $\pi_{2}$ is definable. 
\item Let $E\left(x,y\right)$ be an auxiliary equivalence relation defined by $\pi_{1}\left(x\right)=\pi_{1}\left(y\right)\land\pi_{2}\left(x\right)=\pi_{2}\left(y\right)$. \item $e$ is the unique element $x\in P$ such that there exists exactly one element $z\in R$ such that $\pi_{1}\left(z\right)=x=\pi_{2}\left(z\right)$. \item $x\subseteq_{n}y$ is defined by ``$P\left(x\right),P\left(y\right)$ and the number of elements in the $E$-class of some (equivalently, any) element $z$ such that $\pi_{1}\left(z\right)=x$, $\pi_{2}\left(z\right)=y$ is at most $n+1$''. \item $S_{n}\left(x\right)$ is defined by ``$P\left(x\right)$ and $e\subseteq_{n}x$'' (in particular, $e\in S_{n}$ for all $n$). \item $\cap_{n}$ and $-_{n}$ are then naturally definable using $\subseteq_{n}$. For instance $x-_{n}y=z$ if and only if $x,y,z$ are in $S_{n}$, $z\subseteq_{n}x$ and for each $e\neq w\subseteq_{n}y$, $w\nsubseteq_{n}z$. \item $x\cup_{n}y=z$ if and only if $x,y\in S_{n}$, $z\in S_{2n}$, $x,y\subseteq_{2n}z$ and $z-_{2n}x\subseteq_{2n}y$. \end{enumerate} Furthermore, $\mathord{\subseteq_{k}}\upharpoonright S_{n}=\mathord{\subseteq_{n}}$ for $n\leq k$. Hence every model $M$ of $T$ gives rise naturally to an induced PBA: $B^{M}:=\left(\bigcup_{n}S_{n}^{M},\cup^{M},\cap^{M},-^{M},\subseteq^{M},e^{M}\right)$ where $\cup^{M}=\bigcup\left\{ \cup_{n}^{M}\left|\, n<\omega\right.\right\} $, and similarly for $\subseteq^{M},-^{M}$ and $\cap^{M}$ (see Definition \ref{def:PBA} above). \begin{claim*} In the extended language, $T$ eliminates quantifiers.\end{claim*} \begin{proof} Suppose $M,N\models T$ are saturated models, $\left|M\right|=\left|N\right|$ and $A\subseteq M,N$ is a common substructure (where $\left|A\right|<\left|M\right|$). It is enough to show that there is an isomorphism from $M$ to $N$ fixing $A$. 
By Proposition \ref{pro:ISomPBA}, we have an isomorphism $f$ from $B^{M}$ to $B^{N}$ fixing $A\cap B^{M}$ (by saturation and the choice of language, the conditions of the proposition are satisfied). On $P^{M}\backslash\left(B^{M}\cup A\right)$ there is no structure and it has the same size as $P^{N}\backslash\left(B^{N}\cup A\right)$ (namely $\left|N\right|$), so we can extend the isomorphism $f$ to $P^{M}$. We are left with $R^{M}$: let $a\in R^{M}$, and $a_{i}=\pi_{i}\left(a\right)$ for $i=1,2$. We have already defined $f\left(a_{1}\right),f\left(a_{2}\right)$. Suppose $a_{1}\subseteq_{n}a_{2}$ for minimal $n$. Then there are exactly $n+1$ elements $z\in R^{M}$ with $\pi_{1}\left(z\right)=a_{1},\pi_{2}\left(z\right)=a_{2}$. This is true also in $R^{N}$, and the number of such $z$'s not in $A$ is the same for both $M$ and $N$. Hence we can take this $E$-equivalence class from $M$ to the appropriate class in $N$. If not, i.e., $a_{1}\nsubseteq_{n}a_{2}$ for all $n$, then there are infinitely many elements $z$ in $N$ and in $M$ with $\pi_{1}\left(z\right)=a_{1}$, $\pi_{2}\left(z\right)=a_{2}$, and again we take this $E$-class in $M$ outside of $A$ to the appropriate $E$-class in $N$. \end{proof} Now we can conclude the proof by a counting-types argument. Let $M$ be a countable model of $T$. Let $p\left(x\right)$ be a non-algebraic type over $M$. There are three cases: \begin{casenv} \item $S_{n}\left(x\right)\in p$ for some $n$. Then the type is determined by the maximal element $c$ in $M$ such that $c\subseteq_{n}x$ (this is easy, but also follows from the proof of Proposition \ref{pro:ISomPBA}). \item $S_{n}\left(x\right)\notin p$ for all $n$ but $P\left(x\right)\in p$. Then $x$ is already determined --- there is nothing more we can say about $x$. \item $R\left(x\right)\in p$. Then the type of $x$ is determined by the type of $\left(\pi_{1}\left(x\right),\pi_{2}\left(x\right)\right)$ over $M$. 
\end{casenv} So the number of types over $M$ is countable.\end{proof} \begin{prop} Every weakly generic pair of models of $T$ has the independence property.\end{prop} \begin{proof} Suppose $\left(M_{1},M\right)$ is a weakly generic pair. We think of it as a structure in the language $L_{Q}$, where $Q$ is interpreted as $M$. Consider the formula \[ \varphi\left(x,y\right)=P\left(x\right)\land P\left(y\right)\land\exists z\notin Q\left(Q_{1}\left(x,z\right)\land Q_{2}\left(y,z\right)\right). \] This formula has IP: Let $\left\{ a_{i}\left|\, i<\omega\right.\right\} \subseteq M$ be elements from $P^{M}$ such that $a_{i}\in S_{1}^{M}$ (as in the language of the proof of Claim \ref{cla:omegaStable}), i.e., they are atoms in the induced PBA, and $a_{i}\neq a_{j}$ for $i\neq j$. For any finite $s\subseteq\omega$ of size $n$, there is an element $b_{s}\in P^{M}$ (the union of $\left\{ a_{i}\left|\, i\in s\right.\right\} $) such that $a_{i}\subseteq_{n}^{M}b_{s}$ if and only if $i\in s$. Then for all $i\in\omega$, $\varphi\left(a_{i},b_{s}\right)$ holds if and only if $i\notin s$: If $\varphi\left(a_{i},b_{s}\right)$ holds, there are infinitely many $z$'s such that $Q_{1}\left(a_{i},z\right)\wedge Q_{2}\left(b_{s},z\right)$ (otherwise they would all be in $M$). This means that $a_{i}\nsubseteq_{n}^{M}b_{s}$, so $i\notin s$. For the other direction, exactly the same argument works, but this time use the fact that the pair is weakly generic. \end{proof} \section{\label{sec:Directionality}Directionality} \subsection{Introduction} \begin{defn} A global type $p\left(x\right)\in S\left(\mathfrak{C}\right)$ is said to be \emph{finitely satisfiable} in a set $A$, or a \emph{coheir} over $A$, if for every formula $\varphi\left(x,y\right)$, whenever $\varphi\left(x,b\right)\in p$ there is some $a\in A$ such that $\varphi\left(a,b\right)$ holds. 
\end{defn} It is well known (see \cite{Ad}) that a theory $T$ is dependent if and only if given a type $p\left(x\right)\in S\left(M\right)$ over a model $M$, the number of complete global types $q\in S\left(\mathfrak{C}\right)$ that extend $p$ and are finitely satisfiable in $M$ is at most $2^{\left|M\right|}$ (while the maximal number is $2^{2^{\left|M\right|}}$). We analyze the behavior of the number of global coheir extensions in a dependent theory and classify theories by what we call directionality: Say that $T$ has \emph{small} directionality if and only if the number of $\Delta$-coheirs (for a finite set of formulas $\Delta$) that extend a type $p\in S\left(M\right)$ is finite. $T$ has \emph{medium} directionality if this number is $\left|M\right|$, and it has \emph{large} directionality if it is neither small nor medium. In that case we will show that it is at least $\operatorname{ded}\left|M\right|$. We give an equivalent definition in terms of the number of global coheir extensions (see Theorem \ref{thm:trichotomy}). As far as we know, the first person to give an example of a dependent theory with large directionality was Delon in \cite{Delon}. We give simple combinatorial examples for each of the possible directionalities, and furthermore we show that RCF and some theories of valued fields are large. We do not always assume that $T$ is dependent in this section. \subsection{Equivalent definitions of directionality} \begin{defn} For a type $p\in S\left(A\right)$, let: \[ \operatorname{uf}\left(p\right)=\left\{ q\in S\left(\mathfrak{C}\right)\left|\, q\mbox{ is a coheir extension of }p\mbox{ over }A\right.\right\} . \] For a partial type $p\left(x\right)$ over a set $A$, and a set of formulas $\Delta$, \[ \operatorname{uf}_{\Delta}\left(p\right)=\left\{ q\left(x\right)\in S_{\Delta}\left(\mathfrak{C}\right)\left|\, q\cup p\mbox{ is f.s. in }A\right.\right\} . \] Note: this definition only makes sense if $p$ is finitely satisfiable in $A$. 
The notation $\operatorname{uf}$ stands for ``ultrafilter''. \end{defn} And here is the main definition of this section: \begin{defn} \label{def:directionality}Let $T$ be any theory, then: \begin{enumerate} \item $T$ is said to have \emph{small} directionality (or just, $T$ is small) if and only if for all finite $\Delta$, $M\models T$ and $p\in S\left(M\right)$, $\operatorname{uf}_{\Delta}\left(p\right)$ is finite. \item $T$ is said to have \emph{medium} directionality (or just, $T$ is medium) if and only if for every $\lambda\geq\left|T\right|$, \[ \lambda=\sup\left\{ \left|\operatorname{uf}_{\Delta}\left(p\right)\right|\left|\, p\in S\left(M\right),\Delta\mbox{ finite},\left|M\right|=\lambda\right.\right\} . \] \item $T$ is said to have \emph{large} directionality (or just, $T$ is large) if $T$ is neither small nor medium. \end{enumerate} \end{defn} \begin{obs} \label{obs:IPLarge} If $T$ has the independence property, then it is large. In fact, if $\varphi\left(x,y\right)$ has the independence property, then there is a type $p\left(x\right)$ over a model $M$ that has $2^{2^{\left|M\right|}}$ many $\left\{ \varphi\left(x,y\right)\right\} $-extensions that are finitely satisfiable in $M$. \end{obs} \begin{proof} We may assume that $T$ has Skolem functions. Let $\lambda\geq\left|T\right|$, and let $\bar{a}=\left\langle a_{i}\left|\, i<\lambda\right.\right\rangle $, $\left\langle b_{s}\left|\, s\subseteq\lambda\right.\right\rangle $ be such that $\left\langle a_{i}\left|\, i<\lambda\right.\right\rangle $ is indiscernible and $\varphi\left(a_{i},b_{s}\right)$ holds iff $i\in s$. Let $M$ be the Skolem hull of $\bar{a}$. Let $p\left(x\right)\in S\left(M\right)$ be the limit of $\bar{a}$ in $M$ (so $\psi\left(x,c\right)\in p$ iff $\psi\left(x,c\right)$ holds for an end segment of $\bar{a}$). Let $P\subseteq\mathcal{P}\left(\lambda\right)$ be an independent family of size $2^{\lambda}$ (i.e., such that every finite Boolean combination has size $\lambda$). 
Then for each $D\subseteq P$, $p\left(x\right)\cup\left\{ \varphi\left(x,b_{s}\right)^{s\in D}\left|\, s\in P\right.\right\} $ is finitely satisfiable in $M$. \end{proof} \subsubsection{Small directionality} The following construction will be useful (here and in Section \ref{sec:Splintering}): \begin{const} \label{const:not small directionality}Let $T$ be any complete theory and $M\models T$. Suppose that there is some $p\in S\left(M\right)$ and finite $\Delta$ such that $\operatorname{uf}_{\Delta}\left(p\right)$ is infinite and contains $\left\{ q_{i}\left|\, i<\omega\right.\right\} $. For all $i<j<\omega$, there is a formula $\varphi_{i,j}\in\Delta$ and $b_{i,j}\in\mathfrak{C}$ such that $\varphi_{i,j}\left(x,b_{i,j}\right)\in q_{i}$, $\neg\varphi_{i,j}\left(x,b_{i,j}\right)\in q_{j}$ (or the other way around). By Ramsey's Theorem we may assume that $\varphi_{i,j}$ is a fixed formula $\varphi\left(x,y\right)$ and that the direction is the same for all pairs. Let $N$ be a model containing $M$ and $\left\{ b_{i,j}\left|\, i<j<\omega\right.\right\} $. Suppose $\left\langle c_{i}\left|\, i<\omega\right.\right\rangle $ are in $\mathfrak{C}$ and $c_{i}\models q_{i}|_{N}$. Let $N'$ be a model containing $\left\{ c_{i}\left|\, i<\omega\right.\right\} \cup N$. Let $M^{*}=\left(N',N,M,Q,\bar{f}\right)$ where $Q=\left\{ c_{i}\left|\, i<\omega\right.\right\} $ and $\bar{f}:Q^{2}\to N$ is a tuple of functions of length $\lg\left(y\right)$ defined by $\bar{f}\left(c_{i},c_{j}\right)=\bar{f}\left(c_{j},c_{i}\right)=b_{i,j}$ for $i<j$. 
So if $N\models Th\left(M^{*}\right)$ then $N=\left(N_{0}',N_{0},M_{0},Q_{0},\bar{f}_{0}\right)$ and \begin{itemize} \item $M_{0}\prec N_{0}\prec N_{0}'\models T$, \item $N_{0}\cup Q_{0}\subseteq N_{0}'$, \item $\bar{f}_{0}$ are functions from $Q_{0}^{2}$ to $N_{0}$, \item For all $c,d\in Q_{0}$, $c\equiv_{M_{0}}d$, \item $\operatorname{tp}_{\Delta}\left(c/N_{0}\right)\cup\operatorname{tp}\left(c/M_{0}\right)$ is finitely satisfiable in $M_{0}$ for all $c\in Q_{0}$, and \item $\varphi\left(c,\bar{f}_{0}\left(c,d\right)\right)\triangle\varphi\left(d,\bar{f}_{0}\left(c,d\right)\right)$ (where $\triangle$ denotes symmetric difference) holds for all $c\neq d\in Q_{0}$. \end{itemize} \end{const} \begin{claim} \label{cla:bounded}Let $T$ be any theory. Then $T$ is small if and only if for every $M\models T$ and every type $p\left(x\right)\in S\left(M\right)$, $\left|\operatorname{uf}\left(p\right)\right|\leq2^{\left|T\right|}$ (here $p$ can also be an infinitary type, but then the bound is $2^{\left|T\right|+\left|\lg\left(x\right)\right|}$). In addition, if $T$ is not small, then for every $\lambda\geq\left|T\right|$, there is a model $M\models T$ of cardinality $\lambda$, a type $p\in S\left(M\right)$, and a finite set of formulas $\Delta$ such that $\left|\operatorname{uf}_{\Delta}\left(p\right)\right|\geq\lambda$.\end{claim} \begin{proof} Assume that $T$ is small. The injective function $\operatorname{uf}\left(p\right)\to\prod_{\varphi\in L}\operatorname{uf}_{\left\{ \varphi\right\} }\left(p\right)$ shows that $\left|\operatorname{uf}\left(p\right)\right|\leq2^{\left|T\right|}$. Conversely (and the ``In addition'' part): Assume that there is some $p$ and $\Delta$ such that $\operatorname{uf}_{\Delta}\left(p\right)$ is infinite. 
Use Construction \ref{const:not small directionality}: For every $\lambda\geq\left|T\right|$ we may find $N\models Th\left(M^{*}\right)$ of size $\lambda$ such that $\left|M_{0}\right|=\left|Q_{0}\right|=\lambda$, and we have a model $M_{0}$ of $T$ with a type $p$ over it which has at least $\lambda$ many $\Delta$-coheirs. \end{proof} We conclude this subsection with a claim on theories with non-small directionality. \begin{claim} Suppose $T$ has medium or large directionality. Then there exist some $M\models T$, $p\in S\left(M\right)$, $\psi\left(x,y\right)$ and $\left\{ c_{i}\left|\, i<\omega\right.\right\} \subseteq\mathfrak{C}$ such that for each $i<\omega$ the set $p\left(x\right)\cup\left\{ \psi\left(x,c_{j}\right)^{j=i}\left|\, j<\omega\right.\right\} $ is finitely satisfiable in $M$. \end{claim} \begin{proof} We consider the structure $M^{*}$ introduced in Construction \ref{const:not small directionality} and the formula $\varphi$ chosen there. Find an elementary extension of $M^{*}$ with an indiscernible sequence $\left\langle d_{i}\left|\, i\in\mathbb{Z}\right.\right\rangle $ inside $Q$. Assume without loss that $\varphi\left(d_{0},\bar{f}\left(d_{0},d_{1}\right)\right)\land\neg\varphi\left(d_{1},\bar{f}\left(d_{0},d_{1}\right)\right)$ holds. This means that $\varphi\left(d_{0},\bar{f}\left(d_{0},d_{1}\right)\right)\land\neg\varphi\left(d_{0},\bar{f}\left(d_{-1},d_{0}\right)\right)$. We claim that $\varphi\left(d_{i},\bar{f}\left(d_{j},d_{j+1}\right)\right)\land\neg\varphi\left(d_{i},\bar{f}\left(d_{j-1},d_{j}\right)\right)$ holds if and only if $i=j$: Suppose this holds but $i\neq j$. If $i>j$ then, since $\neg\varphi\left(d_{1},\bar{f}\left(d_{0},d_{1}\right)\right)$, it must be that $i>j+1$, but then we have a contradiction to indiscernibility. Similarly, it cannot be that $i<j$. 
Thus the claim is proved with $\psi\left(x;y,z\right)=\varphi\left(x,y\right)\land\neg\varphi\left(x,z\right)$, and $c_{i}=\left\langle \bar{f}\left(d_{i},d_{i+1}\right),\bar{f}\left(d_{i-1},d_{i}\right)\right\rangle $. \end{proof} \subsubsection{Some helpful facts about dependent theories} Assume $T$ is dependent. Recall, \begin{defn} A global type $p\left(x\right)$ is \emph{invariant} \emph{over} a set $A$ if it does not split over it, namely if whenever $b$ and $c$ have the same type over $A$, $\varphi\left(x,b\right)\in p$ if and only if $\varphi\left(x,c\right)\in p$ for every formula $\varphi\left(x,y\right)$. \end{defn} \begin{defn} \label{def:tensor}Suppose $p\left(x\right)$ and $q\left(y\right)$ are global $A$-invariant types. Then $\left(q\otimes p\right)\left(x,y\right)$ is a global invariant type defined as follows: for any $B\supseteq A$, let $a_{B}\models p|_{B}$ and $b_{B}\models q|_{Ba_{B}}$; then $q\otimes p=\bigcup_{B\supseteq A}\operatorname{tp}\left(a_{B},b_{B}/B\right)$. One can easily check that it is well defined and $A$-invariant. Let $p^{\left(n\right)}=p\otimes\cdots\otimes p$ where the product is taken $n$ times. So $p^{\left(n\right)}$ is a type in $\left(x_{0},\ldots,x_{n-1}\right)$, and $p^{\left(\omega\right)}=\bigcup_{n<\omega}p^{\left(n\right)}$ is a type in $\left(x_{0},\ldots,x_{n},\ldots\right)$. For $n\leq\omega$, $p^{\left(n\right)}$ is a type of an $A$-indiscernible sequence of length $n$. \end{defn} \begin{fact} \emph{\label{fac:pOmega}}\cite[Lemma 2.5]{HP} If $T$ is NIP then for a set $A$ the map $p\left(x_{0}\right)\mapsto p^{\left(\omega\right)}\left(x_{0},\ldots\right)|_{A}$ from global $A$-invariant types to $\omega$-types over $A$ is injective. \end{fact} In the rest of the section, $\Delta$ will always denote a finite set of formulas, closed under negation. 
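The order-type computation at the end of the proof of the last claim of the previous subsubsection can be checked mechanically: by indiscernibility, the truth value of $\varphi\left(d_{i},\bar{f}\left(d_{j},d_{j+1}\right)\right)$ depends only on the position of $i$ relative to $\left(j,j+1\right)$; two of the four cases are fixed by the choice of $d_{0},d_{1}$, and the formula $\psi$ then picks out the diagonal whatever the two remaining truth values are. A Python sketch of this finite verification (illustration only; the function names are ours):

```python
from itertools import product

def phi(i, j, t_less, t_more):
    """Truth value of phi(d_i, f(d_j, d_{j+1})): by indiscernibility it
    depends only on the order type of i relative to (j, j+1); the cases
    i == j and i == j + 1 are fixed by the choice of d_0, d_1, while
    t_less and t_more are the undetermined values for i < j and i > j + 1."""
    if i == j:
        return True       # phi(d_0, f(d_0, d_1)) was assumed to hold
    if i == j + 1:
        return False      # phi(d_1, f(d_0, d_1)) was assumed to fail
    return t_less if i < j else t_more

def psi(i, j, t_less, t_more):
    # psi(x; y, z) = phi(x, y) & not phi(x, z), evaluated at x = d_i and
    # c_j = (f(d_j, d_{j+1}), f(d_{j-1}, d_j))
    return phi(i, j, t_less, t_more) and not phi(i, j - 1, t_less, t_more)

# whatever the two undetermined truth values are, psi holds exactly on the diagonal
assert all(psi(i, j, t_less, t_more) == (i == j)
           for t_less, t_more in product([True, False], repeat=2)
           for i in range(-5, 6) for j in range(-5, 6))
```

This only checks the finite combinatorics, of course; the model-theoretic content (the indiscernibility of the $d_{i}$ and their existence inside $Q$) is not represented.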
\begin{claim} \label{cla:local types define indiscernibles} For every set $A\subseteq C$, any type $q\left(x\right)\in S_{\Delta}\left(\mathfrak{C}\right)$ which is finitely satisfiable in $A$ and any choice of a coheir $q'\in S\left(\mathfrak{C}\right)$ over $A$ which completes $q$: \begin{itemize} \item $\left(a_{0},\ldots,a_{n-1}\right)\models\left(q'^{\left(n\right)}|_{C}\right)\upharpoonright\Delta$ if and only if $a_{0}\models q|_{C}$, $a_{1}\models q|_{Ca_{0}}$, etc. \end{itemize} This enables us to define $q^{\left(n\right)}\left(x_{0},\ldots,x_{n-1}\right)\in S_{\Delta}\left(\mathfrak{C}\right)$ as $q'^{\left(n\right)}\upharpoonright\Delta$. It follows that $q^{\left(n\right)}$ is a type of a $\Delta$-indiscernible sequence of length $n$. \end{claim} \begin{proof} The proof is by induction on $n$: Right to left: suppose $a_{i}\models q|_{Ca_{0}\cdots a_{i-1}}$ for $i\leq n$, $\varphi\left(x_{0},\ldots,x_{n},y\right)\in\Delta$ and $\varphi\left(x_{0},\ldots,x_{n},c\right)\in q'^{\left(n+1\right)}$ for $c\in C$ but $\neg\varphi\left(a_{0},\ldots,a_{n},c\right)$ holds. Then by the choice of $a_{n}$, $\neg\varphi\left(a_{0},\ldots,a_{n-1},x,c\right)\in q$. Suppose $\left(b_{0},\ldots,b_{n-1}\right)\models q'^{\left(n\right)}|_{C}$; then $\varphi\left(b_{0},\ldots,b_{n-1},x,c\right)\in q$, so there is some $c'\in A$ such that $\varphi\left(b_{0},\ldots,b_{n-1},c',c\right)\land\neg\varphi\left(a_{0},\ldots,a_{n-1},c',c\right)$ holds. But this contradicts the induction hypothesis. Left to right is similar. \end{proof} The following is a local version of Fact \ref{fac:pOmega}, which will be useful later: \begin{prop} \label{prop:localpomega}($T$ dependent) Suppose $\Delta$ is a finite set of formulas, $x$ a finite tuple of variables. 
Then there exist $n<\omega$ and a finite set of formulas $\Delta_{0}$ such that for every set $A$, if $q_{1}\left(x\right),q_{2}\left(x\right)\in S\left(\mathfrak{C}\right)$ are coheirs over $A$ and $\left(q_{1}^{\left(n\right)}\upharpoonright\Delta_{0}\right)|_{A}=\left(q_{2}^{\left(n\right)}\upharpoonright\Delta_{0}\right)|_{A}$ then $q_{1}\upharpoonright\Delta=q_{2}\upharpoonright\Delta$.\end{prop} \begin{proof} By compactness and NIP, \begin{itemize} \item there exist some finite set of formulas $\Delta_{0}$ and some $n$ such that for all $\varphi\left(x,y\right)\in\Delta$ and all $\Delta_{0}$-indiscernible sequences $\left\langle a_{0},\ldots,a_{n-1}\right\rangle $, there is \uline{no} $c$ such that $\varphi\left(a_{i},c\right)$ holds if and only if $i$ is even. We may assume that $\Delta\subseteq\Delta_{0}$. \end{itemize} By Claim \ref{cla:local types define indiscernibles}, we can conclude: \begin{itemize} \item Suppose that $\left(q_{1}^{\left(n\right)}\upharpoonright\Delta_{0}\right)|_{A}=\left(q_{2}^{\left(n\right)}\upharpoonright\Delta_{0}\right)|_{A}$, but $q_{1}\upharpoonright\Delta\neq q_{2}\upharpoonright\Delta$. Then there is some formula $\varphi\left(x,y\right)\in\Delta$ and some $c\in\mathfrak{C}$ such that $\varphi\left(x,c\right)\in q_{1}$ and $\neg\varphi\left(x,c\right)\in q_{2}$. 
Since $\left(q_{1}\upharpoonright\Delta_{0}\right)^{\left(n\right)}|_{A}=\left(q_{2}\upharpoonright\Delta_{0}\right)^{\left(n\right)}|_{A}$, $\left(q_{1}\upharpoonright\Delta_{0}\right)^{\left(m\right)}|_{A}=\left(q_{2}\upharpoonright\Delta_{0}\right)^{\left(m\right)}|_{A}$ for every $m\leq n$, and it follows by induction on $m$ that the sequence defined by $a_{0}\models\left(q_{1}\upharpoonright\Delta_{0}\right)|_{Ac}$, $a_{1}\models\left(q_{2}\upharpoonright\Delta_{0}\right)|_{Aca_{0}}$, $a_{2}\models\left(q_{1}\upharpoonright\Delta_{0}\right)|_{Aca_{0}a_{1}}$, $\ldots$, $a_{m-1}\models\left(q_{i}\upharpoonright\Delta_{0}\right)|_{Aca_{0}\cdots a_{m-2}}$ ($i\in\left\{ 1,2\right\} $) realizes this type. But this entails a contradiction, because $\left\langle a_{0},\ldots,a_{n-1}\right\rangle $ is a $\Delta_{0}$-indiscernible sequence (even over $A$), while $\varphi\left(a_{i},c\right)$ holds if and only if $i$ is even. \end{itemize} \end{proof} \begin{problem} Does Proposition \ref{prop:localpomega} hold for invariant types (not just for coheirs)? \end{problem} \subsubsection{Large directionality and definability } Let us recall the definition of $\operatorname{ded}\lambda$. \begin{defn} \label{def:ded}Let $\operatorname{ded}\lambda$ be the supremum of the set: \[ \left\{ \left|I\right|\left|\, I\mbox{ is a linear order with a dense subset of size }\leq\lambda\right.\right\} . \] \end{defn} \begin{fact} \label{fac:Ded}It is well known that $\lambda<\operatorname{ded}\lambda\leq\left(\operatorname{ded}\lambda\right)^{\aleph_{0}}\leq2^{\lambda}$. If $\lambda^{<\lambda}=\lambda$ then $\operatorname{ded}\lambda=2^{\lambda}$, so $\operatorname{ded}\lambda=\left(\operatorname{ded}\lambda\right)^{\aleph_{0}}=2^{\lambda}$. \end{fact} For more, see Section \ref{sec:Appendix:-dense-types} and \cite[Section 6]{Sh1007}. \begin{defn} \label{def:ExtDefType}Suppose $M$ is a model and $p\in S\left(M\right)$. 
Let $M_{p}$ be $M$ enriched with the externally definable sets defined over a realization of $p$. Namely, we enrich the language to a language $L_{p}$ by adding new relation symbols $\left\{ d_{p}x\varphi\left(x,y\right)\left|\,\varphi\left(x,y\right)\mbox{ is a formula}\right.\right\} $ (so $d_{p}$ is thought of as a quantifier over $x$), and let $M_{p}$ be a structure for $L_{p}$ with universe $M$ where we interpret $d_{p}x\varphi\left(x,y\right)$ as $\left\{ b\in M\left|\,\varphi\left(x,b\right)\in p\right.\right\} $. \end{defn} \begin{rem} \label{rem:TpModel}Every model $N\models Th\left(M_{p}\right)$ gives rise to a complete $L$-type over $N$, namely $p^{N}=\left\{ \varphi\left(b,x\right)\left|\, b\in N,N\models d_{p}x\varphi\left(x,b\right)\right.\right\} $.\end{rem} \begin{claim} \label{cla:Large}Let $T$ be any theory, $M\models T$. Suppose $p\in S\left(M\right)$, $q\in\operatorname{uf}\left(p\right)$, and $\bar{a}=\left\langle a_{0},a_{1},\ldots\right\rangle \models q^{\left(\omega\right)}|_{M}$. If $\operatorname{tp}\left(\bar{a}/M\right)$ is not definable with parameters in $M_{p}$, then $T$ is large. Moreover, in this case \begin{itemize} \item [$\otimes$]There exists a finite $\Delta$ such that for every $\lambda\geq\left|T\right|$, \[ \operatorname{ded}\lambda\leq\sup\left\{ \left|\operatorname{uf}_{\Delta}\left(p\right)\right|\left|\, p\in S\left(N\right),\left|N\right|=\lambda\right.\right\} . \] \end{itemize} \end{claim} \begin{proof} We may assume that $\left|M\right|=\left|L\right|$: let $r=q^{\left(\omega\right)}|_{M}$, and $N\prec M_{r}$, $\left|N\right|=\left|L\right|$. Then $N$ gives rise to a complete type $r'\left(x_{0},x_{1},\ldots\right)\in S\left(N\right)$. Let $p'=r'\upharpoonright x_{0}$. It is easy to see that $r'=q'^{\left(\omega\right)}|_{N}$ for some $q'\in\operatorname{uf}\left(p'\right)$. Also, $r'$ is not definable with parameters in $N_{p'}$. 
Let us recall a theorem from \cite{Sh009} (we formulate it a bit differently): Suppose $L$ is a language of cardinality at most $\lambda$, $P$ a new predicate (or relation symbol), and $S$ a complete theory in $L\left(P\right)$. \begin{defn*} $\operatorname{DfOne}\left(\lambda\right)$ is the supremum of the set of cardinalities: \[ \left|\left\{ B'\subseteq M\left|\,\left(M,B'\right)\cong\left(M,B\right)\right.\right\} \right| \] where $\left(M,B\right)$ is an $L\left(P\right)$-model of $S$ of cardinality $\lambda$. \end{defn*} \begin{thm*} \cite[Theorem 12.4.1]{Sh009,Hod} The following are equivalent \footnote{The original theorem referred to $\operatorname{ded}^{*}\lambda$, which counts the number of branches of the same height in a tree with $\lambda$ many nodes, but it equals $\operatorname{ded}\lambda$, see \cite[Section 6]{Sh1007} and Fact \ref{fac:calculating ded}. }: \begin{enumerate} \item $P$ is not definable with parameters in $S$, i.e., there is no $L$-formula $\theta\left(x,y\right)$ such that $S\models\exists y\forall x\left(P\left(x\right)\leftrightarrow\theta\left(x,y\right)\right)$. \item For every $\lambda\geq\left|L\right|$, $\operatorname{DfOne}\left(\lambda\right)\geq\operatorname{ded}\lambda$. \end{enumerate} \end{thm*} Let $n<\omega$ be the first integer such that $\operatorname{tp}\left(\bar{a}\upharpoonright n/M\right)$ is not definable with parameters in $M_{p}$. So $1<n$ and $r=\operatorname{tp}\left(a_{0},\ldots,a_{n-2}/M\right)$ is definable but $\operatorname{tp}\left(a_{0},\ldots,a_{n-1}/M\right)$ is not. For a formula $\alpha\left(x_{0},\ldots,x_{n-2},y\right)$ let $\left(d_{r}\alpha\right)\left(y\right)$ be a formula in $L\left(M_{p}\right)$ defining $\alpha\left(\bar{a}\upharpoonright\left(n-1\right),M\right)$. 
If $M_{p}\prec N\models Th\left(M_{p}\right)$ then, as in Remark \ref{rem:TpModel}, there is a complete $L$-type $r^{N}\left(x_{0},\ldots,x_{n-2}\right)$ over $N$ defined by $\alpha\left(x_{0},\ldots,x_{n-2},b\right)\in r^{N}$ if and only if $N\models\left(d_{r}\alpha\right)\left(b\right)$. There is some formula $\varphi\left(x_{0},\ldots,x_{n-1},y\right)$ such that the set $B_{0}:=\varphi\left(a_{0},\ldots,a_{n-1},M\right)$ is not definable with parameters in $M_{p}$. Let $S=Th\left(M_{p},B_{0}\right)$ in the language $L\left(M_{p}\right)\left(P\right)$ (naming elements from $M$, so that $N\models S$ implies $M_{p}\prec N$). By the theorem cited above, for every $\lambda\geq\left|L\right|$ and $\kappa<\operatorname{ded}\lambda$, there exists a model $\left(N_{\lambda,\kappa},B_{\lambda,\kappa}\right)=\left(N,B\right)\models S$ of cardinality $\lambda$ such that, letting $\mathcal{B}^{N}=\left\{ B'\left|\,\left(N,B'\right)\cong\left(N,B\right)\right.\right\} $, $\left|\mathcal{B}^{N}\right|>\kappa$. Let $\bar{a}^{N}=\left(a_{0}^{N},\ldots,a_{n-2}^{N}\right)\models r^{N}$ and for every $B'\in\mathcal{B}^{N}$, let \[ q_{B'}=p^{N}\left(x\right)\cup\left\{ \varphi\left(\bar{a}^{N},x,\bar{b}\right)\left|\,\bar{b}\in B'\right.\right\} \cup\left\{ \neg\varphi\left(\bar{a}^{N},x,\bar{b}\right)\left|\,\bar{b}\notin B'\right.\right\} . \] By the choice of $S$, $B$ and $\mathcal{B}^{N}$, $q_{B'}$ is finitely satisfiable in $N$ and for $B'\neq B''\in\mathcal{B}^{N}$, $q_{B'}\upharpoonright\varphi\neq q_{B''}\upharpoonright\varphi$, so $N\upharpoonright L$ is a model of $T$ with a type $p^{N}$ such that $\left|\operatorname{uf}_{\left\{ \varphi\right\} }\left(p^{N}\right)\right|>\kappa$. \end{proof} If $T$ is small we can say more: \begin{claim} Assume $T$ is small, $M\models T$. 
Suppose $p\in S\left(M\right)$, $q\in\operatorname{uf}\left(p\right)$, and $\bar{a}=\left\langle a_{0},a_{1},\ldots\right\rangle \models q^{\left(\omega\right)}|_{M}$; then $\operatorname{tp}\left(\bar{a}/M\right)$ is definable over $\operatorname{acl}^{\operatorname{eq}}\left(\emptyset\right)$ in $M_{p}$.\end{claim} \begin{proof} By Claim \ref{cla:Large} it is definable with parameters in $M_{p}$. Let $n<\omega$ be minimal such that $q^{\left(n\right)}|_{M}$ is not definable over $\operatorname{acl}^{\operatorname{eq}}\left(\emptyset\right)$ in $M_{p}$. Suppose that for some formula $\varphi\left(x_{0},\ldots,x_{n-1},y\right)$, $\operatorname{tp}_{\varphi}\left(\bar{a}/M\right)$ is not definable over $\operatorname{acl}^{\operatorname{eq}}\left(\emptyset\right)$ in $M_{p}$. This means that while the set $\left\{ b\in M\left|\,\mathfrak{C}\models\varphi\left(\bar{a},b\right)\right.\right\} $ is definable by $\psi\left(y,c\right)$ for some $\psi$ in $L\left(M_{p}\right)$, $c\notin\operatorname{acl}^{\operatorname{eq}}\left(\emptyset\right)$. We may assume that $c\in M_{p}^{\operatorname{eq}}$ is the code of this set (for every automorphism $\sigma$ of the monster model $\mathfrak{C}$ of $M_{p}^{\operatorname{eq}}$, $\sigma$ fixes $\psi\left(\mathfrak{C},c\right)$ if and only if $\sigma\left(c\right)=c$). So in some elementary extension $M_{p}\prec N$, there are infinitely many conjugates of $c$ over $\operatorname{acl}^{\operatorname{eq}}\left(\emptyset\right)$, $\left\{ c_{i}\left|\, i<\omega\right.\right\} $, such that $\psi\left(N,c_{i}\right)\neq\psi\left(N,c_{j}\right)$ for $i\neq j$. 
This implies that $\left|\operatorname{uf}_{\left\{ \varphi\right\} }\left(p^{N}\right)\right|\geq\aleph_{0}$, just as in the proof of Claim \ref{cla:Large}.\end{proof} \begin{cor} \label{cor:Medium}($T$ dependent) $T$ is large if and only if for every $\lambda\geq\left|T\right|$, \[ \sup\left\{ \left|\operatorname{uf}_{\Delta}\left(p\right)\right|\left|\, p\in S\left(M\right),\Delta\mbox{ finite},\left|M\right|=\lambda\right.\right\} =\operatorname{ded}\lambda. \] \end{cor} \begin{proof} Suppose $T$ is large, i.e., for some $\Delta$, $M$ of size $\lambda\geq\left|T\right|$ and $p\in S\left(M\right)$, $\left|\operatorname{uf}_{\Delta}\left(p\right)\right|>\lambda$. By Proposition \ref{prop:localpomega}, find some $\Delta_{0}$ and $n$ such that \[ \left|\operatorname{uf}_{\Delta}\left(p\right)\right|\leq\left|\left\{ \left(q^{\left(n\right)}\upharpoonright\Delta_{0}\right)|_{M}\left|\, q\in\operatorname{uf}\left(p\right)\right.\right\} \right|. \] Hence there is some $q\in\operatorname{uf}\left(p\right)$ such that $q^{\left(n\right)}\upharpoonright\Delta_{0}$ is not definable with parameters in $M_{p}$, and we are done by Claim \ref{cla:Large} (note also that $\left|S_{\Delta}\left(M\right)\right|\leq\operatorname{ded}\left|M\right|$ for every finite $\Delta$ in dependent theories; see, e.g., \cite[Theorem 4.3]{Sh10}). \end{proof} \subsubsection{Concluding remarks } \begin{thm} \label{thm:trichotomy}For every theory $T$, \begin{enumerate} \item $T$ is small iff for all $M\models T$, $p\left(x\right)\in S\left(M\right)$, $\left|\operatorname{uf}\left(p\right)\right|\leq2^{\left|T\right|+\left|\lg\left(x\right)\right|}$. \item $T$ is medium iff for all $M\models T$, $p\left(x\right)\in S\left(M\right)$, $\left|\operatorname{uf}\left(p\right)\right|\leq\left|M\right|^{\left|T\right|+\left|\lg\left(x\right)\right|}$ and $T$ is not small. \end{enumerate} \end{thm} \begin{proof} (1) is Claim \ref{cla:bounded}. (2) Left to right is clear.
Conversely, since $T$ is not small, by Claim \ref{cla:bounded}, for all $\lambda\geq\left|T\right|$, \[ \sup\left\{ \left|\operatorname{uf}_{\Delta}\left(p\right)\right|\left|\, p\in S\left(M\right),\Delta\mbox{ finite},\left|M\right|=\lambda\right.\right\} \geq\lambda. \] On the other hand, if it is strictly greater than $\lambda$, then by definition $T$ is large. For $\lambda$ large enough, $\operatorname{ded}\lambda>\lambda=\lambda^{\left|T\right|+\left|\lg\left(x\right)\right|}$, so by Corollary \ref{cor:Medium} we get a contradiction to the right hand side. \end{proof} In Section \ref{sub:examples of directionality}, we will show that these classes are not empty, and thus: \begin{cor} For $\lambda\geq\left|T\right|$, the cardinality \[ \sup\left\{ \left|\operatorname{uf}_{\Delta}\left(p\right)\right|\left|\, p\in S\left(M\right),\Delta\mbox{ finite},\left|M\right|=\lambda\right.\right\} \] takes one of four values: finite\,/\,$\aleph_{0}$; $\lambda$; $\operatorname{ded}\lambda$; $2^{2^{\lambda}}$. These correspond to small, medium and large directionality (the last value occurs when the theory has the independence property, see Observation \ref{obs:IPLarge}). \end{cor} \begin{problem} Suppose $T$ is interpretable in $T'$, and $T$ is large. Does this imply that $T'$ is large or at least not small? \end{problem} \subsection{\label{sub:examples of directionality}Examples of different directionalities.} Here we give examples of the different directionalities. \subsubsection{Small directionality} \begin{example} \label{exa:DLO} $Th\left(\mathbb{Q},<\right)$ has small directionality. In fact, every $1$-type over a model $M$ has at most $2$ global coheirs (corresponding to the two sides of the cut it determines in $M$), and in general, a type $p\left(x_{0},\ldots,x_{n-1}\right)$ is determined by the order type of $\left\{ x_{0},\ldots,x_{n-1}\right\} $ together with the restrictions $p\upharpoonright x_{0}$, $p\upharpoonright x_{1}$, etc. \end{example} \begin{prop} \label{pro:DTRBounded}The theory of dense trees is also small.
\end{prop} \begin{proof} Here $T$ is the model completion of the theory of trees in the language $\left\{ <,\wedge\right\} $. \begin{claim*} Let $M\models T$ and $p\left(x_{0},x_{1},\ldots,x_{n-1}\right)\in S\left(M\right)$ be any type. Then $\bigcup_{i,j<n}p\upharpoonright\left(x_{i},x_{j}\right)\cup p|_{\emptyset}\vdash p$. \end{claim*} \begin{proof} Let $\Sigma=\bigcup_{i,j<n}p\upharpoonright\left(x_{i},x_{j}\right)\cup p|_{\emptyset}$. Suppose $\left(a_{0},\ldots,a_{n-1}\right)\models\Sigma$. By quantifier elimination, the formulas in $p$ are Boolean combinations of formulas of the form $\bigwedge_{k<m}x_{j_{k}}\wedge a\leq\bigwedge_{l<r}x_{r_{l}}\wedge b$ where $a,b\in M$ and $j_{0},\ldots,j_{m-1},r_{0},\ldots,r_{r-1}<n$. If $a$ and $b$ do not appear, $\bar{a}$ satisfies this formula because we included $p|_{\emptyset}$. Consider $\bigwedge_{k<m}x_{j_{k}}\wedge a$: by assumption we know the ordering of $\left\{ x_{j_{k}}\wedge a\left|\, k<m\right.\right\} $ (this set is linearly ordered --- it lies below $a$). Hence, as $\bar{a}\models\Sigma$, $\bigwedge_{k<m}a_{j_{k}}\wedge a$ must be equal to the minimal element of this set, namely $a_{j_{k}}\wedge a$ for some $k$, which is determined by $\Sigma$. Now $a_{j_{k}}\wedge a\leq\bigwedge_{l<r}a_{r_{l}}\wedge b$ holds if and only if for each $l<r$, we have $a_{j_{k}}\wedge a\leq a_{r_{l}}$ and $a_{j_{k}}\wedge a\leq b$, both decided in $\Sigma$. Note that we could get rid of $p|_{\emptyset}$ at the cost of replacing 2-types by 3-types.\end{proof} \begin{claim*} For any $A\subseteq M\models T$, and $p\left(x\right),q\left(y\right)\in S\left(A\right)$, there are only finitely many complete types $r\left(x,y\right)$ that contain both $p$ and $q$. In fact, there is a uniform bound on their number.\end{claim*} \begin{proof} We may assume that $A$ is a substructure.
For any $a\in M$, the structure generated by $A$ and $a$, denoted by $A\left(a\right)$, is just $A\cup\left\{ a_{A},a\right\} $ where $a_{A}=\max\left\{ b\wedge a\left|\, b\in A\right.\right\} $. Note that $a_{A}$ need not exist, but if it does, then it is the only new element apart from $a$ (because if $b_{1}\wedge a<b_{2}\wedge a$ then $b_{1}\wedge a=b_{1}\wedge b_{2}$). Now, let $a\models p$ and $b\models q$. Let $B=B\left(p,q\right)=\left\{ d\in A\left|\, d\leq a\,\&\, d\leq b\right.\right\} $. This set is linearly ordered, and it may have a maximum. If it does, denote it by $m$. Note that $m$ depends only on $p$ and $q$. Now it is easy to show that $\operatorname{tp}\left(a,a_{A},b,b_{A},a\wedge b/m\right)\cup\operatorname{tp}\left(a/A\right)\cup\operatorname{tp}\left(b/A\right)$ determines $\operatorname{tp}\left(a,b/A\right)$ by quantifier elimination. This suffices because the number of types of finite tuples over a finite set is finite. \end{proof} Let $M\models T$, $p\left(\bar{x}\right)\in S\left(M\right)$ and let $I$ be the set of all types $r\left(\bar{x}_{0},\bar{x}_{1},\ldots\right)$ over $M$ such that realizations of $r$ are indiscernible sequences of tuples satisfying $p$. Let $V=\left\{ \left(y_{0},y_{1}\right)\left|\, y_{0},y_{1}\mbox{ are any 2 variables from }\bar{x}_{0},\bar{x}_{1}\right.\right\} $. By the second claim, for any $\left(y_{0},y_{1}\right)\in V$, the set $\left\{ r\upharpoonright\left(y_{0},y_{1}\right)\left|\, r\in I\right.\right\} $ is finite. By the first claim and indiscernibility, the function taking $r$ to $\left(r|_{\emptyset},\left\langle r\upharpoonright\left(y_{0},y_{1}\right)\left|\,\left(y_{0},y_{1}\right)\in V\right.\right\rangle \right)$ is injective. Together, this means that $\left|I\right|\leq2^{\aleph_{0}+\left|\lg\left(\bar{x}\right)\right|}$ and we are done by Fact \ref{fac:pOmega}.
\end{proof} \subsubsection{Medium directionality} \begin{example} Let $L=\left\{ P,Q,H,<\right\} $ where $P$ and $Q$ are unary predicates, $H$ is a unary function symbol and $<$ is a binary relation symbol. Let $T^{\forall}$ be the following theory: \begin{itemize} \item $P\cap Q=\emptyset$. \item $H$ is a function from $P$ to $Q$ (so $H\upharpoonright Q=\operatorname{id}$). \item $\left(P,<,\wedge\right)$ is a tree. \end{itemize} And let $T$ be its model completion (so $T$ eliminates quantifiers). Note that there is no structure on $Q$. So as in Section \ref{sec:Indisc}, $T$ is dependent (this theory is interpretable in the theory there). Let $T'$ be the restriction of $T$ to the language $L'=L\backslash\left\{ H\right\} $. The same ``moreover'' part applies here as in Corollary \ref{cor:ModelCom}, so $T'$ is the model completion of $T^{\forall}\upharpoonright L'$ and also eliminates quantifiers. \end{example} \begin{claim} $T'$ has small directionality.\end{claim} \begin{proof} The only difference between $T'$ and dense trees is the new set $Q$, which has no structure. It is easy to see that this makes no difference. \end{proof} \begin{prop} \label{pro:MediumExample}$T$ has medium directionality.\end{prop} \begin{proof} Let $M\models T$. Let $B$ be a branch in $P^{M}$ (i.e., a maximal linearly ordered set). Let $p\left(x\right)\in S\left(M\right)$ be a complete type containing $\Sigma_{B}:=\left\{ b<x\left|\, b\in B\right.\right\} $. Note that $\Sigma_{B}$ ``almost'' isolates $p$: the only freedom we have is in determining $H\left(x\right)$. So suppose the formula $H\left(x\right)=m$ is in $p$, where $m\in Q^{M}$. Let $c\models p$ (so $c\notin M$). For each $a\in Q^{M}$, let $p_{a}\left(x\right)=p\cup\left\{ H\left(c\wedge x\right)=a\right\} $. Then $p_{a}$ is finitely satisfiable in $M$: Suppose $\Gamma\subseteq p_{a}$ is finite. By quantifier elimination, we may assume that $\Gamma\subseteq\Sigma_{B}\cup\left\{ H\left(c\wedge x\right)=a\right\} \cup\left\{ H\left(x\right)=m\right\} $.
Since $B$ is linearly ordered, we may assume that $\Gamma=\left\{ b<x,H\left(c\wedge x\right)=a,H\left(x\right)=m\right\} $ for some $b\in B$. Since $T$ is the model completion of $T^{\forall}$, which has the amalgamation property, there are two elements $d,e\in M$ such that $b<d,e$, $e\in B$, $d\notin B$, $H\left(d\right)=m$ and $H\left(d\wedge e\right)=a$. Since $d\wedge c=d\wedge e$, $d\models\Gamma$. We have found $\left|Q^{M}\right|$ coheirs of $p$, and since $M$ was arbitrary, $T$ is not small. This gives a lower bound on the directionality of $T$, and we would like to find an upper bound as well. We shall use the same idea as in the proof of Proposition \ref{pro:DTRBounded}. Let $M\models T$, $p\left(\bar{x}\right)\in S\left(M\right)$ and let $I$ be the set of all types $r\left(\bar{x}_{0},\bar{x}_{1},\ldots\right)$ over $M$ such that realizations of $r$ are indiscernible sequences of tuples satisfying $p$. Let $I'=\left\{ r\upharpoonright L'\left|\, r\in I\right.\right\} $. By the proof of Proposition \ref{pro:DTRBounded}, $\left|I'\right|\leq2^{\aleph_{0}+\left|\lg\left(\bar{x}\right)\right|}$. Let $V=\left\{ t\left|\, t\mbox{ is a term in }L\mbox{ in the variables }\bar{x}_{0},\bar{x}_{1},\ldots\mbox{ over }\emptyset\right.\right\} $. Suppose $r\in I$; then, as in the proof of Proposition \ref{pro:DTRBounded}, for every term $t$ such that $P\left(t\right)\in r$, let $t_{r}=\max\left\{ a\wedge t\left|\, a\in M\right.\right\} $ --- a term over $M$ (it need not exist). To determine $r$, it is enough to determine the equations that occur between the images under $H$ of the $t_{r}$'s and the $t$'s over $M$. This shows that $\left|I\right|\leq\left|M\right|^{\aleph_{0}+\left|\lg\left(\bar{x}\right)\right|}$. \end{proof} \subsubsection{Large directionality} \begin{example} \label{exa:Large}Let $L=\left\{ P,Q,H,<_{P},<_{Q}\right\} $ where $P$ and $Q$ are unary predicates, $H$ is a unary function symbol and $<_{P},<_{Q}$ are binary relation symbols.
Let $T^{\forall}$ be the following theory: \begin{itemize} \item $P\cap Q=\emptyset$. \item $H$ is a function from $P$ to $Q$. \item $\left(P,<_{P},\wedge\right)$ is a tree. \item $\left(Q,<_{Q}\right)$ is a linear order. \end{itemize} As before, let $T$ be its model completion. \end{example} \begin{prop} $T$ has large directionality.\end{prop} \begin{proof} This is similar to the proof of Proposition \ref{pro:MediumExample}. Let $M\models T$. Let $B$ be a branch in $P^{M}$. Let $p\left(x\right)\in S\left(M\right)$ be a complete type containing $\Sigma_{B}:=\left\{ b<x\left|\, b\in B\right.\right\} $ and saying that $H\left(x\right)=m$ for some $m\in Q^{M}$. Let $c\models p$ (so $c\notin M$). For each cut $I\subseteq Q^{M}$, let: \[ p_{I}\left(x\right)=p\cup\left\{ e<H\left(c\wedge x\right)<f\left|\, e\in I,f\in Q^{M}\backslash I\right.\right\} . \] Then $p_{I}$ is finitely satisfiable in $M$, as in the proof of Proposition \ref{pro:MediumExample}. So for every cut in $Q^{M}$ we have found a coheir of $p$, and since $M$ was arbitrary, $T$ is neither small nor medium (because for every linear order, we can find a model such that $Q$ contains this order). \end{proof} \subsubsection{RCF} It turns out that even RCF has large directionality, as we now show. That RCF is not small was apparently already known and can be deduced from Marcus Tressl's thesis (see \cite[18.13]{Tressle}), but here we give a direct proof that RCF is in fact large, and even more. \begin{defn} \label{def:dense types}Let $M\models RCF$. A type $p\in S\left(M\right)$ is called \emph{dense} if it is not definable and the differences $b-a$, where $a,b\in M$ and the formula $a<x<b$ is in $p$, are arbitrarily close to $0$ (with respect to $M$). \end{defn} For example, if $R$ is the real closure of $\mathbb{Q}$, then $\operatorname{tp}\left(\pi/R\right)$ is dense. \begin{fact} \label{fac:DenseExist}Any real closed field can be embedded into a real closed field of the same cardinality with some dense type.\end{fact} \begin{proof} {[}due to Marcus Tressl{]} Let $R$ be a real closed field.
Let $S$ be the (real closed) field $R\left(\left(t^{\mathbb{Q}}\right)\right)$ of generalized power series over $R$. Let $K$ be the definable closure of $R\left(t\right)$ in $S$ and let $p$ be the 1-type over $K$ of the formal Taylor series of $e^{t}$: $\operatorname{tp}\left(1+t^{1}/1!+t^{2}/2!+\cdots/K\right)$. Then $p$ is a dense 1-type over $K$. \end{proof} \begin{claim} \label{cla:WeakOrth}Suppose $p$ is dense and $q$ is a definable type over $M\models RCF$ and both are complete. Then $q$ and $p$ are weakly orthogonal, meaning that $p\left(x\right)\cup q\left(y\right)$ implies a complete type over $M$.\end{claim} \begin{proof} {[}Remark: this is an easy and well-known result, but we include a proof for completeness.{]} Let $\omega\models q$ and $\alpha\models p$. Note that since $p$ is not definable over $M$, for every $m\in M$, and even for every $m\in\mathfrak{C}$ such that $\operatorname{tp}\left(m/M\right)$ is definable, there is some $\varepsilon_{m}\in M$ such that $0<\varepsilon_{m}<\left|\alpha-m\right|$. Now, suppose that $\varphi\left(x,y\right)$ is any formula over $M$. Then, as $q$ is definable, there is a formula $\psi\left(x\right):=\left(d_{q}y\right)\varphi\left(x,y\right)$ over $M$ that defines $\varphi\left(M,\omega\right)$. We claim that $p\left(x\right)\cup q\left(y\right)\models\varphi\left(x,y\right)$ if and only if $p\left(x\right)\models\left(d_{q}y\right)\varphi\left(x,y\right)$. We know that $\psi$ is equivalent to a finite union of intervals and points from $M$. We also know that $\varphi\left(\mathfrak{C},\omega\right)$ is such a union, but the types of the end-points over $M$ are definable over $M$ (since we have definable Skolem functions). So denote the set of all these end-points by $A$. Let $0<\varepsilon\in M$ be smaller than $\varepsilon_{m}$ for each $m\in A$. Let $a,b\in M$ be such that $a<\alpha<b$ and $b-a<\varepsilon$.
Then: \begin{itemize} \item $\psi\left(\alpha\right)$ holds if and only if \item $\psi\left(m\right)$ holds for all $m\in M$ such that $a\leq m\leq b$ if and only if \item $\varphi\left(m,\omega\right)$ holds for all $m\in M$ such that $a\leq m\leq b$ if and only if \item $\varphi\left(\alpha,\omega\right)$ holds. \end{itemize} \end{proof} We claim that RCF has large directionality. Moreover, we seem to answer an open question raised in \cite{Delon} (at least in some sense, see below), as she says there (translated from the French): \begin{quotation} But it leaves open the possibility that the bound on the number of coheirs is $\operatorname{ded}\left|M\right|$ in the case of the order property and $\left(\operatorname{ded}\left|M\right|\right)^{\left(\omega\right)}$ in the case of the multiple order property. \end{quotation} So let us make clear what the question means: \begin{defn} (Taken from \cite{KeislerStabFunction,KeislerSixClasses}) A theory $T$ is said to have the \emph{multiple order property} if there are formulas $\varphi_{n}\left(x,y_{n}\right)$ for $n<\omega$ such that the following set of formulas is consistent with $T$: \[ \Gamma=\left\{ \varphi_{n}\left(x_{\eta},y_{n,k}\right)^{\eta\left(k\right)<n}\left|\,\eta\in\leftexp{\omega}{\omega}\right.\right\} . \] \end{defn} \begin{rem} If $T$ is strongly dependent (see Definition \ref{def:StronglyDep}), for example, if $T=RCF$, then it does not have the multiple order property.\end{rem} \begin{proof} Suppose $T$ has the multiple order property as witnessed by formulas $\varphi_{n}$. Consider the formulas $\psi_{n}\left(x,y,z\right)=\varphi_{n}\left(x,y\right)\leftrightarrow\varphi_{n}\left(x,z\right)$. It is easy to see that $\left\{ \psi_{n}\left|\, n<\omega\right.\right\} $ witnesses that the theory is not strongly dependent.
\end{proof} \begin{fact} \cite{KeislerStabFunction} If $T$ is countable and has the multiple order property, then for every cardinal $\lambda$, $\sup\left\{ \left|S\left(M\right)\right|\left|\, M\models T,\left|M\right|=\lambda\right.\right\} \geq\left(\operatorname{ded}\lambda\right)^{\omega}$. If $T$ does not have the multiple order property, then $\sup\left\{ \left|S\left(M\right)\right|\left|\, M\models T,\left|M\right|=\lambda\right.\right\} \leq\operatorname{ded}\lambda$. \end{fact} So the question can be formulated as follows: \begin{itemize} \item Is there a countable theory without the multiple order property such that for every $\lambda\geq\aleph_{0}$, $\sup\left\{ \left|\operatorname{uf}\left(p\right)\right|\left|\, p\in S_{<\omega}\left(M\right),\left|M\right|=\lambda\right.\right\} =\left(\operatorname{ded}\lambda\right)^{\omega}$ (recall that $S_{<\omega}\left(M\right)$ is the set of all finitary types over $M$)? \end{itemize} This is a natural question, for two reasons: \begin{enumerate} \item In general, the number of types (in $\alpha$ variables) over a model of size $\lambda$ in a dependent theory is bounded by $\left(\operatorname{ded}\lambda\right)^{\left|T\right|+\left|\alpha\right|+\aleph_{0}}$ (by \cite[Theorem 4.3]{Sh10}), so by Fact \ref{fac:pOmega} this is an upper bound for $\sup\left\{ \left|\operatorname{uf}\left(p\right)\right|\left|\, p\in S\left(M\right),\left|M\right|=\lambda\right.\right\} $. \item It is very easy to construct an example with the multiple order property that attains this maximum: for example, one can modify Example \ref{exa:Large} and add $\aleph_{0}$ independent orderings to $Q$.\end{enumerate} \begin{defn} For $M\models RCF$, let $S_{\operatorname{dense}}\left(M\right)$ be the set of dense complete types over $M$.\end{defn} \begin{thm} \label{thm:more coheirs than dense types}Suppose $M\models RCF$.
Then there is a type $p\in S_{2}\left(M\right)$ such that $\left|\operatorname{uf}\left(p\right)\right|\geq\left|S_{\operatorname{dense}}\left(M\right)\right|^{\omega}$.\end{thm} \begin{proof} We may assume $S_{\operatorname{dense}}\left(M\right)\neq\emptyset$. Suppose $r_{*}$ is a dense type. Let $\alpha\models r_{*}$, and let $\omega\in\mathfrak{C}$ be an element greater than any element of $M$. Then $q=\operatorname{tp}\left(\omega/M\right)$ is definable and we can apply Claim \ref{cla:WeakOrth}. Let $p(x_{\omega},x_{\alpha})=\operatorname{tp}(\omega,\alpha/M)$. For every dense type $r\left(x\right)$ over $M$, choose a realization $a_{r}\in\mathfrak{C}$. For every sequence $\bar{r}=\left\langle r_{i}\left|\, i<\omega\right.\right\rangle $ of \uline{positive} dense types over $M$ (i.e., $r_{i}\models x>0$), we define a coheir $p_{\bar{r}}$ of $p$ as follows: Fix $\bar{a}=\sequence{a_{r_{i}}}{i<\omega}$. For every sequence $\bar{b}=\left\langle b_{i}\left|\, i<\omega\right.\right\rangle \in M^{\omega}$ such that $r_{i}\models x<b_{i}$ for all $i<\omega$ and for each $n<\omega$, let $f_{n}\left(\bar{a},x\right)=\alpha+\sum_{i=0}^{n}\left(a_{r_{i}}/x^{i+1}\right)$ and $g_{n}\left(\bar{a},\bar{b},x\right)=\alpha+\sum_{i=0}^{n-1}\left(a_{r_{i}}/x^{i+1}\right)+b_{n}/x^{n+1}$. Now, let $p_{\bar{r}}\left(x_{\omega},x_{\alpha}\right)$ be: \begin{eqnarray*} p_{\bar{r}}\left(x_{\omega},x_{\alpha}\right) & = & p\left(x_{\omega},x_{\alpha}\right)\cup\left\{ f_{n}\left(\bar{a},x_{\omega}\right)<x_{\alpha}<g_{n}\left(\bar{a},\bar{b},x_{\omega}\right)\left|\,\bar{b}\mbox{ as above, }n<\omega\right.\right\} . \end{eqnarray*} \begin{claim*} $p_{\bar{r}}\left(x_{\omega},x_{\alpha}\right)$ (which is over $M\cup\left\{ \alpha\right\} \cup\set{a_{r_{i}}}{i<\omega}$) is finitely satisfiable in $M$.
\end{claim*} \begin{proof} Suppose we are given a finite subset $p_{0}\subseteq p_{\bar{r}}\left(x_{\omega},x_{\alpha}\right)$, and a finite set of inequalities $S=\set{f_{k}\left(\bar{a},x_{\omega}\right)<x_{\alpha}<g_{k}\left(\bar{a},\bar{b},x_{\omega}\right)}{k\leq n,\,\bar{b}\in B}$ where $B$ is some finite set of tuples $\sequence{b_{i}}{i\leq n}$ such that $r_{i}\models x<b_{i}$ for $i\leq n$. Let $\bar{b}$ be a tuple $\sequence{b_{i}}{i\leq n}\in M^{n+1}$ such that for $i\leq n$, $r_{i}\models x<b_{i}$ and $b_{i}<b_{i}'$ for any tuple $\sequence{b_{i}'}{i\leq n}\in B$. We may assume that $p_{0}=r_{*,0}\left(x_{\alpha}\right)\cup q_{0}\left(x_{\omega}\right)$ where $r_{*,0}\subseteq r_{*}$ and $q_{0}\subseteq q$. We may assume in addition that both $r_{*,0}$ and $q_{0}$ are intervals over $M$ (i.e., types in the language $\left\{ <\right\} $). Finally, we may assume that $B=\left\{ \bar{b}\right\} $. We will show: \begin{itemize} \item [$\smiley$]For all $o\in M$ large enough, there is some $0<\varepsilon_{o}\in M$ such that for all $k,l\leq n$, $\varepsilon_{o}<g_{k}\left(\bar{a},\bar{b},o\right)-f_{l}\left(\bar{a},o\right)$. \end{itemize} Once $\smiley$ is established, let $o$ be large enough so that it has such an $\varepsilon_{o}$, $o$ satisfies $q_{0}\left(x_{\omega}\right)$ and for every $k\leq n$, $f_{k}\left(\bar{a},o\right),g_{k}\left(\bar{a},\bar{b},o\right)\models r_{*,0}\left(x_{\alpha}\right)$ (so also every element between $f_{k}$ and $g_{k}$). Suppose $l\leq n$ is such that $f_{l}\left(\bar{a},o\right)$ is maximal and $k\leq n$ is such that $g_{k}\left(\bar{a},\bar{b},o\right)$ is minimal. For $i\leq n$, let $c_{i}\in M$ be such that $a_{r_{i}}<c_{i}$ and $c_{i}-a_{r_{i}}<\varepsilon_{o}\cdot\left(o^{i+1}\right)/\left(l+2\right)$ (these exist since the $r_{i}$'s are dense), and let $\alpha<\alpha_{0}\in M$ be such that $\alpha_{0}-\alpha<\varepsilon_{o}/\left(l+2\right)$. Let $d=\alpha_{0}+\sum_{i=0}^{l}\left(c_{i}/o^{i+1}\right)\in M$. 
Then $f_{l}\left(\bar{a},o\right)<d$ and $d-f_{l}\left(\bar{a},o\right)=\left(\alpha_{0}-\alpha\right)+\sum_{i=0}^{l}\left(c_{i}-a_{r_{i}}\right)/o^{i+1}<\varepsilon_{o}$. So $d<g_{k}\left(\bar{a},\bar{b},o\right)$, and so $\left(o,d\right)\models p_{0}$. So we only need to show $\smiley$. It is enough to show that for each $k,l\leq n$, for all large enough $o$, there is some $0<\varepsilon_{o,k,l}\in M$ such that $\varepsilon_{o,k,l}<g_{k}\left(\bar{a},\bar{b},o\right)-f_{l}\left(\bar{a},o\right)$. Suppose $k>l$. In that case, \[ g_{k}\left(\bar{a},\bar{b},o\right)-f_{l}\left(\bar{a},o\right)\geq b_{k}/o^{k+1}>0 \] (since the types $r_{i}$ are positive). Suppose $k\leq l$. So, \[ g_{k}\left(\bar{a},\bar{b},o\right)-f_{l}\left(\bar{a},o\right)=\left(b_{k}-a_{r_{k}}\right)/o^{k+1}-\left(\sum_{i=k+1}^{l}a_{r_{i}}/o^{i+1}\right). \] Since $r_{k}$ is dense, there is some $0<\varepsilon\in M$ such that $\varepsilon<b_{k}-a_{r_{k}}$. Also, there are some $a_{i}'\in M$ such that $a_{r_{i}}<a_{i}'$. The difference above is greater than: \[ \varepsilon/o^{k+1}-\sum_{i=k+1}^{l}a_{i}'/o^{i+1}\in M, \] and for $o$ large enough this number is positive, so let it be $\varepsilon_{o,k,l}$. \end{proof} Note that for $\bar{r}\neq\bar{r}'$, $p_{\bar{r}}\cup p_{\bar{r}'}$ is inconsistent. Also, since $r_{*},r_{*}+1,r_{*}+2,\ldots$ are all dense types, $\left|S_{\operatorname{dense}}\left(M\right)\right|\geq\aleph_{0}$, so the number of positive dense types over $M$ is equal to the number of all dense types over $M$. Together, we are done. \end{proof} We conclude: \begin{cor} \label{thm:RCF-has-large}RCF has large directionality. 
In addition, RCF does not have the multiple order property, but for every $\lambda\geq\aleph_{0}$ with $\operatorname{cof}\left(\operatorname{ded}\lambda\right)>\aleph_{0}$, \[ \sup\left\{ \left|\operatorname{uf}\left(p\right)\right|\left|\, M\models RCF,\, p\in S_{2}\left(M\right),\,\left|M\right|=\lambda\right.\right\} \geq\left(\operatorname{ded}\lambda\right)^{\omega}. \] \end{cor} \begin{proof} We will use results from Section \ref{sec:Appendix:-dense-types}. By Theorem \ref{thm:ded via dense types}, we know that: \[ \sup\left\{ \left|\operatorname{uf}\left(p\right)\right|\left|\, p\in S_{2}\left(M\right),\left|M\right|=\lambda\right.\right\} \geq\sup\set{\left|S_{\operatorname{dense}}\left(M\right)\right|^{\omega}}{\left|M\right|=\lambda}. \] On the other hand, Corollary \ref{cor:what can we get with S_d} says that: \[ \sup\set{\left|S_{\operatorname{dense}}\left(M\right)\right|^{\omega}}{\left|M\right|=\lambda}=\sup\set{\left(\lambda^{\left\langle \mu\right\rangle _{\operatorname{tr}}}\right)^{\omega}}{\mu\leq\lambda,\operatorname{cof}\left(\mu\right)=\mu}, \] so this already implies that RCF is large (by Fact \ref{fac:calculating ded}, the right hand side is $\geq\operatorname{ded}\lambda$). Corollary \ref{cor:cof uncountable is ok} says that for any cardinal $\lambda$, if $\operatorname{cof}\left(\operatorname{ded}\lambda\right)>\aleph_{0}$, then \[ \sup\set{\left(\lambda^{\left\langle \mu\right\rangle _{\operatorname{tr}}}\right)^{\omega}}{\mu\leq\lambda,\operatorname{cof}\left(\mu\right)=\mu}=\left(\operatorname{ded}\lambda\right)^{\omega}. \] Together we are done. \end{proof} \begin{rem} For an easy proof that RCF is large, using the same notation as in the proof of Theorem \ref{thm:more coheirs than dense types}, for every bounded cut $I\subseteq M$, define: \[ p_{I}\left(x_{\omega},x_{\alpha}\right)=r_{*}\left(x_{\alpha}\right)\cup q\left(x_{\omega}\right)\cup\set{\alpha+a/x_{\omega}<x_{\alpha}<\alpha+b/x_{\omega}}{a\in I,b\notin I}.
\] \end{rem} Marcus Tressl has pointed out to us the type $\operatorname{tp}\left(\alpha,\omega/M\right)$ as a type with infinitely many coheirs (this follows from \cite[18.13]{Tressle}). We thank him for that. The proof given here that the theory is large is ours. \subsubsection{Valued fields} We can combine the techniques of Theorem \ref{thm:RCF-has-large} and Example \ref{exa:Large} in order to prove a similar result for valued fields. \begin{defn} \label{def: language of val field}The language $L$ of valued fields is the following. It is a 3-sorted language: one sort for the base field $K$ equipped with the ring language $\left\{ 0,1,+,\cdot\right\} $, another for the valuation group $\Gamma$ equipped with the language of ordered abelian groups $L_{\Gamma}=\left\{ 0,+,<\right\} $, and another for the residue field $k$ equipped with the ring language $L_{k}$. We also have the valuation map $v:K^{\times}\to\Gamma$ and an angular component map $ac:K\to k$. Recall that an angular component is a function satisfying $ac\left(0\right)=0$, whose restriction $ac\upharpoonright K^{\times}:K^{\times}\to k^{\times}$ is a group homomorphism, and such that if $v\left(x\right)=0$ then $ac\left(x\right)$ is the residue of $x$.
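For instance (a standard illustration, not taken from the sources cited here and not needed in what follows): on the field $k\left(\left(t\right)\right)$ of formal Laurent series over a field $k$, with the valuation $v\left(\sum_{i}a_{i}t^{i}\right)=\min\left\{ i\left|\, a_{i}\neq0\right.\right\} $, the leading-coefficient map \[ ac\left(x\right)=a_{v\left(x\right)}\mbox{ for }x=\sum_{i}a_{i}t^{i}\neq0,\qquad ac\left(0\right)=0, \] is an angular component: it is multiplicative on nonzero series, and if $v\left(x\right)=0$ then $ac\left(x\right)=a_{0}$ is precisely the residue of $x$.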
\end{defn} For more on valued fields with an angular component, see e.g., \cite{belair,Pas1989}, which also gives us the following fact: \begin{fact} \label{fac: Elimination of Field Quantifiers}\cite[Theorem 4.1]{Pas1989} The theory of any Henselian valued field of characteristic $\left(0,0\right)$ in the language $L$ has elimination of field quantifiers: every formula $\varphi\left(x_{K},x_{k},x_{\Gamma}\right)$ (where $x_{K}$, $x_{k}$ and $x_{\Gamma}$ are tuples of variables in the base field, the residue field and the valuation group respectively) is equivalent to a Boolean combination of formulas of the form $\psi\left(ac\left(f_{0}\left(x_{K}\right)\right),\ldots,ac\left(f_{n-1}\left(x_{K}\right)\right),x_{k}\right)$ and $\chi\left(v\left(g_{0}\left(x_{K}\right)\right),\ldots,v\left(g_{m-1}\left(x_{K}\right)\right),x_{\Gamma}\right)$ where $\psi$ is a formula in $L_{k}$, $\chi$ is a formula in $L_{\Gamma}$ and $g_{i}$ and $f_{j}$ are polynomials over the integers. \end{fact} \begin{thm} Let $T=Th\left(K,\Gamma,k\right)$ be any theory of valued fields in $L$ which eliminates field quantifiers. Then $T$ has large directionality.\end{thm} \begin{proof} Let $M_{0}=\left(K_{0},\Gamma_{0},k_{0}\right)\models T$ be a countable model such that $\Gamma_{0}$ contains a copy of the rationals $\set{\gamma_{q}}{q\in\mathbb{Q}}$ with the usual order and group structure, so $\gamma_{0}=0_{\Gamma}$ (by compactness, one only needs to embed a copy of a finitely generated subgroup of $\left(\mathbb{Q},+,<\right)$ in a model of $T$, but any such subgroup is cyclic, hence trivial or isomorphic to $\left(\mathbb{Z},+,<\right)$). Let $S$ be the tree $2^{\leq\omega}$ and let $S_{0}\subseteq S$ be $2^{<\omega}$, so $S_{0}$ is countable.
Let $\Sigma\left(\sequence{x_{s}}{s\in S_{0}}\right)$ be the following set of formulas with variables in the field sort (over $M_{0}$): \[ \left\{ v\left(x_{s}-x_{t}\right)=\gamma_{\operatorname{lev}\left(s\wedge t\right)}\left|\, s,t\in S_{0},\, s\wedge t<s,t\right.\right\} \cup\set{v\left(x_{s}-x_{t}\right)\geq\gamma_{\operatorname{lev}\left(s\right)}}{s,t\in S_{0},\, s\leq t}. \] Then $\Sigma$ is consistent with $M_{0}$: to realize $\Sigma\upharpoonright2^{<n}$, choose $a_{i}\in K_{0}$ with $v\left(a_{i}\right)=\gamma_{i}$ and let $x_{s}=\sum_{i<\operatorname{lev}\left(s\right)}s\left(i\right)a_{i}$ for $s\in2^{<n}$ (indeed, if $u=s\wedge t<s,t$ then $x_{s}-x_{t}=\pm a_{\operatorname{lev}\left(u\right)}+c$ where $v\left(c\right)>\gamma_{\operatorname{lev}\left(u\right)}$, so $v\left(x_{s}-x_{t}\right)=\gamma_{\operatorname{lev}\left(u\right)}$). Let $M=\left(K_{1},\Gamma_{1},k_{1}\right)$ be a countable model containing $M_{0}$ and some $\set{a_{s}}{s\in S_{0}}$ realizing $\Sigma$. For each $\eta\in S$ with domain $\omega$ (this is a \emph{branch} of $S_{0}$), let $p_{\eta}\left(x\right)$ be the following type in the valued field sort: \[ \left\{ v\left(x-a_{s}\right)\geq\gamma_{\operatorname{lev}\left(s\right)}\left|\, s<\eta\right.\right\} . \] It is consistent since any finite subset is realized by $a_{t}$ for any $t<\eta$ large enough. If $\eta_{1}\neq\eta_{2}$ then $p_{\eta_{1}}\cup p_{\eta_{2}}$ is inconsistent: Suppose $s=\eta_{1}\wedge\eta_{2}$ and $s<t<\eta_{1}$, $s<t'<\eta_{2}$. If $p_{\eta_{2}}$ is consistent with $v\left(x-a_{t}\right)\geq\gamma_{\operatorname{lev}\left(t\right)}$, then there is some $a$ such that $v\left(a-a_{t}\right)\geq\gamma_{\operatorname{lev}\left(t\right)}$ and $v\left(a-a_{t'}\right)\geq\gamma_{\operatorname{lev}\left(t'\right)}$. So $v\left(a_{t}-a_{t'}\right)\geq\min\left\{ \gamma_{\operatorname{lev}\left(t'\right)},\gamma_{\operatorname{lev}\left(t\right)}\right\} $, but $t\wedge t'=s<t,t'$ and so $v\left(a_{t}-a_{t'}\right)=\gamma_{\operatorname{lev}\left(s\right)}$. This is a contradiction since $\gamma_{\operatorname{lev}\left(t\right)},\gamma_{\operatorname{lev}\left(t'\right)}>\gamma_{\operatorname{lev}\left(s\right)}$.
Let $\Omega$ be the algebraic closure (as a valued field) of the monster model $\mathfrak{C}$ of $M$. Let $\bar{M}=\left(\bar{K_{1}},\bar{\Gamma_{1}},\bar{k_{1}}\right)$ be the algebraic closure of $M$ as a valued field in $\Omega$. Since $\bar{M}$ is countable, there is some branch $\eta\in S$ such that $p_{\eta}$ is not realized in $\bar{M}$. Then for every polynomial $f\left(x\right)$ over $K_{1}$ and every $s<\eta$ large enough, $p_{\eta}\models ac\left(f\left(x\right)\right)=ac\left(f\left(a_{s}\right)\right)\land v\left(f\left(x\right)\right)=v\left(f\left(a_{s}\right)\right)$ (decompose $f$ into linear factors $\prod\left(x-b_{i}\right)$. For every large enough $s$, $v\left(b_{i}-a_{s}\right)<\gamma_{\operatorname{lev}\left(s\right)}$ for all $i$, and so if $d\models p_{\eta}$ in $\mathfrak{C}$, then $ac\left(f\left(d\right)\right)=ac\left(f\left(a_{s}\right)\right)$ (because $\operatorname{res}\left(\frac{f\left(d\right)}{f\left(a_{s}\right)}\right)=1$ --- we do not assume that $ac$ extends to $\bar{M}$) and $v\left(f\left(d\right)\right)=v\left(f\left(a_{s}\right)\right)$). Since field quantifiers are eliminated in $T$, this implies that $p_{\eta}$ is a complete type. Moreover, we have the following claim: \begin{claim*} For any type $r\left(y\right)\in S\left(M\right)$ such that $y$ is a tuple of variables in the valuation group sort, $p_{\eta}$ and $r$ are weakly orthogonal, meaning that $p_{\eta}\left(x\right)\cup r\left(y\right)$ implies a complete type over $M$.\end{claim*} \begin{proof} By elimination of field quantifiers, we only need to determine whether \[ \chi\left(v\left(g_{0}\left(x\right)\right),\ldots,v\left(g_{m-1}\left(x\right)\right),y\right) \] is implied by $p_{\eta}\left(x\right)\cup r\left(y\right)$ for any formula $\chi$ in $L_{\Gamma}$ over $M$ and polynomials $g_{i}$ over $M$. By the remark above, $p_{\eta}\models v\left(g_{i}\left(x\right)\right)=v\left(g_{i}\left(a_{s}\right)\right)$ for any $s<\eta$ large enough and all $i<m$.
So $p_{\eta}\cup r\models\chi$ iff $r\models\chi\left(v\left(g_{0}\left(a_{s}\right)\right),\ldots,v\left(g_{m-1}\left(a_{s}\right)\right),y\right)$. \end{proof} Let $r\left(y\right)\in S\left(M\right)$ be a type in the valuation group sort which is finitely satisfiable in $\set{\gamma_{q}}{q\in\mathbb{Q}}$ and contains $\set{y>\gamma_{q}}{q\in\mathbb{Q}}$. By the claim, $r$ and $p_{\eta}$ are weakly orthogonal. Fix some $d\models p_{\eta}$ in $\mathfrak{C}$. For each bounded cut $I\subseteq\mathbb{Q}$, let $p_{I}\left(x,y\right)$ be the following type: \[ p_{\eta}\left(x\right)\cup r\left(y\right)\cup\set{y+\gamma_{q}<v\left(x-d\right)<y+\gamma_{q'}}{q\in I,\, q'\notin I}. \] Then $p_{I}\left(x,y\right)$ is finitely satisfiable in $M$: Suppose we are given finite subsets $p_{0}\subseteq p_{\eta}$ and $r_{0}\subseteq r$, $I_{0}\subseteq I$ and $I_{0}'\subseteq\mathbb{Q}\backslash I$. Let $q=\max I_{0}$ and $q'=\min I_{0}'$. Note that there is some $s<\eta$ such that for any $a\in K_{1}$, if $v\left(a-a_{s}\right)\geq\gamma_{\operatorname{lev}\left(s\right)}$ then $a\models p_{0}$. Let $q_{0}\in\mathbb{Q}$ be larger than $\operatorname{lev}\left(s\right)$, larger than $\operatorname{lev}\left(s\right)-q$ and such that $\gamma_{q_{0}}\models r_{0}$. Let $q''\in\mathbb{Q}$ be in the interval $\left(q_{0}+q,q_{0}+q'\right)$ and let $b\in K_{1}$ be such that $v\left(b\right)=\gamma_{q''}$. Let $s<t<\eta$ be such that $\operatorname{lev}\left(t\right)>q''$ and let $a=a_{t}+b\in K_{1}$. Then $p_{0}\left(a\right)\cup r_{0}\left(\gamma_{q_{0}}\right)$ holds, and in addition, \[ v\left(a-d\right)=v\left(a_{t}-d+b\right)=v\left(b\right)=\gamma_{q''}. \] Moreover, \[ \gamma_{q_{0}}+\gamma_{q}=\gamma_{q_{0}+q}<\gamma_{q''}<\gamma_{q_{0}+q'}=\gamma_{q_{0}}+\gamma_{q'}. \] Obviously, for different cuts $I$ and $J$, the types $p_{I}$ and $p_{J}$ contradict each other. 
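The pairwise inconsistency of the types $p_{I}$ can be made concrete on a finite sample. In the sketch below (an illustration under simplifying assumptions: cuts are represented by real thresholds and $\gamma_{q}$ is identified with $q$), a rational lying in exactly one of two cuts forces the contradictory bounds $y+\gamma_{q}<v(x-d)$ and $v(x-d)<y+\gamma_{q}$.

```python
import math
from fractions import Fraction as F

def in_cut(threshold, q):
    # the bounded cut I_r = {q in Q : q < r}, represented by its real threshold r
    return q < threshold

def clash_witness(r1, r2, sample):
    # a rational q in exactly one of the two cuts: p_{I_{r1}} then contains
    # "y + gamma_q < v(x - d)" while p_{I_{r2}} contains "v(x - d) < y + gamma_q"
    for q in sample:
        if in_cut(r1, q) != in_cut(r2, q):
            return q
    return None

sample = [F(k, 1000) for k in range(-2000, 2001)]
assert clash_witness(math.sqrt(2), math.sqrt(2), sample) is None  # same cut: no clash
q = clash_witness(math.sqrt(2), math.e - 1, sample)               # different cuts
assert q is not None and math.sqrt(2) <= q < math.e - 1
```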
Together this shows that, letting $p\left(x,y\right)$ be the complete type determined by $p_{\eta}\left(x\right)\cup r\left(y\right)$, $\left|\operatorname{uf}_{\Delta}\left(p\right)\right|\geq2^{\aleph_{0}}$ where $\Delta=\left\{ \varphi\left(x,y;z_{0},z_{1},z_{2}\right)\right\} $ and \[ \varphi\left(x,y;z_{0},z_{1},z_{2}\right)=y+z_{1}<v\left(x-z_{0}\right)<y+z_{2}. \] So $T$ is large as promised. \end{proof} There are other languages of valued fields in addition to the one in Definition \ref{def: language of val field} that would make the proof above work. The only requirements are that we can construct the tree $S$ inside the field, that $\Gamma$ is a sort and that field quantifiers are eliminated. This can be done in the theory of the $p$-adics, when we add to the language $ac_{n}$ for $n<\omega$ as in \cite{MR1023804}, and also in ACVF --- algebraically closed valued fields (where there is quantifier elimination in any reasonable language, and in fact there is no need for $ac$). \begin{cor} The theory of any Henselian valued field of characteristic $\left(0,0\right)$ (in the language described in Definition \ref{def: language of val field}), ACVF (in any characteristic and any reasonable language with quantifier elimination and a sort for the valuation group), and the theory of $\mathbb{Q}_{p}$ (in the language of Pas with $ac_{n}$) are large. \end{cor} \section{\label{sec:Splintering}Splintering} This part of the paper is motivated by the work of Rami Grossberg, Andr\'es Villaveces and Monica VanDieren. In their paper \cite{GrViVa} they study Shelah's generic pair conjecture (which is now a theorem --- \cite{Sh:900,Sh950,Sh906}), and in their analysis they came up with the notion of splintering, a variant of splitting. \begin{defn} Let $p\in S\left(\mathfrak{C}\right)$.
Say that $p$ \emph{splinters} over $M$ if there is some $\sigma\in\operatorname{Aut}\left(\mathfrak{C}\right)$ such that \begin{enumerate} \item $\sigma\left(p\right)\neq p$. \item $\sigma\left(p|_{M}\right)=p|_{M}$. \item $\sigma\left(M\right)=M$ setwise. \end{enumerate} \end{defn} \begin{rem} {[}due to Martin Hils{]} Splitting implies splintering, and if $T$ is stable, then the two notions coincide. \end{rem} \begin{proof} Suppose $p\in S\left(\mathfrak{C}\right)$ does not split over $M$. Then, by stability, it is definable over $M$, and $p$ is the unique non-forking extension of $p|_{M}$. Then for any $\sigma\in\operatorname{Aut}\left(\mathfrak{C}\right)$, $\sigma\left(p\right)$ is the unique non-forking extension of $\sigma\left(p|_{M}\right)$. So if $\sigma\left(p|_{M}\right)=p|_{M}$, this means that $\sigma\left(p\right)=p$, so $p$ does not splinter over $M$.\end{proof} \begin{claim} \label{cla:unstable implies sp neq spln}Outside of the stable context, splitting $\neq$ splintering.\end{claim} \begin{proof} Let $T$ be the theory of random graphs in the language $\left\{ I\right\} $. Let $M\models T$ be countable, and let $a\neq b\in M$ with an automorphism $\sigma\in\operatorname{Aut}\left(M\right)$ taking $a$ to $b$. Let $p\left(x\right)\in S\left(\mathfrak{C}\right)$ say that $x\mathrela Ic$ for every $c\in M$ and if $c\notin M$ then $x\mathrela Ic$ if and only if $c$ is connected to $a$ and not connected to $b$. Obviously, $p$ does not split over $M$. However, let $\sigma'\in\operatorname{Aut}\left(\mathfrak{C}\right)$ be an extension of $\sigma$. Let $c\in\mathfrak{C}\setminus M$ be such that $c$ is connected to $a$ but not to $b$. Then $x\mathrela Ic\in p$ but $x\mathrela Ic\notin\sigma'\left(p\right)$, so $p$ splinters over $M$.
\end{proof} However, \begin{claim} If $T=\operatorname{Th}\left(\mathbb{Q},<\right)$, then splitting equals splintering.\end{claim} \begin{proof} Observe that by quantifier elimination every complete type $r\left(x_{i}\left|\, i\in I\right.\right)$ over a set $A$ is determined by $\operatorname{tp}\left(\sequence{x_{i}}{i\in I}/\emptyset\right)\cup\bigcup\left\{ \operatorname{tp}\left(x_{i}/A\right)\left|\, i\in I\right.\right\} $. Assume $q\left(x_{i}\left|\, i\in I\right.\right)$ is a global type that splinters but does not split over a model $M$. Then it follows that for some $i\in I$, $q\upharpoonright x_{i}$ splinters, so we may assume $\left|I\right|=1$. Suppose $\sigma\in\operatorname{Aut}\left(\mathfrak{C}\right)$ is such that $\sigma\left(M\right)=M$, $\sigma\left(q|_{M}\right)=q|_{M}$ and $\sigma\left(q\right)\neq q$. Note that $\sigma\left(q^{\left(\omega\right)}\right)=\sigma\left(q\right)^{\left(\omega\right)}$, so by Fact \ref{fac:pOmega}, $\sigma\left(q^{\left(\omega\right)}\right)|_{M}\neq q^{\left(\omega\right)}|_{M}$. We get a contradiction by quantifier elimination again. \end{proof} We shall now generalize Claim \ref{cla:unstable implies sp neq spln} to every theory with the independence property, and in fact to any theory with medium or large directionality. \begin{thm} Suppose that $T$ has medium or large directionality. Then splitting $\neq$ splintering.\end{thm} \begin{proof} We know that there is some $p$ and $\Delta$ such that $\operatorname{uf}_{\Delta}\left(p\right)$ is infinite. Let us use Construction \ref{const:not small directionality}: We may find a saturated model $N=\left(N_{0}',N_{0},M_{0},Q_{0},\bar{f}_{0}\right)$ of $\operatorname{Th}\left(M^{*}\right)$ of size $\lambda$ where $\lambda$ is big enough. Then there is $c\neq d\in Q_{0}$ such that $\operatorname{tp}\left(c/\emptyset\right)=\operatorname{tp}\left(d/\emptyset\right)$ in the extended language (with symbols for $N_{0},M_{0},Q_{0}$ and $\bar{f}_{0}$).
So there is an automorphism $\sigma$ of this structure (in particular of $N_{0}'$) such that $\sigma\left(c\right)=d$. By definition, $\sigma\left(N_{0}\right)=N_{0}$ and $\sigma\left(M_{0}\right)=M_{0}$. So $\operatorname{tp}\left(c/N_{0}\right)$ is finitely satisfiable in $M_{0}$ and hence does not split over $M_{0}$. But it splinters since $\sigma\left(\operatorname{tp}\left(c/M_{0}\right)\right)=\operatorname{tp}\left(d/M_{0}\right)=\operatorname{tp}\left(c/M_{0}\right)$ but $\sigma\left(\operatorname{tp}\left(c/N_{0}\right)\right)=\operatorname{tp}\left(d/N_{0}\right)\neq\operatorname{tp}\left(c/N_{0}\right)$ as witnessed by $\varphi$. If there are no saturated models, we can take a big enough special model (see \cite[Theorem 10.4.4]{Hod}). Note that we may also find an example of a type $p\in S\left(M\right)$ with a splintering, non-splitting, global extension, with $\left|M\right|=\left|T\right|$: consider the structure $\left(N_{0}',N_{0},M_{0},\sigma,c,d\right)$, and find an elementary substructure of size $\left|T\right|$. \end{proof} \begin{defn} Let $T$ be a complete theory. We say that $\left(M,p,\varphi\left(x;y\right),A_{1},A_{2}\right)$ is an\emph{ sp-example} for $T$ when: \begin{itemize} \item $M\models T$; \item $A_{1},A_{2}\subseteq M$ are nonempty and disjoint; \item $p=p\left(x\right)$ is a complete type over $M$, finitely satisfiable in $A_{1}$; \item $\operatorname{Th}\left(M_{p},A_{1}\right)=\operatorname{Th}\left(M_{p},A_{2}\right)$ (see Definition \ref{def:ExtDefType}); \item for each pair of finite sets $s_{1}\subseteq A_{1}$ and $s_{2}\subseteq A_{2}$, $M\models\exists y\left(\bigwedge_{a\in s_{1}}\varphi\left(a,y\right)\land\bigwedge_{b\in s_{2}}\neg\varphi\left(b,y\right)\right)$.
\end{itemize} \end{defn} \begin{prop} \label{pro:sp-example}$T$ has an sp-example if and only if there is a finitely satisfiable type over a model which splinters over it (in particular, splitting is different from splintering).\end{prop} \begin{proof} Suppose $\left(M,p,\varphi\left(x,y\right),A_{1},A_{2}\right)$ is an sp-example for $T$. Let $M'$ be the structure $\left(M_{p},A_{1},A_{2}\right)$ (in the language $L_{p}\cup\left\{ P_{1},P_{2}\right\} $ where $P_{1},P_{2}$ are predicates). Assume $\left|T\right|<\mu=\mu^{<\mu}$, and let $N'=\left(N_{q},B_{1},B_{2}\right)$ be a saturated extension of $M'$ of size $\mu$ where $N=N'\upharpoonright L$ and $q=q^{N}$ is as in Remark \ref{rem:TpModel}. Since $\left(N_{q},B_{1}\right)\equiv\left(N_{q},B_{2}\right)$, there is an automorphism $\sigma$ of $N_{q}$ such that $\sigma$ takes $B_{1}$ to $B_{2}$, and so $\sigma\left(q\right)=q$. Let $q'$ be a global extension of $q$, finitely satisfiable in $B_{1}$, and let $\sigma'$ be a global extension of $\sigma$. So $q'$ does not split over $N$, but it splinters: Consider the type $\left\{ \varphi\left(a,y\right)\left|\, a\in B_{1}\right.\right\} \cup\left\{ \neg\varphi\left(b,y\right)\left|\, b\in B_{2}\right.\right\} $. It is finitely satisfiable in $N$ by the choice of $\varphi$. Let $c\in\mathfrak{C}$ satisfy this type. Then $\varphi\left(x,c\right)\in q'$ but $\varphi\left(x,c\right)\notin\sigma'\left(q'\right)$ (because $\sigma'\left(q'\right)$ is finitely satisfiable in $B_{2}$). If we do not assume the existence of such a $\mu$, we can use special models. Now suppose that splitting is different from splintering, as witnessed by some global type $p$ that splinters over a model $M$ but is finitely satisfiable in it. Then there is some automorphism $\sigma$ of $\mathfrak{C}$ that witnesses it. There is a formula $\varphi\left(x,y\right)$ and $a\in\mathfrak{C}$ such that $\varphi\left(x,a\right)\in p,\neg\varphi\left(x,a\right)\in\sigma\left(p\right)$.
Let $B_{1}=\left\{ m\in M\left|\,\varphi\left(m,a\right)\land\neg\varphi\left(m,\sigma^{-1}\left(a\right)\right)\right.\right\} $, $B_{2}=\sigma\left(B_{1}\right)$. It is easy to check that $\left(M,p,\varphi,B_{1},B_{2}\right)$ is an sp-example. \end{proof} The following theorem answers the natural question of whether small directionality implies that splitting and splintering coincide: \begin{thm} There is a theory with small directionality in which splitting $\neq$ splintering.\end{thm} \begin{proof} Let $L=\left\{ R\right\} $ where $R$ is a ternary relation symbol. Let $M_{0}=\left\langle \mathbb{Q},<\right\rangle $ and define $R\left(x,y,z\right)$ by $x<y<z$ or $z<y<x$, i.e., $y$ is between $x$ and $z$. Let $T=\operatorname{Th}\left(M_{0}\upharpoonright L\right)$. \begin{claim*} $T$ has small directionality.\end{claim*} \begin{proof} Suppose $M\models T$. Let $\left(a,b\right)$ denote $\left\{ c\left|\, R\left(a,c,b\right)\right.\right\} $. Then, for any choice of a pair of distinct elements $a,b$ there is a unique expansion of $M$ to a model $M'$ of $\operatorname{Th}\left(\mathbb{Q},<\right)$ such that $R$ is defined as above and $a<b$: for $w\neq z$, $w<z$ if and only if $\left(a,b\right)\cap\left(a,z\right)\neq\emptyset$ and ($\left(a,w\right)\subseteq\left(a,z\right)$ or $\left(a,w\right)\cap\left(a,z\right)=\emptyset$) or $\left(a,b\right)\cap\left(a,z\right)=\emptyset$ and $\left(z,b\right)\subseteq\left(w,b\right)$. From this observation, it follows that there is a unique completion of any type $p\in S\left(M\right)$ to a type $p'\in S\left(M'\right)$. So if $\Delta$ is a finite set of $L$-formulas and $\operatorname{uf}_{\Delta}\left(p\right)$ is infinite, then $\operatorname{uf}_{\Delta}\left(p'\right)$ is also infinite --- contradicting Example \ref{exa:DLO}.\end{proof} \begin{claim*} $T$ has an sp-example.\end{claim*} \begin{proof} Let $M=M_{0}\upharpoonright L$. Let $p\left(x\right)=\operatorname{tp}\left(\pi/M\right)$.
Let $A_{1}=\left\{ x\in\mathbb{Q}\left|\, x>\pi\right.\right\} $ and $A_{2}=\mathbb{Q}\backslash A_{1}$, and let $\varphi\left(x;y_{1},y_{2}\right)=R\left(y_{1},x,y_{2}\right)$. We claim that $\left(M,p,\varphi\left(x,y_{1},y_{2}\right),A_{1},A_{2}\right)$ is an sp-example: First, let $M'$ be the reduct of $\left(\mathbb{Q}\cup\left\{ \pi\right\} ,<\right)$ to $L$. There is some $\sigma\in\operatorname{Aut}\left(M'/\pi\right)$ such that $\sigma\left(A_{1}\right)=A_{2}$. Hence $\left(M_{p},A_{1}\right)\cong\left(M_{p},A_{2}\right)$. Also, since $\operatorname{tp}\left(\pi/M_{0}\right)$ (in $\left\{ <\right\} $) is finitely satisfiable in both $A_{1}$ and $A_{2}$ (by quantifier elimination), $p$ is finitely satisfiable in both $A_{1}$ and $A_{2}$. Finally, for finite $s_{1}\subseteq A_{1}$ and $s_{2}\subseteq A_{2}$, there exist $c_{1},c_{2}\in\mathbb{Q}$ such that $R\left(c_{1},a,c_{2}\right)$ for all $a\in s_{1}$ and $\neg R\left(c_{1},b,c_{2}\right)$ for all $b\in s_{2}$. \end{proof} \end{proof} \section{\label{sec:Appendix:-dense-types}Appendix: dense types in RCF} \begin{defn} For $M\models RCF$, let $S_{\operatorname{dense}}\left(M\right)$ be the set of dense complete types over $M$ (see Definition \ref{def:dense types}). \end{defn} Here we will prove the following theorem: \begin{thm} \label{thm:ded via dense types}$\operatorname{ded}\lambda=\sup\set{\left|S_{\operatorname{dense}}\left(M\right)\right|}{M\models RCF,\left|M\right|=\lambda}$. \end{thm} For the proof we will need some definitions and facts: \begin{defn} \label{def:trees} \begin{enumerate} \item By a \emph{tree} we mean a partial order $\left(T,<\right)$ such that for every $a\in T$, $T_{<a}=\left\{ x\in T\left|\, x<a\right.\right\} $ is well ordered. For $a\in T$, the order type of $T_{<a}$ is $a$'s \emph{level}. By a \emph{branch} in $T$ we mean a maximal linearly ordered subset of $T$. Its length is its order type.
\item For two cardinals $\lambda$ and $\mu$, let $\lambda^{\left\langle \mu\right\rangle _{\operatorname{tr}}}$ be: \[ \sup\left\{ \kappa\left|\,\mbox{there is some tree }T\mbox{ with }\lambda\mbox{ many nodes and }\kappa\mbox{ branches of length }\mu\right.\right\} . \] \end{enumerate} \end{defn} \begin{fact} \label{fac:calculating ded}(See \cite[Theorem 2.1(a)]{Baumgartner}) The following cardinalities are the same: \begin{enumerate} \item $\operatorname{ded}\lambda$. \item $\sup\left\{ \kappa\left|\,\mbox{there is a regular }\ensuremath{\mu}\mbox{ and a tree }T\mbox{ with }\kappa\mbox{ branches of length }\mu\mbox{ and }\left|T\right|\leq\lambda\right.\right\} $. \item $\sup\left\{ \lambda^{\left\langle \mu\right\rangle _{\operatorname{tr}}}\left|\,\mu\leq\lambda\mbox{ is regular}\right.\right\} $. \end{enumerate} \end{fact} It is somewhat easier to consider trees which are sub-trees of $\lambda^{<\mu}$ (with the usual ``first-segment'' order) for some $\lambda,\mu$. Given any tree $T$ and any cardinal $\mu$, suppose we are interested in computing the number of branches of length $\mu$. For this we may assume that the level of each element in $T$ is $<\mu$. Suppose $\left|T\right|=\lambda$, so we may assume that its universe is $\lambda$. Let $T'$ be $\set{T_{<a}}{a\in T}$. This is easily seen to be a tree with the inclusion ordering, and moreover it is isomorphic to a complete sub-tree $T''$ of $\lambda^{<\mu}$ (in the sense that if $\eta\in T''$ and $\nu$ is an initial segment of $\eta$, then $\nu\in T''$): if $\operatorname{lev}\left(a\right)=\alpha$, map $T_{<a}$ to $\eta:\alpha\to\lambda$ where $\eta\left(\beta\right)$ is the $\beta$-th element of $T_{<a}$. If $B\subseteq T$ is a branch of length $\mu$, let $B'=\set{T_{<a}}{a\in B}$. Then $B'$ is also a branch of length $\mu$, and in addition if $B_{1}\neq B_{2}$ are branches of $T$, then $B_{1}'\neq B_{2}'$ in $T'$.
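The passage from $T$ to $T'$ and to the complete sub-tree $T''\subseteq\lambda^{<\mu}$ can be illustrated on a small finite tree (a toy example; the concrete parent map below is an arbitrary choice):

```python
# a finite tree given by parent pointers (the root 0 has parent None);
# the integer labels play the role of lambda
parent = {0: None, 1: 0, 2: 0, 3: 1, 4: 1, 5: 2}

def chain(a):
    # T_{<a}: the well-ordered set of strict predecessors of a, listed from
    # the root, i.e. the sequence beta |-> (beta-th element of T_{<a})
    out = []
    while parent[a] is not None:
        a = parent[a]
        out.append(a)
    return tuple(reversed(out))

# the image tree T'' inside lambda^{<mu}
T2 = {chain(a) for a in parent}

# T'' is complete: closed under initial segments
for eta in T2:
    for i in range(len(eta)):
        assert eta[:i] in T2

# distinct branches of T give distinct branches of T''
leaves = [a for a in parent if a not in parent.values()]
branches = {chain(a) + (a,) for a in leaves}
assert len(branches) == len(leaves)
```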
This shows that $T'$ (so also $T''$) has at least as many branches as $T$, and so in calculating $\operatorname{ded}\lambda$ we can add to our list of cardinalities from Fact \ref{fac:calculating ded}: \begin{enumerate} \item [(4)]$\sup\left\{ \kappa\left|\,\mbox{there is a regular }\ensuremath{\mu}\mbox{ and a tree }T\subseteq\lambda^{<\mu}\mbox{ with }\kappa\mbox{ branches of length }\mu\mbox{ and }\left|T\right|\leq\lambda\right.\right\} $. \end{enumerate} Theorem \ref{thm:ded via dense types} follows from: \begin{prop} \label{prop:every tree manifests in dense types}For every tree $T\subseteq\lambda^{<\mu}$ of size $\lambda$, there is a model $M\models RCF$ of size $\lambda$ such that $\left|S_{\operatorname{dense}}\left(M\right)\right|$ is at least the number of branches in $T$ of length $\mu$.\end{prop} \begin{proof} We may assume that $\mu\leq\lambda$. For $i<\mu$, let $T_{i}=T\cap\leftexp i{\lambda}$, $T_{<i}=T\cap\lambda^{<i}$. By induction on $i<\mu$ we construct a sequence of models $\bar{M}=\sequence{M_{i}}{i<\mu}$ and $\sequence{a_{\eta},b_{\eta}}{\eta\in T_{<i}}$ such that: $\bar{M}$ is a $\prec$-increasing continuous sequence of models of RCF; for all $\eta\in T_{<i}$, $a_{\eta},b_{\eta}\in M_{\lg\left(\eta\right)+1}$; $a_{\eta}<b_{\eta}$; $b_{\eta}-a_{\eta}<c$ for all $0<c\in M_{\lg\left(\eta\right)}$; if $\alpha<\beta<\lambda$ and $\eta\frown\left\langle \alpha\right\rangle ,\eta\frown\left\langle \beta\right\rangle \in T_{<i}$ then $b_{\eta\frown\left\langle \alpha\right\rangle }<a_{\eta\frown\left\langle \beta\right\rangle }$; for $\nu<\eta$, $a_{\nu}<a_{\eta}<b_{\eta}<b_{\nu}$. The construction: Let $M_{0}$ be any model of size $\lambda$. For $i$ limit, let $M_{i}=\bigcup_{j<i}M_{j}$ (there are no new $\left(a_{\eta},b_{\eta}\right)$'s).
For $i=j+1$ with $j$ a successor, let $M_{i}$ be a model of size $\lambda$ containing $M_{j}$ and an increasing sequence $\sequence{c_{\alpha}}{\alpha<\lambda}$ such that $0<c_{\alpha}<d$ for all $0<d\in M_{j}$. For $\eta\in T_{j-1}$, if $\eta\frown\left\langle \alpha\right\rangle \in T$, let $a_{\eta\frown\left\langle \alpha\right\rangle }=a_{\eta}+c_{2\alpha}$ and $b_{\eta\frown\left\langle \alpha\right\rangle }=a_{\eta}+c_{2\alpha+1}$ (note that $b_{\eta\frown\left\langle \alpha\right\rangle }<b_{\eta}$). For $i=j+1$ with $j$ limit (or $j=0$), let $M_{i}$ be a model of size $\lambda$ containing $M_{j}$ and $a_{\eta},b_{\eta}$ for $\eta\in T_{j}$ where $a_{\eta\upharpoonright j'}<a_{\eta}<b_{\eta}<b_{\eta\upharpoonright j'}$ for all $j'<j$ (so for $j=0$ this just means $a_{\left\langle \right\rangle }<b_{\left\langle \right\rangle }$) and $b_{\eta}-a_{\eta}<d$ for all $0<d\in M_{j}$. Finally, we let $M=\bigcup_{i<\mu}M_{i}$. For each branch $\eta\in\leftexp{\mu}{\lambda}$ of $T$, let $p_{\eta}=\set{a_{\eta\upharpoonright i}<x<b_{\eta\upharpoonright i}}{i<\mu}$. This is easily seen to be a dense type. Also, it is very easy to see that $p_{\eta}\neq p_{\eta'}$ for $\eta\neq\eta'$. \end{proof} \begin{rem} Note that this proof only used the fact that the order is dense, and so this holds in any densely ordered abelian group. \end{rem} Next we will show that Proposition \ref{prop:every tree manifests in dense types} is ``as good as it gets''. \begin{prop} \label{prop:S_d is bounded}If $M\models RCF$, $\left|M\right|=\lambda$, and $\mu=\operatorname{cof}\left(M,<\right)$, then $\left|S_{\operatorname{dense}}\left(M\right)\right|\leq\lambda^{\left\langle \mu\right\rangle _{\operatorname{tr}}}$.\end{prop} \begin{proof} We shall construct a tree of size $\lambda$ with $\left|S_{\operatorname{dense}}\left(M\right)\right|$ branches of length $\mu$. Let $\sequence{d_{i}}{i<\mu}$ be an increasing cofinal sequence of positive elements in $M$.
Let $<^{*}$ be a well ordering on $M^{2}$. We define a sequence of pairs $\sequence{\left(a_{i,p},b_{i,p}\right)}{i<\mu,p\in S_{\operatorname{dense}}\left(M\right)}$ by induction on $i<\mu$ such that: $\left(a_{i,p},b_{i,p}\right)\in M^{2}$ is the $<^{*}$-first pair such that $p\left(x\right)\models a_{i,p}<x<b_{i,p}$, $b_{i,p}-a_{i,p}<1/d_{i}$ and for $j<i$, $a_{j,p}<a_{i,p}$, $b_{i,p}<b_{j,p}$. \begin{claim*} $\left(a_{i,p},b_{i,p}\right)$ exists for all $p\in S_{\operatorname{dense}}\left(M\right)$ and $i<\mu$.\end{claim*} \begin{proof} Fix some $p\in S_{\operatorname{dense}}\left(M\right)$. Suppose $i<\mu$ is least such that $\left(a_{i,p},b_{i,p}\right)$ does not exist. For $j<i$, let $0<c_{j}\in M$ be such that $p\left(x\right)\models x+c_{j}<b_{j,p}$ and $p\left(x\right)\models a_{j,p}+c_{j}<x$ (such a $c_{j}$ exists, since otherwise $p$ would be definable). Since the cofinality of $M$ is $\mu$, there must be some $e\in M$ such that $e>d_{i}$ and $e>1/c_{j}$ for all $j<i$. Since $p$ is dense there must be some $a,b\in M$ such that $p\left(x\right)\models a<x<b$ and $b-a<1/e$. By the choice of $e$, for all $j<i$ we have $a_{j,p}<a$, $b<b_{j,p}$ and $b-a<1/d_{i}$. \end{proof} For $i<\mu$, let: \[ T_{i}=\set{\eta:i\to M^{2}}{\exists p\in S_{\operatorname{dense}}\left(M\right)\forall j<i\left[\eta\left(j\right)=\left(a_{j,p},b_{j,p}\right)\right]}. \] \begin{claim*} If $\eta\in T_{i}$ then $\eta\upharpoonright j\in T_{j}$ for all $j<i$. \end{claim*} \begin{claim*} $\left|T_{i}\right|\leq\lambda$. \end{claim*} \begin{proof} By the first claim, if $\eta\in T_{i}$ then it can be extended to some $\nu$ in $T_{i+1}$. So it is enough to show that $\left|T_{i+1}\right|\leq\lambda$. For that it is enough to show that the map $\eta\mapsto\eta\left(i\right)$ from $T_{i+1}$ to $M^{2}$ is injective. But this follows from the definition of $\left(a_{i,p},b_{i,p}\right)$. \end{proof} Let $T=\bigcup_{i<\mu}T_{i}$.
Then $T$ is a tree, and for each dense type $p\in S_{\operatorname{dense}}\left(M\right)$, we can find a branch $\eta_{p}:\mu\to M^{2}$ defined by $\eta_{p}\left(i\right)=\left(a_{i,p},b_{i,p}\right)$. The following claim finishes the proof: \begin{claim*} For $p_{1}\neq p_{2}$, $\eta_{p_{1}}\neq\eta_{p_{2}}$.\end{claim*} \begin{proof} Suppose $p_{1}\left(x\right)\models x<b$ and $p_{2}\left(x\right)\models b<x$, and let $0<e\in M$ be such that $p_{1}\left(x\right)\models x+e<b$ and $p_{2}\left(x\right)\models b+e<x$ (such an $e$ exists since $p_{1}$ and $p_{2}$ are not definable). For some $i<\mu$, $d_{i}>1/e$. Then it follows that $\eta_{p_{1}}\left(i\right)\neq\eta_{p_{2}}\left(i\right)$. \end{proof} \end{proof} \begin{cor} \label{cor:what can we get with S_d}The following equality holds for all cardinals $\lambda\geq\aleph_{0}$: \[ \sup\set{\left|S_{\operatorname{dense}}\left(M\right)^{\omega}\right|}{M\models RCF,\,\left|M\right|=\lambda}=\sup\set{\left(\lambda^{\left\langle \mu\right\rangle _{\operatorname{tr}}}\right)^{\omega}}{\mu\leq\lambda,\operatorname{cof}\left(\mu\right)=\mu}. \] \end{cor} \begin{proof} The inequality $\leq$ follows immediately from Proposition \ref{prop:S_d is bounded}. For $\geq$ we will show that for every regular $\mu\leq\lambda$, $\left(\lambda^{\left\langle \mu\right\rangle _{\operatorname{tr}}}\right)^{\omega}\leq\sup\set{\left|S_{\operatorname{dense}}\left(M\right)^{\omega}\right|}{\left|M\right|=\lambda}$. Suppose $\lambda^{\left\langle \mu\right\rangle _{\operatorname{tr}}}$ is attained, i.e., there is a tree of size $\lambda$ with $\lambda^{\left\langle \mu\right\rangle _{\operatorname{tr}}}$ branches of length $\mu$. Then by Proposition \ref{prop:every tree manifests in dense types}, for some model $M\models RCF$ of size $\lambda$, $\left|S_{\operatorname{dense}}\left(M\right)\right|\geq\lambda^{\left\langle \mu\right\rangle _{\operatorname{tr}}}$, so in that case we are done. Suppose $\lambda^{\left\langle \mu\right\rangle _{\operatorname{tr}}}$ is not attained.
In that case $\operatorname{cof}\left(\lambda^{\left\langle \mu\right\rangle _{\operatorname{tr}}}\right)>\lambda$. Indeed, if not, then $\lambda^{\left\langle \mu\right\rangle _{\operatorname{tr}}}=\bigcup_{i<\lambda}\sigma_{i}$ for some cardinals $\sigma_{i}<\lambda^{\left\langle \mu\right\rangle _{\operatorname{tr}}}$. For each $i<\lambda$, there is a tree $T_{i}$ of size $\lambda$ with more than $\sigma_{i}$ branches of length $\mu$. Let $T$ be the disjoint union of $T_{i}$ for $i<\lambda$. Then $T$ is a tree of size $\lambda$, with at least $\lambda^{\left\langle \mu\right\rangle _{\operatorname{tr}}}$ branches of length $\mu$ --- contradiction. In particular, $\operatorname{cof}\left(\lambda^{\left\langle \mu\right\rangle _{\operatorname{tr}}}\right)>\omega$, so every function $f:\omega\to\lambda^{\left\langle \mu\right\rangle _{\operatorname{tr}}}$ is bounded, and hence: \[ \left(\lambda^{\left\langle \mu\right\rangle _{\operatorname{tr}}}\right)^{\omega}=\sup\set{\kappa^{\omega}}{\kappa<\lambda^{\left\langle \mu\right\rangle _{\operatorname{tr}}}}. \] So it is enough to show that for each $\kappa<\lambda^{\left\langle \mu\right\rangle _{\operatorname{tr}}}$, there is a model $M$ of size $\lambda$ with more than $\kappa$ dense types, which follows from Proposition \ref{prop:every tree manifests in dense types}. \end{proof} \begin{example} In \cite[Section 6]{Sh1007} it is shown that it is consistent with ZFC that there is an uncountable cardinal $\lambda$ such that: \begin{enumerate} \item $\operatorname{cof}\left(\operatorname{ded}\lambda\right)=\operatorname{cof}\left(\lambda\right)=\aleph_{0}$, so $\left(\operatorname{ded}\lambda\right)^{\omega}>\operatorname{ded}\lambda$. \item For all regular cardinals $\mu<\lambda$, $\lambda^{\mu}\leq\operatorname{ded}\lambda$. \end{enumerate} So in this case, \[ \sup\set{\left(\lambda^{\left\langle \mu\right\rangle _{\operatorname{tr}}}\right)^{\omega}}{\mu\leq\lambda,\operatorname{cof}\left(\mu\right)=\mu}=\operatorname{ded}\lambda.
\] \end{example} However, we have the following: \begin{cor} \label{cor:cof uncountable is ok}For any cardinal $\lambda$, if $\operatorname{cof}\left(\operatorname{ded}\lambda\right)>\aleph_{0}$, then \textup{ \[ \sup\set{\left(\lambda^{\left\langle \mu\right\rangle _{\operatorname{tr}}}\right)^{\omega}}{\mu\leq\lambda,\operatorname{cof}\left(\mu\right)=\mu}=\left(\operatorname{ded}\lambda\right)^{\omega}. \] }\end{cor} \begin{proof} The inequality $\leq$ is clear. For $\geq$, we use an argument similar to the one in the proof of Corollary \ref{cor:what can we get with S_d}. Since $\left(\operatorname{ded}\lambda\right)^{\omega}=\bigcup\set{\kappa^{\omega}}{\kappa<\operatorname{ded}\lambda}$, we only have to show that for every $\kappa<\operatorname{ded}\lambda$ we have $\kappa<\lambda^{\left\langle \mu\right\rangle _{\operatorname{tr}}}$ for some regular $\mu\leq\lambda$. But that already follows from Fact \ref{fac:calculating ded}. \end{proof} \end{document}
\begin{document} \begin{abstract} We describe a bound on the degree of the generators for some adjoint rings on surfaces and threefolds. \end{abstract} \maketitle \tableofcontents \section{Introduction} The aim of this paper is to provide a first step towards an effective version of the finite generation of adjoint rings. The study of effective results in birational geometry has a long history. On the one hand, the boundedness results on the pluricanonical maps for varieties of general type, due to Hacon, M\textsuperscript{c}Kernan, Takayama and Tsuji \cite{HM06,Takayama06}, laid the foundation for a great deal of work towards an effective version of the (log)-Iitaka fibrations (e.g. \cite{VZ09,TX08}). On the other hand, Koll\'ar's effective version \cite{Kollar93b} of the Kawamata--Shokurov base point free theorem provides an explicit bound for a multiple of a nef adjoint divisor to be base point free. Finally, many recent results on the geography of projective threefolds of general type yield, in particular, a description of the singularities which may appear on varieties of general type (e.g. see \cite{ChenChen10a,ChenChen10}). The main goal of this paper is to combine some of these results and study an effective version of the finite generation for adjoint rings. More specifically, given a Kawamata log terminal pair $(X,B)$, the main result of \cite{BCHM10} implies that the canonical ring $R(X,K_X+B)$ of $K_X+B$ is finitely generated (see also \cite{Siu08,CL10a}). Moreover, if $A$ is an ample $\mathbb Q$-divisor and $B_1,\dots,B_k$ are $\mathbb Q$-divisors such that $(X,B_i)$ is Kawamata log terminal for all $i=1,\dots,k$, then the associated adjoint ring $R(X;K_X+A+B_1,\dots,K_X+A+B_k)$ is also finitely generated (cf. Definition \ref{d_adjointrings}).
On the other hand, the problem of the finite generation of $R(X;K_X+B_1,\dots,K_X+B_k)$, without the assumption that the $B_i$ are big, is still open and it implies the abundance conjecture (e.g. see \cite{CL10}). Thus, it is reasonable to ask if there exists a bound on the number of generators of the adjoint ring on a smooth projective variety which depends only on the numerical and topological invariants associated to the pairs $(X,B_i)$ for $i=1,\dots,k$. Inspired by these questions, our main result is the following: \begin{theorem}\label{t_ld_threefold} Let $(X,B)$ be a Kawamata log terminal pair such that $X$ is a smooth projective threefold. Assume that $B$ is nef and that $B$ or $K_X+B$ is big. Let $a$ be a positive integer such that $aB$ is Cartier. Then, there exists a positive integer $q$, depending only on the Picard number $\rho(X)$ of $X$, such that $R(X,qa(K_X+B))$ is generated in degree $5$ (cf. Definition \ref{d_adjointrings}). \end{theorem} As a direct consequence, we obtain: \begin{corollary}\label{c_main} Let $X$ be a smooth projective threefold of general type. Then there exists a constant $m$ which depends only on the Picard number $\rho(X)$ such that the stable base locus of $K_X$ coincides with the base locus of the linear system $|mK_X|$. \end{corollary} In addition, we obtain a stronger version of the theorem above in the case of surfaces. It is worth mentioning that the proofs of these results rely in a crucial way on the classification of Kawamata log terminal surface singularities and terminal threefold singularities. The paper is organized as follows. In section \ref{s_pr}, we describe the main tools used in the paper, which are mainly based on Koll\'ar's effective base point free theorem, Mumford's regularity theorem and the classification of surface and threefold singularities.
In section \ref{s_bound}, we describe a bound on the index of the singularities of the minimal model of a smooth projective threefold $X$, which depends on the Picard number $\rho(X)$ of $X$. The bound is obtained as a consequence of a recent result by Chen and Hacon \cite{ChenHacon11}. Finally, in section \ref{s_fg}, we prove the main results of the paper and we show, in many examples, that some of these results are optimal. In particular, the bound on the degree of the generators for a Kawamata log terminal pair $(X,B)$ depends on the Picard number of the projective variety $X$. Note that all the bounds obtained in the paper are easily computable, but they are far from being sharp. For this reason, we often omit an explicit description of these bounds. \section{Preliminary results}\label{s_pr} \subsection{Notation} We work over the field of complex numbers $\mathbb C$. We refer to \cite{KM98} for the classical definitions of singularities in the Minimal Model Program. In particular, given a log pair $(X,B)$, we denote by $a(\nu,X,B)$ the {\em discrepancy} of $(X,B)$ with respect to a valuation $\nu$. A rational map $f\colon \rmap X.Y.$ between normal projective varieties $X$ and $Y$ is a \emph{contraction} if the inverse map $f^{-1}$ does not contract any divisors. The {\em exceptional locus} of $f$ is the subset of $X$ on which $f$ is not an isomorphism. Let $f\colon \rmap X.Y.$ be a proper birational contraction of normal projective varieties and let $D$ be an $\mathbb R$-Cartier divisor on $X$ such that $D_Y=f_*D$ is also $\mathbb R$-Cartier.
Then $f$ is $D$-\emph{non-positive} (respectively $D$-\emph{negative}) if for some common resolution $p\colon \map W.X.$ and $q\colon \map W.Y.$ which resolves the indeterminacy locus of $f$, we may write $$p^*D=q^*D_Y+E$$ where $E\ge 0$ is $q$-exceptional (respectively $E\ge 0$ is $q$-exceptional and the support of $E$ contains the strict transform of the exceptional divisor of $f$). In particular, if $(X,B)$ is a Kawamata log terminal pair, $D=K_X+B$ and $D_Y=K_Y+B_Y$, then $f$ is $(K_X+B)$-non-positive (respectively $(K_X+B)$-negative) if and only if $$a(F,X,B)\le a(F,Y,f_*B) \qquad (\text{respectively } a(F,X,B)<a(F,Y,f_*B)~)$$ for all the prime divisors $F$ which are exceptional over $Y$. Let $(X,B)$ be a Kawamata log terminal pair. A proper birational contraction $f\colon \rmap X.Y.$ of normal projective varieties is a \emph{log terminal model} for $(X,B)$ if $f$ is $(K_X+B)$-negative, $Y$ is $\mathbb Q$-factorial and $K_Y+f_*B$ is nef. If $B=0$ then a log terminal model of $(X,B)$ is called a \emph{minimal model} of $X$. We denote by LMMP$_n$ the classical conjectures in the Log Minimal Model Program in dimension $n$. In particular, LMMP$_n$ implies that each Kawamata log terminal pair $(X,B)$ such that $K_X+B$ is pseudo-effective admits a log terminal model. \begin{definition}\label{d_z} Let $X$ be a smooth projective variety and let $D$ be a $\mathbb Q$-divisor on $X$. We denote by $\kappa(X,D)$ the Kodaira dimension of $D$. For any positive integer $q$ such that $qD$ is Cartier and the linear system $|qD|$ is not empty, we denote by $\operatorname{Fix}|qD|$ the fixed part of the linear system $|qD|$. Thus, if $\kappa(X,D)\ge 0$, we may define $$\operatorname{\mathbf{Fix}}(D)=\liminf_{q\to \infty} \frac 1 q \operatorname{Fix}|qD|,$$ where the limit is taken over all sufficiently divisible positive integers.
\end{definition} \begin{remark}\label{r_z} Let $(X,B)$ be a log smooth projective pair of dimension $n$ such that $\rfdown B.=0$ and let $f\colon \rmap X.Y.$ be a log terminal model of $K_X+B$. Then, if $B_Y=f_*B$, we may write $$K_X+B=f^*(K_Y+B_Y)+E$$ for some $f$-exceptional $\mathbb Q$-divisor $E\ge 0$. The negativity lemma (e.g. \cite[Lemma 3.6.2]{BCHM10}) implies that $E$ does not depend on the log terminal model $f$. In addition, assuming LMMP$_n$, it is easy to check that $E=\operatorname{\mathbf{Fix}}(K_X+B)$. \end{remark} \begin{lemma}\label{l_nef} Let $(X,B)$ be a $\mathbb Q$-factorial projective Kawamata log terminal pair. Assume that $B$ is nef, $K_X+B$ is pseudo-effective and that $B$ or $K_X+B$ is big. Then there exists a sequence of steps $$X=\rmap X_0.\rmap .\dots. .X_k=Y.$$ of the $K_X$-minimal model program such that the induced birational map $f\colon \rmap X.Y.$ is a log terminal model of $(X,B)$. \end{lemma} \begin{proof} Let $A\ge 0$ be an ample $\mathbb Q$-divisor. For any rational number $\varepsilon>0$, since $B$ is nef, we have that $B+\varepsilon A$ is ample.
Thus, there exist a $\mathbb Q$-divisor $H_\varepsilon\sim_{\mathbb Q}B+\varepsilon A$ and $\lambda_\varepsilon>0$ such that, for any sufficiently small $\varepsilon$, the pair $(X,\lambda_\varepsilon H_\varepsilon)$ is Kawamata log terminal and $K_X+\lambda_\varepsilon H_\varepsilon$ is nef. Let us consider the $K_X$-minimal model program of $X$ with scaling of $H_\varepsilon$ \cite[Remark 3.10.10]{BCHM10}. Note that if $B$ is not big, then by assumption $K_X+B$ is big, and therefore there exist $\delta,\eta>0$ such that $\eta(K_X+B)\sim_{\mathbb Q}\delta A+B'$ for some $\mathbb Q$-divisor $B'\ge 0$ such that $(X,B+B')$ is Kawamata log terminal. Let $C=B+B'+(\delta +\varepsilon(1+\eta))A$. We may assume that $(X,C)$ is Kawamata log terminal.
If $1\le t\le \lambda_\varepsilon$, we have $$\begin{aligned} (1+\eta)&(K_X+tH_\varepsilon)\sim_{\mathbb Q} (1+\eta)(K_X+B+\varepsilon A+(t-1)H_\varepsilon)\\ &\sim_{\mathbb Q} K_X+B+B'+(\delta +\varepsilon(1+\eta))A+(t-1)(1+\eta)H_\varepsilon\\ &\sim_{\mathbb Q}K_X+C+(t-1)(1+\eta)H_\varepsilon. \end{aligned}$$ Thus, if $1\le t\le \lambda_\varepsilon$, a log terminal model of $(X,tH_\varepsilon)$ is also a log terminal model of $(X,C+(t-1)(1+\eta)H_\varepsilon)$, and the $K_X$-minimal model program with scaling of $H_\varepsilon$ coincides with the $(K_X+C)$-minimal model program with scaling of $(1+\eta)H_\varepsilon$. Therefore, after a finite number of steps of the $K_X$-minimal model program, we obtain a $K_X$-negative map $f_\varepsilon\colon \rmap X. X_\varepsilon.$ such that $X_\varepsilon$ is $\mathbb Q$-factorial and $K_{X_\varepsilon}+f_{\varepsilon *} H_\varepsilon$ is nef.
By finiteness of models \cite[Theorem E]{BCHM10}, there exists a sequence $\varepsilon_i$ such that $\lim \varepsilon_i=0$ and $X_{\varepsilon_i}$ is constant. In particular $K_{X_{\varepsilon_i}}+f_{\varepsilon_i *}B$ is nef. Note that since $f_{\varepsilon_i}\colon \rmap X.X_{\varepsilon_i}.$ is $(K_X+H_{\varepsilon_i})$-negative and $A$ is ample, it is also $(K_X+B)$-negative. Thus $f_{\varepsilon_i}$ is a log terminal model of $(X,B)$. \end{proof} \subsection{Koll\'ar's effective base point freeness} In this section we describe some easy generalisations of Koll\'ar's base point freeness theorem and Mumford's regularity theorem. \begin{theorem}\label{t_ebpf} Let $(X,B)$ be a projective Kawamata log terminal pair of dimension $n$ such that $K_X+B$ is nef and $B$ or $K_X+B$ is big. Let $a$ be a positive integer such that $a(K_X+B)$ is Cartier. Then there exists a positive integer $q$, depending only on $n$, such that the linear system $|qa(K_X+B)|$ is base point free. \end{theorem} \begin{proof} If $B$ is big, then there exists an ample $\mathbb Q$-divisor $A$ and an effective $\mathbb Q$-divisor $D$ such that $B\sim_{\mathbb Q}A+D$.
Then, if $\varepsilon>0$ is a sufficiently small rational number and $\Delta=(1-\varepsilon)B+\varepsilon D$, the pair $(X,\Delta)$ is Kawamata log terminal and $B\sim_{\mathbb Q}\Delta+\varepsilon A$. Thus, if $M=a(K_X+B)$, then $$M-(K_X+\Delta)\sim_{\mathbb Q} \varepsilon A + (a-1)(K_X+B)$$ is ample and the result follows by Koll\'ar's effective base point freeness theorem \cite{Kollar93b}. Thus, we may assume that $K_X+B$ is big and nef. Let $L=2a(K_X+B)$. Then $L$ is Cartier and $L-(K_X+B)$ is big and nef. The result follows again from \cite{Kollar93b}. \end{proof} \begin{remark} Using the notation of Theorem \ref{t_ebpf}, by \cite[Theorem 1.1]{Kollar93b} we can take $q=4(n+2)!(n+1)$. \end{remark} \begin{lemma}\label{l_ebpf} Let $(X,B)$ be a projective Kawamata log terminal surface such that $K_X+B$ is nef. Let $a$ be a positive integer such that $a(K_X+B)$ is Cartier. Then there exists a positive integer $m$, depending only on $a$, such that $|m(K_X+B)|$ is base point free. \end{lemma} \begin{proof} By Theorem \ref{t_ebpf}, we may assume that $K_X+B$ is not big. If $K_X+B\sim_{\mathbb Q} 0$, then the result follows from \cite[Theorem 3.1]{TX08}. Thus, we may assume that there exists a map $f\colon \map X.C.$ onto a smooth curve $C$ such that $K_X+B=f^*D$ for some ample $\mathbb Q$-divisor $D$ on $C$. We may assume that $D=K_C+B_C$ for some effective $\mathbb Q$-divisor $B_C$ on $C$ such that $\rfdown B_C.=0$ (e.g. see \cite{Kawamata98}).
By \cite[Theorem 8.1]{PS09}, there exists a constant $b$, depending only on $a$, such that $bB_C$ is Cartier. Thus, the result follows. \end{proof} The next result follows closely the proof of Mumford's regularity theorem (e.g. see \cite[Theorem 1.8.3]{Lazarsfeld04a}). \begin{proposition}\label{p_mumford} Let $X$ be a normal projective variety of dimension $n$. Let $B_1,\dots,B_k$ be $\mathbb Q$-divisors on $X$ and let $a_1,\dots,a_k$ be positive integers such that $(X,B_i)$ is a Kawamata log terminal pair and there exist Cartier divisors $L_1,\dots,L_k$ such that $L_i\sim_{\mathbb Q}a_i(K_X+B_i)$ and the linear system $|L_i|$ is base point free, for $i=1,\dots,k$. Let $G=\sum_{i=1}^k b_i L_i$ for some positive integers $b_1,\dots,b_k$ and assume that $b_\ell > n+1$ for some $\ell\in\{1,\dots,k\}$. Then, the natural map $$H^0(X,\ring X.(G))\otimes H^0(X,\ring X.( L_\ell )) \to H^0(X,\ring X.(G+ L_\ell ))$$ is surjective. \end{proposition} \begin{proof} We first assume that $\sum_{i=1}^k L_i$ is not big. Since the linear system $|\sum_{i=1}^k L_i|$ is base point free, there exists a morphism with connected fibres $f\colon \map X.Y.$ onto a normal projective variety $Y$ such that $\sum_{i=1}^k L_i=f^*A$ for some very ample divisor $A$ on $Y$. Since $L_1,\dots,L_k$ are nef, if $\xi$ is a curve contracted by $f$ then $L_i\cdot\xi=0$ for any $i=1,\dots,k$. In particular, since $|L_i|$ is base point free and the restriction of $L_i$ to any fibre of $f$ is trivial, it follows that there exist Cartier divisors $L'_1,\dots,L'_k$ such that $L_i=f^*L'_i$. By \cite[Theorem 4.1]{Ambro05}, it follows that $L'_i\sim_{\mathbb Q}a_i(K_{Y}+B'_i)$ for some $\mathbb Q$-divisor $B'_i$ such that $(Y,B'_i)$ is Kawamata log terminal for any $i=1,\dots,k$. Note that the linear system $|L'_i|$ is base point free and that $H^0(X,\sum_{i=1}^k c_i L_i )\simeq H^0(Y,\sum_{i=1}^k c_iL'_i)$ for any non-negative integers $c_1,\dots,c_k$.
Thus, after replacing $X$ by $Y$, $L_i$ by $L'_i$ and $B_i$ by $B'_i$, we may assume that $\sum_{i=1}^k L_i$ is big. Let $V=H^0(X,\ring X.( L_\ell ))$ and let $\mathcal V=V\otimes \ring X.$. Then the evaluation map $$\mathcal V\otimes \ring X.(- L_\ell )\to \ring X.$$ is surjective. Thus, if $r=\dim V$, then the sequence $$\begin{aligned} 0=\wedge^{r+1}\mathcal V\otimes &\ring X.(-(r+1) L_\ell )\to \dots\\ &\to \wedge^2\mathcal V\otimes \ring X.(-2 L_\ell )\to \mathcal V\otimes \ring X.(- L_\ell )\to \ring X.\to 0 \end{aligned}$$ is exact. Twisting by $\ring X.(G+ L_\ell )$ gives $$\begin{aligned} 0\to & \wedge^{r}\mathcal V\otimes \ring X.(G-(r-1) L_\ell )\to \dots \\ &\to \wedge^2\mathcal V\otimes \ring X.(G-L_\ell )\to \mathcal V\otimes \ring X.(G)\to \ring X.(G+L_\ell )\to 0. \end{aligned}$$ Since $\sum_{i=1}^kL_i$ is big, $b_\ell\ge n+2$ and $b_i\ge 1$ for $i\neq \ell$, we have that $$\begin{aligned} \sum_{i\neq \ell} b_i L_i + (b_\ell - j) &L_\ell - (K_X+B_\ell)\\ &\sim_{\mathbb Q} \sum_{i\neq \ell} b_i L_i + (b_\ell - j) L_\ell -\frac 1 {a_\ell} L_\ell \\ &\sim_{\mathbb Q} \sum_{i=1}^k L_i + \sum_{i\neq \ell} (b_i-1)L_i+ \left (b_\ell - j - 1 - \frac 1 {a_\ell}\right )L_\ell \end{aligned}$$ is big and nef. Thus, Kawamata-Viehweg vanishing implies that $$\begin{aligned} H^j(X,\wedge^{j+1}\mathcal V &\otimes \ring X.(G-j L_\ell ))=\wedge^{j+1}V\otimes H^j(X,\ring X.(G-j L_\ell ))\\ &=\wedge^{j+1}V\otimes H^j(X,\ring X.(\sum_{i\neq \ell} b_i L_i + (b_\ell - j) L_\ell ))=0 \end{aligned}$$ for any $j>0$.
Since $$H^0(X,\ring X.(G))\otimes H^0(X,\ring X.( L_\ell ))=H^0(X,\mathcal V\otimes \ring X.(G)),$$ the map $$H^0(X,\ring X.(G))\otimes H^0(X,\ring X.( L_\ell ))\to H^0(X,\ring X.(G+ L_\ell ))$$ is surjective and the claim follows. \end{proof} \subsection{Adjoint rings} In this section, we recall some basic notions about adjoint rings. \begin{definition}\label{d_adjointrings} Let $X$ be a smooth projective variety and let $D_1,\dots,D_k$ be Cartier divisors on $X$. The \emph{adjoint ring associated to} $D_1,\dots,D_k$ is $$R(X;D_1,\dots,D_k)=\bigoplus_{(a_1,\dots,a_k)\in \mathbb Z_{\ge 0}^k}H^0(X,\ring X.(\sum_{i=1}^k a_iD_i)).$$ We say that $R=R(X;D_1,\dots,D_k)$ is \emph{generated in degree} $m$ if $R$ is generated by sections of $H^0(X,\ring X.(\sum_{i=1}^k a_iD_i))$, for $a_1,\dots,a_k\in \{0,\dots,m\}$. \end{definition} \begin{remark} The definition of adjoint rings can be easily extended to $\mathbb Q$-divisors $D_1,\dots,D_k$ on a smooth projective variety $X$, but it will not be used in this paper in this generality (e.g. see \cite{CL10a} for more details). \end{remark} \begin{remark}\label{r_b} Let $X$ be a smooth projective variety and let $B$ be a $\mathbb Q$-divisor on $X$ such that $(X,B)$ is Kawamata log terminal. Let $q$ be a positive integer such that $qB$ is Cartier. Then $R=R(X,q(K_X+B))$ is generated in degree $m$ if and only if the sections of $$\bigoplus_{a\le m}H^0(X,aq(K_X+B))$$ generate $R$. \end{remark} \begin{proposition}\label{p_b} Let $X$ be a smooth projective variety of dimension $n$. Let $B_1,\dots,B_k$ be $\mathbb Q$-divisors on $X$ and let $a_1,\dots,a_k$ be positive integers such that $(X,B_i)$ is Kawamata log terminal and there exist Cartier divisors $L_1,\dots,L_k$ such that $L_i\sim_{\mathbb Q}a_i(K_X+B_i)$ and the linear system $|L_i|$ is base point free, for $i=1,\dots,k$.
Then $R(X;L_1,\dots,L_k)$ is generated in degree $n+2$. \end{proposition} \begin{proof} Let $G=\sum_{i=1}^k m_i L_i$ for some integers $m_1,\dots,m_k\ge 0$ and assume that there exists $\ell\in\{1,\dots,k\}$ such that $m_\ell> n+2$. Then Proposition \ref{p_mumford} implies that $$H^0(X, \ring X.(G-L_\ell))\otimes H^0(X, \ring X.(L_\ell))\to H^0(X,\ring X.(G))$$ is surjective and the claim follows. \end{proof} \begin{lemma}\label{l_b} Let $X$ be a smooth projective variety and let $D$ be a Cartier divisor on $X$. Assume that $R(X,D)$ is generated in degree $m$ and let $q=m!$. Then the stable base locus of $D$ is equal to the base locus of the linear system $|qD|$. In addition, if $F=\operatorname{\mathbf{Fix}}(D)$ (cf. Definition \ref{d_z}), then $qF$ is a Cartier divisor. \end{lemma} \begin{proof} If $x$ is a point contained in the base locus of the linear system $|qD|$ and $m'\le m$ is a positive integer, then, since $m'$ divides $q$, it follows that any section of $H^0(X,\ring X.(m'D))$ vanishes at $x$. Thus, by assumption, any section of $H^0(X,\ring X.(\ell D))$ vanishes at $x$, for any positive integer $\ell$. In particular, $x$ is contained in the stable base locus of $D$ and the first claim follows. The proof of the second claim is analogous. \end{proof} \subsection{Surface and threefold singularities} In this section, we recall a few known facts about Kawamata log terminal singularities in dimension $2$ and terminal singularities in dimension $3$. \begin{lemma}\label{l_fact} Let $(X,B)$ be a log smooth surface such that $\rfdown B.=0$ and $K_X+B$ is pseudo-effective. Let $f\colon\map X.Y.$ be the log terminal model of $(X,B)$.
Then there exist birational morphisms $g\colon \map X.Z.$ and $h\colon \map Z.Y.$ which factorize $f$ and such that \begin{enumerate} \item $g$ is a sequence of smooth blow-ups; \item $h$ contracts only divisors contained in the support of $g_*B$. \end{enumerate} \end{lemma} \begin{proof} Let $h\colon \map Z .Y.$ be the minimal resolution of $Y$. Then, since $X$ is smooth, there exists a morphism $g\colon\map X.Z.$, which is a sequence of smooth blow-ups and such that $f=h\circ g$. Let $C=g_*B$. Then $h$ is the log terminal model of $(Z,C)$ and, since $Z$ is the minimal resolution, it follows that $h$ does not contract any $(-1)$-curve. In particular, $a(F,Y)\le 0$ for any curve $F$ contracted by $h$. Thus, $a(F,Y,f_*B)\le 0$ and, since $h$ is $(K_Z+C)$-negative, it follows that $a(F,Z,C)<0$. Therefore $F$ is contained in the support of $C$. \end{proof} We proceed by bounding the index of a Kawamata log terminal surface with respect to the graph of its minimal resolution: \begin{proposition}\label{p_kltsurface} Let $(S,p)$ be the germ of a Kawamata log terminal surface and let $f\colon T\to S$ be the minimal resolution of $S$. Assume that $E_1,\dots,E_k$ are the irreducible components of the exceptional divisor of $f$ and let $\varepsilon>0$ be such that $a(E_i,S)\ge -1+\varepsilon$ for all $i=1,\dots,k$. Then there exists a constant $r=r(k,\varepsilon)$, depending only on $k$ and $\varepsilon$, such that $rD$ is Cartier for all Weil divisors $D$ on $S$.
\end{proposition} \begin{proof} The germ of a Kawamata log terminal surface $(S,p)$ is given by the quotient of $\mathbb C^2$ by a finite subgroup $G$ of $\operatorname{GL}(2,\mathbb C)$ without quasi-reflections. Thus, it is enough to bound the order of $G$ in terms of the graph associated to the minimal resolution of $(S,p)$. It follows from \cite[p. 348]{Brieskorn68} that the order of $G$ is at most $r=120k^2/\varepsilon^3$. \end{proof} We now consider the germ $(X,p)$ of a terminal singularity in dimension $3$. The {\em index} of $X$ at $p$ is the smallest positive integer $r=r(X,p)$ such that $rK_X$ is Cartier. In addition, it follows from the classification of terminal singularities \cite{Mori85} that there exists a deformation of $(X,p)$ into a variety with $k\ge 1$ terminal singularities $p_1,\dots,p_k$ which are isolated cyclic quotient singularities of index $r(p_i)$. The set $\{p_1,\dots,p_k\}$ is called the {\em basket} $\mathcal B(X,p)$ of singularities of $X$ at $p$ \cite{Reid85}. The number $k$ is called the {\em axial weight} $\operatorname{aw}(X,p)$ of $(X,p)$. As in \cite{ChenHacon11}, we define $$\Xi(X,p)=\sum_{i=1}^k r(p_i).$$ Thus, if $X$ is a projective variety of dimension $3$ and with terminal singularities, we may define $$\Xi(X)=\sum_{p\in X} \Xi(X,p)\qquad \text{and}\qquad \operatorname{aw}(X)=\sum_{p\in X}\operatorname{aw}(X,p).$$ \begin{definition} Let $X$ be a terminal projective variety of dimension $3$. Then a $w$-resolution of $X$ is a sequence $$Y=\map X_N.\map .\dots.
.X_0=X$$ such that \begin{enumerate} \item $X_i$ has only terminal singularities for $i=1,\dots,N$ and $X_N$ has only Gorenstein singularities; \item the map $\map X_{i+1}.X_{i}.$ is a weighted blow-up at $P_i\in X_i$ with minimal discrepancy $1/m_i$, where $m_i$ is the index of $X_{i}$ at $P_i$, for $i=0,\dots,N-1$. \end{enumerate} \end{definition} The following result is due to Hayakawa. \begin{theorem} Let $X$ be a terminal projective threefold. Then $X$ admits a $w$-resolution. \end{theorem} \begin{proof} See \cite[Theorem 6.1]{Hayakawa00}. \end{proof} Given $X$ as above, we define the {\em depth} $\operatorname{dep}(X)$ of $X$ to be the minimum length of any $w$-resolution of $X$. \begin{proposition}\label{t_ch} Let $X,X'$ be terminal projective varieties of dimension $3$. Then: \begin{enumerate} \item If $\rmap X.X'.$ is a flip, then $\operatorname{dep} (X)>\operatorname{dep} (X')$. \item If $\map X. X'.$ is an extremal divisorial contraction to a curve, then $\operatorname{dep} (X)\ge \operatorname{dep} (X')$. \item If $\map X.X'.$ is an extremal divisorial contraction to a point, then $\operatorname{dep}(X)\ge \operatorname{dep}(X')-1$. \end{enumerate} \end{proposition} \begin{proof} See \cite[Propositions 2.15, 3.8, 3.9]{ChenHacon11}. \end{proof} \section{Bounding threefold terminal singularities}\label{s_bound} The aim of this section is to give a bound on the singularities of the minimal model of a smooth projective threefold, depending on the topology of its resolution. More specifically, we show that we can bound the sum of the indices of all the points in all the baskets of the minimal model $Y$ by a constant which depends only on the Picard number $\rho(X)$ of $X$.
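\begin{remark} As a simple illustration of the invariants introduced above (not needed in the sequel): if $(X,p)$ is itself an isolated cyclic quotient singularity of index $r$, then its basket consists of the point itself, so that $\operatorname{aw}(X,p)=1$ and $\Xi(X,p)=r$. More generally, for a point of type $cA/r$ of axial weight $k$, the basket consists of $k$ cyclic quotient points of index $r$, so that $\Xi(X,p)=rk$ (cf. the proof of Lemma \ref{l_xidep}). \end{remark}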
We first show a bound on the number of flips for the minimal model program of a smooth projective threefold $X$: \begin{lemma}\label{l_number} Let $X$ be a smooth projective threefold. Then both the number of divisorial contractions and the number of flips in the minimal model program of $X$ are bounded by $\rho(X)$. In addition, if $Y$ is the minimal model of $X$ then the number of points of $Y$ with index greater than one is also bounded by $\rho(X)$. \end{lemma} \begin{proof} Clearly the number of divisorial contractions is bounded by $\rho(X)$. Let $$X=\rmap X_0.\rmap .\dots. .X_k=Y$$ be a sequence of steps for the $K_X$-minimal model program of $X$, where $\rmap X.Y.$ is a minimal model of $X$. Let $d(X_i)$ be Shokurov's difficulty of $X_i$, i.e. $$d(X_i)=\#\{\nu \text{ valuation}\mid a(\nu,X_i)<1,\quad \nu\text{ is exceptional over } X_i\}.$$ Then, since $X$ is smooth, we have $d(X)=0$. In addition, if $\rmap X_i.X_{i+1}.$ is an extremal divisorial contraction, then $$d(X_{i+1})\le d(X_{i})+1.$$ Finally, if $\rmap X_i.X_{i+1}.$ is a flip, then $$d(X_{i+1}) \le d(X_{i})-1$$ (e.g. see \cite{KM98}). Thus, the total number of flips is bounded by the number of divisorial contractions. Finally, for each point $p\in Y$ such that $r(Y,p)>1$, the main result in \cite{Kawamata93} implies that there exists a weighted blow-up $f\colon W\to Y$ of minimal discrepancy, i.e. if $E$ is the exceptional divisor of $f$ then $$a(E,Y)=\frac {1}{r(Y,p)}.$$ Thus, it follows that the number of such points is bounded by $d(Y)$, and hence also by $\rho(X)$, and the claim follows. \end{proof} The following is a generalization of \cite[Proposition 2.13]{ChenHacon11}: \begin{lemma}\label{l_xidep} Let $Z$ be a terminal projective variety of dimension $3$. Then $$\Xi(Z)\le 2 \operatorname{dep} (Z).$$ \end{lemma} \begin{proof} We proceed by induction on $\operatorname{dep}(Z)$.
If $\operatorname{dep} (Z)=0$ then $Z$ is Gorenstein and $\Xi(Z)=0$. Thus, the claim follows. Assume now that $\operatorname{dep} (Z)>0$. We claim that if $f\colon \map Y.Z.$ is a weighted blow-up of minimal discrepancy at a point $p\in Z$, then $$\Xi(Y)\ge\Xi(Z)-2.$$ We first prove the lemma, assuming the claim. Let $f\colon \map Y.Z.$ be the first weighted blow-up of minimal discrepancy in a $w$-resolution of $Z$. Then, by definition, we have $\operatorname{dep}(Z)=\operatorname{dep} (Y)+1$ and, by induction, $\Xi(Y)\le 2\operatorname{dep}(Y)$. Thus $$\Xi(Z)\le \Xi(Y)+2\le 2\operatorname{dep} (Y)+2=2\operatorname{dep}(Z),$$ and the lemma follows. The claim follows from the classification of weighted blow-ups in \cite{Hayakawa99,Hayakawa00}. For example, assume that $(Z,p)$ is a point of type $cA/r$ and $\map Y.Z.$ is a weighted blow-up of minimal discrepancy. Then, by the proof of \cite[Theorem 6.4]{Hayakawa99}, there exist positive integers $a,b$ satisfying $a+b=kr$ with $k\le \operatorname{aw}(Z,p)$ and such that the only points of index greater than $1$ on $Y$ are the following: a point $Q_1$ which is a cyclic quotient singularity of index $a$, a point $Q_2$ which is a cyclic quotient singularity of index $b$ and, if $k<\operatorname{aw}(Z,p)$, a point $Q_3$ which is of type $cA/r$ of index $r$ and axial weight $\operatorname{aw}(Z,p)-k$. Thus, since $\Xi(Z,p)=r\operatorname{aw}(Z,p)$, it follows that $$\Xi(Y)-\Xi(Z)=a+b+r(\operatorname{aw}(Z,p) -k) - r\operatorname{aw}(Z,p)=0.$$ Similarly, if $(Z,p)$ is a point of type $cAx/4$ and $\map Y.
Z.$ is the weighted blow-up given by \cite[Theorem 7.4]{Hayakawa99}, then there exists a positive integer $k$ such that the only points of index greater than $1$ on $Y$ are the following: a point $Q_1$ which is a cyclic quotient singularity of index $2k+3$ and, if $\operatorname{aw}(Z,p)>k+1$, a point $Q_2$ which is of type $cD/2$ and such that $\operatorname{aw}(Y,Q_2)=\operatorname{aw}(Z,p)-k-1$. Thus, since $\Xi(Z,p)=2\operatorname{aw}(Z,p)+2$, it follows that $$\Xi(Y)-\Xi(Z)= 2(\operatorname{aw}(Z,p)-k-1)+2k+3 - (2\operatorname{aw}(Z,p)+2)=-1.$$ It is easy to check that if $(Z,p)$ is a singularity of a different type, then Hayakawa's list of weighted blow-ups of minimal discrepancy implies that the inequality is satisfied in all the cases. \end{proof} \begin{proposition}\label{p_2b2} Let $X$ be a smooth projective threefold and assume that $$X=\rmap X_0.\rmap .\dots. .X_k=Y$$ is a sequence of steps for the $K_X$-minimal model program of $X$. Then $$\Xi(Y)\le 2\rho(X).$$ In particular, the inequality holds if $Y$ is the minimal model of $X$. \end{proposition} \begin{proof} Since $X$ is smooth, we have that $\operatorname{dep}(X)=0$. By Proposition \ref{t_ch}, it follows that $\operatorname{dep}(X_{i+1})\le \operatorname{dep}(X_{i})$ unless $\rmap X_{i}.X_{i+1}.$ is a divisorial contraction to a point, in which case we have $\operatorname{dep}(X_{i+1})\le\operatorname{dep}(X_{i})+1$. Thus, $\operatorname{dep}(X_i)$ is bounded by the number of divisorial contractions in the first $i$ steps of the minimal model program of $X$. Thus, Lemma \ref{l_number} implies that $\operatorname{dep}(Y)\le \rho(X)$.
On the other hand, Lemma \ref{l_xidep} implies that $$\Xi(Y)\le 2\operatorname{dep} (Y).$$ Thus, the claim follows. \end{proof} \section{Effective finite generation}\label{s_fg} We now proceed to show a bound on the degree of the generators of the adjoint ring associated to a Kawamata log terminal surface, depending on the number of components of its boundary and the corresponding coefficients. \begin{proposition}\label{p_ld_surfaces} Let $(X,B=\sum_{i=1}^p a_i S_i)$ be a log smooth projective surface, where $S_1,\dots,S_p$ are distinct prime divisors and $\rfdown B.=0$. Let $a$ be a positive integer such that $aB$ is Cartier. Then, there exists a positive integer $m=m(a,p)$, depending only on $a$ and $p$, such that $R(X,m(K_X+B))$ is generated in degree $4$. \end{proposition} \begin{proof} Let $f\colon \map X.S.$ be the log terminal model of $(X,B)$. Lemma \ref{l_fact} implies that $f$ factorizes through the minimal resolution $h\colon \map S'.S.$ of $S$. Let $g\colon \map X.S'.$ be the induced map. Then $h$ contracts only prime divisors which are rational curves with negative self-intersection and contained in the support of $g_*B$. Let $\Gamma$ be the strict transform of such a curve on $X$. Then $$\operatorname{mult}_\Gamma B \ge -a(\Gamma,S).$$ Thus, since $a\operatorname{mult}_\Gamma B$ is a positive integer and $\operatorname{mult}_\Gamma B<1$, it follows that $$a(\Gamma,S)\ge -1 + \frac 1 a.$$ By Proposition \ref{p_kltsurface}, it follows that there exists a constant $r=r(a,p)$, depending only on $a$ and $p$, such that $r D$ is Cartier for any Weil divisor $D$ on $S$. In particular, if $B_S=f_*B$, then $ar(K_S+B_S)$ is a Cartier divisor.
Thus, by Lemma \ref{l_ebpf}, there exists a constant $q=q(a,p)$, depending only on $a$ and $p$, such that the linear system $|q(K_S+B_S)|$ is base point free. Let $L=q(K_S+B_S)$. Proposition \ref{p_mumford} implies that if $b>3$ then the natural map $$H^0(S,\ring S.(bL ))\otimes H^0(S,\ring S.(L))\to H^0(S,\ring S.((b+1)L))$$ is surjective. Thus, $R(X,q(K_X+B))$ is generated in degree $4$ and the claim follows. \end{proof} \begin{remark} Using the same notation as in Proposition \ref{p_ld_surfaces}, let $q$ be the number of components of $B$ which are contained in the stable base locus of $K_X+B$. Then the same proof shows that $R(X,m(K_X+B))$ is generated in degree $4$, where $m$ is a constant depending only on $a$ and $q$. Note that $q\le \rho(X)$. Thus, we can bound $m$ by a constant which depends only on $a$ and the Picard number $\rho(X)$ of $X$. \end{remark} The following example shows that the degree of the generators depends on the number of components $p$. \begin{example} Let $r$ be a prime and let $X_r$ be the smooth toric surface obtained by blowing up a sequence of $r$ infinitely near points of $\mathbb P^2$. More specifically, let $X_0=\mathbb P^2$ with a torus $T=(\mathbb C^*)^2$ acting on it, let $X_1$ be the blow-up of $\mathbb P^2$ at a $T$-invariant point $p$ and, for each $i=2,\dots,r$, let $X_i$ be the surface obtained by blowing up the $T$-invariant point in the exceptional divisor of $f_{i-1}\colon \map X_{i-1}.X_{i-2}.$ which is not contained in the strict transform of the exceptional divisor of $f_{i-2}$. Then $X_r$ admits a chain of $T$-invariant $(-2)$-curves $e_1,\dots,e_{r-1}$ and a $T$-invariant $(-1)$-curve $e_r$, given by the exceptional curve of $f_r$.
Let $\pi\colon \map X_r.X_0.$ be the induced map and let $h=\pi^*\ell$, where $\ell$ is the $T$-invariant line in $X_0$ not passing through $p$. Note that if $f\colon \map X_r.Y.$ is the map obtained by contracting the curves $e_1,\dots,e_{r-1}$, then $Y$ admits a unique singular point, of type $A_{r-1}$. Let $$G=\frac 1 r \sum_{i=1}^r i e_i.$$ Then the $T$-invariant curves on $X_r$ are contained in the classes $$e_1,\dots,e_r,\quad h,\quad h-rG,\quad h-e_1.$$ It follows that if $a$ is a sufficiently large positive integer then $ah-2rG$ is nef, and therefore the linear system $|a h-2rG|$ is base point free. Thus, there exists a $\mathbb Q$-divisor $B_0\ge 0$ such that $B_0\sim_{\mathbb Q} a h-2rG$, and if $B=B_0+\frac 1 2 \sum_{i=1}^r e_i$ then $(X,B)$ is log smooth, $\rfdown B.=0$, and $2B$ is Cartier. It is easy to check that $f\colon \map X_r.Y.$ is the log terminal model of $(X,B)$ and that $$K_X+B=f^*(K_Y+f_*B)+E$$ where $$E=\frac 1 {2r}\sum_{i=1}^{r-1} (r-i)e_i.$$ In particular, by Remark \ref{r_z}, we have $E=\operatorname{\mathbf{Fix}}(K_X+B)$. Thus, since $r$ is prime, Lemma \ref{l_b} implies that if $a$ and $m$ are positive integers such that $a$ is even and $R(X,a(K_X+B))$ is generated in degree $m$, then either $a\ge r$ or $m\ge r$. \end{example}

The following example shows that Proposition \ref{p_ld_surfaces} cannot be extended to the log canonical (or dlt) case.

\begin{example} Let $X$ be the Hirzebruch surface $\mathbb F_r$ of prime degree $r\ge 3$, let $S$ be the unique curve of negative self-intersection $-r$ and let $H$ be a curve of self-intersection $r$.
Then there exists an effective $\mathbb Q$-divisor $B'\sim_{\mathbb Q} 2 H$ such that $S$ is not contained in the support of $B'$ and, if $B=S+B'$, then $(X,B)$ is log smooth, $\rfdown B'.=0$ and $2B'$ is Cartier. In particular, the support of $B'$ does not intersect $S$. It is easy to check that $$\operatorname{\mathbf{Fix}}(K_X+B)=\frac 2 r S.$$ Thus, Lemma \ref{l_b} implies that if $a$ is an even positive integer such that $R(X,a(K_X+B))$ is generated in degree $m$, then either $a\ge r$ or $m\ge r$. \end{example}

We now proceed with the proof of Theorem \ref{t_ld_threefold}:

\begin{proof}[Proof of Theorem \ref{t_ld_threefold}] We may assume that $K_X+B$ is pseudo-effective, since otherwise $R(X,K_X+B)$ is trivial. Since $B$ is big or $K_X+B$ is big, Lemma \ref{l_nef} implies that there exists a sequence of steps $$X=X_0\dashrightarrow X_1\dashrightarrow \cdots \dashrightarrow X_k=Y$$ of the $K_X$-minimal model program such that the induced birational map $f\colon \rmap X.Y.$ is a log terminal model of $(X,B)$. Note that, in particular, since $X$ is smooth, $Y$ has terminal singularities. By Proposition \ref{p_2b2}, there exists a constant $r$, depending only on $\rho(X)$, such that $r D$ is Cartier for any Weil divisor $D$ on $Y$. In particular, if $B_Y=f_*B$, then $ra(K_Y+B_Y)$ is a Cartier divisor. By Theorem \ref{t_ebpf} there exists a constant $q'$ such that, if $q=q'r$, then the linear system $|qa(K_Y+B_Y)|$ is base point free. As in the proof of Proposition \ref{p_ld_surfaces}, Proposition \ref{p_mumford} implies that $R(X,qa(K_X+B))$ is generated in degree $5$, as claimed.
\end{proof}

We expect that a more general result holds for any Kawamata log terminal threefold pair, or, more generally, for adjoint rings on threefolds, as in the case of surfaces. On the other hand, the following example shows that it is not possible to bound the degree of the generators of the canonical ring $R(X,K_X)$ of a smooth projective variety $X$ independently of $\rho(X)$, even assuming $B=0$.

\begin{example} Let $r$ be a prime number and let $Y_r$ be a projective variety of dimension $3$ such that $K_{Y_r}$ is ample and $Y_r$ admits a singular point of type $$\frac 1 r(1,1,r-1).$$ Let $\map X_r.Y_r.$ be a resolution. Then there exists a prime divisor $E$ on $X_r$ which is exceptional over $Y_r$ and such that $a(E,Y_r)=\frac 1 r$. In particular, $$\operatorname{mult}_E \operatorname{\mathbf{Fix}}(K_{X_r})=\frac 1 r.$$ Thus, since $r$ is prime, Lemma \ref{l_b} implies that if $q$ is a positive integer and $R(X_r,qK_{X_r})$ is generated in degree $m$, then either $q\ge r$ or $m\ge r$. \end{example}

\begin{proof}[Proof of Corollary \ref{c_main}] This follows immediately from Theorem \ref{t_ld_threefold} and Lemma \ref{l_b}. \end{proof}

\end{document}
\begin{document}

\begin{abstract} We show that one can achieve transversality for lifts of holomorphic disks to a projectivized vector bundle by locally enlarging the structure group and considering the action of gauge transformations on the almost complex structure, which eases computation considerably. As an application we prove a special case of the Arnold-Givental conjecture. \end{abstract}

\title{Transversality via gauge transformations for pseudoholomorphic disks in projectivized vector bundles}

\setcounter{tocdepth}{2} \tableofcontents

\section{Introduction}

This paper is part of a program of computing Lagrangian Floer theory in symplectic fibrations; namely, one has a compact fiber bundle of symplectic manifolds $$F\rightarrow E\rightarrow B$$ for which the symplectic form on the total space restricts to the fiber form and is augmented by a pullback of the base form in the horizontal direction. One can imagine having a fibration of Lagrangians $$L_F\rightarrow L\rightarrow L_B$$ and wanting to say something about the Lagrangian Floer theory of $L$ given that of $L_F$ and $L_B$. The author's previous work in \cite{fiberedpotential} computed the number of low energy Maslov index two disks through a generic point in $L$ when the symplectic fibration was trivial above $L_B$ and $L\cong L_B\times L_F$ (and under some additional monotonicity resp.\ rationality assumptions on $(F,L_F)$ resp.\ $(B,L_B)$). The goal of this paper is to extend this result to certain classes of fibrations in two ways: first, by including Lagrangians that are not products in the given symplectic fibration, and second, by including the high energy disks in our count. We give a basic example:

\begin{example}[Example \ref{realprojectivebundles}] On the fiber bundle $\mc{O}\oplus \mc{O}_k\xrightarrow{\pi} \bb{CP}^n$ we choose a Hermitian metric and a connection whose parallel transport takes values in $U(2)$ on the fibers.
We get an induced connection and fiberwise symplectic form on $\bb{P}(\mc{O}\oplus \mc{O}_k)$, to which we can apply the Guillemin-Lerman-Sternberg construction \cite{guillemin} to obtain a closed two-form $a$ that restricts to the fiber form. The symplectic structure on $\bb{P}(\mc{O}\oplus \mc{O}_k)$ is then given by $\epsilon a +\pi^*\omega_{\bb{CP}^n}$, where the $\epsilon$ prevents the symplectic form from degenerating in the horizontal direction (a \emph{weak coupling form}). In general, there are families of almost complex structures on the total space that fiberwise agree with a chosen one and make the projection holomorphic. From the definition of the line bundles considered, there is an anti-holomorphic involution $\tau$ on $\bb{P}(\mc{O}\oplus \mc{O}_k)$ that lifts the involution on $\bb{CP}^n$ (see the full Example \ref{realprojectivebundles}), from which we get a fibered Lagrangian $$\bb{RP}^1\rightarrow L\rightarrow \bb{RP}^n.$$ Let $u:(D,\partial D)\rightarrow (\bb{CP}^n,\bb{RP}^n)$ be a holomorphic disk for the integrable toric $J_{\bb{CP}^n}$. The pullback bundle is symplectomorphic to a pullback of the Hirzebruch surface under the hemisphere map.
$L$ must be invariant under parallel transport [Lemma \ref{fiberedlagrangianlemma}], so $u\vert_{\partial D}^*L$ is of the form \begin{equation} L:=\left\lbrace \Big( [x_0,e^{ki\theta/2}x_1],\theta \Big): x_i\in \bb{R}\right\rbrace \subset \bb{CP}^1\times S^1. \end{equation} If we choose the $J$ on the total space to agree with the standard complex structure on the fibers and to preserve the Hirzebruch connection (see Lemma \ref{uniqueacs}), some obvious holomorphic lifts of $u$ look like \begin{gather} \hat{v}_{[0,1]}(z)=[0,1]\label{constantsectiontopintro}\\ \hat{v}_{[1,0]}(z)=[1,0]\label{constantsectionbotintro}\\ \hat{v}_{[x_0,x_1]}(z)=[x_0,z^{k/2}\cdot x_1] \qquad x_0\neq 0,\, k\text{ even.} \label{non-constsectionintro} \end{gather} We would like to include some perturbation of these disks in the count of Maslov index $2$ configurations evaluating to a generic point. \end{example}

In order to say anything about the count of Maslov index $2$ configurations given the above information, one needs some sort of transversality. The main result in this paper is that for pullback bundles we can achieve transversality by enlarging the structure group. Namely, the projectivization of a vector bundle with Hermitian connection has a structure group which includes into $U(n)$ and descends to an inclusion into $PU(n)$ on the fiberwise projectivization. Take an almost complex structure $J$ on the total space that agrees with the fiberwise integrable $PU(n)$-invariant structure. Consider the action of the $PU(n)$-valued gauge transformations on the symplectic connection/almost complex structure on the pullback bundle:

\begin{theorem}[Theorem \ref{liftthm}] Let $A\in H_2( u^*E,u^*L, \bb{Z})$ be a relative section class and $p\in L$ a generic point.
There is a comeager set of gauge transformations $\mf{G}^{reg}\subset \mf{G}$ such that the moduli space $$\mc{M}_{j,\mc{G}^*J}(u^*E,u^*L,A,p)$$ of $\mc{G}^*J$-holomorphic sections in the class $A$, for $\mc{G}\in \mf{G}^{reg}$, is a smooth manifold of expected dimension $$\dim \mc{M}_{j,\mc{G}^*J}(u^*E,u^*L,A,p)=\text{Ind}(A).$$ \end{theorem}

A basic computation [Lemma \ref{hologaugelemma}] shows that such gauge transformations preserve holomorphic sections. Thus, one can use the sections \ref{constantsectiontopintro}, \ref{constantsectionbotintro} and \ref{non-constsectionintro} to count the Maslov index $2$ configurations through a generic point. As an application, we show a special case of the Arnold-Givental conjecture.

\begin{theorem} Let $\bb{P}(\mc{V})\rightarrow \bb{CP}^n$ be the projectivization of a vector bundle with a lift $\tau$ of the anti-symplectic involution on $\bb{CP}^n$. Then for $L=\text{Fix}(\tau)$ and any Hamiltonian isotopy $\Phi^t:E\times \bb{R}\rightarrow E$, we have $$\#\big(\Phi^1(L)\cap L\big)\geq \sum_i \dim H^i(L,\bb{Z}_2).$$ \end{theorem}

The open Gromov-Witten invariants of the fixed point set of an anti-symplectic involution in a rational symplectic manifold are discussed in Section 5.4 of \cite{CW1}. However, we give a short proof of this fact to demonstrate the ease of computation due to Theorem \ref{liftthm}. We provide an explicit computation using the aforementioned special almost complex structures, and relate the Floer cohomology groups to those computed with a general divisorial perturbation system as in \cite{CW1,CW2}.

\subsection{Outline} Section \ref{symplecticbackgroundsection} is devoted to the construction of a symplectic manifold from a vector bundle. Section \ref{holomorphicsectionssection} contains the language and proof of Theorem \ref{liftthm}.
Section \ref{transection} provides the modifications to previous results of the author for transversality and compactness of general configurations in fiber bundles. Section \ref{fukayaalgebrasection} defines the Fukaya algebra of a Lagrangian and the obstruction to Floer cohomology. Section \ref{examplesection} contains some examples and the proof of Theorem \ref{agthm}.

\section{Symplectic manifolds from projectivized vector bundles}\label{symplecticbackgroundsection}

Let $B$ be a rational symplectic manifold, and let $$V\rightarrow \mc{V}\rightarrow (B,\omega_B)$$ be a rank $m+1$ complex vector bundle equipped with a Hermitian metric $h$ and a compatible fiberwise integrable complex structure. We assume that there is a connection on $\mc{V}$ whose holonomy lies in $U(m+1)$ of each fiber. The imaginary part of $h$ defines a global alternating two-form which restricts to a symplectic form on each fiber. With respect to an $S^1$ action we can form the fiberwise reduction, denoted $$\bb{CP}^m\rightarrow \bb{P}(\mc{V})\rightarrow B,$$ with a fiberwise symplectic two-form $a_h$ induced from the reduction of $\Im (h)$. The connection induces a splitting of the tangent space $$T\bb{P}^m\oplus T\bb{P}^{m,\perp a_h}=:TF\oplus H_h,$$ which prescribes $PU_{m+1}$-valued parallel transport. We will often denote the total space by $E$ and the fiber by $F$. To this connection we can associate a closed connection form via the Guillemin-Lerman-Sternberg minimal coupling construction \cite{guillemin}. Necessarily we have the \emph{curvature identity} for a closed connection form \begin{equation}\label{curvatureidentity} d a(v^\sharp,w^\sharp)=\iota_{[v^\sharp,w^\sharp]^{vert}}a \qquad \text{mod}\ B \end{equation} [1.3 \cite{guillemin}], where $v^\sharp$ is the horizontal lift of a tangent vector from $B$ and $[\bullet^\sharp,\bullet^\sharp]^{vert}$ is the projection of the Lie bracket onto $TF$.
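As a quick illustration of the curvature identity (a sketch of the trivial case, not needed in what follows): for a product bundle $E=B\times F$ with the trivial connection, horizontal lifts of vector fields on $B$ have vanishing vertical bracket, $[v^\sharp,w^\sharp]^{vert}=0$, while $a=\mathrm{pr}_F^*\omega_F$ is closed, so both sides of \ref{curvatureidentity} vanish and the minimal coupling construction returns $a$ itself.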
We take the value of the \emph{minimal coupling form} on $v,w\in \Lambda^2 H$ to be the unique zero-average Hamiltonian associated to \ref{curvatureidentity}, and to otherwise agree with $\Im h$. Such a form is closed [Theorem 1.4.1 \cite{guillemin}]. The total space of the symplectic fibration becomes a symplectic manifold if we take $$\omega_{\epsilon,a}:=\epsilon a +\pi^*\omega_B$$ for $\epsilon \ll 1$. Usually one can start with a symplectic $E$ and make it into a symplectic fibration without changing the symplectic form substantially. A Lagrangian in this context is the following:

\begin{definition} A \emph{fibered Lagrangian} $L$ is a fiber sub-bundle of Lagrangians $$L_F\rightarrow L\xrightarrow{\pi} L_B,$$ with $L_F$ monotone and $L_B$ rational in the sense that the symplectic areas of relative disk classes form a discrete subset of $\bb{R}$. \end{definition}

One has the following lemma for finding such Lagrangians:

\begin{lemma}[Fibered Lagrangian Construction Lemma \cite{floerfibrations}]\label{fiberedlagrangianlemma} Let $L_F\rightarrow L\rightarrow L_B$ be a connected sub-bundle. Then $L$ is Lagrangian with respect to $\epsilon a+\pi^*\omega_B$ if and only if \begin{enumerate} \item\label{paralleltransinv} $L$ is invariant under parallel transport along $L_B$, and \item\label{normalizing} there is a point $p\in L_B$ such that $a_p\vert_{TL_F\oplus H_L}=0$, where $H_L:=H_a\cap TL$ is the connection restricted to $L$. \end{enumerate} \end{lemma}

\section{Holomorphic sections of the pull-back}\label{holomorphicsectionssection}

We consider a $J_B$-holomorphic disk $$u:(D,\partial D)\rightarrow (B,L_B)$$ and review a strategy for trivializing $u^*E$ holomorphically. Then, we apply similar principles to achieve transversality for holomorphic sections. To simplify notation, let $a:=u^*a_{h}$ be the pullback minimal coupling form.
We get a symplectic connection on $u^*E$ via $$H:=TF^{\perp_{a}}.$$ One can vary the connection as in [Chapter 8 \cite{ms2}] or \cite{salamonakveld}. Let $$\sigma\in \Lambda^1(TD,C^\infty_0(u^*E))$$ be a one-form on $D$ with values in zero fiber-average smooth functions, and denote by $X_\sigma$ the associated fiberwise Hamiltonian vector field valued one-form. It follows that $u\vert_{\partial D}^*L$ defines a Hamiltonian isotopy path of Lagrangians in the isotopy class of $L_F$. To see that this is actually a loop (c.f.\ \cite{salamonakveld}), we use the fact that $L$ is invariant under parallel transport [Lemma \ref{fiberedlagrangianlemma}]. Let $$\mc{J}^{vert}(F):=C^\infty(D,\mc{J}(F,\omega_F)).$$ Following \cite{gromov,ms2}, there is a canonical almost complex structure on the bundle $u^*E$:

\begin{lemma}[Lemma 8.2.8 \cite{ms2}]\label{uniqueacs} For every $J_F\in \mc{J}^{vert}(F)$ and a connection $H$ as above, there is a unique almost complex structure $J_H$ on $u^*E$ such that \begin{enumerate} \item $J_H\vert_{TF}=J_F$, \item $\pi$ is holomorphic with respect to $(j,J_H)$, where $j$ is the standard integrable complex structure on $D$, \item $J_H$ preserves $H$. \end{enumerate} \end{lemma}

In trivializing coordinates with the associated trivial connection, such a structure takes the form \begin{equation} J_\sigma:= \begin{bmatrix} J_F & J_F\circ X_\sigma-X_\sigma\circ j \\ 0 & j \end{bmatrix}, \end{equation} and the horizontal distribution is of the form $$(-X_{\sigma(v)},v).$$

\subsection{Moduli of holomorphic sections}\label{verticalmaslovsection}

Let $N$ be the minimal Maslov number of $L_F$. Let $L\subset u^*E$ be shorthand for the sub-bundle $u\vert_{\partial D}^*L$.
For any disk or sphere class $A\in H_2(u^*E,L,\bb{Z})$ we can assign a Maslov index $\mu_F(A)$, which is the Maslov index of the path of Lagrangian subspaces $T_{u(e^{i\theta})}L$ after choosing a trivialization of $\hat{u}^*TF$ for a map $\hat{u}:D\rightarrow (F,L)$ in the class $A$.

\begin{lemma}[Lemma 6.1 \cite{salamonakveld}]\label{vmilemma} Let $\hat{u}_i:(D,\partial D)\rightarrow (u^* E, L)$ for $i=1,2$. Then $$\mu_F(\hat{u}_1)\equiv \mu_F(\hat{u}_2) \; \t{mod}\, N.$$ \end{lemma}

\begin{definition} The \emph{Maslov index of the pair} $(u^*E,L)$ is defined as $\mu_F(u^*E)$ mod $N$ from Lemma \ref{vmilemma}. \end{definition}

We say that the \emph{vertical Maslov index} of a lift $\hat{u}$ of $u$, equivalently of a section of $u^*E$, is $\mu_F(\hat{u})$. $J$-holomorphic lifts can be shown to have the following transversality property:

\begin{theorem}[Theorem 5.1 \cite{salamonakveld}]\label{liftmodulithm} Let $A\in H^{section}_2(u^*E,L,\bb{Z})$ be the class of a section and let $J_F\in \mc{J}^{vert}(F)$ be a section of fiberwise almost complex structures. If $\mc{M}(A,J_F,H)$ is the moduli space of $J_H$-holomorphic sections in the class $A$, then there is a comeager subset of Hamiltonian connections $\mc{H}^{\t{reg}}$ such that for $H\in\mc{H}^{\t{reg}}$ the moduli space is a smooth manifold with \begin{equation}\label{liftdimensionformula} \dim_{\bb{R}} \mc{M}(A,J_F,H)=\dim_\bb{R} L_F+\mu_F(A). \end{equation} \end{theorem}

\begin{remark}\label{movingboundaryproblem} A lift of a disk $u$ is equivalent to a solution of the \emph{inhomogeneous moving boundary problem}: \begin{gather*} \hat{u}:D\rightarrow F\\ \bar{\partial}_{J,j}\hat{u}=X^{0,1}_\sigma\\ \hat{u}(e^{i\theta})\in \Phi^\theta_\sigma (L_F) \end{gather*} where $$X_\sigma^{0,1}:=J_I\circ X_{\sigma\circ j}+X_{\sigma}$$ is the anti-holomorphic part of the 1-form $X_\sigma$.
\end{remark}

This remark motivates the following definition. Define the \emph{covariant derivative} of a section $\hat{u}$ with respect to a Hamiltonian connection $H_\sigma$ to be \begin{equation} \nabla^\sigma \hat{u}:= \partial_{J_I,j}\hat{u}-X^{1,0}_\sigma, \end{equation} and define the covariant energy with respect to a connection form $a_\sigma$ as follows:

\begin{definition}[Covariant energy] $$e_\sigma(\hat{u}):=\int_D \Vert \nabla^\sigma \hat{u}\Vert_{\omega_F,J_I}$$ \end{definition}

Such a quantity satisfies some topological properties. First, the \emph{curvature} of a connection $H$ is the two-form \begin{equation}\label{curvaturedef} R_H\, ds\wedge dt := a_H(\partial^\sharp_s,\partial^\sharp_t)\, ds\wedge dt, \end{equation} where $\partial_s^\sharp$ is the horizontal lift of $\partial_s$ to the connection $H$ (resp.\ $\partial_t$). We remark that by the curvature identity \ref{curvatureidentity}, the function in \ref{curvaturedef} is the zero-average Hamiltonian corresponding to the vector field $[\partial_s^\sharp,\partial_t^\sharp]^{\t{vert}}$.

\begin{lemma}[Lemma 5.2 \cite{salamonakveld}]\label{covariantenergyprop} $$e_\sigma(\hat{u})=\int_D \hat{u}^*a_H +\int_D R_H\circ \hat{u}\, ds\wedge dt$$ \end{lemma}

\subsection{Existence of a flat connection}\label{flatconnectionsection}

We begin to specialize to the case $E=\bb{P}(\mc{V})$ for a vector bundle $V\rightarrow \mc{V}\rightarrow B$. Let $G_\bb{C}$ denote the complexified structure group of $E$, and $\t{Hol}(D,G_\bb{C},G)$ the group of holomorphic sections of the trivial principal $G_\bb{C}$-bundle that are $G$-valued on the boundary. We use a result of Donaldson \cite{sdh} to show that there is a section which trivializes the connection on $u^* \mc{V}$.
\begin{theorem}[Theorem 1 \cite{sdh}]\label{heatflowthm} Let $\mc{V}$ be a holomorphic vector bundle over a K\"ahler manifold $Z$. For any metric $f$ on the restriction of $\mc{V}$ to $\partial Z$, there is a unique Hermitian metric $h$ that satisfies the Yang-Mills equation: \begin{enumerate} \item $h\vert_{\partial Z}=f$, \item $i\Lambda F_h=0$, \label{ymequation} \end{enumerate} where $i\Lambda F_h$ is the $(1,1)$ part of the curvature of $h$ in the K\"ahler decomposition. \end{theorem}

When $Z$ is a disk, Theorem \ref{heatflowthm} says that $$\mc{V}\cong (D\times V,j\times J_V)$$ as holomorphic vector bundles, since \ref{ymequation} simply states that $h$ induces a flat connection. Let $h_\infty$ denote the flat metric on $u^*\mc{V}$ from Theorem \ref{heatflowthm} that agrees with $h_0$ over $\partial D$. Then there exists a smooth section $$\mc{G}:(D,\partial D)\rightarrow (G_\bb{C},G)$$ such that $$h_\infty(\cdot,\cdot)=h_0(d\mc{G}\cdot,d\mc{G}\cdot)=:\mc{G}^*h_0.$$ This discussion leads to the following corollary of Theorem \ref{heatflowthm}:

\begin{corollary}\label{heatflowgaugever} Let $u^*\mc{V}$ be a holomorphic vector bundle equipped with a fiberwise $G_\bb{C}$-invariant complex structure $J_V$ and K\"ahler metric $h_0$. Then there is a complex gauge transformation $\mc{G}\in \mc{C}^\infty (D,G_\bb{C},G)$, $G$-valued on $\partial D$, such that $h_\infty:=\mc{G}^*h_0$ is flat. \end{corollary}

Next, we show that we can obtain holomorphic sections of the projectivization by looking at the gauge-transformed pullback. For $\mc{G}\in \mc{C}^\infty(D,G_\bb{C},G)$ we get an isomorphism of bundles $$\mc{G}:u^*\bb{P}(\mc{V})\rightarrow u^*\bb{P}(\mc{V}),$$ which has a well-defined tangent map $d\mc{G}$.
\begin{definition} For the pullback symplectic connection $H$ discussed at the beginning of Section \ref{holomorphicsectionssection}, define the connection $$\mc{G}_*H:=d\mc{G}(H)$$ on $u^*\bb{P}(\mc{V})$ with respect to the fiberwise symplectic form $\mc{G}^*\Im (h)$. \end{definition}

On $\mc{G}u^*\bb{P}(\mc{V})$ we define the almost complex structure \begin{equation} \mc{G}^*J_H:=d\mc{G}\circ J_H\circ d\mc{G}^{-1}. \end{equation}

\begin{lemma}\label{hologaugelemma} $\mc{G}^*J_H$ is the unique almost complex structure compatible with $\mc{G}_*H$ from Lemma \ref{uniqueacs}. Hence, $\mc{G}$ preserves holomorphic sections. \end{lemma}

\begin{proof} $\mc{G}_*H$ is of the form $$(-\mc{G}_*X_y, y).$$ Thus, the unique almost complex structure on $\mc{G}(v^*E)$ from Lemma \ref{uniqueacs} is \begin{equation}\label{gaugeacs} \begin{bmatrix} J_I & J_I \mc{G}_*X_\sigma-\mc{G}_*X_{\sigma\circ j}\\ 0 & j \end{bmatrix} \end{equation} in the trivial connection, which is equal to $\mc{G}^*J_H$ by the $G_\bb{C}$-equivariance of $J_I$ and the fact that $\mc{G}$ acts as the identity on the $TD$ component of the trivial splitting. To check the second statement, let $v$ be a $J_H$-holomorphic section, let $\pi_F$ be the projection onto the fiber in the above trivialization, and let $dv^F:=\pi_{TF}\circ dv$ be the projection onto the $TF$ component in the induced trivial connection. Since $v$ is $J_H$-holomorphic, we have \begin{equation}\label{hamiltonianperturbedeqn} \bar{\partial}_{J_I,j}\pi_F v=dv^F+J_I\circ dv^F\circ j=X_\sigma^{0,1}, \end{equation} wherefore $\mc{G}v$ satisfies \begin{align*}\bar{\partial}_{J_I,j}\pi_F\mc{G}v &=\mc{G}_*dv^F+J_I\circ\mc{G}_*dv^F\circ j \\ &=\mc{G}_*\bar{\partial}_{J_I,j}\pi_Fv=\mc{G}_*X_\sigma^{0,1} \\ &=(\mc{G}_*X_\sigma)^{0,1}, \end{align*} again by the $G_\bb{C}$-invariance of $J_I$.
By Remark \ref{movingboundaryproblem} a solution to \ref{hamiltonianperturbedeqn} is equivalent to a $J_H$-holomorphic section, so we are done. \end{proof}

\subsection{Holomorphic lifts to $(E,L)$}\label{holomorphicliftsubsection}

Continuing from the last subsection, we show that there is at least one holomorphic lift of a base disk through each fiber point. However, we cannot always achieve transversality for a given lift by gauge transformation: in Example \ref{realprojectivebundles} the structure group is $S^1$, and in light of Theorem \ref{transversalitythm} such an action does not provide a large enough space of connections to achieve transversality, particularly for the constant sections that are fixed by the gauge group. We get around this by allowing transformations of $u^*E$ that take values in $PU_2$ over the interior of the disk. Such a group acts via holomorphic symplectomorphisms, and the differential $\mf{g}\rightarrow T_pF$ is surjective at every point, allowing sufficiently many perturbations to achieve transversality. In this section, we record the existence of a lift and prove a transversality result. Since $E=\bb{P}(\mc{V})$ is the projectivization of a vector bundle and the symplectic connection is induced by a choice of Hermitian metric on $\mc{V}$, we have a canonical action of $PU_{m+1}$ on $E$ that preserves the fiberwise symplectic form and the standard complex structure. Fix a (left invariant) inner product $\langle\cdot,\cdot\rangle$ on $PU_{m+1}$ with norm $\Vert\cdot\Vert$. The left action of such a group on $v^*E$ preserves the fiberwise symplectic form and integrable complex structure, and acts on the connection by the discussion in Subsection \ref{flatconnectionsection}.
We define the space of perturbations $$\mf{G}^l:= W^{l,2}_{\partial D, D}(G,PU_{m+1}):=\bigg\lbrace \mc{G}:(\partial D,D)\rightarrow (G,PU_{m+1}):\int_D \Vert D^i \mc{G} \Vert^2\, dx\, dy <\infty \bigg\rbrace$$ as the Banach manifold of $l$-times weakly differentiable, square integrable, $PU_{m+1}$-valued gauge transformations of $u^*E$ that take values in $G$ over $\partial D$. Take $l$ large enough so that such a space includes into continuous functions by Sobolev embedding. At a given point the tangent space is written as $$T_\mc{G}\mf{G}^l= W^{l,2}_{\partial D,D}(\mf{g},\mf{pu}_{m+1}),$$ with local charts given by the exponential map and transitions by left shift. Note that such a space can also be written as the space of bundle isomorphisms covering the identity that are differentiable to some degree.

\subsubsection{Stabilizing Divisors}

A crucial component of the perturbation scheme is the existence of a stabilizing divisor for a certain class of rational Lagrangians $L_B$.

\begin{definition} We say a Lagrangian $L_B\subset B$ is \emph{strongly rational} if there exists a line bundle-with-connection on $B$ with a section $\chi$ whose restriction $\chi\vert_{L_B}$ is covariant constant. \end{definition}

\begin{definition} We say that an almost complex submanifold $D_B\subset B\setminus L_B$ with $[D_B]^{PD}=N[\omega_B]$ is a \emph{stabilizing divisor} for $L_B$ if for all $[v]\in H^\circ_2(B,L_B,\bb{Z})$ with positive symplectic area we have $$v^{-1}( D_B) \neq \emptyset.$$ \end{definition}

For a stabilizing divisor $D_B$, we call an almost complex structure $J_B$ \emph{adapted to $D_B$} if $D_B$ is an almost complex submanifold with respect to $J_B$. The following is made possible by a number of results, with the stabilizing property coming from the work in \cite{CW1}.
\begin{theorem}[\cite{CW1} Section 3.1, \cite{agm}, \cite{SD}] For $B$, $L_B$ rational, there exists a stabilizing divisor with an adapted almost complex structure. \end{theorem}

Let $D_E$ be a lift of the divisor $D_B$ to the total space. Since $D_B$ is stabilizing for $L_B$, every representative of a non-trivial $[u]\in H_2(B,L_B,\bb{Z})$ intersects $D_B$ at least once. It follows that every section or multisection of $u^*E$ intersects $D_E$ in at least one point. The expected dimension of a configuration in $u^*E$ through a point $p\in L_F$ with prescribed intersection with $D_E$ in the relative class $[A]$ is \begin{equation}\label{liftdimensionformula2} \text{Ind}(A)= \sum_{i=1}^j I(v_i)-2\langle A, D_E\rangle_{u^*E}+2\vert e^\bullet(\gamma_u)\vert, \end{equation} where $\vert e^\bullet(\gamma_u)\vert$ is the number of interior marked points on $u$ prescribed to the divisor $D_B$. By positivity of intersections of holomorphic curves with the divisor, $\langle A, D_E\rangle_{u^*E}$ is equal to the sum of the degrees of vanishing of the normal derivative in coordinates adapted to the divisor. On the other hand, we will usually assume that one can perturb the almost complex structure and/or the divisor so that each disk intersects the divisor transversely, making the last two terms on the right hand side of \ref{liftdimensionformula2} irrelevant.

\begin{theorem}\label{liftthm} Let $A\in H_2( u^*E,u^*L, \bb{Z})$ be a relative section class and $p\in L$ a generic point.
There is a comeager set of smooth gauge transformations $\mf{G}^{\infty,reg}\subset \mf{G}^{\infty}$ such that the moduli space $$\mc{M}_{j,\mc{G}^*J_H}(u^*E,u^*L,A,p)$$ of $\mc{G}^*J_H$-holomorphic sections in the class $A$, for $\mc{G}\in \mf{G}^{\infty,reg}$, is a smooth manifold of expected dimension $$\dim \mc{M}_{j,\mc{G}^*J_H}(u^*E,u^*L,A,p)=\text{Ind}(A).$$ Moreover, there exists a trivialization of $u^*E$ that preserves the fiberwise complex structure and the boundary Lagrangian. In particular, let $p\in L_F\cap \pi^{-1}(1)$. Then there exists a (regular) $\mc{G}^*J_H$-holomorphic section $\hat{u}$ of $u^*(E,L)$ with $\hat{u}(1)=p$ and $\mc{G}\in \mf{G}^{\infty,reg}$. \end{theorem}

The final statement was initially made in \cite{fiberedpotential}. We include a proof for completeness and clarity.

\begin{remark}\label{computeremark} The proof of Lemma \ref{hologaugelemma} is exactly the same in case $\mc{G}$ takes values in $PU_{m+1}$ over $D$. Hence by Theorem \ref{liftthm} it suffices to compute holomorphic sections through $p$ in the given connection and then perturb with a generic gauge transformation that is close enough to the identity so as not to change the relative class. \end{remark}

\begin{proof} First we show the existence of such a section, and then prove regularity for more general maps. Let $\mc{G}_\bb{C}$ denote the flattening gauge transformation from Corollary \ref{heatflowgaugever}. The action by $G$ is Hamiltonian, so it follows that $\mc{G}_\bb{C}(L)$ is fiberwise Lagrangian. The parallel transport along $\mc{G}_\bb{C}(H)\vert_{\partial D}$ is given by $$\mc{G}_\bb{C}\circ \nabla^H\circ \mc{G}_\bb{C}^{-1},$$ where $\nabla^H$ is parallel transport along $H\vert_{\partial D}$. Since $L$ is invariant under parallel transport, it follows that $\mc{G}_\bb{C}(L)$ must also be. Hence the Fibered Lagrangian Construction Lemma \ref{fiberedlagrangianlemma} tells us that $\mc{G}_\bb{C}(L)$ is Lagrangian.
Since $\mc{G}_{\bb{C}*} H$ is flat, parallel transport starting at $u(1)$ defines a holomorphic trivialization $$\mc{T}:[\mc{G}_\bb{C}u^*E,\mc{G}_{\bb{C}*}L,J_H]\cong [D \times F,\partial D\times L_F, j\times J_I].$$ Then by Lemma \ref{hologaugelemma} the trivialization $$\mc{T}\circ \mc{G}_\bb{C}:(u^*E,J_H)\rightarrow (D\times \bb{P}^m,j\times J_I)$$ is a biholomorphism. Let $p\in L_F$, and define $$v_p^0(z):= \mc{G}_{\bb{C}}^{-1}\circ \mc{T}^{-1}(z,p).$$ The map $\mc{G}_\bb{C}^{-1}\circ\mc{T}^{-1}$ is given precisely by left multiplication on the total pullback via $$\mc{G}_{\mc{T}}: (D,\partial D)\rightarrow (G_\bb{C},G)$$ as in Corollary \ref{heatflowgaugever}. Moreover, such a disk is holomorphic by Lemma \ref{hologaugelemma}. Hence, we have shown the existence of a section $v^0_p$. Next we check the surjectivity of the linearized operator in the case of no prescribed tangencies. The proof is similar to that of Theorem \ref{transversalitythm} (see \cite{floerfibrations}), except that we restrict ourselves to $PU(m+1)$ connections. Choose a metric $(g,\nabla_g)$ with Levi-Civita connection on $u^*E$ such that $L$ is totally geodesic, pick $\frac{k}{m}>\frac{1}{2}$ so that the Sobolev embedding into continuous functions holds, and consider the Banach manifold $$\Omega^{k,2}_{A, sec}(D,\partial D, u^*E,L)$$ of sections in a relative class $A$ that are $k$ times weakly differentiable with square integrable derivatives with respect to $\nabla_g$. Given a map $v$ in such a space, we get a local chart $$W_v\subset W^{k,2}(D,\partial D,v^*TF, v^*TL_F)$$ that we can map into $\Omega^{k,2}$ by geodesic exponentiation with respect to $\nabla_g$. The transition map between two such charts is given by parallel transport of vector fields between the chart centers.
Set $$\mc{B}_{A}^k:=\Omega^{k,2}_{A,sec}\times \mf{G}^l.$$ Over such a manifold we have a Banach vector bundle $\mc{E}^{k-1}$ whose fiber above a pair $(v,\mc{G})$ is $$(\mc{E}^{k-1})_{v,\mc{G}}:= \Lambda_{\mc{G}^*J,j}^{0,1}(D,v^*TF)_{k-1,2}$$ and whose transition functions are given by exponentiation in $PU_{m+1}$ and by parallel transport with respect to a Hermitian connection $\nabla^{\mc{G}}$ on $u^*TE$ that preserves the almost complex structure $\mc{G}^*J$ (see [Section 3.2, \cite{ms2}]). We have a section \begin{align*} \partial(v,\mc{G}):=&\bar{\partial}_{\mc{G}^*J,j}v=dv+\mc{G}^*J\circ dv\circ j \end{align*} with linearization at a holomorphic $v$ given by \begin{equation}D_{v,\mc{G}^*J}\bar{\partial}:=D_{v,\mc{G}^*J}+J_\mf{h}\circ dv\circ j. \end{equation} Here $D_{v,\mc{G}^*J}$ is the Cauchy-Riemann (hence Fredholm) linearization of $\bar{\partial}_{\mc{G}^*J,j}$ with respect to the variable $v$, $\mf{h}$ is a section of projective skew-Hermitian matrices in the Lie algebra $T_{I}\mf{G}^l$ with values in $TG$ over $\partial D$, and $J_\mf{h}$ is the derivative of \ref{gaugeacs} with respect to $\mc{G}$. We define the \emph{universal moduli space} as $$\mc{M}_{sec}^{k,2}(u^*E,L,A):=\bar{\partial}^{-1}(\mc{B})$$ where $\mc{B}$ is identified with the zero section. The first goal is to show that $D_{v,\mc{G}^*J}\bar{\partial}$ is surjective when $v$ is $\mc{G}^*J$-holomorphic, which proves that the universal space is a smooth manifold. Lastly, we apply a Sard-Smale argument to the projection $$\pi_{\mf{G}}:\mc{M}_{sec}^{k,2}(u^*E,L,A)\rightarrow \mf{G}^l,$$ which has a comeager set of regular values. Suppose $D_{v,\mc{G}^*J}\bar{\partial}$ is not surjective. Since it is a Fredholm operator, its image is closed.
By the Hahn-Banach theorem there is a non-zero section $$\eta\in \Lambda^{0,1}_{\mc{G}^*J,j}(D,v^*TF)$$ such that \begin{align}&\label{derivequation}\int_D\langle D_{v,\mc{G}^*J}\xi,\eta\rangle=0 \\& \label{perturbequation}\int_D\langle J_\mf{h}\circ dv\circ j,\eta\rangle=0 \end{align} for all $\xi\in W^{k,2}(D,\partial D,v^*TF,v^*TL_F)$ and all $\mf{h}\in T_{I}\mf{G}^l$. From \ref{derivequation}, $\eta$ is in the kernel of the adjoint real Cauchy-Riemann operator: $$D_{v,\mc{G}^*J}^*\eta=0.$$ Choose a point $p\in U$ where $\eta\neq 0$, which is possible since, by unique continuation, $\eta$ cannot vanish on an open set. The map $\mf{pu}_{m+1}\rightarrow TF$ is surjective, so choose a skew-Hermitian matrix $g$ so that $$\langle J_g\circ du(p)\circ j,\eta(p)\rangle >0$$ and extend $g$ to a section of skew-Hermitian matrices $\mf{h}$ by a bump function so that $$\langle J_\mf{h}\circ du\circ j,\eta\rangle \geq 0.$$ Thus, the left hand side of \ref{perturbequation} is strictly positive for the supposed $\eta$, which is a contradiction. The case of surjectivity in the presence of prescribed tangencies to $\pi^{-1}(D_B)$ is covered comprehensively in [Lemma 6.6, \cite{CM}], and we refer the reader there. By the implicit function theorem, $\mc{M}_{sec}^{k,2}(u^*E,L,A)$ is a class $C^q$ manifold for $q<l-k$. By the Sard-Smale theorem, and for $k,l\gg 1$, the set of regular values of the projection $\pi_\mf{G}$ is comeager. Since the kernel and cokernel of $d\pi_\mf{G}$ are isomorphic to those of $D(v,\mc{G})$, we have that $$\pi_\mf{G}^{-1}(\mc{G})=\mc{M}_{j,\mc{G}^*J_H}(u^*E,u^*L,A,p)$$ is a $C^q$ manifold of dimension $\text{Ind} (A)$. The existence of a comeager set of smooth perturbations follows from an argument due to Taubes (see the proof of Theorem 3.1.5 in \cite{ms2}). The fact that all elements of the moduli space are smooth solutions follows from an elliptic bootstrapping argument.
Thus, each moduli space $\mc{M}_{j,\mc{G}^*J_H}(u^*E,u^*L,A,p)$ is smooth of expected dimension. \end{proof} \section{Transversality/Compactness}\label{transection} Transversality for full configurations of expected dimension $\leq 1$ follows directly from the author's work in \cite{fiberedpotential,floerfibrations}. We briefly recall the main notions and theorems in this section. Essentially, holomorphic configurations \emph{(treed disks)} are based on trees, with vertices corresponding to holomorphic disks/spheres and edges to Morse flow lines, nodes, or marked points mapping to the divisor. The notion of a treed disk and the notation are based on work in \cite{CW2} and were initially inspired by \cite{birancorneapearl}. \begin{figure}[h] \includegraphics[scale=1]{flssequencetree.pdf} \caption{\cite{floerfibrations} The geometric realization of a treed disk. Shading corresponds to domains mapped to constants under $\pi$.} \end{figure} Let $\Gamma$ be a metric tree with the following additional labeling of edges and vertices: \begin{center} \begin{tabular}{||c | c ||} \hline $\text{Vert}_s(\Gamma)$ & spherical vertices\\ \hline $\text{Vert}_d(\Gamma)$ & disk vertices\\ \hline $\text{Edge}_\rightarrow^\bullet (\Gamma)$ & interior markings ($\sim$ interior special points)\\ \hline $\text{Edge}_-^\bullet(\Gamma)$ & interior nodes ($\sim$ interior special points)\\ \hline $\text{Edge}_\rightarrow^\circ (\Gamma)$ & boundary markings ($\sim$ boundary special points)\\ \hline $\text{Edge}_-^\circ(\Gamma)$ & boundary nodes ($\sim$ boundary special points)\\ \hline $\ell:\text{Edge}_-^\circ (\Gamma)\rightarrow [0,\infty]$& boundary node length\\ \hline $m:\text{Edge}_\rightarrow^\bullet(\Gamma)\rightarrow \bb{Z}_{\geq 0}$ & divisor intersection multiplicity\\ \hline $[\cdot]_d:\text{Vert}_d(\Gamma)\rightarrow H^d_2(E,L)$ & disk classes\\ \hline $[\cdot]_s:\text{Vert}_s(\Gamma)\rightarrow H^s_2(E)$ & sphere
classes\\ \hline \end{tabular} \end{center} We require each disk/sphere domain with $\pi_*[d]\neq 0$ to be stable in the sense that the number of boundary special points plus twice the number of interior special points is $\geq 3$ (\emph{$\pi$-stability}). On spheres the perturbation in Theorem \ref{liftthm} becomes more complex, so we resort to using a more general perturbation of the connection as in [Chapter 8, \cite{ms2}]. Following [Section 5.3, \cite{floerfibrations}], let $$\text{Ham} (F,B):=\Lambda^1(B,C^\infty_0(F,\bb{R}))$$ denote the space of one-forms on $B$ with values in fiberwise zero-average smooth functions. For $\sigma\in \text{Ham}(F,B)$, one obtains a different connection form on the total space by considering $$a_\sigma:=a-d\sigma,$$ and we denote the resulting connection by $H_\sigma$. The holonomy of $H_\sigma$ differs infinitesimally from that of $H$ by the fiberwise Hamiltonian vector field of $\sigma$, which we denote by $X_\sigma$. Let \begin{equation*}\mc{J}^l(E,\omega_{\epsilon,a_\sigma}):= \bigg\lbrace J \text{ from Lemma \ref{uniqueacs}},\, J\vert_F=J_I,\, J \text{ is tamed by }\epsilon a_\sigma+\pi^*\omega_B\bigg\rbrace \end{equation*} be the subset of tamed structures adapted to the connection. This can be realized as an open subset of $\mc{J}^l(B,\omega_B)$ and thus is a Banach manifold. Let $\mc{U}_\Gamma$ denote the \emph{universal treed disk} of type $\Gamma$; that is, we have a fiber bundle $$\mc{U}_\Gamma\rightarrow \mf{M}_\Gamma$$ whose fibers are isomorphism classes of treed disks, where $\mf{M}_\Gamma$ is the moduli space of treed disks of type $\Gamma$.
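As a quick sanity check of the $\pi$-stability condition above (our own illustration, not from the source), consider a disk vertex carrying one boundary node and one interior marking:
$$\underbrace{1}_{\text{boundary special points}} + 2\cdot \underbrace{1}_{\text{interior special points}} = 3 \geq 3,$$
so such a component is stable, whereas a disk carrying a single boundary node and no interior special points ($1+2\cdot 0 <3$) is not.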
More generally than in Subsection \ref{holomorphicliftsubsection}, let $$\mf{G}^l(\mc{U}_\Gamma):= W^{l,2}_{\mc{U}_\Gamma}(PU_{m+1}):=\bigg\lbrace \mc{G}:\mc{U}_\Gamma\rightarrow PU_{m+1}:\sum_{i\leq l}\int_D \vert D^i \mc{G} \vert^2 \,dxdy <\infty \bigg\rbrace$$ be the Banach manifold of $l$-times weakly differentiable, square-integrable maps into $PU_{m+1}$. Choose neighborhoods $\mc{S}_\Gamma^o$ resp.\ $\mc{T}_\Gamma^o$ of each surface part $\mc{S}_\Gamma$ resp.\ edge part $\mc{T}_\Gamma$, not containing the special points and boundary on each surface component, resp.\ not containing $\infty$ on each edge. \begin{definition}\cite{floerfibrations}\label{perturbationdatadef} Fix an almost complex structure $J_{D_B}$ adapted to $D_B$. Let $(\Gamma,\underline{x})$ be a $\pi$-stable type of treed disk with a Morse labeling. A class $C^l$ \emph{fibered perturbation datum} for the type $\Gamma$ is an element $$\mc{G}_\Gamma\in \mf{G}^l(\mc{U}_\Gamma) $$ together with piecewise $C^l$ maps $$(J_\Gamma,\sigma):\mathcal{U}_\Gamma\rightarrow \mathcal{J}^l(E,\omega_{H,K})\times \text{Ham} (F,B), $$ $$X:\mc{T}_\Gamma\rightarrow \t{Vect}^l(TL_F)\oplus \t{Vect}^l(TL_B),$$ denoted $P_\Gamma$, that satisfy the following properties: \begin{enumerate} \item $\sigma\equiv 0$ outside sphere components and on the neighborhoods $\mc{S}_\Gamma - \mc{S}_\Gamma^o$, \item $J_B\equiv J_{D_B}$ on the neighborhoods $\mc{S}_\Gamma - \mc{S}_\Gamma^o$, \item $\mc{G}_\Gamma$ takes values in $G$ on edges, the boundary of each disk component, $\mc{S}_\Gamma - \mc{S}_\Gamma^o$, and vertices with $\pi_*[v]=0$, and vanishes on spheres, neighborhoods of interior special points, and the neighborhoods $\mc{T}_\Gamma-\mc{T}_\Gamma^o$, \item $X$ is identified with a map $X:\mc{T}_\Gamma\rightarrow \t{Vect}^l(TL_F)\oplus \t{Vect}^l(H_L)$ via horizontal lift, and $X\equiv X_f$ in the neighborhood $\mc{T}_\Gamma-\mc{T}_\Gamma^o$ of $\infty$.
\end{enumerate} Let $\mc{P}^l_\Gamma(E,D)$ denote the Banach manifold of all class $C^l$ fibered perturbation data (for a fixed $J_{D_B}$). \end{definition} \begin{definition} A perturbation datum for a collection of combinatorial types $\gamma$ is a family $\underline{P}:=(P_\Gamma)_{\Gamma\in\gamma}$. \end{definition} In order for compactness to hold, such a collection must satisfy some coherence axioms, which we now recall. \begin{definition}{[Definition 2.11, \cite{CW2}]} \label{coherentdefinition} A fibered perturbation datum $\underline{P}=(P_\Gamma)_{\Gamma\in\gamma}$ is \emph{coherent} if it is compatible with the morphisms on the moduli space of treed disks in the sense that \begin{enumerate} \item\emph{(Cutting edges axiom)} if $\Pi:\Gamma'\rightarrow \Gamma$ cuts an edge of infinite length, then $P_{\Gamma}=\Pi_* P_{\Gamma'}$, \item\emph{(Collapsing edges/making an edge finite or non-zero axiom)} if $\Pi:\Gamma\rightarrow \Gamma'$ collapses an edge or makes an edge finite/non-zero, then $P_{\Gamma}=\Pi^*P_{\Gamma'}$, \item\emph{(Product axiom)} if $\Gamma$ is the union of types $\Gamma_1, \Gamma_2$ obtained from cutting an edge of $\Gamma$, then $P_\Gamma$ is obtained from $P_{\Gamma_1}$ and $P_{\Gamma_2}$ as follows: let $\pi_k:\mf{M}_\Gamma\cong \mf{M}_{\Gamma_1}\times\mf{M}_{\Gamma_2}\rightarrow\mf{M}_{\Gamma_k}$ denote the projection onto the $k^{th}$ factor, so that $\mc{U}_\Gamma$ is the union of $\pi_{1}^{*}\mathcal{U}_{\Gamma_1}$ and $\pi_{2}^{*}\mathcal{U}_{\Gamma_2}$. Then we require that $P_\Gamma$ is equal to the pullback of $P_{\Gamma_k}$ on $\pi_{k}^*\mathcal{U}_{\Gamma_k}$, \item \emph{(Ghost-marking independence)} if $\Pi:\Gamma'\rightarrow \Gamma$ forgets a marking on components corresponding to vertices with $[v]=0$ and stabilizes, then $P_{\Gamma'}=\Pi^* P_{\Gamma}$.
\end{enumerate} \end{definition} We refer the reader to Section 2 of \cite{CW2} for the precise definition of the moduli of treed disks and the morphisms between them. Let $u:C_\Gamma\rightarrow E$ be a Floer trajectory class based on a $\pi$-stable combinatorial type $\Gamma$. The map $u$ is called \emph{adapted} to $D$ if \begin{enumerate} \item(Stable domain) The geometric realization of $\Gamma$ after forgetting vertices with $\pi_*[v]=0$ is a stable domain; \item(Non-constant spheres)\label{spheresaxiom} Each component of $C$ that maps entirely to $D$ is constant; \item(Markings)\label{markingsaxiom} Each interior marking $z_i$ maps to $D$ and each component of $u^{-1}(D)$ contains an interior marking. \end{enumerate} Denote the set of adapted $P_\Gamma$-Floer trajectories by $$\mc{M}_\Gamma(\underline{x},P_\Gamma,D).$$ Finally, we have the expected dimension of a given moduli space: \begin{align}\label{indexformulatotalspace} \text{Ind} (\Gamma,\underline{x}):&= \dim W^+_{X}(x_0)-\sum_{i=1}^{n}\dim W^+_{X}(x_i) + \sum_{i=1}^{m} I(u_i)+n-2 -|\text{Edge}^{\circ,0}_-(\Gamma)|\\&-|\text{Edge}_-^{\circ,\infty}(\Gamma)|-2|\text{Edge}^\bullet_-(\Gamma)| -|\text{Edge}^\bullet_\rightarrow(\Gamma)|-2\sum_{e\in \text{Edge}^{\bullet}_\rightarrow} m(e) \end{align} In the following, by an uncrowded type we mean a type of treed disk whose constant components have at most one interior marking. \begin{theorem}[Transversality, Theorem 6.1 \cite{floerfibrations}] \label{transversalitythm} Let $E$ be a symplectic K\"ahler fibration and $L$ a fibered Lagrangian.
Suppose that we have a finite collection of uncrowded, $\pi$-adapted, and possibly broken types $\lbrace\Gamma\rbrace$ with $$\text{Ind}(\Gamma,\underline{x})\leq 1.$$ Then there is a comeager subset of smooth regular data for each type $$\mc{P}^{\infty,reg}_\Gamma(E,D)\subset\mc{P}^\infty_\Gamma$$ and a selection $$(P_\Gamma)\in \prod_\Gamma \mc{P}^{\infty,reg}_\Gamma$$ that forms a regular, coherent datum. Moreover, we have the following results about tubular neighborhoods and orientations: \begin{enumerate} \item\label{tubularneighborhoods}\emph{(Gluing)} If $\Pi:\Gamma\rightarrow\Gamma'$ collapses an edge or makes an edge finite/non-zero, then there is an embedding of a tubular neighborhood of $\mc{M}_\Gamma (P_\Gamma,D)$ into $\overline{\mc{M}}_{\Gamma'}(P_{\Gamma'},D)$, and \item \emph{(Orientations)} if $\Pi:\Gamma\rightarrow\Gamma'$ is as in \ref{tubularneighborhoods}, then the inclusion $\mathcal{M}_{\Gamma} (P_{\Gamma},D)\rightarrow \overline{\mathcal{M}}_{\Gamma'} (P_{\Gamma'},D)$ gives an orientation on $\mc{M}_\Gamma$ by choosing an orientation for $\mc{M}_{\Gamma'}$ and the outward normal direction on the boundary. \end{enumerate} \end{theorem} The proof is almost exactly the same as that of [Theorem 6.1, \cite{floerfibrations}] after incorporating the ideas from Theorem \ref{liftthm}. The only difference is the use of $PU_{m+1}$-connections on disk components that match with $G$-connections on the boundary, instead of more general Hamiltonian connections.
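For orientation, here is the index formula \ref{indexformulatotalspace} specialized, in our own sketch, to a type $\Gamma$ with a single disk vertex $u$ ($m=1$), one input $x_1$ ($n=1$), and no nodes or interior markings, so that all of the edge terms vanish:
$$\text{Ind}(\Gamma,\underline{x})= \dim W^+_{X}(x_0) - \dim W^+_{X}(x_1) + I(u) + 1 - 2= \dim W^+_{X}(x_0) - \dim W^+_{X}(x_1) + I(u) - 1.$$
Isolated contributions of such a disk therefore pair critical points whose stable-manifold dimensions differ by $1-I(u)$.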
Finally, the coherence axioms combined with transversality provide a compactness result: \begin{theorem}[Compactness, Theorem 7.1 \cite{floerfibrations}]\label{compactnessthm} Let $\Gamma$ be an uncrowded type with $\text{Ind}(\Gamma,\underline{x})\leq 1$ and let $\mc{P}=(P_\gamma)$ be a collection of coherent, regular, stabilized fibered perturbation data that contains data for all types $\gamma$ with $\text{Ind}(\gamma,\underline{x})\geq 0$ and from which $\Gamma$ can be obtained by (making an edge finite/non-zero) or (contracting an edge). Then the compactified moduli space $\overline{\mc{M}}_\Gamma(D, P_\Gamma)$ contains only regular configurations $\gamma$ with broken edges and unmarked disk vertices. In particular, there is no sphere bubbling or vertical disk bubbling. \end{theorem} \section{The Fukaya Algebra}\label{fukayaalgebrasection} We define the Fukaya algebra arising from counting isolated holomorphic configurations with a given number of inputs. In \cite{fiberedpotential} we show that a similar algebra satisfies the axioms of an $A_\infty$-algebra. \subsubsection{Grading} We begin with a short review of gradings.
Let $\text{Lag}(V,\omega)$ be the Lagrangian Grassmannian associated to a symplectic vector space and let $N$ be an even integer. The coverings of $\text{Lag}$ with deck transformation group $\bb{Z}_N$ are classified by classes in $H^1(\text{Lag},\bb{Z}_N)$. Specifically, there is a well-defined \emph{Maslov class} $\mu_V\in H^1(\text{Lag}(V,\omega),\bb{Z})$, so let $\text{Lag}^N(V,\omega)$ be the covering space corresponding to the image of $\mu_V$ in $H^1(\text{Lag},\bb{Z}_N)$. Let $\mc{L}(B)\rightarrow B$ be the fiber bundle with fiber $\mc{L}(B)_b=\t{Lag}(T_b B,\omega)$, the Lagrangian subspaces in $T_b B$. Following Seidel \cite{seidelgraded}, an $N$-fold \emph{Maslov covering} of $B$ is the fiber bundle $$\mc{L}(B)^N\rightarrow B$$ with fibers $\text{Lag}^N(T_b B,\omega)$. For an orientable Lagrangian $L_B$, there is a canonical section $s:L_B\rightarrow \mc{L}(B)\vert_{L_B}$. A $\bb{Z}_N$-\emph{grading} of $L_B$ is a lift of $s$ to a section $$s^N:L_B\rightarrow \mc{L}^N(B)\vert_{L_B}.$$ It is natural to ask about the existence of such a grading, for which we have the following answer: \begin{lemma}[Lemma 2.2 \cite{seidelgraded}] $B$ admits an $N$-fold Maslov cover iff $2c_1(B)\in H^2(B,\bb{Z}_N)$ is zero. \end{lemma} Let $\mu^N\in H^1(\mc{L}(B),\bb{Z}_N)$ denote the global Maslov class mod $N$. \begin{lemma}[Lemma 2.3 \cite{seidelgraded}] $L_B$ admits a $\bb{Z}_N$-grading iff $0=s^*\mu^N\in H^1(L_B,\bb{Z}_N)$. \end{lemma} Henceforth, we fix an $N$-fold grading $s^N$ on $L_B$. This gives rise to a \emph{forgetful grading} $\pi^*s^N:L\rightarrow \mc{L}^N(B)$ on $L$, where we keep track of the vertical grading with the vertical Maslov index as in Section \ref{verticalmaslovsection}. \subsection{The Floer complex}\label{floercomplexsubsection} We will use a variant of the Novikov ring in two variables.
Choose a regular coherent perturbation datum $\mc{P}:=\lbrace P_\Gamma\rbrace_\Gamma$ as in Theorem \ref{transversalitythm}. Let $H_2^\circ (B,L_B,\bb{Z})$ denote the cone of relative homology classes that have a representative as a union of $J$-holomorphic configurations for some collection of tamed $J$'s. Let $H_2^{\circ,\pi} (E,L,\bb{Z})$ denote the same type of cone, but only for tamed almost complex structures that make $\pi$ holomorphic. Note that $$\pi_* H_2^{\circ,\pi} (E,L,\bb{Z})\subset H_2^\circ (B,L_B,\bb{Z}).$$ Hence, by Theorem \ref{liftthm} we have $$\pi_*H^{\circ,\pi}_2(E,L,\bb{Z})= H^\circ_2 (B,L_B,\bb{Z}).$$ Let $q$ be a formal variable and let $$\mc{Q}(B,L_B):=\lbrace 0,q^A:A\in H_2^\circ (B,L_B,\bb{Z})\,\vert\, q^A\cdot q^B=q^{A+B},\ q^0=1\rbrace$$ denote the ring generated by these formal symbols. We have a morphism under the monoidal structure \begin{gather*}\deg (\cdot):\mc{Q}\rightarrow \bb{R}_{\geq 0}\\ \deg( q^A ):=\omega_B(A):=\int_{\mc{C}}u^*\omega_B \end{gather*} for a (holomorphic) representative $u$ of $A$, with $\deg(q^{A+B})=\deg(q^A)+\deg(q^B)$. Choose an $\varepsilon \ll 1$, let $N$ be an arbitrary real number, and define the Novikov ring in two variables as \begin{align*} \Lambda^2_{(\varepsilon)}=\bigg\{ \sum_{i,j}c_{A_i j}q^{A_i}r^{\eta_j}\ \Big\vert\ & c_{A_i j}\in \bb{Z}_2;\ q^{A_i}\in \mc{Q},\ \eta_j\in\bb{R};\\ (1-\varepsilon)&\deg(q^{A_i})+\eta_j\geq 0; \\& \#\{ c_{A_i j}\neq 0\ \vert\ \deg(q^{A_i})+ \eta_j\leq N\}<\infty \bigg\} \end{align*} where the multiplication $q^A\cdot q^B:=q^{A+B}$ is extended to products and sums in the usual way.
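To illustrate the two conditions defining $\Lambda^2_{(\varepsilon)}$ (an example of ours, not from the source), fix $A\in H_2^\circ(B,L_B,\bb{Z})$ with $d:=\deg(q^A)>0$. Then
$$\sum_{m\geq 0} q^{mA}\,r^{0}\in \Lambda^2_{(\varepsilon)},\qquad\text{since}\qquad (1-\varepsilon)\deg(q^{mA})+0=(1-\varepsilon)md\geq 0$$
and, for each $N$, only the finitely many $m$ with $md\leq N$ contribute terms below level $N$. By contrast, the series $\sum_{m\geq 0} q^{0}r^{-m}$ lies in neither condition: its exponents $(1-\varepsilon)\cdot 0-m$ are negative for $m\geq 1$.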
We choose a Morse-Smale function $f$ on $L$ constructed as $$f=\pi^*b+\sum_i \phi_i g_i$$ where $b$ is Morse-Smale on $L_B$ and the $g_i$ are Morse-Smale on the \emph{critical fibers} $$\bigsqcup_i L_{F,i}:=\pi^{-1}(\crit{b})\cap L.$$ We assume that the $g_i\vert_{L_{F,i}}$ and $b$ have a unique index $0$ and index $n$ critical point, denoted $x^M_i$ resp.\ $x^m_i$, where we define the index of a critical point as $$\text{Ind} (x)=\dim W^+_{X_f}(x),$$ the dimension of the stable manifold. One can then choose an appropriate pseudo-gradient $$X_f=X_g\oplus X_b$$ in the splitting $TL_F\oplus (TL\cap H)$ on $L$. Define the \emph{Floer chain complex} as the free module generated by the critical points of $X_f$: $$CF^*(L,\Lambda^2):=\bigoplus_{x^i\in \crit{X_f}}\Lambda^2x^i $$ with grading \begin{gather*} \vert x\vert =\dim W^+_{X_b}(\pi(x)) + \dim W^+_{X_g}(x)=\dim W^+_{X_f}(x)\\ \vert q^A\vert := \deg(q^A)\\ \vert r^\eta\vert =\eta \end{gather*} extended to products so that $$\vert x^iq^A r^\eta\vert=\vert x \vert+\eta+\vert q^A\vert.$$ Let $\rho\in\text{Hom}(\pi_1(L),\Lambda^{2\times})$ denote a choice of rank one local system. This $\text{Hom}$ space is a smooth manifold modeled on $H^1(L,\Lambda^{2\times})$ by Poincar\'e duality and exponentiation $\exp:\Lambda^{2}\rightarrow \Lambda^{2\times}$. For a $P_\Gamma$-holomorphic configuration $u:C\rightarrow (E,L)$, let $\text{Hol}_\rho(u)$ denote the evaluation of $\rho$ on $[u(\partial C)]$. For a relative homology class $A\in H_2^\circ (E,L,\bb{Z})$ let $$a(A):=\int_\mc{C}u^*a$$ where $u:\mc{C}\rightarrow (E,L)$ is a holomorphic representative.
Since $a\vert_L\equiv 0$ [Lemma \ref{fiberedlagrangianlemma}] and $a$ is closed, it follows that $a(A)$ is well-defined as a homotopy invariant. Define the $\bb{Z}_2$ $A_\infty$ maps as \begin{equation}\label{ainftydefmod2} \nu^n_{\bb{Z}_2,\rho}(x_1,\dots, x_n):=\sum_{x_0,[u]\in\mc{M}_0(E,L,\mc{P},x_0,x_1,\dots,x_n)} \text{Hol}_\rho(u)q^{\pi_*A}r^{a(A)}x_0. \end{equation} It is shown in [Theorem 6.1, \cite{fiberedpotential}] that a version of \ref{ainftydefmod2} with coefficients in the Novikov ring over $\bb{C}$ satisfies the $A_\infty$-axioms. The proof that $$\mc{A}(L)_{\bb{Z}_2}:=\bigg( CF(L,\Lambda^2_{\bb{Z}_2}),\nu_{\bb{Z}_2,\rho}^n\bigg)$$ satisfies these axioms is similar and follows from a description of the boundary of the $1$-dimensional moduli spaces from Theorem \ref{compactnessthm}. \subsection{Invariance}\label{invariancesection} It is important to see how the definition \ref{ainftydefmod2} respects homotopy of the almost complex structure and gauge transformation. This is discussed in Section 9 of \cite{floerfibrations}, and we review the discussion here. Given two coherent, regular perturbation data $P_1$ and $P_2$ in the rational case \cite{CW2} and their respective $A_\infty$-algebras $A_1$, $A_2$, one defines a perturbation morphism $P_{01}:A_1\rightarrow A_2$ based on a count of quilted Floer configurations which are $P_1$-holomorphic on one side of a ``seam'' and $P_2$-holomorphic on the other (see Section 3.1 of \cite{CW2} for the precise definition of an $A_\infty$ morphism). In order to show that $P_{10}\circ P_{01}$ is $A_\infty$ homotopic to the identity, one defines a moduli of twice-quilted disks and shows that the appropriate count gives rise to an $A_\infty$ homotopy between the two maps.
In our case, the divisor $D$ is not stabilizing for $L$, so in order to relate the above objects to invariants in the rational case, we first observe that there is a natural map \begin{gather*}\mf{f}:\Lambda^2_{\bb{Z}_2}\rightarrow \Lambda_{t,\bb{Z}_2}\\ q^\eta r^\rho\mapsto t^{\eta+\epsilon\rho} \end{gather*} where the $\epsilon$ is that in the weak coupling form. Assuming that $(E,L)$ is a rational pair, let $D^1\subset E$ be a stabilizing divisor for $L$ which intersects $\pi^{-1}(D_B)=:D^0$ $\epsilon$-transversely and is $\theta$-approximately holomorphic, by [Theorem 8.1, \cite{CM}] and [Theorem 3.6, \cite{CW1}] respectively. Then we define a perturbation morphism between our $A_\infty$-algebra and a more general one by counting quilted Floer configurations that are $\pi$-adapted to $D^0$ below the seam and adapted to $D^1$ at or above the seam, while fixing neighborhoods of the boundary and special points on the domain on the seam component, where the almost complex structure is constant and adapted to both $D^0$ and $D^1$. To show that the composition of such a perturbation morphism and a morphism in the reverse direction is $A_\infty$ homotopic to the identity, one adapts the construction of twice-quilted Floer trajectories \cite{CW2} in an analogous way. \section{A potential formula}\label{examplesection} We will see in Example \ref{realprojectivebundles} that the potential for the total Lagrangian depends on more than the potentials for the base and fiber: it requires knowledge of almost all of the holomorphic representatives in the base. This is hard to compute outside of some very special cases. We include a family of examples. Let $\bb{P}^m\rightarrow \bb{P}(\mc{V})\rightarrow B$ be a complex projective symplectic fibration of dimension $2m+2n$, and $L_B\subset B$ a rational Lagrangian such that $L_B=\t{fix}(\tau_B)\neq \emptyset$ for an anti-symplectic involution $\tau_B$.
Suppose there is an anti-symplectic involution $\tau$ on $E$ such that $$ \tau_B\circ \pi=\pi\circ \tau$$ and that $\tau$ is anti-holomorphic on the fibers when we identify $F_p$ and $F_{\tau(p)}$ holomorphically. We get a Lagrangian $$L_F\rightarrow L\rightarrow L_B$$ and we aim to compute Floer-type invariants for $L$ in some very special cases. Via the Arnold-Givental conjecture \cite{givag}, one expects the Floer cohomology to be non-zero when defined. The conjecture states the following: let $\phi^t$ with $t\in[0,1]$ be a Hamiltonian isotopy of $(M,\omega)$. Then \begin{equation}\label{agestimate}\#\phi^1(L)\cap L\geq \sum_i \dim H^i(L,\bb{Z}_2). \end{equation} This conjecture is verified in many situations, starting with $(\bb{CP}^n,\bb{RP}^n)$ \cite{givag,ohrpn} and the case of real forms in monotone Hermitian symmetric spaces \cite{ohagconjecture}. Symplectic quotients were considered in \cite{frauenfelderag}, and the relative case was considered in \cite{iriyehsakaitasaki}. \subsection{Example: Rank 1 real projective bundles}\label{realprojectivebundles} We take $\bb{CP}^1\rightarrow \bb{CP}(\mc{O}\oplus \mc{O}_k)\rightarrow \bb{CP}^n$ for $n$ odd and find an anti-symplectic involution. Let $U\subset \bb{CP}^n$. Recall the definition of the $k^{th}$ twisting sheaf $$\mc{O}_k(U):=\bigg\lbrace (V,f)\in U\times \mc{A}(U)\,\vert\, f(a\cdot \bold{z})=a^k\cdot f(\bold{z})\, \forall a\in V \bigg\rbrace $$ where $\mc{A}(U)$ is the ring of homogeneous polynomials on $U$. For such a space, there is an anti-holomorphic involution $\tau(V,f)=(\bar{V},\bar{f})$ that lifts the involution on the base, where $\bar{f}$ denotes the image of $f$ under the map \begin{gather*} \bar{\cdot}:\mc{A}(U)\rightarrow \mc{A}(\bar{U})\\ f(\cdot)\mapsto \bar{f}(\bar{\cdot}). \end{gather*} Such a space can be realized as a toric variety, and hence a symplectic manifold by Delzant's construction. The above involution is anti-symplectic with respect to the toric symplectic form, and hence preserves the toric connection $T$ on the fibration. To the toric connection we can apply 6.4.1 from \cite{ms1} to obtain a weak coupling form $$\omega_{T,K}:=\varepsilon a_T+\pi^*\omega_{KKS}$$ with respect to which $\tau$ is also anti-symplectic. The above involution extends to an involution on $\mc{O}\oplus \mc{O}_k$ and hence to $E_{n,k}:=\bb{P}(\mc{O}\oplus\mc{O}_k)$.
On $E_{n,k}$ the involution descends to one on $\bb{CP}^n$, so $$L:=\t{Fix}(\tau)$$ is an $\bb{RP}^1$ fibration over $\bb{RP}^n$. The Maslov index $n+1$ disks in $(\bb{CP}^n,\bb{RP}^n)$ are in the class $A$ such that $2A$ is the class of a line. By an argument due to Oh [Lemma 4.6, \cite{ohrpn}], the toric complex structure $J_{\bb{CP}^n}$ is regular for holomorphic disk configurations with boundary in $\bb{RP}^n$. Moreover, the discussion in Section 3.1 of \cite{agm} and the proof of Theorem 3.6 in \cite{CW1} show that we can find a smooth stabilizing divisor $D_B$ for $\bb{RP}^n\subset \bb{CP}^n$ which is a complex submanifold with respect to $J_{\bb{CP}^n}$. Theorem \ref{transversalitythm} holds for such a divisor and complex structure on the base. For a $J_{\bb{CP}^n}$-holomorphic sphere $u$ in the class $2A$, it follows that $u^*E\cong \bb{P}(\mc{O}\oplus \mc{O}_k)$ as a holomorphic-bundle-with-connection. If the pullback connection is to be compatible with the transition maps on this Hirzebruch bundle, then the holonomy around the equator must be a rotation by $k/2$ of a full turn. Thus in a smooth trivialization-with-connection, $u^*L\vert_{\partial D}$ is the set \begin{equation} L:=\left\lbrace \Big( [x_0,e^{ki\theta/2}x_1],\theta \Big): x_i\in \bb{R}\right\rbrace \subset \bb{CP}^1\times S^1 \end{equation} and the holonomy around a closed loop $\gamma$ in $\bb{P}^1$ is rotation by the angle $2\pi k\cdot \mc{A}(\gamma)$, where $\mc{A}(\gamma)$ is the area enclosed by $\gamma$ (assuming the area of the sphere is $1$). For $k$ even, there are some easy sections in the given Hirzebruch trivialization: \begin{gather} \hat{v}_{[0,1]}(z)=[0,1]\label{constantsectiontop}\\ \hat{v}_{[1,0]}(z)=[1,0]\label{constantsectionbot}\\ \hat{v}_{[x_0,x_1]}(z)=[x_0,z^{k/2}\cdot x_1] \qquad x_0\neq 0 \label{non-constsection} \end{gather} Presumably these are the ``covariant-constant'' sections from Theorem \ref{liftthm}.
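One can check directly (our own verification) that the boundary of the section \ref{non-constsection} lands in the Lagrangian just described: for $z=e^{i\theta}\in\partial D$ and $x_0,x_1\in\bb{R}$,
$$\hat{v}_{[x_0,x_1]}(e^{i\theta}) = [\,x_0,\ e^{ki\theta/2}\, x_1\,],$$
which is exactly a point of $\left\lbrace \big([x_0,e^{ki\theta/2}x_1],\theta\big)\right\rbrace$ over the boundary angle $\theta$; the same check for \ref{constantsectiontop} and \ref{constantsectionbot} is immediate.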
The case of \ref{non-constsection} shows how the gauge transformation from Corollary \ref{heatflowgaugever} need not preserve the curvature of a section in the projectivization. The vertical Maslov indices are as follows:
\begin{gather}
\mu(\hat{v}_{[0,1]})=-k\\
\mu(\hat{v}_{[1,0]})=k\\
\mu(\hat{v}_{[x_0,x_1]})=k
\end{gather}
which can be obtained by taking a global trivialization over the disk centered at one of the poles of $\bb{CP}^1$ (cf. Lemma 6.2 of \cite{salamonakveld}). Note that the maps \ref{constantsectiontop} and \ref{constantsectionbot}, and hence these computations, still make sense when $k$ is odd. Next we compute lifts of $v$ that have expected dimension $0$, using the base almost complex structure $J_{\bb{CP}^n}$ and the integrable invariant fiber structure, following Remark \ref{computeremark}. Let $v_{F_1}:(D,\partial D)\rightarrow (\bb{CP}^1,\bb{RP}^1)$ be a holomorphic map from the disk to a hemisphere in the fiber $F_{v(1)}$. In the trivialization from Theorem \ref{liftthm}, let $\tilde{v}^m_{[0,1]}$ be the lift of an $m$-fold branched cover of $v$ as in \ref{constantsectionbot}, and let
$$\hat{v}^m_{[0,1]}:(D,\partial D)\rightarrow (v^*E,v^*L)$$
denote the image of this section under the inverse trivialization followed by the gauge transformation perturbation from Theorem \ref{liftthm}. It follows that the Maslov index of such a disk is
$$\mu(\widehat{v}^{m}_{[0,1]})=m(n+1-k).$$
Denote by
$$\widehat{v^{m,l}_p}:C\rightarrow (E,L)$$
the holomorphic configuration that consists of $\hat{v}^m_{[0,1]}$ together with an $l$-fold holomorphic cover of a hemisphere
$$v_{F_p}^l:(D,\partial D)\rightarrow (\bb{CP}^1,\bb{RP}^1)$$
in the fiber containing $p$. We count those configurations of this type that are isolated according to the index formula \ref{indexformulatotalspace} and pass through a generic point $p\in L$. As already mentioned, there is a fiber hemisphere map $v_{F_p}$ sending $1$ to $p$, and we count these up to reparameterizations fixing $1$.
Any non-constant holomorphic section of $v^*E$ has vertical Maslov index $k$ if $k$ is even. Thus the total Maslov index of such a disk is
$$n+1+k,$$
so we would not count this disk on its own. As discussed in Example 3.2 of \cite{aurouxspeciallagfibrations}, it is to our benefit to count isolated configurations that contain the exceptional section \ref{constantsectionbot}. Such a section gives a holomorphic disk in the total space $(E,L)$ of Maslov index
$$n+1-k.$$
There are several cases depending on the positive integer values of $k$ and $n$. We do not consider the case $k=0$.
\begin{enumerate}
\item[Case $n+1-k=2$] Such a case has expected dimension $0$, but the evaluation is not at a generic point.
\item[Case $n+1-k=0$] We include the configuration that consists of the fiber disk together with the exceptional section $\hat{v}_{[0,1]}^1$, denoted as a parameterization class of $\hat{v}_{[0,1]}^{1,1}$. As transversality holds for the multiple covers $\hat{v}_{[0,1]}^m$, the count includes $v_{[0,1]}^{m,1}$ for $m\in \bb{Z}_{\geq 0}$.
\item[Case $n+1-k=1$] We include the fiber disk, and separately the double cover $\widehat{v^{2,0}_{[0,1]}}$.
\item[Case $n+1-k<0$] This case counts non-trivial positive solutions $(m,l)$ to the integer equation
\begin{equation}
m \left( n+1-k\right) + 2l= 2
\end{equation}
which tells us when to count the maps $\widehat{v_{[0,1]}^{m,l}}$.
\end{enumerate}
Let $\mc{I}_{\geq 0}$ denote the set of non-negative integer solutions, and let $y_1$ resp.\ $y_2$ denote the values of a representation $\rho\in \text{Hom} (\pi_1(L),\Lambda^{2 \times})$ on the generators of $\pi_1(L)$.
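As a purely combinatorial aside (our own sketch, not part of the argument above), the solution sets governing the case analysis can be enumerated mechanically; the helper \texttt{index\_solutions} and the cutoff \texttt{max\_m} are our notation, with $\nu = n+1-k$ the Maslov index of the exceptional section:

```python
# Enumerate non-negative integer solutions (m, l) of m*nu + 2*l = 2,
# where nu = n + 1 - k. For nu <= 0 there are infinitely many
# solutions, so m is capped at the (hypothetical) cutoff max_m.

def index_solutions(nu, max_m=10):
    sols = []
    for m in range(max_m + 1):
        rem = 2 - m * nu          # index left over for fiber hemispheres
        if rem >= 0 and rem % 2 == 0:
            sols.append((m, rem // 2))
    return sols
```

For instance, $\nu=1$ returns $(0,1)$ and $(2,0)$: the fiber disk alone, or the double cover of the exceptional section, matching the case $n+1-k=1$ above.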
If $\mc{D}_L (\rho)$ denotes the number mod $2$ of Maslov index $2$ treed disks through a generic point, weighted by their symplectic area and the representation $\rho$, then we have
\begin{align}\label{potentialforrank1}
\mc{D}_L &(\rho)=\left(y_1+y_1^{-1}\right)r^{a[v_{F_p}]}\\
+&\sum_{(m,l)\in \mc{I}_{\geq 0}}\left(y_1^ky_2^m-y_1^{-k}y_2^m+y_1^ky_2^{-m}-y_1^{-k}y_2^{-m}\right)q^{m\omega_B[\hat{v}_{[0,1]}]}r^{la[v_{F_p}]+ma[\hat{v}_{[0,1]}]}
\end{align}
In particular, picking the trivial representation gives $0$.
\subsection{Example: Rank $s$ real projective bundles}
In analogy we can consider fixed point sets of anti-holomorphic involutions in the following:
$$\bb{CP}^s\rightarrow \bb{P}\Big(\mc{O}\oplus \mc{O}_{k_1}\oplus\dots\oplus\mc{O}_{k_s}\Big)\rightarrow \bb{CP}^n$$
Assume that $k_i\leq k_{i+1}$. From the discussion at the beginning of Example \ref{realprojectivebundles}, it follows that there is a Lagrangian with respect to a coupling form that fibers as
$$\bb{RP}^s\rightarrow L\rightarrow \bb{RP}^n$$
We adopt the same notation as in the previous example, with $H$ the toric connection on $E$ and $D_B$ the stabilizing divisor in the base. Take $J_F$ to be the fiberwise integrable complex structure. In the connection $H$ we have some holomorphic sections
\begin{align}
\hat{v}_{[1,a_1,\dots a_s]}(z):=&[1,z^{k_1/2}a_1,\dots z^{k_s/2}a_s]\label{topsection}\\
\label{bottomsection} \hat{v}_{[0,\dots a_i,\dots a_s]}(z):=&[0,\dots 0,a_{i},\dots a_{j},z^{\frac{k_{j+1}-k_j}{2}}a_{j+1},\dots z^{\frac{k_s-k_i}{2}}a_s], \quad k_i=k_{j},\, (a)_i^j\in \bb{R}^{j-i}\setminus 0
\end{align}
where it is understood that for $k_f$ or $k_f-k_e$ odd we have $a_f=0$.
The vertical Maslov indices are
\begin{align*}
\mu(\hat{v}_{[1,a_1,\dots a_s]})=&\sum_{t=1}^s k_t\\
\mu(\hat{v}_{[0,\dots a_i,\dots a_s]})=&\sum_{t=1}^s k_t - (s+1) k_i
\end{align*}
In addition, there are some other sections incorporating fiber classes; namely, let $f^i(z):D\rightarrow \bb{H}$ be the M\"obius transformation that sends $\partial D$ to $\bb{R}$, and denote $f^i_l(z)=f^i(z^l)$. We consider the following modification of the section \ref{bottomsection}:
\begin{equation}\label{bottomsectionmods}
\hat{v}^{\underline{l}}_{[0,\dots a_i,\dots a_s]}(z):=[0,\dots 0,f^i_{l_i}a_i,\dots f^j_{l_j}a_j, z^{k_{j+1}-k_i}f^{j+1}_{l_{j+1}}a_{j+1},\dots,z^{k_s-k_i}f^{s}_{l_s}a_s]
\end{equation}
Such a section has Maslov index
$$\sum_{t=1}^s k_t-(s+1)k_i + 2\vert\underline{l}\vert$$
With all of this said, the section \ref{bottomsection} and the modifications \ref{bottomsectionmods} do not intersect a generic point in $v^*L$, so they only show up in configurations that already involve the fiber class. In summary, we only consider the above sections $\hat{v}$ for which
$$\mu(\hat{v})\leq -n-1$$
To understand which of these sections are counted, we write down parameterizations of the fiber class through a generic point $p=[a_0,\dots a_s]$ with $a_i\neq 0$. One can show that any holomorphic disk with $f(1)=p$ is of the form
\begin{equation}\label{verticaldisksforranks}
f(z)= [\phi_0 a_0,\phi_1 a_1,\dots \phi_sa_s]
\end{equation}
where each $\phi_i:(D,\partial D)\rightarrow (\bb{C},\bb{R})$ is a finite Blaschke product with $\phi_i(1)=1$ (cf. Theorem 10.1 of \cite{cho}), and not all $\phi_i=0$ at a given $z$. Such a map has Maslov index
$$(s+1)\sum_{i=1}^s\eta(\phi_i)$$
where $\eta$ is the number of Blaschke factors.
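As a consistency check (our own remark), setting $s=1$ and $k_1=k$ recovers the rank-one computations: the vertical indices above become
$$\mu(\hat{v}_{[1,a_1]})=k, \qquad \mu(\hat{v}_{[0,a_1]})=k-2k=-k,$$
in agreement with the indices $\pm k$ computed in Example \ref{realprojectivebundles}, while the Maslov index $(s+1)\sum_i\eta(\phi_i)$ of a fiber disk reduces to twice the number of Blaschke factors, i.e.\ $2l$ for an $l$-fold cover of a hemisphere.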
Taken together, the count will be over a subset of integer solutions to the system
\begin{align}\label{indexcountequation}
m\left(\sum_{t=1}^s k_t-(s+1)k_i +n+1\right) + (s+1)\sum_{i=1}^s\eta(\phi_i)+2\vert\underline{l}\vert=2
\end{align}
for each $i$, in the variables $\underline{l}$, $m$, and $\eta(\phi_i)$.
\begin{comment}
\subsubsection{Case: $F=B=\bb{CP}^2$}
We work out the above example in the specific case that both the base and fiber are $4$-dimensional. We take the fibration $\bb{P}(\mc{O}\oplus \mc{O}_{k_1}\oplus \mc{O}_{k_2}) \rightarrow \bb{CP}^2$ with $k_2\geq k_1$. Contributions to the potential satisfy the index equation
$$m\Big( k_1+k_2 -3k_i +3\Big) + 2(l_1+l_2)+\mu(v^{vert})=2$$
where $\mu(v^{vert})$ is the Maslov index of the vertical component. For a generic point $p\in L$, say that $p=[1,p_1,p_2]$ in some fiber of $\pi$. We have the following vertical holomorphic disks: let $f^i_{l_i}$ be a M\"obius transformation as above and let
$$v^{\underline{l}}_{F_p}(z)=[1,f^1_{l_1}(z)p_1,f^2_{l_2}(z)p_2]$$
be a holomorphic disk with $f^i_{l_i}(1)=1$. When $f^i_{l_i}=\infty$ it intersects the points $[0,p_1,0]$, $[0,0,p_2]$, and $[0,p_1,p_2]$, depending upon whether $l_1>l_2$, $l_1<l_2$, or $l_1=l_2$ respectively. We first take $i=2$ and suppose that $2k_2-k_1 >3$. In fact, we must have $l_i=0$, so we take $m\gg 1$ and the map $v^{vert}$ such that $\phi_2$ takes on the value $\infty$ where $\phi_1$ does not. Next take $i=1$ and suppose that $k_2-2k_1+3<0$. Choose $m\gg 1$ so that \ref{indexcountequation} has a positive integer solution in $l$ and $\eta$. Then we can consider vertical holomorphic representatives of the form \ref{verticaldisksforranks} where $\phi_0$ is non-constant and there exists $x_0\in \partial D$ such that
$$f(x_0)=[0,a_1,e^{(k_2-k_1)\theta}a_2]$$
for some $\theta$, together with the section \ref{bottomsection}.
On the other hand, assuming that $m$ is large enough, we can also augment a vertical disk with \ref{bottomsectionmods}. For a pair $(l_1,l_2)$ we can consider the section
$$z\mapsto [0,f_{1}a_1,z^{k_2-k_1}f_2 a_2]$$
where $f_i$ is a Blaschke product with $l_i$ factors, the $f_i$ have mutually exclusive zeros, and $f_i(1)=1$. Such a section intersects a vertical disk in $\pi^{-1}(v(1))$ of the form \ref{verticaldisksforranks} where $\phi_0(-1)=0$, $\phi_1(-1)=1$, $\phi_2(-1)=1$.
\end{comment}
\subsection{Computing Floer cohomology of involution fixed point sets in projectivized vector bundles}
Based on the observations in the above examples, we consolidate a general argument that allows us to show that the $A_\infty$-algebra is unobstructed:
\begin{theorem}\label{agthm}
Let $\bb{P}(\mc{V})\rightarrow \bb{CP}^n$ be the projectivization of a vector bundle with a lift $\tau$ of the anti-symplectic involution on $\bb{CP}^n$. Then for $L=\text{Fix}(\tau)$, $HF(L,\Lambda_{t,\bb{Z}_2})$ is defined and isomorphic to $H^{Morse}(L,\Lambda_{t,\bb{Z}_2})$.
\end{theorem}
\begin{proof}
Most of the proof has been demonstrated through Example \ref{realprojectivebundles}, but we spell it out here. For the base, all holomorphic disks come in pairs in the integrable toric structure $J_{\bb{CP}^n}$, which is regular by a doubling trick followed by a topological argument [Lemma 4.6 \cite{ohrpn}]. By Section 3.1 in \cite{agm} and the proof of Theorem 3.6 in \cite{CW1}, we can find a high-degree smooth hypersurface $D_B\subset\bb{CP}^n$ which is holomorphic in the toric structure and is stabilizing for $\bb{RP}^n$. We apply Theorem \ref{liftthm} using this base complex structure and stabilizing divisor to find a comeager subset of regular gauge transformations for each section class.
By Remark \ref{computeremark} it suffices to find sections in the given connection, and then perturb using a regular (invertible) gauge transformation. Given a holomorphic configuration $u$ in the perturbation system above, the involution $\tau$ gives another configuration $u(z)\mapsto \tau\circ u(\bar{z})=:v(z)$ with matching boundary conditions. To see that $v$ is holomorphic, it suffices to see that $\tau$ is anti-holomorphic. Note that in a local trivialization the almost complex structure
$$J:=\begin{bmatrix} J_{\bb{P}(V)} & J_{\bb{P}(V)}\circ X_\sigma-X_\sigma\circ J_{\bb{CP}^n} \\ 0 & J_{\bb{CP}^n} \end{bmatrix}$$
is the unique such structure from Lemma \ref{uniqueacs} when the base is $\bb{CP}^n$. We have $\tau^*a=-a$, so $\tau$ sends $TF^{\perp a}$ to $TF^{\perp a}$. Thus $\tau^* J$ is the unique almost complex structure that agrees with $-J_{\bb{P}(V)}$ on the fibers, preserves $TF^{\perp a}$, and makes $\pi$ holomorphic with respect to $-J_{\bb{CP}^n}$, so it must be equal to $-J$ in the above coordinates, and hence globally. Thus the full potential vanishes at the trivial representation, so that the Floer cohomology of $L$ is unobstructed. Moreover, any holomorphic disk contribution to the Floer differential cancels mod $2$ for exactly the same reason. Hence
$$HF(L,\Lambda_{\bb{Z}_2,t},J,\mc{G})\cong H^{Morse}(L,\Lambda_{\bb{Z}_2,t})$$
From the discussion in the invariance Section \ref{invariancesection}, we have an isomorphism for a generic perturbation system. Since $\text{rank}_{\Lambda_{\bb{Z}_2,t}}HF(L,\Lambda_{\bb{Z}_2,t})\leq \#\text{Fix}(\phi)$ for a Hamiltonian isotopy $\phi$, the estimate \ref{agestimate} follows.
\end{proof}
\begin{comment}
\subsection{Coisotropic fibers over $G_{\bb{C}}(k,n)$}
Identify $PMat_{k\times n}(\bb{C})$ with the symplectic manifold $(\bb{CP}^{nk-1},\omega_{FS})$.
The action of $PU(K)$ is Hamiltonian with moment map
$$\Phi(A)=\frac{i}{2}AA^*- \frac{Id}{2i}$$
\subsection{Example: Fibrations with Del Pezzo fibers}\label{dpexample}
Let $\mathbb{D}_n$ denote the monotone symplectic blowup of $\mathbb{P}^2$ in $n< 8$ points, equipped with the induced K\"ahler form $\omega_n$. By the Weinstein neighborhood theorem, any Lagrangian $2$-sphere must have self-intersection number $-2$. This fact, coupled with a calculation as in \cite{evansspheres}, gives us the classes which contain Lagrangian $2$-spheres. *** fill
A better way to view $\mathbb{D}_n$ in this regard is as the monotone symplectic blowup of $\mathbb{P}^1\times \mathbb{P}^1$. Equip $\mathbb{P}^1\times \mathbb{P}^1$ with the direct sum symplectic form $\omega_{FS}\oplus \omega_{FS}$. Define the \emph{antidiagonal}
$$\bar{\Delta}:=\left\{ (p,\tau (p))\ \vert\ p\in \mathbb{P}^1 \right\}$$
where $\tau$ is the anti-holomorphic, anti-symplectic antipode map on $\mathbb{P}^1$. This particular Lagrangian is invariant under the diagonal $S^1$ action.\\
$\mathbb{D}_n$ for $n\geq 2$ can be realized as the symplectic blowup of $\mathbb{P}^1\times \mathbb{P}^1$ in $n-1$ points. Further, one can arrange that the symplectically embedded neighborhoods along which we blow up are $T^2$-invariant and do not intersect $\bar{\Delta}$. This gives an $S^1$-invariant Lagrangian $L\subset \mathbb{D}_n$ in the binary class (see \cite{evansspheres}).\\
\subsubsection{A fibered Lagrangian}
To use the above construction in context, let us take the symplectic fibration
$$\mathbb{P}^1\times \mathbb{P}^1 \rightarrow \mathbb{P}(\mathcal{O}\oplus \mathcal{O}_l)\otimes_{\mathbb{P}^k}\mathbb{P}(\mathcal{O}\oplus \mathcal{O}_l)\xrightarrow{\pi} \mathbb{P}^k$$
the fiberwise product of $\mathbb{P}(\mathcal{O}\oplus \mathcal{O}_l)$ with itself. Let $E$ denote the total space of the fibration.
Equip $E$ with $\omega_{H,K}=a_H+K\pi^*\omega_{FS}$, where $a\vert_{\pi^{-1}(p)}=\omega_{FS}\oplus\omega_{FS}$ is the weak coupling form associated to some symplectic connection $H$.\\
The $\mathbb{T}^k$ action lifts to a $\mathbb{T}^{k+1}$ action on $\mathbb{P}(\mathcal{O}\oplus \mathcal{O}_l)$ and gives a diagonal action on $\mathbb{P} (\mathcal{O}\oplus \mathcal{O}_l)\otimes \mathbb{P}(\mathcal{O} \oplus \mathcal{O}_l)\rightarrow \mathbb{P}^k$. If we assume that the connection $H$ is induced from the toric symplectic form, then we can arrange for $H$ to be $\mathbb{T}^{k+1}$-equivariant. Restrict to a sub-torus $\mathbb{T}^k\subset \mathbb{T}^{k+1}$ for which the action on $\mathbb{P}^k$ is free. Let $\Psi$ denote the moment map for this action on $\mathbb{P}^k$. We invoke a theorem about the moment map of a symplectic fibration:
\begin{theorem}[\cite{guillemin} Theorem 4.6.3]
There is a neighborhood $U$ of $\cl{k}$ and a $\mathbb{T}^k$-invariant connection $H'$ on $\pi^{-1}(U)$ such that the moment map for the $\mathbb{T}^k$ action on $(\pi^{-1}(U),\omega_{H', K})$ is $\Psi\circ \pi$.
\end{theorem}
It follows that $E\vert_{\cl{k}}$ is a moment fiber for the $\mathbb{T}^k$ action, so that
$$(E\vert_{\cl{k}},\omega_{H',K})\cong (\mathbb{P}^1\times\mathbb{P}^1\times \cl{k},\omega_{FS}\oplus \omega_{FS}\oplus 0)$$
and parallel transport along the connection $H'$ is given precisely by the $T^k$ action. By [\cite{guillemin} Theorem 4.6.2, equivariant version] there is a $\mathbb{T}^{k}$-invariant connection $H''$ on all of $E$ and a neighborhood $\cl{k} \subset U'\subset U$ such that $H''=H'$ on $U'$ and $H''=H$ outside $U$.\\
We define $L$ as the $\mathbb{T}^k$ orbit of $\bar{\Delta}$.
If we take $K$ large enough, [\cite{guillemin} Theorem 1.6.3, equivariant version] gives us a $T^k$-equivariant symplectic isotopy $f:E\rightarrow E$ covering the identity such that $f^*\omega_{H',K}=\omega_{H,K}$. We see that the embedding of the Lagrangian $L$ depends on the embedding
$$0\rightarrow \mathbb{T}^k\hookrightarrow \mathbb{T}^{k+1}$$
For instance, let us take $\lbrace \theta_i\rbrace_i$ as a basis for $\mathbb{T}^{k+1}$, where $\theta_{k+1}$ acts by scaling
$$f(z_0,\dots, z_{k+1})\mapsto f(e^{i\theta_{k+1}}z_0,\dots , e^{i\theta_{k+1}}z_{k+1})$$
on $\mathcal{O}$ and $\mathcal{O}_l$, hence trivially on $\mathbb{P}^k$. Suppose that the $\mathbb{T}^k$ chosen in the construction of $L$ is the sub-torus generated by
$$\left\{ \theta_j\oplus n_j\theta_{k+1} \right\}_{j=1}^k$$
for $n_j\in \bb{Z}$. Then for $f\in \mathcal{O}_l(\cl{k})$,
$$(\theta_j\oplus n_j\theta_{k+1})\cdot f= \theta_j\cdot e^{n_j\theta_{k+1}}f$$
so the monodromy around the $j^{th}$ factor of $\cl{k}$ is a diagonal rotation of $S^2\times S^2$ by $e^{2\pi n_j li}$: an $n_j$-Dehn twist.\\
\end{comment}
\printbibliography
\end{document}
\begin{document}
\title[increasing stability]{Increasing stability for the inverse source scattering problem with multi-frequencies}
\author{Peijun Li}
\address{Department of Mathematics, Purdue University, West Lafayette, Indiana 47907, USA.}
\email{[email protected]}
\author{Ganghua Yuan}
\address{KLAS, School of Mathematics and Statistics, Northeast Normal University, Changchun, Jilin, 130024, China}
\email{[email protected]}
\thanks{MSC: 35R30, 78A46.}
\thanks{The research of PL was supported in part by the NSF grant DMS-1151308. The research of GY was supported in part by NSFC grants 10801030, 11271065, 11571064, the Ying Dong Fok Education Foundation under grant 141001, and the Fundamental Research Funds for the Central Universities under grant 2412015BJ011.}
\keywords{stability, inverse source problem, Helmholtz equation, partial differential equation}
\begin{abstract}
Consider the scattering problem for the two- or three-dimensional Helmholtz equation, where the source of the electric current density is assumed to be compactly supported in a ball. This paper concerns the stability analysis of the inverse source scattering problem, which is to reconstruct the source function. Our results show that increasing stability can be obtained for the inverse problem by using only the Dirichlet boundary data with multi-frequencies.
\end{abstract}
\maketitle
\section{Introduction and problem statement}
In this paper, we consider the following Helmholtz equation:
\begin{equation}\label{sol}
\Delta u(x)+ \kappa^2 u(x)=f(x), \quad x\in\mathbb{R}^{d},
\end{equation}
where $d=2$ or $3$, the wavenumber $\kappa>0$ is a constant, $u$ is the radiated wave field, and $f$ is the source of the electric current density, which is assumed to have compact support. Denote by $B_{\rho}=\{x\in\mathbb{R}^d: |x|<\rho\}$ the ball with radius $\rho>0$ centered at the origin.
Let $R>0$ be a constant large enough that $B_R$ contains the support of $f$, and let $\partial B_R$ be the boundary of $B_R$. The following Sommerfeld radiation condition is required to ensure the uniqueness of the wave field $u$:
\begin{equation}\label{rc}
\lim_{r\to\infty}r^{\frac{d-1}{2}}(\partial_r u-{\rm i}\kappa u)=0,\quad r=|x|,
\end{equation}
uniformly in all directions $\hat{x}=x/|x|$. A given function $u$ on $\partial B_R$ in two dimensions has the Fourier series expansion
\[
u(R, \theta)=\sum_{n\in\mathbb{Z}}\hat{u}_n(R)e^{{\rm i}n\theta},\quad \hat{u}_n(R)=\frac{1}{2\pi}\int_0^{2\pi}u(R, \theta)e^{-{\rm i}n\theta}{\rm d}\theta.
\]
We may introduce the Dirichlet-to-Neumann (DtN) operator $\mathscr{B}: H^{1/2}(\partial B_R)\to H^{-1/2}(\partial B_R)$ given by
\[
(\mathscr{B}u)(R, \theta)=\kappa\sum_{n\in\mathbb{Z}} \frac{H_n^{(1)'}(\kappa R)}{H_n^{(1)}(\kappa R)}\hat{u}_n(R) e^{{\rm i}n\theta}.
\]
A given function $u$ on $\partial B_R$ in three dimensions has the Fourier series expansion
\[
u(R, \theta, \varphi)=\sum_{n=0}^\infty\sum_{m=-n}^n \hat{u}_n^m(R) Y_n^m(\theta, \varphi),\quad \hat{u}_n^m(R)=\int_{\partial B_R} u(R, \theta, \varphi)\bar{Y}_n^m(\theta, \varphi){\rm d}\gamma.
\]
We may similarly introduce the DtN operator ${\mathscr B}: H^{1/2}(\partial B_R)\to H^{-1/2}(\partial B_R)$ as follows:
\[
(\mathscr{B}u)(R, \theta, \varphi)=\kappa\sum_{n=0}^\infty\sum_{m=-n}^n \frac{h_n^{(1)'}(\kappa R)}{h_n^{(1)}(\kappa R)}\hat{u}_n^m(R)Y_n^m(\theta, \varphi).
\]
Here $H_n^{(1)}$ is the Hankel function of the first kind of order $n$, $h_n^{(1)}$ is the spherical Hankel function of the first kind of order $n$, $Y_n^m$ is the spherical harmonic of order $n$, and the bar denotes the complex conjugate.
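To see where the quotient of Hankel functions comes from (a standard observation, which we sketch for $d=2$): an outgoing solution of \eqref{sol} in $|x|>R$ has the expansion $u(r,\theta)=\sum_{n\in\mathbb{Z}} a_n H_n^{(1)}(\kappa r)e^{{\rm i}n\theta}$, so on $r=R$ we have $\hat{u}_n(R)=a_nH_n^{(1)}(\kappa R)$ and
\[
\partial_r u(R,\theta)=\kappa\sum_{n\in\mathbb{Z}} a_n H_n^{(1)'}(\kappa R)e^{{\rm i}n\theta}
=\kappa\sum_{n\in\mathbb{Z}}\frac{H_n^{(1)'}(\kappa R)}{H_n^{(1)}(\kappa R)}\hat{u}_n(R)e^{{\rm i}n\theta}=(\mathscr{B}u)(R,\theta).
\]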
Using the DtN operator, we can reformulate the Sommerfeld radiation condition into a transparent boundary condition
\[
\partial_{\nu}u={\mathscr B}u\quad\text{on} ~ \partial B_R,
\]
where $\nu$ is the unit outer normal on $\partial B_R$. Hence one can also obtain the Neumann data on $\partial B_R$ once the Dirichlet data is available on $\partial B_R$. Now we are in a position to state our inverse source problem:

{\bf IP.} {\em Let $f$ be a complex-valued function with compact support contained in $B_R$. The inverse problem is to determine $f$ from the boundary observation data $u(x,\kappa)|_{\partial B_R}$ for an interval of frequencies $\kappa\in (0,K)$, where $K>1$ is a positive constant.}

The inverse source problem has significant applications in medical and biomedical imaging \cite{I-89} and various tomography problems \cite{Ar, StUh}. In this paper, we study the stability of the above inverse problem. As is known, the inverse source problem does not have a unique solution at a single frequency \cite{DS-IEEE82, HKP-IP05}. Our goal is to establish increasing stability of the inverse problem with multi-frequencies. We refer to \cite{BLT-JDE10, CIL} for increasing stability analysis of the inverse source scattering problem. In \cite{CIL}, the authors discussed increasing stability of the inverse source problem for the three-dimensional Helmholtz equation in a general domain $\Omega$ by using the Huygens principle; the observation data are both $u(x,\kappa)|_{\partial\Omega}, 0<\kappa<K$ and $\nabla u(x,\kappa)|_{\partial\Omega}, 0<\kappa<K$. In \cite{BLT-JDE10}, the authors studied the stability of the two- and three-dimensional Helmholtz equations via Green's functions, but the stability results in \cite{BLT-JDE10} are different from the one in this paper, where only the Dirichlet data is required. Related results can be found in \cite{I-CM07, I-D11} on increasing stability of determining potentials and of the continuation for the Helmholtz equation.
We refer to \cite{El, BaLiTr} for a uniqueness result and a numerical study of the inverse source scattering problem. A survey of general inverse scattering problems with multi-frequencies can be found in \cite{BLLT-IP15}.
\section{Main result}
Let $0<r<R$ and define a complex-valued function space
\[
\mathcal{C}_M =\{f\in H^{n+1}(B_R): \|f\|_{H^{n+1}(B_R)}\leq M, ~{\rm supp}f\subset B_r\subset B_R, ~f: B_R\to\mathbb{C}\},
\]
where $M>1$ and $0<r<R$ are constants. For any $v\in H^{1/2}(\partial B_R)$, we set
\[
\|v(x,\kappa)\|_{\partial B_R}=\int_{\partial B_R}\left( |\mathscr{B} v(x, \kappa)|^2 +\kappa^2 |v(x, \kappa)|^2 \right){\rm d}\gamma.
\]
Now we state the main stability result for the inverse problem.
\begin{theo}\label{mr1}
Let $f_j\in \mathcal{C}_M, j=1, 2$, and let $u_j$ be the solution of the scattering problem \eqref{sol}--\eqref{rc} corresponding to $f_j$. Then there exists a positive constant $C$ independent of $n, K, M, \kappa$ such that
\begin{align}\label{cfe}
\| f_1-f_2\|^2_{L^2(B_R)}\leq C \left(\epsilon^2+\frac{M^2}{\left(\frac{K^{\frac{2}{3}}|\ln\epsilon|^{\frac{1}{4}}}{(R+1)(6n-6d+3)^3}\right)^{2n-2d+1}} \right),
\end{align}
where $K>1$, $n\ge d$ and
\begin{align}\label{e1}
\epsilon&=\left(\int_0^K \kappa^{d-1} \|(u_1-u_2)(x,\kappa)\|_{\partial B_R}{\rm d}\kappa\right)^{\frac{1}{2}}.
\end{align}
\end{theo}
\begin{rema}
The stability estimate \eqref{cfe} consists of two parts: the first part is the data discrepancy, and the second part comes from the high-frequency tail of the function. It is clear that the stability increases as $K$ increases, i.e., the problem becomes more stable as data at more frequencies are used.
We can also see that when $n<\left[\frac{K^{\frac{2}{9}}|\ln\epsilon|^{\frac{1}{12}}}{(R+1)^{\frac{1}{3}}}+d-\frac{1}{2}\right]$, the stability increases as $n$ increases, i.e., the problem is more stable when the functions have suitably higher regularity.
\end{rema}
We prove Theorem \ref{mr1} in the following section.
\section{Proof of Theorem \ref{mr1}}
First we present several useful lemmas.
\begin{lemm}
Let $f_j\in L^2(B_R)$ with ${\rm supp}f_j\subset B_R$, $j =1, 2$, and let $u=u_1-u_2$, where $u_j$ is the solution of \eqref{sol}--\eqref{rc} with source $f_j$. Then
\begin{align*}
\|f_1-f_2\|^2_{L^2(B_R)} &\le C\int_0^{\infty}\kappa^{d-1}\int_{\partial B_R}\left|\partial_{\nu}u(x,\kappa)+\kappa u(x,\kappa)\right|^2{\rm d}\gamma {\rm d}\kappa.
\end{align*}
\end{lemm}
\begin{proof}
Let $f=f_1-f_2$ and let $\xi\in\mathbb{R}^d$ with $|\xi|=\kappa$. Multiplying both sides of \eqref{sol} by $e^{-{\rm i}\xi\cdot x}$ and integrating over $B_R$, we obtain
\[
\int_{B_R}e^{-{\rm i}\xi\cdot x}f(x){\rm d}x=\int_{\partial B_R}e^{-{\rm i}\xi\cdot x}(\partial_{\nu}u(x,\kappa)+{\rm i}\xi\cdot\nu\, u(x,\kappa)){\rm d}\gamma, \quad |\xi|=\kappa\in (0,\infty).
\]
Since $\mbox{supp}f\subset B_R$, we have
\[
\int_{\mathbb{R}^d}e^{-{\rm i}\xi\cdot x}f(x){\rm d}x=\int_{\partial B_R}e^{-{\rm i}\xi\cdot x}(\partial_{\nu}u(x,\kappa)+{\rm i}\xi\cdot\nu\, u(x,\kappa)){\rm d}\gamma, \quad |\xi|=\kappa\in (0,\infty),
\]
which gives
\[
\left|\int_{\mathbb{R}^d}e^{-{\rm i}\xi\cdot x}f(x){\rm d}x\right|^2\le\left|\int_{\partial B_R}(\partial_{\nu}u(x,\kappa)+\kappa u(x,\kappa)){\rm d}\gamma\right|^2, \quad |\xi|=\kappa\in (0,\infty).
\]
Hence,
\begin{align*}
\left(\int_{\mathbb{R}^d}\left|\int_{\mathbb{R}^d}e^{-{\rm i}\xi\cdot x}f(x){\rm d}x\right|^2 {\rm d}\xi\right)^{\frac{1}{2}}
\le\left(\int_{\mathbb{R}^d}\left|\int_{\partial B_R}(\partial_{\nu}u(x,\kappa)+\kappa u(x,\kappa)){\rm d}\gamma\right|^2 {\rm d}\xi\right)^{\frac{1}{2}}, \quad |\xi|=\kappa\in (0,\infty).
\end{align*}
When $d=2$, using polar coordinates we obtain
\begin{align*}
&\left(\int_{\mathbb{R}^2}\left|\int_{\mathbb{R}^2}e^{-{\rm i}\xi\cdot x}f(x){\rm d}x\right|^2{\rm d}\xi\right)^{\frac{1}{2}}\\
&\le\left(\int_0^{2\pi}{\rm d}\theta\int_0^{\infty}\kappa\left|\int_{\partial B_R}(\partial_{\nu}u(x,\kappa)+\kappa u(x,\kappa)){\rm d}\gamma\right|^2{\rm d}\kappa\right)^{\frac{1}{2}}\\
&\le \left(2\pi\int_0^{\infty}\kappa\left|\int_{\partial B_R}(\partial_{\nu}u(x,\kappa)+\kappa u(x,\kappa)){\rm d}\gamma\right|^2{\rm d}\kappa\right)^{\frac{1}{2}}\\
&\le\left(2\pi^2R^2\int_0^{\infty}\kappa\int_{\partial B_R}\left|\partial_{\nu}u(x,\kappa)+\kappa u(x,\kappa)\right|^2{\rm d}\gamma {\rm d}\kappa\right)^{\frac{1}{2}}.
\end{align*}
It follows from the Plancherel theorem that
\begin{align*}
\|f_1-f_2\|^2_{L^2(B_R)}&=\|f_1-f_2\|^2_{L^2(\mathbb{R}^2)}\\
& =\frac{1}{(2\pi)^2}\int_{\mathbb{R}^2}|\hat{f}_1(\xi)-\hat{f}_2(\xi)|^2{\rm d}\xi\\
&\le C\int_0^{\infty}\kappa\int_{\partial B_R}|\partial_{\nu}u(x,\kappa)+\kappa u(x,\kappa)|^2{\rm d}\gamma {\rm d}\kappa.
\end{align*}
When $d=3$, using polar coordinates we obtain
\begin{align*}
&\left(\int_{\mathbb{R}^3}\left|\int_{\mathbb{R}^3}e^{-{\rm i}\xi\cdot x}f(x){\rm d}x\right|^2{\rm d}\xi\right)^{\frac{1}{2}}\\
&\le\left|\int_0^{2\pi}{\rm d}\theta\int_0^{\pi}\sin\varphi\, {\rm d}\varphi\int_0^{\infty}\kappa^2\left|\int_{\partial B_R}(\partial_{\nu}u(x,\kappa)+\kappa u(x,\kappa)){\rm d}\gamma\right|^2{\rm d}\kappa\right|^{\frac{1}{2}}\\
&\le \left(2\pi^2\int_0^{\infty}\kappa^2\left|\int_{\partial B_R}(\partial_{\nu}u(x,\kappa)+\kappa u(x,\kappa)){\rm d}\gamma\right|^2{\rm d}\kappa\right)^{\frac{1}{2}}\\
&\le\left(\frac{8}{3}\pi^3R^3\int_0^{\infty}\kappa^2\int_{\partial B_R}\left|\partial_{\nu}u(x,\kappa)+\kappa u(x,\kappa)\right|^2{\rm d}\gamma {\rm d}\kappa\right)^{\frac{1}{2}}.
\end{align*}
It follows from the Plancherel theorem that
\begin{align*}
\|f_1-f_2\|^2_{L^2(B_R)}&=\|f_1-f_2\|^2_{L^2(\mathbb{R}^3)}\\
& =\frac{1}{(2\pi)^3}\int_{\mathbb{R}^3}|\hat{f}_1(\xi)-\hat{f}_2(\xi)|^2{\rm d}\xi\\
&\le C\int_0^{\infty}\kappa^2\int_{\partial B_R}\left|\partial_{\nu}u(x,\kappa)+\kappa u(x,\kappa)\right|^2{\rm d}\gamma {\rm d}\kappa,
\end{align*}
which completes the proof.
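We remark that in both cases the weight $\kappa^{d-1}$ is precisely the Jacobian factor of the polar change of variables
\[
{\rm d}\xi=\kappa^{d-1}\,{\rm d}\kappa\,{\rm d}\sigma(\hat{\xi}),\qquad |\xi|=\kappa,\quad \hat{\xi}=\xi/|\xi|,
\]
which is also why the same weight appears in the definition \eqref{e1} of $\epsilon$.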
\end{proof}
For $d=2$, let
\begin{align}\label{3.1}
I_1(s)=&\int_0^s \kappa^{3}\int_{\partial B_R}\left(\int_{B_R}-\frac{{\rm i}}{4}H_0^{(1)}(\kappa|x-y|)(f_1(y)-f_2(y)){\rm d}y\right)\\
&\left(\int_{B_R}\frac{{\rm i}}{4}\bar{H}_0^{(1)}(\kappa|x-y|)(\bar{f}_1(y)-\bar{f}_2(y)){\rm d}y\right){\rm d}\gamma(x){\rm d}\kappa, \\
\label{3.2}I_2(s)=&\int_0^s \kappa\int_{\partial B_R}\left(-\int_{B_R}\frac{{\rm i}}{4}\partial_{\nu}H_0^{(1)}(\kappa|x-y|)(f_1(y)-f_2(y)){\rm d}y\right)\\
&\left(\int_{B_R}\frac{{\rm i}}{4}\partial_{\nu}\bar{H}_0^{(1)}(\kappa|x-y|)(\bar{f}_1(y)-\bar{f}_2(y)){\rm d}y\right){\rm d}\gamma(x){\rm d}\kappa.
\end{align}
For $d=3$, let
\begin{align}\label{3.3}
I_1(s)=&\int_0^s \kappa^{4}\int_{\partial B_R}\left(\int_{B_R}\frac{e^{{\rm i}\kappa|x-y|}}{4\pi |x-y|}(f_1(y)-f_2(y)){\rm d}y\right)\\
&\left(\int_{B_R}\frac{e^{-{\rm i}\kappa|x-y|}}{4\pi |x-y|}(\bar{f}_1(y)-\bar{f}_2(y)){\rm d}y\right){\rm d}\gamma(x){\rm d}\kappa,\\
\label{3.4}I_2(s)=&\int_0^s \kappa^{3}\int_{\partial B_R}\left(\int_{B_R}\partial_{\nu}\frac{e^{{\rm i}\kappa|x-y|}}{4\pi |x-y|}(f_1(y)-f_2(y)){\rm d}y\right)\\
&\left(\int_{B_R}\partial_{\nu}\frac{e^{-{\rm i}\kappa|x-y|}}{4\pi |x-y|}(\bar{f}_1(y)-\bar{f}_2(y)){\rm d}y\right){\rm d}\gamma(x){\rm d}\kappa.
\end{align}
Denote
$$S=\{z=x+{\rm i}y\in\mathbb{C}: -\frac{\pi}{4}<{\rm arg}\, z<\frac{\pi}{4}\}.$$
The integrands in \eqref{3.1}--\eqref{3.4} are analytic functions of $\kappa$ in $S$. The integrals with respect to $\kappa$ can be taken over any path joining the points $0$ and $s$ in $S$. Thus $I_1(s)$ and $I_2(s)$ are analytic functions of $s=s_1+{\rm i}s_2\in S$, $s_1, s_2\in\mathbb{R}$.
\begin{lemm}
Let $f_j\in L^2(B_R)$ with ${\rm supp}f_j\subset B_R$, $j =1, 2$.
Then for any $s=s_1+{\rm i}s_2\in S$ we have:
\begin{enumerate}
\item for $d=2$,
\begin{align}
\label{3.5} |I_1(s)|&\leq 16\pi^3R^3|s|^{5}e^{4R|s_2|}\|f_1(x)-f_2(x)\|_{L^2(B_R)}^2,\\
\label{3.6} |I_2(s)|&\leq 16\pi^3R^3|s|^{3}e^{4R|s_2|}\|f_1(x)-f_2(x)\|_{H^1(B_R)}^2,
\end{align}
\item for $d=3$,
\begin{align}
\label{3.7} |I_1(s)|&\leq 16\pi^3(|s|^{3}R^3+|s|^{4}R^4)e^{4R|s_2|}\|f_1(x)-f_2(x)\|_{L^2(B_R)}^2,\\
\label{3.8} |I_2(s)|&\leq 16\pi^3(|s|^{2}R^3+|s|^{3}R^4)e^{4R|s_2|}\|f_1(x)-f_2(x)\|_{H^1(B_R)}^2.
\end{align}
\end{enumerate}
\end{lemm}
\begin{proof}
We first prove \eqref{3.7}. Let $\kappa=st$, $t\in(0, 1)$. A simple calculation yields
\begin{align*}
I_1(s)=&\int_0^1 s^{5}t^{4}\int_{\partial B_R}\left(\int_{B_R}\frac{e^{{\rm i}st|x-y|}}{4\pi |x-y|}(f_1(y)-f_2(y)){\rm d}y\right)\\
&\left(\int_{B_R}\frac{e^{-{\rm i}st|x-y|}}{4\pi |x-y|}(\bar{f}_1(y)-\bar{f}_2(y)){\rm d}y\right){\rm d}\gamma(x){\rm d}t.
\end{align*}
Noting that $|e^{{\rm i}st |x-y|}|\leq e^{2R|s_2|}$ for all $x\in \partial B_R, y\in B_R$, we have
\begin{align*}
|I_1(s)|&\le\int_0^1|s|^{5}t^{4}\int_{\partial B_R}\left|\int_{B_R}\frac{e^{2|s_2|R}}{|x-y|}|f_1(y)-f_2(y)|{\rm d}y\right|^2{\rm d}\gamma(x){\rm d}t\\
&\le\int_0^1|s|^{5}t^{4}\int_{\partial B_R}\left|\int_{B_R} |f_1(y)-f_2(y)|^2 {\rm d}y\right|\int_{B_R}\frac{e^{4R|s_2|}}{|x-y|^2}{\rm d}y\, {\rm d}\gamma(x){\rm d}t,
\end{align*}
where we have used the Schwarz inequality for the integral with respect to $y$ in the last inequality. Using the polar coordinates $\rho=|x-y|$ with respect to $y$ yields
\begin{align*}
|I_1(s)|\le\int_0^1|s|^{5}\left(\int_{B_R}|f_1(y)-f_2(y)|^2{\rm d}y\right)\int_{\partial B_R}\left(2\pi^2\int_0^{2R}e^{4|s_2|R}{\rm d}\rho \right){\rm d}\gamma(x){\rm d}t,
\end{align*}
which implies \eqref{3.7}. Next we prove \eqref{3.8}.
Let $\kappa=st$, $t\in(0, 1)$. A simple calculation yields \begin{align*} I_2(s)=&\int_0^1 s^{3}t^{2}\int_{\partial B_R}\left(\int_{B_R}\partial_{\nu}\frac{e^{{\rm i}st|x-y|}}{4\pi |x-y|}(f_1(y)-f_2(y)){\rm d}y\right)\cr &\left(\int_{B_R}\partial_{\nu}\frac{e^{-{\rm i}st|x-y|}}{4\pi |x-y|}(\bar{f}_1(y)-\bar{f}_2(y)){\rm d}y\right){\rm d}\gamma(x){\rm d}t, \end{align*} which gives \begin{align*} |I_2(s)|=\int_0^1|s|^{3}t^{2}\int_{\partial B_R}\left|\int_{B_R}\nabla_x\left(\frac{e^{{\rm i}st|x-y|}}{|x-y|}\right)\cdot\nu\,(f_1(y)-f_2(y)){\rm d}y\right|^2{\rm d}\gamma(x){\rm d}t. \end{align*} Noting $\nabla_x\left(\frac{e^{{\rm i}st|x-y|}}{|x-y|}\right)=-\nabla_y\left(\frac{e^{{\rm i}st|x-y|}}{|x-y|}\right)$ and ${\rm supp}\,f_j\subset B_R$, $j=1,2$, we have \begin{align*} |I_2(s)|=\int_0^1|s|^{3}t^{2}\int_{\partial B_R}\left|\int_{B_R}\frac{e^{{\rm i}st|x-y|}}{|x-y|}\nabla_y\left(f_1(y)-f_2(y)\right)\cdot\nu\,{\rm d}y\right|^2{\rm d}\gamma(x){\rm d}t. \end{align*} Following an argument similar to that used for proving (\ref{3.7}), we can prove (\ref{3.8}). Now we turn to the proofs of (\ref{3.5}) and (\ref{3.6}). First we prove (\ref{3.5}). By (\ref{3.1}) we have \begin{align*} I_1(s)=\int_0^1 s^{4}t^{3}\int_{\partial B_R}\left|\int_{B_R}\frac{{\rm i}}{4}H_0^{(1)}(st|x-y|)(f_1(y)-f_2(y)){\rm d}y\right|^2{\rm d}\gamma(x){\rm d}t. \end{align*} The Hankel function admits the following integral representation when ${\rm Re}\,z>0$ (see, e.g., \cite{Wa}, Chapter VI): \begin{align*} H_0^{(1)}(z)=\frac{1}{{\rm i}\pi}\int_{1+\infty{\rm i}}^{1}e^{{\rm i}z\tau}(\tau^2-1)^{-1/2}{\rm d}\tau.
\end{align*} Substituting $\tau=1+t{\rm i}$, we consequently obtain \begin{align*} |H_0^{(1)}(z)|&= \left| \frac{1}{\pi}\int^0_{+\infty}e^{{\rm i}({\rm Re}\,z+{\rm i}\,{\rm Im}\,z)(1+t{\rm i})}((1+t{\rm i})^2-1)^{-1/2}{\rm d}t\right|\cr &\le \left|\frac{1}{\pi}e^{{\rm i}{\rm Re}\,z -{\rm Im}\,z}\int^0_{+\infty}e^{-t{\rm Re}\,z-{\rm i}t{\rm Im}\,z}(2t{\rm i}-t^2)^{-1/2}{\rm d}t\right|\cr &\le \frac{1}{\pi}e^{|{\rm Im}\,z|}\int_0^{+\infty}\frac{e^{-t{\rm Re}\,z}}{\left|t^{1/2}(2{\rm i}-t)^{1/2}\right|}{\rm d}t\cr &\le \frac{1}{\pi}e^{|{\rm Im}\,z|}\int_0^{+\infty}\frac{e^{-t{\rm Re}\,z}}{t^{1/2}(t^2+4)^{1/4}}{\rm d}t\cr &\le \frac{1}{\pi}e^{|{\rm Im}\,z|}\int_0^{+\infty}\frac{e^{-t{\rm Re}\,z}}{t^{1/2}2^{1/2}}{\rm d}t\cr &=\frac{1}{\pi}e^{|{\rm Im}\,z|}\left(\int_0^{1}\frac{e^{-t{\rm Re}\,z}}{t^{1/2}2^{1/2}}{\rm d}t+\int_1^{+\infty}\frac{e^{-t{\rm Re}\,z}}{t^{1/2}2^{1/2}}{\rm d}t\right)\cr &\le\frac{1}{\pi}e^{|{\rm Im}\,z|}\left(\int_0^{1}\frac{1}{t^{1/2}}{\rm d}t+\int_1^{+\infty}e^{-t{\rm Re}\,z}{\rm d}t\right)\cr &\le\frac{1}{\pi}e^{|{\rm Im}\,z|}\left(2+\frac{1}{{\rm Re}\,z}\right). \end{align*} Similarly, we can obtain \begin{align*} |\bar{H}_0^{(1)}(z)|\le \frac{1}{\pi}e^{|{\rm Im}\,z|}\left(2+\frac{1}{{\rm Re}\,z}\right). \end{align*} Hence we have \begin{align*} |I_1(s)|\le\int_0^1|s|^{4}t^{3}\int_{\partial B_R}\left|\int_{B_R}|f_1(y)-f_2(y)|^2{\rm d}y\right|\int_{B_R} e^{4R|s_2|}\left(2+\frac{1}{|x-y|s_1t}\right){\rm d}y{\rm d}\gamma(x){\rm d}t. \end{align*} Using the polar coordinates $\rho=|x-y|$ with respect to $y$ yields \begin{align*} |I_1(s)|\le\int_0^1|s|^{4}t^{3}\left|\int_{B_R}|f_1(y)-f_2(y)|^2{ \rm d}y\right|\int_ { \partial B_R} \left(2\pi^2\int_{0}^{2R}e^{4R|s_2|}\left(2\rho+\frac{1}{s_1t}\right){\rm d}\rho\right){\rm d}\gamma(x){\rm d}t,
\end{align*} which completes the proof of (\ref{3.5}). Noting that $\partial_{\nu}H_0^{(1)}(\kappa|x-y|)=\nabla_x H_0^{(1)}(\kappa|x-y|)\cdot\nu$ and $\nabla_x H_0^{(1)}(\kappa|x-y|)=-\nabla_y H_0^{(1)}(\kappa|x-y|)$, we can prove (\ref{3.6}) in a similar way. \end{proof} \begin{lemm} Let $f_j\in H^n(B_R)$, $n\ge d$, ${\rm supp}\,f_j\subset B_r\subset B_R$, $j =1, 2$. Then there exists a constant $C$ independent of $n$ such that for any $s\ge 1$ \begin{align}\label{3.9} \int_s^{+\infty} \int_{\partial B_R}\kappa^{d-1} \bigl(|\partial_{\nu}u(x,\kappa)|^2 +\kappa^2|u(x, \kappa)|^2 \bigr){\rm d}\gamma{\rm d}\kappa&\leq C s^{-(2n-2d+1)}\| f_1-f_2\|^2_{H^{n+1}(B_R)}. \end{align} \end{lemm} \begin{proof} It is easy to see that \begin{align*} &\int_s^{+\infty}\int_{\partial B_R} \kappa^{d-1}\bigl(|\partial_{\nu}u(x,\kappa)|^2 +\kappa^2|u(x, \kappa)|^2 \bigr){\rm d}\gamma{\rm d}\kappa\cr &= \int_s^{+\infty} \int_{\partial B_R} \kappa^{d+1}|u(x, \kappa)|^2 {\rm d}\gamma{\rm d}\kappa + \int_s^{+\infty} \int_{\partial B_R} \kappa^{d-1} |\partial_{\nu}u(x,\kappa)|^2{\rm d}\gamma{\rm d}\kappa\cr &\triangleq L_1+L_2. \end{align*} Next, we estimate $L_1$ and $L_2$. When $d=3$, we have \begin{align*} L_1&=\int_s^{+\infty} \int_{\partial B_R} \kappa^{4}|u(x, \kappa)|^2 {\rm d}\gamma{\rm d}\kappa\cr &=\int_s^{+\infty} \int_{\partial B_R} \kappa^{4}\left|\int_{\mathbb{R}^3}\frac{e^{{\rm i}\kappa|x-y|}}{4\pi|x-y|}(f_1-f_2)(y){\rm d}y\right|^2 {\rm d}\gamma{\rm d}\kappa. \end{align*} Using the polar coordinates $\rho=|y-x|$ centered at $x$ with respect to $y$, we have \begin{align*} L_1=\int_s^{+\infty} \int_{\partial B_R} \kappa^{4}\left|\int_0^{2\pi}{\rm d}\theta\int_0^{\pi}\sin\varphi\,{\rm d}\varphi\int_0^{+\infty}\frac{e^{{\rm i}\kappa\rho}}{4\pi}(f_1-f_2)\rho\, {\rm d}\rho\right|^2 {\rm d}\gamma{\rm d}\kappa.
\end{align*} Using integration by parts and noting ${\rm supp}\,f_j\subset B_r\subset B_R$, we obtain \begin{align*} L_1=\int_s^{+\infty} \int_{\partial B_R} \kappa^{4}\left|\int_0^{2\pi}{\rm d}\theta\int_0^{\pi}\sin\varphi\,{\rm d}\varphi\int_{R-r}^{2R}\frac{e^{{\rm i}\kappa\rho}}{4\pi({\rm i}\kappa)^n}\frac{\partial^n[(f_1-f_2)\rho]}{\partial\rho^n} {\rm d}\rho\right|^2 {\rm d}\gamma{\rm d}\kappa. \end{align*} Consequently, \begin{align*} L_1\le &\int_s^{+\infty} \int_{\partial B_R} \kappa^{4}\left|\int_0^{2\pi}{\rm d}\theta\int_0^{\pi}\sin\varphi\,{\rm d}\varphi\int_{R-r}^{2R}\frac{1}{4\pi \kappa^n}\right.\cr &\left(\left|\sum\limits_{|\alpha|=n}\partial_y^{\alpha} (f_1-f_2)\right|\rho\right.\left.\left.+n\left|\sum\limits_{|\alpha|=n-1}\partial_y^{\alpha}(f_1-f_2)\right|\right){\rm d}\rho\right|^2 {\rm d}\gamma{\rm d}\kappa\cr =&\int_s^{+\infty} \int_{\partial B_R} \kappa^{4}\left|\int_0^{2\pi}{\rm d}\theta\int_0^{\pi}\sin\varphi\,{\rm d}\varphi\int_{R-r}^{2R}\frac{1}{4\pi \kappa^n}\right.\cr &\left(\left|\sum\limits_{|\alpha|=n}\partial_y^{\alpha}(f_1-f_2)\right|\frac{1}{\rho} \right.\left.\left.+\left|\sum\limits_{|\alpha|=n-1}\partial_y^{\alpha} (f_1-f_2)\right|\frac{n}{\rho^2}\right)\rho^2{\rm d}\rho\right|^2 {\rm d}\gamma{\rm d}\kappa\cr \le& \int_s^{+\infty} \int_{\partial B_R} \kappa^{4}\left|\int_0^{2\pi}{\rm d}\theta\int_0^{\pi}\sin\varphi\,{\rm d}\varphi\int_{R-r}^{2R}\frac{1}{4\pi \kappa^n}\right.\cr &\left(\left|\sum\limits_{|\alpha|=n}\partial_y^{\alpha}(f_1-f_2)\right|\frac{1}{R-r} \right.\left.\left.+\left|\sum\limits_{|\alpha|=n-1}\partial_y^{\alpha} (f_1-f_2)\right|\frac{n}{(R-r)^2}\right)\rho^2{\rm d}\rho\right|^2 {\rm d}\gamma{\rm
d}\kappa\cr \le&\int_s^{+\infty} \int_{\partial B_R} \kappa^{4}\left|\int_0^{2\pi}{\rm d}\theta\int_0^{\pi}\sin\varphi\,{\rm d}\varphi\int_{0}^{+\infty}\frac{1}{4\pi \kappa^n}\right.\cr &\left(\left|\sum\limits_{|\alpha|=n}\partial_y^{\alpha}(f_1-f_2)\right|\frac{1}{R-r} \right.\left.\left.+\left|\sum\limits_{|\alpha|=n-1}\partial_y^{\alpha} (f_1-f_2)\right|\frac{n}{(R-r)^2}\right)\rho^2{\rm d}\rho\right|^2 {\rm d}\gamma{\rm d}\kappa. \end{align*} Changing back to the Cartesian coordinates with respect to $y$, we have \begin{align} L_1 \le& \int_s^{+\infty} \int_{\partial B_R} \kappa^{4}\left|\int_{\mathbb{R}^3}\frac{1}{4\pi \kappa^n}\right.\cr &\left(\left|\sum\limits_{|\alpha|=n}\partial_y^{\alpha}(f_1-f_2)\right|\frac{1}{R-r} \right.\left.\left.+\left|\sum\limits_{|\alpha|=n-1}\partial_y^{\alpha} (f_1-f_2)\right|\frac{n}{(R-r)^2}\right){\rm d}y\right|^2 {\rm d}\gamma{\rm d}\kappa\cr \le& C n\|f_1-f_2\|^2_{H^n(B_R)}\int_s^{+\infty}\kappa^{4-2n}{\rm d}\kappa\cr =&C\frac{n}{2n-5}\|f_1-f_2\|^2_{H^n(B_R)}\frac{1}{s^{2n-5}}\cr \label{3.10}\le &3C\|f_1-f_2\|^2_{H^n(B_R)}\frac{1}{s^{2n-5}},\quad n\ge 3. \end{align} Next we estimate $L_2$ for $d=3$: \begin{align*} L_2=&\int_s^{+\infty} \int_{\partial B_R} \kappa^{2} |\partial_{\nu}u(x,\kappa)|^2{\rm d}\gamma{\rm d}\kappa\cr =&\int_s^{+\infty} \int_{\partial B_R} \kappa^{2}\left|\int_{\mathbb{R}^3}\left(\nabla_y\frac{e^{{\rm i}\kappa|x-y|}}{4\pi|x-y|}\cdot\nu\right)(f_1-f_2)(y){\rm d}y\right|^2 {\rm d}\gamma{\rm d}\kappa.
\end{align*} Noting that $\nabla_y\frac{e^{{\rm i}\kappa|x-y|}}{4\pi|x-y|}=-\nabla_x\frac{e^{{\rm i}\kappa|x-y|}}{4\pi|x-y|}$ and ${\rm supp}\,f_j\subset B_R$, we have \begin{align*} L_2=&\int_s^{+\infty} \int_{\partial B_R} \kappa^{2} |\partial_{\nu}u(x,\kappa)|^2{\rm d}\gamma{\rm d}\kappa\cr =&\int_s^{+\infty} \int_{\partial B_R} \kappa^{2}\left|\int_{\mathbb{R}^3}\left(\nabla_y\frac{e^{{\rm i}\kappa|x-y|}}{4\pi|x-y|}\cdot\nu\right)(f_1-f_2)(y){\rm d}y\right|^2 {\rm d}\gamma{\rm d}\kappa\cr =&\int_s^{+\infty} \int_{\partial B_R} \kappa^{2}\left|\int_{\mathbb{R}^3}\frac{e^{{\rm i}\kappa|x-y|}}{4\pi|x-y|}\left(\nabla_y(f_1-f_2)(y)\cdot\nu\right){\rm d}y\right|^2 {\rm d}\gamma{\rm d}\kappa. \end{align*} Following an argument similar to that for the proof of (\ref{3.10}), we can obtain \begin{align}\label{3.11} L_2\le C n\|f_1-f_2\|^2_{H^{n+1}(B_R)}\int_s^{+\infty}\kappa^{2-2n}{\rm d}\kappa=C\frac{n}{2n-3} \|f_1-f_2\|^2_{H^{n+1}(B_R)}\frac{1}{s^{2n-3}},\quad n\ge 2. \end{align} Combining (\ref{3.10})--(\ref{3.11}) and noting $s>1$, we obtain (\ref{3.9}) for $d=3$. When $d=2$, we have \begin{align*} L_1=&\int_s^{+\infty} \int_{\partial B_R} \kappa^{3}|u(x, \kappa)|^2 {\rm d}\gamma{\rm d}\kappa\cr =&\int_s^{+\infty} \int_{\partial B_R} \kappa^{3}\left|\int_{\mathbb{R}^2}\frac{{\rm i}}{4}H_0^{(1)}(\kappa|x-y|)(f_1-f_2)(y){\rm d}y\right|^2 {\rm d}\gamma{\rm d}\kappa. \end{align*} The Hankel function also admits the following integral representation for $t>0$ (see, e.g., \cite{Wa}, Chapter VI): \begin{align*} H_0^{(1)}(t)=\frac{2}{{\rm i}\pi}\int_1^{+\infty}e^{{\rm i}ts}(s^2-1)^{-1/2}{\rm d}s.
\end{align*} Using the polar coordinates $\rho=|y-x|$ centered at $x$ with respect to $y$, we have \begin{align*} L_1=\int_s^{+\infty} \int_{\partial B_R} \kappa^{3}\left|\int_0^{2\pi}{\rm d}\theta\int_0^{+\infty}\frac{1}{4}H_0^{(1)}(\kappa\rho)(f_1-f_2)\rho\,{\rm d}\rho\right|^2 {\rm d}\gamma{\rm d}\kappa. \end{align*} Let \begin{align}\label{3.12} H_n(t)=\frac{2}{{\rm i}\pi}\int_1^{+\infty}\frac{e^{{\rm i}ts}}{({\rm i}s)^n(s^2-1)^{1/2}}{\rm d}s, \quad n=1,2,\cdots. \end{align} It is clear that \[ H_0(t)=H_0^{(1)}(t)\quad\text{and}\quad \frac{{\rm d} H_n(t)}{{\rm d}t}=H_{n-1}(t), \quad t>0, ~n\in\mathbb{N}. \] Using integration by parts and noting ${\rm supp}\,f_j\subset B_r\subset B_R$, we obtain \begin{align*} L_1 =&\int_s^{+\infty} \int_{\partial B_R} \kappa^{3}\left|\int_0^{2\pi}{\rm d}\theta\int_{R-r}^{2R}\frac{H_1(\kappa\rho)}{4\kappa^2}\frac{\partial [(f_1-f_2)\rho]}{\partial\rho}{\rm d}\rho\right|^2 {\rm d}\gamma{\rm d}\kappa\cr =&\int_s^{+\infty} \int_{\partial B_R} \kappa^{3}\left|\int_0^{2\pi}{\rm d}\theta\int_{R-r}^{2R}\frac{H_n(\kappa\rho)}{4\kappa^{n+1}}\frac{\partial^n [(f_1-f_2)\rho]}{\partial \rho^n}{\rm d}\rho\right|^2 {\rm d}\gamma{\rm d}\kappa.
\end{align*} Consequently, we have \begin{align*} L_1\le&\int_s^{+\infty} \int_{\partial B_R} \kappa^{3}\left|\int_0^{2\pi}{\rm d}\theta\int_{R-r}^{2R}\left|\frac{H_n(\kappa\rho)}{4\kappa^{n+1}}\right|\left|\frac{\partial^n [(f_1-f_2)\rho]}{\partial \rho^n}\right|{\rm d}\rho\right|^2 {\rm d}\gamma{\rm d}\kappa\cr \le&\int_s^{+\infty} \int_{\partial B_R} \kappa^{3}\left|\int_0^{2\pi}{\rm d}\theta\int_{R-r}^{2R}\left|\frac{H_n(\kappa\rho)}{4\kappa^{n+1}}\right| \right.\cr &\left(\left|\sum\limits_{|\alpha|=n}\partial_y^{\alpha} (f_1-f_2)\right|\right.\left.\left.+\left|\sum\limits_{|\alpha|=n-1}\partial_y^{\alpha}(f_1-f_2)\right|\frac{n}{\rho}\right)\rho{\rm d}\rho\right|^2 {\rm d}\gamma{\rm d}\kappa\cr \le&\int_s^{+\infty} \int_{\partial B_R} \kappa^{3}\left|\int_0^{2\pi}{\rm d}\theta\int_{R-r}^{2R}\left|\frac{H_n(\kappa\rho)}{4\kappa^{n+1}}\right| \right.\cr &\left(\left|\sum\limits_{|\alpha|=n}\partial_y^{\alpha} (f_1-f_2)\right|\right.\left.\left.+\left|\sum\limits_{|\alpha|=n-1}\partial_y^{\alpha}(f_1-f_2)\right|\frac{n}{R-r}\right)\rho{\rm d}\rho\right|^2 {\rm d}\gamma{\rm d}\kappa. \end{align*} Noting (\ref{3.12}), we see that there exists a constant $C>0$ such that $|H_n(\kappa\rho)|\le C$ for $n\ge 1$. Hence, \begin{align*} L_1\le&\int_s^{+\infty} \int_{\partial B_R} \kappa^{3}\left|\int_0^{2\pi}{\rm d}\theta\int_{R-r}^{2R}\frac{C }{4\kappa^{n+1}} \right.\cr &\left(\left|\sum\limits_{|\alpha|=n}\partial_y^{\alpha}(f_1-f_2)\right|\right.\left.\left. +\left|\sum\limits_{|\alpha|=n-1}\partial_y^{\alpha}(f_1-f_2)\right|\frac{n}{R-r} \right)\rho{\rm d}\rho\right|^2 {\rm d}\gamma{\rm d}\kappa.
\end{align*} Changing back to the Cartesian coordinates with respect to $y$, we have \begin{align} L_1\le&\int_s^{+\infty} \int_{\partial B_R} \kappa^{3}\left|\int_{B_R}\frac{C}{4\kappa^{n+1}} \right.\cr &\left(\left|\sum\limits_{|\alpha|=n}\partial_y^{\alpha}(f_1-f_2)\right|\right.\left.\left. +\left|\sum\limits_{|\alpha|=n-1}\partial_y^{\alpha}(f_1-f_2)\right|\frac{n}{R-r} \right){\rm d}y\right|^2 {\rm d}\gamma{\rm d}\kappa\cr \label{3.13}\le& C n\|f_1-f_2\|^2_{H^n(B_R)}\int_s^{+\infty}\kappa^{1-2n}{\rm d}\kappa=C\frac{n}{2n-2} \|f_1-f_2\|^2_{H^n(B_R)}\frac{1}{s^{2n-2}}. \end{align} Next we estimate $L_2$ for $d=2$. A simple calculation yields \begin{align*} L_2=&\int_s^{+\infty} \int_{\partial B_R} \kappa^{2} |\partial_{\nu}u(x,\kappa)|^2{\rm d}\gamma{\rm d}\kappa\cr =&\int_s^{+\infty} \int_{\partial B_R} \kappa^{2}\left|\int_{\mathbb{R}^2}\left(\frac{\rm i}{4}\nabla_y H_0^{(1)}(\kappa|x-y|)\cdot\nu\right)(f_1-f_2)(y){\rm d}y\right|^2 {\rm d}\gamma{\rm d}\kappa. \end{align*} Noting that $\nabla_y H_0^{(1)}(\kappa|x-y|)=-\nabla_x H_0^{(1)}(\kappa|x-y|)$ and ${\rm supp}\,f_j\subset B_r\subset B_R$, we have \begin{align*} L_2=&\int_s^{+\infty} \int_{\partial B_R} \kappa^{2} |\partial_{\nu}u(x,\kappa)|^2{\rm d}\gamma{\rm d}\kappa\cr =&\int_s^{+\infty} \int_{\partial B_R} \kappa^{2}\left|\int_{\mathbb{R}^2}\left(\frac{\rm i}{4}\nabla_y H_0^{(1)}(\kappa|x-y|)\cdot\nu\right)(f_1-f_2)(y){\rm d}y\right|^2 {\rm d}\gamma{\rm d}\kappa\cr =&\int_s^{+\infty} \int_{\partial B_R} \kappa^{2}\left|\int_{\mathbb{R}^2}\frac{\rm i}{4}H_0^{(1)}(\kappa|x-y|)\left(\nabla_y(f_1-f_2)(y)\cdot\nu\right){\rm d}y\right|^2 {\rm d}\gamma{\rm d}\kappa.
\end{align*} Following an argument similar to that for the proof of (\ref{3.13}), we can obtain \begin{align} L_2 \le& C n\|f_1-f_2\|^2_{H^{n+1}(B_R)}\int_s^{+\infty}\kappa^{-2n}{\rm d}\kappa\cr \label{3.14}=&C\frac{n}{2n-1}\|f_1-f_2\|^2_{H^{n+1}(B_R)}\frac{1}{s^{2n-1}}. \end{align} Combining (\ref{3.13}) and (\ref{3.14}) completes the proof of (\ref{3.9}) for $d=2$. \end{proof} The following lemma is proved in \cite{CIL}. \begin{lemm} Let $J(z)$ be analytic in $S=\{z=x+{\rm i}y\in\mathbb{C}: -\frac{\pi}{4}<{\rm arg}\, z<\frac{\pi}{4}\}$ and continuous in $\bar{S}$, satisfying \[ \begin{cases} |J(z)|\leq\epsilon, & z\in (0, ~ L],\\ |J(z)|\leq V, & z\in S,\\ |J(0)|=0. \end{cases} \] Then there exists a function $\mu(z)$ satisfying \[ \begin{cases} \mu(z)\geq\frac{1}{2}, & z\in(L, ~ 2^{\frac{1}{4}}L),\\ \mu(z)\geq \frac{1}{\pi}((\frac{z}{L})^4-1)^{-\frac{1}{2}}, & z\in (2^{\frac{1}{4}}L, ~ \infty) \end{cases} \] such that \[ |J(z)|\leq V\epsilon^{\mu(z)}, \quad\forall\, z\in (L, ~ \infty). \] \end{lemm} \begin{lemm} Let $f_j\in\mathcal{C}_M$, $j=1,2$. Then there exists a function $\mu(s)$ satisfying \begin{equation}\label{mu} \begin{cases} \mu(s)\geq\frac{1}{2}, \quad & s\in(K, ~ 2^{\frac{1}{4}}K),\\ \mu(s)\geq \frac{1}{\pi}((\frac{s}{K})^4-1)^{-\frac{1}{2}}, \quad & s\in (2^{\frac{1}{4}}K, ~\infty), \end{cases} \end{equation} such that \[ |I_1(s)+I_2(s)|\leq CM^2 e^{(4R+1)s}\epsilon^{2\mu(s)},\quad\forall s\in (K, ~\infty), \] for $d=2,3$. \end{lemm} \begin{proof} It follows from Lemma 3.2 that \[ |[I_1(s)+I_2(s)]e^{-(4R+1)s}|\leq CM^2,\quad\forall s\in S. \] Recalling \eqref{e1} and \eqref{3.1}--\eqref{3.4}, we have \[ |[I_1(s)+I_2(s)]e^{-(4R+1)s}|\leq\epsilon^2,\quad s\in [0, ~K].
\] A direct application of Lemma 3.5 shows that there exists a function $\mu(s)$ satisfying \eqref{mu} such that \[ |[I_1(s)+I_2(s)]e^{-(4R+1)s}|\leq CM^2\epsilon^{2\mu(s)},\quad\forall s\in (K, ~\infty), \] which completes the proof. \end{proof} Now we present the proof of Theorem 2.1. \begin{proof} We may assume that $\epsilon<e^{-1}$; otherwise the estimate is obvious. Let \[ s=\begin{cases} \frac{1}{((4R+3)\pi)^{\frac{1}{3}}}K^{\frac{2}{3}}|\ln\epsilon|^{\frac{1}{4}}, & 2^{\frac{1}{4}} ((4R+3)\pi)^{\frac{1}{3}}K^{\frac{1}{3}}<|\ln\epsilon|^{\frac{1}{4}},\\ K, &|\ln\epsilon|^{\frac{1}{4}}\leq 2^{\frac{1}{4}}((4R+3)\pi)^{\frac{1}{3}}K^{\frac{1}{3}}. \end{cases} \] If $2^{\frac{1}{4}}((4R+3)\pi)^{\frac{1}{3}}K^{\frac{1}{3}}<|\ln\epsilon|^{\frac{1}{4}}$, then we have \begin{align*} |I_1(s)+I_2(s)|&\leq CM^2 e^{(4R+3)s} e^{-\frac{2|\ln\epsilon|}{\pi}((\frac{s}{K})^4-1)^{-\frac{1}{2}}}\cr &\leq CM^2 e^{\frac{(4R+3)}{((4R+3)\pi)^{\frac{1}{3}}}K^{\frac{2}{3}}|\ln\epsilon|^{\frac{1}{4}}-\frac{2|\ln\epsilon|}{\pi} (\frac{K}{s})^2}\\ &=CM^2 e^{-2\left(\frac{(4R+3)^2}{\pi}\right)^{\frac{1}{3}}K^{\frac{2}{3}} |\ln\epsilon|^{\frac{1}{2}}\left(1-\frac{1}{2} |\ln\epsilon|^{-\frac{1}{4}}\right)}. \end{align*} Noting that $\frac{1}{2} |\ln\epsilon|^{-\frac{1}{4}}<\frac{1}{2}$ and $\left(\frac{(4R+3)^2}{\pi}\right)^{\frac{1}{3}}>1$, we have \begin{align*} |I_1(s)+I_2(s)|& \leq CM^2 e^{-K^{\frac{2}{3}}|\ln\epsilon|^{\frac{1}{2}}}. \end{align*} Using the elementary inequality \[ e^{-x}\leq \frac{(6n-6d+3)!}{x^{3(2n-2d+1)}}, \quad x>0, \] we get \begin{align}\label{3.16} |I_1(s)+I_2(s)|\leq\frac{CM^2}{\left(\frac{K^2|\ln\epsilon|^{\frac{3}{2}}}{(6n-6d+3)^3}\right)^{2n-2d+1}}.
\end{align} If $|\ln\epsilon|^{\frac{1}{4}}\leq 2^{\frac{1}{4}}((4R+3)\pi)^{\frac{1}{3}}K^{\frac{1}{3}}$, then $s=K$. We have from \eqref{e1} and \eqref{3.1}--\eqref{3.4} that \[ |I_1(s)+I_2(s)|\leq \epsilon^2. \] Here we have noted that for $s>0$, $I_1(s)+I_2(s)=\int_0^s \int_{\partial B_R}\kappa^{d-1} \bigl(|\partial_{\nu}u(x,\kappa)|^2 +\kappa^2|u(x, \kappa)|^2 \bigr){\rm d}\gamma{\rm d}\kappa$. Hence we obtain from Lemma 3.3 and \eqref{3.16} that \begin{align*} &\int_0^\infty \int_{\partial B_R}\kappa^{d-1} \bigl(|\partial_{\nu}u(x,\kappa)|^2 +\kappa^2|u(x, \kappa)|^2 \bigr){\rm d}\gamma{\rm d}\kappa\\ &\leq I_1(s)+I_2(s)+\int_s^\infty \int_{\partial B_R}\kappa^{d-1} \bigl(|\partial_{\nu}u(x,\kappa)|^2 +\kappa^2|u(x, \kappa)|^2 \bigr){\rm d}\gamma{\rm d}\kappa\\ &\leq \epsilon^2+\frac{CM^2}{\left(\frac{K^2|\ln\epsilon|^{\frac{3}{2}}}{(6n-6d+3)^3}\right)^{2n-2d+1}}+\frac{C \|f_1-f_2\|^2_{H^{n+1}(B_R)}}{\left(2^{-\frac{1}{4}}((4R+3)\pi)^{-\frac{1}{3}}K^{\frac{2}{3}} |\ln\epsilon|^{\frac{1}{4}}\right)^{2n-2d+1}}. \end{align*} By Lemma 3.1, we have \[ \|f_1-f_2\|^2_{L^2(B_R)}\leq C \left(\epsilon^2 +\frac{M^2}{\left(\frac{K^2|\ln\epsilon|^{\frac{3}{2}}}{(6n-6d+3)^3}\right)^{2n-2d+1}}+\frac{M^2}{\left(\frac{K^{\frac{2}{3}}|\ln\epsilon|^{\frac{1}{4}}}{(R+1)(6n-6d+3)^3}\right)^{2n-2d+1}}\right). \] Since $K^{\frac{2}{3}}|\ln\epsilon|^{\frac{1}{4}}\leq K^2 |\ln\epsilon|^{\frac{3}{2}}$ when $K>1$ and $|\ln\epsilon|>1$, we obtain the stability estimate. \end{proof} \begin{thebibliography}{1} \bibitem{Ar} S. Arridge, Optical tomography in medical imaging, Inverse Problems, 15 (1999), R41--R93. \bibitem{BLLT-IP15} G. Bao, P. Li, J. Lin, and F. Triki, Inverse scattering problems with multi-frequencies, Inverse Problems, 31 (2015), 093001. \bibitem{BLT-JDE10} G. Bao, J.
Lin, and F. Triki, A multi-frequency inverse source problem, J. Differential Equations, 249 (2010), 3443--3465. \bibitem{BaLiTr} G. Bao, J. Lin, and F. Triki, Numerical solution of the inverse source problem for the Helmholtz equation with multiple frequency data, Contemp. Math., 548 (2011), 45--60. \bibitem{BLRX-SJNA15} G. Bao, S. Lu, W. Rundell, and B. Xu, A recursive algorithm for multifrequency acoustic inverse source problems, SIAM J. Numer. Anal., 53 (2015), 1608--1628. \bibitem{CIL} J. Cheng, V. Isakov, and S. Lu, Increasing stability in the inverse source problem with many frequencies, J. Differential Equations, 260 (2016), 4786--4804. \bibitem{DS-IEEE82} A. Devaney and G. Sherman, Nonuniqueness in inverse source and scattering problems, IEEE Trans. Antennas Propag., 30 (1982), 1034--1037. \bibitem{El} M. Eller and N. P. Valdivia, Acoustic source identification using multiple frequency information, Inverse Problems, 25 (2009), 115005. \bibitem{HKP-IP05} K.-H. Hauer, L. K\"{u}hn, and R. Potthast, On uniqueness and non-uniqueness for current reconstruction from magnetic fields, Inverse Problems, 21 (2005), 955--967. \bibitem{I-89} V. Isakov, Inverse Source Problems, AMS, Providence, RI, 1989. \bibitem{Is} V. Isakov, Inverse Problems for Partial Differential Equations, Springer-Verlag, New York, 2006. \bibitem{I-CM07} V. Isakov, Increasing stability in the continuation for the Helmholtz equation with variable coefficient, Contemp. Math., 426 (2007), 255--269. \bibitem{I-D11} V. Isakov, Increasing stability for the Schr\"{o}dinger potential from the Dirichlet-to-Neumann map, DCDS-S, 4 (2011), 631--640. \bibitem{StUh} P. Stefanov and G. Uhlmann, Thermoacoustic tomography arising in brain imaging, Inverse Problems, 27 (2011), 075011. \bibitem{Wa} G. N. Watson, A Treatise on the Theory of Bessel Functions, Cambridge University Press, 1922. \end{thebibliography} \end{document}
\begin{document} \title{On closeness to $k$-wise uniformity} \author{Ryan O'Donnell\thanks{Supported by NSF grants CCF-1618679, CCF-1717606. This material is based upon work supported by the National Science Foundation under grant numbers listed above. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the National Science Foundation (NSF).} \and Yu Zhao$^*$} \date{ } \maketitle \begin{abstract} A probability distribution over $\{-1, 1\}^n$ is \emph{$(\eps, k)$-wise uniform} if, roughly, it is $\eps$-close to the uniform distribution when restricted to any $k$ coordinates. We consider the problem of how far an $(\epsilon, k)$-wise uniform distribution can be from any globally $k$-wise uniform distribution. We show that every $(\epsilon, k)$-wise uniform distribution is $O(n^{k/2}\epsilon)$-close to a $k$-wise uniform distribution in total variation distance. In addition, we show that this bound is optimal for all even $k$: we find an $(\eps, k)$-wise uniform distribution that is $\Omega(n^{k/2}\epsilon)$-far from any $k$-wise uniform distribution in total variation distance. For $k=1$, we get a better upper bound of $O(\eps)$, which is also optimal. One application of our closeness result is to the sample complexity of testing whether a distribution is $k$-wise uniform or $\delta$-far from $k$-wise uniform. We give an upper bound of $O(n^{k}/\delta^2)$ (or $O(\log n/\delta^2)$ when $k = 1$) on the required samples. We show an improved upper bound of $\tilde{O}(n^{k/2}/\delta^2)$ for the special case of testing fully uniform vs.\ $\delta$-far from $k$-wise uniform. Finally, we complement this with a matching lower bound of $\Omega(n/\delta^2)$ when $k = 2$. Our results improve upon the best known bounds from \cite{AAKMRX07}, and have simpler proofs. 
\end{abstract} \section{Introduction} \subsection{$k$-wise uniformity and almost $k$-wise uniformity} We say that a probability distribution over $\{-1,1\}^n$ is \emph{$k$-wise uniform} if its marginal distribution on every subset of $k$ coordinates is the uniform distribution. For Fourier analysis of the Hamming cube, it is convenient to identify the distribution with its density function $\varphi : \{-1, 1\}^n \to \R^{\geq 0}$ satisfying \[ \E_{\bx \sim \{-1, 1\}^n}[\varphi(\bx)] = 1. \] We write $\bx \sim \varphi$ to denote that $\bx$ is a random variable drawn from the associated distribution with density~$\varphi$: \[ \Pr_{\bx \sim \varphi}[\bx = x] = \frac{\varphi(x)}{2^n} \] for any $x \in \{-1, 1\}^n$. Then a well-known fact is that a distribution is $k$-wise uniform if and only if the Fourier coefficient of $\varphi$ is $0$ on every subset $S \subseteq [n]$ of size between $1$ and $k$: \[ \widehat{\varphi}(S) = \E_{\bx \sim \varphi}\left[\prod_{i \in S} \bx_i\right] = 0. \] $k$-wise uniformity is an essential tool in theoretical computer science. Its study dates back to work of Rao \cite{Rao47}. He studied $k$-wise uniform sets, which are special cases of $k$-wise uniform distributions. A subset of $\{-1, 1\}^n$ is a \emph{$k$-wise uniform set} if the uniform distribution on this subset is $k$-wise uniform. Rao gave constructions of a pairwise-uniform set of size $n+1$ (when $n=2^r-1$ for any integer $r$), a $3$-wise uniform set of size $2n$ (when $n = 2^r$ for any integer $r$), and a lower bound (reproved in \cite{ABI86, CGHFRS85}) showing that a $k$-wise uniform set on $\{-1,1\}^n$ must have size at least $\Omega(n^{\lfloor k/2 \rfloor})$. An alternative proof of the lower bound for even~$k$ is shown in \cite{AGM03} using a hypercontractivity-type technique, as opposed to the linear algebra method.
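As a concrete sanity check (ours, not from the paper), one can verify the defining property by brute force: the uniform distribution on the even-parity strings of $\{-1,1\}^4$, a set of size $8$, is $3$-wise uniform but not $4$-wise uniform, since the full parity is constant on it.

```python
from itertools import product, combinations
from math import prod

def max_bias(points, k):
    """Largest |E[prod_{i in S} x_i]| over non-empty S with |S| <= k,
    for the uniform distribution on the given list of +-1 tuples."""
    n = len(points[0])
    return max(
        abs(sum(prod(x[i] for i in S) for x in points)) / len(points)
        for size in range(1, k + 1)
        for S in combinations(range(n), size)
    )

# even-parity strings of {-1,1}^4: the product of all coordinates is +1
even_parity = [x for x in product([-1, 1], repeat=4) if prod(x) == 1]

print(len(even_parity))          # 8 points
print(max_bias(even_parity, 3))  # 0.0: every 3 coordinates look uniform
print(max_bias(even_parity, 4))  # 1.0: the full parity is constant
```

This is the $d=k+1$ instance of the general fact that parity-check constructions give $k$-wise uniform sets.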
Coding theorists have also heavily studied $k$-wise uniformity, since MacWilliams and Sloane \cite{MS77} showed that linear codes with dual minimum distance $k+1$ correspond to $k$-wise uniform sets. The importance of $k$-wise independence for derandomization in theoretical computer science arose simultaneously in many papers, with \cite{KW85,Luby86} emphasizing derandomization via the most common pairwise-uniformity case, and \cite{ABI86,CGHFRS85} emphasizing derandomization based on $k$-wise independence more generally. A distribution is ``almost $k$-wise uniform'' if its marginal distribution on every $k$ coordinates is very close to the uniform distribution. Typically we say two distributions $\varphi, \psi$ are \emph{$\delta$-close} if the total variation distance between $\varphi$ and $\psi$ is at most $\delta$; and we say they are \emph{$\delta$-far} if the total variation distance between them is more than $\delta$. However, the precise notion of ``close to uniform'' has varied in previous work. Suppose $\psi$ is the density function for the marginal distribution of $\varphi$ restricted to some specific $k$ coordinates and $\bone$ is the density function for the uniform distribution. Several standard ways are introduced in \cite{AGM03,AAKMRX07} to quantify closeness to uniformity, corresponding to the $L_1, L_2, L_\infty$ norms: \begin{itemize} \item ($L_1$ norm): $\|\psi - \bone\|_1 = 2d_{\text{TV}}(\psi, \bone) \leq \eps$, where $d_{\text{TV}}$ denotes total variation distance; \item ($L_2$ norm): $\|\psi - \bone \|_2 = \sqrt{\chi^2(\psi, \bone)} = \sqrt{\sum_{S \neq \emptyset} \widehat{\psi}(S)^2} \leq \eps$, where $\chi^2(\psi, \bone)$ denotes the $\chi^2$-divergence of $\psi$ from the uniform distribution; \item ($L_\infty$ norm): $\|\psi - \bone\|_\infty \leq \eps$, or in other words, for any $x \in \{-1, 1\}^n$, \[ \left|\Pr_{\bx \sim \psi}[\bx = x] - 2^{-k}\right| \leq 2^{-k} \eps.
\] \end{itemize} Note the following: First, closeness in $L_1$ norm is the most natural for algorithmic derandomization purposes: it tells us that the algorithm cannot tell $\psi$ is different from the uniform distribution up to $\eps$ error. Second, these definitions of closeness are in increasing order of strength. On the other hand, we also have that $\| \psi-\bone\|_1 \leq \|\psi - \bone \|_\infty \leq 2^k \| \psi-\bone\|_1$; thus all of these notions are within a factor of $2^k$. We generally consider $k$ to be constant (or at worst, $O(\log n)$), so that these notions are roughly the same. A fourth reasonable notion, proposed by Naor and Naor in \cite{NN93}, is that the distribution has a small bias over every non-empty subset of at most $k$ coordinates. We say a density function $\varphi$ is \emph{$(\eps, k)$-wise uniform} if for every non-empty set $S \subseteq [n]$ with size at most $k$, \[ |\widehat{\varphi}(S)| = \left|\Pr_{\bx \sim \varphi}\left[\prod_{i \in S}\bx_i = 1\right] - \Pr_{\bx \sim \varphi}\left[\prod_{i \in S}\bx_i = -1\right]\right| \leq \epsilon. \] Here we also have $\eps = 0$ if and only if $\varphi$ is exactly $k$-wise uniform. Clearly if the marginal density of $\varphi$ over every $k$ coordinates is $\epsilon$-close to the uniform distribution in total variation distance, then $\varphi$ is~$(\eps, k)$-wise uniform. On the other hand, if $\varphi$ is~$(\epsilon,k)$-wise uniform, then the marginal density of $\varphi$ over every~$k$ coordinates is $2^{k/2}\eps$-close to the uniform distribution in total variation distance. Again, if $k$ is considered constant, this bias notion is also roughly the same as the previous notions. In the rest of the paper we prefer this~$(\epsilon, k)$-wise uniform notion for ``almost $k$-wise uniform'' because of its convenience for Fourier analysis.
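To make the bias notion concrete, here is a small illustrative computation (ours, not from the paper): the density $\varphi(x) = 1 + \eps\, x_1 x_2$ on $\{-1,1\}^4$ has exactly one nonzero low-degree Fourier coefficient, so it is $(\eps, k)$-wise uniform for every $k$, while its total variation distance from the fully uniform distribution equals $\eps/2$.

```python
from itertools import product, combinations
from math import prod

n, k, eps = 4, 2, 0.5
cube = list(product([-1, 1], repeat=n))

# density phi(x) = 1 + eps * x_0 * x_1: nonnegative and averages to 1
phi = {x: 1 + eps * x[0] * x[1] for x in cube}

def fourier(f, S):
    # phi_hat(S) = E_{x uniform}[phi(x) * prod_{i in S} x_i] = E_{x~phi}[prod x_i]
    return sum(f[x] * prod(x[i] for i in S) for x in cube) / len(cube)

max_bias = max(abs(fourier(phi, S))
               for size in range(1, k + 1)
               for S in combinations(range(n), size))
tv = sum(abs(phi[x] - 1) for x in cube) / (2 * len(cube))

print(max_bias)  # 0.5 = eps, attained at S = {0, 1}
print(tv)        # 0.25 = eps / 2
```

The same enumeration works for any density on a small cube, which is convenient for checking the norm comparisons above on examples.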
The original paper about almost $k$-wise uniformity, \cite{NN93}, was concerned with derandomization; e.g., they use $(\epsilon, k)$-wise uniformity for derandomizing the ``set balancing (discrepancy)'' problem. Alon et al.\ gave a further discussion of the relationship between almost $k$-wise uniformity and derandomization in~\cite{AGM03}. The key idea is the following: In many cases of randomized algorithms, the analysis only relies on the property that the random bits are $k$-wise uniform, as opposed to fully uniform. Since there exists an efficiently samplable $k$-wise uniform distribution on a set of size at most $O(n^{\lfloor k/2 \rfloor})$, one can reduce the number of random unbiased bits used in the algorithm down to $O(k \log n)$. To further reduce the number of random bits used, a natural line of thinking is to consider distributions which are ``almost $k$-wise uniform''. Alon et al.\ \cite{AGHP92} showed that we can deterministically construct $(\epsilon, k)$-wise uniform sets that are of size~$\text{poly}(2^k,\log n,1/ \eps)$, much smaller than exact $k$-wise uniform ones (roughly $\Omega(n^{\lfloor k/2 \rfloor})$ size). Therefore we can use substantially fewer random bits by taking random strings from an almost $k$-wise uniform distribution. However, we need to ensure that the original analysis of the randomized algorithm still holds under the almost $k$-wise uniform distribution. That is, even if the randomized algorithm behaves well on a $k$-wise uniform distribution, it may or may not work as well with an $(\epsilon, k)$-wise uniform distribution, even when the parameter~$\epsilon$ is small.
A natural question that arises, posed in \cite{AGM03}, is the following: \emph{How small can $\delta$ be such that the following is true? For every $(\eps, k)$-wise uniform distribution $\varphi$ on $\{-1, 1\}^n$, $\varphi$ is $\delta$-close to some $k$-wise uniform distribution.} In this paper, we will refer to this question as \emph{the Closeness Problem}. \subsubsection{Previous work and applications} On one hand, the main message of \cite{AGM03} is a lower bound: For every even constant $k > 4$, they gave an $(\eps,k)$-wise uniform distribution with $\eps = O (1/n^{k/4 - 1})$, yet which is $\frac12$-far from every $k$-wise uniform distribution in total variation distance. On the other hand, \cite{AGM03} proved a very simple theorem that $\delta \leq O(n^k \eps)$ always holds. Despite its simplicity, this upper bound has been used many times in well-known results. One application is in circuit complexity. \cite{AGM03}'s upper bound is used for fooling disjunctive normal formulas (DNF) \cite{Bazzi09} and $\AC^0$ \cite{Braverman10}. In these works, once the authors showed that $k$-wise uniformity suffices to fool DNF/$\AC^0$, they deduced that $(O(1/n^k),k)$-wise uniform distributions suffice, and hence that $O(1/n^k)$-biased sets suffice trivially. \cite{AGM03}'s upper bound is also used as a tool for the construction of two-source extractors for a similar reason in \cite{CZ16,Li16}. Another application is to hardness of constraint satisfaction problems ($\CSP$s). Austrin and Mossel \cite{AM09} showed that one can obtain integrality gaps and UGC-hardness for CSPs based on $k$-wise uniform distributions of small support size. If a predicate is $k$-wise uniform, Kothari et al.\ \cite{KMOW17} showed that one can get SOS-hardness of refuting random instances of it when there are around $n^{(k+1)/2}$ constraints.
Indeed, \cite{KMOW17} shows that if we have a predicate that is $\delta$-close to $k$-wise uniform, then with roughly $n^{(k+1)/2}$ random constraints, SOS cannot refute that a $(1-O(\delta))$-fraction of constraints are satisfiable. This also motivates studying $\delta$-closeness to $k$-wise uniformity and how it relates to Fourier coefficients. Such closeness is also relevant for hardness of random $\CSP$s, as shown in \cite{AOW15}. Alon et al.\ \cite{AAKMRX07} investigated the Closeness Problem further by improving the upper bound to $\delta = O((n\log n)^{k/2} \eps)$. Indeed, they showed the strictly stronger fact that a distribution is $O\!\left(\sqrt{\W{1\dots k}[\varphi]} \log^{k/2} n\right)$-close to some $k$-wise uniform distribution, where $\W{1 \dots k}[\varphi] = \sum_{1 \leq |S| \leq k} \wh{\varphi}(S)^2$. Rubinfeld and Xie \cite{RN13} generalized some of these results to non-uniform $k$-wise independent distributions over larger product spaces. Let us briefly summarize the method \cite{AAKMRX07} used to prove their upper bounds. Given an $(\epsilon, k)$-wise uniform $\varphi$, they first generate a $k$-wise uniform ``pseudo-distribution'' $\varphi'$ by forcing all Fourier coefficients at degree at most $k$ to be zero. It is a ``pseudo-distribution'' because some points might have negative density. After this, they use the fully uniform distribution and $k$-wise uniform distributions with small support size to try to mend all points to be nonnegative. They bound the weight of these mending distributions to upper-bound the distance incurred by the mending process. This mending process uses the fully uniform distribution to mend the small negative weights and uses $k$-wise uniform distributions with small support size to correct the large negative weights point by point. Optimizing the threshold between small and large weights introduces a factor of $(\log n)^{k/2}$.
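A toy version of the first step of this mending approach (our own illustration, not code from \cite{AAKMRX07}): zero out the low-degree Fourier coefficients of a density and observe that the resulting ``pseudo-density'' can indeed go negative.

```python
from itertools import product, combinations
from math import prod

def fourier_coeffs(f, n):
    """All coefficients hat f(S) = E_{x uniform}[f(x) x^S], brute force."""
    pts = list(product([-1, 1], repeat=n))
    return {S: sum(f[x] * prod(x[i] for i in S) for x in pts) / 2 ** n
            for size in range(n + 1)
            for S in combinations(range(n), size)}

def zero_low_degree(f, n, k):
    """Erase hat f(S) for 1 <= |S| <= k, keeping all other coefficients."""
    c = fourier_coeffs(f, n)
    pts = list(product([-1, 1], repeat=n))
    return {x: sum(v * prod(x[i] for i in S) for S, v in c.items()
                   if not 1 <= len(S) <= k)
            for x in pts}

n = 3
# Density of the point mass at (1,1,1): value 2^n there, 0 elsewhere.
f = {x: 0.0 for x in product([-1, 1], repeat=n)}
f[(1, 1, 1)] = 2.0 ** n
g = zero_low_degree(f, n, 1)  # a 1-wise uniform "pseudo-density"
```

Here $g = 1 + \sum_{|S|=2} x^S + x_1x_2x_3$ takes the value $-1$ at some points, so the mending step of \cite{AAKMRX07} is genuinely needed.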
Though they did not mention it explicitly, they also give a lower bound for the Closeness Problem of~$\delta \geq \Omega\left(\frac{n^{(k-1)/2}}{\log n} \eps\right)$ for $k > 2$ by considering the uniform distribution on a set of $O(n^k)$ randomly chosen strings. No previous work gave any lower bound for the most natural case of $k = 2$. \subsubsection{Our result} In this paper, we show sharper upper and lower bounds for the Closeness Problem, which are tight for $k$ even and $k = 1$. Compared with the result in \cite{AAKMRX07}, we get rid of the factor of $(\log n)^{k/2}$. \begin{theorem} \label{thm:section1_closeness_upperbound} Any density $\varphi$ over $\{-1, 1\}^n$ is $\delta$-close to some $k$-wise uniform distribution, where \[ \delta \leq e^k \sqrt{\W{1 \dots k}[\varphi]} = e^k \sqrt{\sum_{1 \leq |S| \leq k} \widehat{\varphi}(S)^2}. \] Consequently, if $\varphi$ is $(\epsilon, k)$-wise uniform, i.e., $|\widehat{\varphi}(S)| \leq \epsilon$ for every non-empty set $S$ with size at most $k$, then \[ \delta \leq e^k n^{k/2} \eps. \] For the special case $k = 1$, the corresponding $\delta$ can be further improved to $\delta \leq \epsilon$. \end{theorem} Our new technique is to mend the original distribution into a $k$-wise uniform one all at once. We want to show that some mixture distribution $(\varphi + w \psi)$ is $k$-wise uniform with small mixture weight $w$. The distance between the final mixture distribution and the original distribution $\varphi$ is bounded by $O(w)$. Therefore we only need to show that the mending distribution $\psi$ exists for some small weight $w$. The existence of such a distribution~$\psi$ can be written as the feasibility of a linear program (LP). We upper bound $w$ by bounding the dual LP, using the hypercontractivity inequality. Our result is sharp for all even $k$, and is also sharp for $k = 1$.
We state the matching lower bound for even $k$: \begin{theorem} \label{thm:section1_closeness_lowerbound} For any $n$ and even $k$, and small enough $\epsilon$, there exists some $(\epsilon, k)$-wise uniform distribution $\varphi$ over $\{-1, 1\}^n$, such that $\varphi$ is $\delta$-far from every $k$-wise uniform distribution in total variation distance, where \[ \delta \geq \Omega\left(\frac1k\right)^k n^{k/2} \eps. \] \end{theorem} Our method for proving this lower bound is again LP duality. Our examples in the lower bound are symmetric distributions with Fourier weight only on level $k$. The density functions can then be written in terms of binary Krawtchouk polynomials, which behave similarly to Hermite polynomials when $n$ is large. Our dual LP bounds use various properties of Krawtchouk and Hermite polynomials. Interestingly, both our upper and lower bounds utilize LP duality, which we believe is the most natural way of looking at this problem. We remark that we can derive a lower bound for odd $k$ from Theorem~\ref{thm:section1_closeness_lowerbound} trivially by replacing $k$ by~$k-1$. There exists a gap of $\sqrt{n}$ between the resulting upper and lower bounds for odd $k$. We believe that the lower bound is tight, and the upper bound may be improvable by a factor of $\sqrt{n}$, as it is in the special case $k = 1$. We leave it as a conjecture for further work: \begin{conjecture} Suppose the distribution $\varphi$ over $\{-1, 1\}^n$ is $(\epsilon, k)$-wise uniform. Then $\varphi$ is $\delta$-close to some $k$-wise uniform distribution in total variation distance, where \[ \delta \leq O (n^{\lfloor k/2 \rfloor} \eps). \] \end{conjecture} \subsection{The Testing Problem} Another application of the Closeness Problem is to property testing of $k$-wise uniformity. Suppose we have sample access to an unknown and arbitrary distribution; we may wonder whether the distribution has a certain property. This question has received tremendous attention in the field of statistics.
The main goal in the study of property testing is to design algorithms that use as few samples as possible, and to establish lower bounds matching these sample-efficient algorithms. In particular, we consider the property of being~$k$-wise uniform: \emph{Given sample access to an unknown and arbitrary distribution $\varphi$ on $\{-1, 1\}^n$, how many samples do we need to distinguish the case that $\varphi$ is $k$-wise uniform from the case that $\varphi$ is $\delta$-far from every $k$-wise uniform distribution?} In this paper, we will refer to this question as the \emph{Testing Problem}. We say a testing algorithm is \emph{a $\delta$-tester for $k$-wise uniformity} if the algorithm outputs ``Yes'' with high probability when the distribution $\varphi$ is $k$-wise uniform, and the algorithm outputs ``No'' with high probability when the distribution $\varphi$ is $\delta$-far from any $k$-wise uniform distribution (in total variation distance). Property testing is well studied for Boolean functions and distributions. Previous work studied the testing of related properties of distributions, including uniformity \cite{GR11,BFRSW00,RS09} and independence \cite{BFFKRW01,BKR04, ADK15, DK16}. The papers \cite{AGM03, AAKMRX07, Xie12} discussed the problem of testing $k$-wise uniformity. \cite{AGM03} constructed a $\delta$-tester for $k$-wise uniformity with sample complexity $O(n^{2k} / \delta^2)$, and \cite{AAKMRX07} improved it to $O(n^k \log^{k+1} n / \delta^2)$. As for lower bounds, \cite{AAKMRX07} showed that $\Omega(n^{(k-1)/2} / \delta)$ samples are necessary, albeit only for $k > 2$. This lower bound is in particular for distinguishing the uniform distribution from $\delta$-far-from-$k$-wise distributions.
We show a better upper bound for sample complexity: \begin{theorem} \label{thm:section1_testing_upperbound} There exists a $\delta$-tester for $k$-wise uniformity of distributions on $\{-1, 1\}^n$ with sample complexity $O\left(\frac1k\right)^{k/2}\frac{n^k}{\delta^2}$. For the special case of $k=1$, the sample complexity is $O\left(\frac{\log n}{\delta^2}\right)$. \end{theorem} A natural $\delta$-tester of $k$-wise uniformity is mentioned in \cite{AAKMRX07}: Estimate all Fourier coefficients up to level $k$ from the samples; if they are all smaller than $\epsilon$ then output ``Yes''. In fact, this algorithm is exactly attempting to check whether the distribution is $(\eps, k)$-wise uniform. Hence the sample complexity depends on the upper bound for the Closeness Problem. Therefore we can reduce the sample complexity of this algorithm down to $O\left(\frac{n^k \log n}{\delta^2}\right)$ via our improved upper bound for the Closeness Problem. One $\log n$ factor remains because we need to union-bound over the $O(n^k)$ Fourier coefficients up to level $k$. To further get rid of the last $\log n$ factor, we present a new algorithm that estimates the Fourier weight up to level $k$, $\sum_{1 \leq |S| \leq k} \widehat{\varphi}^2(S)$, rather than estimating these Fourier coefficients one by one. Unfortunately, a lower bound for the Closeness Problem does not imply a lower bound for the Testing Problem directly. In \cite{AAKMRX07}, they showed that a uniform distribution over a random subset of $\{-1, 1\}^n$ of size $O(\frac{n^{k-1}}{\delta^2})$ is almost surely $\delta$-far from any $k$-wise uniform distribution. On the other hand, by the Birthday Paradox, it is hard to distinguish between the fully uniform distribution on all strings of length $n$ and a uniform distribution over a random set of such size. This gives a lower bound of $\Omega(n^{(k-1)/2}/\delta)$ for the Testing Problem.
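The plug-in tester just described can be sketched as follows (our own toy implementation; in a real tester the threshold would be set from the Closeness bound, here it is an arbitrary illustrative value):

```python
from itertools import combinations
from math import prod
import random

def estimated_coeffs(samples, k):
    """Empirical phi_hat(S) = avg over samples of x^S, for 1 <= |S| <= k."""
    n, m = len(samples[0]), len(samples)
    return {S: sum(prod(x[i] for i in S) for x in samples) / m
            for size in range(1, k + 1)
            for S in combinations(range(n), size)}

def plug_in_tester(samples, k, threshold):
    """Accept iff every estimated coefficient up to level k is small."""
    return all(abs(c) <= threshold
               for c in estimated_coeffs(samples, k).values())

random.seed(0)
uniform_samples = [tuple(random.choice([-1, 1]) for _ in range(6))
                   for _ in range(4000)]
constant_samples = [(1,) * 6] * 50   # maximally far from 1-wise uniform
```

With $4000$ samples each empirical coefficient of the truly uniform source concentrates within roughly $1/\sqrt{4000} \approx 0.016$ of zero, so a threshold of $0.1$ separates the two cases comfortably.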
Their result only holds for $k > 2$; there was no previous non-trivial lower bound for testing pairwise uniformity. We show a lower bound for the pairwise case. \begin{theorem} \label{thm:section1_testing_lowerbound} Any $\delta$-tester for pairwise uniformity of distributions on $\{-1, 1\}^n$ needs at least $\Omega(\frac{n}{\delta^2})$ samples. \end{theorem} For this lower bound we analyze a symmetric distribution with non-zero Fourier coefficients only on level~2. We prove that it is hard to distinguish a randomly shifted version of this distribution from the fully uniform distribution. This lower bound is also better than that of \cite{AAKMRX07} in that we have a better dependence on the parameter $\delta$ ($\frac{1}{\delta^2}$ rather than $\frac1{\delta}$). Unfortunately, we are unable to generalize our lower bound to higher~$k$. Notice that for our new upper and lower bounds for $k$-wise uniformity testing, there still remains a quadratic gap for $k \geq 2$, indicating that the upper bound can likely be improved. Both the lower bound in our paper and that in \cite{AAKMRX07} show that it is hard to distinguish between the fully uniform distribution and some specific sets of distributions that are far from $k$-wise uniform. We show that if one wants to improve the lower bound, one will need to use a distribution in the ``Yes'' case that is \emph{not} fully uniform, because we give a sample-efficient algorithm for distinguishing between fully uniform and $\delta$-far from $k$-wise uniform: \begin{theorem} \label{thm:section1_fully_testing} For any constant $k$, for testing whether a distribution is fully uniform or $\delta$-far from every $k$-wise uniform distribution, there exists an algorithm with sample complexity $O(k)^k\cdot n^{k/2} \cdot \frac1{\delta^2} \cdot \left(\log \frac{n}{\delta}\right)^{k/2}$.
In fact, for testing whether a distribution is $\alpha k$-wise uniform or $\delta$-far from $k$-wise uniform with $\alpha > 4$, there exists an algorithm with sample complexity $O(\alpha)^{k/2} \cdot n^{k/2} \cdot \frac1{\delta^2} \cdot \left(\frac{n^k}{\delta^4}\right)^{1/(\alpha - 2)}$. \end{theorem} We remark that testing full uniformity can be treated as a special case of testing $\alpha k$-wise uniformity approximately, by setting $\alpha = \log \frac{n}{\delta}$. Testing full uniformity has been studied in \cite{GR11,BFRSW00}. Paninski \cite{Paninski08} showed that testing whether an unknown distribution on $\{-1,1\}^n$ is $\Theta(1)$-close to fully uniform requires $2^{n/2}$ samples. Rubinfeld and Servedio \cite{RS09} studied testing whether an unknown monotone distribution is fully uniform or not. The fully uniform distribution has the nice property that every pair of samples differs in $\frac{n}{2} \pm O(\sqrt{n})$ bits with high probability when the sample size is small. Our algorithm first rejects those distributions that violate this property. We show that the remaining distributions have small Fourier weight up to level $2k$. Hence, by following an analysis similar to that of the tester in Theorem~\ref{thm:section1_testing_upperbound}, we can get an improved upper bound when these lower Fourier weights are small. The lower bound remains the same as for testing $k$-wise vs.\ far from $k$-wise. Our tester is tight up to a logarithmic factor for the pairwise case, and is tight up to a factor of $\tilde{O}(\sqrt{n})$ when $k > 2$. We compare our results and the previous best known bounds from \cite{AAKMRX07} in Table~\ref{table}. (We omit constant factors depending on $k$.)
\begin{table}[h] \centering \small
\begin{tabular}{ |l||c|c|c|c| }
\hline
& \multicolumn{2}{c|}{Upper bound} & \multicolumn{2}{c|}{Lower bound} \\ \cline{2-5}
& \cite{AAKMRX07} & Our paper & \cite{AAKMRX07} & Our paper\\ \hline
\multirow{2}{*}{Closeness Problem} & \multirow{2}{*}{$O(n^{k/2} (\log n)^{k/2} \epsilon)$} & {$O(n^{k/2} \epsilon)$ } & \multirow{2}{*}{$\Omega\left(\frac{n^{(k-1)/2}}{\log n} \epsilon\right)$} & \multirow{2}{*}{$\Omega(n^{\lfloor k/2 \rfloor} \epsilon)$} \\
&&$O(\epsilon)$ for $k=1$&&\\ \hline
Testing $k$-wise vs. &\multirow{2}{*}{$\displaystyle O\left(\frac{n^{k} (\log n)^{k + 1} }{\delta^2}\right)$} & {$O\left(\frac{n^{k}}{\delta^2}\right)$ } & \multirow{2}{*}{$\displaystyle \Omega\left(\frac{n^{(k-1)/2}}{\delta}\right)$ for $k > 2$} & \multirow{2}{*}{$\displaystyle \Omega\left(\frac{n}{\delta^2}\right)$ for $k = 2$}\\
far from $k$-wise&&$O\left(\frac{\log n}{\delta^2}\right)$ for $k = 1$&&\\ \hline
Testing $n$-wise vs.\ &\multirow{2}{*}{$\displaystyle O\left(\frac{n^{k} (\log n)^{k + 1} }{\delta^2}\right)$} & {$O\left(\frac{n^{k/2}}{\delta^2} (\log \frac{n}{\delta})^{k/2}\right)$ } & \multirow{2}{*}{$\displaystyle \Omega\left(\frac{n^{(k-1)/2}}{\delta}\right)$ for $k > 2$} & \multirow{2}{*}{$\displaystyle \Omega\left(\frac{n}{\delta^2}\right)$ for $k = 2$}\\
far from $k$-wise&&$O\left(\frac{\log n}{\delta^2}\right)$ for $k = 1$&&\\ \hline
\end{tabular}
\caption{Comparison of our results to \cite{AAKMRX07}} \label{table}
\end{table} \subsection{Organization} Section~\ref{sec:pre} contains definitions and notations. We will discuss upper and lower bounds for the Closeness Problem in Section~\ref{sec:closeness}. We will discuss the sample complexity of testing $k$-wise uniformity in Section~\ref{sec:testing}. We present a tester for distinguishing between $\alpha k$-wise uniformity (or full uniformity) and far-from-$k$-wise uniformity in Section~\ref{sec:fully}.
\section{Preliminaries} \label{sec:pre} \subsection{Fourier analysis of Boolean functions} We use $[n]$ to denote the set $\{1, \dots, n\}$. We denote the symmetric difference of two sets $S$ and $T$ by $S \oplus T$. For Fourier analysis we use notations consistent with \cite{OD14}. Every function $f : \{-1, 1\}^n \to \R$ has a unique representation as a multilinear polynomial \[ f(x) = \sum_{S \subseteq [n]} \widehat{f}(S) x^S \quad \text{where} \quad x^S = \prod_{i \in S} x_i. \] We call $\widehat{f}(S)$ the Fourier coefficient of $f$ on $S$. We use $\bx \sim \{-1, 1\}^n$ to denote that $\bx$ is uniformly distributed on~$\{-1, 1\}^n$. We can represent Fourier coefficients as \[ \widehat{f}(S) = \E_{\bx \sim \{-1, 1\}^n} \left[f(\bx) \bx^S\right]. \] We define an inner product $\langle \cdot, \cdot \rangle$ on pairs of functions $f, g : \{-1,1\}^n \to \R$ by \[ \langle f, g \rangle = \E_{\bx \sim \{-1, 1\}^n} [f(\bx) g(\bx)] = \sum_{S \subseteq[n]}\widehat{f}(S) \widehat{g}(S). \] We introduce the following $p$-norm notation: $\|f\|_p = \left(\E[|f(\bx)|^p]\right)^{1/p}$, and the Fourier $\ell_p$-norm is $\|\widehat{f}\|_p = \left(\sum_{S \subseteq [n]}|\widehat{f}(S)|^p\right)^{1/p}$. We say the \emph{degree} of a Boolean function $f$, denoted $\textnormal{deg}(f)$, is $k$ if its Fourier polynomial has degree $k$. We denote $f^{=k}(x) = \sum_{|S| = k} \widehat{f}(S) x^S$, and $f^{\leq k}(x) = \sum_{|S| \leq k} \widehat{f}(S) x^S$. We denote the \emph{Fourier weight} on level $k$ by~$\W{k}[f] = \sum_{|S| = k} \widehat{f}(S)^2$. We denote~$\W{1\dots k}[\varphi] = \sum_{1 \leq |S| \leq k} \wh{\varphi}(S)^2$. We define the convolution $f * g$ of a pair of functions $f, g : \{-1, 1\}^n \to \R$ to be \[ (f * g)(x) = \E_{\by \sim \{-1, 1\}^n}[f(\by)g(x \circ \by)], \] where $\circ$ denotes entry-wise multiplication. The effect of convolution on Fourier coefficients is that $\widehat{f * g}(S) = \wh{f}(S) \wh{g}(S)$.
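The identity $\widehat{f * g}(S) = \wh{f}(S) \wh{g}(S)$, with the convolution $(f*g)(x) = \E_{\by}[f(\by)g(x \circ \by)]$ as in \cite{OD14}, can be checked by brute force on small $n$ (a self-contained sketch; names are ours):

```python
from itertools import product, combinations
from math import prod
import random

def fhat(f, S, n):
    """hat f(S) = E_{x uniform}[f(x) x^S]."""
    pts = list(product([-1, 1], repeat=n))
    return sum(f[x] * prod(x[i] for i in S) for x in pts) / 2 ** n

def convolve(f, g, n):
    """(f * g)(x) = E_y[f(y) g(x o y)], with o the entrywise product."""
    pts = list(product([-1, 1], repeat=n))
    return {x: sum(f[y] * g[tuple(a * b for a, b in zip(x, y))]
                   for y in pts) / 2 ** n
            for x in pts}

n = 3
random.seed(1)
pts = list(product([-1, 1], repeat=n))
f = {x: random.uniform(-1, 1) for x in pts}
g = {x: random.uniform(-1, 1) for x in pts}
h = convolve(f, g, n)
subsets = [S for size in range(n + 1) for S in combinations(range(n), size)]
```

The check below verifies the convolution theorem coefficient by coefficient for two random functions.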
\subsection{Densities and distances} When working with probability distributions on $\{-1, 1\}^n$, we prefer to define them via \emph{density} functions. A density function $\varphi : \{-1, 1\}^n \to \R^{\geq 0}$ is a nonnegative function satisfying $ \wh{\varphi}(\emptyset) = \E_{\bx \sim \{-1, 1\}^n}[\varphi(\bx)] = 1. $ We write $\by \sim \varphi$ to denote that $\by$ is a random variable drawn from the distribution $\varphi$, defined by \[ \Pr_{\by \sim \varphi}[\by = y] = \frac{\varphi(y)}{2^n}, \] for all $y \in \{-1, 1\}^n$. We identify distributions with their density functions when there is no risk of confusion. We denote $\varphi^{+t}(x) = \varphi(x \circ t)$. We denote by $\bone_A$ the density function for the uniform distribution on support set $A$. The density function associated to the fully uniform distribution is the constant function $\bone$. The following lemma about density functions of degree at most $k$ follows from Fourier analysis and hypercontractivity. \begin{lemma} \label{lem:fourierl1bound} Let $\varphi : \{-1, 1\}^n \to \R^{\geq 0}$ be a density function of degree at most $k$. Then \[ \|\widehat{\varphi}\|_2 = \sqrt{\sum_{S} \widehat{\varphi}(S)^2} \leq e^k. \] \end{lemma} \begin{proof} \[ \|\widehat{\varphi}\|_2 = \|\varphi\|_2 \leq e^k \|\varphi\|_1 = e^k. \] The first equality holds by Parseval's Theorem (see Section~1.4 in \cite{OD14}). The inequality holds by hypercontractivity (see Theorem~9.22 in \cite{OD14}). The last equality holds since $\varphi$ is a density function. \end{proof} A distribution $\varphi$ over $\{-1, 1\}^n$ is \emph{$k$-wise uniform} if and only if $\widehat{\varphi}(S) = 0$ for all $1 \leq |S| \leq k$ (see Chapter 6.1 in \cite{OD14}). We say that distribution $\varphi$ over $\{-1, 1\}^n$ is \emph{$(\epsilon, k)$-wise uniform} if $|\widehat{\varphi}(S)| \leq \eps$ for all~$1 \leq |S| \leq k$.
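For a concrete example (our own illustration), the density $\varphi(x) = 1 + x_1 x_2 \cdots x_n$ of the uniform distribution on even-parity strings is $(n-1)$-wise uniform but $\frac12$-far from fully uniform:

```python
from itertools import product, combinations
from math import prod

def density_hat(phi, S, n):
    """hat phi(S) = E_{x uniform}[phi(x) x^S] for a density table phi."""
    pts = list(product([-1, 1], repeat=n))
    return sum(phi[x] * prod(x[i] for i in S) for x in pts) / 2 ** n

n = 4
pts = list(product([-1, 1], repeat=n))
# Density of the uniform distribution on the strings with an even
# number of -1's: nonnegative with mean 1, so a valid density.
phi = {x: 1 + prod(x) for x in pts}

# Total variation distance to the fully uniform density, (1/2) E|phi - 1|.
tv_to_uniform = sum(abs(phi[x] - 1) for x in pts) / (2 * 2 ** n)
```

Every Fourier coefficient of $\varphi$ vanishes on the levels $1$ through $n-1$, while the coefficient on $[n]$ equals $1$, matching the characterization above.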
The most common way to measure the distance between two probability distributions is via their \emph{total variation distance}. If the distributions have densities $\varphi$ and $\psi$, then the total variation distance is defined to be \[ d_{\textnormal{TV}}(\varphi, \psi) = \sup_{A \subseteq \{-1, 1\}^n} \left|\Pr_{\bx \sim \varphi}[\bx \in A] - \Pr_{\bx \sim \psi}[\bx \in A]\right| = \frac12\E_{\bx}\left[|\varphi(\bx) - \psi(\bx)|\right] = \frac12 \|\varphi - \psi \|_1. \] We say that $\varphi$ and $\psi$ are \emph{$\delta$-close} if $d_{\textnormal{TV}}(\varphi, \psi) \leq \delta$. Supposing $H$ is a set of distributions, we denote \[ d_{\textnormal{TV}}(\varphi, H) = \min_{\psi \in H} d_{\textnormal{TV}}(\varphi, \psi). \] In particular, we denote the set of $k$-wise uniform densities by $\kWISE{k}$. We say that density $\varphi$ is \emph{$\delta$-close to $k$-wise uniform} if $d_{\textnormal{TV}}(\varphi, \kWISE{k}) \leq \delta$, and is \emph{$\delta$-far} otherwise. \subsection{Krawtchouk and Hermite polynomials} Krawtchouk polynomials were introduced in \cite{Krawtchouk1929}, and arise in the analysis of Boolean functions as shown in \cite{Levenshtein95,Kalai2002}. Consider the following Boolean function of degree $k$ and input length $n$: $f(x) = \sum_{|S| = k} x^S$. It is symmetric and therefore only depends on the Hamming weight of $x$. Let $t$ be the number of $-1$'s in $x$. Then the output of $f$ is exactly the same as the Krawtchouk polynomial $K_k^{(n)}(t)$. \begin{definition} We denote by $K_k^{(n)}(t)$ the Krawtchouk polynomial: \[ K_k^{(n)}(t) = \sum_{j = 0}^{k}(-1)^j {t \choose j}{n-t \choose k-j}, \] for $k = 0, 1, \dots, n$. \end{definition} We will also use Hermite polynomials in our analysis. \begin{definition} We denote by $h_k(z)$ the normalized Hermite polynomial: \[ h_k(z) = \frac1{\sqrt{k!}} (-1)^k e^{\frac12z^2} \frac{d^k}{dz^k} e^{-\frac12z^2}. \] Its explicit formula is \[ h_k(z) = \sqrt{k!} \cdot \left(\frac{z^k}{0!! 
\cdot k!} - \frac{z^{k-2}}{2!! \cdot (k-2)!} + \frac{z^{k-4}}{4!! \cdot (k-4)!} - \frac{z^{k-6}}{6!! \cdot (k-6)!} + \cdots\right). \] \end{definition} One useful fact is that the derivative of a Hermite polynomial is a scalar multiple of a Hermite polynomial (see Exercise~11.10 in \cite{OD14}): \begin{fact} \label{fact:hermite} For any integer $k \geq 1$, we have \[ \frac{d}{dz}h_k(z) = \sqrt{k} h_{k-1}(z). \] \end{fact} The relationship between Krawtchouk and Hermite polynomials is that we can treat Hermite polynomials as a limit version of Krawtchouk polynomials when $n$ goes to infinity (see Exercise~11.14 in \cite{OD14}). \begin{fact} \label{fact:kravchuk} For all $k \in \N$ and $z \in \R$ we have \[ {n \choose k}^{-1/2} \cdot K_k^{(n)}\left(\frac{n-z\sqrt{n}}2\right) \xrightarrow{n \to \infty} h_k(z). \] \end{fact} Instead of analyzing Krawtchouk polynomials, it is easier to study Hermite polynomials when $n$ is large because Hermite polynomials have a more explicit form. We present some basic properties of Hermite polynomials with brief proofs. \begin{lemma} \label{lem:hermite} The following are properties of $h_k(z)$: \begin{enumerate} \item $|h_k(z)| \leq h_k(k)$ for any $|z| \leq k$; \item $h_k(z)$ is positive and increasing when $z \geq k$; \item $h_k(Ck) \leq (Ck)^k/\sqrt{k!}$ for any constant $C \geq 1$. \end{enumerate} \end{lemma} \begin{proof} We will treat the case of $k = 4i + 2$ for some integer $i$. The proof for the general case is similar. When $k = 4i + 2$, we can group adjacent terms into pairs: \[ h_k(z) = \sqrt{k!} \cdot \sum_{i = 0}^{(k-2)/4} \frac{z^{k-4i-2}}{(4i+2)!! \cdot (k-4i)!}((4i+2)z^2 - (k-4i)(k-4i - 1)). \] \begin{enumerate} \item Notice that $|(4i+2)z^2 - (k-4i)(k-4i - 1)|$ is always between $-(k-4i)(k-4i - 1)$ and $(4i+2)k^2 - (k-4i)(k-4i - 1)$ when $|z| \leq k$. Both the upper and lower bound have absolute value at most $(4i+2)k^2 - (k-4i)(k-4i - 1)$. Therefore by the triangle inequality we have $|h_k(z)| \leq h_k(k)$. 
\item It is easy to check that $((4i+2)z^2 - (k-4i)(k-4i - 1))$ is positive when $z \geq k$. Then by Fact~\ref{fact:hermite}, $\frac{d}{dz}h_k(z) = \sqrt{k} h_{k - 1}(z) > 0$ when $z \geq k$. \item This follows from the explicit formula: when $z \geq k$, the terms alternate in sign and decrease in absolute value, so the sum is at most its first term, $z^k/\sqrt{k!}$. \qedhere \end{enumerate} \end{proof} \section{The Closeness Problem} \label{sec:closeness} In this section, we prove the upper bound in Theorem~\ref{thm:section1_closeness_upperbound} and the lower bound in Theorem~\ref{thm:section1_closeness_lowerbound}. One interesting fact is that we use duality of linear programming (LP) in both the upper and the lower bound. We think this is the proper perspective for analyzing these questions. \subsection{Upper bound} The key idea for proving the upper bound is to use mixture distributions. Given an ($\epsilon$, $k$)-wise uniform density $\varphi$, we try to mix it with some other distribution $\psi$ using mixture weight $w$, such that the mixture distribution $\frac1{1+w}(\varphi + w \psi)$ is $k$-wise uniform and is close to the original distribution. The following lemma shows that the distance between the original distribution and the mixture distribution is bounded by the weight $w$. \begin{lemma} \label{lem:weighted} If $\varphi' =\frac1{1+w}(\varphi + w \psi)$ for some $0 \leq w \leq 1$ and density functions $\varphi, \psi$, then $d_{\textnormal{TV}}(\varphi, \varphi')\leq w$. \end{lemma} \begin{proof} $ d_{\textnormal{TV}}(\varphi, \varphi') = \frac12 \|\varphi'-\varphi\|_1 = \frac 12 \|\varphi' - ((1+w)\varphi' - w\psi)\|_1 = \frac12 w\|\varphi'-\psi\|_1 \leq w. $ \end{proof} Therefore we only need to show the existence of an appropriate $\psi$ for some small $w$. The constraints on~$\psi$ can be written as an LP feasibility problem, so by Farkas' Lemma we only need to show that its dual is not feasible. The variables in the dual LP can be seen as a density function of degree at most $k$.
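Lemma~\ref{lem:weighted} is easy to confirm numerically (an illustration with random densities; the helper names are ours):

```python
from itertools import product
import random

def tv(phi, psi, n):
    """d_TV = (1/2) E_{x uniform} |phi(x) - psi(x)| for density tables."""
    return sum(abs(phi[x] - psi[x]) for x in phi) / (2 * 2 ** n)

def mix(phi, psi, w):
    """The mixture density (phi + w*psi) / (1 + w)."""
    return {x: (phi[x] + w * psi[x]) / (1 + w) for x in phi}

def random_density(pts):
    """A random nonnegative function rescaled to have mean 1."""
    vals = [random.random() for _ in pts]
    mean = sum(vals) / len(vals)
    return {x: v / mean for x, v in zip(pts, vals)}

n, w = 3, 0.25
random.seed(2)
pts = list(product([-1, 1], repeat=n))
phi, psi = random_density(pts), random_density(pts)
dist = tv(phi, mix(phi, psi, w), n)
```

As in the proof, $\varphi' - \varphi = w(\psi - \varphi')$, so the distance is at most $w$ regardless of the choice of $\psi$.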
\begin{proof}[Proof of Theorem~\ref{thm:section1_closeness_upperbound} (general $k$ case)] Given density function $\varphi$, we try to find another density function $\psi$ with constraints \[ \widehat{\psi}(S) = -\frac1w \widehat{\varphi}(S) \] for all $1 \leq |S| \leq k$. Suppose such a density function $\psi$ exists. Then it is trivial that $\frac{\varphi + w\psi}{1 + w}$ is also a density function and is $k$-wise uniform. By Lemma~\ref{lem:weighted}, we conclude that $d_{\textnormal{TV}}(\varphi, \kWISE{k}) \leq w$. The rest of the proof is to show that such a $\psi$ exists when $w = e^k\sqrt{\W{1\dots k}[\varphi]}$. We can write the existence as an LP feasibility problem with variables $\psi(x)$ for $x \in \{-1, 1\}^n$ and constraints: \begin{align*} \wh{\psi}(\emptyset) &= 1, & \\ \wh{\psi}(S) &= -\frac1w \wh{\varphi}(S), & \forall 1 \leq |S| \leq k,\\ \psi(x) &\geq 0, & \forall x \in \{-1, 1\}^n, \end{align*} where $\wh{\psi}(S) = \E[\psi(\bx)\bx^S]$ is a linear combination of the variables $\psi(x)$. The dual LP has variables $\psi'(x)$ for $x \in \{-1, 1\}^n$ with constraints: \begin{align*} \wh{\psi'}(\emptyset) &= 1, & \\ \wh{\psi'}(S) &= 0, & \forall |S| > k,\\ \psi'(x) &\geq 0, &\forall x \in \{-1, 1\}^n,\\ \frac1w \sum_{1 \leq |S| \leq k} \wh{\varphi}(S) \wh{\psi'}(S) &> 1. \end{align*} The original LP is feasible if and only if its dual LP is infeasible, by Farkas' Lemma. This completes the proof, since when $w = e^k \sqrt{\W{1\dots k}[\varphi]}$, for any density function $\psi'$ of degree at most $k$ we have \[ \frac1w \sum_{1 \leq |S| \leq k} \wh{\varphi}(S) \wh{\psi'}(S) \leq \frac1{e^k\sqrt{\W{1\dots k}[\varphi]}} \sum_{1 \leq |S| \leq k} |\wh{\varphi}(S)| |\wh{\psi'}(S)| \leq \frac1{e^k} \|\wh{\psi'}\|_2 \leq 1, \] where the second inequality holds by Cauchy--Schwarz, and the last inequality holds by Lemma~\ref{lem:fourierl1bound} since $\psi'$ has degree at most $k$. \end{proof} For $k = 1$, further improvement can be achieved.
We still try to use mixture distributions. Here we want to mix the distribution $\varphi$ with indicator distributions on subsets of coordinates that have opposite biases to those of the original distribution. \begin{proof}[Proof of Theorem~\ref{thm:section1_closeness_upperbound} (case $k = 1$)] By identifying each $x_i$ with $-x_i$ if necessary, we may assume without loss of generality that $\wh{\varphi}(\{i\}) \geq 0$ for all $i$. In addition, by reordering the coordinates, we may assume without loss of generality that $0 \leq \wh{\varphi}(\{1\}) \leq \dots \leq \wh{\varphi}(\{n\}) = \eps$. Define $\psi_j$ to be the density of the distribution over $\{-1, 1\}^n$ which is uniform on coordinates $x_1, \dots, x_{j-1}$, and has $x_i$ constantly fixed to be $-1$ for $j \leq i \leq n$. It is easy to check that $\wh{\psi_j}(\{i\}) = 0$ for $i < j$ and $\wh{\psi_j}(\{i\}) = -1$ for $i \geq j$. We define $\varphi'$ as \[ \varphi' = \frac1{1+\eps}\left(\varphi + \sum_{j=1}^n w_j \psi_j\right), \] where \[ w_1 = \wh{\varphi}(\{1\}), \qquad w_j = \wh{\varphi}(\{j\}) - \wh{\varphi}(\{{j-1}\}) \quad \forall 1 < j \leq n. \] It is easy to check that $\varphi'$ is a density function and \[ \wh{\varphi'}(\{i\}) = \frac1{1+\eps} \left(\wh{\varphi}(\{i\}) + \left( \sum_{j=1}^i w_j \right) (-1) \right) = 0. \] Therefore $\varphi'$ is 1-wise uniform. Then by Lemma~\ref{lem:weighted}, \[ d_{\textnormal{TV}}(\varphi, \kWISE{1}) \leq \frac12 \|\varphi - \varphi'\|_1 \leq \sum_{j=1}^n w_j = \eps. \qedhere \] \end{proof} \subsection{Lower bound} Interestingly, our proof of the lower bound also utilizes LP duality.
We can write the Closeness Problem in the form of linear programming with variables $\varphi'(x)$ for $x \in \{-1, 1\}^n$, as follows: \begin{align*} &\text{minimize} & d_{\text{TV}}(\varphi, \varphi') &= \frac12 \|\varphi - \varphi'\|_1 &\\ &\text{subject to:} & \wh{\varphi'}(\emptyset) &= 1, & \\ && \wh{\varphi'}(S) &= 0, & \forall 1 \leq |S| \leq k,\\ && \varphi'(x) &\geq 0, & \forall x \in \{-1, 1\}^n. \end{align*} We ignore the factor of $1/2$ in the minimization for convenience in the following analysis. The dual LP, which has variables $p(x), q(x)$ for $x \in \{-1, 1\}^n$, is the following: \begin{align*} &\text{maximize} & \langle \varphi, q\rangle - \wh{p}(\emptyset) & &\\ &\text{subject to:} & p(x) - q(x) &\geq 0, & \forall x \in \{-1, 1\}^n,\\ && q(x) & \leq 1, &\forall x \in \{-1, 1\}^n,\\ && p(x) & \geq -1, &\forall x \in \{-1, 1\}^n,\\ && \textnormal{deg}(p) &\leq k. & \end{align*} Thus given a pair of functions $p, q$ on $\{-1,1\}^n$ satisfying the constraints, the quantity $\langle \varphi, q\rangle - \wh{p}(\emptyset)$ is a lower bound for our Closeness Problem. Our distribution $\varphi$ achieving the lower bound is a symmetric polynomial, homogeneous of degree $k$ (except that it has a constant term of $1$, as is necessary for every density function). We can use Krawtchouk and Hermite polynomials to simplify the analysis. \begin{proof}[Proof of Theorem~\ref{thm:section1_closeness_lowerbound}] We define \[ \varphi(x) = 1 + \mu {n \choose k }^{-1/2} \sum_{|S| = k} x^S, \quad p(x) = \mu {n \choose k }^{-1/2} \sum_{|S| = k} x^S, \quad q(x) = \min(p(x), 1), \] where $\mu$ is a small parameter to be chosen later that will ensure $\varphi(x) \geq 0$ and $p(x) \geq -1$ for all $x \in \{-1, 1\}^n$. We have $\eps = \max_{1 \leq |S| \leq k}|\wh{\varphi}(S)| = \mu {n \choose k }^{-1/2}$. Since $\wh{p}(\emptyset) = 0$, the objective function of the dual LP is \begin{align*} \langle \varphi, q \rangle &= \langle \varphi, \min(p, 1) \rangle = \langle \varphi, 1_{p > 1} \rangle + \langle \varphi, p1_{p\leq 1} \rangle = \langle \varphi, p \rangle - \langle \varphi, (p-1)1_{p > 1} \rangle \\ &\geq \langle \varphi, p \rangle - \sqrt{\Pr_{\bx \sim \varphi}[p(\bx) > 1] \cdot \langle\varphi, (p-1)^2\rangle}, \end{align*} where the last inequality holds by Cauchy--Schwarz. It is easy to calculate the inner products $\langle \varphi, p \rangle = \mu^2 $, and \begin{align*} \langle \varphi, (p-1)^2 \rangle &= \langle \varphi, p^2 \rangle - 2\langle \varphi, p \rangle + 1 \\ &= \mu^2 + \mu^3 {n \choose k}^{-1/2} {k \choose k/2} {n-k \choose k/2} -2\mu^2 + 1 \\ &\leq 1 + \mu^3 {k \choose k/2}^{3/2} - \mu^2. \end{align*} Assuming $\mu < 2^{-\frac32k}$, we have $\langle \varphi, (p-1)^2 \rangle < 1$. Now we need to upper bound $\Pr_{\bx \sim \varphi}[p(\bx) > 1]$. Define $\bz = \sum_i \bx_i / \sqrt{n}$, so that the number of $-1$'s in $\bx$ is $(n - \bz\sqrt{n})/2$. Then \[ \Pr_{\bx \sim \varphi}[p(\bx) > 1] = \Pr_{\bx \sim \varphi}\left[\mu {n \choose k}^{-1/2} \cdot K_k^{(n)}\left(\frac{n - \bz \sqrt{n}}2\right) > 1 \right]. \] By Fact~\ref{fact:kravchuk}, we know that when $z \leq k$, for sufficiently large $n$, \[ {n \choose k}^{-1/2} \cdot K_k^{(n)}\left(\frac{n-z\sqrt{n}}2\right) < 2h_k(z). \] Now we set $\mu = \frac{\sqrt{k!}}{2(Ck)^k}$ with some constant $C \geq 1$. It is easy to check that $\mu < 2^{-\frac32k}$.
Using the properties in Lemma~\ref{lem:hermite}, we get \begin{align*} \Pr_{\bx \sim \varphi}\left[\mu {n \choose k}^{-1/2} \cdot K_k\left(\frac{n - \bz \sqrt{n}}2, n\right) > 1 \right] &\leq \Pr_{\bx \sim \varphi}[2\mu h_k(\bz)> 1] \\ &\leq \Pr_{\bx \sim \varphi}[h_k(\bz) > h_k(Ck)] \\ &= \Pr_{\bx \sim \varphi}[|\bz| > Ck]. \end{align*} Then using Cauchy--Schwarz again, we get \begin{align*} \Pr_{\bx \sim \varphi}[|\bz| > Ck] &\leq \sqrt{\E_{\bx \sim \{-1, 1\}^n}[\varphi(\bx)^2]} \sqrt{\Pr_{\bx \sim \{-1, 1\}^n}[|\bz| > Ck]} \\ &\leq \sqrt{1+ \mu^2} \sqrt{2\exp(-C^2k^2/2)} \\ &\leq 2\exp(-(Ck)^2/4). \end{align*} Therefore we get that the objective function is at least \[ \langle \varphi, p \rangle - \sqrt{\Pr_{\bx \sim \varphi}[p(\bx) > 1] \cdot \langle\varphi, (p-1)^2\rangle} \geq \mu^2 -\sqrt{2\exp(-(Ck)^2/4)} \geq \Omega\left(\frac1k\right)^k. \] The last inequality holds when we choose a sufficiently large constant $C$. This completes the proof, because $\varphi$ is at least $\delta$-far from $k$-wise uniform with $\delta = \Omega\left(\frac1k\right)^k$, and we have $\eps = \mu {n \choose k }^{-1/2} \leq \frac{n^{-k/2}}{2^{\Omega(k)}}$. Therefore we have $\delta \geq \Omega\left(\frac1k\right)^k n^{k/2} \eps$. \end{proof} \section{The Testing Problem} \label{sec:testing} In this section, we study the problem of testing whether a distribution is $k$-wise uniform or $\delta$-far from $k$-wise uniform. These bounds are based on new bounds for the Closeness Problem. We present a new testing algorithm for general $k$ in Section~\ref{sec:testing_upper_new}. We give a lower bound for the pairwise case in Section~\ref{sec:testing_lower}. \subsection{Upper bound} \label{sec:testing_upper_new} Given $m$ samples from $\varphi$, call them $\bx_1, \dots, \bx_m$, we will first show that \[ \Delta (\bX) = \avg_{1 \leq s < t \leq m} \left(\sum_{1 \leq |S|\leq k} \bx_s^S\bx_t^S\right) \] is a natural estimator of $\W{1 \dots k}[\varphi]$. 
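As a concrete illustration (a hypothetical sketch, not part of the paper), the estimator $\Delta(\bX)$ can be computed directly from samples. Since $x^S y^S = \prod_{i \in S} x_i y_i$, the pair statistic $F(x, y) = \sum_{1 \leq |S| \leq k} x^S y^S$ depends only on the coordinate-wise product $z = x \circ y$:

```python
import itertools
import numpy as np

def low_degree_correlation(x, y, k):
    """F(x, y) = sum over 1 <= |S| <= k of x^S * y^S.
    Since x^S * y^S = prod_{i in S} (x_i * y_i), this depends only on z = x * y."""
    z = x * y
    n = len(z)
    return sum(np.prod(z[list(S)])
               for r in range(1, k + 1)
               for S in itertools.combinations(range(n), r))

def estimate_weight(samples, k):
    """Delta(X): average of F over all pairs of distinct samples,
    an unbiased estimator of the Fourier weight W^{1..k}[phi]."""
    pairs = list(itertools.combinations(range(len(samples)), 2))
    return sum(low_degree_correlation(samples[s], samples[t], k)
               for s, t in pairs) / len(pairs)
```

For samples drawn from a fully uniform distribution the estimate concentrates around $0$, while correlated samples drive it up; the exact sample complexity needed for concentration is the content of the lemma below.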
\begin{lemma} \label{lem:mean-and-var} It holds that \begin{align} \mu = \E[\Delta(\bX)] &= \W{1\dots k}[\varphi]; \nonumber\\ \label{eq:var} \Var[\Delta(\bX)] &\leq \frac{4}{m^2}L_k(\varphi)+ \frac{4}{m}\sqrt{L_k(\varphi)} \mu, \end{align} where $L_k(\varphi) = \sum_{1 \leq |S_1|, |S_2| \leq k} \wh{\varphi}(S_1 \oplus S_2)^2$. \end{lemma} \begin{proof} We denote $F(x, y) = \sum_{1 \leq |S| \leq k} x^S y^S$. We know that \[ \E_{\bx, \by \sim \varphi}[\bx^S \by^S] = \E_{\bx \sim \varphi}[\bx^S]\E_{\by \sim \varphi}[\by^S] = \wh{\varphi}(S)^2, \] when $\bx$ and $\by$ are independent samples drawn from $\varphi$. Therefore by linearity of expectation, $\E_{\bx, \by \sim \varphi}[F(\bx, \by)] = \W{1\dots k}[\varphi]$, and clearly by taking the average, \[ \mu = \E[\Delta(\bX)] = \E[\text{avg}_{s < t} F(\bx_s, \bx_t)] = \text{avg}_{s < t} \E[F(\bx_s, \bx_t)] = \W{1\dots k}[\varphi]. \] We need to expand the variance: \begin{equation} \label{eq:expand_variance_delta} \Var\left[\avg_{s < t} (F(\bx_s, \bx_t))\right] = \frac1{{{m \choose 2}}^2}\sum_{\substack{s < t\\ s' < t'}} \Cov[F(\bx_s, \bx_t), F(\bx_{s'}, \bx_{t'})]. \end{equation} We will discuss these covariances in three cases. \textbf{Case 1:} $| \{s, t\} \cap \{s', t'\}| = 2$. Let $\bx, \by \sim \varphi$ be independent random variables. \[ \Cov[F(\bx, \by), F(\bx, \by)] = \Var_{\bx, \by \sim \varphi}[F(\bx, \by)] \leq \E_{\bx, \by \sim \varphi}[F(\bx, \by)^2] = \E_{\bx, \by \sim \varphi}\left[\left(\sum_{1\leq |S| \leq k}\bx^S \by^S\right)^2\right]. \] Notice here all $\bx_i$'s are Rademacher variables with $\bx_i^2 = 1$, and similarly for the $\by_i$'s. Therefore \begin{align*} \E_{\bx, \by \sim \varphi}\left[\left(\sum_{1\leq |S| \leq k}\bx^S \by^S\right)^2\right] &= \sum_{1\leq |S_1|, |S_2| \leq k} \E_{\bx, \by \sim \varphi}\left[\bx^{S_1 \oplus S_2} \by^{S_1 \oplus S_2}\right] \\ &= \sum_{1\leq |S_1|, |S_2| \leq k} \wh{\varphi}(S_1 \oplus S_2)^2 = L_k(\varphi). 
\end{align*} \textbf{Case 2:} $| \{s, t\} \cap \{s', t'\}| = 1$. Let $\bx, \by, \bz \sim \varphi$ be independent random variables. Similar to Case 1, we have: \begin{align*} \Cov[F(\bx, \by), F(\bx, \bz)] &\leq \E[F(\bx, \by)F(\bx, \bz)] \\ &=\E\left[\left(\sum_{1\leq |S_1| \leq k}\bx^{S_1}\by^{S_1}\right)\left(\sum_{1\leq |S_2| \leq k}\bx^{S_2}\bz^{S_2}\right)\right]\\ &=\E\left[\sum_{1\leq |S_1|, |S_2| \leq k} \bx^{S_1 \oplus S_2} \by^{S_1} \bz^{S_2}\right]\\ & = \sum_{1\leq |S_1|, |S_2| \leq k} \wh{\varphi}(S_1 \oplus S_2) \wh{\varphi}(S_1) \wh{\varphi}(S_2) \\ & \leq \sqrt{\sum_{1 \leq |S_1|, |S_2| \leq k} \wh{\varphi}(S_1 \oplus S_2)^2} \sqrt{\sum_{1 \leq |S_1|, |S_2| \leq k}\wh{\varphi}(S_1)^2 \wh{\varphi}(S_2)^2}\\ & = \sqrt{L_k(\varphi)}\mu, \end{align*} where the inequality comes from Cauchy--Schwarz. \textbf{Case 3:} $| \{s, t\} \cap \{s', t'\}| = 0$. Let $\bx, \by, \bz, \bw \sim \varphi$ be independent random variables. Clearly $F(\bx, \by)$ and $F(\bz, \bw)$ are independent and therefore $\Cov[F(\bx, \by), F(\bz, \bw)] = 0$. Plugging all these cases into \cref{eq:expand_variance_delta}, we get \begin{align*} \Var[\Delta(\bX)] &= \Var\left[\avg_{s < t} F(\bx_s, \bx_t)\right]\\ &= \frac1{{{m \choose 2}}^2}\left({m \choose 2} L_k(\varphi) + m(m - 1)(m - 2) \sqrt{L_k(\varphi)} \mu\right)\\ &\leq \frac{4}{m^2} L_k(\varphi) + \frac4{m} \sqrt{L_k(\varphi)} \mu. \qedhere \end{align*} \end{proof} Given Lemma~\ref{lem:mean-and-var}, we can bound the number of samples we need for estimating $\W{1\dots k}[\varphi]$. \begin{theorem}[$\W{1\dots k}$ Estimation Test] \label{thm:sample} Let $\varphi :\{-1, 1\}^n \to \R^{\geq 0}$ be a density function, promised to satisfy $\W{i}[\varphi] \leq A n^{i/2}$ for all $i = 0, 1, \dots, 2k$.
There is an algorithm that, given \begin{equation} \label{eq:samplebound} m \geq 1000\frac{2^{k} \sqrt{A} n^{k/2}}{\theta} \end{equation} samples, distinguishes with probability at least $3/4$ whether $\W{1\dots k}[\varphi] \leq \frac12 \theta$ or $\W{1\dots k}[\varphi] > \theta$. \end{theorem} \begin{proof} The algorithm is simple: we report ``$ \mu \leq \frac12 \theta$'' if $\Delta(\bX) \leq \frac34 \theta$ and report ``$\mu > \theta$'' if $\Delta(\bX) > \frac34 \theta$. Now we need to bound $L_k(\varphi)$ in order to bound the variance of $\Delta(\bX)$. For a fixed subset $S$ with $|S| \leq 2k$, how many pairs $S_1, S_2$ with $1 \leq |S_1|, |S_2| \leq k$ satisfy $S = S_1 \oplus S_2$? We write $S_1 = S_1' \cup T$, $S_2 = S_2' \cup T$, where~$S_1', S_2', T$ are disjoint. Then $S = S_1' \cup S_2'$. For a fixed set $S$, there are at most $2^{|S|}$ different ways to split it into two sets $S_1', S_2'$. Because $\max\{|S_1'|, |S_2'|\} \geq \lceil|S|/2\rceil$ and $|S_1|, |S_2| \leq k$, we have $|T| \leq k - \lceil|S|/2 \rceil$. Therefore there are at most \[ \sum_{j = 0} ^ {k - \lceil |S|/2 \rceil}{n - |S| \choose j} \leq \frac{2n^{k - \lceil |S|/2 \rceil}}{(k - \lceil |S|/2 \rceil)!} \] ways to choose the set $T$ for any fixed $S_1', S_2'$. Hence, \begin{align*} L_k(\varphi) &= \sum_{1\leq |S_1|, |S_2| \leq k} \wh{\varphi}(S_1 \oplus S_2)^2 \\ &= \sum_{|S| \leq 2k} \sum_{\substack{S_1' \cap S_2' = \emptyset \\ S_1' \cup S_2' = S}} \quad \sum_{\substack{T \cap S_1' = \emptyset, T \cap S_2' = \emptyset \\ |T| + \max\{|S_1'|, |S_2'|\} \leq k}} \wh{\varphi} (S)^2 \\ & \leq \sum_{|S| \leq 2k} 2^{|S|} \frac{2n^{k - \lceil |S|/2 \rceil}}{(k - \lceil |S|/2 \rceil)!} \wh{\varphi} (S)^2 \\ & = \sum_{i = 0}^{2k} 2^i \frac{2n^{k - \lceil i/2 \rceil}}{(k - \lceil i/2 \rceil)!} \W{i}[\varphi].
\end{align*} Plugging in $\W{i}[\varphi] \leq An^{i/2}$, we get \begin{equation} \label{eq:boundlk} L_k(\varphi) \leq \sum_{i = 0}^{2k} 2^i \frac{2n^{k - \lceil i/2 \rceil}}{(k - \lceil i/2 \rceil)!} \W{i}[\varphi] \leq 2^{2k+2} An^k. \end{equation} By substituting \cref{eq:boundlk} and \cref{eq:samplebound} into \cref{eq:var}, we have \[ \Var[\Delta(\bX)] \leq \frac{4}{500^2}\theta^2 + \frac{4}{500} \theta \mu \leq \frac1{64} \max \{\theta^2, \mu^2\}. \] Then we conclude our proof by Chebyshev's inequality: \begin{align*} \Pr\left[|\Delta(\bX) - \mu| \leq \frac14 \max\{\theta, \mu\}\right] &\geq \Pr\left[|\Delta(\bX) - \mu| \leq 2 \sqrt{\Var[\Delta(\bX)]}\right]\\ &\geq 1 - \left(\frac12\right)^2 = \frac34. \qedhere \end{align*} \end{proof} This $\W{1\dots k}$ Estimation Test is just what we need for testing $k$-wise uniformity, using the upper bound for the Closeness Problem. \begin{proof}[Proof of Theorem~\ref{thm:section1_testing_upperbound}] From Theorem~\ref{thm:section1_closeness_upperbound} we know that if the density $\varphi$ is $\delta$-far from $k$-wise uniform, then $\W{1 \dots k}[\varphi] > \left(\frac{\delta}{e^k}\right)^2$. On the other hand, if $\varphi$ is $k$-wise uniform, by definition we have $\W{1 \dots k}[\varphi] = 0$. Therefore distinguishing between $k$-wise uniform and $\delta$-far from $k$-wise uniform reduces to distinguishing between $\W{1 \dots k}[\varphi] > \left(\frac{\delta}{e^k}\right)^2$ and $\W{1 \dots k}[\varphi] = 0$. For any density function $\varphi$, $|\wh{\varphi}(S)| = \left| \E[\varphi(\bx) \bx^S] \right| \leq 1$ for any $S \subseteq [n]$. Therefore, setting $A = n^k$, we have \[ \W{i}[\varphi] = \sum_{|S| = i} \wh{\varphi}(S)^2 \leq n^i \leq A n^{i/2} \] for every $i = 0, 1, \dots, 2k$. Hence we can run the $\W{1\dots k}$ Estimation Test of Theorem~\ref{thm:sample} with parameters $\theta = \left(\frac{\delta}{e^k}\right)^2$ and $A = n^k$, and thereby solve the Testing Problem with sample complexity $2^{O(k)}n^k/\delta^2$.
In fact, by a more precise calculation we can further improve the constant factor involving $k$ to $O\left(\frac1k\right)^{k/2}$, but we omit the proof here for the sake of brevity. \end{proof} \subsection{Lower bound for the pairwise case} \label{sec:testing_lower} An upper bound for the Closeness Problem implies an upper bound for the Testing Problem. But a lower bound for Closeness does not obviously yield a lower bound for the Testing Problem. The function used to show the lower bound for the Closeness Problem is far from $k$-wise uniform, but this alone does not show that it is hard to distinguish from some $k$-wise uniform distribution. In \cite{AAKMRX07}, the authors show that it is hard to distinguish between the fully uniform distribution and the uniform distribution on a random set of size around $O(n^{k-1}/\delta^2)$; this latter distribution is far from $k$-wise uniform with high probability for $k > 2$. We show that the density function $\varphi$ we used for the lower bound for the Closeness Problem is also useful for a testing lower bound in the pairwise case. However, $\varphi$ itself is not hard to distinguish from the fully uniform distribution. Our trick is shifting $\varphi$ by a random ``center''. We remind the reader that we denote by $\varphi^{+t}(x) = \varphi (x \circ t)$ the distribution $\varphi$ shifted by the vector $t$. We claim that with $m = o(n/\delta^2)$ samples, it is hard to distinguish the fully uniform distribution from $\varphi^{+\bt}$ with a uniformly randomly chosen~$\bt$. \begin{lemma} \label{lem:testing_lb_k=2} Let $\varphi$ be the density function defined by $\varphi(x) = 1 + \frac{\delta}{n}\sum_{i < j} x_i x_j$. Assume $m < n/\delta^2$. Let~$\Phi : (\{-1,1\}^n)^m \to \R^{\geq 0}$ be the density associated to the distribution on $m$-tuples of strings defined as follows: First, choose $\bt$ in $\{-1,1\}^n$ uniformly; then choose $m$ strings independently from $\varphi^{+\bt}$.
Let $\bone$ denote the constantly $1$ function on $(\{-1,1\}^n)^m$, the density associated to the uniform distribution. Then the $\chi^2$-divergence between $\Phi$ and $\bone$, $\|\Phi - \bone\|_2^2$, is bounded by \[ \|\Phi - \bone\|_2^2 \leq O\left(\frac{m\delta^2}{n}\right). \] \end{lemma} \begin{proof} We need to show that $\E[(\Phi - \bone)^2] = \E[\Phi^2] - 1 \leq O(m \delta^2/n)$. For uniform and independent $\bx^{(1)}, \dots, \bx^{(m)}$, \begin{align*} \E[\Phi(\bx^{(1)}, \dots, \bx^{(m)})^2] &= \E_{\bx}\left[\left(\E_{\bt}\left[\prod_{i=1}^m\varphi^{+\bt}(\bx^{(i)})\right]\right)^2\right] \\ &= \E_{\bx, \bt, \bt'}\left[\prod_{i = 1}^m\varphi^{+\bt}(\bx^{(i)})\varphi^{+\bt'}(\bx^{(i)})\right] \\ &= \E_{\bt,\bt'}[\langle \varphi^{+\bt}, \varphi^{+\bt'}\rangle^m]. \end{align*} It is easy to check that $\langle \varphi^{+t}, \varphi^{+t'}\rangle = (\varphi * \varphi)(t \circ t')$. Since $\bt \circ \bt'$ is uniformly distributed, \[ \E[\Phi(\bx^{(1)}, \dots, \bx^{(m)})^2] = \E[(\varphi * \varphi)^m]. \] We know that $\widehat{\varphi * \varphi}(S) = \wh{\varphi}(S)^2$. Therefore \[ \varphi * \varphi = 1 + \frac{\delta^2}{n^2}\sum_{i < j} x_i x_j. \] To compute $\E[(\varphi * \varphi)^m]$, we just need to calculate the constant term of $(1 + \frac{\delta^2}{n^2}\sum_{i < j} x_i x_j)^m$, since $x_i^2 = 1$. Suppose that when expanding this out, we take $l$ terms of the form $x_i x_j$; we think of these as $l$ (possibly parallel) edges in the complete graph on $n$ vertices. If these $l$ terms ``cancel out'', the associated edges form a collection of cycles, since each vertex must have even degree. There are at most $n^l$ collections of cycles with $l$ edges. Choosing the $l$ terms (edges) in order, we get an upper bound of $(mn)^l$ for the number of ways of choosing $l$ terms of the form $x_ix_j$ that cancel out.
Therefore we have \[ \E\left[\left(1 + \frac{\delta^2}{n^2}\sum_{i < j} \bx_i \bx_j\right)^m\right] \leq \sum_{l = 0}^{m} (mn)^l \left( \frac{\delta^2}{n^2}\right)^l \leq \sum_{l = 0}^m \left(\frac{ m \delta^2}{n}\right)^l \leq 1 + O\left(\frac{m\delta^2}{n}\right), \] which completes the proof. \end{proof} Now we are ready to give the lower bound for the sample complexity of testing fully uniform vs.\ far-from-pairwise uniform. \begin{proof}[Proof of Theorem~\ref{thm:section1_testing_lowerbound}] If $m = o(n/\delta^2)$, by Lemma~\ref{lem:testing_lb_k=2} we have $\|\Phi - \bone\|_2^2 \leq o(1)$. Then no tester can distinguish, with more than $o(1)$ advantage, whether the $m$ samples are fully uniform or drawn from $\varphi^{+\bt}$ for a random $\bt$. On the other hand, the proof of Theorem~\ref{thm:section1_closeness_lowerbound} shows that $\varphi$ is $\Omega(\delta)$-far from pairwise uniform; since shifting preserves total variation distance and, by the Fourier characterization, maps pairwise uniform distributions to pairwise uniform distributions, $\varphi^{+t}$ is also $\Omega(\delta)$-far from pairwise uniform for every $t$. We conclude that testing fully uniform versus $\delta$-far-from-pairwise-uniform needs sample complexity at least $\Omega(n/\delta^2)$. \end{proof} Unfortunately, we do not see an obvious way to generalize this lower bound to $k > 2$. \section{Testing $\alpha k$-wise/fully uniform vs.\ far from $k$-wise uniform} \label{sec:fully} \subsection{The algorithm} In this section we show a sample-efficient algorithm for testing whether a distribution is $\alpha k$-wise/fully uniform or $\delta$-far from $k$-wise uniform. As a reminder, Theorem~\ref{thm:sample} shows that the sample complexity of estimating $\W{1 \dots k}[\varphi]$ is bounded in terms of the Fourier weight up to level $2k$. This suggests using a filter test to try to ``kick out'' those distributions with noticeable Fourier weight up to degree $2k$. \textbf{Filter Test.} Draw $m_1$ samples from $\varphi$.
If there exists a pair of samples $\bx, \by$ such that $\left|\sum_{i = 1}^n \bx_i \by_i\right| > t\sqrt{n}$, output ``Reject''; otherwise, output ``Accept''. The Overall Algorithm combines the Filter Test and the $\W{1\dots k}$ Estimation Test. \textbf{Overall Algorithm.} Do the Filter Test with $m_1$ samples and parameter $t$. If it rejects, say ``No''. Otherwise, do the $\W{1\dots k}$ Estimation Test with $m_2$ samples and $\theta = (\delta/e^k)^2$. Say ``No'' if it outputs ``$\W{1\dots k}[\varphi] > \theta$'' and say ``Yes'' otherwise. Here ``Yes'' means $\varphi$ is $\alpha k$-wise/fully uniform, and ``No'' means $\varphi$ is $\delta$-far from $k$-wise uniform. We will choose the parameters $m_1, t, m_2$ in the Overall Algorithm later. For simplicity, we denote $\overline{k} = \alpha k$. We will focus on testing $\alpha k$-wise uniform vs.\ far from $k$-wise uniform in the analysis. For full uniformity, the analysis is almost the same, and we will discuss it at the end of this subsection. First of all, we will prove that if $\varphi$ is $\overline{k}$-wise uniform, it will pass the Filter Test with high probability, provided we choose $m_1$ and $t$ properly. \begin{lemma} \label{lem:k'-accept} If $\varphi$ is $\overline{k}$-wise uniform (assuming $\overline{k}$ is even), the Filter Test will accept with probability at least $.9$ when $m_1^2 \leq \frac{t^{\overline{k}}}{5 \overline{k}^{\overline{k}/2}} $. \end{lemma} \begin{proof} If $\varphi$ is $\overline{k}$-wise uniform with $\overline{k}$ even, then by Markov's inequality applied to the $\overline{k}$-th moment, we have \[ \Pr_{\substack{\bx, \by \sim \varphi \\ \text{independent}}}\left[\left|\sum_{i = 1}^n \bx_i \by_i\right| > t\sqrt{n}\right] = \Pr_{\bx, \by \sim \varphi}\left[\left(\sum_{i = 1}^n \bx_i \by_i\right)^{\overline{k}} > (t\sqrt{n})^{\overline{k}}\right] \leq \frac{\E_{\bx, \by \sim \varphi}\left[\left(\sum_{i=1}^n\bx_i \by_i\right)^{\overline{k}}\right]}{t^{\overline{k}}n^{\overline{k}/2}}.
\] When we expand $\left(\sum_{i=1}^n x_i y_i\right)^{\overline{k}}$, each term has degree at most $\overline{k}$ in $x$ and in $y$. Because $\bx$ and~$\by$ are independent random variables chosen from the $\overline{k}$-wise uniform distribution $\varphi$, the whole polynomial behaves the same as if $\bx$ and $\by$ were chosen from the fully uniform distribution: \begin{align*} \E_{\bx, \by \sim \varphi}\left[\left(\sum_{i=1}^n\bx_i \by_i\right)^{\overline{k}}\right] &= \E_{\bz \sim \{-1, 1\}^n} \left[\left(\sum_{i = 1}^n \bz_i\right)^{\overline{k}}\right]\\ &\leq \overline{k}^{\overline{k}/2} \left(\E_{\bz \sim \{-1, 1\}^n} \left[\left(\sum_{i = 1}^n \bz_i\right)^2\right]\right)^{\overline{k}/2} \\ &= \overline{k}^{\overline{k}/2} n^{\overline{k}/2}. \end{align*} The inequality uses hypercontractivity; see Theorem~9.21 in \cite{OD14}. Hence we have \[ \Pr_{\bx, \by \sim \varphi}\left[\left|\sum_{i = 1}^n \bx_i \by_i\right| > t\sqrt{n}\right] \leq \frac{\overline{k}^{\overline{k}/2}}{t^{\overline{k}}}. \] When drawing $m_1$ samples, there are at most ${m_1 \choose 2} \leq \frac12 m_1^2$ pairs. Hence by the union bound, the probability of $\varphi$ getting rejected is at most $\frac{m_1^2\overline{k}^{\overline{k}/2}}{2t^{\overline{k}}}\leq \frac1{10}$. \end{proof} Secondly, we claim that any distribution $\varphi$ that is not rejected by the Filter Test is close to a distribution $\varphi'$ with upper bounds on the Fourier weight at each of its levels. \begin{lemma} \label{lem:filter-low-weight} Any distribution $\varphi$ either gets rejected by the Filter Test with probability at least $.9$, or there exists some distribution $\varphi'$ such that: \begin{enumerate} \item $\varphi'$ and $\varphi$ are $\frac{8}{m_1}$-close in total variation distance; \item $\W{i}[\varphi'] \leq \frac{10^7}{m_1^2} n^i + t^in^{i/2}$ for all $i = 1, \dots, n$. \end{enumerate} \end{lemma} We will present the proof of Lemma~\ref{lem:filter-low-weight} in the next subsection.
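In code, the Filter Test amounts to a pairwise inner-product check over the drawn samples. The following Python sketch is a hypothetical illustration (the function name and the explicit threshold parameter `t` are our own choices, not from the paper):

```python
import numpy as np

def filter_test(samples, t):
    """Filter Test: reject (return False) if some pair of samples (x, y)
    is skewed, i.e. |<x, y>| > t * sqrt(n); otherwise accept (return True)."""
    n = len(samples[0])
    threshold = t * np.sqrt(n)
    for s in range(len(samples)):
        for u in range(s + 1, len(samples)):
            if abs(np.dot(samples[s], samples[u])) > threshold:
                return False  # "Reject": a skewed pair was found
    return True  # "Accept": no pair of samples is skewed
```

For instance, two identical samples have inner product $n > t\sqrt{n}$ and trigger rejection, while nearly orthogonal samples, which are typical under a $\overline{k}$-wise uniform distribution, pass.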
If $\varphi$ is not rejected by the Filter Test, Lemma~\ref{lem:filter-low-weight} tells us that it is close to some distribution $\varphi'$ with bounded Fourier weight on each of its levels. Even though we are drawing samples from $\varphi$, we can ``pretend'' that we are drawing samples from $\varphi'$ since they are close: \begin{claim} \label{claim:nodiff} Let $m_2 \leq \frac{m_1}{200}$, and let $A(X^{(m_2)})$ be any event depending on $m_2$ samples in $\{-1, 1\}^n$, $X^{(m_2)} = \{x_1, \dots, x_{m_2}\}$. Then we have \[ \left|\Pr_{\bX^{(m_2)} \sim \varphi} [A(\bX^{(m_2)})] - \Pr_{\bX^{(m_2)} \sim \varphi'}[A(\bX^{(m_2)})] \right| \leq .08, \] when $\varphi$ and $\varphi'$ are $\frac{8}{m_1}$-close. \end{claim} \begin{proof} We denote by $\Phi$ (respectively, $\Phi'$) the joint distribution of $m_2$ samples from $\varphi$ (respectively, $\varphi'$). Then by a union bound we know that $\Phi$ and $\Phi'$ are $.04$-close, since $m_2 \cdot \frac{8}{m_1} \leq .04$. We write $\bone[A(X^{(m_2)})]$ for the indicator function of the event $A$ happening on $X^{(m_2)}$. Then we have \begin{align*} \left|\Pr_{\bX^{(m_2)} \sim \varphi}[A(\bX^{(m_2)})] - \Pr_{\bX^{(m_2)} \sim \varphi'}[A(\bX^{(m_2)})]\right| &= \left|\sum_{X^{(m_2)}} \bone[A(X^{(m_2)})] \left(\Phi(X^{(m_2)}) - \Phi'(X^{(m_2)})\right)\right| \\ & \leq \sum_{X^{(m_2)}} \left|\Phi(X^{(m_2)}) - \Phi'(X^{(m_2)})\right| \\ & = 2 d_{\text{TV}}(\Phi, \Phi') \leq .08, \end{align*} which completes the proof. \end{proof} Now we are ready to analyze the Overall Algorithm. \begin{proof}[Proof of Theorem~\ref{thm:section1_fully_testing}] We first discuss distinguishing between $\overline{k}$-wise uniform and $\delta$-far from $k$-wise uniform.
In the Overall Algorithm, we set the parameters $t = \left( 10^{11} (4e^4)^k \overline{k}^{\overline{k}/2} \frac{n^k}{\delta^4}\right)^{\frac1{\overline{k}-2k}}$ and $m_1 = \sqrt{\frac{t^{\overline{k}}}{5\overline{k}^{\overline{k}/2}}}$ in the Filter Test; and we set $m_2 = \frac1{200} m_1$ and $\theta = \left(\frac{\delta}{e^k}\right)^2$ in the $\W{1 \dots k}$ Estimation Test. In total we use $m_1 + m_2 = O\left(\sqrt{\frac{t^{\overline{k}}}{\overline{k}^{\overline{k}/2}}}\right)$ samples in the Overall Algorithm. By plugging in the definition of $t$ and $\overline{k} = \alpha k$, we can simplify the sample complexity to $O(\alpha)^{k/2} \cdot n^{k/2} \cdot \frac1{\delta^2} \cdot \left(\frac{n^k}{\delta^4}\right)^{1/(\alpha - 2)}$. The rest of the proof shows the correctness of this algorithm. We discuss the two cases. \textbf{``Yes'' case:} Suppose $\varphi$ is $\overline{k}$-wise uniform. By Lemma~\ref{lem:k'-accept} we know that $\varphi$ will pass the Filter Test with probability at least $.9$, since $m_1^2 = \frac{t^{\overline{k}}}{5\overline{k}^{\overline{k}/2}}$. Now $\varphi$ is $\overline{k}$-wise uniform with $\overline{k} > 2k$, which means $\wh{\varphi}(S) = 0$ for any $1 \leq |S| \leq 2k$. Therefore, setting $\theta = \left( \frac{\delta}{e^k} \right)^2$ and $A = 1$, Theorem~\ref{thm:sample} tells us that $m_2$ samples suffice for the $\W{1\dots k}$ Estimation Test to output ``$\W{1\dots k}[\varphi] \leq \frac12 \theta$'' with probability $3/4$. The overall probability of the Overall Algorithm saying ``Yes'' is therefore at least $.9 \times \frac34 > \frac23$. \textbf{``No'' case:} Suppose $\varphi$ is $\delta$-far from $k$-wise uniform.
Either $\varphi$ gets rejected by the Filter Test with probability at least $.9$, or, according to Lemma~\ref{lem:filter-low-weight}, there exists some distribution $\varphi'$ which is $\frac{8}{m_1}$-close to $\varphi$ and satisfies $\W{i}[\varphi'] \leq \frac{10^7}{m_1^2} n^i + t^in^{i/2}$ for all $i = 1, \dots, n$. The second stage is slightly tricky. As described in Claim~\ref{claim:nodiff}, at the expense of losing $.08$ probability, we may pretend we are drawing samples from $\varphi'$ rather than $\varphi$. Notice that $m_1^2 = \frac{t^{\overline{k}}}{5\overline{k}^{\overline{k}/2}} = \omega(n^k)$. We have \[ \W{i}[\varphi'] \leq \frac{10^7}{m_1^2} n^i + t^in^{i/2} = (1 + o(1)) t^in^{i/2} \leq An^{i/2} \] for $i = 0, \dots, 2k$ with parameter $A = 1.01t^{2k}$. Then plugging $A = 1.01t^{2k}$ and $\theta = \left( \frac{\delta}{e^k}\right)^2$ into Theorem~\ref{thm:sample}, we know that the $\W{1\dots k}$ Estimation Test will say ``$\W{1\dots k}[\varphi] > \theta$'' with probability at least $\frac34$ when $\varphi'$ is $\delta$-far from $k$-wise uniform, provided we have at least $1005\frac{(2e^2)^kt^kn^{k/2}}{\delta^2}$ samples. It is easy to check that $m_2 = \frac1{200}\sqrt{\frac{t^{\overline{k}}}{5\overline{k}^{\overline{k}/2}}}$ is sufficient. However, in the real algorithm we are drawing samples from $\varphi$ rather than $\varphi'$. From Claim~\ref{claim:nodiff}, we know that the Estimation Test will still report ``$\W{1\dots k}[\varphi] > \theta$'' with probability at least $\frac34 - .08 > \frac23$ when $\varphi'$ is $\delta$-far from $k$-wise uniform. Notice that $\varphi$ and $\varphi'$ are $\frac{8}{m_1}$-close, where $\frac{8}{m_1} = o\left(\frac{\delta^4}{n^k}\right)$. Hence if $\varphi$ is $\delta$-far from $k$-wise uniform, then so is $\varphi'$ (up to a negligible loss in $\delta$), which completes the proof of this case.
Finally, for distinguishing between a distribution being fully uniform and a distribution being $\delta$-far from $k$-wise uniform, the only modification needed is that in Lemma~\ref{lem:k'-accept} we use Hoeffding's inequality to get \[ \Pr_{\bx, \by \sim \varphi}\left[\left|\sum_{i = 1}^n \bx_i \by_i\right| > t\sqrt{n}\right] \leq 2e^{-t^2/2}, \] and then we have the constraint $m_1^2 \leq \frac1{10} e^{t^2/2}$. Following exactly the same analysis, we get the same algorithm with sample complexity $O(k)^k\cdot n^{k/2} \cdot \frac1{\delta^2} \cdot \left(\log \frac{n}{\delta}\right)^{k/2}$. \end{proof} \subsection{Proof of Lemma~\ref{lem:filter-low-weight}} The rest of this section is devoted to proving Lemma~\ref{lem:filter-low-weight}. We will use the following definition in the analysis. \begin{definition} For $x, y \in \{-1, 1\}^n$, we say $(x, y)$ is \emph{skewed} if $\left| \sum_{i = 1}^n x_i y_i \right| > t\sqrt{n}$. We say that $x$ is \emph{$\beta$-bad for distribution $\varphi$} if $\Pr_{\by \sim \varphi}[(x, \by) \text{ is skewed}] > \beta$. \end{definition} \begin{claim} \label{claim:bad} If $\Pr_{\bx \sim \varphi}\left[\bx \text{ is $\frac8{m_1}$-bad for } \varphi\right] > \frac8{m_1}$, then $\varphi$ will be rejected by the Filter Test with probability at least $.9$. \end{claim} \begin{proof} Suppose $\Pr_{\bx \sim \varphi}\left[\bx \text{ is $\frac8{m_1}$-bad for } \varphi\right] > \frac8{m_1}$. We divide the samples we draw for the Filter Test into two sets of size ${m_1}/2$ each. Then the probability of choosing an $\frac8{m_1}$-bad~$x$ among the first ${m_1}/2$ samples is at least \[ \Pr_{\bx_1, \dots, \bx_{{m_1}/2} \sim \varphi}\left[\exists x \text{ $\frac8{m_1}$-bad for $\varphi$ among } \bx_1, \dots, \bx_{{m_1}/2}\right] > 1 - \left(1-\frac8{m_1}\right)^{{m_1}/2} \geq 1-e^{-4}.
\] Now if we have such an $\frac8{m_1}$-bad $x$ among the first ${m_1}/2$ samples, each pair $(x, \bx_t)$ will be skewed with probability at least $\frac8{m_1}$ for any $t = {m_1}/2+1, \dots, {m_1}$. Therefore \[ \Pr_{\bx_{{m_1}/2+1}, \dots, \bx_{{m_1}}}[(x, \bx_t) \text{ is skewed for some } t = {m_1}/2+1, \dots, {m_1}] \geq 1 - \left(1-\frac8{m_1}\right)^{{m_1}/2} \geq 1-e^{-4}. \] Combining the two inequalities, we conclude that the probability of at least one pair being skewed is at least $(1-e^{-4})^2 \geq .9$. \end{proof} Now we only need to consider the case when the probability of drawing a bad $x$ from $\varphi$ is very small. We want to show the stronger claim that even the probability of drawing a skewed pair from $\varphi$ is small. However, this might not be true for $\varphi$ itself. Thus we look at another distribution $\varphi'$, defined to be $\varphi$ conditioned on the outcome not being bad. Define $\varphi'$ as \begin{equation} \label{eq:phi'} \varphi'(x) = \varphi(x) \frac{ \bone\left[x \text{ not $\frac8{m_1}$-bad for $\varphi$}\right]}{1 - \Pr_{\bx \sim \varphi}\left[\bx \text{ is $\frac8{m_1}$-bad for $\varphi$}\right]}. \end{equation} We show that $\varphi'$ is close to $\varphi$ and that $\varphi'$ has no bad outcomes: \begin{claim} \label{claim:phi'} Suppose $\varphi$ satisfies $\Pr_{\bx \sim \varphi}\left[\bx \text{ is $\frac8{m_1}$-bad for $\varphi$}\right] \leq \frac8{{m_1}}$ and $m_1 \geq 16$. Let $\varphi'$ be defined as in \cref{eq:phi'}. Then: \begin{enumerate} \item $\varphi$ and $\varphi'$ are $\frac8{m_1}$-close; \item $\varphi'(x) = 0$ for any $x$ that is $\frac{16}{m_1}$-bad for $\varphi'$. \end{enumerate} \end{claim} \begin{proof} \begin{enumerate} \item Notice that $\varphi'(x) = 0 \leq \varphi(x)$ when $x$ is $\frac8{m_1}$-bad for $\varphi$, and $\varphi'(x) \geq \varphi(x)$ otherwise.
Hence, \begin{align*} d_{\text{TV}}(\varphi, \varphi') &= \frac12 \E_{\bx}[|\varphi(\bx) - \varphi'(\bx)|] \\ &= \frac1{2^n} \sum_{\varphi'(x) < \varphi(x)} (\varphi(x) - \varphi'(x))\\ &\leq \Pr_{\bx \sim \varphi}\left[\bx \text{ $\frac8{m_1}$-bad for $\varphi$}\right] \leq \frac8{{m_1}}. \end{align*} \item Since $\Pr_{\bx \sim \varphi}\left[\bx \text{ is $\frac8{m_1}$-bad for $\varphi$}\right] \leq \frac8{{m_1}}$ and $m_1 \geq 16$, $\varphi'(x)$ is either $0$ or at most $(1 + \frac{16}{{m_1}}) \varphi(x)$. Therefore if $\varphi'(x) > 0$, then $x$ is not $\frac8{m_1}$-bad for $\varphi$. Hence, \begin{align*} \Pr_{\by \sim \varphi'}\left[(x, \by) \text{ is skewed}\right] &\leq \left(1 + \frac{16}{{m_1}}\right) \Pr_{\by \sim \varphi}\left[(x, \by) \text{ is skewed}\right] \\ &\leq \left(1+\frac{16}{{m_1}}\right)\frac8{m_1} \leq \frac{16}{m_1}. \qedhere \end{align*} \end{enumerate} \end{proof} \begin{claim} \label{claim:skewsmall} Suppose the distribution $\varphi$ satisfies $\Pr_{\bx \sim \varphi}\left[\bx \text{ is $\frac8{m_1}$-bad for $\varphi$}\right] \leq \frac8{{m_1}}$. Let $\varphi'$ be defined as in \cref{eq:phi'}. If $\Pr_{\bx, \by \sim \varphi'}[(\bx, \by) \text{ is skewed}] > \frac{10^7}{m_1^2}$, then with probability at least $.9$, $\varphi$ will be rejected by the Filter Test. \end{claim} Note that the hypothesis here concerns $\varphi'$, while the Filter Test draws its samples from $\varphi$. \begin{proof} We only consider the first $m_1' = \frac{m_1}{200}$ samples. From Claim~\ref{claim:phi'} we know that $\varphi$ and $\varphi'$ are $\frac{8}{m_1}$-close. Therefore, we only need to show that if the samples are drawn from $\varphi'$, the probability that a skewed pair appears among these $m_1'$ samples is at least $.98$. Then $\varphi$ will be rejected by the Filter Test with probability at least $.98 - .08 \geq .9$, according to Claim~\ref{claim:nodiff}.
Define the random variable $\bU_{s, t}$ to be the indicator of the event that $(\bx_s, \bx_t)$ is skewed, and let $\bU = \sum_{1 \leq s < t \leq m_1'} \bU_{s, t}$. We need to prove that $\Pr[\bU = 0] \leq .02$. (From now on, all probabilities and expectations are with respect to samples drawn from the distribution $\varphi'$.) By Chebyshev's inequality, we know that $\Pr[\bU = 0] \leq \frac{\Var[\bU]}{\E[\bU]^2}$, so we need to calculate $\Var[\bU]$ and $\E[\bU]$. Denote $\mu = \Pr_{\bx, \by \sim \varphi'}[(\bx, \by) \text{ is skewed}]$. Then $\E[\bU_{s, t}] = \mu$ for any $s < t$ and hence we have \[ \E[\bU] = \sum_{s < t} \E[\bU_{s, t}] = {m_1' \choose 2} \mu. \] It remains to calculate $\E[\bU^2]$. We can expand it as \[ \E[\bU^2] = \E\left[\left(\sum_{s < t} \bU_{s, t} \right)^2\right] = \sum_{\substack{s < t\\s' < t'}}\E[\bU_{s,t}\bU_{s', t'}]. \] Similar to the proof of Lemma~\ref{lem:mean-and-var}, we discuss these expectations in three cases. \textbf{Case 1:} $|\{s, t\} \cap \{s', t'\}| = 2.$ Since $\bU_{s, t}$ is a Bernoulli random variable, we know that \[ \E[\bU_{s,t}^2] = \E[\bU_{s,t}] = \mu. \] \textbf{Case 2:} $|\{s, t\} \cap \{s', t'\}| = 1.$ Without loss of generality we assume $s = s'$. We consider drawing $\bx_s$ first. For any fixed $x_s$ with $\varphi'(x_s) > 0$, \[ \E_{\bx_{t'}}[\bU_{s, t'}] = \Pr_{\bx_{t'}}[(x_s, \bx_{t' }) \text{ is skewed}] \leq \frac{16}{m_1} = \frac{2}{25m_1'}, \] where the inequality comes from Claim~\ref{claim:phi'}. Therefore, \[ \E[\bU_{s, t}\bU_{s, t'}] = \E_{\bx_s, \bx_t}[\bU_{s,t} \E_{\bx_{t'}}[\bU_{s, t'}]] \leq \frac{2\mu}{25 m_1'}. \] \textbf{Case 3:} $|\{s, t\} \cap \{s', t'\}| = 0.$ Since $s, t, s', t'$ are all distinct, we have \[ \E[\bU_{s, t}\bU_{s', t'}] = \E[\bU_{s, t}]\E[\bU_{s', t'}] = \mu^2. \] Combining these cases, we get \[ \E[\bU^2] = {m_1' \choose 2}\mu + m_1'(m_1'-1)(m_1'-2)\frac{2\mu}{25 m_1'} + {m_1' \choose 2}{m_1'-2 \choose 2} \mu^2.
\] Then we have \[ \frac{\Var[\bU]}{\E[\bU]^2} = \frac{\E[\bU^2]}{\E[\bU]^2} - 1 \leq \frac{58}{25m_1'^2 \mu}. \] By substituting $\mu \geq \frac{10^7}{m_1^2} = \frac{10^3}{4m_1'^2}$, we conclude $\Pr[\bU = 0] \leq \frac{\Var[\bU]}{\E[\bU]^2} \leq .02$, which completes the proof. \end{proof} Now we only need to consider those distributions $\varphi$ whose corresponding $\varphi'$ satisfies \linebreak $\Pr_{\bx, \by \sim \varphi'}[(\bx, \by) \text{ is skewed}] \leq \frac{10^7}{m_1^2}$. This gives us an upper bound on the Fourier weight at every level of $\varphi'$. \begin{claim} \label{claim:weightsmall} If $\Pr_{\bx, \by \sim \varphi'}[(\bx, \by) \text{ is skewed}] \leq \frac{10^7}{m_1^2}$, then \[ \W{i}[\varphi'] \leq \frac{10^7}{m_1^2} n^i + t^i n^{i/2} \] for $i = 1, \dots, n$. \end{claim} \begin{proof} We first show that $\W{i}[\varphi'] \leq \E_{\bx, \by \sim \varphi'}[(\sum_{j=1}^n \bx_j \by_j)^i]$. Since $(\sum_{j=1}^n x_j y_j)^i$ is a symmetric function, we can expand it as \[ \left(\sum_{j=1}^n x_j y_j\right)^i = \sum_{\substack{0\leq k \leq i\\ i-k\text{ even}}} \alpha_{k}\left(\sum_{|S| = k} x^{S} y^{S}\right), \] with positive integer coefficients $\alpha_{k}$. Notice that \[ \E_{\bx, \by \sim \varphi'} \left[\sum_{|S| = k} \bx^{S} \by^{S}\right] = \W{k}[\varphi']. \] Therefore \[ \E_{\bx, \by \sim \varphi'}\left[\left(\sum_{j=1}^n \bx_j \by_j\right)^i\right] = \sum_{\substack{0\leq k \leq i \\ i-k\text{ even}}} \alpha_{k} \W{k}[\varphi'] \geq \W{i}[\varphi']. \] The last inequality holds because the $\alpha_{k}$'s are positive integers and each $\W{k}[\varphi']$ is non-negative. The rest of the proof is devoted to bounding $\E_{\bx, \by \sim \varphi'}[(\sum_{j=1}^n \bx_j \by_j)^i]$. When $(x, y)$ is skewed, $\left|\sum_j x_j y_j\right|$ is at most $n$, and by assumption this happens with probability at most $\frac{10^7}{m_1^2}$; otherwise, by the definition of ``skewed'', $\left|\sum_j x_j y_j\right|$ is at most $t\sqrt{n}$.
Therefore, \[ \E\left[\left(\sum_{j=1}^n \bx_j \by_j\right)^i\right] \leq \frac{10^7}{m_1^2} n^i + t^i n^{i/2} \] for all $i = 1, \dots, n$. \end{proof} Combining the above discussion, we obtain the proof of Lemma~\ref{lem:filter-low-weight}. \begin{proof}[Proof of Lemma~\ref{lem:filter-low-weight}] We consider three cases for $\varphi$. \textbf{Case 1:} If $\Pr_{\bx \sim \varphi}\left[\bx \text{ is $\frac8{m_1}$-bad on } \varphi\right] > \frac8{m_1}$, Claim~\ref{claim:bad} tells us that $\varphi$ is rejected by the Filter Test with probability at least .9. For the remaining two cases we know that $\Pr_{\bx \sim \varphi}\left[\bx \text{ is $\frac8{m_1}$-bad on } \varphi\right] \leq \frac8{m_1}$. We construct $\varphi'$ as in \cref{eq:phi'}. \textbf{Case 2:} If $\Pr_{\bx \sim \varphi}\left[\bx \text{ is $\frac8{m_1}$-bad on } \varphi\right] \leq \frac8{m_1}$ but $\Pr_{\bx, \by \sim \varphi'}[(\bx, \by) \text{ is skewed}] > \frac{10^7}{m_1^2}$, Claim~\ref{claim:skewsmall} tells us that $\varphi$ also gets rejected with probability at least .9. \textbf{Case 3:} If $\Pr_{\bx, \by \sim \varphi'}[(\bx, \by) \text{ is skewed}] \leq \frac{10^7}{m_1^2}$, then according to Claim~\ref{claim:weightsmall}, $\W{i}[\varphi'] \leq \frac{10^7}{m_1^2} n^i + t^in^{i/2}$ for all $i = 1, \dots, n$. Also, by Claim~\ref{claim:phi'} we know that $\varphi$ and $\varphi'$ are $\frac8{m_1}$-close, so $\varphi'$ satisfies the conclusion of the lemma. \end{proof} \end{document}
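The second-moment argument above, $\Pr[\bU = 0] \leq \Var[\bU]/\E[\bU]^2$, is easy to sanity-check numerically. The sketch below is not part of the paper: independent Bernoulli indicators stand in for the (weakly dependent) pairwise skewness events, and the parameter values are made up for illustration.

```python
import numpy as np

# Monte Carlo illustration of the second-moment (Chebyshev) bound
# Pr[U = 0] <= Var[U] / E[U]^2, with U a sum of pairwise indicators.
# Simplification: i.i.d. Bernoulli(mu) indicators replace the weakly
# dependent skewness events of the proof; m and mu are made-up values.
rng = np.random.default_rng(0)
m, mu, trials = 20, 0.02, 20000
pairs = m * (m - 1) // 2                    # number of pairs (s, t)
U = rng.binomial(pairs, mu, size=trials)    # U = number of "skewed" pairs
emp_zero = np.mean(U == 0)                  # empirical Pr[U = 0]
bound = pairs * mu * (1 - mu) / (pairs * mu) ** 2   # Var[U] / E[U]^2
```

With these toy values the empirical $\Pr[U = 0] \approx (1-\mu)^{\binom{m}{2}}$ sits well below the second-moment bound, as the proof requires.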
\begin{document} \title{Towards a Multi Target Quantum Computational Logic } \author{Giuseppe Sergioli } \institute{Giuseppe Sergioli \at University of Cagliari, Via Is Mirrionis 1, I-09123 Cagliari, Italy\\ \email{[email protected]} } \date{Received: date / Accepted: date} \maketitle \begin{abstract} Unlike the standard Quantum Computational Logic (QCL), where the carrier of information (target) is conventionally assumed to be only the last qubit over a sequence of many qubits, here we propose an extended version of the QCL (which we call Multi Target Quantum Computational Logic, briefly MT-QCL) where the number and the position of the target qubits are arbitrary. \keywords{Quantum Computational Logic \and Quantum Circuit \and Control and Target Qubits} \end{abstract} \section{Introduction} Both in classical and in quantum computation, a circuit is described in terms of a sequence of gates that transform an arbitrary state (input) into another state (output) \cite{KSV}. In classical computation these transformations are basically irreversible: the Boolean functions $f$ that represent the classical gates are many-to-one, i.e. $f:\{0,1\}^n\to\{0,1\}$, where the dimension of the input state does not correspond to the dimension of the output. Hence, the same output may be obtained in correspondence with different inputs, producing an irreversible process. However, this irreversibility is no longer essential. As proved by Toffoli \cite{T}, through a very simple expedient any classical irreversible gate can be canonically converted into a corresponding reversible gate. Given an arbitrary irreversible gate, any input can be divided into two components: the first is the ``genuine" input while the second is an \emph{ancilla} (or \emph{target} bit); on the other hand, also the output can be divided into a first component, which is a copy of the ``genuine" input, and a second component that is obtained by the evolution of the ancilla.
Basically, only the second part of the output contains the result of the computation, while the first part only contains redundant information, which represents a kind of \emph{memory} of the input state. Using this expedient, the dimensions of the input and the output states are the same and the computation becomes reversible. The price to pay in order to obtain the desirable reversibility is an increase in the dimension of the computational space. Unlike classical computation, the theory of quantum computation is naturally reversible \cite{SGP}. The Schr\"odinger equation describes the dynamical evolution of quantum systems, showing how the state $\ket{\psi_{t_0}}$ at the initial time $t_0$ evolves into another state $\ket{\psi_{t_1}}$ at the final time $t_1$ by the equation $\ket{\psi_{t_0}}\rightarrow \ket{\psi_{t_1}}=U\ket{\psi_{t_0}}$, where $U$ is a unitary operator that represents a reversible transformation. The Schr\"odinger equation naturally applies also in quantum information theory \cite{NC}. Indeed, quantum logical gates are unitary operators that describe the time evolution of a quantum input state into an output state, and to perform a quantum algorithm means exactly to apply a sequence of quantum gates to a quantum input state. Unlike in the classical case, because of their unitarity, the dimensions of the input and the output of a quantum gate are the same, and a quantum logical gate always represents a reversible transformation. Furthermore, another peculiar difference between classical and quantum information theory is given by the basic unit of information, which in the classical framework is stored by the classical bit, while in quantum computation it is given by the quantum bit (\emph{qubit} for short), i.e. a unit vector in the complex Hilbert space $\mathbb{C}^2$ \cite{DGS,NC}.
The qubit turns out to store a much larger amount of information than its classical counterpart, and this is what makes quantum computation, in principle, more efficient than classical computation. The input of a quantum circuit is given by a composition of qubits that is mathematically represented by the tensor product operation. Hence, given $k$ qubits $\ket {x_1},\ket {x_2},\cdots,\ket {x_k}\in\mathbb{C}^2$, the input state given by an ensemble of $k$ qubits is $\ket {x_1}\otimes\ket {x_2}\otimes\cdots\otimes\ket {x_k}$ (which, for short, we call a quantum register - or \emph{quregister} - and indicate by $\ket {x_1 x_2 \cdots x_k}\in\otimes^k\mathbb{C}^2$). It is important to remark that, given the non-commutativity of the tensor product, the order in which the qubits appear in the state is not negligible; in other words, $\ket{x_1 x_2}$ and $\ket{x_2 x_1}$ generally represent two different states. A quantum circuit is represented by the evolution of the input quregister under the application of some unitary quantum logical gates \cite{H,NC}. Obviously, it is often the case that an $n$-ary quantum gate $U^{(n)}$ is applied to $n$ qubits over a given input configuration, where these $n$ qubits need not be \emph{adjacent} to one another. This very common problem is related to the topic of architecture in quantum computer design, which plays a crucial role in the realization of advanced technologies in quantum computation \cite{Linke2017}. Even if physical architectures conveniently impose particular constraints on the qubit distribution based on nearest-neighbor couplings \cite{K,Linke2017}, in principle these constraints do not limit the possibility of performing arbitrary computations, because $Swap$ operations can be suitably used; however, the use of $Swap$ operations is not free of computational cost.
On this basis, recent topics related to efficient quantum computing between remote qubits in nearest-neighbor architectures - such as the linear neighbor architectures (LNN) \cite{K} - are active and important areas of research, also devoted to physical implementations \cite{FHH,TKO}.\footnote{As an example, the linear neighbor architectures (LNN) \cite{K} offer an appropriate approximate method for physical problems regarding trapped ions \cite{HHRB}, liquid nuclear magnetic resonance \cite{LSB} and the original Kane model \cite{Ka}.} In addition, from a more theoretical point of view, the architecture in quantum computer design also plays a crucial role in the very general problem regarding the classical simulation of quantum circuits. As pointed out by Jozsa and Miyake \cite{JM}, the capability of a classical computer to efficiently simulate a quantum circuit is strictly related to the ``distance" between the qubits on which a given quantum gate operates, i.e. the number of the $Swap$ operators necessary to simulate the circuit. On this basis, in \cite{S} a simple mathematical representation of an arbitrary quantum circuit is provided. This representation turns out to be very beneficial for practical uses (for instance, the implementation of a software package able to efficiently simulate a quantum circuit \cite{Gerdt1,Gerdt2}). But, beyond implementation purposes, this representation can also be very useful to provide a generalization of the quantum computational logic that takes into account a more realistic scenario. This paper is devoted to introducing a new version of the quantum computational logic that strictly depends on the architecture of the quantum circuit and that, at the same time, represents a generalization of the standard quantum computational logic \cite{DGG}.
The paper is organized as follows: in Section 2 we discuss the different roles that different qubits can assume along a computation, depending on the architecture of the quantum circuit. In Section 3 we briefly describe a formal model to represent an arbitrary quantum circuit by using a synthetic block-matrix representation that allows one to avoid the repeated involvement of multiple $Swap$ operators. Section 4 is devoted to introducing the essential background on the main features of the standard Quantum Computational Logic (QCL). Sections 5 and 6 constitute the real core of this paper: in Section 5 the so-called Multi Target Quantum Computational Logic (MT-QCL) is introduced in detail, while in Section 6 similarities and differences between QCL and MT-QCL are formally shown. The closing section is devoted to some final comments and possible further developments. \section{Different roles of the qubits in a quantum circuit} In the standard computational framework, a quantum circuit can be described by three main ingredients: an input state, a logical gate (or a sequence of logical gates) and an output state. In the quantum computational context, the input state is an $n$-dimensional unit vector $\ket x$ belonging to the complex Hilbert space $\otimes^n\mathbb C^2$, the logical gates are unitary operators and the output state is again an $n$-dimensional unit vector $\ket y\in\otimes^n\mathbb C^2.$ Of course, the input state is arbitrary and the output is univocally determined by the application of the gate to the input state; indeed, the quantum circuit is identified just as a sequence (i.e. a product) of unitary operators (i.e. one unitary operator obtained as a product of some unitary operators). The input of a certain quantum circuit can be given by one qubit only or, more frequently, by many qubits.
Let us consider an input state $\ket x=\ket{x_1}\otimes\ket{x_2}\dots\ket{x_n}=\ket{x_1\dots x_n}.$ Depending on the gates that are applied during the computation, not all the qubits play the same role. As an example, a very useful class of gates - used for many algorithmic implementations - is given by the following class of \emph{Controlled-U} gates: $$CU^{(n)}\ket{x_0\cdots x_n}= \begin{cases} \ket{x_0\dots x_n} & \mbox{if} \hspace{0.1cm} x_0=0;\\ \ket{x_0}\otimes U^{(n)}\ket{x_1\dots x_n} & \mbox{if} \hspace{0.1cm} x_0=1; \end{cases}$$ where $U^{(n)}$ is an arbitrary $n$-dimensional unitary operator. From a logical perspective, a $CU^{(n)}$ gate can be interpreted as a kind of ``\emph{double implication}" that forces the application of $U^{(n)}$ on $\ket{x_1\dots x_n}$ if and only if $x_0=1$. For this reason, we conventionally say that $\ket{x_0}$ plays the role of the ``\emph{control}" qubit (which remains unchanged under the application of the $CU^{(n)}$ gate), while $\ket{x_1}, \ket{x_2},\dots,\ket{x_n}$ are called ``\emph{target}" qubits (which, according to the value of $\ket{x_0}$, can change under the application of the $CU^{(n)}$ gate). It is not hard to show \cite{SF} that, for any $n$-ary unitary operator $U^{(n)}$, the respective $CU^{(n)}$ operator (whose arity is $n+1$) assumes the following useful matrix representation: \begin{eqnarray}\label{CU} CU^{(n)} &=& \left[ \begin{array}{c|c} I^{(n)} & 0 \\ \hline 0 & U^{(n)} \end{array} \right]= \left[ \begin{array}{c|c} I^{(n)} & 0 \\ \hline 0 & 0 \end{array} \right] + \left[ \begin{array}{c|c} 0 & 0 \\ \hline 0 & U^{(n)} \end{array} \right]=\\ &=& P_0 \otimes I^{(n)}+P_1 \otimes U^{(n)}, \end{eqnarray} where $P_0$ and $P_1$ are the projector operators: $P_0=\ket 0 \langle 0| =\left[ \begin{array}{cc} 1 & 0 \\ 0 & 0 \\ \end{array}\right]$ and $P_1=\ket 1 \langle 1|=\left[ \begin{array}{cc} 0 & 0 \\ 0 & 1 \\ \end{array}\right].$ Let us remark that a qubit can also be neither a control nor a target qubit.
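The decomposition $CU^{(n)} = P_0 \otimes I^{(n)} + P_1 \otimes U^{(n)}$ of Eq.~\eqref{CU} is easy to check numerically. A minimal NumPy sketch (the function name is ours, not the paper's): building the controlled gate from the projectors reproduces the familiar $CNot$ matrix when $U = Not$.

```python
import numpy as np

P0 = np.array([[1., 0.], [0., 0.]])   # |0><0|
P1 = np.array([[0., 0.], [0., 1.]])   # |1><1|

def controlled(U):
    """CU = P0 (x) I + P1 (x) U, as in the decomposition above."""
    d = U.shape[0]
    return np.kron(P0, np.eye(d)) + np.kron(P1, U)

Not = np.array([[0., 1.], [1., 0.]])
CNot = controlled(Not)   # the 4x4 controlled-Not gate
```

Unitarity of $U$ carries over to $CU$, since the two blocks act on orthogonal subspaces.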
Let us consider the following picture. \begin{figure} \caption{Different roles of a qubit in a quantum circuit} \label{Binary} \end{figure} In this picture the line between the second and the third qubit follows the standard representation of a $CNot$ gate, where the qubit $\ket {x_2}$ plays the role of the control qubit, $\ket {x_3}$ plays the role of the target qubit and the unitary operator $U^{(n)}$ is represented by the standard negation $Not^{(1)}$. Further, the circuit also includes the qubit $\ket{x_1}$, which does not play any role in the computation. Formally, the circuit can be represented as the following transformation: $$(I^{(1)}\otimes CNot)\ket{x_1 x_2 x_3}=\ket{x_1 x_2 (x_2\oplus x_3)},$$ where $I^{(1)}$ is the one-qubit identity matrix and $\oplus$ is the sum modulo $2$. In this case, we can distinguish the following three cases: $i)$ $\ket {x_1}$ is a qubit that is left unchanged during the computation and has no influence on the evolution of the other qubits (it is neither a control nor a target qubit); $ii)$ $\ket {x_2}$ is a control qubit, i.e. it is left unchanged during the computation but it has a relevant influence on the evolution of the third qubit; $iii)$ $\ket{x_3}$ is a target qubit and, according to the value of $\ket{x_2}$, it can change during the computation. As a further remark, let us notice that a qubit is generally not designed, in principle, to be a control, a target or neither a control nor a target qubit. Rather, when applying a sequence of different quantum gates $U_1,\dots,U_n$, it is simply possible that a given qubit plays one role under the action of $U_i$ but another role under the action of $U_j$. This point will be taken up again in the next sections. \section{Generalized Quantum Circuits}\label{Circuit} In the previous section particular attention has been devoted to the role of the $Controlled$-$U$ gates in the architecture of a quantum circuit.
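Before proceeding, the three-qubit transformation of the previous section, $(I^{(1)}\otimes CNot)\ket{x_1 x_2 x_3}=\ket{x_1 x_2 (x_2\oplus x_3)}$, can be verified exhaustively on the computational basis (the `ket` helper is ours, not the paper's):

```python
import numpy as np
from itertools import product

def ket(*bits):
    """Computational-basis vector |b1 b2 ... bk>."""
    v = np.array([1.])
    for b in bits:
        v = np.kron(v, np.eye(2)[b])
    return v

CNot = np.array([[1., 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])
U = np.kron(np.eye(2), CNot)   # I^(1) (x) CNot on a three-qubit register

def check():
    # |x1 x2 x3> must map to |x1 x2 (x2 xor x3)> for every basis state
    for x1, x2, x3 in product((0, 1), repeat=3):
        if not np.allclose(U @ ket(x1, x2, x3), ket(x1, x2, x2 ^ x3)):
            return False
    return True
```

The check makes the three roles explicit: $\ket{x_1}$ is untouched and uninvolved, $\ket{x_2}$ is untouched but drives the change, and only $\ket{x_3}$ can change.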
In this section we briefly provide the mathematical description of arbitrary quantum circuits, where arbitrary gates are involved. This representation turns out to be useful in the rest of the paper. The simplest case is represented by a circuit where only a single unary gate is applied. However, this information is not enough to mathematically describe the quantum circuit; indeed, a crucial piece of information concerns the qubit to which the gate is applied. Let us consider a unary gate $U^{(1)}$ applied to the $i$-th qubit over a $k$-dimensional circuit (i.e. a circuit such that the input and the output live in the Hilbert space $\otimes^k\mathbb C^2$). In this case, the circuit is simply represented by the following operator: $$I^{(i-1)}\otimes U^{(1)}\otimes I^{(k-i)}.$$ Similarly, it is possible that $j$ unary operators $U^{(1)}_1,\dots,U^{(1)}_j$ are applied to different $(i_1)^{st},\dots,(i_j)^{th}$ qubits, respectively, over a $k$-dimensional input state. In this case, the circuit is formally represented as: \begin{eqnarray}\label{operator} U=I^{(i_1-1)}\otimes U_1^{(1)}\otimes I^{(i_2-i_1-1)}\otimes U_{2}^{(1)}\otimes\cdots\otimes U_j^{(1)}\otimes I^{(k-i_{j})}. \end{eqnarray} In general, however, a more complex quantum circuit is given by sequences of gates, in such a way that any qubit can change under the action of several quantum gates. As an example, it is possible to consider the operator $\tilde{U}$ that corresponds to the application of $\tilde{j}$ unitary operators $\tilde{U}^{(1)}_1,\dots,\tilde{U}^{(1)}_{\tilde j}$ applied to different $(\tilde i_1)^{st},\dots,(\tilde i_{\tilde j})^{th}$ qubits. Given a $k$-dimensional input state $\ket{\psi_{in}}$, it is possible to apply $\tilde U$ not to the input, but to the $k$-dimensional state that arises from the application of $U$ to the input state $\ket{\psi_{in}}$.
In this case the circuit is described by the product $\tilde{U}U$ and, trivially, the output is $\ket{\psi_{out}}=\tilde{U}U\ket{\psi_{in}}.$ We have considered only the special case of a circuit given by unary operators only. By a suitable arrangement of the dimension of the identity operators, it is easy to generalize the argument above to the case of circuits given by compositions of gates of different and arbitrary arity. As an example, let us consider a $3$-dimensional input state $\ket{\psi}=\ket{x_1x_2x_3}$ where we first apply a $CNot$ between the first and the second qubit and a Hadamard gate $H$ to the third qubit. After, we apply a $CNot$ between the second and the third qubit. In this case, the circuit is given by the following composition: $$(CNot\otimes H)(I^{(1)}\otimes CNot).$$ This example is also useful to realize how the role of a qubit of the input state is not designed, in principle, to be a control or a target bit; indeed, the qubit $\ket{x_2}$ plays the role of target during the first ``step" of the computation and the role of control during the second ``step". All the arguments above contain the implicit assumption that an $n$-ary gate is applied to $n$ adjacent qubits. In a more general scenario, an $n$-ary gate $U^{(n)}$ could also be applied to $n$ qubits arbitrarily placed over a $k$-dimensional Hilbert space (with $k\geq n$). In this case, the very standard quantum computational procedure requires a full involvement of the $Swap$ operator, by the following strategy. Let us consider a $k$-dimensional input state $\ket x^{in}=\ket{x_1 \dots x_m \dots x_{m+n} \cdots x_k}$. For the sake of simplicity, let us consider a binary gate $U^{(2)}$ that is applied to the two non-adjacent qubits $\ket{x_m}$ and $\ket{x_{m+n}}$, leaving the other qubits of the input state $\ket x^{in}$ unchanged. First, we apply $(n-1)$ $Swap$ gates in order to ``move" $\ket{x_m}$ to the $(m+n-1)^{th}$ position.
Formally: $$\ket{x}^a=Swap_{[k;m,m+n-1]}\ket{x}^{in},$$ where $Swap_{[k;m,m+n-1]}$ represents the composition of all the $(n-1)$ $Swap$ gates we need to ``move" $\ket{x_m}$ to the $(m+n-1)^{th}$ position. At this stage it is possible to apply the binary operator $U^{(2)}$ to the two qubits $\ket{x_m}$ and $\ket{x_{m+n}}$ (which are now adjacent). Formally (by referring to Eq.~\ref{operator}): $$\ket {x}^b=(I^{(m+n-2)}\otimes U^{(2)}\otimes I^{(k-m-n)})\ket {x}^a.$$ Finally, we need to apply the inverse\footnote{It is easy to see that $Swap_{[k;m,m+n-1]}^{-1}=Swap_{[k;m,m+n-1]}$ for any value of $k,m$ and $n$.} of $Swap_{[k;m,m+n-1]}$ in order to retrieve the initial configuration: $\ket{x}^{out}=Swap_{[k;m,m+n-1]}\ket{x}^b.$ We can formally summarize this procedure by writing: $$\ket{x}^{out}=U^{(2)}_{[k;m,m+n]}\ket x^{in},$$ where $U^{(2)}_{[k;m,m+n]}$ indicates the $k$-dimensional operator that acts as the binary operator $U^{(2)}$ on the $m^{th}$ and the $(m+n)^{th}$ qubits of $\ket x^{in}$ and leaves all the other qubits of $\ket x^{in}$ unchanged; the extended representation of $U^{(2)}_{[k;m,m+n]}$ is now simply obtainable by the composition of $Swap$, $I$ and $U^{(2)}$ gates suggested by the previous description. Without any loss of generality, all the argument above can be easily generalized to the case of $n$-ary operators applied to $n$ qubits arbitrarily allocated over a $k$-dimensional input state. Of course, the ``distance" between the qubits where the gates are applied, the arity of these gates and the number of these gates are elements that can significantly increase the number of $Swap$ gates necessary to perform the required computation. This problem is fully investigated in the general context of architecture in quantum computer design. Indeed, recent topics related to efficient quantum computing among remote qubits in nearest-neighbor architectures are currently under investigation \cite{K,TKO}.
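The three-step $Swap$ recipe above can be sketched in NumPy (all helper names are ours): a chain of adjacent $Swap$ gates moves the $m^{th}$ qubit next to the $(m+n)^{th}$, the adjacent embedding of Eq.~\eqref{operator}'s form applies $U^{(2)}$, and the same chain (its own inverse, as the footnote notes) restores the configuration.

```python
import numpy as np

Swap = np.array([[1., 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]])

def embed(gate, i, k):
    """I^(i-1) (x) gate (x) I: place a q-qubit gate at qubit i
    (1-indexed) of a k-qubit register."""
    q = int(np.log2(gate.shape[0]))
    return np.kron(np.kron(np.eye(2**(i - 1)), gate), np.eye(2**(k - i - q + 1)))

def swap_chain(k, m, p):
    """Swap_{[k; m, p]}: adjacent Swap gates moving qubit m to position p."""
    S = np.eye(2**k)
    for i in range(m, p):            # swap (i, i+1), left to right
        S = embed(Swap, i, k) @ S
    return S

def two_qubit(U2, k, m, mn):
    """U^(2)_{[k; m, mn]}: apply U2 to the non-adjacent qubits m and mn."""
    S = swap_chain(k, m, mn - 1)
    return S @ embed(U2, mn - 1, k) @ S   # the Swap chain is its own inverse
```

For instance, `two_qubit(CNot, 3, 1, 3)` realizes a $CNot$ with control on qubit $1$ and target on qubit $3$.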
From a purely mathematical viewpoint, it is easy to realize how the use of multiple $Swap$ gates turns out to be particularly cumbersome. For this reason we provide the following block-matrix representation \cite{S}, which is useful in order to formally manage quantum circuits with different gates applied to non-adjacent qubits. \begin{theorem}\label{Swap}\cite{S} Let us consider the projector operators $P_0$ and $P_1$ and the Ladder operators \cite{F} $L_0=\ket 0 \langle 1| =\left[ \begin{array}{cc} 0 & 1 \\ 0 & 0 \\ \end{array}\right]$ and $L_1=\ket 1 \langle 0| =\left[ \begin{array}{cc} 0 & 0 \\ 1 & 0 \\ \end{array}\right]$ and let $P_0^{(n)}=I^{(n-1)}\otimes P_0$ (where $I^{(n-1)}$ is the $(n-1)$-dimensional identity matrix); similarly for $P_1^{(n)}, L_0^{(n)}$ and $L_1^{(n)}.$ The block-matrix representation of the operator $Swap_{[k;m,m+n]}$ (which swaps the $m^{th}$ qubit with the $(m+n)^{th}$ qubit over a $k$-dimensional input state) is given by: $$Swap_{[k;m,m+n]}=I^{(m-1)}\otimes Swap_{[n+1;1,n+1]}\otimes I^{(k-m-n)}$$ where $$Swap_{[n;1,n]}=\left[ \begin{array}{c|c} P_0^{(n-1)} & L_1^{(n-1)} \\ \hline L_0^{(n-1)} & P_1^{(n-1)} \\ \end{array}\right].$$ \end{theorem} This Theorem allows us to provide also a convenient block-matrix representation of an arbitrary binary gate $U^{(2)}$ (applied to two non-adjacent qubits) without incurring the cumbersome involvement of multiple $Swap$ gates.
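The block matrix of Theorem~\ref{Swap} can be assembled directly from the projector and ladder operators and compared against the permutation it is meant to implement (a sketch, with our own helper names):

```python
import numpy as np

P0 = np.diag([1., 0.]); P1 = np.diag([0., 1.])
L0 = np.array([[0., 1.], [0., 0.]])   # |0><1|
L1 = np.array([[0., 0.], [1., 0.]])   # |1><0|

def swap_1_n(n):
    """Swap_{[n; 1, n]} as the block matrix of Theorem 1
    (swaps the first and the n-th qubit of an n-qubit register)."""
    lift = lambda X: np.kron(np.eye(2**(n - 2)), X)   # X^(n-1) = I^(n-2) (x) X
    return np.block([[lift(P0), lift(L1)],
                     [lift(L0), lift(P1)]])
```

For $n=2$ this reproduces the elementary $Swap$ gate, and for any $n$ the resulting operator is its own inverse, matching the footnote of the previous section.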
\begin{theorem}\label{binary}\cite{S} Let $U^{(2)}$ be a binary unitary operator given by the following block-matrix representation $ U^{(2)} = \left[ \begin{array}{c|c} U_{11} & U_{12} \\ \hline U_{21}& U_{22} \\ \end{array}\right], $ where the $U_{ij}$ are $2$-dimensional square matrices given by $U_{11} = \left[ \begin{array}{cc} u_{11} & u_{12} \\ u_{21}& u_{22} \\ \end{array}\right],$ $ U_{12} = \left[ \begin{array}{cc} u_{13} & u_{14} \\ u_{23}& u_{24} \\ \end{array}\right]$ , $ U_{21} = \left[ \begin{array}{cc} u_{31} & u_{32} \\ u_{41}& u_{42} \\ \end{array}\right]$ and $ U_{22} = \left[ \begin{array}{cc} u_{33} & u_{34} \\ u_{43}& u_{44} \\ \end{array}\right].$ The block-matrix representation of $U^{(2)}_{[k;m,m+n]}$ is given by: $$U^{(2)}_{[k;m,m+n]}=I^{(m-1)}\otimes \left[ \begin{array}{c|c} U_{11}^{(n)} & U_{12}^{(n)} \\ \hline U_{21}^{(n)}& U_{22}^{(n)} \\ \end{array}\right]\otimes I^{(k-m-n)},$$ where $U_{ij}^{(n)}=I^{(n-1)}\otimes U_{ij}.$ \end{theorem} It is not hard to see that, by following very similar arguments, it is possible to obtain a similar result for an arbitrary $n$-ary gate applied to $n$ non-adjacent qubits \cite{S}. \section{The standard approach to the Quantum Computational Logic}\label{QCL} The theory of Quantum Computation has naturally inspired new forms of quantum logic, the so-called \emph{Quantum Computational Logic} (QCL) \cite{DGG,SGP}. From a semantic point of view, any formula of the language in the QCL denotes a piece of quantum information, i.e. a density operator living in a complex Hilbert space whose dimension depends on the linguistic complexity of the formula. Similarly, the logical connectives are interpreted as special examples of quantum gates. Accordingly, any formula of a quantum computational language can be regarded as a logical description of a quantum circuit.
The initial concept at the very background of the QCL is the assignment of a truth value to a quantum state that represents a formula of the language. Conventionally, the QCL assigns the truth value ``false" to the information stored by the qubit $\ket 0$ and the truth value ``true" to the qubit $\ket 1.$ Unlike classical logic, QCL turns out to be a \emph{probabilistic} logic, where the qubit $\ket\psi=c_0\ket 0 +c_1\ket 1$ logically represents a ``probabilistic superposition" of the two classical truth values, where the \emph{falsity} has probability $|c_0|^2$ and the \emph{truth} has probability $|c_1|^2$. As in the qubit case, in the standard approach of QCL a probability function $\texttt p$ is also defined, which assigns a probability value $\texttt p(\rho)$ to any density operator $\rho$ living in the space of $n$-dimensional density operators, for arbitrary $n$ (we denote this space by $\mathcal D(\otimes^n\mathbb C^2)$). Intuitively, $\texttt p(\rho)$ is the probability that the quantum information stored by $\rho$ corresponds to \emph{true} information. In order to define the function $\texttt p$, we first need to identify in any space $\otimes^n\mathbb C^2$ the two operators $P_0^{(n)}$ and $P_1^{(n)}$ as the two special projectors that represent the \emph{falsity} and the \emph{truth} properties, respectively. Before this, a crucial step is needed. In order to extend the definition of \emph{true} and \emph{false} from the space $\mathbb C^2$ of the qubits to the space $\otimes^n\mathbb C^2$ of the tensor product of $n$ qubits (i.e. of an arbitrary \emph{quregister}), the standard approach of the QCL accords with the following convention: a quregister $\ket x=\ket{x_1\dots x_n}$ is said to be \emph{false} if and only if $x_n=0$; conversely, it is said to be \emph{true} if and only if $x_n=1$. Hence, the truth value of a quregister only depends on its last component.
On this basis, it is natural to define the property of \emph{falsity} (or \emph{truth}) on the space $\otimes^n\mathbb C^2$ as the projector $P_0^{(n)}$ (or $P_1^{(n)}$) onto the span of the set of all \emph{false} (or \emph{true}) registers. Now, in accordance with the Born rule, the probability that the state $\rho$ is \emph{true} is defined as: \begin{eqnarray}\texttt p(\rho)=Tr(P_1^{(n)}\rho). \end{eqnarray} In QCL the evolution of a quregister is dictated by the application of a unitary operator (which represents a reversible transformation), while the evolution of a density operator is dictated by the application of a quantum operation (which represents a transformation that, in general, is not reversible). Of course, for any quantum gate $U$ there exists a corresponding quantum operation $^\mathcal DU$ that replicates the behaviour of the quantum gate in the context of density operators (in particular, $^\mathcal DU(\rho)=U\rho U^{\dagger}$), but the converse generally does not hold. In the language of the QCL it is usual to distinguish between \emph{semiclassical} gates (called \emph{semiclassical} because, when they are applied to the elements of the computational basis $\mathbb B=\{\ket 0, \ket 1\}$, they replicate the behaviour of their corresponding classical logical gates) and \emph{genuinely} quantum gates (called \emph{genuinely} quantum because their application to the elements of the computational basis has no classical counterpart).
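The Born-rule probability $\texttt p(\rho)=Tr(P_1^{(n)}\rho)$ defined above can be sketched in a few lines of NumPy (function and variable names are ours):

```python
import numpy as np

P1 = np.diag([0., 1.])   # the one-qubit truth projector |1><1|

def qcl_p(rho):
    """p(rho) = Tr(P1^(n) rho), with P1^(n) = I^(n-1) (x) P1:
    the probability that the last qubit of rho reads 'true'."""
    d = rho.shape[0]
    proj = np.kron(np.eye(d // 2), P1)
    return float(np.trace(proj @ rho).real)

plus = np.full((2, 2), 0.5)          # |+><+|, a balanced superposition
rho01 = np.diag([0., 1., 0., 0.])    # the two-qubit register |01><01|
```

As expected, the balanced superposition is true with probability $1/2$, while a quregister whose last component is $1$ is true with certainty.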
The semiclassical gates usually involved in the QCL are: the Identity $I$, the Negation $Not$, the controlled negation (or Xor) $CNot$ and the Toffoli gate $T$, while the genuinely quantum gates are: the Hadamard gate $\sqrt I$ (also named the square root of the identity) and the square root of the negation $\sqrt{Not}.$ In particular, the $T$ and the $Not$ gates allow us to provide a \emph{probabilistic} replacement of classical logic in virtue of the following properties: \begin{itemize} \item $\texttt p(^\mathcal D{Not}(\rho))=1-\texttt p(\rho),\, \text{for any } \rho\in\mathcal D(\otimes^n\mathbb C^2);$ \item $\texttt p(AND(\rho,\sigma))=\texttt p(^\mathcal D{T}(\rho\otimes\sigma\otimes P_0))=\texttt p(\rho) \texttt p(\sigma), \, \text{for any } \rho,\sigma \in \mathcal D(\otimes^n\mathbb C^2).$ \end{itemize} Let us notice how the conjunction is obtained through the expedient of using the ternary Toffoli gate equipped with the projector $P_0$, which plays the role of an \emph{ancilla}. Based on this approach and inspired by the intrinsic properties of quantum systems, the semantics of the QCL turns out to be strongly non-compositional and context dependent \cite{DGLS}. This approach, which may appear \emph{prima facie} a little strange, has the benefit of reflecting pretty well plenty of informal arguments that are currently used in our rational activity \cite{DGLNS}. A detailed description of the QCL and its algebraic properties is summarized in \cite{DGG,DGLS,DGS,LS}. \section{A new definition of probability in QCL} Following the brief description of a quantum circuit given in Section \ref{Circuit}, we can easily realize that the actual carrier of information is given by the target qubit, while the control qubit remains unchanged under the application of a given quantum gate; furthermore, the set of the qubits that play the role of target can change during the computation, depending on the gates that are applied from time to time.
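Both bullet properties of the previous section can be checked numerically for one-qubit states. The sketch below (helper names ours) uses the Toffoli gate with the ancilla $P_0$ exactly as in the text:

```python
import numpy as np

P0 = np.diag([1., 0.]); P1 = np.diag([0., 1.])
Not = np.array([[0., 1.], [1., 0.]])
T = np.eye(8); T[6:8, 6:8] = Not   # Toffoli: flip qubit 3 iff qubits 1,2 are 1

def p(rho):
    """p(rho) = Tr(P1^(n) rho): truth probability of the last qubit."""
    proj = np.kron(np.eye(rho.shape[0] // 2), P1)
    return float(np.trace(proj @ rho).real)

def AND(rho, sigma):
    """p(T (rho (x) sigma (x) P0) T+): the probabilistic conjunction."""
    state = np.kron(np.kron(rho, sigma), P0)   # P0 is the |0><0| ancilla
    return p(T @ state @ T.conj().T)

rho = np.diag([0.7, 0.3])     # one-qubit state with p = 0.3
sigma = np.diag([0.4, 0.6])   # one-qubit state with p = 0.6
```

For these sample states, negation sends $0.3$ to $0.7$ and the Toffoli conjunction returns $0.3 \cdot 0.6 = 0.18$.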
On the other hand, despite its remarkable expressive power, the very preliminary notions of the QCL introduced in Section \ref{QCL} seem not to take this fact suitably into account, assuming that the ``useful" information is always stored by the last qubit only. Indeed, the assignment of the truth value of a given composition of qubits (a quregister), according to what we have previously introduced, only depends on its last component. For this reason, all the gates that are involved in the language of the QCL are one-target gates only. This restriction is basically unnecessary and it could also seem a little far from the architecture of a real quantum circuit. For this reason, this section is devoted to introducing an extension of the QCL (which we will call \emph{Multi Target} QCL, briefly MT-QCL) and to showing some preliminary results. The immediate benefit of the MT-QCL with respect to the QCL is given by the fact that the MT-QCL can also involve multi-target gates in the language (such as the $Swap$ gate, the square root of $Swap$ gate $\sqrt{Swap}$ and the Fredkin gate $F$) without any loss of generality. Further, in this framework the standard QCL can be seen as a particular (one-target) case of the MT-QCL. Similarly to the case of QCL, in order to introduce the MT-QCL the essential step is the definition of probability. First, let us consider a very simple circuit given by an $n$-dimensional input state $\ket x= \ket{x_1\dots x_n}$ and only one operator $U^{(n)}$ acting on the space $\otimes^n\mathbb C^2$ as $U^{(n)}\ket{x}=\ket{y_1 \dots y_n}$.
Let us consider the two following sets of indexes, strictly dependent on the operator $U^{(n)}$: $$C_{U^{(n)}}=\{i:\ket{x_i}=\ket{y_i}\} \,\,\, \text{and} \,\,\, T_{U^{(n)}}=\{j:\ket{x_j}\neq\ket{y_j}\}.$$ Intuitively, $C_{U^{(n)}}$ selects the positions of the qubits of the input state that are not affected by $U^{(n)}$; conversely for $T_{U^{(n)}}.$ Conveniently, let us call any $i$ belonging to $C_{U^{(n)}}$ a \emph{control position} and any $j$ belonging to $T_{U^{(n)}}$ a \emph{target position}.\footnote{We allow a slight abuse of the terms \emph{target} and \emph{control}, remarking that, for example, if we refer to the picture of Section \ref{Circuit}, the qubit $\ket{x_1}$ is not a control qubit but, according to this notation, it assumes a \emph{control position} in the quantum circuit where it is located.} On this basis, we define a probability $\texttt P$ related not only to the state but also to the operator $U^{(n)}$, i.e. we associate a probability to the couple $[U^{(n)},\ket x]$ by the following definition: \begin{definition}[Probability in MT-QCL]\label{Prob} $$\texttt P[U^{(n)},\ket x]=Tr[(\mathcal P_1\otimes\cdots\otimes\mathcal P_n)\,^{\mathcal D}U^{(n)}(\rho_{\ket x})],$$ where $U^{(n)}$ is a $n$-dimensional unitary operator, $\rho_{\ket x}=\ket x\langle x|$ and $$\mathcal P_i=\begin{cases} I \,\,\,\, \text{if} \,\, i\in C_{U^{(n)}}; \\ P_1 \,\,\,\, \text{if} \,\, i\in T_{U^{(n)}}. \end{cases}$$ \end{definition} Let us remark that the quantity $\mathcal P_1\otimes\dots\otimes\mathcal P_n$ is univocally determined by the operator $U^{(n)}$; moreover, the quantity $\mathcal P_1\otimes\dots\otimes\mathcal P_n$ is a projector operator for any $U^{(n)}$; hence, $\texttt P[U^{(n)},\ket x]$ is a well-defined probability.
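Definition~\ref{Prob} can be sketched as follows. Since computing the sets $C_{U^{(n)}}$ and $T_{U^{(n)}}$ automatically is beside the point here, the target positions are passed in by hand (all names are ours, not the paper's):

```python
import numpy as np

I2 = np.eye(2); P1 = np.diag([0., 1.])

def mt_P(U, rho, targets, n):
    """P[U, rho] = Tr[(P_1 (x) ... (x) P_n) U rho U+],
    with P_i = P1 at target positions and I elsewhere (Definition 1).
    `targets` is the set T_U of 1-indexed target positions."""
    proj = np.array([[1.]])
    for i in range(1, n + 1):
        proj = np.kron(proj, P1 if i in targets else I2)
    return float(np.trace(proj @ U @ rho @ U.conj().T).real)
```

With an empty target set the projector is the identity, which is exactly why $\texttt P[^{\mathcal D}I^{(n)},\rho]=1$ in the theorem that follows.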
Further, let us notice that, even if the definition is given in the special case where the input state is pure, it can be naturally generalized, without any loss of generality, to the case where the input state $\rho$ is a mixed state (formally assuming the form $\texttt P[^{\mathcal D}U^{(n)},\rho]$). Let us also notice that the identity operator plays a very special role in this framework, by virtue of the following theorem. \begin{theorem}\ \\ $\texttt P[^{\mathcal D}I^{(n)},\rho]=1$ for any $\rho\in\mathcal D(\otimes^n\mathbb C^2).$ \begin{proof}\ \\ It trivially follows by Def.~(\ref{Prob}). \end{proof} \end{theorem} As discussed in Section \ref{Circuit}, when several quantum gates are involved along a computation, it is possible that the same qubit plays, for example, the role of target during one ``step" of the computation and the role of control during another ``step". Reasonably, we stipulate that if a qubit $\ket{x_i}$ assumes a target position at least once during the computation, then the position $i$ is considered a target position in the evaluation of the probability given by Def.~\ref{Prob}. On this basis, it is possible to naturally generalize Def.~\ref{Prob} to the case where $U^{(n)}$ is not a one-gate quantum circuit, but an arbitrary circuit given by an arbitrary composition of quantum gates. \begin{example}\ \\ Let us consider a quantum circuit given by the following composition of gates $$U^{(3)}=(CNot\otimes H)(I^{(1)}\otimes CNot)$$ and let $\rho^{in}=\rho\otimes\sigma\otimes\tau$, where $\rho=\frac{1}{2}\left[ \begin{array}{cc} 1+r_3 & r_1-ir_2 \\ r_1+ir_2 & 1-r_3 \\ \end{array}\right],$ $\sigma=\frac{1}{2}\left[ \begin{array}{cc} 1+s_3 & s_1-is_2 \\ s_1+is_2 & 1-s_3 \\ \end{array}\right]$ and $\tau=\frac{1}{2}\left[ \begin{array}{cc} 1+t_3 & t_1-it_2 \\ t_1+it_2 & 1-t_3 \\ \end{array}\right].$ We observe that only the first qubit $\rho$ never assumes a \emph{target position}. Hence, by Def. 
\ref{Prob}, we have that:\footnote{Notice that the probability $\texttt p$ we refer to is the one introduced in the context of the standard QCL (see Eq. 4.1). On this basis, we are using this example to express the MT-QCL probability $\texttt P$ in terms of the QCL probability $\texttt p$.} \end{example} $$\texttt P[U^{(3)}, \rho^{in}]=Tr[(I^{(1)}\otimes P_1\otimes P_1)\,^{\mathcal D}U^{(3)}\rho^{in}]=$$ $$=\frac{1}{4}[(1-2\texttt p_{\rho})(1-2\texttt p_{\sigma})-1](t_1-1).$$ Let us immediately notice that the main difference between the definition of probability given in the standard QCL and the one given in the MT-QCL is that the former is related to the quantum state, while the latter is related to the quantum circuit. Indeed, given a certain output state $\ket x^{out}$, the QCL assigns a probability to $\ket x^{out}$ independently of the computation from which the state comes; conversely, the probability assigned by the MT-QCL is strictly dependent on the \emph{history} of $\ket x^{out}.$ This is made evident by the next example. \begin{example}\ \\ Let us consider the input state $\rho\in\mathcal D(\otimes^2\mathbb C^2)$ and let us also consider the two circuits given by the following compositions of quantum gates: $U_1=(\sqrt I\otimes I)(\sqrt I\otimes I)$ and $U_2=I^{(2)}.$ Obviously, $U_1=U_2$ and $^{\mathcal D}U_1(\rho)=\,^{\mathcal D}U_2(\rho)=\rho$. But $$\texttt P[^{\mathcal D}U_1,\rho]=Tr[(P_1\otimes I)\rho]\neq Tr[(I\otimes I)\rho]=\texttt P[^{\mathcal D}U_2,\rho].$$ Indeed, even if $U_1=U_2$, the first position of $U_1$ is a target position, while the first position of $U_2$ is a control position. This difference is captured by the MT-QCL by virtue of the new definition of probability. \end{example} The fact that two, let us say, ``equivalent" but not ``identical" circuits (such as $U_1$ and $U_2$ of the previous example) provide different probabilistic results in the framework of the MT-QCL is not so surprising. 
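The example above can be checked directly. The sketch below assumes, as is customary in the QCL literature, that $\sqrt I$ denotes the Hadamard gate (a square root of the identity); this reading, like the variable names, is our own:

```python
# Numerical check of the U1 vs U2 example, assuming (as in the QCL
# literature) that sqrt(I) denotes the Hadamard gate, a square root of
# the identity.  Names and conventions here are ours.
import numpy as np

I2 = np.eye(2)
P1 = np.diag([0.0, 1.0])
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)   # H @ H = I

U1 = np.kron(H, I2) @ np.kron(H, I2)   # (sqrt I ⊗ I)(sqrt I ⊗ I)
U2 = np.eye(4)                         # I^(2)

rho = np.diag([1.0, 0.0, 0.0, 0.0])    # |00><00|

assert np.allclose(U1, U2)             # the two circuits are "equivalent"

# MT-QCL probabilities: position 1 is a target in U1, a control in U2.
p1 = np.trace(np.kron(P1, I2) @ U1 @ rho @ U1.conj().T).real
p2 = np.trace(np.kron(I2, I2) @ U2 @ rho @ U2.conj().T).real
```

For the pure input $\ket{00}$ the two probabilities differ ($0$ versus $1$), even though the two circuit matrices coincide.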
Indeed, even in real quantum computation, two equivalent (but not identical) circuits applied to the same input state can provide different output distributions. In Fig.~\ref{diff} we show the results of running two equivalent (but not identical) identity circuits applied to the same input state $\ket{\psi}=\ket{00000}.$ The probability distributions are obtained by running each circuit a large number of times on the real IBM quantum computer \cite{De}. As we can see, even if the expected result in both cases is, obviously, $\ket{\psi}$, the probability distributions in the two cases are different. This difference arises from the fact that, even if from an operational point of view the two circuits are exactly equivalent, the computations that are executed by the quantum computer are essentially different. \begin{figure} \caption{Output distributions of two equivalent (but not identical) identity circuits applied to the same input state $\ket{00000}$ on the IBM quantum computer.\label{diff}} \end{figure} \section{Some results} Now let us generalize the notation that we have introduced in Section \ref{Circuit}, and let us indicate by $U^{(\alpha)}_{[k;m,m+n_1,\dots,m+n_{\alpha-1}]}$ an $\alpha$-ary gate that is applied to the $m^{th}, (m+n_1)^{th},\dots,(m+n_{\alpha-1})^{th}$ qubits, respectively, and let us calculate the values that the new definition of probability assumes for some particular gates. \begin{theorem}\label{T1}\ \\ Let $\rho_1,\rho_2,\dots,\rho_k$ be density operators living in the two-dimensional Hilbert space, i.e.\ $\rho_i\in\mathcal D(\mathbb C^2)$, with $\rho_i=\frac{1}{2}\left[ \begin{array}{cc} 1+r_{i3} & r_{i1}-ir_{i2} \\ r_{i1}+ir_{i2} & 1-r_{i3} \\ \end{array}\right]$. 
We have that: \begin{enumerate} \item $\texttt P[^{\mathcal D}Not^{(1)}_{[k;i]},(\rho_1\otimes\cdots\otimes\rho_i\otimes\cdots\otimes\rho_k)]=\frac{1+r_{i3}}{2};$ \item $\texttt P[^{\mathcal D}\sqrt I^{(1)}_{[k;i]},(\rho_1\otimes\cdots\otimes\rho_i\otimes\cdots\otimes\rho_k)]=\frac{1- r_{i1}}{2};$ \item $\texttt P[^{\mathcal D}\sqrt {Not}^{(1)}_{[k;i]},(\rho_1\otimes\cdots\otimes\rho_i\otimes\cdots\otimes\rho_k)]=\frac{1- r_{i2}}{2};$ \item $\texttt P[^{\mathcal D}CNot_{[k;m,m+n]},(\rho_1\otimes\dots\otimes\rho_k)]=\frac{1-r_{m3}r_{(m+n)3}}{2};$ \item $\texttt P[^{\mathcal D}T_{[k;m,m+n,m+n+p]},(\rho_1\otimes\dots\otimes\rho_k)]=\frac{1}{4}(2+(r_{m3}(r_{(m+n)3}-1)-r_{(m+n)3}-1)\cdot r_{(m+n+p)3}).$ \end{enumerate} \begin{proof} \begin{enumerate} \item \begin{eqnarray*} & & \texttt P[^{\mathcal D}Not_{[k;i]}^{(1)},(\rho_1\otimes\dots\otimes\rho_i\otimes\dots\otimes\rho_k)]=\\ &=& Tr[(I^{(i-1)}\otimes P_1\otimes I^{(k-i)})\cdot\,^{\mathcal D}(I^{(i-1)}\otimes Not\otimes I^{(k-i)})\cdot (\rho_1\otimes\dots\otimes\rho_i\otimes\cdots\otimes\rho_k)]=\\ &=&Tr[(I^{(i-1)}\cdot\,^{\mathcal D}I^{(i-1)}(\rho_1\otimes\cdots\otimes\rho_{i-1}))\otimes(P_1\cdot\,^{\mathcal D}Not\,\rho_i)\otimes \\ &\otimes&(I^{(k-i)}\cdot\,^{\mathcal D}I^{(k-i)}(\rho_{i+1}\otimes\cdots\otimes\rho_{k}))]=\frac{1+r_{i3}}{2}; \end{eqnarray*} \item follows in a similar way; \item follows in a similar way; \item \begin{eqnarray*} & & \texttt P[^{\mathcal D} CNot_{[k;m,m+n]},(\rho_1\otimes\dots\otimes\rho_m\otimes\dots\otimes\rho_{m+n}\otimes\dots\otimes\rho_k)]=\\ &=&Tr[(I^{(m+n-1)}\otimes P_1\otimes I^{(k-m-n)})\cdot\,^{\mathcal D}Swap_{[k;m,m+n-1]}\,(^{\mathcal D}(I^{(m+n-2)}\otimes CNot\otimes I^{(k-m-n)})\cdot \\ &\cdot& (^{\mathcal D}Swap_{[k;m,m+n-1]}(\rho_1\otimes\dots\otimes\rho_k)))]=\\ & & \text{(by Theorem \ref{binary})}\, =Tr(P_1(CNot(\rho_m\otimes\rho_{m+n})))=\frac{1-r_{m3}r_{(m+n)3}}{2}; \end{eqnarray*} \item follows in a similar way. 
\end{enumerate} \end{proof} \end{theorem} Let us notice that the previous results are in accord with the standard QCL \cite{DGS} in cases (1), (2) and (3). Also in case (4), after ``moving" the qubit $\rho_m$ to position $(m+n-1)$ by a suitable use of $Swap$ gates (in accord with Theorem \ref{Swap}), so as to have $\rho_m$ and $\rho_{m+n}$ in two adjacent positions, we perfectly recover the probability values obtained by the standard QCL. Item (5) can be treated in a very similar way. As we have discussed in Section \ref{Circuit}, the action of the $Swap$ gate allows us to generalize the action of $n$-ary gates to the case where the qubits to which the gates are applied are not adjacent to one another. However, this has no influence on the calculation of the probability; as an example, $\texttt P[^{\mathcal D} CNot_{[k;m,m+n]},(\rho_1\otimes\cdots\otimes\rho_m\otimes\cdots\otimes\rho_{m+n}\otimes\cdots\otimes\rho_k)]$ is simply equal to $\texttt P[^{\mathcal D} CNot,(\rho_m\otimes\rho_{m+n})]$; hence, without loss of generality, we can confine ourselves to calculating the probability in the special case in which the qubits the gate acts on are adjacent. \begin{theorem}\ \\ Let $\rho_1,...,\rho_k$ be as defined in Theorem \ref{T1}. \begin{enumerate} \item $\texttt P\,[^{\mathcal D}Swap,(\rho_1\otimes\rho_2)]= \mathcal P(\rho_1)\mathcal P(\rho_2);$ \item $\texttt P\,[^{\mathcal D}\sqrt{Swap},(\rho_1\otimes\rho_2)]= \mathcal P(\rho_1)\mathcal P(\rho_2);$ \item $\texttt P\,[^{\mathcal D}F,(\rho_1\otimes\rho_2\otimes\rho_3)]= \mathcal P(\rho_2)\mathcal P(\rho_3).$ \end{enumerate} \begin{proof}\ \\ These trivially follow from Definition \ref{Prob} by straightforward calculations. \end{proof} \end{theorem} Let us notice that, by using the definition of probability provided in the standard QCL, the results of the previous theorem would assume remarkably different values. 
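As a sanity check, item (4) of Theorem \ref{T1} and item (1) of the theorem just stated can be verified numerically. In the sketch below (names are ours; $\mathcal P(\rho)$ is read as $Tr(P_1\rho)$, the probability of reading the qubit in state $\ket 1$) we build random one-qubit Bloch states and compare both sides:

```python
# Spot-check of P[CNot, rho1 ⊗ rho2] = (1 - r3 s3)/2 and of
# P[Swap, rho1 ⊗ rho2] = P(rho1) P(rho2), with P(rho) = Tr(P1 rho).
# The Bloch-vector encoding matches the matrices in the theorem above.
import numpy as np

I2 = np.eye(2)
P1 = np.diag([0.0, 1.0])
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)
SWAP = np.array([[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]], dtype=float)
PAULI = [np.array([[0, 1], [1, 0]]), np.array([[0, -1j], [1j, 0]]), np.diag([1, -1])]

def bloch_state(r):
    """Density matrix (I + r . sigma)/2 for a Bloch vector r."""
    return (I2 + sum(ri * si for ri, si in zip(r, PAULI))) / 2

rng = np.random.default_rng(0)
r, s = rng.normal(size=3), rng.normal(size=3)
r, s = r / (2 * np.linalg.norm(r)), s / (2 * np.linalg.norm(s))  # inside Bloch ball
rho1, rho2 = bloch_state(r), bloch_state(s)
rho = np.kron(rho1, rho2)

# CNot: control in the first position, target in the second -> projector I ⊗ P1.
p_cnot = np.trace(np.kron(I2, P1) @ CNOT @ rho @ CNOT.conj().T).real
# Swap: both positions are targets -> projector P1 ⊗ P1.
p_swap = np.trace(np.kron(P1, P1) @ SWAP @ rho @ SWAP.conj().T).real

ok_cnot = np.isclose(p_cnot, (1 - r[2] * s[2]) / 2)
ok_swap = np.isclose(p_swap, np.trace(P1 @ rho1).real * np.trace(P1 @ rho2).real)
```

Both identities hold for arbitrary (also mixed) product inputs, as the theorems assert.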
However, the most remarkable results concern the case where the input state is not a product state. On this basis, many results have been obtained regarding one-target gates \cite{DGLS,SF}. Following similar arguments and in accordance with the new definition of probability, we show similar results also for the $Swap$, $\sqrt{Swap}$ and $F$ gates. \begin{theorem}\label{binary2}\ \\ Let $U$ be a binary operator $U=\left[ \begin{array}{c|c} U_{11} & U_{12} \\ \hline U_{21} & U_{22} \\ \end{array}\right]$ (as represented in Theorem (\ref{binary})) and assume that $U$ is not a control-target operator. Let $\rho\in\mathcal D(\otimes^k\mathbb C^2).$ Then: $$\texttt P[^{\mathcal D}U_{[k;m,m+n]},\rho]=Tr[(\Lambda_U^{(m+n)}\otimes I^{(k-m-n)})\rho],$$ where $\Lambda_U^{(n+1)}=\left[ \begin{array}{c|c} I^{(n-1)}\otimes(U_{21}^{\dagger}P_1U_{21}) & I^{(n-1)}\otimes(U_{21}^{\dagger}P_1U_{22}) \\ \hline I^{(n-1)}\otimes(U_{22}^{\dagger}P_1U_{21}) & I^{(n-1)}\otimes(U_{22}^{\dagger}P_1U_{22}) \\ \end{array}\right].$ \begin{proof}\ \\ By Def.~(\ref{Prob}), we have: \begin{eqnarray*} & & \texttt P[^{\mathcal D}U_{[k;m,m+n]},\rho]=\\ &=& Tr[(P_1^{(m)}\otimes P_1^{(n)}\otimes I^{(k-m-n)})U_{[k;m,m+n]}\,\,\rho\,\, U_{[k;m,m+n]}^{\dagger}] =\\ &=& Tr[(U_{[k;m,m+n]}^{\dagger}(P_1^{(m)}\otimes P_1^{(n)}\otimes I^{(k-m-n)})U_{[k;m,m+n]})\,\rho]=\\ & & \text{(by Theorem (\ref{binary}))} = Tr[(I^{(m-1)}\otimes \left[ \begin{array}{c|c} U_{11}^{(n)\dagger} & U_{21}^{(n)\dagger} \\ \hline U_{12}^{(n)\dagger} & U_{22}^{(n)\dagger} \\ \end{array}\right] \otimes I^{(k-m-n)})\cdot \\ &\cdot& (I^{(m-1)}\otimes (P_1\otimes P_1^{(n)}) \otimes I^{(k-m-n)}) \cdot \\ &\cdot& (I^{(m-1)}\otimes \left[ \begin{array}{c|c} U_{11}^{(n)} & U_{12}^{(n)} \\ \hline U_{21}^{(n)} & U_{22}^{(n)} \\ \end{array}\right] \otimes I^{(k-m-n)}) \rho]=\\ &=& Tr[(I^{(m-1)}\otimes(\left[ \begin{array}{c|c} U_{11}^{(n)\dagger} & U_{21}^{(n)\dagger} \\ \hline U_{12}^{(n)\dagger} & U_{22}^{(n)\dagger} \\ \end{array}\right]\cdot 
\left[ \begin{array}{c|c} {\bf 0}^{(n)} & {\bf 0}^{(n)} \\ \hline {\bf 0}^{(n)} & P_1^{(n)} \\ \end{array}\right]\cdot \left[ \begin{array}{c|c} U_{11}^{(n)} & U_{12}^{(n)} \\ \hline U_{21}^{(n)} & U_{22}^{(n)} \\ \end{array}\right])\otimes I^{(k-m-n)})\,\rho], \end{eqnarray*} where ${\bf 0}^{(n)}$ is the $n$-dimensional null matrix. Let us notice that \begin{eqnarray*} & \left[ \begin{array}{c|c} U_{11}^{(n)\dagger} & U_{21}^{(n)\dagger} \\ \hline U_{12}^{(n)\dagger} & U_{22}^{(n)\dagger} \\ \end{array}\right]\cdot \left[ \begin{array}{c|c} {\bf 0}^{(n)} & {\bf 0}^{(n)} \\ \hline {\bf 0}^{(n)} & P_1^{(n)} \\ \end{array}\right]\cdot \left[ \begin{array}{c|c} U_{11}^{(n)} & U_{12}^{(n)} \\ \hline U_{21}^{(n)} & U_{22}^{(n)} \\ \end{array}\right]=\\ &=\left[ \begin{array}{c|c} {\bf 0}^{(n)} & U_{21}^{(n)\dagger} P_1^{(n)} \\ \hline {\bf 0}^{(n)} & U_{22}^{(n)\dagger} P_1^{(n)} \\ \end{array}\right]\cdot \left[ \begin{array}{c|c} U_{11}^{(n)} & U_{12}^{(n)} \\ \hline U_{21}^{(n)} & U_{22}^{(n)} \\ \end{array}\right]= \left[ \begin{array}{c|c} U_{21}^{(n){\dagger}}P_1^{(n)}U_{21}^{(n)} & U_{21}^{(n){\dagger}}P_1^{(n)}U_{22}^{(n)} \\ \hline U_{22}^{(n){\dagger}}P_1^{(n)}U_{21}^{(n)} & U_{22}^{(n){\dagger}}P_1^{(n)}U_{22}^{(n)} \\ \end{array}\right]=\\ &= \left[ \begin{array}{c|c} I^{(n-1)}\otimes(U_{21}^{{\dagger}}P_1U_{21}) & I^{(n-1)}\otimes(U_{21}^{{\dagger}}P_1U_{22}) \\ \hline I^{(n-1)}\otimes(U_{22}^{{\dagger}}P_1U_{21}) & I^{(n-1)}\otimes(U_{22}^{{\dagger}}P_1U_{22}) \\ \end{array}\right]=\Lambda_{U}^{(n+1)} \end{eqnarray*} Hence, $$\texttt P[^{\mathcal D}U_{[k;m,m+n]},\rho]=Tr[(I^{(m-1)}\otimes\Lambda_U^{(n+1)}\otimes I^{(k-m-n)})\rho]=Tr[(\Lambda_U^{(m+n)}\otimes I^{(k-m-n)})\rho].$$ \end{proof} \end{theorem} In this theorem we have considered the case where the binary operator $U$ is not a control-target operator. In accordance with Def.~\ref{Prob}, this assumption is essential in order to establish the correct expression of the probability of the circuit. 
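The key block identity in the proof, $U^{\dagger}(P_1\otimes P_1)U=\Lambda_U$, can be checked numerically for a concrete non-control-target binary gate. The following minimal sketch (with $k=2$ and $m=n=1$, so $I^{(n-1)}$ is trivial; names are ours) uses $\sqrt{Swap}$:

```python
# Check that U† (P1 ⊗ P1) U equals the block matrix Lambda built from the
# blocks U21, U22, for the (non-control-target) binary gate sqrt(Swap).
import numpy as np

P1 = np.diag([0.0, 1.0])
a, b = (1 + 1j) / 2, (1 - 1j) / 2
SQSWAP = np.array([[1, 0, 0, 0],
                   [0, a, b, 0],
                   [0, b, a, 0],
                   [0, 0, 0, 1]])

U11, U12 = SQSWAP[:2, :2], SQSWAP[:2, 2:]
U21, U22 = SQSWAP[2:, :2], SQSWAP[2:, 2:]

Lambda = np.block([[U21.conj().T @ P1 @ U21, U21.conj().T @ P1 @ U22],
                   [U22.conj().T @ P1 @ U21, U22.conj().T @ P1 @ U22]])

direct = SQSWAP.conj().T @ np.kron(P1, P1) @ SQSWAP
ok = np.allclose(Lambda, direct)
# For sqrt(Swap), Lambda reduces to P1 ⊗ P1.
ok2 = np.allclose(Lambda, np.kron(P1, P1))
```

The same check applies verbatim to any $4\times 4$ unitary whose block structure is not of control-target form.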
In the case where the binary operator $U$ is a control-target gate, this expression will change, as shown in the following theorem. \begin{theorem}\ \\ Let $C{\tilde U}$ be a binary control-target operator $C{\tilde U}=\left[ \begin{array}{c|c} I & 0 \\ \hline 0 & \tilde{U} \\ \end{array}\right]$ (as represented in (\ref{CU})), with $\tilde U$ an arbitrary unary gate. Let $\rho\in\mathcal D(\otimes^k\mathbb C^2).$ Then: $$\texttt P[^{\mathcal D}C{\tilde U}_{[k;m,m+n]},\rho]=Tr[(\Lambda_{C{\tilde U}}^{(m+n)}\otimes I^{(k-m-n)})\rho],$$ where $\Lambda_{C{\tilde U}}^{(n+1)}=\left[ \begin{array}{c|c} P_1^{(n)} & {\bf 0}^{(n)} \\ \hline {\bf 0}^{(n)} & I^{(n-1)}\otimes(\tilde{U}^{\dagger}P_1\tilde{U}) \\ \end{array}\right].$ \begin{proof}\ \\ By Def.~(\ref{Prob}), we have: \begin{eqnarray*} & & \texttt P[^{\mathcal D}C\tilde{U}_{[k;m,m+n]},\rho]=\\ &=& Tr[(I^{(m)}\otimes P_1^{(n)}\otimes I^{(k-m-n)})C\tilde{U}_{[k;m,m+n]}\,\,\rho\,\, (C\tilde{U}_{[k;m,m+n]})^{\dagger}] =\\ &=& Tr[((C\tilde{U}_{[k;m,m+n]})^{\dagger}(I^{(m)}\otimes P_1^{(n)}\otimes I^{(k-m-n)})C\tilde{U}_{[k;m,m+n]})\,\rho]=\\ & & \text{(by Theorem (\ref{binary}))} = Tr[(I^{(m-1)}\otimes \left[ \begin{array}{c|c} I^{(n)} & {\bf 0}^{(n)} \\ \hline {\bf 0}^{(n)} & I^{(n-1)}\otimes \tilde{U}^{\dagger} \\ \end{array}\right] \otimes I^{(k-m-n)})\cdot \\ &\cdot& (I^{(m)}\otimes P_1^{(n)} \otimes I^{(k-m-n)}) \cdot (I^{(m-1)}\otimes \left[ \begin{array}{c|c} I^{(n)} & {\bf 0}^{(n)} \\ \hline {\bf 0}^{(n)} & I^{(n-1)}\otimes \tilde{U} \\ \end{array}\right] \otimes I^{(k-m-n)}) \rho]=\\ &=& Tr[(I^{(m-1)}\otimes(\left[ \begin{array}{c|c} I^{(n)} & {\bf 0}^{(n)} \\ \hline {\bf 0}^{(n)} & I^{(n-1)}\otimes \tilde{U}^{\dagger} \\ \end{array}\right]\cdot \left[ \begin{array}{c|c} P_1^{(n)} & 0 \\ \hline 0 & P_1^{(n)} \\ \end{array}\right]\cdot\\ & &\left[ \begin{array}{c|c} I^{(n)} & {\bf 0}^{(n)} \\ \hline {\bf 0}^{(n)} & I^{(n-1)}\otimes \tilde{U}\\ \end{array}\right])\otimes I^{(k-m-n)})\,\rho]. 
\end{eqnarray*} Let us notice that (by the \emph{mixed-product property} of the tensor product): \begin{eqnarray*} & \left[ \begin{array}{c|c} I^{(n)} & {\bf 0}^{(n)} \\ \hline {\bf 0}^{(n)} & I^{(n-1)}\otimes \tilde{U}^{\dagger} \\ \end{array}\right]\cdot \left[ \begin{array}{c|c} P_1^{(n)} & 0 \\ \hline 0 & P_1^{(n)} \\ \end{array}\right]\cdot \left[ \begin{array}{c|c} I^{(n)} & {\bf 0}^{(n)} \\ \hline {\bf 0}^{(n)} & I^{(n-1)}\otimes \tilde{U}\\ \end{array}\right]=\\ &=\left[ \begin{array}{c|c} P_1^{(n)} & {\bf 0}^{(n)}\\ \hline {\bf 0}^{(n)} & (I^{(n-1)}\otimes {\tilde U}^{\dagger}) P_1^{(n)} \\ \end{array}\right]\cdot \left[ \begin{array}{c|c} I^{(n)} & {\bf 0}^{(n)} \\ \hline {\bf 0}^{(n)} & I^{(n-1)}\otimes {\tilde U} \\ \end{array}\right]=\\ & \left[ \begin{array}{c|c} P_1^{(n)} & {\bf 0}^{(n)} \\ \hline {\bf 0}^{(n)} & (I^{(n-1)}\otimes \tilde{U}^{\dagger})P_1^{(n)}(I^{(n-1)}\otimes \tilde{U}) \\ \end{array}\right]= \left[ \begin{array}{c|c} P_1^{(n)} & {\bf 0}^{(n)} \\ \hline {\bf 0}^{(n)} & I^{(n-1)}\otimes(\tilde{U}^{\dagger}P_1\tilde{U}) \\ \end{array}\right]=\Lambda_{C\tilde{U}}^{(n+1)} \end{eqnarray*} Hence, our claim. \end{proof} \end{theorem} \begin{corollary} $$\texttt P(\sqrt{Swap}_{[k;m,m+n]},\rho)=Tr[(P_1^{(m)}\otimes P_1^{(n)}\otimes I^{(k-m-n)})\rho].$$ \begin{proof}\ \\ By a straightforward calculation, it can be seen that: $$\Lambda_{\sqrt{Swap}_{[k;m,m+n]}}^{(n+1)}=\left[ \begin{array}{c|c} {\bf 0}^{(n)} & {\bf 0}^{(n)} \\ \hline {\bf 0}^{(n)} & P_1^{(n)} \\ \end{array}\right]=P_1\otimes P_1^{(n)}.$$ Hence, $\Lambda_{\sqrt{Swap}_{[k;m,m+n]}}^{(m+n)}=I^{(m-1)}\otimes \Lambda_{\sqrt{Swap}_{[k;m,m+n]}}^{(n+1)}=P_1^{(m)}\otimes P_1^{(n)}.$ Our claim then easily follows by a direct application of Theorem (\ref{binary2}). 
\end{proof} \end{corollary} Following a similar reasoning, it is easy to verify that $$\Lambda_{Swap_{[k;m,m+n]}}^{(n+1)}=\Lambda_{\sqrt{Swap}_{[k;m,m+n]}}^{(n+1)},$$ hence $\texttt P(Swap_{[k;m,m+n]},\rho)=\texttt P(\sqrt{Swap}_{[k;m,m+n]},\rho)$ for any $\rho\in\mathcal D(\otimes^k\mathbb C^2)$. It is remarkable that, in the evaluation of these probability values, the gates $Swap$ and $\sqrt{Swap}$ play the same role as the binary identity gate. It is also easy to verify that the probability value of the $CNot_{[k;m,m+n]}$ gate in the context of the MT-QCL is in accord with the standard QCL. As a final example concerning a ternary gate, we also calculate the probability of the generalized Fredkin gate. \begin{theorem}\ \\ Let $\rho\in\mathcal D(\otimes^k\mathbb C^2)$ and let $F_{[k;m,m+n,m+n+l]}$ be the $k$-dimensional Fredkin gate. We have that $$\texttt P(F_{[k;m,m+n,m+n+l]},\rho)=Tr[(I^{(m-1)}\otimes\Lambda_F^{(n+l+1)}\otimes I^{(k-m-n-l)})\rho],$$ where $\Lambda_F^{(n+l+1)}= P_0\otimes P_1^{(n)}\otimes P_1^{(l)}+P_1\otimes P_1^{(l)}\otimes P_1^{(n)}.$ \begin{proof}\ \\ The proof follows by considering the Fredkin gate $F_{[k;m,m+n,m+n+l]}$ (let us simply say $F$) as a Control-Swap gate, where the $m^{th}$ qubit plays the role of control and the $(m+n)^{th}$ and the $(m+n+l)^{th}$ qubits play the role of target. 
For this reason, in accordance with Def.~\ref{Prob}, the probability is given by: $$\texttt P(F,\rho)= Tr[(I^{(m)}\otimes P_1^{(n)}\otimes P_1^{(l)}\otimes I^{(k-m-n-l)})\,F\rho F^{\dagger}].$$ Further, by considering the representation of the Fredkin gate as \cite{SF}: $$F^{(m,n,l)} = I^{(m-1)}\otimes(P_0 \otimes I^{(n+l)} + P_1\otimes Swap_{[n+l;n,n+l]})$$ and by a direct calculation, we obtain that $\texttt P(F_{[k;m,m+n,m+n+l]},\rho)=Tr[(I^{(m-1)}\otimes\Lambda_F^{(n+l+1)}\otimes I^{(k-m-n-l)})\rho],$ where \begin{eqnarray*} & \Lambda_F^{(n+l+1)}=\\ & [P_0\otimes I^{(n+l)}+P_1\otimes Swap_{[n+l;n,n+l]}]\cdot[I\otimes(P_1^{(n)})\otimes P_1^{(l)}]\cdot\\ & [P_0\otimes I^{(n+l)}+P_1\otimes Swap_{[n+l;n,n+l]}]=\\ & = [P_0\otimes P_1^{(n)}\otimes P_1^{(l)}+P_1\otimes (Swap_{[n+l;n,n+l]}\cdot(P_1^{(n)}\otimes P_1^{(l)}))]\cdot\\ & [P_0\otimes I^{(n+l)}+P_1\otimes Swap_{[n+l;n,n+l]}]=\\ & (P_0\cdot P_0)\otimes((P_1^{(n)}\otimes P_1^{(l)})\cdot I^{(n+l)})+\\ & (P_0\cdot P_1)\otimes ((P_1^{(n)}\otimes P_1^{(l)})\cdot Swap_{[n+l;n,n+l]})+\\ & (P_1\cdot P_0)\otimes (Swap_{[n+l;n,n+l]}\cdot(P_1^{(n)}\otimes P_1^{(l)})\cdot I^{(n+l)}) +\\ & (P_1\cdot P_1)\otimes (Swap_{[n+l;n,n+l]}\cdot(P_1^{(n)}\otimes P_1^{(l)})\cdot Swap_{[n+l;n,n+l]}). \end{eqnarray*} Since $P_0\cdot P_1=P_1\cdot P_0={\bf 0}$ and $Swap_{[n+l;n,n+l]}\cdot(P_1^{(n)}\otimes P_1^{(l)})\cdot Swap_{[n+l;n,n+l]}=P_1^{(l)}\otimes P_1^{(n)}$, only the first and the last term survive; hence our claim. \end{proof} \end{theorem} The results provided in this section permit us to realize that the QCL can be seen as a particular (one-target) case of the MT-QCL. Indeed, if we confine ourselves to one-target gates, the MT-QCL reproduces the same probabilistic results as the standard QCL; on the other hand, the MT-QCL also allows us to consider multi-target gates that cannot be taken into account in the context of the standard QCL. \section{Conclusion and further developments} In this paper we have introduced the formal framework of a generalization of the standard Quantum Computational Logic, which takes into account the different roles that the qubits play in a quantum circuit. 
The most relevant utility of the Quantum Computational Logic lies in the fact that it is strongly holistic (non-compositional) and extremely useful for representing those situations where the meaning of a compound system (logically represented by a sentence) is not simply dependent on the meaning of its subsystems (logically represented by the atomic sentences), but has to be considered as a whole. Further, the standard Quantum Computational Logic also has the notable feature of being \emph{context dependent}, i.e.\ the same sentence can assume different meanings in different contexts. Non-compositionality and context-dependence are two features that make the Quantum Computational Logic extremely useful for describing situations related to very different aspects of real rational activity in many different contexts, such as common language, human psychology, machine learning and even the usual way of perceiving music \cite{BDCGLS,DGLNS,DGLS,HSFP,SSDMG}. Let us notice that the QCL reaches such a strong expressive power while considering in the language only a particular class of gates (one-target gates). The Multi Target Quantum Computational Logic introduced here allows one to consider infinitely many more gates (i.e.\ logical connectives) inspired by the standard language of Quantum Computational Logic and, at the same time, it is more realistic with respect to what happens in a real quantum computational process. On this basis, this expansion of the language can be considered a promising tool to boost the expressive power of the QCL. As a natural continuation, the investigation of the theoretical and semantic benefits of this more expressive representation seems to be an interesting topic for further development. 
In particular, future investigations will be devoted - from a theoretical point of view - to a complete study of the algebraic properties of these new logical structures and - from a more applicative viewpoint - to continuing the investigation of the benefits obtained by applying these logical structures inspired by quantum theory to non-standard contexts such as human behavior, machine learning and so on. \end{document}
\begin{document} \title{Neostability-properties of Fra\"{\i}ss\'e limits of 2-nilpotent groups of exponent $p> 2$} \author{Andreas Baudisch} \date{\today} \maketitle \begin{abstract} \noindent Let $L(n)$ be the language of group theory with $n$ additional new constant symbols $c_1,\ldots,c_n$. In $L(n)$ we consider the class ${\mathbb K}(n)$ of all finite groups $G$ of exponent $p > 2$, where $G'\subseteq\langle c_1^G,\ldots,c_n^G\rangle \subseteq Z(G)$ and $c_1^G,\ldots,c_n^G$ are linearly independent. Using amalgamation we show the existence of Fra\"{\i}ss\'e limits $D(n)$ of ${\mathbb K}(n)$. $D(1)$ is Felgner's extra special $p$-group. The elementary theories of the $D(n)$ are supersimple of SU-rank 1. They have the independence property. \end{abstract} \section{Introduction} We consider the variety ${\mathbb G}_{2,p}$ of nilpotent groups of class 2 of exponent $p>2$ in the language $L$ of group theory. To get the Amalgamation Property (AP), in \cite{Bau} an additional predicate $P(G)$ for $G\in{\mathbb G}_{2,p}$ with $G'\subseteq P(G)\subseteq Z(G)$ is introduced. Let ${\mathbb G}_{2,p}^P$ be the category of these groups in the extended language $L_P$ where the morphisms are embeddings. Using the class ${\mathbb K}_{2,p}^P$ of finite structures in ${\mathbb G}_{2,p}^P$ we get a Fra\"{\i}ss\'e limit $D$. If we build $D$ by amalgamation then $P(a)$ says that $a$ will become an element of the commutator subgroup $D'$ of $D$ in that process. In \cite{Bau} it is shown that ${\rm Th}(D)$ is not simple. Here we point out that $D$ has the tree property of the second kind (TP$_2$). This is easily seen. Let $L(n)$ be the language of group theory with $n$ additional new constant symbols $c_1,\ldots,c_n$. In $L(n)$ we consider the class ${\mathbb G}(n)$ of all groups $G\in {\mathbb G}_{2,p}$, where $G'\subseteq\langle c_1^G,\ldots,c_n^G\rangle \subseteq Z(G)$ and $c_1^G,\ldots,c_n^G$ are linearly independent. 
We use linear independence, since we can consider an abelian group of exponent $p$ as a vector space over ${\mathbb F}_p$. $\langle X \rangle$ denotes the substructure generated by $X$. Hence $\langle c_1^G,\ldots,c_n^G\rangle = \langle \emptyset \rangle$. ${\mathbb G}(n)$ is uniformly locally finite. Let ${\mathbb K}(n)$ be the class of finite structures in ${\mathbb G}(n)$. ${\mathbb K}(n)$ has the Hereditary Property (HP), the Joint Embedding Property (JEP) and the Amalgamation Property (AP). Hence the Fra\"{\i}ss\'e limit $D(n)$ of the class ${\mathbb K}(n)$ exists. Note that $D(1)$ is the extra special $p$-group considered by U.~Felgner in \cite{Fe}. In \cite{MacSt} the corresponding bilinear alternating map is obtained as an ultraproduct of finite structures. It is a well-known example of a supersimple theory of SU-rank 1. We show that the theories of all Fra\"{\i}ss\'e limits $D(n)$ are supersimple of SU-rank 1. To prove this we check the properties of non-forking that characterize simple theories \cite{KP}. Before that, we show that each group $G$ in ${\mathbb G}(n)$ such that $G/Z(G)$ is infinite has the Independence Property; in particular, all $D(n)$ have it. \section{TP{\boldmath$_2$} of {\boldmath${\rm Th}(D)$}}\label{s1} \begin{prop}\label{p2.1} In ${\rm Th}(D)$ the formula $[x,y_1]=[y_2,y_3]$ has the tree property of the second kind. \end{prop} {\em Proof\/}. Since $D$ is the Fra\"{\i}ss\'e limit of ${\mathbb K}_{2,p}^P$ there is an embedding of an infinite free group of ${\mathbb G}_{2,p}$ in $D$. Assume that $\{b_\alpha:\alpha<\omega\}\cup\{c_{\alpha,i},d_{\alpha,i}:\alpha<\omega, i<\omega\}$ are free generators of such an infinite free subgroup. We consider the array \[ \overline{a}_{\alpha,i}=(b_{\alpha,i}, c_{\alpha,i},d_{\alpha,i}), \qquad \alpha<\omega,\; i<\omega, \] where $b_{\alpha,i}=b_\alpha$ for all $\alpha$ and $i$. Then \[ D\vDash\neg\exists x([x,b_\alpha]=[c_{\alpha,i},d_{\alpha,i}]\wedge[x,b_\alpha] = [c_{\alpha,j},d_{\alpha,j}]) \] for fixed $\alpha$ and $i\ne j$. 
Now let $f$ be any map of $\omega$ into $\omega$. Then the set \[ \{[x,b_\alpha]=[c_{\alpha,f(\alpha)},d_{\alpha,f(\alpha)}]:\alpha<\omega\} \] is consistent, since $D$ is a Fra\"{\i}ss\'e limit. $\Box$ \section{The amalgamation property for {\boldmath${\mathbb K}(n)$}}\label{s3} Let $G$ be a group in ${\mathbb G}(n)$ with the elements $c_1^G,\ldots,c_n^G$, for short $c_1,\ldots,c_n$. Let $P(G)$ be the subgroup generated by $c_1,\ldots,c_n$. In the language $L(n)$ \, $P(G)$ is the $L(n)$-substructure generated by the empty set. By definition $G'\subseteq P(G)\subseteq Z(G)$ and the linear dimension ${\rm ldim}(P(G))$ of $P(G)$ is $n$. In \cite{Bau} a functor $F$ from ${\mathbb G}_{2,p}^P$ into the category ${\mathbb B}^P$ of bilinear alternating maps $(V,W,\beta)$ is defined, where $V$ and $W$ are ${\mathbb F}_p$-vector spaces and $\beta$ is a bilinear alternating map from $V\times V$ into $W$. Morphisms of ${\mathbb B}^P$ from $(V_1,W_1,\beta_1)$ to $(V_2,W_2,\beta_2)$ consist of vector space embeddings $f$ of $V_1$ into $V_2$ and $g$ of $W_1$ into $W_2$ that commute with the bilinear maps $\beta_i$: \[ \begin{xy} \xymatrix{ V_1\ar[d]^{\textstyle f}&\times&V_1\ar[d]^{\textstyle f}\ar[r]^{\textstyle\beta_1} &W_1\ar[d]^{\textstyle g}\\ V_2&\times&V_2 \ar[r]^{\textstyle\beta_2}&W_2 } \end{xy} \] $F$ is defined in the following way: $F(G)$ is $(V,W,\beta)$ where $V=G/P(G)$, $W= P(G)$ and $\beta$ is induced by $[\;,\;]$. If $f: G\to H$ then $F(f)=(\overline{f}, f\restriction P)$ where $\overline{f}:G/P(G)\to H/P(H)$ is induced by $f$. $F$ is a bijection on the level of objects up to isomorphisms. If we consider the category ${\mathbb G}(n)$, then the morphisms $f:G\to H$ send $c_i^G$ to $c_i^H$. Hence $f$ induces an isomorphism of $P(G)$ onto $P(H)$. We call ${\mathbb B}(n)$ the corresponding category of bilinear alternating maps $(V,P,\beta)$ where $P=\langle c_1,\ldots,c_n\rangle$ is fixed. The morphisms have the form $(g,{\rm id})$. 
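The correspondence between groups in ${\mathbb G}_{2,p}$ and bilinear alternating maps can be illustrated by a small computation. The following sketch, assuming $p=5$ and ${\rm ldim}\,P=1$ for readability, rebuilds a group from an alternating map via a Baer-type construction (the encoding and all names are ours, not the paper's) and checks that commutators are given by $\beta$ and that the exponent is $p$:

```python
# Baer-type sketch: from an alternating bilinear map beta: V x V -> W over
# F_p (p odd) we build a class-2 group of exponent p on V x W.  All names
# and the choice p = 5, dim V = 2, dim W = 1 are our own illustration.
import numpy as np

p = 5
half = (p + 1) // 2          # inverse of 2 mod p (p odd)

def beta(v1, v2):
    """An alternating bilinear map F_p^2 x F_p^2 -> F_p."""
    return np.array([(v1[0] * v2[1] - v1[1] * v2[0]) % p])

def mul(g, h):
    (v1, w1), (v2, w2) = g, h
    return ((v1 + v2) % p, (w1 + w2 + half * beta(v1, v2)) % p)

def inv(g):
    v, w = g
    return ((-v) % p, (-w) % p)          # works since beta(v, v) = 0

def comm(g, h):                          # [g, h] = g^-1 h^-1 g h
    return mul(mul(inv(g), inv(h)), mul(g, h))

g = (np.array([1, 0]), np.array([0]))
h = (np.array([0, 1]), np.array([0]))

# The commutator lands in the central part W and equals beta of the V-parts.
cg, cw = comm(g, h)

# Every element has order dividing p: compute g^p.
gp = g
for _ in range(p - 1):
    gp = mul(gp, g)
```

Under this construction the commutator subgroup sits inside $\{0\}\times W$, mirroring $G'\subseteq P(G)\subseteq Z(G)$.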
We define the functor $F$ from ${\mathbb G}(n)$ to ${\mathbb B}(n)$ as above and obtain as in \cite{Bau2}: \begin{lemma}\label{l3.1} \begin{enumerate} \item[{\rm i)}] $F$ is a functor of ${\mathbb G}(n)$ onto ${\mathbb B}(n)$ that is a bijection for the objects of the categories up to isomorphisms. \item[{\rm ii)}] If $G_0 \in {\mathbb G}(n)$ and $(g,id)$ is an embedding of $F(G_0)$ into some $(V,P,\beta)$, then there are some $G \in {\mathbb G}(n)$ and some embedding $f$ of $G_0$ into $G$, such that $F(G) = (V,P, \beta)$ and $F(f) = (g,id)$. \item[{\rm iii)}] In ${\mathbb G}(n)$ we consider $e_0:G_0\to G$, $e_1:H_0\to H$ where $f_0$ is an isomorphism of $G_0$ onto $H_0$. In ${\mathbb B}(n)$ we assume that there is $g$ such that the following diagram commutes: \[ \begin{xy} \xymatrix{ F(G_0)\ar[d]^{F(f_0)}\ar[r]^{F(e_0)} &F(G)\ar[d]^{(g,{\rm id})}\\ F(H_0) \ar[r]^{F(e_1)}&F(H)\;\;. } \end{xy} \] \end{enumerate} Then there is an embedding $f$ of $G$ into $H$ such that $F(f)=(g,{\rm id})$ and \[ \begin{xy} \xymatrix{ G_0\ar[d]^{f_0}\ar[r]^{e_0} &G\ar[d]^{f}\\ H_0 \ar[r]^{e_1}&H\;\;. } \end{xy} \] \end{lemma} Lemma~\ref{l3.1} shows that AP for ${\mathbb B}(n)$ implies AP for ${\mathbb G}(n)$ as in \cite{Bau}. To show AP for ${\mathbb B}(n)$ we cannot use the free amalgam as in \cite{Bau}. Assume \begin{eqnarray*} (f_A,{\rm id}):&&(V_B,P,\beta_B)\longrightarrow(V_A,P,\beta_A),\\ (f_C,{\rm id}):&&(V_B,P,\beta_B)\longrightarrow(V_C,P,\beta_C). \end{eqnarray*} W.l.o.g. $V_B$ is a common subspace of $V_A$ and $V_C$. Let $V_D$ be the vector space amalgam $V_C\bigoplus\limits_{V_B} V_A$ with respect to $f_A$ and $f_C$. We get the desired amalgam $(V_D,P,\beta_D)$ if \[ \beta_D=\beta_A\;\mbox{ on }\; V_A \quad\mbox{ and } \quad \beta_D=\beta_C\;\mbox{ on }\;V_C \] and the rest is obtained in the following way: If $X$ is a basis of $V_A$ over $V_B$ and $Y$ is a basis of $V_C$ over $V_B$ then we can choose for each pair $x\in X$ and $y\in Y$ \, $\beta_D(x,y)$ in $P$ as we want. 
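The free choice in the construction of $\beta_D$ can be made explicit in a small computation. The sketch below (our own encoding, with ${\rm ldim}\,P=1$ and alternating maps stored as antisymmetric structure-constant matrices over ${\mathbb F}_p$) completes $\beta_A$ and $\beta_C$ to an amalgam $\beta_D$ by choosing $\beta_D(x,y)=0$ for $x\in X$, $y\in Y$, and checks that $\beta_D$ is alternating and restricts correctly:

```python
# Amalgam of bilinear alternating maps over F_p: beta_A and beta_C agree on
# the common subspace V_B; beta_D is completed by a free choice (here 0) on
# pairs from X and Y.  Encoding, dimensions, and names are our illustration.
import numpy as np

p = 5
dim_B, dim_XA, dim_Y = 2, 1, 1          # dim V_B, |X|, |Y|

def rand_alt(dim, rng):
    """Random alternating structure-constant matrix over F_p."""
    M = rng.integers(0, p, size=(dim, dim))
    return (np.triu(M, 1) - np.triu(M, 1).T) % p

rng = np.random.default_rng(1)
beta_A = rand_alt(dim_B + dim_XA, rng)            # on V_A = V_B + <X>
beta_C = rand_alt(dim_B + dim_Y, rng)             # on V_C = V_B + <Y>
beta_C[:dim_B, :dim_B] = beta_A[:dim_B, :dim_B]   # both restrict to beta_B

# Basis of V_D: (V_B basis) + X + Y; cross values beta_D(x, y) are free.
d = dim_B + dim_XA + dim_Y
beta_D = np.zeros((d, d), dtype=int)
beta_D[:dim_B + dim_XA, :dim_B + dim_XA] = beta_A
beta_D[:dim_B, dim_B + dim_XA:] = beta_C[:dim_B, dim_B:]
beta_D[dim_B + dim_XA:, :dim_B] = beta_C[dim_B:, :dim_B]
beta_D[dim_B + dim_XA:, dim_B + dim_XA:] = beta_C[dim_B:, dim_B:]
# beta_D(x, y) for x in X, y in Y stays 0 -- the free choice.

alternating = np.array_equal(beta_D, (-beta_D.T) % p)
restricts_A = np.array_equal(beta_D[:dim_B + dim_XA, :dim_B + dim_XA], beta_A)
idx_C = list(range(dim_B)) + list(range(dim_B + dim_XA, d))
restricts_C = np.array_equal(beta_D[np.ix_(idx_C, idx_C)], beta_C)
```

Any other assignment of the cross values in $P$ would serve equally well, which is precisely the freedom used in the amalgamation argument.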
In our context AP implies JEP. \begin{theorem}\label{t3.1} ${\mathbb K}(n)$ has HP, JEP and AP. Hence the Fra\"{\i}ss\'e limit $D(n)$ exists. It is $\aleph_0$-categorical. ${\rm Th}(D(n))$ has the elimination of quantifiers. \end{theorem} The theorem follows from well-known theory; see \cite{Ho}. Uniform local finiteness and finite signature for ${\mathbb K}(n)$ imply $\aleph_0$-categoricity and elimination of quantifiers. ${\rm Th}(D(n))$ can be axiomatized by the following sentences: Let $M$ be a model of ${\rm Th}(D(n))$. \begin{enumerate} \item[$\Sigma\,1)$] $M$ is a nilpotent group of class 2 with exponent $p$. \item[$\Sigma\,2)$] $M'=Z(M)=\langle c_1,\ldots,c_n\rangle$ is of linear dimension $n$. \item[$\Sigma\,3)$] For $B\subseteq A$ in ${\mathbb K}(n)$ it holds: If $B'\subseteq M$ and $B'\cong B$, then this embedding of $B$ into $M$ can be extended to $A$. \end{enumerate} In the case $n=1$ these axioms imply that $M$ is infinite and $M'=Z(M)$ is cyclic. By U.~Felgner \cite{Fe} $D(1)$ is the extra special $p$-group, since his axiomatization consists of $\Sigma\,1)$, the condition that $M'=Z(M)$ is cyclic, and infiniteness. {\bf Question} \, Is there an easier axiomatization of ${\rm Th}(D(n))$ for $n\ge 2$? \section{Independence property in {\boldmath${\mathbb G}(n)$}}\label{s4} Assume $M\vDash {\rm Th}(D(1))$ and $M$ is countable. We write $c$ instead of $c_1$. By \cite{Fe} $M$ is a central product over $\langle c\rangle$: \[ M=\bigodot\limits_{\stackrel{\scriptstyle\langle c\rangle}{i<\omega}}\langle c,a_i,b_i\rangle \] where $c$ is a generator of the cyclic subgroup $M'=Z(M)$ and $[b_i,a_i]=c$. By the elimination of quantifiers of ${\rm Th}(D(1))$ \, $a_0\hat{\;}b_0,a_1\hat{\;}b_1\ldots,a_n\hat{\;}b_n,\ldots$ is an indiscernible sequence in $M$. Then $a_1\hat{\;}b_1,a_2\hat{\;}(b_2\circ b_0),a_3\hat{\;}b_3,a_4\hat{\;}(b_4\circ b_0),\ldots$ and $b_1,b_2\circ b_0,b_3,b_4\circ b_0,\ldots$ are indiscernible sequences in $M$. 
We have $M\vDash[b_{2i+1},a_0]=1$ for $i<\omega$ and $M\vDash[b_{2i}\circ b_0,a_0]=c$ for $1\le i<\omega$. We have shown (see \cite{A}): \begin{lemma}\label{l4.1} The formula $[y,x]=1$ has the independence property in ${\rm Th}(D(1))$. \end{lemma} Let $G$ be in ${\mathbb G}(n)$ with $G/Z(G)$ infinite. For $a\in G\setminus Z(G)$ choose a maximal linearly independent subset $\{e_1,\ldots,e_m\}=X_a$ of $P(G)$, where $m\le n$, such that for every $1\le i\le m$ there is some $b_i\in G$ with $[a,b_i]=e_i$. Let $E_a$ be $\{b_1,\ldots,b_m\}$. If $[a,b]=t\in P(G)$ with $t\ne 1$, then $t=\prod\limits_{1\le i\le m} e_i^{r_i}$ and $[a,b\cdot b_1^{p-r_1}\cdot\ldots\cdot b_m^{p-r_m}]=1$. Hence every element $a\in G$ has a centralizer $C(a)$ of index at most $p^n$, and $G=\langle a,E_a\rangle\circ C(a)$. Now we start with $d_0\in G\setminus Z(G)$. Then $X_{d_0}\ne \emptyset$ and $E_0=E_{d_0}\ne\emptyset$, and we choose $e_0\in E_0$ with $[d_0,e_0]\ne 1$. Since $C(\langle d_0,E_0\rangle)$ has finite index in $G$, there is some \[ d_1\in C(\langle d_0,E_0\rangle)\quad\mbox{ with }\; d_1\not\in Z(G). \] We get $E_1=E_{d_1}\ne\emptyset$ and choose $e_1\in E_1$ with $[d_1,e_1]\ne1$. We can repeat this argument and get \[ d_2\in C(\langle d_0,e_0,E_0,E_1\rangle), \quad d_2\not\in Z(G). \] Finally we have $d_0,e_0, d_1,e_1,\ldots$ with $[d_i,e_i]\ne 1$ and $[d_i,d_j]=1$, $[d_i,e_j]=1$ and $[e_i,e_j]=1$ for $i\ne j$. Since $P(G)$ is finite, we can select a subsequence with $[d_i,e_i]=c\ne 1$ for some $c\in P(G)$. Assume w.l.o.g.\ $[d_i,e_i]=c$ for all $i<\omega$. We have shown that $D(1)$ is a subgroup of $G$. Since the independence property of $D(1)$ is given by a quantifier-free formula, we get \begin{theorem}\label{t4.2} For every $G\in{\mathbb G}(n)$ with $G/Z(G)$ infinite we have: \begin{enumerate} \item[{\rm i)}] There is an embedding of $D(1)$ in $G$. \item[{\rm ii)}] $G$ has the independence property. \end{enumerate} \end{theorem} \begin{corollary}\label{c4.3} The Fra\"{\i}ss\'e limits $D(n)$ of ${\mathbb K}(n)$ have the independence property.
\end{corollary} \section{Supersimplicity of {\boldmath${\rm Th}(D(n))$}}\label{s5} Let ${\mathbb C}(n)$ be a monster model of ${\rm Th}(D(n))$. We define \[ A\mathop{\mathpalette\Ind{}}\limits_B{}\!\!^0 C, \quad\mbox{ if }\; \langle A\rangle\cap\langle C\rangle=\langle B\rangle. \] Note that all substructures such as $\langle A\rangle$ contain $P({\mathbb C}(n))$. We have to check that $\mathop{\mathpalette\Ind{}}^0$ fulfils the conditions of B.~Kim and A.~Pillay \cite{KP} that characterize non-forking. Working in the vector space ${\mathbb C}(n)/P({\mathbb C}(n))$, Monotonicity, Transitivity, Symmetry, Finite Character, and Local Character are easily shown. {\bf Existence:} Consider $\overline{a}$ and $B\subseteq A$ in ${\mathbb C}(n)$. We have to find some $\overline{d}$ in ${\mathbb C}(n)$ with ${\rm tp}(\overline{a}/B)={\rm tp}(\overline{d}/B)$ and $\overline{d}\mathop{\mathpalette\Ind{}}\limits_B{}\!\!^0A$. W.l.o.g.\ $B$ and $A$ are $L(n)$-substructures. Since $P\subseteq B$ we can assume that $\overline{a}$ is linearly independent over $B$. Choose $X_B$ and $X_A$ such that the images of $X_B$ and $X_BX_A$ are vector space bases of $B/P$ and $A/P$, respectively. Let $\beta((X_BX_A)^2)$ be the set of all $\beta(b_1,b_2)=[b_1,b_2]$ where $b_1,b_2\in X_BX_A$. Then $A$ is uniquely determined by $X_BX_A$ and $\beta((X_BX_A)^2)$. Now we define an extension $G$ of $A$. Let $\overline{e}$ be linearly independent over $A$. Then $\overline{e}X_BX_A$ is linearly independent over $P$. $\beta((\overline{e}\,\hat{\;}X_BX_A)^2)$ is chosen as any extension of $\beta((X_BX_A)^2)$ and $\beta((\overline{e}X_B)^2)$, where the last set is obtained from $\beta((\overline{a}X_B)^2)$ by replacing $a_i$ in $\overline{a}$ by $e_i$ in $\overline{e}$. $G=\langle\overline{e}A\rangle$ is a structure in ${\mathbb K}(n)$. By axiom $\Sigma\,3)$ of ${\rm Th}(D(n))$ there is an embedding of $\overline{e}$ onto some $\overline{d}$ over $A$ in ${\mathbb C}(n)$.
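Note that quantifier elimination makes types over substructures very concrete (a reformulation in the above notation): for tuples $\overline a$ and $\overline d$ of the same length, linearly independent over $B$ modulo $P$,
\[
{\rm tp}(\overline a/B)={\rm tp}(\overline d/B)\quad\Longleftrightarrow\quad \beta(a_i,a_j)=\beta(d_i,d_j)\;\mbox{ and }\;\beta(a_i,b)=\beta(d_i,b)\ \mbox{ for all }i,j\mbox{ and }b\in X_B.
\]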
By quantifier elimination, ${\rm tp}(\overline{d}/B)={\rm tp}(\overline{a}/B)$. Furthermore, $\overline{d}\mathop{\mathpalette\Ind{}}\limits_B{}\!\!^0A$ by construction. Finally we have to show: \subsubsection*{Independence over Models} Let $M\preceq {\mathbb C}(n)$, ${\rm tp}(\overline{a}{}^0/M)={\rm tp}(\overline{a}{}^1/M)$, and \[ \overline{b}{}^0\mathop{\mathpalette\Ind{}}\limits_M{}\!\!^0\;\overline{b}{}^1,\qquad \overline{a}{}^0\mathop{\mathpalette\Ind{}}\limits_M{}\!\!^0\;\overline{b}{}^0,\qquad \overline{a}{}^1\mathop{\mathpalette\Ind{}}\limits_M{}\!\!^0\;\overline{b}{}^1. \] Then there is some $\overline{e}$ with \[ {\rm tp}(\overline{e}/M\overline{b}{}^0)={\rm tp}(\overline{a}{}^0/M\overline{b}{}^0),\qquad {\rm tp}(\overline{e}/M\overline{b}{}^1)={\rm tp}(\overline{a}{}^1/M\overline{b}{}^1), \] and \[ \overline{e}\mathop{\mathpalette\Ind{}}\limits_M{}\!\!^0\;\overline{b}{}^0\overline{b}{}^1. \] Let $X_M$ be a set in $M$ such that its image is a vector space basis of $M/P$. W.l.o.g.\ $\overline{b}{}^0\overline{b}{}^1$ is linearly independent over $M$ modulo~$P$. We choose $\overline{d}$ linearly independent over $\overline{b}{}^0\overline{b}{}^1X_M$ modulo~$P$. Now we extend $\langle \overline{b}{}^0,\overline{b}{}^1,X_M\rangle$ to a group $G$ in ${\mathbb G}(n)$ defined on $\overline{d}\,\overline{b}{}^0\overline{b}{}^1X_M$. We extend $\beta((\overline{b}{}^0\overline{b}{}^1X_M)^2)$ to $\beta((\overline{d}\,\overline{b}{}^0\overline{b}{}^1X_M)^2)$ by the following: \begin{eqnarray*} \beta(d_i,m)&&\mbox{for }\:d_i\in\overline{d}\;\mbox{ and }\; m\in X_M\;\mbox{ is given by }\;\beta(a_i^0,m)=\beta(a_i^1,m),\\ \beta(d_i,d_j)&&\mbox{is given by }\;\beta(a_i^0,a_j^0)=\beta(a_i^1,a_j^1),\\ \beta(d_i,b_j^0)&&\mbox{is given by }\;\beta(a_i^0,b_j^0), \mbox{ and}\\ \beta(d_i,b_j^1)&&\mbox{is given by }\; \beta(a_i^1,b_j^1). \end{eqnarray*} Now, by axiom $\Sigma\,3)$, we find an image $\overline{e}$ of $\overline{d}$ in ${\mathbb C}(n)$ over $\langle\overline{b}{}^0,\overline{b}{}^1,M\rangle$ such that the corresponding map defines an embedding.
By elimination of quantifiers and the construction, $\overline{e}$ has the desired properties. \begin{theorem}\label{t5.1} $\mathop{\mathpalette\Ind{}}^0$ is non-forking for $D(n)$. $D(n)$ is supersimple of SU-rank $1$. It is not stable. \end{theorem} {\em Proof\/}. As shown above, $\mathop{\mathpalette\Ind{}}^0$ is non-forking and $D(n)$ is simple. Any type ${\rm tp}(\overline{a}/A)$ does not fork over a finite subset of $A$. By the description of non-forking we have SU-rank 1. In Section~\ref{s4} it is shown that $D(n)$ has the independence property. \end{document}
\begin{document} \maketitle \section{Introduction} The restriction problem for the Fourier transform in $\bR^n$ was introduced by E. M. Stein, who proved the first result in any dimension \cite[p. 28]{Fe}, later improved by the sharper Stein--Tomas method \cite{T}. Since then more and more sophisticated techniques have been introduced to attack the still open problems in this area, concerning the maximal range of exponents for which the restriction inequality holds. In two dimensions, the restriction estimate for the circle had been proved already, in an almost optimal range of exponents, by Fefferman and Stein \cite[p. 33]{Fe}. Shortly afterwards, sharp estimates were obtained by Zygmund \cite{Z} for the circle and by Carleson and Sj\"olin \cite{CS} and Sj\"olin \cite{Sj} for a class of curves including strictly convex $C^2$ curves. The present paper does not mean to proceed along these lines, but rather to propose a reflection on the measure-theoretic meaning of the restriction phenomenon and possibly suggest some related problems. A restriction theorem is usually meant as a family of {\it a priori} inequalities \begin{equation}\label{restriction_inequality} \big\|\widehat f_{|_S}\big\|_{L^q(S,\mu)}\le C\big\|f\big\|_{L^p(\bR^n)}\ , \end{equation} where $f\in \cS(\bR^n)$, $S$ is a surface with appropriate curvature properties, and $\mu$ is a suitably weighted finite surface measure on $S$. The validity of such an inequality implies the existence of a bounded {\it restriction operator} $\cR:L^p(\bR^n)\longrightarrow L^q(S,\mu)$ such that $\cR f=\widehat f_{|_S}$ when $f$ is a Schwartz function. In general terms our question is: assuming that \eqref{restriction_inequality} holds, what is the ``intrinsic'' pointwise relation between $\cR f$ and $\widehat f$ for a general $L^p$-function $f$? A partial answer follows directly from the restriction inequality. Assume that \eqref{restriction_inequality} holds for given $p,q$.
This forces the condition $p<2$, so that $\widehat f\in L^{p'}$. Fix an approximate identity $\chi_\eps(x)=\eps^{-n}\chi(x/\eps)$ with $\chi\in\cS(\bR^n)$, $\int\chi=1$. Then, with $\psi=\cF\inv\chi$, $$ \widehat f*\chi_\eps=\widehat{f\psi(\eps\cdot)} $$ is well defined on $S$ and coincides with $\cR\big(f\psi(\eps\cdot)\big)$. Moreover, $f\psi(\eps\cdot)\to f$ in $L^p(\bR^n)$, so that $(\widehat f*\chi_\eps)_{|_S}\to \cR f$ in $L^q(S,\mu)$. Hence, for a subsequence $\eps_k\to0$, the $\chi_{\eps_k}$-averages of $\widehat f$ converge pointwise to $\cR f$ $\mu$-a.e. It is natural to ask if the limit over all $\eps$ exists $\mu$-a.e. We give positive answers in two dimensions to this and related questions. We recall that, for a curve $S$ in the plane, necessary conditions on $p,q$ for having \eqref{restriction_inequality} are $p<\frac43$ and $p'\ge 3q$, and that they are also sufficient when $S$ is $C^2$ with nonvanishing curvature and $\mu$ is the arclength measure, or, more generally, when $S$ is just $C^2$ and convex, and $\mu$ is the {\it affine arclength measure}~\cite{Sj}. Notice that the two measures differ by a factor comparable to the $\frac13$ power of the curvature, so that the affine arclength is concentrated on the set of points with nonvanishing curvature and ordinary arclength is damped near these points. \begin{theorem}\label{lebesgue} Let $S$ be a $C^2$ curve in $\bR^2$ and $f\in L^p(\bR^2)$. \begin{enumerate} \item[\rm(i)] Assume that $1\le p<\frac43$ and let $\chi\in\cS(\bR^2)$ with $\int\chi=1$. Then, with respect to arclength measure, for almost every $x\in S$ at which the curvature does not vanish, $\lim_{\eps\to0}\widehat f*\chi_\eps(x)=\cR f(x)$. \item[\rm(ii)] Assume that $1\le p<\frac87$. Then, with respect to arclength measure, almost every $x\in S$ at which the curvature does not vanish is a Lebesgue point for $\widehat f$ and the regularized value of $\widehat f$ at $x$ coincides with $\cR f(x)$.
\end{enumerate} \end{theorem} Several questions remain open, regarding extensions to less regular curves, to other values of $p$ in the range $\frac87\le p<\frac43$, or to higher dimensions. We just mention here that, in dimension $d\ge3$, our method gives results for a class of curves including $\Gamma(t)=(t,t^2,\dots,t^d)$. Theorem \ref{lebesgue} is a direct consequence of certain ``maximal restriction theorems'' concerning restrictions to $S$ of truncated maximal functions of the Fourier transform. Since maximal restriction inequalities may also have an intrinsic interest, we go beyond what is strictly needed to deduce Theorem \ref{lebesgue} and consider (truncated) two-parameter maximal functions, such as the strong maximal function, relative to any coordinate system in $\bR^2$. In Theorem \ref{vertical} we prove that, for a convex $C^2$ curve, the two-parameter maximal operator defined in \eqref{Mv} is $L^p$--$L^q$ bounded for $p,q$ in the full range of validity of the restriction theorem, with the $L^q$-norm on $S$ relative to affine arclength measure. In Corollary \ref{tildeM} we deduce the same $L^p$--$L^q$ estimates, but in the smaller range $p<\frac87$, for the truncated strong maximal function, which controls not only averages of $\hat f$, but also those of~$|\hat f|$. The proof is based on the Kolmogorov--Seliverstov--Plessner linearization method \cite[Ch. XIII]{Zb}. This leads to proving uniform estimates for a family of linear operators, to which a modification of the basic approach of \cite{CS,Z} for curves in $\bR^2$ can be applied. For this reason our method is limited to the two-dimensional context. Unfortunately, the usual $TT^*$ method of Stein--Tomas does not seem to be applicable, even for the Hardy--Littlewood maximal function.
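In outline, the linearization reduces a maximal inequality to uniform linear ones (a standard equivalence, stated here informally): for a family of averaging operators $R_{\eps',\eps''}$,
\[
\Big\|\sup_{0<\eps',\eps''<1}|R_{\eps',\eps''}f|\Big\|_{L^q}\le C\|f\|_{L^p}
\quad\Longleftrightarrow\quad
\sup_{\eps'(\cdot),\eps''(\cdot)}\big\|R_{\eps'(x),\eps''(x)}f\big\|_{L^q}\le C\|f\|_{L^p}\,,
\]
the supremum on the right being over all measurable functions $\eps',\eps''$ of the variable $x$ with values in $(0,1)$; one implication is immediate, and the other follows by selecting, measurably in $x$, truncation parameters that nearly attain the supremum.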
\section{The strong maximal function of $\widehat f$ along a curve}\label{sec-vertical} Let $S=\{\Gamma(t):t\in I\}$, where $\Gamma$ is a $C^2$ curve in $\bR^2$ with nonnegative signed curvature, i.e., with $\kappa(t)=\det(\Gamma',\Gamma'')(t)\ge0$. Denote by $d\mu(t)=\kappa^{\frac13}(t)\,dt$ the pull-back to~$I$ of the affine arclength measure on~$S$. We assume for simplicity that $\Gamma(x)=\big(x,\ph(x)\big)$ is the graph of a convex $C^2$ function $\ph$ on a bounded interval $I$. Notice that the measure $\mu$ is concentrated on the set where $\kappa=\ph''>0$. We consider the two-parameter maximal function\footnote{Theorem \ref{vertical} also holds if $\chi\otimes\chi$ is replaced by a general $\chi\in\cS(\bR^2)$, because this can be expanded into a rapidly decreasing series $\sum_j\chi'_j\otimes\chi''_j$.} \begin{equation}\label{Mv} \cM f(x)=\sup_{0<\eps',\eps''<1}\Big|\int \widehat f\big(x+s,\ph(x)+t\big) \chi_{\eps'}(s) \chi_{\eps''}(t)\,ds\,dt\Big|\ , \end{equation} where $\chi_\eps(\cdot)=\eps\inv\chi(\cdot/\eps)$, with $\chi\in\cS(\bR)$, even, with $\int\chi=1$. \begin{theorem}\label{vertical} The inequality \begin{equation} \|\cM f\|_{L^q(I,\mu)}\le C_p\|f\|_{L^p(\bR^2)}\ , \end{equation} holds for $1\le p<\frac43$ and $p'\ge 3q$. \end{theorem} \begin{proof} We may and shall assume that $f\in\cS(\bR^2)$ and, since $\mu$ is finite, that $p'= 3q$, by H\"older's inequality. We linearize $\cM$ by defining, for fixed measurable functions $\eps'(x),\eps''(x)$ on $I$ with values in $(0,1)$, \begin{align}\label{Reps} \cR_{\eps',\eps''} f(x)&=\int \widehat f\big(x+s,\ph(x)+t\big) \chi_{\eps'}(s) \chi_{\eps''}(t)\,ds\,dt\\ &=\int f(\xi,\eta)\int e^{-i(\xi (x+s)+\eta(\ph(x)+t))}\chi_{\eps'}(s) \chi_{\eps''}(t)\,ds\,dt\,d\xi\,d\eta\\ &=\int \widehat\chi\big(\eps'(x)\xi\big)\widehat\chi\big(\eps''(x)\eta\big) e^{-i(\xi x+\eta\ph(x))}f(\xi,\eta)\,d\xi\,d\eta\ . \end{align} The formal adjoint of $\cR_{\eps',\eps''}$ is \begin{align}\label{Eeps} \cE_{\eps',\eps''} g(\xi,\eta)&=\cR^*_{\eps',\eps''} g(\xi,\eta)\\ &=\int_I\widehat\chi\big(\eps'(x)\xi\big)\widehat\chi\big(\eps''(x)\eta\big)e^{i(\xi x+\eta\ph(x))}g(x)\kappa^\frac13(x)\,dx\ . \end{align} It suffices to prove the inequality \begin{equation}\label{Eeps-estimate} \|\cE_{\eps',\eps''} g\|_{L^{p'}(\bR^2)}\le C_p \|g\|_{L^{q'}(I,\mu)}\ ,\qquad g\in C^\infty_c(I)\ , \end{equation} uniformly in the functions $\eps'(x),\eps''(x)$. We introduce a truncation in $\xi$ and $\eta$, in order to gain decay at infinity for $\cE_{\eps',\eps''} g$. Fixing another function $\chi_0$, smooth on $\bR$, supported in $[-2,2]$ and equal to 1 on $[-1,1]$, we define, for $\la\gg1$, \begin{equation}\label{Eepslambda} \cE_{\eps',\eps''}^\la g(\xi,\eta)=\chi_0\Big(\frac\xi\la\Big)\chi_0\Big(\frac\eta\la\Big)\int_I\widehat\chi\big(\eps'(x)\xi\big)\widehat\chi\big(\eps''(x)\eta\big)e^{i(\xi x+\eta\ph(x))}g(x)\kappa^\frac13(x)\,dx\ . \end{equation} It will then suffice to prove \eqref{Eeps-estimate} with $\cE_{\eps',\eps''}$ replaced by $\cE^\la_{\eps',\eps''}$, uniformly in $\eps'(x),\eps''(x)$ and $\la$. We start from the identity \begin{equation}\label{square} \|\cE_{\eps',\eps''}^\la g\|_{p'}=\big\|(\cE_{\eps',\eps''}^\la g)^2\big\|^\half_{p'/2}\ .
\end{equation} If $U$ is the open subset of $I$ where $\kappa(x)>0$, the measure $\mu$ is concentrated on $U$, so we have \begin{align*} (\cE_{\eps',\eps''}^\la g)^2(\xi,\eta)&=\chi_0^2\Big(\frac\xi\la\Big)\chi_0^2\Big(\frac\eta\la\Big)\int_{U^2}\widehat\chi\big(\eps'(x)\xi\big)\widehat\chi\big(\eps''(x)\eta\big)\widehat\chi\big(\eps'(y)\xi\big)\widehat\chi\big(\eps''(y)\eta\big)\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad e^{i(\xi (x+y)+\eta(\ph(x)+\ph(y)))}g(x)\kappa^\frac13(x)g(y)\kappa^\frac13(y)\,dx\,dy\\ &=\chi_0^2\Big(\frac\xi\la\Big)\chi_0^2\Big(\frac\eta\la\Big)\int_{U^2}\widehat\chi\big(\eps'(x)\xi\big)\widehat\chi\big(\eps''(x)\eta\big)\widehat\chi\big(\eps'(y)\xi\big)\widehat\chi\big(\eps''(y)\eta\big)\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad e^{i(\xi (x+y)+\eta(\ph(x)+\ph(y)))}G_0(x,y)\,dx\,dy\ , \end{align*} with $G_0=(g\kappa^\frac13)\otimes(g\kappa^\frac13)$. We want to make the change of variables $z_1=x+y$, $z_2=\ph(x)+\ph(y)$. It follows from the convexity of $\ph$ that the map $\Phi(x,y)=\big(x+y,\ph(x)+\ph(y)\big)$ is injective on each of the subsets $U^2_\pm=\{(x,y)\in U^2: x\lessgtr y\}$ and that $\det\Phi'(x,y)=\ph'(y)-\ph'(x)\ne0$ for $x\ne y$ in $U^2$. With $A=\Phi(U^2_+)=\Phi(U^2_-)$, we set, for $z=(z_1,z_2)\in A$, \begin{align}\label{change} &\big(x_\pm(z),y_\pm(z)\big)=(\Phi_{|_{U^2_\pm}})\inv(z)\ ,\\ &{\eps'_1}^\pm(z)=\eps'\big(x_\pm(z)\big)\ ,\qquad {\eps'_2}^\pm(z)=\eps'\big(y_\pm(z)\big)\ ,\\ &{\eps''_1}^\pm(z)=\eps''\big(x_\pm(z)\big)\ ,\qquad {\eps''_2}^\pm(z)=\eps''\big(y_\pm(z)\big)\ ,\\ &G_\pm(z)=\frac{G_0\big(x_\pm(z),y_\pm(z)\big)}{\big|\ph'\big(x_\pm(z)\big)-\ph'\big(y_\pm(z)\big)\big|}\ . \end{align} Then \begin{align}\label{E2} (\cE_{\eps',\eps''}^\la g)^2(\xi,\eta)&=\chi_0^2\Big(\frac\xi\la\Big)\chi_0^2\Big(\frac\eta\la\Big)\\ &\qquad \sum_\pm\int_A\widehat\chi\big({\eps'_1}^\pm(z)\xi\big)\widehat\chi\big({\eps''_1}^\pm(z)\eta\big)\widehat\chi\big({\eps'_2}^\pm(z)\xi\big)\widehat\chi\big({\eps''_2}^\pm(z)\eta\big)e^{i(\xi z_1+\eta z_2)}G_\pm(z)\,dz\ . \end{align} We are thus led to consider the operator $$ T^\la_{\overline\eps}G(\xi,\eta)=\chi_0^2\Big(\frac\xi\la\Big)\chi_0^2\Big(\frac\eta\la\Big)\int_A\widehat\chi\big(\eps'_1(z)\xi\big)\widehat\chi\big(\eps''_1(z)\eta\big)\widehat\chi\big(\eps'_2(z)\xi\big)\widehat\chi\big(\eps''_2(z)\eta\big)e^{i(\xi z_1+\eta z_2)}G(z)\,dz\ , $$ for arbitrary measurable functions $\overline\eps=(\eps'_1,\eps''_1,\eps'_2,\eps''_2)$ on $A$ with values in $(0,1)^4$ and arbitrary continuous functions~$G$ on $A$. \begin{lemma}\label{Tpp'} For $1\le p\le 2$, $T^\la_{\overline\eps}$ is bounded from $L^p(A)$ to $L^{p'}(\bR^2)$, uniformly in $\overline\eps$ and $\la$. \end{lemma} \begin{proof} The statement is trivial for $p=1$. For $p=2$ we prove the equivalent statement that $(T^\la_{\overline\eps})^*T^\la_{\overline\eps}$ is bounded on $L^2(A)$. We have $$ (T^\la_{\overline\eps})^*T^\la_{\overline\eps}G(z)=\int_A K^\la_{\overline\eps}(z,w)G(w)\,dw\ , $$ where, for $(z,w)\in A^2$, \begin{align}\label{Keps} K^\la_{\overline\eps}(z,w)&=\int_{\bR^2} e^{-i(\xi,\eta)\cdot(z-w)}\\ &\qquad \chi_0^4\Big(\frac\xi\la\Big)\chi_0^4\Big(\frac\eta\la\Big)\widehat\chi\big(\eps'_1(z)\xi\big)\widehat\chi\big(\eps'_2(z)\xi\big)\widehat\chi\big(\eps''_1(z)\eta\big)\widehat\chi\big(\eps''_2(z)\eta\big)\\ &\qquad\qquad\qquad\qquad \widehat\chi\big(\eps'_1(w)\xi\big)\widehat\chi\big(\eps'_2(w)\xi\big)\widehat\chi\big(\eps''_1(w)\eta\big)\widehat\chi\big(\eps''_2(w)\eta\big)\,d\xi\,d\eta \ . \end{align} Let \begin{align}\label{eps(zwla)} \eps'(z,w,\la)&=\max\big\{\eps'_1(z),\eps'_2(z),\eps'_1(w),\eps'_2(w),\la\inv\big\}\ ,\\ \eps''(z,w,\la)&=\max\big\{\eps''_1(z),\eps''_2(z),\eps''_1(w),\eps''_2(w),\la\inv\big\}\ . \end{align} Using iteratively the property that, given two Schwartz functions $f,g$ on $\bR$, the product $f(at)g(bt)$ can be expressed as $h\big((a\vee b)t\big)$ with each Schwartz norm $\|h\|_{(N)}$ controlled by the same norms of $f$ and $g$, we can write \begin{align*} \chi_0^4\Big(\frac\xi\la\Big)\widehat\chi\big(\eps'_1(z)\xi\big)\widehat\chi\big(\eps'_2(z)\xi\big)\widehat\chi\big(\eps'_1(w)\xi\big)\widehat\chi\big(\eps'_2(w)\xi\big)&=\psi'_{z,w,\la}\big(\eps'(z,w,\la)\xi\big)\\ \chi_0^4\Big(\frac\eta\la\Big)\widehat\chi\big(\eps''_1(z)\eta\big)\widehat\chi\big(\eps''_2(z)\eta\big)\widehat\chi\big(\eps''_1(w)\eta\big)\widehat\chi\big(\eps''_2(w)\eta\big)&=\psi''_{z,w,\la}\big(\eps''(z,w,\la)\eta\big)\ , \end{align*} with $\psi'_{z,w,\la},\psi''_{z,w,\la}\in\cS(\bR)$ uniformly bounded in each Schwartz norm. Then \begin{equation}\label{Ksimpler} K^\la_{\overline\eps}(z,w)=\frac 1{\eps'(z,w,\la)\eps''(z,w,\la)}\widehat{\psi'}_{z,w,\la}\Big(\frac{z_1-w_1}{\eps'(z,w,\la)}\Big)\widehat{\psi''}_{z,w,\la}\Big(\frac{z_2-w_2}{\eps''(z,w,\la)}\Big)\ , \end{equation} so that, for every $N$, we have the uniform bound $$ \big|K^\la_{\overline\eps}(z,w)\big|\le C_N \frac1{\eps'(z,w,\la)\eps''(z,w,\la)}\Big(1+\frac{|z_1-w_1|}{\eps'(z,w,\la)}\Big)^{-N}\Big(1+\frac{|z_2-w_2|}{\eps''(z,w,\la)}\Big)^{-N}\ .
$$ We now make a double partition of $A^2$, depending on which of the three parameters $z,w,\la$ determines the value of $\eps'$ and $\eps''$ respectively: $$ A^2=E'_1\cup E'_2\ ,\qquad A^2=E''_1\cup E''_2\ , $$ such that $$ \eps'(z,w,\la)=\begin{cases} \eps'_1(z)\text{ or }\eps'_2(z)\text{ or }\la\inv&\text{ on }E'_1\\ \eps'_1(w)\text{ or }\eps'_2(w)&\text{ on }E'_2\ , \end{cases}\qquad \eps''(z,w,\la)=\begin{cases} \eps''_1(z)\text{ or }\eps''_2(z)\text{ or }\la\inv&\text{ on }E''_1\\ \eps''_1(w)\text{ or }\eps''_2(w)&\text{ on }E''_2 \ . \end{cases} $$ On any intersection $E'_j\cap E''_k=E_{jk}$, each of $\eps'$ and $\eps''$ depends on only one of the variables $z,w$. We decompose \begin{align*} \big|(T^\la_{\overline\eps})^*T^\la_{\overline\eps}G(z)\big|&\le\sum_{j,k=1}^2\int_A {\mathbf 1}_{E_{jk}}(z,w)\big|K^\la_{\overline\eps}(z,w)\big|\,\big|G(w)\big|\,dw\\ &=\sum_{j,k=1}^2U_{jk}|G|(z)\ . \end{align*} In the case $j=k=1$, writing $\tilde\eps'(z)$ and $\tilde\eps''(z)$ for $\eps'(z,w,\la)$ and $\eps''(z,w,\la)$, which on $E_{11}$ depend only on $z$ and $\la$, we have \begin{align*} U_{11}|G|(z)&\le C \int_A \frac1{\tilde\eps'(z)\tilde\eps''(z)}\Big(1+\frac{|z_1-w_1|}{\tilde\eps'(z)}\Big)^{-2}\Big(1+\frac{|z_2-w_2|}{\tilde\eps''(z)}\Big)^{-2}\big|G(w)\big|\,dw\\ &\le CM_s G(z)\ , \end{align*} where $M_s$ denotes the strong maximal function in $\bR^2$. Hence $U_{11}$ is bounded on $L^2$. In the case $j=k=2$, it is sufficient to observe that $U_{22}^*$ has the same form as $U_{11}$ to obtain the same conclusion. Suppose now that $j\ne k$, say $j=1,k=2$, i.e., with $\eps'$ depending on $z$ and $\eps''$ on $w$.
Then, extending $G$ to be 0 on $\bR^2\setminus A$, \begin{align*} U_{12}|G|(z)&\le C \int_A \frac1{\tilde\eps'(z)\tilde\eps''(w)}\Big(1+\frac{|z_1-w_1|}{\tilde\eps'(z)}\Big)^{-2}\Big(1+\frac{|z_2-w_2|}{\tilde\eps''(w)}\Big)^{-2}\big|G(w)\big|\,dw\\ &= C \int_\bR\frac1{\tilde\eps'(z)}\Big(1+\frac{|z_1-w_1|}{\tilde\eps'(z)}\Big)^{-2}\bigg(\int_\bR \frac1{\tilde\eps''(w)}\Big(1+\frac{|z_2-w_2|}{\tilde\eps''(w)}\Big)^{-2}\big|G(w_1,w_2)\big|\,dw_2\bigg)\,dw_1\\ &= C\int_\bR\frac1{\tilde\eps'(z)}\Big(1+\frac{|z_1-w_1|}{\tilde\eps'(z)}\Big)^{-2} (T|G|)(w_1,z_2)\,dw_1\\ &\le C\,M_1(T|G|)(z_1,z_2)\ , \end{align*} where $M_1f(z_1,z_2)$ denotes the one-dimensional Hardy--Littlewood maximal function of $f(\cdot,z_2)$ evaluated at $z_1$ and $$ Tf(w_1,z_2)=\int_\bR \frac1{\tilde\eps''(w)}\Big(1+\frac{|z_2-w_2|}{\tilde\eps''(w)}\Big)^{-2}f(w_1,w_2)\,dw_2\ . $$ In analogy with the previous case, the operator $T^*$, $$ T^*h(w_1,w_2)=\int_\bR \frac1{\tilde\eps''(w)}\Big(1+\frac{|z_2-w_2|}{\tilde\eps''(w)}\Big)^{-2}h(w_1,z_2)\,dz_2\ , $$ is dominated by $$ \sup_{0<\eps<1}\int_\bR \frac1{\eps}\Big(1+\frac{|z_2-w_2|}{\eps}\Big)^{-2}\big|h(w_1,z_2)\big|\,dz_2\le C\, M_2h(w_1,w_2)\ , $$ where $M_2$ is the Hardy--Littlewood maximal operator in the second variable. It follows that $T$, and hence $U_{12}$, is bounded on $L^2$, and this proves the statement for $p=2$. The conclusion for $1<p<2$ follows by Riesz--Thorin interpolation. \end{proof} We go back to the proof of Theorem \ref{vertical}, recalling that we are assuming $p'=3q$. Observing that $p'/2>2$ and combining together \eqref{square}, \eqref{E2} and Lemma \ref{Tpp'}, we have $$ \|\cE^\la_{\eps',\eps''} g\|_{L^{p'}(\bR^2)}\le C\big(\|G_+\|_{L^r(A)}+\|G_-\|_{L^r(A)}\big)^\half\ , $$ with $G_\pm$ as in \eqref{change} and $r=(p'/2)'=\frac p{2-p}$.
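Before continuing, it may help to record the elementary exponent arithmetic behind these choices (a routine computation, using $p'=3q$ and $r=\frac p{2-p}$, so that $1\le r<2$ and $3-2p>0$ for $p<\frac43$):
\[
3-r=\frac{6-4p}{2-p}\,,\qquad \frac{2r}{3-r}=\frac{2p}{6-4p}=\frac{p}{3-2p}=\frac{p'}{p'-3}=\Big(\frac{p'}{3}\Big)'=q'\,,
\]
which is how the $L^{q'}(I,\mu)$-norm of $g$ appears at the end of the argument.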
To express the right-hand side in terms of the original function $g$, we find that \begin{align*} \|G_+\|_{L^r(A)}^r&=\int_A \Big|\frac{G_0\big(x_+(z),y_+(z)\big)}{\ph'\big(x_+(z)\big)-\ph'\big(y_+(z)\big)}\Big|^r\,dz\\ &=\int_{U^2_+} \frac{|G_0(x,y)|^r}{|\ph'(x)-\ph'(y)|^{r-1}}\,dx\,dy\\ &=\int_{U^2_+} \frac{|g(x)|^r|g(y)|^r}{|\ph'(x)-\ph'(y)|^{r-1}}\kappa(x)^\frac r3\kappa(y)^\frac r3\,dx\,dy\ . \end{align*} Making the change of variables $$ u=\ph'(x)\ ,\qquad v=\ph'(y)\ , $$ and setting $x(u)=(\ph')\inv(u)$, $y(v)=(\ph')\inv(v)$, we obtain that \begin{align*} \|G_+\|_{L^r(A)}^r&=\int_{(\ph'\times\ph')(U^2_+)} \frac{|g\big(x(u)\big)|^r|g\big(y(v)\big)|^r}{|u-v|^{r-1}}\kappa\big(x(u)\big)^{\frac r3-1}\kappa\big(y(v)\big)^{\frac r3-1}\,du\,dv\ . \end{align*} Notice that $1\le r<2$, so that we can interpret, up to a constant factor, the integral as the pairing $\lan I^{2-r}f,f\ran$, where $I^\alpha$ denotes fractional integration of order $\alpha$ and $f(u)=|g\big(x(u)\big)|^r\kappa\big(x(u)\big)^{\frac r3-1}$. By the Hardy--Littlewood--Sobolev inequality, $$ \|G_+\|_{L^r(A)}^r\le C_r\|f\|_{L^s(\ph'(U))}^2\ , $$ with $s=\frac 2{3-r}$. The same estimate holds for $G_-$, so that, for this value of $s$, \begin{align*} \|\cE^\la_{\eps',\eps''} g\|_{L^{p'}(\bR^2)}&\le C_p\|f\|_{L^s(\ph'(U))}^\frac1r\\ &=C_p\Big(\int_{\ph'(U)}|g\big(x(u)\big)|^\frac{2r}{3-r}\kappa\big(x(u)\big)^{-\frac 23}\,du\Big)^\frac{3-r}{2r}\\ &=C_p\Big(\int_U|g(x)|^\frac{2r}{3-r}\kappa(x)^{\frac 13}\,dx\Big)^\frac{3-r}{2r}\\ &=C_p\|g\|_{L^\frac{2r}{3-r}(I,\mu)}\ . \end{align*} But $\frac{2r}{3-r}=(p'/3)'=q'$, with $q$ as in the statement of the theorem. \end{proof} Consider now the truncated strong maximal function of $\widehat f$, \begin{equation}\label{tildeM-def} \cM^+f(x)=\sup_{0<\eps',\eps''<1/4}\frac1{4\eps'\eps''}\int_{|s|<\eps',|t|<\eps''}\big|\widehat f\big(x+s,\ph(x)+t\big) \big|\,ds\,dt\ ,\qquad x\in I.
\end{equation} From Theorem \ref{vertical} we obtain the following inequality for $\cM^+$, in a more restricted range of $p$. \begin{corollary}\label{tildeM} The inequality \begin{equation*} \| \cM^+f\|_{L^q(I,\mu)}\le C_p\|f\|_{L^p(\bR^2)}\ ,\qquad f\in\cS(\bR^2), \end{equation*} holds for $1\le p<\frac87$ and $p'\ge 3q$. \end{corollary} \begin{proof} As before, we assume $p'=3q$. Let $h=f*f^*$, where $f^*(x,y)=\overline{f(-x,-y)}$. Then $\widehat h=|\widehat f|^2$, so that $\|h\|_r\le\|f\|_p^2$, with $r=\frac p{2-p}<\frac43$. Then, for $s$ such that $r'=3s$, $\|\cM h\|_s\le C_r\|f\|_p^2$. But, for $\eps',\eps''<\frac14$ and $\chi$ as in \eqref{Mv} (chosen in addition strictly positive, e.g.\ a Gaussian), \begin{align*} \frac1{4\eps'\eps''}\int_{|s|<\eps',|t|<\eps''}\big|\widehat f\big(x+s,\ph(x)+t\big) \big|\,ds\,dt&\le \Big(\frac1{4\eps'\eps''}\int_{|s|<\eps',|t|<\eps''}\big|\widehat f\big(x+s,\ph(x)+t\big) \big|^2\,ds\,dt\Big)^\half\\ &=\Big(\frac1{4\eps'\eps''}\int_{|s|<\eps',|t|<\eps''}\widehat h\big(x+s,\ph(x)+t\big) \,ds\,dt\Big)^\half\\ &\le C\Big( \int \widehat h\big(x+s,\ph(x)+t\big) \chi_{4\eps'}(s)\chi_{4\eps''}(t)\,ds\,dt\Big)^\half\\ &\le C\big(\cM h(x)\big)^\half\ . \end{align*} Hence $\| \cM^+f\|_{L^q(I,\mu)}\le C\|\cM h\|_{q/2}^\half$, and it can be easily checked that $q/2=s$. \end{proof} \section{Lebesgue points of $\widehat f$ along a curve}\label{sec-differentiation} Adapting standard arguments, cf.\ \cite{S}, we obtain the following reformulation of Theorem \ref{lebesgue}\,(ii), where $B_\eps$ denotes the disk of radius $\eps$ centered at $0$. \begin{corollary} Let $1\le p<\frac87$ and let $S$ be a $C^2$ curve in the plane. Given $f\in L^p(\bR^2)$, for almost every $x\in I$ relative to affine arclength, $$ \lim_{\eps\to0}\frac1{|B_\eps|}\int_{B_\eps}\big|\widehat f\big(x+x',\ph(x)+y'\big)-\cR f\big(x,\ph(x)\big)\big|\,dx'\,dy'=0\ .
$$ \end{corollary} \begin{proof} We may restrict ourselves to a subset of $S$ which is the graph of a $C^2$ function $\ph$ on an interval $I$ with $\ph''\ne0$. Let $\mu$ be as in Section \ref{sec-vertical}. Given $\tau>0$, let $g\in\cS(\bR^2)$ be such that $\|f-g\|_p<\tau$. Since $\cR g=\widehat g_{|_S}$, \begin{align*} F(x)&=\limsup_{\eps\to0}\frac1{|B_\eps|}\int_{B_\eps}\big|\widehat f\big(x+x',\ph(x)+y'\big)-\cR f\big(x,\ph(x)\big)\big|\,dx'\,dy'\\ &\le \limsup_{\eps\to0}\frac1{|B_\eps|}\int_{B_\eps}\big|\widehat {(f-g)}\big(x+x',\ph(x)+y'\big)\big|\,dx'\,dy'+\big|\cR (f-g)\big(x,\ph(x)\big)\big|\\ &\le C\,\cM^+(f-g)(x)+\big|\cR (f-g)\big(x,\ph(x)\big)\big|\ . \end{align*} Hence, if $q=p'/3$, then $\|F\|_{L^q(I,\mu)}\le C\tau$ for every $\tau>0$, i.e., $F=0$ $\mu$-a.e. \end{proof} \vskip1cm \begin{thebibliography}{99} \newcommand\auth[1]{{\textrm{#1}}} \newcommand\papertitle[1]{{\textrm {#1}}} \newcommand\jourtit[1]{{\textit{\frenchspacing#1}}} \newcommand\journum[1]{{\textbf{#1}}} \newcommand\booktit[1]{{\textit{#1}}} \bibitem{CS} \auth{L. Carleson, P. Sj\"olin}, \papertitle{Oscillatory integrals and a multiplier problem for the disc}, \jourtit{Studia Math.}, \journum{44}\,(1972), 287--299. \bibitem{Fe} \auth{C. Fefferman}, \papertitle{Inequalities for strongly singular convolution operators}, \jourtit{Acta Math.}, \journum{124}\,(1970), 9--36. \bibitem{Sj} \auth{P. Sj\"olin}, \papertitle{Fourier multipliers and estimates of the Fourier transforms of measures carried by smooth curves in $\bR^2$}, \jourtit{Studia Math.}, \journum{51}\,(1974), 169--182. \bibitem{S} \auth{E. M. Stein}, \booktit{Singular Integrals and Differentiability Properties of Functions}, Princeton University Press, Princeton, N.J., 1970. \bibitem{T} \auth{P. Tomas}, \papertitle{A restriction theorem for the Fourier transform}, \jourtit{Bull. Amer. Math. Soc.}, \journum{81}\,(1975), 477--478. \bibitem{Zb} \auth{A.
Zygmund}, \booktit{Trigonometric Series}, Cambridge University Press, 1959. \bibitem{Z} \auth{A. Zygmund}, \papertitle{On Fourier coefficients and transforms of functions of two variables}, \jourtit{Studia Math.}, \journum{50}\,(1974), 189--201. \end{thebibliography} \end{document}
\begin{document} \subjclass[2010]{14E20, 14J28, (14J27, 14J29)} \keywords{K3 surfaces, triple covers, Tschirnhausen vector bundle} \title{Triple covers of K3 surfaces} \begin{abstract} We study triple covers of K3 surfaces, following \cite{M85}. We relate the geometry of the covering surfaces with the properties of both the branch locus and the Tschirnhausen vector bundle. In particular, we classify Galois triple covers, computing numerical invariants of the covering surface and of its minimal model. We provide examples of non-Galois triple covers, both in the case in which the Tschirnhausen bundle splits into the sum of two line bundles and in the case in which it is an indecomposable rank 2 vector bundle. We provide a criterion to construct rank 2 vector bundles on a K3 surface $S$ which determine a non-Galois triple cover of $S$. The examples presented are in any admissible Kodaira dimension; in particular, we provide constructions of irregular covers of K3 surfaces and of surfaces with geometric genus equal to 2 whose transcendental Hodge structure splits into the sum of two Hodge structures of K3 type. \end{abstract} \section{Introduction} The Galois covers of K3 surfaces are a quite classical and interesting topic of research: for example, the K3 surfaces which are Galois covers of other K3 surfaces are classified in \cite{X}, and the Abelian surfaces which are Galois covers of K3 surfaces are classified in \cite{F}. The study of surfaces with higher Kodaira dimension which are covers of K3 surfaces is less systematic, and sporadic examples appear in order to construct specific surfaces; see e.g. \cite{CD89, L19,L20, PZ19, RRS19, S06}. Nevertheless, a systematic approach to the study of the double covers of K3 surfaces is presented in \cite{Gdouble}, where smooth double covers are classified and their birational invariants are given.
In the same paper certain covering surfaces with $p_g=2$ are described in more detail, since the geometry of these surfaces is quite interesting (see e.g. \cite{L19,L20}). The aim of this paper is to analyse triple covers of K3 surfaces. One of the main differences between covers of degree 2 and those of degree 3 is that the latter are not necessarily Galois.\\ In Section \ref{sec: triple cover in algebraic geometry} we present the general theory of triple covers of surfaces, following \cite{M85, Tan02}. We consider a smooth surface $S$ and a triple cover $f\colon X\rightarrow S$, which is naturally associated to a rank 2 vector bundle $\mathscr{E}$ with the property $$f_*\mathcal{O}_X=\mathcal{O}_S\oplus \mathscr{E}.$$ The vector bundle $\mathscr{E}$ is called the Tschirnhausen vector bundle of the cover. There are three possibilities: \begin{itemize} \item the cover is Galois (in particular $\mathbb{Z}/3\mathbb{Z}$-cyclic); this happens if $\mathscr{E}$ splits into the direct sum of two line bundles $\mathcal{L}$ and $\mathcal{M}$ which are determined by the non-trivial characters of $\mathbb{Z}/3\mathbb{Z}$; see Paragraph \ref{say: Galois enigespance}. In this case the triple cover is totally ramified and the singularities of $X$ are due only to singularities of the branch locus; \item $\mathscr{E}$ splits into the direct sum of two line bundles $\mathcal{L}$ and $\mathcal{M}$ but the cover is not Galois. In this case there are components in the branch locus which are of simple, but not total, ramification, and we refer to this situation as a \emph{split non-Galois} triple cover; \item $\mathscr{E}$ is indecomposable; also in this case the cover is not Galois, and we refer to this case as the \emph{non-split} triple cover.
\end{itemize} We are interested in calculating the numerical invariants of the covering surface $X$: $p_g(X)$, $q(X)$, $c_1(X)^2$, $c_2(X)$ and $\kappa(X)$ (which are, respectively, the geometric genus, the irregularity, the square of the first Chern class, the second Chern class and the Kodaira dimension of the surface $X$). We relate them to the properties of the surface $S$ and of the bundle $\mathscr{E}$. Some of these numbers are not birational invariants; hence, if $X$ is singular, we need to find the minimal model of $X$ to determine the numerical invariants for this model. This aspect is highly non-trivial, even if one restricts oneself to Galois triple covers: indeed, it requires not just a careful analysis of the singularities of $X$, but also of the configurations of the $(-1)$-curves appearing in its minimal resolution. We restrict to the situation where $S$ is a K3 surface. We provide a criterion to determine the Kodaira dimension of the covering surface $X$, and thanks to the examples constructed in the paper we prove \begin{theorem} There exist pairs $(S,f)$ such that $f:X\rightarrow S$ is a triple cover, either Galois, split non-Galois or non-split, with either $\kappa(X)=1$ or $\kappa(X)=2$. There exist pairs $(S,f)$ such that $f:X\rightarrow S$ is a triple cover, either Galois or split non-Galois, with $\kappa(X)=0$; in this case $X$ is necessarily (a possibly singular model of) either a K3 surface or an Abelian surface. \end{theorem} Since $X$ is a cover of $S$, it is not possible that $\kappa(X)=-\infty$. We do not know whether there exist non-split triple covers $f:X\rightarrow S$ with $\kappa(X)=0$.\\ We consider the Galois triple covers of K3 surfaces, obtaining both general results (in Section \ref{sec: the Galois case}) and explicit families of examples (in Section \ref{sec: examples Galois cover of K3 surfaces}). First, we discuss the singularities of $X$, all coming from the singularities of the branch locus of the cover $f:X\rightarrow S$.
There are two different strategies to resolve the singularities of the triple cover: one can blow up $S$ at the singularities of the branch locus until one obtains a birational model of $S$ such that the strict transform of the branch locus is smooth, and then construct the smooth triple cover of this surface. The surface obtained is birational to $X$ and is called the canonical resolution of $X$ (cf. \cite{Tan02}). But one can also consider the possibly singular surface $X$ and then resolve its singularities, obtaining a resolution which is called the minimal resolution of $X$. Note that neither the canonical nor the minimal resolution is necessarily a minimal surface. In Sections \ref{say: singu type 1} and \ref{say: sing type 2} we construct both these resolutions when the singularities of the branch locus are ordinary, and we observe that these singularities are {\it negligible} (see Definition \ref{def.neg.sing} and Proposition \ref{prop: negligible}), which allows us to compute the numerical invariants not only of $X$, but also of its canonical and minimal resolutions, in Proposition \ref{prop: rito numbers}. Some other singularities of the branch locus are considered in Theorem \ref{say: other singularities in the branch locus} and are proved to be negligible too. Under mild smoothness conditions on some components of the branch locus we are often able to identify all the $(-1)$-curves that appear in the resolutions considered, and therefore to compute the numerical invariants of the minimal model of $X$; see Proposition \ref{cor: example general type, irreducible D1}, Proposition \ref{prop: case (2) theorem- possibilities}, Proposition \ref{prop: examples case 3 of proposition}.
The main result of this part is a systematic classification of the Galois triple covers of $S$, which can be summarized in the following theorem. \begin{theorem} Let $f\colon X \rightarrow S$ be a normal Galois triple cover of a K3 surface, whose branch locus has $n \geq 1$ connected components $D_1,\ldots, D_n$, and let $\Lambda_{D_i}$ be the lattice generated by the irreducible components of $D_i$: $\bullet$ $\kappa(X)=0$ if and only if all the lattices $\Lambda_{D_i}$ are negative definite; in this case $\Lambda_{D_i}\simeq A_2(-1)$, $n=6$ or $n=9$, and $X$ has trivial canonical bundle. $\bullet$ $\kappa(X)=2$ if and only if there exists a lattice $\Lambda_{D_i}$ whose signature is $sgn(\Lambda_{D_i})=(1,{\rm rank}(\Lambda_{D_i})-1)$; in this case all the other $\Lambda_{D_j}$ are isometric to $A_2(-1)$. $\bullet$ $\kappa(X)=1$ if and only if no lattice $\Lambda_{D_i}$ is indefinite and there exists at least one lattice $\Lambda_{D_i}$ which is degenerate; in this case the elliptic fibration on $X$ is induced by one on $S$. In particular, if there is a component $D_1$ of the branch locus such that $D_1$ is an irreducible curve, then the following holds: $\bullet$ if $D_1^2=0$ then $\kappa(X)=1$; $\bullet$ if $D_1^2>0$ then $\kappa(X)=2$, $n\leq 10$, $D_1^2=6d$ for an integer $d>0$, and there exists an integer $k\geq -2$ such that $d=n-1+3k$. If $D_1$ is smooth and $X^{\circ}$ is the minimal model of $X$, then \begin{equation*}\label{eq: inv Xm special case}\chi(X^{\circ})=5+n+5k,\ K_{X^{\circ}}^2=8n-8+24k, \ e(X^{\circ})=67+5n+36k.\end{equation*} \end{theorem} We construct two interesting kinds of examples: in Corollary \ref{cor: pg=2 gen.type} we construct Galois triple covers $f:X\rightarrow S$ of K3 surfaces $S$ which have $p_g(X)=2$. Hence the transcendental Hodge structure of $X$ is of type $(2,\star,2)$. Since the pullback of the transcendental Hodge structure of $S$ is of K3 type, i.e.
of type $(1,\star',1)$, there is a splitting of the transcendental Hodge structure of $X$ into the sum of two Hodge structures of K3 type. One of them is of course geometrically associated with a K3 surface, i.e. with $S$. It would be interesting to find another K3 surface associated with the other Hodge structure of K3 type. The second example of geometric interest is the construction of irregular triple covers of regular surfaces, see Section \ref{subsec: irregular}. If the Kodaira dimension of the surface $X$ is 0 or 1, the construction is well known: there are triple covers of K3 surfaces by Abelian surfaces (which are irregular and of Kodaira dimension 0); a base change on an elliptically fibered K3 surface often produces elliptic fibrations (of Kodaira dimension 1) with a non-rational base curve (which forces the surface to be irregular). The situation is more complicated if one requires that $X$ be a surface of general type: such covers exist, but are not very frequent (among the double covers classified in \cite{Gdouble} there are very few examples). Here we provide an explicit construction in Theorems \ref{theo: cover S16} and \ref{theo: cover S15}. In Section \ref{sec: triple split non Galois cover} we briefly discuss the split non-Galois case for triple covers of K3 surfaces and we provide an example in every admissible Kodaira dimension. The constructions are based on the study of the Galois closure. In Section \ref{sec: triple cover non split} we consider the most general and complicated case, i.e. the case where the vector bundle $\mathscr{E}$ is indecomposable. In this case we consider a vector bundle $\mathscr{E}$ defined by the sequence $$0\rightarrow \mathcal{L} \rightarrow \mathscr{E}^{\vee} \rightarrow \mathcal{M}\otimes \mathcal{I}_Z\rightarrow 0,$$ where $\mathcal{L}$ and $\mathcal{M}$ are line bundles on $S$ and $Z$ is a non-empty 0-dimensional scheme.
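From such a presentation, the Chern classes of $\mathscr{E}$ can be read off by a standard computation with the Whitney formula (here $\ell(Z)$ denotes the length of the $0$-dimensional scheme $Z$, and we use that $c_2$ of the ideal sheaf $\mathcal{I}_Z$ equals $\ell(Z)$):
\[
c_1(\mathscr{E}^{\vee})=c_1(\mathcal{L})+c_1(\mathcal{M}),\qquad
c_2(\mathscr{E}^{\vee})=c_1(\mathcal{L})\cdot c_1(\mathcal{M})+\ell(Z),
\]
so $c_1(\mathscr{E})=-c_1(\mathcal{L})-c_1(\mathcal{M})$ and $c_2(\mathscr{E})=c_1(\mathcal{L})\cdot c_1(\mathcal{M})+\ell(Z)$; these are the quantities entering the invariant formulae for triple covers recalled in Section \ref{sec: triple cover in algebraic geometry}.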
Our goal is to list reasonable conditions on $\mathcal{L}$ and $\mathcal{M}$ which assure the existence of the vector bundle $\mathscr{E}$ and of a triple cover $X\rightarrow S$ whose Tschirnhausen bundle is $\mathscr{E}$: \begin{theorem} Let $\mathcal{L}$ and $\mathcal{M}$ be two line bundles on a K3 surface $S$ such that $$h^0(S,\mathcal{L}^{\vee}\otimes \mathcal{M})=0,\ h^1(S,\mathcal{L}^{\vee}\otimes \mathcal{M})\geq 1,\ h^0(S,\mathcal{L}^{\otimes 2}\otimes \mathcal{M}^{\vee})\geq 1.$$ Let $Z$ be a non-empty $0$-dimensional scheme on $S$. Then there exists a triple cover $f:X\rightarrow S$ whose Tschirnhausen bundle is any rank two indecomposable vector bundle $\mathscr{E}$ obtained from a non-split extension: $$0\rightarrow \mathcal{L} \rightarrow \mathscr{E}^{\vee} \rightarrow \mathcal{M}\otimes \mathcal{I}_Z\rightarrow 0.$$\end{theorem} Thanks to this theorem, the problem of finding a vector bundle $\mathscr{E}$ which defines a non-split triple cover is reduced to the problem of finding certain line bundles on $S$ with the required properties. We apply this theorem to construct a non-split triple cover with positive Kodaira dimension; in particular, we perform all the computations in one case, obtaining a surface of Kodaira dimension 1 with $p_g=6$ and $q=3$.\\ \textbf{Notation and conventions.} We work over the field $\mathbb{C}$ of complex numbers. For $a,b\in \mathbb{Z}$, $a\equiv_n b$ means $a\equiv b\mod n$. By ``\emph{surface}'' we mean a projective, non-singular surface $S$, and for such a surface $\omega_S=\mathcal{O}_S(K_S)$ denotes the canonical class, $p_g(S)=h^0(S, \, \omega_S)$ is the \emph{geometric genus}, $q(S)=h^1(S, \, \omega_S)$ is the \emph{irregularity} and $\chi(\mathcal{O}_S)=1-q(S)+p_g(S)$ is the \emph{Euler-Poincar\'e characteristic}. If $q(S)>0$, we call $S$ an \emph{irregular surface}. The minimal model of a surface $S$ will be denoted by $S^{\circ}$; the minimal resolution of a singular surface $S$ will be denoted by $S'$.
Throughout the paper, we denote Cartier (or Weil) divisors on a variety by capital letters and the corresponding line bundles by calligraphic letters, so we write for instance $\mathcal{L}=\mathcal{O}_S(L)$. Moreover, if $d \in H^0(\mathcal{L})$, the corresponding Weil divisor will be denoted by $D$. Given a purely 0-dimensional subscheme $Z$ of a variety, we often call $Z$ a $0$-cycle and we denote by $\ell(Z)$ its length. For a locally free sheaf $\mathcal{F}$ we shall denote its total Chern class by $c(\mathcal{F})$ and its Chern character by $ch(\mathcal{F})$. \textbf{Acknowledgments} Both authors were partially supported by GNSAGA-INdAM. We thank R. Pignatelli and F. Polizzi for useful discussions and suggestions. Both authors wish to thank the referee for many comments and suggestions that improved the presentation of these results. \section{Triple covers in algebraic geometry}\label{sec: triple cover in algebraic geometry} The case of triple covers of algebraic varieties differs considerably from the double cover case, above all because the cover might not be Galois. Therefore, a different approach is needed. The theory of triple covers in algebraic geometry was started by R. Miranda in his seminal paper \cite{M85}, and developed further by Casnati--Ekedahl in \cite{CE96} and Tan in \cite{Tan02}; see also \cite{Pa91, FPV19}. The main result of this theory is the following. \begin{theorem} \emph{\cite[Theorem 1.1]{M85}} \label{teo.miranda} A triple cover $f \colon X \to Y$ of an algebraic variety $Y$ is determined by a rank $2$ vector bundle $\mathscr{E}$ on $Y$ and by a global section $\eta \in H^0(Y, \, S^3 \mathscr{E}^{\vee} \otimes \bigwedge^2 \mathscr{E})$, and conversely. \end{theorem} The vector bundle $\mathscr{E}$ is called the \emph{Tschirnhausen bundle} of the cover, and it satisfies \begin{equation}\label{eq_OOE} f_{*}\mathcal{O}_X = \mathcal{O}_Y \oplus \mathscr{E}.
\end{equation} By \cite[Theorem 1.5]{CE96}, if $Y$ is smooth and the section $\eta\in H^0(Y, \, S^3 \mathscr{E}^{\vee} \otimes \bigwedge^2 \mathscr{E})$ is generic, then $X$ is Gorenstein. Let $D$ be a divisor such that $\mathcal{O}_Y(D)=(\bigwedge^2\mathscr{E})^{-2}$. \begin{proposition} \emph{\cite[Theorem 1.3]{Tan02}}\label{thm_Branch} Let $f \colon X \to Y$ be a triple cover, and assume that $Y$ is a normal variety. There exist two divisors $D'$ and $D''$ such that $D=2D'+D''$; if $f$ is totally ramified, then $D''=0$ and $D'$ is the branch divisor; otherwise $D$ is the branch divisor and $D'$ is the divisor over which $f$ is totally ramified. \end{proposition} \begin{say}\label{say_diag} We observe that there exists a divisor $D/2$ such that $\mathcal{O}_Y(D/2)=(\bigwedge^2\mathscr{E})^{-1}$. By the previous proposition, $D''=D-2D'$ is effective and divisible by two (i.e. $D''=2(D/2-D')$ in $Pic(Y)$). Hence there exists a double cover of $Y$ branched on $D''$. This double cover is used to get the Galois closure of the triple cover, whose Galois group is $\mathfrak{S}_3$ (see \cite{Tan02,CP}). We have the following diagram: \begin{eqnarray}\label{diag split triple cover}\xymatrix{&&Z\ar[dd]_{\mathfrak{S}_3}\ar[dl]_{2:1}^{\alpha}\ar[dr]^{3:1}_{\beta_2}\\&X\ar[dr]_{3:1}^f&&W\ar[dl]^{2:1}_{\beta_1}\supset \beta_1^{-1}(D')\\ &D'\cup D''\subset &Y&\supset D''}\end{eqnarray} We notice that the branch locus of $\beta_1$ is $D''$; $\beta_2$ is a Galois triple cover branched along $\beta_1^{-1}(D')$; the triple cover $f$ is totally branched on $D'$ and simply on $D''$. Finally, it is worth noticing that this is a special case of the dihedral covers studied in \cite{CP}.\\ \end{say} If $Y$ is smooth, then $f$ is smooth over $Y - D$; in other words, all the singularities of $X$ come from the singularities of the branch locus.
More precisely, we have \begin{prop} \emph{\cite[Proposition 5.4]{Pa89}, \cite[Theorem 3.2]{Tan02}} \label{prop.sing.ram} Let $Y$ be a smooth variety and let $y\in Y$. Then $f^{-1}(y)$ is a singular point of $X$ if and only if $y\in \emph{Sing}(D)$ and one of the following conditions holds: \begin{itemize} \item[$(i)$] $f$ is not totally ramified over $y;$ \item[$(ii)$] $f$ is totally ramified over $y$ and $\emph{mult}_y(D) \geq 3$. \end{itemize} So -- using the notation of Proposition \ref{thm_Branch} -- $X$ is smooth if and only if \begin{enumerate} \item $D'$ is smooth; \item $D''$ and $D'$ have no common points; \item $D''$ has only cusps as singular points, over which $f$ is totally ramified. \end{enumerate} \end{prop} We observe that $(1)$ is due to the multiplicity 2 of the divisor $D'$ in the branch divisor $D$, and that even if $D''$ is the divisor where $f$ is simply branched, it may contain isolated points of total ramification. \begin{prop} \emph{\cite[Theorem 4.1]{Tan02}} \label{prop.can.ris} Let $f \colon X \to Y$ be a triple cover of a smooth surface $Y$, with $X$ normal. Then there are a finite number of blow-ups $ \sigma \colon \widetilde{Y} \to Y$ of $Y$ and a commutative diagram \begin{equation} \label{dia.can} \begin{xy} \xymatrix{ \widetilde{X} \ar[d]_{\tilde{f}} \ar[rr]^{\tilde{\sigma}} & & X \ar[d]^{f} \\ \widetilde{Y} \ar[rr]^{\sigma} & & Y, \\ } \end{xy} \end{equation} where $\widetilde{X}$ is the normalization of $\widetilde{Y} \times_{Y} X$, such that $\tilde{f}$ is a triple cover with smooth branch locus. In particular, $\widetilde{X}$ is a resolution of the singularities of $X$ (in general this resolution is neither the minimal resolution, nor does it give a minimal model of $X$). \end{prop} Following \cite[Paragraph 4]{Tan02}, we call $\widetilde{X}$ the \emph{canonical resolution} of $X$. In the case of smooth surfaces, one has the following formulae.
\begin{prop} \emph{\cite[Propositions 4.7 and 10.3]{M85}} \label{prop.invariants} Let $f \colon X \rightarrow Y$ be a triple cover of smooth surfaces with Tschirnhausen bundle $\mathscr{E}$. Then \begin{itemize} \item[$\boldsymbol{(i)}$] $h^i(X, \, \mathcal{O}_X)=h^i(Y, \, \mathcal{O}_Y)+h^i(Y, \, \mathscr{E})$ for all $i\geq 0;$ \item[$\boldsymbol{(ii)}$] $\chi(\mathcal{O}_X) = \chi(\mathcal{O}_Y) + \chi(\mathscr{E}) = 3\chi(\mathcal{O}_Y) + \frac{1}{2}c^2_1(\mathscr{E}) - \frac{1}{2}c_1(\mathscr{E})K_Y - c_2(\mathscr{E})$. \item[$\boldsymbol{(iii)}$] $K^2_X=3K^2_Y-4c_1(\mathscr{E})K_Y+2c_1^2(\mathscr{E})-3c_2(\mathscr{E})$. \item[$\boldsymbol{(iv)}$] $e(X) = 3e(Y) - 2c_1(\mathscr{E})K_Y + 4c^2_1(\mathscr{E})-9c_2(\mathscr{E})$. \end{itemize} \end{prop} \begin{say}\label{say_Canonical} Here we briefly analyse the relation between the canonical bundle of the covering surface and that of the base. Let $f\colon X \longrightarrow Y$ be a triple cover with Tschirnhausen bundle $\mathscr{E}$. We assume that $Y$ is smooth and $X$ normal. Following \cite{CE96}, we observe that to each cover $f:X\rightarrow Y$ of degree $d$ there is associated an exact sequence $$0\rightarrow \mathcal{O}_Y\rightarrow f_*\mathcal{O}_X\rightarrow \mathscr{E}^{\vee}\rightarrow 0$$ whose dual sequence is $$0\rightarrow \mathscr{E}\rightarrow f_*\omega_{X|Y}\rightarrow\mathcal{O}_Y\rightarrow 0,$$ defining the relative dualizing sheaf $\omega_{X|Y}$. If we assume that $X$ is Gorenstein, by \cite[Theorem 2.1]{CE96} the ramification divisor $R$ satisfies $\mathcal{O}_X(R)=\omega_{X|Y}$, where $R$ is the set of critical points of the map $f:X\rightarrow Y$. Since $X$ is normal, following \cite{R87} we define the canonical divisor $K_X$ of $X$ as the Weil divisor whose restriction to the smooth locus is the canonical divisor. Since $X$ is assumed to be Gorenstein, $K_X$ is a Cartier divisor as well.
The restriction of $f$ to $X_{0}$, the smooth locus of $X$, is a triple cover $f_{0}:X_{0}\rightarrow f(X_{0})$. By the Hurwitz ramification formula, $$K_{X_{0}} = f_{0}^*(K_{f(X_{0})})+(\omega_{X|Y})_{|X_0}.$$ Hence, by the definition of the canonical divisor, we obtain the following equality \begin{equation}\label{canonicalformula1} K_X = f^*(K_Y)+R. \end{equation} In the sequel, we shall be particularly interested in the case when $Y$ is a K3 surface -- or, more generally, when $Y$ is a surface with trivial canonical bundle. Then Equation \eqref{canonicalformula1} simplifies to \begin{equation}\label{canonicalformula} K_X = f^*(K_Y)+R=R. \end{equation} In addition, by Equation \eqref{eq_OOE} and by duality for finite flat morphisms we obtain \begin{equation*} f_*\mathcal{O}_X(K_X)\cong \big(f_*\mathcal{O}_X\big)^{\vee}\otimes \mathcal{O}_Y(K_Y) \cong \mathcal{O}_Y(K_Y) \oplus \big(\mathscr{E}^{\vee} \otimes \mathcal{O}_Y(K_Y)\big), \end{equation*} which yields at once \begin{equation}\label{eq_h0k} h^0(\mathcal{O}_X(K_X)) \geq h^0(\mathcal{O}_Y(K_Y)). \end{equation} Formula \eqref{canonicalformula} is particularly useful to determine the Kodaira dimension of the covering surface $X$; indeed the following holds: \end{say} \begin{prop}\label{prop_kodairaDim1} Let $f\colon X \longrightarrow Y$ be a triple cover with $Y$ smooth and $X$ Gorenstein, and let $R$ be the ramification divisor. Write $|R|=|M|+F$, with $|M|$ the moving part and $F$ the fixed part of $R$. Suppose that $\mathcal{O}(K_Y)\cong \mathcal{O}_Y$; then the Kodaira dimension $\kappa(X)$ of $X$ is greater than or equal to $0$. Moreover, the following holds: \begin{description} \item[$\mathbf{\kappa(X)=0}$] if and only if $|M| = \emptyset$ and either $F$ is supported on rational curves or $F=\emptyset$. Moreover, $X$ can be neither an Enriques surface nor a bielliptic one.
\item[$\mathbf{\kappa(X)=1}$] if and only if $|M| \neq \emptyset$ and the general member of $|M|$ is supported on elliptic curves. \item[$\mathbf{\kappa(X)=2}$] if and only if $|M| \neq \emptyset$ and the general member of $|M|$ is supported on curves of genus $g \geq 2$. \end{description} \end{prop} \begin{proof} Suppose first that $f$ is \'etale of degree $d$; then $\kappa(X)=\kappa(Y)=0$. Hence we can assume that $f$ is ramified. Since $f$ is a triple cover of a surface $Y$ with trivial canonical bundle, by \eqref{eq_h0k} we have $h^0(\mathcal{O}(K_X))\geq 1$, hence the Kodaira dimension of $X$ satisfies $\kappa(X) \geq 0$. Moreover, if $\kappa(X)=0$ then $X$ can be neither an Enriques surface nor a bielliptic one. By the Hurwitz formula \eqref{canonicalformula}, the canonical divisor is $K_X =R$. If the divisor $R$ is supported at least on a moving curve of positive genus, then Equation \eqref{canonicalformula} implies that $K_X$ is non-trivial, thus $\kappa(X)>0$. Moreover, $X$ is a properly elliptic surface if and only if its canonical bundle is supported only on elliptic curves (see \cite[Section III]{M89}); thus, again by \eqref{canonicalformula}, we obtain the second case of the Proposition. If instead the general member of $|R|$ is supported on curves of genus $g\geq 2$, then $X$ is of general type. Finally, if $R$ does not move in a linear system and is supported only on rational curves, then the minimal model of $X$ must have trivial canonical bundle, and so we get the case $\mathbf{\kappa(X)=0}$. Conversely, if $|M|=\emptyset$, then $\kappa(X)$ cannot be 1 or 2, hence $\kappa(X)=0$. If $|M| \neq \emptyset$ and the general member of $|M|$ is supported on curves of genus $g \geq 2$, then $\kappa(X)$ cannot be 0 or 1, hence it is 2. If $|M| \neq \emptyset$ and the general member of $|M|$ is supported on elliptic curves, then by adjunction $M\cdot M=0$, and so $X$ would have a genus 1 fibration, which is impossible for surfaces of general type.
Since $|M| \neq \emptyset$, $\kappa(X)\neq 0$. \end{proof} Similar results hold in case $X$ is normal, but not necessarily Gorenstein. The main problem here is that the canonical divisor is not Cartier, so we have to consider the canonical resolution of $X$. \begin{prop}\label{prop_kodairaDim} The results of Proposition \ref{prop_kodairaDim1} hold even if $X$ is normal (not necessarily Gorenstein).\end{prop} \begin{proof} If $X$ is normal, we consider the canonical resolution $\widetilde{X}$, as in Proposition \ref{prop.can.ris}, since $\kappa(X)=\kappa(\widetilde{X})$. The map $\sigma:\widetilde{Y}\rightarrow Y$ is a blow-up and introduces an exceptional divisor $E$, which does not move in a linear system. The canonical bundle of $\widetilde{Y}$ is $K_{\widetilde{Y}}=K_Y+E=E$. We apply the Hurwitz ramification formula to the triple cover $\widetilde{f}\colon \widetilde{X}\rightarrow\widetilde{Y}$: $$K_{\widetilde{X}}=\widetilde{f}^*E+R_{\widetilde{f}},$$ where $R_{\widetilde{f}}$ is the ramification divisor of $\widetilde{f}$. Let us analyse both summands: $\widetilde{f}^*E$ has negative self-intersection and is the exceptional locus of $\widetilde{\sigma}$, by the commutativity of the diagram. Hence its movable part is trivial; otherwise there would be at least one curve in $|\widetilde{f}^*E|$ which is not contracted by $\widetilde{\sigma}$, say $\widetilde{D}$. Denoting by $D$ the image of this curve, $\sigma^*(D)\sim \widetilde{D}+k\widetilde{f}^*E \sim (k+1)\widetilde{f}^*E$, for a non-negative $k$. But this contradicts $\sigma^*(D)\widetilde{f}^*E=0$. We observe that $R_{\widetilde{f}}$ consists of curves contained in the linear system of $$\widetilde{\sigma}^*R=\widetilde{\sigma}^*(M+F)=\widetilde{M}+\widetilde{F}+k\widetilde{f}^*E,$$ where $\widetilde{M}$ and $\widetilde{F}$ are the strict transforms of $M$ and $F$ respectively, using the notation of Proposition \ref{prop_kodairaDim1}.
Since $\widetilde{f}^*E$ has no movable part, the movable part of $R_{\widetilde{f}}$ is contained in $\widetilde{M}$. Since $\sigma$ is a blow-up, $|M|$ is isomorphic to $|\widetilde{M}|$. To conclude, it suffices to apply Proposition \ref{prop_kodairaDim1} to the cover $\widetilde{f}$. \end{proof} \begin{say}\label{say_tripleGalois} The situation becomes a little easier if we consider only the Galois case. Indeed, by \cite[Theorem 2.1]{Tan02} a triple cover is Galois if and only if it is totally ramified over its branch locus. Moreover, in this instance the branch locus is exactly $D'=D/2=B+C$ \cite[Theorem 1.3 (2)]{Tan02}. In this case $X$ is smooth if and only if $D$ is smooth \cite[Theorem 3.2]{Tan02}. A Galois triple cover $f:X\rightarrow Y$ (with $Y$ smooth) is first of all a cyclic $\mathbb{Z}/3\mathbb{Z}$-cover, hence it can be treated as a cyclic cover. Moreover, it is determined by two curves $B$ and $C$ in $Y$ and by two divisors $L,$ $M$ on $Y$ such that $B\in |2L-M|$ and $C\in |2M-L|$. As already remarked, the branch locus of $f$ is $B+C$ and $3L\equiv 2B+C,$ $3M\equiv B+2C$. Thus the class of the branch locus is $B+C=L+M$. The surface $X$ is normal if and only if $B+C$ is reduced. Otherwise it is possible to consider the normalization, which is associated with another triple cover, as explained in \cite[Proposition 7.5]{M85}. The singularities of $X$ lie over the singularities of $D'=B+C$, see Proposition \ref{prop.sing.ram}. If $B+C$ is smooth, we have \begin{equation}\label{eq1} \chi(\mathcal O_X)=3\chi(\mathcal O_Y)+\frac{1}{2}\left(L^2+K_YL\right)+\frac{1}{2}\left(M^2+K_YM\right), \end{equation} \begin{equation}\label{eq2} K_X^2=3K_Y^2+4\left(L^2+K_YL\right)+4\left(M^2+K_YM\right)-4LM, \end{equation} \begin{equation}\label{eq3} q(X)=q(Y)+h^1(Y,\mathcal O_Y(K_Y+L))+h^1(Y,\mathcal O_Y(K_Y+M)), \end{equation} \begin{equation}\label{eq4} p_g(X)=p_g(Y)+h^0(Y,\mathcal O_Y(K_Y+L))+h^0(Y,\mathcal O_Y(K_Y+M)).
\end{equation} \end{say} Of course, one can apply Theorem \ref{teo.miranda} to the Galois case. To this end, let $\xi$ be a primitive cube root of unity, generating $\mathbb{Z}/3\mathbb{Z}$; we obtain: \begin{prop}\emph{\cite[Proposition 7.1]{M85}, \cite[Theorem 1.3]{Tan02}}\label{prop_MirandaGalois} If $f\colon X \longrightarrow Y$ is a Galois triple cover, then: \begin{itemize} \item[$\boldsymbol{(i)}$] The sheaf $f_*\mathcal{O}_X$ splits into eigenspaces as $\mathcal{O}_Y \oplus \mathcal{L}^{-1} \oplus \mathcal{M}^{-1}$, where $\mathcal{O}_Y$, $\mathcal{L}^{-1}$ and $\mathcal{M}^{-1}$ are the eigenspaces for $1$, $\xi$ and $\xi^2$, respectively. \item[$\boldsymbol{(ii)}$] The Tschirnhausen bundle $\mathscr{E}$ for $f$ is the sum of eigenspaces $\mathcal{L}^{-1} \oplus \mathcal{M}^{-1}$. \item[$\boldsymbol{(iii)}$] The branch locus of $f$ is the divisor $D'$ such that $\mathcal{O}(D')=\mathcal{L} \otimes \mathcal{M}$. \end{itemize} \end{prop} \proof $(i)$ and $(ii)$ are contained in \cite[Proposition 7.1]{M85}. Recall that, by \cite[Theorem 2.1]{Tan02}, a triple cover is Galois if and only if it is totally ramified over its branch locus. Moreover, in this case the branch locus is exactly $D'=D/2=B+C$ \cite[Theorem 1.3 (2)]{Tan02}.\endproof \begin{say}\label{say: Galois enigespance} Since a Galois triple cover is cyclic, one can compare the previous result with the standard theory of cyclic covers, see e.g. \cite[Chapter I.17]{BHPV}. Both the line bundle $\mathcal{M}$ of Proposition \ref{prop_MirandaGalois} and $\mathcal{L}^2$ correspond to the same eigenspace of $f_*\mathcal{O}_X$, the one relative to the eigenvalue $\xi^2$. Nevertheless, they can differ in the Picard group by a 3-torsion element and integer multiples of divisors supported on the codimension 1 subvarieties in the branch locus, see also \cite[Section 1.4]{Tan02}. A similar situation must occur for $\mathcal{M}^2$ and $\mathcal{L}$.
Vice versa, a Tschirnhausen bundle $\mathscr{E}$ and a section $\eta\in H^0(Y,S^3\mathscr{E}^{\vee}\otimes \bigwedge^2\mathscr{E})$ determine a triple cover, and to see whether it is Galois one has to check that $\mathcal{O}_Y\oplus\mathscr{E}$ is a representation of the group $\mathbb{Z}/3\mathbb{Z}$. In particular $\mathscr{E}$ must split according to the two characters $\xi$ and $\xi^2$. Therefore $\mathscr{E}=\mathcal{L}^{-1}\oplus\mathcal{M}^{-1}$, where $\mathcal{L}^2$ differs from $\mathcal{M}$ by a 3-torsion element and integer multiples of divisors supported on the codimension 1 subvarieties in the branch locus, and similarly $\mathcal{M}^2$ differs from $\mathcal{L}$. The choice of $\eta$ determines the cover uniquely. \end{say} \begin{say}\label{say_GTD} Finally, we call a \emph{Galois triple cover data} over $Y$ a pair of line bundles $\mathcal{L}$ and $\mathcal{M}$ on $Y$ and two sections \[ b \in H^0(\mathcal{L}^2 \otimes \mathcal{M}^{-1}), \quad c \in H^0(\mathcal{L}^{-1} \otimes \mathcal{M}^2). \] To give a Galois triple cover data we will use alternatively the quadruple $(b,c,\mathcal{L},\mathcal{M})$, or $(B,C,\mathcal{L},\mathcal{M})$, or even $(B,C,L,M)$, with the obvious meaning: the different cases of the letters give the different incarnations of the objects treated, once a section, once a divisor and once a sheaf. \end{say} \begin{say} We assume that $X$ is normal and denote by $X'$ the \emph{minimal resolution} of the singularities of $X$. We observe that in general $X'$ coincides neither with the minimal model $X^{\circ}$ nor with the canonical resolution $\widetilde{X}$. One can apply neither the formulae of Proposition \ref{prop.invariants} nor \eqref{eq1}, \eqref{eq2}, \eqref{eq3} and \eqref{eq4} if $X$ is singular, and in particular, for the latter, if the branch locus is singular.
We would like to define a class of singularities of $f:X\rightarrow Y$ such that the formulae of Proposition \ref{prop.invariants} give the invariants of $X'$ (instead of those of $X$). We follow \cite[Definition 1.5]{PP13}. \end{say} \begin{definition} \label{def.neg.sing} Let $f\colon X \rightarrow Y$ be a triple cover of a smooth algebraic surface $Y$, with Tschirnhausen bundle $\mathscr{E}$. We say that $X$ has only \emph{negligible} $($or \emph{non essential}$)$ singularities if the invariants of the minimal resolution $X'$ are given by the formulae in Proposition \emph{\ref{prop.invariants}}. We also call \emph{negligible} singularities the corresponding singularities of the branch locus. \end{definition} \begin{proposition}\emph{\cite[Examples 1.6 and 1.8]{PP13}}\label{prop: negligible} Let $f\colon X \rightarrow Y$ be a triple cover of a smooth algebraic surface $Y$, with Tschirnhausen bundle $\mathscr{E}$. If the singularities of $X$ are only of type $\frac{1}{3}(1,1)$ and $\frac{1}{3}(1,2)$, then $X$ has only negligible singularities. \end{proposition} \section{Triple covers of K3 surfaces: the Galois Case}\label{sec: the Galois case} From this section onwards, we shall always take as the base of a triple cover a K3 surface $S$. Unless otherwise stated, $f:X\rightarrow S$ is a triple cover such that $X$ is a normal connected surface. \subsection{Galois triple cover of K3 surface} Let $S$ be a K3 surface and $f:X\rightarrow S$ a Galois triple cover of $S$ with branch locus $\coprod_{i=1}^n D_i$, where the $D_i$ are (possibly singular and reducible) curves; thus the $D_i$'s are the $n$ connected components of the branch locus. Requiring that $X$ be normal implies that the $D_i$'s are reduced. Up to reordering the components, we can always assume that $D_1^2\geq D_i^2$ for every $i=1,\ldots, n$.
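For later use, we record the specialization of the formulae of Proposition \ref{prop.invariants} to a K3 base, where $\chi(\mathcal{O}_S)=2$, $K_S=0$ and $e(S)=24$; for a smooth cover $X$ they become
\[
\chi(\mathcal{O}_X)=6+\tfrac{1}{2}c_1^2(\mathscr{E})-c_2(\mathscr{E}),\qquad
K_X^2=2c_1^2(\mathscr{E})-3c_2(\mathscr{E}),\qquad
e(X)=72+4c_1^2(\mathscr{E})-9c_2(\mathscr{E}),
\]
and one checks that Noether's formula $12\chi(\mathcal{O}_X)=K_X^2+e(X)$ holds. In the Galois case $\mathscr{E}=\mathcal{L}^{-1}\oplus\mathcal{M}^{-1}$, so $c_1^2(\mathscr{E})=(L+M)^2$ and $c_2(\mathscr{E})=LM$, which recovers \eqref{eq1} and, for a smooth branch curve $B+C$, also \eqref{eq2} with $K_Y=0$.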
Since $B+C$ is the branch locus (with the same notation as in Paragraph \ref{say_tripleGalois}), there exist curves $B_i$ and $C_i$ (not necessarily connected) such that $D_i=B_i+C_i$ and $B=\sum_i B_i$, $C=\sum_iC_i$. \begin{lemma}\label{lemma: intersections} Let $(B, C, L,M)$ be the Galois triple cover data of a Galois triple cover $X\rightarrow S$. Let $B=\sum_{i=1}^nB_i$, $C=\sum_{i=1}^nC_i$ and $D_i=B_i+C_i$. Then:\begin{itemize}\item $B_iB_j=C_iC_j=B_iC_j=0$ if $i\neq j$; \item $B_i^2\equiv B_iC_i\equiv C_i^2\equiv_3 D_i^2$. \end{itemize}\end{lemma} \proof Since $B_i$ (resp. $C_i$) is contained in one connected component of the branch locus and $B_j$ (resp. $C_j$) in another one, we have $B_iB_j=C_iC_j=B_iC_j=0$ if $i\neq j$. Moreover, the last part of the statement follows directly from the conditions (not all independent): $B_iL=B_i(2B+C)/3=(2B_i^2+B_iC_i)/3\in \mathbb{Z}$. Analogously $C_iL=C_i(2B+C)/3\in \mathbb{Z}$, $L^2\in2\mathbb{Z}$, $B_iM=B_i(B+2C)/3\in \mathbb{Z}$, $C_iM=C_i(B+2C)/3\in \mathbb{Z}$, $M^2\in2\mathbb{Z}$.\endproof \begin{proposition}\label{prop: smooth Galois triple cover, branch locus} If $f:X\rightarrow S$ is a smooth Galois triple cover, then the branch locus is smooth. In particular: \begin{itemize} \item all the $D_i$'s are irreducible smooth curves of positive genus; \item if moreover $D_1^2>0$, then $n=1$ (i.e. the branch locus is $D_1$) and there exists a divisor $H\in Pic(S)$ such that $D=3H$; \item if at least one $D_i$ has genus 1, i.e. $D_i^2=0$, then all the $D_i$'s are smooth curves of genus 1. In particular $\varphi_{|D_1|}:S\rightarrow \mathbb{P}^1$ is an elliptic fibration, all the $D_i$'s are smooth fibers and $f:X\rightarrow S$ is obtained by a base change of order 3 branched on the $n$ smooth fibers $D_i$.
\end{itemize} \end{proposition} \proof The triple cover $f:X\rightarrow S$ is smooth if and only if the branch locus is smooth (see Paragraph \ref{say_tripleGalois}), thus each $D_i$ is a smooth irreducible curve. In particular either $D_i=B_i$ or $D_i=C_i$. In both cases one obtains $D_i^2\equiv_3 0$, by Lemma \ref{lemma: intersections}. If $D_i$ were a smooth irreducible rational curve, then $D_i^2=-2\not\equiv_3 0$, which is not admissible. So for every $D_i$ we have $g(D_i)\geq 1$. If $D_1^2>0$, then, by the Hodge index theorem, $D_i^2<0$ for each $i>1$, which implies that $D_i$ is a rational curve for each $i>1$, contradicting the first assertion. Therefore, if $D_1^2>0$, there are no other components in the branch locus. This implies that $B=D_1$ and $C=0$ (or vice versa). So $L=D_1/3\in Pic(S)$, i.e. there exists a divisor $H$ such that $3H\equiv D_1$. If $D_1^2=0$ and $D_1$ is a smooth irreducible curve, then it is a genus 1 curve on $S$ and $\varphi_{|D_1|}:S\rightarrow\mathbb{P}^1$ is a genus 1 fibration. In particular, any $D_j$ orthogonal to $D_1$ is contained in a fiber of $\varphi_{|D_1|}$ and thus it can be either a rational curve (a component of a reducible fiber) or a genus 1 curve. Since there are no rational curves contained in the branch locus, we conclude that all the $D_i$'s are smooth fibers of the same fibration and that the triple cover $X\rightarrow S$ is branched over smooth fibers of the fibration $S\rightarrow \mathbb{P}^1$. This induces a genus 1 fibration $X\rightarrow C$, where $C$ is a smooth curve, such that there is a $3:1$ map $g:C\rightarrow \mathbb{P}^1$ and $X\simeq S\times_{\mathbb{P}^1}C$. \endproof Let us now consider the case of a singular branch locus: in order to construct $X$ we first consider its canonical desingularization $\widetilde{X}$. So we blow up $S$ in such a way that the strict transform of the branch locus becomes smooth and then we take the triple cover (see Proposition \ref{prop.can.ris}).
The surface $X$ is a contraction of $\widetilde{X}$. This gives information both on the singularities of $X$ and on the construction of a smooth model of it. So we now perform a local analysis near the singularities of the branch locus. To simplify the treatment we work locally around a singular point $P$ and we assume it is the unique singularity of the branch locus. \begin{say}{\it Singularities of the branch locus of type 1.}\label{say: singu type 1} Let $f:X\rightarrow S$ be a triple cover. Let $W$ and $V$ be two curves on $S$, meeting transversally in the point $P$. Let $W$ and $V$ be contained in the branch locus, and assume that $P$ is an ordinary double point of the branch locus. Let us assume that locally the equation of the branch locus of the triple cover near $P=(0,0)$ is $xy$. We blow up $P$ obtaining $\beta_1:S_1\rightarrow S$, which introduces an exceptional divisor $E_P$. We denote by $W_1$ and $V_1$ the strict transforms of $W$ and $V$ with respect to $\beta_1$. The divisor $E_P$ appears with multiplicity 2, so it is still contained in the branch locus of the triple cover and it intersects $W_1$ and $V_1$ in two points $R$ and $Q$. We further blow up $R$ and $Q$, obtaining the map $\beta_2:S_2\rightarrow S_1$, the exceptional divisors $E_R$ and $E_Q$ and the strict transforms $W_2$, $V_2$ and $\widetilde{E_P}$ of $W_1$, $V_1$ and $E_P$ respectively. Setting $w=W^2$ and $v=V^2$, the intersection properties on $S_2$ are the following: $E_R^2=E_Q^2=-1$, $W_2^2= w-2$, $V_2^2=v-2$, $\widetilde{E_P}^2=-3$, $E_RW_2=E_QV_2=E_R\widetilde{E_P}=E_Q\widetilde{E_P}=1$; the other intersections are trivial. The triple cover $f_2:X_2\rightarrow S_2$, induced by the one of $S$, is branched on $W_2$, $V_2$ and $\widetilde{E_P}$, so the branch locus is smooth and $X_2$ is the canonical resolution of $f:X\rightarrow S$.
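These intersection numbers determine the self-intersections on the triple cover via a standard pull-back computation: if $Z\subset S_2$ is an irreducible curve contained in the branch divisor, then $f_2^*Z=3\,f_2^{-1}(Z)$, while if $Z$ is not contained in the branch divisor and $f_2^{-1}(Z)$ is irreducible, then $f_2^*Z=f_2^{-1}(Z)$. Since $\left(f_2^*Z\right)^2=3Z^2$, one obtains $$\left(f_2^{-1}(Z)\right)^2=\frac{Z^2}{3}\quad\mbox{in the first case},\qquad \left(f_2^{-1}(Z)\right)^2=3Z^2\quad\mbox{in the second case}.$$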
The self-intersections of the inverse images of some curves on $S_2$ are the following: $$\begin{array} {ll}\left(f_2^{-1}(E_R)\right)^2=\left(f_2^{-1}(E_Q)\right)^2=-3,& \left(f_2^{-1}(W_2)\right)^2=(W^2-2)/3,\\ \left(f_2^{-1}(V_2)\right)^2=(V^2-2)/3,& \left(f_2^{-1}(\widetilde{E_P})\right)^2=-1.\end{array}$$ To get $X$ we contract $f_2^{-1}(E_R)$, $f_2^{-1}(E_Q)$ and $f_2^{-1}(\widetilde{E_P})$. Since $(f_2^{-1}(\widetilde{E_P}))^2=-1$, the contraction $\gamma_2:X_2\rightarrow X_1$ of this curve gives the minimal resolution $X'=X_1$ of $X$. Moreover, $X_2$ coincides with the canonical resolution $\widetilde{X}$. Then one contracts the curves $\gamma_2(f_2^{-1}(E_R))$ and $\gamma_2(f_2^{-1}(E_Q))$, which are $(-2)$-curves meeting in a point (i.e.\ a Hirzebruch--Jung string of type $\frac{1}{3}(1,2)$). The contraction $\gamma_1:X_1\rightarrow X$ of these two curves produces the surface $X$ (triple cover of $S$). The image of these two curves under $\gamma_1$ is the point $f^{-1}(P)$, and thus $f^{-1}(P)$ is a singular point on $X$ of type $A_2$, i.e. of type $\frac{1}{3}(1,2)$. We have the following diagram (see also Figure 1) $$\xymatrix{X\ar[d]_f&X_1\ar[l]^{\gamma_1}&X_2\ar[l]^{\gamma_2}\ar[d]_{f_2}\\ S&S_1\ar[l]^{\beta_1}&S_2\ar[l]^{\beta_2}}.$$ \begin{center} \includegraphics[width=8cm]{SingT1} Figure 1 \end{center} \end{say} \begin{say} {\it Singularities of the branch locus of type 2.}\label{say: sing type 2} Let $f:X\rightarrow S$ be a triple cover. Let $W$ and $V$ be two curves on $S$, meeting transversally in the point $P$. Let $W$ and $V$ be contained in the branch locus, and assume that $P$ is an ordinary double point of the branch locus. Let us assume that locally the equation of the branch locus of the triple cover near $P=(0,0)$ is $xy^2$. We blow up $P$ obtaining $\beta_1:S_1\rightarrow S$, which introduces an exceptional divisor $E_P$. We denote by $W_1$ and $V_1$ the strict transforms of $W$ and $V$ with respect to $\beta_1$.
The divisor $E_P$ appears with multiplicity 3, so it is not contained in the branch locus of the triple cover. The triple cover $f_1:X_1\rightarrow S_1$, induced by the one of $S$, is branched on $W_1\cup V_1$ and, since this branch locus is smooth, $X_1$ is the canonical resolution $\widetilde{X}$ of $X$. The intersection properties on $S_1$ are the following: $W_1^2=W^2-1$, $V_1^2=V^2-1$, $E_P^2=-1$, $W_1E_{P}=V_1E_P=1$; the other intersections are trivial. The self-intersections of the inverse images of some curves on $S_1$ are the following: $$\left(f_1^{-1}(E_P)\right)^2=-3,\ \left(f_1^{-1}(W_1)\right)^2=(W^2-1)/3,\ \left(f_1^{-1}(V_1)\right)^2=(V^2-1)/3.$$ The contraction $\gamma_1:X_1\rightarrow X$ of $f_1^{-1}(E_P)$ produces the surface $X$ (triple cover of $S$). The image of this curve under $\gamma_1$ is the point $f^{-1}(P)$, and thus $f^{-1}(P)$ is a singular point on $X$ of type $\frac{1}{3}(1,1)$. In particular $X_1$ is also the minimal resolution $X'$ of $X$, which coincides in this case with $\widetilde{X}$. We have the following diagram $$\xymatrix{X\ar[d]_f&X_1\ar[l]^{\gamma_1}\ar[d]_{f_1}\\ S&S_1\ar[l]^{\beta_1}.}$$ \end{say} We can summarize the above discussion in the following proposition. \begin{proposition} Let $f:X\rightarrow S$ be a Galois triple cover. Let $P$ be a singular point of the branch locus. \begin{itemize} \item If the local equation of the branch locus near $P$ is $xy$, then $f^{-1}(P)$ is a singularity of $X$ of type $\frac{1}{3}(1,2)$; \item If the local equation of the branch locus near $P$ is $xy^2$, then $f^{-1}(P)$ is a singularity of $X$ of type $\frac{1}{3}(1,1)$. \end{itemize} \end{proposition} In both the previous cases $f^{-1}(P)$ is a negligible singularity, by Proposition \ref{prop: negligible}. \begin{remark}{\rm Let $f:X\rightarrow S$ be a triple cover with Galois cover data $(B,C,L,M)$. Let $V$ and $W$ be two irreducible components of the branch locus, meeting in a point $P$.
If $P$ is a singularity of type 1, then $V$ and $W$ are both components either of $B$ or of $C$. If $P$ is a point of type 2 then, w.l.o.g., $V$ is a component of $B$ and $W$ is a component of $C$. We observe that the ordinary nodes of a curve in the branch locus are singularities of type 1. } \end{remark} By considering the contraction $\gamma_1$ described in \ref{say: singu type 1} and \ref{say: sing type 2} one obtains the following. \begin{corollary} Let $X\rightarrow S$ be a Galois triple cover such that the branch locus contains $c$ singularities of type $1$ and $b$ singularities of type 2 and no other singular points. The Picard number of the minimal resolution $X'$ of $X$ is $\rho(X')=\rho(X)+2c+b$; that of the canonical resolution $\widetilde{X}$ is $\rho(\widetilde{X})= \rho(X)+3c+b$ (see also Figure 1). \end{corollary} In the following lemma and examples we show that singularities both of type 1 and of type 2 appear in the branch loci of triple covers of K3 surfaces. \begin{lemma}\label{lemma: type of sing} Let $N_1$ and $N_2$ be two rational curves contained in the branch locus of a Galois triple cover $X\rightarrow S$ and such that any other curve contained in the branch locus is disjoint from them. Let us assume that $N_1N_2=k$. Then:\begin{itemize}\item $k\not \equiv_3 0$; \item if $k\equiv_3 1$, the points of intersection of $N_1$ and $N_2$ are of type 2; \item if $k\equiv_3 2$, the points of intersection of $N_1$ and $N_2$ are of type 1.\end{itemize}\end{lemma} \proof Since $N_1$ and $N_2$ have trivial intersection with any other component of the branch locus, the condition $LN_i\in \mathbb{Z}$ restricts to the condition $\mbox{$N_i(\alpha N_1+\beta N_2)/3\in \mathbb{Z}$}$, where $\alpha\in\{1,2\}$, $\beta\in\{1,2\}$ and $(\alpha N_1+\beta N_2)/3$ is the summand of $L$ in which $N_1$ and $N_2$ appear with non-trivial coefficients. Up to possibly replacing $L$ with $M$, we can assume that $\alpha=1$.
So the condition is now $N_i(N_1+\beta N_2)/3\in \mathbb{Z}$, which implies $-2+\beta k\equiv_3 0$ and $k-2\beta\equiv_3 0$. These conditions imply that $k\not\equiv_3 0$, that $k\equiv_3 1$ if and only if $\beta=2$ and that $k\equiv_3 2$ if and only if $\beta=1$.\endproof \begin{example} Let us consider an elliptic fibration $\mathcal{E}:S\rightarrow\mathbb{P}^1$ with 2 fibers of type $I_2$ over the points $p_1$, $p_2$. The base change of order 3 $f:\mathbb{P}^1\rightarrow\mathbb{P}^1$ branched over the points $p_1$ and $p_2$ induces a triple cover of $S$ whose branch locus contains 4 singularities of type 1. Indeed the branch curves are rational curves meeting in 2 points, and then the singularities are of type 1 by Lemma \ref{lemma: type of sing}. \end{example} The following example contains singularities of type 2 and moreover shows that the minimal resolution $X'$ of $X$ constructed above is not necessarily a minimal model: \begin{example} Let $f:S\rightarrow\mathbb{P}^1$ be an elliptic fibration with 6 fibers of type $I_3$ and a 3-torsion section. Generically $\rho(S)=14$, see \cite[Section 4.1]{GS07}. Let $\Theta_i^{(j)}$, $i=0,1,2$ and $j=1,\ldots, 6$, be the irreducible components of the reducible fibers. There exists a $3:1$ cover $X\rightarrow S$ branched over $\Theta_1^{(j)}$, $\Theta_2^{(j)}$ for $j=1,\ldots, 6$. The minimal model $X^{\circ}$ is a K3 surface, see \cite[Proposition 4.1]{G13}. In particular this triple cover is branched over 6 configurations of type $A_2$ of rational curves and the branch data are $B=\sum_{j=1}^6\Theta_1^{(j)}$, $C=\sum_{j=1}^6\Theta_2^{(j)}$, $L=(2B+C)/3$ and $M=(B+2C)/3$. The surface $X^{\circ}$ has Picard number equal to 14, see \cite[Proposition 4.1]{GS07}, but the minimal resolution $X'$ of $X$ has Picard number $\rho(X')\geq\rho(S)+6$. In particular this implies that $X^{\circ}\neq X'$, i.e. the minimal resolution of $X$ is not minimal.
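The estimate on $\rho(X')$ can be justified as follows: the pull-backs of the classes of $NS(S)$, together with the $6$ curves resolving the singular points of $X$ (one for each $A_2$-configuration, whose node is a singularity of type 2), give independent classes in $NS(X')$, so that $$\rho(X')\geq \rho(S)+6=20>14=\rho(X^{\circ}).$$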
\end{example} The minimal resolution of $X$ is not a minimal model whenever there is a configuration of type $A_2$ in the branch locus of $X\rightarrow S$: indeed in this case, for each such configuration, one has to contract two $(-1)$-curves and then a third curve, which becomes a $(-1)$-curve after the first contractions, as shown in Figure 2. \begin{center} \includegraphics[width=8cm]{A2} Figure 2 \end{center} \begin{rem}\label{rem: contraction of curves singularities of type 2}{\rm Let $f:X\rightarrow S$ be a triple cover, $X'$ the minimal resolution of $X$ and $X^{\circ}$ the smooth minimal model of $X$. By the above construction one obtains that if a singularity of type 2 lies on exactly one rational curve which contains no other singularities of the branch locus, then its inverse image in $X'$ is contractible and so $K_{X^{\circ}}^2\geq K_{X'}^2+1$. If the singularity of type 2 lies at the intersection of two rational curves which do not contain other singularities of the branch locus, then in $X'$ there are three contractible curves (this is the case of Figure 2) and so $K_{X^{\circ}}^2\geq K_{X'}^2+3$. We refer to the last situation as an $A_2$-configuration in the branch locus.}\end{rem} In Theorem \ref{say: other singularities in the branch locus} we will analyze other types of singularities which can appear in the branch locus of a triple cover. \begin{proposition}\label{prop: rito numbers} Let $X$ be a normal variety which is a Galois triple cover of $S$, whose branch locus has $n$ connected components, i.e., the $n$ connected (possibly reducible) curves $D_1,\ldots, D_n$. Then \begin{enumerate} \item Each curve $D_i$ is reduced; \item At most one $D_i$ is a curve with positive self-intersection. \item If the singularities of $X$ are negligible, then, denoting by $(B,C,L,M)$ the Galois triple cover data, the invariants of the minimal resolution $X'$ are \[ \begin{split} \chi(\mathcal{O}_{X'})= & \, 6+\frac{1}{2}(L^2+M^2),\\ K^2_{X'}= & \, 2L^2+2M^2+LM,\\ e(X')=& \, 72+4(L^2+M^2)-LM,\ h^{1,0}(X')=h^1(L)+h^1(M).
\end{split} \] \end{enumerate} \end{proposition} \proof Condition $(1)$ is needed to obtain a normal cover $X\rightarrow S$, see Paragraph \ref{say_tripleGalois}. Condition $(2)$ follows from the Hodge Index Theorem. Since the singularities are negligible, the numerical invariants of $X'$ can be calculated using the Tschirnhausen bundle $\mathscr{E}=\mathcal{L}^{-1} \oplus \mathcal{M}^{-1}=\mathcal{O}_S(-L) \oplus\mathcal{O}_S(-M)$. It has the following Chern classes \[ c_1(\mathscr{E})=-L-M, \quad c_2(\mathscr{E})=L\cdot M. \] To have a connected covering we must have: \[ 0=h^0(\mathscr{E})=h^0(-L)+h^0(-M) \, \Rightarrow \, h^0(-L)=h^0(-M)=0. \] In addition, by the Hirzebruch--Riemann--Roch theorem we have \begin{equation}\label{eq_chiT}\begin{split} \chi(\mathscr{E})=h^2(\mathscr{E})-h^1(\mathscr{E})=h^0(L)+h^0(M)-h^1(\mathscr{E})&=\\ =\int_S \textrm{Td}(S) \cdot \textrm{ch}(\mathscr{E}) = \int_S(1,0,2)\cdot\left(2,-L-M,\frac{L^2+M^2}{2}\right)&=\frac{L^2+M^2}{2}+4. \end{split} \end{equation} So by Proposition \ref{prop.invariants} (ii) we have $\chi(\mathcal{O}_{X'})=6+\frac{1}{2}(L^2+M^2)$, by Proposition \ref{prop.invariants} (iii) we have $K^2_{X'}=2L^2+2M^2+LM$ and finally, by Proposition \ref{prop.invariants} (iv), we have $e(X')=72+4(L^2+M^2)-LM.$ By equation \eqref{eq_chiT} and the genus formula: \begin{equation} \begin{split} h^1(\mathscr{E})=h^0(L)+h^0(M)-\chi(\mathscr{E})=\left(\chi(L)+h^1(L)\right)+\left(\chi(M)+h^1(M)\right)-\chi(\mathscr{E})= \\ = \left(\frac{L^2}{2}+2+h^1(L)\right)+\left(\frac{M^2}{2}+2+h^1(M)\right)-\frac{L^2+M^2}{2}-4=h^1(L)+h^1(M), \end{split} \end{equation} which allows us to compute $h^{1,0}(X')$, by Proposition \ref{prop.invariants} (i).
\endproof \begin{rem} {\rm If the branch locus of the cover $f:X\rightarrow S$ is smooth, then $BC=0$, and so the formulae in Proposition \ref{prop: rito numbers} and in \eqref{eq2}, which compute the self-intersection of the canonical divisor of $X$, coincide and give $K_X^2=\frac{4}{3}(B^2+C^2)$.} \end{rem} \begin{lemma}\label{lemma: -2 curves} Let us consider a connected reducible reduced curve, $C$, on a K3 surface $S$, whose irreducible components are rational curves (i.e. negative curves). If $C^2<0$, then $C^2=-2$. In particular, a reducible reduced negative curve $D$ such that $D^2<-2$ has more than one connected component. \end{lemma} \proof Let us call $C_i$ the irreducible components of $C$. Then $C=\sum_{i=1}^n C_i$ and \[ C^2=-2n+2\sum_{i=1}^n\sum_{j=i+1}^{n}C_iC_j. \] Since $C$ is connected, each curve $C_i$ intersects at least one other component, so one has $\sum_{i=1}^n\sum_{j=i+1}^{n}C_iC_j\geq n-1$. Hence $C^2\geq -2n+2(n-1)=-2$. If $C^2<0$, then $C^2=-2$.\endproof \begin{lemma} Let $S$ be a K3 surface, and $\Lambda$ a sublattice of $NS(S)$. If $\Lambda$ is generated by irreducible curves and is degenerate, then all the curves represented by classes in $\Lambda$ are either fibers or components of fibers of a genus 1 fibration on $S$. \end{lemma} \proof Since $\Lambda$ is degenerate, there exists $v\in\Lambda$ such that $v^2=0$ and $vw=0$ for every $w\in \Lambda$. By the Riemann--Roch Theorem either $v$ or $-v$ is effective, so we assume that $v$ is effective and hence $\varphi_{|v|}$ is a fibration whose smooth fibers are genus 1 curves, cf. e.g. \cite[Proof of Lemma 2.1]{Ko}. Since any other class $w$ in $\Lambda$ has trivial intersection with $v$, if a curve is represented by $w$, it is contracted by $\varphi_{|v|}$ and hence it is contained in a fiber of the fibration $\varphi_{|v|}$. \endproof Let $\Lambda_{D_i}$ be the lattice generated by the irreducible components of an effective divisor $D_i$.
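For instance, if $D_i=B_i+C_i$ is an $A_2$-configuration, i.e. $B_i$ and $C_i$ are two smooth rational curves meeting transversally in one point, then $B_i^2=C_i^2=-2$ and $B_iC_i=1$, so that $$\Lambda_{D_i}=\langle B_i,C_i\rangle\simeq A_2(-1)\quad\mbox{with Gram matrix}\quad \begin{pmatrix} -2&1\\ 1&-2 \end{pmatrix}.$$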
\begin{theorem}\label{theorem: three cases giusto} Let $f\colon X \rightarrow S$ be a normal Galois triple cover of a K3 surface, whose branch locus has $n \geq 1$ connected components, i.e., the $n$ connected possibly reducible curves $D_1,\ldots, D_n$. \begin{itemize} \item[1)] If $\kappa(X)=0$, then all the lattices $\Lambda_{D_i}$ are $A_2(-1)$ (in particular negative definite). \item[2)] If $\kappa(X)=2$, then there exists a lattice $\Lambda_{D_i}$ such that $\mbox{$\rm{sgn}(\Lambda_{D_i})=(1,{\rm rank}(\Lambda_{D_i})-1)$}$ and all the other lattices $\Lambda_{D_j}$ are negative definite. \item[3)] If $\kappa(X)=1$, then there are no lattices $\Lambda_{D_i}$ which are indefinite and there exists at least one $\Lambda_{D_i}$ which is degenerate. \end{itemize} \end{theorem} \proof $1)$ If $\kappa(X)=0$, each component of $D_i$ is a negative curve, and thus by adjunction a $(-2)$-curve, since the intersection with the canonical bundle is trivial for every curve on $X$ (if $X$ is a K3 surface see also Lemma \ref{lemma: -2 curves}). Therefore, the lattice $\Lambda_{D_i}$ is a negative definite root lattice, and thus $\Lambda_{D_i}(-1)$ is an A-D-E lattice, \cite[Lemma 2.12, Chapter I]{BHPV}. By \cite[Chapter I, Section 17]{BHPV} the existence of a triple cover branched in $D_i$ implies that a linear combination of the components of the $D_i$'s with non-integer coefficients in $\frac{1}{3}\mathbb{Z}$ is contained in the Picard group of $S$. So the discriminant group of $\Lambda_{D_i}$ contains $\mathbb{Z}/3\mathbb{Z}$, which implies that either $\Lambda_{D_i}\simeq A_{3k-1}(-1)$ with $k\geq 1$ or $\Lambda_{D_i}\simeq E_6(-1)$. In both cases $\Lambda_{D_i}^{\vee}/\Lambda_{D_i}$ is a cyclic group, generated by a class $l$ (supported on the components of $D_i$). The generators of the subgroups $\mathbb{Z}/3\mathbb{Z}$ in $\Lambda_{D_i}^{\vee}/\Lambda_{D_i}$ are multiples of $l$ and they are always supported on disjoint $A_2$-configurations of curves.
So each $D_i$ is the sum $B_i+C_i$ where $B_i$ and $C_i$ are two rational curves meeting in a point, i.e. $\langle B_i,C_i\rangle\simeq A_2(-1)$. $2)$ If $\kappa(X)=2$, then there exists a curve $C\in |R|$, where $R$ is the ramification divisor of $f$, such that $g(C)>1$ (see Proposition \ref{prop_kodairaDim}) and $C^2\geq 0$. By the projection formula $\left(f(C)\right)^2\geq 0$ and so $g(f(C))\geq 1$. There exists a (possibly reducible) branch curve $\Gamma$ linearly equivalent to $f(C)$ and contained in $D_i$. If $g(f(C))>1$, then $\Gamma^2>0$ and hence $\Lambda_{D_i}$ satisfies the hypothesis, by the Hodge index theorem. If $g(f(C))=1$, $f(C)$ defines a genus 1 fibration $\mathcal{E}$ on $S$. Moreover $f(C)$ intersects the branch locus, i.e. $f(C)\cap(\cup_iD_i)\neq \emptyset$, otherwise $g(C)=1$. This implies that $\Gamma$ is the union of a fiber of $\mathcal{E}$ and some horizontal curves. In particular $\Gamma$, and hence $\Lambda_{D_i}$, contains the class $F$ of the fiber of the fibration $\mathcal{E}$ and at least one horizontal curve $\Delta$. The class $\alpha F+\delta\Delta$ is contained in $\Lambda_{D_i}$ and it has positive self-intersection for $\alpha$ sufficiently large. By the Hodge index theorem the signatures of all the lattices $\Lambda_{D_j}$ are the required ones. $3)$ If $\kappa(X)=1$, then there exists a curve $C\in |R|$, where $R$ is the ramification divisor of $f$, such that $g(C)=1$ and $C$ defines a genus 1 fibration on $X$. Hence $f(C)$ defines a genus 1 fibration on $S$. The general member of $|f(C)|$ is a genus 1 curve and $f(C)^2=0$. As above there exists a (possibly reducible) branch curve $\Gamma$ linearly equivalent to $f(C)$ and contained in $D_i$. Then $\Gamma\in |f(C)|$ and it is a fiber (irreducible or not), so it coincides with $D_i$ for a certain $i$ and $\Lambda_{D_i}$ is degenerate. All the other $D_j$'s are contained in (or coincide with) fibers, so no $\Lambda_{D_j}$ can contain a class with positive self-intersection.
\endproof \begin{corollary}\label{cor: giusto} Let $f\colon X \rightarrow S$ be a normal Galois triple cover of a K3 surface, whose branch locus has $n \geq 1$ connected components, i.e., the $n$ connected, possibly reducible, reduced curves $D_1,\ldots, D_n$. \begin{itemize} \item[1)] If all the $\Lambda_{D_i}$ are negative definite, then $\kappa(X)=0$, $n=6$ or $n=9$ and the minimal model of $X$ is either a K3 or an Abelian surface. \item[2)] If ${\rm sgn}(\Lambda_{D_1})=(1,{\rm rank}(\Lambda_{D_1})-1)$, then $\kappa(X)=2$, $\Lambda_{D_j}\simeq A_2(-1)$ for all $j\neq 1$ and $n\leq 10$. \item[3)] If $\Lambda_{D_1}$ is degenerate, then $\Lambda_{D_j}$ for $j\neq 1$ is either degenerate or negative definite and $\kappa(X)=1$. \end{itemize} \end{corollary} \proof The assumption that the curves $D_i$ are reduced guarantees that $X$ is normal. $(1)$ If all the $\Lambda_{D_i}$ are negative definite, we already showed in Theorem \ref{theorem: three cases giusto} that $\Lambda_{D_i}\simeq A_2(-1)$. The sets of $A_2$-configurations which are 3-divisible (and thus are in the branch locus of a triple cover) necessarily contain either 6 or 9 $A_2$-configurations \cite[Lemma 1]{B98}. In the first case the minimal model of the triple cover is a K3 surface, in the latter it is an Abelian surface; in particular $\kappa(X)=0$. $(2)$ If ${\rm sgn}(\Lambda_{D_1})=(1,{\rm rank}(\Lambda_{D_1})-1)$, then, by the Hodge index theorem, the lattices $\Lambda_{D_j}$ with $j\neq 1$ are negative definite. As in the proof of $(1)$ of Theorem \ref{theorem: three cases giusto}, this implies that $\Lambda_{D_j}$ coincides with $A_2(-1)$. The maximal number of disjoint $A_2$-configurations of rational curves on a K3 surface is 9, hence $n\leq 10$. To show that $\kappa(X)=2$, we observe that if $\kappa(X)=0$, then by Theorem \ref{theorem: three cases giusto} all $\Lambda_{D_i}$ should be negative definite, which contradicts the hypothesis on $\Lambda_{D_1}$.
If $\kappa(X)=1$, then by Theorem \ref{theorem: three cases giusto} at least one $\Lambda_{D_i}$ should be degenerate, which is again impossible. $(3)$ If $\Lambda_{D_1}$ is degenerate, then $D_1$ consists of a fiber of an elliptic fibration. So the $D_j$, $j\neq 1$, are contained in fibers and then $\Lambda_{D_j}$ cannot contain a positive class. In particular $\kappa(X)$ cannot be 2, because there is no indefinite lattice $\Lambda_{D_i}$, and cannot be 0, because not all the lattices $\Lambda_{D_i}$ are negative definite. \endproof We now consider the case in which one connected component $D_i$ of the branch locus is irreducible. The following corollary shows that if its self-intersection is non-negative, we can easily determine the Kodaira dimension of $X$. \begin{corollary}\label{cor: giusto irreducible} Let $D_i$ be an irreducible reduced curve in the branch locus of a Galois triple cover $f:X\rightarrow S$. If $D_i^2>0$ then $\kappa(X)=2$. If $D_i^2=0$ then $\kappa(X)=1$. If $D_i^2\geq 0$, then the minimal model $X^{\circ}$ of $X$ is obtained by contracting on $X'$ three curves for each $A_2$-configuration in the branch locus. \end{corollary} \proof The first part of the statement follows directly from Corollary \ref{cor: giusto}. Let $D_1$ be a smooth irreducible curve; then the singularities of the branch locus are of type 2 and they are the singular points of the $A_2$-configurations in the branch locus. In this case $X^{\circ}$ is obtained by contracting on $X'$ three curves for each $A_2$-configuration in the branch locus. Indeed by Remark \ref{rem: contraction of curves singularities of type 2} one has to contract at least 3 curves for each $A_2$-configuration, and we call the surface obtained by these contractions $X_{m}$. It remains to prove that $X_{m}$ and $X^{\circ}$ coincide, that is, that there are no other possible contractions to a smooth surface.
On $X_{m}$ there is an automorphism $\sigma_{m}$ of order 3, induced by the Galois $\mathbb{Z}/3\mathbb{Z}$-cover automorphism $\sigma$ on $X$. The surface $S_{m}:=X_{m}/\sigma_{m}$ is the singular surface obtained from $S$ by contracting all the $A_2$-configurations in the branch locus. If there were a $(-1)$-curve $E$ on $X_{m}$, then there would be two cases: either $E$ is disjoint from the ramification locus of $f_{m}:X_{m}\rightarrow S_{m}$, or $E$ meets it (without being contained in it). If $E$ is disjoint from the ramification locus, then there exists a $(-1)$-curve on $X$ which is mapped to $E$ (because we blow up and down points away from $E$). Hence $\sigma(E)\cap E$ is empty, therefore $\sigma(E)$ and $\sigma^2(E)$ are two other $(-1)$-curves on $X$. So $f(E)=f(\sigma(E))=f(\sigma^2(E))\subset S$ is a $(-1)$-curve. This is absurd because $S$ is a K3 surface. Hence $E$ meets the ramification locus $R_{m}$ and $\sigma_{m}(E)=E$: otherwise there would be two different $(-1)$-curves ($E$ and $\sigma_{m}(E)$) meeting in the point $E\cap R_{m}$, and this is impossible on a surface with non-negative Kodaira dimension. Moreover $\sigma_{m}$ is an automorphism of order 3 of the rational curve $E$, and then $E\cap R_{m}$ consists of two points. Let $\beta:X_{m}\rightarrow X^{\circ}$ be the contraction of $E$ and $\sigma^{\circ}$ the automorphism induced by $\sigma_{m}$ on $X^{\circ}$. This induces the contraction $S_{m}\rightarrow S^{\circ}$ of the curve $f_{m}(E)$. Since $E\cap R_{m}$ consists of two points, the contraction of $f_{m}(E)$ identifies two singular points on $S_{m}$ and introduces a singularity on the image of $D_1$. By construction the smooth $X^{\circ}$ is a triple cover of $S^{\circ}$, and thus the singularities of $S^{\circ}$ cannot be worse than $(\mathbb{C}^2,0)/(\mathbb{Z}/3\mathbb{Z})$, so two singularities of $S_{m}$ cannot be identified. Moreover, since there cannot be a singularity on the branch curve image of $D_1$, $S^{\circ}$ has to coincide with $S_{m}$.
We conclude that $X_{m}$ coincides with $X^{\circ}$.\endproof \section{Examples of Galois covers of K3 surfaces}\label{sec: examples Galois cover of K3 surfaces} By Theorem \ref{theorem: three cases giusto} and Corollary \ref{cor: giusto}, $\kappa(X)=0$ if and only if, for all the components $D_i$ of the branch locus, $\Lambda_{D_i}$ is negative definite. In this case the minimal model of $X$ is either a K3 surface or an Abelian surface and these cases are well known (see e.g. \cite{B98}). We now provide examples for the other cases. First we consider surfaces of general type: in Proposition \ref{cor: example general type, irreducible D1} and in Proposition \ref{prop: case (2) theorem- possibilities} we compute the invariants of $X^{\circ}$ if it is of general type and we make specific assumptions on the component $D_1$ of the branch locus such that $\Lambda_{D_1}$ is indefinite; in Corollary \ref{cor: pg=2 gen.type} we provide an example of a surface $X^{\circ}$ with $p_g=2$; in Theorems \ref{theo: cover S16} and \ref{theo: cover S15} we provide examples of $X^{\circ}$ with $q\neq 0$. Then we consider the case $\kappa(X^{\circ})=1$ and we classify the invariants of $X^{\circ}$ in Proposition \ref{prop: examples case 3 of proposition}. \subsection{The covering surface $X$ is of general type} The case $(2)$ of Theorem \ref{theorem: three cases giusto} is the most general one, but under some assumptions we are able to give a more detailed description of these covers. We first assume that we are in the hypotheses of Corollary \ref{cor: giusto irreducible}. \begin{prop}\label{cor: example general type, irreducible D1} Let $D_1$ be a connected component of the branch locus of a Galois triple cover $f:X\rightarrow S$, which is irreducible, reduced and of positive genus. Then $D_1^2=6d$ for an integer $d>0$, $d\equiv_3 n-1$ and, denoting by $k$ the integer such that $d=n-1+3k$, one has $k\geq -2$.
If $D_1$ is moreover smooth, then \begin{equation}\label{eq: inv Xm special case}\chi(X^{\circ})=5+n+5k,\ K_{X^{\circ}}^2=8n-8+24k, \ e(X^{\circ})=68+4n+36k.\end{equation} \end{prop} \proof By Corollary \ref{cor: giusto} each component $D_j$ with $j>1$ consists of two rational curves meeting in a point. We denote by $A_1^{j}$ and $A_2^{j}$ the two components of $D_j$. Since $D_1$ is irreducible, it is a component of $B$ (or equivalently of $C$). Then the data of the triple cover are $$B=D_1+\sum_{j=1}^{n-1}A_1^{j},\ \ C=\sum_{j=1}^{n-1}A_2^{j},$$ $$L=\frac{D_1+\sum_{j=1}^{n-1}\left(A_1^{j}+2A_2^{j}\right)}{3},\ \ M=\frac{2D_1+\sum_{j=1}^{n-1}\left(2A_1^{j}+A_2^{j}\right)}{3}.$$ From $LD_1\in \mathbb{Z}$ and $L^2\in 2\mathbb{Z}$ it follows that $D_1^2\equiv_3 0$ and $d\equiv_3 n-1$. Since $n\leq 10$ and $d>0$, $9+3k>0$, so $k\geq -2$. The formulae \eqref{eq: inv Xm special case} follow from Proposition \ref{prop: rito numbers}, since $L^2=2k$, $M^2=2n+8k-2$ and $LM=n-1+4k$ and $X^{\circ}$ is obtained from $X'$ by contracting $3(n-1)$ curves. We conclude by Corollary \ref{cor: giusto irreducible}.\endproof \begin{say} In the situation of the previous proposition, if $L^2\geq -2$, there exists a member of the linear system $|D_1|$ which splits into the union of a curve $G$ and the curves $A_i^{j}$, where $G\simeq \frac{D_1-\sum_{j=1}^{n-1}\left(A_1^{j}+2A_2^{j}\right)}{3}$ and hence $D_1\simeq 3G+\sum_{j=1}^{n-1}\left(A_1^{j}+2A_2^{j}\right)$. We observe that $GA_2^{(j)}=1$ and $GA_1^{(j)}=0$. Since $G^2=2k$, if $G$ is an irreducible and smooth curve, then $g(G)=k+1$. In particular $G$ is rational if $k=-1$. A limit case is the one with $k=-1$ and $n=4$.
It is not an example of case $(2)$ of Theorem \ref{theorem: three cases giusto}, because $D_1^2=0$, but it is still instructive, since the interpretation of the curves $G$, $A_i^{(j)}$ in this situation is well known: $D_1$ is a fiber of type $IV^*$ of an elliptic fibration and the curves $G$ and $A_i^{(j)}$ are its components. Similarly there is a member of $|2D_1|$ which splits into the union of a curve $F$ and the curves $A_i^{j}$, where $F\simeq \frac{2D_1-\sum_{j=1}^{n-1}\left(2A_1^{j}+A_2^{j}\right)}{3}$ and hence $2D_1\simeq 3F+\sum_{j=1}^{n-1}\left(2A_1^{j}+A_2^{j}\right)$. Since $F^2=2n+8k-2$, if $F$ is an irreducible and smooth curve, then $g(F)=n+4k$. In particular $F$ is rational if $n=-4k$ and elliptic if $n+4k=1$. \end{say} \begin{corollary}\label{cor: pg=2 gen.type} There exists a Galois triple cover $X$ of a K3 surface $S$ whose branch locus consists of a smooth curve of genus 4 and 7 $A_2$-configurations of rational curves such that, denoting by $X^{\circ}$ the minimal model of $X$, it holds $$\chi(X^{\circ})=3,\ \ q(X^{\circ})=0,\ \ p_g(X^{\circ})=2,\ \ K_{X^{\circ}}^2=8.$$ \end{corollary} \proof Let $S$ be a K3 surface with an elliptic fibration such that the reducible fibers are $I_2+6I_3$ and the Mordell--Weil group is $\mathbb{Z}/3\mathbb{Z}$. The existence of a K3 surface with such an elliptic fibration is guaranteed by \cite[Table 1 case 835]{Sh00}. We denote by $F$ the class of the fiber of this fibration, by $\mathcal{O}$ the class of the zero section, by $A_2^{(1)}$ the class of the irreducible component of the fiber $I_2$ which meets the section $\mathcal{O}$ and by $A_h^{(j)}$, $h=1,2$, $j=2,\ldots, 7$, the classes of the two irreducible components not meeting the zero section of the $j$-th reducible fiber, which is a fiber of type $I_3$.
We observe that the class of the 3-torsion section $P$, which generates the Mordell--Weil group, can be written in terms of the previous curves as $$P=2F+\mathcal{O}-\frac{1}{3}\sum_{j=2}^7\left(A_1^{(j)}+2A_2^{(j)}\right).$$ Moreover, we observe that there are 7 disjoint $A_2$-configurations on this surface, given by $\mathcal{O}\cup A_2^{(1)}$ and by $A_1^{(j)}\cup A_2^{(j)}$, $j=2,\ldots,7$. Let us consider the divisor $D_1:=3F+A_2^{(1)}+2\mathcal{O}$ and notice that $D_1^2=6$, $D_1F=2$, $D_1A_2^{(1)}=0$, $D_1\mathcal{O}=0$. One can check that $D_1$ is a big and nef divisor and hence in its linear system there is a smooth irreducible curve of genus 4, still denoted by $D_1$. We claim that there exists a triple cover of $S$ branched over $D_1$ and the 7 $A_2$-configurations above. Indeed, the divisor $$L:=\left(D_1+\mathcal{O}+\sum_{i=2}^7 A_1^{(i)}+2\sum_{j=1}^7 A_2^{(j)}\right)/3=3F+A_2^{(1)}+2\mathcal{O}-P$$ is contained in $NS(S)$ and so $$B:=D_1+\mathcal{O}+\sum_{i=2}^7 A_1^{(i)},\ \ C:=\sum_{j=1}^7 A_2^{(j)},\ \ L=(B+2C)/3,\ \ M=(2B+C)/3$$ form triple cover data on $S$. So there exists a triple cover $X\rightarrow S$ which satisfies the conditions of Proposition \ref{cor: example general type, irreducible D1} with $k=-2$, $n=8$, $d=1$, from which we deduce the properties of $X^{\circ}$.\endproof \begin{rem}{\rm If $S$ is a K3 surface and $p_g(X)=2$, the cover $f:X\rightarrow S$ induces a splitting of the Hodge structure on $T_X$ into a direct sum of two Hodge structures of K3-type (i.e. Hodge structures of weight two of type $(1,\star,1)$); indeed $$T_X=f^*(T_S)\oplus \left(f^*(T_S)\right)^{\perp}.$$ The Hodge structure of $T_X$ is of type $(2,\star,2)$ and the one of $f^*(T_S)$ is of type $(1,\star',1)$ since it is induced by the Hodge structure of the K3 surface $S$. Hence both $f^*(T_S)$ and $\left(f^*(T_S)\right)^{\perp}$ carry a K3-type Hodge structure.
The surfaces with $p_g=2$ such that the transcendental Hodge structure splits into the direct sum of two K3-type Hodge structures are studied in several contexts (see e.g. \cite{Mo87}, \cite{Gdouble}, \cite{L19}, \cite{L20}, \cite{PZ19}) and in general it is interesting to look for K3 surfaces geometrically associated to the K3-type Hodge structures. In the case of covers of K3 surfaces $S$ (and in particular in the setting of Corollary \ref{cor: pg=2 gen.type}), at least one of the two K3-type Hodge structures is of course geometrically associated to the K3 surface $S$: indeed, it is the pull-back of the Hodge structure of $S$.} \end{rem} We give examples of case $(2)$ of Theorem \ref{theorem: three cases giusto} such that $D_1$ is reduced and reducible. In particular we consider the case $D_1^2=0$, but $\Lambda_{D_1}$ contains a class with a positive self intersection. In this case the support of $D_1$ consists of a certain number of fibers $F_i$ and some rational horizontal curves $P_j$, such that $$(D_1)^2=(r\sum_iF_i+\sum_jP_j)^2=0.$$ We denote by $F$ the class of the fiber, so that $F_iP_j=FP_j$ for all $i$. Since $D_1^2=r\sum_i D_1F_i+\sum_{k=1}^{s}D_1P_k$, where $D_1F_i=D_1F=\sum_jFP_j>0$, the condition $D_1^2=0$ forces $$\exists j\mbox{ such that }D_1P_j<0.$$ In particular $D_1$ is not nef. The following lemma shows that each curve $P_j$ such that $D_1P_j<0$ is a section orthogonal to all the other horizontal curves in $D_1$. \begin{lemma}\label{lemma: case (2 theorem)} Let $F$ be the class of the fiber of an elliptic fibration and let the $P_j$'s be irreducible horizontal curves. Let $D_1=rF+\sum_{j=1}^kP_j$ be such that $D_1^2=0$ and $D_1$ is reduced. There exists $j$ such that $D_1P_j<0$ if and only if $FP_j=1$, $P_jP_i=0$ for every $i\neq j$ and $r=1$. \end{lemma} \proof We already observed that $D_1$ intersects negatively at least one of its components, say $P_j$. Hence $P_j$ is a fixed component of a non-nef divisor and $P_j^2=-2$.
So $$D_1P_j=rFP_j+\sum_{i\neq j}P_iP_j-2<0.$$ So $rFP_j+\sum_{i\neq j}P_iP_j\leq 1$. We observe that $P_j$ is horizontal, so $FP_j>0$, and $P_jP_i\geq 0$ because the $P_i$ are irreducible effective curves. Hence the only possibility is $r=1$, $FP_j=1$ and $P_iP_j=0$.\endproof We recall that by (2) of Corollary \ref{cor: giusto}, if $D_1$ is as above, in the branch locus there are $n-1$ further components $D_h$, $n\geq 1$, which are $A_2$-configurations of curves. We denote by $A_1^{(h)}$, $A_2^{(h)}$ the components of such configurations. \begin{proposition}\label{prop: case (2) theorem- possibilities} Let $S$ be a K3 surface admitting an elliptic fibration with $k$ disjoint sections $P_j$ and whose fiber class is $F$, and let $D_1=F+\sum_j{P}_j$. There exists a triple cover $X\rightarrow S$ branched on $D_1$ and on $n-1$ other components $D_h$ if and only if \begin{equation}\label{eq: conditions gen type fibration} D_1=F+\sum_{j=1}^kP_j,\ k\equiv_3 0,\ \ n\equiv_3 1+\frac{k}{3} \end{equation} and the data of the triple cover are determined by $$B=F+\sum_{h=1}^{n-1}A_1^{(h)},\ \ C=\sum_{i=1}^kP_i+\sum_{h=1}^{n-1}A_2^{(h)}.$$ The surface $X^{\circ}$ is of general type and its numerical invariants are $$\chi(\mathcal{O}_{X^{\circ}})=\frac{60-6n-k}{9} ,\ \ K_{X^{\circ}}^2= \frac{2k}{3}.$$ \end{proposition} \proof Since $L=(B+2C)/3$ and $LP_i\in\mathbb{Z}$, it follows that if $F$ is a component of $B$, the $P_i$'s must be components of $C$. The divisor $$L=\frac{F+2\sum_{j=1}^kP_j+\sum_{h=1}^{n-1}\left(A_1^{(h)}+2A_2^{(h)}\right)}{3}$$ has to be contained in $NS(S)$, with $LF\in \mathbb{Z}$, $LP_i\in \mathbb{Z}$ for every $i=1,\ldots, k$, and $L^2\in 2\mathbb{Z}$.
These conditions imply $$\frac{2k}{3}\in\mathbb{Z},\ \ -1\in\mathbb{Z}\mbox{ and }\frac{-4k-6(n-1)}{9}=2\frac{-2k-3n+3}{9}\in 2\mathbb{Z}.$$ Recall that $$M=\frac{2F+\sum_{j=1}^kP_j+\sum_{h=1}^{n-1}\left(2A_1^{(h)}+A_2^{(h)}\right)}{3},$$ so $$L^2=\frac{6-4k-6n}{9},\ M^2=\frac{2k+6-6n}{9}\mbox{ and }LM=\frac{k+3-3n}{9},$$ which gives $\chi(X)$ and $K_X^2$. Since the singularities of $X$ are negligible, these are the invariants of the minimal resolution of the cover. To obtain a minimal model one has to contract the $(-1)$-curves. Each $A_2$-configuration produces three $(-1)$-curves in the minimal resolution of the triple cover and each curve $P_j$ produces one $(-1)$-curve, by Remark \ref{rem: contraction of curves singularities of type 2}. So one contracts at least $3(n-1)+k$ curves to obtain the minimal model from the minimal resolution, and so $K_{X^{\circ}}^2\geq K_X^2+3(n-1)+k=2k/3$. As in the proof of Corollary \ref{cor: giusto irreducible}, if there were other $(-1)$-curves, they would have to intersect the ramification locus, and we already excluded the cases coming from rational curves intersecting the components $D_j$, $j\geq 2$. We consider the canonical resolution of the triple cover $\widetilde{f}:\tilde{X}\rightarrow \tilde{S}$ as in the diagram \eqref{dia.can}. Thanks to the particular configuration of curves in $D_1$, one checks that the only $(-1)$-curves on $\tilde{X}$ mapped to $\sigma^{-1}(D_1)$ are the strict transforms of the preimages of the curves $P_i$. After their contraction, there are no other $(-1)$-curves in the strict transform of $\sigma^{-1}(D_1)$. Consider a rational curve $C\subset S$ with $CD_1>0$ and observe that $C$ is not mapped to $\sigma^{-1}(D_1)$. We denote by $\tilde{C}$ the strict transform of $C$. If $\tilde{f}^{-1}(\tilde{C})$ is an irreducible rational curve, it intersects the ramification locus in at most 2 points. Moreover $(\tilde{f}^{-1}(\tilde{C}))^2=3\tilde{C}^2\leq -6$.
Since we can contract at most two curves meeting $\tilde{f}^{-1}(\tilde{C})$, we cannot obtain a $(-1)$-curve. If $\tilde{f}^{-1}(\tilde{C})$ splits into three curves which meet each other, then they cannot be $(-1)$-curves (because $X$ is of general type). If $\tilde{f}^{-1}(\tilde{C})$ splits into three disjoint curves, then $\tilde{f}^{-1}(\tilde{C})$ does not meet the ramification locus and each component has self intersection $\tilde{C}^2$; in particular they are not $(-1)$-curves. The $(-1)$-curves on $\tilde{X}$ are contained in the ramification locus and thus $\tilde{f}^{-1}(\tilde{C})$ does not meet these curves. \endproof In the previous proposition we do not discuss the existence of the K3 surfaces considered, so a priori it is possible that the hypotheses of the proposition are empty. This is not the case, by the following corollary. \begin{corollary}\label{corollary: case (2) theorem: existence} Let $k$ and $n$ be two positive integers such that $k+2n-1\leq 11$. Then there exists a K3 surface $S$ with an elliptic fibration with $k$ disjoint sections $P_j$ and $n-1$ fibers of type $I_3$ such that the sections $P_j$ all meet the same component of each $I_3$-fiber. In particular there exists a triple cover $X\rightarrow S$ as in Proposition \ref{prop: case (2) theorem- possibilities} if $(k,n)=(3,2),(6,3), (9,1)$, and in these cases $\chi(X)=5,4,5$ and $K_{X^{\circ}}^2=2,4,6$ respectively. \end{corollary} \proof Let $\Lambda$ be the lattice generated by the following classes $$F,\ P_j,\ (j=1,\ldots, k),\ A_1^{(h)},\ A_2^{(h)},\ \frac{F+2\sum_{j=1}^kP_j+\sum_{h=1}^{n-1}\left(A_1^{(h)}+2A_2^{(h)}\right)}{3}=L,$$ where the intersections which are not zero are $$FP_j=A_1^{(h)}A_2^{(h)}=1,\ P_j^2=\left(A_i^{(h)}\right)^2=-2.$$ The lattice $\Lambda$ is even and hyperbolic.
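Let us make the rank condition explicit: since the class $L$ lies in the rational span of the other generators, $${\rm rank}(\Lambda)=1+k+2(n-1)=k+2n-1,$$ which for $(k,n)=(3,2),(6,3),(9,1)$ equals $6$, $11$ and $10$ respectively, in accordance with the bound $k+2n-1\leq 11$ of the statement.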
If the rank of the lattice $\Lambda$ is less than 12, then it admits a primitive embedding in $\Lambda_{K3}$ and there exists a projective K3 surface $S$ such that $NS(S)=\Lambda$ (by the surjectivity of the period map). If $(k,n)$ is such that ${\rm rank}(\Lambda)\leq 11$ and satisfies the condition \eqref{eq: conditions gen type fibration}, then $(k,n)=(3,2),(6,3), (9,1)$. The classes $F$ and $F+P_j$ span a copy of $U$ inside $\Lambda$, so there exists a negative definite lattice $K$ such that $U\oplus K\simeq \Lambda$. In \cite[Proof of Lemma 2.1]{Ko} it is proved that if $NS(S)\simeq\Lambda$ is as described, there exists an elliptic fibration. The $2(n-1)$ $(-2)$-curves $A_i^{(h)}$, which are roots in $K$, are contained in singular fibers as in the statement. By the Shioda--Tate formula, the rank of the Mordell--Weil group of the elliptic fibration is the Picard number of the surface minus the rank of the trivial lattice. The latter is spanned by $U$ and the roots contained in $K$. Hence the Mordell--Weil group has rank $k-1$ and there are $k$ independent sections, which correspond to the classes $P_j$ (see \cite[Corollary 6.13]{SchS}). \endproof \begin{say} We observe that the K3 surface associated to the values $(9,1)$ is a K3 surface with an elliptic fibration with nine disjoint sections and, generically, without reducible fibers. This K3 surface is obtained as a base change of order 2 on a generic rational elliptic surface $R$. One can directly check this fact by considering the N\'eron--Severi group of $S$, which is generated by $L$ and by the classes $P_1,\ldots, P_9$, and is isometric to $U\oplus E_8(-2)$, which is the N\'eron--Severi group of a K3 surface corresponding to the double cover of a generic rational elliptic surface \cite{GSal}.
In particular, the rational elliptic surface is a blow up of $\mathbb{P}^2$ in the base locus of a generic pencil of cubics, and hence the K3 surface $S$ can be described as a double cover of $\mathbb{P}^2$ branched on two specific cubics of the pencil, see e.g. \cite{GSal}. The triple cover $X\rightarrow S$ defines a $6:1$ Galois cover $X\rightarrow\mathbb{P}^2$ whose Galois group is $\mathfrak{S}_3$. In particular, the rational surface $R$ admits a non-Galois triple cover $W$ and by construction $X$ is a double cover of $W$.\\ \end{say} In Lemma \ref{lemma: case (2 theorem)} and hence in Proposition \ref{prop: case (2) theorem- possibilities} we assume that the fibers appearing in the component $D_1$ of the branch locus are smooth. But of course this is not the only possibility. Indeed one can also consider reducible fibers. This gives many ways to construct the data of the triple cover. For example let $F$ be a reducible fiber with two components $G_0$ and $G_1$; then either both $G_0$ and $G_1$ are contained in $B$, or one is contained in $B$ and the other in $C$. These choices produce different covers and the situation can be easily generalized to fibers with more components. We now describe one case where the fiber is of type $I_2$ (and then it has two components), one component is contained in $B$ and the other in $C$. Moreover in the branch locus there are 4 sections and two further $A_2$-configurations of rational curves. This means that $n=3$ with the notation of Theorem \ref{theorem: three cases giusto}, case $(2)$. \begin{example} {\rm Let us consider a K3 surface $S$ and the configuration of $(-2)$-curves as in \verb|Figure 3|. The existence of a K3 surface with the required fibration is guaranteed by the surjectivity of the period map as in Corollary \ref{corollary: case (2) theorem: existence}.
\begin{center} \definecolor{qqqqff}{rgb}{0,0,1} \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=0.7cm,y=0.7cm] \clip(-5.00,-3.5) rectangle (5.00,7); \draw [line width=2pt] (-1,-3)-- (-3,-1); \draw [line width=2pt,color=qqqqff] (-3,-2)-- (-1,0); \draw [line width=2pt,color=qqqqff] (1,-3)-- (3,-1); \draw [line width=2pt] (3,-2)-- (1,0); \draw [samples=50,rotate around={-90:(-0.5,3)},xshift=-0.5cm,yshift=3cm,line width=2pt,domain=-8:8] plot (\x,{(\x)^2-3/2/1}); \draw [samples=50,rotate around={-270:(0.5,3)},xshift=0.5cm,yshift=3cm,line width=2pt,color=qqqqff,domain=-8:8] plot (\x,{(\x)^2-3/2/1}); \draw [line width=2pt,color=black] (-3,5)-- (-1,6); \draw [line width=2pt,color=qqqqff] (3,5)-- (1,6); \draw [line width=2pt,color=qqqqff] (1,1)-- (3,2); \draw [line width=2pt,color=black] (-1,1)-- (-3,2); \begin{scriptsize} \draw[color=black] (-1.64,-1.57) node {$A^{(1)}_2$}; \draw[color=qqqqff] (-1.64,-0.23) node {$A^{(1)}_1$}; \draw[color=qqqqff] (2.3,-2.29) node {$A^{(2)}_1$}; \draw[color=black] (2.3,-0.53) node {$A^{(2)}_2$}; \draw[color=black] (1.62,4.99) node {$G_1$}; \draw[color=qqqqff] (-1.38,4.99) node {$G_0$}; \draw[color=black] (-1.4, 6.12) node {$P_0$}; \draw[color=qqqqff] (2.28,6.09) node {$P_1$}; \draw[color=qqqqff] (2.28,1.10) node {$P_3$}; \draw[color=black] (-1.72,1.1) node {$P_2$}; \end{scriptsize} \end{tikzpicture} \verb|Figure 3| \end{center} We then consider the triple cover such that $$D_1=G_0+G_1+P_0+P_1+P_2+P_3,\ D_2=A_1^{(1)}+A_2^{(1)},\ \ D_3=A_1^{(2)}+A_2^{(2)}.$$ The triple cover data are $$B=G_0+P_1+P_3+A_1^{(1)}+A_1^{(2)},\ C=G_1+P_0+P_2+A_2^{(1)}+A_2^{(2)},\ L=\frac{B+2C}{3},\ M=\frac{2B+C}{3}.$$ We observe that $\Lambda_{D_1}$ is indefinite (for example $2(G_0+G_1)+P_0$ has a positive self intersection) and
$X$ is of general type by Theorem \ref{theorem: three cases giusto}. A straightforward calculation shows that \( B^2=-10, \quad C^2=-10, \mbox{ and } BC=8, \) which yields \( L^2=-2, \quad M^2=-2, \mbox{ and } ML=0. \) Since all the singularities in the branch locus are of type 2, and in particular negligible, by Proposition \ref{prop: rito numbers} we obtain \[ \chi(X')=4, \quad K^2_{X'}=-8. \] Moreover, since $L^2=M^2=-2$, it follows that $h^1(S,L)=h^1(S,M)=0$, and then $q(X')=0$. Therefore $p_g(X')=3$. The surface $X'$ is smooth but not minimal. Indeed each configuration of type $A_2$ in the branch locus corresponds to three $(-1)$-curves on the minimal resolution $X'$ of the cover and each curve $P_j$ corresponds to a $(-1)$-curve on $X'$, see Remark \ref{rem: contraction of curves singularities of type 2}. So we have to contract at least $3\cdot 2+4=10$ curves on $X'$, and we denote by $X_{m}$ the surface obtained by contracting these 10 curves on $X'$. Then $K_{X_{m}}^2=-8+10=2$. The surface $X_{m}$ is minimal, but proving this directly is not straightforward. Nevertheless, it is possible to see it using a different construction of $X_{m}$. In \cite{BP17}, the second author with Bini introduced a Calabi-Yau threefold $Y_6$ with Hodge numbers $(10, 10)$. To some extent, this threefold is special. In fact, its group of automorphisms contains a subgroup $G$ isomorphic to $\mathbb{Z}/6\mathbb{Z}$. Moreover, this Calabi-Yau threefold can be described as a small resolution of a $(3, 3)$ complete intersection $Y$ in $ \mathbb{P}^5$ with $72$ ordinary double points. Furthermore, the group $G$ extends to a group of automorphisms of $ \mathbb{P}^5$. Thus, there are six invariant hyperplane sections corresponding to irreducible characters of the abelian group $\mathbb{Z}/6\mathbb{Z}$. Since the intersections of $Y$ with these sections are invariant with respect to this group, it was natural to investigate them and their quotients.
Out of the six invariant sections mentioned before, three of them are irreducible. These are singular surfaces of general type; letting $\mathbb{Z}/2\mathbb{Z} \leq G$ act on the minimal model $\Sigma$ of one of them, it is not hard to prove that the minimal resolution of $\Sigma/(\mathbb{Z}/2\mathbb{Z})$ is a minimal surface and is isomorphic to $X_{m}$. Hence $X_{m}$ is the minimal model of $X$, i.e. it is $X^{\circ}$. }\end{example} \subsection{Examples of case $(3)$ of Theorem \ref{theorem: three cases giusto}}\label{subsec: base change of elliptic fibration} Case $(3)$ of Theorem \ref{theorem: three cases giusto} implies that $D_1$ is a fiber of an elliptic fibration. So $D_j$, $j>1$, is necessarily contained in a fiber, and this naturally gives two cases: either $D_j$ is a full fiber for all $j=1,\ldots, n$, or at least one of the $D_j$ is supported on rational components of a fiber but does not coincide with the full fiber. Both cases are possible and we now discuss them. \begin{proposition} Let $X\rightarrow S$ be a triple cover as in case $(3)$ of Theorem \ref{theorem: three cases giusto}. This implies that $\varphi_{|D_1|}:S\rightarrow \mathbb{P}^1$ is an elliptic fibration. If all the $D_j$ are linearly equivalent to $D_1$, then $X$ is obtained by a base change of order 3. \end{proposition} \proof By the proof of Corollary \ref{cor: giusto}, $D_1$ is the fiber of an elliptic fibration $\varphi_{|D_1|}:S\rightarrow\mathbb{P}^1$. By hypothesis the $D_j$'s are fibers of the fibration $\varphi_{|D_1|}$ too. The $3:1$ cover $X\rightarrow S$ induces a $3:1$ cover $C\rightarrow\mathbb{P}^1$ where $C$ is a smooth curve. The branch locus of $C \rightarrow \mathbb{P}^1$ is the image of the branch locus of $X\rightarrow S$, and so it is $\varphi_{|D_1|}(\bigcup_j D_j)$.\endproof \begin{proposition}\label{prop: examples case 3 of proposition} Let $S$ be a K3 surface endowed with an elliptic fibration $\varphi_{|F|}:S\rightarrow\mathbb{P}^1$.
Let us consider points $p_1,\ldots, p_n$ in $\mathbb{P}^1$ and a triple Galois cover $g:W\rightarrow \mathbb{P}^1$ totally branched on $p_1,\ldots, p_n$. The fiber product $X:=S\times_{\mathbb{P}^1}W$ is a triple cover of $S$ branched on $n$ fibers. If the fibers over the points $p_i$ are reduced, $X$ is also normal. Denoting by $X^\circ$ the minimal model of $X$, if $X^{\circ}$ is not a product, then $$h^{1,0}(X^{\circ})=g(W)=n-2\mbox{ and } \ K_{X^{\circ}}^2=0.$$ If $X^{\circ}$ is a product, then $h^{1,0}(X^{\circ})=g(W)+1=n-1$. If all the branch fibers are reduced and not of type $IV$, then $$e(X^\circ)=72,\ \ \chi(X^\circ)=6 \mbox{ and }p_g(X^{\circ})=3+n.$$ \end{proposition} \proof The cover automorphism of $g:W\rightarrow\mathbb{P}^1$ acts as $\zeta_3$ locally near the first $b_1$ ramification points and as $\zeta_3^2$ locally near the other $b_2=n-b_1$ ramification points. Notice that $b_1+2b_2\equiv_3 0$. Let us consider the fiber product $S\times_{\mathbb{P}^1}W$. It is the triple cover $X$ of $S$ branched over the fibers $F_{p_i}=\varphi_{|F|}^{-1}(p_i)$. So we have a Galois triple cover $X\rightarrow S$. If all the fibers $F_{p_i}$ are reduced, the cover $X$ is normal and we can apply the previous theory. The branching data are $B=\sum_{i=1}^{b_1}F_{p_i}$, $C=\sum_{j=b_1+1}^{n}F_{p_j}$, $L\simeq (b_1+2b_2)F/3$ and $M\simeq (2b_1+b_2)F/3$. Now let us compute the triple cover invariants. The surface $X$ is endowed with an induced elliptic fibration. Let us denote by $X^{\circ}$ the minimal model of $X$, which is a smooth surface, admitting a relatively minimal elliptic fibration $\mathcal{E}:X^\circ\rightarrow W$. Assume that it admits at least one singular fiber (which is surely true if there exists a singular fiber of $S$ which is not in the branch locus). Then $X^{\circ}$ is not a product and $h^{1,0}(X^{\circ})=g(W)=n-2$. Moreover, since $X^{\circ}$ admits an elliptic fibration, $K_{X^{\circ}}^2=0$.
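For the reader's convenience, we recall the genus count used here: since $g:W\rightarrow\mathbb{P}^1$ is totally ramified over each of the $n$ branch points, each branch point has a single preimage with ramification index $3$, and the Riemann--Hurwitz formula gives $$2g(W)-2=3\left(2g(\mathbb{P}^1)-2\right)+n(3-1)=-6+2n,$$ that is, $g(W)=n-2$.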
We recall that if a branch fiber is of type $IV$, the corresponding ramification fiber on $X^{\circ}$ is smooth; hence it is possible that after the base change the fibration $X^{\circ}\rightarrow W$ has no singular fibers, and it may be a product. If the branch fibers are reduced and different from $IV$, the Euler characteristic of their strict transforms is 3 times their Euler characteristic (see \cite[VI.4.1]{M85}); then $e(X^{\circ})=3e(S)=72$, so $\chi(X^{\circ})=e(X^{\circ})/12=3\chi(S)=6.$ \endproof We observe that even if there are non-reduced fibers $F_{p_i}$ in the branch locus, the invariants of the minimal model of a normalization can be computed by the theory of the base change of elliptic fibrations, see \cite[VI.4.1]{M85}. \begin{theorem}\label{theo: list negligible}\label{say: other singularities in the branch locus} The singularities in the branch locus of a Galois triple cover are negligible (see Definition \ref{def.neg.sing}) if the local equation of the branch locus is in the following list: \begin{itemize}\item $xy$ (type 1); \item $xy^2$ (type 2); \item $xy(x+y)$ (simple triple point); \item $x^2-y^3$ (cusp); \item $x(y-x^2)$.\end{itemize}\end{theorem} \proof The singularities of type 1 and 2 were considered in \cite[Example 1.6 and 1.8]{PP13} (see also Proposition \ref{prop: negligible}). For the other cases we use the results of Proposition \ref{prop: examples case 3 of proposition}, where we computed the invariants of $X^{\circ}$ directly by considering the geometry of the elliptic fibration. We compare them with the ones obtained by applying point $(3)$ of Proposition \ref{prop: rito numbers}. If they coincide, this means that all the singularities which can appear in the branch locus are negligible. Since $L^2=M^2=LM=0$, one obtains $\chi(X')=6$, $e(X')=72$ and $K_{X'}^2=0$.
Moreover, $h^1(L)=h^1(\frac{b_1+2b_2}{3}F)=\frac{b_1+2b_2}{3}-1=\frac{b_1+2b_2-3}{3}$ and $h^1(M)=h^1(\frac{2b_1+b_2}{3}F)=\frac{2b_1+b_2}{3}-1=\frac{2b_1+b_2-3}{3}$, so that $h^1(L)+h^1(M)=b_1+b_2-2=n-2$. So all the singularities appearing in the singular reduced fibers are negligible. For example, a branch fiber of type $III$ in $S$ consists of two tangent rational curves. We deduce that the singularities in the branch locus obtained by tangency of two components are negligible singularities. More precisely, if there is a branch fiber of type $III$ on $S$, it induces a fiber of type $III^*$ (whose dual graph of curves is $\widetilde{E}_7$) of $\mathcal{E}:X'\rightarrow W$. We deduce that if the branch locus of a totally ramified triple cover contains two tangent curves, the triple cover has a singularity of type $E_6$ (see \verb|Figure 4|). \begin{center} \includegraphics[width=8cm]{E6} \verb|Figure 4| \end{center} If a branch fiber on $S$ is singular, then the corresponding fiber on $X^{\circ}$ is obtained by a base change of order 3; \cite[Table VI.4.1]{M89} shows the effect of a base change on the singular fibers of an elliptic fibration. By \cite[Table VI.4.1]{M89} one immediately obtains that, if the branch contains a cusp, the cover has a singularity of type $D_4$; if the branch has a simple triple point, the cover has an elliptic singularity. Notice that one recovers the singularities of type 1 by considering fibers of type $I_m$ in the branch locus. \endproof Note that the list of Theorem \ref{theo: list negligible} is not necessarily complete.\\ We now consider the other possibility of case $(3)$ of Theorem \ref{theorem: three cases giusto}, hence we assume that $D_1$ is a fiber but at least one of the other components of the branch locus is strictly contained in a fiber. In this case, the effect of the triple cover on the fibers strictly containing $D_j$ is not the one of a base change of order 3.
In the following proposition we describe how the fibers change under a triple cover of this type. \begin{corollary} Let $D_1$ be a connected reducible (possibly non-reduced) component of the branch locus of a Galois triple cover $f:X\rightarrow S$. Let $\varphi_{|D_1|}:S\rightarrow \mathbb{P}^1$ be the induced elliptic fibration (see the proof of Theorem \ref{theorem: three cases giusto}). Let $D_j\subset (F_S)_j$ and assume that $D_j=(F_S)_j$ if and only if $j\leq k$. Given a fiber $F_S$, we denote by $F_{X'}$ the corresponding fiber in the minimal model of the normalization of $X$. Then we have the following tables of types of singular fibers: $$\begin{array}{cc} \begin{array}{|c|c|} j \leq k&\\ \hline \mbox{type }F_S&\mbox{type }F_{X'}\\ \hline I_m&I_{3m}\\ \hline I_m^*&I_{3m}^*\\ \hline II^*&I_0^*\\ \hline III^*&III\\ \hline II&I_0^*\\ \hline III&III^*\\ \hline IV&I_0\\ \end{array}&\begin{array}{|c|c|}j>k&\\ \hline \mbox{type }F_S&\mbox{type }F_{X'}\\ \hline I_{3m}&I_{m}\\ \hline IV&IV\\ \hline IV^*&I_0\end{array}\end{array}$$ \end{corollary} \proof If $j\leq k$, then $D_j=(F_S)_j$, the fibers $(F_S)_j$ are branch fibers and the type of $(F_{X'})_j$ is given in \cite{M85}. We already observed that if $D_i$ is properly contained in a fiber $F_S$, then $i>k$ and $D_i$ is supported on an $A_2$-configuration. Let us suppose that $F_S$ properly contains $r$ connected components $D_i$, $i>k$. These are $A_2$-configurations, whose components are denoted by $A_1^{(1)}$, $A_2^{(1)}$, $A_1^{(2)}$, $A_2^{(2)}$, $\ldots$, $A_1^{(r)}$, $A_2^{(r)}$. Moreover, $\sum_{h=1}^r(A_1^{(h)}+2A_2^{(h)})/3$ necessarily has an integer intersection with all the components of $F_S$, hence it is contained in the discriminant group of the lattice associated to $F_S$. We conclude that $F_S$ is necessarily one of the following: $I_{3m}$, with $r=m$, $IV$ with $r=1$ and $IV^*$ with $r=3$.
There are $m$ $A_2$-configurations contained in $I_{3m}$ and the birational inverse image of each of them in the minimal model $X^{\circ}$ is a point. The remaining curves form an $I_m$ fiber. To obtain the minimal model in the case $IV^*$, one first contracts the inverse image of the curves in the $A_2$-configurations, each to a point. These three points lie on a $(-1)$-curve (which is the inverse image of the unique remaining curve in $IV^*$). Contracting this curve we obtain an $I_0$ fiber. The case $IV$ is shown in \verb|Figure 5|. \begin{center} \includegraphics[width=8cm]{IV} \verb|Figure 5| \end{center} \endproof \subsection{Irregular covers of K3 surfaces}\label{subsec: irregular} Even if a K3 surface is a regular surface, it is of course possible to construct triple covers of K3 surfaces which are irregular surfaces. The easiest examples are provided by cases $(1)$ and $(3)$ of Theorem \ref{theorem: three cases giusto}. Indeed, by case (1) of Theorem \ref{theorem: three cases giusto} the Galois triple cover of a K3 surface branched on 9 $A_2$-configurations is an Abelian surface. This case is effective, since it is known that there exist Abelian surfaces admitting a symplectic automorphism of order 3 such that their quotient by this automorphism is birational to a K3 surface, \cite{F}. In Proposition \ref{prop: examples case 3 of proposition} the irregularity of a surface $X$ obtained as a base change of order 3 on an elliptic fibration on a K3 surface $S$ is computed. One checks that if $n>2$, then the surface $X$ is irregular. Once again it is clear that there exist explicit examples of this situation; indeed it suffices to construct a curve $W$ (with the notation of Proposition \ref{prop: examples case 3 of proposition}) which is a Galois triple cover of $\mathbb{P}^1$ branched on more than 2 points. We observe that the previous constructions produce surfaces $X$ whose Kodaira dimension is either 0 or 1.
It is more complicated to construct examples of irregular covers of K3 surfaces which are surfaces of general type. This is essentially due to the difficulties in finding branch divisors on a K3 surface which are not supported on rational or elliptic curves, but such that the associated triple cover is irregular. Indeed, if $X \rightarrow S$ is a triple cover of a K3 surface, then $X$ is irregular if and only if $h^1(L)>0$ or $h^1(M)>0$, and there are not many divisors with this property. \begin{lemma}\label{lemma: h1D neq 0} Let $D$ be a divisor on a K3 surface $S$ such that $-D$ is not effective. If $h^1(D)\neq 0$ then one of the following holds: \begin{itemize} \item if $D^2<0$ then $D^2\leq -4$, and if $D^2=-4$, then $D$ is effective; \item if $D^2= 0$ and $D$ is nef, then $D=nE$ where $E$ is a genus 1 curve, $n>1$ and $h^1(S,D)=n-1$; if $D^2=0$ and $D$ is not nef, then $D=M+F$ where $M$ is its moving part and $F$ is its fixed part, and $$F(F-2D)>0;$$ \item if $D^2>0$, then $D$ is not nef. In this case $D=M+F$ where $M$ is its moving part and $F$ is its fixed part, and $$F(F-2D)>0.$$ \end{itemize} \end{lemma} \proof If $D^2<0$, then $h^0(S,D)\leq 1$. If $D^2=-2$, then $\chi(D)=1$, which would imply $h^1(S,D)=0$. If $D^2>0$ and $D$ is nef, then $D$ is big and nef, and by the Kawamata--Viehweg vanishing theorem $h^1(S,D)= 0$. Therefore, if $D^2>0$ and $h^1(D)\neq 0$, then $D$ is not nef. In particular, there is a fixed part $F$ in $|D|$ such that $D=M+F$ with $F^2<0$ and $DF<0$. Moreover $h^0(S,D)=h^0(S,D-F)$. By the Riemann--Roch theorem $$\frac{1}{2}(D-F)^2+2=h^0(S,D-F)=h^0(S,D)=\frac{1}{2}D^2+2+h^1(S,D).$$ Finally, if $D^2=0$ and $D$ is not nef, the proof is the same as in the case above. If $D$ is nef, then $D=nE$ where $E$ is a genus 1 curve. We recall the well-known fact $h^1(S,nE)=n-1$.
To recover this, note that $L^2 = 0$ implies $h^0(S, L) \geq 2$, as by Serre duality $h^2(S, L) = h^0(S, L^{-1}) = 0$ for the non-trivial nef line bundle $L$ (intersect with an ample curve). It is enough to use the Riemann--Roch theorem for a K3 surface to compute $\chi(\mathcal{O}_S(nE))=2$ and then use inductively the exact sequence in cohomology associated to the fundamental sequence of $E$ tensored with $\mathcal{O}_S(nE)$: \[ 0 \rightarrow \mathcal{O}_S\big((n-1)E\big) \rightarrow \mathcal{O}_S(nE) \rightarrow \mathcal{O}_E(nE|_E) \rightarrow 0. \] \endproof \begin{say} In view of Lemma \ref{lemma: h1D neq 0}, we consider divisors on a K3 surface with very low self intersection. In Theorems \ref{theo: cover S16} and \ref{theo: cover S15} we construct specific irregular surfaces of general type which are covers of K3 surfaces, and some notation is needed. We denote by $M_{(\mathbb{Z}/2\mathbb{Z})^4}$ a specific overlattice of $\oplus_{1}^{15}A_1$, constructed as follows: denoting by $N_i$ the 15 orthogonal classes with self intersection $-2$ which generate $\oplus_{1}^{15}A_1$, we add to the lattice $\langle N_i\rangle$ the vectors \begin{eqnarray*}\begin{array}{c}v_1:=\left(\sum_{i=1}^{8}N_i\right)/2,\ v_2:=\left(\sum_{i=1}^{4}N_i+\sum_{j=9}^{12}N_j\right)/2\\ v_3:=\left(N_1+N_2+N_5+N_6+N_9+N_{10}+N_{13}+N_{14}\right)/2,\ v_4:=\left(\sum_{i=0}^{7}N_{2i+1}\right)/2.\end{array}\end{eqnarray*} The lattice $M_{(\mathbb{Z}/2\mathbb{Z})^4}=\langle N_1,\ldots ,N_{15},\ v_1,\ldots, v_4\rangle$ is an even negative definite lattice of discriminant group $(\mathbb{Z}/2\mathbb{Z})^7$. Similarly, we define $M_{(\mathbb{Z}/2\mathbb{Z})^3}$ as the overlattice of $\oplus_{1}^{14}A_1$ obtained as above starting with 14 divisors $N_i$ and adding the classes $v_1$, $v_2$, $v_3$. The lattice $M_{(\mathbb{Z}/2\mathbb{Z})^3}$ is an even negative definite lattice of discriminant group $(\mathbb{Z}/2\mathbb{Z})^8$.
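The orders of these discriminant groups follow from the index formula for overlattices, $|d(L)|=[L':L]^2\,|d(L')|$ for a finite index sublattice $L\subseteq L'$: the classes $v_1,\ldots,v_4$ (respectively $v_1,v_2,v_3$) generate an overlattice of index $2^4$ (respectively $2^3$), so $$\left|d\left(M_{(\mathbb{Z}/2\mathbb{Z})^4}\right)\right|=\frac{2^{15}}{\left(2^4\right)^2}=2^7, \qquad \left|d\left(M_{(\mathbb{Z}/2\mathbb{Z})^3}\right)\right|=\frac{2^{14}}{\left(2^3\right)^2}=2^8.$$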
Let us consider the rank 16 lattice $R_{16}:=\langle 6\rangle\oplus M_{(\mathbb{Z}/2\mathbb{Z})^4}$ and let us denote by $H$ the generator of $\langle 6\rangle$. By \cite[Theorem 8.3]{GS}, there exists an even overlattice $R'_{16}$ of index 2 of $R_{16}$, obtained by adding to the lattice $R_{16}$ the class $\left(H+\sum_{i=1}^{15}N_i\right)/2$. The discriminant group of $R'_{16}$ is $\mathbb{Z}/6\mathbb{Z}\times (\mathbb{Z}/2\mathbb{Z})^5$. By \cite[Theorem 1.14.4 and Remark 1.14.5]{Nik Int}, $R'_{16}$ admits a primitive embedding in $\Lambda_{K3}$. So there is a K3 surface (indeed a 4-dimensional family of K3 surfaces) whose N\'eron--Severi group is isometric to the lattice $R'_{16}$. Observe that this K3 surface has a model (given by the map $\varphi_{|H|}$) as the complete intersection of a quadric and a cubic in $\mathbb{P}^4$ with 15 nodes. \end{say} \begin{theorem}\label{theo: cover S16} Let $S_{16}$ be a K3 surface such that $NS(S_{16})\simeq R'_{16}$. On the surface $S_{16}$ there are a smooth curve $G$ of genus 4 and fifteen disjoint rational curves $N_i$, $i=1,\ldots, 15$, such that $GN_i=1$, $i=1,\ldots, 15$. There exists a Galois triple cover $\pi:X\rightarrow S_{16}$ branched on $G\cup\bigcup_i N_i$. The invariants of the minimal model $X^{\circ}$ of $X$ are: $$p_g(X^{\circ})=6,\ q(X^{\circ})=1,\ c_1(X^{\circ})^2=18.$$\end{theorem} \proof By \cite[Proposition 8.5]{GS}, the divisors $H$ and $\left(H-\sum_{i=1}^{15}N_i\right)/2\in NS(S_{16})$ are pseudoample. So $(3H-\sum_{i=1}^{15}N_i)/2\in NS(S_{16})$ is a pseudoample divisor whose self-intersection is $6$. Hence there is a smooth curve of genus 4, denoted by $G$, in $\left|\left(3H-\sum_{i=1}^{15}N_i\right)/2\right|$. Moreover, one can assume that the divisors $N_i$ are supported on irreducible rational curves (see \cite[Propositions 2.3 and 5.1]{G}). So, there are a smooth genus 4 curve $G$ and 15 disjoint rational curves $N_i$ on $S_{16}$ such that $GN_i=1$ for $i=1,\ldots, 15$.
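These numerical claims can be checked directly on the intersection form: since $H^2=6$, $N_i^2=-2$ and $HN_i=N_iN_j=0$ for $i\neq j$, one has
\[
G^2=\left(\frac{3H-\sum_{i=1}^{15}N_i}{2}\right)^2=\frac{9\cdot 6-15\cdot 2}{4}=6, \qquad g(G)=\frac{G^2}{2}+1=4,
\]
\[
GN_j=\frac{3HN_j-\sum_{i=1}^{15}N_iN_j}{2}=\frac{-N_j^2}{2}=1.
\]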
Set: $$B:=G,\ \ C:=\sum_{i=1}^{15}N_i,\ \ L:=\frac{2B+C}{3}=H, \ \ M:=\frac{B+2C}{3}=\left(H+\sum_{i=1}^{15}N_i\right)/2.$$ The divisors $B$, $C$, $L$ and $M$ satisfy the conditions in \ref{say_tripleGalois} and thus determine a triple cover of $S_{16}$. The branch locus is not smooth and we consider the minimal resolution $X'$ of $X$. The singularities of the branch locus are negligible, since they are transversal intersections of smooth curves, and thus the invariants of $X'$ are obtained by applying the formulae given in Proposition \ref{prop: rito numbers}. Since $L^2=6$, $M^2=-6$ and $LM=3$, one obtains $\chi(\mathcal{O}_{X'})=6$, $K^2_{X'}=3$ and $e(X')=69$. The surface $X'$ is non-minimal and the inverse images on $X'$ of the curves $N_i$ are 15 disjoint exceptional curves $E_i$. We now prove that these are the only $(-1)$-curves, even after their contraction. Indeed, suppose that there is a $(-1)$-curve on a contraction of $X'$ which is mapped to a curve $I$ on $S_{16}$ with $I\neq N_i$. Then $I$ is a rational curve and in the triple cover it has to split and intersect the exceptional curves $E_i$; otherwise the self-intersection of the inverse image of $I$ is lower than $-1$, cf. the proof of Proposition \ref{prop: case (2) theorem- possibilities}. By direct inspection one sees that also this case is not possible, since $I$ would have to split into three $(-1)$-curves which meet. So the minimal model $X^{\circ}$ is obtained from $X'$ by contracting the curves $E_i$. So one obtains $K^2_{X^{\circ}}=3+15=18$, $e(X^{\circ})=69-15=54$ and $\chi(X)=\chi(X^{\circ})$. Moreover, one has $$h^{1,0}(X^{\circ})=h^{1,0}(X)=0+h^1(S,L)+h^1(S,M).$$ Since $L=H$ is a pseudoample divisor, $h^1(S,L)=0$. Since $M^2=-6$ and neither $M$ nor $-M$ is effective, the Riemann--Roch theorem gives $h^1(S,M)=-\chi(M)=1$.
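For the reader's convenience, the value $\chi(\mathcal{O}_{X'})=6$ obtained above can also be checked directly. For a Galois triple cover with data $(B,C,L,M)$ one has $\pi_*\mathcal{O}_X=\mathcal{O}_S\oplus\mathcal{O}_S(-L)\oplus\mathcal{O}_S(-M)$, and the negligible singularities of the branch locus do not affect $\chi$, so
\[
\chi(\mathcal{O}_{X'})=\chi(\mathcal{O}_S)+\chi(\mathcal{O}_S(-L))+\chi(\mathcal{O}_S(-M))=2+\left(\frac{6}{2}+2\right)+\left(\frac{-6}{2}+2\right)=6.
\]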
One concludes that $q(X^{\circ})=q(X)=1$ and $p_g(X^{\circ})=p_g(X)=\chi(\mathcal{O}_{X^{\circ}})-1+q(X^{\circ})=6$.\endproof Let us now consider the rank 15 lattice $R_{15}:=\langle 4\rangle\oplus M_{(\mathbb{Z}/2\mathbb{Z})^3}$, with $H$ the generator of $\langle 4\rangle$. Let $R'_{15}$ be the overlattice of $R_{15}$ constructed by adding the class $v=(H-\sum_{i=1}^{14}N_i)/2$. The lattice $R'_{15}$ admits a primitive embedding in $\Lambda_{K3}$ and thus there exists a K3 surface $S_{15}$ whose N\'eron--Severi group is isometric to $R'_{15}$. \begin{theorem}\label{theo: cover S15} Let $S_{15}$ be a K3 surface such that $NS(S_{15})\simeq R'_{15}$. On the surface $S_{15}$ there are a smooth curve $G$ of genus 2 and fourteen disjoint rational curves $N_i$, $i=1,\ldots, 14$, such that $GN_i=1$, $i=1,\ldots, 14$. There exists a Galois triple cover $\pi:X\rightarrow S_{15}$ branched on $G\cup\bigcup_i N_i$. The invariants of the minimal model $X^{\circ}$ of $X$ are: $$p_g(X^{\circ})=4,\ q(X^{\circ})=1,\ c_1(X^{\circ})^2=12.$$\end{theorem} \proof The proof is analogous to that of the previous theorem, but one has to choose $G$ in the linear system $\left|\left(3H-\sum_{i=1}^{14}N_i\right)/2\right|$. So one finds $$B:=G,\ \ C:=\sum_{i=1}^{14}N_i,\ \ L:=\frac{2B+C}{3}=H, \ \ M:=\frac{B+2C}{3}=\left(H+\sum_{i=1}^{14}N_i\right)/2,$$ thus $$L^2=4,\ \ M^2=-6,\ \ LM=2, $$ and one has to contract 14 curves to obtain $X^{\circ}$ from the minimal resolution $X'$ of $X$. So $K_{X^{\circ}}^2=K_{X'}^2+14=-2+14=12$, $\chi(X^{\circ})=\chi(X')=5$. As in the previous proof one finds $q(X^{\circ})=1$.\endproof A different idea for finding irregular triple covers is to exploit the Albanese morphism and the Kummer construction, but the following remark shows that this approach is too naive. \begin{remark}{\rm Due to the relation between an Abelian surface $A$ and its Kummer surface $Km(A)$, it is possible that a Galois triple cover $Y\rightarrow A$ defines a triple cover $X\rightarrow Km(A)$.
In particular this happens if the involution $\iota: a\mapsto -a$ on $A$ induces an involution on $Y$. In this case one has the following diagram $$\xymatrix{Y\ar[r]^{3:1}\ar@{-->}[d]^{2:1}&A\ar@{-->}[d]^{2:1}\\X\ar[r]^{3:1}&Km(A)}$$ Since $q(A)=2$, it is easier to find covers $Y$ such that $q(Y)\neq 0$. Nevertheless, let $Y$ be a surface such that the Albanese variety $Alb(Y)$ coincides with $A$ (so in particular it has dimension 2 and the Albanese map is $3:1$); then the following holds: if $\iota$ lifts to an involution $\iota_Y$ of $Y$, the quotient $X:=Y/\iota_Y$ is a regular surface. Indeed, the Albanese surface $A$ is defined as $H^0(Y,\Omega^1_Y)^{\vee}/H_1(Y,\mathbb{Z})$ and $\iota$ acts on the space $H^0(Y,\Omega^1_Y)$ as $-1$. Thus $\iota_Y$ does not preserve the holomorphic $1$-forms of $Y$.} \end{remark} \section{Triple covers of K3 surfaces: the split non Galois case}\label{sec: triple split non Galois cover} Now we analyse the non-Galois triple covers $f\colon X\rightarrow S$ of a K3 surface under the assumption that the Tschirnhausen bundle $\mathscr{E}$ splits. We will provide an example of such a triple cover for each possible Kodaira dimension. Let $\mathscr{E}$ be a direct sum of two line bundles $\mathcal{L}^{-1}$ and $\mathcal{M}^{-1}$, so that $\mathscr{E}^{\vee}=\mathcal{L}\oplus\mathcal{M}$. We have already observed, following diagram \eqref{diag split triple cover}, that the triple cover $f$ is totally branched over $D'$ and simply branched over $D''$. All the other covers in the diagram are Galois covers.\\ Since $\mathscr{E}^{\vee}=\mathcal{L}\oplus \mathcal{M}$, we have that $$2L+2M=2D'+D''\mbox{ is the branch locus of the triple cover }f:X\rightarrow S,$$ with $D''\neq 0$, and there exist four effective divisors $B'$, $C'$, $B''$ and $C''$ such that $$L=\frac{2B'+C'}{3}+\frac{B''}{2},\ \ M=\frac{B'+2C'}{3}+\frac{C''}{2}. $$ So $L+M=B'+C'+\left(B''+C''\right)/2$ is the branching data of an $\mathfrak{S}_3$ cover of $S$, i.e.
the line bundle $\mathcal{O}_S(L+M)$ returns the geometric line bundle $\mathbb{L}$ of \cite[Theorem 6.1]{CP}. This $\mathfrak{S}_3$-cover is the Galois closure of the non Galois triple cover $X\rightarrow S$. Notice that in \cite[Theorem 6.1]{CP} one assumes the branch locus to be smooth, but the results extend also to the non-smooth case. \begin{say}\label{say:non Galois split K3 K3}{\bf A split non Galois triple cover of a K3 surface with a K3 surface.} We construct a split but not Galois triple cover $f:X\rightarrow S$ such that $S$ is a K3 surface and $X$ is a singular surface whose minimal resolution is still a K3 surface. We refer to diagram \eqref{diag split triple cover} for the notation. Let $Z$ be a K3 surface such that $\mathfrak{S}_3\subset \mbox{Aut}(Z)$ and $\mathfrak{S}_3$ acts symplectically on $Z$. Then the quotient surface $Z/\mathfrak{S}_3$ has 3 singularities of type $A_2$ and 8 singularities of type $A_1$, see \cite[p.~78, case 6]{X}. The resolution of $Z/\mathfrak{S}_3$ is a K3 surface $S$, which admits a Galois $\mathfrak{S}_3$ cover, branched on the Jung--Hirzebruch strings which resolve the singularities. Let us denote by $B_i\cup C_i$, $i=1,2,3$, the $i$-th $A_2$ configuration and by $N_j$, $j=1,\ldots, 8$, the $j$-th $A_1$ configuration. Then $L=\left(\sum_{i=1}^3(2B_i+C_i)\right)/3+\left(\sum_{j=1}^4N_j\right)/2$ and $M=\left(\sum_{i=1}^3(B_i+2C_i)\right)/3+\left(\sum_{j=5}^8N_j\right)/2.$ The K3 surface $S$ admits a $2:1$ cover branched along $\cup_{j=1}^8 N_j$, which is the non-minimal surface $W$, whose minimal model is a K3 surface $W^{\circ}$. There are 6 $A_2$-configurations on $W^{\circ}$, inverse image of the 3 $A_2$-configurations in $S$. These 6 $A_2$-configurations form a 3-divisible set. The Galois triple cover of $W^{\circ}$ branched on these 6 $A_2$-configurations is a non-minimal surface, whose minimal model is $Z$.
The quotient of $Z$ by an involution in $\mathfrak{S}_3$ is the singular surface $X$, whose minimal resolution is another K3 surface, $X^{\circ}$. The surface $X$ is by construction the non Galois triple cover of $S$ associated to $\mathscr{E}=\mathcal{L}^{-1}\oplus\mathcal{M}^{-1}$. The total branching is on $D'=\sum_{i=1}^3(B_i+C_i)$ and the simple one is on $D''=\sum_{j=1}^8 N_j$. \end{say} \begin{say}\label{say:non Galois split K3 Kod=1}{\bf A split non Galois triple cover of a K3 surface with a properly elliptic surface.} Let us consider a K3 surface $S$ endowed with an elliptic fibration $\mathcal{E}:S\rightarrow \mathbb{P}^1$. Let us consider $g:C\rightarrow\mathbb{P}^1$ a split non Galois triple cover of $\mathbb{P}^1$. This can be constructed by considering $2k$ points $P_{i}$, $i=1,\ldots, 2k$, and $2h$ points $Q_j$, $j=1,\ldots, 2h$. Then one uses, as triple cover data, $B'=\sum_{i=1}^kP_i$, $C'=\sum_{i=k+1}^{2k}P_i$, $B''=\sum_{j=1}^{2r}Q_j$, $C''=\sum_{j=2r+1}^{2h}Q_j$ with $r\leq h$. So $L=\frac{\sum_{i=1}^k(2P_i+P_{i+k})}{3}+\frac{\sum_{j=1}^{2r}Q_j}{2}$, $M=\frac{\sum_{i=1}^k(P_i+2P_{i+k})}{3}+\frac{\sum_{j=2r+1}^{2h}Q_j}{2}$ and there exists a split non Galois triple cover of $\mathbb{P}^1$ totally branched on $\cup_{i=1}^{2k}P_i$ and simply branched on $\cup_{j=1}^{2h}Q_j$. The genus of the curve $C$ is given by $2g(C)-2=-6+2(2k)+2h$, so $g(C)=2k+h-2\geq 1$. Now we consider the fiber product $$\xymatrix{ S\times_{\mathbb{P}^1}C\ar[r]\ar[d]&S\ar[d]^{\mathcal{E}}\\ C\ar[r]^{g}&\mathbb{P}^1}$$ If the fibers of $\mathcal{E}$ over the points $P_i$ and $Q_j$ are smooth, the surface $X:=S\times_{\mathbb{P}^1}C$ is smooth and it is a non Galois triple cover of $S$ totally branched over $\cup_i\mathcal{E}^{-1}(P_i)$ and simply branched over $\cup_j\mathcal{E}^{-1}(Q_j)$.
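The genus computation above is the Riemann--Hurwitz count for the degree 3 map $g\colon C\rightarrow\mathbb{P}^1$: each of the $2k$ totally branched points contributes $2$ to the ramification and each of the $2h$ simply branched points contributes $1$, so
\[
2g(C)-2=3\left(2g(\mathbb{P}^1)-2\right)+2\cdot(2k)+1\cdot(2h)=-6+4k+2h.
\]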
The fibration $\mathcal{E}$ induces a fibration $\mathcal{E}_X:X\rightarrow C$ whose generic fiber is a smooth genus 1 curve and which has $2h$ fibers with multiplicity 2. It holds $h^{1,0}(X)\geq g(C)\geq 1$. The surface $X$ is necessarily properly elliptic, i.e. $\kappa(X)=1$, cf. \ref{prop_kodairaDim} and \cite[Lemma III.4.6]{M89}. \end{say} \begin{say}\label{say:non Galois split K3 Kod=2}{\bf A split non Galois triple cover of a K3 surface with a surface of general type.} Let $S$ be a K3 surface which admits an even set of $k$ disjoint rational curves $N_i$, so either $k=8$ or $k=16$. There exists a pseudoample divisor $H$ contained in $\langle N_1,\ldots , N_k\rangle^{\perp_{NS(S)}}$ with $H^2=2h$ for a positive number $h$. Then, putting $C'=3H$, $B'=0$, $B''=\sum_i N_i$ and $C''=0$, one obtains the data of a split non Galois triple cover: $$L=\frac{C'}{3}+\frac{B''}{2}=H+(\sum_{i=1}^{k}N_i)/2,\ \ M=\frac{2C'}{3}=2H.$$ We consider the rank 2 vector bundle $\mathscr{E}=\mathcal{O}(-L)\oplus \mathcal{O}(-M)$. Since $S^3(\mathscr{E}^{\vee})\otimes \bigwedge^2\mathscr{E}=\mathcal{O}(2L-M)\oplus \mathcal{O}(2M-L)\oplus \mathcal{O}(L)\oplus \mathcal{O}(M)$ admits global sections, $\mathscr{E}$ is the Tschirnhausen bundle of a triple cover, see Theorem \ref{teo.miranda}. This triple cover, denoted by $f:X\rightarrow S$, is totally ramified on $C'$ (i.e. on a curve contained in the linear system $|3H|$) and simply ramified on $\cup_i N_i$. With the notation of \eqref{diag split triple cover}, one has that $W$ is a non-minimal surface and its minimal model is obtained by contracting $k$ $(-1)$-curves. The minimal model of $W$ is a K3 surface or an Abelian surface according to whether $k=8$ or $k=16$.
In particular, denoting by $E_i$ the $(-1)$-curves on $W$, one has $$K_W=-\sum_{i=1}^kE_i,\ \ K_WK_W=-k,\ \ \chi(\mathcal{O}_W)=\frac{16-k}{4},\ \ h^{1,0}(W)=\frac{k-8}{4},\ \ e(W)=48-2k.$$ The Galois triple cover $\beta_2:Z\rightarrow W$ is branched on a curve in the linear system $|\beta_1^*(3H)|$. Let us assume that the branch locus is a smooth curve in this linear system. Hence $\beta_2:Z\rightarrow W$ is a smooth Galois triple cover, whose data are $(B,C,L,M)=(0,\beta_1^*C',\beta_1^*H,2\beta_1^*H)$ and whose invariants are $$\begin{array}{ll} \chi(\mathcal{O}_Z)=3\chi(\mathcal{O}_W)+\frac{1}{2}(\beta_1^*H)^2+\frac{1}{2}(2\beta_1^*H)^2=&\frac{48-3k}{4}+10h\\ K_Z^2=3K_W^2+2(\beta_1^*H)^2+2(2\beta_1^*H)^2+2(\beta_1^*H)^2=&-3k+48h\\ h^{1}(Z,\mathcal{O}_Z)=h^1(W,\mathcal{O}_W)+h^1(W,\beta_1^*H)+h^1(W,2\beta_1^*H)=&\frac{k-8}{4}\\ e(Z)=3e(W)+4((\beta_1^*H)^2+(2\beta_1^*H)^2)-2(\beta_1^*H)^2=&144-6k+72h \end{array} $$ We used that $\beta_1^*D\,\beta_1^*D=2D^2$ for every divisor $D\in Pic(S)$ and that $H$ is big and nef, so that $\beta_1^*H$ is big and nef and hence the vanishing theorems hold. Moreover, one obtains $$h^{2,0}(Z)=\chi(\mathcal{O}_Z)-1+h^{1,0}(Z)=9-\frac{k}{2}+10h.$$ Now $\alpha:Z\rightarrow X$ is a double cover branched on $k$ rational curves: $f^*(N_j)=A_j+2A_j'$, where $A_j$ and $A_j'$ are geometrically two copies of the rational curve $N_j$, one with multiplicity 2 and the other with multiplicity 1, and only one of them is a component of the branch locus of $\alpha$. We want to apply the formulae of \cite[Chapter 5, Section 22]{BHPV} to the double cover $Z\rightarrow X$. The branch locus $J$ is such that $-2k=(K_X+J)J$ by adjunction. This implies that $-k=K_XI+2I^2$, where $I$ is a divisor such that $2I=J$. By construction $f^{-1}(N_i)=M_i\cup M_i'$, where both $M_i$ and $M_i'$ are isomorphic to $N_i$, one of them has multiplicity 2 (because $N_i$ is contained in the simple ramification) and $M_i$ and $M_i'$ are disjoint. So $f^*(N_i)=M_i+2M_i'$.
Since $f$ is a $1:1$ map restricted to $M_i$ and $M_i'$, one obtains $f_*(M_i)=f_*(M_i')=N_i$ and, by the projection formula, $$(M_i+2M_i')M_i=f^*(N_i)M_i=N_if_*(M_i)=N_i^2=-2 \Rightarrow M_i^2=-2,$$ $$2(M_i')^2=(M_i+2M_i')M_i'=f^*(N_i)M_i'=N_if_*(M_i')=N_i^2=-2 \Rightarrow (M_i')^2=-1.$$ The branch locus of the cover $\alpha$ consists of the curves $M_i$, i.e. with the previous notation $$J=\sum_{i=1}^kM_i\mbox{ so }I=\left(\sum_{i=1}^kM_i\right)/2\mbox{ and }I^2=-\frac{k}{2}.$$ By $-k=K_XI+2I^2$ it follows that $K_XI=0$. Hence $$\chi(\mathcal{O}_Z)=\frac{48-3k}{4}+10h=2\chi(\mathcal{O}_X)+\frac{1}{2}K_XI+\frac{1}{2}I^2=2\chi(\mathcal{O}_X)-\frac{k}{4},$$ $$K_Z^2=-3k+48h=2K_X^2+4K_XI+2I^2=2K_X^2-k,$$ $$e(Z)=144-6k+72h=2e(X)+2K_XI+4I^2=2e(X)-2k.$$ So the invariants of the surface $X$ are $$e(X)=72+36h-2k,\ \ K_X^2=-k+24h,\ \ \chi(\mathcal{O}_X)=(24-k)/4+5h.$$ The surface $X$ is non-minimal, since it contains at least $k$ $(-1)$-curves. We observe that in any case $K_X^2>0$ and $\kappa(X)\geq 0$, hence $X$ is of general type. We notice that $h^{2,0}(Z)\geq 11$ and, by choosing $h$ big enough, $h^{2,0}(Z)$ and $h^{2,0}(X)$ are arbitrarily large. \end{say} \section{Triple covers of K3 surfaces: the non split case}\label{sec: triple cover non split} The most general situation for a triple cover $f\colon X \rightarrow S$ is when the Tschirnhausen bundle $\mathscr{E}$ is indecomposable; in particular the cover is non Galois. The main question that one has to address is the existence of the Tschirnhausen bundle $\mathscr{E}$, and this boils down to the study of rank 2 indecomposable vector bundles $\mathscr{E}$ on a K3 surface which satisfy the further condition $H^0(S, \, S^3 \mathscr{E}^{\vee} \otimes \bigwedge^2\mathscr{E}) \neq 0$ given in Theorem \ref{teo.miranda}.
A standard approach (see e.g. \cite[Section 2]{PP13}) is to construct the Tschirnhausen bundle $\mathscr{E}$ exploiting the Cayley--Bacharach property (CB) of some $0$-dimensional subscheme (see also \cite[Page 36]{F98} and \cite{Ca90}), which we recall for convenience: \begin{theorem}\cite[Theorem 5.1.1]{HL} Let $Z \subset S$ be a local complete intersection of codimension two, and let $\mathcal{L}$ and $\mathcal{M}$ be line bundles on $S$. Then there exists an extension \begin{equation*} 0\rightarrow \mathcal{L} \rightarrow \mathscr{E}^{\vee}\rightarrow \mathcal{M}\otimes \mathcal{I}_Z\rightarrow 0 \end{equation*} such that $\mathscr{E}$ is locally free if and only if the pair $(\mathcal{L}^{-1}\otimes \mathcal{M} \otimes K_S,Z)$ has the Cayley--Bacharach property: {\bf (CB)} If $Z' \subset Z$ is a subscheme with $\ell(Z') = \ell(Z) - 1$ and $s \in H^0(S, \mathcal{L}^{-1}\otimes \mathcal{M} \otimes K_S)$ with $s|_{Z'}=0$, then $s|_Z=0$. \end{theorem} We consider \begin{equation}\label{eq_CB} 0\rightarrow \mathcal{L} \rightarrow \mathscr{E}^{\vee}\rightarrow \mathcal{M}\otimes \mathcal{I}_Z\rightarrow 0 \end{equation} where $\mathcal{L}$ and $\mathcal{M}$ are line bundles on a K3 surface $S$ and $Z$ is a 0-cycle. Notice that if $Z=\emptyset$ and the sequence \eqref{eq_CB} splits, then $\mathscr{E}^{\vee} = \mathcal{L} \oplus \mathcal{M}$ and we are back to the cases treated in the previous sections. Therefore, we would like to assume that $Z \neq \emptyset$ and that the sequence \eqref{eq_CB} does not split. First we discuss some criteria which assure the existence of the triple cover associated to \eqref{eq: THE extension}, then we apply them to some possible choices of the triple $(\mathcal{L},\mathcal{M},Z)$. The following proposition gives conditions on $\mathcal{L}^{\vee}\otimes \mathcal{M}$ which assure the existence of the sequence \eqref{eq_CB}.
\begin{prop}\label{prop_exExt_Gen} If $h^0(S,\mathcal{L}^{\vee}\otimes \mathcal{M})=0$ and $h^1(S,\mathcal{L}^{\vee}\otimes \mathcal{M})\neq 0$, then $\mathrm{Ext}^1(\mathcal{M}\otimes \mathcal{I}_Z, \mathcal{L}) \neq 0$ and the extension \eqref{eq_CB} exists. In particular if $h^1(S,\mathcal{L}^{\vee}\otimes \mathcal{M})=1$ the extension is unique. \end{prop} \begin{proof} Let $\mathcal{G}:=\mathcal{L}$ and $\mathcal{F}:=\mathcal{L}^{\vee}\otimes \mathcal{M} \otimes \mathcal{I}_Z$. We want to prove that $\mathrm{Ext}^1(\mathcal{F}\otimes \mathcal{G}, \mathcal{G}) \neq 0$. By Serre duality we have: \[ \mathrm{Ext}^1(\mathcal{F}\otimes \mathcal{G}, \mathcal{G})=\mathrm{Ext}^1(\mathcal{F}, \mathcal{O})\cong H^1(\mathcal{F})^{\vee}=H^1(\mathcal{L}^{\vee}\otimes \mathcal{M} \otimes \mathcal{I}_Z)^{\vee}. \] Now consider the fundamental exact sequence of the scheme $Z$: \[ 0 \rightarrow \mathcal{I}_Z \rightarrow \mathcal{O}_S \rightarrow \mathcal{O}_Z \rightarrow 0. \] Tensoring it with $\mathcal{L}^{\vee}\otimes \mathcal{M}$ we get \[ 0 \rightarrow \mathcal{I}_Z\otimes\mathcal{L}^{\vee}\otimes \mathcal{M} \rightarrow \mathcal{L}^{\vee}\otimes \mathcal{M}\rightarrow \mathcal{O}_Z\otimes \mathcal{L}^{\vee}\otimes \mathcal{M} \rightarrow 0. \] Since $\mathcal{O}_Z\otimes \mathcal{L}^{\vee}\otimes \mathcal{M}$ is a skyscraper sheaf, $h^1(\mathcal{O}_Z\otimes \mathcal{L}^{\vee}\otimes \mathcal{M})=0$, and the long exact sequence in cohomology gives a surjection \[ H^1(\mathcal{I}_Z\otimes\mathcal{L}^{\vee}\otimes \mathcal{M}) \rightarrow H^1(\mathcal{L}^{\vee}\otimes \mathcal{M}) \rightarrow 0. \] So $$\dim\mathrm{Ext}^1(\mathcal{F}\otimes \mathcal{G}, \mathcal{G})=h^1(\mathcal{I}_Z\otimes\mathcal{L}^{\vee}\otimes \mathcal{M})\geq h^1(\mathcal{L}^{\vee}\otimes \mathcal{M})$$ and the claim follows.
\end{proof} We observe that, by Serre duality on a K3 surface, $$h^1(S,\mathcal{L}^\vee\otimes\mathcal{M})=h^1(S,\mathcal{L}\otimes \mathcal{M}^\vee),$$ so one can substitute the hypothesis $h^1(\mathcal{L}^\vee\otimes\mathcal{M})\neq 0$ with $h^1(\mathcal{L}\otimes \mathcal{M}^\vee)\neq 0$. \begin{theorem}\label{theor: existence triple tschi} Let $S$ be a K3 surface, $Z$ a non-empty $0$-dimensional subscheme of $S$, and $\mathcal{L}$, $\mathcal{M}$ two line bundles such that:\begin{itemize}\item $h^0(S,\mathcal{L}^{\vee}\otimes \mathcal{M})=0$;\item $h^1(S,\mathcal{L}^{\vee}\otimes \mathcal{M})=h^1(S,\mathcal{L}\otimes \mathcal{M}^{\vee})\neq 0$;\item $h^0(S,\mathcal{L}^{\otimes 2}\otimes \mathcal{M}^{\vee})\geq 1$.\end{itemize} Then there exists a triple cover $X\rightarrow S$ with Tschirnhausen bundle $\mathscr{E}$ defined by \eqref{eq_CB}, for any possible choice of $Z$. \end{theorem} \begin{proof} The condition (CB) is automatically satisfied if $h^0(\mathcal{L}^\vee\otimes \mathcal{M})=0$ (see \cite[Theorem 12]{F98}), and by Proposition \ref{prop_exExt_Gen} $\mathscr{E}$ exists and is locally free, and its dual as well. To assure the existence of the triple cover we have to prove that \[ h^0(S, \, S^3\mathscr{E}^{\vee} \otimes \bigwedge^2 \mathscr{E} ) \neq 0. \] We apply the Eagon--Northcott complex (see e.g. \cite[Appendix 2]{E95} and \cite[Lemma 4.7]{CT07}) to the sequence \eqref{eq_CB} and we obtain \[ 0 \rightarrow \mathcal{L} \otimes S^2\mathscr{E}^{\vee} \rightarrow S^3\mathscr{E}^{\vee} \rightarrow \mathcal{M}^3 \otimes \mathcal{I}_Z^3 \rightarrow 0. \] Now, let us tensor the previous sequence by $ \Lambda^2\mathscr{E} \cong \mathcal{L}^{-1} \otimes \mathcal{M}^{-1}$ and we get \[ 0 \rightarrow S^2\mathscr{E}^{\vee} \otimes \mathcal{M}^{-1} \rightarrow S^3\mathscr{E}^{\vee} \otimes \Lambda^2\mathscr{E}\rightarrow \mathcal{M}^2\otimes \mathcal{L}^{-1} \otimes \mathcal{I}^3_Z \rightarrow 0.
\] So, if we prove that $S^2\mathscr{E}^{\vee} \otimes \mathcal{M}^{-1}$ has a global section we are done. To do so we apply the Eagon--Northcott complex to the sequence \eqref{eq_CB} and we tensor it by $\mathcal{M}^{-1}$. We have \[ 0 \rightarrow \mathcal{L} \otimes \mathscr{E}^{\vee} \otimes \mathcal{M}^{-1} \rightarrow S^2\mathscr{E}^{\vee} \otimes \mathcal{M}^{-1} \rightarrow \mathcal{M} \otimes \mathcal{I}_Z^2 \rightarrow 0. \] As a last step we show that $\mathcal{L} \otimes \mathscr{E}^{\vee} \otimes \mathcal{M}^{-1}$ has a global section. This is true by hypothesis and by the short exact sequence \[ 0\rightarrow \mathcal{L}^2 \otimes \mathcal{M}^{-1} \rightarrow \mathscr{E}^{\vee} \otimes \mathcal{L} \otimes \mathcal{M}^{-1} \rightarrow \mathcal{L}\otimes \mathcal{I}_Z\rightarrow 0 \] obtained by tensoring the sequence \eqref{eq_CB} by $\mathcal{L} \otimes \mathcal{M}^{-1}$. Therefore, we have $h^0(S^3\mathscr{E}^{\vee} \otimes \bigwedge^2 \mathscr{E}) \geq 1$. \end{proof} Now we give some explicit examples, choosing the line bundles $\mathcal{L}$ and $\mathcal{M}$. Let $C$ be a smooth genus 1 curve on $S$, and hence a fiber of an elliptic fibration on $S$. We set $$\mathcal{L}=\mathcal{O}_S(nC),\ \ \mathcal{M}=\mathcal{O}_S(mC).$$ If $m-n<0$ then $h^0(\mathcal{L}^{\vee}\otimes\mathcal{M})=0$; if $n-m\geq 2$ then $h^1(\mathcal{L}\otimes \mathcal{M}^{\vee})\neq 0$ and $h^0(\mathcal{L}^{\otimes 2}\otimes \mathcal{M}^{\vee})\geq 1$. Hence if $n\geq m+2$ the hypotheses of Theorem \ref{theor: existence triple tschi} are satisfied and the triple cover $X\rightarrow S$ exists. If $n=m+2$, then the extension given by \eqref{eq_CB} is unique. So we now discuss the triple cover associated with the sequence \begin{equation}\label{eq: THE extension}0\rightarrow \mathcal{O}_S(nC)\rightarrow \mathscr{E}\rightarrow \mathcal{I}_Z(mC)\rightarrow 0,\ \ \ n\geq m+2,\end{equation} and in particular we analyze it according to the choice of the 0-cycle $Z$.
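These cohomological claims can be verified using the well known fact recalled above, namely $h^1(S,kC)=k-1$ for $k\geq 1$, together with the Riemann--Roch theorem, which then gives $h^0(S,kC)=\chi(kC)+h^1(S,kC)=k+1$: one has
\[
h^0\big(\mathcal{O}_S((m-n)C)\big)=0 \mbox{ for } m<n, \qquad h^1\big(\mathcal{O}_S((m-n)C)\big)=h^1\big(\mathcal{O}_S((n-m)C)\big)=n-m-1,
\]
which is non-zero exactly when $n-m\geq 2$, and $h^0\big(\mathcal{O}_S((2n-m)C)\big)=2n-m+1\geq 1$.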
\begin{lemma}\label{lemma: l(Z)=1} Let $S$ be an elliptic K3 surface with elliptic fibration $\varphi_{|C|}:S\rightarrow\mathbb{P}^1$. Let us assume that $Z$ is a 0-cycle on $S$ such that $\ell(Z)=1$. \begin{itemize} \item If $m \geq 2$ then $h^0\big(\mathcal{I}_Z(mC)\big)\neq 0$ and $h^1\big(\mathcal{I}_Z(mC)\big)\neq 0$. \item If $m =1$ then $h^0\big(\mathcal{I}_Z(C)\big)=1$ and $h^1\big(\mathcal{I}_Z(C)\big)=0$. \end{itemize} \end{lemma} \proof Since $\ell(Z)=1$, the subscheme $Z$ consists of a single point $p$. By \[ 0 \rightarrow \mathcal{I}_p\big(mC\big) \rightarrow \mathcal{O}_S\big(mC\big) \rightarrow \mathcal{O}_p\big(mC\big) \rightarrow 0 \] one obtains the long exact sequence \begin{equation}\label{eq_longLz=1} \begin{split} 0\rightarrow H^0\left(\mathcal{I}_p(mC)\right) \rightarrow H^0\left(\mathcal{O}_S(mC)\right)& \rightarrow H^0\big(\mathcal{O}_p(mC)\big) \rightarrow \\ \rightarrow H^1(\mathcal{I}_p\big(mC\big)) \rightarrow H^1(\mathcal{O}_S\big(mC\big)) &\rightarrow 0, \end{split} \end{equation} which is \[ 0\rightarrow H^0\left(\mathcal{I}_p(mC)\right) \rightarrow \mathbb{C}^{m+1}\rightarrow \mathbb{C}\rightarrow H^1(\mathcal{I}_p\big(mC\big)) \rightarrow \mathbb{C}^{m-1}\rightarrow 0. \] This yields at once the first statement. For the second one, let us substitute the value $m=1$ in \eqref{eq_longLz=1} and get \[ 0\rightarrow H^0\left(\mathcal{I}_p(C)\right) \rightarrow H^0\left(\mathcal{O}(C)\right)\stackrel{\psi}{\rightarrow} H^0\big(\mathcal{O}_p(C)\big) \cong \mathbb{C}\rightarrow H^1(\mathcal{I}_p\big(C\big)) \rightarrow 0. \] By \cite[Proposition 3.10]{H} the linear system $|C|$ is base point free, hence $\psi$ -- which is an evaluation map -- is not the zero map, and this concludes the proof. \endproof \begin{lemma}\label{lemma: l(Z)=2} Let $S$ be an elliptic K3 surface with elliptic fibration $\varphi_{|C|}:S\rightarrow\mathbb{P}^1$. Let us assume that $Z$ is a 0-cycle on $S$ such that $\ell(Z)=2$.
\begin{itemize} \item If $m \geq 2$ then $h^0\big(\mathcal{I}_Z(mC)\big)\neq 0$ and $h^1\big(\mathcal{I}_Z(mC)\big)\neq 0$. \item If $m =1$ then $h^0\big(\mathcal{I}_Z(C)\big)=h^1\big(\mathcal{I}_Z(C)\big)=0$ if $Z$ consists of $2$ distinct smooth points $z_1$ and $z_2$ which do not lie on the same fiber of the fibration. \item If $m =1$ then $h^0\big(\mathcal{I}_Z(C)\big)=h^1\big(\mathcal{I}_Z(C)\big)=1$ if $Z$ consists of $2$ distinct smooth points $z_1$ and $z_2$ which lie on the same fiber of the fibration or $Z$ is a single double point. \end{itemize} \end{lemma} \proof The first statement is proven exactly in the same way as in Lemma \ref{lemma: l(Z)=1}. Let $m=1$; then $H^0(\mathcal{O}_S(C)) \cong \mathbb{C}^2$ and also $H^0\big(\mathcal{O}_Z(C)\big) \cong \mathbb{C}^2$. In the long exact sequence \begin{equation*} 0\rightarrow H^0\left(\mathcal{I}_Z(C)\right) \rightarrow H^0\left(\mathcal{O}_S(C)\right) \stackrel{\psi}{\rightarrow} H^0\big(\mathcal{O}_Z(C)\big) \rightarrow H^1(\mathcal{I}_Z\big(C\big)) \rightarrow H^1(\mathcal{O}_S\big(C\big)) = 0, \end{equation*} the map $\psi$ is the evaluation map $ev\colon s \mapsto s(x)$, $x \in Z$, which is the zero map if and only if $Z$ is contained in the base locus of $|C|$. By \cite[Proposition 3.10]{H} the linear system $|C|$ is base point free. This yields that $h^0(\mathcal{I}_Z(C)) \leq 1$. There are two cases: \begin{enumerate} \item there is a section $s \in H^0(\mathcal{O}_S(C))$ which passes through $Z$; in this case $Z$ consists either of $2$ distinct smooth points $z_1$ and $z_2$ which lie on the same fiber of $\varphi_{|C|}$ or $Z$ is a single double point, and \[ H^0\left(\mathcal{I}_Z(C)\right) \cong \langle s\rangle. \] \item no section passes through $Z$; in this case $Z$ consists of $2$ distinct smooth points $z_1$ and $z_2$ which do not lie on the same fiber of the fibration and $H^0\left(\mathcal{I}_Z(C)\right)=0$.
\end{enumerate} In both cases the value of $h^1\big(\mathcal{I}_Z(C)\big)$ follows from the exactness of the sequence above. \endproof \begin{lemma} The total Chern class of the vector bundle $\mathscr{E}$, determined by the extension \eqref{eq: THE extension}, is $c(\mathscr{E})=(1,(n+m)C,\ell(Z))$. \end{lemma} \proof We use $ch(C\otimes\mathcal{I}_Z)=ch(C)\,ch(\mathcal{I}_Z)$. Recalling that \[ ch(V)=\left(rk(V),c_1(V),\frac{1}{2}\left(c_1^2(V)-2c_2(V)\right)\right), \] one has $ch(C)=(1,C,0)$, $ch(\mathcal{I}_Z)=(1,0,-\ell(Z))$ and thus $$ch(C\otimes\mathcal{I}_Z)=(1,C,-\ell(Z)),\mbox{ so }c(C\otimes\mathcal{I}_Z)=(1,C,\ell(Z)).$$ Since $c(\mathscr{E})=c(nC)c(mC\otimes\mathcal{I}_Z)$ (see e.g. \cite[Section 5]{HL}), one obtains \[ c(\mathscr{E})=(1,nC,0)(1,mC,\ell(Z))=(1,(n+m)C,\ell(Z)). \] \endproof \begin{lemma} \label{lemma (n,m,l)=(3,1,2)} Let $S$ be an elliptic K3 surface with elliptic fibration $\varphi_{|C|}:S\rightarrow\mathbb{P}^1$. Let $(n,m,\ell(Z))=(3,1,2)$. If $Z$ is supported on two points $z_1$, $z_2$ which do not lie on the same fiber of the fibration $\varphi_{|C|}:S\rightarrow\mathbb{P}^1$, then $h^0(\mathscr{E})=4$ and $h^1(\mathscr{E})=2$. \end{lemma} \proof Suppose $(n,m,\ell(Z))=(3,1,2)$. The computation of $h^i(\mathscr{E})$ is based on the sequence $$0\rightarrow H^0(3C) \cong \mathbb{C}^4 \rightarrow H^0(\mathscr{E})\rightarrow H^0(\mathcal{I}_{Z}(C))\rightarrow H^1(3C) \cong \mathbb{C}^2 \rightarrow H^1(\mathscr{E})\rightarrow H^1(\mathcal{I}_Z(C)) \rightarrow 0.$$ If $Z$ consists of $2$ distinct smooth points $z_1$ and $z_2$ which do not lie on the same fiber of the fibration, then by Lemma \ref{lemma: l(Z)=2} we have $H^0\big(\mathcal{I}_{Z}(C)\big)=H^1\big(\mathcal{I}_{Z}(C)\big)=0$, so $\left(h^0(\mathscr{E}),h^1(\mathscr{E})\right)=\left(4,2\right)$. \endproof \begin{prop} Let us suppose that $S$ is an elliptic K3 surface with elliptic fibration $\varphi_{|C|}:S\rightarrow\mathbb{P}^1$ and that $(n,m,\ell(Z))=(3,1,2)$.
Moreover let us assume that $Z$ is supported on two points $z_1$, $z_2$ which do not lie on the same fiber of the fibration $\varphi_{|C|}:S\rightarrow\mathbb{P}^1$. Then there exists a properly elliptic surface $X$ which is a non Galois triple cover of an elliptic K3 surface, such that $(h^{1,0}(X),h^{2,0}(X))=(3,6)$. \end{prop} \begin{proof} The existence of the triple cover follows by Theorem \ref{theor: existence triple tschi}, hence the surface $X$ exists. Moreover, the birational numerical invariants of $X$ are given by Proposition \ref{prop.invariants} $(i)$ with the information given in Lemma \ref{lemma (n,m,l)=(3,1,2)}. By Proposition \ref{thm_Branch} the branch divisor of $f\colon X \rightarrow S$ is given by $\Lambda^2\mathscr{E}^{-2}\simeq \mathcal{O}_S(8C)$. Finally, by Proposition \ref{prop_kodairaDim} $X$ is a properly elliptic surface. \end{proof} \begin{rem}{\rm The triple cover $X\rightarrow S$ is not induced by a base change $g:C\rightarrow\mathbb{P}^1$ (for a certain curve $C$) as in \ref{say:non Galois split K3 Kod=1} and in Proposition \ref{prop: examples case 3 of proposition}, because otherwise $\mathscr{E}$ would split.} \end{rem} Another possible choice of $\mathcal{L}$ and $\mathcal{M}$ in the sequence \eqref{eq_CB} is presented in the following corollary. \begin{corollary} Let $S$ be a K3 surface with an irreducible curve $C$ of self-intersection $2d>0$ such that there exist $h$ disjoint rational curves $R_i$, also disjoint from $C$. If $h\leq 10$ a K3 surface with this configuration of curves exists. If $2\leq h\leq 10$ and $(9h-1)/4\leq d\leq 4h-3$ there exists a non Galois triple cover $f:X\rightarrow S$ whose Tschirnhausen bundle is determined by the sequence: $$0\rightarrow \mathcal{O}_S\left(C-\sum_{i=1}^hR_i\right)\rightarrow \mathscr{E}^{\vee}\rightarrow \mathcal{I}_Z\left(\sum_{i=1}^hR_i\right)\rightarrow 0.$$ The surface $X$ is a surface of general type.
\end{corollary} \begin{proof} The existence of $S$ depends on the existence of a primitive embedding of the lattice spanned by $C$ and the $R_i$ in the K3 lattice, which is guaranteed if $h\leq 10$, because in this case the lattice has rank at most 11. The genus of the curve $C$ is $g(C)=d+1>1$, by hypothesis. We set $$\mathcal{L}=\mathcal{O}_S\left(C-\sum_{i=1}^hR_i\right),\ \ \ \mathcal{M}=\mathcal{O}_S\left(\sum_{i=1}^h R_i\right).$$ Since $\mathcal{L}^{\vee}\otimes \mathcal{M}=\mathcal{O}_S\left(-C+2\sum_{i=1}^hR_i\right)$, one has $h^0(\mathcal{L}^{\vee}\otimes \mathcal{M})=0$. Indeed, $C$ is an irreducible divisor with positive self-intersection, hence it intersects every effective divisor non-negatively, but $\left(-C+2\sum_{i=1}^hR_i\right)C=-C^2<0$. Moreover, $\left(-C+2\sum_{i=1}^hR_i\right)^2=2d-8h$, and if $d\leq 4h-3$, then $\left(-C+2\sum_{i=1}^hR_i\right)^2\leq -6$. By the Riemann--Roch theorem one obtains $h^1(\mathcal{L}^{\vee}\otimes \mathcal{M})\neq 0$. Moreover, $$\mathcal{L}^{\otimes 2}\otimes \mathcal{M}^{\vee}=\mathcal{O}_S\left(2C-3\sum_{i=1}^hR_i\right).$$ If $d\geq (9h-1)/4$, then $\left(2C-3\sum_{i=1}^hR_i\right)^2=8d-18h\geq -2$, and since $\left(2C-3\sum_{i=1}^hR_i\right)C>0$, one has $h^0(\mathcal{L}^{\otimes 2}\otimes \mathcal{M}^{\vee})\geq 1$. We conclude that the triple cover exists by Theorem \ref{theor: existence triple tschi}. Since the branch divisor is $\wedge^2\mathscr{E}^{-2}=(\mathcal{L}\otimes \mathcal{M})^{\otimes 2}=\mathcal{O}_S(2C)$, if the branch locus were not reduced, it would be $2C_q$, where $C_q$ is a specific curve in $|C|$. Moreover, every global section of $S^3\mathscr{E}\otimes \bigwedge^2\mathscr{E}^{\vee}$ would vanish along $C_q$. In particular, also every global section of $\mathcal{L}^{2}\otimes \mathcal{M}^{\vee}$ would vanish along $C_q$.
But this implies that $$H^0(\mathcal{O}_S(2C-3\sum R_i))=H^0(\mathcal{L}^{2}\otimes \mathcal{M}^{\vee})=H^0(\mathcal{L}^{2}\otimes \mathcal{M}^{\vee}\otimes\mathcal{O}_S(-C_q))=H^0(\mathcal{O}_S(C-3\sum R_i)),$$ which is not the case. In particular, $X$ is normal and the branch locus contains curves with positive genus, so $X$ is of general type. \end{proof} Alice Garbagnati, Universit\`a degli Studi di Milano, Dipartimento di Matematica \emph{``Federigo Enriques''}, I-20133 Milano, Italy \\ \emph{e-mail} \verb|[email protected]| Matteo Penegini, Universit\`a degli Studi di Genova, DIMA Dipartimento di Matematica, I-16146 Genova, Italy \\ \emph{e-mail} \verb|[email protected]| \end{document}
\begin{document} \author{C. G. Melles\\ Department of Mathematics\\ U. S. Naval Academy\\ Annapolis, MD 21402\\ [email protected] \and W. D. Joyner\\ [email protected]} \title{On $p$-ary Bent Functions \\ and Strongly Regular Graphs} \maketitle \begin{abstract} Our main result is a generalized Dillon-type theorem, giving graph-theoretic conditions which guarantee that a $p$-ary function in an even number of variables is bent, for $p$ a prime number greater than $2$. The key condition is that the component Cayley graphs associated to the values of the function are strongly regular, and either all of Latin square type, or all of negative Latin square type. Such a Latin or negative Latin square type bent function is regular or weakly regular, respectively. Its dual function has component Cayley graphs with the same parameters as those of the original function. We also give a criterion for bent functions involving structure constants of association schemes. We prove that if a $p$-ary function with component Cayley graphs of feasible degrees determines an amorphic association scheme, then it is bent. Since amorphic association schemes correspond to strongly regular graph decompositions of Latin or negative Latin square type, this result is equivalent to our main theorem. We show how to construct bent functions from orthogonal arrays and give some examples. \end{abstract} {\parindent0pt {\bf Keywords}: bent function, strongly regular graph, Latin square type graph, amorphic association scheme } \section{Introduction} Bent functions over a finite field can be thought of as maximally non-linear functions. They can be defined using Walsh transforms, but can also be studied using the combinatorics of their level sets, the parameters of certain associated Cayley graphs, and the algebras generated by the adjacency matrices of these graphs. 
Dillon \cite{D74} characterized bent Boolean functions as those whose supports form combinatorial structures known as difference sets of elementary Hadamard type. An alternative and closely related characterization is that a Boolean function is bent if and only if its Cayley graph is strongly regular with parameters $(\nu, k, \lambda, \mu)$ satisfying $\lambda = \mu$ (Bernasconi, Codenotti, and VanderKam \cite{BCV01}). These theorems do not generalize in an obvious way to primes $p$ greater than 2. The Cayley graphs associated with a bent $p$-ary function are not necessarily strongly regular (see, for example, \S \ref{subsec:GF52}). We consider $p$-ary functions over the finite field $GF(p)$ with $p$ elements, where $p$ is a prime number greater than 2. A function $f \colon GF(p)^{2m} \rightarrow GF(p)$ determines a collection of component Cayley graphs corresponding to the values of $f$. When $f$ is even, these graphs are undirected. We will usually also assume that $f$ vanishes at $0$ (a weak assumption, since adding a constant to a bent function results in another bent function). The component Cayley graphs are regular, with degrees determined by the sizes of the level sets of $f$. Our main result is a generalization of the theorems of Dillon and Bernasconi, Codenotti, and VanderKam in one direction. We prove that if the component Cayley graphs of $f$ are all strongly regular and are either all of Latin square type with feasible degrees, or all of negative Latin square type with feasible degrees, then $f$ is bent. The feasibility conditions are simply conditions arising from the possible sizes of the level sets of a bent function. The proof of our main theorem uses an expression for the Walsh transform of $f$ in terms of the eigenvalues of the component Cayley graphs.
When the component Cayley graphs are strongly regular and of feasible Latin or negative Latin square type, we obtain formulas for the eigenvalues and their multiplicities, which we use to calculate the values of the Walsh transform and to show that $f$ is bent. As a consequence of the proof outlined above, we also find that functions of feasible Latin square type are regular, and functions of feasible negative Latin square type are $(-1)$-weakly regular. In each case, the component Cayley graphs of the dual function are strongly regular, with the same parameters as those of the original function. The proof of this duality theorem uses a relationship between the component functions of the dual function and the Fourier transforms of the component functions of the original function. The papers of Gol'fand, Ivanov, and Klin \cite{GIK94}, van Dam \cite{vD03}, and van Dam and Muzychuk \cite{vDM10} describe the close relationship between graphs of Latin and negative Latin square type and amorphic association schemes. We say that a $p$-ary function $f$ is {\it amorphic} if its level sets determine an amorphic association scheme. In a previous paper, \cite{CJMPW16}, we showed that for any prime $p$ greater than 2, there are $(p+1)!/2$ bent amorphic functions of two variables with algebraic normal form homogeneous of degree $p-1$ and such that the level sets corresponding to nonzero elements of $GF(p)$ all have size $p-1$. In this paper, we generalize part of our previous result by proving that if an even $p$-ary function of $2m$ variables with level sets of feasible sizes is amorphic, then it is bent. The key to the proof is a criterion for a function to be bent involving sums of structure constants of an association scheme. We also use a result of Ito, Munemasa, and Yamada \cite{IMY91} describing the structure constants of an amorphic association scheme. 
In light of the relationship between Latin and negative Latin square type graphs and amorphic association schemes described in \cite{GIK94}, \cite{vD03}, and \cite{vDM10}, we see that a $p$-ary function is amorphic if and only if its component Cayley graphs are strongly regular and either all of Latin square type, or all of negative Latin square type. Thus, our criterion for bent functions involving eigenvalues and our criterion involving structure constants lead to equivalent theorems, proven by different methods. The existence of amorphic bent $p$-ary functions of $2m$ variables follows from the existence of $p$-class amorphic association schemes with corresponding graphs of appropriate degrees. Such amorphic association schemes can be constructed from orthogonal arrays of size $(N+1) \times N^2$, where $N=p^m$. Orthogonal arrays of these dimensions exist by a construction of Bush \cite{Bu52}. We describe a construction of amorphic bent functions of Latin square type from orthogonal arrays and give examples for $n=2$ and $4$, and $p=3$, 5, and 7. We also give examples of amorphic bent functions of negative Latin square type, and of bent functions whose component Cayley graphs are not all strongly regular. It would be interesting to find a combinatorial generalization of the theorems of Dillon and Bernasconi, Codenotti, and VanderKam in the other direction, giving simple graph-theoretic properties that the component Cayley graphs of a bent $p$-ary function must possess. In the case $p=3$, Tan, Pott, and Feng \cite{TPF10} show that if $f \colon GF(3^{2m} ) \rightarrow GF(3)$ is an even weakly regular bent function with $f(0)=0$, then the component Cayley graphs of $f$ are strongly regular and either all of Latin square or all of negative Latin square type. Using the theory of quadratic residues, Chee, Tan, and Zhang \cite{CTZ11} and Feng, Wen, Xiang, and Yin \cite{FWXY13} generalize this result for $p$ a prime greater than 2. 
However, their decompositions for $p>3$ are not related to the component graphs studied in this paper. Our paper is structured as follows. In Section 2, we study the level sets of a $p$-ary function. We use a result of Kumar, Scholtz, and Welsh to calculate the possible sizes of the level sets of an even bent $p$-ary function of $2m$ variables which vanishes at 0. The component Cayley graphs and the corresponding component functions of a $p$-ary function are defined in Section 3. We explain why the eigenvalues of these graphs are the values of the Fourier transforms of the component functions. In Section 4, we state a criterion for a $p$-ary function to be bent, involving eigenvalues of the component Cayley graphs. We give formulas for the eigenvalues of the component Cayley graphs of a function of feasible Latin or negative Latin square type in Section 5. We revisit the theorems of Dillon and Bernasconi, Codenotti, and VanderKam in this context. In Section 6, we prove that $p$-ary functions of feasible Latin or negative Latin square type are bent. We describe their dual functions in Section 7. In Section 8, we discuss $p$-ary functions that determine association schemes. We state a structure constant criterion for such a function to be bent. Using this criterion, we prove that amorphic functions with component Cayley graphs of feasible degrees are bent. In Section 9, we describe how to construct amorphic bent functions of Latin square type from orthogonal arrays. Section 10 is devoted to examples, most of which were constructed with the aid of computers. We conclude with some questions and ideas for further study. \section{Sizes of level sets of bent functions} \label{sec:2} In this section, we study even $p$-ary functions of an even number of variables, vanishing at 0. We obtain necessary, but not sufficient, conditions for such a function to be bent by considering the sizes of its level sets. We refer to these conditions as feasibility conditions.
The feasibility conditions are derived from the possible sizes of the level sets in the bent case, which we calculate using a result of Kumar, Scholtz, and Welsh \cite{KSW85}. We also state the feasibility conditions, equivalently, in terms of the degrees of a function's component Cayley graphs, in \S \ref{subsec:feasible-degrees}. We fix, once and for all, an ordering for $GF(p)^n$. That ordering will be used for all vectors whose coordinates are indexed by $GF(p)^n$, all matrices whose entries are indexed by $GF(p)^n \times GF(p)^n$, and all vertices of associated component Cayley graphs defined below. We routinely identify the elements of $GF(p)$ with $\{0, 1, 2, \ldots , p-1\}$. \subsection{The Walsh transform} Let $p$ be a prime number, and let $\zeta$ be the $p$th root of unity given by \[ \zeta = e^{\frac {2\pi i} p}. \] Let $n$ be a positive integer. A $p$-ary function $f \colon GF(p)^n \to GF(p)$ determines a well-defined complex-valued function $\zeta^f \colon GF(p)^n \rightarrow \mathbb{C}$. The {\it Walsh} or {\it Walsh-Hadamard transform} of $f$ is defined to be the function $W_f \colon GF(p)^n \rightarrow \mathbb{C}$ given by \begin{equation*} W_f(x) = \sum_{y \in GF(p)^n} \zeta^{f(y)- \langle x,y\rangle}, \end{equation*} where $\langle \ , \ \rangle$ is the usual inner product on $GF(p)^n$. \subsection{The Fourier transform} If $g \colon GF(p)^n \rightarrow \mathbb{C}$ is a complex-valued function on $GF(p)^n$, the {\it Fourier transform} of $g$ is the function $\hat g \colon GF(p)^n \rightarrow \mathbb{C}$ given by \begin{equation*} \hat {g}(x) = \sum_{y \in GF(p)^n} g(y) \zeta^{- \langle x,y\rangle}. \end{equation*} Thus, the Walsh transform of $f \colon GF(p)^n \rightarrow GF(p)$ is the Fourier transform of $\zeta^f \colon GF(p)^n \rightarrow \mathbb{C}$. \subsection{Bent functions} Let $f \colon GF(p)^n \to GF(p)$ be a $p$-ary function. We say that $f$ is {\it bent} if \[ |W_f(x)|=p^{\frac n 2}, \] for all $x$ in $GF(p)^n$. 
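These definitions are easy to experiment with numerically. The following sketch is our own illustration (not part of the paper's formal development; all names are ad hoc): it computes the Walsh transform directly from the definition for the even $3$-ary function $-x_0^2+x_1^2$, which appears as an example later in the paper, and checks the bentness condition $|W_f(x)|=p^{n/2}$.

```python
# Minimal numerical sketch (illustration only): the Walsh transform
# W_f(x) = sum_y zeta^{f(y) - <x,y>} and the bentness test |W_f(x)| = p^{n/2}.
import cmath
from itertools import product

p, n = 3, 2
zeta = cmath.exp(2j * cmath.pi / p)

def walsh(f):
    """Walsh transform of f : GF(p)^n -> GF(p), as a dict x -> W_f(x)."""
    pts = list(product(range(p), repeat=n))
    return {x: sum(zeta ** ((f(y) - sum(a * b for a, b in zip(x, y))) % p)
                   for y in pts)
            for x in pts}

# The even function f(x0, x1) = -x0^2 + x1^2 with f(0) = 0, used as an
# example later in the paper; it is bent, with |W_f(x)| = 3 = p^{n/2}.
f = lambda y: (-y[0] ** 2 + y[1] ** 2) % p
W = walsh(f)
assert all(abs(abs(w) - p ** (n / 2)) < 1e-9 for w in W.values())
```

Note that $W_f(0)=3$ here, in accordance with the value $N=\pm p^m$ discussed in the next subsections.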
We note that if $p$-ary functions $f_1$ and $f_2$ differ by a constant element of $GF(p)$, then $f_1$ is bent if and only if $f_2$ is bent, so from now on we will assume that $f(0)=0$. We will also assume that $f$ is {\it even}, i.e., $f(x)=f(-x)$, for all $x$ in $GF(p)^n$. When $f$ is even, the component Cayley graphs of $f$ are undirected (see \S \ref{subsec:cCg}). \subsection{Level sets of a $p$-ary function} The {\it level sets} of $f$, for $1 \leq i \leq p-1$, are the sets \[ D_i = \{x \in GF(p)^n \ | \ f(x) = i\}. \] For consistency with our later discussions of component Cayley graphs and association schemes, we define \[ D_p = \{x \in GF(p)^n \ | \ x \neq 0 \ \text{and} \ f(x) = 0\} \quad \text{and} \quad D_0=\{0\}. \] Thus, the level set $f^{-1}(0)$ is the union of $D_p$ and $\{ 0 \}$. In our discussion of the sizes of the level sets of $f$, it is convenient to express our results in terms of the sizes of the sets $D_i$, because these sizes are the degrees of the component Cayley graphs, which we define in \S \ref{subsec:cCg} below. \subsection{Level sets of a bent Boolean function} \label{subsec:bentBool} In the Boolean ($p=2$) case, the Walsh transform takes integer values. If $f \colon GF(2)^n \to GF(2)$ is bent, $n$ must be even, since $W_f(0) = \pm 2^{\frac n 2}$. Setting $m = \frac n 2$, we find that the only possible sizes for $D_1$ are \[ |D_1| = 2^{2m-1} \pm 2^{m-1}. \] \subsection{Kumar--Scholtz--Welsh theorem} The description of the Walsh transform in the next theorem is a key step in our calculation of the sizes of level sets of even bent $p$-ary functions. This theorem follows directly from a result of Kumar, Scholtz, and Welsh \cite[Property~7]{KSW85}, but we include a proof for the sake of completeness. \begin{theorem}[Kumar-Scholtz-Welsh] \label{propn:KSW} Suppose that $f \colon GF(p)^{2m} \rightarrow GF(p)$ is an even bent function, where $p$ is a prime number greater than $2$, and $m$ is a positive integer. 
Then, for every $x$ in $GF(p)^{2m}$, \[ W_f(x)=\pm \zeta^j p^m \] for some integer $j$ with $0 \leq j \leq p-1$. \end{theorem} \begin{proof} Fix $x$ in $GF(p)^{2m}$, and let $W=W_f(x)$. It is sufficient to show that $p^{-m}W$ is a root of unity in $\mathbb{Q} (\zeta)$, since the only roots of unity in $\mathbb{Q} (\zeta)$ are those of the form $\pm \zeta^j$ (see, e.g., \cite[p.~158]{BS66} or \cite[Corollary 3.5.12]{C07}). We will first show that $p^{-m}W$ is an element of $\mathbb{Z}[\zeta]$, and then use a theorem of Kronecker \cite{K1857}, which states that an element of $\mathbb{Z}[\zeta]$, all of whose conjugates have magnitude 1, is a root of unity in $\mathbb{Q} (\zeta)$ (for a more accessible source, see \cite[Corollary 3.3.10]{C07}). For $\alpha$ in $\mathbb{Z} [\zeta]$, let $\langle \alpha \rangle$ denote the principal ideal generated by $\alpha$ in $\mathbb{Z} [\zeta]$. Note that the ideal $\langle p \rangle$ has a factorization as $\langle p \rangle=\langle 1 - \zeta \rangle^{p-1}$, and the ideal $\langle 1-\zeta \rangle$ in $\mathbb{Z} [\zeta]$ is prime (see, e.g., \cite[p.~157]{BS66} or \cite[Lemma 1.4]{W82}). Let $\overline W$ be the complex conjugate of $W$. Since $f$ is bent, $W \overline W = p^{2m}$. Thus, the ideal in $\mathbb{Z}[\zeta]$ generated by $W \overline W$ has a factorization into prime ideals as $\langle 1 - \zeta \rangle^{2m(p-1)}$. By unique factorization of ideals, $\langle W \rangle=\langle 1-\zeta \rangle^k$ and $\langle \overline W \rangle = \langle 1-\zeta \rangle^\ell$ for some nonnegative integers $k$ and $\ell$ with $k+\ell=2m(p-1)$. Since $\langle 1 - \overline \zeta\rangle = \langle 1 - \zeta\rangle$, we also have $\langle \overline W\rangle = \langle 1 - \overline \zeta\rangle^k = \langle 1 - \zeta\rangle^k$, so $k=\ell=m(p-1)$. Therefore, $\langle W\rangle = \langle \overline W\rangle = \langle 1 - \zeta \rangle^{m(p-1)} = \langle p^m \rangle$. It follows that $W = u p^m$ for some unit $u$ in $\mathbb{Z}[\zeta]$; since $|W|=p^m$, the unit $u$ has magnitude 1.
The conjugates of $u$ are the images of $u$ under the elements of the Galois group of $\mathbb{Q} (\zeta)$. This Galois group consists of the $p-1$ automorphisms $\sigma_k$ of $\mathbb{Q} (\zeta)$, which are determined by the equations $\sigma_k(\zeta)=\zeta^k$, for $1 \leq k \leq p-1$. It is straightforward to show that $\sigma_k(W_f(x)) = W_{kf}(kx)$. It can also be shown that $kf$ is bent, for $k$ in $\{1, 2, \dots , p-1\}$, for example, by using the balanced derivative criterion of \S \ref{subsec:balanced-derivative}. Thus, all the conjugates of $u$ under the actions of the maps $\sigma_k$, i.e., all the images $\sigma_k(u)$, have magnitude 1. It follows from the theorem of Kronecker \cite{K1857} mentioned above that $u$ is a root of unity in $\mathbb{Q} (\zeta)$. Therefore $u = \pm \zeta^j$ for some $j$ with $0 \leq j \leq p-1$. \end{proof} \subsection{Feasible sizes of level sets of $p$-ary functions} \label{subsec:feas-size-set} In this section, we calculate the possible sizes of level sets of even bent $p$-ary functions of $2m$ variables, vanishing at 0. At the end of this section, we state feasibility conditions for a function to be bent, based on these sizes. As a first step toward this goal, we prove the following corollary of the theorem of Kumar, Scholtz, and Welsh. \begin{corollary} \label{cor:W_f0-real} Suppose that $f \colon GF(p)^{2m} \to GF(p)$ is an even bent function such that $f(0)=0$, where $p$ is a prime number greater than 2, and $m$ is a positive integer. Then \[ W_f(0)=\pm p^m. \] Furthermore, the level sets $D_i$, for $1 \leq i \leq p-1$, are all the same size, i.e., \[ | D_1| = |D_2| = \cdots =|D_{p-1}|. \] \end{corollary} \begin{proof} Let $k_i =|D_i|= | f^{-1}(i) |$ for $1 \leq i \leq p - 1$, and let $k_p =|D_p|= |f^{-1}(0)| - 1$. Notice that since $f$ is even and $f(0)=0$, each set $D_i$ is invariant under the map $x \mapsto -x$, which has no fixed points on $GF(p)^{2m} \setminus \{0\}$, so $k_i$ must be an even integer, for $1 \leq i \leq p$.
From the definition of the Walsh transform, \begin{equation} \label{eqn:W_f0} W_f(0) = 1 + k_1 \zeta + k_2 \zeta^2 + \cdots + k_{p-1} \zeta^{p-1} + k_p. \end{equation} By the result of Kumar, Scholtz, and Welsh, \begin{equation} \label{eqn:Wf0KSW} W_f(0)=\pm \zeta^j p^m, \end{equation} for some $j$ such that $0 \leq j \leq p-1$. Since $1 + \zeta + \zeta^2 + \cdots + \zeta^{p-1} = 0$, Equation (\ref{eqn:W_f0}) can be rewritten as \begin{equation*} W_f(0) = \sum_{i = 1}^{p-1} (k_i - 1 - k_p)\zeta^i. \end{equation*} The roots of unity $\zeta, \zeta^2, \ldots , \zeta^{p-1}$ are linearly independent over $\mathbb{Q}$. It follows from Equation (\ref{eqn:Wf0KSW}) that if $1 \leq j \leq p-1$, then $k_i - 1 - k_p = 0$, for $i \neq j$. But this is impossible, since $k_i$ and $k_p$ are both even. Therefore $j=0$, and $W_f(0) = \pm p^m$. Furthermore, we must have $k_i-1-k_p=-W_f(0)$ for $1 \leq i \leq p-1$. Therefore $k_1 = k_2 = \cdots =k_{p-1}$, i.e., \[ | D_1| = |D_2| = \cdots =|D_{p-1}|. \] \end{proof} We now calculate the possible sizes of level sets of even bent $p$-ary functions of $2m$ variables, vanishing at 0. \begin{proposition} \label{prop:sizeD_i} Suppose that $f \colon GF(p)^{2m} \rightarrow GF(p)$ is an even bent function such that $f(0)=0$, where $p$ is a prime number greater than 2, and $m$ is a positive integer. Then the possible sizes of the sets $D_i$ are \[ |D_i| = (N-1) \frac N p, \] for $1 \leq i \leq p-1$, and \[ |D_p| = (N-1) \left( \frac N p + 1 \right), \] where $N=W_f(0) = \pm p^m$. \end{proposition} \begin{proof} By Corollary \ref{cor:W_f0-real}, the Walsh transform of $f$ at $0$ is $W_f(0)=\pm p^m$. Also by Corollary \ref{cor:W_f0-real}, the sizes of the level sets $D_1, D_2, \ldots , D_{p-1}$ are all equal. Let $k = |D_i|$, for $1 \leq i \leq p-1$, and let $k_p=|D_p|$. Let $N = W_f(0)$. Since \[ \{0\} \cup D_1 \cup D_2 \cup \cdots \cup D_p = GF(p)^{2m}, \] the constants $k$ and $k_p$ are related by the equation
\begin{equation} \label{eqn:union-D_i} 1+(p-1)k+k_p=p^{2m}=N^2. \end{equation} By the definition of the Walsh transform at 0, \begin{equation} \label{eqn:Walsh-count} 1+k\left( \zeta+\zeta^2 + \cdots + \zeta^{p-1} \right) + k_p=N. \end{equation} Since $\zeta+\zeta^2 + \cdots + \zeta^{p-1}=-1$, Equation (\ref{eqn:Walsh-count}) can be rewritten as \begin{equation*} k_p=k+N-1. \end{equation*} Substituting into Equation (\ref{eqn:union-D_i}) we find that \[ pk +N=N^2. \] Hence, \[ k= \left( N - 1 \right)\frac {N} p \] and \[ k_p=\left(N-1 \right)\left( \frac {N} p + 1 \right). \] \end{proof} \begin{remark} {\rm A straightforward calculation shows that if $f$ satisfies the hypotheses of Proposition \ref{prop:sizeD_i}, then $p$ divides the norm-squared of the ``signature'' $(|f^{-1}(0)|, |f^{-1}(1)|, \dots , |f^{-1}(p-1)|)$ of the function $f$, i.e., $p$ divides the quantity \[ | \{0\} \cup D_p|^2+\sum_{i=1}^{p-1} |D_i|^2. \] } \end{remark} We have described the possible sizes of the level sets of an even bent function of $2m$ variables in the Boolean case, in \S \ref{subsec:bentBool}, and in the $p$-ary case, for $p$ a prime number greater than 2, in Proposition \ref{prop:sizeD_i}. These results lead to the following feasibility conditions for a function to be bent. Let $f \colon GF(p)^{2m} \rightarrow GF(p)$ be an even function with $f(0)=0$, where $p$ is a prime number, and $m$ is a positive integer. Let $D_i = f^{-1}(i)$ for $1 \leq i \leq p-1$, and let $D_p = f^{-1}(0) \setminus \{0\}$. We say that the level sets of $f$ are of {\it feasible sizes} if, for $1 \leq i \leq p$, \begin{equation} \label{eqn:feasible-sizes} |D_i| = (N-1)r_i, \end{equation} where \[ N = \pm p^m, \quad r_i = \frac N p \ \text{for $1 \leq i \leq p-1$}, \quad \text{and} \quad r_p = \frac N p + 1. \] A function whose level sets are not of feasible sizes cannot be bent.
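The feasible sizes can be confirmed by direct enumeration for a small bent function. The sketch below is our own illustration (ad-hoc names, not part of the paper): it counts the level sets of the even $3$-ary function $-x_0^2+x_1^2$, used as an example later in the paper, and compares them with the sizes above for $N=+3$.

```python
# Sketch: direct count of the level-set sizes |D_i| for f(x0,x1) = -x0^2 + x1^2
# over GF(3)^2, compared with the feasible sizes for N = W_f(0) = +p^m.
from itertools import product

p, m = 3, 1
N = p ** m            # the case W_f(0) = +p^m
f = lambda y: (-y[0] ** 2 + y[1] ** 2) % p

nonzero = [x for x in product(range(p), repeat=2 * m) if any(x)]
D = {i: sum(1 for x in nonzero if f(x) == i % p) for i in range(1, p + 1)}

assert all(D[i] == (N - 1) * N // p for i in range(1, p))   # |D_1| = |D_2| = 2
assert D[p] == (N - 1) * (N // p + 1)                       # |D_p| = 4
```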
In the next section, we will state these feasibility conditions, equivalently, in terms of the degrees of a function's component Cayley graphs (see \S \ref{subsec:feasible-degrees}). \section{Cayley graphs of $p$-ary functions} In this section, we describe a collection of regular graphs $\{\Gamma_1, \Gamma_2, \dots , \Gamma_p\}$, the component Cayley graphs, associated to a $p$-ary function $f$. We define the component functions $f_i$ of $f$ to be the indicator functions of the sets $D_i$ described above. The eigenvalues of the adjacency matrix of $\Gamma_i$ are the values of the Fourier transform of $f_i$. The adjacency matrices of the component Cayley graphs commute. \subsection{Component functions of a $p$-ary function} Suppose that $f \colon GF(p)^n \rightarrow GF(p)$ is a $p$-ary function. Recall that we define $D_i = f^{-1}(i)$, for $1 \leq i \leq p-1$, and $D_p = f^{-1}(0)\setminus \{0\}$. The {\it component functions} $f_i \colon GF(p)^n \rightarrow \mathbb{C}$ of $f$ are defined to be the indicator functions of the sets $D_i$, given by \begin{equation} \label{eqn:defn-fi} f_i(x)= \begin{cases} 1 &\qquad \text{if $x \in D_i$,}\\ 0 &\qquad \text{otherwise,} \end{cases} \end{equation} for $1 \leq i \leq p$. \subsection{Component Cayley graphs of a $p$-ary function} \label{subsec:cCg} Let $f \colon GF(p)^n \rightarrow GF(p)$ be an even function with $f(0)=0$, where $p$ is a prime number greater than 2, and $n$ is a positive integer. The function $f$ determines a graph decomposition $\{ \Gamma_1, \Gamma_2, \dots , \Gamma_p\}$ of the complete graph on the vertex set $GF(p)^n$. For $1 \leq i \leq p-1$, there is an edge in $\Gamma_i$ between distinct vertices $x$ and $y$ in $GF(p)^n$ if $f(x-y)=i$, i.e., if $x-y \in D_i$. There is an edge in $\Gamma_p$ between distinct vertices $x$ and $y$ in $GF(p)^n$ if $f(x-y)=0$, i.e., if $x-y \in D_p$. Note that these graphs may be considered undirected, since $f$ is even, so $f(x-y)=f(y-x)$.
The graph $\Gamma_i$ is the Cayley graph of the pair $(GF(p)^n, D_i)$. We refer to the graphs $\Gamma_i$ as the {\it component Cayley graphs} or simply the {\it Cayley graphs} of $f$. We can also regard $\Gamma_i$ as the Cayley graph of the component function $f_i$. The graph $\Gamma_i$ is regular of degree $| D_i|$, i.e., every vertex is of degree $|D_i|$. For example, let $f \colon GF(3)^2 \rightarrow GF(3)$ be given by $f(x_0,x_1)=-x_0^2+x_1^2$. The component Cayley graphs $\Gamma_1$, $\Gamma_2$, and $\Gamma_3$ of $f$ are shown in Figure \ref{fig:3-aryLST}. \begin{figure} \caption{The Cayley graphs of the $3$-ary function $-x_0^2+x_1^2$.} \label{fig:3-aryLST} \end{figure} \subsection{Feasible degrees of component Cayley graphs} \label{subsec:feasible-degrees} Suppose that $f \colon GF(p)^{2m} \rightarrow GF(p)$ is an even function such that $f(0)=0$, where $p$ is a prime number greater than 2, and $m$ is a positive integer. In Equation (\ref{eqn:feasible-sizes}) of Section \ref{subsec:feas-size-set}, we described the feasible sizes of the level sets of $f$. If the level sets of $f$ are not of these sizes, $f$ cannot be bent. We now restate these feasibility conditions in terms of degrees of graphs. We say that the component Cayley graphs $\Gamma_i$ of $f$ are of {\it feasible degrees} if the degrees of these graphs correspond to the feasible sizes of level sets, i.e., \[ \text{degree}(\Gamma_i)=(N-1)r_i, \] where \[ N = \pm p^m, \quad r_i = \frac N p \ \text{for $1 \leq i \leq p-1$}, \quad \text{and} \quad r_p = \frac N p + 1. \] If the degrees of the graphs $\Gamma_i$ are not of these sizes, the function $f$ cannot be bent. \subsection{Adjacency matrices} Let $\Gamma$ be a graph with vertex set $V$ of size $\nu$.
The {\it adjacency matrix} of $\Gamma$, with respect to a fixed ordering of the vertices, is the $\nu \times \nu$ matrix $A$ whose rows and columns are indexed by the elements of $V$, such that \begin{equation*} A_{xy}= \begin{cases} 1 &\qquad \text{if $x \neq y$ and $(x,y)$ is an edge of $\Gamma$,}\\ 0 &\qquad \text{otherwise.} \end{cases} \end{equation*} Let $f \colon GF(p)^n \rightarrow GF(p)$ be an even function such that $f(0)=0$. Let $A(i)$ be the adjacency matrix of the component Cayley graph $\Gamma_i$ with respect to the ordering of $GF(p)^n$ fixed in \S \ref{sec:2}. We will show below that the matrices $A(1), A(2), \dots ,A(p)$ commute, since they share a common basis of eigenvectors. \subsection{Hadamard vectors} \label{subsec:Had} Let $\nu=p^n$. For each vector $x$ in $GF(p)^n$, we define a vector $h(x)$ in $\mathbb{C}^\nu$, using the same fixed ordering of $GF(p)^n$ as in \S \ref{sec:2}, by \[ h(x)_y=\zeta^{-\langle x,y \rangle}. \] We call the vectors $h(x)$ {\it generalized Hadamard} or simply {\it Hadamard} vectors. The vector $h(0)$ is the all 1's vector. By the following lemma, the remaining vectors $h(x)$, where $x \neq 0$, span the subspace of $\mathbb{C}^\nu$ orthogonal to $h(0)$. \begin{lemma} \label{lem:Hadvec} The $\nu$ Hadamard vectors $h(x)$ in $\mathbb{C}^\nu$ are orthogonal and linearly independent over $\mathbb{C}$. \end{lemma} \begin{proof} Let $H$ be the matrix whose columns are the Hadamard vectors $h(x)$, for $x$ in $GF(p)^n$. It is straightforward to show that $H \overline H^t = \nu I$, where $\nu=p^n$, and $I$ is the $\nu \times \nu$ identity matrix. \end{proof} The matrix $H$ whose columns are the vectors $h(x)$ is sometimes called a {\it generalized Hadamard} or {\it Butson} matrix. \subsection{Eigenvalues corresponding to Hadamard vectors} \label{subsec:eig-Had-h} Suppose that $\Gamma$ is a graph with vertex set $V$ of size $\nu$, and $A$ is the adjacency matrix of $\Gamma$. 
The set of eigenvalues of $A$ is called the {\it spectrum} of the graph $\Gamma$. We sometimes refer to the eigenvalues of $A$ as eigenvalues of $\Gamma$. Let $f \colon GF(p)^n \rightarrow GF(p)$ be an even function with $f(0)=0$, with component Cayley graphs $\Gamma_i$ and corresponding adjacency matrices $A(i)$. We will show that the Hadamard vectors $h(x)$ of Lemma \ref{lem:Hadvec} form a basis of common eigenvectors of the matrices $A(i)$ over $\mathbb{C}$, and the values of the Fourier transforms $\hat {f_i}$ of the component functions $f_i$ are the eigenvalues of these matrices. \begin{lemma} \label{lemma:lamfi} The Hadamard vector $h(x)$ is an eigenvector of $A(i)$ corresponding to the eigenvalue $\hat f_i(x)$, for each $x$ in $GF(p)^n$, and for $1 \leq i \leq p$. \end{lemma} \begin{proof} The entry in position $y$ in the product $A(i)h(x)$ is \begin{align*} \left( A(i)h(x) \right)_y &= \sum_t A(i)_{yt}h(x)_t\\ &=\sum_t f_i(t-y)\zeta^{-\langle x, t \rangle}\\ &=\sum_z f_i(z)\zeta^{-\langle x, z+y \rangle}\\ &= \left(\sum_z f_i(z) \zeta^{-\langle x, z \rangle} \right) \zeta^{-\langle x, y \rangle}\\ &= \hat {f_i}(x) h(x)_y. \end{align*} Thus, \[ A(i) h(x) = \hat{f_i}(x) h(x), \] i.e., the vector $h(x)$ is an eigenvector of $A(i)$ corresponding to the eigenvalue $\hat f_i(x)$. \end{proof} As an immediate corollary of the previous lemma, we see that the adjacency matrices $A(i)$ commute. \begin{corollary} If $f \colon GF(p)^n \rightarrow GF(p)$ is an even $p$-ary function with $f(0)=0$, the adjacency matrices $A(1), A(2), \dots ,A(p)$ of the component Cayley graphs of $f$ commute. \end{corollary} We will often use the notation $\lambda_i(x)$ to denote the eigenvalue of $A(i)$ corresponding to the Hadamard eigenvector $h(x)$. Thus, \begin{equation} \label{eqn:lambdaiofx} \lambda_i(x)=\hat {f_i}(x), \end{equation} for $1 \leq i \leq p$ and for all $x$ in $GF(p)^n$.
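Lemma \ref{lemma:lamfi} can be verified numerically on a small example. The sketch below is our own illustration (ad-hoc names): it builds the adjacency matrices and Hadamard vectors for the $3$-ary function $-x_0^2+x_1^2$ and checks the eigenvector relation $A(i)h(x)=\hat f_i(x)h(x)$ for all $i$ and $x$.

```python
# Numerical sketch: for f = -x0^2 + x1^2 over GF(3)^2, each Hadamard vector
# h(x) is a common eigenvector of A(1), ..., A(p), with eigenvalue
# lambda_i(x) = \hat{f_i}(x) (the Fourier transform of the indicator of D_i).
import cmath
from itertools import product

p, n = 3, 2
zeta = cmath.exp(2j * cmath.pi / p)
pts = list(product(range(p), repeat=n))      # fixed ordering of GF(p)^n
f = lambda y: (-y[0] ** 2 + y[1] ** 2) % p

def f_i(i, x):                               # component function: indicator of D_i
    return 1 if any(x) and f(x) == i % p else 0

def A(i):                                    # adjacency matrix of Gamma_i
    return [[f_i(i, tuple((a - b) % p for a, b in zip(x, y)))
             for y in pts] for x in pts]

def h(x):                                    # Hadamard vector h(x)_y = zeta^{-<x,y>}
    return [zeta ** (-sum(a * b for a, b in zip(x, y)) % p) for y in pts]

def fhat(i, x):                              # Fourier transform of f_i at x
    return sum(f_i(i, y) * zeta ** (-sum(a * b for a, b in zip(x, y)) % p)
               for y in pts)

for i in range(1, p + 1):
    Ai = A(i)
    for x in pts:                            # check A(i) h(x) = \hat{f_i}(x) h(x)
        hx = h(x)
        Ah = [sum(Ai[r][c] * hx[c] for c in range(len(pts)))
              for r in range(len(pts))]
        assert all(abs(u - fhat(i, x) * v) < 1e-9 for u, v in zip(Ah, hx))
```

The eigenvalues at $x=0$ are the degrees $|D_i|$ of the graphs, as expected.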
\section{Eigenvalue criterion for bent functions} In this section, we characterize even bent $p$-ary functions in terms of eigenvalues of their component Cayley graphs. \begin{proposition} \label{propn:Walsheigenvalues} Let $f \colon GF(p)^n \rightarrow GF(p)$ be an even function such that $f(0)=0$, where $p$ is a prime number, and $n$ is a positive integer. Then the Walsh transform of $f$ satisfies \[ W_f(x) = 1 + \sum_{i=1}^p \zeta^i \lambda_i(x), \] for all $x$ in $GF(p)^n$, where $\lambda_i(x)$ is the eigenvalue from Equation (\ref{eqn:lambdaiofx}) above. \end{proposition} \begin{proof} Recall that we defined $D_i = f^{-1}(i)$ for $1 \leq i \leq p-1$, and $D_p = f^{-1}(0) \setminus \{0\}$. As above, we denote by $f_i$ the component function of $f$ defined in Equation (\ref{eqn:defn-fi}), and by $\hat{f_i}$ its Fourier transform. The Walsh transform of $f$ can be written in terms of the Fourier transforms of the functions $f_i$ as \begin{equation*} \begin{aligned} W_f(x) & =\sum_{y \in GF(p)^n} \zeta^{f(y)} \zeta^{- \langle y, x \rangle }\\ & = \zeta^0 + \sum_{i=1}^{p-1} \sum_{y \in D_i} \zeta^i \zeta^{-\langle y , x\rangle} + \sum_{y \in D_p} \zeta^{- \langle y ,x\rangle} \\ &= 1 + \sum_{i=1}^{p-1} \zeta^i \sum_{y \in GF(p)^n} f_i(y) \zeta^{-\langle y , x \rangle} + \sum_{y \in GF(p)^n} f_p(y) \zeta^{- \langle y , x \rangle} \\ &= 1 + \zeta \hat {f_1}(x) + \zeta^2 \hat {f_2}(x) + \cdots + \zeta^{p-1}\hat {f_{p-1}}(x) + \hat {f_p}(x). \end{aligned} \end{equation*} Since $\lambda_i(x)=\hat {f_i}(x)$ (see Equation (\ref{eqn:lambdaiofx}) above), this completes the proof. \end{proof} From the previous result, we obtain the following characterization of an even bent $p$-ary function in terms of eigenvalues of its component Cayley graphs. \begin{proposition} \label{propn:eigenvaluecharacterization} Let $f \colon GF(p)^n \rightarrow GF(p)$ be an even function such that $f(0)=0$, where $p$ is a prime number, and $n$ is a positive integer.
Then $f$ is bent if and only if \[ |1+\sum_{i=1}^p \zeta^i \lambda_i(x) | = p^{\frac n 2} \] for all $x$ in $GF(p)^n$, where $\lambda_i(x)$ is the eigenvalue from Equation (\ref{eqn:lambdaiofx}) above. \end{proposition} \section{Feasible Latin and negative Latin square type functions} In this section, we consider even $p$-ary functions whose component Cayley graphs are strongly regular and either all of Latin square type with feasible degrees, or all of negative Latin square type with feasible degrees. We describe the eigenvalues of the component Cayley graphs of such functions. In order to illustrate how our main result is related to the theorems of Dillon and Bernasconi, Codenotti, and VanderKam, we recast their theorems in this context. We begin with some background material on strongly regular graphs and graphs of Latin and negative Latin square type (for further details see, for example, Godsil and Royle \cite[Chapter~10]{GR01}). \subsection{Strongly regular graphs} Let $\Gamma$ be a $k$-regular graph on $\nu$ vertices (every vertex has degree $k$). The graph $\Gamma$ is called {\it strongly regular} if there exist nonnegative integers $\lambda$ and $\mu$ such that if $x$ and $y$ are neighbors in $\Gamma$, there are $\lambda$ common neighbors of $x$ and $y$, and if $x$ and $y$ are not neighbors in $\Gamma$, there are $\mu$ common neighbors of $x$ and $y$. The constants $(\nu , k , \lambda , \mu)$ are called the {\it parameters} of the graph $\Gamma$. A strongly regular graph on the vertex set $GF(p)^n$ with parameters $(\nu, k, \lambda, \mu)$ corresponds to a symmetric partial difference set with parameters $(\nu, k, \lambda, \mu)$ (see, for example, \cite[Chapter 6]{JM17}). Thus, many of our statements about strongly regular graphs could be rephrased in terms of symmetric partial difference sets.
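Propositions \ref{propn:Walsheigenvalues} and \ref{propn:eigenvaluecharacterization} can also be illustrated numerically. The sketch below (ours, with the assumed sample function $f(x_1,x_2)=x_1^2+x_2^2$ over $GF(3)^2$, a nondegenerate quadratic form, which turns out to be bent) compares the Walsh transform computed directly from its definition with the eigenvalue expression $1+\sum_i \zeta^i\lambda_i(x)$, and then checks the bentness criterion $|W_f(x)|=p^{n/2}$.

```python
import cmath
from itertools import product

p, n = 3, 2
zeta = cmath.exp(2j * cmath.pi / p)
V = list(product(range(p), repeat=n))

def f(v):                                       # assumed example function, even with f(0) = 0
    return (v[0] ** 2 + v[1] ** 2) % p

def ip(u, v):                                   # standard inner product mod p
    return sum(a * b for a, b in zip(u, v)) % p

def f_comp(i, v):                               # component function f_i
    if i == p:                                  # f_p: indicator of f^{-1}(0) \ {0}
        return 1 if f(v) == 0 and any(v) else 0
    return 1 if f(v) == i else 0

def lam(i, x):                                  # eigenvalue lambda_i(x) = fhat_i(x)
    return sum(f_comp(i, y) * zeta ** (-ip(x, y)) for y in V)

def walsh(x):                                   # Walsh transform computed directly
    return sum(zeta ** f(y) * zeta ** (-ip(y, x)) for y in V)

# Proposition: W_f(x) = 1 + sum_i zeta^i lambda_i(x); criterion: |W_f(x)| = p^(n/2)
prop_ok = all(abs(walsh(x) - (1 + sum(zeta ** i * lam(i, x) for i in range(1, p + 1)))) < 1e-9
              for x in V)
bent_ok = all(abs(abs(walsh(x)) - p ** (n / 2)) < 1e-9 for x in V)
```

The first identity holds for every even $f$ with $f(0)=0$; the second holds here because the assumed sample function happens to be bent.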
\subsection{Eigenvalues of strongly regular graphs} The eigenvalues of the adjacency matrix of a strongly regular graph and their multiplicities can be expressed in terms of the parameters of the graph by the following well-known formulas. A strongly regular graph $\Gamma$ with parameters $(\nu,k,\lambda,\mu)$ has eigenvalues $k$, $\theta$, and $\tau$, where the eigenvalue $k$ corresponds to the all $1$'s eigenvector, \[ \theta = \frac { \lambda - \mu + \sqrt {(\lambda - \mu)^2 + 4 (k - \mu)}} 2, \] and \[ \tau = \frac { \lambda - \mu - \sqrt {(\lambda - \mu)^2 + 4 (k - \mu)}} 2. \] From these equations we see that \begin{equation} \label{eqn:thetatau0} \theta +\tau = \lambda - \mu \qquad \text{and} \qquad \theta \tau = -k + \mu. \end{equation} The multiplicities of $\theta$ and $\tau$ on the space of vectors in $\mathbb{C}^\nu$ orthogonal to the all 1's vector are given by \[ m_\theta = \frac {(\nu - 1) \tau + k}{\tau - \theta} \qquad \text{and} \qquad m_\tau = \frac {(\nu - 1)\theta + k}{\theta - \tau}. \] \subsection{Latin and negative Latin square type graphs} \label{subsec:LSTNLST} We say that a strongly regular graph $\Gamma$ is of {\it Latin square type} if there exist integers $N>0$ and $r>0$ such that the parameters of $\Gamma$ are \begin{equation} \label{eqn:LSTdefn} (\nu,k,\lambda, \mu)=(N^2, (N-1)r, N+r^2-3r, r^2-r). \end{equation} A strongly regular graph $\Gamma$ is of {\it negative Latin square type} if there exist integers $N<0$ and $r<0$ such that the parameters of $\Gamma$ are given by Equation (\ref{eqn:LSTdefn}). If $\Gamma$ is a strongly regular graph of Latin square type then the eigenvalues of $\Gamma$ are \begin{equation} \label{eqn:thetatau1} k=(N-1)r, \qquad \theta= N - r, \qquad \text{and} \qquad \tau= -r, \end{equation} where $\theta$ has multiplicity $m_\theta=(N-1)r$ on the subspace of $\mathbb{C}^\nu$ orthogonal to the all $1$'s vector, and $\tau$ has multiplicity $m_\tau=(N-1)(N-r+1)$.
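The Latin-square-type specialization of these formulas can be double-checked mechanically: starting from the parameters in Equation (\ref{eqn:LSTdefn}), the general eigenvalue and multiplicity formulas above should reproduce $\theta=N-r$, $\tau=-r$, $m_\theta=(N-1)r$ and $m_\tau=(N-1)(N-r+1)$. A small sketch (ours), over a few sample pairs $(N,r)$ chosen arbitrarily for illustration:

```python
import math

# Eigenvalues and multiplicities of a strongly regular graph from its
# parameters (nu, k, lam, mu), using the standard formulas; all divisions
# below are exact integer divisions for Latin-square-type parameters.
def srg_eigs(nu, k, lam, mu):
    d = math.isqrt((lam - mu) ** 2 + 4 * (k - mu))
    theta = (lam - mu + d) // 2
    tau = (lam - mu - d) // 2
    m_theta = ((nu - 1) * tau + k) // (tau - theta)
    m_tau = ((nu - 1) * theta + k) // (theta - tau)
    return theta, tau, m_theta, m_tau

checks = []
for N, r in [(9, 3), (9, 4), (25, 5), (27, 9)]:        # sample Latin square type parameters
    nu, k = N * N, (N - 1) * r
    lam, mu = N + r * r - 3 * r, r * r - r
    theta, tau, m_theta, m_tau = srg_eigs(nu, k, lam, mu)
    checks.append(theta == N - r and tau == -r
                  and m_theta == (N - 1) * r
                  and m_tau == (N - 1) * (N - r + 1)
                  and k + m_theta * theta + m_tau * tau == 0)  # trace of adjacency matrix is 0
all_ok = all(checks)
```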
If $\Gamma$ is a strongly regular graph of negative Latin square type, then the eigenvalues of $\Gamma$ are \begin{equation} \label{eqn:thetatau2} k=(N-1)r, \qquad \theta= - r, \qquad \text{and} \qquad \tau= N-r, \end{equation} where $\theta$ has multiplicity $m_\theta=(N-1)(N-r+1)$ on the subspace of $\mathbb{C}^\nu$ orthogonal to the all $1$'s vector, and $\tau$ has multiplicity $m_\tau=(N-1)r$. If $N= \pm p^m$ and $r= \frac N p$, the eigenvalues $k$, $\theta$, and $\tau$ are distinct, except in the case that $m=1$ and $N = p$. In this case $r=1$, so $\theta = k = p-1$, $\lambda=p-2$, $\mu=0$, and $\tau=-1$. When there are only two distinct eigenvalues, the graph $\Gamma$ is not connected, and consists of $p$ copies of the complete graph $K_p$. \subsection{Latin and negative Latin square type functions} \label{defn:fLSTNLST} Let $f \colon GF(p)^n \rightarrow GF(p)$ be a $p$-ary function, where $p$ is a prime number, and $n$ is a positive integer. We say that $f$ is of {\it Latin square type} if $f$ is even with $f(0)=0$, and its component Cayley graphs are all strongly regular and of Latin square type. Similarly, we say that $f$ is of {\it negative Latin square type} if $f$ is even with $f(0)=0$, and its component Cayley graphs are all strongly regular and of negative Latin square type. We will sometimes use the abbreviations LST and NLST for Latin square type and negative Latin square type, respectively. \subsection{Feasible LST and NLST functions} \label{subsec:LSTFdefn} Let $f \colon GF(p)^{2m} \rightarrow GF(p)$ be a $p$-ary function, where $p$ is a prime number, and $m$ is a positive integer. We say that $f$ is of {\it feasible Latin square type} if it is of Latin square type and the parameters of the component Cayley graph $\Gamma_i$ are \begin{equation} \label{eqn:LSTdefn2a} (\nu,k,\lambda, \mu)=(N^2, (N-1)r_i, N+r_i^2-3r_i, r_i^2-r_i), \end{equation} where $N = p^m$, $r_i = \frac N p$ for $1 \leq i \leq p-1$, and $r_p = \frac N p + 1$.
Similarly, we say that $f$ is of {\it feasible negative Latin square type} if it is of negative Latin square type and the parameters of the component Cayley graphs are given by Equation (\ref{eqn:LSTdefn2a}), where $N = -p^m$, $r_i = \frac N p$ for $1 \leq i \leq p-1$, and $r_p = \frac N p + 1$. \begin{remark} \label{rk:no_NLST_m1p5} {\rm If $m=1$, there are no functions of feasible negative Latin square type for $p \geq 5$. If there were such a function, the formula above would give a value of $\lambda = -p+4$, which is impossible since $\lambda \geq 0$. } \end{remark} \subsection{Eigenvalues of feasible LST and NLST graphs} From the formulas of the previous two sections, we can calculate the eigenvalues of the component Cayley graphs of $f$ and their multiplicities, in the case that $f$ is of feasible Latin or negative Latin square type. In this section, we capture a more subtle feature of how these eigenvalues interact, which is key to proving our main result and the subsequent duality theorem. Let $f \colon GF(p)^{2m} \rightarrow GF(p)$ be an even function such that $f(0)=0$, where $p$ is a prime number greater than 2, and $m$ is a positive integer. Let $A(i)$ be the adjacency matrix of the component Cayley graph $\Gamma_i$. Recall that we denote by $\lambda_i(x)$ the eigenvalue of $A(i)$ corresponding to the Hadamard vector $h(x)$ defined in \S \ref{subsec:Had}. \begin{proposition} \label{propn:distinguished-eigenvalue} Let $f \colon GF(p)^{2m} \rightarrow GF(p)$ be a $p$-ary function, where $p$ is a prime number greater than 2, and $m$ is a positive integer. Suppose that $f$ is of feasible Latin or negative Latin square type. Then for each nonzero $x$ in $GF(p)^{2m}$, there exists a unique distinguished index $j$ in $\{1, 2, \dots , p-1,p\}$ such that \[ \lambda_j(x) = N - r_j, \] while for all the remaining values of $i$ such that $1 \leq i \leq p$ and $i \neq j$, \[ \lambda_i(x) = - r_i. 
\] \end{proposition} \begin{proof} Let $\nu=p^{2m}$, let $I$ be the $\nu \times \nu$ identity matrix, and let $J$ be the $\nu \times \nu$ all 1's matrix. Recall that for $x$ in $GF(p)^n$, the Hadamard vectors $h(x)$ are orthogonal vectors in $\mathbb{C}^\nu$, and $h(0)$ is the all 1's vector. Thus, if $x$ is a nonzero point in $GF(p)^n$, then $Jh(x)=0$. The adjacency matrices $A(i)$ of the component graphs $\Gamma_i$ of $f$ satisfy \[ I + \sum_{i=1}^p A(i)=J. \] Multiplying on the right by $h(x)$, where $x \neq 0$, gives \begin{equation*} h(x)+\sum_{i=1}^p A(i) h(x) =\left(1 +\sum_{i=1}^p \lambda_i(x)\right)h(x) =0. \end{equation*} It follows that if $x \neq 0$, \begin{equation} \label{eqn:sumlambda} 1+\sum_{i=1}^p \lambda_i(x) = 0. \end{equation} Since the component graphs $\Gamma_i$ are of Latin or negative Latin square type with $N=\pm p^m$, $r_i = \frac N p$ for $1 \leq i \leq p-1$, and $r_p=\frac N p +1$, the eigenvalues of $\Gamma_i$ are $k_i=(N-1)r_i$, $N-r_i$, and $-r_i$. When $x \neq 0$, the eigenvalue $\lambda_i(x)$ must take the value $N-r_i$ or $-r_i$, for $1 \leq i \leq p$. Let $a$ be the number of indices $i$ with $1 \leq i \leq p-1$ such that $\lambda_i(x) = N - r_i$. Similarly, let $b=1$ if $\lambda_p(x) = N - r_p$, and let $b=0$ otherwise. We wish to show that one of the numbers $a$ and $b$ is 1 and the other is 0. Let $r$ be the common value of $r_i$ for $1 \leq i \leq p-1$, \ i.e., $r = \frac N p$. Then $r_p = r+1$. Substituting into Equation (\ref{eqn:sumlambda}) we obtain \begin{align*} 0 &= 1+a( N - r) + (p-1-a)(-r) + b (N-r_p) + (1-b)(-r_p)\\ &= 1+a( N - r) + (p-1-a)(-r) + b (N-r-1) + (1-b)(-r-1)\\ &=1+ (a+b)N + p(-r) -1\\ &= (a+b)N-N. \end{align*} Thus, $a+b=1$, so there is exactly one index $j$, with $1 \leq j \leq p$, such that $\lambda_j(x)=N-r_j$. For the remaining values of $i \neq j$, $\lambda_i(x)=-r_i$.
\end{proof} \subsection{Dillon and Bernasconi--Codenotti--VanderKam\\ theorems} Suppose that $f \colon GF(2)^{2m} \to GF(2)$ is a Boolean function such that $f(0)=0$, where $m$ is a positive integer. We define the Cayley graph $\Gamma$ of $f$ to be the component Cayley graph $\Gamma_1$. Recall from \S \ref{subsec:bentBool} that if $f$ is bent, then the only possible sizes for the set $D_1=f^{-1}(1)$ (and hence the only possible degrees of the Cayley graph $\Gamma$) are $|D_1| = 2^{2m-1} \pm 2^{m-1}$. These are the feasible degrees of the Cayley graph of a Boolean function of $2m$ variables that vanishes at 0. Dillon's criterion \cite{D74} for bent Boolean functions was stated in the language of difference sets. We state an essentially equivalent version in terms of strongly regular graphs. \begin{theorem}[Dillon] Let $f \colon GF(2)^{2m} \to GF(2)$ be a function with $f(0)=0$. Then $f$ is bent if and only if its Cayley graph is strongly regular of feasible Latin or negative Latin square type. \end{theorem} Bernasconi, Codenotti, and VanderKam \cite{BCV01} proved that a function $f \colon GF(2)^{2m} \to GF(2)$ with $f(0)=0$ is bent if and only if the Cayley graph $\Gamma$ of $f$ is strongly regular with parameters $(2^{2m}, k, \lambda, \lambda)$, for some $\lambda$, where $k = |D_1|$. From the discussion above, we see that $k = 2^{2m-1} \pm 2^{m-1}$ and $\lambda=2^{2m-2} \pm 2^{m-1}$. In the next section, we show that the theorems of Dillon and Bernasconi, Codenotti, and VanderKam can be generalized in one direction (Theorem \ref{thm:main1}). In \S \ref{subsec:GF52}, we give examples to show that the converse of Theorem \ref{thm:main1} does not hold, since the component Cayley graphs of a $p$-ary bent function are not necessarily strongly regular. \section{Feasible Latin and negative Latin square type functions are bent} We now prove our main result.
\begin{theorem} \label{thm:main1} Let $f \colon GF(p)^{2m} \rightarrow GF(p)$ be an even function such that $f(0)=0$, where $p$ is a prime number greater than 2, and $m$ is a positive integer. If the component Cayley graphs of $f$ are all strongly regular and are either all of feasible Latin square type, or all of feasible negative Latin square type, then $f$ is bent. \end{theorem} \begin{proof} By the hypotheses of the theorem, the component Cayley graphs $\Gamma_i$ of $f$ are strongly regular with parameters \[ \left(N^2,(N-1)r_i,N+r_i^2-3r_i,r_i^2-r_i \right), \] where $N=p^m$ if the graphs are all of feasible Latin square type, $N = - p^m$ if the graphs are all of feasible negative Latin square type, $r_i = \frac N p$, for $1 \leq i \leq p-1$, and $r_p=\frac N p+1$. In order to show that $f$ is bent, we wish to show that the magnitude of the Walsh transform $W_f(x)$ is $|N| = p^m $, for each $x$ in $GF(p)^{2m}$. The Walsh transform at $x=0$ is easily calculated, by counting the number of times $f$ takes each value $i$, as \begin{align*} W_f(0)&= \sum_{y \in GF(p)^{2m}} \zeta^{f(y)}\\ &=1+ \sum_{i=1}^{p-1} \zeta^i (N-1)r_i+(N-1)r_p\\ &=1 +(N-1)\left(\frac N p\right) \sum_{i=1}^{p-1} \zeta^i + (N-1) \left(\frac N p +1\right)\\ &=1-(N-1)\left(\frac N p\right) + (N-1) \left(\frac N p +1\right)\\ &=N. \end{align*} Thus, $W_f(0)= \pm p^m$. We will now apply the characterization of bent functions in terms of eigenvalues (Proposition \ref{propn:eigenvaluecharacterization}) to show that $|W_f(x)|=p^m $ for all nonzero $x$ in $GF(p)^{2m}$. Recall from Proposition \ref{propn:distinguished-eigenvalue} that for each nonzero $x$ in $GF(p)^{2m}$, there exists a unique distinguished value of $j$ in $\{1, 2, \dots , p-1,p\}$ such that $\lambda_j(x) = N - r_j$, while for all the remaining values of $i$ such that $1 \leq i \leq p$ and $i \neq j$, we have $\lambda_i(x) = - r_i$. 
Continuing with this notation and using Proposition \ref{propn:Walsheigenvalues}, we find that \begin{align*} W_f(x) &= 1 + \sum_{i=1}^p \zeta^i \lambda_i(x)\\ &= 1 + \zeta^j (N - r_j) + \sum_{i \neq j} \zeta^i (-r_i)\\ & = 1 + \zeta^j N + \sum_{i=1}^p \zeta^i (-r_i)\\ &= 1 + \zeta^j N - \left(\frac N p\right)\sum_{i=1}^{p-1} \zeta^i - \left( \frac N p +1\right)\\ &=1+\zeta^j N +\left(\frac N p\right) - \left( \frac N p +1\right)\\ &=\zeta^j N. \end{align*} Therefore $f$ is bent. \end{proof} As an immediate consequence of the proof of the theorem above, we obtain the following corollary. \begin{corollary} \label{cor:Wfzetaj} Let $f \colon GF(p)^{2m} \rightarrow GF(p)$ be a $p$-ary function, where $p$ is a prime number greater than 2, and $m$ is a positive integer. Suppose that $f$ is of feasible Latin or negative Latin square type. Then $W_f(0) = p^m$ in the feasible Latin square type case, and $W_f(0)=-p^m$ in the feasible negative Latin square type case. Furthermore, if $x \neq 0$, then \[ W_f(x) = \zeta^j W_f(0), \] where $j$ is the distinguished index described in Proposition \ref{propn:distinguished-eigenvalue} such that $\lambda_j(x)$ has the form $N - r_j$. \end{corollary} This corollary gives us a dual $p$-ary function $f^*$, satisfying \[ W_f(x) = \zeta^{f^*(x)} W_f(0). \] The properties of $f^*$ are described in the next section. \section{Dual functions} In this section, we prove that a bent function $f$ of feasible Latin or negative Latin square type (as defined in \S \ref{subsec:LSTFdefn}) has a dual $f^*$ whose component Cayley graphs have the same parameters as those of $f$. 
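Before developing the duality theory, Corollary \ref{cor:Wfzetaj} and the construction of $f^*$ can be checked numerically. The sketch below (ours) uses the assumed feasible negative-Latin-square-type example $f(x_1,x_2)=x_1^2+x_2^2$ over $GF(3)^2$ (so $N=-3$, $r_1=r_2=-1$, $r_3=0$): it recovers $f^*$ from the distinguished index of Proposition \ref{propn:distinguished-eigenvalue} and confirms that $W_f(0)=N$ and $W_f(x)=\zeta^{f^*(x)}W_f(0)$.

```python
import cmath
from itertools import product

p, m = 3, 1
N = -p ** m                                     # negative Latin square type
r = [None] + [N // p] * (p - 1) + [N // p + 1]  # r[1..p-1] = N/p, r[p] = N/p + 1
zeta = cmath.exp(2j * cmath.pi / p)
V = list(product(range(p), repeat=2 * m))

def f(v):                                       # assumed feasible NLST example function
    return (v[0] ** 2 + v[1] ** 2) % p

def ip(u, v):
    return sum(a * b for a, b in zip(u, v)) % p

def f_comp(i, v):                               # component function f_i
    if i == p:
        return 1 if f(v) == 0 and any(v) else 0
    return 1 if f(v) == i else 0

def lam(i, x):                                  # eigenvalue lambda_i(x) = fhat_i(x)
    return sum(f_comp(i, y) * zeta ** (-ip(x, y)) for y in V)

def walsh(x):
    return sum(zeta ** f(y) * zeta ** (-ip(y, x)) for y in V)

def dual(x):                                    # f*(x) from the distinguished index j
    if not any(x):
        return 0
    js = [i for i in range(1, p + 1) if abs(lam(i, x) - (N - r[i])) < 1e-9]
    assert len(js) == 1                         # the distinguished index is unique
    return js[0] % p                            # j = p is mapped to 0

w0_ok = abs(walsh((0, 0)) - N) < 1e-9           # W_f(0) = N = -p^m
dual_ok = all(abs(walsh(x) - zeta ** dual(x) * N) < 1e-9 for x in V)
```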
The main idea of the proof is to relate the component functions $f_i^*$ of $f^*$ to the component functions $f_i$ of $f$ by means of the equation \begin{equation} \label{eqn:fi*} f_i^*(x)=\frac 1 N \hat {f_i}(x) + \frac {r_i} N - r_i \delta_0(x), \end{equation} where $\hat {f_i}$ is the Fourier transform of $f_i$, and $\delta_0$ is the delta function centered at 0 (see Equation (\ref{defn:delta})). Since the eigenvalues of the component Cayley graphs of a $p$-ary function are given by the Fourier transforms of the component functions, Equation (\ref{eqn:fi*}) allows us to calculate the eigenvalues of the component Cayley graphs of $f^*$ and show that these graphs are strongly regular with the desired parameters. \subsection{Regular and weakly regular bent functions} A bent function $f \colon GF(p)^n \to GF(p)$ is said to be {\it regular} if there exists a {\it dual function} $f^* \colon GF(p)^n \rightarrow GF(p)$ such that \[ W_f(x) =\zeta^{f^*(x)} p^{\frac n 2} \] for all $x$ in $GF(p)^n$. Similarly, a bent function $f \colon GF(p)^n \to GF(p)$ is said to be {\it weakly regular} or {\it $\mu$-weakly regular} if there exists a constant $\mu$ in $\mathbb{C}$ with magnitude 1 and a {\it dual function} or {\it $\mu$-weakly regular dual function} $f^* \colon GF(p)^n \rightarrow GF(p)$ such that \[ W_f(x) = \mu \zeta^{f^*(x)} p^{\frac n 2} \] for all $x$ in $GF(p)^n$. It is known that the dual $f^*$ of a regular or weakly regular bent function $f$ is also bent. If $f$ is regular, so is $f^*$, and if $f$ is weakly regular, so is $f^*$. If $f$ is an even function, then $f^*$ is also even. See \cite[\S 6.4]{JM17} for further background on duality. \subsection{Regularity and feasible LST and NLST functions} We show that a function $f \colon GF(p)^{2m} \rightarrow GF(p)$ of feasible Latin or negative Latin square type (as defined in \S \ref{subsec:LSTFdefn}) is regular or weakly regular, respectively. 
Recall that in the feasible Latin or negative Latin square case, the parameters of the component Cayley graph $\Gamma_i$ are \begin{equation*} (\nu,k,\lambda, \mu)=(N^2, (N-1)r_i, N+r_i^2-3r_i, r_i^2-r_i), \end{equation*} where \begin{equation} \label{eqn:LSTdefn2} N = \pm p^m, \quad r_i = \frac N p \ \text{for $1 \leq i \leq p-1$}, \quad \text{and} \quad r_p = \frac N p + 1. \end{equation} When $N=p^m$, the graph $\Gamma_i$ is of Latin square type, and when $N=-p^m$, it is of negative Latin square type. Note that in the case $p=3$ and $m=1$, it is possible to have a strongly regular graph decomposition of $GF(p)^{2m}$ that is both Latin and negative Latin square type (see Example \ref{ex:negpos}), but only the negative Latin square type graph decomposition is feasible. As above, we denote the eigenvalue of $\Gamma_i$ corresponding to the Hadamard eigenvector $h(x)$ by $\lambda_i(x)$ (see Sections \ref{subsec:Had} and \ref{subsec:eig-Had-h}). \begin{proposition} \label{propn:f^*defn} Let $f \colon GF(p)^{2m} \rightarrow GF(p)$ be a $p$-ary function, where $p$ is a prime number greater than 2, and $m$ is a positive integer. If $f$ is of feasible Latin square type, then $f$ is a regular bent function. If $f$ is of feasible negative Latin square type, then $f$ is a $(-1)$-weakly regular bent function. \end{proposition} \begin{proof} The function $f$ is bent, by Theorem \ref{thm:main1}. We define $f^*(0)=0$. By Proposition \ref{propn:distinguished-eigenvalue}, for every nonzero $x$ in $GF(p)^{2m}$, there is a unique distinguished index $j$ in $\{1, 2, \dots , p\}$ such that $\lambda_j(x)=N-r_j$. If $j \neq p$, we define $f^*(x)=j$, and if $j = p$, we define $f^*(x)=0$. By Corollary \ref{cor:Wfzetaj}, $W_f(x) =\zeta^{f^*(x)} p^m$ in the Latin square type case, and $W_f(x) =-\zeta^{f^*(x)} p^m$ in the negative Latin square type case. 
Thus, $f$ is regular in the Latin square type case and $(-1)$-weakly regular in the negative Latin square type case, and $f^*$ is the dual of $f$. \end{proof} \subsection{Level sets of dual functions} We describe the level sets of the dual function of $f$, when $f$ is of feasible Latin or negative Latin square type, using Proposition \ref{propn:distinguished-eigenvalue}, on distinguished indices of eigenvalues. Suppose that $f \colon GF(p)^{2m} \to GF(p)$ is an even bent function with $f(0)=0$ such that $f$ is regular or weakly regular. Let $f^* \colon GF(p)^{2m} \to GF(p)$ be the dual function of $f$. Recall that we define $D_i = f^{-1}(i)$, for $1 \leq i \leq p-1$, and $D_p = f^{-1}(0) \setminus \{0\}$. Similarly we define the corresponding sets for the dual function: \begin{equation} \label{defn:Di*} D^*_i=(f^*)^{-1}(i), \ \text{for $1 \leq i \leq p-1$, and} \ D^*_p = (f^*)^{-1}(0) \setminus\{0\}. \end{equation} Recall also that the eigenvalues of the adjacency matrix $A(i)$ of the component Cayley graph $\Gamma_i$ of $f$ are the values $\hat f_i (x)$ of the Fourier transform of $f_i$ for $x$ in $GF(p)^{2m}$. More concisely, \begin{equation*} \lambda_i(x)=\hat {f_i}(x), \end{equation*} for $1 \leq i \leq p$ and for all $x$ in $GF(p)^{2m}$. If the graphs $\Gamma_i$ are strongly regular, and all of feasible Latin square type or all of feasible negative Latin square type, then the eigenvalues of $A(i)$ are $k_i = (N-1)r_i$, $N - r_i$, and $-r_i$, where $N$ and $r_i$ are as in Equation (\ref{eqn:LSTdefn2}). \begin{proposition} \label{prop:Di*size} Let $f \colon GF(p)^{2m} \rightarrow GF(p)$ be a feasible Latin or negative Latin square type $p$-ary function, where $p$ is a prime number greater than 2, and $m$ is a positive integer. Then the set $D_i^*$ is given by \[ D_i^* = \{ x \in GF(p)^{2m} \setminus\{0\} \ | \ \hat {f_i}(x) = N - r_i\}, \] for $1 \leq i \leq p$. Furthermore, the cardinality $k_i^*$ of $D_i^*$ is \[ k_i^* = |D_i^*| = (N-1)r_i = |D_i|=k_i. 
\] \end{proposition} \begin{proof} The description of $D_i^*$ follows directly from the description of the dual function $f^*$ in the proof of Proposition \ref{propn:f^*defn} (noting that $\lambda_i(x) = \hat {f_i}(x)$). For every nonzero $x$ in $GF(p)^{2m}$, there is a unique distinguished index $i$ in $\{1, 2, \dots , p\}$ such that $\lambda_i(x)=N-r_i$, by Proposition \ref{propn:distinguished-eigenvalue}. The multiplicity of $N-r_i$ as an eigenvalue of $\Gamma_i$ on the orthogonal complement of the all 1's vector is given by $(N-1)r_i$ (see \S\ref{subsec:LSTNLST}). \end{proof} \subsection{Component functions of dual functions} We formulate an expression for the $i$th component function of a dual function $f^*$, in terms of the Fourier transform of the $i$th component function of the original function $f$. The $i$th component function of $f^*$ is the function $f^*_i \colon GF(p)^{2m} \rightarrow \mathbb{C}$ given by \[ f_i^*(x) = \begin{cases} 1 &\qquad \text{ if $x \in D_i^*$,}\\ 0 & \qquad \text{otherwise,} \end{cases} \] for $1 \leq i \leq p$. The following result is an immediate corollary of Proposition \ref{prop:Di*size}. \begin{corollary} \label{cor:Di*f_hat} Let $f \colon GF(p)^{2m} \rightarrow GF(p)$ be a feasible Latin or negative Latin square type $p$-ary function, where $p$ is a prime number greater than 2, and $m$ is a positive integer. Then $f_i^*(0)=0$, and \begin{equation*} f_i^*(x)=1 \quad \text{if and only if} \quad \hat {f_i}(x)=N-r_i, \end{equation*} for $x$ in $GF(p)^{2m} \setminus\{0\}$ and $1 \leq i \leq p$, where $N$ and $r_i$ are as in Equation (\ref{eqn:LSTdefn2}). \end{corollary} In order to relate the component functions of the dual $f^*$ to the component functions of $f$, we introduce a delta function and its Fourier transform. 
We define the {\it delta function at $0$ on $GF(p)^{2m}$} to be the function $\delta_0 \colon GF(p)^{2m} \to \mathbb{C}$ given by \begin{equation} \label{defn:delta} \delta_0(x) = \begin{cases} 1 & \qquad \text{if $x=0$,}\\ 0 & \qquad \text{otherwise.} \end{cases} \end{equation} We denote by $\iota$ the constant function $\iota \colon GF(p)^{2m} \to \mathbb{C}$ given by $\iota(x) = 1$, for $x$ in $GF(p)^{2m}$. The next lemma follows directly from the definition of the Fourier transform. \begin{lemma} \label{lem:Fouriotadelta} The Fourier transform of the delta function at $0$ on $GF(p)^{2m}$ is given by \[ \hat \delta_0 = \iota, \] where $\iota$ is the function defined above, with constant value 1 on $GF(p)^{2m}$. The Fourier transform of $\iota$ is given by \[ \hat \iota = p^{2m} \delta_0. \] \end{lemma} The following proposition expresses the component functions of the dual $f^*$ in terms of the Fourier transforms of the component functions of $f$. \begin{proposition} \label{propn:fi*2} Let $f \colon GF(p)^{2m} \rightarrow GF(p)$ be a feasible Latin or negative Latin square type $p$-ary function, where $p$ is a prime number greater than 2, and $m$ is a positive integer. Then the $i$th component function $f_i^*$ of the dual of $f$ satisfies \begin{equation} \label{eqn:fi*2} f_i^*(x)=\frac 1 N \hat {f_i}(x) + \frac {r_i} N - r_i \delta_0(x), \end{equation} for $1 \leq i \leq p$, where $N$ and $r_i$ are as in Equation (\ref{eqn:LSTdefn2}), and $\hat {f_i}$ is the Fourier transform of the $i$th component function of $f$. \end{proposition} \begin{proof} Recall that $\hat {f_i}(x)=\lambda_i(x)$, where $\lambda_i(x)$ is the eigenvalue of the adjacency matrix of $\Gamma_i$ corresponding to the Hadamard eigenvector $h(x)$. Thus, by Equations (\ref{eqn:thetatau1}) and (\ref{eqn:thetatau2}) of \S \ref{subsec:LSTNLST}, $\hat{f_i}(0)=(N-1)r_i$, and $\hat{f_i}(x)$ equals either $N-r_i$ or $-r_i$ for $x \neq 0$. 
By Proposition \ref{prop:Di*size}, for $x \neq 0$, $\hat{f_i}(x)=N-r_i$ if and only if $x \in D_i^*$, where $D_i^*$ is as in Equation (\ref{defn:Di*}). Therefore, \begin{align*} \frac 1 N \hat {f_i}(x) + \frac {r_i} N - r_i \delta_0(x) &= \begin{cases} \frac 1 N (N-1)r_i + \frac {r_i} N - r_i &\ \text{if $x=0$,}\\ \frac 1 N (N-r_i) + \frac {r_i} N - 0 &\ \text{if $x\in D_i^*$,}\\ \frac 1 N (-r_i) + \frac {r_i} N - 0 &\ \text{otherwise,} \end{cases}\\ &= \begin{cases} 1 & \ \text{if $x \in D_i^*$,}\\ 0 &\ \text{otherwise,} \end{cases}\\ &=f_i^*(x). \end{align*} \end{proof} \subsection{Eigenvalues for dual functions} We calculate the eigenvalues of the component Cayley graphs of dual functions, using the Fourier transforms of the corresponding component functions described in Proposition \ref{propn:fi*2} above. We first review some basic properties of inverse Fourier transforms, which we will need in the proof of the next proposition. Recall that the Fourier transform of a function $g \colon GF(p)^n \to \mathbb{C}$ is the function $\hat g \colon GF(p)^n \to \mathbb{C}$ given by \[ \hat g (x) = \sum_{y \in GF(p)^n} g(y) \zeta^{- \langle x,y \rangle}. \] The {\it inverse Fourier transform} of a function $h \colon GF(p)^n \to \mathbb{C}$ is the function $\check h \colon GF(p)^n \to \mathbb{C}$ given by \[ \check h (x) =\frac 1 {p^n} \sum_{y \in GF(p)^n} h(y) \zeta^{\langle x,y \rangle}. \] The Fourier transform and its inverse satisfy \[ \check{\hat g} (x) = g(x). \] If $g$ is an even function, i.e., if $g(-x)=g(x)$ for all $x$ in $GF(p)^n$, then \begin{align*} \check g (x) &=\frac 1 {p^n} \hat g (x). \end{align*} Thus, if $g$ is even, \begin{equation} \label{eqn:ghathat} \hat{\hat g} (x) = p^n g(x). \end{equation} The next proposition is dual to Proposition \ref{propn:fi*2} in the sense that it expresses the Fourier transforms of the component functions of the dual function $f^*$ in terms of the component functions of the original function $f$.
\begin{proposition} \label{propn:Fourierfi*} Let $f \colon GF(p)^{2m} \rightarrow GF(p)$ be a feasible Latin or negative Latin square type $p$-ary function, where $p$ is a prime number greater than 2, and $m$ is a positive integer. Then the Fourier transform of the $i$th component function $f_i^*$ of the dual of $f$ satisfies \[ (f_i^*)^\wedge (x) = N f_i(x)+N r_i \delta_0(x) - r_i, \] for $1 \leq i \leq p$, where $N$ and $r_i$ are as in Equation (\ref{eqn:LSTdefn2}). \end{proposition} \begin{proof} Taking the Fourier transform of each term of Equation (\ref{eqn:fi*2}) of Proposition \ref{propn:fi*2}, we obtain \begin{equation*} ( f_i^*)^\wedge (x) = \frac 1 N \hat{\hat {f_i}} (x) + \frac {r_i} N \hat \iota(x) - r_i \hat {\delta_0} (x), \end{equation*} where $\iota$ is the constant function with value 1 on $GF(p)^{2m}$. By Equation (\ref{eqn:ghathat}), $\hat{\hat {f_i}} (x) =p^{2m}f_i(x) = N^2f_i(x)$. The Fourier transforms of $\iota$ and $\delta_0$ are given by Lemma \ref{lem:Fouriotadelta} as $\hat \iota(x)= N^2 \delta_0(x)$ and $\hat {\delta_0}(x)=1$ . Therefore \begin{equation*} ( f_i^*)^\wedge (x) = N f_i(x) + N r_i \delta_0(x) -r_i. \end{equation*} \end{proof} From Proposition \ref{propn:Fourierfi*}, we obtain the eigenvalues of the component Cayley graphs of the dual function. \begin{corollary} \label{cor:Gamma*} Let $f \colon GF(p)^{2m} \rightarrow GF(p)$ be a feasible Latin or negative Latin square type $p$-ary function, where $p$ is a prime number greater than 2, and $m$ is a positive integer. Then the eigenvalues of the component Cayley graph $\Gamma_i^*$ of the dual of $f$ and their multiplicities are the same as those of $\Gamma_i$, for $1 \leq i \leq p$. 
Specifically, \begin{enumerate} \item the eigenvalue $k^*_i=(N-1)r_i$ corresponds to the all 1's eigenvector, \item the eigenvalue $N-r_i$ occurs with multiplicity $(N-1)r_i$ on the vector space orthogonal to the all 1's vector, and \item the eigenvalue $-r_i$ occurs with multiplicity $(N-1)(N+1-r_i)$ on the vector space orthogonal to the all 1's vector, \end{enumerate} where $N$ and $r_i$ are as in Equation (\ref{eqn:LSTdefn2}). \end{corollary} \begin{proof} Let $f_i^*$ be the $i$th component function of the dual of $f$. The eigenvalues of $\Gamma_i^*$ are the values $(f_i^*)^\wedge(x)$ of the Fourier transform of $f_i^*$, for $x$ in $GF(p)^{2m}$. By Proposition \ref{propn:Fourierfi*}, the Fourier transform of $f_i^*$ is given by \[ (f_i^*)^\wedge (x) = N f_i(x)+N r_i \delta_0(x) - r_i. \] Thus, \begin{align*} (f_i^*)^\wedge (x) &= \begin{cases} N r_i - r_i & \qquad \text{ if $x=0$,}\\ N - r_i & \qquad \text{ if $x \in D_i$,}\\ - r_i & \qquad \text{ if $x \in GF(p)^{2m} \setminus \left( \{0\} \cup D_i \right)$,} \end{cases} \end{align*} so $\Gamma^*_i$ has the same eigenvalues as $\Gamma_i$. The multiplicity of $N-r_i$ on the vector space orthogonal to the all 1's vector is $|D_i| = (N-1)r_i$. The multiplicity of $- r_i$ on the vector space orthogonal to the all 1's vector is \[ |GF(p)^{2m} \setminus \left( \{0\} \cup D_i \right) | = N^2 - 1 - (N-1) r_i = (N-1)(N+1-r_i). \] Therefore, the multiplicities of the eigenvalues of $\Gamma_i^*$ are the same as those of $\Gamma_i$. \end{proof} \subsection{Duality theorem} In this section we prove that the dual of a feasible Latin or negative Latin square type function is also a feasible Latin or, respectively, negative Latin square type function. \begin{theorem} \label{thm:duals} Let $f \colon GF(p)^{2m} \rightarrow GF(p)$ be an even function such that $f(0)=0$, where $p$ is a prime number greater than 2, and $m$ is a positive integer.
Suppose that the component Cayley graphs $\Gamma_i$ of $f$ are all strongly regular and are either all of feasible Latin square type, or all of feasible negative Latin square type. Then the component Cayley graphs $\Gamma_i^*$ of the dual function $f^*$ are also all strongly regular, and the parameters of $\Gamma_i^*$ are the same as the parameters of $\Gamma_i$. \end{theorem} \begin{proof} Recall that a simple regular graph which is not complete or edgeless and which has exactly two distinct eigenvalues corresponding to eigenvectors orthogonal to the all 1's vector must be a strongly regular graph (see, for example, Brouwer and Haemers, \cite[Theorem 9.1.2]{BH11}). Thus, it follows directly from Corollary \ref{cor:Gamma*} that the component Cayley graphs $\Gamma_i^*$ are all strongly regular, for $1 \leq i \leq p$, since each has at most 3 eigenvalues. Of the parameters $(\nu,k_i^*, \lambda_i^*, \mu_i^*)$ for $\Gamma_i^*$, the parameters $\nu=N^2$ and $k_i^* = (N-1)r_i$ are known (where, as above, $N = \pm p^m$, $r_i = \frac N p$ for $1 \leq i \leq p-1$, and $r_p = \frac N p + 1$). We will calculate the parameters $\mu_i^*$ and $\lambda_i^*$ from the eigenvalues of $\Gamma_i^*$, using Corollary \ref{cor:Gamma*}. We denote the two distinct eigenvalues of $\Gamma^*_i$ on the vector space orthogonal to the all 1's vector by $\theta_i^*$ and $\tau_i^*$ (where by convention $\theta_i^* >\tau_i^*$, although this order is not needed here). By Corollary \ref{cor:Gamma*}, one of these two eigenvalues is $N-r_i$, and the other is $-r_i$. Therefore, by Equation (\ref{eqn:thetatau0}), \[ \mu_i^* = k_i^* + \theta_i^*\tau_i^* = r_i^2-r_i, \] and \[ \lambda_i^* = \mu_i^* + \theta_i^*+\tau_i^* = N + r_i^2-3r_i. \] It follows that the graphs $\Gamma_i^*$ are either all of feasible Latin square type or all of feasible negative Latin square type. \end{proof} \section{Amorphic bent functions} In this section we consider even $p$-ary functions which determine association schemes.
We give a criterion for such a function to be bent, in terms of structure constants of its association scheme. We show that a function which determines an amorphic association scheme and whose component Cayley graphs are of feasible degrees must be bent. It is well-known that a $p$-ary function $f \colon GF(p)^n \rightarrow GF(p)$ is bent if and only if the derivative functions given by $\mathcal D_b f(x) = f(x+b)-f(x)$ are balanced, for all nonzero $b$ in $GF(p)^n$ (i.e., $\mathcal D_b f$ takes all values equally often). In the case that $f$ determines an association scheme, we show that the number of times $\mathcal D_b f$ takes each value in $GF(p)$ can be expressed in a natural way in terms of structure constants of the association scheme. The structure constants of amorphic association schemes were described by Ito, Munemasa, and Yamada \cite{IMY91}. By summing the appropriate structure constants, we show that $\mathcal D_b f$ is balanced if $f$ is amorphic. \subsection{Association schemes} \label{sec:defn-assoc-scheme} Let $V$ be a finite set. A {\it binary relation} $R$ on $V$ is a subset of $V \times V$. The {\it dual} of a relation $R$ is the set $R^* = \{(x,y)\in V\times V\ |\ (y,x)\in R\}$. Let $\{R_0, R_1, \dots, R_p\}$ be a set of disjoint binary relations on $V$ whose union is $V \times V$, such that $R_0=\{ (x,x)\in V\times V\ |\ x\in V\}$, and such that for each $i$ there is a $j$ for which $R_i^*=R_j$. For $0 \leq i, j, k \leq p$ and for $(x,y)$ in $R_k$, let \[ \rho_{ij}^k(x,y) = |\{z \in V \ | \ \text{$(x,z) \in R_i$ and $(z,y) \in R_j$}\} |. \] We say that the collection $(V, R_0, R_1, \dots , R_p)$ forms a {\it $p$-class association scheme} if the numbers $\rho_{ij}^k(x,y)$ are independent of which pair $(x,y)$ we choose in $R_k$ (hence depend only on $i$, $j$, and $k$). The numbers $\rho_{ij}^k$ are called the {\it structure constants} or {\it intersection numbers} of the association scheme. 
If, in addition, $R_i^*=R_i$ for all $i$, then we say that the association scheme is {\it symmetric}. A symmetric association scheme determines a collection of undirected graphs $\{ \Gamma_1, \Gamma_2 , \dots , \Gamma_p\}$ on the vertex set $V$. We are interested in the case in which $V=GF(p)^n$ and the relations $R_i$ correspond to the component Cayley graphs $\Gamma_i$ of a $p$-ary function. Let $f \colon GF(p)^n \rightarrow GF(p)$ be an even function such that $f(0)=0$, where $p$ is a prime number greater than 2, and $n$ is a positive integer. Associated with $f$ is a set of binary relations $\{ R_0, R_1, \dots , R_{p-1},R_p\}$ on $V$ given by \[ R_0=\{ (x,x)\in V\times V\ |\ x\in V\}, \] \[ R_i = \{ (x,y) \in V \times V \ | \ f(x-y)=i\} \] for $1 \leq i \leq p-1$, and \[ R_p=\{ (x,y) \in V \times V \ | \ \text{$f(x-y) = 0$ and $x \neq y$} \}. \] Note that, by the assumption that $f$ is even, these relations are all self-dual. Furthermore, \begin{equation*} (x,y) \in R_i \quad \text{if and only if} \quad x-y \in D_i, \end{equation*} where $D_0 = \{0\}$, $D_i = f^{-1}(i)$ for $1 \leq i \leq p-1$, and $D_p = f^{-1}(0) \setminus \{0\}$. The relations $R_i$, for $1 \leq i \leq p$, correspond to the component Cayley graphs $\Gamma_i$ of $f$. Recall that we denote by $A(i)$ the adjacency matrix of the component Cayley graph $\Gamma_i$ of $f$. Let $A(0)=I$, the $\nu \times \nu$ identity matrix, where $\nu = p^n$. The matrices $A(0), A(1) , \dots , A(p)$ can also be thought of as the adjacency matrices of the relations $R_0, R_1, \dots , R_p$. The sum of these adjacency matrices is the all 1's matrix $J$. The condition that $f$ determines a $p$-class symmetric association scheme is equivalent to the existence of nonnegative integers $\rho_{ij}^k$ such that \begin{equation*} A(i)A(j) = \sum_{k=0}^p \rho_{ij}^k A(k) \end{equation*} for $0 \leq i, j \leq p$.
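This matrix condition can be verified by brute force for a concrete function. The following sketch (pure Python, not part of the original development) builds the relations determined by the function $f(x_0,x_1)=-x_0^2+x_1^2$ on $GF(3)^2$ of Example \ref{ex:gf32bent} below, checks that each product $A(i)A(j)$ is constant on every relation $R_k$, and reads off the structure constants $\rho_{ij}^k$:

```python
from itertools import product

p = 3
V = list(product(range(p), repeat=2))            # the vertex set GF(3)^2

def f(x):                                        # f(x0, x1) = -x0^2 + x1^2
    return (-x[0] ** 2 + x[1] ** 2) % p

def cls(d):                                      # class of a difference x - y
    if d == (0, 0):
        return 0                                 # R_0: the diagonal
    return f(d) if f(d) != 0 else p              # R_p: f = 0 and x != y

n = len(V)
# adjacency matrices A(0), ..., A(p) of the relations R_0, ..., R_p
A = [[[1 if cls(((x[0] - y[0]) % p, (x[1] - y[1]) % p)) == i else 0
       for y in V] for x in V] for i in range(p + 1)]

def mul(X, Y):
    return [[sum(X[a][c] * Y[c][b] for c in range(n)) for b in range(n)]
            for a in range(n)]

rho = {}
for i in range(p + 1):
    for j in range(p + 1):
        P = mul(A[i], A[j])
        for k in range(p + 1):
            # the entries of A(i)A(j) must be constant on each relation R_k
            vals = {P[a][b] for a in range(n) for b in range(n)
                    if A[k][a][b] == 1}
            assert len(vals) == 1
            rho[i, j, k] = vals.pop()

print(rho[1, 1, 1], rho[3, 3, 3], rho[2, 3, 1])  # 1 1 2
```

The values agree with the arrays listed in Example \ref{ex:gf32bent}, and the entries $\rho_{ii}^0 = |D_i|$ come out as $2$, $2$ and $4$.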
In this case, the matrices $A(0),A(1), \dots , A(p)$ generate a Bose-Mesner algebra (see, e.g., \cite[\S 6.7.1]{JM17}). The structure constants can be calculated from the adjacency matrices by the following formula (see \cite[Chapter 17]{CvL80}): \begin{equation} \label{eqn:rhoijkformula} \rho^k_{ij} = \left(\dfrac{1}{p^n|D_k|}\right) tr(A(i)A(j)A(k)), \end{equation} for $1 \leq i, j, k \leq p$, where $tr$ denotes the matrix trace, and the sets $D_k$ are as above. \subsection{Amorphic association schemes and functions} Let $V$ be a finite set, and let $\mathcal R = \{R_0, R_1, \dots, R_p\}$ be a set of disjoint binary relations on $V$ whose union is $V \times V$. A set of disjoint binary relations $\mathcal T = \{T_0, T_1, \dots , T_m\}$ whose union is $V \times V$ is called a {\it fusion} of $\mathcal R$ if each $T_i$ is a union of elements of $\mathcal R$. An association scheme $(V, R_0, R_1, \dots , R_p)$ is called {\it amorphic} if for each fusion $\mathcal T$ of $\mathcal R$, the collection $(V, T_0, T_1, \dots , T_m)$ is also an association scheme. A 2-class association scheme is trivially amorphic. Consider an even function $f \colon GF(p)^n \rightarrow GF(p)$ such that $f(0)=0$. Let $V=GF(p)^n$, and let $\{R_0, R_1, \dots , R_p\}$ be the binary relations determined by $f$, as described above. We call $f$ {\it amorphic} if $(V, R_0, R_1, \dots , R_p)$ is an amorphic association scheme. \subsection{van Dam and Gol'fand--Ivanov--Klin theorems} \label{subsec:vDam} There is a close relationship between amorphic association schemes and strongly regular graphs of Latin and negative Latin square type. A theorem of Gol'fand, Ivanov, and Klin from \cite{GIK94} (which we learned of from van Dam and Muzychuk \cite{vDM10}), states that the graphs determined by a $p$-class amorphic association scheme, with $p\geq 3$, are all strongly regular, and are either all of Latin square type, or all of negative Latin square type. 
Van Dam \cite[Theorem 3]{vD03} proved the converse: a decomposition of a complete graph into strongly regular graphs, all of Latin square type, or all of negative Latin square type, determines an amorphic association scheme. Thus, $p$-ary functions whose component Cayley graphs are strongly regular and all of Latin square type or all of negative Latin square type are amorphic functions. \subsection{Balanced derivative criterion for bent functions} \label{subsec:balanced-derivative} Let $f \colon GF(p)^n \rightarrow GF(p)$ be a $p$-ary function. The {\it derivative} function $\mathcal D_bf \colon GF(p)^n \rightarrow GF(p)$ is defined by \[ \mathcal D_bf(x) = f(x+b) - f(x). \] If $f$ is linear, then $\mathcal D_bf$ is constant. The following result, which is well-known (see, e.g., \cite[Proposition 6.3.9]{JM17}), implies that bent functions are in some sense maximally non-linear. \begin{proposition} \label{propn:bent-balanced-derivative} The function $f$ is bent if and only if $\mathcal D_bf$ is balanced for all $b \neq 0$, i.e., if $\mathcal D_bf$ takes each value in $GF(p)$ equally often. \end{proposition} \subsection{Structure constant criterion for bent functions} In this section, we consider even $p$-ary functions whose component Cayley graphs determine symmetric $p$-class association schemes. We state a criterion for such a function to be bent, involving sums of structure constants of these schemes. Let $f \colon GF(p)^n \rightarrow GF(p)$ be an even function with $f(0)=0$. Suppose that $f$ determines an association scheme with structure constants $\rho_{ij}^k$, as described in \S \ref{sec:defn-assoc-scheme}. Let $D_i = f^{-1}(i)$, for $1 \leq i \leq p-1$, and let $D_p = f^{-1}(0)\setminus \{0\}$. \begin{proposition} \label{propn:balanced} Suppose that $b \in D_i$, for some $i$ such that $1 \leq i \leq p$. 
The number of times $\mathcal D_bf$ takes the value $j$, for $1 \leq j \leq p-1$, is \begin{equation*} \left( \sum_{k=0}^{p} \rho_{j+k \ (\text{mod}\ p), k}^i \right) + \rho_{p,p-j}^i. \end{equation*} \end{proposition} \begin{proof} Suppose that $x \in D_k$ and $x+b \in D_m$, for some $m$ and $k$ such that $0 \leq m,k \leq p$. Then $f(x+b) - f(x)=j$ if and only if $m-k \ (\text{mod} \ p )= j$. Either $m = j+k \ (\text{mod} \ p)$, or $m=p$ and $k = p-j$. Recall that the condition that $(x,y) \in R_\ell$ is equivalent to the condition that $x-y \in D_\ell$. Therefore, $(b,0) \in R_i$ and \begin{align*} \rho_{m,k}^i &=|\{z \in GF(p)^n \ | \ \text{$(b,z) \in R_m$ and $(z,0) \in R_k$}\}|\\ &=| \{z \in GF(p)^n \ | \ \text{$b-z \in D_m$ and $z \in D_k$}\}|. \end{align*} Let $x = -z$. Since $f$ is even, $z \in D_k$ if and only if $-z \in D_k$. Therefore, \begin{equation*} \rho_{m,k}^i =| \{x \in GF(p)^n \ | \ \text{$b+x \in D_m$ and $x \in D_k$}\}|. \end{equation*} Summing over all pairs of indices $(m, k)$ such that $m-k \ (\text{mod} \ p) = j$, we obtain the total number of $x$ in $GF(p)^n$ for which $f(x+b)-f(x)=j$: \[ \sum_{k=0}^{p} \rho_{j+k \ (\text{mod} \ p), k}^i +\rho_{p,p-j}^i . \] \end{proof} From Proposition \ref{propn:balanced}, we obtain the following criterion for a function to be bent, in terms of structure constants of an association scheme. \begin{proposition} \label{propn:structureconstantcriterion} Let $f \colon GF(p)^n \rightarrow GF(p)$ be an even function with $f(0)=0$ that determines an association scheme with structure constants $\rho_{ij}^k$. Then $f$ is bent if and only if \[ \left( \sum_{k=0}^{p} \rho_{j+k \ (\text{mod} \ p), k}^i \right) + \rho_{p,p-j}^i =p^{n-1} \] for $1 \leq i \leq p$ and $1 \leq j \leq p-1$. 
\end{proposition} \subsection{Ito--Munemasa--Yamada theorem} If a $p$-ary function determines an amorphic association scheme, we can use a theorem of Ito, Munemasa, and Yamada \cite{IMY91} (which is formulated in more modern notation by van Dam and Muzychuk in \cite[Corollary~1]{vDM10}) to obtain the structure constants of this amorphic association scheme. Let $f \colon GF(p)^{2m} \rightarrow GF(p)$ be an even function with $f(0)=0$, where $p$ is a prime number greater than 2, and $m$ is a positive integer. Suppose that $f$ determines an amorphic association scheme. By the theorem of Gol'fand, Ivanov, and Klin \cite{GIK94} mentioned in Section \ref{subsec:vDam}, the component Cayley graphs $\Gamma_i$ of $f$ are strongly regular, and are either all of Latin square type, or all of negative Latin square type. Therefore, each graph $\Gamma_i$ has parameters of the form \begin{equation*} (\nu,k,\lambda, \mu)=(N^2, (N-1)r_i, N+r_i^2-3r_i, r_i^2-r_i), \end{equation*} where $N$ equals $p^m$ in the Latin square case and $-p^m$ in the negative Latin square case, and each $r_i$ is an integer with the same sign as $N$. The structure constants $\rho_{jk}^i$ of an association scheme satisfy $\rho_{jk}^i=\rho_{kj}^i$. Also, $\rho_{0j}^i=\delta_{ij}$ for $1 \leq i,j \leq p$, where $\delta_{ij}=0$ if $i \neq j$, and $\delta_{ij}=1$ if $i=j$. We will use the following theorem of \cite{IMY91} to obtain the remaining structure constants.
\begin{theorem}[Ito, Munemasa, Yamada] \label{thm:IMY} If $f$ determines an amorphic association scheme, then the intersection numbers of the scheme, in the notation above, satisfy \begin{itemize} \item[(a)] $\rho_{ii}^i = N +r_i^2-3r_i$ if $1 \leq i \leq p$, \item[(b)] $\rho_{jj}^i = (r_j-1)r_j$ if $i$ and $j$ are distinct and $1 \leq i,j \leq p$, \item[(c)] $\rho_{ij}^i=\rho_{ji}^i=(r_i-1)r_j$ if $i$ and $j$ are distinct and $1 \leq i,j \leq p$, and \item[(d)] $\rho_{jk}^i=\rho_{kj}^i=r_jr_k$ if $i$, $j$, and $k$ are distinct and $1 \leq i,j,k \leq p$. \end{itemize} \end{theorem} The proof below is included for completeness, and is only for the special case of interest to us, when the component Cayley graphs of $f$ are of feasible degrees. In this case, $r_i = \frac N p$ for $1 \leq i \leq p-1$, and $r_p = \frac N p +1$. \begin{proof} When the component Cayley graphs of $f$ are of feasible degrees, these structure constant formulas may be derived from Proposition \ref{propn:distinguished-eigenvalue} and Equation (\ref{eqn:rhoijkformula}). We note that \[ tr(A(i)A(j)A(k)) = \sum_{x \in GF(p)^n} \lambda_i(x) \lambda_j(x) \lambda_k(x), \] where $\lambda_i(x)$ is the eigenvalue of $\Gamma_i$ corresponding to the Hadamard vector $h(x)$ (see \S \ref{subsec:Had}). For $x \neq 0$, each eigenvalue $\lambda_i(x)$ is either of the form $N - r_i$ or $-r_i$. For each fixed $x \neq 0$, there is exactly one value of $i$ such that $\lambda_i(x)$ is of the form $N-r_i$. Furthermore, the multiplicities of each eigenvalue are also known. Let $i$, $j$, and $k$ be distinct integers such that $1 \leq i, j, k \leq p$. By a straightforward counting argument, we find that \[ tr\left(A(i)^3\right) = N^2 (N-1)r_i (N+r_i^2-3r_i), \] \[ tr\left(A(i)^2 A(j) \right) = N^2(N-1) r_i(r_i-1)r_j, \] and \[ tr(A(i) A(j) A(k)) = N^2 (N-1)r_i r_j r_k. \] Substituting these three cases into Equation (\ref{eqn:rhoijkformula}), we obtain the desired structure constants. 
\end{proof} \subsection{Proof that feasible amorphic functions are bent} In this section, we use our structure constant criterion for bent functions to prove that if an even $p$-ary function of $2m$ variables, vanishing at 0, with component Cayley graphs of feasible degrees is amorphic, then it is bent. As usual, the feasible degrees are those specified in \S \ref{subsec:feasible-degrees}. \begin{theorem} \label{thm:amorphic-bent} Suppose that $f \colon GF(p)^{2m} \rightarrow GF(p)$ is an even function such that $f(0)=0$, where $p$ is a prime number greater than 2, and $m$ is a positive integer, such that the component Cayley graphs of $f$ are of feasible degrees and $f$ determines an amorphic association scheme. Then $f$ is bent. \end{theorem} \begin{remark} {\rm Theorem \ref{thm:amorphic-bent} is equivalent to Theorem \ref{thm:main1}, due to the results of van Dam \cite{vD03} and Gol'fand, Ivanov, and Klin \cite{GIK94} discussed in \S \ref{subsec:vDam}, but we include both theorems because the proofs are different. } \end{remark} \begin{proof} We will use the structure constant criterion of Proposition \ref{propn:structureconstantcriterion} to show that $f$ is bent: we will show that \begin{equation} \label{eqn:proofsum} \left( \sum_{k=0}^{p} \rho_{t+k \ (\text{mod} \ p), k}^s \right) + \rho_{p,p-t}^s = p^{2m-1} \end{equation} for $1 \leq s \leq p$ and $1 \leq t \leq p-1$. To evaluate this sum, we use the theorem of Ito, Munemasa, and Yamada (Theorem \ref{thm:IMY}). We consider the following four cases: $s=p$; $s \neq p$ and $s=t$; $s \neq p$ and $s = p-t$; and $s \neq p$, $s \neq t$, and $s \neq p-t$ (this last case cannot occur for $p=3$). Terms of types (a) and (b) in Theorem \ref{thm:IMY} do not occur in the sum of Equation (\ref{eqn:proofsum}). In the following chart, we indicate how many times terms of each of the remaining types occur in the sum, in each of the four cases. There are a total of $p+2$ terms in each column. 
In the chart, we use the convention that $i$, $j$, and $k$ represent distinct values in the set $\{1, 2, \dots , p-1\}$. We also use the notation $r = \frac N p$. \begin{tabular}{|l|c|c|c|c|} \hline &\multicolumn {4} {c|} { Number of occurrences} \\ \hline Term type & \multirow{2}{*}{$s=p$} & \multirow{2}{*}{$s =t \neq p$} & \multirow{2}{*}{$s =p-t \neq p$} & $s \neq p, t, p-t$ \\ and value & & & & $p \neq 3$\\ \hline $\rho_{0i}^i =\rho_{i0}^i=1$ & 0 & 1 & 1 & 0\\ \hline $\rho_{0j}^i=\rho_{j0}^i=0$ & 0 & 1 & 1 & 2\\ \hline $\rho_{0i}^p=\rho_{i0}^p=0$ & 2 & 0 & 0 & 0\\ \hline $\rho_{ip}^p=\rho_{pi}^p=r^2$ & 2 & 0 & 0 & 0\\ \hline $\rho_{ij}^i=\rho_{ji}^i=r^2-r$ & 0 & 1 & 1 & 2\\ \hline $\rho_{pi}^i=\rho_{ip}^i=r^2-1$ & 0 & 1 & 1 & 0\\ \hline $\rho_{jp}^i=\rho_{pj}^i=r^2+r$ & 0 & 1 & 1 & 2\\ \hline $\rho_{jk}^i=\rho_{kj}^i=r^2$ & $0$ & $p-3$& $p-3$ & $p-4$ \\ \hline $\rho_{ij}^p=\rho_{ji}^p=r^2$ & $p-2$ & $0$& $0$ & $0$ \\ \hline \end{tabular} We see that for each column, the number of occurrences of each term type multiplied by the value sums to $pr^2 = p^{2m-1}$. For example, for the $s=p$ column, we obtain $2r^2+(p-2)r^2 = pr^2$. Therefore, $f$ is bent. \end{proof} \subsection{Examples} To illustrate some properties of amorphic bent functions, we include an example from \cite{CJMPW16}. Additional properties of the functions in this example are given in Examples \ref{ex:GF32fa} and \ref{ex:GF32fb}. We show a sample sum of structure constants of the type of Propositions \ref{propn:balanced} and \ref{propn:structureconstantcriterion}. We also give an example of a bent function which is not amorphic. \begin{example} (\cite{CJMPW16}) \label{ex:gf32bent} Let $f \colon GF(3)^2\to GF(3)$ be an even bent function with $f(0)=0$. Then $f(x_0,x_1)$ is equivalent to either $-x_0^2+x_1^2$ or $x_0^2+x_1^2$ under the action of $GL(2,GF(3))$ on $(x_0,x_1)$. In each case, the function $f$ determines an amorphic association scheme. 
\begin{enumerate} \item In the case of $-x_0^2+x_1^2$, the structure constants $\rho_{ij}^k$ are given in the following arrays.
\[ \begin{array}{cc}
\begin{array}{c|cccc} \rho_{ij}^0 & 0 & 1 & 2 & 3 \\ \hline 0 & 1 & 0 & 0 & 0 \\ 1 & 0 & 2 & 0 & 0 \\ 2 & 0 & 0 & 2 & 0 \\ 3 & 0 & 0 & 0 & 4 \\ \end{array} &
\begin{array}{c|cccc} \rho_{ij}^1 & 0 & 1 & 2 & 3 \\ \hline 0 & 0 & 1 & 0 & 0 \\ 1 & 1 & 1 & 0 & 0 \\ 2 & 0 & 0 & 0 & 2 \\ 3 & 0 & 0 & 2 & 2 \\ \end{array} \\ & \\
\begin{array}{c|cccc} \rho_{ij}^2 & 0 & 1 & 2 & 3 \\ \hline 0 & 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 & 2 \\ 2 & 1 & 0 & 1 & 0 \\ 3 & 0 & 2 & 0 & 2 \\ \end{array} &
\begin{array}{c|cccc} \rho_{ij}^3 & 0 & 1 & 2 & 3 \\ \hline 0 & 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 1 & 1 \\ 2 & 0 & 1 & 0 & 1 \\ 3 & 1 & 1 & 1 & 1 \\ \end{array} \\
\end{array} \]
\item In the case of $x_0^2+x_1^2$, the component Cayley graph $\Gamma_3$ is empty, and the structure constants $\rho_{ij}^k$ are given in the following arrays.
\[ \begin{array}{cc}
\begin{array}{c|ccc} \rho_{ij}^0 & 0 & 1 & 2 \\ \hline 0 & 1 & 0 & 0 \\ 1 & 0 & 4 & 0 \\ 2 & 0 & 0 & 4 \\ \end{array} &
\begin{array}{c|ccc} \rho_{ij}^1 & 0 & 1 & 2 \\ \hline 0 & 0 & 1 & 0 \\ 1 & 1 & 1 & 2 \\ 2 & 0 & 2 & 2 \\ \end{array} \\ & \\
\begin{array}{c|ccc} \rho_{ij}^2 & 0 & 1 & 2 \\ \hline 0 & 0 & 0 & 1 \\ 1 & 0 & 2 & 2 \\ 2 & 1 & 2 & 1 \\ \end{array} &
{\rm no} \ \rho_{ij}^3 \\
\end{array} \]
\end{enumerate} \end{example} \begin{example} {\rm Consider the structure constants for $-x_0^2+x_1^2$ given above. The expression in Proposition \ref{propn:balanced}, for $i=1$ and $j=2$, is \[ \rho^1_{2,0}+\rho^1_{0,1}+\rho^1_{1,2}+\rho^1_{2,3}+\rho^1_{3,1}=3. \] The other sums from Proposition \ref{propn:balanced} can be calculated from the arrays above in a similar manner. } \end{example} \begin{example} \label{ex:GF52nonexample} {\rm Consider the function $f \colon GF(5)^2 \to GF(5)$ given by \[ f(x_0,x_1)=3x_0^4+2x_0^2+2x_0 x_1.
\] It can be checked that $f$ is bent with \begin{align*} D_1&=\{ (1, 3), (2, 0), (3, 0), (4, 2)\}\\ D_2&=\{(1, 1), (2, 4), (3, 1), (4, 4)\}\\ D_3&=\{(1, 4), (2, 3), (3, 2), (4, 1)\}\\ D_4&=\{(1, 2), (2, 2), (3, 3), (4, 3)\}\\ D_5&=\{(0, 1), (0, 2), (0, 3), (0, 4), (1, 0), (2, 1), (3, 4), (4, 0)\}. \end{align*} However, the graphs $\Gamma_1$, $\Gamma_2$, and $\Gamma_4$ have 6 distinct eigenvalues, $\Gamma_3$ has 2 distinct eigenvalues, and $\Gamma_5$ has 7 distinct eigenvalues, so the graphs are not all strongly regular. By the theorem of Gol'fand, Ivanov, and Klin \cite{GIK94} (see \S \ref{subsec:vDam}), $f$ is not amorphic. } \end{example} We give more examples in \S \ref{sec:manyexamples}. \section{Orthogonal arrays and bent functions} Orthogonal arrays are closely related to strongly regular graphs of Latin square type, and may be used to construct amorphic association schemes. See \cite[\S 10.4]{GR01} and \cite{vDM10} for further background on these topics. We will describe how to construct bent functions using orthogonal arrays. \subsection{Orthogonal arrays} Let $S$ be a set of size $N$. An {\it orthogonal array of size $r \times N^2$ with entries in $S$} consists of $r$ rows of $N^2$ entries from $S$, such that for any two rows, the $N^2$ ordered pairs determined by the columns are all distinct. Such an array is denoted $OA(r,N)$. We are primarily interested in orthogonal arrays of size $(N+1) \times N^2$, where $S=GF(p)^m$, $N=p^m$, and $p$ is prime. \begin{example} \label{ex:OAGF32} {\rm Let $S=GF(3)$, $r=4$, and $N=3$. The following is an $OA(4,3)$ with entries in $S$. \begin{equation*} \mathcal O \mathcal A = \begin{array}{c c c c c c c c c c c} 0&0&0&\ &1&1&1&\ &2&2&2\\ 0&1&2&\ &0&1&2&\ &0&1&2\\ 0&1&2&\ &1&2&0&\ &2&0&1\\ 0&1&2&\ &2&0&1&\ &1&2&0 \end{array} \end{equation*} } \end{example} \subsection{Latin square type graphs from orthogonal arrays} We can form a graph $\Gamma$ from an orthogonal array $OA(r,N)$ as follows.
The vertices of $\Gamma$ are the columns of the array. Two distinct vertices are connected by an edge exactly when the columns have the same entry in one row. It is well-known that the graph $\Gamma$ is either complete (in the case $r=N+1$) or strongly regular of Latin square type. We include a proof for the convenience of the reader. We then give examples in which we construct graphs from orthogonal arrays and use these graphs to construct amorphic bent functions. \begin{lemma} The graph $\Gamma$ determined by an $OA(r,N)$ is either complete or strongly regular of Latin square type with parameters \[ \left( N^2, (N-1)r, N+r^2-3r,r^2-r \right). \] \end{lemma} \begin{proof} Consider any column $v$ of the array. The $i$th entry of $v$ occurs in exactly $N-1$ other locations in row $i$. By the definition of an orthogonal array, two columns can agree in at most one row. Thus, vertex $v$ has $(N-1)r$ neighbors. If $r=N+1$, then each vertex $v$ has $N^2-1$ neighbors, and $\Gamma$ is complete. Suppose that $v$ and $w$ are neighbors, i.e., $v$ and $w$ are distinct columns, with equal entries in some row $i$. A column $u$ which is a neighbor of both $v$ and $w$ agrees with each of $v$ and $w$ in exactly one row. There are $N-2$ neighbors $u$ that have the same entry in row $i$ as $v$ and $w$. Any other neighbor $u$ of $v$ and $w$ must agree with $v$ in some row $j\neq i$ and with $w$ in some row $k\neq i,j$. There are $(r-1)(r-2)$ such ordered pairs $(j,k)$, each corresponding to exactly one neighbor $u$ of $v$ and $w$. Thus there are $N+r^2-3r$ common neighbors of $v$ and $w$. Finally, suppose that $\Gamma$ is not complete, and that $v$ and $w$ are distinct columns which are not adjacent. A neighbor $u$ of $v$ and $w$ must agree with $v$ in some row $j$ and with $w$ in some row $k \neq j$. There are $r(r-1)$ such ordered pairs $(j,k)$, each corresponding to exactly one neighbor $u$ of $v$ and $w$. 
Thus $v$ and $w$ have $r(r-1) = r^2-r$ common neighbors, and $\Gamma$ is strongly regular with the stated parameters.
\end{proof} \begin{example} \label{ex:K9} {\rm The graph $\Gamma$ determined by the orthogonal array $\mathcal O \mathcal A = OA(4,3)$ of Example \ref{ex:OAGF32} is a complete graph on 9 vertices. } \end{example} \begin{example} \label{ex:negpos} {\rm We may partition the orthogonal array $\mathcal O \mathcal A$ of Example \ref{ex:OAGF32} into two smaller orthogonal arrays: \begin{equation*} \mathcal O \mathcal A_1 \ = \ \begin{array}{c c c c c c c c c c c} 0&0&0&\ &1&1&1&\ &2&2&2\\ 0&1&2&\ &0&1&2&\ &0&1&2 \end{array} \end{equation*} and \begin{equation*} \mathcal O \mathcal A_2 \ = \ \begin{array}{c c c c c c c c c c c} 0&1&2&\ &1&2&0&\ &2&0&1\\ 0&1&2&\ &2&0&1&\ &1&2&0 \end{array} . \end{equation*} We identify the vertices of the graphs $\Gamma_1$ and $\Gamma_2$ corresponding to $\mathcal O \mathcal A_1$ and $\mathcal O \mathcal A_2$ with the vertices of the graph $\Gamma$ corresponding to $\mathcal O \mathcal A$ in the obvious way. The graphs $\Gamma_1$ and $\Gamma_2$ are both strongly regular with parameters $( 9, 4, 1, 2 )$. They are of Latin square type with $N=3$, $r=2$, and of negative Latin square type with $N=-3$, $r=-1$. They form a strongly regular decomposition of the complete graph on $9$ vertices. The negative Latin square type decomposition is feasible. By van Dam's theorem (see \S \ref{subsec:vDam}), this graph decomposition determines an amorphic association scheme. Let us use the first two entries in each column of $\mathcal O \mathcal A$ to identify the 9 vertices of these graphs with elements of $GF(3)^2$. The neighbors of $(0,0)$ in $\Gamma_1$ form the set \[ D_1 = \{(0,1), (0,2), (1,0), (2,0)\}. \] The neighbors of $(0,0)$ in $\Gamma_2$ form the set \[ D_2 = \{ (1,1), (2,2), (1,2), (2,1) \}. 
\] Now, let us define an even function $f \colon GF(3)^2 \to GF(3)$ by setting \[ f(x) = \begin{cases} 0 &\qquad \text{if $x=(0,0)$,}\\ 1 & \qquad \text{if $x \in D_1$,}\\ 2 & \qquad \text{if $x \in D_2$.} \end{cases} \] The graphs $\Gamma_1$ and $\Gamma_2$ are component Cayley graphs of $f$. The third component Cayley graph of $f$ is empty. The function $f$ is amorphic, and consequently bent. It can be shown that $f$ is given by \[ f(x_0,x_1) = x_0^2+x_1^2. \] See Example \ref{ex:GF32fb} for more properties of this function. } \end{example} \begin{example} {\rm Similarly, if we partition the orthogonal array $\mathcal O \mathcal A$ of Example \ref{ex:OAGF32} into three arrays, consisting of the first row, the second row, and the last two rows, we obtain sets $D_1 = \{ (0,1), (0,2)\}$, $D_2=\{ (1,0),(2,0)\}$, and $D_3 = \{ (1,1),(2,2), (1,2), (2,1)\}$. The corresponding graph decomposition of the complete graph on $9$ vertices is a strongly regular decomposition. All three graphs are of Latin square type. The graphs $\Gamma_1$ and $\Gamma_2$ have parameters $(9,2,1,0)$, and the graph $\Gamma_3$ has parameters $(9,4,1,2)$. The sets $D_1$, $D_2$, and $D_3$ determine the amorphic bent function given by \[ g(x_0,x_1) = -x_0^2+x_1^2. \] See Example \ref{ex:GF32fa} for more properties of this function. } \end{example} \subsection{Bent functions from orthogonal arrays} In this section, we show how to construct an amorphic bent $p$-ary function of Latin square type on $2m$ variables, for any prime number $p$ greater than 2 and any positive integer $m$. We start with an orthogonal array of appropriate dimensions, and use a generalization of the procedure in the previous example. The technique for constructing amorphic association schemes from orthogonal arrays is known (see \cite[\S 5]{vDM10}), but we include details in the case of interest for completeness. 
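The defining property of an orthogonal array is easy to check by machine. As a warm-up, the following sketch (pure Python, included here only as an illustration) verifies it for the $OA(4,3)$ of Example \ref{ex:OAGF32}:

```python
from itertools import combinations

# the OA(4,3) of Example \ref{ex:OAGF32}
OA = [[0, 0, 0, 1, 1, 1, 2, 2, 2],
      [0, 1, 2, 0, 1, 2, 0, 1, 2],
      [0, 1, 2, 1, 2, 0, 2, 0, 1],
      [0, 1, 2, 2, 0, 1, 1, 2, 0]]

N = 3
# any two rows, read columnwise, must give all N^2 ordered pairs
for r1, r2 in combinations(range(len(OA)), 2):
    pairs = {(OA[r1][c], OA[r2][c]) for c in range(N * N)}
    assert len(pairs) == N * N
print("OA(4,3) verified")
```

Deleting rows clearly preserves this property, which is why the subarrays obtained by partitioning the rows, as in Example \ref{ex:negpos}, are again orthogonal arrays.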
By a construction of Bush \cite{Bu52}, an orthogonal array of type $OA(N+1,N)$ exists whenever $N = p^m$ for a prime number $p$ and a positive integer $m$. If the entries are in $GF(p)^m$, we may construct our orthogonal array in such a way that all entries in the first column are equal to the 0 element of $GF(p)^m$. Let $p$ be a prime number greater than $2$. Consider a partition of the $N+1$ rows of an $OA(N+1,N)$ with entries in $GF(p)^m$ into $p-1$ sets of $r = \frac N p$ rows, and one set of $r_p=\frac N p +1$ rows. We denote the corresponding orthogonal subarrays by $\mathcal O \mathcal A_1, \mathcal O \mathcal A_2, \ldots , \mathcal O \mathcal A_p$. This partition determines a strongly regular decomposition $\Gamma_1, \Gamma_2, \dots , \Gamma_p$ of the complete graph on $p^{2m}$ vertices, consisting of $p$ strongly regular graphs of Latin square type. (Once again, we identify the vertices of each graph $\Gamma_i$ with the vertices of the complete graph on $GF(p)^{2m}$ determined by the original $OA(N+1,N)$.) By van Dam's theorem \cite{vD03} (see \S \ref{subsec:vDam}), this graph decomposition determines an amorphic association scheme. Let $D_i$ be the set of all neighbors of $0$ in the graph $\Gamma_i$ corresponding to $\mathcal O \mathcal A_i$. By our assumption that the first column of our array consists of 0 entries, a vertex $v$ is in $D_i$ if and only if it has a $0$ entry in one of the rows in $\mathcal O \mathcal A_i$. Therefore, $D_i$ is symmetric, i.e., if $x \in D_i$ then $-x \in D_i$. Define an even function $f \colon GF(p)^{2m} \to GF(p)$ by setting \[ f(x) = \begin{cases} 0 &\qquad \text{if $x=0$,}\\ i & \qquad \text{if $x \in D_i$ and $1 \leq i \leq p-1$,}\\ 0 & \qquad \text{if $x \in D_p$.} \end{cases} \] Then the component Cayley graphs of $f$ are the strongly regular Latin square type graphs $\Gamma_i$. By Theorem \ref{thm:main1} or Theorem \ref{thm:amorphic-bent}, the function $f$ is amorphic and bent.
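To make the construction concrete, here is a minimal sketch (pure Python, an illustration rather than part of the formal development) for $p=3$ and $m=1$, so that $N=3$, $r=1$ and $r_p=2$. The rows of the $OA(4,3)$ are taken to be the linear forms $x_0$, $x_1$, $x_0+x_1$, $2x_0+x_1$ evaluated at the points of $GF(3)^2$, so all entries in the first column (the point $0$) are $0$. The resulting function coincides with $-x_0^2+x_1^2$, and the final assertion certifies bentness by checking that every Walsh transform value has absolute value $p^m = 3$:

```python
import cmath
from itertools import product

p, m = 3, 1
N = p ** m
V = list(product(range(p), repeat=2))      # columns = points of GF(p)^2

# OA(N+1, N): one row per linear form c0*x0 + c1*x1, each vanishing at 0
forms = [(1, 0), (0, 1)] + [(s, 1) for s in range(1, p)]
OA = [[(c0 * v0 + c1 * v1) % p for (v0, v1) in V] for (c0, c1) in forms]
assert all(row[0] == 0 for row in OA)      # first column is all zeros

# partition: p - 1 parts of r = 1 row each, and one part of r_p = 2 rows
parts = [[i] for i in range(p - 1)] + [list(range(p - 1, p + 1))]

# D_i = nonzero points with a 0 entry in some row of the i-th part
D = [{V[c] for c in range(1, N * N) if any(OA[i][c] == 0 for i in part)}
     for part in parts]

def f(v):                                  # the function built from D_1, ..., D_p
    for i in range(p - 1):
        if v in D[i]:
            return i + 1
    return 0                               # on D_p and at the origin

# bentness: |W_f(b)| = p^m for every b in GF(p)^2
w = cmath.exp(2j * cmath.pi / p)
for b in V:
    W = sum(w ** ((f(v) - b[0] * v[0] - b[1] * v[1]) % p) for v in V)
    assert abs(abs(W) - p ** m) < 1e-9
print("f is bent")
```

For larger $m$, one would replace the linear forms by a Bush-type $OA(p^m+1,p^m)$ with entries in $GF(p)^m$; the partition of the rows and the bentness check are unchanged.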
\section{Examples} \label{sec:manyexamples} In this section we give examples of amorphic bent functions of Latin and negative Latin square type, together with their duals. We also give examples of 5-ary bent functions whose component Cayley graphs are not all strongly regular. \subsection{Examples constructed from orthogonal arrays} In this section we provide three examples of amorphic bent $p$-ary functions of Latin square type, which were constructed from orthogonal arrays using a computer. In each case, an orthogonal array of size $(p^2+1) \times p^4$ was constructed by the method of Bush, with symbols in $GF(p^2)$, which were then replaced by entries from $GF(p)^2$. The algebraic form of the resulting function was found using \cite[Theorem 53 and Corollary 6]{CJMPW16}. \begin{example} \label{ex:OA3LST} {\rm The following amorphic $3$-ary bent function on $GF(3)^4$ was constructed from a $10 \times 81$ orthogonal array, using a computer: \[ f(x_0,x_1,x_2,x_3) =2 x_0 x_3 + x_1 x_2 + x_0^2 x_1 x_2 + 2 x_0 x_1^2 x_3. \] The component Cayley graphs $\Gamma_i$ of $f$ are all strongly regular of Latin square type. The parameters of $\Gamma_1$ and $\Gamma_2$ are $(81,24,9,6)$ and the parameters of $\Gamma_3$ are $(81,32,13,12)$. The function $f$ is regular with dual \[ f^*(x_0,x_1,x_2,x_3)=x_0 x_3+2x_1 x_2+x_0 x_2^2 x_3 + 2x_1 x_2 x_3^2. \] } \end{example} \begin{example} \label{ex:OA5LST} {\rm The following amorphic $5$-ary bent function on $GF(5)^4$ was constructed from an orthogonal array, using a computer: \[ 4 x_0^3 x_3 + 3 x_0^2 x_1 x_2 + x_0 x_1^2 x_3 + 3 x_1^3 x_2 + x_0^4 x_1^3 x_2 +3 x_0^3 x_1^4 x_3 \] The component Cayley graphs $\Gamma_i$ are all strongly regular of Latin square type. The parameters of $\Gamma_i$, for $1 \leq i \leq 4$, are $(625,120,35,20)$ and the parameters of $\Gamma_5$ are $(625,144,43,30)$. The function is regular with dual \[ 2 x_1 x_2^3 + 4 x_0 x_2^2 x_3 + 2 x_1 x_2 x_3^2 + x_0 x_3^3 + 2 x_0 x_2^4 x_3^3 + 4 x_1 x_2^3 x_3^4. 
\] } \end{example} \begin{example} \label{ex:OA7LST} {\rm The following amorphic $7$-ary bent function on $GF(7)^4$ was constructed from an orthogonal array, using a computer: \[ 6 x_0^5 x_3 + 4 x_0^4 x_1 x_2 + x_0^3 x_1^2 x_3 + 6 x_0^2 x_1^3 x_2 + 5 x_0 x_1^4 x_3 + 4 x_1^5 x_2 + 5 x_0^6 x_1^5 x_2 + 4 x_0^5 x_1^6 x_3 \] The component Cayley graphs $\Gamma_i$ are all strongly regular of Latin square type. The parameters of $\Gamma_i$, for $1 \leq i \leq 6$, are $(2401, 336, 77, 42)$ and the parameters of $\Gamma_7$ are $(2401, 384, 89, 56)$. The function is regular with dual \[ 2 x_0 x_2^4 x_3 +6 x_0 x_2^2 x_3^3 + x_0 x_3^5 + 3 x_1 x_2^5 + x_1 x_2^3 x_3^2 + 3 x_1 x_2 x_3^4 + 3 x_0 x_2^6 x_3^5 + 2 x_1 x_2^5 x_3^6. \] } \end{example} \subsection{Examples on $GF(3)^2$} Every even bent function $f \colon GF(3)^2 \rightarrow GF(3)$ with $f(0)=0$ is equivalent to the function of Example \ref{ex:GF32fa} or of Example \ref{ex:GF32fb} below under the action of $GL(2,GF(3))$ on the variables $(x_0,x_1)$ (see \cite[Proposition 10]{CJMPW16}). Example \ref{ex:GF32fa} is amorphic of Latin square type, and Example \ref{ex:GF32fb} is amorphic of negative Latin square type. \begin{example} \label{ex:GF32fa} {\rm The function $f \colon GF(3)^2 \rightarrow GF(3)$ given by \[ f(x_0,x_1)=-x_0^2+x_1^2 \] is an even bent function with $f(0)=0$. The component Cayley graphs $\Gamma_1$ and $\Gamma_2$ are strongly regular with parameters $(9,2,1,0)$. The component Cayley graph $\Gamma_3$ is strongly regular with parameters $(9,4,1,2)$. The graphs $\Gamma_1$, $\Gamma_2$, and $\Gamma_3$ are all of Latin square type. The function $f$ is amorphic and regular, with dual $f^*(x_0,x_1)=x_0^2-x_1^2$. } \end{example} \begin{example} \label{ex:GF32fb} {\rm The function $f \colon GF(3)^2 \rightarrow GF(3)$ given by \[ f(x_0,x_1)=x_0^2+x_1^2 \] is an even bent function with $f(0)=0$. The component Cayley graphs $\Gamma_1$ and $\Gamma_2$ are strongly regular with parameters $(9,4,1,2)$. 
The component Cayley graph $\Gamma_3$ is empty. The graphs $\Gamma_1$ and $\Gamma_2$ are of Latin square type with $N=3$ and $r=2$ and of negative Latin square type with $N=-3$ and $r=-1$. However, only the negative Latin square type parameters satisfy the feasibility condition $r = \frac N p$, where $p=3$. The function $f$ is amorphic and (-1)-weakly regular, with dual $f^*(x_0,x_1)=-x_0^2-x_1^2$. } \end{example} \subsection{Examples on $GF(5)^2$} \label{subsec:GF52} In a previous paper, \cite[Proposition 14 and Example 65]{CJMPW16}, we classified all even bent functions $g \colon GF(5)^2 \rightarrow GF(5)$ with $g(0)=0$ into eleven equivalence classes under the action of $GL(2,GF(5))$ on the variables $(x_0,x_1)$. Using additional computer calculations, it can be shown that the three functions of Example \ref{ex:GF52g} represent the only equivalence classes whose functions are of amorphic Latin square type, i.e., those whose component Cayley graphs are all strongly regular of Latin square type. It can also be shown that there is no even bent function $g \colon GF(5)^2 \rightarrow GF(5)$ with $g(0)=0$ whose component Cayley graphs are all strongly regular of feasible negative Latin square type (see Remark \ref{rk:no_NLST_m1p5}). The remaining examples in this section are not amorphic. In these examples, some or all of the component Cayley graphs are not strongly regular. \begin{example} \label{ex:GF52g} {\rm The following three functions $g_i \colon GF(5)^2 \rightarrow GF(5)$ are even bent functions with $g_i(0)=0$: \[ g_1(x_0,x_1)=x_0^3x_1+2x_1^4, \ g_2(x_0,x_1)=-x_0x_1^3+x_1^4, \ g_3(x_0,x_1)=-x_0^3x_1+x_1^4. \] The component Cayley graphs $\Gamma_i$, for $1 \leq i \leq 4$, are strongly regular with parameters $(25,4,3,0)$. The component Cayley graph $\Gamma_5$ is strongly regular with parameters $(25,8,3,2)$. The graphs $\Gamma_i$, for $1 \leq i \leq 5$, are all of Latin square type. 
The functions $g_i$ are amorphic and regular, with duals \[ g_1^*(x_0,x_1)=2x_0^4-x_0 x_1^3, \ g_2^*(x_0,x_1)=x_0^4+x_0^3 x_1, \ g_3^*(x_0,x_1)=x_0^4+x_0 x_1^3. \] } \end{example} \begin{example} \label{ex:counter1} {\rm The function $g \colon GF(5)^2 \rightarrow GF(5)$ given by \[ g(x_0,x_1)=-x_0^2 + 2x_1^2 \] is an even bent function with $g(0)=0$. The function $g$ is (-1)-weakly regular, with dual $g^*(x_0,x_1) = -x_0^2 + 3x_1^2$. The degree of $\Gamma_i$, for $1 \leq i \leq 4$, is $k_i=6$. The graph $\Gamma_5$ is empty. It can be shown, by checking the number of distinct eigenvalues of each graph, that the component Cayley graphs $\Gamma_i$ are not strongly regular. The unions $\Gamma_1 \cup \Gamma_4$ and $\Gamma_2 \cup \Gamma_3$ are strongly regular of negative Latin square type with parameters $(25,12,5,6)$. } \end{example} \begin{example} \label{ex:counter2} {\rm The function $g \colon GF(5)^2 \rightarrow GF(5)$ given by \[ g(x_0,x_1)=-x_0 x_1 + x_1^2 \] is an even bent function with $g(0)=0$. The function $g$ is regular, with dual $g^*(x_0,x_1) = x_0^2+x_0x_1$. The graph $\Gamma_5$ is strongly regular of Latin square type with parameters $(25, 8, 3, 2)$. The degree of $\Gamma_i$, for $1 \leq i \leq 4$, is $k_i=4$. It can be shown, by checking the number of distinct eigenvalues of each graph, that the component Cayley graphs $\Gamma_1$, $\Gamma_2$, $\Gamma_3$, and $\Gamma_4$ are not strongly regular. The unions $\Gamma_1 \cup \Gamma_4$ and $\Gamma_2 \cup \Gamma_3$ are strongly regular of Latin square type with parameters $(25, 8, 3, 2)$. } \end{example} \begin{example} \label{ex:counter3} {\rm The function $g \colon GF(5)^2 \rightarrow GF(5)$ given by \[ g(x_0,x_1)=2x_0 x_1^3+x_1^4-x_1^2 \] is an even bent function with $g(0)=0$. The function $g$ is regular, with dual $g^*(x_0,x_1)=x_0^2 + x_0^4 + 3 x_0^3 x_1$. None of the component Cayley graphs $\Gamma_i$ is strongly regular. 
Moreover, no union of the component Cayley graphs $\Gamma_i \cup \Gamma_j$ for $i \neq j$ (and hence no union of the form $\Gamma_i \cup \Gamma_j \cup \Gamma_k$ for $i$, $j$, and $k$ distinct) is strongly regular. The degrees of the component Cayley graphs are $k_i = 4$, for $1 \leq i \leq 4$, and $k_5 = 8$. } \end{example} \subsection{Examples on $GF(3)^4$} Recall that in Example \ref{ex:OA3LST} we gave a bent function on $GF(3)^4$ whose component Cayley graphs are all strongly regular of Latin square type. We now give two examples of bent functions on $GF(3)^4$ whose component Cayley graphs are all strongly regular of negative Latin square type. \begin{example} {\rm The function $f \colon GF(3)^4 \rightarrow GF(3)$ given by \[ f(x_0,x_1,x_2,x_3)=-x_0^2-x_1^2+x_2 x_3 \] is an even bent function with $f(0)=0$. The component Cayley graphs $\Gamma_1$ and $\Gamma_2$ are strongly regular with parameters $(81,30,9,12)$. The component Cayley graph $\Gamma_3$ is strongly regular with parameters $(81,20,1,6)$. The graphs $\Gamma_1$, $\Gamma_2$, and $\Gamma_3$ are all of negative Latin square type. The function $f$ is amorphic and (-1)-weakly regular, with dual \[ f^*(x_0,x_1,x_2,x_3)=x_0^2+x_1^2-x_2 x_3. \] In this example, $D_1^*=D_2$, $D_2^*=D_1$, and $D_3^*=D_3$. } \end{example} \begin{example} {\rm The function $f \colon GF(3)^4 \rightarrow GF(3)$ given by \[ f(x_0,x_1,x_2,x_3) = x_0^2 + x_1^2 +x_0 x_2 + 2 x_2 x_3 \] is an even bent function with $f(0)=0$. The component Cayley graphs $\Gamma_1$ and $\Gamma_2$ are strongly regular with parameters $(81,30,9,12)$. The component Cayley graph $\Gamma_3$ is strongly regular with parameters $(81,20,1,6)$. The graphs $\Gamma_1$, $\Gamma_2$, and $\Gamma_3$ are all of negative Latin square type. The function $f$ is amorphic and (-1)-weakly regular, with dual \[ f^*(x_0,x_1,x_2,x_3) = 2 x_0^2 + 2 x_1^2 +x_0 x_3+ x_2 x_3+2x_3^2. 
\] In this example, we know of no simple relationship between the sets $D_1$, $D_2$, and $D_3$ and the sets $D_1^*$, $D_2^*$, and $D_3^*$. } \end{example} \subsection{Ideas for further study} We conclude with some questions and ideas for further study. \begin{enumerate} \item Can we find a way to construct all amorphic bent functions of the type of Theorem \ref{thm:amorphic-bent}? Can we count them? \item Consider equivalence classes of $p$-ary functions under the action of $GL(n,GF(p))$ on coordinates. Do there exist non-equivalent bent functions which determine isomorphic association schemes? Which of our amorphic examples have isomorphic association schemes? \item Find examples of functions that are not bent, whose level sets determine association schemes that are not amorphic. \end{enumerate} \end{document}
\begin{document} \title{Variants of Wythoff's game translating its $\mathcal{P}$-positions} \author{Nhan Bao Ho} \address{Department of Mathematics, La Trobe University, Melbourne, Australia 3086} \email{[email protected], [email protected]} \begin{abstract} We introduce a restriction of Wythoff's game, which we call $\mathcal{F}$-\emph{Wythoff}, in which the integer ratio of the two entries must not change when an equal number of tokens is removed from both piles. We show that the $\mathcal{P}$-positions of $\mathcal{F}$-Wythoff are exactly those positions obtained from the $\mathcal{P}$-positions of Wythoff's game by adding 1 to each entry. We describe the distribution of Sprague-Grundy values and, in particular, generalize to variants of Wythoff's game two properties of the distribution of those positions which have a given Sprague-Grundy value $k$. We analyze mis\`{e}re $\mathcal{F}$-Wythoff and show that the normal and mis\`{e}re versions differ exactly on those positions which have Sprague-Grundy values 0 and 1, via a swap. We examine two further variants of $\mathcal{F}$-Wythoff, one restriction and one extension, preserving its $\mathcal{P}$-positions. We raise two general questions based on the translation phenomenon of the $\mathcal{P}$-positions. \end{abstract} \maketitle \section{Introduction} Introduced by Willem Abraham Wythoff \cite{Wyt}, Wythoff's game is a variant of Nim played on two piles of tokens. Two players move alternately, either removing a number of tokens from one pile or removing an equal number of tokens from both piles. The player first unable to move (because the two piles have become empty) loses and his/her opponent wins. We denote by $(a,b)$ the position of the two piles of sizes $a$, $b$. By symmetry, $(a,b)$ is identical to $(b,a)$. A position is called an \emph{$\mathcal{N}$-position} (also known as a \emph{winning position}) if the player about to move from there has a strategy to win.
Otherwise, we have a \emph{$\mathcal{P}$-position} (also known as a \emph{losing position}). Here, $\mathcal{N}$ stands for the $\mathcal{N}$ext player and $\mathcal{P}$ stands for the $\mathcal{P}$revious player. Wythoff \cite{Wyt} shows that the $\mathcal{P}$-positions of Wythoff's game form the set $\{(\lfloor \phi n \rfloor, \lfloor \phi^2 n \rfloor) | n \geq 0\}$ where $\phi = (1+\sqrt{5})/2$ is the golden ratio and $\lfloor . \rfloor$ denotes the integer part. Recall that Wythoff's game is an impartial combinatorial game. If there exists a move from a position $p$ to some position $q$, then the position $q$ is called a \emph{follower} of $p$. For a finite set $S$ of nonnegative integers, the \emph{minimum excluded number} of $S$, denoted by $mex(S)$, is the smallest nonnegative integer not in $S$. The \emph{Sprague-Grundy function} for an impartial combinatorial game is the function $\mathcal{G}$ from the set of its positions into the nonnegative integers, defined inductively by \[\mathcal{G}(p) = mex\{\mathcal{G}(q)| q \text{ is a follower of } p\}\] with $mex\{\} = 0$. The value $\mathcal{G}(p)$ is called the \emph{Sprague-Grundy value} at $p$. The basic theory of combinatorial games and the Sprague-Grundy function can be found in \cite{ww1}. Recall that a position is a $\mathcal{P}$-position if and only if it has Sprague-Grundy value 0 \cite{ww1}. One can also prove the following fundamental property of Sprague-Grundy values: a position $p$ has Sprague-Grundy value $k > 0$ if and only if (i) $\mathcal{G}(p) \neq \mathcal{G}(q)$ whenever there exists a move from $p$ to $q$ or from $q$ to $p$, and (ii) for every $l < k$, there exists a position $q$ such that $\mathcal{G}(q) = l$ and one can move from $p$ to $q$. Despite being one of the oldest impartial games, Wythoff's game is still a highly interesting topic. The Sprague-Grundy function for this game is studied widely in \cite{blass, Dress, landman, nivasch}.
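The mex recursion above is straightforward to implement. The following sketch (our own illustration, not part of the paper) computes Sprague-Grundy values for Wythoff's game by brute force and checks Wythoff's description of the $\mathcal{P}$-positions on small positions, using the identity $\lfloor \phi^2 n \rfloor = \lfloor \phi n \rfloor + n$ to avoid floating-point issues:

```python
import math
from functools import lru_cache

def mex(values):
    """Smallest nonnegative integer not occurring in `values`."""
    g = 0
    while g in values:
        g += 1
    return g

@lru_cache(maxsize=None)
def grundy(a, b):
    """Sprague-Grundy value of the position (a, b) in Wythoff's game."""
    if a > b:
        a, b = b, a  # positions are unordered
    followers = {grundy(a - k, b) for k in range(1, a + 1)}       # from one pile
    followers |= {grundy(a, b - k) for k in range(1, b + 1)}      # from the other pile
    followers |= {grundy(a - k, b - k) for k in range(1, a + 1)}  # equally from both
    return mex(followers)

# Wythoff's theorem: (floor(phi*n), floor(phi^2*n)) are exactly the P-positions.
phi = (1 + math.sqrt(5)) / 2
for n in range(7):
    an = math.floor(phi * n)
    assert grundy(an, an + n) == 0   # floor(phi^2 n) = floor(phi n) + n
```

The memoized recursion is exponential-free but still only practical for small piles; it is meant as a check on the formulas, not as a strategy engine.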
Several variants of Wythoff's game have been examined, including (i) {\em restrictions}: obtained from Wythoff's game by eliminating some moves \cite{Gen-Connell, Nim-Wythoff, ho}, and (ii) {\em extensions}: obtained from Wythoff's game by adding extra moves \cite{Heapgame, Howtobeat, Gen-Fra, Adjoining, RRR0, RRR, ho, hog, Some-Gen}. Solving the winning strategy for variants of Wythoff's game is always an interesting exercise. In particular, it has been shown that there exist variants of Wythoff's game, including restrictions and extensions, preserving its $\mathcal{P}$-positions \cite{Ext-Res, ho}. This paper further investigates variants of Wythoff's game whose $\mathcal{P}$-positions differ only slightly from those of Wythoff's game. In this paper, we introduce a restriction of Wythoff's game, called \emph{$\mathcal{F}$-Wythoff}, in which a legal move is of one of the following two types: \begin{itemize} \item [(i)] removing any number of tokens from one pile; \item [(ii)] removing an equal number of tokens from both piles provided that the integer ratio of the two entries does not change. \end{itemize} We obtain the following result on $\mathcal{P}$-positions: the $\mathcal{P}$-positions of $\mathcal{F}$-Wythoff are those positions obtained directly from $\mathcal{P}$-positions of Wythoff's game by adding 1 to each entry. This translation phenomenon is the main theme of this paper. We also establish several results for further modifications of $\mathcal{F}$-Wythoff. The paper is organized as follows. In the next section, we determine the $\mathcal{P}$-positions, giving both algebraic and recursive characterizations, before giving formulas for those positions which have Sprague-Grundy values 1 and 2. Section 3 analyzes the distribution of Sprague-Grundy values for $\mathcal{F}$-Wythoff on the two-dimensional array whose $(i,j)$ entry is the Sprague-Grundy value of the position $(i,j)$.
In particular, we generalize two results on the Sprague-Grundy values of Wythoff's game and its variants. In Section 4, we examine $\mathcal{F}$-Wythoff in mis\`{e}re play. We show that $\mathcal{F}$-Wythoff and its mis\`{e}re form differ only on the set of positions which have Sprague-Grundy values 0 and 1. Section 5 answers the question of whether there exists a variant of $\mathcal{F}$-Wythoff preserving its $\mathcal{P}$-positions. Two such variants, one restriction and one extension, are discussed. In the final section, we raise two questions for variants of Wythoff's game, based on the theme of the paper. This paper continues our investigations of variants of Wythoff's game \cite{ho} and, more generally, of 2-pile variants of the game of Nim \cite{MEuclid, Min, CHL}. \section{Translation phenomenon on those positions which have Sprague-Grundy values 0, 1 and 2} This section first determines the $\mathcal{P}$-positions of $\mathcal{F}$-Wythoff. We then give formulas for those positions which have Sprague-Grundy values 1 and 2. It will be shown that these positions are all obtained from $\mathcal{P}$-positions of Wythoff's game by a translation, except for some initial positions. Let $\phi = (1+\sqrt{5})/2$. Then $\phi^2 = \phi + 1$. Therefore, for every positive integer $n$, we have \[\lfloor \phi^2n \rfloor = \lfloor \phi n + n \rfloor = \lfloor \phi n \rfloor + n.\] \begin{lemma} \label{Comp} \cite{beatty1} For each $i \geq 1$, set $a_i = \lfloor \phi i \rfloor$ and $b_i = \lfloor \phi^2 i \rfloor$. Then \begin{align*} & \{a_i | i \geq 1\} \cup \{b_i | i \geq 1\} = \mathbb{N},\\ & \{a_i | i \geq 1\} \cap \{b_i | i \geq 1\} = \emptyset, \end{align*} in which $\mathbb{N}$ is the set of positive integers. \end{lemma} Consequently, we have \begin{corollary} \label{Comp.1} Let $a \geq 2$ be an integer. There exists exactly one $n$ such that either $a = \lfloor \phi n \rfloor+1$ or $a = \lfloor \phi^2 n \rfloor +1$.
Moreover, the number $a$ cannot be of both forms. \end{corollary} Recall that in Wythoff's game, we have $\mathcal{P}$-positions as follows. \begin{theorem} \label{W-P} \cite{Wyt} A position in Wythoff's game is a $\mathcal{P}$-position if and only if it is of the form $(\lfloor\phi n\rfloor, \lfloor \phi^2 n \rfloor)$ for some $n \geq 0$. \end{theorem} We first prove an equality that will be used many times in this paper. \begin{lemma} \label{U-lem} Let $n,k$, and $i$ be positive integers. We have \[ \Bigg\lfloor \frac{\lfloor\phi^2 n\rfloor + k + i}{\lfloor\phi n\rfloor + k + i}\Bigg\rfloor = \Bigg\lfloor \frac{\lfloor\phi^2 n\rfloor + i}{\lfloor\phi n\rfloor + i} \Bigg\rfloor = 1. \] \end{lemma} \begin{proof} We have \begin{align*} \Bigg\lfloor \frac{\lfloor\phi^2 n\rfloor + k + i}{\lfloor\phi n\rfloor + k + i}\Bigg\rfloor & = \Bigg\lfloor \frac{\lfloor\phi n\rfloor +n + k + i}{\lfloor\phi n\rfloor + k + i}\Bigg\rfloor = 1 + \Bigg\lfloor \frac{n}{\lfloor\phi n\rfloor + k + i}\Bigg\rfloor\\ &= 1 = 1 + \Bigg\lfloor \frac{n}{\lfloor\phi n\rfloor + i}\Bigg\rfloor = \Bigg\lfloor \frac{\lfloor\phi^2 n\rfloor + i}{\lfloor\phi n\rfloor + i} \Bigg\rfloor. \end{align*} \end{proof} We now solve the winning strategy for $\mathcal{F}$-Wythoff. \begin{theorem}[Algebraic characterization] \label{FW-P} A position in $\mathcal{F}$-Wythoff is a $\mathcal{P}$-position if and only if it is an element of the set \[\{(0,0), (\lfloor \phi n \rfloor+1, \lfloor \phi^2n \rfloor +1) | n \geq 0 \}. \] \end{theorem} \begin{proof} Let $\mathcal{A} = \{(0,0), (\lfloor\phi n\rfloor +1, \lfloor \phi^2 n \rfloor + 1) | n \geq 0\}$.
We need to show that the following two properties hold for $\mathcal{F}$-Wythoff: \begin{itemize} \item [(i)] No move from a position in $\mathcal{A}$ terminates in $\mathcal{A}$, \item [(ii)] From every position not in $\mathcal{A}$, there is a move terminating in $\mathcal{A}$. \end{itemize} For (i), assume by contradiction that there is a move from $(\lfloor\phi n\rfloor +1, \lfloor \phi^2 n \rfloor + 1)$ to $(\lfloor\phi m\rfloor +1, \lfloor \phi^2 m \rfloor + 1)$ for some $n > m$. There are at most three possibilities for this move: (1) removing some $k$ tokens from the small pile $\lfloor\phi n\rfloor +1$, or (2) removing some $k$ tokens from the large pile $\lfloor\phi^2 n\rfloor +1$, or (3) removing some $k$ tokens from both piles. Note that move (3) cannot occur: otherwise one could move from $(\lfloor\phi n\rfloor, \lfloor \phi^2 n \rfloor)$ to $(\lfloor\phi m\rfloor, \lfloor \phi^2 m \rfloor)$ in Wythoff's game, which is impossible by Theorem \ref{W-P}. Possibility (1) implies the system \begin{align*} \begin{cases} \lfloor\phi n\rfloor +1 - k = \lfloor\phi m\rfloor +1, \\ \lfloor \phi^2 n \rfloor + 1 = \lfloor \phi^2 m \rfloor + 1. \end{cases} \end{align*} It follows from the second equation that $n = m$, giving a contradiction. Possibility (2) implies the system \begin{align*} \begin{cases} \lfloor \phi^2 n \rfloor + 1 - k = \lfloor\phi m\rfloor +1, \\ \lfloor\phi n\rfloor +1 = \lfloor \phi^2 m \rfloor + 1 \end{cases} \end{align*} giving $\lfloor\phi n\rfloor = \lfloor \phi^2 m \rfloor$. This is impossible by Lemma \ref{Comp}. For (ii), let $p = (a,b) \notin \mathcal{A}$ with $a \leq b$. If $a = 0$, then $b > 0$ and one can move from $p$ to $(0,0) \in \mathcal{A}$; so assume $a \geq 1$ and set $q = (a-1,b-1)$. Then $q$ is not of the form $(\lfloor\phi n\rfloor, \lfloor\phi^2 n\rfloor)$. By Theorem \ref{W-P}, there exists a legal move from $q$ to some $(\lfloor\phi n\rfloor, \lfloor\phi^2 n\rfloor)$ in Wythoff's game.
This move corresponds to the move from $p$ to $(\lfloor\phi n\rfloor +1, \lfloor\phi^2 n\rfloor+1)$ in $\mathcal{F}$-Wythoff; if it removes an equal number of tokens from both piles, it is legal in $\mathcal{F}$-Wythoff because, by Lemma \ref{U-lem}, the integer ratio of the two entries does not change. The proof is complete. \end{proof} Set $(a_0,b_0) = (0,0)$. For $n \geq 1$, set $a_n = \lfloor\phi (n-1)\rfloor +1$, $b_n = \lfloor\phi^2 (n-1)\rfloor +1$. Then $\{(a_i,b_i) | i \geq 0\}$ is the set of $\mathcal{P}$-positions of $\mathcal{F}$-Wythoff. We now describe a recursive characterization of the sequence $\{(a_i,b_i)\}_{i \geq 0}$. \begin{corollary}[Recursive characterization] \label{FW-P-A} Consider the sequence $\{(a_i,b_i)\}_{i \geq 0}$ of $\mathcal{P}$-positions of $\mathcal{F}$-Wythoff. For each $n \geq 1$, we have \begin{align*} \begin{cases} a_n = mex\{a_i,b_i | 0 \leq i \leq n-1\},\\ b_n = a_n+n-1. \end{cases} \end{align*} \end{corollary} \begin{proof} The first equation follows from Corollary \ref{a_i} below and the second equation follows from Theorem \ref{FW-P}. Note that although Corollary \ref{a_i} appears later in the paper, its proof does not depend on the present corollary, so there is no circularity. \end{proof} We next give a formula for those positions which have Sprague-Grundy value 1.
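Theorem \ref{FW-P} can also be checked computationally for small positions. The sketch below is our own illustration, not part of the paper; it encodes the legality of a diagonal move from $(a,b)$, $a \leq b$, as $\lfloor (b-k)/(a-k) \rfloor = \lfloor b/a \rfloor$ with $1 \leq k < a$, so that both piles stay nonempty and the integer ratio is preserved.

```python
import math
from functools import lru_cache

def mex(values):
    """Smallest nonnegative integer not occurring in `values`."""
    g = 0
    while g in values:
        g += 1
    return g

@lru_cache(maxsize=None)
def grundy_fw(a, b):
    """Sprague-Grundy value of the position (a, b) in F-Wythoff."""
    if a > b:
        a, b = b, a
    followers = {grundy_fw(a - k, b) for k in range(1, a + 1)}
    followers |= {grundy_fw(a, b - k) for k in range(1, b + 1)}
    if a > 0:
        ratio = b // a
        # diagonal moves keep both piles nonempty and preserve floor(b/a)
        followers |= {grundy_fw(a - k, b - k) for k in range(1, a)
                      if (b - k) // (a - k) == ratio}
    return mex(followers)

# Theorem FW-P: (0,0) and (floor(phi*n)+1, floor(phi^2*n)+1) are the P-positions.
phi = (1 + math.sqrt(5)) / 2
assert grundy_fw(0, 0) == 0
for n in range(7):
    an = math.floor(phi * n) + 1
    assert grundy_fw(an, an + n) == 0   # floor(phi^2 n) + 1 = an + n
```

The computed values agree with the published Table \ref{T} on all entries we compared, which gives some confidence that this encoding of the move rule matches the game's definition.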
\begin{theorem} \label{V1} The set of positions which have Sprague-Grundy value $1$ in $\mathcal{F}$-Wythoff is \[\mathcal{B} = \{(0,1), (\lfloor \phi n \rfloor +2, \lfloor \phi^2 n \rfloor +2) | n \geq 0\}.\] \end{theorem} \begin{proof} Recall that the set of $\mathcal{P}$-positions of $\mathcal{F}$-Wythoff is \[\mathcal{P} = \{(0,0),(\lfloor \phi n \rfloor +1, \lfloor \phi^2 n \rfloor + 1)| n \geq 0\}.\] Based on the definition of the Sprague-Grundy function, we need to prove that \begin{itemize} \item [(i)] $\mathcal{B} \cap \mathcal{P} = \emptyset$, \item [(ii)] There is no move from a position in $\mathcal{B}$ to a position in $\mathcal{B}$, \item [(iii)] From every position not in $\mathcal{B} \cup\mathcal{P}$, there exists a move to some position in $\mathcal{B}$. \end{itemize} For (i), assume by contradiction that $\mathcal{B} \cap \mathcal{P} \neq \emptyset$. Then there exist $n,m$ such that \[(\lfloor \phi n \rfloor +1, \lfloor \phi^2 n \rfloor + 1) = (\lfloor \phi m \rfloor +2, \lfloor \phi^2 m \rfloor +2).\] It follows that \begin{align*} \begin{cases} \lfloor \phi n \rfloor = \lfloor \phi m \rfloor +1, \\ \lfloor \phi n \rfloor + n = \lfloor \phi m \rfloor + m +1. \end{cases} \end{align*} Subtracting the first equation from the second gives $n = m$, and then the first equation yields $\lfloor \phi m \rfloor = \lfloor \phi m \rfloor + 1$, a contradiction. For (ii), one can check that there is no move from a position of the form $(\lfloor \phi n \rfloor +2, \lfloor \phi^2 n \rfloor +2)$ to $(0,1)$. Arguing as in part (i) of the proof of Theorem \ref{FW-P}, one can check that there is no move between positions of the form $(\lfloor \phi n \rfloor +2, \lfloor \phi^2 n \rfloor + 2)$. For (iii), let $(a,b) \notin \mathcal{B} \cup \mathcal{P}$ with $a \leq b$. One can move from $(a,b)$ to $(0,1)$ if either $a = 0$ or $a = 1$. We now assume that $a \geq 2$. Consider the position $p = (a-2,b-2)$. Note that $p$ is not of the form $(\lfloor \phi n \rfloor, \lfloor \phi^2 n \rfloor)$, since $(a,b) \notin \mathcal{B}$.
In Wythoff's game, there is a move from $p$ to some position $(\lfloor \phi m \rfloor, \lfloor \phi^2 m \rfloor)$. This move corresponds to the move from $(a,b)$ to $(\lfloor \phi m \rfloor + 2, \lfloor \phi^2 m \rfloor +2)$ in $\mathcal{F}$-Wythoff; if it removes an equal number of tokens from both piles, it is legal because, by Lemma \ref{U-lem}, the integer ratio of the two entries does not change. The proof is then complete. \end{proof} Theorem \ref{V1} shows that, except for the first position, those positions which have Sprague-Grundy value 1 are all obtained from $\mathcal{P}$-positions by adding 1 to each entry. Similarly, those positions which have Sprague-Grundy value 2 in $\mathcal{F}$-Wythoff are also obtained from the $\mathcal{P}$-positions via a translation, except for the first two positions. We leave the proof of the following theorem to the reader. \begin{theorem} \label{V2} A position $(a,b)$ has Sprague-Grundy value $2$ in $\mathcal{F}$-Wythoff if and only if $(a,b)$ is an element of the set \[\{(0,2), (1,3), (\lfloor \phi n \rfloor + 4, \lfloor \phi^2 n \rfloor + 4) | n \geq 0\}.\] \end{theorem} We do not know formulas of this form for those positions which have Sprague-Grundy values greater than 2. It would therefore be interesting to answer the following question. \begin{question} Does there exist $g > 2$ such that those positions which have Sprague-Grundy value $g$, possibly except for a finite number of positions, can be obtained by a translation from the $\mathcal{P}$-positions? \end{question} \section{On the distribution of Sprague-Grundy values} Consider the two-dimensional infinite array $\mathbb{A}$ whose $(i,j)$ entry is the Sprague-Grundy value $\mathcal{G}(i,j)$ of the position $(i,j)$ in $\mathcal{F}$-Wythoff. Table \ref{T} displays some values of the array with $i, j \leq 9$.
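The entries of the table can be regenerated directly from the definition of $\mathcal{F}$-Wythoff. The following sketch (our own illustration, not from the paper) recomputes the first three rows of Table \ref{T} by the mex recursion and compares them with the published values.

```python
from functools import lru_cache

def mex(values):
    """Smallest nonnegative integer not occurring in `values`."""
    g = 0
    while g in values:
        g += 1
    return g

@lru_cache(maxsize=None)
def grundy_fw(a, b):
    """Sprague-Grundy value of the position (a, b) in F-Wythoff."""
    if a > b:
        a, b = b, a
    followers = {grundy_fw(a - k, b) for k in range(1, a + 1)}
    followers |= {grundy_fw(a, b - k) for k in range(1, b + 1)}
    if a > 0:
        # diagonal moves keep both piles nonempty and preserve floor(b/a)
        followers |= {grundy_fw(a - k, b - k) for k in range(1, a)
                      if (b - k) // (a - k) == b // a}
    return mex(followers)

# Rows i = 0, 1, 2 of Table 1, for 0 <= j <= 9.
table_rows = {
    0: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9],
    1: [1, 0, 3, 2, 5, 4, 7, 6, 9, 8],
    2: [2, 3, 1, 0, 6, 7, 4, 5, 10, 11],
}
for i, row in table_rows.items():
    assert [grundy_fw(i, j) for j in range(10)] == row
```

Row 0 is simply $\mathcal{G}(0,j) = j$, since from $(0,j)$ the only moves are to $(0,0), \ldots, (0,j-1)$; the other rows already show the translated $\mathcal{P}$-positions $(1,1)$ and $(2,3)$ as zeros.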
\begin{table} [ht] \begin{center} \begin{tabular}{c|cccccccccc} 9 &9 &8 &11 &10 &12 &13&1 &2 &6 &7 \\ 8 &8 &9 &10 &7 &11 &0 &12&4 &5 &6 \\ 7 &7 &6 &5 &8 &9 &1 &10&11&4 &2 \\ 6 &6 &7 &4 &5 &0 &2 &3 &10&12&1 \\ 5 &5 &4 &7 &6 &3 &8 &2 &1 &0 &13 \\ 4 &4 &5 &6 &1 &2 &3 &0 &9 &11&12 \\ 3 &3 &2 &0 &4 &1 &6 &5 &8 &7 &10 \\ 2 &2 &3 &1 &0 &6 &7 &4 &5 &10&11 \\ 1 &1 &0 &3 &2 &5 &4 &7 &6 &9 &8 \\ 0 &0 &1 &2 &3 &4 &5 &6 &7 &8 &9\\ \hline i/j &0 &1 &2 &3 &4 &5 &6 &7 &8 &9 \end{tabular} \caption{Sprague-Grundy values $\mathcal{G}(i,j)$ for $i,j\leq 9$}\label{T} \end{center} \end{table} We discuss in this section the distribution of Sprague-Grundy values in the array $\mathbb{A}$. We first show that each row (column) in $\mathbb{A}$ contains every Sprague-Grundy value exactly once. \begin{theorem} \label{Row} Let $a, g$ be nonnegative integers. There exists a unique integer $b$ such that $\mathcal{G}(a,b) = g$. \end{theorem} \begin{proof} The uniqueness holds since, for $b < b'$, one can move from $(a,b')$ to $(a,b)$ by removing tokens from one pile, and two positions connected by a move cannot have the same Sprague-Grundy value. We now prove the existence. Note that the theorem holds for $a = 0$. We first show that the theorem holds for $g = 0$. If $a = 1$ then $\mathcal{G}(1,1) = 0$. If $a \geq 2$, by Corollary \ref{Comp.1}, there exists $m$ such that either $a = \lfloor \phi m \rfloor + 1$ or $a = \lfloor \phi^2 m \rfloor + 1$. The former case gives $\mathcal{G}(a,\lfloor \phi^2 m \rfloor + 1) = 0$ and the latter case gives $\mathcal{G}(a,\lfloor \phi m \rfloor + 1) = 0$. Assume that $g > 0$ and assume by contradiction that the sequence $R_a = \{\mathcal{G}(a,n)\}_{n \geq 0}$ does not contain $g$. We can assume that $g$ is the smallest integer not in the sequence $R_a$. Then there exists a smallest integer $b_0 \geq a$ such that \begin{align} \label{BW-b0} \{0, 1, \ldots,g -1\} \subseteq \{\mathcal{G}(a,i) | i \leq b_0-1\}. \end{align} For each $s \geq 1$, let $b_s = b_0+s(a+1)$.
We have \begin{align*} \mathcal{G}(a,b_s) = mex\{&\mathcal{G}(a-i,b_s), \mathcal{G}(a,b_s-j), \mathcal{G}(a-l,b_s-l) | 1 \leq i \leq a,\\ &1 \leq j \leq b_s, 1 \leq l < a, \lfloor\frac{b_s-l}{a-l}\rfloor = \lfloor\frac{b_s}{a}\rfloor\}. \end{align*} By (\ref{BW-b0}), the $mex$ set contains $\{0, 1, \ldots,g -1\}$. Note that $\mathcal{G}(a,b_s) \neq g$. Therefore, $\mathcal{G}(a,b_s) > g$ and so the $mex$ set contains $g$. Since $\mathcal{G}(a,b_s-j) \neq g$ for all $j$, there exists either $i_s \leq a$ or $l_s < a$ such that either $\mathcal{G}(a-i_s,b_s) = g$ or $\mathcal{G}(a-l_s,b_s-l_s) = g$. Note that as $s$ varies, the integers $b_s$ assume infinitely many values, while $i_s, l_s \leq a$ for each $s$. Moreover, by the uniqueness, there are at most $a+1$ positions of the form $(a-i_s,b_s)$ all having Sprague-Grundy value $g$. So there must exist $s_1 < s_2$ such that $l_{s_1} = l_{s_2}$ and $\mathcal{G}(a-l_{s_1},b_{s_1}-l_{s_1}) = \mathcal{G}(a-l_{s_2},b_{s_2}-l_{s_2})$. This is impossible since one can move from $(a-l_{s_2},b_{s_2}-l_{s_2})$ to $(a-l_{s_1},b_{s_1}-l_{s_1})$ by removing $b_{s_2}-b_{s_1}$ tokens from the larger pile. Thus, the sequence $R_a$ contains $g$ and so $\mathcal{G}(a,b) = g$ for some $b$. \end{proof} Recall that the \emph{Wythoff sequence}, or sequence of $\mathcal{P}$-positions, of Wythoff's game is the sequence $\{(\lfloor \phi n \rfloor, \lfloor \phi^2 n \rfloor)\}_{n \geq 0}$. Let $A = \{a_n | n \geq 0\}$, $B = \{b_n | n \geq 0\}$ where $a_n = \lfloor \phi n \rfloor$, $b_n = \lfloor \phi^2 n \rfloor$. By Lemma \ref{Comp}, the Wythoff sequence of Wythoff's game satisfies the conditions $A \cap B = \{0\}$, $A \cup B = \mathbb{Z}_{\geq 0}$ where $\mathbb{Z}_{\geq 0}$ denotes the set of nonnegative integers. Moreover, $a_n = mex\{a_i, b_i | 0 \leq i < n\}$. Curiously, these three conditions hold for several variants of Wythoff's game \cite{Heapgame, Howtobeat, Wyt-mis, Adjoining, ho}.
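Both the complementarity in Lemma \ref{Comp} and the mex recursion for the Wythoff sequence are easy to check numerically on an initial segment. The following sketch (our own illustration, not part of the paper) does so, again using $\lfloor \phi^2 n \rfloor = \lfloor \phi n \rfloor + n$:

```python
import math

PHI = (1 + math.sqrt(5)) / 2
N = 300

a = [math.floor(PHI * n) for n in range(1, N + 1)]
b = [a[n - 1] + n for n in range(1, N + 1)]   # floor(phi^2 n) = floor(phi n) + n

# Lemma Comp (Beatty's theorem): the two sequences are disjoint and together
# cover every positive integer up to a[-1].
assert set(a).isdisjoint(b)
assert set(range(1, a[-1] + 1)) <= set(a) | set(b)

# The mex recursion a_n = mex{a_i, b_i : 0 <= i < n} for the Wythoff sequence.
def mex(values):
    g = 0
    while g in values:
        g += 1
    return g

seen = {0}                       # a_0 = b_0 = 0
for n in range(1, 60):
    assert a[n - 1] == mex(seen)
    seen |= {a[n - 1], b[n - 1]}
```

For $n$ in this range, double-precision arithmetic is safely far from the rounding boundary of $\lfloor \phi n \rfloor$, since $\phi n$ is never closer to an integer than roughly $\phi^{-m}$ for the relevant Fibonacci index $m$.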
We generalize these results for $\mathcal{F}$-Wythoff in the next two corollaries. The \emph{$k$-sequence} of $\mathcal{F}$-Wythoff is the sequence $\{(a_n,b_n)\}_{n \geq 0}$ of positions whose Sprague-Grundy values are $k$, in which $0 \leq a_n \leq b_n$ and $a_n < a_m$ for $n < m$. We first describe a recursive characterization of the first entries $a_i$ in the $k$-sequence of $\mathcal{F}$-Wythoff. \begin{corollary} \label{a_i} For each $k \geq 0$, consider the $k$-sequence $\{(a_n,b_n)\}_{n \geq 0}$ of $\mathcal{F}$-Wythoff. We have $a_n = mex\{a_i, b_i | 0 \leq i < n\}$ for each $n$. \end{corollary} \begin{proof} Assume that there exists an integer $n > 0$ such that $a_n \neq m = mex\{a_i,b_i | 0 \leq i \leq n-1\}$. If $a_n < m$ then $a_n \in \{a_i,b_i | 0 \leq i \leq n-1\}$ and so, since the first entries $a_i$ are strictly increasing, there exists $l < n$ such that $a_n = b_l$. This means there exists a move (removing tokens from one pile) between the two positions $(a_n,b_n)$ and $(a_l,b_l)$, whose Sprague-Grundy values are both $k$. This is a contradiction. Assume now that $a_n > m$. By Theorem \ref{Row}, there exists $m'$ (either $m \leq m'$ or $m' < m$) such that the position $(m,m')$ has Sprague-Grundy value $k$. This means there exists some $j < n$ such that $(m,m')$ is identical to $(a_j, b_j)$. It follows that $m \in \{a_i,b_i | 0 \leq i \leq n-1\}$, contradicting the choice of $m$ as the minimum excluded number of this set. Hence, $a_n = mex\{a_i,b_i | 0 \leq i \leq n-1\}$ for all $n$. \end{proof} Note that Corollary \ref{a_i} is still true for those variants of Wythoff's game which satisfy Theorem \ref{Row}. Such games, including Wythoff's game, are discussed in \cite{ho}. Similarly, the first equation in the next corollary also holds for those variants of Wythoff's game which satisfy Theorem \ref{Row}. \begin{corollary} \label{Z+} For each $k \geq 0$, consider the $k$-sequence $\{(a_n,b_n)\}_{n \geq 0}$ of $\mathcal{F}$-Wythoff.
We have \begin{align*} \begin{cases} \{a_n | n \geq 0\} \cup \{b_n | n \geq 0\} = \mathbb{Z}_{\geq 0}, \\ |\{a_n | n \geq 0\} \cap \{b_n | n \geq 0\}| \leq 2, \end{cases} \end{align*} where $|S|$ is the number of elements of the set $S$. \end{corollary} \begin{proof} The first equation holds by Theorem \ref{Row}. We now prove the second equation. Assume, by contradiction, that $|\{a_n | n \geq 0\} \cap \{b_n | n \geq 0\}| \geq 3$. Let $x, y, z$ be three elements in the intersection $\{a_n | n \geq 0\} \cap \{b_n | n \geq 0\}$ such that $x < y < z$. Then there exist $a_1 \leq x \leq b_1$, $a_2 \leq y \leq b_2$, $a_3 \leq z \leq b_3$ such that the six positions $(a_1, x)$, $(x,b_1)$, $(a_2,y)$, $(y,b_2)$, $(a_3, z)$, and $(z,b_3)$ all have Sprague-Grundy value $k$. By the uniqueness in Theorem \ref{Row}, we then have $a_1 = b_1$, $a_2 = b_2$, and $a_3 = b_3$. Note that $a_2, a_3 > 0$ and so one can move from $(a_3,b_3)$ to $(a_2, b_2)$. This is a contradiction as these two positions belong to the $k$-sequence. \end{proof} We now return to a discussion of the Sprague-Grundy values in each row of the array $\mathbb{A}$. A sequence $(s_i)$ is said to be \emph{ultimately additively periodic} if there exist $N,p > 0$ such that $s_{n+p} = s_n+p$ for all $n \geq N$. Based on our computer explorations, we conjecture that this periodicity holds for each row in the array $\mathbb{A}$. \begin{conjecture} \label{add-per} Let $a \geq 0$. The sequence $\{\mathcal{G}(a,n)\}_{n \geq 0}$ is ultimately additively periodic. \end{conjecture} Recall that ultimate additive periodicity also holds for Wythoff's game \cite{Dress, landman}. Our computer explorations show that this periodicity is common in variants of Wythoff's game. (See \cite{ho}.) This leads us to the following problem. \begin{problem} \label{cha-add-per} Characterize variants of Wythoff's game whose nim-sequences $\{\mathcal{G}(a,n)\}_{n \geq 0}$ are ultimately additively periodic for all $a$.
\end{problem} We now discuss the distribution of Sprague-Grundy values of $\mathcal{F}$-Wythoff in each diagonal parallel to the main diagonal in the array $\mathbb{A}$. It is well known that each such diagonal for Wythoff's game contains every nonnegative integer \cite{blass}. Based on our computer explorations, the same result is conjectured for $\mathcal{F}$-Wythoff. \begin{conjecture} \label{Dia} Let $a, g$ be nonnegative integers. There exists a unique integer $b$ such that $\mathcal{G}(b,a+b) = g$. \end{conjecture} Note that $\mathcal{G}(\lfloor \phi a \rfloor + 1,\lfloor \phi a \rfloor+a + 1) = 0$ and so Conjecture \ref{Dia} holds for $g = 0$. We give here the proof of Conjecture \ref{Dia} for the case $a = 0$. \begin{proof}[Proof of Conjecture \ref{Dia} for the case $a = 0$] Assume that $g > 0$ and assume by contradiction that the sequence $\{\mathcal{G}(n,n)\}_{n \geq 0}$ does not contain $g$. We can assume that $g$ is the smallest integer not in that sequence. Then there exists a smallest integer $b_0 > 0$ such that \begin{align} \label{BW-b00} \{0,1,\ldots, g-1\} \subseteq \{\mathcal{G}(i,i) | i \leq b_0-1\}. \end{align} For each $s \leq b_0$, there exists at most one value $t_s \geq b_0$ such that $\mathcal{G}(s,t_s) = g$. Let $S$ be the set of the values $t_s$, and set \[ T_0 = \begin{cases}\max(S),&\ \text{if}\ S\not=\emptyset;\\ b_0, &\ \text{otherwise}. \end{cases} \] Then $\mathcal{G}(s,t) \neq g$ for $s \leq b_0, t > T_0$. Note that $\mathcal{G}(i,i) \neq g$ for all $i$. Set $m = T_0+1$. We have \[ \mathcal{G}(m,m) = mex\{\mathcal{G}(m,i), \mathcal{G}(j,j) | 0 \leq i \leq m-1, 1 \leq j \leq m-1\}. \] By (\ref{BW-b00}), $\mathcal{G}(m,m) \geq g$ and so $\mathcal{G}(m,m) > g$ as $\mathcal{G}(m,m) \neq g$. Since $\mathcal{G}(j,j) \neq g$ for all $j$, there exists $i_0 \leq m-1$ such that $\mathcal{G}(m,i_0) = g$. Note that $i_0 > b_0$: otherwise $m = t_{i_0} \leq T_0$, contradicting $m = T_0+1$.
We have \[ \mathcal{G}(i_0,i_0) = mex\{\mathcal{G}(i_0,j), \mathcal{G}(l,l) \mid j \leq i_0-1,\ 1 \leq l \leq i_0-1\}. \] By (\ref{BW-b00}), $\mathcal{G}(i_0,i_0) \geq g$ and so $\mathcal{G}(i_0,i_0) > g$, as $\mathcal{G}(i_0,i_0) \neq \mathcal{G}(i_0,m) = g$ by Theorem \ref{Row}. Since $\mathcal{G}(l,l) \neq g$ for all $l$, there exists $j_0 < i_0$ such that $\mathcal{G}(i_0,j_0) = g$. However, there exists a move from $(m,i_0)$ to $(i_0,j_0)$ as $m > i_0 > j_0$. This is a contradiction. \end{proof} So far, we have not been able to prove the conjecture for any given $a \geq 1$. \section{$\mathcal{F}$-Wythoff in mis\`{e}re play} Recall that in the game we have discussed so far, a player wins if (s)he makes the last move. This is the \emph{normal} convention. Conversely, under the \emph{mis\`{e}re} convention, a player is declared the winner if (s)he forces the opponent to make the last move. In this section, we study $\mathcal{F}$-Wythoff played under the mis\`{e}re convention. We show that $\mathcal{F}$-Wythoff and mis\`{e}re $\mathcal{F}$-Wythoff swap Sprague-Grundy values 0 and 1 while agreeing on all other Sprague-Grundy values. We first show that the $\mathcal{P}$-positions of mis\`{e}re $\mathcal{F}$-Wythoff are exactly those positions which have Sprague-Grundy value 1 in $\mathcal{F}$-Wythoff, while those positions which have Sprague-Grundy value 1 in mis\`{e}re $\mathcal{F}$-Wythoff are exactly the $\mathcal{P}$-positions of $\mathcal{F}$-Wythoff. The proofs of the following two theorems are essentially the same as those of Theorems \ref{FW-P} and \ref{V1}, respectively, and so we omit them.
\begin{theorem} \label{P-Mis} The position $(a,b)$ with $a \leq b$ is a $\mathcal{P}$-position in mis\`{e}re $\mathcal{F}$-Wythoff if and only if it is an element of the set \[\{(0,1), (\lfloor \phi n \rfloor +2, \lfloor \phi^2 n \rfloor +2) | n \geq 0\}.\] \end{theorem} \begin{theorem} \label{1-Mis} The position $(a,b)$ with $a \leq b$ has Sprague-Grundy value 1 in mis\`{e}re $\mathcal{F}$-Wythoff if and only if it is an element of the set \[\{(\lfloor \phi n \rfloor + 1, \lfloor \phi^2 n \rfloor + 1) | n \geq 0\}.\] \end{theorem} We now go further and show that $\mathcal{F}$-Wythoff and its mis\`{e}re version differ precisely on those positions which have Sprague-Grundy values 0 and 1, via a swap. An impartial game can be described as a finite directed acyclic graph without multiple edges in which each vertex is a position and each downward edge is a move. Note that if a game $G$, under the normal convention, is described by a graph $\Gamma$, then the mis\`{e}re version of $G$ can be described by the graph $\Gamma^-$ obtained from $\Gamma$ by adding one extra vertex $v$ and a downward edge from each final vertex (vertex without outgoing edges) in $\Gamma$ to $v$. For an impartial game $G$, denote by $\mathcal{G}_G$ and $\mathcal{G}^-_G$ the Sprague-Grundy functions for $G$ and its mis\`{e}re version, respectively. If there exist a subset $V_0$ of the $\mathcal{P}$-positions and a subset $V_1$ of the positions of Sprague-Grundy value 1 in $G$ such that $\mathcal{G}_G(p) + \mathcal{G}^-_G(p) = 1$ if $p \in V_0 \cup V_1$ and $\mathcal{G}_G(p) = \mathcal{G}^-_G(p)$ otherwise, then $G$ is said to be \emph{miserable}. If $V_0$ coincides with the set of $\mathcal{P}$-positions and $V_1$ coincides with the set of positions of Sprague-Grundy value 1 in $G$, then $G$ is said to be \emph{strongly miserable}. Several miserable and strongly miserable impartial games are studied in \cite{RRR}, including Wythoff's game.
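These definitions are easy to explore numerically. The sketch below is our own illustration, not part of the paper: it uses the standard rules of classical Wythoff's game (rather than the $\mathcal{F}$-variants) as a known baseline, computes the normal and mis\`{e}re Sprague-Grundy functions, and checks the value swap characteristic of miserable games on a small board.

```python
from functools import lru_cache

def moves(a, b):
    """All followers of position (a, b) in classical Wythoff's game."""
    for k in range(1, a + 1):
        yield (a - k, b)          # remove k tokens from the first pile
    for k in range(1, b + 1):
        yield (a, b - k)          # remove k tokens from the second pile
    for k in range(1, min(a, b) + 1):
        yield (a - k, b - k)      # remove k tokens from both piles

def mex(values):
    """Minimum excludant: least nonnegative integer not in `values`."""
    values = set(values)
    g = 0
    while g in values:
        g += 1
    return g

@lru_cache(maxsize=None)
def grundy(a, b):                 # normal play: the terminal (0,0) has value 0
    return mex(grundy(*w) for w in moves(a, b))

@lru_cache(maxsize=None)
def grundy_mis(a, b):             # misere play: the terminal (0,0) has value 1
    if (a, b) == (0, 0):
        return 1
    return mex(grundy_mis(*w) for w in moves(a, b))

phi = (1 + 5 ** 0.5) / 2
# classical P-positions (floor(n*phi), floor(n*phi^2)) have Grundy value 0
p_positions_ok = all(grundy(int(n * phi), int(n * phi * phi)) == 0
                     for n in range(8))
```

With these two functions one can verify Gurvich's miserability of Wythoff's game on small boards: values $\geq 2$ agree, while values 0 and 1 may swap (e.g. at $(0,0)$ and $(1,2)$).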
Gurvich has shown that Wythoff's game is miserable but not strongly miserable \cite{RRR}. We now show that strong miserability holds for $\mathcal{F}$-Wythoff. \begin{theorem} \label{Str.Mis} The game $\mathcal{F}$-Wythoff is strongly miserable. \end{theorem} \begin{proof} Let $\Gamma$ be the graph of $\mathcal{F}$-Wythoff. Consider the graph $\Gamma^-$ of mis\`{e}re $\mathcal{F}$-Wythoff obtained from $\Gamma$ by adding the extra sink $v_0$. For each vertex (position) $v$, the {\em height} $h(v)$ of $v$ is the length of the longest directed path from $v$ to the sink $v_0$. Denote by $\mathcal{G}^-$ the Sprague-Grundy function for mis\`{e}re $\mathcal{F}$-Wythoff. We will prove by induction on $h(v)$ that $\mathcal{G}^-(v) = \mathcal{G}(v)$ whenever $\mathcal{G}^-(v) \geq 2$. One can check that the claim is true for $h(v) \leq 2$. Assume that the claim is true for $h(v) \leq n$ for some $n \geq 2$. We show that the claim is true for $h(v) = n+1$. For each $k < \mathcal{G}^-(v)$, there exists $w_k$ such that $\mathcal{G}^-(w_k) = k$ and one can move from $v$ to $w_k$. By Theorems \ref{P-Mis}, \ref{1-Mis} and the inductive hypothesis, we have \[\{\mathcal{G}(w_k)| 0 \leq k < \mathcal{G}^-(v)\} = \{0,1, \ldots, \mathcal{G}^-(v)-1\}.\] Note that if there exists a move from $v$ to some $w$ in $\mathcal{F}$-Wythoff, then that move can also be made in mis\`{e}re $\mathcal{F}$-Wythoff. Moreover, by Theorems \ref{P-Mis}, \ref{1-Mis} and the inductive hypothesis, $\mathcal{G}(w) \neq \mathcal{G}^-(v)$. We have \[\mathcal{G}(v) = mex\{\mathcal{G}(w) | \text{$w$ is a follower of $v$}\}.\] Since the $mex$ set includes $\{0,1, \ldots, \mathcal{G}^-(v)-1\}$ but excludes $\mathcal{G}^-(v)$, we conclude that $\mathcal{G}(v) = \mathcal{G}^-(v)$. \end{proof} Recall that Wythoff's game and several of its variants are either miserable or strongly miserable \cite{RRR}.
Our computer explorations show that the two variants of Wythoff's game recently discussed in \cite{ho} are miserable. This common behavior leads us to the following question and problem. \begin{question} Are all extensions of Wythoff's game either miserable or strongly miserable? \end{question} \begin{problem} Characterize miserable or strongly miserable restrictions of Wythoff's game. \end{problem} \section{Variants of $\mathcal{F}$-Wythoff preserving its $\mathcal{P}$-positions} In this section, we answer the question as to whether there exists either a restriction or an extension of $\mathcal{F}$-Wythoff preserving its $\mathcal{P}$-positions. For an impartial game, a move is said to be \emph{redundant} \cite{Ext-Res} if the elimination of this move from the game does not change the set of $\mathcal{P}$-positions. A move is therefore not redundant if there exists a position $p$ such that this move is the unique winning move from $p$. Given an impartial game, a move can be added to the set of moves without changing the set of $\mathcal{P}$-positions if this move does not lead from a $\mathcal{P}$-position to another $\mathcal{P}$-position. We will introduce in this section one restriction and one extension of $\mathcal{F}$-Wythoff preserving its $\mathcal{P}$-positions. The idea for this section comes from our recent work on variants of Wythoff's game preserving its $\mathcal{P}$-positions \cite{ho}. Consider the restriction of $\mathcal{F}$-Wythoff, which we call \emph{$\mathcal{F}_{\mathcal{R}}$-Wythoff}, obtained as follows: if the two piles have different sizes, the move of removing tokens from a single pile may not be applied to the smaller pile. The second game is an extension of $\mathcal{F}$-Wythoff obtained by adding an extra move as follows: from a position $(a,b)$ with $a \leq b$, one can remove $k$ tokens from the pile of size $a$ and $l \leq k$ tokens from the pile of size $b$ provided that the integer ratio of the two entries does not change.
We call this extension \emph{$\mathcal{F}_{\mathcal{E}}$-Wythoff}. The proofs of the results in this section are quite similar to those for $\mathcal{F}$-Wythoff and we leave them to the reader. \begin{theorem} \label{REF-P} The $\mathcal{P}$-positions of $\mathcal{F}_{\mathcal{R}}$-Wythoff $($and $\mathcal{F}_{\mathcal{E}}$-Wythoff$)$ are identical to those of $\mathcal{F}$-Wythoff. \end{theorem} We now answer the question as to whether there exists a restriction of $\mathcal{F}_{\mathcal{R}}$-Wythoff preserving its $\mathcal{P}$-positions. \begin{theorem} \label{RF-Res} There is no restriction of $\mathcal{F}_{\mathcal{R}}$-Wythoff preserving its $\mathcal{P}$-positions. \end{theorem} \begin{proof} We will show that neither of the moves in $\mathcal{F}_{\mathcal{R}}$-Wythoff is redundant. We need to show that for every positive integer $k$, the following two properties hold: \begin{itemize} \item [(i)] there exists an $\mathcal{N}$-position $(a,b)$ with $a < b$ such that removing $k$ tokens from the larger pile is the unique winning move; \item [(ii)] there exists an $\mathcal{N}$-position such that removing $k$ tokens from both piles is the unique winning move. \end{itemize} For (i), let $a = 2, b = 3+k$. Then $(a,b)$ is an $\mathcal{N}$-position. Moreover, by Theorem \ref{FW-P}, removing $k$ tokens from the larger pile is the unique winning move. For (ii), we first claim that there exist positive integers $n$ and $m$ such that $\lfloor\phi n\rfloor + k = \lfloor\phi m\rfloor$. Indeed, set $n_1 = \lfloor 2\phi\rfloor = 3$, $n_2 = \lfloor3\phi \rfloor = 4$, $m_1 = 3+k$, and $m_2 = 4 + k$. We show that either $m_1$ or $m_2$ is of the form $\lfloor\phi m\rfloor$ for some $m$. Assume for contradiction that neither $m_1$ nor $m_2$ is of the form $\lfloor\phi m\rfloor$. By Lemma \ref{Comp}, there exist $r_1 < r_2$ such that $m_1 = \lfloor\phi r_1\rfloor +r_1$, $m_2 = \lfloor\phi r_2\rfloor +r_2$.
Note that $\lfloor\phi r_1\rfloor < \lfloor\phi r_2\rfloor$ and so \[1 = m_2 - m_1 = \lfloor\phi r_2\rfloor +r_2 - (\lfloor\phi r_1\rfloor +r_1) = \lfloor\phi r_2\rfloor - \lfloor\phi r_1\rfloor + r_2 - r_1 \geq 2, \] a contradiction. Now, if $m_1 = \lfloor\phi m\rfloor$ (resp. $m_2 = \lfloor\phi m\rfloor$), let $n = 2$ (resp. $n = 3$). Then $n$ and $m$ satisfy the condition $\lfloor\phi n\rfloor + k = \lfloor\phi m\rfloor$. Let $a = \lfloor\phi n\rfloor +1 + k$, $b = \lfloor\phi n\rfloor +n +1 + k$. Then $(a,b)$ is an $\mathcal{N}$-position and removing $k$ tokens from both piles is a winning move. (Note that an equal number of tokens can be removed from $(a,b)$ by Lemma \ref{U-lem}.) Moreover, for $k' \neq k$, removing $k'$ tokens from both piles is not a winning move. Indeed, otherwise there would be a move between the two $\mathcal{P}$-positions $(a-k,b-k)$ and $(a-k',b-k')$. (Note that $\lfloor (b-k)/(a-k) \rfloor = \lfloor (b-k')/(a-k') \rfloor = 1$ by the last equation of Lemma \ref{U-lem}.) It remains to show that removing $k$ tokens from both piles of $(a,b)$ is the unique winning move. Assume for contradiction that there exists another winning move from $(a,b)$. This move must take some $l$ tokens from the larger pile, leading $(a,b)$ to some position $(\lfloor\phi r\rfloor + 1, \lfloor\phi r\rfloor + r + 1)$. First consider the case $b-l = \lfloor\phi r\rfloor + 1$, $a = \lfloor\phi r\rfloor + r + 1$. We have shown the existence of $m$ such that $\lfloor\phi m\rfloor = \lfloor\phi n\rfloor + k = a - 1$ and so $\lfloor\phi m \rfloor = \lfloor\phi r\rfloor + r = \lfloor\phi^2 r\rfloor$, which contradicts Lemma \ref{Comp}. Now consider the case $b-l = \lfloor\phi r\rfloor + r + 1$, $a = \lfloor\phi r\rfloor + 1$. We have \begin{align*} \begin{cases} a = \lfloor\phi n\rfloor + k + 1 = \lfloor\phi r\rfloor + 1,\\ b-l = \lfloor\phi n\rfloor + n + 1 + k - l = \lfloor\phi r\rfloor + r + 1. \end{cases} \end{align*} The first equation implies $n < r$.
By substituting $\lfloor\phi n\rfloor + k$ from the first equation into the second one, we get $\lfloor\phi r\rfloor + n - l = \lfloor\phi r\rfloor + r$, which implies $n = l+r > r$, giving a contradiction. Therefore, this case is impossible. \end{proof} \begin{theorem} \label{REF-1} The positions which have Sprague-Grundy value 1 in $\mathcal{F}_{\mathcal{R}}$-Wythoff $($and $\mathcal{F}_{\mathcal{E}}$-Wythoff$)$ are identical to those of $\mathcal{F}$-Wythoff. \end{theorem} One can check that those positions which have Sprague-Grundy value $i$, for $2 \leq i \leq 3$, in $\mathcal{F}_{\mathcal{R}}$-Wythoff and those positions which have Sprague-Grundy value $i$, for $2 \leq i \leq 7$, in $\mathcal{F}_{\mathcal{E}}$-Wythoff can be obtained from the $\mathcal{P}$-positions of $\mathcal{F}$-Wythoff by a translation, except for finitely many initial positions. Consider the two-dimensional arrays of Sprague-Grundy values of $\mathcal{F}_{\mathcal{R}}$-Wythoff and $\mathcal{F}_{\mathcal{E}}$-Wythoff (as in Table \ref{T}). We have results similar to Theorem \ref{Row}. \begin{theorem} \label{REF-Row} Let $a, g$ be nonnegative integers. For each of the two games $\mathcal{F}_{\mathcal{R}}$-Wythoff and $\mathcal{F}_{\mathcal{E}}$-Wythoff, there exists $b$ such that the position $(a,b)$ has Sprague-Grundy value $g$. Moreover, such $b$ is unique for $\mathcal{F}_{\mathcal{E}}$-Wythoff. \end{theorem} We end this section with a result on the strong miserability of the two variants. \begin{theorem} \label{Str.Mis-RE} The two games $\mathcal{F}_{\mathcal{R}}$-Wythoff and $\mathcal{F}_{\mathcal{E}}$-Wythoff are both strongly miserable. \end{theorem} \section{More open questions} Based on the translation phenomenon discussed above, we raise in this section two general questions on variants of Wythoff's game.
\begin{question} Does there exist another variant of Wythoff's game whose $\mathcal{P}$-positions, possibly except for a finite number of positions, admit the formula $(\lfloor \phi n \rfloor+m, \lfloor \phi^2n \rfloor +m)$ for some $m \geq 2$? \end{question} More generally, \begin{question} Does there exist another variant of Wythoff's game whose $\mathcal{P}$-positions, possibly except for a finite number of positions, admit the formula $(\lfloor \phi n \rfloor+a, \lfloor \phi^2n \rfloor +b)$ for some integers $a$ and $b$? \end{question} \end{document}
\begin{document} \begin{frontmatter} \title{A matrix approach to the computation of quadrature formulas on the unit circle\thanksref{Agrad}} \author{ Mar\'{\i}a Jos\'e Cantero} \address{Department of Applied Mathematics, University of Zaragoza. Calle Maria de Luna 3, 50018 Zaragoza, Spain.} \author{Ruym\'an Cruz-Barroso and } \author{Pablo Gonz\'alez-Vera \corauthref{cor}} \corauth[cor]{Corresponding author.} \ead{[email protected]} \address{Department of Mathematical Analysis. La Laguna University. 38271 La Laguna. Tenerife. Canary Islands. Spain} \thanks[Agrad]{The work of the first author was partially supported by Gobierno de Arag\'{o}n-CAI, ``Programa Europa de Ayudas a la Investigaci\'{o}n'' and by a research grant from the Ministry of Education and Science of Spain, project MTM2005-08648-C02-01. The work of the second and the third authors was partially supported by the research project MTM 2005-08571 of the Spanish Government.} \date{} \begin{abstract} In this paper we consider a general sequence of orthogonal Laurent polynomials on the unit circle. We first study the equivalence between the recurrences for such families and Szeg\H{o}'s recursion, as well as the structure of the matrix representation of the multiplication operator in $\Lambda$ when a general sequence of orthogonal Laurent polynomials on the unit circle is considered. Secondly, we analyze the computation of the nodes of the Szeg\H{o} quadrature formulas by using Hessenberg and five-diagonal matrices. Numerical examples concerning the family of Rogers-Szeg\H{o} $q$-polynomials are also analyzed. \end{abstract} \begin{keyword} Orthogonal Laurent polynomials, Szeg\H{o} polynomials, Recurrence relations, Szeg\H{o} quadrature formulas, Rogers-Szeg\H{o} $q$-polynomials, Hessenberg matrices, five-diagonal matrices.
\end{keyword} \end{frontmatter} \section{Introduction} As is well known, when dealing with the estimation of the integral $I_{\sigma}(f)=\int_{a}^{b} f(x)d\sigma(x)$, $\sigma(x)$ being a positive measure on $[a,b]$, by means of an $n$-point Gauss-Christoffel quadrature rule $I_n(f)=\sum_{j=1}^{n} A_j f(x_j)$ such that $I_{\sigma}(P)=I_n(P)$ for any polynomial of degree up to $2n-1$, the effective computation of the nodes $\{ x_j \}_{j=1}^{n}$ and weights $\{ A_j \}_{j=1}^{n}$ in $I_n(f)$ has become an interesting matter of study, both numerically and theoretically. As shown by Gautschi (see \cite{Ga}, \cite{Ga2} or \cite{Ga3}) among others, the basic fact here is the three-term recurrence relation satisfied by the sequence of orthogonal polynomials for the measure $\sigma$, giving rise to certain tridiagonal matrices (Jacobi matrices) so that the eigenvalues of the $n$-th principal submatrix coincide with the nodes $\{ x_j \}_{j=1}^{n}$, i.e., with the zeros of the $n$-th orthogonal polynomial. Furthermore, the weights $\{ A_j \}_{j=1}^{n}$ can be easily expressed in terms of the first components of the normalized eigenvectors. In this paper, we shall be concerned with the approximate calculation of integrals of $2\pi$-periodic functions with respect to a positive measure $\mu$ on $[-\pi,\pi]$, or more generally with integrals on the unit circle like $I_{\mu}(f)=\int_{-\pi}^{\pi} f\left( e^{i\theta} \right) d\mu(\theta)$. Here we will also propose as an estimation for $I_{\mu}(f)$ an $n$-point quadrature rule $I_n(f)=\sum_{j=1}^{n}\lambda_j f(z_j)$ with distinct nodes on the unit circle, but now imposing exactness not for algebraic polynomials but for trigonometric polynomials, or more generally Laurent polynomials, that is, functions of the form $L(z)=\sum_{j=p}^{q} \alpha_jz^j$, $\alpha_j \in \Bbb C$, $p$ and $q$ integers with $p \leq q$.
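The Jacobi-matrix mechanism just recalled can be sketched in a few lines of code. The following is our own minimal illustration (not from the paper), using the known recurrence coefficients of the Legendre weight on $[-1,1]$: the nodes of the $n$-point Gauss rule are the eigenvalues of the $n$-th principal Jacobi submatrix, and the weights come from the squared first components of the normalized eigenvectors.

```python
import numpy as np

def gauss_legendre(n):
    """Gauss-Legendre rule on [-1,1] via the symmetric Jacobi matrix.

    For the Legendre weight the monic recurrence has zero diagonal and
    off-diagonal entries k / sqrt(4 k^2 - 1) (Golub-Welsch construction).
    """
    k = np.arange(1, n)
    b = k / np.sqrt(4.0 * k**2 - 1.0)     # off-diagonal recurrence coefficients
    J = np.diag(b, 1) + np.diag(b, -1)    # Jacobi matrix (diagonal is zero)
    nodes, V = np.linalg.eigh(J)
    weights = 2.0 * V[0, :]**2            # mu_0 = int_{-1}^{1} dx = 2
    return nodes, weights

nodes, weights = gauss_legendre(3)
# the 3-point rule is exact up to degree 2n - 1 = 5, e.g. for x^4
approx = sum(w * x**4 for x, w in zip(nodes, weights))
```

For $n=3$ the computed nodes are $0, \pm\sqrt{3/5}$ with weights $8/9, 5/9, 5/9$, and the rule reproduces $\int_{-1}^{1} x^4\,dx = 2/5$ to machine precision.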
Now, it should be recalled that Laurent polynomials on the real line were used by Jones and Thron in the early 1980s in connection with continued fractions and strong moment problems (see \cite{JNT} and \cite{JT}), and also implicitly in \cite{JH}. Their study not only underwent a rapid development in the last decades, giving rise to a theory of orthogonal Laurent polynomials on the real line (see e.g. \cite{CC}, \cite{CD}, \cite{He}, \cite{OT}, \cite{On} and \cite{Th}), but was also extended to an ampler context, leading to a general theory of orthogonal rational functions (see \cite{Bu}). On the other hand, the rapidly growing interest in problems on the unit circle, like quadratures, Szeg\H{o} polynomials and the trigonometric moment problem, has motivated the development of a theory of orthogonal Laurent polynomials on the unit circle, introduced by Thron in \cite{Th} and continued in \cite{Jo}, \cite{JC}, \cite{RO}, where the recent contributions of Cantero, Moral and Vel\'azquez in \cite{CM}, \cite{Ca} and \cite{CMV3} have given an important and definitive impulse to the spectral analysis of certain problems on the unit circle. Here, it should be remarked that the theory of orthogonal Laurent polynomials on the unit circle exhibits features totally different from those of the theory on the real line, because of the close relation between orthogonal Laurent polynomials and the orthogonal polynomials on the unit circle (see \cite{RP}). The purpose of this paper is to study orthogonal Laurent polynomials as well as the analysis and computation of the nodes and weights of the so-called Szeg\H{o} quadrature formulas. The paper is organized as follows: In Section 2, sequences of orthogonal Laurent polynomials on the unit circle are constructed. They satisfy certain recurrence relations which are proved to be equivalent to the recurrences satisfied by the family of Szeg\H{o} polynomials, as shown in Section 3.
The multiplication operator in the space of Laurent polynomials, with a general ordering previously fixed, is considered in Section 4. On the unit circle, this operator plays a role analogous to that of the Jacobi matrices on the real line, through the five-diagonal representation obtained in \cite{Ca}. Our main result in this section is a proof that this is the minimal representation, from a point of view different from that of \cite{CMV3}. In Section 5, a matrix approach to Szeg\H{o} quadrature formulas in the more natural framework of orthogonal Laurent polynomials on the unit circle is analyzed. Finally, in Section 6 we present some illustrative numerical examples on the computation of the nodes and weights of these quadrature formulas, by way of five-diagonal matrices versus Hessenberg matrices, considering the so-called Rogers-Szeg\H{o} weight function. \section{Orthogonal Laurent polynomials on the unit circle. Preliminary results} We start this section with some conventions for notation and some preliminary results. We denote by $\Bbb T :=\{ z \in \Bbb C : |z|=1 \}$ and $\Bbb D :=\{ z : |z|<1 \}$ the unit circle and the open disk in the complex plane, respectively. $\Bbb P=\Bbb C[z]$ is the complex vector space of polynomials in the variable $z$ with complex coefficients, $\Bbb P_n:=span \{ 1,z,z^2, \dots, z^n \}$ is the corresponding vector subspace of polynomials of degree less than or equal to $n$, while $\Bbb P_{-1} =\{0\}$ is the trivial subspace. $\Lambda:=\Bbb C[z,z^{-1}]$ denotes the complex vector space of Laurent polynomials in the variable $z$ and for $m,n \in \Bbb Z$, $m \leq n$, we define the vector subspace $\Lambda_{m,n} = span \{ z^m, z^{m+1},\dots, z^n \}$. Also, for a given function $f(z)$ we define its ``substar-conjugate'' as $f_*(z)=\overline{f\left( 1/\overline{z} \right)}$, whereas for a polynomial $P(z) \in \Bbb P_n$ its reversed (or reciprocal) polynomial is $P^*(z)=z^nP_*(z)=z^n\overline{P\left( 1/\overline{z} \right)}$.
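In coefficient terms, the substar and reversed operations are simple to implement; the following sketch (our illustration of the definitions just given, not code from the paper) verifies the identity $P^*(z)=z^n\overline{P\left( 1/\overline{z} \right)}$ at a sample point.

```python
# For P(z) = sum_j a[j] z^j of degree <= n, the reversed polynomial P*(z)
# has the conjugated coefficients of P taken in reverse order.
def eval_poly(a, z):
    return sum(c * z**j for j, c in enumerate(a))

def reversed_poly(a):
    return [c.conjugate() for c in reversed(a)]

a = [1 + 2j, 3 + 0j, -1j]      # P(z) = (1+2i) + 3z - i z^2, so n = 2
z = 0.3 + 0.4j                 # arbitrary nonzero sample point
lhs = eval_poly(reversed_poly(a), z)
rhs = z ** (len(a) - 1) * eval_poly(a, 1 / z.conjugate()).conjugate()
```

Both sides agree to machine precision, and applying the operation twice returns the original coefficients, reflecting the involutive character of $P \mapsto P^*$ on $\Bbb P_n$.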
Throughout the paper, we shall be dealing with a positive Borel measure $\mu$ supported on the unit circle $\Bbb T$, normalized by the condition $\int_{-\pi}^{\pi} d\mu(\theta)=1$ (i.e., a probability measure). As usual, the inner product induced by $\mu(\theta)$ is given by $$\langle f,g \rangle_{\mu}=\int_{-\pi}^{\pi} f\left( e^{i\theta} \right) \overline{g\left( e^{i\theta} \right)} d\mu(\theta).$$ For our purposes, we start by constructing a sequence of subspaces of Laurent polynomials $\{ {\cal L}_n \}_{n=0}^{\infty}$ satisfying $$dim\left( {\cal L}_n \right) = n+1 \;\;\;,\;\; {\cal L}_n \subset {\cal L}_{n+1} \;\;,\;n=0,1,\ldots.$$ This can be done by taking a sequence $\{ p(n) \}_{n=0}^{\infty}$ of nonnegative integers such that $p(0)=0$, $0 \leq p(n) \leq n$ and $s(n)=p(n)-p(n-1) \in \{0,1\}$ for $n=1,2,\ldots$. In the sequel, a sequence $\{ p(n) \}_{n=0}^{\infty}$ satisfying these requirements will be called a ``generating sequence''. Then, set \begin{equation}\label{l} {\cal L}_n = \Lambda_{-p(n),q(n)}= span \left\{ z^j \;:\;-p(n) \leq j \leq q(n) \right\} \;,\;q(n):=n-p(n). \end{equation} Observe that $\{ q(n) \}_{n=0}^{\infty}$ is also a generating sequence and that $\Lambda=\bigcup_{n=0}^{\infty} {\cal L}_n$ if and only if $\lim_{n\rightarrow \infty} p(n) = \lim_{n\rightarrow \infty} q(n) = \infty$. Moreover, $${\cal L}_{n+1} = \left\{ \begin{array}{lc} {\cal L}_n \oplus span \{ z^{q(n+1)} \} &if \;s(n+1)=0 \\ \\ {\cal L}_n \oplus span \{ z^{-p(n+1)} \} &if \;s(n+1)=1 \\ \end{array} \right. .$$ In any case, we will say that $\{ p(n) \}_{n=0}^{\infty}$ has induced an ``ordering'' in $\Lambda$. Sometimes we will need to define $p(-1)=0$ and hence $s(0)=0$. Now, by applying the Gram-Schmidt orthogonalization procedure to ${\cal L}_n$, an orthogonal basis $\{\psi_0(z), \ldots, \psi_n(z) \}$ can be obtained.
If we repeat the process for each $n=1,2,\ldots$, a sequence $\{ \psi_n(z) \}_{n=0}^{\infty}$ of Laurent polynomials can be obtained, satisfying \begin{equation}\label{ortovarphi} \begin{array}{lcl} \psi_n(z) \in {\cal L}_n \backslash {\cal L}_{n-1} \;\;,\;n=1,2,\ldots &, &\psi_0(z) \equiv c \neq 0 \\ \\ \langle \psi_n(z),\psi_m(z) \rangle_{\mu}=\kappa_n \delta_{n,m} \;\;,\;\kappa_n > 0 &, &\delta_{n,m}= \left\{ \begin{array}{ccl} 0 &if &n \neq m \\ 1 &if &n=m \end{array}. \right. \end{array} \end{equation} $\{ \psi_n(z) \}_{n=0}^{\infty}$ will be called a ``sequence of orthogonal Laurent polynomials for the measure $\mu$ and the generating sequence $\{ p(n) \}_{n=0}^{\infty}$''. It should be noted that the orderings considered by Thron in \cite{Th} (the ``balanced'' situation), expanding $\Lambda$ in the ordered bases $$\Lambda_{0,0} \;\;\;,\;\;\;\Lambda_{-1,0} \;\;\;,\;\;\;\Lambda_{-1,1} \;\;\;,\;\;\;\Lambda_{-2,1} \;\;\;,\;\;\;\Lambda_{-2,2} \;\;\;,\;\;\;\Lambda_{-3,2} \;\;\;,\;\;\;\ldots$$ and $$\Lambda_{0,0} \;\;\;,\;\;\;\Lambda_{0,1} \;\;\;,\;\;\;\Lambda_{-1,1} \;\;\;,\;\;\;\Lambda_{-1,2} \;\;\;,\;\;\;\Lambda_{-2,2} \;\;\;,\;\;\;\Lambda_{-2,3} \;\;\;,\;\;\;\ldots$$ correspond to $p(n)=E\left[ \frac{n+1}{2} \right]$ and $p(n)=E\left[ \frac{n}{2} \right]$ respectively, where, as usual, $E[x]$ denotes the integer part of $x$ (see \cite{Ca}, \cite{RP} and \cite{Cr} for other properties of these particular orderings). In the sequel we will denote by $\{ \phi_n(z) \}_{n=0}^{\infty}$ the sequence of monic orthogonal Laurent polynomials for the measure $\mu$ and the generating sequence $\{ p(n) \}_{n=0}^{\infty}$, that is, the one whose leading coefficients are equal to 1 for all $n \geq 0$ (the coefficients of $z^{q(n)}$ or $z^{-p(n)}$ when $s(n)=0$ or $s(n)=1$, respectively). Moreover, we will denote by $\{ \chi_n(z) \}_{n=0}^{\infty}$ the sequence of orthonormal Laurent polynomials for the measure $\mu$ and the generating sequence $\{ p(n) \}_{n=0}^{\infty}$, i.e.
when $\kappa_n=1$ for all $n \geq 0$ in (\ref{ortovarphi}). This sequence is also uniquely determined by requiring that the leading coefficient of $\chi_n(z)$ be positive for each $n \geq 0$. On the other hand, when taking $p(n)=0$ for all $n=0,1,\ldots$ then ${\cal L}_n=\Lambda_{0,n}=\Bbb P_n$, so that the $n$-th monic orthogonal Laurent polynomial coincides with the $n$-th monic Szeg\H{o} polynomial (see e.g. \cite{Sz}), which will be denoted by $\rho_n(z)$ for $n=0,1,\ldots$. This means that $\rho_0(z) \equiv 1$ and for each $n \geq 1$, $\rho_n(z) \in \Bbb P_n \backslash \Bbb P_{n-1}$ is monic and satisfies \begin{equation}\label{rho} \begin{array}{l} \langle \rho_n(z),z^s \rangle_{\mu}=\langle \rho_n^{*}(z),z^t \rangle_{\mu}=0 ,\;s=0,1,\ldots,n-1 \;\;,\;t=1,2,\ldots,n \\ \langle \rho_n(z),z^n \rangle_{\mu}=\langle \rho_n^{*}(z),1 \rangle_{\mu} > 0. \end{array} \end{equation} Moreover, we will denote by $\{ \varphi_n(z) \}_{n=0}^{\infty}$ the sequence of orthonormal polynomials on the unit circle for $\mu(\theta)$, i.e., satisfying $\parallel \varphi_n(z) \parallel_{\mu}=\langle \varphi_n(z),\varphi_n(z) \rangle_{\mu}^{1/2}=1$ for all $n \geq 0$. This family is uniquely determined by requiring that the leading coefficient of $\varphi_n(z)$ be positive for each $n \geq 0$, and it is related to the family of monic orthogonal polynomials by $\rho_0(z) \equiv \varphi_0(z) \equiv 1$ and $\rho_n(z)=l_n \varphi_n(z)$ with $l_n=\langle \rho_n(z) , \rho_n(z) \rangle_{\mu}^{1/2}$ for all $n \geq 1$. Explicit expressions for Szeg\H{o} polynomials are in general not available, and in order to compute them we can make use of the following (Szeg\H{o}) forward recurrence relations (see e.g.
\cite{Sz}): \begin{equation}\label{le} \begin{array}{lc} \rho_{0}(z)=\rho_{0}^{*}(z) \equiv 1 & \\ \rho_{n}(z)=z\rho_{n-1}(z)+\delta_{n}\rho_{n-1}^{*}(z) \; &n \geq 1 \\ \rho_{n}^{*}(z)=\overline{\delta_{n}}z\rho_{n-1}(z)+\rho_{n-1}^{*}(z) \; &n \geq 1 \\ \end{array} \end{equation} where $\delta_0=1$ and $\delta_n := \rho_n(0)$ for all $n=1,2,\ldots$ are the so-called {\em Schur parameters} ({\em Szeg\H{o}}, {\em reflection}, {\em Verblunsky} or {\em Geronimus} parameters, see \cite{BS}) with respect to $\mu(\theta)$. Since the zeros of $\rho_n(z)$ lie in $\Bbb D$, these parameters satisfy $|\delta_n|<1$ for $n \geq 1$. Now, if we introduce the sequence of positive real numbers $\{ \eta_n \}_{n=1}^{\infty}$ given by \begin{equation}\label{eta} \eta_n=\sqrt{1-|\delta_n|^2} \in (0,1] \;\;\;,\;\;n=1,2,\ldots, \end{equation} then a straightforward computation from (\ref{le}) yields $$\frac{\langle \rho_n(z),\rho_n(z) \rangle_{\mu}}{\langle \rho_{n-1}(z),\rho_{n-1}(z) \rangle_{\mu}} = \eta_n^2$$ and so a forward recurrence for the family of orthonormal Szeg\H{o} polynomials is given by: \begin{equation}\label{lenorma} \begin{array}{lc} \varphi_{0}(z)=\varphi_{0}^{*}(z) \equiv 1 & \\ \eta_n \varphi_{n}(z)=z\varphi_{n-1}(z)+\delta_{n}\varphi_{n-1}^{*}(z) \; &n \geq 1 \\ \eta_n \varphi_{n}^{*}(z)=\overline{\delta_{n}}z\varphi_{n-1}(z)+\varphi_{n-1}^{*}(z) \; &n \geq 1. \\ \end{array} \end{equation} We conclude this section by considering the following results, proved in \cite{Ca} and \cite{RO}. The first one establishes the relation between the families of orthogonal Laurent polynomials with respect to the generating sequences $\{ p(n) \}_{n=0}^{\infty}$ and $\{ q(n) \}_{n=0}^{\infty}$, whereas the second one states the relation between the orthonormal and monic orthogonal Laurent polynomials for the generating sequence $\{ p(n) \}_{n=0}^{\infty}$ and the orthonormal and monic Szeg\H{o} polynomials.
The latter explains how to construct orthogonal Laurent polynomials on the unit circle from the sequence of Szeg\H{o} polynomials. Here it should be remarked that the situation on the real line, i.e. when dealing with sequences of orthogonal Laurent polynomials with respect to a positive measure supported on the real line, is totally different (for details, see e.g. \cite{CD}). \begin{propo}\label{conex1} Let $\{\xi_n (z) \}_{n=0}^{\infty}$ be a sequence of orthogonal Laurent polynomials for the measure $\mu$ and the generating sequence $\{q(n)\}_{n=0}^{\infty}$. Then, $\xi_n(z)=\psi_{n*}(z)$ for all $n \geq 0$, $\{\psi_n (z) \}_{n=0}^{\infty}$ being a sequence of orthogonal Laurent polynomials for the measure $\mu$ and the generating sequence $\{p(n)\}_{n=0}^{\infty}$, where $p(n)=n-q(n)$. \end{propo} \begin{flushright} $\Box$ \end{flushright} \begin{propo}\label{conex2} The families $\{ \phi_n(z) \}_{n=0}^{\infty}$ and $\{ \chi_n(z) \}_{n=0}^{\infty}$ are the respective sequences of monic orthogonal and orthonormal Laurent polynomials on the unit circle for a measure $\mu$ and the ordering induced by the generating sequence $\{p(n) \}_{n=0}^{\infty}$, if and only if, \begin{equation}\label{correspond} \phi_n(z)=\left\{ \begin{array}{ccl} \frac{\rho_n(z)}{z^{p(n)}} &if &s(n)=0 \\ \\ \frac{\rho_n^*(z)}{z^{p(n)}} &if &s(n)=1 \end{array} \right. \;\;\;\;\;and\;\;\;\;\; \chi_n(z)=\left\{ \begin{array}{ccl} \frac{\varphi_n(z)}{z^{p(n)}} &if &s(n)=0 \\ \\ \frac{\varphi_n^*(z)}{z^{p(n)}} &if &s(n)=1 \end{array} . \right.
\end{equation} \end{propo} \begin{flushright} $\Box$ \end{flushright} \section{Recurrence relations} \setcounter{equation}{0} We start with the next result, which establishes a three-term recurrence relation for the monic orthogonal and orthonormal families of Laurent polynomials for the measure $\mu$ and certain (balanced) generating sequences. \begin{propo}\label{equival1} Consider the families $\{ \phi_n(z) \}_{n=0}^{\infty}$ and $\{ \tilde{\phi}_n(z) \}_{n=0}^{\infty}$ of monic orthogonal Laurent polynomials for the measure $\mu$ and the generating sequences $p(n)=E \left[ \frac{n+1}{2} \right]$ and $p(n)=E \left[ \frac{n}{2} \right]$ respectively. Set $$A_{n}= \left\{ \begin{array}{crl} \delta_{n} &if &\;n \;is \;even \\ \overline{\delta_n} &if &\;n \;is \;odd \end{array}. \right.$$ Then, \begin{equation}\label{eq1} \begin{array}{c} \phi_{n}(z)=\left( A_n + \overline{A_{n-1}} z^{(-1)^n} \right) \phi_{n-1}(z) + \eta_{n-1}^2 z^{(-1)^n} \phi_{n-2}(z) \;,\;\;n \geq 2 \\ \\ \phi_{0}(z) \equiv 1 \;\;,\;\;\;\;\phi_{1}(z)=\overline{\delta_1} + \frac{1}{z}. \end{array} \end{equation} \begin{equation}\label{eq3} \begin{array}{c} \tilde{\phi}_{n}(z)=\left( \overline{A_{n}} + A_{n-1} z^{(-1)^{n+1}} \right) \tilde{\phi}_{n-1}(z) + \eta_{n-1}^2 z^{(-1)^{n+1}} \tilde{\phi}_{n-2}(z) \;,\;\;n \geq 2 \\ \\ \tilde{\phi}_{0}(z) \equiv 1 \;\;,\;\;\;\;\tilde{\phi}_{1}(z)=\delta_1 + z. \end{array} \end{equation} \end{propo} \begin{flushright} $\Box$ \end{flushright} These recurrences were initially proved by Thron in \cite{Th} in the context of continued fractions. An alternative proof is given in \cite{Cr}, making use of (\ref{le}) and Proposition \ref{conex2}. We will now see that the recurrences (\ref{le}) and (\ref{eq1}) (similarly with (\ref{eq3})) are in fact equivalent. \begin{teorema}\label{equivalole} The recurrences (\ref{le}) and (\ref{eq1}) are equivalent.
\end{teorema} {\em Proof}.- As said above, it remains to show that the relations (\ref{eq1}) imply the Szeg\H{o} recurrence (\ref{le}). Indeed, from Proposition \ref{conex2} and (\ref{eq1}), by taking ``super-star'' conjugation it follows for $k \geq 1$ that: \begin{equation}\label{rewritted1} \begin{array}{lcl} \rho_{2k}(z) &= &\left( \delta_{2k} + \delta_{2k-1}z \right)\rho_{2k-1}^*(z) + \eta_{2k-1}^2z^2\rho_{2k-2}(z) \\ \\ &= &\delta_{2k}\rho_{2k-1}^*(z) + z\left[ \delta_{2k-1}\rho_{2k-1}^*(z) + \eta_{2k-1}^2z\rho_{2k-2}(z) \right] \end{array} \end{equation} \begin{equation}\label{rewritted2} \begin{array}{lcl} \rho_{2k-1}^*(z) &= &\left( \overline{\delta_{2k-1}}z + \overline{\delta_{2k-2}} \right)\rho_{2k-2}(z) + \eta_{2k-2}^2\rho_{2k-3}^*(z) \\ \\ &= &\overline{\delta_{2k-1}}z\rho_{2k-2}(z) + \left[ \overline{\delta_{2k-2}}\rho_{2k-2}(z) + \eta_{2k-2}^2\rho_{2k-3}^*(z) \right] \end{array} \end{equation} \begin{equation}\label{rewritted3} \begin{array}{lcl} \rho_{2k}^*(z) &= &\left( \overline{\delta_{2k}}z + \overline{\delta_{2k-1}} \right)\rho_{2k-1}(z) + \eta_{2k-1}^2\rho_{2k-2}^*(z) \\ \\ &= &\overline{\delta_{2k}}z\rho_{2k-1}(z) + \left[ \overline{\delta_{2k-1}}\rho_{2k-1}(z) + \eta_{2k-1}^2\rho_{2k-2}^*(z) \right] \end{array} \end{equation} \begin{equation}\label{rewritted4} \begin{array}{lcl} \rho_{2k-1}(z) &= &\left( \delta_{2k-1} + \delta_{2k-2}z \right)\rho_{2k-2}^*(z) + \eta_{2k-2}^2z^2\rho_{2k-3}(z) \\ \\ &= &\delta_{2k-1}\rho_{2k-2}^*(z) + z\left[ \delta_{2k-2}\rho_{2k-2}^*(z) + \eta_{2k-2}^2z\rho_{2k-3}(z) \right].
\end{array} \end{equation} Clearly, the proof will be completed if from (\ref{rewritted1})-(\ref{rewritted4}) we deduce that: \begin{equation}\label{rewritted5} \rho_{2k-1}(z)=\delta_{2k-1}\rho_{2k-1}^*(z)+\eta_{2k-1}^2z\rho_{2k-2}(z) \end{equation} \begin{equation}\label{rewritted8} \rho_{2k-2}^*(z)=\overline{\delta_{2k-2}}\rho_{2k-2}(z)+\eta_{2k-2}^2\rho_{2k-3}^*(z) \end{equation} \begin{equation}\label{rewritted7} \rho_{2k-1}^*(z)=\overline{\delta_{2k-1}}\rho_{2k-1}(z)+\eta_{2k-1}^2\rho_{2k-2}^*(z) \end{equation} \begin{equation}\label{rewritted6} \rho_{2k-2}(z)=\delta_{2k-2}\rho_{2k-2}^*(z)+\eta_{2k-2}^2z\rho_{2k-3}(z). \end{equation} For this purpose, it will be enough to check that (\ref{rewritted5}) is valid, since the proofs of (\ref{rewritted8})-(\ref{rewritted6}) follow in a similar way. Thus, set $$R_{2k-1}(z)=\delta_{2k-1}\rho_{2k-1}^*(z)+\eta_{2k-1}^2z\rho_{2k-2}(z) \in \Bbb P_{2k-1}.$$ Hence, $R_{2k-1}(z)=\sum_{j=0}^{2k-1} \alpha_j \rho_j(z).$ By comparing the coefficients of $z^{2k-1}$ it follows that $\alpha_{2k-1}=1$. Since $$\alpha_j=\langle R_{2k-1}(z),\rho_j(z) \rangle_{\mu} = \delta_{2k-1} \langle \rho_{2k-1}^*(z),\rho_j(z) \rangle_{\mu} + \eta_{2k-1}^2 \langle z\rho_{2k-2}(z),\rho_j(z) \rangle_{\mu}$$ it follows from the orthogonality conditions for the Szeg\H{o} polynomials that $\alpha_j=0$ for all $j=1,\ldots,2k-2$. Hence, $R_{2k-1}(z)=\rho_{2k-1}(z)+\alpha_0$. Finally, by comparing the coefficients of $z^0$ it follows that $\alpha_0=0$.
\begin{flushright} $\Box$ \end{flushright} Moreover, the following recurrence relations were proved in \cite{Ca}: \begin{propo}\label{equival2} The family of orthonormal Laurent polynomials for the measure $\mu$ and the generating sequence $p(n)=E \left[ \frac{n}{2} \right]$ satisfies \begin{equation}\label{five2} \begin{array}{rcl} z \tilde{\chi}_0(z) &= &-\delta_1 \tilde{\chi}_0(z) + \eta_1 \tilde{\chi}_1(z) \;\;\;, \\ \\ z \left( \begin{array}{c} \tilde{\chi}_{2n-1}(z) \\ \tilde{\chi}_{2n}(z) \end{array} \right) &= &\left( \begin{array}{cc} -\eta_{2n-1}\delta_{2n} &-\overline{\delta_{2n-1}}\delta_{2n} \\ \eta_{2n-1}\eta_{2n} &\overline{\delta_{2n-1}}\eta_{2n} \end{array} \right) \left( \begin{array}{c} \tilde{\chi}_{2n-2}(z) \\ \tilde{\chi}_{2n-1}(z) \end{array} \right) \;+ \\ \\ &&\left( \begin{array}{cc} -\eta_{2n}\delta_{2n+1} & \eta_{2n} \eta_{2n+1} \\ -\overline{\delta_{2n}}\delta_{2n+1} & \overline{\delta_{2n}} \eta_{2n+1} \end{array} \right) \left( \begin{array}{c} \tilde{\chi}_{2n}(z) \\ \tilde{\chi}_{2n+1}(z) \end{array} \right) \;\;,\;\;n \geq 1 \end{array} \end{equation} \begin{flushright} $\Box$ \end{flushright} \end{propo} A similar matrix recurrence is deduced if the generating sequence $p(n)=E \left[ \frac{n+1}{2} \right]$ is considered (see \cite{Ca}). Having proved that the Szeg\H{o} recurrence (\ref{le}) is equivalent to both recurrences given in Proposition \ref{equival1}, we now establish its equivalence with the recurrence given in Proposition \ref{equival2}. We prove it in the case $p(n)=E\left[ \frac{n}{2} \right]$. The proof when the generating sequence $p(n)=E\left[ \frac{n+1}{2} \right]$ is considered can be established from Proposition \ref{conex1} just by taking ``super-star'' conjugation. \begin{teorema}\label{iff} The recurrence relations given in Propositions \ref{equival1} and \ref{equival2} are equivalent.
\end{teorema} {\em Proof}.- First of all, when the family of orthonormal Laurent polynomials is considered, (\ref{eq3}) becomes \begin{equation}\label{eq4} \begin{array}{c} \eta_n \cdot \tilde{\chi}_{n}(z) = \left( \overline{A_n} + A_{n-1} z^{(-1)^{n+1}} \right) \tilde{\chi}_{n-1}(z) + \eta_{n-1} z^{(-1)^{n+1}} \tilde{\chi}_{n-2}(z) \;,\;\;n \geq 2 \\ \\ \tilde{\chi}_{i}(z) = \frac{\tilde{\phi}_{i}(z)}{\parallel \tilde{\phi}_{i}(z) \parallel}_{\mu} \;\;\;\;i=0,1. \end{array} \end{equation} Since the recurrence (\ref{eq4}) is equivalent to (\ref{le}) and the recurrences given in Proposition \ref{equival2} were obtained from it, it remains to prove that (\ref{five2}) implies (\ref{eq4}). From Proposition \ref{conex2}, and since the generating sequence $p(n)=E\left[ \frac{n}{2} \right]$ is considered, it follows that (\ref{five2}) is equivalent to the relations \begin{equation}\label{qq1} \begin{array}{rcl} z^2 \varphi_{2n-1}(z) &= &\eta_{2n}\eta_{2n+1}\varphi_{2n+1}(z)-\eta_{2n}\delta_{2n+1}\varphi_{2n}^*(z) \\ &&-\overline{\delta_{2n-1}}\delta_{2n}z\varphi_{2n-1}(z)-\eta_{2n-1}\delta_{2n}z\varphi_{2n-2}^*(z) \end{array} \end{equation} \begin{equation}\label{qq2} \begin{array}{rcl} z\varphi_{2n}^*(z) &= &\overline{\delta_{2n}}\eta_{2n+1}\varphi_{2n+1}(z)-\overline{\delta_{2n}}\delta_{2n+1}\varphi_{2n}^*(z) \\ &&+\overline{\delta_{2n-1}}\eta_{2n}z\varphi_{2n-1}(z)+\eta_{2n-1}\eta_{2n}z\varphi_{2n-2}^*(z). \end{array} \end{equation} Again, from Proposition \ref{conex2}, we have to prove from (\ref{qq1}) and (\ref{qq2}) that \begin{equation}\label{qq3} \eta_{2n}\varphi_{2n}^*(z) = \left( \overline{\delta_{2n}}z + \overline{\delta_{2n-1}} \right) \varphi_{2n-1}(z) + \eta_{2n-1}\varphi_{2n-2}^*(z) \end{equation} and \begin{equation}\label{qq4} \eta_{2n+1}\varphi_{2n+1}(z)= \left( \delta_{2n+1} + \delta_{2n}z \right) \varphi_{2n}^*(z) + \eta_{2n}z^2 \varphi_{2n-1}(z) \end{equation} hold.
On the one hand, since $\eta_{2n} \neq 0$, it follows from (\ref{qq1}) that $$\begin{array}{rcl} \eta_{2n}z^2\varphi_{2n-1}(z) &= &\left(1 - |\delta_{2n}|^2 \right) \eta_{2n+1} \varphi_{2n+1}(z) - \left(1 - |\delta_{2n}|^2 \right) \delta_{2n+1}\varphi_{2n}^*(z) \\ \\ &&-\eta_{2n}\delta_{2n}\overline{\delta_{2n-1}}z\varphi_{2n-1}(z)-\eta_{2n}\eta_{2n-1}\delta_{2n}z\varphi_{2n-2}^*(z) \\ \\ &= &\eta_{2n+1}\varphi_{2n+1}(z) - \delta_{2n+1}\varphi_{2n}^*(z) - \delta_{2n} [ \overline{\delta_{2n}}\eta_{2n+1}\varphi_{2n+1}(z) \\ \\ &&- \overline{\delta_{2n}}\delta_{2n+1}\varphi_{2n}^*(z) + \eta_{2n}\overline{\delta_{2n-1}}z\varphi_{2n-1}(z) + \eta_{2n}\eta_{2n-1}z\varphi_{2n-2}^*(z) ] \end{array}$$ and now from (\ref{qq2}) the relation (\ref{qq4}) is deduced. On the other hand, from (\ref{qq2}) and since $\eta_{2n} \neq 0$, it follows that $$\begin{array}{rcl} \eta_{2n}z\varphi_{2n}^*(z) &= &\overline{\delta_{2n}}\eta_{2n}\eta_{2n+1}\varphi_{2n+1}(z)-\overline{\delta_{2n}}\delta_{2n+1}\eta_{2n}\varphi_{2n}^*(z) + \overline{\delta_{2n-1}}\eta_{2n}^2z\varphi_{2n-1}(z) \\ \\ &&+\eta_{2n}^2\eta_{2n-1}z\varphi_{2n-2}^*(z) \\ \\ &=& \overline{\delta_{2n}}\eta_{2n} \left[ \eta_{2n+1}\varphi_{2n+1}(z)-\delta_{2n+1}\varphi_{2n}^*(z) \right] + z \times \\ \\ &&\left[ \overline{\delta_{2n-1}}\varphi_{2n-1}(z) - \overline{\delta_{2n-1}}|\delta_{2n}|^2\varphi_{2n-1}(z) + \eta_{2n-1}\varphi_{2n-2}^*(z)-\eta_{2n-1}|\delta_{2n}|^2\varphi_{2n-2}^*(z) \right]. \end{array}$$ Now, from (\ref{qq4}), $$\begin{array}{rcl} \eta_{2n}\varphi_{2n}^*(z) &= &\overline{\delta_{2n}}\eta_{2n} \left[ \eta_{2n}z \varphi_{2n-1}(z) + \delta_{2n}\varphi_{2n}^*(z) \right] + \overline{\delta_{2n-1}}\varphi_{2n-1}(z) - \overline{\delta_{2n-1}}|\delta_{2n}|^2\varphi_{2n-1}(z)\\ \\ &&+ \eta_{2n-1}\varphi_{2n-2}^*(z)-\eta_{2n-1}|\delta_{2n}|^2\varphi_{2n-2}^*(z) \\ \end{array}$$ which is equivalent to $$\eta_{2n}^2\left[ \eta_{2n} \varphi_{2n}^*(z)-\overline{\delta_{2n}}z\varphi_{2n-1}(z)-\overline{\delta_{2n-1}}\varphi_{2n-1}(z)-\eta_{2n-1}\varphi_{2n-2}^*(z) \right] = 0.$$ So, (\ref{qq3}) is proved, since $\eta_{2n} \neq 0$. \begin{flushright} $\Box$ \end{flushright} \begin{nota} Observe that the three-term recurrences given in Proposition \ref{equival1} involve multiplication by $z$ and $z^{-1}$ together, whereas the recurrences given in Proposition \ref{equival2} involve only multiplication by $z$. These last relations will play a fundamental role in the next section. \end{nota} Up to this point we have deduced recurrences for the families of orthogonal Laurent polynomials when the generating sequences associated with the balanced orderings are considered. In the rest of this section we will consider an arbitrary generating sequence, starting with a three-term recurrence relation for $\{ \phi_n(z) \}_{n=0}^{\infty}$ involving multiplication by $z$ and $z^{-1}$. Here, we define $p(-1)=0$, and so $s(0)=0$.
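Before turning to the general case, the equivalence stated in Theorem \ref{equivalole} can be checked numerically. The following Python sketch (the variable names and the sample Schur parameters are our own illustrative choices, not taken from the text) runs the Szeg\H{o} recurrence (\ref{le}) and the three-term recurrence (\ref{eq1}) side by side and compares them through Proposition \ref{conex2}:

```python
import numpy as np

# Sample points on the unit circle and Schur parameters (delta_0 = 1, |delta_n| < 1).
rng = np.random.default_rng(0)
z = np.exp(2j * np.pi * rng.random(16))
deltas = [1.0, 0.3 + 0.2j, -0.4j, 0.25, 0.1 - 0.3j, -0.2, 0.35j]

# Szego recurrence (le): rho_n = z rho_{n-1} + delta_n rho_{n-1}^*,
#                        rho_n^* = conj(delta_n) z rho_{n-1} + rho_{n-1}^*.
rho, rho_s = [np.ones_like(z)], [np.ones_like(z)]
for d in deltas[1:]:
    r, rs = rho[-1], rho_s[-1]
    rho.append(z * r + d * rs)
    rho_s.append(np.conj(d) * z * r + rs)

# Three-term recurrence (eq1) for the generating sequence p(n) = E[(n+1)/2]:
# phi_n = (A_n + conj(A_{n-1}) z^{(-1)^n}) phi_{n-1} + eta_{n-1}^2 z^{(-1)^n} phi_{n-2},
# with A_n = delta_n for even n and conj(delta_n) for odd n.
A = lambda n: deltas[n] if n % 2 == 0 else np.conj(deltas[n])
phi = [np.ones_like(z), np.conj(deltas[1]) + 1 / z]
for n in range(2, len(deltas)):
    w = z ** ((-1) ** n)
    eta2 = 1 - abs(deltas[n - 1]) ** 2
    phi.append((A(n) + np.conj(A(n - 1)) * w) * phi[-1] + eta2 * w * phi[-2])

# Proposition conex2: phi_n = rho_n / z^p(n) if s(n)=0, rho_n^* / z^p(n) if s(n)=1.
for n in range(len(deltas)):
    p, s = (n + 1) // 2, (n + 1) // 2 - n // 2
    assert np.allclose(phi[n], (rho[n] if s == 0 else rho_s[n]) / z ** p)
```

Replacing the balanced sequence by $p(n)=E\left[\frac{n}{2}\right]$ and (\ref{eq1}) by (\ref{eq3}) gives the analogous check for $\tilde{\phi}_n$.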
\begin{teorema}\label{ttgeneral} The family of monic orthogonal Laurent polynomials $\{ \phi_n(z) \}_{n=0}^{\infty}$ with respect to the measure $\mu$ and the generating sequence $\{ p(n) \}_{n=0}^{\infty}$ satisfies for $n \geq 2$ the three-term recurrence relation \begin{equation}\label{tt} \begin{array}{rcl} \phi_n(z) &= &\left( A_n B_n + C_n z^{1-2s(n)} \right) \phi_{n-1}(z) + \\ &&(-1)^{1 + s(n-2) - s(n-1) } D_n E_n \eta_{n-1}^2 z^{1- s(n)- s(n-2) } \phi_{n-2}(z) \end{array} \end{equation} with initial conditions \begin{equation}\label{ttinit} \phi_0(z) \equiv 1 \;\;,\;\;\;\phi_1(z) = K_1 + z^{1-2s(1)} \end{equation} where $K_1=\left\{ \begin{array}{ccc} \delta_1 &if &s(1)=0 \\ \overline{\delta_1} &if &s(1)=1 \end{array} \right.$ and for $n \geq 2$, \begin{equation}\label{abn} A_n= \left\{ \begin{array}{lcl} 1 &if &s(n) \neq s(n-1) \\ \delta_{n-1}^{2s(n)-1} &if &s(n) = s(n-1) \;\;,\;s(n-2)=0 \\ \left( \overline{\delta_{n-1}} \right)^{1-2s(n)} &if &s(n) = s(n-1) \;\;,\;s(n-2)=1 \end{array} \right. , \end{equation} \begin{equation}\label{bn} B_n= \left\{ \begin{array}{lcl} \delta_n &if &s(n)=0 \\ \overline{\delta_n} &if &s(n)=1 \end{array} \right. , \end{equation} \begin{equation}\label{cn} C_n= \left\{ \begin{array}{lcl} 1 &if &s(n)=s(n-1) \\ \delta_{n-1}^{s(n-1)-s(n)} &if &s(n) \neq s(n-1) \;\;,\;s(n-2)=0 \\ \left( \overline{\delta_{n-1}} \right)^{s(n)-s(n-1)} &if &s(n) \neq s(n-1) \;\;,\;s(n-2)=1 \\ \end{array} \right. , \end{equation} \begin{equation}\label{dn} D_n= \left\{ \begin{array}{lcl} \delta_n &if &s(n-1) = s(n)=0 \\ \overline{\delta_{n}} &if &s(n-1) = s(n) =1 \\ 1 &if &s(n-1) \neq s(n) \\ \end{array} \right. , \end{equation} \begin{equation}\label{den} E_n= \left\{ \begin{array}{lcl} 1/\delta_{n-1} &if &s(n-2) = s(n-1)=0 \\ 1/\overline{\delta_{n-1}} &if &s(n-2) = s(n-1)=1 \\ 1 &if &s(n-2) \neq s(n-1) \\ \end{array} \right. . \end{equation} In the cases $s(n-2)=s(n-1)$ the three-term recurrence relation holds if and only if $\delta_{n-1} \neq 0$. 
\end{teorema} {\em Proof}.- The initial conditions (\ref{ttinit}) follow from Proposition \ref{conex2} and (\ref{le}). Consider, for $n \geq 2$, the eight cases $\left( s(n),s(n-1),s(n-2) \right) = (i,j,k)$ with $i,j,k \in \{ 0,1 \}$. As we said, the proofs in the balanced situations $(0,1,0)$ and $(1,0,1)$ were given in \cite{Cr}, making use of Proposition \ref{conex2} and the Szeg\H{o} recurrence (\ref{le}). Proceeding in the same way in the remaining cases, we obtain: \begin{enumerate} \item Case $(0,0,0)$: \begin{equation}\label{000} \phi_n(z)=\left( \frac{\delta_n}{\delta_{n-1}} + z \right) \phi_{n-1}(z) - \frac{\delta_n}{\delta_{n-1}}\eta_{n-1}^2z \phi_{n-2}(z). \end{equation} \item Case $(0,0,1)$: \begin{equation}\label{001} \phi_n(z)=\left( \delta_n\overline{\delta_{n-1}} + z \right) \phi_{n-1}(z) + \delta_n\eta_{n-1}^2 \phi_{n-2}(z). \end{equation} \item Case $(0,1,1)$: \begin{equation}\label{011} \phi_n(z)= \left( \delta_n + \frac{z}{\overline{\delta_{n-1}}} \right) \phi_{n-1}(z) - \frac{1}{\overline{\delta_{n-1}}}\eta_{n-1}^2 \phi_{n-2}(z). \end{equation} \item Case $(1,0,0)$: \begin{equation}\label{100} \phi_n(z)=\left( \overline{\delta_n} + \frac{1}{\delta_{n-1}z} \right) \phi_{n-1}(z) - \frac{1}{\delta_{n-1}}\eta_{n-1}^2 \phi_{n-2}(z). \end{equation} \item Case $(1,1,0)$: \begin{equation}\label{110} \phi_n(z)=\left( \overline{\delta_n}\delta_{n-1} + \frac{1}{z} \right) \phi_{n-1}(z) + \overline{\delta_n}\eta_{n-1}^2 \phi_{n-2}(z). \end{equation} \item Case $(1,1,1)$: \begin{equation}\label{111} \phi_n(z)=\left( \frac{\overline{\delta_n}}{\overline{\delta_{n-1}}} + \frac{1}{z} \right) \phi_{n-1}(z) - \frac{\overline{\delta_n}}{\overline{\delta_{n-1}}z}\eta_{n-1}^2 \phi_{n-2}(z). \end{equation} \end{enumerate} Finally, all these three-term recurrences can be expressed as in (\ref{tt}), along with (\ref{abn})-(\ref{den}), in a unified pattern.
\begin{flushright} $\Box$ \end{flushright} \begin{nota} When the family of orthonormal Laurent polynomials $\{\chi_n(z) \}_{n=0}^{\infty}$ is considered, the relation (\ref{tt}) becomes \begin{equation}\label{ttnormapn} \begin{array}{rcl} \eta_n \chi_n(z) &= &\left( A_n B_n + C_n z^{1-2s(n)} \right) \chi_{n-1}(z) + \\ &&(-1)^{1 + s(n-2) - s(n-1) } D_n E_n \eta_{n-1} z^{1- s(n)- s(n-2) } \chi_{n-2}(z) \end{array} \end{equation} with initial conditions $\chi_{i}(z) = \frac{\phi_{i}(z)}{\parallel \phi_{i}(z) \parallel_{\mu}}$ for $i=0,1.$ \end{nota} In order to complete the equivalences between the recurrence relations, it remains to show that the recurrence (\ref{tt})-(\ref{den}) implies the Szeg\H{o} recurrence (\ref{le}). The proof in the balanced situations is given in Theorem \ref{equivalole}, and the proof when a general generating sequence is considered can be done with similar arguments, starting from the corresponding recurrence (\ref{000})-(\ref{111}). In the next result we consider a general generating sequence and prove a similar result without using orthogonality conditions, so that Proposition \ref{conex2} cannot be used. \begin{teorema}\label{sinorto} Let $\{ p(n) \}_{n \geq 0}$ be a generating sequence and define $p(-1)=0$. Consider $\{ s(n) \}_{n \geq 0}$ defined by $s(n)=p(n)-p(n-1)$ and an arbitrary given sequence of complex numbers $\{ \delta_n \}_{n \geq 0}$ with $\delta_0=1$ and $|\delta_n| \neq 1$ for all $n \geq 1$. Suppose that $\delta_{n-1} \neq 0$ if $s(n-2)=s(n-1)$. Let $\{ \phi_{n}(z) \}_{n \geq 0}$ be the sequence of Laurent polynomials defined by the recurrence relation (\ref{tt}) along with (\ref{abn})-(\ref{den}) and with initial conditions (\ref{ttinit}). Then, \begin{enumerate} \item For all $n \geq 0$, $\phi_n(z)$ is a monic Laurent polynomial with $\phi_n(z) \in {\cal L}_n \backslash {\cal L}_{n-1}$.
\item Write $\phi_n(z)=\frac{N_n(z)}{z^{p(n)}}$ with $N_n(z) \in \Bbb P_n$ and set $F_0(z) \equiv N_0(z) \equiv 1$ and \begin{equation}\label{Fn} F_n(z)=\left\{ \begin{array}{ccl} N_n(z) &if &s(n)=0 \\ N_n^*(z) &if &s(n)=1 \end{array} \right. \;\;\;,\;\;n \geq 1. \end{equation} Then, $\{ F_n(z) \}_{n=1}^{\infty}$ satisfies the recurrence relation \begin{equation}\label{Nn} \begin{array}{c} F_{n}(z)=zF_{n-1}(z) +\delta_{n}F_{n-1}^*(z) \\ F_{n}^*(z)=\overline{\delta_n}zF_{n-1}(z) +F_{n-1}^*(z) \end{array} \;\;\;,\;\;n \geq 1 \;. \end{equation} \end{enumerate} \end{teorema} {\em Proof}.- $(1)$ Proceeding by induction, one proves for all $n \geq 1$ that $\phi_n(z) \in {\cal L}_n \backslash {\cal L}_{n-1}$, that the coefficient of the monomial $z^{q(n)}$ (respectively, $z^{-p(n)}$) is equal to $1$ when $s(n)=0$ (respectively, $s(n)=1$), that the coefficient of the monomial $z^{-p(n)}$ is equal to $\delta_n$ when $s(n)=0$, and that the coefficient of the monomial $z^{q(n)}$ is equal to $\overline{\delta_n}$ when $s(n)=1$. The claim holds for $n=1$ by the initial conditions (\ref{ttinit}), and the induction step follows from (\ref{tt}) by splitting into the eight cases $\left( s(n),s(n-1),s(n-2) \right) = (i,j,k)$, with $i,j,k \in \{0,1 \}$. $(2)$ From (\ref{ttinit}) it clearly follows that $N_0(z) \equiv 1$ and $$N_1(z)=\left\{ \begin{array}{crl} \delta_1 + z &if &s(1)=0 \\ \overline{\delta_{1}}z + 1 &if &s(1)=1 \end{array} \right. \;,$$ implying that $F_1(z)=zF_0(z) + \delta_1 F_0^*(z)$. Suppose, by the induction hypothesis, that \begin{equation}\label{Fnmenosuno} \begin{array}{c} F_{n-1}(z)=zF_{n-2}(z) +\delta_{n-1}F_{n-2}^*(z) \\ F_{n-1}^*(z)=\overline{\delta_{n-1}}zF_{n-2}(z) +F_{n-2}^*(z) \end{array} .
\end{equation} From (\ref{tt}) it follows for $n \geq 2$ that $$\begin{array}{ccl} \frac{N_n(z)}{z^{p(n)}} &= &\left( A_n B_n + C_n z^{1-2s(n)} \right) \frac{N_{n-1}(z)}{z^{p(n)-s(n)}} + \\ &&(-1)^{1+s(n-2)-s(n-1)}D_n E_n \eta_{n-1}^2 z^{1-s(n)-s(n-2)} \frac{N_{n-2}(z)}{z^{p(n)-\left[ s(n)+s(n-1) \right]}} \end{array}$$ implying the following recurrence relation for the family of ordinary polynomials $\{ N_n(z) \}_{n \geq 2}$: \begin{equation}\label{recuordi} \begin{array}{ccl} N_n(z) &= &\left( A_n B_nz^{s(n)} + C_n z^{1-s(n)} \right) N_{n-1}(z) + \\ &&(-1)^{1+s(n-2)-s(n-1)}D_n E_n \eta_{n-1}^2 z^{1+s(n-1)-s(n-2)} N_{n-2}(z). \end{array} \end{equation} We consider now the four cases $\left( s(n),s(n-1),s(n-2) \right) = (0,j,k)$, with $j,k \in \{0,1 \}$. Then, from (\ref{Fnmenosuno}) and relation (\ref{recuordi}) it follows: \begin{itemize} \item Case $(0,0,0)$: $$\begin{array}{ccl} N_n(z) &= &\left( \frac{\delta_n}{\delta_{n-1}} + z \right)N_{n-1}(z) - \frac{\delta_n}{\delta_{n-1}}\eta_{n-1}^2zN_{n-2}(z) \\ &= &zN_{n-1}(z) + \frac{\delta_{n}}{\delta_{n-1}} \left[ N_{n-1}(z) - \eta_{n-1}^2zN_{n-2}(z) \right] \\ &= &zN_{n-1}(z) + \frac{\delta_{n}}{\delta_{n-1}} \left[ zN_{n-2}(z) +\delta_{n-1}N_{n-2}^*(z) - \eta_{n-1}^2zN_{n-2}(z) \right] \\ &= &zN_{n-1}(z) + \frac{\delta_{n}}{\delta_{n-1}} \left[ |\delta_{n-1}|^2zN_{n-2}(z) + \delta_{n-1}N_{n-2}^*(z) \right] \\ &= &zN_{n-1}(z) + \delta_{n} \left[ \overline{\delta_{n-1}}zN_{n-2}(z) + N_{n-2}^*(z) \right] \\ &= &zN_{n-1}(z) + \delta_{n}N_{n-1}^*(z). 
\end{array}$$ \item Case $(0,0,1)$: $$\begin{array}{ccl} N_n(z) &= &\left( \overline{\delta_{n-1}}\delta_n + z \right)N_{n-1}(z) + \delta_n\eta_{n-1}^2N_{n-2}(z) \\ &= &zN_{n-1}(z) + \delta_{n} \left[ \overline{\delta_{n-1}}N_{n-1}(z) + \eta_{n-1}^2N_{n-2}(z) \right] \\ &= &zN_{n-1}(z) + \delta_{n} \left[ \overline{\delta_{n-1}}zN_{n-2}^*(z) + |\delta_{n-1}|^2N_{n-2}(z) + \eta_{n-1}^2N_{n-2}(z) \right] \\ &= &zN_{n-1}(z) + \delta_{n} \left[ \overline{\delta_{n-1}}zN_{n-2}^*(z) + N_{n-2}(z) \right] \\ &= &zN_{n-1}(z) + \delta_{n} N_{n-1}^*(z). \end{array}$$ \item Case $(0,1,0)$: $$\begin{array}{ccl} N_n(z) &= &\left( \delta_{n}+\delta_{n-1}z \right)N_{n-1}(z) + \eta_{n-1}^2z^2N_{n-2}(z) \\ &= &\delta_n N_{n-1}(z) + z\left[ \delta_{n-1}N_{n-1}(z) + \eta_{n-1}^2zN_{n-2}(z) \right] \\ &= &\delta_n N_{n-1}(z) + z\left[ |\delta_{n-1}|^2zN_{n-2}(z) + \delta_{n-1}N_{n-2}^*(z) + \eta_{n-1}^2zN_{n-2}(z) \right] \\ &= &\delta_n N_{n-1}(z) + z\left[ zN_{n-2}(z) + \delta_{n-1}N_{n-2}^*(z) \right] \\ &= &\delta_n N_{n-1}(z) + zN_{n-1}^*(z). \end{array}$$ \item Case $(0,1,1)$: $$\begin{array}{ccl} N_n(z) &= &\left( \delta_n + \frac{z}{\overline{\delta_{n-1}}} \right)N_{n-1}(z) - \frac{1}{\overline{\delta_{n-1}}}\eta_{n-1}^2zN_{n-2}(z) \\ &= &\delta_nN_{n-1}(z) + z \frac{1}{\overline{\delta_{n-1}}}\left[ N_{n-1}(z) - \eta_{n-1}^2N_{n-2}(z) \right] \\ &= &\delta_nN_{n-1}(z) + z \frac{1}{\overline{\delta_{n-1}}}\left[ \overline{\delta_{n-1}}zN_{n-2}^*(z) + N_{n-2}(z) - \eta_{n-1}^2 N_{n-2} \right] \\ &= &\delta_nN_{n-1}(z) + z \frac{1}{\overline{\delta_{n-1}}}\left[ \overline{\delta_{n-1}}zN_{n-2}^*(z) + |\delta_{n-1}|^2N_{n-2}(z) \right] \\ &= &\delta_nN_{n-1}(z) + z \left[ zN_{n-2}^*(z) + \delta_{n-1}N_{n-2}(z) \right] \\ &= &\delta_nN_{n-1}(z) + z N_{n-1}^*(z). \end{array}$$ \end{itemize} Finally, the proof in the four remaining cases follows from the above proofs and from Proposition \ref{conex1} just by taking super-star conjugation. 
\begin{flushright} $\Box$ \end{flushright} We conclude this section by considering a Favard-type theorem for the monic sequence of Laurent polynomials $\{ \phi_n(z) \}_{n=0}^{\infty}$ and a general generating sequence $\{ p(n) \}_{n=0}^{\infty}$. A proof in the ordinary polynomial situation (i.e. $p(n)=0$ for all $n$) is given in \cite{Jo}, and an alternative simpler approach in \cite{Er} (see also \cite{Ma}). A proof based on the techniques introduced in Chihara's book \cite{Ch}, in the balanced situations $p(n)=E \left[ \frac{n+1}{2} \right]$ and $p(n)=E \left[ \frac{n}{2} \right]$, is given in \cite{RP}. In any case, the proof presented here, based upon the above equivalences between recurrences, is much simpler, and remains valid when a general generating sequence is considered. For our purposes, we briefly recall the concept of orthogonality with respect to a Hermitian linear functional. Indeed, let $\{ \mu_n \}_{n=-\infty}^{\infty}$ be a complex sequence satisfying $\mu_n=\overline{\mu_{-n}}$ for all $n=0,1,2,\ldots$ and denote by $\mu$ the linear functional defined on $\Lambda$ by $$\mu \left( \sum_{j=p}^{q} \alpha_j z^j \right) := \sum_{j=p}^{q} \alpha_j \mu_{-j} \;\;\;,\;\;\;\alpha_j \in \Bbb C \;\;\;\;\;-\infty < p \leq q < +\infty.$$ In terms of $\mu$ we define a sesquilinear functional $<\cdot ,\cdot >_{\mu}$ on $\Lambda \times \Lambda$ by \begin{equation}\label{newproductointerior} <L,M>_{\mu} := \mu \left( L(z)\overline{M(1/\overline{z})} \right) \;\;\;,\;\;\;L,M \in \Lambda. \end{equation} Now (see e.g. \cite{Jo}), the functional $\mu$ is said to be quasi-definite if the leading principal submatrices of the infinite Toeplitz moment matrix associated with $\{ \mu_n \}_{-\infty}^{\infty}$ are nonsingular, and positive-definite if the determinants of all these matrices are positive.
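These definiteness criteria can be illustrated with a small numerical sketch. We adopt the common convention that the $(i,j)$ entry of the Toeplitz moment matrix is $\mu_{i-j}$; the two moment maps below are our own illustrative choices, not taken from the text:

```python
import numpy as np

def toeplitz_moment_matrix(mu, n):
    """(n+1)x(n+1) leading principal submatrix T_n = (mu_{i-j})_{i,j=0..n},
    for a Hermitian moment map mu with mu(-k) = conj(mu(k))."""
    return np.array([[mu(i - j) for j in range(n + 1)] for i in range(n + 1)])

# Moments of the positive weight w(theta) = 1 + cos(theta) on the unit circle:
# mu_k = delta_{k,0} + (1/2) delta_{|k|,1}, a positive-definite functional.
pos = lambda k: 1.0 if k == 0 else (0.5 if abs(k) == 1 else 0.0)

# A Hermitian moment sequence with mu_1 = 2: nonsingular (hence quasi-definite)
# for the orders tested below, but not positive-definite.
quasi = lambda k: 1.0 if k == 0 else (2.0 if abs(k) == 1 else 0.0)

pos_dets = [np.linalg.det(toeplitz_moment_matrix(pos, n)) for n in range(6)]
quasi_dets = [np.linalg.det(toeplitz_moment_matrix(quasi, n)) for n in range(5)]

assert all(d > 0 for d in pos_dets)             # all determinants positive
assert all(abs(d) > 1e-9 for d in quasi_dets)   # all submatrices nonsingular...
assert min(quasi_dets) < 0                      # ...but not positive-definite
```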
Quasi-definiteness is a necessary and sufficient condition for the existence of a family of orthogonal Laurent polynomials with respect to the linear functional (\ref{newproductointerior}), in the sense that there exists a sequence $\{ R_n(z) \}_{n=0}^{\infty}$ of Laurent polynomials satisfying $R_n(z) \in {\cal L}_n \backslash {\cal L}_{n-1}$, $n=1,2,\ldots$ and $<R_n(z), R_m(z)>_{\mu}=k_n \delta_{n,m} \;, k_n \neq 0$. On the other hand, if the linear functional $\mu$ is positive-definite, then the associated functional (\ref{newproductointerior}) is an inner product on $\Lambda \times \Lambda$ and $<R_n(z),R_m(z)>_{\mu}=k_n \delta_{n,m}$ holds with $k_n>0$. When $k_n=1$ for all $n=0,1,2,\ldots$, the sequence $\{ R_n(z) \}_{n=0}^{\infty}$ will be called ``orthonormal''. \begin{teorema}[Favard]\label{favard} Let $\{ p(n) \}_{n \geq 0}$ be a generating sequence and define $p(-1)=0$ and the sequence $\{ s(n) \}_{n \geq 0}$ by $s(n)=p(n)-p(n-1) \in \{ 0,1 \}$. Consider an arbitrary given sequence of complex numbers $\{ \delta_n \}_{n \geq 0}$ with $\delta_0=1$, $|\delta_n| \neq 1$ for all $n \geq 1$, and define $\{ \lambda_n \}_{n \geq 1}$ by $\lambda_n=1-|\delta_n|^2$. Suppose also that $\delta_{n-1} \neq 0$ whenever $s(n-2)=s(n-1)$. Let $\{ \phi_{n}(z) \}_{n \geq 0}$ be the sequence of Laurent polynomials defined for $n \geq 2$ by the recurrence relation \begin{equation}\label{ttlambda} \begin{array}{rcl} \phi_n(z) &= &\left( A_n B_n + C_n z^{1-2s(n)} \right) \phi_{n-1}(z) \\ &&+ (-1)^{1 + s(n-2) - s(n-1) } D_n E_n \lambda_{n-1} z^{1- s(n)- s(n-2) } \phi_{n-2}(z) \end{array} \end{equation} with initial conditions (\ref{ttinit}), where $A_n$, $B_n$, $C_n$, $D_n$ and $E_n$ are complex constants given by (\ref{abn})-(\ref{den}).
Then, for a fixed $\mu_0 \in \Bbb R \backslash \{0\}$ there exists a unique quasi-definite linear functional $\mu$ such that $\mu(1)=\mu_0$ and $\{ \phi_{n}(z) \}_{n=0}^{\infty}$ is the sequence of monic orthogonal Laurent polynomials with respect to $\mu$ and the ordering induced in $\Lambda$ by the generating sequence $\{ p(n) \}_{n \geq 0}$. Furthermore, if we take $\mu_0 >0$, then $\mu$ is positive-definite if and only if $|\delta_{n}|<1$ for all $n=1,2,\dots$. \end{teorema} {\em Proof}.- From Theorem \ref{sinorto} one sees that, for $n \geq 1$, $\phi_n(z) \in {\cal L}_n \backslash {\cal L}_{n-1}$, so that one can write $\phi_n(z)=\frac{N_n(z)}{z^{p(n)}}$ with $N_n(z) \in \Bbb P_n$. If we now define the sequence of ordinary polynomials $\{ \rho_n(z) \}_{n \geq 0}$ by $$\rho_n(z)= \left\{ \begin{array}{ccc} N_n(z) &if &s(n)=0 \\ N_n^*(z) &if &s(n)=1 \end{array} \right. \;,$$ then the relations (\ref{Fn}) and (\ref{Nn}) become (\ref{le}). Thus, making use of Favard's theorem in the ordinary polynomial situation (see e.g. \cite{Er} or \cite{Jo}), we see that the family $\{ \rho_n(z) \}_{n \geq 0}$ is the family of monic orthogonal polynomials (Szeg\H{o} polynomials) with respect to a unique quasi-definite linear functional $\mu$ with $\mu(1)=\mu_0$, which is positive-definite if and only if $\mu_0 >0$ and $|\delta_{n}|<1$ for all $n=1,2,\dots$. Finally, the proof follows from Proposition \ref{conex2}, which clearly still holds when the measure $\mu$ is replaced by a quasi-definite (positive-definite) linear functional.
\begin{flushright} $\Box$ \end{flushright} \section{The multiplication operator in $\Lambda$} \setcounter{equation}{0} Throughout this section, a fundamental role will be played by the multiplication operator defined on $\Lambda$, namely $$\begin{array}{rccc} M: &\Lambda &\longrightarrow &\Lambda \\ &L(z) &\rightarrow &zL(z) \end{array}.$$ As we have seen, if we consider the sequence of orthonormal Laurent polynomials with respect to the measure $\mu$ and the generating sequence $p(n)=0$ for all $n \geq 0$, then the $n$-th orthonormal Laurent polynomial coincides with the $n$-th orthonormal Szeg\H{o} polynomial, for all $n \geq 0$. Since the operator $M$ leaves $\Bbb P$ invariant, taking $\{ \varphi_n(z) \}_{n=0}^{\infty}$ as a basis of $\Bbb P$, the following matrix representation of the restriction of $M$ to $\Bbb P$ is obtained (see e.g. \cite{Go}, \cite{GR} or \cite{BS}): \begin{equation}\label{hess} {\cal H}(\delta)= \left( \begin{array}{cccccc} h_{0,0} &h_{0,1} &0 &0 &0 &\cdots \\ h_{1,0} &h_{1,1} &h_{1,2} &0 &0 &\cdots \\ h_{2,0} &h_{2,1} &h_{2,2} &h_{2,3} &0 &\cdots \\ \vdots &\vdots &\vdots &\vdots &\vdots &\ddots \\ \end{array} \right) . \end{equation} The elements of this matrix with Hessenberg structure are given by \begin{equation}\label{elementhess} h_{i,j}= \left\{ \begin{array}{ll} -\overline{\delta_j} \delta_{i+1} \prod_{k=j+1}^{i} \eta_k &if \;\;j=0,1,\ldots,i-1 \;, \\ -\overline{\delta_i} \delta_{i+1} &if \;\;j=i \;, \\ \eta_{i+1} &if \;\;j=i+1 \;. \end{array} \right.
\end{equation} If we now consider the generating sequence $p(n)=E \left[ \frac{n}{2} \right]$, the recurrence given in Proposition \ref{equival2} yields the following five-diagonal matrix (CMV representation, see \cite{BS}) for the multiplication operator $M$ (see \cite{Ca}), which can also be expressed as a product of two tridiagonal ones: \begin{equation}\label{cmvmatrix} \begin{array}{ccl} {\cal C}(\delta) &= &\left( \begin{array}{cccccccc} -\delta_1 &\eta_1 &0 &0 &0 &0 &0 &\cdots \\ -\eta_1 \delta_2 &-\overline{\delta_1}\delta_2 &-\eta_2 \delta_3 &\eta_2 \eta_3 &0 &0 &0 &\cdots \\ \eta_1 \eta_2 & \overline{\delta_1} \eta_2 &-\overline{\delta_2}\delta_3 &\overline{\delta_2}\eta_3 &0 &0 &0 &\cdots \\ 0 &0 &-\eta_3 \delta_4 &-\overline{\delta_3}\delta_4 &-\eta_4 \delta_5 &\eta_4 \eta_5 &0 &\cdots \\ 0 &0 &\eta_3 \eta_4 &\overline{\delta_3}\eta_4 &-\overline{\delta_4} \delta_5 &\overline{\delta_4} \eta_5 &0 &\cdots \\ \vdots &\vdots &\vdots &\vdots &\vdots &\vdots &\vdots &\ddots \end{array} \right) \\ \\ &= &\left( \begin{array}{ccccccc} 1 &0 &0 &0 &0 &0 &\cdots \\ 0 &-\delta_2 &\eta_2 &0 &0 &0 &\cdots \\ 0 &\eta_2 &\overline{\delta_2} &0 &0 &0 &\cdots \\ 0 &0 &0 &-\delta_4 &\eta_4 &0 &\cdots \\ 0 &0 &0 &\eta_4 &\overline{\delta_4} &0 &\cdots \\ \vdots &\vdots &\vdots &\vdots &\vdots &\vdots &\ddots \end{array} \right) \left( \begin{array}{ccccccc} -\delta_1 &\eta_1 &0 &0 &0 &0 &\cdots \\ \eta_1 &\overline{\delta_1} &0 &0 &0 &0 &\cdots \\ 0 &0 &-\delta_3 &\eta_3 &0 &0 &\cdots \\ 0 &0 &\eta_3 &\overline{\delta_3} &0 &0 &\cdots \\ 0 &0 &0 &0 &-\delta_5 &\eta_5 &\cdots \\ \vdots &\vdots &\vdots &\vdots &\vdots &\vdots &\ddots \end{array} \right) . \end{array} \end{equation} Moreover, it is easy to check that the matrix representation when the generating sequence $p(n)=E \left[ \frac{n+1}{2} \right]$ is considered is ${\cal C}(\delta)^{T}$.
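The factorization above is easy to experiment with. The following Python sketch (the helper names are ours) builds a finite truncation of ${\cal C}(\delta)$ as the product of the two displayed factors and checks its five-diagonal structure against the entries of the display; since each $2 \times 2$ block is unitary for $|\delta_j|<1$, the truncation built this way is unitary as well:

```python
import numpy as np

def theta(d):
    """2x2 block Theta(delta) = [[-delta, eta], [eta, conj(delta)]], eta = sqrt(1-|delta|^2)."""
    e = np.sqrt(1.0 - abs(d) ** 2)
    return np.array([[-d, e], [e, np.conj(d)]])

def cmv(deltas):
    """Truncation of C(delta) as the product of the two factors in (cmvmatrix):
    diag(1, Theta(d2), Theta(d4), ...) times diag(Theta(d1), Theta(d3), ...)."""
    n = len(deltas) + 1
    even, odd = np.eye(n, dtype=complex), np.eye(n, dtype=complex)
    for j, d in enumerate(deltas, start=1):      # Theta(d_j) acts on rows/cols j-1, j
        (odd if j % 2 else even)[j - 1:j + 1, j - 1:j + 1] = theta(d)
    return even @ odd

d = [0.5, -0.3, 0.2, 0.4, -0.1, 0.3]
eta = [np.sqrt(1 - x ** 2) for x in d]
C = cmv(d)

# Leading entries match the five-diagonal display of C(delta):
assert np.isclose(C[0, 0], -d[0]) and np.isclose(C[0, 1], eta[0])
assert np.isclose(C[1, 0], -eta[0] * d[1]) and np.isclose(C[2, 0], eta[0] * eta[1])
assert np.isclose(C[1, 2], -eta[1] * d[2]) and np.isclose(C[1, 3], eta[1] * eta[2])
# The truncation is five-diagonal and unitary:
assert all(abs(C[i, j]) < 1e-12 for i in range(7) for j in range(7) if abs(i - j) > 2)
assert np.allclose(C.conj().T @ C, np.eye(7))
```

Transposing the two factors (equivalently, taking ${\cal C}(\delta)^{T}$) gives the representation for the other balanced ordering.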
The aim of this section is to analyze the structure of the matrix representation of the multiplication operator $M$ when an arbitrary generating sequence is considered. We start with the following result, which is a generalization of Proposition 2.4 in \cite{Ca} (there, the generating sequences associated with the balanced orderings are fixed beforehand): \begin{teorema}\label{tk} Let $\{ \chi_n(z) \}_{n=0}^{\infty}$ be the sequence of orthonormal Laurent polynomials for the measure $\mu$ and the generating sequence $\{ p(n) \}_{n=0}^{\infty}$, and suppose that $\lim_{n \rightarrow \infty} q(n)=\infty$. Then, for each $n \geq 0$ there exist $k=k(n) \geq 1$ and $t=t(n) \geq 1$ such that $z \chi_n(z) \in span \{ \chi_{n-t}(z), \cdots , \chi_{n+k}(z) \}$, i.e. $$z \chi_n(z) = \sum_{s=n-t}^{n+k} a_{n,s} \chi_s(z) \;\;\;\;,\;\;\;a_{n,s}=\langle z\chi_n(z),\chi_s(z) \rangle_{\mu}.$$ Moreover, $k=k(n)$ and $t=t(n)$ are defined as follows: \begin{enumerate} \item $k=1$ if $s(n+1)=0$ and otherwise $k \geq 2$ is defined by the conditions $s(n+1)=\cdots=s(n+k-1)=1, s(n+k)=0$. \item $t=1$ if $s(n-1)=1$ and otherwise $t \geq 2$ is defined by the conditions $s(n-1)=\cdots=s(n+1-t)=0, s(n-t)=1$. \end{enumerate} \end{teorema} {\em Proof}.- On the one hand, since $\chi_n(z) \in {\cal L}_n$, $$z\chi_n(z) \in z{\cal L}_n = span \{ \frac{1}{z^{p(n)-1}} , \cdots , z^{q(n)+1} \} \subset {\cal L}_{n+k}$$ with $k=k(n) \geq 1$. Observe that the existence of $k$ is guaranteed by the condition $\lim_{n \rightarrow \infty} q(n)=\infty$. On the other hand, since $\chi_n(z) \perp {\cal L}_{n-1}$, $$z\chi_n(z) \perp z{\cal L}_{n-1} = span \{ \frac{1}{z^{p(n-1)-1}} , \cdots , z^{q(n-1)+1} \} \supset {\cal L}_{n-1-t}$$ with $t=t(n) \geq 1$. Since $z\chi_n(z) \in {\cal L}_{n+k}$ and $z\chi_n(z) \perp {\cal L}_{n-1-t}$, the proof follows.
\begin{flushright} $\Box$ \end{flushright} From Theorem \ref{tk} we now analyze which generating sequences give rise to a matrix representation of the operator $M$ with a minimal number of diagonals. As we have seen, the balanced orderings give rise to five-diagonal matrices, and we must ask whether there exist other generating sequences which give rise to a matrix representation with at most five diagonals. In this respect, we start by remarking that a five-diagonal representation is obtained precisely when one of the following conditions holds: \begin{equation}\label{5conditions} \begin{array}{cl} 1. &k(n)=1,\;t(n) \leq 3 \;\;for \;all \;n \\ 2. &k(n) \leq 2,\;t(n) \leq 2 \;\;for \;all \;n \\ 3. &k(n) \leq 3,\;t(n) = 1 \;\;for \;all \;n. \\ \end{array} \end{equation} Hence, we have the following considerations: \begin{enumerate} \item \begin{lema}\label{minilema} A five-diagonal representation is not obtained if $\{s(n) \}_{n \geq 1}$ contains three or more consecutive zeros or ones. \end{lema} {\em Proof}.- From Theorem \ref{tk} it follows that a block $(s(n),s(n+1),s(n+2),s(n+3),s(n+4))=(1,0,0,0,1)$ implies $(k(n),k(n+1),k(n+2))=(1,1,1)$ with $k(n+3) \geq 2$ and $(t(n+1),t(n+2),t(n+3),t(n+4))=(1,2,3,4)$, whereas a block $(s(n),s(n+1),s(n+2),s(n+3),s(n+4))=(0,1,1,1,0)$ implies $(k(n),k(n+1),k(n+2),k(n+3))=(4,3,2,1)$ and $t(n+1) \geq 2$ with $(t(n+2),t(n+3),t(n+4))=(1,1,1)$. The proof follows, since condition (\ref{5conditions}) fails in both situations. From this, the proof when $\{s(n) \}_{n \geq 1}$ contains more than three consecutive zeros or ones is trivial. \begin{flushright} $\Box$ \end{flushright} As a consequence of Lemma \ref{minilema}, for a five-diagonal representation the number of consecutive zeros or ones in the sequence $\{s(n) \}_{n \geq 1}$ must be at most 2.
\item If the number of consecutive zeros or ones is exactly one for all $n$, then this corresponds to the generating sequences $$p(n)=E\left[ \frac{n}{2} \right] \;\;\;,\;\;\;p(n)=E\left[ \frac{n+1}{2} \right],$$ and the five-diagonal matrix representations ${\cal C}(\delta)$ and ${\cal C}(\delta)^{T}$ are obtained, respectively. \end{enumerate} Now let us concentrate on what happens if two consecutive zeros or ones appear in the sequence $\{ s(n) \}_{n \geq 1}$. Indeed, a block of the form $$\left( s(n),s(n+1),s(n+2),s(n+3) \right) = (1,0,0,1)$$ implies $t(n+2)=2$, $k(n+2) \geq 2$, $t(n+3)=3$, $k(n+3) \geq 1$, and hence a five-diagonal representation is not obtained since condition (\ref{5conditions}) does not hold. Suppose now a block of the form $$\left( s(n),s(n+1),s(n+2),s(n+3) \right) = (0,1,1,0).$$ If $s(n-1)=0$ then $t(n+1)=3$ and $k(n+1)=2$, whereas if $s(n-1)=1$ then $t(n)=1$, $k(n)=3$ and $t(n+1)=k(n+1)=2$, implying in both cases that condition (\ref{5conditions}) is not satisfied. Observe that this argument is valid for all $n \geq 2$, but the conclusion actually holds for all $n \geq 0$. Indeed, if we consider the generating sequences $p(n)=E \left[ \frac{n+1}{2} \right]$ for all $n \geq 2$ with $p(0)=p(1)=0$, or $p(n)=E \left[ \frac{n}{2} \right]$ for all $n \geq 2$ with $p(0)=0$ and $p(1)=1$, then it is easy to check that the resulting matrix representation is not five-diagonal. Summarizing, we can state: \begin{teorema}\label{optimal} The matrix representation for the multiplication operator $M$ is a five-diagonal matrix if and only if $p(n)=E\left[ \frac{n}{2} \right]$ or $p(n)=E\left[ \frac{n+1}{2} \right]$. Moreover, this representation is the narrowest one in the sense that any other generating sequence gives rise to an $l$-diagonal matrix representation with $l \geq 6$. \begin{flushright} $\Box$ \end{flushright} \end{teorema} \begin{nota} The proof of Theorem \ref{optimal} was obtained using orthogonality conditions.
This result has also been proved recently in \cite{CMV3} by using operator theory techniques. \end{nota} \begin{ej} Suppose that we expand $\Lambda$ in the ordered basis $$\{1,z,z^{-1},z^2,z^{-2},z^{-3},z^3,z^4,z^{-4},z^{-5},z^5,z^6,z^{-6},z^{-7},\ldots \}.$$ Here $\{ s(n) \}_{n \geq 1} = \{ 0,1,0,1,1,0,0,1,1,0,0,1,1,\ldots \}$ and $\lim_{n \rightarrow \infty} q(n)=\infty$. Observe that $z\chi_0(z) \in span\{ \chi_0(z),\chi_1(z) \}$ and $z\chi_1(z) \in span\{ \chi_0(z),\chi_1(z), \chi_2(z), \chi_3(z) \}$. From the sequence $\{ s(n) \}_{n \geq 1}$ and Theorem \ref{tk} we construct the sequences $\{ t(n) \}_{n \geq 2} = \{ 2,1,2,1,1,2,3,1,1,\ldots \}$ and $\{ k(n) \}_{n \geq 2} = \{1,3,2,1,1,3,2,1,1,\ldots \}$. These sequences indicate the number of nonzero elements in each row of the matrix. Now, the coefficients of the matrix can be obtained in terms of the orthonormal Szeg\H{o} polynomials by using the formula given in Theorem \ref{tk} and Proposition \ref{conex2}, resulting in expressions of the form $\langle z^l f(z),g(z) \rangle_{\mu}$ where $l \geq 0$ and $f(z),g(z) \in \{ \varphi_n(z) \}_{n \geq 0} \cup \{ \varphi_n^*(z) \}_{n \geq 0}$. Some of them are explicitly calculated in terms of the sequence of Schur parameters (see \cite[Ch. 1]{BS}). The remaining quantities can be obtained from these formulas and from the relations $$\begin{array}{rcl} \langle z^t \varphi_m^*(z),\varphi_n(z) \rangle_{\mu} &= &\langle z^{t+m-n} \varphi_n^*(z),\varphi_m(z) \rangle_{\mu} \\ \langle z^t \varphi_m^*(z),\varphi_n^*(z) \rangle_{\mu} &= &\langle z^{t+m-n} \varphi_n(z),\varphi_m(z) \rangle_{\mu} \end{array} \;\;\;,\;\;t \in \Bbb Z \;\;,\;n,m \geq 0,$$ which hold since $\varphi_n^*(z)=z^n\overline{\varphi_n}(z)$ when $z \in \Bbb T$.
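The construction of $\{t(n)\}$ and $\{k(n)\}$ from $\{s(n)\}$ by the rules of Theorem \ref{tk} is easy to automate; the following sketch (the helper names are ours, and the search for $t$ is capped at $n-t=0$ to handle small $n$) reproduces the two sequences of this example:

```python
# Sketch (our own helper names) of the rules of Theorem tk for computing
# k(n) and t(n) from the 0/1 sequence s(n) = p(n) - p(n-1).
def k_of_n(s, n):
    # k = 1 if s(n+1) = 0; otherwise the first k >= 2 with
    # s(n+1) = ... = s(n+k-1) = 1 and s(n+k) = 0.
    k = 1
    while s[n + k] == 1:
        k += 1
    return k

def t_of_n(s, n):
    # t = 1 if s(n-1) = 1; otherwise the first t >= 2 with
    # s(n-1) = ... = s(n+1-t) = 0 and s(n-t) = 1 (stopping at n-t = 0).
    t = 1
    while n - t >= 1 and s[n - t] == 0:
        t += 1
    return t

# s(n) for the ordered basis {1, z, 1/z, z^2, 1/z^2, 1/z^3, z^3, z^4, ...};
# index 0 is a placeholder since s(n) is defined for n >= 1.
s = [None, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0]

print([t_of_n(s, n) for n in range(2, 11)])  # [2, 1, 2, 1, 1, 2, 3, 1, 1]
print([k_of_n(s, n) for n in range(2, 11)])  # [1, 3, 2, 1, 1, 3, 2, 1, 1]
```

The printed lists agree with the sequences $\{t(n)\}_{n\geq 2}$ and $\{k(n)\}_{n\geq 2}$ stated above.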
The result is the following matrix representation with seven-diagonal structure: \begin{center} {\small $$\left( \begin{array}{cccccccccccc} -\delta_1 &\eta_1 &0 &0 &0 &0 &0 &0 &0 &0 &0 &\cdots \\ -\eta_1\delta_2 &-\overline{\delta_1}\delta_2 &-\eta_2\delta_3 &\eta_2\eta_3 &0 &0 &0 &0 &0 &0 &0 &\cdots \\ \eta_1\eta_2 &\overline{\delta_1}\eta_2 &-\overline{\delta_2}\delta_3 &\overline{\delta_2}\eta_3 &0 &0 &0 &0 &0 &0 &0 &\cdots \\ 0 &0 &-\eta_3\delta_4 &-\overline{\delta_3}\delta_4 &-\eta_4\delta_5 &-\eta_4\eta_5\delta_6 &\eta_4\eta_5\eta_6 &0 &0 &0 &0 &\cdots \\ 0 &0 &\eta_3\eta_4 &\overline{\delta_3}\eta_4 &-\overline{\delta_4}\delta_5 &-\overline{\delta_4}\eta_5\delta_6 &\overline{\delta_4}\eta_5\eta_6 &0 &0 &0 &0 &\cdots \\ 0 &0 &0 &0 &\eta_5 &-\overline{\delta_5}\delta_6 &\overline{\delta_5}\eta_6 &0 &0 &0 &0 &\cdots \\ 0 &0 &0 &0 &0 &-\eta_6\delta_7 &-\overline{\delta_6}\delta_7 &\eta_7 &0 &0 &0 &\cdots \\ 0 &0 &0 &0 &0 &-\eta_6\eta_7\delta_8 &-\overline{\delta_6}\eta_7\delta_8 &-\overline{\delta_7}\delta_8 &-\eta_8\delta_9 &-\eta_8\eta_9\delta_{10} &\eta_8\eta_9\eta_{10} &\cdots \\ 0 &0 &0 &0 &0 &\eta_6\eta_7\eta_8 &\overline{\delta_6}\eta_7\eta_8 &\overline{\delta_7}\eta_8 &-\overline{\delta_8}\delta_9 &-\overline{\delta_8}\eta_9\delta_{10} &\overline{\delta_8}\eta_9\eta_{10} &\cdots \\ 0 &0 &0 &0 &0 &0 &0 &0 &\eta_9 &-\overline{\delta_9}\delta_{10} &\overline{\delta_9}\eta_{10} &\cdots \\ 0 &0 &0 &0 &0 &0 &0 &0 &0 &-\eta_{10}\delta_{11} &-\overline{\delta_{10}}\delta_{11} &\cdots \\ \vdots &\vdots &\vdots &\vdots &\vdots &\vdots &\vdots &\vdots &\vdots &\vdots &\vdots &\ddots \end{array} \right) .$$} \end{center} \end{ej} Finally, we conclude this section by briefly considering the inverse multiplication operator defined on $\Lambda$, namely $$\begin{array}{rccc} N: &\Lambda &\longrightarrow &\Lambda \\ &L(z) &\rightarrow &\frac{1}{z}L(z) \end{array}.$$ The first result is an analogue of Theorem \ref{tk}, which can be proved in the same way:
\begin{teorema}\label{tkbis} Let $\{ \chi_n(z) \}_{n=0}^{\infty}$ be the sequence of orthonormal Laurent polynomials for the measure $\mu$ and the generating sequence $\{ p(n) \}_{n=0}^{\infty}$, and suppose that $\lim_{n \rightarrow \infty} p(n)=\infty$. Then, for each $n \geq 0$ there exist $\tilde{k}=\tilde{k}(n) \geq 1$ and $\tilde{t}=\tilde{t}(n) \geq 1$ such that $\frac{1}{z} \chi_n(z) \in span \{ \chi_{n-\tilde{t}}(z), \cdots , \chi_{n+\tilde{k}}(z) \}$, i.e. \begin{equation}\label{tildean} \frac{1}{z} \chi_n(z) = \sum_{s=n-\tilde{t}}^{n+\tilde{k}} \tilde{a}_{n,s} \chi_s(z) \;\;\;\;,\;\;\;\tilde{a}_{n,s}=\overline{a_{s,n}}. \end{equation} Moreover, $\tilde{k}=\tilde{k}(n)$ and $\tilde{t}=\tilde{t}(n)$ are determined as follows: \begin{enumerate} \item $\tilde{k}=1$ if $s(n+1)=1$; otherwise $\tilde{k} \geq 2$ is determined by $s(n+1)=\cdots=s(n+\tilde{k}-1)=0, s(n+\tilde{k})=1$. \item $\tilde{t}=1$ if $s(n-1)=0$; otherwise $\tilde{t} \geq 2$ is determined by $s(n-1)=\cdots=s(n+1-\tilde{t})=1, s(n-\tilde{t})=0$. \end{enumerate} \end{teorema} \begin{flushright} $\Box$ \end{flushright} Now, proceeding as before, a result similar to Theorem \ref{optimal} can be deduced for the inverse multiplication operator $N$. Furthermore, from (\ref{tildean}) it follows that the matrix representation of the operator $N$ when dealing with the balanced generating sequence $p(n)=E \left[ \frac{n}{2} \right]$ is ${\cal C}(\delta)^*=\overline{{\cal C}(\delta)}^{T}$. \section{Quadrature formulas on the unit circle} \setcounter{equation}{0} Throughout this section we shall be concerned with the estimation of integrals on $\Bbb T$ of the form \begin{equation}\label{int} I_{\mu}(f)= \int_{-\pi}^{\pi} f\left( e^{i\theta} \right) d\mu(\theta). \end{equation} As usual, estimates of $I_{\mu}(f)$ may be produced by replacing $f(z)$ in (\ref{int}) by an appropriate approximating (interpolating) function $L(z)$, so that $I_{\mu}(L)$ can now be easily computed.
It seems reasonable to choose as an approximation to $f(z)$ in (\ref{int}) an appropriate Laurent polynomial, because of the density of $\Lambda$ in $C(\Bbb T)=\{ f: \Bbb T \rightarrow \Bbb C \;,\; f \;{\rm continuous} \}$ with respect to the uniform norm (see e.g. \cite{Ra} and \cite{Sz}). Thus, the so-called \lq\lq quadrature formulas on the unit circle" arise. Indeed, given the integral $I_{\mu}(f)$, by an $n$-point quadrature formula on $\Bbb T$ we mean an expression like \begin{equation}\label{quad} I_{n}(f)=\sum_{j=1}^{n} \lambda_j f(z_j) \;\;\;,\;\;z_i \neq z_j \;\;,\;\;i \neq j \;\;\;,\;\; z_j \in \Bbb T \;\;,\;j=1, \ldots, n, \end{equation} where the nodes $\{z_j \}_{j=1}^{n}$ and the coefficients or weights $\{ \lambda_j \}_{j=1}^{n}$ are chosen so that $I_n(f)$ exactly integrates $I_{\mu}(f)$ on subspaces of $\Lambda$ of dimension as large as possible, i.e., $I_n(L)=I_{\mu}(L)$ for any $L \in \Lambda_{-p,q}$, with $p$ and $q$ nonnegative integers depending on $n$ and with sum as large as possible. If we first try subspaces of the form $ \Lambda_{-p,p}$, it can be easily checked that there cannot exist an $n$-point quadrature formula $I_n(f)$ as in (\ref{quad}) which is exact in $ \Lambda_{-n,n}$. Hence, it holds that $p \leq n-1$. In \cite{RO} the following \lq\lq necessary condition" on the nodal polynomial is proved: \begin{teorema}\label{21} For $n \geq 1$, let $I_n(f)=\sum_{j=1}^{n} \lambda_j f(z_j)$ with $z_j \in \Bbb T$, $j=1,\ldots,n$, be exact in $ \Lambda_{-(n-1),n-1}$, and set $P_n(z)=\prod_{j=1}^{n} (z-z_j)$.
Then, \begin{equation}\label{paranewbis} P_n(z)=C_n\left[ \rho_n(z) + \tau_n \rho_n^*(z) \right] \;\;\;,\;\;|\tau_n| =1 \;\;\;,\;\;C_n=\left( 1 + \tau_n \overline{\delta_n} \right)^{-1} \end{equation} \end{teorema} \begin{flushright} $\Box$ \end{flushright} Moreover, the following converse result (sufficient conditions on the nodal polynomial) is proved in \cite{Jo}: \begin{teorema}\label{22} Let $P_n(z)$ be a polynomial of degree $n$ given by (\ref{paranewbis}) (up to a multiplicative factor). Then, \begin{enumerate} \item $P_n(z)$ has exactly $n$ distinct zeros $z_1,\ldots,z_n$ on $\Bbb T$. \item There exist positive real numbers $\lambda_1,\ldots,\lambda_n$ such that \begin{equation}\label{nueva1} I_n(f)=\sum_{j=1}^{n} \lambda_j f(z_j)=I_{\omega}(f) \;\;\;,\;\;\forall f \in \Lambda_{-(n-1),n-1}. \end{equation} \end{enumerate} \begin{flushright} $\Box$ \end{flushright} \end{teorema} The quadrature formula $I_n(f)$ given by (\ref{nueva1}), introduced earlier in \cite{Jo}, was called an \lq\lq $n$-point Szeg\"o quadrature formula", representing the analogue on the unit circle of the Gaussian formulas for intervals of the real axis. However, two important differences must be remarked in this respect: the nodes are not the zeros of the $n$-th orthogonal polynomial with respect to $\mu$, and the $n$-point Szeg\"o formula is exact in $\Lambda_{-(n-1),n-1}$, whose dimension is $2n-1$ instead of $2n$. Observe that, since the nodes are the zeros of the $n$-th para-orthogonal polynomial characterized by (\ref{paranewbis}), a one-parameter family of quadrature formulas exact in $\Lambda_{-(n-1),n-1}$ arises.
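Item 1 of Theorem \ref{22} is easy to test numerically. The sketch below (our helper names; it assumes the monic Szeg\H{o} recurrence $\rho_n(z)=z\rho_{n-1}(z)+\delta_n\rho_{n-1}^*(z)$ and uses sample Schur parameters of our choosing) builds the para-orthogonal polynomial $\rho_n(z)+\tau_n\rho_n^*(z)$ and checks that its zeros are distinct and lie on $\Bbb T$:

```python
import numpy as np

# Zeros of the para-orthogonal polynomial rho_n + tau*rho_n^*, |tau| = 1,
# lie on the unit circle and are distinct (item 1 of Theorem 22).
def monic_szego(deltas):
    # Coefficients of rho_n, lowest degree first, from the assumed recurrence
    # rho_k = z*rho_{k-1} + delta_k * rho*_{k-1}.
    rho = np.array([1.0 + 0j])
    for d in deltas:
        rev = np.conj(rho)[::-1]                      # rho^*: reversed conjugates
        rho = np.concatenate(([0], rho)) + d * np.concatenate((rev, [0]))
    return rho

deltas = [0.3, -0.5 + 0.2j, 0.1j, -0.4]               # sample parameters, |delta_k| < 1
tau = np.exp(1j * 0.7)                                # any point on the unit circle
rho = monic_szego(deltas)
p = rho + tau * np.conj(rho)[::-1]                    # para-orthogonal polynomial
zeros = np.roots(p[::-1])                             # np.roots wants highest degree first
print(np.allclose(np.abs(zeros), 1.0))                # True: all zeros on |z| = 1
```

The leading coefficient of $p$ is $1+\tau\overline{\delta_n}$, matching $C_n^{-1}$ in (\ref{paranewbis}).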
On the other hand, starting from a generating sequence $\{ p(n) \}_{n=0}^{\infty}$ and because of the fact that $ {\cal L}_{n-1}$ is a Chebyshev system on $\Bbb T$ of dimension $n$ (since $0 \not\in \Bbb T$), for $n$ distinct nodes $z_1,\ldots,z_n$ on $\Bbb T$ the parameters $\lambda_1,\ldots,\lambda_n$ can be uniquely determined so that $I_n(L)= I_{\mu}(L)$ for all $L \in {\cal L}_{n-1}$. In order to recover Szeg\H{o} formulas in the natural framework of the orthogonal Laurent polynomials on the unit circle, and inspired by the ordinary polynomial situation, we will deal with subspaces of $\Lambda$ of the form ${\cal L}_{n} {\cal L}_{r*} = \Lambda_{-[p(n)+q(r)],[q(n)+p(r)]}$ with dimension $n+r+1$ (observe that ${\cal L}_{n-1} \subset {\cal L}_{n} {\cal L}_{r*}$). Hence, from the above, $0 \leq r \leq n-1$ and we may analyze how large $r=r(n)$ can be taken. The first step should be to consider $r=n-1$, but a negative answer is proved in \cite{RO}: \begin{teorema}\label{import} There cannot exist an $n$-point quadrature formula like (\ref{quad}) with nodes on $\Bbb T$ which is exact in ${\cal L}_n {\cal L}_{(n-1)*}$, for any arbitrary generating sequence $\{ p(n) \}_{n\geq 0}$. \begin{flushright} $\Box$ \end{flushright} \end{teorema} The second step is to consider $r=n-2$. For this purpose, we set $\lambda(n)=p(n)-p(n-2) \in \{ 0,1,2 \}$. The results obtained in \cite{RO} are summarized in: \begin{teorema}\label{cerosquasi} Let $\{\chi_n(z) \}_{n=0}^{\infty}$ be the sequence of orthonormal Laurent polynomials with respect to the measure $\mu$ and the ordering induced by the generating sequence $\{p(n) \}_{n=0}^{\infty}$. Suppose that $\lambda(n)=p(n)-p(n-2)=1$ and consider \begin{equation}\label{znew1} R_n(z,u)=C_n \left[ \eta_n \chi_n(z) + \tau_n \chi_{n-1}(z) \right] \end{equation} where $C_n \neq 0$ and \begin{equation}\label{znew2} \tau_n=\left\{ \begin{array}{ccl} \overline{u-\delta_n} &if &s(n)=1 \\ u - \delta_n &if &s(n)=0 \end{array} \right.
\end{equation} with $u \in \Bbb T$ and $\{\delta_n \}_{n=0}^{\infty}$ the sequence of Schur parameters associated with $\mu$. Then, \begin{enumerate} \item $R_n(z,u)$ has exactly $n$ distinct zeros on $\Bbb T$. \item If $z_1,\ldots,z_n$ are the zeros of $R_n(z,u)$, then there exist positive numbers $\lambda_1,\ldots,\lambda_n$ such that \begin{equation}\label{quasiquad} I_n(f)=\sum_{j=1}^{n} \lambda_j f(z_j) = I_{\mu}(f) \;\;\;,\;\;\forall f \in {\cal L}_{n}{\cal L}_{(n-2)*}. \end{equation} \item There cannot exist an $n$-point quadrature formula with nodes on $\Bbb T$ which is exact in ${\cal L}_{n}{\cal L}_{(n-2)*}$ if $\lambda(n)=0$ or $\lambda(n)=2$. \end{enumerate} \begin{flushright} $\Box$ \end{flushright} \end{teorema} Thus, under the assumption that $\lambda(n)=p(n)-p(n-2)=1$, we see that ${\cal L}_n {\cal L}_{(n-2)*}=\Lambda_{-(n-1),n-1}$. Therefore, the quadrature rule given by (\ref{quasiquad}) coincides with an $n$-point Szeg\H{o} quadrature formula for $\mu(\theta)$ and, taking into account that the solutions of the finite difference equation $\lambda(n)=p(n)-p(n-2)=1$ for $n \geq 2$ are given by $$p(n)= \left\{ \begin{array}{ccl} E\left[ \frac{n}{2} \right] &if &p(0)=p(1)=0 \\ E\left[ \frac{n+1}{2} \right] &if &p(0)=0\;,\;\; p(1)=1 \\ \end{array} \right. \;,$$ we see that the natural balanced orderings introduced earlier by Thron in \cite{Th} are again recovered. Furthermore, they are the only ones which produce quadrature formulas with nodes on $\Bbb T$ with a maximal domain of validity. On the other hand, as we have seen in Section 4, these orderings correspond to the narrowest matrix representation of a sequence of orthonormal Laurent polynomials.
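The two solutions of the finite difference equation are straightforward to verify computationally; a minimal check (our notation, with $E[x]$ the integer part, i.e., floor):

```python
# A quick check that the two balanced orderings solve p(n) - p(n-2) = 1
# and that the induced sequence s(n) = p(n) - p(n-1) alternates between 0 and 1.
def p_floor(n):       # p(n) = E[n/2]
    return n // 2

def p_ceil_half(n):   # p(n) = E[(n+1)/2]
    return (n + 1) // 2

for p in (p_floor, p_ceil_half):
    assert all(p(n) - p(n - 2) == 1 for n in range(2, 50))
    s = [p(n) - p(n - 1) for n in range(1, 50)]
    assert all(s[i] != s[i + 1] for i in range(len(s) - 1))
print("balanced orderings verified")
```

The alternation of $s(n)$ is exactly the "single zeros and ones" situation that yielded the five-diagonal representations ${\cal C}(\delta)$ and ${\cal C}(\delta)^T$ in Section 4.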
In order to complete the construction of such quadrature formulas, we give expressions for the weights, also proved in \cite{RO}: \begin{teorema}\label{pesosmejor} Let $\{\chi_n(z) \}_{n=0}^{\infty}$ be the sequence of orthonormal Laurent polynomials with respect to the measure $\mu$ and the ordering induced by a generating sequence $\{p(n) \}_{n=0}^{\infty}$. Then, the weights $\{\lambda_j \}_{j=1}^{n}$ for the quadrature formula (\ref{quasiquad}) are given for $j=1,\ldots,n$ by either \begin{equation}\label{newweight} \lambda_j=\frac{1}{\sum_{k=0}^{n-1}\left| \chi_k(z_j) \right|^2} \end{equation} or \begin{equation}\label{newweightbis} \lambda_j = \frac{(-1)^{s(n)}}{2\Re \left[ z_j \chi'_n(z_j)\overline{\chi_n(z_j)}\right] + (p(n)-q(n))\left| \chi_n(z_j) \right|^2} \;, \end{equation} where the nodes $\{z_j\}_{j=1}^{n}$ are the zeros of $R_n(z,u)$ given by (\ref{znew1}), or equivalently the zeros of $P_n(z)$ in (\ref{paranewbis}). \begin{flushright} $\Box$ \end{flushright} \end{teorema} Once quadrature formulas on $\Bbb T$ with a maximal domain of validity have been constructed (nodes from (\ref{paranewbis}) or (\ref{znew1}) and weights by (\ref{newweight}) or (\ref{newweightbis})), the next step is their effective computation. In this respect, it should be noticed that, until now and as far as we know, numerical experiments with Szeg\H{o} quadrature have mostly involved either measures whose sequences of Szeg\H{o} polynomials are explicitly known, or polynomials computed by Levinson's algorithm (see e.g. \cite{Le}, \cite{LD} and \cite{JC}). The zeros of (\ref{paranewbis}) or (\ref{znew1}) can be found by using any standard root-finding method available in the literature (for a specific procedure concerning rational modifications of the Lebesgue measure, see also \cite{Tru}).
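As an illustration of this computational recipe, the sketch below (our helper names; it assumes the monic Szeg\H{o} recurrence $\rho_k(z)=z\rho_{k-1}(z)+\delta_k\rho_{k-1}^*(z)$, $\varphi_k=\rho_k/\sqrt{e_k}$ with $e_k=\prod_{j=1}^{k}\eta_j^2$, and a probability measure, so the weights must be positive and sum to $1$) computes the nodes as the zeros of $z\rho_{n-1}(z)+u\rho_{n-1}^*(z)$ and the weights from (\ref{newweight}), using $|\chi_k(z)|=|\varphi_k(z)|$ on $\Bbb T$:

```python
import numpy as np

# Nodes by root-finding on the para-orthogonal polynomial, weights by the
# reciprocal Christoffel-function formula (newweight). Sample Schur parameters.
def szego_quadrature(deltas, u=1.0):
    n = len(deltas) + 1                          # number of nodes
    rhos = [np.array([1.0 + 0j])]                # monic rho_0, ..., rho_{n-1}
    for d in deltas:
        r = rhos[-1]
        rev = np.conj(r)[::-1]
        rhos.append(np.concatenate(([0], r)) + d * np.concatenate((rev, [0])))
    r = rhos[-1]
    p = np.concatenate(([0], r)) + u * np.concatenate((np.conj(r)[::-1], [0]))
    nodes = np.roots(p[::-1])                    # zeros of P_n(z, u)
    e = np.cumprod([1.0] + [1 - abs(d) ** 2 for d in deltas])   # e_0, ..., e_{n-1}
    phis = [rho / np.sqrt(ek) for rho, ek in zip(rhos, e)]      # orthonormal phi_k
    weights = np.array([1 / sum(abs(np.polyval(phi[::-1], z)) ** 2 for phi in phis)
                        for z in nodes])
    return nodes, weights

nodes, weights = szego_quadrature([0.3, -0.2 + 0.4j, 0.1], u=1.0)
print(np.allclose(np.abs(nodes), 1.0), bool(np.all(weights > 0)))
```

Exactness on constants forces $\sum_j\lambda_j=\mu_0=1$ here, which serves as a basic correctness check of any implementation.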
Hence, in the rest of the section we shall review some strategies for the effective computation of the nodes $\{ z_j \}_{j=1}^{n}$ and weights $\{ \lambda_j \}_{j=1}^{n}$ of an $n$-point Szeg\H{o} quadrature formula: $$I_n(f)=\sum_{j=1}^{n} \lambda_j f(z_j) \;\;\;,\;\;|z_j|=1 \;\;\;,\;\;j=1,\ldots,n.$$ Indeed, for the nodal polynomial $P_n(z)=\prod_{j=1}^{n} \left( z-z_j \right)$ given by (\ref{paranewbis}) we can write from (\ref{le}) (see \cite{CM}): $$P_n(z)=P_n(z,u)=z\rho_{n-1}(z) + u\rho_{n-1}^*(z) \;\;,\;|u|=1.$$ Now, it can be easily checked that $\{z\varphi_0(z),\ldots,z\varphi_{n-2}(z),-u\varphi_{n-1}^*(z) \}$ is an orthonormal basis of $\Bbb P_{n-1}$ which must be related to $\{\varphi_0(z),\ldots,\varphi_{n-1}(z) \}$ by a unitary matrix $U_n$. Setting for $n\geq 0$, $e_n=\langle \rho_n(z),\rho_n(z) \rangle_{\mu}$ (recall that $e_n=\prod_{k=1}^{n} \eta_k^2$ for $n \geq 1$ and that $\varphi_n(z)=\frac{\rho_n(z)}{\sqrt{e_n}}$), it is straightforward to check that $U_n$ is the following irreducible Hessenberg matrix: \begin{equation}\label{un} U_n= \left( \begin{array}{ccccc} d_{1,1} &d_{1,2} &0 &\cdots &0 \\ d_{2,1} &d_{2,2} &d_{2,3} &\cdots &0 \\ \vdots &\vdots &\vdots &\ddots &\vdots \\ d_{n,1} &d_{n,2} &d_{n,3} &\cdots &d_{n,n} \end{array} \right) \end{equation} where \begin{equation}\label{undata} d_{i,j}= \left\{ \begin{array}{crl} -\overline{\delta_{j-1}}u\sqrt{\frac{e_{n-1}}{e_{j-1}}} &if &i=n \\ -\overline{\delta_{j-1}}\delta_{i}\sqrt{\frac{e_{i-1}}{e_{j-1}}} &if &i \leq n-1 \;\;,\;j \leq i \\ \eta_i &if &i \leq n-1 \;\;,\;j=i+1 \end{array} \right. .
\end{equation} Hence, we can write $$\left( \begin{array}{c} z\varphi_0(z) \\ \vdots \\ z\varphi_{n-2}(z) \\ -u\varphi_{n-1}^*(z) \end{array} \right) = U_n \left( \begin{array}{c} \varphi_0(z) \\ \vdots \\ \varphi_{n-2}(z) \\ \varphi_{n-1}(z) \end{array} \right) .$$ Observe that, if we introduce the notation ${\cal H}_n(\delta_0,\ldots,\delta_{n})$ to indicate the principal submatrix of ${\cal H}(\delta)$ (given by (\ref{hess})) of order $n$ with entries $\{ \delta_k \}_{k=0}^{n}$, then $U_n={\cal H}_n(\delta_0,\ldots,\delta_{n-1},u)$, that is, the principal submatrix of ${\cal H}(\delta)$ of order $n$ where the Schur parameter $\delta_n$ is replaced by $u \in \Bbb T$. Now, $$\left( \begin{array}{c} z\varphi_0(z) \\ \vdots \\ z\varphi_{n-2}(z) \\ -u\varphi_{n-1}^*(z) \end{array} \right) = \left( \begin{array}{c} z\varphi_0(z) \\ \vdots \\ z\varphi_{n-2}(z) \\ z\varphi_{n-1}(z)-z\varphi_{n-1}(z)-u\varphi_{n-1}^*(z) \end{array} \right) = z \left( \begin{array}{c} \varphi_0(z) \\ \vdots \\ \varphi_{n-2}(z) \\ \varphi_{n-1}(z) \end{array} \right) - \left( \begin{array}{c} 0 \\ \vdots \\ 0 \\ \frac{P_n(z,u)}{\sqrt{e_{n-1}}} \end{array} \right).$$ Thus, we have the identity \begin{equation}\label{zvn} zV_n(z)=U_nV_n(z) + b_n(z) \end{equation} where $$V_n(z)=\left( \varphi_0(z), \varphi_1(z), \cdots,\varphi_{n-1}(z) \right)^{T} \;\;;\;\;\;b_n(z)=\left( 0,\cdots,0,\frac{P_n(z,u)}{\sqrt{e_{n-1}}} \right)^T$$ with $|u|=1$. From (\ref{zvn}) one sees that any zero $\xi$ of $P_n(z,u)$ is an eigenvalue of $U_n$ with associated eigenvector $V_n(\xi)$. So, let $z_j$ be a zero of $P_n(z,u)$ for $j=1,\ldots,n$ and consider the corresponding normalized eigenvector of $U_n$: $$W_n(z_j)=\frac{V_n(z_j)}{\left[ \sum_{k=0}^{n-1} \left| \varphi_k(z_j) \right|^2 \right]^{1/2}}.$$ Now, taking into account that for any $z \in \Bbb T$, $\left| \varphi_k(z) \right|^2=\left| \chi_k(z) \right|^2$, from (\ref{newweight}) it follows that \begin{equation}\label{wn} W_n(z_j)=\lambda_j^{1/2}V_n(z_j). 
\end{equation} If we write $W_n(z_j)=\left( q_{0,j},\ldots,q_{n-1,j} \right)^T$ and select the first components of both sides of (\ref{wn}), we obtain $q_{0,j}=\lambda_j^{1/2}\varphi_0(z_j)$. But, since we are dealing with a probability measure ($\int d\mu(\theta)=1$), we have $\varphi_0(z) \equiv 1$ and hence $$\lambda_j=q_{0,j}^2 \;\;\;,\;\;j=1,\ldots,n.$$ In short, the following theorem has been proved: \begin{teorema}\label{eigen1} Let $I_n(f)$ be the $n$-th Szeg\H{o} quadrature formula (\ref{quasiquad}). Then, \begin{enumerate} \item The nodes $\{z_j \}_{j=1}^{n}$ are the eigenvalues of $U_n={\cal H}_n(\delta_0,\ldots,\delta_{n-1},u)$ given by (\ref{un})-(\ref{undata}), for all $u \in \Bbb T$. \item The weights $\{\lambda_j \}_{j=1}^{n}$ are the squares of the first components of the corresponding normalized eigenvectors. \end{enumerate} \end{teorema} Finally, we show an alternative approach to the computation of a Szeg\H{o} quadrature formula (\ref{quasiquad}) by using truncations of the five-diagonal matrix ${\cal C}(\delta)$. In the next result we will use the matrix ${\cal C}_n(\delta_1,\ldots,\delta_{n-1},u)$, that is, the principal submatrix of ${\cal C}(\delta)$ of order $n$ where the Schur parameter $\delta_n$ is replaced by $u \in \Bbb T$. The first part has already been deduced in \cite{CMV4} by using operator theory techniques, whereas the proof presented here is based on the recurrence relations satisfied by the family of orthonormal Laurent polynomials. Without loss of generality, we can fix the ordering induced by $p(n)=E\left[ \frac{n}{2} \right]$ (recall that the matrix representation associated with the ordering induced by $p(n)=E\left[ \frac{n+1}{2} \right]$ is ${\cal C}(\delta)^T$). \begin{teorema}\label{eigen2} Let $I_n(f)$ be the $n$-th Szeg\H{o} quadrature formula (\ref{quasiquad}). Then, \begin{enumerate} \item The nodes $\{z_j \}_{j=1}^{n}$ are the eigenvalues of ${\cal C}_n(\delta_1,\ldots,\delta_{n-1},u)$, for all $u \in \Bbb T$.
\item The weights $\{\lambda_j \}_{j=1}^{n}$ are the squares of the first components of the corresponding normalized eigenvectors. \end{enumerate} \end{teorema} {\em Proof}.- As we have seen, the nodes are the zeros of $R_n(z)$ given by (\ref{znew1})-(\ref{znew2}). Setting $X_n(z)= \left( \chi_0(z),\ldots,\chi_{n-1}(z)\right)^T$, it follows from (\ref{five2}) that $$zX_n(z)={\cal C}_n(\delta_1,\ldots,\delta_{n-1},u) X_n(z) + T_n(z) \;,$$ where $$T_n(z)= \left\{ \begin{array}{ccl} (0,\ldots,0,A(z))^T &if &n=2k \\ (0,\ldots,0,B(z),C(z))^T &if &n=2k+1 \end{array} \right.$$ with $$\begin{array}{rcl} A(z) &= &\eta_{2k-1}(u-\delta_{2k})\chi_{2k-2}(z)+\overline{\delta_{2k-1}}(u-\delta_{2k})\chi_{2k-1}(z)- \\ &&\eta_{2k}\delta_{2k+1}\chi_{2k}(z)+\eta_{2k}\eta_{2k+1}\chi_{2k+1}(z) \end{array}$$ $$B(z)=\eta_{2k}(u-\delta_{2k+1})\chi_{2k}(z) + \eta_{2k}\eta_{2k+1}\chi_{2k+1}(z)$$ $$C(z)=\overline{\delta_{2k}}(u-\delta_{2k+1})\chi_{2k}(z) + \overline{\delta_{2k}}\eta_{2k+1}\chi_{2k+1}(z).$$ Now, on the one hand, in the odd case ($n=2k+1$) it follows from (\ref{znew1})-(\ref{znew2}) that $$T_n(z)=\left( 0,\ldots,0, \eta_{n-1}R_n(z),\overline{\delta_{n-1}}R_n(z) \right)^T$$ implying that $T_n(z) \equiv (0,\ldots,0)^T$ if and only if $R_n(z) =0$. On the other hand, from (\ref{eq4}) and since $u \in \Bbb T$, it follows in the even case ($n=2k$) that $$\begin{array}{ccl} A(z) &= &(u-\delta_{2k})\left[ z\eta_{2k}\chi_{2k}(z)-\left( \overline{\delta_{2k}}z + \overline{\delta_{2k-1}} \right) \chi_{2k-1}(z) \right]\\ && + \overline{\delta_{2k-1}}(u-\delta_{2k})\chi_{2k-1}(z)-\eta_{2k}\delta_{2k+1}\chi_{2k}(z) + \\ && \eta_{2k}\left[ \left( \delta_{2k+1} + \delta_{2k}z \right)\chi_{2k}(z) + \eta_{2k}z\chi_{2k-1}(z) \right] \\ &= &uzR_n(z) \end{array}$$ and hence it holds again that $T_n(z) \equiv (0,\ldots,0)^T$ if and only if $R_n(z)=0$. The proof of the second part follows from the same arguments as in Theorem \ref{eigen1}.
\begin{flushright} $\Box$ \end{flushright} \section{Numerical examples} \setcounter{equation}{0} In order to numerically illustrate the results given in the previous section, we present the computation of the nodes and weights of the Szeg\H{o} quadrature formulas for the absolutely continuous measure defined on $[-\pi,\pi]$ by $d\mu(\theta)=\omega(\theta)d\theta$ with \begin{equation}\label{rogerszegoweight} \omega(\theta) = \frac{2\pi}{\sqrt{2\pi \log \left( \frac{1}{q} \right)}} \sum_{j=-\infty}^{\infty} \exp \left({\frac{-\left(\theta - 2\pi j \right)^2}{2\log \left( \frac{1}{q} \right)}} \right) \;\;\;,\;\;q \in (0,1) \;\;. \end{equation} The corresponding monic orthogonal polynomials are the so-called Rogers-Szeg\"o $q$-polynomials. Throughout this section we also fix the ordering induced by the generating sequence $p(n)=E \left[ \frac{n+1}{2} \right]$. An explicit expression for such polynomials is given in \cite[Ch. 1]{BS}, and so from Proposition \ref{conex2} the following explicit expression for the corresponding monic orthogonal Laurent polynomials is deduced: \begin{equation}\label{rogsez} \phi_n(z)= \left\{ \begin{array}{lcl} \sum_{j=-k}^{k} (-1)^{j+k} \left[ \begin{array}{c} 2k \\ j+k \end{array} \right]_q q^{\frac{k-j}{2}} z^{j} &if &n=2k \\ \sum_{j=-(k+1)}^{k} (-1)^{j+k+1} \left[ \begin{array}{c} 2k+1 \\ k-j \end{array} \right]_q q^{\frac{j+k+1}{2}} z^{j} &if &n=2k+1 \end{array} \right. \end{equation} where, as usual, the $q$-binomial coefficients $\left[ \begin{array}{c} n \\ j \end{array} \right]_q$ are defined by \begin{equation} \begin{array}{ccl} (n)_q &= &(1-q)(1-q^2)\cdots (1-q^n) \;\;\;,\;\;(0)_q \equiv 1 \\ \\ \left[ \begin{array}{c} n \\ j \end{array} \right]_q &= &\frac{(n)_q}{(j)_q (n-j)_q} = \frac{(1-q^n) \cdots (1-q^{n-j+1})}{(1-q) \cdots (1-q^j)}.
\end{array} \end{equation} Now, writing $\phi_{2k}(z)=\sum_{j=-k}^{k} a_jz^j$ and $\phi_{2k+1}(z)=\sum_{j=-(k+1)}^{k} b_jz^j$, the coefficients $\{a_j \}_{j=-k}^{k}$ and $\{b_j \}_{j=-(k+1)}^{k}$ can be recursively computed by $$a_k=1 \;\;\;,\;\;\;a_j=-a_{j+1}\sqrt{q}\frac{1-q^{k+j+1}}{1-q^{k-j}} \;\;,\;-k \leq j \leq k-1$$ $$b_{-(k+1)}=1 \;\;\;,\;\;\;b_{j+1}=-b_j\sqrt{q}\frac{1-q^{k-j}}{1-q^{k+j+2}} \;\;,\;-(k+1) \leq j \leq k-1.$$ The following two figures show the zeros of $\phi_{10}(z)$ and $\phi_{11}(z)$, taking $q=0.1, 0.25, 0.5, 0.75$ and $0.9$ (as usual, the circles below represent the unit circle). The data were computed by using the Jenkins and Traub root-finding method (see \cite{JTR}), appropriate in this case since the polynomials have real coefficients, and they are similar to those given in \cite[Ch. 8]{BS}, where the distribution of zeros of Szeg\"o polynomials for this measure is also considered: $$Figure \;1 \;\;\left( zeros \;of \;\phi_{10}(z) \right)$$ $$\begin{array}{ccc} q=0.1 & q=0.25 &q=0.5 \\ \begin{picture}(120,70) \put(60,35){\circle{60}} \put(30,35){\line(1,0){60}} \put(60,5){\line(0,1){60}} \put(55,38.5){\circle*{2}} \put(55,31.5){\circle*{2}} \put(57.5,40.25){\circle*{2}} \put(57.5,29.75){\circle*{2}} \put(60,41){\circle*{2}} \put(60,29){\circle*{2}} \put(64.25,39.5){\circle*{2}} \put(64.25,30.5){\circle*{2}} \put(66.5,35.25){\circle*{2}} \put(66.5,34.75){\circle*{2}} \end{picture} & \begin{picture}(120,70) \put(60,35){\circle{60}} \put(30,35){\line(1,0){60}} \put(60,5){\line(0,1){60}} \put(51,40){\circle*{2}} \put(51,30){\circle*{2}} \put(55.75,43.75){\circle*{2}} \put(55.75,26.25){\circle*{2}} \put(61.5,45){\circle*{2}} \put(61.5,25){\circle*{2}} \put(66.5,42){\circle*{2}} \put(66.5,28){\circle*{2}} \put(69,37.5){\circle*{2}} \put(69,32.5){\circle*{2}} \end{picture} & \begin{picture}(120,70) \put(60,35){\circle{60}} \put(30,35){\line(1,0){60}} \put(60,5){\line(0,1){60}} \put(48.5,44){\circle*{2}} \put(48.5,26){\circle*{2}}
\put(56.5,48.5){\circle*{2}} \put(56.5,21.5){\circle*{2}} \put(63.75,48.25){\circle*{2}} \put(63.75,21.75){\circle*{2}} \put(70.5,44){\circle*{2}} \put(70.5,26){\circle*{2}} \put(73.25,38.5){\circle*{2}} \put(73.25,31.5){\circle*{2}} \end{picture} \end{array}$$ $$\begin{array}{cc} q=0.75 & q=0.9 \\ \begin{picture}(120,70) \put(60,35){\circle{60}} \put(30,35){\line(1,0){60}} \put(60,5){\line(0,1){60}} \put(52,49.5){\circle*{2}} \put(52,20.5){\circle*{2}} \put(60,52.25){\circle*{2}} \put(60,17.75){\circle*{2}} \put(68,50){\circle*{2}} \put(68,20){\circle*{2}} \put(74,45){\circle*{2}} \put(74,25){\circle*{2}} \put(77,39){\circle*{2}} \put(77,31){\circle*{2}} \end{picture} & \begin{picture}(120,70) \put(60,35){\circle{60}} \put(30,35){\line(1,0){60}} \put(60,5){\line(0,1){60}} \put(61.5,53.5){\circle*{2}} \put(61.5,16.5){\circle*{2}} \put(68.5,51.5){\circle*{2}} \put(68.5,18.5){\circle*{2}} \put(73.5,47.5){\circle*{2}} \put(73.5,22.5){\circle*{2}} \put(77.15,41.6){\circle*{2}} \put(77.15,28.4){\circle*{2}} \put(78.5,37){\circle*{2}} \put(78.5,33){\circle*{2}} \end{picture} \end{array}$$ $$Figure \;2 \;\;\left( zeros \;of \;\phi_{11}(z) \right)$$ $$\begin{array}{lr} q=0.1 & q=0.25 \\ \\ \\ \\ \\ \begin{picture}(120,70) \put(15,35){\circle{60}} \put(-55,35){\line(1,0){140}} \put(15,-35){\line(0,1){140}} \put(78.5,35){\circle*{2}} \put(69.75,64){\circle*{2}} \put(69.75,6){\circle*{2}} \put(47.5,88.5){\circle*{2}} \put(47.5,-18.5){\circle*{2}} \put(15.1,98){\circle*{2}} \put(15.1,-28){\circle*{2}} \put(-15.5,88.6){\circle*{2}} \put(-15.5,-18.6){\circle*{2}} \put(-39,64.25){\circle*{2}} \put(-39,5.75){\circle*{2}} \end{picture} & \begin{picture}(120,70) \put(105,35){\circle{60}} \put(60,35){\line(1,0){90}} \put(105,-10){\line(0,1){90}} \put(144.5,35){\circle*{2}} \put(139.5,54){\circle*{2}} \put(139.5,16){\circle*{2}} \put(125,69){\circle*{2}} \put(125,1){\circle*{2}} \put(107,74.75){\circle*{2}} \put(107,-4.75){\circle*{2}} \put(86.75,70.5){\circle*{2}} 
\put(86.75,-0.5){\circle*{2}} \put(71.5,56){\circle*{2}} \put(71.5,14){\circle*{2}} \end{picture} \end{array}$$ $$\begin{array}{ccc} q=0.5 & q=0.75 &q=0.9 \\ \\ \begin{picture}(120,70) \put(60,35){\circle{60}} \put(30,35){\line(1,0){60}} \put(60,5){\line(0,1){60}} \put(88,35){\circle*{2}} \put(85,47){\circle*{2}} \put(85,23){\circle*{2}} \put(77,57.5){\circle*{2}} \put(77,12.5){\circle*{2}} \put(50,59.75){\circle*{2}} \put(50,10.25){\circle*{2}} \put(64.5,62.75){\circle*{2}} \put(64.5,7.25){\circle*{2}} \put(37.25,51.5){\circle*{2}} \put(37.25,18.5){\circle*{2}} \end{picture} & \begin{picture}(120,70) \put(60,35){\circle{60}} \put(30,35){\line(1,0){60}} \put(60,5){\line(0,1){60}} \put(83,35){\circle*{2}} \put(80.5,44){\circle*{2}} \put(80.5,26){\circle*{2}} \put(76,50.75){\circle*{2}} \put(76,19.25){\circle*{2}} \put(60,57.5){\circle*{2}} \put(60,12.5){\circle*{2}} \put(68.5,55.9){\circle*{2}} \put(68.5,14.1){\circle*{2}} \put(47.75,54){\circle*{2}} \put(47.75,16){\circle*{2}} \end{picture} & \begin{picture}(120,70) \put(60,35){\circle{60}} \put(30,35){\line(1,0){60}} \put(60,5){\line(0,1){60}} \put(81,35){\circle*{2}} \put(80.25,40.5){\circle*{2}} \put(80.25,29.5){\circle*{2}} \put(77.5,46.75){\circle*{2}} \put(77.5,23.25){\circle*{2}} \put(74.5,50){\circle*{2}} \put(74.5,20){\circle*{2}} \put(68.25,54.4){\circle*{2}} \put(68.25,15.6){\circle*{2}} \put(60.25,56.4){\circle*{2}} \put(60.25,13.6){\circle*{2}} \end{picture} \end{array}$$ From Figures 1 and 2 one can observe that the zeros of $\phi_{10}(z)$ are located on the circle $\left\{z : |z|=q^{1/2} \right\}$ whereas the zeros of $\phi_{11}(z)$ are located on the circle $\left\{z : |z|=q^{-1/2} \right\}$, in accordance with Proposition \ref{conex2} and the Mazel-Geronimo-Hayes theorem (see \cite{MZ}). Now, we compute the nodes and weights of the quadrature formula (\ref{quasiquad}) also for $n=10$ and $q=0.1, 0.25, 0.5, 0.75$ and $0.9$. 
The nodes are the zeros of $R_{10}(z,u)$ given by (\ref{znew1}) (we take $u=1$). The corresponding sequence of Schur parameters is given by $\delta_n=(-1)^nq^{\frac{n}{2}}$ for all $n=1,2,\ldots$ (see \cite{BS}). As for the weights, we can make use of (\ref{newweightbis}), taking into account that $$\chi_n(z)=\frac{\phi_n(z)}{\sqrt{(1-q)\cdots (1-q^n)}}$$ with $\phi_n(z)$ explicitly given by (\ref{rogsez}). The results are displayed in Tables 1-5. \begin{center} $Table \;1 \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;Table \;2$ \end{center} $$\begin{array}{rl} \hbox{ \small\tabcolsep 5pt \begin{tabular}{|c|c|} \hline & \\ $q=0.1$ & $n=10$ \\ \hline & \\ Nodes & Weights \\ \hline & \\ $\begin{array}{r} -0.940400 \pm 0.34007i \\ -0.531157 \pm 0.847273i \\ 0.0668824 \pm 0.997761i \\ 0.624424 \pm 0.781086i \\ 0.955949 \pm 0.293533i \end{array}$ & $\begin{array}{c} 0.0459602 \\ 0.0669775 \\ 0.100057 \\ 0.133157 \\ 0.153848 \end{array}$ \\ & \\ \hline \end{tabular}} \par\noindent & \hbox{ \small\tabcolsep 5pt \begin{tabular}{|c|c|} \hline & \\ $q=0.25$ & $n=10$ \\ \hline & \\ Nodes & Weights \\ \hline & \\ $\begin{array}{r} -0.922051 \pm 0.387069i \\ -0.473103 \pm 0.881007i \\ 0.119954 \pm 0.992779i \\ 0.650270 \pm 0.759703i \\ 0.959239 \pm 0.282596i \end{array}$ & $\begin{array}{c} 0.0195773 \\ 0.0466391 \\ 0.0947585 \\ 0.150362 \\ 0.188665 \end{array}$ \\ & \\ \hline \end{tabular}} \par\noindent \end{array}$$ \begin{center} $Table \;3 \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;Table \;4$ \end{center} $$\begin{array}{rl} \hbox{ \small\tabcolsep 5pt \begin{tabular}{|c|c|} \hline & \\ $q=0.5$ & $n=10$ \\ \hline & \\ Nodes & Weights \\ \hline & \\ $\begin{array}{r} -0.842988 \pm 0.537932i \\ -0.333209 \pm 0.942853i \\ 0.234605 \pm 0.972091i \\ 0.703537 \pm 0.710659i \\ 0.965879 \pm 0.258994i \end{array}$ & $\begin{array}{c} 0.00312009 \\
0.0207928 \\ 0.0737936 \\ 0.163017 \\ 0.239274 \end{array}$ \\ & \\ \hline \end{tabular}} \par\noindent & \hbox{ \small\tabcolsep 5pt \begin{tabular}{|c|c|} \hline & \\ $q=0.75$ & $n=10$ \\ \hline & \\ Nodes & Weights \\ \hline & \\ $\begin{array}{r} -0.517559 \pm 0.855648i \\ 0.00961854 \pm 0.999954i \\ 0.467501 \pm 0.883993i \\ 0.801825 \pm 0.597559i \\ 0.977622 \pm 0.210369i \end{array}$ & $\begin{array}{c} 0.000196919 \\ 0.00541542 \\ 0.0439839 \\ 0.158275 \\ 0.292128 \end{array}$ \\ & \\ \hline \end{tabular}} \par\noindent \end{array}$$ \begin{center} $Table \;5$ \end{center} \hbox{ \small\tabcolsep 5pt \begin{tabular}{|c|c|} \hline & \\ $q=0.9$ & $n=10$ \\ \hline & \\ Nodes & Weights \\ \hline & \\ $\begin{array}{r} 0.112467 \pm 0.993655i \\ 0.475746 \pm 0.879582i \\ 0.734593 \pm 0.678508i \\ 0.904709 \pm 0.426031i \\ 0.989421 \pm 0.145076i \end{array}$ & $\begin{array}{c} 0.0000221869 \\ 0.000173911 \\ 0.0276214 \\ 0.146351 \\ 0.324221 \end{array}$ \\ & \\ \hline \end{tabular}} \par\noindent \begin{nota} From the above tables one sees that the weights corresponding to a pair of complex conjugate nodes are equal. This directly follows from (\ref{newweight}) since in this case the coefficients of the $n$-th orthonormal Laurent polynomial are real. \end{nota} In order to check the effectiveness of Theorems \ref{eigen1} and \ref{eigen2} we also compute for $n=10$, $q=0.1,0.25,0.5,0.75,0.9$ and $u=1$ the eigenvalues and the first component of the normalized eigenvectors of ${\cal C}_{10}(\delta_1,\ldots,\delta_9,1)$ and ${\cal H}_{10}(\delta_0,\ldots,\delta_9,1)$, yielding the nodes and weights of the quadrature formula (\ref{quasiquad}). The computations were made by using a standard eigenvalue-finding method and the results were exactly the same ones as those obtained before. 
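The eigenvalue computation just described can be reproduced with any standard linear-algebra package. The sketch below is an illustration rather than the code used above: it assembles a finite CMV-type matrix from the Schur parameters $\delta_1,\ldots,\delta_9$ together with the terminal parameter $u=1$, using the standard factorization into two block-diagonal orthogonal factors (the "product of two tridiagonal matrices" mentioned below); the indexing convention is the common one from the orthogonal-polynomial literature and may differ from ${\cal C}_{10}$ by a relabeling, so only the structural properties (unimodular eigenvalues, weights summing to one) are asserted here.

```python
import numpy as np

def cmv_matrix(params):
    """Finite CMV-type matrix from real parameters a_0,...,a_{n-1},
    |a_j| < 1 for j < n-1 and |a_{n-1}| = 1.  L carries the even-indexed
    2x2 blocks Theta_j = [[a_j, r_j], [r_j, -a_j]], r_j = sqrt(1 - a_j^2),
    and M the odd-indexed ones; the last parameter gives a 1x1 block."""
    n = len(params)
    L, M = np.eye(n), np.eye(n)
    for j, a in enumerate(params):
        target = L if j % 2 == 0 else M
        if j == n - 1:                      # terminal parameter of modulus one
            target[j, j] = a
        else:
            r = np.sqrt(max(1.0 - a * a, 0.0))
            target[j:j + 2, j:j + 2] = [[a, r], [r, -a]]
    return L @ M                            # product of the two factors

n, q = 10, 0.5
deltas = [(-1.0) ** k * q ** (k / 2.0) for k in range(1, n)]  # delta_1,...,delta_9
C = cmv_matrix(deltas + [1.0])             # terminal parameter u = 1
nodes, vecs = np.linalg.eig(C)             # nodes on the unit circle
weights = np.abs(vecs[0, :]) ** 2          # squared first components (unit-norm columns)
```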
Moreover, the results of the computations for $n=11$ are also shown in the following tables: \begin{center} $Table \;6 \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;Table \;7$ \end{center} $$\begin{array}{rl} \hbox{ \small\tabcolsep 5pt \begin{tabular}{|c|c|} \hline & \\ $q=0.1$ & $n=11$ \\ \hline & \\ Nodes & Weights \\ \hline & \\ $\begin{array}{r} -1 \\ -0.814432 \pm 0.580259i \\ -0.355944 \pm 0.934507i \\ 0.199254 \pm 0.979948i \\ 0.68359 \pm 0.729866i \\ 0.963208 \pm 0.268755i \end{array}$ & $\begin{array}{c} 0.03873768 \\ 0.04723999 \\ 0.06942034 \\ 0.09817396 \\ 0.12485883 \\ 0.14093668 \end{array}$ \\ & \\ \hline \end{tabular}} \par\noindent & \hbox{ \small\tabcolsep 5pt \begin{tabular}{|c|c|} \hline & \\ $q=0.25$ & $n=11$ \\ \hline & \\ Nodes & Weights \\ \hline & \\ $\begin{array}{r} -1 \\ -0.778821 \pm 0.627246i \\ -0.301023 \pm 0.953617i \\ 0.243118 \pm 0.969997i \\ 0.70384 \pm 0.710359i \\ 0.965731 \pm 0.259545i \end{array}$ & $\begin{array}{c} 0.01403402 \\ 0.00025263 \\ 0.05384075 \\ 0.09739723 \\ 0.14346753 \\ 0.17367845 \end{array}$ \\ & \\ \hline \end{tabular}} \par\noindent \end{array}$$ \begin{center} $Table \;8 \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;Table \;9$ \end{center} $$\begin{array}{rl} \hbox{ \small\tabcolsep 5pt \begin{tabular}{|c|c|} \hline & \\ $q=0.5$ & $n=11$ \\ \hline & \\ Nodes & Weights \\ \hline & \\ $\begin{array}{r} -1 \\ -0.681725 \pm 0.731608i \\ -0.178603 \pm 0.983921i \\ 0.335240 \pm 0.942133i \\ 0.745084 \pm 0.666970i \\ 0.970795 \pm 0.239909i \end{array}$ & $\begin{array}{c} 0.00105402 \\ 0.00606822 \\ 0.02834884 \\ 0.08185750 \\ 0.16052782 \\ 0.22291291 \end{array}$ \\ & \\ \hline \end{tabular}} \par\noindent & \hbox{ \small\tabcolsep 5pt \begin{tabular}{|c|c|} \hline & \\ $q=0.75$ & $n=11$ \\ \hline & \\ Nodes & Weights \\ \hline & \\ $\begin{array}{r} -1 \\ -0.400355 \pm 0.91636i
\\ 0.0972837 \pm 0.995257i \\ 0.517853 \pm 0.85547i \\ 0.821232 \pm 0.570595i \\ 0.979848 \pm 0.199745i \end{array}$ & $\begin{array}{c} 0.00000331 \\ 0.00047627 \\ 0.00836960 \\ 0.05348059 \\ 0.16811708 \\ 0.29160532 \end{array}$ \\ & \\ \hline \end{tabular}} \par\noindent \end{array}$$ \begin{center} $Table \;10$ \end{center} \hbox{ \small\tabcolsep 5pt \begin{tabular}{|c|c|} \hline & \\ $q=0.9$ & $n=11$ \\ \hline & \\ Nodes & Weights \\ \hline & \\ $\begin{array}{r} -1 \\ 0.149722 \pm 0.988728i \\ 0.498881 \pm 0.866671i \\ 0.746631 \pm 0.665238i \\ 0.909098 \pm 0.416581i \\ 0.989911 \pm 0.141688i \end{array}$ & $\begin{array}{c} 0.00000000058 \\ 0.00005234 \\ 0.00328985 \\ 0.04447931 \\ 0.21730036 \\ 0.46353159 \end{array}$ \\ & \\ \hline \end{tabular}} \par\noindent Here, it should be remarked that when computations for higher values of $n$ are required, special eigenvalue-finding methods should be considered because of error propagation. In this respect, appropriate procedures for products of matrices (see \cite{W}) could be considered due to the factorization of ${\cal C}(\delta)$ as a product of two tridiagonal matrices. Moreover, in \cite{Ca} it is remarked that: \begin{enumerate} \item The computational cost of using techniques for banded matrices is reduced in comparison with techniques for Hessenberg matrices. \item In the ${\cal C}(\delta)$ matrix, the Schur parameters appear in only finitely many elements; hence, every modification of a finite number of Schur parameters induces a finite-dimensional perturbation, something that is not true for the Hessenberg matrix. \end{enumerate} Throughout this section the computations were made using MATHEMATICA software. {\bf Acknowledgements} The authors are very grateful to Professor Olav Nj\aa stad from the University of Trondheim (Norway) for his remarks and useful suggestions. \end{document}
\begin{document} \title[Near optimality of quantized policies] {Continuity of Cost in Borkar Control Topology and Implications on Discrete Space and Time Approximations for Controlled Diffusions under Several Criteria} \author[Somnath Pradhan]{Somnath Pradhan$^\dag$} \address{$^\dag$Department of Mathematics and Statistics, Queen's University, Kingston, ON, Canada} \email{[email protected]} \author[Serdar Y\"{u}ksel]{Serdar Y\"{u}ksel$^{\dag}$} \address{$^\dag$Department of Mathematics and Statistics, Queen's University, Kingston, ON, Canada} \email{[email protected]} \begin{abstract} We first show that the discounted cost, cost up to an exit time, and ergodic cost involving controlled non-degenerate diffusions are continuous on the space of stationary control policies when the policies are given a topology introduced by Borkar [V. S. Borkar, A topology for Markov controls, Applied Mathematics and Optimization 20 (1989), 55–62]. The same applies for finite horizon problems when the control policies are Markov and the topology is revised to include time also as a parameter. We then establish that finite action/piecewise constant stationary policies are dense in the space of stationary Markov policies under this topology, and the same holds for continuous policies. Using the above-mentioned continuity and denseness results, we establish that finite action/piecewise constant policies approximate optimal stationary policies with arbitrary precision. This gives rise to the applicability of many numerical methods such as policy iteration and stochastic learning methods for discounted cost, cost up to an exit time, and ergodic cost optimal control problems in continuous time. For the finite-horizon setup, we additionally establish near optimality of time-discretized policies by an analogous argument. We thus present a unified and concise approach for approximations directly applicable under several commonly adopted cost criteria.
\end{abstract} \keywords{Controlled diffusions, Near optimality, Piecewise constant policy, Finite actions, Hamilton-Jacobi-Bellman equation} \subjclass[2000]{Primary 93E20, 60J60, Secondary 35Q93} \maketitle \section{Introduction} In this paper, we study regularity properties of the induced cost (under several criteria) of a controlled diffusion process with respect to a control topology defined by Borkar \cite{Bor89}, and the implications of these properties on existence and, in particular, approximations for optimal controlled diffusions. We will arrive at very general approximation results for optimal control policies by quantized (finite action/piecewise constant) stationary control policies for a general class of controlled diffusions in the whole space ${\mathds{R}^{d}}$, as well as time-discretizations for the criteria with finite horizons. Such a problem is of significant practical consequence, and accordingly has been studied extensively in a variety of setups. Due to its wide range of applications in domains spanning mathematical finance, large deviations and robust control, vehicle and mobile robot control, and several other fields, stochastic optimal control problems for controlled diffusions have been studied extensively in the literature; see, e.g., \cite{Bor-book}, \cite{HP09-book} (finite horizon cost), \cite{BS86}, \cite{BB96} (discounted cost), \cite{Ari-12}, \cite{AA13}, \cite{BG90}, \cite{BG88I}, \cite{BG90b}, \cite{ABG-book} (ergodic cost), and the references therein\,. Typically, there are two main approaches to deal with these problems. The first one is Bellman's Dynamic Programming Principle (DPP). The DPP approach allows one to characterize the value function of the optimal control problem as the unique solution of the associated Hamilton-Jacobi-Bellman (HJB) equation \cite{Bor-book}, \cite{HP09-book}, \cite{ABG-book}, \cite{Lions-83A}, \cite{Lions-83B}.
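For instance, for the $\alpha$-discounted criterion studied below, the DPP characterizes the value function $V_{\alpha}$ as the solution of the HJB equation (a standard statement, recalled here only as an illustration; see \cite{ABG-book})
$$\min_{\zeta\in\mathbb{U}}\bigl[\mathcal{L}^{\zeta}V_{\alpha}(x) + c(x,\zeta)\bigr] \,=\, \alpha V_{\alpha}(x)\,, \qquad x\in{\mathds{R}^{d}}\,,$$
where $\mathcal{L}^{\zeta}f \,:=\, \trace\bigl(a\nabla^{2} f\bigr) + b(\cdot,\zeta)\cdot\nabla f$ with $a := \frac{1}{2}\upsigma\upsigma^{\mathsf{T}}$, and a measurable minimizing selector defines an optimal stationary Markov policy.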
The second one is the Pontryagin maximum principle (in the stochastic framework) \cite{PBGM62}\,. For numerical methods as well as learning theoretic methods, it is imperative to arrive at rigorous approximation results. In the continuous-time literature, most of the approximation results are built on time-discretization and mainly focus on finite horizon or discounted cost criteria; see, e.g., \cite{KD92}, \cite{KH77}, \cite{KH01}, \cite{KN98A}, \cite{KN2000A}, \cite{BR-02}, \cite{BJ-06}\,, though the ergodic control and control up to an exit time criteria have also been studied \cites{KD92,kushner2014partial}. For finite horizon criteria, a commonly adopted approach of approximating controlled diffusions by a sequence of discrete-time Markov chains via weak convergence methods was studied by Kushner, and by Kushner and Dupuis; see \cite{KD92}, \cite{KH77}, \cite{KH01}\,. These works deal with numerical procedures to construct near optimal control policies for controlled diffusion models by approximating the space of (open-loop adapted) relaxed control policies with those that are piecewise constant, and by considering the weak convergence of approximating probability measures on the path space to the measure on the continuous-time limit. It is shown in \cite{KD92}, \cite{KH77}, \cite{KH01} that if the constructed controlled Markov chain satisfies a certain ``consistency'' condition at the discrete-time sampling instants, then the state process and the corresponding value function asymptotically approximate the continuous-time state process and the associated value function. This approach has been referred to as the {\it weak convergence} approach. In an alternative program, building on finite difference approximations for Bellman's equations utilizing their regularity properties, Krylov \cite{KN98A}, \cite{KN2000A} established the convergence rate of such approximation techniques, where finite difference approximations are studied to arrive at stability results.
In particular, some estimates for the error bound of the finite-difference approximation schemes in the problem of finding viscosity or probabilistic solutions to degenerate Bellman's equations are established. The proof technique is based on mean value theorems for stochastic integrals (as in \cite{KN2001AA}), obtained on the basis of elementary properties of the associated Bellman's equations\,. Also, for controlled non-degenerate diffusion processes, it is shown in \cite{KN99AA} that using policies which are constant on intervals of length $h^2$, one can approximate the value function with errors of order $h^{\frac{1}{3}}$\,. In \cite{BR-02}, \cite{BJ-06}, Barles et al. improved the error bounds obtained in \cite{KN98A}, \cite{KN2000A}, \cite{KN99AA}\,. Borkar \cite{Bor89}, \cite{Bor-book}, for the finite-horizon cost case, pursued an alternative approach to show continuity (when only stationary state feedback policies are considered for finite horizon problems) in his newly introduced topology: he studied the dependence of the strategic measures (on the path space) on the control policy, via regularity properties of generator functions. However, Borkar \cite{Bor89} did not study the implications for approximations. Instead of the approaches adopted in the aforementioned studies, in this paper, utilizing regularity results of the associated Poisson equations via PDE theory, we arrive at continuity results under a relatively weaker set of assumptions on the diffusion coefficients (with the exception of Krylov's method, which is tailored for finite horizon problems). Our approach allows one to arrive at a unification of approximation methods for the finite horizon, infinite horizon discounted, control up to an exit time, and ergodic cost criteria.
Accordingly, our primary approach is to utilize the regularity properties of the partial differential equations directly, first via uniqueness of solutions, and then via regularity properties of the solutions to establish consistency of optimality equations satisfied by the limits of solutions (as policies converge). We will see that one can obtain rather concise, direct, and general results. Additionally, our results can be used to present weaker conditions under which the weak convergence methods can be applicable or when discretized approximations can be shown to be near optimal: for example, it will be a consequence of our analysis that for many of the criteria one can utilize piecewise continuous or continuous control policies for near optimality, which implies \cite[Assumption A2.3, p.~322]{KD92} used for approximations under the ergodic cost criterion (where invariant measures under sampled chains can be shown to converge to the invariant measure of a continuous-time limit as the discretization gets finer). Furthermore, we do not impose uniform boundedness conditions on the drift term or (uniform) Lipschitz continuity conditions, a common assumption in \cite{KD92}, \cite{KH77}, \cite{KH01}, \cite{KN98A}, and \cite{KN2000A}. As noted above, the study of the finite action/piecewise constant approximation problem plays an important role in computing near optimal policies and learning algorithms for controlled diffusions in ${\mathds{R}^{d}}$\,. As pointed out in \cite{RF-15N}, \cite{JPR-19P}, piecewise constant policies are also useful in numerical methods for solving HJB equations. The computational advantage comes from the fact that over the intervals in which the policy is constant, one only has to solve linear PDEs\,. In the continuous-time setup, learning problems become much more involved due to the complex structure of the dynamics and the optimality equation.
One common approach to overcome these difficulties is to construct simpler models, by discretizing the time, space and action spaces, that approximate the original continuous-time model\,. In a recent work \cite{BK-22R}, the authors studied an approximate $Q$-learning algorithm for controlled diffusion models by discretizing the time, space and action spaces. Under mild assumptions, they produced a learning algorithm which converges to some approximately optimal control policy for a discounted cost problem. They assumed that the discretization is uniform in time, but the discretization in state and action can be non-uniform\,. A similar learning algorithm for controlled diffusions is proposed in \cite{RP-97B}; this result is based on finite difference and finite element approximations (as in \cite{KD92})\,. Thus, if one can establish that learning a control model with finitely many control actions is sufficient for approximate optimality, then it will be easier to produce efficient learning algorithms for the original model\,. In the literature on discrete-time Markov decision processes (MDPs), various approximation techniques are available to address such approximation problems, e.g., approximate dynamic programming, approximate value or policy iteration, approximate linear programming, simulation based techniques, neuro-dynamic programming (or reinforcement learning), state aggregation, etc. (see \cite{Bertsekas1975A}, \cite{BVR-06}, \cite{DP-13SIAM}, \cite{SLS-18B} and the references therein)\,. For discrete-time controlled models, the near optimality of quantized policies has been studied extensively in the literature; see, e.g., \cite{KY22A}, \cite{SYL17}, \cite{SSL-16}, \cite{SST-20A}, \cite{SLS-18B}\,. In \cite{SYL17}, \cite{SSL-16}, the authors studied finite state and finite action approximations (respectively) of fully observed MDPs with Borel state and action spaces, for both discounted and average cost criteria\,.
In the compact state space case, an explicit rate of convergence is also established in \cite{SYL17}\,. Later, these results were extended to the partially observed Markov decision process setup in \cite{KY22A}, \cite{SST-20A}; see also the references therein\,. Recently, \cite[Section 4]{arapostathis2021optimality} established the denseness of the performance of deterministic policies with finite action spaces among the performance values attained by the set of all randomized stationary policies. \subsection*{Contributions and main results} In this manuscript our main goal is to study the following approximation problem: for a general class of controlled diffusions in ${\mathds{R}^{d}}$, under what conditions can one approximate the optimal control policies, for both finite and infinite horizon cost criteria, by finite action, piecewise constant, or continuous policies? While time-discretization approximation results for finite horizon problems have been studied extensively by Krylov \cite{KN98A}, \cite{KN2000A}, \cite{KN99AA} (for degenerate diffusions), we will discuss this (for the non-degenerate case) as an application of our results. In order to address these questions, we first show that both finite horizon and infinite horizon (discounted/ergodic) costs are continuous as functions of the control policy under the Borkar topology \cite{Bor89}. We establish these results by exploiting the existence and uniqueness results for the associated Poisson equations (see Theorem~\ref{TContFHC} (finite horizon), Theorem~\ref{T1.1} (discounted), Theorem~\ref{T1.1Exit} (control up to an exit time), and Theorems~\ref{ergodicnearmono1} and \ref{ergodicLyap1} (ergodic)). The analysis of the ergodic cost case is relatively more involved. One of the major issues in analyzing the ergodic cost criterion under the near-monotone hypothesis is the non-uniqueness/restricted uniqueness of the solution of the associated HJB/Poisson equation (see \cite[Example~3.8.3]{ABG-book}, \cite{AA13})\,.
In \cite[Example~3.8.3]{ABG-book}, \cite{AA13} it is shown that under the near-monotone hypothesis the associated HJB/Poisson equation may admit uncountably many solutions\,. In this paper, we show that under the near-monotone hypothesis the associated Poisson equation admits a unique solution in the space of compatible solution pairs (see \cite[Definition~1.1]{AA13})\,. The continuity results obtained in this paper will also be useful in establishing the existence of optimal policies for the corresponding optimal control problems. Next, utilizing Lusin's theorem and Tietze's extension theorem, we show that under the Borkar topology, quantized (finite action/piecewise constant) stationary policies are dense in the space of stationary Markov policies (see Section~\ref{DensePol})\,. Also, following an analogous proof technique, we establish the denseness of the continuous stationary policies in the space of stationary policies (see Theorem~\ref{TDContP})\,. Following and briefly modifying the proof technique for the denseness of stationary policies, now including time as a parameter, we establish that piecewise constant Markov policies are dense in the space of Markov policies under the Borkar topology (see Theorem~\ref{TDPCMP}). Then, using our continuity and denseness results, we deduce that for both finite and infinite horizon cost criteria, the optimal control policies can be approximated by quantized (finite action/piecewise constant) policies with arbitrary precision (see Theorem~\ref{TFiniteOptApprox1} (finite horizon), Theorem~\ref{T1.2ExitCost} (control up to an exit time), and Theorems~\ref{ErgodNearmOPT1} and \ref{TErgoOptApprox1} (infinite horizon)). The remaining part of the paper is organized as follows. In Section~\ref{PD} we provide the problem formulation\,. The continuity of the discounted cost and the cost up to an exit time as functions of the control policy is proved in Section~\ref{CDiscCost}.
A similar continuity result for the ergodic cost is presented in Section~\ref{CErgoCost}, where we establish these results under two types of conditions: stability or near-monotonicity. Section~\ref{DensePol} is devoted to establishing the denseness of finite action/piecewise constant stationary policies under the Borkar topology. Then, using the denseness and continuity results, we show the near optimality of finite models for the cost up to an exit time and the discounted/ergodic cost criteria in Section~\ref{NOptiFinite}. Finally, in Section~\ref{TimeDMarkov}, we analyze the denseness of piecewise constant Markov policies under the Borkar topology and then, exploiting the denseness result, we prove the near optimality of piecewise constant Markov policies for the finite horizon cost criterion. \subsection*{Notation:} \begin{itemize} \item For any set $A\subset\mathds{R}^{d}$, by $\uptau(A)$ we denote the \emph{first exit time} of the process $\{X_{t}\}$ from the set $A\subset\mathds{R}^{d}$, defined by \begin{equation*} \uptau(A) \,:=\, \inf\,\{t>0\,\colon X_{t}\not\in A\}\,. \end{equation*} \item ${\mathscr{B}}_{r}$ denotes the open ball of radius $r$ in $\mathds{R}^{d}$, centered at the origin. \item $\uptau_{r}$, ${\Breve\uptau}_{r}$ denote the first exit times from ${\mathscr{B}}_{r}$, ${\mathscr{B}}_{r}^c$ respectively, i.e., $\uptau_{r}:= \uptau({\mathscr{B}}_{r})$, and ${\Breve\uptau}_{r}:= \uptau({\mathscr{B}}^{c}_{r})$. \item By $\trace S$ we denote the trace of a square matrix $S$. \item For any domain $\mathcal{D}\subset\mathds{R}^{d}$, the space ${\mathcal{C}}^{k}(\mathcal{D})$ (${\mathcal{C}}^{\infty}(\mathcal{D})$), $k\ge 0$, denotes the class of all real-valued functions on $\mathcal{D}$ whose partial derivatives up to and including order $k$ (of any order) exist and are continuous. \item ${\mathcal{C}}_{\mathrm{c}}^k(\mathcal{D})$ denotes the subset of ${\mathcal{C}}^{k}(\mathcal{D})$, $0\le k\le \infty$, consisting of functions that have compact support.
This denotes the space of test functions. \item ${\mathcal{C}}_{b}({\mathds{R}^{d}})$ denotes the class of bounded continuous functions on ${\mathds{R}^{d}}$\,. \item ${\mathcal{C}}^{k}_{0}(\mathcal{D})$ denotes the subspace of ${\mathcal{C}}^{k}(\mathcal{D})$, $0\le k < \infty$, consisting of functions that vanish in $\mathcal{D}^c$. \item ${\mathcal{C}}^{k,r}(\mathcal{D})$ denotes the class of functions whose partial derivatives up to order $k$ are H\"older continuous of order $r$. \item ${L}^{p}(\mathcal{D})$, $p\in[1,\infty)$, denotes the Banach space of (equivalence classes of) measurable functions $f$ satisfying $\int_{\mathcal{D}} \abs{f(x)}^{p}\,\mathrm{d}{x}<\infty$. \item ${\mathscr W}^{k,p}(\mathcal{D})$, $k\ge0$, $p\ge1$, denotes the standard Sobolev space of functions on $\mathcal{D}$ whose weak derivatives up to order $k$ are in ${L}^{p}(\mathcal{D})$, equipped with its natural norm (see \cite{Adams})\,. \item If $\mathcal{X}(Q)$ is a space of real-valued functions on $Q$, $\mathcal{X}_{\mathrm{loc}}(Q)$ consists of all functions $f$ such that $f\varphi\in\mathcal{X}(Q)$ for every $\varphi\in{\mathcal{C}}_{\mathrm{c}}^{\infty}(Q)$. In a similar fashion, we define ${\mathscr W}^{k,p}_{\mathrm{loc}}(\mathcal{D})$. \item For $\mu > 0$, let $e_{\mu}(x) = e^{-\mu\sqrt{1+\abs{x}^2}}$\,, $x\in{\mathds{R}^{d}}$\,. Then $f\in {L}^{p,\mu}((0, T)\times {\mathds{R}^{d}})$ if $fe_{\mu} \in {L}^{p}((0, T)\times {\mathds{R}^{d}})$\,.
Similarly, ${\mathscr W}^{1,2,p,\mu}((0, T)\times {\mathds{R}^{d}}) = \{f\in {L}^{p,\mu}((0, T)\times {\mathds{R}^{d}}) \mid f, \frac{\partial f}{\partial t}, \frac{\partial f}{\partial x_i}, \frac{\partial^2 f}{\partial x_i \partial x_j}\in {L}^{p,\mu}((0, T)\times {\mathds{R}^{d}}) \}$ with the natural norm (see \cite{BL84-book}) \begin{align*} \norm{f}_{{\mathscr W}^{1,2,p,\mu}} =& \norm{\frac{\partial f}{\partial t}}_{{L}^{p,\mu}((0, T)\times {\mathds{R}^{d}})} + \norm{f}_{{L}^{p,\mu}((0, T)\times {\mathds{R}^{d}})} \\ & + \sum_{i}\norm{\frac{\partial f}{\partial x_i}}_{{L}^{p,\mu}((0, T)\times {\mathds{R}^{d}})} + \sum_{i,j}\norm{\frac{\partial^2 f}{\partial x_i \partial x_j}}_{{L}^{p,\mu}((0, T)\times {\mathds{R}^{d}})}\,. \end{align*} Also, we use the convention $\norm{f}_{{\mathscr W}^{1,2,p,\mu}} = \norm{f}_{1,2,p,\mu}$\,. \end{itemize} \section{The Borkar Topology on Control Policies, Cost Criteria, and the Problem Statement}\label{PD} Let $\mathbb{U}$ be a compact metric space and $\mathrm{V}=\mathscr{P}(\mathbb{U})$ be the space of probability measures on $\mathbb{U}$ with the topology of weak convergence. Let $$b : {\mathds{R}^{d}} \times \mathbb{U} \to {\mathds{R}^{d}}, $$ $$ \upsigma : {\mathds{R}^{d}} \to \mathds{R}^{d \times d},\, \upsigma = [\upsigma_{ij}(\cdot)]_{1\leq i,j\leq d},$$ be given functions. We consider a stochastic optimal control problem whose state evolves according to a controlled diffusion process given by the solution of the following stochastic differential equation (SDE) \begin{equation}\label{E1.1} \mathrm{d} X_t \,=\, b(X_t,U_t) \mathrm{d} t + \upsigma(X_t) \mathrm{d} W_t\,, \quad X_0=x\in{\mathds{R}^{d}}, \end{equation} where \begin{itemize} \item $W$ is a $d$-dimensional standard Wiener process, defined on a complete probability space $(\Omega, {\mathfrak{F}}, \mathbb{P})$.
\item We extend the drift term $b : {\mathds{R}^{d}} \times \mathrm{V} \to {\mathds{R}^{d}}$ as follows: \begin{equation*} b (x,\mathrm{v}) = \int_{\mathbb{U}} b(x,\zeta)\mathrm{v}(\mathrm{d} \zeta), \end{equation*} for $\mathrm{v}\in\mathrm{V}$. \item $U$ is a $\mathrm{V}$-valued adapted process satisfying the following non-anticipativity condition: for $s<t\,,$ $W_t - W_s$ is independent of $${\mathfrak{F}}_s := \,\,\mbox{the completion of}\,\,\, \sigma(X_0, U_r, W_r : r\leq s)\,\,\,\mbox{relative to} \,\, ({\mathfrak{F}}, \mathbb{P})\,.$$ \end{itemize} The process $U$ is called an \emph{admissible} control, and the set of all admissible controls is denoted by $\mathfrak U$ (see \cite{BG90}). By a Markov control we mean an admissible control of the form $U_t = v(t,X_t)$ for some Borel measurable function $v:\mathds{R}_+\times{\mathds{R}^{d}}\to\mathrm{V}$. The space of all Markov controls is denoted by $\mathfrak U_{\mathsf{m}}$\,. If the function $v$ is independent of $t$, i.e., $U_t = v(X_t)$, then $U$, or by an abuse of notation $v$ itself, is called a stationary Markov control. The set of all stationary Markov controls is denoted by $\mathfrak U_{\mathsf{sm}}$. To ensure the existence and uniqueness of strong solutions of \cref{E1.1}, we impose the following assumptions on the drift $b$ and the diffusion matrix $\upsigma$\,. \begin{itemize} \item[\hypertarget{A1}{{(A1)}}] \emph{Local Lipschitz continuity:\/} The functions $\upsigma\,=\,\bigl[\upsigma^{ij}\bigr]\colon\mathds{R}^{d}\to\mathds{R}^{d\times d}$ and $b\colon{\mathds{R}^{d}}\times\mathbb{U}\to{\mathds{R}^{d}}$ are locally Lipschitz continuous in $x$ (uniformly with respect to the other variables for $b$).
In other words, for some constant $C_{R}>0$ depending on $R>0$, we have \begin{equation*} \abs{b(x,\zeta) - b(y, \zeta)}^2 + \norm{\upsigma(x) - \upsigma(y)}^2 \,\le\, C_{R}\,\abs{x-y}^2 \end{equation*} for all $x,y\in {\mathscr{B}}_R$ and $\zeta\in\mathbb{U}$, where $\norm{\upsigma}:=\sqrt{\trace(\upsigma\upsigma^{\mathsf{T}})}$\,. We also assume that $b$ is jointly continuous in $(x,\zeta)$. \item[\hypertarget{A2}{{(A2)}}] \emph{Affine growth condition:\/} $b$ and $\upsigma$ satisfy a global growth condition of the form \begin{equation*} \sup_{\zeta\in\mathbb{U}}\, \langle b(x, \zeta),x\rangle^{+} + \norm{\upsigma(x)}^{2} \,\le\,C_0 \bigl(1 + \abs{x}^{2}\bigr) \qquad \forall\, x\in\mathds{R}^{d}, \end{equation*} for some constant $C_0>0$. \item[\hypertarget{A3}{{(A3)}}] \emph{Nondegeneracy:\/} For each $R>0$, it holds that \begin{equation*} \sum_{i,j=1}^{d} a^{ij}(x)z_{i}z_{j} \,\ge\,C^{-1}_{R} \abs{z}^{2} \qquad\forall\, x\in {\mathscr{B}}_{R}\,, \end{equation*} and for all $z=(z_{1},\dotsc,z_{d})^{\mathsf{T}}\in\mathds{R}^{d}$, where $a:= \frac{1}{2}\upsigma \upsigma^{\mathsf{T}}$. \end{itemize} \subsection{The Borkar Topology on Control Policies}\label{B-topo} We now introduce the Borkar topology on stationary and Markov controls \cite{Bor89}. \begin{itemize} \item[•]\emph{Topology of Stationary Policies:} From \cite[Section~2.4]{ABG-book}, we have that the set $\mathfrak U_{\mathsf{sm}}$ is metrizable with a compact metric.
\begin{definition}[Borkar topology of stationary Markov policies]\label{DefBorkarTopology1A} A sequence $v_n\to v$ in $\mathfrak U_{\mathsf{sm}}$ if and only if \begin{equation}\label{BorkarTopology} \lim_{n\to\infty}\int_{{\mathds{R}^{d}}}f(x)\int_{\mathbb{U}}g(x,\zeta)v_{n}(x)(\mathrm{d} \zeta)\mathrm{d} x = \int_{{\mathds{R}^{d}}}f(x)\int_{\mathbb{U}}g(x,\zeta)v(x)(\mathrm{d} \zeta)\mathrm{d} x \end{equation} for all $f\in L^1({\mathds{R}^{d}})\cap L^2({\mathds{R}^{d}})$ and $g\in {\mathcal{C}}_b({\mathds{R}^{d}}\times \mathbb{U})$ (for more details, see \cite[Lemma~2.4.1]{ABG-book}, \cite{Bor89})\,. \end{definition} \item[•]\emph{Topology of Markov Policies:} Replacing $A_n$ by $\hat{A}_n = A_n\times [0,n]$ in the proof of \cite[Theorem~3.1, Lemma~3.1]{Bor89} and following the same arguments, we obtain the following topology on the space of Markov policies $\mathfrak U_{\mathsf{m}}$\,. \begin{definition}[Borkar topology of Markov policies]\label{BKTP1} A sequence $v_n\to v$ in $\mathfrak U_{\mathsf{m}}$ if and only if \begin{equation}\label{BorkarTopologyM} \lim_{n\to\infty}\int_{0}^{\infty}\int_{{\mathds{R}^{d}}}f(t,x)\int_{\mathbb{U}}g(x,t,\zeta)v_{n}(t,x)(\mathrm{d} \zeta)\mathrm{d} x \mathrm{d} t = \int_{0}^{\infty}\int_{{\mathds{R}^{d}}}f(t,x)\int_{\mathbb{U}}g(x,t,\zeta)v(t,x)(\mathrm{d} \zeta)\mathrm{d} x \mathrm{d} t \end{equation} for all $f\in L^1({\mathds{R}^{d}}\times [0, \infty))\cap L^2({\mathds{R}^{d}}\times [0, \infty))$ and $g\in {\mathcal{C}}_b({\mathds{R}^{d}}\times [0, \infty)\times \mathbb{U})$\,. \end{definition} \end{itemize} It is well known that under hypotheses \hyperlink{A1}{{(A1)}}--\hyperlink{A3}{{(A3)}}, for any admissible control \cref{E1.1} has a unique weak solution \cite[Theorem~2.2.11]{ABG-book}, and under any stationary Markov strategy \cref{E1.1} has a unique strong solution which is a strong Feller (therefore strong Markov) process \cite[Theorem~2.2.12]{ABG-book}.
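As a toy numerical illustration of Definition~\ref{DefBorkarTopology1A} (with hypothetical choices of $f$, $g$, and the policies, not taken from the results of this paper), take $d=1$, $\mathbb{U}=[-1,1]$, and the Dirac policies $v_n(x)=\delta_{u_n(x)}$ with $u_n(x)=\tanh(nx)$; since $u_n\to \operatorname{sign}$ pointwise a.e., $v_n\to \delta_{\operatorname{sign}(\cdot)}$ in the Borkar topology, and the pairing in \eqref{BorkarTopology} can be evaluated on a grid:

```python
import numpy as np

# Hypothetical test functions: f in L^1 and L^2, g bounded continuous on R x [-1, 1].
f = lambda x: np.exp(-x * x)
g = lambda x, u: x * u / (1.0 + x * x)      # |g| <= 1/2, jointly continuous

x = np.linspace(-10.0, 10.0, 200001)
dx = x[1] - x[0]

def pairing(u_vals):
    """Grid approximation of int f(x) g(x, u(x)) dx for the Dirac policy at u(x)."""
    return np.sum(f(x) * g(x, u_vals)) * dx

limit = pairing(np.sign(x))                  # pairing against the limit policy
errs = [abs(pairing(np.tanh(n * x)) - limit) for n in (1, 10, 100)]
# errs shrinks toward zero as n grows, illustrating convergence of the pairing
```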
\subsection{Cost Criteria} Let $c\colon{\mathds{R}^{d}}\times\mathbb{U} \to \mathds{R}_+$ be the \emph{running cost} function. We assume that $c$ is bounded, jointly continuous in $(x, \zeta)$, and locally Lipschitz continuous in its first argument uniformly with respect to $\zeta\in\mathbb{U}$. We extend $c\colon{\mathds{R}^{d}}\times\mathrm{V} \to\mathds{R}_+$ as follows: for $\mathrm{v} \in \mathrm{V}$, \begin{equation*} c(x,\mathrm{v}) := \int_{\mathbb{U}}c(x,\zeta)\mathrm{v}(\mathrm{d}\zeta)\,. \end{equation*} In this article, we consider the problems of minimizing the finite horizon cost, the $\alpha$-discounted cost, the ergodic cost, and the cost up to an exit time: \subsubsection{Finite Horizon Cost} For $U\in \mathfrak U$, the associated \emph{finite horizon cost} is given by \begin{equation}\label{FiniteCost1} {\mathcal{J}}_{T}(x, U) = \mathrm{E}_x^{U}\left[\int_0^{T} c(X_s, U_s) \mathrm{d}{s} + H(X_T)\right]\,, \end{equation} and the optimal value is defined as \begin{equation}\label{FiniteCost1Opt} {\mathcal{J}}_{T}^*(x) \,:=\, \inf_{U\in \mathfrak U}{\mathcal{J}}_{T}(x, U)\,. \end{equation} Then a policy $U^*\in \mathfrak U$ is said to be optimal if we have \begin{equation}\label{FiniteCost1Opt1} {\mathcal{J}}_{T}(x, U^*) = {\mathcal{J}}_{T}^*(x)\,. \end{equation} \subsubsection{Discounted Cost Criterion} For $U \in\mathfrak U$, the associated \emph{$\alpha$-discounted cost} is given by \begin{equation}\label{EDiscost} {\mathcal{J}}_{\alpha}^{U}(x, c) \,:=\, \mathrm{E}_x^{U} \left[\int_0^{\infty} e^{-\alpha s} c(X_s, U_s) \mathrm{d} s\right],\quad x\in{\mathds{R}^{d}}\,, \end{equation} where $\alpha > 0$ is the discount factor, $X(\cdot)$ is the solution of \cref{E1.1} corresponding to $U\in\mathfrak U$, and $\mathrm{E}_x^{U}$ is the expectation with respect to the law of the process $X(\cdot)$ with initial condition $x$. The controller tries to minimize \cref{EDiscost} over the admissible policies $\mathfrak U$\,.
Thus, a policy $U^{*}\in \mathfrak U$ is said to be optimal if for all $x\in {\mathds{R}^{d}}$ \begin{equation}\label{OPDcost} {\mathcal{J}}_{\alpha}^{U^*}(x, c) = \inf_{U\in \mathfrak U}{\mathcal{J}}_{\alpha}^{U}(x, c) \,\;(\,=:\, V_{\alpha}(x))\,, \end{equation} where $V_{\alpha}(x)$ is called the optimal value. \subsubsection{Ergodic Cost Criterion} For $U\in \mathfrak U$, the associated \emph{ergodic cost} is given by \begin{equation}\label{ErgCost1} {\mathscr{E}}_{x}(c, U) = \limsup_{T\to \infty}\frac{1}{T}\mathrm{e}xp_x^{U}\left[\int_0^{T} c(X_s, U_s) \mathrm{d}{s}\right]\,, \end{equation} and the optimal value is defined as \begin{equation}\label{ErgCost1Opt} {\mathscr{E}}^*(c) \,:=\, \inf_{x\in{\mathds{R}^{d}}}\inf_{U\in \mathfrak U}{\mathscr{E}}_{x}(c, U)\,. \end{equation} A policy $U^*\in \mathfrak U$ is said to be optimal if \begin{equation}\label{ErgCost1Opt1} {\mathscr{E}}_{x}(c, U^*) = {\mathscr{E}}^*(c)\,. \end{equation} \subsubsection{Control up to an Exit Time}\label{exitTimeSection} For each $U\in\mathfrak U$ the associated cost is given by \begin{equation*} \hat{{\mathcal{J}}}_{e}^{U}(x) \,:= \, \mathrm{e}xp_x^{U} \left[\int_0^{\tau(O)} e^{-\int_{0}^{t}\delta(X_s, U_s) \mathrm{d} s} c(X_t, U_t) \mathrm{d} t + e^{-\int_{0}^{\tau(O)}\delta(X_s, U_s) \mathrm{d} s}h(X_{\tau(O)})\right],\quad x\in{\mathds{R}^{d}}\,, \end{equation*} where $O\subset {\mathds{R}^{d}}$ is a smooth bounded domain, $\tau(O) \,:=\, \inf\{t \geq 0: X_t\notin O\}$, $\delta(\cdot, \cdot): \bar{O}\times\mathbb{U}\to [0, \infty)$ is the discount function and $h:\bar{O}\to \mathds{R}_+$ is the terminal cost function. The optimal value is defined as \[\hat{{\mathcal{J}}}_{e}^{*}(x)=\inf_{U\in \mathfrak U}\hat{{\mathcal{J}}}_{e}^{U}(x)\,.\] We assume that $\delta\in {\mathcal{C}}(\bar{O}\times \mathbb{U})$ and $h\in{\mathscr W}^{2,p}(O)$.
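For intuition, the $\alpha$-discounted cost \cref{EDiscost} under a stationary Markov policy can be estimated by straightforward Monte Carlo simulation. The sketch below is our illustration only: the one-dimensional model $b(x,\zeta) = -x + \zeta$, unit diffusion coefficient, the bounded running cost $c(x,\zeta) = \tanh(x^2+\zeta^2)$, and the policy $v(x) = \max(-1, \min(-x/2, 1))$ with $\mathbb{U} = [-1,1]$ are assumed data, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy model (illustrative only): b(x,u) = -x + u, sigma = 1,
# bounded running cost c(x,u) = tanh(x^2 + u^2), U = [-1, 1],
# stationary Markov policy v(x) = clip(-x/2, -1, 1).
def b(x, u): return -x + u
def c(x, u): return np.tanh(x**2 + u**2)
def v(x): return np.clip(-0.5 * x, -1.0, 1.0)

def discounted_cost(x0, alpha=1.0, T=15.0, dt=1e-2, n_paths=2000):
    """Euler-Maruyama Monte Carlo estimate of the alpha-discounted cost."""
    x = np.full(n_paths, x0)
    total = np.zeros(n_paths)
    disc = 1.0                                 # running value of e^{-alpha*s}
    for _ in range(int(T / dt)):
        u = v(x)
        total += disc * c(x, u) * dt           # accumulate e^{-alpha s} c ds
        x = x + b(x, u) * dt + np.sqrt(dt) * rng.standard_normal(n_paths)
        disc *= np.exp(-alpha * dt)
    return total.mean()
```

Since $\norm{c}_\infty \le 1$ in this toy model, any such estimate must respect the elementary bound ${\mathcal{J}}_{\alpha}^{v}(x,c) \leq \norm{c}_{\infty}/\alpha$, the same bound used repeatedly in the continuity proofs below.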
\subsection{Problems Studied} The main purpose of this manuscript is to address the following problems: \begin{itemize} \item[•]\textbf{Continuity of finite and infinite horizon costs.} Suppose $\{v_n\}_{n\in \mathds{N}}$ is a sequence of control policies which converges to another control policy $v$ in some sense (in particular, under the Borkar topology, see Subsection~\ref{B-topo}). Does this imply that \begin{itemize} \item[•]\emph{for the finite horizon cost:} ${\mathcal{J}}_{T}(x, v_n)\to {\mathcal{J}}_{T}(x, v)$\,? \item[•]\emph{for the discounted cost:} ${\mathcal{J}}_{\alpha}^{v_n}(x, c)\to {\mathcal{J}}_{\alpha}^{v}(x, c)$\,? \item[•]\emph{for the ergodic cost:} ${\mathscr{E}}_{x}(c, v_n)\to {\mathscr{E}}_{x}(c, v)$\,? \item[•] \emph{for the cost up to an exit time:} $\hat{{\mathcal{J}}}_{e}^{v_n}(x) \to \hat{{\mathcal{J}}}_{e}^{v}(x)$\,? \end{itemize} \item[•]\textbf{Near optimality of quantized policies.} For any given $\epsilon > 0$, is it possible to construct a quantized (finite action/piecewise constant) policy $v_{\epsilon}$ such that \begin{itemize} \item[•]\emph{for the finite horizon cost:} ${\mathcal{J}}_{T}(x, v_{\epsilon})\leq {\mathcal{J}}_{T}^*(x) + \epsilon$\,? \item[•]\emph{for the discounted cost:} ${\mathcal{J}}_{\alpha}^{v_{\epsilon}}(x, c)\leq V_{\alpha}(x) + \epsilon$\,? \item[•]\emph{for the ergodic cost:} ${\mathscr{E}}_{x}(c, v_{\epsilon})\leq {\mathscr{E}}^*(c) + \epsilon$\,? \item[•] \emph{for the cost up to an exit time:} $\hat{{\mathcal{J}}}_{e}^{v_{\epsilon}}(x) \leq \hat{{\mathcal{J}}}_{e}^{*}(x) + \epsilon$\,? \end{itemize} \end{itemize} In this manuscript, we show that under a mild set of assumptions the answers to the above questions are affirmative. For the finite horizon case, we also study time-discretization approximations as a further implication of our analysis. Let us introduce a parametric family of elliptic operators, which will be useful in our analysis\,.
With $\zeta\in \mathbb{U}$ treated as a parameter, we define a family of operators ${\mathscr{L}}_{\zeta}$ mapping ${\mathcal{C}}^2({\mathds{R}^{d}})$ to ${\mathcal{C}}({\mathds{R}^{d}})$ by \begin{equation}\label{E-cI} {\mathscr{L}}_{\zeta} f(x) \,:=\, \trace\bigl(a(x)\nabla^2 f(x)\bigr) + \,b(x,\zeta)\cdot \nabla f(x)\,, \end{equation} where $f\in {\mathcal{C}}^2({\mathds{R}^{d}})\cap{\mathcal{C}}_b({\mathds{R}^{d}})$\,, and for $\mathrm{v} \in\mathrm{V}$ we extend ${\mathscr{L}}_{\zeta}$ as follows: \begin{equation}\label{EExI} {\mathscr{L}}_\mathrm{v} f(x) \,:=\, \int_{\mathbb{U}} {\mathscr{L}}_{\zeta} f(x)\mathrm{v}(\mathrm{d} \zeta)\,. \end{equation} Also, for each $v \in\mathfrak U_{\mathsf{sm}}$, we define \begin{equation}\label{Efixstra} {\mathscr{L}}_{v} f(x) \,:=\, \trace\bigl(a(x)\nabla^2 f(x)\bigr) + b(x,v(x))\cdot\nabla f(x)\,. \end{equation} \section{Continuity of Expected Cost under Various Criteria in Control Policies under the Borkar Topology} \subsection{Continuity for Discounted Cost/Cost up to an Exit Time}\label{CDiscCost} Since the proof techniques are similar, in this section we analyze the continuity of both the discounted cost and the cost up to an exit time with respect to the policies in the space of stationary policies under the Borkar topology (see Definition~\ref{DefBorkarTopology1A}), i.e., we show that the maps $v\mapsto {\mathcal{J}}_{\alpha}^{v}$ and $v\mapsto \hat{{\mathcal{J}}}_{e}^{v}$ are continuous on $\mathfrak U_{\mathsf{sm}}$\,. \subsubsection{Continuity of Discounted Cost} We first prove the continuity of the discounted cost as a function of the control policies\,. \begin{theorem}\label{T1.1} Suppose Assumptions (A1)--(A3) hold. Then the map $v\mapsto {\mathcal{J}}_{\alpha}^{v}(x, c)$ from $\mathfrak U_{\mathsf{sm}}$ to $\mathds{R}$ is continuous. \end{theorem} \begin{proof} Let $\{v_n\}_n$ be a sequence in $\mathfrak U_{\mathsf{sm}}$ such that $v_n\to v$ in $\mathfrak U_{\mathsf{sm}}$\,.
It is known that ${\mathcal{J}}_{\alpha}^{v_n}(x, c)$ is a solution to the Poisson equation (see \cite[Lemma~A.3.7]{ABG-book}) \begin{equation}\label{ET1.1A} {\mathscr{L}}_{v_n}{\mathcal{J}}_{\alpha}^{v_n}(x, c) - \alpha {\mathcal{J}}_{\alpha}^{v_n}(x, c) = - c(x, v_n(x))\,. \end{equation} Now, by standard elliptic p.d.e.\ estimates as in \cite[Theorem~9.11]{GilTru}, for any $p\geq d+1$ and $R >0$, we deduce that \begin{equation}\label{ET1.1B} \norm{{\mathcal{J}}_{\alpha}^{v_n}(x, c)}_{{\mathscr W}^{2,p}({\mathscr{B}}_R)} \,\le\, \kappa_1\bigl(\norm{{\mathcal{J}}_{\alpha}^{v_n}(x, c)}_{L^p({\mathscr{B}}_{2R})} + \norm{c(x, v_n(x))}_{L^p({\mathscr{B}}_{2R})}\bigr)\,, \end{equation} for some positive constant $\kappa_1$ which is independent of $n$\,. Since \begin{equation*} \norm{c}_{\infty} \,:=\, \sup_{(x,u)\in{\mathds{R}^{d}}\times\mathbb{U}} c(x,u) \leq M \, <\,\infty \,, \quad \text{and}\quad {\mathcal{J}}_{\alpha}^{v_n}(x, c) \leq \frac{\norm{c}_{\infty}}{\alpha}\,, \end{equation*} from \cref{ET1.1B} we obtain \begin{equation}\label{ETC1.3B} \norm{{\mathcal{J}}_{\alpha}^{v_n}(x, c)}_{{\mathscr W}^{2,p}({\mathscr{B}}_R)} \,\le\, \kappa_1 M\bigl(\frac{|{\mathscr{B}}_{2R}|^{\frac{1}{p}}}{\alpha} + |{\mathscr{B}}_{2R}|^{\frac{1}{p}}\bigr)\,. \end{equation} We know that for $1< p < \infty$ the space ${\mathscr W}^{2,p}({\mathscr{B}}_R)$ is reflexive and separable; hence, as a corollary of the Banach--Alaoglu theorem, every bounded sequence in ${\mathscr W}^{2,p}({\mathscr{B}}_R)$ has a weakly convergent subsequence (see \cite[Theorem~3.18]{HB-book}). Also, we know that for $p\geq d+1$ the space ${\mathscr W}^{2,p}({\mathscr{B}}_R)$ is compactly embedded in ${\mathcal{C}}^{1, \beta}(\bar{{\mathscr{B}}}_R)$\,, where $\beta < 1 - \frac{d}{p}$ (see \cite[Theorem~A.2.15 (2b)]{ABG-book}), which implies that every weakly convergent sequence in ${\mathscr W}^{2,p}({\mathscr{B}}_R)$ converges strongly in ${\mathcal{C}}^{1, \beta}(\bar{{\mathscr{B}}}_R)$\,.
Thus, in view of the estimate \cref{ETC1.3B}, by a standard diagonalization argument and the Banach--Alaoglu theorem, we can extract a subsequence $\{{\mathcal{J}}_{\alpha}^{v_{n_k}}(\cdot, c)\}$ such that for some $V_{\alpha}^*\in {\mathscr W}l^{2,p}({\mathds{R}^{d}})$ \begin{equation}\label{ET1.1C} \begin{cases} {\mathcal{J}}_{\alpha}^{v_{n_k}}(x, c)\to & V_{\alpha}^*\quad \text{in}\quad {\mathscr W}l^{2,p}({\mathds{R}^{d}})\quad\text{(weakly)}\\ {\mathcal{J}}_{\alpha}^{v_{n_k}}(x, c)\to & V_{\alpha}^*\quad \text{in}\quad {\mathcal{C}}^{1, \beta}_{loc}({\mathds{R}^{d}}) \quad\text{(strongly)}\,. \end{cases} \end{equation} In the following we will show that $V_{\alpha}^* = {\mathcal{J}}_{\alpha}^{v}(\cdot, c)$. Note that \begin{align*} b(x,v_{n_k}(x))\cdot \nabla {\mathcal{J}}_{\alpha}^{v_{n_k}}(x, c) - b(x,v(x))\cdot \nabla V_{\alpha}^*(x) = & b(x,v_{n_k}(x))\cdot \nabla \left({\mathcal{J}}_{\alpha}^{v_{n_k}}(x, c) - V_{\alpha}^*\right)(x) \\ & + \left(b(x,v_{n_k}(x)) - b(x,v(x))\right)\cdot \nabla V_{\alpha}^*(x)\,. \end{align*} Since ${\mathcal{J}}_{\alpha}^{v_{n_k}}(x, c)\to V_{\alpha}^*$ in ${\mathcal{C}}^{1, \beta}_{loc}({\mathds{R}^{d}})$ and $b$ is locally bounded, on any compact set $b(x,v_{n_k}(x))\cdot \nabla \left({\mathcal{J}}_{\alpha}^{v_{n_k}}(x, c) - V_{\alpha}^*\right)(x)\to 0$ strongly. Also, since $\nabla V_{\alpha}^*$ is continuous (as $V_{\alpha}^*\in {\mathcal{C}}^{1, \beta}_{loc}({\mathds{R}^{d}})$), in view of the topology of $\mathfrak U_{\mathsf{sm}}$, for any $\phi\in{\mathcal{C}}_c^{\infty}({\mathds{R}^{d}})$ we have \begin{equation*} \lim_{k\to\infty}\int_{{\mathds{R}^{d}}}b(x,v_{n_k}(x))\cdot \nabla V_{\alpha}^*(x)\phi(x)\mathrm{d} x = \int_{{\mathds{R}^{d}}}b(x,v(x))\cdot \nabla V_{\alpha}^*(x)\phi(x)\mathrm{d} x\,. \end{equation*} Hence, as $k\to \infty$, we obtain \begin{equation}\label{ET1.1D} b(x,v_{n_k}(x))\cdot \nabla {\mathcal{J}}_{\alpha}^{v_{n_k}}(x, c) + c(x, v_{n_k}(x)) \to b(x,v(x))\cdot \nabla V_{\alpha}^*(x) + c(x, v(x))\quad\text{weakly}\,.
\end{equation} Now, multiplying by a test function $\phi\in {\mathcal{C}}_{c}^{\infty}({\mathds{R}^{d}})$, from \cref{ET1.1A} it follows that \begin{align*} \int_{{\mathds{R}^{d}}}\trace\bigl(a(x)\nabla^2 {\mathcal{J}}_{\alpha}^{v_{n_k}}(x, c)\bigr)\phi(x)\mathrm{d} x + \int_{{\mathds{R}^{d}}}\{b(x,v_{n_k}(x))\cdot \nabla {\mathcal{J}}_{\alpha}^{v_{n_k}}(x, c) + & c(x, v_{n_k}(x))\}\phi(x)\mathrm{d} x \\ &= \alpha\int_{{\mathds{R}^{d}}} {\mathcal{J}}_{\alpha}^{v_{n_k}}(x, c)\phi(x)\mathrm{d} x\,. \end{align*} Hence, using \cref{ET1.1C} and \cref{ET1.1D}, and letting $k\to\infty$ (in the sense of distributions), we obtain \begin{equation}\label{ET1.1E} \int_{{\mathds{R}^{d}}}\trace\bigl(a(x)\nabla^2 V_{\alpha}^*(x)\bigr)\phi(x)\mathrm{d} x + \int_{{\mathds{R}^{d}}} \{b(x,v(x))\cdot \nabla V_{\alpha}^*(x) + c(x, v(x))\}\phi(x)\mathrm{d} x = \alpha\int_{{\mathds{R}^{d}}} V_{\alpha}^*(x)\phi(x)\mathrm{d} x\,. \end{equation} Since $\phi\in {\mathcal{C}}_{c}^{\infty}({\mathds{R}^{d}})$ is arbitrary and $V_{\alpha}^*\in {\mathscr W}l^{2,p}({\mathds{R}^{d}})$, from \cref{ET1.1E} we deduce that the function $V_{\alpha}^*\in {\mathscr W}l^{2,p}({\mathds{R}^{d}})\cap {\mathcal{C}}_{b}({\mathds{R}^{d}})$ satisfies \begin{equation}\label{ET1.1F} \trace\bigl(a(x)\nabla^2 V_{\alpha}^{*}(x)\bigr) + b(x,v(x))\cdot \nabla V_{\alpha}^{*}(x) + c(x, v(x)) = \alpha V_{\alpha}^{*}(x)\,. \end{equation} Let $X$ be the solution of the SDE \cref{E1.1} corresponding to $v$. Now, applying the It$\hat{\rm o}$-Krylov formula, we obtain \begin{align*} & \mathrm{e}xp_x^{v}\left[ e^{-\alpha T}V_{\alpha}^{*}(X_{T})\right] - V_{\alpha}^{*}(x)\nonumber\\ & \,=\,\mathrm{e}xp_x^{v}\left[\int_0^{T} e^{-\alpha s}\{\trace\bigl(a(X_s)\nabla^2 V_{\alpha}^{*}(X_s)\bigr) + b(X_s, v(X_s))\cdot \nabla V_{\alpha}^{*}(X_s) - \alpha V_{\alpha}^{*}(X_s)\} \mathrm{d}{s}\right] \,.
\end{align*} Hence, by \cref{ET1.1F}, we get \begin{align}\label{ET1.1G} \mathrm{e}xp_x^{v}\left[ e^{-\alpha T} V_{\alpha}^{*}(X_{T})\right] - V_{\alpha}^{*}(x) \,=\,- \mathrm{e}xp_x^{v}\left[\int_0^{T} e^{-\alpha s}c(X_s, v(X_s))\mathrm{d}{s}\right] \,. \end{align} Since $V_{\alpha}^{*}$ is bounded and $$\mathrm{e}xp_x^{v}\left[ e^{-\alpha T} V_{\alpha}^{*}(X_{T})\right] = e^{-\alpha T}\mathrm{e}xp_x^{v}\left[ V_{\alpha}^{*}(X_{T})\right],$$ it follows that \begin{equation*} \lim_{T\to\infty}\mathrm{e}xp_x^{v}\left[ e^{-\alpha T} V_{\alpha}^{*}(X_{T})\right] = 0\,. \end{equation*} Thus, letting $T \to \infty$ in \cref{ET1.1G} and using the monotone convergence theorem, we obtain \begin{align}\label{ET1.1H} V_{\alpha}^{*}(x) \,=\, \mathrm{e}xp_x^{v}\left[\int_0^{\infty} e^{-\alpha s}c(X_s, v(X_s)) \mathrm{d}{s}\right] = {\mathcal{J}}_{\alpha}^{v}(x, c)\,. \end{align} This completes the proof. \end{proof} \subsubsection{Continuity of Cost up to an Exit Time} Following the proof technique of Theorem~\ref{T1.1}, we now show that the cost up to an exit time (defined in Subsection~\ref{exitTimeSection}) is continuous as a function of the control policies\,. \begin{theorem}\label{T1.1Exit} Suppose Assumptions (A1)--(A3) hold. Then the map $v\mapsto \hat{{\mathcal{J}}}_{e}^{v}(x)$ from $\mathfrak U_{\mathsf{sm}}$ to $\mathds{R}$ is continuous\,. \end{theorem} \begin{proof} Let $\{v_n\}_n$ be a sequence in $\mathfrak U_{\mathsf{sm}}$ such that $v_n\to v$ in $\mathfrak U_{\mathsf{sm}}$\,. From \cite[Theorem~9.15]{GilTru}, it follows that there exists a unique function $\psi_n\in {\mathscr W}^{2,p}(O)$ satisfying the Poisson equation \begin{equation}\label{T1.1ExitA} {\mathscr{L}}_{v_n}\psi_{n}(x) - \delta(x, v_n(x)) \psi_{n}(x) + c(x, v_n(x)) = 0\quad \text{with}\quad \psi_n = h\,\,\, \text{on}\,\,\partial{O}\,.
\end{equation} Applying the It$\hat{\rm o}$-Krylov formula, one can show that $\psi_n(x) = \hat{{\mathcal{J}}}_{e}^{v_n}(x)$ (this stochastic representation also ensures the uniqueness of the solution of \cref{T1.1ExitA})\,. Now, following the argument in Theorem~\ref{T1.1}, by standard elliptic p.d.e.\ estimates \cite[Theorem~9.11]{GilTru}, we deduce that there exists $\psi\in {\mathscr W}^{2,p}(O)$ such that, along a subsequence, $\psi_n \to \psi$ weakly in ${\mathscr W}^{2,p}(O)$\,. Thus, closely following the proof of Theorem~\ref{T1.1} and letting $n\to \infty$\,, from \cref{T1.1ExitA} it follows that \begin{equation}\label{T1.1ExitB} {\mathscr{L}}_{v}\psi(x) - \delta(x, v(x)) \psi(x) + c(x, v(x)) = 0\quad \text{with}\quad \psi = h\,\,\, \text{on}\,\,\partial{O}\,. \end{equation} Again, by the It$\hat{\rm o}$-Krylov formula, using \cref{T1.1ExitB} we deduce that $\psi(x) = \hat{{\mathcal{J}}}_{e}^{v}(x)$\,. This completes the proof of the theorem\,. \end{proof} \subsection{Continuity for Ergodic Cost}\label{CErgoCost} In this section we study the continuity of the ergodic cost with respect to policies under the Borkar topology in the space of stationary Markov policies. We study this problem under two sets of assumptions: the first is the so-called near-monotonicity assumption on the running cost function, and the other is a Lyapunov stability assumption on the system. Our proof strategies will be slightly different under these two setups: in the former we build on regularity properties of invariant probability measures, while in the latter we build more directly on regularity properties of solutions to HJB equations\,. \subsubsection{Under a near-monotonicity assumption}\label{NearMonotone} We assume that the running cost function $c$ is near-monotone with respect to ${\mathscr{E}}^*(c)$, i.e., \begin{itemize} \item[\hypertarget{A4}{{(A4)}}] It holds that \begin{equation}\label{ENearmonot} \liminf_{\norm{x}\to\infty}\inf_{\zeta\in \mathbb{U}} c(x,\zeta) > {\mathscr{E}}^*(c)\,.
\end{equation} \end{itemize} This condition penalizes the escape of probability mass to infinity. Since our running cost $c$ is bounded, it is easy to see that ${\mathscr{E}}^*(c) \leq \norm{c}_{\infty}$\,. It is known that under \cref{ENearmonot} an optimal control exists in the space of stable stationary Markov controls (see \cite[Theorem~3.4.5]{ABG-book}). First, we prove that for each stable stationary Markov policy $v\in \mathfrak U_{\mathsf{sm}}$ the associated Poisson equation admits a unique solution in a certain function space. This uniqueness result will be useful in establishing the continuity and near optimality of quantized policies. For the following supporting result, we closely follow \cite{ABG-book}\,. \begin{theorem}\label{NearmonotPoisso} Suppose that Assumptions (A1)--(A4) hold. Let $v\in\mathfrak U_{\mathsf{sm}}$ be a stable control with unique invariant measure $\eta_{v}$, such that \begin{equation}\label{ENearmonotPoisso1} \liminf_{\norm{x}\to\infty}\inf_{\zeta\in \mathbb{U}} c(x,\zeta) > \inf_{x\in{\mathds{R}^{d}}}{\mathscr{E}}_x(c, v)\,. \end{equation} Then, there exists a unique pair $(V^v, \rho_v)\in {\mathscr W}l^{2,p}({\mathds{R}^{d}})\times \mathds{R}$, \, $1< p < \infty$, with $V^v(0) = 0$, $\inf_{{\mathds{R}^{d}}} V^v > -\infty$ and $\rho_v = \int_{{\mathds{R}^{d}}}\int_{\mathbb{U}} c(x, u)v(x)(\mathrm{d}{u})\eta_{v}(\mathrm{d}{x})$, satisfying \begin{equation}\label{EErgonearPoisso1A} \rho_v = {\mathscr{L}}_{v}V^v(x) + c(x, v(x))\,. \end{equation} Moreover, we have \begin{itemize} \item[(i)]$\rho_v = \inf_{{\mathds{R}^{d}}}{\mathscr{E}}_x(c, v)$\,. \item[(ii)] for all $x\in {\mathds{R}^{d}}$ \begin{equation}\label{EErgonearPoisso1B} V^v(x) \,=\, \lim_{r\downarrow 0}\mathrm{e}xp_{x}^v\left[\int_{0}^{{\Breve\uptau}_{r}} \left( c(X_t, v(X_t)) - \rho_v\right)\mathrm{d} t\right]\,.
\end{equation} \end{itemize} \end{theorem} \begin{proof} Since $c$ is bounded, we have $\left(\rho^{v} \,:=\,\right) \int_{{\mathds{R}^{d}}}\int_{\mathbb{U}} c(x, u)v(x)(\mathrm{d}{u})\eta_{v}(\mathrm{d}{x}) \leq \norm{c}_{\infty}$\,. In view of \cref{ENearmonotPoisso1}, by writing $\rho^v =\alpha \int {\mathcal{J}}^v_{\alpha}(x, c)\eta_{v}(\mathrm{d} x)$ from \cite[Lemma~3.6.1]{ABG-book}, we have \begin{equation}\label{EErgonearPoisso1C} \inf_{\kappa(\rho^v)}{\mathcal{J}}_{\alpha}^{v}(x, c) = \inf_{{\mathds{R}^{d}}}{\mathcal{J}}_{\alpha}^{v}(x, c) \leq \frac{\rho^{v}}{\alpha}\,, \end{equation} where $\kappa(\rho^v) \,:=\, \{x\in {\mathds{R}^{d}}\mid \min_{\zeta\in\mathbb{U}}c(x,\zeta) \leq \rho^v\}$ and ${\mathcal{J}}_{\alpha}^{v}(x, c)$ is the $\alpha$-discounted cost defined as in \cref{EDiscost}. As earlier, ${\mathcal{J}}_{\alpha}^{v}(x, c)$ is a solution to the Poisson equation (see \cite[Lemma~A.3.7]{ABG-book}) \begin{equation}\label{EErgonearPoisso1D} {\mathscr{L}}_{v}{\mathcal{J}}_{\alpha}^{v}(x, c) - \alpha {\mathcal{J}}_{\alpha}^{v}(x, c) = - c(x, v(x))\,. \end{equation} Since $x\mapsto\min_{\zeta\in\mathbb{U}}c(x,\zeta)$ is continuous, the set $\kappa(\rho^v)$ is closed, and \cref{ENearmonotPoisso1} implies that it is bounded. Therefore $\kappa(\rho^v)$ is compact, and hence for some $R_0>0$ we have $\kappa(\rho^v)\subset {\mathscr{B}}_{R_{0}}$\,. This gives us $\inf_{{\mathscr{B}}_{R_{0}}} {\mathcal{J}}_{\alpha}^{v}(x, c) = \inf_{{\mathds{R}^{d}}}{\mathcal{J}}_{\alpha}^{v}(x, c)$\,.
Thus, following the arguments in \cite[Lemma~3.6.3]{ABG-book}, we deduce that for each $R> R_0$ there exist constants $\Tilde{C}_{2}(R)$ and $\Tilde{C}_{2}(R, p)$, depending only on $d$, $R$ and $R_0$, such that \begin{equation}\label{EErgonearPoisso1E} \left(\osc_{{\mathscr{B}}_{2R}} {\mathcal{J}}_{\alpha}^{v}(x, c) := \right)\sup_{{\mathscr{B}}_{2R}} {\mathcal{J}}_{\alpha}^{v}(x, c) - \inf_{{\mathscr{B}}_{2R}} {\mathcal{J}}_{\alpha}^{v}(x, c) \leq \Tilde{C}_{2}(R)\left(1 + \alpha\inf_{{\mathscr{B}}_{R_0}}{\mathcal{J}}_{\alpha}^{v}(x, c) \right)\,, \end{equation} \begin{equation}\label{EErgonearPoisso1F} \norm{{\mathcal{J}}_{\alpha}^{v}(\cdot, c) - {\mathcal{J}}_{\alpha}^{v}(0, c)}_{{\mathscr W}^{2,p}({\mathscr{B}}_R)}\leq \Tilde{C}_{2}(R, p) \left(1 + \alpha\inf_{{\mathscr{B}}_{R_0}}{\mathcal{J}}_{\alpha}^{v}(x, c) \right)\,. \end{equation} Hence, following the arguments in \cite[Lemma~3.6.6]{ABG-book}, we conclude that there exists $(V^{v}, \Tilde{\rho}_v)\in {\mathscr W}l^{2,p}({\mathds{R}^{d}})\times \mathds{R}$ such that along a subsequence (as $\alpha\to 0$), ${\mathcal{J}}_{\alpha}^{v}(\cdot, c) - {\mathcal{J}}_{\alpha}^{v}(0, c) \to V^{v}(\cdot)$ and $\alpha{\mathcal{J}}_{\alpha}^{v}(0, c) \to \Tilde{\rho}_{v}$, and the pair $(V^{v}, \Tilde{\rho}_v)$ satisfies \begin{equation}\label{EErgonearPoisso1G} {\mathscr{L}}_{v}V^{v}(x) + c(x, v(x)) = \Tilde{\rho}_{v}\,. \end{equation} We will show that the subsequential limits are unique\,. From \cref{EErgonearPoisso1C}, we get $\Tilde{\rho}_{v} \leq \rho^{v}$. Now, in view of the estimates \cref{EErgonearPoisso1C} and \cref{EErgonearPoisso1F}, it is easy to see that \begin{equation}\label{EErgonearPoisso1H} \norm{V^{v}}_{{\mathscr W}^{2,p}({\mathscr{B}}_R)}\leq \Tilde{C}_{2}(R, p) \left(1 + M \right)\,.
\end{equation} Also, for each $x\in {\mathds{R}^{d}}$, we have \begin{align}\label{EErgonearPoisso1HLower} V^{v}(x) &= \lim_{\alpha\to 0}\left({\mathcal{J}}_{\alpha}^{v}(x, c) - {\mathcal{J}}_{\alpha}^{v}(0, c)\right) \geq \liminf_{\alpha\to 0} \left({\mathcal{J}}_{\alpha}^{v}(x, c) - \inf_{{\mathds{R}^{d}}}{\mathcal{J}}_{\alpha}^{v}(x, c) + \inf_{{\mathds{R}^{d}}}{\mathcal{J}}_{\alpha}^{v}(x, c) - {\mathcal{J}}_{\alpha}^{v}(0, c)\right) \nonumber\\ &\geq - \limsup_{\alpha\to 0} \left({\mathcal{J}}_{\alpha}^{v}(0, c) - \inf_{{\mathds{R}^{d}}}{\mathcal{J}}_{\alpha}^{v}(x, c)\right) + \liminf _{\alpha\to 0} \left({\mathcal{J}}_{\alpha}^{v}(x, c) - \inf_{{\mathds{R}^{d}}}{\mathcal{J}}_{\alpha}^{v}(x, c)\right)\nonumber\\ &\geq - \limsup_{\alpha\to 0} \left({\mathcal{J}}_{\alpha}^{v}(0, c) - \inf_{{\mathscr{B}}_{R_0}}{\mathcal{J}}_{\alpha}^{v}(x, c)\right) + \liminf _{\alpha\to 0} \left({\mathcal{J}}_{\alpha}^{v}(x, c) - \inf_{{\mathds{R}^{d}}}{\mathcal{J}}_{\alpha}^{v}(x, c)\right)\nonumber\\ &\geq - \limsup_{\alpha\to 0} \left(\osc_{{\mathscr{B}}_{R_0}} {\mathcal{J}}_{\alpha}^{v}(x, c)\right);\quad \left(\text{since}\,\,\, {\mathcal{J}}_{\alpha}^{v}(x, c) - \inf_{{\mathds{R}^{d}}}{\mathcal{J}}_{\alpha}^{v}(x, c) \geq 0 \right)\,, \end{align} where in the third inequality we have used the fact that $\inf_{{\mathscr{B}}_{R_0}} {\mathcal{J}}_{\alpha}^{v}(x, c) = \inf_{{\mathds{R}^{d}}} {\mathcal{J}}_{\alpha}^{v}(x, c)$\,. Thus, from \cref{EErgonearPoisso1E}, we deduce that \begin{equation}\label{EErgonearPoisso1I} V^{v}\geq -\Tilde{C}_{2}(R_0) \left(1 + M \right)\,. \end{equation} This shows that $\inf_{{\mathds{R}^{d}}} V^{v} > -\infty$\,.
Now, applying the It$\hat{\rm o}$-Krylov formula and using \cref{EErgonearPoisso1G}, we obtain \begin{align*} \mathrm{e}xp_x^{v}\left[V^{v}\left(X_{T\wedge \uptau_{R}}\right)\right] - V^v(x)\,=\, \mathrm{e}xp_x^{v}\left[\int_0^{T\wedge \uptau_{R}} \left(\Tilde{\rho}_{v} - c(X_t, v(X_t))\right) \mathrm{d}{t}\right]\,. \end{align*} This implies \begin{align*} \inf_{y\in{\mathds{R}^{d}}}V^{v}(y) - V^v(x)\,\leq\, \mathrm{e}xp_x^{v}\left[\int_0^{T\wedge \uptau_{R}} \left(\Tilde{\rho}_{v} - c(X_t, v(X_t))\right) \mathrm{d}{t}\right]\,. \end{align*} Since $v$ is stable, letting $R\to \infty$, we get \begin{align*} \inf_{y\in{\mathds{R}^{d}}}V^{v}(y) - V^v(x)\,\leq\, \mathrm{e}xp_x^{v}\left[\int_0^{T} \left(\Tilde{\rho}_{v} - c(X_t, v(X_t))\right) \mathrm{d}{t}\right]\,. \end{align*} Now, dividing both sides of the above inequality by $T$ and letting $T\to \infty$, it follows that \begin{align*} \limsup_{T\to \infty}\frac{1}{T}\mathrm{e}xp_x^{v}\left[\int_0^{T} c(X_t, v(X_t)) \mathrm{d}{t}\right] \,\leq\, \Tilde{\rho}_{v}\,. \end{align*} Thus, $\rho^v \leq \Tilde{\rho}_{v}$. This indeed implies that $\rho^v = \Tilde{\rho}_{v}$\,. The representation \cref{EErgonearPoisso1B} of $V^v$ follows by closely mimicking the argument of \cite[Lemma~3.6.9]{ABG-book}. Therefore, we have a solution pair $(V^v, \rho_v)$ to \cref{EErgonearPoisso1A} satisfying (i) and (ii). Next we want to prove that the solution pair is unique.
To this end, let $(\hat{V}^v, \hat{\rho}_v)\in {\mathscr W}l^{2,p}({\mathds{R}^{d}})\times \mathds{R}$, \, $1< p < \infty$, with $\hat{V}^v(0) = 0$, $\inf_{{\mathds{R}^{d}}} \hat{V}^v > -\infty$ and $\hat{\rho}_v = \int_{{\mathds{R}^{d}}}\int_{\mathbb{U}} c(x, u)v(x)(\mathrm{d}{u})\eta_{v}(\mathrm{d}{x})$, be another pair satisfying \begin{equation}\label{EErgonearPoisso1J} \hat{\rho}_v = {\mathscr{L}}_{v}\hat{V}^v(x) + c(x, v(x))\,. \end{equation} Since $\hat{V}^v$ is bounded from below, applying the It$\hat{\rm o}$-Krylov formula and using \cref{EErgonearPoisso1J} we get \begin{align}\label{EErgonearPoisso1L} \limsup_{T\to \infty}\frac{1}{T}\mathrm{e}xp_x^{v}\left[\int_0^{T} c(X_t, v(X_t)) \mathrm{d}{t}\right] \,\leq\, \hat{\rho}_{v}\,. \end{align} Hence, from \cref{EErgonearPoisso1L}, it follows that \begin{align}\label{EErgonearPoisso1M} \hat{\rho}_{v} = \int_{{\mathds{R}^{d}}}\int_{\mathbb{U}} c(x, u)v(x)(\mathrm{d}{u})\eta_{v}(\mathrm{d}{x}) \leq \limsup_{T\to \infty}\frac{1}{T}\mathrm{e}xp_x^{v}\left[\int_0^{T} c(X_t, v(X_t)) \mathrm{d}{t}\right] \,\leq\, \hat{\rho}_{v}\,. \end{align} This implies that $\hat{\rho}_{v} = \rho_{v}$\,. Now, applying the It$\hat{\rm o}$-Krylov formula and using \cref{EErgonearPoisso1J}, we obtain \begin{align}\label{EErgonearPoisso1N} \hat{V}^v(x)\,=\, \mathrm{e}xp_x^{v}\left[\int_0^{{\Breve\uptau}_{r}\wedge \uptau_{R}} \left(c(X_t, v(X_t)) - \hat{\rho}_{v}\right) \mathrm{d}{t} + \hat{V}^{v}\left(X_{{\Breve\uptau}_{r}\wedge \uptau_{R}}\right)\right]\,. \end{align} Since $v$ is stable and $\hat{V}^v$ is bounded from below, for all $x\in {\mathds{R}^{d}}$ we obtain \begin{equation*} \liminf_{R\to\infty}\mathrm{e}xp_x^{v}\left[\hat{V}^{v}\left(X_{\uptau_{R}}\right)\mathds{1}_{\{{\Breve\uptau}_{r}\geq \uptau_{R}\}}\right]\geq \liminf_{R\to\infty}\left(\inf_{{\mathds{R}^{d}}}\hat{V}^{v}\right)\mathbb{P}_{x}\left({\Breve\uptau}_{r}\geq \uptau_{R}\right) = 0\,.
\end{equation*} In the above we have used the facts that $\uptau_{R}\to\infty$ as $R\to \infty$ and that, since $v$ is stable, $\mathrm{e}xp_x^{v}\left[{\Breve\uptau}_{r}\right] < \infty$ (see \cite[Theorem~2.6.10]{ABG-book})\,. Hence, letting $R\to\infty$ and applying Fatou's lemma in \cref{EErgonearPoisso1N}, it follows that \begin{align*} \hat{V}^v(x)&\,\geq\, \mathrm{e}xp_x^{v}\left[\int_0^{{\Breve\uptau}_{r}} \left(c(X_t, v(X_t)) - \hat{\rho}_{v}\right) \mathrm{d}{t} +\hat{V}^{v}\left(X_{{\Breve\uptau}_{r}}\right)\right]\nonumber\\ &\,\geq\, \mathrm{e}xp_x^{v}\left[\int_0^{{\Breve\uptau}_{r}} \left(c(X_t, v(X_t)) - \hat{\rho}_{v}\right) \mathrm{d}{t}\right] +\inf_{{\mathscr{B}}_r}\hat{V}^{v}\,. \end{align*} Since $\hat{V}^{v}(0) =0$, letting $r\to 0$, we deduce that \begin{align}\label{EErgonearPoisso1o} \hat{V}^v(x)\,\geq\, \limsup_{r\downarrow 0}\mathrm{e}xp_x^{v}\left[\int_0^{{\Breve\uptau}_{r}} \left(c(X_t, v(X_t)) - \hat{\rho}_{v}\right) \mathrm{d}{t} \right]\,. \end{align} From \cref{EErgonearPoisso1B} and \cref{EErgonearPoisso1o}, it is easy to see that $V^v - \hat{V}^v \leq 0$ in ${\mathds{R}^{d}}$. On the other hand, by \cref{EErgonearPoisso1A} and \cref{EErgonearPoisso1J}, one has ${\mathscr{L}}_{v}\left(V^v - \hat{V}^v\right)(x)\geq 0$ in ${\mathds{R}^{d}}$. Hence, applying the strong maximum principle \cite[Theorem~9.6]{GilTru}, one has $V^v = \hat{V}^v$. This proves uniqueness. \end{proof} Now we prove the continuity of the ergodic cost under the near-monotonicity assumption on the running cost function. \begin{theorem}\label{ergodicnearmono1} Suppose that Assumptions (A1)--(A4) hold. Let $\{v_n\}_n$ be a sequence of stable policies such that $v_n \to v$ in $\mathfrak U_{\mathsf{sm}}$\, and $\{\eta_{v_n}\}_n$ is tight.
If $$\sup_{n}{\mathscr{E}}_x(c, v_n) < \liminf_{\norm{x}\to\infty}\inf_{\zeta\in \mathbb{U}} c(x,\zeta)\,,$$ then we have the following \begin{equation}\label{EErgonearOpt1A} \inf_{{\mathds{R}^{d}}}{\mathscr{E}}_x(c, v_n) \to \inf_{{\mathds{R}^{d}}}{\mathscr{E}}_x(c, v)\quad \text{as}\,\,\, n\to\infty\,. \end{equation} \end{theorem} \begin{proof} From Theorem~\ref{NearmonotPoisso}, we know that for each $n\in\mathds{N}$ there exists $(V^{v_n}, \rho_{v_n})\in {\mathscr W}l^{2,p}({\mathds{R}^{d}})\times \mathds{R}$, \, $1< p < \infty$, with $V^{v_n}(0) = 0$ and $\inf_{{\mathds{R}^{d}}} V^{v_n} > -\infty$, satisfying \begin{equation}\label{EErgoContnuity1A} \rho_{v_n} = {\mathscr{L}}_{v_n}V^{v_n}(x) + c(x, v_n(x))\,, \end{equation} where $\rho_{v_n} = \int_{{\mathds{R}^{d}}}\int_{\mathbb{U}} c(x, u)v_n(x)(\mathrm{d}{u})\eta_{v_n}(\mathrm{d}{x}) = \inf_{{\mathds{R}^{d}}}{\mathscr{E}}_x(c, v_n)$\,. Now, from \cite[Lemma~4.4]{AB-10Uni}, since we impose tightness a priori, we deduce that $\eta_{v_n} \to \eta_{v}$ in the total variation topology. Hence the associated densities $\varphi_{v_n}\to \varphi$ in ${L}^1({\mathds{R}^{d}})$ (see the proof of \cite[Lemma~3.2.5]{ABG-book}).
Note that \begin{align} &\int_{{\mathds{R}^{d}}}\int_{\mathbb{U}} c(x,\zeta)v_{n}(x)(\mathrm{d} \zeta)\eta_{v_{n}}(\mathrm{d} x)\, -\int_{{\mathds{R}^{d}}}\int_{\mathbb{U}} c(x,\zeta) v(x)(\mathrm{d} \zeta)\eta_{v}(\mathrm{d} x) \nonumber \\ & = \bigg(\int_{{\mathds{R}^{d}}}\int_{\mathbb{U}} c(x,\zeta)v_{n}(x)(\mathrm{d} \zeta)\varphi_{v_n}(x)\mathrm{d} x - \int_{{\mathds{R}^{d}}}\int_{\mathbb{U}} c(x,\zeta)v_{n}(x)(\mathrm{d} \zeta)\varphi(x)\mathrm{d} x \bigg) \nonumber \\ &\quad + \bigg(\int_{{\mathds{R}^{d}}}\int_{\mathbb{U}} c(x,\zeta)v_{n}(x)(\mathrm{d} \zeta)\varphi(x)\mathrm{d} x -\int_{{\mathds{R}^{d}}}\int_{\mathbb{U}} c(x,\zeta)v(x)(\mathrm{d} \zeta)\varphi(x)\mathrm{d} x \bigg)\,. \end{align} Since $c$ is bounded, the first term on the right-hand side converges to zero because $\varphi_{v_n}\to \varphi$ in ${L}^1({\mathds{R}^{d}})$, and the second term converges to zero by the convergence $v_n\to v$ (see Definition~\ref{DefBorkarTopology1A})\,. Hence, it follows that $\int_{{\mathds{R}^{d}}}\int_{\mathbb{U}} c(x, u)v_n(x)(\mathrm{d}{u})\eta_{v_n}(\mathrm{d}{x}) \to \int_{{\mathds{R}^{d}}}\int_{\mathbb{U}} c(x, u)v(x)(\mathrm{d}{u})\eta_{v}(\mathrm{d}{x})$\,. Thus, in view of Theorem~\ref{NearmonotPoisso}, we obtain $\inf_{{\mathds{R}^{d}}}{\mathscr{E}}_x(c, v_n) \to \inf_{{\mathds{R}^{d}}}{\mathscr{E}}_x(c, v)$ as $n\to \infty$\,. This completes the proof. \end{proof} \begin{remark} The tightness assumption is not superfluous. In view of \cite{AA13}, we know that the map $v\mapsto \inf_{{\mathds{R}^{d}}}{\mathscr{E}}_x(c, v)$ in general may not be continuous on $\mathfrak U_{\mathsf{sm}}$ under the near-monotone cost criterion (in the sense of \cref{EErgonearOpt1A})\,.
The reason is the following: for each $n\in\mathds{N}$, let $(V^{v_n}, \rho_{v_n})$ be the unique compatible solution pair (see \cite[Definition~1.1]{AA13}) of the equation \cref{EErgoContnuity1A}; if $(V^{v_n}, \rho_{v_n})$ converges to a solution pair $(\bar{V}, \bar{\rho})$ of the limiting equation of \cref{EErgoContnuity1A} as $n\to \infty$, the pair $(\bar{V}, \bar{\rho})$ may not be compatible (see \cite{AA13}). One sufficient condition which ensures this continuity is the tightness of the family of corresponding invariant measures $\{\eta_{v_n}: n\in\mathds{N}\}$\,. \end{remark} \subsubsection{Under Lyapunov stability}\label{Lyapunov stability} In this section we study the continuity of the ergodic cost criterion under a Lyapunov stability assumption. We assume the following Lyapunov stability condition on the dynamics. \begin{itemize} \item[\hypertarget{A5}{{(A5)}}] There exist a positive constant $\widehat{C}_0$ and a pair of inf-compact functions $({\mathcal{V}}, h)\in {\mathcal{C}}^{2}({\mathds{R}^{d}})\times{\mathcal{C}}({\mathds{R}^{d}}\times\mathbb{U})$ (i.e., the sub-level sets $\{{\mathcal{V}}\leq k\} \,,\{h\leq k\}$ are compact or empty sets in ${\mathds{R}^{d}}$\,, ${\mathds{R}^{d}}\times\mathbb{U}$ respectively for each $k\in\mathds{R}$) such that \begin{equation}\label{Lyap1} {\mathscr{L}}_{\zeta}{\mathcal{V}}(x) \leq \widehat{C}_{0} - h(x,\zeta)\quad \text{for all}\,\,\, (x,\zeta)\in {\mathds{R}^{d}}\times \mathbb{U}\,, \end{equation} where $h$ ($>0$) is locally Lipschitz continuous in its first argument uniformly with respect to the second, and ${\mathcal{V}} > 1$. \end{itemize} A function $f$ belongs to $\mathcal{O}({\mathcal{V}})$ if $f\leq \widehat{C}_1 {\mathcal{V}}$ for some positive constant $\widehat{C}_1$\,, and $f \in {\mathfrak{o}}({\mathcal{V}})$ if $\displaystyle{\limsup_{|x|\to \infty} \frac{|f|}{{\mathcal{V}}} = 0}$\,.
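Before proceeding, it may help to see assumption \hyperlink{A5}{(A5)} in a concrete case. The sketch below is our illustration only: the model $b(x,\zeta) = -x + \zeta$ with $a \equiv 1/2$, $\mathbb{U} = [-1,1]$, and the candidates ${\mathcal{V}}(x) = 2 + x^2$, $h(x,\zeta) = 1 + x^2/2$, $\widehat{C}_0 = 4$ are assumptions, not data from the text. Analytically, ${\mathscr{L}}_{\zeta}{\mathcal{V}}(x) = 1 + 2x(-x+\zeta)$, and the inequality $2x\zeta \leq x^2 + \zeta^2$ shows that \cref{Lyap1} holds with these choices; the code only double-checks the drift inequality on a grid.

```python
import numpy as np

# Assumed 1-d toy model (illustrative only): b(x,zeta) = -x + zeta, a = 1/2,
# U = [-1, 1]; candidate Lyapunov pair V(x) = 2 + x^2 (> 1, inf-compact),
# h(x) = 1 + x^2/2 (inf-compact on R x U since U is compact), constant C0 = 4.
def LV(x, zeta):
    # L_zeta V(x) = 2*a + b(x, zeta)*V'(x) = 1 + (-x + zeta)*2*x
    return 1.0 + (-x + zeta) * 2.0 * x

C0 = 4.0
X, Z = np.meshgrid(np.linspace(-50, 50, 2001), np.linspace(-1, 1, 41))
# Grid check of the drift inequality  L_zeta V(x) <= C0 - h(x, zeta)
holds = bool(np.all(LV(X, Z) <= C0 - (1.0 + X**2 / 2.0)))
```

A grid check like this can only falsify a candidate pair; in general \cref{Lyap1} has to be verified analytically, as done for this toy model above.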
Now, following \cite[Lemma~3.7.8]{ABG-book}, we prove that a certain equation admits a unique solution in a suitable function space. This uniqueness result is crucial for obtaining the continuity of the map $v\mapsto {\mathscr{E}}_x(c, v)$ on $\mathfrak U_{\mathsf{sm}}$\,. \begin{theorem}\label{TErgoExis1} Suppose that Assumptions (A1)-(A3) and (A5) hold. Then for each $v\in \mathfrak U_{\mathsf{sm}}$ there exists a unique solution pair $(\widehat{V}^v, \widehat{\rho}^{v})\in {\mathscr W}l^{2,p}({\mathds{R}^{d}})\cap {\mathfrak{o}}({\mathcal{V}})\times \mathds{R}$ for any $p >1$ satisfying \begin{equation}\label{EErgoOpt1A} \widehat{\rho}^{v} = {\mathscr{L}}_{v}\widehat{V}^v(x) + c(x, v(x))\quad\text{with}\quad \widehat{V}^v(0) = 0\,. \end{equation} Furthermore, we have \begin{itemize} \item[(i)]$\widehat{\rho}^{v} = {\mathscr{E}}_x(c, v)$\,; \item[(ii)] for all $x\in{\mathds{R}^{d}}$, we have \begin{equation}\label{EErgoOpt1C} \widehat{V}^v(x) \,=\, \lim_{r\downarrow 0}\mathrm{e}xp_{x}^{v}\left[\int_{0}^{{\Breve\uptau}_{r}} \left( c(X_t, v(X_t)) - {\mathscr{E}}_x(c, v)\right)\mathrm{d} t\right]\,. \end{equation} \end{itemize} \end{theorem} \begin{proof} The existence of a solution pair $(\widehat{V}^v, \widehat{\rho}^{v})\in {\mathscr W}l^{2,p}({\mathds{R}^{d}})\cap {\mathfrak{o}}({\mathcal{V}})\times \mathds{R}$ for any $p >1$ satisfying (i) and (ii) follows from \cite[Lemma~3.7.8]{ABG-book}\,. Also, it is known that, along a subsequence, $\alpha{\mathcal{J}}_{\alpha}^{v}(0, c)\to \widehat{\rho}^{v}$ and ${\mathcal{J}}_{\alpha}^{v}(x, c) - {\mathcal{J}}_{\alpha}^{v}(0, c)\to \widehat{V}^v$ uniformly over compact subsets of ${\mathds{R}^{d}}$ (see \cite[Lemma~3.7.8 (i)]{ABG-book})\,. Next we show that the subsequential limit is unique; this indeed implies the uniqueness of the solution pair.
Let $(\bar{V}^v, \bar{\rho}^{v})\in {\mathscr W}l^{2,p}({\mathds{R}^{d}})\cap {\mathfrak{o}}({\mathcal{V}})\times \mathds{R}$ for any $p >1$ be any other solution pair of \cref{EErgoOpt1A} with $\bar{V}^v(0) = 0$. Then, by the It$\hat{\rm o}$-Krylov formula, for $R>0$ we obtain \begin{align}\label{ETErgoExis1A} \mathrm{e}xp_{x}^{v}\left[\bar{V}^v(X_{T\wedge\uptau_{R}})\right] - \bar{V}^v(x) &= \mathrm{e}xp_{x}^{v}\left[\int_{0}^{T\wedge\uptau_{R}} {\mathscr{L}}_{v} \bar{V}^v(X_s) \mathrm{d} s\right]\nonumber\\ & = \mathrm{e}xp_{x}^{v}\left[\int_{0}^{T\wedge\uptau_{R}} \left(\bar{\rho}^{v} - c(X_s, v(X_s))\right)\mathrm{d} s \right]\,. \end{align} Note that \begin{equation*} \int_{0}^{T\wedge\uptau_{R}} \left(\bar{\rho}^{v} - c(X_s, v(X_s))\right)\mathrm{d} s = \int_{0}^{T\wedge\uptau_{R}} \bar{\rho}^{v}\,\mathrm{d} s - \int_{0}^{T\wedge\uptau_{R}}c(X_s, v(X_s))\mathrm{d} s\,. \end{equation*} Thus, letting $R\to \infty$, by the monotone convergence theorem we get \begin{equation*} \lim_{R\to\infty}\mathrm{e}xp_{x}^{v}\left[\int_{0}^{T\wedge\uptau_{R}} \left(\bar{\rho}^{v} - c(X_s, v(X_s))\right)\mathrm{d} s \right] = \mathrm{e}xp_{x}^{v}\left[\int_{0}^{T} \left(\bar{\rho}^{v} - c(X_s, v(X_s))\right)\mathrm{d} s \right]\,. \end{equation*} Since $\bar{V}^v \in {\mathfrak{o}}{({\mathcal{V}})}$, in view of \cite[Lemma~3.7.2 (ii)]{ABG-book}, letting $R\to\infty$ we deduce that \begin{align}\label{ETErgoExis1BA} \mathrm{e}xp_{x}^{v}\left[\bar{V}^v(X_{T})\right] - \bar{V}^v(x) = \mathrm{e}xp_{x}^{v}\left[\int_{0}^{T} \left(\bar{\rho}^{v} - c(X_s, v(X_s))\right)\mathrm{d} s \right]\,. \end{align} Also, from \cite[Lemma~3.7.2 (ii)]{ABG-book}, we have \begin{equation*} \lim_{T\to\infty}\frac{\mathrm{e}xp_{x}^{v}\left[\bar{V}^v(X_{T})\right]}{T} = 0\,.
\end{equation*} Hence, dividing both sides of \cref{ETErgoExis1BA} by $T$ and letting $T\to\infty$, we obtain \begin{align*} \bar{\rho}^{v} = \limsup_{T\to \infty}\frac{1}{T}\mathrm{e}xp_{x}^{v}\left[\int_{0}^{T} c(X_s, v(X_s))\,\mathrm{d} s \right]\,. \end{align*} This implies that $\bar{\rho}^{v} = \widehat{\rho}^{v}$\,. Again, applying the It$\hat{\rm o}$-Krylov formula and using \cref{EErgoOpt1A}, we have \begin{align}\label{ETErgoExis1B} \bar{V}^v(x)\,=\, \mathrm{e}xp_x^{v}\left[\int_0^{{\Breve\uptau}_{r}\wedge \uptau_{R}} \left(c(X_t, v(X_t)) - \bar{\rho}^{v}\right) \mathrm{d}{t} + \bar{V}^{v}\left(X_{{\Breve\uptau}_{r}\wedge \uptau_{R}}\right)\right]\,. \end{align} Also, from \cref{Lyap1}, by the It$\hat{\rm o}$-Krylov formula it follows that \begin{align*} \mathrm{e}xp_x^{v}\left[{\mathcal{V}}\left(X_{{\Breve\uptau}_{r}\wedge \uptau_{R}}\right)\right] - {\mathcal{V}}(x)\,=\, \mathrm{e}xp_x^{v}\left[\int_0^{{\Breve\uptau}_{r}\wedge \uptau_{R}} {\mathscr{L}}_{v}{\mathcal{V}}(X_t) \mathrm{d}{t}\right] \leq \mathrm{e}xp_x^{v}\left[\int_0^{{\Breve\uptau}_{r}\wedge \uptau_{R}} \left(\widehat{C}_0 - h(X_t, v(X_t))\right) \mathrm{d}{t}\right]\,. \end{align*} Since $h(x,\zeta) > 0$, this gives us \begin{equation*} \mathrm{e}xp_x^{v}\left[{\mathcal{V}}\left(X_{\uptau_{R}}\right)\mathds{1}_{\{{\Breve\uptau}_{r}\geq \uptau_{R}\}}\right]\leq \widehat{C}_0 \mathrm{e}xp_x^{v}\left[{\Breve\uptau}_{r}\right] + {\mathcal{V}}(x)\quad \text{for all} \,\,\, r <|x|<R\,.
\end{equation*} Now, it is easy to see that \begin{equation*} -\sup_{\partial{{\mathscr{B}}_R}}\frac{|\bar{V}^{v}|}{{\mathcal{V}}} \left(\widehat{C}_0 \mathrm{e}xp_x^{v}\left[{\Breve\uptau}_{r}\right] + {\mathcal{V}}(x)\right)\leq \mathrm{e}xp_x^{v}\left[\bar{V}^{v}\left(X_{\uptau_{R}}\right)\mathds{1}_{\{{\Breve\uptau}_{r}\geq \uptau_{R}\}}\right] \leq \sup_{\partial{{\mathscr{B}}_R}}\frac{|\bar{V}^{v}|}{{\mathcal{V}}} \left(\widehat{C}_0 \mathrm{e}xp_x^{v}\left[{\Breve\uptau}_{r}\right] + {\mathcal{V}}(x)\right)\,. \end{equation*} Since $\bar{V}^v \in {\mathfrak{o}}({\mathcal{V}})$, from the above estimate we get \begin{equation*} \liminf_{R\to\infty}\mathrm{e}xp_x^{v}\left[\bar{V}^{v}\left(X_{\uptau_{R}}\right)\mathds{1}_{\{{\Breve\uptau}_{r}\geq \uptau_{R}\}}\right] = 0\,. \end{equation*} Thus, letting $R\to\infty$ in \cref{ETErgoExis1B} and applying Fatou's lemma, it follows that \begin{align*} \bar{V}^v(x)&\,\geq\, \mathrm{e}xp_x^{v}\left[\int_0^{{\Breve\uptau}_{r}} \left(c(X_t, v(X_t)) - \bar{\rho}^{v}\right) \mathrm{d}{t} +\bar{V}^{v}\left(X_{{\Breve\uptau}_{r}}\right)\right]\nonumber\\ &\,\geq\, \mathrm{e}xp_x^{v}\left[\int_0^{{\Breve\uptau}_{r}} \left(c(X_t, v(X_t)) - \bar{\rho}^{v}\right) \mathrm{d}{t}\right] +\inf_{{\mathscr{B}}_r}\bar{V}^{v}\,. \end{align*} Since $\bar{V}^{v}(0) =0$, letting $r\to 0$ we deduce that \begin{align}\label{ETErgoExis1C} \bar{V}^v(x)\,\geq\, \limsup_{r\downarrow 0}\mathrm{e}xp_x^{v}\left[\int_0^{{\Breve\uptau}_{r}} \left(c(X_t, v(X_t)) - \bar{\rho}^{v}\right) \mathrm{d}{t} \right]\,. \end{align} Since $\widehat{\rho}^v = \bar{\rho}^v$, from \cref{EErgoOpt1C} and \cref{ETErgoExis1C} it follows that $\widehat{V}^v - \bar{V}^v \leq 0$ in ${\mathds{R}^{d}}$. Also, since $(\widehat{V}^v, \widehat{\rho}^v)$ and $(\bar{V}^v, \bar{\rho}^v)$ are two solution pairs of \cref{EErgoOpt1A}, we have ${\mathscr{L}}_{v}\left(\widehat{V}^v - \bar{V}^v\right)(x) = 0$ in ${\mathds{R}^{d}}$.
Hence, by the strong maximum principle \cite[Theorem~9.6]{GilTru}, one has $\widehat{V}^v = \bar{V}^v$. This proves uniqueness. \end{proof} Next we prove that the map $v\mapsto \inf_{{\mathds{R}^{d}}}{\mathscr{E}}_x(c, v)$ is continuous on $\mathfrak U_{\mathsf{sm}}$ under the Borkar topology\,. \begin{theorem}\label{ergodicLyap1} Suppose that Assumptions (A1)-(A3) and (A5) hold. Let $\{v_n\}_n$ be a sequence of policies in $\mathfrak U_{\mathsf{sm}}$ such that $v_n \to v$ in $\mathfrak U_{\mathsf{sm}}$\,. Then we have \begin{equation}\label{EErgoLyap1A} \inf_{{\mathds{R}^{d}}}{\mathscr{E}}_x(c, v_n) \to \inf_{{\mathds{R}^{d}}}{\mathscr{E}}_x(c, v)\quad \text{as}\,\,\, n\to\infty\,. \end{equation} \end{theorem} \begin{proof} From Theorem~\ref{TErgoExis1}, we know that for each $n\in \mathds{N}$ there exists a unique solution pair $(\widehat{V}^{v_n}, \widehat{\rho}^{v_n})\in {\mathscr W}l^{2,p}({\mathds{R}^{d}})\cap {\mathfrak{o}}({\mathcal{V}})\times \mathds{R}$ for any $p >1$ satisfying \begin{equation}\label{EErgoLyap1B} \widehat{\rho}^{v_n} = {\mathscr{L}}_{v_n}\widehat{V}^{v_n}(x) + c(x, v_n(x))\quad\text{with}\quad \widehat{V}^{v_n}(0) = 0\,, \end{equation} where \begin{itemize} \item[(i)]$\widehat{\rho}^{v_n} = {\mathscr{E}}_x(c, v_n)$\,; \item[(ii)] for all $x\in{\mathds{R}^{d}}$, we have \begin{equation*} \widehat{V}^{v_n}(x) \,=\, \lim_{r\downarrow 0}\mathrm{e}xp_{x}^{v_n}\left[\int_{0}^{{\Breve\uptau}_{r}} \left( c(X_t, v_n(X_t)) - {\mathscr{E}}_x(c, v_n)\right)\mathrm{d} t\right]\,. \end{equation*} \end{itemize} In view of \cref{Lyap1}, it is easy to see that each $v\in\mathfrak U_{\mathsf{sm}}$ is stable and that $\inf_{v\in\mathfrak U_{\mathsf{sm}}}\eta_v({\mathscr{B}}_R) > 0$ for any $R>0$ (see \cite[Lemma~3.3.4]{ABG-book} and \cite[Lemma~3.2.4(b)]{ABG-book}).
Thus, from \cite[Theorem~3.7.4]{ABG-book}, it follows that \begin{equation}\label{EErgoLyap1C} \norm{{\mathcal{J}}_{\alpha}^{v_n}(\cdot, c) - {\mathcal{J}}_{\alpha}^{v_n}(0, c)}_{{\mathscr W}^{2,p}({\mathscr{B}}_R)}\leq \frac{\widehat{C}_{2}(R, p)}{\eta_{v_n}({\mathscr{B}}_{2R})} \left(\frac{\widehat{\rho}^{v_n}}{\eta_{v_n}({\mathscr{B}}_{2R})} + \sup_{{\mathscr{B}}_{4R}\times \mathbb{U}}c(x,\zeta) \right)\,, \end{equation} where the positive constant $\widehat{C}_{2}(R, p)$ depends only on $R$ and $p$\,. Since the running cost is bounded, we have $\|c\|_{\infty} \leq M$ for some positive constant $M$, and hence $\widehat{\rho}^{v_n} \leq M$. Thus, from \cref{EErgoLyap1C}, we deduce that \begin{equation*} \norm{{\mathcal{J}}_{\alpha}^{v_n}(\cdot, c) - {\mathcal{J}}_{\alpha}^{v_n}(0, c)}_{{\mathscr W}^{2,p}({\mathscr{B}}_R)}\leq \frac{M\widehat{C}_{2}(R, p)}{\inf_{n}\eta_{v_n}({\mathscr{B}}_{2R})} \left(\frac{1}{\inf_{n}\eta_{v_n}({\mathscr{B}}_{2R})} + 1 \right)\,. \end{equation*} This implies that \begin{equation}\label{EErgoLyap1D} \norm{\widehat{V}^{v_n}}_{{\mathscr W}^{2,p}({\mathscr{B}}_R)} \leq \widehat{C}_{3}(R, p)\,, \end{equation} where $\widehat{C}_{3}(R, p)$ is a positive constant which depends only on $R$ and $p$\,. Hence, by a standard diagonalization argument and the Banach--Alaoglu theorem (see \cref{ET1.1C}), one can extract a subsequence $\{\widehat{V}^{v_{n_k}}\}$ such that for some $\widehat{V}^*\in {\mathscr W}l^{2,p}({\mathds{R}^{d}})$ we have \begin{equation}\label{EErgoLyap1E} \begin{cases} \widehat{V}^{v_{n_k}}\to & \widehat{V}^*\quad \text{in}\quad {\mathscr W}l^{2,p}({\mathds{R}^{d}})\quad\text{(weakly)}\\ \widehat{V}^{v_{n_k}}\to & \widehat{V}^*\quad \text{in}\quad {\mathcal{C}}^{1, \beta}_{loc}({\mathds{R}^{d}}) \quad\text{(strongly)}\,. \end{cases} \end{equation} Also, since $\widehat{\rho}^{v_n} \leq M$, along a further subsequence we have $\widehat{\rho}^{v_{n_k}}\to \widehat{\rho}^{*}$ (without loss of generality, we denote the further subsequence by the same indices).
Now, by a similar argument as in Theorem~\ref{T1.1}, multiplying both sides of \cref{EErgoLyap1B} by a test function and letting $k\to\infty$, we deduce that $(\widehat{V}^*, \widehat{\rho}^{*})\in {\mathscr W}l^{2,p}({\mathds{R}^{d}})\times \mathds{R}$ satisfies \begin{equation}\label{EErgoLyap1F} \widehat{\rho}^{*} = {\mathscr{L}}_{v}\widehat{V}^{*}(x) + c(x, v(x))\,. \end{equation} Since $\widehat{V}^{v_n}(0) = 0$ for each $n$, we get $\widehat{V}^*(0) = 0$\,. Next we want to show that $\widehat{V}^{*}\in {\mathfrak{o}}{({\mathcal{V}})}$. Following the proof of \cite[Lemma~3.7.8]{ABG-book} (see eq.~(3.7.47) or eq.~(3.7.50) there), it is easy to see that \begin{equation*} \widehat{V}^{v_n}(x) \,\leq\, \mathrm{e}xp_{x}^{v_n}\left[\int_{0}^{{\Breve\uptau}_{r}} \left( c(X_t, v_n(X_t)) - {\mathscr{E}}_x(c, v_n)\right)\mathrm{d} t + \widehat{V}^{v_n}(X_{{\Breve\uptau}_{r}})\right]\,. \end{equation*} This gives us the estimate \begin{equation}\label{EErgoLyap1G} |\widehat{V}^{v_n}(x)| \,\leq\, M\sup_{n}\mathrm{e}xp_{x}^{v_n}\left[\int_{0}^{{\Breve\uptau}_{r}} \left( c(X_t, v_n(X_t)) + 1\right)\mathrm{d} t + \sup_{{\mathscr{B}}_r}|\widehat{V}^{v_n}|\right]\,. \end{equation} We know that, for $d < p < \infty$, the space ${\mathscr W}^{2,p}({\mathscr{B}}_R)$ is compactly embedded in ${\mathcal{C}}^{1, \beta}(\bar{{\mathscr{B}}}_R)$\,, where $\beta < 1 - \frac{d}{p}$ (see \cite[Theorem~A.2.15 (2b)]{ABG-book}). Thus, from \cref{EErgoLyap1D}, we obtain $\displaystyle{\sup_{n}\sup_{{\mathscr{B}}_r}|\widehat{V}^{v_n}| < \widehat{M}}$ for some positive constant $\widehat{M}$\,. Therefore, in view of \cite[Lemma~3.7.2 (i)]{ABG-book}, from \cref{EErgoLyap1G} we deduce that $\widehat{V}^{*}\in {\mathfrak{o}}{({\mathcal{V}})}$\,.
Since the pair $(\widehat{V}^*, \widehat{\rho}^{*})\in {\mathscr W}l^{2,p}({\mathds{R}^{d}})\cap {\mathfrak{o}}{({\mathcal{V}})} \times \mathds{R}$ satisfies \cref{EErgoLyap1F}, by the uniqueness of the solution of \cref{EErgoLyap1F} (see Theorem~\ref{TErgoExis1}) it follows that $(\widehat{V}^*, \widehat{\rho}^{*})\equiv (\widehat{V}^v, \widehat{\rho}^{v})$\,. This completes the proof of the theorem\,. \end{proof} \section{Denseness of Finite Action/Piecewise Constant Stationary Policies}\label{DensePol} \subsection{Denseness of Policies with Finite Actions} Let $d_{\mathbb{U}}$ be the metric on the action space $\mathbb{U}$\,. Since $\mathbb{U}$ is compact, it is totally bounded. Thus, one can find a sequence of finite grids $\{\{\zeta_{n,i}\}_{i=1}^{k_n}\}_{n\geq 1}$ such that $$\min_{i= 1,2,\dots , k_n}d_{\mathbb{U}}(\zeta, \zeta_{n,i}) < \frac{1}{n}\quad\text{for all}\,\,\, \zeta\in \mathbb{U}\,.$$ Let $\Lambda_{n} := \{\zeta_{n,1}, \zeta_{n,2}, \dots ,\zeta_{n,k_n}\}$ and define a function $Q_n: \mathbb{U}\to \Lambda_{n}$ by \begin{equation*} Q_n(\zeta) = \argmin_{\zeta_{n,i}\in \Lambda_n} d_{\mathbb{U}}(\zeta, \zeta_{n,i})\,, \end{equation*} where ties are broken so that $Q_n$ is measurable. The function $Q_n$ is known as the nearest-neighbor quantizer (see \cite{SYL17}). For each $n$ the function $Q_n$ induces a partition $\{\mathbb{U}_{n,i}\}_{i=1}^{k_n}$ of the action space $\mathbb{U}$ given by \begin{equation*} \mathbb{U}_{n,i} = \{\zeta\in \mathbb{U} : Q_n(\zeta) = \zeta_{n,i}\}\,. \end{equation*} By the triangle inequality, it follows that $\text{diam}(\mathbb{U}_{n,i}):= \sup_{\zeta_1, \zeta_2 \in \mathbb{U}_{n,i}} d_{\mathbb{U}}(\zeta_1, \zeta_2) < \frac{2}{n}$\,. Now, for each $v\in\mathfrak U_{\mathsf{sm}}$ define a sequence of policies with finite actions as follows: \begin{equation}\label{DenseStra1} v_{n}(\zeta_{n,i}|x) = Q_n v(\zeta_{n,i}|x) = v(\mathbb{U}_{n,i}|x)\,.
\end{equation} In the next lemma we prove that the space of stationary policies with finite actions is dense in $\mathfrak U_{\mathsf{sm}}$ with respect to the \emph{Borkar topology} (see Definition~\ref{DefBorkarTopology1A})\,. \begin{lemma}\label{DenseBorkarTopo} For each $v\in\mathfrak U_{\mathsf{sm}}$ there exists a sequence of policies $\{v_n\}_n$ (defined as in \cref{DenseStra1}) with finite actions satisfying \begin{equation}\label{DenseStra2} \lim_{n\to\infty}\int_{{\mathds{R}^{d}}}f(x)\int_{\mathbb{U}}g(x,\zeta)v_{n}(x)(\mathrm{d} \zeta)\mathrm{d} x = \int_{{\mathds{R}^{d}}}f(x)\int_{\mathbb{U}}g(x,\zeta)v(x)(\mathrm{d} \zeta)\mathrm{d} x \end{equation} for all $f\in L^1({\mathds{R}^{d}})\cap L^2({\mathds{R}^{d}})$ and $g\in {\mathcal{C}}_b({\mathds{R}^{d}}\times \mathbb{U})$\,. \end{lemma} \begin{proof} Let $f\in L^1({\mathds{R}^{d}})\cap L^2({\mathds{R}^{d}})$ and $g\in {\mathcal{C}}_b({\mathds{R}^{d}}\times \mathbb{U})$. Then, from the construction of the sequence $\{v_n\}_n$, it is easy to see that \begin{align*} \bigg|\int_{{\mathds{R}^{d}}}f(x)\int_{\mathbb{U}}g(x,\zeta)v_{n}(x)(\mathrm{d} \zeta)\mathrm{d} x & - \int_{{\mathds{R}^{d}}}f(x)\int_{\mathbb{U}}g(x,\zeta)v(x)(\mathrm{d} \zeta)\mathrm{d} x \bigg|\\ &\leq \int_{{\mathds{R}^{d}}}|f(x)|\sum_{i=1}^{k_n}\int_{\mathbb{U}_{n,i}}|g(x,\zeta_{n,i}) - g(x,\zeta)|v(x)(\mathrm{d} \zeta)\mathrm{d} x\,. \end{align*} Since $g\in{\mathcal{C}}_b({\mathds{R}^{d}}\times \mathbb{U})$ and $\text{diam}(\mathbb{U}_{n,i}) < \frac{2}{n}$, it follows that \begin{equation*} |f(x)|\sum_{i=1}^{k_n}\int_{\mathbb{U}_{n,i}}|g(x,\zeta_{n,i}) - g(x,\zeta)|v(x)(\mathrm{d} \zeta)\rightarrow 0\quad\text{for all}\,\,\, x\in{\mathds{R}^{d}}\,. \end{equation*} Since $g$ is bounded, we have $|g| \leq M_1$ for some positive constant $M_1$. Thus, we deduce that \begin{equation*} |f(x)|\sum_{i=1}^{k_n}\int_{\mathbb{U}_{n,i}}|g(x,\zeta_{n,i}) - g(x,\zeta)|v(x)(\mathrm{d} \zeta)\leq 2M_1 |f(x)| \quad\text{for all}\,\,\, x\in{\mathds{R}^{d}}\,.
\end{equation*} Since $f\in L^1({\mathds{R}^{d}})\cap L^2({\mathds{R}^{d}})$, by the dominated convergence theorem we obtain \begin{equation*} \lim_{n\to\infty}\int_{{\mathds{R}^{d}}}f(x)\int_{\mathbb{U}}g(x,\zeta)v_{n}(x)(\mathrm{d} \zeta)\mathrm{d} x = \int_{{\mathds{R}^{d}}}f(x)\int_{\mathbb{U}}g(x,\zeta)v(x)(\mathrm{d} \zeta)\mathrm{d} x\,. \end{equation*} This completes the proof of the lemma. \end{proof} \subsection{Denseness of Piecewise Constant Policies} Let $d_{\mathscr{P}}$ be the Prokhorov metric on $\mathrm{V}$\,. Since $(\mathbb{U}, d_{\mathbb{U}})$ is separable (being a compact metric space), convergence in $(\mathrm{V}, d_{\mathscr{P}})$ is equivalent to weak convergence of probability measures. \begin{theorem}\label{TDPCP} For each $v\in \mathfrak U_{\mathsf{sm}}$ there exists a sequence of piecewise constant policies $\{v_m\}_{m}$ in $\mathfrak U_{\mathsf{sm}}$ such that \begin{equation}\label{BorkarTopology2} \lim_{m\to\infty}\int_{{\mathds{R}^{d}}}f(x)\int_{\mathbb{U}}g(x,\zeta)v_{m}(x)(\mathrm{d} \zeta)\mathrm{d} x = \int_{{\mathds{R}^{d}}}f(x)\int_{\mathbb{U}}g(x,\zeta)v(x)(\mathrm{d} \zeta)\mathrm{d} x \end{equation} for all $f\in L^1({\mathds{R}^{d}})\cap L^2({\mathds{R}^{d}})$ and $g\in {\mathcal{C}}_b({\mathds{R}^{d}}\times \mathbb{U})$\,. \end{theorem} \begin{proof} Let ${\mathscr{B}}_{0} = \emptyset$ and define $D_{n} = {\mathscr{B}}_{n}\setminus {\mathscr{B}}_{n-1}$ for $n\in \mathds{N}$\,. Then ${\mathds{R}^{d}} = \cup_{n=1}^{\infty} D_{n}$. Since each $v\in \mathfrak U_{\mathsf{sm}}$ is a measurable map $v: {\mathds{R}^{d}} \to \mathrm{V}$, the restriction $\hat{v}_{n}\,:=\, v\arrowvert_{D_n} : D_n \to \mathrm{V}$ is a measurable map.
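Nearest-neighbor quantization of the kind used here (for $\mathbb{U}$ via $Q_n$ above, and for $\mathrm{V}$ via $\widehat{Q}_{m}$ below) is straightforward to realize numerically. The following Python fragment is illustrative only: it assumes $\mathbb{U}=[0,1]$ with the Euclidean metric, a uniform grid, and ties broken toward the smallest index, none of which is imposed by the construction.

```python
import numpy as np

def make_grid(n):
    # A (1/n)-net of the (assumed) action space U = [0, 1]:
    # grid points 0, 1/n, 2/n, ..., 1.
    return np.linspace(0.0, 1.0, n + 1)

def quantize(zeta, grid):
    # Nearest-neighbor quantizer: map zeta to the closest grid point.
    # np.argmin returns the smallest index among ties, which is one
    # measurable tie-breaking rule.
    return grid[int(np.argmin(np.abs(grid - zeta)))]

# The induced cells {zeta : quantize(zeta, grid) == zeta_i} have diameter
# < 2/n by the triangle inequality, so a policy v(x) (a measure on U) is
# approximated by pushing the mass of each cell to its grid point.
grid = make_grid(10)
assert abs(quantize(0.234, grid) - 0.2) < 1e-9
assert abs(quantize(0.97, grid) - 1.0) < 1e-9
```

Measurability of the quantized object reduces to measurability of the original map, which is exactly the property of the restrictions $\hat{v}_{n}$ established above.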
Hence, by Lusin's theorem (see \cite[Theorem~7.5.2]{D02-book}), for any $\epsilon_n > 0$ there exist a compact set $K_{n}^{\epsilon_n}\subset D_n$ and a continuous function $\hat{v}_{n}^{\epsilon_n} : K_{n}^{\epsilon_n}\to \mathrm{V}$ such that $\arrowvert D_n\setminus K_{n}^{\epsilon_n} \arrowvert < \epsilon_n$ (where $\arrowvert \cdot \arrowvert$ denotes the Lebesgue measure) and $\hat{v}_{n} \equiv \hat{v}_{n}^{\epsilon_n}$ on $K_{n}^{\epsilon_n}$\,. Again, by Tietze's extension theorem (see \cite[Theorem~4.1]{DG51}), there exists a continuous function $\tilde{v}_{n}^{\epsilon_n}: D_n \to \mathrm{V}$ such that $ \tilde{v}_{n}^{\epsilon_n}\equiv \hat{v}_{n}^{\epsilon_n}$ on $K_{n}^{\epsilon_n}$\,. \begin{itemize} \item[\textbf{Step~1}] Therefore, for any $\hat{f}\in {L}^1({\mathds{R}^{d}})\cap {L}^2({\mathds{R}^{d}})$ and $\hat{g}\in {\mathcal{C}}(\mathbb{U})$, we have \begin{align}\label{EBT1} &\arrowvert \int_{D_n}\hat{f}(x) \int_{\mathbb{U}} \hat{g}(\zeta)\hat{v}_{n}(x)(\mathrm{d} \zeta)\mathrm{d} x - \int_{D_n}\hat{f}(x)\int_{\mathbb{U}} \hat{g}(\zeta)\tilde{v}_{n}^{\epsilon_n}(x)(\mathrm{d} \zeta)\mathrm{d} x \arrowvert \nonumber\\ &\leq \arrowvert \int_{D_n\setminus K_{n}^{\epsilon_n}} \hat{f}(x) \int_{\mathbb{U}} \hat{g}(\zeta)\hat{v}_{n}(x)(\mathrm{d} \zeta)\mathrm{d} x - \int_{D_n\setminus K_{n}^{\epsilon_n}} \hat{f}(x) \int_{\mathbb{U}} \hat{g}(\zeta)\tilde{v}_{n}^{\epsilon_n}(x)(\mathrm{d} \zeta)\mathrm{d} x \arrowvert \nonumber\\ &\leq \arrowvert \int_{D_n\setminus K_{n}^{\epsilon_n}} \hat{f}(x) \int_{\mathbb{U}} \hat{g}(\zeta)\hat{v}_{n}(x)(\mathrm{d} \zeta)\mathrm{d} x \arrowvert + \arrowvert\int_{D_n\setminus K_{n}^{\epsilon_n}}\hat{f}(x)\int_{\mathbb{U}} \hat{g}(\zeta)\tilde{v}_{n}^{\epsilon_n}(x)(\mathrm{d} \zeta)\mathrm{d} x \arrowvert \nonumber\\ &\leq \|\hat{g}\|_{\infty}\int_{D_n\setminus K_{n}^{\epsilon_n}}|\hat{f}(x)|\mathrm{d} x + \|\hat{g}\|_{\infty}\int_{D_n\setminus K_{n}^{\epsilon_n}}|\hat{f}(x)|\mathrm{d} x \nonumber\\ &\leq
2\|\hat{g}\|_{\infty}\|\hat{f}\|_{{L}^2({\mathds{R}^{d}})} \sqrt{|(D_n\setminus K_{n}^{\epsilon_n})|} \leq 2\sqrt{\epsilon_n} \|\hat{g}\|_{\infty}\|\hat{f}\|_{{L}^2({\mathds{R}^{d}})}\quad\text{(by H\"older's inequality)}\,. \end{align} Now, since $(\mathrm{V}, d_{\mathscr{P}})$ is compact, for each $m\in \mathds{N}$ there exists a finite set $\widehat{\Lambda}_{m} = \{\mu_{m,1}, \mu_{m,2}, \dots , \mu_{m, k_m}\}$ such that $$\min_{\mu_{m, i}\in \widehat{\Lambda}_{m}}d_{\mathscr{P}}(\mu, \mu_{m, i}) < \frac{1}{m}\quad \text{for any}\quad \mu\in \mathrm{V}\,.$$ Let $\widehat{Q}_{m}: \mathrm{V} \to \widehat{\Lambda}_{m}$ be defined as \begin{equation*} \widehat{Q}_{m} (\mu) = \argmin_{\mu_{m,i}\in\widehat{\Lambda}_{m}} d_{\mathscr{P}}(\mu, \mu_{m,i})\,, \end{equation*} where ties are broken so that $\widehat{Q}_{m}$ is a measurable map. Hence, it induces a partition $\{\widehat{U}_{m,i}\}_{i=1}^{k_m}$ of the space $\mathrm{V}$ given by \begin{equation*} \widehat{U}_{m,i} = \{\mu\in\mathrm{V}: \widehat{Q}_{m}(\mu) = \mu_{m,i}\}\,. \end{equation*} By the triangle inequality it is easy to see that \begin{equation*} \diam(\widehat{U}_{m,i}) := \sup_{\mu_1, \mu_2\in \widehat{U}_{m,i}}d_{\mathscr{P}}(\mu_1, \mu_2) < \frac{2}{m}\,. \end{equation*} Now, for $v\in\mathfrak U_{\mathsf{sm}}$ define $D_{n,i}^m = (\tilde{v}_{n}^{\epsilon_n})^{-1}(\widehat{U}_{m,i})$, so that $D_{n} = \cup_{i=1}^{k_m} D_{n,i}^m$\,. Define \begin{equation*} \hat{v}_{n,m}^{\epsilon_n}(x) := \sum_{i=1}^{k_m} \mu_{m,i}\mathds{1}_{D_{n,i}^m}(x)\quad\text{for all}\quad x\in D_n\,\,\,\text{and}\,\,\,m\in \mathds{N}\,.
\end{equation*} Therefore, we deduce that \begin{align}\label{EBT2} &\arrowvert \int_{D_n} \hat{f}(x)\int_{\mathbb{U}} \hat{g}(\zeta)\tilde{v}_{n}^{\epsilon_n}(x)(\mathrm{d} \zeta)\mathrm{d} x - \int_{D_n} \hat{f}(x) \int_{\mathbb{U}} \hat{g}(\zeta)\hat{v}_{n,m}^{\epsilon_n}(x)(\mathrm{d} \zeta)\mathrm{d} x \arrowvert \nonumber\\ &\leq \sum_{i=1}^{k_m} \arrowvert \int_{D_{n,i}^{m}} \hat{f}(x) \int_{\mathbb{U}} \hat{g}(\zeta)\tilde{v}_{n}^{\epsilon_n}(x)(\mathrm{d} \zeta)\mathrm{d} x - \int_{D_{n,i}^{m}} \hat{f}(x) \int_{\mathbb{U}} \hat{g}(\zeta)\mu_{m,i}(\mathrm{d} \zeta)\mathrm{d} x \arrowvert \nonumber\\ &\leq \sum_{i=1}^{k_m} \int_{D_{n,i}^{m}} |\hat{f}(x)| \arrowvert \int_{\mathbb{U}} \hat{g}(\zeta)\tilde{v}_{n}^{\epsilon_n}(x)(\mathrm{d} \zeta) - \int_{\mathbb{U}} \hat{g}(\zeta)\mu_{m,i}(\mathrm{d} \zeta)\arrowvert \mathrm{d} x \nonumber\\ &\leq \|\hat{f}\|_{{L}^1({\mathds{R}^{d}})}\epsilon_n \quad\text{(for large enough $m$)}\,. \end{align} Given $\tilde{\epsilon}_n > 0$, we may choose the Lusin parameter $\epsilon_n$ above so small that $2\sqrt{\epsilon_n}\,\|\hat{g}\|_{\infty}\|\hat{f}\|_{{L}^2({\mathds{R}^{d}})} \leq \frac{\tilde{\epsilon}_n}{2}$ and $\epsilon_n\|\hat{f}\|_{{L}^1({\mathds{R}^{d}})} \leq \frac{\tilde{\epsilon}_n}{2}$\,. Then, combining \cref{EBT1} and \cref{EBT2}, there exists $\bar{M}_0 >0$ (depending on $\hat{f}, \hat{g}$ and $\tilde{\epsilon}_n$) such that \begin{align}\label{EBT3} \arrowvert \int_{D_n}\hat{f}(x) \int_{\mathbb{U}} \hat{g}(\zeta)\hat{v}_{n}(x)(\mathrm{d} \zeta)\mathrm{d} x - \int_{D_n}\hat{f}(x)\int_{\mathbb{U}} \hat{g}(\zeta)\hat{v}_{n,m}^{\epsilon_n}(x)(\mathrm{d} \zeta)\mathrm{d} x \arrowvert \leq \tilde{\epsilon}_n\,, \end{align} for all $m\geq \bar{M}_0$\,. \item[\textbf{Step~2}] Let $\epsilon > 0$ be a small number. Now define \begin{equation}\label{EBT4} \bar{v}_{m}^{\epsilon}(x) := \sum_{n=1}^{\infty} \hat{v}_{n,m}^{\epsilon_n}(x)\mathds{1}_{D_{n}}(x) \quad \text{for}\,\,\, m\in \mathds{N}\,.
\end{equation} Since $\hat{f}\in{L}^1({\mathds{R}^{d}})$, there exists $N_0 \in \mathds{N}$ such that $\int_{{\mathscr{B}}_{N_0}^c}|\hat{f}(x)|\mathrm{d} x < \frac{\epsilon}{4\|\hat{g}\|_{\infty}}$\,. Then we have \begin{align*} &\arrowvert \int_{{\mathds{R}^{d}}}\hat{f}(x)\int_{\mathbb{U}} \hat{g}(\zeta)v(x)(\mathrm{d} \zeta)\mathrm{d} x - \int_{{\mathds{R}^{d}}}\hat{f}(x)\int_{\mathbb{U}} \hat{g}(\zeta)\bar{v}_{m}^{\epsilon}(x)(\mathrm{d} \zeta)\mathrm{d} x \arrowvert \nonumber\\ &\leq \arrowvert \int_{{\mathscr{B}}_{N_0}^c} \hat{f}(x)\int_{\mathbb{U}} \hat{g}(\zeta)(v(x) - \bar{v}_{m}^{\epsilon}(x))(\mathrm{d} \zeta)\mathrm{d} x \arrowvert + \arrowvert \int_{{\mathscr{B}}_{N_0}} \hat{f}(x)\int_{\mathbb{U}} \hat{g}(\zeta)(v(x) - \bar{v}_{m}^{\epsilon}(x))(\mathrm{d} \zeta)\mathrm{d} x \arrowvert \nonumber\\ &\leq \frac{\epsilon}{2} + \arrowvert \int_{{\mathscr{B}}_{N_0}} \hat{f}(x)\int_{\mathbb{U}} \hat{g}(\zeta)(v(x) - \bar{v}_{m}^{\epsilon}(x))(\mathrm{d} \zeta)\mathrm{d} x \arrowvert\,. \end{align*} Now, choose $\epsilon_{i} > 0$ for $i= 1,\dots , N_0$ such that $\sum_{i = 1}^{N_0} \epsilon_i < \frac{\epsilon}{2}$. Thus, in view of \cref{EBT3}, for each $i = 1, \dots , N_0$ there exists $M_i >0$ such that \begin{equation*} \arrowvert \int_{D_i}\hat{f}(x) \int_{\mathbb{U}} \hat{g}(\zeta)\hat{v}_{i}(x)(\mathrm{d} \zeta)\mathrm{d} x - \int_{D_i}\hat{f}(x)\int_{\mathbb{U}} \hat{g}(\zeta)\hat{v}_{i,m}^{\epsilon_i}(x)(\mathrm{d} \zeta)\mathrm{d} x \arrowvert \leq \epsilon_i\,, \end{equation*} for all $m\geq M_i$\,.
Hence, for $m\geq \max\{M_i,\, i = 1,\dots ,N_0\}$, we get \begin{align}\label{EBT5} \arrowvert \int_{{\mathscr{B}}_{N_0}} \hat{f}(x)\int_{\mathbb{U}} \hat{g}(\zeta)(v(x) & - \bar{v}_{m}^{\epsilon}(x))(\mathrm{d} \zeta)\mathrm{d} x \arrowvert \nonumber \\ & \leq \sum_{i=1}^{N_0} \arrowvert\int_{D_{i}}\hat{f}(x)\int_{\mathbb{U}} \hat{g}(\zeta)(\hat{v}_{i}(x) - \hat{v}_{i,m}^{\epsilon_i}(x))(\mathrm{d} \zeta)\mathrm{d} x \arrowvert \leq \sum_{i = 1}^{N_0} \epsilon_i < \frac{\epsilon}{2}\,. \end{align} Therefore, for each $\epsilon >0$ there exists a positive constant $\hat{M}_0 = \max\{M_i,\, i = 1,\dots ,N_0\}$ (depending on $\hat{f}, \hat{g}$ and $\epsilon$) such that for all $m\geq \hat{M}_0$ \begin{equation}\label{EBT6} \arrowvert \int_{{\mathds{R}^{d}}}\hat{f}(x)\int_{\mathbb{U}} \hat{g}(\zeta)v(x)(\mathrm{d} \zeta)\mathrm{d} x - \int_{{\mathds{R}^{d}}}\hat{f}(x)\int_{\mathbb{U}} \hat{g}(\zeta)\bar{v}_{m}^{\epsilon}(x)(\mathrm{d} \zeta)\mathrm{d} x \arrowvert \leq \epsilon\,. \end{equation} \item[\textbf{Step~3}] Let $\{\hat{f}_k\}_{k\in\mathds{N}}$ and $\{h_{j}\}_{j\in\mathds{N}}$ be countable dense sets in ${L}^1({\mathds{R}^{d}})$ and ${\mathcal{C}}(\mathbb{U})$, respectively\,. Thus \cref{EBT6} holds for each $\hat{f}_k$ and $h_j$\,. Let $f\in{L}^{1}({\mathds{R}^{d}})\cap {L}^{2}({\mathds{R}^{d}})$ and $g\in{\mathcal{C}}_{b}({\mathds{R}^{d}}\times \mathbb{U})$\,. Since $f\in {L}^{1}({\mathds{R}^{d}})$, for $\epsilon > 0$ there exists $N_{1}\in \mathds{N}$ such that $\int_{{\mathscr{B}}_{N_1}^c} |f(x)|\mathrm{d} x \leq \frac{\epsilon}{4\|g\|_{\infty}}$\,.
This implies \begin{align}\label{EBT7} &\arrowvert \int_{{\mathds{R}^{d}}}f(x)\int_{\mathbb{U}} g(x,\zeta)v(x)(\mathrm{d} \zeta)\mathrm{d} x - \int_{{\mathds{R}^{d}}}f(x)\int_{\mathbb{U}} g(x,\zeta)\bar{v}_{m}^{\epsilon}(x)(\mathrm{d} \zeta)\mathrm{d} x \arrowvert \nonumber\\ &\leq \arrowvert \int_{{\mathscr{B}}_{N_1}^c} f(x)\int_{\mathbb{U}} g(x,\zeta)(v(x) - \bar{v}_{m}^{\epsilon}(x))(\mathrm{d} \zeta)\mathrm{d} x \arrowvert + \arrowvert \int_{{\mathscr{B}}_{N_1}} f(x)\int_{\mathbb{U}} g(x,\zeta)(v(x) - \bar{v}_{m}^{\epsilon}(x))(\mathrm{d} \zeta)\mathrm{d} x \arrowvert \nonumber\\ &\leq \frac{\epsilon}{2} + \arrowvert \int_{{\mathscr{B}}_{N_1}} f(x)\int_{\mathbb{U}} g(x, \zeta)(v(x) - \bar{v}_{m}^{\epsilon}(x))(\mathrm{d} \zeta)\mathrm{d} x \arrowvert\,. \end{align} It is well known that the functions of the form $\{\sum_{i=1}^{m} r_{i}(x)p_i(\zeta)\}_{m\in\mathds{N}}$, with $r_i\in {\mathcal{C}}(\bar{{\mathscr{B}}}_{N_1})$ and $p_i \in {\mathcal{C}}(\mathbb{U})$, form a subalgebra of ${\mathcal{C}}_b(\bar{{\mathscr{B}}}_{N_1}\times \mathbb{U})$ which contains the constants and separates points\,. Thus, by the Stone--Weierstrass theorem, there exists $\hat{m}$ (large enough) such that \begin{equation}\label{EBT8} \sup_{{\mathscr{B}}_{N_1}\times \mathbb{U}} |g(x,\zeta) - \sum_{i=1}^{\hat{m}} r_{i}(x)p_i(\zeta)| \leq \frac{\epsilon}{24\|f\|_{{L}^1({\mathds{R}^{d}})}}\,. \end{equation} Since $p_i \in {\mathcal{C}}(\mathbb{U})$, we can find $h_{j(i)}$ in the dense family $\{h_j\}_{j\in\mathds{N}}$ such that \begin{equation}\label{EBT9} \sup_{\zeta\in \mathbb{U}} |p_i(\zeta) - h_{j(i)}(\zeta)| \leq \frac{\epsilon}{24\|f\|_{{L}^1({\mathds{R}^{d}})}\|r_{i}\|_{\infty}}\,. \end{equation} Also, since $fr_i\in {L}^1({\mathds{R}^{d}})$, there exists $\hat{f}_{k(i)}$ such that \begin{equation}\label{EBT9A} \int_{{\mathscr{B}}_{N_1}} |f(x)r_i(x) - \hat{f}_{k(i)}(x)|\mathrm{d} x \leq \frac{\epsilon}{24\|f\|_{{L}^1({\mathds{R}^{d}})}\|h_{j(i)}\|_{\infty}}\,.
\end{equation} Now, using \cref{EBT8}, \cref{EBT9} and \cref{EBT9A}, we have the following: \begin{align}\label{EBT10} &\arrowvert \int_{{\mathscr{B}}_{N_1}} f(x)\int_{\mathbb{U}} g(x, \zeta)v(x)(\mathrm{d} \zeta)\mathrm{d} x - \int_{{\mathscr{B}}_{N_1}} f(x)\int_{\mathbb{U}} g(x, \zeta)\bar{v}_{m}^{\epsilon}(x)(\mathrm{d} \zeta)\mathrm{d} x \arrowvert\nonumber\\ \leq & \arrowvert \int_{{\mathscr{B}}_{N_1}} f(x)\int_{\mathbb{U}} g(x, \zeta)v(x) (\mathrm{d} \zeta)\mathrm{d} x - \int_{{\mathscr{B}}_{N_1}} f(x)\int_{\mathbb{U}} \sum_{i=1}^{\hat{m}} r_{i}(x)p_i(\zeta) v(x)(\mathrm{d} \zeta)\mathrm{d} x \arrowvert\nonumber\\ & + \sum_{i=1}^{\hat{m}}\arrowvert \int_{{\mathscr{B}}_{N_1}} f(x)\int_{\mathbb{U}} r_{i}(x)p_i(\zeta) v(x)(\mathrm{d} \zeta)\mathrm{d} x - \int_{{\mathscr{B}}_{N_1}} f(x)\int_{\mathbb{U}} r_{i}(x)h_{j(i)}(\zeta)v(x) (\mathrm{d} \zeta)\mathrm{d} x\arrowvert\nonumber\\ & + \sum_{i=1}^{\hat{m}}\arrowvert \int_{{\mathscr{B}}_{N_1}} f(x)\int_{\mathbb{U}} r_{i}(x)h_{j(i)}(\zeta)v(x) (\mathrm{d} \zeta)\mathrm{d} x - \int_{{\mathscr{B}}_{N_1}} \hat{f}_{k(i)}(x)\int_{\mathbb{U}} h_{j(i)}(\zeta)v(x) (\mathrm{d} \zeta)\mathrm{d} x\arrowvert\nonumber\\ & + \sum_{i=1}^{\hat{m}}\arrowvert \int_{{\mathscr{B}}_{N_1}} \hat{f}_{k(i)}(x)\int_{\mathbb{U}} h_{j(i)}(\zeta)v(x) (\mathrm{d} \zeta)\mathrm{d} x - \int_{{\mathscr{B}}_{N_1}} \hat{f}_{k(i)}(x)\int_{\mathbb{U}} h_{j(i)}(\zeta)\bar{v}_{m}^{\epsilon}(x) (\mathrm{d} \zeta)\mathrm{d} x\arrowvert \nonumber \\ & + \sum_{i=1}^{\hat{m}}\arrowvert \int_{{\mathscr{B}}_{N_1}} f(x)\int_{\mathbb{U}} r_{i}(x)h_{j(i)}(\zeta)\bar{v}_{m}^{\epsilon}(x) (\mathrm{d} \zeta)\mathrm{d} x - \int_{{\mathscr{B}}_{N_1}} \hat{f}_{k(i)}(x)\int_{\mathbb{U}} h_{j(i)}(\zeta)\bar{v}_{m}^{\epsilon}(x) (\mathrm{d} \zeta)\mathrm{d} x\arrowvert\nonumber\\ & + \sum_{i=1}^{\hat{m}}\arrowvert \int_{{\mathscr{B}}_{N_1}} f(x)\int_{\mathbb{U}} r_{i}(x)p_i(\zeta) \bar{v}_{m}^{\epsilon}(x)(\mathrm{d} \zeta)\mathrm{d} x - \int_{{\mathscr{B}}_{N_1}} f(x)\int_{\mathbb{U}}
r_{i}(x)h_{j(i)}(\zeta)\bar{v}_{m}^{\epsilon}(x) (\mathrm{d} \zeta)\mathrm{d} x\arrowvert\nonumber\\ & + \arrowvert \int_{{\mathscr{B}}_{N_1}} f(x)\int_{\mathbb{U}} g(x, \zeta)\bar{v}_{m}^{\epsilon}(x) (\mathrm{d} \zeta)\mathrm{d} x - \int_{{\mathscr{B}}_{N_1}} f(x)\int_{\mathbb{U}} \sum_{i=1}^{\hat{m}} r_{i}(x)p_i(\zeta) \bar{v}_{m}^{\epsilon}(x)(\mathrm{d} \zeta)\mathrm{d} x \arrowvert\nonumber\\ & \leq \frac{\epsilon}{4} + \sum_{l=1}^{N_1}\sum_{i=1}^{\hat{m}}\arrowvert \int_{D_{l}} \hat{f}_{k(i)}(x)\int_{\mathbb{U}} h_{j(i)}(\zeta)v(x) (\mathrm{d} \zeta)\mathrm{d} x - \int_{D_{l}} \hat{f}_{k(i)}(x)\int_{\mathbb{U}} h_{j(i)}(\zeta)\hat{v}_{l,m}^{\epsilon_l}(x) (\mathrm{d} \zeta)\mathrm{d} x\arrowvert\,. \end{align} Now, choose $\epsilon_{l,i} > 0$ for $l = 1,\dots , N_1$ and $i=1,\dots ,\hat{m}$ in such a way that $\sum_{l=1}^{N_1}\sum_{i=1}^{\hat{m}}\epsilon_{l,i} \leq \frac{\epsilon}{4}$\,. Thus, in view of \cref{EBT3}, for each $i=1,\dots , \hat{m}$ and $l = 1,\dots , N_1$ there exists $M_{k(i),j(i)}^{l}\in \mathds{N}$ such that the corresponding term in the last sum above is at most $\epsilon_{l,i}$ for all $m\geq M_{k(i),j(i)}^{l}$\,. Set $\hat{M}_{2} := \max\{M_{k(i),j(i)}^{l}: i=1,\dots , \hat{m};\ l = 1,\dots , N_1\}$\,. Therefore, from \cref{EBT7} and \cref{EBT10}, we conclude that \begin{align}\label{EBT11} \arrowvert \int_{{\mathds{R}^{d}}}f(x)\int_{\mathbb{U}} g(x,\zeta)v(x)(\mathrm{d} \zeta)\mathrm{d} x - \int_{{\mathds{R}^{d}}}f(x)\int_{\mathbb{U}} g(x,\zeta)\bar{v}_{m}^{\epsilon}(x)(\mathrm{d} \zeta)\mathrm{d} x \arrowvert \leq \epsilon\,, \end{align} for all $m\geq \hat{M}_2$\,. This completes the proof of the theorem\,. \end{itemize} \end{proof} \subsection{Denseness of Continuous Policies} Following the discussion above, one can show that the space of continuous stationary policies is also dense in the space of stationary policies under the Borkar topology. This is a useful result, as continuity allows many approximation results to be invoked with little effort (see e.g. \cite[Assumption A2.3, p.
322]{KD92}, where convergence properties of invariant measures corresponding to time-discretizations are facilitated). \begin{theorem}\label{TDContP} For each $v\in \mathfrak U_{\mathsf{sm}}$ there exists a sequence of continuous policies $\{v_m\}_{m}$ in $\mathfrak U_{\mathsf{sm}}$ such that \begin{equation}\label{BorkarTopology2Cont} \lim_{m\to\infty}\int_{{\mathds{R}^{d}}}f(x)\int_{\mathbb{U}}g(x,\zeta)v_{m}(x)(\mathrm{d} \zeta)\mathrm{d} x = \int_{{\mathds{R}^{d}}}f(x)\int_{\mathbb{U}}g(x,\zeta)v(x)(\mathrm{d} \zeta)\mathrm{d} x \end{equation} for all $f\in L^1({\mathds{R}^{d}})\cap L^2({\mathds{R}^{d}})$ and $g\in {\mathcal{C}}_b({\mathds{R}^{d}}\times \mathbb{U})$\,. \end{theorem} \begin{proof} As earlier, let $\{f_i\}_{i\in\mathds{N}}$ be a countable dense set in ${L}^{1}({\mathds{R}^{d}})$\,. Now for each $i\in\mathds{N}$, define a finite measure $\nu_i$ on $({\mathds{R}^{d}}, {\mathscr{B}}({\mathds{R}^{d}}))$, given by $$\nu_i(A) = \int_{A} |f_i(x)|\mathrm{d} x \quad \forall \,\,\, A\in {\mathscr{B}}({\mathds{R}^{d}})\,.$$ Let $v\in\mathfrak U_{\mathsf{sm}}$. Then, as in the proof of Theorem~\ref{TDPCP}, by successive application of Lusin's theorem (see \cite[Theorem~7.5.2]{D02-book}) and Tietze's extension theorem (see \cite[Theorem~4.1]{DG51}), for any $\epsilon_i >0$ there exist a closed set $K_i\subset {\mathds{R}^{d}}$ and a continuous function $v^{i}: {\mathds{R}^{d}} \to \mathrm{V}$ such that $v^{i}\equiv v$ on $K_i$ and $\nu_i({\mathds{R}^{d}}\setminus K_i) < \epsilon_{i}$\,.
Hence, for any $g\in{\mathcal{C}}_b({\mathds{R}^{d}}\times \mathbb{U})$, we have \begin{align*}\label{EBT1Remark} &\arrowvert \int_{{\mathds{R}^{d}}}f_i(x) \int_{\mathbb{U}} g(x,\zeta)v(x)(\mathrm{d} \zeta)\mathrm{d} x - \int_{{\mathds{R}^{d}}}f_i(x)\int_{\mathbb{U}} g(x,\zeta)v^i(x)(\mathrm{d} \zeta)\mathrm{d} x \arrowvert \nonumber\\ &\leq \arrowvert \int_{{\mathds{R}^{d}}\setminus K_i} f_i(x) \int_{\mathbb{U}}g(x,\zeta)v(x)(\mathrm{d} \zeta)\mathrm{d} x - \int_{{\mathds{R}^{d}}\setminus K_i} f_i(x) \int_{\mathbb{U}} g(x,\zeta)v^i(x)(\mathrm{d} \zeta)\mathrm{d} x \arrowvert \nonumber\\ &\leq 2\|g\|_{\infty}\int_{{\mathds{R}^{d}}\setminus K_i}|f_i(x)|\mathrm{d} x \nonumber\\ & = 2\|g\|_{\infty}\nu_i({\mathds{R}^{d}}\setminus K_i) \leq 2\|g\|_{\infty}\epsilon_i\,. \end{align*} Since $\{f_i\}_{i\in\mathds{N}}$ is dense in ${L}^{1}({\mathds{R}^{d}})$, by choosing $\epsilon_i$ appropriately, we obtain our result\,. \end{proof} \section{Near Optimality of Finite Models for Controlled Diffusions}\label{NOptiFinite} First we prove the near optimality of quantized policies for the $\alpha$-discounted cost. \begin{theorem}\label{T1.2} Suppose Assumptions (A1)-(A3) hold. Then for each $\epsilon >0$ there exist a policy $v_{\epsilon}^*\in \mathfrak U_{\mathsf{sm}}$ with finite actions and a piecewise constant policy $\bar{v}_{\epsilon}^* \in \mathfrak U_{\mathsf{sm}}$ such that \begin{equation}\label{ET1.2A} {\mathcal{J}}_{\alpha}^{v_{\epsilon}^*}(x, c) \leq \inf_{U\in \mathfrak U}{\mathcal{J}}_{\alpha}^{U}(x, c) + \epsilon \quad\text{and}\quad {\mathcal{J}}_{\alpha}^{\bar{v}_{\epsilon}^*}(x, c) \leq \inf_{U\in \mathfrak U}{\mathcal{J}}_{\alpha}^{U}(x, c) + \epsilon \quad\quad\text{for all}\,\, x\in{\mathds{R}^{d}}\,.
\end{equation} \end{theorem} \begin{proof} From \cite[Theorem~3.5.6]{ABG-book}, it follows that there exists $v^*\in \mathfrak U_{\mathsf{sm}}$ such that ${\mathcal{J}}_{\alpha}^{v^*}(x, c) = \inf_{U\in \mathfrak U}{\mathcal{J}}_{\alpha}^{U}(x, c)$ for all $x\in {\mathds{R}^{d}}$\,. Since the map $v\mapsto {\mathcal{J}}_{\alpha}^{v}(x, c)$ is continuous on $\mathfrak U_{\mathsf{sm}}$ (see Theorem~\ref{T1.1}) and the space of quantized stationary policies is dense in $\mathfrak U_{\mathsf{sm}}$ (see Lemma~\ref{DenseBorkarTopo}), it follows that for each $\epsilon > 0$ there exists a quantized policy $v_{\epsilon}^*\in \mathfrak U_{\mathsf{sm}}$ satisfying \cref{ET1.2A}\,. Similarly, since the piecewise constant policies are dense in $\mathfrak U_{\mathsf{sm}}$ (see Theorem~\ref{TDPCP}), we conclude that for any $\epsilon > 0$ there exists $\bar{v}_{\epsilon}^*\in\mathfrak U_{\mathsf{sm}}$ which satisfies \cref{ET1.2A}\,. This completes the proof. \end{proof} We now show that for the cost up to an exit time, the quantized (finite action / piecewise constant) policies are near optimal\,. \begin{theorem}\label{T1.2ExitCost} Suppose Assumptions (A1)-(A3) hold. Then for each $\epsilon >0$ there exist a policy $v_{\epsilon}^*\in \mathfrak U_{\mathsf{sm}}$ with finite actions and a piecewise constant policy $\bar{v}_{\epsilon}^* \in \mathfrak U_{\mathsf{sm}}$ such that \begin{equation}\label{ET1.2ExitCostA} \hat{{\mathcal{J}}}_{e}^{v_{\epsilon}^*}(x) \leq \inf_{U\in \mathfrak U}\hat{{\mathcal{J}}}_{e}^{U}(x) + \epsilon \quad\text{and}\quad \hat{{\mathcal{J}}}_{e}^{\bar{v}_{\epsilon}^*}(x) \leq \inf_{U\in \mathfrak U}\hat{{\mathcal{J}}}_{e}^{U}(x) + \epsilon \quad\quad\text{for all}\,\, x\in{\mathds{R}^{d}}\,. \end{equation} \end{theorem} \begin{proof} From \cite[p. 229]{B05Survey}, we know that there exists $v^*\in \mathfrak U_{\mathsf{sm}}$ such that $\hat{{\mathcal{J}}}_{e}^{v^*}(x) = \inf_{U\in \mathfrak U}\hat{{\mathcal{J}}}_{e}^{U}(x)$\,.
Now, from the continuity of the map $v\mapsto \hat{{\mathcal{J}}}_{e}^{v}(x)$ (see Theorem~\ref{T1.1Exit}) and the density results (see Section~\ref{DensePol}), it is easy to see that for any given $\epsilon > 0$ there exist a policy $v_{\epsilon}^*\in \mathfrak U_{\mathsf{sm}}$ with finite actions and a piecewise constant policy $\bar{v}_{\epsilon}^* \in \mathfrak U_{\mathsf{sm}}$ satisfying \cref{ET1.2ExitCostA}\,. This completes the proof of the theorem\,. \end{proof} Next we prove the near optimality of the quantized policies for the ergodic cost under a near-monotonicity assumption on the running cost\,. Let $$\Theta_{v} := \{v_n\mid v_n \,\,\text{is the quantized policy defined as in \cref{DenseStra1} corresponding to}\,\, v \}$$ and $$\bar{\Theta}_{v} := \{\bar{v}_n\mid \bar{v}_n \,\,\text{is the piecewise constant policy defined as in \cref{EBT4} corresponding to}\,\, v \}\,.$$ In order to establish our result we assume that the sets of invariant measures $$\Gamma_{v^*} := \{\eta_{v_n^*}\mid \eta_{v_n^*}\,\,\text{is the invariant measure corresponding to}\,\, v_n^*\in \Theta_{v^*}\}$$ and $$\bar{\Gamma}_{v^*} := \{\eta_{\bar{v}_n^*}\mid \eta_{\bar{v}_n^*}\,\,\text{is the invariant measure corresponding to}\,\, \bar{v}_n^*\in \bar{\Theta}_{v^*}\}$$ are tight, where $v^*\in\mathfrak U_{\mathsf{sm}}$ is an ergodic optimal control. A sufficient condition ensuring the required tightness is the existence of a non-negative inf-compact function $f\in{\mathcal{C}}^2({\mathds{R}^{d}})$ such that $${\mathscr{L}}_{v_{n}^*} f(x) \leq \kappa_0 - f(x)\quad\text{and}\quad {\mathscr{L}}_{\bar{v}_{n}^*} f(x) \leq \kappa_0 - f(x)$$ for some constant $\kappa_0 >0$\,. \begin{theorem}\label{ErgodNearmOPT1} Suppose that Assumptions (A1) - (A4) hold.
Also, suppose that corresponding to the optimal policy $v^*\in \mathfrak U_{\mathsf{sm}}$, the sets of invariant measures $\Gamma_{v^*}$ and $\bar{\Gamma}_{v^*}$ are tight and the running cost $c$ is near monotone with respect to $\sup_{v_n^*\in\Theta_{v^*}}{\mathscr{E}}_x(c, v_{n}^*)$ and $\sup_{\bar{v}_n^*\in\bar{\Theta}_{v^*}}{\mathscr{E}}_x(c, \bar{v}_{n}^*)$, that is, $$\sup_{v_n^*\in\Theta_{v^*}}{\mathscr{E}}_x(c, v_{n}^*) < \liminf_{\norm{x}\to\infty}\inf_{\zeta\in \mathbb{U}} c(x,\zeta)\quad \text{and}\quad \sup_{\bar{v}_n^*\in\bar{\Theta}_{v^*}}{\mathscr{E}}_x(c, \bar{v}_{n}^*) < \liminf_{\norm{x}\to\infty}\inf_{\zeta\in \mathbb{U}} c(x,\zeta).$$ Then for any given $\epsilon >0$ there exist a policy $v_{\epsilon}\in \mathfrak U_{\mathsf{sm}}$ with finite actions and a piecewise constant policy $\bar{v}_{\epsilon}\in \mathfrak U_{\mathsf{sm}}$ such that \begin{equation}\label{ENearmonOPTA1A} {\mathscr{E}}_x(c, v_{\epsilon}) \leq {\mathscr{E}}^*(c) + \epsilon\quad \text{and}\quad {\mathscr{E}}_x(c, \bar{v}_{\epsilon}) \leq {\mathscr{E}}^*(c) + \epsilon\,. \end{equation} \end{theorem} \begin{proof} From \cite[Theorem~3.6.10]{ABG-book}, we know there exists a stable $v^*\in \mathfrak U_{\mathsf{sm}}$ such that ${\mathscr{E}}_x(c, v^*) = {\mathscr{E}}^*(c)$\,. By our assumption, the sets of invariant measures $\Gamma_{v^*}$ and $\bar{\Gamma}_{v^*}$ are tight. Thus, by the continuity result (see Theorem~\ref{ergodicnearmono1}) and the density results (see Lemma~\ref{DenseBorkarTopo}, Theorem~\ref{TDPCP}), we deduce that for each $\epsilon>0$ there exist $v_{\epsilon}\in \mathfrak U_{\mathsf{sm}}$ with finite actions and a piecewise constant policy $\bar{v}_{\epsilon}\in \mathfrak U_{\mathsf{sm}}$ such that \cref{ENearmonOPTA1A} holds\,. This completes the proof. \end{proof} Now for the ergodic cost criterion, under the Lyapunov type stability assumption we prove near optimality of quantized policies.
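Regarding the tightness assumption in Theorem~\ref{ErgodNearmOPT1}, the stated drift inequality yields the required tightness by a standard Foster--Lyapunov estimate. The following one-line sketch (an illustration only, assuming $f$ is integrable with respect to each invariant measure) uses the identity $\int {\mathscr{L}}_{v}\, h \,\mathrm{d}\eta_{v} = 0$ for invariant $\eta_v$, together with Chebyshev's inequality:

```latex
% Integrating the drift inequality f \le \kappa_0 - \mathscr{L}_{v_n^*} f
% against the invariant measure \eta_{v_n^*}:
\int_{\mathds{R}^d} f \,\mathrm{d}\eta_{v_n^*}
   \le \kappa_0 - \int_{\mathds{R}^d} {\mathscr{L}}_{v_n^*} f \,\mathrm{d}\eta_{v_n^*}
   = \kappa_0\,,
\qquad\text{hence}\qquad
\eta_{v_n^*}\bigl(\{x : f(x) > R\}\bigr) \le \frac{\kappa_0}{R}
\quad \text{for all } n \text{ and } R>0\,.
```

Since $f$ is inf-compact, the sublevel sets $\{f\le R\}$ are compact, so this uniform bound is precisely tightness of $\Gamma_{v^*}$ (and, by the same argument, of $\bar{\Gamma}_{v^*}$).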
\begin{theorem}\label{TErgoOptApprox1} Suppose that Assumptions (A1) - (A3) and (A5) hold. Then for any given $\epsilon>0$ there exist a quantized policy $v_{\epsilon}\in \mathfrak U_{\mathsf{sm}}$ with finite actions and a piecewise constant policy $\bar{v}_{\epsilon} \in \mathfrak U_{\mathsf{sm}}$ such that \begin{equation}\label{ENearmonOPTA1} {\mathscr{E}}_x(c, v_{\epsilon}) \leq {\mathscr{E}}^*(c) + \epsilon \quad\text{and}\quad {\mathscr{E}}_x(c, \bar{v}_{\epsilon}) \leq {\mathscr{E}}^*(c) + \epsilon\,. \end{equation} \end{theorem} \begin{proof} From \cite[Theorem~3.7.14]{ABG-book}, we know that there exists $v^*\in \mathfrak U_{\mathsf{sm}}$ such that ${\mathscr{E}}_x(c, v^*) = {\mathscr{E}}^*(c)$. Now, since the space of quantized policies and the space of piecewise constant policies are dense in $\mathfrak U_{\mathsf{sm}}$ (see Lemma~\ref{DenseBorkarTopo} and Theorem~\ref{TDPCP}) and the map $v\mapsto \inf_{{\mathds{R}^{d}}}{\mathscr{E}}_x(c, v)$ is continuous on $\mathfrak U_{\mathsf{sm}}$ (see Theorem~\ref{ergodicLyap1}), for any given $\epsilon>0$ one can find a quantized policy $v_{\epsilon}\in\mathfrak U_{\mathsf{sm}}$ with finite actions and a piecewise constant policy $\bar{v}_{\epsilon} \in \mathfrak U_{\mathsf{sm}}$ such that \cref{ENearmonOPTA1} holds. \end{proof} \begin{remark}\label{RContStaNear1} In view of the continuity (see Section~\ref{CDiscCost}, Section~\ref{CErgoCost}) and the denseness (see Theorem~\ref{TDContP}) results, we have the near optimality of continuous stationary policies\,. \end{remark} \section{Finite Horizon Cost: Time Discretization of Markov Policies and Near Optimality of Piecewise Constant Policies}\label{TimeDMarkov} Recall \cref{FiniteCost1} as our cost criterion for the finite horizon setup. We will present three results in this section, where the ultimate goal is to arrive at near optimality of piecewise constant policies.
While this approximation problem is well studied \cite{KD92}, \cite{HK-02A}, \cite{RF-16A}, our proof method is rather direct and appears to be new. Under uniform Lipschitz continuity and uniform boundedness assumptions on the diffusion coefficients and running cost function, in \cite{KD92}, \cite{HK-02A}, \cite{RF-16A} the authors have established similar approximation results using numerical procedures\,. \subsection*{Continuity of Finite Horizon Cost on Markov Policies under the Borkar Topology} For simplicity, in this subsection we assume that $a, b, c$ are uniformly bounded (it is possible to relax these boundedness assumptions). In particular, we assume the following: \begin{itemize} \item[\hypertarget{B1}{{(B1)}}] The functions $a, b, c$ are uniformly bounded, i.e., \begin{equation*} \sup_{(x,\zeta)\in {\mathds{R}^{d}}\times \mathbb{U}}\left[\abs{b(x,\zeta)} + \norm{a(x)} + \sum_{i=1}^{d} \norm{\frac{\partial{a}}{\partial x_i}(x)} + \abs{c(x, \zeta)}\right] \,\le\, \mathrm{K} \end{equation*} for some positive constant $\mathrm{K}$\,. Moreover, $H\in {\mathscr W}^{2,p,\mu}({\mathds{R}^{d}})\cap {L}^{\infty}({\mathds{R}^{d}})$\,,\,\, $p\ge 2$\,. \end{itemize} In view of \cite[Theorem~3.3, p. 235]{BL84-book}, the optimality equation (or, the HJB equation) \begin{align*} &\frac{\partial \psi}{\partial t} + \inf_{\zeta\in \mathbb{U}}\left[{\mathscr{L}}_{\zeta}\psi + c(x, \zeta) \right] = 0 \\ & \psi(T,x) = H(x) \end{align*} admits a unique solution $\psi\in {\mathscr W}^{1,2,p,\mu}((0, T)\times{\mathds{R}^{d}})\cap {L}^{\infty}((0, T)\times{\mathds{R}^{d}})$\,,\,\, $p\ge 2$\,. Thus, by the It\^{o}--Krylov formula (see the verification results as in \cite[Theorem~3.5.2]{HP09-book}), we know the existence of an optimal Markov policy, that is, there exists $v^*\in \mathfrak U_{\mathsf{m}}$ such that ${\mathcal{J}}_{T}(x, v^*) = {\mathcal{J}}_{T}^*(x)$\,.
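The section's central approximation idea can be previewed numerically. The toy script below is an illustration only: the policy $u$, the box $[0,2]^2$, and all function choices are hypothetical, and the policy is taken to be a Dirac measure rather than a general relaxed control. It freezes a smooth Markov policy on an $m\times m$ grid in $(x,t)$ and checks that integrals of the form $\int\!\!\int f\, g(x,t,u(x,t))\,\mathrm{d}x\,\mathrm{d}t$ are recovered as $m$ grows, in the spirit of the weak convergence used throughout:

```python
import math

# Toy smooth Markov policy on the (hypothetical) box [0, 2] x [0, 2]: at (x, t) it
# selects the single action u(x, t) in U = [0, 1], i.e. a Dirac measure for simplicity.
def u(x, t):
    return 0.5 * (1.0 + math.sin(x + t))

def integral(f, g, action, N=200):
    # Midpoint-rule approximation of the double integral of f(x,t) g(x,t,action(x,t))
    # over the box [0, 2] x [0, 2].
    h = 2.0 / N
    total = 0.0
    for i in range(N):
        for j in range(N):
            x, t = (i + 0.5) * h, (j + 0.5) * h
            total += f(x, t) * g(x, t, action(x, t)) * h * h
    return total

def piecewise_constant(action, m):
    # Freeze the policy at the centre of each cell of an m x m grid on the box.
    h = 2.0 / m
    def frozen(x, t):
        xc = (min(int(x / h), m - 1) + 0.5) * h
        tc = (min(int(t / h), m - 1) + 0.5) * h
        return action(xc, tc)
    return frozen

f = lambda x, t: 1.0             # an integrable test function on the box
g = lambda x, t, z: math.cos(z)  # a bounded continuous integrand in the action
I = integral(f, g, u)
errs = {m: abs(integral(f, g, piecewise_constant(u, m)) - I) for m in (4, 40)}
```

The discrepancy `errs[m]` shrinks as the time-space grid is refined, which is the qualitative content of the denseness results below; the actual proofs of course work with general relaxed controls via Lusin's theorem and quantization of the space $\mathrm{V}$.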
In the following theorem, we show that the finite horizon cost is continuous in $\mathfrak U_{\mathsf{m}}$ with respect to the Borkar topology (see Definition~\ref{BKTP1})\,. \begin{theorem}\label{TContFHC} Suppose Assumptions (A1), (A3) and (B1) hold. Then the map $v\mapsto {\mathcal{J}}_{T}(x, v)$ from $\mathfrak U_{\mathsf{m}}$ to $\mathds{R}$ is continuous. \end{theorem} \begin{proof} Let $\{v_n\}$ be a sequence in $\mathfrak U_{\mathsf{m}}$ such that $v_n \to v$ in $\mathfrak U_{\mathsf{m}}$, for some $v\in\mathfrak U_{\mathsf{m}}$\,. From \cite[Theorem~3.3, p. 235]{BL84-book}, we have that for each $n\in\mathds{N}$ there exists a unique solution $\psi_n\in{\mathscr W}^{1,2,p,\mu}((0, T)\times{\mathds{R}^{d}})\cap {L}^{\infty}((0, T)\times{\mathds{R}^{d}})$\,,\,\, $p\ge 2$, to the following Poisson equation \begin{align}\label{TContFHC1A} &\frac{\partial \psi_n}{\partial t} + \left[{\mathscr{L}}_{v_n}\psi_n + c(x, v_n(t,x)) \right] = 0 \nonumber\\ & \psi_n(T,x) = H(x)\,. \end{align} By the It\^{o}--Krylov formula, we deduce that \begin{align}\label{TContFHC1B} \psi_{n}(t,x) = \mathrm{E}_{x}^{v_n}\left[\int_t^{T} c(X_s, v_n(s, X_s)) \mathrm{d}{s} + H(X_T)\right]\,. \end{align} This gives us \begin{equation}\label{TContFHC1C} \norm{\psi_n}_{\infty} \leq T\norm{c}_{\infty} + \norm{H}_{\infty}\,. \end{equation} Rewriting \cref{TContFHC1A}, we get \begin{align*} &\frac{\partial \psi_n}{\partial t} + {\mathscr{L}}_{v_n}\psi_n + \lambda_0 \psi_n = \lambda_0 \psi_n - c(x, v_n(t,x)) \nonumber\\ & \psi_n(T,x) = H(x)\,, \end{align*} for some fixed $\lambda_0 >0$\,. Thus, by the parabolic PDE estimate \cite[eq. (3.8), p. 234]{BL84-book}, we deduce that \begin{equation}\label{TContFHC1D} \norm{\psi_n}_{{\mathscr W}^{1,2,p,\mu}} \leq \kappa_1 \norm{\lambda_0 \psi_n - c(x, v_n(t,x))}_{{L}^{p,\mu}}\,.
\end{equation} Hence, from \cref{TContFHC1C}, \cref{TContFHC1D}, it follows that $\norm{\psi_n}_{{\mathscr W}^{1,2,p,\mu}} \leq \kappa_2$ for some positive constant $\kappa_2$ (independent of $n$)\,. Since ${\mathscr W}^{1,2,p,\mu}((0, T)\times{\mathds{R}^{d}})$ is a reflexive Banach space, as a corollary of the Banach--Alaoglu theorem, there exists $\psi^*\in{\mathscr W}^{1,2,p,\mu}((0, T)\times{\mathds{R}^{d}})$ such that along a subsequence (which, without loss of generality, we denote by the same sequence) \begin{equation}\label{TContFHC1E} \begin{cases} \psi_n \to & \psi^*\quad \text{in}\quad {\mathscr W}^{1,2,p,\mu}((0, T)\times{\mathds{R}^{d}})\quad\text{(weakly)}\\ \psi_n \to & \psi^*\quad \text{in}\quad {\mathscr W}^{0,1,p,\mu}((0, T)\times{\mathds{R}^{d}})\quad \text{(strongly)}\,. \end{cases} \end{equation} Since $v_n\to v$ in $\mathfrak U_{\mathsf{m}}$, multiplying both sides of \cref{TContFHC1A} by a test function $\phi\in{\mathcal{C}}_c^{\infty}((0, T)\times {\mathds{R}^{d}})$ and integrating, we get \begin{align}\label{TContFHC1EA} \int_{0}^{T}\int_{{\mathds{R}^{d}}}\frac{\partial \psi_n}{\partial t}\phi(t,x)\mathrm{d} t \mathrm{d} x + & \int_{0}^{T}\int_{{\mathds{R}^{d}}}\trace\bigl(a(x)\nabla^2 \psi_n\bigr)\phi(t,x)\mathrm{d} t \mathrm{d} x \nonumber\\ & + \int_{0}^{T}\int_{{\mathds{R}^{d}}}\{b(x,v_{n}(t,x))\cdot \nabla \psi_n + c(x, v_{n}(t,x))\}\phi(t,x)\mathrm{d} t \mathrm{d} x = 0\,. \end{align} In view of \cref{TContFHC1E}, letting $n\to \infty$, from \cref{TContFHC1EA} we obtain that \begin{align*} \int_{0}^{T}\int_{{\mathds{R}^{d}}}\frac{\partial \psi^*}{\partial t}\phi(t,x)\mathrm{d} t \mathrm{d} x + & \int_{0}^{T}\int_{{\mathds{R}^{d}}}\trace\bigl(a(x)\nabla^2 \psi^*\bigr)\phi(t,x)\mathrm{d} t \mathrm{d} x \nonumber\\ & + \int_{0}^{T}\int_{{\mathds{R}^{d}}}\{b(x,v(t,x))\cdot \nabla \psi^* + c(x, v(t,x))\}\phi(t,x)\mathrm{d} t \mathrm{d} x = 0\,.
\end{align*} This implies that $\psi^*\in{\mathscr W}^{1,2,p,\mu}((0, T)\times{\mathds{R}^{d}})$ satisfies \begin{align}\label{TContFHC1F} &\frac{\partial \psi^*}{\partial t} + \left[{\mathscr{L}}_{v}\psi^* + c(x, v(t,x)) \right] = 0 \nonumber\\ & \psi^*(T,x) = H(x)\,. \end{align} Again, by the It\^{o}--Krylov formula, it follows that \begin{align}\label{TContFHC1G} \psi^{*}(t,x) = \mathrm{E}_{x}^{v}\left[\int_t^{T} c(X_s, v(s, X_s)) \mathrm{d}{s} + H(X_T)\right]\,. \end{align} Therefore, from \cref{TContFHC1B} and \cref{TContFHC1G}, we conclude that $v\mapsto {\mathcal{J}}_{T}(x, v)$ from $\mathfrak U_{\mathsf{m}}$ to $\mathds{R}$ is continuous. \end{proof} \subsection{Time Discretization of Markov Policies} Following, and briefly modifying, our approach so far involving stationary policies, in this section we show that piecewise constant Markov policies are dense in the space of Markov policies $\mathfrak U_{\mathsf{m}}$\,. Also, using this result we deduce the near optimality of piecewise constant Markov policies\,. \begin{theorem}\label{TDPCMP} For any $v\in \mathfrak U_{\mathsf{m}}$ there exists a sequence of piecewise constant policies $\{v_m\}_{m}$ such that \begin{equation}\label{BorkarTopology3} \lim_{m\to\infty}\int_{0}^{\infty}\int_{{\mathds{R}^{d}}}f(x,t)\int_{\mathbb{U}}g(x,t,\zeta)v_{m}(x,t)(\mathrm{d} \zeta)\mathrm{d} x \mathrm{d} t = \int_{0}^{\infty}\int_{{\mathds{R}^{d}}}f(x,t)\int_{\mathbb{U}}g(x,t,\zeta)v(x,t)(\mathrm{d} \zeta)\mathrm{d} x \mathrm{d} t \end{equation} for all $f\in L^1({\mathds{R}^{d}}\times [0, \infty))\cap L^2({\mathds{R}^{d}}\times [0, \infty))$ and $g\in {\mathcal{C}}_b({\mathds{R}^{d}}\times [0, \infty)\times \mathbb{U})$ \,. \end{theorem} \begin{proof} Let $\hat{{\mathscr{B}}}_{0} = \emptyset$ and $\hat{{\mathscr{B}}}_{n} = {\mathscr{B}}_n\times [0, n)$. Then, define $\hat{D}_{n} = \hat{{\mathscr{B}}}_{n}\setminus \hat{{\mathscr{B}}}_{n-1}$ for $n\in \mathds{N}$\,.
Now, it is clear that ${\mathds{R}^{d}}\times [0,\infty) = \cup_{n=1}^{\infty} \hat{D}_{n}$\,. Note that $\bar{v}_{n}\,:=\, v\arrowvert_{\hat{D}_n} : \hat{D}_n \to \mathrm{V}$ is a measurable map. As in Theorem~\ref{TDPCP}, by Lusin's theorem and Tietze's extension theorem, for any $\epsilon_n > 0$ there exist a compact set $\hat{K}_{n}^{\epsilon_n}\subset \hat{D}_n$ and a continuous function $\bar{v}_{n}^{\epsilon_n}: \hat{D}_n \to \mathrm{V}$ such that $ \bar{v}_{n}^{\epsilon_n}\equiv \bar{v}_{n}$ on $\hat{K}_{n}^{\epsilon_n}$ and $\arrowvert(\hat{D}_n\setminus \hat{K}_{n}^{\epsilon_n}) \arrowvert < \epsilon_n$\,. Also, as in Theorem~\ref{TDPCP}, since $(\mathrm{V}, d_{\mathscr{P}})$ is compact, for each $m\in \mathds{N}$ there exist a finite set $\widehat{\Lambda}_{m} = \{\mu_{m,1}, \mu_{m,2}, \dots , \mu_{m, k_m}\}$ and a quantizer $\widehat{Q}_{m}: \mathrm{V} \to \widehat{\Lambda}_{m}$ which induces a partition $\{\widehat{U}_{m,i}\}_{i=1}^{k_m}$ of the space $\mathrm{V}$\,. Now define $\hat{D}_{n,i}^m = (\bar{v}_{n}^{\epsilon_n})^{-1}(\widehat{U}_{m,i})$. It is easy to see that $\hat{D}_{n} = \cup_{i=1}^{k_m} \hat{D}_{n,i}^m$\,. Define \begin{equation*} \bar{v}_{n,m}^{\epsilon_n}(x,t) := \sum_{i=1}^{k_m} \mu_{m,i}\mathds{1}_{\{\hat{D}_{n,i}^m\}}(x,t)\quad\text{for all}\quad (x,t)\in \hat{D}_n\,\,\,\text{and}\,\,\,m\in \mathds{N}\,.
\end{equation*} Hence, as in the proof of Theorem~\ref{TDPCP} (see Step~$1$), for any $\hat{f}\in L^1({\mathds{R}^{d}}\times [0, \infty))\cap L^2({\mathds{R}^{d}}\times [0, \infty)), \hat{g}\in {\mathcal{C}}_b(\mathbb{U})$, there exists a positive constant $\bar{M}_0$ (depending on $\hat{f}, \hat{g}$ and $\epsilon_n$) such that \begin{align}\label{EBTM2} \arrowvert \int_{\hat{D}_n}\hat{f}(x,t) \int_{\mathbb{U}} \hat{g}(\zeta)\bar{v}_{n}(x,t)(\mathrm{d} \zeta)\mathrm{d} x \mathrm{d} t - \int_{\hat{D}_n}\hat{f}(x,t)\int_{\mathbb{U}} \hat{g}(\zeta)\bar{v}_{n,m}^{\epsilon_n}(x,t)(\mathrm{d} \zeta)\mathrm{d} x \mathrm{d} t\arrowvert \leq \epsilon_n\,, \end{align} for all $m\geq \bar{M}_0$\,. Now, for any given $\epsilon > 0$, define \begin{equation}\label{EBTM3} \Tilde{v}_{m}^{\epsilon} := \sum_{n=1}^{\infty} \bar{v}_{n,m}^{\epsilon_n} \quad \text{for}\,\,\, m\in \mathds{N}\,. \end{equation} Since $\hat{f}\in{L}^1({\mathds{R}^{d}}\times [0, \infty))$, there exists $N_0 \in \mathds{N}$ such that $\int_{\hat{{\mathscr{B}}}_{N_0}^c}|\hat{f}(x,t)|\mathrm{d} x \mathrm{d} t < \frac{\epsilon}{4\|\hat{g}\|_{\infty}}$\,. Then closely mimicking the argument of Theorem~\ref{TDPCP} (see Step~$2$), we have that for each $\epsilon >0$ there exists a positive constant $\hat{M}_0$ (depending on $\hat{f}, \hat{g}, \epsilon$) such that for all $m\geq \hat{M}_0$ \begin{equation}\label{EBTM4} \arrowvert \int_{[0, \infty)}\int_{{\mathds{R}^{d}}}\hat{f}(x,t)\int_{\mathbb{U}} \hat{g}(\zeta)v(x,t)(\mathrm{d} \zeta)\mathrm{d} x \mathrm{d} t - \int_{[0, \infty)}\int_{{\mathds{R}^{d}}}\hat{f}(x,t)\int_{\mathbb{U}} \hat{g}(\zeta)\Tilde{v}_{m}^{\epsilon}(x,t)(\mathrm{d} \zeta)\mathrm{d} x \mathrm{d} t\arrowvert \leq \epsilon\,. \end{equation} Let $\{\hat{f}_k\}_{k\in\mathds{N}}$ and $\{h_{j}\}_{j\in\mathds{N}}$ be countable dense sets in ${L}^1({\mathds{R}^{d}}\times [0, \infty))$ and ${\mathcal{C}}(\mathbb{U})$ respectively\,.
Suppose that $f\in{L}^{1}({\mathds{R}^{d}}\times [0, \infty))\cap {L}^{2}({\mathds{R}^{d}}\times [0, \infty))$ and $g\in{\mathcal{C}}_{b}({\mathds{R}^{d}}\times [0, \infty)\times \mathbb{U})$\,. Since $f\in {L}^{1}({\mathds{R}^{d}}\times [0, \infty))$, for given $\epsilon > 0$ there exists $N_{1}\in \mathds{N}$ such that $\int_{\hat{{\mathscr{B}}}_{N_1}^c} |f(x,t)|\mathrm{d} x \mathrm{d} t\leq \frac{\epsilon}{4\|g\|_{\infty}}$\,. We know that in ${\mathcal{C}}_b(\bar{\hat{{\mathscr{B}}}}_{N_1}\times \mathbb{U})$ the functions of the form $\{\sum_{i=1}^{m} r_{i}(x,t)p_i(\zeta)\}_{m\in\mathds{N}}$ form an algebra containing the constants, where $r_i\in {\mathcal{C}}(\bar{\hat{{\mathscr{B}}}}_{N_1})$ and $p_i \in {\mathcal{C}}(\mathbb{U})$\,. Thus by the Stone--Weierstrass theorem there exists $\hat{m}$ (large enough) such that \begin{equation}\label{EBTM5} \sup_{\hat{{\mathscr{B}}}_{N_1}\times \mathbb{U}} |g(x,t,\zeta) - \sum_{i=1}^{\hat{m}} r_{i}(x,t)p_i(\zeta)| \leq \frac{\epsilon}{24\|f\|_{{L}^1({\mathds{R}^{d}}\times [0, \infty))}}\,. \end{equation} Since $p_i \in {\mathcal{C}}(\mathbb{U})$ and $\{h_j\}_{j\in\mathds{N}}$ is dense in ${\mathcal{C}}(\mathbb{U})$, one can choose $h_{j(i)} \in {\mathcal{C}}(\mathbb{U})$ such that \begin{equation}\label{EBTM6} \sup_{\zeta\in \mathbb{U}} |p_i(\zeta) - h_{j(i)}(\zeta)| \leq \frac{\epsilon}{24\|f\|_{{L}^1({\mathds{R}^{d}}\times [0, \infty))}\|r_{i}\|_{\infty}}\,. \end{equation} Also, since $fr_i\in {L}^1({\mathds{R}^{d}}\times [0, \infty))$ there exists $\hat{f}_{k(i)}$ such that \begin{equation}\label{EBTM7} \int_{\hat{{\mathscr{B}}}_{N_1}} |f(x,t)r_i(x,t) - \hat{f}_{k(i)}(x,t)|\mathrm{d} x \mathrm{d} t \leq \frac{\epsilon}{24\|f\|_{{L}^1({\mathds{R}^{d}}\times [0, \infty))}\|h_{j(i)}\|_{\infty}}\,.
\end{equation} Thus, in view of \cref{EBTM5}, \cref{EBTM6}, \cref{EBTM7}, following the steps of Theorem~\ref{TDPCP} (see Step~$3$) we conclude that \begin{align*} \arrowvert \int_{0}^{\infty}\int_{{\mathds{R}^{d}}}f(x,t)\int_{\mathbb{U}} g(x,t,\zeta)v(x,t)(\mathrm{d} \zeta)\mathrm{d} x \mathrm{d} t - \int_{0}^{\infty}\int_{{\mathds{R}^{d}}}f(x,t)\int_{\mathbb{U}} g(x,t,\zeta)\Tilde{v}_{m}^{\epsilon}(x,t)(\mathrm{d} \zeta)\mathrm{d} x \mathrm{d} t\arrowvert \leq \epsilon\,, \end{align*} for all $m\geq \hat{M}_1$, for some positive constant $\hat{M}_1$\,. This completes the proof of the theorem\,. \end{proof} \subsection*{Near Optimality of Piecewise Constant Policies for Finite Horizon Cost} Now, from Theorem~\ref{TDPCMP} and Theorem~\ref{TContFHC}, we have the following near-optimality result\,. \begin{theorem}\label{TFiniteOptApprox1} Suppose that Assumptions (A1), (A3) and (B1) hold. Then for any given $\epsilon>0$ there exists a piecewise constant policy $\bar{v}_{\epsilon}^* \in \mathfrak U_{\mathsf{m}}$ such that \begin{equation}\label{TFiniteOptApprox1A} {\mathcal{J}}_{T}(x, \bar{v}_{\epsilon}^*) \leq {\mathcal{J}}_{T}^*(x) + \epsilon \quad\text{for all} \quad x\in{\mathds{R}^{d}}\,. \end{equation} \end{theorem} \begin{proof} From our previous discussion, we know that there exists $v^*\in \mathfrak U_{\mathsf{m}}$ such that ${\mathcal{J}}_{T}(x, v^*) = {\mathcal{J}}_{T}^*(x)$\,. Since the space of piecewise constant policies is dense in $\mathfrak U_{\mathsf{m}}$ (see Theorem~\ref{TDPCMP}) and the map $v\mapsto {\mathcal{J}}_{T}(x, v)$ is continuous on $\mathfrak U_{\mathsf{m}}$ (see Theorem~\ref{TContFHC}), for any given $\epsilon>0$, one can find a piecewise constant policy $\bar{v}_{\epsilon}^* \in \mathfrak U_{\mathsf{m}}$ such that \cref{TFiniteOptApprox1A} holds\,.
\end{proof} \begin{remark} In view of the existence results as in \cite[Chapter~4]{LSU67-book}, in obtaining the near optimality of piecewise constant Markov policies for finite horizon costs, one can relax the uniform boundedness assumption (B1); in particular, under (A1)-(A3) we can deduce similar results, which extends the results of \cite{KD92}, \cite{HK-02A}, \cite{RF-16A} to a more general control model\,. \end{remark} \section*{Conclusion} We studied regularity properties of induced cost (under several criteria) on a controlled diffusion process with respect to a control policy space defined by Borkar \cite{Bor89}. We then studied implications of these properties on existence and, in particular, approximations for optimal controlled diffusions. Via such a unified approach, we arrived at very general approximation results for optimal control policies by quantized (finite action / piecewise constant) stationary control policies for a general class of controlled diffusions in the whole space ${\mathds{R}^{d}}$, as well as time-discretizations for the criteria with finite horizons. \end{document}
\begin{document} \title[Twisted Gauss sums and totally isotropic subspaces] {Higher level quadratically twisted Gauss sums and totally isotropic subspaces} \author{Lynne Walling} \address{School of Mathematics, University of Bristol, University Walk, Clifton, Bristol BS8 1TW, United Kingdom; phone +44 (0)117 331-5245} \email{[email protected]} \keywords{Gauss sums, quadratic forms} \begin{abstract} We consider a generalized Gauss sum supported on matrices over a number field. We evaluate this Gauss sum and relate it to the number of totally isotropic subspaces of related quadratic spaces. Then we consider a further generalization of such a Gauss sum, realizing its value in terms of numbers of totally isotropic subspaces of related quadratic spaces. \end{abstract} \maketitle \def\thefootnote{} \footnote{2010 {\it Mathematics Subject Classification}: Primary 11L05, 11E08 } \def\thefootnote{\arabic{footnote}} \section{Introduction} Gauss sums and their numerous generalizations are ubiquitous in number theory. When studying the action of Hecke operators on half-integral weight Hilbert-Siegel modular forms, the generalized Gauss sum we encounter is defined as follows. Let ${\mathbb K}$ be a number field with $\mathcal O$ its ring of integers, ${\mathfrak P}$ a nondyadic prime ideal in $\mathcal O$, and $\mathbb F=\mathcal O/{\mathfrak P}$; we fix $\rho\in\partial^{-1}{\mathfrak P}^{-1}$ so that $\rho\mathcal O_{{\mathfrak P}}=\partial^{-1}{\mathfrak P}^{-1}\mathcal O_{{\mathfrak P}}$ (where $\partial$ is the different of ${\mathbb K}$). Then for $T\in\mathbb F^{n,n}_{\sym}$ (meaning that $T$ is a symmetric $n\times n$ matrix over $\mathbb F$), we set $$\mathcal G^*_T({\mathfrak P})=\sum_{S\in\mathbb F^{n,n}_{\sym}} \left(\frac{\det S}{{\mathfrak P}}\right)\e\{2TS\rho\}$$ where $\sigma$ denotes the matrix trace map, $\e\{*\}=\exp(\pi i \,\mathrm{Tr}^{{\mathbb K}}_{\mathbb Q}(\sigma(*)))$, and $\left(\frac{*}{{\mathfrak P}}\right)$ is the Legendre symbol.
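For instance, in the base case $n=1$ this reduces to a twisted classical quadratic Gauss sum: for $t\in\mathbb F^\times$, the substitution $S\mapsto t^{-1}S$ together with $\left(\frac{t^{-1}}{{\mathfrak P}}\right)=\left(\frac{t}{{\mathfrak P}}\right)$ gives

```latex
\mathcal G^*_t({\mathfrak P})
   =\sum_{s\in\mathbb F}\left(\frac{s}{{\mathfrak P}}\right)\e\{2ts\rho\}
   =\left(\frac{t}{{\mathfrak P}}\right)\sum_{s\in\mathbb F}\left(\frac{s}{{\mathfrak P}}\right)\e\{2s\rho\}
   =\left(\frac{t}{{\mathfrak P}}\right)\mathcal G^*_1({\mathfrak P}),
\qquad
\mathcal G^*_0({\mathfrak P})=\sum_{s\in\mathbb F}\left(\frac{s}{{\mathfrak P}}\right)=0.
```

(These are exactly the $n=1$ cases of Theorem 1.1(b) below, with $m=c=0$.)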
One sees that for $M,N\in\mathcal O^{n,n}_{\sym}$ with $M\equiv N\ (\text{mod }{\mathfrak P})$, we have $\e\{2M\rho\}=\e\{2N\rho\}$; consequently, $\mathcal G^*_T({\mathfrak P})$ is well-defined, although it is dependent on the choice of $\rho$. For our application to half-integral weight Hecke operators, we need to relate these Gauss sums to $R^*(T\perp\big<1\big>,0_a)$, which is the number of $a$-dimensional totally isotropic subspaces of the $(n+1)$-dimensional $\mathbb F$-space $V$ whose quadratic form is given by $T\perp\big<1\big>$. (A subspace $W$ of $V$ is totally isotropic if the quadratic form restricted to $W$ is 0, and $A\perp B$ denotes the block-diagonal matrix $\diag(A,B)$.) In Theorem 1.1 we evaluate $\mathcal G^*_T({\mathfrak P})$, and in Corollary 1.2 we give $\mathcal G^*_T({\mathfrak P})$ in terms of $R^*(T\perp\big<1\big>,0_a)$. To state the theorem, set $\varepsilon=\left(\frac{-1}{{\mathfrak P}}\right),$ and fix $\omega\in\mathbb F$ so that $\omega$ is not a square in $\mathbb F$; set $J_n=I_{n-1}\perp\big<\omega\big>$. For $T,S\in\mathbb F^{n,n}_{\sym}$, write $T\sim S$ if there is some $G\in GL_n(\mathbb F)$ so that $T=\,^tGSG$. Note that with $d=\rank T$, either $T\sim I_d\perp 0_{n-d}$ or $T\sim J_d\perp 0_{n-d}$. With this notation, we have the following. \begin{thm} Take $T\in\mathbb F^{n,n}_{\sym}$ where $n\in\mathbb Z_+$. Suppose that $0\le d\le n$ and $T\sim I_d\perp 0_{n-d}$ or $T\sim J_d\perp 0_{n-d}$. Take $c$ so that $d=2c$ or $d=2c+1$. \begin{enumerate} \item[(a)] Suppose that $n=2m$. Then with $N({\mathfrak P})$ the norm of ${\mathfrak P}$, $$\mathcal G^*_T({\mathfrak P})=(-1)^c\varepsilon^m N({\mathfrak P})^{m^2}\cdot \prod_{i=1}^{m-c}(N({\mathfrak P})^{2i-1}-1).$$ \item[(b)] Suppose that $n=2m+1$. When $d=2c$, $\mathcal G^*_T({\mathfrak P})=0$.
When $d=2c+1$, $$\mathcal G^*_T({\mathfrak P})=(-1)^c\varepsilon^{m+c}N({\mathfrak P})^{m^2+2m-c}\,\mathcal G^*_1({\mathfrak P})\cdot \prod_{i=1}^{m-c}(N({\mathfrak P})^{2i-1}-1)$$ if $T\sim I_d\perp 0_{n-d}$, and $\mathcal G^*_T({\mathfrak P})=-\mathcal G^*_{I_d\perp 0_{n-d}}({\mathfrak P})$ if $T\sim J_d\perp 0_{n-d}$. \end{enumerate} \end{thm} \begin{cor} Take $T\in\mathbb F^{n,n}_{\sym}$ where $n\in\mathbb Z_+$. Let $V$ be the $(n+1)$-dimensional space over $\mathbb F$ with quadratic form given by $T\perp\big<1\big>$, and let $R^*(T\perp\big<1\big>,0_a)$ be the number of $a$-dimensional totally isotropic subspaces of $V$. We have $$\left(\mathcal G_1^*({\mathfrak P})\right)^n\mathcal G^*_T({\mathfrak P}) =\sum_{a=0}^n (-1)^{n+a}N({\mathfrak P})^{n(n+1)/2+a(a-n)} R^*(T\perp\big<1\big>,0_a).$$ \end{cor} To prove Theorem 1.1, we perform a deconstruction to reduce $\mathcal G^*_T({\mathfrak P})$ to a sum in terms of Gauss sums $\mathcal G^*_Y({\mathfrak P})$ where the $Y$ are smaller than $T$. For this we repeatedly use the elementary fact that $\sigma(AB)=\sigma(BA)$, knowledge of representation numbers over finite fields, and elementary combinatorial methods. Then using induction, we prove Theorem 1.1; Corollary 1.2 then follows from Lemma 3.1 of \cite{half-int-aps}. In Proposition 4.1, we consider the following generalized Gauss sum: with $T\in\mathbb F^{n,n}_{\sym}$ and $0\le r\le n$, set $$\mathcal G^*_T({\mathfrak P};r)= \sum_{S\sim I_r\perp 0_{n-r}}\e\{2TS\rho\}-\sum_{S\sim J_r\perp 0_{n-r}}\e\{2TS\rho\}$$ (so for $r=n$, this is $\mathcal G^*_T({\mathfrak P})$). We again deconstruct $\mathcal G^*_T({\mathfrak P};r)$ as a sum in terms of Gauss sums $\mathcal G^*_Y({\mathfrak P})$ with the $Y$ smaller than $T$, and then from this and Theorem 1.1, we describe $\mathcal G^*_T({\mathfrak P};r)$ in terms of numbers of totally isotropic subspaces of quadratic spaces related to $T$.
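As a quick numerical sanity check of Theorem 1.1, one can brute-force a finite-field analogue of $\mathcal G^*_T$ over a prime field $\mathbb F_p$, replacing $\e\{2TS\rho\}$ by $\exp(2\pi i\,\sigma(TS)/p)$; this translation to $\mathbb F_p$, and the script below, are illustrative assumptions rather than part of the number-field setting of this paper:

```python
import itertools
import cmath

def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p, via Euler's criterion."""
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def det_mod(M, p):
    """Determinant of a small integer matrix, reduced mod p (cofactor expansion)."""
    n = len(M)
    if n == 1:
        return M[0][0] % p
    d = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        d += (-1) ** j * M[0][j] * det_mod(minor, p)
    return d % p

def gauss_sum(T, p):
    """Brute-force sum over symmetric S in F_p^{n,n} of (det S/p) e^{2 pi i tr(TS)/p}."""
    n = len(T)
    pairs = [(i, j) for i in range(n) for j in range(i, n)]
    total = 0j
    for vals in itertools.product(range(p), repeat=len(pairs)):
        S = [[0] * n for _ in range(n)]
        for (i, j), v in zip(pairs, vals):
            S[i][j] = S[j][i] = v
        tr = sum(T[i][k] * S[k][i] for i in range(n) for k in range(n)) % p
        total += legendre(det_mod(S, p), p) * cmath.exp(2j * cmath.pi * tr / p)
    return total
```

For $p=3$ this reproduces, for example, $\mathcal G^*_{0_2}=\varepsilon\, p(p-1)$ and $\mathcal G^*_{I_2}=-\varepsilon p$ from part (a) of the theorem, and $\mathcal G^*_{J_1}=-\mathcal G^*_{I_1}$ from part (b).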
It is important for us to note that in \cite{S}, Saito studies analogues of these Gauss sums over finite fields, with an interest in applications to twists of Siegel modular forms. Although his main interest is in twists by the quadratic and the trivial characters, he considers twists by all characters, making his arguments more complicated than ours. We note that Theorem 1.3 of \cite{S} includes the results of our Theorem 1.1. Saito also considers finite field analogues of the Gauss sums $\mathcal G^*_T({\mathfrak P};r)$. He develops relations between these Gauss sums, some of which are quite complicated. In Proposition 4.1 (a), we present a simple relation very similar to his relation in Proposition 1.12 of \cite{S}; then in Proposition 4.1 (b) we present formulas for these Gauss sums in terms of numbers of totally isotropic subspaces. The value of this paper is to present an approach simpler than that of \cite{S}, demonstrating our deconstruction technique, and to relate these Gauss sums to representations of zero. Note that it is quite easy to modify our techniques to generalized Gauss sums twisted by the trivial character, and to Gauss sums over a finite field $\mathbb F_q$ with odd characteristic $p$ where $\e\{*\}$ is replaced by $\exp(\pi i Tr^{\mathbb F_q}_{\mathbb F_p}(\sigma(*))/p).$ \section{Notation} Besides the notation given in the introduction, we define the following. For $t,s\in \mathbb Z_+$ with $s\le t$, and $X\in\mathbb F^{t,t}_{\sym}$, $Y\in\mathbb F^{s,s}_{\sym}$, define the representation number $r(X,Y)$ to be $$r(X,Y)=\#\{C\in\mathbb F^{t,s}:\ ^tCXC=Y\ \},$$ and define the primitive representation number $r^*(X,Y)$ to be $$r^*(X,Y)=\#\{C\in\mathbb F^{t,s}:\ ^tCXC=Y,\ \rank C=s\ \}.$$ Let $o(X)$ denote the order of the orthogonal group of $X$; so $o(X)=r^*(X,X).$ We make great use of the following elementary functions, that help us encode formulas involving representation numbers.
\begin{align*} \boldsymbol \mu(t,s)&=\prod_{i=0}^{s-1}(N({\mathfrak P})^{t-i}-1),\ \boldsymbol \delta(t,s)=\prod_{i=0}^{s-1}(N({\mathfrak P})^{t-i}+1),\\ \boldsymbol \beta(t,s)&=\frac{\boldsymbol \mu(t,s)}{\boldsymbol \mu(s,s)},\ \boldsymbol \nu(t,s)=\prod_{i=s}^{t-1}(N({\mathfrak P})^t-N({\mathfrak P})^i),\ \boldsymbol \gamma(t,s)=\frac{\boldsymbol \mu\boldsymbol \delta(t,s)}{\boldsymbol \mu\boldsymbol \delta(s,s)}. \end{align*} We agree that when $s=0$, the value of any of these functions is 1; when $s<0$, we agree that $\boldsymbol \beta(t,s)=0$. Note that $\boldsymbol \beta(t,s)$ is the number of $s$-dimensional subspaces of a $t$-dimensional space over $\mathbb F$, and $\boldsymbol \nu(t,0)$ is the number of bases for a $t$-dimensional space. Finally, for $d\in\mathbb Z_+$ and $i\in\mathbb Z$ with $0\le i\le d$, we set $U_{d,i}=I_i\perp 0_{d-i}$ and $\overline U_{d,i}=J_i\perp 0_{d-i}.$ \section{Proofs of Theorem 1.1 and Corollary 1.2} We begin by proving Theorem 1.1. As ${\mathfrak P}$ is fixed, in this section we write $\mathcal G^*_T$ for $\mathcal G^*_T({\mathfrak P})$. First notice that $$\mathcal G^*_{0_n}=\sum_{Y\sim I_n}1-\sum_{Y\sim J_n}1 =\frac{|GL_n(\mathbb F)|}{o(I_n)}-\frac{|GL_n(\mathbb F)|}{o(J_n)};$$ so using Lemma 5.1, when $n$ is odd we get $\mathcal G^*_{0_n}=0$, and when $n=2m$ we get $$\mathcal G^*_{0_n}=\varepsilon^mN({\mathfrak P})^{m^2}\frac{\boldsymbol \mu(2m,2m)}{\boldsymbol \mu\boldsymbol \delta(m,m)}.$$ For the rest of this section, take $d$ so that $0<d<n$. With $G\in GL_n(\mathbb F)$, we have $$\e\{2\,^tGI_nGU_{n,d}\cdot\rho\} =\e\{2Y'\rho\},\ \e\{2\,^tGI_nG\overline U_{n,d}\cdot\rho\} =\e\{2Y'J_d\rho\}$$ where $Y'$ is the upper left block of $^tGI_nG$; similarly, $$\e\{2\,^tGJ_nGU_{n,d}\cdot\rho\} =\e\{2Y'\rho\},\ \e\{2\,^tGJ_nG\overline U_{n,d}\cdot\rho\} =\e\{2Y'J_d\rho\}$$ where $Y'$ is the upper left block of $^tGJ_nG$.
The number of $Y\sim I_n$ with upper left $d\times d$ block $Y'$ is $\boldsymbol \nu(n,d) r^*(I_n,Y')/o(I_n),$ as for $C\in\mathbb F^{n,d}$ with $\rank C=d$, the number of ways to extend $C$ to an element of $GL_n(\mathbb F)$ is $\boldsymbol \nu(n,d)$. Similarly, the number of $Y\sim J_n$ with upper left $d\times d$ block $Y'$ is $\boldsymbol \nu(n,d) r^*(J_n,Y')/o(J_n).$ Hence we have \begin{align*} \mathcal G^*_{U_{n,d}} &=\sum_{Y\sim I_n} \e\{2YU_{n,d}\cdot\rho\} -\sum_{Y\sim J_n} \e\{2YU_{n,d}\cdot\rho\}\\ &=\sum_{G\in GL_n(\mathbb F)} \left( \frac{\e\{2\,^tGI_nGU_{n,d}\cdot\rho\}}{o(I_n)} - \frac{\e\{2\,^tGJ_nGU_{n,d}\cdot\rho\}}{o(J_n)}\right)\\ &=\boldsymbol \nu(n,d)\sum_{Y'\in\mathbb F^{d,d}_{\sym}} \left(\frac{r^*(I_n,Y')}{o(I_n)}-\frac{r^*(J_n,Y')}{o(J_n)}\right)\e\{2Y'\rho\}. \end{align*} Note that we can partition $\mathbb F^{d,d}_{\sym}$ into $GL_d(\mathbb F)$-orbits, and in Lemma 5.1, we compute representation numbers $r^*(\cdot,\cdot)$.
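To spell out the extension count used above (a routine verification, included for the reader's convenience): if $C\in\mathbb F^{n,d}$ has $\rank C=d$, then extending $C$ to an element of $GL_n(\mathbb F)$ means appending columns $v_{d+1},\ldots,v_n$, where each successive column may be any vector outside the span of the columns chosen so far; hence the number of extensions is $$\prod_{i=d}^{n-1}\left(N({\mathfrak P})^n-N({\mathfrak P})^i\right)=\boldsymbol \nu(n,d),$$ independent of the choice of $C$.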
We find that when $n$ is odd, we have $o(I_n)=o(J_n)$, $r^*(I_n, U_{d,2k})-r^*(J_n,U_{d,2k})=0,$ and $$r^*(I_n,U_{d,2k+1})-r^*(J_n,U_{d,2k+1}) =r^*(J_n,\overline U_{d,2k+1})-r^*(I_n,\overline U_{d,2k+1}).$$ Hence with $n=2m+1$, using Lemma 5.1 and then Lemma 5.3, we get \begin{align*} \mathcal G^*_{U_{n,d}} &=\frac{\boldsymbol \nu(n,d)}{o(I_n)} \sum_{k=0}^{d/2} 2\varepsilon^{m-k}N({\mathfrak P})^{2mk-k^2+m-k+(d-2k-1)(d-2k-2)/2}\\ &\quad\cdot \boldsymbol \mu\boldsymbol \delta(m,d-k-1) \left(\sum_{Y\sim U_{d,2k+1}}\e\{2Y\rho\} -\sum_{Y\sim\overline U_{d,2k+1}}\e\{2Y\rho\}\right)\\ &=\frac{\boldsymbol \nu(n,d)}{o(I_n)} \sum_{k=0}^{d/2} 2\varepsilon^{m-k}N({\mathfrak P})^{2mk-k^2+m-k+(d-2k-1)(d-2k-2)/2}\\ &\quad\cdot \boldsymbol \mu\boldsymbol \delta(m,d-k-1) \cdot\sum_{\cls Y\in\mathbb F^{2k+1,2k+1}_{\sym}}\frac{r^*(I_d,Y)}{o(Y)}\, \mathcal G^*_Y \end{align*} (where $\cls Y$ is the isometry class of $Y$, or equivalently, the $GL_d(\mathbb F)$-orbit of $Y$). With $n=2m+1$, similar reasoning gives us \begin{align*} \mathcal G^*_{\overline U_{n,d}} &=\boldsymbol \nu(n,d)\sum_{Y'\in\mathbb F^{d,d}_{\sym}} \left(\frac{r^*(I_n,Y')}{o(I_n)}-\frac{r^*(J_n,Y')}{o(J_n)}\right) \e\{2Y'J_d\rho\}\\ &=\frac{\boldsymbol \nu(n,d)}{o(I_n)} \sum_{k=0}^{d/2} 2\varepsilon^{m-k}N({\mathfrak P})^{2mk-k^2+m-k+(d-2k-1) (d-2k-2)/2}\\ &\quad\cdot \boldsymbol \mu\boldsymbol \delta(m,d-k-1)\cdot \sum_{\cls Y\in\mathbb F^{2k+1,2k+1}_{\sym}}\frac{r^*(J_d,Y)}{o(Y)}\, \mathcal G^*_Y. \end{align*} Now suppose that $n=2m$. Then using Lemma 5.1 we have \begin{align*} &\frac{r^*(I_n,Y)}{o(I_n)}-\frac{r^*(J_n,Y)}{o(J_n)}\\ &\quad= \frac{1}{o(I_{n+1})} (r^*(I_{2m+1},\big<1\big>\perp Y)-r^*(J_{2m+1},\big<1\big>\perp Y)).
\end{align*} So following the above reasoning, for $n=2m$ we have \begin{align*} \mathcal G^*_{U_{n,d}} &=\frac{\boldsymbol \nu(n,d)}{o(I_{n+1})} \sum_{k=0}^{d/2} 2\varepsilon^{m-k}N({\mathfrak P})^{2mk-k^2+m-k+(d-2k)(d-2k-1)/2}\\ &\quad\cdot \boldsymbol \mu\boldsymbol \delta(m,d-k)\cdot \sum_{\cls Y\in\mathbb F^{2k,2k}_{\sym}}\frac{r^*(I_d,Y)}{o(Y)}\, \mathcal G^*_Y,\\ \mathcal G^*_{\overline U_{n,d}} &=\frac{\boldsymbol \nu(n,d)}{o(I_{n+1})} \sum_{k=0}^{d/2} 2\varepsilon^{m-k}N({\mathfrak P})^{2mk-k^2+m-k+(d-2k)(d-2k-1)/2}\\ &\quad\cdot \boldsymbol \mu\boldsymbol \delta(m,d-k)\cdot \sum_{\cls Y\in\mathbb F^{2k,2k}_{\sym}}\frac{r^*(J_d,Y)}{o(Y)}\, \mathcal G^*_Y. \end{align*} To evaluate $\mathcal G^*_{I_n}$ and $\mathcal G^*_{J_n}$, we make use of the (non-twisted) Gauss sums \begin{align*} \mathcal G_{I_n}&=\sum_{U\in\mathbb F^{n,n}}\e\{2I_n[U]\rho\},\ \mathcal G_{J_n}=\sum_{U\in\mathbb F^{n,n}}\e\{2J_n[U]\rho\},\\ \overline{\mathcal G}_{I_n} &=\sum_{U\in\mathbb F^{n,n}}\e\{2I_n[U]J_n\rho\},\ \overline{\mathcal G}_{J_n} =\sum_{U\in\mathbb F^{n,n}}\e\{2J_n[U]J_n\rho\}. \end{align*} For $U\in\mathbb F^{n,n}$, by looking at the trace of the matrix $^tUU$, it is easy to check that $\mathcal G_{I_n}=(\mathcal G_1^*)^{n^2}$. Similarly, we have \begin{align*} \mathcal G_{J_n}&=(\mathcal G_1^*)^{n(n-1)}\cdot(\mathcal G^*_{\omega})^{n}= \overline{\mathcal G}_{I_n},\ \overline{\mathcal G}_{J_n}=(\mathcal G^*_1)^{(n-1)^2+1}\cdot(\mathcal G^*_{\omega})^{2(n-1)}. \end{align*} Classical techniques give us $\mathcal G^*_{\omega}=-\mathcal G^*_1$ and $(\mathcal G^*_1)^2=\varepsilon N({\mathfrak P})$.
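For the reader's convenience, we indicate the (standard) computation behind the first of these evaluations: assuming, as usual, that $y\mapsto\e\{2y\rho\}$ is a nontrivial additive character on $\mathbb F$, we have $\sigma(\,^tUU)=\sum_{i,j}u_{ij}^2$, so the sum defining $\mathcal G_{I_n}$ factors as $\prod_{i,j}\sum_{u\in\mathbb F}\e\{2u^2\rho\}$, and $$\sum_{u\in\mathbb F}\e\{2u^2\rho\} =1+2\sum_{y\sim1}\e\{2y\rho\} =1+\mathcal G^*_1+\sum_{y\ne0}\e\{2y\rho\} =\mathcal G^*_1,$$ since $\sum_{y\in\mathbb F}\e\{2y\rho\}=0$.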
On the other hand, we have \begin{align*} \mathcal G_{I_{n}}&=\sum_{Y\in\mathbb F^{n,n}_{\sym}} r(I_{n}, Y) \e\{2Y\rho\},\ \mathcal G_{J_{n}}=\sum_{Y\in\mathbb F^{n,n}_{\sym}} r(J_{n}, Y) \e\{2Y\rho\},\\ \overline{\mathcal G}_{I_{n}}&=\sum_{Y\in\mathbb F^{n,n}_{\sym}} r(I_{n}, Y) \e\{2YJ_n\rho\},\ \overline{\mathcal G}_{J_{n}}=\sum_{Y\in\mathbb F^{n,n}_{\sym}} r(J_{n}, Y) \e\{2YJ_n\rho\}. \end{align*} Partitioning $\mathbb F^{n,n}_{\sym}$ into $GL_{n}(\mathbb F)$-orbits, we get \begin{align*} &\frac{1}{o(I_n)}\,\mathcal G_{I_n}-\frac{1}{o(J_n)}\,\mathcal G_{J_n}\\ &\quad= \frac{r(I_n,0_n)}{o(I_n)}-\frac{r(J_n,0_n)}{o(J_n)}\\ &\qquad+ \sum_{0<\ell\le n}\sum_{Y\sim U_{n,\ell}} \left(\frac{r(I_n,U_{n,\ell})}{o(I_n)}- \frac{r(J_n,U_{n,\ell})}{o(J_n)}\right) \e\{2Y\rho\}\\ &\qquad + \sum_{0<\ell\le n}\sum_{Y\sim\overline U_{n,\ell}} \left(\frac{r(I_n,\overline U_{n,\ell})}{o(I_n)}- \frac{r(J_n,\overline U_{n,\ell})}{o(J_n)}\right) \e\{2Y\rho\}. \end{align*} Notice that $r(I_n,I_{\ell}\perp 0_{n-\ell})=r^*(I_n,I_{\ell})r(I_{n-\ell},0_{n-\ell}).$ So using Lemmas 5.1 and 5.2, and then Lemma 5.3, when $n$ is odd we get \begin{align*} \mathcal G_{I_n}-\mathcal G_{J_n} &= \sum_{\substack{0\le \ell\le n\\ \ell\,\text{odd}}} (r(I_{n}, U_{n,\ell})-r(J_{n}, U_{n,\ell}))\\ &\qquad\cdot \left(\sum_{Y\sim U_{n,\ell}}\e\{2Y\rho\} -\sum_{Y\sim \overline U_{n,\ell}}\e\{2Y\rho\}\right)\\ &= \sum_{\substack{0\le \ell\le n\\ \ell\,\text{odd}}} (r(I_{n}, U_{n,\ell})-r(J_{n}, U_{n,\ell})) \sum_{\cls Y\in\mathbb F^{\ell,\ell}_{\sym}} \frac{r^*(I_{n},Y)}{o(Y)}\, \mathcal G^*_Y. \end{align*} Similarly, with $n$ odd, \begin{align*} \overline{\mathcal G}_{I_{2m+1}}-\overline{\mathcal G}_{J_{2m+1}} &=\sum_{\substack{0\le \ell\le n\\ \ell\,\text{odd}}} (r(I_{n}, U_{n,\ell})-r(J_{n}, U_{n,\ell})) \sum_{\cls Y\in\mathbb F^{\ell,\ell}_{\sym}} \frac{r^*(J_{n},Y)}{o(Y)}\, \mathcal G^*_Y.
\end{align*} When $n$ is even, similar arguments give us \begin{align*} &r^*(I_{n+1},1)\,\mathcal G_{I_{n}}-r^*(J_{n+1},1)\,\mathcal G_{J_{n}}\\ &= \sum_{\substack{0\le \ell\le n\\ \ell\,\text{even}}} (r(I_{n+1}, \big<1\big>\perp U_{n,\ell})-r(J_{n+1}, \big<1\big>\perp U_{n,\ell})) \sum_{\cls Y\in\mathbb F^{\ell,\ell}_{\sym}} \frac{r^*(I_{n},Y)}{o(Y)}\, \mathcal G^*_Y, \end{align*} \begin{align*} &r^*(I_{n+1},1)\,\overline{\mathcal G}_{I_{n}}-r^*(J_{n+1},1)\,\overline{\mathcal G}_{J_{n}}\\ &= \sum_{\substack{0\le \ell\le n\\ \ell\,\text{even}}} (r(I_{n+1}, \big<1\big>\perp U_{n,\ell})-r(J_{n+1}, \big<1\big>\perp U_{n,\ell})) \sum_{\cls Y\in\mathbb F^{\ell,\ell}_{\sym}} \frac{r^*(J_{n},Y)}{o(Y)}\, \mathcal G^*_Y. \end{align*} Now we argue by induction on $m$ to prove the theorem in the case that $n=2m+1$. For $m=0$, we have $\mathcal G^*_{U_{1,1}}=\mathcal G^*_1$ (by definition of $\mathcal G^*_1$), and as we have already noted, $\mathcal G^*_{\overline U_{1,1}}=-\mathcal G^*_1$. So suppose that $m\ge1$ and that the theorem holds for all $\mathcal G^*_Y$ where $Y\in\mathbb F^{2r+1,2r+1}_{\sym}$ and $0\le r<m$. With $0<d<n$, we begin with the expression for $\mathcal G^*_{U_{n,d}}$ that we derived above. By the induction hypothesis, for $2k+1\le d$ and $Y\in\mathbb F^{2k+1,2k+1}_{\sym}$, we have $\mathcal G^*_Y=\varepsilon^k N({\mathfrak P})^{k^2+2k}\,\mathcal G^*_1\cdot h_Y$ where $h_Y$ is defined in Lemma 5.4. So by Lemma 5.4 we have $\mathcal G^*_{U_{n,d}}=0$ when $d$ is even, and \begin{align*} \mathcal G^*_{U_{n,d}} &=\frac{\boldsymbol \nu(n,d)}{o(I_n)} \sum_{k=0}^{d/2} 2\varepsilon^{m-k}N({\mathfrak P})^{2mk-k^2+m-k+(d-2k-1)(d-2k-2)/2}\\ &\quad\cdot \boldsymbol \mu\boldsymbol \delta(m,d-k-1)\cdot (-1)^k\varepsilon^{k+c} N({\mathfrak P})^{k^2+c}\,\mathcal G^*_1\,\boldsymbol \gamma(c,k) \end{align*} when $d$ is odd with $d=2c+1$.
So assume now that $d=2c+1$; then we have \begin{align*} \mathcal G^*_{U_{n,d}} &= \frac{\boldsymbol \nu(n,d)}{o(I_n)} \,2\varepsilon^{m+c} N({\mathfrak P})^{m+2c^2}\mathcal G^*_1\cdot A(c,0) \text{ where }\\ A(t,q)&=\sum_{k=0}^t(-1)^k N({\mathfrak P})^{2k(m+k-2t-q)}\boldsymbol \mu\boldsymbol \delta(m,2t+q-k) \boldsymbol \gamma(t,k). \end{align*} Since $\boldsymbol \gamma(t,k)=N({\mathfrak P})^{2k}\boldsymbol \gamma(t-1,k)+\boldsymbol \gamma(t-1,k-1)$, we have \begin{align*} A(t,q) &=\sum_{k=0}^{t-1} (-1)^k N({\mathfrak P})^{2k(m+k+1-2t-q)} \boldsymbol \gamma(t-1,k)\\ &\quad\cdot (\boldsymbol \mu\boldsymbol \delta(m,2t+q-k)-N({\mathfrak P})^{2(m-2t-q+k+1)}\boldsymbol \mu\boldsymbol \delta(m,2t+q-k-1))\\ &=-A(t-1,q+1)=(-1)^tA(0,t+q)=(-1)^t \boldsymbol \mu\boldsymbol \delta(m,t+q). \end{align*} Therefore, using that $\boldsymbol \nu(n,d)=N({\mathfrak P})^{(n-d)(n+d-1)/2}\boldsymbol \mu(n-d,n-d)$, \begin{align*} \mathcal G^*_{U_{n,d}} &=(-1)^c \varepsilon^{m+c} N({\mathfrak P})^{m^2+2m-c}\cdot \frac{\boldsymbol \mu(2(m-c),2(m-c))}{\boldsymbol \mu\boldsymbol \delta(m-c,m-c)}\, \mathcal G^*_1, \end{align*} as claimed in the statement of the theorem. A virtually identical argument gives us $\mathcal G^*_{\overline U_{n,d}}=-\mathcal G^*_{U_{n,d}}$. Now, still taking $n=2m+1$ and beginning with our earlier expression for $\mathcal G_{I_n}-\mathcal G_{J_n},$ we use Lemmas 5.1 and 5.2 to give us \begin{align*} \mathcal G_{I_n}-\mathcal G_{J_n}&=2\sum_{k=0}^{m}\sum_{s=0}^{m-k}(-1)^{m-k-s} \varepsilon^{m-k}N({\mathfrak P})^{2mk-k^2+m-k+(m-k)^2+s^2}\\ &\quad\cdot \boldsymbol \mu\boldsymbol \delta(m,k) \boldsymbol \beta\boldsymbol \delta(m-k,s) \cdot\sum_{\cls Y\in\mathbb F^{2k+1,2k+1}_{\sym}} \frac{r^*(I_{2m+1},Y)}{o(Y)}\, \mathcal G^*_Y.
\end{align*} Using that $o(I_n)=2N({\mathfrak P})^{m^2}\boldsymbol \mu\boldsymbol \delta(m,m)$, and the induction hypothesis for $Y\in\mathbb F^{2k+1,2k+1}_{\sym}$ with $k<m$, we have \begin{align*} \mathcal G_{I_n}-\mathcal G_{J_n}&= o(I_{2m+1})\mathcal G^*_{I_{2m+1}} -o(I_{2m+1})\varepsilon^mN({\mathfrak P})^{m^2+2m}\mathcal G^*_1\, h_{I_{2m+1}}+2\mathcal G^*_1B \end{align*} where \begin{align*}B&=\sum_{k=0}^m\sum_{s=0}^{m-k}(-1)^{m-k-s} \varepsilon^{m-k}N({\mathfrak P})^{m^2+m-k+s^2}\boldsymbol \mu\boldsymbol \delta(m,k)\boldsymbol \beta\boldsymbol \delta(m-k,s)\\ &\quad\cdot \varepsilon^kN({\mathfrak P})^{k^2+2k} \sum_{\cls Y\in\mathbb F^{2k+1,2k+1}_{\sym}} \frac{r^*(I_{2m+1},Y)}{o(Y)}\, h_Y. \end{align*} By Lemma 5.4, we get \begin{align*} B&=\sum_{k=0}^m\sum_{s=0}^{m-k}(-1)^{m-s}N({\mathfrak P})^{m^2+2m+k(k-1)+s^2} \boldsymbol \mu\boldsymbol \delta(m,k)\boldsymbol \beta\boldsymbol \delta(m-k,s)\boldsymbol \gamma(m,k). \end{align*} Since $\boldsymbol \gamma(m,k)\boldsymbol \beta\boldsymbol \delta(m-k,s) =\boldsymbol \beta\boldsymbol \delta(m,s)\boldsymbol \gamma(m-s,k),$ we can sum on $0\le s\le m$, $0\le k\le m-s$. Then replacing $s$ by $m-s$ and using that $\boldsymbol \beta(m,m-s)=\boldsymbol \beta(m,s),$ we get \begin{align*} &B=\sum_{s=0}^m(-1)^sN({\mathfrak P})^{2m^2+2m-2ms+s^2} \boldsymbol \delta(m,m-s)\boldsymbol \beta(m,s)\cdot C(s) \text{ where}\\ &C(t)=\sum_{k=0}^t N({\mathfrak P})^{k(k-1)}\boldsymbol \mu\boldsymbol \delta(m,k)\boldsymbol \gamma(t,k). \end{align*} Since $\boldsymbol \gamma(t,k)=N({\mathfrak P})^{2k}\boldsymbol \gamma(t-1,k)+\boldsymbol \gamma(t-1,k-1),$ we have \begin{align*} C(t) &=N({\mathfrak P})^{2m}C(t-1)=N({\mathfrak P})^{2mt}C(0)=N({\mathfrak P})^{2mt}. \end{align*} Therefore \begin{align*} &B=N({\mathfrak P})^{2m^2+2m}D(m,0) \text{ where}\\ &D(t,q)=\sum_{s=0}^t (-1)^s N({\mathfrak P})^{s(s+q)}\boldsymbol \delta(t+q,t-s)\boldsymbol \beta(t,s).
\end{align*} Since $\boldsymbol \beta(t,s)=N({\mathfrak P})^s\boldsymbol \beta(t-1,s)+\boldsymbol \beta(t-1,s-1),$ we have \begin{align*} D(t,q) &=D(t-1,q+1)=D(0,t+q)=1. \end{align*} Therefore $B=N({\mathfrak P})^{2m^2+2m}$. Our earlier computations show that $\mathcal G_{I_n}-\mathcal G_{J_n}=2 N({\mathfrak P})^{2m^2+2m}\mathcal G^*_1$; so $$\mathcal G^*_{I_{2m+1}}=\varepsilon^mN({\mathfrak P})^{m^2+2m}\cdot h_{I_{2m+1}}\mathcal G^*_1 =(-1)^mN({\mathfrak P})^{m^2+m}\,\mathcal G^*_1.$$ A virtually identical argument gives us $\mathcal G^*_{J_n}=-\mathcal G^*_{I_n}$. Now we argue by induction on $m$ to prove the theorem in the case that $n=2m$. Since the computation for $m=1$ is essentially identical to the induction step for $m>1$, we formally define $\mathcal G^*_{I_0}=\mathcal G^*_{J_0}=1$ (which is consistent with the formula claimed in the theorem). So now suppose that $m\ge1$ and that the theorem holds for all $\mathcal G^*_Y$ where $Y\in\mathbb F^{2r,2r}_{\sym}$ and $0\le r<m$. With $0<d<n$, we begin with the expression for $\mathcal G^*_{U_{n,d}}$ that we derived above. Take $c$ so that $d=2c$ or $d=2c+1$.
Using the induction hypothesis, Lemma 5.4, and arguing as we did when $n$ was odd, we get $$\mathcal G^*_{U_{n,d}}=\frac{\boldsymbol \nu(n,d)}{o(I_{n+1})} 2\varepsilon^m N({\mathfrak P})^{m+d(d-1)/2}A(c,d-2c)$$ where $A(t,q)$ is as defined earlier in this proof; recall that $$A(t,q)=(-1)^t\boldsymbol \mu\boldsymbol \delta(m,t+q).$$ Since $\boldsymbol \mu(2(m-c),1)=\boldsymbol \mu\boldsymbol \delta(m-c,1)$, for $d=2c$ or $2c+1$ we get $$\mathcal G^*_{U_{n,d}} =(-1)^c\varepsilon^mN({\mathfrak P})^{m^2} \frac{\boldsymbol \mu(2(m-c),2(m-c))}{\boldsymbol \mu\boldsymbol \delta(m-c,m-c)}.$$ A virtually identical argument gives us $\mathcal G^*_{\overline U_{n,d}}=\mathcal G^*_{U_{n,d}}.$ Still assuming that $n=2m$ and beginning with our earlier expression for $r^*(I_{n+1},1)\mathcal G_{I_n}-r^*(J_{n+1},1)\mathcal G_{J_n}$, we use Lemmas 5.1 and 5.2 and the induction hypothesis to get \begin{align*} &r^*(I_{n+1},1)\mathcal G_{I_n}-r^*(J_{n+1},1)\mathcal G_{J_n}\\ &\quad=o(I_{n+1})\mathcal G^*_{I_n}-o(I_{n+1})\varepsilon^m N({\mathfrak P})^{m^2} \cdot h_{I_n} +2\varepsilon^m N({\mathfrak P})^{-m}B \end{align*} where $B$ is as in the case of $n$ odd. We saw that $B=N({\mathfrak P})^{2m^2+2m}$, and $$r^*(I_{n+1},1)\mathcal G_{I_n}-r^*(J_{n+1},1)\mathcal G_{J_n} =2\varepsilon^m N({\mathfrak P})^{2m^2+m};$$ so we get $$\mathcal G^*_{I_{2m}}=\varepsilon^m N({\mathfrak P})^{m^2}\cdot h_{I_{2m}} =(-1)^m\varepsilon^m N({\mathfrak P})^{m^2}.$$ The argument to evaluate $\mathcal G^*_{J_{2m}}$ is essentially identical to that of evaluating $\mathcal G^*_{J_{2m+1}}$, where for this we begin with the identity $$r^*(I_{2m+1},1)\overline{\mathcal G}_{I_{2m}}-r^*(J_{2m+1},1)\overline{\mathcal G}_{J_{2m}} =2\varepsilon^m N({\mathfrak P})^{2m^2+m}.$$ This proves the theorem. To prove Corollary 1.2, we first note that by Theorem 1.1, $\left(\mathcal G^*_1\right)^n\mathcal G^*_T$ has no dependence on our choice of $\rho$.
Thus we can follow the argument of Lemma 3.1 of \cite{half-int-aps}, as the techniques are local. In \cite{half-int-aps}, all quadratic forms were assumed to be even; since $2$ is a unit in $\mathbb F$, we have $R^*(T\perp\big<1\big>,0_a)= R^*(2T\perp\big<2\big>,0_a)$, and hence Corollary 1.2 follows. \section{Variations on quadratically twisted Gauss sums} For $T\in\mathbb F^{n,n}_{\sym}$ and $0\le r\le n$, here we consider $\mathcal G_T^*({\mathfrak P};r)$, as defined in the introduction. For $T=0_n$, we have $\mathcal G^*_{0_n}({\mathfrak P};r)=\boldsymbol \beta(n,r)\,\mathcal G^*_{0_r}({\mathfrak P})$, so we only need to consider $T\not=0_n$. \begin{prop} Take $n\in\mathbb Z_+$, $T\in\mathbb F^{n,n}_{\sym}$ and let $d=\rank T$. \begin{enumerate} \item[(a)] Suppose that $0\le 2t+1\le n$. When $d$ is even we have $\mathcal G^*_T({\mathfrak P};2t+1)=0.$ When $d$ is odd with $d=2c+1$, we have $\mathcal G^*_{\overline U_{n,d}}({\mathfrak P};2t+1)=-\mathcal G^*_{U_{n,d}}({\mathfrak P};2t+1),$ and \begin{align*} \mathcal G^*_{U_{n,d}}({\mathfrak P};2t+1) &=\frac{\boldsymbol \nu(n,2c+1)}{\boldsymbol \nu(n-1,2c)}\varepsilon^cN({\mathfrak P})^c\mathcal G^*_1({\mathfrak P}) \mathcal G^*_{U_{n-1,2c}}({\mathfrak P};2t). \end{align*} \item[(b)] Suppose that $0\le 2t\le n$; set $s=n-2t$. Then with $c$ so that $d=2c$ or $2c+1$, we have \begin{align*} \mathcal G^*_{T}({\mathfrak P};2t) &= \frac{\boldsymbol \nu(n,d)}{o(I_{2t+1}\perp0_s)} \sum_{k=0}^c (-1)^k\varepsilon^kN({\mathfrak P})^{s(2k+1)+2tk+t-k}\boldsymbol \mu\boldsymbol \delta(t,k) \\ &\quad\cdot \boldsymbol \gamma(c,k) A_s(t-k,c-k) \end{align*} where \begin{align*} A_s(x,y)= (N({\mathfrak P})^x+\varepsilon^x) r^*(I_{2x}\perp0_s,0_{2y}) - (N({\mathfrak P})^x-\varepsilon^x)r^*(J_{2x}\perp0_s,0_{2y}).
\end{align*} \end{enumerate} \end{prop} \noindent{\bf Remark:} As $\boldsymbol \nu(2y,0)$ is the number of bases for any $2y$-dimensional space, $r^*(T',0_{2y})=\boldsymbol \nu(2y,0)\cdot R^*(T',0_{2y})$ for any symmetric $T'$. \begin{proof} Throughout this proof, we follow the lines of argument used in Section 3. In this way we get \begin{align*} \mathcal G^*_{U_{n,d}}({\mathfrak P};r) &=\boldsymbol \nu(n,d)\sum_{Y\in\mathbb F^{d,d}_{\sym}} \left(\frac{r^*(U_{n,r},Y)}{o(U_{n,r})} -\frac{r^*(\overline U_{n,r},Y)}{o(\overline U_{n,r})}\right) \e\{2Y\rho\}, \end{align*} \begin{align*} \mathcal G^*_{\overline U_{n,d}}({\mathfrak P};r) &=\boldsymbol \nu(n,d)\sum_{Y\in\mathbb F^{d,d}_{\sym}} \left(\frac{r^*(U_{n,r},Y)}{o(U_{n,r})} -\frac{r^*(\overline U_{n,r},Y)}{o(\overline U_{n,r})}\right) \e\{2YJ_d\rho\}. \end{align*} We have $o(0_{n-r})=\boldsymbol \nu(n-r,0),$ and $$o(U_{n,r})=o(I_r)N({\mathfrak P})^{r(n-r)}o(0_{n-r}),\ o(\overline U_{n,r})=o(J_r)N({\mathfrak P})^{r(n-r)}\boldsymbol \nu(n-r,0).$$ First consider the case that $r=2t+1$. Then for even $\ell$ ($\ell\le d$), we have $$r^*(U_{n,r},U_{d,\ell})-r^*(\overline U_{n,r},U_{d,\ell}) =0=r^*(U_{n,r},\overline U_{d,\ell})-r^*(\overline U_{n,r},\overline U_{d,\ell}),$$ and for odd $\ell$ we have $$r^*(U_{n,r},U_{d,\ell})=r^*(\overline U_{n,r},\overline U_{d,\ell}),\ r^*(U_{n,r},\overline U_{d,\ell})=r^*(\overline U_{n,r},U_{d,\ell}).$$ Hence using Lemma 5.3, we have \begin{align*} \mathcal G^*_{U_{n,d}}({\mathfrak P};2t+1) &=\frac{\boldsymbol \nu(n,d)}{o(U_{n,2t+1})} \sum_{0\le 2k+1\le d} \left(\sum_{\cls Y\in\mathbb F^{2k+1,2k+1}_{\sym}}\frac{r^*(I_d,Y)}{o(Y)} \mathcal G^*_Y\right)\\ &\quad\cdot (r^*(U_{n,2t+1},U_{d,2k+1})-r^*(\overline U_{n,2t+1},U_{d,2k+1})).
\end{align*} So by Theorem 1.1 and Lemmas 5.1 and 5.4, when $d$ is even we get $\mathcal G^*_{U_{n,d}}({\mathfrak P};2t+1)=0$, and when $d$ is odd with $d=2c+1$, we get \begin{align*} \mathcal G^*_{U_{n,d}}({\mathfrak P};2t+1) &=\frac{\boldsymbol \nu(n,d)}{o(U_{n,2t+1})} \sum_{k=0}^c (-1)^k\varepsilon^{k+c} N({\mathfrak P})^{k^2+c} \boldsymbol \gamma(c,k) \mathcal G^*_1\\ &\quad\cdot (r^*(U_{n,2t+1},U_{d,2k+1})-r^*(\overline U_{n,2t+1},U_{d,2k+1})). \end{align*} An almost identical argument gives us $\mathcal G^*_{\overline U_{n,d}}({\mathfrak P};2t+1)=-\mathcal G^*_{U_{n,d}}({\mathfrak P};2t+1).$ Now consider the case that $r=2t$. With $d=2c$ or $2c+1$, reasoning as in Section 3 gives us \begin{align*} \mathcal G^*_{U_{n,d}}({\mathfrak P};2t) &=\frac{\boldsymbol \nu(n,d)}{o(U_{n+1,2t+1})} \sum_{k=0}^c(-1)^k\varepsilon^k N({\mathfrak P})^{k^2} \boldsymbol \gamma(c,k)\\ &\quad\cdot\left(r^*(U_{n+1,2t+1},U_{d+1,2k+1}) - r^*(\overline U_{n+1,2t+1},U_{d+1,2k+1})\right)\\ &= \mathcal G^*_{\overline U_{n,d}}({\mathfrak P};2t). \end{align*} This gives us (a) and part of (b). To finish proving (b), we begin with the above equation, taking $d=2c$. It is easily seen that $r^*(I_r\perp 0_s,1)=N({\mathfrak P})^s r^*(I_r,1),$ and consequently from Lemma 5.1 we get \begin{align*} & r^*(U_{n+1,2t+1},U_{d+1,2k+1}) - r^*(\overline U_{n+1,2t+1},U_{d+1,2k+1})\\ &\quad = N({\mathfrak P})^{(n-2t)(2k+1)+2tk-k^2+t-k}\boldsymbol \mu\boldsymbol \delta(t,k) A_{n-2t}(t-k,c-k) \end{align*} with $A_s(x,y)$ as in the statement of the proposition. \end{proof} \section{Lemmas and their proofs} \begin{lem} Take $t,d\in\mathbb Z_+$.
We have: \begin{align*} r^*(I_{2t},1)&=N({\mathfrak P})^{t-1}(N({\mathfrak P})^t-\varepsilon^t)=r^*(I_{2t},\omega),\\ r^*(J_{2t},1)&=N({\mathfrak P})^{t-1}(N({\mathfrak P})^t+\varepsilon^t)=r^*(J_{2t},\omega),\\ r^*(I_{2t+1},1)&=N({\mathfrak P})^t(N({\mathfrak P})^t+\varepsilon^t)=r^*(J_{2t+1},\omega),\\ r^*(I_{2t+1},\omega)&=N({\mathfrak P})^t(N({\mathfrak P})^t-\varepsilon^t)=r^*(J_{2t+1},1); \end{align*} also, \begin{align*} r^*(I_{2t},0_d)&=N({\mathfrak P})^{d(d-1)/2}(N({\mathfrak P})^t-\varepsilon^t)\boldsymbol \mu\boldsymbol \delta(t-1,d-1) (N({\mathfrak P})^{t-d}+\varepsilon^t),\\ r^*(J_{2t},0_d)&=N({\mathfrak P})^{d(d-1)/2}(N({\mathfrak P})^t+\varepsilon^t)\boldsymbol \mu\boldsymbol \delta(t-1,d-1) (N({\mathfrak P})^{t-d}-\varepsilon^t),\\ r^*(I_{2t+1},0_d)&=N({\mathfrak P})^{d(d-1)/2}\boldsymbol \mu\boldsymbol \delta(t,d)=r^*(J_{2t+1},0_d). \end{align*} \end{lem} \begin{proof} The first collection of formulas is from Theorems 2.59 and 2.60 of \cite{Ger}. For the second collection of formulas, we begin with Theorems 2.59 and 2.60 of \cite{Ger}, giving us formulas for $r(I_t,0)=r^*(I_t,0)+1$ and $r(J_t,0)=r^*(J_t,0)+1$. Now consider the case that $V$ is a $2t$-dimensional space over $\mathbb F$ equipped with a quadratic form $Q_V$ given by $I_{2t}$ relative to some basis for $V$. So $r^*(I_{2t},0_d)$ is the number of all (ordered) bases for $d$-dimensional, totally isotropic subspaces of $V$. (Recall that a subspace $W$ of $V$ is totally isotropic if $Q_V$ restricts to 0 on $W$.) Suppose that $d>1$; we construct all bases for $d$-dimensional, totally isotropic subspaces of $V$ as follows. Choose an isotropic vector $x$ from $V$ (so $x\not=0$ and $Q_V(x)=0$; note that this is not possible if $t=1$ and $\varepsilon=-1$).
Then as $V$ is a regular space, there is some $y\in V$ so that $y$ is not orthogonal to $x$; hence (by Theorem 2.23 \cite{Ger}) $x,y$ span a hyperbolic plane, and (by Theorem 2.17 \cite{Ger}), this hyperbolic plane splits $V$, giving us $V=(\mathbb F x\oplus\mathbb F y)\perp V'$ where $V'$ is hyperbolic if and only if $V$ is. We have $\disc V=\varepsilon \disc V'$ and so the quadratic form on $V'$ is given by $I_{2(t-1)}$ if $\varepsilon=1$, and by $J_{2(t-1)}$ if $\varepsilon=-1$. The number of all bases for $d$-dimensional, totally isotropic subspaces of $V$ with $x$ as the first basis element is $N({\mathfrak P})^{d-1} r^*(I_{2(t-1)},0_{d-1})$ if $\varepsilon=1$, and $N({\mathfrak P})^{d-1} r^*(J_{2(t-1)},0_{d-1})$ otherwise. The formula claimed now follows by induction on $d$. Virtually identical arguments yield the formulas when $I_{2t}$ is replaced by $J_{2t}$ or $I_{2t+1}$ or $J_{2t+1}$. \end{proof} \begin{lem} Suppose that $m\ge 0$. We have \begin{align*} &\sum_{s=0}^m (-1)^s N({\mathfrak P})^{(2m+1)(m-s)+s^2}\boldsymbol \beta\boldsymbol \delta(m,m-s)\\ &\quad=r(I_{2m+1},0_{2m+1}) =r(J_{2m+1},0_{2m+1}), \end{align*} \begin{align*} &\sum_{s=0}^m(-1)^s N({\mathfrak P})^{2m(m-s)+s(s-1)}\boldsymbol \beta(m,m-s)\boldsymbol \delta(m-1,m-s)\\ &\quad=\begin{cases} r(I_{2m},0_{2m})&\text{if $\varepsilon^m=1$,}\\ r(J_{2m},0_{2m})&\text{if $\varepsilon^m=-1$,}\end{cases}\\ &\sum_{s=1}^m (-1)^{s+1} N({\mathfrak P})^{2m(m-s)+s(s-1)}\boldsymbol \beta(m-1,m-s)\boldsymbol \delta(m,m-s)\\ &\quad=\begin{cases} r(J_{2m},0_{2m})&\text{if $\varepsilon^m=1$,}\\ r(I_{2m},0_{2m})&\text{if $\varepsilon^m=-1$.}\end{cases} \end{align*} \end{lem} \begin{proof} Suppose that $V$ is an $n$-dimensional vector space over $\mathbb F$ equipped with a quadratic form given by $Q_V=I_n$ or $J_n$. Then $r(Q_V,0_n)$ is the number of (ordered) $x_1,\ldots,x_n\in V$ so that $\text{span}\{x_1,\ldots,x_n\}$ is totally isotropic.
As $\boldsymbol \nu(d,0)$ is the number of bases for any given dimension $d$ space over $\mathbb F$, the number of dimension $d$ totally isotropic subspaces of $V$ is $$\varphi_d(V)=r^*(Q_V,0_d)/\boldsymbol \nu(d,0).$$ We treat the case that $V\simeq\mathbb H^m$, meaning that $\dim V=2m$ and the quadratic form on $V$ is given by $I_{2m}$ if $\varepsilon^m=1$, and by $J_{2m}$ otherwise (analogous arguments treat the other cases). Slightly abusing notation, we write $(x_1,\ldots,x_{2m})\subseteq V$ to mean that $(x_1,\ldots,x_{2m})$ is an ordered $2m$-tuple of vectors from $V$. We set $$\mathcal W_{m-s}=\{\text{dimension } m-s \text{ totally isotropic subspaces } W \text{ of }V\},$$ and we let $${\mathbb 1}_W(x_1,\ldots,x_{2m})= \begin{cases}1&\text{if $x_1,\ldots,x_{2m}\in W$,}\\ 0&\text{otherwise}.\end{cases}$$ Thus for $(x_1,\ldots,x_{2m})\subseteq V$, $\sum_{W\in\mathcal W_{m-s}} {\mathbb 1}_W(x_1,\ldots,x_{2m})$ is the number of elements of $\mathcal W_{m-s}$ containing $x_1,\ldots,x_{2m}$, and, noting that $N({\mathfrak P})^{2m(m-s)}$ is the number of (ordered) $2m$-tuples of vectors in each $W\in\mathcal W_{m-s}$, we have $$\sum_{(x_1,\ldots,x_{2m})\subseteq V} \left(\sum_{W\in\mathcal W_{m-s}} {\mathbb 1}_W(x_1,\ldots,x_{2m})\right) = N({\mathfrak P})^{2m(m-s)}\varphi_{m-s}(V).$$ So \begin{align*} \psi(V):= &\sum_{s=0}^m (-1)^s N({\mathfrak P})^{s(s-1)+2m(m-s)}\varphi_{m-s}(V)\\ =&\sum_{(x_1,\ldots,x_{2m})\subseteq V}\left(\sum_{s=0}^m (-1)^s N({\mathfrak P})^{s(s-1)} \sum_{W\in\mathcal W_{m-s}} {\mathbb 1}_W(x_1,\ldots,x_{2m})\right). \end{align*} Fix $(x'_1,\ldots,x'_{2m})\subseteq V$; let $W'$ be the subspace spanned by $x'_1,\ldots,x'_{2m}$, and set $\ell=\dim W'$. If $W'$ is not totally isotropic then ${\mathbb 1}_W(x'_1,\ldots,x'_{2m})=0$ for all totally isotropic $W$. So suppose that $W'$ is totally isotropic.
Then repeatedly using Theorems 2.19, 2.23, 2.52 of \cite{Ger} and the assumption that $V$ is regular, we find that there is a dimension $\ell$ subspace $W''$ so that $W'\oplus W''\simeq\mathbb H^{\ell}$ and $V=(W'\oplus W'')\perp V'$ where $V'\simeq \mathbb H^{m-\ell}$. Hence the number of $W\in\mathcal W_{m-s}$ that contain $W'$ is $\varphi_{m-s-\ell}(V')$. Therefore, using Lemma 5.1 and the above formula for $\varphi_{m-s-\ell}(V')$, we have \begin{align*} &\sum_{s=0}^{m-\ell} (-1)^s N({\mathfrak P})^{s(s-1)} \sum_{W\in\mathcal W_{m-s}} {\mathbb 1}_W(x'_1,\ldots,x'_{2m}) =A(m-\ell,m-\ell-1) \end{align*} where $$A(t,k)=\sum_{s=0}^t (-1)^sN({\mathfrak P})^{s(s+k-t)}\boldsymbol \delta(k,t-s)\boldsymbol \beta(t,t-s).$$ We argue by induction on $t$ to show that for any $k$ and $t\ge 0$, we have $A(t,k)=1$. Clearly $A(0,k)=1$ for all $k$. So fix $t\ge0$ and suppose that $A(t,k)=1$ for all $k$. Hence we have \begin{align*} 1 &=(N({\mathfrak P})^k+1)\sum_{s=0}^t(-1)^sN({\mathfrak P})^{s(s+k-1-t)} \boldsymbol \delta(k-1,t-s)\boldsymbol \beta(t,t-s)\\ &\quad -N({\mathfrak P})^k\sum_{s=0}^t (-1)^s N({\mathfrak P})^{s(s+k-t)}\boldsymbol \delta(k,t-s)\boldsymbol \beta(t,t-s). \end{align*} Notice that in the first sum in the above equality, we can allow $s$ to vary from $0$ to $t+1$ (since $\boldsymbol \beta(t,-1)=0$), and in the second sum we can allow $s$ to vary from $-1$ to $t$ (since $\boldsymbol \beta(t,t+1)=0$). Also, we know that $$(N({\mathfrak P})^k+1)\boldsymbol \delta(k-1,t-s)=\boldsymbol \delta(k,t-s+1);$$ so replacing $s$ by $s-1$ in the second sum and using that $$\boldsymbol \beta(t,t-s)+N({\mathfrak P})^{t+1-s}\boldsymbol \beta(t,t+1-s)=\boldsymbol \beta(t+1,t+1-s),$$ we find that $A(t+1,k)=1$ for all $k$. Hence $\psi$ counts $(x_1,\ldots,x_{2m})\subseteq V$ zero times if $\text{span}\{x_1,\ldots,x_{2m}\}$ is not totally isotropic, and once otherwise.
Thus $\psi(V)=r(Q_V,0_{2m}).$ \end{proof} \begin{lem} Fix $d\in\mathbb Z_+$ and $\ell\in\mathbb Z$ so that $0\le \ell<d$. Then $$\sum_{Y\sim U_{d,\ell}} \e\{2Y\rho\} -\sum_{Y\sim \overline U_{d,\ell}} \e\{2Y\rho\} =\sum_{\cls Y'\in\mathbb F^{\ell,\ell}_{\sym}} \frac{r^*(I_d,Y')}{o(Y')} \mathcal G^*_{Y'}({\mathfrak P})$$ and $$\sum_{Y\sim U_{d,\ell}} \e\{2YJ_d\rho\} -\sum_{Y\sim \overline U_{d,\ell}} \e\{2YJ_d\rho\} =\sum_{\cls Y'\in\mathbb F^{\ell,\ell}_{\sym}} \frac{r^*(J_d,Y')}{o(Y')} \mathcal G^*_{Y'}({\mathfrak P}),$$ where $\cls Y'$ varies over a set of representatives for the $GL_{\ell}(\mathbb F)$-orbits in $\mathbb F^{\ell,\ell}_{\sym}$. \end{lem} \begin{proof} We first consider the sum over $Y\sim \overline U_{d,\ell}$. We know that for $G\in GL_{d}(\mathbb F)$ and $G'$ in the orthogonal group of $\overline U_{d,\ell}$, we have $^t(G'G)\overline U_{d,\ell}(G'G)=\,^tG\overline U_{d,\ell}G$, so when we let $G$ vary over $GL_{d}(\mathbb F)$, each element in the orbit of $\overline U_{d,\ell}$ appears exactly $o(\overline U_{d,\ell})$ times. Also, recall that with $\sigma$ denoting the matrix trace map, we have $\sigma(^tG\overline U_{d,\ell}G)=\sigma(\overline U_{d,\ell}GI_d\,^tG)$ and $\sigma(\overline U_{d,\ell}GI_d\,^tG)=\sigma(J_{\ell}Y')$ where $Y'$ is the upper left $\ell\times \ell$ block of $GI_d\,^tG$. So we have \begin{align*} \sum_{Y\sim\overline U_{d,\ell}}\e\{2Y\rho\} &=\frac{1}{o(\overline U_{d,\ell})} \sum_{G\in GL_d(\mathbb F)} \e\{2\,^tG\overline U_{d,\ell}G\rho\}\\ &=\frac{\boldsymbol \nu(d,\ell)}{o(\overline U_{d,\ell})} \sum_{Y'\in\mathbb F^{\ell,\ell}_{\sym}} r^*(I_d,Y') \e\{2 J_{\ell} Y'\rho\} \end{align*} since $$r^*(I_d,Y')=\#\{C\in\mathbb F^{d,\ell}: \ ^tCC=Y',\ \rank C=\ell\ \},$$ and the number of ways to extend $C$ to an element of $GL_{d}(\mathbb F)$ is $\boldsymbol \nu(d,\ell)$. Now, as $G$ varies over $GL_{\ell}(\mathbb F)$, $^tGY'G$ varies $o(Y')$ times over the elements in $\cls Y'$.
Also, by Lemma 5.1, we have $o(\overline U_{d,\ell})=o(J_{\ell})\boldsymbol \nu(d,\ell)$. Hence \begin{align*} \sum_{Y\sim\overline U_{d,\ell}} \e\{2Y\rho\} &= \frac{1}{o(J_{\ell})} \sum_{G\in GL_{\ell}(\mathbb F)} \sum_{\cls Y'\in\mathbb F^{\ell,\ell}_{\sym}} \frac{r^*(I_d,Y')}{o(Y')}\e\{2G J_{\ell}\,^tGY'\rho\}\\ &= \sum_{\cls Y'\in\mathbb F^{\ell,\ell}_{\sym}}\frac{r^*(I_d,Y')}{o(Y')} \sum_{X\sim J_{\ell}}\e\{2XY'\rho\} \end{align*} where for the last equality we used that as $G$ varies over $GL_{\ell}(\mathbb F)$, $GJ_{\ell}\,^tG$ varies $o(J_{\ell})$ times over the elements in the orbit of $J_{\ell}$. The analysis of $$\sum_{Y\sim U_{d,\ell}}\e\{2Y\rho\},\ \sum_{Y\sim U_{d,\ell}}\e\{2YJ_d\rho\},\text{ and } \sum_{Y\sim \overline U_{d,\ell}}\e\{2YJ_d\rho\}$$ follows in a virtually identical manner. Then we note that $$\sum_{X\sim I_{\ell}}\e\{2XY'\rho\}-\sum_{X\sim J_{\ell}}\e\{2XY'\rho\} =\mathcal G^*_{Y'}({\mathfrak P} I_{\ell}),$$ completing the proof. \end{proof} \begin{lem} Suppose that $0< \ell\le d$; take $c$ so that $d$ is $2c$ or $2c+1$. Take $Y\in\mathbb F^{\ell,\ell}_{\sym}$, and take $b$ so that $\rank Y$ is $2b$ or $2b+1$.
\begin{enumerate} \item[(a)] Suppose that $\ell=2k$; set $$h_Y=(-1)^b\cdot\frac{\boldsymbol \mu(2(k-b),2(k-b))}{\boldsymbol \mu\boldsymbol \delta(k-b,k-b)}.$$ Then $$\sum_{\cls Y\in\mathbb F^{\ell,\ell}_{\sym}}\frac{r^*(I_d,Y)}{o(Y)}\,h_Y= \sum_{\cls Y\in\mathbb F^{\ell,\ell}_{\sym}}\frac{r^*(J_d,Y)}{o(Y)}\,h_Y= (-1)^k\boldsymbol \gamma(c,k).$$ \item[(b)] Suppose that $\ell=2k+1$; set $$h_Y=\begin{cases} (-1)^b\varepsilon^b N({\mathfrak P})^{-b}\cdot\frac{\boldsymbol \mu(2(k-b),2(k-b))}{\boldsymbol \mu\boldsymbol \delta(k-b,k-b)} &\text{if $Y\sim I_{2b+1}\perp 0_{2(k-b)}$,}\\ (-1)^{b+1}\varepsilon^b N({\mathfrak P})^{-b}\cdot\frac{\boldsymbol \mu(2(k-b),2(k-b))}{\boldsymbol \mu\boldsymbol \delta(k-b,k-b)} &\text{if $Y\sim J_{2b+1}\perp 0_{2(k-b)}$,}\\ 0&\text{if $\rank Y=2b$.} \end{cases}$$ Then when $d=2c$, $$\sum_{\cls Y\in\mathbb F^{\ell,\ell}_{\sym}}\frac{r^*(I_d,Y)}{o(Y)}\,h_Y= \sum_{\cls Y\in\mathbb F^{\ell,\ell}_{\sym}}\frac{r^*(J_d,Y)}{o(Y)}\,h_Y= 0,$$ and when $d=2c+1$, \begin{align*} \sum_{\cls Y\in\mathbb F^{\ell,\ell}_{\sym}}\frac{r^*(I_d,Y)}{o(Y)}\,h_Y &= -\sum_{\cls Y\in\mathbb F^{\ell,\ell}_{\sym}}\frac{r^*(J_d,Y)}{o(Y)}\,h_Y\\ &= (-1)^k\varepsilon^cN({\mathfrak P})^{c-2k} \boldsymbol \gamma(c,k). \end{align*} \end{enumerate} \end{lem} \begin{proof} We have \begin{align*} &\sum_{\cls Y\in\mathbb F^{\ell,\ell}_{\sym}}\frac{r^*(I_d,Y)}{o(Y)\,}h_Y =\frac{r^*(I_d,0_{\ell})}{o(0_{\ell})}\\ &\quad +\sum_{a=1}^{\ell}\left(\frac{r^*(I_d,I_a\perp 0_{\ell-a})}{o(I_a\perp0_{\ell-a})} \,h_{I_a\perp 0_{\ell-a}} +\frac{r^*(I_d,J_a\perp 0_{\ell-a})}{o(J_a\perp0_{\ell-a})} \,h_{J_a\perp 0_{\ell-a}}\right). \end{align*} We also note that \begin{align*} &o(I_{2b+1})N({\mathfrak P})^{2bs}\boldsymbol \nu(s,0)=N({\mathfrak P})^{2b}\boldsymbol \mu(s,1)o(I_{2b+1}\perp 0_{s-1})\\ &\quad=r^*(I_{2b+1},1)o(I_{2b}\perp0_s) =r^*(J_{2b+1},1)o(J_{2b}\perp0_s). \end{align*} (a) Suppose that $\ell=2k$.
Then using Lemma 5.1, when $d=2c$ we get \begin{align*} &r^*(I_{2b+1},1)r^*(I_d,I_{2b}\perp 0_{2(k-b)}) +r^*(J_{2b+1},1)r^*(I_d,J_{2b}\perp 0_{2(k-b)})\\ &\quad= 2N({\mathfrak P})^{(k-b)(2k-2b-1)+2cb-b^2}\\ &\qquad\cdot \boldsymbol \mu\boldsymbol \delta(c-1,2k-b-1)(N({\mathfrak P})^c-\varepsilon^c)(N({\mathfrak P})^{c-2(k-b)}+\varepsilon^c) \end{align*} and \begin{align*} &N({\mathfrak P})^{2b}(N({\mathfrak P})^{2(k-b)}-1)\\ &\qquad\cdot \left(r^*(I_d,I_{2b+1}\perp 0_{2(k-b)-1}) +r^*(I_d,J_{2b+1}\perp 0_{2(k-b)-1})\right)\\ &\quad= 2N({\mathfrak P})^{(k-b)(2k-2b-1)+2cb-b^2+c-2(k-b)}\\ &\qquad\cdot \boldsymbol \mu\boldsymbol \delta(c-1,2k-b-1)(N({\mathfrak P})^c-\varepsilon^c)(N({\mathfrak P})^{2(k-b)}-1). \end{align*} So using that $\boldsymbol \mu\boldsymbol \delta(t,s+s')=\boldsymbol \mu\boldsymbol \delta(t,s)\boldsymbol \mu\boldsymbol \delta(t-s,s'),$ we have \begin{align*} \sum_{\cls Y\in\mathbb F^{\ell,\ell}_{\sym}}\frac{r^*(I_{2c},Y)}{o(Y)}\,h_Y &= \sum_{b=0}^k(-1)^b\frac{N({\mathfrak P})^{2b(b+c-2k)}\boldsymbol \mu\boldsymbol \delta(c,2k-b)\boldsymbol \mu\boldsymbol \delta(k,b)} {\boldsymbol \mu\boldsymbol \delta(b,b)\boldsymbol \mu\boldsymbol \delta(k-b,k-b)\boldsymbol \mu\boldsymbol \delta(k,b)}\\ &= \boldsymbol \gamma(k,c)\cdot S(k,c) \end{align*} where $$S(k,c)=\sum_{b=0}^k(-1)^b N({\mathfrak P})^{2b(b+c-2k)} \boldsymbol \mu\boldsymbol \delta(c-k,k-b)\boldsymbol \gamma(k,b).$$ Since $\boldsymbol \gamma(k,b)=N({\mathfrak P})^{2b}\boldsymbol \gamma(k-1,b)+\boldsymbol \gamma(k-1,b-1),$ we find that \begin{align*} S(k,c) &= -S(k-1,c-1)=(-1)^k S(0,c-k)=(-1)^k, \end{align*} proving one case of (a). We follow this same line of argument when replacing $I_{2c}$ by $J_{2c}$, and when replacing $2c$ by $2c+1$. (b) Suppose that $\ell=2k+1$.
Using the definition of $h_Y$, we have \begin{align*} \sum_{\cls Y\in\mathbb F^{\ell,\ell}_{\sym}} \frac{r^*(I_d,Y)}{o(Y)}\, h_Y &=\sum_{b=0}^k \left(r^*(I_d,I_{2b+1}\perp0_{2(k-b)}) - r^*(I_d,J_{2b+1}\perp0_{2(k-b)})\right)\\ &\quad\cdot \frac{h_{I_{2b+1}\perp0_{2(k-b)}}}{o(I_{2b+1}\perp0_{2(k-b)})}. \end{align*} When $d$ is even, $r^*(I_d,I_{2b+1}\perp 0_{2(k-b)}) =r^*(I_d,J_{2b+1}\perp0_{2(k-b)}),$ so when $d$ is even the above sum on $\cls Y$ is 0. So suppose that $d=2c+1$. Then with $S(k,c)$ as in case (a), we have \begin{align*} \sum_{\cls Y\in\mathbb F^{\ell,\ell}_{\sym}} \frac{r^*(I_d,Y)}{o(Y)}\, h_Y &=\varepsilon^c N({\mathfrak P})^{c-2k} \boldsymbol \gamma(c,k) S(c,k)\\ &=(-1)^k\varepsilon^c N({\mathfrak P})^{c-2k} \boldsymbol \gamma(c,k). \end{align*} To evaluate the sum on $\cls Y$ when $I_d$ is replaced by $J_d$, we first note that for any $s\ge 0$, when $d$ is even we have $r^*(J_d,I_{2b+1}\perp 0_{s})=r^*(J_d,J_{2b+1}\perp0_{s})$, and when $d$ is odd we have $r^*(J_d,I_{2b+1}\perp 0_{s})=r^*(I_d,J_{2b+1}\perp0_{s})$. So mimicking our above analysis, we find that when $d$ is even, the sum on $\cls Y$ is $0$, and when $d$ is odd with $d=2c+1$, we have $$\sum_{\cls Y\in\mathbb F^{\ell,\ell}_{\sym}} \frac{r^*(J_d,Y)}{o(Y)}\,h_Y = -\sum_{\cls Y\in\mathbb F^{\ell,\ell}_{\sym}} \frac{r^*(I_d,Y)}{o(Y)}\,h_Y.$$ \end{proof} \end{document}
\begin{document} \newtheorem{corollary}{Corollary} \newtheorem{definition}{Definition} \newtheorem{example}{Example} \newtheorem{lemma}{Lemma} \newtheorem{proposition}{Proposition} \newtheorem{theorem}{Theorem} \newtheorem{fact}{Fact} \newtheorem{property}{Property} \newcommand{\bra}[1]{\langle #1|} \newcommand{\ket}[1]{|#1\rangle} \newcommand{\braket}[3]{\langle #1|#2|#3\rangle} \newcommand{\ip}[2]{\langle #1|#2\rangle} \newcommand{\op}[2]{|#1\rangle \langle #2|} \newcommand {\E } {{\mathcal{E}}} \newcommand {\F } {{\mathcal{F}}} \newcommand {\diag } {{\rm diag}} \title{\Large {\bf Separability of Bosonic Systems}} \author{Nengkun Yu$^{1,2}$} \email{[email protected]} \affiliation{$^1$Institute for Quantum Computing, University of Waterloo, Waterloo, Ontario, Canada\protect\\ $^2$Department of Mathematics $\&$ Statistics, University of Guelph, Guelph, Ontario, Canada} \begin{abstract} In this paper, we study the separability of quantum states in bosonic systems. Our main tool is \lq \lq separability witnesses", and a connection is drawn between \lq \lq separability witnesses" and a new kind of positivity of matrices---\lq \lq Power Positive Matrices". This connection is employed to demonstrate that a multi-qubit quantum state with Dicke states as its eigenvectors is separable if and only if two related Hankel matrices are positive semidefinite. By employing this criterion, we show that such a state is separable if and only if its partial transpose is positive semidefinite, which confirms the conjecture in [Wolfe, Yelin, Phys. Rev. Lett. (2014)]. We then present a class of bosonic states in $d\otimes d$ systems such that, for general $d$, determining their separability is NP-hard, although easily verifiable conditions for separability are derived in the cases $d=3,4$.
\end{abstract} \pacs{03.65.Ud, 03.67.Hk} \maketitle \textit{Introduction---}Entanglement, first recognized as a “spooky” feature of quantum machinery by Einstein, Podolsky, and Rosen \cite{EPR35}, lies at the heart of quantum mechanics. It has been discovered that entanglement plays an essential role in various fundamental applications and protocols in quantum information science, such as quantum teleportation, superdense coding and cryptography \cite{BW92,BBCJ+93,BB84}. Moreover, a high order of multipartite entanglement has been shown to be requisite to reach the maximal sensitivity in metrological tasks \cite{GZN+10}. In multipartite systems, a quantum state is called separable if it can be written as a statistical mixture of product states; otherwise, it is entangled. Research on separability criteria, that is, on computational methods to determine whether a given state is separable or entangled, turns out to be a cumbersome problem and an essential subject in quantum information theory. Starting from the famous PPT (Positive Partial Transpose) criterion \cite{PER96}, a considerable number of different separability criteria have been discovered (see the references in \cite{HHHH09,IOA07}). One fundamental tool for detecting entanglement is entanglement witnesses \cite{HHH96,TER00}, which are equivalent to the method of positive, but not completely positive, maps. Entanglement witnesses are observables that completely characterize separable states and allow one to detect entanglement physically. Their origin stems from the hyperplane separation theorem of convex geometry: disjoint convex sets can be separated by hyperplanes. In particular, a witness is an observable which is non-negative for separable states and can have a negative expectation value for entangled states.
Despite great efforts and considerable progress, the physical understanding and mathematical description of the essential characteristics of entanglement remain highly nontrivial tasks, especially when many-particle systems are analyzed. Moreover, it was shown by Gurvits \cite{GUR04} that this problem is NP-hard. However, it is still possible to have a complete criterion for separability in certain interesting situations. A problem of great interest is to study the entanglement of bosonic systems \cite{ESBL02,TG09,TAH+12,CBF+13,WY14}. For the $N$-qubit bosonic system, a natural basis is given by the (unnormalized) $N$-qubit Dicke states, defined as \begin{equation*} \ket{D_{N,n}} \! := P_{\textrm{sym}}\bigl( \ket{0}^{\otimes n} \otimes \ket{1}^{\otimes N-n} \bigr), \end{equation*} with $P_{\textrm{sym}}$ being the projection onto the bosonic (fully symmetric) subspace, $i.e.$, $P_{\textrm{sym}} = \frac{1}{N!} \sum_{\pi\in S_N} U_\pi$, the sum extending over all permutation operators $U_\pi$ of the $N$-qubit system. It is worth noting that the entanglement of pure Dicke states has been widely studied recently \cite{YU13,DVC00,YCGD10,YGD14,HKWG+09,BKMG+09,BTZLS+09,WKSW09}. In this Letter, we focus on the problem of separability criteria for quantum states in bosonic systems by considering separability witnesses. We first draw a connection between the separability witnesses of general multi-qubit bosonic states and a new type of positivity of matrices---what we call \lq \lq Power Positivity". This connection is employed to study the separability of $N$-qubit quantum states that are mixtures of Dicke states. In particular, the separability witnesses of such states correspond to diagonal \lq \lq Power Positive" matrices, which are just polynomials whose values are non-negative for non-negative arguments.
By employing the characterization of non-negative polynomials, an easily evaluated \textit{complete} criterion for the separability of mixtures of Dicke states is demonstrated. Moreover, we show that any such separable state can be written as a mixture of $(N+1)(N+2)$ product states. We then study the separability of a class of states whose eigenvectors are generalized $d\otimes d$ Dicke states. It is proved that the separability problem for such states is NP-hard for general $d$, although a very simple criterion is demonstrated for $d=3,4$. \textit{Main Results---} In the $N$-qudit system $\mathcal{H}_1\otimes\mathcal{H}_2\otimes\cdots\otimes\mathcal{H}_N$ with $d$ being the dimension of each Hilbert space $\mathcal{H}_i$, the bosonic space is the subspace spanned by pure quantum states which are invariant under the swap of any two of the $N$ subsystems, $i.e.$, with $F_{i,j}$ denoting the swap operator exchanging subsystems $i$ and $j$, \begin{equation*} S:=\{\ket{\psi}:\ket{\psi}=F_{i,j}\ket{\psi}\ \mathrm{for\ all}\ i,\ j\}. \end{equation*} A mixed state $\rho$ is called bosonic if its support is a subspace of the bosonic space, where the support of $\rho$, $Supp(\rho)$, is the subspace spanned by the eigenvectors corresponding to its non-zero eigenvalues. In other words, $\rho=F_{i,j}\rho=\rho F_{i,j}$ holds for any $1\leq i,j\leq N$.
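The defining property $\rho=F_{i,j}\rho=\rho F_{i,j}$ is straightforward to check numerically; here is a minimal two-qubit sketch (the variable names are ours, not from the text):

```python
import numpy as np
from itertools import product

d = 2                                   # qubit dimension
F = np.zeros((d * d, d * d))
for i, j in product(range(d), repeat=2):
    F[j * d + i, i * d + j] = 1.0       # swap operator: F|ij> = |ji>

P_sym = (np.eye(d * d) + F) / 2         # projector onto the bosonic space (N = 2)
rho = P_sym / np.trace(P_sym)           # maximally mixed bosonic state

# A bosonic state is fixed by every swap:
assert np.allclose(F @ rho, rho) and np.allclose(rho @ F, rho)

# A state with support outside the bosonic space fails the test:
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)   # antisymmetric singlet
sigma = np.outer(psi, psi)
assert not np.allclose(F @ sigma, sigma)
```

For $N=2$ the symmetrizer is simply $(\mathbb{1}+F)/2$; for larger $N$ one averages over all $N!$ permutation operators.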
One very simple observation is that, if a bosonic state $\rho$ is separable, $i.e.$, there exist (unnormalized) product states $\otimes_{k=1}^N\op{\alpha_{j_k}}{\alpha_{j_k}}$ such that \begin{small} $$\rho=\sum_{j}\bigotimes_{k=1}^N\op{\alpha_{j_k}}{\alpha_{j_k}},$$ \end{small} then we can choose the product states to be of the form $\ket{\alpha_{j}}^{\otimes N}$, that is, \begin{small} $$\rho=\sum_{j}\bigotimes_{k=1}^N\op{\alpha_{j}}{\alpha_{j}}.$$ \end{small} To see this, one only needs to observe that \begin{small} \begin{equation*} \bigotimes_{k=1}^N\ket{\alpha_{j_k}}\in S\Rightarrow \exists\ket{\alpha_{j}},\bigotimes_{k=1}^N\ket{\alpha_{j_k}}=\ket{\alpha_{j}}^{\otimes N}. \end{equation*} \end{small} Now, we introduce separability witnesses for the \textit{bosonic} system as a useful tool: For the $N$-qudit system, a Hermitian operator $W$ is called a separability witness of the bosonic system if $W=P_SWP_S$, with $P_S$ being the projection onto the bosonic space $S$, and it satisfies \begin{eqnarray*} {\rm tr}(W\alpha^{\otimes N})\geq 0, \mathrm{for~all}~\alpha=\op{\alpha}{\alpha}. \end{eqnarray*} The importance of separability witnesses is due to the following proposition. \begin{proposition} A bosonic state $\rho$ is separable if and only if ${\rm tr}(W\rho)\geq 0$ holds for every separability witness $W$ of the bosonic system. \end{proposition} \textit{Remark}: This proposition can be generalized to the separability of quantum states lying in a fixed subspace. \textit{Proof:}---The only if part simply follows from the above observation about the structure of separable states of the bosonic system. To show the validity of the if part, we assume the existence of an entangled bosonic state $\rho$ such that ${\rm tr}(W\rho)\geq 0$ holds for every separability witness $W$ of the bosonic system. Notice that the set of separable states of the bosonic system is convex and compact.
Since the entangled state $\rho$ does not lie in this set, by the hyperplane separation theorem one can conclude that there exists a Hermitian operator $H$ such that ${\rm tr}(H\rho)<0$ and ${\rm tr}(H\alpha^{\otimes N})\geq 0$ holds for all $\alpha$. Therefore, for $W=P_SHP_S$, ${\rm tr}(W\alpha^{\otimes N})={\rm tr}(P_SHP_S\alpha^{\otimes N})={\rm tr}(HP_S\alpha^{\otimes N} P_S)={\rm tr}(H\alpha^{\otimes N})\geq 0$, so $W$ is a separability witness. On the other hand, ${\rm tr}(W\rho)={\rm tr}(P_SHP_S\rho)={\rm tr}(HP_S\rho P_S)={\rm tr}(H\rho)<0$, which contradicts the assumption. $\blacksquare$ Notice that the set of separability witnesses forms a convex compact set. One way to check the separability of bosonic states is to parameterize the set of separability witnesses, or at least the set of its extreme points. For simplicity, we mainly focus on the separability witnesses of the $N$-qubit bosonic system. Notice that any Hermitian $W=P_SHP_S$ corresponds to a Hermitian matrix $M:=(m_{i,j})_{(N+1)\times(N+1)}$ as follows: \begin{eqnarray*} W:=\sum_{i,j=0}^N m_{i,j}\op{\widetilde{D_{N,i}}}{\widetilde{D_{N,j}}} \end{eqnarray*} where we employ the dual basis of the Dicke states, \begin{equation*} \ket{\widetilde{D_{N,n}}}:= {N \choose n}^{-1}P_{\textrm{sym}}\bigl( \ket{0}^{\otimes n} \otimes \ket{1}^{\otimes N-n} \bigr), \end{equation*} that is, $\ip{D_{N,m}}{\widetilde{D_{N,n}}}=\delta_{m,n}$. Now we can derive the condition for $W$ being a separability witness: the requirement that ${\rm tr}(W\alpha^{\otimes N})\geq 0$ hold for all one-qubit states $\ket{\alpha}$ is equivalent to \begin{eqnarray*} &&{\rm tr}(W\op{0}{0}^{\otimes N})\geq 0\Leftrightarrow m_{N,N}\geq 0, \\ &&{\rm tr}\{W[(\ket{1}+z\ket{0})(\langle{1}|+z^*\langle{0}|)]^{\otimes N}\}\geq 0, \end{eqnarray*} for all $z\in \mathbb{C}$. One can observe that the second condition implies the first one as $|z|\rightarrow \infty$.
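The biorthogonality $\ip{D_{N,m}}{\widetilde{D_{N,n}}}=\delta_{m,n}$ can be verified directly for small $N$. The sketch below assumes the common convention in which the unnormalized Dicke state is the plain sum over all bit strings with $n$ zeros (so that dividing by $\binom{N}{n}$ yields the dual basis):

```python
import numpy as np
from itertools import permutations
from math import comb

N = 3  # three qubits, Hilbert-space dimension 2**N

def dicke(n):
    """Unnormalized |D_{N,n}>: sum over all bit strings with n zeros."""
    v = np.zeros(2 ** N)
    for bits in set(permutations([0] * n + [1] * (N - n))):
        v[int("".join(map(str, bits)), 2)] = 1.0
    return v

D    = [dicke(n) for n in range(N + 1)]
dual = [D[n] / comb(N, n) for n in range(N + 1)]   # dual basis |~D_{N,n}>

# Biorthogonality: <D_{N,m} | ~D_{N,n}> = delta_{m,n}
G = np.array([[D[m] @ dual[n] for n in range(N + 1)]
              for m in range(N + 1)])
assert np.allclose(G, np.eye(N + 1))
```

The same construction also shows why the matrix $M$ completely encodes $W$: the $N+1$ Dicke states span the full bosonic subspace of $N$ qubits.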
Observing that $(\ket{1}+z\ket{0})^{\otimes N}=\sum_{j=0}^N z^j\ket{D_{N,j}}$, we see that the second condition given above is just \begin{eqnarray*} \vec{z}^{\dag} M \vec{z} \geq 0~\mathrm{for~all}~\vec{z}=(1,z,z^2,\cdots,z^{N})^{T}\in\mathbb{C}^{N+1}. \end{eqnarray*} This is what we call a \lq \lq Power Positive Matrix", a notion quite different from \lq \lq positive semidefinite" or \lq \lq completely positive". Unfortunately, we are not able to give a complete description of the set of \lq \lq Power Positive Matrices", even though one can easily conclude that it is a superset of the positive semidefinite matrices. Although it is difficult to check the separability of general $N$-qubit states, we demonstrate a necessary and sufficient, easily verified analytical condition for the separability of the following general diagonal symmetric states. In the $N$-qubit bosonic system, one can naturally define the following class of quantum states, the so-called general diagonal symmetric (GDS) states \cite{WY14}, \begin{equation*} \rho=\sum_{n=0}^N\chi_n \op{D_{N,n}}{D_{N,n}}, \end{equation*} where the $\chi_n$ are the eigenvalues in the eigen-decomposition of $\rho$. Notice that any GDS state $\rho$ enjoys the symmetry that for every diagonal qubit unitary $U_{\theta}=\diag\{1,e^{i\theta}\}$, \begin{equation*} \rho=U_{\theta}^{\otimes N}\rho~U_{\theta}^{\dag \otimes N}. \end{equation*} Thus, for any separability witness $W$, we have \begin{equation*} {\rm tr}(W\rho)={\rm tr}(WU_{\theta}^{\otimes N}\rho~U_{\theta}^{\dag \otimes N})={\rm tr}(U_{\theta}^{\dag \otimes N}WU_{\theta}^{\otimes N}\rho), \end{equation*} so $W_0$ is a \lq \lq diagonal" separability witness with ${\rm tr}(W_0\rho)={\rm tr}(W\rho)$, where \begin{equation*} W_0=\frac{1}{2\pi}\int_{0}^{2\pi}U_{\theta}^{\dag \otimes N}WU_{\theta}^{\otimes N}d\theta=\sum_{k=0}^N m_{k,k}\op{\widetilde{D_{N,k}}}{\widetilde{D_{N,k}}}.
\end{equation*} Here, a separability witness $W=\sum_{i,j=0}^N m_{i,j}\op{\widetilde{D_{N,i}}}{\widetilde{D_{N,j}}}$ is called diagonal if $m_{i,j}=0$ for $i\neq j$. According to Proposition 1, we know that: \begin{proposition} A general diagonal symmetric state $\rho$ is separable if and only if ${\rm tr}(W_0\rho)\geq 0$ for every diagonal separability witness $W_0$. \end{proposition} Recalling the notion of \lq \lq Power Positive Matrix", $W_0$ is a separability witness if and only if $\sum_{k=0}^N m_{k,k}|z|^{2k}$ is non-negative for all $z\in\mathbb{C}$. This is equivalent to \begin{equation*} g(r)\geq 0~\mathrm{for~all}~r\geq 0, \end{equation*} for the real-coefficient polynomial $g(x):=\sum_{k=0}^N m_{k,k}x^{k}$. The characterization of such polynomials is accomplished by the following proposition. \begin{proposition} A real-coefficient polynomial $g(x)$ satisfies $g(r)\geq 0$ for all $r\geq 0$ if and only if there exist real-coefficient polynomials $P_i(x),Q_i(x)$ such that \begin{equation*} g(x)=\sum_i xP_i^2(x)+\sum_iQ_i^2(x). \end{equation*} \end{proposition} \textit{Proof:}---The if part is simple. To show the validity of the only if part, we use the fundamental theorem of algebra, \begin{equation*} g(x)=a_0\prod (x-z_k)^{l_k}. \end{equation*} For a non-real root $z_k$, we know that for all real $r$, \begin{equation*} (r-z_k)(r-\bar{z_k})=(r-Re(z_k))^2+Im^2(z_k)\geq 0. \end{equation*} For a real non-positive root $z_k$, we know that for all $r\geq 0$, \begin{equation*} r-z_k=r+(-z_k)\geq 0. \end{equation*} For a positive root $z_k$, its multiplicity $l_k$ must be even. Thus, expanding $g(x)=a_0\prod(x-z_k)^{l_k}$ confirms the only if part.
$\blacksquare$ Invoking the relation between the diagonal separability witness $W_0$ and the polynomial $g(x)$, one can deduce the following. \begin{proposition} Every extreme point of the diagonal separability witnesses for GDS states has one of the following forms: \begin{eqnarray*} S&=&\sum_{0\leq i,j\leq \frac{N}{2}}a_ia_j\op{\widetilde{D_{N,i+j}}}{\widetilde{D_{N,i+j}}},\\ T&=&\sum_{0\leq i,j\leq \frac{N-1}{2}}b_ib_j\op{\widetilde{D_{N,i+j+1}}}{\widetilde{D_{N,i+j+1}}}, \end{eqnarray*} with $a_k,b_k\in\mathbb{R}$. \end{proposition} Now we are ready to state our main result. \begin{theorem} The GDS state $\rho=\sum_{n=0}^N\chi_n \op{D_{N,n}}{D_{N,n}}$ is separable if and only if the following two Hankel matrices \cite{PAR88} $M_0,M_1$ are positive semidefinite, $i.e.$, \begin{eqnarray} M_0:=\left(\begin{array}{cccc} \chi_0 & \chi_1 & \cdots & \chi_{m_0}\\ \chi_1 & \chi_2 & \cdots & \chi_{m_0+1}\\ \cdots & \cdots & \cdots & \cdots\\ \chi_{m_0} & \chi_{m_0+1} & \cdots & \chi_{2m_0} \end{array}\right)\geq 0,\\ M_1:=\left(\begin{array}{cccc} \chi_1 & \chi_2 & \cdots & \chi_{m_1}\\ \chi_2 & \chi_3 & \cdots & \chi_{m_1+1}\\ \cdots & \cdots & \cdots & \cdots\\ \chi_{m_1} & \chi_{m_1+1} & \cdots & \chi_{2m_1-1} \end{array}\right)\geq 0, \end{eqnarray} where $m_0:=[\frac{N}{2}]$ and $m_1:=[\frac{N+1}{2}]$. \end{theorem} \textit{Proof:}---According to Proposition 2 and Proposition 4, $\rho$ is separable if and only if ${\rm tr}(W_0\rho)\geq 0$ holds for every extreme point $W_0$ of the diagonal separability witnesses for GDS states, that is, for all $\vec{a}=(a_0,\cdots,a_{m_0})^T\in\mathbb{R}^{m_0+1}$ and $\vec{b}=(b_1,\cdots,b_{m_1})^T\in\mathbb{R}^{m_1}$, the following quadratic forms are non-negative: \begin{eqnarray*} {\rm tr}(S\rho)=\sum_{0\leq i,j\leq m_0}\chi_{i+j}a_ia_j=\vec{a}^TM_0\vec{a}\geq 0,\\ {\rm tr}(T\rho)=\sum_{1\leq i,j\leq m_1}\chi_{i+j-1}b_ib_j=\vec{b}^TM_1\vec{b}\geq 0. \end{eqnarray*} This is equivalent to the positive semidefiniteness of the real Hankel matrices $M_0,M_1$.
$\blacksquare$ Now we present the rigorous proof of the conjecture from \cite{WY14}. \begin{theorem} The GDS state $\rho=\sum_{n=0}^N\chi_n \op{D_{N,n}}{D_{N,n}}$ is separable if and only if it is PPT; more precisely, if and only if it is PPT under the partial transpose of $m_0=[\frac{N}{2}]$ subsystems. \end{theorem} \textit{Proof:}---First, it is sufficient to consider the partial transpose of the first $m_0$ subsystems, by the symmetry of the bosonic system. Assume $\rho$ is positive under the partial transpose of $m_0=[\frac{N}{2}]$ subsystems; according to Theorem 1, we only need to show $M_0,M_1\geq 0$ in Eqs.~(1) and (2). One can write $\rho^{\Gamma}$ in the basis $\ket{D_{m_0,j}}\ket{D_{m_1,k}}$ with $0\leq j\leq m_0,0\leq k\leq m_1$ by verifying the following equations, \begin{eqnarray*} \ket{D_{N,n}}&=&\sum_{j=0}^n\ket{D_{m_0,j}}\ket{D_{m_1,n-j}},~\mathrm{for}~n\leq m_0,\\ \ket{D_{N,n}}&=&\sum_{j=n-m_1}^{m_0}\ket{D_{m_0,j}}\ket{D_{m_1,n-j}},~\mathrm{for}~n > m_0, \end{eqnarray*} where $m_1=N-m_0$. Since $\rho^{\Gamma}\geq 0$, the restriction of $\rho^{\Gamma}$ to the subspace spanned by $\{\ket{D_{m_0,j}}\ket{D_{m_1,j}},0\leq j\leq m_0\}$ is still positive semidefinite; a direct calculation shows that this is just $M_0\geq 0$. On the other hand, the restriction of $\rho^{\Gamma}$ to the subspace spanned by $\{\ket{D_{m_0,j-1}}\ket{D_{m_1,j}},1\leq j\leq m_1\}$ is still positive semidefinite; a direct calculation shows that this is just $M_1\geq 0$. Invoking Theorem 1, we conclude that $\rho$ is separable. $\blacksquare$ We have the following interesting corollary. \begin{corollary} If a GDS state $\rho=\sum_{n=0}^N\chi_n \op{D_{N,n}}{D_{N,n}}$ is positive under the partial transpose of $m_0=[\frac{N}{2}]$ subsystems, then it is positive under the partial transpose of arbitrary subsystems.
\end{corollary} In the following, we introduce a class of bipartite GDS states and study their separability. In the $d\otimes d$ system, one can define the general diagonal symmetric states \begin{eqnarray*} \rho=\sum_{i,j=1}^d\chi_{i,j} \op{\psi_{i,j}}{\psi_{i,j}}, \end{eqnarray*} with $\ket{\psi_{i,j}}:=\begin{cases}\ket{ii} &{\rm if}\ i=j,\\ \ket{ij}+\ket{ji} &{\rm otherwise,}\end{cases}$ being a basis of the bosonic subspace of the $d\otimes d$ system, $i.e.$, the symmetric subspace. Notice that $\rho=(U\otimes U)\rho(U\otimes U)^{\dag}$ holds for every diagonal qudit unitary $U$. Then, $\rho$ is separable if and only if there exist qudit states $\ket{\alpha_k}=\sum_{j=1}^d x_{k,j}\ket{j}$ such that \begin{eqnarray*} \rho&=&\sum_{k} \int(U\otimes U)\alpha_k^{\otimes 2}(U\otimes U)^{\dag} dU\\ &=&\sum_{k}\sum_{i,j} |x_{k,i}|^2|x_{k,j}|^2\op{\psi_{i,j}}{\psi_{i,j}}\\ \Leftrightarrow \chi:&=&(\chi_{ij})_{d\times d}=\sum_{k} \vec{x_k}\vec{x_k}^{T}, \end{eqnarray*} where the integral averages over all diagonal unitaries $U$, and $\vec{x_k}=(|x_{k,1}|^2,\cdots,|x_{k,d}|^2)^{T}\in\mathbb{R}^d$. Recall that the cone of completely positive matrices \cite{DIA61,GW80} is defined as \begin{eqnarray*} \mathcal{C}=\{\sum_{k}\vec{y_k}\vec{y_k}^{T}:\vec{y_k}\in\mathbb{R}^d_{+}\}, \end{eqnarray*} where $\mathbb{R}^d_{+}$ stands for the set of vectors in $\mathbb{R}^d$ whose entries are all non-negative. It is well known that deciding the complete positivity of a given matrix is NP-hard for general $d$, while for $d=3,4$, checking that the matrix is positive semidefinite and has all entries $\geq 0$ is both necessary and sufficient. Formally, \begin{theorem} It is NP-hard to decide whether a given $d\otimes d$ GDS state is separable. On the other hand, for $d=3,4$, $\rho=\sum_{i,j=1}^d\chi_{i,j} \op{\psi_{i,j}}{\psi_{i,j}}$ is separable if and only if $\chi=(\chi_{ij})_{d\times d}$ is positive semidefinite.
\end{theorem} In other words, the PPT criterion is not sufficient for detecting the entanglement of bosonic states, even for GDS states, unless $P=NP$, which is highly unlikely. \textit{Conclusion---}In this paper, we study the separability problem for bosonic systems. An analytical condition for the separability of $N$-qubit states whose eigenstates are Dicke states is demonstrated. For bipartite qudit systems, we present a class of bosonic states whose separability is NP-hard to decide for general $d$, while for $d=3,4$ a complete condition for separability is provided. It remains unclear whether there exist easily verified analytical conditions for the separability of general $N$-qubit bosonic states. We thank Prof. John Watrous, Prof. Debbie Leung and Prof. Bei Zeng for their comments. NY is supported by NSERC, NSERC DAS, CRC, and CIFAR. \end{document}
\begin{document} \maketitle \begin{abstract} In this paper, we establish boundary H\"older gradient estimates for solutions to the linearized Monge-Amp\`ere equations with $L^{p}$ ($n<p\leq\infty$) right hand side and $C^{1,\gamma}$ boundary values under natural assumptions on the domain, boundary data and the Monge-Amp\`ere measure. These estimates extend our previous boundary regularity results for solutions to the linearized Monge-Amp\`ere equations with bounded right hand side and $C^{1, 1}$ boundary data. \end{abstract} \noindent \section{Statement of the main results} In this paper, we establish boundary H\"older gradient estimates for solutions to the linearized Monge-Amp\`ere equations with $L^{p}$ ($n<p\leq\infty$) right hand side and $C^{1,\gamma}$ boundary values under natural assumptions on the domain, boundary data and the Monge-Amp\`ere measure. Before stating these estimates, we introduce the following assumptions on the domain $\Omega$ and function $\phi$. Let $\Omega\subset {\mathbb R}^{n}$ be a bounded convex set with \begin{equation}\label{om_ass} B_\rho(\rho e_n) \subset \, \Omega \, \subset \{x_n \geq 0\} \cap B_{\frac 1\rho}, \end{equation} for some small $\rho>0$. Assume that \begin{equation} \Omega~ \text{contains an interior ball of radius $\rho$ tangent to}~ \partial \Omega~ \text{at each point on} ~\partial \Omega\cap\ B_\rho. \label{tang-int} \end{equation} Let $\phi : \overline \Omega \rightarrow {\mathbb R}$, $\phi \in C^{0,1}(\overline \Omega) \cap C^2(\Omega)$ be a convex function satisfying \begin{equation}\label{eq_u} 0 <\lambda \leq \det D^2\phi \leq \Lambda \quad \text{in $\Omega$}.
\end{equation} Throughout, we denote by $\Phi = (\Phi^{ij})$ the matrix of cofactors of the Hessian matrix $D^{2}\phi$, i.e., $$\Phi = (\det D^{2} \phi) (D^{2} \phi)^{-1}.$$ We assume that on $\partial \Omega\cap B_\rho$, $\phi$ separates quadratically from its tangent planes on $\partial \Omega$. Precisely, we assume that if $x_0 \in \partial \Omega \cap B_\rho$ then \begin{equation} \rho\abs{x-x_{0}}^2 \leq \phi(x)- \phi(x_{0})-\nabla \phi(x_{0}) (x- x_{0}) \leq \rho^{-1}\abs{x-x_{0}}^2, \label{eq_u1} \end{equation} for all $x \in \partial\Omega.$ Let $S_{\phi}(x_0, h)$ be the section of $\phi$ centered at $x_0\in \overline{\Omega}$ and of height $h$: $$S_{\phi}(x_0, h):= \{x\in \overline{\Omega}: \phi(x)<\phi(x_0)+\nabla\phi(x_0)(x-x_0)+ h\}.$$ When $x_0$ is the origin, we denote for simplicity $S_h:= S_{\phi}(0, h).$ Now, we can state our boundary H\"older gradient estimates for solutions to the linearized Monge-Amp\`ere equations with $L^{p}$ right hand side and $C^{1,\gamma}$ boundary data. \begin{theorem} \label{h-bdr-gradient} Assume $\phi$ and $\Omega$ satisfy the assumptions \eqref{om_ass}-\eqref{eq_u1} above. Let $u: B_{\rho}\cap \overline{\Omega}\rightarrow {\mathbb R}$ be a continuous solution to \begin{equation*} \left\{ \begin{alignedat}{2} \Phi^{ij}u_{ij} ~& = f ~&&\text{in} ~ B_{\rho}\cap \Omega, \\\ u &= \varphi~&&\text{on}~\partial \Omega \cap B_{\rho}, \end{alignedat} \right. \end{equation*} where $f\in L^{p}(B_{\rho}\cap\Omega)$ for some $p>n$ and $\varphi \in C^{1,\gamma}(B_{\rho}\cap\partial\Omega)$.
Then, there exist $\alpha\in (0, 1)$ and $\theta_0$ small, depending only on $n, p, \rho, \lambda, \Lambda, \gamma$, such that for all $\theta\leq \theta_0$ we have $$\|u- u(0)-\nabla u(0)x\|_{L^{\infty}(S_{\theta})}\leq C\left(\|u\|_{L^{\infty}(B_{\rho}\cap\Omega)} + \|f\|_{L^{p}(B_{\rho}\cap\Omega)} + \|\varphi\|_{C^{1,\gamma}(B_{\rho}\cap\partial\Omega)}\right) (\theta^{1/2})^{1+\alpha}$$ where $C$ depends only on $n, p, \rho, \lambda, \Lambda, \gamma$. We can take $\alpha:= \min\{1-\frac{n}{p}, \gamma\}$ provided that $\alpha<\alpha_0$, where $\alpha_0$ is the exponent in our previous boundary H\"older gradient estimates (see Theorem \ref{LS-gradient}). \end{theorem} Theorem \ref{h-bdr-gradient} extends our previous boundary H\"older gradient estimates for solutions to the linearized Monge-Amp\`ere equations with bounded right hand side and $C^{1, 1}$ boundary data \cite[Theorem 2.1]{LS1}. This is an affine invariant analogue of the boundary H\"older gradient estimates of Ural'tseva \cite{U1} (see also \cite{U2} for a survey) for uniformly elliptic equations with $L^{p}$ right hand side. \begin{remark} By the Localization Theorem \cite{S1, S2}, we have $$B_{c\theta^{1/2}/\abs{\log \theta}}\cap \overline{\Omega}\subset S_{\theta}\subset B_{C\theta^{1/2}\abs{\log \theta}}\cap\overline{\Omega}.$$ Therefore, Theorem \ref{h-bdr-gradient} easily implies that $\nabla u$ is $C^{0, \alpha^{'}}$ on $B_{\rho/2}\cap\partial\Omega$ for all $\alpha^{'}<\alpha.$ \end{remark} As a consequence of Theorem \ref{h-bdr-gradient}, we obtain global $C^{1,\alpha}$ estimates for solutions to the linearized Monge-Amp\`ere equations with $L^{p}$ ($n<p\leq\infty$) right hand side and $C^{1,\gamma}$ boundary values under natural assumptions on the domain, boundary data and continuity of the Monge-Amp\`ere measure.
\begin{theorem} Assume that $\Omega \subset B_{1/\rho}$ contains an interior ball of radius $\rho$ tangent to $\partial \Omega$ at each point on $\partial \Omega.$ Let $\phi : \overline \Omega \rightarrow {\mathbb R}$, $\phi \in C^{0,1}(\overline \Omega) \cap C^2(\Omega)$ be a convex function satisfying $$ \det D^2 \phi =g \quad \quad \mbox{with} \quad \lambda \le g \le \Lambda,\quad g\in C(\overline{\Omega}).$$ Assume further that on $\partial \Omega$, $\phi$ separates quadratically from its tangent planes, namely \begin{equation*} \rho\abs{x-x_{0}}^2 \leq \phi(x)- \phi(x_{0})-\nabla \phi(x_{0}) (x- x_{0}) \leq \rho^{-1}\abs{x-x_{0}}^2, ~\forall x, x_{0}\in\partial\Omega. \end{equation*} Let $u: \overline{\Omega}\rightarrow {\mathbb R}$ be a continuous function that solves the linearized Monge-Amp\`ere equation \begin{equation*} \left\{ \begin{alignedat}{2} \Phi^{ij}u_{ij} ~& = f ~&&\text{in} ~ \Omega, \\ u &= \varphi ~&&\text{on}~\partial \Omega, \end{alignedat} \right. \end{equation*} where $\varphi$ is a $C^{1,\gamma}$ function defined on $\partial\Omega$ $(0<\gamma\leq 1)$ and $f\in L^{p}(\Omega)$ with $p>n$. Then \begin{equation*} \|u\|_{C^{1, \beta} (\overline \Omega )} \leq K( \|\varphi\|_{C^{1,\gamma}(\partial \Omega)}+ \|f\|_{L^p(\Omega)}), \end{equation*} where $\beta\in (0,1)$ and $K$ are constants depending on $n, \rho, \gamma, \lambda, \Lambda, p$ and the modulus of continuity of $g$. \label{global-reg} \end{theorem} Theorem \ref{global-reg} extends our previous global $C^{1,\alpha}$ estimates for solutions to the linearized Monge-Amp\`ere equations with bounded right-hand side and $C^{1, 1}$ boundary data \cite[Theorem 2.5 and Remark 7.1]{LS1}. It is also the global counterpart of Guti\'errez-Nguyen's interior $C^{1,\alpha}$ estimates for the linearized Monge-Amp\`ere equations \cite{GN}.
If we assume $\varphi$ to be more regular, say $\varphi \in W^{2, q}(\Omega)$ where $q>p$, then Theorem \ref{global-reg} is a consequence of the global $W^{2, p}$ estimates for solutions to the linearized Monge-Amp\`ere equations \cite[Theorem 1.2]{LN}. In this case, the proof in \cite{LN} is quite involved. Our proof of Theorem \ref{global-reg} here is much simpler. \begin{remark} The estimates in Theorem \ref{global-reg} can be improved to \begin{equation} \|u\|_{C^{1, \beta} (\overline \Omega )} \leq K( \|\varphi\|_{C^{1,\gamma}(\partial \Omega)}+ \|f/\text{tr}\, \Phi\|_{L^p(\Omega)}). \label{improved-est} \end{equation} This follows easily from the estimates in Theorem \ref{global-reg} and the global $W^{2,p}$ estimates for solutions to the standard Monge-Amp\`ere equations with continuous right-hand side \cite{S3}. Indeed, since $$\text{tr}\, \Phi\geq n(\det \Phi)^{\frac{1}{n}}\geq n \lambda^{\frac{n-1}{n}},$$ we also have $f/\text{tr}\, \Phi\in L^{p} (\Omega)$. Fix $q\in (n, p)$; then by \cite{S3}, $\text{tr}\, \Phi\in L^{\frac{pq}{p-q}}(\Omega)$. Now apply the estimates in Theorem \ref{global-reg} to $f\in L^{q}(\Omega)$ and then use H\"older's inequality on $f= (f/\text{tr}\, \Phi) (\text{tr}\, \Phi)$ to obtain (\ref{improved-est}). \end{remark} \begin{remark} The linearized Monge-Amp\`ere operator $L_{\phi}:= \Phi^{ij}\partial_{ij}$ with $\phi$ satisfying the conditions of Theorem \ref{global-reg} is in general degenerate. Here is an explicit example in two dimensions, taken from \cite{Wang1}, showing that $L_{\phi}$ is not uniformly elliptic in $\overline{\Omega}.$ Consider $$\phi (x, y)=\frac{x^2}{\log |\log (x^2 + y^2)|} + y^2 \log|\log (x^2 + y^2)| $$ in a small ball $\Omega= B_{\rho}(0)\subset {\mathbb R}^2$ around the origin.
Then $\phi \in C^{0,1}(\overline \Omega) \cap C^2(\Omega)$ is strictly convex with $$\det D^2 \phi(x, y) = 4 + O\left(\frac{\log|\log(x^2 + y^2)|}{\log(x^2 + y^2)}\right)\in C(\overline{\Omega})$$ and $\phi$ has smooth boundary data on $\partial\Omega$. The quadratic separation of $\phi$ from its tangent planes on $\partial\Omega$ can be readily checked (see also \cite[Proposition 3.2]{S2}). However, $\phi\not\in W^{2,\infty}(\Omega).$ \end{remark} \begin{remark} For the global $C^{1,\alpha}$ estimates in Theorem \ref{global-reg}, the condition $p>n$ is sharp, since even in the uniformly elliptic case (for example, when $\phi(x)= \frac{1}{2}|x|^2$, so that $L_{\phi}$ is the Laplacian), the global $C^{1,\alpha}$ estimates fail when $p=n$. \end{remark} We prove Theorem \ref{h-bdr-gradient} using perturbation arguments in the spirit of Caffarelli \cite{C, CC} (see also Wang \cite{Wang}), in combination with our previous boundary H\"older gradient estimates for the case of bounded right-hand side $f$ and $C^{1,1}$ boundary data \cite{LS1}. The next section provides the proof of Theorem \ref{h-bdr-gradient}. The proof of Theorem \ref{global-reg} will be given in the final section, Section \ref{proof-sec}. \section{Boundary H\"older gradient estimates} In this section, we prove Theorem \ref{h-bdr-gradient}. {\it We will use the letters $c, C$ to denote generic constants depending only on the structural constants $n, p, \rho, \gamma, \lambda, \Lambda$ that may change from line to line.} Assume $\phi$ and $\Omega$ satisfy the assumptions \eqref{om_ass}-\eqref{eq_u1}.
We can also assume that $\phi(0)=0$ and $\nabla \phi(0)=0.$ By the Localization Theorem for solutions to the Monge-Amp\`ere equations proved in \cite{S1, S2}, there exists a small constant $k$ depending only on $n, \rho, \lambda, \Lambda$ such that if $h\leq k$ then \begin{equation}kE_h\cap \overline{\Omega}\subset S_{\phi}(0, h)\subset k^{-1} E_h\cap \overline{\Omega} \label{loc-k} \end{equation} where $$E_h:= h^{1/2}A_h^{-1} B_1$$ with $A_h$ being a linear transformation (sliding along the $x_n=0$ plane) \begin{equation}A_h(x) = x- \tau_h x_n,~ \tau_h\cdot e_n =0, ~\det A_h =1 \label{Amap} \end{equation} and $$|\tau_h|\leq k^{-1}\abs{\log h}.$$ We define the following rescaling of $\phi$ \begin{equation}\phi_h(x):= \frac{\phi(h^{1/2} A^{-1}_h x)}{h} \label{phi-h} \end{equation} in \begin{equation}\Omega_h:= h^{-1/2}A_h\Omega. \label{omega-h} \end{equation} Then $$\lambda \leq \det D^2 \phi_h(x)= \det D^2 \phi(h^{1/2}A_{h}^{-1}x)\leq \Lambda$$ and $$B_{k}\cap \overline{\Omega_h}\subset S_{\phi_h}(0, 1)= h^{-1/2} A_{h}S_h\subset B_{k^{-1}}\cap \overline{\Omega_h}.$$ Lemma 4.2 in \cite{LS1} implies that if $h, r\leq c$ are small then $\phi_h$ satisfies in $S_{\phi_h}(0, 1)$ the hypotheses of the Localization Theorem \cite{S1, S2} at all $x_0\in S_{\phi_h}(0, r)\cap\partial S_{\phi_h}(0, 1).$ In particular, there exists $\tilde\rho$ small, depending only on $n, \rho, \lambda, \Lambda$, such that if $x_0\in S_{\phi_h}(0, r)\cap\partial S_{\phi_h}(0, 1)$ then \begin{equation} \tilde\rho\abs{x-x_{0}}^2 \leq \phi_h(x)- \phi_h(x_{0})-\nabla \phi_h(x_{0}) (x- x_{0}) \leq \tilde\rho^{-1}\abs{x-x_{0}}^2, \label{loc-h} \end{equation} for all $x \in \partial S_{\phi_h}(0, 1).$ We fix $r$ in what follows.
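For the reader's convenience, we record the elementary computation behind the preservation of the pinching of $\det D^2\phi$ under the rescaling \eqref{phi-h}: by the chain rule, $$D^{2}\phi_{h}(x) = (A_{h}^{-1})^{t}\, D^{2}\phi(h^{1/2}A_{h}^{-1}x)\, A_{h}^{-1},$$ so taking determinants and using $\det A_{h}=1$ from \eqref{Amap} gives $\det D^{2}\phi_{h}(x)=\det D^{2}\phi(h^{1/2}A_{h}^{-1}x)$, as claimed.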
Our previous boundary H\"older gradient estimates \cite{LS1} for solutions to the linearized Monge-Amp\`ere equations with bounded right-hand side and $C^{1, 1}$ boundary data hold in $S_{\phi_h}(0, r)$. They will play a crucial role in the perturbation arguments, and we now recall them here. \begin{theorem}(\cite[Theorem 2.1 and Proposition 6.1]{LS1}) Assume $\phi$ and $\Omega$ satisfy the assumptions \eqref{om_ass}-\eqref{eq_u1} above. Let $u: S_r\cap \overline{\Omega}\rightarrow {\mathbb R}$ be a continuous solution to \begin{equation*} \left\{ \begin{alignedat}{2} \Phi^{ij}u_{ij} ~& = f ~&&\text{in} ~ S_r\cap \Omega, \\ u &= 0~&&\text{on}~\partial \Omega \cap S_r, \end{alignedat} \right. \end{equation*} where $f\in L^{\infty}(S_r\cap\Omega)$. Then $$|\partial_n u(0)|\leq C_0 \left(\|u\|_{L^{\infty}(S_r\cap\Omega)} + \|f\|_{L^{\infty}(S_r\cap\Omega)}\right)$$ and for $s\leq r/2$ $$\max_{S_s}|u-\partial_n u(0)x_n|\leq C_0 (s^{1/2})^{1+\alpha_0}\left(\|u\|_{L^{\infty}(S_r\cap\Omega)} + \|f\|_{L^{\infty}(S_r\cap\Omega)} \right)$$ where $\alpha_0\in (0, 1)$ and $C_0$ are constants depending only on $n, \rho, \lambda, \Lambda $. \label{LS-gradient} \end{theorem} Now, we are ready to give the proof of Theorem \ref{h-bdr-gradient}. \begin{proof}[Proof of Theorem \ref{h-bdr-gradient}] Since $u|_{\partial\Omega \cap B_{\rho}}$ is $C^{1,\gamma}$, by subtracting a suitable linear function we can assume that on $\partial\Omega\cap B_{\rho}$, $u$ satisfies $$\abs{u(x)}\leq M |x'|^{1+\gamma}.$$ Let $$\alpha:=\min\{\gamma, 1-\frac{n}{p}\}$$ if $\alpha<\alpha_0$; otherwise let $\alpha\in (0, \alpha_0)$, where $\alpha_0$ is as in Theorem \ref{LS-gradient}. The only place where we need $\alpha<\alpha_0$ is (\ref{alpha0}).
By dividing our equation by a suitable constant, we may assume that for some $\theta$ to be chosen later $$\|u\|_{L^{\infty}(B_{\rho}\cap\Omega)} + \|f\|_{L^{p}(B_{\rho}\cap\Omega)} + M\leq (\theta^{1/2})^{1+\alpha}=: \delta.$$ {\bf Claim.} There exist $0<\theta_0<r/4$ small, depending only on $n, \rho, \lambda, \Lambda, \gamma, p$, and a sequence of linear functions $$l_m(x):= b_m x_n$$ with $b_0= b_1 =0$ such that for all $\theta\leq\theta_0$ and for all $m\geq 1$, we have \begin{myindentpar}{1cm} (i) $$\|u-l_m\|_{L^{\infty}(S_{\theta^m})}\leq (\theta^{m/2})^{1+\alpha},$$ and\\ (ii) $$|b_m-b_{m-1}|\leq C_0 (\theta^{\frac{m-1}{2}})^{\alpha}.$$ \end{myindentpar} Our theorem follows from the claim. Indeed, (ii) implies that $\{l_m\}$ converges uniformly in $S_{\theta}$ to a linear function $l(x)= bx_n$ with $b$ universally bounded, since \begin{equation*} \abs{b}\leq \sum_{m=1}^{\infty} \abs{b_m-b_{m-1}}\leq \sum_{m=1}^{\infty} C_0 (\theta^{\alpha/2})^{m-1}= \frac{C_0}{1-\theta^{\alpha/2}}\leq 2C_0. \end{equation*} Furthermore, by (\ref{loc-k}) and (\ref{Amap}), we have $|x_n|\leq k^{-1}\theta^{m/2}$ for $x\in S_{\theta^m}$. Therefore, for any $m\geq 1$, \begin{eqnarray*} \|u-l\|_{L^{\infty}(S_{\theta^m})} &\leq & \|u-l_m\|_{L^{\infty}(S_{\theta^m})} + \sum_{j=m+1}^{\infty} \|l_j-l_{j-1}\|_{L^{\infty}(S_{\theta^m})} \\&\leq& (\theta^{m/2})^{1+\alpha} + \sum_{j=m+1}^{\infty} C_0 (\theta^{\frac{j-1}{2}})^{\alpha} (k^{-1}\theta^{m/2})\\ &\leq& C(\theta^{m/2})^{1+\alpha}. \end{eqnarray*} We now prove the claim by induction. Clearly (i) and (ii) hold for $m=1$. Suppose (i) and (ii) hold up to $m\geq 1$; we prove them for $m+1$. Let $h= \theta^m$. We define the rescaled domain $\Omega_h$ and function $\phi_h$ as in (\ref{omega-h}) and (\ref{phi-h}).
We also define, for $x\in \Omega_h$, $$v(x):= \frac{(u-l_m)(h^{1/2} A^{-1}_h x)}{h^{\frac{1+\alpha}{2}}}, ~f_h(x):= h^{\frac{1-\alpha}{2}}f(h^{1/2} A^{-1}_h x).$$ Then $$\|v\|_{L^{\infty}(S_{\phi_h}(0,1))}= \frac{1}{h^{\frac{1+\alpha}{2}}}\|u-l_m\|_{L^{\infty}(S_h)}\leq 1$$ and $$\Phi_h^{ij}v_{ij}=f_h~\text{in}~S_{\phi_h}(0,1)$$ with $$\|f_h\|_{L^{p}(S_{\phi_h}(0,1))} = (h^{1/2})^{1-\alpha-n/p} \|f\|_{L^{p}(S_h)}\leq \delta.$$ Let $w$ be the solution to \begin{equation*} \left\{ \begin{alignedat}{2} \Phi_h^{ij}w_{ij} ~& = 0 ~&&\text{in} ~ S_{\phi_h}(0, 2\theta), \\ w &= \varphi_h~&&\text{on}~\partial S_{\phi_h}(0, 2\theta), \end{alignedat} \right. \end{equation*} where \begin{equation*} \varphi_h = \left\{\begin{alignedat}{1} 0 ~&~ \text{on} ~\partial S_{\phi_h}(0, 2\theta)\cap\partial\Omega_h \\ v~& ~\text{on}~ \partial S_{\phi_h}(0, 2\theta) \cap \Omega_h. \end{alignedat} \right. \end{equation*} By the maximum principle, we have $$\|w\|_{L^{\infty}(S_{\phi_h}(0, 2\theta))}\leq \|v\|_{L^{\infty}(S_{\phi_h}(0, 2\theta))}\leq 1.$$ Let $$\bar{l}(x):= \bar{b}x_n;~\bar{b}:=\partial_n w(0).$$ Then the boundary H\"older gradient estimates in Theorem \ref{LS-gradient} give \begin{equation}\abs{\bar{b}}\leq C_0 \|w\|_{L^{\infty}(S_{\phi_h}(0, 2\theta))}\leq C_0 \label{b-bar} \end{equation} and \begin{eqnarray}\|w-\bar{l}\|_{L^{\infty}(S_{\phi_h}(0, \theta))} \leq C_0\|w\|_{L^{\infty}(S_{\phi_h}(0, 2\theta))} (\theta^{\frac{1}{2}})^{1+\alpha_0} &\leq& C_0 (\theta^{\frac{1}{2}})^{1+\alpha_0}\nonumber\\ &\leq& \frac{1}{2}(\theta^{\frac{1}{2}})^{1+\alpha}, \label{alpha0} \end{eqnarray} provided that $$C_0\theta_0^{\frac{\alpha_0-\alpha}{2}}\leq 1/2.$$ We will show that, by choosing $\theta\leq \theta_0$ where $\theta_0$ is small, we have \begin{equation}\|w-v\|_{L^{\infty}(S_{\phi_h}(0, 2\theta))} \leq
\frac{1}{2}(\theta^{\frac{1}{2}})^{1+\alpha}. \label{w-eq} \end{equation} Combining this with (\ref{alpha0}), we obtain $$\|v-\bar{l}\|_{L^{\infty}(S_{\phi_h}(0, \theta))}\leq (\theta^{\frac{1}{2}})^{1+\alpha}.$$ Now, let $$l_{m+1}(x):= l_m(x) + (h^{1/2})^{1+\alpha} \bar{l}(h^{-1/2}A_h x).$$ Then, for $x\in S_{\theta^{m+ 1}}= S_{\theta h}$, we have $h^{-1/2}A_h x\in S_{\phi_h}(0,\theta)$ and $$(u-l_{m+1})(x) = u(x)- l_m(x) - (h^{1/2})^{1+\alpha} \bar{l}(h^{-1/2}A_h x)= (h^{1/2})^{1+\alpha}(v- \bar{l})(h^{-1/2}A_h x).$$ Thus $$\|u-l_{m+1}\|_{L^{\infty}(S_{\theta^{m+1}})} = (h^{1/2})^{1+\alpha}\|v- \bar{l}\|_{L^{\infty}(S_{\phi_h}(0,\theta))}\leq (h^{1/2})^{1+\alpha} (\theta^{1/2})^{1+\alpha}= (\theta^{\frac{m+1}{2}})^{1+\alpha},$$ proving (i). On the other hand, we have $$l_{m+1} (x) = b_{m+1}x_n$$ where, by (\ref{Amap}), $$b_{m+1}:= b_m + (h^{1/2})^{1+\alpha} h^{-1/2} \bar{b} = b_m + h^{\alpha/2}\bar{b}.$$ Therefore, the claim is established, since (ii) follows from (\ref{b-bar}) and $$\abs{b_{m+1}-b_m}= h^{\alpha/2}\abs{\bar{b}}\leq C_{0}\theta^{m\alpha/2}.$$ It remains to prove (\ref{w-eq}). We will apply the ABP estimate to $w-v$, which solves \begin{equation*} \left\{ \begin{alignedat}{2} \Phi_h^{ij}(w-v)_{ij} ~& = -f_h ~&&\text{in} ~ S_{\phi_h}(0, 2\theta), \\ w-v &= \varphi_h-v~&&\text{on}~\partial S_{\phi_h}(0, 2\theta). \end{alignedat} \right. \end{equation*} By this estimate and the way $\varphi_h$ is defined, we have \begin{multline*}\|w-v\|_{L^{\infty}(S_{\phi_h}(0, 2\theta))} \leq \|v\|_{L^{\infty}(\partial S_{\phi_h}(0, 2\theta)\cap \partial\Omega_h)} + C(n)\, diam (S_{\phi_h}(0, 2\theta)) \left\|\frac{f_h}{(\det \Phi_h)^{\frac{1}{n}}}\right\|_{L^{n}(S_{\phi_h}(0, 2\theta))}\\=: (I) + (II).
\end{multline*} To estimate (I), we denote $y= h^{1/2}A_{h}^{-1}x$ for $x\in \partial S_{\phi_h}(0, 2\theta)\cap \partial\Omega_h.$ Then $y\in \partial S_{\phi}(0, 2\theta h)\cap\partial\Omega$ and moreover $$y_n = h^{1/2}x_n,~ y'-\tau_h y_n = h^{1/2} x'.$$ Noting that $x\in \partial S_{\phi_h}(0, 2\theta)\cap \partial\Omega_h\subset B_{k^{-1}}$, we have by (\ref{Amap}) $$\abs{y}\leq k^{-1}h^{1/2}\abs{\log h}\abs{x} \leq h^{1/4}\leq \rho$$ if $h=\theta^m$ is small. This is clearly satisfied when $\theta_0$ is small. Since $\Omega$ has an interior tangent ball of radius $\rho$, we have $$\abs{y_n}\leq \rho^{-1}|y'|^2.$$ Therefore $$\abs{\tau_h y_n}\leq k^{-1}\abs{\log h} \rho^{-1} |y'|^2 \leq k^{-1}\rho^{-1} h^{1/4}\abs{\log h} |y'|\leq \frac{1}{2}|y'|$$ and consequently $$\frac{1}{2}|y'|\leq |h^{1/2} x'|\leq \frac{3}{2}|y'|.$$ From (\ref{loc-h}) $$\tilde\rho |x'|^2 \leq \phi_h(x)\leq 2\theta,$$ we have $$|y'|\leq 2 h^{1/2} |x'|\leq 2(2\tilde\rho^{-1})^{1/2} (\theta h)^{1/2}.$$ By (ii) and $b_0=0$, we have $$\abs{b_m}\leq \sum_{j=1}^{m}\abs{b_j-b_{j-1}}\leq \sum_{j=1}^{\infty} C_0 (\theta^{\alpha/2})^{j-1}= \frac{C_0}{1-\theta^{\alpha/2}}\leq 2C_0$$ if $$\theta_0^{\alpha/2}\leq 1/2.$$ Now, we obtain from the definition of $v$ that $$h^{\frac{1+\alpha}{2}} |v(x)| = |(u-l_m)(y)| \leq \abs{u(y)} + 2C_0\abs{y_n} \leq \delta |y'|^{1+\gamma} + 2C_0 \rho^{-1} |y'|^2 = |y'|^{1+\gamma}(\delta + 2C_0 \rho^{-1} |y'|^{1-\gamma}).$$ Using $|y'|\leq C \theta^{1/2}$ and $\gamma\geq \alpha$, we find $$|v(x)| \leq \frac{C ((\theta h)^{1/2})^{1+\gamma}(\delta +\theta^{\frac{1-\gamma}{2}})}{h^{\frac{1+\alpha}{2}}} = Ch^{\frac{\gamma-\alpha}{2}}\theta^{\frac{1+\gamma}{2}} (\theta^{\frac{1+\alpha}{2}} +\theta^{\frac{1-\gamma}{2}})\leq Ch^{\frac{\gamma-\alpha}{2}}\theta\leq \frac{1}{4} (\theta^{1/2})^{1+\alpha}$$ if $\theta_0$ is small.
We then obtain $$(I) \leq \frac{1}{4} (\theta^{1/2})^{1+\alpha}.$$ To estimate (II), we recall $\delta = (\theta^{1/2})^{1+\alpha}$ and $$S_{\phi_h}(0, 2\theta)\subset B_{C(2\theta)^{1/2} \abs{\log 2\theta}};~ \abs{S_{\phi_h}(0, 2\theta)}\leq C(2\theta)^{n/2}.$$ Since $$\det \Phi_h = (\det D^2\phi_h)^{n-1}\geq \lambda^{n-1},$$ we therefore obtain from H\"older's inequality that \begin{eqnarray*} (II) &\leq& \frac{ C(n)}{\lambda^{\frac{n-1}{n}}}\, diam (S_{\phi_h}(0, 2\theta)) \|f_h\|_{L^{n}(S_{\phi_h}(0, 2\theta))}\\ &\leq& C(n, \lambda)\, diam (S_{\phi_h}(0, 2\theta)) \abs{S_{\phi_h}(0, 2\theta)}^{\frac{1}{n}-\frac{1}{p}} \| f_h\|_{L^{p}(S_{\phi_h}(0, 2\theta))} \\ &\leq & C\delta \theta^{1/2}\abs{\log 2\theta} (\theta^{1/2})^{1-n/p}= C(\theta^{1/2})^{1+\alpha} \abs{\log 2\theta} (\theta^{1/2})^{2-n/p}\leq \frac{1}{4} (\theta^{1/2})^{1+\alpha} \end{eqnarray*} if $\theta_0$ is small. It follows that $$\|w-v\|_{L^{\infty}(S_{\phi_h}(0, 2\theta))} \leq (I) + (II)\leq \frac{1}{2}(\theta^{\frac{1}{2}})^{1+\alpha},$$ proving (\ref{w-eq}). The proof of our theorem is complete. \end{proof} \section{Global $C^{1,\alpha}$ estimates} \label{proof-sec} In this section, we prove Theorem \ref{global-reg}. \begin{proof}[Proof of Theorem \ref{global-reg}] We extend $\varphi$ to a function in $C^{1,\gamma}(\overline{\Omega})$. By the ABP estimate, we have \begin{equation} \label{u-max} \norm{u}_{L^{\infty}(\Omega)} \leq C \left(\norm{f}_{L^{p}(\Omega)} + \|\varphi\|_{L^{\infty}(\overline{\Omega})}\right) \end{equation} for some $C$ depending on $n, p, \rho, \lambda$.
By multiplying $u$ by a suitable constant, we can assume that $$\norm{f}_{L^{p}(\Omega)} + \|\varphi\|_{C^{1,\gamma}(\overline{\Omega})}=1.$$ By using Guti\'errez-Nguyen's interior $C^{1,\alpha}$ estimates \cite{GN} and restricting our estimates to small balls of definite size around $\partial\Omega$, we can assume throughout that $1-\varepsilon\leq g\leq 1+ \varepsilon$, where $\varepsilon$ is as in Theorem \ref{h-bdr-gradient}. Let $y\in \Omega $ with $r:=dist (y,\partial\Omega) \le c,$ for $c$ universal, and consider the maximal section $S_{\phi}(y, h)$ of $\phi$ centered at $y$, i.e., $$h=\sup\{t\,| \quad S_{\phi}(y,t)\subset \Omega\}.$$ Since $\phi$ is $C^{1, 1}$ on the boundary $\partial\Omega$, by Caffarelli's strict convexity theorem, $\phi$ is strictly convex in $\Omega$. This implies the existence of the above maximal section $S_{\phi}(y, h)$ of $\phi$ centered at $y$ with $h>0$. By \cite[Proposition 3.2]{LS1} applied at the point $x_0\in \partial S_{\phi}(y,h) \cap \partial \Omega,$ we have \begin{equation} h^{1/2} \sim r, \label{hr} \end{equation} and $ S_{\phi}(y,h)$ is equivalent to an ellipsoid $E$, i.e., $$cE \subset S_{\phi}(y,h)-y \subset CE,$$ where \begin{equation}E :=h^{1/2}A_{h}^{-1}B_1, \quad \mbox{with} \quad \|A_{h}\|, \|A_{h}^{-1} \| \le C |\log h|; \quad \det A_{h}=1.
\label{eh} \end{equation} We denote $$\phi_y:=\phi-\phi(y)-\nabla \phi(y) (x-y).$$ The rescaling $\tilde \phi: \tilde S_1 \to {\mathbb R}$ of $\phi_y$, $$\tilde \phi(\tilde x):=\frac {1}{ h} \phi_y(T \tilde x), \quad \quad x=T\tilde x:=y+ h^{1/2}A_{h}^{-1}\tilde x,$$ satisfies $$\det D^2\tilde \phi(\tilde x)=\tilde g(\tilde x):=g(T \tilde x), $$ and \begin{equation} \label{normalsect} B_c \subset \tilde S_1 \subset B_C, \quad \quad \tilde S_1= h^{-1/2} A_{h}(S_{\phi}(y, h)- y), \end{equation} where $\tilde S_1:= S_{\tilde \phi} (0, 1)$ represents the section of $\tilde \phi$ at the origin at height 1. We also define the rescaling $\tilde u$ of $u$: $$\tilde u(\tilde x):= h^{-1/2}\left(u(T\tilde x)- u(x_{0})-\nabla u(x_0)(T\tilde x-x_0)\right),\quad \tilde x\in \tilde S_{1}.$$ Then $\tilde u$ solves $$\tilde \Phi^{ij} \tilde u_{ij} = \tilde f(\tilde x):= h^{1/2} f(T\tilde x).$$ Now, we apply Guti\'errez-Nguyen's interior $C^{1,\alpha}$ estimates \cite{GN} to $\tilde u $ to obtain $$\abs{D\tilde u (\tilde z_{1})-D\tilde u(\tilde z_{2})}\leq C\abs{\tilde z_{1}-\tilde z_{2}}^{\beta} \{\norm{\tilde u }_{L^{\infty}(\tilde S_{1})} + \norm{\tilde f}_{L^{p}(\tilde S_{1})}\},\quad\forall \tilde z_{1}, \tilde z_{2}\in \tilde S_{1/2},$$ for some small constant $\beta\in (0,1)$ depending only on $n, \lambda, \Lambda$.\\ By (\ref{normalsect}), we can decrease $\beta$ if necessary, and thus we can assume that $2\beta\leq \alpha$, where $\alpha\in (0,1)$ is the exponent in Theorem \ref{h-bdr-gradient}. Note that, by (\ref{eh}), \begin{equation} \norm{\tilde f}_{L^{p}(\tilde S_{1})} = h^{1/2-\frac{n}{2p}}\norm{f}_{L^{p}(S_{\phi}(y, h))}.
\label{scaled-lp} \end{equation} We observe that (\ref{hr}) and (\ref{eh}) give $$B_{C r\abs{\log r}}(y)\supset S_{\phi}(y,h) \supset S_{\phi}(y,h/2)\supset B_{c\frac{r}{\abs{\log r}}}(y)$$ and $$diam ( S_{\phi}(y,h))\leq Cr\abs{\log r}.$$ By Theorem \ref{h-bdr-gradient} applied to the original function $u$, together with (\ref{u-max}) and (\ref{hr}), we have $$\norm{\tilde u }_{L^{\infty}(\tilde S_{1})} \leq C h^{-1/2} \left(\|u\|_{L^{\infty}(\Omega)} +\norm{f}_{L^{p}(\Omega)} + \|\varphi\|_{C^{1,\gamma}(\overline{\Omega})}\right)diam ( S_{\phi}(y,h))^{ 1+ \alpha} \leq C r^{\alpha}\abs{\log r}^{1 +\alpha}.$$ Hence, using (\ref{scaled-lp}) and the fact that $\alpha\leq \frac{1}{2} (1-n/p)$, we get $$\abs{D\tilde u (\tilde z_{1})-D\tilde u(\tilde z_{2})}\leq C\abs{\tilde z_{1}-\tilde z_{2}}^{\beta} r^{\alpha}\abs{\log r}^{1 +\alpha}~\forall \tilde z_{1}, \tilde z_{2}\in \tilde S_{1/2}.$$ Rescaling back and using $$\tilde z_1-\tilde z_2= h^{-1/2}A_{ h}(z_1-z_2),\quad h^{1/2}\sim r,$$ and the fact that $$\abs{\tilde z_1-\tilde z_2}\leq \norm{ h^{-1/2}A_{ h}}\abs{z_1-z_2} \leq C h^{-1/2}\abs{\log h}\abs{z_1-z_2}\leq C r^{-1}\abs{\log r}\abs{z_1-z_2},$$ we find \begin{eqnarray}|Du(z_1)-Du( z_2)| &=&\abs{A_{h}\left(D\tilde u (\tilde z_{1})-D\tilde u(\tilde z_{2})\right)} \leq C\abs{\log h}(r^{-1}\abs{\log r}\abs{z_1-z_2})^{\beta} r^{\alpha}\abs{\log r}^{1 +\alpha} \nonumber\\& \le& |z_1-z_2|^{\beta} \quad \forall z_1, z_2 \in S_{\phi}(y,h/2). \label{oscv} \end{eqnarray} Notice that this inequality also holds in the Euclidean ball $B_{c\frac{r}{\abs{\log r}}}(y)\subset S_{\phi}(y,h/2)$. Combining this with Theorem \ref{h-bdr-gradient}, we easily obtain $$[Du]_{C^\beta(\bar \Omega)} \le C$$ and the desired global $C^{1,\beta}$ bounds for $u$. \end{proof} {\bf Acknowledgments.} The authors would like to thank the referee for constructive comments on the manuscript.
The first author was partially supported by the Vietnam Institute for Advanced Study in Mathematics (VIASM), Hanoi, Vietnam. \begin{thebibliography}{9999} \bibitem{C} Caffarelli, L. A. Interior a priori estimates for solutions of fully nonlinear equations, {\it Ann. of Math.} (2) {\bf 130} (1989), 189--213. \bibitem{CC} Caffarelli, L. A.; Cabr\'e, X. {\em Fully nonlinear elliptic equations.} American Mathematical Society Colloquium Publications, volume 43, 1995. \bibitem{GN} Guti\'errez, C.; Nguyen, T. Interior gradient estimates for solutions to the linearized Monge-Amp\`ere equations, {\it Adv. Math.} {\bf 228} (2011), 2034--2070. \bibitem{LN} Le, N. Q.; Nguyen, T. Global $W^{2,p}$ estimates for solutions to the linearized Monge-Amp\`ere equations, arXiv:1209.1998v2 [math.AP], to appear in {\it Math. Ann.} \bibitem{LS1} Le, N. Q.; Savin, O. Boundary regularity for solutions to the linearized Monge-Amp\`ere equations, {\it Arch. Ration. Mech. Anal.} DOI:10.1007/s00205-013-0653-5. \bibitem{S1} Savin, O. A localization property at the boundary for the Monge-Amp\`ere equation. {\it Advances in Geometric Analysis}, 45--68, Adv. Lect. Math. (ALM), {\bf 21}, Int. Press, Somerville, MA, 2012. \bibitem{S2} Savin, O. Pointwise $C^{2,\alpha}$ estimates at the boundary for the Monge-Amp\`ere equation, {\it J. Amer. Math. Soc.} {\bf 26} (2013), 63--99. \bibitem{S3} Savin, O. Global $W^{2,p}$ estimates for the Monge-Amp\`ere equations, {\it Proc. Amer. Math. Soc.} {\bf 141} (2013), 3573--3578. \bibitem{U1} Ural'tseva, N. N. H\"older continuity of gradients of solutions of parabolic equations with boundary conditions of Signorini type. (Russian) {\it Dokl. Akad. Nauk SSSR} {\bf 280} (1985), no. 3, 563--565; English translation: {\it Soviet Math. Dokl.} {\bf 31} (1985), no. 1, 135--138. \bibitem{U2} Ural'tseva, N. N. Estimates of derivatives of solutions of elliptic and parabolic inequalities. Proceedings of the International Congress of Mathematicians, Vol.
1, 2 (Berkeley, Calif., 1986), 1143--1149, Amer. Math. Soc., Providence, RI, 1987. \bibitem{Wang1} Wang, X. J. Some counterexamples to the regularity of Monge-Amp\`ere equations. {\it Proc. Amer. Math. Soc.} {\bf 123} (1995), no. 3, 841--845. \bibitem{Wang} Wang, X. J. Schauder estimates for elliptic and parabolic equations, {\it Chinese Ann. Math.} {\bf 27} (2006), 637--642. \end{thebibliography} \end{document}
\begin{document} \title{On degenerations of ${\mathbb{Z}}/2$-Godeaux surfaces} \author{\textrm{Eduardo Dias, Carlos Rito, Giancarlo Urz\'ua}} \date{} \maketitle \begin{abstract} We compute equations for Coughlan's family \cite{CoughlanGodeaux} of Godeaux surfaces with torsion ${\mathbb{Z}}/2$, which we call ${\mathbb{Z}}/2$-Godeaux surfaces, and we show that it is (at most) 7-dimensional. We classify non-rational KSBA degenerations $W$ of ${\mathbb{Z}}/2$-Godeaux surfaces with one Wahl singularity, showing that $W$ is birational either to particular Enriques surfaces, or to particular $D_{2,n}$ elliptic surfaces with $n=3,4$ or $6$. We present examples for all possibilities in the first case, and for $n=3,4$ in the second. \end{abstract} \tableofcontents \section{Introduction} \label{s0} Smooth minimal complex projective surfaces of general type with the lowest possible numerical invariants, namely geometric genus $p_g=0$ and self-intersection of the canonical divisor $K^2=1$, have been known to exist since Godeaux's construction in 1931 \cite{God31}. His surface has topological fundamental group $\mathbb Z/5$. Surfaces of general type with $K^2=1, p_g=0$ are called numerical Godeaux surfaces. Miyaoka \cite{Miy76} showed that the order of their torsion group is at most 5, and Reid \cite{Rei78} excluded the case $(\mathbb Z/2)^2$, so their possible torsion groups are $\mathbb Z/n$ with $1\leq n\leq 5$. All of them are realizable. Reid constructed the moduli space for the cases $n=5,4,3$, and it follows from his work that the topological fundamental group coincides with the torsion group for $n=5,4$. Coughlan and Urz\'ua \cite{CU18} showed that the same happens for $n=3$. In those three cases, the moduli space is unirational and irreducible of dimension 8. Reid conjectured that the same should happen for numerical Godeaux surfaces with torsion $\mathbb Z/2$ and with no torsion.
Both cases remain a challenge as far as we know; it is not even known if the topological fundamental groups are indeed ${\mathbb{Z}}/2$ and trivial in these two cases. Several authors have worked on these surfaces, and there are some unrelated constructions of some components of the moduli space. (See e.g. \cite{CP00}, \cite[\S 6]{CataneseY}, \cite{RTU17}, and \cite{BCP11} for a survey on $p_g=0$ surfaces and various references.) In the case of ${\mathbb{Z}}/2$-Godeaux surfaces, Catanese and Debarre \cite{CataneseY} show that their \'etale double covers are surfaces with birational bicanonical map and hyperelliptic canonical curve, and they carry out a general study of their canonical ring. Coughlan \cite{CoughlanGodeaux} gives the construction of a family depending on 8 parameters. In this paper, we implement Coughlan's construction, overcoming some computational difficulties, and we obtain explicit equations for his family of surfaces. We show that it depends on at most $7$ parameters, so the problem of classification of $\mathbb Z/2$-Godeaux surfaces is still wide open. (We recall that deformation theory has been used to show the existence of 8-dimensional components of $\mathbb Z/2$-Godeaux surfaces, see e.g. \cite{Werner8dim}, \cite{KLP12}, and Remark \ref{qgor}.) Each surface is embedded in the projective space $\mathbb P(1,2,2,2,3,3,3,3,4)$, and we give equations for the embedding by the tricanonical map into $\mathbb P^7$, as well as the image under the bicanonical map, an octic surface in $\mathbb P^3$. Moreover, we show that the \'etale coverings of Coughlan's surfaces belong to the 16-dimensional component $\mathscr M_E$ described in \cite[\S 5]{CataneseY}, thus their topological fundamental group is $\mathbb Z/2$. We also classify degenerations to non-rational surfaces $W$ with a unique Wahl singularity and ample canonical class.
Hence these surfaces $W$ belong to the Koll\'ar--Shepherd-Barron--Alexeev (KSBA) compactification of the moduli space of Godeaux surfaces \cite{KSB88,A94} (see \cite{Hack11}). The relevance of stable surfaces with one Wahl singularity is that, under no obstructions in deformation, they represent boundary divisors in the KSBA compactification (see \cite[\S 9]{Hack11}), and they are abundant (see e.g. \cite{SU16}). What allows us to classify is the recent work \cite{RU19}, which optimally bounds Wahl singularities in stable surfaces with one singularity. Using that work and the particular situation of degenerations of ${\mathbb{Z}}/2$-Godeaux surfaces, we show in this paper that the smooth minimal model of $W$ is either a particular Enriques surface or a particular $D_{2,n}$ elliptic surface, with $n\in \{3,4,6\}$. We give a complete list of the geometric possibilities and the singularities that may occur. The description of an Enriques surface as a double plane makes the construction of examples for this case simpler, and in fact we give constructions for all cases in the list. In addition, we use the explicit MMP in \cite{HTU17,U16} to show existence in some cases. The case of $D_{2,n}$ elliptic surfaces is much harder, and explicit constructions are difficult to obtain, so we take a different approach: we search for such degenerations using our equations for Coughlan's family of $\mathbb Z/2$-Godeaux surfaces. The method is to study random surfaces, working over finite fields, in order to get an idea of where the interesting cases may be, and then to try to construct them over the complex numbers. We explicitly obtain codimension 1 families of $D_{2,4}$ and $D_{2,3}$ elliptic surfaces (containing a $(-4)$-curve). The existence of the first case can also be proved via MMP \cite{HTU17,U16}, and we indicate how, but of course this is not explicit. That case appears in several constructions by deformations, suggesting that the irreducibility of the moduli space may hold.
The second case is more interesting because one can show the existence of $D_{2,3}$ surfaces with a $(-4)$-curve inside, whose contraction can be ${\mathbb{Q}}$-Gorenstein smoothed to simply connected Godeaux surfaces (see e.g. \cite[\S 5]{U16}). The paper is organized as follows. In Section \ref{ListPossibilities} we give a complete list of possibilities for the degenerations to non-rational surfaces with one Wahl singularity that may occur. In Sections \ref{Enriques} and \ref{Realizations} we construct several examples of such degenerations: all possible cases with $W$ an Enriques surface, with explicit constructions; and two different cases with $W$ a $D_{2,4}$ surface, using deformation theory and MMP. In Section \ref{CoughlanFamily} we describe how to find explicit equations for Coughlan's family of $\mathbb Z/2$-Godeaux surfaces, using computer algebra, and we show that it is (at most) 7-dimensional. Finally, in Section \ref{CoughlanDnm} we explain how to find, in this family, equations for surfaces in the cases where $W$ is a $D_{2,4}$ or $D_{2,3}$ surface. \subsubsection*{Notation} \noindent \begin{itemize} \item A $(-m)$-curve is a curve isomorphic to ${\mathbb{P}}^1$ with self-intersection $-m$. \item A $D_{n,m}$ surface is a smooth projective surface with an elliptic fibration over $\mathbb P^1$ which has two fibres of multiplicities $n,m$, and $p_g=0$. The fundamental group of $D_{n,m}$ is ${\mathbb{Z}}/\text{gcd}(n,m)$ (see e.g. \cite[\S 3]{Dolg88}). \item A ${\mathbb{Z}}/2$-Godeaux surface is a smooth minimal projective surface with $p_g=0$, $K^2=1$, and $\pi_1^{\text{\'et}}={\mathbb{Z}}/2$ (which is equivalent to having torsion group ${\mathbb{Z}}/2$). \item If $\phi \colon X \to W$ is a birational morphism, then exc$(\phi)$ is the exceptional divisor. The strict transform of an irreducible curve $\Gamma$ in $W$ will be denoted by $\Gamma$ again.
\item A cyclic quotient singularity $Y$, denoted by $\frac{1}{m}(1,q)$, is a germ at the origin of the quotient of ${\mathbb{C}}^2$ by the action of $\mu_m$ given by $(x,y)\mapsto (\mu_m x, \mu_m^q y)$, where $\mu_m$ is a primitive $m$-th root of $1$, and $q$ is an integer with $0<q<m$ and gcd$(q,m)=1$. If $\sigma \colon \widetilde{Y} \rightarrow Y$ is the minimal resolution of $Y$, then the exceptional curves $E_i={\mathbb{P}}^1$ of $\sigma$, with $1 \leq i \leq s$, form a chain such that $E_i^2=-b_i$ where $ \frac{m}{q} = [b_1, \ldots ,b_s]$ is the Hirzebruch-Jung continued fraction. Commonly we will refer to exc$(\sigma)$ as $[b_1, \ldots ,b_s]$. \item The Kodaira dimension of $X$ is denoted by $\kappa(X)$. \item A KSBA surface in this paper is a normal projective surface with log-canonical singularities and ample canonical class \cite{KSB88}. \end{itemize} \subsubsection*{Acknowledgments} We thank Stephen Coughlan for useful conversations related to this paper. The first and second authors were supported by FCT (Portugal) under the project PTDC/MAT-GEO/2823/2014 and by CMUP (UID/ MAT/00144/2019), which is funded by FCT with national (MCTES) and European structural funds through the programs FEDER, under the partnership agreement PT2020. The second author was supported by the fellowship SFRH/BPD/111131/2015. The third author thanks FONDECYT for support from the regular grant 1190066. The second and third authors thank Pontificia Universidad Cat\'olica de Chile and Universidade do Porto for the hospitality during their visits in December 2018 and February 2019. \section{Non-rational degenerations with one Wahl singularity} A Wahl singularity is a cyclic quotient singularity of type $\frac{1}{n^2}(1,na-1)$ with $0<a<n$ and gcd$(a,n)=1$. Equivalently, Wahl singularities are precisely the cyclic quotient singularities which admit a smoothing with Milnor number equal to zero.
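The Hirzebruch-Jung expansions and Wahl chains used throughout the paper are easy to compute mechanically. The following small sketch (in Python; it is our own check, not part of the paper's Magma/Singular code) reproduces several chains appearing below; note that a chain may equally well be read in the opposite order, which corresponds to replacing $q$ by its inverse modulo $m$.

```python
def hj(m, q):
    """Hirzebruch-Jung continued fraction m/q = [b1,...,bs]: the b_i are
    the negatives of the self-intersections of the exceptional chain
    resolving the cyclic quotient singularity (1/m)(1,q)."""
    assert 0 < q < m
    bs = []
    while q > 0:
        b = -(-m // q)          # ceil(m/q)
        bs.append(b)
        m, q = q, b * q - m
    return bs

def wahl_chain(n, a):
    """Resolution chain of the Wahl singularity (1/n^2)(1, na-1)."""
    return hj(n * n, n * a - 1)

print(wahl_chain(2, 1))   # [4]
print(wahl_chain(3, 1))   # [5, 2]
print(wahl_chain(7, 3))   # [3, 2, 6, 2], i.e. [2, 6, 2, 3] reversed
```

For instance, $\frac{1}{7^2}(1,20)$ (so $n=7$, $a=3$) gives the chain $[2,6,2,3]$ up to orientation, matching the contraction carried out in the (B1)/(B4) construction.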
KSBA surfaces with one Wahl singularity turn out to be abundant in the closure of the moduli space of surfaces of general type. When in addition there are no local-to-global obstructions, these surfaces represent divisors in the KSBA compactification (see \cite[\S 4]{Hack11}). In this section we classify all possible degenerations of ${\mathbb{Z}}/2$-Godeaux surfaces into non-rational KSBA surfaces with one Wahl singularity. The main tool is \cite{RU19}, where we can find explicit optimal bounds for Wahl singularities and some useful features for particularly ``small'' cases. \subsection{List of possibilities}\label{ListPossibilities} \begin{figure} \caption{Options for $\kappa=1$} \label{f1} \end{figure} \begin{figure} \caption{Options for $\kappa=0$} \label{f2} \end{figure} \begin{theorem} Let $W$ be a ${\mathbb{Q}}$-Gorenstein degeneration of a ${\mathbb{Z}}/2$-Godeaux surface which has one Wahl singularity and $K_W$ ample. If $\phi \colon X \to W$ is the minimal resolution and $X$ is not rational, then $X$ belongs to the following list: \begin{itemize} \item[A.] $\kappa(X)=1$ \begin{enumerate} \item[$(1)$] The surface $X$ is a $D_{2,3}$, and exc$(\phi)=[4]$. \item[$(2)$] The surface $X$ is a $D_{2,6}$, and exc$(\phi)=[4]$. \item[$(3)$] The surface $X$ is a $D_{2,4}$, and exc$(\phi)=[4]$. \item[$(4)$] The surface $X$ is the blow-up at one point of a $D_{2,4}$, and exc$(\phi)=[5,2]$. The $(-1)$-curve intersects the $(-5)$-curve with multiplicity $2$. \item[$(5)$] The surface $X$ is the blow-up of a $D_{2,4}$ twice at the node of the multiplicity four $I_1$ fiber, and exc$(\phi)=[3,5,2]$. The surface $D_{2,4}$ contains a $(-3)$-curve which is a $4$-section. \end{enumerate} \item[B.] $\kappa(X)=0$, and $X$ is an Enriques surface blown-up \begin{enumerate} \item[$(1)$] once, and exc$(\phi)=[5,2]$. The $(-1)$-curve intersects the $(-5)$-curve with multiplicity $3$. \item[$(2)$] twice, and exc$(\phi)=[2,5,3]$.
There is one $(-1)$-curve touching the $(-5)$-curve with multiplicity $2$, and there is another $(-1)$-curve intersecting the $(-5)$-curve and the $(-3)$-curve at one point. \item[$(3)$] twice, and exc$(\phi)=[6,2,2]$. There are two disjoint $(-1)$-curves intersecting the $(-6)$-curve with multiplicity $2$ each. \item[$(4)$] three times, and exc$(\phi)=[2,6,2,3]$. There is a $(-1)$-curve intersecting the first $(-2)$-curve and the $(-6)$-curve at one point each, and there is a $(-1)$-curve intersecting the $(-6)$-curve and the $(-3)$-curve at one point each. \item[$(5)$] three times, and exc$(\phi)=[3,5,3,2]$. There is a $(-1)$-curve intersecting the first $(-3)$-curve and the $(-5)$-curve at one point each, and there is a $(-1)$-curve intersecting the $(-5)$-curve and the $(-2)$-curve at one point each. \item[$(6)$] four times, and exc$(\phi)=[2,2,3,5,4]$. There is a $(-1)$-curve intersecting the first $(-2)$-curve and the $(-5)$-curve at one point each, and there is a $(-1)$-curve intersecting the $(-4)$-curve with multiplicity $2$. \item[$(7)$] four times, and exc$(\phi)=[2,2,6,2,4]$. There is a $(-1)$-curve intersecting the first $(-2)$-curve and the $(-6)$-curve at one point each, and there is a $(-1)$-curve intersecting the $(-4)$-curve with multiplicity $2$. \end{enumerate} \end{itemize} Moreover, all cases do exist, except possibly (A2) and (A5). \label{possible} \end{theorem} \begin{proof} First, by \cite[Proposition 2.2]{RU19} and our hypothesis ($K_W^2=1$ and $W$ non-rational), we have that $X$ is the blow-up of either an elliptic surface with $q=0$ or an Enriques surface. Note that $p_g(W)=0$ as well, because $W$ is a ${\mathbb{Q}}$-Gorenstein degeneration of a ${\mathbb{Z}}/2$-Godeaux surface $Z$. {\bf Say that $\kappa(X)=1$.} Let $\pi \colon X \to S$ be the blow-down to a minimal surface $S$. Hence, in our situation, $S$ has an elliptic fibration $S \to {\mathbb{P}}^1$.
As in the proof of \cite[Proposition 6.1]{RTU17}, we have that $$\pi_1^{\text{\'et}}(Z) \to \pi_1^{\text{\'et}}(W)$$ is surjective, $\pi_1^{\text{\'et}}(W) \simeq \pi_1^{\text{\'et}}(X)$, and $\pi_1(X)$ is residually finite, and so $\pi_1(X)$ could be trivial or ${\mathbb{Z}}/2$. As the fundamental group is finite and $\kappa(S)=1$, by \cite[Corollary p.146]{Dolg88} we have that the elliptic surface $S$ must have two multiple fibres and so it is a $D_{n,m}$. Since $\pi_1(X)$ could be trivial or ${\mathbb{Z}}/2$, we have that gcd$(n,m)=1$ or $2$ respectively. The canonical class formula gives $K_S \sim -F + (n-1)F_n + (m-1)F_m$ where $F$ is a general fibre, and the divisors $F_n$, $F_m$ are reduced fibres so that $F \sim nF_n \sim mF_m$. On the other hand, by \cite[Theorem 2.15]{RU19} we have that the exceptional divisors in $X$ could be $[4]$, $[5,2]$, $[6,2,2]$, $[2,5,3]$. We now check case by case: $[4]$: Then $X=S$. Let $C$ be the $(-4)$-curve. Then $K_S \cdot C=2$ gives restrictions on $(n,m)$. The only possible pairs $(n,m)$ are $(2,3)$, $(2,4)$ and $(2,6)$. $[5,2]$: Then $X \to S$ is the blow-up at one point. The $(-1)$-curve cannot touch the $(-2)$-curve, and so it must touch the $(-5)$-curve with multiplicity $2$. (It cannot intersect it at just one point since $K_W$ is ample, and for multiplicity $>2$ the image curve would be $K_S$-trivial or $K_S$-negative.) So in $S$ the $(-5)$-curve becomes a curve $\Gamma$ such that $\Gamma \cdot K_S=1$. Then we use the canonical class formula and gcd$(n,m)=1$ or $2$ to get that $(n,m)=(2,4)$ or $(2,3)$ only. In the case of $(2,3)$ we have a simply connected surface. Then by \cite[Corollary 1.2.4]{H16} and since the index of $[5,2]$ is $3$, we obtain an exact sequence ${\mathbb{Z}}/3 \to {\mathbb{Z}}/2 \to 0$, which is a contradiction. Therefore the only possible case is $(2,4)$. $[6,2,2]$: Then $X \to S$ is the composition of two blow-ups.
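The arithmetic in the $[4]$ case can be checked by brute force: writing $d=F\cdot C$, the canonical class formula gives $K_S\cdot C=\big(1-\tfrac1n-\tfrac1m\big)d=2$, and $F\sim nF_n\sim mF_m$ forces $\operatorname{lcm}(n,m)\mid d$. A small enumeration sketch (our own check, not part of the paper's code):

```python
from fractions import Fraction
from math import gcd

def allowed_pairs(bound=60):
    """Pairs (n,m) with gcd(n,m) in {1,2} admitting a (-4)-curve C on
    D_{n,m} with K_S.C = 2: d = F.C = 2nm/(nm-n-m) must be a positive
    integer divisible by lcm(n,m)."""
    out = []
    for n in range(2, bound):
        for m in range(n, bound):
            if gcd(n, m) > 2 or n * m - n - m <= 0:
                continue
            d = Fraction(2 * n * m, n * m - n - m)
            if d.denominator == 1 and int(d) % (n * m // gcd(n, m)) == 0:
                out.append((n, m))
    return out

print(allowed_pairs())   # [(2, 3), (2, 4), (2, 6)]
```

For $n,m\geq 3$ one has $d\leq 6 < \operatorname{lcm}(n,m)$ except in finitely many excluded cases, so the bound in the sketch is harmless; the enumeration recovers exactly the three pairs stated in the proof.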
According to \cite[Corollaries 2.12, 2.13 and Theorem 2.15]{RU19} this case can only happen with a $(-1)$-curve which forms a long diagram of type I or II (see \cite{RU19} for the definition). But then there is a $(-1)$-curve intersecting one of the $(-2)$-curves transversally at only one point, and this is a contradiction with the number of blow-ups from $S$. $[2,5,3]$: Similarly, according to \cite[Corollaries 2.12, 2.13 and Theorem 2.15]{RU19} this can only happen when $X \to S$ is a composition of two blow-ups, where there is a $(-1)$-curve in $X$ intersecting the $(-2)$-curve once and the $(-5)$-curve once, disjoint from the $(-3)$-curve. But then the $(-3)$-curve in $S=D_{n,m}$ intersects a nodal rational curve at one point. Then, by using adjunction, we obtain that the only possible pair is $(n,m)=(2,4)$ where the multiplicity $4$ fiber is the $I_1$ image under $X \to S$ of the $(-5)$-curve.\footnote{This was not considered in \cite[Theorem 3.2]{RU19}, but it is in the arXiv corrected version.} {\bf Say now that $\kappa(X)=0$.} Let $\pi \colon X \to S$ be the blow-down to an Enriques surface $S$. By \cite[Corollaries 2.12, 2.13 and Theorem 2.15]{RU19} we have that the exceptional divisor in $X$ could have at most $5$ ${\mathbb{P}}^1$'s. The case of $5$ ${\mathbb{P}}^1$'s was classified in \cite[Theorem 3.1]{RU19}, and it gives precisely the cases (6) and (7) in the list above. Thus we now check case by case when we have at most $4$ ${\mathbb{P}}^1$'s: $[4]$: This case is impossible since $X=S$ and $K_S \equiv 0$. $[5,2]$: We have that $X \to S$ is the blow-up at one point. The $(-1)$-curve cannot touch the $(-2)$-curve. The only possibility then is that it touches the $(-5)$-curve with multiplicity $3$. $[6,2,2]$: Here $X \to S$ contracts two $(-1)$-curves. One checks that a $(-1)$-curve must be disjoint from the $(-2)$-curves. Since $K_S \equiv 0$, the only possible situation is to have two disjoint $(-1)$-curves intersecting the $(-6)$-curve at two points each.
$[2,5,3]$: In this case $X \to S$ is a composition of two blow-ups. The $(-1)$-curve cannot touch the $(-2)$-curve. Since we have a $(-3)$-curve in $X$, we need a $(-1)$-curve touching it once. Since $K_W$ is ample, it must intersect the $(-5)$-curve. It can only be at one point, and there must exist another $(-1)$-curve intersecting the $(-5)$-curve with multiplicity $2$. For the next cases, only the situation of a long diagram of type I or II can occur. The map $X \to S$ is a composition of three blow-ups. $[7,2,2,2]$: It is not possible, since we would have $4$ blow-downs. $[2,6,2,3]$: There is a $(-1)$-curve intersecting the first $(-2)$-curve and the $(-6)$-curve at one point each. After contracting it and the new $(-1)$-curve from the $(-2)$-curve, we obtain a nodal rational curve with self-intersection $-1$. We still have a $(-3)$-curve, and so a new $(-1)$-curve is needed intersecting it at one point, and also the nodal $(-1)$-curve. $[2,2,5,4]$: Long diagrams of type I or II here are not possible, just using that $K_S \equiv 0$. $[3,5,3,2]$: Here the long diagram gives a $(-1)$-curve intersecting the $(-2)$-curve and the $(-5)$-curve at one point each. After that, one can check that there must be a $(-1)$-curve intersecting the first $(-3)$-curve and the $(-5)$-curve. The existence of such surfaces will be proved in the next subsection. \end{proof} \begin{remark} For simply-connected Godeaux surfaces the analogous non-rational list contains only two possible surfaces: either a $D_{2,3}$ with exc$(\phi)=[4]$, or the blow-up at one point of a $D_{2,3}$ with exc$(\phi)=[5,2]$ and a $(-1)$-curve intersecting the $(-5)$-curve with multiplicity $2$. Both are realizable (see e.g. \cite[Table in p.666]{SU16}), and give divisors in the KSBA compactification of the moduli space. \end{remark} \begin{remark} It is not clear how to optimally bound Wahl singularities in rational surfaces.
As far as we know, there is no written example of a rational degeneration $W$ of ${\mathbb{Z}}/2$-Godeaux surfaces in the literature. (We believe they exist in Coughlan's family of ${\mathbb{Z}}/2$-Godeaux surfaces, but the computations involved in order to describe them are terribly slow.) However for simply-connected Godeaux surfaces there are many examples (see e.g. \cite[Table in p.666]{SU16}, where there are $30$ examples). We note that in this rational case the index of the Wahl singularity for a ${\mathbb{Z}}/2$-Godeaux degeneration must be even because of \cite[Corollary 1.2.4]{H16}. \end{remark} \subsection{Enriques double planes}\label{Enriques} The following construction of an Enriques surface as the smooth minimal model of a double plane is well-known, see e.g. \cite[\S IV.9]{CoDo89}. Consider lines $L_1,L_2\subset\mathbb P^2$ meeting at a point $p_0,$ and take points $p_1\in L_1,$ $p_2\in L_2,$ $p_1,p_2\not= p_0.$ Let $B$ be a sextic plane curve with a node at $p_0,$ a tacnode at $p_i$ with branches tangent to the line $L_i,$ for $i=1,2,$ and at most other negligible singularities. Let $\pi:X\rightarrow\mathbb P^2$ be the blow-up that resolves the singularities of the curve $B+L_1+L_2$. For $i=1,2,$ let $\widehat L_i$ be the strict transform of $L_i$ and $E_0,E_i,E_i'$ be the exceptional curves such that the total transform of $L_i$ is $E_0+\widehat L_i+2E_i'+E_i$ (we have $E_i^2=-2,$ $E_i'^2=-1$). Let $S'\rightarrow X$ be the double cover with branch curve $$\overline B:=\widehat B +\widehat L_1+\widehat L_2+E_1+E_2,$$ where $\widehat B$ is the strict transform of $B.$ Let $S$ be the minimal model of $S',$ which is obtained by contracting the four $(-1)$-curves that are the preimage of $\widehat L_1+\widehat L_2+E_1+E_2.$ The surface $S$ is an Enriques surface. 
\begin{figure} \caption{Sextic curves and their resolution.} \label{sextics} \end{figure} Consider the three Enriques surfaces corresponding to branch curves $\overline B\subset X$ as in Figure \ref{sextics}, where from left to right we blow up $\mathbb P^2$ until we resolve the singularities of the curve (dotted curves are not in the branch curve). Note that the existence of such curves is not surprising, because we are imposing at most $20$ conditions on a linear system of dimension $27$. We give explicit equations in an arXiv ancillary file. These smooth Enriques surfaces have a configuration of rational curves as in Figure \ref{f3}, with the correspondences \begin{itemize} \item[(i)] $T_1\longleftrightarrow C+D,\ \ T_2\longleftrightarrow B,\ \ T_3\longleftrightarrow A,$ \item[(ii)] $T_4\longleftrightarrow B,\ \ T_5\longleftrightarrow A,\ \ T_6\longleftrightarrow D,\ \ T_7\longleftrightarrow C,$ \item[(iii)] $T_8\longleftrightarrow B,\ \ T_9\longleftrightarrow A,\ \ T_{10}\longleftrightarrow C,\ \ T_{11}\longleftrightarrow D.$ \end{itemize} \subsection{Realizations of degenerations}\label{Realizations} In this section we discuss the realization of the possibilities in Theorem \ref{possible}. Below we follow the numbering of that theorem. We do not know whether the possibilities \textbf{(A2)} and \textbf{(A5)} exist. The case \textbf{(A2)} is in the classification of degenerations of Godeaux surfaces with one $\frac{1}{4}(1,1)$ singularity in \cite{K14}, but there was no construction (see \cite[Remark 2.11]{K14}). \textbf{(A1)} It can be realized using our equations for Coughlan's family of surfaces. For the details see Section \ref{D23}. These degenerations are particularly interesting since we have the same ${\mathbb{Q}}$-Gorenstein degenerations via simply connected Godeaux surfaces (see e.g. \cite[\S 5]{U16}). \textbf{(A3)} This possibility can be realized using \cite[Example 1]{KLP12}.
The singular surface $W$ constructed for that example has $4$ singularities: two $[2,3,2,4]$ and two $[4]$. It also has no local-to-global obstructions to deform, and it is proved that a ${\mathbb{Q}}$-Gorenstein smoothing is a Godeaux surface with $\pi_1 \simeq {\mathbb{Z}}/2$. We realize a surface in (A3) as the minimal resolution of a ${\mathbb{Q}}$-Gorenstein smoothing of all singularities in $W$ except one $[4]$. To show that it is indeed a $D_{2,4}$ with a $(-4)$-curve inside, we use the explicit MMP in \cite{HTU17} (see also \cite{U16}). We note that it is not a trivial computation since we have $3$ possibilities for $D_{n,m}$ here. At the end, it is a $D_{2,4}$ because it comes from a ${\mathbb{Q}}$-Gorenstein smoothing ``over'' a multiplicity $2$ fiber for the singularity $[4]$. This example gives a divisor in the KSBA moduli space, whose general member is a $D_{2,4}$ with the $(-4)$-curve contracted. Also, it can be realized explicitly using our equations for Coughlan's family of surfaces. For the details see Section \ref{D24}. \textbf{(A4)} Take again the singular surface $W$ in \cite[Example 1]{KLP12}, but now we ${\mathbb{Q}}$-Gorenstein smooth all singularities in $W$ except one $[2,3,2,4]$. By the explicit MMP in \cite{HTU17} we obtain that the minimal resolution of $[2,3,2,4]$ is the blow-up of a $D_{2,4}$ at one point, where the $(-1)$-curve connects the $(-3)$-curve with the $(-4)$-curve. We recall that the $M$-resolution of $[2,3,2,4]$ is the partial resolution $[2,5]-1-[2,5]-1-[2,5]$, which also has no local-to-global obstructions. Then we just keep one $[2,5]$, ${\mathbb{Q}}$-Gorenstein smoothing all the rest, to obtain the surface we are looking for. Its minimal resolution corresponds to (A4). As in (A3), this example gives a divisor in the KSBA moduli space.
\begin{figure} \caption{Key configurations of Enriques type} \label{f3} \end{figure} \textbf{(B1)} and \textbf{(B4)} From Section \ref{Enriques}, there exists an Enriques surface $S$ which has the configuration of smooth rational curves $A,B,C,D$ shown in Figure \ref{f3} part (i). Let $\pi \colon X \to S$ be the blow-up of $S$ five times, so that the configuration $A,B,C,D$ is transformed into the configuration in Figure \ref{f4}, where the $E_i$ are the ordered exceptional curves. Hence $E_1^2=E_3^2=-2$, and $E_2^2=E_4^2=E_5^2=-1$. \begin{figure} \caption{The surface $X$ for cases (B1) and (B4)} \label{f4} \end{figure} We get Wahl chains $[E_1,B,C,D]=[2,6,2,3]$ and $[A,E_3]=[5,2]$. Let $\phi \colon X \to W$ be the contraction of both of them. The normal projective surface $W$ has two Wahl singularities $\frac{1}{7^2}(1,20)$ and $\frac{1}{3^2}(1,2)$. The canonical class $K_W$ is ample since $\phi^*(K_W)$ can be written ${\mathbb{Q}}$-effectively using only curves in Figure \ref{f4}, and so we check ampleness through the intersections $\phi^*(K_W).E_i >0$ for $i=2,4,5$. \footnote{We may find ADE configurations disjoint from $A,B,C,D$ in $S$ which would intersect $K_W$ trivially. If that happens, one can always smooth them so that the canonical class of the resulting surface is ample.} We now show that there are no local-to-global obstructions to deform $W$. For that it is enough to show that $H^2(S, T_S(-\log(A+B+C+D)))=0$, following the well-known strategy from \cite{LP07}. We will use the following lemma (see \cite[Section 3.1]{RU19}). \begin{lemma} Let $f \colon S' \to S$ be the \'etale double cover induced by the relation $2K_S \sim 0$. Let $\Gamma_1,\ldots, \Gamma_r$ be curves forming a simple normal crossings divisor in $S$.
Then $$f_*\Big(T_{S'}\big(-\log \sum_i f^* \Gamma_i \big)\Big)=T_S\big(-\log \sum_i \Gamma_i \big)\oplus T_S\big(-\log \sum_i \Gamma_i \big)\big(-K_S \big),$$ and so $$H^0\Big(S',\Omega_{S'}^1 \big(\log \sum_i f^* \Gamma_i \big)\Big)=H^2\Big(S,T_S\big(-\log \sum_i \Gamma_i \big)\Big) \oplus H^0\Big(S,\Omega_S^1\big(\log \sum_i \Gamma_i \big)\Big).$$ In particular, if the curves $\{f^*(\Gamma_i)\}_{i=1}^r$ are numerically independent, then $$H^0\Big(S',\Omega_{S'}^1 \big(\log \sum_i f^* \Gamma_i \big)\Big)=0,$$ and so $H^2\big(S,T_S\big(-\log \sum_i \Gamma_i \big)\big)=0$. \label{trick} \end{lemma} By Lemma \ref{trick}, we only need to check that $f^*(A+B+C+D)$ is a divisor supported on numerically independent curves. For that we compute the corresponding intersection matrix and check that the determinant is not zero. Then we have no local-to-global obstructions, and we consider a ${\mathbb{Q}}$-Gorenstein smoothing of $W$. Since we have $E_2 \simeq {\mathbb{P}}^1$ connecting the ends of the Wahl chains and the indices of the singularities are coprime, we obtain that the general fiber has fundamental group isomorphic to ${\mathbb{Z}}/2$. One also has $K_W^2=1$ and $p_g=q=0$, and so we have ${\mathbb{Z}}/2$-Godeaux surfaces as general fibers. To obtain examples of types (B1) and (B4), we consider the minimal resolution of the partial ${\mathbb{Q}}$-Gorenstein smoothing of $[2,6,2,3]$ or $[5,2]$, respectively. To check that they are indeed blow-ups of Enriques surfaces, we run the explicit MMP in \cite{HTU17}. For each of the singularities we obtain a divisor in the KSBA compactification of the moduli space of ${\mathbb{Z}}/2$-Godeaux surfaces. Both of these examples are new in the literature. \textbf{(B2) and (B5)} From Section \ref{Enriques}, there exists an Enriques surface $S$ which has the configuration of smooth rational curves $A,B,C,D$ shown in Figure \ref{f3} part (ii).
\begin{figure} \caption{The surface $X$ for cases (B2) and (B5)} \label{f5} \end{figure} Let $\pi \colon X \to S$ be the blow-up of $S$ six times, so that the configuration $A,B,C,D$ is transformed into the configuration in Figure \ref{f5}, where the $E_i$ are the ordered exceptional curves. Hence $E_3^2=-3$, $E_1^2=E_4^2=-2$, and $E_2^2=E_5^2=E_6^2=-1$. We get Wahl chains $[E_1,A,B,E_3]=[2,3,5,3]$ and $[E_4,D,C]=[2,5,3]$. Let $\phi \colon X \to W$ be the contraction of both of them. The normal projective surface $W$ has two Wahl singularities $\frac{1}{8^2}(1,23)$ and $\frac{1}{5^2}(1,9)$. The canonical class $K_W$ is ample since $\phi^*(K_W)$ can be written ${\mathbb{Q}}$-effectively using only curves in Figure \ref{f5}, and so we check ampleness through the intersections $\phi^*(K_W).E_i >0$ for $i=2,5,6$. \footnote{Just as in the previous example, curves intersecting $K_W$ trivially do not matter.} We now show that there are no local-to-global obstructions to deform $W$. As done above, for that it is enough to show that $H^2(S, T_S(-\log(A+B+C+D)))=0$. We use again Lemma \ref{trick}, and so we only need to check that $f^*(A+B+C+D)$ is a divisor supported on numerically independent curves. For that we compute the corresponding intersection matrix and check that the determinant is not zero. Then we have no local-to-global obstructions, and we consider a ${\mathbb{Q}}$-Gorenstein smoothing of $W$. Since we have $E_5 \simeq {\mathbb{P}}^1$ connecting the ends of the Wahl chains and the indices of the singularities are coprime, we obtain that the general fiber has fundamental group isomorphic to ${\mathbb{Z}}/2$. One also has $K_W^2=1$ and $p_g=q=0$, and so we have ${\mathbb{Z}}/2$-Godeaux surfaces as general fibers. To obtain examples of types (B2) and (B5), we consider the minimal resolution of the partial ${\mathbb{Q}}$-Gorenstein smoothing of $[2,3,5,3]$ or $[2,5,3]$, respectively.
To check that they are indeed blow-ups of Enriques surfaces, we run the explicit MMP in \cite{HTU17}. For each of the singularities we obtain a divisor in the KSBA compactification of the moduli space of ${\mathbb{Z}}/2$-Godeaux surfaces. Both of these examples are new in the literature. \textbf{(B3) and (B7)} As in the previous examples, we first construct an Enriques surface $S$ which has the configuration of smooth rational curves $A,B,C,D$ shown in Figure \ref{f3} part (iii), see Section \ref{Enriques}. \begin{figure} \caption{The surface $X$ for cases (B3) and (B7)} \label{f6} \end{figure} Let $\pi \colon X \to S$ be the blow-up of $S$ seven times, so that the configuration $A,B,C,D$ is transformed into the configuration in Figure \ref{f6}, where the $E_i$ are the ordered exceptional curves. Hence $E_1^2=E_2^2=E_4^2=E_5^2=-2$, and $E_3^2=E_6^2=E_7^2=-1$. We get Wahl chains $[E_2,E_1,A]=[2,2,6]$ and $[E_5,E_4,B,C,D]=[2,2,6,2,4]$. Let $\phi \colon X \to W$ be the contraction of both of them. The normal projective surface $W$ has two Wahl singularities $\frac{1}{4^2}(1,3)$ and $\frac{1}{10^2}(1,29)$. The canonical class $K_W$ is ample since $\phi^*(K_W)$ can be written ${\mathbb{Q}}$-effectively using only curves in Figure \ref{f6}, and so we check ampleness through the intersections $\phi^*(K_W).E_i >0$ for $i=3,6,7$. All the remaining arguments are analogous to the ones given in the last two examples, except for the computation of obstructions. Lemma \ref{trick} is used in a different way. One can prove that on the K3 surface $S'$ we have $h^0\big(S',\Omega_{S'}^1 \big(\log \sum_i f^* \Gamma_i \big)\big)=1$, but at the same time $h^0\big(S,\Omega_{S}^1 \big(\log \sum_i \Gamma_i \big)\big)=1$, and so $H^2\big(S,T_S\big(-\log \sum_i \Gamma_i \big)\big)=0$. (It is the same argument as in case (B) of \cite[Section 3.1]{RU19}.)
For each of the singularities we obtain a divisor in the KSBA compactification of the moduli space of ${\mathbb{Z}}/2$-Godeaux surfaces. The one for $[6,2,2]$ is new in the literature; the other one is \cite[Section 3.1 (B)]{RU19}. \textbf{(B6)} There is an example of this case in \cite[Section 3.1 (C)]{RU19}. It also gives a boundary divisor in the KSBA compactification of the moduli space of ${\mathbb{Z}}/2$-Godeaux surfaces. This case, together with (B7), achieves the optimal upper bound for lengths of Wahl singularities in stable surfaces (see \cite[Theorem 3.1]{RU19}). \section{Coughlan's family}\label{CoughlanFamily} Stephen Coughlan \cite{CoughlanGodeaux} has given the construction of an irreducible family of simply connected surfaces $Y$ with invariants $p_g=1$, $q=0$, $K^2=2$ having a free action of ${\mathbb{Z}}/2$, thus producing a family of ${\mathbb{Z}}/2$-Godeaux surfaces $Y/({\mathbb{Z}}/2)$. Here we go over his construction and implement it in order to get explicit equations for the family. This task is computationally demanding and some workarounds are needed in order to succeed. In Section \ref{exthypk3} we give an overall summary of the method used in \cite{CoughlanGodeaux}. In Section \ref{unprojIV} we follow the method described in \cite{ReidUnprojIV} to obtain explicit equations for the surfaces $Y$. The corresponding computations are implemented with Magma \cite{BCP}, version V2.25-2, and are available as arXiv ancillary files. As a conclusion of this construction, we find that one of the $8$ parameters of Coughlan's family is redundant, so his model depends on $7$ parameters, see Section \ref{count}.
We show in Section \ref{16dim} that the \'etale coverings of Coughlan's surfaces belong to the 16-dimensional component $\mathscr M_E$ described in \cite[\S 5]{CataneseY}, thus their topological fundamental group is $\mathbb Z/2.$ \subsection{Extending hyperelliptic $K3$ surfaces}\label{exthypk3} The description of a canonical ring for $Y$ is based on a diagram\\\\ $\begin{CD} W'_{6,6} \subset {\mathbb{P}}(1,2^3,3^2) @<{\rm proj}<< W \subset {\mathbb{P}}(1,2^4,3^4,4) @<<< Y\subset {\mathbb{P}}(1,2^3,3^4,4)\\ @AAA @AAA @AAA\\ T'_{6,6} \subset {\mathbb{P}}(2^3,3^2) @<{\rm proj}<< T \subset {\mathbb{P}}(2^4,3^4,4)@<<< D \subset {\mathbb{P}}(2^3,3^4,4) \end{CD}$\\\\ \noindent where the `proj' arrows represent projections and the others are inclusions as hyperplane sections (of the correct degree), and the varieties involved are defined below. In \cite[p. $72$]{extending}, Reid attempted to describe the canonical ring of $Y$ by extending the ring of its canonical curve $D$. Due to computational limitations this attempt was unsuccessful. Instead of trying to compute the extension of such a high-codimensional ideal by a variable of degree $1$, Coughlan uses simpler extensions followed by projections. This makes the varieties manageable as we will describe here. The curve $D\subset Y$ is a hyperelliptic canonical curve, a section in $|K_Y|$, and its projective model is explicitly given in \cite[$\S 2$]{CoughlanGodeaux}. It has a simple description given by the $2\times 2$ minors of a $4\times 4$ matrix. In \cite[$\S 3$]{CoughlanGodeaux}, a hyperelliptic $K3$ surface $T$ containing $D$ is described, i.e. a $K3$ surface polarised by an ample line bundle $L$ such that the complete linear system $|L|$ contains the hyperelliptic curve $D$. In this case, $L$ determines a double cover $\pi\colon T\rightarrow Q\subset {\mathbb{P}}^3$, where $Q$ is a quadric surface, branched on a curve $C\in |-2K_Q|$.
Identifying $Q$ with ${\mathbb{P}}^1\times{\mathbb{P}}^1$, the branch locus $C$ is of bidegree $(4,4)$. Assuming that it splits as $C_1+C_2$, of bidegree $(1,3)$ and $(3,1)$ respectively, the surface $T$ has $10$ nodes. Blowing up one of the nodes $P\in Q$ of $C_1+C_2$, we get a double cover $\widetilde T\rightarrow \operatorname{Bl}_PQ$, with an exceptional divisor $E\cong {\mathbb{P}}^1$. Contracting the two $(-1)$-curves on $\operatorname{Bl}_PQ$ arising from the rulings of $Q$ we get a double cover $T'\rightarrow {\mathbb{P}}^2$ branched over two nodal cubics. The procedure above can be seen as a projection from the point $P$. \begin{proposition}\cite[Prop. $3.3$]{CoughlanGodeaux} The projection from the node $P\in T\subset {\mathbb{P}}(2^4,3^4,4)$ gives a complete intersection $$T'_{6,6}\subset {\mathbb{P}}(2^3,3^2)=\operatorname{Proj}({\mathbb{C}}[y_1,y_2,y_3,z_1,z_2])$$ of the type $$ \begin{array}{r} z_1^2 - y_1f^2 + (l_1f +l_2y_2+l_3g)y_2^2+l_4fgy_2 = 0\\ z_2^2 - y_3g^2 + (m_1f +m_2y_2+m_3g)y_2^2+m_4fgy_2 = 0, \end{array} $$ where $$f=y_1+\alpha y_3,\ \ \ g=\beta y_1+y_3$$ and $\alpha,\beta,l_i,m_j$ are constants. The image of the exceptional curve $E$ is the line $\{y_2=0\}$. \end{proposition} The involution in $T'_{6,6}$ is given by $$ y_1\mapsto y_3,\hspace{2mm} y_2\mapsto -y_2,\hspace{2mm} y_3\mapsto y_1,\hspace{2mm} z_1\mapsto z_2,\hspace{2mm} z_2\mapsto z_1, $$ so from here on we will only consider the parameters $\{\alpha,l_1,\dots,l_4\}$ as we set $\beta=\alpha, (m_1,m_2,m_3,m_4)=(l_3,-l_2,l_1,-l_4)$. The reverse procedure is an unprojection of type IV (see \cite{ReidUnprojIV}). To do so, one uses a parametrization $$\varphi\colon{\mathbb{P}}^1(u,v)\hookrightarrow \{y_2=0\}\cap T'_{6,6}$$ whose image is a genus $2$ curve which is a double cover of the image of the exceptional divisor $E$.
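That the involution above really swaps the two defining equations of $T'_{6,6}$ (with $\beta=\alpha$ and $(m_1,m_2,m_3,m_4)=(l_3,-l_2,l_1,-l_4)$) can be sanity-checked by evaluating the polynomials at random rational points; a small sketch of that check, independent of the paper's Magma code:

```python
from fractions import Fraction
import random

def fg(y1, y3, a):
    # f = y1 + alpha*y3, g = beta*y1 + y3 with beta = alpha
    return y1 + a * y3, a * y1 + y3

def eq1(y1, y2, y3, z1, z2, a, l1, l2, l3, l4):
    f, g = fg(y1, y3, a)
    return z1**2 - y1 * f**2 + (l1*f + l2*y2 + l3*g) * y2**2 + l4*f*g*y2

def eq2(y1, y2, y3, z1, z2, a, l1, l2, l3, l4):
    f, g = fg(y1, y3, a)
    m1, m2, m3, m4 = l3, -l2, l1, -l4      # as in the Proposition
    return z2**2 - y3 * g**2 + (m1*f + m2*y2 + m3*g) * y2**2 + m4*f*g*y2

random.seed(0)
for _ in range(50):
    y1, y2, y3, z1, z2, a, l1, l2, l3, l4 = (
        Fraction(random.randint(-9, 9)) for _ in range(10))
    args = (y1, y2, y3, z1, z2, a, l1, l2, l3, l4)
    # involution: y1 <-> y3, y2 -> -y2, z1 <-> z2
    swapped = (y3, -y2, y1, z2, z1, a, l1, l2, l3, l4)
    assert eq1(*swapped) == eq2(*args)
    assert eq2(*swapped) == eq1(*args)
print("the involution swaps the defining equations of T'_{6,6}")
```

Since both sides are polynomial identities in the coordinates and parameters, agreement at sufficiently many random points is convincing evidence (and the identity is easily verified by hand, as $f$ and $g$ are exchanged by the involution).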
The map is defined as $$ (u,v)\mapsto (u^2,0,v^2,u(u^2+\alpha v^2),v(\alpha u^2+v^2)).$$ Now we explicitly describe a $3$-fold $W'_{6,6}$, the projection of a $3$-fold $W$, by extending the map $\varphi$ to a map $\Phi\colon{\mathbb{P}}^2(x,u,v) \rightarrow {\mathbb{P}}(1,2^3,3^2)$ such that $\varphi(u,v)=\Phi(0,u,v)$. We start by describing this extension in full generality, as done in \cite[$\S 4$]{CoughlanGodeaux}, i.e. extending $\varphi$ to a map $\widetilde\Phi\colon{\mathbb{P}}^5(x_1,x_2,x_3,x_4,u,v) \rightarrow {\mathbb{P}}(1^4,2^3,3^2)$ as $$ \widetilde\Phi^*(x_i) = x_i,\, \widetilde\Phi^*(y_1) = u^2+2x_1v,\, \widetilde\Phi^*(y_2) = x_2u+x_3v,\, \widetilde\Phi^*(y_3) = v^2+2x_4u. $$ Then, to define $\Phi\colon{\mathbb{P}}^2(x,u,v) \rightarrow {\mathbb{P}}(1,2^3,3^2)$ keeping the involution, we get $x_1=x_4$ and $x_2=-x_3$. Setting $x=x_1$ and $x_2=lx$ for a constant $l$, the map $\Phi$ can be written as $$ \begin{array}{cccc} \Phi^*(x) = x, & \Phi^*(y_1) = u^2+2xv, & \Phi^*(y_2) = lx(u-v), & \Phi^*(y_3) = v^2+2xu. \end{array}$$ We can remove the parameter $l$ by using the change of variable $y_2\mapsto ly_2$, i.e. we can consider $$\Phi^*(y_2) = x(u-v).$$ \begin{remark} It is this change of variable, which was not used by Coughlan, that allows us to say that the family is at most 7-dimensional. (By computing the equations without that change of variable, and then taking $l=0$, we have checked that the particular case $l=0$ gives degenerate surfaces. Thus we can assume $l\ne 0$.) \end{remark} By \cite[Thm. $4.2$]{CoughlanGodeaux}, $$ \begin{array}{rcl} \Phi^*(z_1) & = & u(f+\alpha(1+\alpha)x^2)+(1-\alpha^2)xuv-\alpha(1-\alpha^2)x^2v, \\ \Phi^*(z_2) & = & v(g+\alpha(1+\alpha)x^2)+(1-\alpha^2)xuv-\alpha(1-\alpha^2)x^2u.
\end{array} $$ We then have that ${\mathbb{C}}[x,u,v]/{\mathbb{C}}[\Phi^*(x),\Phi^*(y_i),\Phi^*(z_j)]$, as a module over the ring ${\mathbb{C}}[\Phi^*(x),\Phi^*(y_i),\Phi^*(z_j)]$, is generated by $\{1,u,v,uv\}$. In the next section we extend the surface $T'_{6,6}$ to $W'_{6,6}$, a Fano $3$-fold of index $1$, and use unprojection methods to determine the $3$-fold $W$. \subsection{Type IV unprojection}\label{unprojIV} Let $R={\mathbb{C}}[x,y_1,y_2,y_3,z_1,z_2]$ be the homogeneous coordinate ring of ${\mathbb{P}}(1,2^3,3^2)$ and ${\mathbb{C}}(\Gamma)=R/I_\Gamma$, where $\Gamma$ is the image of $\Phi$. By construction, the normalisation of ${\mathbb{C}}(\Gamma)$ is the $R$-module ${\mathbb{C}}[x,u,v]$. Notice that ${\mathbb{C}}[x,u,v]$, as $R$-module, is generated by $\{1,u,v,uv\}$. Furthermore, using the embedding $\Phi^*$ to define the multiplication by elements of $R$, one can write the relations between these generators. For $y_2$ one has $$ \begin{array}{lcl} y_2\cdot 1 & = & xu -xv, \\ y_2\cdot u & = & xu^2-xuv= x(y_1-2xv)-xuv=xy_1-2x^2v-xuv, \\ y_2\cdot v & = & xuv-xv^2=xuv-x(y_3-2xu)=-xy_3 +2x^2u + xuv, \\ y_2\cdot uv & = & x(u^2v-uv^2)= x((y_1-2xv)v-u(y_3-2xu)) = \\ \ & = & x((y_1v-y_3u)-2x(v^2-u^2)) = \\ & = & x((y_1v-y_3u)-2x(y_3-2xu-y_1+2xv)) = \\ & = & 2x^2(y_1-y_3)-(xy_3-4x^3)u +(xy_1-4x^3)v.
\end{array} $$ Doing the same for $z_1$ and $z_2$, one can write the relations in matrix form as $\left(\begin{array}{cccc}1 & u & v & uv\end{array}\right)B=0$, where $B$ is a $4\times 12$ matrix with entries in $R$ that can be written as $\left(\begin{array}{c|c|c} B_{y_2} & B_{z_1} & B_{z_2} \end{array}\right)$, where $B_{y_2}, B_{z_1}, B_{z_2}$ are, respectively, \footnotesize $$ \begin{array}{l} \left(\begin{array}{cccc} -y_2 & xy_1 & -xy_3 & 2x^2(y_1 - y_3) \\ x & -y_2 & 2x^2 & -xy_3+4x^3 \\ -x & -2x^2 & -y_2 & xy_1-4x^3 \\ 0 & -x & x & -y_2 \end{array}\right), \\ \left(\begin{array}{cccc} z_1 & 2s_3xy_3-y_1(f+s_1) & s_2y_3+2s_3xy_1 & 2(f+s_1)xy_3+2s_2xy_1-s_3y_1y_3 \\ -(f+s_1) & z_1 - 4s_3x^2 & 2xs_2-y_3s_3 & 2s_3xy_1-4(f+s_1)x^2-s_2y_3 \\ -s_2 & 2(f+s_1)x-s_3y_1 & z_1-4s_3x^2 & 2s_3xy_3-4s_2x^2-(f+s_1)y_1 \\ -s_3 & -s_2 & -(f+s_1) & z_1-4s_3x^2 \end{array}\right), \\ \left(\begin{array}{cccc} z_2 & 2s_3xy_3-y_1s_2 & 2s_3xy_1-(g+s_1)y_3 & 2(g+s_1)xy_1-s_3y_1y_3+2s_2xy_3 \\ -s_2 & z_2-4s_3x^2 & 2(g+s_1)x-s_3y_3 & 2s_3y_1x-(g+s_1)y_3-4s_2x^2 \\ -(g+s_1) & 2s_2x-s_3y_1 & z_2-4s_3x^2 & 2s_3xy_3-s_2y_1-4(g+s_1)x^2 \\ -s_3 & -(g+s_1) & -s_2 & z_2 -4s_3x^2 \end{array}\right), \end{array} $$ \normalsize and $s_1=\alpha(1+\alpha)x^2, s_2=-\alpha(1-\alpha^2)x^2, s_3=(1-\alpha^2)x$. The matrix $B_{y_2}$ is similar to one appearing in \cite[$\S 4$]{CoughlanGodeaux} (the only difference being the last column). This form has the advantage that, as one checks by direct computation, the three matrices commute. One can then write the resolution of ${\mathbb{C}}[x,u,v]$ as the Koszul resolution of a complete intersection, i.e.
$${\mathbb{C}}[x,u,v]\leftarrow P_0\xleftarrow{p_1} P_1\xleftarrow{p_2} P_2\xleftarrow{p_3} P_3\leftarrow 0,$$ where $p_2, p_3$ are given by the matrices $$ \left(\begin{array}{c|c|c} 0 & B_{z_2} & -B_{z_1} \\ \hline - B_{z_2} & 0 & B_{y_2} \\ \hline B_{z_1} & -B_{y_2} & 0 \end{array}\right), \left(\begin{array}{c} B_{y_2} \\ B_{z_1} \\ B_{z_2} \\ \end{array}\right), $$ respectively, and $$ \begin{array}{l} P_0 = R\oplus R(-1)^{\oplus 2}\oplus R(-2), \\ P_1 = R(-2) \oplus R(-3)^{\oplus 2} \oplus R(-4) \oplus\left(R(-3) \oplus R(-4)^{\oplus 2} \oplus R(-5)\right)^{\oplus 2}, \\ P_2 = R(-6) \oplus R(-7)^{\oplus 2} \oplus R(-8) \oplus\left(R(-5) \oplus R(-6)^{\oplus 2} \oplus R(-7)\right)^{\oplus 2}, \\ P_3 = R(-8) \oplus R(-9)^{\oplus 2} \oplus R(-10). \end{array} $$ To determine the image of ${\mathbb{C}}(\Gamma)$ in ${\mathbb{P}}(1,2^3,3^2)$ one projects the graph of $\Phi$ contained in ${\mathbb{P}}(u,v,x)\times{\mathbb{P}}(1,2^3,3^2)$ into ${\mathbb{P}}(1,2^3,3^2)$. Algebraically this is just the elimination of the variables $\{u,v\}$ from the ideal $I_\Gamma$ generated by $$x-\Phi^*(x),\quad y_i-\Phi^*(y_i),\quad z_j-\Phi^*(z_j).$$ Computationally, such elimination turned out to be difficult to execute. We have succeeded only by using the software Singular with the negative degree reverse lexicographical monomial ordering (ds). In degree $6$ we obtain six generators, $ C_1, C_2, Q_1, Q_2, Q_3, Q_4$, which are the deformations of the polynomials $$\widetilde C_1=z_1^2 - y_1f^2,\,\widetilde C_2=z_2^2-y_3g^2,\,\widetilde Q_1=fy_2^2,\,\widetilde Q_2=y_2^3,\,\widetilde Q_3=gy_2^2,\,\widetilde Q_4=fgy_2.$$ Although the above ordering is a local one, one can check that those polynomials are still in the ideal $I_\Gamma$. We note that these six polynomials were also determined in \cite[Cor. $4.3$]{CoughlanGodeaux}. Our approach is the one described in \cite{ReidUnprojIV}.
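The undeformed generators can be checked by hand on the $x=0$ locus: there $y_2=0$, so $\widetilde Q_1,\ldots,\widetilde Q_4$ vanish trivially, and $\widetilde C_1,\widetilde C_2$ vanish on the image of $\varphi$ provided $f$ and $g$ restrict to $y_1+\alpha y_3$ and $\alpha y_1+y_3$ there (this restriction is our assumption, consistent with the quadric $(y_1+\alpha y_3)(\alpha y_1+y_3)$ appearing in Section \ref{16dim}). A minimal Python sketch, verifying the polynomial identities at integer points:

```python
from itertools import product

def check_phi_image(rng=range(-3, 4)):
    # On x = 0 the map phi is (u, v) -> (y1, y2, y3, z1, z2)
    #   = (u^2, 0, v^2, u(u^2 + a v^2), v(a u^2 + v^2)).
    # Assumed restrictions (not stated in this section): f = y1 + a*y3,
    # g = a*y1 + y3.  Check z1^2 = y1*f^2 and z2^2 = y3*g^2 exactly.
    for a, u, v in product(rng, repeat=3):
        y1, y3 = u * u, v * v
        z1 = u * (u * u + a * v * v)
        z2 = v * (a * u * u + v * v)
        f, g = y1 + a * y3, a * y1 + y3
        if z1 * z1 != y1 * f * f or z2 * z2 != y3 * g * g:
            return False
    return True

assert check_phi_image()
```

Since both sides are polynomials, the exact integer checks above are evidence for (though of course not a proof of) the identities holding identically.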
The $3$-fold $W'_{6,6}$ is given by the vanishing of the polynomials $$ \begin{array}{c} F:= C_1 +l_1 Q_1+l_2 Q_2+l_3 Q_3+l_4 Q_4 \\ G:= C_2 +l_3 Q_1-l_2 Q_2+l_1 Q_3-l_4 Q_4. \end{array} $$ To find the unprojection variables we need the maps between the $R$-resolutions of ${\mathbb{C}}(W'_{6,6})$ and ${\mathbb{C}}[u,v,x]$. $$ \xymatrix{ {\mathbb{C}}(W'_{6,6})\ar@{^{(}->}[d] & \ar[l] R \ar@{=}[d] & \ar[l] R(-6)^{\oplus 2} \ar[d] & \ar[l] R(-12) & \ar[l] 0 & \\ {\mathbb{C}}(\Gamma) \ar@{^{(}->}[d] & \ar[l] R \ar@{^{(}->}[d] & \ar[l] K_1 \ar[d] & \ar[l] \cdots & & \\ {\mathbb{C}}[u,v,x] & \ar[l] P_0 & \ar[l] P_1 & \ar[l] P_2 & \ar[l] P_3 & \ar[l] 0 } $$ where the free module $K_1$ contains $R(-6)^{\oplus 6}$ as a direct summand corresponding to the six generators of $I_{\Gamma}$, $\left( C_1, C_2, Q_1, Q_2, Q_3, Q_4\right)$. The down arrow $R(-6)^{\oplus 2}\rightarrow K_1$ is given by a matrix whose first rows are $$ \left(\begin{array}{cccccc} 1 & 0 & l_1 & l_2 & l_3 & l_4 \\ 0 & 1 & l_3 & -l_2 & l_1 & -l_4 \end{array}\right)^t. $$ The second down arrow, $K_1\rightarrow P_1$, expresses the generators of $K_1$ as linear combinations of the columns of $B$. Composing the maps we get the following diagram $$ \xymatrix{ {\mathbb{C}}(W'_{6,6})& \ar[l] R \ar@{^{(}->}[d] & \ar[l] R(-6)^{\oplus 2} \ar[d]^{N_1} & \ar[l] \ar[d]^{N_2} R(-12) & \ar[l] 0 & \\ {\mathbb{C}}[u,v,x] & \ar[l] P_0 & \ar[l] P_1 & \ar[l] P_2 & \ar[l]_{p_3} P_3 & \ar[l] 0 } $$ Heavy computations are needed in order to compute the maps $N_1$ and $N_2$; we give the details in an appendix, available as an arXiv ancillary file, which also contains the corresponding Magma computations. The main idea is as follows. Let $p$ be a matrix and $H$ be a vector, both with entries in a multivariate polynomial ring.
One can use the Magma function 'Solution' to compute $N$ such that $pN=H.$ But in the case we are interested in, this computation finishes only if we fix the parameter $\alpha.$ So we define a Magma function that does it several times, evaluating $\alpha$ at a list $pts$ of points, and then we use these data to recover the coefficients of the computed polynomials as polynomials in the parameter $\alpha$. Since we are fixing one of the parameters, it can happen that this value appears in some denominator of a rational coefficient of the solution. To overcome this, we have included an input polynomial 'correction': after each computation of $N$ for $\alpha=\alpha_0\in pts,$ the solution is multiplied by that polynomial evaluated at $\alpha_0$. Having computed the matrix $N_2,$ one can write the linear equations of the unprojection of $\Gamma$ in $W'_{6,6}$. The method is similar to the unprojection of type I. As $\Gamma$ is a codimension $1$ subscheme of $W'_{6,6}$, using the adjunction formula one has \begin{equation}\label{unprojses} 0\longleftarrow \omega_\Gamma\longleftarrow \mathcal{H}om_{{\mathcal{O}}_{W'_{6,6}}}\left(I_\Gamma,\omega_{W'_{6,6}}\right)\longleftarrow \omega_{W'_{6,6}}\longleftarrow 0. \end{equation} As ${\mathbb{C}}(\Gamma)\hookrightarrow{\mathbb{C}}[x,u,v]$ is an isomorphism outside the origin, the dualising module satisfies $\omega_\Gamma=\omega_{{\mathbb{C}}[x,u,v]}\cong {\mathbb{C}}[x,u,v]$. On the other hand, as a module over ${\mathcal{O}}_{W'_{6,6}}$, or over $R$, it needs $4$ generators, $\{1,u,v,uv\}$. The coordinate ring of the unprojected variety is obtained from ${\mathbb{C}}(W'_{6,6})$ by adjoining rational functions $\{y_4,z_3,z_4,t\}$ with poles along $\Gamma$. These can be seen as homomorphisms in $\mathcal{H}om_{{\mathcal{O}}_{W'_{6,6}}}\left(I_\Gamma,\omega_{W'_{6,6}}\right)$. The variable $y_4$ is the rational form that maps to a basis of $\omega_\Gamma(3)\cong {\mathcal{O}}_\Gamma={\mathcal{O}}_{{\mathbb{P}}^2}$.
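The specialise-and-interpolate step described above (solving $pN=H$ for fixed values of $\alpha$ and then recovering the entries as polynomials in $\alpha$) rests on standard polynomial interpolation over the rationals. A minimal Python sketch of that principle (the function name and the sample polynomial are ours, purely illustrative, not the actual Magma code):

```python
from fractions import Fraction

def interpolate_coeffs(pts):
    """Coefficients (constant term first) of the unique polynomial of
    degree < len(pts) through the points (x, y), via Lagrange bases."""
    n = len(pts)
    coeffs = [Fraction(0)] * n
    for i, (xi, yi) in enumerate(pts):
        basis = [Fraction(1)]               # running product of (X - xj)
        denom = Fraction(1)
        for j, (xj, _) in enumerate(pts):
            if j == i:
                continue
            basis = [Fraction(0)] + basis   # multiply basis by X ...
            for k in range(len(basis) - 1):
                basis[k] -= xj * basis[k + 1]   # ... and subtract xj*basis
            denom *= xi - xj
        for k in range(n):
            coeffs[k] += Fraction(yi) * basis[k] / denom
    return coeffs

# Recover p(a) = 3 - 2a + a^3 from its values at four sample points.
samples = [(a, 3 - 2 * a + a**3) for a in (1, 2, 3, 5)]
assert interpolate_coeffs(samples) == [3, -2, 0, 1]
```

In this language, the 'correction' polynomial mentioned above plays the role of clearing the denominators that an individual specialisation $\alpha=\alpha_0$ may introduce before the values are interpolated.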
Notice that $\omega_{W'_{6,6}}\cong{\mathcal{O}}_{W'_{6,6}}(-1)$; hence, using sequence (\ref{unprojses}), we get that $\deg(y_4)=2$. Denoting by $z_3, z_4, t$ the forms that map to $u, v, uv$, respectively, we get $\deg(z_i)=3$, $\deg(t)=4$. The linear relations between $y_4,z_3,z_4,t$ are given as in a type I unprojection. Each of them corresponds to a generator of $P_3$ and is mapped by $p_3$ to the image of $N_2$, i.e. $$ p_3\left(\begin{array}{c} t \\ z_4 \\ z_3 \\ y_4 \end{array}\right)=N_2. $$ We now have to determine the quadratic relations between the unprojection variables $\{z_3,z_4,t\}$. Notice that the extension ${\mathbb{C}}(W'_{6,6})\subset {\mathbb{C}}(W)$ can be seen as the normalisation of the ring ${\mathbb{C}}(W'_{6,6})[y_4]$; moreover, since in some sense $z_3, z_4, t$ correspond to $uy_4, vy_4, uvy_4$, respectively, there must be relations of the form $$ \begin{array}{rcl} z_3^2 -y_1y_4^2+2xy_4z_4,\, z_3z_4-y_4t,\, z_4^2-y_3y_4^2+2xy_4z_3 & \in & \langle \text{mon. of degree } 6\rangle, \\ z_3t-y_1y_4z_4+2xz_4^2,\, z_4t-y_3y_4z_3+2xz_3^2 & \in & \langle \text{mon. of degree } 7\rangle, \\ t^2-y_1y_3y_4^2+2x(y_1y_4z_3+y_3y_4z_4)-4x^2y_4t & \in & \langle \text{mon. of degree } 8\rangle, \\ \end{array} $$ where the monomials on the right-hand side are linear in the unprojection variables. We will find these relations as equations $f=0$ with $xf$ or $y_2f$ contained in the ideal generated by the linear equations $F_1,\ldots,F_{14}$. The detailed Magma computation is given in the appendix referred to above; the idea is as follows. Let $G_i,H_i$ be such that $F_i=xG_i+H_i.$ A polynomial $\sum c_iF_i$ is divisible by $x$ if $\sum c_iH_i=0.$ In order to find such coefficients $c_i,$ it suffices to compute the syzygy matrix of the sequence $E:=[E_1,\ldots,E_{14}]$ obtained by evaluating the equations $H_i$ at $x=0.$ But our computer cannot finish this computation, so we replace the parameters $\alpha,l_j$ appearing in the $E_i$ by some distinct prime numbers.
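That the displayed leading parts are the right ones can be seen directly from the correspondence $z_3\leftrightarrow uy_4$, $z_4\leftrightarrow vy_4$, $t\leftrightarrow uvy_4$: after substituting these, together with $y_1=u^2+2xv$ and $y_3=v^2+2xu$, each leading part vanishes identically. A short Python sketch checking this at integer points (variable names are ours):

```python
from itertools import product

def check_leading_forms(rng=range(-2, 3)):
    # Substitute z3 = u*y4, z4 = v*y4, t = u*v*y4 and y1 = u^2 + 2xv,
    # y3 = v^2 + 2xu: the leading parts of the six quadratic relations
    # between the unprojection variables should vanish identically.
    for x, u, v, y4 in product(rng, repeat=4):
        y1, y3 = u*u + 2*x*v, v*v + 2*x*u
        z3, z4, t = u*y4, v*y4, u*v*y4
        ok = (z3*z3 - y1*y4*y4 + 2*x*y4*z4 == 0
              and z3*z4 - y4*t == 0
              and z4*z4 - y3*y4*y4 + 2*x*y4*z3 == 0
              and z3*t - y1*y4*z4 + 2*x*z4*z4 == 0
              and z4*t - y3*y4*z3 + 2*x*z3*z3 == 0
              and t*t - y1*y3*y4*y4
                  + 2*x*(y1*y4*z3 + y3*y4*z4) - 4*x*x*y4*t == 0)
        if not ok:
            return False
    return True

assert check_leading_forms()
```

The actual relations of course pick up correction terms linear in the unprojection variables, which is exactly what the syzygy computation described above determines.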
In this way we can compute the syzygy matrix, obtaining a list of relations of the type $$c_1E_1+\cdots +c_{14}E_{14}=0.$$ Each $c_i$ is a polynomial in the variables $y_1,y_2,y_3,z_1,z_2,z_3,z_4,t,$ with coefficients in $\mathbb Q.$ We want to recover these coefficients as polynomials in $\alpha,l_j.$ To that end, we replace each coefficient by a new variable, obtaining a linear system (with many variables) that we can solve (in fact the computations turned out to be simple, except for one of the polynomials). \begin{remark} No new parameters were introduced in the unprojection method describing $W$, so its parameters are those describing $W'_{6,6}$, i.e. $\{\alpha,l_1,l_2,l_3,l_4\}$. \end{remark} \begin{remark} The bicanonical image of a general surface $Y$ is an octic in $\mathbb P^3$, see \cite{CataneseY}. We have computed the family of these octics for Coughlan's family of surfaces by eliminating the variables $z_1,\ldots,z_4,t$ from the equations. This octic polynomial is given in an ancillary arXiv file. \end{remark} \subsection{Description of $Y$ and parameter counting}\label{count} \begin{proposition}\label{parametercount} The Coughlan family of Godeaux surfaces with $\pi_1={\mathbb{Z}}/2$, where each $X$ is obtained as a ${\mathbb{Z}}/2$ quotient of a hyperelliptic surface $Y$ such that $K^2=2, p_g=1, q=0$, is determined by $7$ parameters.
\end{proposition} \begin{proof} We have a hyperelliptic tower $D\subset T\subset W$, and in Section \ref{unprojIV} we described the ring $$R(W,-K_W)={\mathbb{C}}[x,y_1,y_2,y_3,y_4,z_1,z_2,z_3,z_4,t]/I.$$ Furthermore, we have an involution $\sigma\colon W\rightarrow W$ whose action on ${\mathbb{P}}(1,2^4,3^4,4)$ has the following eigenspaces $$ \begin{array}{c|c|c} n & H^0(W,-nK_W)^+ & H^0(W,-nK_W)^- \\ \hline 1 & & x \\ 2 & y_1+y_3 & y_1-y_3, y_2, y_4 \\ 3 & z_1-z_2, z_3+z_4 & z_1+z_2,z_3-z_4 \\ 4 & & t \end{array} $$ A model for the canonical ring of the surfaces $Y$ is then given by $$R(Y,K_Y)=R(W,-K_W)/(H_2^-),$$ where $H_2^-$ is a hyperplane of degree $2$ that is $\sigma$-anti-invariant. By construction, $W$ depends on the parameters $\{\alpha, l_1,\dots,l_4\}$. As $\dim\left({\mathbb{P}}(H^0(W,-2K_W)^-)\right)=2$, we get a total of $7$ parameters. It follows that there is a redundant parameter in \cite[Theorem $1$]{CoughlanGodeaux}. \end{proof} \begin{remark} The counting could have been done in a different way. In the description of $T_{6,6}'$ the variables $y_1, y_3$ are fixed. On the other hand, $y_2$ is only defined as the variable such that $\{y_2=0\}$ is the line through the nodes of the two plane cubics, hence we are free to re-scale $y_2$ at will. With this in mind, one can remove one of the parameters $l_1,\ldots,l_4$ and see that $T'_{6,6}$ depends on $4$ parameters. Doing so, one gets one parameter for the extension, proving that for each $K3$ we have a set of $3$-folds parametrized by a single parameter.
\end{remark} \subsection{Families of universal covers of ${\mathbb{Z}}/2$-Godeaux surfaces} \label{16dim} Let $\mathscr M$ be the moduli space of simply connected surfaces with $p_g=1,$ $q=0$ and $K^2=2$, and let $\mathscr M_1$ be the subvariety corresponding to surfaces with bicanonical map of degree $4$ onto a smooth quadric surface in $\mathbb P^3.$ Catanese and Debarre \cite{CataneseY} have shown that there is a unique $16$-dimensional irreducible component $\mathscr M_E\subset \mathscr M$ which contains $\mathscr M_1.$ \begin{proposition} Coughlan's family of surfaces with $p_g=1,$ $q=0$ and $K^2=2$ is contained in $\mathscr M_E$ (thus the topological fundamental group of Coughlan's ${\mathbb{Z}}/2$-Godeaux surfaces is indeed $\mathbb Z/2$). \end{proposition} \begin{proof} The smooth $K3$ surfaces $T$ are flat deformations of the canonical curve $D\subset Y$ by a regular element $y_4$ of degree $2$. Furthermore, by construction these surfaces project into $\{(y_1+\alpha y_3)(\alpha y_1+y_3)-y_2y_4=0\}$. With the change of variable $y_4=x^2$, one gets a component of the flat extensions of $D$ by a variable of degree $1$, i.e. surfaces with the invariants $K^2=2, p_g=1, q=0$. These surfaces project into the smooth quadric $$\{(y_1+\alpha y_3)(\alpha y_1+y_3)-x^2y_2=0\}\subset \mathbb{P}(1,2,2,2).$$ To see this family one neglects the extension of the embedding $\varphi\colon \mathbb P(u,v)\rightarrow T'_{6,6}$ to $\widetilde\Phi\colon \mathbb P(x,u,v)\rightarrow W'_{6,6}$. Or, in other words, one sets all the parameters describing such an extension to be zero. To be more specific, recall that the extension of the embedding (for the non-involution case) is defined as $$ \widetilde\Phi^*(x_i) = x_i,\, \widetilde\Phi^*(y_1) = u^2+2x_1v,\, \widetilde\Phi^*(y_2) = x_2u+x_3v,\, \widetilde\Phi^*(y_3) = v^2+2x_4u.
$$ Setting each $x_i=a_ix$, where $a_i$ is a parameter, Coughlan's family is unirational and parametrized by $\{\alpha,\beta,l_i,m_i,a_i,c_i\}$, where the $c_i$ are the parameters defining the hyperplane $$\{y_4-c_0x^2-c_1y_1-c_2y_2-c_3y_3=0\}.$$ Then any member of Coughlan's family can be deformed to a surface in $\mathscr{M}_1$ by linearly mapping $a_i\mapsto 0$ and $(c_0,c_1,c_2,c_3)\mapsto (1,0,0,0).$ \end{proof} \begin{remark} There exists an 8-dimensional family $\mathcal M$ of ${\mathbb{Z}}/2$-Godeaux surfaces whose universal covers live in a 16-dimensional family, so that an 8-dimensional subvariety of this family parametrizes surfaces with a free ${\mathbb{Z}}/2$ action whose quotients give back $\mathcal M$. To see this, we consider the example $W$ of type B6 constructed in \cite[\S 3.1 (C)]{RU19}. It has one Wahl singularity $[2,2,3,5,4]$, and it has no local-to-global obstructions to deformations. ${\mathbb{Q}}$-Gorenstein smoothings of $W$ produce an 8-dimensional family of Godeaux surfaces with fundamental group ${\mathbb{Z}}/2$. To prove unobstructedness, we show (see Lemma \ref{trick}) that the \'etale double cover $W'$ of $W$ induced by the \'etale double cover of the Enriques surface also has no local-to-global obstructions to deformations, and so it produces a 16-dimensional family of simply connected surfaces of general type with $K^2=2$ and $p_g=1$. Since Pic$(W) \subset$ Pic$(X)$, where $X$ is a ${\mathbb{Q}}$-Gorenstein smoothing of $W$, we have a lifting of the \'etale cover on a subfamily of $Y$'s (${\mathbb{Q}}$-Gorenstein smoothings of $W'$). This is similar to the procedure used in \cite{PSU13} for a branched double cover. In this way, we would expect that this 16-dimensional family of simply connected surfaces is $\mathscr M_E$, where an 8-dimensional subfamily gives the moduli space of ${\mathbb{Z}}/2$-Godeaux surfaces. We leave it as an open question. (Of course, we could have started with another ${\mathbb{Q}}$-Gorenstein degeneration.)
\label{qgor} \end{remark} \section{Degenerations from Coughlan's family}\label{CoughlanDnm} The computations below were implemented with Magma \cite{BCP}, version V2.25-2, and are available as arXiv ancillary files. \subsection{$D_{2,4}$ elliptic surfaces}\label{D24} Denote by \(Y\subset\mathbb P=\mathbb P(1,2,2,2,3,3,3,3,4)\) a surface in Coughlan's family, whose general element is the universal covering of a \(\mathbb Z/2\)-Godeaux surface. Our computer experiments over finite fields indicate that there are values of the parameters ($\alpha,l_1,l_2,l_3,l_4,l_5,l_6$) for which the corresponding surface \(Y\hookrightarrow \mathbb P^7\) (for a general $Y$ this is the embedding by the $3$-canonical map) splits as the union of a degree 16 surface \(Y'\) with two planes, and the quotient of \(Y'\) by the ``Godeaux'' involution is a \(D_{2,4}\) elliptic surface. Those two planes correspond to two base points of the map \(\mathbb P\rightarrow\mathbb P^7\), and the coordinates of these points satisfy \(x=z_1=z_2=z_3=z_4=t=0\). Here we use this information to obtain a 6-dimensional family of \(D_{2,4}\) elliptic surfaces as quotients of a codimension 1 subset of Coughlan's family of surfaces. \\ \noindent{\bf Step 1.}\\ We load the \(20\) equations that define Coughlan's family, evaluated at \(x=z_1=z_2=z_3=z_4=t=0\). Then we impose \(y_1+y_3\neq 0\) and eliminate all variables except the parameters. We get one single relation $f=0$ on the parameters. Our goal is to show that a random point in this set of parameters corresponds to a surface such that its quotient by the ``Godeaux'' involution is a \(D_{2,4}\) elliptic surface with a \((-4)\)-curve.\\ \noindent{\bf Step 2.}\\ We take such a random surface and want to embed it in $\mathbb P^7$ with coordinates $(X_0,\ldots,X_7)=(x^3,xy_1,xy_2,xy_3,z_1,z_2,z_3,z_4)$. To achieve this we have to eliminate the variable $t$, but the computer cannot do it.
We have done it ``by hand'', obtaining a set of equations that is not complete: for general values of the parameters we get a surface plus the component $X_0=0.$ Removing this component we get a surface $Y'.$ To speed up the computations, we work over a finite field. \begin{remark} Working over a field of characteristic zero, we can compute the complement of the canonical curve of $Y'$ in its hyperplane $X_0=0$, obtaining the union of two disjoint rational curves. Moreover, these meet the canonical curve with multiplicity 2, hence are $(-4)$-curves, and are identified by the fixed-point-free ``Godeaux'' involution. Thus it follows from Theorem 2.1 that the quotient of $Y'$ by the involution is a $D_{2,n}$ elliptic surface. We will show that $n=4$ by computing, over a finite field, the elliptic fibres of multiplicities 2 and 4. This implies that $n=4$ also over the base field $\mathbb C.$ \end{remark} \noindent{\bf Step 3.}\\ In order to compute the singular subscheme of \(Y'\), we first need to reduce the number of its defining equations. We wrote an algorithm for that, which basically removes one equation at a time. Then we verify that \(Y'\) is smooth. \\ \noindent{\bf Step 4.}\\ We check that the ``Godeaux'' involution acts freely on \(Y'\). \\ \noindent{\bf Step 5.}\\ The hyperplane \(X_0=0\) cuts \(Y'\) in the union of the canonical divisor of \(Y'\) with two disjoint \((-4)\)-curves, which are identified by the ``Godeaux'' involution. \\ \noindent{\bf Step 6.}\\ Some Magma functions give the invariants of \(Y'\). \\ \noindent{\bf Step 7.}\\ Studying the equations of the pencil \(|2K_{Y'}|\), we find elliptic curves \(D_1\), \(D_2\) such that \(D_1\equiv 2D_2\). We see that these two curves are fixed by the (fixed-point-free) ``Godeaux'' involution. This shows that the quotient of \(Y'\) by the involution is a \(D_{2,4}\) elliptic surface with one \((-4)\)-curve.
\\ \subsection{$D_{2,3}$ elliptic surfaces}\label{D23} Denote by \(Y\) an element of Coughlan's family of surfaces, whose general member is the universal covering of a \(\mathbb Z/2\)-Godeaux surface. Our computer experiments over finite fields indicate that there are values of the parameters ($\alpha,l_1,l_2,l_3,l_4,l_5,l_6$) for which the surface \(Y\subset\mathbb P(1,2,2,2,3,3,3,3,4)\) contains a node, which is the only point that is fixed by the ``Godeaux'' involution. Moreover, the smooth minimal model of the quotient of \(Y\) by that involution is a \(D_{2,3}\) elliptic surface. The coordinates of that point satisfy \(y_2=y_3-y_1=z_2-z_1=z_4+z_3=0\). Here we use this information to obtain a 6-dimensional family of \(D_{2,3}\) elliptic surfaces as quotients of a codimension 1 subset of Coughlan's family of surfaces.\\ \noindent{\bf Step 1.}\\ We load the \(20\) equations that define Coughlan's family, and we impose \(y_2=0\), \(y_3=y_1\), \(z_2=z_1\), \(z_4=-z_3\). Then we eliminate all variables except the parameters. (To speed up computations, we fix the parameter $\alpha$.) We obtain one single relation, which contains the component $l_1+l_3=0$ (which does not depend on $\alpha$). \\ \noindent{\bf Step 2.}\\ We pick an arbitrary surface in the family given by \(l_1+l_3=0\). We aim to show that the resolution of its quotient by the ``Godeaux'' involution is indeed a \(D_{2,3}\) elliptic surface with a \((-4)\)-curve. \\ \noindent{\bf Step 3.}\\ We check that the subscheme of \(Y\) that satisfies \(y_2=y_3-y_1=z_2-z_1=z_4+z_3=0\) is a point, which is a node fixed by the involution. Thus it follows from Theorem 2.1 that the quotient of $Y$ by the involution is a $D_{2,n}$ elliptic surface. In order to speed up the computations, from now on we work over a finite field. We will show that $n=3$ by computing the $D_{2,3}$ surface and its double and triple fibres.
This implies that $n=3$ also over the base field $\mathbb C.$ \\ \noindent{\bf Step 4.}\\ We compute the linear system of the curves of degree 5 that contain the above fixed point and are preserved by the ``Godeaux'' involution. This system defines a map \(\phi:Y \to \mathbb P^{10}\), which resolves the singularity of \(Y\). We will show that it is of degree 2 onto a \(D_{2,3}\) elliptic surface $G$ with a \((-4)\)-curve that is the image of the node of \(Y\). \\ \noindent{\bf Step 5.}\\ The direct computation of \(\phi(Y)\) seems unattainable, so we compute the image of many points and then the linear systems \(L_2,\) \(L_3\) of hypersurfaces of degree 2, 3 through these points. These cut out a surface $G$ in $\mathbb P^{10}.$ \\ \noindent{\bf Step 6.}\\ In order to show that \(G\) is smooth, and to avoid the computation of all \(8\times 8\) minors of the matrices of partial derivatives, we take random such minors until they define an empty subscheme of \(G\). \\ \noindent{\bf Step 7.}\\ Some Magma functions give the invariants of $G$. \\ \noindent{\bf Step 8.}\\ The system $|2K_Y|$ is given by the pullback of $2K_G+C,$ where $C$ is the $(-4)$-curve corresponding to the node of $Y$. This means that there exists an invariant bicanonical curve through the node of $Y$. We show that its quotient in $G$ is $H:=F_3+C$, where $3F_3$ is an elliptic fibre and $C$ is a $(-4)$-curve. \\ \noindent{\bf Step 9.}\\ We find the double elliptic fibre \(2F_2\) by computing the unique element in $|F_3+K_G|$. \\ \noindent{\bf Step 10.}\\ Finally we check that $CF_3=4$ and that $C$ is the image of the node of $Y$.
\noindent Eduardo Dias \\ Departamento de Matem\' atica \\ Faculdade de Ci\^encias da Universidade do Porto \\ Rua do Campo Alegre 687 \\ 4169-007 Porto, Portugal \\ www.fc.up.pt, {\tt [email protected]}\\ \noindent Carlos Rito \\{\it Permanent address:} \\ Universidade de Tr\'as-os-Montes e Alto Douro, UTAD \\ Quinta de Prados \\ 5000-801 Vila Real, Portugal \\ www.utad.pt, {\tt [email protected]} \\{\it Temporary address:} \\ Departamento de Matem\' atica \\ Faculdade de Ci\^encias da Universidade do Porto \\ Rua do Campo Alegre 687 \\ 4169-007 Porto, Portugal \\ www.fc.up.pt, {\tt [email protected]}\\ \noindent Giancarlo Urz\'ua \\ Facultad de Matem\'aticas \\ Pontificia Universidad Cat\'olica de Chile \\ Campus San Joaqu\'in \\ Avenida Vicu\~na Mackenna \\ 4860, Santiago, Chile \\ {\tt [email protected]}\\ \end{document}
\begin{document} \maketitle \begin{abstract} Throughout this paper, we work over ${\mathbb C}$, and $n$ is an integer such that $n\geq 2$. For an Enriques surface $E$, let $E^{[n]}$ be the Hilbert scheme of $n$ points of $E$. By Oguiso and Schr\"oer $[\ref{bio:3},\,{\rm Theorem}\,3.1]$, $E^{[n]}$ has a Calabi-Yau manifold $X$ as its universal covering space, $\pi :X\rightarrow E^{[n]}$, of degree $2$. The purpose of this paper is to investigate the relationship between the small deformations of $E^{[n]}$ and those of $X$ $({\rm Theorem}\,1.1)$ and the natural automorphisms of $E^{[n]}$ $({\rm Theorem}\,1.2)$, and to count the number of isomorphism classes of Hilbert schemes of $n$ points of Enriques surfaces which have $X$ as the universal covering space when we fix one $X$ $({\rm Theorem}\,1.3)$. \end{abstract} \section{Introduction} Throughout this paper, we work over ${\mathbb C}$, and $n$ is an integer such that $n\geq 2$. For an Enriques surface $E$, let $E^{[n]}$ be the Hilbert scheme of $n$ points of $E$. By Oguiso and Schr\"oer $[\ref{bio:3},\,{\rm Theorem}\,3.1]$, $E^{[n]}$ has a Calabi-Yau manifold $X$ as its universal covering space, $\pi :X\rightarrow E^{[n]}$, of degree $2$. The purpose of this paper is to investigate the relationship between the small deformations of $E^{[n]}$ and those of $X$ $({\rm Theorem}\,1.1)$ and the natural automorphisms of $E^{[n]}$ $({\rm Theorem}\,1.2)$, and to count the number of isomorphism classes of Hilbert schemes of $n$ points of Enriques surfaces which have $X$ as the universal covering space when we fix one $X$ $({\rm Theorem}\,1.3)$. \\ Small deformations of a smooth compact surface $S$ induce small deformations of the Hilbert scheme of $n$ points of $S$ by taking the relative Hilbert scheme. Let $K$ be a $K3$ surface. By Beauville $[\ref{bio:6},\,{\rm pages}\,779$-$781]$, a very general small deformation of $K^{[n]}$ is not isomorphic to the Hilbert scheme of $n$ points of a $K3$ surface.
On the other hand, by Fantechi $[\ref{bio:1},\,{\rm Theorems}\,0.1\,{\rm and}\,0.3]$, every small deformation of $E^{[n]}$ is induced by a small deformation of $E$. Since $X$ is the universal covering of $E^{[n]}$, a small deformation of $E^{[n]}$ induces one of $X$. We consider the relationship between the small deformations of $E^{[n]}$ and those of $X$. Our first main result is the following: \begin{thm}\label{thm:1} Let $E$ be an Enriques surface, $E^{[n]}$ the Hilbert scheme of $n$ points of $E$, and $X$ the universal covering space of $E^{[n]}$. Then every small deformation of $X$ is induced by a small deformation of $E^{[n]}$. \end{thm} Compare this with the fact that a general small deformation of the universal covering $K3$ surface of $E$ is not induced by a deformation of $E$. \\ Next, we study the natural automorphisms of $E^{[n]}$. Any automorphism $f\in $ Aut$(S)$ induces an automorphism $f^{[n]}\in $ Aut$(S^{[n]})$. An automorphism $g\in $ Aut$(S^{[n]})$ is called natural if there is an automorphism $f\in $ Aut$(S)$ such that $g=f^{[n]}$. When $K$ is a $K3$ surface, the natural automorphisms of $K^{[n]}$ have been studied by Boissi\`ere and Sarti $[\ref{bio:5},\,{\rm Theorem}\,1]$. They used the global Torelli theorem for $K3$ surfaces: an effective Hodge isometry $\alpha $ is induced by a unique automorphism $\beta $ of the $K3$ surface such that $\alpha =\beta ^{\ast }$. Our second main result is the following theorem, similar to $[\ref{bio:5},\,{\rm Theorem}\,1]$, proved without a Torelli theorem for Enriques surfaces by using a result of Oguiso $[\ref{bio:4},\,{\rm Proposition}\,4.4]$. \begin{thm}\label{thm:2} Let $E$ be an Enriques surface, $D_{E}$ the exceptional divisor of the Hilbert-Chow morphism $\pi _{E}:E^{[n]}\rightarrow E^{(n)}$, and $n\geq 2$. An automorphism $f$ of $E^{[n]}$ is natural if and only if $f(D_{E})=D_{E}$, i.e.\,$f^{\ast }({\mathcal O}_{E^{[n]}}(D_{E}))={\mathcal O}_{E^{[n]}}(D_{E})$.
\end{thm} Finally, we compute the number of isomorphism classes of Hilbert schemes of $n$ points of Enriques surfaces which have $X$ as the universal covering space when we fix one $X$. \begin{thm}\label{thm:3} Let $E$ and $E'$ be two Enriques surfaces, $E^{[n]}$ and $E'^{[n]}$ the Hilbert schemes of $n$ points of $E$ and $E'$, $X$ and $X'$ the universal covering spaces of $E^{[n]}$ and $E'^{[n]}$, and $n\geq 3$. If $X\cong X'$, then $E^{[n]}\cong E'^{[n]}$, i.e. when we fix $X$, there is just one isomorphism class of Hilbert schemes of $n$ points of Enriques surfaces having $X$ as the universal covering space. \end{thm} Our proof is based on Theorem $1.2$ and the study of the action of the covering involutions on $H^{2}(X,{\mathbb C})$. This result contrasts sharply with the result of Ohashi $[\ref{bio:10},\,{\rm Theorem}\,0.1]$ that, for any nonnegative integer $l$, there exists a $K3$ surface with exactly $2^{l+10}$ distinct Enriques quotients. In particular, there does not exist a universal bound for the number of distinct Enriques quotients of a $K3$ surface. Here we call two Enriques quotients of a $K3$ surface distinct if they are not isomorphic to each other. \begin{mar}\label{dfn:1000} When $n=2$, we do not count the number of isomorphism classes of the Hilbert schemes of $n$ points of Enriques surfaces which have $X$ as the universal covering space when we fix one $X$. \end{mar} \section{Preliminaries} A $K3$ surface $K$ is a compact complex surface with $K_{K} \sim 0$ and $H^{1}(K, {\mathcal O}_{K})=0$. An Enriques surface $E$ is a compact complex surface with $H^{1}(E, {\mathcal O}_{E} )=0$, $H^{2}(E, {\mathcal O}_{E})=0$, $K_{E}\not\sim 0$, and $2K_{E}\sim 0$. The universal covering of an Enriques surface is a $K3$ surface.
A Calabi-Yau manifold $X$ is an $n$-dimensional compact K\"ahler manifold such that it is simply connected, there is no holomorphic $k$-form on $X$ for $ 0 < k < n$, and there is a nowhere vanishing holomorphic $n$-form on $X$. Let $S$ be a nonsingular surface, $S^{[n]}$ the Hilbert scheme of $n$ points of $S$, $\pi _{S}:S^{[n]}\rightarrow S^{(n)}$ the Hilbert-Chow morphism, and $p_{S}:S^{n}\rightarrow S^{(n)}$ the natural projection. We denote by $D_{S}$ the exceptional divisor of $\pi _{S}$. Note that $S^{[n]}$ is smooth with $\dim _{{\mathbb C}}S^{[n]}=2n$. Let $\Delta _{S}^{n}$ be the set of $n$-tuples $(x_{1},\ldots ,x_{n})\in S^{n}$ with at least two $x_{i}$'s equal, and $S^{n}_{\ast }$ the set of $n$-tuples $(x_{1},\ldots ,x_{n})\in S^{n}$ with at most two $x_{i}$'s equal. We put \[ S^{(n)}_{\ast }:=p_{S}(S^{n}_{\ast }), \] \[ \Delta _{S}^{(n)}:=p_{S}(\Delta _{S}^{n}), \] \[ S^{[n]}_{\ast }:=\pi _{S}^{-1}(S^{(n)}_{\ast }), \] \[ \Delta _{S\ast }^{n}:=\Delta _{S}^{n}\cap S^{n}_{\ast }, \] \[ \Delta _{S\ast }^{(n)}:=p_{S}(\Delta _{S\ast }^{n}),\ {\rm and}\] \[ F_{S}:=S^{[n]}\setminus S^{[n]}_{\ast }. \] Then we have ${\rm Blow}_{\Delta _{S\ast }^{n}}S^{n}_{\ast }/{\mathcal S}_{n}\simeq S^{[n]}_{\ast }$, and $F_{S}$ is an analytic closed subset of codimension $2$ in $S^{[n]}$ by Beauville $[\ref{bio:6},\,{\rm pages}\,767$-$768]$. Here ${\mathcal S}_{n}$ is the symmetric group of degree $n$, which acts naturally on $S^{n}$ by permuting the factors. Let $E$ be an Enriques surface, and $E^{[n]}$ the Hilbert scheme of $n$ points of $E$. By Oguiso and Schr\"oer $[\ref{bio:3},\,{\rm Theorem}\,3.1]$, $E^{[n]}$ has a Calabi-Yau manifold $X$ as the universal covering space $\pi :X\rightarrow E^{[n]}$ of degree $2$. Let $\mu :K\rightarrow E$ be the universal covering space of $E$, where $K$ is a $K3$ surface, and $S_{K}$ the pullback of $\Delta _{E}^{(n)}$ by the morphism \[ \mu ^{(n)}:K^{(n)}\ni [(x_{1},\ldots ,x_{n})]\mapsto [(\mu (x_{1}),\ldots ,\mu (x_{n}))]\in E^{(n)}.
\] Then we get a $2^{n}$-sheeted unramified covering space \[ \mu ^{(n)}|_{K^{(n)}\backslash S_{K} }:K^{(n)}\backslash S_{K} \rightarrow E^{(n)}\backslash \Delta _{E}^{(n)}.\] Furthermore, let $\Gamma _{K}$ be the pullback of $S_{K}$ by the natural projection $p_{K}:K^{n}\rightarrow K^{(n)}$. Since $\Gamma _{K}$ is an algebraic closed set of codimension $2$, \[ \mu ^{(n)}\circ p_{K}:K^{n}\backslash \Gamma _{K}\rightarrow E^{(n)}\backslash \Delta _{E}^{(n)} \] is the $2^{n}n!$-sheeted universal covering space. Since $E^{[n]}\backslash D_{E}=E^{(n)}\backslash \Delta _{E}^{(n)}$ where $D_{E}=\pi _{E}^{-1}(\Delta _{E}^{(n)})$, we regard the universal covering space $\mu ^{(n)}\circ p_{K}:K^{n}\backslash \Gamma _{K} \rightarrow E^{(n)}\backslash \Delta _{E}^{(n)}$ as the universal covering space of $E^{[n]}\setminus D_{E}$: \[ \mu ^{(n)}\circ p_{K}:K^{n}\backslash \Gamma _{K} \rightarrow E^{[n]}\backslash D_{E}. \] Since $\pi :X\setminus \pi ^{-1}(D_{E})\rightarrow E^{[n]}\setminus D_{E}$ is a covering space and $\mu ^{(n)}\circ p_{K}:K^{n}\setminus \Gamma _{K}\rightarrow E^{[n]}\setminus D_{E}$ is the universal covering space, there is a morphism \[ \omega :K^{n}\setminus \Gamma _{K}\rightarrow X\setminus \pi ^{-1}(D_{E}) \] such that $\omega :K^{n}\setminus \Gamma _{K}\rightarrow X\setminus \pi ^{-1}(D_{E})$ is the universal covering space and $\mu ^{(n)}\circ p_{K}=\pi \circ \omega $: $$ \xymatrix{ K^{n}\setminus \Gamma _{K} \ar[dr]_{\mu ^{(n)}\circ p_{K}} \ar[r]^{\omega } &X\setminus \pi ^{-1}(D_{E}) \ar[d]^{\pi } \\ &E^{[n]}\setminus D_{E}. } $$ We denote the covering transformation group of $\pi \circ \omega $ by \[ G:=\{g\in {\rm Aut}(K^{n}\setminus \Gamma _{K}):\pi \circ \omega \circ g=\pi \circ \omega \}. \] Then $G$ is of order $2^{n}\cdot n!$, since $\deg (\mu ^{(n)}\circ p_{K})=2^{n}\cdot n!$.
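The covering transformations can be modelled concretely as pairs (permutation of the factors, pattern of components where $\sigma$ is applied), matching the generators $\sigma_{i_1\ldots i_k}$ introduced next. The following Python sketch (our own finite-group model, run for $n=3$) confirms the order $2^{n}\cdot n!$ of $G$, as well as the order $2^{n-1}\cdot n!$ of the subgroup $H$ of Proposition \ref{dfn:4}:

```python
from math import factorial

# An element g is a tuple with g[i] = (src, flip), acting on x in K^n by
#   (g(x))_i = sigma^flip ( x_src ),
# where sigma is an abstract fixed-point-free involution.

def comp(g, h):
    # (g o h)(x)_i = sigma^{g[i].flip} ( h(x)_{g[i].src} )
    return tuple((h[s][0], (f + h[s][1]) % 2) for s, f in g)

def closure(gens, n):
    identity = tuple((i, 0) for i in range(n))
    group, frontier = {identity}, [identity]
    while frontier:                      # BFS closure under composition
        new = []
        for g in frontier:
            for s in gens:
                w = comp(g, s)
                if w not in group:
                    group.add(w)
                    new.append(w)
        frontier = new
    return group

def flip(n, ks):
    # sigma_{i_1...i_k}: apply sigma in the listed components.
    return tuple((i, 1 if i in ks else 0) for i in range(n))

def transposition(n, a, b):
    p = list(range(n))
    p[a], p[b] = p[b], p[a]
    return tuple((i, 0) for i in p)

n = 3
sym = [transposition(n, 0, 1), transposition(n, 1, 2)]   # generate S_n
H = closure(sym + [flip(n, {0, 1})], n)                  # S_n and sigma_ij
G = closure(sym + [flip(n, {0, 1}), flip(n, {0})], n)    # add a single flip
assert len(G) == 2**n * factorial(n)          # |G| = 2^n * n!  (= 48)
assert len(H) == 2**(n - 1) * factorial(n)    # |H| = 2^{n-1} * n!  (= 24)
```

In this model $H$ consists exactly of the elements applying $\sigma$ to an even number of components, which is the index-$2$ subgroup appearing below.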
Let $\sigma $ be the covering involution of $\mu :K\rightarrow E$, and for \[ 1\leq k\leq n,\ 1\leq i_{1}<\cdots <i_{k}\leq n \] we define automorphisms $\sigma _{i_{1}\ldots i_{k}}$ of $K^{n}$ as follows: for $x=(x_{i})_{i=1}^{n}\in K^{n}$, \[ \bigl(\sigma _{i_{1}\ldots i_{k}}(x)\bigr)_{j}= \begin{cases} \sigma (x_{j}) & j\in \{i_{1},\cdots ,i_{k}\} \\ x_{j} & j\not\in \{i_{1},\cdots ,i_{k}\}. \end{cases} \] Then ${\mathcal S}_{n}\subset G$, and $\{\sigma _{i_{1}\ldots i_{k}}\}_{1\leq k\leq n,\ 1\leq i_{1}<\ldots <i_{k}\leq n}\subset G$. Let $H$ be the subgroup of $G$ generated by ${\mathcal S}_{n}$ and $\{\sigma _{ij}\}_{1\leq i<j\leq n}.$ \begin{pro}\label{dfn:3} $G$ is generated by ${\mathcal S}_{n}$ and $\{\sigma _{i_{1}\ldots i_{k}}\}_{1\leq k\leq n,\ 1\leq i_{1}<\ldots <i_{k}\leq n}$. Moreover, every element of $G$ can be written uniquely in the form $s\circ t$ where $s\in {\mathcal S}_{n}$ and $t\in \{\sigma _{i_{1}\ldots i_{k}}\}_{1\leq k\leq n,\ 1\leq i_{1}<\ldots <i_{k}\leq n}$. \end{pro} \begin{proof} If $s\circ t=s'\circ t'$ for $s,s'\in {\mathcal S}_{n}$ and $t,t'\in \{\sigma _{i_{1}\ldots i_{k}}\}_{1\leq k\leq n,\ 1\leq i_{1}<\ldots <i_{k}\leq n}$, then comparing the induced permutations of the components we get $s=s'$, and hence $t=t'$. Since $|{\mathcal S}_{n}|=n!$ and $|\{\sigma _{i_{1}\ldots i_{k}}\}_{1\leq k\leq n,\ 1\leq i_{1}<\ldots <i_{k}\leq n}|=2^{n}$, the products $s\circ t$ yield $2^{n}\cdot n!=|G|$ distinct elements. Hence $G$ is generated by ${\mathcal S}_{n}$ and $\{\sigma _{i_{1}\ldots i_{k}}\}_{1\leq k\leq n,\ 1\leq i_{1}<\ldots <i_{k}\leq n}$. \end{proof} \begin{pro}\label{dfn:4} $|H|=2^{n-1}\cdot n!$. \end{pro} \begin{proof} $H$ is generated by ${\mathcal S}_{n}$ and $\{\sigma _{ij}\}_{1\leq i<j\leq n}$. Comparing the induced permutations of the components, we see that $\sigma _{i}\not\in H$ for all $i$; note that for arbitrary $j$, $(i,j)\circ \sigma _{i}\circ (i,j)=\sigma _{j}$. Since ${\mathcal S}_{n}\subset H$, by Proposition $\ref{dfn:3}$ we obtain $|G/H|=2$, i.e. $|H|=2^{n-1}\cdot n!$.
\end{proof} We put \[ K^{n}_{\ast \mu }:=({\mu ^{n}})^{-1}(E^{n}_{\ast }), \] where $\mu ^{n}:K^{n}\ni (x_{i})_{i=1}^{n}\mapsto (\mu (x_{i}))_{i=1}^{n}\in E^{n}$. Recall that $\mu :K\rightarrow E$ is the universal covering with covering involution $\sigma $. We further put \[ T_{\ast \mu \,ij}:=\{ (x_{l})_{l=1}^{n}\in K^{n}_{\ast \mu }: \sigma (x_{i})=x_{j} \}, \] \[ \Delta _{K\ast \mu \,ij}:=\{ (x_{l})_{l=1}^{n}\in K^{n}_{\ast \mu }: x_{i}=x_{j} \}, \] \[ T_{\ast \mu }:=\bigcup _{1\leq i<j\leq n}T_{\ast \mu \,ij},\ {\rm and}\] \[ \Delta _{K\ast \mu }:=\bigcup _{1\leq i<j\leq n}\Delta _{K\ast \mu \,ij}. \] By the definition of $K^{n}_{\ast \mu }$, $H$ acts on $K^{n}_{\ast \mu }$, and by the definitions of $\Delta _{K\ast \mu }$ and $T_{\ast \mu }$, we have $\Delta _{K\ast \mu }\cap T_{\ast \mu }=\emptyset $. \begin{lem}\label{dfn:20} Let $1\leq i<j\leq n$. If $t\in H$ has a fixed point on $\Delta _{K\ast \mu \,ij}$, then $t=(i,j)$ or $t={\rm id}_{K^{n}}$. \end{lem} \begin{proof} Let $t\in H$ and suppose that there is $\tilde{x}=(\tilde{x}_{i})_{i=1}^{n}\in \Delta _{K\ast \mu \,ij}$ such that $t(\tilde{x})=\tilde{x}$. By Proposition $\ref{dfn:3}$, there are $\sigma _{i_{1}\ldots i_{k}}\in \{\sigma _{i_{1}\ldots i_{k}}\}_{1\leq k\leq n,\ 1\leq i_{1}<\ldots <i_{k}\leq n}$ and $s\in {\mathcal S}_{n}$ such that \[ t=s\circ \sigma _{i_{1}\ldots i_{k}}. \] From the definition of $\Delta _{K\ast \mu \,ij}$, for $(x_{l})_{l=1}^{n}\in \Delta _{K\ast \mu \,ij}$, \[ \{ x_{1},\ldots ,x_{n} \}\cap \{\sigma (x_{1}),\ldots ,\sigma (x_{n}) \}=\emptyset \] (indeed, $\sigma $ has no fixed points and at most two of the $\mu (x_{l})$ coincide). Suppose $\sigma _{i_{1}\ldots i_{k}}\not={\rm id}_{K^{n}}$. Since $t(\tilde{x})=\tilde{x}$, we have \[ \{ \tilde{x}_{1},\ldots ,\tilde{x}_{n} \}\cap \{\sigma (\tilde{x}_{1}),\ldots ,\sigma (\tilde{x}_{n}) \}\not=\emptyset, \] a contradiction. Thus we have $t=s\in {\mathcal S}_{n}$.
Similarly, from the definition of $\Delta _{K\ast \mu \,ij}$, for $(x_{l})_{l=1}^{n}\in \Delta _{K\ast \mu \,ij}$, if $x_{p}=x_{q}$ $(1\leq p<q\leq n)$, then $p=i$ and $q=j$. Thus we have $t=(i,j)$ or $t={\rm id}_{K^{n}}$. \end{proof} \begin{lem}\label{dfn:21} Let $1\leq i<j\leq n$. If $t\in H$ has a fixed point on $T_{\ast \mu \,ij}$, then $t=\sigma _{i,j}\circ (i,j)$ or $t={\rm id}_{K^{n}}$. \end{lem} \begin{proof} Let $t\in H$ and suppose that there is $\tilde{x}=(\tilde{x}_{i})_{i=1}^{n}\in T_{\ast \mu \,ij}$ such that $t(\tilde{x})=\tilde{x}$. Since $(j,j+1)\circ \sigma _{i,j}\circ (j,j+1):\Delta _{K\ast \mu \,ij}\rightarrow T_{\ast \mu \,ij}$ is an isomorphism, and by Lemma $\ref{dfn:20}$, we have \[ (j,j+1)\circ \sigma _{i,j}\circ (j,j+1)\circ t\circ (j,j+1)\circ \sigma _{i,j}\circ (j,j+1)=(i,j)\ {\rm or}\ {\rm id}_{K^{n}}. \] If $(j,j+1)\circ \sigma _{i,j}\circ (j,j+1)\circ t\circ (j,j+1)\circ \sigma _{i,j}\circ (j,j+1)={\rm id}_{K^{n}}$, then $t={\rm id}_{K^{n}}$. If $(j,j+1)\circ \sigma _{i,j}\circ (j,j+1)\circ t\circ (j,j+1)\circ \sigma _{i,j}\circ (j,j+1)=(i,j)$, then \begin{equation*} \begin{split} t&=(j,j+1)\circ \sigma _{i,j}\circ (j,j+1)\circ (i,j)\circ (j,j+1)\circ \sigma _{i,j}\circ (j,j+1)\\ &=(j,j+1)\circ \sigma _{i,j}\circ (i,j+1)\circ \sigma _{i,j}\circ (j,j+1)\\ &=(j,j+1)\circ \sigma _{i,j+1}\circ (i,j+1)\circ (j,j+1)\\ &=\sigma _{i,j}\circ (i,j). \end{split} \end{equation*} Thus we have $t=\sigma _{i,j}\circ (i,j)$.
\end{proof} From Lemma $\ref{dfn:20}$ and Lemma $\ref{dfn:21}$, the universal covering map $\mu $ induces a local isomorphism \[ \mu ^{[n]}_{\ast }:{\rm Blow}_{\Delta _{K\ast \mu }\cup T_{\ast \mu }}K^{n}_{\ast \mu }/H\rightarrow {\rm Blow}_{\Delta _{E\ast }^{n}}E^{n}_{\ast }/{\mathcal S}_{n}=E^{[n]}_{\ast }. \] Here ${\rm Blow}_{A}B$ is the blow up of $B$ along $A\subset B$. \begin{lem}\label{dfn:22} For every $x\in E^{[n]}_{\ast }$, $|({\mu ^{[n]}_{\ast }})^{-1}(x)|=2$. \end{lem} \begin{proof} For $(x_{i})_{i=1}^{n}\in \Delta _{E\ast }^{n}$ with $x_{1}=x_{2}$, there are $n$ elements $y_{1},\ldots ,y_{n}$ of $K$ such that $y_{1}=y_{2}$ and $\mu (y_{i})=x_{i}$ for $1\leq i\leq n$. Then \[ ({\mu ^{n}})^{-1}((x_{i})_{i=1}^{n})\cap K^{n}_{\ast \mu }=\{y_{1},\sigma (y_{1})\}\times \cdots \times \{y_{n},\sigma (y_{n})\}. \] For $\sigma _{i_{1}\ldots i_{k}}\in G$, since $H$ is generated by ${\mathcal S}_{n}$ and $\{\sigma _{ij}\}_{1\leq i<j\leq n}$, we get $\sigma _{i_{1}\ldots i_{k}}\in H$ if $k$ is even, and $\sigma _{i_{1}\ldots i_{k}}\not\in H$ if $k$ is odd. For $(z_{i})_{i=1}^{n}\in ({\mu ^{n}})^{-1}((x_{i})_{i=1}^{n})\cap K^{n}_{\ast \mu }$, if the number of $i$ with $z_{i}=\sigma (y_{i})$ is even then \[ (z_{i})_{i=1}^{n}=(\sigma (y_{1}),\sigma (y_{2}),y_{3},\ldots ,y_{n})\ {\rm in}\ K^{n}_{\ast \mu }/H,\ {\rm and} \] if the number of $i$ with $z_{i}=\sigma (y_{i})$ is odd then \[ (z_{i})_{i=1}^{n}=(\sigma (y_{1}),y_{2},y_{3},\ldots ,y_{n})\ {\rm in}\ K^{n}_{\ast \mu }/H. \] Furthermore, since $\sigma _{i}\not\in H$ for $1\leq i\leq n$, \[ (\sigma (y_{1}),\sigma (y_{2}),y_{3},\ldots ,y_{n})\not=(\sigma (y_{1}),y_{2},y_{3},\ldots ,y_{n})\ {\rm in}\ K^{n}_{\ast \mu }/H. \] Thus for every $x\in E^{[n]}_{\ast }$, $|({\mu ^{[n]}_{\ast }})^{-1}(x)|=2$.
\end{proof} \begin{pro}\label{dfn:30} $\mu ^{[n]}_{\ast }:{\rm Blow}_{\Delta _{K\ast \mu }\cup T_{\ast \mu }}K^{n}_{\ast \mu }/H\rightarrow {\rm Blow}_{\Delta _{E\ast }^{n}}E^{n}_{\ast }/{\mathcal S}_{n}$ is the universal covering space, and $X\setminus \pi ^{-1}(F_{E})\simeq {\rm Blow}_{\Delta _{K\ast \mu }\cup T_{\ast \mu }}K^{n}_{\ast \mu }/H$. \end{pro} \begin{proof} Since $\mu ^{[n]}_{\ast }$ is a local isomorphism and its fibers have constant cardinality by Lemma $\ref{dfn:22}$, $\mu ^{[n]}_{\ast }$ is a covering map. Furthermore, $\pi :X\setminus \pi ^{-1}(F_{E})\rightarrow E^{[n]}_{\ast }$ is the universal covering space with $2$-point fibers, so $\mu ^{[n]}_{\ast }:{\rm Blow}_{\Delta _{K\ast \mu }\cup T_{\ast \mu }}K^{n}_{\ast \mu }/H\rightarrow {\rm Blow}_{\Delta _{E\ast }^{n}}E^{n}_{\ast }/{\mathcal S}_{n}$ is the universal covering space, and by the uniqueness of the universal covering space, we have $X\setminus \pi ^{-1}(F_{E})\simeq {\rm Blow}_{\Delta _{K\ast \mu }\cup T_{\ast \mu }}K^{n}_{\ast \mu }/H$. \end{proof} Recall that $H$ is generated by ${\mathcal S}_{n}$ and $\{\sigma _{ij}\}_{1\leq i<j\leq n}$. \begin{thm}\label{thm:10} Let $E$ be an Enriques surface, $E^{[n]}$ the Hilbert scheme of $n$ points of $E$, $\pi :X\rightarrow E^{[n]}$ the universal covering space of $E^{[n]}$, and $n\geq 2$. Then there is a resolution of singularities $\varphi _{X}:X\rightarrow K^{n}/H$ such that $\varphi _{X}^{-1}(\Gamma _{K}/H)=\pi ^{-1}(D_{E})$. \end{thm} \begin{proof} Let $E$ be an Enriques surface, $E^{[n]}$ the Hilbert scheme of $n$ points of $E$, $\pi :X\rightarrow E^{[n]}$ the universal covering space of $E^{[n]}$ where $X$ is a Calabi-Yau manifold, and $\rho $ the covering involution of $\pi $. From Proposition $\ref{dfn:30}$, we have $X\setminus \pi ^{-1}(F_{E})\simeq {\rm Blow}_{\Delta _{K\ast \mu }\cup T_{\ast \mu }}K^{n}_{\ast \mu }/H$.
Thus there is a morphism $f$ from $X\setminus \pi ^{-1}(F_{E})$ to $K^{n}/H$ satisfying the following commutative diagram: $$ \xymatrix{ E^{[n]}\setminus F_{E} \ar[r]^{\pi _{E}}&E^{(n)} \\ X\setminus \pi ^{-1}(F_{E}) \ar[u]^{\pi } \ar[r]^{f} &K^{n}/H \ar[u]^{p_{H}} } $$ where $\pi _{E}:E^{[n]}\rightarrow E^{(n)}$ is the Hilbert-Chow morphism, and $p_{H}:K^{n}/H\rightarrow E^{(n)}$ is the natural projection. For any ample line bundle ${\mathcal L}$ on $E^{(n)}$, since the natural projection $p_{H}:K^{n}/H\rightarrow E^{(n)}$ is finite, and $E^{(n)}$ and $K^{n}/H$ are projective, $p_{H}^{\ast }{\mathcal L}$ is ample. Since $\pi ^{-1}(F_{E})$ is an analytic closed subset of codimension $2$ in $X$, there is a line bundle ${\mathbb L}$ on $X$ such that $f^{\ast }(p_{H}^{\ast }{\mathcal L})={\mathbb L}\mid _{X\setminus \pi ^{-1}(F_{E})}$. From the above diagram, we have \[ {\mathbb L}=\pi ^{\ast }(\pi _{E}^{\ast }{\mathcal L}).\] Since ${\mathcal L}$ is ample on $E^{(n)}$, replacing ${\mathcal L}$ by a sufficiently high power we may assume that $\pi _{E}^{\ast }{\mathcal L}$ is a globally generated line bundle on $E^{[n]}$. Then $\pi ^{\ast }(\pi _{E}^{\ast }{\mathcal L})$ is also a globally generated line bundle on $X$. Since $p_{H}^{\ast }{\mathcal L}$ is ample on $K^{n}/H$ and ${\mathbb L}$ is globally generated, there is a holomorphic map $\varphi _{X}$ of $X$ to $K^{n}/H$ such that $\varphi _{X}\mid _{X\setminus \pi ^{-1}(F_{E})}=f$. Since $X$ is proper and the image of $\varphi _{X}$ contains a Zariski open subset, $\varphi _{X}:X\rightarrow K^{n}/H$ is surjective. Moreover, $f$ restricts to an isomorphism $X\setminus \pi ^{-1}(D_{E})\cong (K^{n}\setminus \Gamma _{K})/H$, so $\varphi _{X}$ is a resolution. \end{proof} \section{Proof of Theorem $1.1$} Let $S$ be a smooth projective surface and $P(n)$ the set of partitions of $n$. We write $\alpha \in P(n)$ as $\alpha =(\alpha _{1},\ldots ,\alpha _{n})$ with $1\cdot {\alpha _{1}}+\cdots+n\cdot {\alpha _{n}}=n$, and put $|\alpha |:= \sum _{i}\alpha _{i}$.
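For example, for $n=3$ the set $P(3)$ consists of the three partitions \[ (3,0,0),\quad (1,1,0),\quad (0,0,1), \] with $|\alpha |=3$, $2$ and $1$, respectively.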
We put $S^{\alpha }:=S^{\alpha _{1}}\times \cdots \times S^{\alpha _{n}}$, $S^{(\alpha )}:=S^{(\alpha _{1})}\times \cdots \times S^{(\alpha _{n})}$ and $S^{[n]}$ the Hilbert scheme of $n$ points of $S$. The cycle type $\alpha (g)$ of $g\in {\mathcal S}_{n}$ is the partition $(1^{\alpha _{1}(g)},\ldots ,n^{\alpha _{n}(g)})$ where $\alpha _{i}(g)$ is the number of cycles of length $i$ in the representation of $g$ as a product of disjoint cycles. As usual, we denote by $(n_{1},\ldots ,n_{r})$ the cycle defined by mapping $n_{i}$ to $n_{i+1}$ for $i < r$ and $n_{r}$ to $n_{1}$. By Steenbrink $[\ref{bio:21},\,{\rm page}\,526$-$530]$, the $S^{(\alpha )}\,(\alpha \in P(n))$ have the Hodge decomposition. By G\"ottsche and Soergel $[\ref{bio:2},\,{\rm Theorem}\,2]$, we have an isomorphism of Hodge structures: \[ H^{i+2n}(S^{[n]},{\mathbb C})(n) \simeq \bigoplus _{\alpha \in P(n)}H^{i+2|\alpha |}(S^{(\alpha )},{\mathbb C})(|\alpha |) \] where $H^{i+2|\alpha |}(S^{(\alpha )},{\mathbb C})(|\alpha |)$ is the Tate twist of $H^{i+2|\alpha |}(S^{(\alpha )},{\mathbb C})$, \\ and $H^{i+2n}(S^{[n]},{\mathbb C})(n)$ is the Tate twist of $H^{i+2n}(S^{[n]},{\mathbb C})$. Since $H^{i+2n}(S^{[n]},{\mathbb C})(n)$ is a Hodge structure of weight $i+2n-2n=i$, we have $H^{i+2n}(S^{[n]},{\mathbb C})(n)^{p,q}=H^{i+2n}(S^{[n]},{\mathbb C})^{p+n,q+n}$ for $p,q\in {\mathbb Z}$ with $p+q=i$. Similarly, since $H^{i+2|\alpha |}(S^{(\alpha )},{\mathbb C})(|\alpha |)$ is a Hodge structure of weight $i+2|\alpha |-2|\alpha |=i$, we have $H^{i+2|\alpha |}(S^{(\alpha )},{\mathbb C})(|\alpha |)^{p,q}=H^{i+2|\alpha |}(S^{(\alpha )},{\mathbb C})^{p+|\alpha |,q+|\alpha |}$ for $p,q\in {\mathbb Z}$ with $p+q=i$. Thus we have \begin{equation} \label{eq:1012} {\rm dim}_{{\mathbb C}}H^{2n}(S^{[n]},{\mathbb C})^{1,2n-1} = \sum _{\alpha \in P(n)}{\rm dim}_{{\mathbb C}}H^{2|\alpha |}(S^{(\alpha )},{\mathbb C})^{1-n+|\alpha |,n-1+|\alpha |}.
\end{equation} Let $E$ be an Enriques surface, $E^{[n]}$ the Hilbert scheme of $n$ points of $E$, and $\pi :X\rightarrow E^{[n]}$ the universal covering space of $E^{[n]}$ where $X$ is a Calabi-Yau manifold. \begin{pro}\label{dfn:9} ${\rm dim}_{{\mathbb C}}H^{1}(E^{[n]}, \Omega ^{2n-1}_{E^{[n]}})=0$. \end{pro} \begin{proof} From $[\ref{bio:21},\,{\rm page}\,526$-$530]$, $E^{(n)}$ has the Hodge decomposition, so we have \[ H^{2n}(E^{[n]},{\mathbb C})^{1,2n-1}\simeq H^{2n-1}(E^{[n]}, \Omega ^{1}_{E^{[n]}}),\ {\rm and} \] \[ H^{2n}(E^{(n)},{\mathbb C})^{1,2n-1}\simeq H^{2n-1}(E^{(n)}, \Omega ^{1}_{E^{(n)}}).\] Similarly, since each $E^{(\alpha )}$ $(\alpha \in P(n))$ has the Hodge decomposition, if $1-n+|\alpha |<0$ or $n-1+|\alpha |>2n$ for $\alpha \in P(n)$, then \[ H^{2|\alpha |}(E^{(\alpha )},{\mathbb C})(|\alpha |)^{1-n+|\alpha |,n-1+|\alpha |}=0. \] For $\alpha \in P(n)$ with $1-n+|\alpha |\geq 0$ and $n-1+|\alpha |\leq 2n$, we have $|\alpha |\in \{n-1,n,n+1\}$. By the definitions of $\alpha \in P(n)$ and $|\alpha |$, we obtain $\alpha \in \{(n,0,\ldots ,0),(n-2,1,0,\ldots ,0) \}$. Thus, by equation $(\ref{eq:1012})$, we have \[ {\rm dim}_{{\mathbb C}}H^{2n}(E^{[n]}, {\mathbb C})^{1,2n-1}={\rm dim}_{{\mathbb C}}H^{2n}(E^{(n)},{\mathbb C})^{1,2n-1}+{\rm dim}_{{\mathbb C}}H^{2n-2}(E^{(n-2)}\times E^{(2)},{\mathbb C})^{0,2n-2}. \] From the K\"unneth Theorem, we obtain \[ H^{2n-2}(E^{(n-2)}\times E^{(2)},{\mathbb C})^{0,2n-2}\simeq \bigoplus _{s+t=2n-2}H^{s}(E^{(n-2)},{\mathbb C})^{0,s}\otimes H^{t}(E^{(2)},{\mathbb C})^{0,t}. \] Since $H^{1}(E,{\mathbb C})^{0,1}=H^{2}(E,{\mathbb C})^{0,2}=0$, we have \[ H^{2n-2}(E^{(n-2)}\times E^{(2)},{\mathbb C})^{0,2n-2}=0. \] Let $\Lambda $ be the subset of ${\mathbb Z}_{\geq 0}^{2n}$ given by \[ \Lambda :=\{ (s_{1},\cdots ,s_{n},t_{1},\cdots ,t_{n})\in {\mathbb Z}_{\geq 0}^{2n}:\sum _{i=1}^{n}s_{i}=1,\,\sum _{j=1}^{n}t_{j}=2n-1 \}.
\] From the K\"unneth Theorem, we have $$ H^{2n}(E^{n},{\mathbb C})^{1,2n-1}\simeq \bigoplus _{(s_{1},\cdots ,s_{n},t_{1},\cdots ,t_{n})\in \Lambda }\biggl{(}\bigotimes _{i=1}^{n}H^{2}(E,{\mathbb C})^{s_{i},t_{i}}\biggl{)}. $$ Since $n\geq 2$, for each $(s_{1},\cdots ,s_{n},t_{1},\cdots ,t_{n})\in \Lambda $, there is a number $i\in \{1,\cdots ,n\}$ such that $s_{i}=0$. Thus since $H^{2}(E,{\mathbb C})^{0,2}=0$, we have $H^{2n-1}(E^{n},{\mathbb C})^{1,2n-1}=0$, so $H^{2n-1}(E^{(n)},{\mathbb C})^{1,2n-1}=0$. Hence $H^{1}(E^{[n]}, \Omega ^{2n-1}_{E^{[n]}})=0$. \end{proof} \begin{thm}\label{thm:1} Let $E$ be an Enriques surface, $E^{[n]}$ the Hilbert scheme of $n$ points of $E$, and $X$ the universal covering space of $E^{[n]}$. Then all small deformations of $X$ is induced by that of $E^{[n]}$. \end{thm} \begin{proof} Since each canonical bundle of $E$ and $E^{[n]}$ is torsion, and from Ran $[\ref{bio:20},\,{\rm Corollary}\,2]$, they have unobstructed deformations. The Kuranishi family of $E$ has a $10$-dimensional smooth base, so the Kuranishi family of $E^{[n]}$ has a $10$-dimensional smooth base by $[\ref{bio:1},\,{\rm Theorems}\,0.1\,{\rm and}\,0.3]$. Thus we have dim$_{{\mathbb C}}H^{1}(E^{{n}},T_{E^{[n]}})=10$. Since $K_{E^{[n]}}$ is not trivial and $2K_{E^{[n]}}$ is trivial, we have \[ T_{E^{[n]}}\simeq \Omega ^{2n-1}_{E^{[n]}}\otimes K_{E^{[n]}}. \] Therefore we have dim$_{{\mathbb C}}H^{1}(E^{{n}},\Omega ^{2n-1}_{E^{[n]}}\otimes K_{E^{[n]}})=10$. Since $K_{X}$ is trivial, then we have $T_{X}\simeq \Omega ^{2n-1}_{X}$. 
Since $\pi :X\rightarrow E^{[n]}$ is a covering map and \[ X\simeq {\mathcal Spec}\,{\mathcal O}_{E^{[n]}}\oplus {\mathcal O}_{E^{[n]}}(K_{E^{[n]}}) \] by $[\ref{bio:3},\,{\rm Theorem}\,3.1]$, we have \begin{equation*} \begin{split} H^{k}(X,\Omega ^{2n-1}_{X})&\simeq H^{k}(E^{[n]},\Omega ^{2n-1}_{E^{[n]}}\oplus (\Omega ^{2n-1}_{E^{[n]}}\otimes K_{E^{[n]}}))\\ &\simeq H^{k}(E^{[n]},\Omega ^{2n-1}_{E^{[n]}})\oplus H^{k}(E^{[n]},\Omega ^{2n-1}_{E^{[n]}}\otimes K_{E^{[n]}}). \end{split} \end{equation*} Combining this with Proposition $\ref{dfn:9}$, we obtain \[ {\rm dim}_{\mathbb C}H^{1}(X,\Omega ^{2n-1}_{X})={\rm dim}_{\mathbb C}H^{1}(E^{[n]},\Omega ^{2n-1}_{E^{[n]}}\otimes K_{E^{[n]}}). \] Since $\pi :X\rightarrow E^{[n]}$ is a covering map, $\pi ^{\ast }:H^{1}(E^{[n]},T_{E^{[n]}})\hookrightarrow H^{1}(X,T_{X})$ is injective. Thus we have ${\rm dim}_{\mathbb C}H^{1}(X,T_{X})=10$. Let $p:{\mathcal Y}\rightarrow U$ be the universal family of $E^{[n]}$ and $f:{\mathcal X}\rightarrow {\mathcal Y}$ the universal covering space. Then $q:{\mathcal X}\rightarrow U$, where $q:=p\circ f$, is a flat family of $X$. We have a commutative diagram: $$ \xymatrix{ T_{U,0} \ar[dr]_{\rho _{q}} \ar[r]^{\rho_{p}} &{\rm H}^{1}({\mathcal Y}_{0},T_{{\mathcal Y}_{0}}) \ar[d]^{\tau } \ar@{=}[r] &H^{1}(E^{[n]},T_{E^{[n]}}) \ar[d]^{\pi ^{\ast }} \\ &{\rm H}^{1}({\mathcal X}_{0},T_{{\mathcal X}_{0}}) \ar@{=}[r] &H^{1}(X,T_{X}). } $$ Since $\pi ^{\ast }$ gives $H^{1}(E^{[n]},T_{E^{[n]}})\simeq H^{1}(X,T_{X})$, the vertical arrow $\tau $ is an isomorphism. Since, moreover, \[ {\rm dim}_{{\mathbb C}}H^{1}({\mathcal X}_{u},T_{{\mathcal X}_{u}})={\rm dim}_{{\mathbb C}}H^{1}({\mathcal X}_{u},\Omega ^{2n-1}_{{\mathcal X}_{u}}) \] is constant for $u$ in some neighborhood of $0\in U$, it follows that $q:{\mathcal X}\rightarrow U$ is the complete family of ${\mathcal X}_{0}=X$, and therefore $q:{\mathcal X}\rightarrow U$ is the versal family of ${\mathcal X}_{0}=X$.
Thus every fiber of any small deformation of $X$ is the universal covering space of the Hilbert scheme of $n$ points of some Enriques surface. \end{proof} \section{Proof of Theorem $1.2$} Let $E$ be an Enriques surface, $E^{[n]}$ the Hilbert scheme of $n$ points of $E$, and $\pi :X\rightarrow E^{[n]}$ the universal covering space of $E^{[n]}$ where $X$ is a Calabi-Yau manifold. First, we show that, for an automorphism $f$ of $E^{[n]}$, $f(D_{E})=D_{E}\Leftrightarrow f^{\ast }({\mathcal O}_{E^{[n]}}(D_{E}))={\mathcal O}_{E^{[n]}}(D_{E})$. Then we prove Theorem $1.2$. \begin{pro}\label{dfn:80} ${\rm dim}_{{\mathbb C}}H^{0}(E^{[n]},{\mathcal O}_{E^{[n]}}(D_{E}))=1$. \end{pro} \begin{proof} Since $D_{E}$ is effective, we obtain ${\rm dim}_{{\mathbb C}}H^{0}(E^{[n]},{\mathcal O}_{E^{[n]}}(D_{E}))\geq 1$. Since the codimension of $\Delta _{E}^{(n)}$ is $2$ in $E^{(n)}$, and $E^{(n)}$ is normal, we have \[ H^{0}(E^{(n)},{\mathcal O}_{E^{(n)}})=\Gamma (E^{(n)}\setminus \Delta _{E}^{(n)},{\mathcal O}_{E^{(n)}}).\] Since $\pi _{E}|_{E^{[n]}\backslash D_{E}}:E^{[n]}\backslash D_{E}\simeq E^{(n)}\backslash \Delta _{E}^{(n)}$, and ${\mathcal O}_{E^{[n]}}(D_{E})\simeq {\mathcal O}_{E^{[n]}}$ on $E^{[n]}\setminus D_{E}$, we have \[ (\pi _{E})_{\ast }({\mathcal O}_{E^{[n]}}(D_{E}))\simeq {\mathcal O}_{E^{(n)}}\ {\rm on}\ E^{(n)}\setminus \Delta _{E}^{(n)}. \] Hence \[ \Gamma (E^{[n]}\setminus D_{E},{\mathcal O}_{E^{[n]}}(D_{E}))\simeq H^{0}(E^{(n)},{\mathcal O}_{E^{(n)}}). \] Since $E^{(n)}$ is compact, we have $H^{0}(E^{(n)},{\mathcal O}_{E^{(n)}})\simeq {\mathbb C}$. Therefore we have \[ {\rm dim}_{\mathbb C}\Gamma (E^{[n]}\setminus D_{E},{\mathcal O}_{E^{[n]}}(D_{E}))=1. \] Thus we obtain ${\rm dim}_{{\mathbb C}}H^{0}(E^{[n]},{\mathcal O}_{E^{[n]}}(D_{E}))=1$.
\end{proof} \begin{mar}\label{dfn:1001} Then, by Proposition $\ref{dfn:80}$, for an automorphism $\varphi \in {\rm Aut}(E^{[n]})$, the condition $\varphi ^{\ast }({\mathcal O}_{E^{[n]}}(D_{E}))={\mathcal O}_{E^{[n]}}(D_{E})$ is equivalent to the condition $\varphi (D_{E})=D_{E}$, since $D_{E}$ is the unique effective divisor in the linear system of ${\mathcal O}_{E^{[n]}}(D_{E})$. \end{mar} Recall that $\pi \circ \omega :K^{n}\setminus \Gamma _{K}\rightarrow E^{[n]}\setminus D_{E}$ is the universal covering space. \begin{thm}\label{thm:2} Let $E$ be an Enriques surface, and $D_{E}$ the exceptional divisor of the Hilbert-Chow morphism $\pi _{E}:E^{[n]}\rightarrow E^{(n)}$. An automorphism $f$ of $E^{[n]}$ is natural if and only if $f(D_{E})=D_{E}$, i.e. $f^{\ast }({\mathcal O}_{E^{[n]}}(D_{E}))={\mathcal O}_{E^{[n]}}(D_{E})$. \end{thm} \begin{proof} Let $f$ be an automorphism of $E^{[n]}$ with $f(D_{E})=D_{E}$. Then $f$ induces an automorphism of $E^{[n]}\backslash D_{E}$. By the uniqueness of the universal covering space, there is an automorphism $g$ of $K^{n}\backslash \Gamma _{K}$ such that $\pi \circ \omega \circ g=f\circ \pi \circ \omega $: $$ \xymatrix{ K^{n}\setminus \Gamma _{K} \ar[d]^{\pi \circ \omega } \ar[r]^{g} &K^{n}\setminus \Gamma _{K} \ar[d]^{\pi \circ \omega } \\ E^{[n]}\setminus D_{E} \ar[r]^{f} &E^{[n]}\setminus D_{E}. } $$ Since $\Gamma _{K}$ is an analytic set of codimension $2$ and $K^{n}$ is projective, $g$ can be extended to a birational automorphism of $K^{n}$. By Oguiso $[\ref{bio:4},\,{\rm Proposition}\,4.1]$, $g$ is an automorphism of $K^{n}$, and there are automorphisms $g_{1},\ldots,g_{n}\in {\rm Aut}(K)$ and $s\in {\mathcal S}_{n}$ such that $g=s\circ (g_{1}\times \cdots\times g_{n})$. Since ${\mathcal S}_{n}\subset G$, we can assume that $g=g_{1}\times \cdots\times g_{n}$. \\ Recall that we denote the covering transformation group of $\pi \circ \omega $ by: \[ G:=\{g\in {\rm Aut}(K^{n}\setminus \Gamma _{K}):\pi \circ \omega \circ g=\pi \circ \omega \}.
\] By Proposition $\ref{dfn:12}$ below, we have $g_{i}=g_{1}$ or $g_{i}=g_{1}\circ \sigma $ for $1\leq i\leq n$, and $g_{1}\circ \sigma =\sigma \circ g_{1}$. Since $g_{1}\circ \sigma =\sigma \circ g_{1}$, the automorphism $g_{1}$ descends to an automorphism of $E=K/\langle \sigma \rangle $; we denote by $g_{1}^{[n]}$ the induced automorphism of $E^{[n]}$. Then ${g_{1}}^{[n]}|_{E^{[n]}\backslash D_{E}}=f|_{E^{[n]}\backslash D_{E}}$. Thus $g_{1}^{[n]}=f$, so $f$ is natural. The other implication is obvious. \end{proof} \begin{pro}\label{dfn:12} In the proof of Theorem $\ref{thm:2}$, we have $g_{i}=g_{1}$ or $g_{i}=g_{1}\circ \sigma $ for each $1\leq i\leq n$. Moreover, $g_{1}\circ \sigma =\sigma \circ g_{1}$. \end{pro} \begin{proof} We show the first assertion by contradiction. Without loss of generality, we may assume that $g_{2}\not=g_{1}$ and $g_{2}\not=g_{1}\circ \sigma $. Let $h_{i}:=g_{i}^{-1}$ for $i=1$, $2$. We define two morphisms $H_{1,2}$ and $H_{1,2,\sigma }$ from $K$ to $K^{2}$ as follows: \[ H_{1,2}:K\ni x\mapsto (h_{1}(x),h_{2}(x))\in K^{2}, \] \[ H_{1,2,\sigma }:K\ni x\mapsto (h_{1}(x),\sigma \circ h_{2}(x))\in K^{2}. \] Let $S_{\sigma }:=\{ (x,y)\in K^{2} :\,y=\sigma (x)\}$. Since $h_{1}\not=h_{2}$ and $h_{1}\not=\sigma \circ h_{2}$, the set $H_{1,2}^{-1}(\Delta _{K}^{2})\cup H_{1,2,\sigma }^{-1}(S_{\sigma })$ does not coincide with $K$. Thus there is $x'\in K$ such that $H_{1,2}(x')\not\in \Delta _{K}^{2}$ and $H_{1,2,\sigma }(x')\not\in S_{\sigma }$. For this $x'\in K$, we put $x_{i}:=h_{i}(x')\in K$ for $i=1$, $2$. Then there are elements $x_{3},\ldots ,x_{n}\in K$ such that $(x_{1},\ldots ,x_{n})\in K^{n}\setminus \Gamma _{K}$. We have $g((x_{1},\ldots ,x_{n}))\not\in K^{n}\backslash \Gamma _{K}$ by the choice of $x_{1}$ and $x_{2}$. This is a contradiction, because $g$ is an automorphism of $K^{n}\backslash \Gamma _{K}$. Thus we have $g_{i}=g_{1}$ or $g_{i}=g_{1}\circ \sigma $ for $1\leq i\leq n$. We show the second assertion.
Since the covering transformation group of $\pi \circ \omega $ is $G$, the liftings of $f$ are given by \[ \{ g\circ u: u\in G \}=\{ u\circ g: u\in G \}. \] Thus for $\sigma _{1}\circ g$, there are elements $s\in {\mathcal S}_{n}$ and $\sigma _{i_{1}\cdots i_{k}}\in \{\sigma _{i_{1}\ldots i_{k}}\}_{1\leq k\leq n,\ 1\leq i_{1}<\ldots <i_{k}\leq n}$ such that $\sigma _{1}\circ g=g\circ \sigma _{i_{1}\cdots i_{k}}\circ s$. Considering the first component of $\sigma _{1}\circ g$ and using $[\ref{bio:9},\,{\rm Lemma}\,1.2]$, we have $s={\rm id}$ and $\sigma _{i_{1}\cdots i_{k}}=\sigma _{1}$. Therefore $g\circ \sigma _{1}\circ g^{-1}=\sigma _{1}$, and hence $\sigma \circ g_{1}=g_{1}\circ \sigma $. \end{proof} \section{Proof of Theorem $1.3$} Let $E$ be an Enriques surface, $E^{[n]}$ the Hilbert scheme of $n$ points of $E$, and $\pi :X\rightarrow E^{[n]}$ the universal covering space of $E^{[n]}$ where $X$ is a Calabi-Yau manifold. First, for $n=2$, we compute the Hodge numbers of $X$. Next, for $n\geq 3$, we show that the covering involution of $\pi :X\rightarrow E^{[n]}$ acts on $H^{2}(X,{\mathbb C})$ as the identity, and, by using Theorem $1.2$, we classify the automorphisms of $X$ of order $2$ that act trivially on $H^{2}(X,{\mathbb C})$. Finally, we prove Theorem $1.3$. \\ We suppose $n=2$. Since $E^{2}_{\ast }=E^{2}$, we have $E^{[2]}=E^{[2]}_{\ast }={\rm Blow}_{\Delta _{E}^{2}}E^{2}/{\mathcal S}_{2}$. Let $\pi :X\rightarrow E^{[2]}$ be the universal covering space of $E^{[2]}$. Since $K^{2}_{\ast \mu }=K^{2}$, by Proposition $\ref{dfn:30}$ we have \[ X\simeq {\rm Blow}_{\Delta _{K}^{2}\cup T}K^{2}/H, \] where $T:=\{(x,y)\in K^{2}:y=\sigma (x)\}$. Let $\eta:{\rm Blow}_{\Delta _{K}^{2}\cup T}K^{2}/H\rightarrow K^{2}/H$ be the natural map. We put \[ D_{\Delta }:=\eta ^{-1}(\Delta _{K}^{2}/H)\ {\rm and}\] \[ D_{T}:=\eta ^{-1}(T/H).
\] For the two inclusions \[ j_{D_\Delta }:D_{\Delta }\hookrightarrow {\rm Blow}_{\Delta _{K}^{2}\cup T}K^{2}/H,\ {\rm and} \] \[ j_{D_{T}}:D_{T}\hookrightarrow {\rm Blow}_{\Delta _{K}^{2}\cup T}K^{2}/H, \] let $j_{\ast D_{\Delta }}$ be the Gysin morphism \[ j_{\ast D_{\Delta }}:H^{p}(D_{\Delta },{\mathbb C})\rightarrow H^{p+2}({\rm Blow}_{\Delta _{K}^{2}\cup T}K^{2}/H,{\mathbb C}), \] $j_{\ast D_{T}}$ the Gysin morphism \[ j_{\ast D_{T}}:H^{p}(D_{T},{\mathbb C})\rightarrow H^{p+2}({\rm Blow}_{\Delta _{K}^{2}\cup T}K^{2}/H,{\mathbb C}),\ {\rm and} \] \[ \psi:=\eta ^{\ast }+j_{\ast D_{\Delta }}\circ \eta |_{D_{\Delta }}^{\ast }+j_{\ast D_{T}}\circ \eta |_{D_{T}}^{\ast } \] the morphism from $H^{p}(K^{2}/H,{\mathbb C})\oplus H^{p-2}(\Delta _{K}^{2}/H,{\mathbb C})\oplus H^{p-2}(T/H,{\mathbb C})$ to\\ $H^{p}({\rm Blow}_{\Delta _{K}^{2}\cup T}K^{2}/H,{\mathbb C}).$ From $[\ref{bio:30},\,{\rm Theorem}\,7.31],$ the map $\psi $ gives an isomorphism of Hodge structures on $H^{k}({\rm Blow}_{\Delta _{K}^{2}\cup T}K^{2}/H,{\mathbb C})$: \begin{equation} \label{eq:20} H^{k}(K^{2}/H,{\mathbb C})\oplus H^{k-2}(\Delta _{K}^{2}/H,{\mathbb C})\oplus H^{k-2}(T/H,{\mathbb C})\simeq H^{k}({\rm Blow}_{\Delta _{K}^{2}\cup T}K^{2}/H,{\mathbb C}). \end{equation} For an algebraic variety $Y$, we put $h^{p,q}(Y):={\rm dim}_{{\mathbb C}}H^{p+q}(Y,{\mathbb C})^{p,q}$. \begin{thm}\label{dfn:91} For the universal covering space $\pi :X\rightarrow E^{[2]}$, we have $h^{0,0}(X)=1$, $h^{1,0}(X)=0$, $h^{2,0}(X)=0$, $h^{1,1}(X)=12$, $h^{3,0}(X)=0$, $h^{2,1}(X)=0$, $h^{4,0}(X)=1$, $h^{3,1}(X)=10$, and $h^{2,2}(X)=131$. \end{thm} \begin{proof} Let $\sigma $ be the covering involution of $\mu :K\rightarrow E$. Put \[ H^{k}_{\pm }(K,{\mathbb C})^{p,q}:=\{ \alpha \in H^{k}(K,{\mathbb C})^{p,q}:\sigma ^{\ast }(\alpha )=\pm \alpha \}\,{\rm and} \] \[ h^{p,q}_{\pm }(K):={\rm dim}_{\mathbb C}H^{p+q}_{\pm }(K,{\mathbb C})^{p,q}.
\] Then for an Enriques surface $E\simeq K/\langle\sigma \rangle$, we have \[ H^{k}(E,{\mathbb C})^{p,q}\simeq H^{k}_{+}(K,{\mathbb C})^{p,q}. \] Since $K$ is a $K3$ surface, we have \[ h^{0,0}(K)=1,\ h^{1,0}(K)=0,\ h^{2,0}(K)=1,\ {\rm and}\ h^{1,1}(K)=20, \] \[ h_{+}^{0,0}(K)=1,\,h_{+}^{1,0}(K)=0,\,h_{+}^{2,0}(K)=0,\,{\rm and}\,h_{+}^{1,1}(K)=10,\,{\rm and} \] \[ h^{0,0}_{-}(K)=0,\,h^{1,0}_{-}(K)=0,\,h^{2,0}_{-}(K)=1,\,{\rm and}\,h^{1,1}_{-}(K)=10. \] Since $n=2$, we obtain $\Delta _{K}^{2}/H\simeq E$ and $T/H\simeq E$. Thus we have \[ h^{0,0}(\Delta _{K}^{2}/H)=1,\,h^{1,0}(\Delta _{K}^{2}/H)=0,\,h^{2,0}(\Delta _{K}^{2}/H)=0,\,{\rm and}\,h^{1,1}(\Delta _{K}^{2}/H)=10, \] and we have \[ h^{0,0}(T/H)=1,\,h^{1,0}(T/H)=0,\,h^{2,0}(T/H)=0,\,{\rm and}\,h^{1,1}(T/H)=10. \] By the definition of $H$, we obtain $H=\langle{\mathcal S}_{2},\sigma _{1,2}\rangle$. From the K\"unneth Theorem, we have \[ H^{p+q}(K^{2},{\mathbb C})^{p,q}\simeq \bigoplus _{s+u=p,t+v=q}H^{s+t}(K,{\mathbb C})^{s,t}\otimes H^{u+v}(K,{\mathbb C})^{u,v},\,{\rm and} \] \[ H^{k}(K^{2}/H,{\mathbb C})^{p,q}\simeq \{ \alpha \in H^{k}(K^{2},{\mathbb C})^{p,q}:s^{\ast }(\alpha )=\alpha \,{\rm for}\,s\in {\mathcal S}_{2}\,{\rm and}\,\sigma ^{\ast }_{1,2}(\alpha )=\alpha \}. \] Thus we obtain \[ h^{0,0}(K^{2}/H)=1,\,h^{1,0}(K^{2}/H)=0,\,h^{2,0}(K^{2}/H)=0,\,h^{1,1}(K^{2}/H)=10,\] \[ h^{3,0}(K^{2}/H)=0,\,h^{2,1}(K^{2}/H)=0,\,h^{4,0}(K^{2}/H)=1, \] \[ h^{3,1}(K^{2}/H)=10,\, {\rm and}\,h^{2,2}(K^{2}/H)=111. \] In particular, fixing a basis $\beta $ of $H^{2}(K,{\mathbb C})^{2,0}$ and a basis $\{\gamma _{i}\}_{i=1}^{10}$ of $H^{2}_{-}(K,{\mathbb C})^{1,1}$, we have \begin{equation} \label{eq:30} H^{4}(K^{2}/H,{\mathbb C})^{3,1}\simeq \bigoplus _{i=1}^{10}{\mathbb C}(\beta \otimes \gamma _{i}+\gamma _{i}\otimes \beta ).
\end{equation} By the above equation $(\ref{eq:20})$, we have \[ h^{0,0}({\rm Blow}_{\Delta _{K}^{2}\cup T}K^{2}/H)=1,\,h^{1,0}({\rm Blow}_{\Delta _{K}^{2}\cup T}K^{2}/H)=0,\] \[ h^{2,0}({\rm Blow}_{\Delta _{K}^{2}\cup T}K^{2}/H)=0,\,h^{1,1}({\rm Blow}_{\Delta _{K}^{2}\cup T}K^{2}/H)=12,\] \[ h^{3,0}({\rm Blow}_{\Delta _{K}^{2}\cup T}K^{2}/H)=0,\,h^{2,1}({\rm Blow}_{\Delta _{K}^{2}\cup T}K^{2}/H)=0, \] \[ h^{4,0}({\rm Blow}_{\Delta _{K}^{2}\cup T}K^{2}/H)=1,\,h^{3,1}({\rm Blow}_{\Delta _{K}^{2}\cup T}K^{2}/H)=10,\,{\rm and}\] \[ h^{2,2}({\rm Blow}_{\Delta _{K}^{2}\cup T}K^{2}/H)=131. \] Thus we obtain $h^{0,0}(X)=1$, $h^{1,0}(X)=0$, $h^{2,0}(X)=0$, $h^{1,1}(X)=12$, $h^{3,0}(X)=0$, $h^{2,1}(X)=0$, $h^{4,0}(X)=1$, $h^{3,1}(X)=10$, and $h^{2,2}(X)=131$. \end{proof} From here on, for $n\geq 3$, we show that the covering involution of $\pi :X\rightarrow E^{[n]}$ acts on $H^{2}(X,{\mathbb C})$ as the identity, classify, by using Theorem $1.2$, the automorphisms of $X$ of order $2$ acting trivially on $H^{2}(X,{\mathbb C})$, and prove Theorem $1.3$. \begin{lem}\label{dfn:90} Let $X$ be a smooth complex manifold, $Z\subset X$ a closed submanifold of codimension $2$, $\tau :X_{Z}\rightarrow X$ the blow up of $X$ along $Z$, $E=\tau ^{-1}(Z)$ the exceptional divisor, and $h$ the first Chern class of the line bundle ${\mathcal O}_{X_{Z}}(E)$.\\ Then $\tau ^{\ast }:H^{2}(X,{\mathbb C})\rightarrow H^{2}(X_{Z},{\mathbb C})$ is injective, and \[ H^{2}(X_{Z},{\mathbb C})\simeq H^{2}(X,{\mathbb C})\oplus {\mathbb C}h. \] \end{lem} \begin{proof} Let $U:=X\setminus Z$ be an open set of $X$. Then $U$ is isomorphic to the open set $U'=X_{Z}\setminus E$ of $X_{Z}$.
As $\tau $ gives a morphism between the pair $(X_{Z},U')$ and the pair $(X,U)$, we have a morphism $\tau ^{\ast }$ between the long exact sequences of relative cohomology of these pairs: $$ \xymatrix{ H^{k}(X,U,{\mathbb C}) \ar[r] \ar[d]^{\tau _{X,U}^{\ast }} &H^{k}(X,{\mathbb C}) \ar[r] \ar[d]^{\tau ^{\ast }_{X}} &H^{k}(U,{\mathbb C}) \ar[d]^{\tau _{U}^{\ast }} \ar[r] &H^{k+1}(X,U,{\mathbb C}) \ar[d]^{\tau _{X,U}^{\ast }} \\ H^{k}(X_{Z},U',{\mathbb C}) \ar[r] &H^{k}(X_{Z},{\mathbb C}) \ar[r] &H^{k}(U',{\mathbb C}) \ar[r] &H^{k+1}(X_{Z},U',{\mathbb C}). } $$ By the Thom isomorphism, the tubular neighborhood theorem, and excision, we have \[ H^{q}(Z,{\mathbb C})\simeq H^{q+4}(X,U,{\mathbb C}),\ {\rm and}\] \[ H^{q}(E,{\mathbb C})\simeq H^{q+2}(X_{Z},U',{\mathbb C}). \] In particular, we have \[ H^{l}(X,U,{\mathbb C})=0\ {\rm for}\ l=0,1,2,3,\ {\rm and} \] \[ H^{j}(X_{Z},U',{\mathbb C})=0\ {\rm for}\ j=0,1. \] Thus we have $$ \xymatrix{ 0 \ar[r] \ar[d]^{\tau _{X,U}^{\ast }} &H^{1}(X,{\mathbb C}) \ar[r] \ar[d]^{\tau ^{\ast }_{X}} &H^{1}(U,{\mathbb C}) \ar[d]^{\tau _{U}^{\ast }} \ar[r] &0 \ar[d]^{\tau _{X,U}^{\ast }} \\ 0 \ar[r] &H^{1}(X_{Z},{\mathbb C}) \ar[r] &H^{1}(U',{\mathbb C}) \ar[r] &H^{0}(E,{\mathbb C}), } $$ and $$ \xymatrix{ 0 \ar[r] \ar[d]^{\tau _{X,U}^{\ast }} &H^{2}(X,{\mathbb C}) \ar[r] \ar[d]^{\tau ^{\ast }_{X}} &H^{2}(U,{\mathbb C}) \ar[d]^{\tau _{U}^{\ast }} \ar[r] &0 \ar[d]^{\tau _{X,U}^{\ast }} \\ H^{0}(E,{\mathbb C}) \ar[r] &H^{2}(X_{Z},{\mathbb C}) \ar[r] &H^{2}(U',{\mathbb C}) \ar[r] &H^{3}(X_{Z},U',{\mathbb C}). } $$ Since $\tau \mid _{U'}:U'\xrightarrow{\sim}U$, we have isomorphisms $\tau _{U}^{\ast }:H^{k}(U,{\mathbb C})\simeq H^{k}(U',{\mathbb C})$.
Thus we have \[ {\rm dim}_{{\mathbb C}}H^{2}(X_{Z},{\mathbb C})={\rm dim}_{{\mathbb C}}H^{2}(X,{\mathbb C})+1,\ {\rm and} \] \[ \tau ^{\ast }:H^{2}(X,{\mathbb C})\rightarrow H^{2}(X_{Z},{\mathbb C})\ {\rm is\ injective},\] and therefore we obtain \[ H^{2}(X_{Z},{\mathbb C})\simeq H^{2}(X,{\mathbb C})\oplus {\mathbb C}h. \] \end{proof} \begin{pro}\label{pro:70} Suppose $n\geq 3$. For the universal covering space $\pi :X\rightarrow E^{[n]}$, ${\rm dim}_{{\mathbb C}}H^{2}(X,{\mathbb C})=11$. \end{pro} \begin{proof} Since the codimension of $\pi ^{-1}(F_{E})$ is $2$, $H^{2}(X,{\mathbb C})\cong H^{2}(X\setminus \pi ^{-1}(F_{E}),{\mathbb C})$. By Proposition $2.6$, $X\setminus \pi ^{-1}(F_{E})\simeq {\rm Blow}_{\Delta _{K\ast \mu }\cup T_{\ast \mu }}K^{n}_{\ast \mu }/H$. \\ Let $\tau :{\rm Blow}_{\Delta _{K\ast \mu }\cup T_{\ast \mu }}K^{n}_{\ast \mu }\rightarrow K^{n}_{\ast \mu }$ be the blow up of $K^{n}_{\ast \mu }$ along $\Delta _{K\ast \mu }\cup T_{\ast \mu }$, let \\ \[ h_{ij}\ {\rm be\ the\ first\ Chern\ class\ of\ the\ line\ bundle}\ {\mathcal O}_{{\rm Blow}_{\Delta _{K\ast \mu }\cup T_{\ast \mu }}K^{n}_{\ast \mu }}(\tau ^{-1}(\Delta _{K\ast \mu \,ij})), \] and \[ k_{ij}\ {\rm the\ first\ Chern\ class\ of\ the\ line\ bundle}\ {\mathcal O}_{{\rm Blow}_{\Delta _{K\ast \mu }\cup T_{\ast \mu }}K^{n}_{\ast \mu }}(\tau ^{-1}(T_{\ast \mu \,ij})).\] By Lemma $\ref{dfn:90}$, we have \[ H^{2}({\rm Blow}_{\Delta _{K\ast \mu }\cup T_{\ast \mu }}K^{n}_{\ast \mu },{\mathbb C})\cong H^{2}(K^{n},{\mathbb C})\oplus \biggl{(}\bigoplus _{1\leq i<j\leq n}{\mathbb C}h_{ij}\biggr{)} \oplus \biggl{(}\bigoplus _{1\leq i<j\leq n}{\mathbb C}k_{ij}\biggr{)}. \] Since $n\geq 3$, there is an isomorphism \[ (j,j+1)\circ \sigma _{ij}\circ (j,j+1):\Delta _{K\ast \mu \,ij}\xrightarrow{\sim}T_{\ast \mu \,ij}. \] Thus we have dim$_{{\mathbb C}}H^{2}({\rm Blow}_{\Delta _{K\ast \mu }\cup T_{\ast \mu }}K^{n}_{\ast \mu }/H,{\mathbb C})=11$, i.e.\,${\rm dim}_{{\mathbb C}}H^{2}(X,{\mathbb C})=11$.
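Let us outline the count behind the value $11$ (a sketch, under the identifications above). Since $H^{1}(K,{\mathbb C})=0$, the K\"unneth theorem gives $H^{2}(K^{n},{\mathbb C})\simeq \bigoplus _{i=1}^{n}H^{2}(K,{\mathbb C})$, and the $H$-invariant part is the diagonal copy of the $\sigma $-invariant classes, so that \[ {\rm dim}_{{\mathbb C}}H^{2}(K^{n},{\mathbb C})^{H}={\rm dim}_{{\mathbb C}}H^{2}_{+}(K,{\mathbb C})=10. \] Moreover, the classes $h_{ij}$ and $k_{ij}$ form a single $H$-orbit: the permutations in $H$ identify all pairs $(i,j)$, and the isomorphism above exchanges the $\Delta $-type and $T$-type centers, so the exceptional divisors contribute exactly one further class, whence $10+1=11$.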
\end{proof} \begin{pro}\label{dfn:13} ${\rm dim}_{{\mathbb C}}H^{0}(X,{\mathcal O}_{X}(\pi ^{\ast }(D_{E})))=1$. \end{pro} \begin{proof} Since $\pi $ is finite, we obtain ${\rm dim}_{{\mathbb C}}H^{0}(X,{\mathcal O}_{X}(\pi ^{\ast }(D_{E})))={\rm dim}_{{\mathbb C}}H^{0}(E^{[n]},\pi _{\ast }{\mathcal O}_{X}(\pi ^{\ast }(D_{E})))$. From the projection formula and $X\simeq {\mathcal S}pec\,({\mathcal O}_{E^{[n]}}\oplus {\mathcal O}_{E^{[n]}}(K_{E^{[n]}}))$, we have $\pi _{\ast }{\mathcal O}_{X}(\pi ^{\ast }(D_{E}))\simeq {\mathcal O}_{E^{[n]}}(D_{E})\oplus {\mathcal O}_{E^{[n]}}(D_{E}+K_{E^{[n]}})$. By Proposition $\ref{dfn:80}$, dim$_{{\mathbb C}}H^{0}(E^{[n]},{\mathcal O}_{E^{[n]}}(D_{E}))=1$.\\ We show that \[ {\rm dim}_{{\mathbb C}}H^{0}(E^{[n]},{\mathcal O}_{E^{[n]}}(D_{E}+K_{E^{[n]}}))=0. \] Since $\pi _{E}|_{E^{[n]}\setminus D_{E}}:E^{[n]}\setminus D_{E}\simeq E^{(n)}\setminus \Delta _{E}^{(n)}$, we have \[ (\pi _{E})_{\ast }({\mathcal O}_{E^{[n]}}(D_{E}+K_{E^{[n]}}))\simeq \Omega ^{2n}_{E^{(n)}}\ {\rm on}\ E^{(n)}\setminus \Delta _{E}^{(n)}.\] Hence we have \[ \Gamma (E^{[n]}\setminus D_{E},{\mathcal O}_{E^{[n]}}(D_{E}+K_{E^{[n]}}))\simeq \Gamma (E^{(n)}\setminus \Delta _{E}^{(n)},\Omega ^{2n}_{E^{(n)}}). \] Since $H^{2}(E,{\mathbb C})^{2,0}=0$, the K\"unneth theorem gives \[ H^{2n}(E^{n},{\mathbb C})^{2n,0}=H^{0}(E^{n},\Omega ^{2n}_{E^{n}})=0. \] Since the codimension of $\Delta _{E}^{n}$ is $2$ and $\Omega ^{2n}_{E^{n}}$ is a locally free sheaf, we have \[ \Gamma (E^{n}\setminus \Delta _{E}^{n},\Omega ^{2n}_{E^{n}})=H^{0}(E^{n},\Omega ^{2n}_{E^{n}}). \] Thus we have \[ \Gamma (E^{(n)}\setminus \Delta _{E}^{(n)},\Omega ^{2n}_{E^{(n)}})=0, \] and therefore \[ {\rm dim}_{{\mathbb C}}H^{0}(E^{[n]}\setminus D_{E},{\mathcal O}_{E^{[n]}}(D_{E}+K_{E^{[n]}}))=0. \] Hence \[ {\rm dim}_{{\mathbb C}}H^{0}(E^{[n]},{\mathcal O}_{E^{[n]}}(D_{E}+K_{E^{[n]}}))=0.
\] Thus we obtain ${\rm dim}_{{\mathbb C}}H^{0}(X,{\mathcal O}_{X}(\pi ^{\ast }(D_{E})))=1$. \end{proof} \begin{mar}\label{dfn:1002} {\rm By Proposition\,$\ref{dfn:13}$, for an automorphism $\varphi \in {\rm Aut}(X)$, the condition $\varphi ^{\ast }({\mathcal O}_{X}(\pi ^{\ast }D_{E}))={\mathcal O}_{X}(\pi ^{\ast }D_{E})$ is equivalent to the condition $\varphi (\pi ^{-1}(D_{E}))=\pi ^{-1}(D_{E})$.} \end{mar} Let $\rho $ be the covering involution of $\pi :X\rightarrow E^{[n]}$. \begin{pro}\label{pro:3} For $n\geq 3$, the induced map $\rho ^{\ast }:H^{2}(X,{\mathbb C})\rightarrow H^{2}(X,{\mathbb C})$ is the identity. \end{pro} \begin{proof} Since $E^{[n]}\simeq X/\langle \rho \rangle$, we have $H^{2}(E^{[n]},{\mathbb C})\simeq H^{2}(X,{\mathbb C})^{\rho ^{\ast }}$. By Proposition $\ref{pro:70}$, for $n\geq 3$ we have dim$_{\mathbb C}H^{2}(X,{\mathbb C})=11$. By $[\ref{bio:6},\,{\rm page}\,767]$, dim$_{\mathbb C}H^{2}(E^{[n]},{\mathbb C})=11$. Thus the induced map $\rho ^{\ast }:H^{2}(X,{\mathbb C})\rightarrow H^{2}(X,{\mathbb C})$ is the identity for $n\geq 3$. \end{proof} Recall that $\mu :K\rightarrow E$ is the universal covering of $E$, where $K$ is a $K3$ surface, and that $\sigma $ is the covering involution of $\mu $. \begin{pro}\label{dfn:130} Let $E$ be an Enriques surface which does not have numerically trivial involutions, $E^{[n]}$ the Hilbert scheme of $n$ points of $E$, $\pi :X\rightarrow E^{[n]}$ the universal covering space of $E^{[n]}$, $\rho $ the covering involution of $\pi $, and $n\geq 3$. If $\iota $ is an involution of $X$ which acts on $H^{2}(X,{\mathbb C})$ as ${\rm id}$, then $\iota =\rho $. \end{pro} \begin{proof} Let $\iota $ be an involution of $X$ which acts on $H^{2}(X,{\mathbb C})$ as ${\rm id}$. By Remark $\ref{dfn:1002}$, $\iota |_{X\setminus \pi ^{-1}(D_{E})}$ is an automorphism of $X\setminus \pi ^{-1}(D_{E})$.
By the uniqueness of the universal covering space, there is an automorphism $g$ of $K^{n}\setminus \Gamma _{K}$ such that $\iota \circ \omega =\omega \circ g$: $$ \xymatrix{ K^{n}\setminus \Gamma _{K} \ar[d]^{\omega } \ar[r]^{g} &K^{n}\setminus \Gamma _{K} \ar[d]^{\omega } \\ X\setminus \pi ^{-1}(D_{E}) \ar[r]^{\iota }&X\setminus \pi ^{-1}(D_{E}). } $$ As in the proof of Proposition $\ref{dfn:12}$, we may assume that there are automorphisms $g_{i}$ of $K$ such that $g=g_{1}\times \cdots \times g_{n}$, where for each $1\leq i\leq n$, $g_{i}=g_{1}$ or $g_{i}=g_{1}\circ \sigma $, and $g_{1}\circ \sigma =\sigma \circ g_{1}$. Since $\iota ^{2}={\rm id}_{X}$, we have $g^{2}\in H$. Thus $g^{2}={\rm id}_{K^{n}}$ or $g^{2}=\sigma _{i_{1}\ldots i_{k}}$. By $[\ref{bio:9},\,{\rm Lemma}\,1.2]$, we have $g^{2}={\rm id}_{K^{n}}$. We put $g':=g_{1}$. Let $g'_{E}$ be the automorphism of $E$ induced by $g'$, and $g'^{[n]}_{E}$ the automorphism of $E^{[n]}$ induced by $g'_{E}$. Since $g'^{[n]}_{E}\circ \pi =\pi \circ \iota $ and $n\geq 3$, $g'^{[n]\ast }_{E}$ acts on $H^{2}(E^{[n]},{\mathbb C})$ as ${\rm id}$, and therefore $g'^{\ast }_{E}$ acts on $H^{2}(E,{\mathbb C})$ as ${\rm id}$. Since $E$ does not have numerically trivial involutions, $g'_{E}={\rm id}_{E}$, and therefore we have $g'=\sigma $ or $g'={\rm id}_{K}$. Thus we have $\pi \circ \omega \circ g=\pi \circ \omega $: $$ \xymatrix{ K^{n}\setminus \Gamma _{K} \ar[d]^{\pi \circ \omega } \ar[r]^{g} &K^{n}\setminus \Gamma _{K} \ar[d]^{\pi \circ \omega } \\ E^{[n]}\setminus D_{E} \ar[r]^{{\rm id}}&E^{[n]}\setminus D_{E}. } $$ Since $\iota \circ \omega =\omega \circ g$, we have $\pi =\pi \circ \iota $: $$ \xymatrix{ X\setminus \pi ^{-1}(D_{E}) \ar[d]^{\pi } \ar[r]^{\iota } &X\setminus \pi ^{-1}(D_{E}) \ar[d]^{\pi } \\ E^{[n]}\setminus D_{E} \ar[r]^{{\rm id}}&E^{[n]}\setminus D_{E}. } $$ Since the degree of $\pi$ is $2$, we have $\iota =\rho $. \end{proof} We now suppose that $E$ has a numerically trivial involution.
By $[\ref{bio:9},\,{\rm Proposition}\,1.1]$, there is exactly one automorphism of $E$, denoted $\upsilon $, such that its order is $2$ and $\upsilon ^{\ast }$ acts on $H^{2}(E,{\mathbb C})$ as ${\rm id}$. There are exactly two involutions of $K$ which are liftings of $\upsilon $: one acts on $H^{0}(K,\Omega ^{2}_{K})$ as ${\rm id}$ and the other acts on $H^{0}(K,\Omega ^{2}_{K})$ as $-{\rm id}$; we denote them by $\upsilon _{+}$ and $\upsilon _{-}$, respectively. They satisfy $\upsilon _{+}=\upsilon _{-}\circ \sigma $. Let $\upsilon ^{[n]}$ be the automorphism of $E^{[n]}$ which is induced by $\upsilon $. For $\upsilon ^{[n]}$, there are exactly two automorphisms of $X$ which are liftings of $\upsilon ^{[n]}$, denoted $\varsigma $ and $\varsigma '$, respectively: $$ \xymatrix{ X \ar[d]^{\pi } \ar[r]^{\varsigma \,(\varsigma ')} &X \ar[d]^{\pi }\\ E^{[n]} \ar[r]^{\upsilon ^{[n]}} &E^{[n]}. } $$ They satisfy $\varsigma =\varsigma '\circ \rho $. Since $n\geq 3$, as in the proof of Proposition $\ref{dfn:130}$, both $\varsigma $ and $\varsigma '$ have order $2$. \begin{lem}\label{dfn:100} Of $\varsigma $ and $\varsigma '$, one acts on $H^{0}(X,\Omega ^{2n}_{X})$ as ${\rm id}$, and the other acts on $H^{0}(X,\Omega ^{2n}_{X})$ as $-{\rm id}$. \end{lem} \begin{proof} Since $\upsilon ^{[n]}|_{E^{[n]}\setminus D_{E}}$ is an automorphism of $E^{[n]}\setminus D_{E}$, by the uniqueness of the universal covering space there is an automorphism $g$ of $K^{n}\setminus \Gamma _{K}$ such that $\upsilon ^{[n]} \circ \pi \circ \omega =\pi \circ \omega \circ g$: $$ \xymatrix{ K^{n}\setminus \Gamma _{K} \ar[d]^{\pi \circ \omega } \ar[r]^{g} &K^{n}\setminus \Gamma _{K} \ar[d]^{\pi \circ \omega } \\ E^{[n]}\setminus D_{E} \ar[r]^{\upsilon ^{[n]}} &E^{[n]}\setminus D_{E}.
} $$ As in the proof of Proposition $\ref{dfn:12}$, we may assume that there are automorphisms $g_{i}$ of $K$ such that $g=g_{1}\times \cdots\times g_{n}$, where for each $1\leq i\leq n$, $g_{i}=g_{1}$ or $g_{i}=g_{1}\circ \sigma $, and $g_{1}\circ \sigma =\sigma \circ g_{1}$. From Theorem $\ref{thm:10}$, we get $K^{n}\setminus \Gamma _{K}/H\simeq X\setminus \pi ^{-1}(D_{E}).$ Put \[ \upsilon _{+,even}:=u_{1}\times \cdots \times u_{n}, \] where each $u_{i}\in \{\upsilon _{+},\upsilon _{-}\}$ and the number of indices $i$ with $u_{i}=\upsilon _{+}$ is even; this is an automorphism of $K^{n}$, and it induces an automorphism $\widetilde{\upsilon _{+,even}}$ of $X\setminus \pi ^{-1}(D_{E})$. We define automorphisms $\widetilde{\upsilon _{+,odd}}$, $\widetilde{\upsilon _{-,even}}$, and $\widetilde{ \upsilon _{-,odd}}$ of $K^{n}\setminus \Gamma _{K}/H$ in the same way. Since $\sigma _{ij}\in H$ for $1\leq i<j\leq n$ and $\upsilon _{+}=\upsilon _{-}\circ \sigma $, if $n$ is odd, then \[ \widetilde{\upsilon _{+,odd}}=\widetilde{\upsilon _{-,even}},\ \widetilde{\upsilon _{+,even}}=\widetilde{\upsilon _{-,odd}},\ {\rm and}\ \widetilde{\upsilon _{+,odd}}\not=\widetilde{\upsilon _{+,even}}, \] and if $n$ is even, then \[ \widetilde{\upsilon _{+,odd}}=\widetilde{\upsilon _{-,odd}},\ \widetilde{\upsilon _{+,even}}=\widetilde{\upsilon _{-,even}},\ {\rm and}\ \widetilde{\upsilon _{+,odd}}\not=\widetilde{\upsilon _{+,even}}. \] Since $\upsilon ^{[n]}\circ \pi =\pi \circ \widetilde{\upsilon _{+,odd}}$, $\upsilon ^{[n]}\circ \pi =\pi \circ \widetilde{\upsilon _{+,even}}$, and the degree of $\pi $ is $2$, we have $\{\varsigma ,\varsigma '\}=\{\widetilde{\upsilon _{+,odd}},\widetilde{\upsilon _{+,even}}\}$. Let $\omega _{X}\in H^{0}(X,\Omega ^{2n}_{X})$ be a basis of $H^{0}(X,\Omega ^{2n}_{X})$ over ${\mathbb C}$.
Since $X\setminus \pi ^{-1}(F_{E})\simeq {\rm Blow}_{\Delta _{K\ast \mu }\cup T_{\ast \mu }}K^{n}_{\ast \mu }/H$, and by the definition of $\upsilon _{+}$ and $\upsilon _{-}$, \[ \widetilde{\upsilon _{+,odd}}^{\ast }(\omega _{X})=-\omega _{X}\ {\rm and}\ \widetilde{\upsilon _{+,even}}^{\ast }(\omega _{X})=\omega _{X}. \] Thus, of $\varsigma $ and $\varsigma '$, one acts on $H^{0}(X,\Omega ^{2n}_{X})$ as ${\rm id}$, and the other acts on $H^{0}(X,\Omega ^{2n}_{X})$ as $-{\rm id}$. \end{proof} We denote by $\varsigma _{+}$ the element of $\{\varsigma ,\varsigma '\}$ which acts on $H^{0}(X,\Omega ^{2n}_{X})$ as ${\rm id}$, and by $\varsigma _{-}$ the one which acts on $H^{0}(X,\Omega ^{2n}_{X})$ as $-{\rm id}$. \begin{pro}\label{dfn:50} Suppose $E$ has a numerically trivial involution. Let $E^{[n]}$ be the Hilbert scheme of $n$ points of $E$, $\pi :X\rightarrow E^{[n]}$ the universal covering space of $E^{[n]}$, $\rho $ the covering involution of $\pi $, and $n\geq 3$. Let $\iota $ be an involution of $X$ such that $\iota ^{\ast }$ acts on $H^{2}(X,{\mathbb C})$ as ${\rm id}$ and on $H^{0}(X,\Omega ^{2n}_{X})$ as $-{\rm id}$, and $\iota \not=\rho $. Then we have $\iota =\varsigma _{-}$. \end{pro} \begin{proof} Let $\iota $ be an involution of $X$ which acts on $H^{2}(X,{\mathbb C})$ as ${\rm id}$ and on $H^{0}(X,\Omega ^{2n}_{X})$ as $-{\rm id}$, with $\iota \not=\rho $. By Remark $\ref{dfn:1002}$, $\iota |_{X\setminus \pi ^{-1}(D_{E})}$ is an automorphism of $X\setminus \pi ^{-1}(D_{E})$. By the uniqueness of the universal covering space, there is an automorphism $g$ of $K^{n}\setminus \Gamma _{K}$ such that $\iota \circ \omega =\omega \circ g$: $$ \xymatrix{ K^{n}\setminus \Gamma _{K} \ar[d]^{\omega } \ar[r]^{g} &K^{n}\setminus \Gamma _{K} \ar[d]^{\omega } \\ X\setminus \pi ^{-1}(D_{E}) \ar[r]^{\iota }&X\setminus \pi ^{-1}(D_{E}).
} $$ As in the proof of Proposition $\ref{dfn:12}$, we may assume that there are automorphisms $g_{i}$ of $K$ such that $g=g_{1}\times \cdots \times g_{n}$, where for each $1\leq i\leq n$, $g_{i}=g_{1}$ or $g_{i}=g_{1}\circ \sigma $, and $g_{1}\circ \sigma =\sigma \circ g_{1}$. Since $\iota ^{2}={\rm id}_{X}$, we have $g^{2}\in H$. Thus $g^{2}={\rm id}_{K^{n}}$ or $g^{2}=\sigma _{i_{1}\ldots i_{k}}$. By $[\ref{bio:9},\,{\rm Lemma}\,1.2]$, we have $g^{2}={\rm id}_{K^{n}}$. We put $g':=g_{1}$. Let $g'_{E}$ be the automorphism of $E$ induced by $g'$, and $g'^{[n]}_{E}$ the automorphism of $E^{[n]}$ induced by $g'_{E}$. Since $g'^{[n]}_{E}\circ \pi =\pi \circ \iota $ and $n\geq 3$, $g'^{[n]\ast }_{E}$ acts on $H^{2}(E^{[n]},{\mathbb C})$ as ${\rm id}$, and therefore $g'^{\ast }_{E}$ acts on $H^{2}(E,{\mathbb C})$ as ${\rm id}$. If $g'_{E}={\rm id}_{E}$, then we have $\iota =\rho $ or $\iota ={\rm id}_{X}$, a contradiction; hence $g'_{E}\not={\rm id}_{E}$. Since $g^{2}={\rm id}_{K^{n}}$, the order of $g'_{E}$ is $2$. Since $g'^{\ast }_{E}$ acts on $H^{2}(E,{\mathbb C})$ as ${\rm id}$, we have $g'_{E}=\upsilon $, and therefore $g'=\upsilon _{+}$ or $g'=\upsilon _{-}$. By the definition of $\varsigma $ and $\varsigma '$, we obtain $\iota =\varsigma $ or $\iota =\varsigma '$. Since $\iota ^{\ast }$ acts on $H^{0}(X,\Omega ^{2n}_{X})$ as $-{\rm id}$, we obtain $\iota =\varsigma _{-}$. \end{proof} \begin{thm}\label{dfn:131} Let $E$ be an Enriques surface, $E^{[n]}$ the Hilbert scheme of $n$ points of $E$, $\pi :X\rightarrow E^{[n]}$ the universal covering space of $E^{[n]}$, and $n\geq 3$. If $X$ has an involution $\iota $ such that $\iota ^{\ast }$ acts on $H^{2}(X,{\mathbb C})$ as ${\rm id}$ and $\iota \not=\rho $, then $E$ has a numerically trivial involution. \end{thm} \begin{proof} Let $\iota $ be an involution of $X$ which acts on $H^{2}(X,{\mathbb C})$ as ${\rm id}$, and $\iota \not=\rho $.
By Remark $\ref{dfn:1002}$, $\iota |_{X\setminus \pi ^{-1}(D_{E})}$ is an automorphism of $X\setminus \pi ^{-1}(D_{E})$. By the uniqueness of the universal covering space, there is an automorphism $g$ of $K^{n}\setminus \Gamma _{K}$ such that $\iota \circ \omega =\omega \circ g$: $$ \xymatrix{ K^{n}\setminus \Gamma _{K} \ar[d]^{\omega } \ar[r]^{g} &K^{n}\setminus \Gamma _{K} \ar[d]^{\omega } \\ X\setminus \pi ^{-1}(D_{E}) \ar[r]^{\iota }&X\setminus \pi ^{-1}(D_{E}). } $$ As in the proof of Proposition $\ref{dfn:12}$, we may assume that there are automorphisms $g_{i}$ of $K$ such that $g=g_{1}\times \cdots \times g_{n}$, where for each $1\leq i\leq n$, $g_{i}=g_{1}$ or $g_{i}=g_{1}\circ \sigma $, and $g_{1}\circ \sigma =\sigma \circ g_{1}$. Since $\iota ^{2}={\rm id}_{X}$, we have $g^{2}\in H$. Thus $g^{2}={\rm id}_{K^{n}}$ or $g^{2}=\sigma _{i_{1}\ldots i_{k}}$. By $[\ref{bio:9},\,{\rm Lemma}\,1.2]$, we have $g^{2}={\rm id}_{K^{n}}$. We put $g':=g_{1}$. Let $g'_{E}$ be the automorphism of $E$ induced by $g'$, and $g'^{[n]}_{E}$ the automorphism of $E^{[n]}$ induced by $g'_{E}$. Since $g'^{[n]}_{E}\circ \pi =\pi \circ \iota $ and $n\geq 3$, $g'^{[n]\ast }_{E}$ acts on $H^{2}(E^{[n]},{\mathbb C})$ as ${\rm id}$, and therefore $g'^{\ast }_{E}$ acts on $H^{2}(E,{\mathbb C})$ as ${\rm id}$. If $g'_{E}={\rm id}$, then, as in the proof of Proposition $\ref{dfn:130}$, we have $\iota =\rho $ or $\iota ={\rm id}_{X}$, a contradiction. Thus we have $g'_{E}\not={\rm id}$. Since $g^{2}={\rm id}_{K^{n}}$, $g'_{E}$ is an involution of $E$. Since $g'^{\ast }_{E}$ acts on $H^{2}(E,{\mathbb C})$ as ${\rm id}$, $E$ has a numerically trivial involution. \end{proof} \begin{lem}\label{lem:1} dim$_{\mathbb C}H^{2n}(K^{n}/H,{\mathbb C})^{2n-1,1}=10.$ \end{lem} \begin{proof} Let $\sigma $ be the covering involution of $\mu :K\rightarrow E$.
Put \[ H^{k}_{\pm }(K,{\mathbb C})^{p,q}:=\{ \alpha \in H^{k}(K,{\mathbb C})^{p,q}:\sigma ^{\ast }(\alpha )=\pm \alpha \}\,{\rm and} \] \[ h^{p,q}_{\pm }(K):={\rm dim}_{\mathbb C}H^{p+q}_{\pm }(K,{\mathbb C})^{p,q}. \] Since $K$ is a $K3$ surface, we have \[ h^{0,0}(K)=1,\ h^{1,0}(K)=0,\ h^{2,0}(K)=1,\ {\rm and}\ h^{1,1}(K)=20,\,{\rm and} \] \[ h_{+}^{0,0}(K)=1,\,h_{+}^{1,0}(K)=0,\,h_{+}^{2,0}(K)=0,\,{\rm and}\,h_{+}^{1,1}(K)=10,\,{\rm and} \] \[ h^{0,0}_{-}(K)=0,\,h^{1,0}_{-}(K)=0,\,h^{2,0}_{-}(K)=1,\,{\rm and}\,h^{1,1}_{-}(K)=10. \] Let $\Lambda $ be the subset of ${\mathbb Z}_{\geq 0}^{2n}$ given by \[ \Lambda :=\{ (s_{1},\cdots ,s_{n},t_{1},\cdots ,t_{n})\in {\mathbb Z}_{\geq 0}^{2n}:\Sigma _{i=1}^{n}s_{i}=2n-1,\,\Sigma _{j=1}^{n}t_{j}=1 \}. \] From the K\"unneth theorem, we have $$ H^{2n}(K^{n},{\mathbb C})^{2n-1,1}\simeq \bigoplus _{(s_{1},\cdots ,s_{n},t_{1},\cdots ,t_{n})\in \Lambda }\biggl{(}\bigotimes _{i=1}^{n}H^{2}(K,{\mathbb C})^{s_{i},t_{i}}\biggr{)}. $$ We fix a basis $\alpha $ of $H^{2}(K,{\mathbb C})^{2,0}$ and a basis $\{\beta _{i}\}_{i=1}^{10}$ of $H^{2}_{-}(K,{\mathbb C})^{1,1}$, and for $1\leq i\leq 10$ and $1\leq k\leq n$ we let \[ \tilde{\beta }_{i,k}:=\bigotimes _{j=1}^{n}\epsilon _{j}, \] where $\epsilon _{j}=\alpha $ for $j\not=k$ and $\epsilon _{k}=\beta _{i}$, and \[ \gamma _{i}:=\sum _{k=1}^{n}\tilde{\beta }_{i,k}. \] Then we have \begin{equation} \label{eq:10} H^{2n}(K^{n}/H,{\mathbb C})^{2n-1,1}\simeq \bigoplus _{i=1}^{10}{\mathbb C}\gamma _{i}, \end{equation} and hence dim$_{\mathbb C}H^{2n}(K^{n}/H,{\mathbb C})^{2n-1,1}=10$. \end{proof} Since $X$ and $K^{n}/H$ are projective, $K^{n}/H$ is a V-manifold, and the natural map $\varphi _{X}:X\rightarrow K^{n}/H$ is surjective, the pull-back $\varphi _{X}^{\ast }:H^{p,q}(K^{n}/H,{\mathbb C})\rightarrow H^{p,q}(X,{\mathbb C})$ is injective. \begin{thm}\label{thm:91} We suppose $n\geq 2$. Let $\pi :X\rightarrow E^{[n]}$ be the universal covering space. For any automorphism $f$ of $X$, if $f^{\ast }$ acts on $H^{\ast }(X,{\mathbb C}):=\bigoplus _{i=0}^{2n}H^{i}(X,{\mathbb C})$ as the identity, then $f={\rm id}_{X}$.
\end{thm} \begin{proof} Since $f^{\ast }$ acts on $H^{2}(X,{\mathbb C})$ as the identity, $f$ restricts to an automorphism of $X\setminus \pi ^{-1}(D_{E})\simeq K^{n}\setminus \Gamma _{K}/H$. Let $p_{H}:K^{n}\setminus \Gamma _{K}\rightarrow K^{n}\setminus \Gamma _{K}/H$ be the natural map. By the uniqueness of the universal covering space, we may assume that there are automorphisms $g_{i}$ of $K$ such that, setting $g:=g_{1}\times \cdots \times g_{n}$, we have $g_{i}=g_{1}$ or $g_{i}=g_{1}\circ \sigma $ for $2\leq i\leq n$, $g_{1}\circ \sigma =\sigma \circ g_{1}$, and $f\circ p_{H} =p_{H}\circ g$: $$ \xymatrix{ K^{n}\setminus \Gamma _{K}/H \ar[r]^{f}&K^{n}\setminus \Gamma _{K}/H \\ K^{n}\setminus \Gamma _{K} \ar[u]^{p_{H}} \ar[r]^{g} &K^{n}\setminus \Gamma _{K} \ar[u]^{p_{H}}. } $$ Let $g_{H}$ be the induced automorphism of $K^{n}/H$. Then we obtain $g_{H}\circ \varphi _{X}=\varphi _{X}\circ f$: $$ \xymatrix{ K^{n}/H \ar[r]^{g_{H}}&K^{n}/H \\ X \ar[u]^{\varphi _{X}} \ar[r]^{f} &X \ar[u]^{\varphi _{X}}. } $$ Let $g_{1E}$ be the automorphism of $E$ induced by $g_{1}$. Since $f^{\ast }$ acts on $H^{2}(X,{\mathbb C})$ as the identity, $g_{H}^{\ast }$ acts on $H^{2}(K^{n}/H,{\mathbb C})$ as the identity. Since $H^{2}(K^{n}/H,{\mathbb C})\cong H^{2}(E,{\mathbb C})$, $g_{1E}^{\ast }$ acts on $H^{2}(E,{\mathbb C})$ as the identity. From Lemma $\ref{lem:1}$, we have \[ H^{2n}(X,{\mathbb C})^{2n-1,1}=\bigoplus _{i=1}^{10}{\mathbb C}\varphi ^{\ast }_{X}\gamma _{i}. \] Suppose $g_{1}\not=\sigma $ and $g_{1}\not={\rm id}_{K}$. Since $g_{1E}^{\ast }$ acts on $H^{2}(E,{\mathbb C})$ as the identity, from $[\ref{bio:9},\,{\rm page}\,386$-$389]$, the order of $g_{1E}$ is at most $4$. If the order of $g_{1E}$ is $2$, there is an element $\alpha _{\pm }\in H^{2}_{-}(K,{\mathbb C})^{1,1}$ such that $g_{1}^{\ast }(\alpha _{\pm })=\pm \alpha _{\pm }$. By equation $(\ref{eq:10})$ and the proof of Lemma $\ref{dfn:100}$, $f$ does not act on $H^{2n}(X,{\mathbb C})^{2n-1,1}$ as the identity, a contradiction.
If the order of $g_{1E}$ is $4$, then there is an element $\alpha '_{\pm }\in H^{2}_{-}(K,{\mathbb C})^{1,1}$ such that $g_{1}^{\ast }(\alpha '_{\pm })=\pm \sqrt{-1}\alpha '_{\pm }$ from $[\ref{bio:9},\,{\rm page}\,390$-$391]$. By equation $(\ref{eq:10})$ and the proof of Lemma $\ref{dfn:100}$, $f$ does not act on $H^{2n}(X,{\mathbb C})^{2n-1,1}$ as the identity, a contradiction. Thus we have $g_{1E}={\rm id}_{E}$, i.e. $g_{1}=\sigma $ or $g_{1}={\rm id}_{K}$, and $f={\rm id}_{X}$ or $f=\rho $, where $\rho $ is the covering involution of $\pi :X\rightarrow E^{[n]}$. Since $H^{2n}(E^{[n]},{\mathbb C})^{2n-1,1}=0$ by Proposition $\ref{dfn:9}$, $\rho $ does not act on $H^{2n}(X,{\mathbb C})^{2n-1,1}$ as the identity. Since $f^{\ast }$ acts on $H^{2n}(X,{\mathbb C})^{2n-1,1}$ as the identity, we have $f={\rm id}_{X}$. \end{proof} \begin{cro}\label{cro:92} We suppose $n\geq 2$. Let $\pi :X\rightarrow E^{[n]}$ be the universal covering space. For any two automorphisms $f$ and $g$ of $X$, if $f^{\ast }=g^{\ast }$ on $H^{\ast }(X,{\mathbb C})$, then $f=g$. \end{cro} By $[\ref{bio:9},\,{\rm Proposition}\,1.1]$, there is exactly one automorphism of $E$, denoted $\upsilon $, such that its order is $2$ and $\upsilon ^{\ast }$ acts on $H^{2}(E,{\mathbb C})$ as ${\rm id}$. There are exactly two involutions of $K$ which are liftings of $\upsilon $: one acts on $H^{0}(K,\Omega ^{2}_{K})$ as ${\rm id}$ and the other acts on $H^{0}(K,\Omega ^{2}_{K})$ as $-{\rm id}$; we denote them by $\upsilon _{+}$ and $\upsilon _{-}$, respectively. They satisfy $\upsilon _{+}=\upsilon _{-}\circ \sigma $. Let $\upsilon ^{[n]}$ be the automorphism of $E^{[n]}$ which is induced by $\upsilon $. For $\upsilon ^{[n]}$, there are exactly two automorphisms of $X$ which are liftings of $\upsilon ^{[n]}$, denoted $\varsigma $ and $\varsigma '$, respectively. They satisfy $\varsigma =\varsigma '\circ \rho $, and both $\varsigma $ and $\varsigma '$ have order $2$.
By Lemma $\ref{dfn:100}$, one of $\varsigma $ and $\varsigma '$ acts on $H^{0}(X,\Omega ^{2n}_{X})$ as ${\rm id}$, and the other acts on $H^{0}(X,\Omega ^{2n}_{X})$ as $-{\rm id}$. We denote by $\varsigma _{+}$ the element of $\{\varsigma ,\varsigma '\}$ which acts on $H^{0}(X,\Omega ^{2n}_{X})$ as ${\rm id}$, and by $\varsigma _{-}$ the one which acts on $H^{0}(X,\Omega ^{2n}_{X})$ as $-{\rm id}$. \begin{thm}\label{thm:3} Let $E$ and $E'$ be two Enriques surfaces, $E^{[n]}$ and $E'^{[n]}$ the Hilbert schemes of $n$ points of $E$ and $E'$, $X$ and $X'$ the universal covering spaces of $E^{[n]}$ and $E'^{[n]}$, and $n\geq 3$. If $X\cong X'$, then $E^{[n]}\cong E'^{[n]}$; that is, for a fixed $X$, there is exactly one isomorphism class of Hilbert schemes of $n$ points of Enriques surfaces having $X$ as their universal covering space. \end{thm} \begin{proof} Any involution of $X$ which is the covering involution for the Hilbert scheme of $n$ points of some Enriques surface acts on $H^{2}(X,{\mathbb C})$ as ${\rm id}$, on $H^{0}(X,\Omega ^{2n}_{X})$ as $-{\rm id}$, and on $H^{2n}(X,{\mathbb C})^{2n-1,1}$ as $-{\rm id}$. By Proposition $\ref{dfn:50}$, the only automorphisms of $X$ which act on $H^{2}(X,{\mathbb C})$ as ${\rm id}$ and on $H^{0}(X,\Omega ^{2n}_{X})$ as $-{\rm id}$ are $\rho $ and $\varsigma _{-}$. From the definition of $\varsigma _{-}$ and Lemma $\ref{lem:1}$, $\varsigma _{-}$ does not act on $H^{2n}(X,{\mathbb C})^{2n-1,1}$ as $-{\rm id}$. Thus the covering involution is necessarily $\rho $, and hence $E^{[n]}\simeq X/\langle \rho \rangle $ is determined by $X$ up to isomorphism. \end{proof} \end{document}
\begin{document} \title{A Wasserstein-type distance in the space of Gaussian Mixture Models\thanks{Submitted to the editors DATE. \funding{This work was funded by the French National Research Agency under the grant ANR-14-CE27-0019 - MIRIAM.}}} \begin{abstract} In this paper we introduce a Wasserstein-type distance on the set of Gaussian mixture models. This distance is defined by restricting the set of possible coupling measures in the optimal transport problem to Gaussian mixture models. We derive a very simple discrete formulation for this distance, which makes it suitable for high dimensional problems. We also study the corresponding multi-marginal and barycenter formulations. We show some properties of this Wasserstein-type distance, and we illustrate its practical use with some examples in image processing. \end{abstract} \begin{keywords} optimal transport, Wasserstein distance, Gaussian mixture model, multi-marginal optimal transport, barycenter, image processing applications \end{keywords} \begin{AMS} 65K10, 65K05, 90C05, 62-07, 68Q25, 68U10, 68U05, 68R10 \end{AMS} \section{Introduction} Nowadays, Gaussian Mixture Models (GMM) have become ubiquitous in statistics and machine learning. These models are especially useful in applied fields to represent probability distributions of real datasets. Indeed, as linear combinations of Gaussian distributions, they are well suited to modeling complex multimodal densities and can approximate any continuous density when the number of components is chosen large enough. Their parameters are also easy to infer with algorithms such as the Expectation-Maximization (EM) algorithm~\cite{dempster1977maximum}.
For instance, in image processing, a large body of work uses GMM to represent patch distributions in images\footnote{Patches are small image pieces, they can be seen as vectors in a high dimensional space.}, and uses these distributions for various applications, such as image restoration~\cite{ZoranWeiss2011,teodoro2015single,PLE,SPLE,houdard2018high,delon2018gaussian} or texture synthesis~\cite{galerne2017semi}. The optimal transport theory provides mathematical tools to compare or interpolate between probability distributions. For two probability distributions $\mu_0$ and $\mu_1$ on $\R^d$ and a positive cost function $c$ on $\R^d\times \R^d$, the goal is to solve the optimization problem \begin{equation} \label{eq:transport_definition} \inf_{Y_0\sim \mu_0; Y_1\sim \mu_1} \E \left(c(Y_0,Y_1)\right), \end{equation} where the notation $Y\sim\mu$ means that $Y$ is a random variable with probability distribution $\mu$. When $c(x,y) = \|x-y\|^p$ for $p\geq 1$, Equation~\eqref{eq:transport_definition} (raised to the power $1/p$) defines a distance between probability distributions that have a moment of order $p$, called the Wasserstein distance $W_p$. While this subject has gathered a lot of theoretical work (see \cite{villanitopics, villani2008optimal, santambrogio2015optimal} for three reference monographs on the topic), its success in applied fields was slowed down for many years by the computational complexity of numerical algorithms, which were not always compatible with large amounts of data. In recent years, the development of efficient numerical approaches has been a game changer, widening the use of optimal transport to various applications notably in image processing, computer graphics and machine learning~\cite{peyre2019computational}. However, computing Wasserstein distances or optimal transport plans remains intractable when the dimension of the problem is too high.
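To make the optimization problem above concrete, note that between two uniform empirical measures with the same number of atoms, an optimal coupling can be taken to be a permutation, so the problem reduces to a linear assignment. A minimal sketch (the helper name \texttt{empirical\_wasserstein} is ours; it assumes NumPy and SciPy):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def empirical_wasserstein(X0, X1, p=2):
    """W_p between uniform empirical measures on two equal-size point clouds.

    With the same number of atoms and uniform weights, an optimal coupling
    is a permutation, so the problem is a linear assignment on the cost
    matrix C[i, j] = ||x_i - y_j||^p.
    """
    assert X0.shape == X1.shape
    C = np.linalg.norm(X0[:, None, :] - X1[None, :, :], axis=-1) ** p
    rows, cols = linear_sum_assignment(C)
    # uniform weights 1/n: the total cost is the mean of the matched costs
    return C[rows, cols].mean() ** (1.0 / p)
```

In one dimension, for example, the optimal assignment simply matches the sorted samples.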
Optimal transport can be used to compute distances or geodesics between Gaussian mixture models, but optimal transport plans between GMM, seen as probability distributions on a higher dimensional space, are usually not Gaussian mixture models themselves, and the corresponding Wasserstein geodesics between GMM do not preserve the property of being a GMM. In order to keep the good properties of these models, we define in this paper a variant of the Wasserstein distance by restricting the set of possible coupling measures to Gaussian mixture models. The idea of restricting the set of possible coupling measures has already been explored for instance in~\cite{bionnadal2019}, where the distance is defined on the set of the probability distributions of strong solutions to stochastic differential equations. The goal of the authors is to define a distance which keeps the good properties of $W_2$ while being numerically tractable. In this paper, we show that restricting the set of possible coupling measures to Gaussian mixture models transforms the original infinite-dimensional optimization problem into a finite-dimensional problem with a simple discrete formulation, depending only on the parameters of the different Gaussian distributions in the mixture. When the ground cost is simply $c(x,y) = \|x-y\|^2$, this yields a geodesic distance, that we call $MW_2$ (for Mixture Wasserstein), which is obviously larger than $W_2$, and is always upper bounded by $W_2$ plus a term depending only on the trace of the covariance matrices of the Gaussian components in the mixture. The complexity of the corresponding discrete optimization problem does not depend on the space dimension, but only on the number of components in the different mixtures, which makes it particularly suitable in practice for high dimensional problems.
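The resulting discrete problem can be sketched numerically: the pairwise costs are the closed-form squared $W_2$ distances between Gaussian components, and the coupling is optimized over the mixture weights only. Below is a minimal illustration (the helper names are ours; the snippet is a sketch of this formulation using the standard Gaussian $W_2$ formula, not a reference implementation):

```python
import numpy as np
from scipy.linalg import sqrtm
from scipy.optimize import linprog

def gaussian_w2_sq(m0, S0, m1, S1):
    """Closed-form W2^2 between N(m0, S0) and N(m1, S1)."""
    r0 = sqrtm(S0)
    cross = sqrtm(r0 @ S1 @ r0)
    return np.sum((m0 - m1) ** 2) + np.trace(S0 + S1 - 2 * cross.real)

def mw2_sq(pi0, means0, covs0, pi1, means1, covs1):
    """Sketch of the discrete formulation: optimal transport between the
    mixture weight vectors, with pairwise Gaussian W2^2 costs, solved as
    a linear program over couplings."""
    K0, K1 = len(pi0), len(pi1)
    C = np.array([[gaussian_w2_sq(means0[i], covs0[i], means1[j], covs1[j])
                   for j in range(K1)] for i in range(K0)])
    # marginal constraints: row i sums to pi0[i], column j sums to pi1[j]
    A_eq = []
    for i in range(K0):
        row = np.zeros(K0 * K1); row[i * K1:(i + 1) * K1] = 1.0; A_eq.append(row)
    for j in range(K1):
        col = np.zeros(K0 * K1); col[j::K1] = 1.0; A_eq.append(col)
    res = linprog(C.ravel(), A_eq=np.array(A_eq),
                  b_eq=np.concatenate([pi0, pi1]), bounds=(0, None))
    return res.fun
```

The size of the linear program depends only on the numbers of components $K_0$ and $K_1$, not on the ambient dimension $d$, which is the point made above.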
Observe that this equivalent discrete formulation has been proposed twice recently in the machine learning literature, but with a very different point of view, by two independent teams~\cite{chen2016distance,chen2019aggregated} and \cite{chen2017optimal,chen2019optimal}. Our original contributions in this paper are the following: \begin{enumerate} \item We derive an explicit formula for the optimal transport between two GMM restricted to GMM couplings, and we show several properties of the resulting distance, in particular how it compares to the classical Wasserstein distance. \item We study the multi-marginal and barycenter formulations of the problem, and show the link between these formulations. \item We propose a generalized formulation to be used on distributions that are not GMM. \item We provide two applications in image processing, to color transfer and texture synthesis, respectively. \end{enumerate} The paper is organized as follows. Section~\ref{sec:background} is a reminder on Wasserstein distances and barycenters between probability measures on $\R^d$. We also recall the explicit formulation of $W_2$ between Gaussian distributions. In Section~\ref{sec:GMM_prop}, we recall some properties of Gaussian mixture models, focusing on an identifiability property that will be necessary for the rest of the paper. We also show that optimal transport plans for $W_2$ between GMM are generally not GMM themselves. Then, Section~\ref{sec:MW2} introduces the $MW_2$ distance and derives the corresponding discrete formulation. Section~\ref{sec:comparison} compares $MW_2$ with $W_2$. Section~\ref{sec:multimarginal} focuses on the corresponding multi-marginal and barycenter formulations. In Section~\ref{sec:assignment}, we explain how to use $MW_2$ in practice on distributions that are not necessarily GMM. We conclude in Section~\ref{sec:applications} with two applications of the distance $MW_2$ to image processing.
To help the reproducibility of the results we present in this paper, we have made our Python codes available on the Github website \url{https://github.com/judelo/gmmot}. \section*{Notations} We define in the following some of the notations that will be used in the paper. \begin{itemize} \item The notation $Y\sim\mu$ means that $Y$ is a random variable with probability distribution $\mu$. \item If $\mu$ is a positive measure on a space $\mathcal{X}$ and $T:\mathcal{X}\to\mathcal{Y}$ is a map, $T\#\mu$ stands for the push-forward measure of $\mu$ by $T$, {\em i.e.} the measure on $\mathcal{Y}$ such that $\forall A \subset \mathcal{Y} $, $(T\#\mu)(A) = \mu(T^{-1}(A))$. \item The notation $\mathrm{tr}(M)$ denotes the trace of the matrix $M$. \item The notation $\mathrm{Id}$ denotes the identity map. \item $\langle\xi,\xi'\rangle $ denotes the Euclidean scalar product between $\xi$ and $\xi'$ in $\R^d$. \item $\MM_{n,m}(\R)$ is the set of real matrices with $n$ rows and $m$ columns, and we denote by $ \MM_{n_0,n_1,\dots,n_{J-1}}(\R)$ the set of $J$-dimensional tensors of size $n_k$ in dimension $k$. \item $\1_n = (1,1,\dots,1)^t$ denotes a column vector of ones of length $n$. \item For a given vector $m$ in $\R^d$ and a $d\times d$ covariance matrix $\Sigma$, $g_{m,\Sigma}$ denotes the density of the Gaussian (multivariate normal) distribution $\mathcal{N}(m,\Sigma)$. \item When $a_i$ is a finite sequence of $K$ elements (real numbers, vectors or matrices), we denote its elements as $a_i^0,\ldots, a_i^{K-1}$. \end{itemize} \section{Reminders: Wasserstein distances and barycenters between probability measures on $\R^d$} \label{sec:background} Let $d\geq 1$ be an integer. We recall in this section the definition and some basic properties of the Wasserstein distances between probability measures on $\R^d$. We write $\mathcal{P}(\R^d)$ for the set of probability measures on $\R^d$.
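As an aside on the push-forward notation $T\#\mu$ introduced above: for a discrete measure $\mu =\sum_i w_i\delta_{x_i}$, the push-forward simply transports atoms, $T\#\mu =\sum_i w_i\delta_{T(x_i)}$, merging atoms that collide. A small illustrative sketch (function name ours):

```python
import numpy as np

def pushforward_discrete(points, weights, T):
    """Push-forward T#mu of a discrete measure mu = sum_i w_i delta_{x_i}:
    atoms move to T(x_i), weights are unchanged and merged on collisions,
    so that (T#mu)(A) = mu(T^{-1}(A)) holds for every set A of atoms."""
    out = {}
    for x, w in zip(points, weights):
        y = tuple(np.atleast_1d(T(x)))  # hashable key for the new atom
        out[y] = out.get(y, 0.0) + w
    return out
```

For instance, pushing $0.2\,\delta_0+0.3\,\delta_1+0.5\,\delta_2$ forward by $T(x)=x \bmod 2$ merges the atoms at $0$ and $2$.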
For $p\geq 1$, the Wasserstein space $\mathcal{P}_p(\R^d)$ is defined as the set of probability measures $\mu$ with a finite moment of order $p$, {\em i.e.} such that \[\int_{\R^d} \|x\|^pd\mu(x) < +\infty,\] with $\|.\|$ the Euclidean norm on $\R^d$.\\ For $t\in[0,1]$, we define $\mathrm{P}_t:\R^d\times \R^d \rightarrow \R^d$ by $$\forall x,y \in\R^d , \quad \rmP_t(x,y) = (1-t) x + t y \in\R^d .$$ Observe that $\mathrm{P}_0$ and $\mathrm{P}_1$ are the projections from $\R^d\times \R^d$ onto $\R^d$ such that $\mathrm{P}_0(x,y) = x$ and $\mathrm{P}_1(x,y) = y$. \subsection{Wasserstein distances} Let $p\geq 1$, and let $\mu_0, \mu_1$ be two probability measures in $\mathcal{P}_p(\R^d)$. Define $\Pi(\mu_0,\mu_1)\subset \mathcal{P}_p(\R^d\times \R^d)$ as the subset of probability distributions $\gamma$ on $\R^d\times \R^d$ with marginal distributions $\mu_0$ and $\mu_1$, \ie such that $\mathrm{P}_0\# \gamma = \mu_0$ and $\mathrm{P}_1\# \gamma = \mu_1$. The \textit{$p$-Wasserstein distance} $W_p$ between $\mu_0$ and $\mu_1$ is defined as \begin{equation} \label{eq:wasserstein_definition} W_p^p(\mu_0,\mu_1) := \inf_{Y_0\sim \mu_0; Y_1\sim \mu_1} \E \left(\|Y_0 - Y_1\|^p\right) = \inf_{\gamma \in \Pi(\mu_0,\mu_1)} \int_{\R^d\times\R^d} \|y_0-y_1\|^pd\gamma(y_0,y_1). \end{equation} This formulation is a special case of~\eqref{eq:transport_definition} when $c(x,y) = \|x-y\|^p$. It can be shown (see for instance~\cite{villani2008optimal}) that there is always a couple $(Y_0,Y_1)$ of random variables which attains the infimum (which is thus a minimum) in the previous energy. Such a couple is called an \textit{optimal coupling}. The probability distribution $\gamma$ of this couple is called an \textit{optimal transport plan} between $\mu_0$ and $\mu_1$. This plan distributes all the mass of the distribution $\mu_0$ onto the distribution $\mu_1$ with a minimal cost, and the quantity $W_p^p(\mu_0,\mu_1)$ is the corresponding total cost.
As suggested by its name ($p$-Wasserstein distance), $W_p$ defines a metric on $\mathcal{P}_p(\R^d)$. It also metrizes the weak convergence\footnote{A sequence $(\mu_k)_k$ converges weakly to $\mu$ in $\mathcal{P}_p(\R^d)$ if it converges to $\mu$ in the sense of distributions and if $\int \|y\|^pd\mu_k(y)$ converges to $\int \|y\|^pd\mu(y)$.} in $\mathcal{P}_p(\R^d)$ (see~\cite{villani2008optimal}, chapter 6). It follows that $W_p$ is continuous on $\PP_p(\R^d)$ for the topology of weak convergence. From now on, we will mainly focus on the case $p=2$, since $W_2$ has an explicit formulation if $\mu_0$ and $\mu_1$ are Gaussian measures. \subsection{Transport map, transport plan and displacement interpolation} Assume that $p=2$. When $\mu_0$ and $\mu_1$ are two probability distributions on $\R^d$ and assuming that $\mu_0$ is absolutely continuous, then it can be shown that the optimal transport plan $\gamma$ for the problem~\eqref{eq:wasserstein_definition} is unique and has the form \begin{equation} \gamma = (\mathrm{Id},T)\# \mu_0,\label{eq:transport_map} \end{equation} where $T:\R^d \to \R^d$ is a map, called the {\it optimal transport map}, satisfying $T\# \mu_0=\mu_1$ (see~\cite{villani2008optimal}). If $\gamma$ is an optimal transport plan for $W_2$ between two probability distributions $\mu_0$ and $\mu_1$, the path $(\mu_t)_{t\in[0,1]}$ given by $$\forall t\in[0,1], \quad \mu_t := \rmP_t\# \gamma $$ defines a constant speed geodesic in $\mathcal{P}_2(\R^d)$ (see for instance~\cite{santambrogio2015optimal} Ch.5, Section 5.4). The path $(\mu_t)_{t\in[0,1]}$ is called the displacement interpolation between $\mu_0$ and $\mu_1$ and it satisfies \begin{equation} \label{eq:ot_geodesic} \mu_t \in \, \, \mathrm{argmin}_\rho \,\, (1-t) W_2(\mu_0,\rho)^2 + t W_2(\mu_1,\rho)^2 .
\end{equation} This interpolation, often called Wasserstein barycenter in the literature, can be easily extended to more than two probability distributions, as recalled in the next paragraphs. \subsection{Multi-marginal formulation and barycenters} For $J\geq2$, for a set of weights $\lambda = (\lambda_0,\dots,\lambda_{J-1})\in (\R_+)^J$ such that $\lambda \1_{J}= \lambda_0+\ldots + \lambda_{J-1} =1$ and for $x = (x_0,\dots,x_{J-1})\in (\R^d)^J$, we denote by \begin{equation} B(x) = \sum_{i=0}^{J-1}\lambda_i x_i = \mathrm{argmin}_{y\in\R^d} \sum_{i=0}^{J-1}\lambda_i \| x_i -y\|^2 \label{eq:barycenter} \end{equation} the barycenter of the $x_i$ with weights $\lambda_i$. For $J$ probability distributions $\mu_0,\mu_1,\dots,\mu_{J-1}$ on $\R^d$, we say that $\nu^*$ is the barycenter of the $\mu_j$ with weights $\lambda_j$ if $\nu^*$ is a solution of \begin{equation} \label{eq:W2_bary} \inf_{\nu \in \PP_2(\R^d)} \sum_{j=0}^{J-1} \lambda_j W_2^2(\mu_j,\nu). \end{equation} Existence and uniqueness of barycenters for $W_2$ have been studied in depth by Agueh and Carlier in~\cite{agueh2011barycenters}. They show in particular that if one of the $\mu_j$ has a density, this barycenter is unique. They also show that the solutions of the barycenter problem are related to the solutions of the multi-marginal transport problem (studied by Gangbo and \'Swi\'ech in~\cite{gangbo1998optimal}) {\small \begin{eqnarray} mmW_2^2(\mu_0,\dots,\mu_{J-1}) & = & \inf_{\gamma\in\Pi(\mu_0,\mu_1,\dots,\mu_{J-1})} \int_{\R^d\times \dots \times \R^d}\frac{1}{2} \sum_{i,j=0}^{J-1}\lambda_i\lambda_j \| y_i-y_j\|^2 d\gamma(y_0,y_1,\ldots,y_{J-1}) , \label{eq:W2_multimarge_definition} \end{eqnarray}} where $\Pi(\mu_0,\mu_1,\dots,\mu_{J-1})$ is the set of probability measures on $(\R^d)^{J}$ having $\mu_0,\mu_1,\dots,\mu_{J-1}$ as marginals.
More precisely, they show that if~\eqref{eq:W2_multimarge_definition} has a solution $\gamma^*$, then $\nu^* = B\# \gamma^*$ is a solution of~\eqref{eq:W2_bary}, and the infima of~\eqref{eq:W2_multimarge_definition} and~\eqref{eq:W2_bary} are equal. \subsection{Optimal transport between Gaussian distributions} Computing optimal transport plans between probability distributions is usually difficult. In some specific cases, an explicit solution is known. For instance, in the one dimensional ($d=1$) case, when the cost $c$ is a convex function of the Euclidean distance on the line, the optimal plan consists of a monotone rearrangement of the distribution $\mu_0$ into the distribution $\mu_1$ (the mass is transported monotonically from left to right, see for instance Ch.2, Section 2.2 of \cite{villanitopics} for all the details). Another case where the solution is known for a quadratic cost is the Gaussian case in any dimension $d\geq 1$. \subsubsection{Distance $W_2$ between Gaussian distributions} If $\mu_i= \mathcal{N}(m_i,\Sigma_i)$, $i\in\{0,1\}$, are two Gaussian distributions on $\R^d$, the $2$-Wasserstein distance $W_2$ between $\mu_0$ and $\mu_1$ has a closed-form expression, which can be written \begin{equation} \label{eq:wasserstein_gaussian} W_2^2(\mu_0,\mu_1) = \|m_0-m_1\|^2 + \mathrm{tr}\left( \Sigma_0 + \Sigma_1 - 2 \left(\Sigma_0^{\frac 1 2}\Sigma_1\Sigma_0^{\frac 1 2}\right)^{{\frac 1 2}} \right), \end{equation} where, for every symmetric positive semi-definite matrix $M$, the matrix $M^{\frac 1 2}$ is its unique positive semi-definite square root.
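As an illustration, the closed-form expression above, together with the affine optimal map between Gaussians recalled just below, can be implemented in a few lines. The following sketch assumes NumPy and SciPy are available; the function names are ours.

```python
import numpy as np
from scipy.linalg import sqrtm

def gaussian_w2_squared(m0, S0, m1, S1):
    # Closed-form squared W2 distance between N(m0, S0) and N(m1, S1)
    S0h = sqrtm(S0)
    cross = sqrtm(S0h @ S1 @ S0h)
    return float(np.sum((m0 - m1) ** 2) + np.trace(S0 + S1 - 2 * cross).real)

def gaussian_w2_matrix(m0, S0, m1, S1):
    # Matrix A of the affine optimal map T(x) = m1 + A (x - m0),
    # valid when S0 is non-singular
    S0h = sqrtm(S0)
    S0h_inv = np.linalg.inv(S0h)
    return S0h_inv @ sqrtm(S0h @ S1 @ S0h) @ S0h_inv

m0, S0 = np.zeros(2), np.eye(2)
m1, S1 = np.array([3.0, 0.0]), np.diag([4.0, 1.0])
d2 = gaussian_w2_squared(m0, S0, m1, S1)  # here ||m0-m1||^2 + (2-1)^2 = 10
A = gaussian_w2_matrix(m0, S0, m1, S1)
# A transports the covariance S0 onto S1: A S0 A^T = S1
```

Since $\Sigma_0$ is the identity in this toy example, the matrix $A$ reduces to $\Sigma_1^{1/2}$.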
If $\Sigma_0$ is non-singular, then the optimal map $T$ between $\mu_0$ and $\mu_1$ turns out to be affine and is given by \begin{equation} \forall x\in\R^d , \quad T(x) = m_1+\Sigma_0^{-\frac 1 2}\left(\Sigma_0^{\frac 1 2}\Sigma_1\Sigma_0^{\frac 1 2}\right)^{{\frac 1 2}}\Sigma_0^{-\frac 1 2}(x-m_0) = m_1 + \Sigma_0^{-1}(\Sigma_0\Sigma_1)^{\frac 1 2} (x-m_0) , \label{eqT:eq} \end{equation} and the optimal plan $\gamma$ is then a Gaussian distribution on $\R^d\times\R^d=\R^{2d}$ that is degenerate since it is supported by the affine subspace $\{y=T(x)\}$. These results have been known since~\cite{dowson1982frechet}. Moreover, if $\Sigma_0$ and $\Sigma_1$ are non-degenerate, the geodesic path $(\mu_t)$, $t\in(0,1)$, between $\mu_0$ and $\mu_1$ is given by $\mu_t = \NN(m_t,\Sigma_t)$ with $m_t = (1-t)m_0 +tm_1$ and \[\Sigma_t = ((1-t) \Id +tC)\Sigma_0 ((1-t) \Id+tC),\] with $\Id$ the $d\times d$ identity matrix and $C = \Sigma_1^{\frac 1 2}\left(\Sigma_1^{\frac 1 2}\Sigma_0\Sigma_1^{\frac 1 2}\right)^{{-\frac 1 2}}\Sigma_1^{\frac 1 2}$. This property still holds if the covariance matrices are not invertible, by replacing the inverse by the Moore-Penrose pseudo-inverse matrix, see Proposition 6.1 in~\cite{xia2014synthesizing}. The optimal map $T$ does not generalize to this case, since the optimal plan is usually not supported by the graph of a function. \subsubsection{$W_2$-Barycenters in the Gaussian case} \label{sec:multimarginal_gaussian} For $J\geq 2$, let $\lambda = (\lambda_0,\dots,\lambda_{J-1})\in (\R_+)^J$ be a set of positive weights summing to $1$ and let $\mu_0,\mu_1,\dots,\mu_{J-1}$ be $J$ Gaussian probability distributions on $\R^d$. For $j=0,\dots,J-1$, we denote by $m_j$ and $\Sigma_j$ the expectation and the covariance matrix of $\mu_j$.
Theorem 2.2 in~\cite{ruschendorf2002n} tells us that if the covariances $\Sigma_{j}$ are all positive definite, then the solution of the multi-marginal problem~\eqref{eq:W2_multimarge_definition} for the Gaussian distributions $\mu_0,\mu_1,\dots,\mu_{J-1}$ can be written \begin{equation} \label{eq:gamma_optimal_multimarge} \gamma^*(x_0,\dots,x_{J-1}) = g_{m_0,\Sigma_0}(x_0)\, \delta_{(x_1,\dots,x_{J-1})=(S_1S_0^{-1}x_0,\dots,S_{J-1}S_0^{-1}x_0)} \end{equation} where $S_j = \Sigma_{j}^{1/2}\left(\Sigma_{j}^{1/2}\Sigma_*\Sigma_{j}^{1/2}\right)^{-1/2}\Sigma_{j}^{1/2}$ with $\Sigma_*$ a solution of the fixed-point problem \begin{equation} \label{eq:fixed} \sum_{j=0}^{J-1} \lambda_j\left(\Sigma_*^{1/2} \Sigma_{j}\Sigma_*^{1/2}\right)^{1/2} = \Sigma_*. \end{equation} The barycenter $\nu^*$ of all the $\mu_j$ with weights $\lambda_j$ is the distribution $\NN(m_*,\Sigma_*)$, with $m_* = \sum_{j=0}^{J-1} \lambda_j m_j$. Equation~\eqref{eq:fixed} provides a natural iterative algorithm (see \cite{alvarez2016fixed}) to compute the fixed point $\Sigma_*$ from the set of covariances $\Sigma_j$, $j\in \{0,\dots,J-1\}$. \section{Some properties of Gaussian Mixture Models} \label{sec:GMM_prop} The goal of this paper is to investigate how the optimization problem~\eqref{eq:wasserstein_definition} is transformed when the probability distributions $\mu_0$, $\mu_1$ are finite Gaussian mixture models and the transport plan $\gamma$ is forced to be a Gaussian mixture model. This will be the aim of Section~\ref{sec:MW2}. Before that, we first need to recall a few basic properties of these mixture models, especially a density property and an identifiability property. In the following, for an integer $N\geq 1$, we define the simplex $$ \Gamma_{N} = \{\pi\in \R^{N}_+\;;\;\pi\1_N = \sum_{k=1}^{N} \pi_k = 1\} .$$ \begin{defi} Let $K\geq 1$ be an integer.
A (finite) Gaussian mixture model of size $K$ on $\R^d$ is a probability distribution $\mu$ on $\R^d$ that can be written \begin{equation} \label{eq:GMM} \mu =\sum_{k=1}^{K} \pi_k \mu_k \;\text{ where }\; \mu_k = \mathcal N (m_k, \Sigma_k) \text{ and } \pi \in \Gamma_{K}. \end{equation} \end{defi} We write $GMM_d(K)$ for the subset of $\mathcal{P}(\R^d)$ made of probability measures on $\R^d$ which can be written as Gaussian mixtures with at most $K$ components (such mixtures are obviously also in $\mathcal{P}_p(\R^d)$ for any $p\geq 1$). For $K < K'$, $GMM_d(K)\subset GMM_d(K').$ The set of all finite Gaussian mixture distributions is written $$GMM_d(\infty) = \cup_{K\geq 1} GMM_d(K) .$$ \subsection{Two properties of GMM} The following lemma states that any measure in $\mathcal{P}_p(\R^d)$ can be approximated to any precision for the distance $W_p$ by a finite convex combination of Dirac masses. This classical result will be useful in the rest of the paper. \begin{lemma} \label{prop:Dirac_dense_Wp} The set \[ \left\{ \sum_{k=1}^N \pi_k \delta_{y_k}\;;\; N\in\N,\;(y_k)_k \in (\R^{d})^N, \; (\pi_k)_k\in \Gamma_N \right\}\] is dense in $\mathcal{P}_p(\R^d)$ for the metric $W_p$, for any $p\geq 1$. \end{lemma} For the sake of completeness, we provide a proof in the Appendix, adapted from the proof of Theorem 6.18 in~\cite{villani2008optimal}. Since Dirac masses can be seen as degenerate Gaussian distributions, a direct consequence of Lemma~\ref{prop:Dirac_dense_Wp} is the following proposition. \begin{prop} \label{prop:density_gmm} $GMM_d(\infty)$ is dense in $\mathcal{P}_p(\R^d)$ for the metric $W_p$. \end{prop} Another important property will be necessary, related to the identifiability of Gaussian mixture models.
It is clear that such models are not \textit{stricto sensu} identifiable, since reordering the indexes of a mixture changes its parametrization without changing the underlying probability distribution, or because a component with mass $1$ can be divided into two identical components with masses $\frac 1 2$, for example. However, if we write mixtures in a ``compact'' way (forbidding two components of the same mixture to be identical), identifiability holds, up to a reordering of the indexes. This property is recalled below. \begin{prop} The set of finite Gaussian mixtures is identifiable, in the sense that two mixtures $\mu_0 =\sum_{k=1}^{K_0} \pi_0^k \mu_0^k$ and $\mu_1= \sum_{k=1}^{K_1} \pi_1^k \mu_1^k$, written such that all $\{\mu_0^k\}_k$ (resp. all $\{\mu_1^j\}_j$) are pairwise distinct, are equal if and only if $K_0=K_1$ and we can reorder the indexes such that for all $k$, $\pi_0^k = \pi_1^k$, $m_0^k = m_1^k$ and $\Sigma_0^k = \Sigma_1^k$. \label{lemma:unicity_gaussian} \end{prop} This result is also classical and the proof is provided in the Appendix. \subsection{Optimal transport and Wasserstein barycenters between Gaussian Mixture Models} We are now in a position to investigate optimal transport between Gaussian mixture models (GMM). A first important remark is that given two Gaussian mixtures $\mu_0$ and $\mu_1$ on $\R^d$, optimal transport plans $\gamma$ between $\mu_0$ and $\mu_1$ are usually not GMM. \begin{prop} Let $\mu_0 \in GMM_d(K_0)$ and $\mu_1 \in GMM_d(K_1)$ be two Gaussian mixtures such that $\mu_1$ cannot be written $T\#\mu_0$ with $T$ affine. Assume also that $\mu_0$ is absolutely continuous with respect to the Lebesgue measure. Let $\gamma \in \Pi(\mu_0,\mu_1)$ be an optimal transport plan between $\mu_0$ and $\mu_1$. Then $\gamma$ does not belong to $GMM_{2d}(\infty)$.
\end{prop} \begin{proof} Since $\mu_0$ is absolutely continuous with respect to the Lebesgue measure, we know that the optimal transport plan is unique and is of the form $\gamma = (\mathrm{Id},T)\#\mu_0$ for a measurable map $T:\R^d \rightarrow \R^d$ that satisfies $T\#\mu_0=\mu_1$. Thus, if $\gamma$ belongs to $GMM_{2d}(\infty)$, all of its components must be degenerate Gaussian distributions $\NN(m_k,\Sigma_k)$ such that \[\cup_{k} \left(m_k+\mathrm{Span}(\Sigma_k)\right) = \mathrm{graph}(T).\] It follows that $T$ must be affine on $\R^d$, which contradicts the hypotheses of the proposition. \end{proof} When $\mu_0$ is not absolutely continuous with respect to the Lebesgue measure (which means that one of its components is degenerate), we cannot write $\gamma$ in the form~\eqref{eq:transport_map}, but we conjecture that the previous result usually still holds. A notable exception is the case where all Gaussian components of $\mu_0$ and $\mu_1$ are Dirac masses on $\R^d$, in which case $\gamma$ is also a GMM composed of Dirac masses on $\R^{2d}$. We conjecture that since optimal plans $\gamma$ between two GMM are usually not GMM, the barycenters $(\mathrm{P}_t)\#\gamma$ between $\mu_0$ and $\mu_1$ are usually not GMM either (with the exception of $t=0,1$). Consider the one-dimensional example of $\mu_0 = \mathcal{N}(0,1)$ and $\mu_1 = \frac 1 2 (\delta_{-1}+\delta_1)$. Clearly, an optimal transport map between $\mu_0$ and $\mu_1$ is defined as $T(x) = \mathrm{sign}(x)$. For $t\in(0,1)$, if we denote by $\mu_t$ the barycenter between $\mu_0$ with weight $1-t$ and $\mu_1$ with weight $t$, then it is easy to show that $\mu_t$ has a density $$f_t(x) = \frac{1}{1-t} \left(g\left( \frac{x+t}{1-t}\right)\mathbf{1}_{x<-t}+ g\left( \frac{x-t}{1-t}\right)\mathbf{1}_{x>t} \right),$$ where $g$ is the density of $\NN(0,1)$. The density $f_t$ is equal to $0$ on the interval $(-t,t)$ and therefore cannot be the density of a GMM.
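This example is easy to check numerically: the density $f_t$ above integrates to $1$ but vanishes on the whole interval $(-t,t)$. A minimal sketch, assuming NumPy and SciPy:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def f_t(x, t):
    # Density of the W2 barycenter between mu_0 = N(0,1) (weight 1-t)
    # and mu_1 = (delta_{-1} + delta_1)/2 (weight t), for 0 < t < 1
    g = norm.pdf
    return (g((x + t) / (1 - t)) * (x < -t)
            + g((x - t) / (1 - t)) * (x > t)) / (1 - t)

t = 0.5
# Integrate on both sides of the discontinuities at -t and t
total_mass, _ = quad(lambda x: f_t(x, t), -10.0, 10.0, points=[-t, t])
# total_mass is numerically 1, while f_t is identically 0 on (-t, t)
```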
\section{$MW_2$: a distance between Gaussian Mixture Models} \label{sec:MW2} In this section, we define a Wasserstein-type distance between Gaussian mixtures ensuring that barycenters between Gaussian mixtures remain Gaussian mixtures. To this end, we restrict the set of admissible transport plans to Gaussian mixtures and show that the problem is well defined. Thanks to the identifiability results recalled in the previous section, we will show that the corresponding optimization problem boils down to a very simple discrete formulation. \subsection{Definition of $MW_2$} \begin{defi} \label{def:MW2} Let $\mu_0$ and $\mu_1$ be two Gaussian mixtures. We define \begin{equation} \label{eq:MW2} MW_2^2(\mu_0,\mu_1) := \inf_{\gamma \in \Pi(\mu_0,\mu_1) \cap GMM_{2d}(\infty)} \int_{\R^d\times\R^d} \|y_0-y_1\|^2d\gamma(y_0,y_1). \end{equation} \end{defi} First, observe that the problem is well defined since $\Pi(\mu_0,\mu_1) \cap GMM_{2d}(\infty)$ contains at least the product measure $\mu_0 \otimes \mu_1$. Notice also that from the definition we directly have that $$MW_2(\mu_0,\mu_1) \geq W_2(\mu_0,\mu_1).$$ \subsection{An equivalent discrete formulation} Now, we can show that this optimization problem has a very simple discrete formulation. For $\pi_0 \in \Gamma_{K_0}$ and $\pi_1 \in \Gamma_{K_1}$, we denote by $\Pi(\pi_0,\pi_1)$ the subset of the simplex $\Gamma_{K_0\times K_1}$ with marginals $\pi_0$ and $\pi_1$, {\em i.e.} \begin{align} \label{eq:Pi} \Pi(\pi_0,\pi_1) &= \{w \in \MM_{K_0,K_1}(\R^+);\;\;w\1_{K_1} = \pi_0;\;\;w^t\mathbf{1}_{K_0} = \pi_1\} \\ &= \{w \in \MM_{K_0,K_1}(\R^+);\;\forall k,\sum_j w_{kj} = \pi_0^k \text{ and }\forall j,\;\sum_k w_{kj} = \pi_1^j\;\}. \end{align} \begin{prop} \label{prop:MW2} Let $\mu_0 =\sum_{k=1}^{K_0} \pi_0^k \mu_0^k$ and $\mu_1= \sum_{k=1}^{K_1} \pi_1^k \mu_1^k$ be two Gaussian mixtures, then \begin{equation} \label{eq:DW_conj} MW_2^2(\mu_0,\mu_1) = \min_{w \in \Pi(\pi_0,\pi_1)} \sum_{k,l}w_{kl} W_2^2(\mu_0^k,\mu_1^l).
\end{equation} Moreover, if $w^*$ is a minimizer of~\eqref{eq:DW_conj}, and if $T_{k,l}$ is the $W_2$-optimal map between $\mu_0^k$ and $\mu_1^l$, then $\gamma^*$ defined as $$ \gamma^* (x,y) = \sum_{k,l} w_{k,l}^\ast \, g_{m_0^k,\Sigma_0^k}(x)\, \delta_{y=T_{k,l}(x)}$$ is a minimizer of~\eqref{eq:MW2}. \end{prop} \begin{proof} First, let $w^*$ be a solution of the discrete linear program \begin{equation} \inf_{w \in \Pi(\pi_0,\pi_1)} \sum_{k,l}w_{kl} W_2^2(\mu_0^k,\mu_1^l). \end{equation} For each pair $(k,l)$, let \[\gamma_{kl} = \mathrm{argmin}_{\gamma \in \Pi(\mu_0^k,\mu_1^l)} \int_{\R^d\times\R^d} \|y_0-y_1\|^2d\gamma(y_0,y_1)\] and \[\gamma^* = \sum_{k,l} w^*_{kl}\gamma_{kl}.\] Clearly, $\gamma^* \in \Pi(\mu_0,\mu_1) \cap GMM_{2d}(K_0K_1)$. It follows that \begin{eqnarray*} \sum_{k,l} w^*_{kl}W_2^2(\mu_0^k,\mu_1^l) &=& \int_{\R^d\times\R^d} \|y_0-y_1\|^2d\gamma^*(y_0,y_1)\\ &\geq& \min_{\gamma \in \Pi(\mu_0,\mu_1) \cap GMM_{2d}(K_0K_1)} \int_{\R^d\times\R^d} \|y_0-y_1\|^2d\gamma(y_0,y_1)\\ &\geq& \min_{\gamma \in \Pi(\mu_0,\mu_1) \cap GMM_{2d}(\infty)} \int_{\R^d\times\R^d} \|y_0-y_1\|^2d\gamma(y_0,y_1), \end{eqnarray*} because $GMM_{2d}(K_0K_1) \subset GMM_{2d}(\infty)$. Now, let $\gamma$ be any element of $\Pi(\mu_0,\mu_1) \cap GMM_{2d}(\infty)$. Since $\gamma$ belongs to $GMM_{2d}(\infty)$, there exists an integer $K$ such that $\gamma = \sum_{j=1}^{K}w_j \gamma_j$, where the $\gamma_j$ are Gaussian measures on $\R^{2d}$ and $w\in\Gamma_K$. Since $\mathrm{P}_0{\#} \gamma = \mu_0$, it follows that \[\sum_{j=1}^{K}w_j \mathrm{P}_0{\#} \gamma_j = \sum_{k=1}^{K_0} \pi_0^k\mu_0^k.\] Thanks to the identifiability property recalled in the previous section, we know that these two Gaussian mixtures must have the same components, so for each $j$ in $\{1,\dots, K\}$, there is $1\leq k\leq K_0$ such that $\mathrm{P}_0{\#} \gamma_j = \mu_0^k$. In the same way, there is $1\leq l\leq K_1$ such that $\mathrm{P}_1{\#} \gamma_j = \mu_1^l$. It follows that $\gamma_j$ belongs to $\Pi(\mu_0^k,\mu_1^l)$.
We conclude that the mixture $\gamma$ can be written as a mixture of Gaussian components $\gamma_{kl} \in \Pi(\mu_0^k,\mu_1^l)$, {\em i.e.} $\gamma = \sum_{k=1}^{K_0}\sum_{l=1}^{K_1}w_{kl} \gamma_{kl}$. Since $\mathrm{P}_0{\#} \gamma = \mu_0$ and $\mathrm{P}_1{\#} \gamma = \mu_1$, we know that $w \in \Pi(\pi_0,\pi_1)$. As a consequence, \begin{eqnarray*} \int_{\R^d\times\R^d} \|y_0-y_1\|^2d\gamma(y_0,y_1) &\geq& \sum_{k=1}^{K_0}\sum_{l=1}^{K_1}w_{kl} W_2^2(\mu_0^k,\mu_1^l)\geq \sum_{k=1}^{K_0}\sum_{l=1}^{K_1}w^*_{kl} W_2^2(\mu_0^k,\mu_1^l). \end{eqnarray*} This inequality holds for any $\gamma$ in $\Pi(\mu_0,\mu_1) \cap GMM_{2d}(\infty)$, which concludes the proof. \end{proof} It turns out that the discrete form~\eqref{eq:DW_conj}, which can be seen as an aggregation of simple Wasserstein distances between Gaussians, has been recently proposed as an ingenious alternative to $W_2$ in the machine learning literature, both in \cite{chen2016distance,chen2019aggregated} and \cite{chen2017optimal,chen2019optimal}. Observe that the point of view followed in this paper is quite different from these works, since $MW_2$ is defined in a completely continuous setting as an optimal transport between GMMs with a restriction on couplings, following the same kind of approach as in~\cite{bionnadal2019}. The fact that this restriction leads to an explicit discrete formula, the same as the one proposed independently in \cite{chen2016distance,chen2019aggregated} and \cite{chen2017optimal,chen2019optimal}, is quite striking. Observe also that thanks to the ``identifiability property'' of GMMs, this continuous formulation~\eqref{eq:MW2} is obviously unambiguous, in the sense that the value of the minimum is the same whatever the parametrization of the Gaussian mixtures $\mu_0$ and $\mu_1$. This was not obvious from the discrete versions.
We will see in the following sections how this continuous formulation can be extended to multi-marginal and barycenter formulations, and how it can be generalized or used in the case of more general distributions. Notice that neither the definition nor the proof uses the fact that the ground cost is quadratic. Definition~\ref{def:MW2} can thus be generalized to other cost functions $c:\R^{2d}\to \R$. The reason why we focus on the quadratic cost is that optimal transport plans between Gaussian measures for $W_2$ can be computed explicitly. It follows from the equivalence between the continuous and discrete forms of $MW_2$ that the solution of~\eqref{eq:MW2} is very easy to compute in practice. Another consequence of this equivalence is that there exists at least one optimal plan $\gamma^*$ for~\eqref{eq:MW2} containing at most $K_0+K_1-1$ Gaussian components. \begin{Corollary} Let $\mu_0 =\sum_{k=1}^{K_0} \pi_0^k \mu_0^k$ and $\mu_1= \sum_{k=1}^{K_1} \pi_1^k \mu_1^k$ be two Gaussian mixtures on $\R^d$, then the infimum in \eqref{eq:MW2} is attained for some $\gamma^* \in \Pi(\mu_0,\mu_1) \cap GMM_{2d}(K_0+K_1-1)$. \end{Corollary} \begin{proof} This follows directly from the previous proof, combined with the fact that the linear program~\eqref{eq:DW_conj} always admits an optimal solution $w^*$ with at most $K_0+K_1-1$ non-zero entries (see~\cite{peyre2019computational}). \end{proof} \subsection{An example in one dimension} \label{sec:example1} In order to illustrate the behavior of the optimal plans for $MW_2$, we focus here on a very simple example in one dimension, where $\mu_0$ and $\mu_1$ are the following mixtures of two Gaussian components \[\mu_0 = 0.3 \NN(0.2,0.03^2)+ 0.7\NN(0.4,0.04^2),\] \[\mu_1 = 0.6 \NN(0.6,0.06^2)+ 0.4\NN(0.8,0.07^2).\] Figure~\ref{1D_example_plans} shows the optimal transport plans between $\mu_0$ (in blue) and $\mu_1$ (in red), both for the Wasserstein distance $W_2$ and for $MW_2$.
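For this example, the discrete formulation~\eqref{eq:DW_conj} reduces to a small transportation linear program, since in one dimension $W_2^2(\NN(m,s^2),\NN(m',s'^2)) = (m-m')^2 + (s-s')^2$. The following sketch, assuming NumPy and SciPy (the function and variable names are ours), computes $MW_2^2(\mu_0,\mu_1)$ and the optimal weights $w^*$ with a generic LP solver.

```python
import numpy as np
from scipy.optimize import linprog

def mw2_squared_1d(pi0, comp0, pi1, comp1):
    # Discrete formulation of MW2^2: transportation LP with cost
    # C[k,l] = W2^2(N(m_k, s_k^2), N(m'_l, s'_l^2)) = (m_k-m'_l)^2 + (s_k-s'_l)^2
    K0, K1 = len(pi0), len(pi1)
    C = np.array([[(m0 - m1) ** 2 + (s0 - s1) ** 2 for (m1, s1) in comp1]
                  for (m0, s0) in comp0])
    A_eq = np.zeros((K0 + K1, K0 * K1))
    for k in range(K0):
        A_eq[k, k * K1:(k + 1) * K1] = 1.0   # row sums of w equal pi0
    for l in range(K1):
        A_eq[K0 + l, l::K1] = 1.0            # column sums of w equal pi1
    res = linprog(C.ravel(), A_eq=A_eq,
                  b_eq=np.concatenate([pi0, pi1]), bounds=(0, None))
    return res.fun, res.x.reshape(K0, K1)

# The two mixtures of this example, as (mean, standard deviation) pairs
pi0, comp0 = np.array([0.3, 0.7]), [(0.2, 0.03), (0.4, 0.04)]
pi1, comp1 = np.array([0.6, 0.4]), [(0.6, 0.06), (0.8, 0.07)]
mw2_sq, w_star = mw2_squared_1d(pi0, comp0, pi1, comp1)
```

The optimal $w^*$ found here puts mass on three pairs of components, consistently with the three degenerate Gaussian pieces of the optimal plan described below.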
As we can observe, the optimal transport plan for $MW_2$ (a probability measure on $\R\times \R$) is a mixture of three degenerate Gaussian measures supported by one-dimensional lines. \begin{figure} \caption{Transport plans between two mixtures of Gaussians $\mu_0$ (in blue) and $\mu_1$ (in red). Left, optimal transport plan for $W_2$. Right, optimal transport plan for $MW_2$. (The values on the x-axes have been multiplied by $100$). These examples have been computed using the Python Optimal Transport (POT) library \cite{flamary2017pot}.\label{1D_example_plans}} \end{figure} \subsection{Metric properties of $MW_2$ and displacement interpolation} \subsubsection{Metric properties of $MW_2$} \begin{prop} \label{prop:metric} $MW_2$ defines a metric on $GMM_d(\infty)$ and the space $GMM_{d}(\infty)$ equipped with the distance $MW_2$ is a geodesic space. \end{prop} This proposition can be proved very easily by making use of the discrete formulation~\eqref{eq:DW_conj} of the distance (see for instance~\cite{chen2019optimal}). For the sake of completeness, we provide in the following a proof of the proposition using only the continuous formulation of $MW_2$. \begin{proof} First, observe that $MW_2$ is obviously symmetric and positive. It is also clear that for any Gaussian mixture $\mu$, $MW_2(\mu,\mu)=0$. Conversely, if $MW_2(\mu_0,\mu_1)=0$, then $W_2(\mu_0,\mu_1) = 0$ and thus $\mu_0 =\mu_1$ since $W_2$ is a distance. It remains to show that $MW_2$ satisfies the triangle inequality. This is a classical consequence of the gluing lemma, but we must be careful to check that the constructed measure remains a Gaussian mixture. Let $\mu_0,\mu_1, \mu_2$ be three Gaussian mixtures on $\R^d$. Let $\gamma_{01}$ and $\gamma_{12}$ be optimal plans respectively for $(\mu_0,\mu_1)$ and $(\mu_1,\mu_2)$ for the problem $MW_2$ (which means that $\gamma_{01}$ and $\gamma_{12}$ are both GMM on $\R^{2d}$).
The classical gluing lemma consists of disintegrating $\gamma_{01}$ and $\gamma_{12}$ into \[d\gamma_{01}(y_0,y_1) = d\gamma_{01}(y_0|y_1)d\mu_1(y_1) \quad \text{and} \quad d\gamma_{12}(y_1,y_2) = d\gamma_{12}(y_2|y_1)d\mu_1(y_1),\] and of defining \[d\gamma_{012}(y_0,y_1,y_2) = d\gamma_{01}(y_0|y_1)d\mu_1(y_1)d\gamma_{12}(y_2|y_1),\] which amounts to assuming conditional independence given the value of $y_1$. Since $\gamma_{01}$ and $\gamma_{12}$ are Gaussian mixtures on $\R^{2d}$, the conditional distributions $d\gamma_{01}(y_0|y_1)$ and $d\gamma_{12}(y_2|y_1)$ are also Gaussian mixtures for all $y_1$ in the support of $\mu_1$ (recalling that $\mu_1$ is the marginal on $y_1$ of both $\gamma_{01}$ and $\gamma_{12}$). If we define a distribution $\gamma_{02}$ by integrating $\gamma_{012}$ over the variable $y_1$, {\it i.e.} \[d\gamma_{02}(y_0,y_2) = \int_{y_1 \in \R^d} d\gamma_{012}(y_0,y_1,y_2) = \int_{y_1 \in \mathrm{Supp}(\mu_1)}d\gamma_{01}(y_0|y_1)d\mu_1(y_1)d\gamma_{12}(y_2|y_1) \] then $\gamma_{02}$ is obviously also a Gaussian mixture on $\R^{2d}$ with marginals $\mu_0$ and $\mu_2$. The rest of the proof is classical. Indeed, we can write \begin{align*} MW_2^2(\mu_0,\mu_2) &\leq \int_{\R^d\times \R^d} \|y_0-y_2\|^2d\gamma_{02}(y_0,y_2)= \int_{\R^d\times \R^d\times\R^d} \|y_0-y_2\|^2d\gamma_{012}(y_0,y_1,y_2). \end{align*} Writing $\|y_0 - y_2\|^2 = \|y_0-y_1\|^2 + \|y_1-y_2\|^2 + 2\langle y_0-y_1,y_1-y_2\rangle $ (with $\langle \;,\;\rangle $ the Euclidean scalar product on $\R^d$), and using the Cauchy-Schwarz inequality, it follows that \begin{align*} MW_2^2(\mu_0,\mu_2) & \le\left(\sqrt{\int_{\R^{2d}} \|y_0-y_1\|^2 d\gamma_{01}(y_0,y_1)} +\sqrt{ \int_{\R^{2d}} \|y_1-y_2\|^2 d\gamma_{12}(y_1,y_2)}\right)^2 . \end{align*} The triangle inequality follows by taking for $\gamma_{01}$ (resp. $\gamma_{12}$) the optimal plan for $MW_2$ between $\mu_0$ and $\mu_1$ (resp. $\mu_1$ and $\mu_2$).
Now, let us show that $GMM_{d}(\infty)$ equipped with the distance $MW_2$ is a geodesic space. For a path $\rho=(\rho_t)_{t\in[0,1]}$ in $GMM_d(\infty)$ (meaning that each $\rho_t$ is a GMM on $\R^d$), we can define its length for $MW_2$ by $$ \mathrm{Len}_{MW_2}(\rho) = \mathrm{Sup}_{N; 0=t_0\leq t_1 \leq \dots \leq t_N=1} \quad \sum_{i=1}^N MW_2(\rho_{t_{i-1}}, \rho_{t_i}) \in [0,+\infty] .$$ Let $\mu_0=\sum_k \pi_0^k \mu_0^k$ and $\mu_1=\sum_l \pi_1^l \mu_1^l$ be two GMM. Since $MW_2$ satisfies the triangle inequality, we always have that $\mathrm{Len}_{MW_2}(\rho) \geq MW_2(\mu_0,\mu_1)$ for all paths $\rho$ such that $\rho_0=\mu_0$ and $\rho_1=\mu_1$. To prove that $(GMM_d(\infty),MW_2)$ is a geodesic space we just have to exhibit a path $\rho$ connecting $\mu_0$ to $\mu_1$ and such that its length is equal to $MW_2(\mu_0,\mu_1)$. We denote by $\gamma^{\ast}$ an optimal transport plan for $MW_2$ between $\mu_0$ and $\mu_1$. For $t\in(0,1)$ we can define $$\mu_t=(\rmP_t)\#\gamma^{\ast}.$$ Let $0\leq t<s\leq 1$ and define $\gamma_{t,s}^{\ast}=(\rmP_t,\rmP_s)\#\gamma^{\ast}$. Then $\gamma_{t,s}^{\ast}\in\Pi(\mu_t,\mu_s)\cap GMM_{2d}(\infty)$ and therefore \begin{align*} MW_2(\mu_t,\mu_s)^2 & = \min_{\widetilde{\gamma}\in\Pi(\mu_t,\mu_s)\cap GMM_{2d}(\infty)} \iint \| y_0-y_1\|^2 \, d\widetilde{\gamma}(y_0,y_1) \\ & \leq \iint \| y_0-y_1\|^2 \, d\gamma_{t,s}^{\ast}(y_0,y_1) = \iint \| \rmP_t(y_0,y_1) - \rmP_s(y_0,y_1)\|^2 \, d\gamma^{\ast}(y_0,y_1) \\ & = \iint \| (1-t)y_0 + ty_1-(1-s)y_0-sy_1\|^2 \, d\gamma^{\ast}(y_0,y_1) \\ &= (s-t)^2 MW_2(\mu_0,\mu_1)^2 . \end{align*} Thus we have that $MW_2(\mu_t,\mu_s)\leq (s-t) MW_2(\mu_0,\mu_1)$. Now, by the triangle inequality, \begin{align*} MW_2(\mu_0,\mu_1) &\leq MW_2(\mu_0,\mu_t) +MW_2(\mu_t,\mu_s)+MW_2(\mu_s,\mu_1) \\ &\leq (t+s-t+1-s) MW_2(\mu_0,\mu_1). \end{align*} Therefore all inequalities are equalities, and $MW_2(\mu_t,\mu_s)= (s-t) MW_2(\mu_0,\mu_1)$ for all $0\leq t\leq s\leq 1$.
This implies that the $MW_2$ length of the path $(\mu_t)_t$ is equal to $MW_2(\mu_0,\mu_1)$. This allows us to conclude that $(GMM_d(\infty),MW_2)$ is a geodesic space, and we have also given the explicit expression of the geodesic. \end{proof} The following corollary is a direct consequence of the previous results. \begin{Corollary} The barycenters between $\mu_0=\sum_k \pi_0^k \mu_0^k$ and $\mu_1=\sum_l \pi_1^l \mu_1^l$ all belong to $GMM_d(\infty)$ and can be written explicitly as $$\forall t\in[0,1] , \quad \mu_t = \rmP_t\#\gamma^{\ast} = \sum_{k,l} w^\ast_{k,l} \mu_t^{k,l} ,$$ where $w^\ast$ is an optimal solution of~\eqref{eq:DW_conj}, and $\mu_t^{k,l}$ is the displacement interpolation between $\mu_0^k$ and $\mu_1^l$. When $\Sigma_0^k$ is non-singular, it is given by $$\mu_t^{k,l} = ((1-t)\mathrm{Id}+tT_{k,l})\# \mu_0^k ,$$ with $T_{k,l}$ the affine transport map between $\mu_0^k$ and $\mu_1^l$ given by Equation~\eqref{eqT:eq}. These barycenters have at most $K_0+K_1-1$ components. \end{Corollary} \subsubsection{1D and 2D barycenter examples} \paragraph{One dimensional case} \begin{figure} \caption{Barycenters $\mu_t$ between two Gaussian mixtures $\mu_0$ (green curve) and $\mu_1$ (red curve). Top: barycenters for the metric $W_2$. Bottom: barycenters for the metric $MW_2$. The barycenters are computed for $t=0.2,0.4,0.6,0.8$.\label{1D_barycenters}} \end{figure} Figure~\ref{1D_barycenters} shows barycenters $\mu_t$ for $t=0.2,0.4,0.6,0.8$ between the $\mu_0$ and $\mu_1$ defined in Section~\ref{sec:example1}, for both the metric $W_2$ and $MW_2$. Observe that the barycenters computed for $MW_2$ are a bit more regular (we know that they are mixtures of at most 3 Gaussian components) than those obtained for $W_2$. \paragraph{Two dimensional case} \begin{figure} \caption{Barycenters $\mu_t$ between two Gaussian mixtures $\mu_0$ (first column) and $\mu_1$ (last column). Top: barycenters for the metric $W_2$.
Bottom: barycenters for the metric $MW_2$.\label{2D_barycenters}} \end{figure} Figure~\ref{2D_barycenters} shows barycenters $\mu_t$ between the following two-dimensional mixtures \[\mu_0 = 0.3 \NN\left( \begin{pmatrix} 0.3\\0.6 \end{pmatrix} ,0.01 I_2\right)+ 0.7 \NN\left( \begin{pmatrix} 0.7\\0.7 \end{pmatrix} ,0.01 I_2\right),\] \[\mu_1 = 0.4 \NN\left( \begin{pmatrix} 0.5\\0.6 \end{pmatrix} ,0.01 I_2\right)+ 0.6 \NN\left( \begin{pmatrix} 0.4\\0.25 \end{pmatrix} ,0.01 I_2\right),\] where $I_2$ is the $2\times 2$ identity matrix. Notice that the $MW_2$ geodesic looks much more regular: each barycenter is a mixture of at most three Gaussians. \subsection{Comparison between $MW_2$ and $W_2$} \label{sec:comparison} \begin{prop} \label{prop:ineq} Let $\mu_0 \in GMM_d(K_0)$ and $\mu_1 \in GMM_d(K_1)$ be two Gaussian mixtures, written as in~\eqref{eq:GMM}. Then, \[W_2(\mu_0,\mu_1) \leq MW_2(\mu_0,\mu_1) \leq W_2(\mu_0,\mu_1) + \sum_{i=0,1}\left(2\sum_{k=1}^{K_i} \pi_i^k \mathrm{trace}(\Sigma_i^k)\right)^{\frac 1 2}.\] The left-hand inequality is an equality when, for instance, \begin{itemize} \item $\mu_0$ and $\mu_1$ are both composed of only one Gaussian component, \item $\mu_0$ and $\mu_1$ are finite linear combinations of Dirac masses, \item $\mu_1$ is obtained from $\mu_0$ by an affine transformation. \end{itemize} \end{prop} As we already noticed, the first inequality is obvious and follows from the definition of $MW_2$. Given the density property of $GMM_d(\infty)$ in $\PP_2(\R^d)$, it might not be completely intuitive that $MW_2$ can indeed be strictly larger than $W_2$. This follows from the fact that our optimization problem has the constraint $\gamma\in\Pi(\mu_0,\mu_1)$. Even if any measure $\gamma$ in $\Pi(\mu_0,\mu_1)$ can be approximated by a sequence of Gaussian mixtures, this sequence of Gaussian mixtures will generally not belong to $\Pi(\mu_0,\mu_1)$, which explains the difference between $MW_2$ and $W_2$.
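Both inequalities of the proposition can be checked numerically on the one-dimensional example of Section~\ref{sec:example1}, using the classical one-dimensional quantile formula $W_2^2(\mu_0,\mu_1)=\int_0^1 \left(F_0^{-1}(u)-F_1^{-1}(u)\right)^2du$. The following sketch assumes NumPy and SciPy; the grid size and helper names are ours.

```python
import numpy as np
from scipy.optimize import brentq, linprog
from scipy.stats import norm

# 1D mixtures of the example, as (mean, standard deviation) pairs
pi0, comp0 = np.array([0.3, 0.7]), [(0.2, 0.03), (0.4, 0.04)]
pi1, comp1 = np.array([0.6, 0.4]), [(0.6, 0.06), (0.8, 0.07)]

def quantile(pi, comp, u):
    # Inverse CDF of a 1D Gaussian mixture, computed by bisection
    cdf = lambda x: sum(p * norm.cdf(x, m, s) for p, (m, s) in zip(pi, comp))
    return brentq(lambda x: cdf(x) - u, -10.0, 10.0)

# W2^2 via the 1D quantile formula (midpoint rule on a grid)
us = (np.arange(2000) + 0.5) / 2000
w2_sq = np.mean([(quantile(pi0, comp0, u) - quantile(pi1, comp1, u)) ** 2
                 for u in us])

# MW2^2 via the discrete transportation LP (2x2 case, w = (w00, w01, w10, w11))
C = np.array([[(m0 - m1) ** 2 + (s0 - s1) ** 2 for (m1, s1) in comp1]
              for (m0, s0) in comp0])
A_eq = np.zeros((4, 4))
A_eq[0, :2] = A_eq[1, 2:] = 1.0      # row sums equal pi0
A_eq[2, 0::2] = A_eq[3, 1::2] = 1.0  # column sums equal pi1
mw2_sq = linprog(C.ravel(), A_eq=A_eq, b_eq=np.concatenate([pi0, pi1]),
                 bounds=(0, None)).fun

# Upper bound of the proposition, with trace(Sigma) = s^2 in one dimension
bound = np.sqrt(w2_sq) + sum(
    np.sqrt(2 * sum(p * s ** 2 for p, (m, s) in zip(pi, comp)))
    for pi, comp in [(pi0, comp0), (pi1, comp1)])
```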
In order to show that $MW_2$ is always bounded from above by $W_2$ plus a term depending on the traces of the covariance matrices of the two Gaussian mixtures, we start with a lemma which makes explicit the distance $MW_2$ between a Gaussian mixture and a mixture of Dirac distributions. \begin{lemma} \label{lemma1} Let $\mu_0 = \sum_{k=1}^{K_0} \pi_0^k \mu_0^k $ with $\mu_0^k = \NN(m_0^k,\Sigma_0^k)$ and $\mu_{1} = \sum_{k=1}^{K_1} \pi_1^k \delta_{m_1^k}$. Let $\tilde{\mu}_0 = \sum_{k=1}^{K_0} \pi_0^k \delta_{m_0^k}$ ($\tilde{\mu}_0$ only retains the means of $\mu_0$). Then, \[MW_2^2(\mu_0,\mu_1) = W_2^2( \tilde{\mu}_0,\mu_1) + \sum_{k=1}^{K_0} \pi_0^k\mathrm{trace}(\Sigma_0^k).\] \end{lemma} \begin{proof} \begin{eqnarray*} MW_2^2(\mu_0,\mu_1) &=& \inf_{w \in \Pi(\pi_0,\pi_1)} \sum_{k,l}w_{kl} W_2^2(\mu_0^k,\delta_{m_1^l})= \inf_{w \in \Pi(\pi_0,\pi_1)} \sum_{k,l}w_{kl} \left(\|m_1^l - m_0^k\|^2 + \mathrm{trace}(\Sigma_0^k)\right) \\ &=& \inf_{w \in \Pi(\pi_0,\pi_1)} \sum_{k,l}w_{kl} \|m_1^l - m_0^k\|^2 + \sum_{k}\pi_0^k\mathrm{trace}(\Sigma_0^k) = W_2^2(\tilde{\mu}_0,\mu_1) + \sum_{k=1}^{K_0} \pi_0^k\mathrm{trace}(\Sigma_0^k). \end{eqnarray*} \end{proof} In other words, the squared distance $MW_2^2$ between $\mu_0$ and $\mu_1$ is the sum of the squared Wasserstein distance between $\tilde{\mu}_0$ and $\mu_1$ and a linear combination of the traces of the covariance matrices of the components of $\mu_0$. We are now in a position to show the other inequality between $MW_2$ and $W_2$. \begin{proof} [Proof of Proposition~\ref{prop:ineq}] Let $(\mu_0^n)_n$ and $(\mu_1^n)_n$ be two sequences of mixtures of Dirac masses converging respectively to $\mu_0$ and $\mu_1$ in $\PP_2(\R^d)$. Since $MW_2$ is a distance, \begin{eqnarray*} MW_2(\mu_0,\mu_1) &\leq& MW_2(\mu_0^n,\mu_1^n)+MW_2(\mu_0,\mu_0^n)+ MW_2(\mu_1,\mu_1^n) \\ &=& W_2(\mu_0^n,\mu_1^n)+MW_2(\mu_0,\mu_0^n)+ MW_2(\mu_1,\mu_1^n).
\end{eqnarray*} We now study the limits of these three terms as $n\rightarrow +\infty$. First, observe that $MW_2(\mu_0^n,\mu_1^n) = W_2(\mu_0^n,\mu_1^n) \longrightarrow_{n\rightarrow \infty}W_2(\mu_0,\mu_1) $ since $W_2$ is continuous on $\PP_2(\R^d)$. Second, using Lemma~\ref{lemma1}, for $i=0,1$, $$ MW_2^2(\mu_i,\mu_i^n) = W_2^2( \tilde{\mu}_i,\mu_i^n) + \sum_{k=1}^{K_i} \pi_i^k\mathrm{trace}(\Sigma_i^k) \longrightarrow_{n\rightarrow \infty} W_2^2( \tilde{\mu}_i,\mu_i)+ \sum_{k=1}^{K_i} \pi_i^k\mathrm{trace}(\Sigma_i^k). $$ Define the measure $d\gamma(x,y) = \sum_{k=1}^{K_i} \pi_i^k \delta_{m_i^k}(y)g_{m_i^k,\Sigma_i^k}(x)dx$, with $g_{m_i^k,\Sigma_i^k}$ the probability density function of the Gaussian distribution $\NN(m_i^k,\Sigma_i^k)$. The probability measure $\gamma$ belongs to $\Pi(\mu_i,\tilde{\mu}_i)$, so \begin{eqnarray*} W_2^2( \mu_i, \tilde{\mu}_i)&\leq& \int \|x - y\|^2 d\gamma(x,y) = \sum_{k=1}^{K_i} \pi_i^k \int_{\R^d}\|x - m_i^k\|^2 g_{m_i^k,\Sigma_i^k}(x)dx \\ &=& \sum_{k=1}^{K_i} \pi_i^k \mathrm{trace}(\Sigma_i^k). \end{eqnarray*} We conclude that \begin{eqnarray*} MW_2(\mu_0,\mu_1) &\leq& \liminf_{n\rightarrow \infty} \left( W_2(\mu_0^n,\mu_1^n)+MW_2(\mu_0,\mu_0^n)+ MW_2(\mu_1,\mu_1^n) \right)\\ &\leq& W_2(\mu_0,\mu_1) + \left(W_2^2( \tilde{\mu}_0,\mu_0) + \sum_{k=1}^{K_0} \pi_0^k\mathrm{trace}(\Sigma_0^k)\right)^{\frac 1 2} + \left(W_2^2(\tilde{\mu}_1,\mu_1) + \sum_{k=1}^{K_1} \pi_1^k\mathrm{trace}(\Sigma_1^k)\right)^{\frac 1 2}\\ &\leq& W_2(\mu_0,\mu_1) + \left(2\sum_{k=1}^{K_0} \pi_0^k\mathrm{trace}(\Sigma_0^k)\right)^{\frac 1 2}+ \left(2\sum_{k=1}^{K_1} \pi_1^k\mathrm{trace}(\Sigma_1^k)\right)^{\frac 1 2}. \end{eqnarray*} This ends the proof of the proposition.
\end{proof} Observe that if $\mu$ is a Gaussian distribution $\NN(m,\Sigma)$ and $\mu^n$ a distribution supported by a finite number of points which converges to $\mu$ in $\PP_2(\R^d)$, then $$ W_2^2(\mu,\mu^n)\longrightarrow_{n\rightarrow \infty} 0$$ and \[MW_2(\mu,\mu^n) = \left(W_2^2(\tilde{\mu},\mu^n) + \mathrm{trace}(\Sigma)\right)^{\frac 1 2} \longrightarrow_{n\rightarrow \infty} \left(2 \mathrm{trace}(\Sigma)\right)^{\frac 1 2} \neq 0.\] Let us also remark that if $\mu_0$ and $\mu_1$ are Gaussian mixtures such that $\max_{k,i} \mathrm{trace}(\Sigma_i^k) \leq \varepsilon$, then \[MW_2(\mu_0,\mu_1) \leq W_2(\mu_0,\mu_1) + 2\sqrt{2\varepsilon}.\] \subsection{Generalization to other mixture models} A natural question is whether the methodology we have developed here, which restricts the set of possible coupling measures to Gaussian mixtures, can be extended to other families of mixtures. Indeed, in the image processing literature, as well as in many other fields, mixture models beyond Gaussian ones are widely used, such as Generalized Gaussian Mixture Models~\cite{DeledalleGGM} or mixtures of T-distributions~\cite{van2014student}, for instance. To extend our methodology to other mixtures, we need two main properties: (a) the identifiability property (which ensures that there is a canonical way to write a distribution as a mixture); and (b) a marginal consistency property (we need all the marginals of an element of the family to remain in the family). These two properties make it possible, in particular, to generalize the proof of Proposition~\ref{prop:MW2}. In order to make the discrete formulation convenient for numerical computations, we also need the $W_2$ distance between any two elements of the family to be easy to compute.
Starting from this last requirement, we can consider a family of elliptical distributions, where the elements are of the form $$\forall x\in\R^d , \quad f_{m,\Sigma} (x) = C_{h,d,\Sigma} \, h( (x-m)^t \Sigma^{-1} (x-m)) ,$$ where $m\in\R^d$, $\Sigma$ is a positive definite symmetric matrix and $h$ is a given function from $[0,+\infty)$ to $[0,+\infty)$. Gaussian distributions are an example, with $h(t)=\exp(-t/2)$. Generalized Gaussian distributions are obtained with $h(t)=\exp(-t^\beta)$, with $\beta$ not necessarily equal to $1$. T-distributions also belong to this family, with $h(t)=(1+t/\nu)^{-(\nu+d)/2}$, etc. Thanks to this elliptical contour property, the $W_2$ distance between two elements in such a family (i.e. with $h$ fixed) can be explicitly computed (see Gelbrich \cite{Gelbrich1990}), and yields a formula that is the same as the one in the Gaussian case (Equation \eqref{eq:wasserstein_gaussian}). In such a family, the identifiability property can be checked using the asymptotic behavior in all directions of $\R^d$. Now, if we want the marginal consistency property to be satisfied as well (which is necessary if we want the coupling restriction problem to be well-defined), the choice of $h$ is very limited. Indeed, Kano \cite{Kano1994} proved that the only elliptical distributions with the marginal consistency property are those which are a scale mixture of normal distributions with a mixing variable that is unrelated to the dimension $d$. So, generalized Gaussian distributions do not satisfy this marginal consistency property, but T-distributions do. \section{Multi-marginal formulation and barycenters} \label{sec:multimarginal} \subsection{Multi-marginal formulation for $MW_2$} Let $\mu_0,\mu_1,\dots,\mu_{J-1}$ be $J$ Gaussian mixtures on $\R^d$, and let $\lambda_0,\dots,\lambda_{J-1}$ be $J$ positive weights summing to $1$.
The multi-marginal version of our optimal transport problem restricted to Gaussian mixture models can be written { \begin{equation} \label{eq:multi-marginal_MW2} MMW_2^2(\mu_0,\dots,\mu_{J-1}):=\hspace{-0.2em}\inf_{\gamma \in \Pi(\mu_0,\dots,\mu_{J-1})\cap GMM_{Jd}(\infty)} \int_{\R^{dJ}}c(x_0,\dots,x_{J-1})d\gamma(x_0,\dots,x_{J-1}), \end{equation}} where \begin{equation} c(x_0,\dots,x_{J-1}) = \sum_{i=0}^{J-1} \lambda_i\|x_i - B(x)\|^2 = \frac{1}{2} \sum_{i,j=0}^{J-1} \lambda_i\lambda_j\|x_i - x_j\|^2 \label{eq:cost_multimarginal} \end{equation} and where $\Pi(\mu_0,\mu_1,\dots,\mu_{J-1})$ is the set of probability measures on $(\R^d)^{J}$ having $\mu_0$, $\mu_1$, $\dots$, $\mu_{J-1}$ as marginals. Writing, for every $j$, $\mu_{j} = \sum_{k=1}^{K_j} \pi_j^k\mu_j^k$, and using exactly the same arguments as in Proposition~\ref{prop:MW2}, we can easily show the following result. \begin{prop} \label{prop:MMW2} The optimization problem~\eqref{eq:multi-marginal_MW2} can be rewritten in the discrete form {\small \begin{equation} \label{eq:discrete_multi-marginal} MMW_2^2(\mu_0,\dots,\mu_{J-1})=\min_{w \in \Pi(\pi_0,\dots,\pi_{J-1})} \sum_{k_0,\dots,k_{J-1}=1}^{K_0,\dots,K_{J-1}}w_{k_0\dots k_{J-1}}mmW_2^2(\mu_0^{k_0},\dots,\mu_{J-1}^{k_{J-1}}), \end{equation}} where $\Pi(\pi_0,\pi_1,\dots,\pi_{J-1})$ is the subset of tensors $w$ in $ \MM_{K_0,K_1,\dots,K_{J-1}}(\R^+)$ having $\pi_0$, $\pi_1$, $\dots$, $\pi_{J-1}$ as discrete marginals, {\it i.e.} such that \begin{equation} \label{eq:Pi_multi} \forall j\in\{0,\dots,J-1\}, \;\forall k \in \{1,\dots,K_j\},\sum_{\substack{1\leq k_0\leq K_0 \\\dots\\1\leq k_{j-1}\leq K_{j-1}\\k_j = k\\1\leq k_{j+1}\leq K_{j+1}\\\dots \\1\leq k_{J-1}\leq K_{J-1}}} w_{k_0k_1\dots k_{J-1}} = \pi_j^k.
\end{equation} Moreover, the solution $\gamma^*$ of~\eqref{eq:multi-marginal_MW2} can be written \begin{equation} \gamma^* = \sum_{\substack{1\leq k_0 \leq K_0\\ \dots \\1\leq k_{J-1} \leq K_{J-1}}} w^*_{k_0 k_1\dots k_{J-1}}\gamma^*_{k_0 k_1\dots k_{J-1}}, \end{equation} where $w^*$ is a solution of~\eqref{eq:discrete_multi-marginal} and $\gamma^*_{k_0 k_1\dots k_{J-1}}$ is the optimal multi-marginal plan between the Gaussian measures $\mu_0^{k_0},\dots,\mu_{J-1}^{k_{J-1}}$ (see Section~\ref{sec:multimarginal_gaussian}). \end{prop} From Section~\ref{sec:multimarginal_gaussian}, we know how to construct the optimal multi-marginal plans $\gamma^*_{k_0 k_1\dots k_{J-1}}$, which means that computing a solution for~\eqref{eq:multi-marginal_MW2} boils down to solving the linear program~\eqref{eq:discrete_multi-marginal} in order to find $w^*$. \subsection{Link with the $MW_2$-barycenters} We will now show the link between the previous multi-marginal problem and the barycenters for $MW_2$. \begin{prop} The barycenter problem \begin{equation} \label{eq:MW2_bary} \inf_{\nu \in GMM_d(\infty)} \sum_{j=0}^{J-1} \lambda_j MW_2^2(\mu_j,\nu), \end{equation} has a solution given by $\nu^* = B\# \gamma^*$, where $\gamma^*$ is an optimal plan for the multi-marginal problem~\eqref{eq:multi-marginal_MW2}. \end{prop} \begin{proof} For any $\gamma \in \Pi(\mu_0,\dots,\mu_{J-1})\cap GMM_{Jd}(\infty)$, we define $\gamma_j = (P_j,B)\# \gamma$, with $B$ the barycenter map defined in \eqref{eq:barycenter} and $P_j:(\R^d)^J\to \R^d$ such that $P_j(x_0,\dots,x_{J-1}) = x_j$. Observe that $\gamma_j$ belongs to $\Pi(\mu_j,\nu)$ with $\nu = B\#\gamma$. The probability measure $\gamma_j$ also belongs to $GMM_{2d}(\infty)$ since $(P_j,B)$ is a linear map.
It follows that \begin{align*} \int_{(\R^d)^J} \sum_{j=0}^{J-1} \lambda_j \|x_j - B(x)\|^2 d\gamma(x_0,\dots,x_{J-1}) &= \sum_{j=0}^{J-1} \lambda_j \int_{(\R^d)^J} \|x_j - B(x)\|^2 d\gamma(x_0,\dots,x_{J-1}) \\ &= \sum_{j=0}^{J-1} \lambda_j \int_{\R^d\times \R^d} \|x_j - y\|^2 d\gamma_j(x_j,y) \\ &\ge \sum_{j=0}^{J-1} \lambda_j MW_2^2(\mu_j,\nu). \end{align*} This inequality holds for an arbitrary $\gamma \in \Pi(\mu_0,\dots,\mu_{J-1})\cap GMM_{Jd}(\infty)$, thus \[MMW_2^2(\mu_0,\dots,\mu_{J-1}) \ge \inf_{\nu \in GMM_d(\infty)} \sum_{j=0}^{J-1} \lambda_j MW_2^2(\mu_j,\nu).\] Conversely, for any $\nu$ in $GMM_{d}(\infty)$, we can write $\nu = \sum_{l=1}^L \pi_{\nu}^l \nu^l$, the $\nu^l$ being Gaussian probability measures. We also write $\mu_j = \sum_{k=1}^{K_j} \pi_j^k\mu_j^k$, and we denote by $w^j$ the optimal discrete plan for $MW_2$ between the mixtures $\mu_j$ and $\nu$ (see Equation~\eqref{eq:DW_conj}). Then, \begin{align*} \sum_{j=0}^{J-1} \lambda_j MW_2^2(\mu_j,\nu) &= \sum_{j=0}^{J-1} \lambda_j \sum_{k,l} w_{k,l}^j W_2^2(\mu_j^k,\nu^l). \end{align*} Now, if we define a $K_0\times \dots \times K_{J-1}\times L$ tensor $\alpha$ and a $K_0\times \dots \times K_{J-1}$ tensor $\overline{\alpha}$ by \[\alpha_{k_0\dots k_{J-1} l} = \frac{\prod_{j=0}^{J-1} w_{k_j,l}^j}{(\pi_{\nu}^l)^{J-1}}\;\; \text{ and }\quad \overline{\alpha}_{k_0\dots k_{J-1}} = \sum_{l=1}^L \alpha_{k_0\dots k_{J-1} l},\] then clearly $\alpha \in \Pi(\pi_0,\dots,\pi_{J-1},\pi_{\nu})$ and $\overline{\alpha} \in \Pi(\pi_0,\dots,\pi_{J-1})$.
Moreover, \begin{align*} \sum_{j=0}^{J-1} \lambda_j MW_2^2(\mu_j,\nu) & = \sum_{j=0}^{J-1} \lambda_j \sum_{k_j=1}^{K_j}\sum_{l=1}^L w_{k_j,l}^j W_2^2(\mu_j^{k_j},\nu^l)\\ &= \sum_{j=0}^{J-1} \lambda_j \sum_{k_0,\dots,k_{J-1},l} \alpha_{k_0\dots k_{J-1} l} W_2^2(\mu_j^{k_j},\nu^l)\\ &= \sum_{k_0,\dots,k_{J-1},l} \alpha_{k_0\dots k_{J-1} l} \sum_{j=0}^{J-1} \lambda_j W_2^2(\mu_j^{k_j},\nu^l)\\ & \ge \sum_{k_0,\dots,k_{J-1},l} \alpha_{k_0\dots k_{J-1} l} mmW_2^2(\mu_0^{k_0},\dots,\mu_{J-1}^{k_{J-1}}) \quad \text{ (see Equation~\eqref{eq:MW2_bary})}\\ & = \sum_{k_0,\dots,k_{J-1}} \overline{\alpha}_{k_0\dots k_{J-1}} mmW_2^2(\mu_0^{k_0},\dots,\mu_{J-1}^{k_{J-1}}) \ge MMW_2^2(\mu_0,\dots,\mu_{J-1}), \end{align*} the last inequality being a consequence of Proposition~\ref{prop:MMW2}. Since this holds for an arbitrary $\nu$ in $GMM_{d}(\infty)$, this ends the proof. \end{proof} The following corollary gives a more explicit formulation of the barycenters for $MW_2$, and shows that the number of Gaussian components in the mixture is much smaller than $\prod_{j=0}^{J-1} K_j$. \begin{Corollary} Let $\mu_0,\dots,\mu_{J-1}$ be $J$ Gaussian mixtures such that all the involved covariance matrices are positive definite. Then the solution of~\eqref{eq:MW2_bary} can be written \begin{equation} \label{eq:barycenterMW2} \nu = \sum_{k_0,\dots,k_{J-1}} w^*_{k_0\dots k_{J-1}} \nu_{k_0 \dots k_{J-1}} \end{equation} where $\nu_{k_0\dots k_{J-1}}$ is the Gaussian barycenter for $W_2$ between the components $\mu_0^{k_0},\dots, \mu_{J-1}^{k_{J-1}}$, and $w^*$ is the optimal solution of~\eqref{eq:discrete_multi-marginal}. Moreover, this barycenter has at most $K_0+\dots+K_{J-1}-J+1$ non-zero coefficients. \end{Corollary} \begin{proof} This follows directly from the proof of the previous propositions. The linear program~\eqref{eq:discrete_multi-marginal} has $K_0+\dots+K_{J-1}-J+1$ independent affine constraints, and thus admits at least one optimal solution with at most $K_0+\dots+K_{J-1}-J+1$ non-zero components.
\end{proof} To conclude this section, it is important to emphasize that the problem of barycenters for the distance $MW_2$, as defined in~\eqref{eq:MW2_bary}, is completely different from \begin{equation} \label{eq:GMM_W2_bary} \inf_{\nu \in GMM_d(\infty)} \sum_{j=0}^{J-1} \lambda_j W_2^2(\mu_j,\nu). \end{equation} Indeed, since $GMM_d(\infty)$ is dense in $\PP_2(\R^d)$ and the total cost above is continuous on $\PP_2(\R^d)$, the infimum in~\eqref{eq:GMM_W2_bary} is exactly the same as the infimum over $\PP_2(\R^d)$. Even though the barycenter for $W_2$ is generally not a mixture itself, it can be approximated by a sequence of Gaussian mixtures with any desired precision. Of course, these mixtures might have a very high number of components in practice. \subsection{Some examples} The previous propositions give us a very simple way to compute barycenters between Gaussian mixtures for the metric $MW_2$. For given mixtures $\mu_0,\dots,\mu_{J-1}$, we first compute all the values $mmW_2(\mu_0^{k_0},\dots ,\mu_{J-1}^{k_{J-1}})$ between their components (these values can be computed iteratively, see Section~\ref{sec:multimarginal_gaussian}) and the corresponding Gaussian barycenters $\nu_{k_0\dots k_{J-1}}$. Then we solve the linear program~\eqref{eq:discrete_multi-marginal} to find $w^*$.
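In one dimension this procedure is particularly simple, since the $W_2$ barycenter of Gaussians $\NN(m_j,\sigma_j^2)$ with weights $\lambda_j$ is $\NN(\bar m,\bar\sigma^2)$ with $\bar m=\sum_j\lambda_j m_j$ and $\bar\sigma=\sum_j\lambda_j\sigma_j$, so that $mmW_2^2=\sum_j\lambda_j[(m_j-\bar m)^2+(\sigma_j-\bar\sigma)^2]$. The following sketch (with illustrative parameters, and a generic LP solver in place of a dedicated OT solver) solves~\eqref{eq:discrete_multi-marginal} in this 1D setting:

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def mmw2_barycenter_1d(mixtures, lam):
    """Discrete multi-marginal LP for 1D Gaussian mixtures.
    mixtures: list of (pi, m, s) arrays (weights, means, standard deviations);
    lam: barycentric weights summing to 1.
    Returns the optimal weights w* (flattened over multi-indices) and the
    (mean, std) parameters of the corresponding barycenter components."""
    Ks = [len(pi) for pi, _, _ in mixtures]
    idx = list(itertools.product(*(range(K) for K in Ks)))
    cost = np.empty(len(idx))
    comps = np.empty((len(idx), 2))
    for n, ks in enumerate(idx):
        m = np.array([mix[1][k] for mix, k in zip(mixtures, ks)])
        s = np.array([mix[2][k] for mix, k in zip(mixtures, ks)])
        mb, sb = np.dot(lam, m), np.dot(lam, s)            # 1D W2 barycenter of Gaussians
        cost[n] = np.dot(lam, (m - mb) ** 2 + (s - sb) ** 2)  # mmW2^2 of the components
        comps[n] = mb, sb
    # one marginal constraint per mixture j and component k: sum of w over the slice k_j = k
    A_eq = np.array([[1.0 if ks[j] == k else 0.0 for ks in idx]
                     for j in range(len(mixtures)) for k in range(Ks[j])])
    b_eq = np.concatenate([pi for pi, _, _ in mixtures])
    res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.x, comps

# two illustrative 1D mixtures, equal barycentric weights
mixtures = [(np.array([0.4, 0.6]), np.array([-1.0, 1.0]), np.array([0.3, 0.5])),
            (np.array([1.0]), np.array([0.5]), np.array([1.0]))]
w_opt, comps = mmw2_barycenter_1d(mixtures, np.array([0.5, 0.5]))
# barycenter = sum_n w_opt[n] * N(comps[n, 0], comps[n, 1]**2)
```

The resulting barycenter is the mixture $\sum_n w^*_n\,\NN(\texttt{comps}[n,0],\texttt{comps}[n,1]^2)$, keeping only the components with non-zero weight.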
Figure~\ref{bary_4GMM} shows the barycenters between the following simple two dimensional mixtures {\small \begin{align*} \mu_0 =& \frac 1 3 \NN\left( \begin{pmatrix} 0.5\\0.75 \end{pmatrix} ,0.025 \begin{pmatrix} 0.1 & 0 \\0 & 0.05 \end{pmatrix}\right) + \frac 1 3 \NN\left( \begin{pmatrix} 0.5\\0.25 \end{pmatrix} ,0.025 \begin{pmatrix} 0.1 & 0 \\0 & 0.05 \end{pmatrix}\right)\\&+\frac 1 3 \NN\left( \begin{pmatrix} 0.5\\0.5 \end{pmatrix} ,0.025 \begin{pmatrix} 0.06 & 0 \\0.05 & 0.05 \end{pmatrix}\right),\\ \mu_1 =& \frac 1 4 \NN\left( \begin{pmatrix} 0.25\\0.25 \end{pmatrix} ,0.01 I_2 \right) + \frac 1 4 \NN\left( \begin{pmatrix} 0.75\\0.75 \end{pmatrix} ,0.01 I_2\right) +\frac 1 4 \NN\left( \begin{pmatrix} 0.7\\0.25 \end{pmatrix} ,0.01 I_2\right)\\ &+\frac 1 4 \NN\left( \begin{pmatrix} 0.25\\0.75 \end{pmatrix} ,0.01 I_2\right), \\ \mu_2 =& \frac 1 4 \NN\left( \begin{pmatrix} 0.5\\0.75 \end{pmatrix} ,0.025 \begin{pmatrix} 1 & 0 \\0 & 0.05 \end{pmatrix}\right) + \frac 1 4 \NN\left( \begin{pmatrix} 0.5\\0.25 \end{pmatrix} ,0.025 \begin{pmatrix} 1 & 0 \\0 & 0.05 \end{pmatrix}\right)\\ &+\frac 1 4 \NN\left( \begin{pmatrix} 0.25\\0.5 \end{pmatrix} ,0.025 \begin{pmatrix} 0.05 & 0 \\0 & 1 \end{pmatrix}\right)+\frac 1 4 \NN\left( \begin{pmatrix} 0.75\\0.5 \end{pmatrix} ,0.025 \begin{pmatrix} 0.05 & 0 \\0 & 1 \end{pmatrix}\right),\\ \mu_3 =& \frac 1 3 \NN\left( \begin{pmatrix} 0.8\\0.7 \end{pmatrix} ,0.01 \begin{pmatrix} 2 & 0 \\1 & 1 \end{pmatrix}\right) + \frac 1 3 \NN\left( \begin{pmatrix} 0.2\\0.7 \end{pmatrix} ,0.01 \begin{pmatrix} 2 & 0 \\-1 & 1 \end{pmatrix}\right)\\ &+\frac 1 3 \NN\left( \begin{pmatrix} 0.5\\0.3 \end{pmatrix} ,0.01 \begin{pmatrix} 6 & 0 \\0 & 1 \end{pmatrix}\right), \end{align*} } where $I_2$ is the $2\times 2$ identity matrix. Each barycenter is a mixture of at most $K_0+K_1+K_2+K_3 -4 + 1 = 11$ components. By thresholding the mixture densities, this yields barycenters between 2-D shapes.
\begin{figure} \caption{$MW_2$-barycenters between 4 Gaussian mixtures $\mu_0$, $\mu_1$, $\mu_2$ and $\mu_3$. On the left, some level sets of the distributions are displayed. On the right, densities thresholded at level $1$ are displayed. We use bilinear weights with respect to the four corners of the square.\label{bary_4GMM}} \end{figure} To go further, Figure~\ref{bary_12GMM} shows barycenters where more involved shapes have been approximated by mixtures of 12 Gaussian components each. Observe that, even if some of the original shapes (the star, the cross) have symmetries, these symmetries are not necessarily respected by the estimated GMM, and thus not preserved in the barycenters. This could easily be solved by imposing some symmetry in the GMM estimation for these shapes. \begin{figure} \caption{Barycenters between four mixtures of 12 Gaussian components, $\mu_0$, $\mu_1$, $\mu_2$, $\mu_3$ for the metric $MW_2$. The weights are bilinear with respect to the four corners of the square.\label{bary_12GMM}} \end{figure} \section{Using $MW_2$ in practice} \label{sec:assignment} \subsection{Extension to probability distributions that are not GMM} Most applications of optimal transport involve data that do not follow a Gaussian mixture model, and one may wonder how to make use of the distance $MW_2$ and the corresponding transport plans in this case. A simple solution is to approximate these data by suitable Gaussian mixture models and to use the transport plan $\gamma$ (or one of the maps defined in the previous section) to displace the data. Given two probability measures $\nu_0$ and $\nu_1$, we can define a pseudo-distance $MW_{K,2}(\nu_0,\nu_1)$ as the distance $MW_2({\mu}_0, {\mu}_1)$, where each ${\mu}_i$ ($i=0,1$) is the Gaussian mixture model with $K$ components which minimizes an appropriate ``similarity measure'' to $\nu_i$.
For instance, if $\nu_i$ is a discrete measure $\nu_i = \frac 1 {J_i} \sum_{j=1}^{J_i} \delta_{x^i_j}$ in $\R^d$, this similarity can be chosen as the negative log-likelihood of the discrete set of points $\{x^i_j\}_{j=1,\dots, J_i}$, and the parameters of the Gaussian mixture can be inferred thanks to the Expectation-Maximization algorithm. Observe that this log-likelihood can also be written $$\E_{\nu_i }[\log {\mu}_i].$$ If $\nu_i$ is absolutely continuous, we can instead choose ${\mu}_i$ which minimizes $\KL(\nu_i,{\mu}_i )$ among GMM of order $K$. The discrete and continuous formulations coincide since $$\KL(\nu_i,{\mu}_i ) = - H(\nu_i) - \E_{\nu_i }[\log {\mu}_i],$$ where $H(\nu_i)$ is the differential entropy of $\nu_i$. In both cases, the corresponding $MW_{K,2}$ does not define a distance, since two different distributions may have the same corresponding Gaussian mixture. However, for $K$ large enough, their approximations by Gaussian mixtures will become different. The choice of $K$ results from a compromise between the quality of the approximation given by Gaussian mixture models and the affordable computing time. In any case, the optimal transport plan $\gamma_K$ involved in $MW_2({\mu}_0,{\mu}_1)$ can be used to compute an approximate transport map between $\nu_0$ and $\nu_1$. In the experimental section, we will use this approximation for different data, generally with $K=10$. \subsection{A similarity measure mixing $MW_2$ and $KL$} \label{Extension:subsec} In the previous paragraphs, we have seen how to use our Wasserstein-type distance $MW_2$ and its associated optimal transport plan on probability measures $\nu_0$ and $\nu_1$ that are not GMM. Instead of a two-step formulation (first an approximation by two GMM, and second the computation of $MW_2$), we propose here a relaxed formulation combining directly $MW_2$ with the Kullback-Leibler divergence.
Let $\nu_0$ and $\nu_1$ be two probability measures on $\R^d$; we define \begin{equation} \label{eq:EKl} E_{K,\lambda} (\nu_0,\nu_1) = \min_{\gamma \in GMM_{2d}(K)} \int_{\R^d\times\R^d} \|y_0-y_1\|^2 d\gamma(y_0,y_1) - \lambda\E_{\nu_0 }[\log P_0\# \gamma] - \lambda\E_{\nu_1 }[\log P_1\# \gamma] , \end{equation} where $\lambda>0$ is a parameter. In the case where $\nu_0$ and $\nu_1$ are absolutely continuous with respect to the Lebesgue measure, we can write instead \begin{equation} \label{eq:EKltilde} \widetilde{E_{K,\lambda}}(\nu_0,\nu_1) =\min_{\gamma \in GMM_{2d}(K)} \int_{\R^d\times\R^d} \|y_0-y_1\|^2 d\gamma(y_0,y_1) + \lambda \KL(\nu_0,P_0\# \gamma) + \lambda \KL(\nu_1,P_1\# \gamma) \end{equation} and $\widetilde{E_{K,\lambda}}(\nu_0,\nu_1) = E_{K,\lambda} (\nu_0,\nu_1) - \lambda H(\nu_0) - \lambda H(\nu_1).$ Note that this formulation does not define a distance in general. This formulation is close to the unbalanced formulation of optimal transport proposed by Chizat et al. in~\cite{Chizat2017ScalingAF}, with two differences: a) we constrain the solution $\gamma$ to be a GMM; and b) we use $\KL(\nu_0,P_0\# \gamma) $ instead of $\KL(P_0\# \gamma,\nu_0)$. In their case, the support of $P_i\# \gamma$ must be contained in the support of $\nu_i$. When $\nu_i$ has a bounded support, this constraint is quite strong and would not make sense for a GMM $\gamma$. For discrete measures $\nu_0$ and $\nu_1$, when $\lambda$ goes to infinity, minimizing~\eqref{eq:EKl} becomes equivalent to approximating $\nu_0$ and $\nu_1$ by the EM algorithm, since this only constrains the marginals of $\gamma$ to be as close as possible to $\nu_0$ and $\nu_1$. When $\lambda$ decreases, the first term favors solutions $\gamma$ whose marginals become closer to each other. Solving this problem (Equation \eqref{eq:EKl}) leads to computations similar to those used in the EM iterations~\cite{bishop2006pattern}.
By differentiating with respect to the weights, means and covariances of $\gamma$, we obtain equations which are not in closed form. For the sake of simplicity, we illustrate here what happens in one dimension. \\ Let $\gamma\in GMM_2(K)$ be a Gaussian mixture in dimension $2d=2$ with $K$ elements. We write $$\gamma = \sum_{k=1}^K \pi_k \mathcal{N}\left( \left( \begin{array}{c} m_{0,k} \\ m_{1,k} \end{array} \right) , \left(\begin{array}{cc} \sigma_{0,k}^2 & a_k \\ a_k & \sigma_{1,k}^2 \end{array} \right) \right) .$$ The marginals are then given by the 1D Gaussian mixtures $$ P_0\# \gamma = \sum_{k=1}^K \pi_k \mathcal{N}(m_{0,k}, \sigma_{0,k}^2) \quad \text{ and } \quad P_1\# \gamma = \sum_{k=1}^K \pi_k \mathcal{N}(m_{1,k}, \sigma_{1,k}^2) .$$ To minimize the energy $E_{K,\lambda}(\nu_0,\nu_1)$ above with respect to $\gamma$, since the $\KL$ terms do not depend on the $a_k$, we can directly take $a_k=\sigma_{0,k}\sigma_{1,k}$, and the transport cost term becomes $$ \int_{\R^d\times\R^d} \|y_0-y_1\|^2 d\gamma(y_0,y_1) = \sum_{k=1}^K \pi_k \left[ (m_{0,k} - m_{1,k})^2 + (\sigma_{0,k} - \sigma_{1,k})^2 \right] . $$ Therefore, we have to consider the problem of minimizing the following ``energy'': \begin{eqnarray*} F(\gamma) & = & \sum_{k=1}^K \pi_k \left[ (m_{0,k} - m_{1,k})^2 + (\sigma_{0,k} - \sigma_{1,k})^2 \right] \\ & & - \lambda\int_\R \log\left( \sum_{k=1}^K \pi_k g_{m_{0,k}, \sigma_{0,k}^2}(x) \right) d\nu_0(x) - \lambda\int_\R \log\left( \sum_{k=1}^K \pi_k g_{m_{1,k}, \sigma_{1,k}^2}(x) \right) d\nu_1(x) . \end{eqnarray*} It can be optimized through a simple gradient descent on the parameters $\pi_k$, $m_{i,k}$, $\sigma_{i,k}$ for $i=0,1$ and $k=1,\ldots, K$.
Indeed, a simple computation shows that, denoting by $j=1-i$ the index of the other marginal, $$\frac{\partial F(\gamma)}{\partial \pi_k} = \left[ (m_{0,k} - m_{1,k})^2 + (\sigma_{0,k} - \sigma_{1,k})^2 \right] - \lambda \frac{\tilde{\pi}_{0,k} +\tilde{\pi}_{1,k} }{\pi_k} ,$$ $$\frac{\partial F(\gamma)}{\partial m_{i,k}} = 2 \pi_k(m_{i,k} - m_{j,k}) - \lambda \frac{\tilde{\pi}_{i,k}}{\sigma_{i,k}^2}(\tilde{m}_{i,k} - m_{i,k} ) ,$$ $$\text{ and } \quad \frac{\partial F(\gamma)}{\partial \sigma_{i,k}} = 2 \pi_k(\sigma_{i,k} - \sigma_{j,k}) - \lambda \frac{\tilde{\pi}_{i,k}}{\sigma_{i,k}^3}(\tilde{\sigma}_{i,k}^2 - \sigma_{i,k}^2 ) ,$$ where we have introduced the auxiliary empirical estimates, for $i=0,1$ and $k=1,\ldots, K$, $$ \gamma_{i,k}(x) = \frac{\pi_k g_{m_{i,k}, \sigma_{i,k}^2}(x)}{\sum_{l=1}^K \pi_l g_{m_{i,l}, \sigma_{i,l}^2}(x)} \quad \text{ and } \quad \tilde{\pi}_{i,k} = \int \gamma_{i,k}(x) d\nu_i(x) ; $$ $$ \tilde{m}_{i,k} = \frac{1}{\tilde{\pi}_{i,k}} \int x \gamma_{i,k}(x) d\nu_i(x) \quad \text{ and } \quad \tilde{\sigma}_{i,k}^2 = \frac{1}{\tilde{\pi}_{i,k}} \int (x-m_{i,k})^2 \gamma_{i,k}(x) d\nu_i(x) .$$ Automatic differentiation of $F$ can also be used in practice. At each iteration of the gradient descent, we project on the constraints $\pi_k\geq 0$, $\sigma_{i,k}\geq 0$ and $\sum_k \pi_k =1$. Figure \ref{fig:MW2KL} illustrates this approach on a simple example. The distributions $\nu_0$ and $\nu_1$ are 1D discrete distributions, plotted as the red and blue histograms. On this example, we choose $K=3$ and we use automatic differentiation (with the torch.autograd Python library) for the sake of convenience. The red and blue plain curves represent the final distributions $P_0\#\gamma$ and $ P_1\#\gamma$, for $\lambda$ in the set $\{10, 2, 1, 0.5, 0.1, 0.01\}$. The behavior is as expected: when $\lambda$ is large, the KL terms dominate and the marginals of $\gamma$ fit the two distributions $\nu_0$ and $\nu_1$ well.
When $\lambda$ is small, on the contrary, the Wasserstein transport term dominates and the two marginals of $\gamma$ become almost equal. \begin{figure} \caption{ The distributions $\nu_0$ and $\nu_1$ are 1D discrete distributions, plotted as the red and blue discrete histograms. The red and blue plain curves represent the final distributions $P_0\#\gamma$ and $ P_1\#\gamma$. In this experiment, we use $K=3$ Gaussian components for $\gamma$.\label{fig:MW2KL}} \end{figure} \subsection{From a GMM transport plan to a transport map} In practice, we often need not only an optimal transport plan and its corresponding cost, but also an assignment giving for each $x\in\R^d$ a corresponding value $T(x)\in\R^d$. Let $\mu_0$ and $\mu_1$ be two GMM. The optimal transport plan between $\mu_0$ and $\mu_1$ for $MW_2$ is given by $$ \gamma (x,y) = \sum_{k,l} w_{k,l}^\ast g_{m_0^k,\Sigma_0^k}(x) \delta_{y=T_{k,l}(x)} .$$ It is not of the form $(\mathrm{Id},T)\#\mu_0$ (see also Figure \ref{1D_example_plans} for an example), but we can nevertheless define a unique assignment of each $x$, for instance by setting $$ T_{mean}(x) = \E_\gamma (Y | X=x) ,$$ where $(X,Y)$ is distributed according to the probability distribution $\gamma$. Then, since the distribution of $Y|X=x$ is given by the discrete distribution $$ \sum_{k,l} p_{k,l}(x) \delta_{T_{k,l}(x)} \quad \text{ with } \quad p_{k,l}(x) = \frac{w_{k,l}^\ast g_{m_0^k,\Sigma_0^k}(x)}{\sum_{j} \pi_0^j g_{m_0^j,\Sigma_0^j}(x)} ,$$ we get that $$ T_{mean}(x) = \frac{\sum_{k,l} w_{k,l}^\ast g_{m_0^k,\Sigma_0^k}(x) T_{k,l}(x)}{\sum_{k} \pi^k_{0} g_{m_0^k,\Sigma_0^k}(x)} .$$ Notice that the map $T_{mean}$ defined this way is an assignment that will not necessarily satisfy the properties of an optimal transport map. In particular, in dimension $d=1$, the map $T_{mean}$ may not be increasing: each $T_{k,l}$ is increasing, but since the weights depend on $x$, their weighted sum is not necessarily increasing.
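In one dimension, where the component maps are simply $T_{k,l}(x)=m_1^l+\frac{\sigma_1^l}{\sigma_0^k}(x-m_0^k)$, the map $T_{mean}$ can be sketched as follows (a sketch with generic inputs; $w$ is assumed to be the optimal discrete plan $w^\ast$ computed beforehand):

```python
import numpy as np

def gauss_pdf(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

def t_mean_1d(x, w, m0, s0, m1, s1):
    """Barycentric assignment T_mean for 1D GMMs; w is the optimal discrete plan."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    num = np.zeros_like(x)
    den = np.zeros_like(x)
    for k in range(len(m0)):
        gk = gauss_pdf(x, m0[k], s0[k])
        den += w[k].sum() * gk                     # w[k].sum() equals pi_0^k
        for l in range(len(m1)):
            # increasing OT map between the 1D Gaussian components k and l
            T_kl = m1[l] + (s1[l] / s0[k]) * (x - m0[k])
            num += w[k, l] * gk * T_kl
    return num / den
```

Plotting `t_mean_1d` on a grid for mixtures with well-separated components makes the possible loss of monotonicity visible: each $T_{k,l}$ is increasing, but the $x$-dependent weights can make their average decrease locally.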
Another issue is that $T_{mean}\#\mu_0$ may be ``far'' from the target distribution $\mu_1$. This happens for instance, in 1D, when $\mu_0=\mathcal{N}(0,1)$ and $\mu_1$ is the mixture of $\mathcal{N}(-a,1)$ and $\mathcal{N}(a,1)$, each with weight $0.5$. In this extreme case, we even have that $T_{mean}$ is the identity map, and thus $T_{mean}\#\mu_0=\mu_0$, which can be very far from $\mu_1$ when $a$ is large. Another way to define an assignment is as a random assignment using the optimal plan $\gamma$. More precisely, for a fixed value $x$ we can define $$T_{rand}(x) = T_{k,l}(x) \quad \text{ with probability } p_{k,l}(x) = \frac{w_{k,l}^\ast g_{m_0^k,\Sigma_0^k}(x)}{\sum_{j} \pi_0^j g_{m_0^j,\Sigma_0^j}(x)} .$$ Observe that, from a mathematical point of view, we can define a random variable $T_{rand}(x)$ for a fixed value of $x$, or also a finite set of independent random variables $T_{rand}(x)$ for a finite set of $x$. But constructing and defining $T_{rand}$ as a stochastic process on the whole space $\R^d$ would be mathematically much more difficult (see \cite{Kallenberg} for instance). Now, for any measurable set $A$ of $\R^d$ and any $x\in\R^d$, we can define the map $\kappa(x,A) := \mathbb{P}[T_{rand}(x) \in A],$ and we have $$\kappa(x,A) = \frac{\gamma(x,A)}{\sum_{j} \pi_0^j g_{m_0^j,\Sigma_0^j}(x) }, \; \text{ and thus }\;\;\; \int\kappa(x,A) d\mu_0(x) = \mu_1(A).$$ This means that if the map $T_{rand}$ could be defined everywhere, then ``$T_{rand} \# \mu_0$'' would be equal in expectation to $\mu_1$. Figure~\ref{fig:points_displacement} illustrates these two possible assignments $T_{mean}$ and $T_{rand}$ on a simple example. In this example, two discrete measures $\nu_0$ and $\nu_1$ are approximated by Gaussian mixtures $\mu_0$ and $\mu_1$ of order $K$, and we compute the transport maps $T_{mean}$ and $T_{rand}$ for these two mixtures. These maps are used to displace the points of $\nu_0$.
We show the result of these displacements for different values of $K$. We can see that, depending on the configuration of points, the results provided by $T_{mean}$ and $T_{rand}$ can be quite different. As could be expected, the measure $T_{rand} \# \nu_0$ (well-defined since $\nu_0$ is composed of a finite set of points) looks more similar to $\nu_1$ than $T_{mean} \# \nu_0$ does. On the other hand, $T_{rand}$ is less regular than $T_{mean}$ (two close points can easily be displaced to two positions far from each other). This may not be desirable in some applications, for instance in color transfer, as we will see in Figure \ref{fig:renoir_gauguin_histograms} in the experimental section. \begin{figure} \caption{Assignments between two point clouds $\nu_0$ (in blue) and $\nu_1$ (in yellow) composed of $40$ points, for different values of $K$. Green points represent $T\#\nu_0$, where $T=T_{rand}$ or $T=T_{mean}$.\label{fig:points_displacement}} \end{figure} \section{Two applications in image processing} \label{sec:applications} We have already illustrated the behavior of the distance $MW_2$ in small dimension. In the following, we investigate more involved examples in larger dimension. In the last ten years, optimal transport has been widely used for various applications in image processing and computer vision, including color transfer, texture synthesis and shape matching. We focus here on two simple applications: on the one hand, color transfer, which involves transporting mass in dimension $d=3$ since color histograms are 3D histograms; on the other hand, patch-based texture synthesis, which requires transport in dimension $p^2$ for $p\times p$ patches. These two applications require computing transport plans or barycenters between potentially millions of points. We will see that the use of $MW_2$ makes these computations much easier and faster than the use of classical optimal transport, while yielding excellent visual results.
The code for the different experiments is available as Jupyter notebooks at \url{https://github.com/judelo/gmmot}. \subsection{Color transfer} \label{sec:colortransfer} We start with the problem of color transfer. A discrete color image can be seen as a function $u:\Omega \rightarrow \R^3$, where $\Omega = \{0,\dots, n_r-1\}\times\{0,\dots, n_c-1\}$ is a discrete grid. The image size is $n_r\times n_c$ and, for each $i \in \Omega$, $u(i)\in \R^3$ is the triple of intensities of red, green and blue in the color of pixel $i$. Given two images $u_0$ and $u_1$ on grids $\Omega_0$ and $\Omega_1$, we define the discrete color distributions $\eta_k = \frac 1{|\Omega_k|} \sum_{i\in\Omega_k} \delta_{u_k(i)}$, $k=0,1$, and we approximate these two distributions by Gaussian mixtures $\mu_0$ and $\mu_1$ thanks to the Expectation-Maximization (EM) algorithm\footnote{In practice, we use the \textit{scikit-learn} implementation of EM with the \textit{kmeans} initialization.}. Keeping the notation used previously in the paper, we denote by $K_k$ the number of Gaussian components in the mixture $\mu_k$, for $k=0,1$. We compute the optimal plan for $MW_2$ between these two mixtures and the corresponding map $T_{mean}$, which we use to compute $T_{mean}(u_0)$, an image with the same content as $u_0$ but with colors much closer to those of $u_1$. Figure~\ref{fig:renoir_gauguin} illustrates this process on two paintings by Renoir and Gauguin, respectively \textit{Le déjeuner des canotiers} and \textit{Manhana no atua}. For this experiment, we choose $K_0=K_1=10$. The corresponding transport map for $MW_2$ is relatively fast to compute (less than one minute with a non-optimized Python implementation, using the POT library \cite{flamary2017pot} to compute the plan between the two discrete distributions of $10$ masses).
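The color-transfer pipeline just described can be sketched as follows. This is our illustration rather than the authors' released notebooks: the random arrays stand in for the RGB pixel clouds of $u_0$ and $u_1$, the mixtures are fitted with the scikit-learn EM implementation mentioned in the footnote, and the $K_0\times K_1$ matrix of pairwise squared Gaussian $W_2$ distances is the cost matrix of the discrete problem defining $MW_2$.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def psd_sqrt(S):
    # symmetric square root of a positive semidefinite matrix
    w, V = np.linalg.eigh(S)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.T

def gaussian_w2sq(m0, S0, m1, S1):
    # closed-form squared Wasserstein-2 distance between two Gaussians
    R = psd_sqrt(S0)
    return float(np.sum((m0 - m1) ** 2)
                 + np.trace(S0 + S1 - 2.0 * psd_sqrt(R @ S1 @ R)))

rng = np.random.default_rng(0)
u0 = rng.random((2000, 3))               # stand-in for the RGB pixels of u_0
u1 = 0.5 * rng.random((2000, 3)) + 0.25  # stand-in for the RGB pixels of u_1

K = 10
gmm0 = GaussianMixture(K, init_params="kmeans", random_state=0).fit(u0)
gmm1 = GaussianMixture(K, init_params="kmeans", random_state=0).fit(u1)

# Cost matrix of the small discrete transport problem defining MW_2
C = np.array([[gaussian_w2sq(gmm0.means_[k], gmm0.covariances_[k],
                             gmm1.means_[l], gmm1.covariances_[l])
               for l in range(K)] for k in range(K)])
```

The optimal weights $w^\ast$ can then be obtained from this cost matrix with any small linear-programming or discrete optimal transport solver (e.g. the POT library cited above), and $T_{mean}$ is assembled from the pairwise Gaussian transport maps as in the previous section.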
We also show in the same figure $T_{rand}(u_0)$, the result of sliced optimal transport~\cite{rabin2011wasserstein,bonneel2015sliced}, and the result of separable optimal transport (i.e. on each color channel separately). Notice that the complete optimal transport on such huge discrete distributions (approximately 800000 Dirac masses for these $1024\times 768$ images) is hardly tractable in practice. As expected, the image $T_{rand}(u_0)$ is much noisier than the image $T_{mean}(u_0)$. We show in Figure~\ref{fig:renoir_gauguin_histograms} the discrete color distributions of these different images and the corresponding classes provided by EM (each point is assigned to its most likely class). \begin{figure} \caption{First line, images $u_0$ and $u_1$ (two paintings by Renoir and Gauguin). Second line, $T_{mean}(u_0)$ and $T_{rand}(u_0)$.} \label{fig:renoir_gauguin} \end{figure} \begin{figure} \caption{The images $u_0$ and $u_1$ are the ones of Figure~\ref{fig:renoir_gauguin}; we show their color distributions and the corresponding EM classes.} \label{fig:renoir_gauguin_histograms} \end{figure} The value $K=10$ that we have chosen here is the result of a compromise. Indeed, when $K$ is too small, the approximation by the mixtures is generally too rough to represent the complexity of the color data properly. Conversely, we have observed that increasing the number of components does not necessarily help, since the corresponding transport map will lose regularity. For color transfer experiments, we found in practice that using around $10$ components yields the best results. We also illustrate this in Figure \ref{fig:red_white_mountains}, where we show the results of color transfer with $MW_2$ for different values of $K$. On the different images, one can appreciate how the color distribution gets closer and closer to that of the target image as $K$ increases.
\begin{figure} \caption{The left-most image is the ``red mountain'' image, and its color distribution is modified to match that of the right-most image (the ``white mountain'' image) with $MW_2$, using respectively $K=1$, $K=3$ and $K=10$ components in the Gaussian mixtures.} \label{fig:red_white_mountains} \end{figure} We end this section with a color manipulation experiment, shown in Figure~\ref{fig:cat_barycenter}. Given four different images, we create barycenters for $MW_2$ between their four color palettes (represented again by mixtures of 10 Gaussian components), and we modify the first of the four images so that its color palette spans this space of barycenters. For this experiment (and this experiment only), a spatial regularization step is applied in post-processing~\cite{rabin2011removing} to remove some artifacts created by these color transformations between highly different images. \begin{figure} \caption{In this experiment, the top left image is modified in such a way that its color palette goes through the $MW_2$-barycenters between the color palettes of the four corner images. Each color palette is represented as a mixture of 10 Gaussian components. The weights used for the barycenters are bilinear with respect to the four corners of the rectangle.} \label{fig:cat_barycenter} \end{figure} \subsection{Texture synthesis} Given an exemplar texture image $u:\Omega \rightarrow \R^3$, the goal of texture synthesis is to synthesize images with the same perceptual characteristics as $u$, while keeping some innovative content. The literature on texture synthesis is rich, and we focus here only on a bilevel approach proposed recently in~\cite{galerne2017semi}. The method relies on the optimal transport between a continuous (Gaussian or Gaussian mixture) distribution and a discrete distribution (the distribution of the patches of the exemplar texture image).
The first step of the method can be described as follows. For a given exemplar image $u:\Omega \rightarrow \R^3$, the authors compute the asymptotic discrete spot noise (ADSN) associated with $u$, which is the stationary Gaussian random field $U:\mathbb{Z}^2 \rightarrow \R^3$ with the same mean and covariance as $u$, {\em i.e.} \[\forall x \in \mathbb{Z}^2, \; U(x) = \bar{u}+ \sum_{y\in \mathbb{Z}^2} t_u(y)W(x-y), \; \text{ where } \begin{cases} \bar{u} = \frac 1 {|\Omega|} \sum_{x\in \Omega} u(x)\\ t_u = \frac 1 {\sqrt{|\Omega|}} (u-\bar{u}) \mathbf{1}_{\Omega}, \end{cases} \] with $W$ a standard normal Gaussian white noise on $\mathbb{Z}^2$. Once the ADSN $U$ is computed, they extract a set $S$ of $p\times p$ sub-images (also called \textit{patches}) of $u$. In our experiments, we extract one patch for each pixel of $u$ (excluding the borders), so patches are overlapping and the number of patches is approximately equal to the image size. The authors of~\cite{galerne2017semi} then define $\eta_1$ as the empirical distribution of this set of patches (thus $\eta_1$ lives in dimension $3\times p \times p$, {\em i.e.} $27$ for $p=3$) and $\eta_0$ as the Gaussian distribution of the patches of $U$, and compute the semi-discrete optimal transport map $T_{SD}$ from $\eta_0$ to $\eta_1$. This map $T_{SD}$ is then applied to each patch of a realization of $U$, and an output synthesized image $v$ is obtained by averaging the transported patches at each pixel. Since the semi-discrete optimal transport step is numerically very expensive in such high dimension, we propose to use the $MW_2$ distance instead. For that, we approximate the two discrete patch distributions of $u$ and $U$ by Gaussian mixture models $\mu_0$ and $\mu_1$, and we compute the optimal map $T_{mean}$ for $MW_2$ between them. The rest of the algorithm is similar to the one described in~\cite{galerne2017semi}. Figure~\ref{fig:texture} shows the results for different choices of exemplar images $u$.
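The ADSN construction above amounts to the convolution of the normalized, centered image $t_u$ with a white noise, which is convenient to carry out in the Fourier domain. A minimal single-channel sketch (our illustration, with a periodic boundary convention on a finite grid as a simplifying assumption, instead of the field on all of $\mathbb{Z}^2$):

```python
import numpy as np

def adsn(u, rng):
    """One realization of the ADSN of a single-channel image u:
    a Gaussian field with the same mean and (periodic) covariance as u."""
    ubar = u.mean()
    t = (u - ubar) / np.sqrt(u.size)        # t_u = (u - ubar) / sqrt(|Omega|)
    noise = rng.standard_normal(u.shape)    # white noise W
    # periodic convolution t_u * W computed in the Fourier domain
    conv = np.real(np.fft.ifft2(np.fft.fft2(t) * np.fft.fft2(noise)))
    return ubar + conv
```

Since $t_u$ has zero mean, the zero-frequency Fourier coefficient of the convolution vanishes, so each realization has spatial mean $\bar{u}$ (up to floating-point error).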
In practice, we use $K_0=K_1=10$, as in the color transfer experiments, and $3\times 3$ color patches. The results obtained with our approach are visually very similar to the ones obtained with~\cite{galerne2017semi}, for a computational time approximately 10 times smaller. More precisely, for an image of size $256\times 256$, the proposed approach takes about $35$ seconds, whereas the semi-discrete approach of \cite{galerne2017semi} takes about $400$ seconds. We are currently exploring a multiscale version of this approach, inspired by the recent work~\cite{leclaire2019multi}. \begin{figure} \caption{Left, original texture $u$. Middle, ADSN $U$. Right, synthesized version.} \label{fig:texture} \end{figure} \section{Discussion and conclusion} In this paper, we have defined a Wasserstein-type distance on the set of Gaussian mixture models, by restricting the set of possible coupling measures to Gaussian mixtures. We have shown that this distance has an explicit discrete formulation, is easy to compute, and is suitable for computing transport plans or barycenters in high-dimensional problems where the classical Wasserstein distance remains difficult to handle. We have also discussed the fact that the distance $MW_2$ could be extended to other types of mixtures, as soon as a marginal consistency property and an identifiability property similar to the one used in the proof of Proposition~\ref{prop:MW2} hold. In practice, Gaussian mixture models are versatile enough to represent large classes of concrete and applied problems. One important question raised by the introduced framework and its generalization in Section \ref{Extension:subsec} is how to estimate the mixtures from discrete data, since the obtained result will depend on the number $K$ of Gaussian components in the mixtures and on the parameter $\lambda$ that weights the data-fidelity terms.
If the number of Gaussian components is chosen large enough, and the covariances small enough, the transport plan for $MW_2$ will look very similar to the one for $W_2$, but at the price of a high computational cost. If, on the contrary, we choose a very small number of components (as in the color transfer experiments of Section~\ref{sec:colortransfer}), the resulting optimal transport map will be much simpler, which seems to be desirable for some applications. \section*{Acknowledgments} We would like to thank Arthur Leclaire for his valuable assistance with the texture synthesis experiments. \section*{Appendix: proofs} \subsection*{Density of $GMM_d(\infty)$ in $\mathcal{P}_p(\R^d)$} \begin{lemmastar}[Lemma 3.1] The set \[ \left\{ \sum_{k=1}^N \pi_k \delta_{y_k}\;;\; N\in\N,\;(y_k)_k \in (\R^{d})^N, \; (\pi_k)_k\in \Gamma_N \right\}\] is dense in $\mathcal{P}_p(\R^d)$ for the metric $W_p$, for any $p\geq 1$. \end{lemmastar} \begin{proof} The proof is adapted from the proof of Theorem 6.18 in~\cite{villani2008optimal} and is given here for the sake of completeness. Let $\mu \in \mathcal{P}_p(\R^d)$. For each $\epsilon>0$, we can find $r$ such that $\int_{B(0,r)^c}\|y\|^pd\mu(y) \leq\epsilon^p$, where $B(0,r) \subset \R^d$ is the ball of center $0$ and radius $r$, and $B(0,r)^c$ denotes its complement in $\R^d$. The ball $B(0,r)$ can be covered by a finite number of balls $B(y_k,\epsilon)$, $1\leq k \leq N$. Now, define $B_k = B(y_k,\epsilon) \setminus \cup_{1\leq j<k} B(y_j,\epsilon)$; these sets are pairwise disjoint and still cover $B(0,r)$.
\\ Define $\phi:\R^d\rightarrow \R^d$ such that \[ \forall k,\; \forall y\in B_k \cap B(0,r) , \,\, \phi(y) = y_k\;\text{ and }\; \forall y\in B(0,r)^c, \,\, \phi(y) = 0.\] Then, \[\phi \# \mu = \sum_{k=1}^N \mu(B_k\cap B(0,r)) \delta_{y_k} + \mu(B(0,r)^c)\delta_0 \] and \begin{eqnarray*} W_p^p(\phi \# \mu,\mu) &\leq& \int_{\R^d} \|y - \phi(y)\|^pd\mu(y) \\ &\leq& \epsilon^p\int_{B(0,r)}d\mu(y) + \int_{B(0,r)^c} \|y\|^p d\mu(y) \leq \epsilon^p+ \epsilon^p = 2\epsilon^p, \end{eqnarray*} which finishes the proof. \end{proof} \subsection*{Identifiability properties of Gaussian mixture models} \begin{propstar}[Proposition 2] The set of finite Gaussian mixtures is identifiable, in the sense that two mixtures $\mu_0 =\sum_{k=1}^{K_0} \pi_0^k \mu_0^k$ and $\mu_1= \sum_{k=1}^{K_1} \pi_1^k \mu_1^k$, written such that the $\{\mu_0^k\}_k$ (resp. the $\{\mu_1^j\}_j$) are pairwise distinct, are equal if and only if $K_0=K_1$ and we can reorder the indexes in such a way that for all $k$, $\pi_0^k = \pi_1^k$, $m_0^k = m_1^k$ and $\Sigma_0^k = \Sigma_1^k$. \label{lemma:unicity_gaussian} \end{propstar} This result is classical, and the proof is given here for the sake of completeness. \begin{proof} This proof is an adaptation and simplification of the proof of Proposition 2 in~\cite{yakowitz1968}. First, assume that $d=1$ and that two Gaussian mixtures are equal: \begin{equation} \sum_{k=1}^{K_0} \pi_0^k \mu_0^k = \sum_{j=1}^{K_1} \pi_1^j \mu_1^j.\label{eq:GMM_equal} \end{equation} We start by identifying the Dirac masses from both sums, so that only non-degenerate Gaussian components remain. Writing $\mu_i^k = \mathcal N (m_i^k, (\sigma_i^k)^2)$, it follows that \[\sum_{k=1}^{K_0} \frac{\pi_0^k}{\sigma_0^k} e^{-\frac{(x-m_0^k)^2}{2(\sigma_0^k)^2}} = \sum_{j=1}^{K_1} \frac{\pi_1^j}{\sigma_1^j} e^{-\frac{(x-m_1^j)^2}{2(\sigma_1^j)^2}},\;\;\forall x \in \mathbb{R}.\] Now, define $k_0 = \mathrm{argmax}_k\, \sigma_0^k$ and $j_0 = \mathrm{argmax}_j\, \sigma_1^j$.
If the maximum is attained for several values of $k$ (resp. $j$), we keep the one with the largest mean $m_0^k$ (resp. $m_1^j$). Then, when $x\rightarrow +\infty$, we have the equivalences \[\sum_{k=1}^{K_0} \frac{\pi_0^k}{\sigma_0^k} e^{-\frac{(x-m_0^k)^2}{2(\sigma_0^k)^2}} \underset{x\to+\infty}{\sim} \frac{\pi_0^{k_0}}{\sigma_0^{k_0}} e^{-\frac{(x-m_0^{k_0})^2}{2(\sigma_0^{k_0})^2}} \;\;\text{ and } \sum_{j=1}^{K_1} \frac{\pi_1^j}{\sigma_1^j} e^{-\frac{(x-m_1^j)^2}{2(\sigma_1^j)^2}} \underset{x\to+\infty}{\sim} \frac{\pi_1^{j_0}}{\sigma_1^{j_0}} e^{-\frac{(x-m_1^{j_0})^2}{2(\sigma_1^{j_0})^2}}.\] Since the two sums are equal, these two terms must also be equivalent when $x\rightarrow +\infty$, which necessarily implies that $\sigma_0^{k_0} = \sigma_1^{j_0}$, $m_0^{k_0} = m_1^{j_0}$ and $\pi_0^{k_0} = \pi_1^{j_0}$. Now, we can remove these two components from the two sums, obtaining \[\sum_{k=1\dots K_0,\; k \neq k_0} \frac{\pi_0^k}{\sigma_0^k} e^{-\frac{(x-m_0^k)^2}{2(\sigma_0^k)^2}} = \sum_{j=1\dots K_1,\; j\neq j_0} \frac{\pi_1^j}{\sigma_1^j} e^{-\frac{(x-m_1^j)^2}{2(\sigma_1^j)^2}},\;\;\;\forall x \in \mathbb{R}.\] We can start over and show by induction that all components are equal. For $d>1$, assume once again that two Gaussian mixtures $\mu_0$ and $\mu_1$ are equal, written as in Equation~\eqref{eq:GMM_equal}. Projecting this equality onto an arbitrary direction $\xi$ yields \begin{equation} \sum_{k=1}^{K_0} \pi_0^k \mathcal N (\langle m_0^k,\xi\rangle ,\xi^t\Sigma_0^k\xi) = \sum_{j=1}^{K_1} \pi_1^j \mathcal N (\langle m_1^j,\xi\rangle ,\xi^t\Sigma_1^j\xi),\;\;\; \forall \xi\in \mathbb{R}^d.\label{eq:GMM_projected} \end{equation} At this point, observe that for some values of $\xi$ some of these projected components may not be pairwise distinct anymore, so we cannot directly apply the result for $d=1$ to such mixtures. However, since the pairs $(m_0^k,\Sigma_0^k)$ (resp.
$(m_1^j,\Sigma_1^j)$) are all distinct, then for $i=0,1$, the set \[\Theta_i = \bigcup_{1\leq k<k'\leq K_i}\left\{\xi \;\;\text{s.t.}\; \langle m_i^k - m_i^{k'},\xi\rangle = 0 \;\;\text{and}\;\; \xi^t\left(\Sigma_i^k - \Sigma_i^{k'}\right)\xi=0\right\}\] is of Lebesgue measure $0$ in $\mathbb{R}^d$. For any $\xi$ in $\mathbb{R}^d\setminus(\Theta_0\cup\Theta_1)$, the pairs $\{(\langle m_0^k,\xi\rangle,\xi^t\Sigma_0^k\xi)\}_{k}$ (resp. $\{(\langle m_1^j,\xi\rangle,\xi^t\Sigma_1^j\xi)\}_{j}$) are pairwise distinct. Consequently, using the first part of the proof (for $d=1$), we can deduce that $K_0 = K_1$ and that \begin{equation} \label{eq:inclusion} \mathbb{R}^d\setminus (\Theta_0\cup\Theta_1) \subset \bigcap_k\bigcup_j \Xi_{k,j} \end{equation} where \begin{equation*} \Xi_{k,j} = \left\{\xi\;\;\text{s.t.}\; \pi_0^k = \pi_1^j,\;\;\langle m_0^k - m_1^{j},\xi\rangle = 0 \;\;\text{and}\;\; \xi^t\left(\Sigma_0^k - \Sigma_1^{j}\right)\xi=0\right\}.\label{eq:2} \end{equation*} Now, assume that the two sets $\{(\pi_0^k,m_0^k,\Sigma_0^k)\}_k$ and $\{(\pi_1^j,m_1^j,\Sigma_1^j)\}_j$ are different. Since each of these sets is composed of pairwise distinct triplets, this is equivalent to assuming that there exists $k$ in $\{1,\dots, K_0\}$ such that $(\pi_0^k,m_0^k,\Sigma_0^k)$ is different from all the triplets $(\pi_1^j,m_1^j,\Sigma_1^j)$. In this case, the sets $\Xi_{k,j}$ for $j = 1,\dots, K_0$ are all of Lebesgue measure $0$ in $\mathbb{R}^d$, which contradicts~\eqref{eq:inclusion}. We conclude that the sets $\{(\pi_0^k,m_0^k,\Sigma_0^k)\}_k$ and $\{(\pi_1^j,m_1^j,\Sigma_1^j)\}_j$ are equal. \end{proof} \end{document}
\begin{document} \setcounter{footnote}{0} \setcounter{equation}{0} \title [Some Aspects of Boolean Valued Analysis] {Some Aspects of Boolean Valued Analysis} \author{A.~G.~Kusraev and S.~S.~Kutateladze} \begin{abstract} This is a~survey of some recent applications of Boolean valued analysis to operator theory and harmonic analysis. Under consideration are pseudoembedding operators, the noncommutative Wickstead problem, the Radon--Nikod\'ym Theorem for $JB$-algebras, and the Bochner Theorem for lattice-valued positive definite mappings on locally compact groups. \end{abstract} \keywords {Boolean valued transfer principle, pseudoembedding operator, Wickstead problem, Radon--Nikod\'ym theorem, Fourier transform, positive definite mapping, Bochner theorem.} \date{February 14, 2015} \maketitle \section*{1.~Introduction} We survey here some aspects of Boolean valued analysis that concern operator theory. The term {\it Boolean valued analysis\/} signifies the technique of studying properties of an arbitrary mathematical object by comparison between its representations in two different set-theoretic models whose construction utilizes principally distinct Boolean algebras. As these models, the classical Cantorian paradise in the shape of the von Neumann universe $\mathbb{V}$ and a specially-trimmed Boolean valued universe $\mathbb{V}^{(\mathbb{B})}$ are usually taken. Comparison analysis is carried out by some interplay between $\mathbb{V}$ and $\mathbb{V}^{(\mathbb{B})}$. Boolean valued analysis not only is tied up with many topological and geometrical ideas but also provides a technology for expanding the content of the already available theorems. Each theorem, proven by the classical means, possesses some new unobvious content that relates to ``variable sets.'' A general scheme of the method is as follows; see \cite{IBA, BA_ST}. Assume that $\mathbf{X}\subset\mathbb{V}$ and $\mathbb{X}\subset\mathbb{V}^{(\mathbb{B})}$ are two classes of mathematical objects. 
Suppose that we are able to prove the {\it Boolean Valued Representation:}~every $X\in\mathbf{X}$ embeds into a Boolean valued model, becoming an object $\mathcal{X}\in\mathbb{X}$ within $\mathbb{V}^{(\mathbb{B})}$. The {\it Boolean Valued Transfer Principle} tells us then that every theorem about $\mathcal{X}$ within Zermelo--Fraenkel set theory has its counter\-part for the original object $X$ interpreted as a Boolean valued object $\mathcal{X}$. The {\it Boolean Valued Machinery} enables us to perform some translation of theorems from $\mathcal{X}\in\mathbb{V}^{(\mathbb{B})}$ to $X\in\mathbb{V}$ by using the appropriate general operations (ascending--descending) and the principles of Boolean valued analysis. Everywhere below $\mathcal{R}$ stands for the reals within $\mathbb{V}^{(\mathbb{B})}$. The Gordon Theorem states that the descent $\mathcal{R}{\downarrow}$, which is an algebraic system in $\mathbb{V}$, is a universally complete vector lattice; see \cite{IBA, BA_ST}. Moreover, there exists an~isomorphism~$\chi$ of~$\mathbb{B}$ onto the Boolean algebra $\mathbb{P}(\mathcal{R}{\downarrow})$ such that $$ \gathered \chi (b) x=\chi (b) y\Longleftrightarrow b\le [\![\,x=y\,]\!], \\ \chi (b) x\le \chi (b) y\Longleftrightarrow b\le [\![\,x\le y\,]\!] \endgathered \eqno(\mathbb{G}) $$ for all~$x, y\in \mathcal{R}{\downarrow}$ and $b\in\mathbb{B}$. The {\it restricted descent} $\mathcal{R}{\Downarrow}$ of $\mathcal{R}$ is the part of $\mathcal{R}{\downarrow}$ consisting of the elements $x\in\mathcal{R}{\downarrow}$ with $|x|\leq C\mathbf{1}$ for some $C\in\mathbb{R}$, where $\mathbf{1}$ is an order unit in $\mathcal{R}{\downarrow}$. By a~{\it vector lattice} throughout the~sequel we mean an Archimedean real vector lattice.
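A classical special case may help to visualize the Gordon Theorem: if $\mathbb{B}$ is the measure algebra of a~$\sigma$-finite measure space $(\Omega,\Sigma,\mu)$, then
$$
\mathcal{R}{\downarrow}\,\cong\, L^0(\Omega,\Sigma,\mu),\qquad
\mathcal{R}{\Downarrow}\,\cong\, L^\infty(\Omega,\Sigma,\mu),
$$
with $\chi(b)$ acting as multiplication by the indicator function of (a~representative of) $b$, so that the equivalences $(\mathbb{G})$ turn into the usual almost-everywhere equality and inequality of measurable functions.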
We denote the~Boolean algebras of all bands and all band projections in a~vector lattice~$X$ respectively by $\mathbb{B}(X)$ and $\mathbb{P}(X)$ and we let $\mathcal{Z}(X)$ and $\mathop{\fam0 Orth}(X)$ stand for the {\it ideal center} of $X$ and the $f$-{\it algebra of orthomorphisms} on $X$, respectively. The~universal completion $X^\mathbb{u}$ of a vector lattice $X$ is always considered as a semiprime $f$-algebra whose multiplication is uniquely determined by fixing an order unit as a ring unit. The space of all order bounded linear operators from $X$ to $Y$ is denoted by $L^\sim(X,Y)$. The Riesz--Kan\-to\-rovich Theorem tells us that if $Y$ is a Dedekind complete vector lattice then so is $L^\sim(X,Y)$. A linear operator $T$ from $X$ to $Y$ is a \textit{lattice homomorphism\/} whenever $T$ preserves lattice operations; i.e., $T(x_1\vee x_2)=Tx_1\vee Tx_2$ (and so $T(x_1\wedge x_2)=Tx_1\wedge Tx_2$) for all $x_1,x_2\in X$. Vector lattices $X$ and $Y$ are said to be \textit{lattice isomorphic\/} if there is a lattice isomorphism from $X$ onto $Y$, i.e., $T$ and $T^{-1}$ are lattice homomorphisms. Let $\mathop{\fam0 Hom}(X,Y)$ stand for the set of all lattice homomorphisms from $X$ to $Y$. Recall also that the elements of the band $L_d^{\sim}(X,Y)\!:=\mathop{\fam0 Hom}(X,Y)^\perp$ are referred to as \textit{diffuse operators}. An order bounded operator $T:X\to Y$ is called a \textit{pseudoembedding\/} if $T$ belongs to the complementary band $L_a^{\sim}(X,Y)\!:= \mathop{\fam0 Hom}(X,Y)^{\perp\perp}$, the band generated by all lattice homomorphisms. Put $X^\sim\!:=L^\sim(X,\mathbb{R})$ and $X^\sim_a\!:=L^\sim_a(X,\mathbb{R})$. We~let $\!:=$ denote the~assignment by definition, while $\mathbb{N}$, $\mathbb{R}$, and $\mathbb{C}$ symbolize the~naturals, the~reals, and the~complexes. Throughout the sequel $\mathbb{B}$ is a complete Boolean algebra with top $\mathbb{1}$ and bottom $\mathbb{0}$.
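A classical scalar example fixes the ideas: for $X=C(Q)$, the vector lattice of continuous functions on a compact Hausdorff space $Q$, the nonzero lattice homomorphisms $X\rightarrow\mathbb{R}$ are exactly the functionals $f\mapsto c f(q)$ with $q\in Q$ and $c>0$. By the Riesz representation theorem $X^\sim$ consists of the regular Borel measures on $Q$, the band $X^\sim_a$ consists of the purely atomic measures, and the diffuse functionals are precisely the nonatomic measures, which explains the terminology.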
A~{\it partition of unity\/} in~$\mathbb{B}$ is a~family $(b_{\xi})_{\xi\in \Xi}$ in $\mathbb{B}$ such that $\bigvee_{\xi\in \Xi} b_{\xi}=\mathbb{1}$ and $b_{\xi}\wedge b_{\eta}=\mathbb{0}$ whenever $\xi\ne \eta$. The~reader can find the~relevant information on the~theory of order bounded operators in~\cite{AB, DOP, LZ, Vul, Zaan}; on the Boolean valued models of set theory, in \cite{Bell, TZ}; and on Boolean valued analysis, in~\cite{BVA, IBA}. \section*{2. Pseudoembedding Operators} In this section we will give a~description of the band generated by disjointness preserving operators in the~vector lattice of order bounded operators. First we examine the scalar case. \subsec{2.1.}\proclaim{}For an arbitrary vector lattice $X$ there exist a~unique cardinal $\gamma$ and a~disjoint family $(\varphi_ \alpha)_{ \alpha<\gamma}$ of nonzero lattice homomor\-phisms $\varphi_ \alpha:X\rightarrow\mathbb{R}$ such that every $f\in X^{\sim}$ admits the unique representation $$ f=f_d+\mathop{o\text{\/-}\!\sum}_{ \alpha<\gamma}\lambda_ \alpha\varphi_ \alpha $$ where $f_d\in X_d^{\sim}$ and $(\lambda_ \alpha)_{ \alpha<\gamma}\subset\mathbb{R}$. The family $(\varphi_ \alpha)_{ \alpha<\gamma}$ is unique up to permutation and positive scalar multiplication. \rm \par\mbox{$\vartriangleleft$}~The Dedekind complete vector lattice $X^{\sim}$ splits into the direct sum of the atomic band $X_a^{\sim}$ and the diffuse band $X_d^{\sim}\!:=(X_a^\sim)^\perp$; therefore, each functional $f\in X^{\sim}$ admits the unique representation $f=f_a+f_d$ with $f_a\in X_a^{\sim}$ and $f_d\in X_d^{\sim}$. Let $\gamma$ be the cardinality of the set $\mathcal{K}$ of one-dimensional bands in $X_a^{\sim}$ ($=$~atoms in $\mathbb{B}(X^{\sim})$). Then there exists a~family of lattice homomorphisms $(\varphi_ \alpha:X\rightarrow\mathbb{R})_{ \alpha<\gamma}$ such that $\mathcal{K}=\{\varphi_ \alpha^{\perp\perp}:\, \alpha<\gamma\}$.
It remains to observe that the mapping sending a~family of reals $(\lambda_ \alpha)_{ \alpha<\gamma}$ to the functional $x\mapsto\mathop{o\text{\/-}\!\sum}_{ \alpha<\gamma}\lambda_ \alpha\varphi_ \alpha(x)$ implements a~lattice isomorphism between $X_a^{\sim}$ and some ideal in the vector lattice $\mathbb{R}^\gamma$. If $(\psi_ \alpha)_{ \alpha<\gamma}$ is a~disjoint family of nonzero real lattice homomorphisms on $X$ then for all $ \alpha,\beta<\gamma$ the functionals $\varphi_ \alpha$ and $\psi_\beta$ are either disjoint or proportional with a~strictly positive coefficient, so that there exist a~permutation $(\omega_\beta)_{\beta<\gamma}$ of $(\varphi_ \alpha)_{ \alpha<\gamma}$ and a~unique family $(\mu_\beta)_{\beta<\gamma}$ in $\mathbb{R}_+$ such that $\psi_\beta=\mu_\beta\omega_\beta$ for all $\beta<\gamma$.~\text{$\vartriangleright$} \subsec{2.2.}~Given two families $(S_ \alpha)_{ \alpha\in\mathrm{A}}$ and $(T_\beta)_{\beta\in\mathrm{B}}$ in $L^\sim(X,Y)$, say that $(S_ \alpha)_{ \alpha\in\mathrm{A}}$ is a~$\mathbb{P}(Y)$-\textit{permutation\/} of $(T_\beta)_{\beta\in\mathrm{B}}$ whenever there exists a~double family $(\pi_{ \alpha,\beta})_{ \alpha\in\mathrm{A},\,\beta\in\mathrm{B}}$ in $\mathbb{P}(Y)$ such that $S_ \alpha=\sum_{\beta\in\mathrm{B}}\pi_{ \alpha,\beta}T_\beta$ for all $ \alpha\in\mathrm{A}$, while $(\pi_{ \alpha,\bar{\beta}})_{ \alpha\in\mathrm{A}}$ and $(\pi_{\bar{ \alpha},\beta})_{\beta\in\mathrm{B}}$ are partitions of unity in $\mathbb{B}(Y)$ for all $\bar{ \alpha}\in\mathrm{A}$ and $\bar{\beta}\in\mathrm{B}$. It is easily seen that in case $Y=\mathbb{R}$ this amounts to saying that there is a~bijection $\nu:\mathrm{A}\rightarrow\mathrm{B}$ with $S_ \alpha=T_{\nu( \alpha)}$ for all $ \alpha\in\mathrm{A}$; i.e., $(S_ \alpha)_{ \alpha\in\mathrm{A}}$ is a~permutation of $(T_\beta)_{\beta\in\mathrm{B}}$.
We also say that $(S_ \alpha)_{ \alpha\in\mathrm{A}}$ is an~$\mathop{\fam0 Orth}(Y)$-multiple of $(T_ \alpha)_{ \alpha\in\mathrm{A}}$ whenever there exists a~family of orthomorphisms $(\pi_ \alpha)_{ \alpha\in\mathrm{A}}$ in $\mathop{\fam0 Orth}(Y)$ such that $S_ \alpha=\pi_ \alpha T_ \alpha$ for all $ \alpha\in\mathrm{A}$. In case $Y=\mathbb{R}$ we evidently get that $S_ \alpha$ is a~scalar multiple of $T_ \alpha$ for all $ \alpha\in\mathrm{A}$. Using the above notation, define the two mappings $\mathcal{S}:\mathrm{A}\rightarrow X^{{\scriptscriptstyle\wedge}\sim}{\downarrow}$ and $\mathcal{T}:\mathrm{B}\rightarrow X^{{\scriptscriptstyle\wedge}\sim}{\downarrow}$ by putting $\mathcal{S}( \alpha)\!:=S_ \alpha{\upwardarrow}$ $( \alpha\in\mathrm{A})$ and $\mathcal{T}(\beta)\!:=T_\beta{\upwardarrow}$ $(\beta\in\mathrm{B})$. Recall that $\upwardarrow$ signifies the modified ascent; see \cite[1.6.8]{BA_ST}. \subsec{2.3.}\proclaim{}Define the internal mappings $\tau,\sigma\in\mathbb{V}^{(\mathbb{B})}$ as $\sigma\!:=\mathcal{S}{\upwardarrow}$ and $\tau\!:=\mathcal{T}{\upwardarrow}$. Then $(\sigma( \alpha))_{ \alpha\in\mathrm{A}^{\scriptscriptstyle\wedge}}$ is a~permutation of $(\tau(\beta))_{\beta\in\mathrm{B}^{\scriptscriptstyle\wedge}}$ within~$\mathbb{V}^{(\mathbb{B})}$ if and only if $(S_ \alpha)_{ \alpha\in\mathrm{A}}$ is a~$\mathbb{P}(Y)$-\textit{permutation\/} of $(T_\beta)_{\beta\in\mathrm{B}}$. \rm \par\mbox{$\vartriangleleft$}~Assume that $(\sigma( \alpha))_{ \alpha\in\mathrm{A}^{\scriptscriptstyle\wedge}}$ is a~permutation of $(\tau(\beta))_{\beta\in\mathrm{B}^{\scriptscriptstyle\wedge}}$ within $\mathbb{V}^{(\mathbb{B})}$. Then there is a~bijection $\nu:\mathrm{B}^{\scriptscriptstyle\wedge} \rightarrow\mathrm{A}^{\scriptscriptstyle\wedge}$ such that $\sigma( \alpha)=\tau(\nu( \alpha))$ for all $( \alpha\in\mathrm{A}^{\scriptscriptstyle\wedge})$.
By \cite[1.5.8]{BA_ST}, $\nu{\downwardarrow}$ is a~function from $\mathrm{A}$ to $(\mathrm{B}^{\scriptscriptstyle\wedge}){\downarrow}= \mathop{\fam0 mix}\nolimits(\{\beta^{\scriptscriptstyle\wedge}:\,\beta\in\mathrm{B}\})$. Thus, for each $ \alpha\in\mathrm{A}$ there exists a~partition of unity $(b_{ \alpha,\beta})_{\beta\in\mathrm{B}}$ such that $\nu{\downwardarrow}( \alpha)=\mathop{\fam0 mix}\nolimits_{\beta\in\mathrm{B}}(b_{ \alpha,\beta}\beta^{\scriptscriptstyle\wedge})$. Since $\nu{\downwardarrow}$ is injective, we have $$ \gathered \mathbb{1}=[\![(\forall \alpha_1, \alpha_2\in\mathrm{A}^{\scriptscriptstyle\wedge}) (\nu( \alpha_1)=\nu( \alpha_2)\rightarrow \alpha_1= \alpha_2)]\!] \\ =\bigwedge_{ \alpha_1, \alpha_2\in\mathrm{A}} [\![\nu( \alpha_1^{\scriptscriptstyle\wedge}) =\nu( \alpha_2^{\scriptscriptstyle\wedge})\rightarrow \alpha_1^{\scriptscriptstyle\wedge}= \alpha_2^{\scriptscriptstyle\wedge}]\!] \\ =\bigwedge_{ \alpha_1, \alpha_2}\bigl([\![\nu{\downwardarrow}( \alpha_1) =\nu{\downwardarrow}( \alpha_2)]\!]\Rightarrow [\![ \alpha_1^{\scriptscriptstyle\wedge}= \alpha_2^{\scriptscriptstyle\wedge}]\!]\bigr), \endgathered $$ and so $[\![\nu{\downwardarrow}( \alpha_1)=\nu{\downwardarrow}( \alpha_2)]\!] \leq[\![ \alpha_1^{\scriptscriptstyle\wedge}= \alpha_2^{\scriptscriptstyle\wedge}]\!]$ for all $ \alpha_1, \alpha_2\in\mathrm{A}$. Taking this inequality and the definition of $\nu{\downwardarrow}$ into account yields $$ \gathered b_{ \alpha_1,\beta}\wedge b_{ \alpha_2,\beta}\leq[\![\nu{\downwardarrow}( \alpha_1)=\beta^{\scriptscriptstyle\wedge}]\!] \wedge[\![\nu{\downwardarrow}( \alpha_2)=\beta^{\scriptscriptstyle\wedge}]\!] \\ \leq[\![\nu{\downwardarrow}( \alpha_1)=\nu{\downwardarrow}( \alpha_2)]\!]
\leq[\![ \alpha_1^{\scriptscriptstyle\wedge}= \alpha_2^{\scriptscriptstyle\wedge}]\!], \endgathered $$ so that $ \alpha_1\ne \alpha_2$ implies $b_{ \alpha_1,\beta}\wedge b_{ \alpha_2,\beta}=\mathbb{0}$ (because $x \ne y\Longleftrightarrow[\![x^{\scriptscriptstyle\wedge}=y^{\scriptscriptstyle\wedge}]\!]=\mathbb{0}$ by \cite[1.4.5\,(2)]{BA_ST}). At the same time, surjectivity of $\nu$ implies $$ \gathered \mathbb{1}=[\![(\forall\,\beta\in\mathrm{B}^{\scriptscriptstyle\wedge}) (\exists\, \alpha\in\mathrm{A}^{\scriptscriptstyle\wedge})\beta=\nu( \alpha)]\!] \\ =\bigwedge_{\beta\in\mathrm{B}}\bigvee_{ \alpha\in\mathrm{A}}[\![\beta^{\scriptscriptstyle\wedge} =\nu{\downwardarrow}( \alpha)]\!]=\bigwedge_{\beta\in\mathrm{B}}\bigvee_{ \alpha\in\mathrm{A}} b_{ \alpha,\beta}. \endgathered $$ It follows that $(b_{ \alpha,\beta})_{ \alpha\in\mathrm{A}}$ is a~partition of unity in $\mathbb{B}$ for all $\beta\in\mathrm{B}$. By the choice of $\nu$ it follows that $b_{ \alpha,\beta}\leq[\![\sigma( \alpha^{\scriptscriptstyle\wedge})=\tau(\beta^{\scriptscriptstyle\wedge})]\!]$, because of the estimates $$ \gathered b_{ \alpha,\beta}\leq[\![\sigma( \alpha^{\scriptscriptstyle\wedge})= \tau(\nu( \alpha^{\scriptscriptstyle\wedge}))]\!]\wedge[\![\nu( \alpha^{\scriptscriptstyle\wedge})=\beta^{\scriptscriptstyle\wedge}]\!] \\ \leq[\![\sigma( \alpha^{\scriptscriptstyle\wedge})= \tau(\beta^{\scriptscriptstyle\wedge})]\!] =[\![\mathcal{S}( \alpha)=\mathcal{T}(\beta)]\!]. \endgathered $$ Put now $\pi_{ \alpha,\beta}\!:=\chi(b_{ \alpha,\beta})$ and observe that $b_{ \alpha,\beta}\leq[\![\mathcal{S}( \alpha)x^{\scriptscriptstyle\wedge}= \mathcal{T}(\beta)x^{\scriptscriptstyle\wedge}]\!] \leq[\![S_ \alpha x=T_\beta x]\!]$ for all $ \alpha\in\mathrm{A}$, $\beta\in\mathrm{B}$, and $x\in X$. Using $(\mathbb{G})$, we obtain $\pi_{ \alpha,\beta}S_ \alpha=\pi_{ \alpha,\beta}T_\beta$ and so $S_ \alpha=\sum_{\beta\in\mathrm{B}}\pi_{ \alpha,\beta}T_\beta$ for all $ \alpha\in\mathrm{A}$.
Clearly, $(\pi_{ \alpha,\beta})$ is a~family as required in Definition 2.2. The sufficiency is shown by the same reasoning in the reverse direction.~\text{$\vartriangleright$} \subsec{2.4.}~A nonempty set $\mathcal{D}$ of positive operators from $X$ to $Y$ is called {\it strongly generating\/} if $\mathcal{D}$ is a disjoint set and $S(X)^{\perp\perp}=Y$ for all $S\in\mathcal{D}$. If, in addition, $\mathcal{D}^{\perp\perp}=B$, then we say also that $\mathcal{D}$ \textit{strongly generates\/} the band $B\subset L^{\sim}(X,Y)$ or $B$ is strongly generated by $\mathcal{D}$. In case $Y=\mathbb{R}$, the strongly generating sets in $X^\sim=L^{\sim}(X,\mathbb{R})$ are precisely the disjoint sets of nonzero positive functionals. Given a~cardinal $\gamma$ and a~universally complete vector lattice $Y$, say that a~vector lattice $X$ is $(\gamma,Y)$-\textit{homogeneous\/} if the band $L^{\sim}_a(X,Y)$ is strongly generated by a~set of lattice homomorphisms of cardinality $\gamma$ and for every nonzero projection $\pi\in\mathbb{P}(Y)$ and every strongly generating set $\mathcal{D}$ in $L^{\sim}_a(X,\pi Y)$ we have $\mathop{\fam0 card}(\mathcal{D})\geq\gamma$. We say also that $X$ is $(\gamma,\pi)$-homogeneous if $\pi\in\mathbb{P}(Y)$ and $X$ is $(\gamma,\pi Y)$-homogeneous. Evidently, the $(\gamma,\mathbb{R})$-homogeneity of a~vector lattice $X$ amounts just to saying that the band $X_a^{\sim}$ is generated in $X^\sim$ by a~disjoint set of nonzero lattice homomorphisms of cardinality $\gamma$ or, equivalently, that the cardinality of the set of atoms in $\mathbb{B}(X^\sim)$ equals $\gamma$. Take $\mathcal{D}\subset L^\sim(X,\mathcal{R}{\downarrow})$ and $\Delta\in\mathbb{V}^{(\mathbb{B})}$ with $[\![\Delta\subset (X^{\scriptscriptstyle\wedge})^\sim]\!]=\mathbb{1}$. Put $\mathcal{D}_{\uparrow}\!:=\{T{\upwardarrow}:\,T\in\mathcal{D}\}{\uparrow}$ and $\Delta^{\downarrow}\!:=\{\tau{\downwardarrow}:\,\tau\in\Delta{\downarrow}\}$.
Let $\mathop{\fam0 mix}\nolimits(\mathcal{D})$ stand for the set of all $T\in L^\sim(X,\mathcal{R}{\downarrow})$ representable as $Tx=\mathop{o\text{\/-}\!\sum}_{\xi\in\Xi}\pi_\xi T_\xi x$ $(x\in X)$ with $(\pi_\xi)_{\xi\in\Xi}$ a~partition of unity in $\mathbb{P}(\mathcal{R}{\downarrow})$ and $(T_\xi)_{\xi\in\Xi}$ a~family in $\mathcal{D}$. \subsec{2.5.}~\proclaim{}Let $\Delta\subset(X^{\scriptscriptstyle\wedge})^\sim$ be a~disjoint set of nonzero positive func\-tio\-nals which has cardinality $\gamma^{\scriptscriptstyle\wedge}$ within $\mathbb{V}^{(\mathbb{B})}$. Then there exists a~cardinality $\gamma$ strongly generating set of positive operators $\mathcal{D}$ from $X$ to $\mathcal{R}{\downarrow}$ such that $\Delta=\mathcal{D}_\uparrow$ and $\Delta^\downarrow=\mathop{\fam0 mix}\nolimits(\mathcal{D})$. \rm \par\mbox{$\vartriangleleft$}~If $\Delta$ obeys the conditions then there is $\phi\in\mathbb{V}^{(\mathbb{B})}$ such that $[\![\phi:\gamma^{\scriptscriptstyle\wedge}\rightarrow\Delta$ is a~bijection$]\!]=\mathbb{1}$. Note that $\phi{\downarrow}$ sends $\gamma$ into $\Delta{\downarrow}\subset(X^{\scriptscriptstyle\wedge})^\sim{\downarrow}$ by \cite[1.5.8]{BA_ST}. By \cite[Theorem 3.3.3]{BA_ST}, we can define the mapping $\alpha\mapsto\Phi(\alpha)$ from $\gamma$ to $L^\sim(X,\mathcal{R}{\downarrow})$ by putting $\Phi(\alpha)\!:=(\phi{\downarrow}(\alpha)){\downarrow}$. Put $\mathcal{D}\!:=\{\Phi(\alpha):\,\alpha\in\gamma\}$ and note that $\mathcal{D}\subset\Delta^{\downarrow}$. Using~\cite[1.6.6]{BA_ST} and the surjectivity of $\phi$ we have $\Delta{\downarrow}= \phi(\gamma^{\scriptscriptstyle\wedge}){\downarrow}= \mathop{\fam0 mix}\nolimits\{\phi{\downarrow}(\alpha):\,\alpha\in\gamma\}$ and combining this with \cite[3.3.7]{BA_ST} we get $\Delta=\mathcal{D}_\uparrow$ and $\Delta^\downarrow=\mathop{\fam0 mix}\nolimits(\mathcal{D})$.
The injectivity of $\phi$ implies that $$ [\![(\forall\,\alpha,\beta\in\gamma^{\scriptscriptstyle\wedge})(\alpha\ne\beta \rightarrow\phi(\alpha)\ne\phi(\beta))]\!]=\mathbb{1}. $$ Replacing the universal quantifier by the infimum over $\alpha,\beta\in\gamma^{\scriptscriptstyle\wedge}$, from \cite[1.4.5\,(1) and 1.4.5\,(2)]{BA_ST} we deduce that $$ \mathbb{1}=\bigwedge_{\alpha,\beta\in\gamma} \bigl([\![\alpha^{\scriptscriptstyle\wedge}\ne\beta^{\scriptscriptstyle\wedge}]\!] \Rightarrow[\![\phi(\alpha^{\scriptscriptstyle\wedge})\ne\phi(\beta^{\scriptscriptstyle\wedge})]\!]\bigr) =\bigwedge_{\substack{{\alpha,\beta\in\gamma}\\{\alpha\ne\beta}}} [\![\phi(\alpha^{\scriptscriptstyle\wedge})\ne\phi(\beta^{\scriptscriptstyle\wedge})]\!], $$ and so $\alpha\ne\beta$ implies $\Phi(\alpha)\ne\Phi(\beta)$ for all $\alpha,\beta\in\gamma$. Thus $\Phi$ is injective and the cardinality of $\mathcal{D}$ is $\gamma$. The fact that $\mathcal{D}$ is strongly generating follows from \cite[3.3.5\,(5) and 3.8.4]{BA_ST}.~\text{$\vartriangleright$} \subsec{2.6.}\proclaim{}If $\mathcal{D}$ is a~cardinality $\gamma$ strongly generating set of positive operators from $X$ to $\mathcal{R}{\downarrow}$ then $\Delta=\mathcal{D}_\uparrow\subset (X^{\scriptscriptstyle\wedge})^\sim$ is a~disjoint set of nonzero positive func\-tio\-nals which has cardinality $|\gamma^{\scriptscriptstyle\wedge}|$ within $\mathbb{V}^{(\mathbb{B})}$. \rm \par\mbox{$\vartriangleleft$}~Assume that $\mathcal{D}\subset L^\sim(X,\mathcal{R}{\downarrow})$ is a~strongly generating set of car\-di\-na\-lity $\gamma$. Then there is a~bijection $f:\gamma\rightarrow\{T{\uparrow}:\,T\in\mathcal{D}\}$. Moreover, $\alpha\ne\beta$ implies $[\![f(\alpha)\perp f(\beta)]\!]=\mathbb{1}$ by \cite[3.3.5\,(5)]{BA_ST} and $[\![f(\alpha)\ne0]\!]=\mathbb{1}$ by~\cite[3.8.4]{BA_ST}.
Interpreting in $\mathbb{V}^{(\mathbb{B})}$ the $\mathop{\fam0 ZFC}$-theorem $$ (\forall f,g\in X^\sim)(f\ne0\wedge g\ne0\wedge f\perp g\rightarrow f\ne g) $$ yields $[\![f(\alpha)\ne f(\beta)]\!]=\mathbb{1}$ for all $\alpha,\beta\in\gamma$, $\alpha\ne\beta$. It follows that $\phi\!:=f{\uparrow}$ is a~bijection from $\gamma^{\scriptscriptstyle\wedge}$ onto $\Delta=\{T{\uparrow}:\,T\in\mathcal{D}\}{\uparrow}=\mathcal{D}_\uparrow$, so that the cardinality of $\Delta$ is $|\gamma^{\scriptscriptstyle\wedge}|$. The proof is completed by arguments similar to those in~2.5.~\text{$\vartriangleright$} \subsec{2.7.}\proclaim{}A vector lattice $X$ is $(\gamma,\mathcal{R}{\downarrow})$-homogeneous for some cardinal $\gamma$ if and only if $[\![\,\gamma^{\scriptscriptstyle\wedge}$ is a~cardinal and $X^{\scriptscriptstyle\wedge}$ is $(\gamma^{\scriptscriptstyle\wedge},\mathcal{R})$-homogeneous $]\!]=\mathbb{1}$. \rm \par\mbox{$\vartriangleleft$}~\textit{Sufficiency:}~Assume that $\gamma^{\scriptscriptstyle\wedge}$ is a~cardinal and $X^{\scriptscriptstyle\wedge}$ is $(\gamma^{\scriptscriptstyle\wedge},\mathcal{R})$-homo\-ge\-neous within $\mathbb{V}^{(\mathbb{B})}$. The latter means that $(X^{\scriptscriptstyle\wedge})_a^\sim$ is generated by a~disjoint set of nonzero lattice homomorphisms $\Delta\subset(X^{\scriptscriptstyle\wedge})^\sim$ which has cardinality $\gamma^{\scriptscriptstyle\wedge}$ within $\mathbb{V}^{(\mathbb{B})}$. By~2.5 there exists a~strongly generating set $\mathcal{D}$ in $L^\sim_a(X,\mathcal{R}{\downarrow})$ of cardinality $\gamma$ such that $\Delta=\mathcal{D}_{\uparrow}$. Take a~nonzero $\pi\in\mathbb{P}(\mathcal{R}{\downarrow})$ and put $b\!:=\chi^{-1}(\pi)$. Recall that we can identify $L^\sim(X,\pi(\mathcal{R}{\downarrow}))$ and $L^\sim(X,(b\wedge\mathcal{R}){\downarrow})$.
If $\mathcal{D}'$ is a~strongly generating set in $L_a^\sim(X,\pi(\mathcal{R}{\downarrow}))$ of cardinality $\beta$ then $\mathcal{D}^\prime_{\uparrow}$ strongly generates $(X^{\scriptscriptstyle\wedge})_a^\sim$ and has cardinality $|\beta^{\scriptscriptstyle\wedge}|$ within the relative universe $\mathbb{V}^{([\mathbb{0},b])}$. By \cite[1.3.7]{BA_ST} $\gamma^{\scriptscriptstyle\wedge}= |\beta^{\scriptscriptstyle\wedge}|\leq \beta^{\scriptscriptstyle\wedge}$ and so $\gamma\leq\beta$. \textit{Necessity:}~Assume now that $X$ is $(\gamma,\mathcal{R}{\downarrow})$-homogeneous and the~set $\mathcal{D}$ of lattice homomorphisms of cardinality $\gamma$ strongly generates the band $L_a^\sim(X,\mathcal{R}{\downarrow})$. Then $\Delta=\mathcal{D}_{\uparrow}$ generates the band $(X^{\scriptscriptstyle\wedge})_a^\sim$ and the cardinalities of $\Delta$ and $\gamma^{\scriptscriptstyle\wedge}$ coincide; i.e., $|\Delta|=|\gamma^{\scriptscriptstyle\wedge}|$. By \cite[1.9.11]{BA_ST} the cardinal $|\gamma^{\scriptscriptstyle\wedge}|$ has the representation $|\gamma^{\scriptscriptstyle\wedge}| =\mathop{\fam0 mix}\nolimits_{\alpha\leq\gamma}b_\alpha \alpha^{\scriptscriptstyle\wedge}$, where $(b_\alpha)_{\alpha\leq\gamma}$ is a~partition of unity in $\mathbb{B}$. It follows that $b_\alpha\leq[\![\Delta$ is a~generating set in $(X^{\scriptscriptstyle\wedge})_a^\sim$ of cardinality $\alpha^{\scriptscriptstyle\wedge}]\!]$. If $b_\alpha\ne\mathbb{0}$ then $b_\alpha\wedge\Delta$ is a~generating set in $(X^{\scriptscriptstyle\wedge})_a^\sim$ of cardinality $|\gamma^{\scriptscriptstyle\wedge}|=\alpha^{\scriptscriptstyle\wedge}\leq\gamma^{\scriptscriptstyle\wedge}$ in the relative universe $\mathbb{V}^{([\mathbb{0},b_\alpha])}$. Put $\pi_\alpha=\chi(b_\alpha)$ and $\pi_\alpha\circ\mathcal{D}\!:=\{\pi_\alpha\circ T:\,T\in\mathcal{D}\}$.
Clearly, $b_\alpha\wedge\Delta=(\pi_\alpha\circ\mathcal{D})_{\uparrow}$ and so $\pi_\alpha\circ\mathcal{D}$ strongly generates the band $L_a^\sim(X,\pi_\alpha(\mathcal{R}{\downarrow}))$. By hypothesis $X$ is $(\gamma,\mathcal{R}{\downarrow})$-homogeneous; consequently, $\alpha\geq\gamma$, so that $\alpha=\gamma$, since $\alpha\leq\gamma$ if and only if $\alpha^{\scriptscriptstyle\wedge}\leq\gamma^{\scriptscriptstyle\wedge}$. Thus, $|\gamma^{\scriptscriptstyle\wedge}|=\gamma^{\scriptscriptstyle\wedge}$ whenever $b_\alpha\ne\mathbb{0}$ and $\gamma^{\scriptscriptstyle\wedge}$ is a~cardinal within $\mathbb{V}^{(\mathbb{B})}$.~\text{$\vartriangleright$} \subsec{2.8.}\proclaim{}Let $X$ be a~$(\gamma,Y)$-homogeneous vector lattice for some universally complete vector lattice $Y$ and a~nonzero cardinal $\gamma$. Then there exists a~strongly generating family of lattice homomorphisms $(\Phi_{\alpha})_{\alpha<\gamma}$ from $X$ to $Y$ such that each operator $T\in L_a^{\sim}(X,Y)$ admits the unique representation $T=\mathop{o\text{\/-}\!\sum}_{\alpha<\gamma}\sigma_{\alpha}\circ\Phi_{\alpha}$, where $(\sigma_{\alpha})_{\alpha<\gamma}$ is a~family of orthomorphisms in $\mathop{\fam0 Orth}(Y)$. \rm \par\mbox{$\vartriangleleft$}~This is immediate from the definitions in 2.4.~\text{$\vartriangleright$} \subsec{2.9.}~\theorem{}Let $X$ and $Y$ be vector lattices with $Y$ universally complete. Then there are a~nonempty set of cardinals $\Gamma$ and a~partition of unity $(Y_\gamma)_{\gamma\in\Gamma}$ in $\mathbb{B}(Y)$ such that $X$ is $(\gamma,Y_\gamma)$-homogeneous for all $\gamma\in\Gamma$. \rm \par\mbox{$\vartriangleleft$}~We may assume without loss of generality that $Y=\mathcal{R}{\downarrow}$.
The transfer principle together with~2.1 yields a~cardinal $\varkappa$ within $\mathbb{V}^{(\mathbb{B})}$ such that $(X^{\scriptscriptstyle\wedge})^\sim_a$ is generated by a~cardinality~$\varkappa$ disjoint set $\mathcal{H}$ of nonzero $\mathbb{R}^{\scriptscriptstyle\wedge}$-linear lattice homomorphisms or, equivalently, $[\![X^{\scriptscriptstyle\wedge}\text{~is~}(\varkappa, \mathcal{R})\text{-homogeneous}\,]\!]=\mathbb{1}$. By \cite[1.9.11]{BA_ST} there is a~nonempty set of cardinals $\Gamma$ and a~partition of unity $(b_\gamma)_{\gamma\in\Gamma}$ in $\mathbb{B}$ such that $\varkappa=\mathop{\fam0 mix}\nolimits_{\gamma\in\Gamma} b_\gamma\gamma^{\scriptscriptstyle\wedge}$. It follows that $b_\gamma\leq[\![X^{\scriptscriptstyle\wedge} \text{~is~}(\gamma^{\scriptscriptstyle\wedge},\mathcal{R}) \text{-homogeneous}\,]\!]$ for all $\gamma\in\Gamma$. Passing to the relative subalgebra $\mathbb{B}_\gamma\!:=[\mathbb{0},b_\gamma]$ and considering \cite[1.3.7]{BA_ST} we conclude that $\mathbb{V}^{(\mathbb{B}_\gamma)}\models ``X^{\scriptscriptstyle\wedge}\text{~is~} (\gamma^{\scriptscriptstyle\wedge},b_\gamma\wedge\mathcal{R}) \text{-homogeneous''}$, so that $X$ is $(\gamma,(b_\gamma\wedge\mathcal{R}){\downarrow})$-homogeneous by 2.7. In view of \cite[2.3.6]{BA_ST} $(b_\gamma\wedge\mathcal{R}){\downarrow}$ is lattice isomorphic to $Y_\gamma$, and so the desired result follows.~\text{$\vartriangleright$} \subsec{2.10.}~\theorem{}Let $X$ and $Y$ be vector lattices with $Y$ uni\-ver\-sally complete. Then there are a~nonempty set of cardinals $\Gamma$ and a~partition of unity $(Y_\gamma)_{\gamma\in\Gamma}$ in $\mathbb{B}(Y)$ such that to each cardinal $\gamma\in\Gamma$ there is a~disjoint family of lattice homomorphisms $(\Phi_{\gamma,\alpha})_{\alpha<\gamma}$ from $X$ to $Y_\gamma$ satisfying \vspace*{2pt} \subsec{\bf(1)}~$\Phi_{\gamma,\alpha}(X)^{\perp\perp}=Y_\gamma\ne\{0\}$ for all $\gamma\in\Gamma$ and $\alpha<\gamma$.
\vspace*{2pt} \subsec{\bf(2)}~$X$ is $(\gamma,Y_\gamma)$-homogeneous for all $\gamma\in\Gamma$. \vspace*{2pt} \subsec{\bf(3)}~For each order dense sublattice $Y_0\subset Y$ each $T\in L^{\sim}(X,Y_0)$ admits the unique representation $$ T=T_d+\mathop{o\text{\/-}\!\sum}_{\gamma\in\Gamma} \mathop{o\text{\/-}\!\sum}_{\alpha<\gamma}\sigma_{\gamma,\alpha}\circ\Phi_{\gamma,\alpha}, $$ with $T_d\in L^{\sim}_d(X,Y)$ and $\sigma_{\gamma,\alpha}\in\mathop{\fam0 Orth}(\Phi_{\gamma,\alpha}, Y_0)$. For every $\gamma\in\Gamma$ the family $(\Phi_{\gamma,\alpha})_{\alpha<\gamma}$ is unique up to $\mathbb{P}(Y)$-permutation and $\mathop{\fam0 Orth}(Y_{\gamma})_+$-multiplication. \rm \par\mbox{$\vartriangleleft$}~The existence of $(Y_\gamma)_{\gamma\in\Gamma}$ and $(\Phi_{\gamma,\alpha})_{\gamma\in\Gamma,\,\alpha<\gamma}$ with the required properties is immediate from 2.8 and 2.9. The uniqueness follows from 2.1 and 2.3.~\text{$\vartriangleright$} \subsec{2.11.}~Theorem 2.10, the main result of Section 2, was proved by Tabuev in~\cite[Theorem 2.2]{Tab2} with standard tools. The pseudoembedding operators are closely connected with the so-called order narrow operators. A linear operator $T:X\to Y$ is \textit{order narrow\/} if for every $x\in X_+$ there exists a~net $(x_\alpha)$ in $X$ such that $|x_\alpha|=x$ for all $\alpha$ and $(Tx_\alpha)$ is order convergent to zero in $Y$; see \cite[Definition 3.1]{MMP}. The main result by Maslyuchenko, Mykhaylyuk, and Popov in~\cite[Theorem 11.7\,(ii)]{MMP} states that if $X$ and $Y$ are Dedekind complete vector lattices with $X$ atomless and $Y$ an order ideal of some order continuous Banach lattice then an order bounded order continuous operator is order narrow if and only if it is a~pseudoembedding.
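For the reader's convenience we add a~simple illustration of order narrowness (a~routine example, not taken from the sources just cited): let $X=L^1[0,1]$, $Y=\mathbb{R}$, and let $T$ be the integration functional $Tx:=\int_0^1 x(t)\,dt$. Given $x\in X_+$, put $x_n:=x\cdot r_n$, where $r_n$ is the $n$th Rademacher function. Then $|x_n|=x$ for all $n$, while
$$
Tx_n=\int_0^1 x(t)r_n(t)\,dt\to0,
$$
since $(r_n)$ converges to zero in the weak$^*$ topology of $L^\infty[0,1]=(L^1[0,1])'$; hence $T$ is order narrow.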
\subsec{2.12.}~The term {\it pseudoembedding operator} stems from a~result by Rosenthal \cite{Ros} which asserts that a~nonzero bounded linear operator in $L^1$ is a~pseudoembedding if and only if it is a~near isometric embedding when restricted to a~suitable $L^1(A)$-subspace. A~systematic study of narrow operators was started by Plichko and Popov in \cite{PP}. For a~detailed presentation of the theory of narrow operators, we refer to the recent book by Popov and Randrianantoanina \cite{PR} and the references therein. \section*{3.~The Noncommutative Wickstead Problem} \textsl{When are we so happy in a~vector lattice that all band preserving linear operators turn out to be order bounded?} This question was raised by~Wickstead in~\cite{Wic1}. We refer to \cite{IBA, BA_ST} and \cite{GKK} for a detailed presentation of the Wickstead problem. In this section we consider a noncommutative version of the problem. The relevant information on the theory of Baer $\ast$-algebras and $AW^\ast$-algebras can be found in~Berberian~\cite{Berb}, Chilin~\cite{Chil}, and Kusraev~\cite{DOP}. \subsec{3.1.}~A {\it Baer $\ast$-algebra\/} is a~complex involutive algebra $A$ such that, for each nonempty $M\subset A$, there is a~projection, i.e., a~hermitian idempotent $p$, satisfying $M^\perp=pA$, where $M^\perp\!:=\{y\in A:(\forall\,x\in M)\,xy=0\}$ is the right annihilator of~$M$. Clearly, this amounts to saying that each left annihilator has the form ${}^\perp M=Aq$ for an appropriate projection $q$. To each left annihilator $L$ in a~Baer \mbox{$\ast$-algebra} there is a~unique projection $q_L\in A$ such that $x=xq_L$ for all $x\in L$ and $q_L y=0$ whenever $y\in L^\perp$. The mapping $L\mapsto q_L$ is an isomorphism between the poset of left annihilators and the poset of all projections. Thus, the poset $\mathbb{P}(A)$ of all projections in a~Baer \mbox{$\ast$-algebra} is an~order complete lattice.
\big(Clearly, the formula $q\leq p\Longleftrightarrow q=qp=pq$, sometimes pronounced as ``$p$ contains $q$,'' specifies an order on the set of projections $\mathbb{P}(A)$.\big) An element $z$ in $A$ is {\it central\/} provided that $z$ commutes with every member of~$A$; i.e., $(\forall\,x\in A)\,xz=zx$. The {\it center\/} of a~Baer $\ast$-algebra $A$ is the set $\mathcal{Z}(A)$ comprising all central elements. Clearly, $\mathcal{Z}(A)$ is a~commutative Baer $\ast$-subalgebra of~$A$, with $\lambda\mathbb{1}\in\mathcal{Z}(A)$ for all $\lambda\in\mathbb{C}$. A {\it central projection\/} of $A$ is a~projection belonging to $\mathcal{Z}(A)$. Put $\mathbb{P}_c(A)\!:=\mathbb{P}(A)\cap\mathcal{Z}(A)$. \subsec{3.2.}~A {\it derivation\/} on a~Baer $\ast$-algebra $A$ is a~linear operator $d:A\to A$ satisfying $d(xy)=d(x)y+xd(y)$ for all $x,y\in A$. A derivation $d$ is {\it inner\/} provided that $d(x)=ax-xa$ $(x\in A)$ for some $a\in A$. Clearly, an inner derivation vanishes on $\mathcal{Z}(A)$ and is $\mathcal{Z}(A)$-linear; i.e., $d(ex)=ed(x)$ for all $x\in A$ and $e\in\mathcal{Z}(A)$. Consider a~derivation $d:A\to A$ on a~Baer $\ast$-algebra $A$. If $p\in A$ is a~central projection then $d(p)=d(p^2)=2pd(p)$. Multiplying this identity by $p$ we have $pd(p)=2pd(p)$, so that $d(p)=pd(p)=0$. Consequently, every derivation vanishes on the linear span of $\mathbb{P}_c(A)$, the set of all central projections. In particular, $d(ex)=ed(x)$ whenever $x\in A$ and $e$ is a~linear combination of central projections. Even if the linear span of the central projections is dense in $\mathcal{Z}(A)$ in some sense, the derivation $d$ may still fail to be $\mathcal{Z}(A)$-linear.
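The Leibniz identity for inner derivations is verified by a~one-line computation (included here only for completeness): for $d_a(x):=ax-xa$ we have, on adding and subtracting $xay$,
$$
d_a(xy)=axy-xya=(ax-xa)y+x(ay-ya)=d_a(x)y+xd_a(y)\quad(x,y\in A),
$$
and $d_a(z)=az-za=0$ for every $z\in\mathcal{Z}(A)$, which yields the above claims on inner derivations.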
This brings up the natural question: \textsl{Under what conditions is every derivation $Z$-linear on a~Baer $\ast$-algebra $A$ provided that $Z$ is a~Baer $\ast$-subalgebra of $\mathcal{Z}(A)$?} \subsec{3.3.}~An~{\it $AW^\ast$-algebra\/} is a~$C^\ast$-algebra with unity $\mathbb{1}$ which is also a~Baer $\ast$\hbox{-}algebra. More explicitly, an~$AW^\ast$-algebra is a~$C^\ast$-algebra whose every right annihilator has the form~$pA$, with $p$ a~projection. Clearly, $\mathcal{Z}(A)$ is a~commutative $AW^*$\hbox{-}subalgebra of~$A$. If $\mathcal{Z}(A)= \{\lambda\mathbb{1}:\lambda\in\mathbb{C}\}$ then the $AW^*$-algebra $A$ is an~$AW^*$\hbox{-}{\it factor}. \subsec{3.4.}\Proclaim{}A~$C^\ast$-algebra $A$ is an~$AW^\ast$-algebra if and only if the two conditions hold: \subsec{(1)} Each orthogonal family in $\mathbb{P}(A)$ has a~supremum. \subsec{(2)} Each maximal commutative $\ast$-subalgebra $A_0\subset A$ is a~Dedekind complete $f$-algebra {\rm(}or, equivalently, coincides with the least norm closed $\ast$-subalgebra containing all projections of~$A_0)$. \rm \subsec{3.5.}~Given an $AW^\ast$-algebra $A$, define the two sets $C(A)$ and $S(A)$ of measurable and locally measurable operators, respectively. Both are Baer $\ast$-algebras; cp. Chilin~\cite{Chil}. Suppose that $\Lambda$ is an $AW^\ast$-sub\-al\-gebra in $\mathcal{Z}(A)$, and $\Phi$ is a~$\Lambda$-valued trace on $A_+$. Then we can define another Baer $\ast$-algebra, $L(A,\Phi)$, of $\Phi$-measurable operators. The center $\mathcal{Z}(A)$ is a~vector lattice with a~strong unit, while the centers of $C(A)$, $S(A)$, and $L(A,\Phi)$ coincide with the universal completion of~$\mathcal{Z}(A)$. If $d$ is a~derivation on $C(A)$, $S(A)$, or $L(A,\Phi)$ then $d(px)=pd(x)$ $\bigl(p\in\mathbb{P}_c(A)\bigr)$, so that $d$ can be considered as band preserving in a~sense (cp. \cite[4.1.1 and 4.10.4]{BA_ST}).
The natural question arises immediately about these algebras: \subsec{3.6.}~\textrm{WP(C)}:~\textsl{When are all derivations on $C(A)$, $S(A)$, or $L(A,\Phi)$ inner?} This question may be regarded as the \textit{noncommutative Wickstead problem}. \subsec{3.7.}~The classification of $AW^\ast$-algebras into types is determined by their lattices of projections $\mathbb{P}(A)$; see Sakai~\cite{Sak}. We recall only the definition of a~type~I $AW^\ast$-algebra. A~projection $\pi\in A$ is {\it abelian\/} if $\pi A\pi$ is a~commutative algebra. An algebra $A$ has {\it type\/}~I provided that each nonzero projection in $A$ contains a~nonzero abelian projection. A~$C^\ast$-algebra $A$ is ${\mathbb B}$-{\it embeddable} provided that there are a~type~I $AW^\ast$-algebra $N$ and a~$\ast$-monomorphism $\imath: A\rightarrow N$ such that ${\mathbb B}={\mathbb P}_c(N)$ and $\imath (A)=\imath(A)^{\prime\prime}$, where $\imath (A)^{\prime\prime}$ is the bicommutant of~$\imath (A)$ in~$N$. Note that in this event $A$ is an~$AW^\ast$-algebra and ${\mathbb B}$ is a~complete subalgebra of~${\mathbb P}_c(A)$. \subsec{3.8.} \theorem{}Let $A$ be a~type I $AW^\ast$-algebra, let $\Lambda$ be an $AW^\ast$\hbox{-}subalgebra of $\mathcal{Z}(A)$, and let $\Phi$ be a~$\Lambda$-valued faithful normal semifinite trace on $A$. If the complete Boolean algebra $\mathbb{B} \!:=\mathbb{P}(\Lambda)$ is $\sigma$-distributive and $A$ is $\mathbb{B}$-embeddable, then every derivation on $L(A,\Phi)$ is inner. \rm \par\mbox{$\vartriangleleft$}~We briefly sketch the proof. Let $\mathcal{A}\in\mathbb{V}^{(\mathbb{B})}$ be the Boolean valued representation of~$A$. Then $\mathcal{A}$ is a~von Neumann algebra within $\mathbb{V}^{(\mathbb{B})}$. Since the Boolean valued interpretation preserves classification into types, $\mathcal{A}$ is of type I. Let $\varphi$ stand for the Boolean valued representation of~$\Phi$.
Then $\varphi$ is a faithful normal semifinite $\mathcal{C}$-valued trace on $\mathcal{A}$ and the descent of $L(\mathcal{A},\varphi)$ is $\ast$-$\Lambda$-isomorphic to $L(A,\Phi)$; cp.~Korol$'$ and Chilin~\cite{KCh}. Suppose that $d$ is a~derivation on $L(A,\Phi)$ and $\delta$ is the Boolean valued representation~of~$d$. Then $\delta$ is a~$\mathcal{C}$-valued $\mathbb{C}^{\scriptscriptstyle\wedge}$-linear derivation on $L(\mathcal{A},\varphi)$. Since $\mathbb{B}$ is $\sigma$-distributive, $\mathcal{C}= {\mathbb C}^{\scriptscriptstyle\wedge}$ within $\mathbb{V}^{(\mathbb{B})}$ and $\delta$ is $\mathcal{C}$-linear. But it is well known that every derivation on a~type I von Neumann algebra is inner; cp. \cite{AAKud}. Therefore, $d$ is also inner.~\text{$\vartriangleright$} \subsec{3.9.}~Theorem 3.8 is taken from Gutman, Kusraev, and Kutateladze~\cite[Theorem 4.3.6]{GKK}. This fact is an interesting ingredient of the theory of noncommutative integration which stems from Segal \cite{Seg}. Considerable attention is given to derivations on various algebras of measurable operators associated with an $AW^\ast$-algebra and a~central-valued trace. We mention only the article \cite{AAKud} by Albeverio, Ajupov, and Kudaybergenov and the article \cite{BPS} by Ber, de~Pagter, and Sukochev. \section*{4.~The Radon--Nikod\'ym Theorem for \emph{JB}-Algebras} In this section we sketch some further applications of the Boolean valued approach to nonassociative Radon--Nikod\'ym type theorems. \subsec{4.1.}~Let $A$ be a~vector space over some field $\mathbb{F}$. Say that $A$ is a~{\it Jordan algebra\/} if there is given a~(generally nonassociative) binary operation $A\times A\ni(x,y)\mapsto xy\in A$ on~$A$, called {\it multiplication} and satisfying the following for all $x,y,z\in A$ and ${\alpha}\in {\mathbb F}$: \subsec{(1)}~$\ xy=yx$; \subsec{(2)}~$\ (x+y)z=xz+yz$; \subsec{(3)}~$\ {\alpha}(xy)=({\alpha}x)y$; \subsec{(4)}~$\ ({x^2}y)x={x^2}(yx)$.
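A standard example may be helpful here (it is added for illustration only and is not used below): if $B$ is an associative algebra over a~field $\mathbb{F}$ with $\operatorname{char}\mathbb{F}\ne2$, then $B$ with the symmetrized product
$$
x\circ y:=\tfrac12(xy+yx)\qquad(x,y\in B)
$$
is a~Jordan algebra: the identities (1)--(3) are obvious, while (4) holds since both $(x^{2}\circ y)\circ x$ and $x^{2}\circ(y\circ x)$ expand to $\tfrac14\,(x^{3}y+x^{2}yx+xyx^{2}+yx^{3})$; here $x^{2}$ is unambiguous because $x\circ x=x^{2}$.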
An element $e$ of a Jordan algebra $A$ is a~{\it unit element} or a {\it unit} of~$A$ if $e\neq 0$ and $ea=a$ for all $a\in A$. \subsec{4.2.}~Recall that a $J\!B$-{\it algebra} is simultaneously a real Banach space~$A$ and a unital Jordan algebra with unit $\mathbb{1}$ such that \subsec{(1)}~$\ \|xy\|\le\|x\|\cdot\|y\| \quad (x, y\in A)$, \subsec{(2)}~$\ \|x^2\|=\|x\|^2 \quad (x\in A)$, \subsec{(3)}~$\ \|x^2\|\le\|x^2+y^2\| \quad(x, y\in A)$. The set $A_+:=\{x^2 : x\in A\}$, which is a proper convex cone, determines the structure of an ordered vector space on $A$ so that the unit $\mathbb{1}$ of the algebra $A$ serves as a strong order unit, and the order interval $[-\mathbb{1},\,\mathbb{1}]:=\{x\in A : -\mathbb{1}\le x\le\mathbb{1}\}$ serves as the unit ball. Moreover, the inequalities $-\mathbb{1}\le x\le\mathbb{1}$ and $0\le x^2\le\mathbb{1}$ are equivalent. The intersection of all maximal associative subalgebras of $A$ is called the {\it center} of $A$ and denoted by $\mathcal{Z}(A)$. An element $a$ belongs to $\mathcal{Z}(A)$ if and only if $(ax)y=a(xy)$ for all $x, y\in A$. If $\mathcal{Z}(A)=\mathbb{R}\cdot\mathbb{1}$, then $A$ is said to be a $J\!B$-\textit{factor}. The center $\mathcal{Z}(A)$ is an associative $J\!B$-algebra, and such an algebra is isometrically isomorphic to the real Banach algebra $C(Q)$ of continuous functions on some compact space $Q$. \subsec{4.3.}~The idempotents of a $J\!B$-algebra are also called {\it projections}. The set of all projections $\mathbb{P}(A)$ forms a complete lattice with the order defined by $\pi\leq\rho\Longleftrightarrow\pi\circ\rho=\pi$. The sublattice of \textit{central projections\/} $\mathbb{P}_c(A)\!:=\mathbb{P}(A)\cap\mathcal{Z}(A)$ is a Boolean algebra. Assume that $\mathbb{B}$ is a subalgebra of the Boolean algebra $\mathbb{P}_c(A)$ or, equivalently, that $\mathbb B\,(\mathbb R)$ is a subalgebra of the center $\mathcal Z (A)$ of~$A$.
Then we say that $A$ is a $\mathbb B$-$J\!B$-{\it algebra} if, for every partition of unity $(e_\xi)_{\xi \in \Xi}$ in $\mathbb B$ and every family $(x_\xi)_{\xi \in \Xi}$ in $A$, there exists a unique $\mathbb B$-mixing $x:=\mathop{\fam0 mix}\nolimits_{\xi \in \Xi}\,(e_\xi x_\xi)$, i.e., a unique element $x\in A$ such that $e_\xi x_\xi=e_\xi x$ for all $\xi\in\Xi$. If $\mathbb B\,(\mathbb R)=\mathcal Z(A)$, then a $\mathbb B$-$J\!B$-algebra is also referred to as a {\it centrally extended $J\!B$-algebra.} The unit ball of a $\mathbb{B}$-$J\!B$-algebra is closed under $\mathbb{B}$-mixings. Consequently, each $\mathbb{B}$-$J\!B$-algebra is a $\mathbb{B}$-cyclic Banach space. \subsec{4.4.}~\theorem{}The restricted descent of a $J\!B$-algebra in the model $\mathbb{V}^{(\mathbb{B})}$ is a $\mathbb{B}$-$J\!B$-algebra. Conversely, for every $\mathbb{B}$-$J\!B$-algebra $A$ there exists a unique $($up to isomorphism$)$ $J\!B$-algebra $\mathcal{A}$ within $\mathbb{V}^{(\mathbb{B})}$ whose restricted descent is isometrically $\mathbb{B}$-isomorphic to $A$. Moreover, $[\![\mathcal{A}$ is a $J\!B$-factor$]\!]=\mathbb{1}$ if and only if\/ $\mathbb{B}\,(\mathbb{R})=\mathcal{Z}(A)$. \rm \par\mbox{$\vartriangleleft$}~See \cite[Theorem 12.7.6]{IBA} and \cite[Theorem 3.1]{KusJB}.~\text{$\vartriangleright$} \subsec{4.5.}~Now we give two applications of the above Boolean valued representation result to $\mathbb{B}$-$J\!B$-algebras. Theorems 4.7 and 4.11 below appear by transfer of the corresponding facts from the theory of $J\!B$-algebras. Let $A$ be a $\mathbb B$-$J\!B$-algebra and let $\Lambda :=\mathbb B\,(\mathbb R)$. An operator $\Phi\in A^\sharp $ is called a $\Lambda$-{\it valued state} if $\Phi\ge 0$ and $\Phi(\mathbb{1})=\mathbb{1}$. A state $\Phi$ is said to be {\it normal\/} if, for every increasing net $(x_{\alpha})$ in $A$ with the least upper bound $x:=\sup x_\alpha$, we have $\Phi(x)=\mathop{o\text{-}{\fam0 lim}}\Phi(x_\alpha)$.
If $\mathcal{A}$ is the Boolean valued representation of the algebra $A$, then the ascent $\varphi:=\Phi\!\uparrow$ is a bounded linear functional on $\mathcal{A}$ by \cite[Theorem 5.8.12]{BA_ST}. Moreover, $\varphi$ is positive and order continuous; i.e., $\varphi$ is a normal state on $\mathcal{A}$. The converse is also true: if $[\![\varphi$ is a normal state on $\mathcal{A}]\!]=\mathbb{1}$, then the restriction of the operator $\varphi\!\downarrow$ to $A$ is a $\Lambda$-valued normal state. Now we will characterize the $\mathbb{B}$-$J\!B$-algebras that are $\mathbb{B}$-dual spaces. Toward this end, it suffices to give a Boolean valued interpretation of the following result. \subsec{4.6.}~\theorem{}A $J\!B$-algebra is a dual Banach space if and only if it is monotone complete and has a separating set of normal states. \rm \par\mbox{$\vartriangleleft$}~See \cite[Theorem 2.3]{Shu}.~\text{$\vartriangleright$} \subsec{4.7.}~\theorem{}Let $\mathbb{B}$ be a complete Boolean algebra and let $\Lambda$ be a Dedekind complete unital $AM$-space with $\mathbb{B}\simeq\mathbb{P}(\Lambda)$. A $\mathbb B$-$J\!B$-algebra $A$ is a $\mathbb{B}$-dual space if and only if $A$ is monotone complete and admits a separating set of $\Lambda$-valued normal states. If one of these equivalent conditions holds, then the part of $A^{\sharp}$ consisting of order continuous operators serves as a $\mathbb{B}$-predual space of $A$. \rm \par\mbox{$\vartriangleleft$}~See \cite[Theorem 12.8.5]{IBA} and \cite[Theorem 4.2]{KusJB}.~\text{$\vartriangleright$} \subsec{4.8.}~An algebra $A$ satisfying one of the equivalent conditions of 4.7 is called a $\mathbb B$-$JBW$-{\it algebra}. If, moreover, $\mathbb{B}$ coincides with the set of all central projections, then $A$ is said to be a $\mathbb{B}$-$JBW$-{\it factor}.
It follows from Theorems 4.4 and 4.7 that $A$ is a $\mathbb B$-$JBW$-algebra $(\mathbb B$-$JBW$-factor) if and only if its Boolean valued representation $\mathcal A\in \mathbb{V}^{(\mathbb B)}$ is a $JBW$-algebra ($JBW$-factor). A mapping $\Phi:A_+\to\Lambda\cup\{+\infty\}$ is a ($\Lambda$-valued) \textit{weight\/} if the following conditions are satisfied (under the conventions that $\lambda+(+\infty):=(+\infty)+\lambda:=+\infty$ and $\lambda\cdot(+\infty):=+\infty$ for all nonzero $\lambda\in\Lambda_+$, while $0\cdot(+\infty):=0$ and $+\infty+(+\infty):=+\infty$): \subsec{(1)}~$\Phi(x+y)=\Phi(x)+\Phi(y)$ for all $x,y\in A_+$. \subsec{(2)}~$\Phi(\lambda x)=\lambda\Phi(x)$ for all $x\in A_+$ and $\lambda\in\Lambda_+$. A weight $\Phi$ is said to be a \textit{trace\/} if the following additional condition is satisfied: \subsec{(3)}~$\Phi(x)=\Phi(U_sx)$ for all $x\in A_+$ and $s\in A$ with $s^2=\mathbb{1}$. Here, $U_a$ is the operator from $A$ to $A$ defined for a given $a\in A$ as $U_a:x\mapsto 2a(ax)-a^2x$ $(x\in A)$. This operator is positive, i.e., $U_a(A_+)\subset A_+$. If $a\in\mathcal{Z}(A)$, then $U_ax=a^2x$ $(x\in A)$. A weight (trace) $\Phi$ is called: \textit{normal\/} if $\Phi(x)=\sup_\alpha\Phi(x_\alpha)$ for every increasing net $(x_\alpha)$ in $A_+$ with $x=\sup_\alpha x_\alpha$; \textit{semifinite\/} if there exists an increasing net $(a_\alpha)$ in $A_+$ with $\sup_\alpha a_\alpha=\mathbb{1}$ and $\Phi(a_\alpha)\in\Lambda$ for all $\alpha$; \textit{bounded\/} if $\Phi(\mathbb{1})\in\Lambda$. Given two $\Lambda$-valued weights $\Phi$ and $\Psi$ on $A$, say that $\Phi$ is dominated by $\Psi$ if there exists $\lambda\in\Lambda_+$ such that $\Phi(x)\leq\lambda\Psi(x)$ for all $x\in A_+$. \subsec{4.9.}~We need a few additional remarks about descents and ascents. Fix $+\infty\in\mathbb{V}^{(\mathbb{B})}$.
If $\Lambda\!=\mathcal{R}{\Downarrow}$ and $\Lambda^\mathbb{u}\!=\mathcal{R}{\downarrow}$ then $$ (\Lambda^\mathbb{u}\cup\{+\infty\}){\uparrow}=(\Lambda\cup\{+\infty\}){\uparrow}= \Lambda{\uparrow}\cup\{+\infty\}{\uparrow}=\mathcal{R}\cup\{+\infty\}. $$ At the same time, $\Lambda^\star\!:=(\mathcal{R}\cup\{+\infty\}){\downarrow}= \mathop{\fam0 mix}\nolimits(\mathcal{R}{\downarrow}\cup\{+\infty\})$ consists of all elements of the form $\lambda_\pi\!:=\mathop{\fam0 mix}\nolimits(\pi\lambda,\pi^\perp(+\infty))$ with $\lambda\in\Lambda^{\mathbb{u}}$ and $\pi\in\mathbb{P}(\Lambda)$. Thus, $\Lambda^\mathbb{u}\cup\{+\infty\}$ is a proper subset of $\Lambda^\star$, since $\lambda_\pi\in\Lambda^\mathbb{u}\cup\{+\infty\}$ if and only if $\pi=0$ or $\pi=I_\Lambda$. Assume now that $A=\mathcal{A}{\downarrow}$ with $\mathcal{A}$ a $J\!B$-algebra within $\mathbb{V}^{(\mathbb{B})}$ and $\mathbb{B}$ equal to $\mathbb{P}(A)$. Every bounded weight $\Phi:A\to\Lambda$ is evidently extensional: $b\!:=[\![x=y]\!]$ implies $bx=by$, which in turn yields $b\Phi(x)=\Phi(bx)=\Phi(by)=b\Phi(y)$ or, equivalently, $b\leq[\![\Phi(x)=\Phi(y)]\!]$. But an unbounded weight may fail to be extensional. Indeed, if $\Phi(x_0)=+\infty$ and $\Phi(x)\in\Lambda$ for some $x_0,x\in A$ and some $b\in\mathbb{P}(A)$ with $\mathbb{0}\ne b\ne\mathbb{1}$, then $$ \Phi(\mathop{\fam0 mix}\nolimits(bx,b^\perp x_0))= \mathop{\fam0 mix}\nolimits(b\Phi(x),b^\perp(+\infty)) \notin\Lambda\cup\{+\infty\}. $$ Given a semifinite weight $\Phi$ on $A$, we define its extensional modification $\widehat{\Phi}:A\to\Lambda^\star$ as follows: If $\Phi(x)\in\Lambda$ we put $\widehat{\Phi}(x)\!:=\Phi(x)$. If $\Phi(x)=+\infty$ then $x=\sup D$ with $D\!:=\{a\in A:\,0\leq a\leq x,\,\Phi(a)\in\Lambda\}$. Let $b$ stand for the greatest element of $\mathbb{P}(\Lambda)$ such that $\Phi(bD)$ is order bounded in $\Lambda^\mathbb{u}$ and put $\lambda\!:=\sup\Phi(bD)$.
We define $\widehat{\Phi}(x)$ as $\lambda_b= \mathop{\fam0 mix}\nolimits(b\lambda,b^\perp(+\infty))$; i.e., $b\widehat{\Phi}(x)=\lambda$ and $b^\perp\widehat{\Phi}(x)=b^\perp(+\infty)$. It is not difficult to check that $\widehat{\Phi}$ is an extensional mapping. Thus, for $\varphi\!:=\widehat{\Phi}{\uparrow}$ we have $[\![\varphi:\mathcal{A}\to\mathcal{R}\cup\{+\infty\}]\!]=\mathbb{1}$ and, according to \cite[1.6.6]{BA_ST}, $\widehat{\Phi}=\varphi{\downarrow}\ne\Phi$. But if we define $\varphi{\Downarrow}$ by $\varphi{\Downarrow}(x)=\varphi{\downarrow}(x)$ whenever $\varphi{\downarrow}(x)\in\Lambda$ and $\varphi{\Downarrow}(x)=+\infty$ otherwise, then $\Phi=(\widehat{\Phi}{\uparrow}){\Downarrow}$. \subsec{4.10.}\theorem{}Let $A$ be a $J\!BW$-algebra and let $\tau$ be a normal semifinite real-valued trace on $A$. For each real-valued weight $\varphi$ on $A$ dominated by $\tau$ there exists a unique positive element $h\in A$ such that $\varphi(a)=\tau(U_{h^{1/2}}a)$ for all $a\in A_+$. Moreover, $\varphi$ is bounded if and only if $\tau(h)$ is finite, and $\varphi$ is a trace if and only if $h$ is a central element of $A$. \rm \par\mbox{$\vartriangleleft$}~This fact was proved in \cite{King}.~\text{$\vartriangleright$} \subsec{4.11.}~\theorem{}Let $A$ be a $\mathbb{B}$-$J\!BW$-algebra and let $\mathrm{T}$ be a normal semifinite $\Lambda$-valued trace on $A$. For each weight $\Phi$ on $A$ dominated by $\mathrm{T}$ there exists a unique positive $h\in A$ such that $\Phi(x)= \mathrm{T}(U_{h^{1/2}}x)$ for all $x\in A_+$. Moreover, $\Phi$ is bounded if and only if $\mathrm{T}(h)\in\Lambda$, and $\Phi$ is a trace if and only if $h$ is a central element of $A$. \rm \par\mbox{$\vartriangleleft$}~We present a sketch of the proof. Taking into consideration the remarks in 4.9, we define $\varphi\!:=\widehat{\Phi}{\uparrow}$ and $\tau\!:=\widehat{\mathrm{T}}{\uparrow}$.
Then within $\mathbb{V}^{(\mathbb{B})}$ the following hold: $\tau$ is a semifinite normal real-valued trace on $\mathcal{A}$ and $\varphi$ is a real-valued weight on $\mathcal{A}$ dominated by $\tau$. By the transfer principle we may apply Theorem 4.10 and find $h\in\mathcal{A}$ such that $\varphi(x)=\tau(U_{h^{1/2}}x)$ for all $x\in\mathcal{A}_+$. Actually, $h\in A$ and $\varphi{\Downarrow}(x)=\tau{\Downarrow}(U_{h^{1/2}}x)$ for all $x\in A_+$. It remains to note that $\Phi=\varphi{\Downarrow}$ and $\mathrm{T}=\tau{\Downarrow}$. The details of the proof are left to the reader.~\text{$\vartriangleright$} \subsec{4.12.}~~{\bf(1)}~$J\!B$-algebras are nonassociative real analogs of $C^*$-algebras and von Neumann operator algebras. The theory of these algebras stems from Jordan, von~Neumann, and Wigner \cite{JNW} and has existed as a~branch of functional analysis since the mid-1960s. The stages of its development are reflected in Alfsen, Shultz, and St\o rmer \cite{ASS}. The theory of $J\!B$-algebras is under intensive study, and the scope of its applications continues to widen. Among the main areas of research are the structure and classification of $J\!B$-algebras, nonassociative integration and quantum probability theory, the geometry of states of $J\!B$-algebras, etc.; see Ajupov \cite{Aju1, Aju2}; Hanche-Olsen and St\o rmer \cite{H-OS} as well as the references therein. {\bf(2)}~The Boolean valued approach to $JB$-algebras was charted by Kusraev in~the article \cite{KusJB} which contains Theorems 4.4 and 4.7 (also see \cite{KusJB1}). These theorems are instances of the Boolean valued interpretation of the results by Shultz \cite{Shu} and by Ajupov and Abdullaev \cite{AjA}. In \cite{KusJB} Kusraev introduced the class of $\mathbb{B}$-$JBW$-algebras which is broader than the class of $JBW$-algebras.
The principal distinction is that a~$\mathbb{B}$-$JBW$-algebra has a faithful representation as the algebra of selfadjoint operators in some $AW^\ast$-module rather than on a~Hilbert space as in the case of $JBW$-algebras (cp.~Kusraev and Kutateladze~\cite{IBA}). The class of $AJW$-algebras was first mentioned by Topping in~\cite{Top}. Theorem 4.11 has not been published before. \section*{5.~Transfer in Harmonic Analysis} In what follows, $G$ is a locally compact abelian group and $\tau$ is the topology of $G$, while $\tau(0)$ is a~neighborhood base of $0$ in $G$ and $G'$ stands for the dual group of~$G$. Note that $G$ is also the dual group of $G'$ and we write $\langle x,\gamma\rangle\!:=\gamma(x)$ $(x\in G,\,\gamma\in G')$. \subsec{5.1.}~By restricted transfer, $G^{\scriptscriptstyle\wedge}$ is a group within $\mathbb{V}^{(\mathbb{B})}$. At the same time $\tau(0)^{\scriptscriptstyle\wedge}$ may fail to be a topology on $G^{\scriptscriptstyle\wedge}$. But $G^{\scriptscriptstyle\wedge}$ becomes a~topological group on taking $\tau(0)^{\scriptscriptstyle\wedge}$ as a neighborhood base of $0\!:=0^{\scriptscriptstyle\wedge}$. This topological group is again denoted by $G^{\scriptscriptstyle\wedge}$. Clearly, $G^{\scriptscriptstyle\wedge}$ may fail to be locally compact. Let $U$ be a compact neighborhood of $0$ in $G$. Then $U$ is totally bounded, and it follows by restricted transfer that $U^{\scriptscriptstyle\wedge}$ is totally bounded as well, since total boundedness can be expressed by a~restricted formula. Therefore the completion of $G^{\scriptscriptstyle\wedge}$ is locally compact. This completion is denoted by $\mathcal{G}$, and by the above observation $\mathcal{G}$ is a locally compact abelian group within $\mathbb{V}^{(\mathbb{B})}$. \subsec{5.2.}~Let $Y$ be a Dedekind complete vector lattice and let $Y_\mathbb{C}$ be the complexification of~$Y$.
A vector-function $\varphi:G\to Y$ is said to be {\it uniformly order continuous\/} on a~set $K$ if $$ \inf\limits_{U\in\tau(0)}\sup\{|\varphi(g_1)-\varphi (g_2)|: \ g_1,g_2\in K,\ g_1-g_2\in U\}=0. $$ This amounts to saying that $\varphi$ is order bounded on $K$ and, if $e\in Y$ is an arbitrary upper bound of $\varphi(K)$, then for each $0<\varepsilon\in\mathbb{R}$ there exists a partition of unity $(\pi_\alpha)_{\alpha\in\tau(0)}$ in $\mathbb{P}(Y)$ such that $\pi_\alpha|\varphi(g_1)-\varphi (g_2)|\leq\varepsilon e$ for all $\alpha\in\tau(0)$ and $g_1,g_2\in K$, $g_1-g_2\in\alpha$. If, in this definition, we put $g_2=0$, then we arrive at the definition of a mapping \textit{order continuous at zero}. We now introduce the class of dominated $Y$-valued mappings. A~mapping $\psi: G\to Y_\mathbb{C}$ is called {\it positive definite\/} if $$ \sum\limits_{j,k=1}^n \psi(g_j-g_k)\ c_j \overline{c}_k\geq0 $$ for all finite collections $g_1,\ldots,g_n \in G$ and $c_1,\ldots,c_n\in\mathbb{C}$ $(n\in\mathbb{N})$. For $n=1$, the definition readily implies that $\psi(0)\in Y_+$. For $n=2$, we have $|\psi(g)|\le\psi(0)$ $(g\in G)$. If we introduce the structure of an $f$-algebra with unit $\psi(0)$ in the order ideal of $Y$ generated by $\psi(0)$ then, for $n=3$, from the above definition we can deduce one more inequality $$ |\psi(g_1)-\psi(g_2)|^2\le 2 \psi(0)(\psi(0)-\Re\psi(g_1-g_2)) \quad (g_1,g_2\in G). $$ It follows that every positive definite mapping $\psi : G\to Y_\mathbb{C}$ that is $o$-continuous at zero is order bounded (by the element $\psi(0)$) and uniformly $o$-continuous. A mapping $\varphi : G\to Y$ is called {\it dominated\/} if there exists a~positive definite mapping $\psi : G\to Y_\mathbb C$ such that $$ \Bigg|\sum\limits_{j,k=1}^n \varphi(g_j-g_k) c_j \overline{c}_k\Bigg|\le \sum\limits_{j,k=1}^{n}\psi(g_j-g_k) c_j \overline c_k $$ for all $g_1,\ldots,g_n \in G$, $c_1,\ldots, c_n \in\mathbb{C}$ $(n\in\mathbb{N})$.
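In the scalar case $Y=\mathbb{R}$ these consequences of positive definiteness can be sanity-checked numerically. The sketch below is our own illustration, not part of the text: it takes the standard positive definite function $\psi(g)=\cos g$ on $G=\mathbb{R}$ (the inverse Fourier transform of the probability measure $\tfrac12(\delta_1+\delta_{-1})$), evaluates the defining quadratic form at random data, and also tests the $n=3$ inequality with $\psi(0)=1$.

```python
import cmath
import random

def pd_form(psi, gs, cs):
    """The quadratic form sum_{j,k} psi(g_j - g_k) c_j conj(c_k)."""
    return sum(psi(gj - gk) * cj * ck.conjugate()
               for gj, cj in zip(gs, cs)
               for gk, ck in zip(gs, cs))

# psi(g) = cos g is positive definite on G = R: it is the inverse Fourier
# transform of the measure (delta_1 + delta_{-1})/2 on the dual group.
psi = lambda g: complex(cmath.cos(g))

random.seed(1)
for _ in range(200):
    n = random.randint(1, 6)
    gs = [random.uniform(-10.0, 10.0) for _ in range(n)]
    cs = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(n)]
    q = pd_form(psi, gs, cs)
    # The form is real and nonnegative (up to rounding).
    assert q.real >= -1e-9 and abs(q.imag) <= 1e-9

# The n = 3 consequence: |psi(g1)-psi(g2)|^2 <= 2 psi(0)(psi(0) - Re psi(g1-g2)).
for _ in range(200):
    g1, g2 = random.uniform(-10, 10), random.uniform(-10, 10)
    lhs = abs(psi(g1) - psi(g2)) ** 2
    rhs = 2 * psi(0.0).real * (psi(0.0).real - psi(g1 - g2).real)
    assert lhs <= rhs + 1e-9
```

For this $\psi$ the nonnegativity is exact: the form equals $\tfrac12|\sum_j c_j e^{ig_j}|^2+\tfrac12|\sum_j c_j e^{-ig_j}|^2$.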
In this case we also say that $\psi$ is a {\it dominant\/} of $\varphi$. It can be easily shown that if $\varphi:G\to Y_\mathbb{C}$ has a dominant order continuous at zero then $\varphi$ is order bounded and uniformly order continuous. We denote by $\mathfrak{D}(G,Y_\mathbb{C})$ the vector space of all dominated mappings from $G$ into $Y_\mathbb{C}$ whose dominants are order continuous at zero. We also consider the set $\mathfrak{D}(G,Y_\mathbb{C})_+$ of all positive definite mappings from $G$ into $Y_\mathbb{C}$. This set is a proper cone in $\mathfrak{D}(G,Y_\mathbb{C})$ and defines an order compatible with the structure of a~vector space on $\mathfrak{D}(G,Y_\mathbb{C})$. Actually, $\mathfrak{D}(G,Y_\mathbb{C})$ is a Dedekind complete complex vector lattice; cp.~5.13 below. Also, define $\mathfrak{D}(\mathcal{G},\mathcal{C})\in\mathbb{V}^{(\mathbb{B})}$ to be the set of functions $\varphi:\mathcal{G}\to\mathcal{C}$ with the property that $[\![\varphi$ has a dominant continuous at zero$]\!]=\mathbb{1}$. \subsec{5.3.}\proclaim{}Let $Y=\mathcal{R}{\downarrow}$. For every $\varphi\in\mathfrak{D}(G,Y_\mathbb{C})$ there exists a unique $\tilde{\varphi}\in\mathbb{V}^{(\mathbb{B})}$ such that $[\![\tilde{\varphi}\in\mathfrak{D}(\mathcal{G},\mathcal{C})]\!]=\mathbb{1}$ and $[\![\tilde{\varphi}(x^{\scriptscriptstyle\wedge})=\varphi(x)]\!]=\mathbb{1}$ for all $x\in G$. The mapping $\varphi\mapsto\tilde{\varphi}$ is a linear and order isomorphism from $\mathfrak{D}(G,Y_\mathbb{C})$ onto $\mathfrak{D}(\mathcal{G},\mathcal{C}){\downarrow}$. \rm \subsec{5.4.}~Define $C_0(G)$ as the space of all continuous complex functions $f$ on $G$ vanishing at infinity. The latter means that for every $0<\varepsilon\in\mathbb{R}$ there exists a compact set $K\subset G$ such that $|f(x)|<\varepsilon$ for all $x\in G\setminus K$. Denote by $C_{c}(G)$ the space of all continuous complex functions on $G$ having compact support.
Evidently, $C_c(G)$ is dense in $C_0(G)$ with respect to the norm $\|\cdot\|_\infty$. \subsec{5.5.}~Introduce the class of dominated operators. Let $X$ be a complex normed space and let $Y$ be a complex Banach lattice. A linear operator $T:X\to Y$ is said to be \textit{dominated\/} or {\it to have abstract norm\/} if $T$ sends the unit ball of $X$ into an order bounded subset of $Y$. This amounts to saying that there exists $c\in Y_+$ such that $|Tx|\leq c\|x\|$ for all $x\in X$. The~set of all dominated operators from $X$ to $Y$ is denoted by $L_m(X,Y)$. If $Y$ is Dedekind complete then $$ \mathopen{\kern1pt\/ \vrule height7.6pt depth2.2pt width1pt\kern1.5pt}T\mathclose{\kern1.5pt\vrule height7.6pt depth2.2pt width1pt\kern1pt}\!:=\sup\{|Tx|:\ x\in X,\ \|x\|\leq 1\} $$ exists and is called the~{\it abstract norm\/} or {\it dominant\/} of $T$. Moreover, if $X$ is a vector lattice and $Y$ is Dedekind complete then $L_m(X,Y)$ is a vector sublattice of $L^{\sim}(X,Y)$. Given a positive $T\in L_m(C_0(G'),Y_\mathbb{C})$, we can define the mapping $\varphi:G\to Y_\mathbb{C}$ by putting $\varphi(x)=T(\langle x,\cdot\rangle)$ for all $x\in G$, since $\gamma\mapsto\langle x,\gamma\rangle$ lies in $C_0(G')$ for every $x\in G$. It is not difficult to check that $\varphi$ is order continuous at zero and positive definite. The converse is also true; see 5.8. \subsec{5.6.}~Consider a metric space $(M,r)$. The definition of a metric space can be written as a bounded formula, say $\varphi(M,r,\mathbb{R})$, so that $[\![\varphi(M^{\scriptscriptstyle\wedge}, r^{\scriptscriptstyle\wedge}, \mathbb{R}^{\scriptscriptstyle\wedge})]\!]=\mathbb{1}$ by restricted transfer. In other words, $(M^{\scriptscriptstyle\wedge}, r^{\scriptscriptstyle\wedge})$ is a metric space within $\mathbb{V}^{(\mathbb{B})}$.
Moreover, we consider the internal function $r^{\scriptscriptstyle\wedge}:\, M^{\scriptscriptstyle\wedge}\times M^{\scriptscriptstyle\wedge}\to \mathbb{R}^{\scriptscriptstyle\wedge}\subset\mathcal{R}$ as an $\mathcal{R}$-valued metric on $M^{\scriptscriptstyle\wedge}$. Denote by $(\mathcal{M},\rho)$ the completion of $(M^{\scriptscriptstyle\wedge}, r^{\scriptscriptstyle\wedge})$; i.e., $[\![(\mathcal{M},\rho)$ is a complete metric space $]\!]=\mathbb{1}$, $[\![M^{\scriptscriptstyle\wedge}$ is a dense subset of $\mathcal{M}]\!]=\mathbb{1}$, and $[\![\rho(x^{\scriptscriptstyle\wedge},y^{\scriptscriptstyle\wedge})= r(x,y)^{\scriptscriptstyle\wedge}]\!]=\mathbb{1}$ for all $x,y\in M$. Now, if $(X,\|\cdot\|)$ is a real (or complex) normed space then $[\![X^{\scriptscriptstyle\wedge}$ is a vector space over the field $\mathbb{R}^{\scriptscriptstyle\wedge}$ (or $\mathbb{C}^{\scriptscriptstyle\wedge}$) and $\|\cdot\|^{\scriptscriptstyle\wedge}$ is a norm on $X^{\scriptscriptstyle\wedge}$ with values in $\mathbb{R}^{\scriptscriptstyle\wedge}\subset\mathcal{R} ]\!]=\mathbb{1}$. So, we will consider $X^{\scriptscriptstyle\wedge}$ as an~$\mathbb{R}^{\scriptscriptstyle\wedge}$-vector space with $\mathcal{R}$-valued norm within $\mathbb{V}^{(\mathbb{B})}$. Let $\mathcal X\in\mathbb{V}^{(\mathbb{B})}$ stand for the (metric) completion of $X^{\scriptscriptstyle\wedge}$ within $\mathbb{V}^{(\mathbb{B})}$. It is not difficult to see that $[\![\mathcal{X}$ is a real (complex) Banach space including $X^{\scriptscriptstyle\wedge}$ as an $\mathbb{R}^{\scriptscriptstyle\wedge}(\mathbb{C}^{\scriptscriptstyle\wedge})$-linear subspace$]\!]=\mathbb{1}$, since the metric $(x,y)\mapsto\|x-y\|$ on $X^{\scriptscriptstyle\wedge}$ is translation invariant. Clearly, if $X$ is a real (complex) Banach lattice then $[\![\mathcal{X}$ is a real (complex) Banach lattice including $X^{\scriptscriptstyle\wedge}$ as an $\mathbb{R}^{\scriptscriptstyle\wedge}(\mathbb{C}^{\scriptscriptstyle\wedge})$-linear sublattice$]\!]=\mathbb{1}$.
\subsec{5.7.}~\Theorem{}Let $Y=\mathcal{C}{\downarrow}$ and let $\mathcal{X}'$ be the topological dual of $\mathcal{X}$ within $\mathbb{V}^{(\mathbb{B})}$. For every $T\in L_m(X,Y)$ there exists a unique $\tau\in\mathcal{X}'{\downarrow}$ such that $[\![\tau(x^{\scriptscriptstyle\wedge})=T(x)]\!]=\mathbb{1}$ for all $x\in X$. The mapping $T\mapsto\phi(T)\!:=\tau$ defines an isomorphism between the $\mathcal{C}{\downarrow}$-modules $L_m(X,Y)$ and $\mathcal{X}'{\downarrow}$. Moreover, $\mathopen{\kern1pt\/ \vrule height7.6pt depth2.2pt width1pt\kern1.5pt}T\mathclose{\kern1.5pt\vrule height7.6pt depth2.2pt width1pt\kern1pt}=\mathopen{\kern1pt\/ \vrule height7.6pt depth2.2pt width1pt\kern1.5pt}\phi(T)\mathclose{\kern1.5pt\vrule height7.6pt depth2.2pt width1pt\kern1pt}$ for all $T\in L_m(X,Y)$. If $X$ is a normed lattice then $[\![\mathcal{X}'$ is a Banach lattice $]\!]=\mathbb{1}$, while $\mathcal{X}'{\downarrow}$ is a vector lattice and $\phi$ is a lattice isomorphism. \rm \par\mbox{$\vartriangleleft$}~It suffices to consider the real case. Apply \cite[Theorem 8.3.2]{DOP} to the lattice normed space $X\!:=(X,\mathopen{\kern1pt\/ \vrule height7.6pt depth2.2pt width1pt\kern1.5pt} \cdot\mathclose{\kern1.5pt\vrule height7.6pt depth2.2pt width1pt\kern1pt})$ with $\mathopen{\kern1pt\/ \vrule height7.6pt depth2.2pt width1pt\kern1.5pt} x\mathclose{\kern1.5pt\vrule height7.6pt depth2.2pt width1pt\kern1pt}=\|x\|\mathbb{1}$. By \cite[Theorem 8.3.4\,(1) and Proposition 8.3.4\,(2)]{DOP} the spaces $\mathcal{X}'{\downarrow}:=\mathcal{L}^{(\mathbb{B})}(\mathcal{X}, \mathcal{R}){\downarrow}$ and $L_m(X,Y)$ are linearly isometric. We are left with referring to \cite[Proposition 5.5.1\,(1)]{DOP}.~\text{$\vartriangleright$} \subsec{5.8.}~\Theorem{}A mapping $\varphi:G\to Y_\mathbb{C}$ is order continuous at zero and positive definite if and only if there exists a (necessarily unique) positive operator $T\in L_m(C_0(G'),Y_\mathbb{C})$ such that $\varphi(x)=T(\langle x,\cdot\rangle)$ for all $x\in G$.
\rm \par\mbox{$\vartriangleleft$}~By transfer, 5.3, and Theorem 5.7, we can replace $\varphi$ and $T$ by their Boolean valued representations $\tilde{\varphi}$ and $\tau$. The norm completion of $C_0(G')^{\scriptscriptstyle\wedge}$ within $\mathbb{V}^{(\mathbb{B})}$ coincides with $C_0(\mathcal{G}')$. (This can be proved by reasoning similar to that in Takeuti \cite[Proposition 3.2]{Tak2}.) Application of the classical Bochner Theorem (see Loomis \cite[Section 36A]{Loo}) to $\tilde{\varphi}$ and $\tau$ yields the desired result.~\text{$\vartriangleright$} \subsec{5.9.}~We now specify the vector integral to be used below; see the details in~\cite[5.14.B]{BA_ST}. Let $\mathcal{A}$ be a $\sigma$-algebra of subsets of $Q$, i.e., $\mathcal{A}\subset\mathcal{P}(Q)$. We identify this algebra with the isomorphic algebra of the characteristic functions $\{1_A\!:=\chi_A:\,A\in\mathcal{A}\}$ so that $S(\mathcal A)$ is the space of all $\mathcal{A}$-simple functions on $Q$; i.e., $f\in S(\mathcal{A})$ means that $f=\sum_{k=1}^n\alpha_k\chi_{A_k}$ for some $\alpha_1,\dots,\alpha _n\in\mathbb{R}$ and disjoint $A_1,\dots,A_n\in\mathcal{A}$. Let a measure $\mu$ be defined on $\mathcal{A}$ and take values in a~Dedekind complete vector lattice~$Y$. We suppose that $\mu$ is order bounded. If $f\in S(\mathcal{A})$ then we put $$ I_\mu(f) \!:=\int f\, d\mu =\sum\limits_{k=1}^n\alpha_k\mu(A_k). $$ It can be easily seen that the integral $I_\mu $ can be extended to the space of $\mu$-summable functions $\mathcal{L}^1(\mu)$ for which the more informative notations $\mathcal{L}^1(Q,\mu)$ and $\mathcal{L}^1(Q,\mathcal A,\mu)$ are also used. On identifying equivalent functions, we obtain the Dedekind $\sigma$-complete vector lattice $L^1(\mu)\!:=L^1(Q,\mu)\!:=L^1(Q,\mathcal A,\mu)$. \subsec{5.10.}~Assume now that $Q$ is a topological space.
Denote by $\mathcal{F}(Q)$, $\mathcal{K}(Q)$, and $\mathcal{B}(Q)$ the collections of all closed, compact, and Borel subsets of $Q$. A measure $\mu:\mathcal{B}(Q)\to Y$ is said to be {\it quasi-Radon} (\textit{quasi-regular}) if $\mu$ is order bounded and $$ \gathered |\mu|(U)=\sup\{|\mu|(K):\ K\in\mathcal{K}(Q),\,K\subset U\} \\ (|\mu|(U)=\sup\{|\mu|(K):\ K\in\mathcal{F}(Q),\,K\subset U\}) \endgathered $$ for every open (respectively, closed) set $U\subset Q$. If the above equalities are fulfilled for all Borel $U\subset Q$ then we speak about {\it Radon} and \textit{regular\/} measures. Say that $\mu=\mu_1+i\mu_2:\mathcal{B}(Q)\to Y_\mathbb{C}$ has one of the above properties whenever this property is enjoyed by both $\mu_1$ and $\mu_2$. We denote by $\mathop{\fam0 qca}(Q,Y_\mathbb{C})$ the set of all $\sigma$-additive $Y_\mathbb{C}$-valued quasi-Radon measures on $\mathcal{B}(Q)$. If $Q$ is locally compact or even completely regular then $\mathop{\fam0 qca}(Q,Y_\mathbb{C})$ is a vector lattice; see \cite[Theorem 6.2.2]{DOP}. The variation of a $Y_{\mathbb{C}}$-valued (in particular, $\mathbb{C}$-valued) measure $\nu $ is denoted in the standard fashion: $|\nu|$. \subsec{5.11.}~\Theorem{}Let $Y$ be a real Dedekind complete vector lattice and let $Q$ be a locally compact topological space. Then for each $T\in L_m(C_0(Q),Y_\mathbb{C})$ there exists a~unique measure $\mu\!:=\mu_T\in\mathop{\fam0 qca}(Q,Y_\mathbb{C})$ such that $$ T(f)=\int\limits_{Q}f\,d\mu \quad (f \in C_0(Q)). $$ The mapping $T\mapsto\mu_T$ is a lattice isomorphism from $L_m(C_0(Q),Y_\mathbb{C})$ onto $\mathop{\fam0 qca}(Q,Y_\mathbb{C})$. \rm \par\mbox{$\vartriangleleft$}~See \cite[Theorem 2.5]{KM5}.~\text{$\vartriangleright$} \subsec{5.12.}~\Theorem{}Let $G$ be a locally compact abelian group, let $G'$ be the dual group of~$G$, and let $Y$ be a Dedekind complete real vector lattice.
For a mapping $\varphi:G\to Y_\mathbb{C}$ the following are equivalent: \subsec{(1)}~$\varphi$ has a dominant order continuous at zero. \subsec{(2)}~There exists a unique measure $\mu\in\mathop{\fam0 qca}(G',Y_\mathbb{C})$ such that $$ \varphi(g)=\int\limits_{G'}\chi(g)\,d\mu(\chi)\quad(g\in G). $$ \rm \par\mbox{$\vartriangleleft$}~This is immediate from Theorems 5.8 and 5.11.~\text{$\vartriangleright$} \subsec{5.13.}~\Corollary{}The Fourier transform establishes an order and linear isomorphism between the space of measures $\mathop{\fam0 qca}(G',Y_\mathbb{C})$ and the space of dominated mappings $\mathfrak{D}(G,Y_\mathbb{C})$. In particular, $\mathfrak{D}(G,Y_\mathbb{C})$ is a~Dedekind complete complex vector lattice. \rm \subsec{5.14.}~{\bf(1)}~In~\cite{Tak2} Takeuti introduced the Fourier transform for mappings defined on a~locally compact abelian group and having as values pairwise commuting normal operators in a~Hilbert space. By applying the transfer principle, he developed a~general technique for translating the classical results to operator-valued functions. In particular, he established a~version of the Bochner Theorem describing the set of all inverse Fourier transforms of positive operator-valued Radon measures. Given a~complete Boolean algebra $\mathbb{B}$ of projections in a~Hilbert space $H$, denote by $(\mathbb{B})$ the space of all selfadjoint operators on $H$ whose spectral resolutions are in $\mathbb{B}$; i.e., $A\in(\mathbb{B})$ if and only if $A=\int_\mathbb{R}\lambda\,dE_\lambda$ and $E_\lambda\in\mathbb{B}$ for all $\lambda\in\mathbb{R}$. If $Y\!:=(\mathbb{B})$ then Theorem 5.8 is essentially Takeuti's result \cite[Theorem~1.3]{Tak2}. {\bf(2)}~Kusraev and Malyugin in~\cite{KM5} abstracted Takeuti's results in the following directions: First, they considered more general arrival spaces, namely, norm complete lattice normed spaces. So the important particular cases of Banach spaces and Dedekind complete vector lattices were covered.
Second, the class of dominated mappings was identified with the set of all inverse Fourier transforms of order bounded quasi-Radon vector measures. Third, the construction of a suitable Boolean valued universe was eliminated from all definitions and statements of results. In particular, Theorem 5.12 and Corollary 5.13 correspond to \cite[Theorem 4.3]{KM5} and \cite[Theorem 4.4]{KM5}; while their lattice normed versions, to \cite[Theorem 4.1]{KM5} and \cite[Theorem 4.5]{KM5}, respectively. {\bf(3)}~Theorem 5.7 is due to Gordon \cite[Theorem 2]{Gor2}. Proposition~3.3 in Takeuti \cite{Tak2} is essentially the same result for the particular departure and arrival spaces; i.e., $X=L^1(G)$ and $Y=(\mathbb{B})$. Theorem 5.11 is taken from Kusraev and Malyugin \cite{KM5}. In the case of $Q$ compact, it was proved by Wright in~\cite[Theorem 4.1]{Wr6}. In this result $\mu$ cannot, in general, be chosen regular rather than quasiregular. The quasiregular measures were introduced by Wright in~\cite{Wr1}. \noindent {\it Anatoly G.~Kusraev}\\ {\leftskip\parindent\small \noindent Southern Mathematical Institute\\ 22 Markus Street\\ Vladikavkaz, 362027, RUSSIA\\ E-mail: [email protected] \par} \noindent {\it Sem\"en S.~Kutateladze}\\ {\leftskip\parindent\small \noindent Sobolev Institute of Mathematics\\ 4 Koptyug Avenue\\ Novosibirsk, 630090, RUSSIA\\ E-mail: [email protected] \par} \end{document}
\begin{document} \title[Compatible Recurrent Identities of the Sandpile Group]{Compatible Recurrent Identities of the Sandpile Group and Maximal Stable Configurations} \author{Yibo Gao} \address{Department of Mathematics, Massachusetts Institute of Technology, \mbox{Cambridge, MA 02139}} \email{[email protected]} \author{Rupert Li} \address{Massachusetts Institute of Technology, \mbox{Cambridge, MA 02139}} \email{[email protected]} \keywords{chip-firing, recurrent configuration, sandpile group, reduced Laplacian, maximal stable configuration, recurrent identity, Cartesian product, strong product, bipartite graph, tree} \date{June 23, 2020} \begin{abstract} In the abelian sandpile model, recurrent chip configurations are of interest as they are a natural choice of coset representatives under the quotient of the reduced Laplacian. We investigate graphs whose recurrent identities with respect to different sinks are compatible with each other. The maximal stable configuration is the simplest recurrent chip configuration; graphs whose recurrent identities equal the maximal stable configuration are of particular interest and are said to have the complete maximal identity property. We prove that given any graph $G$ one can attach trees to the vertices of $G$ to yield a graph with the complete maximal identity property. We conclude with several intriguing conjectures about the complete maximal identity property of various graph products. \end{abstract} \maketitle \section{Introduction}\label{section:Introduction} The \textbf{abelian sandpile model} was invented by Dhar in \cite{dhar1990self} as a model for self-organized criticality, which was introduced in \cite{bak1987self}.
The automaton model was invented to model stacked items at sites, which topple when a critical height is reached, sending the items to adjacent sites. For example, using a lattice graph to model a plane, the abelian sandpile model can mimic a pile of sand on a flat surface as it collapses under gravity to reach a stable state, and the destabilizing behavior of adding additional sand can then be analyzed. The analysis of these models has led to generalizations for arbitrary arrangements and compositions of sites. Among the models that display self-organized criticality, the abelian sandpile model, which has been used to model landslides, is the simplest analytically tractable model, according to \cite{dhar1999abelian}. Overviews of the abelian sandpile model, related models, and self-organized criticality are given in \cite{dhar2006theoretical,bak2013nature}. It has been demonstrated in \cite{bak1990forest,burridge1967model,turcotte2004landslides} that self-organized criticality is present in models that can be associated to the natural hazards of landslides, earthquakes, and forest fires, as well as to financial markets in \cite{bartolozzi2006scale,biondo2015modeling,scheinkman1994self}, evolution and species extinction in \cite{sneppen1995evolution,newman1996self,sole1996extinction}, and neural systems in the brain in \cite{hesse2014self,pu2013developing,chialvo2010emergent}. The abelian sandpile model, also referred to as the \textbf{chip-firing game} after the paper \cite{bjorner1991chip}, on a directed graph $G$ consists of a collection of chips at each vertex of $G$. If a vertex $v$ has at least as many chips as its outdegree, then it can fire, sending one chip along each outgoing edge to its neighboring vertices. This continues indefinitely or until no vertex can fire.
Conditions for which configurations lead to terminating processes have been studied in \cite{bjorner1991chip,bjorner1992chip}, and a polynomial bound for the length of the process was proven in \cite{tardos1988polynomial}. Chip-firing has been studied using an algebraic potential theory approach in \cite{biggs1997algebraic}; a particular type of chip-firing, referred to as the probabilistic abacus, has also been considered in \cite{engel1975probabilistic,engel1976does} as a quasirandom process that provides insight into Markov chains. Books surveying chip-firing include \cite{klivans2018mathematics,corry2018divisors}. In \cref{section:Preliminaries}, we establish the basic theory surrounding the chip-firing game, and define the primary algebraic object associated with the model, the \textbf{sandpile group}, and the special role \textbf{recurrent} elements play in the group. The sandpile group has been studied in \cite{creutz1991abelian}, as well as in \cite{cori2000sandpile} and \cite{levine2009sandpile} for the cases of dual graphs and wired trees, respectively. The algebraic structure of the sandpile group has been studied in \cite{wagner2000critical,dhar1995algebraic}, and the properties of recurrent configurations have been studied in \cite{biggs1999chip,van2001algorithmic}. In \cref{section:CMIP}, we begin stating our own results, investigating the graphs for which the simplest recurrent element, the \textbf{maximal stable configuration}, is the identity of the sandpile group, regardless of the choice of sink. The identity of the sandpile group is of particular interest, and has been previously investigated in \cite{caracciolo2008explicit,le2002identity} for $\mathbb{Z}^2$ lattice graphs. In \cref{section: necessary and sufficient}, we find necessary conditions for the property developed in \cref{section:CMIP}, the \textbf{complete maximal identity property}. 
In \cref{section:CIP}, we generalize the complete maximal identity property into the \textbf{complete identity property}, investigating graphs for which the recurrent identity remains the same regardless of the choice of sink. We consider the possible relationship between the complete identity property and bipartite graphs. We conclude in \cref{section:Conjectures} with some conjectures surrounding the recurrent identity of sandpiles formed from graph products. \section{Preliminaries}\label{section:Preliminaries} We refer readers to \cite{corry2018divisors} and \cite{holroyd2008chip} for detailed background on chip-firing. Unless otherwise stated, we restrict the chip-firing game to be on graphs which are simple, connected, and nontrivial (have more than one vertex). Recall that a simple graph is an unweighted, undirected graph without loops or multiple edges. A \textbf{sandpile} is a graph $G$ that has a special vertex, called a \textbf{sink}. A \textbf{chip configuration} over the sandpile is a vector of integers indexed over all non-sink vertices of $G$. By standard convention, these numbers are nonnegative integers, each counting a discrete number of \textbf{chips}. For convenience when the sink changes, a chip configuration $c$ may be explicitly stated to be over all vertices, not just the non-sink vertices; the entry corresponding to the current sink is then simply excluded from the vector. In a sandpile, a vertex can \textbf{fire} if it has at least as many chips as its degree, at which point it sends chips along each edge to its neighboring vertices, with each edge transferring $w$ chips, where $w$ is the weight of the edge (in an unweighted graph, each edge has weight 1); the vertex that fires loses the chips it fired. A vertex is said to be \textbf{active} if it can fire. The sink is not allowed to fire, nor are its chips considered; hence a chip configuration for a sandpile does not have an entry corresponding to the number of chips at the sink.
A \textbf{stable} configuration is a configuration that has no active vertices. If by a sequence of firings a chip configuration $c$ can result in a stable configuration, that stable configuration is called the \textbf{stabilization} and is denoted $\operatorname{Stab}(c)$. It has been proven in \cite{bjorner1991chip} that if a stabilization exists, then it is unique for each chip configuration, regardless of the order in which vertices are fired; moreover, every firing sequence eventually reaches this stable configuration, always taking the same total number of firings. It has also been shown in \cite{holroyd2008chip} that in a sandpile all chip configurations stabilize. \begin{definition}\label{definition:MSC} The \textbf{maximal stable configuration} $m_G$ is the chip configuration in which every vertex $v$ has $d_v-1$ chips, where $d_v$ is the degree of vertex $v$. \end{definition} For a sandpile, a chip configuration $c$ is called \textbf{accessible} if for all (stable) configurations $d$ there exists a configuration $e$ such that $\operatorname{Stab}(d+e)=c$, where $d+e$ is the componentwise addition of the vectors $d$ and $e$. \begin{definition}\label{definition: recurrent} A chip configuration is called \textbf{recurrent} if it is accessible and stable. \end{definition} Note that $m_G$ is always recurrent, even when there is only one recurrent configuration for a given sandpile, and is by far the simplest recurrent chip configuration. \begin{example}\label{example: preliminary defs C4} The cycle graph on four vertices $C_4$ may have its vertices indexed 0, 1, 2, and 3, where $v_0$ is the sink; see \cref{subfigure: C4 Sandpile}, in which the sink is colored green. \begin{figure} \caption{$C_4$ with sink $v_0$, labelled as $s$.} \label{subfigure: C4 Sandpile} \end{figure} This sandpile may have a chip configuration $c=(2,2,0)$ as shown in \cref{subfigure: C4 220}.
In $c$, vertices $v_1$ and $v_2$ are active. If $v_2$ (colored blue in \cref{subfigure: C4 220}) fires, the result is chip configuration $d=(3,0,1)$ as shown in \cref{subfigure: C4 301}. In $d$, $v_1$ is the only active vertex (colored blue in \cref{subfigure: C4 301}), and firing it results in the maximal stable configuration $m_G=(1,1,1)$ where in this example $G = C_4$ with sink at $v_0$, as shown in \cref{subfigure: C4 111}. \end{example} \begin{figure} \caption{Configuration $c$} \label{subfigure: C4 220} \caption{Configuration $d$} \label{subfigure: C4 301} \caption{$m_G$ for $G = C_4$} \label{subfigure: C4 111} \caption{Chip-firing on $C_4$. The non-sink vertices are labeled with their number of chips.} \label{figure: Chip-firing example on C4} \end{figure} The order of a graph $|G|$ is the number of vertices of $G$, while the size of a graph $\operatorname{size}(G)$ is the number of edges of $G$. For example, for a tree $T$ we have $|T|=\operatorname{size}(T)+1$. If one labels the vertices of a sandpile $G$ as $v_1,\dots,v_{|G|}$, where the graph $G$ may be weighted with positive integer weights, then the \textbf{Laplacian} of $G$ is the $|G| \times |G|$ matrix $\Delta = D^\mathrm{T} - A^\mathrm{T}$, where $D$ is the diagonal matrix with $D_{ii} = d_{v_i}$, and $A$ is the adjacency matrix of $G$. That is, if~$a_{ij}$ is the weight of the edge from vertex $v_i$ to $v_j$, and $d_i$ is the degree of~$v_i$, \begin{equation*} \Delta_{ij} = \begin{cases} -a_{ij} & \text{for } i \neq j, \\ d_i & \text{for } i = j. \end{cases} \end{equation*} The \textbf{reduced Laplacian} $\Delta'$ of $G$ is obtained by removing from $\Delta$ the row and column corresponding to the sink. To explicitly refer to the reduced Laplacian with the sink at $v_i$, the notation~$\Delta^{(i)}$ is used.
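The $C_4$ walkthrough above can be checked by direct simulation. The following sketch is our own illustration, not code from the paper: it hard-codes $C_4$ with sink $v_0$, repeatedly fires the first active vertex, and lets chips sent to the sink disappear.

```python
# A minimal chip-firing simulation for C4 with sink v0 (our illustration).
# Non-sink vertices v1, v2, v3 each have degree 2; only adjacency among
# non-sink vertices is listed, so chips sent to the sink simply vanish.
neighbors = {1: [2], 2: [1, 3], 3: [2]}
degree = {1: 2, 2: 2, 3: 2}

def stabilize(config):
    """Fire active vertices until no vertex can fire; return Stab(config)."""
    c = dict(config)
    while True:
        active = [v for v in c if c[v] >= degree[v]]
        if not active:
            return c
        v = active[0]          # by the abelian property, any choice works
        c[v] -= degree[v]
        for w in neighbors[v]:
            c[w] += 1

print(stabilize({1: 2, 2: 2, 3: 0}))  # -> {1: 1, 2: 1, 3: 1}, i.e. m_G
```

Starting from $c=(2,2,0)$ or from the intermediate $d=(3,0,1)$ of the example, the simulation reaches the same configuration $(1,1,1)$, consistent with the uniqueness of the stabilization.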
We can represent the firing of a non-sink vertex $v$ as the subtraction of the column of $\Delta'$ corresponding to $v$ from the chip configuration. \begin{example}\label{example: Laplacian C4} The cycle graph on four vertices $C_4$ has Laplacian \begin{equation*} \Delta = \begin{bmatrix} 2 & -1 & 0 & -1 \\ -1 & 2 & -1 & 0 \\ 0 & -1 & 2 & -1 \\ -1 & 0 & -1 & 2 \\ \end{bmatrix} \end{equation*} and the reduced Laplacian, for sink $v_1$, is \begin{equation*} \Delta' = \begin{bmatrix} 2 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 2 \\ \end{bmatrix}. \end{equation*} \end{example} In order to view configurations before and after firing as equivalent, we define the \textbf{sandpile group} of $G$ to be the group quotient \begin{equation*} \mathcal{S}(G)=\mathbb{Z}^{|G|-1} / \Delta^{\prime} \mathbb{Z}^{|G|-1}. \end{equation*} From the definition of the sandpile group, we see that $\left|\mathcal{S}(G)\right|=\left|\Delta'\right|$, where $\left|\mathcal{S}(G)\right|$ denotes the order of the sandpile group $\mathcal{S}(G)$ and $\left|\Delta'\right|$ denotes the determinant of the reduced Laplacian. This holds regardless of the choice of sink. By the Matrix-Tree theorem (see for example \cite{stanley2013algebraic}), $\left|\Delta'\right|$ is the number of spanning trees of $G$. It is also shown in \cite{biggs1999chip} that each equivalence class of $\mathcal{S}(G)$ contains exactly one recurrent configuration, and that the recurrent configurations form an abelian group with the operation being defined as the stabilization of the sum of two recurrent configurations.
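The count $|\mathcal{S}(G)|=|\Delta'|$ can be checked concretely for $C_4$: the determinant of the reduced Laplacian in the example above should equal the number of spanning trees of $C_4$, which is $4$ (delete any one of the four edges). The sketch below is our own illustration, using exact rational arithmetic; any exact determinant routine would do.

```python
from fractions import Fraction

# Reduced Laplacian of C4 with sink v1, copied from the example above.
Dp = [[2, -1, 0],
      [-1, 2, -1],
      [0, -1, 2]]

def det(m):
    """Exact determinant via Gaussian elimination over the rationals."""
    m = [[Fraction(x) for x in row] for row in m]
    n, d = len(m), Fraction(1)
    for i in range(n):
        p = next((r for r in range(i, n) if m[r][i] != 0), None)
        if p is None:
            return Fraction(0)      # singular matrix
        if p != i:
            m[i], m[p] = m[p], m[i]
            d = -d                  # a row swap flips the sign
        d *= m[i][i]
        for r in range(i + 1, n):
            f = m[r][i] / m[i][i]
            for c in range(i, n):
                m[r][c] -= f * m[i][c]
    return d

print(det(Dp))  # -> 4, the number of spanning trees of C4
```

So $|\mathcal{S}(C_4)|=4$; in fact $\mathcal{S}(C_4)\cong\mathbb{Z}/4\mathbb{Z}$, though the code above only verifies the order.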
A configuration $c$ over graph $G$ with $n$ vertices is \textbf{equivalent} to another configuration $d$ when they have the same image in $\mathcal{S}(G)$, meaning they lie in the same equivalence class in $\mathcal{S}(G)$, or in other words, \begin{equation*} c\equiv d \Leftrightarrow \text{there exists } \, \mathbf{v} \in \mathbb{Z}^{|G|-1}: c - d = \Delta' \mathbf{v}. \end{equation*} Notice that as $\Delta'$ is non-singular by the Matrix-Tree theorem, the vector $(\Delta')^{-1} (c-d)$ is the unique solution to the equation $c = d + \Delta' \mathbf{v}$, and thus \begin{equation*} c \equiv d \Longleftrightarrow (\Delta')^{-1} (c-d) \in \mathbb{Z}^{|G|-1}. \end{equation*} However, two configurations being equivalent does not necessarily imply there exists a firing sequence that takes one to the other. Hence, we introduce the ideas of backfiring and unrestricted firing. To fire a vertex $v$ is equivalent to subtracting the column of $\Delta'$ corresponding to $v$, and similarly to \textbf{backfire} $v$ is equivalent to \textit{adding} that column. Backfiring has been studied in \cite{dhar1994inverse} and \cite{caracciolo2012multiple} under the names ``untoppling'' and ``anti-toppling'', respectively. \textbf{Unrestricted firing} is when vertices are allowed to fire and/or backfire regardless of the number of chips they have. Furthermore, the sink is allowed to fire, where firing the sink corresponds to backfiring all non-sink vertices. Of interest in the sandpile group is the identity element, which leads us to the definition of the recurrent identity. \begin{definition}\label{definition: recurrent identity} The \textbf{recurrent identity} is the recurrent configuration equivalent to the all-zero configuration. \end{definition} Recall that each equivalence class contains exactly one recurrent configuration, and thus the recurrent identity is well-defined. \begin{example} Let $G=K_3$.
We look at the sandpile of $G$ with arbitrary sink $s$. We will show that the recurrent identity $c$ is equivalent to the all-zero configuration $d$, where our two chip configurations are \begin{equation*} c = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \quad d = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \end{equation*} and the reduced Laplacian is \begin{equation*} \Delta' = \begin{bmatrix} 2 & -1 \\ -1 & 2 \end{bmatrix}. \end{equation*} Now we notice that \begin{equation*} (\Delta')^{-1} (c-d) = \begin{bmatrix}[1.3] \frac{2}{3} & \frac{1}{3} \\ \frac{1}{3} & \frac{2}{3} \end{bmatrix} \begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \end{bmatrix} \in \mathbb{Z}^2. \end{equation*} Hence, $c\equiv d$, and $d$ can be obtained from $c$ by unrestrictedly firing (see \cref{section:CMIP} for a formal definition) the two non-sink vertices once each. See \cref{fig:equivalence} for a visual example of the firing sequence taking $c$ to $d$, and thus the equivalence of $c$ and $d$. \begin{figure} \caption{Equivalence of the recurrent identity and the all-zero configuration. The sink is colored green, and the vertex being fired is colored blue.} \label{fig:equivalence} \end{figure} \end{example} The recurrent identity has been studied previously, including some limiting behavior for $\mathbb{Z}^2$ lattice graphs as in \cite{caracciolo2008explicit} and \cite{le2002identity}. For example, the recurrent identity for the $128 \times 128$ and $198 \times 198$ square grids with a boundary sink, or a sink connected to all the boundary vertices, is shown in \cref{figure: Identity Square Grid}. Many intriguing questions about the identity are still open.
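The integrality test for equivalence is easy to automate; a sketch in Python reproducing the $K_3$ computation above (the function name is ours):

```python
import numpy as np

def equivalent(c, d, Dred):
    """c and d are equivalent iff (Dred)^{-1}(c - d) is an integer vector."""
    v = np.linalg.solve(np.asarray(Dred, dtype=float),
                        np.asarray(c, dtype=float) - np.asarray(d, dtype=float))
    return bool(np.allclose(v, np.round(v)))

# K_3 with an arbitrary sink: the reduced Laplacian and configurations above
Dred = [[2, -1], [-1, 2]]
print(equivalent([1, 1], [0, 0], Dred))  # prints True
```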
\begin{figure} \caption{$128 \times 128$ square grid} \label{subfigure: Identity 128} \caption{$198 \times 198$ square grid} \label{subfigure: Identity 198} \caption{The recurrent identity of the $128\times128$ and $198\times198$ square grids, as shown in \cite{holroyd2008chip}.} \label{figure: Identity Square Grid} \end{figure} \section{The Complete Maximal Identity Property}\label{section:CMIP} The maximal stable configuration is recurrent in every sandpile, even when there is only one recurrent configuration, and is by far the simplest recurrent configuration. Furthermore, of particular interest among the recurrent elements is the identity of the abelian group, the recurrent identity. Hence, it is of interest to know for which graphs the maximal stable configuration is the recurrent identity, leading to our newly created definition of the maximal identity property. \begin{definition}\label{definition:MIP} A graph $G$ is said to have the \textbf{maximal identity property} at a vertex $s$ if, having chosen $s$ as the sink, the maximal stable configuration is the recurrent identity of the sandpile $G$. \end{definition} The following lemma provides an equivalent condition for the maximal identity property. \begin{lemma}\label{lemma: unrestricted firing MIP} The sandpile $G$ has the maximal identity property if and only if there exists an unrestricted firing sequence that takes $m_G$ to $\mathbf{0}$. \end{lemma} \begin{proof} The maximal stable configuration $m_G$ is always recurrent. An unrestricted firing sequence between $m_G$ and $\mathbf{0}$ exists if and only if the two configurations are equivalent, which occurs if and only if $m_G$ is the recurrent identity, i.e., $G$ has the maximal identity property. \end{proof} Another necessary and sufficient condition for a sandpile $G$ to have the maximal identity property is that $\operatorname{Stab}(m_G+m_G)=m_G$.
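The condition $\operatorname{Stab}(m_G+m_G)=m_G$ can be checked directly by repeated firing; a sketch in Python, assuming a simple graph so that the non-sink degrees appear on the diagonal of $\Delta'$ (the function names are ours):

```python
import numpy as np

def stabilize(c, Dred):
    """Fire active vertices until stable; by the abelian property of chip-firing
    the result does not depend on the firing order."""
    c, Dred = np.array(c), np.asarray(Dred)
    deg = np.diag(Dred)  # non-sink degrees sit on the diagonal
    while True:
        active = np.flatnonzero(c >= deg)
        if active.size == 0:
            return c
        c = c - Dred[:, active[0]]  # firing subtracts the vertex's column

def has_mip(Dred):
    """Maximal identity property via the test Stab(m_G + m_G) = m_G."""
    m = np.diag(np.asarray(Dred)) - 1  # maximal stable configuration
    return bool(np.array_equal(stabilize(2 * m, Dred), m))

print(has_mip([[2, -1], [-1, 2]]))  # K_3 with any sink: prints True
```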
The complete maximal identity property, developed for the first time in this paper, allows any vertex of a graph to be chosen as the sink while preserving the maximal identity property. \begin{definition}\label{definition:CMIP} A graph is said to have the \textbf{complete maximal identity property} if for all vertices $v$, it has the maximal identity property for sink $v$. \end{definition} \begin{example}\label{example: CMIP Petersen and Diamond Ring} Both the Petersen graph and the $n$-diamond ring for all positive integers $n$ have the complete maximal identity property (see \cref{figure: Petersen and Diamond Ring} for the Petersen graph and the $n$-diamond ring for $n=3,4$). The proofs are straightforward but tedious, and thus not included in this paper. \end{example} \begin{figure} \caption{The Petersen graph} \label{subfigure: Petersen graph} \caption{The 3-diamond ring} \label{subfigure: 3-diamond Ring} \caption{The 4-diamond ring} \label{subfigure: 4-diamond Ring} \caption{The Petersen graph and the $n$-diamond ring for $n=3,4$} \label{figure: Petersen and Diamond Ring} \end{figure} \begin{proposition}\label{proposition: CMIP complete odd cycle tree} All complete graphs $K_n$, odd cycles $C_{2n+1}$, and trees have the complete maximal identity property. \end{proposition} \begin{proof} \textbf{Case 1:} $K_n$. By symmetry, we only need to prove that $m_{K_n} \equiv \mathbf{0}$ for one sink of $K_n$. We prove this via the unrestricted firing method of \cref{lemma: unrestricted firing MIP}. The configuration~$m_{K_n}$ has~$n-2$ chips at each non-sink vertex. Backfire the sink $n-2$ times to yield the all-zero configuration. \textbf{Case 2:} $C_{2n+1}$. By symmetry, we only need to prove that $m_{C_{2n+1}} \equiv \mathbf{0}$ for one sink of $C_{2n+1}$. We prove this via the unrestricted firing method. Note that $m_{C_{2n+1}} = \mathbf{1}$. Define the \textbf{rank} of a vertex to be the edge length of the shortest path from the sink to that vertex.
Fire both vertices of rank $n$ once. This results in a net movement of one chip to each of the vertices of rank $n-1$, ridding the vertices of rank $n$ of chips. Now, as each vertex of rank $n-1$ has 2 chips, fire all vertices with rank at least $n-1$ twice to eliminate all chips from those vertices. Continue in this manner, firing all vertices with rank $n-k$ or higher $k+1$ times. This results in the all-zero configuration. \textbf{Case 3:} Trees. There is only one spanning tree of a tree, the tree itself. Hence, we apply the Matrix-Tree Theorem to find that there is only one equivalence class in the sandpile group, regardless of the choice of sink. Thus, there is only one recurrent element; as $m_G$ is always recurrent, it is the only recurrent configuration. Therefore, $m_G$ must be the recurrent identity. \end{proof} The fact that a tree has a trivial sandpile group gives it a unique effect when added to other graphs. Adding a tree of size $n$ to a vertex $v\in G$ simply means taking any tree~$T$ on~$n+1$ vertices and $n$ edges, taking the disjoint union of $T$ and $G$ by combining their vertex sets and their edge sets, and finally merging a vertex of~$T$ with vertex $v$. We will show in \cref{lemma:addtreeCMIPidentityconfiguration} that ultimately only the size of the tree matters. See \cref{Diamond} and \cref{Diamond+} for an example of how trees are added to graphs. Notice that adding the tree of size 0, a single vertex, does not change the graph. The following theorem states how one can attach trees to any connected graph in the aforementioned manner to result in a graph that has the complete maximal identity property. \begin{figure} \caption{The Diamond Graph} \label{Diamond} \caption{The Diamond Graph with a tree of size 1 added to vertex $v_4$.
This graph has the maximal identity property for sink at $v_2$.} \label{Diamond+} \end{figure} \begin{theorem}\label{theorem: Add tree CMIP} Given any connected graph $G$, there exist infinitely many graphs derived from adding trees to $G$ that have the complete maximal identity property. \end{theorem} To prove \cref{theorem: Add tree CMIP}, we need the following lemma. \begin{lemma}\label{lemma:addtreeCMIPidentityconfiguration} Let $G$ be any graph. Let $\{T_v\}$ be a family of trees labeled by vertices of $G$ and let~$G'$ be the graph obtained from $G$ by adding each tree $T_v$ to vertex $v$. Then $G'$ has the complete maximal identity property if and only if the configuration $c$ over $G$ with $d_v + |T_v| - 2$ chips at each vertex $v$ is equivalent to the identity for all selections of sinks, where $|T_v|$ is the number of vertices in the tree. \end{lemma} \begin{proof} We first prove the if direction. Pick a sink $s\in G \subseteq G'$. Let $c$ be the chip configuration in the lemma statement. We observe that each tree $T_v$ has size $c_v - (m_G)_v$. As trees have a trivial sandpile group, there exists an unrestricted firing sequence that moves all the chips of $m_{G'}$ in $T_v$ to $v$. Each edge of $T_v$ provides one additional chip to $m_{G'}$ beyond $m_G$, and thus once all the chips are moved towards their attachment point $v$ in $G$, the resulting configuration is $c$, so $c\equiv m_{G'}$. As~$c\equiv \mathbf{0_G}$, there exists an unrestricted firing sequence that takes $c$ to $\mathbf{0}$ in~$G$. Use this same sequence on the chip configuration $c$ over $G'$, except whenever vertex $v$ in $G$ is fired, fire vertex~$v$ and all the vertices of $T_v$ in $G'$. This results in $\mathbf{0_{G'}}$, thus illustrating that $G'$ has the maximal identity property for sink $s$. The choice of $s$ was arbitrary, so the argument applies to all vertices in $G$. However, $G'$ also has the vertices in the trees that were added.
We must prove that $G'$ has the maximal identity property for these vertices as sinks. Pick a sink $s\in G'$ from tree $T_v$ for arbitrary vertex $v\in G$. Use the unrestricted firing sequence that took $m_{G'}$ to $\mathbf{0}$ for sink $v$, to result in a configuration equivalent to $m_G$ that has all of the chips at vertex $v$. As all trees have a trivial sandpile group, there exists an unrestricted firing sequence that takes all the chips at vertex $v$ to $s$, if only $T_v$ is considered. Now, whenever vertex $v$ needs to be fired, fire all vertices in $G$, and all vertices of $T_{v'}$ for all $v'\neq v$. This allows all firings in $T_v\subset G'$ to work inside $G'$. This demonstrates that $m_{G'}\equiv \mathbf{0}$, and thus that $G'$ has the complete maximal identity property. To prove the only if direction, assume $G'$ has the complete maximal identity property. Reversing the unrestricted firing sequences used to move all the chips of $m_{G'}$ in $T_v$ to $v$, we find $c \equiv m_{G'}$, as the $|T_v|-1$ chips $T_v$ needs for the maximal stable configuration are taken away from $v$, leaving $v$ with $d_v - 1$ chips, the number it needs for the maximal stable configuration. As $G'$ has the complete maximal identity property, $c \equiv m_{G'} \equiv \mathbf{0_{G'}}$. \end{proof} We now can prove \cref{theorem: Add tree CMIP}. \begin{proof}[Proof of \cref{theorem: Add tree CMIP}] Say we have a graph $G$. By the Matrix-Tree Theorem, regardless of the choice of sink, $|\Delta'|$ is a constant integer. Say $|\Delta'|=k$. By \cref{lemma:addtreeCMIPidentityconfiguration}, to prove the result it suffices to find infinitely many configurations $c$, defined on all vertices, such that $c \geq m_G$ and, for all selections of sinks, $c\equiv\mathbf{0}$. We will create such a configuration as follows. For each vertex $v$, let the number of chips on it be equal to a multiple of $k$ that is greater than or equal to $d_v-1$.
This configuration can be represented by a vector $k\mathbf{x}$, where $\mathbf{x}$ is a vector composed of nonnegative integers. For a particular selection of sink, to prove that this configuration is equivalent to the identity, we must show that there exists a vector $\mathbf{y}$ with integer entries such that \begin{equation*} \Delta' \mathbf{y} = k\mathbf{x}. \end{equation*} But we know that $|\Delta'|=k$, so using the fact that the adjugate of an integer matrix (which~$\Delta'$ is) has integer entries, we know that $k(\Delta')^{-1}$ has integer entries. Hence \begin{equation*} \mathbf{y} = k(\Delta')^{-1} \mathbf{x}, \end{equation*} and as $\mathbf{x}$ has integer entries, $\mathbf{y}$ has integer entries. As there are infinitely many multiples of a positive integer greater than a fixed value, there are infinitely many valid configurations $c$. Thus, there exist infinitely many graphs consisting of the given graph with trees attached to it that have the complete maximal identity property. \end{proof} Because we may add trees to graphs to give them the complete maximal identity property, we wish to have a notion of irreducibility that excludes graphs obtained by adding trees. This leads us to the classical notion of a biconnected graph. \begin{definition}\label{definition: biconnected} A graph is \textbf{biconnected} if it remains connected after the removal of any single vertex and its incident edges. \end{definition} In other words, for any two vertices in a biconnected graph, there exist at least two vertex-disjoint paths that connect them. A search over all biconnected graphs with 11 or fewer vertices found that, aside from odd cycles and complete graphs, the only biconnected graphs (up to isomorphism) with the complete maximal identity property were three graphs on 8 vertices and two graphs on 10 vertices. These include the 2-diamond ring and the Petersen graph from \cref{example: CMIP Petersen and Diamond Ring}.
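A search of this kind can be set up by testing $m_G \equiv \mathbf{0}$ for every choice of sink, using the integrality criterion; a sketch in Python for simple graphs (the function names are ours):

```python
import numpy as np

def has_cmip(A):
    """Complete maximal identity property: for every sink, m_G is equivalent
    to 0, i.e. (Delta')^{-1} m_G is an integer vector (m_G is always recurrent)."""
    A = np.asarray(A)
    n = len(A)
    L = np.diag(A.sum(axis=1)) - A  # Laplacian
    for sink in range(n):
        keep = [i for i in range(n) if i != sink]
        Dred = L[np.ix_(keep, keep)]
        m = np.diag(Dred) - 1  # maximal stable configuration
        v = np.linalg.solve(Dred.astype(float), m.astype(float))
        if not np.allclose(v, np.round(v)):
            return False
    return True

def cycle(n):
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
    return A

K4 = np.ones((4, 4), dtype=int) - np.eye(4, dtype=int)
print(has_cmip(cycle(5)), has_cmip(cycle(4)), has_cmip(K4))  # prints True False True
```

As expected, the odd cycle $C_5$ and the complete graph $K_4$ pass, while the even cycle $C_4$ does not.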
In addition, there are three other biconnected graphs with 12 vertices known to have the complete maximal identity property, including the 3-diamond ring; however, the search over all biconnected graphs with 12 vertices is too computationally intensive to complete. The observed lack of any other biconnected graphs with an odd number of vertices that possess the complete maximal identity property prompts the following question. \begin{question}\label{question: odd biconnected CMIP} Are the only biconnected graphs with an odd number of vertices that possess the complete maximal identity property cycle graphs and complete graphs? \end{question} \section{Necessary Conditions for the Complete Maximal Identity Property}\label{section: necessary and sufficient} In order to create necessary conditions for the complete maximal identity property, we first make the following definition. \begin{definition}\label{definition:compatible} A vector $c\in\mathbb{Z}^n$ is \textbf{compatible} if for any sink $v_i$ we have \begin{equation*} (c_1,\ldots,c_{i-1},c_{i+1},\ldots,c_n)\in\Delta^{(i)}\mathbb{Z}^{n-1}, \end{equation*} where the vector on the left-hand side will be denoted by $c^{(i)}$. An equivalent definition is that for any choice of sink $v_i$, \begin{equation*} (\Delta^{(i)})^{-1} c^{(i)} \in \mathbb{Z}^{n-1}. \end{equation*} \end{definition} Note that if $a$ and $b$ are compatible, then $a\pm b$ is compatible. The following lemma considers compatible configurations where only a single vertex contains chips. We use $\mathbf{e}_i$ to denote the $i$th standard basis vector, with a 1 in the $i$th coordinate and 0s elsewhere. \begin{lemma}\label{lemma:compatible sum entries} If $c$ is compatible and $s=c_1 + \cdots + c_n$, then $s\mathbf{e}_i$ is compatible for all $i$. \end{lemma} \begin{proof} Notice that if $c\in\mathbb{Z}^n$ is in the image of $\Delta$, then $c$ is compatible.
Since $c=(c_1,\dots,c_n)$ is compatible, we have \[ (c_1,\dots,c_{i-1}, c_{i+1}, \dots, c_n)=\sum_{j\in\{1,\dots,n\}\setminus\{i\}}\alpha_j \Delta^{(i)}_j, \] where $\alpha_j\in\mathbb{Z}$ and $\Delta^{(i)}_j$ denotes the column of $\Delta^{(i)}$ corresponding to vertex $v_j$. Then \[ \left(c_1,\dots,c_{i-1},-\sum_{j\in\{1,\dots,n\}\setminus\{i\}} c_j,c_{i+1},\dots,c_n\right)=\sum_{j\in\{1,\dots,n\}\setminus\{i\}}\alpha_j \Delta_j \] is in the image of $\Delta$ and thus is compatible. Subtracting these two compatible vectors shows that $s\mathbf{e}_i$ is compatible for any $i$, where $s = c_1 + \cdots +c_n$. \end{proof} Clearly, $I=\{d\mid d\mathbf{e}_i\text{ is compatible for all }i\}$ forms an ideal in $\mathbb{Z}$. For a graph $G$ with the complete maximal identity property, we know $k\in I$, where $k=|\Delta'|$, and \[ \sum_{v\in G} (\mathrm{deg}(v)-1) = 2 \cdot \operatorname{size}(G) - |G| \in I, \] where $\operatorname{size}(G)$ denotes the number of edges of $G$, so their greatest common divisor is in $I$. As $\mathbb{Z}$ is a principal ideal domain, for each graph $G$ we have $I=(x)$ for some positive integer $x$. Notice that $x$ is the smallest positive element of $I$. This leads us to our definition of the minimal compatibility number of a graph. \begin{definition}\label{definition: minimal compatibility number} The \textbf{minimal compatibility number} of a graph is the positive integer $x$ such that $(x) = I := \{d\mid d\mathbf{e}_i\text{ is compatible for all }i\}$. \end{definition} \begin{lemma}\label{lemma: minimal compatibility number = 1 >>> tree} The minimal compatibility number of a graph $G$ is $1$ if and only if $G$ is a tree. \end{lemma} \begin{proof} We first prove the only if direction. If the minimal compatibility number of a graph is 1, then every chip configuration is in the integer image of the reduced Laplacian, and thus all chip configurations are equivalent to each other.
Hence, the sandpile group $\mathcal{S}(G)$ is trivial, so by the Matrix-Tree Theorem $G$ must have only 1 spanning tree; in other words, $G$ is a tree itself. For the if direction, a tree has a trivial sandpile group, and thus every chip configuration is equivalent to every other; in particular, each chip configuration is in the integer image of the reduced Laplacian, and thus the minimal compatibility number of the tree is 1. \end{proof} Notice that $d\mathbf{e}_i$ is compatible if, for every sink $s$, the least common denominator of the entries in the column of $(\Delta')^{-1}$ corresponding to vertex $v_i$ divides $d$. Hence, $d\in I$ if the least common denominator of all entries of the inverses of all reduced Laplacians of $G$ divides $d$, and thus the minimal compatibility number of $G$ is this least common denominator. \begin{proposition}\label{proposition: CMIP >>> gcd k sum(msc) > 1} If a non-tree graph $G$ has the complete maximal identity property, then \[\gcd\left(\left|\Delta'\right|,2\cdot\operatorname{size}(G)-|G|\right)>1. \] \end{proposition} \begin{proof} We will prove the contrapositive. If for a non-tree graph $G$, \[ \gcd\left(\left|\Delta'\right|,2\cdot\operatorname{size}(G)-|G|\right)=1, \] then the graph does not have the complete maximal identity property: if it did, then \[ \gcd\left(\left|\Delta'\right|,2\cdot\operatorname{size}(G)-|G|\right)=1\in I, \] so the minimal compatibility number of $G$ is 1, which by \cref{lemma: minimal compatibility number = 1 >>> tree} means $G$ is a tree, yielding a contradiction. \end{proof} \begin{proposition}\label{proposition: CMIP >>> x < n^2} If a graph $G$ has the complete maximal identity property, then the minimal compatibility number $x$ satisfies $x \leq |G|^2 - 2|G|$ for $|G| > 2$, and $x = 1$ if $|G| = 2$.
\end{proposition} \begin{proof} If $|G| = 2$, then $G$ must be the connected graph with two vertices, which is a tree, and thus the result follows from \cref{lemma: minimal compatibility number = 1 >>> tree}. We now assume $|G| > 2$. As \[ 2\cdot \operatorname{size}(G) - |G| \in I, \] we have \[ x \leq 2\cdot\operatorname{size}(G) - |G|, \] as $x$ is the smallest positive element of $I$ and $2\cdot \operatorname{size}(G) - |G| > 0$. The latter holds because $G$ is a connected graph, so $\operatorname{size}(G) \geq |G| - 1$, and thus $2\cdot \operatorname{size}(G) - |G| \geq |G| - 2 > 0$. The maximum value of $\operatorname{size}(G)$ for a graph of fixed order is $\frac{|G|(|G|-1)}{2}$, so \[ x \leq 2\cdot \operatorname{size}(G) - |G| \leq \left(|G|^2 - |G|\right) - |G| = |G|^2 - 2|G|. \] \end{proof} \begin{proposition}\label{proposition: x Kn Cn = n} The minimal compatibility number of both $K_n$ and $C_n$ is $n$ for $n \geq 3$. \end{proposition} \begin{proof} We will show that $n\mathbf{e}_i$ is compatible with an irreducible firing vector (i.e., one whose entries have no nontrivial common factor). The \textbf{firing vector} for a configuration $c$ is the vector $(\Delta')^{-1} c$, where each entry of the vector records how many times the corresponding non-sink vertex must fire, starting from the all-zero configuration, to reach $c$. Note that the firing vector need not have integer entries. As the reduced Laplacian is non-singular by the Matrix-Tree Theorem, the firing vector is well-defined: it is the unique vector of firings taking the all-zero configuration to the configuration $c$.
If $n\mathbf{e}_i$ is compatible and the firing vector is irreducible, meaning all the entries are integers which share no nontrivial common factor, then~$n'\mathbf{e}_i$ for all positive integers $n' < n$ will not have a firing vector with all integer entries; this is because that firing vector can be obtained by multiplying the firing vector for $n\mathbf{e}_i$ by $\frac{n'}{n}$. For the complete graph $K_n$, fire the sink and then backfire $v_i$ to result in $n$ chips at $v_i$. The firing vector consists of $-1$ at all non-sink vertices except $v_i$, which is $-2$. Hence, this is an irreducible firing vector, and $x=n$. For the cycle graph $C_n$, we will first show that $n\mathbf{e}_i$ is compatible for all $i$. Notice that on the cycle graph, one may select a proper connected subgraph (i.e., an ``arc'' of the circle) and fire all of its vertices, resulting in the two end vertices of the arc losing one chip each and the vertices adjacent to them but not in the arc gaining a chip each. This may be repeated for iteratively larger arcs, each time including one more vertex on each end. Doing so enables us to send those chips an arbitrary distance away from the ends of the original arc (as long as there are no self-intersection issues). We will first look at odd $n$. From the sink, give each vertex a single chip by pairing vertices equidistant from the sink, and having the sink fire chips to each such pair. Then, pair vertices equidistant from vertex $v_i$ and fire their chips to vertex $v_i$. This results in vertex $v_i$ having all $n$ chips, with no other vertex having chips. See \cref{figure: minimal compatibility example C7} for an example of the described firing process. \begin{figure} \caption{Example firing process for $C_n$ when $n=7$. The sink is colored green, vertices with 1 chip are colored blue, and vertices with $n=7$ chips are colored purple.} \label{figure: minimal compatibility example C7} \end{figure} Now we will look at even $n$.
If $v_i$ is diametrically opposite the sink, we can send 2 chips from the sink to $v_i$, and as $n$ is even we can do this $\frac{n}{2}$ times to yield $v_i$ having $n$ chips and no other vertex having chips. Otherwise, from the sink, give each vertex except the vertex diametrically opposite the sink a single chip as in the odd case. Then, pair vertices equidistant from vertex $v_i$ except the vertex diametrically opposite $v_i$ and fire their chips to vertex $v_i$ like before. This results in vertex~$v_i$ having $n-1$ chips, the vertex diametrically opposite the sink having $-1$ chips, and the vertex diametrically opposite $v_i$ having 1 chip. Then, fire chips from the sink and the vertex with 1 chip to~$v_i$ and the vertex diametrically opposite the sink, resulting in $v_i$ having $n$ chips and no other vertex having chips. See \cref{figure: minimal compatibility example C6} for an example of the described firing process. \begin{figure} \caption{Example firing process for $C_n$ when $n=6$. The sink is colored green, vertices with 1 chip are colored blue, vertices with $-1$ chips are colored red, vertices with $n-1=5$ chips are colored pink, and vertices with $n=6$ chips are colored purple.} \label{figure: minimal compatibility example C6} \end{figure} Finally, we will show that $n$ is the smallest element in $I$. We will do this by showing the firing vector for the case where the sink and $v_i$ are adjacent is irreducible. Let the sink be vertex 1, with the vertices labeled in order so that $v_i$ is vertex $n$. First, fire vertex 1. Then, fire vertices 1 and 2. After that, fire vertices 1, 2, and 3, and so on, with the final step firing vertices 1 through $n-1$. Notice that each step pushes a chip in the positive direction; the first step sends a chip to vertex 2, the next step moves that chip to vertex 3, and so on. After all these steps, that chip will arrive at vertex $n$. At the same time, each step gives vertex $n$ a chip from the sink, vertex 1. 
Hence, after these $n-1$ steps, vertex $n$ will have received $1 + (n-1) = n$ chips, with no other vertex having chips. During this process, vertex $n-1$ was fired only once, and thus the firing vector is irreducible. \end{proof} \section{The Complete Identity Property}\label{section:CIP} In the language of compatibility, a graph has the complete maximal identity property if and only if $m_G$ is compatible. With this concept, the complete maximal identity property can be generalized by requiring only that there exist a compatible configuration that is recurrent for all sinks, not necessarily $m_G$, as in the following definition. \begin{definition}\label{def:CIP} A graph $G$ is said to have the \textbf{complete identity property} if there exists a chip configuration $c$ on all the vertices such that, for every choice of sink $s$, the configuration $c$ with respect to sink $s$ is the recurrent identity for the sandpile of $G$ at $s$. If $G$ has the complete identity property, we write $c_G$ for the chip configuration on all vertices that gives the recurrent identity for every choice of sink $s$. \end{definition} Note that if a graph $G$ has the complete maximal identity property, then it has the complete identity property. Odd cycle graphs have the complete maximal identity property, but even cycle graphs do not. The generalization of the complete maximal identity property to the complete identity property helps resolve this, as seen in \cref{proposition: CIP even cycle +}. \begin{proposition}\label{proposition: CIP even cycle +} For any positive integer $n$, attaching a single tree of size $1$ to any vertex in the even cycle graph $C_{2n}$ results in a graph with the complete identity property. \end{proposition} \begin{proof} Say $G$ is the graph resulting from attaching a single tree of size 1 to any vertex in $C_{2n}$. Let vertex $v_0$ be the vertex added to the even cycle graph and vertex $v_1$ be the vertex connected to it.
Let the vertices of the cycle graph be numbered~$v_1$ through $v_{2n}$ in clockwise order. Notice that $v_{n+1}$ is diametrically opposite $v_1$. See \cref{subfigure: C6+} for an example of $G$ when $n=3$. We claim the common recurrent identity configuration $c_G$ is the configuration that has 0 chips at $v_0$ and $v_{n+1}$, 2 chips at $v_1$, and 1 chip everywhere else. Let this configuration be denoted by $d_G$ for the proof. We will show that this configuration is the recurrent identity for all of the vertices. See \cref{subfigure: dG C6+} for an example of $d_G$ when $n=3$. \begin{figure} \caption{$G$ when $n=3$} \label{subfigure: C6+} \caption{$d_G$ when $n = 3$. Vertices with 1 and 2 chips are colored blue and purple, respectively.} \label{subfigure: dG C6+} \end{figure} \textbf{Case 1: sink at $v_0$.} To prove that the configuration $d_G$ is recurrent, take any stable configuration and add chips to it to yield the configuration that has 2 more chips than $d_G$ at $v_{n+1}$, which is the configuration resulting from taking the maximal stable configuration and adding 1 chip to~$v_{n+1}$, or $m_G + \mathbf{e}_{n+1}$. From here, we stabilize to reach $d_G$. Vertex $v_{n+1}$ fires, activating $v_{n+1\pm1}$. Vertices~$v_{n+1\pm1}$ fire, activating $v_{n+1}$ again as well as $v_{n+1\pm2}$. This process continues, sending the two chips originally at $v_{n+1}$ to $v_1$, which fires, activating the vertices all the way back to $v_{n+1}$, and then repeats this process again, giving the two chips to $v_0$, thus reaching $d_G$. Hence, $d_G$ is recurrent. Furthermore, using the same reasoning, all the chips in $d_G$ may be moved to $v_0$: as there are no chips at $v_{n+1}$, the chips at $v_{n+1\pm i}$ (chips that are the same distance from $v_1$) may be paired together and moved towards $v_1$, and from there all the chips may be moved to $v_0$ by firing the entire cycle graph as many times as necessary.
\textbf{Case 2: sink at $v_1$.} Follow the same procedure as for sink $v_0$, ending when the two chips reach~$v_1$. \textbf{Case 3: sink at $v_{n+1}$.} Follow a similar procedure, but for the proof of $d_G$ being recurrent, add chips to any stable configuration to yield the configuration that is the maximal stable configuration with two extra chips at $v_1$, and stabilize to yield $d_G$. \textbf{Case 4: sink at $v_i$ for $2 \leq i \leq n$.} To prove that the configuration $d_G$ is recurrent, take any stable configuration and add chips to it to yield the configuration that has 1 more chip than $m_G$ at $v_{n+i}$. Firing $v_0$ immediately after each time $v_1$ fires, the proof of the recurrence of $d_G$ proceeds exactly like the case of the sink at $v_1$, acting on the cycle graph. To prove that $d_G$ is equivalent to the all-zero configuration, pair off vertices equidistant from~$v_i$ in the cycle graph and move their chips to $v_i$, resulting in the configuration with $v_1$ having 1 chip,~$v_{n+1}$ having $-1$ chips, and $v_{n+i}$ having 1 chip. Using this observation, we move the two chips at $v_1$ and $v_{n+i}$ away from each other until the chip from $v_{n+i}$ reaches $v_{n+1}$, and thus the chip at $v_1$ reaches $v_i$, the sink. This results in the all-zero configuration, and thus $d_G$ is the recurrent identity at~$v_i$. \textbf{Case 5: sink at $v_i$ for $n+2 \leq i \leq 2n$.} By symmetry, follow the same procedure as described in Case 4 for $v_{2n + 2 - i}$, where $2 \leq 2n + 2 - i \leq n$. This completes the proof. \end{proof} In order to create a graph from an even cycle graph that has the complete identity property, we needed to add a single tree of size 1 to some vertex. The same phenomenon occurs for complete bipartite graphs, as the following theorem shows.
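The proposition just proved can also be verified computationally for small $n$. The sketch below (Python, with function names of our own choosing) checks recurrence with the standard burning test (add one chip along each edge leaving the sink and stabilize; a stable configuration is recurrent exactly when it returns to itself), a classical criterion not used explicitly in the text, together with the integrality test for equivalence to $\mathbf{0}$:

```python
import numpy as np

def stabilize(c, Dred):
    """Fire active non-sink vertices until none has as many chips as its degree."""
    c, Dred = np.array(c), np.asarray(Dred)
    deg = np.diag(Dred)
    while True:
        active = np.flatnonzero(c >= deg)
        if active.size == 0:
            return c
        c = c - Dred[:, active[0]]

def is_recurrent_identity(c_full, A, sink):
    """Check that c_full (with the sink entry dropped) is recurrent (burning
    test) and equivalent to the all-zero configuration (integrality test)."""
    A = np.asarray(A)
    n = len(A)
    keep = [i for i in range(n) if i != sink]
    L = np.diag(A.sum(axis=1)) - A
    Dred = L[np.ix_(keep, keep)]
    c = np.array([c_full[i] for i in keep])
    burn = A[sink, keep]  # one chip along each edge from the sink
    if not np.array_equal(stabilize(c + burn, Dred), c):
        return False
    v = np.linalg.solve(Dred.astype(float), c.astype(float))
    return bool(np.allclose(v, np.round(v)))

# C_4 with a pendant vertex v_0 attached at v_1 (the proposition with n = 2)
A = np.zeros((5, 5), dtype=int)
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 1)]:
    A[i, j] = A[j, i] = 1
d_G = [0, 2, 1, 0, 1]  # 0 chips at v_0 and v_3, 2 at v_1, 1 at v_2 and v_4
print(all(is_recurrent_identity(d_G, A, s) for s in range(5)))  # prints True
```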
\begin{theorem}\label{theorem: CIP K_mn +} For all positive integers $m,n$, attaching a single tree of size $1$ to any vertex in the complete bipartite graph $K_{m,n}$ results in a graph that has the complete identity property. \end{theorem} \begin{proof} It suffices to show the result for when the additional tree is attached to one of the $m$ vertices, as the same proof would hold with $m$ and $n$ interchanged. Furthermore, we may assume that $m,n\geq 2$, since if either is 1, the resulting graph is a tree and thus has the complete maximal identity property, and hence the complete identity property, by \cref{proposition: CMIP complete odd cycle tree}. Say the additional vertex is $a$, the vertex it is attached to is $b$, the $n$ vertices of one side of the complete bipartite graph compose the set $C$, and the $m-1$ vertices other than $b$ on the side of $m$ vertices compose the set $D$. We claim $c_G$ is the configuration with $a$ having 0 chips, $b$ having $n$ chips, each vertex in $C$ having $m-1$ chips, and each vertex in $D$ having 0 chips. Let this configuration be denoted as $d_G$ for the proof. See \cref{fig:K5_7+} for an example of the graph and the naming convention for the vertices. \begin{figure} \caption{Example of $K_{m,n}$ with an attached tree of size 1, when $m=5$ and $n=7$} \label{fig:K5_7+} \end{figure} We will separate into four cases depending on where the sink is. \textbf{Case 1: sink at $a$.} From any stable configuration add chips to reach the configuration with~$b$ having $n$ chips, each vertex in $C$ having $m-1$ chips, and each vertex in $D$ having $n$ chips. We then stabilize. This results in all vertices in $D$ firing, then all vertices in $C$ firing, then all vertices in $D$ firing again, to result in $b$ having $2n$ chips and $C$ having $2m-3$ chips each.
Then, firing $b$, all vertices in~$C$, and then all vertices in $D$ results in a net loss of 1 chip at $b$, and this occurs as long as $b$ has at least~$n+1$ chips to start the cycle, and the cycle starts with all vertices in $C$ having at least~$m-1$ chips, which it does as $m \geq 2$, so $2m-3 \geq m-1$. Hence, after $n$ iterations of this cycle, we eventually reach $b$ having $n$ chips, all vertices in $C$ having $2m-3$ chips each, and all vertices in~$D$ having 0 chips each. Fire all vertices in $C$ and then all vertices in $D$ to yield all vertices in $C$ having lost 1 chip each, and $b$ having gained $n$ chips. Follow the cycle previously shown to return~$b$ back to having $n$ chips. This process can thus result in a net loss of 1 chip at each vertex in $C$, as long as all vertices in $C$ had at least $m$ chips to start with, giving it at least $m-1$ chips when it enters the other cycle. Cycling this process until it is no longer possible results in $b$ having $n$ chips, all vertices in $C$ having $m-1$ chips each, and all vertices in $D$ having no chips. This is $d_G$, and it is stable. Hence, $d_G$ is recurrent. Now, to prove that $d_G$ is the recurrent identity, fire all vertices in~$C\cup D$ each $m-1$ times to clear all vertices in $C \cup D$ of chips, and then fire $b$, all vertices in $C$, and all vertices in $D$ until $b$ is also clear of chips. \cref{table:Kmn-a} shows a stabilization of this configuration to $d_G$, and then the unrestricted firing sequence that takes $d_G$ to $\mathbf{0}$, tracking the chips at each non-sink vertex. As during the process the vertices in $C$ are indistinguishable, the column for $C$ tracks the number of chips at each of the vertices in $C$. The chips at each vertex in $D$ are similarly tracked. Hence, $d_G$ is the recurrent identity. \begin{table}[htbp!] 
\centering \begin{tabular}{| c | c | c | c || c |} \hline $a$ & $b$ & $C$ & $D$ & Firing step \\ \hline Sink & $n$ & $m-1$ & $n$ & $D,C,D$ fire \\ Sink & $2n$ & $2m-3$ & 0 & $b, C, D$ fire \\ Sink & $2n-1$ & $2m-3$ & 0 & Repeat firing $b,C,D$ a total of $n$ times \\ Sink & $n$ & $2m-3$ & 0 & $C,D$ fire, $b,C,D$ fire $n$ times. Do this $m-2$ times \\ Sink & $n$ & $m-1$ & 0 & Reached $d_G$ \\ \hline \hline Sink & $n$ & $m-1$ & 0 & $C,D$ fire $m-1$ times \\ Sink & $nm$ & 0 & 0 & $b,C,D$ fire $nm$ times \\ Sink & 0 & 0 & 0 & Reached $\mathbf{0}$ \\ \hline \end{tabular} \caption{Firing sequences for proof that $d_G$ is the recurrent identity with sink at $a$} \label{table:Kmn-a} \end{table} \textbf{Case 2: sink at $b$.} From any stable configuration add chips to reach the configuration with~$a$ having 0 chips, each vertex in $C$ having $m-1$ chips, and each vertex in $D$ having $n$ chips. Stabilizing this configuration results in first firing all vertices in $D$, resulting in all vertices in $C$ having $2m-2$ chips each. Then, firing all vertices in $C$ and then all vertices in $D$ results in a net loss of 1 chip at each vertex in $C$, and the cycle works as long as all vertices in $C$ start with at least $m$ chips so that they can fire. Repeating this process, we find that the stabilization of the specified accessible configuration is $d_G$. Hence, $d_G$ is recurrent. Now, to prove that $d_G$ is the recurrent identity, fire all vertices in $C \cup D$ each $m-1$ times to result in the all-zero configuration. \cref{table:Kmn-b} shows the firing sequences that prove $d_G$ is the recurrent identity for sink at $b$. \begin{table}[htbp!] 
\centering \begin{tabular}{| c | c | c | c || c |} \hline $a$ & $b$ & $C$ & $D$ & Firing step \\ \hline 0 & Sink & $m-1$ & $n$ & $D$ fires \\ 0 & Sink & $2m-2$ & 0 & $C, D$ fire repeatedly until no longer possible\\ 0 & Sink & $m-1$ & 0 & Reached $d_G$ \\ \hline \hline 0 & Sink & $m-1$ & 0 & $C,D$ fire $m-1$ times \\ 0 & Sink & 0 & 0 & Reached $\mathbf{0}$ \\ \hline \end{tabular} \caption{Firing sequences for proof that $d_G$ is the recurrent identity with sink at $b$} \label{table:Kmn-b} \end{table} \textbf{Case 3: sink in $C$.} Say the sink is $s_C \in C$, and let $C' = C \setminus \{s_C\}$. From any stable configuration add chips to reach the configuration with $a$ having 0 chips, $b$ having $n$ chips, each vertex in $C'$ having $m$ chips, and each vertex in $D$ having $n$ chips. We fire all vertices in $C'$ to yield~$b$ and each vertex in $D$ having~$2n-1$ chips, all of which are active. Notice that we may fire~$b$, $a$, and all vertices in $D$, and then all vertices in $C'$, as long as $b$ and all vertices in $D$ have at least~$n+1$ chips each, and without any requirement on the starting chips of the vertices in $C'$, as they will each get the $m$ chips they need to fire. This process results in a net loss of 1 chip at each of $b$ and all vertices in $D$. Performing this operation until it is no longer possible, ending with $n$ chips at~$b$ and at each vertex in $D$, we then fire all vertices in $D$ to yield $d_G$. Hence, $d_G$ is recurrent. To prove that $d_G$ is the recurrent identity, notice that~$d_G$ is equivalent to the configuration that preceded it in the proof of its recurrence, which had $n$ chips at each of $b$ and all vertices in $D$; hence, backfiring $D$ results in this configuration. We now backfire $s_C$ a total of $n$ times to clear $b$ and all vertices in $D$ of chips, resulting in the all-zero configuration. \cref{table:Kmn-C} shows the firing sequences that prove $d_G$ is the recurrent identity for sink in $C$. \begin{table}[htbp!]
\centering \begin{tabular}{| c | c | c | c || c |} \hline $a$ & $b$ & $C'$ & $D$ & Firing step \\ \hline 0 & $n$ & $m$ & $n$ & $C'$ fires \\ 0 & $2n-1$ & 0 & $2n-1$ & $b,a,D,C'$ fire repeatedly until no longer possible\\ 0 & $n$ & 0 & $n$ & $D$ fires \\ 0 & $n$ & $m-1$ & 0 & Reached $d_G$ \\ \hline \hline 0 & $n$ & $m-1$ & 0 & Backfire $D$ \\ 0 & $n$ & 0 & $n$ & $s_C$ backfires $n$ times \\ 0 & 0 & 0 & 0 & Reached $\mathbf{0}$ \\ \hline \end{tabular} \caption{Firing sequences for proof that $d_G$ is the recurrent identity with sink in $C$} \label{table:Kmn-C} \end{table} \textbf{Case 4: sink in $D$.} Say the sink is $s_D \in D$, and let $D' = D \setminus \{s_D\}$. From any stable configuration add chips to reach the configuration with $a$ having 0 chips, $b$ having $2n$ chips, each vertex in $C$ having $m-1$ chips, and each vertex in~$D'$ having $n$ chips. Notice that at this state $b$ and all vertices in~$D'$ are active. Furthermore, firing these vertices and then $a$ gives $m-1$ chips to each vertex in $C$, so as long as they each have at least 1 chip, they can fire. This results in $b$ and~$D'$ regaining their chips, and costs each vertex in $C$ a total of $m$ chips, meaning that each vertex in~$C$ has lost one chip in total. We perform this procedure $m-1$ times to result in the configuration with~$2n$ chips at $b$ and $n$ chips at each vertex in $D'$. Firing $b$, $a$, and all vertices in $D'$ then gives~$d_G$. Hence,~$d_G$ is recurrent. To prove that $d_G$ is the recurrent identity, fire $a$ and $b$ to clear $a$ and~$b$ of chips, resulting in each vertex in $C$ having $m$ chips. Backfire the sink $m$ times to result in the all-zero configuration. \cref{table:Kmn-D} shows the firing sequences that prove $d_G$ is the recurrent identity for sink in $D$. \begin{table}[htbp!] \centering \begin{tabular}{| c | c | c | c || c |} \hline $a$ & $b$ & $C$ & $D'$ & Firing step \\ \hline 0 & $2n$ & $m-1$ & $n$ & $b,D',a,C$ fire $m-1$ times.
\\ 0 & $2n$ & 0 & $n$ & $b,D',a$ fire \\ 0 & $n$ & $m-1$ & 0 & Reached $d_G$ \\ \hline \hline 0 & $n$ & $m-1$ & 0 & $a,b$ fire \\ 0 & 0 & $m$ & 0 & $s_D$ backfires $m$ times \\ 0 & 0 & 0 & 0 & Reached $\mathbf{0}$ \\ \hline \end{tabular} \caption{Firing sequences for proof that $d_G$ is the recurrent identity with sink in $D$} \label{table:Kmn-D} \end{table} This completes the proof. \end{proof} By \cref{proposition: CIP even cycle +} and \cref{theorem: CIP K_mn +}, we see that even cycles and complete bipartite graphs both have the property that attaching a single tree of size 1, or a single edge and vertex, to any vertex in the graph results in a graph with the complete identity property. In addition, adding a single edge and vertex to a tree results in a tree, which has the complete maximal identity property by \cref{proposition: CMIP complete odd cycle tree}, and hence the complete identity property. All three of these types of graphs are bipartite. While not all bipartite graphs, or even regular bipartite graphs, have this property (for example, the hypercube in 3 dimensions, which is isomorphic to $C_4 \cp K_2$, where the Cartesian product $\cp$ is defined in \cref{section:Conjectures}, is one such counterexample), a computer search found that all connected graphs on 10 vertices or fewer with this property were bipartite. This motivates the following conjecture. \begin{conjecture}\label{conjecture: CIP+ >>> bipartite} Let $G$ be a connected graph. If for all vertices $v\in G$, attaching a single tree of size $1$ to $v$ results in a graph with the complete identity property, then $G$ is bipartite. \end{conjecture} \section{Conjectures on Graph Products}\label{section:Conjectures} Recall that the Cartesian product, tensor product, and strong product are binary operations on graphs that form a graph whose vertices are ordered pairs of vertices of the two daughter graphs.
For the Cartesian product, denoted $\cp$, two vertices share an edge if in one of the daughter graphs the two vertices share an edge and in the other the vertices are the same. For the tensor product, denoted $\times$, two vertices share an edge if in both daughter graphs the two vertices share an edge. The strong product, denoted $\sp$, is the union of the Cartesian product and the tensor product. See \cref{figure: cp tp sp} for examples of the Cartesian, tensor, and strong products between two copies of the path graph with 3 vertices $P_3$. \begin{figure} \caption{$P_3$} \label{subfigure:P3} \caption{$P_3 \protect\cp P_3$} \label{subfigure:P3cpP3} \caption{$P_3 \times P_3$} \label{subfigure:P3tpP3} \caption{$P_3 \protect\sp P_3$} \label{subfigure:P3spP3} \caption{The Cartesian, tensor, and strong products between two copies of $P_3$} \label{figure: cp tp sp} \end{figure} Investigating the behavior of graph products with respect to the complete maximal identity property, we find that in general, the strong product, Cartesian product, and tensor product all do not preserve the complete maximal identity property. However, we do find some patterns. \begin{proposition}\label{proposition:S-PxP} Let $P_n$ be the path graph with $n$ vertices. The strong product between $P_2$ and $P_k$ has the complete maximal identity property if and only if $k=2$ or $k \equiv 1 \pmod{3}$. \end{proposition} \begin{proof} We will refer to the canonical labelling of the vertices of the graph $P_2 \sp P_k$ as via the ordered pairs $(i,j)$ where $i\in\{0,1\}$ and $j\in\{0,1,\dots,k-1\}$. By symmetry, it suffices to prove the graph has the maximal identity property for sinks with $i=0$ and $j\leq \frac{k-1}{2}$. We will prove the $j=0$ case first. Starting with the maximal stable configuration, incrementally fire all the vertices with second coordinate~$k-1$ until those have no chips, and then fire all the vertices with second coordinate at least~$k-2$ until those have no chips, and so on. 
The resulting configuration has~$4k-6$ chips at each of the two vertices with second coordinate~$1$. By symmetry, in order for the two vertices with second coordinate 1 to end with the same number of chips (eventually~0), the vertices with second coordinate at least 1 must all be fired the same number of times. Fire all of these vertices $k-2$ times. Then the three vertices connected to the sink each have $2k-2$ chips. Fire all non-sink vertices $2k-2$ times to clear the graph of all chips. Now we assume $k\geq3$. For $1<j\leq \frac{k-1}{2}$, we follow a similar process to result in the vertices with second coordinate~$j-1$ having all the chips originally with second coordinate less than $j$, and thus having $4j-2$ chips each. Similarly, the vertices with second coordinate $j+1$ have $4k-4j-6$ chips each. Fire the vertices with second coordinate at least $j+1$ a total of $k-2j-1$ times each so that the vertices with second coordinate $j\pm1$ have $4j-2$ chips each, and $(1,j)$ has $4k-8j$ chips. In order for the vertices with second coordinate not equal to $j$ to all end with 0 chips, they must all be fired the same number of times. Hence, we essentially only have two operations: fire $(1,j)$, or fire all vertices with second coordinate not equal to $j$. From this, we can see that we can fire all the non-sink vertices $4j-2$ times, resulting in all the non-sink vertices having no chips except~$(1,j)$, which has $4k-12j+2$ chips. Now notice that the smallest increment by which we can change the number of chips at that vertex without changing the other vertices is to fire it twice and then fire all the non-sink vertices once, or, in other words, to fire it once and backfire the sink once. This results in a net loss of 6 chips. So for $P_2 \sp P_k$ to have the complete maximal identity property we must have $4k-12j+2\equiv 0 \pmod{6}$ for all values of $j$, or equivalently $k \equiv 1 \pmod{3}$.
If this were not the case, reaching the all-zero configuration would require a fractional number of firings; as the reduced Laplacian is nonsingular, this fractional vector is the unique firing vector that works, and thus the maximal stable configuration is not equivalent to the all-zero configuration. To prove that the other graphs do not have the complete maximal identity property, let $k \not\equiv 1 \pmod{3}$, and use the same process as before to arrive at $(1,j)$ having 2 or 4 chips. Using the fact that the reduced Laplacian is nonsingular, there is a unique firing vector that results in this configuration. However, it does not have integer entries: removing 6 chips from $(1,j)$ corresponds to backfiring all of the vertices with second coordinate not equal to $j$ once and backfiring $(1,j)$ twice, a vector whose entries are not all multiples of 3, so removing 2 or 4 chips would require fractional firings. Hence, the two configurations resulting from $k \not\equiv 1 \pmod{3}$ are not equivalent to the identity, and thus these graphs do not have the complete maximal identity property. \end{proof} After looking at whether $P_i \sp P_j$ has the complete maximal identity property for all values of $i$ and $j$ where~$1 < i,j \leq 100$, we conjecture that the cases presented in \cref{proposition:S-PxP} are the only such graphs with the complete maximal identity property: \begin{conjecture}\label{conjecture:S-PxP} For $i \leq j \in \mathbb{Z}_{>1}$, the only graphs $P_i \sp P_j$ which have the complete maximal identity property are $P_2 \sp P_j$ where $j\equiv 1 \pmod{3}$ or $j=2$, which yields $K_4$. \end{conjecture} A computer program in SageMath verified that the conjecture holds for $1 < i \leq j \leq 100$. Similarly, the following proposition on the Cartesian product was proven. \begin{proposition}\label{proposition:C-KxP} The Cartesian product between $K_4$ and $P_2$ has the complete maximal identity property.
\end{proposition} After looking at whether or not $K_i \cp P_j$ has the complete maximal identity property for all values of $i$ and $j$ where~$1 < i,j \leq 50$, we conjecture that the case presented in \cref{proposition:C-KxP} is the only such graph with the complete maximal identity property. \begin{conjecture}\label{conjecture:C-KxP} For $i,j \in \mathbb{Z}_{>1}$, the only graph $K_i \cp P_j$ which has the complete maximal identity property is $K_4 \cp P_2$. \end{conjecture} Another computer program in SageMath verified that the conjecture holds for $1 < i,j \leq 50$. We provide a proof of the special cases of \cref{conjecture:C-KxP} when $j=2,3$. \begin{proof}[Proof of the $j=2,3$ cases] For $j=2$, let the sink be $(0,0)$. Rather than treating the abscissas $0$ through $i-1$ separately, note that by symmetry all vertices with positive abscissa and a specified ordinate must fire the same number of times (even if this were not true, the solution that follows would yield a non-integer number of firings for each vertex, and as the reduced Laplacian is nonsingular, no other firing vector yields the all-zero configuration, so the proof holds). So, we combine the vertices with the same ordinate and a positive abscissa to yield a weighted cycle graph on 4 vertices. The reduced Laplacian is \begin{equation*} \Delta' = \begin{bmatrix} i & 1-i & 0 \\ 1-i & 2i-2 & 1-i \\ 0 & 1-i & 2i-2 \\ \end{bmatrix}. \end{equation*} Thus, the firing vector $\mathbf{v}$ to reach the maximal stable configuration $\mathbf{c}$ is \begin{equation*} \mathbf{v} =(\Delta')^{-1} \mathbf{c} = (\Delta')^{-1} \begin{bmatrix} i - 1 \\ (i-1)^2 \\ (i-1)^2 \end{bmatrix} = \frac{1}{i+2}\begin{bmatrix} 3i(i-1) \\ (i-1)(3i+2) \\ 2(i-1)(i+1) \end{bmatrix}. \end{equation*} To have the complete maximal identity property, it suffices to show that for this particular sink, $\mathbf{v}$ has integer entries. So \[ i+2 \mid \gcd\left( 3i(i-1), (i-1)(3i+2), 2(i-1)(i+1)\right).
\] We will first analyze $i+2 \mid 3i(i-1)$. Notice that $\gcd(i,i+2)=\gcd(2,i)\mid 2$. We also have~$\gcd(i+2,i-1)=\gcd(i+2,3)\mid 3$, and the explicit factor of $3$ likewise contributes at most $3$. Hence, we find that $i+2 \mid 2\cdot 3^2$. The divisors of $18$ of the form $i+2$ with $i > 1$ give $i = 4$, $7$, and $16$. Verifying that these hold for the other two divisibility criteria, we find only $i=4$ yields the complete maximal identity property. For $j = 3$, we follow the same process for combining vertices. The reduced Laplacian is \begin{equation*} \Delta' = \begin{bmatrix} i & 1-i & -1 & 0 & 0\\ 1-i & 2i-2 & 0 & 1-i & 0\\ -1 & 0 & i+1 & 1-i & 0\\ 0 & 1-i & 1-i & 3i-3 & 1-i\\ 0 & 0 & 0 & 1-i & 2i-2\\ \end{bmatrix}. \end{equation*} The firing vector $\mathbf{v}$ to reach the maximal stable configuration $\mathbf{c}$ is \begin{equation*} \mathbf{v}=(\Delta')^{-1}\mathbf{c} = \frac{1}{(i+1)(i+3)} \begin{bmatrix} 2i(3i-2)(i+3) \\ (3i-2)(2i^2 +6i + 1) \\ 5i^3 + 8i^2 - 6i + 1 \\ 5i^3 + 11i^2 -5i -1 \\ (3i-2)(i^2+3i+1) \\ \end{bmatrix}. \end{equation*} This requires that \[ (i+1)(i+3)\mid \gcd\left( 5i^3 + 8i^2 - 6i + 1, 5i^3 + 11i^2 -5i -1 \right)\] which results in \[ (i+1)(i+3)\mid 3i^2 + i -2. \] With $i+1\mid 3i^2+i-2$, the condition is equivalent to $i+3 \mid 3i-2$, or $i+3 \mid 11$, or $i=8$. But this does not yield a vector with integer entries, so there are no solutions for $j=3$. \end{proof} Chip-firing can be thought of as acting on a simplicial complex, where chips sit on 0-dimensional cells (vertices) and are fired through 1-dimensional cells (edges). This naturally leads to the generalization of chip-firing to higher dimensions, as done in \cite{duval2013critical}. \textbf{Ridge-firing}, as explained in \cite{felzenszwalb2019flow}, works over a polytopal decomposition of $n$-dimensional space, where values are assigned to $(n-1)$-dimensional cells (\textbf{ridges}) and rerouted through $n$-dimensional cells (\textbf{facets}).
The first generalization of chip-firing to ridge-firing is the two-dimensional form of ridge-firing, or \textbf{flow-firing}, which works on a planar graph with \textbf{flow} on 1-dimensional cells (edges), rerouted through 2-dimensional cells (faces). We refer readers to \cite{felzenszwalb2019flow} for detailed background on flow-firing. Flow-firing does not universally display \textbf{global confluence} (meaning that every initial configuration terminates, and terminates at only one stable configuration), or even terminating behavior. However, as demonstrated in \cite{felzenszwalb2019flow}, for a \textbf{conservative} configuration, where the inflow equals the outflow at all vertices, the flow-firing process will terminate; moreover, all conservative configurations yield a face representation, or a representation of the flow via values assigned to each face corresponding to the local clockwise flow around the face. Furthermore, as illustrated in \cite{felzenszwalb2019flow}, when a \textbf{pulse}, or an initial configuration whose flow in face representation is confined to a single face, is placed over a topological hole in a planar graph, the configuration will display global confluence. We extend the idea of conservative initial configurations around a topological hole on planar graphs to allow for multiple topological holes. The flow at a face $F$ is denoted $\operatorname{flow}(F)$. We define a \textbf{pulse} of \textbf{force} $n\in\mathbb{Z}$ to be the conservative initial configuration that has a flow of $n$ on the removed (also referred to as the distinguished) face whose absence forms a topological hole in the planar graph.
\begin{definition}\label{definition: confluence} A (conservative) initial configuration displays \textbf{local confluence} if it satisfies the \textbf{diamond lemma}: that is, if whenever a configuration $c$ reachable from the initial configuration can become either $c_1$ or $c_2$ after one step, there exists a $c_3$ that can be reached in one step from both $c_1$ and $c_2$. A planar graph displays local confluence if every conservative initial configuration on the planar graph displays local confluence. A planar graph displays global confluence if every conservative initial configuration on the planar graph displays global confluence. \end{definition} It is known that an initial configuration that terminates and displays local confluence must display global confluence, and thus a planar graph that displays local confluence must display global confluence as well. We assume that in any conservative initial configuration there are always finitely many faces with nonzero flow. \subsection{Unidirectional Flow-firing}\label{subsection: unidirectional flow-firing} \begin{definition}\label{definition:unidirectional flow-firing} A \textbf{unidirectional flow-firing configuration} is a conservative initial configuration on a planar graph isomorphic to the $\mathbb{N}$ lattice (a 1-dimensional lattice of square faces enumerated uniquely by the nonnegative integers such that two faces share an edge if their corresponding numbers are consecutive). The \textbf{sequence of flow} of a unidirectional flow-firing configuration is the sequence formed by taking the flows of the faces, with the $i$th term (0-indexed) corresponding to the flow of face $i\in \mathbb{N}$. \end{definition} \begin{lemma}\label{lemma: unidirectional flow-firing diamond lemma} A unidirectional flow-firing configuration with nonnegative flow at all faces and with a weakly decreasing sequence of flow satisfies the diamond lemma.
\end{lemma} \begin{proof} First we notice that if the sequence of flow is weakly decreasing, no face $k$ can fire towards the face $k-1$, so if it can fire at all, it can only fire towards $k+1$. Furthermore, if a face $k$ fires towards face $k+1$, the sequence of flow remains weakly decreasing: the flow at face $k$ decreases and the flow at face $k+1$ increases, while the flow at face $k$ was initially at least 2 greater than the flow at face $k+1$, so that boundary remains weakly decreasing. Hence, the faces can only fire in the positive direction. Say we have a configuration $c$ that has (at least) two successors $c_1$ and $c_2$ after one step, meaning that $c$ has at least 2 options to fire. That means for two distinct values of $k$, $\operatorname{flow}(k)-\operatorname{flow}(k+1)\geq 2$. If these two values of $k$, $k_1$ and $k_2$, are at least 2 apart, then their firings do not affect each other, so if $k_1$ fires, then $k_2$ can fire afterwards, satisfying the diamond lemma. If $k_1$ and $k_2$ are consecutive, then firing one will increase the difference of the other, meaning that it still can fire, hence satisfying the diamond lemma. Thus a unidirectional flow-firing configuration with nonnegative flow at all faces and with a weakly decreasing sequence of flow satisfies the diamond lemma. \end{proof} \begin{proposition}\label{prop: unidirectional flow-firing constant to pyramid} A unidirectional flow-firing configuration with constant nonnegative flow $f\in \mathbb{N}$ on the first $n\in \mathbb{N}$ faces followed by 0 flow on all faces onward will stabilize to a weakly decreasing sequence of flow in which, apart from the faces that still have flow $f$, at most one pair of consecutive faces has the same flow and all other consecutive pairs differ by exactly 1. \end{proposition} For example, a unidirectional flow-firing configuration with sequence of flow 6, 6, 6, 6, 6, 0, \dots~will stabilize to 6, 6, 5, 4, 3, 3, 2, 1, 0, \dots.
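The example above can be replayed with a short simulation. This is our own sketch (the helper name is ours): it applies the unidirectional firing rule, moving one unit of flow from face $k$ to face $k+1$ whenever their flows differ by at least 2.

```python
# A sketch: unidirectional flow-firing on the N lattice.  Face k fires
# toward face k+1 whenever flow(k) - flow(k+1) >= 2, moving one unit.

def stabilize_flow(flow):
    flow = list(flow)
    while True:
        if flow[-1] != 0:
            flow.append(0)          # grow the lattice as flow spreads right
        fired = False
        for k in range(len(flow) - 1):
            if flow[k] - flow[k + 1] >= 2:
                flow[k] -= 1        # face k fires toward face k+1
                flow[k + 1] += 1
                fired = True
        if not fired:
            return flow

result = stabilize_flow([6, 6, 6, 6, 6])
while result[-1] == 0:
    result.pop()                    # drop trailing zero-flow faces
print(result)                       # [6, 6, 5, 4, 3, 3, 2, 1]
```

Since the initial sequence is weakly decreasing, only rightward fires are ever enabled, and by the global confluence established above the greedy order used here reaches the unique stabilization.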
\begin{proof} We merely need to show such a configuration is reachable, as the unidirectional flow-firing configuration has local confluence and thus global confluence by \cref{lemma: unidirectional flow-firing diamond lemma}, meaning this stable configuration is the unique stabilization. The configuration is stable, and by incrementally building a pyramid by firing the outermost face of flow $f$, the ending shape is any unused $f$'s, followed by the pyramid, with the additional flow starting from the bottom of the pyramid forming the one pair of consecutive faces with the same flow. \end{proof} \subsection{Multiple Pulses}\label{subsection: multiple pulses} For a planar graph with $n$ holes $h_1, \dots, h_n$ each with a pulse of force $p_1, \dots, p_n$, respectively, the flow-firing process proceeds as if each hole were the only hole in the planar graph, with specific interactions at overlaps of the supports of each pulse. If the holes of two pulses share an edge and are of differing force, then there is a flow through the edge between them, but the edge has no way to fire. This edge thus will not fire, and it need not be taken into consideration for the stability of a flow-firing configuration, but it is critical to maintaining the conservativity of a configuration. \begin{definition}\label{definition: support} The \textbf{support} of a pulse of nonzero force is the union of the faces that, if the location of the pulse were the only hole in the planar graph, would have a nonzero flow through them in the final configuration. The support of a pulse of zero force is the union of the faces that share an edge with the hole of the pulse. \end{definition} The support of any configuration is the set of faces that have nonzero flow through them. There are three \textbf{signs} of pulses: positive, negative, and zero, directly corresponding to the sign of the force of the pulse.
We use the convention that a positive sign is opposite to a negative sign, but a sign of 0 is not opposite to any sign. Note that if exactly one of two signs is zero, then the signs are differing but not opposite. \begin{definition}\label{def:Aztec pyramid} The \textbf{Aztec pyramid} of a nonnegative pulse of force $p$ is the configuration where each undistinguished face $F$ has $\max\{p-d+1,0\}$ flow, where $d$ is the distance of $F$ from the pulse, or the minimal number of edges that must be crossed (one may not cross through vertices) to reach $F$ from the distinguished face. The hole still has $p$ flow. The flow through a face $F$ in the Aztec pyramid configuration of a pulse of force $p$ is denoted by $\operatorname{Aztec}_p(F)$, or simply $\operatorname{Aztec}(F)$ when the pulse is clear from context. The Aztec pyramid of a negative pulse of force $-p$ is the configuration where each face $F$ has flow $-\operatorname{Aztec}_p(F)$. \end{definition} It has been demonstrated in \cite{felzenszwalb2019flow} that the terminating configuration of a pulse with force $p$ is its Aztec pyramid. If none of the supports overlap or share an edge, the flow-firing process will terminate, with the final configuration in each support being the final configuration if the location of the pulse were the only hole in the planar graph, and 0 flow on all the other faces. If any supports share an edge or overlap, the interactions of the flow-firing process are as follows: \begin{itemize} \item The supports of multiple pulses overlap (share a face). \begin{itemize} \item If the supports of any two pulses of differing signs overlap, the flow-firing process will not terminate, as the final configurations of the two individual pulses disagree at at least one face, and each pulse will continue the flow-firing process towards its final configuration.
\item If the overlapping supports' pulses are of the same sign, then the flow-firing process will terminate unless other supports' interactions cause it to fail to terminate. If it terminates, the final flow for any face that is part of multiple supports, say of pulses $p_1$, $p_2$, \dots, $p_n$, is the $\operatorname{Aztec}_{p_i}(F)$ for which $\left|\operatorname{Aztec}_{p_i}(F)\right|$ is maximized among all $1 \leq i \leq n$. \end{itemize} \item The supports of multiple pulses share an edge \begin{itemize} \item If the supports of any two pulses of opposite signs share an edge, the flow-firing process will not terminate as one face containing that edge will have a positive flow and the other a negative flow, meaning that the flow-firing would not be able to terminate, as it would attempt to balance the two faces, but then the pulses would return the face to its previous value. \item If the supports of any two pulses of non-opposite signs share an edge, the flow-firing process will terminate unless other supports' interactions cause it to fail to terminate. The two pulses do not impact each other's supports in any way. \end{itemize} \end{itemize} This should be able to generalize to higher dimensional complexes, using the ridge-firing process as defined in \cite{felzenszwalb2019flow}. If a conservative flow may encompass holes but also faces, then there is no guarantee that the process will have global confluence (for example, take the infinite planar graph of the 2-dimensional lattice of $\mathbb{Z}^2$ with three consecutive face flows of 2 in a line, with a hole at one of the end 2s; the other end 2 will cause a 1 to be outside of the support of the pulse, but it can be in 3 different locations, thus creating at least 3 different final configurations). 
Essentially, if the starting configuration has flow that enables it to put flow outside of all of the pulses' supports, then the process will not have global confluence, as the choices made outside of the supports create different final configurations. The flow-firing process may or may not terminate, depending on the pulses. \subsection{Conservative Flows and a Single Hole}\label{subsection: conservative flows and a single hole} \begin{lemma}\label{lemma: < aztec = global confluence} If the initial configuration has the flow of each face $F$ bounded between 0 and $\operatorname{Aztec}(F)$, then the initial configuration exhibits global confluence. \end{lemma} \begin{proof} At no face can the configuration ever exceed the Aztec pyramid configuration. Because of this, as the pulse must terminate at a configuration no less than the Aztec pyramid at any face, regardless of the order of firing the configuration will terminate at the Aztec pyramid. \end{proof} \begin{proposition}\label{proposition: single > aztec} If the initial conservative flow configuration has nonnegative flow through all faces and either contains exactly one face $F$ in the support of the pulse with flow equal to $\operatorname{Aztec}(F)+1$, or a flow outside the support of magnitude at least 2, with all other faces $F$ having flow no more than $\operatorname{Aztec}(F)$, then the initial conservative flow does not exhibit global confluence. \end{proposition} \begin{proof} In the case that the flow is outside the support, it has at least 2 places that it can flow to, and is thus able to violate global confluence. In the case that the flow is inside the support, either the face is in the same row or column as the hole, or it is not. If it is not, create the Aztec pyramid at all faces except that face by using the hole, and then let that face's flow run down the Aztec pyramid, where it has at least 2 places that it can flow to, and is thus able to violate global confluence.
If the flow is inside the support and in the same row or column as the hole, then without loss of generality it is to the right of the hole (so in the same row). Then the Aztec pyramid in that row, to the right of the face that exceeds the Aztec pyramid configuration, cannot be formed without firing that face. This configuration still violates global confluence: everywhere else the Aztec pyramid can be made, so the extra flow in that face can travel up or down, at which point it may continue down the Aztec pyramid, where it has at least 2 places that it can flow to, violating global confluence. \end{proof} Notice that the first condition in \cref{proposition: single > aztec} cannot be relaxed to merely requiring the existence of a face $F$ in the support of the Aztec pyramid with $\operatorname{flow}(F)>\operatorname{Aztec}(F)$: the initial configuration \begin{center} \begin{tabular}{*7{c}} 0&0&0&1&0&0&0\\ 0&0&\textbf{3}&2&1&0&0\\ 0&1&2&3&2&1&0 \\ \cline{4-4} 1&2&3&\multicolumn{1}{|c|}{3}&3&2&1\\ \cline{4-4} 0&1&2&3&2&1&0\\ 0&0&1&2&1&0&0\\ 0&0&0&1&0&0&0\\ \end{tabular} \end{center} where the hole is the boxed 3, has exactly one face (in bold) with flow greater than its value in the Aztec pyramid configuration. This face can only fire to the left or up, and once it fires in one of these directions, the only remaining option is to fire in the other, resulting in the terminating configuration \begin{center} \begin{tabular}{*7{c}} 0&0&1&1&0&0&0\\ 0&1&1&2&1&0&0\\ 0&1&2&3&2&1&0 \\ \cline{4-4} 1&2&3&\multicolumn{1}{|c|}{3}&3&2&1\\ \cline{4-4} 0&1&2&3&2&1&0\\ 0&0&1&2&1&0&0\\ 0&0&0&1&0&0&0\\ \end{tabular} \end{center} demonstrating that this initial configuration displays global confluence.
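The stabilization arguments in the next subsection repeatedly fire a line of faces in a single direction until the flows form a weakly decreasing sequence with steps of at most 1. As an illustrative sketch (not part of the original argument), the one-dimensional rule can be simulated, assuming that a face may fire one unit of flow toward an adjacent face whose flow is smaller by at least 2:

```python
def stabilize_right(flow):
    """Repeatedly fire one unit of flow from cell i to cell i+1 whenever
    flow[i] >= flow[i+1] + 2, until no such firing is possible.

    This models only the unidirectional (rightward) firing used in the
    stabilization arguments; the full process on the Z^2 lattice allows
    firing toward any of the four adjacent faces.
    """
    flow = list(flow)
    fired = True
    while fired:
        fired = False
        for i in range(len(flow) - 1):
            if flow[i] >= flow[i + 1] + 2:
                flow[i] -= 1
                flow[i + 1] += 1
                fired = True
    return flow

# A row of r faces of constant flow f, with empty faces to the right:
# for r = 2, f = 3 (so r*f = 6 = f*(f+1)/2), the row stabilizes to the
# one-sided Aztec profile 3, 2, 1, 0, ...
print(stabilize_right([3, 3, 0, 0, 0, 0]))
```

Total flow is conserved, and every terminal row is weakly decreasing with steps of at most 1; when $rf \leq \frac{f(f+1)}{2}$ the result is bounded by the Aztec profile $f, f-1, \dots, 1, 0$, matching the counting argument used below.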
\subsection{A Single Pulse of Nontrivial Radius}\label{subsection: single pulse of nontrivial radius} Define a pulse of \textbf{radius} $r\in\mathbb{Z}_{>0}$ and force $f$ to be the initial configuration with flow $f$ through all faces that are a distance at most $r$ away from the hole. \begin{proposition}\label{proposition:Z2 grid pulse radius global confluence} On the $\mathbb{Z}^2$ lattice, a pulse of radius $r$ and force $f$ exhibits global confluence if and only if $|f|\leq 1$ or $r\leq 1$. \end{proposition} \begin{proof} If $|f|\leq 1$, then the pulse is already stable, and thus exhibits global confluence. If $r \leq 1$, then the initial configuration is less than or equal to the Aztec pyramid of the pulse, and thus by \cref{lemma: < aztec = global confluence} the final configuration must be the Aztec pyramid, implying global confluence. If $|f| \geq 2$ and $r \geq 2$, without loss of generality let $f \geq 2$. If $f=2$, the $4r$ outermost faces (those a distance $r$ away from the hole) have multiple options for firing, and once all of them have stabilized (which they will, as there can be at most $4r$ firings), the configuration is stable. Since there were multiple options for the ending stable configuration, the pulse does not display global confluence. We now assume $f \geq 3$. We will show that for $r \leq \frac{f+1}{2}$, the pulse does not display global confluence. We will first show that there exists a firing sequence that turns the pulse of radius $r$ into a configuration less than or equal to the Aztec pyramid, so that the Aztec pyramid is a possible terminating configuration. We first prove that there is a way to make the squares directly to the right of the hole less than the Aztec pyramid without changing the flow through any of the other faces.
We note that, starting from the square immediately to the right of the hole and continuing right, in order for the final configuration to be stable, the flows through those faces must form a weakly decreasing sequence starting at $f$ and ending at 0, where each decreasing step can only go down by 1. Hence, any stable configuration on those faces must have at least $f+(f-1)+\cdots+2+1+0=\frac{f(f+1)}{2}$ total flow. We fire the faces directly to the right of the hole towards the right until they, by themselves, are stable relative to each other. As the flow is conserved, there is a total of $rf$ flow. But if $rf\leq \frac{f(f+1)}{2}$, or equivalently $r \leq \frac{f+1}{2}$, then once this row stabilizes it must be less than or equal to the Aztec pyramid. We apply this firing sequence similarly to the faces directly above, below, and to the left of the hole. Furthermore, the same logic applies to the faces with flow off the axes; for example, the faces in quadrant I can all fire to the right. In that case, at $y$-coordinate $n$, the total flow would be $rf-nf$, whereas the total flow in the Aztec pyramid would be $\frac{f(f+1)}{2}-f-(f-1)-\cdots-(f-n+1) \geq \frac{f(f+1)}{2}-nf$, so in those cases the firing results in a configuration less than or equal to the Aztec pyramid once stabilized (this is because such a weakly decreasing line of faces exhibits local confluence, and one can make the faces build the Aztec pyramid and stop when the flow runs out). Hence, there exists a firing sequence that takes the pulse of radius $r$ into a configuration less than or equal to the Aztec pyramid, and thus the Aztec pyramid is a possible terminating configuration. Now we will show that there exists a successor of the pulse of radius $r$ that has flow outside of the support of the Aztec pyramid. Stabilize the pulse of radius $r$, except do not fire $(1,1)$.
The resulting configuration on all of the faces except $(1,1)$ must be greater than or equal to the Aztec pyramid. If it is greater than the Aztec pyramid, then there must be flow outside of the support of the Aztec pyramid. If it is equal to the Aztec pyramid, then by Proposition \ref{proposition: single > aztec} there exist terminating configurations whose support is a strict superset of the support of the Aztec pyramid. \begin{figure} \caption{Example diagram for $f=7$, $r=6$.} \label{fig:Prop8-8Ex} \end{figure} We will now prove the $r > \frac{f+1}{2}$ cases by considering the parity of $f$. For odd flow $f$, fire the faces with $x=\pm 1$ in the positive $y$ half of the lattice upwards. By Proposition \ref{prop: unidirectional flow-firing constant to pyramid}, this will use the uppermost $\frac{f-1}{2}$ squares of flow $f$ and create a pyramid going from $f-1$ at $(\pm 1, r - \frac{f-1}{2})$ to 1 at $(\pm 1, r + \frac{f-1}{2}-1)$. The two squares $(\pm 1,r)$ have flow $\frac{f-1}{2}$. If $f\geq 5$, then the square $(0,r)$ with $f$ flow may fire to the left and right once, allowing those flows to cascade upwards and reach the squares $(\pm 1, r + \frac{f-1}{2})$. The squares from $(0,r - \frac{f-1}{2})$ to $(0,r-1)$, still with flow $f$, along with the square $(0,r)$ with flow $f-2$, will by Proposition \ref{prop: unidirectional flow-firing constant to pyramid} form a pyramid from $(0,r - \frac{f-1}{2})$ to $(0, r + \frac{f-1}{2})$ starting with $f-1$ flow and going down to 1 flow, with the double flow at flow $f-2$. Hence, this configuration will have positive flow at the three squares from $(-1,r + \frac{f-1}{2})$ to $(1,r + \frac{f-1}{2})$.
However, if we merely fire those three columns purely upwards and then outwards to the left and right as necessary, the squares $(\pm 1, r + \frac{f-1}{2})$ will have 0 flow through them, as the vertical firing by Proposition \ref{prop: unidirectional flow-firing constant to pyramid} will only yield a pyramidal flow from flow $f-1$ at $(\pm 1, r - \frac{f-1}{2})$ to flow 1 at $(\pm 1, r + \frac{f-1}{2} - 1)$. From this configuration it is possible for flow to never reach those squares; if the other columns are fired upwards initially, then each column at $x=\pm a$ for $a\leq r - \frac{f-3}{4}$ will be able to reach $(\pm a, r + \frac{f-1}{2} - a)$, but no further, as afterwards any firing is only to the left or right, meaning it is not possible to have flow go through $(\pm 2, r + \frac{f-1}{2})$ and then inwards to $(\pm 1, r + \frac{f-1}{2})$ (alternatively, if the same firing process is done on the lower half of the lattice firing downwards, the flow configuration is now less than or equal to the Aztec pyramid with $r + \frac{f-1}{2}$ flow at the hole at the origin, so that by the proof of Lemma \ref{lemma: < aztec = global confluence}, flow cannot reach the squares $(\pm 1, r + \frac{f-1}{2})$, which are outside of the Aztec pyramid's support). Hence for odd flow $f \geq 5$ with $r \geq \frac{f+1}{2}$, the pulse of flow $f$ and radius $r$ does not have global confluence. For flow $f=3$, there is not sufficient flow for the square $(0,r)$ to fire both to the left and right, but there is sufficient flow to fire to one of the two, without loss of generality to the right, and the proof still holds, as that one unit of flow will reach $(1,r)$, which is not possible when all three columns are fired upwards first. This argument works as long as the hole does not interfere with the aforementioned process, which is the case when $r \geq \frac{f+1}{2}$. Combined with the proof for $r \leq \frac{f+1}{2}$, this completes the proof for odd $f$.
For even flow $f\geq 4$, fire the faces with $x=\pm 1$ in the positive $y$ half of the lattice upwards. By Proposition \ref{prop: unidirectional flow-firing constant to pyramid}, this will use the uppermost $\frac{f}{2}$ squares of flow $f$ and create a pyramid going from $f-1$ at $(\pm 1, r - \frac{f}{2})$ to 1 at $(\pm 1, r + \frac{f}{2}-1)$, with the double flow at flow $\frac{f}{2}$ at $(\pm 1, r-1)$ and $(\pm 1, r)$. The two squares $(\pm 1, r)$ have flow $\frac{f}{2}$, with the pyramid free of double flows onward from that point. If $f\geq 6$, the square $(0,r)$ can fire to the left and then the right, and the proof proceeds similarly to the case of odd flow $f \geq 5$ above. If $f=4$, there is only sufficient flow at $(0,r)$ to fire either to the left or to the right but not both, and the proof proceeds similarly to the case of odd flow $f=3$. This argument works as long as the hole does not interfere with the aforementioned process, which is the case when $r > \frac{f}{2}$. Combined with the proof for $r \leq \frac{f+1}{2}$, this completes the proof for even $f$, and hence the proof of the proposition. \end{proof} \end{document}
\begin{document} \title[ON GENERALIZATIONS OF THE DST AND KOH'S CONJECTURE] {GENERALIZATIONS OF THE DIRECT SUMMAND THEOREM OVER UFD-S FOR SOME BIGENERATED EXTENSIONS AND AN ASYMPTOTIC VERSION OF KOH'S CONJECTURE} \author[Danny de Jes\'us G\'omez-Ram\'irez]{Danny A. de Jes\'us G\'omez-Ram\'irez} \author[Edisson Gallego]{Edisson Gallego} \author[Juan D. Velez]{Juan D. V\'elez} \address{Vienna University of Technology, Institute of Discrete Mathematics and Geometry, Wiedner Hauptstra{\ss}e 8-10, 1040, Vienna, Austria.} \address{University of Antioquia, Calle 67 \# 53-108, Medell\'in, Colombia} \address{Universidad Nacional de Colombia, Escuela de Matem\'aticas, Calle 59A No 63 - 20, N\'ucleo El Volador, Medell\'in, Colombia.} \email{[email protected]} \email{[email protected]} \email{[email protected]} \begin{abstract} This article deals with two different problems in commutative algebra. In the first part we give a proof of generalized forms of the Direct Summand Theorem (DST, or DSC) for module-finite extension rings of mixed characteristic $R\subset S$ satisfying the following hypotheses: the base ring $R$ is a unique factorization domain of mixed characteristic zero, and $S$ is generated by two elements which satisfy either radical quadratic equations, or general quadratic equations under certain arithmetical restrictions. In the second part of this article, we discuss an asymptotic version of Koh's Conjecture, giving a model-theoretical proof using \textquotedblleft non-standard methods\textquotedblright . \end{abstract} \maketitle \noindent Mathematical Subject Classification (2010): 13B02, 54D80 \noindent Keywords: ring extension, splitting morphism, discriminant, ultrafilter, non-principal ultrafilter.\footnote{This paper should be cited essentially as follows: E. Gallego, D. A. J. G\'omez-Ram\'irez and J. D. V\'elez. The Direct Summand Conjecture for some bi-generated extensions and an asymptotic version of Koh's Conjecture.
Beitraege zur Algebra und Geometrie (Contributions to Algebra and Geometry) 57(3), 697-712, 2016.} \section*{Introduction} The Homological Conjectures have been a focus of research activity since Jean-Pierre Serre introduced the theory of multiplicities in the early 1960s (\cite{serrelocal}), and since the introduction of characteristic $p$ methods in commutative algebra by Peskine, Szpiro, and M. Hochster in the mid 1970s \cite{hochsterhomological}, \cite{Peskine}. These conjectures relate the homological properties of a commutative ring to certain invariants of the ring structure, such as its Krull dimension and its depth. They have been settled for equicharacteristic rings (i.e., rings for which the characteristic of the ring coincides with that of its residue field), but many remain open in mixed characteristic \cite{homological conjectures old and new}. Their validity in mixed characteristic, nonetheless, is known for rings of Krull dimension less than four. Among these conjectures the \textit{Direct Summand Conjecture} occupies a central place, implying, or actually being equivalent to, many of the other conjectures \cite{hochstercanonical}, \cite{ohidirsumcon}, \cite{robertsdirsumcon}, \cite{mcculloughthesis}.\newline \textbf{Direct Summand Conjecture (DSC):} Let $R\subset S$ be a module-finite extension of noetherian rings (i.e., $S$, regarded as an $R$-module, is finitely generated) where $R$ is assumed to be regular. Then the inclusion map $R\subset S$ splits as a map of $R$-modules; equivalently, there is a retraction $\rho :S\rightarrow R$ from $S$ into $R$. By a retraction we mean an $R$-linear homomorphism satisfying $\rho (1)=1.$ The central role of the DSC in Commutative Algebra, as well as its relation to the other homological conjectures, is comprehensively explained in \cite{hochsterhomological}. Now, the Direct Summand Conjecture (or D. S.
Theorem) was proved in the general setting by Yves Andr\'{e}, essentially by proving the (remaining) case of unramified complete regular local rings within the framework of perfectoid spaces developed by Scholze \cite{andre}, \cite{bhatt}. The problem of showing the existence of a retraction $\rho :S\rightarrow R$ may be reduced to the case where $R$ and $S$ are complete local domains \cite{hochsterhomological}. Therefore, one may assume that $(R,\mathfrak{m})$ is, in particular, a unique factorization domain (UFD) (\cite{eisen}, page 483). If the Krull dimension of $S$ is $d$, one can always choose a system of parameters for $S$ contained in $\mathfrak{m}$. By a \emph{system of parameters} in an arbitrary local ring $(S,\mathfrak{n})$ of Krull dimension $d$ we mean a sequence of elements $\{x_{1},...,x_{d}\}$ such that the radical of the ideal they generate in $S$ is precisely $\mathfrak{n}$, the unique maximal ideal of $S$ (\cite{eisen}, page 222). It can be proved then that the DSC holds if and only if for any system of parameters in $S$ and any natural number $t>0$, the \textit{socle element} $(x_{1}\cdots x_{d})^{t}$ is not contained in the ideal of $S$ generated by the $(t+1)$-st powers of the parameters; that is, if $(x_{1}\cdots x_{d})^{t}\notin (x_{1}^{t+1},...,x_{d}^{t+1})S.$ This last statement is known as the \textit{Monomial Conjecture}. More precisely:\newline \textbf{Monomial Conjecture (MC):} Let $(S,\mathfrak{n})$ be any local noetherian ring of dimension $d$ and let $\{x_{1},...,x_{d}\}$ be a system of parameters for $S$. Then, for any positive integer $t$, $(x_{1}\cdots x_{d})^{t}\notin (x_{1}^{t+1},...,x_{d}^{t+1})S.$ This conjecture is equivalent to the DSC \cite{hochstercontracted}. In low dimension, that is, for rings of Krull dimension $\leq 2$, the DSC, and consequently the MC, follows as a consequence of the existence of a \emph{normalization} for $S$.
By mapping $S$ into its normalization one may assume that $S$ is a \emph{normal} domain, and, for dimensional reasons, a Cohen-Macaulay ring (\cite{eisen}, pages 118, 420). Then the Auslander-Buchsbaum formula (\cite{eisen}, page 469) implies that $S$ must have projective dimension zero. That is, $S$ must be a free $R$-module, and consequently the inclusion map automatically splits. On the other hand, the general equicharacteristic case of characteristic zero, i.e., the case in which $R$ contains a field of characteristic zero, is handled by elementary methods: the trace map from the fraction field of $S$ to the fraction field of $R$ provides a natural retraction. In fact, the inclusion map splits as a map of $R$-modules under the much weaker hypothesis of $R$ being a normal domain \cite[Lemma 2]{hochstercontracted}. If $R$ is equicharacteristic, but contains a field of characteristic $p>0$, a classical argument given by Hochster (actually, a precursor of his Tight Closure Theory) shows the validity of the MC by a method in which the properties of the iterated powers of the Frobenius map are exploited in a clever way. It should be remarked that the existence of \textit{Big Cohen-Macaulay Modules and Big Cohen-Macaulay Algebras} for equicharacteristic rings immediately implies the validity of the MC and the DSC \cite{homological conjectures old and new}, \cite{CM}. In the mixed characteristic case it is known that the DSC holds for regular rings $R$ of Krull dimension $\leq 3.$ The dimension three case was proved quite recently by R. Heitmann, by means of a rather involved combinatorial argument \cite{heitmann2002}, \cite{robertsdirsumcon}. This is, undoubtedly, one of the most significant advances in commutative algebra of the last decades.
As mentioned before, the DSC would follow from the existence of Big Cohen-Macaulay Modules or Big Cohen-Macaulay Algebras in mixed characteristic, an open problem in dimensions greater than three \cite{homological conjectures old and new}. A natural generalization of the DSC was proposed by J. Koh in his doctoral dissertation \cite{thesis koh}. Koh's question replaces the condition of $R$ being regular by the weaker condition of $S$ having finite projective dimension as a module over $R$.\newline \textbf{Koh's Conjecture:} Let $R$ be a noetherian ring, and let $R\subset S$ be a module-finite extension such that $S$, regarded as an $R$-module, has finite projective dimension. Then there is a retraction from $S$ into $R$. It is known that many theorems fail when one weakens the hypothesis of $R$ being regular and replaces it by the condition that $R$ is just noetherian, or even a Cohen-Macaulay or a Gorenstein complete local domain (see Definition \ref{goresnstein}), while imposing the condition that the corresponding $R$-modules have finite projective dimension (if $R$ is regular this hypothesis is satisfied automatically, due to Serre's Theorem; \cite{eisen}, Chapter 19). For instance, \textit{the Rigidity of Tor} is no longer true in this context \cite{heitmann}. In a similar manner, the positivity of the intersection multiplicity $\chi _{R}(M,N)$ for modules $M,N$ of finite projective dimension over $R$, when $R$ is not regular, is no longer valid \cite{hocster-Mclaughing}. Notwithstanding, Koh's conjecture is true for rings of equal characteristic zero. Unfortunately, it turned out to be false for equicharacteristic rings of prime characteristic, as well as in the case of mixed characteristic \cite{velezsplitting}. The fact that this conjecture is true in characteristic zero suggests that it may be true \textquotedblleft asymptotically\textquotedblright .
By an asymptotic version we mean the following: given any bound $b>0$ for the \textquotedblleft complexity\textquotedblright\ of the extension, a notion we will define in a precise manner in (\ref{complexity}), the set $S_{b}$ of prime numbers that occur as characteristics of counterexamples of complexity bounded by $b$ must be \emph{finite} (Theorem \ref{muymuybueno}). We will prove this asymptotic form for rings that are localizations at prime ideals of affine $k$-algebras, where $k$ is an algebraically closed field. We achieve this by first formulating \emph{Koh's Conjecture} as a first order sentence in the language of rings and algebraically closed fields. Then we give a proof via \textit{Lefschetz's Principle}. A main reference for the model theoretical methods involved is \cite{Libro schoutens}; also \cite{bounds} may be consulted for a more succinct account of \textquotedblleft nonstandard methods in commutative algebra\textquotedblright .\newline In the first part of this article we provide a proof of the DSC for module-finite extensions of rings $R\subset S$ satisfying certain conditions. We will assume $R$ to be a UFD, where the most interesting case is when $R$ is a ring of mixed characteristic zero. On the other hand, $S$ will be a module-finite extension generated as an $R$-algebra by two elements satisfying either radical quadratic equations (Theorem \ref{dany 1}) or general quadratic equations, under certain arithmetical restrictions (Theorem \ref{nonradicalsplit}). In the second part of this article we discuss an asymptotic version of Koh's Conjecture. We will develop a model theoretical approach using \textquotedblleft non-standard methods\textquotedblright\ similar to those developed by H. Schoutens \cite{bounds}. The main result of this section is Theorem \ref{muymuybueno}.
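To fix ideas, here is a concrete instance of the bigenerated quadratic situation studied in the first part. This is an illustrative example, not taken from the text: take $R=\mathbb{Z}$, $s_{1}=\sqrt{2}$, $s_{2}=2\sqrt{2}$, so $a_{1}=2=1^{2}\cdot 2$ and $a_{2}=8=2^{2}\cdot 2$, i.e., $d=1$, $c=2$, $u=2$, which is exactly the shape of coefficients forced by Corollary \ref{CDU} when $T$ fails to be a domain. The relevant factorization can be checked mechanically, e.g.\ with SymPy:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# Illustrative instance (not from the text): a1 = 2 = 1^2 * 2, a2 = 8 = 2^2 * 2,
# i.e. d = 1, c = 2, u = 2, the coefficient shape of Corollary CDU.
f1 = x**2 - 2          # satisfied by s1 = sqrt(2)
f2 = y**2 - 8          # satisfied by s2 = 2*sqrt(2)

# (y - 2x)(y + 2x) = y^2 - 4x^2 reduces to 0 modulo (f1, f2),
# so T = Z[x,y]/(f1, f2) has zero divisors and is not a domain.
prod = sp.expand((y - 2 * x) * (y + 2 * x))
rem = sp.rem(sp.rem(prod, f1, x), f2, y)
print(rem)  # 0

# A retraction nonetheless exists: map T onto Z[z]/(z^2 - 2) by
# x -> z, y -> 2z, then project onto the constant term; both relations
# f1 and f2 are killed by this map.
g = z**2 - 2
assert sp.rem(f1.subs(x, z), g, z) == 0
assert sp.rem(f2.subs(y, 2 * z), g, z) == 0
```

The map $x\mapsto z$, $y\mapsto 2z$ is a sketch of the homomorphisms $\psi _{r}$ constructed in Section 1, and composing with the projection onto the constant term gives the desired retraction onto $\mathbb{Z}$.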
\textit{All rings will be commutative, with identity element }$1$\textit{, and all modules will be assumed to be unitary.} \section{The DSC for some radical quadratic extensions} \subsection{Some reductions\label{reducciones}} Let $R$ be a UFD, and let us denote by $L$ its fraction field. Let $S$ be a module-finite extension of $R$ such that $S$ is generated as an $R$-algebra by two elements $s_{1}$ and $s_{2}$ that satisfy monic polynomials $f_{1}(x)$ and $f_{2}(x)$ in $R[x]$, respectively. Let us first see that, without loss of generality, we may assume that $f_{1}(x)$ and $f_{2}(x)$ have degree greater than one. For if one of them, for instance $f_{1}(x)$, had degree one, then the element $s_{1}$ would already be in $R$. In this case $S=R[s_{2}]$. By mapping $C=R[x]/(f_{2}(x))$ onto $S$ we can represent $S$ as a quotient of the form $C/J$, where $J$ is an ideal of $C$ of height zero. This is because the Krull dimensions of $S$ and $C$ are the same and equal to the Krull dimension of $R$, since both rings are module-finite extensions of $R$ (\cite{kunz}, Corollary 2.13, page 47). \label{kunzito} Thus, $J$ would be contained in some minimal prime $P$ of $C$. Since $R[x]$ is also a UFD, $P$ is generated by a monic prime factor $p(x)$ of $f_{2}(x)$, hence $J\subset (p(x))$. But in order to find an $R$-retraction from $S$ into $R$ it suffices to find any retraction \textquotedblleft further above\textquotedblright , $\rho :C/P\rightarrow R$. This is because the composition of the canonical map $S=C/J\rightarrow C/P$ with $\rho$ then provides a retraction from $S$ into $R$. But the map $R\rightarrow C/P\simeq R[x]/(p(x))$ splits, since $R[x]/(p(x))$ is free as an $R$-module, and consequently $\rho$ can be taken as the projection onto $R$. Let us then assume that the degrees of $f_{1}$ and $f_{2}$ are greater than one, so that $S$ is minimally generated as an $R$-algebra by $s_{1},s_{2}\in S$.
Set $T=R[x_{1},x_{2}]/I$, with $I=(f_{1}(x_{1}),f_{2}(x_{2}))$, where $f_{1}(x_{1})$ and $f_{2}(x_{2})$ are monic polynomials for $s_{1}$ and $s_{2}$, respectively. It is easy to see that $T$ is a free $R$-module, because $T\cong R[x_{1}]/(f_{1})\otimes _{R}R[x_{2}]/(f_{2})$. In fact, an $R$-basis for $T$ consists of monomials of the form \begin{equation} B=\{\overline{x}_{1}^{d_{1}}\overline{x}_{2}^{d_{2}}\text{, with }0\leq d_{i}<\deg f_{i}\}. \label{basis} \end{equation} Let $\varphi :T\rightarrow S$ be the surjective $R$-homomorphism defined by sending $x_{i}$ to $s_{i}$. Let $J$ denote its kernel, so that $S\cong T/J$, where the height of $J$ must be zero, and therefore $J$ must be contained in some minimal prime of $T$. The next lemma analyzes the case when $T$ turns out to be a domain, i.e., when $J=(0)$. In this case the existence of a retraction $\rho :S\rightarrow R$ follows automatically, since $S=T$ is free as an $R$-module. In what follows $i$ and $j$ denote a pair of indices $1\leq i,j\leq 2$, with $i\neq j$. \begin{lemma} \label{domaincriteria}Let $R$, $T$, $f_{1}$, $f_{2}$ be as above. Let $E_{i}=L[x_{i}]/(f_{i})$ and $F_{j}=E_{i}[x_{j}]/(f_{j})$. Then $T$ is a domain if and only if both $E_{i}$ and $F_{j}$ are fields. That is, if and only if $f_{i}$ is irreducible in $L[x_{i}]$ and $f_{j}$ is irreducible in $E_{i}[x_{j}]$. \end{lemma} \begin{proof} First, we observe that $L\otimes _{R}T\cong L[x_{1},x_{2}]/I\cong E_{i}[x_{j}]/(f_{j})=F_{j}$ and that the natural homomorphism $\mu :T\hookrightarrow L\otimes _{R}T$ is an injection. This is because $T$ is a torsion-free $R$-module, since $R$ is a domain and $T$ is a free module. Therefore, $T$ is a subring of $F_{j}$, and if $F_{j}$ is a field, $T$ must be a domain. This gives the \textquotedblleft if\textquotedblright\ part of the lemma. Conversely, let us assume that $T$ is a domain. Arguing by contradiction, let us suppose that either $E_{i}$ or $F_{j}$ is not a field.
In the first case there are monic polynomials of positive degree, $g_{1}$ and $g_{2}$ in $L[x_{i}]$, such that $f_{i}=g_{1}g_{2}$ with $\mathrm{deg}(g_{s})<\mathrm{deg}(f_{i})$. Now, let $\alpha \in R\smallsetminus \{0\}$ be a common denominator for the coefficients of $g_{1}$ and $g_{2}$. The equality $\alpha ^{2}f_{i}=(\alpha g_{1})(\alpha g_{2})$ in $R[x_{i}]$ implies that the $\alpha g_{s}$ are zerodivisors in $T$. Besides, $\alpha g_{s}=\alpha \overline{x}_{i}^{\mathrm{deg}(g_{s})}+\cdots$ cannot be zero, because $\alpha g_{s}$, written in the $R$-basis (\ref{basis}) of $T$, has at least one coefficient different from zero. Therefore, $T$ would not be a domain, a contradiction. Then we may assume that $E_{i}$ is a field. \newline On the other hand, if $f_{j}$ were reducible over $E_{i}[x_{j}]$ we could write $f_{j}=h_{1}h_{2}$, where $h_{1},h_{2}\in E_{i}[x_{j}]$ are monic polynomials of degree less than $\mathrm{deg}(f_{j})$. Let us choose $\widetilde{h}_{1},\widetilde{h}_{2}$ in $L[x_{1},x_{2}]$ such that $\psi (\widetilde{h}_{s})=h_{s}$, $s=1,2$, where $\psi :L[x_{i},x_{j}]\rightarrow E_{i}[x_{j}]$ is the natural homomorphism induced by the projection map $L[x_{i}]\rightarrow E_{i}$. In fact, we can choose each $\widetilde{h}_{s}$, considered as a polynomial in $(L[x_{i}])[x_{j}]$, such that each of its coefficients in $L[x_{i}]$ is a polynomial in $x_{i}$ with degree less than $\mathrm{deg}(f_{i})$. Hence, there exists $\widetilde{h}_{3}$ in $L[x_{1},x_{2}]$ such that $f_{j}-\widetilde{h}_{1}\widetilde{h}_{2}=\widetilde{h}_{3}f_{i}$. Choose any nonzero element $c\in R$ such that $c\widetilde{h}_{r}\in R[x_{1},x_{2}]$, for $r=1,2,3$. Then we have that $c\widetilde{h}_{1}c\widetilde{h}_{2}=c^{2}f_{j}-c(c\widetilde{h}_{3})f_{i}\in I$, and the classes of $c\widetilde{h}_{1}$ and $c\widetilde{h}_{2}$ in $T$ must be different from zero. Thus, $T$ would not be a domain, which is a contradiction.
This proves $f_{j}$ must be irreducible over $E_{i}[x_{j}].$ \end{proof} \begin{corollary} \label{CDU}Let $R$ be a UFD whose field of fractions $L$ has characteristic different from two. Assume that $f_{i}=x_{i}^{2}-a_{i}$ are irreducible polynomials in $L[x_{i}]$, $i=1,2$. If $T=R[x_{1},x_{2}]/(f_{1},f_{2})$ is not a domain, then there exist nonzero elements $c,d,u$ in $R$ such that $a_{1}=d^{2}u$ and $a_{2}=c^{2}u$, where $c,d$ are relatively prime. \end{corollary} \begin{proof} Since $T$ is not a domain, by Lemma \ref{domaincriteria} we may assume without loss of generality that one of the polynomials $f_{i}$, for instance $f_{2}(x_{2})$, is reducible in $E_{1}[x_{2}]$. But this is equivalent to saying that $f_{2}(x_{2})$ has a root $e\in E_{1}$, which we may write as $e=e_{1}+e_{2}\overline{x_{1}}$, where $e_{1},e_{2}\in L$ and $\overline{x_{1}}^{2}=a_{1}$. Hence \begin{equation*} a_{2}=e^{2}=(e_{1}^{2}+e_{2}^{2}a_{1})+2e_{1}e_{2}\overline{x_{1}}. \end{equation*} Then $a_{2}=e_{1}^{2}+e_{2}^{2}a_{1}$ and $2e_{1}e_{2}=0$. But $\mathrm{char}(L)\neq 2$ implies $e_{1}e_{2}=0$. If $e_{2}=0$ then $a_{2}=e_{1}^{2}$ and therefore $f_{2}=(x_{2}+e_{1})(x_{2}-e_{1})$, contradicting the irreducibility of $f_{2}$. Thus, $e_{1}=0$ and $a_{2}=e_{2}^{2}a_{1}$. Now write $e_{2}=c/d$, where $c,d\neq 0$ are relatively prime elements in $R$. So $d^{2}a_{2}=c^{2}a_{1}$; since $c$ and $d$ are relatively prime, so are $c^{2}$ and $d^{2}$, and consequently $d^{2}$ divides $a_{1}$. Hence, there is $u\in R$ such that $a_{1}=d^{2}u$. Replacing $a_{1}$ in $d^{2}a_{2}=c^{2}a_{1}$ gives the equation $d^{2}a_{2}=c^{2}d^{2}u$. After dividing by $d^{2}\neq 0$ we obtain $a_{2}=c^{2}u$, which proves the corollary. \end{proof} \begin{lemma} \label{minimalprimesI}Let $R$ be a UFD, let $B=R[x,y]$, and let $u,c,d$ be nonzero elements in $R$. Define $f_{1}=x^{2}-d^{2}u$, $f_{2}=y^{2}-c^{2}u$, and set $I=(f_{1},f_{2})$, the ideal in $B$ generated by $f_{1}$ and $f_{2}$. Assume that $u$ is not a square in $L$ (equivalently, that $f_{1}$ is irreducible over $L$).
Assume that $\{c,d\}$ is a regular sequence in $R$ (\cite{eisen}, page 173). Then the minimal prime ideals of $I$ are $P_{r}=(f_{1},f_{2},f_{3,r},f_{4,r})$, where $f_{3,r}=dy+(-1)^{r}cx$, $f_{4,r}=xy+(-1)^{r}cdu$, $r=0,1$. \end{lemma} \begin{proof} Let us introduce a new variable $z$, and let $g(z)=z^{2}-u$ in $R[z].$ Clearly $g$ has no roots in $R$, since $f_{1}$ is irreducible. Therefore, $g$ is also irreducible, and consequently the ideal $(g)$ is prime in $R[z]$; this is because $R$ is a UFD. Define $\psi _{r}:B\rightarrow R[z]/(g)$ as the unique $R$-homomorphism that sends $x$ to $d\overline{z}$ and $y$ to $(-1)^{r+1}c\overline{z}$, where $\overline{z}$ denotes the class of $z$ in $R[z]/(g).$ We prove that $\mathrm{ker}(\psi _{r})=P_{r}$. First, we observe that $P_{r}\subset \mathrm{ker}(\psi _{r})$, since: \begin{equation*} \psi _{r}(f_{1})=d^{2}\overline{z}^{2}-d^{2}u=d^{2}u-d^{2}u=0, \end{equation*} \begin{equation*} \psi _{r}(f_{2})=c^{2}\overline{z}^{2}-c^{2}u=0, \end{equation*} \begin{equation*} \psi _{r}(f_{3,r})=\psi _{r}(dy+(-1)^{r}cx)=d(-1)^{r+1}c\overline{z}+(-1)^{r}cd\overline{z}=0, \end{equation*} \begin{equation*} \psi _{r}(f_{4,r})=\psi _{r}(xy+(-1)^{r}cdu)=(-1)^{r+1}cd\overline{z}^{2}+(-1)^{r}cdu=0. \end{equation*} To prove that $\mathrm{ker}(\psi _{r})\subset P_{r}$ we use the well known fact that in a polynomial ring in one variable with coefficients in a commutative ring the ordinary division algorithm holds, as long as the divisor is a monic polynomial. This justifies the following procedure: Let $h(x,y)$ be a polynomial in $\ker (\psi _{r})$. If we regard $h(x,y)$ as a polynomial in the variable $x$ with coefficients in $R[y]$, then, dividing by $f_{1}=x^{2}-d^{2}u$, we can write $h(x,y)$ as \begin{equation} h(x,y)=f_{1}q(x,y)+q_{1}(y)x+q_{0}(y), \label{ee1} \end{equation} where $q(x,y)\in B$ and $q_{0}(y),q_{1}(y)\in R[y]$.
Similarly, after dividing $q_{0}(y)$ by $f_{2}=y^{2}-c^{2}u$ we may find $q_{2}(y)\in R[y]$ and elements $a_{1},a_{2}$ in $R$ such that \begin{equation} q_{0}(y)=f_{2}q_{2}(y)+(a_{1}y+a_{2}). \label{ee2} \end{equation} Likewise, there exist polynomials $q_{3}(y)$, $q_{4}(y)$ in $R[y]$ and $b_{1},b_{2}\in R$ such that \begin{equation} q_{1}(y)x=f_{4,r}q_{3}(y)+q_{4}(y)+b_{1}x+b_{2}. \label{ee3} \end{equation} Dividing $q_{4}(y)$ by $f_{2}$ we obtain: \begin{equation} q_{4}(y)=q_{5}(y)f_{2}+e_{1}y+e_{2}, \label{ee4} \end{equation} for a certain polynomial $q_{5}(y)$ and certain elements $e_{1},e_{2}$ in $R$. Replacing equations (\ref{ee2}), (\ref{ee3}) and (\ref{ee4}) in (\ref{ee1}) we can write $h(x,y)$ as \begin{equation*} h(x,y)=f_{1}q(x,y)+f_{4,r}q_{3}(y)+(q_{5}(y)+q_{2}(y))f_{2}+l(x,y)\text{, } \end{equation*} where $l(x,y)$ is the linear polynomial \begin{equation*} l(x,y)=v_{1}y+b_{1}x+v_{2}, \end{equation*} with $v_{1}=e_{1}+a_{1}$ and $v_{2}=e_{2}+a_{2}+b_{2}$. From the condition $\psi _{r}(h)=0$ we get: \begin{equation*} \psi _{r}(h)=\psi _{r}(l)=v_{1}(-1)^{r+1}c\overline{z}+b_{1}d\overline{z}+v_{2}=0. \end{equation*} Since $R[z]/(g)$ is a free $R$-module with basis $\{1,\overline{z}\}$ we then must have: \begin{equation} (-1)^{r+1}v_{1}c+b_{1}d=0\text{ and }v_{2}=0. \label{ee5} \end{equation} But $\{c,d\}$ is a regular sequence in $R$; hence there must be some $w\in R$ such that $b_{1}=wc$. Since $R$ is a domain, we obtain from (\ref{ee5}) that $v_{1}=(-1)^{r}wd$, and consequently \begin{equation*} l(x,y)=v_{1}y+b_{1}x=w((-1)^{r}dy+cx)=w(-1)^{r}f_{3,r}. \end{equation*} This shows $h\in P_{r}$. Clearly $P_{0}$ and $P_{1}$ are prime ideals of $B$, because $B/P_{r}$ is an integral domain, being isomorphic to a subring of $R[z]/(g)$. Finally, let us show that $P_{0}$ and $P_{1}$ are the only minimal primes of $I.$ Let $Q$ be any other minimal prime over $I$.
Then
\begin{equation*}
c^{2}f_{1}-d^{2}f_{2}=-(dy-cx)(cx+dy)\in Q,
\end{equation*}
so $f_{3,1}=dy-cx\in Q$ or $f_{3,0}=cx+dy\in Q$. Let us suppose $f_{3,1}\in Q.$ Hence
\begin{equation*}
-x(cx-dy)=dxy-cx^{2}=dxy-cd^{2}u=d(xy-cdu)\in Q.
\end{equation*}
Similarly,
\begin{equation*}
y(cx-dy)=cxy-dy^{2}=cxy-dc^{2}u=c(xy-cdu)\in Q.
\end{equation*}
We claim that $f_{4,1}=xy-cdu$ is an element of $Q$. If we suppose otherwise, then, since $Q$ is prime, the elements $c,d$ would be contained in $Q$. Then $x^{2}=f_{1}+d^{2}u$ and $y^{2}=f_{2}+c^{2}u$ would also be contained in $Q,$ and consequently $x,y$ would be elements of $Q$. Therefore, the ideal $(c,d,x,y)B$ would be contained in $Q$. But each generator of $P_{1}$ lies in $(c,d,x,y)B,$ and consequently $P_{1}\subset Q$. This inclusion must be proper, since $x\notin P_{1}$ (every monomial containing $x$ in the generators of $P_{1}$ is either quadratic, $x^{2}$ or $xy$, or linear of the form $cx$, but $x\notin (x^{2},xy,cx)B,$ since $1\notin (x,y,c)B$). This contradicts the minimality of $Q$ and proves the claim. Thus $Q$ must contain $f_{1},f_{2},f_{3,1},f_{4,1},$ and consequently $P_{1}=Q.$ The second case, when $f_{3,0}=cx+dy\in Q,$ can be treated in a similar fashion; in that case we obtain $P_{0}=Q$.
\end{proof}

\begin{theorem}
\label{dany 1}Let $R\subset S$ be a module-finite extension of noetherian rings such that $R$ is a UFD and such that the characteristic of the fraction field $L$ of $R$ is different from two. Suppose $S$ is generated as an $R$-algebra by two elements $s_{1},s_{2}\in S$ satisfying monic radical quadratic polynomials $f_{1}=x_{1}^{2}-a_{1}$ and $f_{2}=x_{2}^{2}-a_{2}$, respectively. Then $R\subset S$ splits.
\end{theorem}

\begin{proof}
By Going Up (\cite{eisen}, page 129) there exists a prime ideal $Q\subset S$ that contracts to zero in $R$. Therefore, we may replace $S$ by $S/Q$ without altering the relevant hypotheses.
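The generator computations in the proof of Lemma \ref{minimalprimesI} are purely mechanical, so they can be double-checked symbolically. The following SymPy sketch (ours, not part of the argument) verifies that $\psi_{r}$ annihilates $f_{1},f_{2},f_{3,r},f_{4,r}$ after reducing modulo $g=z^{2}-u$, and confirms the factorization of $c^{2}f_{1}-d^{2}f_{2}$ used above:

```python
# SymPy sanity check (ours, not part of the proof): psi_r annihilates the
# generators of P_r, and c^2*f1 - d^2*f2 = -(dy - cx)(cx + dy).
import sympy as sp

x, y, z, c, d, u = sp.symbols('x y z c d u')

f1 = x**2 - d**2*u
f2 = y**2 - c**2*u

for r in (0, 1):
    f3 = d*y + (-1)**r*c*x
    f4 = x*y + (-1)**r*c*d*u
    # psi_r sends x -> d*z and y -> (-1)^(r+1)*c*z
    image = {x: d*z, y: (-1)**(r + 1)*c*z}
    for gen in (f1, f2, f3, f4):
        # reduce modulo g = z^2 - u; the remainder must vanish in R[z]/(g)
        _, rem = sp.div(gen.subs(image).expand(), z**2 - u, z)
        assert rem == 0

# the factorization used to locate the minimal primes over I
assert sp.expand(c**2*f1 - d**2*f2 + (d*y - c*x)*(c*x + d*y)) == 0
```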
As observed in Section \ref{reducciones}, it suffices to find a retraction from $S/Q$ onto $R$. Hence, we may reduce to the case where $S$ is a domain. As observed in that same section, we can write $S$ as a quotient $T/J$, where $T=R[x_{1},x_{2}]/I$, $I=(f_{1},f_{2}),$ and $J\subset T$ is an ideal of height zero. Moreover, we can assume each $f_{i}$ to be irreducible in $R[x_{i}]$; otherwise, as noticed in that section, the splitting of $R\subset S$ would follow immediately from the fact that $S$ would be a free $R$-module. On the other hand, if $T$ is a domain then $J=0,$ and so $S=T$ would be a free $R$-module (of rank four) and $R\subset S$ would also split. If, on the contrary, $T$ is not a domain, then by Corollary \ref{CDU} there exist nonzero elements $c,d,u$ of $R$ such that $a_{1}=d^{2}u$ and $a_{2}=c^{2}u,$ where $c,d$ are relatively prime. Since $R$ is a UFD, this implies that $\{c,d\}$ is a regular sequence in $R$. Then Lemma \ref{minimalprimesI} implies that the minimal primes of $T$ are precisely $P_{0}/I$ and $P_{1}/I.$ Then $J\subset P_{r}/I,$ for some $r=0,1$. Hence, if $\alpha :T/J\rightarrow T/(P_{r}/I)$ is the natural map, $\psi _{r}^{\prime }:T/(P_{r}/I)\rightarrow R[z]/(z^{2}-u)$ is the $R$-isomorphism induced by $\psi _{r}$ (as in the proof of Lemma \ref{minimalprimesI}), and $\pi _{1}:R[z]/(z^{2}-u)\rightarrow R$ is the $R$-module projection on the first component, then a retraction for $R\subset S$ is given by $\rho =\pi _{1}\circ \psi _{r}^{\prime }\circ \alpha $. This proves the theorem.
\end{proof}

\begin{theorem}
\label{dany 2}Let $R\subset S$ be a module-finite extension of noetherian rings such that $R$ is a UFD, and such that the characteristic of the fraction field $L$ of $R$ is different from two. Let us assume that the element $2$ is a unit in $R,$ and that $S$ is generated as an $R$-algebra by two elements $s_{1},s_{2}\in S$ satisfying monic quadratic polynomials $f_{1}=x^{2}-ax+b$ and $f_{2}=y^{2}-cy+d$, respectively.
Then $R\subset S$ splits.
\end{theorem}

\begin{proof}
We can reduce to the radical quadratic case, as in Theorem \ref{dany 1}, by \textquotedblleft completing the squares\textquotedblright . That is, define rings
\begin{equation*}
T^{\prime }=R[u,v]/(u^{2}+b-a^{2}/4,\text{ }v^{2}+d-c^{2}/4)
\end{equation*}
and
\begin{equation*}
T=R[x,y]/(x^{2}-ax+b,\text{ }y^{2}-cy+d).
\end{equation*}
Let $\psi :T^{\prime }\rightarrow T$ be the $R$-algebra isomorphism that sends the variable $u$ to $x-a/2$ and the variable $v$ to $y-c/2$. We already know that $S$ is isomorphic to a quotient of $T$; hence it is also isomorphic to a quotient of $T^{\prime }$. Then we may choose new generators of $S$ as an $R$-algebra satisfying monic radical quadratic equations: namely, the images of the classes $\overline{u}$ and $\overline{v}$. Thus, by Theorem \ref{dany 1}, the extension $R\subset S$ must split.
\end{proof}

\section{The DSC for some nonradical quadratic extensions}

Our next goal is to prove the following result.

\begin{theorem}
\label{nonradicalsplit}Let $R$ be a UFD such that the characteristic of the fraction field $L$ of $R$ is different from two. Let $R\subset S$ be a module-finite extension such that $S$ is minimally generated as an $R$-algebra by elements $s_{1},s_{2}\in S$. Let us assume $f(s_{1})=g(s_{2})=0$, where $f(x)=x^{2}-ax+b$ and $g(y)=y^{2}-cy+d,$ for some $a,b,c,d\in R$. If $\mathrm{\gcd }(2,c)=1$ and $a^{2}-4b$ is square free, then $R\subset S$ splits.
\end{theorem}

In order to prove this theorem we need the following lemma:

\begin{lemma}
\label{anetrior}Let $R$ be a UFD such that the characteristic of the fraction field $L$ of $R$ is different from two. Let $T=R[x,y]/(f(x),g(y))$, where $f(x)=x^{2}-ax+b$ and $g(y)=y^{2}-cy+d,$ for some $a,b,c,d\in R$. Suppose that $\mathrm{\gcd }(2,c)=1$, that the discriminant $a^{2}-4b\neq 0$ of $f(x)$ is square free in $R$, and that $f(x)$ is irreducible.
If $T$ is not a domain, then there exists $e\in R$ such that $(c\pm ae)/2\in R$. In this case the minimal primes of $T$ are $P_{1}=(\overline{h}_{1})$ and $P_{2}=(\overline{h}_{2})$, where $\overline{h}_{1}$ and $\overline{h}_{2}$ are the classes in $T$ of the polynomials $h_{1}(x,y)=y-ex-(c-ae)/2$ and $h_{2}(x,y)=y+ex-(c+ae)/2.$
\end{lemma}

\begin{proof}
If we assume that $T$ is not a domain, then, by Lemma \ref{domaincriteria}, $g$ must be reducible in $E[y]$, where $E$ denotes the field $L[x]/(f(x))$. It is clear that $E$, as a field, is isomorphic to the extension field $L(u^{1/2})$, for $u=a^{2}-4b$. Since $g(y)$ is a reducible monic quadratic over $E$, it has a root $\gamma =\alpha +\beta u^{1/2}$ in $E$. But one can verify directly that the conjugate $\overline{\gamma }=\alpha -\beta u^{1/2}$ is also a root of $g$. Thus, $g=(y-\gamma )(y-\overline{\gamma })$. By comparing coefficients we get $\alpha ^{2}-\beta ^{2}u=d$ and $c=2\alpha $. Hence, $4d=c^{2}-4\beta ^{2}u$. Let us write $\beta =q/r$, for $q,r\in R$ such that $\mathrm{\gcd }(q,r)=1$. From $4d=c^{2}-4\beta ^{2}u$ we obtain that $4r^{2}d=r^{2}c^{2}-4q^{2}u$. Then, $4(r^{2}d+q^{2}u)=r^{2}c^{2}$. This implies that $4\mid r^{2}c^{2}$; but $\mathrm{\gcd }(2,c)=1$, therefore $4\mid r^{2},$ and so $2\mid r$. Write $r=2t$, for some $t\in R\smallsetminus \{0\}$. Thus,
\begin{equation}
4(r^{2}d+q^{2}u)=4t^{2}c^{2}.  \label{eee}
\end{equation}
After dividing equation (\ref{eee}) by $4$ we obtain $4t^{2}d+q^{2}u=t^{2}c^{2}$, or, equivalently, $t^{2}(c^{2}-4d)=q^{2}u$. From this, it follows that $t^{2}\mid q^{2}u$, which in turn implies $t^{2}\mid u$, because $\mathrm{\gcd }(t,q)=1$. But $u$ is square free, therefore $t$ must be a unit. Then $q/t\in R$, and if we let $e=q/t$, the equation $t^{2}(c^{2}-4d)=q^{2}u$ can be rewritten as:
\begin{equation}
c^{2}-4d=e^{2}(a^{2}-4b).  \label{eq1}
\end{equation}
We will now prove that $2\mid (c\pm ae)$.
In fact, suppose that $2=\prod p_{i}^{n_{i}}$, and that $c+ae=\prod p_{i}^{m_{i}}$ and $c-ae=\prod p_{i}^{k_{i}}$ are factorizations into powers of prime elements. By allowing some exponents to be zero we may assume each product involves the same primes. We shall see that $n_{i}\leq \mathrm{min}(m_{i},k_{i})$, for all $i$. From (\ref{eq1}) it follows that
\begin{equation*}
(c-ea)(c+ea)=4(d-e^{2}b)\in 4R.
\end{equation*}
This implies that $2n_{i}\leq m_{i}+k_{i}$. Arguing by contradiction, suppose there is $j$ such that $n_{j}>\mathrm{min}(m_{j},k_{j})$. Without loss of generality, we may assume $m_{j}=\mathrm{min}(m_{j},k_{j})$. Hence, $n_{j}\leq k_{j}$; otherwise $2n_{j}>m_{j}+k_{j}$, which is a contradiction. Therefore, $p_{j}^{n_{j}}\mid (c-ae)$; since also $p_{j}^{n_{j}}\mid 2\mid 2ae$, we get
\begin{equation*}
p_{j}^{n_{j}}\mid (c-ae)+2ae=c+ae,
\end{equation*}
which means that $n_{j}\leq m_{j}$. Thus, $n_j\leq \mathrm{min}(m_{j},k_{j})$, a contradiction. Summarizing, $n_{i}\leq \mathrm{min}(m_{i},k_{i})$ for all $i$. Hence, $2\mid (c\pm ae)$. Now, let us see that $P_{1}=(\overline{h}_{1})$ and $P_{2}=(\overline{h}_{2})$ are the minimal primes of $T$. Using the fact that $((c-ea)/2)((c+ea)/2)=d-e^{2}b$, we see by a direct computation that
\begin{equation}
h_{1}h_{2}=g-e^{2}f.  \label{eeee}
\end{equation}
Hence, any minimal prime in $T=R[x,y]/(f,g)$ must contain either $\overline{h}_{1}$ or $\overline{h}_{2}$. On the other hand, $R[x,y]/(f,g,h_{1})\cong R[x]/(f),$ since we can eliminate the variable $y$ using $h_{1}(x,y)=y-ex-(c-ae)/2,$ by sending $y$ into $ex+(c-ae)/2$. In a similar fashion, using $h_{2}(x,y)=y+ex-(c+ae)/2$ we see that $R[x,y]/(f,g,h_{2})\simeq R[x]/(f)$. Since $R[x]/(f(x))$ is a domain, $P_{1}=(\overline{h}_{1})$ and $P_{2}=(\overline{h}_{2})$ must be prime ideals of $T$, and since each minimal prime of $T$ must contain either $\overline{h}_{1}$ or $\overline{h}_{2},$ these must be the only minimal primes.
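The product of the two conjugate linear factors of $g$ over $R[x]/(f)$ can also be double-checked symbolically. In the SymPy sketch below (ours, not part of the proof) the relation (\ref{eq1}) is imposed by eliminating $d$; note that the two factors necessarily carry opposite signs on the $ex$ term, since the roots of $g$ in $E$ sum to $c$:

```python
# SymPy sanity check (ours): given c^2 - 4d = e^2(a^2 - 4b), the conjugate
# factors y - ex - (c-ae)/2 and y + ex - (c+ae)/2 multiply to g - e^2*f.
import sympy as sp

x, y, a, b, c, d, e = sp.symbols('x y a b c d e')

f = x**2 - a*x + b
g = y**2 - c*y + d

h1 = y - e*x - (c - a*e)/2
h2 = y + e*x - (c + a*e)/2

# impose relation (eq1) by eliminating d: d = (c^2 - e^2(a^2 - 4b))/4
d_val = (c**2 - e**2*(a**2 - 4*b))/4

diff = sp.expand((h1*h2 - (g - e**2*f)).subs(d, d_val))
assert diff == 0
```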
\end{proof}

Now we are ready to prove Theorem \ref{nonradicalsplit}.

\begin{proof}
As we observed in Section \ref{reducciones}, we may assume $S$ is minimally generated as an $R$-algebra by certain elements $s_{1}$ and $s_{2}$ whose corresponding monic polynomials $f(x)$ and $g(y)$ over $R$ have degree greater than one. As we observed in that same section, this implies that $f(x)$ is irreducible. We know that $S$ can be represented as a quotient of the form $T/J$, where $T=R[x,y]/(f(x),g(y))$ and $J\subset T$ is an ideal of $T$ of height zero. If $T$ is a domain, then $J=0$ and $S=T$ is a free $R$-module, so $R\subset S$ splits. Otherwise, by Lemma \ref{anetrior}, $J\subset P_{1}=(\overline{h}_{1})$ or $J\subset P_{2}=(\overline{h}_{2}),$ the minimal prime ideals of $T$ defined in Lemma \ref{anetrior}, with $h_{1}(x,y)=y-ex-(c-ae)/2$ and $h_{2}(x,y)=y+ex-(c+ae)/2.$ We can obtain the desired retraction $\rho :S\rightarrow R$ as the composition of the following natural chain of $R$-homomorphisms:
\begin{equation*}
S=T/J\rightarrow T/P_{j}\overset{\varphi }{\rightarrow }R[x]/(f(x))\rightarrow R\oplus R\overline{x}\overset{\pi _{1}}{\rightarrow }R,
\end{equation*}
where $\varphi $ is the $R$-homomorphism determined by sending $x$ to $x$ and $y$ to $y-h_{j}$ (a linear polynomial in $x$ alone), and $\pi _{1}$ is the canonical projection onto $R$.
\end{proof}

\section{An asymptotic form of Koh's conjecture}

\subsection{\textbf{Lefschetz's Principle}}

In this second part of the article we give an \textit{asymptotic} formulation of Koh's conjecture, as well as its proof. Let us recall that this conjecture states that if $R$ is a Noetherian ring and $R\subset S$ is a module-finite extension of rings such that the projective dimension of $S$ as an $R$-module is finite, then there exists a retraction $\rho :S\rightarrow R$.
The asymptotic form of Koh's conjecture is the following: given any bound $b>0$ for the \textquotedblleft complexity\textquotedblright\ of the extension (see Definition \ref{complexity}), the set $S_{b}$ of prime numbers $p$ such that there is a counterexample of complexity at most $b$ in characteristic $p$ must be \emph{finite}. We will prove this asymptotic form for rings that are localizations at prime ideals of affine $k$-algebras, where $k$ is an algebraically closed field. We refer to such rings by the shorter name of \emph{local $k$-algebras}.

Let us begin by recalling Lefschetz's principle:

\begin{theorem}[Lefschetz's Principle]
\label{lef-fuerte} Let $\phi $ be a sentence in the language of rings. The following statements are equivalent.

\begin{enumerate}
\item The sentence $\phi $ is true in an algebraically closed field of characteristic zero.

\item There exists a natural number $m$ such that for every $p>m$, $\phi $ is true in every algebraically closed field of characteristic $p$.
\end{enumerate}
\end{theorem}

\begin{proof}
See \cite{modelteory}, Corollary 2.2.10, page 42.
\end{proof}

We state without proof one of the cornerstones of Model Theory, the \emph{Compactness Theorem}. This theorem guarantees the existence of a model for an $\mathcal{L}$-theory $T$ (we say in this case that $T$ is \emph{satisfiable}) if and only if there exists a model for each finite subset of $T$.

\begin{theorem}[Compactness Theorem]
Suppose $T$ is an $\mathcal{L}$-theory. Then, $T$ is satisfiable if and only if every finite subset of $T$ is satisfiable.
\end{theorem}

As a consequence of the Compactness Theorem one can readily deduce the following proposition (\cite{modelteory}, page 42).

\begin{proposition}
\label{tranferencia fuerte} Let $\phi $ be a first order sentence which is true in every field $k$ of characteristic zero. Then, there exists a prime number $p_{0}$ such that $\phi $ is true in each field $F$ of characteristic $q$, for $q>p_{0}$.
\end{proposition}

\section{Codes for polynomial rings and modules}

Throughout this discussion we will fix a field $k$ and a monomial order in the polynomial ring $A=k\left[ x_{1},\ldots ,x_{n}\right] $.

\begin{definition}
\label{complexity}Let $R$ be a finitely generated $k$-algebra, and let $I$ be an ideal of $k\left[ x_{1},\ldots ,x_{n}\right] .$ We will say that:

\begin{enumerate}
\item The ideal $I$ has \textbf{complexity at most} $d$ if $n\leq d$ and it is possible to choose generators $f_{1},\ldots ,f_{s}$ for $I$ with $\deg f_{i}\leq d,$ for $i=1,\ldots ,s$.

\item We say $R$ has \textbf{complexity at most} $d$ if there is a presentation of $R$ as $k\left[ x_{1},\ldots ,x_{n}\right] /I$, with $I$ of complexity at most $d$.

\item If $J\subset R$ is an ideal, we will say that $J$ \textbf{has complexity at most} $d$ if $R$ has complexity at most $d$ and there exists a lifting $J^{\prime }$ of $J$ in $k\left[ x_{1},\ldots ,x_{n}\right] $ with complexity at most $d$.

\item If $R$ is a local $k$-algebra, we say it has \textbf{complexity at most} $d$ if $R$ can be written as $R=\left( k\left[ x_{1},\ldots ,x_{n}\right] /I\right) _{p},$ for some prime ideal $p\subset k\left[ x_{1},\ldots ,x_{n}\right] /I$ such that the complexities of $k\left[ x_{1},\ldots ,x_{n}\right] /I$ and $p$ are at most $d$.

\item If $M$ is any finitely generated $R$-module, we will say that $M$ has \textbf{complexity at most} $d$ if $R$ is a $k$-algebra of complexity at most $d$ and there exists an exact sequence $R^{t}\overset{\Gamma }{\rightarrow }R^{s}\rightarrow M\rightarrow 0$, with $s,t\leq d$, where all the entries of the matrix $\Gamma $ are polynomials (or quotients of polynomials, in the local case) of degree at most $d$.

\item Let $M\subset R^{d}$ be an $R$-submodule.
We will say that $M$ has \textbf{degree type at most }$d$ (written $gt\left( M\right) \leq d$) if the complexity of $R$ is at most $d$ and $M$ is generated by $d$-tuples with all their entries of degree at most $d.$ If $M$ is a finitely generated $R$-module, we will say that $M$ has \textbf{complexity degree at most }$d$ if there exist submodules $N_{2}\subset N_{1}\subset R^{d},$ both of degree type at most $d,$ such that $M\cong N_{1}/N_{2}$.
\end{enumerate}
\end{definition}

Now, for any polynomial $f\in A$ we will denote by $a_{f}$ the tuple of all the coefficients of $f$, listed according to the fixed order. When the complexity of an ideal $I$ is at most $d$ and $I=(f_{1},\ldots ,f_{s})$, then $I$ can be encoded by the tuple $a_{I}$ that consists of all the coefficients of the polynomials $f_{i}$. It is not difficult to see that the length of this tuple depends only on $d$ \cite{bounds}. On the other hand, given one of those tuples $a$ we can always reconstruct the ideal it comes from, an ideal we shall denote by $\mathcal{I}\left( a\right) $. Similarly, if $R$ is a $k$-algebra with complexity at most $d$ then $R$ can be written as $k\left[ x_{1},\ldots ,x_{n}\right] /\mathcal{I}\left( a\right) .$ We will write this fact as $R=\mathcal{R}\left( a\right) $. Let $M$ be an $R$-module. If the complexity degree of $M$ is at most $d$ then the minimal number of generators of $M$ is bounded as a function of $d$. Hence $M$ can be encoded by a tuple $v=\left( n_{1},n_{2}\right) ,$ where $n_{1}$ is a code for $N_{1}$ and $n_{2}$ is a code for $N_{2}.$ We will write this as $M\cong \mathcal{M}\left( v\right) $ (see \cite{bounds}). Finally, if $\phi (\xi )$ is a formula with free variable $\xi $ and parameters from a ring $R$, then by $a\in |\phi |_{R}$ we will mean $R\models \phi (a)$ (\cite{modelteory}, Definition 1.1.6). The proof of the following theorem may be found in \cite{bounds}, Remark 2.3, and \cite{Libro schoutens}, Theorem 4.4.1, page 59.
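Before stating the theorem, a toy illustration of these codes may help: fix $n$ and $d$, order the monomials of total degree at most $d$ once and for all, and read a polynomial off as its coefficient tuple. The sketch below is ours (all helper names are hypothetical and the monomial order is an arbitrary fixed choice), not a construction from \cite{bounds}:

```python
# Hypothetical illustration of encoding a polynomial of bounded degree by its
# coefficient tuple a_f, relative to a fixed monomial order.
import itertools
import sympy as sp

def monomials(vars_, d):
    """All exponent tuples of total degree <= d, in a fixed order."""
    n = len(vars_)
    exps = [e for e in itertools.product(range(d + 1), repeat=n) if sum(e) <= d]
    return sorted(exps, reverse=True)  # fixed once and for all, independent of f

def encode(f, vars_, d):
    """The tuple a_f of coefficients of f; its length depends only on n and d."""
    terms = dict(sp.Poly(f, *vars_).terms())  # exponent tuple -> coefficient
    return tuple(terms.get(m, sp.Integer(0)) for m in monomials(vars_, d))

def decode(a, vars_, d):
    """Reconstruct the polynomial from its code."""
    return sp.expand(sum(cf * sp.prod([v**e for v, e in zip(vars_, m)])
                         for cf, m in zip(a, monomials(vars_, d))))

x1, x2 = sp.symbols('x1 x2')
f = x1**2 - 3*x1*x2 + 2
a_f = encode(f, (x1, x2), d=2)
assert sp.expand(decode(a_f, (x1, x2), d=2) - f) == 0
assert len(a_f) == len(monomials((x1, x2), 2))  # length depends only on n, d
```

An ideal code $a_{I}$ would simply concatenate such tuples for $f_{1},\ldots ,f_{s}$, padding with zero polynomials up to a bound on $s$.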
\begin{theorem}
\label{member} Given $d>0$, there exists a formula \textrm{IdMem}$_{d}$ (Ideal Membership) such that for any field $k$, any ideal $I\subset k\left[ x_{1},\ldots ,x_{n}\right] ,$ and any $k$-algebra $R$, both of complexity at most $d$ over $k$, it holds that $f\in IR$ if and only if $k\models $\textrm{IdMem}$_{d}(a_{f},a_{I})$. Here $a_{f}$ and $a_{I}$ denote codes for $f$ and $I,$ respectively.
\end{theorem}

\begin{remark}
\begin{enumerate}
\item \label{Inc} Using Theorem \ref{member}, it is easy to get for each $d$ formulas \textrm{Inc}$_{d}$ (Inclusion) and \textrm{Equal}$_{d}$ (Equality) such that if $R$ is a finitely generated $k$-algebra with complexity at most $d,$ and if $J$ and $I$ are ideals of $R$ with complexity at most $d$, then $(a_{I},a_{J})\in \left\vert \text{Inc}_{d}\right\vert _{k}$ (resp. $(a_{I},a_{J})\in \left\vert \text{Equal}_{d}\right\vert _{k}$) if and only if $I$ is included in $J$, $I\subset J$ (resp. $I=J$).

\item \label{Formula-Ideal Maximal} Given $d,n>0$ there exists a formula \textrm{MaxIdeal}$_{d,n}$ such that for any algebraically closed field $k$ and any ideal $\mathfrak{m}\subset k[x_{1},\ldots ,x_{n}]$ of complexity at most $d$ we have: $\mathfrak{m}$ is a maximal ideal if and only if $k\models \mathrm{MaxIdeal}_{d,n}(a_{\mathfrak{m}})$, where $a_{\mathfrak{m}}$ is a code for $\mathfrak{m}$. In fact, by the Nullstellensatz $\mathfrak{m}$ is maximal if and only if there exist $b_{1},\ldots ,b_{n}\in k$ such that $\mathfrak{m}=(x_{1}-b_{1},\ldots ,x_{n}-b_{n})$. Let us call $J=(x_{1}-b_{1},\ldots ,x_{n}-b_{n})$. Then, the required formula is:
\begin{equation*}
\mathrm{MaxIdeal}_{d,n}(\xi )=(\exists b_{1},\ldots ,b_{n})(\mathrm{Equal}_{d}(\xi ,a_{J})),
\end{equation*}
where $\xi $ is to be replaced by the code $a_{\mathfrak{m}}$ of $\mathfrak{m}$.
\end{enumerate}
\end{remark}

We will also need the following lemma:

\begin{lemma}
(\cite{bounds}, Lemma 3.2) For each $d>0$ there is a bound $D=D(d)$ with the following property: Let $T$ be a local $k$-algebra of complexity at most $d$. Let $M$ and $M^{\prime }$ be submodules of $T^{d}$ of degree type at most $d$. Then the degree type of $(M:_{T}M^{\prime })=\{t\in T:tM^{\prime }\subset M\}$ is bounded by $D$.
\end{lemma}

In particular, if $T$ is a local $k$-algebra of complexity at most $d$ and $J\subset T$ is an ideal of complexity at most $d$, then $\mathrm{Ann}_{T}J$ must have complexity at most $D=D(d)$, a bound that depends only on $d$.

\subsection{Proof of the asymptotic form of Koh's conjecture}

In this section we give a \textit{nonstandard} proof of the asymptotic version of Koh's conjecture. We start by defining the \emph{complexity of a ring extension} (throughout, $d$ will denote a positive integer).

\begin{definition}
Let $R\subset S$ be a module-finite extension of local $k$-algebras. We say this extension has complexity at most $d$ if:

\begin{enumerate}
\item The \emph{complexity} over $k$ of the local $k$-algebras $R$ and $S$ is at most $d$.

\item The minimal number of generators of $S$ as an $R$-module is at most $d.$

\item The projective dimension of $S$ as an $R$-module is less than or equal to $d$.
\end{enumerate}
\end{definition}

We intend to prove the following: Given $d>0$, there exists a prime $p_{d}$ such that for any algebraically closed field $k$ of characteristic $p>p_{d}$ and any module-finite extension $R\subset S$ of local $k$-algebras of complexity at most $d$, there is a retraction $\rho :S\rightarrow R$, and consequently $R\subset S$ splits. We start by recalling the following definition.
\begin{definition}
\label{goresnstein}A local ring $(R,\mathfrak{m})$ is called Gorenstein if $R$ is Cohen--Macaulay (CM) and if for any system of parameters $\{x_{1},\ldots ,x_{d}\}$ in $R$ the socle of $\overline{R}=R/(x_{1},\ldots ,x_{d})$, defined as $\mathrm{Ann}_{\overline{R}}(\mathfrak{m})$, is a $1$-dimensional $R/\mathfrak{m}$-vector space (\cite{eisen}, page 526).
\end{definition}

The following result will be of fundamental importance \cite{velez-rigo}.

\begin{proposition}
\label{retraccion} Let $(R,\mathfrak{m})\subset (T,\mathfrak{n})$ be a module-finite extension of local rings with $T$ a free $R$-module. If $T/\mathfrak{m}T$ is Gorenstein, and if $J\subset T$ is any ideal, then there exists a retraction $\rho :T/J\rightarrow R$ if and only if $\mathrm{Ann}_{T}(J)\nsubseteq \mathfrak{m}T.$
\end{proposition}

We also need the following \cite{velezsplitting}.

\begin{theorem}[Koh in characteristic zero]
\label{koh 0}Let $R$ be a ring containing a field of characteristic zero, and let $R\subset S$ be a module-finite extension of rings such that the projective dimension of $S$ as an $R$-module is finite. Then, there exists a retraction $\rho :S\rightarrow R.$
\end{theorem}

\begin{remark}
\label{grave}Let $(R,\mathfrak{m})$ be a local ring and let $R\subset S$ be a module-finite extension. Let us take generators $s_{1},\ldots ,s_{n}\in S$ of $S$ as an $R$-algebra. For each $s_{i}\in S$ choose an arbitrary monic polynomial with coefficients in $R,$ $f_{i}(x_{i})=x_{i}^{d_{i}}+r_{i1}x_{i}^{d_{i}-1}+\cdots +r_{id_{i}}$, satisfied by $s_{i}$. Let $T$ denote the quotient ring $R[x_{1},\ldots ,x_{n}]/(f_{1}(x_{1}),\ldots ,f_{n}(x_{n}))$. As in Section \ref{reducciones}, we may represent $S$ as a quotient of $T$ by defining a surjective $R$-homomorphism $\phi :T\rightarrow S$ sending the class of $x_{i}$ to $s_{i}$. If $J$ denotes its kernel then $\mathrm{ht}(J)=0$, as shown in that same section.
The representation of $S$ as the quotient $T/J$ makes it possible to give a very useful criterion for the existence of a retraction: the inclusion map $R\subset T/J$ splits if and only if $\mathrm{Ann}_{T}(J)$ is not contained in $\mathfrak{m}T$. This follows immediately from Proposition \ref{retraccion}.
\end{remark}

The following theorem states that, given $i\geq 0$ and $d>0$, there exists a formula $(\mathrm{Tor}_{i})_{d}$ such that for any $k$-algebra $R$ and $R$-modules $M,N,V,$ all of complexity at most $d$, we have $\mathrm{Tor}_{i}^{R}(M,N)\cong V$ if and only if $(\mathrm{Tor}_{i})_{d}$ evaluated in codes of $R,M,N$ and $V$ is true over $k$ (analogously for $\mathrm{Ext}$).

\begin{theorem}
\label{4.4}Given $i\geq 0,d>0$, there exist formulas $(\mathrm{Tor}_{i})_{d}$ and $(\mathrm{Ext}^{i})_{d}$ with the following properties: Let $k$ be any field; then, if a tuple $(a,m,n,v)$ is in $|(\mathrm{Tor}_{i})_{d}|_{k}$ (respectively, in $|(\mathrm{Ext}^{i})_{d}|_{k}$), then $\mathcal{M}(v)$ is isomorphic to
\[\mathrm{Tor}_{i}^{\mathcal{R}(a)}(\mathcal{M}(m),\mathcal{M}(n))\]
(respectively, to $\mathrm{Ext}_{\mathcal{R}(a)}^{i}(\mathcal{M}(m),\mathcal{M}(n))$). Moreover, for each tuple $(a,m,n)$ we can find at least one $v$ such that $(a,m,n,v)$ belongs to $|(\mathrm{Tor}_{i})_{d}|_{k}$ (respectively, to $|(\mathrm{Ext}^{i})_{d}|_{k}$).
\end{theorem}

\begin{proof}
See \cite{bounds}, Corollary 4.4, page 150.
\end{proof}

We recall the following standard result (\cite{eisen}, page 167, Theorem 6.8).

\begin{theorem}
Let $(R,\mathfrak{m})$ be a local ring, and denote by $k$ the residue field $R/\mathfrak{m}$. If $M$ is a finitely generated $R$-module, then \textrm{pd}$_{R}(M)\leq n$ if and only if $\mathrm{Tor}_{n+1}^{R}(M,k)=0$.
\end{theorem}

\begin{remark}
\label{formulita}It is clear from the previous theorems that there exists a formula $(\mathrm{pd}_{<n})_{d}$ such that, if $M=\mathcal{M}(v)$ is an $R=\mathcal{R}(a)$-module of complexity at most $d$, where $(R,\mathfrak{m})$ is a local $k$-algebra of complexity at most $d$, then $k\models (\mathrm{pd}_{<n})_{d}(a,v)$ if and only if $\mathrm{pd}_{R}(M)<n$.
\end{remark}

From these preliminaries we obtain the following main result:

\begin{theorem}
For each $d>0$ there exists a first order formula $\mathrm{Koh}_{d}$ such that if $R\subset S$ is a module-finite extension of local $k$-algebras of complexity at most $d$, then there exists a retraction $\rho :S\rightarrow R$ if and only if $k\models \mathrm{Koh}_{d}(a,a^{\prime },b,b^{\prime }),$ where $R\cong \mathcal{R}(a)$, $S\cong \mathcal{R}(b)$, and $a^{\prime },b^{\prime }$ are codes for $\mathfrak{m}T$ and $\mathrm{Ann}_{T}(J)$, respectively, in the representation $S\cong T/J$ of Remark \ref{grave}.
\end{theorem}

\begin{proof}
As shown in Remark \ref{grave}, we can represent $S$ as $S\cong T/J$, where the complexity of $J$ and $T$ is at most $d$. We know there is a retraction $\rho :S\rightarrow R$ if and only if $\mathrm{Ann}_{T}(J)\nsubseteq \mathfrak{m}T$.\newline
As noted in Remark \ref{formulita}, there exists a formula $(\mathrm{pd}_{<n})_{d}$ such that if $R=\mathcal{R}(a)$ and $S=\mathcal{R}(b)$ then $k\models (\mathrm{pd}_{<n})_{d}(a,b)$ if and only if \textrm{pd}$_{R}(S)<n$.\newline
Let $\mathrm{Koh}_{d}$ be the formula which establishes the following: if \textrm{pd}$_{R}(S)<d$, then $\mathrm{Ann}_{T}(J)\nsubseteq \mathfrak{m}T$. Explicitly:
\begin{equation*}
\mathrm{Koh}_{d}(\xi ,\xi ^{\prime },\nu ,\nu ^{\prime }):\underset{i=0}{\overset{d-1}{\bigvee }}\mathrm{pd}(\xi ,\nu )=i\Longrightarrow \lnot \mathrm{Inc}_{d}(\nu ^{\prime },\xi ^{\prime }).
\end{equation*}
Here $\nu ^{\prime }$ and $\xi ^{\prime }$ are reserved for a code of $\mathrm{Ann}_{T}(J)$ and of $\mathfrak{m}T$, respectively. Then, it is clear that $k\models \mathrm{Koh}_{d}(a,a^{\prime },b,b^{\prime })$ if and only if there exists a retraction $\rho :S\rightarrow R$.
\end{proof}

\begin{theorem}
\label{muymuybueno}Fix an arbitrary positive integer $d>0$. The set of prime numbers $p$ for which there are counterexamples to Koh's conjecture of complexity at most $d$, among module-finite extensions $R\subset S$ of local $k$-algebras over algebraically closed fields $k$ of characteristic $p$, is finite.
\end{theorem}

\begin{proof}
From Theorem \ref{koh 0} we see that $K\models \mathrm{Koh}_{d}(a,a^{\prime },b,b^{\prime })$ for any field $K$ of characteristic zero. Then, by Proposition \ref{tranferencia fuerte}, we deduce that $k\models \mathrm{Koh}_{d}(a,a^{\prime },b,b^{\prime })$ for every field $k$ of sufficiently large prime characteristic $p$. More precisely: Given $d>0$, there exists a prime number $p_{d}$ such that for any field $k$ of characteristic $p>p_{d}$ and any module-finite extension $R\subset S$ of local $k$-algebras with complexity at most $d$, there exists a retraction $\rho :S\rightarrow R$.
\end{proof}

\end{document}
\begin{document}
\title{Quantum steering of Gaussian states via non-Gaussian measurements}

\author{Se-Wan Ji}
\affiliation{Department of Physics, Texas A$\&$M University at Qatar, PO Box 23784, Doha, Qatar}
\author{Jaehak Lee}
\affiliation{Department of Physics, Texas A$\&$M University at Qatar, PO Box 23784, Doha, Qatar}
\author{Jiyong Park}
\affiliation{Department of Physics, Texas A$\&$M University at Qatar, PO Box 23784, Doha, Qatar}
\author{Hyunchul Nha}
\affiliation{Department of Physics, Texas A$\&$M University at Qatar, PO Box 23784, Doha, Qatar}

\begin{abstract}
Quantum steering---a strong correlation to be verified even when one party or its measuring device is fully untrusted---not only provides a profound insight into quantum physics but also offers a crucial basis for practical applications. For continuous-variable (CV) systems, Gaussian states among others have been extensively studied, however, mostly confined to Gaussian measurements. While the fulfillment of the Gaussian criterion is sufficient to detect CV steering, whether it is also necessary for Gaussian states is a question of fundamental importance in many contexts. This critically questions the validity of characterizations established only under Gaussian measurements, such as the quantification of steering and the monogamy relations. Here, we introduce a formalism based on local uncertainty relations of non-Gaussian measurements, which is shown to manifest quantum steering of some Gaussian states that the Gaussian criterion fails to detect. To this aim, we look into Gaussian states of practical relevance, i.e. two-mode squeezed states under a lossy and an amplifying Gaussian channel. Our finding significantly modifies the characteristics of Gaussian-state steering established so far, such as monogamy relations and one-way steering under Gaussian measurements, thus opening a new direction for critical studies beyond the Gaussian regime.
\end{abstract}

\pacs{03.65.Ud, 03.67.Mn, 42.50.Dv}
\maketitle

\section*{Introduction}
Quantum correlations, which do not admit classical descriptions, profoundly distinguish quantum physics from classical physics and also provide key resources for quantum information processing \cite{Nielsen}. Under the classification of quantum correlations, quantum nonlocality is the strongest form of correlation \cite{Brunner14} beyond any classical local hidden variable (LHV) models \cite{Bell64}, which can be used e.g. to achieve the unconditional security of quantum key distribution (QKD) in a device-independent manner \cite{Acin07, Pironio09, Masanes11}. In the LHV model, the joint probability for outcomes $a$ and $b$ of local measurements $A$ and $B$, respectively, is of the form $P_{LHV}\left( a,b | A,B \right) =\sum_{\lambda} p \left( \lambda \right) P\left( a|A, \lambda \right)P\left(b|B,\lambda \right),$ with a hidden variable $\lambda$ distributed according to the probability $p\left( \lambda \right)$. Here, for a given $\lambda$, the local probability distributions are realized independently of each other. Notably, any local statistics are allowed, even beyond quantum ones, in the LHV models. On the other hand, if the conditional probabilities are restricted only to quantum statistics, i.e. $P_Q \left( a,b | A, B\right) = \sum_{\lambda} p\left( \lambda \right) P_Q\left( a| A, \rho_A^{\lambda} \right) P_Q \left( b|B, \rho_B^{\lambda} \right),$ where the subscript $Q$ refers to quantum statistics, the model describes quantum separability, the violation of which indicates quantum entanglement---a resource, e.g. for exponential speed-up over classical computation \cite{Nielsen,Jozsa03}.
Recently, an intermediate form of correlation has been rigorously defined as quantum steering \cite{Wiseman07}, which has become a topic of growing interest over the past decade \cite{Wiseman07, Cavalcanti09, Branciard12, Handchen12, one-way, Midgley10, Olsen13, HeR13, HeF14, Tan15, Smith12, Wittman12, Bennett12, Quintino14, Uola14, Bowles14, Piani15}. In the so-called local hidden state (LHS) model, the correlation is represented by the form
\begin{equation}
\label{nonsteer1}
P_{LHS}\left( a,b |A,B \right)= \sum_{\lambda}p \left( \lambda \right) P_Q\left( a| A, \rho_{\lambda} \right) P\left( b| B, \lambda \right),
\end{equation}
i.e. only the party $A$ is restricted to quantum statistics whereas the party $B$ is not. If the above approach fails to explain a certain correlation of a given state, the state is called steerable from $B$ to $A$. Beyond its fundamental interest, quantum steering can have important applications. A steering test certifies quantum correlation even if Bob or his device is fully untrusted. In this sense, quantum steering may offer a crucial practical basis, e.g. for one-sided device-independent cryptography \cite{Branciard12}, and was also shown to be useful for quantum sub-channel discrimination \cite{Piani15}.

Quantum steering has been intensively studied for continuous variables (CVs) \cite{Braunstein05} as well as discrete variables. In fact, the first steering criterion was developed by M. Reid for CVs \cite{Reid89, Reid09}, and a CV system is an attractive platform for quantum information processing. A wide range of quantum systems can process quantum information on CVs, including light fields, collective spins of atomic ensembles, motional degrees of freedom of trapped ions, Bose--Einstein condensates, and mechanical oscillators \cite{Cerf}.
An important regime to address CV systems both theoretically and experimentally is the one dealing with Gaussian states, Gaussian measurements and Gaussian operations \cite{Weedbrook12}. Although the Gaussian regime has many practical advantages due to its experimental feasibility, it is also known that non-Gaussian operations and measurements are essential for some tasks such as CV nonlocality tests \cite{Nha04, Patron04, Bell64} and universal quantum computation \cite{Lloyd99, Bartlett02}. On the other hand, there are also cases in which Gaussian states and operations provide an optimal solution, e.g. classical capacity under Gaussian channels \cite{Mari14}, quantification of entanglement \cite{Giedke03} and quantum discord \cite{Adesso10, Pirandola14,Ollivier01}. These examples indicate that the Gaussian regime can be sufficient for the characterization of certain quantum correlations of Gaussian states \cite{Giedke03, Adesso10, Pirandola14}. In particular, the Gaussian criterion is both necessary and sufficient to detect quantum entanglement for Gaussian states \cite{Duan00,Simon00}.

So far, most of the studies on the steering of Gaussian states have been confined to Gaussian measurements, which established numerous characteristics of Gaussian-state steering \cite{Wiseman07, Cavalcanti09, Reid89, Reid13, He13, Adesso15, Nha15}. One remarkable example is a recent experiment that showed one-way steering of a two-mode Gaussian state, i.e. Alice steers Bob but Bob cannot steer Alice, under homodyne measurements \cite{Handchen12}. Another notable property is a strict monogamy relation identified under Gaussian measurements, i.e. Eve cannot steer Bob if Alice steers Bob \cite{Reid13, Nha15}, which can be a crucial basis for secure communication, e.g. in one-sided device-independent cryptography \cite{Walk14}.
It is thus important to ask whether these properties, established under the restriction to Gaussian measurements, remain true beyond the Gaussian regime. For instance, a special class of non-Gaussian measurements, i.e., higher-order quadrature amplitudes, was employed to study steerability of Gaussian states \cite{Nha15}, which did not show any better performance than Gaussian measurements. Kogias and Adesso further conjectured that for all two-mode Gaussian states, Gaussian measurements are optimal \cite{Kogias15}---a conjecture of both fundamental and practical importance to prove or disprove. In this Article we demonstrate that there exist two-mode Gaussian states for which Gaussian measurements cannot manifest steering, but non-Gaussian measurements can. For this purpose, we employ a formulation of steering criteria based on local uncertainty relations involving non-Gaussian measurements \cite{Ji15}. We study steerability of mixed Gaussian states, specifically, Gaussian states under a lossy and an amplifying channel, which represent typical noisy environments. Our counter-examples to the aforementioned conjecture imply the breakdown of the monogamy relations and the one-way steerability that emerge under the restriction to Gaussian measurements, and point out a critical need for more studies beyond the Gaussian regime. \section*{Results} \subsection*{Gaussian vs. non-Gaussian steering criterion} Let us define two orthogonal quadrature amplitudes $ {\hat X}_i=\frac{1}{\sqrt{2}}\left( {\hat a}_i + {\hat a}_i^{\dagger} \right)$, ${\hat P}_i=\frac{1}{\sqrt{2}i} \left( {\hat a}_i-{\hat a}_i^{\dagger} \right)$, where ${\hat a}_i$ and ${\hat a}_i^{\dagger}$ are the annihilation and the creation operator for the $i$-th mode.
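These quadratures can be represented concretely in a truncated Fock basis. The following is a minimal numerical sketch (numpy assumed; the truncation dimension $N=30$ is an illustrative choice, not part of the paper): it builds the annihilation operator and checks that the canonical commutator $[\hat X, \hat P]=i$ holds exactly below the truncation edge.

```python
import numpy as np

# Truncated Fock-space representation of the quadratures X, P defined above.
# The truncation dimension N is an illustrative choice of this sketch.
N = 30
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation: a|n> = sqrt(n)|n-1>
ad = a.conj().T                              # creation operator

X = (a + ad) / np.sqrt(2)
P = (a - ad) / (np.sqrt(2) * 1j)

# [X, P] = i holds exactly except at the truncation edge, where the
# commutator picks up the value -(N-1)i as a truncation artifact.
comm = X @ P - P @ X
print(np.allclose(comm[:N - 1, :N - 1], 1j * np.eye(N - 1)))  # True
```

The single defective matrix element at the edge is why truncated-space observables, such as the TLOOs introduced below, must be handled with projectors onto the truncated space rather than with the full canonical algebra.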
The covariance matrix $ \gamma_{AB}$ of a bipartite $\left(M + N\right)$-mode Gaussian state $\rho_{AB}$ is then given by $\gamma_{AB}=\left( \begin{array}{cc} \gamma_{A} & C \\ C^{T} & \gamma_{B} \\ \end{array} \right)$ whose elements are \begin{equation} \label{gcova} \gamma_{AB}^{ij} = \langle \Delta {\hat r}_i \Delta {\hat r}_j +\Delta {\hat r}_j \Delta {\hat r}_i\rangle , \end{equation} with ${\hat r}_i \in \left\{{\hat X}_1,{\hat P}_1,{\hat X}_2,{\hat P}_2,\cdots,{\hat X}_{M+N},{\hat P}_{M+N} \right\}$ and $\Delta {\hat r}_k={\hat r}_k-\langle {\hat r}_k \rangle$. Here $ \gamma_{A} $ and $\gamma_{B}$ are $2M \times 2M $ and $2N \times 2N $ real symmetric positive-definite matrices representing the local statistics of the $M$ and $N$ modes, respectively, and $C$ is a $ 2M \times 2N $ real matrix representing the correlation between the two subsystems. Due to the uncertainty principle, the covariance matrix must satisfy $\gamma_{AB} \pm i \Omega_{AB} \geq 0$, where $ \Omega_{AB} =\oplus_{i=1}^{M+N} \left( \begin{array}{cc} 0 & 1 \\ -1 & 0 \\ \end{array} \right) $ is the symplectic form \cite{Simon}. For the case of a two-mode Gaussian state, the covariance matrix can always be brought to a standard form by local Gaussian operations \cite{Duan00, Simon00} \begin{equation} \label{scova} \gamma _{AB} = \left( \begin{array}{cc} \gamma_A & C \\ C^T & \gamma_B \\ \end{array} \right) = \left( \begin{array}{cccc} a & 0 & c_1 & 0 \\ 0 & a & 0 & - c_2 \\ c_1 & 0 & b & 0 \\ 0 & - c_2 & 0 & b \\ \end{array} \right) . \end{equation} Without loss of generality, we may set $ c_1 \geq \left| c _2 \right| \geq 0 $. Under Gaussian measurements, a Gaussian state $ \rho_{AB} $ is non-steerable from $B$ to $A$ iff \begin{equation} \label{GC} {\gamma _{AB}} + i{\Omega _A}\oplus{{\bf 0}_B} \ge 0, \end{equation} where $ {\bf 0}_{B} $ is a $ 2N \times 2N $ null matrix \cite{Wiseman07}.
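Both matrix conditions above are straightforward to test numerically. The sketch below (numpy assumed; the squeezing value and the positivity tolerance are illustrative choices) builds the standard-form covariance matrix of a pure two-mode squeezed vacuum and verifies that it satisfies the uncertainty principle while violating the Gaussian non-steerability condition for $r>0$.

```python
import numpy as np

omega1 = np.array([[0.0, 1.0], [-1.0, 0.0]])   # single-mode symplectic form
zero2 = np.zeros((2, 2))

def standard_form(a, b, c1, c2):
    """Two-mode covariance matrix in the standard form above."""
    return np.array([[a, 0, c1, 0],
                     [0, a, 0, -c2],
                     [c1, 0, b, 0],
                     [0, -c2, 0, b]])

def min_eig(gamma, omega):
    """Smallest eigenvalue of the Hermitian matrix gamma + i*omega."""
    return np.min(np.linalg.eigvalsh(gamma + 1j * omega))

r = 0.5  # illustrative squeezing
gamma = standard_form(np.cosh(2*r), np.cosh(2*r), np.sinh(2*r), np.sinh(2*r))

# Uncertainty principle: gamma + i(Omega_A ⊕ Omega_B) >= 0 (valid state)
Omega_AB = np.block([[omega1, zero2], [zero2, omega1]])
state_ok = min_eig(gamma, Omega_AB) >= -1e-9

# Gaussian criterion: gamma + i(Omega_A ⊕ 0_B) acquires a negative
# eigenvalue, i.e. the pure squeezed state is steerable from B to A.
steerable_B_to_A = min_eig(gamma, np.block([[omega1, zero2],
                                            [zero2, zero2]])) < -1e-9
print(state_ok, steerable_B_to_A)  # True True
```

For this block-diagonal structure, the steering test reduces to the Schur-complement condition $a - c_1^2/b \geq 1$, which for the pure state above gives $1/\cosh 2r < 1$ whenever $r > 0$.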
We now introduce our non-Gaussian steering criterion based on a set of local orthogonal observables, which we use to detect the steerability of two-mode Gaussian states. {\bf Lemma}: Let us choose a collection of $n^2$ observables $\left\{ A^{(n)} \right\} = \left\{ \lambda_k,\, \lambda_{kl}^{\pm} \right\}$ $\left( k,l=0,1,...,n-1 \right) $ where \begin{eqnarray} \lambda_k&=& |k\rangle \langle k |\\ \lambda_{kl}^{+}&=&\frac{|k \rangle \langle l| +|l \rangle \langle k| }{\sqrt{2}}\,\, \left( k < l \right), \\ \lambda_{kl}^{-}&=&\frac{|k \rangle \langle l| -|l \rangle \langle k| }{\sqrt{2}i}\,\, \left( k <l \right). \end{eqnarray} Here $ |k \rangle $ denotes the Fock state with photon number $k$. Note that each of $\lambda_k$ and $\lambda_{kl}^{\pm}$ represents a complete projective measurement individually. That is, $\lambda_k$ represents a two-outcome projective measurement, assigning $+1$ if the tested state is detected in $|k\rangle$ and $0$ otherwise. On the other hand, $\lambda_{kl}^{\pm}$ represents a three-outcome projective measurement, assigning $\pm1$ if the state is detected in the two eigenstates of $\lambda_{kl}^{\pm}$ with nonzero eigenvalues and $0$ otherwise. These observables satisfy the orthogonality relations $ \Tr\left( A_i^{(n)} A_j^{(n)} \right) =\delta_{ij}$, and the sum of variances must satisfy the uncertainty relation \begin{equation} \label{LUR1} \sum_j \delta^2\left( A_j^{(n)} \right) \geq \left(n-1 \right) \langle \openone_n \rangle, \end{equation} where $\delta^2\left(A_j^{(n)}\right)= \langle \left(A_j^{(n)}\right)^2 \rangle-\langle A_j^{(n)} \rangle^2$ is the variance of each observable and $ \openone_n = \sum_{k=0}^{n-1} |k \rangle \langle k | $ is the projector onto the truncated $n$-level Hilbert space. The proof of the Lemma is given in Supplementary Information. \\ \\ Now consider an orthogonal $n^2 \times n^2$ matrix $ O_n $, satisfying $ O_n^T O_n= O_n O^T_n = \openone_{n^2}$, the $n^2 \times n^2$ identity matrix.
Then the set of observables $ \left\{ \tilde{A}_j^{(n)} \right\}$ under the transformation $\tilde{A}_j^{(n)} = \sum_l O_{jl} A_l^{(n)}$ also satisfies the uncertainty relation in equation (\ref{LUR1}) since $ \sum_{j} \left( \tilde{A}_j^{(n)} \right)^2 = \sum_{j,l,l'} O^T_{lj}O_{jl'} A_l^{(n)}A_{l'}^{(n)}=\sum_l \left(A_l^{(n)}\right)^2 =n \openone_n$ and $ \sum_j \langle \tilde{A}_j^{(n)} \rangle^2 = \sum_{j,l,l'} O^T_{lj}O_{jl'}\langle A_l^{(n)} \rangle\langle A_{l'}^{(n)} \rangle = \sum_l \langle A_l^{(n)} \rangle^2 $. This invariance property will be used later. \\ \\ We now obtain a non-steering condition based on these orthogonal observables.\\ \\ {\bf Theorem}: If a two-mode quantum state $\rho_{AB}$ is non-steerable from B to A, it must satisfy the inequality \begin{equation} \label{LUR-steer} \sum_j \delta^2 \left( A_j^{(n)} \otimes \openone + g \openone \otimes B_j^{(n')} \right) \geq \left(n-1 \right)\langle \openone_n^{A} \rangle, \end{equation} for any real $g$, where the observables $ \left\{ A_j^{(n)} \right\} $ satisfy the uncertainty relation in Eq. (\ref{LUR1}). Its proof is given in Supplementary Information. 
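For a concrete picture of the TLOOs, the sketch below (numpy assumed; $n=3$ and the random seed are illustrative choices) constructs the $n^2$ observables of the Lemma and verifies the orthogonality relations, the operator identity $\sum_j (A_j^{(n)})^2 = n\,\openone_n$, and the invariance of that identity under a random orthogonal mixing $O_n$, as used above.

```python
import numpy as np

def tloos(n):
    """The n^2 truncated local orthogonal observables {lambda_k, lambda_kl^+-}."""
    obs = []
    for k in range(n):
        m = np.zeros((n, n), dtype=complex)
        m[k, k] = 1.0                                      # lambda_k = |k><k|
        obs.append(m)
    for k in range(n):
        for l in range(k + 1, n):
            p = np.zeros((n, n), dtype=complex)
            p[k, l] = p[l, k] = 1 / np.sqrt(2)             # lambda_kl^+
            obs.append(p)
            q = np.zeros((n, n), dtype=complex)
            q[k, l], q[l, k] = -1j / np.sqrt(2), 1j / np.sqrt(2)  # lambda_kl^-
            obs.append(q)
    return obs

n = 3
A = tloos(n)

# Orthogonality Tr(A_i A_j) = delta_ij
gram = np.array([[np.trace(Ai @ Aj) for Aj in A] for Ai in A])
print(np.allclose(gram, np.eye(n * n)))                    # True

# Invariance of sum_j A_j^2 = n * identity under an orthogonal mixing O_n
rng = np.random.default_rng(1)
O, _ = np.linalg.qr(rng.normal(size=(n * n, n * n)))
At = [sum(O[j, l] * A[l] for l in range(n * n)) for j in range(n * n)]
print(np.allclose(sum(M @ M for M in At), n * np.eye(n)))  # True
```

The second check is exactly the algebraic step quoted above, $\sum_j (\tilde A_j^{(n)})^2 = \sum_l (A_l^{(n)})^2 = n\,\openone_n$, evaluated for one random orthogonal matrix.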
\\ \\ We next introduce a useful criterion, originating from the inequality (\ref{LUR-steer}), that is readily computable.\\ \\ {\bf Proposition}: If a two-mode state $ \rho_{AB}$ is non-steerable from $B$ to $A$, the correlation matrix $ C_{nn'}^{TLOOs}$ with elements $\left( C_{nn'}^{TLOOs} \right)_{ij}\equiv\langle A_i^{(n)} \otimes B_j^{(n')} \rangle-\langle A_i^{(n)} \rangle\langle B_j^{(n')} \rangle$, constructed with $n$- and $n'$-level truncated local orthogonal observables (TLOOs), must satisfy ($ \|\,\cdot\|_{tr}$: trace norm) \begin{eqnarray} \label{steering-tr} &&\| C_{nn'}^{TLOOs} \|_{tr} \nonumber\\ &&\leq \sqrt{ \left( \langle \openone_n^{A} \rangle -\sum_{j=1}^{n^2} \langle A_j^{(n)} \rangle^2 \right) \left( n' \langle \openone_{n'}^B \rangle - \sum_{j=1}^{n'^2} \langle B_j^{(n')} \rangle^2 \right)}.\nonumber\\ \end{eqnarray} The proof of the Proposition in Supplementary Information clearly shows that if a given state violates the inequality (\ref{steering-tr}), it is steerable from $B$ to $A$, and also that there exists a set of TLOOs that violates the inequality (\ref{LUR-steer}) as well. We give some details on how to calculate the expectation values of TLOOs for a given state in Supplementary Information. \begin{figure} \caption{Detection of steerability for a lossy Gaussian state based on the criterion in equation (\ref{steering-tr}).} \label{lossfig} \end{figure} \subsection*{Application to Gaussian states} Let us consider a two-mode squeezed-vacuum (TMSV) state as an initial state, with covariance matrix $\gamma_{AB}^{TMSV}$ given by $ a =b=\cosh{2r}$ and $c_1=c_2=\sinh{2r}$ in equation (\ref{scova}), where $r$ is the squeezing parameter. If $r>0$, this state is always steerable in both directions with Gaussian measurements, by equation (\ref{GC}). Now suppose that the mode $B$ undergoes a vacuum noisy channel with transmission rate $\eta$.
Then the output state has a covariance matrix \cite{Nha15}, \begin{eqnarray} \label{losscova} &&\gamma_{AB}^{TMSV}\rightarrow\gamma_{AB'}^{TMSV}=\nonumber\\ &&\left( \begin{array}{cccc} \cosh{2r} & 0 & \sqrt{\eta}\sinh{2r} & 0 \\ 0 & \cosh{2r} & 0 & -\sqrt{\eta}\sinh{2r} \\ \sqrt{\eta}\sinh{2r} & 0 & \eta \cosh{2r}+1-\eta & 0 \\ 0 & -\sqrt{\eta}\sinh{2r} & 0 & \eta\cosh{2r}+1-\eta \\ \end{array} \right).\nonumber \end{eqnarray} If $\eta>\frac{1}{2}$, one can readily check that the Gaussian criterion detects steering from $B$ to $A$ for any nonzero squeezing. However, if the transmission rate is below $\eta=\frac{1}{2}$, steering from $B$ to $A$ is impossible under Gaussian measurements \cite{Nha15}. On the other hand, steering from $A$ to $B$ is always possible for any $\eta>0$ and nonzero squeezing. In contrast, as we show in Fig. \ref{lossfig}, one can detect steerability of noisy Gaussian states from $B$ to $A$ even below $\eta = 1/2$ using non-Gaussian measurements based on 2-level [Fig. 1 (a)] and 3-level [Fig. 1 (b)] TLOOs. For this calculation, we need the density matrix elements in Fock basis, $ \rho_{m_1m_2n_1n_2}\equiv{\rm Tr} \{\rho | m_1 \rangle_A \langle n_1| \otimes | m_2 \rangle_B \langle n_2 | \}$, which are given in Methods for completeness. Our motivation for the choice of TLOOs with low photon numbers is rather natural. For a low transmission rate $\eta$, the mode $B$ resides in the Fock space of low photon numbers, and this is particularly true for a small initial squeezing $r$. It is then expected that the information on correlation exists largely in the low-photon Fock space. For the case of 2-level TLOOs in Fig. 1 (a), the detection range under our steering test, for which the Gaussian criterion fails, turns out to be $0<r<0.869$ (red shaded region below dashed line). On the other hand, for the case of 3-level TLOOs in Fig. 1 (b), it turns out to be $0.364<r<0.987$ (red shaded region below dashed line).
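The Gaussian-measurement threshold at $\eta=1/2$ quoted above can be reproduced numerically from the lossy covariance matrix. This is a minimal sketch (numpy assumed; the squeezing values and the tolerance are illustrative choices):

```python
import numpy as np

omega1 = np.array([[0.0, 1.0], [-1.0, 0.0]])
zero2 = np.zeros((2, 2))

def lossy_tmsv_cov(r, eta):
    """Covariance matrix above: TMSV with mode B sent through a loss channel."""
    a, s = np.cosh(2 * r), np.sinh(2 * r)
    b, c = eta * a + 1 - eta, np.sqrt(eta) * s
    return np.array([[a, 0, c, 0],
                     [0, a, 0, -c],
                     [c, 0, b, 0],
                     [0, -c, 0, b]])

def gaussian_steerable_B_to_A(gamma, tol=1e-9):
    """True iff gamma + i(Omega_A ⊕ 0_B) has a negative eigenvalue,
    i.e. the Gaussian non-steerability condition fails."""
    M = gamma + 1j * np.block([[omega1, zero2], [zero2, zero2]])
    return np.min(np.linalg.eigvalsh(M)) < -tol

r = 0.5  # illustrative squeezing
print(gaussian_steerable_B_to_A(lossy_tmsv_cov(r, 0.7)))  # True:  eta > 1/2
print(gaussian_steerable_B_to_A(lossy_tmsv_cov(r, 0.3)))  # False: eta < 1/2
```

Analytically, the test reduces to $(1-2\eta)(\cosh 2r - 1) \geq 0$ for non-steerability, so the transition sits exactly at $\eta = 1/2$ for any squeezing, in agreement with the discussion above.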
One might then be interested to see whether a truncated TMSV, properly normalized in a genuinely low-dimensional Hilbert space, can also show steering in a similar way. We find that it does not show steerability beyond the Gaussian criterion. In a sense, our TLOOs constructed with low-dimensional Fock states obtain coarse-grained information on higher-order terms, rather than ignoring them completely, in an on-off fashion. (See the paragraph below equation (7).) Other Gaussian states of experimental relevance are TMSVs under an amplifying channel. Let mode $B$ undergo an amplification channel with gain factor $ G \geq 1$, i.e. ${\hat b}_{\rm out}=\sqrt{G}\, {\hat b}_{\rm in}+ \sqrt{G-1}\,{\hat v}^{\dagger}$, where ${\hat v}$ is a vacuum mode representing the amplification noise. Then the output covariance matrix is given by $ a=\cosh{2r}$, $c_1=c_2=\sqrt{G}\sinh{2r}$, and $ b=G\cosh{2r}+G-1$ in equation (\ref{scova}). In this case, steering is possible from $A$ to $B$ using the Gaussian criterion in the range $ 1 \leq G < \frac{2\cosh{2r}}{\cosh{2r}+1}$, whereas steering from $B$ to $A$ is possible for all $G$ and nonzero $r$. On the other hand, if we choose $2$-level TLOOs in each party and test the violation of our steering criterion in Eq. (\ref{steering-tr}), we find that the states with $ G = \frac{2\cosh{2r}}{\cosh{2r}+1}+\epsilon $ are detected, where $ 0 \leq \epsilon \lesssim 0.05$ for $ 0< r \lesssim 0.65 $; a nonzero $\epsilon$ indicates that there are some amplified Gaussian states the steering of which is detected not via the Gaussian criterion but via our non-Gaussian measurements. \begin{figure} \caption{Modeling a lossy channel by a beamsplitter (BS) of transmittance $\eta$.} \label{model} \end{figure} \section*{Discussion} Aside from its fundamental interest, our result can have some practical implications as well. First, it is known that steering obeys a strict monogamy property under Gaussian measurements \cite{Reid13, Nha15}.
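The Gaussian threshold $G^{*} = 2\cosh 2r/(\cosh 2r + 1)$ for $A$-to-$B$ steering quoted above can likewise be verified numerically. The sketch below (numpy assumed; the squeezing value and the gain offset are illustrative choices) evaluates the Gaussian condition for gains just below and just above $G^{*}$:

```python
import numpy as np

omega1 = np.array([[0.0, 1.0], [-1.0, 0.0]])
zero2 = np.zeros((2, 2))

def amplified_tmsv_cov(r, G):
    """Covariance quoted above: TMSV with mode B amplified with gain G >= 1."""
    a, s = np.cosh(2 * r), np.sinh(2 * r)
    b, c = G * a + G - 1, np.sqrt(G) * s
    return np.array([[a, 0, c, 0],
                     [0, a, 0, -c],
                     [c, 0, b, 0],
                     [0, -c, 0, b]])

def gaussian_steerable_A_to_B(gamma, tol=1e-9):
    """True iff gamma + i(0_A ⊕ Omega_B) has a negative eigenvalue,
    i.e. A can steer B under Gaussian measurements."""
    M = gamma + 1j * np.block([[zero2, zero2], [zero2, omega1]])
    return np.min(np.linalg.eigvalsh(M)) < -tol

r = 0.5  # illustrative squeezing
G_star = 2 * np.cosh(2 * r) / (np.cosh(2 * r) + 1)  # threshold quoted above
print(gaussian_steerable_A_to_B(amplified_tmsv_cov(r, G_star - 0.05)))  # True
print(gaussian_steerable_A_to_B(amplified_tmsv_cov(r, G_star + 0.05)))  # False
```

The Schur-complement form of the test, $b - c_1^2/a \geq 1$ for non-steerability, reduces to $G \geq 2\cosh 2r/(\cosh 2r + 1)$, matching the range stated in the text.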
That is, if Bob can steer Alice's system, Eve cannot steer it simultaneously. In contrast, our result provides a clear counter-example to this monogamy relation. For example, a lossy channel can be modeled by a beamsplitter as shown in Fig. 2. Then, if Bob possesses a field of fraction $\eta$, Eve takes the remaining fraction $1-\eta$. In the case of e.g. $\eta=0.55$, Bob can steer Alice's system via Gaussian measurements. At the same time, however, Eve can also steer Alice's system via non-Gaussian measurements, because her share corresponds to the case of $1-\eta=0.45$, at which steering is possible as shown in Fig. 1. Therefore, simultaneous steering becomes possible once the parties are not restricted to the same Gaussian measurements. A similar argument can be given for the case of amplified states. In a related context, one may wonder whether the above breakdown of the monogamy relation has implications for the security of one-sided device-independent quantum key distribution (OSDIQKD). Basically, steering is a necessary condition to establish a positive key rate for OSDIQKD. If the monogamy relation does not hold, i.e. simultaneous steering is possible, the security can potentially be compromised. However, importantly, it is not an arbitrary form of steering but a specific one that matters for a given protocol. For example, the CV OSDIQKD scheme of Ref. \cite{Walk14} extracts keys based on specific observables at Alice's station, namely the two orthogonal quadratures $\hat X$ and $\hat P$ (Fig. 1). If only these two observables are considered for the purpose of steering in the trusted party, simultaneous steering is impossible, as first shown by Reid \cite{Reid13}. This monogamy argument holds regardless of the type of state, whether Gaussian or non-Gaussian. Therefore, if Alice can establish a positive key rate with Bob, Eve cannot. Second, the interesting phenomenon of asymmetric steering, i.e.
Alice steers Bob but Bob cannot steer Alice, which has so far been investigated under the restriction to Gaussian measurements \cite{Handchen12,one-way}, must be rigorously reassessed. For example, the recent experiment \cite{Handchen12} studied the case of Gaussian states under a lossy channel, and our result demonstrates that the transmission rate $\eta=1/2$ is not a critical value for the one-way steering. In summary, we showed that there exist Gaussian states the steering of which Gaussian measurements cannot detect but non-Gaussian measurements can. To this end, we introduced a criterion based on local orthogonal observables and their uncertainty relations in a truncated Hilbert space. We applied this criterion to a TMSV under a lossy and an amplifying channel and found that Gaussian measurements are not always optimal to demonstrate steering of Gaussian states. Our result implies that the steering properties known under the restriction to Gaussian measurements must be carefully reassessed; for example, such important properties as the strict monogamy relation and asymmetric steering break down beyond the Gaussian regime. Our investigation clearly indicates that more studies are needed to completely understand the characteristics of quantum steering even for the restricted set of Gaussian states. We hope our findings will provide useful insights into future studies beyond Gaussian measurements and operations. {\it Remarks}: Upon completion of this work \cite{arxiv}, we became aware of a related work \cite{Pryde} that employs pseudo-spin observables to show non-optimality of Gaussian measurements for steering of Gaussian states. We here briefly compare their method with ours, particularly in detecting steering of Gaussian states under a lossy channel. As shown in Fig. 3, the method of Ref. \cite{Pryde} detects steering at a lower level of transmittance $\eta$ for the squeezing range $0<r<0.746$ ($0<r<0.743$) than our 2 (3)-level TLOO criterion.
However, it does not detect steering below $\eta=0.5$, the case of interest where the Gaussian criterion fails, if the squeezing level is rather high, $r>0.81$. On the other hand, our method detects steering below $\eta=0.5$ in the ranges $0<r<0.869$ and $0.364<r<0.987$ using the 2- and 3-level TLOO criteria, respectively. Thus, the red and the purple shaded regions in Fig. 3 indicate the advantage of our criteria over the method of Ref. \cite{Pryde}. In this respect, the two approaches are complementary to each other in detecting steering of Gaussian states for which the Gaussian criterion fails. \begin{figure} \caption{Comparing the steering criterion of Ref. \cite{Pryde} with ours for a lossy Gaussian state.} \label{compare} \end{figure} \section*{Methods} To calculate the expectation values of the observables in our steering criteria, we need the density matrix elements in Fock basis, i.e., $ \rho_{m_1m_2n_1n_2}\equiv{\rm Tr} \{\rho | m_1 \rangle_A \langle n_1| \otimes | m_2 \rangle_B \langle n_2 | \}$. In particular, a two-mode squeezed vacuum state with squeezing $r$, after the mode $B$ undergoes a vacuum noisy channel with transmittance $\eta$, gives \begin{equation} \begin{array}{lllllllllll} \rho_{0000} = \frac{2}{\cosh{(2r)}+1}, \\ \rho_{0011}=\rho_{1100}=\frac{2 \sqrt{\eta} \sinh{(2r)}}{\left( \cosh{(2r)}+1 \right)^2}, \\ \rho_{0022}=\rho_{2200} =\frac{2\eta\left( \cosh{(2r)} -1 \right)}{\left( \cosh{(2r)} +1 \right)^2}, \\ \rho_{1010}=\frac{2 \left(1- \eta\right) \left( \cosh{(2r)}-1 \right)}{\left( \cosh{(2r)}+1 \right)^2}, \\ \rho_{1021}=\rho_{2110}=\frac{2\sqrt{2\eta}\left( 1- \eta\right) \sinh{(2r)}\left( \cosh{(2r)}-1 \right) }{\left( \cosh{(2r)}+1 \right)^3} , \\ \rho_{1111}=\frac{2 \eta \left( \cosh{(2r)} -1 \right)}{\left( \cosh{(2r)}+1 \right)^2}, \\ \rho_{1122}=\rho_{2211}= \frac{2\eta^{3/2} \sinh^3{(2r)}}{\left( \cosh{(2r)}+1 \right)^4}, \\ \rho_{2020}=\frac{2\left( 1- \eta \right)^2 \left( \cosh{(2r)}-1 \right)^2}{\left( \cosh{(2r)}+1 \right)^3}, \\ \rho_{2121} = \frac{4\eta(1-\eta) \left( \cosh{(2r)}-1
\right)^2}{\left( \cosh{(2r)}+1 \right)^3},\\ \rho_{2222}=\frac{2\eta^2 \left( \cosh{(2r)} -1 \right)^2}{\left( \cosh{(2r)}+1 \right)^3}, \end{array} \end{equation} while other terms are zero in the basis of $ \left\{ | 0 \rangle, | 1 \rangle, | 2 \rangle \right\}$. On the other hand, the reduced state of each mode $A$ and $B$ is a thermal state, whose expectation values are also necessary to test our criterion. The single-mode thermal states are all diagonal in Fock basis, and the nonzero expectation values for the mode $B$ are given by \begin{equation} \begin{array}{lll} \langle B_0 \rangle_B = \langle | 0 \rangle \langle 0| \rangle_B = \frac{1}{1+\bar{n}} =\frac{2}{\eta \cosh{(2r)}-\eta+2}, \\ \langle B_1 \rangle_B = \langle | 1 \rangle \langle 1| \rangle_B = \frac{\bar{n}}{\left(1+\bar{n} \right)^2} = \frac{2\eta\left( \cosh{(2r)}-1 \right)}{\left( \eta\cosh{(2r)}-\eta+2 \right)^2}, \\ \langle B_2 \rangle_B = \langle | 2 \rangle \langle 2| \rangle_B = \frac{\bar{n}^2}{\left(1+\bar{n} \right)^3}=\frac{2\eta^2\left( \cosh{(2r)}-1 \right)^2}{\left( \eta\cosh{(2r)}-\eta+2 \right)^3}, \\ \end{array} \end{equation} where $ \bar{n}=\eta\sinh^2{r}$ is the mean photon number of the reduced thermal state of mode $B$. For mode $A$, which does not undergo a lossy channel, we simply set $\eta=1$ in the above expressions. \begin{thebibliography}{00} \bibitem{Nielsen} Nielsen, M.A. \& Chuang, I.L. {\it Quantum Computation and Quantum Information}, (Cambridge University Press, Cambridge, 2000). \bibitem{Brunner14} Brunner, N. Cavalcanti, D. Pironio, S. Scarani, V. \& Wehner, S. Bell nonlocality, \textit{Rev. Mod. Phys.} {\bf 86}, 419 (2014). \bibitem{Bell64} Bell, J. On the Einstein Podolsky Rosen paradox, \textit{Physics} (Long Island City, N. Y.) {\bf 1}, 195 (1964). \bibitem{Acin07} Ac{\' i}n, A. \textit{et al.} Device-Independent Security of Quantum Cryptography against Collective Attacks, \textit{Phys. Rev. Lett.} {\bf 98}, 230501 (2007). \bibitem{Pironio09} Pironio, S.
\textit{et al.} Device-independent quantum key distribution secure against collective attacks, \textit{New J. Phys.} {\bf 11}, 045021 (2009). \bibitem{Masanes11} Masanes, L. Pironio, S. \& Ac{\'i}n, A. Secure device-independent quantum key distribution with causally independent measurement devices, \textit{Nat. Commun.} {\bf 2}, 238 (2011). \bibitem{Jozsa03} Jozsa, R. \& Linden, N. On the role of entanglement in quantum-computational speed-up, \textit{Proc. R. Soc. London, Ser. A} {\bf 459}, 2011 (2003). \bibitem{Wiseman07} Wiseman, H.M. Jones, S.J. \& Doherty, A.C. Steering, Entanglement, Nonlocality, and the Einstein-Podolsky-Rosen Paradox, \textit{Phys. Rev. Lett.} {\bf 98}, 140402 (2007). \bibitem{Cavalcanti09} Cavalcanti, E.G. Jones, S.J. Wiseman, H.M. \& Reid, M.D. Experimental criteria for steering and the Einstein-Podolsky-Rosen paradox, \textit{Phys. Rev. A} {\bf 80}, 032112 (2009). \bibitem{Branciard12} Branciard, C. Cavalcanti, E.G. Walborn, S.P. Scarani, V. \& Wiseman, H.M. One-sided device-independent quantum key distribution: Security, feasibility, and the connection with steering, \textit{Phys. Rev. A} {\bf 85}, 010301 (2012). \bibitem{Piani15} Piani, M. \& Watrous, J. Necessary and Sufficient Quantum Information Characterization of Einstein-Podolsky-Rosen Steering, \textit{Phys. Rev. Lett.} {\bf 114}, 060404 (2015). \bibitem{Smith12} Smith, D.H. \textit{et al.} Conclusive quantum steering with superconducting transition-edge sensors, \textit{Nat. Commun.} {\bf 3}, 625 (2012). \bibitem{Wittman12} Wittmann, B. \textit{et al.} Loophole-free Einstein–Podolsky–Rosen experiment via quantum steering, \textit{New J. Phys.} {\bf 14}, 053030 (2012). \bibitem{Bennett12} Bennet, A.J. \textit{et al.} Arbitrarily Loss-Tolerant Einstein-Podolsky-Rosen Steering Allowing a Demonstration over 1 km of Optical Fiber with No Detection Loophole, \textit{Phys. Rev. X} {\bf 2}, 031003 (2012). \bibitem{Handchen12} H{\" a}ndchen, V.
\textit{et al.} Observation of one-way Einstein–Podolsky–Rosen steering, \textit{Nature Photonics} {\bf 6}, 596 (2012). \bibitem{one-way} Olsen, M.K. \& Bradley, A.S. Bright bichromatic entanglement and quantum dynamics of sum frequency generation, \textit{Phys. Rev. A} {\bf 77}, 023813 (2008). \bibitem{Midgley10} Midgley, S.L.W. Ferris, A.J. \& Olsen, M.K. Asymmetric Gaussian steering: When Alice and Bob disagree, \textit{Phys. Rev. A} {\bf 81}, 022101 (2010). \bibitem{Olsen13} Olsen, M.K. Asymmetric Gaussian harmonic steering in second-harmonic generation, \textit{Phys. Rev. A} {\bf 88}, 051802(R) (2013). \bibitem{HeR13} He, Q.Y. \& Reid, M.D. Einstein-Podolsky-Rosen paradox and quantum steering in pulsed optomechanics, \textit{Phys. Rev. A} {\bf 88}, 052121 (2013). \bibitem{HeF14} He, Q.Y. \& Ficek, Z. Einstein-Podolsky-Rosen paradox and quantum steering in a three-mode optomechanical system, \textit{ Phys. Rev. A} {\bf 89}, 022332 (2014). \bibitem{Tan15} Tan, H. Zhang, X. \& Li, G. Steady-state one-way Einstein-Podolsky-Rosen steering in optomechanical interfaces, \textit{Phys. Rev. A} {\bf 91}, 032121 (2015). \bibitem{Quintino14} Quintino, M.T. Vertesi, T. \& Brunner, N. Joint Measurability, Einstein-Podolsky-Rosen Steering, and Bell Nonlocality, \textit{Phys. Rev. Lett.} {\bf 113}, 160402 (2014). \bibitem{Uola14} Uola, R. Moroder, T. \& G{\"u}hne, O. Joint Measurability of Generalized Measurements Implies Classicality, \textit{Phys. Rev. Lett.} {\bf 113}, 160403 (2014). \bibitem{Bowles14} Bowles, J. V{\' e}rtesi, T. Quintino, M.T. \& Brunner, N. One-way Einstein-Podolsky-Rosen Steering, \textit{Phys. Rev. Lett.} {\bf 112}, 200402 (2014). \bibitem{Braunstein05} Braunstein, S.L. \& van Loock, P. Quantum information with continuous variables, \textit{Rev. Mod. Phys.} {\bf 77}, 513 (2005). \bibitem{Reid89} Reid, M.D. Demonstration of the Einstein-Podolsky-Rosen paradox using nondegenerate parametric amplification, \textit{Phys. Rev. A} {\bf 40}, 913 (1989). 
\bibitem{Reid09} Reid, M.D. \textit{et al.} The Einstein-Podolsky-Rosen paradox: From concepts to applications, \textit{Rev. Mod. Phys.} {\bf 81}, 1727 (2009). \bibitem{Cerf} {\it Quantum Information with Continuous Variables of Atoms and Light}, (eds Cerf, N.J. Leuchs, G. \& Polzik, E.S.) (Imperial College Press, London, 2007). \bibitem{Weedbrook12} Weedbrook, C. \textit{et al.} Gaussian quantum information, \textit{Rev. Mod. Phys.} {\bf 84}, 621 (2012). \bibitem{Nha04} Nha, H. \& Carmichael, H.J. Proposed Test of Quantum Nonlocality for Continuous Variables, \textit{Phys. Rev. Lett.} {\bf 93}, 020401 (2004). \bibitem{Patron04} Garc{\'i}a-Patr{\'o}n, R. \textit{et al.} Proposal for a Loophole-Free Bell Test Using Homodyne Detection, \textit{Phys. Rev. Lett.} {\bf 93}, 130409 (2004). \bibitem{Lloyd99} Lloyd, S. \& Braunstein, S.L. Quantum Computation over Continuous Variables, \textit{Phys. Rev. Lett.} {\bf 82}, 1784 (1999). \bibitem{Bartlett02} Bartlett, S.D. \& Sanders, B.C. Efficient Classical Simulation of Optical Quantum Information Circuits, \textit{Phys. Rev. Lett.} {\bf 89}, 207903 (2002). \bibitem{Mari14} Mari, A. Giovannetti, V. \& Holevo, A.S. Quantum state majorization at the output of bosonic Gaussian channels, \textit{Nat. Commun.} {\bf 5}, 3826 (2014). \bibitem{Giedke03} Giedke, G. Wolf, M.M. Kr{\"u}ger, O. Werner, R.F. \& Cirac, J.I. Entanglement of Formation for Symmetric Gaussian States, \textit{Phys. Rev. Lett.} {\bf 91}, 107901 (2003). \bibitem{Adesso10} Adesso, G. \& Datta, A. Quantum versus Classical Correlations in Gaussian States, \textit{Phys. Rev. Lett.} {\bf 105}, 030501 (2010). \bibitem{Pirandola14} Pirandola, S. Spedalieri, G. Braunstein, S.L. Cerf, N.J. \& Lloyd, S. Optimality of Gaussian Discord, \textit{Phys. Rev. Lett.} {\bf 113}, 140405 (2014). \bibitem{Ollivier01} Ollivier, H. \& Zurek, W.H. Quantum Discord: A Measure of the Quantumness of Correlations, \textit{Phys. Rev. Lett.} {\bf 88}, 017901 (2001). \bibitem{Duan00} Duan, L.-M.
Giedke, G. Cirac, J.I. \& Zoller, P. Inseparability Criterion for Continuous Variable Systems, \textit{Phys. Rev. Lett.} {\bf 84}, 2722 (2000). \bibitem{Simon00} Simon, R. Peres-Horodecki Separability Criterion for Continuous Variable Systems, \textit{Phys. Rev. Lett.} {\bf 84}, 2726 (2000). \bibitem{Reid13} Reid, M.D. Monogamy inequalities for the Einstein-Podolsky-Rosen paradox and quantum steering, \textit{Phys. Rev. A} {\bf 88}, 062108 (2013). \bibitem{He13} He, Q.Y. \& Reid, M.D. Genuine Multipartite Einstein-Podolsky-Rosen Steering, \textit{Phys. Rev. Lett.} {\bf 111}, 250403 (2013). \bibitem{Adesso15} Kogias, I. Lee, A.R. Ragy, S. \& Adesso, G. Quantification of Gaussian Quantum Steering, \textit{Phys. Rev. Lett.} {\bf 114}, 060403 (2015). \bibitem{Nha15} Ji, S.-W. Kim, M.S. \& Nha, H. Quantum steering of multimode Gaussian states by Gaussian measurements: monogamy relations and the Peres conjecture, \textit{J. Phys. A: Math. Theor.} {\bf 48}, 135301 (2015). \bibitem{Walk14} Walk, N. Wiseman, H.M. \& Ralph, T.C. Continuous variable one-sided device independent quantum key distribution, \textit{arXiv}:1405.6593v2. \bibitem{Kogias15} Kogias, I. \& Adesso, G. \textit{J. Opt. Soc. Am. B} {\bf 32}, A27 (2015). \bibitem{Ji15} Ji, S.-W. Lee, J. Park, J. \& Nha, H. Steering criteria via covariance matrices of local observables in arbitrary-dimensional quantum systems, \textit{Phys. Rev. A} {\bf 92}, 062130 (2015). \bibitem{Simon} Simon, R. Mukunda, N. \& Dutta, B. Quantum-noise matrix for multimode systems: U(n) invariance, squeezing, and normal forms, \textit{Phys. Rev. A} {\bf 49}, 1567 (1994). \bibitem{arxiv} Ji, S.-W. Lee, J. Park, J. \& Nha, H. Quantum steering of Gaussian states via non-Gaussian measurements, \textit{arXiv}:1511.02649. \bibitem{Pryde} Wollmann, S. Walk, N. Bennet, A.J. Wiseman, H.M. \& Pryde, G.J. Observation of genuine one-way Einstein-Podolsky-Rosen steering, \textit{Phys. Rev. Lett.} {\bf 116}, 160403 (2016). \bibitem{Tatham14} R.
Tatham and N. Korolkova, Phys. Rev. A {\bf 89}, 012308 (2014). \bibitem{Horn2} R. A. Horn and C. R. Johnson, {\it Matrix Analysis}, Cambridge University Press ({\it Second Edition}), 2013. \end{thebibliography} \onecolumngrid \appendix* \setcounter{equation}{0} \section{{\Large Supplemental Material}} \subsection{A1. Proofs} {\bf (i) Proof of Lemma}: First, a direct calculation readily gives $ \sum_j \left(A_j^{(n)}\right)^2 = n \openone_n $. On the other hand, the sum of squares of expectation values is given by \begin{equation} \begin{array}{llllll} \sum_j \langle A_j^{(n)} \rangle^2 & & = \sum_{k=0}^{n-1} \langle |k \rangle \langle k | \rangle^2 \\ & + & \frac{1}{2}\sum_{k<l} \left( \langle |k \rangle\langle l | \rangle^2 + \langle |l \rangle\langle k | \rangle^2 + 2| \langle |k \rangle \langle l | \rangle |^2 \right) \\ & - &\frac{1}{2} \sum_{k<l} \left( \langle |k \rangle\langle l | \rangle^2 + \langle |l \rangle\langle k | \rangle^2 - 2| \langle |k \rangle \langle l | \rangle |^2 \right) \\ & &= \sum_{k=0}^{n-1} \langle |k \rangle \langle k | \rangle^2 + 2 \sum_{k<l}| \langle |k \rangle \langle l | \rangle |^2 \\ & &\leq \sum_{k=0}^{n-1} \langle |k \rangle \langle k | \rangle^2 + 2 \sum_{k<l} \langle |k \rangle \langle k | \rangle\langle |l \rangle\langle l | \rangle \\ & & = \langle \sum_{k=0}^{n-1} |k \rangle\langle k | \rangle^2 \leq \langle \openone_n \rangle, \end{array} \end{equation} where we use the Cauchy inequality to obtain the inequality in the fifth line. The positivity of the variance and $ \openone_n^2 =\openone_n$ yield the last inequality, which in turn gives the uncertainty relation in equation (8) of main text. The equality holds when a given state is a pure state within the Fock space spanned by $ \left\{ |0\rangle, ..., | n-1 \rangle \right\}$. {\bf (ii) Proof of Theorem}: As shown in \cite{Cavalcanti09}, if the set of observables satisfies uncertainty relations in a sum or product form as Eq. 
(8) of the main text, the correlations of a bipartite quantum state described by LHS models must satisfy a non-steerability inequality of the form \begin{equation} \label{infer} \sum_{j} \delta_{inf}^{2} \left( A_j^{(n)} \right) \geq \left( n-1 \right) \langle \openone_n^{A} \rangle, \end{equation} where $ \delta_{inf}^{2}\left( A_j^{(n)} \right) $ is the inferred variance \cite{Reid89} defined by \begin{equation} \label{infervar} \delta^2_{inf}\left(A_j^{(n)}\right)=\langle \left[ A_j^{(n)} -A_{j,est}^{(n)}\left( B_j^{(n')} \right) \right]^2 \rangle. \end{equation} Here $A_{j,est}^{(n)}\left(B_j^{(n')}\right)$ is an estimate based on Bob's outcome $ B_j^{(n')}$ \cite{Cavalcanti09, Reid89}. With the choice of a linear estimate $ A_{j,est}^{(n)}\left( B_j^{(n')} \right)= -g_j B_j^{(n')}+ \langle A_j^{(n)} + g_j B_j^{(n')} \rangle$ ($g_j$ is an arbitrary real number) \cite{Cavalcanti09, Reid09} and setting $ g=g_1=g_2=\dots=g_n $, we obtain the non-steerability criterion in equation (9) of the main text. {\bf (iii) Proof of Proposition}: Similar to the analysis in \cite{Ji15}, let us first choose $g$ to make the left-hand side of equation (9) of the main text as small as possible, i.e. \begin{equation} \label{set-g} g=-\frac{\sum_j \langle A_j^{(n)} \otimes B_j^{(n')} \rangle -\langle A_j^{(n)} \rangle\langle B_j^{(n')} \rangle }{\sum_j \delta^2\left(B_j^{(n')} \right)}. \end{equation} Plugging it into inequality (9) of the main text, we obtain the non-steerability condition \begin{equation} \label{steering1} \begin{array}{ll} | \sum_j \langle A_j^{(n)} \otimes B_j^{(n')} \rangle -\langle A_j^{(n)} \rangle \langle B_j^{(n')} \rangle | \\ \leq \sqrt{ \left( \langle \openone_n^{A} \rangle -\sum_j \langle A_j^{(n)} \rangle^2 \right) \left( n' \langle \openone_{n'}^B \rangle - \sum_j \langle B_j^{(n')} \rangle^2 \right)}. \end{array} \end{equation} Note that the left-hand side of Eq. (\ref{steering1}) involves only the diagonal elements of the correlation matrix.
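This observation underlies the next step: orthogonal rotations of the two observable sets act on the correlation matrix from the left and right, and a singular value decomposition makes the diagonal sum as large as possible, namely equal to the trace norm. The following is a small numerical sketch of this linear-algebra fact (the $4\times 4$ matrix is generic random data standing in for a correlation matrix, not that of any particular state):

```python
import numpy as np

rng = np.random.default_rng(0)
C = rng.normal(size=(4, 4))  # generic real matrix standing in for the correlation matrix

# SVD: C = U_A diag(s) V_B^T with U_A, V_B orthogonal and s >= 0.
U_A, s, V_B_T = np.linalg.svd(C)

# Rotating the two observable sets by O_A = U_A^T and O_B = V_B^T diagonalizes C,
# so the diagonal sum of the rotated matrix equals the trace norm sum(s).
C_rot = U_A.T @ C @ V_B_T.T
assert np.allclose(C_rot, np.diag(s))
assert abs(np.trace(C_rot) - s.sum()) < 1e-12

# No other pair of orthogonal rotations gives a larger diagonal sum.
for _ in range(200):
    Q_A, _ = np.linalg.qr(rng.normal(size=(4, 4)))
    Q_B, _ = np.linalg.qr(rng.normal(size=(4, 4)))
    assert np.trace(Q_A @ C @ Q_B.T) <= s.sum() + 1e-12
```

The quantity saturated by the SVD rotations is the trace norm appearing in inequality (10) of the main text.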
We may concentrate the correlation information onto the diagonal terms by taking a singular value decomposition of the correlation matrix $C_{nn'}^{TLOOs}$ (an $n^2 \times n'^2$ real matrix) using suitable orthogonal matrices $ O_n^{A}$ and $O_{n'}^{B}$. Under this transformation, TLOOs remain TLOOs, as already shown, and the left-hand side of Eq. (\ref{steering1}) becomes the trace norm of the new correlation matrix. Using also the invariance of the sum of squares of the expectation values of TLOOs, inequality (\ref{steering1}) corresponds to inequality (10) of the main text. \subsection{{A2. Expectation values of observables in Fock space} } In order to calculate the expectation values of Fock-basis observables for a two-mode Gaussian state, the multivariable Hermite polynomials are very useful \cite{Tatham14}. The multivariable (four variables in our case) Hermite polynomials are given by \begin{equation} \label{hermite} H^{\left\{R,\theta\right\}}_{m_1,m_2,n_1,n_2} \left( y_1, y_2, y_3, y_4 \right)=(-1)^{m_1+m_2+n_1+n_2}\exp\left[ \vec{y}^T R \vec{y} \right] \frac{\partial^{m_1}}{\partial y^{m_1}_1}\frac{\partial^{m_2}}{\partial y^{m_2}_2}\frac{\partial^{n_1}}{\partial y^{n_1}_3}\frac{\partial^{n_2}}{\partial y^{n_2}_4} \exp\left[ -\vec{y}^T R \vec{y}-\theta\vec{y}\right], \end{equation} where $ \vec{y}= \left( y_1, y_2, y_3, y_4 \right)^T$. The matrix $R$ is a $ 4 \times 4$ matrix related to the covariance matrix of the two-mode Gaussian state. With these Hermite polynomials, the matrix elements in the Fock basis are given by \cite{Tatham14} \begin{equation} \label{fockbasis} _A\langle m_1 | _B\langle m_2 | \rho_{AB} | n_1 \rangle_A | n_2 \rangle_B =\frac{4 H^{\left\{R,0 \right\}}_{m_1,m_2,n_1,n_2}\left(0,0,0,0\right)}{\sqrt{m_1 ! m_2 ! n_1 !
n_2 !} \sqrt{ \det{\left(\gamma_{AB}+ \openone \right)}}}, \end{equation} where $R=BU\left[ \left(\gamma_{AB} + \openone \right)^{-1} -\frac{1}{2}\openone \right] U^{\dagger} D$ and \begin{equation} U=\frac{1}{\sqrt{2}} \left( \begin{array}{cccc} 1 & i & 0 & 0 \\ 1 & -i & 0 & 0 \\ 0 & 0 & 1 & i \\ 0 & 0 & 1 & -i\\ \end{array} \right) ,\;\; B=\left( \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 &1 \\ \end{array} \right) ,\;\; D= \left( \begin{array}{cccc} 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 \\ \end{array} \right) . \end{equation} Now let us consider a covariance matrix $\gamma_{AB}$ in a standard form \begin{equation} \label{bscova} \gamma_{AB} = \left( \begin{array}{cccc} a & 0 & c_1 & 0 \\ 0 & a & 0 & - c_2 \\ c_1 & 0 & b & 0 \\ 0 & - c_2 & 0 & b \\ \end{array} \right) . \end{equation} In this case the $R$ matrix in equation (\ref{fockbasis}) is given by \begin{equation} \label{matrixr} R=\frac{1}{2}\left( \begin{array}{cccc} \tilde{a}_1 & \tilde{c}_1 & \tilde{a}_2 & \tilde{c}_2 \\ \tilde{c}_1 & \tilde{b}_1 & \tilde{c}_2 & \tilde{b}_2 \\ \tilde{a}_2 & \tilde{c}_2 & \tilde{a}_1 & \tilde{c}_1 \\ \tilde{c}_2 & \tilde{b}_2 & \tilde{c}_1 & \tilde{b}_1 \\ \end{array} \right), \end{equation} where \begin{equation} \begin{array}{cccccc} \tilde{a}_1=\frac{\left(b+1\right)\left(c_1^2-c_2^2 \right)}{\left[ \left(a+1\right)\left(b+1\right)-c_1^2\right]\left[ \left(a+1\right)\left(b+1\right)-c_2^2 \right]} , \\ \tilde{a}_2=-1+\frac{\left(b+1\right)\left[2\left(a+1\right)\left(b+1\right)-\left(c_1^2+c_2^2\right)\right]}{\left[ \left(a+1\right)\left(b+1\right)-c_1^2\right]\left[ \left(a+1\right)\left(b+1\right)-c_2^2 \right]}, \\ \tilde{b}_1=\frac{\left(a+1\right)\left(c_1^2-c_2^2\right)}{\left[ \left(a+1\right)\left(b+1\right)-c_1^2\right]\left[ \left(a+1\right)\left(b+1\right)-c_2^2 \right]} , \\ \tilde{b}_2=-1+\frac{\left(a+1\right)\left[2\left(a+1\right)\left(b+1\right)-\left(c_1^2+c_2^2\right)\right]}{\left[ 
\left(a+1\right)\left(b+1\right)-c_1^2\right]\left[ \left(a+1\right)\left(b+1\right)-c_2^2 \right]}, \\ \tilde{c}_1=\frac{-\left[\left(a+1\right)\left(b+1\right)-c_1c_2\right]\left(c_1+c_2\right)}{\left[ \left(a+1\right)\left(b+1\right)-c_1^2\right]\left[ \left(a+1\right)\left(b+1\right)-c_2^2 \right]} , \\ \tilde{c}_2=\frac{-\left[\left(a+1\right)\left(b+1\right)+c_1c_2\right]\left(c_1-c_2\right)}{\left[ \left(a+1\right)\left(b+1\right)-c_1^2\right]\left[ \left(a+1\right)\left(b+1\right)-c_2^2 \right]}. \\ \end{array} \end{equation} For the covariance matrix $\gamma_{AB}^{TMSV}$ of a two-mode squeezed vacuum state (TMSV) with squeezing parameter $r$, the matrix $R^{TMSV}$ is given in the simple form \begin{equation} \label{matrxirtmsv} R^{TMSV}=\frac{1}{2}\left( \begin{array}{cccc} 0 & -\frac{\sinh{2r}}{\cosh{2r}+1} & 0 & 0 \\ -\frac{\sinh{2r}}{\cosh{2r}+1} & 0 & 0 & 0 \\ 0 & 0 & 0 & -\frac{\sinh{2r}}{\cosh{2r}+1} \\ 0 & 0 & -\frac{\sinh{2r}}{\cosh{2r}+1} & 0 \\ \end{array} \right). \end{equation} If this TMSV goes through loss in mode $B$ only, the matrix $R^{TMSV}_{LB}$ is given by \begin{equation} \label{matrixrloss} R^{TMSV}_{LB}=\frac{1}{2}\left( \begin{array}{cccc} 0 & -\frac{\sqrt{\eta}\sinh{2r}}{\cosh{2r}+1} & -\frac{\left(1-\eta\right)\left( \cosh{2r}-1\right)}{\cosh{2r}+1} & 0 \\ -\frac{\sqrt{\eta}\sinh{2r}}{\cosh{2r}+1} & 0 & 0 & 0 \\ -\frac{\left(1-\eta\right)\left( \cosh{2r}-1\right)}{\cosh{2r}+1} & 0 & 0 & -\frac{\sqrt{\eta}\sinh{2r}}{\cosh{2r}+1} \\ 0 & 0 & -\frac{\sqrt{\eta}\sinh{2r}}{\cosh{2r}+1} & 0 \\ \end{array} \right), \end{equation} where $\eta$ is the transmittance rate. We can similarly obtain the matrix $R^{TMSV}_{LG}$ for the case of amplification. \\ Therefore, using the matrix elements in equation (\ref{fockbasis}), we can calculate the expectation values of any observables $ A_k^{(n)} \otimes B_l^{(n')}$ to test our criterion in the main text. \subsection{{A3.
Singular value decomposition of correlation matrix}} Here we show that violation of the steering criterion inequality (10) of the main text is equivalent to violation of inequality (9) of the main text. In view of {\bf Proposition}, let us assume that a given two-mode state $\rho_{AB}$ violates the inequality \begin{equation} \label{Asteering-tr} \| C^{TLOOs}_{nn'}\|_{tr} \leq \sqrt{\left(\langle \openone_n^A \rangle -\sum_j \langle A_j^{(n)} \rangle^2 \right) \left( n'\langle \openone_{n'}^B \rangle -\sum_j \langle B_j^{(n')} \rangle^2\right)}. \end{equation} This violation means that there exist sets of TLOOs $\left\{ \tilde{A}_j^{(n)} \right\}$, $\left\{ \tilde{B}_j^{(n')} \right\}$ which satisfy the inequality \begin{equation} \label{singular1} \sum_j \langle \tilde{A}_j^{(n)} \otimes \tilde{B}_j^{(n')} \rangle -\langle \tilde{A}_j^{(n)} \rangle \langle \tilde{B}_j^{(n')} \rangle > \sqrt{\left(\langle \openone_n^A \rangle -\sum_j \langle A_j^{(n)} \rangle^2 \right) \left( n'\langle \openone_{n'}^B \rangle -\sum_j \langle B_j^{(n')} \rangle^2\right)}. \end{equation} The sets $\left\{ \tilde{A}_j^{(n)} \right\}$ and $\left\{ \tilde{B}_j^{(n')} \right\}$ can be chosen by employing the eigenvectors of $ C^{TLOOs}_{nn'} \left(C^{TLOOs}_{nn'}\right)^T$ and $\left(C^{TLOOs}_{nn'}\right)^T C^{TLOOs}_{nn'}$ used for the singular value decomposition \begin{equation} \label{TOS} O_n^A C^{TLOOs}_{nn'} \left(O_{n'}^B\right)^{T} = C^{TLOOs}_{nn',\,singular}. \end{equation} Here $C^{TLOOs}_{nn',\,singular}$ is a diagonal matrix with non-negative elements, and $O^A_n$ and $O^B_{n'}$ are truncated orthogonal matrices whose columns are eigenvectors of $ C^{TLOOs}_{nn'} \left(C^{TLOOs}_{nn'}\right)^T$ and $\left(C^{TLOOs}_{nn'}\right)^T C^{TLOOs}_{nn'}$, respectively \cite{Horn2}. \\ \\ Now, let us consider the steering criterion in equation (9) of {\bf Theorem} and equation (A.4) with the observables in equation (\ref{singular1}).
We can then derive the following \begin{equation} \begin{array}{ll} \sum_k \delta^{2} \left( \tilde{A}_k^{(n)} \otimes \openone + g \openone \otimes \tilde{B}_k^{(n')} \right) & = \sum_{k} \delta^2 \left( \tilde{A}_k^{(n)} \right) + g^2 \sum_k \delta^2 \left( \tilde{B}_k^{(n')} \right) + 2g \sum_k \left( \langle \tilde{A}_k^{(n)} \otimes \tilde{B}_k^{(n')} \rangle -\langle \tilde{A}_k^{(n)} \rangle \langle \tilde{B}_k^{(n')} \rangle \right) \\ & = \frac{-\left( \sum_k \langle \tilde{A}_k^{(n)} \otimes \tilde{B}_k^{(n')} \rangle -\langle \tilde{A}_k^{(n)} \rangle \langle \tilde{B}_k^{(n')} \rangle \right)^2 }{\sum_k \delta^2\left(\tilde{B}_k^{(n')}\right)}+\sum_k \delta^2\left( \tilde{A}_k^{(n)} \right) \\ & < \frac{\left(n'\langle \openone_{n'}^{B} \rangle -\sum_k\langle \tilde{B}_k^{(n')} \rangle^2\right) \left( \sum_k \langle \tilde{A}_k^{(n)} \rangle^2 -\langle \openone_n^{A} \rangle \right)}{n'\langle \openone_{n'}^{B} \rangle -\sum_k\langle \tilde{B}_k^{(n')} \rangle^2} + n\langle \openone_n^{A} \rangle- \sum_k \langle \tilde{A}_k^{(n)} \rangle^2 \\ & = \left(n-1\right) \langle \openone_n^A \rangle, \end{array} \end{equation} where we used the optimal $ g=-\frac{\sum_{k}\langle \tilde{A}_k^{(n)} \otimes \tilde{B}_k^{(n')} \rangle -\langle \tilde{A}_k^{(n)} \rangle\langle \tilde{B}_k^{(n')} \rangle}{\sum_{k} \delta^2 \left(\tilde{B}_k^{(n')}\right)}$ as in equation (A.4), together with $ \sum_{k} \delta^2 \left( \tilde{A}_k^{(n)} \right)=n\langle \openone_n^{A} \rangle -\sum_k \langle \tilde{A}_k^{(n)} \rangle^2$ and $\sum_{k} \delta^2 \left( \tilde{B}_k^{(n')} \right)= n'\langle \openone_{n'}^{B} \rangle -\sum_k \langle \tilde{B}_k^{(n')} \rangle^2$. The inequality in the third line follows from equation (\ref{singular1}), which we assumed. In summary, the violation of equation (\ref{Asteering-tr}) with the TLOOs constructed by the singular value decomposition in equation (\ref{TOS}) is equivalent to the violation of the steering criterion in equation (9) of {\bf Theorem} with the same TLOOs. \end{document}
\begin{document} \title{Hilbert's 3rd Problem and Invariants of 3--manifolds} \shorttitle{Hilbert's 3rd problem and invariants of 3--manifolds} \asciititle{Hilbert's 3rd Problem and Invariants of 3-manifolds} \author{Walter D Neumann} \address{Department of Mathematics, The University of Melbourne\\Parkville, Vic 3052, Australia} \email{[email protected]} \begin{abstract} This paper is an expansion of my lecture for David Epstein's birthday, which traced a logical progression from ideas of Euclid on subdividing polygons to some recent research on invariants of hyperbolic 3--manifolds. This ``logical progression'' makes a good story but distorts history a bit: the ultimate aims of the characters in the story were often far from 3--manifold theory. We start in section 1 with an exposition of the current state of Hilbert's 3rd problem on scissors congruence for dimension 3. In section 2 we explain the relevance to 3--manifold theory and use this to motivate the Bloch group via a refined ``orientation sensitive'' version of scissors congruence. This is not the historical motivation for it, which was to study algebraic $K$--theory of ${\mathbb C}$. Some analogies involved in this ``orientation sensitive'' scissors congruence are not perfect and motivate a further refinement in section \ref{Extended Bloch}. Section \ref{More} ties together various threads and discusses some questions and conjectures. \end{abstract} \asciiabstract{ This paper is an expansion of my lecture for David Epstein's birthday, which traced a logical progression from ideas of Euclid on subdividing polygons to some recent research on invariants of hyperbolic 3-manifolds. This `logical progression' makes a good story but distorts history a bit: the ultimate aims of the characters in the story were often far from 3-manifold theory. We start in section 1 with an exposition of the current state of Hilbert's 3rd problem on scissors congruence for dimension 3.
In section 2 we explain the relevance to 3-manifold theory and use this to motivate the Bloch group via a refined `orientation sensitive' version of scissors congruence. This is not the historical motivation for it, which was to study algebraic K-theory of C. Some analogies involved in this `orientation sensitive' scissors congruence are not perfect and motivate a further refinement in section 4. Section 5 ties together various threads and discusses some questions and conjectures.} \primaryclass{57M99} \secondaryclass{19E99, 19F27} \keywords{Scissors congruence, hyperbolic manifold, Bloch group, dilogarithm, Dehn invariant, Chern--Simons} \asciikeywords{Scissors congruence, hyperbolic manifold, Bloch group, dilogarithm, Dehn invariant, Chern-Simons} \maketitle \section{Hilbert's 3rd Problem} It was known to Euclid that two plane polygons of the same area are related by scissors congruence: one can always cut one of them up into polygonal pieces that can be re-assembled to give the other. In the 19th century the analogous result was proved with euclidean geometry replaced by 2--dimensional hyperbolic geometry or 2--dimensional spherical geometry. The 3rd problem in Hilbert's famous 1900 Congress address \cite{hilbert} posed the analogous question for 3--dimensional euclidean geometry: are two euclidean polytopes of the same volume ``scissors congruent,'' that is, can one be cut into subpolytopes that can be re-assembled to give the other. Hilbert made clear that he expected a negative answer. One reason for the nineteenth century interest in this question was the interest in a sound foundation for the concepts of area and volume. 
By ``equal area'' Euclid \emph{meant} scissors congruent, and the attempt in Euclid's Book XII to provide the same approach for 3--dimensional euclidean volume involved what was called an ``exhaustion argument'' --- essentially a continuity assumption --- that mathematicians of the nineteenth century were uncomfortable with (by Hilbert's time mostly for aesthetic reasons). The negative answer that Hilbert expected to his problem was provided the same year\footnote{In fact, the same answer had been given in 1896 by Bricard, although it was only fully clarified around 1980 that Bricard was answering an equivalent question --- see Sah's review 85f:52014 (AMS Mathematical Reviews) of \cite{dupont2} for a concise exposition of this history.} by Max Dehn \cite{dehn}. Dehn's answer is delightfully simple in modern terms, so we describe it here in full. \begin{definition} Consider the free ${\mathbb Z}$--module generated by the set of congruence classes of 3--dimensional polytopes. The {\em scissors congruence group} $\operatorname{\mathcal P}({\mathbb E}^3)$ is the quotient of this module by the relations of scissors congruence. That is, if polytopes $P_1,\dots,P_n$ can be glued along faces to form a polytope $P$ then we set $$[P]=[P_1]+\dots+[P_n]\quad\text{in }\operatorname{\mathcal P}({\mathbb E}^3).$$ (A {\em polytope} is a compact domain in ${\mathbb E}^3$ that is bounded by finitely many planar polygonal ``faces.'') \end{definition} Volume defines a map $$\operatorname{vol}\co\operatorname{\mathcal P}({\mathbb E}^3)\to {\mathbb R}$$ and Hilbert's problem asks\footnote{ Strictly speaking this is not quite the same question since two polytopes $P_1$ and $P_2$ represent the same element of $\operatorname{\mathcal P}({\mathbb E}^3)$ if and only if they are {\em stably scissors congruent} rather than scissors congruent, that is, there exists a polytope $Q$ such that $P_1+Q$ (disjoint union) is scissors congruent to $P_2+Q$.
But, in fact, stable scissors congruence implies scissors congruence (\cite{zylev, zylev2}, see \cite{sah-book} for an exposition).} about injectivity of this map. Dehn defined a new invariant of scissors congruence, now called the {\em Dehn invariant}, which can be formulated as a map ${\delta}\co\operatorname{\mathcal P}({\mathbb E}^3)\to{\mathbb R}\otimes{\mathbb R}/\pi{\mathbb Q}$, where the tensor product is a tensor product of ${\mathbb Z}$--modules (in this case the same as tensor product as ${\mathbb Q}$--vector spaces). \begin{definition} If $E$ is an edge of a polytope $P$ we will denote by $\ell(E)$ and $\theta(E)$ the length of $E$ and the dihedral angle (in radians) at $E$. For a polytope $P$ we define the {\em Dehn invariant} ${\delta}(P)$ as $${\delta}(P):=\sum_E\ell(E)\otimes\theta(E)\quad\in\quad {\mathbb R}\otimes({\mathbb R}/\pi{\mathbb Q}),\quad\text{sum over all edges $E$ of $P$.}$$ We then extend this linearly to a homomorphism on $\operatorname{\mathcal P}({\mathbb E}^3)$. \end{definition} It is an easy but instructive exercise to verify that \begin{itemize} \item ${\delta}$ is well-defined on $\operatorname{\mathcal P}({\mathbb E}^3)$, that is, it is compatible with scissors congruence; \item ${\delta}$ and $\operatorname{vol}$ are independent on $\operatorname{\mathcal P}({\mathbb E}^3)$ in the sense that their kernels generate $\operatorname{\mathcal P}({\mathbb E}^3)$ (whence $\operatorname{Im}({\delta}|\operatorname{Ker}(\operatorname{vol}))=\operatorname{Im}({\delta})$ and $\operatorname{Im}(\operatorname{vol}|\operatorname{Ker}({\delta}))={\mathbb R}$); \item the image of ${\delta}$ is uncountable. \end{itemize} In particular, $\ker(\operatorname{vol})$ is not just non-trivial, but even uncountable, giving a strong answer to Hilbert's question. To give an explicit example, the regular simplex and cube of equal volume are not scissors congruent: a regular simplex has non-zero Dehn invariant, and the Dehn invariant of a cube is zero.
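To make this last claim concrete, here is the standard computation; it uses only the definition above together with Dehn's classical observation that $\frac1\pi\arccos\frac13$ is irrational. A cube with edge length $a$ has $12$ edges, each with dihedral angle $\pi/2$, so
$${\delta}(\text{cube})=12\,\bigl(a\otimes\tfrac\pi2\bigr)=a\otimes 6\pi=0\quad\text{in } {\mathbb R}\otimes({\mathbb R}/\pi{\mathbb Q}),$$
since $6\pi\in\pi{\mathbb Q}$. A regular simplex with edge length $\ell$ has $6$ edges, each with dihedral angle $\theta=\arccos\frac13$, so
$${\delta}(\text{simplex})=6\,\bigl(\ell\otimes\theta\bigr)=\ell\otimes 6\theta,$$
which is non-zero: $6\theta\ne0$ in ${\mathbb R}/\pi{\mathbb Q}$ because $\theta/\pi$ is irrational, and a simple tensor of non-zero elements in a tensor product of ${\mathbb Q}$--vector spaces is non-zero.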
Of course, this answer to Hilbert's problem is really just a start. It immediately raises other questions: \begin{itemize} \item Are volume and Dehn invariant sufficient to classify polytopes up to scissors congruence? \item What about other dimensions? \item What about other geometries? \end{itemize} The answer to the first question is ``yes.'' Sydler proved in 1965 that $$(\operatorname{vol},{\delta})\co\operatorname{\mathcal P}({\mathbb E}^3)\to {\mathbb R}\oplus({\mathbb R}\otimes{\mathbb R}/\pi{\mathbb Q}) $$ is injective. Later Jessen \cite{jessen1, jessen2} simplified his difficult argument somewhat and proved an analogous result for $\operatorname{\mathcal P}({\mathbb E}^4)$ and the argument has been further simplified in \cite{dupont-sah-acta}. Except for these results and the classical results for dimensions $\le 2$ no complete answers are known. In particular, fundamental questions remain open about $\operatorname{\mathcal P}({\mathbb H}^3)$ and $\operatorname{\mathcal P}({\mathbb S}^3)$. Note that the definition of Dehn invariant applies with no change to $\operatorname{\mathcal P}({\mathbb H}^3)$ and $\operatorname{\mathcal P}({\mathbb S}^3)$. The Dehn invariant should be thought of as an ``elementary'' invariant, since it is defined in terms of 1--dimensional measure.
For this reason (and other reasons that will become clear later) we are particularly interested in the kernel of Dehn invariant, so we will abbreviate it: for ${\mathbb X}={\mathbb E}^3,{\mathbb H}^3,{\mathbb S}^3$ $$ {\delta}ker({\mathbb X}):=\operatorname{Ker}({\delta}\co\operatorname{\mathcal P}({\mathbb X})\to {\mathbb R}\otimes{\mathbb R}/\pi{\mathbb Q}) $$ In terms of this notation Sydler's theorem that volume and Dehn invariant classify scissors congruence for ${\mathbb E}^3$ can be reformulated: $$\operatorname{vol}\co{\delta}ker({\mathbb E}^3)\to{\mathbb R} \quad\text{is injective.}$$ It is believed that volume and Dehn invariant classify scissors congruence also for hyperbolic and spherical geometry: \begin{conjecture}[Dehn Invariant Sufficiency] \label{sufficiency} $\operatorname{vol}\co{\delta}ker({\mathbb H}^3)\to{\mathbb R}$ is injective and $\operatorname{vol}\co{\delta}ker({\mathbb S}^3)\to{\mathbb R}$ is injective. \end{conjecture} On the other hand $\operatorname{vol}\co{\delta}ker({\mathbb E}^3)\to{\mathbb R}$ is also surjective, but this results from the existence of similarity transformations in euclidean space, which do not exist in hyperbolic or spherical geometry. In fact, Dupont \cite{dupont1} proved: \begin{theorem} $\operatorname{vol}\co{\delta}ker({\mathbb H}^3)\to{\mathbb R}$ and $\operatorname{vol}\co{\delta}ker({\mathbb S}^3)\to{\mathbb R}$ have countable image. \end{theorem} Thus the Dehn invariant sufficiency conjecture would imply: \begin{conjecture}[Scissors Congruence Rigidity]\label{rigidity} ${\delta}ker({\mathbb H}^3)$ and ${\delta}ker({\mathbb S}^3)$ are countable. \end{conjecture} The following collects results of B\"okstedt, Brun, Dupont, Parry, Sah and Suslin (\cite{bokstedt-brun-dupont}, \cite{dupont-sah-II}, \cite{sah}, \cite{suslin}). 
\begin{theorem}\label{divisibility} $\operatorname{\mathcal P}({\mathbb H}^3)$ and $\operatorname{\mathcal P}({\mathbb S}^3)$ and their subspaces ${\delta}ker({\mathbb H}^3)$ and ${\delta}ker({\mathbb S}^3)$ are uniquely divisible groups, so they have the structure of\/ ${\mathbb Q}$--vector spaces. As ${\mathbb Q}$--vector spaces they have infinite rank. The rigidity conjecture thus says ${\delta}ker({\mathbb H}^3)$ and ${\delta}ker({\mathbb S}^3)$ are ${\mathbb Q}$--vector spaces of countably infinite rank. \end{theorem} \begin{corollary}\label{image of volume} The subgroups $\operatorname{vol}({\delta}ker({\mathbb H}^3))$ and\/ $\operatorname{vol}({\delta}ker({\mathbb S}^3))$ of ${\mathbb R}$ are ${\mathbb Q}$--vector subspaces of countable dimension. \end{corollary} \subsection{Further comments} Many generalizations of Hilbert's problem have been considered, see eg \cite{sah-book} for an overview. There are generalizations of Dehn invariant to all dimensions and the analog of the Dehn invariant sufficiency conjectures have often been made in greater generality, see eg \cite{sah-book}, \cite{dupont-sah-II}, \cite{goncharov}. The particular Dehn invariant that we are discussing here is a codimension 2 Dehn invariant. Conjecture \ref{sufficiency} appears in various other guises in the literature. For example, as we shall see, the ${\mathbb H}^3$ case is equivalent to a conjecture about rational relations among special values of the dilogarithm function which includes as a very special case a conjecture of Milnor \cite{milnor1} about rational linear relations among values of the dilogarithm at roots of unity. Conventional wisdom is that even this very special case is a very difficult conjecture which is unlikely to be resolved in the foreseeable future. In fact, Dehn invariant sufficiency would imply the ranks of the vector spaces of volumes in Corollary \ref{image of volume} are infinite, but at present these ranks are not even proved to be greater than 1.
Even worse: although it is believed that the volumes in question are always irrational, it is not known if a single one of them is! As we describe later, work of Bloch, Dupont, Parry, Sah, Wagoner, and Suslin connects the Dehn invariant kernels with algebraic $K$--theory of ${\mathbb C}$, and the above conjectures are then equivalent to standard conjectures in algebraic $K$--theory. In particular, the scissors congruence rigidity conjectures for ${\mathbb H}^3$ and ${\mathbb S}^3$ are together equivalent to the rigidity conjecture for $K_3({\mathbb C})$, which can be formulated as saying that $K_3^{ind}({\mathbb C})$ (indecomposable part of Quillen's $K_3$) is countable. This conjecture is probably much easier than the Dehn invariant sufficiency conjecture. The conjecture about rational relations among special values of the dilogarithm has been broadly generalized to polylogarithms of all degrees by Zagier (section 10 of \cite{zagier-conf}). The connections between scissors congruence and algebraic $K$--theory have been generalised to higher dimensions, in part conjecturally, by Goncharov \cite{goncharov}. We will return to some of these issues later. We also refer the reader to the very attractive exposition in \cite{dupont-sah-new} of these connections in dimension 3. I would like to acknowledge the support of the Australian Research Council for this research, as well as the Max--Planck--Institut f\"ur Mathematik in Bonn, where much of this paper was written. \section{Hyperbolic 3--manifolds}\label{Hyperbolic 3-manifolds} Thurston's geometrization conjecture, much of which is proven to be true, asserts that, up to a certain kind of canonical decomposition, 3--manifolds have geometric structures. These geometric structures belong to eight different geometries, but seven of these lead to manifolds that are describable in terms of surface topology and are very easily classified. The eighth geometry is hyperbolic geometry ${\mathbb H}^3$.
Thus if one accepts the geometrization conjecture then the central issue in understanding 3--manifolds is to understand hyperbolic 3--manifolds. Suppose therefore that $M={\mathbb H}^3/\Gamma$ is a hyperbolic 3--manifold. We will always assume $M$ is oriented and for the moment we will also assume $M$ is compact, though we will be able to relax this assumption later. We can subdivide $M$ into small geodesic tetrahedra, and then the sum of these tetrahedra represents a class $\beta_0(M)\in \operatorname{\mathcal P}({\mathbb H}^3)$ which is an invariant of $M$. We call this the {\em scissors congruence class of $M$}. Note that when we apply the Dehn invariant to $\beta_0(M)$ the contributions coming from each edge $E$ of the triangulation sum to $\ell(E)\otimes2\pi$ which is zero in ${\mathbb R}\otimes{\mathbb R}/\pi{\mathbb Q}$. Thus \begin{proposition} The scissors congruence class $\beta_0(M)$ lies in ${\delta}ker({\mathbb H}^3)$.\qed \end{proposition} How useful is this invariant of $M$? We can immediately see that it is non-trivial, since at least it detects volume of $M$: $$\operatorname{vol}(M)=\operatorname{vol}(\beta_0(M)).$$ Now it was suggested by Thurston in \cite{thurston3} that the volume of hyperbolic 3--manifolds should have some close relationship with another geometric invariant, the Chern--Simons invariant $\operatorname{CS}(M)$. A precise analytic relationship was then conjectured in \cite{neumann-zagier} and proved in \cite{yoshida} (a new proof follows from the work discussed here, see \cite{neumann1}). We will not discuss the definition of this invariant here (it is an invariant of compact riemannian manifolds, see \cite{chern-simons, cheeger-simons}, which was extended also to non-compact finite volume hyperbolic 3--manifolds by Meyerhoff \cite{meyerhoff}). It suffices for the present discussion to know that for a finite volume hyperbolic 3--manifold $M$ the Chern--Simons invariant lies in ${\mathbb R}/\pi^2{\mathbb Z}$.
Moreover, the combination $\operatorname{vol}(M)+i \operatorname{CS}(M)\in {\mathbb C}/\pi^2{\mathbb Z}$ turns out to have good analytic properties and is therefore a natural ``complexification'' of volume for hyperbolic manifolds. Given this intimate relationship between volume and Chern--Simons invariant, it becomes natural to ask if $\operatorname{CS}(M)$ is also detected by $\beta_0(M)$. The answer, unfortunately, is an easy ``no.'' The point is that $\operatorname{CS}(M)$ is an orientation sensitive invariant: $\operatorname{CS}(-M)=-\operatorname{CS}(M)$, where $-M$ means $M$ with reversed orientation. But, as Gerling pointed out in a letter to Gauss on 15 April 1844: scissors congruence cannot see orientation because any polytope is scissors congruent to its mirror image\footnote{Gauss, Werke, Vol.\ 10, p.\ 242; the argument for a tetrahedron is to barycentrically subdivide by dropping perpendiculars from the circumcenter to each of the faces; the resulting $24$ tetrahedra occur in $12$ mirror image pairs.}. Thus $\beta_0(-M)=\beta_0(M)$ and there is no hope of $\operatorname{CS}(M)$ being computable from $\beta_0(M)$. This raises the question: \begin{question}\label{question1} Is there some way to repair the orientation insensitivity of scissors congruence and thus capture Chern--Simons invariant? \end{question} The answer to this question is ``yes'' and lies in the so-called ``Bloch group,'' which was invented for entirely different purposes by Bloch (it was put in final form by Wigner and Suslin). To explain this we start with a result of Dupont and Sah \cite{dupont-sah-II} about ideal polytopes --- hyperbolic polytopes whose vertices are at infinity (such polytopes exist in hyperbolic geometry, and still have finite volume). \begin{proposition}\label{ideal suffices} Ideal hyperbolic tetrahedra represent elements in $\operatorname{\mathcal P}({\mathbb H}^3)$ and, moreover, $\operatorname{\mathcal P}({\mathbb H}^3)$ is generated by ideal tetrahedra.
\end{proposition} To help understand this proposition observe that if $ABCD$ is a non-ideal tetrahedron and $E$ is the ideal point at which the extension of edge $AD$ meets infinity then $ABCD$ can be represented as the difference of the two tetrahedra $ABCE$ and $DBCE$, each of which has one ideal vertex. We have thus, in effect, ``pushed'' one vertex off to infinity. In the same way one can push a second and third vertex off to infinity, \dots and the fourth, but this is rather harder. Anyway, we will accept this proposition and discuss its consequence for scissors congruence. The first consequence is a great gain in convenience: a non-ideal tetrahedron needs six real parameters satisfying complicated inequalities to characterise it up to congruence while an ideal tetrahedron can be neatly characterised by a single complex parameter in the upper half plane. We shall denote the standard compactification of ${\mathbb H}^3$ by $\overline {\mathbb H}^3 = {\mathbb H}^3\cup{\mathbb C}P^1$. An ideal simplex $\Delta$ with vertices $z_1,z_2,z_3,z_4\in{\mathbb C}P^1={\mathbb C}\cup\{\infty\}$ is determined up to congruence by the cross-ratio $$z=[z_1\hbox{$\,:\,$} z_2\hbox{$\,:\,$} z_3\hbox{$\,:\,$} z_4]=\frac{(z_3-z_2)(z_4-z_1)}{(z_3-z_1)(z_4-z_2)}.$$ Permuting the vertices by an even (ie orientation preserving) permutation replaces $z$ by one of $$ z,\quad z'=\frac 1{1-z}, \quad\text{or}\quad z''=1-\frac 1z. $$ The parameter $z$ lies in the upper half plane of ${\mathbb C}$ if the orientation induced by the given ordering of the vertices agrees with the orientation of ${\mathbb H}^3$. There is another way of describing the cross-ratio parameter $z= [z_1\hbox{$\,:\,$} z_2\hbox{$\,:\,$} z_3\hbox{$\,:\,$} z_4]$ of a simplex. The group of orientation preserving isometries of ${\mathbb H}^3$ fixing the points $z_1$ and $z_2$ is isomorphic to the multiplicative group ${\mathbb C}^*$ of nonzero complex numbers.
The element of this ${\mathbb C}^*$ that takes $z_4$ to $z_3$ is $z$. Thus the cross-ratio parameter $z$ is associated with the edge $z_1z_2$ of the simplex. The parameters associated in this way with the other two edges $z_1z_4$ and $z_1z_3$ out of $z_1$ are $z'$ and $z''$ respectively, while the edges $z_3z_4$, $z_2z_3$, and $z_2z_4$ have the same parameters $z$, $z'$, and $z''$ as their opposite edges. See figure~1. This description makes clear that the dihedral angles at the edges of the simplex are $\arg(z)$, $\arg(z')$, $\arg(z'')$ respectively, with opposite edges having the same angle. Now suppose we have five points $z_0,z_1,z_2,z_3,z_4\in{\mathbb C}P^1={\mathbb C}\cup\{\infty\}$. Any four-tuple of these five points spans an ideal simplex, and the convex hull of these five points decomposes in two ways into such simplices, once into two of them and once into three of them. We thus get a scissors congruence relation equating the two simplices with the three simplices. It is often called the ``five-term relation.'' To express it in terms of the cross-ratio parameters it is convenient first to make an orientation convention. We allow simplices whose vertex ordering does not agree with the orientation of ${\mathbb H}^3$ (so the cross-ratio parameter is in the lower complex half-plane) but make the convention that this represents the negative element in scissors congruence. An odd permutation of the vertices of a simplex replaces the cross-ratio parameter $z$ by $$ \frac 1z, \quad\frac z{z-1},\quad\text{or}\quad 1-z, $$ so if we denote by $[z]$ the element in $\operatorname{\mathcal P}({\mathbb H}^3)$ represented by an ideal simplex with parameter $z$, then our orientation rules say: \begin{equation} [z]=[1-\frac 1z]=[\frac 1{1-z}]=-[\frac1z]=-[\frac{z-1}z]=-[1-z].
\label{invsim} \end{equation} These orientation rules make the five-term scissors congruence relation described above particularly easy to state: $$\sum_{i=0}^4(-1)^i[z_0\hbox{$\,:\,$} \dots\hbox{$\,:\,$} \hat{z_i}\hbox{$\,:\,$} \dots\hbox{$\,:\,$} z_4]=0.$$ The cross-ratio parameters occurring in this formula can be expressed in terms of the first two as \begin{gather*} {}[z_1\hbox{$\,:\,$} z_2\hbox{$\,:\,$} z_3\hbox{$\,:\,$} z_4] =:x\qquad[z_0\hbox{$\,:\,$} z_2\hbox{$\,:\,$} z_3\hbox{$\,:\,$} z_4]=:y\\ [z_0\hbox{$\,:\,$} z_1\hbox{$\,:\,$} z_3\hbox{$\,:\,$} z_4]=\frac yx\quad [z_0\hbox{$\,:\,$} z_1\hbox{$\,:\,$} z_2\hbox{$\,:\,$} z_4]=\frac {1-x^{-1}}{1-y^{-1}}\quad [z_0\hbox{$\,:\,$} z_1\hbox{$\,:\,$} z_2\hbox{$\,:\,$} z_3]=\frac {1-x}{1-y} \end{gather*} so the five-term relation can also be written: \begin{equation} [x]-[y]+[\frac yx]-[\frac{1-x^{-1}}{1-y^{-1}}]+[\frac{1-x}{1-y}]=0. \label{5term} \end{equation} We lose nothing if we also allow degenerate ideal simplices whose vertices lie in one plane so the parameter $z$ is real (we always require that the vertices are distinct, so the parameter is in ${\mathbb R}-\{0,1\}$), since the five-term relation can be used to express such a ``flat'' simplex in terms of non-flat ones, and one readily checks that no additional relations result. Thus we may take the parameter $z$ of an ideal simplex to lie in ${\mathbb C}-\{0,1\}$ and every such $z$ corresponds to an ideal simplex. One can show that relations (\ref{invsim}) follow from the five-term relation (\ref{5term}), so we consider the quotient $${\mathcal P}({\mathbb C}):={\mathbb Z}\langle{\mathbb C}-\{0,1\}\rangle/(\text{five-term relations (\ref{5term})})$$ of the free ${\mathbb Z}$--module on ${\mathbb C}-\{0,1\}$. Proposition \ref{ideal suffices} can be restated as saying that there is a natural surjection ${\mathcal P}({\mathbb C})\to\operatorname{\mathcal P}H$. In fact Dupont and Sah (loc.\ cit.)
prove: \begin{theorem}\label{ideal really suffices} The scissors congruence group $\operatorname{\mathcal P}H$ is the quotient of ${\mathcal P}({\mathbb C})$ by the relations $[z]=-[\overline z]$ which identify each ideal simplex with its mirror image\footnote{The minus sign in this relation comes from the orientation convention described earlier.}. \end{theorem} Thus ${\mathcal P}({\mathbb C})$ is a candidate for the orientation sensitive scissors congruence group that we were seeking. Indeed, it turns out to do (almost) exactly what we want. The analog of the Dehn invariant has a particularly elegant expression in these terms. First note that the above theorem expresses $\operatorname{\mathcal P}H$ as the ``imaginary part'' ${\mathcal P}({\mathbb C})^-$ (negative co-eigenspace under conjugation\footnote{${\mathcal P}({\mathbb C})$ turns out to be a ${\mathbb Q}$--vector space and is therefore the sum of its $\pm1$ eigenspaces, so ``co-eigenspace'' is the same as ``eigenspace.''}) of ${\mathcal P}({\mathbb C})$. \begin{proposition}\label{complex dehn} The Dehn invariant ${\delta}\co\operatorname{\mathcal P}H\to{\mathbb R}\otimes{\mathbb R}/\pi{\mathbb Q}$ is twice the ``imaginary part'' of the map $${\delta}_{\mathbb C}\co{\mathcal P}({\mathbb C})\to{\mathbb C}^*\wedge{\mathbb C}^*,\quad [z]\mapsto (1-z)\wedge z$$ so we shall call this map the ``complex Dehn invariant.'' We denote the kernel of the complex Dehn invariant by $${\mathcal B}({\mathbb C}):=\operatorname{Ker}({\delta}_{\mathbb C}),$$ and call it the ``Bloch group of ${\mathbb C}$.'' \end{proposition} (We shall explain this proposition further in an appendix to this section.) A hyperbolic 3--manifold $M$ now has an ``orientation sensitive scissors congruence class'' which lies in this Bloch group and captures both volume and Chern--Simons invariant of $M$.
Namely, there is a map $$\rho\co {\mathcal B}({\mathbb C})\to{\mathbb C}/\pi^2{\mathbb Q}$$ introduced by Bloch and Wigner called the {\em Bloch regulator map}, whose imaginary part is the volume map on ${\mathcal B}({\mathbb C})$, and one has: \begin{theorem}[\cite{neumann-yang3}, \cite{dupont1}] Let $M$ be a complete oriented hyperbolic 3--manifold of finite volume. Then there is a natural class $\beta(M)\in{\mathcal B}({\mathbb C})$ associated with $M$ and $\rho(\beta(M))=\frac 1i (\operatorname{vol}(M)+i\operatorname{CS}(M))$. \end{theorem} This theorem answers Question \ref{question1}. But there are still two aesthetic problems: \begin{itemize} \item The Bloch regulator $\rho$ plays the r\^ole for orientation sensitive scissors congruence that volume plays for ordinary scissors congruence. But $\operatorname{vol}$ is defined on the whole scissors congruence group $\operatorname{\mathcal P}H$ while $\rho$ is only defined on the kernel ${\mathcal B}({\mathbb C})$ of the complex Dehn invariant. \item The Chern--Simons invariant $\operatorname{CS}(M)$ is an invariant in ${\mathbb R}/\pi^2{\mathbb Z}$ but the invariant $\rho(\beta(M))$ only computes it in ${\mathbb R}/\pi^2{\mathbb Q}$. \end{itemize} We resolve both these problems in section \ref{Extended Bloch}. We describe the Bloch regulator map $\rho$ later. It would be a little messy to describe at present, although its imaginary part (volume) has a very nice description in terms of ideal simplices. Indeed, the volume of an ideal simplex with parameter $z$ is $D_2(z)$, where $D_2$ is the so-called ``Bloch--Wigner dilogarithm function'' given by: $$D_2(z) = \operatorname{Im} \ln_2(z) + \log |z|\arg(1-z),\quad z\in {\mathbb C} -\{0,1\}$$ and $\ln_2(z)$ is the classical dilogarithm function. It follows that $D_2(z)$ satisfies a functional equation corresponding to the five-term relation (see below). \subsection{Further comments} To worry about the second ``aesthetic problem'' above could be considered rather greedy.
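Before continuing, the claims just made about $D_2$ can be tested numerically. The sketch below (our own illustration; the quadrature routine and the sample points are arbitrary choices) evaluates $\ln_2(z)=-\int_0^1\log(1-zt)\,dt/t$ by Simpson's rule, checks the displayed expressions of the five cross-ratio parameters in terms of $x$ and $y$, and then checks the five-term functional equation for $D_2$ together with two of the symmetries coming from (\ref{invsim}).

```python
import cmath, math

def li2(z, n=4000):
    # classical dilogarithm Li2(z) = -int_0^1 log(1 - z t)/t dt, composite
    # Simpson rule; valid for z off the cut [1, infinity)
    def f(t):
        return z if t == 0.0 else -cmath.log(1 - z * t) / t
    h = 1.0 / n
    s = f(0.0) + f(1.0)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(k * h)
    return s * h / 3

def D2(z):
    # Bloch-Wigner dilogarithm: the volume of the ideal simplex with parameter z
    return li2(z).imag + math.log(abs(z)) * cmath.phase(1 - z)

def cr(a, b, c, d):
    # cross-ratio [a : b : c : d]
    return ((c - b) * (d - a)) / ((c - a) * (d - b))

# generic five ideal vertices: the five cross-ratio parameters in terms of x, y
z0, z1, z2, z3, z4 = 0.0 + 0.0j, 1.0 + 0.0j, 1.3 + 0.7j, -0.4 + 1.1j, 2.2 + 0.6j
x, y = cr(z1, z2, z3, z4), cr(z0, z2, z3, z4)
assert abs(cr(z0, z1, z3, z4) - y / x) < 1e-9
assert abs(cr(z0, z1, z2, z4) - (1 - 1/x) / (1 - 1/y)) < 1e-9
assert abs(cr(z0, z1, z2, z3) - (1 - x) / (1 - y)) < 1e-9

# five-term functional equation for D2, and two symmetries from the
# orientation rules: D2(z) = D2(1/(1-z)) and D2(z) = -D2(1/z)
a, b = 0.15 + 0.2j, 0.2 + 0.5j
args = [a, b, b / a, (1 - 1/a) / (1 - 1/b), (1 - a) / (1 - b)]
five_term = sum((-1)**i * D2(w) for i, w in enumerate(args))
assert abs(five_term) < 1e-8
assert abs(D2(a) - D2(1 / (1 - a))) < 1e-8
assert abs(D2(a) + D2(1 / a)) < 1e-8
```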
After all, $\operatorname{CS}(M)$ takes values in ${\mathbb R}/\pi^2{\mathbb Z}$ which is the direct sum of $\pi^2{\mathbb Q}/\pi^2{\mathbb Z}$ and uncountably many copies of ${\mathbb Q}$, and we have only lost part of the former summand. However, it is not even known if the Chern--Simons invariant takes \emph{any} non-zero values\footnote{According to J Dupont, Jim Simons deserted mathematics in part because he could not resolve this issue!} in ${\mathbb R}/\pi^2{\mathbb Q}$. As we shall see, this would be implied by the sufficiency of the Dehn invariant for ${\mathbb S}^3$ (Conjecture \ref{sufficiency}). The analogous conjecture in our current situation is: \begin{conjecture}[Complex Dehn Invariant Sufficiency] \label{csufficiency} $\rho\co{\mathcal B}({\mathbb C})\to{\mathbb C}/\pi^2{\mathbb Q}$ is injective. \end{conjecture} Again, the following is known by work of Bloch: \begin{theorem} $\rho\co{\mathcal B}({\mathbb C})\to{\mathbb C}/\pi^2{\mathbb Q}$ has countable image. \end{theorem} Thus the complex Dehn invariant sufficiency conjecture would imply: \begin{conjecture}[Bloch Rigidity]\label{bloch rigidity} ${\mathcal B}({\mathbb C})$ is countable. \end{conjecture} \begin{theorem}[\cite{suslin, suslin1}]\label{cdivisibility} ${\mathcal P}({\mathbb C})$ and its subgroup ${\mathcal B}({\mathbb C})$ are uniquely divisible groups, so they have the structure of ${\mathbb Q}$--vector spaces. As ${\mathbb Q}$--vector spaces they have infinite rank. \end{theorem} Note that the Bloch group ${\mathcal B}({\mathbb C})$ is defined purely algebraically in terms of ${\mathbb C}$, so we can define a Bloch group ${\mathcal B}(k)$ analogously\footnote{Definitions of ${\mathcal B}(k)$ in the literature vary in ways that can mildly affect its torsion if $k$ is not algebraically closed.} for any field $k$. This group ${\mathcal B}(k)$ is uniquely divisible whenever $k$ contains an algebraically closed field.
It is not hard to see that the rigidity conjecture \ref{bloch rigidity} is equivalent to the conjecture that ${\mathcal B}(\overline{\mathbb Q})\to{\mathcal B}({\mathbb C})$ is an isomorphism (here $\overline{\mathbb Q}$ is the field of algebraic numbers; it is known that ${\mathcal B}(\overline{\mathbb Q})\to{\mathcal B}({\mathbb C})$ is injective). Suslin has conjectured more generally that ${\mathcal B}(k)\to{\mathcal B}(K)$ is an isomorphism if $k$ is the algebraic closure of the prime field in $K$. Conjecture \ref{csufficiency} has been made in greater generality by Ramakrishnan \cite{ramakrishnan} in the context of algebraic $K$--theory. Conjectures \ref{csufficiency} and \ref{bloch rigidity} are in fact equivalent to the Dehn invariant sufficiency and rigidity conjectures \ref{sufficiency} and \ref{rigidity} respectively for ${\mathbb H}^3$ and ${\mathbb S}^3$ together. This is because of the following theorem which connects the various Dehn kernels. It also describes the connections with algebraic $K$--theory and homology of the Lie group $\operatorname{SL}(2,{\mathbb C})$ considered as a discrete group. It collates results of Bloch, B\"okstedt, Brun, Dupont, Parry, Sah and Wigner (see \cite{bokstedt-brun-dupont} and \cite{dupont-parry-sah}).
\begin{theorem} There is a natural exact sequence $$0\to{\mathbb Q}/{\mathbb Z}\to H_3(\operatorname{SL}(2,{\mathbb C})) \to {\mathcal B}({\mathbb C})\to 0.$$ Moreover there are natural isomorphisms: $$\begin{aligned} H_3(\operatorname{SL}(2,{\mathbb C}))&\cong K_3^{ind}({\mathbb C}),\\ H_3(\operatorname{SL}(2,{\mathbb C}))^-&\cong {\mathcal B}({\mathbb C})^-\cong {\delta}ker({\mathbb H}^3),\\ H_3(\operatorname{SL}(2,{\mathbb C}))^+&\cong {\delta}ker({\mathbb S}^3)/{\mathbb Z},\qquad {\mathcal B}({\mathbb C})^+\cong{\delta}ker({\mathbb S}^3)/{\mathbb Q}, \end{aligned}$$ where ${\mathbb Z}\subset{\delta}ker({\mathbb S}^3)$ is generated by the class of the 3--sphere and ${\mathbb Q}\subset{\delta}ker({\mathbb S}^3)$ is the subgroup generated by suspensions of triangles in ${\mathbb S}^2$ with rational angles. The Cheeger--Simons map $c_2\co H_3(\operatorname{SL}(2,{\mathbb C}))\to {\mathbb C}/4\pi^2{\mathbb Z}$ of \cite{cheeger-simons} induces on the one hand the Bloch regulator map $\rho\co{\mathcal B}({\mathbb C})\to{\mathbb C}/\pi^2{\mathbb Q}$ and on the other hand its real and imaginary parts correspond to the volume maps on ${\delta}ker({\mathbb S}^3)/{\mathbb Z}$ and ${\delta}ker({\mathbb H}^3)$ via the above isomorphisms. \end{theorem} The isomorphisms of the theorem are proved via isomorphisms $H_3(\operatorname{SL}(2,{\mathbb C}))^-\cong H_3(\operatorname{SL}(2,{\mathbb R}))$ and $H_3(\operatorname{SL}(2,{\mathbb C}))^+ \cong H_3(\operatorname{SU}(2))$. We have described the geometry of the isomorphism ${\mathcal B}({\mathbb C})^-\cong{\delta}ker({\mathbb H}^3)$ in Theorem \ref{ideal really suffices}. The geometry of the isomorphism ${\mathcal B}({\mathbb C})^+\cong{\delta}ker({\mathbb S}^3)/{\mathbb Q}$ remains rather mysterious. The exact sequence and first isomorphism in the above theorem are valid for any algebraically closed field of characteristic $0$.
Thus Conjecture \ref{bloch rigidity} is also equivalent to each of the four: \begin{itemize} \item Is $K_3^{ind}(\overline{\mathbb Q})\to K_3^{ind}({\mathbb C})$ an isomorphism? Is $K_3^{ind}({\mathbb C})$ countable? \item Is $H_3(\operatorname{SL}(2,\overline{\mathbb Q}))\to H_3(\operatorname{SL}(2,{\mathbb C}))$ an isomorphism? Is $ H_3(\operatorname{SL}(2,{\mathbb C}))$ countable? \end{itemize} The fact that the volume of an ideal simplex is given by the Bloch--Wigner dilogarithm function $D_2(z)$ clarifies why the ${\mathbb H}^3$ Dehn invariant sufficiency conjecture \ref{sufficiency} is equivalent to a statement about rational relations among special values of the dilogarithm function. Don Zagier's conjecture about such rational relations, mentioned earlier, is that any rational linear relation among values of $D_2$ at algebraic arguments must be a consequence of the relations $D_2(z)=-D_2(\overline z)$ and the five-term functional relation for $D_2$: $$ D_2(x)-D_2(y) +D_2(\frac yx)-D_2(\frac{1-x^{-1}}{1-y^{-1}})+D_2(\frac{1-x}{1-y})=0. $$ Differently expressed, he conjectures that the volume map is injective on ${\mathcal P}(\overline {\mathbb Q})^-$. If one assumes the scissors congruence rigidity conjecture for ${\mathbb H}^3$ (that ${\mathcal B}(\overline{\mathbb Q})^-\cong{\mathcal B}({\mathbb C})^-$) then the Dehn invariant sufficiency conjecture for ${\mathbb H}^3$ is just that $D_2$ is injective on the subgroup ${\mathcal B}(\overline{\mathbb Q})^-\subset{\mathcal P}(\overline{\mathbb Q})^-$, so under this assumption Zagier's conjecture is much stronger. Milnor's conjecture, mentioned earlier, can be formulated as saying that the values of $D_2(\xi)$, as $\xi$ runs through the primitive $n$-th roots of unity in the upper half plane, are rationally independent for any $n$. This is equivalent to injectivity modulo torsion of the volume map $D_2$ on ${\mathcal B}(k_n)$ for the cyclotomic field $k_n={\mathbb Q}(e^{2\pi i/n})$.
For this field ${\mathcal B}(k_n)^-={\mathcal B}(k_n)$ modulo torsion. This is of finite rank but ${\mathcal P}(k_n)^-$ is of infinite rank, so even when restricted to $k_n$ Zagier's conjecture is much stronger than Milnor's. Zagier himself has expressed doubt that Milnor's conjecture can be resolved in the foreseeable future. Conjecture \ref{csufficiency} can be similarly formulated as a statement about special values of a different dilogarithm function, the ``Rogers dilogarithm,'' which we will define later. \subsection{Appendix to section \ref{Hyperbolic 3-manifolds}: Dehn invariant of ideal polytopes} To define the Dehn invariant of an ideal polytope we first cut off each ideal vertex by a horoball based at that vertex. We then have a polytope with some horospherical faces but with all edges finite. We now compute the Dehn invariant using the geodesic edges of this truncated polytope (that is, only the edges that come from the original polytope and not those that bound horospherical faces). This is well defined in that it does not depend on the sizes of the horoballs we used to truncate our polytope. (To see this, note that the dihedral angles of the edges incident on an ideal vertex sum to a multiple of $\pi$, since they are the angles of the horospherical face created by truncation, which is a euclidean polygon. Changing the size of the horoball used to truncate these edges thus changes the Dehn invariant by a multiple of something of the form $l\otimes \pi$, which is zero in ${\mathbb R}\otimes {\mathbb R}/\pi{\mathbb Q}$.) Now consider the ideal tetrahedron $\Delta(z)$ with parameter $z$. We may position its vertices at $0,1,\infty,z$.
There is a Klein 4--group of symmetries of this tetrahedron and it is easily verified that it takes the following horoballs to each other: \begin{itemize} \item At $\infty$ the horoball $\{(w,t)\in{\mathbb C}\times{\mathbb R}^+|t\ge a\}$; \item at $0$ the horoball of euclidean diameter $|z|/a$; \item at $1$ the horoball of euclidean diameter $|1-z|/a$; \item at $z$ the horoball of euclidean diameter $|z(z-1)|/a$. \end{itemize} After truncation, the vertical edges thus have lengths $2\log a-\log|z|$, $2\log a-\log|1-z|$, and $2\log a-\log|z(z-1)|$ respectively, and we have earlier said that their angles are $\arg(z), \arg(1/(1-z)), \arg((z-1)/z)$ respectively. Thus, adding contributions, we find that these three edges contribute $\log|1-z|\otimes\arg(z) - \log |z|\otimes \arg(1-z)$ to the Dehn invariant. By symmetry the other three edges contribute the same, so the Dehn invariant is: $${\delta}(\Delta(z))=2\bigl(\log|1-z|\otimes\arg(z) - \log|z|\otimes \arg(1-z)\bigr)\in {\mathbb R}\otimes{\mathbb R}/\pi{\mathbb Q}.$$ \begin{proof}[Proof of Proposition \ref{complex dehn}] To understand the ``imaginary part'' of $(1-z)\wedge z \in {\mathbb C}^*\wedge{\mathbb C}^*$ we use the isomorphism $${\mathbb C}^*\to{\mathbb R}\oplus{\mathbb R}/2\pi{\mathbb Z},\quad z\mapsto \log|z|\oplus\arg z,$$ to represent $$\begin{aligned} {\mathbb C}^*\wedge{\mathbb C}^*&=({\mathbb R}\oplus{\mathbb R}/2\pi{\mathbb Z})\wedge({\mathbb R}\oplus{\mathbb R}/2\pi{\mathbb Z})\\ &=({\mathbb R}\wedge{\mathbb R})\oplus({\mathbb R}/2\pi{\mathbb Z} \wedge{\mathbb R}/2\pi{\mathbb Z})\quad\oplus\quad({\mathbb R}\otimes{\mathbb R}/2\pi{\mathbb Z})\\ &=({\mathbb R}\wedge{\mathbb R})\oplus({\mathbb R}/\pi{\mathbb Q}\wedge{\mathbb R}/\pi{\mathbb Q}) \quad\oplus\quad({\mathbb R}\otimes{\mathbb R}/\pi{\mathbb Q}), \end{aligned} $$ (the equality on the third line is because tensoring over ${\mathbb Z}$ with a divisible group is effectively the same as tensoring over ${\mathbb Q}$).
Under this isomorphism we have $$\begin{aligned} (1-z)\wedge z = \bigl(\log|1-z|&\wedge\log|z|\oplus\arg(1-z)\wedge\arg z\bigr)\\&\oplus\quad\bigl(\log|1-z|\otimes\arg z -\log |z|\otimes\arg(1-z)\bigr),\end{aligned}$$ confirming Proposition \ref{complex dehn}. \end{proof} \section{Computing $\beta(M)$} The scissors congruence invariant $\beta(M)$ turns out to be a very computable invariant. To explain this we must first describe the ``invariant trace field'' or ``field of definition'' of a hyperbolic 3--manifold. Suppose therefore that $M={\mathbb H}^3/\Gamma$ is a hyperbolic manifold, so $\Gamma$ is a discrete subgroup of the orientation preserving isometry group $\operatorname{PSL}(2,{\mathbb C})$ of ${\mathbb H}^3$. \begin{definition}\cite{reid}\qua The {\em invariant trace field} of $M$ is the subfield of ${\mathbb C}$ generated over ${\mathbb Q}$ by the squares of traces of elements of $\Gamma$. We will denote it $k(M)$ or $k(\Gamma)$. \end{definition} This field $k(M)$ is an algebraic number field (finite extension of ${\mathbb Q}$) and is a commensurability invariant, that is, it is unchanged on passing to finite covers of $M$ (finite index subgroups of $\Gamma$). Moreover, if $M$ is an arithmetic hyperbolic 3--manifold (that is, $\Gamma$ is an arithmetic group), then $k(M)$ is the field of definition of this arithmetic group in the usual sense. See \cite{reid, neumann-reid1}. Now if $k$ is an algebraic number field then ${\mathcal B}(k)$ is isomorphic to ${\mathbb Z}^{r_2}\oplus\text{(torsion)}$, where $r_2$ is the number of conjugate pairs of complex embeddings $k\to{\mathbb C}$ of $k$.
Indeed, if these complex embeddings are $\sigma_1,\dots,\sigma_{r_2}$ then a reinterpretation of a theorem of Borel \cite{borel} about $K_3({\mathbb C})$ says: \begin{theorem}\label{borel regulator} The ``Borel regulator map'' $${\mathcal B}(k)\to {\mathbb R}^{r_2}$$ induced on generators of ${\mathcal P}(k)$ by $[z]\mapsto(\operatorname{vol}[\sigma_1(z)],\dots,\operatorname{vol}[\sigma_{r_2}(z)])$ maps ${\mathcal B}(k)/\text{(torsion)}$ isomorphically onto a full lattice in ${\mathbb R}^{r_2}$. \end{theorem} A corollary of this theorem is that an embedding $\sigma\co k\to{\mathbb C}$ induces an embedding ${\mathcal B}(k)\otimes{\mathbb Q}\to{\mathcal B}({\mathbb C})\otimes{\mathbb Q}$. (This is because the theorem implies that an element of ${\mathcal B}(k)$ is determined modulo torsion by the set of volumes of its Galois conjugates, which are invariants defined on ${\mathcal B}({\mathbb C})$.) Moreover, since ${\mathcal B}({\mathbb C})$ is a ${\mathbb Q}$--vector space, ${\mathcal B}({\mathbb C})\otimes{\mathbb Q}={\mathcal B}({\mathbb C})$. Now if $M$ is a hyperbolic manifold then its invariant trace field $k(M)$ comes embedded in ${\mathbb C}$ so we get an explicit embedding ${\mathcal B}(k(M))\otimes{\mathbb Q}\to{\mathcal B}({\mathbb C})$ whose image, which is isomorphic to ${\mathbb Q}^{r_2}$, we denote by ${\mathcal B}(k(M))_{\mathbb Q}$. \begin{theorem}[\cite{neumann-yang2, neumann-yang3}] The element $\beta(M)$ lies in the subspace ${\mathcal B}(k(M))_{\mathbb Q}\subset{\mathcal B}({\mathbb C})$. \end{theorem} In fact Neumann and Yang show that $\beta(M)$ is well defined in ${\mathcal B}(K)$ for some explicit multi-quadratic field extension $K$ of $k(M)$, which implies that $2^c\beta(M)$ is actually well defined in ${\mathcal B}(k(M))$ for some $c$. Moreover, one can always take $c=0$ if $M$ is non-compact, but we do not know if one can for compact $M$.
In view of this theorem we see that the following data effectively determines $\beta(M)$ modulo torsion: \begin{itemize} \item The invariant trace field $k(M)$. \item The image of $\beta(M)$ in ${\mathbb R}^{r_2}$ under the Borel regulator map of Theorem \ref{borel regulator}. \end{itemize} To compute $\beta(M)$ we need a collection of ideal simplices that triangulates $M$ in some fashion. If $M$ is compact, this clearly cannot be a triangulation in the usual sense. In \cite{neumann-yang3} it is shown that one can use any ``degree one ideal triangulation'' to compute $\beta(M)$. This means a finite complex formed of ideal hyperbolic simplices plus a map of it to $M$ that takes each ideal simplex locally isometrically to $M$ and is degree one almost everywhere. These always exist (see \cite{neumann-yang3} for a discussion). Special degree one ideal triangulations have been used extensively in practice, eg in Jeff Weeks' program Snappea \cite{weeks} for computing with hyperbolic 3--manifolds. Oliver Goodman has written a program Snap \cite{goodman} (building on Snappea) which finds degree one ideal triangulations using exact arithmetic in number fields and computes the invariant trace field and high precision values for the Borel regulator on $\beta(M)$. Such calculations can provide numerical evidence for the complex Dehn invariant sufficiency conjecture. Here is a typical result of such calculations. \subsection{Examples} To ensure that the Bloch group has rank $>1$ we want a field with at least two complex embeddings. One of the simplest is the (unique) quartic field over ${\mathbb Q}$ of discriminant $257$. This is the field $k={\mathbb Q}(x)/(f(x))$ with $f(x)=x^4+x^2-x+1$. This polynomial is irreducible with roots $\tau_1^{\pm}=0.54742\ldots\pm 0.58565\ldots i$ and $\tau_2^{\pm}=-0.54742\ldots\pm 1.12087\ldots i$. 
The field $k$ thus has two complex embeddings $\sigma_1,\sigma_2$ up to complex conjugation, one with image $\sigma_1(k)={\mathbb Q}(\tau_1^{-})$ and one with image $\sigma_2(k)={\mathbb Q}(\tau_2^-)$. The Bloch group ${\mathcal B}(k)$ is thus isomorphic to ${\mathbb Z}^2$ modulo torsion. This field occurs as the invariant trace field for two different hyperbolic knot complements in the standard knot tables up to $8$ crossings, the $6$--crossing knot $6_1$ and the $7$--crossing knot $7_7$, but the embeddings in ${\mathbb C}$ are different. For $6_1$ one gets $\sigma_1(k)$ and for $7_7$ one gets $\sigma_2(k)$. The scissors congruence classes are $$\begin{aligned} \beta(6_1)&=:\beta_1=2[\frac12(1-\tau^2-\tau^3)] + [1-\tau] + [\frac12(1-\tau^2+\tau^3)]\in {\mathcal B}(k)\\ \beta(7_7)&=: \beta_2=4[2-\tau-\tau^3]+4[\tau+\tau^2+\tau^3] \in {\mathcal B}(k) \end{aligned}$$ where $\tau$ is the class of $x$ in $k={\mathbb Q}(x)/(x^4+x^2-x+1)$. These map under the Borel regulator ${\mathcal B}(k)\to{\mathbb R}^2$ (with respect to the embeddings $\sigma_1,\sigma_2$) to $$\begin{aligned} 6_1:\quad& (3.163963228883143983991014716.., -1.415104897265563340689508587..)\\ 7_7:\quad& (-1.397088165568881439461453224.., 7.643375172359955478221844448..) \end{aligned} $$ In particular, the volumes of these knot complements are $3.1639632288831439..$ and $7.6433751723599554..$ respectively. Snap has access to a large database of small volume compact manifolds. Searching this database for manifolds whose volumes are small rational linear combinations of $\operatorname{vol}(\sigma_1(\beta_1))= 3.1639632..$ and $\operatorname{vol}(\sigma_1(\beta_2))=-1.3970881..$ yielded just eight examples, three with volume $3.16396322888314..$, four with volume $4.396672801932495..$ and one with volume $5.629382374981847..$~.
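These numerical data are easy to reproduce. The sketch below (our own cross-check, not Snap code; the Newton iteration and quadrature are ad hoc helpers) locates the root $\tau_1^-$ of $x^4+x^2-x+1$ and recomputes the $\sigma_1$ Borel regulator entries of $\beta_1$ and $\beta_2$ by summing Bloch--Wigner dilogarithm values of the generators, assuming $\sigma_1(\tau)=\tau_1^-$.

```python
import cmath, math

def li2(z, n=4000):
    # classical dilogarithm via Li2(z) = -int_0^1 log(1 - z t)/t dt (Simpson rule)
    def f(t):
        return z if t == 0.0 else -cmath.log(1 - z * t) / t
    h = 1.0 / n
    s = f(0.0) + f(1.0)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(k * h)
    return s * h / 3

def D2(z):
    # Bloch-Wigner dilogarithm = volume of the ideal simplex with parameter z
    return li2(z).imag + math.log(abs(z)) * cmath.phase(1 - z)

# Newton's method locates the root tau_1^- of x^4 + x^2 - x + 1
f = lambda t: t**4 + t**2 - t + 1
df = lambda t: 4 * t**3 + 2 * t - 1
tau = 0.5 - 0.6j
for _ in range(60):
    tau -= f(tau) / df(tau)
assert abs(tau - (0.54742 - 0.58565j)) < 1e-4   # the stated approximate root

# volumes of the generators give the sigma_1 Borel regulator entries
vol1 = 2 * D2((1 - tau**2 - tau**3) / 2) + D2(1 - tau) + D2((1 - tau**2 + tau**3) / 2)
vol2 = 4 * D2(2 - tau - tau**3) + 4 * D2(tau + tau**2 + tau**3)
assert abs(vol1 - 3.163963228883143) < 1e-6
assert abs(vol2 - (-1.397088165568881)) < 1e-6
```

Choosing the conjugate root $\tau_1^+$ instead would flip both signs, since $D_2$ is odd under complex conjugation.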
The complex Dehn invariant sufficiency conjecture predicts (under the assumption that the rational dependencies found are exact) that these should all have invariant trace field containing $\sigma_1(k)$. Checking with Snap confirms that their invariant trace fields equal $\sigma_1(k)$ and their scissors congruence classes in ${\mathcal B}(k)\otimes{\mathbb Q}$ (computed numerically using the Borel regulator) are $\beta_1$, $(3/2)\beta_1 + (1/2)\beta_2$, and $2\beta_1+\beta_2$ respectively. \section{Extended Bloch group}\label{Extended Bloch} In section \ref{Hyperbolic 3-manifolds} we saw that ${\mathcal P}({\mathbb C})$ and ${\mathcal B}({\mathbb C})$ play the roles of ``orientation sensitive'' scissors congruence group and kernel of Dehn invariant respectively, and that the analog of the volume map is then the Bloch regulator $\rho$. But, as we described there, this analogy suffers because $\rho$ is defined on the Dehn kernel ${\mathcal B}({\mathbb C})$ rather than on the whole of ${\mathcal P}({\mathbb C})$ and moreover, it takes values in ${\mathbb C}/\pi^2{\mathbb Q}$, rather than in ${\mathbb C}/\pi^2{\mathbb Z}$. The repair turns out to be to use, instead of ${\mathbb C}-\{0,1\}$, a certain disconnected ${\mathbb Z}\times{\mathbb Z}$ cover of ${\mathbb C}-\{0,1\}$ to define ``extended versions'' of the groups ${\mathcal P}({\mathbb C})$ and ${\mathcal B}({\mathbb C})$. This idea developed out of a suggestion by Jun Yang. We shall denote the relevant cover of ${\mathbb C}-\{0,1\}$ by ${\mathbb C}over$. We start with two descriptions of it. The second will be a geometric interpretation in terms of ideal simplices. Let $P$ be ${\mathbb C}-\{0,1\}$ split along the rays $(-\infty,0)$ and $(1,\infty)$. Thus each real number $r$ outside the interval $[0,1]$ occurs twice in $P$, once in the upper half plane of ${\mathbb C}$ and once in the lower half plane of ${\mathbb C}$. We denote these two occurrences of $r$ by $r+0i$ and $r-0i$.
We construct ${\mathbb C}over$ as an identification space from $P\times{\mathbb Z}\times{\mathbb Z}$ by identifying $$ \begin{aligned} (x+0i, p,q)&\sim (x-0i,p+2,q)\quad\hbox{for each }x\in(-\infty,0)\\ (x+0i, p,q)&\sim (x-0i,p,q+2)\quad\hbox{for each }x\in(1,\infty). \end{aligned} $$ We will denote the equivalence class of $(z,p,q)$ by $(z;p,q)$. ${\mathbb C}over$ has four components: $$ {\mathbb C}over=X_{00}\cup X_{01}\cup X_{10}\cup X_{11} $$ where $X_{\epsilon_0\epsilon_1}$ is the set of $(z;p,q)\in {\mathbb C}over$ with $p\equiv\epsilon_0$ and $q\equiv\epsilon_1$ (mod $2$). We may think of $X_{00}$ as the Riemann surface for the function ${\mathbb C}-\{0,1\}\to{\mathbb C}^2$ defined by $z\mapsto (\log z, -\log (1-z))$. If for each $p,q\in{\mathbb Z}$ we take the branch $(\log z + 2p\pi i, -\log (1-z) + 2q\pi i)$ of this function on the portion $P\times\{(2p,2q)\}$ of $X_{00}$ we get an analytic function from $X_{00}$ to ${\mathbb C}^2$. In the same way, we may think of ${\mathbb C}over$ as the Riemann surface for the collection of all branches of the functions $(\log z + p\pi i, -\log (1-z) + q\pi i)$ on ${\mathbb C}-\{0,1\}$. We can interpret ${\mathbb C}over$ in terms of ideal simplices. Suppose we have an ideal simplex $\Delta$ with parameter $z\in{\mathbb C}-\{0,1\}$. Recall that this parameter is associated to an edge of $\Delta$ and that other edges of $\Delta$ have parameters $$z'=\frac1{1-z},\quad z''=1-\frac 1z,$$ with opposite edges of $\Delta$ having the same parameter (see figure~1). Note that $zz'z''=-1$, so the sum $$\log z + \log z' + \log z''$$ is an odd multiple of $\pi i$, depending on the branches of $\log$ used. In fact, if we use the standard branch of log then this sum is $\pi i$ or $-\pi i$ depending on whether $z$ is in the upper or lower half plane. This reflects the fact that the three dihedral angles of an ideal simplex sum to $\pi$.
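The two facts just stated, $zz'z''=-1$ and $\log z+\log z'+\log z''=\pm\pi i$ for the standard branch, can be verified directly; the short sketch below (our own check, with arbitrary sample parameters) does so for parameters in both half planes.

```python
import cmath

def edge_parameters(z):
    # the parameters z' and z'' attached to the other two edge pairs
    return 1 / (1 - z), 1 - 1 / z

for z in (0.3 + 0.7j, 0.8 - 0.4j, -1.5 + 0.2j):
    zp, zpp = edge_parameters(z)
    assert abs(z * zp * zpp + 1) < 1e-12      # z z' z'' = -1
    # with the standard (principal) branch the log-sum is +pi*i or -pi*i
    s = cmath.log(z) + cmath.log(zp) + cmath.log(zpp)
    expected = 1j * cmath.pi if z.imag > 0 else -1j * cmath.pi
    assert abs(s - expected) < 1e-12
```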
\begin{definition}\label{flattening} We shall call any triple of the form $$ \bfw=(w_0,w_1,w_2)=(\log z +p\pi i,\log z'+q\pi i,\log z''+r\pi i) $$ with $$p,q,r\in {\mathbb Z}\quad\text{and}\quad w_0+w_1+w_2=0 $$ a \emph{combinatorial flattening} for our simplex $\Delta$. Thus a combinatorial flattening is an adjustment of each of the three dihedral angles of $\Delta$ by a multiple of $\pi$ so that the resulting angle sum is zero. Each edge $E$ of $\Delta$ is assigned one of the components $w_i$ of $\bfw$, with opposite edges being assigned the same component. We call $w_i$ the \emph{log-parameter} for the edge $E$ and denote it $l_E(\Delta,\bfw)$. \end{definition} For $(z;p,q)\in {\mathbb C}over$ we define $$\ell(z;p,q):=(\log z + p\pi i, -\log(1-z) + q\pi i, \log(1-z)-\log z - (p+q)\pi i), $$ and $\ell$ is then a map of ${\mathbb C}over$ to the set of combinatorial flattenings of simplices. \begin{lemma}\label{Cover is flattenings} This map $\ell$ is a bijection, so ${\mathbb C}over$ may be identified with the set of all combinatorial flattenings of ideal tetrahedra. \end{lemma} \begin{proof} We must show that $(w_0,w_1,w_2)= \ell(z;p,q)$ determines $(z;p,q)$. It clearly suffices to recover $z$. But up to sign $z$ equals $e^{w_0}$ and $1-z$ equals $e^{-w_1}$, and the knowledge of both $z$ and $1-z$ up to sign determines $z$. \end{proof} \subsection{The extended groups} We shall define a group $\widehat{\mathcal P}({\mathbb C})$ as ${\mathbb Z}\langle{\mathbb C}over\rangle/$(relations), where the relations in question are a lift of the five-term relations (\ref{5term}) that define ${\mathcal P}({\mathbb C})$, plus an extra relation that just eliminates an element of order $2$. We first recall the situation of the five-term relation (\ref{5term}). If $z_0,\ldots,z_4$ are five distinct points of $\partial\overline{\mathbb H}^3$, then each choice of four of the five points $z_0,\dots,z_4$ gives an ideal simplex.
We denote the simplex which omits vertex $z_i$ by $\Delta_i$. We denote the cross-ratio parameters of these simplices by $x_i=[z_0\hbox{$\,:\,$} \ldots\hbox{$\,:\,$} \hat{z_i}\hbox{$\,:\,$} \ldots\hbox{$\,:\,$} z_4]$. Recall that $(x_0,\dots,x_4)$ can be written in terms of $x=x_0$ and $y=x_1$ as $$(x_0,\dots,x_4)=\biggl(x,y,\frac yx,\frac{1-x^{-1}}{1-y^{-1}}, \frac{1-x}{1-y}\biggr)$$ The five-term relation was $\sum_{i=0}^4(-1)^i[x_i]=0$, so the lifted five-term relation will have the form \def\term#1{(x_#1;p_#1,q_#1)} \begin{equation}\label{5-term} \sum_{i=0}^4(-1)^i \term i=0 \end{equation} with certain relations on the $p_i$ and $q_i$. We need to describe these relations. Using the map of Lemma \ref{Cover is flattenings}, each summand in this relation (\ref{5-term}) represents a choice $\ell\term i$ of combinatorial flattening for one of the five ideal simplices. For each edge $E$ connecting two of the points $z_i$ we get a corresponding linear combination \begin{equation}\label{edge sums} \sum_{i=0}^4(-1)^il_E(\Delta_i,\ell\term i) \end{equation} of log-parameters (Definition \ref{flattening}), where we put $l_E(\Delta_i,\ell\term i)=0$ if the line $E$ is not an edge of $\Delta_i$. This linear combination has just three non-zero terms corresponding to the three simplices that meet at the edge $E$. One easily checks that the real part is zero and the imaginary part can be interpreted (with care about orientations) as the sum of the ``adjusted angles'' of the three flattened simplices meeting at $E$. \begin{definition}\label{geometric five term} We say that the $(x_i;p_i,q_i)$ satisfy the \emph{flattening condition} if each of the above linear combinations (\ref{edge sums}) of log-parameters is equal to zero. That is, the adjusted angle sum of the three simplices meeting at each edge is zero. In this case relation (\ref{5-term}) is an instance of the \emph{lifted five-term relation}. 
\end{definition} There are ten edges in question, so the flattening conditions are ten linear relations on the ten integers $p_i,q_i$. But these equations turn out to be linearly dependent, and the space of solutions is $5$--dimensional. For example, if the five parameters $x_0,\dots,x_4$ are all in the upper half-plane (one can check that this means $y$ is in the upper half-plane and $x$ is inside the triangle with vertices $0,1,y$) then the conditions are equivalent to: \begin{gather} p_2=p_1-p_0,\quad p_3=p_1-p_0+q_1-q_0,\quad p_4=q_1-q_0\nonumber\\ q_3=q_2-q_1,\quad q_4 = q_2-q_1-p_0\nonumber \end{gather} which express $p_2$, $p_3$, $p_4$, $q_3$, $q_4$ in terms of $p_0$, $p_1$, $q_0$, $q_1$, $q_2$. Thus, in this case the lifted five-term relation becomes: \begin{gather} (x_0;p_0,q_0)-(x_1;p_1,q_1)+(x_2;p_1-p_0,q_2)-{}\nonumber\\ {}-(x_3;p_1-p_0+q_1-q_0,q_2-q_1)+(x_4;q_1-q_0,q_2-q_1-p_0)=0\nonumber \end{gather} This situation corresponds to the configuration of figure~2 for the ideal vertices $z_0,\dots,z_4$, with $z_1$ and $z_3$ on opposite sides of the plane of the triangle $z_0z_2z_4$ and the line from $z_1$ to $z_3$ passing through the interior of this triangle. \begin{definition} The extended pre-Bloch group $\widehat{\mathcal P}({\mathbb C})$ is the group $$\widehat{\mathcal P}({\mathbb C}) := {\mathbb Z}\langle{\mathbb C}over\rangle/(\text{lifted five-term relations and the following relation})$$ \begin{equation}\label{transfer} [x;p,q]+[x;p',q']=[x;p,q']+[x;p',q]. \end{equation} \end{definition} (We call relation (\ref{transfer}) the \emph{transfer relation}; one can show that if one omits it then $\widehat{\mathcal P}({\mathbb C})$ is replaced by $\widehat{\mathcal P}({\mathbb C}) \oplus{\mathbb Z}/2$, where the ${\mathbb Z}/2$ is generated by the element $\kappa:=[x;1,1]+[x;0,0]-[x;1,0]-[x;0,1]$, which is independent of $x$.) The relations we are using are remarkably natural.
To explain this we need a beautiful version of the dilogarithm function called the \emph{Rogers dilogarithm}: $${{\mathcal R}}(z)=-\frac12\biggl(\int_0^z\bigl(\frac{\log t}{1-t}+\frac{\log(1-t)}t\bigr)dt\biggr)-\frac{\pi^2}6.$$ The extra $-\pi^2/6$ is not always included in the definition but it improves the functional equation. ${\mathcal R}(z)$ is singular at $0$ and $1$ and is not well defined on ${\mathbb C}-\{0,1\}$, but it lifts to an analytic function \begin{gather} R:\widehat{\mathbb C}\to{\mathbb C}/\pi^2{\mathbb Z}\nonumber\\ R(z;p,q)={{\mathcal R}}(z)+\frac{\pi i}2(p\log(1-z)+q\log z).\nonumber\end{gather} We also consider the map $$\hat\delta\co\widehat{\mathbb C}\to{\mathbb C}\wedge{\mathbb C},\quad \hat\delta(z;p,q)= \bigl(\log z + p\pi i\bigr)\wedge \bigl(-\log(1-z) +q\pi i\bigr).$$ Relation (\ref{transfer}) is clearly a functional equation for both $R$ and $\hat\delta$. It turns out that the same is true for the lifted five-term relation. In fact: \begin{proposition} If $(x_i;p_i,q_i)$, $i=0,\dots,4$ satisfy the flattening condition, so $$\sum_{i=0}^4(-1)^i \term i=0$$ is an instance of the lifted five-term relation, then $$\sum_{i=0}^4(-1)^i R\term i=0$$ and $$\sum_{i=0}^4(-1)^i \hat\delta\term i=0.$$ Moreover, each of these equations also characterises the flattening condition. \end{proposition} Thus the flattening condition can be defined either geometrically, as we introduced it, or as the condition that makes the five-term functional equation for either $R$ or $\hat\delta$ valid. In any case, we now have: \begin{theorem} $R$ and $\hat\delta$ define maps $$\begin{aligned} R\co &\widehat{\mathcal P}({\mathbb C})\to {\mathbb C}/\pi^2{\mathbb Z}\\ \hat\delta\co &\widehat{\mathcal P}({\mathbb C}) \to {\mathbb C}\wedge{\mathbb C}. \end{aligned}$$ \end{theorem} We call $\hat\delta$ the \emph{extended Dehn invariant} and call its kernel $$\widehat{\mathcal B}({\mathbb C}):=\ker(\hat\delta)$$ the \emph{extended Bloch group}.
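The proposition above is easy to probe numerically. The following sketch (ours, not from the paper; it assumes the principal branch of $\log$ throughout) uses the series $\operatorname{Li}_2(z)=\sum_{n\ge1}z^n/n^2$ together with the identity ${\mathcal R}(z)=\operatorname{Li}_2(z)+\frac12\log z\log(1-z)-\pi^2/6$, which follows from the integral formula above, to check the five-term relation for the Bloch--Wigner function $D$ and the lifted five-term relation for $R$ modulo $\pi^2$:

```python
import cmath
import math

def li2(z):
    """Dilogarithm Li_2(z) = sum_{n>=1} z^n / n^2 (valid for |z| < 1)."""
    return sum(z**n / n**2 for n in range(1, 4000))

def rogers(z):
    """Rogers dilogarithm with the -pi^2/6 shift; the integral formula
    above evaluates to Li_2(z) + (1/2) log(z) log(1-z) - pi^2/6."""
    return li2(z) + 0.5 * cmath.log(z) * cmath.log(1 - z) - math.pi**2 / 6

def r_lift(z, p, q):
    """The lift R(z; p, q) of the Rogers dilogarithm to the cover."""
    return rogers(z) + (math.pi * 1j / 2) * (p * cmath.log(1 - z) + q * cmath.log(z))

def bloch_wigner(z):
    """Bloch-Wigner function D(z) = Im Li_2(z) + arg(1-z) log|z|, which is
    well defined on C - {0,1} and vanishes on the five-term relation."""
    return li2(z).imag + cmath.phase(1 - z) * math.log(abs(z))

# y in the upper half-plane, x inside the triangle with vertices 0, 1, y:
# then all five cross-ratio parameters lie in the upper half-plane
# (and, for this particular choice, inside the unit disc).
x, y = 0.7 + 0.2j, 0.3 + 0.6j
xs = [x, y, y / x, (1 - 1/x) / (1 - 1/y), (1 - x) / (1 - y)]

# the (unlifted) five-term relation, detected by D:
d_sum = sum((-1)**i * bloch_wigner(z) for i, z in enumerate(xs))

# the lifted five-term relation with the trivial flattening p_i = q_i = 0,
# which satisfies the flattening conditions displayed earlier:
r0_sum = sum((-1)**i * r_lift(z, 0, 0) for i, z in enumerate(xs))

# a nonzero flattening: p_0, p_1, q_0, q_1, q_2 free, the rest determined
p0, p1, q0, q1, q2 = 1, 2, 0, 1, 1
ps = [p0, p1, p1 - p0, p1 - p0 + q1 - q0, q1 - q0]
qs = [q0, q1, q2, q2 - q1, q2 - q1 - p0]
r_sum = sum((-1)**i * r_lift(z, p, q)
            for i, (z, p, q) in enumerate(zip(xs, ps, qs)))
```

Here `d_sum` vanishes up to rounding and `r0_sum` is a multiple of $\pi^2$; by the proposition, `r_sum` should again vanish modulo $\pi^2$ for any integers $p_0,p_1,q_0,q_1,q_2$.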
The final step in our path from Hilbert's 3rd problem to invariants of 3--manifolds is given by the following theorem. \begin{theorem}\label{final step} A hyperbolic 3--manifold $M$ has a natural class $\hat\beta(M)\in\widehat{\mathcal B}({\mathbb C})$. Moreover, $R(\hat\beta(M))=\frac 1i (\operatorname{vol}(M)+i\operatorname{CS}(M))\in{\mathbb C}/\pi^2{\mathbb Z}$. \end{theorem} To define the class $\hat\beta(M)$ directly from an ideal triangulation one needs to use a more restrictive type of ideal triangulation than the degree one ideal triangulations that suffice for $\beta(M)$. For instance, the triangulations constructed by Epstein and Penner \cite{epstein-penner} in the non-compact case and by Thurston \cite{thurston2} in the compact case are of the appropriate type. One then chooses flattenings of the ideal simplices of $K$ so that the whole complex $K$ satisfies certain ``flatness'' conditions. The sum of the flattened ideal simplices then represents $\hat\beta(M)$ up to a ${\mathbb Z}/6$ correction. The main part of the flatness conditions on $K$ is that the adjusted angles around each edge of $K$ sum to zero, together with similar conditions on homology classes at the cusps of $M$. If one just requires these conditions, one obtains $\hat\beta(M)$ up to $12$--torsion. Additional mod $2$ flatness conditions on homology classes determine $\hat\beta(M)$ modulo $6$--torsion. The final ${\mathbb Z}/6$ correction is eliminated by appropriately ordering the vertices of the simplices of $K$. It takes some work to see that all these conditions can be satisfied and that the resulting element of $\widehat{\mathcal B}({\mathbb C})$ is well defined, see \cite{neumann, neumann1}.
\section{Comments and questions}\label{More} \subsection{Relation with the non-extended Bloch group} What really underlies the above Theorem \ref{final step} is the \begin{theoremq}\label{theoremq} There is a natural short exact sequence $$0\to{\mathbb Z}/2\to H_3(\operatorname{PSL}(2,{\mathbb C});{\mathbb Z})\to \widehat{\mathcal B}({\mathbb C})\to 0.$$\end{theoremq} The reason for the question mark is that, at the time of writing, the proof that the kernel is exactly ${\mathbb Z}/2$ has not yet been written down carefully. The relationship of our extended groups with the ``classical'' ones is as follows. \begin{theorem} There is a commutative diagram with exact rows and columns: $$ \begin{CD} && 0 && 0 && 0 \\ && @VVV @VVV @VVV \\ 0 @>>> \mu^* @>>> {\mathbb C}^* @>>> {\mathbb C}^*/\mu^* @>>> 0\\ && @V\chi|_{\mu^*} VV @V\chi VV @V\xi VV @VVV\\ 0 @>>> \widehat{\mathcal B}({\mathbb C}) @>>> \widehat{\mathcal P}({\mathbb C}) @>\hat\delta>> {\mathbb C}\wedge{\mathbb C} @>>> K_2({\mathbb C}) @>>> 0\\ && @VVV @VVV @V\epsilon VV @V=VV\\ 0 @>>> {\mathcal B}({\mathbb C}) @>>> {\mathcal P}({\mathbb C}) @>\delta>> {\mathbb C}^*\wedge{\mathbb C}^* @>>> K_2({\mathbb C}) @>>> 0\\ && @VVV @VVV @VVV @VVV\\ && 0 && 0 && 0 && 0 \end{CD} $$ Here $\mu^*$ is the group of roots of unity and the labelled maps that have not yet been defined are: $$\begin{aligned} \chi(z)&=[z,0,1]-[z,0,0]\in \widehat{\mathcal P}({\mathbb C});\\ \xi[z]&=\log z \wedge \pi i;\\ \epsilon(w_1\wedge w_2)&= (e^{w_1}\wedge e^{w_2}). \end{aligned} $$ \end{theorem} \subsection{Extended extended Bloch} The use of the disconnected cover $\widehat{\mathbb C}$ of ${\mathbb C}-\{0,1\}$ rather than the universal abelian cover (the component $X_{00}$ of $\widehat{\mathbb C}$) in defining the extended Bloch group may seem unnatural.
If one uses $X_{00}$ instead of $\widehat{\mathbb C}$ one obtains extended Bloch groups $\mathcal E{\mathcal P}({\mathbb C})$ and $\mathcal E{\mathcal B}({\mathbb C})$ which are non-trivial ${\mathbb Z}/2$ extensions of $\widehat{\mathcal P}({\mathbb C})$ and $\widehat{\mathcal B}({\mathbb C})$. Theorem \ref{theoremq} then implies a natural \emph{isomorphism} $H_3(\operatorname{PSL}(2,{\mathbb C});{\mathbb Z})\to\mathcal E{\mathcal B}({\mathbb C})$. The homomorphism of Theorem \ref{theoremq} is given explicitly by ``flattening'' homology classes in the way sketched after Theorem \ref{final step}, and the isomorphism $H_3(\operatorname{PSL}(2,{\mathbb C});{\mathbb Z})\to\mathcal E{\mathcal B}({\mathbb C})$ presumably has a similar explicit description using ``$X_{00}$--flattenings,'' but we have not yet proved that these always exist (note that an $X_{00}$--flattening of a simplex presupposes a choice of a pair of opposite edges of the simplex; changing this choice turns it into an $X_{01}$-- or $X_{10}$--flattening). For the same reason, we do not yet have a simplicial description of the class $\hat\beta(M)\in\mathcal E{\mathcal B}({\mathbb C})$ for a closed hyperbolic manifold $M$, although this class exists for homological reasons. It is essential here that $M$ be closed --- the class $\hat\beta(M)\in\widehat{\mathcal B}({\mathbb C})$ almost certainly has no natural lift to $\mathcal E{\mathcal B}({\mathbb C})$ in the non-compact case. The Rogers dilogarithm induces a natural map $R\co\mathcal E{\mathcal B}({\mathbb C})\to{\mathbb C}/2\pi^2{\mathbb Z}$, and via the above isomorphism this is the Cheeger--Simons class $H_3(\operatorname{PSL}(2,{\mathbb C});{\mathbb Z})\to{\mathbb C}/2\pi^2{\mathbb Z}$. \subsection{Computing Chern--Simons invariant} The formula of \cite{neumann} for $CS(M)$ used in the programs Snappea and Snap uses ideal triangulations that arise in Dehn surgery.
These triangulations are not of the type mentioned after Theorem \ref{final step}, but by modifying them one can put them in the desired form and use Theorem \ref{final step} to compute $\hat\beta(M)$, reconfirming the formula of \cite{neumann}. The formula computes $CS(M)$ up to a constant for the infinite class of manifolds that arise by Dehn surgery on a given manifold. It was conjectured in \cite{neumann} that this constant is always a multiple of $\pi^2/6$, and this too is confirmed. The theorem also gives an independent proof of the relation of volume and Chern--Simons invariant conjectured in \cite{neumann-zagier} and proved in \cite{yoshida}, from which a formula for the eta-invariant was also deduced in \cite{neumann-meyerhoff} and \cite{ouyang}. \subsection{Realizing elements in the Bloch group and Gromov norm} One way to prove the Bloch group rigidity conjecture \ref{bloch rigidity} would be to show that ${\mathcal B}({\mathbb C})$ is generated by the classes $\beta(M)$ of 3--manifolds. This question is presumably much harder than the rigidity conjecture, although modified versions of it have been used in attempts to prove that conjecture. More specifically, one can ask \begin{question}For which number fields $k$ is ${\mathcal B}(k)_{\mathbb Q}$ generated as a ${\mathbb Q}$ vector space by classes $\beta(M)$ of 3--manifolds with invariant trace field contained in $k$? \end{question} For totally real number fields (ie $r_2=0$) the answer is trivially ``yes'' while for number fields with $r_2=1$ the existence of arithmetic manifolds again shows the answer is ``yes.'' But beyond this little is known. In fact it is not even known whether for every non-real number field $k\subset{\mathbb C}$ there exists a 3--manifold with invariant trace field $k$.
(For a few cases, eg multi-quadratic extensions of ${\mathbb Q}$, the author and A Reid have unpublished constructions to show the answer is ``yes.'') Jun Yang has pointed out that ``Gromov norm'' gives an obstruction to a class $\alpha\in{\mathcal B}({\mathbb C})$ being realizable as $\beta(M)$ (essentially the same observation also occurs in \cite{reznikov}). We define the \emph{Gromov norm} $\nu(\alpha)$ as $$\nu(\alpha)=\inf\bigl\{\sum\bigl|\frac{n_i}N\bigr|:N\alpha= \sum n_i[z_i],\quad z_i\in {\mathbb C}\bigr\},$$ where $N$ runs over positive integers, and it is essentially a result of Gromov, with proof given in \cite{thurston1}, that: \begin{theorem} $$|\operatorname{vol}(\alpha)|\le V\nu(\alpha),$$ where $V=1.0149416...$ is the volume of a regular ideal tetrahedron. If $\alpha=\beta(M)$ for some 3--manifold $M$ then $$\operatorname{vol}(\alpha)=V\nu(\alpha).$$ \end{theorem} In particular, since $\nu(\alpha)$ is invariant under the action of Galois, for $\alpha=\beta(M)$ one sees that the $\operatorname{vol}(M)$ component of the Borel regulator is the largest in absolute value and equals $V\nu(\alpha)$. This suggests the question: \begin{question} Is it true for any number field $k$ and for any $\alpha\in{\mathcal B}(k)$ that $V\nu(\alpha)$ equals the largest absolute value of a component of the Borel regulator of $\alpha$? \end{question} This question is rather naive, and at this point we have no evidence for or against. Another naive question is the following. For $\alpha\in{\mathcal B}(k)_{\mathbb Q}$, where $k$ is a number field, we can define a stricter version of Gromov norm by $$\nu_k(\alpha)=\inf\bigl\{\sum\bigl|\frac{n_i}N\bigr|: N\alpha=\sum n_i[z_i],\quad z_i\in k\bigr\}.$$ \begin{question} Is $\nu_k(\alpha)=\nu(\alpha)$ for $\alpha\in{\mathcal B}(k)_{\mathbb Q}$?
\end{question} If not, then $\nu_k$ gives a sharper obstruction to realizing $\alpha$ as $\beta(M)$ since it is easy to show that for $\alpha=\beta(M)$ one has $\operatorname{vol}(\alpha)=V\nu_K(\alpha)$ for some at most quadratic extension $K$ of $k$. \end{document}
\begin{document} \begin{abstract} We are interested in various aspects of spectral rigidity of Cayley and Schreier graphs of finitely generated groups. For each pair of integers $d\geq 2$ and $m \ge 1$, we consider an uncountable family of groups of automorphisms of the rooted $d$-regular tree which provide examples of the following interesting phenomena. For $d=2$ and any $m\geq 2$, we get an uncountable family of non quasi-isometric Cayley graphs with the same Laplacian spectrum, the union of two intervals that we compute explicitly, on which the spectral measure is absolutely continuous. Some of the groups provide examples where the spectrum of the Cayley graph is connected for one generating set and has a gap for another. For each $d\geq 3, m\geq 1$, we exhibit infinite Schreier graphs of these groups whose spectrum is the union of a Cantor set of Lebesgue measure zero and a countable set of isolated points accumulating on it. The Kesten spectral measures of the Laplacian on these Schreier graphs are discrete and concentrated on the isolated points. Moreover, we construct a complete system of strongly localized eigenfunctions. \end{abstract} \title{On spectra and spectral measures of Schreier and Cayley graphs} \section{Introduction} \label{sec:introduction} Cayley graphs and, more generally, Schreier graphs of finitely generated groups constitute an important class of examples in spectral graph theory. At the same time, the study of the Laplacian spectrum and spectral measures occupies a significant place in the theory of random walks on groups and more generally in geometric group theory. It is particularly interesting to understand how the spectra and spectral measures of Cayley and Schreier graphs depend on the algebraic structure and on the geometry of the group. There are also some natural rigidity questions, for instance, whether the spectrum determines the group in some way or, as formulated by Alain Valette \cite{Val}, ``Can one hear the shape of a group?''
Another interesting question is how the spectrum depends on the chosen generating set or on the choice of weights on the generators. The spectral computations are notoriously difficult, and very few examples are known of infinite graphs, or of infinite families of finite graphs, where the spectrum has been explicitly computed. The qualitative results are also scarce. It follows from deep results in K-theory that the spectra are intervals for some classes of finitely generated torsion-free groups \cite{HigKas}, and this is conjectured to be the case for all such groups. For groups that do contain a nontrivial element of finite order, the list of known shapes of spectra of Cayley graphs is very short: an interval, a union of an interval with one or two isolated points, a union of two disjoint intervals and one or two isolated points (free products of two finite cyclic groups \cite{CS86}), two disjoint intervals (Grigorchuk's group \cite{DG18}). A union of any finite number of disjoint intervals and one or two isolated points in the spectrum can appear in the case of anisotropic Laplacians on free products of several finite cyclic groups \cite{Kuh92}. Infinitely many gaps may appear in the spectrum of an anisotropic Laplacian on a lamplighter group~\cite{GS19}. The first examples of Schreier graphs whose spectrum is a Cantor set of Lebesgue measure zero or a union of such a Cantor set and a countable set of isolated points accumulating on it were obtained in \cite{BG00}, and it is still open whether Cantor spectrum can occur on a Cayley graph. Even less is known about the spectral measure type. Lamplighter groups remain the only family of examples for which the spectral measure has been shown to be purely discrete (see~\cite{GriZ} for the original result on the lamplighter over $\mathbb{Z}$, and~\cite{LNW08} for a generalization to lamplighters with arbitrary bases). Anisotropic Laplacians on lamplighters may have a nontrivial singular continuous part~\cite{GV15}.
An example of a Schreier graph of a self-similar group (the Hanoi towers group) with a nontrivial singular continuous part in the spectral measure appeared in~\cite{Qui07}. Examples of Schreier graphs with purely singular continuous spectra for anisotropic Laplacians were provided in \cite{GLN16, GLNS19}. In this paper we study the spectra of Laplacians associated with certain self-similar group actions on rooted trees. Let $X = \{0, \dots, d-1\}$. The set of vertices of the tree $T_d$ is naturally identified with the set $X^*$ of finite words on $X$. Similarly, the boundary $\partial T_d$ of the $d$-regular rooted tree is in bijection with the set $X^\mathbb{N}$ of infinite sequences of elements of $X$. We shall denote it either by $\partial T_d$, usually omitting the index $d$, or by $X^\mathbb{N}$. Given a finitely generated group $G$ acting by automorphisms on $T_d$, equipped with a natural finite set of generators $S$, we get a sequence of finite graphs $\{\Gamma_n\}_n$ (Schreier graphs of the action on finite levels of the tree) and a family $\{\Gamma_\xi\}_{\xi\in\partial T_d}$ of infinite Schreier graphs corresponding to the orbits of the action of $G$ by homeomorphisms on the boundary $\partial T_d$ of the tree. We will consider the normalized adjacency, or Markov, operator on the Cayley graph $\displaystyle M = \frac{1}{|S|}\sum_{s\in S} s :\ell^2(G)\rightarrow\ell^2(G)$, as well as its projections on the finite and infinite Schreier graphs: $M_n:\ell^2(G/H_n)\rightarrow\ell^2(G/H_n)$, where $H_n$ is the stabilizer subgroup of a vertex on the $n$-th level of the tree, and $M_\xi: \ell^2(G/H_\xi)\rightarrow\ell^2(G/H_\xi)$, with $H_\xi$ the stabilizer of a point $\xi$ in the boundary $\partial T_d$. When a generating set is fixed we will often write $\spec(G)$ for the spectrum of the operator $M$ on the Cayley graph.
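As a toy illustration of these operators (our sketch, not from the paper): for $G=\mathbb{Z}/6\mathbb{Z}$ with $S=\{\pm1\}$ the Cayley graph is a $6$-cycle, the characters of the group diagonalize $M$, and $\spec(M)=\{\cos(2\pi k/6) : 0\le k<6\}\subset[-1,1]$:

```python
import cmath
import math

n = 6
# Markov operator on the Cayley graph of Z/6Z with S = {+1, -1}:
# (Mf)(v) = (f(v+1) + f(v-1)) / 2.
def markov(f):
    return [(f[(v + 1) % n] + f[(v - 1) % n]) / 2 for v in range(n)]

# The characters chi_k(v) = exp(2*pi*i*k*v/n) diagonalize M, since M is
# convolution by the uniform measure on S; chi_k has eigenvalue
# (chi_k(1) + chi_k(-1)) / 2 = cos(2*pi*k/n).
eigenvalues = []
for k in range(n):
    chi = [cmath.exp(2j * cmath.pi * k * v / n) for v in range(n)]
    m_chi = markov(chi)
    lam = m_chi[0].real          # chi_k(0) = 1 and the eigenvalue is real
    assert all(abs(m_chi[v] - lam * chi[v]) < 1e-12 for v in range(n))
    eigenvalues.append(lam)
```

The same convolution picture is what makes the spectra of the operators $M$, $M_n$, $M_\xi$ above comparable: all are normalized averages over the fixed generating set.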
The groups that we consider, the so-called spinal groups with the cyclic action at the root, are organized in uncountable families $\{G_\omega\}_{\omega\in\Omega_{d,m}}$. Here $d\geq 2$ denotes the degree of the regular rooted tree on which the group acts, $m\geq 1$ is an integer, and the groups in the family corresponding to a given pair $d,m$ are indexed by sequences in the alphabet consisting of all epimorphisms $(\mathbb{Z}/d\mathbb{Z})^m\rightarrow \mathbb{Z}/d\mathbb{Z}$. They come with a natural choice of a set of generators that we call the spinal generating set. See Section~\ref{sec:preliminaries} for the definition of spinal groups. In the case of $d=2$ the Schreier graphs $\Gamma_\xi$ of spinal groups are infinite lines with some multiple edges and loops, which makes the spectral analysis easier. This is not the case anymore for $d\geq 3$, and it turns out that the spectral properties of the operators $M_\xi$ are very different for $d\geq 3$ as compared to $d=2$. All spinal groups with $d=2$ are of intermediate growth. For $d \ge 3$, this is known for some but not all of them (\cite{bart_pochon, bart_sunik_spinal, Fra19}). All spinal groups are amenable~\cite{JNS}. Hence, all their Schreier graphs are also amenable, and consequently, the spectrum of the operator $M_\xi$ does not depend on $\xi$~\cite{BG00}. Below we prove that, in the case of $d=2$, the spectrum of the infinite Schreier graph $\Gamma_\xi$, denoted by $\spec(M_\xi)$, is a union of two intervals. The spectral measure, which we compute explicitly, is absolutely continuous with respect to the Lebesgue measure. More interestingly, it happens that for $d=2$ the spectrum $\spec(M_\xi)$ of $\Gamma_\xi$ coincides with the spectrum $\spec(M)$ of the Cayley graph of the group. We show that it is independent of $\omega\in \Omega_{d,m}$, and hence we obtain a negative answer to the question ``Can one hear the shape of a group?''
by providing uncountable families of isospectral groups (see also \cite{DG18}). The groups $G_\omega$ with $d=2$ and $m=2$ all have nonequivalent growth functions~\cite{Gri84}, so there are uncountably many non quasi-isometric isospectral groups. For spinal groups acting on the binary tree, we also investigate the dependence of the spectrum on the generating set. As mentioned above, the spectrum of both Schreier and Cayley graphs with respect to the spinal generating set is a union of two intervals. At the same time, there always exists a minimal generating set with the spectrum of the corresponding Schreier graph a Cantor set. For a certain subfamily of spinal groups acting on the binary tree that we denote $\{G_m\}$ (one group for each $m \ge 2$) there is also a minimal generating set with the spectrum of the corresponding Schreier graph an interval. For $m = 2, 3$ these examples provide groups with the spectrum of the Cayley graph an interval for one generating set and a union of two disjoint intervals for another generating set. For $d\geq 3$ we also compute the spectrum of $\Gamma_\xi$. It is a Cantor set of Lebesgue measure zero plus a countable set of points accumulating on it. The computations are inspired by the work of Bartholdi and Grigorchuk for one of these groups in \cite{BG00}. We also extend their computation of the empirical spectral measure, or density of states. Moreover, we go further and study the spectral measures for the operators $M_\xi$. Note that while the spectrum of $M_\xi$ does not depend on $\xi$, the spectral measures a priori do. We prove that for all $\xi$ in a certain (explicitly given) subset of full measure in $\partial T_d$, all the spectral measures of $M_\xi$ are discrete and concentrated on the set of isolated points in the spectrum. Moreover, we provide a complete system of eigenfunctions of $M_\xi$ and show that they are all of finite support. Our main results are the following.
\begin{restatable}{thm}{spectrumcayleydtwo} \label{thm:spectrum_cayley_d2} Let $G$ be a spinal group with $d=2$ and $m\ge 2$. Then, for any $\xi \in X^\mathbb{N}$, \begin{equation} \spec(G) = \spec(M_\xi) = \left[- \frac{1}{2^{m-1}}, 0\right] \cup \left[1 - \frac{1}{2^{m-1}}, 1\right]. \end{equation} Notice that, as $m \to \infty$, these spectra shrink from two intervals to two points. \end{restatable} \begin{restatable}{cor}{regularspectrumdtwo} \label{cor:regular_spectrum_d2} (see also~\cite{GD17}). There are uncountably many pairwise non quasi-isometric isospectral groups. \end{restatable} At the same time, we are able to find a different, minimal generating set, with the spectrum of the Schreier graph a Cantor set. \begin{restatable}{cor}{generatingsetcantor} \label{cor:generating-set-cantor} For every spinal group $G_\omega$ with $d = 2$, $m \ge 2$ and $\omega \in \Omega_{d, m}$ there exists a minimal generating set $T \subset S$ for which $\spec(M_\xi^T)$ is a Cantor set of Lebesgue measure zero. \end{restatable} For two specific examples, further analysis shows that the spectrum on the Cayley graph may have a gap or be connected, depending on the generating set. See Section 7 for the definition of \v{S}uni\'c's family of self-similar spinal groups and of the subfamily $\{G_m\}_{m \ge 2}$ acting on the binary tree. \begin{restatable}{cor}{spectrumgego} \label{cor:spectrum-ge-go} For the Grigorchuk-Erschler group $G_2$ and Grigorchuk's overgroup $G_3$ the spectrum of the Cayley graph is a union of two disjoint intervals with respect to the spinal generating set and the interval $[-1, 1]$ with respect to the minimal \v{S}uni\'c generating set. \end{restatable} For spinal groups acting on trees of higher degree, we do not know the spectrum of the group, but we do know the spectrum $\spec(M_\xi)$ of the Schreier graphs. Consider the map $F(x) = x^2 - d(d-1)$, and denote $\psi(t) = \frac{1}{d^{m-1}}(|S|^2t^2 - |S|(|S| - 2)t - (|S| + d - 2))$.
\begin{restatable}{thm}{spectrumschreier} \label{thm:spectrum_schreier} Let $G$ be a spinal group with $d \ge 2$ and $m \ge 1$, generated by the spinal generators. Then, for any $\xi \in X^\mathbb{N}$, \begin{equation} \spec(M_\xi) = \left\lbrace \frac{|S|-d}{|S|}\right\rbrace \cup \psi^{-1}\left(\overline{\bigcup_{n \ge 0}F^{-n}(0)}\right). \end{equation} For $d=2$, we have $\spec(M_\xi) = \left[- \frac{1}{2^{m-1}}, 0\right] \cup \left[1 - \frac{1}{2^{m-1}}, 1\right]$. For $d>2$, we can decompose \[ \spec(M_\xi) = \spec^0(M_\xi) \cup \spec^\infty(M_\xi), \] with $\spec^\infty(M_\xi)$ being a Cantor set and $\displaystyle \spec^0(M_\xi) = \left\lbrace \frac{|S|-d}{|S|}\right\rbrace\cup\psi^{-1}\left(\bigcup_{n \ge 0}F^{-n}(0)\right)$ being a countable set of isolated points accumulating on this Cantor set. \end{restatable} Notice that this spectrum is the preimage under the quadratic map $\psi$ of the closure $\overline{\bigcup_{n \ge 0}F^{-n}(0)}$ of the set of iterated preimages of $0$ under $F$, which contains the Julia set of $F$, plus an isolated point. For $d=2$, the Julia set of $F(x) = x^2-2$ is the interval $[-2, 2]$, which contains $\cup_{n \ge 0} F^{-n}(0)$, hence its preimage under $\psi$ is the union of two intervals. For $d>2$, however, the Julia set of $F$ is a Cantor set, and is disjoint from $\cup_{n \ge 0} F^{-n}(0)$. Therefore, its preimage under $\psi$ is again a Cantor set, and $\psi^{-1}(\cup_{n \ge 0} F^{-n}(0))$ is a countable set of points accumulating on this Cantor set. Our proof of Theorem~\ref{thm:spectrum_schreier} follows the strategy developed in~\cite{BG00} for some examples and generalizes their technique. First, we use the Schur complement method to find a recurrence between the spectrum at level $n$ and the spectrum at level $n-1$. Then we solve this recurrence to completely describe these finite spectra in Theorem~\ref{thm:finite_spec}.
Finally, we use this result to compute $\spec(M_\xi)$ in the proof of Theorem~\ref{thm:spectrum_schreier}, using the fact that the Schreier graphs on the boundary are limits of those on finite levels. The next step after identifying the spectrum is to study the spectral measures. Our results involve both the classical spectral measures on the infinite graphs and the empirical spectral measure (density of states), obtained by the finite approximations of the infinite graph. Let $\mathcal{H}$ be a Hilbert space, let $T: \mathcal{H} \to \mathcal{H}$ be a self-adjoint linear operator, and let $f \in \mathcal{H}$. Then there is a unique positive measure $\mu_f$ on $\spec(T)$ such that for every $n \ge 0$ \[ \int_{\spec(T)}x^n d\mu_f(x) = \langle T^n f, f \rangle, \] called the \emph{spectral measure} of the operator $T$ associated with $f$. Notice that $\mu_f(\spec(T)) = \norm{f}^2$, so $\mu_f$ is a finite measure. Let $\Gamma$ be a graph. For every vertex $p$ of $\Gamma$, its \emph{Kesten spectral measure} is $\mu_p = \mu_{\delta_p}$, the spectral measure of the Markov operator of the simple random walk on $\Gamma$ associated with $\delta_p$, where $\delta_p$ is $1$ at $p$ and vanishes everywhere else (see e.g. I.1.C in~\cite{W00}). The $n$-th moment of the Kesten spectral measure $\mu_p$ is the probability for the random walk to return to $p$ in $n$ steps. Such measures for Markov operators were first considered by Kesten~\cite{K59}. In this paper we will consider the measures $\mu_\eta$ for the vertices $\eta$ of Schreier graphs $\Gamma_\xi$ of points $\xi \in X^\mathbb{N}$. These graphs are obtained as limits of the finite Schreier graphs $\Gamma_n$ on the levels of the tree, hence we also study the empirical spectral measure, or density of states.
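Before turning to the measures, the spectrum described in Theorem~\ref{thm:spectrum_schreier} is easy to tabulate numerically. The sketch below (ours; the helper name is hypothetical) lists the points of $\psi^{-1}(F^{-n}(0))$ for small $n$; for $d=2$, $m=2$ they all land in $[-\tfrac12,0]\cup[\tfrac12,1]$, matching Theorem~\ref{thm:spectrum_cayley_d2}, while for $d=3$, $m=1$ they stay in $[-\tfrac12,1]$ and cluster along a Cantor set:

```python
import math

def spinal_spectrum_points(d, m, depth):
    """Points of psi^{-1}(F^{-n}(0)) for n <= depth (cf. the theorem above).

    F(x) = x^2 - d(d-1), and psi(t) = (|S|^2 t^2 - |S|(|S|-2) t
    - (|S|+d-2)) / d^(m-1) with |S| = d^m + d - 2.
    """
    S = d**m + d - 2

    def psi_inv(v):
        # real roots of |S|^2 t^2 - |S|(|S|-2) t - (|S|+d-2) - d^(m-1) v = 0
        a, b = S * S, -S * (S - 2)
        c = -(S + d - 2) - d**(m - 1) * v
        r = math.sqrt(b * b - 4 * a * c)
        return [(-b - r) / (2 * a), (-b + r) / (2 * a)]

    level, points = [0.0], []
    for _ in range(depth + 1):
        points.extend(t for v in level for t in psi_inv(v))
        # next level of preimages: F^{-1}(v) = +/- sqrt(v + d(d-1))
        level = [s * math.sqrt(v + d * (d - 1)) for v in level for s in (1, -1)]
    return points
```

For $d=3$, $m=1$ one has $|S|=4$, and the theorem adds the isolated point $(|S|-d)/|S| = 1/4$ to the set produced above.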
Given a sequence of finite graphs $\{Y_n\}_n$, for every $n \ge 0$ let $\nu_n$ be the counting measure on the spectrum of the Markov operator $M_n$ on $Y_n$: \[ \nu_n = \frac{1}{|Y_n|}\sum_{\lambda \in \spec(M_n)}\delta_\lambda, \] where $\delta_x$ is the Dirac measure at $x$, and the eigenvalues are counted with multiplicity. Following~\cite{GZ04}, we call the weak limit $\nu$ of the measures $\nu_n$ the \emph{empirical spectral measure} or \emph{density of states} of $\{Y_n\}_n$. \begin{restatable}{thm}{KNSspectralmeasure} \label{thm:KNS_spectral_measure} Let $\nu$ be the density of states of $\{\Gamma_n\}_n$. If $d = 2$, $\nu$ is absolutely continuous with respect to the Lebesgue measure. Its density is given by the function \begin{equation} g(x) = \frac{|2^{m-1} - 1 - 2^mx|}{\pi\sqrt{x(1-x)(2^mx + 2)(2^mx + 2 - 2^m)}}. \label{eq:dens_states} \end{equation} If $d \ge 3$, then $\nu$ is discrete. More precisely, \[ \nu = \frac{d-2}{d}\delta_{\frac{|S|-d}{|S|}} + \sum_{n \ge 0}\frac{d-2}{d^{n+2}}\sum_{x \in \psi^{-1}(F^{-n}(0))}\delta_x. \] \end{restatable} Kesten spectral measures are also of different types in the cases $d=2$ and $d \ge 3$. In the binary case, the linearity of the Schreier graphs and the fact that all $\Gamma_\xi$ are isomorphic (as unlabeled graphs) for $\xi \in X^\mathbb{N}$ not in the orbit of $1^\mathbb{N}$ allow us to conclude that all these Kesten spectral measures $\mu_\xi$ are equal, with the exception of the orbit of $1^\mathbb{N}$. For $d\geq 3$, we prove that $M_\xi$ possesses a complete system of eigenfunctions of finite support, corresponding to the isolated eigenvalues $\spec^0$, for a set of points $\xi$ in $X^\mathbb{N}$ of full measure. \begin{restatable}{prop}{Kestenspectralmeasurebinary} \label{prop:Kesten_spectral_measure_binary} Let $d=2$ and let $\xi \in X^\mathbb{N}$. The spectral measure $\mu_\xi$ is absolutely continuous with respect to the Lebesgue measure.
For every $\xi$ not in the orbit of $1^\mathbb{N}$, $\mu_\xi$ coincides with the density of states $\nu$ (see equation~(\ref{eq:dens_states})). In addition, $\mu_{1^\mathbb{N}}$ has density \begin{equation} h(x) = \frac{|x(2^mx + 2)|}{\pi\sqrt{x(1-x)(2^mx + 2)(2^mx + 2 - 2^m)}}. \label{eq:dens_1N} \end{equation} \end{restatable} For $d \ge 3$, a study of the eigenfunctions on both finite and infinite Schreier graphs yields that the operator $M_\xi$ has discrete Kesten spectral measures and the eigenfunctions are strongly localized. Let $\sigma: X^\mathbb{N} \to X^\mathbb{N}$ be the map which removes the first letter of any point in $X^\mathbb{N}$. For any given $\xi \in X^\mathbb{N}$, we define $I_\xi = \{ n \in \mathbb{N} \mid \forall r \ge 0, \: (d-1)^r0 \textrm{ is not a prefix of } \sigma^n(\xi) \}$, and consider the subset $W$ of $X^\mathbb{N}$ defined as $W = \{\xi \in X^\mathbb{N} \mid k, k+1 \in I_\xi \textrm{ for infinitely many } k \}$. \begin{restatable}{thm}{Kestenspectralmeasure} \label{thm:Kesten_spectral_measure} For every $d \ge 3$, the set $W$ has uniform Bernoulli measure $1$ in $X^\mathbb{N}$. For every spinal group defined by $d \ge 3$, $m \ge 1$ and $\omega \in \Omega_{d, m}$, and every $\xi \in W$, the operator $M_\xi$ has pure point spectrum. More precisely, it possesses a complete system of finitely supported eigenfunctions corresponding to the eigenvalues that form $\spec^0(M_\xi)$. \end{restatable} Note that all graphs for which Theorem~\ref{thm:Kesten_spectral_measure} applies are one-ended, and that the subset $W \subset X^\mathbb{N}$ does not depend on $m$ or $\omega$, but only on $d$. Let us note that in the case of spinal groups with $d \ge 3$ and $m = 1$ it is also possible to compute the spectrum directly via renormalization of the infinite graph, without finite approximation.
This method was used on some self-similar graphs by Malozemov and Teplyaev in~\cite{MT03} and on an example of a Schreier graph of the Hanoi towers group by Quint in~\cite{Qui07}. One advantage of this method is that it allows one to compute simultaneously the spectra of all points in the space of Schreier graphs, i.e., not only of the graphs $\{\Gamma_\xi\}_{\xi \in \partial T}$, but also of their accumulation points in the space of labeled rooted graphs (see~\cite{BDN16}). These additional graphs are of special interest, as the spectral measures on them have a nontrivial singular continuous component. This approach will be detailed in a subsequent paper. The structure of the paper is as follows. In Section~\ref{sec:preliminaries}, we give some basic definitions about spinal groups, Schreier graphs and Markov operators, as well as some examples. Sections~\ref{sec:finite} and~\ref{sec:infinite} are devoted to proving Theorem~\ref{thm:spectrum_schreier}. We conclude Section~\ref{sec:infinite} by computing the density of states in Theorem~\ref{thm:KNS_spectral_measure}. In Section~\ref{sec:pure_point_eigenfunctions}, we discuss the spectral measures on the Schreier graphs and the eigenfunctions of the Markov operators. In particular, we prove the equality of the spectral measures with the density of states if $d=2$, and for $d\ge 3$ we show that the Kesten spectral measures are discrete and concentrated on the set of isolated eigenvalues, for any $\xi$ in a certain explicitly described measure one set of boundary points. We do that by explicitly finding the eigenfunctions of $M_\xi$ and showing that they form a complete set. Moreover, all of them are finitely supported. In Section~\ref{sec:spectra_cayley}, we show the equality between $\spec(G)$ and $\spec(M_\xi)$ for spinal groups acting on the binary tree. The fact that the spectra on $\Gamma_\xi$ do not depend on $\omega\in\Omega_{d,m}$ ensures that we obtain an uncountable family of groups with the same spectrum.
Finally, in Section~\ref{sec:dependence_gen_set} we study the dependence of spectra on generating sets, give some examples and prove Corollary~\ref{cor:generating-set-cantor} and Corollary~\ref{cor:spectrum-ge-go}. \section{Preliminaries} \label{sec:preliminaries} \begin{dfn}(\textbf{Spinal groups}) Let $d \ge 2$ and $X = \{0, 1, \dots, d-1\}$. We denote by $T$ the $d$-regular rooted tree, whose vertices are in bijection with $X^*$ (the set of all finite words on the alphabet $X$), and by $\partial T$ its boundary, in bijection with $X^\mathbb{N}$ (the set of all right-infinite words on the alphabet $X$). Choose an integer $m \ge 1$, and let $A = \langle a \rangle = \mathbb{Z}/d\mathbb{Z}$ and $B = (\mathbb{Z}/d\mathbb{Z})^m$. Denote by $\Epi(B, A)$ the set of epimorphisms from $B$ to $A$, and define $\Omega = \Omega_{d,m} \subset \Epi(B, A)^\mathbb{N}$ to be the set of sequences of epimorphisms satisfying the condition \begin{equation} \label{eq:kernel-condition} \forall i \ge 0, \quad \bigcap\limits_{j\ge i} \Ker(\omega_j) = 1. \end{equation} Following~\cite{bart_sunik_spinal} and \cite{bart_grig_sunik_branch}, we define for every $\omega = \omega_0\omega_1\dots \in \Omega_{d, m}$ the spinal group $G_\omega$ as the subgroup of $\mathcal{A}ut(T)$ generated by $A$ and $B$. Here, by abuse of notation, $A$ and $B$ denote the subgroups consisting of the following automorphisms: \[ a(v_0 v_1 \dots) = (v_0 + 1 \text{ mod } d) v_1 \dots \]\[ b(v_0 v_1 \dots) = \left\{ \begin{array}{ll} v_0v_1 \dots v_n \: \omega_n(b)(v_{n+1}) \: v_{n+2}\dots & \text{if } v_0\dots v_n = (d-1)^n0 \\ v_0v_1\dots & \text{otherwise} \end{array} \right.. \] The condition~(\ref{eq:kernel-condition}) implies that the action of $G_\omega$ on $T$ is faithful. The automorphism $a$ permutes the subtrees under the root cyclically. Elements in $B$ fix all vertices in the \emph{spine}, the rightmost ray of the tree, whose vertices are words of the form $(d-1)^n$. 
Moreover, their action is trivial everywhere except in the subtrees under vertices of the form $(d-1)^n0$. Notice that the action of $G_\omega$ on the tree is transitive on every level, and orbits in $X^\mathbb{N}$ are cofinality classes. \end{dfn} If not stated otherwise, we will consider $G_\omega$ with the \emph{spinal generating set} $S = (A\cup B) \setminus \{1\}$. Recall that $|S| = d^m + d - 2$. \begin{dfn}(\textbf{Schreier graphs}) Let $G$ be a group finitely generated by $S$, and $H \le G$. The \emph{Schreier graph} associated with $H$, denoted $\Sch(G, H, S)$, is the graph whose vertices are the cosets in $G/H$ and, for every $s \in S$ and $gH \in G/H$, there is an edge from $gH$ to $sgH$ labeled by $s$. In our case, we will always choose $H$ to be the stabilizer in $G_\omega$ of some vertex $u$ of $X^*$ or $X^\mathbb{N}$. Since the action is transitive at every level, if $u \in X^n$, the graph will not depend on the choice of $u$, so we will write $\Gamma_n = \Sch(G_\omega, Stab_{G_\omega}(u), S)$. Finally, for $\xi \in X^\mathbb{N}$, we will write $\Gamma_\xi = \Sch(G_\omega, Stab_{G_\omega}(\xi), S)$. \end{dfn} \begin{dfn}(\textbf{Markov operator}) For a graph $\Gamma$, the Markov operator is the normalized adjacency operator $M: \ell^2(V(\Gamma)) \to \ell^2(V(\Gamma))$, given by $Mf(x) = \frac{1}{\deg(x)} \sum_{y \sim x} f(y)$. We will be particularly interested in Markov operators on Schreier or Cayley graphs of spinal groups. For $n \ge 0$, $M_n: \ell^2(X^n) \to \ell^2(X^n)$ will denote the Markov operator on $\Gamma_n$, defined by $M_nf(v) = \frac{1}{|S|}\sum_{s \in S} f(sv)$, and, for $\xi \in X^\mathbb{N}$, $M_\xi: \ell^2(G\xi) \to \ell^2(G\xi)$ will denote the Markov operator on $\Gamma_\xi$, defined by $M_\xi f(\eta) = \frac{1}{|S|}\sum_{s \in S} f(s \eta)$. Finally, $Mf(g) = \frac{1}{|S|}\sum_{s \in S} f(sg)$ acting on $\ell^2(G)$ is the Markov operator on the Cayley graph of $G$ with respect to $S$. 
We will denote its spectrum by $\spec(G)$ when there is no confusion as to the choice of $S$. \end{dfn} The family of spinal groups contains some well-known examples of groups of intermediate growth. In particular, the first Grigorchuk group is obtained by taking $d = 2$, $m = 2$ and $\omega = (\pi_d\pi_c\pi_b)^\mathbb{N}$ where $A = \{1, a\}$, $B = \{1, b, c, d\}$ and $\pi_x: B \to A$ is the epimorphism mapping $x$ to $1$. The Fabrykowski-Gupta group corresponds to the case $d=3$, $m=1$ and $\omega = \pi^\mathbb{N}$, where $A = \{1, a, a^2\}$, $B = \{1, b, b^2\}$ and $\pi: B \to A$ is the epimorphism mapping $b$ to $a$. \section{Spectra of finite Schreier graphs} \label{sec:finite} Let $G = G_\omega$ be a spinal group with parameters $d \ge 2$, $m \ge 1$ and $\omega \in \Omega_{d, m}$, as defined above. \begin{center} \begin{figure} \caption{The graph $\Gamma_3$ for Grigorchuk's group (see Section~\ref{sec:preliminaries}).} \label{fig:grigorchuk-3} \end{figure} \end{center} \begin{center} \begin{figure} \caption{The graph $\Gamma_3$ for the Fabrykowski-Gupta group (see Section~\ref{sec:preliminaries}).} \label{fig:gupta-fabrykowski-3} \end{figure} \end{center} Our goal in this section is to compute $\spec(M_n)$, the spectrum of the Schreier graph associated to the action of $G$ on the $n$-th level of the tree, which is a finite graph on $d^n$ vertices. Typical examples of finite Schreier graphs of spinal groups with $d = 2$ and $d \ge 3$ can be found in Figures~\ref{fig:grigorchuk-3} and~\ref{fig:gupta-fabrykowski-3}, respectively. In order to simplify the computations, we will compute the spectrum of $\tilde{M}_n = |S|M_n$, the adjacency matrix of $\Gamma_n$, and then normalize it by dividing by $|S|$. We will first find a recurrence relation between $\spec(\tilde{M}_n)$ and $\spec(\tilde{M}_{n-1})$, then we will solve this recurrence to explicitly get $\spec(\tilde{M}_n)$ and finally, in Section~\ref{sec:infinite}, we will take the limit to find $\spec(M_\xi)$. 
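The objects in this strategy are easy to experiment with. The following sketch (an illustration, not part of the paper; plain Python, specialized to $m = 1$ and the constant sequence $\omega = \pi^{\mathbb{N}}$, which for $d = 3$ is the Fabrykowski-Gupta group) implements the generators $a$ and $b$ of Section~\ref{sec:preliminaries} on level-$n$ words and checks that the action is transitive on that level, i.e., that $\Gamma_n$ is connected:

```python
# Illustrative sketch (not from the paper): the spinal action on level-n
# words for m = 1 and constant omega = pi^N with pi(b) = a.
from itertools import product

d, n = 3, 4   # sample parameters; d = 3 gives the Fabrykowski-Gupta group

def act_a(v):
    # a permutes the first-level subtrees cyclically
    return ((v[0] + 1) % d,) + v[1:]

def act_b(v):
    # b applies omega_k(b) = a to the letter following a prefix (d-1)^k 0,
    # and fixes every other vertex
    k = 0
    while k < len(v) and v[k] == d - 1:
        k += 1
    if k + 1 < len(v) and v[k] == 0:
        return v[:k + 1] + ((v[k + 1] + 1) % d,) + v[k + 2:]
    return v

def orbit(start):
    # closure of {start} under a and b; since both have finite order,
    # this is the orbit under the group they generate
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for w in (act_a(u), act_b(u)):
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen

level_n = set(product(range(d), repeat=n))
```

Here `orbit((0,) * n)` returns all $d^n$ words of length $n$, in accordance with the level transitivity noted in Section~\ref{sec:preliminaries}.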
Since the vertices of $\Gamma_n$ are words on $X = \{0, \dots, d-1\}$ of length $n$, we will order them lexicographically. Let us start by computing the matrix of $\tilde{M}_n$. We write it as a $d \times d$ block matrix where each block is a matrix of size $d^{n-1} \times d^{n-1}$. A block denoted by a scalar is the corresponding multiple of the identity matrix $I_{d^{n-1}}$. \begin{lem} \label{lem:adjacency_matrix} Let $A_0 = d-1$ and $B_0 = d^m - 1$. Define, for $n \ge 1$, the following matrices in $M_{d^n, d^n}(\mathbb{R})$. \[ A_n = \left( \begin{array}{ccccc} 0 & 1 & \dots & 1 & 1 \\ 1 & 0 & \dots & 1 & 1 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 1 & 1 & \dots & 0 & 1 \\ 1 & 1 & \dots & 1 & 0 \end{array} \right),\]\[ B_n = \left( \begin{array}{ccccc} d^{m-1}A_{n-1} + d^{m-1} - 1 & & & & \\ & d^m - 1 & & & \\ & & \ddots & & \\ & & & d^m - 1 & \\ & & & & B_{n-1} \end{array} \right). \] Then, the matrix of $\tilde{M}_n$ is $A_n + B_n$. \end{lem} \begin{proof} In order to write the adjacency matrix of $\Gamma_n$, we will first write the adjacency matrices of each of the generators we consider. For a generator $s \in S$, its adjacency matrix for $\Gamma_n$ is denoted $s_n$, and its coefficient $(u, v)$ is $1$ if $s(u) = v$ and $0$ otherwise. Let $a$ be the generator of $A$ permuting the subtrees of the first level cyclically. We can write the adjacency matrix of this generator by blocks as \[ a_0 = 1, \qquad a_n = \left( \begin{array}{ccccc} 0 & 1 & & & \\ & 0 & 1 & & \\ & & \ddots & \ddots & \\ & & & 0 & 1 \\ 1 & & & & 0 \end{array} \right), \quad \forall n \ge 1. \] Now, for any $b\in B$ and $k \ge 0$, if we write the matrix \[ b_{0,k} = 1, \qquad b_{n,k} = \left( \begin{array}{ccccc} \omega_k(b)_{n-1} & & & & \\ & 1 & & & \\ & & \ddots & & \\ & & & 1 & \\ & & & & b_{n-1, k+1} \end{array} \right), \quad \forall n \ge 1, \] then the adjacency matrix of $b$ is $b_n = b_{n, 0}$. 
These matrices have size $d^n \times d^n$, and every block in the matrices above is a matrix of size $d^{n-1} \times d^{n-1}$. In order to simplify notation, from now on we will omit the identity matrix and write just the scalar multiplying it, as the dimensions should be clear from the context. Now notice that, for every $n \ge 0$, $\sum\limits_{i=1}^{d-1} a_n^i = A_n$. Similarly, we have $\sum\limits_{b \in B\setminus\{1\}}b_n = B_n$. Indeed, \[ \sum\limits_{b \in B\setminus\{1\}}b_{n,k} = \left( \begin{array}{ccccc} \sum\limits_{b \in B\setminus\{1\}}\omega_k(b)_{n-1} & & & & \\ & d^m - 1 & & & \\ & & \ddots & & \\ & & & d^m - 1 & \\ & & & & \sum\limits_{b \in B\setminus\{1\}}b_{n-1,k+1} \end{array} \right) = \] \[ = \left( \begin{array}{ccccc} d^{m-1}A_{n-1} + d^{m-1} - 1 & & & & \\ & d^m - 1 & & & \\ & & \ddots & & \\ & & & d^m - 1 & \\ & & & & \sum\limits_{b \in B\setminus\{1\}}b_{n-1,k+1} \end{array} \right). \] The sum in the first block does not depend on $k$, since $\omega_k$ is an epimorphism and all elements of $A$ have exactly $d^{m-1}$ preimages. Hence we can inductively conclude that $\sum\limits_{b \in B\setminus\{1\}}b_n = B_n$. Finally, the adjacency matrix of $\Gamma_n$ is $\sum\limits_{s \in S}s_n = \sum\limits_{i=1}^{d-1} a_n^i + \sum\limits_{b \in B\setminus\{1\}}b_n = A_n + B_n$. \end{proof} If we now try to find the characteristic polynomial of $\tilde{M}_n$, we will not find any explicit relation with that of $\tilde{M}_{n-1}$. Instead, we consider the matrix \[ Q_n(\lambda, \mu) := B_n + \lambda A_n - \mu. \] These additional parameters will allow us to find a relation between the determinant of $Q_n(\lambda, \mu)$ and $Q_{n-1}(\lambda', \mu')$, for some different $\lambda'$ and $\mu'$. According to Lemma~\ref{lem:adjacency_matrix}, by setting $\lambda = 1, \mu = 0$, we recover the matrix of $\tilde{M}_n$, so more specifically we want to find $\spec(\tilde{M}_n) = \left\{ \mu \mid |Q_n(1, \mu)| = 0 \right\}$. 
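The block recursion of Lemma~\ref{lem:adjacency_matrix} is also convenient for numerical experiments. The sketch below (an illustration, not part of the paper; plain Python, integer arithmetic) builds $\tilde{M}_n = A_n + B_n$ for sample parameters and checks that it is symmetric with all row sums equal to $|S| = d^m + d - 2$; as a further illustration of our own (not a statement from the text), it also verifies that, for $n = 2$, any vector equal to $y$ on the next-to-last block and $-y$ on the last block is an eigenvector of $\tilde{M}_2$ for the eigenvalue $d^m - 2$:

```python
# Illustrative sketch (not from the paper): the block recursion for
# tilde M_n = A_n + B_n of Lemma lem:adjacency_matrix.
d, m = 3, 2   # sample parameters

def A(n):
    # A_n = a_n + ... + a_n^{d-1}: identity blocks off the block diagonal
    if n == 0:
        return [[d - 1]]
    s, N = d ** (n - 1), d ** n
    return [[1 if (p % s == q % s and p // s != q // s) else 0
             for q in range(N)] for p in range(N)]

def B(n):
    # first block d^{m-1} A_{n-1} + (d^{m-1} - 1), middle blocks d^m - 1,
    # last block B_{n-1}
    if n == 0:
        return [[d ** m - 1]]
    s, N = d ** (n - 1), d ** n
    M = [[0] * N for _ in range(N)]
    A1, B1 = A(n - 1), B(n - 1)
    for p in range(s):
        for q in range(s):
            M[p][q] = d ** (m - 1) * A1[p][q]
            M[N - s + p][N - s + q] = B1[p][q]
        M[p][p] += d ** (m - 1) - 1
    for p in range(s, N - s):
        M[p][p] = d ** m - 1
    return M

def tildeM(n):
    An, Bn = A(n), B(n)
    return [[An[p][q] + Bn[p][q] for q in range(d ** n)]
            for p in range(d ** n)]

def matvec(M, v):
    return [sum(row[q] * v[q] for q in range(len(v))) for row in M]
```

The eigenvalue $d^m - 2$ appearing here is the point where $\beta$ vanishes; after normalization by $|S|$ it is the value $\frac{|S| - d}{|S|}$ that recurs throughout the paper.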
As mentioned above, the strategy consists of two steps. First, we will prove a relation between the determinants of $Q_n$ and $Q_{n-1}$ (Proposition~\ref{prop:recurrence}). Second, we will solve this recurrence to find a factorization of the determinant of $Q_n$ (Proposition~\ref{prop:factorization}). Before that, since our computations will involve matrices of the form $rA_n + s$, let us record the following lemma, which will be useful later on. \begin{lem} \label{lem:rAns} Let $r, s, r', s' \in \mathbb{R}$. Then, \begin{enumerate} \item $A_n^2 = (d-2)A_n + d - 1$. \item $|rA_n + s| = \left[(s - r)^{d-1}(s + (d-1)r)\right]^{d^{n-1}}$. \item $(rA_n + s)^{-1} = \frac{rA_n - (d-2)r - s}{(r-s)(s + (d-1)r)}$. \item $(rA_n + s)(r'A_n + s') = [(d-2)rr' + rs' + r's]A_n + (d-1)rr' + ss'$. \end{enumerate} \end{lem} \begin{proof} For $(1)$, squaring $A_n$ yields a sum of $d-1$ ones in each diagonal entry and $d-2$ ones in each off-diagonal entry, which shows the claim. For $(2)$, we have \[ |rA_n + s| = \left| \begin{array}{ccccc} s & r & \dots & r & r \\ r & s & \dots & r & r \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ r & r & \dots & s & r \\ r & r & \dots & r & s \end{array} \right| = \left| \begin{array}{ccccc} s-r & 0 & \dots & 0 & r-s \\ 0 & s-r & \dots & 0 & r-s \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \dots & s-r & r-s \\ r & r & \dots & r & s \end{array} \right| = \]\[ = (s-r)^{(d-1)d^{n-1}} \left|s - \frac{(d-1)r(r-s)}{s-r} \right| = \left[(s - r)^{d-1}(s + (d-1)r)\right]^{d^{n-1}}. \] For $(3)$, we can verify, using $(1)$, that \[ (rA_n + s)\left(rA_n - (d-2)r - s\right) = r^2A_n^2 - (d-2)r^2A_n - (d-2)rs - s^2 = \]\[ = r^2\left((d-2)A_n + d -1\right) - (d-2)r^2A_n - (d-2)rs - s^2 = (r-s)(s + (d-1)r). \] Claim $(4)$ can be checked directly, again using $(1)$. 
\end{proof} \begin{prop} \label{prop:Q0-Q1} For $n = 0$ and $n = 1$, we have \[ |Q_0(\lambda, \mu)| = \alpha + \lambda \quad \text{and} \quad |Q_1(\lambda, \mu)| = (\alpha + \lambda)\beta^{d-1}, \] where \[ \alpha = \alpha(\lambda, \mu) := d^m - 1 - \mu + (d-2)\lambda \] and \[ \beta = \beta(\lambda, \mu) := d^m - 1 - \mu - \lambda. \] \end{prop} \begin{proof} By direct computation, \[ \left|Q_0(\lambda, \mu)\right| = B_0 + \lambda A_0 - \mu = d^m - 1 - \mu + (d-1)\lambda = \alpha + \lambda, \] \[ \left|Q_1(\lambda, \mu)\right| = \left|B_1 + \lambda A_1 - \mu \right| = \left|\lambda A_1 + d^m - 1 - \mu \right| = \]\[ = (d^m - 1 - \mu + (d-1)\lambda)(d^m - 1 - \mu - \lambda)^{d-1} = (\alpha + \lambda)\beta^{d-1}. \] \end{proof} We are now ready to compute the determinant of $Q_n(\lambda, \mu)$ for $n \ge 2$: \begin{prop} \label{prop:recurrence} For $n \ge 2$, we have \[ \left|Q_n(\lambda, \mu)\right| = (\alpha\beta^{d^2 - 3d + 1}\gamma^{d-1})^{d^{n-2}} \left|Q_{n-1}(\lambda', \mu')\right|, \] with $\alpha$ and $\beta$ as in Proposition~\ref{prop:Q0-Q1}, \[ \lambda' := \frac{d^{m-1}\beta}{\alpha\gamma}\lambda^2 \quad \text{and} \quad \mu' := \mu + \frac{(d-1)\delta}{\alpha\gamma}\lambda^2, \] where \scriptsize \[ \gamma = \gamma(\lambda, \mu) := \mu^2 - ((d-3)\lambda + d^m - 2)\mu - ((d-2)\lambda^2 + (d-3)\lambda + d^m - 1), \] \[ \delta = \delta(\lambda, \mu) := \mu^2 - ((d-3)\lambda + d^m + d^{m-1} - 2)\mu - ((d-2)\lambda^2 + (d^{m-1}+d-3)\lambda - d^{2m-1} + d^m + d^{m-1} - 1). \] \normalsize \end{prop} \begin{proof} We compute the determinant $|Q_n(\lambda, \mu)|$ directly, performing elementary row and column transformations. 
\[ \left|Q_n(\lambda, \mu)\right| = \left|\begin{array}{ccccc} d^{m-1}A_{n-1} + d^{m-1} - 1 - \mu & \lambda & \dots & \lambda & \lambda \\ \lambda & d^m - 1 - \mu & \dots & \lambda & \lambda \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ \lambda & \lambda & \dots & d^m - 1 - \mu & \lambda \\ \lambda & \lambda & \dots & \lambda & B_{n-1} - \mu \end{array}\right| = \]\[ = \left|\begin{array}{ccc|cc} \beta + \lambda & \dots & \lambda & \lambda & \lambda \\ \vdots & \ddots & \vdots & \vdots & \vdots \\ \lambda & \dots & \beta + \lambda & \lambda & \lambda \\ \hline \lambda & \dots & \lambda & d^{m-1}A_{n-1} + d^{m-1} - 1 - \mu & \lambda \\ \lambda & \dots & \lambda & \lambda & B_{n-1} - \mu \end{array}\right| = \]\[ = \left|\begin{array}{ccc|cc} \beta & & & \lambda + \mu + 1 - d^{m-1} - d^{m-1}A_{n-1} & 0 \\ & \ddots & & \vdots & \vdots \\ & & \beta & \lambda + \mu + 1 - d^{m-1} - d^{m-1}A_{n-1} & 0 \\ \hline \lambda & \dots & \lambda & d^{m-1}A_{n-1} + d^{m-1} - 1 - \mu & \lambda \\ \lambda & \dots & \lambda & \lambda & B_{n-1} - \mu \end{array}\right| = \]\[ = \left|\begin{array}{ccc|cc} \beta & & & \lambda + \mu + 1 - d^{m-1} - d^{m-1}A_{n-1} & 0 \\ & \ddots & & \vdots & \vdots \\ & & \beta & \lambda + \mu + 1 - d^{m-1} - d^{m-1}A_{n-1} & 0 \\ \hline 0 & \dots & 0 & d^{m-1}A_{n-1} + d^{m-1} - 1 - \mu - \frac{(d-2)\lambda(\lambda + \mu + 1 - d^{m-1} - d^{m-1}A_{n-1})}{\beta} & \lambda \\ 0 & \dots & 0 & \lambda - \frac{(d-2)\lambda(\lambda + \mu + 1 - d^{m-1} - d^{m-1}A_{n-1})}{\beta} & B_{n-1} - \mu \end{array}\right| = \]\[ = (\beta^{d-2})^{d^{n-1}}\left|\begin{array}{cc} d^{m-1}A_{n-1} + d^{m-1} - 1 - \mu - \frac{(d-2)\lambda(\lambda + \mu + 1 - d^{m-1} - d^{m-1}A_{n-1})}{\beta} & \lambda \\ \lambda\left(1 - \frac{(d-2)(\lambda + \mu + 1 - d^{m-1} - d^{m-1}A_{n-1})}{\beta}\right) & B_{n-1} - \mu \end{array}\right| = \]\scriptsize\[ = (\beta^{d-3})^{d^{n-1}}\left|\begin{array}{cc} \beta(d^{m-1}A_{n-1} + d^{m-1} - 1 - \mu) - (d-2)\lambda(\lambda + \mu + 1 - 
d^{m-1} - d^{m-1}A_{n-1}) & \lambda^2 \\ \beta - (d-2)(\lambda + \mu + 1 - d^{m-1} - d^{m-1}A_{n-1}) & B_{n-1} - \mu \end{array}\right| = \]\normalsize\[ = (\beta^{d-3})^{d^{n-1}}\left|\begin{array}{cc} d^{m-1}(\alpha - \lambda)(A_{n-1} + 1) + \gamma & \lambda^2 \\ (d-2)d^{m-1}(A_{n-1} - (d-1)) + (d-1)\beta & B_{n-1} - \mu \end{array}\right|. \] We set for convenience \[ C_n := d^{m-1}(\alpha - \lambda)(A_n + 1) + \gamma, \]\[ D_n := (d-2)d^{m-1}(A_n - (d-1)) + (d-1)\beta. \] We continue the computation of the determinant by taking the first Schur complement. Namely, whenever $P$ is invertible, we have the equality $\left|\begin{array}{cc} P & Q \\ R & S \end{array}\right| = |P||S - RP^{-1}Q|$: \[ \left|Q_n(\lambda, \mu)\right| = (\beta^{d-3})^{d^{n-1}}\left|\begin{array}{cc} C_{n-1} & \lambda^2 \\ D_{n-1} & B_{n-1} - \mu \end{array}\right| = \]\[ = (\beta^{d-3})^{d^{n-1}}\left|C_{n-1}\right|\left|B_{n-1} - \mu - \lambda^2 D_{n-1}C_{n-1}^{-1}\right|. \] Let us now compute these two determinants. First, using Lemma~\ref{lem:rAns} with $r = d^{m-1}(\alpha - \lambda)$ and $s = \gamma + d^{m-1}(\alpha - \lambda)$, we obtain \[ \left| C_n \right| = \left[(s-r)^{d-1}(s + (d-1)r)\right]^{d^{n-1}} = (\alpha \beta \gamma^{d-1})^{d^{n-1}}, \] as well as \[ C_n^{-1} = \frac{-1}{\alpha\beta\gamma}\left[d^{m-1}(\alpha - \lambda)(A_n - (d-1)) - \gamma \right]. \] Similarly, again by Lemma~\ref{lem:rAns} but now with \[ \begin{array}{ll} r = (d-2)d^{m-1}, & s = (d-1)\left(\beta - (d-2)d^{m-1}\right), \\ r' = d^{m-1}(\alpha - \lambda), & s' = - (\gamma + (d-1)d^{m-1}(\alpha - \lambda)), \end{array} \] we find \[ D_nC_n^{-1} = \frac{-1}{\alpha\gamma}\left[d^{m-1}\beta A_n - (d-1)\delta \right]. 
\] Indeed, \[ (d-2)rr' + rs' + r's = \]\[ = (d-2)rr' - r(\gamma + (d-1)r') + r'(d-1)(\beta - r) = \]\[ = -drr' - r\gamma + (d-1)\beta r' = \]\[ = d^{m-1}\left[(d-1)\beta(\alpha - \lambda) - (d-2)(\gamma + d^m(\alpha - \lambda)) \right] = \]\[ = d^{m-1}\beta[\alpha - (d-1)\lambda] = d^{m-1}\beta^2, \] and \[ (d-1)rr' + ss' = \]\[ = (d-1)rr' - (d-1)(\beta - r)(\gamma + (d-1)r') = \]\[ = (d-1)(drr' + r\gamma - \beta(\gamma + (d-1)r')) = \]\[ = (d-1)(r(\gamma + dr') - \beta(\gamma + (d-1)r')) = \]\[ = (d-1)(r\alpha\beta - \beta(\gamma + (d-1)r')) = \]\[ = -(d-1)\beta(\gamma + (d-1)d^{m-1}(\alpha - \lambda) - (d-2)d^{m-1}\alpha) = \]\[ = -(d-1)\beta(\gamma + d^{m-1}(\alpha - (d-1)\lambda)) = \]\[ = -(d-1)\beta(\gamma + d^{m-1}\beta) = -(d-1)\beta\delta. \] Therefore, \[ B_{n-1} - \mu - \lambda^2 D_{n-1}C_{n-1}^{-1} = \]\[ B_{n-1} - \mu + \frac{\lambda^2}{\alpha\gamma}\left[d^{m-1}\beta A_{n-1} - (d-1)\delta \right] = \]\[ B_{n-1} + \frac{d^{m-1}\beta}{\alpha\gamma}\lambda^2 A_{n-1} - \left(\mu + \frac{(d-1)\delta}{\alpha\gamma}\lambda^2\right) = \]\[ = B_{n-1} + \lambda' A_{n-1} - \mu' = \]\[ = Q_{n-1}(\lambda', \mu'). \] Finally, we conclude the computation of the determinant of $Q_n(\lambda, \mu)$: \[ \left|Q_n(\lambda, \mu)\right| = (\beta^{d-3})^{d^{n-1}}\left|C_{n-1}\right|\left|B_{n-1} - \mu - \lambda^2 D_{n-1}C_{n-1}^{-1}\right| = \]\[ = (\beta^{d-3})^{d^{n-1}} (\alpha \beta \gamma^{d-1})^{d^{n-2}} \left| Q_{n-1}(\lambda', \mu') \right| = \]\[ = (\alpha \beta^{d^2 - 3d + 1} \gamma^{d-1})^{d^{n-2}} \left| Q_{n-1}(\lambda', \mu') \right|. \] \end{proof} This concludes the first part of the strategy, finding a recurrence relation between the determinants of $Q_n(\lambda, \mu)$ and $Q_{n-1}(\lambda', \mu')$ via the Schur complement. For the next part, we need to unfold this recurrence relation to get a factorization of $|Q_n(\lambda, \mu)|$. Proposition~\ref{prop:Q0-Q1} provides it for $n = 0, 1$. Let us inductively compute it for $n \ge 2$. 
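The recurrence can also be checked exactly at sample points. The following sketch (an illustration, not part of the paper; plain Python with exact rational arithmetic) verifies Proposition~\ref{prop:Q0-Q1} and Proposition~\ref{prop:recurrence} for $d = 3$, $m = 1$ at the point $(\lambda, \mu) = (1, 0)$:

```python
# Illustrative check (not from the paper) of the determinant recurrence,
# for d = 3, m = 1 at the sample rational point (lambda, mu) = (1, 0).
from fractions import Fraction as Fr

d, m = 3, 1

def A(n):
    if n == 0:
        return [[d - 1]]
    s, N = d ** (n - 1), d ** n
    return [[1 if (p % s == q % s and p // s != q // s) else 0
             for q in range(N)] for p in range(N)]

def B(n):
    if n == 0:
        return [[d ** m - 1]]
    s, N = d ** (n - 1), d ** n
    M = [[0] * N for _ in range(N)]
    A1, B1 = A(n - 1), B(n - 1)
    for p in range(s):
        for q in range(s):
            M[p][q] = d ** (m - 1) * A1[p][q]
            M[N - s + p][N - s + q] = B1[p][q]
        M[p][p] += d ** (m - 1) - 1
    for p in range(s, N - s):
        M[p][p] = d ** m - 1
    return M

def Q(n, lam, mu):
    # Q_n(lambda, mu) = B_n + lambda A_n - mu
    An, Bn = A(n), B(n)
    return [[Fr(Bn[p][q]) + lam * An[p][q] - (mu if p == q else 0)
             for q in range(d ** n)] for p in range(d ** n)]

def det(M):
    # exact Gaussian elimination over the rationals
    M = [row[:] for row in M]
    N, res = len(M), Fr(1)
    for c in range(N):
        piv = next((r for r in range(c, N) if M[r][c]), None)
        if piv is None:
            return Fr(0)
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            res = -res
        res *= M[c][c]
        for r in range(c + 1, N):
            f = M[r][c] / M[c][c]
            for k in range(c, N):
                M[r][k] -= f * M[c][k]
    return res

lam, mu = Fr(1), Fr(0)
alpha = d ** m - 1 - mu + (d - 2) * lam
beta = d ** m - 1 - mu - lam
gamma = (mu ** 2 - ((d - 3) * lam + d ** m - 2) * mu
         - ((d - 2) * lam ** 2 + (d - 3) * lam + d ** m - 1))
delta = (mu ** 2 - ((d - 3) * lam + d ** m + d ** (m - 1) - 2) * mu
         - ((d - 2) * lam ** 2 + (d ** (m - 1) + d - 3) * lam
            - d ** (2 * m - 1) + d ** m + d ** (m - 1) - 1))
lam2 = d ** (m - 1) * beta * lam ** 2 / (alpha * gamma)
mu2 = mu + (d - 1) * delta * lam ** 2 / (alpha * gamma)
```

At this sample point both sides of the recurrence equal $100$, which can be confirmed independently from the eigenvalues $4$, $1$ (four times) and $1 \pm \sqrt{6}$ (twice each) of $\tilde{M}_2$ for the Fabrykowski-Gupta group.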
\begin{prop} \label{prop:Q2} For $n = 2$, we have \[ |Q_2(\lambda, \mu)| = (\alpha + \lambda)\beta^{(d-2)d+1}H_0^{d-1}, \] where \[ H_x := H_x(\lambda, \mu) = \mu^2 - ((d-2)\lambda + d^m - 2)\mu - ((d-1)\lambda^2 + (d^{m-1}x+d-2)\lambda + d^m - 1). \] \end{prop} \begin{proof} Let $\alpha' := \alpha(\lambda', \mu')$ and $\beta' := \beta(\lambda', \mu')$ following the definition in Proposition~\ref{prop:Q0-Q1}. Then, by that Proposition and Proposition~\ref{prop:recurrence}, \[ \left|Q_2(\lambda, \mu)\right| = \alpha\beta^{d^2 - 3d + 1}\gamma^{d-1}\left|Q_1(\lambda', \mu')\right| = \alpha\beta^{d^2 - 3d + 1}\gamma^{d-1}(\alpha' + \lambda')\beta'^{d-1}. \] We can verify the following relations \[ \alpha' + \lambda' = \frac{\beta}{\alpha}(\alpha + \lambda), \quad \beta' = \frac{\beta}{\gamma}H_0. \] Therefore, \[ \left|Q_2(\lambda, \mu)\right| = \alpha\beta^{d^2 - 3d + 1}\gamma^{d-1}\frac{\beta}{\alpha}(\alpha + \lambda)\left(\frac{\beta}{\gamma}H_0\right)^{d-1} = \] \[ = (\alpha + \lambda)\beta^{(d-2)d + 1}H_0^{d-1}. \] \end{proof} The motivation for the definition of the polynomials $H_x$ from Proposition~\ref{prop:Q2} will become apparent in Proposition~\ref{prop:factorization}. They form a family of polynomials in $\lambda$ and $\mu$ indexed by the point $x \in \mathbb{R}$. For different values of $x \in \mathbb{R}$, the equation $H_x = 0$ defines different hyperbolas in $\lambda$ and $\mu$. \begin{prop} \label{prop:factorization} For any $n \ge 2$, we have the factorization \[ \left|Q_n(\lambda, \mu)\right| = (\alpha + \lambda)\beta^{(d-2)d^{n-1} + 1}\prod_{k=0}^{n-2}\prod_{x\in F^{-k}(0)} H_x^{(d-2)d^{n-k-2} + 1}, \] with $\alpha$ and $\beta$ as in Proposition~\ref{prop:Q0-Q1}, $H_x$ as in Proposition~\ref{prop:Q2} and $F$ being the map \[ F(x) = x^2 - d(d-1). \] \end{prop} \begin{proof} The case $n=2$ is shown in Proposition~\ref{prop:Q2}. We use again the recurrence in Proposition~\ref{prop:recurrence} to show the result for $n \ge 3$ inductively. 
Let $H_x' := H_x(\lambda', \mu')$. We can verify \[ H'_x = \frac{\beta}{\alpha\gamma}\prod_{y \in F^{-1}(x)}H_y. \] Using this relation and the fact that, for any $k \ge 0$, $|F^{-k}(0)| = 2^k$, we have \[ \left|Q_n(\lambda, \mu)\right| = \left(\alpha\beta^{d^2 - 3d + 1}\gamma^{d-1}\right)^{d^{n-2}}\left|Q_{n-1}(\lambda', \mu')\right| = \]\[ = \left(\alpha\beta^{d^2 - 3d + 1}\gamma^{d-1}\right)^{d^{n-2}}(\alpha' + \lambda')\beta'^{(d-2)d^{n-2} + 1} \prod_{k=0}^{n-3}\prod_{x \in F^{-k}(0)} H_x'^{(d-2)d^{n-k-3} + 1} = \]\scriptsize\[ = \left(\alpha\beta^{d^2 - 3d + 1}\gamma^{d-1}\right)^{d^{n-2}}\frac{\beta}{\alpha}(\alpha + \lambda)\left(\frac{\beta}{\gamma}H_0\right)^{(d-2)d^{n-2} + 1}\prod_{k=0}^{n-3}\prod_{x \in F^{-k}(0)} \left(\frac{\beta}{\alpha\gamma}\prod_{y \in F^{-1}(x)}H_y\right)^{(d-2)d^{n-k-3} + 1} = \]\tiny\[ = (\alpha\gamma)^{d^{n-2}-1}\beta^{(d^2 - 2d - 1)d^{n-2} + 2}(\alpha + \lambda) H_0^{(d-2)d^{n-2} + 1}\prod_{k=0}^{n-3}\left(\frac{\beta}{\alpha\gamma}\right)^{2^k((d-2)d^{n-k-3} + 1)}\prod_{x \in F^{-(k+1)}(0)}H_x^{(d-2)d^{n-k-3} + 1} = \]\scriptsize\[ = (\alpha\gamma)^{d^{n-2}-1}\beta^{(d^2 - 2d - 1)d^{n-2} + 2}(\alpha + \lambda) H_0^{(d-2)d^{n-2} + 1}\left(\frac{\beta}{\alpha\gamma}\right)^{d^{n-2} - 1}\prod_{k=1}^{n-2}\prod_{x\in F^{-k}(0)}H_x^{(d-2)d^{n-k-2} + 1} = \]\normalsize\[ = (\alpha + \lambda)\beta^{(d^2 - 2d)d^{n-2} + 1} \prod_{k=0}^{n-2}\prod_{x\in F^{-k}(0)}H_x^{(d-2)d^{n-k-2} + 1} = \]\[ = (\alpha + \lambda)\beta^{(d - 2)d^{n-1} + 1} \prod_{k=0}^{n-2}\prod_{x\in F^{-k}(0)}H_x^{(d-2)d^{n-k-2} + 1}. \] \end{proof} The relation between the determinants of $Q_n(\lambda, \mu)$ and $Q_{n-1}(\lambda', \mu' )$ is given by the substitution $\lambda \mapsto \lambda'$, $\mu \mapsto \mu'$. For $Q_2$, one of the factors of the determinant is the polynomial we called $H_0$. To compute the determinant of $Q_3$, we have to develop $H_0'$. It is in this analysis that the polynomials $H_x$ and the map $F$ arise. 
They are the link between $H_x'$ and $H_y$ that allows us to unfold the recurrence. From the factorization in Proposition~\ref{prop:factorization} we can extract $\spec(M_n)$, as we mentioned above, by setting $\lambda = 1$. Recall that $|S| = d^m + d - 2$. \begin{thm} \label{thm:finite_spec} We have $\spec(M_0) = \{1\}$, $\spec(M_1) = \{1, \frac{|S| - d}{|S|}\}$ and, for $n \ge 2$, \[ \spec(M_n) = \left\lbrace1, \frac{|S| - d}{|S|}\right\rbrace \bigcup \psi^{-1}\left(\bigcup_{k=0}^{n-2}F^{-k}(0)\right), \] where $F(x) = x^2 - d(d-1)$ and $\psi(t) = \frac{1}{d^{m-1}}(|S|^2t^2 - |S|(|S|-2)t - (|S| + d-2))$. \end{thm} \begin{proof} We already established that $\spec(M_n) = \{\frac{\mu}{|S|} \mid \left|Q_n(1, \mu)\right| = 0\}$, and by Proposition~\ref{prop:factorization}, the determinant only vanishes in the following cases: \begin{itemize} \item $\alpha + 1 = 0 \implies \mu = |S| \implies \frac{\mu}{|S|} = 1$ , with multiplicity $1$. \item $\beta = 0 \implies \mu = d^m - 2 \implies \frac{\mu}{|S|} = \frac{|S| - d}{|S|}$, with multiplicity $(d-2)d^{n-1} + 1$. \item $H_x = 0$, for some $x\in F^{-k}(0)$ with $0 \le k \le n-2$. This implies that $\mu = \frac{|S| - 2}{2} \pm \sqrt{\left(\frac{d^m + d}{2}\right)^2 - d^m + d^{m-1}x}$, each with multiplicity $(d-2)d^{n-k-2} + 1$. Equivalently, $\frac{\mu}{|S|}$ is one of the two preimages of $x$ by the map $\psi$ defined above. \end{itemize} \end{proof} \section{Spectra of infinite Schreier graphs} \label{sec:infinite} Once we have found $\spec(M_n)$ in Theorem~\ref{thm:finite_spec}, we can prove Theorem~\ref{thm:spectrum_schreier} using the relation $\spec(M_\xi) = \overline{\bigcup\limits_{n\ge0} \spec(M_n)}$, for any $\xi \in X^\mathbb{N}$. In particular, the spectrum does not depend on the choice of $\xi$. The inclusion $\subseteq$ follows from weak containment of representations, see~\cite{dixmier}, Theorem 3.4.9. 
The equality holds if $\Gamma$ is amenable, which is the case here, because our graphs are of polynomial growth (see~\cite{B12}), and hence amenable. \begin{center} \begin{figure} \caption{Illustration of $F^{-k}(0)$.} \label{fig:parabolas} \end{figure} \end{center} \spectrumschreier* \begin{proof} The first statement is an immediate consequence of Theorem~\ref{thm:finite_spec}, as explained above. We just remark that $1$ is obtained as a preimage by $\psi$ of $d$, the limit of the sequence $(F_1^{-k}(0))_k$, where $F_1^{-1}$ is the positive branch of the inverse of $F$. For $d = 2$, the map $F$ is $x^2 - 2$, whose Julia set is the interval $[-2, 2]$, and $\psi(t) = 2^{m+1}t^2 - (2^{m+1} - 4)t - 2$. We can find the preimages $t$ of any $x \in [-2, 2]$ by $\psi$: \[ x = 2^{m+1}t^2 - (2^{m+1} - 4)t - 2 \implies 2^{m+1}t^2 - (2^{m+1} - 4)t - (2 + x) = 0 \implies \]\[ \implies t = \frac{2^{m-1} - 1}{2^m} \pm \frac{1}{2^m}\sqrt{4^{m-1} + 1 + 2^{m-1}x}. \] And so \[ t \in \frac{1}{2^m} \left(2^{m-1} - 1 \pm \sqrt{4^{m-1} + 1 + 2^{m-1}[-2,2]}\right) = \frac{1}{2^m} \left(2^{m-1} - 1 \pm \sqrt{[(2^{m-1} - 1)^2, (2^{m-1} + 1)^2]}\right) = \]\[ = \frac{1}{2^m} \left(2^{m-1} - 1 \pm [2^{m-1} - 1, 2^{m-1} + 1]\right) = \frac{1}{2^m} \left([-2, 0]\cup[2^m - 2, 2^m]\right) = [-\frac{1}{2^{m-1}}, 0]\cup[1 - \frac{1}{2^{m-1}}, 1]. \] If $d \ge 3$, then the Julia set of $F$ is a Cantor set of zero Lebesgue measure, and its two preimages by $\psi$ are still Cantor sets lying on either side of the minimal point of $\psi$, $\frac{|S| - 2}{2|S|}$. Intuitively, we can regard $\spec^0(\Gamma)$ as two infinite trees (one for each branch of $\psi^{-1}$), and $\spec^\infty(\Gamma)$ as their boundary. This completes the proof. 
\end{proof} The map $\psi$ is symmetric about its minimal point $v = \frac{|S|-2}{2|S|}$, and satisfies \[ \psi^{-1}(d) = \left\{1, -\frac{2}{|S|}\right\}, \qquad \psi^{-1}(-d) = \left\{ v \pm \frac{\sqrt{(|S| + 2)^2 - 8(|S| - d+2)}}{2|S|}\right\}, \]\[ \psi^{-1}(-d(d-1)) = \left\{ \frac{|S|-d}{|S|}, \frac{d-2}{|S|}\right\}. \] For example, for $m=1$, we have $\psi(t) = 4(d-1)^2t^2 - 4(d-1)(d-2)t - (3d-4)$, and \[ \spec(M_\xi) = \psi^{-1}\left(\overline{\bigcup_{n \ge 0}F^{-n}(0)}\right), \] for any $\xi \in X^\mathbb{N}$, as in this case $\frac{|S|-d}{|S|}$ belongs to the preimage of the Julia set by $\psi$. In the proof of Theorem~\ref{thm:finite_spec} we actually found the explicit multiplicities of the eigenvalues of $M_n$. Before moving on to study the Kesten spectral measures, let us use these multiplicities to compute the density of states (empirical measure) of the graphs $\{\Gamma_n\}_n$ (see the definition just before Theorem~\ref{thm:KNS_spectral_measure} in the Introduction). It represents the spatial averaging of the Kesten measures. \begin{center} \begin{figure} \caption{$\spec(M_\xi)$ for Grigorchuk's group (left) and the Fabrykowski-Gupta group (right), for any $\xi \in X^\mathbb{N}$.} \label{fig:spectra} \end{figure} \end{center} \KNSspectralmeasure* \begin{proof} Let $\nu_n$ be the normalized eigenvalue counting measure of $M_n$, with eigenvalues counted with multiplicity, i.e. \[ \nu_n = \frac{1}{d^n}\sum_{\lambda \in \spec(M_n)}\delta_\lambda. \] From the multiplicities computed in the proof of Theorem~\ref{thm:finite_spec}, we have that $\nu_0 = \delta_1$, $\nu_1 = \frac{1}{d}(\delta_1 + (d-1)\delta_{\frac{|S| - d}{|S|}})$ and, for $n \ge 2$, \[ \nu_n = \frac{1}{d^n}\left(\delta_1 + \left((d-2)d^{n-1} + 1\right)\delta_{\frac{|S| - d}{|S|}} + \sum_{k=0}^{n-2}\sum_{x \in F^{-k}(0)}\left((d-2)d^{n-k-2} + 1\right)\left(\delta_{\psi^{-1}_0(x)} + \delta_{\psi^{-1}_1(x)}\right)\right), \] with $\psi^{-1}_0$ and $\psi^{-1}_1$ being the two branches of the inverse of $\psi$. 
For $d > 2$, we observe, in the limit as $n\to\infty$, the measure \[ \nu = \frac{d-2}{d}\delta_{\frac{|S| - d}{|S|}} + \sum_{n \ge 0}\sum_{x \in F^{-n}(0)}\frac{d-2}{d^{n+2}}\left(\delta_{\psi^{-1}_0(x)} + \delta_{\psi^{-1}_1(x)}\right), \] as in the statement. For $d = 2$, all the multiplicities of the eigenvalues in the finite graphs are $1$, or equivalently, every eigenvalue of $M_n$ has the same measure $\frac{1}{d^n}$. When taking the limit, the measure of each atom tends to zero and the set of eigenvalues is dense in either one ($m=1$) or two ($m \ge 2$) intervals. However, any set of positive spectral measure still has to be the union of cones of the tree of preimages of $F$ plus their closure, which would have positive Lebesgue measure. Hence, $\nu$ is absolutely continuous with respect to the Lebesgue measure. We can find its precise density if we notice the following, for $d=2$ and $n \ge 1$: \[ \spec(\Gamma_n) = \left\{ \frac{1}{2} - \frac{1}{2^m} + \frac{(-1)^\epsilon}{2^m}\sqrt{4^{m-1} + 1 + 2^m \cos\theta} \bigm| \epsilon \in \{0, 1\}, \theta \in \frac{2\pi\mathbb{Z}}{2^n} \right\} \setminus \left\{0, - \frac{1}{2^{m-1}} \right\}. \] Indeed, from the proof of Theorem~\ref{thm:finite_spec} we recover the two branches of the inverse of $\psi$: \[ \psi_\epsilon(x) = \frac{1}{2} - \frac{1}{2^m} + \frac{(-1)^\epsilon}{2^m}\sqrt{4^{m-1} + 1 + 2^{m-1}x}. \] Any $x \in F^{-k}(0)$ can be written as $x = \pm \sqrt{2 + y}$, with $y \in F^{-(k-1)}(0)$. We can hence complete the proof of the equality above by induction, using the trigonometric identity $2\cos(\frac{\theta}{2}) = \pm \sqrt{2 + 2\cos(\theta)}$. This allows us to find an injective, measure-preserving map $\chi: [0,\pi]\times \{0, 1\} \to \mathbb{R}$, defined by \[ \chi(\theta, \epsilon) = \frac{1}{2} - \frac{1}{2^m} + \frac{(-1)^\epsilon}{2^m}\sqrt{4^{m-1} + 1 + 2^m \cos\theta}, \] with the spectrum uniformly distributed on $[0,\pi]\times \{0, 1\}$. 
The measure of any subset $E \subset \mathbb{R}$ is $\nu(E) = \lambda(\chi^{-1}(E))$, with $\lambda$ being the Lebesgue measure on $[0,\pi]\times \{0, 1\}$. The density $g(x)$ of $\nu$ is thus given by \[ g(x) = \frac{1}{2\pi}\frac{d}{dx}\chi^{-1}(x), \] which coincides with the expression in the statement. \end{proof} We end this section by proving that, in the case $d = 2$, the Kesten spectral measures on the graphs $\Gamma_\xi$, for all $\xi$ outside the orbit of $1^\mathbb{N}$, are equal to the density of states. More precisely, we prove the following result. \begin{center} \begin{figure} \caption{Densities of the spectral measures $\mu_\xi$ for $d=2$ and $m=2$. In blue, the symmetric density corresponds to points not in the orbit of $1^\mathbb{N}$.} \label{fig:spectral-measures} \end{figure} \end{center} \Kestenspectralmeasurebinary* \begin{proof} First recall that for any $\xi$ not in the orbit of $1^\mathbb{N}$ the graphs $\Gamma_\xi$ are two-ended lines. More precisely, every vertex has $2^{m-1} - 1$ loops, $2^m - 2^{m-1}$ edges to one neighbor and one edge to the other neighbor. The simple random walk on such graphs is described by the Markov chain on $\mathbb{Z}$ with probability $\frac{1}{2} - \frac{1}{2^m}$ of staying at any vertex, and alternating probabilities $\frac{1}{2}$ and $\frac{1}{2^m}$ on the other edges. This implies that the Kesten spectral measures $\mu_\xi$ do not depend on the point $\xi$, except for $\xi$ in the orbit of $1^\mathbb{N}$. The density of states $\nu$ is the integral of the Kesten measures $\mu_\xi$ over all of $X^\mathbb{N}$ (see Theorem 10.8 in~\cite{G11}), but we just showed that they are all equal on a subset of $X^\mathbb{N}$ of measure one. Hence, we necessarily have $\mu_\xi = \nu$ for every $\xi$ in that subset. The density $h(x)$ of $\mu_{1^\mathbb{N}}$ is computed with an approach similar to that in~\cite{GK12}. 
It uses the fact that the Stieltjes transform of the density of a spectral measure of the Markov operator on a graph coincides with its moment-generating function. We omit the technical computations. \end{proof} Recall that fixing $d=2$ and $m$ gives us uncountably many isospectral groups. Moreover, for those groups, Proposition~\ref{prop:Kesten_spectral_measure_binary} shows that, for a subset of boundary points of measure one, the Kesten spectral measures on the orbital Schreier graphs coincide. It would be very interesting to determine the Kesten spectral measures on the Cayley graphs of these groups. \section{Pure point spectrum and eigenfunctions} \label{sec:pure_point_eigenfunctions} This section is devoted to the proof of Theorem~\ref{thm:Kesten_spectral_measure}. We will establish that, for $d \ge 3$, the Kesten spectral measures on $\Gamma_\xi$ are discrete and the eigenfunctions of the Markov operator $M_\xi$ are finitely supported, for every $\xi$ in a set of uniform Bernoulli measure one. To do that, we will use the following strategy. We will first find the eigenfunctions on the finite graphs $\Gamma_n$ (Proposition~\ref{prop:eigenfunctions_finite} and Corollary~\ref{cor:basis_Mn}). Next, we will extend those to $\Gamma_\xi$ and show that some of them remain eigenfunctions (Theorem~\ref{thm:eigenfunctions_Mxi}). Finally, we will show that the set $\mathcal{F}$ of eigenfunctions that we constructed is complete for every $\xi$ in a set of uniform Bernoulli measure one. As $\mathcal{F}$ is a complete set of eigenfunctions for $M_\xi$, any spectral measure $\mu_f$ of $M_\xi$ associated to $f \in \ell^2(\Gamma_\xi)$ must be discrete; in particular, this holds for the Kesten spectral measures $\mu_\eta$, $\eta \in \text{Vert}(\Gamma_\xi)$. Moreover, we show that all functions in $\mathcal{F}$ have finite support. In this section we assume $d \ge 3$. Let us write ${\ell^2_n} = \ell^2(V(\Gamma_n))$ and $\ell^2 = \ell^2(V(\Gamma_\xi))$.
We start by defining a notion of antisymmetry on the graphs $\Gamma_n$, which will be satisfied by the eigenfunctions. Let $\tau_i = (i, i+1) \in \mathrm{Sym}(X)$, for $i \in \{0, \dots, d-2 \}$, and let $\Phi^i_n: \Gamma_n \to \Gamma_n$ be the automorphisms of $\Gamma_n$ defined by $\Phi^i_n(v_0 \dots v_{n-1}) = v_0 \dots v_{n-2}\tau_i(v_{n-1})$. Recall that the graph $\Gamma_n$ can be decomposed as $d$ copies of $\Gamma_{n-1}$, each of which is connected to the others only through one vertex. $\Phi^i_n$ exchanges the $i$-th and ($i+1$)-th copies of $\Gamma_{n-1}$ in this decomposition. We will say that $f \in {\ell^2_n}$ is antisymmetric with respect to $\Phi^i_n$ if $f = - f \circ \Phi^i_n$. In particular, this implies that $f$ is supported only on the $i$-th and ($i+1$)-th copies of $\Gamma_{n-1}$ in $\Gamma_n$. \begin{prop} \label{prop:basis_MN} Let $N \ge 1$ and $\lambda \in \spec(M_N) \setminus \spec(M_{N-1})$. There is a basis $\mathcal{F}_{\lambda, N} = \{f_0, \dots, f_{d-2}\}$ of the $\lambda$-eigenspace of $M_N$, such that $f_i$ is antisymmetric with respect to $\Phi^i_N$, for every $i\in\{0, \dots, d-2\}$. In particular, each $f_i$ is supported in $X^{N-1}i \sqcup X^{N-1}(i+1)$. \end{prop} \begin{proof} We know that the multiplicity of $\lambda$ in $\spec(M_N)$ is exactly $d-1$, as we computed in the proof of Theorem~\ref{thm:spectrum_schreier}. Due to the symmetry of $\Gamma_N$, given a $\lambda$-eigenfunction $f \in \ell^2_N$, we know that $f_i = f - f \circ \Phi^i_N$ will be antisymmetric with respect to $\Phi^i_N$ and will still be a $\lambda$-eigenfunction, for any $i\in\{0, \dots, d-2\}$. Furthermore, the fact that these functions are linearly independent becomes clear upon examination of their supports.
\end{proof} Now, using the notations of Proposition~\ref{prop:basis_MN}, we partition the basis $\mathcal{F}_{\lambda, N}$ into four parts in order to obtain the eigenfunctions of $M_n$, for $n \ge N$: \[ \mathcal{F}_{\lambda, N}^A := \{f_{d-2}\}, \qquad \mathcal{F}_{\lambda, N}^B := \{f_0\}, \]\[ \mathcal{F}_{\lambda, N}^C := \mathcal{F}_{\lambda, N} \setminus \{f_0, f_{d-2}\}, \qquad \mathcal{F}_{\lambda, N}^D := \emptyset. \] We would like to translate these eigenfunctions from $\Gamma_N$ to $\Gamma_n$, with $n \ge N$. Recall that the graph $\Gamma_{n+1}$ consists of $d$ copies of $\Gamma_n$ joined together by a central piece. We will take advantage of this decomposition with the following natural graph inclusions. Let $n \ge 1$ and $i \in X$. We define \[ \iota^i_n: \Gamma_n \to \Gamma_{n+1}, \quad \iota^i_n(v) = vi. \] We may also define the following induced linear operators (see Figure~\ref{fig:rhos}): \[ \rho^i_n: {\ell^2_n} \to \ell^2_{n+1}, \quad \rho^i_nf(vj) = \begin{cases} f\circ(\iota^i_n)^{-1}(vj) & \textrm{if } i=j \\ 0 & \textrm{otherwise} \end{cases}. \] Equivalently, $\rho^i_nf(vj) = f(v)\delta_{i,j}$, where $\delta_{i,j}$ is $1$ if $i=j$ and $0$ otherwise. Let also $\rho_n = \sum_{i \in X} \rho^i_n$. \begin{center} \begin{figure} \caption{Sketch of the transition operators $\rho^i_n$ and $\rho_n$.
The former copies $f$ onto the $i$-th copy of $\Gamma_n$ in $\Gamma_{n+1}$.} \label{fig:rhos} \end{figure} \end{center} Now, in order to get the eigenfunctions of $M_{n+1}$ from those of $M_n$, we apply these transition operators $\rho^i_n$ in the following way, for $n \ge N$, \[ \mathcal{F}_{\lambda, n+1}^A := \rho^{d-1}_n(\mathcal{F}_{\lambda, n}^A), \qquad \mathcal{F}_{\lambda, n+1}^B := \rho^0_n(\mathcal{F}_{\lambda, n}^A), \]\[ \mathcal{F}_{\lambda, n+1}^C := \bigsqcup_{i \neq 0, d-1} \rho^i_n(\mathcal{F}_{\lambda, n}^A) \sqcup \rho_n(\mathcal{F}_{\lambda, n}^B), \qquad \mathcal{F}_{\lambda, n+1}^D := \bigsqcup_{i \in X} \rho^i_n(\mathcal{F}_{\lambda, n}^C \sqcup \mathcal{F}_{\lambda, n}^D). \] Finally, we set $\mathcal{F}_{\lambda, n} := \mathcal{F}_{\lambda, n}^A \sqcup \mathcal{F}_{\lambda, n}^B \sqcup \mathcal{F}_{\lambda, n}^C \sqcup \mathcal{F}_{\lambda, n}^D$. \begin{rmk} One can look at the supports of the functions in $\mathcal{F}_{\lambda, n}^A$, $\mathcal{F}_{\lambda, n}^B$, $\mathcal{F}_{\lambda, n}^C$ and $\mathcal{F}_{\lambda, n}^D$ to verify that these four sets are disjoint, and the following statements can be inductively proven: \begin{itemize} \item $|\mathcal{F}_{\lambda, n}^A| = |\mathcal{F}_{\lambda, n}^B| = 1$, $\forall n \ge N$. \item $|\mathcal{F}_{\lambda, N}^C| = d-3$, and $|\mathcal{F}_{\lambda, n}^C| = d-1$, $\forall n \ge N+1$. \item $|\mathcal{F}_{\lambda, n}| = (d-2)d^{n-N} + 1$, $\forall n \ge N$. \end{itemize} The sizes of $\mathcal{F}_{\lambda, n}^A$, $\mathcal{F}_{\lambda, n}^B$ and $\mathcal{F}_{\lambda, n}^C$ are uniformly bounded for all $n$. However, the size of $\mathcal{F}_{\lambda, n}^D$ grows with $n$. Furthermore, notice that, by construction, the following statements hold for every $n \ge N$: \[ \forall f \in \mathcal{F}_{\lambda, n}\setminus\mathcal{F}_{\lambda, n}^B, \quad f((d-1)^{n-1}0) = 0, \]\[ \forall f \in \mathcal{F}_{\lambda, n}\setminus \mathcal{F}_{\lambda, n}^A, \quad f((d-1)^n) = 0.
\] \end{rmk} \begin{prop} \label{prop:eigenfunctions_finite} Let $N \ge 1$ and $\lambda \in \spec(M_N) \setminus \spec(M_{N-1})$. Then $\mathcal{F}_{\lambda, n}$ is a basis of the $\lambda$-eigenspace of $M_n$, for every $n \ge N$. \end{prop} \begin{proof} Let us proceed by induction on $n$, with the base case $n=N$ covered in Proposition~\ref{prop:basis_MN}. Let $f \in \mathcal{F}_{\lambda, n}$ (a $\lambda$-eigenfunction of $M_n$ by the induction hypothesis), and let $v \in X^n$ and $i, j \in X$. On one hand we have \[ \rho^i_nM_nf(vj) = M_nf(v) \delta_{i,j} = \frac{1}{|S|}\sum_{s \in S} f(s(v)) \delta_{i,j}. \] On the other hand, if $v \neq (d-1)^{n-1}0$, we have $s(vj) = s(v)j$. In that case, \[ M_{n+1}\rho^i_nf(vj) = \frac{1}{|S|}\sum_{s \in S} \rho^i_nf(s(vj)) = \frac{1}{|S|}\sum_{s \in S} \rho^i_nf(s(v)j) = \frac{1}{|S|}\sum_{s \in S} f(s(v)) \delta_{i,j}. \] So we have $M_{n+1}\rho^i_nf(vj) = \rho^i_nM_nf(vj) = \lambda \rho^i_nf(vj)$ if $v \neq (d-1)^{n-1}0$. For $v=(d-1)^{n-1}0$, we need to further decompose the sums: \[ \rho^i_nM_nf(vj) = M_nf(v)\delta_{i,j} = \]\[ = \frac{1}{|S|}\left( \sum_{k = 1}^{d-1}f(a^k(v))\delta_{i,j} + \sum_{1 \neq b \in B} f(b(v))\delta_{i,j}\right) = \]\[ = \frac{1}{|S|}\left( \sum_{k = 1}^{d-1}f(a^k(v))\delta_{i,j} + \sum_{1 \neq b \in B} f(v)\delta_{i,j}\right), \] since $v$ is fixed by all $b\in B$, and \[ M_{n+1}\rho^i_nf(vj) = \]\[ = \frac{1}{|S|}\left( \sum_{k = 1}^{d-1}\rho^i_nf(a^k(vj)) + \sum_{1 \neq b \in B} \rho^i_nf(b(vj))\right) = \]\[ = \frac{1}{|S|}\left( \sum_{k = 1}^{d-1}\rho^i_nf(a^k(v)j) + \sum_{1 \neq b \in B} \rho^i_nf(v \omega_{n-1}(b)(j))\right) = \]\[ = \frac{1}{|S|}\left( \sum_{k = 1}^{d-1}f(a^k(v))\delta_{i,j} + \sum_{1 \neq b \in B} f(v)\delta_{i,\omega_{n-1}(b)(j)}\right). \] By subtracting both expressions, we get \[ (M_{n+1} - \lambda)\rho^i_nf(vj) = \frac{1}{|S|}\left( \sum_{1 \neq b \in B} f(v)(\delta_{i,\omega_{n-1}(b)(j)} - \delta_{i,j})\right).
\] We observe now that if $f \in \mathcal{F}_{\lambda, n}\setminus \mathcal{F}_{\lambda, n}^B$, by construction, we have $f(v) = 0$, and so $\rho^i_nf$ is a $\lambda$-eigenfunction of $M_{n+1}$. Otherwise, if $f \in \mathcal{F}_{\lambda, n}^B$, we sum the equations over all $i \in X$: \[ (M_{n+1} - \lambda)\rho_nf(vj) = \frac{1}{|S|}\left( \sum_{1 \neq b \in B} f(v)\sum_{i \in X}(\delta_{i,\omega_{n-1}(b)(j)} - \delta_{i,j})\right) = 0. \] In this case, $\rho_nf$ is an eigenfunction of $M_{n+1}$. We can inductively verify that the functions in $\mathcal{F}_{\lambda, n+1}$ are linearly independent by looking at the supports of the images of the functions from $\mathcal{F}_{\lambda, n}$ by $\rho^i_n$ and $\rho_n$. Finally, we already know that $|\mathcal{F}_{\lambda, n+1}| = (d-2)d^{n+1-N} + 1$, which equals the multiplicity of $\lambda$ for $M_{n+1}$, and so the dimension of the $\lambda$-eigenspace of $M_{n+1}$. \end{proof} Set $\mathcal{F}_{1, n}$ to be the singleton containing the constant function equal to one on $\Gamma_n$, $n \ge 0$. \begin{cor} \label{cor:basis_Mn} The set \[ \bigsqcup_{\lambda \in \spec(M_n)} \mathcal{F}_{\lambda, n} \] is a basis of ${\ell^2_n}$ that consists of eigenfunctions of $M_n$. \end{cor} We now want to describe the eigenfunctions of $M_\xi$, the Markov operator on the infinite graph $\Gamma_\xi$, whose vertex set is $G\xi$ (the cofinality class of $\xi$ in $X^\mathbb{N}$). We will do so by translating the eigenfunctions on the finite graphs $\Gamma_n$ to $\Gamma_\xi$ via the transfer operators we now define. Let $\tilde{\iota}_n$ be the canonical extension of the inclusions $\iota^i_n$ for the finite graphs $\Gamma_n$ to the infinite graph $\Gamma_\xi$. Namely, $\tilde{\iota}_n: \Gamma_n \to \Gamma_\xi$, defined by $\tilde{\iota}_n(v) = v\sigma^n(\xi)$. In addition, we define the following natural operators linking functions on $\Gamma_n$ with functions on $\Gamma_\xi$.
For $n \ge 0$, we set \[ \tilde{\rho}_n: {\ell^2_n} \to \ell^2, \quad \tilde{\rho}_nf(\eta) = \begin{cases} f\circ\tilde{\iota}_n^{-1}(\eta) & \textrm{if } \sigma^n(\xi) = \sigma^n(\eta) \\ 0 & \textrm{otherwise} \end{cases}. \] Equivalently, $\tilde{\rho}_nf(\eta) = f(\eta_0\dots\eta_{n-1})\delta_{\sigma^n(\xi), \sigma^n(\eta)}$, where again $\delta_{a, b}$ is one if $a=b$ and zero otherwise. Informally, we could write $\tilde{\iota}_n = \dots \circ \iota^{\xi_{n+1}}_{n+1} \circ \iota^{\xi_n}_n$. Let us then define the following set: \[ \mathcal{F}_\lambda := \bigcup_{n \ge N} \tilde{\rho}_n(\mathcal{F}_{\lambda, n}^D). \] If there exists $r \ge 0$ such that $\sigma^r(\xi) = (d-1)^\mathbb{N}$, let it be minimal and set $R = \max\{r, N\}$. In that case, we also include the single function in $\tilde{\rho}_R(\mathcal{F}_{\lambda, R}^A)$ in the definition of $\mathcal{F}_\lambda$. \begin{thm} \label{thm:eigenfunctions_Mxi} Let $N \ge 1$ and $\lambda \in \spec(M_N) \setminus \spec(M_{N-1})$. Then every $f\in\mathcal{F}_\lambda$ is a $\lambda$-eigenfunction of $M_\xi$, for every $\xi \in X^\mathbb{N}$. \end{thm} \begin{proof} Let $n \ge N$ and $f \in \mathcal{F}_{\lambda, n}^D$. We will show that $\tilde{\rho}_n(f)$ is a $\lambda$-eigenfunction of $M_\xi$. Let $\eta \in G\xi$ and denote by $v$ its prefix of length $n$, so that $\eta = v \sigma^n(\eta)$. Assume first that $v$ is neither $(d-1)^{n-1}0$ nor $(d-1)^n$. In that case, for any $s \in S$, $s(\eta) = s(v \sigma^n(\eta)) = s(v)\sigma^n(\eta)$. On one hand, \[ M_{\xi}\tilde{\rho}_nf(\eta) = \frac{1}{|S|}\sum_{s \in S} \tilde{\rho}_nf (s(\eta)) = \frac{1}{|S|}\sum_{s \in S} \tilde{\rho}_nf (s(v)\sigma^n(\eta)) = \]\[ = \frac{1}{|S|}\sum_{s \in S} f(s(v))\delta_{\sigma^n(\xi),\sigma^n(\eta)}. \] On the other hand, \[ \tilde{\rho}_nM_nf(\eta) = M_nf(v)\delta_{\sigma^n(\xi), \sigma^n(\eta)} = \frac{1}{|S|}\sum_{s \in S} f(s(v))\delta_{\sigma^n(\xi),\sigma^n(\eta)}. \] We observe that both expressions are equal.
Therefore, \[ M_\xi \tilde{\rho}_nf(\eta) = \tilde{\rho}_nM_nf(\eta) = \lambda \tilde{\rho}_nf(\eta). \] Now let $v = (d-1)^{n-1}0$ or $v = (d-1)^n$. In that case, as $f \in \mathcal{F}_{\lambda, n}^D$, we have $f(v) = 0$. Then, \[ M_\xi \tilde{\rho}_nf(\eta) = \frac{1}{|S|}\sum_{s \in S} \tilde{\rho}_nf (s(\eta)) = \frac{1}{|S|}\sum_{s \in S} \tilde{\rho}_nf (s(v\sigma^n(\eta))) = \]\[ = \frac{1}{|S|}\sum_{s \in S} \tilde{\rho}_nf (s(v)s_{v}(\sigma^n(\eta))) = \frac{1}{|S|}\sum_{s \in S} f(s(v)) \delta_{\sigma^n(\xi), s_{v}(\sigma^n(\eta))}. \] In addition, \[ \tilde{\rho}_nM_nf(\eta) = M_nf(v)\delta_{\sigma^n(\xi), \sigma^n(\eta)} = \frac{1}{|S|}\sum_{s \in S} f(s(v))\delta_{\sigma^n(\xi),\sigma^n(\eta)}. \] We have two cases: either $s \in A$, which means that $s_v$ is trivial, or $s \in B$, so $s(v) = v$ and then $f(s(v)) = 0$. In any case, the two expressions above coincide. Consequently, $M_\xi \tilde{\rho}_nf(\eta) = \tilde{\rho}_nM_nf(\eta) = \lambda \tilde{\rho}_nf(\eta)$ as well for $v = (d-1)^{n-1}0$ and $v = (d-1)^n$, which shows that $\tilde{\rho}_nf$ is a $\lambda$-eigenfunction of $M_\xi$. The case where $\xi$ is cofinal with $(d-1)^\mathbb{N}$ and $f \in \mathcal{F}_{\lambda, R}^A$ is proven in a very similar way. The only difference is that now $f(v)$ is not necessarily zero for $v = (d-1)^R$. However, for any $s \in B$, $\delta_{\sigma^R(\xi), s_v(\sigma^R(\eta))} = \delta_{s_v^{-1}(\sigma^R(\xi)), \sigma^R(\eta)} = \delta_{\sigma^R(\xi), \sigma^R(\eta)}$, since $\sigma^R(\xi) = (d-1)^\mathbb{N}$ is fixed by $s_v^{-1}$. Therefore, both expressions are still equal and the statement remains true for this case too. \end{proof} We finally introduce the set \[ \mathcal{F} := \bigcup_{N \ge 1}\bigcup_{\substack{\lambda \in \spec(M_N) \\ \lambda \not\in \spec(M_{N-1})}} \mathcal{F}_\lambda. \] Notice that if $\lambda \in \spec(M_N)\setminus\spec(M_{N-1})$, the size of the support of any $f \in \mathcal{F}_\lambda$ is either $2d^{N-1}$ or $2d^N$.
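The cardinality claims in the remark following the definition of the families $\mathcal{F}_{\lambda,n}^A, \dots, \mathcal{F}_{\lambda,n}^D$ depend only on the size recursion induced by the transition operators; a short script (a sketch, not part of the paper) confirms them:

```python
def family_sizes(d, steps):
    """Sizes (|A|, |B|, |C|, |D|) of the families F^A, F^B, F^C, F^D,
    starting at level N with (1, 1, d-3, 0) and iterating
    A' = A, B' = A, C' = (d-2)|A| + |B|, D' = d(|C| + |D|),
    as prescribed by the operators rho^i_n and rho_n.
    Returns the list of totals |F_{lambda,n}| for n = N, ..., N + steps."""
    a, b, c, dd = 1, 1, d - 3, 0
    totals = [a + b + c + dd]
    for _ in range(steps):
        a, b, c, dd = a, a, (d - 2) * a + b, d * (c + dd)
        totals.append(a + b + c + dd)
    return totals

for d in (3, 4, 7):
    totals = family_sizes(d, 6)
    # matches |F_{lambda,n}| = (d-2) d^{n-N} + 1 from the remark
    assert totals == [(d - 2) * d ** k + 1 for k in range(7)]
```

The totals grow like $d^{n-N}$, which is consistent with the multiplicity of $\lambda$ in $\spec(M_n)$ used in the proof of Proposition~\ref{prop:eigenfunctions_finite}.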
\begin{center} \begin{figure} \caption{Supports of eigenfunctions of $M_\xi$.} \label{fig:gupta-fabrykowski-fups-1} \end{figure} \end{center} \begin{center} \begin{figure} \caption{Supports of eigenfunctions of $M_\xi$.} \label{fig:gupta-fabrykowski-fups-2} \end{figure} \end{center} We conclude the section with the proof of Theorem~\ref{thm:Kesten_spectral_measure}. For that, we need to extend the notion of antisymmetric functions to $\Gamma_\xi$, in a way similar to what is done in~\cite{BGJRT19}. We define the space of antisymmetric functions on $\Gamma_n$ by \[ \ell^2_{a,n} = \langle f \in {\ell^2_n} \mid \exists i \in \{1, \dots, d-2\},\quad f = -f \circ \Phi^i_n \rangle. \] \begin{lem} \label{lem:ellan} $\forall n \ge 1$, \[ \ell^2_{a,n} = \left\langle \bigsqcup_{N=1}^{n-1} \bigsqcup_{\substack{\lambda \in \spec(M_N) \\ \lambda \not\in \spec(M_{N-1})}} \bigsqcup_{i=1}^{d-2} (\rho^{i+1}_{n-1} - \rho^i_{n-1})(\mathcal{F}_{\lambda, n-1}\setminus \mathcal{F}_{\lambda, n-1}^B) \quad \sqcup \quad \bigsqcup_{\substack{\lambda \in \spec(M_n) \\ \lambda \not\in \spec(M_{n-1})}} \mathcal{F}_{\lambda, n}\setminus \mathcal{F}_{\lambda, n}^B \right\rangle. \] \end{lem} \begin{proof} First notice that all the functions on the right-hand side belong to $\ell^2_{a,n}$ by construction. Indeed, let $1 \le N \le n-1$ and $\lambda\in \spec(M_N)\setminus\spec(M_{N-1})$. Let also $f \in \mathcal{F}_{\lambda, n-1}\setminus \mathcal{F}_{\lambda, n-1}^B$ and $1 \le i \le d-2$. On one hand we have \[ (\rho^{i+1}_{n-1} - \rho^i_{n-1})f \circ \Phi^i_n (v_0\dots v_{n-1}) = (\rho^{i+1}_{n-1} - \rho^i_{n-1})f (v_0\dots v_{n-2}\tau_i(v_{n-1})) = \]\[ = f(v_0 \dots v_{n-2})(\delta_{i+1, \tau_i(v_{n-1})} - \delta_{i, \tau_i(v_{n-1})}). \] On the other hand, \[ (\rho^{i+1}_{n-1} - \rho^i_{n-1})f (v_0\dots v_{n-1}) = f(v_0 \dots v_{n-2})(\delta_{i+1, v_{n-1}} - \delta_{i, v_{n-1}}).
\] If $v_{n-1} \notin \{i, i+1\}$, then $\tau_i(v_{n-1}) = v_{n-1}$ and so both expressions vanish. Otherwise, $\tau_i$ exchanges $i$ and $i+1$ and so the latter equals the former with opposite sign. If we now take $\lambda \in \spec(M_n)\setminus\spec(M_{n-1})$ and $f \in \mathcal{F}_{\lambda, n}\setminus \mathcal{F}_{\lambda, n}^B$, the fact that $f$ is antisymmetric follows from Proposition~\ref{prop:basis_MN}. Finally, we check that the dimensions of both subspaces agree. We know that $\dim(\ell^2_{a,n}) = (d-2)d^{n-1}$, and the functions on the right-hand side are linearly independent, which can again be verified inductively using our knowledge of their supports. Their number is: \[ \sum_{N=1}^{n-1} \sum_{\substack{\lambda \in \spec(M_N) \\ \lambda \not\in \spec(M_{N-1})}} \sum_{i=1}^{d-2} |\mathcal{F}_{\lambda, n-1}\setminus \mathcal{F}_{\lambda, n-1}^B| \quad + \quad \sum_{\substack{\lambda \in \spec(M_n) \\ \lambda \not\in \spec(M_{n-1})}} |\mathcal{F}_{\lambda, n}\setminus \mathcal{F}_{\lambda, n}^B| = \]\[ = \sum_{N=1}^{n-1} \sum_{\substack{\lambda \in \spec(M_N) \\ \lambda \not\in \spec(M_{N-1})}} \sum_{i=1}^{d-2} (d-2)d^{n-1-N} \quad + \quad \sum_{\substack{\lambda \in \spec(M_n) \\ \lambda \not\in \spec(M_{n-1})}} (d-2) = \]\[ = (d-2)^2\sum_{N=1}^{n-1} d^{n-1-N} |\spec(M_N)\setminus\spec(M_{N-1})| \quad + \quad (d-2) |\spec(M_n)\setminus\spec(M_{n-1})| = \]\[ = (d-2)^2\sum_{N=1}^{n-1} d^{n-1-N} 2^{N-1} + (d-2) 2^{n-1} = \]\[ = (d-2)(d^{n-1} - 2^{n-1}) + (d-2) 2^{n-1} = \]\[ = (d-2)d^{n-1} = \dim(\ell^2_{a,n}). \] \end{proof} Let $P_n: \ell^2 \to \ell^2$ be the orthogonal projection onto the subspace of functions supported in $\tilde{\Gamma}_n := \tilde{\iota}_n(\Gamma_n) = X^n\sigma^n(\xi)$, so that $P_nf(\eta) = f(\eta) \chi_{\tilde{\Gamma}_n}(\eta)$. Let also $P_n': \ell^2 \to {\ell^2_n}$ be the operator defined by $P_n'f = f \circ \tilde{\iota}_n$.
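The closed-form evaluation of the sum at the end of the proof of Lemma~\ref{lem:ellan} can be double-checked mechanically, using $|\spec(M_N)\setminus\spec(M_{N-1})| = 2^{N-1}$ (a sketch, not part of the paper):

```python
def antisymmetric_dim_count(d, n):
    # number of basis functions on the right-hand side of Lemma (lem:ellan):
    # sum over 1 <= N <= n-1 of (d-2)^2 * d^(n-1-N) * 2^(N-1),
    # plus (d-2) * 2^(n-1) for the new eigenvalues at level n
    total = sum((d - 2) ** 2 * d ** (n - 1 - N) * 2 ** (N - 1)
                for N in range(1, n))
    return total + (d - 2) * 2 ** (n - 1)

for d in (3, 4, 7):
    for n in range(1, 10):
        # agrees with dim l^2_{a,n} = (d-2) * d^(n-1)
        assert antisymmetric_dim_count(d, n) == (d - 2) * d ** (n - 1)
```

The geometric sum telescopes exactly as in the proof, leaving $(d-2)d^{n-1}$ independently of how the $2^{N-1}$ terms are distributed.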
In addition, we define the space of antisymmetric functions on $\Gamma_\xi$ as follows: \[ \ell^2_a = \langle f \in \ell^2 \mid \exists n\in I_\xi: \quad \supp(f) \subset \tilde{\Gamma}_n, \quad P_n'f \in \ell^2_{a,n} \rangle, \] where $I_\xi = \{ n \in \mathbb{N} \mid \forall r \ge 0, \: (d-1)^r0 \textrm{ is not a prefix of } \sigma^n(\xi) \}$ is the set of indices $n$ such that $\sigma^n(\xi)$ has trivial $B$-action. Equivalently, it is the set of indices for which $\tilde{\Gamma}_n$ is connected to the rest of $\Gamma_\xi$ by just one vertex. \begin{lem} \label{lem:ella_in_spanF} $\ell^2_a$ is contained in $\langle\mathcal{F}\rangle$. \end{lem} \begin{proof} Let $f \in \ell^2_a$. Then there exists some $n \in I_\xi$ such that $\supp(f) \subset \tilde{\Gamma}_n$ and $P_n'f \in \ell^2_{a,n}$. In particular, either there exists some $r \ge 0$ such that $\xi_n\dots \xi_{n+r} = (d-1)^rj$, with $j \neq 0, d-1$, or $\sigma^n(\xi) = (d-1)^\mathbb{N}$, in which case we set $r = \infty$. Let $T_n$ be the basis of $\ell^2_{a,n}$ from Lemma~\ref{lem:ellan}. We first claim that for every $h \in T_n$, $\tilde{\rho}_nh \in \langle\mathcal{F}\rangle$. Indeed, let $1 \le N \le n-1$, $\lambda \in \spec(M_N)\setminus\spec(M_{N-1})$ and $i \in \{1, \dots, d-1\}$. Given a function $g \in \mathcal{F}_{\lambda, n-1}\setminus\mathcal{F}_{\lambda, n-1}^B$, since $i \neq 0$, we have $\rho^i_{n-1}g \in \mathcal{F}_{\lambda, n}\setminus\mathcal{F}_{\lambda, n}^B$. If $r=\infty$, then $\tilde{\rho}_n\rho^i_{n-1}g \in \mathcal{F}$ directly. Otherwise, as $\xi_n\dots \xi_{n+r} = (d-1)^rj$, we have $\rho^{\xi_{n+r}}_{n+r}\dots\rho^{\xi_n}_n\rho^i_{n-1}g \in \mathcal{F}_{\lambda, n+r+1}^C\cup\mathcal{F}_{\lambda, n+r+1}^D$, and so $\rho^{\xi_{n+r+1}}_{n+r+1}\dots\rho^{\xi_n}_n\rho^i_{n-1}g \in \mathcal{F}_{\lambda, n+r+2}^D$, independently of the value of $\xi_{n+r+1}$.
This means that $\tilde{\rho}_n\rho^i_{n-1}g = \tilde{\rho}_{n+r+2}\rho^{\xi_{n+r+1}}_{n+r+1}\dots\rho^{\xi_n}_n\rho^i_{n-1}g \in \mathcal{F}$. Finally, for any generator $h \in T_n$ of the form $h = (\rho^{i+1}_{n-1} - \rho^i_{n-1})g = \rho^{i+1}_{n-1}g - \rho^i_{n-1}g$, we showed that $\tilde{\rho}_nh \in \langle\mathcal{F}\rangle$. Similarly, let $\lambda \in \spec(M_n)\setminus\spec(M_{n-1})$, and $g \in \mathcal{F}_{\lambda, n}\setminus\mathcal{F}_{\lambda, n}^B$. If $r = \infty$, then $\tilde{\rho}_ng \in \mathcal{F}$, and otherwise we have $\rho^{\xi_{n+r}}_{n+r}\dots\rho^{\xi_n}_ng \in \mathcal{F}_{\lambda, n+r+1}^C\cup\mathcal{F}_{\lambda, n+r+1}^D$, provided $\xi_n\dots \xi_{n+r} = (d-1)^rj$. Consequently, $\rho^{\xi_{n+r+1}}_{n+r+1}\dots\rho^{\xi_n}_ng \in \mathcal{F}_{\lambda, n+r+2}^D$, again independently of $\xi_{n+r+1}$. Hence, $\tilde{\rho}_ng = \tilde{\rho}_{n+r+2}\rho^{\xi_{n+r+1}}_{n+r+1}\dots\rho^{\xi_n}_ng \in \mathcal{F}$. Hence, for every generator $h \in T_n$ of the form $h = g$, we also have $\tilde{\rho}_nh \in \langle\mathcal{F}\rangle$. To conclude, since $P_n'f \in \ell^2_{a,n}$, let $P_n'f = \sum_i c_i h_i$, with $h_i \in T_n$. The support of $f$ is contained in $\tilde{\Gamma}_n$, so $f = \tilde{\rho}_nP_n'f = \sum_i c_i \tilde{\rho}_nh_i \in \langle\mathcal{F}\rangle$, and hence $\ell^2_a$ is contained in $\langle \mathcal{F} \rangle$. \end{proof} Our next step is to show that $\ell^2_a$ is dense in $\ell^2$. However, we are only able to do this under some extra conditions on $\xi \in X^\mathbb{N}$, which fortunately define a subset $W$ of uniform Bernoulli measure one in $X^\mathbb{N}$. Observe that the antisymmetric subspace $\ell^2_a$ is of infinite dimension iff the set $I_\xi$ defined above is infinite. Equivalently, iff $\Gamma_\xi$ is one-ended.
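A letter $\xi_k \notin \{0, d-1\}$ already forces $k \in I_\xi$, since then no word of the form $(d-1)^r0$ can be a prefix of $\sigma^k(\xi)$. The following transfer-matrix sketch (not part of the paper) computes the exact probability that a uniformly random word of length $L$ contains no two consecutive such letters, and shows that it decays with $L$; this is the combinatorial mechanism behind such measure-one statements:

```python
from fractions import Fraction

def prob_no_consecutive_interior(d, L):
    """Exact probability that a uniform word of length L over {0,...,d-1}
    has no two consecutive letters both in {1,...,d-2}."""
    p = Fraction(d - 2, d)  # probability of an "interior" letter
    # DP over prefixes; state = whether the last letter was interior
    interior, boundary = Fraction(0), Fraction(1)
    for _ in range(L):
        # an interior letter may not follow an interior letter
        interior, boundary = boundary * p, (interior + boundary) * (1 - p)
    return interior + boundary

probs = [prob_no_consecutive_interior(3, L) for L in (1, 11, 21, 31, 41, 51)]
assert probs[0] == 1
assert all(x > y for x, y in zip(probs, probs[1:]))  # strictly decreasing
assert probs[-1] < Fraction(1, 10)
```

Since the avoidance probability decays geometrically, a random $\xi$ almost surely contains infinitely many consecutive pairs of interior letters, hence infinitely many pairs $k, k+1 \in I_\xi$.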
For the proof of Theorem~\ref{thm:Kesten_spectral_measure}, we will in fact need a slightly stronger condition than $\Gamma_\xi$ being one-ended. We will need not only that $I_\xi$ is infinite, but also that it contains consecutive pairs $k$ and $k+1$ infinitely often. Let us consider the subset $W \subset X^\mathbb{N}$ defined as $W = \{\xi \in X^\mathbb{N} \mid k, k+1 \in I_\xi \textrm{ for infinitely many } k \}$. Note that this set depends only on $d$, and not on $m$ or $\omega \in \Omega_{d, m}$. \Kestenspectralmeasure* \begin{proof} The set $W$ can be rewritten as \[ W = \{ \xi \in X^\mathbb{N} \mid \forall l \ge 0,\: \exists k\ge l,\: k, k+1\in I_\xi\}, \] and so its complement is $X^*Z$, with \[ Z = \{ \xi \in X^\mathbb{N} \mid \forall k \ge 0,\: k \not\in I_\xi \textrm{ or } k+1 \not\in I_\xi\}. \] Notice that if, for some $k \in \mathbb{N}$, $\xi_k \notin \{0, d-1\}$, then $k \in I_\xi$. Equivalently, if $k \not\in I_\xi$, then necessarily $\xi_k \in \{0, d-1\}$. Therefore, for any point $\xi \in Z$, either $0 \not\in I_\xi$ or $1 \not\in I_\xi$, which implies that at least one of $\xi_0$, $\xi_1$ lies in $\{0, d-1\}$. Hence, since $Z \subseteq \sigma^{-2}(Z)$ and the condition on $(\xi_0, \xi_1)$ is independent of $\sigma^2(\xi)$, \[ \mu(Z) \le \left(1 - \left(\frac{d-2}{d}\right)^2\right)\mu(Z) \Longrightarrow \mu(Z) = 0. \] Finally, $\mu(W) = 1 - \mu(X^\mathbb{N}\setminus W) = 1 - \mu(X^*Z) = 1 - \mu(Z) = 1$. We will now prove that the set $\mathcal{F}$ is complete for every $\xi \in W$. Let $f\in \ell^2$ such that $f\perp \ell^2_a$. Now let $n$ be the smallest integer such that $\normell{P_nf} \ge \frac{4}{5}\normell{f}$ and both $n, n+1 \in I_\xi$; such an $n$ exists since $\normell{P_nf} \to \normell{f}$ as $n \to \infty$ and $\xi \in W$. Our goal is to define an antisymmetric function approximating $P_nf$. Because $P_nf$ carries most of the norm of $f$, and $f \perp \ell^2_a$, this will allow us to conclude that $f$ must be zero. Let $i \in \{1, \dots, d-2\}$ be such that $\xi_n \in \{i, i+1\}$ (such an $i$ exists, since $n \in I_\xi$ implies $\xi_n \neq 0$).
Define $g := \rho^{\xi_n}_nP_n'f - \rho^{\xi_n}_nP_n'f \circ \Phi^i_{n+1} \in \ell^2_{a,n+1} \subset \ell^2_{n+1}$ and also $h := \tilde{\rho}_{n+1}g \in \ell^2$. In fact, by construction, $h \in \ell^2_a$. Then, \[ 0 = \langle f, h \rangle_{\ell^2} = \langle P_{n+1}'f, g \rangle_{\ell^2_{n+1}} = \]\[ = \langle P_{n+1}'f,\rho^{\xi_n}_nP_n'f \rangle_{\ell^2_{n+1}} - \langle P_{n+1}'f, \rho^{\xi_n}_nP_n'f \circ \Phi^i_{n+1} \rangle_{\ell^2_{n+1}} = \]\[ = \normellnn{\rho^{\xi_n}_nP_n'f}^2 - \langle P_{n+1}'f\circ \Phi^i_{n+1}, \rho^{\xi_n}_nP_n'f \rangle_{\ell^2_{n+1}} = \]\[ = \normell{P_nf}^2 - \langle Q_nf, P_nf \rangle_{\ell^2}, \] where $Q_nf(\eta) = f(\eta_0\dots\eta_{n-1}\tau_i(\eta_n)) \delta_{\sigma^n(\eta), \sigma^n(\xi)}$. Notice that this function is supported in $\tilde{\Gamma}_n$ and its values are those of $f$ on the subgraph $\tilde{\iota}_{n+1}\iota^{\tau_i(\xi_n)}_n(\Gamma_n)$. Therefore, its norm is at most that of $f - P_nf$, which is supported on $\Gamma_\xi\setminus\tilde{\Gamma}_n$, so $\normell{Q_nf} \le \normell{f - P_nf}$. Now we have, using the Cauchy-Schwarz inequality, \[ 0 = \normell{P_nf}^2 - \langle Q_nf, P_nf \rangle_{\ell^2} \ge \]\[ \ge \frac{4^2}{5^2}\normell{f}^2 - \normell{Q_nf}\normell{P_nf} \ge \]\[ \ge \frac{4^2}{5^2}\normell{f}^2 - \normell{f - P_nf}\normell{f} \ge \]\[ \ge \frac{4^2}{5^2}\normell{f}^2 - \sqrt{1-\frac{4^2}{5^2}}\normell{f}^2 = \]\[ = \left(\frac{16}{25} - \frac{3}{5}\right)\normell{f}^2 = \frac{1}{25}\normell{f}^2. \] Hence $f = 0$. \end{proof} \section{Spectra of Cayley graphs} \label{sec:spectra_cayley} Any spinal group $G$ with $d=2$ has subexponential growth. This fact is trivial for $m=1$ and is proven in~\cite{Gri84} for $m=2$, and essentially the same proof extends to $m \ge 3$. Therefore any spinal group $G$ with $d = 2$ is amenable. In that case, it is known that $\spec(M_\xi) \subset \spec(G)$ for every $\xi \in X^\mathbb{N}$. Our goal now is to prove Theorem~\ref{thm:spectrum_cayley_d2}.
The proof we provide is a modification of the one given in~\cite{GD17}, where one of the two inclusions is obtained via a version of Hulanicki's Theorem for graphs. For us it is enough to use the classical Hulanicki Theorem, which we now recall. \begin{thm}[Hulanicki's Theorem] Let $G$ be a locally compact group, and let $\lambda_G$ be its left-regular representation. $G$ is amenable if and only if $\lambda_G$ weakly contains every unitary representation of $G$. \end{thm} For any group $G$, its left-regular representation $\lambda_G$ in $\ell^2(G)$ can be extended to a representation of the group algebra $\mathbb{C}[G]$ by bounded operators, setting, for every $t = \sum_{g\in G} c_g g \in \mathbb{C}[G]$, $\lambda_G(t) = \sum_{g\in G} c_g \lambda_G(g)$. A unitary representation $\rho$ is \emph{weakly contained} in a unitary representation $\eta$ of $G$ (denoted $\rho \prec \eta$) if there exists a surjective homomorphism $C_\eta^* \twoheadrightarrow C_\rho^*$ mapping the operator $\eta(g)$ to $\rho(g)$, for every $g \in G$, where $C_\rho^*$ denotes the $C^*$-algebra generated by $\rho$ (and similarly for $C_\eta^*$). In~\cite{dixmier}, it is shown that $\rho \prec \eta$ if and only if $\spec(\rho(t)) \subset \spec(\eta(t))$, for every $t \in \mathbb{C}[G]$. \begin{proof}[Proof of Theorem~\ref{thm:spectrum_cayley_d2}] Theorem~\ref{thm:spectrum_schreier} shows that $\spec(M_\xi) = \left\lbrack -\frac{1}{2^{m-1}}, 0 \right\rbrack \cup \left\lbrack 1 -\frac{1}{2^{m-1}}, 1 \right\rbrack$, for any $\xi \in X^\mathbb{N}$. Since $G$ is amenable, we know by Hulanicki's Theorem that $\lambda_{G/\Stab(\xi)} \prec \lambda_G$, where $\lambda_{G/\Stab(\xi)}$ is the quasi-regular representation of $G$ in $\ell^2(G/\Stab(\xi))$, for any $\xi \in X^\mathbb{N}$. Considering $M = \frac{1}{|S|}\sum_{s \in S}s \in \mathbb{C}[G]$, this implies that $\spec(\lambda_{G/\Stab(\xi)}(M)) \subset \spec(\lambda_G(M))$, or, equivalently, $\spec(M_\xi) \subset \spec(G)$, for any $\xi \in X^\mathbb{N}$.
To prove the other inclusion, consider the element $t \in \mathbb{C}[G]$, defined as follows: \[ t = \frac{2}{|B|} \sum\limits_{b\in B}b - 1. \] We observe that $t^2 = 1$. Indeed, \[ t^2 = \left(\frac{2}{|B|} \sum\limits_{b\in B}b - 1\right)^2 = \frac{4}{|B|^2}\sum\limits_{b\in B}\sum\limits_{b'\in B}bb' + 1 - \frac{4}{|B|}\sum\limits_{b\in B}b = \]\[ = \frac{4}{|B|^2}\sum\limits_{b\in B}\sum\limits_{c\in B}c + 1 - \frac{4}{|B|}\sum\limits_{b\in B}b = \frac{4}{|B|}\sum\limits_{c\in B}c + 1 - \frac{4}{|B|}\sum\limits_{b\in B}b = 1. \] It follows that the subgroup $D = \langle a, t \rangle$ of the group of units of $\mathbb{C}[G]$ is a dihedral group (in fact infinite), as $a^2 = t^2 = 1$. Let $\rho = \lambda_G|_{D}$ be the restriction of the regular representation to $D \subset \mathbb{C}[G]$. Since both $a$ and $t$ are involutions, $\rho(a)$ and $\rho(t)$ are unitary operators, and hence $\rho$ is a unitary representation. By Hulanicki's Theorem, since $D$ is amenable (as is any virtually abelian group), we have $\rho \prec \lambda_D$, where $\lambda_D$ is the regular representation of $D$ in $\ell^2(D)$. This implies that $\spec(\rho(u)) \subset \spec(\lambda_D(u))$ for every $u \in \mathbb{C}[D]$. Notice that $M = \frac{a}{2^m} + \frac{t}{2} + \frac{2^{m-1}-1}{2^m} \in \mathbb{C}[D]$, so we have $\spec(G) = \spec(\lambda_G(M)) = \spec(\rho(M)) \subset \spec(\lambda_D(M))$. We only have to compute the latter, which is not hard to do as it corresponds to the spectrum of the Markov operator associated to a random walk on $\mathbb{Z}$ with $2$-periodic probabilities. In particular, the probability of staying at a vertex is $\frac{2^{m-1}-1}{2^m}$, and the probabilities of moving to a neighbor are $2$-periodic of values $\frac{1}{2}$ and $\frac{1}{2^m}$. To find the spectrum of a $2$-periodic graph we can use elements of Floquet-Bloch theory (see for instance~\cite{BK13}). Let $k \in [-\pi, \pi]$ be a frequency and $e^{ikn}$ be its corresponding wave function.
Using the $2$-periodicity of the graph, we build a $2\times 2$ matrix for each $k$, find its eigenvalues, and take the closure of their union over all $k$. The computations are shown below; they lead to $\spec(\lambda_D(M)) = \left\lbrack -\frac{1}{2^{m-1}}, 0 \right\rbrack \cup \left\lbrack 1 -\frac{1}{2^{m-1}}, 1 \right\rbrack = \spec(M_\xi)$, which completes the proof. \[ 0 = \left|\begin{matrix} \frac{2^{m-1}-1}{2^m} - x & \frac{1}{2^m} + \frac{1}{2}e^{-ik} \\[6pt] \frac{1}{2^m} + \frac{1}{2}e^{ik} & \frac{2^{m-1}-1}{2^m} - x \end{matrix}\right| = \left(\frac{2^{m-1}-1}{2^m} - x\right)^2 - \left( \frac{1}{4^m} + \frac{1}{4} + \frac{1}{2^m}\cos(k) \right). \]\[ x = \frac{2^{m-1}-1}{2^m} \pm \frac{1}{2^m}\sqrt{4^{m-1} + 1 + 2^m\cos(k)}. \]\[ \spec(\lambda_D(M)) = \bigcup_{k\in [-\pi, \pi]} \spec\left(\lambda_D(M)_k\right) = \]\[ = \frac{2^{m-1}-1}{2^m} \pm \frac{1}{2^m}[2^{m-1} - 1, 2^{m-1} + 1] = \left\lbrack -\frac{1}{2^{m-1}}, 0 \right\rbrack \cup \left\lbrack 1 -\frac{1}{2^{m-1}}, 1 \right\rbrack. \] \end{proof} We can therefore conclude in Corollary~\ref{cor:regular_spectrum_d2} that, for spinal groups, as for many other classes of groups, the spectrum of the Cayley graph does not determine the group. \regularspectrumdtwo* \begin{proof} Theorem~\ref{thm:spectrum_cayley_d2} shows that $\spec(G)$ for spinal groups acting on the binary tree depends only on $m$, one of the parameters in the definition of spinal groups. Hence, all spinal groups with $d = 2$ and a fixed $m \ge 1$ will share the same spectrum. For $m=2$, we obtain the family of groups defined by Grigorchuk in~\cite{Gri84}. This family contains uncountably many groups with pairwise different growth functions, and growth is a quasi-isometric invariant. Hence, there are uncountably many isospectral groups which are pairwise non-quasi-isometric.
\end{proof} \section{Dependence of the spectrum on the generating set} \label{sec:dependence_gen_set} All the results discussed so far concerned spinal groups with the spinal generators $S = (A \cup B)\setminus\{1\}$. One might also wonder what the spectra look like for other generating sets, for instance minimal ones. For spinal groups acting on the binary tree ($d=2$), the infinite Schreier graphs $\Gamma_\xi$ have a linear shape. The Schreier graphs of a minimal generating set can then be obtained by erasing double edges in the Schreier graph $\Gamma_\xi$ corresponding to the spinal generators $S$. This can be translated into considering a Markov operator on $\Gamma_\xi$ with a non-uniform distribution of probabilities on $S$. The spectra of such anisotropic Markov operators were studied by Lenz, Sell and the two first-named authors in~\cite{GLNS19}. Their results imply the following. \begin{prop} \label{prop:type_spectrum} Let $G_\omega$ be a spinal group with $d=2$, $m \ge 2$ and $\omega \in \Omega_{2, m}$, and let $M_\xi^T$ be the Markov operator on the graph $\Gamma_\xi^T$, with generating set $T \subset S$. For $\pi \in \Epi(B, A)$, we define $q_\pi = |T \cap B \setminus \Ker(\pi)|$. Two cases may occur: \begin{itemize} \item If the numbers $q_\pi$, for $\pi \in \Epi(B, A)$ appearing infinitely often in $\omega$, are not all equal, then $\spec(M_\xi^T)$ is a Cantor set of Lebesgue measure zero. \item Otherwise, $\spec(M_\xi^T)$ is a union of intervals. \end{itemize} \end{prop} \begin{proof} First notice that $a \in T$, or else $T$ would generate a finite group. We observe that we can relabel the vertices of $\Gamma_\xi^T$ by $\mathbb{Z}$ in such a way that the numbers of edges between them are as follows: \begin{itemize} \item There is one $a$-edge between any vertex $v \in 2\mathbb{Z}$ and $v+1$.
\item There are $|T \cap B \setminus \Ker(\omega_0)| = q_{\omega_0}$ edges between any vertex $v \in 4\mathbb{Z} + 1$ and $v+1$, and $|T \cap \Ker(\omega_0)|$ loops on each of $v$, $v+1$. \item There are $|T \cap B \setminus \Ker(\omega_1)| = q_{\omega_1}$ edges between any vertex $v \in 8\mathbb{Z} + 3$ and $v+1$, and $|T \cap \Ker(\omega_1)|$ loops on each of $v$, $v+1$. \item In general, for every $i \ge 0$, there are $|T \cap B \setminus \Ker(\omega_i)|$ edges between any vertex $v \in 2^{i+2}\mathbb{Z} + 2^{i+1} - 1$ and $v+1$, and $|T \cap \Ker(\omega_i)|$ loops on each of $v$, $v+1$. \end{itemize} Hence, the simple random walk on $\Gamma_\xi^T$ is given by a weighted random walk on $\mathbb{Z}$, defined by the following probabilities: \begin{itemize} \item Probability $\frac{1}{|T|}$ of transitioning between any vertex $v \in 2\mathbb{Z}$ and $v+1$. \item For every $i \ge 0$, probability $\frac{q_{\omega_i}}{|T|}$ of transitioning between any vertex $v \in 2^{i+2}\mathbb{Z} + 2^{i+1} - 1$ and $v+1$. \item For every $i \ge 0$, probability $1 - \frac{q_{\omega_i}}{|T|} - \frac{1}{|T|}$ of staying at any vertex $v \in 2^{i+2}\mathbb{Z} + 2^{i+1} - 1$ or $v+1$. \end{itemize} These probabilities follow a periodic pattern if and only if the numbers $q_\pi$ are all equal for every $\pi \in \Epi(B, A)$ occurring infinitely often in $\omega$. If this is not the case, we may use Corollary 7.2 in~\cite{GLNS19} to obtain that $\spec(M_\xi^T)$ is a Cantor set of Lebesgue measure zero. Suppose now that the $q_\pi$ are all equal for every $\pi \in \Epi(B, A)$ occurring infinitely often in $\omega$, so that the probabilities are periodic on $\mathbb{Z}$; let $l$ be the period. We may compute $\spec(M_\xi^T)$ with Floquet-Bloch theory. To do so, we first compute the spectrum of a fundamental domain, parametrized by $k\in[-\pi, \pi]$, with some boundary conditions.
This gives a set of eigenvalues $\{x_1(k), \dots, x_l(k)\}$, which are the roots of a polynomial of degree $l$. These polynomials only depend on $\cos(k)$. Now $\spec(M_\xi^T)$ is just the union of these sets of eigenvalues over all $k\in[-\pi, \pi]$. Since the roots vary continuously as $\cos(k)$ ranges over $[-1, 1]$, this union is a union of at least one and at most $l$ intervals. \end{proof} We already know from Theorem~\ref{thm:spectrum_schreier} that the second option in Proposition~\ref{prop:type_spectrum} is realized when $T=S$, the spinal generating set. The following result states that, except for the one degenerate example corresponding to $G_\omega = D_\infty$ ($d=2$, $m=1$), every spinal group on the binary tree has a generating set which gives a Cantor spectrum. \generatingsetcantor* \begin{proof} Let $\pi, \pi' \in \Epi(B, A)$ be two different epimorphisms occurring infinitely often in $\omega$. Recall that $B$ is a vector space over $\mathbb{Z}/2\mathbb{Z}$, and let $K = \Ker(\pi)$ and $K' = \Ker(\pi')$. We know that $[B:K] = [B:K'] = 2$, and $[K:K \cap K'] = 2$ because, as $K \neq K'$, $\pi'$ maps $K$ onto $A$ with kernel $K \cap K'$. Hence, we have $[B:K \cap K'] = 4$. In particular, we can choose $m-2$ elements $x_1, \dots, x_{m-2} \in K \cap K'$ which generate $K \cap K'$. Moreover, we can choose elements $y \in K \setminus K'$ and $y' \in K' \setminus K$ to complete the generating set to one of $K$ or $K'$, respectively, and such that $\{x_1, \dots, x_{m-2}, y, y'\}$ generates $B$. If we now define $T = \{a, x_1, x_2, \dots, x_{m-2}, y, yy'\} \subset S$, it is clear that it is a minimal generating set for $G_\omega$, since $|T| = m+1$. Moreover, we have $q_\pi = |T \cap B\setminus K| = |\{yy'\}| = 1$ and $q_{\pi'} = |T \cap B \setminus K'| = |\{y, yy'\}| = 2$. By Proposition~\ref{prop:type_spectrum}, $\spec(M_\xi^T)$ is a Cantor set of Lebesgue measure zero.
\end{proof} One can also find a condition on the generating set $T \subset S$ under which the spectrum on the Schreier graphs is a single interval, for certain $G_\omega$. \begin{prop} Let $G_\omega$ be a spinal group with $d=2$, $m \ge 2$ and $\omega \in \Omega_{d,m}$, with generating set $T \subset S$. If $q_{\omega_i} = 1$ for every $i \ge 0$, then $\spec(M_\xi^T)$ is the interval $[1- \frac{4}{|T|}, 1]$. \end{prop} \begin{proof} In the proof of Proposition~\ref{prop:type_spectrum}, we established that the simple random walk on $\Gamma_\xi^T$ is given by a weighted random walk on $\mathbb{Z}$. Since $q_{\omega_i} = 1$ for every $i \ge 0$, the probabilities simplify to: \begin{itemize} \item Probability $\frac{1}{|T|}$ of transitioning between any vertex $v \in \mathbb{Z}$ and $v+1$. \item Probability $1 - \frac{2}{|T|}$ of staying at any vertex $v \in \mathbb{Z}$. \end{itemize} The fact that the probabilities are periodic allows us to use Floquet-Bloch theory in order to find $\spec(M_\xi^T)$, and since the period is $1$, the computation is rather simple. The only eigenvalue of a fundamental domain, parametrized by $k \in [-\pi, \pi]$, is the solution of the equation \[ 0 = 1 - \frac{2}{|T|} + \frac{1}{|T|}e^{ik} + \frac{1}{|T|}e^{-ik} - x = 1 - \frac{2}{|T|} + \frac{2}{|T|}\cos(k) - x \Longrightarrow x = 1 - \frac{2}{|T|} + \frac{2}{|T|}\cos(k). \] Finally, \[ \spec(M_\xi^T) = \bigcup_{k\in [-\pi, \pi]} \spec(M_\xi^T(k)) = \bigcup_{k\in [-\pi, \pi]} \left\{ 1 - \frac{2}{|T|} + \frac{2}{|T|}\cos(k) \right\} = \left[1-\frac{4}{|T|}, 1\right]. \] \end{proof} Self-similar groups inside the family of spinal groups were studied by \v{S}uni\'c~\cite{sunic}. For every $d$ and $m$, there are finitely many of them. They can be specified in terms of an epimorphism $\alpha \in \Epi(B, A)$ and an automorphism $\rho \in \mathrm{Aut}(B)$.
The groups in \v{S}uni\'c's family are then the spinal groups defined by the periodic sequence $\omega = (\omega_n)_n$ given by $\omega_n = \alpha \rho^n$. Moreover, it was shown that any of these groups admits a natural minimal \v{S}uni\'c generating set $T = \{a, b_1, \dots, b_m\}$, contained in the spinal generating set $S$, such that \[ a = (1,1)\sigma \quad b_1 = (1, b_2) \quad b_2 = (1, b_3) \quad \dots \quad b_{m-1} = (1, b_m) \quad b_m = (a, b'), \] for some $b' \in B$. Notice that, for $i = 1, \dots, m-1$, $\alpha(b_i) = 1$ and $\rho(b_i) = b_{i+1}$, while $\alpha(b_m) = a$ and $\rho(b_m) = b'$. The choice of $b' \in B$, subject to $\rho$ being an automorphism, then determines the group. It was shown in~\cite{sunic} that a \v{S}uni\'c group is infinite torsion if and only if all $\rho$-orbits intersect $\Ker(\alpha)$. \begin{ex}[Grigorchuk's group] Grigorchuk's group is the group $G$ in \v{S}uni\'c's family with $d=2$, $m=2$, $A = \{1, a\}$, $B = \{1, b_1, b_2, b_1b_2\}$ and $\rho(b_2) = b_1b_2$. With the standard notation $b, c, d$ for the generators, we have $b_1 = d$, $b_2 = b$ and $b_1b_2 = c$. The only nontrivial $\rho$-orbit is $b_1 \mapsto b_2 \mapsto b_1b_2 \mapsto b_1$, which intersects $\Ker(\alpha)$ at $b_1$, hence the group is infinite torsion. The minimal \v{S}uni\'c generating set is $T = \{a, b_1, b_2\}$ and the spinal generating set is $S = \{a, b_1, b_2, b_1b_2\}$, with \[ a = (1,1)\sigma \quad b_1 = (1, b_2) \quad b_2 = (a,b_1b_2) \quad b_1b_2 = (a, b_1). \] We have \[ \spec(M_\xi^S) = \spec(G, S) = \left[-\frac{1}{2}, 0\right]\cup\left[\frac{1}{2}, 1\right]. \] We may consider any of the minimal generating sets $T_{b_1} = \{a, b_2, b_1b_2\}$, $T_{b_2} = \{a, b_1, b_1b_2\}$ or $T = \{a, b_1, b_2\}$. In that case, all of $\spec(M_\xi^{T_{b_1}})$, $\spec(M_\xi^{T_{b_2}})$ and $\spec(M_\xi^{T})$ are Cantor sets, for any $\xi \in X^\mathbb{N}$.
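As a quick verification of the last claim, spelled out here for the reader's convenience via Proposition~\ref{prop:type_spectrum}: the kernels of the three epimorphisms occurring (each infinitely often) in the periodic sequence $\omega$ of Grigorchuk's group are $\Ker(\omega_0)=\{1,b_1\}$, $\Ker(\omega_1)=\{1,b_1b_2\}$ and $\Ker(\omega_2)=\{1,b_2\}$, so for $T = \{a, b_1, b_2\}$ one computes

```latex
\[
q_{\omega_0} = |\{b_1,b_2\}\setminus\Ker(\omega_0)| = |\{b_2\}| = 1\ ,\qquad
q_{\omega_1} = |\{b_1, b_2\}| = 2\ ,\qquad
q_{\omega_2} = |\{b_1\}| = 1\ .
\]
```

The numbers $q_\pi$ are thus not all equal, and Proposition~\ref{prop:type_spectrum} yields a Cantor spectrum of Lebesgue measure zero; the computations for $T_{b_1}$ and $T_{b_2}$ are analogous.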
It would be interesting to know, for these minimal generating sets, what the spectrum on the Cayley graph is. So far, we only know it must contain this Cantor set. \end{ex} \begin{ex} One natural choice in the construction of \v{S}uni\'c groups above is to take $d=2$ and $\rho$ such that $b' = b_1$. This gives one group for each $m \ge 2$; we call them $G_m$. We have \[ a = (1,1)\sigma \quad b_1 = (1, b_2) \quad b_2 = (1, b_3) \quad \dots \quad b_{m-1} = (1, b_m) \quad b_m = (a, b_1). \] The element $a b_1 \dots b_m$ is of infinite order. We consider two generating sets: the spinal generating set $S = (A\cup B)\setminus\{1\}$, of size $2^m$, and the \v{S}uni\'c minimal generating set $T = \{a, b_1, \dots, b_m\}$, of size $m+1$. On the one hand, Theorem~\ref{thm:spectrum_cayley_d2} yields that, for any $\xi \in X^\mathbb{N}$, \[ \spec(M_\xi^S) = \spec(G_m, S) = \left[-\frac{1}{2^{m-1}}, 0\right] \cup \left[1 - \frac{1}{2^{m-1}}, 1\right]. \] On the other hand, \[ \spec(M_\xi^T) = \left[\frac{m-3}{m+1}, 1\right]. \] Indeed, for any two-ended $\Gamma_\xi^T$, the simple random walk translates into the weighted random walk on $\mathbb{Z}$ with probability $\frac{1}{m+1}$ of moving to a neighbor and probability $\frac{m-1}{m+1}$ of staying at any vertex. By taking a one-vertex fundamental domain parametrized by $k \in [-\pi, \pi]$ and using Floquet-Bloch theory, we have: \[ 0 = \frac{m-1}{m+1} + \frac{1}{m+1}(e^{ik} + e^{-ik}) - x \Longrightarrow x = \frac{1}{m+1}(m-1 + 2\cos(k)) \] \[ \spec\left(M_\xi^T\right) = \bigcup_{k\in [-\pi, \pi]} \spec\left(M_\xi^T(k)\right) = \bigcup_{k\in [-\pi, \pi]} \left\{ \frac{1}{m+1}(m-1 + 2\cos(k)) \right\} = \left[\frac{m-3}{m+1}, 1\right]. \] \begin{prop} \label{prop:sunic-G-m} Let $G$ be a \v{S}uni\'c group with $d=2$ and $m \ge 2$, with minimal \v{S}uni\'c generating set $T$.
Then the spectrum on the Schreier graph with respect to $T$ is $\left[\frac{m-3}{m+1}, 1\right]$ if $G = G_m$, and a Cantor set of Lebesgue measure zero otherwise. \end{prop} \begin{proof} We found above the spectrum on the Schreier graphs for the groups $G_m$. Suppose now that $\spec(M_\xi^T)$ is a union of intervals. By Proposition~\ref{prop:type_spectrum}, the numbers $q_\pi$ are all equal over $\pi \in \Epi(B, A)$ occurring infinitely often in $\omega$. By definition of the minimal \v{S}uni\'c generating set $T$, we have $q_{\omega_0} = |T \cap B \setminus \Ker(\alpha)| = |\{b_m\}| = 1$, so, as $\omega$ is periodic, $q_{\omega_n} = 1$ for every $n \ge 0$. Now, for any $k = 0, \dots, m-1$, we know that $\omega_k(b_{m-k}) = \omega_0\rho^{k}(b_{m-k}) = \omega_0(b_m) = a$, so $b_{m-k} \not\in \Ker(\omega_k)$. As $q_{\omega_k} = 1$, the only possibility is that, for every $j = 1, \dots, m$, $b_j \in \Ker(\omega_k)$ if and only if $j \neq m-k$. In particular, $b_m \in \Ker(\omega_k)$ for $k = 1, \dots, m-1$; writing $b' = \rho(b_m)$ as a product of the $b_i$ and applying these conditions successively for $k = 1, \dots, m-1$ shows that $b'$ cannot involve $b_m, b_{m-1}, \dots, b_2$, and $b' \neq 1$ since $\rho$ is injective, so $b' = b_1$. This implies that $b_m = (a, b_1)$, so that we are in fact in the case of the group $G_m$. \end{proof} We also have \begin{lem} The Cayley graph of $G_m$ with generating set $T$ is bipartite, for any $m \ge 2$. \end{lem} \begin{proof} We only have to show that all relations in the group $G_m$ have even length. Let $w$ be a freely reduced word on $T$, and let $|w|$ represent its length and $|w|_t$ the number of times the generator $t \in T$ occurs in $w$. Suppose that $w$ represents the identity element of $G_m$. In that case, $|w|_a$ must be even, or otherwise its action on the first level would be nontrivial. This allows us to write the word $w$ as a product of $b_i$ and $b_i^a$. Let $w_0$ and $w_1$ be the two projections of the word $w$ into the first level, before reduction. Let us look at the decomposition of a generator $t \in T$. If $t = a$, then it decomposes as $1$ on both subtrees. If $t = b_i$ with $i < m$, then it decomposes as $1$ on the left and as $b_{i+1}$ on the right; if $t = b_m$, it decomposes as $a$ on the left and as $b_1$ on the right.
Notice that the decomposition of $b_i^a$ is that of $b_i$ with the two projections exchanged. It is clear that both $w_0$ and $w_1$ represent the identity, too. Hence, $|w_0|_a$ and $|w_1|_a$ must both be even as well. But $|w_0|_a + |w_1|_a = |w|_{b_m}$, so $|w|_{b_m}$ must also be even. By iterating this argument we can conclude that $|w|$ must be even. In general, let $w_u$ be the projection of $w$ onto the vertex $u$ in $X^k$, the $k$-th level of the tree, for $1 \le k \le m$. For any $u \in X^k$, $w_u$ must represent the identity, and hence $|w_u|_a$ must be even. But tracing back the $a$'s occurring in $w_u$ we obtain \[ \sum_{u \in X^k} |w_u|_a = \sum_{u \in X^{k-1}} |w_u|_{b_m} = \dots = \sum_{u \in X^1} |w_u|_{b_{m-k+2}} = |w|_{b_{m-k+1}}. \] This shows that $|w|_t$ is even, for every $t \in T$, which implies that $|w|$ is indeed even and hence that the Cayley graph with the generating set $T$ is bipartite. \end{proof} The spectrum of a bipartite graph is symmetric about $0$. At the same time, for amenable groups, the spectrum on any Schreier graph is contained in the spectrum on the Cayley graph. Hence we have \[ \spec(G_m, T) \supset -\spec(M_\xi^T) \cup \spec(M_\xi^T) = \left[-1, \frac{3-m}{m+1}\right] \cup \left[\frac{m-3}{m+1}, 1\right]. \] Two cases are of special interest: $m = 2$ and $m = 3$. In these cases, the union of the two intervals above is the whole interval $[-1, 1]$ and we can thus conclude that the spectrum of the Cayley graphs of $G_2$ and $G_3$ with respect to the minimal \v{S}uni\'c generating set is the whole interval $[-1, 1]$. For $m \ge 4$, the union of intervals is actually disjoint, so we can only conclude that the spectrum of the Cayley graph contains the two intervals $[-1, -\beta]$ and $[\beta, 1]$, with $\beta = \frac{m-3}{m+1} > 0$. Note that the group $G_2$ was studied in~\cite{Ers04} and is therefore sometimes called the Grigorchuk-Erschler group.
It is the only self-similar group in the Grigorchuk family (spinal groups with $d=2$ and $m=2$) besides Grigorchuk's group. The group $G_3$ is known as Grigorchuk's overgroup~\cite{BG00} because it contains Grigorchuk's group as a subgroup. Indeed, the automorphisms $b_2b_3$, $b_1b_3$, $b_1b_2$ are the generators $b$, $c$, $d$ of Grigorchuk's group. \spectrumgego* \end{ex} \end{document}
\begin{document} \maketitle \begin{abstract} We consider a version of the classical Ehrenfest urn model with two urns and two types of balls: regular and heavy. Each ball is selected independently according to a Poisson process having rate $1$ for regular balls and rate $\alpha\in(0,1)$ for heavy balls, and once a ball is selected, it is placed in an urn uniformly at random. We focus on the observable given by the total number of balls in the left urn, which converges to a binomial distribution of parameter $1/2$, regardless of the relative number of heavy balls and of the parameter $\alpha$. We study the behavior of this convergence when the total number of balls goes to infinity and show that it can exhibit three different phenomenologies depending on the choice of the two parameters of the model. \end{abstract} \section{Introduction} The \emph{cutoff phenomenon}, first denoted with this name by Aldous and Diaconis \cite{AD86}, is one of the most intriguing and well studied topics in the probabilistic literature of the last 30 years. Indeed, although the topic has received much attention, and the list of Markov processes for which it is possible to show such a behavior is now quite long, a general sufficient condition ensuring the presence of cutoff is still missing. To this aim, it could be beneficial to reach a good understanding of the mixing time of Markov chains that are obtained by perturbing an original chain that is known to exhibit cutoff. How does the mixing behavior of the modified chain depend on the perturbation? To which extent is the cutoff phenomenon robust? In the last few years, similar questions have been investigated in \cite{AGHH19,CQrsa,CQspa,AGHHN22}.
Although the models in the aforementioned papers are quite different, they can all be framed into a setting in which the perturbed chain is affected by the competition of two different mixing mechanisms: on the one hand, the abrupt convergence to equilibrium of the original chain; on the other hand, a smooth decay to equilibrium arising as an effect of the perturbation. Moreover, the models in \cite{AGHH19,CQspa,AGHHN22} share the common feature that the mixing stochastic process under investigation is not a Markov process itself, but rather a non-Markovian observable of a Markov process. A similar investigation, namely the analysis of the mixing behavior of observables (or \emph{features} or \emph{statistics}) of Markov chains, has been recently performed in \cite{W19a,W19b} in the context of card shuffling routines and related models. From the phenomenological point of view, the moral of the results in \cite{AGHH19,CQrsa,CQspa,AGHHN22} is that there exists a threshold for the perturbation parameter below which the mixing behavior of the process is unaffected. When the parameter exceeds the threshold, the perturbation becomes dominant and the convergence to equilibrium takes place in a smooth fashion. Finally, and more interestingly, a nontrivial mixing behavior is shown to take place at the interface between these two regimes. In the same spirit, this short note aims at analyzing the convergence to equilibrium of a two-parameter perturbation of one of the most classical examples of Markov chains exhibiting cutoff, i.e., the Ehrenfest urn. In fact, beyond the classical Ehrenfest urn model, the cutoff phenomenon is particularly well understood in the setting of birth-and-death chains \cite{DSC06,DLP10,CSC13,CSC15}. Therefore, it seems natural to consider perturbed versions of this classical model and check for the robustness of the cutoff phenomenon with respect to the perturbation.
Contrary to the works \cite{AGHH19,CQrsa,CQspa,AGHHN22}, the proof of our result does not require a strong technical machinery. In fact, as will be explained in Section \ref{sec:tv-ub}, the proof relies on a recently introduced approach to the convergence to equilibrium of spin systems, based on negative dependence, which has been developed by Salez \cite{S22} to prove the cutoff for the Simple Exclusion Process with reservoirs on arbitrary graphs. Such a technique is purely probabilistic, astonishingly simple, and it is perfectly suited for the analysis of our model. \section{Model and results} We consider the following generalization of the classical Ehrenfest urn model. There are $N$ balls divided in two urns. For some $\beta\in[0,1]$, $m=\lfloor N^\beta \rfloor$ balls are \emph{heavy}, while the remaining $n=N-m$ balls are \emph{regular}. Each ball is selected at the arrival time of an independent Poisson process having rate $1$ if the ball is regular, and rate $\alpha\le 1$ if the ball is heavy. When a ball is selected, it decides in which urn to move by tossing a fair coin. Assuming that one can distinguish between regular and heavy balls, but that within the same class the balls are indistinguishable, it is immediate to check that such a Markov chain is reversible with respect to the stationary distribution: $$\chi(a,b)\coloneqq\frac{\binom{n}{a}}{2^n} \frac{\binom{m}{b}}{2^m},\qquad (a,b)\in\{0,1,\dots, n \}\times\{0,1,\dots,m\}\ .$$ In this note, we interpret the \emph{heavy balls} as \emph{impurities}, and we assume that the observer is not aware of their existence. In other words, the observer can only count the number of balls in the two containers, but cannot distinguish between heavy and regular balls. The question we are going to address is whether the observer can infer the presence of such impurities just by looking at the way in which the process reaches its equilibrium state.
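As a side remark (a computation spelled out here for the reader, not needed in what follows), the exponential occupation probabilities of a single ball can be derived directly from the dynamics just described: a fixed heavy ball is selected at rate $\alpha$ and, upon selection, lands in the second urn with probability $\tfrac12$, so $f(t)\coloneqq\mathbb{P}(\text{the ball is in the second urn at time } t)$, started from $f(0)=0$, satisfies

```latex
\[
f'(t) = \alpha\left(\frac12 - f(t)\right)\ ,\qquad f(0)=0
\quad\Longrightarrow\quad
f(t) = \frac12\left(1-e^{-\alpha t}\right)\ ,
\]
```

and the same computation with rate $1$ in place of $\alpha$ applies to regular balls.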
More precisely, we let $R_t$ and $H_t$ denote the number of regular, respectively heavy, balls in the second urn at time $t\ge 0$. For simplicity, in what follows we will stick to the assumption that the initial configuration is the one in which all the balls start in the first urn, i.e., $\mathbb{P}(R_0+H_0=0)=1$. Therefore, for every $t\ge 0$, the quantity $W_t=R_t+H_t$ is distributed as $$W_t\sim \nu_t\coloneqq{\rm Bin}(m,p_t)\oplus {\rm Bin}(n,q_t),\quad p_t\coloneqq\frac12\left(1-e^{-\alpha t}\right)\ ,\quad q_t\coloneqq\frac12\left(1-e^{-t}\right)\ ,$$ where the symbol $\oplus$ denotes the convolution of two probability distributions on $\mathbb{R}$. It is clear that for every $N\in\mathbb{N}$ and every choice of $\alpha\in(0,1]$ and $\beta\in[0,1]$, it holds $$W_t\overset{d}{\longrightarrow}{\rm Bin}\left(N, \tfrac{1}{2}\right)\quad\text{as} \quad t\to\infty\ .$$ We aim at investigating how this convergence takes place as a function of the two parameters $(\alpha,\beta)$ in the limit in which the number of balls goes to infinity. To this aim, for every fixed $N$ and choice of the parameters $\alpha=\alpha_N$ and $\beta=\beta_N$ we will consider the function $$t\mapsto \mathcal{D}_{\alpha,\beta}^{N,{\rm tv}}(t)\coloneqq \left\|\nu_t-\nu_\star\right\|_{\rm tv}=\frac12\sum_{k=0}^{N}|\nu_t(k)-\nu_\star(k)| \ ,$$ where $\nu_\star$ is the law of a Binomial random variable of parameters $N$ and $\tfrac12$. \begin{theorem}[Total Variation trichotomy]\label{th:tv} Consider the sequence \begin{equation}\label{eq:def-gamma} \gamma=\gamma_N= \frac{2\beta-1}{\alpha}-1\ .
\end{equation} Assume that $$\exists\: \gamma_\infty=\lim_{N\to\infty}\gamma_N\in[-\infty,+\infty]\ .$$ Depending on the asymptotic behavior of the parameters $\alpha=\alpha_N$ and $\beta=\beta_N$, three different mixing behaviors can take place: \begin{itemize} \item Insensitivity: if $\gamma_\infty<0$ then, for every $\varepsilon>0$ there exists $C=C(\varepsilon)$ such that, setting $$t^{\rm ins}=\frac12 \log N\ ,$$ it holds \begin{equation} \liminf_{N\to\infty}\mathcal{D}^{N,{\rm tv}}_{\alpha,\beta}(t^{\rm ins}-C)\ge 1-\varepsilon\ ,\qquad \limsup_{N\to\infty}\mathcal{D}^{N,{\rm tv}}_{\alpha,\beta}(t^{\rm ins}+C)\le \varepsilon \ . \end{equation} \item Delayed cutoff: if $\gamma_\infty\ge 0$ and $(2\beta-1)\log N\to\infty$, then, for every $\varepsilon>0$ there exists $C=C(\varepsilon)>0$ such that, setting $$t^{\rm dc}=\frac{1+\gamma}2 \log N \ ,$$ one has \begin{equation} \liminf_{N\to\infty}\mathcal{D}^{N,{\rm tv}}_{\alpha,\beta}(t^{\rm dc}-C\alpha^{-1})\ge 1-\varepsilon\ ,\qquad \limsup_{N\to\infty}\mathcal{D}^{N,{\rm tv}}_{\alpha,\beta}(t^{\rm dc}+C\alpha^{-1})\le \varepsilon \ . \end{equation} \item No cutoff: if $\gamma_\infty\ge 0$ and $(2\beta-1)\log N\to\ell\in[0,\infty)$, then for every constant $C=C(\ell)>0$ sufficiently large there exist $0<\varepsilon_1(C)<\varepsilon_2(C)<1$ such that \begin{equation}\label{eq:th-no-cutoff} \liminf_{N\to\infty}\mathcal{D}^{N,{\rm tv}}_{\alpha,\beta}(C\alpha^{-1})\ge \varepsilon_1(C),\qquad \limsup_{N\to\infty}\mathcal{D}^{N,{\rm tv}}_{\alpha,\beta}(C\alpha^{-1})\le \varepsilon_2(C)\ . \end{equation} \end{itemize} \end{theorem} Let us now briefly comment on the result. As a first observation, notice that by plugging in $\beta_N=\alpha_N\equiv1$ (and, hence, $\gamma_N\equiv0$), namely considering the classical (lazy) Ehrenfest urn, we obtain as a corollary the well-known cutoff result at $\tfrac12\log N+O(1)$.
In this sense, we might define the phase $\gamma_\infty<0$ as \emph{subcritical}: the strength of the perturbation is not enough to affect the mixing behavior of the process. Notice that in this case, letting $\alpha_N\to 0$ arbitrarily fast, essentially confining the heavy balls to the first urn for an arbitrarily long time, Theorem \ref{th:tv} states that at least order $\sqrt{N}$ \emph{impurities} are necessary to affect the mixing mechanism. This is not at all surprising. Indeed, the positions of $o(\sqrt{N})$ balls are irrelevant due to the size of the fluctuations of the stationary measure. At the \emph{critical line} $\gamma_\infty=0$ a smooth transition takes place: indeed, one should notice that, if $\alpha=\Theta(1)$ and $\gamma^{-1}\gg\log N$, then we get again the same phenomenology as in the \emph{subcritical phase}. Nevertheless, as $\gamma$ grows, the cutoff time is delayed by the amount $\tfrac{\gamma}{2}\log N$ and, if $\alpha\to 0$, the cutoff window is enlarged by a factor $\alpha^{-1}$. It is interesting to look at such a behavior from the following perspective: let $t_{\rm rel}=\alpha^{-1}$ be the relaxation time of the (whole) Markov process, so that it is natural to measure time on the timescale of $t_{\rm rel}$. Doing so, if $\gamma_\infty\ge 0$, Theorem \ref{th:tv} tells us that \begin{equation} t_{\rm mix}=t_{\rm rel}\big[(\beta-\tfrac{1}{2})\log N + \Theta(1) \big] \ . \end{equation} The latter equation makes clear the transition from the \emph{delayed cutoff} phase to the one in which the cutoff phenomenon is disrupted, according to the finiteness of the limit of $(\beta-\tfrac{1}{2})\log N$ as $N\to\infty$. \subsection{The coupling}\label{sec:representation} Despite the fact that the stochastic process we aim to investigate is not a Markov process, one may try to adapt the techniques used to analyze the classical Ehrenfest urn model. Like in \cite[Sec.
6.5.2]{LP17}, a natural approach to bound from above the total variation distance at time $t$ consists in finding a coupling of the laws $\nu_t$ and $\nu_\star$ and controlling the probability that the coupling fails. It is well-known that in the classical Ehrenfest urn model such a strategy is not enough to obtain the exact constant of the mixing time, but it provides an estimate that is off by a factor $2$. We will show that also for our model this approach does not provide the right prefactor. With this aim in mind, we fix an arbitrary $t\ge 0$ and consider the sequences of random variables \begin{equation}\label{eq:definitions} \begin{split} (X^\star_i)_{i\le N}&\:\text{ i.i.d. }{\rm Bern}(\tfrac12)\ ,\qquad (V_i)_{i\le N}\overset{\rm d}{=}{\rm Unif}\big(\{v\in\{0,1\}^N\mid {\textstyle \sum_{i}}v_i=m \}\big)\ ,\\ (P_i(t))_{i\le N}&\:\text{ i.i.d. }{\rm Bern}(\tfrac12-p_t)\ ,\qquad (Q_i(t))_{i\le N} \:\text{ i.i.d. }{\rm Bern}(\tfrac12-q_t)\ . \end{split} \end{equation} Given the randomness of the above defined random variables, we define, for every $i\in[N]$, \begin{equation}\label{eq:representation} Z_i(t)=V_iP_i(t)+(1-V_i)Q_i(t)\ ,\qquad X_i(t)=(1-Z_i(t))X^\star_i\ , \end{equation} together with the marginals \begin{equation*} S^\star=\sum_{i\le N} X^\star_i,\qquad S(t)=\sum_{i\le N}X_i(t)\ . \end{equation*} It is immediate to check that $S(t)\sim\nu_t$ and therefore \begin{equation}\label{eq:mean-var} \begin{split} \mathbb{E}[S(t)]=&\: mp_t+nq_t\ ,\\ {\rm Var}(S(t))=&\: m p_t(1-p_t)+nq_t(1-q_t)\le \frac{N}{4}\ ,\\ \sum_{i=1}^N \mathbb{E}[Z_i(t)]=&\: N\mathbb{E}[Z_1(t)]=\frac{N}2-\mathbb{E}[S(t)]\ . \end{split} \end{equation} It is not hard to come up with a coupling of the laws $\nu_\star$ and $\nu_t$ via the representation of $\nu_t$ in \eqref{eq:representation}. Indeed, we can start by sampling $L_i\sim{\rm Unif}[0,1]$ independently for every $i$. If $L_i\le \tfrac12$ we declare $X_i^\star=1$, while $X_i^\star=0$ otherwise.
Now we sample, independently of the previous samplings, the random vector $(V_i)_{i\le N}$ and call $r_i=\mathbf{1}(V_i=1)\left(\tfrac12-p_t\right)+\mathbf{1}(V_i=0)\left(\tfrac12-q_t\right)$, matching the Bernoulli parameters in \eqref{eq:definitions}. At this point, if $L_i\in[\tfrac12-r_i,\tfrac12]$ we declare $Z_i(t)=1$, and $Z_i(t)=0$ otherwise. Under this explicit coupled construction, we consider the event $$\mathcal{W}_t=\big\{\textstyle\sum_{i\le N}Z_i(t)=0\big\}\ .$$ By the properties of the total variation distance and by Markov's inequality, \begin{equation}\label{eq:ub-sep} \mathcal{D}^{N,\rm tv}_{\alpha,\beta}(t)\le \mathbb{P}(\mathcal{W}_t^c)=\mathbb{P}\left(\sum_{i=1}^{N}Z_i(t)>0\right)\le N\mathbb{E}[Z_1(t)]\ . \end{equation} Since \begin{equation} N\mathbb{E}[Z_1(t)]= \frac12\left(m\,e^{-\alpha t}+n\,e^{-t}\right)\ , \end{equation} by plugging in the values of $t=t^{\rm ins}+O(1)$ or $t=t^{\rm dc}+O(\alpha^{-1})$ we see that the upper bound in \eqref{eq:ub-sep} is too weak to provide the sharp estimates in Theorem \ref{th:tv}. \section{Strategy of proof} The strategy of proof is quite simple, and crucially relies on the recent result presented in \cite{S22}, which introduces a control on the $L^2(\pi)$ distance of a negatively dependent perturbation of a product measure. More precisely, the technical lemma we are going to exploit reads as follows: \begin{lemma}[Cf.\ {\cite[Lemma 3.1]{S22}}]\label{lemma:S22} Let $(X^\star_i)_{i\le N}$ be as in \eqref{eq:definitions} and let $(Z_i)_{i\le N}$ be arbitrarily distributed on $\{0,1\}^N$ but \emph{negatively dependent}, i.e., \begin{equation}\label{eq:neg-dependence} \mathbb{E}\left[\prod_{i\in A}Z_i \right]\le \prod_{i\in A}\mathbb{E}[Z_i],\qquad \forall A\subseteq [N]\ .
\end{equation} Then, denoting by $\pi$ the law of $(X_i^\star)_{i\le N}$ and by $\mu$ the law of $(X_i)_{i\le N}$, defined as $$X_i=(1-Z_i)X_i^\star,\qquad i=1,\dots,N\ ,$$ it holds \begin{equation}\label{eq:lemmaS22} \left\| \frac{\mu}{\pi}-1\right\|_{L^2(\pi)}^2 \le \prod_{i=1}^N(1+\mathbb{E}[Z_i]^2)-1 \ . \end{equation} Moreover, if \eqref{eq:neg-dependence} holds with equality, then so does \eqref{eq:lemmaS22}. \end{lemma} Lemma \ref{lemma:S22} finds an ancestor in the exponential moment bound presented in \cite[Proposition 3.2]{MP12} (see also \cite[Lemma 3.1]{LS17}), and essentially shows that \emph{negative dependence} can be translated into a sharp bound for the $L^2$ distance, as in \eqref{eq:lemmaS22}. As we will show in Section \ref{sec:tv-ub}, for every $t\ge 0$ the vector $(Z_i(t))_{i\le N}$ introduced in Section \ref{sec:representation} is negatively dependent, hence providing a control on the total variation distance between $(X_i^\star)_{i\le N}$ and $(X_i(t))_{i\le N}$; the same control on the marginals $S^\star$ and $S(t)$ then follows immediately by projection. As we will see in Section \ref{sec:tv-lb}, as far as the lower bound is concerned, our strategy is even simpler, and relies on controlling the probability that $S^\star$ and $S(t)$ lie in the subinterval $\{0,1,\dots, \tfrac{N}{2}-k\}$. This is a particularly simple example of a so-called \emph{distinguishing statistic}, where the function considered is the indicator of the above mentioned interval. It is well-known, see e.g. \cite[Sec. 7.3.1]{LP17}, that such a simple technique provides a sharp estimate for the mixing time at least in the classical Ehrenfest urn model. As we will show, this property is unchanged in our modified setting.
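As a toy illustration of the equality case of \eqref{eq:lemmaS22} (a check added here for the reader, not taken from \cite{S22}), take $N=1$, assume $Z_1$ independent of $X_1^\star$, and write $z=\mathbb{E}[Z_1]$; then $X_1=(1-Z_1)X_1^\star$ has law $\mu(1)=\frac{1-z}{2}$, $\mu(0)=\frac{1+z}{2}$, so that

```latex
\[
\left\| \frac{\mu}{\pi}-1\right\|_{L^2(\pi)}^2
=\frac12\left(\frac{\mu(1)}{\pi(1)}-1\right)^2+\frac12\left(\frac{\mu(0)}{\pi(0)}-1\right)^2
=\frac{z^2}{2}+\frac{z^2}{2}
=\left(1+\mathbb{E}[Z_1]^2\right)-1\ ,
\]
```

in agreement with the bound of the lemma, here attained with equality.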
\subsection{Upper bound via negative dependence}\label{sec:tv-ub} For notational simplicity, let us denote $$\mu_t={\rm Law}(X_i(t),\:i\le N),\qquad\pi={\rm Law}(X_i^\star,\:i\le N)=\otimes_{i\le N}{\rm Bern}(\tfrac12)\ .$$ Since $S(t)$ and $S^\star$ are marginals of the laws $\mu_t$ and $\pi$, respectively, we can bound from above \begin{equation} \mathcal{D}_{\alpha,\beta}^{N,\rm tv}(t)\le \| \mu_t-\pi\|_{\rm tv}\ . \end{equation} We will show that for every given $t\ge 0$ the collection of random variables $(Z_i(t))_{i\le N}$ defined in Section \ref{sec:representation} is \emph{negatively dependent}, i.e., \eqref{eq:neg-dependence} holds. Thanks to this fact, \begin{equation}\label{eq:upper-bound} \begin{split} \| \mu_t-\pi\|_{\rm tv}\le&\frac12\left\| \frac{\mu_t}{\pi}-1\right\|_{L^2(\pi)}\\ \le&\:\frac12 \sqrt{\left[1+\frac14\left(\frac{m}{N}e^{-\alpha t}+\frac{N-m}{N}e^{-t}\right)^2\right]^N-1} \ , \end{split} \end{equation} where the last inequality follows by Lemma \ref{lemma:S22} and \eqref{eq:mean-var}. \begin{lemma}\label{lemma:neg-dep} For every $N\in\mathbb{N}$, $t\ge 0$ and $\alpha_N,\beta_N\in[0,1]$ the random vector $(Z_i(t))_{i\le N}$ satisfies \eqref{eq:neg-dependence}. \end{lemma} \begin{proof} Setting $u_t=(\tfrac12-p_t)/(\tfrac12-q_t)\ge 1$, we have \begin{align*} \mathbb{E}[Z_i(t)]^{|A|}&=\bigg(\frac{m}{N}\left(\frac12-p_t\right)+\frac{N-m}{N}\left(\frac12-q_t\right)\bigg)^{|A|}\\ &=\frac{1}{N^{|A|}}\sum_{a=0}^{|A|} \binom{|A|}{a}m^a (N-m)^{|A|-a}\left(\frac12-p_t\right)^a\left(\frac12-q_t\right)^{|A|-a} \\ &=\left(\frac{1}{2}-q_t \right)^{|A|}\mathbb{E}\left[u_t^B\right],\qquad B\sim{\rm Bin}(|A|,\tfrac{m}{N})\ .
\end{align*} For the expectation on the left-hand side of \eqref{eq:neg-dependence} we have \begin{align*} \mathbb{E}\left[\prod_{i\in A}Z_i(t)\right] &=\frac{1}{(N)_{|A|}}\sum_{a=0}^{|A|} \binom{|A|}{a}(m)_a (N-m)_{|A|-a}\left(\frac12-p_t\right)^a\left(\frac12-q_t\right)^{|A|-a}\\ &=\sum_{a=0}^{|A|}\frac{\binom{m}{a}\binom{N-m}{|A|-a}}{\binom{N}{|A|}}\left(\frac12-p_t\right)^a\left(\frac12-q_t\right)^{|A|-a}\\ &=\left(\frac{1}{2}-q_t \right)^{|A|}\mathbb{E}\left[u_t^{H}\right],\qquad H\sim{\rm Hypergeom}(N,m,|A|)\ , \end{align*} where $(x)_k$ denotes the falling factorial $x(x-1)\cdots(x-k+1)$. Therefore, to conclude the proof it is sufficient to show that for every $u\ge 1$ it holds \begin{equation}\label{eq:MGF} \mathbb{E}\left[u^B\right]\ge \mathbb{E}\left[u^{H}\right]\ . \end{equation} Notice that for every $k\ge 1$ and $D=B,H$ \begin{align}\label{eq:derivative} \frac{{\rm d}^k}{{\rm d}u^k}\mathbb{E}[u^D]\big\rvert_{u=1}=\mathbb{E}[(D)_ku^{D-k}]\big\rvert_{u=1}=\mathbb{E}[(D)_k]\ . \end{align} Moreover, \begin{equation}\label{eq:comparison-fact-mom} \mathbb{E}[(B)_k]=(|A|)_k\frac{m^k}{N^k}\ge (|A|)_k\frac{(m)_k}{(N)_k}=\mathbb{E}[(H)_k]\ . \end{equation} Therefore \eqref{eq:MGF} follows by a Taylor expansion of $\mathbb{E}\left[u^B\right]$ and $\mathbb{E}\left[u^{H}\right]$ around $u=1$, noticing that, thanks to \eqref{eq:derivative} and \eqref{eq:comparison-fact-mom}, the coefficients of the former series are greater than or equal to those of the latter. \end{proof} \subsection{TV lower bound via distinguishing statistics}\label{sec:tv-lb} The lower bounds are proved by considering the events $\mathcal{E}^-_k\coloneqq\{0,1,\dots,N/2-k \}$ and, for appropriate choices of $t$ and $k$, using the lower bound \begin{equation}\label{eq:idea-lb} \mathcal{D}_{\alpha,\beta}^{(N)}(t)\ge \mathbb{P}(S(t) \in \mathcal{E}^-_k)-\mathbb{P}(S^\star \in \mathcal{E}^-_k)\ .
\end{equation} One can rewrite \begin{equation}\label{eq:lower-bound-CLT-star-k} \mathbb{P}(S^\star\in\mathcal{E}^-_k)=\mathbb{P}\left(2\sqrt{N}\left( \frac1N S^\star-\frac12\right)\le-\frac{2k}{\sqrt{N}}\right)\ , \end{equation} so that, choosing $k=\tfrac{\varepsilon}{2}\sqrt{N}$, by the Central Limit Theorem we have \begin{equation}\label{eq:lower-bound-CLT-star} \mathbb{P}(S^\star\in\mathcal{E}^-_k)\to\Phi(-\varepsilon)\ . \end{equation} Similarly, \begin{align}\label{eq:lower-bound-CLT-t} \mathbb{P}(S(t)&\not\in\mathcal{E}_k^-)=\\ &\mathbb{P}\left(\frac{N}{\sqrt{{\rm Var}(S(t))}}\left(\frac{S(t)}{N}-\frac{\mathbb{E}[S(t)]}{N} \right)> \frac{1}{\sqrt{{\rm Var}(S(t))}}\left(\frac{N}{2}-\mathbb{E}[S(t)]-k\right)\right)\ , \end{align} hence, if $\frac{N}{2}-\mathbb{E}[S(t)]\ge \varepsilon \sqrt{N}$, choosing $k=\tfrac{\varepsilon}{2}\sqrt{N}$ as in \eqref{eq:lower-bound-CLT-star} we obtain, for $t=t_N\to\infty$, \begin{equation}\label{eq:lower-bound-CLT-t-k} \begin{split} \mathbb{P}(S(t)\not\in\mathcal{E}_k^-)\le&\:\mathbb{P}\left(\frac{N}{\sqrt{{\rm Var}(S(t))}}\left(\frac{1}{N}S(t)-\frac{\mathbb{E}[S(t)]}{N} \right)> \frac{\varepsilon}{2}\frac{\sqrt{N}}{\sqrt{{\rm Var}(S(t))}}\right)\\ \to&\:1-\Phi\left(\varepsilon \right)\ , \end{split} \end{equation} where we used that if $t=t_N\to\infty$ then $\sqrt{{\rm Var}(S(t)) N^{-1}}\to \frac12$, and, in the last step, that $N$ is sufficiently large. \section{Proof of Theorem \ref{th:tv}}\label{sec:proofs} We start by considering the case in which $\gamma_\infty<0$ and therefore, for every $N$ large enough, $\beta<\tfrac12(1+\alpha)$. \begin{proof}[Insensitivity] We start by proving the upper bound. Let $t^+=\tfrac12\log N + C$.
Then, by \eqref{eq:upper-bound} we have, for all $N$ sufficiently large, \begin{align*} \mathcal{D}_{\alpha,\beta}^{(N)}(t^+)\le& \frac12 \sqrt{\left[1+\frac14\left(N^{\beta-1-\frac\alpha{2}}e^{-C\alpha}+N^{-\frac12}e^{-C}\right)^2\right]^N-1} \\ \le&\frac12 \sqrt{\left[1+\frac{e^{-2C}}{N}\right]^N-1} \le\sqrt{e^{e^{-2C}}-1}\ , \end{align*} and the latter can be made arbitrarily small by taking $C\to\infty$. We now prove the lower bound and set $t^-=\tfrac12\log N - C$. By \eqref{eq:mean-var} we have \begin{equation}\label{eq:eps} \frac{N}{2}-\mathbb{E}[S(t^-)]\ge \sqrt{N} e^C\ , \end{equation} hence, choosing $\varepsilon= e^C$ and $k=\frac{\varepsilon}{2}\sqrt{N}$, thanks to \eqref{eq:lower-bound-CLT-t-k} we have \begin{equation}\label{eq:lower-bound-subcritical} \mathbb{P}(S(t^-)\not\in\mathcal{E}_k^-)\le 1-\Phi(\tfrac1{2} e^C)\ . \end{equation} On the other hand, by \eqref{eq:lower-bound-CLT-star}, \begin{equation}\label{eq:lower-bound-subcritical-final} \mathbb{P}(S^\star\in\mathcal{E}_k^-)\le \Phi(-\tfrac1{2} e^{C})\ , \end{equation} and the conclusion follows from \eqref{eq:idea-lb}, \eqref{eq:lower-bound-subcritical} and \eqref{eq:lower-bound-subcritical-final} by taking $C\to\infty$. \end{proof} We now prove the result for the phase in which $\gamma_\infty\ge 0$, focusing first on the case in which $(2\beta-1)\log N\to\infty$. \begin{proof}[Delayed cutoff] For the upper bound, letting $t^+=\frac{1+\gamma}{2}\log N+C\alpha^{-1}=\frac{2\beta-1}{2\alpha}\log N + C\alpha^{-1}$ and using \eqref{eq:upper-bound} we get, for all $N$ sufficiently large, \begin{align*} \mathcal{D}_{\alpha,\beta}^{(N)}(t^+)\le& \frac12 \sqrt{\left[1+\frac14\left(N^{\beta-1-\beta+\frac12}e^{- C}+N^{-\frac12-\frac\gamma2}e^{-C\alpha^{-1}}\right)^2\right]^N-1} \\ \le&\frac12\sqrt{\left[1+\frac{e^{-2 C}}{N}\right]^N-1} \le\sqrt{e^{e^{-2C}}-1}\ .
\end{align*} For the lower bound, set $t^-=\frac{2\beta-1}{2\alpha}\log N-C\alpha^{-1}$ and notice that \eqref{eq:eps} holds also in this case. Therefore, the conclusion follows again by \eqref{eq:lower-bound-subcritical} and \eqref{eq:lower-bound-subcritical-final}. \end{proof} Finally, we consider the case in which $\gamma_\infty\ge 0$ and $(2\beta-1)\log N\to \ell\in[0,\infty)$. \begin{proof}[No cutoff] Letting $C>0$ be a constant and using again \eqref{eq:upper-bound} we get, for all $N$ sufficiently large, \begin{equation}\label{eq:ub-super-gamma->infinity} \begin{split} \mathcal{D}_{\alpha,\beta}^{(N)}(C\alpha^{-1})\le& \frac12 \sqrt{\left[1+\frac14\left(N^{\beta-1}e^{- C}+e^{-C\alpha^{-1}}\right)^2\right]^N-1}\\ \le&\frac12 \sqrt{\left[1+N^{2\beta-2}e^{- 2C}\right]^N-1}\\ \le&\sqrt{e^{N^{2\beta-1}e^{- 2C}}-1} \le\sqrt{e^{N^\frac{2\ell}{\log N}e^{- 2C}}-1}=\sqrt{e^{e^{- 2(C-\ell)}}-1}\ , \end{split} \end{equation} where in the second inequality in \eqref{eq:ub-super-gamma->infinity} we used that, for all $N$ sufficiently large, \begin{equation}\label{eq:5.11} \beta-1>-\frac{C}{\alpha\log N}\qquad\text{for all }C>\ell\,\frac{1-\beta}{1+\gamma}\ , \end{equation} thanks to the assumption $\alpha(1+\gamma)\log N\to \ell$. Since $\beta\to\frac12$, the upper bound in \eqref{eq:ub-super-gamma->infinity} holds for all $N$ large enough, as soon as $C>\ell$.
For the lower bound, since \begin{equation*} \frac{N}2-\mathbb{E}[S(t)]\ge N^\beta e^{-C}=e^{\frac{\ell}{2}-C}\sqrt{N}\ , \end{equation*} we have by \eqref{eq:lower-bound-CLT-star} and \eqref{eq:lower-bound-CLT-t-k} with $\varepsilon=e^{\frac{\ell}{2}-C}$ and $k=\frac{\varepsilon}{2} \sqrt{N}$ \begin{align*} \mathbb{P}(S(C\alpha^{-1})\not\in\mathcal{E}_k^-)\le&1-\Phi\left(\tfrac{1}{2}e^{\frac{\ell}{2}-C} \right),\qquad \mathbb{P}(S^\star\in \mathcal{E}^-_k)\le \Phi(-\tfrac{1}{2}e^{\frac{\ell}{2}-C} )\ , \end{align*} so that, for all $C>0$ and every $N$ sufficiently large, thanks to \eqref{eq:idea-lb}, \begin{equation*} \mathcal{D}_{\alpha,\beta}^{(N)}(C\alpha^{-1})\ge \Phi(\tfrac{1}{2}e^{\frac{\ell}{2}-C} )-\Phi(-\tfrac{1}{2}e^{\frac{\ell}{2}-C} )>0\ .\qedhere \end{equation*} \end{proof} This concludes the proof of Theorem \ref{th:tv}. \subsubsection*{Acknowledgments} The author thanks the German Research Foundation (project number 444084038, priority program SPP2265) for financial support. Moreover, the author wishes to thank Pietro Caputo and Federico Sau for several helpful discussions on the topic. \end{document}
\begin{document} \thispagestyle{plain} {\footnotesize {{\bf General Mathematics Vol. xx, No. x (201x), xx--xx}}} \vspace*{2cm} \begin{center} {\Large {\bf A Note On Multi Poly-Euler Numbers And Bernoulli Polynomials}} \footnote{\it Received 08 Jun, 2009 \hspace{0.1cm} Accepted for publication (in revised form) 29 November, 2013} {\large Hassan Jolany, Mohsen Aliabadi, Roberto B. Corcino, and M.~R. Darafsheh } \end{center} \begin{abstract} In this paper we introduce a generalization of Multi Poly-Euler polynomials and we investigate some relationships involving Multi Poly-Euler polynomials. Obtaining a closed formula for the generalized Multi Poly-Euler numbers therefore seems to be a natural and important problem. \end{abstract} \begin{center} {\bf 2010 Mathematics Subject Classification:} 11B73, 11A07 {\bf Key words and phrases:} Euler numbers, Bernoulli numbers, Poly-Bernoulli numbers, Poly-Euler numbers, Multi Poly-Euler numbers and polynomials \end{center} \vspace*{0.3cm} \section{Introduction} In the 17th century a topic of mathematical interest was finite sums of powers of integers, such as the series $1+2+\cdots+(n-1)$ or the series $1^2 + 2^2 + \cdots + (n -1)^2$. The closed forms for these finite sums were known, but the sum of the more general series $1^k+2^k+\cdots+(n-1)^k$ was not. It was the mathematician Jacob Bernoulli who solved this problem. Bernoulli numbers arise in the Taylor expansion \begin{equation}\label{e1} \begin{array}{c} \frac{x}{e^x-1}=\sum\limits_{n=0}^{\infty}B_n\frac{x^n}{n!} \end{array}. \end{equation} and we have \begin{equation}\label{e1} \begin{array}{c}S_m(n) = \sum_{k=1}^{n-1} k^m = 1^m + 2^m+ \cdots + (n-1)^m= {1\over{m+1}}\sum_{k=0}^m {m+1\choose{k}} B_k\; n^{m+1-k} \end{array}. \end{equation} We also have the following matrix representation for Bernoulli numbers (for $n\in \mathbf{N}$) [1--4].
\begin{align} B_{n} &=\frac{(-1)^n} {(n-1)!}~ \begin{vmatrix} \frac{1}{2}& \frac{1}{3} & \frac{1}{4} &\cdots \frac{1}{n}&~\frac{1}{n+1}\\ 1& 1 & 1 &\cdots 1 & 1 \\ 0& 2 & 3 &\cdots {n-1} & n\\ 0& 0 & \binom{3}{2} &\cdots \binom{n-1}{2} & \binom{n}{2} \\ \vdots & ~ \vdots & ~ \vdots &\ddots~~ \vdots & \vdots \\ 0& 0& 0& \cdots \binom{n-1}{n-2} & \binom{n}{n-2} \\ \end{vmatrix}. \end{align} Euler, on page 499 of [5], introduced Euler polynomials to evaluate the alternating sum \begin{equation}\label{e1} \begin{array}{c} A_n(m)=\sum\limits_{k=1}^{m}(-1)^ {m-k}k^n=m^n-(m-1)^n+\cdots+(-1)^ {m-1}1^n \end{array}. \end{equation} The Euler numbers may be defined by the following generating function \begin{equation}\label{e1} \begin{array}{c} \frac{2}{e^{t}\!+\!1}\;=\;\sum\limits_{{n=0}}^{\infty}E_{n}\frac{t^{n}}{n!} \end{array}. \end{equation} and we have the following matrix representation for Euler numbers [1--4]. \begin{align} E_{2n} &=(-1)^n (2n)!~ \begin{vmatrix} \frac{1}{2!}& 1 &~& ~&~\\ \frac{1}{4!}& \frac{1}{2!} & 1 &~&~\\ \vdots & ~ & \ddots~~ &\ddots~~ & ~\\ \frac{1}{(2n-2)!}& \frac{1}{(2n-4)!}& ~&\frac{1}{2!} & 1\\ \frac{1}{(2n)!}&\frac{1}{(2n-2)!}& \cdots & \frac{1}{4!} & \frac{1}{2!}\end{vmatrix}. \end{align} The poly-Bernoulli polynomials have been studied by many researchers in the last decade. The history of these polynomials goes back to Kaneko. The poly-Bernoulli polynomials have wide-ranging applications in number theory, combinatorics, and other fields of applied mathematics. One application of poly-Bernoulli numbers, investigated by Chad Brewbaker in [6--9], concerns the number of $(0,1)$-matrices with $n$ rows and $k$ columns. He showed that the number of $(0,1)$-matrices with $n$ rows and $k$ columns uniquely reconstructable from their row and column sums is the poly-Bernoulli number of negative index $B_n^{(-k)}$. Let us briefly recall poly-Bernoulli numbers and polynomials.
For an integer $k\in \mathbf{Z}$, put \begin{equation}\label{e1} \begin{array}{c} \operatorname{Li}_k(z) = \sum_{n=1}^\infty {z^n \over n^k} \end{array}. \end{equation} which is the $k$-th polylogarithm if $k\geq 1$, and a rational function if $k\leq 0$. The name of the function comes from the fact that it may alternatively be defined as the repeated integral of itself. This formal power series can be used to define poly-Bernoulli numbers and polynomials. The polynomials $B_n^{(k)}(x)$ are said to be poly-Bernoulli polynomials if they satisfy \begin{equation}\label{e1} \begin{array}{c} {Li_{k}(1-e^{-t}) \over 1-e^{-t}}e^{xt}=\sum\limits_{n=0}^{\infty}B_{n}^{(k)}(x){t^{n}\over n!} \end{array}. \end{equation} In fact, poly-Bernoulli polynomials are a generalization of Bernoulli polynomials, because for $n\geq 0$ we have \begin{equation}\label{e1} \begin{array}{c} (-1)^nB_n^{(1)}(-x)=B_n(x) \end{array}. \end{equation} Sasaki [10], a Japanese mathematician, found the Euler-type version of these polynomials. In fact, using the following relation for Euler numbers, \begin{equation}\label{e1} \begin{array}{c} \frac{1}{\cosh t}=\sum\limits_{n=0}^{\infty}\frac{E_n}{n!}t^n \end{array}. \end{equation} he found a poly-Euler version as follows \begin{equation}\label{e1} \begin{array}{c} \frac{Li_k(1-e^{-4t})}{4t\cosh t}=\sum\limits_{n=0}^{\infty}E_{n}^{(k)}{t^{n}\over n!} \end{array}. \end{equation} Moreover, by defining the following $L$-function, he interpolated his Poly-Euler numbers: \begin{equation}\label{e1} \begin{array}{c} L_k(s)=\frac{1}{\Gamma(s)}\int_0^{\infty}t^{s-1}\frac{Li_k(1-e^{-4t})}{4(e^t+e^{-t})}dt \end{array}. \end{equation} and Sasaki showed that \begin{equation}\label{e1} \begin{array}{c} L_k(-n)=(-1)^nn\frac{E_{n-1}^{(k)}}{2} \end{array}. \end{equation} But the fact is that working with this type of generating function to find identities is not so easy.
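To make the preceding definitions concrete, the following script (an illustrative sketch of our own; the truncation order $M$ and the small matrix sizes are arbitrary choices) computes poly-Bernoulli numbers $B_n^{(k)}$ from the generating function above by truncated power series with exact rational arithmetic, and checks Brewbaker's combinatorial interpretation of $B_n^{(-k)}$ on small cases:

```python
from fractions import Fraction
from itertools import product
from collections import Counter

M = 10  # truncation order of the formal power series in t

def series_mul(f, g):
    """Multiply two truncated power series (coefficient lists in t)."""
    h = [Fraction(0)] * M
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            if fi and i + j < M:
                h[i + j] += fi * gj
    return h

# z(t) = 1 - e^{-t} = sum_{j>=1} (-1)^{j+1} t^j / j!
fact = [1]
for j in range(1, M):
    fact.append(fact[-1] * j)
z = [Fraction(0)] + [Fraction((-1) ** (j + 1), fact[j]) for j in range(1, M)]

def poly_bernoulli(n, k):
    """B_n^{(k)}: n! times the t^n coefficient of Li_k(1-e^{-t})/(1-e^{-t})."""
    gf = [Fraction(0)] * M
    zpow = [Fraction(1)] + [Fraction(0)] * (M - 1)  # z^{m-1}, starting at z^0
    for m in range(1, M):
        c = Fraction(1, m ** k) if k >= 0 else Fraction(m ** (-k))
        for j in range(M):
            gf[j] += c * zpow[j]
        zpow = series_mul(zpow, z)
    return gf[n] * fact[n]

def lonesum_count(n, k):
    """(0,1)-matrices with n rows and k columns uniquely reconstructable
    from their row and column sums, counted by brute force."""
    def sums(m):
        rows = tuple(sum(m[i * k:(i + 1) * k]) for i in range(n))
        cols = tuple(sum(m[i * k + j] for i in range(n)) for j in range(k))
        return rows, cols
    mats = list(product([0, 1], repeat=n * k))
    freq = Counter(sums(m) for m in mats)
    return sum(1 for m in mats if freq[sums(m)] == 1)

assert poly_bernoulli(2, 1) == Fraction(1, 6)      # k = 1 recovers B_2 = 1/6
assert poly_bernoulli(1, -1) == lonesum_count(1, 1) == 2
assert poly_bernoulli(2, -2) == lonesum_count(2, 2) == 14
```

For instance, of the sixteen $2\times 2$ binary matrices only the two permutation-type matrices share their row and column sums, which accounts for the value $B_2^{(-2)}=14$ checked above.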
So, inspired by the definitions of Euler numbers and Bernoulli numbers, we can define Poly-Euler numbers and polynomials as follows; A.~Bayad [11] also defined them by the same method around the same time. \begin{defin}\label{d1} (Poly-Euler polynomials): The Poly-Euler polynomials may be defined by the following generating function, \begin{equation}\label{e1} \begin{array}{c} \frac{2Li_k(1-e^{-t})}{1+e^t}e^{xt}=\sum\limits_{n=0}^{\infty}\mathbf{E}_{n}^{(k)}(x){t^{n}\over n!} \end{array}. \end{equation} \end{defin} If we replace $t$ by $4t$, take $x=1/2$ and use the identity $\cosh t=\frac{e^t+e^{-t}}{2}$, we recover the Poly-Euler numbers introduced by Sasaki and Bayad, and we can also find the same interpolating function for them (with some additional constant coefficient). The generalization of the polylogarithm is defined by the following infinite series \begin{equation}\label{e1} \begin{array}{c} Li_{(k_1,k_2,...,k_r)}(z)=\sum\limits_{0<m_1<m_2<\cdots<m_r}\frac{z^{m_r}}{m_1^{k_1}\cdots m_r^{k_r}} \end{array}. \end{equation} Kim--Kim [12], a student of Taekyun Kim, introduced the Multi poly-Bernoulli numbers and proved that special values of certain zeta functions at non-positive integers can be described in terms of these numbers. The study of Multi poly-Bernoulli numbers and their combinatorial relations has received much attention in [6--13]. The Multi Poly-Bernoulli numbers may be defined as follows \begin{equation}\label{e1} \begin{array}{c} \frac{Li_{(k_1,k_2,...,k_r)}(1-e^{-t})}{(1-e^{-t})^r}=\sum\limits_{n=0}^{\infty}B_n^{(k_1,k_2,...,k_r)}\frac{t^n}{n!} \end{array}. \end{equation} By inspiration of this definition, we can define the Multi Poly-Euler numbers and polynomials.
\begin{defin}\label{d2} Multi Poly-Euler polynomials $\mathbf{E}_n^{(k_1,...,k_r)}(x)$, $(n = 0, 1, 2, ...)$ are defined for each integer $k_1, k_2, ..., k_r $ by the generating series \begin{equation}\label{e1} \begin{array}{c} \frac{2Li_{(k_1,...,k_r)}(1-e^{-t})}{(1+e^t)^r}e^{rxt}=\sum\limits_{n=0}^{\infty}\mathbf{E}_{n}^{(k_1,...,k_r)}(x){t^{n}\over n!} \end{array}. \end{equation} \end{defin} If $x=0$, then we obtain the Multi Poly-Euler numbers $\mathbf{E}_{n}^{(k_1,...,k_r)}=\mathbf{E}_{n}^{(k_1,...,k_r)}(0)$. Now we introduce parameters $a,b,c$ for Multi Poly-Euler polynomials and Multi Poly-Euler numbers as follows. \begin{defin}\label{d3} Multi Poly-Euler polynomials $\mathbf{E}_n^{(k_1,...,k_r)}(x,a,b)$, $(n = 0, 1, 2, ...)$ are defined for each integer $k_1, k_2, ..., k_r $ by the generating series \begin{equation}\label{e1} \begin{array}{c} \frac{2Li_{(k_1,...,k_r)}(1-(ab)^{-t})}{(a^{-t}+b^t)^r}e^{rxt}=\sum\limits_{n=0}^{\infty}\mathbf{E}_{n}^{{(k_1,...,k_r)}}(x,a,b){t^{n}\over n!} \end{array}. \end{equation} \end{defin} In the same way, if $x=0$, then we obtain the Multi Poly-Euler numbers with parameters $a,b$: $\mathbf{E}_{n}^{(k_1,...,k_r)}(a,b)=\mathbf{E}_{n}^{(k_1,...,k_r)}(0,a,b)$. In the following theorem, we find a relation between $\mathbf{E}_{n}^{(k_1,...,k_r)}(a,b)$ and $\mathbf{E}_{n}^{(k_1,...,k_r)}(x)$. \begin{theo}\label{t1} Let $a,b>0$, $ab\neq \pm1$. Then we have \begin{equation}\label{e1} \begin{array}{c} \mathbf{E}_{n}^{(k_1,k_2,...,k_r)}(a,b)=\mathbf{E}_{n}^{(k_1,k_2,...,k_r)}\left(\frac{\ln a}{\ln a+\ln b}\right)(\ln a+\ln b)^n \end{array}.
\end{equation} \end{theo} Proof. By applying Definition 2 and Definition 3, we have \begin{align*} \frac{2Li_{(k_1,...,k_r)}(1-(ab)^{-t})}{(a^{-t}+b^t)^r} &= \sum\limits_{n=0}^{\infty}\mathbf{E}_{n}^{{(k_1,...,k_r)}}(a,b){t^{n}\over n!} \\ & = e^{rt\ln a}\frac{2Li_{(k_1,...,k_r)}(1-e^{-t\ln ab})}{(1+e^{t\ln ab})^r} \end{align*} So, we get \begin{align*} \frac{2Li_{(k_1,...,k_r)}(1-(ab)^{-t})}{(a^{-t}+b^t)^r}=\sum\limits_{n=0}^{\infty}\mathbf{E}_{n}^{{(k_1,...,k_r)}}\left(\frac{\ln a}{\ln a +\ln b}\right)(\ln a +\ln b)^n{t^{n}\over n!} \end{align*} Therefore, by comparing the coefficients of $t^n$ on both sides, we get the desired result. $\square$ Now, in the next theorem, we give a more direct relationship between $\mathbf{E}_{n}^{(k_1,k_2,...,k_r)}(a,b)$ and $\mathbf{E}_{n}^{(k_1,k_2,...,k_r)}$. \begin{theo}\label{t2} Let $a,b>0$, $ab\neq \pm1$. Then we have \begin{equation}\label{e1} \begin{array}{c} \mathbf{E}_{n}^{(k_1,k_2,...,k_r)}(a,b)=\sum\limits_{i=0}^{n}r^{n-i}(\ln a +\ln b)^i(\ln a)^{n-i}\binom{n}{i} \mathbf{E}_{i}^{(k_1,k_2,...,k_r)} \end{array}. \end{equation} \end{theo} Proof. By applying Definition 2, we have \begin{align*} \sum\limits_{n=0}^{\infty}\mathbf{E}_{n}^{{(k_1,...,k_r)}}(a,b){t^{n}\over n!}&=\frac{2Li_{(k_1,...,k_r)}(1-(ab)^{-t})}{(a^{-t}+b^t)^r} \\ & = e^{rt\ln a}\frac{2Li_{(k_1,...,k_r)}(1-e^{-t\ln ab})}{(1+e^{t\ln ab})^r}\\ & = \left(\sum\limits_{k=0}^{\infty}\frac{r^kt^k(\ln a)^k}{k!}\right)\left(\sum\limits_{n=0}^{\infty}\mathbf{E}_{n}^{{(k_1,...,k_r)}}(\ln a+\ln b)^n\frac{t^n}{n!}\right)\\ & = \sum\limits_{j=0}^{\infty}\left( \sum\limits_{i=0}^{j}r^{j-i}\frac{\mathbf{E}_i^{(k_1,...,k_r)}(\ln a+\ln b)^i(\ln a)^{j-i}}{i!(j-i)!}t^j\right) \end{align*} So, by comparing the coefficients of $t^n$ on both sides, we get the desired result.
$\square$ By applying Definition 2 and simple manipulations, we get the following corollary. \begin{cor}\label{c1} For non-zero numbers $a,b$ with $ab \neq -1$ we have \begin{equation}\label{e2} \begin{array}{c} \mathbf{E}_n^{(k_1,...,k_r)}(x;a,b)=\sum\limits_{i=0}^{n}\binom{n}{i}r^{n-i}\mathbf{E}_i^{(k_1,...,k_r)}(a,b)x^{n-i} \end{array}. \end{equation} \end{cor} Furthermore, by combining the results of Theorem 2 and Corollary 1, we get the following relation between the generalized Multi Poly-Euler polynomials with parameters $a,b$, $\mathbf{E}_{n}^{{(k_1,...,k_r)}}(x;a,b)$, and the Multi Poly-Euler numbers $\mathbf{E}_{n}^{{(k_1,...,k_r)}}$: \begin{equation}\label{e1} \begin{array}{c} \mathbf{E}_{n}^{{(k_1,...,k_r)}}(x;a,b)= \sum\limits_{k=0}^{n} \sum\limits_{j=0}^{k}r^{n-k}\binom{n}{k}\binom{k}{j}(\ln a)^{k-j}(\ln a+\ln b)^j\mathbf{E}_{j}^{{(k_1,...,k_r)}}x^{n-k} \end{array}. \end{equation} Now, we state the ``addition formula'' for generalized Multi Poly-Euler polynomials. \begin{cor}\label{c2}(Addition formula) For non-zero numbers $a,b$ with $ab \neq -1$ we have \begin{equation}\label{e2} \begin{array}{c} \mathbf{E}_n^{(k_1,...,k_r)}(x+y;a,b)=\sum\limits_{k=0}^{n}\binom{n}{k}r^{n-k}\mathbf{E}_k^{(k_1,...,k_r)}(x;a,b)y^{n-k} \end{array}. \end{equation} \end{cor} Proof. We can write \begin{align*} \sum\limits_{n=0}^{\infty}\mathbf{E}_{n}^{{(k_1,...,k_r)}}(x+y;a,b){t^{n}\over n!}&=\frac{2Li_{(k_1,...,k_r)}(1-(ab)^{-t})}{(a^{-t}+b^t)^r} e^{(x+y)rt} \\ & = \frac{2Li_{(k_1,...,k_r)}(1-(ab)^{-t})}{(a^{-t}+b^t)^r} e^{xrt}e^{yrt}\\ & = \left(\sum\limits_{n=0}^{\infty}\mathbf{E}_{n}^{{(k_1,...,k_r)}}(x;a,b){t^{n}\over n!}\right) \left(\sum\limits_{i=0}^{\infty}\frac{y^i r^i}{i!}t^i\right) \\ & = \sum\limits_{n=0}^{\infty}\left( \sum\limits_{k=0}^{n}\binom{n}{k}r^{n-k}y^{n-k}\mathbf{E}_{k}^{{(k_1,...,k_r)}}(x;a,b)\right)\frac{t^n}{n!} \end{align*} So, by comparing the coefficients of $t^n$ on both sides, we get the desired result.
$\square$ \section{Explicit formula for Multi Poly-Euler polynomials} Here we present an explicit formula for Multi Poly-Euler polynomials. \begin{theo}\label{t3} The Multi Poly-Euler polynomials have the following explicit formula \begin{equation}\label{e2} \begin{array}{c} \mathbf{E}^{(k_1,k_2,\ldots, k_r)}_n(x)=\sum\limits_{i=0}^n\sum\limits_{ 0< m_1< m_2<\ldots< m_r \atop c_1+c_2+\ldots=r}\sum\limits_{j=0}^{m_r}\frac{2(rx-j)^{n-i}r!(-1)^{j+c_1+2c_2+\ldots}(c_1+2c_2+\ldots)^i\binom{m_r}{j}\binom{n}{i}}{(c_1!c_2!\ldots)(m_1^{k_1} m_2^{k_2}\ldots m_r^{k_r})} \end{array}. \end{equation} \end{theo} Proof. We have $$Li_{(k_1,k_2,\ldots, k_r)}(1-e^{-t})e^{rxt}=\sum_{ 0< m_1< m_2<\ldots< m_r }\frac{(1-e^{-t})^{m_r}}{m_1^{k_1} m_2^{k_2}\ldots m_r^{k_r}}e^{rxt}\qquad\qquad\qquad\qquad\qquad\qquad$$ \begin{align*} =&\sum_{ 0< m_1< m_2<\ldots< m_r }\frac{1}{m_1^{k_1} m_2^{k_2}\ldots m_r^{k_r}}\sum_{j=0}^{m_r}(-1)^j\binom{m_r}{j}\sum_{n\ge0}(rx-j)^n\frac{t^n}{n!}\\ =&\sum_{n\ge0}\left(\sum_{ 0< m_1< m_2<\ldots< m_r }\sum_{j=0}^{m_r}\frac{(-1)^{j}(rx-j)^n\binom{m_r}{j}}{m_1^{k_1} m_2^{k_2}\ldots m_r^{k_r}}\right)\frac{t^n}{n!}. \end{align*} On the other hand, \begin{align*} \left(\frac{1}{1+e^{t}}\right)^r=&\left(\sum_{ n\ge0 }(-1)^ne^{nt}\right)^r\\ =&\sum_{c_1+c_2+\ldots=r}\frac{r!(-1)^{c_1+2c_2+\ldots}}{c_1!c_2!\ldots}e^{t(c_1+2c_2+\ldots)}\\ =&\sum_{c_1+c_2+\ldots=r}\frac{r!(-1)^{c_1+2c_2+\ldots}}{c_1!c_2!\ldots}\sum_{n\ge0}(c_1+2c_2+\ldots)^n\frac{t^n}{n!}\\ =&\sum_{n\ge0}\left(\sum_{c_1+c_2+\ldots=r}\frac{r!(-1)^{c_1+2c_2+\ldots}(c_1+2c_2+\ldots)^n}{c_1!c_2!\ldots}\right)\frac{t^n}{n!}.
\end{align*} Hence, $$\frac{2Li_{(k_1,k_2,\ldots, k_r)}(1-e^{-t})}{(1+e^{t})^r}e^{rxt}=2Li_{(k_1,k_2,\ldots, k_r)}(1-e^{-t})e^{rxt}\left(\frac{1}{1+e^{t}}\right)^r\qquad\qquad\qquad\qquad\qquad\qquad$$ \begin{align*} =\left(\sum_{n\ge0}\left(\sum_{ 0< m_1< m_2<\ldots< m_r }\sum_{j=0}^{m_r}\frac{(-1)^{j}(rx-j)^n\binom{m_r}{j}}{m_1^{k_1} m_2^{k_2}\ldots m_r^{k_r}}\right)\frac{t^n}{n!}\right)\times\\ \;\;\;\;\times\left(\sum_{n\ge0}\left(\sum_{c_1+c_2+\ldots=r}\frac{r!(-1)^{c_1+2c_2+\ldots}(c_1+2c_2+\ldots)^n}{c_1!c_2!\ldots}\right)\frac{t^n}{n!}\right)\\ =2\sum_{n\ge0}\sum_{i=0}^n\left(\sum_{ 0< m_1< m_2<\ldots< m_r }\sum_{j=0}^{m_r}\frac{(-1)^{j}(rx-j)^{n-i}\binom{m_r}{j}}{m_1^{k_1} m_2^{k_2}\ldots m_r^{k_r}}\right)\frac{t^{n-i}}{(n-i)!}\times\\ \;\;\;\;\times\left(\sum_{c_1+c_2+\ldots=r}\frac{r!(-1)^{c_1+2c_2+\ldots}(c_1+2c_2+\ldots)^i}{c_1!c_2!\ldots}\right)\frac{t^i}{i!}\\\end{align*} \begin{align*}=2\sum_{n\ge0}\sum_{i=0}^n\sum_{ 0< m_1< m_2<\ldots< m_r \atop c_1+c_2+\ldots=r}\sum_{j=0}^{m_r}\frac{(rx-j)^{n-i}r!(-1)^{j+c_1+2c_2+\ldots}(c_1+2c_2+\ldots)^i\binom{m_r}{j}\binom{n}{i}}{(c_1!c_2!\ldots)(m_1^{k_1} m_2^{k_2}\ldots m_r^{k_r})}\frac{t^{n}}{n!}\end{align*} By comparing the coefficient of $t^n/n!$, we obtain the desired explicit formula. $\square$ \begin{defin}\label{d4} (Poly-Euler polynomials with $a,b,c$ parameters): The Poly-Euler polynomials with $a,b,c$ parameters may be defined by the following generating function, \begin{equation}\label{e1} \begin{array}{c} \frac{2Li_k(1-(ab)^{-t})}{a^{-t}+b^t}c^{xt}=\sum\limits_{n=0}^{\infty}\mathbf{E}_{n}^{(k)}(x;a,b,c){t^{n}\over n!} \end{array}. \end{equation} \end{defin} Now, in the next theorem, we give an explicit formula for Poly-Euler polynomials with $a,b,c$ parameters.
\begin{theo}\label{t4} The generalized Poly-Euler polynomials with $a,b,c$ parameters have the following explicit formula \begin{equation}\label{e2} \begin{array}{c} \mathbf{E}^{(k)}_n(x;a,b,c)=\\ \sum\limits_{m=0}^n\sum\limits_{j=0}^m\sum\limits_{i=0}^j\frac{2(-1)^{m-j+i}}{j^k}\binom{j}{i}(x\ln c-(m-j+i+1)\ln a-(m-j+i)\ln b)^n \end{array}. \end{equation} \end{theo} Proof. We can write \begin{align*}\sum_{n\ge0}\mathbf{E}^{(k)}_n(x;a,b,c)\frac{t^n}{n!}=\frac{2Li_k(1-(ab)^{-t})}{a^{-t}((ab)^{-t}+1)}c^{xt} =2a^{-t}\left(\sum_{n\ge0}(-1)^n(ab)^{-nt}\right)\left(\sum_{m\ge1}\frac{\left(1-(ab)^{-t}\right)^m}{m^k}\right)c^{xt}.\end{align*} \begin{align*} =&a^{-t}\sum_{m\ge0}\sum_{j=0}^m\sum_{i=0}^j\frac{2(-1)^{m-j+i}}{j^k}\binom{j}{i}(ab)^{-t(m-j+i)}c^{xt}\\ =&\sum_{m\ge0}\sum_{j=0}^m\sum_{i=0}^j\frac{2(-1)^{m-j+i}}{j^k}\binom{j}{i}e^{-t(m-j+i)\ln(ab)}e^{-t\ln a}e^{xt\ln c}\\ =&\sum_{m\ge0}\sum_{j=0}^m\sum_{i=0}^j\frac{2(-1)^{m-j+i}}{j^k}\binom{j}{i}\sum_{n\ge0}(x\ln c-(m-j+i+1)\ln a-(m-j+i)\ln b)^n\frac{t^n}{n!}\\ =&\sum_{n\ge0}\sum_{m=0}^n\sum_{j=0}^m\sum_{i=0}^j\frac{2(-1)^{m-j+i}}{j^k}\binom{j}{i}(x\ln c-(m-j+i+1)\ln a-(m-j+i)\ln b)^n\frac{t^n}{n!}. \end{align*} By comparing the coefficient of $t^n/n!$, we obtain the desired explicit formula.$\square$ \noindent $\begin{array}{ll} \textrm{\bf Hassan Jolany}\\ \textrm{Université des Sciences et Technologies de Lille}\\ \textrm{UFR de Mathématiques}\\ \textrm{Laboratoire Paul Painlevé}\\ \textrm{CNRS-UMR 8524 59655 Villeneuve d'Ascq Cedex/France}\\ \textrm{e-mail: [email protected]} \end{array}$ \noindent $\begin{array}{ll} \textrm{\bf Mohsen Aliabadi}\\ \textrm{Department of Mathematics, Statistics and Computer Science,}\\ \textrm{University of Illinois at Chicago, USA}\\ \textrm{e-mail: [email protected]} \end{array}$ \noindent $\begin{array}{ll} \textrm{\bf Roberto B.
Corcino}\\ \textrm{Department of Mathematics}\\ \textrm{Mindanao State University, Marawi City, 9700 Philippines}\\ \textrm{e-mail: [email protected]} \end{array}$ \noindent $\begin{array}{ll} \textrm{\bf M.R.Darafsheh}\\ \textrm{Department of Mathematics, Statistics and Computer Science }\\ \textrm{Faculty of Science}\\ \textrm{University of Tehran, Iran}\\ \textrm{e-mail: [email protected]} \end{array}$ \end{document}
\begin{document} \date{} \title{A New Balanced Subdivision of a Simple Polygon for Time-Space Trade-off Algorithms\thanks{This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the SW Starlab support program (IITP-2017-0-00905) supervised by the IITP (Institute for Information \& communications Technology Promotion)}} \begin{abstract} We are given a read-only memory for input and a write-only stream for output. For a positive integer parameter $s$, an $s$-workspace algorithm is an algorithm using only $O(s)$ words of workspace in addition to the memory for input. In this paper, we present an $O(n^2/s)$-time $s$-workspace algorithm for subdividing a simple polygon into $O(\min\{n/s,s\})$ subpolygons of complexity $O(\max\{n/s,s\})$. As applications of the subdivision, the previously best known time-space trade-offs for the following three geometric problems are improved immediately by adopting the proposed subdivision: (1) computing the shortest path between two points inside a simple $n$-gon, (2) computing the shortest path tree from a point inside a simple $n$-gon, (3) computing a triangulation of a simple $n$-gon. In addition, we improve the algorithm for problem (2) further by applying different approaches depending on the size of the workspace. \end{abstract} \section{Introduction} In algorithm design for a given task, we seek to construct an \emph{efficient} algorithm with respect to the time and space complexities. However, one cannot achieve both goals at the same time in many cases: one has to use more memory space for storing information necessary to achieve a faster algorithm, and spend more time if a smaller amount of memory is allowed. Therefore, one has to make a compromise between the time and space complexities, considering the goal of the task and the system resources where the algorithm under design is performed. For this reason, a number of time-space trade-offs were considered even as early as the 1980s.
For example, Frederickson~\cite{Frederickson-Upperbounds-1987} presented optimal time-space trade-offs for sorting and selection problems in 1987. After this work, a significant amount of research has been done on time-space trade-offs in the design of algorithms. The model we consider in this paper is formally described as follows. The input is given in a \emph{read-only memory} under a random-access model. For a positive integer parameter $s$, which is determined by the user, a memory space of $O(s)$ words is available as workspace (read-write memory under a random-access model) in addition to the memory for input. We assume that a word is large enough to store a number or a pointer. During the process, the output is to be written to a write-only stream without repetition. The assumption of read-only input has been considered in applications where the input is required to be retained in its original state or where more than one program accesses the input simultaneously. An algorithm designed in this setting is called an \emph{$s$-workspace algorithm}. It is generally assumed that $s$ is sublinear in the size of the input. Many classical algorithms require workspace of at least the size of the input in addition to the memory for input. However, this is not always possible, because the amount of data collected and used by various applications has increased significantly over the last years, and the memory resource available in the system has become relatively small compared to the amount of data. The $s$-workspace algorithms deal with the case that the size of the workspace is limited. Thus we assume that $s$ is at most the size of the input throughout this paper. \subsection{Previous Work} In this paper, we consider time-space trade-offs for constructing a few geometric structures inside a simple polygon: the shortest path between two points, the shortest path tree from a point, and a triangulation of a simple polygon.
With linear-size workspace, optimal algorithms for these problems are known. The shortest path between two points and the shortest path tree from a point inside a simple $n$-gon can be computed in $O(n)$ time using $O(n)$ words of workspace~\cite{guibas_linear_1987}. A triangulation of a simple $n$-gon can also be computed in $O(n)$ time using $O(n)$ words of workspace~\cite{Chazelle-triangulating-1991}. For a positive integer parameter $s$, the following $s$-workspace algorithms are known. \begin{itemize} \item \textbf{The shortest path between two points inside a simple polygon:} The first non-trivial $s$-workspace algorithm for computing the shortest path between any two points in a simple $n$-gon was given by Asano et al.~\cite{asano_memory-constrained_2013}. Their algorithm consists of two phases. In the first phase, they subdivide the input simple polygon into $O(s)$ subpolygons of complexity $O(n/s)$ in $O(n^2)$ time. In the second phase, they compute the shortest path between the two points in $O(n^2/s)$ time using the subdivision. In the paper, they asked whether the first phase can be improved to take $O(n^2/s)$ time. This problem is still open, although there are several partial results. Har-Peled~\cite{har-peled_shortest_2015} presented an $s$-workspace algorithm which takes $O(n^2/s+n\log s \log^4(n/s))$ expected time. Their algorithm takes $O(n^2/s)$ expected time for the case of $s=O(n/\log^2 n)$. For the case that the input polygon is monotone, Barba et al.~\cite{barba_spacetime_2015} presented an $s$-workspace algorithm which takes $O(n^2/s+(n^2\log n)/2^s)$ time. Their algorithm takes $O(n^2/s)$ time for $\log\log n\leq s <n$. \item \textbf{The shortest path tree from a point inside a simple polygon:} The shortest path tree from a point $p$ inside a simple polygon is defined as the union of the shortest paths from $p$ to all vertices of the simple polygon.
Aronov et al.~\cite{aronov_time-space} presented an $s$-workspace algorithm for computing the shortest path tree from a given point. Their algorithm reports the edges of the shortest path tree without repetition in an arbitrary order in $O((n^2\log n)/s + n\log s\log^5(n/s))$ expected time. \item \textbf{A triangulation of a simple polygon:} Aronov et al.~\cite{aronov_time-space} presented an $s$-workspace algorithm for computing a triangulation of a simple $n$-gon. Their algorithm returns the edges of a triangulation without repetition in $O(n^2/s + n\log s\log^5{(n/s)})$ expected time. Moreover, their algorithm can be modified to report the resulting triangles of a triangulation together with their adjacency information within the same time bound if $s\geq \log n$. For a monotone $n$-gon, Barba et al.~\cite{barba_spacetime_2015} presented an $O(s\log_s n)$-workspace algorithm for triangulating the polygon in $O(n\log_s n)$ time for a parameter $s\in\{1,\ldots,n\}$. Later, Asano and Kirkpatrick~\cite{Asano-time-space-2013} showed how to reduce the workspace to $O(s)$ words without increasing the running time. \end{itemize} \subsection{Our Results} In this paper, we present an $s$-workspace algorithm to subdivide a simple polygon with $n$ vertices into $O(\min\{n/s,s\})$ subpolygons of complexity $O(\max\{n/s,s\})$ in $O(n^2/s)$ deterministic time. We obtain this subdivision in three steps. First, we choose every $\max\{n/s,s\}$th vertex of the simple polygon; we call these vertices \emph{partition vertices}. In the second step, for every pair of consecutive partition vertices along the polygon boundary, we choose $O(1)$ vertices, which we call \emph{extreme vertices}. Then we draw the vertical extensions from each partition vertex and each extreme vertex, one going upwards and one going downwards, until the extensions escape from the polygon for the first time. These extensions subdivide the polygon into subpolygons.
In the subdivision, however, some subpolygons may still have complexity strictly larger than $O(\max\{n/s,s\})$. In the third step, we subdivide each such subpolygon further into subpolygons of complexity $O(\max\{n/s,s\})$. Then we show that the resulting subdivision has the desired complexity. By using this subdivision method, we improve the running times for the following three problems without increasing the size of the workspace. \begin{itemize} \item \textbf{The shortest path between two points inside a simple polygon:} We can compute the shortest path between any two points inside a simple $n$-gon in $O(n^2/s)$ deterministic time using $O(s)$ words of workspace. The previously best known $s$-workspace algorithm~\cite{har-peled_shortest_2015} takes $O(n^2/s + n\log s \log^4(n/s))$ expected time. \item \textbf{The shortest path tree from a point inside a simple polygon:} The previously best known $s$-workspace algorithm~\cite{aronov_time-space} takes $O((n^2\log n)/s + n\log s\log^5{(n/s)})$ expected time. It uses the algorithm in~\cite{har-peled_shortest_2015} as a subprocedure for computing the shortest path between two points. If the subprocedure is replaced by our shortest path algorithm, the algorithm is improved to take $O((n^2\log n)/s)$ expected time. \item \textbf{A triangulation of a simple polygon:} The previously best known $s$-workspace algorithm~\cite{aronov_time-space} takes $O(n^2/s + n\log s \log^5(n/s))$ expected time, which uses the shortest path algorithm in~\cite{har-peled_shortest_2015} as a subprocedure. If the subprocedure is replaced by our shortest path algorithm, the triangulation algorithm is improved to take only $O(n^2/s)$ deterministic time. \end{itemize} We also improve the algorithm for computing the shortest path tree from a given point even further to take $O(n^2/s + (n^2\log n)/s^c)$ expected time for an arbitrary positive constant $c$.
The improved result is based on the constant-workspace algorithm by Aronov et al.~\cite{aronov_time-space} for computing the shortest path tree rooted at a given point. Depending on the size of workspace, we use two different approaches. For the case of $s=O(\sqrt{n})$, we decompose the polygon into subpolygons, each associated with a vertex, and for each subpolygon we compute the shortest path tree rooted at its associated vertex inside the subpolygon recursively. Due to the workspace constraint, we stop the recursion at a constant depth once one of the stopping criteria is satisfied. Then we show how to report the edges of the shortest path tree without repetition efficiently using $O(s)$ words of workspace. For the case of $s=\Omega(\sqrt{n})$, we can store all edges of each subpolygon in the workspace. We decompose the polygon into subpolygons associated with vertices and solve each subproblem directly using the algorithm by Guibas et al.~\cite{guibas_linear_1987}. \section{Preliminaries} Let $P$ be a simple polygon with $n$ vertices. Let $v_0,\ldots,v_{n-1}$ be the vertices of $P$ in clockwise order along $\ensuremath{\partial} P$. The vertices of $P$ are stored in a read-only memory in this order. For a subpolygon $S$ of $P$, we use $\ensuremath{\partial} S$ to denote the boundary of $S$ and $|S|$ to denote the number of vertices of $S$. For any two points $p$ and $q$ in $P$, we use $\pi(p,q)$ to denote the shortest path between $p$ and $q$ contained in $P$. To ease the description, we assume that no two distinct vertices of $P$ have the same $x$-coordinate. We can avoid this assumption by using a shear transformation~\cite[Chapter 6]{CGbook}. Let $v$ be a vertex of $P$. We consider two vertical extensions from $v$, one going upwards and one going downwards, until they escape from $P$ for the first time. A vertical extension from $v$ contains no vertex of $P$ other than $v$ due to the assumption we made above. 
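The shear transformation mentioned above admits a one-line sketch. (This is a simplification under the assumption that $\varepsilon$ is chosen small enough for the given input; a careful implementation treats $\varepsilon$ symbolically.)

```python
def shear(vertices, eps=1e-9):
    """Map each point (x, y) to (x + eps*y, y).

    For a sufficiently small eps > 0, the sheared vertices all have
    distinct x-coordinates while the combinatorial structure of the
    polygon is preserved, so the general-position assumption on
    x-coordinates can be made without loss of generality.
    """
    return [(x + eps * y, y) for (x, y) in vertices]
```

A unit square, whose left and right sides each share an $x$-coordinate, gets four distinct $x$-coordinates after shearing while keeping all $y$-coordinates.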
We call the point of $\ensuremath{\partial} P$ where an extension from $v$ escapes from $P$ for the first time a \emph{foot point} of $v$. Note that a foot point of a vertex might be the vertex itself. The following two lemmas show how to compute and report the foot points of vertices using $O(s)$ words of workspace. \begin{lemma} \label{lem:compute-extreme-small} For a polygonal chain $\gamma\subseteq\ensuremath{\partial} P$ of size $O(s)$, we can compute the foot points of all vertices of $\gamma$ in $O(n)$ deterministic time using $O(s)$ words of workspace. \end{lemma} \begin{proof} We show how to compute, for each vertex $v$ of $\gamma$, only the foot point lying above $v$. The other foot points can be computed analogously. The foot point of a vertex $v$ of $\gamma$ might be $v$ itself. We can determine whether the foot point of $v$ is $v$ itself or not in $O(1)$ time by considering the two edges incident to $v$. We split the boundary of $P$ into $O(n/s)$ polygonal chains each of which contains $O(s)$ vertices. Let $\beta_0,\ldots,\beta_t$ be the resulting polygonal chains with $t=O(n/s)$. For a vertex $v\in\gamma$ whose foot point is not $v$ itself, let $\beta_i(v)$ denote the first point of $\beta_i$ (excluding $v$) hit by the upward vertical ray from $v$ for each $i=0,\ldots,t$. If there is no such point, we let $\beta_i(v)$ denote a point at infinity. We observe that the foot point of a vertex $v\in\gamma$ is the one closest to $v$ among $\beta_i(v)$'s for $i=0,\ldots,t$ unless its foot point is $v$ itself. For any fixed index $i\in\{0,\ldots, t\}$, we can compute $\beta_i(v)$ for all vertices $v\in\gamma$ whose foot points are not themselves in $O(s)$ time and $O(s)$ words of workspace by the algorithm in~\cite{chazelle_triangulation_1984}.
This algorithm computes the vertical decomposition of a simple polygon in linear time using linear space, but it can be modified to compute the vertical decomposition of any two non-crossing polygonal curves without increasing the time and space complexities. Since both $\beta_i$ and $\gamma$ have size of $O(s)$, we can apply the vertical decomposition algorithm in~\cite{chazelle_triangulation_1984} in $O(s)$ time using $O(s)$ words of workspace. We apply this algorithm to $\beta_0$. For each vertex $v$ of $\gamma$ whose foot point is not $v$ itself, we store $\beta_0(v)$ in the workspace. Now we assume that we have the one closest to $v$ among $\beta_i(v)$'s, for $i=0,\ldots,j-1$, stored in the workspace. To compute the one closest to $v$ among $\beta_i(v)$'s for $i=0,\ldots,j$, we compute $\beta_j(v)$. This can be done in $O(s)$ time for all vertices on $\gamma$ whose foot points are not themselves using the algorithm in~\cite{chazelle_triangulation_1984}. Then we compare $\beta_j(v)$ and the one stored in the workspace, choose the one closer to $v$ between them and store it in the workspace. Once we do this for all polygonal chains $\beta_i$, we obtain the foot points of all vertices of $\gamma$ by the observation. Since we spend $O(s)$ time for each polygonal chain $\beta_i$, the total running time is $O(n)$. \end{proof} \begin{lemma} \label{lem:compute-footpoints} We can report the foot points of all vertices of $P$ in $O(n^2/s)$ deterministic time using $O(s)$ words of workspace. \end{lemma} \begin{proof} We apply the procedure in Lemma~\ref{lem:compute-extreme-small} to the first $s$ vertices of $P$, the next $s$ vertices of $P$, and so on. In this way we apply this procedure $O(n/s)$ times. Thus we can find all foot points in $O(n^2/s)$ time. \end{proof} The extensions from some vertices of $P$ induce a subdivision of $P$ into subpolygons. Notice that the number of subpolygons in the subdivision is linear in the number of extensions.
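For intuition, the upward foot point of a single vertex can also be computed by a brute-force scan over all edges in $O(n)$ time with $O(1)$ words of workspace. This sketch is not the chunked algorithm of the lemmas above; it relies on the general-position assumption (no two vertices share an $x$-coordinate) and assumes the upward ray is locally interior at the vertex (otherwise the foot point is the vertex itself).

```python
def upward_foot_point(vertices, v_idx):
    """Closest boundary point hit by the upward vertical ray from vertex v_idx.

    Brute-force O(n)-time, O(1)-workspace sketch.  Returns None when no
    edge lies above the vertex, which under our assumptions means the
    foot point is the vertex itself.
    """
    n = len(vertices)
    vx, vy = vertices[v_idx]
    best = None
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        # The strict test also skips the two edges incident to v_idx,
        # since only those edges have an endpoint with x-coordinate vx.
        if not (min(x1, x2) < vx < max(x1, x2)):
            continue
        y = y1 + (y2 - y1) * (vx - x1) / (x2 - x1)  # edge height at x = vx
        if y > vy and (best is None or y < best):
            best = y
    return None if best is None else (vx, best)
```

On the (hypothetical) simple polygon with vertices $(0,0)$, $(2,1)$, $(4,0.1)$, $(5,5)$, $(1,4.8)$, the upward ray from the bottom bump $(2,1)$ first escapes through the top edge at $(2, 4.85)$, while the topmost vertex $(5,5)$ has no boundary point above it.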
In the following sections, we compute $O(\min \{n/s, s\})$ extensions from vertices of $P$ and use them to subdivide $P$ into $O(\min \{n/s, s\})$ subpolygons. We store the endpoints of the extensions of the subdivision together with the extensions themselves in clockwise order along $\ensuremath{\partial} P$ in the workspace. Then we can traverse the boundary of a subpolygon starting from a given edge of the subpolygon in time linear in the complexity of the subpolygon. \section{Balanced Subdivision of a Simple Polygon} \label{sec:subdivision} We say that a subdivision of $P$ with $n$ vertices is \emph{balanced} if it subdivides $P$ into $O(\min\{n/s,s\})$ subpolygons of complexity $O(\max\{n/s,s\})$. In this section, we present an $s$-workspace algorithm that computes a balanced subdivision using $O(\min\{n/s,s\})$ extensions in $O(n^2/s)$ time. In the following subsections, we present a subdivision procedure in three steps. Then we show that the subdivision is balanced. \begin{figure} \caption{(a) The vertical extensions of the partition vertices subdivide $P$ into $O(n/\triangle)$ subpolygons. (b) A subpolygon may have complexity strictly larger than $O(\triangle)$.} \label{fig:subdivide} \end{figure} \subsection{Subdivision in Three Steps} We first present an $s$-workspace algorithm to subdivide $P$ into $O(n/\triangle)$ subpolygons of complexity $O(\triangle)$ using $O(n/\triangle)$ extensions in $O(n^2/s)$ time, where $\triangle$ is a positive integer satisfying $\max\{n/s, (s\log n)/n\}\leq \triangle \leq n$ which is determined by $s$. Since $n/s\leq \triangle$, we have $n/\triangle \leq s$. Thus, we can keep all such extensions in the workspace of size $O(s)$. We will set the value of $\triangle$ in Theorem~\ref{thm:subdivide} so that we can obtain a subdivision of our desired complexity. \label{sec:balanaced-subdivision-3steps} \paragraph{The first step: Subdivision by partition vertices.} We first consider every $\triangle$th vertex of $P$ from $v_0$ in clockwise order, that is, $v_0,v_{\triangle},v_{2\triangle},\ldots,v_{\lfloor n/\triangle\rfloor\triangle}$. We call them \emph{partition vertices}.
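In code, this first-step choice is a single stride over the vertex indices. The snippet below uses the hypothetical instantiation $\triangle=\max\{\lceil n/s\rceil, s\}$ for illustration (the paper fixes $\triangle$ only later, in the subdivision theorem); with this choice the number of partition vertices is $O(\min\{n/s,s\})$.

```python
def partition_vertex_indices(n, s):
    """Indices of every triangle-th vertex of an n-gon.

    The step triangle = max(ceil(n/s), s) is our illustrative choice;
    it satisfies n/s <= triangle, so the number of selected indices,
    about n/triangle, is at most min(n/s, s) up to rounding.
    """
    triangle = max(-(-n // s), s)  # ceil(n/s) using integer arithmetic
    return list(range(0, n, triangle))
```

For example, with $n=100$ and $s=5$ the stride is $\triangle=\max(20,5)=20$, giving the five partition vertices $v_0, v_{20}, v_{40}, v_{60}, v_{80}$.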
The number of partition vertices is $O(n/\triangle)$. We compute the foot points of each partition vertex, which can be done for all partition vertices in $O(n^2/s)$ time in total using $O(s)$ words of workspace by Lemma~\ref{lem:compute-footpoints}. We sort the foot points along $\ensuremath{\partial} P$ in $O((n/\triangle) \log (n/\triangle))$ time, which is $O(n^2/s)$ by the fact that $\triangle\geq (s\log n)/n$. We store them together with their vertical extensions using $O(n/\triangle)=O(s)$ words of workspace. The vertical extensions of the partition vertices subdivide $P$ into $O(n/\triangle)$ subpolygons. See Figure~\ref{fig:subdivide}(a). However, there might be a subpolygon with complexity strictly larger than $O(\triangle)$. See Figure~\ref{fig:subdivide}(b). Recall that our goal is to subdivide $P$ into $O(n/\triangle)$ subpolygons each of complexity $O(\triangle)$. To achieve this complexity, we subdivide each subpolygon further. \paragraph{The second step: Subdivision by extreme vertices.} The \emph{\ensuremath{($\textsc{l}$,$\textsc{c}$)}-extreme vertex} and \emph{\ensuremath{($\textsc{l}$,$\textsc{cc}$)}-extreme vertex} of a polygonal chain $\gamma$ of $\ensuremath{\partial} P$ are defined as follows. Let $V_\gamma$ be the set of all vertices of $\gamma$ both of whose foot points are on $\ensuremath{\partial} P\setminus \gamma$ and whose extensions lie locally to the \emph{left} of $\gamma$. The \ensuremath{($\textsc{l}$,$\textsc{c}$)}-extreme vertex (or the \ensuremath{($\textsc{l}$,$\textsc{cc}$)}-extreme vertex) of $\gamma$ is the vertex in $V_\gamma$ defining the first extension we encounter while we traverse $\ensuremath{\partial} P$ in clockwise (or counterclockwise) order from $v_0$. See Figure~\ref{fig:second-third}(a) for an illustration. Similarly, we define the \emph{\ensuremath{($\textsc{r}$,$\textsc{c}$)}-extreme vertex} and \emph{\ensuremath{($\textsc{r}$,$\textsc{cc}$)}-extreme vertex} of $\gamma$. 
In this case, we consider the vertices of $\gamma$ whose extensions lie locally to the \emph{right} of $\gamma$. We simply call the \ensuremath{($\textsc{l}$,$\textsc{c}$)}-, \ensuremath{($\textsc{l}$,$\textsc{cc}$)}-, \ensuremath{($\textsc{r}$,$\textsc{c}$)}- and \ensuremath{($\textsc{r}$,$\textsc{cc}$)}-extreme vertices the \emph{extreme vertices} of $\gamma$. Note that $\gamma$ may not have any extreme vertex. In the second step, we consider every polygonal chain of $\ensuremath{\partial} P$ connecting two consecutive partition vertices along $\ensuremath{\partial} P$ and compute the extreme vertices of the chain. Then we have $O(n/\triangle)$ extreme vertices. We compute the foot points of all extreme vertices and store them together with their vertical extensions using $O(n/\triangle)=O(s)$ words of workspace in $O(n^2/s)$ time by Lemma~\ref{lem:compute-footpoints} and Lemma~\ref{lem:compute-extreme}. \begin{figure} \caption{(a) The extreme vertices of a polygonal chain connecting two consecutive partition vertices. (b) A spiral-like subpolygon with five extensions on its boundary.} \label{fig:second-third} \end{figure} \begin{lemma} \label{lem:compute-extreme} We can find the extreme vertices of every polygonal chain of $\ensuremath{\partial} P$ connecting two consecutive partition vertices along $\ensuremath{\partial} P$ in $O(n^2/s)$ total time using $O(s)$ words of workspace. \end{lemma} \begin{proof} Let $\beta_i$ be the polygonal chain of $\ensuremath{\partial} P$ connecting two consecutive partition vertices $v_{i\triangle}$ and $v_{(i+1)\triangle}$ ($v_{i\triangle}$ and $v_0$ if $i=\lfloor n/\triangle\rfloor$) along $\ensuremath{\partial} P$ for $i=0,\ldots, \lfloor n/\triangle\rfloor$. We show how to compute the \ensuremath{($\textsc{l}$,$\textsc{c}$)}-extreme vertices of $\beta_i$ for all $i$. The other types of extreme vertices can be computed analogously. We apply the algorithm in Lemma~\ref{lem:compute-footpoints} that reports the foot points of every vertex of $P$.
During the execution of the algorithm, for every $i$, we store one vertex for $\beta_i$ together with its foot points as a candidate for the \ensuremath{($\textsc{l}$,$\textsc{c}$)}-extreme vertex of $\beta_i$. These candidates are updated during the execution of the algorithm. At the end of the execution, we guarantee that the vertex stored for $\beta_i$ is the \ensuremath{($\textsc{l}$,$\textsc{c}$)}-extreme vertex of $\beta_i$ for every $i$ from $0$ to $\lfloor n/\triangle\rfloor$. Assume that the algorithm in Lemma~\ref{lem:compute-footpoints} reports the foot points of a vertex $v\in\beta_i$. If the extensions of $v$ lie locally to the left of $\beta_i$, we update the vertex for $\beta_i$ as follows. We compare $v$ and the vertex $v'$ stored for $\beta_i$. Specifically, we check if we encounter the extension of $v$ before the extension of $v'$ during the traversal of $\ensuremath{\partial} P$ from $v_0$ in clockwise order. We can check this in constant time using the foot points of $v'$, which are stored for $\beta_i$ together with $v'$. If so, we store $v$ for $\beta_i$ together with its foot points instead of $v'$. Otherwise, we just keep $v'$ for $\beta_i$. In this way, for every chain $\beta_i$, we consider the foot points of all vertices on $\beta_i$ whose extensions lie locally to the left of $\beta_i$, and keep the extension which comes first from $v_0$ in clockwise order. Thus, at the end of the algorithm, we have the \ensuremath{($\textsc{l}$,$\textsc{c}$)}-extreme vertex of every polygonal chain $\beta_i$ by definition. This takes $O(n^2/s)$ time in total, which is the time for computing the foot points of all vertices of $P$ by Lemma~\ref{lem:compute-footpoints}. \end{proof} \paragraph{The third step: Subdivision by a vertex on a chain connecting three extensions.} After applying the first and second steps, we obtain the subdivision induced by the extensions from the partition and extreme vertices. Let $\ensuremath{Q}$ be a subpolygon in this subdivision.
We will see later in Lemma~\ref{lem:no-consecutive} that $\ensuremath{Q}$ has the following property: every chain connecting two consecutive extensions along $\ensuremath{\partial} \ensuremath{Q}$ has no extreme vertex, except for two such chains. However, it is still possible that $\ensuremath{Q}$ contains more than a constant number of extensions on its boundary. For instance, Figure~\ref{fig:second-third}(b) shows a spiral-like subpolygon in the subdivision constructed after the first and second steps that has five extensions on its boundary. The input polygon can easily be modified so that such a spiral-like subpolygon has more than a constant number of extensions on its boundary. In the third step, we subdivide each subpolygon further so that every subpolygon has $O(1)$ extensions on its boundary. The boundary of $\ensuremath{Q}$ consists of vertical extensions and polygonal chains from $\ensuremath{\partial} P$ whose endpoints are partition vertices, extreme vertices, or their foot points. We treat the upward and downward extensions defined by one partition or extreme vertex (more precisely, the union of them) as one vertical extension. For every triple $(\ell,\ell',\ell'')$ of consecutive vertical extensions appearing along $\ensuremath{\partial} \ensuremath{Q}$ in clockwise order, we consider the part (polygonal chain) of $\ensuremath{\partial} \ensuremath{Q}$ from $\ell$ to $\ell''$ in clockwise order (excluding $\ell$ and $\ell''$). Let $\Gamma$ be the set of all such polygonal chains. For every $\gamma\in\Gamma$, we find a vertex of $\ensuremath{\partial} \ensuremath{Q}\setminus \gamma$, denoted by $v(\gamma)$, such that one of its foot points lies in $\gamma$ between $\ell$ and $\ell'$ and the other foot point lies in $\gamma$ between $\ell'$ and $\ell''$, if such a vertex exists. If there is more than one such vertex, we choose an arbitrary one.
The extensions of $v(\gamma)$ subdivide $\ensuremath{Q}$ into three subpolygons each of which contains one of $\ell, \ell'$ and $\ell''$ on its boundary. In other words, the extensions from $v(\gamma)$ \emph{separate} $\ell, \ell'$ and $\ell''$. In Figure~\ref{fig:second-third}(b), the vertical extension $h$ for $(\ell_0,\ell_1,\ell_2)$ and the vertical extension $h'$ for $(\ell_2,\ell_3,\ell_4)$ together subdivide $\ensuremath{Q}$ into five subpolygons. We can compute the vertices $v(\gamma)$ and their extensions for all $\gamma\in\Gamma$ in $O(|\ensuremath{Q}|^2/s + m(\ensuremath{Q}))$ time in total, where $m(\ensuremath{Q})$ denotes the number of the extensions on the boundary of $Q$. \begin{lemma} \label{lem:compute-extreme-complement} We can find $v(\gamma)$ for every $\gamma\in\Gamma$ in $O(|\ensuremath{Q}|^2/s+m(\ensuremath{Q}))$ total time using $O(s)$ words of workspace. \end{lemma} \begin{proof} The algorithm is similar to the one in Lemma~\ref{lem:compute-extreme}. We apply the algorithm in Lemma~\ref{lem:compute-footpoints} to compute the foot points of every vertex of $\ensuremath{Q}$ with respect to $\ensuremath{Q}$. Assume that the algorithm in Lemma~\ref{lem:compute-footpoints} reports the foot points of a vertex $v$ of $\ensuremath{Q}$. We find the polygonal chains $\gamma\in\Gamma$ containing both foot points of $v$, if they exist. There are at most two such polygonal chains by the construction of $\Gamma$. We can find them in constant time after an $O(m(\ensuremath{Q}))$-time preprocessing for $\ensuremath{Q}$ by Lemma~\ref{lem:point-chain}. Let $\ell,\ell'$ and $\ell''$ be the three extensions defining $\gamma$. Then we check whether one foot point of $v$ lies on the part of $\gamma$ between $\ell$ and $\ell'$, and the other foot point of $v$ lies on the part of $\gamma$ between $\ell'$ and $\ell''$. If so, we denote this vertex by $v(\gamma)$ and keep it for $\gamma$. Otherwise, we do nothing.
In this way, we can find $v(\gamma)$ if it exists since we consider every vertex whose foot points lie on $\gamma$. This takes $O(|\ensuremath{Q}|^2/s+m(\ensuremath{Q}))$ time in total, which is the time for computing the foot points of all vertices of $\ensuremath{Q}$ plus the preprocessing time for $\ensuremath{Q}$. \end{proof} \begin{lemma} \label{lem:point-chain} For any point $p$ on $\ensuremath{\partial} \ensuremath{Q}$, we can find the polygonal chains in $\Gamma$ containing $p$ in constant time, if they exist, after an $O(m(\ensuremath{Q}))$-time preprocessing for $\ensuremath{Q}$, where $m(\ensuremath{Q})$ denotes the number of the extensions on the boundary of $Q$. \end{lemma} \begin{proof} Imagine that we subdivide $\ensuremath{\partial} P$ with respect to the partition vertices of $P$ into $O(n/\triangle)$ chains. Each chain $\beta$ in the subdivision of $\ensuremath{\partial} P$ intersects at most two chains $f_1(\beta),f_2(\beta)\in\Gamma$ by the construction of $\Gamma$. As a preprocessing, for each chain $\beta$ in the subdivision of $\ensuremath{\partial} P$ by the partition vertices, we store $f_1(\beta)$ and $f_2(\beta)$. There are $O(n/\triangle)$ chains of $\ensuremath{\partial} P$, but only $O(m(\ensuremath{Q}))$ of them have their $f_1(\cdot)$ and $f_2(\cdot)$. Thus, we can find and store for all such chains their $f_1(\cdot)$ and $f_2(\cdot)$ in $O(m(\ensuremath{Q}))$ time as follows. For each $\gamma\in\Gamma$, we find the two chains $\beta_1$ and $\beta_2$ of $\ensuremath{\partial} P$ containing $\gamma$ in constant time, and set $f_i(\beta_1)=\gamma$ and $f_i(\beta_2)=\gamma$ for $i=1,2$, accordingly. For any point $p$ on $\ensuremath{\partial} P$, we can find the subchain $\beta$ in the subdivision of $\ensuremath{\partial} P$ containing $p$ in constant time because the partition vertices are distributed uniformly at intervals of $\triangle$ vertices along $\ensuremath{\partial} P$.
Then we check whether $f_1(\beta)$ and $f_2(\beta)$ contain $p$ in constant time. \end{proof} The sum of $|\ensuremath{Q}|$ over all subpolygons $\ensuremath{Q}$ is $O(n)$ and the number of the subpolygons from the second step is $O(n/\triangle)$ since we construct $O(n/\triangle)$ extensions in the first and second steps. Therefore, we can apply the third step of the subdivision for all subpolygons in the subdivision from the second step in $O(n^2/s+n)=O(n^2/s)$ time using $O(s)$ words of workspace. \subsection{Balancedness of the Subdivision} \label{sec:balanced-subdivision-analysis} We have obtained $O(n/\triangle)$ vertical extensions in $O(n^2/s)$ time using $O(s)$ words of workspace. In this section, we show that these vertical extensions subdivide $P$ into $O(n/\triangle)$ subpolygons of complexity $O(\triangle)$. We call this subdivision the \emph{balanced subdivision} of $P$. For any two points $a, b$ on $\ensuremath{\partial} P$, we use $P[a,b]$ to denote the polygonal chain from $a$ to $b$ (including $a$ and $b$) in clockwise order along $\ensuremath{\partial} P$. We use a few technical lemmas (Lemma~\ref{lem:contain-par} to Lemma~\ref{lem:num-third}) to show that each subpolygon in the final subdivision is incident to $O(1)$ extensions and has complexity of $O(\triangle)$. Then we obtain Theorem~\ref{thm:subdivide} by setting the parameter $\triangle$. \begin{figure} \caption{(a) An extension $a_1a_2$ constructed in the third step separates three consecutive extensions $\ell$, $\ell'$ and $\ell''$. (b) Illustration of the proof of Lemma~\ref{lem:no-consecutive}. (c) Case 2 in the proof of Lemma~\ref{lem:no-consecutive}.} \label{fig:analysis-consecutive} \end{figure} \begin{lemma}\label{lem:contain-par} Let $a_1a_2$ be any extension constructed from a vertex $v$ during any of the three steps such that $P[a_1,a_2]$ contains $v$. Then both $P[a_1,v]$ and $P[v,a_2]$ contain partition vertices. \end{lemma} \begin{proof} If $a_1a_2$ is constructed in the first step, $v$ is a partition vertex and lies on $P[a_1,v]$ and $P[v,a_2]$, and we are done.
If $a_1a_2$ is constructed in the second step, $v$ is an extreme vertex of a polygonal chain which connects two consecutive partition vertices. One of the two partition vertices lies on $P[a_1,v]$ and the other lies on $P[v,a_2]$, thus the claim holds. Now, consider the case that $a_1a_2$ is constructed in the third step. In this case, $a_1a_2$ separates three consecutive extensions $\ell,\ell'$ and $\ell''$ which are constructed in the first or second step of the subdivision. See Figure~\ref{fig:analysis-consecutive}(a). Let $Q$ be the subpolygon of $P$ bounded by the three extensions. Then every connected component of $P\setminus Q$ contains a partition vertex on its boundary contained in $\ensuremath{\partial} P$ because each component is incident to an extension constructed in the first or second step. In Figure~\ref{fig:analysis-consecutive}(a), the component of $P\setminus Q$ incident to $\ell''$ has a partition vertex on its boundary contained in $P[a_1,v]$. Similarly, the component of $P\setminus Q$ incident to $\ell$ has a partition vertex on its boundary contained in $P[v,a_2]$. Thus, both $P[a_1,v]$ and $P[v,a_2]$ contain partition vertices. \end{proof} Let $S$ be a subpolygon in the final subdivision and $\ensuremath{Q}$ be the subpolygon in the subdivision from the second step containing $S$. We again treat the two (upward and downward) vertical extensions defined by one vertex as one vertical extension. We label the extensions lying on $\ensuremath{\partial} S$ as follows. Let $\ell_0$ be the first extension on $\ensuremath{\partial} S$ we encounter while we traverse $\ensuremath{\partial} P$ from $v_0$ in clockwise order. We let $\ell_1,\ell_2,\ldots, \ell_k$ be the extensions appearing on $\ensuremath{\partial} S$ in clockwise order along $\ensuremath{\partial} S$ from $\ell_0$. 
Similarly, we label the extensions lying on $\ensuremath{\partial} \ensuremath{Q}$ from $\ell_0'$ to $\ell_{k'}'$ in clockwise order along $\ensuremath{\partial} \ensuremath{Q}$ such that $\ell_0'$ is the first one we encounter while we traverse $\ensuremath{\partial} P$ from $v_0$ in clockwise order. Then we have the following lemmas. \begin{lemma} \label{lem:no-consecutive} For any $1\leq i<k'$, let $a_1a_2=\ell_i'$ and $b_1b_2=\ell_{i+1}'$ such that $a_1, a_2, b_1$ and $b_2$ appear on $\ensuremath{\partial} P$ (and on $\ensuremath{\partial} \ensuremath{Q}$) in clockwise order. Then $P[a_2,b_1]$ has no extreme vertex. \end{lemma} \begin{proof} Assume to the contrary that for some $1\leq i<k'$, $P[a_2,b_1]$ has an extreme vertex. For an illustration, see Figure~\ref{fig:analysis-consecutive}(b). By definition, no partition vertex lies on $P[a_2,b_1]\setminus\{a_2,b_1\}$. Consider the maximal polygonal chain $\gamma\subset \ensuremath{\partial} P$ containing no partition vertex in its interior and containing $P[a_2,b_1]$. Note that $\gamma\subseteq P[a_1,b_2]$ since both $P[a_1,a_2]$ and $P[b_1,b_2]$ contain partition vertices by Lemma~\ref{lem:contain-par}. Let $v$ be an extreme vertex of $P[a_2,b_1]$. (Recall that $v$ exists by the assumption made in the beginning of the proof.) Without loss of generality, we assume that $P[a_2,b_1]$ lies locally to the right of the extension of $v$. The foot points of $v$ lie on $\ensuremath{\partial} P\setminus \gamma$ while $v$ lies on $\gamma$. Therefore, $\gamma$ has an extreme vertex. (But $v$ is not necessarily an extreme vertex of $\gamma$ by definition.) The extension of $v$ subdivides $P$ into three subpolygons. Let $f$ be the foot point of $v$ incident to the subpolygon containing $\ell_i'$ on its boundary and $f'$ be the other foot point of $v$, as shown in Figure~\ref{fig:analysis-consecutive}(b). Since $1\leq i<k'$, $v_0$ lies on $P[f',f]$, $P[f,a_1]$ or $P[b_2,f']$. 
(Recall that the vertices of $P$ are labeled from $v_0$ to $v_{n-1}$ in clockwise order.) We show that in each case, there is an extreme vertex on $\gamma$ whose extension separates $\ell_i'$ and $\ell_{i+1}'$. Note that these extensions are constructed in the second step, which contradicts the assumption that $\ensuremath{Q}$ contains both $\ell_i'$ and $\ell_{i+1}'$ on its boundary. \begin{itemize} \item \textbf{Case 1.} $v_0$ is in $P[f',f]$: Then $v$ is the \ensuremath{($\textsc{l}$,$\textsc{c}$)}- and \ensuremath{($\textsc{l}$,$\textsc{cc}$)}-extreme vertex of $\gamma$ by definition. The extension of $v$ separates $\ell_i'$ and $\ell_{i+1}'$, which is a contradiction. \item \textbf{Case 2.} $v_0$ is in $P[f,a_1]$: By definition, the foot points of the \ensuremath{($\textsc{l}$,$\textsc{cc}$)}-extreme vertex $u$ of $\gamma$ lie on $P[f,v_0]$. See Figure~\ref{fig:analysis-consecutive}(c). Moreover, $u$ lies on $P[a_2,v]$. Thus, the extension of $u$ separates $\ell_i'$ and $\ell_{i+1}'$, which is a contradiction. \item \textbf{Case 3.} $v_0$ is in $P[b_2,f']$: A contradiction can be shown in a way similar to Case 2. The only difference is that we consider the \ensuremath{($\textsc{l}$,$\textsc{c}$)}-extreme vertex instead of the \ensuremath{($\textsc{l}$,$\textsc{cc}$)}-extreme vertex. \end{itemize} Therefore, $P[a_2,b_1]$ has no extreme vertex. \end{proof} \begin{figure} \caption{Illustration of the proof of Lemma~\ref{lem:one-of}: (a)--(b) the point $x\in\gamma_1$ and its foot point $y$ on $\gamma_2$; (c) the segment $xy$ coincides with $\ell_i$ or $\ell_{i+2}$.} \label{fig:exist} \end{figure} We need a few more technical lemmas, given below, to conclude that the subdivision proposed in the previous section is balanced. \begin{lemma} \label{lem:one-of} For any $1\leq i<k-1$, one of $\ell_i,\ell_{i+1}$ and $\ell_{i+2}$ is constructed in the third step. \end{lemma} \begin{proof} Assume to the contrary that all of $\ell_i,\ell_{i+1}$ and $\ell_{i+2}$ are constructed prior to the third step for some index $1\leq i<k-1$.
Then the three extensions are consecutive along $\ensuremath{\partial} \ensuremath{Q}$ as well since no vertical extension is added to the part of $\ensuremath{\partial} Q$ from $\ell_i$ to $\ell_{i+2}$ in clockwise order in the third step. Let $\gamma_1$ be the part of $\gamma$ lying between $\ell_i$ and $\ell_{i+1}$ excluding the two extensions, and let $\gamma_2$ be the part of $\gamma$ lying between $\ell_{i+1}$ and $\ell_{i+2}$ excluding the two extensions. By Lemma~\ref{lem:no-consecutive}, $\gamma_1$ and $\gamma_2$ have no extreme vertex. Thus, $\gamma_1\cup\ell_{i+1}\cup\gamma_2$ has no extreme vertex. We claim that $v(\gamma)$ exists. Among the points in $\gamma_1$ one of whose foot points is on $\gamma_2$, let $x$ be the point closest to an endpoint of $\ell_i$ along $\gamma_1$. Let $y$ be the foot point of $x$ lying on $\gamma_2$. See Figure~\ref{fig:exist}(a-b). If $xy$ intersects some point in $\ensuremath{\partial} \ensuremath{Q}\setminus \gamma$ (and therefore in $\ensuremath{\partial} S\setminus \gamma$) in its interior, such a point is $v(\gamma)$. Otherwise, $xy$ coincides with $\ell_i$ or $\ell_{i+2}$. See Figure~\ref{fig:exist}(c). This means that $\ell_i$ separates $\ell_{i+1}$ and $\ell_{i+2}$, or $\ell_{i+2}$ separates $\ell_i$ and $\ell_{i+1}$. This contradicts the fact that $\ell_i$, $\ell_{i+1}$ and $\ell_{i+2}$ appear on $\ensuremath{\partial} S$ (and on $\ensuremath{\partial} \ensuremath{Q}$) in this order. Thus, the claim holds. In the third step, we construct the extensions of $v(\gamma)$, which separate $\ell_i,\ell_{i+1}$ and $\ell_{i+2}$. This is a contradiction. \end{proof} \begin{lemma} \label{lem:num-third} $S$ has $O(1)$ extensions constructed in the third step on its boundary. \end{lemma} \begin{proof} Consider an extension $\ell$ incident to $S$ constructed in the third step. Let $v$ be the vertex defining the extension $\ell$.
Recall that the boundary of $\ensuremath{Q}$ consists of the extensions $\ell_0',\ldots,\ell_{k'}'$ and the polygonal chains of $\ensuremath{\partial} P$ connecting the pairs of the extensions in consecutive order. Let $\eta_i$ be the polygonal chain of $\ensuremath{\partial} P$ connecting $\ell_i'$ and $\ell_{i+1}'$, excluding the extensions, for $0\leq i<k'$, and $\eta_{k'}$ be the polygonal chain connecting $\ell_{k'}'$ and $\ell_0'$, excluding the extensions. We claim that $v$ is contained in $\eta_0$ or $\eta_{k'}$. Assume to the contrary that $v$ is contained in $\eta_i$ for $1\leq i<k'$. Then the foot points of $v$ lie outside of $\eta_i$ by the third step of the subdivision. Thus, $\eta_i$ has an extreme vertex, which contradicts Lemma~\ref{lem:no-consecutive}. We also claim that there exist at most two vertices in $\eta_0$ that have both foot points in $\ensuremath{\partial} \ensuremath{Q}\setminus\eta_0$ and an extension incident to $S$. To see this, let $u_1, u_2\in\eta_0$ be such vertices if they exist. Let $h_1$ and $h_2$ be the extensions from $u_1$ and $u_2$, respectively, incident to $S$. Since no foot point of $u_1$ or $u_2$ is in $\eta_0$, one of the two polygonal chains connecting $h_1$ and $h_2$ along $\ensuremath{\partial} S$ (but not containing them in its interior) is contained in $\eta_0$ and the other is disjoint from $\eta_0$. Therefore, no other vertex in $\eta_0$ has both foot points in $\ensuremath{\partial} S\setminus\eta_0$ and an extension incident to $S$. This proves the claim. The same holds for $\eta_{k'}$. Therefore, there are at most four extensions on $\ensuremath{\partial} S$ constructed in the third step: two of them are extensions of vertices of $\eta_0$ and the other two are extensions of vertices of $\eta_{k'}$. Thus the lemma holds. \end{proof} Due to Lemma~\ref{lem:one-of} and Lemma~\ref{lem:num-third}, the following corollary holds.
\begin{corollary} \label{lem:incident-vertical-line} Every subpolygon in the final subdivision has $O(1)$ extensions on its boundary. \end{corollary} \begin{lemma} \label{lem:complexity} Every subpolygon in the final subdivision has complexity of $O(\triangle)$. \end{lemma} \begin{proof} Consider a subpolygon $S$ in the final subdivision. By Corollary~\ref{lem:incident-vertical-line}, the boundary of $S$ consists of $O(1)$ vertical extensions and $O(1)$ polygonal chains from the boundary of $P$ connecting two consecutive endpoints of vertical extensions along $\ensuremath{\partial} S$. Each polygonal chain from the boundary of $P$ contains at most one partition vertex in its interior. Otherwise, a vertical extension intersecting the interior of $S$ is constructed in the first or second step, which contradicts that $S$ is a subpolygon in the final subdivision. The number of vertices between two consecutive partition vertices along $\ensuremath{\partial} S$ is $O(\triangle)$. Therefore, $S$ has $O(\triangle)$ vertices on its boundary. \end{proof} Therefore, we have the following lemma and theorem. \begin{lemma} \label{lem:subdivide-parameter} Given a simple $n$-gon and a parameter $\triangle$ with $\max\{n/s, (s\log n)/n\}\leq \triangle \leq n$, we can compute a set of $O(n/\triangle)$ extensions which subdivides the polygon into $O(n/\triangle)$ subpolygons of complexity $O(\triangle)$ in $O(n^2/s)$ time using $O(s)$ words of workspace. \end{lemma} \begin{theorem} \label{thm:subdivide} Given a simple $n$-gon, we can compute a set of $O(\min\{n/s,s\})$ extensions which subdivides the polygon into $O(\min\{n/s,s\})$ subpolygons of complexity $O(\max\{n/s,s\})$ in $O(n^2/s)$ time using $O(s)$ words of workspace. \end{theorem} \begin{proof} If $s\leq \sqrt{n}$, we set $\triangle$ to $n/s$. In this case, we can subdivide the polygon into $O(s)$ subpolygons of complexity $O(n/s)$ by Lemma~\ref{lem:subdivide-parameter}. If $s> \sqrt{n}$, we set $\triangle$ to $s$. 
Note that $\max\{n/s, (s\log n)/n\}\leq \triangle \leq n$ in both cases. We can subdivide the polygon into $O(n/s)$ subpolygons of complexity $O(s)$. Therefore, the theorem holds. \end{proof} \section{Applications} We first introduce other subdivision methods frequently used for $s$-workspace algorithms and compare them with our balanced subdivision method. Then we will present $s$-workspace algorithms that improve the previously best known results for three problems without increasing the size of the workspace. \label{sec:application} \subsection{Comparison with Other Subdivision Methods} Several subdivision methods have been used for computing the shortest path between two points in the context of time-space trade-offs. Asano et al.~\cite{asano_memory-constrained_2013} presented a subdivision method that subdivides a simple $n$-gon into $O(s)$ subpolygons of complexity $O(n/s)$ using $O(s)$ chords. They showed that the shortest path between any two points in the polygon can be computed in $O(n^2/s)$ time using $O(s)$ words of workspace. However, their algorithm takes $O(n^2)$ time to compute the subdivision, which dominates the overall running time. In fact, in the paper they asked whether a subdivision for computing shortest paths can be computed more efficiently using $O(s)$ words of workspace. Instead of answering this question directly, Har-Peled~\cite{har-peled_shortest_2015} presented a way to subdivide a simple $n$-gon into $O(n/s)$ subpolygons of complexity $O(s)$. The number of segments defining this subdivision can be asymptotically larger than $s$ for $s=\omega(\sqrt{n})$, and therefore the whole subdivision may not fit in the $O(s)$ words of workspace. Instead, they gave a procedure to find the subpolygon of the subdivision containing a query point in $O(n+s\log s\log^4 (n/s))$ expected time without maintaining the subdivision explicitly.
They showed that one can find the shortest path between any two points using this subdivision in a way similar to the algorithm by Asano et al. in $O(n^2/s+(n/s)T(n,s))$ time, where $T(n,s)$ is the time for computing the subpolygon of the subdivision containing a query point. Therefore, the running time is $O(n^2/s+n\log s\log^4 (n/s))$. The balanced subdivision that we propose can replace the subdivision methods in the algorithms by Asano et al. and Har-Peled for computing the shortest path between any two points. Moreover, our subdivision method has two advantages compared to the subdivision methods by Asano et al. and Har-Peled: (1) the subdivision can be computed faster than the one by Asano et al., and (2) we can keep the whole subdivision in the workspace, unlike the one by Har-Peled. By using our balanced subdivision, we can improve the running times of trade-offs that use the computation of the shortest path between two points as a subprocedure. Moreover, we can solve other application problems efficiently using $O(s)$ words of workspace. An example is to compute the shortest path between a query point and a fixed point after preprocessing the input polygon for the fixed point. See Lemma~\ref{lem:path-faster}. \subsection{Time-space Trade-offs Based on the Balanced Subdivision Method} By using our balanced subdivision method, we improve the previously best known running times for the following three problems without increasing the size of the workspace. \paragraph{Computing the shortest path between two points.} Given any two points $p$ and $q$ in $P$, we can report the edges of the shortest path $\pi(p,q)$ in order in $O(n^2/s)$ deterministic time using $O(s)$ words of workspace. This improves the $s$-workspace randomized algorithm by Har-Peled~\cite{har-peled_shortest_2015}, which takes $O(n^2/s + n\log s \log^4(n/s))$ expected time. We can compute the shortest path between two query points using our balanced subdivision as follows.
For $s \geq \sqrt{n}$, we have the subdivision consisting of $O(n/s)$ subpolygons of complexity $O(s)$. Thus we use the algorithm by Har-Peled~\cite{har-peled_shortest_2015}. Har-Peled presented an algorithm that for a given query point $q$ computes the subpolygon of the subdivision containing $q$ in $O(n+s\log s\log^4(n/s))$ expected time~\cite[Lemma 3.2]{har-peled_shortest_2015}. It is shown in the paper that the shortest path between any two points can be computed using the algorithm as stated in the following lemma. \begin{lemma}[Implied by~{\cite[Lemma 4.1 and Theorem 4.3]{har-peled_shortest_2015}}] For a subdivision of a simple polygon consisting of $O(n/s)$ subpolygons, each of complexity $O(s)$, if the subpolygon containing a query point can be computed in $T(n)$ time using $O(s)$ words of workspace, the shortest path between any two points can be computed in $O((n/s)(T(n)+n))$ time using $O(s)$ words of workspace. \end{lemma} In our case, we can find the subpolygon of the balanced subdivision containing a query point in $O(n)$ deterministic time. Combining this result with the lemma, we can compute the shortest path between any two points in $O(n^2/s)$ deterministic time. For $s< \sqrt{n}$, we have the subdivision consisting of $O(s)$ subpolygons of complexity $O(n/s)$. Instead of the algorithm by Har-Peled, we use the algorithm by Asano et al.~\cite{asano_memory-constrained_2013} to compute the shortest path between any two points in the polygon. \begin{theorem} \label{thm:path} Given any two points in a simple polygon with $n$ vertices, we can compute the shortest path between them in $O(n^2/s)$ deterministic time using $O(s)$ words of workspace. \end{theorem} \paragraph{Computing the shortest path tree from a point.} The \emph{shortest path tree} rooted at $p$ is defined to be the union of $\pi(p,v)$ over all vertices $v$ of $P$.
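The two workspace regimes above determine the shape of the balanced subdivision that the algorithms work with; as a minimal numeric sketch (function name ours, constants dropped, assuming $1\le s\le n$):

```python
import math

def subdivision_parameters(n, s):
    """Shape of the balanced subdivision (sketch, constants dropped):
    for s >= sqrt(n), O(n/s) subpolygons of complexity O(s);
    for s < sqrt(n), O(s) subpolygons of complexity O(n/s)."""
    if s >= math.isqrt(n):
        return n // s, s       # (number of subpolygons, complexity of each)
    return s, n // s
```

In either regime the number of subpolygons times the complexity of each is $\Theta(n)$, matching Theorem~\ref{thm:subdivide}.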
Aronov et al.~\cite{aronov_time-space} gave an $s$-workspace randomized algorithm for computing the shortest path tree rooted at a given point. It uses the algorithm by Har-Peled~\cite{har-peled_shortest_2015} as a subprocedure and takes $O((n^2\log n)/s + n\log s\log^5{(n/s)})$ expected time. If one uses Theorem~\ref{thm:path} instead of Har-Peled's algorithm, the running time improves to $O((n^2\log n)/s)$ expected time. In Section~\ref{sec:SPT}, we improve this algorithm even further using properties of our balanced subdivision. \paragraph{Computing a triangulation of a simple polygon.} Aronov et al.~\cite{aronov_time-space} presented an $s$-workspace algorithm for computing a triangulation of a simple $n$-gon. Their algorithm returns the edges of a triangulation without repetition in $O(n^2/s+n\log s\log^5(n/s))$ expected time. It uses the shortest path algorithm by Har-Peled~\cite{har-peled_shortest_2015} as a subprocedure, which takes $O(n^2/s+n\log s\log^4(n/s))$ expected time. By replacing this shortest path algorithm with ours from Theorem~\ref{thm:path}, we can obtain a triangulation of a simple polygon in $O(n^2/s)$ deterministic time using $O(s)$ words of workspace. \begin{theorem} \label{thm:triangulation-polygon} Given a simple polygon with $n$ vertices, we can compute a triangulation of the simple polygon by returning the edges of the triangulation without repetition in $O(n^2/s)$ deterministic time using $O(s)$ words of workspace. \end{theorem} As mentioned by Aronov et al.~\cite{aronov_time-space}, the algorithm can be modified to report the resulting triangles of a triangulation together with their adjacency information in the same time if $s\geq \log n$. \section{Improved Algorithm for Computing the Shortest Path Tree} \label{sec:SPT} In this section, we improve the algorithm for computing the shortest path tree from a given point even further to $O(n^2/s+(n^2\log n)/s^c)$ expected time for an arbitrary positive constant $c$.
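The first term of this bound accounts for the decomposition work repeated over the constant recursion depth, and the second for solving the base problems at depth $c$ with the $O(1)$-workspace algorithm; a rough accounting sketch (function name ours, constants dropped):

```python
import math

def spt_time_sketch(n, s, c):
    """Dominant terms of the O(n^2/s + (n^2 log n)/s^c) bound (sketch)."""
    # O(n^2/s) decomposition work at each of the c recursion depths.
    decompose = c * n * n / s
    # Base problems at depth c: subpolygons of complexity O(n/s^c) with total
    # complexity O(n), each solved in O(|P'|^2 log n) expected time.
    base = (n / s**c) * n * math.log(n)
    return decompose + base
```

Increasing $c$ shrinks the second term geometrically while only multiplying the first by a constant, which is why $c$ can be taken as an arbitrary positive constant.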
We use the following lemma given by Aronov et al.~\cite{aronov_time-space}. \begin{lemma}[{\cite[Lemma~6]{aronov_time-space}}] \label{lem:shortest-path-constant} For any point $p$ in a simple $n$-gon, we can compute the shortest path tree rooted at $p$ in $O(n^2\log n)$ expected time using $O(1)$ words of workspace. \end{lemma} We apply two different algorithms depending on the size of the workspace: $s=O(\sqrt{n})$ or $s=\Omega(\sqrt{n})$. We consider the case of $s=O(\sqrt{n})$ first. For the case of $s=\Omega(\sqrt{n})$, we can store all edges of each subpolygon in the workspace. \subsection{Case of \texorpdfstring{$s = O(\sqrt{n})$}{s=O(sqrt{n})}} \label{sec:small} Given a point $p\in P$, we want to report all edges of the shortest path tree rooted at $p$. Recall that there are $O(s)$ extensions in the balanced subdivision in this case. We call an edge of a path a \emph{w-edge} if it crosses an extension. For every extension $a_1a_2$ of the balanced subdivision, we first compute the w-edges of $\pi(p,a_1)$ and $\pi(p,a_2)$ in $O(n^2/s^2)$ time in Section~\ref{sec:wall-edges}. We show that the total number of these w-edges, over all extensions, is $O(s)$. These w-edges allow us to compute the shortest path $\pi(p,q)$ for any point $q$ of $P$ in $O(n^2/s^2)$ time. Then we decompose $P$ into subpolygons associated with vertices in Section~\ref{sec:all-edges}. For each subpolygon, we compute the shortest path tree rooted at its associated vertex inside the subpolygon recursively. If a subpolygon satisfies one of \textit{the stopping criteria} (to be defined later), we stop the recursion but proceed further to complete the shortest path tree inside the subpolygon if necessary. Because of the space constraint, we restrict the depth of the recurrence to be a constant. \subsubsection{Computing w-edges} \label{sec:wall-edges} We compute all w-edges of the shortest paths between $p$ and the endpoints of the extensions.
The following lemma implies that there are $O(s)$ w-edges of the shortest paths. For any three points $x, y$ and $z$ in $P$, we call a point $x'$ the \emph{junction} of $\pi(x,y)$ and $\pi(x,z)$ if $\pi(x,x')$ is the maximal common path of $\pi(x,y)$ and $\pi(x,z)$. \begin{lemma} \label{lem:wall-edge} For an extension $a_1a_2$, there is at most one w-edge of $\pi(p,a_i)$ for $i=1,2$ which is not a w-edge of $\pi(p,b)$ or $\pi(p,b')$ for any other extension $bb'$ crossed by $\pi(p,a_i)$. \end{lemma} \begin{proof} Let $b_1$ and $b_2$ be the endpoints of the first extension that we encounter during the traversal of $\pi(p,a_1)$ from $a_1$ towards $p$. See Figure~\ref{fig:path-tree}(a). Let $v$ be the junction closer to $a_1$ between the junction of $\pi(p,a_1)$ and $\pi(p,b_1)$ and the junction of $\pi(p,a_1)$ and $\pi(p,b_2)$. Note that $\pi(p,a_1)$ is the concatenation of $\pi(p,v)$ and $\pi(v,a_1)$. The vertices of $\pi(v,a_1)$ other than $v$ lie in the subpolygon incident to $a_1a_2$ and $b_1b_2$. Thus every edge of $\pi(v,a_1)$ not incident to $v$ is contained in this subpolygon, and does not cross any extension. Therefore, the w-edge of $\pi(p,a_1)$ which is not a w-edge of $\pi(p,b)$ or $\pi(p,b')$ for any extension $bb'$ crossed by $\pi(p,a_1)$ is unique: the edge of $\pi(v,a_1)$ incident to $v$. \end{proof} \begin{figure} \caption{}\label{fig:path-tree} \end{figure} We consider the extensions in a specific order and compute such w-edges one by one. To decide the order for considering the extensions, we define a \emph{w-tree} $T$ as follows. Each node $\alpha$ of $T$ corresponds to an extension $d(\alpha)$ of the balanced subdivision of $P$, except for the root. Also, each extension of the balanced subdivision of $P$ corresponds to a node of $T$. The root of $T$ corresponds to $p$ and has children each of which corresponds to an extension incident to the subpolygon containing $p$.
A non-root node $\beta$ of $T$ is the parent of a node $\alpha$ if and only if $d(\beta)$ is the first extension that we encounter during the traversal of $\pi(p,a_1)$ from $a_1$ for an endpoint $a_1$ of $d(\alpha)$. We can compute $T$ in $O(n)$ time. \begin{lemma} The w-tree can be built in $O(n)$ time using $O(s)$ words of workspace. \end{lemma} \begin{proof} We create the root and its children by traversing the boundary of the subpolygon $S_p$ containing $p$. Then for each subpolygon $S$ incident to $S_p$, we traverse its boundary. Let $\alpha$ be the node of the tree corresponding to the extension incident to both $S$ and $S_p$. We create nodes for the extensions incident to $S$ other than $d(\alpha)$ as children of $\alpha$. We repeat this until we visit every extension of the balanced subdivision. In this way, we traverse the boundary of each subpolygon exactly once, so the total running time is $O(n)$. \end{proof} After constructing $T$, we apply depth-first search on $T$. Let $\mathcal{D}$ be an empty set. When we visit a node $\alpha$ of $T$, we compute the w-edges of $\pi(p,a_1)$ and $\pi(p,a_2)$ which are not in $\mathcal{D}$ yet, where $a_1a_2=d(\alpha)$, and insert them in $\mathcal{D}$. Each w-edge in $\mathcal{D}$ has information on the node of $T$ defining it and the subpolygons of the balanced subdivision containing its endpoints. With this information, we can compute the w-edges of $\pi(p,a)$ in order from $a$ in $O(s)$ time for any endpoint $a$ of $d(\alpha)$ and any node $\alpha$ we visited before. Once the traversal is done, $\mathcal{D}$ contains all w-edges in the shortest paths between $p$ and the endpoints of the extensions. We show how to compute the w-edge of $\pi(p,a_1)$ which is not in $\mathcal{D}$ yet. We can compute the w-edge of $\pi(p,a_2)$ not in $\mathcal{D}$ yet analogously. By Lemma~\ref{lem:wall-edge}, there is at most one such edge of $\pi(p,a_1)$.
Moreover, by its proof, it is the edge of $\pi(v,a_1)$ that is incident to $v$. Here, $v$ is the junction closer to $a_1$ between the junction of $\pi(p,b_1)$ and $\pi(p,a_1)$ and the junction of $\pi(p,b_2)$ and $\pi(p,a_1)$, where $b_1b_2$ is the extension corresponding to the parent of $\alpha$. Thus, to compute the w-edge, we first compute the junction of $\pi(p,b_1)$ and $\pi(p,a_1)$ and the junction of $\pi(p,b_2)$ and $\pi(p,a_1)$. \paragraph{Computing junctions.} We show how to compute the junction $v_1$ of $\pi(p,b_1)$ and $\pi(p,a_1)$ in $O(n^2/s^2)$ time for $s=O(\sqrt{n})$. The junction $v_2$ of $\pi(p,b_2)$ and $\pi(p,a_1)$ can be computed analogously in the same time. Then we choose the one between $v_1$ and $v_2$ that is closer to $a_1$. To do this, we compute the set of the w-edges in $\mathcal{D}$ appearing on $\pi(p,b_i)$ in order from $b_i$ in $O(s)$ time for $i=1,2$. We denote the set by $\mathcal{D}(b_i)$. Note that the edges in $\mathcal{D}(b_i)$ are the w-edges of $\pi(p,b_i)$. We find two consecutive edges in $\mathcal{D}(b_1)$ containing $v_1$ between them along $\pi(p,b_1)$ by applying binary search on the edges in $\mathcal{D}(b_1)$. Given any edge $e$ in $\mathcal{D}(b_1)$, we can determine which side of $e$ along $\pi(p,b_1)$ contains $v_1$ in $O(n/s)$ time as follows. We first check whether $e$ is also contained in $\pi(p,b_2)$ in constant time using $\mathcal{D}(b_2)$. If so, $v_1$ is contained in the side of $e$ along $\pi(p,b_1)$ containing $b_1$. Thus we are done. Otherwise, we extend $e$ towards $b_1$ until it escapes from $S$, where $S$ is the subpolygon incident to both $a_1a_2$ and $b_1b_2$. See Figure~\ref{fig:path-tree}(a). Note that the extension crosses $b_1b_2$ since both $\pi(b_1,v_e)$ and $\pi(b_2,v_e)$ are concave for an endpoint $v_e$ of $e$. We can compute the point where the extension escapes from $S$ in $O(n/s)$ time by traversing the boundary of $S$ once.
If an endpoint of the extension lies on the part of $\ensuremath{\partial} S$ between $a_1$ and $b_1$ not containing $a_2$, $v_1$ lies in the side of $e$ containing $p$ along $\pi(p,b_1)$. Otherwise, $v_1$ is contained in the other side of $e$. Therefore, we can find two consecutive w-edges in $\mathcal{D}(b_1)$ containing $v_1$ between them along $\pi(p,b_1)$ in $O((n/s)\log s)$ time since the size of $\mathcal{D}(b_1)$ is $O(s)$. The edges of $\pi(p,b_1)$ lying between the two consecutive w-edges are contained in the same subpolygon. Let $x$ and $y$ be the endpoints of the two consecutive edges of $\mathcal{D}(b_1)$ contained in the same subpolygon. Then we compute the edges of $\pi(x,y)$ one by one from $x$ to $y$ inside the subpolygon containing $x$ and $y$. By Theorem~\ref{thm:path}, we can compute $\pi(x,y)$ in $O(n^2/s^3)$ time since the size of the subpolygon is $O(n/s)$. Here, we use an additional $O(s)$ words of workspace for computing $\pi(x,y)$. When the algorithm in Theorem~\ref{thm:path} reports an edge $f$ of $\pi(x,y)$, we check which side of $f$ along $\pi(x,y)$ contains $v_1$ in $O(n/s)$ time as we did before. We repeat this until we find $v_1$. This takes $O((n/s)^2)$ time since there are $O(n/s)$ edges in $\pi(x,y)$. Therefore, in total, we can compute the junction $v_1$ in $O(s+(n/s)\log s + n^2/s^2)=O(n^2/s^2)$ time since $s=O(\sqrt{n})$. \paragraph{Computing the edge \texorpdfstring{of $\pi(v,a_1)$}{} incident to the junction \texorpdfstring{$v$}{v}.} In the following, we compute the edge of $\pi(v,a_1)$ incident to $v$. We assume that $v$ is the junction of $\pi(p,a_1)$ and $\pi(p,b_1)$. The case that $v$ is the junction of $\pi(p,a_1)$ and $\pi(p,b_2)$ can be handled analogously. Let $e_1$ and $e_2$ be two edges of $\pi(p,b_1)$ incident to $v$, which can be obtained while we compute $v$. See Figure~\ref{fig:path-tree}(b). We extend $e_1$ and $e_2$ towards $b_1$ until they escape from $P$ for the first time.
The two extensions and $a_1a_2$ subdivide $P$ into regions. Consider the region bounded by the two extensions and $a_1a_2$. Note that the region can be represented using $O(1)$ words as the boundary consists of three line segments, one from each of the two extensions and $a_1a_2$, and two (possibly empty) boundary chains of $P$ connecting the segment of $a_1a_2$ to the other segments. The number of polygon vertices on the boundary of the region is $O(n/s)$. Moreover, $\pi(v,a_1)$ is contained in the region. Thus, the edge of $\pi(v,a_1)$ incident to $v$ inside the region is the edge we want to compute. We can compute it in $O(n^2/s^3)$ time by applying Theorem~\ref{thm:path} to this region. In summary, we compute the w-edge of $\pi(p,a_1)$ which has not been computed yet in $O(n^2/s^2)$ time, assuming that we have done this for every node we have visited so far. More specifically, computing the junction of $\pi(p,a_1)$ and $\pi(p,b_i)$ takes $O(n^2/s^2)$ time for $i=1,2$, and computing the edge incident to each junction takes $O(n^2/s^3)$ time. One of the edges is the w-edge that we want to compute. Since the size of the w-tree is $O(s)$, we can do this for every node in $O(n^2/s)$ time in total. Thus we have the following lemma. \begin{lemma} \label{lem:computing-wall-edges} Given a point $p$ in a simple polygon with $n$ vertices, we can compute all w-edges of the shortest paths between $p$ and the endpoints of the extensions in $O(n^2/s)$ time using $O(s)$ words of workspace for $s=O(\sqrt{n})$. \end{lemma} Using the w-edges, we can compute the shortest path $\pi(p,q)$ in $O(n^2/s^2)$ time for any point $q$ in $P$. Note that $n^2/s^2$ is at least $n$ for $s=O(\sqrt{n})$. \begin{lemma} \label{lem:path-faster} Given a fixed point $p$ in $P$ and a parameter $s=O(\sqrt{n})$, we can compute $\pi(p,q)$ in $O(n^2/s^2)$ time for any point $q$ in $P$ using $O(s)$ words of workspace after an $O(n^2/s)$-time preprocessing for $P$ and $p$.
\end{lemma} \begin{proof} As a preprocessing, we compute the balanced subdivision of $P$. Then we compute all w-edges of the shortest paths between $p$ and the endpoints of the extensions in $O(n^2/s)$ time using Lemma~\ref{lem:computing-wall-edges}. To compute $\pi(p,q)$, we first find the subpolygon of the balanced subdivision containing $q$ in $O(n)$ time. The subpolygon is incident to $O(1)$ extensions due to Corollary~\ref{lem:incident-vertical-line}. Consider the nodes in the w-tree corresponding to these extensions. One of the nodes is the parent of the others. We find the extension corresponding to the parent and denote it by $a_1a_2$. This extension is the first one we encounter during the traversal of $\pi(p,q)$ from $q$. Then we compute the w-edge $e$ of $\pi(p,q)$ which is not in $\mathcal{D}$ in $O(n^2/s^2)$ time as we did before, where $\mathcal{D}$ is the set of all w-edges of the shortest paths between $p$ and the endpoints of the extensions. Let $v$ be the endpoint of $e$ closer to $q$. We report the edges of $\pi(q,v)$ from $q$ one by one using the algorithm in Theorem~\ref{thm:path}. Note that $\pi(q,v)$ is contained in a single subpolygon of the balanced subdivision. We can report them in $O(n^2/s^3)$ time since the subpolygon has complexity of $O(n/s)$. Then we report $e$ as an edge of $\pi(p,q)$. The remaining procedure is to report the edges of $\pi(p,v')$, where $v'$ is the endpoint of $e$ other than $v$. Note that $v'$ lies on $\pi(p,a_1)\cup\pi(p,a_2)$. Without loss of generality, we assume that it lies on $\pi(p,a_1)$. We can find all w-edges of $\pi(p,v')$ by computing all w-edges of $\pi(p,a_1)$ in $O(s)$ time. We consider the w-edges one by one from the one closest to $v'$ to the one farthest from $v'$. For two consecutive w-edges $e$ and $e'$ along $\pi(p,v')$, we report the edges of $\pi(p,v')$ lying between $e$ and $e'$.
This takes $O(n^2/s^3)$ time since all such edges are contained in a single subpolygon of complexity $O(n/s)$. Since there are $O(s)$ w-edges, we can report all edges of $\pi(p,q)$ in $O(n^2/s^2)$ time in total. \end{proof} \subsubsection{Decomposing the Shortest Path Tree into Smaller Trees} \label{sec:all-edges} We subdivide $P$ into subpolygons, each associated with one of its vertices, in a way different from the balanced subdivision. Then inside each such subpolygon, we report all edges of the shortest path tree rooted at its associated vertex recursively. We guarantee that the edges reported in this way are the edges of the shortest path tree rooted at $p$. We also guarantee that all edges of the shortest path tree rooted at $p$ are reported. We use a pair $(P',p')$ to denote the problem of reporting the shortest path tree rooted at a point $p'$ inside a simple subpolygon $P'$ of $P$. Initially, we are given the problem $(P, p)$. \paragraph{Structural properties of the decomposition.} The decomposition consists of the following two steps. In the first step, we decompose $P$ into a number of subpolygons by the shortest path $\pi(p,a)$ for every endpoint $a$ of the extensions. The boundary of each subpolygon consists of a polygonal chain of $\ensuremath{\partial} P$ with endpoints $g_1, g_2$ and the shortest paths $\pi(v,g_1)$ and $\pi(v,g_2)$, where $g_1,g_2$ are endpoints of extensions and $v$ is the junction of $\pi(p,g_1)$ and $\pi(p,g_2)$. In the second step, we decompose each subpolygon further into smaller subpolygons by extending the edges of the shortest paths $\pi(v,g_1)$ and $\pi(v,g_2)$ towards $g_1$ and $g_2$, respectively. See Figure~\ref{fig:path-tree-subproblem}. Consider a subpolygon $P'$ in the resulting subdivision. Its boundary consists of a polygonal chain of $\ensuremath{\partial} P$ and two line segments sharing a common endpoint $p'$. We can represent $P'$ using $O(1)$ words. Moreover, $P'$ has complexity of $O(n/s)$.
For any point $q$ in $P'$, $\pi(p,q)$ is the concatenation of $\pi(p,p')$ and $\pi(p',q)$. Therefore, the shortest path tree rooted at $p'$ inside $P'$ coincides with the shortest path tree rooted at $p$ inside $P$ restricted to $P'$. We can obtain the entire shortest path tree rooted at $p$ inside $P$ by computing it on $(P',p')$ for every subpolygon $P'$ in the resulting subdivision and its associated vertex $p'$. We define the orientation of an edge of the shortest path tree using the indices of its endpoints (for example, from the smaller index to the larger index). Note that the endpoints of an edge of the shortest path tree are vertices of $P$ labeled from $v_0$ to $v_{n-1}$. We do not report an edge $e$ of the shortest path tree if $P'$ contains $e$ on its boundary and lies locally to the right of $e$ for a base problem $(P',p')$. Then every edge is reported exactly once. \begin{figure} \caption{}\label{fig:path-tree-subproblem} \end{figure} \paragraph{Computing the subpolygons with their associated vertices.} In the following, we show how to obtain this subdivision. Recall that the boundary of a subpolygon $S$ in the balanced subdivision consists of extensions and polygonal chains from $\ensuremath{\partial} P$. For each maximal polygonal chain $\gamma$ of $\ensuremath{\partial} S$ containing no endpoint of extensions in its interior, we do the following. Let $g_1$ and $g_2$ be the endpoints of $\gamma$. We compute the junction $v$ of $\pi(p,g_1)$ and $\pi(p,g_2)$ in $O(n^2/s^2)$ time as we did in Section~\ref{sec:wall-edges}. Consider the region (subpolygon) of $P$ bounded by $\pi(v,g_1)$, $\pi(v,g_2)$ and $\gamma$. We compute the edges of $\pi(p,g_i)$ lying between $v$ and $g_i$ in order in $O(n^2/s^2)$ time using Lemma~\ref{lem:path-faster} for $i=1,2$. Clearly, these edges are the edges of $\pi(v,g_i)$.
Whenever we compute an edge $e$ of $\pi(v,g_i)$, we check whether the endpoints of $e$ are on $\gamma$ or not, and obtain a subproblem $(P_e,v_e)$ as follows. Let $v_e$ be the endpoint closer to $v$. See Figure~\ref{fig:path-tree-subproblem} for an illustration. \begin{itemize} \item Both endpoints are on $\gamma$: $P_e$ is the subpolygon bounded by $e$ and a part of $\gamma$ connecting the two endpoints of $e$. (See $e_4$ in Figure~\ref{fig:path-tree-subproblem}.) \item Exactly one of the endpoints is on $\gamma$: If $v_e$ is not on $\gamma$, we extend the edge incident to $v_e$ other than $e$ towards $g_i$ until it hits $\gamma$ in $O(n/s)$ time. (See $e_3$ in Figure~\ref{fig:path-tree-subproblem}.) If $v_e$ is on $\gamma$, we extend $e$ towards $g_i$ until it hits $\gamma$ in $O(n/s)$ time. (See $e_1$ in Figure~\ref{fig:path-tree-subproblem}.) Let $P_e$ be the subpolygon bounded by the extension (including $e$) and the part of $\gamma$ connecting the endpoint of $e$ and the endpoint of the extension lying on $\gamma$. \item No endpoint is on $\gamma$: We extend both edges of $\pi(v,g_i)$ incident to $v_e$ towards $g_i$ in $O(n/s)$ time. Let $P_e$ be the subpolygon bounded by the two extensions (including $e$) and the part of $\gamma$ connecting the endpoints of the extensions lying on $\gamma$. (See $e_2$ in Figure~\ref{fig:path-tree-subproblem}.) \end{itemize} Therefore, we can compute the decomposition of the region of $P$ bounded by $\pi(v,g_1)$, $\pi(v,g_2)$ and $\gamma$ in $O(n^2/s^2+nk/s)$ time, where $k$ is the number of edges of $\pi(v,g_1)\cup\pi(v,g_2)$ for the junction $v$ of $\pi(p,g_1)$ and $\pi(p,g_2)$. Since there are $O(s)$ such maximal polygonal chains containing no endpoint of extensions in their interiors and the sum of $k$ over all such maximal polygonal chains is $O(n)$, the running time for decomposing the problem $(P,p)$ into smaller problems is $O(n^2/s)$.
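This cost bound can be checked with a small accounting sketch (function name ours, constants dropped): each maximal chain contributes $n^2/s^2$ for the junction computation plus $nk/s$ for reporting its $k$ path edges, and the path sizes sum to $O(n)$ over the $O(s)$ chains:

```python
def decomposition_time(n, s, path_sizes):
    """Cost of decomposing (P, p), constants dropped: one term n^2/s^2 + n*k/s
    per maximal chain, where k is the size of pi(v,g_1) union pi(v,g_2)."""
    assert len(path_sizes) <= s and sum(path_sizes) <= n
    return sum(n * n / (s * s) + n * k / s for k in path_sizes)
```

With $O(s)$ chains, the first terms sum to $O(n^2/s)$, and with $\sum k=O(n)$, so do the second terms.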
\begin{lemma} We can decompose the problem $(P,p)$ into smaller problems in $O(n^2/s)$ time. \end{lemma} We decompose each problem recursively unless the problem satisfies one of the three stopping criteria in Definition~\ref{def:stopping}. Then we solve each base problem directly, that is, we report the edges of the shortest path tree. For non-base problems, we do not report any edges of the shortest path tree. \begin{definition}[Stopping criteria] \label{def:stopping} There are three stopping criteria for $(P',p')$:\\ (1) $P'$ has $O(s)$ vertices, (2) $s\geq \sqrt{|P'|}$, where $|P'|$ is the complexity of $P'$, and (3) the depth of the recurrence is a positive constant $c$. \end{definition} When stopping criterion (1) holds, we compute the shortest path tree directly using the algorithm by Guibas et al.~\cite{guibas_linear_1987}. When stopping criterion (2) holds, we apply the algorithm described in Section~\ref{sec:large-space-tree}, which computes the shortest path tree rooted at $p'$ inside $P'$ in $O(|P'|^2/s)$ time when $s\geq \sqrt{|P'|}$. When stopping criterion (3) holds, we compute the shortest path tree using Lemma~\ref{lem:shortest-path-constant}. \subsubsection{Analysis of the Recurrence} \label{sec:faster} \paragraph{Time complexity.} Consider the base problems. All base problems induced by stopping criterion (1) can be handled in $O(n)$ time in total because the subpolygons corresponding to them are pairwise interior-disjoint. For the base problems induced by stopping criterion (2), the corresponding subpolygons are also pairwise interior-disjoint. The time for handling these problems is the sum of $O(|P'|^2/s)$ over all such problems $(P',p')$. The running time is $O(n^2/s)$ because $O(\sum |P'|^2/s)=O((n/s)\sum |P'|)=O(n^2/s)$. Now we consider the base problems induced by stopping criterion (3). At depth $c$ of the recurrence, every subpolygon has complexity $O(n/s^c)$.
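The stopping criteria and the per-criterion costs just described can be sanity-checked with a schematic simulation of the recursion. This is a toy cost model, not the geometric algorithm itself: all hidden constants are set to $1$, each decomposition is idealized as a split into $s$ equal-size subproblems, and the function name `simulated_cost` is ours.

```python
import math

def simulated_cost(size, s, depth, c):
    """Toy cost model for the recursive decomposition (all constants set to 1)."""
    if size <= s:                      # stopping criterion (1): O(s) vertices
        return size
    if s >= math.sqrt(size):           # stopping criterion (2): O(|P'|^2/s) time
        return size * size / s
    if depth == c:                     # stopping criterion (3): O(|P'|^2 log |P'|) time
        return size * size * math.log(size)
    # decompose in O(size^2/s) time, then recurse on ~s pieces of size/s each
    return size * size / s + s * simulated_cost(size / s, s, depth + 1, c)

n, s, c = 10**6, 10, 3
total = simulated_cost(n, s, 0, c)
bound = n * n / s + (n * n * math.log(n)) / s**c
print(total <= 3 * bound)  # the O(n^2/s + (n^2 log n)/s^c) bound holds in this model
```

In this model the decomposition work at depth $k$ sums to $n^2/s^{k+1}$, a geometric series dominated by the first term, matching the $O(n^2/s)$ accounting above.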
Moreover, the total complexity of all subpolygons at depth $c$ is $O(n)$. By Lemma~\ref{lem:shortest-path-constant}, the expected time for computing the shortest path trees in all subpolygons is the sum of $O(|P'|^2 \log n)$ over all subpolygons $P'$ at depth $c$. Therefore, we can solve all problems at depth $c$ in $O((n^2\log n)/s^c)$ time because $O(\sum_i |P_i|^2\log n)=O((n/s^c) \sum_i |P_i| \log n)=O((n^2\log n)/s^c)$. We now analyze the running time for decomposing a problem into smaller problems. Consider depth $k$ for $1\leq k < c$. Let $(P_1,p_1),\ldots,(P_t,p_t)$ be the problems at depth $k$. Note that the sum of $|P_i|$ over all indices from $1$ to $t$ is $O(n)$. For each $P_i$, we construct the balanced subdivision of $P_i$ in $O(|P_i|^2/s)$ time, compute the $O(s)$ w-edges of the shortest paths between $p_i$ and the endpoints of the extensions in $O(|P_i|^2/s^2)$ time, and decompose the problem into smaller problems in $O(|P_i|^2/s)$ time. Thus, the decomposition takes $O(\sum_i |P_i|^2/s)=O(n^2/s)$ time for the problems at depth $k$. Since $c$ is a constant, the decomposition over all $c$ depths takes $O(n^2/s)$ time. Therefore, the total running time is $O(n^2/s+(n^2\log n)/s^c)$ for an arbitrary constant $c>0$. \paragraph{Space complexity.} To handle each problem $(P',p')$, we maintain the balanced subdivision of $P'$ using $O(s)$ words of workspace. We keep this balanced subdivision until all subproblems of $(P',p')$, over all depths, have been handled. However, we never keep the subdivisions of two distinct problems at the same depth at the same time. Therefore, the total space complexity is $O(cs)$, which is $O(s)$. \begin{lemma} Given a point $p$ in a simple polygon with $n$ vertices, we can compute the shortest path tree rooted at $p$ in $O(n^2/s+ (n^2\log n)/s^c)$ expected time using $O(s)$ words of workspace for $s=O(\sqrt{n})$, where $c$ is an arbitrary positive constant.
\end{lemma} \subsection{Case of \texorpdfstring{$s=\Omega(\sqrt{n})$}{}} \label{sec:large-space-tree} For the case of $s=\Omega(\sqrt{n})$, the balanced subdivision consists of $O(n/s)$ subpolygons of complexity $O(s)$. The algorithm for this case is similar to the one for the case of $s=O(\sqrt{n})$, except that we do not use Theorem~\ref{thm:path} and Lemma~\ref{lem:shortest-path-constant}. Instead, we use the fact that we can store all edges of each subpolygon in the workspace. As before, we compute all w-edges of the shortest paths between $p$ and the endpoints of the extensions. Using them, we decompose $(P,p)$ into a number of subproblems. In this case, we will see that every subproblem of $(P,p)$ is a base problem due to stopping criterion~(1) in Definition~\ref{def:stopping}. Then we solve each subproblem directly using the algorithm by Guibas et al.~\cite{guibas_linear_1987}. \begin{lemma} \label{lem:w-walls-large} We can compute all w-edges of the shortest paths between $p$ and the endpoints of the extensions in $O(n)$ time. \end{lemma} \begin{proof} As in Section~\ref{sec:wall-edges}, we apply depth-first search on the w-tree and compute the w-edges one by one. When we reach a node $\alpha$ of the w-tree, we compute the w-edges of $\pi(p,a_1)$ and $\pi(p,a_2)$ that have not been computed yet, where $a_1$ and $a_2$ are the endpoints of the extension $d(\alpha)$. We show how to compute the w-edges of $\pi(p,a_1)$ only; the case of $\pi(p,a_2)$ can be handled analogously. By Lemma~\ref{lem:wall-edge}, there is at most one such w-edge of $\pi(p,a_1)$. Moreover, by the proof of that lemma, such an edge lies on $\pi(v,a_1)$ and is incident to $v$, where $v$ is the one closer to $a_1$ among the junction of $\pi(p,b_1)$ and $\pi(p,a_1)$ and the junction of $\pi(p,b_2)$ and $\pi(p,a_1)$, and $b_1b_2$ is the extension corresponding to the parent of $\alpha$. We compute the junction $v_1$ of $\pi(p,b_1)$ and $\pi(p,a_1)$ as follows.
Consider the endpoints of the w-edges of $\pi(p,b_1)$ sorted along $\pi(p,b_1)$ from $p$. We connect them by line segments in this order to form a polygonal chain, which we denote by $\mu_1$. Notice that it might intersect the boundary of $P$. We do the same for $b_2$ and denote the resulting polygonal chain by $\mu_2$. We can compute $\mu_1$ and $\mu_2$ in $O(n/s)=O(s)$ time. Consider the union of the subpolygon of the balanced subdivision incident to both $a_1a_2$ and $b_1b_2$, and the region (funnel) bounded by $\mu_1$, $\mu_2$ and $b_1b_2$. The complexity of this union is $O(s)$. Thus, we can compute the shortest path tree rooted at $p$ restricted to this union using the algorithm by Guibas et al.~\cite{guibas_linear_1987}, which computes the shortest path tree rooted at a given point in time and space linear in the complexity of the input simple polygon. We find the maximal subchain of $\mu_1$ which is a part of $\pi(p,a_1)$. One endpoint of this subchain is $p$; let $v'$ be the other endpoint. We find the two consecutive w-edges $e$ and $e'$ of $\pi(p,b_1)$ containing $v'$ between them along $\pi(p,b_1)$. Then they also contain the junction $v_1$ between them along $\pi(p,b_1)$. Let $x$ and $y$ be the endpoints of $e$ and $e'$, respectively, that are contained in the same subpolygon. We compute the edges of $\pi(x,y)$ one by one using the algorithm by Guibas et al. We compute the extensions of the edges of $\pi(x,y)$ towards the subpolygon containing $a_1a_2$ and $b_1b_2$ on its boundary in $O(s)$ time. Then we can decide which vertex of $\pi(x,y)$ is the junction $v_1$. This takes $O(s)$ time in total. Moreover, while computing $v_1$, we can obtain the edge of $\pi(v_1,a_1)$ incident to $v_1$. Thus, we can obtain the w-edge of $\pi(p,a_1)$ that has not been computed yet in $O(s)$ time.
Since there are $O(n/s)$ nodes in the w-tree, we can compute all w-edges of the shortest paths between $p$ and the endpoints of the extensions in $O(n)$ time in total. \end{proof} We decompose the problem $(P,p)$ into smaller problems in $O(n^2/s)$ time in a way similar to the one in Section~\ref{sec:all-edges}. \begin{lemma} We can decompose the problem $(P,p)$ into the smaller problems defined in Section~\ref{sec:all-edges} in $O(n^2/s)$ time. \end{lemma} \begin{proof} Recall that the boundary of a subpolygon $S$ in the balanced subdivision consists of extensions and polygonal chains from $\ensuremath{\partial} P$. For each maximal polygonal chain $\eta$ of $\ensuremath{\partial} S$ containing no endpoint of extensions in its interior, we do the following. Let $g_1$ and $g_2$ be the endpoints of $\eta$. We compute the junction $v$ of $\pi(p,g_1)$ and $\pi(p,g_2)$ in $O(s)$ time as we showed in the proof of Lemma~\ref{lem:w-walls-large}. Then we compute the w-edges of $\pi(v,g_1)$ and $\pi(v,g_2)$ in $O(n/s)=O(s)$ time. We next compute the first point hit by the extension (ray) of each w-edge towards $\eta$. To do this, we connect the w-edges by line segments to form a polygonal chain $\mu$ as we did in the proof of Lemma~\ref{lem:w-walls-large}. We compute the union of $\mu$ and the part of $\eta$ connecting the two endpoints of $\mu$, which is a simple polygon. Then we apply the shortest path tree algorithm by Guibas et al.~\cite{guibas_linear_1987}. This takes $O(s)$ time since there are $O(s)$ such w-edges and $\eta$ has complexity $O(s)$. For the edges of $\pi(v,g_1)$ and $\pi(v,g_2)$ lying between two consecutive w-edges, we observe that they are contained in the same subpolygon of the balanced subdivision. Thus, we can compute such edges and extend them towards $\eta$ by applying the algorithm by Guibas et al. For a pair of consecutive w-edges, we can do this in $O(s)$ time.
Since there are $O(n/s)$ such pairs, this takes $O(n)$ time for each maximal polygonal chain $\eta$. There are $O(n/s)$ maximal polygonal chains $\eta$, and thus the total running time is $O(n^2/s)$. \end{proof} Note that the boundary of each subpolygon $P'$ consists of at most two line segments and a part of $\ensuremath{\partial} P$ containing no endpoint of extensions in its interior. Thus, the complexity of $P'$ is $O(s)$. This means that all subproblems of $(P,p)$ are base problems due to stopping criterion (1) in Definition~\ref{def:stopping}. We can solve all base problems in $O(n)$ time in total. Therefore, we can compute the shortest path tree in $O(n^2/s)$ deterministic time in total. \begin{lemma} Given a point $p$ in a simple polygon with $n$ vertices, we can compute the shortest path tree rooted at $p$ in $O(n^2/s)$ deterministic time using $O(s)$ words of workspace for $s = \Omega(\sqrt{n})$. \end{lemma} Combining the algorithm for the case of $s=O(\sqrt{n})$ in Section~\ref{sec:small} with the lemma above, we have the following theorem. \begin{theorem}\label{thm} Given a point $p$ in a simple polygon with $n$ vertices, we can compute the shortest path tree rooted at $p$ in $O(n^2/s+(n^2\log n)/s^c)$ expected time using $O(s)$ words of workspace for an arbitrary positive constant $c$. \end{theorem} Here, the size of the workspace is $O(cs)$. Thus, by exchanging the roles of $c$ and $s$, we obtain another $s$-workspace algorithm. Specifically, by setting $c$ to the size of the workspace and $s$ to $2$, we have the following theorem. \begin{theorem} Given a point $p$ in a simple polygon with $n$ vertices, we can compute the shortest path tree rooted at $p$ in $O((n^2\log n)/2^s)$ expected time using $O(s)$ words of workspace for $s\leq \log\log n$. \end{theorem} \section{Conclusion} We present an $s$-workspace algorithm for computing a balanced subdivision of a simple polygon consisting of $O(\min\{n/s,s\})$ subpolygons of complexity $O(\max\{n/s,s\})$.
This subdivision can be computed more efficiently than the other subdivisions suggested in the context of time-space trade-offs, and therefore it can be used to solve several fundamental problems in a simple polygon more efficiently. Since our subdivision method keeps all extensions of the balanced subdivision in the workspace, it has further applications, including answering single-source shortest path queries. We also believe that, by combining the ideas of Guibas and Hershberger~\cite{guibas_optimal_1989} with our subdivision method, one can preprocess a simple polygon and maintain a data structure of size $O(s)$ so that for any two points $x$ and $y$ in the polygon, $\pi(x,y)$ can be computed in $o(n^2/s)$ time using $O(s)$ words of workspace. We leave this as future work. \end{document}
\begin{document} \title{A class of null space conditions for sparse recovery via nonconvex, non-separable minimizations} \begin{abstract} For the problem of sparse recovery, it is widely accepted that nonconvex minimizations are better than the $\ell_1$ penalty at enhancing the sparsity of solutions. However, to date, the theory verifying that nonconvex penalties outperform (or are at least as good as) $\ell_1$ minimization in exact, uniform recovery has mostly been limited to separable cases. In this paper, we establish general recovery guarantees through null space conditions for {nonconvex, non-separable} regularizations, which are slightly less demanding than the standard null space property for $\ell_1$ minimization. \end{abstract} \begin{keywords} nonconvex optimization, null space property, sparse recovery, majorization theory \end{keywords} \begin{AMS} 94A12, 94A15, 90C26 \end{AMS} \section{Introduction} \label{sec:intro} This paper is concerned with the reconstruction of a sparse signal $\ensuremath{\bm{x}} \in \mathbb{R}^N$ from relatively few observed data $\ensuremath{\bm{y}} \in \mathbb{R}^m$. More precisely, we recover the unknown vector $\ensuremath{\bm{x}} \in \mathbb{R}^N$ from the system \begin{align} \label{problem:und_system} \ensuremath{\bm{A}} \ensuremath{\bm{z}} = \ensuremath{\bm{y}}, \end{align} given the matrix $\ensuremath{\bm{A}} \in \mathbb{R}^{m\times N}$ $(m\ll N)$, and using linear measurements $\ensuremath{\bm{y}} = \ensuremath{\bm{A}} \ensuremath{\bm{x}}$. In general, the system \eqref{problem:und_system} is underdetermined and has infinitely many solutions.
With the additional knowledge that the unknown signal is sparse, which is the case in several contexts such as compressed sensing \cite{CRT06,Donoho06}, statistics \cite{HastieTibshiraniWainwright15}, and uncertainty quantification \cite{DO11,RS14,DexterTranWebster17}, we search only for the sparsest solution of \eqref{problem:und_system}. It is natural to reconstruct $\ensuremath{\bm{x}}$ via the $\ell_0$ minimization problem \begin{align} \label{problem:l0} \min_{\ensuremath{\bm{z}} \in \mathbb{R}^N} \|\ensuremath{\bm{z}}\|_{0}, \text{ subject to } \ensuremath{\bm{A}} \ensuremath{\bm{z}} = \ensuremath{\bm{y}}, \end{align} where $\|\ensuremath{\bm{z}}\|_0$ is the number of nonzero components of $\ensuremath{\bm{z}}$. However, since the locations of the nonzero components are not available, solving \eqref{problem:l0} directly requires a combinatorial search and is unrealistic in general. An alternative and popular approach is basis pursuit or $\ell_1$ minimization, which consists in finding the minimizer of the problem \begin{align} \label{problem:l1} \min_{\ensuremath{\bm{z}} \in \mathbb{R}^N} \|\ensuremath{\bm{z}}\|_{1}, \text{ subject to } \ensuremath{\bm{A}} \ensuremath{\bm{z}} = \ensuremath{\bm{y}}. \end{align} The convex optimization problem \eqref{problem:l1} is an efficient relaxation of \eqref{problem:l0} and often produces sparse solutions. The sparse recovery property of $\ell_1$ minimization is well developed. It is known from the compressed sensing literature that if $\ensuremath{\bm{A}}$ possesses certain properties, such as the null space property and the restricted isometry property, problem \eqref{problem:l0} and its convex relaxation \eqref{problem:l1} are equivalent \cite{FouRau13}. Although the $\ell_1$ minimization technique has been used in a wide variety of problems, it is not able to reconstruct the sparsest solutions in many applications.
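As a toy illustration of how $\ell_1$ minimization promotes sparsity, consider a system whose kernel is one-dimensional: every feasible vector then has the form $\ensuremath{\bm{x}} + t\ensuremath{\bm{v}}$, so basis pursuit reduces to minimizing the convex, piecewise-linear function $t \mapsto \|\ensuremath{\bm{x}} + t\ensuremath{\bm{v}}\|_1$, whose minimum is attained at a breakpoint $t = -x_i/v_i$. The sketch below uses a made-up $2\times 3$ matrix and signal; the helper names are ours.

```python
# Toy illustration: when ker(A) is one-dimensional, basis pursuit reduces to
# a line search over t -> ||x + t*v||_1, minimized at a breakpoint t = -x_i/v_i.

def l1(z):
    return sum(abs(c) for c in z)

def basis_pursuit_1d_kernel(x, v):
    """Minimize ||x + t*v||_1 over t, assuming v spans ker(A)."""
    breakpoints = [-xi / vi for xi, vi in zip(x, v) if vi != 0]
    t_best = min(breakpoints,
                 key=lambda t: l1([xi + t * vi for xi, vi in zip(x, v)]))
    return [xi + t_best * vi for xi, vi in zip(x, v)]

# Made-up data: A = [[1, 1, 0], [0, 1, 1]], y = A x, ker(A) spanned by v.
x = [0.0, 1.0, 0.0]     # 1-sparse signal to be recovered
v = [1.0, -1.0, 1.0]    # basis of ker(A)
z = basis_pursuit_1d_kernel(x, v)
print(z)  # the l1 minimizer coincides with the sparse signal x here
```

Any other feasible point, e.g. $[1,0,1]$ at $t=1$, has strictly larger $\ell_1$ norm, which is exactly the mechanism the null space property formalizes.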
As such, several nonconvex regularizations have been applied to improve the recovery performance. These penalties are generally closer to the $\ell_0$ penalty than the $\ell_1$ norm, so it is reasonable to expect that nonconvex minimizations can further enhance the sparsity of the solutions. The most commonly used nonconvex penalty is probably $\ell_p$ with $0 < p < 1$ \cite{Chartrand07,FoucartLai09}, which interpolates between $\ell_0$ and $\ell_1$. Other well-known nonconvex methods in the literature include the Smoothly Clipped Absolute Deviation (SCAD) \cite{FanLi01}, capped $\ell_1$ \cite{NIPS2008_3526,ShenPanZhu12} and transformed $\ell_1$ \cite{LvFan09,ZhangXin16}. All of these penalties are separable, in the sense that they are the sum of penalty functions applied to the individual components of the vector. Recently, many non-separable regularizations have also been considered, for instance, iterative support detection \cite{WangYin10}, $\ell_1 - \ell_2$ \cite{EsserLouXin13, YinLouHeXin15,YanShinXiu16}, two-level $\ell_1$ \cite{HuangLiuShi-et-al15} and sorted $\ell_1$ \cite{HuangShiYan15}. While nonconvex penalties are generally more challenging to minimize, they have been shown to reconstruct the sparse target signals from significantly fewer measurements in many computational test problems. Yet, to date, the theoretical evidence that nonconvex penalties are superior to (or at least not worse than) $\ell_1$ optimization in uniform reconstruction of sparse signals has not been fully developed. For some regularizations, such as $\ell_p$ ($0<p<1$) \cite{FoucartLai09} and iterative support detection \cite{WangYin10}, recovery guarantees are available, established via variants of the restricted isometry property (RIP) and proved to be less restrictive than those for $\ell_1$. However, such guarantees remain elusive or suboptimal for many others, especially non-separable penalties.
Let us elucidate this point by examining the null space properties recently developed for $\ell_1 - \ell_2$ \cite{YinLouHeXin15,YanShinXiu16} and two-level $\ell_1$ \cite{HuangLiuShi-et-al15}. For any $\bm{v} = (v_1,\ldots,v_N)\in \mathbb{R}^N$ and $S\subset \{1,\ldots,N\}$, denote by $\ensuremath{\overline{S}}$ the complement of $S$ in $\{1,\ldots,N\}$ and by $\#(S)$ the cardinality of $S$; then the null space property of the measurement matrix $\ensuremath{\bm{A}}$, introduced in \cite{CohenDahmenDeVore09}, can be stated as follows. \begin{definition}[Null space property] For the matrix $\ensuremath{\bm{A}} \in \mathbb{R}^{m\times N}$, the null space property is given by: \begin{gather} \label{NSP:l1} \tag{\fontfamily{cmss}\selectfont NSP} \begin{aligned} \ker(\ensuremath{\bm{A}})\!\setminus\!\{\mathbf{0}\}\subset \bigg\{ \ensuremath{\bm{v}}\! \in \! \mathbb{R}^N: \|\ensuremath{\bm{v}}_S\|_1 < \|\ensuremath{\bm{v}}_{\overline{S}}\|_1,\, \forall S \!\subset \!\{1,\ldots,N\}\text{ with }\#(S)\le s\bigg\}. \end{aligned} \end{gather} \end{definition} It is well known that \eqref{NSP:l1} is the necessary and sufficient condition for successful reconstruction using $\ell_1$ minimization \cite{CohenDahmenDeVore09,FouRau13}; thus, stronger recovery properties, which are desirable and expected for nonconvex minimizations, would essentially allow $\ker(\ensuremath{\bm{A}})$ to be contained in a larger set than that in \eqref{NSP:l1}. The arguments in \cite{YinLouHeXin15,YanShinXiu16,HuangLiuShi-et-al15} are, however, at odds with this observation. Therein, RIPs were developed so that \begin{align*} & \ker(\bm{A})\! \setminus\! \{\mathbf{0}\} \subset \bigg\{ \bm{v} \in \mathbb{R}^N\!:\! \|\bm{v}_S\|_1 < \|\bm{v}_{H\cap {\ensuremath{\overline{S}}}}\|_1,\, \forall S\!: \#(S)\le s,\, \forall H\!:\#(H) = \lfloor N/2 \rfloor\!
\bigg\}, \\ & \text{and} \ \ker(\ensuremath{\bm{A}})\setminus\{\mathbf{0}\}\subset \bigg\{ \bm{v} \in \mathbb{R}^N\!:\! \|\bm{v}_S\|_1 + \|\bm{v}_S\|_2 + \|\bm{v}_{\ensuremath{\overline{S}}}\|_2 < \|\bm{v}_{{\ensuremath{\overline{S}}}}\|_1,\, \forall S\!: \#(S)\le s\! \bigg\}, \end{align*} respectively for the two-level $\ell_1$ and $\ell_1 - \ell_2$ penalties. Since $\ker({\ensuremath{\bm{A}}})$ was then restricted to strictly smaller sets than in the $\ell_1$ case, the resulting RIPs were inevitably more demanding. In this paper, we establish new uniform recovery guarantees for a general class of nonconvex, possibly non-separable minimizations, showing that they are superior to, or at least as good as, $\ell_1$. More specifically, we consider the nonconvex optimization problem in general form \begin{align} \label{problem:PR} \tag{${\text{\fontfamily{cmss}\selectfont P}}_{\text{\fontfamily{cmss}\selectfont R}}$} \underset{\bm{z}\in \mathbb{R}^N}{\text{minimize}} \ \ R(\bm{z}) \text{ subject to }\bm{A}\bm{z} = \bm{A}\bm{x}, \end{align} and, under some mild assumptions on $R$ (applicable to most nonconvex penalties considered in the literature), derive null space conditions for exact reconstruction via \eqref{problem:PR} that are less demanding than, or identical to, the standard conditions required by the $\ell_1$ norm. Our main achievement in this paper is the following: for regularizations that are concave, non-separable and symmetric\footnote{The precise definitions of \textit{separable}, \textit{concave} and \textit{symmetric} are presented in Section \ref{sec:background}.} (such as $\ell_1 - \ell_2$, two-level $\ell_1$ and sorted $\ell_1$), \eqref{NSP:l1} is sufficient for the exact recovery of all $s$-sparse signals.
Furthermore, in many cases, an improved variant of \eqref{NSP:l1} is enough, i.e., \begin{gather} \label{iNSP:nonsep} \tag{\fontfamily{cmss}\selectfont iNSP} \begin{aligned} \ker(\ensuremath{\bm{A}})\setminus\{\mathbf{0}\}\subset \bigg\{ \ensuremath{\bm{v}} \in \mathbb{R}^N: \|\ensuremath{\bm{v}}_S\|_1 \le \|\ensuremath{\bm{v}}_{\overline{S}}\|_1,\, \forall S\mbox{ with }\#(S)\le s\bigg\}. \end{aligned} \end{gather} One distinct aspect of these results, as we shall see, is that they do \textit{not} have a fixed-support version. As such, our analysis requires technical arguments beyond the standard approach of first deriving and then combining null space properties for the recovery of vectors supported on fixed sets of the same cardinality. To compare with the better-known case where separability is assumed, we revisit the necessary and sufficient condition for exact recovery with concave and separable regularizations. The generalized null space property \begin{gather} \label{iNSP:sep} \tag{\fontfamily{cmss}\selectfont gNSP} \begin{aligned} \ker(\ensuremath{\bm{A}})\setminus\{\mathbf{0}\}\subset \bigg\{ \ensuremath{\bm{v}} \in \mathbb{R}^N: R(\ensuremath{\bm{v}}_S) < R(\ensuremath{\bm{v}}_{\overline{S}}),\, \forall S\mbox{ with }\#(S)\le s\bigg\} \end{aligned} \end{gather} can be established from fixed-support null space conditions, and is therefore a routine extension of several similar results for specific penalties \cite{FoucartLai09, FouRau13}. This property can also be found in \cite{GribonvalNielsen07}. However, \eqref{iNSP:sep} is not automatically less restrictive than the standard \eqref{NSP:l1}, and we verify that this is the case only if the regularization is also symmetric. Penalties that can be treated in this setting include $\ell_p$, SCAD, transformed $\ell_1$ and capped $\ell_1$.
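For intuition, \eqref{NSP:l1} can be checked mechanically in the special case where $\ker(\ensuremath{\bm{A}})$ is spanned by a single vector $\ensuremath{\bm{v}}$: by homogeneity of the $\ell_1$ norm it suffices to test $\ensuremath{\bm{v}}$ itself, and among all $S$ with $\#(S)\le s$ the quantity $\|\ensuremath{\bm{v}}_S\|_1$ is maximized by the $s$ largest magnitudes. The helper below is our own minimal sketch (for higher-dimensional kernels, every nonzero kernel vector must be checked).

```python
def nsp_holds_1d(v, s):
    """Check the l1 null space property of order s when ker(A) = span{v}.

    By homogeneity it suffices to test v itself, and among all supports S
    with #(S) <= s, ||v_S||_1 is maximized by the s largest magnitudes,
    so a single comparison settles every admissible S at once.
    """
    mags = sorted((abs(c) for c in v), reverse=True)
    return sum(mags[:s]) < sum(mags[s:])

print(nsp_holds_1d([1, -1, 1], 1))  # 1 < 2: all 1-sparse vectors are recovered
print(nsp_holds_1d([3, 1, 1], 1))   # 3 < 2 fails: NSP of order 1 does not hold
```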
\subsection{Related works} This paper examines the recovery of sparse signals by virtue of \textit{global minimizers} of nonconvex problems. These are the best solutions one can acquire via the considered nonconvex regularizations, regardless of the numerical procedures used to realize them. Therefore, our results serve as a benchmark for the performance of concrete algorithms. Often in practice, one can only obtain \textit{local minimizers} of nonconvex problems. The theoretical recovery properties via local minimizers, attached to specific numerical schemes, have also gained considerable attention in the literature. In \cite{NIPS2008_3526, ZhangT10}, a multi-stage convex relaxation scheme was developed for solving problems with nonconvex objective functions, with a focus on capped $\ell_1$. A theoretical error bound was established for fixed designs, showing that the local minimum solution obtained by this procedure is superior to the global solution of the standard $\ell_1$. In \cite{YinXin16}, $\ell_{1} -\ell_{2}$ minimization was proved to be no worse than $\ell_1$, based on a difference of convex functions algorithm (DCA), which is an iterative procedure that returns the $\ell_1$ solution in its first step. Generalized conditions for nonconvex penalties were also established in \cite{FanLi01,LvFan09}, under which three desirable properties of the regularizations (unbiasedness, sparsity, and continuity) are fulfilled. Therein, the properties of the local minimizers and the sufficient conditions for a vector to be a local minimizer were analyzed with a unified approach, and specified for SCAD and transformed $\ell_1$ (referred to as SICA in \cite{LvFan09}). We remark that this framework applies to separable penalties only. Finally, sorted $\ell_1$ is a nonconvex method recently introduced in \cite{HuangShiYan15}.
This method generalizes several nonconvex approaches, including iterative hard thresholding, two-level $\ell_1$, truncated $\ell_1$, and small magnitude penalized (SMAP). \subsection{Organization} The remainder of this paper is organized as follows. In Section \ref{sec:background}, we describe the general nonconvex, non-separable minimization problem, give some examples, and provide the necessary background results. In Section \ref{sec:nonseparable}, we prove our main results on the null space properties for exact reconstruction via non-separable penalties. The recovery conditions for separable penalties are discussed in Section \ref{sec:separable}. Finally, concluding remarks are given in Section \ref{sec:conclusion}. \section{Nonconvex, non-separable minimization problem} \label{sec:background} Throughout this paper, we denote $\mathcal{U}= [0,\infty)^N$, and for $\bm{z} = (z_1,\ldots,z_N)\in \mathbb{R}^N$, we let $|\ensuremath{\bm{z}}| = (|z_1|,\ldots,|z_N|)$, with $ |z|_{[1]} \ge \ldots \ge |z|_{[N]} $ the components of $|\ensuremath{\bm{z}}|$ in decreasing order. If $\ensuremath{\bm{z}}$ has nonnegative components, i.e., $\ensuremath{\bm{z}} \in \ensuremath{\mathcal{U}}$, we simply write $$ z_{[1]} \ge \ldots \ge z_{[N]}. $$ We call a vector $\ensuremath{\bm{z}} \in \mathbb{R}^N$ \textit{equal-height} if all nonzero coordinates of $\ensuremath{\bm{z}}$ have the same magnitude, and \textit{$s$-sparse} if it has at most $s$ nonzero coefficients. Also, $\ensuremath{\bm{e}}_j$ is the standard basis vector with a $1$ in the $j$-th coordinate and $0$'s elsewhere. The $j$-th coordinate of a vector $\ensuremath{\bm{z}}\in \mathbb{R}^N$ is usually denoted simply by $z_j$, but in some places we also use the notation $(\ensuremath{\bm{z}})_j$.
Recall that the nonconvex optimization problem of interest is given by \begin{align} \label{problem:PR_copy} \tag{${\text{\fontfamily{cmss}\selectfont P}}_{\text{\fontfamily{cmss}\selectfont R}}$} \underset{\bm{z}\in \mathbb{R}^N}{\text{minimize}} \ \ R(\bm{z}) \text{ subject to }\bm{A}\bm{z} = \bm{A}\bm{x}. \end{align} We define the following theoretical properties of the penalty $R$ described in \eqref{problem:PR_copy}. \begin{definition} \label{def:penalty_prop} Let $R$ be a mapping from $\mathbb{R}^N$ to $[0,\infty)$ satisfying $R(z_1,\ldots,z_N)$ $= R(|z_1|,\ldots,|z_N|)$, for all $\bm{z} = (z_1,\ldots,z_N)\in\mathbb{R}^N$. \begin{itemize} \item $R$ is called \textbf{separable} on $\mathbb{R}^N$ if there exist functions $r_j: \mathbb{R} \to [0,\infty),\, j\in \{1,\ldots, N\}$, such that for every $\ensuremath{\bm{z}} \in \mathbb{R}^N$, $R$ can be represented as \begin{align} \label{def:separable} R(\ensuremath{\bm{z}}) = \sum_{j=1}^N r_j(z_j). \end{align} If $R$ cannot be written in the form \eqref{def:separable}, we say $R$ is \textbf{non-separable} on $\mathbb{R}^N$. \item $R$ is called \textbf{symmetric} on $\mathbb{R}^N$ if for every $\bm{z} \in \mathbb{R}^N$ and every permutation $(\pi(1),\ldots,\pi(N))$ of $(1,\ldots,N)$: \begin{align} \label{def:symmetric} R(z_{\pi(1)},\ldots,z_{\pi(N)}) = R(\bm{z}). \end{align} \item $R$ is called \textbf{concave} on $\mathcal{U}$ if for every $\bm{z},\bm{z}' \in \mathcal{U}$ and $0\le \lambda \le 1$: \begin{align} \label{def:concave} R(\lambda \bm{z} + (1-\lambda )\bm{z}') \ge \lambda R(\bm{z}) + (1-\lambda) R(\bm{z}'). \end{align} \item $R$ is called \textbf{increasing} on $\mathcal{U}$ if for every $\bm{z},\bm{z}'\in \mathcal{U}$ with $\bm{z}\ge \bm{z}'$: \begin{align} \label{def:increasing} R(\bm{z}) \ge R(\bm{z}'). \end{align} Here, $\bm{z}\ge \bm{z}'$ means $z_j \ge z'_j,\, \forall 1\le j\le N$.
\end{itemize} \end{definition} The theory for uniform recovery developed herein is applicable to all concave, non-separable and symmetric penalties. In particular, our present conditions unify and improve the existing conditions for $\ell_1 - \ell_2$ and two-level $\ell_1$. On the other hand, we provide, for the first time, the theoretical requirements for the uniform recovery of sorted $\ell_1$. \begin{example}[Concave, non-separable and symmetric penalties] \label{example:ncv_func} \begin{enumerate} \itemsep10pt \item $\ell_1 - \ell_2$: \qquad \qquad \qquad \quad \, \ \ $R_{\ell_1 - \ell_2}(\bm{z}) = \|\bm{z}\|_1 - \|\bm{z}\|_2$, {\cite{EsserLouXin13, YinLouHeXin15}}. \item Two-level $\ell_1$: \qquad \qquad \quad \quad $R_{2\ell_1}(\bm{z}) = \rho \sum\limits_{j\in J(\bm{z})} |z_j| +\! \sum\limits_{j\in J(\bm{z})^c} |z_j|$, {\cite{HuangLiuShi-et-al15}}. \\ Here, $0\le \rho < 1$ and $J({\bm{z}})$ is the index set of the largest components of $|\ensuremath{\bm{z}}|$. \item Sorted $\ell_1$: \qquad \qquad \qquad \quad \ {$R_{s\ell_1}(\bm{z}) = \beta_1|{z}|_{[1]} + \ldots + \beta_N|{z}|_{[N]}$, \cite{HuangShiYan15}}. Here, $0\le \beta_1\le \ldots \le \beta_N$ and $|{z}|_{[1]} \ge \ldots \ge |{z}|_{[N]}$ are the components of $|\ensuremath{\bm{z}}|$ ranked in decreasing order. \end{enumerate} \end{example} We remark that our analysis does not cover non-concave or non-symmetric regularizations. For such penalties, unlike their concave and symmetric counterparts, \eqref{NSP:l1} may not be sufficient to guarantee uniform sparse reconstruction in general. Therefore, non-concave or non-symmetric regularizations are not necessarily better than $\ell_1$ minimization in exact, uniform recovery, in the sense that for some sampling matrices, $\ell_1$ can successfully recover all $s$-sparse vectors while a non-concave or non-symmetric penalty fails to do so.
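For concreteness, the three penalties of Example \ref{example:ncv_func} can be evaluated directly. In the sketch below, the two-level $\ell_1$ takes the number $k$ of large components as an explicit parameter, a simplifying assumption about $J(\bm{z})$; the function names are ours.

```python
from math import sqrt

def l1_minus_l2(z):
    # R(z) = ||z||_1 - ||z||_2
    return sum(abs(c) for c in z) - sqrt(sum(c * c for c in z))

def two_level_l1(z, rho, k):
    # J(z): indices of the k largest magnitudes (k passed explicitly here)
    mags = sorted((abs(c) for c in z), reverse=True)
    return rho * sum(mags[:k]) + sum(mags[k:])

def sorted_l1(z, beta):
    # weights beta_1 <= ... <= beta_N paired with |z|_[1] >= ... >= |z|_[N]
    mags = sorted((abs(c) for c in z), reverse=True)
    return sum(b * m for b, m in zip(beta, mags))

z = [3.0, 0.0, -4.0]
print(l1_minus_l2(z))           # 7 - 5 = 2.0
print(two_level_l1(z, 0.5, 1))  # 0.5*4 + 3 = 5.0
print(sorted_l1(z, [0, 1, 2]))  # 0*4 + 1*3 + 2*0 = 3.0
```

Note that all three values depend only on the sorted magnitudes of $\ensuremath{\bm{z}}$, which is exactly the symmetry property of Definition \ref{def:penalty_prop}.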
A well-known example of a non-concave penalty that is less efficient than $\ell_1$ is $\ell_p$ with $p>1$. Another interesting example is $\ell_1/\ell_2$ \cite{EsserLouXin14,YinEsserXin14}, a non-concave \textit{and} non-convex regularization. Whether there exists a null space property less restrictive than \eqref{NSP:l1} for this penalty is an open question. Non-symmetric regularizations, on the other hand, do not always recover the sparsest vectors due to their preference for some components over others. An example of a non-symmetric penalty is given in Section \ref{sec:separable}. \subsection{Properties of penalty functions} Next, we present a few necessary supporting results for the penalty functions of interest. These results are relatively well known, so they are provided here without proofs. \begin{lemma} \label{lem:incr_prop} Let $R$ be a map from $\mathbb{R}^N$ to $[0,\infty)$. If $R$ is concave on $\ensuremath{\mathcal{U}}$, then $R$ is increasing on $\ensuremath{\mathcal{U}}$. \end{lemma} Note that if $R:\mathbb{R}^N \to [0,\infty)$ satisfies $R(z_1,\ldots,z_N) = R(|z_1|,\ldots,|z_N|)$, $\forall \bm{z} = (z_1,\ldots,z_N)\in\mathbb{R}^N$, and is increasing on $\mathcal{U}$, then \begin{align} \label{inc_prop} R(\bm{z}) \ge R(\bm{z}')\ \text{for all } \ensuremath{\bm{z}}, \ensuremath{\bm{z}}' \in \mathbb{R}^N \text{ with } |z_j| \ge |z'_j|,\, \forall 1\le j\le N. \end{align} $R$ is therefore increasing on the whole space $\mathbb{R}^N$ in the sense of \eqref{inc_prop}. We will use the terms ``increasing on $\ensuremath{\mathcal{U}}$'' and ``increasing on $\mathbb{R}^N$'' interchangeably in the sequel. To establish the generalized conditions for successful sparse recovery in the non-separable case, we employ the concept of \textit{majorization}. This notion, defined below, makes precise and rigorous the idea that the components of a vector are ``more (or less) equal'' than those of another.
\begin{definition}[Majorization, \cite{MarshallOlkinArnold11}] For $\ensuremath{\bm{z}},\ensuremath{\bm{z}}' \in \ensuremath{\mathcal{U}}$, $\ensuremath{\bm{z}}$ is said to be majorized by $\ensuremath{\bm{z}}'$, denoted by $\ensuremath{\bm{z}} \prec \ensuremath{\bm{z}}' $, if \begin{align} \label{def:majorization} \begin{cases} \sum\limits_{j=1}^n z_{[j]} \le \sum\limits_{j=1}^n z'_{[j]},\ \ \ n = 1,\ldots, N-1, \\ \sum\limits_{j=1}^N z_{[j]} = \sum\limits_{j=1}^N z'_{[j]}. \end{cases} \end{align} Given condition \eqref{def:majorization}, we also say $\ensuremath{\bm{z}}'$ majorizes $\ensuremath{\bm{z}}$ and denote $\ensuremath{\bm{z}}' \succ \ensuremath{\bm{z}} $. \end{definition} As a simple example of majorization, we have \begin{gather*} \begin{aligned} &\left(\frac16,\frac16,\frac16,\frac16,\frac16,\frac16\right) \prec \left(\frac14,\frac14,\frac14,\frac14,0,0\right) \prec \left(\frac38,\frac14,\frac14,\frac18,0,0\right) \\ &\qquad \qquad \prec \left(\frac12,\frac12,0,0,0,0\right) \prec \left(\frac23,\frac13,0,0,0,0\right) \prec \left(1,0,0,0,0,0\right). \end{aligned} \end{gather*} Loosely speaking, a sparse vector tends to majorize a dense one with the same $\ell_1$ norm. On the other hand, a sparsity-promoting penalty function should have small values at sparse signals and larger values at dense signals. One may think that a penalty function which reverses the order of majorization would promote sparsity, in particular, outperform $\ell_1$. We will show in the next sections that this intuition is indeed correct, but first, let us clarify that all symmetric and concave penalty functions considered are order-reversing\footnote{Functions that reverse the order of majorization are often referred to as Schur-concave functions, see, e.g., \cite{MarshallOlkinArnold11}.}, see \cite[Chapter 3.C]{MarshallOlkinArnold11} and \cite[Remark II.3.7]{Bhatia97}.
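The partial-sum conditions in \eqref{def:majorization} are easy to check mechanically. The following sketch (the helper `majorizes` is ours, not from the paper) verifies the example chain above.

```python
def majorizes(zp, z, tol=1e-12):
    """Return True if z is majorized by zp (z "prec" zp): sorted in
    decreasing order, every proper prefix sum of z is at most that of zp,
    and the total sums agree."""
    a = sorted(z, reverse=True)
    b = sorted(zp, reverse=True)
    partial_a = partial_b = 0.0
    for j in range(len(a) - 1):
        partial_a += a[j]
        partial_b += b[j]
        if partial_a > partial_b + tol:
            return False
    return abs(sum(a) - sum(b)) < tol

# The example chain from the text, each vector majorized by the next.
chain = [
    [1 / 6] * 6,
    [1 / 4] * 4 + [0, 0],
    [3 / 8, 1 / 4, 1 / 4, 1 / 8, 0, 0],
    [1 / 2, 1 / 2, 0, 0, 0, 0],
    [2 / 3, 1 / 3, 0, 0, 0, 0],
    [1, 0, 0, 0, 0, 0],
]
assert all(majorizes(chain[i + 1], chain[i]) for i in range(len(chain) - 1))
```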
\begin{lemma} \label{lemma:majorize} Let $R$ be a function from $\mathbb{R}^N$ to $[0,\infty)$ satisfying $R(\!z_1,\ldots,\!z_N\!)$ $ = R(|z_1|,\ldots,|z_N|),\ \forall \bm{z} = (z_1,\ldots,z_N)\in\mathbb{R}^N$. If $R$ is symmetric on $\mathbb{R}^N$ and concave on $\ensuremath{\mathcal{U}}$, then $R$ reverses the order of majorization: \begin{align} \label{eq:schur-ccv} R(\ensuremath{\bm{z}}) \ge R(\ensuremath{\bm{z}}') \text{ for all }\ensuremath{\bm{z}},\ensuremath{\bm{z}}' \in \ensuremath{\mathcal{U}}\text{ with }\ensuremath{\bm{z}} \prec \ensuremath{\bm{z}}'. \end{align} \end{lemma} \section{Exact recovery of sparse signals via non-separable penalties} \label{sec:nonseparable} In this section, we prove that concave, non-separable and symmetric regularizations are superior to $\ell_1$ in sparse, uniform recovery. This setting applies to penalties such as two-level $\ell_1$, sorted $\ell_1$, and $\ell_1 - \ell_2$. Our main result is given below. \begin{theorem} \label{main_theorem} Let $N>1$, $s\in \mathbb{N}$ with $1 \le s < N/2 $, and $\ensuremath{\bm{A}}$ be an $m\times N$ real matrix. Consider the problem \eqref{problem:PR}, where $R$ is a function from $\mathbb{R}^N$ to $[0,\infty)$ satisfying $R(z_1,\ldots,z_N) = R(|z_1|,\ldots,|z_N|),\ \forall \bm{z} = (z_1,\ldots,z_N)\in\mathbb{R}^N$, symmetric on $\mathbb{R}^N$, and concave on $\mathcal{U}$. \begin{enumerate}[\ \ \ i)] \item If \begin{align} \label{assump1} \tag{${\text{\fontfamily{cmss}\selectfont R}_{1}}$} R(z_1,\ldots,z_s, z_{s+1},{0,\ldots,0}) > R(z_1,\ldots, z_s,0,\ldots,0),\ \forall z_1, \ldots, z_{s+1} > 0, \end{align} then every $s$-sparse vector $\bm{x}\in \mathbb{R}^N$ is the unique solution to \eqref{problem:PR} provided that the null space property \eqref{NSP:l1} is satisfied. In this sense, {\eqref{problem:PR} is at least as good as $\ell_1$-minimization}.
\item If \begin{gather} \label{assump2} \tag{${\text{\fontfamily{cmss}\selectfont R}_{2}}$} \begin{aligned} R(z_1,\ldots,z_{s-1},z_s, z_{s+1}, 0,\ldots, 0) >\, R( z_1,& \ldots,z_{s-1},z_s + z_{s+1},{0,\ldots,0}),\ \\ & \forall z_1, \ldots, z_{s+1} > 0, \notag \end{aligned} \end{gather} then every $s$-sparse vector $\bm{x}\in \mathbb{R}^N$ (except equal-height vectors) is the unique solution to \eqref{problem:PR} provided that the improved null space property \eqref{iNSP:nonsep} is satisfied. The recovery guarantee of \eqref{problem:PR} is therefore better than that of $\ell_1$-minimization. \end{enumerate} \end{theorem} { It is worth emphasizing that Theorem \ref{main_theorem} does not have a fixed-support version. More specifically, it may be tempting to think that \eqref{NSP:l1} can be proved to be a sufficient condition for non-separable minimizations by the same mechanism as in the $\ell_1$ and separable cases, i.e., by combining all conditions for the recovery of vectors supported on fixed sets $S$ of cardinality $s$, namely \begin{gather} \label{NSP-fs} \begin{aligned} \ker(\ensuremath{\bm{A}})\!\setminus\!\{\mathbf{0}\}\subset \bigg\{ \ensuremath{\bm{v}}\! \in \! \mathbb{R}^N: \|\ensuremath{\bm{v}}_S\|_1 < \|\ensuremath{\bm{v}}_{\overline{S}}\|_1\bigg\}, \end{aligned} \end{gather} see \cite[Section 4.1]{FouRau13}. However, this strategy does not work, as we can show that, unlike for $\ell_1$, \eqref{NSP-fs} does not guarantee the successful recovery of vectors supported on $S$ with non-separable penalties. Indeed, consider the underdetermined system $ \ensuremath{\bm{A}}\ensuremath{\bm{z}} = \ensuremath{\bm{A}}\ensuremath{\bm{x}}, $ where the matrix $\ensuremath{\bm{A}}\in \mathbb{R}^{4\times 5}$ is defined as \[ \ensuremath{\bm{A}} = \left( \begin{array}{ccccc} 1 & 0.5 & 1 & 0 & 0 \\ 1 & -0.5 & 0 & 1 & 0 \\ 0 & 0.1 & 0 & 0 & 1 \\ 1 & -1 & 0 & 0 & 0 \end{array} \right).
\] As $\ker(\ensuremath{\bm{A}}) = \{(-t,-t,3t/2,t/2,t/10)^{\top} : t \in \mathbb{R}\}$ satisfies \eqref{NSP-fs} with $S=\{1,2\}$, all sparse signals supported on $S$ can be exactly recovered with the $\ell_1$ penalty. Now consider the $\ell_1 - \ell_2$ regularization and let $\ensuremath{\bm{x}} = (1,1,0,0,0)^\top$, which is supported on $S$; then any solution to $\ensuremath{\bm{A}}\ensuremath{\bm{z}} = \ensuremath{\bm{A}}\ensuremath{\bm{x}}$ can be represented as $(1-t,1-t,3t/2,t/2,t/10)^{\top}$. Among those, the unique minimizer of $R_{\ell_1 - \ell_2}(\ensuremath{\bm{z}})$ is $\ensuremath{\bm{z}}\! =\! (0,0,3/2,1/2,1/10)^\top$, which is different from $\ensuremath{\bm{x}}$ and not the sparsest solution. } The proof of Theorem \ref{main_theorem} is rather lengthy and is relegated to Section \ref{sec:proof}. Let us first discuss the assumptions \eqref{assump1} and \eqref{assump2}. We will see from this proof that the concavity and symmetry of the penalty function $R$ are enough to guarantee that every $s$-sparse vector is a solution to \eqref{problem:PR}. For exact recovery, we also need these solutions to be unique. Such uniqueness could be derived assuming $R$ is strictly concave, but several regularizations do not satisfy this property. Instead, we only require strict concavity (or a strictly increasing property) in one direction and locally at $s$-sparse vectors, as reflected in \eqref{assump1} and \eqref{assump2}. These mild conditions can be validated easily for the considered non-separable penalties; see Proposition \ref{prop:non-sep}. We note that \eqref{assump1} is weaker than \eqref{assump2}. On the other hand, \eqref{iNSP:nonsep} cannot guarantee the exact recovery of equal-height, $s$-sparse vectors with symmetric penalties in general.
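Before moving on, the fixed-support counterexample above can be verified numerically. The sketch below (helper names ours) confirms that $\ensuremath{\bm{z}} = (0,0,3/2,1/2,1/10)^\top$ is feasible for $\ensuremath{\bm{A}}\ensuremath{\bm{z}} = \ensuremath{\bm{A}}\ensuremath{\bm{x}}$ yet has a strictly smaller $\ell_1 - \ell_2$ value than the sparse $\ensuremath{\bm{x}}$.

```python
import math

# The 4 x 5 matrix A and the vectors from the counterexample above.
A = [[1, 0.5, 1, 0, 0],
     [1, -0.5, 0, 1, 0],
     [0, 0.1, 0, 0, 1],
     [1, -1, 0, 0, 0]]
x = [1.0, 1.0, 0.0, 0.0, 0.0]   # sparse, supported on S = {1, 2}
z = [0.0, 0.0, 1.5, 0.5, 0.1]   # the claimed l1 - l2 minimizer (t = 1)

def matvec(M, v):
    return [sum(m * t for m, t in zip(row, v)) for row in M]

def r_l1_l2(v):
    # R_{l1-l2}(v) = ||v||_1 - ||v||_2
    return sum(abs(t) for t in v) - math.sqrt(sum(t * t for t in v))

# z is feasible for A z = A x ...
assert all(abs(p - q) < 1e-12 for p, q in zip(matvec(A, z), matvec(A, x)))
# ... yet has a strictly smaller l1 - l2 value than the sparse x.
assert r_l1_l2(z) < r_l1_l2(x)
```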
Indeed, it is possible that $\ker(\ensuremath{\bm{A}})$ contains equal-height, $2s$-sparse vectors, for example, $\sum_{j=1}^{2s} \ensuremath{\bm{e}}_j$, in which case the recovery problem of an equal-height, $s$-sparse vector, say $\ensuremath{\bm{z}} = \sum_{j=1}^{s} \overline{z} \ensuremath{\bm{e}}_j$, would have at least one other solution, namely $\ensuremath{\bm{z}}' = - \sum_{j=s+ 1}^{2s} \overline{z} \ensuremath{\bm{e}}_j$, as $R(\ensuremath{\bm{z}}) = R(\ensuremath{\bm{z}}')$. We therefore exclude the reconstruction of equal-height vectors under \eqref{iNSP:nonsep}. \begin{proposition} \label{prop:non-sep} \begin{enumerate}[\ \ \ i)] \item The following methods: \begin{center} two-level $\ell_1$, sorted $\ell_1$ with $\beta_{s+1} > 0$ \end{center} are at least as good as $\ell_1$-minimization in recovering sparse vectors in the sense that these methods exactly reconstruct all $s$-sparse vectors under the null space property \eqref{NSP:l1}. \item The following methods: \begin{center} $\ell_1 - \ell_2$, sorted $\ell_1$ with $\beta_{s+1} > \beta_{s}$ \end{center} are provably superior to $\ell_1$-minimization in recovering sparse vectors in the sense that these methods exactly reconstruct all $s$-sparse (except equal-height) vectors under the improved null space property \eqref{iNSP:nonsep}. \end{enumerate} \end{proposition} \begin{proof} In this proof, for convenience, we often drop the zero components when denoting vectors in $\mathbb{R}^N$; for instance, $(z_1,\ldots,z_s,0,\ldots,0)$ with $z_i \ne 0, \forall\, 1\le i\le s$ and $s<N$ is simply represented as $(z_1,\ldots,z_s)$.
Applying Theorem \ref{main_theorem}, we only need to show that: i) $R_{2\ell_1}$ and $R_{s\ell_1}$ with $\beta_{s+1} > 0$ satisfy \eqref{assump1} and ii) $R_{\ell_1 - \ell_2}$ and $R_{s\ell_1}$ with $\beta_{s+1} > \beta_{s}$ satisfy \eqref{assump2}. \begin{enumerate} \item $R_{\ell_1 - \ell_2}$: For $z_1,\ldots, z_{s+ 1} > 0$, the $\ell_1$ parts of the two sides of \eqref{assump2} coincide, so \eqref{assump2} is equivalent to \begin{align*} \sqrt{ z_1^2 + \ldots + z_{s-1}^2 + (z_s + z_{s+1})^2} > \sqrt{ z_1^2 + \ldots + z_{s+1}^2}, \end{align*} which holds because $(z_s + z_{s+1})^2 = z_s^2 + 2 z_s z_{s+1} + z_{s+1}^2 > z_s^2 + z_{s+1}^2$. \item $R_{2\ell_1}$: We have $$ R_{2\ell_1}(z_1,\ldots, z_{s+ 1}) \ge R_{2\ell_1}(z_1,\ldots, z_{s}) + \rho z_{s+1} > R_{2\ell_1}(z_1,\ldots, z_{s}) $$ for all $z_1,\ldots, z_{s+ 1} > 0$ and \eqref{assump1} is deduced. \item $R_{s\ell_1}$: Let us define $\ensuremath{\bm{z}} = (z_1,\ldots, z_{s+1},0,\ldots,0)\in \mathbb{R}^N$ and assume $z_{s}$ and $z_{s+1}$ are the $T$-th and $t$-th largest components of $\ensuremath{\bm{z}}$, i.e., $z_{s} = z_{[T]}$, $z_{s+1} = z_{[t]}$. \hspace{.2in} Consider $\beta_{s+1} > 0$; we assume $t<s+1$ (the other case, $t = s+1$, is trivial). For any $j$ with $t \le j \le s$, $ \beta_j z_{[j]} \ge \beta_{j} z_{[j+1]}$. At $j = s+1$, we estimate $\beta_{s+1} z_{[s+1]} > 0$. It follows that \begin{align*} & R_{s\ell_1}(z_1,\ldots, z_{s+1}) = \sum_{j=1}^{s+1} \beta_j z_{[j]} > \sum_{j=1}^{t-1} \beta_j z_{[j]} + \sum_{j=t}^s \beta_j z_{[j+1]} = R_{s\ell_1}(z_1,\ldots, z_{s}), \end{align*} giving \eqref{assump1}.
\hspace{.2in} Next, consider $\beta_{s+1} > \beta_{s}$. Without loss of generality, assume $t > T$ and also $t<s+1$ (the argument below also applies to $t = s+1$ with minor changes). At $j= t$, we estimate $\beta_t z_{[t]} \ge \beta_{T} z_{[t]}$. For all $j$ with $t+1 \le j\le s +1 $, $ \beta_j z_{[j]} \ge \beta_{j-1} z_{[j]}$. In particular, at $j = s+1$, the strict inequality holds, i.e., $ \beta_{s+1} z_{[s+1]} > \beta_{s} z_{[s+1]}$. Combining these facts and applying the rearrangement inequality yield \begin{align*} & R_{s\ell_1}(z_1,\ldots, z_{s+1}) = \sum_{j=1}^{s+1} \beta_j z_{[j]} > \sum_{j=1}^{t-1} \beta_j z_{[j]} + \beta_{T} z_{[t]} + \sum_{j = t+1}^{s+1} \beta_{j-1} z_{[j]} \\ & = \sum_{\substack{ j=1 \\ j\ne T} }^{t-1} \beta_j z_{[j]} + \sum_{j = t+1}^{s+1} \beta_{j-1} z_{[j]} + \beta_{T} ( z_{[T]} + z_{[t]}) \ge R_{s\ell_1}(z_1,\ldots, z_{s-1}, z_s + z_{s+1}) . \end{align*} We obtain \eqref{assump2} and complete the proof. \end{enumerate} \end{proof} \subsection{Proof of Theorem \ref{main_theorem}} \label{sec:proof} First, since $R$ is concave on $\mathcal{U}$, $R$ is also increasing on $\mathbb{R}^N$; see Lemma \ref{lem:incr_prop} and the discussion following it. In what follows we denote \begin{align*} \ensuremath{\mathcal{K}}_1 &:= \{\ensuremath{\bm{v}}\in \mathbb{R}^N: &\ \|\ensuremath{\bm{v}}_S\|_1 < \|\ensuremath{\bm{v}}_{\ensuremath{\overline{S}}}\|_1,\ \forall S\subset \{1,\ldots,N\}\, \text{ with }\, \#(S)\le s\}, \\ \ensuremath{\mathcal{K}}_2 &:= \{\ensuremath{\bm{v}}\in \mathbb{R}^N \setminus \{\mathbf{0}\}: \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!
&\ \|\ensuremath{\bm{v}}_S\|_1 \le \|\ensuremath{\bm{v}}_{\ensuremath{\overline{S}}}\|_1,\ \forall S\subset \{1,\ldots,N\} \, \text{ with }\, \#(S)\le s\}. \end{align*} Recall that if $\ensuremath{\bm{A}} $ satisfies \eqref{NSP:l1} or, correspondingly, \eqref{iNSP:nonsep}, then $\ker(\ensuremath{\bm{A}}) \setminus \{\textbf{0}\}\subset \ensuremath{\mathcal{K}}_1$ or $\ker(\ensuremath{\bm{A}})\setminus\{\textbf{0}\} \subset \ensuremath{\mathcal{K}}_2$, respectively. Let $\ensuremath{\bm{x}}$ be a fixed $s$-sparse vector in $\mathbb{R}^N$; then $\ensuremath{\bm{z}} = \ensuremath{\bm{x}}$ solves $\ensuremath{\bm{A}}\ensuremath{\bm{z}} = \ensuremath{\bm{A}} \ensuremath{\bm{x}}$ and any other solution of this system can be written as $\ensuremath{\bm{z}} = \ensuremath{\bm{x}} + \ensuremath{\bm{v}}$ with some $\ensuremath{\bm{v}} \in \ker(\ensuremath{\bm{A}})\setminus\{\textbf{0}\} \subset \mathcal{K}_{1|2}$. We will show in the next part that: \begin{itemize} \item if \eqref{assump1} holds then $$ R(\ensuremath{\bm{x}} + {\ensuremath{\bm{v}}}) > R(\ensuremath{\bm{x}}),\ \forall \ensuremath{\bm{v}}\in \mathcal{K}_1; \ \text{ and } $$ \item if \eqref{assump2} holds and $\ensuremath{\bm{x}}$ is not an equal-height vector then $$ R(\ensuremath{\bm{x}} + {\ensuremath{\bm{v}}}) > R(\ensuremath{\bm{x}}),\ \forall \ensuremath{\bm{v}}\in \mathcal{K}_2, $$ \end{itemize} thus $\ensuremath{\bm{x}}$ is the unique solution to \eqref{problem:PR} assuming either: (i) \eqref{assump1} and \eqref{NSP:l1}; or (ii) \eqref{assump2} and \eqref{iNSP:nonsep} and $\ensuremath{\bm{x}}$ is not an equal-height vector. Since $R$ is symmetric, without loss of generality, we assume \begin{align} &\ensuremath{\bm{x}} = (x_1,\ldots,x_s,0,\ldots,0) , \mbox{ where }x_i \ge 0,\, \forall 1\le i\le s.
\label{define_x} \end{align} For every $\ensuremath{\bm{v}} \in \ensuremath{\mathcal{K}}_{1|2}$, there exists $\tilde{\ensuremath{\bm{v}}} \in \ensuremath{\mathcal{K}}_{1|2}$ such that $$ R(\ensuremath{\bm{x}} + \tilde{\ensuremath{\bm{v}}}) \le R(\ensuremath{\bm{x}} + {\ensuremath{\bm{v}}} ), $$ and the first $s$ components of $\tilde{\ensuremath{\bm{v}}}$ are nonpositive, i.e., $ \tilde{v}_i \le 0,\, \forall 1\le i\le s. $ Indeed, let ${\ensuremath{\bm{v}}} = (v_1,\ldots,v_N)$ be a vector in $\mathcal{K}_{1|2}$. For any $1\le i \le s$, if ${v}_i > 0$, we flip the sign of ${v}_i$, i.e., we replace ${v}_i $ by $-{v}_i $. Denoting the newly formed vector by $\tilde{\ensuremath{\bm{v}}}$, we have $\tilde{\ensuremath{\bm{v}}}\in \ensuremath{\mathcal{K}}_{1|2}$ and $$ |(\ensuremath{\bm{x}} + \tilde{\ensuremath{\bm{v}}})_i | = |x_i -{v}_i | \le x_i + {v}_i = |(\ensuremath{\bm{x}} + {\ensuremath{\bm{v}}})_i|, $$ for every $ i $ where the sign is flipped, while $(\ensuremath{\bm{x}} + \tilde{\ensuremath{\bm{v}}})_i = (\ensuremath{\bm{x}} + {\ensuremath{\bm{v}}})_i$ for other $i$. By the increasing property of $R$, $\tilde{\ensuremath{\bm{v}}}$ satisfies $ R(\ensuremath{\bm{x}} + \tilde{\ensuremath{\bm{v}}}) \le R(\ensuremath{\bm{x}} + {\ensuremath{\bm{v}}} ). $ The first $s$ components of $\tilde{\ensuremath{\bm{v}}}$ are nonpositive by definition. It is therefore enough to consider $\ensuremath{\bm{v}}\in \ensuremath{\mathcal{K}}_{1|2}$ represented as \begin{align} \label{def:v} {\ensuremath{\bm{v}}} = (- {a}_1,\ldots,- {a}_s,{b}_1,\ldots,{b}_t,0,\ldots,0), \end{align} where the $\{{a}_i\}$ are nonnegative and the $\{{b}_j\}$ are positive. We denote by $E({\ensuremath{\bm{v}}})$ the multiset $ \{{a}_i:i\in \overline{1,s}\}\cup \{{b}_j: j\in \overline{1,t}\}$, by $U({\ensuremath{\bm{v}}})$ the multiset containing the $s$ largest elements of $E(\ensuremath{\bm{v}})$, and $L(\ensuremath{\bm{v}}) := E(\ensuremath{\bm{v}})\setminus U(\ensuremath{\bm{v}})$.
Let \begin{align*} &\overline{b}(\ensuremath{\bm{v}}) := \min U(\ensuremath{\bm{v}}), \mbox{ and} \\ &\sigma(\ensuremath{\bm{v}})\mbox{ and }\lambda(\ensuremath{\bm{v}})\mbox{ be the sum of all elements in }U(\ensuremath{\bm{v}})\mbox{ and }L(\ensuremath{\bm{v}}) \mbox{ respectively}. \end{align*} Then ${\ensuremath{\bm{v}}} \in \mathcal{K}_1$ (or ${\ensuremath{\bm{v}}} \in \mathcal{K}_2$, correspondingly) if and only if $\sigma(\ensuremath{\bm{v}})\! < \lambda(\ensuremath{\bm{v}})$ ($\sigma(\ensuremath{\bm{v}})\! \le \lambda(\ensuremath{\bm{v}})$ and $\ensuremath{\bm{v}} \ne \mathbf{0}$, respectively). Also, note that $t\ge s+1 $ for all ${\ensuremath{\bm{v}}} \in \mathcal{K}_1$, and $t \ge s $ for all ${\ensuremath{\bm{v}}} \in \mathcal{K}_2$, with $t=s$ occurring only if $\ensuremath{\bm{v}}$ is an equal-height vector. Below we consider two cases. \textbf{Case 1:} Assume \eqref{assump1} and $\ensuremath{\bm{v}}\in \ensuremath{\mathcal{K}}_1$ as in \eqref{def:v}. We will show there exists $\widehat{\ensuremath{\bm{v}}}\in \mathcal{K}_1 $ such that \begin{align*} \widehat{\ensuremath{\bm{v}}} = & \, (- {a}_1,\ldots,- {a}_s,\underbrace{\overline{b},\ldots,\overline{b}}_{T-1},b_{T},0,\ldots,0),\ \text{for some }T\ge s+1, \ 0< b_{T} \le \overline{b} \equiv \overline{b}(\ensuremath{\bm{v}}), \\ & \text{and }\ \ \qquad \quad R(\ensuremath{\bm{x}} + \widehat{\ensuremath{\bm{v}}}) \le R(\ensuremath{\bm{x}} + {\ensuremath{\bm{v}}} ). \end{align*} First, we replace all components $b_j$ in $U(\ensuremath{\bm{v}})$ (which satisfy $b_j\ge \ensuremath{\overline{b}}$) by $\ensuremath{\overline{b}}$ and subtract a total amount $\sum_{b_j \in U(\ensuremath{\bm{v}})} ( b_j - \overline{b})$ from the $b_{j}$'s in $L(\ensuremath{\bm{v}})$.
It is possible to form an elementwise nonnegative vector (referred to as $\ensuremath{\bm{v}}'$) with this step, since \begin{align} \label{mp:est1} \sum_{b_j \in U{(\ensuremath{\bm{v}})}} ( b_j - \overline{b}) < \sum_{b_j \in L(\ensuremath{\bm{v}})} b_j . \end{align} Indeed, one has $\sum\limits_{a_i \in L(\ensuremath{\bm{v}})} a_i \le s\ensuremath{\overline{b}} \le \sum\limits_{a_i \in U(\ensuremath{\bm{v}})} a_i + \sum\limits_{b_j \in U(\ensuremath{\bm{v}})} \ensuremath{\overline{b}}$. Combining this with $\sigma(\ensuremath{\bm{v}})\! < \lambda(\ensuremath{\bm{v}})$ gives $$ \sum\limits_{a_i \in U(\ensuremath{\bm{v}})} a_i + \sum\limits_{b_j \in U(\ensuremath{\bm{v}})} b_j < \sum\limits_{a_i \in L(\ensuremath{\bm{v}})} a_i + \sum\limits_{b_j \in L(\ensuremath{\bm{v}})} b_j \le \sum\limits_{a_i \in U(\ensuremath{\bm{v}})} a_i + \sum\limits_{b_j \in U(\ensuremath{\bm{v}})} \ensuremath{\overline{b}} + \sum\limits_{b_j \in L(\ensuremath{\bm{v}})} b_j, $$ yielding \eqref{mp:est1}. We remark that $\ensuremath{\bm{v}}' \in \ensuremath{\mathcal{K}}_1$, since $$ \sigma(\ensuremath{\bm{v}}')\! = \sigma(\ensuremath{\bm{v}}) - \sum_{b_j \in U(\ensuremath{\bm{v}})} ( b_j - \overline{b}) < \lambda(\ensuremath{\bm{v}}) - \sum_{b_j \in U(\ensuremath{\bm{v}})} ( b_j - \overline{b}) = \lambda(\ensuremath{\bm{v}}'). $$ On the other hand, by construction, the magnitudes of the coordinates of $\ensuremath{\bm{x}} + \ensuremath{\bm{v}}'$ are less than or equal to those of $\ensuremath{\bm{x}} + \ensuremath{\bm{v}}$. The increasing property of $R$ gives $R(\ensuremath{\bm{x}} + {\ensuremath{\bm{v}}}') \le R(\ensuremath{\bm{x}} + {\ensuremath{\bm{v}}} )$.
Now, representing $\ensuremath{\bm{v}}'$ as $$ \ensuremath{\bm{v}}' = (- {a}_1,\ldots,- {a}_s,{b}'_1,\ldots,{b}'_{t'},0,\ldots,0), $$ we observe $\sigma(\ensuremath{\bm{v}}') \ge s\ensuremath{\overline{b}}$, as $\ensuremath{\bm{v}}'$ has at least $s$ components whose magnitudes are not less than $\ensuremath{\overline{b}}$. Thus, $\sum_{j =1}^{t'} b'_{j} \ge \lambda(\ensuremath{\bm{v}}') > \sigma(\ensuremath{\bm{v}}') \ge s\ensuremath{\overline{b}}$ and there exist $T \ge s+1 $ and $b_T \in (0,\ensuremath{\overline{b}}]$ such that $(T-1)\ensuremath{\overline{b}} + b_T= \sum_{j =1}^{t'} b'_{j}$. We define $$ \widehat{\ensuremath{\bm{v}}} = \, (- {a}_1,\ldots,- {a}_s,\underbrace{\overline{b},\ldots,\overline{b}}_{T-1},b_{T},0,\ldots,0). $$ One has $U(\widehat{\ensuremath{\bm{v}}}) = U(\ensuremath{\bm{v}}')$, which implies $\sigma(\widehat{\ensuremath{\bm{v}}}) = \sigma(\ensuremath{\bm{v}}')$ and $\lambda(\widehat{\ensuremath{\bm{v}}}) = \lambda(\ensuremath{\bm{v}}')$. Then, $\widehat{\ensuremath{\bm{v}}} \in \ensuremath{\mathcal{K}}_1$ can be deduced from the fact that $\ensuremath{\bm{v}}' \in \mathcal{K}_1$. As $b'_{j} \le \ensuremath{\overline{b}}\ \, \forall 1\le j\le t' $, it is easy to see {$|\ensuremath{\bm{x}} + \ensuremath{\bm{v}}'| \prec |\ensuremath{\bm{x}} + \widehat{\ensuremath{\bm{v}}} |$}. From Lemma \ref{lemma:majorize}, there follows $ R( \ensuremath{\bm{x}} + \widehat{\ensuremath{\bm{v}}}) \le R(\ensuremath{\bm{x}} + \ensuremath{\bm{v}}')$. We proceed to prove $R(\ensuremath{\bm{x}}) < R(\ensuremath{\bm{x}} + \widehat{\ensuremath{\bm{v}}})$. Let $\ensuremath{\overline{a}} = \sum_{i=1}^s a_i / s$. If $\ensuremath{\overline{a}} = 0$, the assertion can be deduced easily from the increasing property of $R$ and \eqref{assump1}. Let us consider $\ensuremath{\overline{a}} >0$. Since $\widehat{\ensuremath{\bm{v}}} \in \mathcal{K}_1$, $ (T-1)\ensuremath{\overline{b}} + b_T > s\ensuremath{\overline{a}}. 
$ There exists $0< \kappa < 1 $ such that $$ (T-1)\kappa \ensuremath{\overline{b}} + \kappa b_T > s\ensuremath{\overline{a}} > (T-1)\kappa \ensuremath{\overline{b}}. $$ We write $ (T-1)\kappa \ensuremath{\overline{b}} + \kappa b_T = s\ensuremath{\overline{a}} + a' = a_1 + \ldots + a_s + a' , $ then $ 0 < a' < \kappa b_T$. Also, note that $\ensuremath{\overline{a}} > \kappa \ensuremath{\overline{b}}$, as $T - 1 \ge s$. Denoting \begin{gather} \label{maj_series} \begin{aligned} \ensuremath{\bm{z}}_1 &= (|x_1- {a}_1|,\ldots,| x_s - {a}_s|,\underbrace{\kappa\overline{b},\ldots,\kappa\overline{b}}_{T-1},\kappa b_{T},0,\ldots,0), \\ \ensuremath{\bm{z}}_2 &= (|x_1- {a}_1|,\ldots,| x_s - {a}_s|,\underbrace{\ensuremath{\overline{a}},\ldots,\ensuremath{\overline{a}}}_{s},a',0,\ldots,0), \\ \ensuremath{\bm{z}}_3 & = (|x_1- {a}_1|,\ldots,| x_s - {a}_s|,{a_1,\ldots,a_s},a',0,\ldots,0), \\ \ensuremath{\bm{z}}_4 & = (|x_1- {a}_1| + a_1,\ldots,| x_s - {a}_s|+a_s,a',0,\ldots,0), \\ \ensuremath{\bm{x}}' & = ({x_1,\ldots,x_s},a',0,\ldots,0), \end{aligned} \end{gather} there holds $ \ensuremath{\bm{z}}_1 \prec \ensuremath{\bm{z}}_2 \prec \ensuremath{\bm{z}}_3 \prec \ensuremath{\bm{z}}_4. $ Applying Lemma \ref{lemma:majorize} yields \begin{align} \label{mp:est2} R(\ensuremath{\bm{z}}_1) \ge R(\ensuremath{\bm{z}}_2) \ge R(\ensuremath{\bm{z}}_3) \ge R(\ensuremath{\bm{z}}_4). \end{align} On the other hand, $\ensuremath{\bm{x}} + \widehat{\ensuremath{\bm{v}}} \ge \ensuremath{\bm{z}}_1$ and $\ensuremath{\bm{z}}_4 \ge \ensuremath{\bm{x}}'$, thus \begin{align} \label{mp:est3} R(\ensuremath{\bm{x}} + \widehat{\ensuremath{\bm{v}}}) \ge R(\ensuremath{\bm{z}}_1)\ \text{ and }\ R(\ensuremath{\bm{z}}_4) \ge R(\ensuremath{\bm{x}}'). \end{align} We have from \eqref{assump1} that \begin{align} \label{mp:est4} R(\ensuremath{\bm{x}}') > R(\ensuremath{\bm{x}}).
\end{align} Combining \eqref{mp:est2}--\eqref{mp:est4} gives $R(\ensuremath{\bm{x}} + \widehat{\ensuremath{\bm{v}}}) > R(\ensuremath{\bm{x}})$, as desired. \textbf{Case 2:} Assume \eqref{assump2}, $\ensuremath{\bm{v}}\in \ensuremath{\mathcal{K}}_2$ as in \eqref{def:v}, and that $\ensuremath{\bm{x}}$ is not an equal-height vector. Following the arguments in Case 1, there exists $\widehat{\ensuremath{\bm{v}}}\in \mathcal{K}_2 $ such that \begin{align*} \widehat{\ensuremath{\bm{v}}} = & \, (- {a}_1,\ldots,- {a}_s,\underbrace{\overline{b},\ldots,\overline{b}}_{T-1},b_{T},0,\ldots,0),\ \text{for some }T\ge s+1, \ 0 \le b_{T} < \overline{b} \equiv \overline{b}(\ensuremath{\bm{v}}), \\ & \text{and }\ \ \qquad \quad R(\ensuremath{\bm{x}} + \widehat{\ensuremath{\bm{v}}}) \le R(\ensuremath{\bm{x}} + {\ensuremath{\bm{v}}} ). \end{align*} Note that here $b_T \in [0,\ensuremath{\overline{b}})$, implying $\#(\supp(\widehat{\ensuremath{\bm{v}}})) \ge 2s$ (rather than $b_T \in (0,\ensuremath{\overline{b}}]$ and $\#(\supp(\widehat{\ensuremath{\bm{v}}})) \ge 2s+1$ as in the previous case). First, if $\#(\supp(\widehat{\ensuremath{\bm{v}}})) = 2s$, then $\widehat{\ensuremath{\bm{v}}}$ must be an equal-height vector: \begin{gather*} \begin{aligned} \widehat{\ensuremath{\bm{v}}} &= \, (\underbrace{- \ensuremath{\overline{b}},\ldots,- \ensuremath{\overline{b}}}_s,\underbrace{\overline{b},\ldots,\overline{b}}_{s},0,\ldots,0), \\ \ensuremath{\bm{x}} + \widehat{\ensuremath{\bm{v}}} &= \, ({x_1- \ensuremath{\overline{b}},\ldots,x_s- \ensuremath{\overline{b}}},\underbrace{\overline{b},\ldots,\overline{b}}_{s},0,\ldots,0). \end{aligned} \end{gather*} We denote \begin{gather*} \begin{aligned} \ensuremath{\bm{z}}_5 &= (|x_1- {\ensuremath{\overline{b}}}| + \ensuremath{\overline{b}},\ldots,| x_s - {\ensuremath{\overline{b}}}| + \ensuremath{\overline{b}} ,0,\ldots,0).
\end{aligned} \end{gather*} Since $\ensuremath{\bm{x}}$ is not an equal-height vector, $|x_{i} - \ensuremath{\overline{b}}| \ne 0$ for some $1\le i \le s$. Lemma \ref{lemma:majorize} and assumption \eqref{assump2} give $ R(\ensuremath{\bm{z}}_5) < R(\ensuremath{\bm{x}} + \widehat{\ensuremath{\bm{v}}}). $ It is easy to see that $\ensuremath{\bm{x}} \le \ensuremath{\bm{z}}_5$, therefore $ R(\ensuremath{\bm{x}}) \le R(\ensuremath{\bm{z}}_5), $ and we arrive at $R(\ensuremath{\bm{x}}) < R(\ensuremath{\bm{x}} + \widehat{\ensuremath{\bm{v}}})$. Otherwise, if $\#(\supp(\widehat{\ensuremath{\bm{v}}})) \ge 2s +1$, then $ (T-1) \ensuremath{\overline{b}} + b_T > s \ensuremath{\overline{b}}. $ Let us again denote $\ensuremath{\overline{a}} = \sum_{i=1}^s a_i / s$ and consider $\ensuremath{\overline{a}} >0$. Since $\widehat{\ensuremath{\bm{v}}}\in \mathcal{K}_2 $, $ (T-1)\ensuremath{\overline{b}} + b_T \ge s\ensuremath{\overline{a}}, $ and we can find $0< \kappa \le 1 $ such that $$ (T-1)\kappa \ensuremath{\overline{b}} + \kappa b_T \ge s\ensuremath{\overline{a}} > s \kappa \ensuremath{\overline{b}}. $$ We write $ (T-1)\kappa \ensuremath{\overline{b}} + \kappa b_T = s\ensuremath{\overline{a}} + a' , $ then $ a' \ge 0$ and $\ensuremath{\overline{a}} > \kappa \ensuremath{\overline{b}}$. Denoting $\ensuremath{\bm{z}}_1, \ensuremath{\bm{z}}_2, \ensuremath{\bm{z}}_3, \ensuremath{\bm{z}}_4$ and $\ensuremath{\bm{x}}'$ as in \eqref{maj_series}, similarly to Case 1, there holds $$ R(\ensuremath{\bm{x}}) \le R(\ensuremath{\bm{x}}') \le R(\ensuremath{\bm{z}}_4) \le R(\ensuremath{\bm{z}}_3) \le R(\ensuremath{\bm{z}}_2) \le R(\ensuremath{\bm{z}}_1) \le R(\ensuremath{\bm{x}} + \widehat{\ensuremath{\bm{v}}}) $$ due to $ \ensuremath{\bm{z}}_1 \prec \ensuremath{\bm{z}}_2 \prec \ensuremath{\bm{z}}_3 \prec \ensuremath{\bm{z}}_4, \, $ $ \ensuremath{\bm{z}}_4 \ge \ensuremath{\bm{x}}' \ge \ensuremath{\bm{x}} $ and $ \ensuremath{\bm{z}}_1 \le \ensuremath{\bm{x}} + \widehat{\ensuremath{\bm{v}}}.
$ We show that the strict inequality must occur somewhere in the chain. If $a' > 0$, we have from \eqref{assump2} that $ R(\ensuremath{\bm{x}}) < R(\ensuremath{\bm{x}}'). $ Otherwise, if $a'=0$, then $\#(\supp(\ensuremath{\bm{z}}_1)) \ge s + 1 $ and $\#(\supp(\ensuremath{\bm{z}}_4)) \le s $. Applying Lemma \ref{lemma:majorize} and assumption \eqref{assump2} gives $ R(\ensuremath{\bm{z}}_4) < R(\ensuremath{\bm{z}}_1). $ This concludes the proof. $\square$ \section{Exact recovery of sparse signals via separable penalties} \label{sec:separable} With the addition of separability, the path to establishing the null space condition for nonconvex minimizations is much simpler. To highlight the difference between the separable and non-separable cases, in this section we revisit the exact recovery of sparse signals assuming the penalty is concave and separable. The discussion herein applies to the following well-known regularizations. \begin{example}[Concave, separable and symmetric penalties] \label{example:sep_func} \begin{enumerate} \itemsep10pt \item $\ell_p$ norm with $0< p<1$: \ \ \ $R_{\ell_p}(\bm{z}) = \|\bm{z}\|_p^p$, \cite{Chartrand07,FoucartLai09}. \item SCAD:\qquad \qquad \qquad \qquad \,\ \ $R_{\text{\fontfamily{cmss}\selectfont SCAD}}(\ensuremath{\bm{z}}) = \sum_{j=1}^N r_{\text{\fontfamily{cmss}\selectfont SCAD}} (z_j)$, \cite{FanLi01}. Here, $r_{\text{\fontfamily{cmss}\selectfont SCAD}}(z_j) = \begin{cases} a_1 |z_j|, &\text{ if } |z_j|< a_1, \\ - \frac{a_1|z_j|^2 - 2 a_1 a_2 |z_j| + a_1^3}{2(a_2 - a_1)}, &\text{ if } a_1 \le |z_j| \le a_2, \\ \frac{a_1 a_2 + a_1^2}{2}, &\text{ if } |z_j|> a_2. \end{cases} $ \item Transformed $\ell_1$: \qquad \qquad \, $R_{t\ell_1}(\ensuremath{\bm{z}}) = \sum_{j=1}^N \rho_a(z_j)$, \cite{LvFan09,ZhangXin16}. Here, $\rho_a(z_j) = \frac{(a+1)|z_j|}{a + |z_j|},\, \forall z_j\in \mathbb{R}$ with $a\in (0,\infty)$.
\item Capped $\ell_1$: \qquad \qquad \quad \quad \, \ $R_{c\ell_1}(\bm{z}) = \sum\limits_{j=1}^N \min\{|z_j|,\alpha\}$, {\cite{NIPS2008_3526}}. \end{enumerate} \end{example} The first key difference is that, with separability, one can obtain the subadditivity of the penalty function. \begin{lemma} \label{prop:triangle_ineq} Let $R$ be a map from $\mathbb{R}^N$ to $[0,\infty)$ satisfying $R(z_1,\ldots,z_N) = R(|z_1|,\ldots,|z_N|),\ \forall \bm{z} = (z_1,\ldots,z_N)\in\mathbb{R}^N$. If $R$ is separable on $\mathbb{R}^N$ and concave on $\ensuremath{\mathcal{U}}$, then $R$ is subadditive on $\mathbb{R}^N$: $$ R(\ensuremath{\bm{z}} + \ensuremath{\bm{z}}') \le R(\ensuremath{\bm{z}}) + R(\ensuremath{\bm{z}}'),\ \forall \ensuremath{\bm{z}},\ensuremath{\bm{z}}' \in \mathbb{R}^N. $$ \end{lemma} \begin{proof} The assertion follows from its univariate version, see, e.g., \cite[Problem II.5.12]{Bhatia97}, and the separability of $R$. \end{proof} In this case, it is possible, and also natural, to show that \eqref{iNSP:sep} is the \textit{necessary and sufficient} condition for uniform, exact recovery via the fixed-support approach. The following theorem extends several results for specific penalties, e.g., $\ell_p$ \cite{FoucartLai09} and weighted $\ell_1$ \cite{RW15}. A similar result was proved in \cite{GribonvalNielsen07}. \begin{theorem} \label{theorem_separable} Let $\ensuremath{\bm{A}}$ be an $m\times N$ real matrix. Consider the problem \eqref{problem:PR}, where $R$ is a function from $\mathbb{R}^N$ to $[0,\infty)$ satisfying $R(z_1,\ldots,z_N) = R(|z_1|,\ldots,|z_N|),\ \forall \bm{z} = (z_1,\ldots,z_N)\in\mathbb{R}^N$, separable on $\mathbb{R}^N$, concave on $\ensuremath{\mathcal{U}}$, and such that $R( \mathbf{0}) = 0$. Then every $s$-sparse vector $\bm{x}\in \mathbb{R}^N$ is the unique solution to \eqref{problem:PR} if and only if the generalized null space property \eqref{iNSP:sep} is satisfied.
\end{theorem} \begin{proof} It is enough to show that, for every index set $S\subset \{1,\ldots,N\}$, every vector $\bm{x}\in \mathbb{R}^N$ supported in $S$ is the unique solution to \eqref{problem:PR} if and only if $$ \ker(\ensuremath{\bm{A}})\setminus\{\mathbf{0}\}\subset \{ \ensuremath{\bm{v}} \in \mathbb{R}^N: R(\ensuremath{\bm{v}}_S) < R(\ensuremath{\bm{v}}_{\overline{S}})\}. $$ First, assume that every vector $\bm{x}\in \mathbb{R}^N$ supported in $S$ is the unique solution to \eqref{problem:PR}. For any $\ensuremath{\bm{v}}\in \ker(\ensuremath{\bm{A}})\setminus \{\mathbf{0}\}$, $\bm{v}_S$ is the unique minimizer of $R(\ensuremath{\bm{z}})$ subject to $\ensuremath{\bm{A}} \ensuremath{\bm{z}} = \ensuremath{\bm{A}} \ensuremath{\bm{v}}_S$. Since $\ensuremath{\bm{A}}(-\ensuremath{\bm{v}}_{\ensuremath{\overline{S}}}) = \ensuremath{\bm{A}}\ensuremath{\bm{v}}_S$ and $-\ensuremath{\bm{v}}_{\ensuremath{\overline{S}}} \ne \ensuremath{\bm{v}}_S$ (because $\ensuremath{\bm{v}}\ne \mathbf{0}$), we have $R(\ensuremath{\bm{v}}_S) < R(\ensuremath{\bm{v}}_{\ensuremath{\overline{S}}})$. Conversely, assume $\ker(\ensuremath{\bm{A}})\setminus\{\mathbf{0}\}\subset \{ \ensuremath{\bm{v}} \in \mathbb{R}^N: R(\ensuremath{\bm{v}}_S) < R(\ensuremath{\bm{v}}_{\overline{S}})\}$. Let $\ensuremath{\bm{x}}\in \mathbb{R}^N$ be a vector supported in $S$. Any other solution to $\ensuremath{\bm{A}} \ensuremath{\bm{z}} = \ensuremath{\bm{A}} \ensuremath{\bm{x}}$ can be represented as $\ensuremath{\bm{z}} = \ensuremath{\bm{x}} + \ensuremath{\bm{v}}$ with $\ensuremath{\bm{v}} \in \ker(\ensuremath{\bm{A}})\setminus\{\textbf{0}\}$.
We have, by the separable property of $R$ and Lemma \ref{prop:triangle_ineq}, \begin{gather} \label{sec:sepa:est1} \begin{aligned} R(\ensuremath{\bm{x}} + \ensuremath{\bm{v}}) &= R(\ensuremath{\bm{x}} + \ensuremath{\bm{v}}_S) + R(\ensuremath{\bm{v}}_{\ensuremath{\overline{S}}}) \ge {R(\ensuremath{\bm{x}}) - R(\ensuremath{\bm{v}}_S)} + R(\ensuremath{\bm{v}}_{\ensuremath{\overline{S}}}) > R(\ensuremath{\bm{x}}), \end{aligned} \end{gather} hence $\ensuremath{\bm{x}}$ is the unique solution to \eqref{problem:PR}. \end{proof} { \begin{remark} \eqref{iNSP:sep} is actually a necessary condition for the exact recovery of every $s$-sparse vector even when the separability and concavity assumptions on $R$ are removed, and it is thus applicable to a very general class of penalty functions. For this condition to become sufficient, the separable assumption on $R$ is critical. In estimate \eqref{sec:sepa:est1}, we use this assumption in two ways: splitting $R(\ensuremath{\bm{x}}+ \ensuremath{\bm{v}})$ into $R(\ensuremath{\bm{x}} + \ensuremath{\bm{v}}_S) + R(\ensuremath{\bm{v}}_{\ensuremath{\overline{S}}})$, and bounding $R(\ensuremath{\bm{x}} + \ensuremath{\bm{v}}_S) \ge {R(\ensuremath{\bm{x}}) - R(\ensuremath{\bm{v}}_S)} $ via the subadditivity of $R$. Without the separable property, a concave penalty may not be subadditive. For example, consider $R_{\ell_1 - \ell_2}:\mathbb{R}^2 \to [0,\infty)$, corresponding to the $\ell_1-\ell_2$ regularization $R_{\ell_1 -\ell_2}(\ensuremath{\bm{z}}) = \|\ensuremath{\bm{z}}\|_1 - \|\ensuremath{\bm{z}}\|_2$. $R_{\ell_1 -\ell_2}$ is not separable and also not subadditive, as one has $$ 2 - \sqrt{2} = R_{\ell_1 -\ell_2}(1,1) > R_{\ell_1 -\ell_2}(1,0) + R_{\ell_1 -\ell_2}(0,1) = 0. $$ In this case, the analysis of the sufficient condition follows a significantly different and more complicated path (see Section \ref{sec:nonseparable}).
\end{remark} It is worth emphasizing that the necessary and sufficient condition \eqref{iNSP:sep} for concave and separable penalties is not necessarily less demanding than \eqref{NSP:l1}, as the following example shows. \begin{example} \label{example:nonsymmetric} Consider the underdetermined system $ \ensuremath{\bm{A}}\ensuremath{\bm{z}} = \ensuremath{\bm{A}}\ensuremath{\bm{x}}, $ where the matrix $\ensuremath{\bm{A}}\in \mathbb{R}^{4\times 5}$ is defined as \[ \ensuremath{\bm{A}} = \left( \begin{array}{ccccc} 1 & \frac12 & \frac94 & 0 & 0 \\ 1 & -\frac12 & 0 & \frac34 & 0 \\ 0 & 1 & 0 & 0 & \frac43 \\ 1 & -1 & 0 & 0 & 0 \end{array} \right). \] As $\ker(\ensuremath{\bm{A}}) = \{(-t,-t,2t/3,2t/3,3t/4)^{\top}: t \in \mathbb{R}\}$ satisfies \eqref{NSP:l1} with $N =5$ and $s = 2$, one can successfully reconstruct all $2$-sparse vectors using $\ell_1$ minimization. Consider the weighted $\ell_1$ regularization $$ R_{w\ell_1} (\ensuremath{\bm{z}}) = 4 |z_1| + 3 |z_2| + |z_3| + |z_4| + |z_5|, $$ which is normally viewed as a convex penalty but is also concave according to Definition \ref{def:penalty_prop}. $R_{w\ell_1}$ is not symmetric, and not every $2$-sparse vector can be recovered using this penalty, in particular those {whose first two entries are nonzero}. For instance, if $\ensuremath{\bm{x}} = (1,1,0,0,0)^{\top}$, all solutions to $ \ensuremath{\bm{A}}\ensuremath{\bm{z}} = \ensuremath{\bm{A}}\ensuremath{\bm{x}} $ can be represented as $\ensuremath{\bm{z}} = (1-t,1-t,2t/3,2t/3,3t/4)^{\top}$. Among these, the solution that minimizes $R_{w\ell_1}(\ensuremath{\bm{z}})$ is $(0,0,2/3,2/3,3/4)^{\top}$, which is different from $\ensuremath{\bm{x}}$ and not the sparsest one.
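The claims of this example can be checked numerically. The following sketch (a grid search over the one-parameter solution family, not part of the original argument) confirms that the weighted-$\ell_1$ minimizer sits at $t=1$, i.e., at $(0,0,2/3,2/3,3/4)^{\top}$, while the plain $\ell_1$ minimizer stays at $t=0$, i.e., at $\ensuremath{\bm{x}}$:

```python
import numpy as np

# All solutions of A z = A x have the form z(t) = x + t*v, where v spans ker(A).
A = np.array([[1.0, 0.5, 9/4, 0.0, 0.0],
              [1.0, -0.5, 0.0, 3/4, 0.0],
              [0.0, 1.0, 0.0, 0.0, 4/3],
              [1.0, -1.0, 0.0, 0.0, 0.0]])
x = np.array([1.0, 1.0, 0.0, 0.0, 0.0])
v = np.array([-1.0, -1.0, 2/3, 2/3, 3/4])
assert np.allclose(A @ v, 0)             # v indeed lies in ker(A)

w = np.array([4.0, 3.0, 1.0, 1.0, 1.0])  # weights of R_{w l1}
ts = np.linspace(-2.0, 3.0, 5001)        # grid containing t = 0 and t = 1
Z = x[None, :] + ts[:, None] * v[None, :]
t_w = ts[np.argmin(np.abs(Z) @ w)]           # weighted-l1 minimizer
t_l1 = ts[np.argmin(np.abs(Z).sum(axis=1))]  # plain l1 minimizer
print(t_w, t_l1)
```

Along the family, $R_{w\ell_1}(\ensuremath{\bm{z}}(t)) = 7|1-t| + \tfrac{25}{12}|t|$ is minimized at $t=1$, while $\|\ensuremath{\bm{z}}(t)\|_1 = 2|1-t| + \tfrac{25}{12}|t|$ is minimized at $t=0$, which the grid search reproduces.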
Nevertheless, it should be noted that weighted $\ell_1$ minimization, with appropriate choices of weights, can be a very efficient approach that outperforms $\ell_1$ in the case when a priori knowledge of (the structure of) the support set is available, see, e.g., \cite{FMSY12,YB13,ChkifaDexterTranWebster15,Adcock15b}. \end{example} Finally, \eqref{iNSP:sep} is truly less demanding than \eqref{NSP:l1} if, in addition, the penalty function is symmetric. The following result verifies that separable, concave, and symmetric regularizations are superior to $\ell_1$ minimization in uniform sparse recovery. } \begin{proposition} Let $N>1,\, s\in \mathbb{N}$ with $1 \le s < N/2 $. Assume $R$ is a function from $\mathbb{R}^N$ to $[0,\infty)$ satisfying $R(z_1,\ldots,z_N)$ $= R(|z_1|,\ldots,|z_N|),\ \forall \bm{z} = (z_1,\ldots,z_N)\in\mathbb{R}^N$, separable and symmetric on $\mathbb{R}^N$, and concave on $\ensuremath{\mathcal{U}}$. Assume also that $R( \mathbf{0}) = 0$ and $R(\ensuremath{\bm{z}}) > 0,\, \forall \ensuremath{\bm{z}} \ne \mathbf{0}$. Then \begin{align*} & \bigg\{ \ensuremath{\bm{v}} \in \mathbb{R}^N: \|\ensuremath{\bm{v}}_S\|_1 < \|\ensuremath{\bm{v}}_{\overline{S}}\|_1,\, \forall S\subset \!\{1,\ldots,N\}\mbox{ with }\#(S)\le s\bigg\} \subset \\ & \qquad \qquad \bigg\{ \ensuremath{\bm{v}} \in \mathbb{R}^N: \, R(\ensuremath{\bm{v}}_S) < R(\ensuremath{\bm{v}}_{\overline{S}}),\, \forall S\subset \!\{1,\ldots,N\}\mbox{ with }\#(S)\le s\bigg\}. \end{align*} Consequently, \eqref{iNSP:sep} is less demanding than \eqref{NSP:l1}. \end{proposition} \begin{proof} Let $\ensuremath{\bm{v}} \in \mathbb{R}^N$ satisfy $\|\ensuremath{\bm{v}}_S\|_1 < \|\ensuremath{\bm{v}}_{\overline{S}}\|_1$ for all $S\subset \{1,\ldots,N\}$ with $\#(S)\le s$.
We denote by $S^\star$ the set of indices of the $s$ largest components of $\ensuremath{\bm{v}}$ (in magnitude); then $\|\ensuremath{\bm{v}}_{S^{\star}}\|_1 < \|\ensuremath{\bm{v}}_{\overline{S^{\star}}}\|_1$. Since $R$ is concave on $\ensuremath{\mathcal{U}}$, by Lemma \ref{lem:incr_prop}, $R$ is increasing on $\ensuremath{\mathcal{U}}$. It is enough to prove that $R(\ensuremath{\bm{v}}_{S^{\star}}) < R(\ensuremath{\bm{v}}_{\overline{S^{\star}}})$. Let $\alpha = \|\ensuremath{\bm{v}}_{\overline{S^{\star}}}\|_1 - \|\ensuremath{\bm{v}}_{S^{\star}}\|_1 > 0$ and let $j$ be an index in $\overline{S^{\star}}$; we define $ \tilde{\ensuremath{\bm{v}}} = \ensuremath{\bm{v}}_{{S^{\star}}} + \alpha \ensuremath{\bm{e}}_j. $ Since $R$ is separable and $R(\ensuremath{\bm{z}}) > 0,\, \forall \ensuremath{\bm{z}} \ne \mathbf{0}$, one has \begin{align} R(\tilde{\ensuremath{\bm{v}}}) = R( \ensuremath{\bm{v}}_{{S^{\star}}} ) + R( \alpha \ensuremath{\bm{e}}_j ) > R( \ensuremath{\bm{v}}_{{S^{\star}}} ) . \label{sec:sepa:est2} \end{align} On the other hand, $\|\tilde{\ensuremath{\bm{v}}}\|_1 = \|{\ensuremath{\bm{v}}_{\overline{S^{\star}}}}\|_1$ and, by the definition of $S^{\star}$, any nonzero component of $\tilde{\ensuremath{\bm{v}}}$ (with the possible exception of the $j$-th one) is at least as large as any component of $\ensuremath{\bm{v}}_{\overline{S^{\star}}}$. This yields $\tilde{\ensuremath{\bm{v}}} \succ {\ensuremath{\bm{v}}_{\overline{S^{\star}}}}$. Applying Lemma \ref{lemma:majorize}, it follows that $R(\tilde{\ensuremath{\bm{v}}}) \le R({\ensuremath{\bm{v}}_{\overline{S^{\star}}}})$. Combining this with \eqref{sec:sepa:est2} concludes the proof. \end{proof} \section{Concluding remarks} \label{sec:conclusion} In this work, we establish theoretical, generalized sufficient conditions for the uniform recovery of sparse signals via concave, non-separable and symmetric regularizations.
These conditions are proved to be less restrictive than the standard null space property for $\ell_1$ minimization, thus verifying that concave, non-separable and symmetric penalties are better than, or at least as good as, $\ell_1$ in enhancing the sparsity of the solutions. Our work unifies and improves existing NSP-based conditions developed for several specific penalties, and also provides the first theoretical recovery guarantees in some cases. Extending the present results to more practical scenarios that allow measurement errors and compressible (i.e., only approximately sparse) signals is the next logical step. In particular, an important open question is: are concave and symmetric regularizations still provably better than $\ell_1$ in uniform recovery when noise and sparsity defect are taken into account? Also, the general sufficient conditions for non-separable penalties, established herein from that of $\ell_1$, may be suboptimal for specific penalties. It would be interesting to investigate specialized and optimized conditions in such cases. Finally, while the advantage of nonconvex minimizations over $\ell_1$ in terms of the null space property is clear, how this advantage translates into sample complexity remains unclear to us and is a topic for future work. \end{document}
\begin{document} \title{\large{\textbf{ON THE ASYMPTOTIC EXPANSION OF THE \\ SUM OF THE FIRST $n$ PRIMES}}} { \begin{flushleft} \textit{Dedicated to Dr. Madan Mohan Singh, my first mathematics teacher.} \end{flushleft} } \begin{abstract} \small{An asymptotic formula for the sum of the first $n$ primes is derived. This result improves the previous results of P. Dusart. Using this asymptotic expansion, we prove the conjectures of R. Mandl and G. Robin on the upper and the lower bound of the sum of the first $n$ primes, respectively.} \end{abstract} \section{Introduction} Let $p_n$ denote the $n^{th}$ prime \footnote{ 2000 \textit{Mathematics Subject Classification.} 11A41.} \footnote{ \textit{Key words and phrases.} Primes, Inequalities.}. Robert Mandl conjectured that \begin{equation} \sum_{r \le n}p_r < \frac{np_n}{2}. \label{mandl} \end{equation} This conjecture was proven by Rosser and Schoenfeld in \cite{rs} and is now referred to as Mandl's inequality. An alternate version of the proof was given by Dusart in \cite{pd}. In the same paper, Dusart also showed that \begin{equation} \sum_{r \le n}p_r = \frac{n^2}{2}(\ln n + \ln\ln n - 3/2 + o(1)). \label{dusart1} \end{equation} Currently, the best refinement of Mandl's inequality is due to Hassani, who showed (see \cite{mh}) that for $n \ge 10$, \begin{equation} \sum_{r \le n}p_r < \frac{np_n}{2} - \frac{n^2}{14}. \label{hasanni} \end{equation} With regard to the lower bound, G. Robin conjectured that \begin{equation} n p_{[n/2]} < \sum_{r <n}p_r. \label{robin} \end{equation} This conjecture was also proved by Dusart in \cite{pd}. However, neither \ref{hasanni} nor \ref{robin} gives the exact growth rate of $\sum_{r \le n}p_r$. In this paper, we derive an asymptotic formula for $\sum_{r \le n}p_r$. Both Hassani's improvement of Mandl's inequality and Robin's conjecture follow as corollaries of our asymptotic formula. \section{Asymptotic expansion of $\sum_{r \le n}p_r$} \begin{theorem} (\textbf{M.
Cipolla}) \label{cipolla1} There exists a sequence $(P_m)_{m \ge 1}$ of polynomials with rational coefficients such that, for any integer $m$, \begin{displaymath} p_n = n \Bigg[\ln n + \ln\ln n - 1 + \sum_{r=1}^{m}\frac{(-1)^rP_r (\ln\ln n)}{\ln^r n} + o\Bigg(\frac{1}{\ln^m n}\Bigg)\Bigg]. \end{displaymath} \end{theorem} This was proved by M. Cipolla in a beautiful paper (see \cite{mc}) in 1902. In the same paper, Cipolla gives a recurrence formula for $P_m$ and shows that every $P_m$ has degree $m$ and leading coefficient $\frac{1}{m}$. In particular, \begin{equation} P_1 (x) = x-2, \quad P_2 (x) = \frac{1}{2}(x^2 - 6x + 11). \label{cipolla2} \end{equation} \begin{lemma} \label{lemma2.2} If $f$ is monotonic and continuous on $[1,n]$, then \begin{displaymath} \sum_{r\le n}f(r)= \int_{1}^{n} f(x)dx + O(|f(n)|+|f(1)|). \end{displaymath} \end{lemma} \begin{proof} Well known; see \cite{ik}, 1.62--1.67, pages 19--20. \qedhere\ \end{proof} \begin{theorem} There exists a sequence $(S_m)_{m \ge 1}$ of polynomials with rational coefficients such that, for any integer $m$, \begin{displaymath} \sum_{r \le n}p_r = \frac{n^2}{2} \Bigg[\ln n + \ln\ln n - \frac{3}{2} + \sum_{r=1}^{m}\frac{(-1)^{r+1}S_r (\ln\ln n)}{r\ln^r n} + o\Bigg(\frac{1}{\ln^m n}\Bigg)\Bigg]. \end{displaymath} Further, every $S_m$ has degree $m$ and leading coefficient $1/m$. In particular, \\ \begin{displaymath} S_1 (x) = x-\frac{5}{2}, \quad S_2 (x) = x^2 - 7x + \frac{29}{2}. \end{displaymath} \end{theorem} \begin{proof} We define $p(x)$ as \begin{equation} p(x) = x\ln x + x\ln\ln x - x + \sum_{r=1}^{m}\frac{(-1)^rxP_r (\ln\ln x)}{\ln^r x}, \label{theorem2.3a} \end{equation} where $P_r(x)$ is the same sequence of polynomials as in Theorem \ref{cipolla1}. It follows from Lemma \ref{lemma2.2} that \begin{displaymath} \sum_{r \le n}p_r = 2 + 3 + \int_{3}^{n}p(x)dx + O(p_n) + o\Bigg(\int_{3}^{n}\frac{x}{\ln^m x}dx\Bigg).
\end{displaymath} Now $p_n \sim n \ln n$, whereas \begin{displaymath} \int_{3}^{n}\frac{x}{\ln^m x}dx \sim \frac{n^2}{2\ln^m n}, \end{displaymath} which grows much faster than $n \ln n$. Hence \begin{equation} \sum_{r \le n}p_r = \int_{3}^{n}p(x)dx + o\Bigg(\frac{n^2}{\ln^m n}\Bigg). \label{theorem2.3b} \end{equation} All we need to do is to integrate each term of \ref{theorem2.3a}. Except for a couple of simple terms, the integration of the terms of \ref{theorem2.3a} will result in an infinite series and, due to \ref{theorem2.3b}, we can stop the series when the growth rate of a new term is equal to or slower than the error term in \ref{theorem2.3b}. Since $P_m(\ln\ln x)$ is a polynomial of degree $m$ with rational coefficients and leading coefficient $1/m$, the integration of each term of $p(x)$ will result in an infinite series of the type \begin{displaymath} \int_{3}^{n}\frac{(-1)^m xP_m(\ln\ln x)}{\ln^m x}dx = \frac{(-1)^m n^2}{2}\sum_{i=m}^{\infty}\frac{Q_{m,i}(\ln\ln n)}{\ln^{i} n} + O(1), \end{displaymath} where $Q_{m,i}(x)$ is a polynomial of degree $m$ with rational coefficients and leading coefficient $1/m$. Thus the polynomial $S_m(x)$ is of degree $m$ and has rational coefficients with leading coefficient $1/m$. \\ To find the first two polynomials $S_1(x)$ and $S_2(x)$, we integrate the first four terms of $p(x)$. The first four terms of $p(x)$ are \begin{equation} x\ln x + x\ln\ln x - x + \frac{x\ln\ln x - 2x}{\ln x} - \frac{x\ln^2 \ln x -6x\ln\ln x + 11x}{2\ln^2 x}.
\label{theorem2.3c} \end{equation} Integrating each term separately, we have \begin{equation} \int_{3}^{n} x \ln x dx = \frac{n^2 \ln n}{2} - \frac{n^2}{4} + O(1) \label{theorem2.3d} \end{equation} \begin{equation} \int_{3}^{n} x \ln\ln x dx = \frac{n^2 \ln\ln n}{2} - \frac{n^2}{4\ln n} - \frac{n^2}{8\ln^2 n} + O\Bigg(\frac{n^2}{\ln^3 n}\Bigg) \end{equation} \begin{equation} -\int_{3}^{n} x dx = -\frac{n^2}{2} + O(1) \end{equation} \begin{equation} \int_{3}^{n} \frac{x\ln \ln x}{\ln x} dx = \frac{n^2 \ln \ln n}{2 \ln n} + \frac{n^2 \ln \ln n}{4 \ln^2 n} - \frac{n^2}{4 \ln^2 n} + O\Bigg(\frac{n^2 \ln \ln n}{\ln^3 n}\Bigg) \end{equation} \begin{equation} - 2\int_{3}^{n} \frac{x}{\ln x}dx = - \frac{n^2}{\ln n} - \frac{n^2}{2\ln^2 n} + O\Bigg(\frac{n^2}{\ln^3 n}\Bigg) \end{equation} \begin{equation} -\frac{1}{2}\int_{3}^{n} \frac{x\ln^2 \ln x}{\ln^2 x} dx = -\frac{n^2 \ln^2 \ln n}{4 \ln^2 n} + O\Bigg(\frac{n^2 \ln^2 \ln n}{\ln^3 n}\Bigg) \end{equation} \begin{equation} 3 \int_{3}^{n} \frac{x\ln\ln x}{\ln^2 x} dx = \frac{3n^2 \ln\ln n}{2\ln^2 n} + O\Bigg(\frac{n^2 \ln^2 \ln n}{\ln^3 n}\Bigg) \end{equation} \begin{equation} -\frac{11}{2} \int_{3}^{n} \frac{x}{\ln^2 x} dx = -\frac{11n^2}{4\ln^2 n} + O\Bigg(\frac{n^2}{\ln^3 n}\Bigg) \label{theorem2.3e} \end{equation} Adding \ref{theorem2.3d}--\ref{theorem2.3e}, we obtain \begin{displaymath} \sum_{r \le n}p_r = \frac{n^2}{2}\Bigg[\ln n + \ln\ln n - \frac{3}{2} + \frac{\ln\ln n}{\ln n} - \frac{5}{2\ln n} - \frac{\ln^2 \ln n}{2\ln^2 n} \end{displaymath} \begin{equation} + \frac{7 \ln \ln n}{2\ln^2 n} - \frac{29}{4\ln^2 n} + o\Bigg(\frac{1}{\ln^2 n}\Bigg) \Bigg]. \label{theorem2.3f} \end{equation} \\ Notice that taking the first three terms of \ref{theorem2.3f} we recover Dusart's result \ref{dusart1}. This proves the theorem.
\qedhere\ \end{proof} \section{The inequality of Robin} From the asymptotic expansion of $\sum_{r \le n}p_r$ we can not only prove the inequalities of Mandl \ref{mandl} and Robin \ref{robin}, but also refine them. \begin{lemma} \label{lemma3.2} \begin{displaymath} \sum_{r < n}p_r = n p_{[n/2]} + \frac{2\ln 2 - 1}{4}n^2 + O\Bigg(\frac{n^2 \ln\ln n}{\ln n}\Bigg). \end{displaymath} \end{lemma} \begin{proof} Taking $[n/2]$ in place of $n$ in the asymptotic expansion of the $n^{th}$ prime, we obtain \begin{displaymath} n p_{[n/2]} = \frac{n^2}{2}(\ln n + \ln \ln n - 1 - \ln 2) + O\Bigg(\frac{n^2 \ln\ln n}{\ln n}\Bigg) \end{displaymath} \begin{displaymath} = \frac{n p_n}{2} -\frac{n^2 \ln 2}{2} + O\Bigg(\frac{n^2 \ln\ln n}{\ln n}\Bigg). \end{displaymath} Using \ref{theorem2.3f}, we can reduce this to \begin{displaymath} n p_{[n/2]} = \sum_{r <n}p_r + \frac{n^2}{4} -\frac{n^2 \ln 2}{2} + O\Bigg(\frac{n^2 \ln\ln n}{\ln n}\Bigg). \end{displaymath} This proves the lemma. \qedhere\ \end{proof} Since the second term on the right-hand side in Lemma \ref{lemma3.2} is positive, it follows that Robin's conjecture is true for all sufficiently large $n$. \small{e-mail: \texttt{[email protected], [email protected]}} \end{document}
\begin{document} \title{Generalized Drazin invertibility of operator matrices} \begin{abstract} \noindent We study the generalized Drazin invertibility, as well as the Drazin and ordinary invertibility, of an operator matrix $M_C=\left( \begin{array}{cc} A & C \\ 0 & B\end{array} \right)$ acting on a Banach space $\mathcal{X} \oplus \mathcal{Y}$ or on a Hilbert space $\mathcal{H} \oplus \mathcal{K}$. As a consequence, some recent results are extended. \end{abstract} 2010 {\it Mathematics subject classification\/}. 47A10, 47A53. {\it Key words and phrases\/}. Operator matrices, generalized Drazin invertibility, generalized Kato decomposition, point spectrum, defect spectrum. \section{Introduction and Preliminaries} Let $\mathcal{X}$ and $\mathcal{Y}$ be infinite dimensional Banach spaces. The set of all bounded linear operators from $\mathcal{X}$ to $\mathcal{Y}$ will be denoted by $L(\mathcal{X},\mathcal{Y})$. For simplicity, we write $L(\mathcal{X})$ for $L(\mathcal{X}, \mathcal{X})$. The set \[\mathcal{X} \oplus \mathcal{Y}=\{(x,y): x \in \mathcal{X}, y \in \mathcal{Y} \} \] is a vector space with the standard addition and multiplication by scalars. Under the norm \[\|(x,y)\|=(\|x\|^2+\|y\|^2)^{\frac{1}{2}},\] $\mathcal{X} \oplus \mathcal{Y}$ becomes a Banach space. If $\mathcal{X}_1$ and $\mathcal{Y}_1$ are closed subspaces of $\mathcal{X}$ and $\mathcal{Y}$, respectively, then we will sometimes use the notation $\left( \begin{array}{c} \mathcal{X}_1 \\ \mathcal{Y}_1 \end{array} \right)$ instead of $\mathcal{X}_1 \oplus \mathcal{Y}_1$. If $A \in L(\mathcal{X})$, $B \in L(\mathcal{Y})$ and $C \in L(\mathcal{Y}, \mathcal{X})$ are given, then \[\left( \begin{array}{cc} A & C \\ 0 & B\end{array} \right): \mathcal{X} \oplus \mathcal{Y} \to \mathcal{X} \oplus \mathcal{Y} \] represents a bounded linear operator on $\mathcal{X} \oplus \mathcal{Y}$, called an {\em upper triangular operator matrix}.
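In the finite-dimensional case the block structure can be made concrete. The following sketch (hypothetical $2\times 2$ blocks, not from the paper) illustrates an upper triangular operator matrix acting on $\mathcal{X}\oplus\mathcal{Y}$ and checks the standard block inverse formula, so $M_C$ is invertible whenever $A$ and $B$ are, for any choice of $C$:

```python
import numpy as np

# Hypothetical blocks: A and B invertible, C arbitrary.
A = np.array([[2.0, 1.0], [0.0, 3.0]])
B = np.array([[1.0, 4.0], [0.0, 5.0]])
C = np.array([[7.0, -1.0], [2.0, 0.0]])
Z = np.zeros((2, 2))

M = np.block([[A, C], [Z, B]])  # the operator matrix M_C
# Block inverse of an upper triangular matrix with invertible diagonal blocks.
Minv = np.block([[np.linalg.inv(A), -np.linalg.inv(A) @ C @ np.linalg.inv(B)],
                 [Z, np.linalg.inv(B)]])
print(np.allclose(M @ Minv, np.eye(4)), np.allclose(Minv @ M, np.eye(4)))
```

The explicit inverse $\left(\begin{smallmatrix} A^{-1} & -A^{-1}CB^{-1} \\ 0 & B^{-1}\end{smallmatrix}\right)$ is the finite-dimensional shadow of Lemma \ref{invertibility} below.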
If $A$ and $B$ are given then we write $M_C=\left( \begin{array}{cc} A & C \\ 0 & B \end{array} \right)$ in order to emphasize dependence on $C$. Let $\mathbb{N} \, (\mathbb{N}_0)$ denote the set of all positive (non-negative) integers, and let $\mathbb{C}$ denote the set of all complex numbers. Given $T \in L(\mathcal{X}, \mathcal{Y})$, we denote by $N(T)$ and $R(T)$ the {\em kernel} and the {\em range} of $T$. The numbers $\alpha(T)={\rm dim} N(T)$ and $\beta(T)={\rm codim} R(T)$ are {\em nullity} and {\em deficiency} of $T \in L(\mathcal{X}, \mathcal{Y})$ respectively. The set \[\sigma(T)=\{\lambda \in \mathbb{C}: T-\lambda \; \text{is not invertible} \} \] is the {\em spectrum} of $T \in L(\mathcal{X})$. An operator $T \in L(\mathcal{X})$ is {\em bounded below} if there exists some $c>0$ such that $c\|x\|\leq \|Tx\| \; \; \text{for every} \; \; x \in \mathcal{X}$. It is worth mentioning that the set of bounded below operators and the set of left invertible operators coincide in the Hilbert space setting. Also, the set of surjective operators and the set of right invertible operators coincide on a Hilbert space. Recall that $T \in L(\mathcal{X})$ is {\em nilpotent} when $T^n=0$ for some $n \in \mathbb{N}$, while $T \in L(\mathcal{X})$ is {\em quasinilpotent} if $T-\lambda$ is invertible for all complex $\lambda\ne 0$. The {\em ascent} of $T \in L(\mathcal{X})$ is defined as ${\rm asc}(T)=\inf\{n \in \mathbb{N}_0:N(T^n)=N(T^{n+1})\}$, and the {\em descent} of $T$ is defined as ${\rm dsc}(T)=\inf\{n \in \mathbb{N}_0:R(T^n)=R(T^{n+1})\}$, where the infimum over the empty set is taken to be infinity. If $K \subset \mathbb{C}$, ${\rm acc} \, K$ is the set of accumulation points of $K$. If $M$ is a subspace of $\mathcal{X}$ such that $T(M) \subset M$, $T \in L(\mathcal{X})$, it is said that $M$ is {\em $T$-invariant}. We define $T_M:M \to M$ as $T_Mx=Tx, \, x \in M$.
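Ascent and descent can be illustrated in finite dimensions (a hypothetical example, not from the paper): for the $3\times 3$ nilpotent Jordan block $N$, the kernels $N(N^k)$ strictly grow and the ranges $R(N^k)$ strictly shrink until both stabilize at $k=3$, so ${\rm asc}(N)={\rm dsc}(N)=3$:

```python
import numpy as np

# Hypothetical example: 3x3 nilpotent Jordan block (ones on the superdiagonal).
N = np.diag([1.0, 1.0], k=1)
kernel_dims = [3 - np.linalg.matrix_rank(np.linalg.matrix_power(N, k))
               for k in range(5)]
range_dims = [int(np.linalg.matrix_rank(np.linalg.matrix_power(N, k)))
              for k in range(5)]
print(kernel_dims)  # [0, 1, 2, 3, 3]: kernels stabilize at k = 3, so asc(N) = 3
print(range_dims)   # [3, 2, 1, 0, 0]: ranges stabilize at k = 3, so dsc(N) = 3
```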
If $M$ and $N$ are two closed $T$-invariant subspaces of $\mathcal{X}$ such that $\mathcal{X}=M \oplus N$, we say that $T$ is {\em completely reduced} by the pair $(M,N)$, and this is denoted by $(M,N) \in Red(T)$. In this case we write $T=T_M \oplus T_N$ and say that $T$ is a {\em direct sum} of $T_M$ and $T_N$. An operator $T \in L(\mathcal{X})$ is said to be {\em Drazin invertible} if there exist $S \in L(\mathcal{X})$ and some $k \in \mathbb{N}$ such that \[TS=ST, \; \; \; STS=S, \; \; \; T^kST=T^k.\] It is a classical result that $T \in L(\mathcal{X})$ is Drazin invertible if and only if $T=T_1 \oplus T_2$, where $T_1$ is invertible and $T_2$ is nilpotent; see \cite{K, lay}. The {\em Drazin spectrum} of $T$ is defined as \[\sigma_D(T)=\{\lambda \in \mathbb{C}: T-\lambda \; \text{is not Drazin invertible} \} \] and it is compact \cite[Proposition 2.5]{Ber1}. An operator $T \in L(\mathcal{X})$ is {\em left Drazin invertible} if ${\rm asc}(T)<\infty$ and $R(T^{{\rm asc}(T)+1})$ is closed, and $T \in L(\mathcal{X})$ is {\em right Drazin invertible} if ${\rm dsc}(T)<\infty$ and $R(T^{{\rm dsc}(T)})$ is closed. J. Koliha extended the concept of Drazin invertibility \cite{koliha}. An operator $T \in L(\mathcal{X})$ is said to be {\em generalized Drazin invertible} if there exists $S \in L(\mathcal{X})$ such that \[TS=ST, \; \; \; STS=S, \; \; \; TST-T \; \; \text{is quasinilpotent}.\] \noindent According to \cite[Theorem 4.2 and Theorem 7.1]{koliha}, $T \in L(\mathcal{X})$ is generalized Drazin invertible if and only if $0 \not\in {\rm acc} \, \sigma(T)$, and this is exactly when $T=T_1 \oplus T_2$ with $T_1$ invertible and $T_2$ quasinilpotent. Naturally, the set \[\sigma_{gD}(T)=\{\lambda \in \mathbb{C}: T-\lambda \; \text{is not generalized Drazin invertible} \} \] is the {\em generalized Drazin spectrum} of $T$.
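The classical characterization just quoted can be sanity-checked in finite dimensions. The sketch below (hypothetical matrices, not from the paper) builds $T = T_1 \oplus T_2$ with $T_1$ invertible and $T_2$ nilpotent, takes $S = T_1^{-1} \oplus 0$ as the candidate Drazin inverse, and verifies the three defining identities with $k = 2$:

```python
import numpy as np

# Hypothetical blocks: T1 invertible, T2 nilpotent with T2^2 = 0.
T1 = np.array([[2.0, 1.0], [1.0, 1.0]])
T2 = np.array([[0.0, 1.0], [0.0, 0.0]])
Z = np.zeros((2, 2))

T = np.block([[T1, Z], [Z, T2]])
S = np.block([[np.linalg.inv(T1), Z], [Z, Z]])  # candidate Drazin inverse

k = 2  # nilpotency index of T2
Tk = np.linalg.matrix_power(T, k)
print(np.allclose(T @ S, S @ T),      # TS = ST
      np.allclose(S @ T @ S, S),      # STS = S
      np.allclose(Tk @ S @ T, Tk))    # T^k S T = T^k
```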
We also recall the definitions of the following spectra of $T \in L(\mathcal{X})$:\par \noindent the point spectrum: $\sigma_p(T)=\{\lambda \in \mathbb{C}: T-\lambda \; \text{is not injective} \}$, \noindent the defect spectrum: $\sigma_d(T)=\{\lambda \in \mathbb{C}: \overline{R(T-\lambda)} \neq \mathcal{X} \}$, \noindent the approximate point spectrum: $\sigma_{ap}(T)=\{\lambda \in \mathbb{C}: T-\lambda \; \text{is not bounded below} \}$,\par \noindent the surjective spectrum: $\sigma_{su}(T)=\{\lambda \in \mathbb{C}: T-\lambda \; \text{is not surjective} \}$,\par \noindent the left spectrum: $\sigma_l(T)=\{\lambda \in \mathbb{C}: T-\lambda \; \text{is not left invertible} \}$,\par \noindent the right spectrum: $\sigma_r(T)=\{\lambda \in \mathbb{C}: T-\lambda \; \text{is not right invertible} \}$.\par Let $M$ and $L$ be two closed linear subspaces of $\mathcal{X}$ and set \[\delta(M,L)=\sup \{{\rm dist}(u,L): u \in M, \, \|u\|=1\}\] in the case that $M \neq \{0\}$; otherwise we define $\delta(\{0\}, L)=0$ for any subspace $L$. The {\em gap} between $M$ and $L$ is defined by \[\hat{\delta}(M,L)=\max\{\delta(M,L), \delta(L,M)\}.\] It is known \cite[Corollary 10.10]{Mu} that \begin{equation}\label{gp1} \hat{\delta}(M,L)<1\Longrightarrow{\rm dim} M={\rm dim} L. \end{equation} An operator $T \in L(\mathcal{X})$ is {\em semi-regular} if $R(T)$ is closed and one of the following equivalent statements holds:\par \noindent {\rm (i)} $N(T) \subset R(T^m)$ for each $m \in \mathbb{N}$;\par \noindent {\rm (ii)} $N(T^n) \subset R(T)$ for each $n \in \mathbb{N}$.\par \noindent It is clear that left and right invertible operators are semi-regular. V. M\"{u}ller showed (\cite[Corollary 12.4]{Mu}) that if $T \in L(\mathcal{X})$ is semi-regular and $0 \not\in {\rm acc} \, \sigma(T)$, then $T$ is invertible.
In particular, \begin{align} T \; \; \text{is left invertible and} \; \; 0 \not\in {\rm acc} \, \sigma(T) \; \; \Longrightarrow T \; \; \text{is invertible}, \label{leftsemi}\\ T \; \; \text{is right invertible and} \; \; 0 \not\in {\rm acc} \, \sigma(T) \; \; \Longrightarrow T \; \; \text{is invertible}.\label{rightsemi} \end{align} An operator $T \in L(\mathcal{X})$ is said to admit a {\em generalized Kato decomposition}, abbreviated as GKD, if there exists a pair $(M,N) \in Red(T)$ such that $T_M$ is semi-regular and $T_N$ is quasinilpotent. A relevant case is obtained if we assume that $T_N$ is nilpotent. In this case $T$ is said to be of {\em Kato type}. The invertibility, Drazin invertibility and generalized Drazin invertibility of upper triangular operator matrices have been studied by many authors, see for example \cite{Du, Han, ELA, Zhong, maroko, Dragan, Dragan1, Zhang, pseudoBW}. In this article we study primarily the generalized Drazin invertibility but also the Drazin and ordinary invertibility of operator matrices by using the technique that involves the gap theory; see the auxiliary results: \eqref{gp1}, \eqref{leftsemi}, \eqref{rightsemi}, Lemma~\ref{lema1}, Lemma~\ref{lema3}. Let $A \in L(\mathcal{H})$ and $B \in L(\mathcal{K})$, where $\mathcal{H}$ and $\mathcal{K}$ are separable Hilbert spaces, and consider the following conditions:\par \noindent {\rm (i)} $A=A_1 \oplus A_2$, $A_1$ is bounded below and $A_2$ is quasinilpotent; \par \noindent{\rm (ii)} $B=B_1 \oplus B_2$, $B_1$ is surjective and $B_2$ is quasinilpotent; \par \noindent {\rm (iii)} There exists a constant $\delta>0$ such that $\beta(A-\lambda)=\alpha(B-\lambda)$ for every $\lambda \in \mathbb{C}$ such that $0<|\lambda|<\delta$.
\par \noindent Our main result states that if the conditions {\rm (i)}--{\rm (iii)} are satisfied, then there exists $C \in L(\mathcal{K}, \mathcal{H})$ such that $M_C$ is generalized Drazin invertible. The converse is also true under the assumption that $A$ and $B$ admit a GKD. Moreover, we obtain the corresponding results concerning the Drazin invertibility of operator matrices. Further, let $A \in L(\mathcal{X})$ and $B \in L(\mathcal{Y})$, where $\mathcal{X}$ and $\mathcal{Y}$ are Banach spaces. It is also shown that if $0$ is not an accumulation point of the defect spectrum of $A$ or of the point spectrum of $B$, and if the operator matrix $M_C=\left( \begin{array}{cc} A & C \\ 0 & B\end{array} \right)$ is invertible (resp. Drazin invertible, generalized Drazin invertible) for some $C \in L(\mathcal{Y}, \mathcal{X})$, then $A$ and $B$ are both invertible (resp. Drazin invertible, generalized Drazin invertible). What is more, we give several corollaries that improve some recent results. \section{Results} From now on, $\mathcal{X}$ and $\mathcal{Y}$ will denote Banach spaces, while $\mathcal{H}$ and $\mathcal{K}$ will denote separable Hilbert spaces. We begin with several auxiliary lemmas. \begin{lemma} \label{lema1} Let $T \in L(\mathcal{X})$ be semi-regular. Then there exists $\epsilon >0$ such that $\alpha(T-\lambda)$ and $\beta(T-\lambda)$ are constant for $|\lambda|<\epsilon$. \end{lemma} \begin{proof} Let $T \in L(\mathcal{X})$ be semi-regular. The mapping $\lambda \to N(T-\lambda)$ is continuous at the point $0$ in the gap metric; see \cite[Theorem 1.38]{aiena} or \cite[Theorem 12.2]{Mu}. In particular, there exists $\epsilon_1>0$ such that $|\lambda|<\epsilon_1$ implies $\hat{\delta}(N(T), N(T-\lambda))<1$. From \eqref{gp1} we obtain ${\rm dim} N(T)={\rm dim} N(T-\lambda)$, i.e.\ $\alpha(T)=\alpha(T-\lambda)$ for $|\lambda|<\epsilon_1$.
Further, $T^{\prime}$ is also semi-regular, where $T^{\prime}$ is the adjoint operator of $T$ \cite[Theorem 1.19]{aiena}. As above, we conclude that $\alpha(T^{\prime})=\alpha(T^{\prime}-\lambda)$ on an open disc centered at $0$. Since $T-\lambda$ has closed range for sufficiently small $|\lambda|$ \cite[Theorem 1.31]{aiena}, it follows that there exists $\epsilon_2>0$ such that \[\beta(T)=\alpha(T^{\prime})=\alpha(T^{\prime}-\lambda)=\beta(T-\lambda) \; \; \text{for} \; \; |\lambda|<\epsilon_2.\] We put $\epsilon=\min\{\epsilon_1, \epsilon_2\}$, and the lemma follows. \end{proof} \begin{lemma} [\cite{Han}] \label{invertibility} Let $A \in L(\mathcal{X})$ and $B \in L(\mathcal{Y})$. If $A$ and $B$ are both invertible, then $M_C$ is invertible for every $C \in L(\mathcal{Y}, \mathcal{X})$. In addition, if $M_C$ is invertible for some $C \in L(\mathcal{Y}, \mathcal{X})$, then $A$ is invertible if and only if $B$ is invertible. \end{lemma} \begin{lemma} [\cite{joint}] \label{lema3} \noindent {\rm (a)} Let $T \in L(\mathcal{X})$. The following conditions are equivalent:\par \noindent {\rm (i)} There exists $(M,N) \in Red(T)$ such that $T_M$ is bounded below (resp. surjective) and $T_N$ is quasinilpotent;\par \noindent {\rm (ii)} $T$ admits a GKD and $0 \not\in {\rm acc} \; \sigma_{ap}(T)$ (resp. $0 \not\in {\rm acc} \; \sigma_{su}(T)$). \noindent {\rm (b)} Let $T \in L(\mathcal{X})$. The following conditions are equivalent:\par \noindent {\rm (i)} There exists $(M,N) \in Red(T)$ such that $T_M$ is bounded below (resp. surjective) and $T_N$ is nilpotent;\par \noindent {\rm (ii)} $T$ is of Kato type and $0 \not\in {\rm acc} \; \sigma_{ap}(T)$ (resp. $0 \not\in {\rm acc} \; \sigma_{su}(T)$). \end{lemma} We now prove our first main result.
\begin{theorem} \label{GD} Let $A \in L(\mathcal{H})$ and $B \in L(\mathcal{K})$ be given operators on separable Hilbert spaces $\mathcal{H}$ and $\mathcal{K}$, respectively, such that:\par \noindent {\rm (i)} $A=A_1 \oplus A_2$, $A_1$ is bounded below and $A_2$ is quasinilpotent; \par \noindent{\rm (ii)} $B=B_1 \oplus B_2$, $B_1$ is surjective and $B_2$ is quasinilpotent; \par \noindent {\rm (iii)} There exists a constant $\delta>0$ such that $\beta(A-\lambda)=\alpha(B-\lambda)$ for every $\lambda \in \mathbb{C}$ such that $0<|\lambda|<\delta$. \par \noindent Then there exists an operator $C \in L(\mathcal{K}, \mathcal{H})$ such that $M_C$ is generalized Drazin invertible. \end{theorem} \begin{proof} By assumption, there exist closed $A$-invariant subspaces $\mathcal{H}_1$ and $\mathcal{H}_2$ of $\mathcal{H}$ such that $\mathcal{H}_1 \oplus \mathcal{H}_2=\mathcal{H}$, $A_{\mathcal{H}_1}=A_1$ is bounded below and $A_{\mathcal{H}_2}=A_2$ is quasinilpotent. Also, there exist closed $B$-invariant subspaces $\mathcal{K}_1$ and $\mathcal{K}_2$ of $\mathcal{K}$ such that $\mathcal{K}_1 \oplus \mathcal{K}_2=\mathcal{K}$, $B_{\mathcal{K}_1}=B_1$ is surjective and $B_{\mathcal{K}_2}=B_2$ is quasinilpotent. It is clear that $\beta(A-\lambda)=\beta(A_1-\lambda)+\beta(A_2-\lambda)$ and $\alpha(B-\lambda)=\alpha(B_1-\lambda)+\alpha(B_2-\lambda)$ for every $\lambda \in \mathbb{C}$. Since $A_2$ and $B_2$ are quasinilpotent we have \begin{align} \beta(A-\lambda)=\beta(A_1-\lambda) \label{beta}, \\ \alpha(B-\lambda)=\alpha(B_1-\lambda), \label{alpha} \end{align} for every $\lambda \in \mathbb{C} \setminus \{0\}$. Further, according to Lemma~\ref{lema1} there exists $\epsilon >0$ such that \begin{equation}\label{const} \beta(A_1-\lambda) \; \; \text{and} \; \; \alpha(B_1-\lambda) \; \; \text{are constant for} \; \; |\lambda|<\epsilon.
\end{equation} Consider $\lambda_0 \in \mathbb{C}$ such that $0<|\lambda_0|<\min\{\epsilon, \delta\}$, where $\delta$ is the constant from condition (iii). Using \eqref{beta}, \eqref{alpha}, \eqref{const} and condition (iii) we obtain \[\beta(A_1)=\beta(A_1-\lambda_0)=\beta(A-\lambda_0)=\alpha(B-\lambda_0)=\alpha(B_1-\lambda_0)=\alpha(B_1).\] On the other hand, it is easy to see that $\left( \begin{array}{c} \mathcal{H}_1 \\ \mathcal{K}_1 \end{array} \right)$ and $\left( \begin{array}{c} \mathcal{H}_2 \\ \mathcal{K}_2 \end{array} \right)$ are closed subspaces of $\left( \begin{array}{c} \mathcal{H} \\ \mathcal{K} \end{array} \right)$ and that $\left( \begin{array}{c} \mathcal{H}_1 \\ \mathcal{K}_1 \end{array} \right) \oplus \left( \begin{array}{c} \mathcal{H}_2 \\ \mathcal{K}_2 \end{array} \right)=\left( \begin{array}{c} \mathcal{H} \\ \mathcal{K} \end{array} \right)$. Since $\mathcal{H}_1$ and $\mathcal{K}_1$ are separable Hilbert spaces in their own right, it follows from \cite[Theorem 2]{Du} that there exists an operator $C_1 \in L(\mathcal{K}_1, \mathcal{H}_1)$ such that the operator $\left( \begin{array}{cc} A_1 & C_1 \\ 0 & B_1 \end{array} \right): \left( \begin{array}{c} \mathcal{H}_1 \\ \mathcal{K}_1 \end{array} \right) \to \left( \begin{array}{c} \mathcal{H}_1 \\ \mathcal{K}_1 \end{array} \right)$ is invertible.
We define an operator $C \in L(\mathcal{K}, \mathcal{H})$ by \[C=\left( \begin{array}{cc} C_1 & 0 \\ 0 & 0 \end{array} \right): \left( \begin{array}{c} \mathcal{K}_1 \\ \mathcal{K}_2 \end{array} \right) \to \left( \begin{array}{c} \mathcal{H}_1 \\ \mathcal{H}_2 \end{array} \right).\] An easy computation shows that $\left( \begin{array}{c} \mathcal{H}_1 \\ \mathcal{K}_1 \end{array} \right)$ and $\left( \begin{array}{c} \mathcal{H}_2 \\ \mathcal{K}_2 \end{array} \right)$ are invariant for $M_C=\left( \begin{array}{cc} A & C \\ 0 & B \end{array} \right)$ and also \begin{align*} (M_C)_{\mathcal{H}_1 \oplus \mathcal{K}_1}=\left( \begin{array}{cc} A_1 & C_1 \\ 0 & B_1 \end{array} \right), \\ (M_C)_{\mathcal{H}_2 \oplus \mathcal{K}_2}=\left( \begin{array}{cc} A_2 & 0\\ 0 & B_2 \end{array} \right). \end{align*} Since $A_2$ and $B_2$ are quasinilpotent, from $\sigma((M_C)_{\mathcal{H}_2 \oplus \mathcal{K}_2})=\sigma(A_2) \cup \sigma(B_2)=\{0\}$ it follows that $(M_C)_{\mathcal{H}_2 \oplus \mathcal{K}_2}$ is quasinilpotent. Finally, $(M_C)_{\mathcal{H}_1 \oplus \mathcal{K}_1}$ is invertible and thus $M_C$ is generalized Drazin invertible. \end{proof} \noindent Using Theorem~\ref{GD} we obtain \cite[Theorem 2.1]{maroko} in a simpler way. \begin{theorem} [\cite{maroko}] \label{D} Let $A \in L(\mathcal{H})$ and $B \in L(\mathcal{K})$ be given operators on separable Hilbert spaces $\mathcal{H}$ and $\mathcal{K}$, respectively, such that:\par \noindent {\rm (i)} $A$ is left Drazin invertible; \par \noindent{\rm (ii)} $B$ is right Drazin invertible; \par \noindent {\rm (iii)} There exists a constant $\delta>0$ such that $\beta(A-\lambda)=\alpha(B-\lambda)$ for every $\lambda \in \mathbb{C}$ such that $0<|\lambda|<\delta$. \par \noindent Then there exists an operator $C \in L(\mathcal{K}, \mathcal{H})$ such that $M_C$ is Drazin invertible.
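The decomposition used in the proof can be illustrated by a toy finite-dimensional instance (the matrices below are our own choice; in finite dimensions a bounded below square matrix is already invertible, so $C_1=0$ suffices). The operator $M_C$ splits into an invertible part and a nilpotent part, so $0$ is at most an isolated point of its spectrum:

```python
import numpy as np

# A = A1 (+) A2 with A1 invertible and A2 nilpotent (hence quasinilpotent);
# similarly B = B1 (+) B2.
A1 = np.array([[2.0, 0.0], [0.0, 3.0]])
A2 = np.array([[0.0, 1.0], [0.0, 0.0]])   # A2^2 = 0
B1 = np.array([[1.0, 1.0], [0.0, 2.0]])
B2 = np.zeros((2, 2))

Z = np.zeros((2, 2))
A = np.block([[A1, Z], [Z, A2]])
B = np.block([[B1, Z], [Z, B2]])

# Since A1 and B1 are invertible, C1 = 0 already makes the corner
# [[A1, C1], [0, B1]] invertible; we take C = C1 (+) 0 = 0.
C = np.zeros((4, 4))
M_C = np.block([[A, C], [np.zeros((4, 4)), B]])

# Every eigenvalue of M_C is either 0 (from the nilpotent corner)
# or bounded away from 0, i.e. 0 is an isolated spectral point.
eig = np.linalg.eigvals(M_C)
assert np.any(np.abs(eig) < 1e-9)
assert np.all(np.abs(eig[np.abs(eig) > 1e-9]) > 0.5)
```

The interesting content of Theorem~\ref{GD} is of course infinite-dimensional, where condition (iii) and \cite[Theorem 2]{Du} are needed to produce a nonzero $C_1$.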
\end{theorem} \begin{proof} By \cite[Theorem 3.12]{Ber0} there exist pairs $(\mathcal{H}_1, \mathcal{H}_2) \in Red(A)$ and $(\mathcal{K}_1, \mathcal{K}_2) \in Red(B)$ such that $A_{\mathcal{H}_1}=A_1$ is bounded below, $B_{\mathcal{K}_1}=B_1$ is surjective, and $A_{\mathcal{H}_2}=A_2$ and $B_{\mathcal{K}_2}=B_2$ are nilpotent. From the proof of Theorem~\ref{GD} we conclude that there exists $C \in L(\mathcal{K}, \mathcal{H})$ such that \begin{align*} M_C=(M_C)_{\mathcal{H}_1 \oplus \mathcal{K}_1} \oplus (M_C)_{\mathcal{H}_2 \oplus \mathcal{K}_2}, \\ (M_C)_{\mathcal{H}_1 \oplus \mathcal{K}_1} \; \text{is invertible},\\ (M_C)_{\mathcal{H}_2 \oplus \mathcal{K}_2}=\left( \begin{array}{cc} A_2 & 0\\ 0 & B_2 \end{array} \right). \end{align*} For sufficiently large $n \in \mathbb{N}$ we have \[\left( \begin{array}{cc} A_2 & 0\\ 0 & B_2 \end{array} \right)^n=\left( \begin{array}{cc} (A_2)^n & 0\\ 0 & (B_2)^n \end{array} \right)=\left( \begin{array}{cc} 0 & 0\\ 0 & 0 \end{array} \right). \] This means that $(M_C)_{\mathcal{H}_2 \oplus \mathcal{K}_2}$ is nilpotent, and the proof is complete. \end{proof} Under additional assumptions, the converse implications of Theorem~\ref{GD} and Theorem~\ref{D} also hold, even in the context of Banach spaces. \begin{theorem} \label{converse1} Let $A \in L(\mathcal{X})$ and $B \in L(\mathcal{Y})$ admit a GKD. If there exists some $C \in L(\mathcal{Y}, \mathcal{X})$ such that $M_C$ is generalized Drazin invertible, then the following hold: \noindent {\rm (i)} $A=A_1 \oplus A_2$, $A_1$ is bounded below and $A_2$ is quasinilpotent; \par \noindent{\rm (ii)} $B=B_1 \oplus B_2$, $B_1$ is surjective and $B_2$ is quasinilpotent; \par \noindent {\rm (iii)} There exists a constant $\delta>0$ such that $\beta(A-\lambda)=\alpha(B-\lambda)$ for every $\lambda \in \mathbb{C}$ such that $0<|\lambda|<\delta$.
\par \end{theorem} \begin{proof} Let $M_C$ be generalized Drazin invertible for some $C \in L(\mathcal{Y}, \mathcal{X})$. Then there exists $\delta >0$ such that $M_C-\lambda$ is invertible for $0<|\lambda|<\delta$. According to \cite[Theorem 2]{Han} we have $0 \notin {\rm acc} \; \sigma_l(A)$, $0 \notin {\rm acc} \; \sigma_r(B)$, and statement {\rm (iii)} is satisfied. Consequently, $0 \notin {\rm acc} \; \sigma_{ap}(A)$ and $0 \notin {\rm acc} \; \sigma_{su}(B)$. By Lemma~\ref{lema3} we obtain that statements {\rm (i)} and {\rm (ii)} are also satisfied. \end{proof} \begin{theorem} \label{converse2} Let $A \in L(\mathcal{X})$ be of Kato type and let $B \in L(\mathcal{Y})$ admit a GKD. If there exists some $C \in L(\mathcal{Y}, \mathcal{X})$ such that $M_C$ is Drazin invertible, then the following hold:\par \noindent {\rm (i)} $A$ is left Drazin invertible; \par \noindent{\rm (ii)} $B$ is right Drazin invertible; \par \noindent {\rm (iii)} There exists a constant $\delta>0$ such that $\beta(A-\lambda)=\alpha(B-\lambda)$ for every $\lambda \in \mathbb{C}$ such that $0<|\lambda|<\delta$. \par \end{theorem} \begin{proof} Applying the same argument as in the proof of Theorem~\ref{converse1}, we obtain that statement {\rm (iii)} holds and $0 \notin {\rm acc} \; \sigma_{ap}(A) \cup {\rm acc} \; \sigma_{su}(B)$. Now Lemma~\ref{lema3} implies that there exist $(\mathcal{X}_1, \mathcal{X}_2) \in Red(A)$ and $(\mathcal{Y}_1, \mathcal{Y}_2) \in Red(B)$ such that $A_1$ is bounded below, $A_2$ is nilpotent, $B_1$ is surjective and $B_2$ is quasinilpotent. Since ${\rm dsc}(B)<\infty$ \cite[Lemma 2.6]{Zhong}, we have ${\rm dsc}(B_2)<\infty$, so $B_2$ is a quasinilpotent operator with finite descent. We conclude that $B_2$ is nilpotent by \cite[Corollary 10.6, p. 332]{TL}. Let $n \geq d$, where $d \in \mathbb{N}$ is such that $(A_2)^d=0$ and $(A_2)^{d-1} \neq 0$.
We have \begin{align*} N(A^n)=N((A_1)^n) \oplus N((A_2)^n)=\mathcal{X}_2, \\ N(A^{d-1})=N((A_1)^{d-1}) \oplus N((A_2)^{d-1})=N((A_2)^{d-1}) \subsetneq \mathcal{X}_2. \end{align*} It follows that ${\rm asc}(A)=d<\infty$. From $R(A^n)=R((A_1)^n) \oplus R((A_2)^n)=R((A_1)^n)$ we conclude that $R(A^n)$ is closed, and therefore $A$ is left Drazin invertible. In a similar way we prove that $B$ is right Drazin invertible. \end{proof} \noindent An operator $T \in L(\mathcal{X})$ is {\em semi-Fredholm} if $R(T)$ is closed and $\alpha(T)$ or $\beta(T)$ is finite. The class of semi-Fredholm operators is contained in the class of Kato type operators \cite[Theorem 16.21]{Mu}. In view of this observation, Theorem~\ref{converse2} is an extension of \cite[Corollary 2.3]{maroko}. For $\delta > 0$, set $\mathbb{D}(0, \delta)=\{\lambda \in \mathbb{C}: |\lambda|<\delta\}$. The following theorem is our second main result. \begin{theorem} \label{KolihaDrazin} Let $A \in L(\mathcal{X})$ and $B \in L(\mathcal{Y})$ be given operators such that $0 \notin {\rm acc} \, \sigma_d(A)$ or $0 \notin {\rm acc} \, \sigma_p(B)$. \noindent {\rm (i)} If there exists some $C \in L(\mathcal{Y}, \mathcal{X})$ such that $M_C$ is generalized Drazin invertible, then $A$ and $B$ are both generalized Drazin invertible.\par \noindent {\rm (ii)} If there exists some $C \in L(\mathcal{Y}, \mathcal{X})$ such that $M_C$ is Drazin invertible, then $A$ and $B$ are both Drazin invertible.\par \noindent {\rm (iii)} If there exists some $C \in L(\mathcal{Y}, \mathcal{X})$ such that $M_C$ is invertible, then $A$ and $B$ are both invertible. \end{theorem} \begin{proof} {\rm (i)}. Suppose that $0 \notin {\rm acc} \, \sigma_d(A)$ and that $M_C$ is generalized Drazin invertible for some $C \in L(\mathcal{Y}, \mathcal{X})$.
Since $0 \notin {\rm acc} \, \sigma(M_C)$, there exists $\delta > 0$ such that \begin{align} M_C-\lambda \; \text{is invertible}, \label{eq1}\\ \overline{R(A-\lambda)}=\mathcal{X}, \label{eq2} \end{align} for every $\lambda \in \mathbb{D}(0, \delta)\setminus \{0\}$. From \cite[Theorem 2]{Han} it follows that $A-\lambda$ is left invertible for every $\lambda \in \mathbb{D}(0, \delta)\setminus \{0\}$. Since $A-\lambda$ has closed range for every $\lambda \in \mathbb{D}(0, \delta)\setminus \{0\}$, from~\eqref{eq2} we conclude that $R(A-\lambda)=\overline{R(A-\lambda)}=\mathcal{X}$ for $\lambda \in \mathbb{D}(0, \delta)\setminus \{0\}$, i.e. $A-\lambda$ is surjective for $\lambda \in \mathbb{D}(0, \delta)\setminus \{0\}$. Moreover, $A-\lambda$ is injective for $\lambda \in \mathbb{D}(0, \delta)\setminus \{0\}$, hence $A-\lambda$ is invertible for $\lambda \in \mathbb{D}(0, \delta)\setminus \{0\}$. This means that $0 \notin {\rm acc} \, \sigma(A)$, so $A$ is generalized Drazin invertible. Now, \eqref{eq1} and Lemma~\ref{invertibility} imply that $B$ is generalized Drazin invertible. Assume that $0 \notin {\rm acc} \, \sigma_p(B)$ and that $M_C$ is generalized Drazin invertible for some $C \in L(\mathcal{Y}, \mathcal{X})$. There exists $\delta>0$ such that~\eqref{eq1} holds and \begin{equation}\label{eq3} B-\lambda \; \text{is injective for} \; \lambda \in \mathbb{D}(0, \delta)\setminus \{0\}. \end{equation} $B$ is generalized Drazin invertible by~\eqref{eq1},~\eqref{eq3} and \cite[Theorem 2]{Han}. We apply Lemma~\ref{invertibility} again and obtain that $A$ is generalized Drazin invertible.\par \noindent{\rm (ii)}. If there exists $C \in L(\mathcal{Y},\mathcal{X})$ such that $M_C$ is Drazin invertible, then $M_C$ is also generalized Drazin invertible. According to part (i), $B$ is generalized Drazin invertible, i.e.
$0 \notin {\rm acc} \, \sigma(B)$. Further, ${\rm dsc}(B)<\infty$ by \cite[Lemma 2.6]{Zhong}, so from \cite[Corollary 1.6]{dsc} it follows that $B$ is Drazin invertible. We apply \cite[Lemma 2.7]{Zhong} and obtain that $A$ is also Drazin invertible. \par \noindent{\rm (iii)}. $A$ is left invertible and $B$ is right invertible by \cite[Theorem 2]{Han}. On the other hand, part (i) now gives $0 \notin {\rm acc} \, \sigma(A)$ and $0 \notin {\rm acc} \, \sigma(B)$. The result follows from \eqref{leftsemi} and \eqref{rightsemi}. \end{proof} \noindent Part {\rm (ii)} of Theorem~\ref{KolihaDrazin} is also an extension of \cite[Corollary 2.3]{maroko}. The following result is an immediate consequence of Theorem~\ref{KolihaDrazin}. The second inclusion of Corollary~\ref{inkluzije} is proved in \cite[Corollary 4]{Han}, \cite[Theorem 5.1]{Dragan} and \cite[Lemma 2.3]{Dragan1}. \begin{corollary} \label{inkluzije} Let $A \in L(\mathcal{X})$ and $B \in L(\mathcal{Y})$. If $\sigma_{\ast} \in \{\sigma, \sigma_{D}, \sigma_{gD}\}$, then we have \[(\sigma_{\ast}(A) \cup \sigma_{\ast}(B))\setminus ({\rm acc} \, \sigma_d(A) \cap {\rm acc} \, \sigma_p(B)) \subset \sigma_{\ast}(M_C) \subset \sigma_{\ast}(A) \cup \sigma_{\ast}(B) \] for every $C \in L(\mathcal{Y}, \mathcal{X})$. \end{corollary} \begin{remark} \label{remark} {\em Let $T \in L(\mathcal{X})$. The inclusions ${\rm acc} \, \sigma_p(T) \subset {\rm acc} \, \sigma_l(T) \subset {\rm acc} \, \sigma(T) \subset \sigma(T)$ and ${\rm acc} \, \sigma_d(T) \subset {\rm acc} \, \sigma_r(T) \subset {\rm acc} \, \sigma(T) \subset \sigma(T)$ are clear. According to \cite[Theorem 12(iii)]{Bo} and the fact that the Drazin spectrum is compact, we have ${\rm acc} \, \sigma_p(T) \subset {\rm acc} \, \sigma(T)={\rm acc} \, \sigma_D(T) \subset \sigma_D(T)$ and also ${\rm acc} \, \sigma_d(T) \subset {\rm acc} \, \sigma(T)={\rm acc} \, \sigma_D(T) \subset \sigma_D(T)$.
From this consideration we obtain the following statements for every $A \in L(\mathcal{X})$ and $B \in L(\mathcal{Y})$: \noindent {\rm (i)} ${\rm acc} \, \sigma_d(A) \cap {\rm acc} \, \sigma_p(B) \subset \sigma(A) \cap \sigma(B)$;\par \noindent {\rm (ii)} ${\rm acc} \, \sigma_d(A) \cap {\rm acc} \, \sigma_p(B) \subset \sigma_D(A) \cap \sigma_D(B)$;\par \noindent {\rm (iii)} ${\rm acc} \, \sigma_d(A) \cap {\rm acc} \, \sigma_p(B) \subset {\rm acc} \, \sigma_r(A) \cap {\rm acc} \, \sigma_l(B)\subset \sigma_r(A) \cap \sigma_l(B)$. } \end{remark} \noindent Remark~\ref{remark} shows that Corollary~\ref{inkluzije} improves \cite[Corollary 4]{Han}, \cite[Corollary 3.17]{pseudoBW} and equation (1) in \cite{Zhong}. Another extension of equation (1) from \cite{Zhong} may be found in \cite[Proposition 2.2]{maroko}. We also generalize \cite[Corollary 2.6]{maroko}. \begin{corollary} \label{cormar} Let $A \in L(\mathcal{X})$, $B \in L(\mathcal{Y})$ and let $\sigma_{\ast} \in \{\sigma, \sigma_{D}, \sigma_{gD}\}$. If ${\rm acc} \, \sigma_d(A) \cap {\rm acc} \, \sigma_p(B)=\emptyset$, then \begin{equation} \label{formula} \sigma_{\ast}(M_C)=\sigma_{\ast}(A) \cup \sigma_{\ast}(B) \; \; \text{for every} \; \; C \in L(\mathcal{Y}, \mathcal{X}). \end{equation} In particular, if $\sigma_r(A) \cap \sigma_l(B)=\emptyset$, then \eqref{formula} is satisfied. \end{corollary} \begin{proof} Apply Corollary~\ref{inkluzije} and {\rm (iii)} of Remark~\ref{remark}. \end{proof} \begin{example} {\em Let $\mathcal{X}=\mathcal{Y}=\ell^2(\mathbb{N})$ and let $U$ be the forward unilateral shift operator on $\ell^2(\mathbb{N})$.
It is known that \[\sigma(U)=\sigma_D(U)=\sigma_{gD}(U)=\{\lambda \in \mathbb{C}:|\lambda| \leq 1\}.\] Since $\sigma_p(U)=\emptyset$, from Corollary~\ref{cormar} we conclude that \[\sigma_{\ast}\left( \left(\begin{array}{cc} A & C \\ 0 & U\end{array} \right) \right)=\sigma_{\ast}(A) \cup \sigma_{\ast}(U),\] where $\sigma_{\ast} \in \{\sigma, \sigma_D, \sigma_{gD} \}$ and $A, C \in L(\ell^2(\mathbb{N}))$ are arbitrary operators. } \end{example} \noindent We finish with a slight extension of \cite[Corollary 7]{Han}, \cite[Theorem 2.9]{Zhong} and \cite[Theorem 3.18]{pseudoBW}. \begin{theorem} \label{holest} Let $A \in L(\mathcal{X})$ and $B \in L(\mathcal{Y})$. If $\sigma_{\ast} \in \{\sigma, \sigma_{D}, \sigma_{gD}\}$, then for every $C \in L(\mathcal{Y}, \mathcal{X})$ we have \begin{equation} \label{holes} \sigma_{\ast}(A) \cup \sigma_{\ast}(B)=\sigma_{\ast}(M_C) \cup W, \end{equation} where $W$ is the union of certain holes in $\sigma_{\ast}(M_C)$, which happen to be subsets of ${\rm acc} \, \sigma_d(A) \cap {\rm acc} \, \sigma_p(B)$. \end{theorem} \begin{proof} Expression \eqref{holes} is the main result of \cite{Han}, \cite{Zhong} and \cite{Zhang}. Corollary~\ref{inkluzije} shows that the filling of holes in $\sigma_{\ast}(M_C)$ can only occur inside ${\rm acc} \, \sigma_d(A) \cap {\rm acc} \, \sigma_p(B)$. \end{proof} \noindent {\bf Acknowledgements.} The author is supported by the Ministry of Education, Science and Technological Development, Republic of Serbia, grant no. 174007. \noindent \author{Milo\v s D. Cvetkovi\'c} \noindent University of Ni\v s\\ Faculty of Sciences and Mathematics\\ P.O. Box 224, 18000 Ni\v s, Serbia \noindent {\it E-mail}: {\tt [email protected]} \end{document}
\begin{document} \title{Approximate quantum error correction for correlated noise} \author{ Avraham Ben-Aroya\thanks{Department of Computer Science, Tel-Aviv University, Tel-Aviv 69978, Israel. Supported by the Adams Fellowship Program of the Israel Academy of Sciences and Humanities, by the European Commission under the Integrated Project QAP funded by the IST directorate as Contract Number 015848 and by USA Israel BSF grant 2004390. Email: [email protected].} \and Amnon Ta-Shma\thanks{Department of Computer Science, Tel-Aviv University, Tel-Aviv 69978, Israel. Supported by the European Commission under the Integrated Project QAP funded by the IST directorate as Contract Number 015848, by Israel Science Foundation grant 217/05 and by USA Israel BSF grant 2004390. Email: [email protected]. }} \date{} \maketitle \null\vspace*{-35pt} \vspace*{-7pt} \begin{abstract} Most of the research done on quantum error correction studies an error model in which each qubit is affected by noise, independently of the other qubits. In this paper we study a different noise model -- one in which the noise may be correlated with the qubits it acts upon. We show both positive and negative results. On the one hand, we show that controlled-X errors cannot be \emph{perfectly} corrected, yet can be \emph{approximately} corrected with sub-constant approximation error. On the other hand, we show that no non-trivial quantum error correcting code can approximately correct controlled phase errors with sub-constant approximation error. \end{abstract} \section{Introduction} One of the reasons for studying quantum error-correcting codes (QECCs) is that they serve as building blocks for fault-tolerant computation, and so might one day be central components in an actual implementation of a quantum computer.
Much work has been done trying to determine the threshold error, beyond which independent noise\footnote{The independent noise model is a model in which each qubit is affected by noise, with some probability, independently of the other qubits.} can be dealt with by fault-tolerant mechanisms (see the Ph.D. theses~\cite{R06,A07} and references therein). A few years ago there was some debate whether the independent noise model is indeed a realistic noise model for quantum computation or not (see, e.g.,~\cite{ALZ06}). This question should probably be answered by physicists, and the answer most likely depends on the actual realization chosen. Yet, while the physicists try to build actual machines, and the theorists try to deal with higher independent noise, it also makes sense to try to extend the qualitative types of errors that can be dealt with. The results in this paper are both optimistic and pessimistic. On the one hand, we show there are noise models that can be approximately corrected but not perfectly corrected; on the other hand, there is a simple correlated noise model that cannot even be approximately corrected. It might be interesting to reach a better understanding of what can be approximately corrected. Also, it might be interesting to come up with other relaxations of quantum error correction that deal better with correlated noise. \subsection{Stochastic vs. Adversarial noise} The basic problem we deal with is that of encoding a message such that it can be recovered after being transmitted over a noisy channel. Classically, there are two natural error models: Shannon's independent noise model and Hamming's adversarial noise model. For example, a typical noise model that is dealt with in Shannon's theory is one where each bit of the transmitted message is flipped with independent probability $p$, whereas a typical noise model in Hamming's theory is one where the adversary looks at the transmitted message and chooses at most $t$ bits to flip.
We stress that the classical \emph{adversarial} noise model allows the adversary to decide which noise operator to apply based on the specific codeword it acts upon. Remarkably, there are classical error correcting codes that solve the problem in the adversarial noise model, which are almost as powerful as the best error correcting codes that solve the problem in the independent noise model. For instance, roughly speaking, any code in the independent noise model must satisfy $r \le 1-H(p)$, where $r$ is the rate of the code, $p$ is the noise rate and $H$ is the binary entropy function. In the adversarial noise model, the Gilbert-Varshamov bound shows there are codes with rate $r=1-H(\delta)$ and relative distance $\delta$, though one can uniquely correct only up to half the distance.\footnote{If we allow list-decoding, then almost up to $1-r$ noise rate can be corrected.} \subsection{The quantum case} Let us now consider quantum error correcting codes (QECCs). The standard definition of such codes limits the noise to a linear combination of operators, each acting on at most $t$ qubits. A standard argument then shows that a noise operator that acts on $n$ qubits, such that it acts independently on each qubit with probability $p=t/n$, is very close to a linear combination of error operators that act on only, roughly, $t$ qubits. Thus, any quantum error correcting code (QECC) that corrects all errors on at most $t$ qubits also \emph{approximately} corrects \emph{independent} noise with noise rate about $t/n$. Therefore, the standard definition of QECCs works well with independent noise. As we said before, the classical \emph{adversarial} noise model allows the adversary to decide which noise operator to apply based on the specific codeword it acts upon. In the quantum model this amounts to, say, applying a single bit-flip operator based on the specific basis element we are given, or, in quantum computing terminology, applying a controlled bit-flip.
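For concreteness, the two rate expressions just mentioned are easy to evaluate numerically; the sketch below (helper name and sample values are ours) computes the Shannon-type bound $1-H(p)$ and a Gilbert-Varshamov rate $1-H(\delta)$:

```python
import math

def binary_entropy(p: float) -> float:
    """H(p) = -p log2 p - (1-p) log2 (1-p), with H(0) = H(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Shannon-type upper bound on the rate for bit-flip probability p.
p = 0.11
shannon_cap = 1 - binary_entropy(p)

# Gilbert-Varshamov: codes of relative distance delta and rate 1 - H(delta)
# exist; unique decoding corrects up to half the distance.
delta = 0.22
gv_rate = 1 - binary_entropy(delta)
assert 0 < gv_rate < shannon_cap < 1
```

The point of the comparison in the text is that the adversarial-model rate $1-H(\delta)$ is of the same shape as the independent-model bound $1-H(p)$.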
Controlled bit-flips are limited (in that they apply only a single $X$ operator) yet highly correlated (in that they depend on all the qubits of the input) operators. Can QECCs correct controlled bit-flip errors? Can QECCs \emph{approximately} correct such errors? \subsection{Correcting controlled bit flip errors} Before we proceed, let us first see that a QECC that corrects one qubit error in the standard sense may fail for controlled bit flip errors. Assume we have a quantum code of dimension $|C|$ that is spanned by $|C|$ orthogonal codewords $\set{\phi_i}$, and can correct any noise operator from $\mathcal{E}$ that is applied on any vector $\phi \in \set{\phi_i}$. Specifically, for any noise operator $E\in\mathcal{E}$, the quantum decoding algorithm maps a noisy word $\phi_i'=E\phi_i$ to a product state $\phi_i \otimes \ket{{\sf synd}(E)}$, where ${\sf synd}(E)$ is the error-syndrome associated with $E$. Notice that ${\sf synd}(E)$ depends on $E$ alone and not on $\phi_i$. Then, we can correct errors applied on any state in the vector space \emph{spanned} by the basis vectors $\set{\phi_i}$. To see that, notice that if we start with some linear combination $\sum_{i=1}^k \alpha_i {\phi_i}$ and we apply the error $E$ on it, then the corrupted word is $\sum_{i=1}^k \alpha_i E\phi_i$, and applying the decoding procedure we get the state $(\sum_{i=1}^k \alpha_i \phi_i) \otimes \ket{{\sf synd}(E)}$. Tracing out the syndrome register we recover the original state. This property, however, breaks down for controlled bit flip errors, where the error may depend on the specific codeword $\phi_i$. In that case the corrupted word is $\sum_{i=1}^k \alpha_i E_i\phi_i$. If we use the same decoding procedure, the decoded word is $\sum_{i=1}^k \alpha_i \phi_i \otimes \ket{{\sf synd}(E_i)}$, and if we trace out the syndrome register we end up with a state different from the original state.
The above argument shows that if we allow controlled bit-flip errors, then the environment may get information about the codeword, and thus corrupt it. This, by itself, is not yet an impossibility proof, as it is possible that one can find a code that is immune to controlled bit-flip errors. Unfortunately, an easy argument shows that there is no non-trivial QECC that perfectly corrects such errors (see Theorem~\ref{thm:exact-negative}). Therefore, while there are asymptotically good QECCs in the standard error model, there are no non-trivial QECCs correcting controlled \emph{single} bit-flip errors. \subsection{Approximate error-correction} Summarizing the discussion above, we saw that no QECC can \emph{perfectly} correct controlled bit-flip errors. We now ask whether this also holds when we relax the perfect decoding requirement and only require \emph{approximate decoding}. Namely, suppose we only require that for any codeword $\phi$ and any allowed error $E$, decoding $E\phi$ results in a state \emph{close} to $\phi$. Can we then correct controlled bit-flip errors? Somewhat surprisingly, we show a positive answer to this question. That is, we show a QECC of arbitrarily high dimension that can correct any controlled bit-flip error with sub-constant approximation error (see Theorem~\ref{thm:app-positive} for a formal statement). This, in particular, shows that there are error models that cannot be perfectly decoded, yet can be approximately decoded. For the proof, we find a vector space of large dimension containing only functions of low sensitivity. Having that, we raise our expectations and ask whether one can approximately correct, say, any controlled single-qubit error. Here, however, we show a negative result. We show that no non-trivial QECC can correct controlled phase-errors with sub-constant approximation error (see Theorem~\ref{thm:app-negative} for a formal statement).
Namely, no non-trivial QECC can handle, even approximately, correlated noise, if the control is in the standard basis and the error is in the phase. \section{Preliminaries} \label{sec:preliminiaries} \subsection{Quantum error-correcting codes} Let $\mathcal{{N}}$ denote the Hilbert space of dimension $2^n$. $\mathcal{{M}}$ is an $[n,k]$ quantum error correcting code (QECC) if it is a subspace of $\mathcal{{N}}$ of dimension $K \ge 2^k$. We call $n$ the \emph{length} of the code, and $K$ the \emph{dimension} of the code. For two Hilbert spaces $\mathcal{{N}},\mathcal{{N}}'$, $L(\mathcal{{N}},\mathcal{{N}}')$ denotes the set of linear operators from $\mathcal{{N}}$ to $\mathcal{{N}}'$. \begin{definition} A code $\mathcal{{M}}$ \emph{corrects} $\mathcal{{E}} \subset L(\mathcal{{N}},\mathcal{{N}}')$ if for any two operators $X,Y \in \mathcal{{E}}$ and any two codewords $\phi_1,\phi_2 \in \mathcal{{M}}$, if $\phi_1^* \phi_2 = 0$ then $(X\phi_1)^*(Y\phi_2) = 0$. \end{definition} \begin{fact}[{\cite[Section 15.5]{KSV02}}]\label{fact:QECC} A code $\mathcal{{M}}$ corrects $\mathcal{{E}}$ if for any $X,Y \in \mathcal{{E}}$, defining $E=X^*Y$, there exists a constant $c(E) \in \mathbb{C}$, such that for any two codewords $\phi_1,\phi_2 \in \mathcal{{M}},$ $$\phi_1^*E\phi_2=c(E) \cdot \phi_1^* \phi_2.$$ \end{fact} A QECC $\mathcal{{M}}$ corrects $t$ errors if it corrects all linear operators that correlate the environment with at most $t$ qubits. There are \emph{asymptotically good} QECCs, i.e., $[n,k]$ QECCs that correct $t=\Omega(n)$ errors with $n=O(k)$~\cite{ALT01}. \subsection{Boolean functions} The influence of a variable $x_i$ on a boolean function $f:\set{0,1}^n \to \set{0,1}$ is defined to be $$\Pr_{x \in \set{0,1}^n} [f(x) \neq f(x \xor e_i)],$$ where $e_i \in \mathbb{R}^n$ is the $i$'th vector in the standard basis. The influence of a function is the maximum influence of its variables.
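For small $n$ the influence just defined can be computed by brute force; the following sketch (helper names are ours) checks it on parity, a dictatorship, and 3-bit majority:

```python
import itertools

def influence(f, n, i):
    """Influence of variable i on f: Pr_x[f(x) != f(x with bit i flipped)]."""
    flips = 0
    for x in itertools.product((0, 1), repeat=n):
        y = list(x)
        y[i] ^= 1
        if f(x) != f(tuple(y)):
            flips += 1
    return flips / 2 ** n

def max_influence(f, n):
    return max(influence(f, n, i) for i in range(n))

# Parity: flipping any bit always changes the value -> influence 1.
parity = lambda x: sum(x) % 2
assert influence(parity, 4, 0) == 1.0

# Dictator on bit 0: bit 0 has influence 1, all other bits 0.
dictator = lambda x: x[0]
assert influence(dictator, 4, 0) == 1.0
assert influence(dictator, 4, 1) == 0.0

# 3-bit majority: a flip matters iff the other two bits are split -> 1/2.
maj3 = lambda x: 1 if sum(x) >= 2 else 0
assert max_influence(maj3, 3) == 0.5
```

The Tribes function discussed next achieves the much smaller influence $O(\log n / n)$ while remaining balanced.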
Ben-Or and Linial~\cite{BL85} showed that there exists a balanced function $\mathsf{Tribes}:\set{0,1}^n \to \set{0,1}$ with influence as small as $O({\log n \over n})$, and Kahn, Kalai and Linial~\cite{KKL88} showed that this bound is tight for balanced functions. We extend this notion to complex valued functions. For $g:\set{0,1}^n \to \mathbb{C}$ let $$I_i(g)=\mathbf{{E}}_{x \in \set{0,1}^n} |g(x)-g(x \xor e_i)|^2$$ and $I(g) = \max_{i \in [n]} I_i(g)$. We identify a function $g:\set{0,1}^n \to \mathbb{C}$ with the vector $\sum_{x \in \set{0,1}^n} g(x) \ket{x}$. When we write $g$ we refer to it as a vector in $\mathcal{{N}}$. When we write $g(x)$ we refer to $g$ as a function $g:\set{0,1}^n \to \mathbb{C}$ and $g(x) \in \mathbb{C}$. \section{No QECC can perfectly correct controlled bit flips} \label{sec:no-perfect-QECC} We now concentrate on the error model that allows any \emph{controlled bit flip} error. Formally, for $i \in [n]$ and $S\subseteq \set{0,1}^{n-1}$ let $E_{i,S}$ be the operator that applies $X$ on the $i$'th qubit conditioned on the other qubits being in $S$. More precisely, we define the operator $E_{i,S}$ on the basis $\set{\ket{x}| x \in \set{0,1}^n}$ and extend it linearly. For $x \in \set{0,1}^n$ define $\wh{x}_i=(x_1, \ldots, x_{i-1},x_{i+1},\ldots,x_n) \in \set{0,1}^{n-1}$. Also, let $X^i \in L(\mathcal{{N}},\mathcal{{N}})$ denote the operator that flips the $i$'th qubit, i.e., $X^i = I^{\otimes (i-1)} \otimes X \otimes I^{\otimes (n-i)}$. Then $$E_{i,S} \ket{x} = \left\{ \begin{array}{ll} X^i \ket{x} & ~~~\hbox{if $\wh{x}_i \in S$} \\ \ket{x} & ~~~\hbox{otherwise.} \\ \end{array} \right.$$ Let $$\bitflip \eqdef \set{E_{i,S} ~~|~~ i \in [n],~ S \subseteq \set{0,1}^{n-1} }.$$ We also define a tiny subset $\singeltons$ of $\bitflip$ by $$\singeltons \eqdef \set{E_{i,\set{j}} ~~|~~ i \in [n],~ j \in \set{0,1}^{n-1} }.$$ We claim that even this set of errors \emph{cannot} be corrected.
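The operators $E_{i,S}$ can be written out as matrices for small $n$ (the encoding of basis states as bit tuples below is our own). Each $E_{i,S}$ permutes the computational basis, hence is unitary, and $E_{i,\set{j}}$ swaps exactly one pair of basis states:

```python
import itertools
import numpy as np

n = 3
basis = list(itertools.product((0, 1), repeat=n))
index = {x: k for k, x in enumerate(basis)}

def E(i, S):
    """Matrix of E_{i,S}: flip qubit i iff the remaining bits lie in S."""
    M = np.zeros((2 ** n, 2 ** n))
    for x in basis:
        hat = x[:i] + x[i + 1:]                          # \hat{x}_i
        y = x[:i] + (x[i] ^ 1,) + x[i + 1:] if hat in S else x
        M[index[y], index[x]] = 1.0
    return M

# Any E_{i,S} is a permutation of the basis, hence unitary.
S = {(0, 1), (1, 1)}
M = E(0, S)
assert np.allclose(M @ M.T, np.eye(2 ** n))

# E_{i,{j}} is an involution: it swaps one pair |q>, |q xor e_i>.
M1 = E(1, {(0, 1)})   # here j = (0,1), so |001> and |011> are swapped
assert np.allclose(M1 @ M1, np.eye(2 ** n))
```

Note that both members of a swapped pair have the same $\wh{x}_i$, which is why each $E_{i,S}$ decomposes into disjoint transpositions.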
\begin{theorem} \label{thm:exact-negative} There is no QECC with dimension bigger than one that can correct $\singeltons$. \end{theorem} \begin{proof} Suppose there exists an $[n,k]$ code with $k \ge 1$ that corrects $\singeltons$. Let $\phi = \sum_{i \in \set{0,1}^n} \phi(i) \ket{i}$ and $\psi = \sum_{i \in \set{0,1}^n} \psi(i) \ket{i}$ be two orthonormal codewords. We will prove that: \begin{claim} For every $i \in [n]$ and every $q \in \set{0,1}^n$ it holds that $\phi(q)=\phi(q \xor e_i)$. \end{claim} In particular, it follows that $\phi=\alpha \cdot \sum_{i \in \set{0,1}^n} \ket{i}$ for some $0 \neq \alpha \in \mathbb{C}$. Similarly, $\psi=\alpha' \cdot \sum_{i \in \set{0,1}^n} \ket{i}$ for some $0 \neq \alpha' \in \mathbb{C}$. Therefore, $\phi^*\psi = 2^n \alpha^* \alpha' \neq 0$. A contradiction. We now prove the claim. Fix $i \in [n]$ and $q \in \set{0,1}^{n}$. Denote $E=E_{i,\set{q}}$ and $q'=q \xor e_i$. It can be verified that \begin{eqnarray*} \phi^* E \phi &=& \phi^* \phi - |\phi(q) - \phi(q')|^2 =1-|\phi(q) - \phi(q')|^2 \\ \psi^* E \psi &=& \psi^* \psi - |\psi(q) - \psi(q')|^2 =1-|\psi(q) - \psi(q')|^2 \\ \psi^* E \phi &=& - (\phi(q) - \phi(q'))^*(\psi(q) - \psi(q')) \end{eqnarray*} As $\phi^* \psi =0$, by the QECC definition, $\psi^* E\phi = 0$ and so $(\phi(q) - \phi(q'))^*(\psi(q) - \psi(q'))=0$. If $\phi(q) \neq \phi(q')$ we conclude that $\psi(q)=\psi(q')$. But then, $\phi^* E\phi < 1$ while $\psi^* E \psi=1$, which contradicts Fact~\ref{fact:QECC}. Therefore, $\phi(q)=\phi(q')$. \end{proof} The argument above shows that for any $w,w' \in \set{0,1}^n$ and any codeword $\phi$, $\phi(w)=\phi(w')$, by employing a sequence of \emph{small} changes, and showing that $\phi$ is invariant under these small changes. However, if we replace the stringent notion of perfect decoding with the more relaxed notion of \emph{approximate decoding}, then at least theoretically it is possible that under this weaker notion, controlled bit flips can be corrected.
Somewhat surprisingly, this is indeed the case. \section{An approximate QECC for controlled bit flips} We first define a relaxed notion of error-detection. We say a code \emph{separates} $\mathcal{{E}}$ if for any two allowed errors $X,Y \in \mathcal{{E}}$ and any two orthogonal codewords $\phi,\psi$, $X\phi$ and $Y\psi$ are far away from each other. Formally, \begin{definition} Let $\mathcal{{M}} \subseteq \mathcal{{N}}$ be an $[n,k]$ QECC and $\mathcal{{E}} \subset L(\mathcal{{N}},\mathcal{{N}}')$. We say $\mathcal{{M}}$ separates $\mathcal{{E}}$ with at most $\alpha$ error, if for any two operators $X,Y \in \mathcal{{E}}$ and any two unit vectors $\phi_1, \phi_2 \in \mathcal{{M}}$, if $\phi_1^* \phi_2 = 0$ then $|\phi_1^*X^* Y\phi_2| \le \alpha$. \end{definition} We say a code $\mathcal{{M}}$ \emph{approximately} corrects $\mathcal{{E}}$ if there exists a POVM on $\mathcal{{N}}'$ such that for any operator $X \in \mathcal{{E}}$, and any codeword $\phi \in \mathcal{{M}}$, when we apply the POVM on $X\phi$, the resulting mixed state is close to the pure state $\phi$. A very special case of the above is when the decoding procedure is the \emph{identity} function. In this case we say $\mathcal{{M}}$ is $(\mathcal{{E}},\varepsilon)$ \emph{immune}. Formally, \begin{definition} Let $\mathcal{{M}} \subseteq \mathcal{{N}}$ be an $[n,k]$ QECC and $\mathcal{{E}} \subset L(\mathcal{{N}},\mathcal{{N}}')$. We say $\mathcal{{M}}$ is $(\mathcal{{E}},\varepsilon)$ immune if for every $X \in \mathcal{{E}}$ and every $\phi \in \mathcal{{M}}$, $|\phi^*X\phi| \ge (1-\varepsilon)|\phi^* \phi|$. We call $\varepsilon$ the \emph{approximation error}. \end{definition} We saw before that there is no non-trivial QECC that perfectly corrects $\bitflip$. In contrast, we will now construct a large QECC that is immune against $\bitflip$, with sub-constant approximation error.
\subsection{The construction} The calculations done in Section~\ref{sec:no-perfect-QECC} can be generalized to show that if we want $\phi$ to be $\varepsilon$-immune for bit flip errors, then $\phi(x)$ must have low influence. However, we also want the QECC to have a large dimension, and so we want many orthogonal such vectors. The idea is to work with a function $f$ of low influence, and combine it on many independent blocks. Pick an integer $B$ such that $2B$ divides $n$, and define $n'={n \over 2B}$. Fix a balanced function $f:\set{0,1}^{n'} \to \set{\pm {1 \over 2}}$ with low influence, i.e., $I(f) \le s=s(n')$. We remind the reader that this means that for all $j$ in $[n']$, $I_j(f)=\mathbf{{E}}_{x \in \set{0,1}^{n'}} |f(x)-f(x \xor e_j)|^2 \le s$ (see Section~\ref{sec:preliminiaries}). Notice that this implies that for all $j \in [n']$, \begin{eqnarray} \label{eqn:pr} \Pr_{w \in \set{0,1}^{n'}} [f(w) \neq f(w \xor e_j)] & \le & s. \end{eqnarray} We use the low-influence function $f$ as a building block. Now partition $[n]$ into $2B$ blocks of equal length $n'$. For $x \in \set{0,1}^n$, $i \in \set{1,\ldots,B}$ and $b \in \set{0,1}$, let $x_{i,b} \in \set{0,1}^{n'}$ denote the value of $x$ restricted to the $(2i-1+b)$'th block, i.e., the string $x$ is the concatenation of the blocks $x_{1,0},x_{1,1},x_{2,0},x_{2,1},x_{3,0},\ldots,x_{B,0},x_{B,1}$. For $z=(z_1,\ldots,z_B) \in \set{0,1}^B$ we define a function $f_z: \set{0,1}^n \to \mathbb{C}$ that applies $f$ on the blocks corresponding to $z$. That is, $$f_z(x)=f(x_{1,z_1}) \cdot \ldots \cdot f(x_{B,z_B}),$$ as shown in Figure~\ref{fig:ctrlX-QECC}. \begin{figure} \caption{\it The input is $B$ pairs of blocks, each block is of length $n'$. The values $z_1,\ldots,z_{B}$ select, within each pair, the block to which $f$ is applied.} \label{fig:ctrlX-QECC} \end{figure} As usual we look at $f_z$ as a vector in $\mathcal{{N}}$. We let $W=\mathop{\rm Span}\nolimits \set{f_z ~:~ z \in \set{0,1}^B}$.
We claim: \begin{theorem} \label{thm:1-bit-flip} $W$ is an $[n,B]$ QECC that is $(\bitflip,{2s(n')})$ immune. \end{theorem} In particular, taking $f(w)={1 \over 2}$ when ${\rm Tribes}(w)=1$ and $f(w)=-{1 \over 2}$ when ${\rm Tribes}(w)=0$, we get: \begin{theorem} \label{thm:app-positive} For every $0 < B=B(n) < n$ there exists an $[n,B]$ QECC that is $(\bitflip,\varepsilon=O({B \log n \over n}))$ immune. \end{theorem} In particular, there exist QECCs of length $n$ and dimension $2^{\sqrt{n}}$ that approximately correct all controlled-X errors with an $O({\log(n) \over \sqrt{n}})$ approximation error. \subsection{The analysis} We first show that $\dim W = 2^B$. This immediately follows from: \begin{claim} $\set{f_z}_{z \in \set{0,1}^B}$ is an orthogonal set. \end{claim} \begin{proof} We will show that for $z \neq z'$, $f_z^* f_{z'}=0$. For that, it is enough to show that $\set{(f_z(x),f_{z'}(x))}_{x \in \set{0,1}^n}$ is uniform over $\set{\pm 2^{-B}} \times \set{\pm 2^{-B}}$. To see that, first notice that $f(x_{1,z_1})$ is balanced over $\set{\pm {1 \over 2}}$. Hence, $\set{f_z(x)}_{x \in \set{0,1}^n}$ is uniform over $\set{\pm 2^{-B}}$. Also, as $z \neq z'$ there exists some $k$ such that $z_k \neq z'_k$. Notice that $f(x_{k,z_k})$ depends on bits that do not influence $f_{z'}(x)$, hence it is independent of $f_{z'}(x)$. It is also uniform on $\set{ \pm {1 \over 2}}$. Hence the pair $(f_{z}(x),f_{z'}(x))$ is uniform over $\set{\pm 2^{-B}} \times \set{\pm 2^{-B}}$ as desired. \end{proof} We now analyze the approximation error. We will use the following lemmas: \begin{lemma} \label{lem:sensitivity} For every $\phi \in \mathcal{{N}}$ and every $i \in [n]$, $S \subseteq \set{0,1}^{n-1}$, $$|\phi^* E_{i,S} \phi -\phi^*\phi| \le 2^{n-1} I_{i}(\phi).$$ \end{lemma} \begin{lemma} \label{lem:W} For every $\phi \in W$, $$2^{n-1} I(\phi) \le {2s} \cdot |\phi^*\phi|.$$ \end{lemma} These lemmas together imply Theorem~\ref{thm:1-bit-flip}.
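The orthogonality claim and the influence bound of the second lemma can be checked numerically on toy parameters. This sketch uses parity as the balanced building block $f$, chosen only for simplicity: parity has maximal influence ($s = 1$), so it illustrates the inequalities but not a good approximation error.

```python
import numpy as np
from itertools import product

nprime, B = 2, 2            # toy parameters; the paper takes n' = n/(2B)
n = 2 * B * nprime
N = 2 ** n

def f(w):
    # any balanced f : {0,1}^{n'} -> {+1/2, -1/2} works for orthogonality
    return 0.5 if sum(w) % 2 == 0 else -0.5

def blocks(x):
    bits = [(x >> j) & 1 for j in range(n)]
    return [tuple(bits[k * nprime:(k + 1) * nprime]) for k in range(2 * B)]

def f_z(z):
    # f_z(x) = product of f over the block of each pair selected by z
    return np.array([np.prod([f(blocks(x)[2 * i + z[i]]) for i in range(B)])
                     for x in range(N)])

zs = list(product((0, 1), repeat=B))
basis = {z: f_z(z) for z in zs}

def infl(phi, i):
    # I_i(phi) = E_x |phi(x) - phi(x xor e_i)|^2
    flipped = np.array([phi[x ^ (1 << i)] for x in range(N)])
    return np.mean(np.abs(phi - flipped) ** 2)

# s = max_j I_j(f)
s = max(np.mean([(f(w) - f(w[:j] + (1 - w[j],) + w[j + 1:])) ** 2
                 for w in product((0, 1), repeat=nprime)])
        for j in range(nprime))

rng = np.random.default_rng(0)
phi = sum(rng.normal() * basis[z] for z in zs)   # a random element of W
```

With these definitions one can verify that the $2^B$ vectors $f_z$ are pairwise orthogonal with $f_z^* f_z = 2^{n-2B}$, and that $2^{n-1} I_i(\phi) \le 2s\,\phi^*\phi$ for every coordinate $i$.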
Notice that in Lemma~\ref{lem:W} we had to prove the claim for every $\phi \in W$ and not just for some basis of $W$ (see the discussion in the introduction). \begin{proof}[Proof of Lemma~\ref{lem:sensitivity}] For every $g,h:\set{0,1}^n \to \mathbb{C}$ and every $i \in [n]$, $S \subseteq \set{0,1}^{n-1}$, \begin{eqnarray*} {h}^* E_{i,S} {g} & = & \sum_{x:\wh{x}_i \not \in S} h(x)^*g(x)+ \sum_{x:\wh{x}_i \in S} h(x)^*g(x \xor e_i) \\ & = & \sum_{x \in \set{0,1}^n} h(x)^*g(x)+\sum_{x: \wh{x}_i \in S} [h(x)^*g(x \xor e_i)-h(x)^*g(x)]. \end{eqnarray*} Now, fix $i$. For $y \in \set{0,1}^{n-1}$ and $b \in \set{0,1}$ let $(y,b)$ denote the string $x \in \set{0,1}^n$ such that $\wh{x}_i=y$ and $x_i=b$. Then, \begin{eqnarray*} |{h}^* E_{i,S} {g} -h^*g| & = & \left|\sum_{y \in S} (h(y,0)^*-h(y,1)^*)(g(y,0)-g(y,1))\right| \\ & \le & \sqrt{\sum_{y \in S}|h(y,0)^*-h(y,1)^*|^2} \sqrt{\sum_{y \in S}|g(y,0)-g(y,1)|^2} \\ & \le & \sqrt{\sum_{y \in \set{0,1}^{n-1}}|h(y,0)^*-h(y,1)^*|^2} \sqrt{\sum_{y \in \set{0,1}^{n-1}}|g(y,0)-g(y,1)|^2} \\ & = & \sqrt{2^{n-1} I_i(h)}\sqrt{2^{n-1} I_i(g)}. \end{eqnarray*} The lemma follows by taking $h=g=\phi$. \end{proof} We now turn to Lemma~\ref{lem:W}. One can check that all elements in $\set{f_z}$ have low influence. However, this by itself does not imply that all elements in $W$ have low influence as well. So we verify this directly. \begin{proof}[Proof of Lemma~\ref{lem:W}] We want to show that any $\phi \in W$ has low influence. Fix $i \in [n]$ and suppose that $i$ corresponds to the $j$'th variable in the $(k,b)$'th block. For $x \in \set{0,1}^n$ let $x=(x^{(1)},x^{(2)})$ where $x^{(2)}=x_{k,b}$ and $x^{(1)} \in \set{0,1}^{n-n'}$ is the rest of $x$. Let $\wh{f_z}:\set{0,1}^{n} \to \mathbb{C}$ be $$\wh{f_z}(x)=f(x_{1,z_1}) \cdot \ldots \cdot f(x_{k-1,z_{k-1}}) \cdot f(x_{k+1,z_{k+1}}) \cdot \ldots \cdot f(x_{B,z_B}).$$ Notice that $\wh{f_z}(x^{(1)},x^{(2)})$ depends only on $x^{(1)}$. For that reason we also write it as $\wh{f_z}(x^{(1)})$. We are given $\phi \in W$ and express it as $\phi=\sum_{z} \alpha_{z} f_{z}$.
We want to bound \begin{eqnarray*} I_i(\phi) &=& \mathbf{{E}}_{x \in \set{0,1}^{n}}|\phi(x)-\phi(x \xor e_i)|^2 \\ & = & \mathbf{{E}}_{x \in \set{0,1}^{n}} \left|\sum_z \alpha_z (f_z(x)-f_z(x \xor e_i))\right|^2. \end{eqnarray*} The functions $f_z$ for which $f_z(x)=f_z(x \xor e_i)$ do not contribute to the sum. We can therefore define $\zeta=\sum_{z: z_k=b} \alpha_{z} f_{z}$ and it follows that $I_i(\phi)=I_i(\zeta)$. Then, \begin{eqnarray*} 2^{n-1} I_i(\zeta) &=& {1 \over 2} \sum_{x \in \set{0,1}^{n}} \left|\sum_{z:z_k=b} \alpha_z (f_z(x)-f_z(x \xor e_i))\right|^2\\ & = & {1 \over 2} \sum_{(x^{(1)},x^{(2)})\in \set{0,1}^{n}} \left|\sum_{z:z_k=b} \alpha_z \wh{f_z}(x^{(1)})(f(x^{(2)})-f(x^{(2)} \xor e_j))\right|^2\\ & = & {1 \over 2} \sum_{(x^{(1)},x^{(2)})\in \set{0,1}^{n}} |f(x^{(2)})-f(x^{(2)} \xor e_j)|^2 \cdot \left|\sum_{z:z_k=b} \alpha_z \wh{f_z}(x^{(1)})\right|^2. \end{eqnarray*} Next, observe that the only terms that contribute non-zero values are those $x=(x^{(1)},x^{(2)})$ where $f(x^{(2)}) \neq f(x^{(2)} \xor e_j)$. There are at most $s2^{n'}$ such strings $x^{(2)}$ (see Equation~(\ref{eqn:pr})). Also, each such term contributes $|\sum_{z:z_k=b} \alpha_z \wh{f_z}(x^{(1)})|^2$. However, \begin{eqnarray*} |\zeta(x^{(1)},x^{(2)})|^2 &=& \big|\sum_{z: z_k=b} \alpha_{z} f_{z}(x^{(1)},x^{(2)})\big|^2 ~=~\big|\sum_{z: z_k=b} \alpha_{z} \wh{f_z}(x^{(1)})f(x^{(2)})\big|^2 \\ &=& \big|f(x^{(2)}) \big|^2 \cdot \big|\sum_{z: z_k=b} \alpha_{z} \wh{f_z}(x^{(1)})\big|^2 ~=~ {1 \over 4}\big|\sum_{z:z_k=b} \alpha_z \wh{f_z}(x^{(1)})\big|^2. \end{eqnarray*} Thus, the term $|\zeta(x^{(1)},x^{(2)})|^2$ depends only on $x^{(1)}$ and not on $x^{(2)}$, and we denote it by $|\zeta(x^{(1)})|^2$. Denote $\wh{\zeta}=\sum_{z: z_k=b} \alpha_{z} \wh{f_z}$. Notice that $\sum_{x^{(1)}} |\zeta(x^{(1)})|^2={1 \over 4}|{\wh{\zeta}~ }^* \wh{\zeta}|$.
Also, because $|\zeta(x^{(1)},x^{(2)})|$ does not depend on $x^{(2)}$, $|\zeta^* \zeta| = 2^{n'} \sum_{x^{(1)}} |\zeta(x^{(1)})|^2$. Altogether, \begin{eqnarray*} 2^{n-1} I_i(\zeta) &\le& {1 \over 2} \sum_{x^{(1)}} s2^{n'} \cdot 4|\zeta(x^{(1)})|^2 ={2s}\cdot 2^{n'} \sum_{x^{(1)}} |\zeta(x^{(1)})|^2 = {2s}|\zeta^*\zeta|. \end{eqnarray*} Finally, $\zeta=\sum_{z:z_k=b} \alpha_z f_z$ is a linear combination of the orthogonal functions $\set{f_z}$ and $|f_z^*f_z|=2^{n-2B}$. Thus, $$|\zeta^*\zeta|=\sum_{z:z_k=b} |\alpha_z|^2 2^{n-2B} \le 2^{n-2B} \sum_z |\alpha_z|^2=|\phi^*\phi|,$$ which completes the proof. \end{proof} \section{No approximate QECC can correct controlled phase errors} So far we have seen that one can approximately correct controlled-X errors with a sub-constant approximation error. We now show there is no way to correct controlled-phase errors. The reason is that if $\phi$ and $\psi$ are two orthogonal codewords, then by applying controlled phase errors we can match the phases of $\phi$ and $\psi$ on any basis vector $\ket{x}$, and this implies that $|\phi^*X\psi|$ is about $\sum_x |\phi(x)| \cdot |\psi(x)|$, which leads to a simple contradiction. We now formally define our error model. As before, the error operators are linear and hence it is sufficient to define them on the standard basis $\set{\ket{x}}_{x \in \set{0,1}^n}$. We define the error operators $E_{S,\theta}$, for $S \subseteq \set{0,1}^n$ and $\theta \in [2\pi]$, by: $$E_{S,\theta} \ket{x} = \left\{ \begin{array}{ll} e^{\theta i} \ket{x} & ~~~\hbox{if $x \in S$} \\ \ket{x} & ~~~\hbox{otherwise.} \\ \end{array} \right.$$ In fact, we do not even need to allow arbitrary controlled phase errors, and we can be satisfied with $\theta \in \set{0,{\pi \over 4},{\pi \over 2}}$. Set $$\phaseflip = \set{E_{S,\theta} ~|~ S \subseteq \set{0,1}^n, \theta \in \set{0,{\pi \over 4},{\pi \over 2}}}.$$ We now prove that such errors cannot be approximately corrected, even for some fixed constant error.
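The phase-matching step used below can be illustrated numerically: choosing, for each basis state, the nearest phase from a grid of $2^k$ equally spaced angles aligns $\psi$ with $\phi$ up to a residual angle of $\pi/2^k$, giving $|\phi^* E \psi| \ge (1-\pi/2^k)\sum_x|\phi(x)|\,|\psi(x)|$. This is a sketch; the full-circle grid is our convention for illustration and may differ from the exact definition of $\Theta_k$.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
N = 2 ** n
k = 2
# 2^k equally spaced phases around the full circle
grid = 2 * np.pi * np.arange(2 ** k) / 2 ** k

phi = rng.normal(size=N) + 1j * rng.normal(size=N)
psi = rng.normal(size=N) + 1j * rng.normal(size=N)

# for each basis state pick the grid phase theta_j that best aligns
# e^{i(theta'_x + theta_j)} with e^{i theta_x}
delta = np.angle(phi) - np.angle(psi)
best = np.argmax(np.cos(delta[:, None] - grid[None, :]), axis=1)
E = np.diag(np.exp(1j * grid[best]))

val = abs(np.vdot(phi, E @ psi))
bound = (1 - np.pi / 2 ** k) * np.sum(np.abs(phi) * np.abs(psi))
```

Since each residual angle is at most $\pi/4$ here, in fact $|\phi^* E \psi| \ge \cos(\pi/4)\sum_x|\phi(x)|\,|\psi(x)|$, which is stronger than the stated bound.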
In fact, we prove that such errors cannot even be \emph{separated}. \begin{theorem} \label{thm:app-negative} There is no two-dimensional QECC that separates $\phaseflip$ with at most $\alpha={1 \over 10}$ error. \end{theorem} \begin{proof} Assume $\mathcal{{M}}$ is a 2-dimensional QECC that separates $\phaseflip$ with at most $\alpha$ error. \begin{lemma}\label{lem:abs} Let $\mathcal{{M}} \subseteq \mathcal{{N}}$ be a vector space of dimension greater than one. Then there are two orthonormal vectors $\phi,\psi \in \mathcal{{M}}$ such that $\sum_x |\phi(x)| \cdot |\psi(x)| \ge {1 \over 2}$. \end{lemma} We postpone the proof for later. Fix $\phi$ and $\psi$ as in the lemma. Notice that by ranging over all $X,Y \in \phaseflip$, we can implement any operator $E$ that partitions $\set{0,1}^n$ into four sets and, based on the set, does a phase shift of angle $0$, ${\pi \over 4}$, ${\pi \over 2}$ or ${3\pi \over 4}$. More precisely: for a partition $\overline{S} = (S_1,\ldots,S_\ell)$ of $\set{0,1}^n$ and for a tuple of angles $\Theta=(\theta_1,\ldots,\theta_\ell) \in [2\pi]^\ell$ define $E_{\overline{S},\Theta}$ by $E_{\overline{S},\Theta} \ket{x}= e^{\theta_j i} \ket{x}$, where $j$ is such that $x \in S_j$. Let $\Theta_k=\left( 0, {1 \over 2^{k}} \pi,\ldots, {(2^{k}-1) \over 2^{k}} \pi\right)$ and $$\phaseflip_k = \set{E_{\overline{S},\Theta_k} ~|~ \overline{S}=(S_1,\ldots,S_{2^k}) \mbox{ is a partition of $\set{0,1}^n$}}.$$ Then, by ranging over all $X,Y \in \phaseflip$ we range over all $E \in \phaseflip_2$. We claim: \begin{lemma}\label{lem:phase} Let $k \ge 1$ and $\varepsilon=2^{-k}\pi$.
For every $\phi,\psi \in \mathcal{{N}}$ there exists $E \in \phaseflip_k$ such that $$|\phi^* E \psi| \ge (1-\varepsilon)\sum_x |\phi(x)| \cdot |\psi(x)|.$$ \end{lemma} Thus, in particular for the $\phi$ and $\psi$ we fixed before (and setting $k=2, \varepsilon={\pi \over 4}$): $$\alpha \ge |\phi^* E \psi| \ge {1-\varepsilon \over 2} > \frac{1}{10},$$ a contradiction. \end{proof} We are left to prove the two lemmas: \begin{proof}[Proof of Lemma~\ref{lem:abs}] Let $\phi,\psi \in \mathcal{{M}}$ be arbitrary orthonormal vectors. Let $\phi'={1 \over \sqrt{2}}(\phi+\psi)$ and $\psi'={1 \over \sqrt{2}}(\phi-\psi)$. Then \begin{eqnarray*} \sum_x |\phi'(x)| \cdot |\psi'(x)| &=& {1 \over 2} \sum_x |\phi(x)+\psi(x)| \cdot |\phi(x)-\psi(x)|. \end{eqnarray*} Fix some $x \in \set{0,1}^n$. Denote $a=\phi(x), b=\psi(x)$, $a,b \in \mathbb{C}$ and assume $|a| \ge |b|$. Then, \begin{eqnarray*} |(a+b)(a-b)| &=& |a^2-b^2| \ge |a|^2-|b|^2 = (|a|-|b|) (|a|+|b|) \\ & \ge & (|a|-|b|)^2 = |a|^2+|b|^2-2 |a| \cdot |b|. \end{eqnarray*} Therefore, \begin{eqnarray*} \sum_x |\phi'(x)| \cdot |\psi'(x)| &\ge& {1 \over 2} \sum_x \left(|\phi(x)|^2+|\psi(x)|^2\right) -\sum_x |\phi(x)| \cdot |\psi(x)| = 1-\sum_x |\phi(x)| \cdot |\psi(x)|. \end{eqnarray*} Thus, either $\sum_x |\phi(x)| \cdot |\psi(x)|$ or $\sum_x |\phi'(x)| \cdot |\psi'(x)|$ is at least half. \end{proof} \begin{proof}[Proof of Lemma~\ref{lem:phase}] Express $\phi(x)=r_x \cdot e^{\theta_x i}$ with $r_x=|\phi(x)| \in \mathbb{R}^{+}$ and $\theta_x \in [2\pi]$. Similarly, $\psi(x)=r'_x \cdot e^{\theta'_x i}$. The partition $\overline{S}=(S_1,\ldots,S_{2^k})$ is defined as follows. For every $x$ we look at $\min_j \set{|\theta'_x + \theta_j - \theta_x |}$. Each $x$ is placed in the $S_{j'}$ whose index $j'$ minimizes the above expression for that $x$. Note that the above expression is always bounded by $2^{-k}\pi$. Denote $E=E_{\overline{S},\Theta_k}$. Then, $$|\phi^*E\psi|= \left|\sum_x r_x r'_x e^{\zeta_x i}\right|,$$ where $|\zeta_x| \le 2^{-k}\pi$.
Let $u_x=1-e^{\zeta_x i}$ and notice that $$|u_x|^2=2(1-\cos(\zeta_x)) \le \zeta_x^2 \le 2^{-2k}\pi^2$$ and $|u_x| \le 2^{-k}\pi=\varepsilon$. Thus, \begin{eqnarray*} |\phi^*E\psi| &=& \left|\sum_x r_x r'_x (1-u_x)\right| ~\ge~ \sum_x r_x r'_x -\left|\sum_x r_x r'_x u_x \right| \\ &\ge & (1-\max_x |u_x|)\sum_x r_x r'_x ~\ge~ (1-\varepsilon)\sum_x r_x r'_x, \end{eqnarray*} as desired. \end{proof} \end{document}
\begin{document} \title{Some Compact Logics - Results in ZFC} \section{Preliminaries} While first order logic has many nice properties it lacks expressive power. On the other hand second order logic is so strong that it fails to have nice model theoretic properties such as compactness. It is desirable to find natural logics which are stronger than first order logic but which still satisfy the compactness theorem. Particularly attractive are those logics which allow quantification over natural algebraic objects. One of the most natural choices is to quantify over automorphisms of a structure (or isomorphisms between substructures). Generally compactness fails badly \cite{Sh56}, but if we restrict ourselves to certain concrete classes then we may be able to retain compactness. In this paper we will show that if we enrich first order logic by allowing quantification over isomorphisms between definable ordered fields, the resulting logic, $L(Q_{\rm Of})$, is fully compact. In this logic, we can give standard compactness proofs of various results. For example, to prove that there exist arbitrarily large rigid real closed fields, fix a cardinal $\kappa$ and form the $L(Q_{\rm Of})$ theory in the language of ordered fields together with $\kappa$ constants which says that the constants are pairwise distinct and the field is a real closed field which is rigid. (To say the field is rigid we use the expressive power of $L(Q_{\rm Of})$ to say that any automorphism is the identity.) This theory is consistent as the reals can be expanded to form a model of any finite subset of the theory. But a model of the theory must have cardinality at least $\kappa$. (Since we do not have the downward L\"owenheim-Skolem theorem, we cannot assert that there is a model of cardinality $\kappa$.) In \cite{IV} and \cite{Sh84}, the compactness of two interesting logics is established under certain set-theoretic hypotheses.
The logics are those obtained from first order logic by adding quantifiers which range over automorphisms of definable Boolean algebras or which range over automorphisms of definable ordered fields. Instead of the weaker version of dealing with automorphisms, it is also possible to deal with a quantifier which says that two Boolean algebras are isomorphic or that two ordered fields are isomorphic. The key step in proving these results lies in establishing the following theorems. (By definable we shall mean definable with parameters). \begin{theorem} \label{dmd-thm} Suppose $\lambda$ is a regular cardinal and both $\diamondsuit(\lambda)$ and $\diamondsuit(\set{\alpha < \lambda^+}{{\rm cf\/} \alpha = \lambda})$ hold. Then if $T$ is any consistent theory and $|T| < \lambda$, there is a model $M$ of $T$ of cardinality $\lambda^+$ with the following properties: \begin{trivlist} \item[(i)] If $B$ is a Boolean algebra definable in $M$, then every automorphism of $B$ is definable. \item[(ii)] $M$ is $\lambda$-saturated. \item[(iii)] Every non-algebraic type of cardinality $< \lambda$ is realized in $M$ by $\lambda^+$ elements. \end{trivlist} \end{theorem} \begin{theorem} \label{dmdfield} Suppose $\lambda$ is a regular cardinal and both $\diamondsuit(\lambda)$ and $\diamondsuit(\set{\alpha < \lambda^+}{{\rm cf\/} \alpha = \lambda})$ hold. Then if $T$ is any consistent theory and $|T| < \lambda$, there is a model $M$ of $T$ of cardinality $\lambda^+$ with the following properties: \begin{trivlist} \item[(i)] If $F$ is an ordered field definable in $M$ then every automorphism of $F$ is definable and every isomorphism between definable ordered fields is definable. \item[(ii)] $M$ is $\lambda$-saturated. \item[(iii)] Every non-algebraic type of cardinality $< \lambda$ is realized in $M$ by $\lambda^+$ elements. \item[(iv)] Every definable dense linear order is not the union of $\lambda$ nowhere dense sets.
\end{trivlist} \end{theorem} These theorems are proved in \cite{IV} (in \cite{IV}, section 9, Theorem~\ref{dmd-thm} is proved from GCH), although they are not explicitly stated there. In order to show the desired compactness result (from the assumption that there are unboundedly many cardinals $\lambda$ as in the theorem statements) it is enough to use (i). However in our work on Boolean algebras we will need the more exact information above. Let us notice how the compactness of the various languages follows from these results. Since the idea is the same in all cases just consider the case of Boolean algebras. First we will describe the logic ${\rm L}(Q_{\rm Ba})$. We add second order variables (to range over automorphisms of Boolean algebras) and a quantifier $Q_{\rm Ba}$ whose intended interpretation is that there is an automorphism of the Boolean algebra. More formally if $\Theta(f)$, $\phi(x), \psi(x,y), \rho(x, y)$ are formulas (where $f$ is a second order variable, $x$ and $y$ are first order variables and the formulas may have other variables) then $$Q_{\rm Ba} f\,(\phi(x), \psi(x,y))\,\,\Theta(f)$$ is a formula. In a model $M$ the tuple $(\phi(x), \psi(x,y))$ {\em defines} a Boolean algebra (where parameters from $M$ replace the hidden free variables of $\phi(x), \psi(x,y)$) if $\psi(x, y)$ defines a partial order $<$ on $B = \set{a \in M}{M\models \phi[a]}$ so that $(B; <)$ is a Boolean algebra. A model $M$ satisfies the formula $Q_{\rm Ba} f\,(\phi(x), \psi(x,y))\,\,\Theta(f)$ (where parameters from $M$ have been substituted for the free variables), if whenever $(\phi(x), \psi(x, y))$ defines a Boolean algebra $(B; <)$ then there is an automorphism $f$ of $(B; <)$ such that $M\models\Theta (f)$. (It is easy to extend the treatment to look at Boolean algebras which are definable on equivalence classes, but we will avoid the extra complication.)
We can give a more colloquial description of the quantifier $Q_{\rm Ba}$ by saying that the interpretation of $Q_{\rm Ba}$ is that ``$Q_{\rm Ba} f (B) \ldots$'' holds if there is an automorphism of the Boolean algebra $B$ so that \ldots. We will describe some of the other logics we deal with in this looser manner. For example, we will want to consider the quantifier $Q_{\rm Of}$ where $Q_{\rm Of} f (F_1, F_2) \ldots$ holds if there is an isomorphism $f$ from the ordered field $F_1$ to the ordered field $F_2$ such that \ldots. The proof of compactness for ${\rm L}(Q_{\rm Ba})$ follows easily from Theorem~\ref{dmd-thm}. By expanding the language\footnote{More exactly, for every model $M$ define a model $M^*$ with universe $M\cup\{f: f$ a partial function from $\vert M\vert$ to $\vert M\vert\}$ with the relations of $M$ and the unary predicate $P$, $P^{M^*}=\vert M\vert$, and the ternary predicate $R$, $R=\{(f, a, b): f\in {}^MM$, $a\in M, b=f(a)\}$. We shall similarly transform a theory $T$ to $T'$ and consider automorphisms only of structures $\subseteq P$.} we can assume that there is a ternary relation $R(*,*,*)$ so that the theory says that any first order definable function is definable by $R$ and one parameter. By the ordinary compactness theorem, if we are given a consistent theory in this logic then there is a model of the theory where all the sentences of the theory hold if we replace automorphisms by definable automorphisms in the interpretation of $Q_{\rm Ba}$, since quantification over definable automorphisms can be replaced by first order quantification. Then we can apply the theorem to get a new model elementarily equivalent to the one given by the compactness theorem in which definable automorphisms and automorphisms are the same. In the following we will make two assumptions about all our theories. First that all definable partial functions are in fact defined by a fixed formula (by varying the parameters).
Second we will always assume that the language is countable except for the constant symbols. In this paper we will attempt to get compactness results without recourse to $\diamondsuit$, i.e., all our results will be in ZFC. We will get the full result for the language where we quantify over automorphisms (isomorphisms) of ordered fields in Theorem 6.4. Unfortunately we are not able to show that the language with quantification over automorphisms of Boolean algebras is compact, but will have to settle for a close relative of that logic. This is Theorem 5.1. In Section 4 we prove that we can construct models in which all relevant automorphisms are somewhat definable: 4.1 and 4.8 for Boolean algebras, 4.13 for ordered fields. The reader may wonder why these results are being proved now, about 10 years after the results that preceded them. The key technical innovation that has made these results possible is the discovery of $\diamondsuit$-like principles which are true in ZFC. These principles, which go under the common name of the Black Box, allow one to prove, with greater effort, many of the results which were previously known to follow from $\diamondsuit$ (see the discussion in \cite{[Sh-e]} for more details). There have been previous applications of the Black Box to abelian groups, modules and Boolean algebras --- often building objects with specified endomorphism rings. This application goes deeper both in the sense that the proof is more involved and in the sense that the result is more surprising. The investigation is continued in \cite{[Sh 384]}, \cite{[Sh 482]}. In this paper we will also give a new proof of the compactness of another logic --- the one which is obtained when a quantifier $Q_{{{\rm Brch}}}$ is added to first order logic which says that a level tree (definitions will be given later) has an infinite branch.
This logic was previously shown to be compact --- in fact it was the first logic stronger than first order logic on countable structures to be shown compact in ZFC --- but our proof will yield a somewhat stronger result and provide a nice illustration of one of our methods. (The first logic stronger than first order logic which was shown to be compact was the logic which expresses that a linear order has cofinality greater than $\omega_1$ \cite{cof}.) This logic, $L(Q_{{{\rm Brch}}})$, has been used by Fuchs-Shelah \cite{316} to prove the existence of nonstandard uniserial modules over (some) valuation domains. The proof uses the compactness of the tree logic to transfer results proved using $\diamondsuit$ to ZFC results. Eklof \cite{Ek} has given an explicit version of this transfer method and was able to show that it settles other questions which had been raised. (Osofsky \cite{Os1}, \cite{Os2} has found ZFC constructions which avoid using the model theory.) Theorems 3.1 and 3.2 contain parallel results for Boolean algebras and fields. They assert the existence of a theory (of sets) $T_1$ such that in each model $M_1$ of $T_1$, $P(M_1)$ is a model $M$ of the first order theory $T$ such that for every Boolean algebra (respectively field) defined in $M$, every automorphism of the Boolean algebra (respectively field) that is definable in $M_1$ is definable in $M$. Moreover, each such $M_1$ has an elementary extension one of whose elements is a pseudofinite set $a$ with the universe of $M_1$ contained in $a$ and such that $t(a/M_1)$ is definable over the empty set. This result depends on the earlier proof of our main result assuming $\diamondsuit$ and absoluteness. Theorem 4.1 uses the Black Box to construct a model $\hbox{\eufb\char"43}$ of $T_1$ so that for any automorphism $f$ of a Boolean algebra $B = P(\hbox{\eufb\char"43})$ there is a pseudofinite set $c$ such that for any atom $b \in B$, $f(b)$ is definable from $b$ and $c$.
Theorem 4.13 is an analogous but stronger result for fields, showing that for any $b$, $f(b)$ is definable from $b$ and $c$. In Lemma 4.7, this pointwise definability is extended by constructing a pseudofinite partition of the atoms of the Boolean algebra (respectively the elements of the field) such that $f$ is definable on each member of the partition. In Theorem 5.1 for Boolean algebras and Theorem 6.4 for fields this local definability is extended to global definability. \subsection{Outline of Proof} We want to build a model $A$ of a consistent $L(Q_{\rm Of})$ theory $T$ which has only definable isomorphisms between definable ordered fields. By the ordinary compactness theorem, there is a non-standard model $\hbox{\eufb\char"43}$ of an expansion of a weak set theory (say ${\rm ZFC}^-$) which satisfies that there is a model $A_1$ of $T$. So $A_1$ would be a model of $T$ if the interpretation of the quantifier $Q_{\rm Of}$ were taken to range over isomorphisms which are internal to $\hbox{\eufb\char"43}$. We can arrange that $A_1$ will be the domain of a unary predicate $P$. Then our goal is to build our non-standard model $\hbox{\eufb\char"43}$ of weak set theory in such a way that every external isomorphism between definable ordered subfields of $P(\hbox{\eufb\char"43})$ is internal, i.e., definable in $\hbox{\eufb\char"43}$. The construction of $\hbox{\eufb\char"43}$ is a typical construction with a prediction principle, in this case the Black Box, where we kill isomorphisms which are not pointwise definable over a set which is internally finite (or synonymously, {\em pseudofinite}). A predicted isomorphism is killed by adding an element which has no suitable candidate for its image. One common problem that is faced in such constructions is the question ``how do we ensure that no possible image of such an element exists?''. To do this we need to omit some types. Much is known about omitting a type of size $\lambda$ in models of power $\lambda$ and even $\lambda^+$.
But if, say, $2^{\lambda} > \lambda^{++}$, we cannot omit a dense set of types of power $\lambda$. So without instances of GCH we are reduced to omitting small types, which is much harder. To omit the small types we will use techniques which originated in ``Classification Theory''. In the construction we will have, for some cardinal $\theta$, that the type of any element does not split over a set of cardinality less than $\theta$ (see precise definitions below). This is analogous to saying the model is $\theta$-stable (of course we are working in a very non-stable context). The element we will add will have the property that its image (if one existed) would split over every set of cardinality $< \theta$. The final problem is to go from pointwise definability to definability. The first ingredient is a general fact about $\aleph_0$-saturated models of set theory. We will show for any isomorphism $f$ that there is a large (internal) set $A$ and a pseudofinite sequence of one-one functions $(f_i \colon i < k^*)$ which cover $f \mathord{\restriction} A$ in the sense that for every $a \in A$ there is $i$ so that $f_i(a) = f(a)$. Using this sequence of functions it is then possible to define $f$ on a large subset of $A$. Finally, using the algebraic structure, the definition extends to the entire ordered field. In this paper we will need to use the following principle. In order to have the cleanest possible statement of our results (and to conform to the notation in \cite{[Sh-e]}), we will state our results using slightly non-standard notation. To obtain the structure ${\rm H}_\chi(\lambda)$, we first begin with a set of ordered urelements of order type $\lambda$ and then form the least set containing each urelement and closed under formation of sets of size less than $\chi$. When we refer to $\lambda$ in the context of ${\rm H}_\chi(\lambda)$, we will mean the urelements and not the ordinals. In practice we believe that in a given context there will be no confusion.
\begin{theorem} Suppose $\lambda = \mu^+$, $\mu = \kappa^\theta = 2^\kappa$, $\chi$ is a regular cardinal, $\kappa$ is a strong limit cardinal, $\theta < \chi < \kappa$, $\kappa > {\rm cf\/} \kappa = \theta \geq \aleph_0$ and $S \subseteq \set{\delta < \lambda}{{\rm cf\/} \delta = \theta }$ is stationary. Let $\rho$ be some cardinal greater than $\lambda$. Then we can find $W = \set{(\bar{M}^\alpha, \eta^\alpha)}{\alpha < \alpha(\ast)}$ (actually a sequence)\footnote{In our case $\alpha(*)=\lambda$ is fine.}, a function $\zeta:\alpha(\ast) \to S$ and $(C_\delta \colon \delta \in S)$ such that: \begin{trivlist} \item[(a1)] $\bar{M}^\alpha = (M^\alpha_i \colon i \leq \theta)$ is an increasing continuous elementary chain, each $M_i^\alpha$ is a model belonging to ${\rm H}_\chi(\lambda)$ (and so necessarily has cardinality less than $ \chi$), $M^\alpha_i \cap \chi$ is an ordinal, $\eta^\alpha \in {}^\theta\!\lambda$ is increasing with limit $\zeta(\alpha) \in S$, for $i<\theta$, $\eta^\alpha\mathord{\restriction} i \in M^\alpha_{i+1}$, $M^\alpha_i \in {\rm H}_\chi(\eta^\alpha(i))$ and $(M^\alpha_j \colon j \leq i) \in M_{i+1}^\alpha$. \item[(a2)] For any set $X \subseteq \lambda$ there is $\alpha$ so that $M^\alpha_\theta \equiv_{\lambda \cap M^\alpha_\theta} ({\rm H}(\rho), \in, <, X)$, where $<$ is a well ordering of ${\rm H}(\rho)$ and $M\equiv_A N$ means $(M, a)_{a\in A}, (N, a)_{a\in A}$ are elementarily equivalent. \item[(b0)] If $\alpha \neq \beta$ then $\eta^\alpha \neq \eta^\beta$. \item[(b1)] If $\set{\eta^\alpha \mathord{\restriction} i}{i < \theta} \subseteq M^\beta_\theta$ and $\alpha \neq \beta$ then $\zeta(\alpha) < \zeta(\beta)$. \item[(b2)] If $\eta^\alpha \mathord{\restriction} (j + 1) \in M^\beta_\theta$ then $M^\alpha_j \in M^\beta_\theta$. \item[(c2)] $\bar{C} = (C_\delta \colon \delta \in S)$ is such that each $C_\delta$ is a club subset of $\delta$ of the order type $\theta$.
\item[(c3)] Let $C_\delta = \set{\gamma_{\delta,i}}{i < \theta}$ be an increasing enumeration. For each $\alpha < \alpha(\ast)$ there is $( (\gamma^-_{\alpha, i}, \gamma^+_{\alpha,i}) \colon i < \theta)$ such that: $\gamma^-_{\alpha,i} \in M^\alpha_{i+1}$, $M^\alpha_{i+1} \cap \lambda \subseteq \gamma^+_{\alpha,i}$, $\gamma_{\zeta(\alpha),i} < \gamma^-_{\alpha,i} < \gamma^+_{\alpha,i} < \gamma_{\zeta(\alpha),i+1}$ and if $\zeta(\alpha) = \zeta(\beta)$ and $\alpha \ne \beta$ then for every large enough $i < \theta$, $[\gamma^-_{\alpha,i}, \gamma^+_{\alpha,i}) \cap [\gamma^-_{\beta,i}, \gamma^+_{\beta,i}) = \emptyset$. Furthermore for all $i$, the sequence $(\gamma^-_{\alpha, j} \colon j < i)$ is in $M^\alpha_{i+1}$. \end{trivlist} \end{theorem} This principle, which is one of the Black Box principles, is a form of $\diamondsuit$ which is a theorem of ZFC. This particular principle is proved in \cite{[Sh-e]}, III~6.13(2). The numbering here is chosen to correspond with the numbering there. Roughly speaking clauses (a1) and (a2) say that there is a family of elementary substructures which predict every subset of $\lambda$ as it sits in $\mbox{\rm H}(\lambda)$. (We will freely talk about a countable elementary substructure {\em predicting} isomorphisms and the like.) The existence of such a family would be trivial if we allowed all elementary substructures of cardinality less than $\chi$. The rest of the clauses say that the structures are sufficiently disjoint that we can use the information that they provide without (too much) conflict. The reader who wants to follow the main line of the arguments without getting involved (initially) in the complexities of the Black Box can substitute $\diamondsuit(\lambda)$ for the Black Box. Our proof of the compactness of ${\rm L}(Q_{\rm Of})$ does not depend on Theorem~\ref{dmdfield}, so even this simplification gives a new proof of the consistency of the compactness of ${\rm L}(Q_{\rm Of})$.
Our work on Boolean algebras does require Theorem~\ref{dmd-thm}. The results in this paper were obtained while the first author was visiting the Hebrew University in Jerusalem. He wishes to thank the Institute of Mathematics for its hospitality. \section{Non-splitting extensions} \setcounter{theorem}{0} In this section $\theta$ will be a fixed regular cardinal. Our treatment is self-contained, but the reader can look at \cite{[Sh-a]}. \noindent {\sc Definition}. If $M$ is a model and $X, Y, Z \subseteq M$, then $X/Y$ {\em does not split over} $Z$ if and only if for every finite $d \subseteq Y$ the type of $X$ over $d$ (denoted either ${\rm tp}(X/d)$ or $X/d$) depends only on the type of $d$ over $Z$. We will use two constructions to guarantee that types will not split over small sets. The first is obvious from the definition. (The type of $A/B$ is {\em definable over} $C$ if for any tuple $\bar{a} \in A$ and formula $\phi(\bar{x}, \bar{y})$ there is a formula $\psi(\bar{y})$ with parameters from $C$ so that for any $\bar{b}$, $\phi(\bar{a}, \bar{b})$ holds if and only if $\psi(\bar{b})$ does.) \begin{proposition} If $X/Y$ is definable over $Z$ then $X/Y$ does not split over $Z$. \end{proposition} \noindent {\sc Definition}. Suppose $M$ is a model and $X, Y \subseteq M$. Let $D$ be an ultrafilter on $X^\alpha$. Then $\mbox{Av}(X,D,Y)$ (read: the $D$-average type that $X^\alpha$ realizes over $Y$) is the type $p$ over $Y$ defined by: for $\bar{y}\subseteq Y$, $\phi(\bar{z}, \bar{y}) \in p$ if and only if $\{\bar{x} \in X^\alpha : \phi(\bar{x}, \bar{y}) \mbox{ holds}\} \in D$. We will omit $Y$ if it is clear from context. Similarly we will omit $\alpha$ and the ``bar'' for singletons, i.e., in the case $\alpha =1$. The following two propositions are clear from the definitions. \begin{proposition} \label{ult} If $\bar{a}$ realizes $\mbox{Av}(X,D,Y)$ then $\bar{a}/Y$ does not split over $X$.
Also if there is $Z$ such that for $\bar{b} \in X$ the type of $\bar{b}/Y$ does not split over $Z$, then $\mbox{Av}(X,D,Y)$ does not split over\ $Z$. \end{proposition} \begin{proposition} \label{trans} \begin{trivlist} \item[(i)] Suppose $A/B$ does not split over\ $D$, $B \subseteq C$ and $C/B\cup A$ does not split over\ $D\cup A$; then $A\cup C/B$ does not split over\ $D$. \item[(ii)] Suppose that $(A_i \colon i < \delta)$ is an increasing chain and for all $i$, $A_i/B$ does not split over\ $C$; then $\bigcup_{i < \delta} A_i/B$ does not split over\ $C$. \item[(iii)] $X/Y$ does not split over\ $Z$ if and only if\ $X/{\rm dcl}(Y \cup Z)$ does not split over\ $Z$. Here ${\rm dcl} (Y \cup Z)$ denotes the definable closure of $Y \cup Z$. \item[(iv)] If $X/Y$ does not split over\ $Z$ and $Z \subseteq W$, then $X/Y$ does not split over\ $W$. \end{trivlist} \end{proposition} \noindent {\sc Definition}. Suppose $M_1 \prec M_2$ are models. Define $M_1 \prec^\otimes_\theta M_2$ if for every $X \subseteq M_2$ of cardinality less than $\theta$ there is $Y \subseteq M_1$ of cardinality less than $\theta$ so that $X/M_1$ does not split over\ $Y$. (If $\theta$ is regular, then we only need to consider the case where $X$ is finite.) \begin{proposition} Assume that $\theta$ is a regular cardinal (needed for (2) only) and that all models are models of some fixed theory with Skolem functions (although this is needed for (3) only). \begin{trivlist} \item[1.] $\prec^\otimes_\theta$ is transitive and for all $M$, $M \prec^\otimes_\theta M$. \item[2.] If $(M_i \colon i < \delta)$ is a $\prec^\otimes_\theta$-increasing chain, then for all $i$, $M_i \prec^\otimes_\theta \bigcup_{j < \delta} M_j$. \item[3.] Suppose $M_2$ is generated by $M_1 \cup N_2$ and $N_1 = M_1 \cap N_2$. (Recall that we have Skolem functions.) If $|N_1| < \theta$ and $N_2/M_1$ does not split over\ $N_1$, then $M_1 \prec^\otimes_\theta M_2$.
\end{trivlist} \end{proposition} An immediate consequence of these propositions is the following. \begin{proposition} \label{saturated} Suppose $M \prec^\otimes_\theta N$; then there is a $\theta$-saturated model $M_1$ such that $N \prec M_1$ and $M \prec^\otimes_\theta M_1$. \end{proposition} {\sc Proof}\ By the propositions above it is enough to show that given a set $X$ of cardinality $< \theta$ and a type $p$ over $X$ we can find a realization $a$ of that type so that $a/N$ does not split over\ $X$. Since we have Skolem functions, every finite subset of $p$ is realized by an element whose type over $N$ does not split over $X$, namely an element of the Skolem hull of $X$. So we can take $a$ to realize an average type of these elements. \fin \section{Building New Theories} \setcounter{theorem}{0} The models we will eventually build will be particular non-standard models of an enriched version of ${\rm ZFC}^-$. (Recall that ${\rm ZFC}^-$ is ZFC without the power set axiom and is true in the sets of hereditary cardinality $< \kappa$ for any regular uncountable cardinal $\kappa$.) The following two theorems state that appropriate theories exist. \begin{theorem} \label{bathy} Suppose $T$ is a theory in a language which is countable except for constant symbols, and $P_0$ a unary predicate so that in every model $M$ of $T$ every definable automorphism of a definable atomic Boolean algebra$\,\subseteq P^M_0$ is definable by a fixed formula (together with some parameters). Then there is $T_1$, an expansion of ${\rm ZFC}^-$ in a language which is countable except for constant symbols with a unary predicate $P_0$, so that if $M_1$ is a model of $T_1$ then $P_0(M_1)$ is a model $M$ of $T$ (when restricted to the right vocabulary) and the following are satisfied to be true in $M_1$. \begin{trivlist} \item[(i)] Any automorphism of a definable (in $M$) atomic Boolean algebra contained in $P_0(M)$ which is definable in $M_1$ is definable in $M$.
\item[(ii)] $M_1$ (which is a model of ${\rm ZFC}^-$) satisfies: for some regular cardinal $\mu$ (of $M_1$), $|M| = \mu^+$, $M$ is $\mu$-saturated and every non-algebraic type (in the language of $M$) of cardinality $< \mu$ is realized in $M$ by $\mu^+$ elements of $M$. \item[(iii)] $M \in M_1$. \item[(iv)] $M_1$ satisfies the separation scheme for all formulas (not just those of the language of set theory). \item[(v)] $M_1$ has Skolem functions. \item[(vi)] For any $M_1$ there is an elementary extension $N_1$ so that the universe of $M_1$ is contained in a pseudofinite set of $N_1$ (i.e., one which is finite in $N_1$) whose type over the universe of $M_1$ is definable over the empty set. \end{trivlist} \end{theorem} {\sc Proof}\ We first consider a special case. Suppose that there is a cardinal $\lambda$ greater than the cardinality of $T$ satisfying the hypotheses of Theorem~\ref{dmd-thm} (i.e., both $\diamondsuit(\lambda)$ and $\diamondsuit(\{\alpha < \lambda^+ : {\rm cf\/} \alpha = \lambda\})$ hold). Then we could choose $\kappa$ a regular cardinal greater than $\lambda^+$. Our model $M_1$ will be taken to be a suitable expansion of $\her{\kappa}$ where the interpretation of the unary predicate $P_0$ is the model $M$ guaranteed by Theorem~\ref{dmd-thm} and $\mu = \lambda$, $\mu^+ = \lambda^+$. Since any formula in the enriched language is equivalent in $M_1$ to a formula of set theory together with parameters from $M_1$, $M_1$ will also satisfy (iv). What remains is to ensure (v) and that appropriate elementary extensions always exist, i.e., clause (vi). To achieve this we will expand the language by induction on $n$. Let $L_0$ be the language consisting of the language of $T$, $\{P\}$, and Skolem functions, and let $M_1^0$ be any expansion by Skolem functions of the structure on $\her{\kappa}$ described above.
Fix an index set $I$ and an ultrafilter $D$ on $I$ so that there is $a \in \her{\kappa}^I/D$ such that $\her{\kappa}^I/D \models \mbox{`` $a$ is finite''}$ and for all $b \in \her{\kappa}$, $\her{\kappa}^I/D \models b \in a$. Let $N_1^0 = {M_1^0}^I/D$. Then for every formula $\phi(y, x_1, \ldots, x_n)$ of $L_0$ which does not involve constants, add a new $n$-ary relation $R_\phi$. Let $L_{0.5}$ be the language containing all the $R_\phi$. Let $M_1^{0.5}$ be the $L_{0.5}$ structure with universe $H(\kappa)$ obtained by letting, for all $b_1, \ldots, b_n$, $M_1^{0.5}\models R_\phi[b_1, \ldots, b_n]$ if and only if $N_1^0 \models \phi[a, b_1, \ldots, b_n]$. Let $L_1$ be an extension of $L_{0.5}$ by Skolem functions and let $M_1^1$ be an expansion of $M_1^{0.5}$ by Skolem functions. Condition (iv) still holds as it holds for any expansion of $(H(\kappa), \in)$. We now let $N^1_1 = {M^1_1}^I/D$ and continue as before. Let $L = \bigcup_{n <\omega} L_n$ and $M_1 = \bigcup_{n<\omega} M_1^n$ (i.e., the least common expansion; the universe stays the same). Let $T_1$ be the theory of $M_1$. As we have already argued, $T_1$ has properties (i)--(iv). It remains to see that any model of $T_1$ has the desired extension property. First we consider $M_1$ and let $N_1 = {M_1}^I/D$. Then the type of $a$ over $M_1$ is definable over the empty set using the relations $R_\phi$ which we have added. Since $T_1$ has Skolem functions, for any model $A_1$ of $T_1$ there will be an extension $B_1$ of $A_1$ generated by $A_1$ and an element realizing the definable type over $A_1$. In the general case, where we may not have the necessary hypotheses of Theorem~\ref{dmd-thm}, we can force with a notion of forcing which adds no new subsets of $|T|$ to get some $\lambda$ satisfying the hypotheses of Theorem~\ref{dmd-thm} (alternately, we can use ${\rm L}[A]$ where $A$ is a large enough set of ordinals).
Since the desired theory will exist in an extension, it already must exist in the ground model (as it can be coded by a subset of $|T|$). \fin Later on we will be juggling many different models of set theory: the ones which are given by the Black Box, and the non-standard ones which are models of $T_1$. When we want to refer to notions in models of $T_1$, we will use words like ``pseudofinite'' to refer to sets which are satisfied to be finite in the model of $T_1$. In the same way as we proved the last theorem we can show the following theorem. \begin{theorem} \label{ofthy} Suppose $T$ is a theory in a language with a unary predicate $P_0$ which is countable except for constant symbols, so that in every model $M$ of $T$ every definable isomorphism between definable ordered fields$\,\subseteq P_0^M$ is definable by a fixed formula (together with some parameters). Then there is $T_1$, an expansion of ${\rm ZFC}^-$ in a language which is countable except for constant symbols with a unary predicate $P_0$, so that if $M_1$ is a model of $T_1$ then $P_0(M_1)$ is a model $M$ of $T$ and the following are satisfied to be true in $M_1$. \begin{trivlist} \item[(i)] Any isomorphism of definable (in $M$) ordered fields contained in $P_0$ which is definable in $M_1$ is definable in $M$. \item[(ii)] $M \in M_1$. \item[(iii)] $M_1$ satisfies the separation scheme for all formulas (not just those of the language of set theory). \item[(iv)] $M_1$ has Skolem functions. \item[(v)] For any $M_1$ there is an elementary extension $N_1$ so that the universe of $M_1$ is contained in a pseudofinite set of $N_1$ (i.e., one which is finite in $N_1$) whose type over the universe of $M_1$ is definable over the empty set. \end{trivlist} \end{theorem} Since we have no internal saturation conditions, this theorem can be proved without recourse to Theorem~\ref{dmdfield} (see the next section for an example of a similar construction).
\subsection{A Digression} The method of expanding the language to get extensions which realize a definable type is quite powerful in itself. We can use the method to give a new proof of the compactness of a logic which extends first order logic and is stronger even for countable structures. This subsection is not needed in the rest of the paper. \begin{lemma} \label{tree-lemma} Suppose that $N$ is a model. Then there is a consistent expansion of $N$ to a model of a theory $T_1$ with Skolem functions so that for every model $M$ of $T_1$ there is $a$ so that $a/M$ is definable over the empty set and in $M(a)$ (the model generated by $M$ and $\{a\}$) for every definable directed partial ordering $<$ of $M$ without a last element there is an element greater than any element of $M$ in the domain of $<$. Furthermore the cardinality of the language of $T_1$ is no greater than that of $N$ plus $\aleph_0$. \end{lemma} {\sc Proof}\ Fix a model $N$. Choose $\kappa$ and an ultrafilter $D$ so that for every directed partial ordering $<$ of $N$ without a last element there is an element of $N^\kappa/D$ which is greater than every element of $N$ (e.g., let $\kappa = |N|$ and $D$ be any regular ultrafilter on $\kappa$). Fix an element $a \in N^\kappa/D \setminus N$. Abusing notation, we will let $a :\kappa\to N$ be a function representing the element $a$. The new language is defined by induction on $n < \omega$. Let $N = N_0$. There are three tasks, so we divide the construction of $N_{n+1}$ into three cases. If $n \equiv 0 \bmod 3$, expand $N_n$ to $N_{n+1}$ by adding Skolem functions. If $n \equiv 1 \bmod 3$, add a $k$-ary relation $R_\phi$ for every formula of arity $k+1$ and let $R_\phi(b_0, \ldots, b_{k-1})$ hold if and only if $\phi(b_0, \ldots, b_{k-1}, a)$ holds in ${N_n}^\kappa/D$. If $n \equiv 2 \bmod 3$, we ensure that there is an upper bound to every definable directed partial order without a last element in $N_n$.
For each $k+2$-ary formula $\phi(x_0, \ldots, x_{k-1}, y, z)$ we will add a $k+1$-ary function $f_\phi$ so that for all $\bar{b}$, if $\phi(\bar{b}, y,z)$ defines a directed partial order without a last element then $f_\phi(\bar{b},a)$ is greater in that partial order than any element of $N$. Notice that there is something to do here since we must define $f_\phi$ on $N$ and then extend to $N^\kappa/D$ using the ultraproduct. For each such $\bar{b}$ choose a function $c:\kappa\to N$ so that $c/D$ is an upper bound (in the partial order) to all the elements of $N$. Now choose $f_\phi$ so that $f_\phi(\bar{b}, a(i)) = c(i)$. Let $T_1$ be the theory of the expanded model. Suppose now that $M$ is a model of $T_1$. The type we want is the type $p$ defined by $\phi(c_0, \ldots, c_{k-1},x) \in p$ if and only if $M \models R_\phi(c_0, \ldots, c_{k-1})$. \fin In the process of building the new theory there are some choices made of the language. But these choices can be made uniformly for all models. We will in the sequel assume that such a uniform choice has been made. From this lemma we can prove a stronger version of a theorem from \cite{II}, which says that the logic which allows quantification over branches of level-trees is compact. A {\em tree} is a partial order in which the predecessors of any element are totally ordered. A {\em level-tree} is a tree together with a ranking function to a directed set. More exactly, a {\em level-tree} is a model $(A; U, V, <_1, <_2, R)$ where: \begin{enumerate} \item $A$ is the union of $U$ and $V$; \item $<_1$ is a partial order of $U$ such that for every $u \in U$, $\{y \in U : y <_1 u\}$ is totally ordered by $<_1$; \item $<_2$ is a directed partial order on $V$ with no last element; \item $R$ is a function from $U$ to $V$ which is strictly order preserving. \end{enumerate} The definition here is slightly more general than in \cite{II}. In \cite{II}, the levels were required to be linearly ordered.
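To fix ideas, we note a standard example (not needed in the sequel).

\noindent {\sc Example}. Let $U = {}^{\omega>}\omega$ be the set of finite sequences of natural numbers, $V = \omega$ with its usual order, $<_1$ the relation of being a proper initial segment, $R(t)$ the length of $t$, and $A = U \cup V$ (with $U$ and $V$ taken disjoint). Since a proper initial segment of a sequence is strictly shorter, $R$ is strictly order preserving, so this is a level-tree. Its branches, in the sense defined below, are exactly the sets $\{\eta\mathord{\restriction} n : n < \omega\}$ for $\eta \in {}^{\omega}\!\omega$.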
Also what we have called a ``level-tree'' is called a ``tree'' in \cite{II}. A {\em branch} $b$ of a level-tree is a maximal linearly-ordered subset of $U$ such that $\{R(t) : t \in b\}$ is unbounded in $V$. We will refer to $U$ as the tree and $V$ as the levels. For $t \in U$ the {\em level} of $t$ is $R(t)$. A tuple of formulas (which may use parameters from $M$), $$(\phi_1(x),\phi_2(x), \psi_1(x, y), \psi_2(x, y), \rho(x,y, z)),$$ defines a level-tree in a model $M$ if $$(\phi_1(M) \cup \phi_2(M);\phi_1(M),\phi_2(M), \psi_1(x, y)^M, \psi_2(x, y)^M, \rho(x,y,z)^M)$$ is a level-tree. (There is no difficulty in extending the treatment to all level-trees which are definable using equivalence relations.) Given the definition of a level-tree, we now define an extension of first order logic by adding second order variables (to range over branches of level-trees) and a quantifier $Q_{\rm Brch}$, such that $$Q_{\rm Brch} \,b (\phi_1(x),\phi_2(x), \psi_1(x, y), \psi_2(x, y), \rho(x,y, z)) \Theta(b)$$ says that if $(\phi_1(x),\phi_2(x), \psi_1(x, y), \psi_2(x, y), \rho(x,y, z))$ defines a level-tree then there is a branch $b$ of the level-tree such that $\Theta(b)$ holds. In \cite{II}, it is shown that (a first-order version of) this logic is compact. This is the first logic shown (in ZFC) to be fully compact and stronger than first order logic for countable structures. In \cite{II}, the models are obtained at successors of regular cardinals. \begin{theorem} The logic $L(Q_{\rm Brch})$ is compact. Furthermore every consistent theory $T$ has a model in all uncountable cardinals $\kappa > |T|$. \end{theorem} {\sc Proof}\ By expanding the language we can assume that any model of $T$ admits elimination of quantifiers. (I.e., add new relations for each formula and the appropriate defining axioms.) For each finite $S \subseteq T$, choose a model $M_S$ of $S$.
For each $S$ choose a cardinal $\mu$ so that $M_S \in {\rm H}(\mu)$ and let $N_S$ be the model $({\rm H}(\mu^+), M_S, \in)$, where $M_S$ is the interpretation of a new unary predicate $P$ and the language of $N_S$ includes the language of $T$ (with the correct restriction to $M_S$). By expanding the structure $N_S$ we can assume that the theory $T_S$ of $N_S$ satisfies the conclusion of Lemma~\ref{tree-lemma}. Furthermore we note that $T_S$ satisfies two additional properties. If a formula defines a branch in a definable level-tree contained in the domain of $P$ then this branch is an element of the model $N_S$. As well, $N_S$ satisfies that $P(N_S)$ is a model of $S$. Now let $D$ be an ultrafilter on the finite subsets of $T$ such that for all finite $S$, $\{S_1 : S \subseteq S_1\} \in D$. Finally let $T_1$ be the (first-order) theory of $\prod N_S/D$. If $N$ is any model of $T_1$ then for any sentence $\phi \in T$, $N$ satisfies ``$P(N)$ satisfies $\phi$''. If we can arrange that the only branches of an $L(Q_{\rm Brch})$-definable (in the language of $T$) level-tree of $P(N)$ are first order definable in $N$, then satisfaction in $P(N)$ of an $L(Q_{\rm Brch})$-formula will be the same in $N$ and in the real world. Before constructing this model let us note that our task is a bit easier than it might seem. \begin{claim} Suppose that $N$ is a model of $T_1$ and every branch of a first-order definable (in the language of $T$) level-tree of $P(N)$ is first order definable in $N$; then every branch of an $L(Q_{\rm Brch})$-definable (in the language of $T$) level-tree of $P(N)$ is first order definable in $N$. \end{claim} {\sc Proof}\ (of the claim) Since we have quantifier elimination, we can prove by induction on the construction of formulas that satisfaction is the same in $N$ and in the real world, and so quantifier elimination holds for $P(N)$. In other words, any $L(Q_{\rm Brch})$-definable level-tree is first order definable.
It remains to do the construction and prove that it works. To begin, let $N_0$ be any model of $T_1$ of cardinality $\kappa$. Let $\mu \leq \kappa$ be any uncountable regular cardinal. We will construct an increasing elementary chain of models $N_\alpha$ for $\alpha < \mu$ by induction. At limit ordinals we will take unions. If $N_\alpha$ has been defined, let $N_{\alpha +1} = N_\alpha(a_\alpha)$, where $a_\alpha$ is as guaranteed by Lemma~\ref{tree-lemma}. Now let $N = \bigcup_{\alpha<\mu} N_\alpha$. \begin{subclaim} Suppose $X$ is any subset of $N$ which is definable by parameters. Then for all $\alpha$, $X \cap N_\alpha$ is definable in $N_\alpha$. \end{subclaim} {\sc Proof}\ (of the subclaim) Suppose not, and let $\beta$ be the least ordinal greater than $\alpha$ so that $X \cap N_\beta$ is definable in $N_\beta$. Such an ordinal must exist since for sufficiently large $\beta$ the parameters necessary to define $X$ are in $N_\beta$. Similarly there is $\gamma$ such that $\beta = \gamma + 1$. Since $N_\beta$ is the Skolem hull of $N_\gamma \cup \{a_\gamma\}$, there is $\bar{b} \in N_\gamma$ and a formula $\phi(x, \bar{y}, z)$ so that $X \cap N_\beta$ is defined by $\phi(x, \bar{b}, a_\gamma)$. But by the definability of the type of $a_\gamma$ over $N_\gamma$ there is a formula $\psi(x, \bar{y})$ so that for all $a, \bar{c} \in N_\gamma$, $N_\beta \models \psi(a, \bar{c}) \mbox{ if and only if } \phi(a, \bar{c}, a_\gamma)$. Hence $\psi(x, \bar{b})$ defines $X \cap N_\gamma$ in $N_\gamma$, a contradiction. It remains to see that every branch of a definable level-tree is definable. Suppose $(A; U, V,<_1, <_2, R)$ is a definable level-tree. Without loss of generality we can assume it is definable over the empty set. Let $B$ be a branch. For any $\alpha < \mu$ there is $c \in V$ so that for all $d \in V \cap N_\alpha$, $d <_2 c$. Since the levels of $B$ are unbounded in $V$, $B \cap N_\alpha$ is not cofinal in $B$. Hence there is $b \in B$ so that $B \cap N_\alpha \subseteq \{a \in U : a <_1 b\}$.
By the subclaim, $B \cap N_\alpha$ is definable in $N_\alpha$. Since $\mu$ has uncountable cofinality, by Fodor's lemma there is $\alpha< \mu$ so that for unboundedly many (and hence all) $\gamma < \mu$, $B \cap N_\gamma$ is definable by a formula with parameters from $N_\alpha$. Fix a formula $\phi(x)$ with parameters from $N_\alpha$ which defines $B \cap N_\alpha$. Then $\phi(x)$ defines $B$. To see this, consider any $\gamma$ with $\alpha < \gamma < \mu$ and a formula $\psi(x)$ with parameters from $N_\alpha$ which defines $B \cap N_\gamma$. Since $N_\alpha$ satisfies ``for all $x$, $\phi(x)$ if and only if $\psi(x)$'' and $N_\alpha \prec N_\gamma$, $\phi(x)$ also defines $B \cap N_\gamma$. \fin \noindent {\sc Remark}. The compactness result above is optimal as far as the cardinality of the model is concerned. Any countable level-tree has a branch and so there is no countable model which is $L(Q_{\rm Brch})$-equivalent to an Aronszajn tree. By the famous theorem of Lindstr\"om \cite{Li}, this result is the best that can be obtained for any logic, since any compact logic which is at least as powerful as first order logic and has countable models for all sentences is in fact first order logic. The existence of a compact logic such that every consistent countable theory has a model in all uncountable cardinals was first proved by Shelah \cite{cof}, who showed that first order logic remains compact if we add a quantifier $Q^{\rm cf}$ which says of a linear order that its cofinality is $\omega$. (Lindstr\"om's theorem and the logic $L(Q^{\rm cf})$ are also discussed in \cite{Ba}.) The logic $L(Q_{\rm Brch})$ has the advantage that it is stronger than first order logic even for countable models \cite{II}. Notice that in the proof above, in any definable level-tree the directed set of levels has cofinality $\mu$. Since we can obtain any uncountable cofinality, this is also the best possible result.
Also in the theorem above we can demand just $\kappa \ge |T| + \aleph_1$. \section{The Models} \setcounter{theorem}{0} For the purposes of this section let $T$ and $T_1$ be theories as in Theorem~\ref{bathy} or Theorem~\ref{ofthy} (for Boolean algebras or ordered fields, respectively). In this section we will build models of our theory $T_1$. The case of Boolean algebras and the case of ordered fields are similar, but there are enough differences that they have to be treated separately. We shall deal with Boolean algebras first. We want to approximate the goal of having every automorphism of every definable atomic Boolean algebra in the domain of $P$ be definable. In this section, we will get that they are definable in a weak sense. In order to spare ourselves some notational complications we will make a simplifying assumption and prove a weaker result. It should be apparent at the end how to prove the same result for every definable atomic Boolean algebra in the domain of $P$. \noindent {\sc Assumption.} Assume $T$ is the theory of an atomic Boolean algebra on $P$ with some additional structure. \begin{theorem} \label{Ba-thm} There is a model $\hbox{\eufb\char"43}$ of $T_1$ so that if $B = P(\hbox{\eufb\char"43})$ and $f$ is any automorphism of $B$ as a Boolean algebra, then there is a pseudofinite set $c$ so that for any atom $b \in B$, $f(b)$ is definable from $b$ and elements of $c$. \end{theorem} {\sc Proof}\ We will use the notation from the Black Box. In particular we will use an ordered set of urelements of order type $\lambda$. We can assume that $\mu$ is larger than the cardinality of the language (including the constants).
We shall build a chain of structures $(\hbox{\eufb\char"43}_\varepsilon \colon \varepsilon < \lambda)$ such that the universe of $\hbox{\eufb\char"43}_\varepsilon$ will be an ordinal $<\lambda$ and the universe of $\hbox{\eufb\char"43}=\bigcup_{\varepsilon<\lambda} \hbox{\eufb\char"43}_\varepsilon$ will be $\lambda$ (we can specify in a definable way what the universe of $\hbox{\eufb\char"43}_\varepsilon$ is; e.g., $\mu\cdot(1+\varepsilon)$ is o.k.). We choose $\hbox{\eufb\char"43}_\varepsilon$ by induction on $\varepsilon$. Let $B_\varepsilon = P(\hbox{\eufb\char"43}_\varepsilon)$. We will view the $(\bar M^\alpha, \eta^\alpha)\in W$ in the Black Box as predicting a sequence of models of $T_1$ and an automorphism of the Boolean algebra. (See the following paragraphs for more details on what we mean by predicting.) The construction will be done so that if $\varepsilon \not\in S$ and $\varepsilon < \zeta$ then $\hbox{\eufb\char"43}_\varepsilon \prec^\otimes_\theta \hbox{\eufb\char"43}_\zeta$. (We will make further demands later.) The model $\hbox{\eufb\char"43}$ will be $\bigcup_{\varepsilon<\lambda} \hbox{\eufb\char"43}_\varepsilon$. When we are done, if $f$ is an automorphism of the Boolean algebra then we can choose $(\bar M^\alpha, \eta^\alpha)\in W$ to code, in a definable way, $f$ and the sequence $(\hbox{\eufb\char"43}_\varepsilon \colon \varepsilon < \lambda)$. The limit stages of the construction are determined. The successor stage when $\varepsilon \notin S$ is simple. We construct $\hbox{\eufb\char"43}_{\varepsilon+1}$ so that $\hbox{\eufb\char"43}_\varepsilon\prec^\otimes_\theta \hbox{\eufb\char"43}_{\varepsilon+1}$, $\hbox{\eufb\char"43}_{\varepsilon+1}$ is $\theta$-saturated and there is a pseudofinite set in $\hbox{\eufb\char"43}_{\varepsilon+1}$ which contains $\hbox{\eufb\char"43}_\varepsilon$.
By the construction of the theory $T_1$ there is $c$, a pseudofinite set which contains $\hbox{\eufb\char"43}_\varepsilon$, such that the type of $c$ over $\hbox{\eufb\char"43}_\varepsilon$ is definable over the empty set. Hence $\hbox{\eufb\char"43}_\varepsilon \prec^\otimes_\theta \hbox{\eufb\char"43}_\varepsilon(c)$. By Proposition~\ref{saturated} there is $\hbox{\eufb\char"43}_{\varepsilon+1}$ which is $\theta$-saturated so that $\hbox{\eufb\char"43}_\varepsilon(c) \prec^\otimes_\theta \hbox{\eufb\char"43}_{\varepsilon+1}$. Finally by transitivity (Proposition~\ref{trans}), $\hbox{\eufb\char"43}_\varepsilon\prec^\otimes_\theta \hbox{\eufb\char"43}_{\varepsilon+1}$. The difficult case occurs when $\varepsilon \in S$; rename $\varepsilon$ as $\delta$. Consider $\alpha$ so that $\zeta(\alpha) = \delta$. We are interested mainly in $\alpha$'s which satisfy:\\ $(*)$ \ \ \ The model $M_\theta^\alpha$ ``thinks'' it is of the form $(\her\rho,\in, <, X)$ and by our coding yields (or predicts) a sequence of structures $(\mon{D}_\nu\colon \nu < \lambda)$ and a function $f_\alpha$ from $\mon{D} = \bigcup_{\nu < \lambda} \mon{D}_\nu$ to itself. (Of course, all the urelements in $M_\theta^\alpha \cap \lambda$ will have order type less than $\delta$.)\\ At the moment we will only need to use the function predicted by $M_\theta^\alpha$. We will say an {\em obstruction occurs at} $\alpha$ if (($*$) holds and) we can make the following choices. If possible choose $N_\alpha \subseteq \hbox{\eufb\char"43}_\delta$ with $N_\alpha \in M_0^\alpha$ of cardinality less than $\theta$ and a sequence of atoms $(a^\alpha_i \colon i < \theta)$ so that (naturally $a^\alpha_i\in M^\alpha_{i+1}$) $a^\alpha_i/\hbox{\eufb\char"43}_{\gamma^-_{\alpha,i}}$ does not split over\ $N_\alpha$, $a_i^\alpha \in [{\gamma^-_{\alpha,i}}, {\gamma^+_{\alpha,i}})$ and $f_\alpha(a^\alpha_i)$ is not definable over $a^\alpha_i$ and parameters from $\hbox{\eufb\char"43}_{\gamma^-_{\alpha, i}}$.
At ordinals where an obstruction occurs we will take action to stop $f_\alpha$ from extending to an automorphism of $B=B^{\hbox{\eufb\char"43}}$. Notice that $N_\alpha$ is contained in $M^\alpha_0\subseteq M_\theta^\alpha$. Suppose an obstruction occurs at $\alpha$. Let $X_\alpha$ be the set of finite joins of the $\{a^\alpha_i : i < \theta\}$. In the obvious way, $X_\alpha$ can be identified with the set of finite subsets of $Y_\alpha=\{a^\alpha_i : i < \theta\}$. Fix $U_\alpha$ an ultrafilter on $X_\alpha$ so that for all $x\in X_\alpha$, $\{y \in X_\alpha : x \subseteq y\} \in U_\alpha$. Now define, by induction on $\alpha \in \mbox{Ob}(\delta) = \{\alpha : \zeta(\alpha) = \delta \mbox{ and an obstruction occurs at }\alpha\}$, an element $x_\alpha$ so that $x_\alpha$ realizes the $U_\alpha$-average type of $X_\alpha$ over $\hbox{\eufb\char"43}_\delta \cup\{x_\beta : \beta \in \mbox{Ob}(\delta), \beta < \alpha\}$. Then $\hbox{\eufb\char"43}_{\delta + 1}$ is the Skolem hull of $\hbox{\eufb\char"43}_\delta \cup \{x_\alpha : \alpha \in \mbox{Ob}(\delta)\}$. We now want to verify the inductive hypothesis and give a stronger property which we will use later in the proof. The key is the following claim. \begin{claim} Suppose $\alpha_0, \ldots, \alpha_{n-1} \in \mbox{\rm Ob}(\delta)$; then for all but a bounded set of $\gamma < \delta$, $(\bigcup_{k < n} Y_{\alpha_k})/\hbox{\eufb\char"43}_\gamma$ does not split over\ $\bigcup_{k<n} N_{\alpha_k} \cup \bigcup_{k<n} (\hbox{\eufb\char"43}_\gamma \cap Y_{\alpha_k})$. \end{claim} {\sc Proof}\ (of the claim) Suppose $\gamma$ is large enough so that for all $ m \not=k < n$, $[{\gamma^-_{\alpha_m,i}}, {\gamma^+_{\alpha_m,i}}) \cap [{\gamma^-_{\alpha_k,i}}, {\gamma^+_{\alpha_k,i}}) = \emptyset$ whenever $\gamma^-_{\alpha_m, i}\geq \gamma$ (recall clause (c3) of the Black Box).
It is enough to show by induction on $\gamma \leq \sigma < \delta$ that $(\bigcup_{k < n} Y_{\alpha_k}) \cap \hbox{\eufb\char"43}_\sigma$ has the desired property. For $\sigma = \gamma$ there is nothing to prove. In the inductive proof we only need to look at a place where the set increases. By the hypothesis on $\gamma$ we can suppose the result is true up to $\sigma =\gamma^-_{\alpha_k, i}$ and try to prove the result for $\sigma =\gamma^+_{\alpha_k, i}$ (since new elements are added only in these intervals). The new element added is $a^{\alpha_k}_i$. Denote this element by $a$. By hypothesis, $a/\hbox{\eufb\char"43}_{\gamma^-_{\alpha_k, i}}$ does not split over\ $N_{\alpha_k}$ and so also not over $\bigcup_{k<n} N_{\alpha_k} \cup \bigcup_{k<n} (\hbox{\eufb\char"43}_\gamma \cap Y_{\alpha_k})$. Now we can apply the induction hypothesis and Proposition~\ref{trans}. Notice that $X_\alpha$ is contained in the definable closure of $Y_\alpha$ and vice versa, so we also have that $(\bigcup_{k < n} X_{\alpha_k})/\hbox{\eufb\char"43}_\gamma$ does not split over\ $\bigcup_{k<n} N_{\alpha_k} \cup \bigcup_{k<n} (\hbox{\eufb\char"43}_\gamma \cap Y_{\alpha_k})$. We can immediately verify the induction hypothesis that if $\gamma<\delta$, $\gamma\notin S$ then $\hbox{\eufb\char"43}_\gamma\prec^\otimes_\theta \hbox{\eufb\char"43}_{\delta +1}$. It is enough to verify for $\alpha_0, \dots, \alpha_{n-1}$ and sufficiently large $\beta$ that $(x_{\alpha_0},\ldots, x_{\alpha_{n-1}})/\hbox{\eufb\char"43}_\beta$ does not split over $\bigcup_{k<n} N_{\alpha_k} \cup\bigcup_{k<n} (\hbox{\eufb\char"43}_\beta \cap Y_{\alpha_k})$ (a set of size $<\theta$). But this sequence realizes the ultrafilter average of $X_{\alpha_0}\times \cdots \times X_{\alpha_{n-1}}$. So we are done by Proposition~\ref{ult}. This completes the construction. Before continuing with the proof, notice that we get the following from the claim.
\begin{claim} \label{nspl-claim} 1) For all $\alpha \in \mbox{\rm Ob}(\delta)$ and $D \subseteq \hbox{\eufb\char"43}_{\delta+1}$, if $|D| < \theta$ then for all but a bounded set of $i < \theta$, $D/\hbox{\eufb\char"43}_{\rho_{\alpha,i}^+}$ does not split over $\hbox{\eufb\char"43}_{\rho_{\alpha,i}^-} \cup \{a^\alpha_i\}$. Moreover for all but a bounded set of $i < \theta$, $D/\hbox{\eufb\char"43}_{\rho_{\alpha,i}^+}$ does not split over a subset of $\hbox{\eufb\char"43}_{\rho_{\alpha,i}^-} \cup \{a^\alpha_i\}$ of size $< \theta$.\\ 2) For every subset $D$ of $\hbox{\eufb\char"43}_{\delta+1}$ of cardinality $<\theta$ there are a subset $w$ of $\mbox{\rm Ob}(\delta)$ of cardinality $<\theta$ and a subset $Z$ of $\hbox{\eufb\char"43}_\delta$ of cardinality $<\theta$ such that the type of $D$ over $\hbox{\eufb\char"43}_\delta$ does not split over $Z\cup \bigcup_{j\in w} Y_j$.\\ 3) In (2), for every large enough $i$ and every $\alpha\in w$, the type of $D$ over $\hbox{\eufb\char"43}_{\rho^+_{\alpha, i}}$ does not split over $Z\cup \bigcup \{Y_j\cap \hbox{\eufb\char"43}_{\rho^-_{\alpha, i}}: j\in w\}\cup\{a^\alpha_i\}$.\\ 4) In (2), (3) we can allow $D\subseteq \hbox{\eufb\char"43}$. \end{claim} We now have to verify that $\hbox{\eufb\char"43}$ has the desired properties. Assume that $f$ is an automorphism of $B=B^{\hbox{\eufb\char"43}}$. We must show: \begin{claim} There is $\rho$ so that for all atoms $a$, $f(a)$ is definable with parameters from $\hbox{\eufb\char"43}_\rho$ and $a$. \end{claim} {\sc Proof}\ (of the claim) Assume that $f$ is a counterexample. For every $\rho \notin S$, choose an atom $a_\rho$ which witnesses that the claim is false with respect to $\hbox{\eufb\char"43}_\rho$. Since $\{\delta < \lambda : \delta \notin S, {\rm cf\/} \delta \geq \theta\}$ is stationary, there is a set $N$ of cardinality less than $\theta$ so that for a stationary set of $\rho$, $a_\rho/\hbox{\eufb\char"43}_\rho$ does not split over $N$.
In fact (since $(\forall\alpha<\lambda)\,\alpha^{<\theta}<\lambda$, as $\lambda=\mu^+$ and $\mu^{<\theta}=\mu$) for all but a bounded set of $\rho$ we can use the same $N$. Let $X$ code the sequence $(\hbox{\eufb\char"43}_\rho \colon \rho < \lambda)$ and the function $f$. Then, by the previous discussion, $(\her{\rho}, \in, <, X)$ satisfies ``there exists $N \subseteq \hbox{\eufb\char"43}$ so that $|N| < \theta$ and for all but a bounded set of ordinals $\rho$, there is an atom $z$ so that $z/\hbox{\eufb\char"43}_\rho$ does not split over $N\mbox{''}$. Choose $\alpha$ so that $$M^\alpha_\theta\equiv_{M^\alpha_\theta\cap\lambda} (\her{\rho}, \in, <, X).$$ It is now easy to verify that an obstruction occurs at $\alpha$. Let $\delta = \zeta(\alpha)$. In this case, $f_\alpha$ is the restriction of $f$. We use the notation of the construction. By the construction there is $D\subseteq \hbox{\eufb\char"43}_{\delta+1}$, $\vert D\vert <\theta$, such that the type of $f(a_\alpha)$ over $\hbox{\eufb\char"43}_{\delta+1}$ does not split over $D$. Apply parts (2), (3) of Claim~\ref{nspl-claim} above and get $Z$, $w$ and $i^* < \theta$ (the $i^*$ just explicates ``for every large enough $i$'' to ``for every $i \in [i^*,\theta)$''). Let $D^*=Z\cup\bigcup\{Y_j \cap \hbox{\eufb\char"43}_{\rho^-_{\alpha, i}}:j\in w\}$, so for every $i \in [i^*,\theta)$ we have 1) $f(a_\alpha)/\hbox{\eufb\char"43}_{\delta+1}$ does not split over $D$ ($\subseteq \hbox{\eufb\char"43}_{\delta+1}$).
\\ 2) $D/\hbox{\eufb\char"43}_{\rho^+_{\alpha,i}}$ does not split over $D^*\cup \{a^\alpha_i\}$\\ 3) $D^* \cup \{a^\alpha_i\}\subseteq \hbox{\eufb\char"43}_{\rho^+_{\alpha,i}}\subseteq \hbox{\eufb\char"43}_{\delta+1}$\\ so by the basic properties of non-splitting, $f(x_\alpha)/\hbox{\eufb\char"43}_{\rho_{\alpha, i}^+}$ does not split over $D^*\cup \{a^\alpha_i\}$; note also that $D^* \cup N_\alpha \subseteq \hbox{\eufb\char"43}_{\rho_{\alpha, i}^-}$ and $|D^* \cup N_\alpha| < \theta$, where $N_\alpha$ comes from the construction. An important point is that by elementarity, for all ordinals $\tau \in M^\alpha_i \cap \lambda$ and atoms $a \in M^\alpha_i$ there is an ordinal $\beta$ so that $\tau < \beta \in M^\alpha_i \cap \lambda$, $a \in B_\beta$, $\hbox{\eufb\char"43}_\beta$ is $\theta$-saturated (just take ${\rm cf\/} \beta \geq \theta$) and $f$ is an automorphism of $B_\beta$. Choose such a $\beta \in M^\alpha_{i+1}$ with respect to $a^\alpha_i$ and $\rho^-_{\alpha,i}$. Since $f(a^\alpha_i)$ is not definable from $a^\alpha_i$ and parameters from $\hbox{\eufb\char"43}_{\rho_{\alpha, i}^-}$ and $\hbox{\eufb\char"43}_\beta$ is $\theta$-saturated, there is $b \neq f(a^\alpha_i)$ realizing the same type over $D^* \cup \{a^\alpha_i\} \cup \{f(a^\alpha_j) : j<i\}$ with $b \in B_\beta$. Now for any atom $c \in B_\beta$ we have (by the definition of $x_\alpha$) that $c \leq x_\alpha$ if and only if $c = a^\alpha_j$ for some $j \leq i$. Since this property is preserved by $f$, we have that $f(a^\alpha_i) \leq f(x_\alpha)$ and $b \not \leq f(x_\alpha)$. But $\beta < \rho^+_{\alpha, i}$ and $f(x_\alpha)/\hbox{\eufb\char"43}_{\rho^+_{\alpha, i}}$ does not split over $\{a^\alpha_i\} \cup D^*$. So we have arrived at a contradiction. \fin In the proof above, if we take $\theta$ to be uncountable then we can strengthen the theorem (although we will not have any current use for the stronger form).
\begin{theorem} In the Theorem above, if $\theta$ is uncountable then there is a finite set of formulas $L'$ and a pseudofinite set $c$ so that for every atom $b \in B$, $f(b)$ is $L'$-definable over $\{b\} \cup c$. \end{theorem} {\sc Proof}\ The argument so far has constructed a model in which every automorphism of $B$ is pointwise definable on the atoms over some $\hbox{\eufb\char"43}_\rho$ (i.e., for every atom $b \in B$, $f(b)$ is definable from $b$ and parameters from $\hbox{\eufb\char"43}_\rho$). In the construction of the model we have that every $\hbox{\eufb\char"43}_\rho$ is contained in some pseudofinite set, so we are a long way towards our goal. To prove the theorem it remains to show that we can restrict ourselves to a finite sublanguage. (Since all the interpretations of the constants will be contained in $\hbox{\eufb\char"43}_1$ we can ignore them.) Choose $\hbox{\eufb\char"43}_\rho$ so that $f$ and $f^{-1}$ are pointwise definable over $\hbox{\eufb\char"43}_\rho$. Let $c$ be a pseudofinite set containing $\hbox{\eufb\char"43}_\rho$. We can assume that $f$ permutes the atoms of $B_\rho$. Let the language $L$ be the union of an increasing chain of finite sublanguages $(L_n \colon n < \omega)$. Assume by way of contradiction that for all $n$, $f$ is not pointwise definable on the atoms over any pseudofinite set (and hence not over any $\hbox{\eufb\char"43}_\alpha$) using formulas from $L_n$. Choose a sequence $d_n$ of atoms so that for all $n$, $e_n = f(d_n)$ is not $L_n$-definable over $\{d_n\} \cup c$ and both $d_{n+1}$ and $e_{n+1}$ are not definable over $\{d_k : k \leq n\}\cup \{e_k : k \leq n\} \cup c$. Furthermore $d_{n+1}$ should not be $L_{n}$-definable over $\{d_k : k \leq n\} \cup c$. The choice of $d_n$ is possible by hypothesis, since only a pseudofinite set of possibilities has been eliminated from the choice.
Let $\bar{d}, \bar{e}$ realize the average type (modulo some ultrafilter) over $\hbox{\eufb\char"43}_\rho \cup \{c\}$ of $\{((d_k \colon k \leq n), (e_k \colon k \leq n)) : n < \omega\}$. These are pseudofinite sequences which have $(d_n \colon n < \omega)$ and $(e_n \colon n < \omega)$ as initial segments. Say $\bar{d} = (d_i \colon i < n^*)$ for some non-standard natural number $n^*$. Now let $x \in B$ be the join of $\{d_i : i < n^*\}$. (This join exists since $\bar{d}$ is a pseudofinite sequence.) For every $i < n^*$ there is a (standard) $n_i$ so that $f(d_i)$ is $L_{n_i}$-definable over $\{d_i\} \cup \hbox{\eufb\char"43}_\rho$ and $d_i$ is $L_{n_i}$-definable over $\{f(d_i)\} \cup \hbox{\eufb\char"43}_\rho$. Since $\hbox{\eufb\char"43}$ is $\theta$-saturated, the coinitiality of $n^* \setminus \omega$ is greater than $\omega$. So there is some $n$ so that $\{i : n_i = n\}$ is coinitial. (Notice that for non-standard $i$, there is no connection between $e_i$ and $f(d_i)$.) Choose $k$ so that the formulas in $L_{n}$ have at most $k$ free variables. Let $Z$ be the set of subsets $Y$ of $c$ of size $k$ so that for all $i < n^*$, neither $d_i$ nor $e_i$ is $L_{n}$-definable from $Y \cup\{d_j, e_j : j < i\}$. By the choice of $\bar{d}, \bar{e}$, every subset of $\hbox{\eufb\char"43}_\rho$ of size $k$ is an element of $Z$. Consider $m$ with $n+1 < m < \omega$. We claim there is no atom $y < f(x)$ such that $d_m$ and $y$ are $L_n$-interdefinable over elements of $Z$. Suppose there is such a $y$; since $y \leq f(x)$, $y = f(d_i)$ for some $i < n^*$. Then $i$ is non-standard, since, by the choice of $Z$, $y \neq e_j$ for all $j$. Since $d_i$ is definable from $\{f(d_i)\} \cup c$, $d_i$ is definable from $\{d_m\} \cup c$. This contradicts the choice of the sequence. We can finally get our contradiction. For $i < n^*$, we have $i < \omega$ if and only if for all $j > n +1$, if there is an atom $y < f(x)$ so that $d_j$ and $y$ are $L_n$-interdefinable over elements of $Z$ then $i < j$. This would make $\omega$ definable in $\hbox{\eufb\char"43}$, which is impossible.
\fin We will want to work a bit harder and get that the automorphisms in the model above are actually definable on a large set. To this end we prove an easy graph-theoretic lemma. \begin{lemma} \label{graph-lemma} Suppose $G$ is a graph and there is $0 < k < \omega$ so that the valence of each vertex is at most $k$. Then there is a partition of (the set of nodes of) $G$ into $k^2$ pieces $A_0, \ldots, A_{k^2-1}$ so that for any $i$ and any node $v$, $v$ is adjacent to at most one element of $A_i$. Furthermore if $\lambda$ is an uncountable cardinal, each $A_i$ can be chosen to meet any given $\lambda$ sets of cardinality $\lambda$. \end{lemma} {\sc Proof}\ Apply Zorn's lemma to get a sequence $\langle A_0, \ldots, A_{k^2-1}\rangle$ of pairwise disjoint sets of nodes such that for $i<k^2$ and any node $v$, $v$ is adjacent to at most one member of $A_i$, and $\bigcup_{i< k^2} A_i$ is maximal under these constraints. Suppose there is $v$ which is not in any of the $A_i$. Then for each $i$, adding $v$ to $A_i$ must violate the constraints, so either two elements of $A_i$ are adjacent to $v$, or there are $u_i$ adjacent to $v$ and $w_i \neq v$ adjacent to $u_i$ so that $w_i\in A_i$; if neither occurred we could extend the partition by adding $v$ to $A_i$. Since the $A_i$ are pairwise disjoint, the first case occurs for at most $k/2$ values of $i$, and the second case yields distinct elements $w_i$, of which there are at most $k(k-1)$. As $k(k-1) + k/2 < k^2$, this is a contradiction. As for the second statement: since the valence is finite, every connected component is countable. Hence we can partition each connected component and then put the pieces together to get a partition meeting every one of the $\lambda$ sets. \fin Actually for infinite graphs we can get a sharper bound. Given $G$ we form an associated graph by joining vertices if they have a common neighbour. This gives a graph whose valence is at most $k^2 - k$. We want to vertex colour this new graph. Obviously (see the proof above) it can be vertex coloured in $k^2 +1 - k$ colours. In fact, by a theorem of Brooks (\cite{Ore}, Theorem 6.5.1), the result can be sharpened further.
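For a finite graph the counting in the proof of the lemma can be carried out greedily; the following sketch (our own illustration, not part of the text) assigns each vertex to one of $k^2$ classes and, by the counting bound, never gets stuck:

```python
def partition_bounded_valence(adj, k):
    """Partition the vertices of a graph with maximum degree <= k into
    k*k classes so that no vertex is adjacent to two members of one class.
    The counting argument of the lemma shows that at each step fewer
    than k*k classes are blocked, so the greedy choice always succeeds."""
    classes = [set() for _ in range(k * k)]
    for v in adj:
        for A in classes:
            cand = A | {v}
            # the class is usable iff no vertex sees two members of it
            if all(len(adj[x] & cand) <= 1 for x in adj):
                A.add(v)
                break
        else:
            raise AssertionError("blocked: contradicts the counting bound")
    return classes

# a 12-cycle has maximum degree k = 2, so 4 classes suffice
cycle = {i: {(i - 1) % 12, (i + 1) % 12} for i in range(12)}
parts = partition_bounded_valence(cycle, 2)
assert sum(len(A) for A in parts) == 12
assert all(len(cycle[v] & A) <= 1 for A in parts for v in cycle)
```

The maximality argument via Zorn's lemma in the proof corresponds here to the fact that the greedy loop always finds a usable class.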
In our work we will only need that the colouring is finite, so these sharpenings need not concern us. We will want to form in $\hbox{\eufb\char"43}$ a graph whose vertices are the atoms of $B$, with a pseudofinite bound on the valence, so that any atom $b$ is adjacent to $f(b)$. This is easy to do in the case where all the definitions of $f(b)$ from $\{b\} \cup c$ use only a finite sublanguage. In the general case (i.e., when ${\rm cf\/} \theta$ may be $\omega$) we have to cover the possible definitions by a pseudofinite set. \begin{lemma} Continue with the notation of the proof. Then there are a pseudofinite set $D$, a pseudofinite natural number $k^*$ and a set $Z$ of tuples of length at most $k^*$ so that for every formula $\phi(\bar{x},\bar{y})$ there is $d \in D$ so that for all $\bar{a} \in c$ and $\bar{b} \in B$, the tuple $(d, \bar{a}, \bar{b}) \in Z$ if and only if $\phi(\bar{a}, \bar{b})$. \end{lemma} {\sc Proof}\ Since $\hbox{\eufb\char"43}$ is $\aleph_0$-saturated, the lemma just says that a certain type is consistent. Now $B \in \hbox{\eufb\char"43}$ and $\hbox{\eufb\char"43}$ satisfies the separation scheme for all formulas (not just those of set theory). Hence for any formula $\phi$, $\{(\bar{a},\bar{b}) : \bar{a} \in c, \bar{b} \in B\mbox{ and } \phi(\bar{a},\bar{b})\}$ exists in $\hbox{\eufb\char"43}$. \fin Fix such sets $D$ and $Z$ for $c$. Say that an atom $a$ is {\em $D, Z$-definable from} $b$ over $c$ if there are $d \in D$ and a tuple (perhaps of non-standard length) $\bar{x} \in c$ so that $a$ is the unique atom of $B$ with $(d, \bar{x}, b, a) \in Z$. We say that $a$ and $b$ are {\em $D, Z$-interdefinable} over $c$ if $a$ is $D, Z$-definable from $b$ over $c$ and $b$ is $D, Z$-definable from $a$ over $c$. Notice (and this is the content of the last lemma) that if $a$ is definable from $b$ over $c$ then it is $D, Z$-definable from $b$ over $c$. We continue now with the notation of the theorem and the model we have built.
Suppose $f$ is an automorphism which is pointwise definable over a pseudofinite set $c$. Define the {\em $c$-graph} to be the graph whose vertices are the atoms of $B$, where $a, b$ are adjacent if $a$ and $b$ are $D, Z$-interdefinable over $c$. Since $D$, $k^*$ and $c$ are pseudofinite, this is a graph whose valence is bounded by some pseudofinite number. So in $\hbox{\eufb\char"43}$ we will be able to apply Lemma~\ref{graph-lemma}. We will say that $(A_i \colon i < n^*)$ is a {\em good partition} of the $c$-graph if it is an element of $\hbox{\eufb\char"43}$ which partitions the atoms of $B$ into a pseudofinite number of pieces and for any $i$ and any $a$, $a$ is adjacent to at most one element of $A_i$. If $(A_i \colon i < n^*)$ is a good partition then for all $i, j$ let $f_{i,j}$ be the partial function from $A_i$ to $A_j$ defined by letting $f_{i,j}(a)$ be the unique element of $A_j$, if any, adjacent to $a$; otherwise let $f_{i,j}(a)$ be undefined. We have proved the following lemma. \begin{lemma} Use the notation above. For all $a \in A_i$ there is a unique $j$ so that $f(a) = f_{i,j}(a)$. \end{lemma} The proof of the last lemma applies in a more general setting. Since we will want to use it later, we will formulate a more general result. \begin{lemma} \label{cover-lemma} Suppose $\hbox{\eufb\char"43}$ is an $\aleph_0$-saturated model of an expansion of $\mbox{ZFC}^-$ which satisfies the separation scheme for all formulas. If $A$, $B$ are sets in $\hbox{\eufb\char"43}$ and $f$ is a bijection from $A$ to $B$ such that $f$ and $f^{-1}$ are pointwise definable over some pseudofinite set, then there is (in $\hbox{\eufb\char"43}$) a partition of $A$ into pseudofinitely many sets $(A_i \colon i < k^*)$ and a pseudofinite collection $(f_{i,j} \colon i, j < k^*)$ of partial one-to-one functions so that for all $i, j$, the domain of $f_{i,j}$ is contained in $A_i$ and for all $a \in A_i$ there exists $j$ so that $f_{i,j}(a) = f(a)$.
Moreover if we are given (in $\hbox{\eufb\char"43}$) a family $P$ of $|A|$ (in $\hbox{\eufb\char"43}$'s sense) subsets of $A$ of cardinality $|A|$, we can demand that for every $i<k^*$ and $A^*\in P$ we have $\vert A_i\cap A^*\vert =\vert A\vert$ (in $\hbox{\eufb\char"43}$'s sense). \end{lemma} \begin{lemma} \label{definable} Use the notation and assumptions above. For any $i<k^*$, $f \mathord{\restriction} A_i$ is definable. \end{lemma} {\sc Proof}\ For each $\rho < \lambda$ such that $\rho \notin S$ and ${\rm cf\/} \rho \geq \theta$, choose $y_\rho$ so that $y_\rho$ is the join of a pseudofinite set of atoms contained in $A_i$ and containing $A_i \cap B_\rho$. \begin{claim} For all $a \in A_i$ so that $a \leq y_\rho$, $f(a)$ is definable from $f(y_\rho)$ and the set $\{f_{i,j}\colon j < n^*\}$. In particular this claim applies to all $a \in A_i \cap B_\rho$. \end{claim} {\sc Proof}\ (of the claim) First note that $f(a) < f(y_\rho)$ and so there is $j$ such that $f_{i,j}(a) < f(y_\rho)$. Suppose that there is $k \neq j$ so that $f_{i, k}(a) < f(y_\rho)$. Choose $b \in A_i$ (not necessarily in $B_\rho$) so that $f(b) = f_{i, k}(a)$ (such a $b$ must exist since every atom below $y_\rho$ is in $A_i$). But since $f_{i,k}$ is a partial one-to-one function, $a = b$. But $f(a) = f_{i,j}(a) \neq f_{i,k}(a)$. This is a contradiction, so $f \mathord{\restriction} (A_i \cap B_\rho)$ is defined by ``$b = f(a)$ if there exists $g \in \{f_{i,j} : j < n^*\}$ so that $b = g(a)$ and $g(a) < f(y_\rho)$''. Choose now $N_\rho$ of cardinality $< \theta$ so that the type of $(f(y_\rho), (f_{i,j} \colon j < n^*))$ over $\hbox{\eufb\char"43}_\rho$ does not split over $N_\rho$. Notice that $f \mathord{\restriction} (A_i \cap B_\rho)$ is definable as a disjunction of types over $N_\rho$, namely the types satisfied by the pairs $(a, f(a))$.
By Fodor's lemma and the cardinal arithmetic, there are $N$ and a stationary set on which all the $N_\rho = N$ and all the definitions as a disjunction of types coincide. So we have that $f \mathord{\restriction} A_i$ is defined as a disjunction of types over $N$, a set of cardinality $< \theta$. We now want to improve this definability to definability by a formula. We show that for all $\rho$, $f \mathord{\restriction} (A_i \cap B_\rho)$ is defined by a formula with parameters from $N$. This will suffice since some choice of parameters and formula will work for unboundedly many (and hence all) $\rho$. Suppose that $f \mathord{\restriction} (A_i \cap B_\rho)$ is not definable by a formula with parameters from $N$. Consider the following type in variables $x_1, x_2, z_1, z_2$. \begin{quotation} \noindent$\{\phi(x_1, x_2) \mbox{ if and only if\ }\phi(z_1, z_2) : \phi \mbox{ a formula with parameters from } N\}$ $\cup \{x_1 \leq y_\rho,\ z_1 \leq y_\rho\}$ $\cup \{\psi(x_1, x_2), \neg \psi(z_1,z_2)\}$, \noindent where $\psi(u, v)$ is a formula saying ``there exists $g \in \{f_{i,j} : j < n^*\}$ so that $v = g(u)$ and $g(u) < f(y_\rho)\mbox{''}$ \end{quotation} This type is consistent since by hypothesis it is finitely satisfiable in $A_i \cap B_\rho$. Since $\hbox{\eufb\char"43}$ is $\theta$-saturated there are $a_1, a_2, b_1, b_2$ realizing this type. Since $a_1, b_1 \leq y_\rho$, $\psi(a_1, a_2)$ implies that $f(a_1) = a_2$ and similarly $\neg \psi(b_1, b_2)$ implies that $f(b_1) \neq b_2$. On the other hand $(a_1, a_2)$ and $(b_1, b_2)$ realize the same type over $N$. This contradicts the choice of $N$. \fin The information we obtain above is a little less than we want, since we want a definability requirement on every automorphism of every definable Boolean algebra. It is easy to modify the proof so that we can get the stronger result.
Namely, in the application of the Black Box we attempt to predict the definition of a definable Boolean algebra in the scope of $P$ as well as the construction of the model $\hbox{\eufb\char"43}$ and an automorphism. With this change the proof goes as before. So the following stronger theorem is true. \begin{theorem} There is a model $\hbox{\eufb\char"43}$ of $T_1$ so that if $B \subseteq P(\hbox{\eufb\char"43})$ is a definable atomic Boolean algebra and $f$ is any automorphism of $B$ (as a Boolean algebra) then there is a pseudofinite set $c$ such that for any atom $b \in B$, $f(b)$ is definable from $\{b\} \cup c $. \end{theorem} As before, if $\theta$ is taken to be uncountable then we can find a finite sublanguage to use for all the definitions. The proof of the analogue of Theorem~\ref{Ba-thm} for ordered fields is quite similar. Let $T_1$ denote the theory constructed in Theorem~\ref{ofthy}. \begin{theorem} There is a model $\hbox{\eufb\char"43}$ of $T_1$ so that if $F_1, F_2 \subseteq P(\hbox{\eufb\char"43})$ are definable ordered fields and $f$ is any isomorphism from $F_1$ to $F_2$ then there is a pseudofinite set $c$ so that for all $b \in F_1$, $f(b)$ is definable from $\{b\} \cup c$. \end{theorem} {\sc Proof}\ To simplify the proof we will assume that $F = P(\hbox{\eufb\char"43})$ is an ordered field and $f$ is an automorphism of $F$. The construction is similar to the one for Boolean algebras, although we will distinguish between the case $\theta = \omega$ and the case that $\theta$ is uncountable; we will need to take a little more care in the latter case. The difference from the case of Boolean algebras occurs at stages $\delta \in S$. Again we predict the sequence of structures $(\hbox{\eufb\char"43}_i\colon i < \lambda)$ and an automorphism $f_\alpha$ of the ordered field $F$. We say that {\em an obstruction occurs at $\alpha$} if we can make the choices below.
(The intuition behind the definition of an obstruction is that there are infinitesimals of arbitrarily high order for which the automorphism is not definable.) There are two cases, the one where $\theta = \omega$ and the one where $\theta$ is uncountable. Since it is somewhat simpler, we will first consider the case $\theta = \omega$. Suppose an obstruction occurs at $\alpha$. By this we mean that we can choose $N_\alpha \in M^\alpha_0$ such that for each $i < \omega$ we can choose $a^\alpha_i\in \gint{\alpha}$ so that $a^\alpha_i/\hbox{\eufb\char"43}_{\rho^-_{\alpha, i}}$ does not split over $N_\alpha$, $a^\alpha_i >0$, $a^\alpha_i$ makes the same cut in $F_{\rho^-_{\alpha, i}}$ as $0$ does, and $f(a^\alpha_i)$ is not definable over $a^\alpha_i$ and parameters from $\hbox{\eufb\char"43}_{\rho^-_{\alpha, i}}$. Then we let $x_i^\alpha =\sum_{j \leq i} a^\alpha_j$. We choose $x_\alpha$ so that it realizes the average type over some non-principal ultrafilter of $\{x_i^\alpha : i <\omega\}$. As before we can show that both the inductive hypothesis and Claim~\ref{nspl-claim} are satisfied. Notice in the construction that $f_\alpha(x^\alpha_i)$ is not definable over $\{a^\alpha_i\} \cup\hbox{\eufb\char"43}_{\rho^-_{\alpha,i}}$, since $x^\alpha_i = x^\alpha_{i-1} + a^\alpha_i$ and $f(x^\alpha_i) = f(x^\alpha_{i-1}) + f(a^\alpha_i)$ (here we conventionally let $x^\alpha_{i-1} = 0$ when it is undefined) and each $M^\alpha_i$ is closed under $f_\alpha$. Notice as well that for all $i$, $x^\alpha_i < x_{\alpha}$, and $x^\alpha_i$ and $x_\alpha$ make the same cut in $\hbox{\eufb\char"43}_{\rho^+_{\alpha,i}}$. Suppose now that $f$ is an automorphism which is not pointwise definable over any $\hbox{\eufb\char"43}_\rho$. Since $F$ is an ordered field, this is equivalent to saying that $f$ is not definable on any interval.
Also, since $\hbox{\eufb\char"43}_\rho$ is contained in a pseudofinite set (in $\hbox{\eufb\char"43}$), there is a positive interval which makes the same cut in the pseudofinite set (and hence in $F_\rho$) as $0$ does. So we can choose $a_\rho$ in this interval so that $f(a_\rho)$ is not definable over $\hbox{\eufb\char"43}_\rho \cup \{a_\rho\}$. Arguing as before we can find a set $N \subseteq \hbox{\eufb\char"43}_{\rho}$ of cardinality less than $\theta$ such that for all but boundedly many $\rho$, $a_\rho$ can be chosen so that $a_\rho/\hbox{\eufb\char"43}_\rho$ does not split over $N$. Arguing as before we get an ordinal $\alpha$ so that an obstruction occurs at $\alpha$. Consider now $f(x_\alpha)$, $i$ and a finite set $N_\alpha \subseteq \hbox{\eufb\char"43}_{\rho^-_{\alpha, i}}$ so that $f(x_\alpha)/\hbox{\eufb\char"43}_{\rho^+_{\alpha,i}}$ does not split over $N_\alpha \cup \{a_i^\alpha\}$. Since $f(x^\alpha_i)$ is not definable over $\{a^\alpha_i\} \cup \hbox{\eufb\char"43}_{\rho^-_{\alpha, i}}$, there is $b > f(x^\alpha_i)$, $b \in F_{\rho^+_{\alpha, i}}$, which realizes the same type as $f(x^\alpha_i)$ over $N_\alpha \cup \{a^\alpha_i\}$. (Otherwise $f(x^\alpha_i)$ would be the rightmost element satisfying some formula with parameters in this set.) But $f(x^\alpha_i) < f(x_\alpha) <b$. This contradicts the choice of $N_\alpha$. We now consider the case where $\theta$ is uncountable. Here we have to do something different at limit ordinals $i$. If $i$ is a limit ordinal, we choose $x^\alpha_i\in \gint{\alpha} \cap M^\alpha_{i+1}$ to realize the average type of $\{x^\alpha_j : j < i\}$. For the successor case we let $x^\alpha_{i+1} = x^\alpha_i + a_{i+1}^\alpha$ (where the $a_{j}^\alpha$ are chosen as before). In this construction, for all $\rho$ and all $i$, $x_i^\alpha/\hbox{\eufb\char"43}_\rho$ does not split over $N_\alpha \cup \{x^\alpha_j : \rho^-_{\alpha, j} < \rho \}$.
This is enough to verify the inductive hypothesis and Claim~\ref{nspl-claim} if we restrict the statement to successor ordinals $i$. The rest of the proof can be finished as above. \fin To apply the theorem we explicate the explanation in the introduction. \begin{Observation} \label{4.x} For proving the compactness of $L(Q_{\rm Of})$ (or $L(Q_{BA})$, etc.) it suffices to prove: ($*$)\ Let $T$ be a first order theory and let $P$ be a unary and $R$ a ternary predicate such that for every model $M$ of $T$ and every automorphism $f$ of a definable ordered field (or Boolean algebra, etc.) $\subseteq P^M$ which is definable (with parameters), for some $c\in M$, $R(x, y, c)$ defines $f$. {\em Then} $T$ has a model $M^*$ such that every automorphism $f$ of a definable ordered field (or Boolean algebra, etc.) $\subseteq P^{M^*}$ is defined in $M^*$ by $R(x, y, c)$ for some $c\in M^*$. \end{Observation} {\sc Proof}\ Assume $(*)$ and let $T_0$ be a given theory in the stronger logic; without loss of generality all formulas are equivalent to relations. For simplicity ignore function symbols. For every model $M$ let $M'$ be the model with the universe $\vert M\vert \cup \{f:f$ a partial function from $\vert M\vert$ to $\vert M\vert\}$, and relations those of $M$, $P^{M'}=\vert M\vert$, $R^{M'}=\{(f, a, b):a, b\in M$, $f$ a partial function from $\vert M\vert$ to $\vert M\vert$ and $f(a)=b\}$. There is a parallel definition of $T'$ from $T_0$, and if we restrict to first order logic, we get a theory to which it suffices to apply $(*)$. \section{Augmented Boolean Algebras} \setcounter{theorem}{0} For Boolean algebras we do not have the full result we would like, but we can define the notion of an augmented Boolean algebra and then prove the compactness theorem for quantification over automorphisms of these structures.
\noindent {\sc Definition.} An {\em augmented Boolean algebra} is a structure $(B, \leq, I, P)$ so that $(B, \leq)$ is an atomic Boolean algebra, $I$ is an ideal of $B$ which contains all the atoms, $P \subseteq B$, $|P| > 1$, for $x \neq y \in P$ the symmetric difference of $x$ and $y$ is not in $I$, and for all atoms $x \neq y$ there is $z\in P$ such that either $x \leq z$ and $y \not\leq z$ or vice versa. Notice that if we know the restriction of an automorphism $f$ of an augmented Boolean algebra to $P$ then we can recover $f$: for any atom $x$, $f(x)$ is the unique atom $z$ so that for all $y\in P$, $z \leq f(y)$ if and only if $x \leq y$; and the action of $f$ on the atoms determines its action on the whole Boolean algebra. Let $Q_{\rm Aug}$ be the quantifier whose interpretation is: ``$Q_{\rm Aug} f\,(B_1, B_2) \ldots$'' holds if there is an isomorphism $f$ of the augmented Boolean algebras $B_1,B_2$ so that~$\ldots$. \begin{theorem} The logic {\rm L($Q_{\rm Aug}$)} is compact. \end{theorem} {\sc Proof}\ By 3.1 (and 4.9.1) it is enough to show that if $T$ is a theory which says that every automorphism of a definable Boolean algebra $\subseteq P$ is definable by a fixed formula, and $T_1$ and $\hbox{\eufb\char"43}$ are as above, then every automorphism $f$ of a definable augmented Boolean algebra $(B, \leq, I, P)$ of $P(\hbox{\eufb\char"43})$ is definable. We work for the moment in $\hbox{\eufb\char"43}$. First note that $I$ contains the pseudofinite sets, so $\hbox{\eufb\char"43}$ thinks that the cardinality of $B$ is some $\mu^*$ and for every $x, y \in P$ the symmetric difference of $x$ and $y$ contains $\mu^*$ atoms (since $\hbox{\eufb\char"43}$ thinks that $B$ is $\mu^*$-saturated). So, by Lemma~\ref{cover-lemma} and Lemma~\ref{definable}, we can find a set of atoms $A$ so that $f \mathord{\restriction} A$ is definable and, for every $x \neq y$ in $P$, $A$ contains some atom in the symmetric difference of $x$ and $y$.
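The recovery of $f$ from $f \mathord{\restriction} P$ noted above can be checked concretely on a toy finite algebra (the algebra, the separating family and the permutation below are our own illustration, not from the text):

```python
ATOMS = [0, 1, 2, 3]
# a family P of subsets separating the atoms, as the definition requires
P = [frozenset({0, 1}), frozenset({1, 2})]

pi = {0: 1, 1: 2, 2: 3, 3: 0}    # an automorphism of the powerset of ATOMS,
def f(s):                        # induced by a permutation of the atoms
    return frozenset(pi[x] for x in s)

f_on_P = {y: f(y) for y in P}    # all the data we are allowed to use

def recover(x):
    """Recover f(x) for an atom x from the restriction of f to P:
    f(x) is the unique atom z with z <= f(y) iff x <= y, for all y in P."""
    (z,) = [z for z in ATOMS
            if all((z in f_on_P[y]) == (x in y) for y in P)]
    return z

assert all(recover(x) == pi[x] for x in ATOMS)
```

The tuple unpacking `(z,) = [...]` checks the uniqueness claim: the separation property of $P$ guarantees exactly one candidate atom survives.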
Now we can define, for $y \in P$, $f(y)$ as the unique $z \in B$ so that for all $a \in A$, $a \leq y$ if and only if $f(a) \leq z$. Clearly $f(y)$ has this property. Suppose for the moment that there is some $z \neq f(y)$ which also has this property. Since $f$ is an automorphism there is $x \in P$ so that $f(x) = z$. Now choose $a\in A$ in the symmetric difference of $x$ and $y$. Then $f(a)$ is below exactly one of $f(y)$ and $f(x)$, a contradiction. \fin \section{Ordered Fields} \setcounter{theorem}{0} Here we will prove the compactness of the quantifier $Q_{\rm Of}$. Let $Q_{\rm Of}$ be the quantifier whose interpretation is: ``$Q_{\rm Of} f\,(F_1, F_2) \ldots$'' holds if there is an isomorphism $f$ of the ordered fields $F_1,F_2$ so that $\ldots$. We will use various facts about dense linear orders. A subset of a dense linear order is {\em somewhere dense} if it is dense in some non-empty open interval. A subset which is not somewhere dense\ is {\em nowhere dense}. The first few properties are standard and follow easily from the fact that a finite union of nowhere dense\ sets is nowhere dense. \begin{proposition} If a somewhere dense\ set is divided into finitely many pieces, one of the pieces is somewhere dense. \end{proposition} \begin{proposition} If $\{f_k : k \in K\}$ is a finite set of partial functions defined on a somewhere dense\ set $A$ then there is a somewhere dense\ subset $A' \subseteq A$ so that each $f_k$ is either total on $A'$ or no element of the domain of $f_k$ is in $A'$. \end{proposition} We can define an equivalence relation on functions to a linearly ordered set by $f\equiv g$ if for every non-empty open interval $I$ the symmetric difference of $f^{-1}(I)$ and $g^{-1}(I)$ is nowhere dense. \begin{proposition} \label{part-prop} Suppose $\cal F$ is a finite collection of functions from a somewhere dense\ set $A$ to a linearly ordered set.
Then there are a collection $\cal I$ of disjoint intervals (not necessarily open) and a somewhere dense\ set $A' \subseteq A$ so that for all $f, g \in \cal F$ either $f \mathord{\restriction} A' \equiv g \mathord{\restriction} A'$ or there are disjoint intervals $I, J\in \cal I$ so that $f(A') \subseteq I$ and $g(A') \subseteq J$. \end{proposition} {\sc Proof}\ The proof is by induction on the cardinality of $\cal F$. Suppose that ${\cal F} = {\cal G}\cup\{g\}$, and that $\cal J$ is a set of intervals and $A'' \subseteq A$ a somewhere dense\ set satisfying the conclusion of the proposition with respect to $\cal G$. If there are some $f \in \cal G$ and $A'''$ a somewhere dense\ subset of $A''$ such that $f \mathord{\restriction} A''' \equiv g \mathord{\restriction} A'''$, then we can choose a somewhere dense\ $A'\subseteq A'''$ such that for all $I\in {\cal J}$ we have $(f\mathord{\restriction} A')^{-1}(I)=(g\mathord{\restriction} A')^{-1}(I)$. Then $A'$ and $\cal J$ are as required in the proposition. So we can assume that no such $f$ and $A'''$ exist. Let $\{f_0, \ldots, f_n\}$ enumerate a set of $\equiv$-class representatives of $\cal G$ and let ${\cal J} = \{J_0, \ldots, J_n\}$ where $f_k(A'') \subseteq J_k$ for all $k$ (so for $\ell<m\le n$, $J_\ell\cap J_m=\emptyset$). By induction on $k$ we will define a descending sequence $A_k$ of somewhere dense\ subsets of $A''$ and intervals $I_k \subseteq J_k$ with the property that $f_k(A_k) \subseteq I_k$ and $g(A_k)$ is disjoint from $I_k$. Let $A_{-1} = A''$. Consider any $k$ and suppose $A_{k-1}$ has been defined. If possible, choose an interval $J_k' \subseteq J_k$ so that the symmetric difference of $(g\mathord{\restriction} A_{k-1})^{-1}(J_k')$ and $(f_k\mathord{\restriction} A_{k-1})^{-1}(J_k')$ is somewhere dense. There are two possibilities.
If $(f_k\mathord{\restriction} A_{k-1})^{-1}(J_k') \setminus (g\mathord{\restriction} A_{k-1})^{-1}(J_k')$ is somewhere dense, then let $A_k = (f_k\mathord{\restriction} A_{k-1})^{-1}(J_k') \setminus (g\mathord{\restriction} A_{k-1})^{-1}(J_k')$ and let $I_k = J_k'$. Otherwise we can choose $I_k$ a subinterval of $J_k \setminus J_k'$ and $A_k$ a somewhere dense\ subset of $(g\mathord{\restriction} A_{k-1})^{-1}(J_k')$ so that $f_k(A_k) \subseteq I_k$. If no such $J_k'$ existed then we would have $f_k\mathord{\restriction} A_{k-1}\equiv g\mathord{\restriction} A_{k-1}$, contrary to our assumption. Given $A_n$ and $I_0, \ldots, I_n$, we can choose $A_{n+1} \subseteq A_n$ somewhere dense\ and an interval $I_{n+1}$ disjoint from $I_k$ for all $k$, so that $g(A_{n+1}) \subseteq I_{n+1}$. Finally we let $A' = A_{n+1}\cap \bigcap_{f\in \cal G} f^{-1}(\bigcup_{k\leq n} I_k)$ and ${\cal I}=\{I_0,\ldots,I_n\}$. \fin \begin{theorem} The logic ${\rm L}(Q_{\rm Of})$ is compact. \end{theorem} {\sc Proof}\ We work in the model $\hbox{\eufb\char"43}$ constructed before (this suffices by 3.2 and 4.9.1). Suppose we have two definable ordered fields $F_1, F_2$ contained in $P(\hbox{\eufb\char"43})$ and an isomorphism $f$ between them. Since $f$ and $f^{-1}$ are locally definable over a pseudofinite set there are $(A_i \colon i < k^*)$ and $(f_{i, j} \colon i, j < k^*)$ as in Lemma~\ref{cover-lemma}. Since $\hbox{\eufb\char"43}$ satisfies that $k^*$ is finite, there is some $i$ so that $A_i$ is somewhere dense. Fix such an $i$ and for notational simplicity drop the subscript $i$, so $f_j$ denotes $f_{i,j}$. By restricting to a somewhere dense\ subset (and perhaps eliminating some of the $f_j$) we can assume, by 6.2, that each $f_j$ is a total function (we work in $\hbox{\eufb\char"43}$ to make this choice).
Again choosing a somewhere dense\ subset we can find a somewhere dense\ set $A$ and a collection $\cal I$ of disjoint intervals of $F_2$ so that the conclusion of Proposition~\ref{part-prop} is satisfied. Since each $f_j$ is one-to-one we can assume that all the intervals in $\cal I$ are open. Choose now an interval $J$ contained in $F_1$ so that $A$ is dense in $J$. Next choose $a_1 < a_2 \in J$ and an interval $I \in \cal I$ so that $f(a_1), f(a_2) \in I$. To see that such objects exist, first choose any $a_1 \in A \cap J$. There is a unique interval $I \in \cal I$ so that $f(a_1) \in I$. Choose $b \in I$ so that $f(a_1) < b$ and let $a_2$ be any element of $J \cap A$ such that $a_1 < a_2 < f^{-1}(b)$. Of course this definition of $a_1, a_2$ cannot be made in $\hbox{\eufb\char"43}$, but the triple $(a_1, a_2, I)$ exists in $\hbox{\eufb\char"43}$. Without loss of generality we can assume that $A \subseteq (a_1, a_2) = J$. Consider now ${\cal F} = \set{f_j}{f_j(A) \subseteq I}$. This is an equivalence class of functions and for every $a \in A$, there is some $g \in \cal F$ so that $g(a) = f(a)$. The crucial fact is that for all $b \in J$ and $g \in {\cal F}$, the set $B = \set{a \in A}{ b < a \mbox{ and } g(a) < f(b)}$ is nowhere dense. Assume not. Since $\cal F$ is an equivalence class, for all $h \in \cal F$ the set $\set{a \in B}{h(a) \geq f(b)}$ is nowhere dense. But since $\cal F$ is a pseudofinite set, $\set{a \in B}{\mbox{for some } h \in {\cal F}, h(a)\geq f(b)}$ is nowhere dense. So there is some $a \in A$ so that $b < a$ and for all $g \in \cal F$ we have $g(a) < f(b)$. This gives a contradiction since there is some $g$ so that $f(a) = g(a)$. Similarly, $\set{a \in A}{a < b\mbox{ and }g(a) > f(b)}$ is nowhere dense.
With this fact in hand we can define $f$ on $J$, by letting $f(b)$ be the greatest $x$ so that for all $g \in \cal F$, both $\set{a \in A}{b < a \mbox{ and }g(a) < x}$ and $\set{a \in A}{a < b \mbox{ and }g(a) > x}$ are nowhere dense. Since an isomorphism between ordered fields is definable if and only if it is definable on an interval, we are done. \fin \end{document}
\begin{document} \maketitle \begin{abstract}We consider the Schr\"odinger system with Newton-type interactions that was derived by R.~Klein, A.~Majda and K.~Damodaran \cite{KMD} to model the dynamics of $N$ nearly parallel vortex filaments in a 3-dimensional homogeneous incompressible fluid. The known large time existence results are due to C.~Kenig, G.~Ponce and L.~Vega \cite{KPV} and concern the interaction of two filaments and particular configurations of three filaments. In this article we prove large time existence results for particular configurations of four nearly parallel filaments and for a class of configurations of $N$ nearly parallel filaments for any $N\geq 2$. We also show the existence of travelling wave type dynamics. Finally we describe configurations leading to collision. \end{abstract} \section{Introduction} In this paper we study the dynamics of $N$ interacting vortex filaments in a 3-dimensional homogeneous incompressible fluid. We focus on filaments that are all nearly parallel to the $z$-axis. They are described by means of complex-valued functions $\Psi_j(t,\sigma)\in \mathbb{C}$, $1\leq j\leq N$, where $t\in \mathbb{R}$ is the time, $\sigma\in \mathbb{R}$ parameterizes the $z$-axis, and $\Psi_j(t,\sigma)$ is the position of the $j$-th filament. A simplified model for the dynamics of such nearly parallel filaments has been derived by R.~Klein, A.~Majda and K.~Damodaran \cite{KMD} in the form of the following 1-dimensional Schr\"odinger system of equations \begin{equation} \label{syst:interaction} \begin{cases} \displaystyle i\partial_t \Psi_j+ \Gamma_j \partial_\sigma^2 \Psi_j +\sum_{k\neq j} \Gamma_k \frac{\Psi_j-\Psi_k}{|\Psi_j-\Psi_k|^2}=0,\quad1\leq j\leq N,\\ \Psi_j(0,\sigma)=\Psi_{j,0}(\sigma).
\end{cases} \end{equation} Here $\Gamma_j$ is a real number representing the circulation of the $j$-th filament\footnote{The free Schr\"odinger operator derived in \cite{KMD} is actually $i\partial_t +\alpha_j \Gamma_j \partial_\sigma^2$, where $\alpha_j$ is another vortex core parameter related to the $j$-th filament. For simplicity we assume throughout the paper that $\alpha_j=1$.}. In the case where $\Psi_j(t,\sigma)=\Psi_j(t)=X_j(t)$ are exactly parallel filaments, Syst.~\eqref{syst:interaction} reduces to the well-known point vortex system arising in 2-dimensional homogeneous incompressible fluids \begin{equation} \label{syst:point-vortex} \begin{cases} \displaystyle i\frac{d X_j}{dt}+\sum_{k\neq j} \Gamma_k \frac{X_j-X_k}{|X_j-X_k|^2}=0,\quad 1\leq j\leq N,\\ X_j(0)=X_{j,0}. \end{cases} \end{equation} Syst.~\eqref{syst:interaction} combines on the one hand the linearized self-induction approximation for each vortex filament, given by the linear Schr\"odinger equation, and on the other hand the interaction of the filaments, for any $\sigma$, by the point vortex system. Solutions of the simplified model \eqref{syst:interaction} have remarkable mathematical and physical properties, as described in \cite{MaBe}. The main issue in this context is the possibility of collision of at least two of the filaments in finite time at some point $\sigma$. Before presenting the known results on nearly parallel vortex filaments let us briefly review some classical facts on the point vortex system \eqref{syst:point-vortex}. Its dynamics preserves the center of inertia $\sum_j \Gamma_j X_j(t)$, the angular momentum $\sum_j \Gamma_j |X_j(t)|^2$ and the quantities $$\sum_{j\neq k} \Gamma_j \Gamma_k \ln \left|X_j(t)-X_k(t)\right|^2\,\,,\,\, \sum_{j\neq k}\Gamma_j \Gamma_k \left|X_j(t)-X_k(t)\right|^2.$$ If the circulations all have the same sign, this implies that no collision among the vortices can occur in finite time.
Therefore there exists a unique global $\mathcal C^1$ solution $\left(X_j(t)\right)_j$ to \eqref{syst:point-vortex}. For $N=2$ global existence still holds independently of the circulation signs since $|X_1(t)-X_2(t)|$ remains constant. When dealing with more than two vortices the single-sign assumption on the circulations really matters: explicit examples of configurations leading to collapse in finite time are given by self-similar shrinking triangles \cite{Aref79}. For any circulations the equilateral triangle is a rotating or translating configuration, and for identical circulations the ends and the middle of a segment also form a relative equilibrium configuration. For $N\geq 4$ and identical circulations $\Gamma_j=\Gamma\,\forall j$, vertices of regular polygons also form relative equilibrium configurations. They rotate around the center of inertia with constant angular velocity $\omega=\Gamma(N-1)/(2R^2)$, where $R$ is the size of the polygon. These polygon configurations are stable if and only if $N\leq 7$. The proof of this result, conjectured by Kelvin in 1878, was recently completed by L.~G.~Kurakin and V.~I.~Yudovitch in 2002 \cite{KuYu} (see also \cite{No}). Finally, the configuration formed by adding to an $N$-polygon configuration one point of arbitrary circulation $\Gamma_0$ at the center of inertia is a relative equilibrium rotating with constant angular velocity $\omega=[\Gamma(N-1)+2\Gamma_0]/(2R^2)$. A natural observation is that as $N$ increases the dynamics gets more and more sophisticated. A first result on nearly parallel vortex filaments has been given in \cite{KMD}. The authors proved that for $N=2$ the linearized system around the exactly parallel filaments solution of \eqref{syst:point-vortex} is stable if the circulations have the same sign and unstable otherwise.
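The angular velocities of these relative equilibria can be read off directly from the right-hand side of \eqref{syst:point-vortex}: for the regular $N$-gon with unit circulations at the vertices and an extra vortex of circulation $\Gamma_0$ at the center, each vertex moves with velocity $i\omega X_j$, a rigid rotation at speed $\omega=[\Gamma(N-1)+2\Gamma_0]/(2R^2)$. A minimal numerical sketch of our own (sample values of $N$, $R$, $\Gamma_0$ chosen arbitrarily):

```python
import numpy as np

# Right-hand side of the point vortex system:
# dX_j/dt = i * sum_{k != j} Gamma_k (X_j - X_k) / |X_j - X_k|^2.
def velocity(X, Gamma):
    V = np.zeros_like(X)
    for j in range(len(X)):
        for k in range(len(X)):
            if k != j:
                d = X[j] - X[k]
                V[j] += 1j * Gamma[k] * d / abs(d) ** 2
    return V

N, R, Gamma0 = 7, 1.5, 0.8                           # sample configuration
theta = 2 * np.pi * np.arange(N) / N
X = np.concatenate(([0j], R * np.exp(1j * theta)))   # central vortex + N-gon vertices
Gamma = np.concatenate(([Gamma0], np.ones(N)))       # Gamma = 1 at the vertices

V = velocity(X, Gamma)
omega = ((N - 1) + 2 * Gamma0) / (2 * R ** 2)        # predicted angular velocity

# every vertex moves with velocity i*omega*X_j: a rigid rotation at speed omega
assert np.allclose(V[1:], 1j * omega * X[1:], atol=1e-12)
# the central vortex does not move (the vertex contributions cancel by symmetry)
assert abs(V[0]) < 1e-12
```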
Moreover, they made numerical simulations suggesting global existence for \eqref{syst:interaction} in the first case and collision in finite time in the second case. Their first conjecture on global existence was then proved by C.~Kenig, G.~Ponce and L.~Vega \cite{KPV} for filaments $\Psi_j$ obtained as small $H^1$ perturbations of exactly parallel filaments $X_j$, \begin{equation} \label{hyp:pert} \Psi_j(t,\sigma)=X_j(t)+u_j(t,\sigma),\quad 1\leq j\leq N. \end{equation} More precisely, it has been proved in \cite{KPV} that for $u_j(0)$ sufficiently small in $H^1(\mathbb R)$ (and therefore in $L^\infty(\mathbb R)$), global existence and uniqueness of the solution to Syst.~\eqref{syst:interaction} hold for all vortex solutions $(X_j)_j$ with equal circulations and such that $|X_j(t)-X_k(t)|=d$ for $1\leq j\neq k\leq N$. The only such possible configurations are $N=2$ with any pair $(X_1,X_2)$, and $N=3$ with $(X_1,X_2,X_3)$ an equilateral triangle. Moreover, local existence and uniqueness hold for any number $N$ of filaments and any circulations $\Gamma_j$, and the solution exists at least up to times of order $ |\ln \sum_j\|u_j(0)\|_{H^1}|$. Finally, let us mention that P.-L.~Lions and A.~Majda \cite{LM} developed an equilibrium statistical theory for nearly parallel filaments using the approximation given by Syst.~\eqref{syst:interaction}. The purpose of this article is to study other specific configurations of vortex filaments. In order to obtain large time existence results we will strongly use the symmetry properties of the configuration of straight filaments $(X_j)_j$ on the one hand, and those of the perturbation $(u_j)_j$ on the other hand. In the first part of this paper we focus on the case where $N\geq 3$ and $(X_j)_j$ is a regular rotating polygon of radius $1$ with $N$ vertices, with or without its center. The index $j=0$ refers to the center of the polygon and $1\leq j\leq N$ to the vertices of the polygon.
Since \eqref{syst:interaction} is invariant under translations, we can suppose that the center of inertia of the polygon is set at the origin, i.e. $X_0(t)=0$ for all $t$. We shall impose that the circulations at the vertices have the same value $\Gamma$ and that $\omega$ has the same sign as $\Gamma$. For simplicity we consider $$\Gamma_j=1,\quad 1\leq j\leq N.$$ When the center of the polygon is not included, the angular speed $\omega$ is $(N-1)/2$, hence positive. When the center is included, the circulation $\Gamma_0$ must be larger than $-(N-1)/2$. We will consider very specific perturbations of the configuration $(X_j)_j$, assuming that the perturbation is the same for each of the straight filaments, namely a dilation combined with a rotation. More precisely we shall focus on solutions having the form \begin{equation}\label{hyp:pert-N} \Psi_j(t,\sigma)=X_j(t)\Phi(t,\sigma), \end{equation} with $\Psi_j(t,\sigma)$ close to $X_j(t)$ in some sense as $|\sigma|\rightarrow\infty$. Let us notice that this dilation-rotation type of perturbation keeps the symmetry of the polygon for all $(t,\sigma)$. A natural example of such perturbations are those with $\Phi-1$ small in $H^1(\mathbb{R})$. Our result below allows us to handle a larger class of perturbations of the regular rotating polygon, including also for example all small constant rotations of the polygon. \begin{theorem} \label{thm:main-2}Let $N\geq 3$ and $(X_j)_j$ be the equilibrium solution given by a regular rotating polygon of radius $1$, with or without its center, with $\Gamma_j=1$ for $1\leq j\leq N$ and positive angular velocity $\omega$.
Assume that $$\Psi_{j,0}(\sigma)=X_{j,0}\Phi_0(\sigma),$$ with $\Phi_0$ such that \begin{equation*} \mathcal E(\Phi_0)= \frac{1}{2}\int |\partial_\sigma \Phi_0|^2 +\frac{\omega}{2}\int \left(|\Phi_0|^2-1-\ln |\Phi_0|^2\right) \end{equation*} satisfies $\mathcal E(\Phi_0)\leq \eta_1$, where $\eta_1$ is an absolute constant\footnote{introduced in Lemma \ref{lemma:ginzburg} below.}. Then there exists a unique global solution $(\Psi_j)_j$ of \eqref{syst:interaction}, with this initial datum, such that $$\Psi_{j}(t,\sigma)=X_{j}(t)\Phi(t,\sigma),\quad t\in \mathbb{R}$$ with $\Phi-\Phi_0\in C\left(\mathbb{R},H^1(\mathbb{R})\right)$. Moreover $$\frac 34 \leq \frac{|\Psi_j(t,\sigma)-\Psi_k(t,\sigma)|}{|X_j(t)-X_k(t)|}\leq \frac 54,\quad t,\sigma\in \mathbb{R}.$$ In particular, if $\Phi_0(\sigma)\overset{|\sigma|\rightarrow\infty}{\longrightarrow} 1$ then $\Psi_j(t,\sigma)\overset{|\sigma|\rightarrow\infty}{\longrightarrow} X_j(t)$ $\forall t$, and if $\Phi_0\in 1+H^1(\mathbb{R})$ then $\Psi_j-X_j\in C\left(\mathbb{R},H^1(\mathbb{R})\right)$. \end{theorem} \begin{remark} Theorem \ref{thm:main-2} does not assert that if initially $\|\Phi_0-1\|_{H^1}$ is small then $\|\Phi(t)-1\|_{H^1}$ remains small for all $t$. \end{remark} Our analysis is based on the observation that the solution $(\Psi_j)_j$ to Syst.~\eqref{syst:interaction} satisfies \eqref{hyp:pert-N} if and only if $\Phi$ is a solution to the equation \begin{equation}\label{eq:BM} i\partial_t \Phi+\partial_\sigma^2 \Phi+\omega \frac {\Phi}{|\Phi|^2}(1-|\Phi|^2)=0. \end{equation} Eq.~\eqref{eq:BM} is a Hamiltonian equation, which preserves the energy \begin{equation}\label{def:energy-BM} \mathcal E(\Phi)= \frac{1}{2}\int |\partial_\sigma \Phi|^2+\frac{\omega}{2}\int \left(|\Phi|^2-1-\ln |\Phi|^2\right). \end{equation} Note that in the setting of Theorem \ref{thm:main-2} the solutions satisfy $|\Phi|\simeq 1$, so that Eq.
\eqref{eq:BM} is formally similar to the well-known Gross-Pitaevskii equation \begin{equation}\label{eq:GP} i\partial_t \Phi+\partial_\sigma^2 \Phi+\omega \Phi(1-|\Phi|^2)=0, \end{equation} with energy given by \begin{equation*} \mathcal E_{GP}(\Phi)= \frac{1}{2}\int |\partial_\sigma \Phi|^2+\frac{\omega}{4} \int \left(|\Phi|^2-1\right)^2. \end{equation*} In fact we shall see that both functionals $\mathcal{E}(\Phi)$ and $\mathcal{E}_{GP}(\Phi)$ are comparable whenever $|\Phi|\simeq 1$. A key point in the proof is, as in \cite{KPV}, the fact that if $\mathcal E(\Phi_0)$ is small then the solution $\Phi$ enjoys the property \begin{equation}\label{ineq:coercivity} \sup_{t\in \mathbb{R}}\left\| |\Phi(t)|^2-1\right\|_{L^\infty}\leq \frac{1}{4}.\end{equation} This allows us to establish Theorem \ref{thm:main-2} by using the techniques introduced in \cite{Zh2} by P.~E.~Zhidkov (see also P.~G\'erard \cite{PG0}, \cite{PG}) for solving the Gross-Pitaevskii equation in the energy space.\par In the case where $\Phi_0\in 1+H^1(\mathbb{R})$ we mention that the proof in \cite{KPV} can be adapted here, by showing that some quantities are still conserved even though the $|X_j(t)-X_k(t)|$ are not all the same. As we have seen, global existence and uniqueness of the filaments hold for $N=2$ with any $(X_j)_j$ and any small perturbations, for $N=3$ with $(X_j)_j$ the equilateral triangle stable equilibrium and any small perturbations, and for any $N\geq 2$ with $(X_j)_j$ the regular polygon equilibrium and any small perturbations with strong symmetry conditions. We thus expect that global existence might hold for small $N$ and less restrictive conditions on the perturbations. In the second part of this paper we study the case \begin{equation*} N=4,\quad \Gamma_j=1, \end{equation*} and we assume that $ (X_j)_j=(X_1,X_2,X_3,X_4) $ is a square of radius $1$ rotating with constant angular speed.
Again, since \eqref{syst:interaction} is invariant under translations, we can suppose that the square is centered at the origin. Our main result in this case may be formulated as follows. \begin{theorem} \label{thm:main} Let $N=4$ and $(X_j)_j$ be the equilibrium solution given by a rotating square of radius $1$ with $\Gamma_j=1$. Let $(u_{j,0})_j\in H^1(\mathbb{R})^4$ and set $\Psi_{j,0}=X_{j,0}+u_{j,0}$. We introduce the energy\footnote{Note that $\mathcal{E}_0\geq 0$.} \begin{equation*}\begin{split}\mathcal{E}_0&=\frac{1}{2}\sum_{j} \int \left|\partial_\sigma \Psi_{j,0}(\sigma)\right|^2\,d\sigma\\ &+\frac{1}{2}\sum_{j\neq k} \int -\ln\left(\frac{|\Psi_{j,0}(\sigma)-\Psi_{k,0}(\sigma)|^2}{|X_{j,0}-X_{k,0}|^2}\right) +\left(\frac{|\Psi_{j,0}(\sigma)-\Psi_{k,0}(\sigma)|^2}{|X_{j,0}-X_{k,0}|^2}-1\right)\,d\sigma.\end{split}\end{equation*} We also introduce the quantity \begin{equation*} \tilde{\mathcal{E}_0}=\max\left\{\mathcal{E}_0; \frac{\|u_{1,0}+u_{3,0}\|_{L^2}^2}{2}+\frac{\|u_{2,0}+u_{4,0}\|_{L^2}^2}{2}\right\} \end{equation*} and we assume that \begin{equation*} \tilde{\mathcal{E}_0}\leq \eta_2 \end{equation*} for an absolute small constant $\eta_2>0$. Then there exists an absolute constant $C>0$, and there exists a time $T$, with $$T\geq C \min\left\{\frac{1}{{\tilde{\mathcal{E}_0}}^{1/4}\max_{j,k} \|u_{j,0}-u_{k,0}\|_{L^2}^{1/2}},\frac{1}{\tilde{{\mathcal{E}_0}}^{1/3}}\right\},$$ such that there exists a unique corresponding solution $(\Psi_j)_j$ to Syst.~\eqref{syst:interaction} on $[0,T]$, satisfying $\Psi_j=X_j+u_j$, with $u_j\in C\left([0,T],H^1(\mathbb{R})\right)$, and such that \begin{equation*} \frac 34\leq \frac{|\Psi_j(t,\sigma)-\Psi_k(t,\sigma)|}{|X_j(t)-X_k(t)|}\leq \frac 54,\quad t\in [0,T], \quad \sigma\in \mathbb{R}.
\end{equation*} Finally, if the initial perturbation is parallelogram-shaped, namely \begin{equation*} \|u_{1,0}+u_{3,0}\|_{L^2}=\|u_{2,0}+u_{4,0}\|_{L^2} =0, \end{equation*} then the solution $(\Psi_j)_j$ is globally defined. \end{theorem} \begin{remark}In the proof of Theorem \ref{thm:main} we shall actually establish a local existence result for any $N$, any parallel configuration $(X_j)_j$, any set of positive circulations $(\Gamma_j)_j$ and any perturbations with small energy, but not necessarily small in $H^1$. This is a slight improvement of the result in \cite{KPV}, see also the next two remarks. \end{remark} \begin{remark} As we shall see, we can infer from the smallness of the energy $\mathcal{E}_0$ and from Sobolev embeddings that the nearly parallel filaments $\Psi_{j,0}$ are not too far from the straight filaments $X_{j,0}$ and that $ \mathcal {E}_0\leq C\sum_j \|u_{j,0}\|_{H^1}^2.$ Conversely, if we assume that $\sum_j \|u_{j,0}\|_{H^1}$ is sufficiently small then one can show that $ \tilde{\mathcal {E}_0}\leq C\sum_j \|u_{j,0}\|_{H^1}^2$ and the assumptions of Theorem \ref{thm:main} are satisfied. Therefore the hypothesis on the energy is less restrictive than the one on the $H^1$ norm, see also the next remark. \end{remark} \begin{remark}From $0\leq \mathcal {E}_0\leq C\sum_j \|u_{j,0}\|_{H^1}^2$ it follows that $\tilde{\mathcal {E}_0}\leq C\sum_j \|u_{j,0}\|_{H^1}^2$, so the time of existence is a priori larger than in \cite{KPV}. Moreover, for all $\varepsilon>0$ Theorem \ref{thm:main} allows for initial perturbations of the form $$\Psi_{j,0}^\varepsilon(\sigma)=e^{i\varphi^\varepsilon(\sigma)} X_{j,0}+T^\varepsilon(\sigma),$$ with $\varphi^\varepsilon,T^\varepsilon$ such that $\|(\varphi^\varepsilon,T^\varepsilon)\|_{H^1} =O(1)$. This amounts to rotating and translating the square $(X_j)_j$ at each level $\sigma$.
By taking oscillating phases of the form $\varphi^\varepsilon(\sigma)=\sqrt{\varepsilon} \varphi_0(\varepsilon \sigma)$ with a fixed $\varphi_0\in H^1$, which implies $\|\varphi^\varepsilon\|_{L^2}\geq O(1)$, $\|\nabla\varphi^\varepsilon\|_{L^2}=O(\varepsilon)$, and by choosing $T^\varepsilon$ such that $\|T^\varepsilon\|_{ H^1}=O(\varepsilon)$, we compute \begin{equation*} \tilde{\mathcal E_0}=O(\varepsilon^2),\quad \sum_j\|u_{j,0}\|_{H^1}^2\geq O(1). \end{equation*} Therefore Theorem \ref{thm:main} provides a unique solution at least up to times of order $1/\sqrt{\varepsilon}$, while the $H^1$ norm of the perturbations is of order one. This suggests that the energy space is more appropriate for the analysis of \eqref{syst:interaction} than classical Sobolev spaces. \end{remark} The proof of Theorem \ref{thm:main} follows the one of Theorem \ref{thm:main-2} combined with the one in \cite{KPV}. In particular, we consider, as in \cite{KPV}, the energy \begin{equation} \label{def:energy-losange} \begin{split}\mathcal{E}(t)&=\frac{1}{2}\sum_{j} \int \left|\partial_\sigma \Psi_j(t,\sigma)\right|^2\,d\sigma\\ &+\frac{1}{2}\sum_{j\neq k} \int -\ln\left(\frac{|\Psi_j(t,\sigma)-\Psi_k(t,\sigma)|^2}{|X_j(t)-X_k(t)|^2}\right) +\left(\frac{|\Psi_j(t,\sigma)-\Psi_k(t,\sigma)|^2}{|X_j(t)-X_k(t)|^2}-1\right)\,d\sigma,\end{split}\end{equation} and show that the solution can be extended as long as $\mathcal E(t)$ remains small. For this purpose we show that $u_j$ can be extended locally from a time $t_0$ by a fixed point argument for small $H^1$ perturbations $w_j$ of the linear evolutions of the initial data, i.e. $u_j(t)=e^{i(t-t_0)\partial_\sigma^2}u_j(t_0)+w_j(t)$. Here we crucially use the fact that the deviation $e^{i(t-t_0)\partial_\sigma^2}u_j(t_0)-u_j(t_0)$ can be upper-bounded in $L^\infty$ in terms of the energy at the initial time $\mathcal E(t_0)$.
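The norm scalings invoked in the remark above are straightforward to confirm numerically; here is a minimal sketch of our own, taking a Gaussian for the (hypothetical) profile $\varphi_0$:

```python
import numpy as np

def norms(eps):
    # phi^eps(sigma) = sqrt(eps) * phi_0(eps * sigma), with phi_0(x) = exp(-x^2)
    sigma = np.linspace(-30.0 / eps, 30.0 / eps, 600001)
    h = sigma[1] - sigma[0]
    phi = np.sqrt(eps) * np.exp(-((eps * sigma) ** 2))
    dphi = np.gradient(phi, h)
    return np.sqrt(np.sum(phi ** 2) * h), np.sqrt(np.sum(dphi ** 2) * h)

l2_a, dl2_a = norms(0.1)
l2_b, dl2_b = norms(0.01)

# ||phi^eps||_{L^2} = ||phi_0||_{L^2} does not depend on eps ...
assert abs(l2_a - l2_b) < 1e-3
# ... while ||d_sigma phi^eps||_{L^2} = eps * ||phi_0'||_{L^2} is linear in eps
assert abs(dl2_a / dl2_b - 10.0) < 0.05
```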
As observed in \cite{KPV}, for any two parallel filaments and for the equilateral triangle configuration the energy is conserved, i.e. $\mathcal{E}(t)=\mathcal{E}(0)=\mathcal{E}_0$, so that global existence follows for small energy perturbations. Unfortunately, under the assumptions of Theorem \ref{thm:main} the energy is no longer conserved (unless the perturbation $(u_j)_j$ is parallelogram-shaped). Instead, we estimate its evolution in time, showing that it does not increase too fast, and this control enables us to obtain a large time of existence. We finally mention another collection of dynamics that is governed by the linear Schr\"odinger equation. For shifted perturbations $\Psi_j=X_j+u$, for any $X_j$ with all $\Gamma_j$ equal, we obtain that $u$ is a solution of the linear Schr\"odinger equation. So if $u$ is regular enough, it has constant $H^1$ norm, so the filaments remain separated for all time. Moreover, due to the dispersive inequality for the linear Schr\"odinger equation, the perturbations spread in time along the parallel configuration $X_j$. Finally, we get examples of $\mathcal C^\infty$ perturbations decaying at infinity that generate a singularity in finite time by considering perturbations less regular than $H^1$ that lead to an $L^\infty$ dispersive blow-up for the linear Schr\"odinger equation. The self-similar linear Schr\"odinger solution constructed from homogeneous data $|x|^{-p}$ with $0<p<1$ in \cite{CaWe} leads to solutions blowing up in $L^\infty$ in finite time at one point. Also, the linear Schr\"odinger evolution of $e^{i|x|^2}/(1+|x|^2)^m$ with $1/2<m\leq 1$ has been proved in \cite{BoSa} to be an $L^2$ solution whose modulus blows up in finite time at one point. The third part of this work is devoted to travelling waves for Syst.~\eqref{syst:interaction}. Let us recall that in the case of one single filament, a travelling wave dynamics was exhibited by H.~Hasimoto \cite{Has} and experimentally observed by E.
J.~Hopfinger and F.~K.~Browand \cite{HB}. Here we construct travelling waves for several filaments via finite energy travelling wave solutions to Eq.~\eqref{eq:BM}, i.e. solutions of the form $\Phi(t,\sigma)=v(\sigma+ct)$, with $v$ a solution of the equation \begin{equation} \label{eq:TW} ic v'+v''+\omega\frac{v}{|v|^2}(1-|v|^2)=0 \end{equation} and having finite energy, \begin{equation} \mathcal{E}(v)=\frac{1}{2}\int|\partial_\sigma v|^2+\frac{\omega}{2}\int \big( |v|^2-1-\ln |v|^2\big)<\infty. \end{equation} As in Theorem \ref{thm:main-2} we assume that $\omega>0$. In order to avoid having $v$ approaching zero we shall impose that the energy is small. \par Existence, stability issues and qualitative behaviour near the speed of sound of travelling waves for Gross-Pitaevskii-type equations and related problems were extensively studied in the past years (see for instance \cite{DeB, BS, DiMGa, Gr, BeGrSaSm, Ma, ChRo} and the references therein). For the one-dimensional Gross-Pitaevskii equation \eqref{eq:GP}, finite energy travelling waves (referred to as ``grey solitons'' in the context of non-linear optics) are known to exist for all $0<c<\sqrt{2\omega}$, and they have the explicit form (see e.g. \cite{Gr}) \begin{equation*} \begin{split} v(\sigma)=v_c(\sigma)&=\sqrt{ 1-\frac{\frac{1}{2\omega}(2\omega-c^2)}{\cosh^2\left ( \frac{\sqrt{2\omega-c^2}}{2} \sigma\right)}} \,e^{i \arctan \frac{\omega e^{\sqrt{2\omega-c^2}\sigma}+c^2-\omega}{c\sqrt{2\omega-c^2}}-i\arctan \frac{c}{\sqrt{2\omega-c^2}}}. \end{split} \end{equation*} The modulus $|v_c|$ of such maps is close to $1$ when $c$ is close to $\sqrt{2\omega}$, in which case $\mathcal{E}(v_c)\leq C\mathcal{E}_{GP}(v_c)\leq C(2\omega-c^2)^{3/2}$ (see \cite{Gr}), so the energy is finite and as small as needed. Therefore the maps $v_c$, with $c$ close to $\sqrt{2\omega}$, enter the class of perturbations presented in Theorem \ref{thm:main-2}. Our next result in this context is the following.
\begin{theorem} \label{thm:TW} Let $c$ be such that $0<2\omega-c^2<\eta_3$ for an absolute small constant $\eta_3>0$. There exists a travelling wave solution to Syst.~\eqref{syst:interaction} $$\Psi_j(t,\sigma)=e^{it\omega+i\frac{2\pi j}{N}}v(\sigma+ct),$$ where $v\in C^\infty(\mathbb{R})$ is a solution to Eq.~\eqref{eq:TW}, with finite energy $ \mathcal{E}(v)\leq C(2\omega-c^2)^{3/2}$, such that $v$ never vanishes. The modulus $|v|$ is an even function, increasing on $[0,\infty)$ and satisfying on $\mathbb{R}$ \begin{equation*} 0<1-|v(\sigma)|^2 <\min\left\{\frac{3}{2\omega}(2\omega-c^2),\, C\sqrt{2\omega-c^2}e^{-\sqrt{2\omega-c^2}^{\,-}|\sigma|}\right\}. \end{equation*} Finally, we have a limit at infinity \begin{equation*} v(\sigma)\to \exp(i\theta_{\pm}),\quad \sigma\to \pm \infty, \quad \text{with } |\theta_+-\theta_-|\leq C\sqrt{2\omega-c^2}. \end{equation*} Here $C$ denotes an absolute numerical constant. \end{theorem} It has been noticed in \cite{KPV} that the Galilean invariance of Syst.~\eqref{syst:interaction} leads to helix-shaped vortex filaments. Here, on the one hand, Eq.~\eqref{eq:BM} is invariant under the Galilean transform, i.e. $\Phi_{\nu}(t,\sigma)=e^{-it\nu^2+i\nu \sigma}\,\Phi(t,\sigma-2t\nu)$ is also a solution $\forall \nu\in\mathbb R$. On the other hand $X_j(t)=e^{it\omega+i\frac{2\pi j}{N}}$ for $j\neq 0$, so \begin{equation*}\begin{split}\Psi_{j,\nu}(t,\sigma)&=e^{it(\omega-\nu^2)+i\nu \sigma+ i\frac{2\pi j}{N}}\,\Phi(t,\sigma-2t\nu)\\ &=e^{it(\omega-\nu^2)+i\nu \sigma+ i\frac{2\pi j}{N}}\,v(\sigma+t(c-2\nu)).\end{split}\end{equation*} Therefore, choosing $\nu=\sqrt{\omega}$, we obtain a stationary $(\theta_+-\theta_-)$-twisted $N$-helix filament configuration with some localized perturbation travelling in time on each of its filaments. Last but not least, in the last part of this paper we describe configurations of nearly parallel filaments that lead to a collision in finite time.
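The explicit Gross-Pitaevskii grey solitons recalled above can be cross-checked numerically. We use the tanh representation $v_c(\sigma)=\sqrt{(2\omega-c^2)/(2\omega)}\,\tanh\big(\tfrac{\sqrt{2\omega-c^2}}{2}\sigma\big)-i\,c/\sqrt{2\omega}$, which has the same modulus as the formula above and which we assume agrees with it up to a constant phase, and verify by finite differences that it solves $icv'+v''+\omega v(1-|v|^2)=0$, that $\min|v_c|=c/\sqrt{2\omega}>0$, and that $\mathcal E_{GP}(v_c)=O\big((2\omega-c^2)^{3/2}\big)$ (our own sketch):

```python
import numpy as np

omega, c = 1.0, 1.2                  # subsonic: eps = 2*omega - c**2 = 0.56 > 0
eps = 2 * omega - c ** 2
a, b, k = np.sqrt(eps / (2 * omega)), c / np.sqrt(2 * omega), np.sqrt(eps) / 2

h = 1e-3
s = np.arange(-30.0, 30.0, h)
v = a * np.tanh(k * s) - 1j * b      # grey soliton: |v|^2 = 1 - a^2 / cosh^2(k s)

# check the travelling-wave ODE  i c v' + v'' + omega v (1 - |v|^2) = 0
vp = (v[2:] - v[:-2]) / (2 * h)                  # central difference for v'
vpp = (v[2:] - 2 * v[1:-1] + v[:-2]) / h ** 2    # central difference for v''
vm = v[1:-1]
residual = 1j * c * vp + vpp + omega * vm * (1 - abs(vm) ** 2)
assert np.max(abs(residual)) < 1e-5

# the modulus never vanishes: min |v| = c / sqrt(2*omega)
assert abs(np.min(abs(v)) - b) < 1e-6

# the Gross-Pitaevskii energy is of order (2*omega - c^2)^{3/2}
E_gp = 0.5 * np.sum(abs(np.gradient(v, h)) ** 2) * h \
     + 0.25 * omega * np.sum((abs(v) ** 2 - 1) ** 2) * h
assert E_gp < eps ** 1.5
```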
They are obtained by the same kind of dilation-rotation perturbations as in Theorem \ref{thm:main-2}. \begin{theorem} \label{thm:blup} Let $N\geq 2$ and $(X_j)_j$ be the stationary configuration given by a regular $N$-polygon with its center and circulations $\Gamma_j=1$ for $1\leq j\leq N$, $\Gamma_0=-(N-1)/2$. Then the initial condition $$\Psi_{j,0}(\sigma)=X_j(0)\left(1-\frac{e^{-\frac{\sigma^2}{1-4i}}}{\sqrt{1-4i}}\right)$$ yields a solution $(\Psi_j)_j$ of Syst.~\eqref{syst:interaction}, with $\Psi_j-X_j\in C\left(\mathbb{R},H^1(\mathbb{R})\right)$, whose filaments collide at time $t=1$ at $\sigma =0$. \end{theorem} The remainder of this paper is organized as follows. In Section~\ref{sec:sym} we derive Eq.~\eqref{eq:BM}. We then present some preliminary lemmas about its energy, which lead to the proof of Theorem \ref{thm:main-2}. Section~\ref{sec:losange} is devoted to the proof of Theorem \ref{thm:main}. Section~\ref{sec:tw} contains the construction of travelling waves for Theorem \ref{thm:TW}. Finally, in Section~\ref{sec:blup} we construct the collision dynamics of Theorem \ref{thm:blup}. In all the following the notation $C$ denotes an absolute constant which can possibly change from one line to another. \section{Proof of Theorem \ref{thm:main-2}}\label{sec:sym} We first derive Eq.~\eqref{eq:BM}. Plugging the ansatz $\Psi_j(t,\sigma)=X_j(t)\Phi(t,\sigma)$ into Syst.
\eqref{syst:interaction} with $\Gamma_j=1$ for $1\leq j\leq N$ we obtain $$iX_j\partial_t \Phi+i\partial_t X_j \Phi+X_j \partial_\sigma^2 \Phi +\frac{\Phi}{|\Phi|^2}\sum_{k\neq j} \frac{X_j-X_k}{|X_j-X_k|^2}=0.$$ Next we use \eqref{syst:point-vortex} to get $$X_j(i\partial_t \Phi+ \partial_\sigma^2 \Phi )-i\partial_t X_j\frac{\Phi}{|\Phi|^2}\left(1-|\Phi|^2\right)=0.$$ Now if we consider a configuration rotating with speed $\omega$ around its steady center of inertia $X_0=0$, for $1\leq j\leq N$ we have $X_j(t)=e^{it\omega+i\theta_j}$, so that $-i\partial_t X_j=\omega X_j$ and hence we obtain Eq.~\eqref{eq:BM}, $$i\partial_t \Phi+\partial_\sigma^2 \Phi+\omega\frac{\Phi}{|\Phi|^2}\left(1-|\Phi|^2\right)=0.$$ Conversely, assume that $\Phi$ is a solution to Eq.~\eqref{eq:BM} and set $\Psi_j=X_j\Phi$. Reversing the previous arguments, we obtain $$i\partial_t \Psi_j+ \partial_\sigma^2 \Psi_j +\sum_{k\neq j} \frac{\Psi_j-\Psi_k}{|\Psi_j-\Psi_k|^2}=0,\quad 1\leq j\leq N,$$ while, since $\Psi_0(t,\sigma)=0$ for all $(t,\sigma)$, $$i\partial_t \Psi_0=\partial_\sigma^2 \Psi_0=0\quad \text{and}\quad \sum_{k=1}^N \frac{\Psi_0-\Psi_k}{|\Psi_0-\Psi_k|^2}= -\frac{\Phi}{|\Phi|^2}\sum_{k=1}^N \frac{X_k}{|X_k|^2}=0$$ and therefore $(\Psi_j)_j$ is a solution to Syst.~\eqref{syst:interaction}.
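The two facts driving this computation, namely the homogeneity of the interaction term under the substitution $\Psi_k=X_k\Phi$ and the polygon identity $\sum_{k\neq j}(X_j-X_k)/|X_j-X_k|^2=\omega X_j$ with $\omega=(N-1)/2$ for the unit polygon, can be checked numerically; here is a small sketch of our own, for the unit hexagon and an arbitrary nonzero value of $\Phi$:

```python
import numpy as np

N = 6
omega = (N - 1) / 2                           # angular velocity of the unit N-gon
X = np.exp(2j * np.pi * np.arange(N) / N)     # vertices at time t = 0
p = 0.7 - 0.4j                                # an arbitrary nonzero value of Phi(t, sigma)

for j in range(N):
    # interaction term of the j-th filament after substituting Psi_k = X_k * Phi
    lhs = sum((X[j] - X[k]) * p / abs((X[j] - X[k]) * p) ** 2
              for k in range(N) if k != j)
    # what the derivation asserts it equals: omega * X_j * Phi / |Phi|^2
    rhs = omega * X[j] * p / abs(p) ** 2
    assert abs(lhs - rhs) < 1e-12
```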
\subsection{Some preliminary lemmas} \begin{lemma}\label{lemma:ginzburg}There exists an absolute constant $\eta_1$ and a time $t_1$ depending only on $\eta_1$ such that\\ i) If $\mathcal E(f)\leq \eta_1$ then $$\||f|^2-1\|_{L^\infty}\leq \frac 14.$$ ii) If $\|\partial_\sigma f\|_{L^2}\leq \eta_1$ then for all $0\leq t\leq t_1$ $$\frac{1}{\sqrt{2}}\|e^{it\partial_\sigma^2}f-f\|_{L^\infty}\leq \|e^{it\partial_\sigma^2}f-f\|_{H^1}\leq \frac 14.$$ \end{lemma} \begin{proof} i) The function $a(x)=x-1-\ln x$ is positive and convex, and vanishes only at $x=1$, therefore we can adapt standard arguments already used in the context of Ginzburg-Landau-type functionals (see e.g. \cite{BBH}). More precisely, we assume by contradiction that $\left||f(\sigma_0)|^2-1\right|>1/4$ for some $\sigma_0\in \mathbb{R}$; suppose for instance that $|f(\sigma_0)|>\sqrt{5/4}$. Next, since $\|\partial_\sigma f\|_{L^2}^2\leq 2\mathcal E(f)$, we have by the Cauchy-Schwarz inequality \begin{equation*} |f(\sigma)|\geq |f(\sigma_0)|-\left|\int_{\sigma_0}^{\sigma} \partial_xf(x)dx\right| \geq \sqrt{\frac{{5}}{4}}-\sqrt{2\mathcal E(f)|\sigma-\sigma_0|}. \end{equation*} It follows that $|f|>\sqrt{9/8}$ on $I=[\sigma_0-1/(500\mathcal{E}(f)),\sigma_0+1/(500 \mathcal{E}(f))]$. Therefore \begin{equation*} \mathcal{E}(f)\geq \frac{1}{2}a\left(\frac{9}{8}\right)|I|=\frac{1}{500 \mathcal E(f)}a\left(\frac{9}{8}\right), \end{equation*} a contradiction if $\mathcal E(f)\leq \eta_1$ is sufficiently small. ii) Property ii) is known from the study of the Gross-Pitaevskii equation (see Lemma 3 in \cite{PG0}); we recall the short proof: the Fourier transform of $e^{it\partial_\sigma^2}f-f$ can be written as $\frac{e^{-it\xi^2}-1}{\xi}\,\xi\hat f(\xi)$, so the $L^2$ norm is bounded by $C\sqrt{t}\|\partial_\sigma f\|_{L^2}$ and the $\dot H^1$ norm is bounded by $C\|\partial_\sigma f\|_{L^2}$, i.e.
$$\|e^{it\partial_\sigma^2}f-f\|_{H^1}\leq C(1+\sqrt{t}) \|\partial_\sigma f\|_{L^2}\leq C(1+\sqrt{t}) \eta_1.$$ We choose $\eta_1$ small enough and $t_1$ small with respect to $\eta_1$ such that for $0\leq t\leq t_1$, $$\|e^{it\partial_\sigma^2}f-f\|_{H^1}\leq \frac 14,$$ and the conclusion of the lemma follows. \end{proof} Since $(x-1)^2/2\leq x-1-\ln x\leq 10(x-1)^2$ on $[3/4,5/4]$, we immediately obtain a second lemma. \begin{lemma}\label{lemma:comp}If $\||f|^2-1\|_{L^\infty}\leq 1/4$ then we can compare the energies: $$\mathcal E_{GP}(f) \equiv\frac{1}{2}\|\partial_\sigma f\|_{L^2}^2+\frac{\omega}{4}\||f|^2-1\|_{L^2}^2\leq \mathcal E(f)\leq 5\,\mathcal E_{GP}(f).$$ \end{lemma} So, if we consider an initial perturbation such that $\Phi_0-1$ is sufficiently small in $H^1$, we infer from the Sobolev embedding that $\mathcal E_{GP}(\Phi_0)<\infty$ and that $\| |\Phi_0|^2-1\|_{L^\infty}<1/4$. Hence Lemma \ref{lemma:comp} ensures that $\Phi_0$ belongs to the energy space associated to Eq. \eqref{eq:BM}. We will also need the following transposition of a standard property of the Gross-Pitaevskii energy (see \cite{Ga, PG0,PG}). \begin{lemma} \label{lemma:recall} Let $f$ be such that $\mathcal{E}(f)\leq \eta_1$, with $\eta_1$ defined in Lemma \ref{lemma:ginzburg}, and let $h\in H^1(\mathbb{R})$ be such that $\|h\|_{H^1}\leq 1/2$. Then the energy $\mathcal{E}(f+h)$ is finite. More precisely, we have, for absolute numerical constants $C,C'$, \begin{equation*}\begin{split} \mathcal{E}(f+h)&\leq C\mathcal{E}_{GP}(f+h) \leq C'\left(1+\mathcal{E}(f)\right)\left(1+\|h\|_{H^1}^2\right).\end{split}\end{equation*} Moreover, \begin{equation*} \||f+h|-1\|_{L^\infty}\leq \frac{2+\sqrt{2}}{4}<1. \end{equation*} \end{lemma} \begin{proof} We first infer from Lemma \ref{lemma:ginzburg} i) that $\||f|-1\|_{L^\infty}\leq 1/4$, and from Lemma \ref{lemma:comp} that $\mathcal{E}_{GP}(f)<\infty$.
Next, applying the Gagliardo-Nirenberg inequality we get $\|h\|_{L^\infty}\leq \sqrt{2}\|h\|_{H^1}\leq \sqrt{2}/2$, so that $\||f+h|-1\|_{L^\infty}\leq (2+\sqrt{2})/4<1$. By Lemma \ref{lemma:comp} it follows that $\mathcal{E}(f+h)\leq C\mathcal{E}_{GP}(f+h)$. Using that $\mathcal{E}_{GP}(f)<\infty$ and $h\in H^1$, as well as Sobolev inequalities, we conclude that $\mathcal{E}_{GP}(f+h)$ is finite, with the corresponding estimate (see also, e.g., Lemma 2 in \cite{PG0}). \end{proof} \subsection{Proof of Theorem \ref{thm:main-2}} First we will establish local well-posedness for Eq. \eqref{eq:BM} by performing a fixed-point argument for the operator $$A(w)(t)=i\int_0^t e^{i(t-\tau)\partial_\sigma^2} \frac{e^{i\tau\partial_\sigma^2}\Phi_0+w(\tau)}{|e^{i\tau\partial_\sigma^2}\Phi_0+ w(\tau)|^2}\left(1-|e^{i\tau\partial_\sigma^2}\Phi_0+w(\tau)|^2\right)\,d\tau$$ on the ball $$B_T =\left\{ w\in C\left([0,T],H^1\right),\quad \sup_{0\leq t\leq T}\|w(t)\|_{H^1}\leq\frac 14\right\},$$ with $T$ small, to be chosen later. Then $\Phi(t)=e^{it\partial_\sigma^2}\Phi_0+w(t)$ will be a solution of \eqref{eq:BM} on $[0,T]$ with initial data $\Phi_0$. Observe that the proof of Lemma \ref{lemma:ginzburg} ii) yields that $t\mapsto (e^{it\partial_\sigma^2 }\Phi_0-\Phi_0)\in C([0,T],H^1(\mathbb{R}))$. So the map $\Phi$ will belong to the energy space if $\Phi_0$ belongs to the energy space (by Lemma \ref{lemma:recall} applied to $f=\Phi_0$ and $h=e^{it\partial_\sigma^2}\Phi_0-\Phi_0+w(t)$ for $T\leq t_1$, with $t_1$ from Lemma \ref{lemma:ginzburg}), and it will belong to $1+H^1(\mathbb{R})$ if $\Phi_0$ is in $1+H^1(\mathbb{R})$. The hypothesis of Theorem \ref{thm:main-2} is that we start with $\Phi_0$ verifying $$\mathcal E=\mathcal E(\Phi_0)= \frac{1}{2}\|\partial_\sigma \Phi_0\|_{L^2}^2+\frac{\omega}{2}\int \left(|\Phi_0|^2-1-\ln |\Phi_0|^2\right)\leq \eta_1.$$ We first impose $T\leq t_1$, with $t_1$ defined in Lemma \ref{lemma:ginzburg}.
Let $w\in B_T$, and set for $0\leq t\leq T$ \begin{equation*} \tilde{\Phi}(t)=e^{it\partial_\sigma^2}\Phi_0+w(t) =\Phi_0+\left(e^{it\partial_\sigma^2}\Phi_0-\Phi_0+w(t)\right). \end{equation*} By Lemma \ref{lemma:ginzburg} ii) and by the choice of $B_T$, we have $\|\tilde{\Phi}(t)-\Phi_0\|_{H^1}\leq 1/2$ on $[0,T]$. Therefore, applying Lemma \ref{lemma:recall} to $f=\Phi_0$ and $h=\tilde{\Phi}(t)-\Phi_0$ we obtain that $ \||\tilde{\Phi}(t)|-1\|_{L^\infty}\leq (2+\sqrt{2})/4$ on $[0,T]$. In particular, since $C^{-1}\leq|\tilde{\Phi}|\leq C$ for some constant $C>0$, we can estimate the action of the operator as follows: \begin{equation*}\begin{split}\|A&(w)(t)\|_{H^1}\leq t\sup_{0\leq \tau\leq t}\left\|\frac{\tilde{\Phi}(\tau)}{|\tilde{\Phi}(\tau)|^2}\left(1-|\tilde{\Phi}(\tau)|^2\right) \right\|_{H^1}\\ &\leq C\,t\sup_{0\leq \tau\leq t}\left(\left\|1-|\tilde{\Phi}(\tau)|^2\right\|_{L^2} +\left\|\partial_\sigma \tilde{\Phi}(\tau)\right\|_{L^2}\right)\\ &\leq C\,t\sup_{0\leq \tau\leq t} \sqrt{\mathcal{E}_{GP}(\tilde{\Phi}(\tau))}.\end{split}\end{equation*} We use Lemma \ref{lemma:recall} again, together with the bound $\|\tilde{\Phi}(\tau)-\Phi_0\|_{H^1}\leq 1/2$, to obtain \begin{equation*}\begin{split}\sup_{0\leq t\leq T}\|A(w)(t)\|_{H^1}&\leq C\,T(1+\mathcal E).\end{split}\end{equation*} Arguing similarly, we readily check that for $w_1,w_2\in B_T$ \begin{equation*}\begin{split} \sup_{0\leq t\leq T} \|A(w_1)(t)-A(w_2)(t)\|_{H^1}&\leq C\,T(1+\mathcal E)\sup_{0\leq t\leq T}\|w_1(t)-w_2(t)\|_{H^1}.\end{split}\end{equation*} Hence, imposing a second smallness condition on $T$ with respect to $\mathcal E$, we obtain a fixed point $w$ for $A$ in $B_T$. Therefore local well-posedness holds for Eq. \eqref{eq:BM} on $[0,T]$ with $T$ depending only on $\mathcal{E}$. Next, since the energy of Eq. \eqref{eq:BM} is conserved, $$\mathcal E(\Phi(T))=\mathcal E(\Phi(0))=\mathcal E,$$ we re-iterate the local-in-time argument to get global existence.
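The mechanism of the proof — local well-posedness plus conservation of $\mathcal E$ — can be illustrated by a small numerical experiment (our own sketch, not part of the argument, with the arbitrary choices $\omega=1$ and a cosine perturbation): a split-step scheme for Eq. \eqref{eq:BM} on a periodic domain conserves the energy up to the scheme's $O(dt^2)$ error. The potential part of the energy density is written here as $\frac{\omega}{2}(|\Phi|^2-1-\ln|\Phi|^2)$, the primitive of the nonlinearity, which is exactly conserved by the flow.

```python
import numpy as np

# Split-step integration of  i phi_t + phi_ss + omega*phi/|phi|^2*(1-|phi|^2) = 0
# on a periodic grid.  Both substeps are exact: the linear flow is a Fourier
# multiplier, and the nonlinear flow preserves |phi| pointwise, hence is a pure
# phase rotation.  All parameters are illustrative choices.
omega, L, n = 1.0, 2 * np.pi, 256
dt, steps = 2e-4, 5000                      # integrate up to t = 1
sigma = np.arange(n) * L / n
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)  # integer wavenumbers for L = 2*pi
phi = 1.0 + 0.03 * np.cos(sigma)            # small perturbation of the filament

def energy(phi):
    dphi = np.fft.ifft(1j * k * np.fft.fft(phi))
    rho = np.abs(phi) ** 2
    return (L / n) * np.sum(0.5 * np.abs(dphi) ** 2
                            + 0.5 * omega * (rho - 1 - np.log(rho)))

def nonlinear(phi, tau):                    # exact flow of the nonlinear substep
    rho = np.abs(phi) ** 2
    return phi * np.exp(1j * tau * omega * (1 - rho) / rho)

E0 = energy(phi)
for _ in range(steps):
    phi = nonlinear(phi, dt / 2)
    phi = np.fft.ifft(np.exp(-1j * k ** 2 * dt) * np.fft.fft(phi))
    phi = nonlinear(phi, dt / 2)
E1 = energy(phi)
assert abs(E1 - E0) < 1e-5                  # energy conserved up to scheme error
assert np.max(np.abs(np.abs(phi) ** 2 - 1)) < 0.25   # modulus stays away from 0
```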
Finally, Lemma \ref{lemma:ginzburg} ensures that $$\sup_{t\in \mathbb{R}}\||\Phi(t)|^2-1\|_{L^\infty}\leq \frac 14,$$ so the solution indeed satisfies $$\frac 14\leq |\Phi(t,\sigma)|\leq \frac 54,\quad t,\sigma\in \mathbb{R}.$$ \section{Proof of Theorem \ref{thm:main}} \label{sec:losange} \subsection{Some useful quantities} From now on we will write $\Psi_{jk}=\Psi_j-\Psi_k$, $X_{jk}=X_j-X_k$ and $u_{jk}=u_j-u_k$. We first introduce some useful quantities. In the general case where $N\geq 1$ and $\Gamma_j\in \mathbb{R}$, the dynamics of Syst. \eqref{syst:interaction} preserves the following quantities: the energy \begin{equation*} \frac{1}{2} \sum_{j} \Gamma_j^2 \int \left|\partial_\sigma \Psi_j(t,\sigma)\right|^2\,d\sigma -\frac{1}{2}\sum_{j\neq k} \Gamma_j \Gamma_k \int \ln \left|\Psi_{jk}(t,\sigma)\right|^2\,d\sigma, \end{equation*} the angular momentum \begin{equation*} \sum_j \Gamma_j \int \left|\Psi_j(t,\sigma)\right|^2\,d\sigma, \end{equation*} and the quantity \begin{equation*} \sum_{j\neq k}\Gamma_j \Gamma_k \int \left|\Psi_{jk}(t,\sigma)\right|^2\,d\sigma. \end{equation*} However, the previous quantities are not well-defined in the framework of Theorem \ref{thm:main}, not even formally, since $\Psi_j(t,\sigma)$ and $\Psi_{jk}(t,\sigma)$ do not tend to zero at infinity.
As in \cite{KPV}, we modify them in order to get well-defined quantities, introducing \begin{equation*}\begin{split} \mathcal{H}&=\frac{1}{2}\sum_{j} \Gamma_j^2 \int \left|\partial_\sigma \Psi_j(t,\sigma)\right|^2\,d\sigma -\frac{1}{2}\sum_{j\neq k} \Gamma_j \Gamma_k \int \ln\left(\frac{|\Psi_{jk}(t,\sigma)|^2}{|X_{jk}(t)|^2}\right)\,d\sigma\\ \mathcal{A}&=\sum_j \Gamma_j \int \left(|\Psi_j(t,\sigma)|^2-|X_j(t)|^2\right)\,d\sigma\\ \mathcal{T}&=\sum_{j\neq k}\Gamma_j \Gamma_k \int \left(|\Psi_{jk}(t,\sigma)|^2-|X_{jk}(t)|^2\right)\,d\sigma.\end{split} \end{equation*} Note that, in view of the properties of the point vortex system \eqref{syst:point-vortex} mentioned in the introduction, the renormalized quantities $\mathcal{H}$, $\mathcal{A}$ and $\mathcal{T}$ are still formally preserved in time. Finally, we also introduce the time-dependent quantity \begin{equation*} \mathcal{I}(t)=\frac{1}{2}\sum_{j\neq k}\Gamma_j \Gamma _k \int \left( \frac{|\Psi_{jk}(t)|^2}{|X_{jk}(t)|^2}-1\right)\,d\sigma, \end{equation*} and we consider the energy \begin{equation}\label{def:energy-losange-bis} \mathcal E(t)=\mathcal H+\mathcal I(t),\end{equation} which has already been introduced in \eqref{def:energy-losange} in the introduction. As noticed in \cite{KPV}, a useful consequence of the convexity estimate $(x-1)^2/2\leq x-1-\ln x\leq 10(x-1)^2$ on $[3/4,5/4]$ is the inequality \begin{equation} \label{ineq:plus-loin} \frac{1}{2}\sum_j \Gamma_j^2 \int \left|\partial_\sigma \Psi_j(t,\sigma)\right|^2\,d\sigma +\frac{1}{4}\sum_{j\neq k}\Gamma_j\Gamma_k \int \left(\frac{|\Psi_{jk}(t,\sigma)|^2}{|X_{jk}(t)|^2}-1\right)^2\,d\sigma \leq \mathcal{E}(t), \end{equation} which holds as long as the filaments satisfy $3/4\leq |\Psi_{jk}(t)|^2/|X_{jk}(t)|^2\leq 5/4$. \subsection{The approach} In this subsection we briefly sketch how to combine elements from \cite{KPV} and from \S2 to prove local existence and uniqueness of a solution to Syst.
\eqref{syst:interaction} in the general case of $N$ filaments, with $N\geq 2$, and how to extend this solution as long as the energy $\mathcal E(t)$ remains sufficiently small. Here we take positive circulations $$\Gamma_j>0,\quad 1\leq j\leq N.$$ Therefore, there exists a unique global solution $(X_j)_j$ to Syst. \eqref{syst:point-vortex}. We denote by $d>0$ the minimal distance between the point vortices for all time. Here we shall make the extra assumption that $$u_{j,0}=\Psi_{j,0}-X_{j,0}\in H^1(\mathbb{R}).$$ We look for a solution $u=(u_j)_j\in C([0,T],H^1(\mathbb{R}))^N$ to the system \begin{equation} \label{syst:pert-u} \begin{cases} \displaystyle i\partial_t u_j + \Gamma_j\partial_\sigma^2 u_j+\sum_{k\neq j} \Gamma_k \left(\frac{ X_{jk}+u_{jk}}{|X_{jk}+u_{jk}|^2}-\frac{ X_{jk}}{|X_{jk}|^2}\right)=0\\ \displaystyle u_j(0)=u_{j,0},\quad 1\leq j\leq N. \end{cases} \end{equation} By arguments similar to those in Section \ref{sec:sym}, our purpose is to find a fixed point in the Banach space \begin{equation*} B_T=\left\{ w=(w_1,\ldots,w_N)\in C\left([0,T],H^1\right)^N,\quad \sup_{0\leq t\leq T} \|w(t)\|_{H^1} \leq \frac{d}{4}\right\} \end{equation*} for the operator $A(w)=(A_j(w))_j$ defined by \begin{equation*} A_j(w)(t)=i\int_0^t \sum_{k\neq j} \Gamma_k \left(\frac{ X_{jk}(\tau)+e^{i\tau \Gamma_j\partial_\sigma^2}u_{j,0}+w_{j}(\tau)-e^{i\tau \Gamma_k\partial_\sigma^2}u_{k,0}-w_{k}(\tau)} {|X_{jk}(\tau)+e^{i\tau \Gamma_j\partial_\sigma^2}u_{j,0}+w_{j}(\tau)-e^{i\tau \Gamma_k\partial_\sigma^2}u_{k,0}-w_{k}(\tau)|^2} -\frac{ X_{jk}(\tau)}{|X_{jk}(\tau)|^2}\right)\,d\tau, \end{equation*} with $T$ sufficiently small with respect to $\eta_2$, $\sum_j \|u_{j,0}\|_{H^1},$ $(\Gamma_j)_j$ and $d$. Then, as in Section \ref{sec:sym}, the solution will be given by \begin{equation*} u_{j}(t)=e^{it \Gamma_j\partial_\sigma^2}u_{j,0}+w_{j}(t). \end{equation*} Transposing the arguments of Section \ref{sec:sym}, we obtain the following local well-posedness result.
\begin{lemma}\label{lemma:lwp} Let $(u_{j,0})_j\in H^1(\mathbb{R})^N$ be such that $\mathcal{E}_0< 10\eta_2$, with $\mathcal{E}_0$ defined in Theorem \ref{thm:main} and $\eta_2=\eta_2(d)$ a small constant depending only on $d$. There exist $T>0$, depending only on $\eta_2$, $\sum_j \|u_{j,0}\|_{H^1}$, $(\Gamma_j)_j$ and $d$, and a unique solution $(u_j)_j\in C([0,T],H^1(\mathbb{R}))^N$ to Syst. \eqref{syst:pert-u} satisfying \begin{equation*} \sup_{0\leq t\leq T} \|u_j(t)\|_{H^1}\leq \|u_{j,0}\|_{H^1}+\frac{d}{4},\quad 1\leq j\leq N. \end{equation*} Moreover, we can choose $T$ such that \begin{equation*} T\big(1+\eta_2+\sum_j\|u_{j,0}\|_{H^1}\big)\geq C(d,(\Gamma_j)_j) \end{equation*} for some constant $C(d,(\Gamma_j)_j)$ depending only on $d$ and $(\Gamma_j)_j$. \end{lemma} \begin{remark}\label{rem:extension} As a byproduct of Lemma \ref{lemma:lwp}, we see that the solution $(u_j)_j$ to \eqref{syst:pert-u} exists as long as the energy $\mathcal{E}(t)$ remains bounded by $10\eta_2$. Indeed, the norm $\sum_j\|u_j(t)\|_{H^1}$ may grow exponentially, but it cannot blow up in finite time as long as the energy remains sufficiently small. \end{remark} \begin{proof} Let $0<\Gamma\leq 1$ be such that $\Gamma\leq \min_{j} \Gamma_j$. Since all the $\Gamma_j$'s are positive, we have \begin{equation*} \max_{j\neq k} \mathcal{E}\left(\frac{\Psi_{jk,0}}{X_{jk,0}}\right)\leq \frac{1}{\Gamma^2} \mathcal{E}_0, \end{equation*} where we recall that $\mathcal{E}$ is defined by \eqref{def:energy-BM} (taking $\omega=1$). In particular, if $\eta_2$ is such that $10 \eta_2/\Gamma^2 \leq \eta_1$, with $\eta_1$ defined in Lemma \ref{lemma:ginzburg}, then $3/4 \leq |\Psi_{jk,0}|^2/|X_{jk,0}|^2\leq 5/4$ for all $j\neq k$.
Then we have for $w\in B_T$ \begin{equation*} \begin{split} &|X_{jk}(\tau)+e^{i\tau \Gamma_j\partial_\sigma^2}u_{j,0}+w_{j}(\tau)-e^{i\tau \Gamma_k\partial_\sigma^2}u_{k,0}-w_{k}(\tau)|\\ =&|\Psi_{jk,0}+(X_{jk}(\tau)-X_{jk,0})+(e^{i\tau \Gamma_j\partial_\sigma^2}u_{j,0}-u_{j,0})-(e^{i\tau \Gamma_k\partial_\sigma^2}u_{k,0}-u_{k,0})+w_{jk}(\tau)|\\ \geq& |\Psi_{jk,0}|-|X_{jk}(\tau)-X_{jk,0}|-\sqrt{2} \|(e^{i\tau \Gamma_j\partial_\sigma^2}u_{j,0}-u_{j,0})-(e^{i\tau \Gamma_k\partial_\sigma^2}u_{k,0}-u_{k,0})+w_{jk}(\tau)\|_{H^1}\\ \geq& \frac{3d}{4}-\frac{2(\sum_{j} \Gamma_j)}{d}T-C(1+T)\eta_2-\frac{\sqrt{2}d}{4}\geq \frac{d}{4} \end{split} \end{equation*} provided that $\eta_2$ is small with respect to $d$, and that $T$ is small in terms of $\eta_2$, $d$ and $(\Gamma_j)_j$. In the last inequality we have used the proof of Lemma \ref{lemma:ginzburg} ii) together with the mean-value theorem for $X_{jk}$. Now, since $X_{jk}(\tau)+e^{i\tau \Gamma_j\partial_\sigma^2}u_{j,0}+w_{j}(\tau)-e^{i\tau \Gamma_k\partial_\sigma^2}u_{k,0}-w_{k}(\tau)$ is bounded from below, direct estimates show that $A$ is a contraction on $B_T$ as long as \begin{equation*} T(1+\eta_2+\sum_j\|u_{j,0}\|_{H^1})\leq C(d,(\Gamma_j)_j), \end{equation*} and the conclusion of Lemma \ref{lemma:lwp} follows. \end{proof} \subsection{The proof of Theorem \ref{thm:main}} We now present the proof of Theorem \ref{thm:main}. By Remark \ref{rem:extension}, there exists a unique solution as long as $\mathcal E(t)$ remains sufficiently small. In the cases considered in \cite{KPV}, where the $|X_{jk}(t)|$ are all equal to the same constant $d$, we have $\mathcal I(t)=\mathcal T/(2d^2)$, so $\mathcal E(t)$ is conserved.
Also, under the hypothesis of Theorem \ref{thm:main-2}, we have $$\mathcal{I}(t)=\frac{1}{2}\sum_{j\neq k}\Gamma_j\Gamma_k \int \left( \frac{|\Psi_{jk}(t)|^2}{|X_{jk}|^2}-1\right)\,d\sigma=\frac{1}{2}\sum_{j\neq k}\Gamma_j\Gamma_k \int \left(|\Phi(t,\sigma)|^2-1\right)\,d\sigma=\omega \mathcal{A},$$ so, although the $|X_{jk}|$ are not all equal, $\mathcal{I}(t)$ and $\mathcal E(t)$ are still formally preserved. In fact, under the assumptions of Theorem \ref{thm:main-2} we have $\mathcal{E}(t)=N \mathcal{E}(\Phi(t))$, so we recover the fact that it is constant. Under the general hypothesis of Theorem \ref{thm:main}, $\mathcal{E}(t)$ is no longer constant, but it will still be a useful quantity over which we can achieve some control. We recall that $\mathcal{E}_0\leq \eta_2$. From now on we consider $T>0$ and the unique solution to Syst. \eqref{syst:pert-u} on $[0,T]$, with $\mathcal{E}(t)< 10\tilde{\mathcal{E}_0}\leq 10\eta_2$, given by Lemma \ref{lemma:lwp}. We take $T$ maximal in the sense that $\mathcal{E}(T)= 10\tilde{\mathcal{E}_0}$ (but $T$ is not necessarily the largest time of existence). We thus have $3/4< |\Psi_{jk}(t,\sigma)|< 5/2$ on $[0,T]\times \mathbb{R}$ for all $j\neq k$. \begin{proposition}\label{prop:I} We have for $t\in [0,T]$ \begin{equation*} \mathcal{E}(t)=\mathcal H+\frac{1}{2}\mathcal{T} -\mathcal{A}+ \frac{\|(u_1+u_3)(t)\|_{L^2}^2+ \|(u_2+u_4)(t)\|_{L^2}^2}{2}. \end{equation*} \end{proposition} \begin{proof}Since $(X_1,X_2,X_3,X_4)$ is a square inscribed in the circle of radius $1$, we have \begin{equation*} |X_{jk}(t)|^2=2\quad \text{if } |j-k| \text{ is odd},\quad |X_{jk}(t)|^2=4\quad \text{if }|j-k|=2. \end{equation*} It follows that \begin{equation*} \begin{split} & \sum_{j\neq k} \left(\frac{|\Psi_{jk}|^2}{|X_{jk}|^2}-1\right) =\sum_{j\neq k}\frac{|\Psi_{jk}|^2-|X_{jk}|^2}{|X_{jk}|^2}\\ &=\frac{1}{2}\sum_{j\neq k}\left(|\Psi_{jk}|^2-|X_{jk}|^2\right)+2\left(\frac{1}{4}-\frac{1}{2}\right)\left( |\Psi_{13}|^2-|X_{13}|^2+|\Psi_{24}|^2-|X_{24}|^2\right).
\end{split} \end{equation*} On the other hand, we compute \begin{equation*} \begin{split} & |\Psi_{13}|^2+|\Psi_{24}|^2-|X_{13}|^2-|X_{24}|^2\\ &=2\sum_{j=1}^4 |\Psi_j|^2-|\Psi_1+\Psi_3|^2-|\Psi_2+\Psi_4|^2-8\\ &=2\sum_{j=1}^4 \left( |\Psi_j|^2-|X_j|^2\right)-\left( |\Psi_1+\Psi_3|^2+|\Psi_2+\Psi_4|^2\right), \end{split} \end{equation*} so integrating with respect to $\sigma$ and using that $\Psi_1+\Psi_3=u_1+u_3$ and $\Psi_2+\Psi_4=u_2+u_4$ (because $X_1+X_3=X_2+X_4=0$), we are led to the conclusion. \end{proof} \begin{corollary} In the case of the parallelogram we have $\|(u_1+u_3)(0)\|_{L^2}^2=\|(u_2+u_4)(0)\|_{L^2}^2=0$, and it follows that $\|(u_1+u_3)(t)\|_{L^2}^2=\|(u_2+u_4)(t)\|_{L^2}^2=0$ for all times, using the fact that if $(\Psi_1,\Psi_2,\Psi_3,\Psi_4)$ is a solution of \eqref{syst:interaction} then $(-\Psi_3,-\Psi_4,-\Psi_1,-\Psi_2)$ is also a solution, together with uniqueness. Then $\mathcal I$ is conserved in time and global existence follows. \end{corollary} \begin{remark} One can do similar computations in other particular cases, for instance for the endpoints and the midpoint of a segment, $$\mathcal E(t)=-\mathcal{H}+\mathcal{I}-\frac 32 \mathcal{A} + \frac 34 \left(\|u_1(t)\|_{L^2}^2 +\|(u_2+u_3)(t)\|_{L^2}^2\right),$$ or for the hexagon, $$\mathcal E(t)=-\mathcal{H}+\mathcal{I}-\frac 72 \mathcal{A} + \frac 23 \sum_{j=1}^2\|(u_j+u_{j+2}+u_{j+4})(t)\|_{L^2}^2+\frac 34 \sum_{j=1}^3 \|(u_j+u_{j+3})(t)\|_{L^2}^2.$$ But these quantities have no reason to be conserved, unless the perturbations have the same shape as the configuration $(X_j)$, which falls within the framework of the first part of this article. Moreover, when trying to control the growth of $\|u_1(t)\|_{L^2}$, for instance in the first example, the time of control is not satisfactory due to the presence of linear terms in the equation for $u_1$ that cannot be absorbed. \end{remark} In order to control the evolution of the energy we have to control the quantity $\|(u_1+u_3)(t)\|_{L^2}^2+\|(u_2+u_4)(t)\|_{L^2}^2$.
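The computation in the proof of Proposition \ref{prop:I} is a pure polarization identity, valid pointwise for arbitrary complex values; a quick numerical check (our own, with the reference square taken as $X_j=i^{j-1}$) confirms it:

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.array([1, 1j, -1, -1j])         # square inscribed in the unit circle
Psi = rng.normal(size=4) + 1j * rng.normal(size=4)
u = Psi - X

# side and diagonal lengths used in the proof
assert np.isclose(abs(X[0] - X[1]) ** 2, 2) and np.isclose(abs(X[0] - X[2]) ** 2, 4)

# |Psi_13|^2 + |Psi_24|^2 - |X_13|^2 - |X_24|^2 ...
lhs = abs(Psi[0] - Psi[2]) ** 2 + abs(Psi[1] - Psi[3]) ** 2 - 8
# ... = 2*sum(|Psi_j|^2 - |X_j|^2) - (|Psi_1+Psi_3|^2 + |Psi_2+Psi_4|^2)
rhs = 2 * sum(abs(Psi[j]) ** 2 - abs(X[j]) ** 2 for j in range(4)) \
    - (abs(Psi[0] + Psi[2]) ** 2 + abs(Psi[1] + Psi[3]) ** 2)
assert np.isclose(lhs, rhs)

# Psi_1+Psi_3 = u_1+u_3 and Psi_2+Psi_4 = u_2+u_4, since X_1+X_3 = X_2+X_4 = 0
assert np.isclose(Psi[0] + Psi[2], u[0] + u[2]) and np.isclose(Psi[1] + Psi[3], u[1] + u[3])
```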
We are led to introduce the new unknowns $$v=u_1+u_3,\quad w=u_2+u_4.$$ \begin{proposition}\label{prop:est-U} We have for $t\in [0,T]$, with $v=u_1+u_3$ and $w=u_2+u_4$, \begin{equation*}\begin{split} \|v(t)\|_{L^2}+\|w(t)\|_{L^2} &\leq \|v(0)\|_{L^2}+\|w(0)\|_{L^2}\\+ Ct\sup_{s\in [0,T]}& \max_{j\neq k}\|u_{jk}(s)\|_{L^2}^{1/2}\mathcal E(s)^{1/4} \left(\|v(s)\|_{L^2}+\|w(s)\|_{L^2}+ \mathcal E(s)^{1/2}\right).\end{split} \end{equation*} \end{proposition} \begin{proof}In view of Syst. \eqref{syst:interaction} and Syst. \eqref{syst:point-vortex}, we have \begin{equation*} \begin{split} i\partial_t v+\partial_\sigma^2 v& =-\sum_{k\neq 1,3}\left\{ \left( \frac{\Psi_{1k}}{|\Psi_{1k}|^2}-\frac{X_{1k}}{|X_{1k}|^2}\right) +\left(\frac{\Psi_{3k}}{|\Psi_{3k}|^2}-\frac{X_{3k}}{|X_{3k}|^2}\right)\right\}\\ &=-\sum_{k\neq 1,3} \left\{ X_{1k}\left( \frac{1}{|\Psi_{1k}|^2}-\frac{1}{|X_{1k}|^2}\right) +X_{3k}\left( \frac{1}{|\Psi_{3k}|^2}-\frac{1}{|X_{3k}|^2}\right)\right\}\\ &-\sum_{k\neq 1,3} \left\{ u_{1k}\left( \frac{1}{|\Psi_{1k}|^2}-\frac{1}{|X_{1k}|^2}\right) +u_{3k}\left( \frac{1}{|\Psi_{3k}|^2}-\frac{1}{|X_{3k}|^2}\right)\right\}\\ &-\sum_{k\neq 1,3}\left\{ \frac{u_{1k}}{|X_{1k}|^2}+\frac{u_{3k}}{|X_{3k}|^2}\right\}. 
\end{split} \end{equation*} We infer that \begin{equation*} \begin{split} i\partial_t v+\partial_\sigma^2 v&=\mathcal{L}_v(u)+\mathcal{R}_v(u),\end{split}\end{equation*} where $\mathcal{L}_v$ denotes the linear part, \begin{equation*} \begin{split} \mathcal{L}_v&(u)=2\sum_{k\neq 1,3} \left\{X_{1k} \frac{\re (\overline{u_{1k}}X_{1k})}{|X_{1k}|^4} +X_{3k} \frac{\re (\overline{u_{3k}}X_{3k})}{|X_{3k}|^4}\right\} -\sum_{k\neq 1,3}\left\{ \frac{u_{1k}}{|X_{1k}|^2}+\frac{u_{3k}}{|X_{3k}|^2}\right\} \end{split} \end{equation*} and where the remainder $\mathcal{R}_v$ is quadratic in $u$, \begin{equation*} \begin{split} &\mathcal{R}_v(u)\\ &=\sum_{k\neq 1,3}\left\{ \frac{X_{1k}}{|X_{1k}|^4}|u_{1k}|^2+\frac{X_{3k}}{|X_{3k}|^4}|u_{3k}|^2\right\}\\ &-\sum_{k\neq 1,3} \left\{ X_{1k}\left( \frac{|X_{1k}|^2-|\Psi_{1k}|^2}{|X_{1k}|^2}\right) \left(\frac{1}{|\Psi_{1k}|^2}-\frac{1}{|X_{1k}|^2}\right) +X_{3k}\left( \frac{|X_{3k}|^2-|\Psi_{3k}|^2}{|X_{3k}|^2}\right) \left(\frac{1}{|\Psi_{3k}|^2}-\frac{1}{|X_{3k}|^2}\right)\right\}\\ &-\sum_{k\neq 1,3} \left\{ u_{1k}\left( \frac{1}{|\Psi_{1k}|^2}-\frac{1}{|X_{1k}|^2}\right) +u_{3k}\left( \frac{1}{|\Psi_{3k}|^2}-\frac{1}{|X_{3k}|^2}\right)\right\}\\ &=\mathcal{R}_v^1(u)+\mathcal{R}_v^2(u)+\mathcal{R}_v^3(u). \end{split} \end{equation*} We claim that $\mathcal{L}_v(u)=0$. Indeed, using that $|X_{1k}|^2=|X_{3k}|^2=2$ for $k\neq 1,3$, \begin{equation*}\begin{split} \mathcal{L}_v(u)&= \frac{1}{2}\sum_{k\neq 1,3} \left(X_{1k} \re (\overline{u_{1k}}X_{1k}) +X_{3k} \re (\overline{u_{3k}}X_{3k})\right) -\frac{1}{2}\sum_{k\neq 1,3}\left(v-2u_k\right)\\ &=\frac{1}{2}\sum_{k\neq 1,3} \left(X_{1k} \re (\overline{u_{1k}}X_{1k}) +X_{3k} \re (\overline{u_{3k}}X_{3k})\right)-v+w.
\end{split} \end{equation*} Now we compute, using that $X_{12}=-X_{34}$ and $X_{23}=X_{14}$, \begin{equation*} \begin{split} &\sum_{k\neq 1,3} \left(X_{1k} \re (\overline{u_{1k}}X_{1k}) +X_{3k} \re (\overline{u_{3k}}X_{3k})\right)\\ &=X_{12}\re (\overline{u_{12}}X_{12}) +X_{32}\re (\overline{u_{32}}X_{32})+X_{14}\re (\overline{u_{14}}X_{14}) +X_{34}\re (\overline{u_{34}}X_{34})\\ &=X_{12}\re (\overline{u_{12}}X_{12})+X_{12}\re (\overline{u_{34}}X_{12}) +X_{32}\re (\overline{u_{32}}X_{32})+X_{32}\re (\overline{u_{14}}X_{32})\\ &=X_{12}\re ((\overline{u_{12}}+\overline{u_{34}})X_{12})+X_{32}\re ((\overline{u_{32}}+\overline{u_{14}})X_{32}). \end{split} \end{equation*} We observe that \begin{equation*} u_{12}+u_{34}=u_{32}+u_{14}=u_1+u_3-(u_2+u_4)= v-w. \end{equation*} Therefore, inserting $iX_{12}=X_{23}$ and $|X_{12}|^2=2$ into the previous formula, we find \begin{equation*} \begin{split} & \sum_{k\neq 1,3} \big(X_{1k} \re (\overline{u_{1k}}X_{1k}) +X_{3k} \re (\overline{u_{3k}}X_{3k})\big)\\ &=X_{12}\re ((\overline{v}-\overline{w})X_{12})-iX_{12}\im ((\overline{v}-\overline{w})X_{12})\\&=2(v-w), \end{split} \end{equation*} and finally $ \mathcal{L}_v(u)=0. $ We next estimate the remainder terms. Since $3/4<|\Psi_{jk}|<5/2$, we have $\left||X_{jk}|^2-|\Psi_{jk}|^2\right|\leq C|u_{jk}|$ on $[0,T]$ and therefore \begin{equation}\label{ineq:reste1} \left|\mathcal{R}^2_v(u)+\mathcal{R}^3_v(u)\right|\leq C\max_{j\neq k} |u_{jk}|\left| \frac{|\Psi_{jk}|^2}{|X_{jk}|^2}-1\right|.
\end{equation} Expanding the first term $\mathcal{R}_v^1(u)$ and using the symmetries of $(X_1,X_2,X_3,X_4)$, we then have \begin{equation*} \begin{split} \mathcal{R}^1_v(u)&= \frac{1}{4} \sum_{k\neq 1,3}\left\{ X_{1k}|u_{1k}|^2+X_{3k}|u_{3k}|^2\right\}\\ &= \frac{1}{4}\left\{ X_{12}\left(|u_{12}|^2-|u_{34}|^2\right)+X_{14}\left(|u_{14}|^2-|u_{32}|^2\right)\right\}\\ &=\frac{1}{4}\left\{ X_{12}\re\left(\overline{u_{12}-u_{34}}\, (v-w)\right)+ X_{14}\re\left(\overline{u_{14}-u_{32}}\, (v-w)\right)\right\}, \end{split} \end{equation*} so that \begin{equation} \label{ineq:reste2} \left|\mathcal{R}^1_v(u)\right|\leq C\max_{j,k}|u_{jk}||v-w|. \end{equation} We perform similar computations for $w$, and from \eqref{ineq:reste1}--\eqref{ineq:reste2} we infer the estimate \begin{equation*}\begin{split} \|v(t)\|_{L^2}+\|w(t)\|_{L^2}&\leq \|v(0)\|_{L^2}+\|w(0)\|_{L^2} +\int_0^t \left(\|\mathcal{R}_v(u)(s)\|_{L^2}+\|\mathcal{R}_w(u)(s)\|_{L^2}\right)\,ds\\ &\leq \|v(0)\|_{L^2}+\|w(0)\|_{L^2}\\& +t\sup_{s\in[0,t]}\max_{j\neq k}\|u_{jk}(s)\|_{L^\infty}\left( \left\|\frac{|\Psi_{jk}(s)|^2}{|X_{jk}(s)|^2}-1\right\|_{L^2}+\|v(s)\|_{L^2}+\|w(s)\|_{L^2}\right). \end{split} \end{equation*} Finally, we apply the Gagliardo-Nirenberg inequality and \eqref{ineq:plus-loin} to obtain the conclusion. \end{proof} \begin{proposition}\label{prop:est-uj} We have for $t\in [0,T]$ \begin{equation*} \sum_{j\neq k} \|u_{jk}(t)\|_{L^2}\leq C \sum_{j\neq k} \|u_{jk}(0)\|_{L^2} +C\,t\, \sup_{s\in[0,t]}\mathcal E(s)^{1/2}. \end{equation*} \end{proposition} \begin{proof} By \eqref{syst:pert-u}, \begin{equation*} \begin{split} & i\partial_t u_{jk}+\partial_\sigma^2 u_{jk}\\ &=-\sum_{l\neq j}\frac{u_{jl}}{|\Psi_{jl}|^2}+\sum_{l\neq k}\frac{u_{kl}}{|\Psi_{kl}|^2} -\sum_{l\neq j} X_{jl}\left( \frac{1}{|\Psi_{jl}|^2} -\frac{1}{|X_{jl}|^2}\right)+\sum_{l\neq k} X_{kl}\left( \frac{1}{|\Psi_{kl}|^2} -\frac{1}{|X_{kl}|^2}\right).
\end{split} \end{equation*} We multiply the equation by $\overline{u_{jk}}$, take the imaginary part and sum over $j$ and $k$; the first two terms on the right-hand side cancel. Indeed, \begin{equation*} \begin{split} \sum_{j, k} \sum_{l\neq j} \frac{\im\left( u_{jk}\overline{u_{jl}}\right)}{|\Psi_{jl}|^2} &=\sum_{j, k} \sum_{l\neq j} \frac{\im\left( (u_{jl}+u_{lk})\overline{u_{jl}}\right)}{|\Psi_{jl}|^2}\\ &=\sum_{j, k} \sum_{l\neq j} \frac{\im\left( u_{lk}\overline{u_{jl}}\right)}{|\Psi_{jl}|^2}\\ &=-\sum_{j, k} \sum_{l\neq j} \frac{\im\left( u_{jk}\overline{u_{jl}}\right)}{|\Psi_{jl}|^2}, \end{split} \end{equation*} by exchanging $j$ and $l$ in the last equality. Therefore the latter sum vanishes. By the same arguments we also have \begin{equation*} \sum_{j, k} \sum_{l\neq k} \frac{\im\left( u_{jk}\overline{u_{kl}}\right)}{|\Psi_{kl}|^2}=0. \end{equation*} It follows that \begin{equation*} \begin{split} \frac{d}{dt} \sum_{j\neq k} \|u_{jk}\|_{L^2}^2&\leq C \sum_{j\neq k} \sum_{l\neq j}\int |u_{jk}||X_{jl}|\frac{1}{|\Psi_{jl}|^2} \left| \frac{|\Psi_{jl}|^2}{|X_{jl}|^2}-1\right|\,d\sigma\\ &\leq C\big(\sum_{j\neq k} \|u_{jk}\|_{L^2}^2\big)^{1/2} \max_{j\neq k} \left\| \frac{|\Psi_{jk}|^2}{|X_{jk}|^2}-1\right\|_{L^2}, \end{split} \end{equation*} and we finally obtain by \eqref{ineq:plus-loin} \begin{equation*} \left|\frac{d}{dt} \big(\sum_{j,k} \|u_{jk}(t)\|_{L^2}^2\big)^{1/2}\right| \leq C\mathcal E(t)^{1/2}. \end{equation*} The conclusion follows. \end{proof} We are now able to control the evolution of $\mathcal E(t)$ and to complete the proof of Theorem \ref{thm:main}.
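The key cancellation $\mathcal{L}_v(u)=0$, together with its analogue for $w$, depends only on the geometry of the square; the following numerical check with random perturbations (our own sketch, taking $X_j=i^{j-1}$, which satisfies $iX_{12}=X_{23}$) confirms it:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([1, 1j, -1, -1j])        # square with circumradius 1, iX_{12} = X_{23}
u = rng.normal(size=4) + 1j * rng.normal(size=4)

def linear_part(j, m):
    """L(u) for the pair (j, m), 0-indexed: (0, 2) gives L_v, (1, 3) gives L_w."""
    out = 0
    for kk in range(4):
        if kk in (j, m):
            continue
        for a in (j, m):
            Xak, uak = X[a] - X[kk], u[a] - u[kk]
            out += 2 * Xak * (np.conj(uak) * Xak).real / abs(Xak) ** 4
            out -= uak / abs(Xak) ** 2
    return out

assert abs(linear_part(0, 2)) < 1e-12   # L_v(u) = 0
assert abs(linear_part(1, 3)) < 1e-12   # the same cancellation holds for w
```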
First we recall that, by Proposition \ref{prop:I}, \begin{equation*} \frac{1}{2}\left(\|v(t)\|_{L^2}^2+\|w(t)\|_{L^2}^2\right)-\tilde{\mathcal{E}_0}\leq \mathcal{E}(t)\leq \tilde{\mathcal{E}_0}+\frac{1}{2}\left(\|v(t)\|_{L^2}^2+\|w(t)\|_{L^2}^2\right), \end{equation*} so in particular \begin{equation*}\begin{split} \mathcal{E}(t)+\|v(t)\|_{L^2}^2+\|w(t)\|_{L^2}^2 \leq C\tilde{\mathcal{E}_0}\quad \text{on}\quad [0,T]. \end{split}\end{equation*} Next, in view of Proposition \ref{prop:est-U}, we have \begin{equation*}\begin{split}\mathcal E(t) &\leq \tilde{\mathcal E_0}+(\|v(t)\|_{L^2}+\|w(t)\|_{L^2})^2 \leq \tilde{\mathcal E_0}+2(\|v(0)\|_{L^2}+\|w(0)\|_{L^2})^2\\ &+Ct^2\sup_{s\in [0,t]}\max_{j, k}\|u_{jk}(s)\|_{L^2} \mathcal E(s)^{1/2}\left(\mathcal{E}(s)^{1/2}+\|v(s)\|_{L^2}+\|w(s)\|_{L^2}\right)^2\\ &\leq 9\tilde{\mathcal{E}_0}+Ct^2\sup_{s\in [0,t]}\max_{j, k}\|u_{jk}(s)\|_{L^2}\tilde{\mathcal E_0}^{3/2} \end{split}\end{equation*} and finally, by Proposition \ref{prop:est-uj}, $$\mathcal E(t)\leq 9\tilde{\mathcal E_0}+Ct^2 \max_{j, k}\|u_{jk,0}\|_{L^2} \tilde{\mathcal E_0}^{3/2}+Ct^3 \tilde{\mathcal E_0}^2.$$ Setting $t=T$ in the above inequality and recalling that $\mathcal{E}(T)=10\tilde{\mathcal{E}_0}$, we infer that \begin{equation*} 1\leq C T^2 \max_{j, k}\|u_{jk,0}\|_{L^2}\tilde{\mathcal E_0}^{1/2}+CT^3\tilde{\mathcal{E}_0}. \end{equation*} We conclude that $T$ is larger than $$C\min\left\{\frac{1}{\tilde{\mathcal E_0}^{1/4}\max_{j, k} \|u_{jk,0}\|_{L^2}^{1/2}}, \frac{1}{\tilde{\mathcal E_0}^{1/3}}\right\},$$ as we wanted. This concludes the proof of Theorem \ref{thm:main}. \section{Proof of Theorem \ref{thm:TW}} \label{sec:tw} Before proving Theorem \ref{thm:TW} we start with some preliminary computations. We mainly follow the Appendix of \cite{Gr}. Assume that $v$ is a $\mathcal{C}^\infty$ small-energy solution to Eq. \eqref{eq:TW} such that $v'$ vanishes at infinity. We set $$\eta=1-|v|^2;$$ then $\eta$ vanishes at infinity.
We decompose $v$ into its real and imaginary parts, $v=v_1+iv_2$. Eq. \eqref{eq:TW} then gives the system $$\left\{\begin{array}{c}\displaystyle-cv_2'+v_1''+\omega\frac{v_1}{v_1^2+v_2^2}-\omega v_1=0, \\ \displaystyle cv_1'+v_2''+\omega\frac{v_2}{v_1^2+v_2^2}-\omega v_2=0.\end{array}\right.$$ Subtracting the first equation multiplied by $v_2$ from the second one multiplied by $v_1$, we obtain $$(v_1v_2'-v_1'v_2-\frac{c}{2}\eta)'=0,$$ so, since $v$ has finite energy, we can integrate from infinity and get \begin{equation} \label{eq:TW2}v_1v_2'-v_1'v_2=\frac{c}{2}\eta.\end{equation} Next we add the first equation multiplied by $v_1'$ to the second one multiplied by $v_2'$, $$(v_1'^2+v_2'^2+\omega\ln (v_1^2+v_2^2)-\omega(v_1^2+v_2^2))'=0,$$ so \begin{equation} \label{eq:TW3} |v'|^2=-\omega\ln(1-\eta)-\omega\eta. \end{equation} Finally, in view of \eqref{eq:TW2} and \eqref{eq:TW3} we can compute \begin{equation*} \begin{split} \eta''&=-2|v'|^2-2(v_1v_1''+v_2v_2'')\\ &=-2|v'|^2-2v_1(cv_2'-\omega\frac{v_1}{v_1^2+v_2^2}+\omega v_1)-2v_2(-cv_1'-\omega\frac{v_2}{v_1^2+v_2^2}+\omega v_2)\\ &=-2|v'|^2-2c(v_1v_2'-v_1'v_2)+2\omega-2\omega(v_1^2+v_2^2)\\ &=2\omega\ln (1-\eta)+4\omega \eta-c^2\eta. \end{split} \end{equation*} So we find \begin{equation} \label{eq:eta} \eta''-2\omega \ln(1-\eta)+(c^2-4\omega)\eta=0. \end{equation} Multiplying by $\eta'$ and integrating, we obtain \begin{equation*} (\eta')^2+(c^2-4\omega)\eta^2-4\omega \big( (\eta-1)\ln(1-\eta)-\eta\big)=0, \end{equation*} which is satisfied if $\eta$ verifies \begin{equation} \label{eq:eta'} \eta'=\alpha\Big(-(c^2-4\omega)\eta^2+4\omega\big( (\eta-1)\ln(1-\eta)-\eta\big) \Big)^{1/2},\quad\alpha=\alpha(\sigma)=\pm 1. \end{equation} We now turn to the proof of Theorem \ref{thm:TW}. From now on we look for solutions such that $\eta$ is sufficiently small on the whole of $\mathbb{R}$ and for which the right-hand side in \eqref{eq:eta'} makes sense.
We introduce \begin{equation*} a(\eta)=-(c^2-4\omega)\eta^2+4\omega\big( (\eta-1)\ln(1-\eta)-\eta\big). \end{equation*} For $0<\eta<1$, we perform a Taylor expansion of $a$: \begin{equation*} \begin{split} a(\eta) &=(2\omega-c^2)\eta^2-2\omega \frac{\eta^3}{3}-4\omega\sum_{k\geq 4} \frac{\eta^k}{k(k-1)},\end{split}\end{equation*} therefore \begin{equation*}\begin{split}b(\eta)\equiv\frac{a(\eta)}{\eta^2}= 2\omega-c^2-2\omega \frac{\eta}{3}+r(\eta), \end{split} \end{equation*} with $r(\eta)=o(\eta)\leq 0$ such that $r'(\eta)=O(\eta)$. Let us set $$\sigma_0=\frac{3(2\omega-c^2)}{2\omega}>0;$$ then $b(\sigma_0)\leq 0$. Since on the other hand $b(0)>0$, there exists $\sigma_1\in(0,\sigma_0]$ such that $b(\sigma_1)=0$. Moreover, since for $\eta\in[0,\sigma_0]$ we have $b'(\eta)=-\frac{2\omega}{3}+r'(\eta)\leq -\frac{2\omega}{3}+C(2\omega-c^2)<0$ for $2\omega-c^2$ sufficiently small, we infer that $b$ is strictly decreasing on $[0,\sigma_0]$ and therefore $\sigma_1$ is the unique zero of $a$ on $]0,\sigma_0]$. Next, we fix a small parameter $\varepsilon>0$ and we consider the ODE \begin{equation*} \begin{cases} \displaystyle y'_\varepsilon(\sigma)=-\sqrt{a(y_\varepsilon(\sigma))},\\ y_\varepsilon(0)=\sigma_1-\varepsilon. \end{cases} \end{equation*} Since $\sqrt{a}$ is Lipschitz on $[0,\sigma_1-\varepsilon/2)$, we can find a unique maximal solution on some interval $I$ containing the origin. We claim that $\sup I=+\infty.$ We show first that $0<y_\varepsilon\leq \sigma_1-\varepsilon$ on $I\cap[0,\infty)$. Indeed, $y_\varepsilon$ is strictly decreasing on $I\cap[0,\infty)$. Assume by contradiction that there exists $\overline{\sigma}$ such that $y_\varepsilon(\overline{\sigma})=0$ and $y_\varepsilon>0$ on $[0,\overline{\sigma})$. We recall that $b(y)\sim 2\omega-c^2$ when $y\to 0$.
Therefore \begin{equation*} y_\varepsilon'(\sigma)\geq -2\sqrt{2\omega-c^2}y_\varepsilon(\sigma) \quad \text{for } \sigma\in [\overline{\sigma}-\delta,\overline{\sigma}] \end{equation*} with $\delta$ small. Integrating the differential inequality above yields \begin{equation*} y_\varepsilon(\sigma)\geq y_\varepsilon(\overline{\sigma}-\delta)\exp(-2\sqrt{2\omega-c^2}(\sigma-\overline{\sigma}+\delta))\quad \text{on } [\overline{\sigma}-\delta,\overline{\sigma}], \end{equation*} which contradicts the fact that $y_\varepsilon(\overline{\sigma})=0$. Next, since $y\mapsto \sqrt{a(y)}$ is Lipschitz and bounded on $[0,\sigma_1-\varepsilon]$, the maximal solution $y_\varepsilon$ exists on $[0,\infty)$, which proves the claim. We next let $\varepsilon\to 0$. Noting that $y_\varepsilon$ and $y'_\varepsilon$ are uniformly bounded on $[0,\infty)$, we can pass to the limit to find a solution\footnote{We do not claim that such a solution is unique or maximal.} $y$ to the ODE \begin{equation*} \begin{cases} \displaystyle y'=-\sqrt{a(y)},\quad \sigma\geq 0\\ \displaystyle y(0)=\sigma_1. \end{cases} \end{equation*} We finally set \begin{equation*} \eta(\sigma)=y(\sigma)\quad \text{for }\sigma\in [0,+\infty) \quad \text{and}\quad \eta(\sigma)=y(-\sigma)\quad \text{for }\sigma\in (-\infty,0]. \end{equation*} Thanks to $\eta(0)=\sigma_1$ and $a(\sigma_1)=0$ we check that $\eta\in C^\infty(\mathbb{R})$ is a solution of the ODE \eqref{eq:eta}. Moreover, by the same kind of arguments as before we have $\eta\to 0$, hence $\eta'(\sigma)\sim -\sqrt{2\omega-c^2}\eta(\sigma)$ as $\sigma\to \infty$, which yields the exponential decay $\eta(\sigma)\leq C_\delta \eta(0)\exp(-(\sqrt{2\omega-c^2}-\delta)|\sigma|)$ for all $0<\delta<\sqrt{2\omega-c^2}$.
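To make the decay rate explicit (an elementary expansion of the preceding sentence), write $a(y)=y^2b(y)$ and note that $b(\eta(\sigma))\to 2\omega-c^2$ as $\sigma\to\infty$. Hence, for every $0<\delta<\sqrt{2\omega-c^2}$ there exists $\sigma_\delta$ such that, for $\sigma\geq\sigma_\delta$, \begin{equation*} \frac{\eta'(\sigma)}{\eta(\sigma)}=-\sqrt{b(\eta(\sigma))}\leq -\big(\sqrt{2\omega-c^2}-\delta\big), \end{equation*} and integrating from $\sigma_\delta$ to $\sigma$ gives $\eta(\sigma)\leq \eta(\sigma_\delta)\exp\big(-(\sqrt{2\omega-c^2}-\delta)(\sigma-\sigma_\delta)\big)$, which is the stated bound up to the constant $C_\delta$.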
We complete the proof of Theorem \ref{thm:TW} by looking for a solution of the form \begin{equation}\label{def:solution-TW}v=\sqrt{1-\eta}\exp(i\theta).\end{equation} Then according to \eqref{eq:TW2} we must have \begin{equation}\label{eq:theta'} (1-\eta)\theta'=\frac{c\eta}{2} \end{equation} (note that in particular $\theta$ is an increasing function on $\mathbb{R}$). Therefore, setting $ \theta(\sigma)=\theta_0+\int_0^\sigma\frac{c\eta}{2(1-\eta)}\,d\tau $ with $\theta_0\in \mathbb{R}$, we get \begin{equation*} |\theta(+\infty)-\theta(-\infty)|\leq \frac{C\eta(0)}{\sqrt{2\omega-c^2}}\leq C\sqrt{2\omega-c^2}. \end{equation*} Also, the map defined by \eqref{def:solution-TW} is a solution to \eqref{eq:TW}. It only remains to show that $v$ has finite energy. This clearly holds in view of the exponential decay of $\eta$, of $\eta'$ (by \eqref{eq:eta'}) and of $\theta'$ (by \eqref{eq:theta'}) at infinity. Moreover in view of \eqref{eq:theta'} we obtain \begin{equation*} \mathcal{E}(v)\leq C\|\eta\|_{H^1}^2\leq C(2\omega-c^2)^{3/2} \end{equation*} and the conclusion of Theorem \ref{thm:TW} follows. \section{Proof of Theorem \ref{thm:blup}} \label{sec:blup} Under the hypothesis of Theorem \ref{thm:blup}, the angular speed of the configuration $(X_j)_j$ is $\omega=0$, so if we set $$\Psi_j(t,\sigma)=X_j(t)\Phi(t,\sigma)$$ for a solution of Syst.
\eqref{syst:interaction}, we have shown in Section~\ref{sec:sym} that $\Phi$ has to solve the linear Schr\"odinger equation, $$i\partial_t \Phi+\partial_\sigma^2 \Phi=0.$$ Since the linear evolution of a Gaussian $G_0(\sigma)=e^{-\sigma^2}$ is $$\big(e^{it\partial_\sigma^2}G_0\big)(\sigma)=\frac{e^{-\frac{\sigma^2}{1+4it}}}{\sqrt{1+4it}},$$ it follows that the linear evolution of $$\Phi_0(\sigma)=1-\frac{e^{-\frac{\sigma^2}{1-4i}}}{\sqrt{1-4i}}$$ is precisely $$\Phi(t,\sigma)=1-\frac{e^{-\frac{\sigma^2}{1-4i(1-t)}}}{\sqrt{1-4i(1-t)}}.$$ We notice that $\Phi(t,\sigma)\overset{|\sigma|\rightarrow\infty}{\longrightarrow} 1$ for $t\in [0,1]$, and for $t\in[0,1[$ $$|\Phi(t,\sigma)|> 1-\frac{1}{(1+16(1-t)^2)^{1/4}}>0.$$ On the other hand we have $$\Phi(1,\sigma)=1-e^{-\sigma^2},$$ so $\sigma=0$ is a vanishing point at $t=1$ and Theorem \ref{thm:blup} follows. \end{document}
\begin{document} \begin{abstract} In his work extending rational simple connectedness to schemes with higher Picard rank, Yi Zhu introduced hypotheses for schemes ensuring that the relative Picard functor is representable and is \'{e}tale locally constant with finite free stalks. We give examples showing that one cannot eliminate any of the hypotheses and still have a representable Picard functor that is locally constant with finite free stalks. We also prove that the hypotheses are compatible with composition and with hyperplane sections. \end{abstract} \maketitle \section{Acyclic Schemes} \label{sec-acyc} \marpar{sec-acyc} \medskip\noindent The \emph{acyclic schemes} have relative Picard functors that are representable and that are \'{e}tale locally constant with stalks being finite free Abelian groups. This class includes smooth, rationally connected varieties in characteristic $0$, as well as mildly singular specializations of these schemes. For this class of schemes, the \emph{Abel maps} of \cite{dJHS} and \cite{Zhu} exist and have good properties. This note proves some basic properties of these schemes. After reviewing Zhu's theorem about the relative Picard functor of acyclic schemes, Proposition \ref{prop-Zhu}, in the next section we present several examples showing that if any of the hypotheses in Definition \ref{defn-Zhu} is removed, then Proposition \ref{prop-Zhu} fails. The compatibilities are Proposition \ref{prop-compos}, compatibility of Definition \ref{defn-Zhu} with composition, Corollary \ref{cor-iterate1}, compatibility of Definition \ref{defn-Zhu} with ample hypersurfaces, Corollary \ref{cor-iterate2}, the application of Corollary \ref{cor-iterate1} to a universal family of hypersurface sections, and Corollary \ref{cor-Lef}, the iteration of Corollary \ref{cor-iterate2} for a universal family of complete intersections of hypersurface sections.
\begin{defn} \label{defn-kacyclic} \marpar{defn-kacyclic} Let $r\geq 0$ be an integer. A projective, fppf morphism, $f:X\to T$, is \emph{$r$-acyclic for the structure sheaf} if for every $T$-scheme $T'$ and base change morphism $f':X'\to T'$, the induced morphism $\mathcal{O}_{T'} \to Rf'_*\mathcal{O}_{X'}$ is a quasi-isomorphism in all degrees $\leq r$. The morphism is \emph{$\mathcal{O}$-acyclic} if it is $r$-acyclic for every $r\geq 0$. \end{defn} \begin{rmk} \label{rmk-kacyclic} \marpar{rmk-kacyclic} Every projective, fppf morphism is locally on the target the base change of a projective, fppf morphism of Noetherian schemes. If $T$ is Noetherian, then $f$ is $r$-acyclic if and only if for every geometric point $t$ of $T$, $\kappa(t)\to H^0(X_t,\mathcal{O}_{X_t})$ is an isomorphism and $h^q(X_t,\mathcal{O}_{X_t})$ equals $0$ for every $0<q\leq r$ by Cohomology and Base Change, \cite[Theorem III.12.11]{H}. \end{rmk} \begin{defn}\cite[Definition 2.10]{Zhu} \label{defn-Zhu} \marpar{defn-Zhu} A projective morphism $f:X_T\to T$ is \emph{weakly acyclic}, resp. \emph{acyclic}, if \begin{enumerate} \item[(i)] $f$ is fppf, \item[(ii)] every $X_t$ is LCI and $\text{codim}_{X_t}(\text{Sing}(X_t))$ is $\geq 3$, resp. is $\geq 4$, \item[(iii)] $X_t$ is $2$-acyclic for the structure sheaf, and \item[(iv)] $X_t$ is algebraically simply connected. \end{enumerate} The \emph{acyclic locus}, resp. \emph{weakly acyclic locus}, is the maximal open subscheme $T^o\subset T$ such that $X_T\times_T T^o\to T^o$ is acyclic, resp. weakly acyclic. \end{defn} \begin{prop}\cite[Proposition X.1.2]{SGA1} \label{lem-Zhu} \marpar{lem-Zhu} For every proper, fppf morphism $f:X_T\to T$ whose geometric fibers are reduced, the finite part of the Stein factorization is \'{e}tale over $T$.
\end{prop} \begin{proof} By limit theorems, it suffices to prove the result when $T$ is a Noetherian scheme (even finitely presented over $\text{Spec } \mathbb{Z}$). The finite part of the Stein factorization is a finite morphism to $T$. To prove that it is \'{e}tale, it suffices to prove that it is formally \'{e}tale, e.g., it suffices to prove that it is formally \'{e}tale after base change to the strictly Henselized local ring $\mathcal{O}_{T,t}^{sh}$ for every $t$ in $T$. The Stein factorization is compatible with flat base change of $T$. Thus, without loss of generality, assume that $T$ equals $\text{Spec } \mathcal{O}_{T,t}^{sh}$. By \cite[Proposition 18.5.19]{EGA4}, it suffices to consider the case that $X_t$ is connected. Since $X_t$ is connected, projective, and reduced over the algebraically closed field $\kappa(t)$, the natural homomorphism $\kappa(t) \to H^0(X_t,\mathcal{O}_{X_t})$ is an isomorphism. Thus, the composition, $$ \kappa(t) \xrightarrow{f^\#_t} f_*\mathcal{O}_{X_T}\otimes_{\mathcal{O}_T} \kappa(t) \to H^0(X_t,\mathcal{O}_{X_t}), $$ is an isomorphism. By Cohomology and Base Change, cf. \cite[Theorem III.12.11]{H}, the following natural homomorphism is an isomorphism, $$ f^\#:\mathcal{O}_T \to f_*\mathcal{O}_{X_T}. $$ Thus, the Stein factorization is an isomorphism, hence it is formally \'{e}tale. \end{proof} \begin{prop}\cite[Proposition 2.9]{Zhu} \label{prop-Zhu} \marpar{prop-Zhu} For every weakly acyclic morphism, and even for morphisms that become weakly acyclic after base change by an \'{e}tale cover of $T$, the relative Picard functor of $X_T/T$ is representable, and it is \'{e}tale locally constant with finite free stalks. \end{prop} \begin{proof} This is a review of the proof in \cite{Zhu}.
By limit theorems, it suffices to prove the result when $T$ is a Noetherian scheme (even finitely presented over $\text{Spec } \mathbb{Z}$) and $f$ is weakly acyclic. By Proposition \ref{lem-Zhu}, the finite part $T'$ of the Stein factorization of $f$ is finite and \'{e}tale over $T$. The relative Picard functor of $X_T/T$ is the restriction of scalars relative to $T'/T$ of the relative Picard functor of $X_T/T'$. Thus, it suffices to prove the result for $X_T/T'$. Thus, without loss of generality, assume that the geometric fibers of $f$ are connected. By Hypothesis (ii), the geometric fibers are integral. \medskip\noindent Because $f$ is projective and flat with integral geometric fibers, the relative Picard functor is representable and equals a union of open and closed subschemes that are quasi-projective over $T$, cf. \cite[Theorem 3.1, no. 232-06]{FGA}. By Hypothesis (iii), the relative Picard functor is formally unramified and formally smooth over $T$. Thus, it is formally \'{e}tale over $T$. Since the Picard functor is representable and locally finitely presented over $T$, it is \'{e}tale over $T$. \medskip\noindent Since the open and closed quasi-projective schemes are \'{e}tale over $T$, they are finite over $T$ if and only if they are proper over $T$. To prove properness, it suffices to verify the valuative criterion of properness. Thus, assume that $T$ is $\text{Spec } \mathcal{O}_T$ for a DVR $\mathcal{O}_T$. Let $\mathcal{L}_\eta$ be an invertible sheaf on the generic fiber $X_\eta$ of $f$. Denote by $X_{T,\text{sm}}\subset X_T$ the smooth locus of $f$. Since $X_\eta$ is projective, $\mathcal{L}_\eta$ comes from a Cartier divisor $D$ on $X_\eta$. Since $X_{T,\text{sm}}$ is regular, the Cartier divisor $D$ on $X_\eta\cap X_{T,\text{sm}}$ extends to a Cartier divisor on $X_{T,\text{sm}}$.
Thus, the invertible sheaf extends to an invertible sheaf on the open subscheme $U=X_\eta \cup X_{T,\text{sm}}$. Since $X_T$ is normal, the pushforward of this invertible sheaf from $U$ is a torsion-free coherent sheaf $\mathcal{L}$ that is $\mathsf{S}_2$. Denote by $V\subset X_T$ the maximal open subscheme on which $\mathcal{L}$ has rank $\leq 1$. \medskip\noindent For a generic point $x\in X_T$ of the complement of $V$, the stalk of $\mathcal{L}$ at $x$ has rank $\geq 2$. Since $x$ is in the closed fiber and in the complement of the smooth locus, $x$ has codimension $\geq 3$ in the closed fiber, hence codimension $\geq 4$ in $X_T$. Since $X_T$ is a local complete intersection scheme, by \cite[Th\'{e}or\`{e}me XI.3.13]{SGA2}, the local ring $\mathcal{O}_{X_T,x}$ is parafactorial. Thus, the stalk of $\mathcal{L}$ at $x$ is locally free of rank $1$. This contradiction proves that $V$ is all of $X_T$, i.e., $\mathcal{L}$ is an invertible sheaf on $X_T$. Therefore, by the valuative criterion of properness, for every Noetherian scheme $T$ and for every fppf projective morphism $f:X_T\to T$ satisfying Hypotheses (i)-(iv), the relative Picard functor is representable and equals a union of open and closed subschemes, each of which is finite, \'{e}tale over $T$. \medskip\noindent For every point $t$ of $T$, the geometric Picard group of $X_t$ is finitely generated by the theorem of the base, \cite[Th\'{e}or\`{e}me XIII.5.1]{SGA6}. By Hypothesis (iv), the geometric Picard group is torsion-free. Thus, it is finite free of some rank $r\geq 1$. Define $T_{r}$, resp. $T_{\geq r}$, to be the subset of $T$ over which the geometric Picard group is finite free of rank $r$, resp. of rank $\geq r$. \medskip\noindent Let $r_0$ be an integer, and let $t\in T_{\geq r_0}$ be a point of rank $r\geq r_0$.
The \'{e}tale stalk at $t$ of the Picard functor is generated by the images of finitely many of the finite, \'{e}tale, open and closed subschemes of the relative Picard scheme. The image in $T$ of each of these is an open and closed subscheme of $T$ that contains $t$. The intersection of these finitely many open and closed subschemes of $T$ is an open and closed subscheme of $T$ that contains $t$. For every geometric point of this open and closed subscheme, the rank is $\geq r$. In particular, the rank is $\geq r_0$. Thus, each subset $T_{\geq r_0}\subset T$ is open, and it is a union of open subsets that are both open and closed. \medskip\noindent Since $T$ is Noetherian, there are only finitely many irreducible components. Thus, there are also finitely many connected components. The subset $T_r$ contains the unique connected component of $T$ that contains $t$. Thus, also every subset $T_r$ is an open subset of $T$. The restriction of the relative Picard functor over $T_r$ is \'{e}tale locally constant with finite free stalks of rank $r$. \end{proof} \medskip\noindent \textbf{Acknowledgments.} This is part of a project begun with Chenyang Xu for extending theorems about rational simple connectedness; I am grateful to Xu for all his help. I am also grateful to Yi Zhu for many discussions about his work. I am grateful to Aise Johan de Jong for help with references. I was supported by NSF Grants DMS-0846972 and DMS-1405709, as well as a Simons Foundation Fellowship. \section{Examples and Composition} \label{sec-compos} \marpar{sec-compos} \begin{ex} \label{ex-0} \marpar{ex-0} Let $Q\subset \mathbb{P}^3$ be a smooth quadric surface.
For $T$ equal to $\mathbb{A}^1$, for $X_T$ the reduced closed subscheme of $T\times \mathbb{P}^3$ whose intersection with $\mathbb{G}_m\times \mathbb{P}^3$ equals $\mathbb{G}_m\times Q$ and whose fiber over $0\in T$ equals all of $\mathbb{P}^3$, then $f$ satisfies Hypotheses (ii), (iii), and (iv), yet the morphism is not flat. The relative Picard functor is representable and \'{e}tale over $T$, but it fails the valuative criterion of properness. \end{ex} \begin{ex} \label{ex-1} \marpar{ex-1} For every integer $r\geq 2$, for $T$ equal to $\mathbb{A}^1$, and for $X_T$ a specialization of the image of the Segre embedding, $\sigma:\mathbb{P}^r\times \mathbb{P}^r\to \mathbb{P}^{r^2+2r},$ to a cone over a smooth hyperplane section of $\sigma(\mathbb{P}^r\times \mathbb{P}^r)$, Hypotheses (i), (iii), and (iv) are satisfied, and the fibers are smooth in codimension $\leq 2$, yet the fibers are not local complete intersections, and the relative Picard scheme is not \'{e}tale locally constant. More precisely, the relative Picard scheme is separated and \'{e}tale over $T$, but it fails the valuative criterion of properness. \end{ex} \begin{ex} \label{ex-2} \marpar{ex-2} For $T$ equal to $\mathbb{A}^1$ and for $X_T$ a specialization in $\mathbb{P}^3$ of a smooth quadric hypersurface to a quadric hypersurface with an ordinary double point, Hypotheses (i), (iii), and (iv) are satisfied, and the fibers are local complete intersections, yet the special fiber is singular at a point of codimension $2$. The relative Picard scheme is not proper over $T$.
\end{ex} \begin{ex} \label{ex-3} \marpar{ex-3} For a family of supersingular Enriques surfaces over a smooth scheme $T$ in characteristic $2$, Hypotheses (i), (ii), and (iv) are satisfied, yet Hypothesis (iii) fails. The relative Picard functor is representable and \'{e}tale locally constant over $T$. Yet the relative Picard functor is not smooth over $T$: the connected component of the identity is $\alpha_2$. \end{ex} \begin{ex} \label{ex-4} \marpar{ex-4} For a family of Enriques surfaces over a smooth scheme $T$ in characteristic $0$, Hypotheses (i), (ii), and (iii) are satisfied, yet Hypothesis (iv) fails. The relative Picard functor is representable and \'{e}tale locally constant over $T$. Yet the stalks have $\mathbb{Z}/2\mathbb{Z}$-torsion. \end{ex} \begin{prop} \label{prop-compos} \marpar{prop-compos} Let $g:Y\to X$ and $f:X\to T$ be projective, fppf morphisms whose geometric fibers are integral. The composition $f\circ g$ is a projective, fppf morphism whose geometric fibers are integral. If both $g:Y\to X$ and $f:X\to T$ are $r$-acyclic, resp. acyclic, weakly acyclic, then so is the composition $f\circ g:Y\to T$. \end{prop} \begin{proof} By limit theorems, it suffices to prove the case when $T$ is Noetherian. A composition of projective, fppf morphisms is a projective, fppf morphism. For each geometric point $t$ of $T$, the fiber $X_t$ of $f$ is integral. Denote by $\eta$ the generic point of $X_t$. The morphism $g_t:Y_t\to X_t$ is projective and flat. Thus, for every nonempty open affine $U\subset Y_t$, $U$ intersects the generic fiber $Y_{t,\eta}=g_t^{-1}(\eta)$. Since $\mathcal{O}_{Y_t}(U)$ is $\mathcal{O}_{X_t}$-flat, the induced morphism $\mathcal{O}_{Y_t}(U)\to \mathcal{O}_{Y_{t,\eta}}(U\cap Y_{t,\eta})$ is injective. Since the geometric fibers of $g$ are integral, the fiber $Y_{t,\eta}$ is integral.
Since $\mathcal{O}_{Y_t}(U)$ is a subring of an integral domain, also $\mathcal{O}_{Y_t}(U)$ is an integral domain. Therefore $Y_t$ is integral. So the geometric fibers of $f\circ g$ are integral. \medskip\noindent A composition of flat, LCI morphisms is a flat, LCI morphism, cf. the proof of \cite[Proposition 6.6(c)]{F} (Fulton works with global embeddings in smooth schemes, but the diagram in the proof also proves the result in the local case). With notation as in the previous paragraph, if $\text{Sing}(X_t)$ has codimension $\geq c$ in $X_t$, then also $g_t^{-1}(\text{Sing}(X_t))$ has codimension $\geq c$ in $Y_t$, since $g_t$ is flat. If the singular locus of the morphism $g_t$ has codimension $\geq c$ in every fiber of $g_t$, then it has codimension $\geq c$ in $Y_t$. Then the union of the singular locus of $g_t$ and $g_t^{-1}(\text{Sing}(X_t))$ has codimension $\geq c$ in $Y_t$. On the open complement of this union, $f\circ g$ is a composition of smooth morphisms, hence it is smooth. Thus, the singular locus of $Y_t$ is contained in this union, so that the singular locus of $Y_t$ has codimension $\geq c$ in $Y_t$. Finally, if the geometric fibers of $g_t$ are algebraically simply connected, and if $X_t$ is algebraically simply connected, then also $Y_t$ is algebraically simply connected, cf. \cite[Corollaire IX.6.11]{SGA1}. \medskip\noindent Thus, to prove that $f\circ g$ is acyclic, resp. weakly acyclic, it suffices to prove that it is $2$-acyclic for the structure sheaf. For projective, fppf morphisms $f$ and $g$ that are $r$-acyclic, consider the Leray spectral sequence, $$ E^{p,q}_2 = H^p(X_t,R^q(g_t)_* \mathcal{O}_{Y_t}) \Rightarrow H^{p+q}(Y_t,\mathcal{O}_{Y_t}). $$ Since $g$ is $r$-acyclic, and since $g_t$ is a base change of $g$, also $g_t$ is $r$-acyclic.
Thus, $(g_t)_*\mathcal{O}_{Y_t}$ equals $\mathcal{O}_{X_t}$, and $R^q(g_t)_*\mathcal{O}_{Y_t}$ is the zero sheaf for $0<q\leq r$. Thus, for every integer $s$ with $0\leq s \leq r$, the only nonzero terms in the spectral sequence with $p+q=s$ are when $q$ equals $0$ and $p$ equals $s$, i.e., $E^{s,0}_2=H^s(X_t,\mathcal{O}_{X_t})$. Since $f$ is $r$-acyclic, this equals $0$ unless $s=0$, in which case it equals $H^0(X_t,\mathcal{O}_{X_t}) = \kappa(t)$. Thus, $H^s(Y_t,\mathcal{O}_{Y_t})$ equals $0$ for $0<s\leq r$, and the natural map $\kappa(t) \to H^0(Y_t,\mathcal{O}_{Y_t})$ is an isomorphism. So $f\circ g$ is also $r$-acyclic for the structure sheaf. \end{proof} \section{Hyperplane Theorems} \label{sec-hyperplane} \marpar{sec-hyperplane} \medskip\noindent The following lemma, in characteristic zero, follows from the Kawamata-Viehweg Vanishing Theorem. \begin{lem} \label{lem-KV} \marpar{lem-KV} Let $K$ be a field. Let $f:X\to T$ be a proper, fppf morphism of finite type $K$-schemes of relative dimension $n$. Let $Y\subset X$ be an effective Cartier divisor that is $T$-flat and $f$-ample. If $X$ is smooth over $K$, then $T$ is smooth over $K$, and every fiber of $f$ is LCI. If, moreover, $\text{char}(K)$ equals $0$, and if $n\geq r+2$, then $H^q(X_t,\mathcal{O}_{X_t}(-\underline{Y}_t))$ is zero for every $q=0,\dots,r+1$. Thus, if $f$ is $r$-acyclic for the structure sheaf, then also $f|_Y:Y\to T$ is $r$-acyclic for the structure sheaf. \end{lem} \begin{proof} Since $X$ is $K$-smooth and since $f$ is flat, also $T$ is $K$-smooth, \cite[Proposition 17.7.7]{EGA4}. For a flat morphism from an LCI scheme to a regular scheme, every fiber is LCI. In particular, every fiber is Gorenstein.
\medskip\noindent The relative dualizing sheaf of $f$ is $$ \omega_{X/T} \cong \omega_{X/K}\otimes_{\mathcal{O}_X} f^* \omega_{T/K}^\vee. $$ The dualizing sheaf of each fiber is the restriction of $\omega_{X/T}$. \medskip\noindent Now assume that $\text{char}(K)$ equals $0$, and assume that $X$ is smooth over $K$. By the Kawamata-Viehweg Vanishing Theorem, \cite[Theorem 1.2.3, p. 306]{KaMaMa}, for every $q>0$, $R^qf_* \omega_{X/T}(\underline{Y})$ is zero. Thus, for every geometric point $t$ of $T$, for every $q>0$, $H^q(X_t,\omega_{X_t}(\underline{Y}_t))$ is zero by Cohomology and Base Change, \cite[Theorem III.12.11]{H}. By Serre duality, also $H^q(X_t, \mathcal{O}_{X_t}(-\underline{Y}_t))$ is zero for every $q < n$. \medskip\noindent For the short exact sequence $$ 0\to \mathcal{O}_{X_t}(-\underline{Y}_t) \to \mathcal{O}_{X_t} \to \mathcal{O}_{Y_t} \to 0 $$ the long exact sequence of cohomology gives $$ H^q(X_t,\mathcal{O}_{X_t}(-\underline{Y}_t)) \to H^q(X_t,\mathcal{O}_{X_t}) \to H^q(Y_t,\mathcal{O}_{Y_t}) \to H^{q+1}(X_t,\mathcal{O}_{X_t}(-\underline{Y}_t)). $$ Thus, for every $q\leq n-2$, the restriction map is an isomorphism, $$ H^q(X_t,\mathcal{O}_{X_t}) \xrightarrow{\cong} H^q(Y_t,\mathcal{O}_{Y_t}). $$ Since $r\leq n-2$, also $H^q(Y_t,\mathcal{O}_{Y_t})$ is zero for $q=1,\dots,r$. Also, the composition $$ \mathcal{O}_T\otimes_{\mathcal{O}_T} \kappa(t) \to f_*\mathcal{O}_{X}\otimes_{\mathcal{O}_T} \kappa(t) \to f_*\mathcal{O}_Y\otimes_{\mathcal{O}_T}\kappa(t) \to H^0(Y_t,\mathcal{O}_{Y_t}) $$ is an isomorphism.
Thus, once again using Cohomology and Base Change, for arbitrary $T'$, also $R^qf'_*\mathcal{O}_{Y'}$ is zero for $q=1,\dots,r$, and the natural map $\mathcal{O}_{T'} \to f'_*\mathcal{O}_{Y'}$ is an isomorphism. \end{proof} \begin{prop}\cite{SGA2} \label{prop-iterate1} \marpar{prop-iterate1} Let $f:X\to T$ be a proper, fppf morphism of Noetherian schemes of pure relative dimension $n$. Let $Y\subset X$ be an effective Cartier divisor that is $T$-flat and $f$-ample. For every geometric point $t$ of $T$, denote $X_t$, resp. $Y_t$, the corresponding fiber of $X$, resp. $Y$. \begin{enumerate} \item[(i)] If $n\geq 2$, if $X_t$ is integral and satisfies Serre's condition $\mathsf{S}_3$, and if $\text{codim}_{Y_t}(\text{Sing}(Y_t)) \geq 2$, then $Y_t$ is integral and normal. \item[(ii)] If $n\geq 3$ and if $X_t$ is LCI with $\text{codim}_{X_t}(\text{Sing}(X_t)) \geq 3$, then $\pi_1^{\text{alg}}(Y_t)\to \pi_1^{\text{alg}}(X_t)$ is an isomorphism. \item[(iii)] If $n\geq 4$, if $T$ is a finite type scheme over a characteristic $0$ field $K$, if $X$ is smooth over $K$, and if $\text{codim}_{X_t}(\text{Sing}(X_t)) \geq 4$, then $\text{Pic}(X_t)\to \text{Pic}(Y_t)$ is an isomorphism. \end{enumerate} \end{prop} \begin{proof} \textbf{(i)} Since $X_t$ satisfies $\mathsf{S}_3$, also $Y_t$ satisfies $\mathsf{S}_2$. Since $Y_t$ is regular at every codimension $0$ and codimension $1$ point, $Y_t$ is normal by Serre's Criterion \cite[Th\'{e}or\`{e}me 5.8.6]{EGA4}. Finally, by \cite[Corollaire XII.3.5]{SGA2}, $Y_t$ is connected. Thus $Y_t$ is integral. \medskip\noindent \textbf{(ii)} By the Purity Theorem, \cite[Th\'{e}or\`{e}me X.3.4(ii)]{SGA2}, $X_t$ is pure and of depth $\geq 3$ at every closed point. By the Lefschetz Hyperplane Theorem for \'{e}tale fundamental groups, \cite[Corollaire XII.3.5]{SGA2}, the natural homomorphism $\pi_1^{\text{alg}}(Y_t) \to \pi_1^{\text{alg}}(X_t)$ is an isomorphism.
\medskip\noindent \textbf{(iii)} By Lemma \ref{lem-KV}, $H^q(X_t,\mathcal{O}_{X_t}(-d\underline{Y}_t))$ is zero for all $d>0$ and $q=1,2$. By Grothendieck's proof of Samuel's Conjecture, \cite[Th\'{e}or\`{e}me XI.3.13(ii), Corollaire XI.3.14]{SGA2}, the scheme $X_t$ is parafactorial, and even factorial. By the Lefschetz Hyperplane Theorem for Picard groups, \cite[Corollaire XII.3.6]{SGA2}, the restriction on Picard groups is an isomorphism. \end{proof} \begin{cor} \label{cor-iterate1} \marpar{cor-iterate1} Let $K$ be a characteristic $0$ field, and let $f:X\to T$ be a proper, fppf morphism of $K$-schemes of pure relative dimension $n$. Let $Y\subset X$ be an effective Cartier divisor that is $T$-flat and $f$-ample. If $n\geq 4$, if $X$ is smooth over $K$, if $\text{codim}_{Y_t}(\text{Sing}(Y_t)) \geq 4$ for every geometric point $t$ of $T$, and if $f$ is acyclic, then also $f|_Y:Y\to T$ is acyclic. Moreover, the restriction morphism of \'{e}tale group schemes, $\text{Pic}_{X/T}\to \text{Pic}_{Y/T}$, is an isomorphism. \end{cor} \begin{proof} By Proposition \ref{lem-Zhu}, the finite part of the Stein factorization of $f$ is finite and \'{e}tale over $T$. Up to replacing $T$ by this finite, \'{e}tale cover, assume that $f$ has integral geometric fibers. \medskip\noindent By hypothesis, $f|_Y:Y\to T$ is flat. By Proposition \ref{prop-iterate1}(i), the geometric fibers are integral. By Lemma \ref{lem-KV} and by Proposition \ref{prop-iterate1}(ii), Definition \ref{defn-Zhu}(ii) and (iv) hold. By Lemma \ref{lem-KV}, Definition \ref{defn-Zhu}(iii) holds. Finally, by Proposition \ref{prop-iterate1}(iii), the restriction morphism of Picard schemes is an isomorphism on geometric fibers. Since this is a morphism of \'{e}tale $T$-schemes, the restriction morphism is \'{e}tale. Since it is also bijective on geometric points, it is an isomorphism.
\end{proof} \section{Families of Hypersurfaces} \label{sec-Bertini} \marpar{sec-Bertini} \medskip\noindent Let $X\to T$, $\mathcal{C}\to T$, and $\mathcal{C}\to G$ be fppf morphisms. \begin{lem} \label{lem-Bertini} \marpar{lem-Bertini} Assume that the schemes above are finite type over a field $K$, and assume that the morphisms are $K$-morphisms. If $X$ is smooth over $K$, and if $\mathcal{C}\to T$ is smooth, then also $X\times_T \mathcal{C}$ is smooth over $K$. If $\text{char}(K)$ equals $0$, then there exists a dense open subset $W\subset G$ such that the morphism $X\times_T \mathcal{C} \times_G W\to W$ is smooth. \end{lem} \begin{proof} Since $\mathcal{C}\to T$ is smooth, also $X\times_T \mathcal{C} \to X$ is smooth. Since $X$ is smooth over $K$, also $X\times_T \mathcal{C}$ is smooth over $K$. If $\text{char}(K)$ equals $0$, then by the Generic Smoothness Theorem, cf. \cite[Corollary III.10.7]{H}, there exists a dense open subset $W\subset G$ such that $X\times_T \mathcal{C} \times_G W\to W$ is smooth. \end{proof} \begin{notat} \label{notat-T} \marpar{notat-T} Let $T$ be a Noetherian scheme of pure dimension $m$. Let $X_T\subset \mathbb{P}^r_T$ be a closed subscheme such that $p:X_T \to T$ is flat of pure relative dimension $n\geq 1$. Denote by $X_T^{\text{sm}}\subset X_T$ the open subscheme on which $p$ is smooth. \end{notat} \medskip\noindent By \cite[Expos\'{e} XV, Corollaire 1.3.4]{SGA7II}, there exists an open subscheme $X_T^{\text{odp}}\subset X_T$ consisting of points of geometric fibers where either $p$ is smooth or else has an ordinary double point.
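\medskip\noindent The parameter spaces of hypersurfaces used below can be made explicit (a classical count, recorded here for the reader's convenience): degree $d$ forms in the $r+1$ homogeneous coordinates of $\mathbb{P}^r$ form a vector space of dimension $\binom{r+d}{d}$, so degree $d$ hypersurfaces $H\subset \mathbb{P}^r$ are parameterized by a projective space of dimension \begin{equation*} N_d=\binom{r+d}{d}-1. \end{equation*} For instance, for $r=3$ and $d=2$ this gives $N_2=\binom{5}{2}-1=9$, the familiar $\mathbb{P}^9$ of quadric surfaces in $\mathbb{P}^3$.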
\begin{defn} \label{defn-Bertini} \marpar{defn-Bertini} The \emph{smooth locus of $p$}, $T^{\text{sm}}$, is the open complement in $T$ of $p(X\setminus X_T^{\text{sm}})$. Similarly, the \emph{ordinary locus of $p$}, $T^{\text{odp}}\subset T$, is the open complement of $p(X\setminus X_T^{\text{odp}})$, i.e., the maximal open subscheme of $T$ over which $p$ has geometrically reduced fibers that are either smooth or else admit (at worst) finitely many ordinary double points. Over the open $T^{\text{odp}}$, the morphism $X\setminus X_T^{\text{sm}} \to T$ is finite. The \emph{Lefschetz locus}, $T^{\text{Lef}}$, is the maximal open subscheme of $T^{\text{odp}}$ over which this finite morphism is a closed immersion. Thus, over $T^{\text{Lef}}$, every geometric fiber is either smooth or else it is reduced with a single ordinary double point. \end{defn} \medskip\noindent Denote by $P_r(t)\in \mathbb{Q}[t]$ the numerical polynomial such that $P_r(s)$ equals $\binom{r+s}{r}$ for every integer $s\geq -r$. For each integer $d\geq 1$, the projective space $\mathbb{P}^{N_d}_T = \text{Hilb}^{P_{r}(t)-P_r(t-d)}_{\mathbb{P}^r_T/T}$ parameterizes degree $d$ hypersurfaces $H\subset \mathbb{P}^r$. \begin{defn} \label{defn-dual} \marpar{defn-dual} The \emph{degenerate locus} or \emph{dual locus}, $\check{X}_T$, is the closed subset of $\mathbb{P}^{N_d}_T$ whose geometric points relative to $\text{Spec } \kappa \to T$, parameterize hypersurfaces $H\subset \mathbb{P}^r_\kappa$ for which $H\cap X_\kappa$ is \textbf{not} a smooth $\kappa$-scheme of dimension $n-1$, i.e., either it has an irreducible component of dimension $\geq n$ or else it is singular.
The \emph{badly degenerate locus}, $F_1\subset \check{X}_T$, is the closed subset parameterizing those $H$ for which $H\cap X_\kappa$ either (i) has an irreducible component of dimension $\geq n$, (ii) is nonreduced, or (iii) is reduced of dimension $n-1$, yet has worse than a single ordinary double point singularity. \end{defn} \medskip\noindent For the universal family of hypersurface sections of $X_T$ over $\mathbb{P}^{N_d}_T$, say $Y\to \mathbb{P}^{N_d}_T$, the degenerate locus, resp. the badly degenerate locus, is the union of the non-flat locus with the closed complement of $(\mathbb{P}^{N_d}_T)^{\text{sm}}$, resp. $(\mathbb{P}^{N_d}_T)^{\text{Lef}}$, as defined in Definition \ref{defn-Bertini}. Thus, the degenerate locus and the badly degenerate locus are closed subsets. \begin{cor} \label{cor-iterate2} \marpar{cor-iterate2} Let $K$ be a field. With notations as above, assume that $T$ is a finite type $K$-scheme, and assume that $X_T$ is smooth over $K$. Then $X_{\mathbb{P}^{N_d}} := X_T\times_T \mathbb{P}^{N_d}_T$ is smooth over $K$. Also the universal hypersurface, $Y\subset X_{\mathbb{P}^{N_d}}$ as above, is smooth over $K$. If $\text{char}(K)$ equals $0$, if $X_T/T$ is acyclic, and if $n\geq 4$, resp. if $n\geq 5$, then the restriction of $Y$ over $\mathbb{P}^{N_d}_T\setminus \check{X}_T$, resp. over $\mathbb{P}^{N_d}_T\setminus F_1$, is acyclic. Also over this (respective) open subset, the natural morphism from the pullback of $\text{Pic}_{X_T/T}$ to the relative Picard scheme of $Y$ is an isomorphism. \end{cor} \begin{proof} By Lemma \ref{lem-Bertini}, $X_{\mathbb{P}^{N_d}}$ is smooth over $K$. The same method proves that $Y$ is smooth over $K$: the projection $Y\to X_{\mathbb{P}^{N_d}}$ is a projective space bundle.
If $n\geq 4$, then the hypotheses of Corollary \ref{cor-iterate1} are satisfied for $Y\to \mathbb{P}^{N_d}_T$ over $\mathbb{P}^{N_d}_T\setminus \check{X}_T$. If $n\geq 5$, then over $\mathbb{P}^{N_d}_T\setminus F_1$, the fibers $Y_t$ have singular locus of codimension $n-1\geq 4$, so the hypotheses are satisfied over $\mathbb{P}^{N_d}_T\setminus F_1$. \end{proof} \begin{prop}\cite[Expos\'{e} XVII, Th\'{e}or\`{e}me 2.5]{SGA7II} \label{prop-Lef} \marpar{prop-Lef} Assume that $T^{\text{Lef}}$ equals all of $T$, and assume that $T^{\text{sm}}$ is a dense open subset of $T$. Then for every $d\geq 2$, every irreducible component of $\check{X}_T$, resp. of $F_1$, has codimension $\geq 1$, resp. $\geq 2$, in $\mathbb{P}^{N_d}_T$. In characteristic $0$ this also holds with $d=1$. \end{prop} \begin{proof} The statement over $T^{\text{sm}}$ follows directly from loc. cit. By hypothesis, every component of the singular locus, $\Delta:=T\setminus T^{\text{sm}}$, has codimension $\geq 1$ in $T$. The inverse image of $\Delta$ in $\mathbb{P}^{N_d}_T$ has codimension $\geq 1$. For each geometric point $\text{Spec } \kappa \to \Delta$, since this is a point of $T^{\text{Lef}}$, the corresponding fiber $X_\kappa$ has a single ordinary double point $x$. Inside $\mathbb{P}^{N_d}_\kappa$, the set parameterizing $H$ with $x\in H$ is a proper closed subset, hence has codimension $\geq 1$. In total, the locus in $\Delta \times_T \mathbb{P}^{N_d}_T$ parameterizing $H$ containing a singular point of $p$ is a subset of codimension $\geq 2$. Thus the proposition over all of $T$ is reduced to the proposition over $T^{\text{sm}}$. \end{proof} \section{Families of Complete Intersections} \label{sec-CI} \marpar{sec-CI} \medskip\noindent Let $X_T \to T$ be an fppf morphism of pure relative dimension $n$.
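\medskip\noindent For orientation (a standard model case, not needed for the formal development): when $T=\operatorname{Spec} K$ with $\text{char}(K)=0$ and $X_T=\mathbb{P}^n_K$, intersecting $X_T$ with $b$ general hypersurfaces of degrees $d_1,\dots,d_b$ produces a smooth complete intersection $X_b\subset \mathbb{P}^n_K$ with
$$
\dim X_b = n-b, \qquad \deg X_b = d_1 d_2\cdots d_b,
$$
and the constructions below organize all such intersections, together with their degenerations, into a family over a projective parameter space.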
\begin{notat} \label{notat-CI} \marpar{notat-CI} Let $b$ be an integer with $1\leq b\leq n$, let $(\iota_j:X_T \hookrightarrow \mathbb{P}^{r_j}_T)_{1\leq j\leq b}$ be an ordered $b$-tuple of closed immersions with associated very ample invertible sheaves $\mathcal{A}_j = \iota_j^*\mathcal{O}_{\mathbb{P}^{r_j}_T}(1)$. Let $\underline{d}=(d_1,\dots,d_b)$ be an ordered $b$-tuple of integers $d_j\geq 1$. For each $j=1,\dots,b$, denote by $V_j(d_j)$ the free $\mathcal{O}_T$-module $H^0(\mathbb{P}^{r_j}_T,\mathcal{O}_{\mathbb{P}^{r_j}_T}(d_j))$. Denote by $V(\underline{d})$ the direct sum $V_1(d_1)\oplus \dots \oplus V_b(d_b)$ as a free $\mathcal{O}_T$-module. Denote by $\mathbb{P}_T V(\underline{d})$ the projective space over $T$ on which there is a universal ordered $b$-tuple $(\phi_1,\dots,\phi_b)$ of sections of the invertible sheaves $\mathcal{O}_{\mathbb{P}^{r_j}}(d_j)$.
Precisely, for the product $$ P=\mathbb{P}_T V(\underline{d}) \times_T (\mathbb{P}_T^{r_1}\times_T \dots \times_T \mathbb{P}_T^{r_b}) $$ with its projections $$ \text{pr}_0:P\to \mathbb{P}_TV(\underline{d}) \text{ and } \text{pr}_j:P\to \mathbb{P}_T^{r_j}, $$ the sequence $(\phi_1,\dots,\phi_b)$ is a universal homomorphism of coherent sheaves $$ \text{pr}_1^*\mathcal{O}_{\mathbb{P}^{r_1}_T}(-d_1)\oplus \dots \oplus \text{pr}_b^*\mathcal{O}_{\mathbb{P}^{r_b}_T}(-d_b) \to \text{pr}_0^*\mathcal{O}_{\mathbb{P}_T V(\underline{d})}(1), $$ or equivalently, a universal homomorphism of coherent sheaves, $$ (\phi_1,\dots,\phi_b):\text{pr}_0^*\mathcal{O}_{\mathbb{P}_T V(\underline{d})}(-1)\otimes\left(\text{pr}_1^*\mathcal{O}_{\mathbb{P}^{r_1}_T}(-d_1)\oplus \dots \oplus \text{pr}_b^*\mathcal{O}_{\mathbb{P}^{r_b}_T}(-d_b)\right) \to \mathcal{O}_P. $$ \end{notat} \medskip\noindent For the diagonal closed immersion $\iota = (\iota_1,\dots,\iota_b)$ of $X_T$ into $\mathbb{P}_T^{r_1}\times_T \dots \times_T \mathbb{P}_T^{r_b}$, for every $j=1,\dots,b$, there is an associated homomorphism of coherent sheaves on $\mathbb{P}_T V(\underline{d})\times_T X_T$, $$ \iota^*\phi_j:\text{pr}_0^*\mathcal{O}_{\mathbb{P}_T V(\underline{d})}(-1)\otimes_{\mathcal{O}} \text{pr}_1^*\iota_j^*\mathcal{O}_{\mathbb{P}^{r_j}_T}(-d_j) \to \mathcal{O}_{\mathbb{P}_T V(\underline{d}) \times_T X_T}.
$$ \begin{defn} \label{defn-CI} \marpar{defn-CI} Define $Y_j$ to be the Cartier divisor on $\mathbb{P}_T V(\underline{d})\times_T X_T$ whose ideal sheaf is the image of $\iota^*\phi_j$. For every $j=0,\dots,b$, define the closed subscheme $X_j \subset \mathbb{P}_T V(\underline{d})\times_T X_T$ recursively by $$ X_0 = \mathbb{P}_T V(\underline{d}) \times_T X_T \text{ and } X_j = Y_j \cap X_{j-1} $$ for every $j=1,\dots,b$. Define two sequences of open subsets $$ \mathbb{P}_T V(\underline{d})^{\text{sm}}_b \subset \mathbb{P}_T V(\underline{d})^{\text{sm}}_{b-1}\subset \dots \subset \mathbb{P}_T V(\underline{d})^{\text{sm}}_2 \subset \mathbb{P}_T V(\underline{d})^{\text{sm}}_1 \subset \mathbb{P}_T V(\underline{d})^{\text{sm}}_0 = \mathbb{P}_T V(\underline{d}), $$ respectively, $$ \mathbb{P}_T V(\underline{d})^{\text{Lef}}_b \subset \mathbb{P}_T V(\underline{d})^{\text{Lef}}_{b-1}\subset \dots \subset \mathbb{P}_T V(\underline{d})^{\text{Lef}}_2 \subset \mathbb{P}_T V(\underline{d})^{\text{Lef}}_1 \subset \mathbb{P}_T V(\underline{d})^{\text{Lef}}_0 = \mathbb{P}_T V(\underline{d}), $$ where for $i=1,\dots,b$, $\mathbb{P}_T V(\underline{d})^{\text{Lef}}_i$, resp.
$\mathbb{P}_T V(\underline{d})^{\text{sm}}_i$, is the maximal open subset such that for every $j=0,\dots,i$, \begin{enumerate} \item[(i)] $X_j\times_{\mathbb{P}_T V(\underline{d})} \mathbb{P}_T V(\underline{d})_i \to \mathbb{P}_T V(\underline{d})_i$ is flat of relative dimension $n-j$, \item[(ii)] the geometric fibers are reduced, and \item[(iii)] every geometric fiber has, at worst, a single ordinary double point and no other singularities, resp. every geometric fiber is smooth. \end{enumerate} \end{defn} \medskip\noindent By construction $\mathbb{P}_T V(\underline{d})^{\text{sm}}_i$ is an open subset of $\mathbb{P}_T V(\underline{d})^{\text{Lef}}_i$. \begin{notat} \label{notat-CI2} \marpar{notat-CI2} For each $i\geq 1$, denote by $\check{X}_{i-1}$ the relative complement of $\mathbb{P}_T V(\underline{d})^{\text{sm}}_{i}$ in $\mathbb{P}_T V(\underline{d})^{\text{Lef}}_{i-1}$. Denote by $F_{i-1}$ the relative complement of $\mathbb{P}_T V(\underline{d})^{\text{Lef}}_{i}$ in $\mathbb{P}_T V(\underline{d})^{\text{Lef}}_{i-1}$. \end{notat} \medskip\noindent Note that on $\mathbb{P}_T V(\underline{d})^{\text{Lef}}_{i-1}$ there is a well-defined morphism $\Phi_{i-1}:\mathbb{P}_T V(\underline{d})^{\text{Lef}}_{i-1} \to \mathbb{P}_T V(d_1,\dots,d_{i-1})$ that is flat. In fact the image is the corresponding open $$ \mathbb{P}_T V(d_1,\dots,d_{i-1})^{\text{Lef}}_{i-1}, $$ and the morphism $\Phi_{i-1}$ to its image is Zariski locally on the image isomorphic to the vector bundle $V(d_i,\dots,d_b)\times_T \mathbb{P}_T V(d_1,\dots,d_{i-1})^{\text{Lef}}_{i-1}$.
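\medskip\noindent For orientation, in the case $b=1$ this construction recovers the setting of Section \ref{sec-Bertini}: $V_1(d_1)=H^0(\mathbb{P}^{r_1}_T,\mathcal{O}(d_1))$ is free of rank $\binom{r_1+d_1}{r_1}$, so that
$$
\mathbb{P}_T V(d_1) \cong \mathbb{P}^{N_{d_1}}_T, \qquad N_{d_1}=\binom{r_1+d_1}{r_1}-1,
$$
and $X_1\subset \mathbb{P}_T V(d_1)\times_T X_T$ is the universal hypersurface section of $X_T$, with $\check{X}_0$ and $F_0$ corresponding to the degenerate and badly degenerate loci of Definition \ref{defn-dual}.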
\begin{cor} \label{cor-Lef} \marpar{cor-Lef} If the characteristic is not $0$, assume that every $d_i\geq 2$. With the same hypotheses as in Proposition \ref{prop-Lef}, for $i=1,\dots,b$, the closed subset $F_{i-1}$ has codimension $\geq 2$ in $\mathbb{P}_T V(\underline{d})^{\text{Lef}}_{i-1}$. The complement of $\mathbb{P}_T V(\underline{d})^{\text{Lef}}_b$ in $\mathbb{P}_T V(\underline{d})$ has codimension $\geq 2$. If $p$ has connected geometric fibers and if $n\geq b+1$, resp. if $n\geq b+2$, then every geometric fiber of the projection $$ \text{pr}_2:X_b\times_{\mathbb{P}_T V(\underline{d})} \mathbb{P}_T V(\underline{d})^{\text{Lef}}_b \to \mathbb{P}_T V(\underline{d})^{\text{Lef}}_b $$ is connected, resp. is normal and irreducible. Finally, if $\text{char}(K)$ equals $0$, if $X_T$ is smooth over $K$, if $X_T/T$ is acyclic, and if $n \geq b+ 4$, then $X_b\times_{\mathbb{P}_T V(\underline{d})} \mathbb{P}_T V(\underline{d})^{\text{Lef}}_b$ is smooth over $K$, the morphism $\text{pr}_2$ above is acyclic, and the natural map from $\text{Pic}_{X_T/T}$ to the relative Picard scheme of $\text{pr}_2$ is an isomorphism. \end{cor} \begin{proof} The first assertion follows from Proposition \ref{prop-Lef} applied to the restriction over $\mathbb{P}_T V(\underline{d})^{\text{Lef}}_{i-1}$ of the morphism $X_{i-1}\to \mathbb{P}_T V(\underline{d})$. Thus, by induction on $i$, for every $i=0,\dots,b$, the closed complement of the open subset $\mathbb{P}_T V(\underline{d})^{\text{Lef}}_i$ in $\mathbb{P}_T V(\underline{d})$ has codimension $\geq 2$.
\medskip\noindent Assuming that $n\geq b+1$, connectedness of the fibers of $X_i \times_{\mathbb{P}_T V(\underline{d})} \mathbb{P}_T V(\underline{d})^{\text{Lef}}_i \to \mathbb{P}_T V(\underline{d})^{\text{Lef}}_i$ for $i=1,\dots,b$ is proved by induction on $i$ using \cite[Corollaire 3.5, Expos\'{e} XII]{SGA2} for the induction step. If $n\geq b+2$, then the geometric fibers of $X_b$ are connected, projective schemes of pure dimension $n-b\geq 2$ that are either smooth or else have a single ordinary double point. In particular, each geometric fiber is a local complete intersection scheme that is regular away from codimension $\geq 2$. By Serre's Criterion, \cite[Th\'{e}or\`{e}me 5.8.6]{EGA4}, the geometric fiber is normal. Since it is also connected, it is irreducible. \medskip\noindent The final assertions follow from Corollary \ref{cor-iterate2} and induction on $b$. \end{proof} \end{document}
\begin{document} \title[Non-algebraic Manifolds with the VDP]{Non-algebraic Examples of Manifolds with the Volume Density Property} \author[A. Ramos-Peon]{Alexandre Ramos-Peon} \address{Mathematisches Institut\\Universität Bern\\Sidlerstr. 5\\3012 Bern\\Switzerland.} \email{[email protected]} \thanks{Partially supported by Schweizerischer Nationalfonds Grant 153120} \subjclass[2010]{Primary 32M17,32H02. Secondary 32M25,14R10 } \date{\today} \commby{} \begin{abstract} Some Stein manifolds (with a volume form) have a large group of (volume-preserving) automorphisms: this is formalized by the (volume) density property, which has remarkable consequences. Until now all known manifolds with the volume density property are algebraic, and the tools used to establish this property are algebraic in nature. In this note we adapt a known criterion to the holomorphic case, and give the first known examples of non-algebraic manifolds with the volume density property: they arise as suspensions or pseudo-affine modifications over Stein manifolds satisfying some technical properties. As an application we show that there are such manifolds that are potential counterexamples to the Zariski Cancellation Problem, a variant of the T\'{o}th-Varolin conjecture, and the problem of linearization of $\mathbb{C}^*$-actions on $\mathbb{C}^3$. \end{abstract} \maketitle \section{Introduction}\label{sec:intro} The group of automorphisms of complex affine space $\operatorname{Aut}(\mathbb{C}^n)$ has been intensively studied, both from the algebraic and from the analytic point of view. A foundational observation by E. Andersén and L. Lempert \cite{AL}, who proved that every polynomial vector field on $\mathbb{C}^n$ is a finite sum of complete vector fields, served as a starting point for new studies on $\operatorname{Aut}(\mathbb{C}^n)$ (because the flow of a complete field generates a $\mathbb{C}_+$-action on $\mathbb{C}^n$). This led F. Forstneri\v{c} and J-P.
Rosay \cite{FR} to the formulation which is now commonly called the Andersén-Lempert theorem: any local holomorphic flow defined near a holomorphically convex compact set can be approximated by global holomorphic automorphisms. Hence $\operatorname{Aut}(\mathbb{C}^n)$ is exceptionally large, and this result opens the possibility of constructing automorphisms with prescribed local behavior, with remarkable consequences, such as the existence of non-straightenable embeddings of $\mathbb{C}$ into $\mathbb{C}^2$ (see Section \ref{sec:last}), counterexamples to the holomorphic linearization problem \cite{Derksen-Kut}, among many others (see e.g. \cite{KK-state}). This aspect of the study of the automorphism group may be referred to as Andersén-Lempert theory and is the subject of ongoing research. In order to generalize those techniques to a wider class of manifolds, D. Varolin introduced in \cite{Varolin1} the concept of the \textit{density property}, which accurately captures the idea of a manifold having a ``large'' group of automorphisms. Examples include homogeneous spaces, Danilov-Gizatullin surfaces, as well as Danielewski surfaces (see below). Andersén considered even earlier, in \cite{A90}, the situation where the vector fields preserve the standard volume form on $\mathbb{C}^n$, obtaining similar results. There is a corresponding \textit{volume density property} for manifolds equipped with a volume form, which has been substantially less studied. Beyond $\mathbb{C}^n$, only a few isolated examples were known to Varolin (see \cite{Varolin-sh}), including $(\mathbb{C}^*)^n$ and $\operatorname{SL}_2(\mathbb{C})$. It took around ten years until new instances of these manifolds were found in \cite{KK-volume}: all linear algebraic groups equipped with the left invariant volume form, as well as some algebraic Danielewski surfaces (see \cite{KK-Corea} for an exhaustive list). In this note we exhibit new manifolds with the volume density property.
We prove a general result (see Theorem \ref{th:4.1}), from which we can deduce the following: \begin{theorem}\label{thm:i1} Let $n\geq 1$ and $f\in\mathcal{O}(\mathbb{C}^n)$ be a nonconstant holomorphic function with smooth reduced zero fiber $X_0$, such that $\tilde{H}^{n-2}(X_0)=0$ if $n\geq 2$. Then the hypersurface $\overline{\mathbb{C}^n_f}=\{uv=f(z_1,\dots,z_n)\}\subset\mathbb{C}^{n+2}$ has the volume density property with respect to the form $\bar{\omega}$ satisfying $d(uv-f)\wedge\bar{\omega}=du\wedge dv \wedge dz_1\wedge\dots \wedge dz_n$. \end{theorem} For $n=1$ this manifold is called a \textit{Danielewski surface}. Theorem \ref{thm:i1} was known in the special case where $f$ is a polynomial: this is due to S. Kaliman and F. Kutzschebauch, see \cite{KK-volume}. Their proof heavily depends on the use of Grothendieck's spectral sequence and seems difficult to generalize to the non-algebraic case. Our method of proof is completely different. It relies on modifying and using a suitable criterion involving so-called semi-compatible pairs of vector fields, developed in \cite{KK-compatible} for the algebraic setting. This method will be explained in Section \ref{sec:first}. In Section \ref{sec:second} we will study the suspension $\overline{X}$ (or pseudo-affine modification) of rather general manifolds $X$ along $f\in\mathcal{O}(X)$. After some results concerning the topology and homogeneity of $\overline{X}$, we will show that the structure of $\overline{X}$ makes it possible to lift compatible pairs of vector fields from $X$ to $\overline{X}$, in such a way that a technical but essential generating condition on $T\overline{X}\wedge T\overline{X}$ is guaranteed (Theorem \ref{Jo}). It is still unknown whether a contractible Stein manifold with the volume density property has to be biholomorphic to $\mathbb{C}^n$. It is believed that the answer is negative, see \cite{KK-volume}. 
For instance the affine algebraic submanifold of $\mathbb{C}^6$ given by the equation $uv = x+ x^2y + s^2 + t^3$ is such an example. Another prominent one is the Koras-Russell cubic threefold, see \cite{Leuenberger-volume}. In Section \ref{sec:last} we will show how to use Theorem \ref{th:4.1} to produce a non-algebraic manifold with the volume density property which is diffeomorphic to $\mathbb{C}^{n+1}$, which to our knowledge is the first of this kind. In fact, we prove the following. \begin{theorem} Let $\phi:\mathbb{C}^{n-1}\hookrightarrow\mathbb{C}^{n}$ be a proper holomorphic embedding, and consider the manifold defined by $\overline{\mathbb{C}^n_f}=\{uv=f(z_1,\dots,z_n)\}\subset\mathbb{C}^{n+2}$, where $f\in\mathcal{O}(\mathbb{C}^n)$ generates the ideal of functions vanishing on $\phi(\mathbb{C}^{n-1})$. Then $\overline{\mathbb{C}^n_f}$ is diffeomorphic to $\mathbb{C}^{n+1}$ and has the volume density property with respect to the volume form $\bar\omega$ satisfying $d(uv-f)\wedge\bar{\omega}=du\wedge dv \wedge dz_1\wedge\dots \wedge dz_n$. Moreover $\overline{\mathbb{C}^n_f}\times \mathbb{C}$ is biholomorphic to $\mathbb{C}^{n+2}$, and therefore is a potential counterexample to the Zariski Cancellation Problem if $\phi$ is not straightenable. \end{theorem} We end Section \ref{sec:last} with two more examples which are related to the problem of linearization of holomorphic $\mathbb{C}^*$-actions on $\mathbb{C}^n$. The author would like to thank F. Kutzschebauch for useful advice and suggestions during the preparation of the manuscript. \section{A criterion for volume density property}\label{sec:first} Let $X$ be a complex manifold of dimension $n$. We implicitly identify $T^{1,0}X$ with the real bundle $TX$; the global holomorphic sections of $TX$ are called holomorphic vector fields, and for simplicity we denote by $\operatorname{VF}(X)$ the $\mathcal{O}(X)$-module of all such fields.
Similarly, global holomorphic sections of the bundle $\wedge^j T^*X$ are called holomorphic $j$-forms, and we denote by $\Omega^j(X)$ the vector space of all such forms. In the sequel we drop the adjective \textit{holomorphic} since we only deal with such objects. Of particular interest to us are the \textit{complete} vector fields on $X$: these are defined to be those whose flow, starting at any point in $X$, exists for all complex times, and hence generate one-parameter groups of automorphisms of $X$. We denote by $\mathbb{C}VF(X)$ the vector space of such fields, and note that given $\Theta\in\mathbb{C}VF(X)$, if either $f$ or $\Theta(f)$ lies in $\operatorname{Ker}(\Theta)$, then $f\Theta\in\mathbb{C}VF(X)$. Observe that sums or Lie combinations of elements in $\mathbb{C}VF(X)$ are in general not complete; denote by $\operatorname{Lie}(X)$ the Lie algebra generated by the elements in $\mathbb{C}VF(X)$. Assume $X$ is equipped with a \textit{volume form} $\omega$, that is, a non-degenerate $n$-form. Recall that the divergence of a vector field $\Theta$ on $X$ with respect to $\omega$ is the unique complex-valued function $\operatorname{div}_\omega\Theta$ such that \[(\operatorname{div}_\omega\Theta)\omega=\mathcal{L}_\Theta\omega\] where $\mathcal{L}_\Theta$ is the Lie derivative in the direction of $\Theta$. We can consider vector fields $\Theta$ of zero divergence with respect to $\omega$: $\mathcal{L}_\Theta\omega=0$, which is equivalent to $\phi^*_t\omega=\omega$, where $\phi_t$ is the time $t$ map of the local flow of $\Theta$. Denote by $\operatorname{VF}_\omega(X)$ the vector space of all such fields, which we also call \emph{volume-preserving} (note that this is not an $\mathcal{O}(X)$-module anymore), and let $\mathcal{Z}^j(X)$ (resp. $\mathcal{B}^j(X)$) denote the vector space of $d$-closed (resp. $d$-exact) $j$-forms on $X$.
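For concreteness (a standard computation in the model case): on $X=\mathbb{C}^n$ with $\omega=dz_1\wedge\dots\wedge dz_n$ and $\Theta=\sum_{i=1}^n f_i\frac{\partial}{\partial z_i}$, one finds
\[
\mathcal{L}_\Theta\omega = \Big(\sum_{i=1}^{n} \frac{\partial f_i}{\partial z_i}\Big)\,\omega,
\qquad\text{so}\qquad
\operatorname{div}_\omega\Theta=\sum_{i=1}^{n}\frac{\partial f_i}{\partial z_i},
\]
recovering the usual divergence; in particular the shear fields $g(z_2,\dots,z_n)\frac{\partial}{\partial z_1}$ are complete and volume-preserving.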
We denote by $\operatorname{Lie}_\omega(X)$ the Lie algebra generated by elements in $\mathbb{C}VF_\omega(X)=\operatorname{VF}_\omega(X)\cap\mathbb{C}VF(X)$. The following is a definition of Varolin, making explicit the essential property of $\mathbb{C}^n$ necessary for the Andersén-Lempert behavior described in Section \ref{sec:intro}. \begin{definition} Let $X$ be a Stein manifold. We say that $X$ has the density property (in short DP) if $\operatorname{Lie}(X)$ is dense in $\operatorname{VF}(X)$ in the compact-open topology. If moreover $X$ is equipped with a volume form $\omega$, we say that $X$ has the volume density property with respect to $\omega$ (in short $\omega$-VDP) if $\operatorname{Lie}_\omega(X)$ is dense in $\operatorname{VF}_\omega(X)$. \end{definition} A manifold may have the VDP with respect to one form but not with respect to another one. The VDP does not in general imply the DP: take for instance $(\mathbb{C},dz)$; less trivially, $(\mathbb{C}^*)^k$ has the VDP but it is unknown if it has the DP for $k\geq 2$. These definitions can be adapted to the algebraic setting: if we consider only algebraic vector fields on an affine algebraic variety (and an algebraic volume form, respectively), and replace density by equality, the definitions above are those of the algebraic \mbox{($\omega$-V)DP}. It should be noted that the algebraic DP implies the DP as defined above; similarly, the algebraic VDP implies the holomorphic VDP, although this is not a trivial fact (see \cite[Prop. 4.1]{KK-volume}). An effective criterion for the algebraic density property was found by Kaliman and Kutzschebauch in \cite{KK-Criteria}. The idea is to find a nonzero $\mathbb{C}[X]$-module in $\operatorname{Lie}_{alg}(X)$, which can be ``enlarged'' in the presence of a certain homogeneity condition to the whole $\operatorname{VF}_{alg}(X)$. The module can be found as soon as there is a pair of complete fields which is ``compatible'' in a certain technical sense.
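A basic example of such a pair (standard in Andersén-Lempert theory): on $X=\mathbb{C}^2$ with $\omega=dz_1\wedge dz_2$, take $\nu=\frac{\partial}{\partial z_1}$ and $\mu=\frac{\partial}{\partial z_2}$, which are complete and volume-preserving. Then $\operatorname{Ker}\nu$ consists of the functions of $z_2$ alone and $\operatorname{Ker}\mu$ of the functions of $z_1$ alone, so
\[
\operatorname{Span}_{\mathbb{C}}(\operatorname{Ker}\nu\cdot\operatorname{Ker}\mu)\supset \mathbb{C}[z_1,z_2],
\]
and the closure of this span is all of $\mathcal{O}(\mathbb{C}^2)$. Moreover, $h=z_1$ satisfies $h\in\operatorname{Ker}\mu$ and $\nu(h)=1\in\operatorname{Ker}\nu$, so the pair also meets the stronger compatibility condition discussed below.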
The algebraic VDP was first thoroughly studied in \cite{KK-volume}, and a corresponding criterion was subsequently developed in \cite{KK-compatible}, wherein the notion of ``semi-compatible'' vector fields is central. In what follows, we give a holomorphic version of this criterion. Given a vector field $\Theta\in \operatorname{VF}(X)$, there is a degree $-1$ $\wedge$-antiderivation $\iota_\Theta$ on the graded algebra of forms $\Omega(X)$ called the \textit{interior product}, defined by the relation \[ (\iota_\Theta\alpha)(\nu)=\alpha(\Theta\wedge\nu),\quad \alpha\in\Omega^{j+1}(X),\nu\in\Gamma(X,\wedge^jTX). \] Its relationship to the exterior derivative $d$ is expressed through Cartan's formula \[ \mathcal{L}_\Theta\alpha=d\iota_\Theta\alpha+\iota_\Theta d\alpha. \] If $\omega$ is a volume form on $X$, non-degeneracy implies that vector fields and $(n-1)$-forms are in one-to-one correspondence via $ \Theta\mapsto \iota_{\Theta}\omega,$ which by Cartan's formula restricts to an isomorphism \[ \Phi:\operatorname{VF}_\omega(X)\to \mathcal{Z}^{n-1}(X). \] In the same spirit, there is an isomorphism of $\mathcal{O}(X)$-modules \begin{equation}\label{intprod} \Psi:\operatorname{VF}(X)\wedge \operatorname{VF}(X)\to \Omega^{n-2}(X),\quad \nu\wedge\mu\mapsto \iota_{\nu}\iota_\mu\omega \end{equation} and it is straightforward that $\iota_{\nu}\iota_\mu\omega=\iota_{\nu\wedge\mu}\omega$. We can deduce from the easily verified relation $[\mathcal{L}_\nu,\iota_\mu]=\iota_{[\nu,\mu]}$ that for $\nu,\mu\in\operatorname{VF}_\omega(X)$, \begin{equation}\label{bracket} \iota_{[\nu,\mu]}\omega=d\iota_{\nu}\iota_\mu\omega.
\end{equation} Hence by restricting the isomorphism in Equation \ref{intprod} to $\wedge^2\mathbb{C}VF_\omega(X)$ and composing with the exterior differential $d:\Omega^{n-2}\to\mathcal{B}^{n-1}$ we obtain a mapping \[ d\circ\Psi:\mathbb{C}VF_\omega(X)\wedge\mathbb{C}VF_\omega(X)\to\mathcal{B}^{n-1},\quad \nu\wedge\mu\mapsto \iota_{[\nu,\mu]}\omega, \] whose image is in fact contained in $\Phi(\operatorname{Lie}_\omega(X))$. Suppose we want to approximate $\Theta\in\operatorname{VF}_\omega(X)$ on $K\subset X$ by a Lie combination of elements in $\mathbb{C}VF_\omega(X)$. Consider the closed form $\iota_\Theta\omega$ and assume for the time being that it is exact. Then by Equation \ref{intprod} there is $\gamma\in\operatorname{VF}_\omega(X)\wedge\operatorname{VF}_\omega(X)$ such that $\iota_\Theta\omega=d(\Psi(\gamma))$. It now suffices to approximate $\gamma$ by a sum of the form $\sum \alpha_i\wedge\beta_i\in\operatorname{Lie}_\omega(X)\wedge\operatorname{Lie}_\omega(X)$. Indeed, by Equation \ref{bracket}, $\iota_\Theta\omega=d\circ\Psi(\gamma)$ would then be approximated by elements \[ d\circ\Psi(\sum\alpha_i\wedge\beta_i)=\sum\iota_{[\alpha_i,\beta_i]}\omega\in\Phi(\operatorname{Lie}_\omega(X)), \] which implies that $\Theta$ is approximated uniformly on $K$ by elements of the form $\sum[\alpha_i,\beta_i]\in\operatorname{Lie}_\omega(X)$, as desired. We therefore concentrate on this approximation in $\operatorname{VF}_\omega(X)\wedge\operatorname{VF}_\omega(X)$. We will assume that \textit{(a)} there are $\nu_1,\dots,\nu_k,\mu_1,\dots,\mu_k\in\mathbb{C}VF_\omega(X)$ such that the submodule of $\operatorname{VF}(X)\wedge\operatorname{VF}(X)$ generated by the elements $\nu_j\wedge\mu_j$ is contained in the closure of $\operatorname{Lie}_\omega(X)\wedge\operatorname{Lie}_\omega(X)$.
We may assume $K$ to be $\mathcal{O}(X)$-convex, and let us suppose \textit{(b)} that for all $p$ in a Runge Stein neighborhood $U$ of $K$, the elements $\nu_j(p)\wedge\mu_j(p)$ generate the vector space $T_pX\wedge T_pX$. We then proceed with standard methods in sheaf cohomology: let $\mathfrak{F}$ be the coherent sheaf corresponding to the bundle $\wedge^2 TX$. Condition \textit{(b)} translates to the fact that the images of $\nu_j\wedge\mu_j$ generate the fibers of the sheaf, so by Nakayama's Lemma they lift to a set of generators for the stalks $\mathfrak{F}_p$ for all $p\in U$. Therefore, by Cartan's Theorem B, the sections of $\mathfrak{F}$ on $U$ are of the form \begin{equation}\label{wedges} \sum_j h_j (\nu_j\wedge\mu_j)\end{equation} with $h_j\in\mathcal{O}(U)$. Since $U$ is Runge, we conclude that every element $\gamma\in\operatorname{VF}_\omega(X)\wedge\operatorname{VF}_\omega(X)$ may be uniformly approximated on $K$ by elements as in Equation \ref{wedges} with $h_j\in\mathcal{O}(X)$. By assumption \textit{(a)}, $\gamma$ may be approximated uniformly on $K$ by elements in $\operatorname{Lie}_\omega(X)\wedge\operatorname{Lie}_\omega(X)$. To find the pairs $\nu_j\wedge\mu_j$, observe that if $\nu,\mu\in\mathbb{C}VF_\omega(X)$, and $f\in\operatorname{Ker}\nu,g\in\operatorname{Ker}\mu$, then $f\nu,g\mu\in \mathbb{C}VF_\omega(X)$. By linearity, any element in the span of $(\operatorname{Ker}\nu\cdot\operatorname{Ker}\mu)\cdot(\nu\wedge\mu)$ lies in $\operatorname{Lie}_\omega(X)\wedge\operatorname{Lie}_\omega(X)$. By considering closures, we see that if $I$ is a nonzero ideal contained in the closure of $\operatorname{Span}_{\mathbb{C}}(\operatorname{Ker}\nu\cdot\operatorname{Ker}\mu)$, then $I\cdot(\nu\wedge\mu)$ generates a submodule of $\operatorname{VF}(X)\wedge\operatorname{VF}(X)$ which is contained in $\overline{\operatorname{Lie}_\omega(X)\wedge\operatorname{Lie}_\omega(X)}$. This motivates the following definition.
\begin{definition} Let $\nu,\mu$ be nontrivial complete vector fields on $X$. We say that the pair $(\nu,\mu)$ is \emph{semi-compatible} if the closure of the span of $\operatorname{Ker} \nu\cdot\operatorname{Ker}\mu$ contains a nonzero ideal of $\mathcal{O}(X)$. We call the largest ideal $I\subset \overline{\operatorname{Span}_{\mathbb{C}}(\operatorname{Ker}\nu\cdot\operatorname{Ker}\mu)}$ the ideal of the pair $(\nu,\mu)$. \end{definition} To reduce to the special case just treated (where $\iota_\Theta\omega$ is exact), we must further assume that given $\Theta\in\operatorname{VF}_\omega(X)$, it is possible to obtain the zero class in $H^{n-1}(X)$ by subtracting an element of $\Phi(\operatorname{Lie}_\omega(X))$; however, Equation \ref{bracket} implies that Lie brackets represent the zero class in $H^{n-1}(X)$, so it is enough to subtract elements from $\Phi(\mathbb{C}VF_\omega(X))$. The preceding discussion then shows that the existence of ``enough'' semi-compatible pairs of volume-preserving vector fields, along with this condition, suffices to establish the VDP. We have thus proved the following criterion: \begin{proposition}\label{crit} Let $X$ be a Stein manifold of dimension $n$ with a holomorphic volume form $\omega$, satisfying the following condition: \[ \text{every class of }H^{n-1}(X) \text{ contains an element in the closure of } \Phi(\mathbb{C}VF_\omega(X)). \] Suppose there are finitely many semi-compatible pairs of volume-preserving vector fields $(\nu_j,\mu_j)$ with ideals $I_j$ such that for all $x\in X$, \[\left\{I_j(x)(\nu_j(x)\wedge \mu_j(x))\right\}_j\text{ generates }\wedge^2 T_xX.\] Then $X$ has the $\omega$-VDP. \end{proposition} It is also possible to adapt the criterion for the algebraic DP of \cite{KK-Criteria}, using so-called compatible vector fields, which satisfy a stronger condition.
Namely, a semi-compatible pair of (complete) vector fields $(\nu,\mu)$ is called \emph{compatible} if there exists $h\in\mathcal{O}(X)$ such that $\nu(h)\in\operatorname{Ker}\nu$ and $h\in\operatorname{Ker}\mu$. Let $f\in\operatorname{Ker}\nu$ and $g\in\operatorname{Ker} \mu$. Then $f\nu,fh\nu,g\mu,gh\mu$ are complete vector fields and a simple calculation shows that \[ fg\nu(h)\mu=[f\nu,gh\mu]-[fh\nu,g\mu]\in\operatorname{Lie}(X). \] In other words, if $I$ is the ideal associated to the pair $(\nu,\mu)$, then $I\cdot \nu(h)\cdot \mu$ generates a submodule of $\operatorname{VF}(X)$ which is contained in $\operatorname{Lie}(X)$. So by an obvious variant of the above discussion, we obtain the following generalized criterion for the DP. \begin{proposition} Let $X$ be a Stein manifold. Suppose there are finitely many compatible pairs of vector fields $(\nu_j,\mu_j)$, with ideals $I_j$ and functions $h_j$ as above, such that for all $x\in X$ the vectors in $I_j(x)\,\nu_j(h_j)(x)\,\mu_j(x)$ generate $T_xX$. Then $X$ has the DP. \end{proposition} \section{Suspensions}\label{sec:second} Let $X$ be a connected Stein manifold of dimension $n$, and let $f\in\mathcal{O}(X)$ be a nonconstant holomorphic function with a smooth reduced zero fiber $X_0$ (this means that $df$ vanishes nowhere on $X_0$). To it we associate the space $\overline{X}$, called the \textit{suspension} over $X$ along $f$, which is defined as \[ \overline{X}=\{ (u,v,x)\in\mathbb{C}^2\times X ; uv-f(x)=0 \}. \] Since $X_0$ is smooth and reduced, $d(uv-f)\neq 0$ at every point of $\overline{X}$, so $\overline{X}$ is smooth. Hence $\overline{X}$ is a Stein manifold of dimension $n+1$. Suppose $X$ has a volume form $\omega$. Then $\Omega=du\wedge dv\wedge \omega$ is a volume form on $\mathbb{C}^2\times X$. There exists a canonical volume form $\overline{\omega}$ on $\overline{X}$ such that \[ d(uv-f)\wedge \overline{\omega}=\Omega|_{\overline{X}}.
\] Moreover, any vector field $\overline{\Theta}$ on $\overline{X}$ has an extension $\Theta$ to $\mathbb{C}^2\times X$ with $\Theta(uv-f)=0$, and we have $\operatorname{div}_{\overline{\omega}}\overline{\Theta}=\operatorname{div}_\Omega\Theta|_{\overline{X}}$ (see \cite[2.2,2.4]{KK-Zeit}). In view of our criterion we now investigate the existence of sufficiently many semi-compatible pairs, as well as the topology of $\overline{X}$. Let $\Theta\in\operatorname{VF}(X)$. There exists an extension $\tilde{\Theta}\in\operatorname{VF}(\mathbb{C}^2\times X)$ such that $\tilde{\Theta}(u)=\tilde{\Theta}(v)=0$ and $\tilde{\Theta}(\tilde{g})=\Theta(g)$ for all $g\in\mathcal{O}(X)$ (here $\tilde{g}$ is an extension of $g$ not depending on $u,v$). Clearly, $\operatorname{div}_\Omega\tilde{\Theta}=\pi^*(\operatorname{div}_\omega\Theta)$, where $\pi:\mathbb{C}^2\times X\to X$ is the natural projection. We may ``lift'' $\Theta$ to a field on $\overline{X}$ in two different ways. Consider the fields on $\mathbb{C}^2\times X$ \[\Theta_u=v\cdot \tilde{\Theta}+\widetilde{\Theta(f)}\frac{\partial}{\partial u} \quad\quad \Theta_v=u\cdot \tilde{\Theta}+\widetilde{\Theta(f)}\frac{\partial}{\partial v},\] which are clearly tangent to $\overline{X}$; we may therefore consider the corresponding fields (restrictions) on $\overline{X}$, which we denote simply ${\Theta_u}$ and ${\Theta_v}$. \begin{lemma}\label{l1} If $\Theta$ is $\omega$-volume-preserving, then ${\Theta_u}$ and ${\Theta_v}$ are of $\overline{\omega}$-divergence zero. Moreover, if $\Theta$ is complete, then ${\Theta_u}$ and ${\Theta_v}$ are also complete. \end{lemma} \begin{proof} The completeness of the lifts is clear, but it will be useful for the sequel to compute explicitly their flows.
Denote by $\phi^t(x)$ the flow of $\Theta$ on $X$, and let $g:X\times \mathbb{C}\to \mathbb{C}$ be the first order approximation of $f\circ \phi$ with respect to $t$; in other words, let $g$ satisfy \begin{equation}\label{defining} f(\phi^t(x))=f(x)+tg(x,t). \end{equation} Since $f$ is holomorphic, $g$ is well defined and holomorphic on $X\times \mathbb{C}$. The claim is that $\Phi:\overline{X}\times \mathbb{C}_t\to \overline{X} $ defined by \begin{equation}\label{flowoflift} \Phi^t(u,v,x)=(u+tg(x,tv),v,\phi^{tv}(x)) \end{equation} is the flow of $\Theta_u$, which therefore exists for all $t$. Indeed, we compute\[ \Theta_u(\Phi^t(u,v,x))=v\cdot\Theta({\phi^{tv}(x))}+\Theta (f)(\phi^{tv}(x))\frac{\partial}{\partial u}, \] while on the other hand \[\frac{\partial}{\partial t}\Phi^t(u,v,x)=\frac{\partial}{\partial t}(tg(x,tv))\frac{\partial}{\partial u}+\frac{\partial}{\partial t}(\phi^{tv}(x))=\frac{\partial}{\partial t}(tg(x,tv))\frac{\partial}{\partial u}+v\cdot\Theta(\phi^{tv}(x)). \] The equality $\frac{\partial}{\partial t}(tg(x,tv))=\Theta (f)(\phi^{tv}(x))$ follows by differentiating Equation \ref{defining} at $(x,tv)$. Since $\operatorname{div}_{\overline{\omega}}{\Theta_u}=\operatorname{div}_\Omega\Theta_u|_{\overline{X}}$, and because divergence (with respect to any volume form) is linear and satisfies $\operatorname{div}(h\cdot\Theta)=h\operatorname{div}\Theta+\Theta(h)$, we get \[ \operatorname{div}_{\overline{\omega}}{\Theta_u}= v\cdot \operatorname{div}_{\Omega}\tilde{\Theta}|_{\overline{X}}+\tilde{\Theta}(v)+\Theta(f)\operatorname{div}_\Omega\left(\frac{\partial}{\partial u}\right)+\frac{\partial}{\partial u}(\Theta(f)) =v\cdot \operatorname{div}_{\Omega}\tilde{\Theta}|_{\overline{X}}\] and as noted above $\operatorname{div}_{\Omega}\tilde{\Theta}=\pi^*(\operatorname{div}_\omega\Theta)=0$. \end{proof} \begin{lemma}\label{l2} Suppose $(\nu,\mu)$ is a semi-compatible pair of vector fields on $X$. 
Then $({\nu_u},{\mu_v})$ and $({\nu_v},{\mu_u})$ are semi-compatible pairs on $\overline{X}$. \end{lemma} \begin{proof} By Lemma \ref{l1}, the lifted and extended fields are complete. It then suffices to show that $(\nu_u,\mu_v)$ is a semi-compatible pair in $\mathbb{C}^2\times X$, because we may restrict the elements of the ideal to $\overline{X}$: by the Cartan extension theorem, this set of restrictions forms an ideal in $\mathcal{O}(\overline{X})$. Let $I$ be the ideal of the pair $(\nu,\mu)$, and denote \[ \tilde{I}=\left\{ \tilde{h}\cdot F(u,v)~;~h\in I,\, F\in\mathcal{O}(\mathbb{C}^2)\right\}\subset\mathcal{O}(\mathbb{C}^2\times X), \] where $\tilde{h}$ is the trivial extension as above. The closure of the span of $\tilde{I}$ is clearly a nonzero ideal. An element of $\tilde{I}$ can be approximated uniformly on a given compact subset of $\mathbb{C}^2\times X$ by a finite sum \[ \Big(\sum_k \tilde{n}_k\tilde{m}_k\Big)\sum_{i,j}a_{i,j}u^i v^j= \sum_{i,j,k} a_{i,j}(\tilde{n}_k v^j)(\tilde{m}_k u^i), \] where $n_{k}\in\operatorname{Ker}(\nu)$ and $m_k\in\operatorname{Ker}(\mu)$ for all $k$. Since $\tilde{n}_k v^j\in\operatorname{Ker}(\nu_ u)$ for all $j,k\geq 0$ and $\tilde{m}_k u^i \in\operatorname{Ker}(\mu_v)$ for all $i,k\geq 0$, it follows that $\tilde{I}$ is contained in the closure of $\operatorname{Span}_\C(\operatorname{Ker}(\nu_u)\cdot\operatorname{Ker}(\mu_v))$. \end{proof} The topology of the suspension $\overline{X}$ is of course closely related to that of $X$. In the case where $X$ is the affine space, this relationship is computed in detail in \cite[\S 4]{KZaffine}. For more general $X$ we have the following. \begin{proposition}\label{P2} Assume $X$ has dimension $n\geq 2$. If the complex de Rham cohomology groups satisfy $H^n(X)=H^{n-1}(X)=0$ and $\tilde{H}^{n-2}(X_0)=0$, where $\tilde{H}$ denotes reduced cohomology, then $H^n(\overline{X})=0$.
\end{proposition} \begin{proof} Consider the long exact sequence of the pair $(\overline{X},\overline{X}\setminus U_0)$ in cohomology, where $U_0$ is the subspace of $\overline{X}$ where $u$ vanishes: \begin{equation}\label{exact} \dots\to H^n(\overline{X},\overline{X}\setminus U_0)\to H^n(\overline{X})\to H^n(\overline{X}\setminus U_0)\to \dots \end{equation} The term on the right vanishes, because $\overline{X}\setminus U_0$ is biholomorphic to $\mathbb{C}^*\times X$ via $(u,x)\mapsto (u,f(x)/u,x)$, so \[H^n(\overline{X}\setminus U_0)=(H^1(\mathbb{C}^*)\otimes H^{n-1}(X))\oplus (H^0(\mathbb{C}^*)\otimes H^n(X))=0. \] To evaluate the left-hand side, we use an idea due to Zaidenberg (see \cite{Zexotic}). Consider the normal bundle $\pi:N\to U_0$ of the closed submanifold $U_0$ in $\overline{X}$, with zero section $N_0\cong U_0$. Fix a tubular neighborhood $W$ of $U_0$ in $\overline{X}$ such that the pair $(W,U_0)$ is diffeomorphic to $(N,N_0)$. Then by excision, we have that \[ \tilde{H}^*(\overline{X},\overline{X}\setminus U_0)\cong \tilde{H}^*(W,W\setminus U_0)\cong \tilde{H}^*(N,N\setminus N_0). \] Let $t\in H^2(N,N\setminus N_0)$ be the Thom class of $U_0$ in $\overline{X}$, that is, the unique cohomology class taking value $1$ on any oriented relative $2$-cycle in $H_2(N,N \setminus N_0)$ defined by a fiber $F$ of the normal bundle $N$ (see e.g. \cite[\S 9--10]{Charclass} for details). Then, by taking the cup-product of the pullback under $\pi$ of a cohomology class with $t$, we obtain the Thom isomorphisms \[ H^i(U_0)\cong H^{i+2}(N,N\setminus N_0)\cong H^{i+2} (\overline{X},\overline{X}\setminus U_0)\quad\forall i. \] Since $U_0\cong X_0\times \mathbb{C}$, $U_0$ is homotopy equivalent to $X_0$, and we have $H^n(\overline{X},\overline{X}\setminus U_0)\cong H^{n-2}(X_0)$. If $n\geq 3$, then $n-2\geq 1$, so reduced and ordinary cohomology coincide in this degree, and therefore $H^n(\overline{X})=0$ by exactness of Equation \ref{exact}.
If $n=2$, that sequence becomes \[ \dots\to H^1(\overline{X}\setminus U_0)\to H^2(\overline{X},\overline{X}\setminus U_0)\to H^2(\overline{X})\to 0. \] Let $\gamma$ be an oriented relative $2$-cycle in $\overline{X}$ whose boundary $\partial \gamma$ lies in $\overline{X}\setminus U_0$ (a disk transversal to $U_0$). A one-dimensional subspace of $H^1(\overline{X}\setminus U_0)$ is generated by a $1$-cocycle taking value $1$ on $\partial \gamma$, and this cocycle is sent by the coboundary operator (the first map in the above sequence) to a $2$-cocycle taking value $1$ on $\gamma$, i.e., to the Thom class $t$ described previously, which is also a generator of a one-dimensional subspace of $H^2(\overline{X},\overline{X}\setminus U_0)$. However, $H^1(\overline{X}\setminus U_0)\cong H^1(\mathbb{C}^*\times X)\cong \mathbb{C}$ and $H^2(\overline{X},\overline{X}\setminus U_0)\cong H^0(U_0)\cong \mathbb{C}$, so the coboundary map is an isomorphism, and by exactness it follows that $H^2(\overline{X})=0$. \end{proof} Next, we show how to lift a collection of semi-compatible fields to the suspension and span $\wedge^2 T\overline{X}$ with semi-compatible fields\footnote{A simpler algebraic case has been treated by J. Josi (Master thesis, 2013, unpublished).}. We will denote by $\operatorname{Aut}(X)$ (resp. $\operatorname{Aut}_\omega(X)$) the group of holomorphic automorphisms of the manifold $X$ (resp. the volume-preserving automorphisms). \begin{theorem}\label{Jo} Let $X$ be a Stein manifold with a finite collection $S$ of semi-compatible pairs $(\alpha,\beta)$ of vector fields such that for some $x_0\in X$ \begin{equation}\label{span} \{\alpha(x_0)\wedge\beta(x_0);(\alpha,\beta)\in S\}\text{ spans } \wedge^2(T_{x_0} X). \end{equation} Assume that $\operatorname{Aut}(\overline{X})$ acts transitively on $\overline{X}$.
Then there exists a finite collection $\overline{S}$ of semi-compatible pairs $(A_j,B_j)$ on $\overline{X}$ with corresponding ideals $I_j$ such that \begin{equation}\label{spans} \operatorname{Span}_\C\{I_j(\bar x) A_j(\bar{x})\wedge B_j(\bar{x})\}_j=\wedge^2(T_{\bar{x}} \overline{X})\quad \forall \bar{x}\in\overline{X}. \end{equation} Moreover, if $X$ has a volume form $\omega$, the fields in $S$ preserve it, and $\operatorname{Aut}_{\bar\omega}(\overline{X})$ acts transitively, then the fields in $\overline{S}$ can be chosen to preserve the form $\bar{\omega}$. \end{theorem} \begin{proof} We claim that it is sufficient to show that the conclusion holds for a single $\bar{x}_0\in\overline{X}$. Indeed, let $C$ be the analytic set of points $\bar{x}\in \overline{X}$ where Equation \ref{spans} does not hold, and decompose $C$ into its (at most countably many) irreducible components $C_i$. For each $i$, let $D_i$ be the set of automorphisms $\phi$ of $\overline{X}$ such that the image of $C_i$ under $\phi$ has a nonempty intersection with $\overline{X}\setminus C_i$. Clearly each $D_i$ is open, and it is also dense: given $h\in\operatorname{Aut}(\overline{X})$ not in $D_i$, let $c\in C_i$, $d=h(c)\in C_i$ and $\gamma\in\operatorname{Aut}(\overline{X})$ mapping $\bar{x}_0$ to $d$. Now, since Equation \ref{spans} at $\bar x_0$ implies that the tangent space at $\bar x_0$ is spanned by complete fields, there exists a complete field $\alpha$ from the collection $\overline{S}$ such that $\gamma_*(\alpha)$ is not tangent to $C_i$ at $d$. If $\varphi$ is the flow of $\gamma_*(\alpha)$, then for small $t$, $\varphi^t\circ h$ is an automorphism arbitrarily close to $h$ mapping $c$ out of $C_i$. By the Baire Category Theorem, since $\operatorname{Aut}(\overline{X})$ is a complete metric space, there exists a $\psi\in\bigcap D_i$.
By expanding $\overline{S}$ to $\overline{S}\cup\{(\psi_*\alpha,\psi_*\beta);(\alpha,\beta)\in\overline{S}\}$, we obtain a finite collection of semi-compatible fields which fail to satisfy Equation \ref{spans} only in an exceptional variety of dimension strictly lower than that of $C$. The conclusion follows by finitely many iterations of this procedure. By the previous lemmas, if $(\alpha,\beta)\in S$ then $(\alpha_u,\beta_v)$ and $(\alpha_v,\beta_u)$ are semi-compatible pairs on $\overline{X}$. We let $\overline{S}$ consist of all those pairs. We will also add two pairs to $\overline{S}$, of the form $(\phi^*\alpha_u,\phi^*\beta_v)$, where $\phi$ is an automorphism of $\overline{X}$ (preserving the volume form, if necessary) to be specified later. We now select an appropriate $\bar x_0=(u_0,v_0,x_0)\in\overline{X}$ by picking any element in the complement of finitely many analytic subsets which we now describe. The first analytic subset of $\overline{X}$ to avoid is the locus where any of the (finitely many) associated ideals $I_j$ vanish. Note that Equation \ref{span} is in fact satisfied everywhere on $X$ except on an analytic variety $C'$: the second closed set in $\overline{X}$ to avoid is the preimage of $C'$ under the projection. Finally, we avoid the sets $\{u=0\}$, $\{v=0\}$ and the preimage of $\{df=0\}$. In short, we pick a $\bar x_0=(u_0,v_0,x_0)\in\overline{X}$ with $u_0\neq 0$, $v_0\neq 0$, $d_{x_0}f\neq 0$, such that Equation \ref{span} is satisfied at $x_0$, and such that none of the ideals $I_j(\bar x_0)$ vanish. Because of this last condition, it will suffice to show that $\{A(\bar x_0)\wedge B(\bar x_0); (A,B)\in \overline{S}\}$ spans $\wedge^2(T_{\bar x_0} \overline{X})$. Consider $\pi:\overline{X}\to X\times \mathbb{C}_u$, which at $\bar x_0$ induces an isomorphism $d_{\bar x_0} \pi :T_{\bar x_0} \overline{X}\to T_{x_0}X\times T_{u_0}\mathbb{C}$.
Denote by $\partial_u=\frac{\partial}{\partial u}$ the standard basis vector of $T_{u_0}\mathbb{C}$, and consider \[ P:\wedge^2(T_{\bar x_0}\overline{X})\to \wedge^2 (T_{x_0}X\oplus \left<\partial_u\right>)=\wedge^2(T_{x_0}X)\oplus(T_{x_0}X\otimes\left<\partial_u\right>). \] Since $P$ is a linear isomorphism, it now suffices to show that the direct sum on the right-hand side equals $P(\Lambda)$, where \[\Lambda=\operatorname{Span}_\C\{A(\bar x_0)\wedge B(\bar x_0);(A,B)\in \overline{S}\}.\] We will prove (i) that $\wedge^2(T_{x_0}X)\subseteq P(\Lambda)$, and (ii) that $T_{x_0}X\otimes\left<\partial_u\right>\subseteq P(\Lambda)$. Let us first show (i). Let $\alpha( x_0)\wedge \beta( x_0)\in\wedge^2(T_{x_0}X)$. Since $\{\alpha(x_0)\wedge \beta(x_0)\}_{(\alpha,\beta)\in S}$ spans $\wedge^2(T_{x_0} X)$, we can assume that $(\alpha,\beta)$ is a pair of vector fields lying in $S$ (we will often omit the point $x_0$ at which these fields are evaluated). Then $(\alpha_u, \beta_v)\in \overline{S}$, so $P(\Lambda)$ contains \begin{equation}\label{Lambda1} P(\alpha_u\wedge\beta_v)=P((v\tilde{\alpha}+\alpha(f)\partial_u)\wedge(u\tilde{\beta}+\beta(f)\partial_v))=uv(\alpha\wedge\beta)-u\alpha(f)(\beta\wedge\partial_u). \end{equation} At the point $\bar x_0$, we have assumed that $u$ and $v$ are both nonzero. If $\alpha(f)$ happens to vanish at $x_0$, then $\alpha(x_0)\wedge\beta(x_0)$ is in $P(\Lambda)$, as desired. Otherwise, consider the vector field $(u-u_0)\alpha_v$ on $\overline{X}$. Since $\alpha_v$ is complete and $(u-u_0)$ lies in the kernel of $\alpha_v$, $(u-u_0)\alpha_v$ is a complete (and $\bar\omega$-divergence free) vector field on $\overline{X}$. Quite generally one can compute, in local coordinates for example, that the flow at time $1$ of the field $g\Theta$, where $\Theta\in\mathbb{C}VF(M)$ and $g\in\operatorname{Ker}(\Theta)$ with $g(p)=0$, is a map $\phi$ whose derivative at $p\in M$ is given by \[ w\mapsto w+d_pg(w)\Theta(p),\quad w\in T_pM.
\] Therefore, for a vector field $\mu\in\operatorname{VF}(M)$, we have \begin{equation}\label{explicitvf} (\phi^{-1})^*(\mu)(p)=(d_{p}\phi)(\mu(p))=\mu(p)+\mu(g)(p)\Theta(p). \end{equation} Apply this in the case of $M=\overline{X}$, $p=\bar{x}_0$, $\Theta=\alpha_v$ and $g=u-u_0$. For the vector field $\mu=\beta_v$, this equals $\beta_v$; for $\mu=\alpha_u$, it equals $\alpha_u+\alpha(f)\alpha_v$. Hence, if we add $((\phi^{-1})^*\alpha_u,(\phi^{-1})^*\beta_v)$ to $\overline{S}$, we obtain that $P(\Lambda)$ contains \[ P((\phi^{-1})^*\alpha_u\wedge(\phi^{-1})^*\beta_v-\alpha_u\wedge\beta_v)=P(\alpha(f)\alpha_v\wedge\beta_v)=\alpha(f)u^2(\alpha\wedge\beta). \] We now show (ii). It will be useful to distinguish elements of $T_{x_0}X$ according to whether they belong to $K=\operatorname{Ker}(d_{x_0}f)$ or not. Since we have assumed $d_{x_0}f\neq 0$, $T_{x_0}X$ splits as $K\oplus V$, where $V$ is a vector space of dimension $1$, which may be spanned by some $\xi$ satisfying $d_{x_0}f(\xi)=\xi(f)=1$. The isomorphism is given by the unique decomposition $w=(w-w(f)\xi)+w(f)\xi$. This induces another splitting \begin{align*} \wedge^2(T_{x_0}X)&\to \wedge^2(K)\oplus(K\otimes V)\\ \alpha\wedge\beta&\mapsto (\alpha-\alpha(f)\xi)\wedge(\beta-\beta(f)\xi)+(\alpha(f)\beta-\beta(f)\alpha)\wedge\xi. \end{align*} Since the left-hand side is generated by $\{\alpha\wedge\beta;(\alpha,\beta)\in S\}$, $K\otimes V$ is generated by $\{(\alpha(f)\beta-\beta(f)\alpha)\wedge\xi;(\alpha,\beta)\in S\}$, and therefore $K$ by $\{\alpha(f)\beta-\beta(f)\alpha;(\alpha,\beta)\in S\}$. Now consider Equation \ref{Lambda1}: since $\wedge^2(T_{x_0}X)\subseteq P(\Lambda)$ by part (i) and $u_0v_0\neq 0$, it yields $u\alpha(f)(\beta\wedge\partial_u)\in P(\Lambda)$; likewise $P(\alpha_v\wedge\beta_u)=uv(\alpha\wedge\beta)+u\beta(f)(\alpha\wedge\partial_u)$ yields $u\beta(f)(\alpha\wedge\partial_u)\in P(\Lambda)$. Recalling that $u_0\neq 0$, we conclude that \begin{equation}\label{Kotimesu} K\otimes \left<\partial_u\right>=\operatorname{Span}_\C\{u(\beta(f)\alpha-\alpha(f)\beta)\wedge\partial_u;(\alpha,\beta)\in S\}\subset P(\Lambda). \end{equation} It remains to show that $V\otimes \left<\partial_u\right>\subset P(\Lambda)$.
By linearity, since $V$ has dimension $1$, it suffices to find a single element in $P(\Lambda)\cap (V\otimes \left<\partial_u\right>)$. In fact, since we have already proven (i), it suffices to find an element of $P_2(\Lambda)$ with nonzero component in $V\otimes \left<\partial_u\right>$, where $P_2$ is the second component of the map $P$. If for some pair $(\alpha,\beta)\in S$ both $\alpha(f)$ and $\beta(f)$ are nonzero at $x_0$, then by Equation \ref{Lambda1} $-u\alpha(f)\beta\wedge\partial_u$ is such an element. In the other case, there is at least one pair $(\alpha,\beta)\in S$ for which $\alpha(f)(x_0)=0$ and both $\beta(f)(x_0)\neq 0$ and $\alpha(x_0)\neq 0$, for otherwise the spanning condition implied by Equation \ref{Kotimesu} would fail to be satisfied. As in the proof of (i), we will add to $\overline{S}$ the pair $(\phi^*(\alpha_u),\phi^*(\beta_v))$, where $\phi$ is the time $1$ map of the flow of the complete (volume-preserving) field ${g}(x)\Theta$, with $\Theta=u\partial_u-v\partial_v$ and $g\in\mathcal{O}(X)$ vanishing at $x_0$. By Equation \ref{explicitvf}, we have that \[ \phi^*(\alpha_u)=\alpha_u+\alpha_u({g})\Theta=v\alpha+\alpha(f)\partial_u+v\alpha(g)(u\partial_u-v\partial_v), \] which by the assumption $\alpha(f)(x_0)=0$ simplifies to \[\phi^*(\alpha_u)=v\alpha+uv\alpha(g)\partial_u-v^2\alpha(g)\partial_v.\] Similarly we have \[ \phi^*(\beta_v)=u\beta+u^2\beta(g)\partial_u+(\beta(f)-uv\beta(g))\partial_v. \] Hence \[ P_2(\phi^*(\alpha_u)\wedge\phi^*(\beta_v))=u^2v\beta(g)\alpha\wedge\partial_u-u^2v\alpha(g)\beta\wedge\partial_u. \] The first summand lies in $K\otimes \left<\partial_u\right>$ (as $\alpha(f)(x_0)=0$), which we have already shown to be contained in $P(\Lambda)$. Since $\beta(f)\neq 0$, the second summand, if nonzero, has a nonzero component in $V\otimes \left<\partial_u\right>$. But it is clear that we may find a $g\in\mathcal{O}(X)$ vanishing at $x_0$ such that $\alpha(g)(x_0)\neq 0$.
\end{proof} Finally, we show how the transitivity requirement of the previous theorem can be inherited from the base space $X$. We say that a Stein manifold $X$ is \textit{holomorphically (volume) flexible} if the complete (volume-preserving) vector fields span the tangent space $T_xX$ at every $x\in X$ (see \cite[\S 6]{5auth}). Clearly, $X$ is holomorphically (volume) flexible as soon as it is flexible at a single point $x\in X$ and $\operatorname{Aut}(X)$ (resp. $\operatorname{Aut}_\omega(X)$) acts transitively. Moreover, holomorphic (volume) flexibility implies that $\operatorname{Aut}(X)$ (resp. $\operatorname{Aut}_\omega(X)$) acts transitively on $X$. \begin{lemma}\label{L5} If $X$ is holomorphically flexible, then $\operatorname{Aut}(\overline{X})$ acts transitively. Moreover, if $X$ is holomorphically volume flexible at a point $x\in X$ and $\operatorname{Aut}_\omega(X)$ acts transitively, then $\operatorname{Aut}_{\bar\omega}(\overline{X})$ acts transitively. \end{lemma} \begin{proof} For simplicity we prove the first statement; the second is proven in an exactly analogous manner. Let $\bar{x}_0=(u_0,v_0,x_0)\in\overline{X}$ with $u_0v_0\neq 0$, and let us determine the orbit of $ \bar{x}_0$ under $ \operatorname{Aut}(\overline{X})$. Given a complete $\Theta\in\operatorname{VF}(X)$, by Equation \ref{flowoflift} we have, for each $t$, an automorphism of $\overline{X}$ of the form \begin{equation}\label{orbit} (u,v,x)\mapsto (u+tg(x,tv),v,\phi^{tv}(x)). \end{equation} The orbit of $\bar{x}_0$ must hence contain the hypersurface $\{ v=v_0 \}\subset\overline{X}$ (because, by flexibility, finite compositions of flows of complete fields act transitively on $X$), and analogously, since $u_0\neq 0$, the orbit contains $\{u=u_0\}\subset\overline{X}$. Let $(u_1,v_1,x_1)\in \overline{X}$ be another point with $u_1v_1\neq 0$. Note that the nonconstant function $f:X\to\mathbb{C}$ can omit at most one value $\xi$.
Indeed, by flexibility there is a complete vector field along whose flow through $x_0$ the function $f$ is nonconstant; composing $f$ with this flow gives a nonconstant entire function, which by Picard's theorem omits at most one value. Of course $\xi$ cannot be $0$ (as $f$ vanishes on $X_0$), nor can it equal $u_0v_0$ or $u_1v_1$, which are values of $f$. Follow the orbit of $\bar x_0$ along the hypersurface $\{ u=u_0 \}\cap \overline{X}$ until $(u_0,v_1,x')$, then along $\{ v=v_1 \}\cap \overline{X}$ until $(u_1,v_1,x_1)$ (if $\xi=u_0v_1$, replace $v_1$ by $2v_1$ in the intermediate step). So the orbit contains all points $(u,v,x)\in\overline{X}$ with $uv\neq 0$, and by Equation \ref{orbit} also those with either $u$ or $v$ nonzero. Consider now a point of the form $(0,0,x_0)\in\overline{X}$. Since $x_0\in X_0$ and $X_0$ is smooth and reduced, $d_{x_0}f\neq 0$, so there is a tangent vector on which $d_{x_0}f$ evaluates to a nonzero number; since $X$ is flexible, it can be taken of the form $\Theta(x_0)$ for a complete field $\Theta$. Lifting $\Theta$ and taking the time-$1$ map of the flow \ref{flowoflift}, we obtain an automorphism of $\overline{X}$ sending $(0,0,x_0)$ to $(g(x_0,0),0,x_0)$. Since \[ g(x_0,0)=\lim_{t\to 0}\frac{f(\phi^t(x_0))-f(\phi^0(x_0))}{t}=(f\circ \phi)'(0)=d_{x_0}f(\Theta(x_0))\neq 0, \] this automorphism moves $(0,0,x_0)$ to a point with nonzero $u$ coordinate, and we are done. \end{proof} In particular, by the Andersén-Lempert theorem (see e.g. \cite[\S 2.B]{KK-state}), the assumptions hold if $X$ has the $\omega$-VDP and is of dimension $n\geq 2$. \section{Examples and applications}\label{sec:last} The following theorem summarizes the previous discussion and gives conditions under which the suspension over a manifold has a VDP. \begin{theorem}\label{th:4.1} Let $X$ be a Stein manifold of dimension $n\geq 2$ such that $H^n(X)=H^{n-1}(X)=0$. Let $\omega$ be a volume form on $X$ and suppose that $\operatorname{Aut}_\omega(X)$ acts transitively.
Assume that there is a finite collection $S$ of semi-compatible pairs $(\alpha,\beta)$ of volume-preserving vector fields such that for some $x_0\in X$, $\{\alpha(x_0)\wedge\beta(x_0);(\alpha,\beta)\in S\}$ spans $\wedge^2T_{x_0}X$. Let $f:X\to \mathbb{C}$ be a nonconstant holomorphic function with smooth reduced zero fiber $X_0$ and $\tilde{H}^{n-2}(X_0)=0$. Then the suspension $\overline{X}\subset \mathbb{C}^2_{u,v}\times X$ of $X$ along $f$ has the VDP with respect to a natural volume form $\bar{\omega}$ satisfying $d(uv-f)\wedge\bar{\omega}=(du\wedge dv \wedge \omega)|_{\overline{X}}$. \end{theorem} \begin{proof} The spanning condition on $\wedge^2 TX$ implies holomorphic volume flexibility at $x_0$. So by Lemma \ref{L5}, $\operatorname{Aut}_{\bar\omega}(\overline{X})$ acts transitively, and therefore Theorem \ref{Jo} may be applied. By assumption and Proposition \ref{P2}, the topological condition of Proposition \ref{crit} is also trivially satisfied. \end{proof} \begin{corollary} Let $n\geq 1$ and let $f\in\mathcal{O}(\mathbb{C}^n)$ be a nonconstant holomorphic function with smooth reduced zero fiber $X_0$, such that $\tilde{H}^{n-2}(X_0)=0$ if $n\geq 2$. Then the hypersurface $\overline{\mathbb{C}^n_f}=\{uv=f(z_1,\dots,z_n)\}\subset\mathbb{C}^{n+2}$ has the volume density property with respect to the form $\bar{\omega}$ satisfying $d(uv-f)\wedge\bar{\omega}=du\wedge dv \wedge dz_1\wedge\dots\wedge dz_n$. \end{corollary} \begin{proof} If $n\geq 2$ this follows immediately from the previous theorem, since in $\mathbb{C}^n$ the pairs of standard derivations $(\partial_{z_i},\partial_{z_j})$ are semi-compatible and generate $\wedge^2 T\mathbb{C}^n$. If $n=1$, there are no semi-compatible pairs on $\mathbb{C}$, but it is possible to show the VDP directly. Given $\Theta\in\operatorname{VF}_\omega(\overline{\mathbb{C}_f})$ and a compact subset $K$ of $\overline{\mathbb{C}_f}$, we must find a finite Lie combination of complete volume-preserving fields approximating $\Theta$ on $K$.
Because of this approximation, we can reduce to the algebraic case, which is treated in \cite{KK-Zeit} by means of an explicit calculation of Lie brackets of the known complete fields $\Theta_u$, $\Theta_v$ and $h(u\partial_u-v\partial_v)$. \end{proof} Let $\phi:\mathbb{C}^{n-1}\to\mathbb{C}^{n}$ be a proper holomorphic embedding, and consider the closed subset $Z=\phi(\mathbb{C}^{n-1})\subset \mathbb{C}^{n}$. It is a standard result that every multiplicative Cousin problem in $\mathbb{C}^n$ is solvable, since $H^2(\mathbb{C}^n,\mathbb{Z})=0$. This implies that the divisor associated to $Z$ is principal: in other words, there exists a holomorphic function $f$ on $\mathbb{C}^{n}$ vanishing precisely on $Z$ and such that $df\neq 0$ on $Z$. We may therefore consider the suspension $\overline{\mathbb{C}^n_f}$ of $\mathbb{C}^{n}$ along $f$, which according to the above corollary must have the volume density property. The significance of this lies in the existence of non-straightenable embeddings. Recall that a proper holomorphic embedding $\phi:\mathbb{C}^{k}\hookrightarrow\mathbb{C}^n$ is said to be \textit{holomorphically straightenable} if there exists an automorphism $\alpha$ of $\mathbb{C}^n$ such that $\alpha(\phi(\mathbb{C}^k))=\mathbb{C}^k\times \{0\}^{n-k}$. The existence of non-tame sets in $\mathbb{C}^n$, combined with an interpolation theorem, implies that there exist, for each $k<n$, non-straightenable proper holomorphic embeddings $\phi:\mathbb{C}^{k}\hookrightarrow\mathbb{C}^n$; see \cite{Finterpol}. Note that proper holomorphic embeddings are the analytic analogue of polynomial embeddings, and that the ``classical'' algebraic situation is in sharp contrast to the holomorphic one: for every $n>2k+1$, polynomial embeddings $\phi:\mathbb{C}^{k}\hookrightarrow\mathbb{C}^n$ are algebraically straightenable (see \cite{Kaliman-AM}), the case of real codimension $2$ remaining notoriously open.
If the embedding $\phi$ is straightenable, then $\overline{\mathbb{C}^n_f}$ is clearly biholomorphic to $\mathbb{C}^{n+1}$, and a calculation shows that the form $\bar\omega$ is the standard one. So the result says something new only if $\phi$ is non-straightenable. Indeed, it is unknown whether $\overline{\mathbb{C}^n_f}$ is biholomorphic to $\mathbb{C}^{n+1}$. However, $\overline{\mathbb{C}^n_f}\times\mathbb{C}$ is biholomorphic to $\mathbb{C}^{n+2}$ (see \cite{Derksen-Kut}), and is therefore a potential counterexample to the holomorphic version of the important Zariski Cancellation Problem: if $X$ is a complex manifold of dimension $n$ and $X\times\mathbb{C}$ is biholomorphic to $\mathbb{C}^{n+1}$, does it follow that $X$ is biholomorphic to $\mathbb{C}^n$? Moreover, $\overline{\mathbb{C}^n_f}$ is diffeomorphic to complex affine space. This is best shown in the algebraic language of modifications, as follows. Given a triple $(X,D,C)$ consisting of a Stein manifold $X$, a smooth reduced analytic divisor $D$, and a proper closed complex submanifold $C$ of $D$, it is possible to construct the pseudo-affine modification of $X$ along $D$ with center $C$, denoted $\overline{X}$. It is the result of blowing up $X$ along $C$ and deleting the proper transform of $D$. We refer the interested reader to \cite{KZaffine} for a general discussion. In our situation we take $X=\mathbb{C}^n\times \mathbb{C}_u$, $D=\mathbb{C}^n\times \{0 \}$, and $C=Z\times \{ 0 \}=\phi(\mathbb{C}^{n-1})\times \{0\}$: it can be shown that in this case $\overline{X}$ is biholomorphic to $\overline{\mathbb{C}^n_f}$ (see \cite{KZaffine}).
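As a sanity check (added here for illustration; not part of the original text), the simplest suspension is itself an affine space:

```latex
% Simplest case: X = C_z and f(z) = z, so Z = {0} and the embedding is
% trivially straight.  The suspension is the smooth affine quadric
% {uv = z} in C^3, which is the graph of z = uv over C^2_{u,v}:
\[
  \overline{\mathbb{C}_z}=\{(u,v,z)\in\mathbb{C}^3 : uv=z\}
  \;\xrightarrow{\ \cong\ }\;\mathbb{C}^2_{u,v},
  \qquad (u,v,z)\mapsto(u,v),
\]
% and the defining relation d(uv - z) ∧ \bar\omega = du ∧ dv ∧ dz forces
% \bar\omega = ± du ∧ dv (depending on sign conventions), i.e. the standard
% volume form on C^2.
```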
We now invoke a general result giving sufficient conditions for a pseudo-affine modification to be diffeomorphic to affine space: since $Z$ is contractible, Proposition 5.10 from \cite{KK-Zeit} is directly applicable, and therefore the following holds: \begin{corollary} If $\phi:\mathbb{C}^{n-1}\to\mathbb{C}^{n}$ is a proper holomorphic embedding, then the suspension $\overline{\mathbb{C}^n_f}$ along the function $f$ defining the subvariety $\phi(\mathbb{C}^{n-1})$ is diffeomorphic to $\mathbb{C}^{n+1}$ and has the volume density property with respect to a natural volume form $\bar\omega$. Moreover, $\overline{\mathbb{C}^n_f}\times \mathbb{C}$ is biholomorphic to $\mathbb{C}^{n+2}$, and is therefore a potential counterexample to the Zariski Cancellation Problem if $\phi$ is not straightenable. \end{corollary} Recall a conjecture of A. T\'{o}th and Varolin \cite{Toth-Var} asking whether a complex manifold which is diffeomorphic to $\mathbb{C}^n$ and has the density property must be biholomorphic to $\mathbb{C}^n$. It is also unknown whether there are contractible Stein manifolds with the volume density property which are not biholomorphic to $\mathbb{C}^n$, and our construction provides a new potential counterexample. As pointed out in Section \ref{sec:intro}, this is to our knowledge the first non-algebraic one. To conclude, we give another example of an application. Consider a proper holomorphic embedding $\mathbb{D}\hookrightarrow\mathbb{C}^2_{x,y}$ (that such an embedding exists is a classical theorem of K. Kasahara and T. Nishino; see e.g. \cite{Stehle}), and let $f$ generate the ideal of functions vanishing on the embedded disk, as above. Then $M=\overline{\mathbb{C}^2_f}\subset\mathbb{C}^2_{u,v}\times\mathbb{C}^2_{x,y}$ admits a $\mathbb{C}^*$-action, namely \[ \lambda\cdot(u,v,x,y)=(\lambda u,\lambda^{-1}v, x,y), \] whose fixed point set is biholomorphic to $\mathbb{D}$.
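The assertion about the fixed-point set is elementary; here is the computation, spelled out for convenience (not in the original):

```latex
% The action preserves M = {uv = f(x,y)}: for every lambda in C^*,
%   (lambda u)(lambda^{-1} v) = uv = f(x,y).
% A point is fixed iff lambda u = u and lambda^{-1} v = v for all lambda,
% i.e. u = v = 0, which forces f(x,y) = 0.  Hence
\[
  \operatorname{Fix}(\mathbb{C}^*)
  =\{(0,0,x,y)\in M\}
  =\{(x,y)\in\mathbb{C}^2_{x,y} : f(x,y)=0\}
  \;\cong\;\mathbb{D}.
\]
```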
Therefore, the action cannot be linearizable, i.e., there is no holomorphic change of coordinates after which the action is linear: the fixed point set of a linear $\mathbb{C}^*$-action is a linear subspace, which is never biholomorphic to $\mathbb{D}$. Recall the problem of linearization of holomorphic $\mathbb{C}^*$-actions on $\mathbb{C}^k$ (see e.g. \cite{Derksen-Kut}): for $k=2$, every action is linearizable; there are counterexamples for $k\geq 4$; and the problem remains open for $k=3$. If $M$ were biholomorphic to $\mathbb{C}^3$, this would give a negative answer. Otherwise, it would resolve in the negative the T\'{o}th-Varolin conjecture mentioned above. By a result of J. Globevnik \cite{Globevnik}, it is also possible to embed arbitrarily small perturbations of a polydisc in $\mathbb{C}^n$, for any $n\geq 1$, into $\mathbb{C}^{n+1}$; by the same argument, we obtain, for any $n\geq 3$, non-algebraic manifolds that are diffeomorphic to $\mathbb{C}^{n}$ with the volume density property. \end{document}
\begin{document} \title{\Large\bfseries Ruin probability for discrete risk processes} \author{\itshape Ivana Ge\v{c}ek Tu{\dj}en\\ \\ Department of Mathematics\\ University of Zagreb, Zagreb, Croatia\\ Email: [email protected] } \date{} \maketitle \begin{abstract} \noindent We study a discrete time risk process modelled by a skip-free random walk and derive results connected to the ruin probability, such as crossing a fixed level, for this kind of process. We use a method relying on the classical ballot theorems to derive these results and compare them to those obtained for the continuous time version of the risk process. We further generalize this model by adding a perturbation and, still relying on the skip-free structure of the process, we generalize the previous results on crossing a fixed level to the generalized discrete time risk process. \noindent \textit{Mathematics Subject Classification:} Primary 60C05; Secondary 60G50.\\ \noindent \textit{Keywords and phrases:} skip-free random walk, ballot theorem, Kemperman's formula, level crossing, ruin probability \end{abstract} \section{Introduction}\label{sec-1} In the classical ruin theory one usually considers the risk process \begin{equation}\label{1:eq.1} X(t)=ct-\sum_{i=1}^{N(t)}Y_i~,~~t\geq 0~,\end{equation} where $c>0$ represents the premium rate (we assume that incoming premiums arrive from the policy holders), $(Y_i~:~i\in {\mathbb N})$ is an i.i.d. sequence of nonnegative random variables with common distribution $F$ (usually representing the policy holders' claims) and $(N(t)~:~t\geq 0)$ is a homogeneous Poisson process of rate $\lambda>0$, independent of $(Y_i~:~i\in {\mathbb N})$.
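For a quick numerical illustration of the process (\ref{1:eq.1}) — a sketch added here, not part of the original text, with purely illustrative parameter values — one can simulate sample paths and observe the positive drift $c-\lambda\,\mathbb{E}(Y_1)$ guaranteed by the net profit condition $c>\lambda\,\mathbb{E}(Y_1)$:

```python
import random

def terminal_value(c, lam, claim_mean, horizon, rng):
    """Simulate X(T) = c*T - sum_{i<=N(T)} Y_i for the classical risk process.

    N is a Poisson process of rate lam, simulated via i.i.d. exponential
    inter-arrival times; the claims Y_i are exponential with mean claim_mean.
    All parameter values are illustrative, not taken from the paper.
    """
    t, x = 0.0, 0.0
    while True:
        w = rng.expovariate(lam)        # waiting time until the next claim
        if t + w > horizon:             # no further claim before the horizon
            return x + c * (horizon - t)
        t += w
        x += c * w - rng.expovariate(1.0 / claim_mean)

# Under the net profit condition c > lam * E[Y], the long-run drift X(T)/T
# approaches c - lam * E[Y]  (here 2.0 - 1.0 * 1.0 = 1.0).
rng = random.Random(42)
drift = sum(terminal_value(2.0, 1.0, 1.0, 1000.0, rng)
            for _ in range(20)) / (20 * 1000.0)
```

By the law of large numbers the estimate concentrates near $c-\lambda\,\mathbb{E}(Y_1)=1$; ruin corresponds to a path of $u+X(t)$ dipping below zero, which under this condition happens with probability strictly less than one.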
This basic process was generalized by many authors, and we follow the approach used in \cite{HPSV1}, \cite{HPSV2} and \cite{IGT}, which means that in the continuous time case one observes the generalized risk process \begin{equation}\label{1:eq.2}X(t)=ct-C(t)+Z(t)~,~~t\geq 0~,\end{equation} where $(C(t)~:~t\geq 0)$ is a subordinator and $(Z(t)~:~t\geq 0)$ an independent spectrally negative L\'{e}vy process. The overall process $X$ then also has the nice structure of a spectrally negative L\'evy process, and results from fluctuation theory may be used to analyze it. One of the main questions studied in this model is that of the \emph{ruin probability}, given some initial capital $u>0$, i.e. \begin{equation}\label{1:eq.3}\vartheta(u)={\mathbb P}(u+X(t)<0~,~~\textrm{for some $t>0$})~.\end{equation} Furthermore, the distribution of the supremum of the dual process $\widehat{X}=-X$ is of main interest, as well as the question directly connected to it, i.e. the first passage over some fixed level. Results for the above questions can be obtained using different approaches, such as decomposing the supremum of the dual of the generalized risk process $\widehat{X}$ or the Laplace transform approach; in \cite{HPSV2} the authors use the famous Tak\'acs formula in the continuous time case.\\ \\ More precisely, for $m$ independent subordinators $C_1,\ldots,C_m$ without drift and with L\'evy measures $\Lambda_1,\ldots,\Lambda_m$ such that $\mathbb{E}\,(C_i(1))<\infty$, $i=1,2,\ldots,m$, one observes the risk process \begin{equation}\label{1:eq.4}X(t)=ct-C(t)~,~~t\geq 0~,\end{equation} for $C=C_1+\cdots +C_m$ and $c>\mathbb{E}\,(C_1(1))+\cdots +\mathbb{E}\,(C_m(1))$ (the standard net profit assumption).
In \cite{HPSV2} the following result was achieved: \begin{equation}\label{1:eq.5}{\mathbb P}(\widehat{\tau}_0<\infty,\widehat{X}(\widehat{\tau}_0-)\in dy,\widehat{X}(\widehat{\tau}_0)\in dx,\Delta C(\widehat{\tau}_0)=\Delta C_i(\widehat{\tau}_0))=\frac{1}{c}\Lambda_i(-y+dx)dy~,\end{equation} for $x>0$, $y\leq 0$ and $\widehat{X}(0)=0$, $\widehat{\tau}_0=\inf\{t\geq 0~:~\widehat{X}(t)>0\}$, $\Delta C(t)=C(t)-C(t-)$, $\Delta C_i(t)=C_i(t)-C_i(t-)$, $i=1,\ldots ,m$.\\ \\ Here the authors interpret the processes $C_i$ as independent risk portfolios competing to cause ruin, and the above formula gives the probability that the ruin will be caused by one individual portfolio. The authors further generalize the problem by adding a L\'evy process $Z$ with no positive jumps to the model (this is called the perturbed model) and obtain a similar formula.\\ \\ The focus of this paper is rather on the method which led to the above result, namely, the aforementioned Tak\'acs ``magic'' formula. In the continuous time case, this formula can be expressed in the following way (for details see \cite{Tak}). \begin{lemma} For the process $X$ defined as in (\ref{1:eq.4}), with $\widehat{X}(0)=0$, \begin{equation}\label{1:eq.6} {\mathbb P}\big(\sup_{0\leq s\leq t}\widehat{X}(s)>0~|~\widehat{X}(t)\big)=1-\big(-\frac{\widehat{X}(t)}{ct}\big)~. \end{equation} \end{lemma} This result arises naturally in the discrete time case in view of the well-known ballot theorems, so the main aim of this paper is to discover how and which results for the ruin probability can be obtained, using the above method, when we observe a discrete time risk process. To establish the connection with the continuous time model, we will model our discrete time risk process by an \emph{upwards skip-free} (or \emph{right continuous}) random walk, i.e. a random walk with increments less than or equal to $1$.
These random walks can be viewed as a discrete version of the spectrally negative L\'evy processes, i.e. the processes with no positive jumps. The main connection between the discrete and the continuous model, which is in the focus of this paper, is that in both cases we are able to control the jumps of the process on one side. More precisely, skip-free random walks can cross levels from one side with jumps of any size, while on the other side they can only take unit jumps.\\ \\ In this setting we will prove the main results of the paper, and the main tool will be the following result (details for this type of result can be found in \cite{Tak} or \cite{Dwa}). \begin{lemma} Let $\xi_1,\ldots,\xi_n$ be cyclically interchangeable random variables with values in $\{\ldots,-3,-2,-1,0,1\}$. Let $R(i)=\xi_1+\cdots+\xi_i$, $1\leq i\leq n$, $R(0):=0$. Then for each $0\leq k\leq n$ \begin{equation}\label{1.eq.7} {\mathbb P}(R(i)>0~\textrm{for each $1\leq i\leq n$}~|~R(n)=k)=\frac{k}{n}~. \end{equation} \end{lemma} Using the skip-free structure of the random walks that model our risk process, Lemma 1.2 and some auxiliary results following from the ballot theorems (such as \emph{Kemperman's formula}, which will be explained in detail in Section 2), we will derive the following main results of this paper. \begin{thm} Let $C^1$ and $C^2$ be two independent random walks with nondecreasing increments and $\mu^i:=\mathbb{E}\,(C^i(1))<\infty$, $i=1,2$. Let $C:=C^1+C^2$, $\mu=\mu^1+\mu^2$ and \begin{equation}\label{1:eq.8} X(n)=n-C(n)~,~~n\geq 0~, \end{equation} and let us assume that $\mathbb{E}\, (X(1))>0$, i.e. $\mu<1$.
Then \begin{equation}\label{1:eq.9} {\mathbb P}(\widehat{\tau_0}<\infty,\widehat{X}(\widehat{\tau_0}-1)=y, \widehat{X}(\widehat{\tau_0})\geq x,\Delta C^i(\widehat{\tau_0})=x+1-y)={\mathbb P}(C^i(1)=x+1-y)~, \end{equation} for $y\leq 0$, $x>0$, $\widehat{\tau}_0=\inf\{n\geq 0~:~\widehat{X}(n)>0\}$, $\Delta C^i(n)=C^i(n)-C^i(n-1)$ (for $i=1,2$) and $\Delta C(n)=C(n)-C(n-1)$, $n\geq 1$. \end{thm} The above result will be generalized to $m$ independent random walks with nondecreasing increments $C^1,\ldots,C^m$, $m\in{\mathbb N}$, and $C=C^1+\cdots +C^m$ in the standard way.\\ \\ When we generalize the above model by adding a perturbation modelled by an \emph{upwards skip-free} (or \emph{right continuous}) random walk $Z$, i.e. a random walk with increments less than or equal to $1$, we observe the perturbed discrete time risk process \begin{equation}\label{1:eq.10} X(n)=-C(n)+Z(n)~,~~n\geq 0~. \end{equation} Under the assumption that $\mathbb{E}\,(X(1))>0$ (so $\widehat{X}\to -\infty$) and with the same notation as in the previous theorem, we will derive the following result. \begin{thm}\label{1.eq.11} \begin{align*}{\mathbb P}(\widehat{\tau}_0<\infty,&\widehat{X}(\widehat {\tau}_0-1)=y,\widehat{X}(\widehat{\tau}_0)\geq x,\textrm{the new supremum was caused by the process $C$})\\ &={\mathbb P}(C(1)\geq x+1-y)\cdot{\mathbb P}(Z(1)=-1)+{\mathbb P}(C(1)\geq x-y)\cdot{\mathbb P}(Z(1)\geq 0)~.\end{align*} \end{thm} This result can also be generalized so that we observe the probability that the random walk $C^i$ causes the ruin ($i\in\{1,\ldots,m\}$), again in the standard way. \section{Auxiliary results}\label{sec-2} \begin{defn} Let $(S(n)~:~n\geq 0)$ be a random walk with integer-valued increments $(Y(i)~:~i\geq 1)$, i.e. $S(n)=\sum_{i=1}^nY(i)$, $n\geq 0$, $S(0)=0$. We say that $S$ is an \emph{upwards skip-free} (or \emph{right continuous}) random walk if ${\mathbb P}(Y(i)\leq 1)=1$ (i.e.
its increments take values greater than $1$ with probability zero).\\ If ${\mathbb P}(Y(i)\geq -1)=1$, we say that $S$ is a \emph{downwards skip-free} (or \emph{left continuous}) random walk. \end{defn} To prove the results for the ruin probability for the skip-free class of random walks we will need some auxiliary results. Inspired by the approach used in \cite{HPSV2} for the continuous time case (i.e. spectrally negative L\'evy processes), we will use the results following from the famous \emph{ballot theorems}, the first one dating back to 1887 and formulated by \emph{Bertrand}. More precisely, let us assume that there are two candidates in a voting process with $n$ voters. Candidate $A$ scores $a$ votes, winning over candidate $B$, who scores $b$ votes, $a\geq b$. Then the probability that throughout the counting the number of votes registered for $A$ is always greater than the number of votes registered for candidate $B$ is equal to $\frac{a-b}{a+b}=\frac{a-b}{n}$. This result was further generalized by \emph{Barbier} and proved in that generalized form by \emph{Aeppli}. Later this result was also proved by \emph{Dvoretzky} and \emph{Motzkin} using the \emph{cyclic lemma}, an approach similar to the one followed in this paper.\\ \\ The main property that lies at the heart of these types of theorems is some kind of cyclic structure. \begin{defn} For random variables $\xi_1,\ldots,\xi_n$ we say that they are \emph{interchangeable} if for each $(r_1,\ldots ,r_n)\in {\mathbb R}^n$ and all permutations $\sigma$ of $\{1,2,\ldots ,n\}$, \begin{equation}\label{2:eq.1}{\mathbb P}(\xi_i\leq r_i~\textrm{for each $1\leq i\leq n$}~)={\mathbb P}(\xi_i\leq r_{\sigma(i)}~\textrm{for each $1\leq i\leq n$}~)~. \end{equation} For $\xi_1,\ldots ,\xi_n$ we say that they are \emph{cyclically interchangeable} if (\ref{2:eq.1}) is valid for each cyclic permutation $\sigma$ of $\{1,2,\ldots ,n\}$.
\end{defn} In other words, the random variables are cyclically interchangeable if their distribution law is invariant under cyclic permutations, and interchangeable (in some literature, for example \cite{Dwa}, this property is also called \emph{exchangeable}) if it is invariant under all permutations. From the definition it is clear that interchangeable variables are also cyclically interchangeable, while the converse is not true.\\ \\ One version of the ballot theorem states that for a random walk $R$ with interchangeable and non-negative increments which starts at $0$ (i.e. $R(0)=0$), the probability that $R(m)<m$ for each $m=1,2,\ldots,n$, conditionally on $R(n)=k$, is equal to $1-\frac{k}{n}$. More precisely, for us, the following result will play the key role. A result of this type was proved independently by \emph{Dwass} (for cyclically interchangeable random variables) and by \emph{Tak\'acs} (in a less general case, for interchangeable random variables) in 1962, both papers appearing in the same issue of the Annals of Mathematical Statistics; for details see \cite{Tak}, \cite{Dwa}, and for a historical overview of the named results see \cite{AB}. \begin{lemma} Let $\xi_1,\ldots,\xi_n$ be cyclically interchangeable random variables with values in the set $\{\ldots,-3,-2,-1,0,1\}$. Let $R(i)=\xi_1+\cdots+\xi_i$, $1\leq i\leq n$, $R(0)=0$. Then for each $0\leq k\leq n$ \begin{equation}\label{2.eq.1} {\mathbb P}(R(i)>0~\textrm{for each $1\leq i\leq n$}~|~R(n)=k)=\frac{k}{n}~. \end{equation} \end{lemma} Let us notice that from Lemma 2.3 it follows that if we have a skip-free random walk and we know that its position at some instant $n$ is some $k$, we are able to calculate the exact probability that this random walk stayed below level $0$ (or above level $0$, depending on which skip-free random walk we observe, the right or the left continuous one), and that probability is equal to $\frac{k}{n}$.
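Since i.i.d. increments are in particular cyclically interchangeable, the conditional probability of Lemma 2.3 can be verified by exhaustive enumeration for small $n$. The following sketch (the increment law on $\{-2,-1,0,1\}$ is an arbitrary choice for this check) computes the exact conditional probability with rational arithmetic:

```python
from fractions import Fraction
from itertools import product

# arbitrary increment law on {-2, -1, 0, 1}; values never exceed 1, as required
VALUES = (-2, -1, 0, 1)
PROBS = dict(zip(VALUES,
                 (Fraction(1, 8), Fraction(1, 8), Fraction(1, 4), Fraction(1, 2))))

def conditional_stay_positive(n, k):
    """Exact P(R(i) > 0 for all 1 <= i <= n | R(n) = k), computed by summing
    over all increment sequences of length n."""
    p_event = p_cond = Fraction(0)
    for xi in product(VALUES, repeat=n):
        p = Fraction(1)
        for v in xi:
            p *= PROBS[v]
        partials, s = [], 0
        for v in xi:
            s += v
            partials.append(s)
        if partials[-1] == k:                     # condition on R(n) = k
            p_cond += p
            if all(s > 0 for s in partials):      # R(i) > 0 for all i
                p_event += p
    return p_event / p_cond

# Lemma 2.3 predicts k/n
for n, k in [(4, 2), (5, 3), (6, 1)]:
    assert conditional_stay_positive(n, k) == Fraction(k, n)
```

If increments larger than $1$ are allowed, the identity fails, in line with the necessity of the skip-free assumption.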
It is also important for our problem to mention that the assumptions used in the above lemma cannot be removed: it is necessary that the variables take values in ${\mathbb Z}$ and that they are bounded from one side, i.e. that we can control the jumps on one side.\\ \\ To prove Lemma 2.3 we need the following result; again, for details see \cite{Tak}. \begin{lemma} Let $\varphi(u)$, $u=0,1,2,\ldots$ be a nondecreasing function for which $\varphi(0)=0$ and $\varphi(t+u)=\varphi(t)+\varphi(u)$, for $u=0,1,2,\ldots$, where $t\in{\mathbb N}$. Define $$\delta(u)=\left\{ \begin{array}{ll} 1, & \hbox{$v-\varphi(v)>u-\varphi(u)$ for $v>u$;} \\ 0, & \hbox{otherwise.} \\ \end{array} \right.$$ Then \begin{equation}\label{2:eq.3}\sum_{u=1}^{t}\delta(u)=\left\{ \begin{array}{ll} t-\varphi(t), & \hbox{$0\leq\varphi(t)\leq t$;} \\ 0, & \hbox{$\varphi(t)\geq t$.} \\ \end{array} \right.\end{equation} \end{lemma} Let us now prove Lemma 2.3.\\ \\ We observe the random variables $\gamma_1:=1-\xi_1$, $\gamma_2:=1-\xi_2$,$\ldots$ (instead of $\xi_1$,$\xi_2$,$\ldots$) and the random walk $R$ defined as $R(i)=\gamma_1+\ldots+\gamma_i$, $i\geq 1$, $R(0)=0$. It is a nondecreasing random walk and its increments $\gamma_i$ are integer-valued and cyclically interchangeable variables. We will show that $${\mathbb P}(R(u)\leq u~:~0\leq u\leq n|R(n))=\left\{ \begin{array}{ll} 1-\frac{R(n)}{n}, & \hbox{$0\leq R(n)\leq n$;} \\ 0, & \hbox{\textrm{otherwise.}} \\ \end{array} \right .$$ First we associate a new process $(R^{*}(u)~:~0\leq u<\infty)$ on $(0,\infty)$ to the process $(R(u)~:~0\leq u\leq n)$ such that $R^{*}(u)=R(u)$, for $0\leq u\leq n$ and $R^{*}(n+u)=R^{*}(n)+R^{*}(u)$, for $u\geq 0$.
We define $$\delta(u)=\left\{ \begin{array}{ll} 1, & \hbox{if $v-R^{*}(v)\geq u-R^{*}(u)$ for each $v\geq u$;} \\ 0, & \hbox{otherwise.} \\ \end{array} \right.$$ Then $\delta(u)$ is a random variable and has the same distribution for each $u\geq 0$.\\ \\ Now we have \begin{align*} &{\mathbb P}(R(u)\leq u~,~0\leq u\leq n|R(n))={\mathbb P}(R^{*}(u)\leq u~,~u\geq 0|R(n))\\ &=\mathbb{E}\,[1_{\{v-R^{*}(v)\geq 0~,~\forall v\geq 0\}}|R(n)]=\mathbb{E}\,[\delta(0)|R(n)]=\\ &=\frac{1}{n}\cdot\sum_{u=1}^{n}\mathbb{E}\,[\delta(u)|R(n)]=\mathbb{E}\,[\frac{1}{n}\cdot\sum_{u=1}^{n}\delta(u)|R(n)]\\ &=\left\{ \begin{array}{ll} 1-\frac{R(n)}{n}, & \hbox{$0\leq R(n)\leq n$;} \\ 0, & \hbox{otherwise,} \\ \end{array} \right. \end{align*} using the fact that, conditionally on the position of $R(n)$, the random variables $\delta(u)$, $u\geq 0$, are identically distributed, and using the result of Lemma 2.4. $\blacksquare$\\ \\ This proof follows the approach used in \cite{Tak} and is also suitable for the continuous time risk process. But this type of result can also be proved following a slightly different approach, in the same way the classic ballot theorems were proved, i.e. using a combinatorial formula. More precisely, we can use the following result; for a similar approach see \cite{Lam}. \begin{lemma}(\textbf{combinatorial formula}) Let $(x_1,\ldots ,x_n)$ be a finite sequence of integers with values greater than or equal to $-1$ and let $\sum_{i=1}^{n} x_i=-k$. Let $\sigma_i(x)$ be the cyclic permutation of $x$ which starts with $x_i$, i.e. $\sigma_i(x)=(x_i,x_{i+1},\ldots ,x_n,x_1,\ldots ,x_{i-1})$, $i\in\{1,2,\ldots ,n\}$. Then there are exactly $k$ different indices $i\in\{1,2,\ldots ,n\}$ such that $$\sum_{l=1}^{j}(\sigma_i(x))_l> -k~,~~\textrm{for each}~j=1,\ldots ,n-1$$ and $$\sum_{l=1}^{n}(\sigma_i(x))_l=-k~,$$ i.e.
there are exactly $k$ different cyclic permutations $\sigma_i(x)$ of the sequence $x$ such that the first partial sum of $\sigma_i(x)$ equal to $-k$ is the sum of all members of the sequence. \end{lemma} \textit{Proof.}\\ \\ We observe the partial sums $s_j=\sum_{i=1}^j x_i$, $1\leq j\leq n$, $s_0:=0$, and find the lowest one; let that be $s_m$ (i.e. $m$ is the lowest index such that $s_m=\min_{1\le j \le n} s_j$). Now we take the cyclic permutation $\sigma_{m+1}(x)$, i.e. the one that starts with $x_{m+1}$: $\sigma_{m+1}(x)=(x_{m+1},x_{m+2},\ldots ,x_m)$. The overall sum of the sequence is $-k$, so $\sigma_{m+1}(x)$ hits $-k$ for the first time at the time instant $n$. After this cyclic relabelling we may thus assume that the sequence itself first hits $-k$ at time $n$; for $j=1,2,\ldots ,k$ let $t_j$ be the first time of hitting the level $-j$, i.e. $t_j:=\min\{i\geq 1~:~s_i=-j\}$. Now we can see that $\sigma_i(x)$ hits $-k$ for the first time at the time instant $n$ if and only if $i=t_j+1$ for some $j\in\{1,2,\ldots ,k\}$ (taken cyclically, so that $t_k+1=n+1$ is identified with $1$), which proves our formula. $\blacksquare$\\ \\ Let us now observe the random walk $R(j)=\sum_{i=1}^{j} \xi(i)$, $R(0)=0$, with cyclically interchangeable increments $(\xi(1),\ldots ,\xi(n))$. Let $T_{-k}$ be the first time that $R$ reaches the level $-k$ and $T^{(i)}_{-k}$ the first time when the random walk with increments $\sigma_i(\xi)$ reaches $-k$. Using Lemma 2.5, we have \begin{align*} n\cdot{\mathbb P}(T_{-k}=n|R(n)=-k)&=\sum_{i=1}^{n} \mathbb{E}\,[1_{\{T_{-k}=n\}}|R(n)=-k]\\ &=\mathbb{E}\,\big(\sum_{i=1}^{n}1_{\{T_{-k}=n\}}|R(n)=-k\big)\\ &=\mathbb{E}\,\big(\sum_{i=1}^{n}1_{\{T^{(i)}_{-k}=n\}}|R(n)=-k\big)\\ &=k~, \end{align*} where in the second line from the end we used that the increments of the random walk $R$ are cyclically interchangeable, and in the last line the combinatorial formula, i.e. the fact that there are exactly $k$ permutations of the increments of the random walk $R$ which hit the level $-k$ for the first time at the time instant $n$.
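Both the rotation count and the resulting hitting-time identity can be checked by brute force. In the hypothetical sketch below the test sequence and the increment distribution are arbitrary choices, and exact rational arithmetic is used so that the identity $n\,{\mathbb P}(T_{-k}=n)=k\,{\mathbb P}(R(n)=-k)$ can be verified without rounding:

```python
from fractions import Fraction
from itertools import product

def good_rotations(x, k):
    """Count cyclic permutations of x (integers >= -1, sum(x) == -k) whose
    partial sums stay above -k before time n; the final sum is -k automatically."""
    n, count = len(x), 0
    for i in range(n):
        rot = x[i:] + x[:i]
        s, ok = 0, True
        for v in rot[:-1]:
            s += v
            if s <= -k:          # hits -k (or lower) too early
                ok = False
                break
        count += ok
    return count

# sum is -2 and the entries are >= -1, so exactly k = 2 rotations qualify
assert good_rotations([-1, 1, -1, 0, -1], 2) == 2

def kemperman_sides(n, k, values, probs):
    """Exact n*P(T_{-k} = n) and k*P(R(n) = -k) for i.i.d. increments >= -1."""
    lhs = rhs = Fraction(0)
    for xi in product(values, repeat=n):
        p = Fraction(1)
        for v in xi:
            p *= probs[v]
        s, first_hit = 0, None
        for j, v in enumerate(xi, 1):
            s += v
            if first_hit is None and s == -k:
                first_hit = j
        if s == -k:
            rhs += p
            if first_hit == n:   # T_{-k} = n forces R(n) = -k as well
                lhs += p
    return n * lhs, k * rhs

values = (-1, 0, 1)
probs = dict(zip(values, (Fraction(1, 2), Fraction(1, 4), Fraction(1, 4))))
lhs, rhs = kemperman_sides(5, 2, values, probs)
assert lhs == rhs
```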
This is \emph{Kemperman's formula} or \emph{the hitting time theorem}; since we will use it only for independent random variables, i.e. independent increments of the random walk, we rephrase it in a less general form than the one proved above. \begin{lemma} Let $R$ be an upwards skip-free random walk starting at $0$ (i.e. $R(0)=0$) and $\tau(k)\in\{0,1,2,\ldots\}$ the first time that the random walk $R$ hits the level $k>0$. Then \begin{equation}\label{2:eq.4}n\cdot{\mathbb P}(\tau(k)=n)=k\cdot{\mathbb P}(R(n)=k)~,~n\geq 1~. \end{equation} \end{lemma} \section{Main results for the discrete time ruin process} \label{sec-3} Let $C^1$ and $C^2$ be independent nondecreasing random walks, i.e. \begin{equation}\label{3:eq.1} C^i(n)=U^i(1)+\cdots +U^i(n)~,~~n\geq 1~, \end{equation} for \begin{equation}\label{3:eq.2} U^i(j)\sim \left( \begin{array}{ccccc} 0 & 1 & 2 & 3 & \ldots \\ p_0 & p_1 & p_2 & p_3 & \ldots \end{array} \right) \end{equation} for some $p_k\geq 0$, $k\geq 0$, $\sum_{k=0}^\infty p_k=1$ and $j\in\{1,2,\ldots\}$, $i=1,2$. We define \begin{equation}\label{3:eq.3} C=C^1+C^2~. \end{equation} Let us assume that $\mathbb{E}\, C^i(1)<\infty$ or, equivalently, $\mathbb{E}\, U^i(1)<\infty$, $i=1,2$.\\ \\ We define \emph{the discrete time risk process with unit drift} by \begin{equation}\label{3:eq.3a} X(n)=n-C(n)~,~~n\geq 0~, \end{equation} which means that $X$ is an upwards skip-free random walk. Let us further assume that, using the notation $\mu:=\mathbb{E}\, C(1)$, $\mu_i:=\mathbb{E}\, C^i(1)$, $i=1,2$, \begin{equation}\label{3:eq.4} \mathbb{E}\, X(1)=1-\mathbb{E}\, C(1)=1-\mu=1-(\mu_1+\mu_2)>0~.
\end{equation} For $\widehat{X}=-X$, we also define \begin{equation*}\widehat{S}(n):=\max_{0\leq s\leq n} \widehat{X}(s)~, \end{equation*} \begin{equation*} \widehat{S}(\infty):=\max_{s\geq 0} \widehat{X}(s) \end{equation*} and \begin{equation*}\widehat{\tau_x}=\inf\{n\geq 0~:~\widehat{X}(n)>x\}~, \end{equation*} the first time that the dual random walk $\widehat{X}$ crosses the level $x\in{\mathbb N}$, \begin{equation*} \Delta C^i(n)=C^i(n)-C^i(n-1)~~\textrm {for} ~~i=1,2 \end{equation*} and \begin{equation*} \Delta C(n)=C(n)-C(n-1)~, \end{equation*} $n\geq 1$, the jumps of the random walks $C^1$, $C^2$ and $C$. Using the linearity of the expectation, the fact that the increments of the random walk are independent and identically distributed, and the standard induction procedure, we can see that the following result is valid. \begin{lemma} \begin{equation}\label{3:eq.5}\mathbb{E}\,\big( \sum_{n=1}^{\infty} \mathcal{H}(n,\omega,\Delta C^i(n)(\omega))\big) =\mathbb{E}\,\big(\sum_{n=1}^{\infty}\int_{(0,\infty)} \mathcal{H}(n,\omega,\varepsilon)dF_i(\varepsilon)\big)~, \end{equation} where $\mathcal{H}$ is a non-negative function and $F_i$ the distribution function of the increments of the random walk $C^i$, $i=1,2$. \end{lemma} For $y\leq 0$ and $x>0$ we define $$\mathcal{H}(n,\omega,\varepsilon_i)=1_{\{\widehat{X}(n-1)=y,\widehat{S}(n-1)\leq 0\}}\cdot 1_{\{\varepsilon_i=x+1-y\}}~.$$ Then, using Lemma 3.1, it follows that \begin{align*} &\mathbb{E}\,\big( \sum_{n=1}^{\infty} \mathcal{H}(n,\omega,\Delta C^i(n)(\omega))\big)\\ &=\sum_{n=1}^{\infty} {\mathbb P}(\widehat{X}(n-1)=y,\widehat{S}(n-1)\leq 0,\Delta C^i(n)=x+1-y)\\ &={\mathbb P}(\widehat{\tau_0}<\infty,\widehat{X}(\widehat{\tau_0}-1)=y, \widehat{X}(\widehat{\tau_0})\geq x,\Delta C^i(\widehat{\tau_0})=x+1-y)~.
\end{align*} Let us mention that the inequality $\widehat{X}(\widehat{\tau_0})\geq x$ appearing in the last line results from the fact that in the discrete time case the components of the random walk $C$ may jump simultaneously (unlike in the continuous time case, when modelling the risk process by a spectrally negative L\'evy process). Since the components of the random walk $C$ are nondecreasing, they can only increase the supremum of the overall risk process, while the drift decreases it by one unit at each time instant, so we have $\Delta C^i(\widehat{\tau_0})=(x-y)+1$ in the last line.\\ \\ On the other hand, using Lemma 2.3 and Lemma 2.6, we have\\ \\ $\mathbb{E}\,\big(\sum_{n=1}^{\infty}\int_{(0,\infty)} \mathcal{H}(n,\omega,\varepsilon_i)dF_i(\varepsilon_i)\big)\\ \\=\sum_{n=1}^{\infty}\mathbb{E}\,\big(\int_{(0,\infty)} 1_{\{\widehat{X}(n-1)=y,\widehat{S}(n-1)\leq 0\}}\cdot 1_{\{\varepsilon_i=x+1-y\}}dF_i(\varepsilon_i)\big)\\ \\=\sum_{n=1}^{\infty}{\mathbb P}(\widehat{X}(n-1)=y,\widehat{S}(n-1)\leq 0)\cdot{\mathbb P}(C^i(1)=x+1-y)\\ \\=\sum_{n=1}^{\infty}{\mathbb P}(\widehat{S}(n-1)\leq 0|\widehat{X}(n-1)=y)\cdot{\mathbb P}(\widehat{X}(n-1)=y)\cdot{\mathbb P}(C^i(1)=x+1-y)\\ \\={\mathbb P}(C^i(1)=x+1-y)\cdot\sum_{n=1}^{\infty} {\mathbb P}(\max_{0\leq m\leq n-1} \widehat{X}(m)\leq 0|\widehat{X}(n-1)=y)\cdot{\mathbb P}(\widehat{X}(n-1)=y)\\ \\={\mathbb P}(C^i(1)=x+1-y)\cdot\sum_{n=1}^{\infty} {\mathbb P}(\widehat{X}(m)\leq 0~,~ \forall~ 0\leq m\leq n-1|\widehat{X}(n-1)=y)\cdot{\mathbb P}(\widehat{X}(n-1)=y)\\ \\={\mathbb P}(C^i(1)=x+1-y)\cdot\sum_{n=1}^{\infty}{\mathbb P}(C(m)\leq m~,~\forall~ 0\leq m\leq n-1|\widehat{X}(n-1)=y)\cdot{\mathbb P}(\widehat{X}(n-1)=y)\\ \\={\mathbb P}(C^i(1)=x+1-y)\cdot\sum_{n=1}^{\infty}{\mathbb P}(C(m)\leq m~,~ \forall~ 0\leq m\leq n-1|C(n-1)=y+(n-1))\cdot{\mathbb P}(\widehat{X}(n-1)=y)\\ \\={\mathbb P}(C^i(1)=x+1-y)\cdot\sum_{n=1}^{\infty} (1-\frac{y+(n-1)}{n-1})\cdot{\mathbb P}(\widehat{X}(n-1)=y)\\
\\={\mathbb P}(C^i(1)=x+1-y)\cdot\sum_{n=1}^{\infty} \frac{1}{n-1}\cdot(-y)\cdot{\mathbb P}(\widehat{X}(n-1)=y)\\ \\={\mathbb P}(C^i(1)=x+1-y)\cdot\sum_{n=1}^{\infty} \frac{1}{n-1}\cdot(n-1)\cdot{\mathbb P}(\widehat{\tau}(y)=n-1)\\ \\={\mathbb P}(C^i(1)=x+1-y)~.$\\ \\ Let us notice that in the last line we again used the fact that $\widehat{X}$ is a downwards skip-free random walk which can only take unit steps to go downwards, i.e. it has to hit each level $y=-k\leq 0$ it crosses, so ${\mathbb P}(\widehat{\tau}(y)<\infty)=1$ for $y\leq 0$.\\ \\ Let us further notice that the above result can be generalized to finitely many random walks $C^1,C^2,\ldots ,C^m$, for some $m\in{\mathbb N}$, in the same way. So we have the following result. \begin{thm} Let $C^1, C^2,\ldots ,C^m$, $m\in{\mathbb N}$, be independent random walks with nondecreasing increments (defined as in (\ref{3:eq.1}) and (\ref{3:eq.2})) and \begin{equation}\label{3:eq.7} X(n)=n-(C^1+\cdots +C^m)(n)~,~n\geq 0~. \end{equation} Let $\widehat{X}(0)=0$ for the dual of the random walk $X$ (i.e. $\widehat{X}=-X$) and let us assume that $1>\mu=\mu^1+\cdots +\mu^m$, for $\mu:=\mathbb{E}\, C(1)$ and $\mu^i:=\mathbb{E}\, C^i(1)$, $i=1,2,\ldots,m$. Then for $y\leq 0$ and $x>0$ \begin{equation}\label{3:eq.8}{\mathbb P}(\widehat{\tau_0}<\infty,\widehat{X}(\widehat{\tau_0}-1)=y, \widehat{X}(\widehat{\tau_0})\geq x,\Delta C^i(\widehat{\tau_0})=x+1-y)={\mathbb P}(C^i(1)=x+1-y)~. \end{equation} \end{thm} Summing over all $y\leq 0$, we derive the following result. \begin{cor} Under the assumptions of Theorem 3.2, \begin{equation}\label{3:eq.9}{\mathbb P}(\widehat{\tau_0}<\infty, \widehat{X}(\widehat{\tau_0})\geq x,\Delta C^i(\widehat{\tau_0})\geq x+1)={\mathbb P}(C^i(1)\geq x+1)~. \end{equation} \end{cor} Let us now look at \emph{the discrete risk model with the perturbation}, i.e.
the random walk $X$ such that \begin{equation}\label{3:eq.10} X(n)=-C(n)+Z(n)~,~ n\geq 1~,\end{equation} where $C$ is a random walk with nondecreasing increments and $Z$ an upwards skip-free random walk. In other words, we have \begin{equation}\label{3:eq.11} Z(1)=\xi_Z(i)\sim\left( \begin{array}{ccccc} \ldots & -2 & -1 & 0 & 1 \\ \ldots & q_2 & q_1 & q_0 & \rho \\ \end{array} \right) \end{equation} and \begin{equation}\label{3:eq.12} C(1)=\xi_C(i)\sim\left( \begin{array}{ccccc} 0 & 1 & 2 & 3 & \ldots \\ p_0& p_1 & p_2 & p_3 & \ldots \\ \end{array} \right)~~, \end{equation} for some $\rho,~q_j,~p_j\geq 0$ such that $\rho+\sum_{j=0}^\infty q_j=\sum_{j=0}^\infty p_j=1$. We assume that \begin{equation}\label{3:eq.13} \mathbb{E}\,(X(1))>0~, ~~\textrm{i.e.}~~\mathbb{E}\, C(1)<\mathbb{E}\, Z(1)~. \end{equation} $X$ is obviously an upwards skip-free random walk and the dual process, $\widehat{X}=-X$, is a downwards skip-free random walk such that $\widehat{X}\to -\infty$. Furthermore, we can rewrite $\widehat{X}$ so that $$\widehat{X}(n)=\sum_{i=1}^{n}\xi_{\widehat{X}}(i)=\sum_{i=1}^{n}(\xi_W(i)-1)=\sum_{i=1}^n \xi_W(i)-n=:W(n)-n~,~~n\geq 0~,$$ for $\xi_W(i):=\xi_{\widehat{X}}(i)+1$, so we have that ${\mathbb P}(W(1)=k)={\mathbb P}(\widehat{X}(1)=k-1)$, $k\geq 0$.\\ \\ Since in the discrete time case the random walks $C$ and $Z$ may jump at the same time, if $\widehat{X}$ was at some position $y\leq 0$ at the time instant just before it crossed the level $0$ and at a position of at least $x>0$ when it crossed the level $0$, the event \{$C$ caused the jump of the process $\widehat{X}$ over the level $0$\} can be written as $$\{\Delta C(\widehat{\tau}_0)\geq -y+1+x, \Delta Z(\widehat{\tau}_0) =-1\}\cup\{\Delta C(\widehat{\tau}_0)\geq -y+x, \Delta Z(\widehat{\tau}_0) \geq 0\}~.$$ So, for $y\leq 0$ and $x>0$ we have \begin{align*} &{\mathbb P}(\widehat{\tau}_0<\infty,\widehat{X}(\widehat {\tau}_0-1)=y,\widehat{X}(\widehat{\tau}_0)\geq x,\textrm{$C$ caused the jump of the process $\widehat{X}$
over the level $0$ })\\ &={\mathbb P}(\widehat{\tau}_0<\infty,\widehat{X}(\widehat{\tau}_0-1)=y,\widehat{X}(\widehat{\tau}_0)\geq x,\Delta C(\widehat{\tau}_0)\geq -y+1+x,\Delta Z(\widehat{\tau}_0)=-1)\\ &+{\mathbb P}(\widehat{\tau}_0<\infty,\widehat{X}(\widehat{\tau}_0-1)=y,\widehat{X}(\widehat{\tau}_0)\geq x,\Delta C(\widehat{\tau}_0)\geq -y+x,\Delta Z(\widehat{\tau}_0)\geq 0)~. \end{align*} Let us define $$\mathcal{H}(n,\omega,\varepsilon,\eta)=1_{\{\widehat{X}(n-1)=y,\widehat{S}(n-1)\leq 0\}}\cdot 1_{\{\varepsilon\geq x+1-y\}}\cdot 1_{\{\eta=-1\}}~.$$ Using Lemma 3.1, Lemma 2.3 and Lemma 2.6, we have \begin{align*} &{\mathbb P}(\widehat{\tau}_0<\infty,\widehat{X}(\widehat{\tau}_0-1)=y,\widehat{X}(\widehat{\tau}_0)\geq x,\Delta C(\widehat{\tau}_0)\geq -y+1+x,\Delta Z(\widehat{\tau}_0)=-1)\\ &=\sum_{n=1}^{\infty}{\mathbb P}(\widehat{X}(n-1)=y,\widehat{S}(n-1)\leq 0)\cdot{\mathbb P}(C(1)\geq x+1-y)\cdot{\mathbb P}(Z(1)=-1)\\ &={\mathbb P}(C(1)\geq x+1-y)\cdot{\mathbb P}(Z(1)=-1)\cdot\sum_{n=1}^{\infty}{\mathbb P}(\widehat{S}(n-1)\leq 0|\widehat{X}(n-1)=y)\cdot{\mathbb P}(\widehat{X}(n-1)=y)\\ &={\mathbb P}(C(1)\geq x+1-y)\cdot{\mathbb P}(Z(1)=-1)\cdot\sum_{n=1}^{\infty}{\mathbb P}(W(t)\leq t~,~0\leq t\leq n-1|\widehat{X}(n-1)=y)\\ & \cdot{\mathbb P}(\widehat{X}(n-1)=y)\\ &={\mathbb P}(C(1)\geq x+1-y)\cdot{\mathbb P}(Z(1)=-1)\cdot\sum_{n=1}^{\infty}(1-\frac{y+(n-1)}{n-1})\cdot{\mathbb P}(\widehat{X}(n-1)=y)\\ &={\mathbb P}(C(1)\geq x+1-y)\cdot{\mathbb P}(Z(1)=-1)\cdot\sum_{n=1}^{\infty}\frac{1}{n-1}\cdot(-y)\cdot{\mathbb P}(\widehat{X}(n-1)=y)\\ &={\mathbb P}(C(1)\geq x+1-y)\cdot{\mathbb P}(Z(1)=-1)\cdot\sum_{n=1}^{\infty}{\mathbb P}(\widehat{\tau}_{y}=n-1)\\ &={\mathbb P}(C(1)\geq x+1-y)\cdot{\mathbb P}(Z(1)=-1)~.\end{align*} In the result above we used the crucial argument that $\widehat{X}$ is a downwards skip-free random walk, which means that it can only use unit steps to go downwards, i.e.
it visits each level $y\leq 0$.\\ \\ We can apply the same calculation to the second summand, ${\mathbb P}(\widehat{\tau}_0<\infty,\widehat{X}(\widehat{\tau}_0-1)=y,\widehat{X}(\widehat{\tau}_0)\geq x,\Delta C(\widehat{\tau}_0)\geq -y+x,\Delta Z(\widehat{\tau}_0)\geq 0)$, so we have the following result. \begin{thm} Let $Z$ and $C$ be the random walks defined as in (\ref{3:eq.11}) and (\ref{3:eq.12}) and $X$ the discrete time perturbed risk process \begin{equation}\label{3:eq.14} X(n)=-C(n)+Z(n)~,~ n\geq 1~,\end{equation} and let us assume that $\mathbb{E}\,(X(1))>0$, i.e. $\mathbb{E}\, C(1)<\mathbb{E}\, Z(1)$. Then \begin{align*}{\mathbb P}(\widehat{\tau}_0<\infty,&\widehat{X}(\widehat {\tau}_0-1)=y,\widehat{X}(\widehat{\tau}_0)\geq x,\textrm{$C$ caused the jump of the process $\widehat{X}$ over the level $0$})\\ &={\mathbb P}(C(1)\geq x+1-y)\cdot{\mathbb P}(Z(1)=-1)+{\mathbb P}(C(1)\geq x-y)\cdot{\mathbb P}(Z(1)\geq 0)~.\end{align*}\end{thm} Let us notice that a similar result can be derived if we define $C$ as the sum of $m\in{\mathbb N}$ independent nondecreasing random walks $C^1,\ldots,C^m$, $C=C^1+\cdots +C^m$.\\ \\ Let us further notice that the ``problem'' of the simultaneous jumps of the random walks $C$ and $Z$, which is characteristic of the discrete time processes and differs from the continuous time version of the same problem, may be overcome if we observe a natural connection between these two types of models, i.e. the compound Poisson processes. For some generalizations in this direction and similar results connected to the ruin probability see \cite{GTV}.\\ \\ \textbf{Acknowledgement:} This work has been supported in part by Croatian Science Foundation under the project 3526. \end{document}
\begin{document} \title[]{Hyperbolic plane geometry revisited} \author[\'A. G. Horv\'ath]{\'Akos G. Horv\'ath} \date{25 April, 2014} \address{\'A. G. Horv\'ath, Dept. of Geometry, Budapest University of Technology, Egry J\'ozsef u. 1., Budapest, Hungary, 1111} \email{[email protected]} \subjclass{51M10; 51M15} \keywords{cycle, hyperbolic plane, inversion, Malfatti's construction problem, triangle centers} \begin{abstract} Using the method of C. V\"or\"os, we establish results in hyperbolic plane geometry, related to triangles and circles. We present a model independent construction for Malfatti's problem and several trigonometric formulas for triangles. \end{abstract} \maketitle \section{Introduction} J. W. Young, the editor of the book \cite{johnson}, wrote in his introduction: \emph{ There are fashions in mathematics as well as in clothes, -- and in both domains they have a tendency to repeat themselves.} During the last decade, ``hyperbolic plane geometry'' has aroused much interest and has been investigated vigorously by a considerable number of mathematicians. Despite the large number of investigations, the number of hyperbolic trigonometric formulas that can be collected from them is fairly small; they could be written on a single page of size B5. This observation is very surprising if we compare it with the fact that already in 1889 a very extensive and elegant treatise on spherical trigonometry was written by John Casey \cite{casey 1}. The reason for this, probably, is that the discussion of a problem in hyperbolic geometry is less pleasant than in the spherical one. On the other hand, in the 19th century an excellent mathematician in Hungary, Cyrill V\"or\"os\footnote{Cyrill V\"or\"os (1868--1948), piarist, teacher}, made a big step toward solving this problem. He introduced a method for the measurement of distances and angles in the case that the considered points or lines, respectively, are not real.
Unfortunately, since he published his works mostly in Hungarian or in Esperanto, his method is not well known to the mathematical community. To fill this gap, we use the concept of distance extracted from his work and, translating the standard methods of Euclidean plane geometry into the language of the hyperbolic plane, apply it to various configurations. We give a model-independent construction for the famous problem of Malfatti (discussed in \cite{gho 1}) and present some interesting formulas connected with the geometry of hyperbolic triangles. By the notion of distance introduced by V\"or\"os, we obtain results in hyperbolic plane geometry which are not well known. The length of this paper is very limited, hence some proofs will be omitted here. The interested reader can find these proofs in the unpublished source file \cite{gho 3}. \subsection{Well-known formulas on hyperbolic trigonometry} The points $A,B,C$ denote the vertices of a triangle. The lengths of the edges opposite to these vertices are $a,b,c$, respectively. The angles at $A,B,C$ are denoted by $\alpha, \beta, \gamma$, respectively. If the triangle has a right angle, it is always at $C$. The symbol $\delta$ denotes half of the area of the triangle; more precisely, we have $2\delta=\pi-(\alpha+\beta+\gamma)$. \begin{itemize} \item {\bf Connections between the trigonometric and hyperbolic trigonometric functions:} $$ \sinh a=\frac{1}{i}\sin (ia), \quad \cosh a=\cos (ia), \quad \tanh a=\frac{1}{i}\tan (ia). $$ \item {\bf Law of sines:} \begin{equation} \sinh a :\sinh b :\sinh c=\sin \alpha :\sin \beta :\sin \gamma. \end{equation} \item {\bf Law of cosines:} \begin{equation} \cosh c=\cosh a\cosh b-\sinh a\sinh b \cos \gamma. \end{equation} \item {\bf Law of cosines on the angles:} \begin{equation} \cos \gamma=-\cos\alpha\cos\beta+\sin\alpha\sin\beta \cosh c. \end{equation} \item {\bf The area of the triangle:} \begin{equation} T:=2\delta=\pi-(\alpha+\beta+\gamma).
\end{equation} \begin{equation} \tan \frac{T}{2}=\left(\tanh \frac{a_1}{2}+\tanh \frac{a_2}{2}\right)\tanh \frac{m_a}{2}, \end{equation} where $m_a$ is the height of the triangle corresponding to $A$ and $a_1,a_2$ are the signed lengths of the segments into which the foot point of the height divides the side $BC$. \item {\bf Heron's formula:} \begin{equation} \tan \frac{T}{4}=\sqrt{\tanh \frac{s}{2}\tanh \frac{s-a}{2}\tanh \frac{s-b}{2}\tanh \frac{s-c}{2}}. \end{equation} \item{\bf Formulas on Lambert's quadrangle:} The vertices of the quadrangle are $A,B,C,D$ and the lengths of the edges are $AB=a,BC=b,CD=c$ and $DA=d$, respectively. The only angle which is not a right angle is $ BCD\measuredangle=\varphi$. Then, for the sides, we have: $$ \tanh b=\tanh d\cosh a, \quad \tanh c=\tanh a\cosh d, $$ and $$ \sinh b=\sinh d\cosh c, \quad \sinh c=\sinh a\cosh b . $$ Moreover, for the angles, we have: $$ \cos \varphi=\tanh b\tanh c=\sinh a\sinh d, \quad \sin \varphi=\frac{\cosh d}{\cosh b}=\frac{\cosh a}{\cosh c}, $$ and $$ \tan \varphi =\frac{1}{\tanh a\sinh b}=\frac{1}{\tanh d\sinh c}. $$ \end{itemize} \section{On the distance of points and the lengths of segments} First we extend the concept of distance from real points, following the method of the book of Cyrill V\"or\"os (\cite{voros}). We extend the plane with two types of new points: points at infinity and ideal points. In a projective model these are the boundary points and the external points of the model with respect to the embedding real projective plane. Two parallel lines determine a point at infinity, and two ultraparallel lines determine an ideal point, which is the pole of their common transversal. Now the concept of the line can also be extended: a line is real if it has real points (in this case it also has two points at infinity, and its remaining points are ideal points, namely the poles of the real lines orthogonal to it).
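The trigonometric formulas collected above are exact identities, and they can be sanity-checked numerically. The following sketch (the side lengths are arbitrary sample values of ours) derives the angles of a triangle from the law of cosines and then verifies the law of sines and Heron's formula:

```python
import math

def angle(a, b, c):
    # Angle opposite to the side of length a, from the hyperbolic law of cosines:
    # cosh a = cosh b cosh c - sinh b sinh c cos(alpha)
    return math.acos((math.cosh(b) * math.cosh(c) - math.cosh(a)) /
                     (math.sinh(b) * math.sinh(c)))

a, b, c = 1.0, 1.2, 1.5                       # an arbitrary sample triangle
alpha, beta, gamma = angle(a, b, c), angle(b, c, a), angle(c, a, b)

# Law of sines: sinh a : sinh b : sinh c = sin(alpha) : sin(beta) : sin(gamma)
assert abs(math.sin(alpha) / math.sinh(a) - math.sin(beta) / math.sinh(b)) < 1e-12
assert abs(math.sin(alpha) / math.sinh(a) - math.sin(gamma) / math.sinh(c)) < 1e-12

# Heron's formula: tan(T/4) in terms of s = (a+b+c)/2, T being the angular defect
s = (a + b + c) / 2
T = math.pi - (alpha + beta + gamma)
heron = math.sqrt(math.tanh(s / 2) * math.tanh((s - a) / 2)
                  * math.tanh((s - b) / 2) * math.tanh((s - c) / 2))
assert abs(math.tan(T / 4) - heron) < 1e-12
```
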
The extended real line is a closed compact set with finite length. We also distinguish the line at infinity, which contains precisely one point at infinity, and the so-called ideal line, which contains only ideal points. By definition the common length of these lines is $\pi ki$, where $k$ is a constant of the hyperbolic plane and $i$ is the imaginary unit. In this paper we assume that $k=1$. Two points on a line determine two segments $AB$ and $BA$. The sum of the lengths of these segments is $AB+BA=\pi i$. We define the length of a segment as an element of the linearly ordered set $\bar{\mathbb{C}}:=\overline{\mathbb{R}}+ \mathbb{R}\cdot i$. Here $\overline{\mathbb{R}}=\mathbb{R}\cup\{\pm \infty\}$ is the linearly ordered set of real numbers extended with two new elements, the ``real infinity'' $\infty $ and its additive inverse $-\infty$. The infinities can be considered as new ``numbers'' with the properties that there is no real number greater than or equal to $\infty$ and no real number less than or equal to $-\infty$. We also introduce the following operational rules: $\infty+\infty=\infty$, $-\infty+(-\infty)=-\infty$, $\infty+(-\infty)=0$ and $\pm\infty+a=\pm\infty$ for real $a$. Clearly, $\overline{\mathbb{R}}$ is not a group; the rule of associativity holds only for expressions containing at most two of the new objects. In fact, if associativity held in general, then $0=\infty+(-\infty)=(\infty+\infty)+(-\infty)=\infty+(\infty+(-\infty))=\infty+0=\infty$, a contradiction. We also require that the equality $\pm\infty+b i=\pm\infty +0 i$ holds for every real number $b$, and for brevity we introduce the respective notations $\infty:=\infty +0 i$ and $-\infty:=-\infty +0 i$. We extend the usual definitions of the hyperbolic functions, based on the complex exponential function, by the following formulas: $$ \cosh (\pm \infty):=\infty, \sinh (\pm \infty):=\pm \infty, \mbox{ and } \tanh(\pm \infty):=\pm 1.
$$ We also assume that $\infty \cdot \infty=(-\infty)\cdot(-\infty)=\infty$, $\infty\cdot(-\infty)=-\infty$ and $\alpha \cdot (\pm \infty)=\pm \infty$ for positive real $\alpha$. Assuming that the trigonometric formulas of hyperbolic triangles remain valid for triangles with non-real vertices, we can derive the lengths of the complementary segments of a line. For instance, consider a triangle with two real vertices ($B$ and $C$) and an ideal one ($A$). The lengths of the segments between $C$ and $A$ are $b$ and $b'$, the lengths of the segments between $B$ and $A$ are $c$ and $c'$, and the length of that segment between $C$ and $B$ which contains only real points is $a$. Let the right angle be at the vertex $C$ and denote by $\beta$ the other real angle at $B$. (See Fig. 1.) \begin{figure} \caption{Length of the segments between a real and an ideal point} \end{figure} With respect to this triangle we have $\tanh b=\sinh a\cdot \tan \beta$, and since $A$ is an ideal point, the parallel angle corresponding to the distance $\overline{BC}=a$ is less than $\beta $. Hence $\tan \beta > 1/\sinh a$, implying that $\tanh b >1$. Hence $b$ is a complex number. If the polar of $A$ is $EF$, then it is the common perpendicular of the lines $AC$ and $AB$. The quadrangle $CFEB$ has three right angles. Denote by $b_1$ the length of that segment $\overline{CF}$ which contains real points only. Then we get $ \tan \beta =\frac{1}{\tanh b_1 \sinh a}, $ meaning that $ \sinh a \tan \beta=\frac{1}{\tanh b_1}=\tanh b. $ Similarly we have that $\tanh b'=\sinh a\cdot \tan (\pi-\beta)=- \sinh a\cdot \tan \beta$, implying that $|\tanh b'|>1$, hence $b'$ is also complex. Now we have that $ \tanh b'=-\frac{1}{\tanh b_1}. $ Using the formulas connecting the trigonometric and hyperbolic trigonometric functions we get that $ \frac{1}{i}\tan ib =\frac{i}{\tan ib_1}, $ implying that $ \tan ib =-\tan\left(\frac{\pi}{2}-ib_1\right), $ so $ b=-\frac{2n-1}{2}\pi i+b_1.
$ Analogously we also get that $ b'=-\frac{2m+1}{2}\pi i-b_1. $ Here $n$ and $m$ are arbitrary integers. On the other hand, if $b_1=0$ then $AC=CA$, and so $b=b'$, meaning that $2n-1=2m+1$. For the half length of the complete line we can choose an odd multiple of the number $\pi i/2$. The simplest choice is $n=0$ and $m=-1$. Thus the lengths of the segments $AC$ and $CA$ can be defined as $b=b_1+\frac{\pi}{2}i$ and $b'=-b_1+\frac{\pi}{2}i$, respectively. We now define all of the possible lengths of a segment on the basis of the type of the line that contains it. \subsection{The points $A$ and $B$ are on a real line.} We can distinguish six subcases. The definitions of the respective cases can be found in Table 1. We abbreviate the words real, infinite and ideal by the symbols ${\mathcal R}$, ${\mathcal I}n$ and ${\mathcal I}d$, respectively. Here $d$ denotes the (positive) real distance between the corresponding usual real elements, which are a real point or the real polar line of an ideal point, respectively. Every box in the table contains two numbers, which are the lengths of the two segments determined by the two points. For example, the distance of a real and an ideal point is a complex number. Its real part is the distance of the real point to the polar of the ideal point, taken with a sign. This sign is positive in the case when the polar line intersects the segment between the real and ideal points, and is negative otherwise. The imaginary part of the length is $(\pi/2)i$, so that the two complementary segments of this projective line have total length $\pi i$. Consider now a point at infinity. This point can also be considered as a limit of real points or as a limit of ideal points of this line. By definition the distance from a point at infinity of a real line to any other real or infinite point of this line is $\pm \infty$, according as the segment between them contains only real points or not.
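The choice $b=b_1+\frac{\pi}{2}i$ can be checked directly with complex arithmetic. The following sketch (with an arbitrary sample value of $b_1$) verifies the relations $\tanh b=1/\tanh b_1$, $\tanh b'=-1/\tanh b_1$ and $b+b'=\pi i$ derived above:

```python
import cmath
import math

b1 = 0.8                          # an arbitrary real distance CF
b = b1 + (math.pi / 2) * 1j       # length of the segment AC
bp = -b1 + (math.pi / 2) * 1j     # length of the complementary segment CA

# tanh b = 1/tanh b_1 and tanh b' = -1/tanh b_1, as derived from the quadrangle CFEB
assert abs(cmath.tanh(b) - 1 / math.tanh(b1)) < 1e-12
assert abs(cmath.tanh(bp) + 1 / math.tanh(b1)) < 1e-12

# the two complementary segments of the line together have length pi*i
assert abs((b + bp) - math.pi * 1j) < 1e-12
```
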
If, for instance, $A$ is an infinite point and $B$ is a real one, then the segment $AB$ which contains only real points has length $\infty$. It is clear that the length-function is continuous on the segments of a real line. \begin{table} \begin{tabular}{cc|c|c|c|} \cline{3-5} & & \multicolumn{3}{|c|}{$B$} \\ \cline{2-5} &\multicolumn{1}{|c|}{ } & ${\mathcal R}$ & ${\mathcal I}n$ & ${\mathcal I}d$ \\ \cline{1-5} \multicolumn{1}{|c|}{}& \multicolumn{1}{|c|}{${\mathcal R}$} & \begin{tabular}{c} $AB=d$ \\ $BA=-d+\pi i$ \end{tabular} & \begin{tabular}{c} $AB=\infty $ \\$BA=-\infty$ \end{tabular} & \begin{tabular}{c} $AB=d+\frac{\pi}{2} i$ \\ $BA=-d+\frac{\pi}{2} i$ \end{tabular} \\ \cline{2-5} \multicolumn{1}{|c|}{$A$}& \multicolumn{1}{|c|}{${\mathcal I}n$} & & \begin{tabular}{c} $AB=\infty $ \\ $BA=-\infty$ \end{tabular} & \begin{tabular}{c} $AB=\infty$ \\$BA=-\infty $ \end{tabular} \\ \cline{2-5} \multicolumn{1}{|c|}{}& \multicolumn{1}{|c|}{${\mathcal I}d$} & & & \begin{tabular}{c} $AB=d+\pi i$ \\ $BA=-d$ \end{tabular} \\ \hline \end{tabular} \caption{Distances on the real line.} \end{table} \subsection{The points $A$ and $B$ are on a line at infinity.} We can check that the length of a segment for which either $A$ or $B$ is an infinite point is indeterminate. To see this, let the real point $C$ be a vertex of a right-angled triangle whose other vertices, $A$ and $B$, are on a line at infinity, with $B$ its point at infinity. Then we get that $\cosh c=\cosh a\cdot \cosh b$ for the corresponding sides of this triangle. But from the result of the previous subsection $$ \cosh a=\cosh \infty=\infty \mbox{ and } \cosh b=\cosh \left(0+\frac{\pi}{2}i\right)=\cos \left(\frac{\pi}{2}\right)=0, $$ showing that their product is indeterminate. On the other hand, if we consider the polar of the ideal point $A$ we get a real line through $B$. The length of a segment connecting the (ideal) point $A$ and one of the points of its polar is $(\pi/2)i$.
This means that we can define the length of a segment between $A$ and $B$ also as this common value. Now if we also want to preserve the additivity of the lengths of segments on a line at infinity, then we must assign the pair of values $0, \pi i$ to the lengths of the segments with ideal endpoints. Table 2 collects these definitions. \begin{table} \begin{tabular}{cc|c|c|} \cline{3-4} & & \multicolumn{2}{|c|}{$B$} \\ \cline{2-4} & \multicolumn{1}{|c|}{} & ${\mathcal I}n$ & ${\mathcal I}d$ \\ \cline{1-4} \multicolumn{1}{|c|}{$A$} & \multicolumn{1}{|c|}{${\mathcal I}n$} & \begin{tabular}{c} $AB=0 $ \\ $BA=\pi i$ \end{tabular} & \begin{tabular}{c} $AB=\frac{\pi}{2}i$ \\ $BA=\frac{\pi}{2}i $ \end{tabular} \\ \cline{2-4} \multicolumn{1}{|c|}{} & \multicolumn{1}{|c|}{${\mathcal I}d$} & & \begin{tabular}{c} $AB=0$ \\ $BA=\pi i$ \end{tabular} \\ \hline \end{tabular} \caption{Distances on the line at infinity.} \end{table} \subsection{The points $A$ and $B$ are on an ideal line.} \begin{figure} \caption{The cases of the ideal segment and angles} \end{figure} This situation contains only one case: $A$, $B$ and $AB$ are all ideal elements. First we need the measure of the angle of two real ultraparallel lines. (See $\alpha$ in Fig. 1.) Clearly $\cos \alpha=\cosh a \cdot \sin \beta>1$, and so $\alpha $ is imaginary. From Lambert's quadrangle $BCEF$ we get $$ \cosh a \sin \beta =\cosh p, $$ thus $\cosh p=\cos \alpha$ and so $\alpha =2n\pi \pm p i $. Now an elementary analysis of the figure shows that the continuity property requires the choice $n=0$. If we also choose the negative sign, then the measure is $\alpha=-p i=p/i$, where $p$ is the length of that segment of the common perpendicular whose points are real. Consider now an ideal line and its two ideal points $A$ and $B$, respectively. The polars of these points intersect each other in a real point $B_1$.
Consider a further real point $C$ of the line $BB_1$ and denote by $A_1$ the intersection point of the polar of $A$ and the real line $AC$ (see Fig. 2). Observe that $A_1B_1$ is perpendicular to $AC$; thus we have $\tanh b_1 =\tanh a_1 \cdot \cos \gamma$. On the other hand, $a=\pm a_1 +(\pi i)/2$ and $b=\pm b_1 +(\pi i)/2$, implying that $\tanh b =\tanh a \cdot \cos \gamma$. Hence the angle between the real line $CB$ and the ideal line $AB$ can also be considered to be $\pi /2$. Now from the triangle $ABC$ we get that $$ \cosh c =\frac{\cosh b}{\cosh a}=\frac{\pm i\sinh b_1}{\pm i\sinh a_1}=\frac{\sinh b_1}{\sinh a_1}=\sin \left(\frac{\pi}{2}-\varphi\right)=\cos \varphi, $$ where $\varphi$ is the angle of the two polars. From this we get $c=2n\pi i\pm \varphi/i=2n\pi i \mp \varphi i$. We choose $n=0$, since then $\varphi=0$ implies $c=0$, and we choose the sign so that the length of the whole line is $\pi i$. \emph{The length of an ideal segment on an ideal line is the angle of the polars of its endpoints multiplied by the imaginary unit $i$.} \subsection{Angles of lines} \begin{table}[htbp] \begin{tabular}{cc|c|c|c|} \cline{3-5} & & \multicolumn{3}{|c|}{$a$} \\ \cline{3-5} & & ${\mathcal R}$ & ${\mathcal I}n$ & ${\mathcal I}d$ \\ \hline \multicolumn{1}{|c|}{}& \multicolumn{1}{|c|}{${\mathcal R}$} & \begin{tabular}{|c|c|c|} \multicolumn{3}{|c|}{$M$} \\ \hline ${\mathcal R}$ & ${\mathcal I}n$ & ${\mathcal I}d$ \\ \hline \begin{tabular}{c} $\varphi$ \\ $\pi-\varphi$ \end{tabular} & \begin{tabular}{c} $0$ \\ $\pi$ \end{tabular} & \begin{tabular}{c} $\frac{p}{i}$ \\ $\pi -\frac{p}{i}$ \end{tabular} \\ \end{tabular} & \begin{tabular}{|c|c|} \multicolumn{2}{|c|}{$M$} \\ \hline ${\mathcal I}n$ & ${\mathcal Id}$ \\ \hline \begin{tabular}{c} $\frac{\pi}{2}$ \\ $\frac{\pi}{2}$ \end{tabular} & \begin{tabular}{c} $\infty$ \\ $-\infty$ \end{tabular} \\ \end{tabular} & \begin{tabular}{|c|} \multicolumn{1}{|c|}{$M$} \\ \hline ${\mathcal I}d$ \\ \hline \begin{tabular}{c} $\frac{\pi}{2}+\frac{a_1}{i}$ \\
$\frac{\pi}{2}-\frac{a_1}{i}$ \end{tabular} \\ \end{tabular}\\ \cline{2-5} \multicolumn{1}{|c|}{$b$} & \multicolumn{1}{|c|}{${\mathcal I}n$} & &\begin{tabular}{|c|} \multicolumn{1}{|c|}{$M$} \\ \hline ${\mathcal I}d$ \\ \hline $\infty$ \\ $-\infty$ \end{tabular} &\begin{tabular}{|c|} \multicolumn{1}{|c|}{$M$} \\ \hline ${\mathcal I}d$ \\ \hline $\infty$ \\ $-\infty$ \end{tabular} \\ \cline{2-5} \multicolumn{1}{|c|}{} &\multicolumn{1}{|c|}{${\mathcal I}d$} & & & \begin{tabular}{|c|} \multicolumn{1}{|c|}{$M$} \\ \hline ${\mathcal I}d$ \\ \hline $\frac{p}{i}$ \\ $\pi-\frac{p}{i}$ \end{tabular} \\ \hline \end{tabular} \caption{Angles of lines.} \end{table} Similarly to the previous paragraph, we can deduce the angle between arbitrary kinds of lines (see Table 3). In Table 3, $a$ and $b$ are the given lines, $M=a\cap b$ is their intersection point, $m$ is the polar of $M$, and $A$ and $B$ are the poles of $a$ and $b$, respectively. The numbers $p$ and $a_1$ represent real distances, as can be seen in Fig. 2. The general connection between the angles and distances is the following: \emph{ Every distance of a pair of points is the measure of the angle of their polars multiplied by $i$. The domain of the angle can be chosen in such a way that, as a point runs through the segment, the domain is swept out by the moving polar of this point. } \subsection{The extended hyperbolic theorem of sines} With the above definition of the length of a segment, the known formulas of hyperbolic trigonometry can be extended to formulas for general objects with real, infinite or ideal vertices. For example, we can prove the hyperbolic theorem of sines for right-angled triangles, which says that $\sinh a=\sinh c\cdot \sin \alpha$. \begin{figure} \caption{Hyperbolic theorem of sines with non-real vertices} \end{figure} We prove {\bf first} those cases in which all sides of the triangle lie on real lines.
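For reference, the real case of the theorem is an exact identity and is easily confirmed numerically. The sketch below (with arbitrarily chosen legs) builds a right-angled triangle from the hyperbolic Pythagorean relation $\cosh c=\cosh a\cosh b$ and the standard right-triangle formula $\tan\alpha=\tanh a/\sinh b$:

```python
import math

a, b = 0.7, 1.1                                  # arbitrary legs; right angle at C
c = math.acosh(math.cosh(a) * math.cosh(b))      # hyperbolic Pythagorean theorem
alpha = math.atan(math.tanh(a) / math.sinh(b))   # angle at the vertex A

# theorem of sines for right-angled triangles: sinh a = sinh c * sin(alpha)
assert abs(math.sinh(a) - math.sinh(c) * math.sin(alpha)) < 1e-12
```
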
We assume that the right angle is at $C$ and that $C$ is a real point, in accordance with our definition. \begin{itemize} \item If $A$ is an infinite point and $B$ and $C$ are real ones, then $\sinh c\cdot \sin \alpha=\infty \cdot 0$ is indeterminate, and we may consider the equality to be true. The relation $\sinh b\cdot \sin \beta=\infty \cdot \sin \beta =\infty$ is also true by our agreement. If both $A$ and $B$ are at infinity, then $\alpha =\beta =0$ and the equality holds, too. \item In the case when $B,C$ are real points and $A$ is an ideal point, let the polar of $A$ be $p_A$. Then by definition $ \sinh c=\sinh (d_B +(i\pi/2))=\cosh (d_B)\sinh (i\pi/2)=i\cosh (d_B)$, where $d_B$ is the distance of $B$ from $p_A$; moreover, $\sin\alpha= \sin (d/i)=\sin(-id)=-i\sinh d$, where $d$ is the length of the segment between the lines of the sides $AC$ and $BC$. If $p_A$ intersects $AC$ and $BC$ in the points $D$ and $E$, respectively, then $BCDE$ is a quadrangle with three right angles and with the sides $a$, $x$, $d$ and $d_B$ (see the left-hand side picture in Fig. 3). This implies that $ \sinh c \sin\alpha=\cosh (d_B)\sinh(d)=\sinh a $, as we stated. \item If $C$ is a real point, $A$ is at infinity, and $B$ is an ideal point, then $\alpha =0$ and the right-hand side $\sinh c\cdot \sin \alpha$ is indeterminate. If we consider $\sinh c\cdot \sin \beta=\infty \sin\beta$, it is infinite by our agreement, and the statement is true again. \item The last case is very interesting: $C$ is a real point, $A$ and $B$ are ideal points, and the line $AB$ is a real line (see the right-hand side picture in Fig. 3). Then $\sinh a=i\cosh g$, $ \sinh c=\sinh (-e)$ and $\sin\alpha=-i\sinh d$, thus $\sinh c \sin\alpha=i\sinh e \sinh d$, and the theorem holds if and only if in the real pentagon $CDEFG$ with five right angles it holds that $\sinh e \sinh d=\cosh g$.
But we have: \end{itemize} \begin{statement}[\cite{gho 3}] Denote by $a,b,c,d,e$ the lengths of the successive sides of a pentagon with five right angles on the hyperbolic plane. Then we have: $$\cosh d=\sinh a\sinh b, \quad \sinh c=\frac{\cosh a}{\sqrt{\sinh^2 a\sinh^2 b-1}}, \quad \sinh e=\frac{\cosh b}{\sqrt{\sinh^2 a\sinh^2 b-1}}. $$ \end{statement} {\bf Second}, we assume that the hypotenuse $AB$ lies on a non-real line. If it lies at infinity and at least one vertex is an infinite point, then the statement is evidently true. Assume now that $A$, $B$ and their line are ideal elements. Then the length $c$ is equal to $(\pi/2)i$, the angle $\alpha$ is equal to $(\pi/2)+d/i$, where $d$ is the distance between $C$ and the polar of $B$, and the length of $a$ is equal to $d+(\pi/2)i$. The equality $\sinh ((\pi/2)i)\cdot \sin ((\pi/2)+d/i)=(1/i)\sin (-(\pi/2))\cos (d/i)=-(1/i)\cosh d=i \cosh d=\sinh (d+(\pi/2)i)$ proves the statement in this case, too. \section{Power, inversion and centres of similitude} It is not clear who first investigated the concept of inversion in hyperbolic geometry. A synthetic approach can be found in \cite{molnar}, using reflections in Bachmann's metric plane. For our purpose it is more convenient to use an analytic approach in which the concepts of centres of similitude and axis of similitude can be defined. We mention that the spherical versions of these concepts can be found in Chapters VI and VII of \cite{casey 1}. In the hyperbolic case, using the extended concept of the length of a segment, this approach can be reproduced. \begin{lem}[\cite{gho 3}] The product $\tanh \frac{PA}{2} \cdot \tanh \frac{PB}{2}$ is constant if $P$ is a fixed (but arbitrary) point (real, at infinity or ideal), $P,A,B$ are collinear, and $A,B$ lie on a cycle of the hyperbolic plane (that is, on a curve which, in the fixed projective model of the real projective plane, has a proper part). \end{lem} On the basis of Lemma 3.1
we can define the power of a point with respect to a given cycle. \begin{defn} The \emph{power} of a point $P$ with respect to a given cycle is the value $$ c:=\tanh \frac{1}{2}PA \cdot \tanh \frac{1}{2} PB, $$ where the points $A$, $B$ are on the cycle such that the line $AB$ passes through the point $P$. By Lemma 3.1 this point can be a real, infinite or ideal one. The \emph{axis of power} of two cycles is the locus of the points having the same power with respect to both cycles. \end{defn} The power of a point can be positive, negative or complex. (For example, in the case when $A,B$ are real points we have the following possibilities: the power is positive if $P$ is a real point in the exterior of the cycle; it is negative if $P$ is a real point in the interior of the cycle; it is infinite if $P$ is a point at infinity; and it is complex if $P$ is an ideal point.) We can also introduce the concept of the centres of similitude of cycles. \begin{defn} The \emph{centres of similitude} of two cycles with non-overlapping interiors are the common points of their pairs of tangents touching directly or inversely (i.e., tangents that do not separate, respectively separate, the cycles). The first point is the \emph{external centre of similitude}, the second one is the \emph{internal centre of similitude}. \end{defn} For intersecting cycles separating tangent lines do not exist; in this case the internal centre of similitude is defined as on the sphere, with $\sin $ replaced by $\sinh $. More precisely, we have \begin{lem}[\cite{gho 3}] Two points $S,S'$ which divide the segments $OO'$ and $O'O$, joining the centres of the two cycles, in the ratio of the hyperbolic sines of the radii $r,r'$ are the centres of similitude of the cycles. In formulas, if $\sinh OS : \sinh SO'=\sinh O'S':\sinh S'O=\sinh r:\sinh r'$, then the points $S,S'$ are the centres of similitude of the given cycles.
\end{lem} We also have the following \begin{lem}[\cite{gho 3}] If a secant through a centre of similitude $S$ meets the cycles in the corresponding points $M, M'$, then $\tanh \frac{1}{2}SM$ and $\tanh \frac{1}{2} SM'$ are in a given ratio. \end{lem} We now discuss the possible positions of the centres of similitude. We have six cases. \begin{description} \item[i] \textsl{The two cycles are circles.} To get the centres of similitude we have to solve an equation in $x$. Here $d$ denotes the distance of the centres of the circles, $r\leq R$ denote the respective radii, and $x$ is the distance of the centre of similitude from the centre of the circle with radius $r$. From $$\sinh (d\pm x): \sinh x=\sinh R:\sinh r$$ we get that $\coth x=\frac{\sinh R\mp\cosh d\sinh r}{\sinh r\sinh d}$ or, equivalently, $$ e^x=\sqrt{\frac{\coth x+1}{\coth x -1}}=\sqrt{\frac{(\sinh R)/(\sinh r)\mp e^{\mp d}}{(\sinh R)/(\sinh r)\mp e^{\pm d}}}. $$ The two centres correspond to the two possible choices of sign. If we assume that $e^x=\sqrt{\frac{(\sinh R)/(\sinh r)- e^{-d}}{(\sinh R)/(\sinh r)- e^{ d}}}$, then the centre is an ideal point, a point at infinity, or a real point according to the cases $\sinh R/\sinh r <e^d$, $\sinh R/\sinh r =e^d$, or $\sinh R/\sinh r>e^d$, respectively. The corresponding centre is the external centre of similitude. In the other case we have $e^x=\sqrt{\frac{(\sinh R)/(\sinh r)+ e^{d}}{(\sinh R)/(\sinh r)+ e^{- d}}}$, and the corresponding centre is always a real point. This is the internal centre of similitude. \item[ii] \textsl{One of the cycles is a circle and the other one is a paracycle.} The line joining their centres (which we call the axis of symmetry) is a real line, but the respective ratio is zero or infinite. To determine the centres we have to construct the common tangents and their points of intersection. The external centre is a real, infinite or ideal point, and the internal centre is a real point.
\item[iii] \textsl{One of the cycles is a circle and the other one is a hypercycle.} The axis of symmetry is a real line, but the ratio of the hyperbolic sines of the radii is complex. The external centre is a real, infinite or ideal point; the internal one is always a real point. Each of them can be determined as in the case of two circles. \item[iv] \textsl{Both cycles are paracycles.} The axis of symmetry is a real line and the internal centre is a real point. The external centre is an ideal point. \item[v] \textsl{One of them is a paracycle and the other one is a hypercycle.} The axis of symmetry (in the Poincar\'e model, with the hypercycle replaced by the circular line containing it, and the axis containing the two apparent centres) is a real line. The internal centre is a real point. The external centre is a real, infinite or ideal point. \item[vi] \textsl{Both of them are hypercycles.} The axis of symmetry (understood as in the previous case) can be a real line, an ideal line or a line at infinity. For the internal centre, as well as for the external one, we have the three possibilities above. \end{description} We can use the concepts of ``axis of similitude'', ``inverse and homothetic pairs of points'', and ``homothetic image and inverse of a curve $\gamma $ with respect to a fixed point $S$'' (where $S$ can be a real point, a point at infinity, or an ideal point) as in the case of the sphere. More precisely, we have: \begin{lem}[\cite{gho 3}] The six centres of similitude of three cycles taken in pairs lie three by three on four lines, called the \emph{axes of similitude} of the cycles. \end{lem} From Lemma 3.5 it follows immediately that if the two pairs of intersection points of a line through $S$ with the cycles are $N,N'$ and $M,M'$, then $\tanh \frac{1}{2}SM \cdot \tanh \frac{1}{2} SN'$ is independent of the choice of the line.
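The formulas of case (i) can be tested numerically. The sketch below (the function name and the sample data are ours) recovers the internal centre of similitude of two circles from $\coth x=(\sinh R+\cosh d\sinh r)/(\sinh r\sinh d)$ and checks the defining ratio:

```python
import math

def internal_center(r, R, d):
    # distance x of the internal centre of similitude from the centre
    # of the circle with radius r (the '+' sign case of the text)
    coth_x = (math.sinh(R) + math.cosh(d) * math.sinh(r)) / (math.sinh(r) * math.sinh(d))
    return 0.5 * math.log((coth_x + 1) / (coth_x - 1))   # x = arcoth(coth_x)

r, R, d = 0.5, 1.0, 2.0          # arbitrary sample radii and centre distance
x = internal_center(r, R, d)

# defining ratio of the internal centre: sinh(d - x) : sinh x = sinh R : sinh r
assert abs(math.sinh(d - x) / math.sinh(x) - math.sinh(R) / math.sinh(r)) < 1e-12
```
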
Thus, given a fixed point $S$ (the centre of the cycle with respect to which we invert) and any curve $\gamma $ on the hyperbolic plane, if on the halfline joining $S$ (the endpoint of the halfline) with any point $M$ of $\gamma $ a point $N'$ is taken such that $ \tanh \frac{SM}{2}\cdot \tanh \frac{SN'}{2}$ is constant, then the locus of $N'$ is called the \emph{inverse} of $\gamma $. We also use the name \emph{cycle of inversion} for the locus of the points $P$ for which $\tanh^2 \frac{SP}{2}$ equals the constant $ \tanh \frac{SM}{2}\cdot\tanh \frac{SN'}{2}$. Of a pole and its polar, at least one is always real, or both are at infinity. Thus, in a construction the common point of two lines is well-defined, and in every situation it can be joined with another point; for example, if both of them are ideal points they can be given by their polars (which are constructible real lines) and the required line is the polar of the intersection point of these two real lines. Thus the lengths in the definition of the inverse can be constructed. This implies that the inverse of a point can be constructed on the hyperbolic plane, too. \begin{rem} Finally we remark that all of the concepts and results of inversion with respect to a sphere of Euclidean space can also be defined in hyperbolic space; the ``basic sphere'' can be a hypersphere, parasphere or sphere. We can also use the concepts of ideal elements and of elements at infinity, if necessary. It can be proved (using Poincar\'e's ball model) that every hyperbolic plane of the hyperbolic space can be inverted to a sphere by such a general inversion. This map sends the cycles of the plane to circles of the sphere. \end{rem} \section{Applications} In this section we give applications, some of them having analogues on the sphere, and others being completely new.
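Before turning to the constructions, note that the defining relation of the inverse introduced above reduces to a one-line computation for real data. The sketch below (the function name and sample values are ours) computes $SN'$ from $\tanh\frac{SM}{2}\cdot\tanh\frac{SN'}{2}=k$ and checks that the points of the cycle of inversion are fixed:

```python
import math

def inverse_distance(sm, k):
    # distance SN' of the inverse point, solved from tanh(SM/2) * tanh(SN'/2) = k;
    # real distances, with 0 < k < tanh(sm/2) assumed so that atanh is defined
    return 2 * math.atanh(k / math.tanh(sm / 2))

k = 0.25                      # the constant (power) of the inversion
sm = 1.3                      # distance SM of a point M from the centre S
sn = inverse_distance(sm, k)
assert abs(math.tanh(sm / 2) * math.tanh(sn / 2) - k) < 1e-12

# points of the cycle of inversion (tanh^2(d/2) = k) are fixed by the inversion
fixed = 2 * math.atanh(math.sqrt(k))
assert abs(inverse_distance(fixed, k) - fixed) < 1e-12
```
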
\subsection{Steiner's construction on Malfatti's construction problem} Malfatti (see \cite{malfatti}) raised and solved the following problem: \emph{construct three circles in a triangle so that each of them touches the two others from outside and, moreover, touches two sides of the triangle.} The first remarkable step was Steiner's construction. He gave an elegant method (without proof) to construct the required circles. He also extended the problem and his construction to the case of three given circles instead of the sides of a triangle (see \cite{steiner 1}, \cite{steiner 2}). Cayley referred to this problem in \cite{cayley} as \emph{Steiner's extension of Malfatti's problem}. We note that Cayley investigated and solved a further generalization in \cite{cayley}, which he also called Steiner's extension of Malfatti's problem. His problem is \emph{to determine three conic sections so that each of them touches the two others, and also touches two of three more given conic sections.} Since the case of circles on the sphere is a generalization of the case of circles of the plane (as can easily be seen by stereographic projection), Cayley indirectly proved Steiner's second construction. We also have to mention Hart's nice geometric proof for Steiner's construction, which was published in \cite{hart}. (It can be found in various textbooks, e.g. \cite{casey}, and also on the web.) In the paper \cite{gho 1} we presented a form of Steiner's construction which meets the original problem in the best way. We note (see the discussion in the proof) that our theorem has a more general form giving all possible solutions of the problem. However, for simplicity we restrict ourselves to the most plausible case, when the cycles touch each other from outside. In \cite{gho 1} we used the fact that cycles are represented by circles in the conformal model of Poincar\'e.
The Euclidean constructions of circles in this model give hyperbolic constructions on cycles in the hyperbolic plane. To carry out these constructions manually we have to use special rulers and calipers to draw the distinct types of cycles. For brevity, we think of a fixed conformal model of the embedding Euclidean plane and keep the names of the known Euclidean concepts for the corresponding concepts of the hyperbolic plane, too. We now interpret this proof without using models. We use Gergonne's construction (see the Euclidean version in \cite{dorrie}, and the hyperbolic one in \cite{gho 1} or \cite{gho 3}), which solves the problem: \emph{Construct a circle (cycle) touching three given circles (cycles) of the plane.} \begin{figure} \caption{Steiner's construction.} \end{figure} \begin{thm}[\cite{gho 1}] Steiner's construction can also be done in the hyperbolic plane. More precisely, for three given non-overlapping cycles there can be constructed three other cycles, each of them touching the two other ones from outside and also touching two of the three given cycles from outside. \end{thm} \begin{proof} Denote by $c_i$ the given cycles. Now the steps of Steiner's construction are the following. \begin{enumerate} \item Construct the cycle of inversion $c_{i,j}$ for the given cycles $c_i$ and $c_j$, where the centre of inversion is the external centre of similitude of them. (I.e., the centre of $c_{i,j}$ is the centre of the above inversion, and $c_i,c_j$ are images of each other with respect to inversion in $c_{i,j}$. Observe that $c_{i,j}$ separates $c_i$ and $c_j$.) \item Construct the cycle $k_j$ touching the two cycles $c_{i,j}, c_{j,k}$ and the given cycle $c_j$, in such a way that $k_j,c_j$ touch from outside, and $k_j,c_{i,j}$ (or $c_{j,k}$) touch in such a way that $k_j$ lies on that side of $c_{i,j}$ (or $c_{j,k}$) on which $c_j$ lies. \item Construct the cycle $l_{i,j}$ touching $k_i$ and $k_j$ through the point $P_k=k_k\cap c_k$.
\item Construct Malfatti's cycle $m_j$ as the common touching cycle of the four cycles $l_{i,j}$, $l_{j,k}$, $c_i$, $c_k$. \end{enumerate} The first step is the hyperbolic interpretation of the analogous well-known Euclidean construction of circles. In the second step we follow Gergonne's construction (see \cite{gho 3}). The third step is a special case of the second one (one of the given cycles is now a point). Obviously the general construction can be done in this case, too. The fourth step is again an application of the second one, choosing three arbitrary cycles from the four, since the quadrangles determined by the cycles have incircles. Finally we have to prove that this construction gives the Malfatti cycles. As we saw, the Malfatti cycles exist (see Theorem 1 in \cite{gho 1}). We also know that in an embedding hyperbolic space the examined plane can be inverted to a sphere. The trigonometry of the sphere is absolute, implying that the possibility of a construction which can be checked by trigonometric calculations is independent of whether the embedding space is hyperbolic or Euclidean. Of course, Steiner's construction is just such a construction; the touching position of circles on the sphere can be checked by spherical trigonometry. So we may assume that the examined sphere is a sphere of the Euclidean space, and we can apply Cayley's analytic methods (see \cite{cayley}) by which he proved that Steiner's construction works on a surface of second order. Hence the above construction produces the required touching positions. \end{proof} \subsection{Applications for triangle centers} There are many interesting statements on triangle centers. In this section we mention some of them, concentrating only on the centroid, the circumcenters and the incenters, respectively.
The notation of this subsection follows the previous part of this paper: \emph{the vertices} of the triangle are $A,B,C$, the corresponding angles are $\alpha,\beta,\gamma$ and the lengths of the sides opposite to the vertices are $a,b,c$, respectively. We also use the notation $2s=a+b+c$ for the \emph{perimeter} of the triangle. Let $R,r,r_A,r_B,r_C$ denote the radius of the \emph{circumscribed cycle}, the radius of the \emph{inscribed cycle} (shortly, incycle), and the radii of the \emph{escribed cycles} opposite to the vertices $A,B,C$, respectively. We do not assume that the points $A,B,C$ are real and that the distances are positive numbers. In most cases the formulas remain valid for ideal elements and elements at infinity, and when the distances are complex numbers, respectively. The only exceptions occur when an operation required by the formula in question has no exact mathematical meaning. Before examining hyperbolic triangle centers, we collect some further important formulas on hyperbolic triangles. We can consider them in our extracted manner. \subsubsection{Staudtian and angular Staudtian of a hyperbolic triangle:} The concept of the \emph{Staudtian of a hyperbolic triangle} is somewhat similar (but definitely not identical) to the concept of the Euclidean area. In spherical trigonometry, twice this very important quantity was called by Staudt the sine of the trihedral angle $O-ABC$, and later Neuberg suggested the names (first) ``Staudtian'' and the ``norm of the sides'', respectively. In this paper we prefer the name ``Staudtian'' to honour the great geometer Staudt. Let $$ n=n(ABC):=\sqrt{\sinh s\sinh (s-a)\sinh (s-b)\sinh (s-c)}. $$ Then we have \begin{equation} \sin \frac{\alpha}{2}\sin \frac{\beta}{2}\sin \frac{\gamma}{2}=\frac{n^2}{\sinh s\sinh a\sinh b\sinh c}.
\end{equation} This observation leads to the following formulas for the Staudtian: \begin{equation} \sin \alpha =\frac{2n}{\sinh b\sinh c}, \quad \sin \beta =\frac{2n}{\sinh a\sinh c}, \quad \sin \gamma =\frac{2n}{\sinh a\sinh b}. \end{equation} From the first equality of (4.2) we get that \begin{equation} n=\frac{1}{2} \sin \alpha \sinh b\sinh c= \frac{1}{2}\sinh h_C\sinh c, \end{equation} where $h_C$ is the height of the triangle corresponding to the vertex $C$. As a consequence of this concept we can give homogeneous coordinates of the points of the plane with respect to a basic triangle as follows: \begin{defn} Let $ABC$ be a non-degenerate reference triangle of the hyperbolic plane. If $X$ is an arbitrary point, we define its coordinates by the ratio of Staudtians $X:=\left(n_A(X):n_B(X):n_C(X)\right)$, where $n_A(X)$, $n_B(X)$ and $n_C(X)$ denote the Staudtians of the triangles $XBC$, $XCA$ and $XAB$, respectively. This triple of \emph{triangular coordinates} represents the point $X$ with respect to the triangle $ABC$. \end{defn} Consider finally the ratio of section $(BX_AC)$, where $X_A$ is the foot of the transversal $AX$ on the line $BC$. If $n(BX_AA)$, $n(CX_AA)$ denote the Staudtians of the triangles $BX_AA$, $CX_AA$, respectively, then, using (4.3), we have $$ (BX_AC)=\frac{\sinh BX_A}{\sinh X_AC}=\frac{\frac{1}{2}\sinh h_C\sinh BX_A}{\frac{1}{2}\sinh h_C\sinh X_A C}=\frac{n(BX_AA)}{n(CX_AA)}= $$ $$ =\frac{\frac{1}{2}\sinh c\sinh AX_A\sin(BAX_A)\measuredangle}{\frac{1}{2}\sinh b\sinh AX_A\sin(CAX_A)\measuredangle}=\frac{\sinh c\sinh AX\sin(BAX_A)\measuredangle}{\sinh b\sinh AX\sin(CAX_A)\measuredangle}=\frac{n_C(X)}{n_B(X)}, $$ proving that \begin{equation} (BX_AC)=\frac{n_C(X)}{n_B(X)}, \quad (CX_BA)=\frac{n_A(X)}{n_C(X)}, \quad (AX_CB)=\frac{n_B(X)}{n_A(X)}.
\end{equation} The \emph{angular Staudtian} of the triangle, defined by the equality $$ N=N(ABC):=\sqrt{\sin\delta\sin(\delta+\alpha)\sin(\delta+\beta)\sin(\delta+\gamma)}, $$ is the ``dual'' of the concept of the Staudtian, and thus we have similar formulas for it. From the law of cosines for the angles we have $\cos \gamma=-\cos \alpha\cos \beta + \sin \alpha\sin\beta \cosh c$, and combining this with the addition formula of the cosine function we get that $$ \sin \alpha\sin\beta(\cosh c-1)=\cos \gamma +\cos (\alpha +\beta)=2\cos\frac{\alpha+\beta+\gamma}{2}\cos\frac{\alpha+\beta-\gamma}{2}. $$ From this we obtain that \begin{equation} \sinh\frac{c}{2}=\sqrt{\frac{\sin\delta\sin(\delta+\gamma)}{\sin \alpha\sin\beta}}. \end{equation} Analogously we get that \begin{equation} \cosh \frac{c}{2}=\sqrt{\frac{\sin(\delta+\beta)\sin(\delta+\alpha)}{\sin \alpha\sin\beta}}. \end{equation} From these equations it follows that \begin{equation} \cosh \frac{a}{2}\cosh \frac{b}{2}\cosh \frac{c}{2}=\frac{N^2}{\sin \alpha\sin\beta\sin \gamma\sin\delta}. \end{equation} Finally we also have that \begin{equation} \sinh a =\frac{2N}{\sin \beta\sin \gamma}, \quad \sinh b=\frac{2N}{\sin \alpha\sin \gamma}, \quad \sinh c =\frac{2N}{\sin \alpha\sin \beta}, \end{equation} and from the first equality of (4.8) we get that \begin{equation} N=\frac{1}{2} \sinh a \sin\beta\sin\gamma= \frac{1}{2}\sinh h_C\sin\gamma. \end{equation} The connection between the two Staudtians is given by the formula \begin{equation} 2n^2=N\sinh a\sinh b\sinh c. \end{equation} Dividing the first equality of (4.2) by the analogous one in (4.8), we get that $\frac{\sin \alpha}{\sinh a}=\frac{n}{N}\frac{\sin \beta}{\sinh b}\frac{\sin \gamma}{\sinh c}$, implying the equality \begin{equation} \frac{N}{n}=\frac{\sin \alpha}{\sinh a}. \end{equation} \subsubsection{On the centroid (or median point) of a triangle} We denote the medians of the triangle by $AM_A,BM_B$ and $CM_C$, respectively.
The feet of the medians are $M_A$, $M_B$ and $M_C$. The existence of their common point $M$ follows from the Menelaos theorem (\cite{szasz}). For instance, if $AB$, $BC$ and $AC$ are real lines and the points $A,B$ and $C$ are ideal points, then $AM_C=M_CB=d=a/2$ implies that $M_C$ is the middle point of the real segment lying on the line $AB$ between the intersection points of the polars of $A$ and $B$ with $AB$, respectively (see Fig. 5). \begin{figure} \caption{Centroid of a triangle with ideal vertices.} \end{figure} The fact that the centroid exists implies new real hyperbolic statements, e.g.: \emph{Consider a real hexagon with six right angles. Then the lines containing the middle points of the sides and perpendicular to the respective opposite sides of the hexagon are concurrent}. \begin{thm}[\cite{gho 3}] We have the following formulas connected with the centroid: \begin{equation} n_A(M)=n_B(M)=n_C(M), \end{equation} \begin{equation} \frac{\sinh AM}{\sinh MM_A}=2 \cosh \frac{a}{2}, \end{equation} \begin{equation} \frac{\sinh AM_A}{\sinh MM_A}=\frac{\sinh BM_B}{\sinh MM_B}=\frac{\sinh CM_C}{\sinh MM_C}=\frac{n}{n_A(M)}, \end{equation} \begin{equation} \sinh d'_M=\frac{\sinh d'_A +\sinh d'_B +\sinh d'_C}{\sqrt{1+2(1+\cosh a +\cosh b+\cosh c)}}, \end{equation} where $d'_A$, $d'_B$, $d'_C$, $d'_M$ denote the signed distances of the points $A,B,C,M$ to a line $y$, respectively. Finally we have \begin{equation} \cosh YM=\frac{\cosh YA +\cosh YB +\cosh YC}{\frac{n}{n_A(M)}}, \end{equation} where $Y$ is a point of the plane. (4.15) and (4.16) are called the ``center-of-gravity'' property of $M$ and the ``minimality'' property of $M$, respectively.
\end{thm} \begin{rem} Approximating the hyperbolic functions by their first-order Taylor polynomials, we get from this formula the following one: $ d'_M=\frac{d'_A+d'_B+d'_C}{3}$, which associates the centroid with the physical concept of center of gravity and shows that the center of gravity of three equal weights placed at the vertices of a triangle is at $M$. \end{rem} \begin{rem} The minimality property of $M$ for $Y=M$ says that $\cosh MA +\cosh MB +\cosh MC=\sqrt{1+2(1+\cosh a +\cosh b+\cosh c)}$. This implies $\cosh YA +\cosh YB +\cosh YC=(\cosh MA +\cosh MB +\cosh MC) \cosh YM$. From the second-order approximation of $\cosh x$ we get that \break $3+\frac{1}{2}\left(YA^2+ YB^2+ YC^2\right)=\left(3+\frac{1}{2}\left(MA^2+ MB^2+ MC^2\right)\right)\left(1+\frac{1}{2}YM^2\right)$. From this (keeping only the terms of order at most $2$) we get a Euclidean identity characterizing the centroid: $YA^2+YB^2+YC^2=MA^2+MB^2+MC^2+3YM^2$. As a further consequence we see immediately that the value $\cosh YA +\cosh YB +\cosh YC$ is minimal if and only if $Y$ is the centroid. \end{rem} \subsubsection{On the center of the circumscribed cycle} Denote by $O$ the center of the circumscribed cycle of the triangle $ABC$. In the extracted plane $O$ always exists and can be a real point, a point at infinity or an ideal point. Since we have two possibilities to choose each of the segments $AB$, $BC$ and $AC$ on their respective lines, we have four possibilities to get a circumscribed cycle. One of them corresponds to the segments with real lengths, and the others are obtained if we choose one segment with real length and two segments with complex lengths, respectively. If $A,B,C$ are real points, the first cycle can be a circle, a paracycle or a hypercycle, but the other three are always hypercycles.
For example, let $a'=a=BC$ be a real length, and $b'=-b+\pi i$, $c'=-c +\pi i$ be complex lengths, respectively. Then we denote by $O_A$ the corresponding (ideal) center and by $R_A$ the corresponding (complex) radius. We also note that the latter three hypercycles have a geometric meaning: they are the hypercycles whose fundamental lines contain a pair of the midpoints of the edge-segments and which pass through the vertex of the triangle at which the corresponding edges meet. \begin{thm} The following formulas are valid for the circumradii: \begin{equation} \tanh R=\frac{\sin\delta}{N}, \quad \tanh R_A=\frac{\sin(\delta+\alpha)}{N}, \end{equation} \begin{equation} \tanh R=\frac{2\sinh\frac{a}{2}\sinh\frac{b}{2}\sinh\frac{c}{2}}{n}, \quad \tanh R_A=\frac{2\sinh\frac{a}{2}\cosh\frac{b}{2}\cosh\frac{c}{2}}{n}, \end{equation} \begin{equation} n_A(O):n_B(O)=\cos (\delta+\alpha)\sinh a:\cos (\delta+\beta)\sinh b. \end{equation} \end{thm} \begin{rem} The first-order Taylor polynomials of the hyperbolic functions of the distances lead to a correspondence between the hyperbolic Staudtians and the Euclidean area $T$, yielding further Euclidean formulas. More precisely, we have $n=T$ and $N=\frac{T\sin\alpha}{a}=\frac{Ta}{2Ra}=\frac{T}{2R}$. Hence we get the following formula: $\sin\alpha\sin\beta\sin\gamma=\frac{2N^2}{n}=\frac{2T^2}{4R^2T}=\frac{T}{2R^2}$ or, equivalently, the known Euclidean dependence of these quantities: $T=2R^2\sin\alpha\sin\beta\sin\gamma$. \end{rem} \begin{rem} Use the minimality property of $M$ for the point $Y=O$. Then we have $\sqrt{1+2(1+\cosh a +\cosh b+\cosh c)}\cosh OM=\cosh OA +\cosh OB +\cosh OC=3\cosh R$. Approximating this we get the equation $3\left(1+\frac{R^2}{2}\right)= \sqrt{9+a^2+b^2+c^2}\left(1+\frac{OM^2}{2}\right)= 3\sqrt{1+\frac{a^2+b^2+c^2}{9}}\left(1+\frac{OM^2}{2}\right)$. We approximate the functions on the right-hand side to second order.
If we multiply these polynomials and keep only the terms of order at most 2, we can deduce the equation $1+\frac{R^2}{2}=1+\frac{a^2+b^2+c^2}{2\cdot 9}+\frac{OM^2}{2}$, and hence we deduce the Euclidean formula $OM^2=R^2-\frac{a^2+b^2+c^2}{9}$. \end{rem} \begin{cor} Applying (4.18) to a triangle with four ideal circumcenters, we get a formula which determines the common distance of three points of a hypercycle from its basic line. In fact, if $d$ denotes the distance in question, then $\frac{2\sinh\frac{a}{2}\sinh\frac{b}{2} \sinh\frac{c}{2}}{n}= \tanh R=\tanh \left(d+\varepsilon\frac{\pi}{2}i\right)= \frac{\sinh \left(d+\varepsilon\frac{\pi}{2}i\right)}{\cosh\left(d+\varepsilon\frac{\pi}{2}i\right)}=\frac{\varepsilon i\cosh d}{\varepsilon i\sinh d }=\coth d$, and we get: \begin{equation} \tanh d=\frac{n}{2\sinh\frac{a}{2}\sinh\frac{b}{2}\sinh\frac{c}{2}}. \end{equation} For the Euclidean analogue of this equation we can use the first-order Taylor polynomials of the hyperbolic functions. Our formula yields $\frac{1}{R}=d=\frac{4T}{abc}$, implying a well-known connection among the sides, the circumradius and the area of a triangle. \end{cor} \subsubsection{On the center of the inscribed and escribed cycles} It is well known that the bisectors of the interior angles of a hyperbolic triangle are concurrent at a point $I$, called the incenter, which is equidistant from the sides of the triangle. The radius of the \emph{incircle} or \emph{inscribed circle}, whose center is at the incenter and which touches the sides, shall be designated by $r$. Similarly, the bisector of any interior angle and those of the exterior angles at the other vertices are concurrent at a point outside the triangle; these three points are called \emph{excenters}, and the corresponding tangent cycles \emph{excycles} or \emph{escribed cycles}. The excenter lying on $AI$ is denoted by $I_A$, and the radius of the escribed cycle with center at $I_A$ is $r_A$.
We denote by $X_A$, $X_B$, $X_C$ the points where the interior bisectors meet $BC$, $AC$, $AB$, respectively. Similarly, $Y_A$, $Y_B$ and $Y_C$ denote the intersection points of the exterior bisectors at $A$, $B$ and $C$ with $BC$, $AC$ and $AB$, respectively. \begin{figure} \caption{Incircles and excycles.} \end{figure} We note that the excenters and the points of intersection of the sides with the bisectors of the corresponding exterior angles could be points at infinity or could also be ideal points. Let $Z_A$, $Z_B$ and $Z_C$ denote the touching points of the incircle with the lines $BC$, $AC$ and $AB$, respectively, and let the touching points of the excycles with centers $I_A$, $I_B$ and $I_C$ be given by the triples $\{V_{A,A},V_{B,A},V_{C,A}\}$, $\{V_{A,B},V_{B,B},V_{C,B}\}$ and $\{V_{A,C},V_{B,C},V_{C,C}\}$, respectively (see Fig. 6). \begin{thm}[\cite{gho 3}] For the radii $r$, $r_A$, $r_B$ and $r_C$ we have the following formulas: \begin{equation} \tanh r=\frac{n}{\sinh s}, \quad \tanh r_A=\frac{n}{\sinh (s-a)}, \end{equation} \begin{equation} \tanh r=\frac{N}{2\cos \frac{\alpha}{2}\cos \frac{\beta}{2}\cos \frac{\gamma}{2}}, \end{equation} \begin{eqnarray} \coth r & = & \frac{\sin(\delta+\alpha)+\sin(\delta+\beta)+\sin(\delta+\gamma)+\sin\delta}{2N}, \\ \coth r_A & = & \frac{-\sin(\delta+\alpha)+\sin(\delta+\beta)+\sin(\delta+\gamma)-\sin\delta}{2N}, \end{eqnarray} \begin{eqnarray} \tanh R+\tanh R_A & = & \coth r_B+\coth r_C, \\ \nonumber \tanh R_B+\tanh R_C & = & \coth r+\coth r_A, \\ \nonumber \tanh R +\coth r & = & \frac{1}{2}\left(\tanh R+\tanh R_A+\tanh R_B+\tanh R_C\right), \end{eqnarray} \begin{eqnarray} n_A(I):n_B(I):n_C(I) & = & \sinh a:\sinh b:\sinh c,\\ n_A(I_A):n_B(I_A):n_C(I_A) & = & -\sinh a :\sinh b: \sinh c. \end{eqnarray} \end{thm} The following theorem describes relations between the distance of the incenter and the circumcenter, the radii $r,R$ and the side-lengths $a,b,c$.
\begin{thm}[\cite{gho 3}] Let $O$ and $I$ be the centers of the circumscribed and inscribed circles, respectively. Then we have \begin{equation} \cosh OI=2\cosh \frac{a}{2}\cosh \frac{b}{2}\cosh \frac{c}{2}\cosh r\cosh R-\cosh\frac{a+b+c}{2}\cosh(R-r). \end{equation} \end{thm} \begin{rem} The second-order approximation of (4.28) leads to the equality $1+\frac{OI^2}{2}=2\left(1+\frac{r^2}{2}\right)\left(1+\frac{R^2}{2}\right)\left(1+\frac{a^2}{8}\right)\left(1+\frac{b^2}{8}\right) \left(1+\frac{c^2}{8}\right)- $ \break $ -\left(1+\frac{(a+b+c)^2}{8}\right)\left(1+\frac{(R-r)^2}{2}\right)$. From this we get that $OI^2=R^2+r^2+\frac{a^2+b^2+c^2}{4}-\frac{ab+bc+ca}{2}+2Rr$. But for Euclidean triangles we have (see \cite{bell}) $a^2+b^2+c^2=2s^2-2(4R+r)r$ and $ab+bc+ca=s^2+(4R+r)r$. The equality above thus leads to Euler's formula: $OI^2=R^2-2rR$. \end{rem} \end{document}
\begin{document} \title{Efficient constant factor approximation algorithms for stabbing line segments with equal disks \thanks{This work was supported by the Russian Foundation for Basic Research, project 19-07-01243. The paper represents a major extension of the short paper that appeared in the proceedings of the 12th Annual International Conference on Learning and Intelligent Optimization (LION 2018) \cite{kobylkin_lion}. It also extends work presented at the International Conference on Mathematical Optimization Theory and Operations Research (MOTOR 2019) \cite{kobylkin_dryakhlova}.}} \author{Konstantin Kobylkin} \titlerunning{Efficient approximation algorithms for stabbing line segments with equal disks} \institute{K. Kobylkin \at Krasovsky Institute of Mathematics and Mechanics, Russia, Ekaterinburg, Sophya Kovalevskaya str. 16\\ Ural Federal University, Russia, Ekaterinburg, Mira str. 19\\ \email{[email protected]} } \date{Received: date / Accepted: date} \maketitle \begin{abstract} An NP-hard problem is considered of intersecting a given set of $n$ straight line segments on the plane with the smallest cardinality set of disks of fixed radii $r>0,$ where the set of segments forms a straight line drawing $G=(V,E)$ of a planar graph without proper edge crossings. To the best of our knowledge, related work only tackles a setting where $E$ consists of (generally, properly overlapping) axis-parallel segments, resulting in an $O(n\log n)$-time and $O(n\log n)$-space 8-approximation algorithm. Exploiting a close connection of the problem with the geometric Hitting Set problem, a $\left(50+52\sqrt{\frac{12}{13}}+\nu\right)$-approximate $O\left(n^4\log n\right)$-time and $O\left(n^2\log n\right)$-space algorithm is devised, based on a modified Agarwal-Pan algorithm, which uses epsilon nets.
More accurate $(34+24\sqrt{2}+\nu)$- and $\left(\frac{144}{5}+32\sqrt{\frac{3}{5}}+\nu\right)$-approxi\-mate algorithms are also proposed for the cases where $G$ is any subgraph of either a generalized outerplane graph or a Delaunay triangulation, respectively, which work within the same time and space complexity bounds, where $\nu>0$ is an arbitrarily small constant. \keywords{approximation algorithm \and geometric Hitting Set problem \and epsilon net \and Delaunay triangulation \and line segments} \end{abstract} \section{Introduction} \label{intro} Design of fast and accurate approximation algorithms is a very important topic for combinatorial optimization problems which are both NP- and W[1]-hard. Roughly speaking, W[1]-hardness of a problem means that every polynomial time approximation scheme (PTAS) for the problem must have complexity of the order $\Omega\left(L^{f(1/\varepsilon)}\right)$ for a monotonic computable function $f$ if some reasonable conjecture holds true, where $L$ represents the length of the problem input. Many problems from computational geometry are both NP- and W[1]-hard, including a wide class of problems related to optimal coverage, piercing or intersection of given families of geometric objects on the plane with simply shaped objects. Some problems from this class can be stated in the following general form. Suppose a family ${\cal{F}}$ is given of objects from $\mathbb{R}^2$ of constant complexity, which can be e.g. disks, straight line segments, triangles etc. The problem is to find the smallest cardinality set ${\cal{D}}$ of translates of a given object $D_0\subset\mathbb{R}^2$ such that either for each $F\in {\cal{F}}$ there is some $D=D(F)\in{\cal{D}},$ which intersects $F$ in some prescribed way, or $F\subset\bigcup\limits_{D\in{\cal{D}}}D$ for each $F\in {\cal{F}}.$ Design and analysis of approximation algorithms for problems, which can be written in this form, is an area of ongoing research (see e.g.
works \cite{basappa}, \cite{biniaz}, \cite{nandy}, \cite{kobylkin_dryakhlova}, \cite{mudgal}). In the present paper fast constant factor approximation algorithms are constructed for a special NP-hard (\cite{nandy}, \cite{kobylkin}) geometric intersection problem, which fits into the general form above. Namely, in this problem ${\cal{F}}$ coincides with a set $E$ of straight line segments, ${\cal{D}}$ consists of identical disks and object intersection is understood in its common sense. The problem has applications in facility location and sensor network deployment. \noindent {\sc Intersecting Plane Graph with Disks (IPGD)}: given a straight line drawing (or a plane graph) $G=(V,E)$ of an arbitrary simple planar graph without proper edge crossings and a constant $r>0,$ find the smallest cardinality set ${\cal{D}}$ of disks of radius $r$ such that $e\cap\bigcup\limits_{D\in{\cal{D}}}D\neq\varnothing$ for each edge $e\in E.$ Here each isolated vertex $v\in V$ is treated as a zero-length segment $e_v\in E.$ This problem is obviously equivalent to finding the smallest cardinality point set $C\subset\mathbb{R}^2$ such that each $e\in E$ is within Euclidean distance $r$ from some point $c=c(e)\in C.$ In fact, $C$ represents the set of centers of radius $r$ disks forming a solution to the {\sc IPGD} problem. The latter equivalent formulation of the {\sc IPGD} problem can further be reduced to a special case of the well-known geometric {\sc Hitting Set} problem on the plane.
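The equivalent distance formulation can be made concrete: a point set $C$ is feasible for the {\sc IPGD} problem iff every edge of $E$ lies within Euclidean distance $r$ of some point of $C$. The following Python sketch (our illustration, not part of the paper; all names are ours) implements this feasibility check, handling zero-length segments by the same formula.

```python
import math

def seg_dist(p, a, b):
    """Euclidean distance d(p, e) from a point p to the segment e = [a, b].
    A zero-length segment (an isolated vertex) is handled by the same formula."""
    ax, ay = a; bx, by = b; px, py = p
    vx, vy = bx - ax, by - ay
    denom = vx * vx + vy * vy
    # clamp the projection parameter to [0, 1]
    t = 0.0 if denom == 0 else max(0.0, min(1.0, ((px - ax) * vx + (py - ay) * vy) / denom))
    return math.hypot(px - (ax + t * vx), py - (ay + t * vy))

def stabs(centers, E, r):
    """True iff every edge e in E meets some disk of radius r centered in `centers`,
    i.e. the centers hit every r-hippodrome N_r(e)."""
    return all(any(seg_dist(c, a, b) <= r for c in centers) for (a, b) in E)

# a tiny plane graph: a path on three vertices
E = [((0, 0), (2, 0)), ((2, 0), (2, 2))]
print(stabs([(2, 0)], E, 0.5))   # a disk at the common vertex hits both edges: True
print(stabs([(0, 1.4)], E, 0.5)) # too far from both edges: False
```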
In its general form the {\sc Hitting Set} problem is formulated as follows: \noindent {\sc Hitting Set}: Given a point set $Y\subseteq\mathbb{R}^2$ and a family ${\cal{R}}$ of subsets (also called {\it objects}) of $\mathbb{R}^2,$ find the smallest cardinality subset $H\subseteq Y$ such that $H\cap R\neq\varnothing$ for all $R\in{\cal{R}}.$ Here, a subset $H\subset \mathbb{R}^2$ is called a {\it hitting set} for ${\cal{R}}$ if $H\cap R\neq\varnothing$ for every $R\in{\cal{R}}.$ A pair $(Y,{\cal{R}}),$ called a {\it range space}, is associated with each instance of the {\sc Hitting Set} problem. To describe a {\sc Hitting Set} formulation of the {\sc IPGD} problem, some notation is given below. Suppose $N_r(e)=\{x\in\mathbb{R}^2:d(x,e)\leq r\},$ ${\cal{N}}_r(E)=\{N_r(e):e\in E\}$ and $d(x,e)$ is the Euclidean distance between a point $x\in{\mathbb{R}}^2$ and a segment $e\in E;$ for a zero-length segment $x\in\mathbb{R}^2,$ $N_r(x)$ denotes the radius $r$ disk centered at $x.$ Each object from ${\cal{N}}_r(E)$ is a Euclidean $r$-neighborhood of some segment of $E,$ also called an {\it $r$-hippodrome} or {\it $r$-offset} in the literature \cite{nandy}. The {\sc IPGD} problem can equivalently be formulated as follows: {\sc Piercing Euclidean Hippodromes (PEH).} Given a set ${\cal{N}}_r(E)$ of $r$-hippodromes on the plane whose underlying straight line segments form the edge set of some plane graph $G=(V,E),$ find the minimum cardinality hitting set for ${\cal{N}}_r(E).$ Thus, the {\sc IPGD} problem is reduced to the {\sc Hitting Set} problem with $Y=\mathbb{R}^2$ and ${\cal{R}}={\cal{N}}_r(E),$ where ${\cal{N}}_r(E)$ is the set of $r$-hippodromes formed by segments from $E.$ \subsection{Applications} The {\sc IPGD} problem is of interest in network security analysis, sensor network deployment and facility location. In the work \cite{nandy} its sensor deployment applications to road networks are reported.
Namely, in this work an application is considered to monitor a road network using identical sensors with circular sensing areas. Geometrically, network roads are modeled by piecewise linear arcs on the plane. One can split these arcs into chains of elementary straight line segments such that any two of the resulting elementary segments intersect at most at their endpoints. When full road network surveillance is costly, it might be a good idea to place the minimum number of sensors such that each piece of every road (represented by an elementary segment) is partially covered by the sensing area of some of the placed sensors. Using this network of placed sensors, the geographic locations of all vehicles moving on the road network can be identified. Here the coordinates of the moving vehicles are given by the coordinates of the respective elementary segments. The aforementioned modeling approach leads to a geometric combinatorial optimization model which coincides with the {\sc IPGD} problem. A variety of settings of optimal location problems, in which facilities (e.g. petrol stations) are to be placed near the roads of a given network, can be reduced to the {\sc IPGD} problem. The {\sc IPGD} problem also has a network security analysis application for optical fiber networks, which is inspired by the work \cite{journals/ton/AgarwalEGHSZ13} and described in detail in \cite{kobylkin}. \subsection{Related work} To the best of our knowledge, settings close to the {\sc IPGD} problem were first considered in \cite{nandy}. The authors explore the case in which ${\cal{D}}$ contains identical disks and $E$ consists of (generally properly overlapping) axis-parallel segments. Their algorithms can easily be extended to the case of sets $E$ of straight line segments with a bounded number of distinct orientations. Moreover, in \cite{nandy} two polynomial time approximation schemes are proposed.
The first one is for the case of axis-parallel segments and the other one is for the case of segments whose Euclidean lengths are within a constant factor of $r.$ The {\sc IPGD} problem generalizes the classical NP-hard unit disk covering problem, in which one needs to cover a given finite point set $E$ on the plane with the least cardinality set ${\cal{D}}$ of unit disks; in the {\sc IPGD} problem setting, $E$ generally contains non-zero length segments instead of points. A PTAS based on local search exists \cite{mudgal} for a more general version of the {\sc IPGD} problem in which the disks of ${\cal{D}}$ are chosen from some prescribed finite set ${\cal{H}}$ of generally non-equal disks. Existence of a PTAS can also be established by reduction of the {\sc IPGD} problem to the {\sc PEH} problem. More precisely, the fact can be proved that ${\cal{N}}_r(E)$ is a family of closed convex pseudo-disks under some mild restrictions on $E,$ and then a PTAS can be constructed for the {\sc PEH} problem using the general approach from \cite{muray}. To emphasize the close connection of the {\sc IPGD} problem with the {\sc Hitting Set} problem, a proof sketch of the latter fact is given below. More precisely, without loss of generality it can be shown, according to the definition of pseudo-disks, that $|{\rm{bd}}\,N_1\cap {\rm{bd}}\,N_2|\leq 2$ for any distinct $N_1,N_2\in {\cal{N}}_r(E)$ and both $N_1\backslash N_2$ and $N_2\backslash N_1$ are connected, where ${\rm{bd}}\,N$ denotes the boundary of a set $N\subset\mathbb{R}^2$ and $|N|$ denotes its cardinality.
Indeed, as straight line segments from $E$ intersect at most at their endpoints, segments of $E$ can be shifted slightly to become pairwise disjoint and non-parallel while keeping all nonempty intersections of subsets of objects from ${\cal{N}}_r(E)$ with some slightly larger $r.$ For two non-overlapping segments $e$ and $e'$ one can see that $|{\rm{bd}}\,N_r(e)\cap{\rm{bd}}\,N_r(e')|\leq 2$ because the Euclidean distance grows strictly monotonically from $e$ (or from $e')$ to a point of the curve $\chi(e,e')=\{x\in\mathbb{R}^2:d(x,e)=d(x,e')\}$\footnote{In fact the curve $\chi(e,e')$ is composed of pieces of straight lines and parabolas.} as that point moves along $\chi(e,e')$ in either of the two opposite directions starting from the midpoint of the segment $[x,x']$ which joins the points $x\in e$ and $x'\in e',$ where $d(x,x')=d(e,e')$ and $d(e,e')$ denotes the Euclidean distance between $e$ and $e'.$ Here, obviously, ${\rm{bd}}\,N_r(e)\cap {\rm{bd}}\,N_r(e')\subset\chi(e,e').$ Points from ${\rm{bd}}\,N_r(e)\cap{\rm{bd}}\,N_r(e')$ split ${\rm{bd}}\,N_r(e)$ into two subcurves, lying on different sides of $\chi(e,e').$ Therefore one of those two subcurves is contained in ${\rm{int}}\,N_r(e'),$ where ${\rm{int}}\,N$ denotes the interior of $N\subset\mathbb{R}^2.$ This implies that $N_r(e')\backslash N_r(e)$ is connected and concludes the proof of the fact that ${\cal{N}}_r(E)$ can be considered to be a set of pseudo-disks. When pairs of segments of $E$ are allowed to intersect properly and to have an arbitrarily large number of distinct orientations, it is difficult to achieve a constant factor approximation, at least by using known approaches.
This is due to the non-constant lower bound obtained in \cite{alon} on the integrality gap of a geometric intersection problem which is close to the {\sc IPGD} problem for $r=0.$ \subsection{Our algorithmic contribution} In contrast to related work, this paper is focused on the design of $O(1)$-approxi\-ma\-tion algorithms for the NP-hard {\sc IPGD} problem where segments from $E$ are allowed to intersect at most at their endpoints, and can have arbitrarily large Euclidean lengths and an arbitrarily large number of distinct orientations. Moreover, the paper focuses on algorithms giving a favourable combination of guaranteed constant approximation factor and time complexity. There is a major challenge in the design of efficient constant factor approximation algorithms for the {\sc IPGD} problem. Namely, an unpleasant tradeoff is observed between the guaranteed constant approximation factor and the time complexity of the known approximation algorithms for the problem. This means that in existing approximation algorithms good accuracy (i.e. an approximation factor close to $1)$ can be guaranteed only at high computational cost, which is unacceptable in practice. Such a situation is very typical in a variety of geometric {\sc Hitting Set} problems on the plane in which the corresponding families ${\cal{R}}$ contain objects of more or less sophisticated shape. Indeed, exploiting the equivalence of the {\sc IPGD} problem to the {\sc PEH} problem on the set ${\cal{N}}_r(E)$ of pseudo-disks, an $O(1)$-approximation can be designed based on epsilon nets \cite{pyrga} and the Agarwal-Pan iterative reweighting algorithm \cite{agarwal}. It has reasonable time complexity (roughly $O(n^4))$ but its guaranteed constant approximation factor is large, being more than $100000.$ Of course, the PTAS for the {\sc IPGD} problem can be adapted to design an $O(1)$-approximation algorithm by setting $\varepsilon$ equal to some small constant.
According to the introduction in \cite{mulimits}, the resulting constant factor approximation algorithm has a huge time complexity of $O(n^{30}).$ Successful attempts have been made to adapt local search for the design of faster low constant factor approximation algorithms. In particular, the local search approach has become quite competitive for designing low constant factor approximations for geometric {\sc Hitting Set} problems with families of disks \cite{mulimits} and pseudo-disks \cite{muhall}. For the {\sc IPGD} problem, an algorithm from \cite{muhall}, constructed for pseudo-disks, yields a 4-approximate solution in $O(n^{13})$ time, which is too costly. Thus, direct application of the aforementioned general algorithms for pseudo-disks to the {\sc IPGD} problem, without relying deeply on the geometry of the range space $(\mathbb{R}^2,{\cal{N}}_r(E)),$ results in a severe tradeoff between their guaranteed constant approximation factor and time complexity. Some other, less general algorithmic techniques fail to work for the {\sc IPGD} problem due to its relatively general setting. In this work, to construct approximation algorithms with a better combination of guaranteed approximation factor and time complexity than that of the aforementioned epsilon net based approximation algorithm for pseudo-disks, implied by the results from \cite{agarwal} and \cite{pyrga}, the {\sc IPGD} problem geometry is incorporated into a slightly modified version of the latter algorithm. Namely, a simple-to-implement $\left(50+52\sqrt{\frac{12}{13}}+\nu\right)$-approximation is proposed for the {\sc IPGD} problem, working in $O\left(\left(n^2+\frac{n\log n}{\nu^2}+\frac{\log n}{\nu^3}\right)n^2\log n\right)$ time. Moreover, $(34+24\sqrt{2}+\nu)$- and $\left(\frac{144}{5}+32\sqrt{\frac{3}{5}}+\nu\right)$-appro\-xi\-mations are given for special segment configurations $E$ defined by generalized outerplane graphs and Delaunay triangulations, arising in network applications.
The latter two algorithms work within the same time complexity bounds, where $\nu>0$ is an arbitrarily small constant. Though the guaranteed approximation factors of our algorithms are larger than $4,$ it is very likely that their actual approximation factors are small in practice. \subsection{Brief description of our algorithms and approaches}\label{gjfjdkksksk} The constant factor approximation algorithms for the {\sc IPGD} problem proposed in this paper in fact approximately solve the {\sc PEH} problem on the corresponding set ${\cal{N}}_r(E)$ of $r$-hippodromes. Below their general layout and the most important stages are briefly discussed. \subsubsection{General layout of our approximation algorithms} Our algorithms contain three main stages. At the first stage the {\sc PEH} problem input is preprocessed to simplify the work at the subsequent stages. The idea of this preprocessing is to remove from ${\cal{N}}_r(E)$ all $r$-hippodromes which have empty intersection with the rest. {\sc Stage 1: preprocessing.} Every object $N\in{\cal{N}}_r(E)$ that is intersected by no other object from ${\cal{N}}_r(E)$ is identified. This can be done in $O(n^2)$ time. Let ${\cal{K}}$ be the set of all such objects. One proceeds to the next two stages to find an approximate solution $S$ to the {\sc PEH} problem for the family ${\cal{N}}_r(E)\backslash{\cal{K}}.$ When this is done, the point set $C:=S\cup C_0$ is returned as the final output, where $C_0=\{c_K:K\in{\cal{K}}\}\subset\mathbb{R}^2$ is such that $c_K\in K$ for each $K\in{\cal{K}}.$ At the second stage the {\sc PEH} problem is discretized. The idea behind this discretization is as follows. Let $f>0$ be some absolute constant. Instead of finding an $f$-approximate solution $S$ to the {\sc PEH} problem, which is a finite subset of the whole plane, a special $f$-approximate solution $S_0\subseteq Y_0$ can always be found, where $Y_0$ is some finite precomputed point set, uniquely defined by the {\sc PEH} problem input.
This discretization allows us to apply the machinery of epsilon nets at the next (third) stage. This algorithmic machinery is developed in \cite{agarwal} and \cite{pyrga} for designing approximation algorithms for {\sc Hitting Set} problems on discrete range spaces $(Y,{\cal{R}}),$ where discreteness of the range space means that $Y$ is finite. {\sc Stage 2: discretization.} Let ${\cal{B}}(E)=\{{\rm{bd}}\,N_r(e):e\in E\}.$ The vertex set of the arrangement ${\cal{A}}(E,r)$ of curves of ${\cal{B}}(E)$ is used as the set $Y_0.$ Of course, $|Y_0|=O(n^2)$ and $Y_0$ can be constructed in a straightforward way by computing pairwise intersections of curves from ${\cal{B}}(E).$ It is easy to see that the {\sc PEH} problem instance can equivalently be reduced to the {\sc Hitting Set} problem instance for $(Y_0,{\cal{N}}_r(E)).$ Indeed, any $k$-element hitting set $C$ for ${\cal{N}}_r(E)$ can be converted into a $k$-element hitting set $C'\subseteq Y_0$ by some sort of (polynomial) point location algorithm \cite{boiso} on the arrangement ${\cal{A}}(E,r)$\footnote{Applied to a point $c\in C$ and the arrangement ${\cal{A}}(E,r),$ a point location algorithm identifies the vertex, edge or face of the arrangement which contains $c;$ after that, either the vertex $c$ itself or an arbitrary vertex of the edge or face containing $c$ can be returned.}. As a consequence, ${\rm{OPT}}(\mathbb{R}^2,{\cal{N}}_r(E))= {\rm{OPT}}(Y_0,{\cal{N}}_r(E))={\rm{OPT}},$ where ${\rm{OPT}}(Y,{\cal{R}})$ denotes the optimum of the {\sc Hitting Set} problem for a given range space $(Y,{\cal{R}}).$ Then, one proceeds to the next (main) stage, now dealing with the discrete range space $(Y_0,{\cal{N}}_r(E))$ instead of the range space $(\mathbb{R}^2,{\cal{N}}_r(E)),$ where the point set $Y_0$ is defined as above. The algorithmic work performed at the main stage is similar in structure to the Agarwal-Pan algorithm from \cite{agarwal}.
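The pairwise intersection tests underlying both the preprocessing of Stage 1 and the construction of $Y_0$ reduce to segment-segment distance computations, since two $r$-hippodromes $N_r(e)$ and $N_r(e')$ intersect iff $d(e,e')\leq 2r.$ The following is only an illustrative sketch of this primitive (the helper names are ours, and segments are assumed to be given as pairs of endpoints):

```python
import itertools

def seg_dist(p, q, a, b):
    """Euclidean distance between segments [p, q] and [a, b]."""
    def orient(u, v, w):
        d = (v[0]-u[0])*(w[1]-u[1]) - (v[1]-u[1])*(w[0]-u[0])
        return (d > 0) - (d < 0)
    # properly crossing segments are at distance 0
    if orient(p, q, a)*orient(p, q, b) < 0 and orient(a, b, p)*orient(a, b, q) < 0:
        return 0.0
    def pt_seg(x, u, v):
        # distance from point x to segment [u, v]
        sx, sy = v[0]-u[0], v[1]-u[1]
        wx, wy = x[0]-u[0], x[1]-u[1]
        L = sx*sx + sy*sy
        t = 0.0 if L == 0 else max(0.0, min(1.0, (wx*sx + wy*sy)/L))
        return ((wx - t*sx)**2 + (wy - t*sy)**2) ** 0.5
    # otherwise the distance is attained at an endpoint of one of the segments
    return min(pt_seg(p, a, b), pt_seg(q, a, b), pt_seg(a, p, q), pt_seg(b, p, q))

def isolated_hippodromes(E, r):
    """Stage 1 sketch: indices of r-hippodromes meeting no other one
    (naive O(n^2) scan over all pairs)."""
    n = len(E)
    meets_other = [False]*n
    for i, j in itertools.combinations(range(n), 2):
        if seg_dist(*E[i], *E[j]) <= 2*r:
            meets_other[i] = meets_other[j] = True
    return [i for i in range(n) if not meets_other[i]]
```

Such a quadratic scan already matches the $O(n^2)$ budget of Stage 1; the arrangement construction for $Y_0$ uses the same pairwise pattern on the boundary curves.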
To describe precisely what is done at this stage, some notation and definitions are given first. A map $w:Y_0\rightarrow\mathbb{Q}_+$ is called a {\it weight map} on $Y_0$ in the sequel, meaning that a positive rational value $w(y)$ is assigned to each point $y\in Y_0.$ Let $w(N)=\sum\limits_{y\in N\cap Y_0}w(y)$ for $N\subseteq\mathbb{R}^2.$ Given ${\cal{N}}\subseteq{\cal{N}}_r(E)$ and a weight map $w$ on $Y_0,$ a triple $(Y_0,{\cal{N}},w)$ is used to denote a range space in which points from $Y_0$ have, in general, unequal positive rational weights. Range spaces $(Y_0,{\cal{N}})$ and $(Y_0,{\cal{N}},w_0)$ are considered equivalent, where $w_0$ is the unit weight map, i.e. $w_0(y)=1$ for all $y\in Y_0$ and $w_0(N)=|N\cap Y_0|$ for $N\subseteq\mathbb{R}^2.$ \begin{definition} Assume that $0<\varepsilon<1,$ $w:Y_0\rightarrow\mathbb{Q}_+$ and ${\cal{N}}\subseteq{\cal{N}}_r(E)$ are given. Let ${\cal{N}}_{\varepsilon}=\{N\in{\cal{N}}:w(N)>\varepsilon w(Y_0)\}.$ A finite subset $Y'\subseteq\mathbb{R}^2$ is called a {\it (weighted) weak $\varepsilon$-net} (see also the definition in \cite{alon}) for a range space $(Y_0,{\cal{N}},w)$ if $Y'\cap N\neq\varnothing$ for any $N\in{\cal{N}}_{\varepsilon},$ i.e. $Y'$ is a hitting set for ${\cal{N}}_{\varepsilon}.$ \end{definition} {\sc Main stage: computing weight maps and epsilon nets.} The work of our algorithms at the main stage is quite involved. Its most important steps are described briefly at first; then a simplified pseudo-code is presented for clarification. At the main stage the following two steps are performed in turn within a binary search loop: {\sc Step 1: finding a weight map.} During the first step either unit weights are assigned to all points from $Y_0$ or a special weight map $w:Y_0\rightarrow\mathbb{Q}_+$ is computed.
{\sc Step 2: constructing an epsilon net.} The second step consists of calls of a special procedure, which, given a weight map $w$ on $Y_0,$ constructs weak $\varepsilon$-nets of cardinality $O\left(\frac{1}{\varepsilon}\right)$ for subspaces of the range space $(Y_0,{\cal{N}}_r(E),w).$ More precisely, let ${\cal{N}}\subseteq{\cal{N}}_r(E),$ a weight map $w:Y_0\rightarrow\mathbb{Q}_+$ and an arbitrary $\varepsilon>0$ be given. A procedure is referred to as an {\it epsilon net finder} if it seeks a weak $\varepsilon$-net $C=C(Y_0,{\cal{N}},w,\varepsilon)\subset \mathbb{R}^2$ of size at most $\frac{M}{\varepsilon}$ for $(Y_0,{\cal{N}},w),$ where $M\geq 1$ is an absolute constant which is specific to this procedure; it is called its {\it performance parameter}. For $\varepsilon\geq 1$ the procedure returns $C=\varnothing.$ Finally, $\varepsilon$ is the parameter which is adjusted within the binary search loop over the two steps above. Namely, using a sort of trial and error method (see, e.g. \cite{agarwal},\cite{bronnimann}), its reciprocal $1/\varepsilon$ is adjusted to be as close to ${\rm{OPT}}$ as possible. Besides, roughly speaking, at the first step one tries to compute a special weight map $w$ on $Y_0$ such that ${\cal{N}}_{\varepsilon}={\cal{N}}_r(E).$ At the same time, a chosen epsilon net finder, called at the second step, returns hitting sets for ${\cal{N}}_{\varepsilon}$ of size at most $\frac{M}{\varepsilon}.$ As $\frac{1}{\varepsilon}$ gets close to ${\rm{OPT}},$ one finally obtains an $O(1)$-approximate solution to the {\sc PEH} problem. Pseudo-code of our algorithms is given below. Let $\mu_0>0$ and $\mu>1$ be some absolute constants to be defined later.
\hrule \noindent {\sc Piercing Hippodromes.} \hrule \noindent {\bf Input:} $r>0$ and a family ${\cal{N}}_r(E)$ of $r$-hippodromes, where $E$ is an edge set of a plane graph; \noindent {\bf Output:} $O(1)$-approximate solution to the {\sc PEH} problem for ${\cal{N}}_r(E).$ \hrule \begin{enumerate} \item compute ${\cal{K}}=\{K\in{\cal{N}}_r(E):K\cap N=\varnothing\,\,\forall N\in{\cal{N}}_r(E), N\neq K\},$ set $E:=E\backslash\{e\in E:N_r(e)\in {\cal{K}}\},$ $k:=1$ and construct a vertex set $Y_0$ of the arrangement of curves from ${\cal{B}}(E)=\{{\rm{bd}}\,N_r(e):e\in E\};$ \item find a weak $\frac{1}{\mu_0 k}$-net $C_k$ for $(Y_0,{\cal{N}}_r(E),w_0)$ of size at most $M\mu_0 k;$ \item set ${\cal{N}}_k:=\{N\in{\cal{N}}_r(E):N\cap C_k=\varnothing\};$ \item set $\varepsilon_k:=\frac{1}{\mu k},$ compute a weight map $w_k=w_k(\cdot |Y_0,{\cal{N}}_k,k)$ on $Y_0$ such that $w_k(N)>\varepsilon_kw_k(Y_0)$ for all $N\in{\cal{N}}_k$ if $w_k$ can be algorithmically computed\footnote{At this step the algorithm also determines if such a weight map $w_k$ can in principle be computed.}; if it can be, set flag:=true; otherwise, set flag:=false; \item if flag=false, set $k:=2k$ and repeat steps 2-4; otherwise, set $k_p:=k;$ \item repeating steps 2-4 within a binary search loop for $k$ in the interval $(k_p/2, k_p],$ find the smallest $k=k_f\in (k_p/2,k_p]$ for which performing those steps gives flag=true; \item find a weak $\varepsilon_{k_f}$-net $C'_{k_f}$ for $(Y_0,{\cal{N}}_{k_f},w_{k_f})$ of size at most $\frac{M}{\varepsilon_{k_f}};$ \item return $C=C_0\cup C_{k_f}\cup C'_{k_f}$ as an $M(\mu_0+\mu)$-approximate solution to the {\sc PEH} problem, where $C_0=\{c_K:K\in{\cal{K}}\}\subset\mathbb{R}^2$ and $c_K\in K$ for each $K\in{\cal{K}}.$ \end{enumerate} \hrule Although the pseudo-code of our algorithms tackles the case of nonempty ${\cal{K}},$ it is assumed below that ${\cal{K}}=\varnothing$ for simplicity of presentation.
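The doubling and binary search over $k$ (steps 5 and 6) can be outlined as follows. This is only a schematic sketch: the callable \texttt{can\_reweight(k)} is a hypothetical stand-in for one pass of steps 2-4 returning the flag, and it is assumed to be monotone in $k$ (once true, it stays true as $k$ grows):

```python
def search_k(can_reweight, k_start=1):
    """Sketch of steps 5-6 of Piercing Hippodromes: double k until the
    weight map becomes computable (flag=true), then binary search for the
    smallest such k in the interval (k_p/2, k_p]."""
    k = k_start
    while not can_reweight(k):  # step 5: doubling phase
        k *= 2
    k_p = k
    lo, hi = k_p // 2 + 1, k_p  # step 6: binary search in (k_p/2, k_p]
    while lo < hi:
        mid = (lo + hi) // 2
        if can_reweight(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo                   # = k_f
```

The doubling phase performs $O(\log k_f)$ passes and the binary search $O(\log k_f)$ more, which is the source of the logarithmic factors in the overall time bound.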
An important step of the {\sc Piercing Hippodromes} algorithm is its step 4, where a special weight map $w_k:Y_0\rightarrow\mathbb{Q}_+$ is computed. This step is implemented by a procedure which is a slightly modified version of the iterative reweighting procedure from \cite{agarwal}. This modified procedure is described in detail in the section \ref{pkgkslsls}. Its performance analysis, given in the proof of the theorem \ref{ksakldkekeksa}, shows that the {\sc Piercing Hippodromes} algorithm finally arrives at the case flag=true at its step 4. Moreover, the value of $k_p$ obtained at its step 5 is such that either $k_p/2<{\rm{OPT}}(Y_0,{\cal{N}}_{k_p})\leq k_p$ or $k_p<{\rm{OPT}}(Y_0,{\cal{N}}_{k_p}),$ thus giving $k_f\leq {\rm{OPT}}(Y_0,{\cal{N}}_{k_f}).$ It is also shown in this paper that the parameters $\mu$ and $\mu_0$ can be chosen such that the upper bound $M(\mu_0+\mu)$ on the constant approximation factor of the {\sc Piercing Hippodromes} algorithm depends mostly on the performance parameter $M$ of the epsilon net finder running at its steps 2 and 7. The time complexity of the {\sc Piercing Hippodromes} algorithm turns out to be of the same order (up to logarithmic factors) as the time complexity of the epsilon net finder. \subsubsection{Our geometric approaches to design an epsilon net finder} In this work a special algorithmic scheme, based on ideas from work \cite{pyrga}, is used to devise epsilon net finders. The key approaches (those of \cite{pyrga} and ours) to building such procedures with small values of $M$ for subspaces of $(Y_0,{\cal{N}}_r(E),w)$ are briefly reported below. The implementation of those procedures and the analysis of their performance are postponed to the section \ref{fhskakjwwoqo}.
Our epsilon net finders follow a slightly modified general algorithmic scheme from \cite{pyrga}, applied to subspaces of the specific range space $(Y_0,{\cal{N}}_r(E),w).$ To give a short description of how those procedures work, let us begin with the following simple algorithmic idea, which gives an $O(1)$-approximation algorithm for a special case of the {\sc PEH} problem in which all segments from $E$ have zero lengths, i.e. the corresponding set ${\cal{N}}_r(E)$ is composed of radius $r$ disks. Namely, the idea consists in applying a ``divide-and-conquer'' heuristic, which extracts a {\it maximal independent} set ${\cal{I}}\subseteq{\cal{N}}_r(E)$ of radius $r$ disks within ${\cal{N}}_r(E),$ i.e. a maximal (with respect to inclusion) subset ${\cal{I}}$ of pairwise non-overlapping disks from ${\cal{N}}_r(E)$\footnote{In contrast to the known NP-hard Maximum Independent Set problem for disks, the problem of finding a maximal (by inclusion) subset of non-intersecting disks among the disks from ${\cal{N}}_r(E)$ is polynomially solvable. The corresponding algorithm incrementally adds disks into a growing independent set, starting from the empty one.}.
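The incremental construction mentioned in the footnote can be sketched as follows (a minimal illustration for the zero-length case, with disks given by their centers and the common radius $r$; two radius-$r$ disks intersect iff their centers are within distance $2r$):

```python
def greedy_maximal_independent_disks(centers, r):
    """Build a maximal (by inclusion) set of pairwise non-intersecting
    radius-r disks by a single incremental scan: keep a disk iff it
    intersects none of the disks kept so far."""
    kept = []
    for cx, cy in centers:
        # squared-distance test against every disk already kept
        if all((cx - kx)**2 + (cy - ky)**2 > (2*r)**2 for kx, ky in kept):
            kept.append((cx, cy))
    return kept
```

Maximality is immediate: any disk not kept intersects some kept disk, which is exactly the property the heuristic needs when grouping the remaining objects around the independent ones.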
It should be noted that ${\cal{N}}_r(E)=\bigcup\limits_{I\in{\cal{I}}}{\cal{N}}_I,$ where ${\cal{N}}_I=\{N\in {\cal{N}}_r(E):N\cap I\neq\varnothing\}$ for $I\in{\cal{I}}.$ As $7$ disks of radius $r$ are sufficient to cover any disk of radius $2r,$ for each $I\in{\cal{I}}$ a $7$-point set $S_I$ having nonempty intersection with each disk from ${\cal{N}}_I$ can easily be constructed in $O(1)$ time. Therefore the set $\bigcup\limits_{I\in{\cal{I}}}S_I$ gives a $7$-approximate solution to the {\sc PEH} problem, as $|{\cal{I}}|\leq{\rm{OPT}}.$ In the general setting of the {\sc PEH} problem this ``divide-and-conquer'' heuristic does not work, as one cannot guarantee that constant sized hitting sets for ${\cal{N}}_I$ exist uniformly for all $I\in{\cal{I}}.$ In fact, this is impossible because, first, the Euclidean lengths of segments from $E$ are not assumed to be uniformly bounded from above by any linear function of $r.$ Second, one cannot cluster the segments into a constant number of groups of similar segment orientations and apply this heuristic (with an additional modification) in each group separately, as done e.g. in \cite{nandy} for the case of sets of axis-parallel straight line segments. Instead, in this paper a similar algorithmic idea is adopted to design an epsilon net finder. This idea relies conceptually on the approach of work \cite{pyrga}, being its slightly improved version. Given ${\cal{N}}\subseteq{\cal{N}}_r(E),$ it exercises a similar ``divide-and-conquer'' heuristic, which suggests finding a maximal subset ${\cal{I}}$ of {\it almost} non-overlapping objects within the set ${\cal{N}}_{\varepsilon}.$ This means that pairs of objects from ${\cal{I}}$ are allowed to have nonempty intersection, but the ``amount'' of this intersection does not exceed some fraction of $w(Y_0).$ More precisely, the following definition describes the properties of ${\cal{I}}$ in detail (see also the general definition in \cite{pyrga}).
\begin{definition} Given a subset ${\cal{N}}\subseteq{\cal{N}}_r(E),$ a parameter $0\leq\delta<1$ and a weight map $w:Y_0\rightarrow\mathbb{Q}_+,$ a subset ${\cal{I}}={\cal{I}}(\delta)\subseteq{\cal{N}}$ is called a {\it maximal (with respect to inclusion) $\delta$-independent set} for a range space $(Y_0,{\cal{N}},w)$ if $$w(I\cap I')\leq\delta w(Y_0)$$ for any distinct $I,I'\in{\cal{I}}$ and for any $N\in{\cal{N}}$ there is some $I=I(N)\in{\cal{I}}$ such that $w(N\cap I)>\delta w(Y_0).$ \end{definition} As $Y_0$ is the vertex set of the arrangement of curves from ${\cal{B}}(E)$ and $w(y)>0$ for every $y\in Y_0,$ any maximal $0$-independent set for $(Y_0,{\cal{N}},w)$ is a maximal independent set within ${\cal{N}}.$ Let $0\leq\theta_0<1$ be some parameter to be defined later. Pseudo-code of our epsilon net finder procedure is given below: \hrule \noindent {\sc Weak Epsilon Net Finder.} \hrule \noindent {\bf Input:} a range space $(Y_0,{\cal{N}},w)$ for some ${\cal{N}}\subseteq{\cal{N}}_r(E),$ $w:Y_0\rightarrow\mathbb{Q}_+$ and a parameter $0<\varepsilon<1;$ \noindent {\bf Output:} a weak $\varepsilon$-net for $(Y_0,{\cal{N}},w)$ of size at most $\frac{M}{\varepsilon}$ for some absolute constant $M\geq 1.$ \hrule \begin{enumerate} \item set $\delta:=\theta_0\varepsilon,$ find a maximal $\delta$-independent set ${\cal{I}}={\cal{I}}(\delta)\subseteq {\cal{N}}_{\varepsilon}$ for $(Y_0,{\cal{N}}_{\varepsilon},w)$ and form disjoint sets ${\cal{N}}_{\delta,I},\,I\in {\cal{I}},$ such that $\bigcup\limits_{I\in{\cal{I}}}{\cal{N}}_{\delta,I}={\cal{N}}_{\varepsilon},$ where ${\cal{N}}_{\delta,I}\subseteq{\cal{N}}^0_{\delta,I}=\{N\in{\cal{N}}_{\varepsilon}:w(N\cap I)>\delta w(Y_0)\};$ \item for each $I\in{\cal{I}}$ compute a hitting set $C_I$ for ${\cal{N}}_{\delta,I}$ of size at most $\frac{c_1w(I)}{\delta w(Y_0)}+c_2$ for some positive constants $c_1$ and $c_2;$ \item return the set $C_{\theta_0}=\bigcup\limits_{I\in{\cal{I}}}C_I.$ \end{enumerate} \hrule At its step 1 the {\sc
Weak Epsilon Net Finder} procedure forms a partition of ${\cal{N}}_{\varepsilon}$ into disjoint subsets ${\cal{N}}_{\delta,I};$ here each subset ${\cal{N}}_{\delta,I},$ being a result of the partitioning process for ${\cal{N}}_{\varepsilon},$ does not have to coincide with ${\cal{N}}^0_{\delta,I}.$ The way in which the partitioning is done within the {\sc Weak Epsilon Net Finder} procedure is slightly different from the way in which it is done in an epsilon net finder resulting from applying the approach of work \cite{pyrga} directly to subspaces of $(Y_0,{\cal{N}}_r(E),w).$ It is demonstrated in the subsubsection \ref{skskfkgeoqoapdlgls} that our modification gives a smaller value of the performance parameter $M.$ As $|Y_0|=O(n^2),$ step 1 of the procedure can be implemented in a straightforward way without relying on the {\sc PEH} problem specifics (see the proof of the lemma \ref{dskskskkroroeowoqp} from the subsection \ref{gkfkkskslf_glro}). There are two basic parts of the {\sc Weak Epsilon Net Finder} procedure implementation which rely on the geometry of $r$-hippodromes and the {\sc PEH} problem specifics. They are the main algorithmic result of this paper. The first part consists in a special subprocedure aimed at computing a hitting set of size at most $\frac{c_1w(I)}{\delta w(Y_0)}+c_2$ with small nonnegative constants $c_1$ and $c_2$ for a family ${\cal{N}}_{\delta,I},$ corresponding to an object $I\in{\cal{I}}$ of any maximal $\delta$-independent set ${\cal{I}}$ for $(Y_0,{\cal{N}}_{\varepsilon},w).$ In the subsection \ref{qeiaididis} several fast variants of such subprocedures are given, guaranteeing small constants $c_1$ and $c_2$ (also called their performance parameters) under different assumptions on $E,$ including the general case in which $E$ belongs to a class ${\cal{E}}_0$ of edge sets of arbitrary plane graphs.
Those subprocedures in fact construct weak $\Delta_I$-nets of size at most $\frac{c_1}{\Delta_I}+c_2$ for the subspaces $(Y_0\cap I,{\cal{N}},w_I),\,I\in{\cal{I}},$ where $\Delta_I=\frac{\delta w(Y_0)}{w(I)}$ and $w_I=w\mid_{Y_0\cap I}.$ They are called {\it subspace epsilon net finders} in the sequel. The second part consists in choosing a suitable value of the parameter $\theta_0,$ which scales the upper bound $\delta w(Y_0)$ on the intersection weight of pairs of objects from ${\cal{I}}.$ Indeed, it is not an easy task to choose $\theta_0$ so as to get a modest value of the performance parameter $M.$ Namely, there is a tradeoff between the constants in the two bounds $|{\cal{I}}|=O\left(\frac{1}{\varepsilon}\right)$ and $\frac{\sum\limits_{I\in{\cal{I}}}w(I)}{\delta w(Y_0)}= O\left(\frac{1}{\varepsilon}\right),$ which imply the bound $|C_{\theta_0}|\leq\frac{M}{\varepsilon}.$ Setting $\theta_0$ either large or close to zero may result in a large constant either in the bound $|{\cal{I}}|=O\left(\frac{1}{\varepsilon}\right)$ or in the bound $\frac{\sum\limits_{I\in{\cal{I}}}w(I)}{\delta w(Y_0)}= O\left(\frac{1}{\varepsilon}\right)$ respectively. In the subsection \ref{gkfkkskslf_glro} a geometric approach originating from work \cite{pyrga} is used to adjust the parameter $\theta_0.$
Given a subclass ${\cal{E}}\subseteq{\cal{E}}_0,$ the approach allows one to get a value $\theta^{\ast}_0=\theta^{\ast}_0({\cal{E}})$ of this parameter, finally leading to a small value $M^{\ast}=M^{\ast}({\cal{E}})$ of $M$ in the upper bound on $|C_{\theta^{\ast}_0}|,$ which holds true uniformly within the class ${\cal{E}}.$ As a result, it guarantees a small constant upper bound on the approximation factor of the {\sc Piercing Hippodromes} algorithm to hold within ${\cal{E}}.$ For the case $c_1=0$ minimizing the performance parameter $M$ with respect to $\theta_0$ implies that $\theta_0\rightarrow 0.$ Thus, in this case the {\sc Weak Epsilon Net Finder} procedure is similar in its layout to the 7-approximation algorithm described above for the case where $E$ consists of points. \section{Weak Epsilon Net Finder: implementation and performance analysis}\label{fhskakjwwoqo} \subsection{Subspace epsilon net finders}\label{qeiaididis} Three subquadratic subspace epsilon net finders are constructed below to be applied at step 2 of the {\sc Weak Epsilon Net Finder} procedure. These procedures have small performance parameters $c_1$ and $c_2,$ being designed for different classes of sets of non-zero length straight line segments. They largely amount to fast construction of least cardinality hitting sets for sets of 1-dimensional intervals on the real line. More specifically, the first subspace epsilon net finder treats the general case where the segments from $E$ form an edge set of some plane graph $G,$ giving $c_1=8$ and $c_2=2.$ The second subspace epsilon net finder works for the special case where $E$ is such that either $d(e,e')>r$ or $d(e,e')=0$ for any distinct $e,e'\in E.$ It gives $c_1=1$ and $c_2=6.$ The last subspace epsilon net finder tackles the case in which $E$ is an edge set of any subgraph of a Delaunay triangulation $G.$ Graphs of this type arise in network routing and modelling applications.
This procedure gives $c_1=4$ and $c_2=8.$ Let a subset ${\cal{N}}\subseteq{\cal{N}}_r(E),$ an object $I\in{\cal{N}},$ an arbitrary $0<\Delta<1,$ a nonempty finite point set $F\subset\mathbb{R}^2$ and a weight map $w_I:F\cap I\rightarrow\mathbb{Q}_+$ be given. In general, the proposed subspace epsilon net finders are assumed to accept an object $I$ and a subset ${\cal{N}}_I(\Delta)\subseteq\{N\in{\cal{N}}:w_I(N)>\Delta w_I(I)\}$ as their input. Their output is a hitting set for ${\cal{N}}_I(\Delta)$ of size at most $\frac{c_1}{\Delta}+c_2$ for some nonnegative constants $c_1$ and $c_2.$ It is shown below that they work in $O(m\log m)$ time and $O(m)$ space, where $m=|{\cal{N}}_I(\Delta)|.$ When they are applied within the {\sc Weak Epsilon Net Finder} procedure, their input is an object $I$ of a maximal $\delta$-independent set for $(Y_0,{\cal{N}}_{\varepsilon},w)$ and a subset ${\cal{N}}_{\delta,I}$ from the corresponding partition of ${\cal{N}}_{\varepsilon}$ built at its step 1. This corresponds to setting $F=Y_0,$ $\Delta=\frac{\delta w(Y_0)}{w(I)}$ and $w_I=w\mid_{Y_0\cap I}.$ \subsubsection{General case of plane $G$}\label{qyuuififis} Let $E$ be an edge set of a plane graph $G.$ The following observations can be made about the shape of $r$-hippodromes.
\noindent {\bf Observation 1.} Let $e,e'\in E$ be such that $M=N_r(e)\cap N_r(e')\neq\varnothing.$ Then $M=N_r(z_{e'}(e))\cap N_r(e')=N_r(e)\cap N_r(z_e(e')),$ where $z_e(e')=\{x\in e':d(x,e)\leq 2r\}.$ Let $l(e)$ be the straight line through a non-zero length segment $e\in E,$ and let $h_1(e)$ and $h_2(e)$ be the positive and negative halfplanes, respectively, whose boundary coincides with $l(e);$ here the orientation of $l(e)$ is chosen arbitrarily. The set ${\rm{bd}}\,N_r(e)$ can be represented as a union of two halfcircles and two segments $f_1(e)$ and $f_2(e),$ where $f_i(e)\subset {\rm{int}}\,h_i(e),\,i=1,2.$ Let $l_i(e)$ be the straight line through $f_i(e).$ \noindent {\bf Observation 2.} Let $\{v_1,v_2\}=l(e)\cap{\rm{bd}}\,N_r(e),\,e\in E.$ For every $e,e'\in E$ for which $N_r(e)\cap N_r(e')\neq\varnothing$ either $N_r(e')\cap\{v_1,v_2\}\neq\varnothing$ or there exists $i_0\in\{1,2\}$ such that $d(x,l_{i_0}(e))\leq r$ for all $x\in z_{e}(e').$ Let us introduce some notation. Given a subset ${\cal{M}}\subseteq{\cal{N}}_r(E),$ let $E({\cal{M}})$ be a subset such that ${\cal{M}}={\cal{N}}_r(E({\cal{M}}));$ in particular, for $N\in{\cal{N}}_r(E)$ let $e(N)\in E$ be a segment such that $N=N_r(e(N)).$ Our subspace epsilon net finder is based on finding hitting sets for sets of 1-dimensional $r$-neighbourhoods of (interval) projections of segments from $\{z_e(e')\}_{e'\in E'},\,E'\subseteq E,$ onto the straight lines $l_i(e).$ Let $N_{ir}(f)=\{x\in l_i(e):d(x,f)\leq r\}$ for an arbitrary interval $f\subset l_i(e),\,i=1,2.$ The following folklore lemma reports on the complexity of computing a minimum cardinality hitting set for a set of 1-dimensional intervals. Its proof is left for the appendix. \begin{lemma}\label{gjfkdkksaadqqrsra} A minimum cardinality hitting set for a set of $n$ 1-dimensional intervals on the real line can be found in $O(n\log n)$ time and $O(n)$ space.
\end{lemma} \hrule \noindent {\sc Subspace Weak Epsilon Net Finder} \hrule \noindent {\bf Input:} an object $I\in {\cal{N}}$ and a set ${\cal{N}}_I(\Delta);$ \noindent {\bf Output:} a hitting set $C_I\subset\mathbb{R}^2$ for ${\cal{N}}_I(\Delta).$ \hrule \begin{enumerate} \item set $\{v_1,v_2\}=l(e(I))\cap{\rm{bd}}\,I$ and $${\cal{P}}:={\cal{N}}_I(\Delta)\backslash\{N\in{\cal{N}}:N\cap\{v_1,v_2\}\neq\varnothing\};$$ \item form sets $Z_i=\{z_{e(I)}(e):e\in E({\cal{P}}),\,z_{e(I)}(e)\subset h_i(e(I))\},\,i=1,2;$ \item form a set $P_i$ of orthogonal projections of segments from $Z_i$ onto the straight line $l_i(e(I))$ and construct sets $P_i(r)=\{N_{ir}(p):p\in P_i\},\,i=1,2;$ \item find the minimum cardinality hitting set $H_i\subset l_i(e(I))$ for $P_i(r),\,i=1,2,$ as in the proof of the lemma \ref{gjfkdkksaadqqrsra}; \item for each $x_0\in H_i$ and $i=1,2$ construct a set $S(x_0)$ of 4 points such that $N_{\sqrt{2}r}(x_0)\subset\bigcup\limits_{x\in S(x_0)}N_r(x)$ and return a set $C_I=\{v_1,v_2\}\cup\bigcup\limits_{x_0\in H_i,i=1,2}S(x_0).$ \end{enumerate} \hrule The following lemma summarizes the performance of the procedure. \begin{lemma}\label{kob11111111} Let $m=|{\cal{N}}_I(\Delta)|.$ The {\sc Subspace Weak Epsilon Net Finder} procedure returns a hitting set $C_I$ for ${\cal{N}}_I(\Delta)$ of size at most $\frac{8}{\Delta}+2$ in $O(m\log m)$ time and $O(m)$ space. \end{lemma} \begin{proof} All steps of the procedure except for step 4 require $O(m)$ time, whereas step 4 takes $O(m\log m)$ time according to the lemma \ref{gjfkdkksaadqqrsra}.
It remains to obtain the bound $|C_I|\leq \frac{8}{\Delta}+2$ and prove that $C_I$ is a hitting set for ${\cal{N}}_I(\Delta).$ Indeed, due to step 1 and the observation 2 one has either $z_{e(I)}(e)\subset h_1(e(I))$ or $z_{e(I)}(e)\subset h_2(e(I))$ for every $e\in E({\cal{P}}).$ Moreover, each interval $J\in P_i(r)$ is an orthogonal projection of some object $P^{-1}_i(J)\in {\cal{N}}_r(Z_i).$ According to the proof of the lemma \ref{gjfkdkksaadqqrsra}, for each $i=1,2$ at step 4 a maximal subset $Q_i\subseteq P_i(r)$ of pairwise non-overlapping intervals is built with $|Q_i|=|H_i|.$ Thus, the respective set $\{P^{-1}_i(J):J\in Q_i\}$ consists of non-intersecting objects. By the observation 1 one gets that $w_I(P^{-1}_i(J)\cap I)>\Delta w_I(I)$ for all $J\in Q_i.$ Therefore $|Q_i|\leq \frac{1}{\Delta}$ and $|C_I|=4|Q_1|+4|Q_2|+2\leq\frac{8}{\Delta}+2.$ By the observation 2 each point of a segment from $Z_i$ is within the distance $r$ from $l_i(e(I)).$ Therefore each segment of $Z_i$ is within distance $\sqrt{2}r$ from some point of $H_i.$ By the construction at step 5 one gets that $C_I$ is a hitting set for ${\cal{N}}_I(\Delta).$ \end{proof} \subsubsection{Case where edges of $G$ are far apart}\label{hkfksiwiaoaofohdov} Below a special case is considered in which either $d(e,e')>r$ or $d(e,e')=0$ for any distinct $e,e'\in E.$ For this case the following idea can be used to implement a subspace epsilon net finder with smaller $c_1$ and $c_2.$ The idea consists in exploiting the fact that a small constant sized point set $U(I)\subset\mathbb{R}^2$ can be computed in $O(1)$ time for which the set ${\cal{P}}=\{N\in {\cal{N}}_I(\Delta):N\cap U(I)=\varnothing\}$ can be transformed into the set ${\cal{J}}_I(\Delta)=\{{\rm{bd}}\,I\cap N: N\in {\cal{P}}\}$ of 1-dimensional arcs, where the following property $(\ast)$ holds for ${\cal{J}}_I(\Delta):$ if ${\cal{M}}\subseteq{\cal{P}}$ is such that $I\cap\bigcap\limits_{N\in {\cal{M}}}N\neq\varnothing,$ then
${\rm{bd}}\,I\cap \bigcap\limits_{N\in {\cal{M}}}N\neq\varnothing.$ More precisely, the idea suggests excluding from ${\cal{N}}_I(\Delta)$ those objects which are hit by $U(I)$ and reducing the problem of computing a small hitting set for the remaining objects to the equivalent, much simpler problem of finding a hitting set for the corresponding set of one-dimensional arcs. For an object $I\in{\cal{N}}$ let $C(I)$ be the set of 4 endpoints of the segments $f_i(e(I))$ and $U(I)=C(I)\cup (l(e(I))\cap {\rm{bd}}\,I),$ where $i=1,2$ and $|U(I)|=6.$ As a start, a simple observation can be made about the shape of $r$-hippodromes of non-zero length segments. \begin{lemma}\label{ahhhahshsfafadsdwrwr} Let $I,N_1,N_2\in{\cal{N}}$ be distinct and $d(e(I),e(N_i))\in (r,2r],\,i=1,2.$ If $I\cap N_1\cap N_2\neq\varnothing,$ then either $N_1\cap N_2\cap {\rm{bd}}\,I\neq\varnothing$ or $N_{i_0}\cap U(I)\neq\varnothing$ for some $i_0\in\{1,2\}.$ \end{lemma} \begin{proof} Let $\chi_i={\rm{bd}}\,I\cap N_i$ and $\pi_i={\rm{bd}}\,N_i\cap I$ for $i=1,2.$ Assume that $N_i\cap U(I)=\varnothing$ for all $i\in\{1,2\}.$ It should be proved that $\chi_1\cap \chi_2\neq\varnothing$ if $I\cap N_1\cap N_2\neq\varnothing.$ Let $p(x)\in e(I)$ be the Euclidean projection of $x$ onto $e(I)$ for $x\in\mathbb{R}^2.$ It is sufficient to establish the following monotonicity property: for any $x\in {\rm{bd}}\,I$ and $i=1,2$ the nonempty intersection $[p(x),x]\cap N_i$ is a (possibly zero-length) segment with an endpoint at $x.$ Indeed, for $x\in I\cap N_1\cap N_2$ this implies that the ray with origin $p(x)$ and direction $x-p(x)$ intersects ${\rm{bd}}\,I$ at some point of $\chi_1\cap \chi_2.$ Suppose, on the contrary, that there is a point $x_0\in {\rm{bd}}\,I$ and $i_0\in\{1,2\}$ such that the interval $(p(x_0),x_0)$ has two (possibly identical) points $x'_1$ and $x'_2$ of intersection with $\pi_{i_0}.$ Then there is a point $x'\in [p(x_0),x_0]$ and an endpoint $x''\in e(N_{i_0})$ with $(x'-x'',x_0-p(x_0))=0,$
\footnote{$(\cdot,\cdot)$ denotes the Euclidean scalar product in $\mathbb{R}^2.$} such that $d(x',x'')\leq r.$ This implies the inclusion $x''\in \bigcup\limits_{x\in U(I)}N_r(x),$ taking into account that $r<d(e(I),e(N_{i_0}))\leq 2r.$ But this inclusion is impossible by our assumption that $N_{i_0}\cap U(I)=\varnothing.$ \end{proof} By construction of the set ${\cal{P}}$ there is a point in ${\rm{bd}}\,I$ which does not belong to $\bigcup\limits_{J\in{\cal{J}}_I(\Delta)}J.$ Therefore the property $(\ast)$ holds true for ${\cal{P}}$ and ${\cal{J}}_I(\Delta)$ by Helly's theorem and the lemma \ref{ahhhahshsfafadsdwrwr}. Thus, the problem of finding a smallest cardinality hitting set for ${\cal{P}}$ is equivalent to the problem of finding a least cardinality hitting set for ${\cal{J}}_I(\Delta).$ The latter problem can be equivalently reduced, using polar coordinates, to the problem of computing a smallest size hitting set for a set of one-dimensional intervals on the real line. Our subspace epsilon net finder is given below.
It amounts to constructing a minimum cardinality hitting set for the set of ``1-dimensional'' arcs ${\cal{J}}_I(\Delta).$ \hrule \noindent {\sc Subspace Weak Epsilon Net Finder$^{\ast}$} \hrule \noindent {\bf Input:} an object $I\in {\cal{N}}$ and a set ${\cal{N}}_I(\Delta);$ \noindent {\bf Output:} a hitting set $C_I\subset\mathbb{R}^2$ for ${\cal{N}}_I(\Delta).$ \hrule \begin{enumerate} \item compute $U(I)$ as described before the lemma \ref{ahhhahshsfafadsdwrwr}; \item set ${\cal{P}}:=\{N\in{\cal{N}}_I(\Delta):N\cap U(I)=\varnothing\}$ and ${\cal{J}}_I(\Delta):=\{{\rm{bd}}\,I\cap N:N\in {\cal{P}}\};$ \item applying polar coordinates, find the minimum cardinality hitting set $C'_I$ for ${\cal{J}}_I(\Delta)$ as in the proof of the lemma \ref{gjfkdkksaadqqrsra}; \item return $C_I=C'_I\cup U(I).$ \end{enumerate} \hrule Based on the lemma \ref{ahhhahshsfafadsdwrwr}, a performance analysis of the {\sc Subspace Weak Epsilon Net Finder$^{\ast}$} procedure is given below under a weaker assumption on the set $E.$ \begin{lemma}\label{kkfkfkdkdksks} Let $m=|{\cal{N}}_I(\Delta)|$ and ${\cal{P}}={\cal{N}}_I(\Delta)\backslash\{N\in{\cal{N}}:N\cap U(I)\neq\varnothing\}.$ If $E({\cal{P}})$ consists of segments at distance more than $r$ from $e(I),$ then the {\sc Subspace Weak Epsilon Net Finder$^{\ast}$} procedure returns a hitting set $C_I$ for ${\cal{N}}_I(\Delta)$ of size at most $\frac{1}{\Delta}+6$ in $O(m\log m)$ time and $O(m)$ space. \end{lemma} \begin{proof} The set $C_I$ is a hitting set for ${\cal{N}}_I(\Delta),$ as $C'_I$ is a hitting set for ${\cal{J}}_I(\Delta)$ by the construction reported in the proof of the lemma \ref{gjfkdkksaadqqrsra}.
Thus, it remains to estimate $|C'_I|.$ As a byproduct of this construction one gets a maximal subset ${\cal{J}}'$ of non-overlapping arcs from ${\cal{J}}_I(\Delta)$ with $|C'_I|=|{\cal{J}}'|.$ Let ${\cal{P}}'\subseteq{\cal{P}}$ be the subset such that ${\cal{J}}'=\{{\rm{bd}}\,I\cap N:N\in{\cal{P}}'\}.$ By Lemma \ref{ahhhahshsfafadsdwrwr}, ${\cal{P}}'$ consists of non-overlapping objects within $I.$ Therefore $|C'_I|\leq\frac{1}{\Delta}.$ \end{proof} \subsubsection{Case of Delaunay triangulation $G$} Subspace epsilon net finders are constructed below under the assumption that $E$ is a special configuration of straight line segments. It is produced from a planar finite point set $V$ using the so-called empty disk property: two points $u,v\in V$ are joined by a straight line segment when there exists a disk which contains $u$ and $v$ on its boundary and does not contain any other points from $V.$ This property forces the other segments of $E$ to avoid having their endpoints in that disk. It defines a special class of plane graphs called Delaunay triangulations. \begin{definition} Assuming that no $4$ points of $V$ are cocircular, a plane graph $G=(V,E)$ is called a {\it Delaunay triangulation} when $[u,v]\in E$ iff there is a disk $D$ such that $u,v\in {\rm{bd}}\,D$ and $V\cap {\rm{int}}\,D=\varnothing.$ Such a disk $D$ is called an {\it empty disk} for $[u,v].$ \end{definition} Let $E$ be an edge set of an arbitrary subgraph of a Delaunay triangulation $G.$ A subspace epsilon net finder is constructed below with small $c_1$ and $c_2.$ Let ${\cal{N}}\subseteq{\cal{N}}_r(E),$ $I\in{\cal{N}}$ and ${\cal{N}}_I(\Delta)$ be given as its input. By the definition of a Delaunay triangulation, there is a disk $D(e(I))$ such that both endpoints of $e(I)$ lie on its boundary and none of the segments from $E$ has its endpoints in the interior of $D(e(I)).$ Let $c(e(I))$ be its center.
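The empty disk condition above is the standard Delaunay predicate; it can be certified numerically with the lifted in-circle determinant. The sketch below (Python; the function name is ours, not taken from the text) reports whether a query point lies strictly inside the circle through three points given in counterclockwise order.

```python
def in_circle(a, b, c, d):
    """True iff d lies strictly inside the circle through a, b, c
    (a, b, c listed counterclockwise).

    Uses the classical 3x3 determinant of the points lifted onto
    the paraboloid, after translating d to the origin.
    """
    rows = [(px - d[0], py - d[1],
             (px - d[0]) ** 2 + (py - d[1]) ** 2)
            for (px, py) in (a, b, c)]
    (a0, a1, a2), (b0, b1, b2), (c0, c1, c2) = rows
    det = (a0 * (b1 * c2 - b2 * c1)
           - a1 * (b0 * c2 - b2 * c0)
           + a2 * (b0 * c1 - b1 * c0))
    return det > 0
```

For the circle through $(0,0),$ $(1,0),$ $(0,1),$ the center $(0.5,0.5)$ is reported inside and $(2,2)$ outside.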
Each segment $e\in E({\cal{N}}_I(\Delta))$ must intersect $N_{2r}(e(I)).$ In Lemma 6 of \cite{kobylkin_dryakhlova} it is shown that for $c(e(I))\in l(e(I))$ an at most $14$-point set $U_0(I)$ can be computed in $O(1)$ time such that $N\cap U_0(I)\neq\varnothing$ for every $N\in{\cal{N}}_I(\Delta).$ Therefore if $c(e(I))$ is, say, in a halfplane $h_{i_0}(e(I))$ for $i_0\in\{1,2\},$ there must be a constant-size point set which hits all objects from ${\cal{N}}_{i_0}(e(I))=\{N\in{\cal{N}}_I(\Delta):e(N)\cap {\rm{cl}}\,h_{i_0}(e(I))\cap N_{2r}(e(I))\neq\varnothing\}.$ More precisely, it can be proved that \begin{lemma}\label{jh_} An at most $8$-point set $U(I)$ can be found in $O(1)$ time such that $N\cap U(I)\neq\varnothing$ for every $N\in{\cal{N}}_{i_0}(e(I)).$ \end{lemma} Before proving the lemma, we first describe a procedure which computes the set $U(I).$ Its steps are as follows: \hrule \noindent {\sc Constant Hitting Set Finder.} \hrule \noindent {\bf Input:} a constant $r>0$ and an edge $e=[u_1,u_2]$ of a Delaunay triangulation $G=(V,E);$ \noindent {\bf Output:} an at most $8$-point hitting set $U(e)\subset\mathbb{R}^2$ for ${\cal{N}}_{i_0}(e)=\{N\in{\cal{N}}_r(E):e(N)\cap {\rm{cl}}\,h_{i_0}(e)\cap N_{2r}(e)\neq\varnothing\},$ where $c(e)$ is a center of an empty disk $D(e)$ for $e$ and $c(e)\in {\rm{cl}}\,h_{i_0}(e)$ for some $i_0\in\{1,2\}.$ \hrule \begin{enumerate} \item for each $s\in\{1,2\}$ construct a regular hexagon inscribed in $N_{2r}(u_s),$ whose orientation is such that the straight line through $e$ contains a pair of vertices of that hexagon; form a 7-point set $V_s,$ which contains $u_s$ and the midpoints of the hexagon sides (of length $2r)$; \item let $v_{s1}$ and $v_{s2}$ be points from $V_s,$ which are symmetric with respect to $u_s$ and $[v_{s1},v_{s2}]\perp e,\,s=1,2;$ \item for each $s\in\{1,2\}$ choose a subset $U_s\subset V_s$ such that $|U_s|=3$ and $T_s\cap h_{i_0}(e)\subset\bigcup\limits_{u\in U_s}N_r(u),$ where
$N_{2r}(e)=T_1\cup T_2\cup R$ for some rectangle $R$ and two closed halfdisks $T_1$ and $T_2$ of radii $2r$ centered at $u_1$ and $u_2$ respectively; set $u_{si_0}:=g_{i_0}(e)\cap {\rm{bd}}\,N_r(v_{si_0}),$ where $g_{i_0}(e)\subset h_{i_0}(e)\cap{\rm{bd}}\,N_{2r}(e)$ is a straight line segment, touching both $T_1$ and $T_2;$ \item let $\Delta(e)=\sqrt{d(u_1,u_2)^2/4+d(c(e),e)^2};$ \item if either $\Delta(e)\geq\frac{(2\sqrt{3}-1)r}{\sqrt{4\sqrt{3}-6}}$ or $\Delta(e)\in(0,r/2],$ return $U(e):=U_1\cup U_2;$ \item for $\Delta(e)\in \left[2r,\frac{(2\sqrt{3}-1)r}{\sqrt{4\sqrt{3}-6}}\right)$ set $u_0:=\frac{u_1+u_2}{2}$ and construct two points $z_{1i_0}$ and $z_{2i_0}$ at the intersection $g_{i_0}(e)\cap {\rm{bd}}\,N_{\Delta(e)}(u_0),$ where $z_{si_0}$ is closer to $u_{si_0}$ than the point $z_{(3-s)i_0}$ is; set $a_{si_0}=\frac{u_{si_0}+z_{si_0}}{2},\,s=1,2;$ \item if $\Delta(e)\in \left(r,2r\right),$ consider a (rectangular) coordinate system with the origin at $u_s$ whose $x$-axis is along $e$ and $y$-axis is perpendicular to the $x$-axis, being directed towards $g_{i_0}(e);$ set $b_{i_0}=(d(u_1,u_2)/2,\Delta(e))$ and $a_{si_0}=\frac{u_{si_0}+b_{i_0}}{2},\,s=1,2;$ \item for $\Delta(e)\in (r/2,r]$ set $a_{1i_0}:=(d(u_1,u_2)/2,\sqrt{3}r),$ $a_{2i_0}:=\left(d(u_1,u_2)/2,\frac{\sqrt{3}r}{2}\right);$ \item return $U(I):=U_1\cup U_2\cup\{a_{1i_0},a_{2i_0}\}.$ \end{enumerate} \hrule \begin{proof} Let $U(I)$ be the set produced by the {\sc Constant Hitting Set Finder} procedure for $e=e(I).$ Let $O_I=N_{\Delta(e(I))}(u_0)\cap h_{i_0}(e(I))\cap R.$ Obviously, $O_I\subset D(e(I)).$ Using planarity of Delaunay triangulations and applying the same argument as in the proof of Lemma 6 from \cite{kobylkin_dryakhlova}, it can be proved that $U(I)$ hits all objects from ${\cal{N}}_{i_0}(e(I)).$ \end{proof} \begin{lemma}\label{jh} A hitting set $C'_I$ of size at most $\frac{4}{\Delta}$ can be constructed for ${\cal{M}}_I(\Delta)=\{N\in{\cal{N}}_I(\Delta):N\cap U(I)=\varnothing\},$
using the {\sc Subspace Weak Epsilon Net Finder} procedure, applied for $I$ and ${\cal{M}}_I(\Delta).$ \end{lemma} \begin{proof} As $z_{e(I)}(e(N))\subset h_{3-i_0}(e(I))\cap N_{2r}(e(I))$ for every $N\in{\cal{M}}_I(\Delta),$ the set $C'_I$ is a hitting set for ${\cal{M}}_I(\Delta)$ of size at most $\frac{4}{\Delta}$ by the proof of Lemma \ref{kob11111111}. \end{proof} Finally, $C_I=U(I)\cup C'_I$ gives a hitting set for ${\cal{N}}_I(\Delta)$ of size at most $\frac{4}{\Delta}+8.$ The lemma below summarizes the performance of the described subspace epsilon net finder. It follows from Lemmas \ref{jh_} and \ref{jh}. \begin{lemma}\label{fsjdkgowoqpspa} Let $m=|{\cal{N}}_I(\Delta)|.$ A hitting set $C_I$ for ${\cal{N}}_I(\Delta)$ of size at most $\frac{4}{\Delta}+8$ can be built within $O(m\log m)$ time and $O(m)$ space. \end{lemma} \subsection{Estimating the parameter $\theta_0$ of the Weak Epsilon Net Finder procedure}\label{gkfkkskslf_glro} \subsubsection{General approach to estimate $\theta_0$} Below, given a class ${\cal{E}}\subseteq{\cal{E}}_0$ of sets of straight line segments on the plane, a general approach is described to adjust the parameter $\theta_0$ of the {\sc Weak Epsilon Net Finder} procedure.
Adjusting $\theta_0$ allows one to guarantee an upper bound $|C_{\theta_0}|\leq\frac{M}{\varepsilon}$ to hold uniformly for all $E\in{\cal{E}},$ where $C_{\theta_0}$ is a weak $\varepsilon$-net, output by the procedure, and $M$ is some absolute constant, defining the procedure performance parameter for the class ${\cal{E}}.$ Moreover, tuning $\theta_0$ is aimed at minimizing $M.$ More precisely, the approach suggests identifying a special mapping, defined for any $E\in {\cal{E}}$ and any ${\cal{I}}\subseteq{\cal{N}}_r(E).$ Existence of this special mapping guarantees the bound $|C_{\theta^{\ast}_0}|\leq\frac{M^{\ast}}{\varepsilon}$ to hold uniformly within ${\cal{E}}$ for some $\theta^{\ast}_0=\theta^{\ast}_0({\cal{E}})$ and $M^{\ast}=M^{\ast}({\cal{E}}),$ where both $M^{\ast}$ and $\theta^{\ast}_0$ depend on constant parameters of the identified mapping and on the performance parameters $c_1$ and $c_2$ of a subspace epsilon net finder, applied at step 2 of the {\sc Weak Epsilon Net Finder} procedure. The approach follows the paper \cite{pyrga}. The lemma below describes how it works. \begin{lemma}\label{dskskskkroroeowoqp} Let ${\cal{E}}\subseteq{\cal{E}}_0$ be a class of sets of straight line segments on the plane.
Assume that: \begin{enumerate} \item there are absolute constants $\alpha,\beta,\tau$ and a graph $G_{{\cal{I}}}=({\cal{I}},U)$ for every $E\in{\cal{E}}$ and ${\cal{I}}\subseteq {\cal{N}}_r(E)$ such that $|U|\leq\beta|{\cal{I}}|$ and $m_{{\cal{I}}}(y)\geq\alpha n_{{\cal{I}}}(y)-\tau$ for every $y\in Y_0,$ where $n_{{\cal{I}}}(y)=|\{I\in {\cal{I}}:y\in I\}|$ and $m_{{\cal{I}}}(y)=|\{\{I,I'\}\in U:y\in I\cap I'\}|;$ \item the {\sc Weak Epsilon Net Finder} procedure is applied with a subspace epsilon net finder whose time complexity is $O(Q(m)),$ where $m$ is the number of objects in a subset of ${\cal{N}}_r(E)$ defining an input of the subspace epsilon net finder, and there is a constant $L>0$ such that $Q(m_1)+\ldots+Q(m_t)\leq L Q\left(\sum\limits_{k=1}^t m_k\right)$ for any positive integers $t, m_1,\ldots, m_t.$ \end{enumerate} Then, given $E\in{\cal{E}}$ and ${\cal{N}}\subseteq{\cal{N}}_r(E),$ for any $0<\varepsilon<1$ the {\sc Weak Epsilon Net Finder} procedure constructs a weak $\varepsilon$-net for $(Y_0,{\cal{N}},w)$ of size at most $$\left[\left(1+\frac{1}{\sqrt{1+\frac{c_2\alpha}{c_1\beta}}}\right)\left(\frac{2c_1\tau\beta}{\alpha^2}+\frac{c_2\tau}{\alpha}\right)+\frac{c_2\tau}{\alpha \sqrt{1+\frac{c_2\alpha}{c_1\beta}}}\right]\frac{1}{\varepsilon}$$ in $O\left(\frac{\tau n^3}{\alpha\varepsilon}+Q(n)\right)$ time and linear space with respect to the space used to store $(Y_0,{\cal{N}},w),$ where \begin{equation}\label{hjhiislsldlsls} \theta_0=\theta^{\ast}_0({\cal{E}})=\frac{\frac{\alpha}{\beta}}{1+\sqrt{1+\frac{c_2\alpha}{c_1\beta}}} \end{equation} and $n=|E|.$ \end{lemma} \begin{proof} First, we estimate the time complexity of step 1 of the procedure. It can be implemented in a straightforward way as follows.
Initially, set ${\cal{P}}:={\cal{N}}_{\varepsilon}$ and ${\cal{I}}:=\varnothing.$ An arbitrary set $P\in {\cal{P}}$ is tried for addition to ${\cal{I}}$ by performing a sequence of checks to find out if there is an object $I\in {\cal{I}}$ with $w(P\cap I)>\delta w(Y_0).$ Each check is done by running through all points from $Y_0$ in $O(n^2)$ time. If such $I$ exists, then choose the first encountered one, add $P$ into a set ${\cal{N}}_{\delta,I},$ which is initially empty, and set ${\cal{P}}:={\cal{P}}\backslash\{P\}.$ Otherwise, add $P$ into ${\cal{I}}$ and set ${\cal{P}}:={\cal{P}}\backslash\{P\}.$ The process stops when ${\cal{P}}=\varnothing.$ Let $t_{\theta_0}=|{\cal{I}}|$ and $z_i$ be the number of sets from ${\cal{N}}_{\varepsilon}$ which are tried for inclusion into ${\cal{I}}$ when $|{\cal{I}}|=i.$ Then, the time complexity of the above straightforward implementation is of the order: $$O\left(n^2\sum\limits_{i=1}^{t_{\theta_0}}z_ii\right)=O\left(n^2 t_{\theta_0}\sum\limits_{i=1}^{t_{\theta_0}}z_i\right)=O(n^3 t_{\theta_0}).$$ Its space cost is obviously of the same order as the space required to store the range space $(Y_0,{\cal{N}},w).$ Second, we estimate the complexity of step 2 of the {\sc Weak Epsilon Net Finder} procedure. Here one has disjoint sets ${\cal{N}}_{{\delta},I}$ formed for each $I\in{\cal{I}}.$ It requires $O(Q(|{\cal{N}}_{\varepsilon}|))$ time by our assumption on the time complexity of the subspace epsilon net finder working at step 2 of the procedure. Finally, the claimed upper bound is established for the size of a weak $\varepsilon$-net produced by the {\sc Weak Epsilon Net Finder} procedure. Let ${\cal{I}}$ be a maximal $\delta$-independent set, where the parameter $\theta_0$ is to be chosen later.
Following the same argument as in the proof of Theorem 4 from \cite{pyrga}, one gets
\begin{align*}
t_{\theta_0}&\leq\frac{\sum\limits_{I\in {\cal{I}}}w(I)}{\varepsilon w(Y_0)}= \frac{\sum\limits_{y\in Y_0\cap\bigcup\limits_{I\in {\cal{I}}}I} w(y)n_{{\cal{I}}}(y)}{\varepsilon w(Y_0)}\leq \frac{\sum\limits_{y\in Y_0\cap\bigcup\limits_{I\in {\cal{I}}}I} w(y)(m_{{\cal{I}}}(y)+\tau)}{\varepsilon\alpha w(Y_0)}\\
&=\frac{\tau w\left(Y_0\cap\bigcup\limits_{I\in {\cal{I}}}I\right)+\sum\limits_{u\in U}w(Y_0(u))}{\varepsilon\alpha w(Y_0)}\leq\frac{\tau w(Y_0)+t_{\theta_0}\beta\delta w(Y_0)}{\varepsilon\alpha w(Y_0)}=\frac{t_{\theta_0}\beta\theta_0}{\alpha}+\frac{\tau}{\alpha\varepsilon},
\end{align*}
where $Y_0(u)\subset Y_0$ contains all points which lie in the intersection of a pair of those objects from ${\cal{I}}$ which form an edge $u\in U.$ This gives the upper bounds $t_{\theta_0}\leq\frac{\tau}{(\alpha-\theta_0\beta)\varepsilon}$ and $\frac{\sum\limits_{I\in {\cal{I}}}w(I)}{\varepsilon w(Y_0)}\leq\frac{\tau}{(\alpha-\theta_0\beta)\varepsilon}.$ According to our assumptions, the subspace epsilon net finder at step 2 of the procedure gives a hitting set of size at most $\frac{c_1w(I)}{\delta w(Y_0)}+c_2$ for ${\cal{N}}_{{\delta},I},\,I\in{\cal{I}}.$ Therefore, $C_{\theta_0}$ is a weak $\varepsilon$-net of size at most $\left(\frac{c_1}{\theta_0}+c_2\right)\frac{\tau}{(\alpha-\theta_0\beta)\varepsilon}.$ Optimizing with respect to $\theta_0<\frac{\alpha}{\beta},$ one obtains $\theta_0^{\ast}=\theta_0^{\ast}({\cal{E}})=\frac{\frac{\alpha}{\beta}}{1+\sqrt{1+\frac{c_2\alpha}{c_1\beta}}}$ and gets the claimed bound $$|C_{\theta_0^{\ast}}|\leq \left[\left(1+\frac{1}{\sqrt{1+\frac{c_2\alpha}{c_1\beta}}}\right)\left(\frac{2c_1\tau\beta}{\alpha^2}+\frac{c_2\tau}{\alpha}\right)+\frac{c_2\tau}{\alpha \sqrt{1+\frac{c_2\alpha}{c_1\beta}}}\right]\frac{1}{\varepsilon}.$$ \end{proof} \begin{definition} Given $r>0$ and a class ${\cal{E}}\subseteq{\cal{E}}_0$ of sets of straight line segments on the
plane, a map is called a {\it{structural map}} for ${\cal{E}}$ and $r$ if it assigns a graph $G_{\cal{I}}$ to each $E\in{\cal{E}}$ and ${\cal{I}}\subseteq{\cal{N}}_r(E)$ as defined in Lemma \ref{dskskskkroroeowoqp}, where the constants $\alpha=\alpha({\cal{E}},r),\beta=\beta({\cal{E}},r)$ and $\tau=\tau({\cal{E}},r)$ are referred to as {\it structural parameters} for the class ${\cal{E}}$ and radius $r.$ \end{definition} Thus, to estimate the value of $\theta_0$ for a given class ${\cal{E}}$ of sets of straight line segments on the plane, the approach of Lemma \ref{dskskskkroroeowoqp} suggests choosing an appropriate subspace epsilon net finder to use at step 2 of the {\sc Weak Epsilon Net Finder} procedure and identifying a structural map for ${\cal{E}}$ and $r.$ More specifically, $\theta_0=\theta^{\ast}_0$ is computed using equation $(\ref{hjhiislsldlsls}),$ where $\alpha,\beta$ and $\tau$ are parameters of the identified structural map whereas $c_1$ and $c_2$ are performance parameters of the chosen subspace epsilon net finder. Let $M^{\ast}=\left(1+\frac{1}{\sqrt{1+\frac{c_2\alpha}{c_1\beta}}}\right)\left(\frac{2c_1\tau\beta}{\alpha^2}+\frac{c_2\tau}{\alpha}\right)+\frac{c_2\tau}{\alpha \sqrt{1+\frac{c_2\alpha}{c_1\beta}}}$ be the value of the performance parameter of the {\sc Weak Epsilon Net Finder} procedure which corresponds to setting $\theta_0=\theta^{\ast}_0.$ Let us note that $\theta^{\ast}_0$ depends only on the ratios $\frac{c_1}{c_2}$ and $\frac{\beta}{\alpha},$ whereas $M^{\ast}$ depends on $c_1, c_2, \frac{\beta}{\alpha}$ and $\frac{\tau}{\alpha}.$ The smaller the latter two ratios, the smaller $M^{\ast}$ is. Of course, the approach of Lemma \ref{dskskskkroroeowoqp} to estimating $\theta_0$ can also be adapted to the general setting of the {\sc Hitting Set} problem for a range space $(Y,{\cal{R}}),$ where $Y\subset\mathbb{R}^2$ is a finite set and ${\cal{R}}$ is a family of subsets of the plane.
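The dependence of $\theta^{\ast}_0$ and $M^{\ast}$ on $\alpha,\beta,\tau,c_1,c_2$ is easy to evaluate numerically. The sketch below (Python; the helper name is ours) computes both quantities directly from equation $(\ref{hjhiislsldlsls})$ and the expression for $M^{\ast}$ above.

```python
from math import sqrt

def tune_theta(alpha, beta, tau, c1, c2):
    """Evaluate theta_0^* and the corresponding performance
    parameter M^* from the structural parameters (alpha, beta,
    tau) and the subspace epsilon net finder parameters (c1, c2).
    """
    s = sqrt(1.0 + c2 * alpha / (c1 * beta))
    theta = (alpha / beta) / (1.0 + s)
    m_star = ((1.0 + 1.0 / s)
              * (2.0 * c1 * tau * beta / alpha ** 2 + c2 * tau / alpha)
              + c2 * tau / (alpha * s))
    return theta, m_star
```

For instance, $\alpha=\tau=1,$ $\beta=3,$ $c_1=8,$ $c_2=2$ gives $\theta^{\ast}_0\approx 0.163$ and $M^{\ast}=50+52\sqrt{12/13}\approx 99.96.$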
It is shown in the subsubsection \ref{skskfkgeoqoapdlgls} below that, being applied to subspaces of the range space $(Y_0,{\cal{N}}_r(E),w),$ this approach provides a smaller performance parameter $M$ than the epsilon net finder resulting from a direct application of the original approach of \cite{pyrga}. \subsubsection{Identifying a structural map and estimating $\theta_0$}\label{yuieruiusdjk} Let ${\cal{E}}\subseteq{\cal{E}}_0.$ To build a structural map for ${\cal{E}}$ and $r,$ a graph is to be identified for each $E\in{\cal{E}}$ and ${\cal{I}}\subseteq{\cal{N}}_r(E)$ or, equivalently, for each $E\in{\cal{E}},$ $E'\subseteq E$ and $r>0.$ Below it is shown that the map which assigns a Delaunay triangulation graph of $E'$ to every segment set $E'$ turns out to be a favourable structural map, for which the ratios $\frac{\beta}{\alpha}$ and $\frac{\tau}{\alpha}$ are small. Delaunay triangulations can be defined \cite{breivellers} for planar sets of non-overlapping straight line segments under the assumption of {\it general position}: \begin{enumerate} \item no quadruple of segments from $E$ exists which is touched by a single disk; \item the endpoints of the segments from $E$ are in general position. \end{enumerate} \begin{definition} Let $F$ be the maximal set of open non-overlapping triangles each of which has its vertices lying on 3 distinct segments from $E$ and whose open circumscribing disk does not intersect any segment from $E.$ The complement $${\rm{conv}}\,\left(\bigcup\limits_{e\in E}e\right)\backslash \left(\bigcup\limits_{f\in F}f\cup \bigcup\limits_{e\in E}e\right)$$ is a union of a set $U$ of relatively open connected components, where the closure of each component intersects exactly two segments from $E.$ A triple $T_E=(E,U,F)$ is called a Delaunay triangulation of the segment set $E.$ A graph $G_E=(E,U_2)$ is called a graph for $T_E,$ where $U_2$ consists of those unordered pairs $e,e'\in E$ for which there exists $u\in U$ with
$e\cap {\rm{cl}}\,u\neq\varnothing$ and $e'\cap {\rm{cl}}\,u\neq\varnothing.$ \end{definition} It is shown in Section 4 of \cite{breivellers} that a Delaunay triangulation $T_E$ is uniquely defined by a set $E$ of non-overlapping segments in general position. Moreover, its graph $G_E$ is planar and dual to the graph of the Voronoi diagram for $E.$ Below, a parameter $\sigma=\sigma({\cal{E}})$ is defined on which $\beta=\beta({\cal{E}})$ depends. Let $m(E')=\left|\left\{e\in E':e\cap {\rm{bd}}\,{\rm{conv}}\,\left(\bigcup\limits_{e\in E'}e\right)\neq\varnothing\right\}\right|$ for $E'\subseteq E.$ Let also $$\sigma=\sigma({\cal{E}})=\inf\limits_{E'\subseteq E, E\in{\cal{E}}}\frac{m(E')}{|E'|}.$$ \begin{lemma}\label{ks234321345} For any class ${\cal{E}}\subseteq{\cal{E}}_0$ and $r>0$ there exists a structural map with $\beta=3-\sigma$ and $\alpha=\tau=1.$ \end{lemma} \begin{proof} It can be assumed that $E\in{\cal{E}}$ consists of pairwise non-intersecting segments. Indeed, the {\sc PEH} problem can be considered for the same segment set $E$ with radius $r+\rho$ instead of $r,$ where a small constant $\rho>0$ guarantees that the following conditions are met: \begin{enumerate} \item $\{N\in{\cal{N}}_r(E):y\in N\}=\{N\in{\cal{N}}_{r+\rho}(E):y\in N\}$ for every $y\in Y_0;$ \item a subset of ${\cal{N}}_r(E)$ has empty intersection iff the respective subset of ${\cal{N}}_{r+\rho}(E)$ has no common points. \end{enumerate} Then each segment $[v_1,v_2]\in E$ is replaced by the segment $[v_1+\kappa(v_2-v_1),v_2-\kappa(v_2-v_1)]$ (denote the set of segments thus obtained by $E_{\kappa}),$ where a small $\kappa=\kappa(v_1,v_2)>0$ guarantees that the same conditions are met for ${\cal{N}}_{r+\rho}\left(E_{\kappa}\right)$ instead of ${\cal{N}}_{r+\rho}(E),$ that $y\in{\rm{int}}\,\bigcap\limits_{N\in {\cal{N}}_{r+\rho}(E_{\kappa}):y\in N}N$ for each $y\in Y_0$ and that the parameter $\sigma$ is kept unchanged.
Suppose a structural map is defined for $E_{\kappa}$ and $r+\rho$ with $\alpha=\tau=1,\beta=3-\sigma$ and a graph $G_{{\cal{I}}_{\rho\kappa}}$ corresponds to a subset ${\cal{I}}_{\rho\kappa}$ under this map, where the segments from $E_{\kappa}({\cal{I}}_{\rho\kappa})$ are the shortened segments from $E({\cal{I}})$ for ${\cal{I}}\subseteq{\cal{N}}_r(E).$ It is obvious that the same graph can be assigned to the set ${\cal{I}},$ taking into account that $\{N\in{\cal{I}}:y\in N\}=\{N\in{\cal{I}}_{\rho\kappa}:y\in N\}$ for any $y\in Y_0.$ Thus, it is assumed with slight abuse of terminology that $y\in{\rm{int}}\,\bigcap\limits_{N\in {\cal{I}}:y\in N}N$ for each $y\in Y_0$ and that $E$ consists of non-overlapping segments. Moreover, it can be assumed that the segment set $E$ is in general position. To achieve this, segments from $E$ can be slightly shifted, keeping the values of the structural parameters $\alpha,\tau$ and $\sigma$ unchanged without breaking empty/nonempty intersections of subsets of objects from ${\cal{N}}_r(E).$ Let ${\cal{I}}\subseteq {\cal{N}}_r(E)$ and let $G_{{\cal{I}}}$ be the maximal graph which is obtained from a Delaunay triangulation graph for $E({\cal{I}})$ by removing redundant multiple edges. By Theorem 3 from \cite{breivellers}, $G_{{\cal{I}}}$ contains at most $3|E({\cal{I}})|-k-3$ edges, where $k$ denotes the number of those edges of ${\rm{conv}}\,\left(\bigcup\limits_{e\in E({\cal{I}})}e\right)$ which are not segments of $E({\cal{I}}).$ As the segments from $E({\cal{I}})$ are non-intersecting, $m(E({\cal{I}}))\leq k$ and $G_{{\cal{I}}}$ has at most $\beta|E({\cal{I}})|$ edges. Let ${\cal{I}}(y)=\{I\in{\cal{I}}:y\in I\}$ and let $G_{{\cal{I}}}(y)$ be the subgraph of $G_{{\cal{I}}}$ induced by the subset $E({\cal{I}}(y))$ as its set of $n_{{\cal{I}}}(y)$ vertices. Let us prove that $\alpha=\tau=1$ by induction on $n_{{\cal{I}}}(y)$ for every $y\in Y_0.$ The case $n_{{\cal{I}}}(y)=1$ is obvious.
Let us assume that any graph $G_{{\cal{I}}}(y)$ with $n_{{\cal{I}}}(y)\leq k$ vertices contains at least $n_{{\cal{I}}}(y)-1$ edges (for any $r)$ and suppose that $n_{{\cal{I}}}(y)=k+1.$ By perturbing $y$ within ${\rm{int}}\,\bigcap\limits_{I\in{\cal{I}}(y)}I$ it can be achieved that the segments from $E({\cal{I}}(y))$ are at distinct distances from $y.$ Besides, let $e_0(y)\in E({\cal{I}}(y))$ be the segment of $E({\cal{I}}(y))$ which is farthest from $y.$ Denote by $y_0$ the Euclidean projection of $y$ onto $e_0(y).$ There is a point $y_1\in [y,y_0]$ which is equidistant from $e_0(y)$ and some segment $e(y)\in E({\cal{I}}(y))\backslash\{e_0(y)\},$ whereas none of the segments of $E({\cal{I}})\backslash \{e_0(y),e(y)\}$ is within the distance $\|y_0-y_1\|_2$ from $y_1$ (again, possibly after a small perturbation of $y).$ Due to the duality between Delaunay triangulations and Voronoi diagrams considered over the same segment set (see Theorem 4 from \cite{breivellers}), we get that $G_{{\cal{I}}}(y)$ contains an edge which connects $e_0(y)$ and $e(y).$ Obviously, each segment of $E({\cal{I}}(y))$ has nonempty intersection with the radius-$r$ disk centered at $y.$ Let $\gamma>0$ be small enough that the disk of radius $r_0=\|y-y_0\|_2-\gamma$ centered at $y$ intersects all $n_{{\cal{I}}}(y)-1$ segments from $E({\cal{I}}(y))\backslash\{e_0(y)\}.$ Let $G_{{\cal{I}}}(y,\gamma)$ be the subgraph of $G_{{\cal{I}}}(y)$ induced by the segments of $E({\cal{I}}(y))\backslash\{e_0(y)\}.$ Applying the inductive assumption, one gets that $G_{{\cal{I}}}(y,\gamma)$ has at least $n_{{\cal{I}}}(y)-2$ edges. Thus, the graph $G_{{\cal{I}}}(y)$ contains at least $n_{{\cal{I}}}(y)-1$ edges.
\end{proof} In the proof of Lemma \ref{ks234321345} a structural map is built for an arbitrary subclass ${\cal{E}}\subseteq{\cal{E}}_0.$ The parameter $\beta$ of this structural map depends on the ${\cal{E}}$-specific parameter $\sigma.$ This allows one to design ${\cal{E}}$-specific implementations of the {\sc Weak Epsilon Net Finder} procedure with different values of $\theta^{\ast}_0,$ giving a smaller value $M^{\ast}$ of the performance parameter $M$ than in the general case ${\cal{E}}={\cal{E}}_0.$ Three examples of the choice of $\theta_0$ for different classes of segment sets are given below. The first example is for the general case ${\cal{E}}={\cal{E}}_0.$ In this case the subspace epsilon net finder from the subsubsection \ref{qyuuififis} is chosen to work at step 2 of the {\sc Weak Epsilon Net Finder} procedure. \noindent {\bf Example 1.} Using equation $(\ref{hjhiislsldlsls})$ and Lemma \ref{ks234321345} one gets $\theta^{\ast}_0({\cal{E}}_0)=\frac{1}{3+\frac{\sqrt{39}}{2}}\approx 0.163,$ where $\alpha=\tau=1,\,\beta=3,\,c_1=8$ and $c_2=2.$ The second example is for a special proper subclass of the class ${\cal{E}}_0.$ It shows how $\theta^{\ast}_0$ changes when $\frac{\beta}{\alpha}$ varies. \begin{definition} A plane graph $G=(V,E)$ is called a {\it generalized outerplane graph} if $$e\cap{\rm{bd}}\,{\rm{conv}}\,\left(\bigcup\limits_{e\in E}e\right)\neq\varnothing$$ for any $e\in E.$ \end{definition} When each segment from $E$ has both of its endpoints on the boundary of ${\rm{conv}}\,\left(\bigcup\limits_{e\in E}e\right),$ such a plane graph $G=(V,E)$ is known as an outerplane graph. The lemma below follows from Lemma \ref{ks234321345}.
\begin{lemma}\label{gakakakeoqoeofodl} For the class ${\cal{E}}_1$ of edge sets of arbitrary generalized outerplane graphs there is a structural map with $\alpha=\tau=\sigma=1$ and $\beta=2.$ \end{lemma} \noindent {\bf Example 2.} Using equation $(\ref{hjhiislsldlsls})$ one has $\theta^{\ast}_0({\cal{E}}_1)=\frac{1}{2+\frac{3}{\sqrt{2}}}\approx 0.242,$ where $\alpha=\tau=1,\,\beta=2,\,c_1=8$ and $c_2=2.$ The third example illustrates how $\theta^{\ast}_0$ depends on the ratio $\frac{c_1}{c_2}$ of the performance parameters of the subspace epsilon net finder chosen to work at step 2 of the {\sc Weak Epsilon Net Finder} procedure. Consider the class ${\cal{E}}_2(r)$ of sets of straight line segments in which, for any $E\in{\cal{E}}_2(r),$ either $d(e,e')>r$ or $d(e,e')=0$ for any distinct $e,e'\in E.$ For this class a suitable subspace epsilon net finder is the one from the subsubsection \ref{hkfksiwiaoaofohdov}. \noindent {\bf Example 3.} In view of equation $(\ref{hjhiislsldlsls})$, $\theta^{\ast}_0({\cal{E}}_2(r))=\frac{1}{3+3\sqrt{3}}\approx 0.122,$ where $\alpha=\tau=1,\,\beta=3,\,c_1=1$ and $c_2=6.$ \subsection{Performance analysis of the Weak Epsilon Net Finder procedure}\label{fikdksoeofosqp} As an obvious consequence of Lemmas \ref{kob11111111}, \ref{dskskskkroroeowoqp} and \ref{ks234321345}, the time complexity and space cost of the procedure can be estimated. \begin{lemma}\label{hhsurigklflskieigodls} For any $E\in {\cal{E}}_0$ the {\sc Weak Epsilon Net Finder} procedure works in $O\left(\frac{n^3}{\varepsilon}\right)$ time and linear space with respect to the space cost of storing $(Y_0,{\cal{N}},w)$ for ${\cal{N}}\subseteq{\cal{N}}_r(E)$ and $w:Y_0\rightarrow\mathbb{Q}_+.$ \end{lemma} The lemma below summarizes the $O\left(\frac{1}{\varepsilon}\right)$ upper bounds on the size of weak $\varepsilon$-nets returned by the {\sc Weak Epsilon Net Finder} procedure.
It follows from Lemmas \ref{kob11111111}, \ref{kkfkfkdkdksks}, \ref{fsjdkgowoqpspa}, \ref{dskskskkroroeowoqp}, \ref{ks234321345} and \ref{gakakakeoqoeofodl}. \begin{lemma}\label{kdksksiritfoosos} Given a range space $(Y_0,{\cal{N}},w)$ for ${\cal{N}}\subseteq{\cal{N}}_r(E)$ and $w:Y_0\rightarrow\mathbb{Q}_+,$ the {\sc Weak Epsilon Net Finder} procedure with a suitable subspace epsilon net finder, working at its step 2, returns a weak $\varepsilon$-net for $(Y_0,{\cal{N}},w)$ of size at most $\frac{M}{\varepsilon},$ where \begin{enumerate} \item $M=50+52\sqrt{\frac{12}{13}}$ for $E$ being an edge set of a plane graph; \item $M=34+24\sqrt{2}$ in the case where $E$ is an edge set of a generalized outerplane graph; \item $M=12+6\sqrt{3}$ if each pair of distinct segments from $E$ is at Euclidean distance either zero or more than $r$ from each other; \item $M=\frac{144}{5}+32\sqrt{\frac{3}{5}}$ in the case where $E$ is an edge set of any subgraph of a Delaunay triangulation; \item $M=20+12\sqrt{2}$ in the case of $E$ being an edge set of any subgraph of a generalized outerplane Delaunay triangulation. \end{enumerate} \end{lemma} \begin{remark} If $E$ is an edge set of any subgraph of an outerplane graph, the {\sc Weak Epsilon Net Finder} procedure can be slightly modified to get a much smaller $M$ than for the case of generalized outerplane graphs. This improvement consists in applying a special algorithm for constructing a maximal $\delta$-independent set ${\cal{I}}$ at its step 1.
The algorithm is based on the fact that for every subset $E'\subseteq E$ a segment can be chosen from $E'$ that is an edge of ${\rm{conv}}\,\bigcup\limits_{e\in E'}e.$ Here the {\sc Subspace Weak Epsilon Net Finder} procedure from the subsubsection \ref{qyuuififis} is used at step 2 of the {\sc Weak Epsilon Net Finder} procedure to get hitting sets for ${\cal{N}}_{\delta,I}$ of size at most $\frac{4w(I)}{\delta w(Y_0)}.$ \end{remark} \subsubsection{Comparing the performance parameter of the Weak Epsilon Net Finder procedure with related work}\label{skskfkgeoqoapdlgls} As an alternative to the {\sc Weak Epsilon Net Finder} procedure, consider an epsilon net finder resulting from a direct application of the original approach from \cite{pyrga} to subspaces of $(Y_0,{\cal{N}}_r(E),w).$ Within this procedure, ${\cal{N}}_{\varepsilon}$ is partitioned into much smaller subsets than in the {\sc Weak Epsilon Net Finder} procedure. Namely, ${\cal{N}}_{\varepsilon}$ is first partitioned into groups $${\cal{N}}^s_{\varepsilon}=\{N\in{\cal{N}}:2^{s+1}\varepsilon w(Y_0)\geq w(N)>2^s\varepsilon w(Y_0)\}$$ for $s=0,\ldots,\log_2\frac{1}{\varepsilon}-1;$ then each group ${\cal{N}}^s_{\varepsilon}$ is partitioned into subsets in the same way as ${\cal{N}}_{\varepsilon}$ is at step 1 of the {\sc Weak Epsilon Net Finder} procedure.
More precisely, the corresponding partition of ${\cal{N}}^s_{\varepsilon}$ is generated using some maximal $\delta_s$-independent set ${\cal{I}}_s$ for $(Y_0,{\cal{N}}^s_{\varepsilon},w),$ where $\delta_s=\theta_02^s\varepsilon.$ Applying results from \cite{komlos}, a hitting set of constant size is generated for each element of the partition of ${\cal{N}}^s_{\varepsilon}$ for every $s.$ Omitting details, the original approach of \cite{pyrga} gives an epsilon net finder whose performance parameter is equal to $M_1=\frac{16d\tau\beta\log\frac{4\beta}{\alpha}}{\alpha^2},$ where $d$ is the VC-dimension of $(Y_0,{\cal{N}}_r(E)).$ For $c_2=0$ equation $(\ref{hjhiislsldlsls})$ gives $\theta^{\ast}_0=\frac{\alpha}{2\beta}$ and the bound $|C_{\theta^{\ast}_0}|\leq\frac{M_2}{\varepsilon}$ follows from Lemma \ref{dskskskkroroeowoqp} for $M_2=\frac{4c_1\tau\beta}{\alpha^2}.$ Therefore $M_2\leq M_1$ when $c_1\leq 4d\log\frac{4\beta}{\alpha}.$ Of course, the {\sc Subspace Weak Epsilon Net Finder} procedure from the subsubsection \ref{qyuuififis} can be considered as having performance parameters $c_1=10$ and $c_2=0.$ As $d\geq 3,$ we have $\frac{M_1}{M_2}\gtrsim 3.$ \section{Computing a proper weighting in the Piercing Hippodromes algorithm}\label{pkgkslsls} Apart from constructing weak epsilon nets, another important task is performed at step 4 of the {\sc Piercing Hippodromes} algorithm. Given ${\cal{N}}\subseteq{\cal{N}}_r(E),$ it consists in computing a proper weight map $w_f:Y_0\rightarrow\mathbb{Q}_+$ such that $w_f(N)=\Omega\left(\frac{w_f(Y_0)}{{\rm{OPT}}(Y_0,{\cal{N}})}\right)$ for all $N\in {\cal{N}}.$ It is shown in \cite{bronnimann} by Bronnimann and Goodrich (in a much more general setting) and later in \cite{agarwal} by Agarwal and Pan that such a map always exists and can be computed algorithmically. In this section an iterative reweighting procedure is described to be applied at step 4 of the {\sc Piercing Hippodromes} algorithm.
It is a slightly modified version of the procedure applied within the Agarwal-Pan algorithm from \cite{agarwal}. An analogous modification is used in \cite{bus2}. To explain how the procedure works, the idea of its original version from \cite{agarwal} is briefly reviewed first. \subsection{Iterative reweighting procedure in the original Agarwal-Pan algorithm} Being applied to range subspaces of $(Y_0,{\cal{N}}_r(E)),$ the iterative reweighting procedure of the original Agarwal-Pan algorithm accepts a positive integer parameter $k$ and a range space $(Y_0,{\cal{N}})$ for ${\cal{N}}\subseteq{\cal{N}}_r(E)$ as its input. It updates the weights of points from $Y_0,$ trying to increase the ratios $\frac{w(N)}{w(Y_0)}$ uniformly for all $N\in{\cal{N}}$ to get them all above the threshold $\frac{1}{2ke}.$ The procedure works as follows. Initially, $w(y):=w_0(y)$ for all $y\in Y_0,$ where $w_0$ is the unit weight map. Within the procedure, so-called {\it weight updating} steps are repeated for objects from ${\cal{N}}.$ Weight updating steps are grouped into so-called rounds. Each round contains at most $T=2k$ consecutive weight updating steps. Within each round, objects from ${\cal{N}}$ are processed one by one. This means that after performing weight updating steps for a particular object from ${\cal{N}},$ it is not processed again in the current round. Given an object $N\in{\cal{N}}$ such that $w(N)\leq\frac{w(Y_0)}{2k},$ an update $w(y):=2w(y)$ is performed for each $y\in N\cap Y_0$ during a particular weight updating step for $N.$ Weight updating steps for $N$ continue to be performed until either $w(N)>\frac{w(Y_0)}{2k}$ or the total number of weight updating steps (including those steps for previously processed objects) in the current round equals $T.$
When the current round finishes without reaching the limit $T$ on the total number of performed weight updating steps, a final weight map $w_k:=w$ on $Y_0$ and the value $\varepsilon_k=\frac{1}{2ke}$ are returned, and the procedure stops. In this case it can be proved \cite{agarwal} that $w_k(N)>\varepsilon_kw_k(Y_0)$ for all $N\in{\cal{N}}.$ Otherwise, if the current round contains $T$ consecutive weight updating steps, the procedure proceeds to the next round if the total number of performed rounds does not exceed $2\log\frac{|Y_0|}{k}+1,$ and stops otherwise. If the procedure stops after exceeding that limit, it reports that no suitable weight map can be computed for the given $k.$ The ability of the original Agarwal-Pan algorithm to identify a value $k=O({\rm{OPT}})$ for which the iterative reweighting procedure is able to compute the corresponding suitable weight map on $Y_0$ relies on a basic observation, which can be formulated as follows for $(Y_0,{\cal{N}}):$ for any positive integer $k$ the above procedure finishes, performing fewer than $T$ steps in the final round, if a $k$-element hitting set $C\subseteq Y_0$ exists for ${\cal{N}};$ moreover, no $k$-element hitting set exists for ${\cal{N}}$ when the procedure performs exactly $T$ weight updating steps in its final round. This observation is based on the fact that $w(C)$ grows faster than $w(Y_0)$ as weight updating steps proceed, thus restricting the number of those steps (see Lemma 2.1 from \cite{agarwal} for details). \subsection{Modified iterative reweighting procedure} The problem with the original iterative reweighting procedure from \cite{agarwal} is that applying it at step 4 of the {\sc Piercing Hippodromes} algorithm only guarantees an upper bound of $4Me$ on its approximation factor (see the procedure analysis in \cite{agarwal}), where $M$ is the performance parameter of the {\sc Weak Epsilon Net Finder} procedure.
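For concreteness, the original doubling procedure reviewed above can be sketched as follows (a minimal sketch: the data representation and function name are ours, the logarithm base in the round limit is assumed to be $2$, and the constants follow the text):

```python
import math

def agarwal_pan_reweight(points, ranges, k):
    """Sketch of the original doubling reweighting procedure.

    points: list of point identifiers (the set Y_0).
    ranges: list of sets of points (the family N).
    k:      candidate hitting-set size.

    Returns (w, eps) with w(N) > eps * w(Y_0) for all N in ranges,
    or None if no suitable weight map is found for this k.
    """
    w = {y: 1 for y in points}            # unit weight map w_0
    T = 2 * k                             # step limit per round
    max_rounds = int(2 * math.log2(max(len(points) / k, 1))) + 1

    for _ in range(max_rounds):
        steps = 0
        for N in ranges:                  # each object once per round
            # double weights inside N while N is still "light"
            while steps < T and sum(w[y] for y in N) <= sum(w.values()) / (2 * k):
                for y in N:
                    w[y] *= 2
                steps += 1
            if steps == T:
                break
        if steps < T:                     # round finished early: success
            return w, 1.0 / (2 * k * math.e)
    return None                           # round limit exceeded
```

In line with the basic observation above, the sketch returns a weight map when a $k$-element hitting set exists and reports failure when the round limit is exhausted.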
Below, a slight modification of the original procedure is provided in order to achieve a better upper bound, which is very close to $M.$ The only difference between this procedure and the procedure of the original algorithm is that point weights are only slightly modified during a particular weight updating step for an object from ${\cal{N}},$ namely, they are multiplied by the factor $1+\lambda_1,$ where $\lambda_1$ is some small absolute constant to be chosen later; moreover, the condition $w(N)>\frac{w(Y_0)}{\lambda k}$ is checked before performing each weight updating step, for some constant $\lambda>1,\,\lambda\approx 1.$ Pseudo-code of the procedure is given below. Let $\kappa=2\lambda-\lambda\lambda_1-2>0.$ \hrule \noindent {\sc Iterative Reweighting.} \hrule \noindent {\bf Input:} a parameter $k$ and a range space $(Y_0,{\cal{N}})$ for some ${\cal{N}}=\{N_1,\ldots,N_m\}\subseteq{\cal{N}}_r(E);$ \noindent {\bf Output:} if a weight map $w_k=w_k(\cdot|Y_0,{\cal{N}})$ can be computed for which $w_k(N)>\varepsilon_kw_k(Y_0)$ for all $N\in{\cal{N}},$ where $\varepsilon_k\geq\frac{1}{\lambda ke^{2\lambda_1/\lambda}},$ it returns such $w_k$ and $\varepsilon_k;$ otherwise, it reports that no suitable weight map can be computed for the given $k.$ \hrule \begin{enumerate} \item set $t:=1;$ // {\it round counter} \item set $w:=w_0;$ // {\it set $w$ equal to the unit weight map} \item set $s:=0$ and $p:=1;$ // {\it counters for weight updating steps and objects processed in the current round} \item verify the inequality \begin{equation}\label{klsks} w(N_p)\leq\frac{w(Y_0)}{\lambda k} \end{equation} and if it is true, set $s:=s+1,$ $w(y):=w(y)(1+\lambda_1)$ for all $y\in N_p\cap Y_0$ and continue repeating step 4 while $s<2k$ and $(\ref{klsks})$ still holds; \item for $s<2k$ examine whether the equality $p=m$ holds: if it does, return $w_k:=w,\,\varepsilon_k:=\frac{1}{\lambda ke^{\lambda_1s/(\lambda k)}};$ otherwise, set $p:=p+1$ and go to step 4; \item if $s=2k,$ check if
$t>\frac{\lambda\ln(|Y_0|/k)}{\lambda_1\kappa}$ holds: when it does, report that no suitable map can be computed for the given value of $k;$ otherwise, set $t:=t+1$ and go to step 3. \end{enumerate} \hrule Finally, note that computing $w(N_p)$ and generating the points of $N_p\cap Y_0$ at step 4 of the procedure is done in a straightforward way: for example, when reporting the points of $N_p\cap Y_0,$ each point $y\in Y_0$ is checked for containment in $N_p,$ and, if contained, the point $y$ is reported. \section{Constant factor approximation algorithms for the PEH problem and their performances}\label{fhdhhsueurusiis} In this section our main algorithmic results are formulated. The first result is a constant factor approximation for the {\sc PEH} problem on a set of $r$-hippodromes whose underlying straight line segments are allowed to intersect at most at their endpoints. More accurate approximations are also provided for special geometric configurations of segments. \begin{theorem}\label{ksakldkekeksa} By applying the {\sc Iterative Reweighting} procedure at step 4 of the {\sc Piercing Hippodromes} algorithm and the {\sc Weak Epsilon Net Finder} procedure, with a suitable subspace epsilon net finder, at steps 2 and 7 of the algorithm, for $\theta_0$ computed according to the equation $(\ref{hjhiislsldlsls}),$ an approximation algorithm can be obtained for the {\sc PEH} problem such that for any small $\nu>0$ it works in $O\left(\left(n^2+\frac{n\log n}{\nu^2}+\frac{\log n}{\nu^3}\right)n^2\log n\right)$ time, $O\left(\frac{n^2\log n}{\nu}\right)$ space and provides \begin{enumerate} \item a $\left(50+52\sqrt{\frac{12}{13}}+\nu\right)$-approximate solution in the case where $E$ is an edge set of a plane graph; \item a $(34+24\sqrt{2}+\nu)$-approximate solution for $E$ being an edge set of a generalized outerplane graph; \item a $\left(12+6\sqrt{3}+\nu\right)$-approximate solution if each pair of distinct segments from $E$ is at Euclidean distance either zero or more than $r$ from each
other; \item a $\left(\frac{144}{5}+32\sqrt{\frac{3}{5}}+\nu\right)$-approximate solution in the case where $E$ is an edge set of any subgraph of a Delaunay triangulation; \item a $(20+12\sqrt{2}+\nu)$-approximate solution for $E$ being an edge set of any subgraph of a generalized outerplane Delaunay triangulation. \end{enumerate} \end{theorem} \begin{proof} The proof is organized in two stages. \noindent {\sc Stage 1.} Let ${\cal{E}}\subseteq{\cal{E}}_0.$ Suppose that the {\sc Weak Epsilon Net Finder} procedure produces weak $\varepsilon$-nets for subspaces of $(Y_0,{\cal{N}}_r(E),w)$ of size at most $\frac{M}{\varepsilon}$ for any $E\in{\cal{E}}.$ At the first stage it is proved that for any small constants $\mu_0,\lambda_1>0$ and any constant $\lambda>1$ for which $\kappa=2\lambda-\lambda\lambda_1-2>0,$ the {\sc Piercing Hippodromes} algorithm is $M(\mu_0+\lambda e^{2\lambda_1/\lambda})$-approximate, works in \begin{equation}\label{fjdjsjsjfuruwuyq} O\left(\left(\frac{n^3\log n+\frac{n^2\log n}{\mu_0}}{\lambda_1\kappa}+ n^3{\rm{OPT}}\right)\log {\rm{OPT}}\right) \end{equation} time and $O\left(\frac{n^2\log n}{\kappa}\right)$ space. The proof is analogous to the proofs of Lemmas 2.1 and 3.1 from \cite{agarwal} and Lemma 2.1 from \cite{bus2}. First, the approximation ratio of the {\sc Piercing Hippodromes} algorithm is estimated. For a given $k$ it is proved that at most $\left\lceil\frac{2\lambda k\ln|Y_0|}{\lambda_1\kappa}\right\rceil$ weight updates are done in the {\sc Iterative Reweighting} procedure at its step 4 for sets from ${\cal{N}}_k,$ summing over all rounds, under the assumption that there is a $k$-element hitting set $S_k\subseteq Y_0$ for ${\cal{N}}_k.$ Indeed, let $w^F_k(Y_0)$ (respectively, $w^F_k(S_k))$ be the weight $w(Y_0)$ (respectively, the weight $w(S_k))$ observed at the end of the final round in which the {\sc Iterative Reweighting} procedure stops. Also let $z_k$ be the total number of times that weights are updated (i.e.
multiplied by $1+\lambda_1)$ for objects from ${\cal{N}}_k,$ over all rounds. The following inequality holds: \begin{equation}\label{kdhdhshshah} w^F_k(Y_0)\leq |Y_0|\left(1+\frac{\lambda_1}{\lambda k}\right)^{z_k}. \end{equation} On the other hand, one gets $$\frac{w^F_k(S_k)}{k}=\frac{\sum\limits_{c\in S_k}(1+\lambda_1)^{z_k(c)}}{k}\geq (1+\lambda_1)^{\sum\limits_{c\in S_k}z_k(c)/k}\geq (1+\lambda_1)^{z_k/k},$$ where $z_k(c)$ denotes the number of times that $w(c)$ is updated. As $w^F_k(S_k)\leq w^F_k(Y_0),$ the following inequality holds: $$k(1+\lambda_1)^{z_k/k}\leq |Y_0|\left(1+\frac{\lambda_1}{\lambda k}\right)^{z_k}.$$ Resolving it with respect to $z_k,$ one gets $z_k\leq\frac{2\lambda k\ln(|Y_0|/k)}{\lambda_1\kappa}$\footnote{Here the double inequality $x-\frac{x^2}{2}\leq\ln(1+x)\leq x$ is used for small $x>0.$}. Thus, once the inequality ${\rm{OPT}}(Y_0,{\cal{N}}_k)\leq k$ holds, after at most $\left\lceil \frac{2\lambda k\ln(|Y_0|/k)}{\lambda_1\kappa}\right\rceil$ weight updates the {\sc Iterative Reweighting} procedure stops, returning $w_k$ and $\varepsilon_k.$ At step 5 of the {\sc Piercing Hippodromes} algorithm the true and false values of the flag variable are explored to localize ${\rm{OPT}}(Y_0,{\cal{N}}_{k_p}).$ The algorithm finally gets $k_f\leq {\rm{OPT}}(Y_0,{\cal{N}}_{k_f})$ and outputs the corresponding weight map $w_{k_f}$ and the parameter $\varepsilon_{k_f}\geq\frac{1}{\lambda k_fe^{2\lambda_1/\lambda}}.$ Let $C\subset\mathbb{R}^2$ be the set of size at most $Mk_f\left(\mu_0+\lambda e^{\lambda_1s/(\lambda k_f)}\right)$ which is returned at step 8 of the {\sc Piercing Hippodromes} algorithm.
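For completeness, we note how the inequality $k(1+\lambda_1)^{z_k/k}\leq |Y_0|\left(1+\frac{\lambda_1}{\lambda k}\right)^{z_k}$ above resolves to the stated bound on $z_k$ (our elaboration of the footnote argument). Taking logarithms and applying $\ln(1+x)\geq x-\frac{x^2}{2}$ on the left-hand side and $\ln(1+x)\leq x$ on the right-hand side, one gets $$\ln k+\frac{z_k}{k}\left(\lambda_1-\frac{\lambda_1^2}{2}\right)\leq \ln|Y_0|+\frac{z_k\lambda_1}{\lambda k},$$ and hence $$\frac{z_k\lambda_1}{k}\left(1-\frac{\lambda_1}{2}-\frac{1}{\lambda}\right)\leq\ln\frac{|Y_0|}{k}.$$ Since $1-\frac{\lambda_1}{2}-\frac{1}{\lambda}=\frac{2\lambda-\lambda\lambda_1-2}{2\lambda}=\frac{\kappa}{2\lambda},$ this yields $z_k\leq\frac{2\lambda k\ln(|Y_0|/k)}{\lambda_1\kappa}.$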
It is proved below that $C$ is a hitting set for ${\cal{N}}_r(E).$ Let $w^I_{k_f}(Y_0)$ be the weight $w(Y_0)$ observed at the beginning of the (final) round within which the {\sc Iterative Reweighting} procedure returns a proper weight map $w_{k_f}$ and the parameter $\varepsilon_{k_f}.$ As $s<2k_f$ in that round, one gets $$w^F_{k_f}(Y_0)\leq\left(1+\frac{\lambda_1}{\lambda k_f}\right)^{s}w^I_{k_f}(Y_0)\leq e^{\lambda_1s/(\lambda k_f)}w^I_{k_f}(Y_0)<e^{2\lambda_1/\lambda}w^I_{k_f}(Y_0).$$ On the other hand, $w_{k_f}^F(N)>\frac{w^I_{k_f}(Y_0)}{\lambda k_f}\geq\frac{w^F_{k_f}(Y_0)}{\lambda k_fe^{\lambda_1s/(\lambda k_f)}}>\frac{w^F_{k_f}(Y_0)}{\lambda k_fe^{2\lambda_1/\lambda}}$ for any $N\in{\cal{N}}_{k_f},$ where $w_{k_f}^F(N)$ denotes the weight of $N$ at the end of the final round. Thus, the algorithm output $C$ gives a hitting set for ${\cal{N}}_r(E)$ and: $$|C|\leq Mk_f\left(\mu_0+\lambda e^{\lambda_1s/(\lambda k_f)}\right)\leq M\left(\mu_0+\lambda e^{\lambda_1s/(\lambda k_f)}\right){\rm{OPT}}(Y_0,{\cal{N}}_{k_f})\leq $$ $$\leq M\left(\mu_0+\lambda e^{2\lambda_1/\lambda}\right){\rm{OPT}}(Y_0,{\cal{N}}_{k_f})\leq M\left(\mu_0+\lambda e^{2\lambda_1/\lambda}\right){\rm{OPT}}.$$ Now bounds are established for the time complexity of the {\sc Piercing Hippodromes} algorithm. Its step 1 requires $O(n^2)$ time. Binary search over steps 2-4 requires $O\left(\left(n^3{\rm{OPT}}+B\right)\log{\rm{OPT}}\right)$ time, where $B$ is the maximal time complexity of the {\sc Iterative Reweighting} procedure. Indeed, step 2 of the algorithm takes $O(n^3{\rm{OPT}})$ time by Lemma \ref{hhsurigklflskieigodls}, whereas its step 3 requires $O(n\,{\rm{OPT}})$ time. Step 7 of the {\sc Piercing Hippodromes} algorithm takes $O(n^3{\rm{OPT}})$ time by Lemma \ref{hhsurigklflskieigodls}.
Thus, to estimate the total complexity of the algorithm it remains to estimate $B.$ Recall that the {\sc Iterative Reweighting} procedure is called for the space $(Y_0,{\cal{N}}_k)$ with $k=O({\rm{OPT}}).$ For any $t,$ the $t$th round consists of at most $|{\cal{N}}_k|$ operations to compute weights of objects from ${\cal{N}}_k.$ As $t\leq \frac{\lambda\ln(|Y_0|/k)}{\lambda_1\kappa}+1,$ the overall time complexity of such operations at step 4 of the procedure is of the order $O\left(\frac{\lambda|{\cal{N}}_k|n^2\log n}{\lambda_1\kappa}\right).$ One has $|N\cap Y_0|\leq\frac{|Y_0|}{\mu_0 k}$ for every $N\in{\cal{N}}_k.$ As $s\leq 2k,$ an $O\left(\frac{\lambda n^2\log n}{\mu_0\lambda_1\kappa}+\frac{\lambda|{\cal{N}}_k|n^2\log n}{\lambda_1\kappa}\right)$ time is spent on operations to update point weights at step 4. Therefore $B=O\left(\frac{\lambda \left(n^3\log n+\frac{n^2\log n}{\mu_0}\right)}{\lambda_1\kappa}\right).$ As for the space cost of the {\sc Piercing Hippodromes} algorithm, note that $$w(Y_0)\leq |Y_0|^{1+2/\kappa}e^{2\lambda_1/\lambda},$$ which follows by substituting $z_k=\left(\frac{\lambda\ln (|Y_0|/k)}{\lambda_1\kappa}+1\right)2k$ into the bound $(\ref{kdhdhshshah})$ for $w^F_k(Y_0).$ Indeed, as $w(y)=(1+\lambda_1)^{z_k(y)}$ for $y\in Y_0,$ it can be shown that $$\sum\limits_{y\in Y_0}\ln w(y)\leq|Y_0|\ln\frac{w(Y_0)}{|Y_0|}= O\left(\frac{|Y_0|\ln|Y_0|}{\kappa}\right).$$ \noindent {\sc Stage 2.} Now one is ready to prove the first statement of the theorem. The other statements of the theorem can be proved analogously, using Lemma \ref{kdksksiritfoosos}. Consider an implementation of the {\sc Weak Epsilon Net Finder} procedure with the {\sc Subspace Weak Epsilon Net Finder} procedure from Subsubsection \ref{qyuuififis} working at its step 2. Choose $\theta_0$ as in Example 1 from Subsubsection \ref{yuieruiusdjk}.
By Lemma \ref{kdksksiritfoosos} the {\sc Weak Epsilon Net Finder} procedure outputs weak $\varepsilon$-nets of size at most $\left(50+52\sqrt{\frac{12}{13}}\right)\frac{1}{\varepsilon}.$ By what was shown in the first stage, this implies that the {\sc Piercing Hippodromes} algorithm is $\left(50+52\sqrt{\frac{12}{13}}\right)(\mu_0+\lambda e^{2\lambda_1/\lambda})$-approximate. One can adjust its parameters $\mu_0,\lambda$ and $\lambda_1$ to get a $\left(50+52\sqrt{\frac{12}{13}}+\nu\right)$-approximate algorithm without affecting the orders of its time and space complexities for any small $\nu>0.$ More specifically, setting $\lambda:=1+\frac{\nu}{300},\,\lambda_1=\mu_0:=\frac{\nu}{1200},$ it can be shown that $\left(50+52\sqrt{\frac{12}{13}}\right)\left(\frac{\nu}{1200}+\left(1+\frac{\nu}{300}\right)e^{\nu/600}\right)\leq 50+52\sqrt{\frac{12}{13}}+\nu$ for small $\nu>0.$ Moreover, this gives the claimed dependencies of the algorithm's time and space costs on $\nu.$ \end{proof} \section{Conclusion} Constant factor approximations are proposed for a special NP- and W[1]-hard geometric piercing problem on a set of $r$-hippodromes whose underlying straight line segments are allowed to intersect at most at their endpoints. More accurate approximations are also provided for special configurations of segments, forming edge sets of generalized outerplane graphs and Delaunay triangulations. These algorithms demonstrate a competitive combination of guaranteed constant approximation factor and time complexity compared with known local search and epsilon net based approximation algorithms for the {\sc Hitting Set} problem on a set of pseudo-disks. We hope that our $O(1)$-approximations can be expedited by incorporating clever geometric data structures. \appendix \section{Proof of Lemma \ref{gjfkdkksaadqqrsra}} \begin{proof} Let ${\cal{J}}=\{J_i\}_{i=1}^n$ be a set of bounded intervals on the real line, i.e.
$J_i=[a_i,b_i],\,i=1,\ldots,n.$ Let ${\cal{J}}'$ be a subset of ${\cal{J}}$ which is maximal with respect to inclusion and does not contain a pair of distinct intervals $I$ and $J$ with $I\subseteq J.$ Removing the containing intervals from ${\cal{J}}$ can be done in $O(n\log n)$ time and $O(n)$ space. Indeed, an interval $[a,b]$ can be represented by a point $(a,b)$ on the $xy$-plane above the straight line $y=x;$ checking whether an interval $[a,b]$ contains some other interval $[c,d]$ is equivalent to checking whether the axis-parallel rectangle whose upper left vertex is $(a,b)$ and lower right vertex is $(b,a)$ contains the point $(c,d).$ This check can be done using data structures for processing orthogonal range emptiness queries on $n$-point sets in $O(\log n)$ time and $O(n)$ space after preliminary preprocessing in $O(n\log n)$ time \cite{Blelloch}. Second, we sort the lower and upper ends of the intervals from ${\cal{J}}'$ into a single sequence, set $H:=\varnothing$ and ${\cal{P}}:={\cal{J}}'.$ Then, proceeding sequentially until ${\cal{P}}=\varnothing,$ at step $k$ an interval $I_k=[a_k,b_k]\in {\cal{P}}$ with the maximal upper end $b_k$ is selected; its lower end $a_k$ is added to the hitting set $H$ and the intervals hit by $a_k$ are excluded from ${\cal{P}}.$ When ${\cal{P}}=\varnothing,$ let $Q=\{I_k\}\subset{\cal{J}}$ be the set of non-overlapping intervals thus constructed. We get that $H$ is a minimum cardinality hitting set for ${\cal{J}}.$ Summarizing the complexity of computing $H,$ we note that sorting the interval ends from ${\cal{J}}'$ can obviously be done in $O(|{\cal{J}}'|\log |{\cal{J}}'|)$ time. Moreover, when reporting those intervals from ${\cal{P}}$ which contain $a_k,$ we start with the interval from ${\cal{P}}$ having the second maximal upper end. Thus, it takes $O(|{\cal{J}}'|)$ overall time to report such intervals. \end{proof} \end{document}
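The greedy procedure of the appendix lemma can be sketched as follows (a minimal sketch, assuming intervals are given as $(a,b)$ tuples; a naive quadratic containment filter stands in for the $O(n\log n)$ range-emptiness structure, and the function name is ours):

```python
def min_piercing_points(intervals):
    """Greedy minimum piercing of closed intervals [a, b].

    Discards every interval that properly contains another one
    (piercing the kept intervals also pierces the discarded
    containers), then repeatedly takes the interval with the
    largest upper end and pierces it at its lower end.
    """
    # J': keep only intervals that do not properly contain another one
    kept = [I for I in intervals
            if not any(J != I and I[0] <= J[0] and J[1] <= I[1]
                       for J in intervals)]
    kept.sort(key=lambda I: I[1])      # ascending by upper end
    hits = []
    while kept:
        a, _b = kept.pop()             # interval with maximal upper end
        hits.append(a)                 # pierce it at its lower end
        kept = [I for I in kept if not (I[0] <= a <= I[1])]
    return hits
```

The selected intervals are pairwise disjoint (the set $Q$ of the lemma), so the number of returned points matches a lower bound on any hitting set, certifying optimality.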
\begin{document} \title{Higher classes of ordinals induced by $<_1$}\author{Parm{\'e}nides Garc{\'i}a Cornejo}\maketitle \begin{abstract} In this article we show that it is possible to iterate and generalize the notions presented in {\cite{GarciaCornejo1}} so that we can obtain the higher, or thinner, classes of ordinals induced by the $<_1$-relation as solutions of $\langle \alpha, t \rangle$-conditions. Indeed, these results are so general that in a future article we will be able to characterize the ordinal $| \Pi_1^1 - \tmop{CA}_0 |$ as the smallest ordinal of the thinnest class induced by $<_1$. \end{abstract} \section{$\tmop{Class} (n)$} \begin{definition} We define by recursion on $\omega$\\ $\tmop{Class} (1) := \mathbbm{E}$;\\ $\tmop{Class} (n + 1) := \{\alpha \in \tmop{OR} | \alpha \in \tmop{Class} (n) \wedge \alpha ( +^n) \in \tmop{Class} (n) \wedge \alpha <_1 \alpha (+^n)\}$,\\ where for $\alpha \in \tmop{Class} (n)$ we define \\ $\alpha (+^n) := \left\{ \begin{array}{l} \min \{\beta \in \tmop{Class} (n) | \alpha < \beta\} \text{ iff } \{\beta \in \tmop{Class} (n) | \alpha < \beta\} \neq \emptyset\\ \infty \text{ otherwise } \end{array} \right.$, and we make the conventions $\infty \not\in \tmop{OR}$ and $\forall \gamma \in \tmop{OR} . \gamma < \infty$ \end{definition} In the above definition of $(+^n)$, which we may call ``the successor functional of $\tmop{Class} (n)$'', we needed to consider the case when, for $\alpha \in \tmop{Class} (n)$, such a ``successor of $\alpha$ in $\tmop{Class} (n)$'' may not exist. We want to tell the reader that this is merely a formal necessity: one of our purposes is to show that such a successor always exists and that $\tmop{Class} (n)$ ``behaves like the class $\mathbbm{E}$'' in the sense of being a closed unbounded class of ordinals. This is one of the important results we want to generalize, although its proof will take a lot of effort.
Let us now see some basic properties of the elements of $\tmop{Class} (n)$. \begin{proposition} \label{Class(n)_first_properties}{$\mbox{}$} \begin{enumeratenumeric} \item $\forall n, i \in [1, \omega) .i \leqslant n \Longrightarrow \tmop{Class} (n) \subset \tmop{Class} (i)$ \item For any $n \in [1, \omega)$ and any $\alpha \in \tmop{Class} (n)$ define recursively on $[0, n - 1]$\\ $\alpha_n := \alpha$, $\alpha_{n - (k + 1)} := \alpha_{n - k} (+^{n - (k + 1)})$.\\ Then $\forall i \in [1, n] . \alpha_i \in \tmop{Class} (i)$ and $\alpha = \alpha_n <_1 \alpha_{n - 1} <_1 \ldots <_1 \alpha_2 <_1 \alpha_1 <_1 \alpha_1 2$. \item For any $n \in [1, \omega)$ and any $\alpha \in \tmop{Class} (n)$ consider the sequence defined in 2.\\ If $\alpha <_1 \alpha_1 2 + 1$ then $\alpha \in \tmop{Lim} \tmop{Class} (n)$. \item $\forall n \in [2, \omega) . \tmop{Class} (n) \subset \tmop{Lim} \tmop{Class} (n - 1)$. \item $\forall n, m \in [1, \omega) \forall \alpha . (m < n \wedge \alpha \in \tmop{Class} (n)) \Longrightarrow \alpha \in \tmop{Class} (m) \wedge \alpha (+^m) < \alpha (+^n)$. \end{enumeratenumeric} \end{proposition} \begin{proof} {\color{orange} 1, 2 and 5 are left to the reader.} {\noindent}3.\\ Let $\rho \in \alpha$ be arbitrary. Define $B_{\rho} := \{\rho\} \cup \{\alpha_i, \alpha_i 2| i \in \{1, \ldots, n\}\}$. Since $\alpha <_1 \alpha_1 2 + 1$ and $B_{\rho} \subset_{\tmop{fin}} \alpha_1 2 + 1$, there exists an ($<, <_1, +$)-isomorphism $h_{\rho} : B_{\rho} \longrightarrow h_{\rho} [B_{\rho}] \subset \alpha$ with \\ $h_{\rho} |_{\alpha} = \tmop{Id}_{\alpha}$. Note this implies the following facts in the following order (the order is important, since the later facts use the previous ones to assert their conclusions): (1') \ $\forall i \in [1, n] . \alpha_i <_1 \alpha_i 2 \Longleftrightarrow h_{\rho} (\alpha_i) <_1 h_{\rho} (\alpha_i) 2$. So $h_{\rho} (\alpha_i) \in \mathbbm{E} = \tmop{Class} (1)$. (2') \ $\forall i \in [2, n] .
\alpha_i <_1 \alpha_1 \Longleftrightarrow h_{\rho} (\alpha_i) <_1 h_{\rho} (\alpha_1)$. \\ \ \ \ \ \ \ \ \ \ \ So by (1') and $<_1$-connectedness $\forall i \in [2, n] .h_{\rho} (\alpha_i) \in \tmop{Class} (2)$. (3') \ $\forall i \in [3, n] . \alpha_i <_1 \alpha_2 \Longleftrightarrow h_{\rho} (\alpha_i) <_1 h_{\rho} (\alpha_2)$.\\ \ \ \ \ \ \ \ \ \ \ So by (1') and (2') and $<_1$-connectedness $\forall i \in [3, n] .h_{\rho} (\alpha_i) \in \tmop{Class} (3)$. ... (inductively) (n') \ $\alpha_n <_1 \alpha_{n - 1} \Longleftrightarrow h_{\rho} (\alpha_n) <_1 h_{\rho} (\alpha_{n - 1})$. \\ \ \ \ \ \ \ \ \ \ \ So by (1'), (2'),.., (n-1') and $<_1$-connectedness $h_{\rho} (\alpha_n) \in \tmop{Class} (n)$. The previous shows that the set (remember that $\alpha_n = \alpha$)\\ $A := \{h_{\rho} (\alpha) | \rho \in \alpha \wedge h_{\rho} : B_{\rho} \longrightarrow \alpha \text{ is an } ( <, <_1, +) \text{-iso such that } h_{\rho} |_{\alpha} = \tmop{Id}_{\alpha} \} \subset \tmop{Class} (n)$ contains for any $\rho \in \alpha$ an element $h_{\rho} (\alpha) = h_{\rho} (\alpha_n) \in \tmop{Class} (n)$; moreover, since $\rho < \alpha = \alpha_n$ implies $\rho = h_{\rho} (\rho) < h_{\rho} (\alpha_n) = h_{\rho} (\alpha) < \alpha$, then $A$ is cofinal in $\alpha$. Hence $\alpha \in \tmop{Lim} \tmop{Class} (n)$. {\noindent}4.\\ We proceed by induction on $[2, \omega)$. Take $n \in [2, \omega)$. Suppose $\forall l \in n \cap [2, \omega) . \tmop{Class} (l) \subset \tmop{Lim} \tmop{Class} (l - 1)$. \ \ \ \ \ \ \ {\tmstrong{(cIH)}} If $\tmop{Class} (n) = \emptyset$ \ then we are done. So suppose $\tmop{Class} (n) \neq \emptyset$ and take $\alpha \in \tmop{Class} (n)$. By definition this means $\alpha \in \tmop{Class} (n - 1) \ni \alpha (+^{n - 1})$ and $\alpha <_1 \alpha (+^{n - 1})$. \ \ \ \ \ \ \ {\tmstrong{(*3)}} Case $n - 1 = 1$. Then by (*3), the inequality $\alpha < \alpha 2 + 1 < \alpha (+^1$) and $<_1$-connectedness we get $\alpha <_1 \alpha 2 + 1$.
Then by {\cite{GarciaCornejo1}} corollary \ref{eta(t)+1<less>_1_Equivalences}, $\alpha \in \tmop{Lim} \mathbbm{E} = \tmop{Lim} \tmop{Class} (1)$. Case $n - 1 \geqslant 2$. By (*3) we know $\alpha \in \tmop{Class} (n - 1)$. Then, by 2., the sequence of ordinals recursively defined on $[0, n - 2]$ by $\beta_{n - 1} := \alpha$, $\beta_{n - 1 - (k + 1)} := \beta_{n - 1 - k} (+^{n - 1 - (k + 1)}$) satisfies \\ $\beta_{n - 1} <_1 \beta_{n - 2} <_1 \ldots <_1 \beta_2 <_1 \beta_1 <_1 \beta_1 2$ and $\forall i \in [1, n - 1] . \beta_i \in \tmop{Class} (i)$. \ \ \ \ \ \ \ {\tmstrong{(*4)}} Let $\gamma := \alpha (+^{n - 1}) \in \tmop{Class} (n - 1)$. We now show by a side induction that $\forall u \in [2, n - 1] . \beta_{n - u} < \gamma$. \ \ \ \ \ \ \ {\tmstrong{(**3**)}} Let $u \in [2, n - 1]$. Suppose that for all $l \in u \cap [2, n - 1]$, $\beta_{n - l} < \gamma$. \ \ \ \ \ \ \ {\tmstrong{(SIH)}} Since $\gamma \in \tmop{Class} (n - 1)$, then by (*3) (in case $u = 2$), by our SIH (in case $u > 2$) and 1. we have that $\beta_{n - (u - 1)} < \gamma \in \tmop{Class} (n - (u - 1))$, that is, $\gamma \in \{e \in \tmop{Class} (n - (u - 1)) | \beta_{n - (u - 1)} < e\}$. But $\beta_{n - u} = \beta_{n - (u - 1)} (+^{n - u}) = \min \{e \in \tmop{Class} (n - u) | \beta_{n - (u - 1)} < e\}$. From all this follows \\ $\beta_{n - u} \leqslant \gamma$. \ \ (*5) On the other hand, our cIH applied to $\gamma \in \tmop{Class} (n - (u - 1))$ implies $\gamma \in \tmop{Lim} \tmop{Class} (n - u)$; however, since $\beta_{n - u} = \min \{e \in \tmop{Class} (n - u) | \beta_{n - (u - 1)} < e\}$, then $\beta_{n - u} \not\in \tmop{Lim} \tmop{Class} (n - u)$. From this and (*5) follows $\beta_{n - u} < \gamma$. This shows (**3**).
From the fact that $\gamma \in \mathbbm{E}$, (**3**) and (*4) we have\\ $\alpha = \beta_{n - 1} <_1 \beta_{n - 2} <_1 \ldots <_1 \beta_2 <_1 \beta_1 <_1 \beta_1 2 < \beta_1 2 + 1 < \gamma = \alpha (+^{n - 1}$); moreover, from this, (*3) and $<_1$-connectedness we obtain $\beta_{n - 1} <_1 \beta_1 2 + 1$. This way, by 3., it follows that \\ $\alpha = \beta_{n - 1} \in \tmop{Lim} \tmop{Class} (n - 1)$ as we needed to show. \end{proof} \begin{proposition} \label{d<less>j<less>n_implies_Class(j)_contained_Lim(Class(d))}Let $j \in [2, \omega)$ and $c \in \tmop{Class} (j)$. Then for any $d \in [1, j)$ there exists a sequence $(c_{\xi})_{\xi \in X} \subset \tmop{Class} (d)$ such that $c_{\xi}\underset{\text{cof}}{{\lhook\joinrel\relbar\joinrel\rightarrow}}c$. \end{proposition} \begin{proof} {\color{orange} Left to the reader.} \end{proof} \begin{corollary} \label{alpha_i_properties}For any $n \in [1, \omega)$ and any $\alpha \in \tmop{Class} (n)$ define by recursion on $[0, n - 1]$ the ordinals $\alpha_n := \alpha$, $\alpha_{n - (k + 1)} := \alpha_{n - k} (+^{n - (k + 1)})$. Then \begin{enumeratealpha} \item $\alpha = \alpha_n \in \tmop{Class} (n)$. \item $\forall j \in [1, n - 1] . \alpha_j \in \tmop{Class} (j) \backslash \tmop{Class} (j + 1)$. \item $\alpha = \alpha_n <_1 \alpha_{n - 1} <_1 \alpha_{n - 2} <_1 \alpha_{n - 3} <_1 \ldots <_1 \alpha_2 <_1 \alpha_1 <_1 \alpha_1 2 < \alpha (+^n)$. \item $\forall j \in [1, n - 1] .m (\alpha_j) = \alpha_1 2$. 
\end{enumeratealpha} \end{corollary} \begin{proof} {\color{orange} Left to the reader.} \end{proof} \begin{proposition} \label{<less>_1_chain_of_length_j}{$\mbox{}$} \begin{enumeratenumeric} \item $\forall j \in [1, \omega) \forall (a_1, \ldots, a_j) \in \tmop{OR}^j$.$a_j <_1 a_{j - 1} <_1 \ldots <_1 a_1 <_1 a_1 2 \Longrightarrow a_j \in \tmop{Class} (j)$ \item $\forall j \in [1, \omega) \forall a \in \tmop{OR}$.\\ $a \in \tmop{Class} (j) \Longleftrightarrow \exists (a_1, \ldots, a_j) \in \tmop{OR}^j .a = a_j <_1 a_{j - 1} <_1 \ldots <_1 a_2 <_1 a_1 <_1 a_1 2$ \end{enumeratenumeric} \end{proposition} \begin{proof} {\noindent}1.\\ By induction on $[1, \omega)$. For $j = 1$ it is clear. Suppose for $j \in [1, \omega)$ the claim holds, that is \\ \ \ \ $\forall (a_1, \ldots, a_j) \in \tmop{OR}^j$.$a_j <_1 a_{j - 1} <_1 \ldots <_1 a_1 <_1 a_1 2 \Longrightarrow a_j \in \tmop{Class} (j)$. \ \ \ {\tmstrong{(IH)}} We show that the claim holds for $j + 1$. Let $(a_1, \ldots, a_{j + 1}) \in \tmop{OR}^{j + 1}$ be such that $a_{j + 1} <_1 a_j <_1 \ldots <_1 a_1 <_1 a_1 2$. \ \ \ \ \ \ \ {\tmstrong{(*)}} By our (IH) follows $a_j \in \tmop{Class} (j)$ and therefore, $\forall s \in [1, j] .a_j \in \tmop{Class} (s)$. \ \ \ \ \ \ \ {\tmstrong{(**)}} Now, observe the following argument: $a_{j + 1} < a_{j + 1} 2\underset{\text{because }a_{j + 1} < a_j \in \mathbbm{E} \subset \mathbbm{P}}{<}a_j$ and then $a_{j + 1} <_1 a_{j + 1} 2$ by (*) and $<_1$-connectedness; that is, $a_{j + 1} \in \mathbbm{E} = \tmop{Class} (1)$. But then, $a_{j + 1} < a_{j + 1} (+^1)\underset{\text{because }a_{j + 1} < a_j \in \mathbbm{E}}{\leqslant}a_j$ and then $a_{j + 1} <_1 a_{j + 1} (+^1$) by (*) and $<_1$-connectedness; that is, $a_{j + 1} \in \tmop{Class} (2)$. But then \\ $a_{j + 1} < a_{j + 1} (+^2)\underset{\text{because }a_{j + 1} < a_j \in \tmop{Class} (2)}{\leqslant}a_j$ and then $a_{j + 1} <_1 a_{j + 1} (+^2$) by (*) and $<_1$-connectedness; that is, $a_{j + 1} \in \tmop{Class} (3)$.
Inductively, we get $a_{j + 1} \in \tmop{Class} (j)$, \\ $a_{j + 1} < a_{j + 1} (+^j)\underset{\text{because }a_{j + 1} < a_j \in \tmop{Class} (j)}{\leqslant}a_j$ and then, by (*) and $<_1$-connectedness, $a_{j + 1} <_1 a_{j + 1} (+^j$); that is, $a_{j + 1} \in \tmop{Class} (j + 1)$. {\noindent}2.\\ {\color{orange} Left to the reader.} \end{proof} \begin{proposition} \label{alpha<less>_1_beta_in_Class(k)_then_alpha_in_Class(k+1)}$\forall k \in [1, \omega) . \forall \alpha, \beta \in \tmop{OR} . \alpha <_1 \beta \in \tmop{Class} (k) \Longrightarrow \alpha \in \tmop{Class} (k + 1)$. \end{proposition} \begin{proof} By induction on $[1, \omega)$. Let $k = 1$ and $\alpha, \beta \in \tmop{OR}$ be such that $\alpha <_1 \beta \in \tmop{Class} (1)$. Then $\alpha <_1 \alpha 2$ by $\leqslant_1$-connectedness (because $\alpha < \alpha 2 < \beta$), which, as we know, means $\alpha \in \mathbbm{E}$. This way $\alpha < \alpha (+^1) \leqslant \beta$, and then, by $\leqslant_1$-connectedness again, $\alpha <_1 \alpha (+^1$), that is, $\alpha \in \tmop{Class} (2)$. Suppose the claim holds for $k \in [1, \omega)$. \ \ \ \ \ \ \ {\tmstrong{(IH)}} Let $\alpha, \beta \in \tmop{OR}$ be such that $\alpha <_1 \beta \in \tmop{Class} (k + 1)$. Then $\beta \in \tmop{Class} (k)$ by proposition \ref{Class(n)_first_properties}. So $\alpha <_1 \beta \in \tmop{Class} (k)$, and our (IH) implies $\alpha \in \tmop{Class} (k + 1)$. But then $\alpha < \alpha (+^{k + 1}) \leqslant \beta$, which implies by $\leqslant_1$-connectedness that $\alpha <_1 \alpha (+^{k + 1}$). Thus $\alpha \in \tmop{Class} (k + 2)$. \end{proof} \begin{proposition} \label{t_in_(a,a(+)..(+^1)2]_then_m(t)<less>=a(+)..(+^1)2}Let $i \in [1, \omega)$ and $\alpha \in \tmop{Class} (i)$. Then \begin{enumeratenumeric} \item $\forall z \in [1, i) .m (\alpha ( +^{i - 1}) ( +^{i - 2}) \ldots ( +^z)) = \alpha (+^{i - 1}) ( +^{i - 2}) \ldots ( +^z) ( +^{z - 1}) \ldots ( +^1) 2$. 
\item $\forall t \in (\alpha, \alpha ( +^{i - 1}) ( +^{i - 2}) \ldots ( +^2) ( +^1) 2] .m (t) \leqslant \alpha (+^{i - 1}) ( +^{i - 2}) \ldots ( +^2) ( +^1) 2$. \end{enumeratenumeric} \end{proposition} \begin{proof} {\color{orange} Left to the reader.} \end{proof} \subsection{More general substitutions} For our subsequent work we need to introduce the following general notion of substitutions. \begin{definition} \label{NoAppendix_simultaneous_substitution}Let $x \in \tmop{OR}$ and let $f : \tmop{Dom} f \subset \mathbbm{E} \longrightarrow \mathbbm{E}$ be a strictly increasing function such that $\tmop{Ep} (x) \subset \tmop{Dom} f$. We define $x [f]$, the ``simultaneous substitution of all the epsilon numbers $\tmop{Ep} (x)$ of the Cantor Normal Form of $x$ by the values of $f$ on them'', as $x [f] = \left\{ \begin{array}{l} f (x) \text{ if } x \in \mathbbm{E}\\ \\ \sum_{i = 1}^n T_i [f] t_i \text{ if } x \not\in \mathbbm{E} \text{ and } x =_{\tmop{CNF}} \sum_{i = 1}^n T_i t_i \text{ and } (t_1 \geqslant 2 \vee n \geqslant 2)\\ \\ \omega^{Z [f]} \text{ if } x \not\in \mathbbm{E} \text{ and } x = \omega^Z \text{ for some } Z \in \tmop{OR}\\ \\ x \text{ if } x < \varepsilon_0 \end{array} \right.$ Moreover, for $\tmop{Ep} (x) = \{e_1 > \ldots > e_k \}$ and a set $Y := \{\sigma_1 > \ldots > \sigma_k \} \subset \mathbbm{E}$ of epsilon numbers, we may also write $x [\tmop{Ep} (x) := Y]$ instead of $x [h]$, where $h : \tmop{Ep} (x) \longrightarrow Y$ is the function $h (e_i) := \sigma_i$. \end{definition} \begin{definition} Let $S \subset \tmop{OR}$ and $f_1, f_2 : S \longrightarrow \tmop{OR}$. We will denote as usual: \begin{itemizedot} \item $f_1 \leqslant f_2$ \ $: \Longleftrightarrow$ $\forall e \in S.f_1 (e) \leqslant f_2 (e)$. \item $f_1 < f_2$ $: \Longleftrightarrow$ $f_1 \leqslant f_2 \wedge \exists e \in S.f_1 (e) < f_2 (e)$. \end{itemizedot} \end{definition} Now we state the properties of this kind of substitution that are of interest to us.
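To illustrate Definition \ref{NoAppendix_simultaneous_substitution}, consider the following example (ours): take $x = \varepsilon_1 2 + \omega^{\varepsilon_0 + 1} + 3$, so that $\tmop{Ep} (x) = \{\varepsilon_1 > \varepsilon_0 \}$, and let $f$ be the strictly increasing function given by $f (\varepsilon_0) := \varepsilon_2$ and $f (\varepsilon_1) := \varepsilon_5$. Applying the definition to each term of the Cantor Normal Form of $x$, we get $x [f] = \varepsilon_5 2 + \omega^{\varepsilon_2 + 1} + 3$; in the alternative notation, $x [\tmop{Ep} (x) := \{\varepsilon_5 > \varepsilon_2 \}] = \varepsilon_5 2 + \omega^{\varepsilon_2 + 1} + 3$.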
\begin{proposition} \label{NoAppendix_If_x<less>y_then_x[f]<less>y[f]}Let $x, y \in \tmop{OR}$. Let $f : S \subset \mathbbm{E} \longrightarrow \mathbbm{E}$ be a strictly increasing function with \ $\tmop{Ep} (x) \cup \tmop{Ep} (y) \subset S$. Then \begin{enumeratenumeric} \item $y \in \mathbbm{P} \Longleftrightarrow y [f] \in \mathbbm{P}$. \item $x < y \Longleftrightarrow x [f] < y [f]$. \item $y \in \mathbbm{E} \Longleftrightarrow y [f] \in \mathbbm{E}$ and $y \in \mathbbm{P} \backslash \mathbbm{E} \Longleftrightarrow y [f] \in \mathbbm{P} \backslash \mathbbm{E}$. \end{enumeratenumeric} \end{proposition} \begin{proof} {\color{orange} Left to the reader.} \end{proof} \begin{proposition} \label{NoAppendix_[f]_=-compatible}Let $f : \tmop{Dom} f \subset \mathbbm{E} \longrightarrow \mathbbm{E}$ be a strictly increasing function. \\ Let $A := \{x \in \tmop{OR} | \tmop{Ep} (x) \subset \tmop{Dom} f\}$. Then the assignment $\varphi : A \longrightarrow \tmop{OR}$ defined as $\varphi (x) := x [f]$ is a function with respect to the equality in the ordinals, that is, $\forall x, y \in A.x = y \Longrightarrow \varphi (x) = \varphi (y)$. \end{proposition} \begin{proof} {\color{orange} Left to the reader.} \end{proof} \begin{proposition} \label{NoAppendix_general_substitution_properties2}Let $x \in \tmop{OR}$ and $f : S \subset \mathbbm{E} \longrightarrow \mathbbm{E}$ be a strictly increasing function, where\\ $\tmop{Ep} (x) \subset S$. Then \begin{enumeratenumeric} \item $x [f]$ is already in Cantor Normal Form. \item $\tmop{Ep} (x [f]) = f [\tmop{Ep} (x)] \subset \tmop{Im} f$. \item There exists $f^{- 1} : \tmop{Im} f \longrightarrow S$; moreover, $f^{- 1}$ is strictly increasing and $(x [f]) [f^{- 1}] = x$. \item Let $\alpha \in \mathbbm{E}$. Then $x \in [\alpha, \alpha ( +^1)) \Longleftrightarrow \alpha \in S \wedge x [f] \in [f (\alpha), f (\alpha) ( +^1))$.
\end{enumeratenumeric} \end{proposition} \begin{proof} {\color{orange} Left to the reader.} \end{proof} \begin{proposition} Let $f, g : S \subset \mathbbm{E} \longrightarrow \mathbbm{E}$ be strictly increasing functions. \\ Let $D := \{e \in S|f (e) < g (e)\}$. Then \begin{enumeratenumeric} \item $f \leqslant g \Longleftrightarrow \forall x \in \tmop{OR} . (\tmop{Ep} (x) \subset S \Longrightarrow x [f] \leqslant x [g])$. \item $f < g \Longrightarrow \forall x \in \tmop{OR} . ((\tmop{Ep} (x) \subset S \wedge \tmop{Ep} (x) \cap D \neq \emptyset) \Longrightarrow x [f] < x [g])$. \end{enumeratenumeric} \end{proposition} \begin{proof} {\color{orange} Left to the reader.} \end{proof} \begin{proposition} \label{NoAppendix_operations_iso_gen_subst}Let $x, y \in \tmop{OR}$. Let $f : S \subset \mathbbm{E} \longrightarrow \mathbbm{E}$ be a strictly increasing function with $\tmop{Ep} (x) \cup \tmop{Ep} (y) \subset S$. Then \begin{enumeratenumeric} \item $\tmop{Ep} (x + y) \cup \tmop{Ep} (\omega^x) \cup \tmop{Ep} (x \cdot y) \subset S$ \item $(x + y) [f] = x [f] + y [f]$ \item $\omega^x [f] = \omega^{x [f]}$ \item $(x \cdot y) [f] = x [f] \cdot y [f]$ \end{enumeratenumeric} \end{proposition} \begin{proof} {\color{orange} Left to the reader.} \end{proof} \begin{proposition} Let $g : S \subset \mathbbm{E} \longrightarrow Z \subset \mathbbm{E}$ and $f : Z \subset \mathbbm{E} \longrightarrow \mathbbm{E}$ be strictly increasing functions. Then $f \circ g : S \subset \mathbbm{E} \longrightarrow \mathbbm{E}$ is strictly increasing and for any $t \in \tmop{OR}$ with $\tmop{Ep} (t) \subset S$, $t [f \circ g] = t [g] [f]$. \end{proposition} \begin{proof} {\color{orange} Left to the reader.} \end{proof} \subsection{The main theorem} We now introduce certain notions that are necessary to state the main theorem.
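Before doing so, we record a quick concrete check of the substitution machinery of the previous subsection (the compatibility with the ordinal operations and with composition); the ordinals and functions below are chosen purely for illustration.

```latex
% Illustration only: compatibility of [f] with +, omega^x and composition.
Let $g (\varepsilon_0) := \varepsilon_1$ and $f (\varepsilon_1) :=
\varepsilon_3$ (strictly increasing functions on their one-point domains),
and let $t := \omega^{\varepsilon_0 + 1} + \varepsilon_0$. Then
\[ t [g] = \omega^{\varepsilon_0 + 1} [g] + \varepsilon_0 [g] =
   \omega^{\varepsilon_1 + 1} + \varepsilon_1, \qquad
   t [g] [f] = \omega^{\varepsilon_3 + 1} + \varepsilon_3 = t [f \circ g], \]
in accordance with $(x + y) [f] = x [f] + y [f]$, $\omega^x [f] =
\omega^{x [f]}$ and $t [f \circ g] = t [g] [f]$.
```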
\begin{definition} For $k \in [1, \omega)$, $\alpha \in \tmop{Class} (k)$ and $t \in [\alpha, \alpha ( +^k))$, the ordinal \\ $\eta (k, \alpha, t)$ is defined as \\ $\eta (k, \alpha, t) := \left\{ \begin{array}{l} \alpha (+^{k - 1}) ( +^{k - 2}) \ldots ( +^2) ( +^1) 2 \text{ iff } t \in [\alpha, \alpha ( +^{k - 1}) ( +^{k - 2}) \ldots ( +^2) ( +^1) 2]\\ \\ \max \{m (e) |e \in (\alpha, t]\} \text{ iff } t > \alpha (+^{k - 1}) ( +^{k - 2}) \ldots ( +^2) ( +^1) 2 \end{array} \right.$ \end{definition} Our next proposition \ref{eta(k,alpha,t)_is_well_defined} shows that $\eta (k, \alpha, t)$ is well defined. \begin{proposition} \label{eta(k,alpha,t)_is_well_defined}Let $k \in [1, \omega)$, $\alpha \in \tmop{Class} (k)$, $t \in (\alpha, \alpha ( +^k))$ and \\ $P := \{r \in (\alpha, t] |m (r) \geqslant t\}$. Then (0). $P$ is finite; more specifically $1 \leqslant |P| \leqslant k + 1$. (1). $\max \{m (e) |e \in P\}$ exists and $\max \{m (e) |e \in P\} = \max \{m (e) |e \in (\alpha, t]\}$. (2). $\eta (k, \alpha, t)$ is well defined. (3). $\eta (k, \alpha, t) \geqslant m (t) \geqslant t$. \end{proposition} \begin{proof} Consider $k, \alpha$ and $t$ as stated. {\noindent}(0)\\ Clearly $t \in P$. So $|P| \geqslant 1$. We prove the other inequality by contradiction. Suppose $|P| \geqslant k + 2$. Then there exist $k + 1$ ordinals $E_0, E_1, \ldots, E_{k - 1}, E_k \in P$ such that $E_k < E_{k - 1} < \ldots < E_1 < E_0 < t$; that is, $\forall l \in [0, k] .E_k < \ldots < E_l < \ldots < E_0 < t \leqslant m (E_l)$, and therefore, by $\leqslant_1$-connectedness, we get: a. $E_0 <_1 E_0 + 1 \leqslant t$, that is, $E_0 \in \tmop{Lim} \mathbbm{P} \subset \mathbbm{P}$. b. $E_1 < E_1 2 < E_0 < t \leqslant m (E_1)$; then, by $\leqslant_1$-connectedness, $E_1 <_1 E_1 2$, that is, $E_1 \in \mathbbm{E}$. c. $\alpha < E_k <_1 E_{k - 1} <_1 \ldots <_1 E_1 <_1 E_1 2 < E_0 < t < \alpha (+^k)$ This way, from c.
and proposition \ref{<less>_1_chain_of_length_j} follows $E_k \in \tmop{Class} (k) \cap (\alpha, \alpha ( +^k))$. Contradiction. Therefore $|P| \leqslant k + 1$. {\noindent}(1)\\ Since by (0) $P \neq \emptyset$ is finite, then $\{m (e) |e \in P\}$ is finite too and thus $\mu := \max \{m (e) |e \in P\}$ exists. Then: $(I)$. \ $\mu \geqslant m (t) \geqslant t$ because $t \in P$ (and because $m (\beta) \geqslant \beta$ for any ordinal). $(II)$. Since $P \subset (\alpha, t]$, then $\mu \in \{m (e) |e \in (\alpha, t]\}$. On the other hand, let $e \in (\alpha, t]$ be arbitrary. If $m (e) < t$, then $m (e) < \mu$ because of $(I)$. If $m (e) \geqslant t$, then $e \in P$ and then $m (e) \leqslant \mu$. This shows that $\forall e \in (\alpha, t] .m (e) \leqslant \mu$ and since by $(II)$ \\ $\mu \in \{m (e) |e \in (\alpha, t]\}$, we have shown $\mu = \max \{m (e) |e \in (\alpha, t]\}$. {\noindent}(2)\\ If $t \in [\alpha, \alpha ( +^{k - 1}) ( +^{k - 2}) \ldots ( +^2) ( +^1) 2]$, then it is clear that $\eta (k, \alpha, t)$ is well defined. So suppose $t \in (\alpha ( +^{k - 1}) ( +^{k - 2}) \ldots ( +^2) ( +^1) 2, \alpha ( +^k))$. By (1) $\max \{m (e) |e \in P\}$ exists and \\ $\max \{m (e) |e \in P\} = \max \{m (e) |e \in (\alpha, t]\} \underset{\text{by definition}}{=} \eta (k, \alpha, t)$. That is, $\eta (k, \alpha, t)$ exists. {\noindent}(3)\\ For $t > \alpha (+^{k - 1}) \ldots ( +^2) ( +^1) 2$ the assertion is clear. For $t \leqslant \alpha (+^{k - 1}) \ldots ( +^2) ( +^1) 2$, we get by proposition \ref{t_in_(a,a(+)..(+^1)2]_then_m(t)<less>=a(+)..(+^1)2}, $t \leqslant m (t) \leqslant \alpha (+^{k - 1}) ( +^{k - 2}) \ldots ( +^2) ( +^1) 2 = \eta (k, \alpha, t)$. \end{proof} \begin{remark} \label{remark_eta(1,alpha,t)=eta(t)}The ordinal $\eta (i, \alpha, t)$ is meant to play in $\tmop{Class} (i)$ the analogous role that the ordinal $\eta t$ played in $\tmop{Class} (1) = \mathbbm{E}$.
In particular, for $i = 1, \alpha \in \tmop{Class} (1) = \mathbbm{E}$ and $t \in [\alpha, \alpha ( +^1))$, \\ $\eta (1, \alpha, t)\underset{\text{by {\cite{GarciaCornejo1}} proposition } \ref{eta(t)_m(t)_and_<less>_1}}{=}\eta t$. \end{remark} \begin{proposition} $\forall i \in [1, \omega) \forall \alpha \in \tmop{Class} (i) \forall t \in [\alpha, \alpha ( +^i)) . \eta (i, \alpha, t) \in (\alpha, \alpha ( +^i))$. \end{proposition} \begin{proof} {\color{orange} Left to the reader.} \end{proof} \begin{definition} \label{l(t)}Let $i \in [1, \omega)$, $\alpha \in \tmop{Class} (i)$ and $t \in [\alpha, \alpha ( +^i))$. We define \\ $l (i, \alpha, t) := \left\{ \begin{array}{l} \alpha (+^{i - 1}) ( +^{i - 2}) \ldots ( +^2) ( +^1) 2 \text{ iff } t \in [\alpha, \alpha ( +^{i - 1}) ( +^{i - 2}) \ldots ( +^2) ( +^1) 2]\\ \\ \min \{r \in (\alpha, t] | m (r) = \eta (i, \alpha, t)\} \text{ iff } t > \alpha (+^{i - 1}) ( +^{i - 2}) \ldots ( +^2) ( +^1) 2 \end{array} \right.$. \end{definition} \begin{proposition} \label{eta(i,a,l(i,a,t))=eta(i,a,t)_if_t_in_(a(+)...(+^1)2,a(+^i)}Let $i \in [1, \omega)$, $\alpha \in \tmop{Class} (i)$ and $t \in (\alpha ( +^{i - 1}) ( +^{i - 2}) \ldots ( +^2) ( +^1) 2, \alpha ( +^i))$. Then \begin{enumeratenumeric} \item $l (i, \alpha, t) > \alpha (+^{i - 1}) \ldots ( +^1) 2$ \item $\eta (i, \alpha, l (i, \alpha, t)) = \max \{m (e) |e \in (\alpha, l (i, \alpha, t)]\} = m (l (i, \alpha, t)) = \eta (i, \alpha, t)$. \end{enumeratenumeric} \end{proposition} \begin{proof} {\color{orange} Left to the reader.} \end{proof} \begin{remark} \label{remark_l(1,alpha,t)}With respect to definition \ref{l(t)}, consider the case $i = 1$. Let $t \in (\alpha 2, \alpha^+)$ and suppose $l (1, \alpha, t) \in (\alpha, t)$.
The inequalities $l (1, \alpha, t) \leqslant t < \eta t = m (l (1, \alpha, t))$ and the fact that \\ $l (1, \alpha, t) \not\in \mathbbm{E}$ imply, by {\cite{GarciaCornejo1}} theorem \ref{Wilken_Theorem1} and {\cite{GarciaCornejo1}} corollary \ref{Wilken_Corollary1}, that $\mathbbm{P} \ni l (1, \alpha, t) \wedge m (l (1, \alpha, t)) < l (1, \alpha, t) 2$. Therefore $\mathbbm{P} \ni l (1, \alpha, t) \leqslant t < m (l (1, \alpha, t)) < l (1, \alpha, t) 2$, which subsequently implies (by considering the Cantor Normal Form of $t$) that $\pi t = l (1, \alpha, t)$. From this we conclude: \\ For any $s \in [\alpha, \alpha^+)$, $l (1, \alpha, s) = \left\{ \begin{array}{l} \alpha 2 \text{ iff } s \in [\alpha, \alpha 2]\\ \pi s \text{ iff } s \in (\alpha 2, \alpha^+) \wedge l (1, \alpha, s) < s\\ s \text{ iff } s \in (\alpha 2, \alpha^+) \wedge l (1, \alpha, s) \nless s \end{array} \right.$. \end{remark} \begin{definition} Let $i \in [1, \omega)$, $\alpha \in \tmop{Class} (i)$, $t \in [\alpha, \alpha ( +^i))$ and $j \in [1, i]$. We define $\lambda (j, t)$ as the unique ordinal $\delta$ satisfying $\delta \in \tmop{Class} (j)$ and $t \in [\delta, \delta ( +^j))$ in case such an ordinal exists, and $- \infty$ otherwise. \end{definition} \begin{remark} For $i \in [1, \omega)$, $\alpha \in \tmop{Class} (i)$, $t \in [\alpha, \alpha ( +^i))$ and $j \in [1, i]$ (i.e., under all the conditions above), $\lambda (j, t)$ will always be an ordinal. Again, the reason to give the definition this way is just that the existence of $\lambda (j, t)$ is not completely obvious (we will see that later). \end{remark} We can now present the main theorem of this article. \begin{theorem} \label{concise_gen_thm}For any $n \in [1, \omega)$, {\noindent}(1). $\tmop{Class} (n)$ is $\kappa$-club for any non-countable regular ordinal $\kappa$.
There exists a binary relation $\leqslant^n \subset \tmop{Class} (n) \times \tmop{OR}$ such that: For $\alpha, c \in \tmop{Class} (n)$ and any $t \in \alpha (+^n)$ there exist - A finite set $T (n, \alpha, t) \subset \mathbbm{E} \cap \alpha (+^n)$, - A strictly increasing function $g (n, \alpha, c) : \tmop{Dom} g (n, \alpha, c) \subset \mathbbm{E} \cap \alpha (+^n) \longrightarrow \mathbbm{E} \cap c (+^n)$ such that: {\noindent}(2) The function $H : (\tmop{Dom} g (n, \alpha, c)) \cap (\alpha, \alpha ( +^n)) \longrightarrow (c, c ( +^n))$, $H (e) := e [g (n, \alpha, c)]$ is \\ a $(<, +, \cdot, <_1, \lambda x. \omega^x, (+^1), (+^2), \ldots, (+^{n - 1}))$-isomorphism. {\noindent}(3) The relation $\leqslant^n$ satisfies $\leqslant^n$-connectedness, $\leqslant^n$-continuity and is such that\\ $(t \in [\alpha, \alpha (+^n)] \wedge \alpha \leqslant^n t) \Longrightarrow \alpha \leqslant_1 t$. \ \ {\noindent}(4) (First fundamental cofinality property of $\leqslant^n$). \\ If $t \in [\alpha, \alpha ( +^n)) \wedge \alpha \leqslant^n t + 1$, then there exists a sequence $(c_{\xi})_{\xi \in X} \subset \alpha \cap \tmop{Class} (n)$ such that $c_{\xi} \underset{\text{cof}}{{\lhook\joinrel\relbar\joinrel\rightarrow}} \alpha$, $\forall \xi \in X.T (n, \alpha, t) \cap \alpha \subset c_{\xi}$ and $c_{\xi} \leqslant_1 t [g (n, \alpha, c_{\xi})]$. {\noindent}(5). (Second fundamental cofinality property of $\leqslant^n$).\\ Suppose $t \in [\alpha, \alpha ( +^n)) \wedge \alpha \in \tmop{Lim} \{\gamma \in \tmop{Class} (n) |T (n, \alpha, t) \cap \alpha \subset \gamma \wedge \gamma \leqslant_1 t [g (n, \alpha, \gamma)]\}$. Then \ \ \ \ \ \ (5.1) $\forall s \in [\alpha, t + 1] . \alpha \leqslant^n s$, and therefore (5.2) $\alpha \leqslant_1 t + 1$ {\noindent}(6). $(t \in [\alpha, \alpha (+^n)) \wedge \alpha <_1 \eta (n, \alpha, t) + 1) \Longrightarrow \alpha \leqslant^n \eta (n, \alpha, t) + 1$. \end{theorem} Theorem \ref{concise_gen_thm} states the general result we are striving for.
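For orientation, it may help to keep in mind how these objects look in the simplest case $n = 1$, where they reduce to notions of {\cite{GarciaCornejo1}}; the following is only an informal preview of proposition \ref{GenThm_holds_for_n=1} below.

```latex
% Informal preview of the case n = 1 (made precise in proposition
% \ref{GenThm_holds_for_n=1}): for $\alpha, c \in \tmop{Class}(1) = \mathbbm{E}$,
\[ T (1, \alpha, t) = \tmop{Ep} (t), \qquad
   \tmop{Dom} g (1, \alpha, c) = ( \mathbbm{E} \cap c \cap \alpha) \cup
   \{\alpha\}, \]
\[ g (1, \alpha, c) (e) := e \text{ for } e \in \mathbbm{E} \cap c \cap \alpha,
   \qquad g (1, \alpha, c) (\alpha) := c, \]
so the isomorphism $H$ of clause (2) is just the substitution $t \longmapsto
t [g (1, \alpha, c)] = t [\alpha := c]$, and $\leqslant^1$ is the relation
introduced in {\cite{GarciaCornejo1}}.
```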
But the proof of theorem \ref{concise_gen_thm} is a very long journey: we need to overcome many technical difficulties not stated in it; because of that, we restate it in a more technical way: theorem \ref{most_most_general_theorem}. \begin{theorem} \label{most_most_general_theorem}For any $n \in [1, \omega)$ {\noindent}(0). $\tmop{Class} (n)$ is $\kappa$-club for any non-countable regular ordinal $\kappa$. {\noindent}(1). For any $\alpha \in \tmop{Class} (n)$ the functions \begin{itemizedot} \item $S (n, \alpha) : \tmop{Class} (n - 1) \cap (\alpha, \alpha ( +^n)) \longrightarrow \tmop{Subsets} (\tmop{Class} (n - 1) \cap (\alpha, \alpha ( +^n)))$ {\noindent}$S (n, \alpha) (\delta) := \{e \in \tmop{Class} (n - 1) \cap (\alpha, \alpha (+^n)) \cap \delta | m (e) [g (n - 1, e, \delta)] \geqslant m (\delta)\}$ \item $f (n, \alpha) : \tmop{Class} (n - 1) \cap (\alpha, \alpha ( +^n)) \longrightarrow \tmop{Subsets} (\tmop{OR})$ {\noindent}$f (n, \alpha) (\delta) := \left\{ \begin{array}{l} \{\delta\} \text{ iff } S (n, \alpha) (\delta) = \emptyset\\ \\ f (n, \alpha) (s) \cup \{\delta\} \text{ iff } S (n, \alpha) (\delta) \neq \emptyset \wedge s := \sup (S (n, \alpha) (\delta)) \end{array} \right.$ \end{itemizedot} \ \ \ \ are well defined and are such that (1.1) If $S (n, \alpha) (\delta) \neq \emptyset$ then $\sup (S (n, \alpha) (\delta)) \in S (n, \alpha) (\delta) \subset \tmop{Class} (n - 1) \cap \delta$. (1.2) $\forall \delta \in \tmop{Class} (n - 1) \cap (\alpha, \alpha ( +^n))$.$\delta \in f (n, \alpha) (\delta) \subset (\alpha, \alpha ( +^n)) \cap \tmop{Class} (n - 1)$\\ \hspace*{14mm} and $f (n, \alpha) (\delta)$ is finite. (1.3) $\forall q \in [1, \omega) . \forall \sigma \in (\alpha, \alpha ( +^n)) \cap \tmop{Class} (n - 1)$.
If $f (n, \alpha) (\sigma) = \{\sigma_1 > \ldots > \sigma_q \}$ for some \hspace*{14mm} $\sigma_1, \ldots, \sigma_q \in \tmop{OR}$ then \ \ \ \ \ \ (1.3.1) $\sigma_1 = \sigma$, \ \ \ \ \ \ (1.3.2) $q \geqslant 2 \Longrightarrow \forall j \in \{1, \ldots, q - 1\} .m (\sigma_j) \leqslant m (\sigma_{j + 1}) [g (n - 1, \sigma_{j + 1}, \sigma_j)]$ and \ \ \ \ \ \ (1.3.3) $\sigma_q = \min \{e \in (\alpha, \sigma_q] \cap \tmop{Class} (n - 1) | m (e) [g (n - 1, e, \sigma_q)] \geqslant m (\sigma_q)\}$. \ \ \ \ \ \ (1.3.4) $m (\sigma) = m (\sigma_1) \leqslant m (\sigma_2) [g (n - 1, \sigma_2, \sigma)] \leqslant \ldots \leqslant m (\sigma_q) [g (n - 1, \sigma_q, \sigma)]$. \ \ \ \ \ \ (1.3.5) $\sigma_q = \min \{e \in (\alpha, \alpha (+^n)) \cap \tmop{Class} (n - 1) |$ $e \leqslant \sigma_q \wedge m (e) [g (n - 1, e, \sigma_q)] \geqslant m (\sigma_q) \}$ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $= \min \{e \in (\alpha, \alpha (+^n)) \cap \tmop{Class} (n - 1) |$ $e \leqslant \sigma_q \wedge m (e) [g (n - 1, e, \sigma_q)] = m (\sigma_q) \}$. \ \ \ \ \ \ (1.3.6) For any $j \in \{1, \ldots, q - 1\}$, \ \ \ \ \ \ \ \ \ \ \ \ \ $\sigma_j = \min \{e \in (\alpha, \alpha (+^n)) \cap \tmop{Class} (n - 1) |\sigma_{j + 1} < e \leqslant \sigma_j \wedge m (e) [g (n - 1, e, \sigma_j)] \geqslant m (\sigma_j) \}$ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $= \min \{e \in (\alpha, \alpha (+^n)) \cap \tmop{Class} (n - 1) | \sigma_{j + 1} < e \leqslant \sigma_j \wedge m (e) [g (n - 1, e, \sigma_j)] = m (\sigma_j) \}$ {\noindent}{\color{blue} Note: $S (1, \alpha) = \emptyset = f (1, \alpha)$. These functions are interesting for $n \geqslant 2$.} {\noindent}(2). 
(2.1) For any $\alpha \in \tmop{Class} (n)$ and any $t \in \alpha ( +^n)$ consider the set $T (n, \alpha, t)$ defined as: \ \ \ \ \ \ \ $T (n, \alpha, t) := \bigcup_{E \in \tmop{Ep} (t)} T (n, \alpha, E)$ \ \ \ if \ \ $t \not\in \mathbbm{E}$; \ \ \ \ \ \ \ $T (n, \alpha, t) := \{t\}$ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ if \ \ $t \in \mathbbm{E} \cap (\alpha + 1)$; \ \ \ \ \ \ \ $T (n, \alpha, t) := \bigcup_{i \in \omega} O (i, t)$, \ \ \ \ \ \ \ \ \ \ \ if \ \ $t \in (\alpha, \alpha ( +^n)) \cap \mathbbm{E}$, \ \\ \hspace*{13mm} where for $E \in (\alpha, \alpha ( +^n)) \cap \mathbbm{E}$: \ \ \ \ \ \ \ we define $E_1 := \lambda (1, m (E)), E_2 := \lambda (2, E_1), \ldots, E_n := \lambda (n, E_{n - 1})$, \ \ \ \ \ \ (note $\alpha = E_n \leqslant \ldots \leqslant E_3 \leqslant E_2 < E_1$) and \ \ \ \ \ \ \ $O (0, E) :=\underset{\delta \in W (0, k, E), k = 1, \ldots, n - 1}{\bigcup} f(k + 1, \lambda (k + 1, \delta)) (\delta)\cup\tmop{Ep} (m (\delta)) \cup \{\lambda (k + 1, \delta)\}$; \ \ \ \ \ \ \ $W (0, k, E) := (\alpha, \alpha ( +^n)) \cap \{E_1 > E_2 \geqslant E_3 \geqslant \ldots \geqslant E_n = \alpha\} \cap (\tmop{Class} (k) \backslash \tmop{Class} (k + 1))$; \ \ \ \ \ \ \ $O (l + 1, E) :=\underset{\delta \in W (l, k, E),k = 1, \ldots, n - 1}{\bigcup} f (k + 1, \lambda (k + 1, \delta)) (\delta) \cup \tmop{Ep} (m (\delta)) \cup \{\lambda (k + 1, \delta)\}$; \ \ \ \ \ \ \ $W (l, k, E) := (\alpha, \alpha ( +^n)) \cap O (l, E) \cap (\tmop{Class} (k) \backslash \tmop{Class} (k + 1))$. \ \ \ \ \ \ \ Then $T (n, \alpha, t) \subset \mathbbm{E} \cap \alpha (+^n)$ is such that: \ \ \ \ \ \ \ (2.1.1) $\tmop{Ep} (t) \subset T (n, \alpha, t)$ and $T (n, \alpha, t)$ is finite.
\ \ \ \ \ \ \ (2.1.2) $T (n, \alpha, t + 1) = T (n, \alpha, t)$ \ \ \ \ \ \ \ (2.1.3) $\alpha (+^{n - 1}) ( +^{n - 2}) \ldots ( +^2) ( +^1) 2 \leqslant t \Longrightarrow T (n, \alpha, \eta (n, \alpha, t)) \cap \alpha \subset T (n, \alpha, t) \cap \alpha$ \ \ \ \ \ \ \ (2.1.4) $\alpha (+^{n - 1}) ( +^{n - 2}) \ldots ( +^2) ( +^1) 2 \leqslant t \Longrightarrow T (n, \alpha, l (n, \alpha, t)) \subset T (n, \alpha, t)$ \\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (2.2) For any $\alpha, c \in \tmop{Class} (n)$ there exists a function \\ \hspace*{14mm} $g (n, \alpha, c) : \tmop{Dom} g (n, \alpha, c) \subset \mathbbm{E} \cap \alpha (+^n) \longrightarrow \mathbbm{E} \cap c ( +^n)$ such that \\ \hspace*{14mm} (2.2.1) $g (n, \alpha, c) |_{c \cap \alpha \cap (\tmop{Dom} g (n, \alpha, c))}$ and $g (n, \alpha, \alpha)$ are the identity functions in their \\ \hspace*{26mm} respective domains. \ \ \ \ \ \ \ (2.2.2) $g (n, \alpha, c)$ is strictly increasing. \ \ \ \ \ \ \ (2.2.3) $\forall t \in \alpha (+^n) .T (n, \alpha, t) \cap \alpha \subset c \Longleftrightarrow \tmop{Ep} (t) \subset \tmop{Dom} g (n, \alpha, c)$ \ \ \ \ \ \ \ (2.2.4) $\forall t \in \alpha (+^n) . \tmop{Ep} (t) \subset \tmop{Dom} g (n, \alpha, c) \Longrightarrow T (n, c, t [g (n, \alpha, c)]) \cap c = T (n, \alpha, t) \cap \alpha$ \ \ \ \ \ \ \ \ \ (2.2.5) For any $t \in [\alpha, \alpha ( +^n))$ with $\tmop{Ep} (t) \subset \tmop{Dom} g (n, \alpha, c)$, $\tmop{Ep} (\eta (n, \alpha, t)) \subset \tmop{Dom} g (n, \alpha, c)$\\ \hspace*{25mm} and $\eta (n, \alpha, t) [g (n, \alpha, c)] = \eta (n, c, t [g (n, \alpha, c)])$. By (2.2.2), $g (n, \alpha, c)$ is bijective onto its image. Let us write $g^{- 1} (n, \alpha, c)$ for the inverse function \\ \hspace*{4mm} of $g (n, \alpha, c)$. (2.3) For (2.3.1), (2.3.2) and (2.3.3) we suppose $c \leqslant \alpha$.
Then \ \ \ \ \ \ \ (2.3.1) $\tmop{Dom} g (n, c, \alpha) = \mathbbm{E} \cap c ( +^n)$ \ \ \ \ \ \ \ (2.3.2) $g (n, \alpha, c) = g^{- 1} (n, c, \alpha)$ \ \ \ \ \ \ \ (2.3.3) $g (n, \alpha, c) [\tmop{Dom} g (n, \alpha, c)] = \mathbbm{E} \cap c ( +^n)$ (2.4) $g (n, \alpha, c)$ has the following homomorphism-like properties: \ \ \ \ \ \ \ (2.4.1) $g (n, \alpha, c) (\alpha) = c$ \ \ \ \ \ \ \ (2.4.2) For any $i \in [1, n]$ and any $e \in (\tmop{Dom} g (n, \alpha, c)) \cap [\alpha, \alpha ( +^n))$,\\ \hspace*{26mm} $e \in \tmop{Class} (i) \Longleftrightarrow g (n, \alpha, c) (e) \in \tmop{Class} (i)$ \ \ \ \ \ \ \ (2.4.3) The function $e \longmapsto e [g (n, \alpha, c)]$ with domain $(\tmop{Dom} g (n, \alpha, c)) \cap (\alpha, \alpha ( +^n))$ is\\ \hspace*{26mm} a $(<, +, \cdot, <_1, \lambda x. \omega^x, (+^1), (+^2), \ldots, (+^{n - 1}))$-isomorphism \ \ \ \ \ \ \ (2.4.4) $\forall e \in (\tmop{Dom} g (n, \alpha, c)) \cap (\alpha, \alpha ( +^n)) .m (g (n, \alpha, c) (e)) = m (e) [g (n, \alpha, c)]$. \ \ \ \ \ \ \ (2.4.5) Suppose $n \geqslant 2$. Then\\ \hspace*{25mm} $\forall i \in [2, n]$.$\forall e \in \tmop{Class} (i) \cap (\tmop{Dom} g (n, \alpha, c)) \cap [\alpha, \alpha ( +^n))$.\\ \hspace*{25mm} $\forall E \in (e, e ( +^i)) \cap \tmop{Class} (i - 1)$.\\ \hspace*{26mm} $f (i, e) (E) = \{E_1 > \ldots > E_q \} \Longleftrightarrow$\\ \hspace*{26mm} $f (i, g (n, \alpha, c) (e)) (g (n, \alpha, c) (E)) = \{g (n, \alpha, c) (E_1) > \ldots > g (n, \alpha, c) (E_q)\}$ \ \ \ \ \ \ \ (2.4.6) Suppose $n \geqslant 2$. Then \\ \hspace*{26mm}$\forall i \in [2, n] .
\forall s \in \tmop{Class} (i - 1) \cap [\alpha, \alpha ( +^n))$.$g (n, \alpha, c) (\lambda (i, s)) = \lambda (i, g (n, \alpha, c) (s))$ (2.5) For (2.5.1), (2.5.2) and (2.5.3) we suppose $c \leqslant \alpha$. Then for all $d \in \tmop{Class} (n) \cap [c, \alpha]$, \ \ \ \ \ \ \ (2.5.1) $\tmop{Dom} g (n, \alpha, c) \subset \tmop{Dom} g (n, \alpha, d)$ \ \ \ \ \ \ \ (2.5.2) $g (n, \alpha, d) [\tmop{Dom} g (n, \alpha, c)] \subset \tmop{Dom} g (n, d, c)$ \ \ \ \ \ \ \ (2.5.3) $g (n, \alpha, c) = g (n, d, c) \circ g (n, \alpha, d) |_{\tmop{Dom} g (n, \alpha, c)}$ and therefore\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $g^{- 1} (n, \alpha, d) \circ g^{- 1} (n, d, c) = g^{- 1} (n, \alpha, c) : \mathbbm{E} \cap c (+^n) \longrightarrow \tmop{Dom} g (n, \alpha, c)$. {\noindent}(3). There exists a binary relation $\leqslant^n \subset \tmop{Class} (n) \times \tmop{OR}$ satisfying $\leqslant^n$-connectedness and \\ $\leqslant^n$-continuity such that $\forall \alpha \in \tmop{Class} (n) . \forall t \in [\alpha, \alpha ( +^n)] . \alpha \leqslant^n t \Longrightarrow \alpha \leqslant_1 t$; moreover: {\noindent}(4) (First fundamental cofinality property of $\leqslant^n$).\\ Let $\alpha \in \tmop{Class} (n)$ and $t \in [\alpha, \alpha ( +^n))$ be arbitrary. If $\alpha \leqslant^n t + 1$, then there exists a sequence $(c_{\xi})_{\xi \in X} \subset \alpha \cap \tmop{Class} (n)$ such that $c_{\xi} \underset{\text{cof}}{{\lhook\joinrel\relbar\joinrel\rightarrow}} \alpha$, $\forall \xi \in X.T (n, \alpha, t) \cap \alpha \subset c_{\xi}$ and $c_{\xi} \leqslant_1 t [g (n, \alpha, c_{\xi})]$. {\noindent}(5). (Second fundamental cofinality property of $\leqslant^n$).\\ Let $\alpha \in \tmop{Class} (n)$ and $t \in [\alpha, \alpha ( +^n))$.\\ Suppose $\alpha \in \tmop{Lim} \{\gamma \in \tmop{Class} (n) |T (n, \alpha, t) \cap \alpha \subset \gamma \wedge \gamma \leqslant_1 t [g (n, \alpha, \gamma)]\}$. Then \ \ \ \ \ \ (5.1) $\forall s \in [\alpha, t + 1] .
\alpha \leqslant^n s$, and therefore (5.2) $\alpha \leqslant_1 t + 1$ {\noindent}(6). For $\alpha \in \tmop{Class} (n)$ and $t \in [\alpha, \alpha ( +^n))$, $\alpha <_1 \eta (n, \alpha, t) + 1 \Longrightarrow \alpha \leqslant^n \eta (n, \alpha, t) + 1$ \end{theorem} The proof of the previous theorem \ref{most_most_general_theorem} will be carried out by induction on $([1, \omega), <)$, proving (0), (1), (2), (3), (4), (5) and (6) simultaneously. This proof is now our goal. \subsection{The case $n = 1$ of theorem \ref{most_most_general_theorem}} \begin{proposition} \label{GenThm_holds_for_n=1}Theorem \ref{most_most_general_theorem} holds for $n = 1$. \end{proposition} \begin{proof} {\noindent}(0).\\ $\mathbbm{E}$ is $\kappa$-club for any non-countable regular ordinal $\kappa$. {\noindent}(1).\\ Let $\alpha \in \mathbbm{E} = \tmop{Class} (1)$. We define $S (1, \alpha) := \emptyset$ and $f (1, \alpha) := \emptyset$. Then clearly $S (1, \alpha)$ and $f (1, \alpha)$ satisfy the properties stated. {\noindent}(2). (2.1)\\ Let $\alpha, c \in \mathbbm{E} = \tmop{Class} (1)$ with $c \leqslant \alpha$. Let $t \in \alpha ( +^1)$. Note $T (1, \alpha, t) = \tmop{Ep} (t) \subset \mathbbm{E}$ is well defined and clearly $(2.1.1)$ and $(2.1.2)$ hold. Now, suppose $t \geqslant \alpha 2$. Since \\ $\eta (1, \alpha, t) = \eta t = \max \{t, \pi t + d \pi t\}$, then $(2.1.3)$ holds. Finally, by remark \ref{remark_l(1,alpha,t)}, \\ $l (1, \alpha, t) \in \{\alpha 2, \pi t, t\}$ and therefore (2.1.4) holds too. (2.2)\\ Let $\alpha, c \in \mathbbm{E} = \tmop{Class} (1)$. Consider $\tmop{Dom} g (1, \alpha, c) := ( \mathbbm{E} \cap c \cap \alpha) \cup \{\alpha\}$ and let \\ $g (1, \alpha, c) : \tmop{Dom} g (1, \alpha, c) \longrightarrow \mathbbm{E} \cap c ( +^1)$ be the function defined as $e$ $\longmapsto$ $e$ for $e \in \mathbbm{E} \cap c \cap \alpha$ and $\alpha \longmapsto$ $c$. Then it is easy to see that $g (1, \alpha, c)$ satisfies (2.2.1), (2.2.2) and (2.2.3).
Besides, $g (1, \alpha, c)$ also satisfies (2.2.4): Take $t \in \alpha^+$ with $\tmop{Ep} (t) \subset \tmop{Dom} g (1, \alpha, c) = ( \mathbbm{E} \cap c \cap \alpha) \cup \{\alpha\}$. Then \\ $\tmop{Ep} (t) \cap \alpha \subset c$ and $t [g (1, \alpha, c)] = t [\alpha := c]$ and therefore \\ $T (1, c, t [g (1, \alpha, c)]) \cap c = T (1, c, t [\alpha := c]) \cap c = \tmop{Ep} (t [\alpha := c]) \cap c\underset{\text{by {\cite{GarciaCornejo1}} proposition \ref{[alpha:=e]_proposition3}}}{=}\tmop{Ep} (t) \cap \alpha = T (1, \alpha, t) \cap \alpha$. Finally, we show that $g (1, \alpha, c)$ satisfies (2.2.5): Take $t \in \alpha^+$ with $\tmop{Ep} (t) \subset \tmop{Dom} g (1, \alpha, c)$. Then $\tmop{Ep} (t) \cap \alpha \subset c$ and so $\tmop{Ep} (\eta (1, \alpha, t)) \cap \alpha\underset{\text{remark \ref{remark_eta(1,alpha,t)=eta(t)}}}{=}\tmop{Ep} (\eta t) \cap \alpha\underset{\text{by {\cite{GarciaCornejo1}} proposition \ref{pi.eta.substitutions}}}{\subset}c$, which means $\tmop{Ep} (\eta (1, \alpha, t)) \subset \tmop{Dom} g (1, \alpha, c)$. Moreover, $\eta (1, \alpha, t) [g (1, \alpha, c)] = (\eta t) [\alpha := c]\underset{\text{by {\cite{GarciaCornejo1}} proposition \ref{pi.eta.substitutions}}}{=}\eta (t [\alpha := c]) = \eta (1, c, t [\alpha := c]) = \eta (1, c, t [g (1, \alpha, c)])$. (2.3)\\ Considering $\alpha, c \in \mathbbm{E}$ and $g (1, \alpha, c)$ as in (2.2) with the extra assumption $c \leqslant \alpha$ it is immediate that (2.3.1), (2.3.2) and (2.3.3) hold. (2.4)\\ Given $\alpha, c \in \mathbbm{E}$ and $g (1, \alpha, c)$ as in (2.2), it is clear that (2.4.1), (2.4.2), (2.4.5) and (2.4.6) hold. Moreover, (2.4.3) and (2.4.4) are {\cite{GarciaCornejo1}} corollary \ref{A_[alpha:=e]_isomorphisms} and {\cite{GarciaCornejo1}} remark \ref{remark_m-iso_<less>_1-iso}. (2.5)\\ Take $\alpha, d, c \in \mathbbm{E}$ with $c \leqslant d \leqslant \alpha$.
Then $\tmop{Dom} g (1, \alpha, c) = ( \mathbbm{E} \cap c \cap \alpha) \cup \{\alpha\} \subset ( \mathbbm{E} \cap d \cap \alpha) \cup \{\alpha\} = \tmop{Dom} g (1, \alpha, d)$, that is, (2.5.1) holds. Moreover, \\ $g (1, \alpha, d) [\tmop{Dom} g (1, \alpha, c)] = \{g (1, \alpha, d) (e) | e \in ( \mathbbm{E} \cap c \cap \alpha) \cup \{\alpha\}\} = ( \mathbbm{E} \cap c \cap \alpha) \cup \{d\} \subset$\\ $( \mathbbm{E} \cap c \cap d) \cup \{d\} = \tmop{Dom} g (1, d, c)$, i.e., (2.5.2) holds. Let's show that (2.5.3) also holds: For $e \in \tmop{Dom} g (1, \alpha, c) = ( \mathbbm{E} \cap c \cap \alpha) \cup \{\alpha\}$, $e \underset{g (1, \alpha, d)}{\longmapsto} \left\{ \begin{array}{l} e \underset{g (1, d, c)}{\longmapsto} c = g (1, \alpha, c) (e) \text{ iff } e = \alpha\\ \\ e \underset{g (1, d, c)}{\longmapsto} e = g (1, \alpha, c) (e) \text{ iff } e \neq \alpha \end{array} \right.$, that is, $g (1, \alpha, c) = g (1, d, c) \circ g (1, \alpha, d) |_{\tmop{Dom} g (1, \alpha, c)}$; finally, directly from the previous equality it follows that $g^{- 1} (1, \alpha, c) = g^{- 1} (1, \alpha, d) \circ g^{- 1} (1, d, c)$ because $g (1, \alpha, c)$, $g (1, d, c)$ and $g (1, \alpha, d) |_{\tmop{Dom} g (1, \alpha, c)}$ are invertible functions, and since by (2.3.2) $g^{- 1} (1, \alpha, c) = g (1, c, \alpha)$, then \\ $g^{- 1} (1, \alpha, c) = g (1, c, \alpha) : ( \mathbbm{E} \cap \alpha \cap c) \cup \{c\} = \mathbbm{E} \cap c^+ \longrightarrow ( \mathbbm{E} \cap \alpha \cap c) \cup \{\alpha\} = \tmop{Dom} g (1, \alpha, c)$. {\noindent}(3).\\ Of course, the relation $\leqslant^1$ from {\cite{GarciaCornejo1}} satisfies \\ $\forall \alpha \in \tmop{Class} (1) . \forall t \in [\alpha, \alpha ( +^1)] .
\alpha \leqslant^1 t \Longrightarrow \alpha \leqslant_1 t$ and moreover: {\noindent}(4) holds because of {\cite{GarciaCornejo1}} proposition \ref{<less>^1.implies.cofinal.sequence}; {\noindent}(5) holds too because of {\cite{GarciaCornejo1}} proposition \ref{2nd_Fund_Cof_Property_<less>^1}; {\noindent}(6) holds because of {\cite{GarciaCornejo1}} corollary \ref{<less>_1.iff.<less>^1}. \end{proof} \addvspace{4mm} {\Large \tmstrong{Working on case $n > 1$ of theorem \ref{most_most_general_theorem}}} At this point the hard work starts. As we have already said, we prove theorem \ref{most_most_general_theorem} by induction on $[1, \omega)$, and since we have already seen that it holds for $n = 1$, then {\color{blue} until we complete the whole proof}, we consider a fixed $n \in [2, \omega)$ and our induction hypothesis is that theorem \ref{most_most_general_theorem} holds for any $i \in [1, n)$. We call this induction hypothesis {\tmstrong{GenThmIH}}. \section{Clause (0) of theorem \ref{most_most_general_theorem}} We will dedicate the rest of this article to the proof of clause (0) of theorem \ref{most_most_general_theorem}. (The rest of the proof will appear in two subsequent articles). In order to do this, our first goal is to provide a generalized version of the hierarchy theorem proved for the intervals $[\varepsilon_{\gamma}, \varepsilon_{\gamma + 1})$. We first prove certain propositions that will be necessary later. \begin{proposition} \label{regular_ordinal_in_Class(n-1)}Let $i \in [1, n - 1]$. Let $\kappa$ be an uncountable regular ordinal. Then $\kappa \in \tmop{Class} (i)$. \end{proposition} \begin{proof} Take $i, \kappa$ as stated. Let $\rho$ be an uncountable regular ordinal, $\rho > \kappa$ ($\rho$ exists because the class of regular ordinals is unbounded in the class of ordinals).
Since $\tmop{Class} (i) \cap \kappa$ is bounded in $\rho$ and $\tmop{Class} (i)$ is club in $\rho$ by GenThmIH, then $\sup (\tmop{Class} (i) \cap \kappa) \in \tmop{Class} (i)$. But $\tmop{Class} (i) \cap \kappa$ is unbounded in $\kappa$ (by GenThmIH) and therefore $\sup (\tmop{Class} (i) \cap \kappa) = \kappa$. These two observations prove $\kappa = \sup (\tmop{Class} (i) \cap \kappa) \in \tmop{Class} (i)$. \end{proof} \begin{proposition} \label{Class(n)_is_closed}For any $i \in [1, n]$, $\tmop{Class} (i)$ is closed. \end{proposition} \begin{proof} For $i \leqslant n - 1$ the claim is clear by GenThmIH. So suppose $i = n$. Let $\alpha \in \tmop{Lim} \tmop{Class} (n)$. Then there exists a sequence $(c_{\xi})_{\xi \in X} \subset \tmop{Class} (n) \cap \alpha$ with $c_{\xi}\underset{\text{cof}}{{\lhook\joinrel\relbar\joinrel\rightarrow}}\alpha$. So $(c_{\xi})_{\xi \in X} \subset \tmop{Class} (n - 1)$ and since by (0) of GenThmIH $\tmop{Class} (n - 1)$ is club in any non-countable regular ordinal $\kappa$, then $\alpha \in \tmop{Class} (n - 1)$. Now we want to show that $\forall t \in (\alpha, \alpha ( +^{n - 1})) . \alpha <_1 t$. \ {\tmstrong{(*)}} Let $t \in (\alpha, \alpha ( +^{n - 1}))$. Since $T (n - 1, \alpha, t)$ is finite and $c_{\xi}\underset{\text{cof}}{{\lhook\joinrel\relbar\joinrel\rightarrow}}\alpha$, then we can assume without loss of generality that $\forall \xi \in X.T (n - 1, \alpha, t) \cap \alpha \subset c_{\xi}$. This way, for all $\xi \in X$, the ordinal \\ $t [g (n - 1, \alpha, c_{\xi})] \in (c_{\xi}, c_{\xi} ( +^{n - 1}))$ and since by hypothesis $c_{\xi} \in \tmop{Class} (n)$, (i.e., $c_{\xi} <_1 c_{\xi} (+^{n - 1}))$, then $c_{\xi} <_1 t [g (n - 1, \alpha, c_{\xi})]$ by $<_1$-connectedness. This shows \\ $\forall \xi \in X.c_{\xi} <_1 t [g (n - 1, \alpha, c_{\xi})]$.
From our work in the previous paragraph it follows that \\ $\alpha \in \tmop{Lim} \{\gamma \in \tmop{Class} (n - 1) |T (n - 1, \alpha, t) \cap \alpha \subset \gamma \wedge \gamma \leqslant_1 t [g (n - 1, \alpha, \gamma)] \}$, and therefore, by use of GenThmIH (5) (Second fundamental cofinality property of $\leqslant^{n - 1}$), it follows that $\alpha \leqslant_1 t$. This shows (*). Finally, for the sequence $(d_{\xi})_{\xi \in (\alpha, \alpha ( +^{n - 1}))}$ defined as $d_{\xi} := \xi$, it follows from (*) that\\ $\alpha <_1 d_{\xi}\underset{\text{cof}}{{\lhook\joinrel\relbar\joinrel\rightarrow}}\alpha (+^{n - 1})$; therefore, by $\leqslant_1$-continuity, $\alpha <_1 \alpha (+^{n - 1})$, that is, $\alpha \in \tmop{Class} (n)$. \end{proof} \begin{remark} Consider $i \in [1, n]$, $\alpha \in \tmop{Class} (i)$ and $t \in [\alpha, \alpha ( +^i))$. Let $j \in [1, i]$. Then $\lambda (j, t)$ was defined as the unique ordinal $\delta$ satisfying $\delta \in \tmop{Class} (j) \wedge t \in [\delta, \delta ( +^j))$ or $- \infty$ in case such an ordinal does not exist. We want {\tmstrong{to show that}} $\tmmathbf{\lambda (j, t)}$ {\tmstrong{is indeed an ordinal}}: Let $U := (t + 1) \cap \tmop{Class} (j)$. Then $\alpha \in U \neq \emptyset$ because $j \leqslant i$ implies \\ $\tmop{Class} (i) \subset \tmop{Class} (j)$ by proposition \ref{Class(n)_first_properties}. Let $u := \sup U$. Then, by the previous proposition \ref{Class(n)_is_closed}, \\ $u \in \tmop{Class} (j) \cap (t + 1)$. Moreover, $t \in [u, u ( +^j))$. This shows that $\lambda (j, t) = u \in \tmop{OR}$. \end{remark} \begin{proposition} \label{Simplest_Characterization_beta_in_Lim(Class(k))}Let $k < n$ and $\beta \in \tmop{Class} (k)$. Then $\beta \leqslant_1 \beta (+^{k - 1}) \ldots ( +^2) ( +^1) 2 + 1 \Longleftrightarrow \beta \leqslant^k \beta (+^{k - 1}) \ldots ( +^2) ( +^1) 2 + 1 \Longleftrightarrow \beta \in \tmop{Lim} (\tmop{Class} (k))$. \end{proposition} \begin{proof} Let $k < n$ and $\beta \in \tmop{Class} (k)$.
Note $\beta \leqslant_1 \beta (+^{k - 1}) \ldots ( +^2) ( +^1) 2 + 1 \Longleftrightarrow \beta \leqslant^k \beta ( +^{k - 1}) \ldots ( +^2) ( +^1) 2 + 1$ holds because \\ $\eta (k, \beta, \beta ( +^{k - 1}) \ldots ( +^1) 2) = \beta (+^{k - 1}) \ldots ( +^1) 2$ and because of (3) and (6) of GenThmIH. Moreover, $\beta \leqslant^k \beta (+^{k - 1}) \ldots ( +^1) 2 + 1 \Longrightarrow \beta \in \tmop{Lim} (\tmop{Class} (k))$ holds because of (4) of GenThmIH. It only remains to show that $\beta \leqslant^k \beta (+^{k - 1}) \ldots ( +^1) 2 + 1 \Longleftarrow \beta \in \tmop{Lim} (\tmop{Class} (k))$. Take \\ $\beta \in \tmop{Lim} (\tmop{Class} (k))$. Then there is a sequence $(c_{\xi})_{\xi \in X} \subset \tmop{Class} (k)$ with $c_{\xi} \underset{\text{cof}}{{\lhook\joinrel\relbar\joinrel\rightarrow}} \beta$. Now, by (2.1.1) of GenThmIH, $T (k, \beta, \beta ( +^{k - 1}) \ldots ( +^1) 2)$ is finite, and so $T (k, \beta, \beta ( +^{k - 1}) \ldots ( +^1) 2) \cap \beta$ is finite too. This way, there is a subsequence $(d_j)_{j \in J}$ of $(c_{\xi})_{\xi \in X}$ such that \\ $\forall j \in J.T (k, \beta, \beta ( +^{k - 1}) \ldots ( +^1) 2) \cap \beta \subset d_j$ and $d_j \underset{\text{cof}}{{\lhook\joinrel\relbar\joinrel\rightarrow}} \beta$. From the previous paragraph we get that $\forall j \in J.T (k, \beta, \beta ( +^{k - 1}) \ldots ( +^1) 2) \cap \beta \subset d_j$, $d_j \underset{\text{cof}}{{\lhook\joinrel\relbar\joinrel\rightarrow}} \beta$ and $\forall j \in J.d_j \underset{\text{by proposition \ref{Class(n)_first_properties}}}{\leqslant_1} d_j (+^{k - 1}) \ldots ( +^1) 2 \underset{\text{by (2.4.3) and (2.4.1) of GenThmIH}}{=}$\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $= (\beta ( +^{k - 1}) \ldots ( +^1) 2) [g (k, \beta, d_j)]$.
That is, we have shown \\ $\beta \in \tmop{Lim} \{\gamma \in \tmop{Class} (k) |T (k, \beta, \beta ( +^{k - 1}) \ldots ( +^1) 2) \cap \beta \subset \gamma \wedge$ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $\gamma \leqslant_1 (\beta ( +^{k - 1}) \ldots ( +^1) 2) [g (k, \beta, \gamma)] \}$. Therefore, by (5) of \\ GenThmIH, we conclude $\beta \leqslant^k \beta (+^{k - 1}) \ldots ( +^1) 2 + 1$. \end{proof} \begin{definition} Let $i \in [1, n)$, $\alpha \in \tmop{Class} (i)$ and $t \in (\alpha, \alpha ( +^i))$. For any ordinal $r \in \tmop{OR}$, let \\ $S (i, \alpha, r, t) := \{q \in (\alpha, l (i, \alpha, t)) | T (i, \alpha, q) \cap \alpha \subset r\}$. \end{definition} \begin{remark} \label{S(i,a,r,t)=l(i,a,t)_intersection_Dom(g(i,a,r))}With respect to our previous definition, note $S (i, \alpha, r, t) \subset l (i, \alpha, t) \leqslant t$. Moreover, since $i \in [1, n)$, then by (2.2.3) of GenThmIH, \\ $r \in \tmop{Class} (i) \Longrightarrow S (i, \alpha, r, t) = \{q \in (\alpha, l (i, \alpha, t)) | \tmop{Ep} (q) \subset (\tmop{Dom} g (i, \alpha, r))\}$. \end{remark} \subsection{The Generalized Hierarchy Theorem} \begin{definition} Let $C^{n - 1} : \tmop{OR} \longrightarrow \tmop{Class} (n - 1)$ be the counting functional of $\tmop{Class} (n - 1)$ (by GenThmIH, $\tmop{Class} (n - 1)$ is a closed unbounded class of ordinals), and for $j \in \tmop{OR}$, we write $C^{n - 1}_j$ for $C^{n - 1} (j)$. We define by recursion on the interval $[C_{\omega}^{n - 1}, \infty)$ the functional \\ $A^{n - 1} : [C^{n - 1}_{\omega}, \infty) \longrightarrow \tmop{Subclasses} (\tmop{OR})$ as: For $t \in [C^{n - 1}_{\omega}, \infty)$, let $\alpha \in \tmop{Class} (n - 1)$ be such that $t \in [\alpha, \alpha ( +^{n - 1}))$. \\ Let $M := \left\{ \begin{array}{l} \max (T (n - 1, \alpha, t) \cap \alpha) \text{ if } T (n - 1, \alpha, t) \cap \alpha \neq \emptyset\\ - \infty \text{ otherwise} \end{array} \right.$. Case $t = l + 1$.
$A^{n - 1} (l + 1) := \left\{ \begin{array}{l} A^{n - 1} (l) \text{ if } l < \eta (n - 1, \alpha, l)\\ \\ \tmop{Lim} A^{n - 1} (l) \text{ otherwise; that is, } l = \eta (n - 1, \alpha, l) \end{array} \right.$ Case $t \in \tmop{Lim}$. $A^{n - 1} (t) := \left\{ \begin{array}{l} (\tmop{Lim} \tmop{Class} (n - 1)) \cap (M, \alpha + 1) \text{ if } t \in [\alpha, \alpha ( +^{n - 2}) ( +^{n - 3}) \ldots ( +^2) ( +^1) 2]\\ \\ \tmop{Lim} \{r \leqslant \alpha |M < r \in \bigcap_{s \in S (n - 1, \alpha, r, t)} A^{n - 1} (s)\} \text{ otherwise} \end{array} \right.$. On the other hand, we define the functional $G^{n - 1} : [C^{n - 1}_{\omega}, \infty) \longrightarrow \tmop{Subclasses} (\tmop{OR})$ in the \\ following way: For $t \in [C^{n - 1}_{\omega}, \infty)$, let $\alpha \in \tmop{Class} (n - 1)$ be such that $t \in [\alpha, \alpha ( +^{n - 1}))$ and let\\ $G^{n - 1} (t) := \{\beta \in \tmop{Class} (n - 1) |T (n - 1, \alpha, t) \cap \alpha \subset \beta \leqslant \alpha \wedge \beta \leqslant^{n - 1} \eta (n - 1, \alpha, t) [g (n - 1, \alpha, \beta)] + 1\}$ \hspace*{13mm} $=$, by GenThmIH (3) and (6),\\ \hspace*{14mm}$= \{\beta \in \tmop{Class} (n - 1) |T (n - 1, \alpha, t) \cap \alpha \subset \beta \leqslant \alpha \wedge \beta \leqslant_1 \eta (n - 1, \alpha, t) [g (n - 1, \alpha, \beta)] + 1\}$. \end{definition} \begin{remark} Notice that $G^{n - 1} (t)$ is well defined because for $\beta \in \tmop{Class} (n - 1)$ satisfying \\ $T (n - 1, \alpha, t) \cap \alpha \subset \beta$, (2.2.3) and (2.2.4) of GenThmIH imply $T (n - 1, \alpha, \eta (n - 1, \alpha, t)) \cap \alpha \subset \beta$; therefore, again by (2.2.3) of GenThmIH, $\tmop{Ep} (\eta (n - 1, \alpha, t)) \subset \tmop{Dom} (g (n - 1, \alpha, \beta))$. \end{remark} \begin{proposition} \label{A^n-1_constant_in_[alpha,alpha(+)...(+)2]}Let $\alpha \in \tmop{Class} (n - 1) \cap [C_{\omega}^{n - 1}, \infty)$.
Then \\ $\forall t \in [\alpha, \alpha ( +^{n - 2}) \ldots ( +^1) 2] .A^{n - 1} (t) = (\tmop{Lim} \tmop{Class} (n - 1)) \cap (\max (T (n - 1, \alpha, t) \cap \alpha), \alpha + 1)$. \end{proposition} \begin{proof} Left to the reader. \end{proof} \begin{theorem} \label{G^n-1(t)=A^n-1(t)_Gen_Hrchy_thm}$\forall t \in [C^{n - 1}_{\omega}, \infty) .G^{n - 1} (t) = A^{n - 1} (t)$ \end{theorem} \begin{proof} We proceed by induction on the class $[C^{n - 1}_{\omega}, \infty)$. Let $t \in [C^{n - 1}_{\omega}, \infty)$ and $\alpha \in \tmop{Class} (n - 1)$ be with $t \in [\alpha, \alpha ( +^{n - 1}))$. Suppose $\forall s \in [C_{\omega}^{n - 1}, \infty) \cap t.G^{n - 1} (s) = A^{n - 1} (s)$. \ \ \ {\tmstrong{(cIH)}} Case $t \in [\alpha, \alpha ( +^{n - 2}) ( +^{n - 3}) \ldots ( +^2) ( +^1) 2]$. Then $\eta (n - 1, \alpha, t) = \alpha (+^{n - 2}) ( +^{n - 3}) \ldots ( +^2) ( +^1) 2$ and so $G^{n - 1} (t) = \{\beta \in \tmop{Class} (n - 1) |T (n - 1, \alpha, t) \cap \alpha \subset \beta \leqslant \alpha \wedge$ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $\beta \leqslant^{n - 1} \eta (n - 1, \alpha, t) [g (n - 1, \alpha, \beta)] + 1\}=$ \ \ \ \ \ \ \ \ \ \ $=\{\beta \in \tmop{Class} (n - 1) |T (n - 1, \alpha, t) \cap \alpha \subset \beta \leqslant \alpha \wedge$ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $\beta \leqslant^{n - 1} (\alpha (+^{n - 2}) ( +^{n - 3}) \ldots ( +^2) ( +^1) 2) [g (n - 1, \alpha, \beta)] + 1\} =$ \ \ \ \ \ \ \ \ \ \ $=\{\beta \in \tmop{Class} (n - 1) |T (n - 1, \alpha, t) \cap \alpha \subset \beta \leqslant \alpha \wedge$ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $\beta \leqslant^{n - 1} \beta (+^{n - 2}) ( +^{n - 3}) \ldots ( +^2) ( +^1) 2 + 1\} =$ \ \ \ \ \ $\underset{\text{by proposition \ref{Simplest_Characterization_beta_in_Lim(Class(k))}}}{=}(\tmop{Lim} \tmop{Class} (n - 1)) \cap (\max (T (n - 1, \alpha, t) \cap \alpha), \alpha + 1) =$ \ \ \ \ $\underset{\text{by proposition \ref{A^n-1_constant_in_[alpha,alpha(+)...(+)2]}}}{=}A^{n - 1} (t)$.
The previous shows that the theorem holds on the interval $[\alpha, \alpha ( +^{n - 2}) ( +^{n - 3}) \ldots ( +^2) ( +^1) 2]$. So, {\tmstrong{from now on, we suppose}} {\tmstrong{$t \in (\alpha ( +^{n - 2}) ( +^{n - 3}) \ldots ( +^2) ( +^1) 2, \alpha ( +^{n - 1}))$}}. \ \ \ \ \ \ \ {\tmstrong{(A0)}} Successor subcase. Suppose $t = s + 1$ for some $s \in [\alpha ( +^{n - 2}) ( +^{n - 3}) \ldots ( +^2) ( +^1) 2, t)$. First note \\ $\eta (n - 1, \alpha, s + 1) = \max \{m (e) |e \in (\alpha, s + 1]\} = \max \{\max \{m (e) |e \in (\alpha, s]\}, m (s + 1) = s + 1\} =$ \ \ \ \ \ \ \ \ \ \ \ $= \left\{ \begin{array}{l} \max \{\max \{m (e) |e \in (\alpha, s]\}, s + 1\} \text{ if } s = \alpha (+^{n - 2}) ( +^{n - 3}) \ldots ( +^2) ( +^1) 2\\ \\ \max \{\max \{m (e) |e \in (\alpha, s]\}, s + 1\} \text{ if } s > \alpha (+^{n - 2}) ( +^{n - 3}) \ldots ( +^2) ( +^1) 2 \end{array} \right.$ \ \ \ \ \ \ \ \ \ \ \ $= \left\{ \begin{array}{l} \max \{\alpha ( +^{n - 2}) ( +^{n - 3}) \ldots ( +^2) ( +^1) 2, s + 1\} \text{ if } s = \alpha (+^{n - 2}) ( +^{n - 3}) \ldots ( +^2) ( +^1) 2\\ \\ \max \{\eta (n - 1, \alpha, s), s + 1\} \text{ if } s > \alpha (+^{n - 2}) ( +^{n - 3}) \ldots ( +^2) ( +^1) 2 \end{array} \right.$ \ \ \ \ \ \ \ \ \ \ \ $= \left\{ \begin{array}{l} \max \{\eta (n - 1, \alpha, s), s + 1\} \text{ if } s = \alpha (+^{n - 2}) ( +^{n - 3}) \ldots ( +^2) ( +^1) 2\\ \\ \max \{\eta (n - 1, \alpha, s), s + 1\} \text{ if } s > \alpha (+^{n - 2}) ( +^{n - 3}) \ldots ( +^2) ( +^1) 2 \end{array} \right.$ \ \ \ \ \ \ \ \ \ \ \ $= \max \{\eta (n - 1, \alpha, s), s + 1\}$. \ \ \ \ \ \ \ {\tmstrong{(A1)}} {\underline{Subsubcase $s < \eta (n - 1, \alpha, s)$}}. Then, using (A1), $\eta (n - 1, \alpha, s + 1) = \eta (n - 1, \alpha, s)$.
Therefore, \\ $G^{n - 1} (t) = G^{n - 1} (s + 1) = \{\beta \in \tmop{Class} (n - 1) |T (n - 1, \alpha, s + 1) \cap \alpha \subset \beta \leqslant \alpha \wedge$ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $\beta \leqslant^{n - 1} \eta (n - 1, \alpha, s + 1) [g (n - 1, \alpha, \beta)] + 1\}=$ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $\underset{\text{by (2.1.2) of GenThmIH}}{=} \{\beta \in \tmop{Class} (n - 1) |T (n - 1, \alpha, s) \cap \alpha \subset \beta \leqslant \alpha \wedge$ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $\beta \leqslant^{n - 1} \eta (n - 1, \alpha, s) [g (n - 1, \alpha, \beta)] + 1\}=$ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $= G^{n - 1} (s)\underset{\text{by cIH}}{=}A^{n - 1} (s)\underset{\text{because } s < \eta (n - 1, \alpha, s)}{=}A^{n - 1} (s + 1)$. {\underline{Subsubcase $s = \eta (n - 1, \alpha, s)$}}. So, from (A1), $\eta (n - 1, \alpha, s + 1) = s + 1 = \eta (n - 1, \alpha, s) + 1$. \ \ \ \ \ \ \ {\tmstrong{(A2)}} {\tmstrong{To show $G^{n - 1} (t) \subset A^{n - 1} (t)$.}} \ \ \ \ \ \ \ {\tmstrong{(A3)}} Let $\beta \in G^{n - 1} (t) = G^{n - 1} (s + 1)$. Then $\beta \in \tmop{Class} (n - 1)$, $T (n - 1, \alpha, s + 1) \cap \alpha \subset \beta \leqslant \alpha$ and \\ $\beta \leqslant^{n - 1} \eta (n - 1, \alpha, s + 1) [g (n - 1, \alpha, \beta)] + 1\underset{\text{by (A2)}}{=}(\eta (n - 1, \alpha, s) + 1) [g (n - 1, \alpha, \beta)] + 1$; from this and (4) of GenThmIH it follows that there exists a sequence \\ $(c_{\xi})_{\xi \in X} \subset \tmop{Class} (n - 1) \cap \beta$, $c_{\xi}\underset{\text{cof}}{{\lhook\joinrel\relbar\joinrel\rightarrow}}\beta$ such that for all $\xi \in X$, \\ $T (n - 1, \beta, (\eta (n - 1, \alpha, s) + 1) [g (n - 1, \alpha, \beta)]) \cap \beta \subset c_{\xi}$ and \\ $c_{\xi} \leqslant_1 \tmmathbf{(\eta (n - 1, \alpha, s) + 1)} [g (n - 1, \alpha, \beta)] [g (n - 1, \beta, c_{\xi})]$.
\ \ \ \ \ \ \ {\tmstrong{(A4)}} On the other hand, for any $\xi \in X$, $c_{\xi} \supset T (n - 1, \beta, (\eta (n - 1, \alpha, s) + 1) [g (n - 1, \alpha, \beta)]) \cap \beta =$\\ $T (n - 1, \beta, (s + 1) [g (n - 1, \alpha, \beta)]) \cap \beta\underset{\text{by (2.2.4) of GenThmIH}}{=}T (n - 1, \alpha, s + 1) \cap \alpha =$\\ $T (n - 1, \alpha, \eta (n - 1, \alpha, s) + 1) \cap \alpha$. \ \ \ \ \ \ \ {\tmstrong{(A5)}} Now, note that for any $\xi \in X$, by (A5) and (2.2.3) of GenThmIH, we have that\\ $\tmop{Ep} (\eta (n - 1, \alpha, s) + 1) \subset \tmop{Dom} (g (n - 1, \alpha, c_{\xi}))$. Then\\ $(\eta (n - 1, \alpha, s) + 1) [g (n - 1, \alpha, \beta)] [g (n - 1, \beta, c_{\xi})] =$\\ $(\eta (n - 1, \alpha, s) + 1) [g (n - 1, \beta, c_{\xi}) \circ g (n - 1, \alpha, \beta)]\underset{\text{by (2.5.3) of GenThmIH}}{=}$\\ $(\eta (n - 1, \alpha, s) + 1) [g (n - 1, \alpha, c_{\xi})] = \eta (n - 1, \alpha, s) [g (n - 1, \alpha, c_{\xi})] + 1\underset{\text{by (2.2.5) of GenThmIH}}{=}$\\ $\eta (n - 1, c_{\xi}, s [g (n - 1, \alpha, c_{\xi})]) + 1$. \ \ \ \ \ {\tmstrong{(A6)}} Having done the previous work, from (A4), (A5) (and (2.1.2) of GenThmIH) and (A6) it follows that \\ $\forall \xi \in X.T (n - 1, \alpha, s) \cap \alpha \subset c_{\xi} \leqslant \alpha \wedge c_{\xi} \leqslant_1 \eta (n - 1, c_{\xi}, s [g (n - 1, \alpha, c_{\xi})]) + 1$ and therefore, by (6) and (2.2.5) of GenThmIH, $\forall \xi \in X.T (n - 1, \alpha, s) \cap \alpha \subset c_{\xi} \leqslant \alpha \wedge c_{\xi} \leqslant^{n - 1} \eta (n - 1, \alpha, s) [g (n - 1, \alpha, c_{\xi})] + 1$. This shows that $(c_{\xi})_{\xi \in X} \subset G^{n - 1} (s)\underset{\text{by our cIH}}{=}A^{n - 1} (s)$, and since $c_{\xi}\underset{\text{cof}}{{\lhook\joinrel\relbar\joinrel\rightarrow}}\beta$, then we have that \\ $\beta \in \tmop{Lim} A^{n - 1} (s) = A^{n - 1} (s + 1) = A^{n - 1} (t)$. This proves {\tmstrong{(A3)}}.
{\tmstrong{We now show $G^{n - 1} (t) \supset A^{n - 1} (t)$.}} \ \ \ \ \ \ \ {\tmstrong{(B1)}} Let $\beta \in A^{n - 1} (t) = A^{n - 1} (s + 1) = \tmop{Lim} A^{n - 1} (s)\underset{\text{by our cIH}}{=}\tmop{Lim} G^{n - 1} (s)$. So there is a sequence \\ $(c_{\xi})_{\xi \in X} \subset G^{n - 1} (s)$ such that $c_{\xi}\underset{\text{cof}}{{\lhook\joinrel\relbar\joinrel\rightarrow}}\beta$. So for all $\xi \in X$, \\ $T (n - 1, \alpha, s) \cap \alpha \subset c_{\xi} \in \tmop{Class} (n - 1) \cap \beta \subset \alpha$ and \\ $c_{\xi} \leqslant^{n - 1} \eta (n - 1, \alpha, s) [g (n - 1, \alpha, c_{\xi})] + 1 = (\eta (n - 1, \alpha, s) + 1) [g (n - 1, \alpha, c_{\xi})]$. \ \ {\tmstrong{(B2)}} We will argue similarly to the proof of (A3). Let $\xi_0 \in X$. Then \ \\ $T (n - 1, \alpha, \eta (n - 1, \alpha, s) + 1) \cap \alpha = T (n - 1, \alpha, \eta (n - 1, \alpha, s)) \cap \alpha = T (n - 1, \alpha, s) \cap \alpha \subset c_{\xi_0} < \beta$, so $\tmmathbf{T (n - 1, \alpha, s) \cap \alpha \subset \beta}$ and the ordinal $(\eta (n - 1, \alpha, s) + 1) [g (n - 1, \alpha, \beta)] \in [\beta, \beta ( +^{n - 1}))$ is well defined. Now, for any $\xi \in X$, $T (n - 1, \beta, (\eta (n - 1, \alpha, s) + 1) [g (n - 1, \alpha, \beta)]) \cap \beta = T (n - 1, \alpha, (\eta (n - 1, \alpha, s) + 1)) \cap \alpha = T (n - 1, \alpha, s + 1) \cap \alpha = T (n - 1, \alpha, s) \cap \alpha \subset c_{\xi}$, and so the ordinal $(\eta (n - 1, \alpha, s) + 1) [g (n - 1, \alpha, \beta)] [g (n - 1, \beta, c_{\xi})] \in [c_{\xi}, c_{\xi} ( +^{n - 1}))$ is well defined too; moreover, using this and (2.5.3) of GenThmIH, we get \\ $(\eta (n - 1, \alpha, s) + 1) [g (n - 1, \alpha, \beta)] [g (n - 1, \beta, c_{\xi})] =$\\ $(\eta (n - 1, \alpha, s) + 1) [g (n - 1, \beta, c_{\xi}) \circ g (n - 1, \alpha, \beta)] = (\eta (n - 1, \alpha, s) + 1) [g (n - 1, \alpha, c_{\xi})]$.
But from this and (B2) we get\\ $\forall \xi \in X.T (n - 1, \beta, (\eta (n - 1, \alpha, s) + 1) [g (n - 1, \alpha, \beta)]) \cap \beta \subset c_{\xi} < \beta \wedge$\\ $c_{\xi} \leqslant_1 (\eta (n - 1, \alpha, s) + 1) [g (n - 1, \alpha, c_{\xi})] = (\eta (n - 1, \alpha, s) + 1) [g (n - 1, \alpha, \beta)] [g (n - 1, \beta, c_{\xi})]$; note that these previous two lines and the fact that $c_{\xi}\underset{\text{cof}}{{\lhook\joinrel\relbar\joinrel\rightarrow}}\beta$ mean that \\ $\beta \in \tmop{Lim} \{\gamma \in \tmop{Class} (n - 1) |T (n - 1, \beta, (\eta (n - 1, \alpha, s) + 1) [g (n - 1, \alpha, \beta)]) \cap \beta \subset \gamma \wedge$\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $\gamma \leqslant_1 (\eta (n - 1, \alpha, s) + 1) [g (n - 1, \alpha, \beta)] [g (n - 1, \beta, \gamma)] \}$. Thus, from all of the above and using (5.1) of GenThmIH, we conclude \\ $T (n - 1, \alpha, s) \cap \alpha \subset \beta \leqslant \alpha \wedge$\\ $\beta \leqslant^{n - 1} (\eta (n - 1, \alpha, s) + 1) [g (n - 1, \alpha, \beta)] + 1\underset{\text{by (A2)}}{=}\eta (n - 1, \alpha, s + 1) [g (n - 1, \alpha, \beta)] + 1 =$\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $= \eta (n - 1, \alpha, t) [g (n - 1, \alpha, \beta)] + 1$. \ \ \ \ \ \ \ {\tmstrong{(B3)}} (B3) shows that $\beta \in G^{n - 1} (t)$. Hence we have shown $G^{n - 1} (t) \supset A^{n - 1} (t)$. All the previous work shows that for $t$ a successor ordinal the theorem holds. Now we have to see what happens when $t$ is a limit ordinal. Subcase $t \in \tmop{Lim}$. We remind the reader that, by (A0), we also know that \\ $t \in (\alpha ( +^{n - 2}) ( +^{n - 3}) \ldots ( +^2) ( +^1) 2, \alpha ( +^{n - 1}))$. {\tmstrong{To show}} $\tmmathbf{G^{n - 1} (t) \subset A^{n - 1} (t)}$. \ \ \ \ \ \ \ {\tmstrong{(B4)}} Let $\beta \in G^{n - 1} (t)$.
So $T (n - 1, \alpha, t) \cap \alpha \subset \beta \leqslant^{n - 1} \eta (n - 1, \alpha, t) [g (n - 1, \alpha, \beta)] + 1$ and \\ $\alpha \geqslant \beta \in \tmop{Class} (n - 1)$. Then, by (4) of GenThmIH there exists a sequence \\ $(c_{\xi})_{\xi \in X} \subset \tmop{Class} (n - 1) \cap \beta$, $c_{\xi} \underset{\text{cof}}{{\lhook\joinrel\relbar\joinrel\rightarrow}} \beta$ such that for all $\xi \in X$,\\ $T (n - 1, \beta, \eta (n - 1, \alpha, t) [g (n - 1, \alpha, \beta)]) \cap \beta \subset c_{\xi}$ and \\ $c_{\xi} \leqslant_1 \tmmathbf{\eta (n - 1, \alpha, t)} [g (n - 1, \alpha, \beta)] [g (n - 1, \beta, c_{\xi})]$. \ \ \ \ \ \ \ {\tmstrong{(B5)}} On the other hand, since $T (n - 1, \alpha, t) \cap \alpha \subset \beta$, $T (n - 1, \alpha, t)$ is finite (by (2.1.1) of GenThmIH) and $c_{\xi} \underset{\text{cof}}{{\lhook\joinrel\relbar\joinrel\rightarrow}} \beta$, then $T (n - 1, \alpha, t) \cap \alpha$ is also finite and therefore we can assume without loss of generality that $\forall \xi \in X.T (n - 1, \alpha, t) \cap \alpha \subset c_{\xi}$. \ \ \ \ \ \ \ {\tmstrong{(B6)}} Now, notice for any $\xi \in X$, \\ $T (n - 1, \alpha, \eta (n - 1, \alpha, t)) \cap \alpha\underset{\text{by (2.1.3) of GenThmIH}}{\subset}T (n - 1, \alpha, t) \cap \alpha \subset c_{\xi}$ and therefore, by (B6) and (2.2.3) of GenThmIH, \\ $\tmop{Ep} (\eta (n - 1, \alpha, t)) \subset \tmop{Dom} (g (n - 1, \alpha, c_{\xi}))$. This way, \\ $\eta (n - 1, \alpha, t) [g (n - 1, \alpha, \beta)] [g (n - 1, \beta, c_{\xi})] =$\\ $\eta (n - 1, \alpha, t) [g (n - 1, \beta, c_{\xi}) \circ g (n - 1, \alpha, \beta)]\underset{\text{(2.5.3) of GenThmIH}}{=}$\\ $\eta (n - 1, \alpha, t) [g (n - 1, \alpha, c_{\xi})]$.
From this, (B5) and (B6) we obtain\\ $\forall \xi \in X.T (n - 1, \alpha, t) \cap \alpha \subset c_{\xi} \leqslant \alpha \wedge$\\ \ \ \ \ \ \ \ \ $c_{\xi} \leqslant_1 \eta (n - 1, \alpha, t) [g (n - 1, \alpha, c_{\xi})] = \eta (n - 1, \alpha, l (n - 1, \alpha, t)) [g (n - 1, \alpha, c_{\xi})]$. \ \ \ \ \ \ \ {\tmstrong{(C1)}} Let's see now that $\forall \xi \in X.c_{\xi} \in\underset{s \in S (n - 1, \alpha, c_{\xi}, t)}{\bigcap}A^{n - 1} (s)$. \ \ \ \ \ \ \ {\tmstrong{(C2)}} Let $\xi \in X$ be arbitrary. Take $s \in S (n - 1, \alpha, c_{\xi}, t)$. Then $s \in (\alpha, l (n - 1, \alpha, t))$ and then, by the definition of $l (n - 1, \alpha, t)$, it follows that $\eta (n - 1, \alpha, s) < \eta (n - 1, \alpha, l (n - 1, \alpha, t))$. On the other hand, since $T (n - 1, \alpha, s) \cap \alpha \subset c_{\xi}$ then $T (n - 1, \alpha, \eta (n - 1, \alpha, s)) \cap \alpha \subset c_{\xi}$ (by (2.1.3) of GenThmIH); moreover, we know $T (n - 1, \alpha, t) \cap \alpha \subset c_{\xi}$, so by (2.1.4) of GenThmIH, \\ $T (n - 1, \alpha, l (n - 1, \alpha, t)) \cap \alpha \subset c_{\xi}$. From the previous paragraph it follows that, for any $\xi \in X$, the ordinals \\ $\eta (n - 1, \alpha, s) [g (n - 1, \alpha, c_{\xi})], l (n - 1, \alpha, t) [g (n - 1, \alpha, c_{\xi})] \in (c_{\xi}, c_{\xi} ( +^{n - 1})) \subset \beta < \alpha$ are well defined and that $c_{\xi} < \eta (n - 1, \alpha, s) [g (n - 1, \alpha, c_{\xi})] + 1 \leqslant l (n - 1, \alpha, t) [g (n - 1, \alpha, c_{\xi})]$. These last inequalities imply, by (C1) and $\leqslant_1$-connectedness, that \\ $c_{\xi} <_1 \eta (n - 1, \alpha, s) [g (n - 1, \alpha, c_{\xi})] + 1\underset{\text{by (2.2.5) of GenThmIH}}{=}$ \ $\eta (n - 1, c_{\xi}, s [g (n - 1, \alpha, c_{\xi})]) + 1$, and then (by (6) of GenThmIH) \\ $c_{\xi} <^{n - 1} \eta (n - 1, c_{\xi}, s [g (n - 1, \alpha, c_{\xi})]) + 1\underset{\text{by (2.2.5) of GenThmIH}}{=}\eta (n - 1, \alpha, s) [g (n - 1, \alpha, c_{\xi})] + 1$.
The previous shows that for all $\xi \in X$ and all $s \in S (n - 1, \alpha, c_{\xi}, t)$, \\ $c_{\xi} \in G^{n - 1} (s)\underset{\text{cIH}}{=}A^{n - 1} (s)$, that is, we have shown (C2). From (C2) and the fact that $c_{\xi}\underset{\text{cof}}{{\lhook\joinrel\relbar\joinrel\rightarrow}}\beta$ we conclude that $\beta \in \tmop{Lim} \{r \leqslant \alpha |M < r \in\underset{s \in S (n - 1, \alpha, r, t)}{\bigcap}A^{n - 1} (s) \} = A^{n - 1} (t)$. This shows (B4). {\tmstrong{Now we show}} $\tmmathbf{G^{n - 1} (t) \supset A^{n - 1} (t)}$. \ \ \ \ \ \ \ {\tmstrong{(C3)}} Let $\beta \in A^{n - 1} (t) = \tmop{Lim} \{r \leqslant \alpha |M < r \in\underset{s \in S (n - 1, \alpha, r, t)}{\bigcap}A^{n - 1} (s) \}\underset{\text{cIH}}{=}$\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $= \tmop{Lim} \{r \leqslant \alpha |M < r \in\underset{s \in S (n - 1, \alpha, r, t)}{\bigcap}G^{n - 1} (s) \}$. Then there is a sequence\\ $(c_{\xi})_{\xi \in X}$ such that $M < c_{\xi} \underset{\text{cof}}{{\lhook\joinrel\relbar\joinrel\rightarrow}} \beta$ and $\forall \xi \in X.c_{\xi} \in\underset{s \in S (n - 1, \alpha, c_{\xi}, t)}{\bigcap}G^{n - 1} (s)$. \ \ \ \ \ \ \ {\tmstrong{(C4)}} \\ Note that since $\forall \xi \in X.c_{\xi} \in\underset{s \in S (n - 1, \alpha, c_{\xi}, t)}{\bigcap}G^{n - 1} (s) \subset \tmop{Class} (n - 1)$ and $(c_{\xi})_{\xi \in X}$ is cofinal in $\beta$, then, by proposition \ref{Class(n)_is_closed}, $\beta \in \tmop{Class} (n - 1)$. Now, for any $\xi \in X$, we know $\max (T (n - 1, \alpha, t) \cap \alpha) = M < c_{\xi} < \beta$; therefore, by (2.2.3) and (2.1.4) of GenThmIH, we have that\\ $t [g (n - 1, \alpha, \beta)], l (n - 1, \alpha, t) [g (n - 1, \alpha, \beta)] \in (\beta, \beta ( +^{n - 1}))$ and $t [g (n - 1, \alpha, c_{\xi})], l (n - 1, \alpha, t) [g (n - 1, \alpha, c_{\xi})] \in (c_{\xi}, c_{\xi} ( +^{n - 1}))$ are well defined.
\ \ \ \ \ \ \ {\tmstrong{(C5)}} Our next aim is to show that $\forall \xi \in X.c_{\xi} \leqslant_1 l (n - 1, \alpha, t) [g (n - 1, \alpha, c_{\xi})]$. \ \ \ \ \ \ \ {\tmstrong{(C6)}} Let $\xi \in X$ be arbitrary. First note that, since $t \in \tmop{Lim}$, then $l (n - 1, \alpha, t) \in \tmop{Lim}$ (because \\ $l (n - 1, \alpha, t) = t \in \tmop{Lim}$ or $l (n - 1, \alpha, t) < l (n - 1, \alpha, t) + 1 \leqslant t \leqslant m (t) = m (l (n - 1, \alpha, t))$; the latter case implies $l (n - 1, \alpha, t) <_1 l (n - 1, \alpha, t) + 1$ by $\leqslant_1$-connectedness and so $l (n - 1, \alpha, t) \in \mathbbm{P}$) and then $l (n - 1, \alpha, t) [g (n - 1, \alpha, c_{\xi})] \in \tmop{Lim}$ (simply because $l (n - 1, \alpha, t) [g (n - 1, \alpha, c_{\xi})]$ is the result of substituting epsilon numbers by other epsilon numbers in the Cantor normal form of $l (n - 1, \alpha, t)$). Now, let $q \in (c_{\xi}, l (n - 1, \alpha, t) [g (n - 1, \alpha, c_{\xi})]) \underset{\text{by (2.2) of GenThmIH}}{\subset} (c_{\xi}, c_{\xi} ( +^{n - 1}))$ be arbitrary. Then by (2.3.1) of GenThmIH, $\tmop{Ep} (q) \subset \tmop{Dom} (g (n - 1, c_{\xi}, \alpha))$ and then\\ $q [g (n - 1, c_{\xi}, \alpha)] \underset{\text{by (2.4.3) of GenThmIH}}{\in} (\alpha, l (n - 1, \alpha, t) [g (n - 1, \alpha, c_{\xi})] [g (n - 1, c_{\xi}, \alpha)]) =$ \\ $\underset{\text{by (2.3.2) of GenThmIH}}{=} (\alpha, l (n - 1, \alpha, t))$. This shows \\ $q [g (n - 1, c_{\xi}, \alpha)] \in \tmop{Im} (g (n - 1, c_{\xi}, \alpha)) \cap (\alpha, l (n - 1, \alpha, t))\underset{\text{by (2.3.2) of GenThmIH}}{=}$\\ $(\tmop{Dom} (g (n - 1, \alpha, c_{\xi}))) \cap (\alpha, l (n - 1, \alpha, t))\underset{\text{remark} \ref{S(i,a,r,t)=l(i,a,t)_intersection_Dom(g(i,a,r))}}{=}S (n - 1, \alpha, c_{\xi}, t)$, and so by (C4), \\ $c_{\xi} \in G^{n - 1} (q [g (n - 1, c_{\xi}, \alpha)])$.
Finally, observe that the latter implies \\ $c_{\xi} \leqslant^{n - 1} \eta (n - 1, \alpha, q [g (n - 1, c_{\xi}, \alpha)]) [g (n - 1, \alpha, c_{\xi})] + 1\underset{\text{(2.2.5) of GenThmIH}}{=}$\\ \ \ \ \ \ \ $= \eta (n - 1, c_{\xi}, q [g (n - 1, c_{\xi}, \alpha)] [g (n - 1, \alpha, c_{\xi})]) + 1 = \eta (n - 1, c_{\xi}, q) + 1$, which subsequently implies (using $c_{\xi} < q \leqslant \eta (n - 1, c_{\xi}, q)$ and $\leqslant_1$-connectedness) that $c_{\xi} \leqslant_1 q$. The last paragraph proves that, for $\xi \in X$, the sequence $(d_q)_{q \in Y}$ defined as $d_q := q$ and \\ $Y := (c_{\xi}, l (n - 1, \alpha, t) [g (n - 1, \alpha, c_{\xi})])$, satisfies $\forall q \in Y.c_{\xi} \leqslant_1 q$; but this and the fact that $d_q\underset{\text{cof}}{{\lhook\joinrel\relbar\joinrel\rightarrow}}l (n - 1, \alpha, t) [g (n - 1, \alpha, c_{\xi})]$ (we already showed $l (n - 1, \alpha, t) [g (n - 1, \alpha, c_{\xi})] \in \tmop{Lim}$) imply $c_{\xi} \leqslant_1 l (n - 1, \alpha, t) [g (n - 1, \alpha, c_{\xi})]$ by $\leqslant_1$-continuity. Since the previous was done for arbitrary $\xi \in X$, we conclude $\forall \xi \in X.c_{\xi} \leqslant_1 l (n - 1, \alpha, t) [g (n - 1, \alpha, c_{\xi})]$. This proves (C6). We continue with the proof of (C3). Let $\xi \in X$.
Using (C6) we get \\ $c_{\xi} \leqslant_1 l (n - 1, \alpha, t) [g (n - 1, \alpha, c_{\xi})] \leqslant_1 \eta (n - 1, c_{\xi}, l (n - 1, \alpha, t) [g (n - 1, \alpha, c_{\xi})]) =$\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $\underset{\text{by (2.5.3) of GenThmIH}}{=}$\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $= \eta (n - 1, c_{\xi}, l (n - 1, \alpha, t) [g (n - 1, \beta, c_{\xi}) \circ g (n - 1, \alpha, \beta)]) =$\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $= \eta (n - 1, c_{\xi}, l (n - 1, \alpha, t) [g (n - 1, \alpha, \beta)] [g (n - 1, \beta, c_{\xi})]) =$\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $\underset{\text{by (2.2.5) of GenThmIH}}{=}$\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $= \eta (n - 1, \beta, l (n - 1, \alpha, t) [g (n - 1, \alpha, \beta)]) [g (n - 1, \beta, c_{\xi})]$;\\ therefore, by $\leqslant_1$-transitivity, $c_{\xi} \leqslant_1 \eta (n - 1, \beta, l (n - 1, \alpha, t) [g (n - 1, \alpha, \beta)]) [g (n - 1, \beta, c_{\xi})]$. But since this was done for arbitrary $\xi \in X$, we have proved \\ $\forall \xi \in X.c_{\xi} \leqslant_1 \eta (n - 1, \beta, l (n - 1, \alpha, t) [g (n - 1, \alpha, \beta)]) [g (n - 1, \beta, c_{\xi})]$. \ \ \ \ \ \ \ {\tmstrong{(C7)}} Finally, from (C7), the fact that $c_{\xi} \underset{\text{cof}}{{\lhook\joinrel\relbar\joinrel\rightarrow}} \beta$ and (5) of GenThmIH, it follows that\\ $\beta \leqslant^{n - 1} \eta (n - 1, \beta, l (n - 1, \alpha, t) [g (n - 1, \alpha, \beta)]) + 1\underset{\text{by (2.2.5) of GenThmIH}}{=}$\\ \ $= \eta (n - 1, \alpha, l (n - 1, \alpha, t)) [g (n - 1, \alpha, \beta)] + 1\underset{\text{by proposition } \ref{eta(i,a,l(i,a,t))=eta(i,a,t)_if_t_in_(a(+)...(+^1)2,a(+^i)}}{=}$\\ \ $= \eta (n - 1, \alpha, t) [g (n - 1, \alpha, \beta)] + 1$. This and (C5) show that $\beta \in G^{n - 1} (t)$.
Since the previous was done for arbitrary $\beta \in A^{n - 1} (t)$, we have proved $A^{n - 1} (t) \subset G^{n - 1} (t)$, i.e., we have proved (C3). \end{proof} \subsection{Uncountable regular ordinals and the $A^{n - 1} (t)$ sets} \begin{proposition} \label{A^n-1(t)_club_in_kapa}Let $\kappa$ be an uncountable regular ordinal ($\kappa \in \tmop{Class} (n - 1)$ by proposition \ref{regular_ordinal_in_Class(n-1)}). Then $\forall t \in [\kappa, \kappa ( +^{n - 1}))$, $A^{n - 1} (t)$ is club in $\kappa$. \end{proposition} \begin{proof} We prove the claim by induction on the interval $[\kappa, \kappa ( +^{n - 1}))$. {\tmstrong{Case $t = \kappa$.}} Then $T (n - 1, \kappa, t) \cap \kappa \underset{\text{definition of } T (n - 1, \kappa, \kappa)}{=} \emptyset$. So \\ $A^{n - 1} (t) = (\tmop{Lim} \tmop{Class} (n - 1)) \cap (- \infty, \kappa + 1) = \tmop{Lim} \tmop{Class} (n - 1)$ is club in $\kappa$ because \\ $\tmop{Class} (n - 1)$ is club in $\kappa$ (by GenThmIH (0)) and because of {\cite{GarciaCornejo1}} proposition \ref{X_club_implies_LimX_club}. Our induction hypothesis is $\forall s \in [\kappa, \kappa ( +^{n - 1})) \cap t.A^{n - 1} (s)$ is club in $\kappa$. \ \ \ \ \ \ \ {\tmstrong{(IH)}} {\tmstrong{Case $t = l + 1 \in [\kappa, \kappa ( +^{n - 1}))$.}} Then $A^{n - 1} (t) = A^{n - 1} (l + 1) = \left\{ \begin{array}{l} A^{n - 1} (l) \text{ if } l < \eta (n - 1, \kappa, l)\\ \\ \tmop{Lim} A^{n - 1} (l) \text{ otherwise} \end{array} \right.$; this way, by our (IH) and {\cite{GarciaCornejo1}} proposition \ref{X_club_implies_LimX_club}, $A^{n - 1} (t)$ is club in $\kappa$.
{\tmstrong{Case $t \in [\kappa, \kappa ( +^{n - 1})) \cap \tmop{Lim}$.}} By definition $M := \left\{ \begin{array}{l} \max (T (n - 1, \kappa, t) \cap \kappa) \text{ if } T (n - 1, \kappa, t) \cap \kappa \neq \emptyset\\ - \infty \text{ otherwise} \end{array} \right.$ and $A^{n - 1} (t) = \left\{ \begin{array}{l} (\tmop{Lim} \tmop{Class} (n - 1)) \cap (M, \kappa + 1) \text{ if } t \in [\kappa, \kappa ( +^{n - 2}) \ldots ( +^2) ( +^1) 2]\\ \\ \tmop{Lim} \{r \leqslant \kappa |M < r \in \bigcap_{s \in S (n - 1, \kappa, r, t)} A^{n - 1} (s)\} \text{ otherwise} \end{array} \right.$ If $t \in [\kappa, \kappa ( +^{n - 2}) \ldots ( +^2) ( +^1) 2]$, then \\ $A^{n - 1} (t) = (\tmop{Lim} \tmop{Class} (n - 1)) \cap (M, \kappa + 1)$ is club in $\kappa$ for exactly the same reasons as in the case $t = \kappa$. So from now on we suppose $t \in (\kappa ( +^{n - 2}) \ldots ( +^2) ( +^1) 2, \kappa ( +^{n - 1}))$. First we make the following four observations: - It is enough to show that $Y := \{r \leqslant \kappa |M < r \in \bigcap_{s \in S (n - 1, \kappa, r, t)} A^{n - 1} (s)\}$ is club in $\kappa$ because, knowing this, we conclude $\tmop{Lim} Y = A^{n - 1} (t)$ is club in $\kappa$ by {\cite{GarciaCornejo1}} proposition \ref{X_club_implies_LimX_club}. Moreover, note that as a consequence of theorem \ref{G^n-1(t)=A^n-1(t)_Gen_Hrchy_thm}, $\forall z \in \tmop{Dom} A^{n - 1} .A^{n - 1} (z) \subset \tmop{Class} (n - 1)$ and therefore $Y = \{r \in \tmop{Class} (n - 1) \cap (\kappa + 1) |M < r \in \bigcap_{s \in S (n - 1, \kappa, r, t)} A^{n - 1} (s)\}$.
\ \ \ \ \ \ \ {\tmstrong{(0*)}} - For $r \in \tmop{Class} (n - 1) \cap \kappa$, \\ $\{q \in (\kappa, l (n - 1, \kappa, t)) | \tmop{Ep} (q) \subset (\tmop{Im} g (n - 1, r, \kappa))\} \underset{\text{by (2.3.2) of GenThmIH}}{=}$\\ $\{q \in (\kappa, l (n - 1, \kappa, t)) | \tmop{Ep} (q) \subset (\tmop{Dom} g (n - 1, \kappa, r))\}\underset{\text{by remark } \ref{S(i,a,r,t)=l(i,a,t)_intersection_Dom(g(i,a,r))}}{=}$\\ $S (n - 1, \kappa, r, t) \underset{\text{by remark } \ref{S(i,a,r,t)=l(i,a,t)_intersection_Dom(g(i,a,r))}}{\subset} l (n - 1, \kappa, t) \underset{\text{by remark }\ref{S(i,a,r,t)=l(i,a,t)_intersection_Dom(g(i,a,r))}}{\leqslant} t$. \ \ \ \ \ \ \ {\tmstrong{(1*)}} - By (1*) and our (IH), \\ $\forall r \in \tmop{Class} (n - 1) \cap \kappa \forall s \in S (n - 1, \kappa, r, t)$, $A^{n - 1} (s)$ is club in $\kappa$. \ \ {\tmstrong{(2*)}} - Let $r \in \tmop{Class} (n - 1) \cap \kappa$. By (0) of GenThmIH, $\tmop{Class} (n - 1)$ is club in $\kappa$ and consequently $r (+^{n - 1}) \in \tmop{Class} (n - 1) \cap \kappa$; moreover, by proposition \ref{regular_ordinal_in_Class(n-1)}, $\kappa \in \tmop{Class} (n - 1)$ and subsequently, \\ $r < r (+^{n - 1}) < \kappa < \kappa (+^{n - 1})$. Consider the function $P_r : r (+^{n - 1}) \longrightarrow \kappa (+^{n - 1})$ defined as $P_r (x) := x [g (n - 1, r, \kappa)]$. $P_r$ is well defined because of (2.3.1) of GenThmIH. We now show that $S (n - 1, \kappa, r, t) \subset \tmop{Im} P_r$. This is easy: Take $q \in S (n - 1, \kappa, r, t)$. Then, by (1*), \\ $\tmop{Ep} (q) \subset \tmop{Dom} (g (n - 1, \kappa, r))$ and therefore $q [g (n - 1, \kappa, r)]$ is well defined; but then, by (2.3.3) and (2.3.2) of GenThmIH, $q [g (n - 1, \kappa, r)] \in r (+^{n - 1})$ and $q = q [g (n - 1, \kappa, r)] [g (n - 1, r, \kappa)] = P_r (q [g (n - 1, \kappa, r)])$. This shows $S (n - 1, \kappa, r, t) \subset \tmop{Im} P_r$ as claimed.
Finally, since $P_r$ is a strictly increasing function (hence injective), \\ $|S (n - 1, \kappa, r, t) | \leqslant | \tmop{Im} P_r | = |r ( +^{n - 1}) | \underset{\text{because } \kappa \text{ is a cardinal}}{<} \kappa$. \ \ \ \ \ \ \ {\tmstrong{(3*)}} After the previous observations, we continue with the proof of the theorem; that is, as already said in (0*), we want to show that $Y$ is club in $\kappa$. {\tmstrong{We show first that $Y$ is $\kappa$-closed.}} Let $(r'_i)_{i \in I'} \subset Y \cap \kappa$ be such that $|I' | < \kappa$ and $r'_i\underset{\text{cof}}{{\lhook\joinrel\relbar\joinrel\rightarrow}}\rho$ for some $\rho < \kappa$. To show that $\rho \in Y$. Since $Y \subset \tmop{Class} (n - 1)$ and by (0) of GenThmIH $\tmop{Class} (n - 1)$ is club in $\kappa$, it follows that \\ $\rho \in \tmop{Class} (n - 1)$. Now consider $s \in S (n - 1, \kappa, \rho, t) =$\\ $\{d \in (\kappa, l (n - 1, \kappa, t)) \subset (\kappa, \kappa (+^{n - 1})) | T (n - 1, \kappa, d) \cap \kappa \subset \rho\}$. Since by (2.1.1) of GenThmIH $T (n - 1, \kappa, s) \cap \kappa$ is finite and $r'_i\underset{\text{cof}}{{\lhook\joinrel\relbar\joinrel\rightarrow}}\rho$, there exists a subsequence $(r_i)_{i \in I}$ of the sequence $(r'_i)_{i \in I'}$ such that $r_i\underset{\text{cof}}{{\lhook\joinrel\relbar\joinrel\rightarrow}}\rho$, $\forall i \in I.T (n - 1, \kappa, s) \cap \kappa \subset r_i$ and $|I| \leqslant |I' | < \kappa$; that is, \\ $\forall i \in I.s \in S (n - 1, \kappa, r_i, t)$. This and the fact that $(r_i)_{i \in I} \subset Y$ mean $\forall i \in I.r_i \in A^{n - 1} (s)$. But by (2*) $A^{n - 1} (s)$ is club in $\kappa$, so $\rho = \sup \{r_i |i \in I\} \in A^{n - 1} (s)$. Our previous work shows that, for arbitrary $s \in S (n - 1, \kappa, \rho, t)$, $\rho \in A^{n - 1} (s)$, i.e., $\rho \in \bigcap_{s \in S (n - 1, \kappa, \rho, t)} A^{n - 1} (s)$. From this it follows that $\rho \in Y$. Hence $Y$ is $\kappa$-closed.
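In summary, the $\kappa$-closedness part of the argument establishes the implication chain
\[
(r'_i)_{i \in I'} \subset Y, \ |I'| < \kappa, \ r'_i \underset{\text{cof}}{{\lhook\joinrel\relbar\joinrel\rightarrow}} \rho < \kappa
\ \Longrightarrow \
\forall s \in S (n - 1, \kappa, \rho, t) . \rho \in A^{n - 1} (s)
\ \Longrightarrow \
\rho \in Y,
\]
where the first implication rests on (2*) together with the finiteness of the sets $T (n - 1, \kappa, s) \cap \kappa$ provided by (2.1.1) of GenThmIH.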
{\tmstrong{Now our aim is to prove that $Y$ is unbounded in $\kappa$.}} \ \ \ \ \ \ \ {\tmstrong{(b0)}} We do first the following: Let $R := \tmop{Class} (n - 1) \cap \kappa$ and $B_r := \bigcap_{s \in S (n - 1, \kappa, r, t)} A^{n - 1} (s)$ for any $r \in R$. Let's show first that $\forall \xi \in \tmop{Lim} R \cap \kappa$.$\bigcap_{r \in R \cap \xi} B_r = B_{\xi}$. \ \ \ \ \ \ \ {\tmstrong{(b1)}} Proof of (b1): Let $\xi \in \tmop{Lim} R \cap \kappa$. {\underline{We show the inclusion ``$\subset$'' of (b1).}} Let $x \in \bigcap_{r \in R \cap \xi} B_r = \bigcap_{r \in R \cap \xi} ( \bigcap_{s \in S (n - 1, \kappa, r, t)} A^{n - 1} (s))$ be arbitrary. This means \\ $\forall r \in R \cap \xi . \forall s \in S (n - 1, \kappa, r, t) .x \in A^{n - 1} (s)$. \ \ \ \ \ \ \ {\tmstrong{(b2)}} On the other hand, let $z \in S (n - 1, \kappa, \xi, t)$ be arbitrary. By definition of $S (n - 1, \kappa, \xi, t)$, this means $z \in (\kappa, l (n - 1, \kappa, t))$ and $T (n - 1, \kappa, z) \cap \kappa \subset \xi$. But $T (n - 1, \kappa, z) \cap \kappa$ is a finite set (by (2.1.1) of GenThmIH), so, since $\xi \in \tmop{Lim} R$, there exists $r \in R$ such that $T (n - 1, \kappa, z) \cap \kappa \subset r < \xi$. This means $z \in S (n - 1, \kappa, r, t)$, and then, by (b2), $x \in A^{n - 1} (z)$. Note the previous shows $\forall z \in S (n - 1, \kappa, \xi, t) .x \in A^{n - 1} (z)$, i.e., $x \in \bigcap_{s \in S (n - 1, \kappa, \xi, t)} A^{n - 1} (s) = B_{\xi}$. Finally, since this was done for arbitrary $x \in \bigcap_{r \in R \cap \xi} B_r$, we have actually shown that $\bigcap_{r \in R \cap \xi} B_r \subset B_{\xi}$. {\underline{Now we show the inclusion ``$\supset$'' of (b1).}} Let $x \in B_{\xi} = \bigcap_{s \in S (n - 1, \kappa, \xi, t)} A^{n - 1} (s)$ be arbitrary. This means \\ $\forall s \in S (n - 1, \kappa, \xi, t) .x \in A^{n - 1} (s)$. \ \ \ \ \ {\tmstrong{(b4)}} On the other hand, let $r \in R \cap \xi$ be arbitrary. Take $z \in S (n - 1, \kappa, r, t)$.
By definition, this means $z \in (\kappa, l (n - 1, \kappa, t))$ and $T (n - 1, \kappa, z) \cap \kappa \subset r$. But since $r < \xi$, this implies that actually $z \in S (n - 1, \kappa, \xi, t)$, which, together with (b4), implies $x \in A^{n - 1} (z)$. Note we have shown \\ $\forall z \in S (n - 1, \kappa, r, t) .x \in A^{n - 1} (z)$, i.e., $x \in \bigcap_{z \in S (n - 1, \kappa, r, t)} A^{n - 1} (z) = B_r$; moreover, we have shown this for arbitrary $r \in R \cap \xi$, i.e., we have shown $x \in \bigcap_{r \in R \cap \xi} B_r$. Finally, since this was done for arbitrary $x \in B_{\xi}$, we have shown $\bigcap_{r \in R \cap \xi} B_r \supset B_{\xi}$. This concludes the proof of (b1). Now we show that $X := \{r \in R | M < r \in B_r \}$ is unbounded in $\kappa$. \ \ \ \ \ \ \ {\tmstrong{(c0)}} By (2*), (3*) and {\cite{GarciaCornejo1}} proposition \ref{Intersection_club_classes} we have that for any $r \in R$, $B_r$ is club in $\kappa$. \ \ \ \ \ \ \ {\tmstrong{(c1)}} Let $\delta \in \kappa$ be arbitrary. Moreover, let $a := \min R$. We define by recursion the function \\ $r : \omega \longrightarrow R$ as: $r (0) := \min \{s \in \kappa \cap B_a | \delta < s > M\}$. Note $r (0)$ exists because of (c1). Suppose we have defined $r (l) \in R = \tmop{Class} (n - 1) \cap \kappa$, for $l \in \omega$. \ \ \ \ \ \ \ {\tmstrong{(rIH)}} Note that $|R \cap r (l) | \leqslant r (l) \underset{\text{by (rIH)}}{<}\kappa$, and then, by (c1) and {\cite{GarciaCornejo1}} proposition \ref{Intersection_club_classes}, it follows that $\underset{z \in R \cap r (l)}{\bigcap}B_z$ is club in $\kappa$. So we define $r (l + 1) := \min \{s \in \kappa \cap\underset{z \in R \cap r (l)}{\bigcap}B_z | r (l) < s\}$. Consider $\rho := \sup \{r (l) | l \in \omega\}$.
First note that, by construction, $(r (l))_{l \in \omega}$ is a strictly increasing sequence of ordinals in $R$ (because any $B_s$ is club in $\kappa$ and $B_s \subset \tmop{Class} (n - 1)$) and so $\rho \in \tmop{Class} (n - 1) \cap (\tmop{Lim} R)$. Moreover, since $\kappa$ is an uncountable regular ordinal and $r : \omega \longrightarrow \kappa$, then $\rho < \kappa$. Summarizing all these observations: $\rho \in R \cap (\tmop{Lim} R)$. \ \ \ \ \ \ {\tmstrong{(c2)}} Now we show $M < \rho \wedge \delta < \rho \in B_{\rho}$. \ \ \ \ \ \ \ {\tmstrong{(c3)}} That $\delta < \rho > M$ is clear from the definition of the function $r$. Now, let $\gamma \in R \cap \rho$ be arbitrary. Then there exists $l \in \omega$ such that $r (l) > \gamma$. Now, by the definition of our function $r$, \\ $r (l + 1) \in\underset{z \in R \cap r (l)}{\bigcap}B_z$; but this implies that the sequence $(r (s))_{s \in [l + 1, \omega)} \subset B_{\gamma}$, and since \\ $\rho = \sup \{r (s) | s \in [l + 1, \omega)\}$ and $B_{\gamma}$ is club in $\kappa$, then $\rho \in B_{\gamma}$. Finally, since this was done for arbitrary $\gamma \in R \cap \rho$, we have actually shown that $\rho \in\underset{\gamma \in R \cap \rho}{\bigcap}B_{\gamma}\underset{\text{by (b1) and (c2)}}{=}B_{\rho}$. This concludes the proof of (c3). Finally, observe that (c2) and (c3) have actually shown that $\forall \delta \in \kappa \exists \rho \in R. \delta < \rho \in X \subset R \subset \kappa$. Therefore (c0) holds. But $X \underset{\text{by (0*)}}{=} Y \cap \kappa \subset Y$. So $Y$ is unbounded in $\kappa$. This has proven (b0). \end{proof} \begin{proposition} \label{alpha<less>^n-1alpha(+^n-1)_iff_alpha_in_intersection} $\forall \alpha \in \tmop{Class} (n - 1)$.$\alpha <_1 \alpha (+^{n - 1}) \Longleftrightarrow \alpha <^{n - 1} \alpha (+^{n - 1})$ \\ \hspace*{79mm}$\Longleftrightarrow \alpha \in \bigcap_{t \in [\alpha, \alpha ( +^{n - 1}))} A^{n - 1} (t)$.
\end{proposition} \begin{proof} Let $\alpha \in \tmop{Class} (n - 1)$. To show $\alpha <^{n - 1} \alpha (+^{n - 1}) \Longrightarrow \alpha \in \bigcap_{t \in [\alpha, \alpha ( +^{n - 1}))} A^{n - 1} (t)$. \ \ \ \ \ \ \ {\tmstrong{(a)}} Suppose $\alpha <^{n - 1} \alpha (+^{n - 1})$. Let $t \in [\alpha, \alpha ( +^{n - 1}))$ be arbitrary. \\ Then $\alpha \leqslant^{n - 1} \eta (n - 1, \alpha, t) [g (n - 1, \alpha, \alpha)] + 1 = \eta (n - 1, \alpha, t) + 1$ by $<^{n - 1}$-connectedness. So \ $\alpha \in \{\beta \in \tmop{Class} (n - 1) |T (n - 1, \alpha, t) \cap \alpha \subset \beta \leqslant \alpha \wedge \beta \leqslant^{n - 1} \eta (n - 1, \alpha, t) [g (n - 1, \alpha, \beta)] + 1\} =$\\ $G^{n - 1} (t)\underset{\text{theorem } \ref{G^n-1(t)=A^n-1(t)_Gen_Hrchy_thm}}{=}A^{n - 1} (t)$. Since this holds for an arbitrary $t \in [\alpha, \alpha ( +^{n - 1}))$, we have shown $\alpha \in \bigcap_{t \in [\alpha, \alpha ( +^{n - 1}))} A^{n - 1} (t)$. This shows (a). To show $\alpha <^{n - 1} \alpha (+^{n - 1}) \Longleftarrow \alpha \in \bigcap_{t \in [\alpha, \alpha ( +^{n - 1}))} A^{n - 1} (t)$. \ \ \ \ \ \ \ {\tmstrong{(b)}} Suppose $\alpha \in \bigcap_{t \in [\alpha, \alpha ( +^{n - 1}))} A^{n - 1} (t)\underset{\text{theorem } \ref{G^n-1(t)=A^n-1(t)_Gen_Hrchy_thm}}{=}\bigcap_{t \in [\alpha, \alpha ( +^{n - 1}))} G^{n - 1} (t)$. Then for any \\ $t \in [\alpha, \alpha ( +^{n - 1}))$, $\alpha \leqslant^{n - 1} \eta (n - 1, \alpha, t) [g (n - 1, \alpha, \alpha)] + 1 = \eta (n - 1, \alpha, t) + 1$; thus, by (3) of GenThmIH (that is, by $\leqslant^{n - 1}$-continuity), $\alpha \leqslant^{n - 1} \alpha (+^{n - 1})$. This shows (b). To show $\alpha <_1 \alpha (+^{n - 1}) \Longrightarrow \alpha <^{n - 1} \alpha (+^{n - 1})$. \ \ \ \ \ \ \ {\tmstrong{(c)}} Suppose $\alpha <_1 \alpha (+^{n - 1})$.
Then for any $t \in [\alpha, \alpha ( +^{n - 1}))$, $\eta (n - 1, \alpha, t) + 1 \in (\alpha, \alpha ( +^{n - 1}))$ and so, by $\leqslant_1$-connectedness, $\alpha \leqslant_1 \eta (n - 1, \alpha, t) + 1$. Subsequently, by (6) of GenThmIH, $\alpha \leqslant^{n - 1} \eta (n - 1, \alpha, t) + 1$. The previous shows that $\forall t \in [\alpha, \alpha ( +^{n - 1})) . \alpha \leqslant^{n - 1} \eta (n - 1, \alpha, t) + 1$, and since the sequence $\{\eta (n - 1, \alpha, t) + 1 | t \in [\alpha, \alpha ( +^{n - 1})) \}$ is cofinal in $\alpha (+^{n - 1})$, by (3) of GenThmIH (that is, $\leqslant^{n - 1}$-continuity), $\alpha <^{n - 1} \alpha (+^{n - 1})$. This shows (c). Finally, $\alpha <_1 \alpha (+^{n - 1}) \Longleftarrow \alpha <^{n - 1} \alpha (+^{n - 1})$ clearly holds by (3) of GenThmIH. \end{proof} \begin{corollary} \label{corollary_kapa_uncount_regul_in_Class(n)}Let $\kappa$ be an uncountable regular ordinal ($\kappa \in \tmop{Class} (n - 1)$ by proposition \ref{regular_ordinal_in_Class(n-1)}). Then \begin{enumeratealpha} \item $\kappa <^{n - 1} \kappa (+^{n - 1})$ and therefore $\kappa \in \tmop{Class} (n)$. \item $\kappa \in \bigcap_{s \in [\kappa, \kappa ( +^{n - 1}))} A^{n - 1} (s)$. \end{enumeratealpha} \end{corollary} \begin{proof} {\color{orange} Left to the reader.} \end{proof} \begin{corollary} {$\mbox{}$} \begin{enumeratenumeric} \item $\tmop{Class} (n) \neq \emptyset$ \item For any $\alpha \in \tmop{Class} (n)$, $\alpha (+^n) < \infty$; that is, $\alpha (+^n)$ is an ordinal. \end{enumeratenumeric} \end{corollary} \begin{proof} {\color{orange} Left to the reader.} \end{proof} \begin{lemma} \label{sequence_csi_j_with_m(csi_j)=t[g(k,q,csi_j)]}Let $k \in [1, n)$, $q \in \tmop{Class} (k)$, $t = \eta (k, q, t) \in [q, q ( +^k))$ and $q <_1 t + 1$.
Then there is a sequence $(\xi_j)_{j \in J} \subset \tmop{Class} (k)$ such that $\xi_j \underset{\text{cof}}{{\lhook\joinrel\relbar\joinrel\rightarrow}} q$ and such that for all $j \in J$, $T (k, q, t) \cap q \subset \xi_j$ and $m (\xi_j) = t [g (k, q, \xi_j)]$. \end{lemma} \begin{proof} Let $k, q$ and $t$ be as stated. Then by (6) and (4) of GenThmIH, there exists a sequence $(l_i)_{i \in I} \subset q \cap \tmop{Class} (k)$, $l_i \underset{\text{cof}}{{\lhook\joinrel\relbar\joinrel\rightarrow}} q$, such that for all $i \in I$, \\ $T (k, q, t) \cap q \subset l_i$ and $m (l_i) \geqslant t [g (k, q, l_i)]$. \ \ \ \ (*) We have now two cases: (a). For some subsequence $(l_d)_{d \in D} \subset (l_i)_{i \in I}$ it holds that $\forall d \in D.m (l_d) = t [g (k, q, l_d)]$. Then $(l_d)_{d \in D}$ is the sequence we are looking for. (b). For every subsequence $(l_d)_{d \in D} \subset (l_i)_{i \in I}$, $\exists d \in D.m (l_d) \neq t [g (k, q, l_d)]$. Choose an arbitrary $e < q$ and let \\ $l := \min \{r \in q \cap \tmop{Class} (k) | T (k, q, t) \cap q \subset r > e \wedge m (r) > t [g (k, q, r)]\}$. Observe $l$ exists because of (b) and (*).
Then \\ $e < l <_1 t [g (k, q, l)] + 1 = \eta (k, q, t) [g (k, q, l)] + 1$\\ \ \ \ \ \ \ \ \ \ \ \ \ $\underset{\text{by (2.2.3) and (2.2.5) of GenThmIH}}{=}\eta (k, l, t [g (k, q, l)]) + 1$, which implies, by (6) and (4) of GenThmIH, the existence of a sequence $(s_u)_{u \in U}$, $s_u \underset{\text{cof}}{{\lhook\joinrel\relbar\joinrel\rightarrow}} l$ such that for all $u \in U$, \\ $T (k, q, {\color{magenta} \eta (k, q, t)}) \cap q\underset{\text{by (2.2.3) and (2.2.4) of GenThmIH}}{=}T (k, l, {\color{magenta} \eta (k, q, t) [g (k, q, l)]}) \cap l$\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $\underset{\text{by (2.2.3) and (2.2.5) of GenThmIH}}{=}T (k, l, {\color{magenta} \eta (k, l, t [g (k, q, l)])}) \cap l \subset s_u$ \ \ \ \ \ \ \ {\tmstrong{(1*)}} \\ and\\ $s_u \leqslant_1 {\color{magenta} \eta (k, l, t [g (k, q, l)])} [g (k, l, s_u)]\underset{\text{by (2.2.3) and (2.2.5) of GenThmIH}}{=}$\\ \ \ \ \ \ \ $\eta (k, q, t) [g (k, q, l)] [g (k, l, s_u)] = \eta (k, q, t) [g (k, l, s_u) \circ g (k, q, l)]\underset{\text{by (2.5.3) of GenThmIH}}{=}$\\ \ \ \ \ \ \ $\eta (k, q, t) [g (k, q, s_u)] = t [g (k, q, s_u)]$. \ \ \ \ \ \ \ {\tmstrong{(2*)}} Now, note that (1*) and (2*) assert $\forall u \in U$.$T (k, q, {\color{magenta} \eta (k, q, t)}) \cap q \subset s_u \wedge m (s_u) \geqslant t [g (k, q, s_u)]$. Therefore, since $s_u \underset{\text{cof}}{{\lhook\joinrel\relbar\joinrel\rightarrow}} l$, there is some $a \in U$ such that $e < s_a < l$, \\ $T (k, q, t) \cap q \subset s_a$ and $m (s_a) \geqslant t [g (k, q, s_a)]$; moreover, by the definition of $l$, \\ $m (s_a) \ngtr t [g (k, q, s_a)]$ and then $m (s_a) = t [g (k, q, s_a)]$. We define $\xi_e := s_a$. Then, the sequence $(\xi_e)_{e \in q}$ is the sequence we are looking for.
\end{proof} \subsection{Canonical sequence of an ordinal $e (+^i)$} {\noindent}{\tmstrong{Reminder:}} For $e \in \mathbbm{E}$, we denote by $(\omega_k (e))_{k \in \omega}$ the recursively defined sequence \\ $\omega_0 (e) := e + 1$, $\omega_{k + 1} (e) := \omega^{\omega_k (e)}$. We want now to define, for $e \in \tmop{Class} (i)$, a (canonical) sequence cofinal in $e (+^i)$. \begin{definition} \label{canonical_fundamental_sequence}(Canonical sequence of an ordinal $e (+^i)$) For $i \in [1, n)$, $e \in \tmop{Class} (i)$, and $k \in [1, \omega)$ we define the set $X_k (i, e)$ and the ordinals $x_k (i, e)$ and $\gamma_k (i, e)$ simultaneously by recursion on $([1, n), <)$ as follows: Let $i = 1$, $e \in \tmop{Class} (1)$ and $k \in [1, \omega)$. Let $X_k (1, e) := \{\omega_k (e)\}$, $x_k (1, e) := \omega_k (e) = \min X_k (1, e)$ and $\gamma_k (1, e) := m (x_k (1, e)) = m (\omega_k (e)) = \pi (\omega_k (e)) + d \pi (\omega_k (e)) = \omega_k (e) + d (\omega_k (e)) = \eta (\omega_k (e))$\\ \ \ \ \ \ \ \ \ $\underset{\text{properties of }\eta}{=}\eta (\eta (\omega_k (e))) = \eta (\gamma_k (1, e)) = \eta (1, e, \gamma_k (1, e))$. Suppose $i + 1 \in [2, n)$ and that for $i \in [1, n)$, $X_k (i, E)$, $x_k (i, E)$ and $\gamma_k (i, E)$ have already been defined for arbitrary $E \in \tmop{Class} (i)$ and $k \in [1, \omega)$. Let $e \in \tmop{Class} (i + 1)$ and $k \in [1, \omega)$. We define $X_k (i + 1, e) := \{r \in (e, e (+^{i + 1})) \cap \tmop{Class} (i) | m (r) = \gamma_k (i, r)\}$, $x_k (i + 1, e) := \min X_k (i + 1, e)$ and $\gamma_k (i + 1, e) := m (x_k (i + 1, e)) \in (x_k (i + 1, e), x_k (i + 1, e) ( +^i)) \subset (e, e ( +^{i + 1}))$. For $e \in \tmop{Class} (i)$, we call $(\gamma_k (i, e))_{k \in [1, \omega)}$ the canonical sequence of $e (+^i)$. \end{definition} To assure that our previous definition \ref{canonical_fundamental_sequence} is correct we need to show that $\min X_k (i, e)$ exists.
This is one of the reasons for our next proposition. \begin{proposition} \label{most_important_sequence_in_(e,e(+^i))}$\forall i \in [1, n) \forall e \in \tmop{Class} (i)$. \begin{enumeratenumeric} \item For any $k \in [1, \omega)$, $X_k (i, e) \neq \emptyset$ and therefore $\min X_k (i, e)$ exists. \item $(\gamma_j (i, e))_{j \in [1, \omega)} \subset (e, e ( +^i))$ \item $\forall j \in [1, \omega) . \gamma_j (i, e) = \eta (i, e, \gamma_j (i, e))$ \item $(\gamma_j (i, e))_{j \in [1, \omega)}$ is cofinal in $e (+^i)$. \item If $i = 1$, then $\forall z \in [1, \omega) .T (i, e, \gamma_z (i, e)) = \{\lambda (1, \gamma_z (1, e)) = e\}$.\\ Case $i \geqslant 2$. Then for any $z \in [1, \omega)$, - $m (x_z (i, e)) = m (x_z (i - 1, {\color{magenta} x_z (i, e)})) = \ldots = m (x_z (2, x_z (3, \ldots x_z (i - 1, {\color{magenta} x_z (i, e)}) \ldots)))$; - $T (i, e, \gamma_z (i, e)) = \{o_1 > o_2 > \ldots > o_{i - 1} > o_i = e\}$, where \ $o_1 := \lambda (1, \gamma_z (i, e))$, \\ \ \ \ \ \ $o_2 := \lambda (2, \gamma_z (i, e)), \ldots$,\\ \ \ \ \ \ $o_{i - 1} := \lambda (i - 1, \gamma_z (i, e))$,\\ \ \ \ \ \ $o_i := \lambda (i, \gamma_z (i, e)) = e$; - Moreover, \ $x_z (i, e) = o_{i - 1}$,$x_z (i - 1, {\color{magenta} x_z (i, e)}) = o_{i - 2}, \ldots$,$x_z (2, x_z (3, \ldots x_z (i - 1, {\color{magenta} x_z (i, e)}) \ldots)) = o_1$ \ and \ $o_2 = \lambda (2, o_1), o_3 = \lambda (3, o_2), \ldots, o_i = \lambda (i, o_{i - 1}), o_{i + 1} = \lambda (i + 1, o_i)$. \item $\forall j \in [1, \omega) . \forall a \in ( {\color{magenta} T (i, e, \gamma_j (i, e))} \backslash \{e\}) .m (a) = \gamma_j (i, e)$ \item $\forall \alpha \in \tmop{Class} (i) . \forall j \in [1, \omega) . \emptyset = T (i, e, \gamma_j (i, e)) \cap e \subset \alpha \wedge \gamma_j (i, e) [g (i, e, \alpha)] = \gamma_j (i, \alpha)$ \end{enumeratenumeric} \end{proposition} \begin{proof} We prove simultaneously 1, 2, 3, 4, 5, 6 and 7 by induction on $[1, n)$.
{\tmstrong{Case $i = 1$ and $e \in \tmop{Class} (1)$.}} It follows immediately from definition \ref{canonical_fundamental_sequence} (and the equalities explicitly given there) that 1, 2 and 3 hold. Moreover, it is also clear that 4. holds. Now, let $j \in [1, \omega)$ be arbitrary. Then, by the definition (see statement of theorem \ref{most_most_general_theorem}), \\ $T (1, e, \gamma_j (1, e)) = \bigcup_{E \in \tmop{Ep} (\gamma_j (1, e))} T (1, e, E) = \tmop{Ep} (\gamma_j (1, e)) = \{e = \lambda (1, \gamma_j (1, e))\}$. So 5. holds. Moreover, by the equality $T (1, e, \gamma_j (1, e)) = \{e\}$ it is clear that 6. holds too. Finally, let $\alpha \in \tmop{Class} (1)$ and $j \in [1, \omega)$ be arbitrary. Then by 5. $\emptyset = T (i, e, \gamma_j (i, e)) \cap e \subset \alpha$. Moreover, by definition of $\gamma_j (1, e)$ and the usual properties of the substitution $x \longmapsto x [e := \alpha]$, we have $\gamma_j (1, e) [g (1, e, \alpha)] = \gamma_j (1, e) [e := \alpha] = \gamma_j (1, \alpha)$, that is, 7. holds. {\tmstrong{Let $i + 1 \in [2, n)$}} and suppose the claim holds for $i$. \ \ \ \ \ \ \ {\tmstrong{(IH)}} Let $e \in \tmop{Class} (i + 1)$. {\underline{To show that 1. holds.}} Let $k \in [1, \omega)$. Since $e (+^{i + 1}) \in \tmop{Class} (i + 1) \subset \tmop{Class} (i)$, by our (IH), \\ $\eta (i, e ( +^{i + 1}), \gamma_k (i, e ( +^{i + 1}))) = \gamma_k (i, e ( +^{i + 1})) \in (e ( +^{i + 1}), e (+^{i + 1}) ( +^i))$; from this, the fact that $e (+^{i + 1}) <_1 e (+^{i + 1}) ( +^i)$ and $<_1$-connectedness, it follows that \\ $e (+^{i + 1}) <_1 \eta (i, e ( +^{i + 1}), \gamma_k (i, e ( +^{i + 1}))) + 1$.
Thus, by lemma \ref{sequence_csi_j_with_m(csi_j)=t[g(k,q,csi_j)]}, there is a sequence \\ $(\xi_j)_{j \in J} \subset \tmop{Class} (i)$ such that $\xi_j \underset{\text{cof}}{{\lhook\joinrel\relbar\joinrel\rightarrow}} e (+^{i + 1})$ and such that for all $j \in J$, \\ $T (i, e ( +^{i + 1}), \gamma_k (i, e ( +^{i + 1}))) \cap e (+^{i + 1}) \subset \xi_j$ and \\ $m (\xi_j) = \gamma_k (i, e ( +^{i + 1})) [g (i, e (+^{i + 1}), \xi_j)]\underset{\text{by 7. of our (IH)}}{=}\gamma_k (i, \xi_j)$. From the previous it follows that \\ $X_k (i + 1, e) = \{r \in (e, e (+^{i + 1})) \cap \tmop{Class} (i) | m (r) = \gamma_k (i, r)\} \neq \emptyset$. Hence 1. holds. {\underline{2. holds.}} This is clear from the definition of $(\gamma_j (i + 1, e))_{j \in [1, \omega)}$ (the fact that $X_k (i + 1, e) \neq \emptyset$ implies that $(\gamma_j (i + 1, e))_{j \in [1, \omega)}$ is well defined). {\underline{To show that $(\gamma_k (i + 1, e))_{k \in [1, \omega)}$ satisfies 3.}} Let $k \in [1, \omega)$. Since $x_k (i + 1, e) \in (e, e ( +^{i + 1})) \cap \tmop{Class} (i)$, $x_k (i + 1, e) \geqslant e (+^i)$ and so \\ $m (x_k (i + 1, e)) \geqslant e (+^i) ( +^{i - 1}) \ldots ( +^1) 2$. \ \ \ \ \ \ \ (1*) On the other hand, for any $t \in (e, x_k (i + 1, e))$ proposition \ref{alpha<less>_1_beta_in_Class(k)_then_alpha_in_Class(k+1)} implies \\ $m (t) < x_k (i + 1, e) \leqslant {\color{magenta} m (x_k (i + 1, e))}$. Moreover, notice for any $t \in [x_k (i + 1, e), {\color{magenta} m (x_k (i + 1, e))}]$, $m (t) \ngtr {\color{magenta} m (x_k (i + 1, e))}$: Assume the opposite. Then the inequalities \\ $x_k (i + 1, e) \leqslant t \leqslant {\color{magenta} m (x_k (i + 1, e))} < {\color{magenta} m (x_k (i + 1, e))} + 1 \leqslant m (t)$ imply by $\leqslant_1$-connectedness that $x_k (i + 1, e) \leqslant_1 t <_1 {\color{magenta} m (x_k (i + 1, e))} + 1$ and then, by $\leqslant_1$-transitivity, \\ $x_k (i + 1, e) <_1 {\color{magenta} m (x_k (i + 1, e))} + 1$. Contradiction.
Hence, from all this we conclude \\ $\forall t \in (e, {\color{magenta} m (x_k (i + 1, e))}] .m (t) \leqslant {\color{magenta} m (x_k (i + 1, e))}$. \ \ \ \ \ \ \ (2*) Finally, \\ $\eta (i + 1, e, \gamma_k (i + 1, e)) = \eta (i + 1, e, m (x_k (i + 1, e)))\underset{\text{by (1*)}}{=}$\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $= \max \{m (\beta) | \beta \in (e, m (x_k (i + 1, e))]\}\underset{\text{by (2*)}}{=}$ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $= m (x_k (i + 1, e)) = \gamma_k (i + 1, e)$.\\ Thus 3. holds. {\underline{To show 4., that is, $(\gamma_k (i + 1, e))_{k \in [1, \omega)}$ is cofinal in $e (+^{i + 1})$.}} First note that since $e (+^{i + 1}) {\color{magenta} (+^{i - 1})} \ldots ( +^2) ( +^1) 2 + 1 \in (e (+^{i + 1}), e (+^{i + 1}) ( +^i))$, then\\ $e (+^{i + 1}) <_1 e (+^{i + 1}) {\color{magenta} (+^{i - 1})} \ldots ( +^2) ( +^1) 2 + 1 =$\\ $\eta (i, e ( +^{i + 1}), e (+^{i + 1}) {\color{magenta} (+^{i - 1})} \ldots ( +^2) ( +^1) 2) + 1$ by $<_1$-connectedness; then, by (6) and (4) of GenThmIH, there exists a sequence of elements in $\tmop{Class} (i)$ that is cofinal in $e (+^{i + 1})$. So, to show that $(\gamma_k (i + 1, e))_{k \in [1, \omega)}$ is cofinal in $e (+^{i + 1})$ it is enough \\ to show $\forall \sigma \in (e, e ( +^{i + 1})) \cap \tmop{Class} (i) . \exists s \in [1, \omega) . \gamma_s (i + 1, e) > \sigma$. \ \ \ \ \ \ \ {\tmstrong{(b1)}} To show (b1). Let $\sigma \in \tmop{Class} (i) \cap (e, e ( +^{i + 1}))$. Then by (1.3), (1.3.1), (1.3.5), (1.3.6) and (1.3.4) of GenThmIH $f (i + 1, e) (\sigma) = \{\sigma = \sigma_1 > \ldots > \sigma_q \}$ for some $q \in [1, \omega)$, \ \ \ \ \ \ \ {\tmstrong{(c0)}}\\ where \\ $\sigma_q = \min \{d \in (e, \sigma_q] \cap \tmop{Class} (i) | m (d) [g (i, d, \sigma_q)] \geqslant m (\sigma_q)\}$, \ \ \ \ \ \ \ {\tmstrong{(c1)}}\\ $\forall l \in [1, q - 1] .
\sigma_l = \min \{d \in (\sigma_{l + 1}, \sigma_l] \cap \tmop{Class} (i) | m (d) [g (i, d, \sigma_l)] \geqslant m (\sigma_l)\}$ \ \ \ \ \ \ \ {\tmstrong{(c2)}}\\ and \\ $m (\sigma) = m (\sigma_1) \leqslant m (\sigma_2) [g (i, \sigma_2, \sigma)] \leqslant m (\sigma_3) [g (i, \sigma_3, \sigma)] \leqslant \ldots \leqslant m (\sigma_q) [g (i, \sigma_q, \sigma)]$. \ \ \ \ \ \ \ {\tmstrong{(c3)}} On the other hand, by (IH) $(\gamma_j (i, \sigma_q))_{j \in [1, \omega)}$ is cofinal in $\sigma_q (+^i)$, so there exists $z \in [1, \omega)$ such that $\gamma_z (i, \sigma_q) \in (m (\sigma_q), \sigma_q ( +^i))$. \ \ \ \ \ \ \ {\tmstrong{(c4)}}\\ But by (c1), \\ $\forall d \in (e, \sigma_q] \cap \tmop{Class} (i)$.$m (d) [g (i, d, \sigma_q)] \leqslant m (\sigma_q) < \gamma_z (i, \sigma_q)$, particularly, \\ $\forall d \in (e, \sigma_q] \cap \tmop{Class} (i)$.$m (d) [g (i, d, \sigma_q)] < \gamma_z (i, \sigma_q)$. From this and using (2.3.2) of GenThmIH and (7) of our (IH), we get \\ $\forall d \in (e, \sigma_q] \cap \tmop{Class} (i)$.\\ $m (d) = m (d) [g (i, d, \sigma_q)] [g (i, \sigma_q, d)] < \gamma_z (i, \sigma_q) [g (i, \sigma_q, d)]\underset{\text{by (7) of our (IH)}}{=}\gamma_z (i, d)$ \ \ \ \ \ \ \ {\tmstrong{(*)}} Now let $l \in [1, q - 1]$ and $d \in (\sigma_{l + 1}, \sigma_l] \cap \tmop{Class} (i)$. By (c2), $m (d) [g (i, d, \sigma_l)] \leqslant m (\sigma_l)$; this inequality, (c0) and (2.5.3) and (2.3.1) of GenThmIH imply,\\ $m (d) [g (i, d, \sigma)] = m (d) [g (i, d, \sigma_l)] [g (i, \sigma_l, \sigma)] \leqslant m (\sigma_l) [g (i, \sigma_l, \sigma)]$\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $\underset{\text{by (c3)}}{\leqslant}m (\sigma_q) [g (i, \sigma_q, \sigma)]\underset{\text{using (c4)}}{<}\gamma_z (i, \sigma_q) [g (i, \sigma_q, \sigma)]\underset{\text{(7) of our (IH)}}{=}\gamma_z (i, \sigma)$.
From this, by (2.3.2) and (2.5.3) of GenThmIH, \\ $m (d) = m (d) [g (i, d, \sigma)] [g (i, \sigma, d)] < \gamma_z (i, \sigma) [g (i, \sigma, d)]\underset{\text{(7) of our (IH)}}{=}\gamma_z (i, d)$. The previous shows $\forall l \in [1, q - 1] \forall d \in (\sigma_{l + 1}, \sigma_l] \cap \tmop{Class} (i) .m (d) < \gamma_z (i, d)$. \ \ \ \ \ \ \ {\tmstrong{(**)}} From (*) and (**) it follows that $\forall d \in (e, \sigma] \cap \tmop{Class} (i) .m (d) < \gamma_z (i, d)$, and therefore\\ $\forall d \in (e, \sigma] \cap \tmop{Class} (i) .d < \min X_z (i + 1, e) = x_z (i + 1, e) \leqslant m (x_z (i + 1, e)) = \gamma_z (i + 1, e)$. This shows (b1). Hence 4. holds. {\underline{To show that $(\gamma_k (i + 1, e))_{k \in [1, \omega)}$ satisfies 5.}} First note that for arbitrary $k, j \in [1, \omega)$ and $c \in \tmop{Class} (j + 1)$\\ $x_k (j + 1, c) = \min \{r \in (c, c (+^{j + 1})) \cap \tmop{Class} (j) | m (r) = \gamma_k (j, r)\}$. \ \ \ \ \ \ \ {\tmstrong{(J0)}}\\ So\\ $m (x_k (j + 1, c)) = \gamma_k (j, x_k (j + 1, c)) = m (x_k (j, x_k (j + 1, c)))$; \ \ \ \ \ \ \ {\tmstrong{(J1)}}\\ $x_k (j + 1, c) \in \tmop{Class} (j) \backslash \tmop{Class} (j + 1)$; \ \ \ \ \ \ \ {\tmstrong{(J2)}}\\ $\gamma_k (j + 1, c) = m (x_k (j + 1, c)) \in (x_k (j + 1, c), x_k (j + 1, c) ( +^j))$; \ \ \ \ \ \ \ {\tmstrong{(J3)}}\\ $\lambda (j, m (x_k (j + 1, c))) = \lambda (j, \gamma_k (j + 1, c)) = x_k (j + 1, c)$. \ \ \ \ \ \ \ {\tmstrong{(J4)}} Let $z \in [1, \omega)$.
We show now $\gamma_z (i + 1, e) = m (x_z (i + 1, e)) = m (x_z (i, {\color{magenta} x_z (i + 1, e)})) = \ldots =$\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $= m (x_z (2, x_z (3, \ldots x_z (i, {\color{magenta} x_z (i + 1, e)}) \ldots)))$ \ \ \ \ \ \ \ {\tmstrong{(J5)}}\\ This is easy:\\ $\gamma_z (i + 1, e) = m (x_z (i + 1, e))\underset{\text{by (J1)}}{=}m (x_z (i, x_z (i + 1, e)))\underset{\text{by (J1)}}{=}$\\ \ \ \ \ \ \ \ \ \ \ \ \ \ $= m (x_z (i - 1, x_z (i, x_z (i + 1, e))))\underset{\text{by (J1)}}{=}\ldots\underset{\text{by (J1)}}{=}$ \ \ \ \ \ \ \ \ \ $= m (x_z (2, x_z (3, \ldots x_z (i, {\color{magenta} x_z (i + 1, e)}) \ldots)))$. This shows (J5). Let's abbreviate\\ $o_1 := \lambda (1, \gamma_z (i + 1, e))$,\\ $o_2 := \lambda (2, \gamma_z (i + 1, e)), \ldots$,\\ $o_i := \lambda (i, \gamma_z (i + 1, e))$,\\ $o_{i + 1} := \lambda (i + 1, \gamma_z (i + 1, e))$. \ \ \ \ \ \ \ {\tmstrong{(d1)}} To show $o_{i + 1} = e$, $x_z (i + 1, e) = o_i$, $x_z (i, {\color{magenta} x_z (i + 1, e)}) = o_{i - 1}, \ldots$, $x_z (2, x_z (3, \ldots x_z (i, {\color{magenta} x_z (i + 1, e)}) \ldots)) = o_1$ and $o_2 = \lambda (2, o_1), o_3 = \lambda (3, o_2), \ldots, o_i = \lambda (i, o_{i - 1}), o_{i + 1} = \lambda (i + 1, o_i)$. \ \ \ \ \ \ \ {\tmstrong{(J6)}} First let's see $o_{i + 1} = e$. \ \ \ \ \ \ \ {\tmstrong{(J6.1)}} Note $\gamma_z (i + 1, e)\underset{\text{By (J3)}}{\in}(x_z (i + 1, e), x_z (i + 1, e) ( +^i))\underset{\text{By (J0)}}{\subset}(e, e ( +^{i + 1}))$. Then, since \\ $(e, e ( +^{i + 1})) \cap \tmop{Class} (i + 1) = \emptyset$, we get $o_{i + 1} = \lambda (i + 1, \gamma_z (i + 1, e)) = e$. So (J6.1) holds. Now let's show $x_z (i + 1, e) = o_i, \ldots$, $x_z (2, x_z (3, \ldots x_z (i, {\color{magenta} x_z (i + 1, e)}) \ldots)) = o_1$.
\ \ \ \ \ \ \ {\tmstrong{(J6.2)}} This is also easy:\\ $x_z (i + 1, e)\underset{\text{by (J4)}}{=}\lambda (i, \gamma_z (i + 1, e)) = o_i$,\\ $x_z (i, x_z (i + 1, e))\underset{\text{by (J4)}}{=}\lambda (i - 1, {\color{magenta} m (x_z (i, x_z (i + 1, e)))})\underset{\text{by (J5)}}{=}\lambda (i - 1, \gamma_z (i + 1, e)) = o_{i - 1}$,\\ $\ldots$\\ $x_z (2, x_z (3, \ldots x_z (i, {\color{magenta} x_z (i + 1, e)}) \ldots))\underset{\text{by (J4)}}{=}\lambda (1, m (x_z (2, x_z (3, \ldots x_z (i, {\color{magenta} x_z (i + 1, e)}) \ldots))))\underset{\text{by (J5)}}{=}$\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $= \lambda (1, \gamma_z (i + 1, e)) = o_1$. \\ So (J6.2) holds. Let's see that $o_2 = \lambda (2, o_1), o_3 = \lambda (3, o_2), \ldots, o_i = \lambda (i, o_{i - 1}), o_{i + 1} = \lambda (i + 1, o_i)$. \ \ \ \ \ \ \ {\tmstrong{(J6.3)}} Note that for any $k \in [1, i]$, $o_k \underset{\text{by (J6.2) and (J6.1)}}{=} x_z (k + 1, o_{k + 1}) \underset{\text{by (J0)}}{\in} (o_{k + 1}, o_{k + 1} ( +^{k + 1})) \cap \tmop{Class} (k)$, so $\lambda (k + 1, o_k) = o_{k + 1}$.\\ So (J6.3) holds. Hence (J6) holds because of the proofs of (J6.1), (J6.2), (J6.3).
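Schematically, (J6), together with (J0), (J3) and (J5), describes a single descending tower sitting below $\gamma_z (i + 1, e)$:
\[
e = o_{i + 1} < o_i < o_{i - 1} < \ldots < o_1 < m (o_1) = \gamma_z (i + 1, e), \qquad
o_k = x_z (k + 1, o_{k + 1}) \ \text{ and } \ o_{k + 1} = \lambda (k + 1, o_k) \ \text{ for } k \in [1, i],
\]
where each inequality $o_{k + 1} < o_k$ holds because $x_z (k + 1, o_{k + 1}) \in (o_{k + 1}, o_{k + 1} ( +^{k + 1}))$ by (J0), and $o_1 < m (o_1)$ holds by (J3) and (J5).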
To show $T (i + 1, e, \gamma_z (i + 1, e)) = \{o_1 > o_2 > \ldots > o_i > o_{i + 1} = e\}$ \ \ \ \ \ \ \ {\tmstrong{(J7)}} Since $\gamma_z (i + 1, e)\underset{\text{by (J5) and (J6)}}{=}m (o_1)$, we have \\ $T (i + 1, e, \gamma_z (i + 1, e)) = T (i + 1, e, m (o_1)) = \bigcup_{d \in \tmop{Ep} (m (o_1))} T (i + 1, e, d)\underset{\tmop{Ep} (m (o_1)) = T (1, o_1, m (o_1))}{=}$\\ $\bigcup_{d \in T (1, o_1, m (o_1))} T (i + 1, e, d)\underset{\text{by our (IH)}}{=}\bigcup_{d \in \{o_1 \}} T (i + 1, e, d) = T (i + 1, e, o_1) = \bigcup_{k \in \omega} O (k, o_1)$, where by definition \\ $E_1 = \lambda (1, m (o_1)) = o_1,E_2 = \lambda (2, E_1)\underset{\text{by (d1)}}{=}o_2, \ldots,E_{i + 1} = \lambda (i + 1, E_i)\underset{\text{by (d1)}}{=}o_{i + 1}$ and $O (0, o_1) :=\underset{\delta \in W (0, k, o_1), k = 1, \ldots, i}{\bigcup} f (k + 1, \lambda (k + 1, \delta)) (\delta) \cup \tmop{Ep} (m (\delta)) \cup \{\lambda (k + 1, \delta)\}$;\\ $W (0, k, o_1) := (e, e ( +^{i + 1})) \cap \{E_1 > E_2 \geqslant E_3 \geqslant \ldots \geqslant E_{i + 1} = e\} \cap (\tmop{Class} (k) \backslash \tmop{Class} (k + 1))$; $O (l + 1, o_1) :=\underset{\delta \in W (l, k, o_1), k = 1, \ldots, i}{\bigcup} f (k + 1, \lambda (k + 1, \delta)) (\delta) \cup \tmop{Ep} (m (\delta)) \cup \{\lambda (k + 1, \delta)\}$; $W (l, k, o_1) := (e, e ( +^{i + 1})) \cap O (l, o_1) \cap (\tmop{Class} (k) \backslash \tmop{Class} (k + 1))$. But note for any $k \in [1, i]$,\\ $W (0, k, o_1) = \{o_k \}$,\\ $\lambda (k + 1, o_k) = o_{k + 1}$,\\ $f (k + 1, \lambda (k + 1, o_k)) (o_k) = f (k + 1, o_{k + 1}) (o_k) = \{o_k \}$,\\ $\tmop{Ep} (m (o_k))\underset{\text{by (J5) and (J6)}}{=}\tmop{Ep} (m (o_1)) = \{o_1 \}$. \\ Therefore $f (k + 1, \lambda (k + 1, o_k)) (o_k) \cup \tmop{Ep} (m (o_k)) \cup \{\lambda (k + 1, o_k)\} = \{o_k, o_1, o_{k + 1} \}$.
This way $O (0, o_1) = \{o_1 > o_2 \geqslant o_3 \geqslant \ldots \geqslant o_{i + 1} = e\}$, and moreover, by exactly the same reasoning, $\forall l \in \omega .O (l + 1, o_1) = \{o_1 > o_2 \geqslant o_3 \geqslant \ldots \geqslant o_{i + 1} = e\}$. Thus, we conclude \\ $T (i + 1, e, \gamma_z (i + 1, e)) = \{o_1 > o_2 \geqslant o_3 \geqslant \ldots \geqslant o_{i + 1} = e\}$. Finally, note that actually $o_1 > o_2 > o_3 > \ldots > o_{i + 1}$ holds because of (J6) and (J2). So we have shown (J7). This concludes the proof of 5. {\underline{To show that $(\gamma_k (i + 1, e))_{k \in [1, \omega)}$ satisfies 6.}} From 5. we get $T (i + 1, e, \gamma_z (i + 1, e)) \backslash \{e\} = \{o_1 > o_2 > \ldots > o_i \}$ with $m (o_i) = \ldots = m (o_1) = \gamma_z (i + 1, e)$ (by (J5) and (J6)), so 6. holds. {\underline{To show $(\gamma_k (i + 1, e))_{k \in [1, \omega)}$ satisfies 7.}} Let $\alpha \in \tmop{Class} (i + 1)$ and $z \in [1, \omega)$. Let $o_1, \ldots, o_{i + 1}$ be as in 5. (that is, for $k \in [1, i + 1]$, $o_k := \lambda (k, {\color{magenta} \gamma_z (i + 1, e)})$). By 5., we know that $T (i + 1, e, \gamma_z (i + 1, e)) = \{o_1 > o_2 > \ldots > o_i > o_{i + 1} = e\}$. So $T (i + 1, e, \gamma_z (i + 1, e)) \cap e = \emptyset$. Now, for any $k \in [1, i + 1]$, $T (i + 1, e, o_k)\underset{\text{definition of } T (i + 1, e, o_k)}{\subset}T (i + 1, e, \gamma_z (i + 1, e))$, which means \\ $\forall k \in [1, i + 1] . \emptyset = T (i + 1, e, o_k) \cap e \subset \alpha$. The latter expression implies, by (2.2.3) of GenThmIH, that $\forall k \in [1, i + 1] . \tmop{Ep} (o_k) \subset \tmop{Dom} [g (i + 1, e, \alpha)]$. So for $k \in [1, i + 1]$, let $\tmmathbf{u_k := o_k [g (i + 1, e, \alpha)]}$. We will need the following observations (K1), (K2), (K3) and (W): Since by 5.
we know $\forall k \in [1, i] .o_{k + 1} = \lambda (k + 1, o_k)$, (2.4.6) of GenThmIH implies $\forall k \in [1, i] .u_{k + 1} = o_{k + 1} [g (i + 1, e, \alpha)] = \lambda (k + 1, o_k [g (i + 1, e, \alpha)]) = \lambda (k + 1, u_k)$. \ \ \ \ \ \ \ {\tmstrong{(K1)}} Note $o_1\underset{\text{by 5.}}{=}x_z (2, o_2) \in X_z (2, o_2) := \{r \in (o_2, o_2 (+^2)) \cap \tmop{Class} (1) | m (r) = \gamma_z (1, r)\}$. This implies $m (o_1) = \gamma_z (1, o_1)$. \ \ \ \ \ \ \ {\tmstrong{(K2)}} Moreover, observe that\\ $o_i <_1 o_{i - 1} <_1 \ldots <_1 o_1 <_1 \gamma_z (i + 1, e) = m (o_1) \underset{\text{by (K2)}}{=} \gamma_z (1, o_1) \underset{\text{by definition}}{=} m (\omega_z (o_1))$ and \\ $\forall j \in [1, i] .o_j \nleqslant_1 m (\omega_z (o_1)) + 1$ imply, by (2.4.3) of GenThmIH, that\\ $u_i <_1 \ldots <_1 u_1 <_1 m (\omega_z (o_1)) [g (i + 1, e, \alpha)] \underset{\text{by (2.4.4) of GenThmIH}}{=} m ((\omega_z (o_1)) [g (i + 1, e, \alpha)]) =$ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $= m (\omega_z (u_1))$\\ and \\ $\forall j \in [1, i] .u_j \nleqslant_1 (m (\omega_z (o_1)) + 1) [g (i + 1, e, \alpha)] \underset{\text{by (2.4.4) of GenThmIH}}{=}$\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $m ((\omega_z (o_1)) [g (i + 1, e, \alpha)]) + 1 = m (\omega_z (u_1)) + 1$.\\ From this it follows that $\forall j \in [1, i] .m (u_j) = m (\omega_z (u_1)) \underset{\text{by definition}}{=} \gamma_z (1, u_1)$. \ \ \ \ \ \ \ {\tmstrong{(K3)}} Now we show that $\forall j \in [1, i] .u_j = x_z (j + 1, u_{j + 1})$ \ \ \ \ \ \ \ {\tmstrong{(W)}} We prove (W) by a (side)induction on $([1, i], <)$. Let $j \in [1, i]$. Suppose $\forall l \in j \cap [1, i] .u_l = x_z (l + 1, u_{l + 1})$.
\ \ \ \ \ \ \ {\tmstrong{(WIH)}} Note $m (u_j)\underset{\text{by (K3)}}{=}\gamma_z (1, u_1)\underset{\text{if }j \geqslant 2}{=}m (u_{j - 1})\underset{\text{by (WIH)}}{=}m (x_z (j, u_j)) = \gamma_z (j, u_j)$. This shows that, in any case, $m (u_j) = \gamma_z (j, u_j)$ \ \ \ \ \ {\tmstrong{(M1)}}.\\ This way, \\ $u_j \underset{\text{by (M1)}}{\in} \{r \in (\lambda (j + 1, u_j), \lambda (j + 1, u_j) (+^{j + 1})) \cap \tmop{Class} (j) |m (r) = \gamma_z (j, r)\} =$\\ \ \ \ \ \ \ \ \ \ \ \ $= X_z (j + 1, \lambda (j + 1, u_j))\underset{\text{by (K1)}}{=}X_z (j + 1, u_{j + 1})$. Moreover, \ since \\ $f (j + 1, o_{j + 1}) (o_j) = \{o_j \}\underset{\text{by (2.4.5) of GenThmIH}}{\Longleftrightarrow}f (j + 1, u_{j + 1}) (u_j) = \{u_j \}$, then by (1.3.5) of GenThmIH, \\ $u_j = \min \{s \in (u_{j + 1}, u_{j + 1} (+^{j + 1})) \cap \tmop{Class} (j) | s \leqslant u_j \wedge m (s) [g (j, s, u_j)] = m (u_j)\} =$\\ \ \ $= \min \{s \in (u_{j + 1}, u_{j + 1} (+^{j + 1})) \cap \tmop{Class} (j) | s \leqslant u_j \wedge m (s) [g (j, s, u_j)] \underset{\text{by (M1)}}{=} \gamma_z (j, u_j)\} =$\\ \ \ $=$, since by (IH) 7., applied to $j \leqslant i$, $u_j, s \in \tmop{Class} (j)$, we get $T (j, u_j, \gamma_z (j, u_j)) \cap u_j = \emptyset \subset s$,\\ \ \ $= \min \{s \in (u_{j + 1}, u_{j + 1} (+^{j + 1})) \cap \tmop{Class} (j) | s \leqslant u_j \wedge m (s) = \gamma_z (j, u_j) [g (j, u_j, s)]\}$\\ \ \ $=$, by (IH) 7. applied to $j \leqslant i$, $u_j, s \in \tmop{Class} (j)$ and $z \in [1, \omega)$,\\ \ \ $= \min \{s \in (u_{j + 1}, u_{j + 1} (+^{j + 1})) \cap \tmop{Class} (j) | s \leqslant u_j \wedge m (s) = \gamma_z (j, s)\} =$\\ \ \ $= \min \{s \in (u_{j + 1}, u_{j + 1} (+^{j + 1})) \cap \tmop{Class} (j) | m (s) = \gamma_z (j, s)\} =$\\ \ \ $= \min X_z (j + 1, u_{j + 1}) = x_z (j + 1, u_{j + 1})$.\\ This shows that (W) holds. 
Finally, \\ $\gamma_z (i + 1, e) [g (i + 1, e, \alpha)] = m (x_z (i + 1, e)) [g (i + 1, e, \alpha)] \underset{\text{by 5.}}{=} m (o_i) [g (i + 1, e, \alpha)] =$\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $= m (o_i [g (i + 1, e, \alpha)]) = m (u_i) \underset{\text{by (W)}}{=} m (x_z (i + 1, u_{i + 1})) =$\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $= m (x_z (i + 1, \alpha)) = \gamma_z (i + 1, \alpha)$.\\ Since this last equality was established for arbitrary $\alpha \in \tmop{Class} (i + 1)$ and $z \in [1, \omega)$, 7. holds. \end{proof} \begin{remark} \label{Remark_Cannonical_sequences}For $i \in [1, n)$ and $e \in \tmop{Class} (i)$, it is not hard to see that the sequences \\ $(x_k (i, e))_{k \in [1, \omega)}$ and $(\gamma_k (i, e))_{k \in [1, \omega)}$ are strictly increasing. Moreover, for any $k \in [1, \omega)$, $\eta (i, e, x_k (i, e)) = m (x_k (i, e))$. This equality holds because \\ $x_k (i, e) \leqslant m (x_k (i, e))$ implies \\ $m (x_k (i, e)) \leqslant \eta (i, e, x_k (i, e)) \leqslant \eta (i, e, m (x_k (i, e)))\underset{\text{by 3. of previous proposition } \ref{most_important_sequence_in_(e,e(+^i))}}{=}m (x_k (i, e))$. \end{remark} \subsection{$\tmop{Class} (n)$ is $\kappa$-club} \begin{proposition} \label{alpha_in_Class(n)_iff_alpha_in_certain_intersection_A^n-1(s)}Let $\kappa$ be an uncountable regular ordinal and $r \in \tmop{Class} (n - 1) \cap \kappa$ arbitrary. Let $M^{n - 1} (r, \kappa) := \{q \in [\kappa, \kappa (+^{n - 1})) | T (n - 1, \kappa, q) \cap \kappa \subset r\}$. Then $\bigcap_{s \in M^{n - 1} (r, \kappa)} A^{n - 1} (s) \subset \tmop{Class} (n)$. \end{proposition} \begin{proof} Let $\alpha \in \bigcap_{s \in M^{n - 1} (r, \kappa)} A^{n - 1} (s)$. Consider $(\gamma_j (n - 1, \kappa))_{j \in [1, \omega)}$, the canonical sequence of $\kappa (+^{n - 1}) \in \tmop{Class} (n - 1)$. Then \\ $\forall j \in [1, \omega) . \gamma_j (n - 1, \kappa) \in M^{n - 1} (r, \kappa)$.
Therefore for any $j \in [1, \omega)$, \\ $\alpha \in A^{n - 1} (\gamma_j (n - 1, \kappa))\underset{\text{theorem } \ref{G^n-1(t)=A^n-1(t)_Gen_Hrchy_thm}}{=}G^{n - 1} (\gamma_j (n - 1, \kappa))$ \\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $=\{\beta \in \tmop{Class} (n - 1) | T (n - 1, \kappa, \gamma_j (n - 1, \kappa)) \cap \kappa \subset \beta \leqslant \kappa \wedge$ \\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $\beta \leqslant^{n - 1} \eta (n - 1, \kappa, \gamma_j (n - 1, \kappa)) [g (n - 1, \kappa, \beta)] + 1\}$. The previous means, for any $j \in [1, \omega)$,\\ $\alpha \leqslant^{n - 1} \eta (n - 1, \kappa, \gamma_j (n - 1, \kappa)) [g (n - 1, \kappa, \alpha)] + 1 = \gamma_j (n - 1, \kappa) [g (n - 1, \kappa, \alpha)] + 1 =$ \ \ \ \ $= \gamma_j (n - 1, \alpha) + 1$. But, by previous proposition \ref{most_important_sequence_in_(e,e(+^i))}, $(\gamma_j (n - 1, \alpha))_{j \in [1, \omega)}$ is cofinal in $\alpha (+^{n - 1})$; therefore, by $\leqslant^{n - 1}$-continuity it follows that $\alpha \leqslant^n \alpha (+^{n - 1})$. Hence $\alpha \in \tmop{Class} (n)$. \end{proof} \begin{proposition} \label{Class(n)_indeed_k-club}Let $\kappa$ be an uncountable regular ordinal. Then $\tmop{Class} (n)$ is club in $\kappa$. \end{proposition} \begin{proof} We already know that $\tmop{Class} (n)$ is closed in $\kappa$. So we only need to show that $\tmop{Class} (n)$ is unbounded in $\kappa$. Let $\beta \in \kappa$. Since we know $\tmop{Class} (n - 1)$ is club in $\kappa$, take $r, r (+^{n - 1}) \in \tmop{Class} (n - 1) \cap \kappa \neq \emptyset$. Consider $M^{n - 1} (r, \kappa) := \{q \in [\kappa, \kappa (+^{n - 1})) | T (n - 1, \kappa, q) \cap \kappa \subset r\}$. Consider $R : [r, r ( +^{n - 1})) \longrightarrow R [[r, r ( +^{n - 1}))] \subset [\kappa, \kappa ( +^{n - 1}))$, $R (t) := t [g (n - 1, r, \kappa)]$. Then $R$ is a bijection. We claim that $R [[r, r ( +^{n - 1}))] = M^{n - 1} (r, \kappa)$.
{\tmstrong{(a)}} To show $R [[r, r ( +^{n - 1}))] \subset M^{n - 1} (r, \kappa)$. \ \ \ \ \ \ \ {\tmstrong{(a1)}} Take $t \in [r, r ( +^{n - 1}))$. Then by (2.3.1) of GenThmIH, $\tmop{Ep} (t) \subset \tmop{Dom} (g (n - 1, r, \kappa))$ and then, by (2.2.4) of GenThmIH, $T (n - 1, \kappa, t [g (n - 1, r, \kappa)]) \cap \kappa = T (n - 1, r, t) \cap r \subset r$. Moreover, it is clear also from GenThmIH that $t [g (n - 1, r, \kappa)] \in [\kappa, \kappa ( +^{n - 1}))$. This shows that $R (t) = t [g (n - 1, r, \kappa)] \in M^{n - 1} (r, \kappa)$, and since this was done for $t \in [r, r (+^{n - 1}))$ arbitrary, (a1) holds. To show $R [[r, r ( +^{n - 1}))] \supset M^{n - 1} (r, \kappa)$. \ \ \ \ \ \ \ {\tmstrong{(a2)}} Let $s \in M^{n - 1} (r, \kappa)$. By (2.2.3) of GenThmIH we have that \\ $M^{n - 1} (r, \kappa) = \{t \in [\kappa, \kappa (+^{n - 1})) | \tmop{Ep} (t) \subset \tmop{Dom} g (n - 1, \kappa, r)\}$. Therefore, easily from GenThmIH we get that $s [g (n - 1, \kappa, r)] \in [r, r ( +^{n - 1}))$. But then \\ $R (s [g (n - 1, \kappa, r)]) = s [g (n - 1, \kappa, r)] [g (n - 1, r, \kappa)]\underset{\text{by (2.3.2) of GenThmIH}}{=}s$. This shows that \\ $s \in R [[r, r ( +^{n - 1}))]$, and since this was done for arbitrary $s \in M^{n - 1} (r, \kappa)$, (a2) holds. (a1) and (a2) show (a). By (a) and (2.3.2) of GenThmIH, the function $H := R^{- 1} : M^{n - 1} (r, \kappa) \longrightarrow [r, r ( +^{n - 1}))$, \\ $H (s) := s [g (n - 1, \kappa, r)]$ is a bijection. \ \ \ \ \ \ \ {\tmstrong{(b)}} On the other hand, since $r \in \tmop{Class} (n - 1) \subset \mathbbm{E} \subset [\omega, \infty)$ (because $n - 1 \geqslant 1$), there exists $\delta \in \tmop{OR}$ such that $\aleph_{\delta} = |r|$. Then $\aleph_{\delta} \leqslant r < \aleph_{\delta + 1} \leqslant \kappa$.
But $\aleph_{\delta + 1}$ is a regular uncountable ordinal (because it is a successor cardinal), and then, by (0) of GenThmIH, $\tmop{Class} (n - 1)$ is club in $\aleph_{\delta + 1}$. Hence, $\aleph_{\delta} \leqslant r < r (+^{n - 1}) < \aleph_{\delta + 1} \leqslant \kappa$ and consequently \\ $| [r, r (+^{n - 1})) | \leqslant |r ( +^{n - 1}) | = |r| < \kappa$. \ \ \ \ \ \ \ {\tmstrong{(c)}} Finally, from (c), (b), proposition \ref{A^n-1(t)_club_in_kapa} and proposition \ref{Intersection_club_classes} of {\cite{GarciaCornejo1}}, it follows that the set\\ $\bigcap_{s \in M^{n - 1} (r, \kappa)} A^{n - 1} (s)$ is club in $\kappa$. So there exists $\gamma \in \bigcap_{s \in M^{n - 1} (r, \kappa)} A^{n - 1} (s)$ with $\gamma > \beta$. But by previous proposition \ref{alpha_in_Class(n)_iff_alpha_in_certain_intersection_A^n-1(s)}, $\gamma \in \tmop{Class} (n)$. Since the previous was done for an arbitrary $\beta \in \kappa$, we have shown that $\tmop{Class} (n)$ is unbounded in $\kappa$. \end{proof} \nocite{Bachmann} \nocite{Bridge} \nocite{Buchholz1} \nocite{Buchholz2} \nocite{Buchholz3} \nocite{Buchholz4} \nocite{BuchholzSch{"u}tte1} \nocite{Carlson1} \nocite{Carlson2} \nocite{Pohlers1} \nocite{Pohlers2} \nocite{Rathjen1} \nocite{Sch{"u}tte2} \nocite{Sch{"u}tteSimpson} \nocite{Schwichtenberg1} \nocite{Setzer} \nocite{Wilken1} \nocite{Wilken2} \nocite{Wilken3} \nocite{GarciaCornejo0} \nocite{GarciaCornejo1} \end{document}
\begin{document} \begin{abstract} In 1994, Martin Gardner posed a set of questions concerning the dissection of a square or an equilateral triangle in three similar parts. Since then, Gardner's questions have been generalized and some of them have been solved. In the present paper, we solve more of his questions and treat them in a much more general context. Let $D\subset \mathbb{R}^d$ be a given set and let $f_1,\ldots,f_k$ be injective continuous mappings. Does there exist a set $X$ such that $D = X \cup f_1(X) \cup \ldots \cup f_k(X)$ holds with a non-overlapping union? We prove that such a set $X$ exists for certain choices of $D$ and $\{f_1,\ldots,f_k\}$. The solutions $X$ often turn out to be attractors of iterated function systems with condensation in the sense of Barnsley. Coming back to Gardner's setting, we use our theory to prove that an equilateral triangle can be dissected in three similar copies whose areas have ratio $1:1:a$ for $a \ge (3+\sqrt{5})/2$. \end{abstract} \title{Similar dissection of sets} \begin{section}{Introduction} In the present paper, we deal with the dissection of a given set $D$ into finitely many parts which are similar to each other. Before we establish the fairly general setting of the present paper, we give a brief outline of the existing results on this topic. In 1994, Martin Gardner~\cite{Gardner:94} (see also \cite[Chapter~16]{Gardner:01}) asked a set of questions concerning the dissection of a square as well as an equilateral triangle in three similar parts. The existence of such a dissection is easy to verify if all parts are congruent to each other. In the case of the square, we get three congruent rectangles. Generalizing a result of Stewart and Wormstein~\cite{Stewart-Wormstein:92}, Maltby~\cite{Maltby:94} proved that this is the only dissection of a square in three congruent pieces (see also \cite{Liping-Yuqin-Ren:02}, where an analogous question is settled for a parallelogram).
\begin{figure} \caption{Karl Scherer's dissection of an equilateral triangle in three similar parts, just two of which are congruent.} \label{ShigekiPolygon} \end{figure} Finding a solution to Gardner's set of problems becomes more tricky if one requires that at least one of the parts is not congruent to the other ones. A~nice solution to the problem of dissecting an equilateral triangle in three parts, just two of which are congruent, was given by Karl Scherer (see \cite[p.~123]{Gardner:01}). It is depicted in Figure~\ref{ShigekiPolygon}. Here, an equilateral triangle with vertices $(0,0), (1,0), (1/2,\sqrt{3}/2)$ is dissected into three pieces $X, f_1(X), f_2(X)$, where $f_1, f_2$ are two similarities with contractive ratios equal to $1/2$ and $X$ is the polygon with the consecutive vertices given by $(1/3,0), (1,0), (1/2,\sqrt{3}/2), (1/4,\sqrt{3}/4), (7/12,\sqrt{3}/4)$. Scherer found nice solutions also for the case of dissecting a square with just two congruent parts. Also, the dissection of a square as well as an equilateral triangle in three non-congruent pieces was done by him (all these dissections are depicted in \cite[Chapter~16]{Gardner:01}). Chun, Liu and van Vliet~\cite{Chun-Liu-Vliet:96} studied a more general problem. Indeed, let $m=a_1+\cdots+a_n$ be an integer composition of~$m$. The question is to dissect a square in $m$ similar pieces so that there are $a_1$ pieces of largest size, $a_2$ pieces of second-largest size and so on. They prove that such a dissection is possible if and only if the composition is not of the form $m=(m-1)+1$. In the present paper, we are going to generalize these questions considerably. A first stage of generalization is contained in the following question, which will be solved partially in the subsequent sections and which will be used as a paradigm for our general theory. \begin{question} Can we dissect an equilateral triangle into three pieces of the same shape with area ratio $1:1:a$ for each $a>0$? 
\end{question} Contrary to the results quoted above, we want to obtain solutions to dissection problems in similar parts whose similarity ratios are prescribed. Indeed, using our general framework, we will be able to construct a dissection of an equilateral triangle in three pieces of area ratio $1:1:a$ with $a \in \{1\} \cup \big[\frac{3+\sqrt{5}}{2}, \infty\big)$. Moreover, we will not restrict ourselves to the equilateral triangle but also consider arbitrary compact subsets of~$\mathbb{R}^d$. Interestingly, in our studies we will meet the number ``high phi'' which is defined as the positive root of $x^3-2x^2+x-1$ (see \cite[p.~124]{Gardner:01}) and which already played a role in Scherer's original problems. We mention that ``high phi'' is the square of the smallest Pisot number. Note that the problem of finding a dissection with area ratios $1:1:a$ for arbitrary $a>0$ is trivial if we do not fix the set $D$ which we want to dissect. For instance, as illustrated in Figure~\ref{rectangle}, for each $r>0$, we can find a rectangle that admits an obvious dissection into three parts with area ratio $1:1:r^{-2}$. Thus, throughout the present paper we are interested in finding dissections of a fixed set $D \subset \mathbb{R}^d$ in similar parts with prescribed ratios. \begin{figure} \caption{For any given $r>0$, we can find a rectangle that can be dissected into three similar rectangles (whose side lengths have ratio $r:1$) with area ratios $1:1:r^{-2}$.} \label{rectangle} \end{figure} We now set up a general framework that contains the problems discussed above as special cases. In what follows, $\mu_d$ will denote the $d$-dimensional Lebesgue measure. \begin{definition} Let $D \subset \mathbb{R}^d$ be a compact set with $\overline{D^\circ} = D$ and $\mathcal{F} := \{f_1,f_2,\dots, f_k\}$ a finite family of injective mappings from $\mathbb{R}^d$ to itself.
We say that $\mathcal{F}$ admits a \emph{dissection of $D$} if there exists a compact set $X \subset \mathbb{R}^d$ with $\overline{X^\circ} = X$ such that \[ D = X \cup f_1(X) \cup f_2(X) \cup \ldots \cup f_k(X), \] where $\mu_d(f_i(X) \cap f_j(X)) = 0$ for all distinct $i,j \in \{1,\ldots,k\}$ and $\mu_d(X \cap f_i(X)) = 0$ for each $i \in \{1,\ldots,k\}$. We call $X$ the \emph{generator} of the dissection. \end{definition} The difficulty of constructing a dissection of $D$ for a given family $\mathcal{F}$ depends on the properties of $\mathcal{F}$ and $D$. Actually, one of our main aims is to discuss the existence and the uniqueness of the compact set $X$ so that $D = X \cup f_1(X) \cup \cdots \cup f_k(X)$ is a dissection of~$D$. The treatment of the following classes turns out to be easier than the general case. \begin{definition} Let $D \subset \mathbb{R}^d$ be a compact set with $\overline{D^\circ} = D$ and $\mathcal{F} := \{f_1,f_2,\dots, f_k\}$ a finite family of injective mappings from $\mathbb{R}^d$ to itself. \begin{itemize} \item $\mathcal{F}$ is called an \emph{inside family} (with respect to~$D$) if $f_i(D) \subset D$ for each $i \in \{1,\ldots,k\}$. \item $\mathcal{F}$ is called a \emph{non-overlapping family} (with respect to~$D$) if $\mu_d\big(f_i(D) \cap f_j(D)\big) = 0$ holds for each $i,j \in \{1,\ldots,k\}$ with $i \neq j$. \end{itemize} \end{definition} If all the functions in $\mathcal{F}$ are contractions, then $\mathcal{F}$ can be regarded as an \emph{iterated function system} (\emph{IFS} for short) in the sense of Hutchinson~\cite{Hutchinson:81}. In this case, there exists a unique non-empty compact set $K \subset \mathbb{R}^d$, called the \emph{attractor} of the IFS $\mathcal{F}$, satisfying \[ K = \bigcup_{i=1}^k f_i(K). \] Setting \[ \Phi(X)= \bigcup_{i=1}^k f_i(X), \] this can be written as $K = \Phi(K)$. A variant of IFS is the so-called \emph{IFS with condensation} ({\it cf.} Barnsley~\cite{Barnsley:93}).
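The attractor equation $K = \Phi(K)$ can be visualized numerically. The following sketch (our illustration, not taken from the paper) approximates the attractor of the three Sierpinski-type similarities $f_i(x) = (x+v_i)/2$, where the $v_i$ are the vertices of the equilateral triangle used throughout this paper, by the classical ``chaos game'': iterating a randomly chosen map at each step produces an orbit that accumulates on~$K$.

```python
import numpy as np

# Illustrative sketch (not from the paper): approximate the attractor
# K = f_1(K) ∪ ... ∪ f_k(K) of an IFS of contractions by the classical
# "chaos game".  The maps below are the three Sierpinski-triangle
# similarities of ratio 1/2, chosen here purely as an example.
def chaos_game(maps, n_points=20000, seed=0):
    rng = np.random.default_rng(seed)
    x = np.zeros(2)
    pts = []
    for i in range(n_points):
        x = maps[rng.integers(len(maps))](x)
        if i > 100:  # discard a short transient so the orbit is close to K
            pts.append(x.copy())
    return np.array(pts)

verts = [np.array([0.0, 0.0]), np.array([1.0, 0.0]),
         np.array([0.5, np.sqrt(3) / 2])]
maps = [lambda x, v=v: 0.5 * (x + v) for v in verts]  # f_i(x) = (x + v_i)/2

pts = chaos_game(maps)
# The triangle D is invariant under each f_i, so every iterate stays in D.
assert np.all(pts[:, 1] >= -1e-9) and np.all(pts[:, 1] <= np.sqrt(3) / 2 + 1e-9)
```

Plotting `pts` reproduces the Sierpinski gasket; replacing the three maps by any other contracting family approximates its attractor in the same way.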
\begin{definition}[IFS with condensation] Let $\mathcal{F} = \{f_1,\ldots, f_k\}$ be a family of contractions in $\mathbb{R}^d$ and $A \subset \mathbb{R}^d$ a nonempty compact set. Then the pair $(\mathcal{F},A)$ is called an \emph{IFS with condensation~$A$}. The unique non-empty compact set $K$ satisfying the set equation \[ K= A \cup f_1(K) \cup \ldots \cup f_k(K) = A \cup \Phi(K) \] is called the \emph{attractor} of the IFS $\mathcal{F}$ with condensation~$A$. \end{definition} The unique existence of $K$ is proved by a standard fixed point argument. It is given by \[ K = A \cup \Phi(A) \cup \Phi^2(A) \cup \ldots \] In some cases, the solution $X$ to our dissection problem will be an attractor of an IFS related to $\mathcal{F}$ with a certain condensation depending on the set~$D$. \end{section} \begin{section}{Non-overlapping inside families} We start with the easiest case, non-overlapping inside families. We can construct a dissection for these families provided that $\mathcal{F}$ consists of contractions and $D$ is compact. The main result of this section, Theorem~\ref{IFS}, will be used in subsequent sections in order to settle more complicated cases. For the proof of Theorem~\ref{IFS}, we need the following consequence of the invariance of domains. \begin{lemma}\label{Brouwer} Let $f:\, \mathbb{R}^d \to \mathbb{R}^d$ be an injective contraction and $X \subset \mathbb{R}^d$ a compact set. Then $\partial f(X) = f(\partial X)$. \end{lemma} \begin{proof} By the invariance of domains (see {\it e.g.} \cite[Theorem~2B.3]{Hatcher:02}), the mapping $f$ is a homeomorphism. This implies the result. \end{proof} \begin{theorem}\label{IFS} Let $D \subset \mathbb{R}^d$ be a compact set with $D = \overline{D^\circ}$ and $\mu_d(\partial D) = 0$. Let $\mathcal{F} := \{f_1,\ldots,f_k\}$ be an IFS on~$\mathbb{R}^d$, whose attractor is denoted by~$E$. Suppose that $\mathcal{F}$ is a non-overlapping inside family.
Then $\mathcal{F}$ admits a dissection of $D$ if and only if $\mu_d(E) = 0$. Moreover, the generator of the dissection is unique. \end{theorem} \begin{proof} Assume that $\mathcal{F}$ admits a dissection of~$D$ generated by $X$. Then we have $X^\circ \cap \Phi(X^\circ) = \emptyset$, thus the non-overlapping inside property implies that $\Phi(X^\circ) \cap \Phi^2(X^\circ) = \emptyset$, which yields $\Phi^2(X^\circ) \subset D^\circ \setminus \Phi(X^\circ)$. Since $\overline{X^\circ} = X$ and $X \cup \Phi(X) = D$, we obtain that $\Phi^2(X) \subset X$. By induction, we see that \begin{equation} \label{Phi2nX} \Phi^{2n}(X) \subset X\quad \mbox{for all}\ n\ge0. \end{equation} Set $Y := \overline{D \setminus \Phi(D)}$. Then we have $Y \subset X$ and, by (\ref{Phi2nX}), \begin{equation} \label{YX} Y \cup \Phi^2(Y) \cup \Phi^4(Y) \cup \ldots \subset X. \end{equation} Hutchinson's classical theory on IFS (see~\cite{Hutchinson:81}) implies that $(\Phi^{2n}(Y))_{n\ge0}$ converges to $E$ in Hausdorff metric. Since $X$ is closed, we obtain \begin{equation} \label{EX} E \subset X. \end{equation} By definition, we have $E = \Phi(E)$, implying that $E \subset X \cap \Phi(X)$ and, hence, $\mu_d(E) \le \sum_{i=1}^k \mu_d\big(X\cap f_i(X)\big) = 0$. Therefore, $\mathcal{F}$ can admit a dissection of $D$ only if $\mu_d(E) = 0$. \begin{figure} \caption{The attractor $E$ of the IFS with an inside non-overlapping family $\{f_1,f_2\}$.} \label{approximation} \end{figure} Assume now that $\mu_d(E)=0$ and consider the set \[ Z := E \cup Y \cup \Phi^2(Y) \cup \Phi^4(Y) \cup \ldots \] with $Y$ defined as above. Note that \begin{equation} \label{Zint} Z = \overline{Z^\circ} \end{equation} because $\Phi^{2n}(Y) = \overline{\Phi^{2n}(Y)^\circ}$ (by Lemma~\ref{Brouwer}) and $E$ is the Hausdorff limit of the sequence of sets $(\Phi^{2n}(Y))_{n\ge0}$. By the non-overlapping condition, we have \begin{equation} \label{fijZ} \mu_d\big(f_i(Z) \cap f_j(Z)\big) = 0\quad \mbox{for}\ i \ne j.
\end{equation} Next, we shall prove that \begin{equation} \label{muY} \mu_d\big(\Phi^n(Y) \cap \Phi^m(Y)\big) = 0\quad \mbox{for}\ n \neq m. \end{equation} See Figure~\ref{approximation} for an illustration of $Y, \Phi(Y), \Phi^2(Y), \Phi^3(Y), \Phi^4(Y)$ and~$E$. As $\mu_d(\partial D) = 0$, \[ \mu_d\big(\Phi^n(Y) \cap \Phi^m(Y)\big) = \mu_d\big(\Phi^n(D \setminus \Phi(D)) \cap \Phi^m(D \setminus \Phi(D))\big). \] By the non-overlapping condition, we obtain \begin{equation} \label{nmY} \mu_d\big(\Phi^n(Y) \cap \Phi^m(Y)\big) = \mu_d\Big(\big(\Phi^n(D) \setminus \Phi^{n+1}(D)\big) \cap \big(\Phi^m(D) \setminus \Phi^{m+1}(D)\big)\Big). \end{equation} Since $\Phi(D^\circ) \subset D^\circ$ by the inside condition, we know that $\Phi^{n+1}(D^\circ) \subset \Phi^{n}(D^\circ)$, thus the sets $\Phi^{n}(D^\circ) \setminus \Phi^{n+1}(D^\circ)$ are pairwise disjoint, hence the right hand side of (\ref{nmY}) is 0, which yields (\ref{muY}). Thus, as $\mu_d(E) = 0$ by assumption, we get \begin{equation} \label{ZPhiZ} \mu_d\big(Z \cap \Phi(Z)\big) \le \mu_d(E) + \mu_d\big(\Phi(E)\big) = 0. \end{equation} Moreover, since $Y \cup \Phi(Y) \cup \ldots \cup \Phi^n(Y) = \overline{D \setminus \Phi^{n+1}(D)}$ tends to $\overline{D \setminus E}$ in Hausdorff metric, we get \begin{equation} \label{equalD} Z \cup \Phi(Z) = E \cup Y \cup \Phi(Y) \cup \Phi^2(Y) \cup \ldots = D. \end{equation} Combining (\ref{Zint}), (\ref{fijZ}), (\ref{ZPhiZ}) and (\ref{equalD}), we conclude that $\mathcal{F}$ admits a dissection of~$D$. Two examples for dissections originating from non-overlapping inside families are given in Figure~\ref{approximation2} (see also Example~\ref{exa4}). \begin{figure} \caption{The dissections discussed in Example~\ref{exa4}.} \label{approximation2} \end{figure} To prove the uniqueness of the generator of a dissection, assume that $X$ generates a dissection of $D$, which is different from~$Z$. Then (\ref{YX}) and (\ref{EX}) imply that $Z \subset X$.
As $Z = \overline{Z^\circ}$ and $X = \overline{X^\circ}$, we obtain $\mu_d(X \setminus Z) > 0$. By the dissection property of~$Z$, we have $X \setminus Z \subset \Phi(Z) \subset \Phi(X)$, thus $\mu_d\big(X \cap \Phi(X)\big) \ge \mu_d(X \setminus Z) > 0$, which contradicts the dissection property of~$X$. \end{proof} If all the $f_i$ are similarities, the condition $\mu_d(E) = 0$ can be checked easily. \begin{corollary} \label{simcor} Let $D \subset \mathbb{R}^d$ be a compact set with $D = \overline{D^\circ}$ and $\mu_d(\partial D) = 0$. Let $\mathcal{F} :=\{f_1,\ldots,f_k\}$ be an IFS on~$\mathbb{R}^d$, where every $f_i$ is a similarity. Suppose that $\mathcal{F}$ is a non-overlapping inside family and $\Phi(D) \neq D$. Then $\mathcal{F}$ admits a unique dissection of~$D$. \end{corollary} \begin{proof} By Theorem~\ref{IFS}, we only have to show that $\mu_d(E) = 0$. Observe that $\Phi(D) \neq D$ implies that \[ \mu_d(D) > \mu_d\big(\Phi(D)\big) = \sum_{i=1}^k r_i^d \mu_d(D), \] where $r_i$ is the contraction ratio of $f_i$ for each $i \in \{1,\ldots,k\}$, thus $\sum_{i=1}^k r_i^d < 1$. Since $E = \Phi(E)$, we have \[ \mu_d(E) = \mu_d\big(\Phi(E)\big) = \sum_{i=1}^k r_i^d \mu_d(E), \] which implies $\mu_d(E) = 0$. \end{proof} From the definition of $Z$ in the proof of Theorem~\ref{IFS}, we obtain the following description of the dissection in terms of an IFS with condensation. \begin{corollary} Let $D$ and $\mathcal{F}$ be given as in Theorem~\ref{IFS}. Let $E$ be the attractor of the IFS~$\mathcal{F}$ with $\mu_d(E) = 0$. Then the unique dissection of $D$ with respect to $\mathcal{F}$ is given by the unique solution of the IFS with condensation $\big(\{f \circ g:\, f,g \in \mathcal{F}\}, \overline{D \setminus \Phi(D)} \cup E\big)$. \end{corollary} In the following, we discuss some examples for Theorem~\ref{IFS}. \begin{corollary} Subdividing a star body by the ratio $1:a$ is possible for any $a>0$.
\end{corollary} \begin{proof} Let $D \subset \mathbb{R}^2$ be a star body and suppose the origin is the center of this star body, {\it i.e.}, any segment connecting the origin and a point in $D$ is entirely in $D$. Take $\mathcal{F} = \{f_1\}$ with $f_1(x) = x/\sqrt{a}$. Then $D$ and $\mathcal{F}$ meet the conditions of Theorem~\ref{IFS}. This proves the corollary. \end{proof} Figure~\ref{f1} shows an example of a subdivision of the equilateral triangle with area ratio $1:2$. \begin{figure} \caption{A dissection of the triangle with area ratio $1:2$. The dissection is done by the IFS $\{f_1\}$.} \label{f1} \end{figure} \begin{example}\label{exa4} We want to dissect the equilateral triangle $D = \triangle\big((0,0), (1,0), \big(\frac{1}{2}, \frac{\sqrt{3}}{2}\big)\big)$ with the IFS $\{f_1,f_2\}$, where \[ f_1(x,y) = r R\Big(\frac{2\pi}{3}\Big) (x,y) + (r,0)\quad \hbox{and}\quad f_2(x,y) = r R\Big(\frac{4\pi}{3}\Big) (x,y) + \Big(1-\frac{r}{2}, \frac{r\sqrt{3}}{2}\Big), \] with $r \in (0,1/2]$ and $R(\alpha)$ being the counterclockwise rotation with angle $\alpha$ around the origin ({\it cf.} Figure~\ref{approximation}). It is easy to see that $\{f_1,f_2\}$ is an inside non-overlapping family; hence, in view of Corollary~\ref{simcor}, it provides a dissection of $D$. Figure~\ref{approximation2} shows the dissections for the choices $r=9/20$ and $r = 1/2$. \end{example} The mappings defined in Example~\ref{exa4} show that the equilateral triangle can be dissected in similar parts with area ratios $1:1:a$ for each $a \ge 4$. Figure~\ref{approximation2} suggests that it is possible to go beyond this bound. Indeed, we will establish dissections coming from families where the inside as well as the non-overlapping condition will be violated. An application will be the construction of dissections of the equilateral triangle with area ratio $1:1:a$ for each $a \ge (3+\sqrt{5})/2$.
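The measure bookkeeping behind the bound $a \ge 4$ can be checked mechanically: a planar similarity of ratio $r$ scales Lebesgue measure by $r^2$, so the three pieces $f_1(X), f_2(X), X$ of Example~\ref{exa4} have area ratio $1:1:1/r^2$, and $r \in (0,1/2]$ gives exactly $a \ge 4$. A minimal sketch (our illustration, not part of the paper):

```python
# Sketch (our illustration) of the measure bookkeeping in Example exa4:
# the dissection pieces are f1(X), f2(X), X, and a similarity of ratio r
# scales planar Lebesgue measure by r**2, so the area ratio is 1 : 1 : 1/r**2.
def area_ratio(r):
    assert 0 < r <= 0.5, "Example exa4 requires r in (0, 1/2]"
    return 1.0 / r**2

assert area_ratio(0.5) == 4.0          # boundary case: ratio 1:1:4
assert area_ratio(0.25) == 16.0
# Corollary simcor applies: the sum of r_i^d is 2*r**2 <= 1/2 < 1, so the
# attractor E is a Lebesgue null set and the dissection exists and is unique.
assert all(2 * r**2 < 1 for r in (0.1, 0.3, 0.5))
```

As $r$ ranges over $(0,1/2]$, the value $1/r^2$ covers exactly $[4,\infty)$, which is the family of ratios obtained from this construction.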
\end{section} \begin{section}{A general dissection result and its consequences} In this section we will give a criterion which enables us to construct dissections of $D$ with respect to (not necessarily inside and non-overlapping) families~$\mathcal{F}$. \begin{theorem} \label{main} Let $D \subset \mathbb{R}^d$ be a compact set with $D = \overline{D^\circ}$ and $\mu_d(\partial D) = 0$. Let $\mathcal{F} :=\{f_1,\ldots,f_k\}$ be an IFS on~$\mathbb{R}^d$, whose attractor $E$ satisfies $\mu_d(E) = 0$. Suppose that there exists some $Y \subset D$ satisfying the following conditions. \begin{itemize} \item[(1)] $\overline{Y^\circ} = Y$, $\mu_d(\partial Y) = 0$, \item[(2)] $Y, f_1(Y), \ldots, f_k(Y)$ are subsets of $D$ which are mutually disjoint in measure, \item[(3)] $Y, f_1\big(D \setminus \Phi(Y)\big), \ldots, f_k\big(D \setminus \Phi(Y)\big)$ are subsets of $D$ which are mutually disjoint in measure. \end{itemize} Then $D$ admits a dissection with respect to~$\mathcal{F}$. \end{theorem} \begin{proof} Let $C = \overline{D \setminus \big(Y \cup \Phi(Y)\big)}$. Obviously, the set $C$ is compact and equal to the closure of its interior. Moreover, we have $\mu_d(\partial C)=0$. We only have to show that $\mathcal{F}$ is an inside non-overlapping family for~$C$. By Theorem~\ref{IFS}, this implies that $\mathcal{F}$ admits a unique dissection of $C$ generated by $X$. Together with (2), this immediately implies that $X \cup Y$ generates a dissection of $D$ with respect to~$\mathcal{F}$. We first prove that $\mathcal{F}$ is an inside family for $C$, {\it i.e.}, that $f_i(C) \subset C$. By (3), we have \begin{equation}\label{new1} \mu_d\big(Y \cap f_i\big(D\setminus \Phi(Y)\big)\big) = 0. \end{equation} Moreover, since by (2) we have $Y \subset D\setminus\Phi(Y)$ up to a set of measure zero, (3) implies that \begin{equation}\label{new2} \mu_d\big(f_j(Y) \cap f_i\big(D \setminus \Phi(Y)\big)\big) = 0 \quad (i\not=j).
\end{equation} By the injectivity of $f_i$ we have \begin{equation}\label{new3} \mu_d\big(f_i(Y) \cap f_i(D \setminus Y)\big) = 0. \end{equation} Combining \eqref{new1}, \eqref{new2} and \eqref{new3} we arrive at \begin{equation}\label{new4} \mu_d\Big(\big(Y \cup \Phi(Y)\big) \cap f_i\big(D \setminus \big(Y \cup \Phi(Y)\big)\big)\Big) = 0. \end{equation} Together with (1) and the fact that $\mu_d(\partial D)=0$ equation \eqref{new4} yields \[ \mu_d\Big(\big(Y \cup \Phi(Y)\big) \cap f_i(C)\Big) = 0. \] Applying (3) again, we conclude that $f_i(C) \subset C$, thus $\mathcal{F}$ is an inside family for~$C$. The non-overlapping property of $\mathcal{F}$ for $C$ follows from (3) as well. This proves the theorem. \end{proof} \begin{remark} \label{3dash} In view of Condition~(2), Condition~(3) can be rewritten as \begin{itemize} \item[(3')] $f_1(C), \ldots, f_k(C)$ are subsets of $C$ which are mutually disjoint in measure. \end{itemize} Here, $C = \overline{D \setminus \big(Y \cup \Phi(Y)\big)}$ as in the proof of Theorem~\ref{main}. \end{remark} \begin{remark} If $\mathcal{F}$ is a non-overlapping inside family, then we can choose $Y = \emptyset$ in Theorem~\ref{main}. \end{remark} \begin{corollary}\label{outsidecorollary} Let $D \subset \mathbb{R}^d$ be a compact set with $D = \overline{D^\circ}$ and $\mu_d(\partial D) = 0$. Let $\mathcal{F} :=\{f_1,\ldots,f_k\}$ be an IFS on~$\mathbb{R}^d$, whose attractor $E$ satisfies $\mu_d(E) = 0$. If $\mathcal{F}$ is a non-overlapping family, \begin{itemize} \item[(i)] $\Phi\big(D \setminus \Phi(D)\big) \subset D$ and \item[(ii)] $\Phi\big(D \cap \Phi^2(D)\big) \subset D$, \end{itemize} then $D$ admits a dissection with respect to~$\mathcal{F}$. \end{corollary} \begin{proof} We show that $Y := \overline{D \setminus \Phi(D)}$ satisfies Conditions (1)--(3) of Theorem~\ref{main}. Since $D = \overline{D^\circ}$ and $\mu_d(\partial D) = 0$, the same properties hold for~$Y$, which implies Condition~(1). 
Condition~(2) is an immediate consequence of (i), the non-overlapping property and the fact that \[ \mu_d\big(Y \cap \Phi(Y)\big) \le \mu_d\big(Y \cap \Phi(D)\big)=0. \] To show Condition~(3), note that \begin{equation} \label{DY} D \setminus \Phi(Y) = D \setminus \Phi\big(\overline{D \setminus \Phi(D)}\big) \subset D \setminus \Phi\big(D \setminus \Phi(D)\big) \subset D \setminus \big(\Phi(D) \setminus \Phi^2(D)\big). \end{equation} Since \begin{equation} \label{DDD2} D \setminus \big(\Phi(D) \setminus \Phi^2(D)\big) = \big(D \setminus \Phi(D)\big) \cup \big(D \cap \Phi^2(D)\big), \end{equation} using (\ref{DY}), (\ref{DDD2}), (i) and (ii), we obtain that \[ \Phi\big(D \setminus \Phi(Y)\big) \subset \Phi\big(D \setminus \Phi(D)\big) \cup \Phi\big(D \cap \Phi^2(D)\big) \subset D. \] Together with the non-overlapping property and the fact that \[ \mu_d\big(Y \cap \Phi\big(D\setminus \Phi(Y)\big)\big) \le \mu_d\big(Y \cap \Phi(D)\big) = 0, \] this implies Condition~(3). \end{proof} \begin{remark} For each positive integer $n$, we can replace Conditions (i) and (ii) in Corollary~\ref{outsidecorollary} by \begin{itemize} \item[(i)] $\Phi\big(D \cap \Phi^{2k}\big(D \setminus \Phi(D)\big)\big) \subset D$ for all $0 \le k < n$ and \item[(ii)] $\Phi\big(D \cap \Phi^{2n}(D)\big) \subset D$. \end{itemize} To prove this, set \[ Y := \overline{D\cap \bigcup_{0\le k<n} \Phi^{2k}\big(D \setminus \Phi(D)\big)}. \] Again, we have to show that Conditions (1)--(3) of Theorem~\ref{main} are fulfilled. Conditions~(1) and (2) are proved as for Corollary~\ref{outsidecorollary}. 
Condition~(3) follows now from \begin{align*} D \setminus \Phi(Y) & \subset D \setminus \bigcup_{0\le k<n} \Phi^{2k+1}\big(D \setminus \Phi(D)\big) \\ & \subset D \setminus \bigcup_{0\le k<n} \Big(\Phi^{2k+1}(D) \setminus \Phi^{2k+2}(D)\Big) \\ & \subset D \cap \bigg(\bigcup_{0\le k<n} \Big(\Phi^{2k}(D) \setminus \Phi^{2k+1}(D)\Big) \cup \Phi^{2n}(D)\bigg) \\ & = \bigcup_{0\le k<n} \Big(D \cap \Phi^{2k}\big(D \setminus \Phi(D)\big)\Big) \cup \Big(D \cap \Phi^{2n}(D)\Big) \end{align*} by the same reasoning as in the proof of Corollary~\ref{outsidecorollary}. \end{remark} \end{section} \begin{section}{Examples for general dissections} The following example shows that not every overlapping family yields a dissection. We do not yet have a satisfactory criterion for the existence of a dissection; see Section~\ref{problems}. \begin{example} \label{exanodis} Let $D = \triangle\big((0,0), (1,0), \big(\frac{1}{2}, \frac{\sqrt{3}}{2}\big)\big)$ and let the IFS $\{f_1,f_2\}$ be given by \[ f_1(x,y) = r R\Big(\frac{4\pi}{3}\Big) (x,y) + \Big(\frac{r}{2}, \frac{r\sqrt{3}}{2}\Big),\qquad f_2(x,y) = r R\Big(\frac{2\pi}{3}\Big) (x,y) + (1,0) \] with $r>1/2$. Then we have $\mu_2\big(f_1(Y) \cap f_2(Y)\big) > 0$ for $Y := \overline{D \setminus \Phi(D)}$, see Figure~\ref{fignodis}. Since every dissection of $D$ with respect to $\{f_1,f_2\}$ must contain~$Y$, we conclude that $D$ admits no such dissection. \end{example} \begin{figure} \caption{The sets $Y, f_1(Y), f_2(Y)$ showing that Example~\ref{exanodis} admits no dissection} \label{fignodis} \end{figure} As a first application of Theorem~\ref{main}, we extend Example~\ref{exa4} to contraction ratios $r \le (\sqrt{5}-1)/2$.
\begin{example} \label{exagold} Let $D = \triangle\big((0,0), (1,0), \big(\frac{1}{2}, \frac{\sqrt{3}}{2}\big)\big)$ and let the IFS $\{f_1,f_2\}$ be given by \[ f_1(x,y) = r R\Big(\frac{2\pi}{3}\Big) (x,y) + (r,0),\qquad f_2(x,y) = r R\Big(\frac{4\pi}{3}\Big) (x,y) + \Big(1-\frac{r}{2}, \frac{r\sqrt{3}}{2}\Big) \] with $r \in \big(0, \frac{\sqrt{5}-1}{2}\big]$. Choose \[ Y := \triangle\Big(\Big(\frac{1}{2(1+r)}, \frac{\sqrt{3}}{2(1+r)}\Big), \Big(1-\frac{1}{2(1+r)}, \frac{\sqrt{3}}{2(1+r)}\Big), \Big(\frac{1}{2}, \frac{\sqrt{3}}{2}\Big)\Big). \] Then it is easy to see that $Y$ satisfies the conditions of Theorem~\ref{main}; see Figure~\ref{Cfigure}. Indeed, the sets $f_1(C)$ and $f_2(C)$ are disjoint in measure if and only if the first coordinate of the rightmost point of $f_1(C)$ is less than or equal to $1/2$. As this yields the inequality $\frac{r}{1+r} + \frac{1}{2} \frac{r^2}{1+r} \le \frac{1}{2}$, we obtain the condition $r \le (\sqrt{5}-1)/2$. Figure~\ref{goldenmean} shows the dissections for the choices $r=11/20$ and $r=(\sqrt{5}-1)/2$. \end{example} \begin{figure} \caption{The sets $Y, f_1(Y), f_2(Y), C, f_1(C), f_2(C)$ for the choices $r = 11/20$ (above) and $r = (\sqrt{5}-1)/2$ (below)} \label{Cfigure} \end{figure} \begin{figure} \caption{Dissections from Example~\ref{exagold}} \label{goldenmean} \end{figure} Note that the boundary of this dissection can be described by another IFS with condensation: $B = f_1(B) \cup \big[\frac{r}{1+r},\frac{1}{1+r}\big] \cup f_2(B)$. The solution $B$ satisfies $f_j(B)= C \cap f_j(C)$ for $j=1,2$ and one can show that $B$ is a simple arc for $r < (\sqrt{5}-1)/2$. The limit case $r = (\sqrt{5}-1)/2$ is of interest because $f_1(B) = C \cap f_1(C)$ coincides with an IFS attractor associated to the set of uniqueness $\mathcal{U}_{(\sqrt{5}-1)/2}$ in the golden gasket (see p.~1470--1472 in~\cite{Sidorov}).
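The threshold in Example~\ref{exagold} deserves one more line of algebra; the following display (added for completeness) spells out why the quadratic $r^2 + r - 1$ governs the critical contraction ratio.

```latex
% Rightmost point of f_1(C) has first coordinate r/(1+r) + (1/2) r^2/(1+r):
\[
\frac{r}{1+r} + \frac{1}{2}\,\frac{r^2}{1+r} \le \frac{1}{2}
\;\Longleftrightarrow\;
2r + r^2 \le 1 + r
\;\Longleftrightarrow\;
r^2 + r - 1 \le 0
\;\Longleftrightarrow\;
r \le \frac{\sqrt{5}-1}{2},
\]
% since r > 0 and (sqrt(5)-1)/2 is the positive root of r^2 + r - 1.
```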
Example~\ref{exagold} shows that the equilateral triangle can be dissected into three similar parts with area ratio $1:1:a$, where $a \ge (3+\sqrt{5})/2$. \begin{example} \label{exaflip} Let $D = \triangle\big((0,0), (1,0), \big(\frac{1}{2}, \frac{\sqrt{3}}{2}\big)\big)$ and let the IFS $\{f_1,f_2\}$ be given by \[ f_1(x,y) = r R\Big(\frac{\pi}{3}\Big) (x,-y),\qquad f_2(x,y) = r R\Big(\frac{4\pi}{3}\Big) (x,y) + \Big(1-\frac{r}{2}, \frac{r\sqrt{3}}{2}\Big) \] with $r \in (0, 1/\phi]$, where $\phi$ denotes ``high phi'', the positive root of $x^3-2x^2+x-1$. Choose \[ Y := \triangle\Big(\Big(\frac{r}{2}, \frac{r\sqrt{3}}{2}\Big), \Big(1-\frac{r}{2}, \frac{r\sqrt{3}}{2}\Big), \Big(\frac{1}{2}, \frac{\sqrt{3}}{2}\Big)\Big). \] Then $Y$ satisfies the conditions of Theorem~\ref{main}; see Figure~\ref{Cfigure2}. We have $f_2(C) \subset C$ if and only if the $x$-coordinate of the leftmost point of $f_2(C)$ is at least $1/2$, {\it i.e.}, $r - \frac{r^2(1-r)}{2} \ge \frac{1}{2}$. Figure~\ref{flip} shows the dissections for $r=1/2$ and $r=1/\phi$. Note that $X$ is not connected for $r < 1/\phi$. \end{example} \begin{figure} \caption{The sets $Y, f_1(Y), f_2(Y), C, f_1(C), f_2(C)$ for $r=1/\phi$ in Example~\ref{exaflip}} \label{Cfigure2} \end{figure} \begin{figure} \caption{Dissections from Example~\ref{exaflip}} \label{flip} \end{figure} \begin{example} \label{exasquare} Let now $D$ be the unit square. By Theorem~\ref{IFS}, one can construct many different dissections of ratio $1:1:a$ with $a \ge 4$. However, for $\phi^2 \le a < 4$ ($\phi$ denotes ``high phi'' again), the following construction is basically the only one that we know, and we do not know any dissection with $1<a< \phi^2$. Let \[ f_1(x,y) = r (x,y),\qquad f_2(x,y) = r R\Big(\frac{3\pi}{2}\Big) (x,y) + (1-r,1). \] Figure~\ref{figsquare} shows that $Y := \overline{D\setminus\Phi(D)}$ satisfies the conditions of Theorem~\ref{main} for $r \le 1/\phi$.
Indeed, we need $\mu_2(f_1(C) \cap f_2(Y)) = 0$ in order to have $f_1(C) \subset C$, and this holds if and only if the rightmost point in $C$ of the form $(x,1)$ satisfies $r x \le 1-r$, {\it i.e.}, $r(1-r(1-r)) \le 1-r$. The dissections for $r=1/2$ and the limit case $r=1/\phi$ are depicted in Figure~\ref{figsquaredis}. \end{example} \begin{figure} \caption{Two instances of Example~\ref{exasquare}} \label{figsquare} \end{figure} \begin{figure} \caption{Dissections from Example~\ref{exasquare}} \label{figsquaredis} \end{figure} The last example concerns non-overlapping outside families for the equilateral triangle. \begin{example} \label{exaoutside} Let $D = \triangle\big((0,0), (1,0), \big(\frac{1}{2}, \frac{\sqrt{3}}{2}\big)\big)$ and let the IFS $\{f_1,f_2\}$ be given by \[ f_1(x,y) = r R\Big(\frac{4\pi}{3}\Big) (x,y) + \Big(\frac{r}{2}, \frac{r\sqrt{3}}{2}\Big),\qquad f_2(x,y) = -r (x,y) + \Big(\frac{3r}{2}, \frac{r\sqrt{3}}{2}\Big), \] with $r \in (0, 1/\phi]$. This family is non-overlapping, and outside for $r>1/2$. Figure~\ref{figoutside} (where $r=1/\phi$) shows that it satisfies the conditions of Corollary~\ref{outsidecorollary}. The dissection for the case $r=1/\phi$ is given in Figure~\ref{figoutside2}. \end{example} \begin{figure} \caption{The sets $\Phi(D), \Phi^2(D), Y, \Phi(Y), C, \Phi(C)$ for $r=1/\phi$ in Example~\ref{exaoutside}} \label{figoutside} \end{figure} \begin{figure} \caption{The dissection for $r=1/\phi$ in Example~\ref{exaoutside}} \label{figoutside2} \end{figure} \end{section} \begin{section}{Problems for Further Study} \label{problems} We want to conclude with some questions and conjectures related to the topic of the present paper. First of all, some part of the question we stated at the beginning remains unsolved. \begin{question} Can we dissect an equilateral triangle into three similar parts having area ratio $1:1:a$ for some $a \in \big(1,\frac{3+\sqrt{5}}{2}\big)$?
\end{question} \begin{question} Can we dissect a square into three similar parts having area ratio $1:1:a$ for some $a \in \big(1,\phi^2\big)$, where $\phi$ denotes ``high phi'', the positive root of $x^3-2x^2+x-1$? \end{question} We want to generalize these questions. To this end, let $r(f)$ denote the contraction ratio of a contractive mapping $f$. \begin{question} Let $D \subset \mathbb{R}^d$, $d \ge 2$, with $D = \overline{D^\circ}$ and $k \ge 1$ be given. Find the smallest constant $B<1$ depending on $D$ and $k$ with the following property: there exist only finitely many families $\{f_1,\ldots,f_k\}$ of contractions satisfying $\min_i r(f_i) > B$ that give rise to a dissection of~$D$. \end{question} The assumption $d\ge 2$ cannot be dropped. For $D=[0,1]\subset \mathbb{R}$, it is trivially possible to dissect $D$ into $k$ similar intervals with any ratio $r_1:r_2:\dots:r_k$. As a more concrete variant of this question, we conjecture that, for each $D\subset \mathbb{R}^d$ with $D = \overline{D^\circ}$, there are only finitely many families $\{f_1,\ldots,f_k\}$ solving the dissection problem for $D$ and satisfying $\sum_{i=1}^k r(f_i)^d > 1$. We call such solutions ``sporadic'' solutions of the dissection problem. The solution depicted in Figure~\ref{rectangle} seems to be such a sporadic solution. \begin{question} Let $D$ and $\mathcal{F}$ be given. Can we find an algorithm for deciding whether there exists a dissection? \end{question} \end{section} We thank Arturas Dubickas and Charlene Kalle for stimulating discussions. In particular, Figure~\ref{approximation2} with $r=1/2$ is due to Dubickas. We are deeply indebted to Tohru Tanaka at Meikun high school, who informed us about the reference~\cite{Gardner:94}. \end{document}
\begin{document} \title{Do not explain without context: addressing the blind spot of model explanations } \author{Katarzyna Woźnica \And Katarzyna Pękala \AND Hubert Baniecki \And Wojciech Kretowicz \And Elżbieta Sienkiewicz \And Przemysław Biecek \\ Faculty of Mathematics and Information Science, Warsaw University of Technology \\ \texttt{[email protected]} } \maketitle \begin{abstract} The increasing number of regulations and expectations of predictive machine learning models, such as the so-called right to explanation, has led to a large number of methods promising greater interpretability. This high demand has led to widespread adoption of XAI techniques like Shapley values, Partial Dependence profiles or permutational variable importance. However, we still do not know enough about their properties and how they manifest in the context in which explanations are created by analysts, reviewed by auditors, and interpreted by various stakeholders. This paper highlights a blind spot which, although critical, is often overlooked when monitoring and auditing machine learning models: the effect of the reference data on the explanation calculation. We argue that many model explanations depend directly or indirectly on the choice of the reference data distribution. We showcase examples where small changes in the distribution lead to drastic changes in the explanations, such as a change in the trend or, alarmingly, in the conclusion. Consequently, we postulate that obtaining robust and useful explanations always requires supporting them with a~broader context. \end{abstract} \section{Introduction}\label{sec:introduction} One of the main goals of explainable machine learning techniques is to evaluate how much and how well the model prediction depends on individual variables \citep{covert2020explaining}.
Permutation-based variable importance methods \citep{fisher2018model} emulate the removal of a specific variable and then assess the magnitude of change in the model prediction over the whole dataset. An importance attribution for every variable of a single prediction is often required, so Shapley-value-based explanations~\citep{vstrumbelj2014explaining,lundberg2017unified} and Local Interpretable Model-agnostic Explanations (LIME) \citep{ribeiro2016should} are popular tools for providing local explanations. Partial Dependence Profiles (PDP) \citep{friedman2001greedy}, Accumulated Local Effects (ALE) \citep{apley2020visualizing} and Individual Conditional Expectation (ICE)~\citep{goldstein2015peeking} extend this quantitative assessment and approximate how the model prediction depends on changes of a variable, on the global and local level, respectively. Aside from tools that summarize the operation of a predictive model, there are also counterfactual explanations, which suggest what action should be taken in order to achieve a specific effect, e.g., a change of prediction for a specific observation \citep{wachter2017counterfactual}. In addition to model-agnostic explanations, there is a whole spectrum of explainability methods specific to neural networks \citep{simonyan-saliency, shrikumar2017learning, kim-tcav} and tree-based models \citep{lundberg2018consistent, lundberg-treeshap}. The diversity and large number of emerging techniques have led to many works classifying existing methods \citep{adadi-survey-xai, arrieta-responsible-ai, molnar2019, ema2021}, evaluating them \citep{adebayo-debugging-test-explanations, bhatt2020evaluating, warnecke2020evaluating}, as well as providing guidance on how to apply these methods in the model lifecycle \citep{bhatt2020machine, bhatt-xml-stakeholders, hase2020evaluating, gill-responsible-ml}.
The rapid development of the domain leads one to expect that we can accurately summarize the insights of a model and understand the structure of its dependencies. However, this can be overly optimistic \citep{adebayo-sanity-checks-saliency-maps, adebayo-debugging-test-explanations}. First, some explainable machine learning methods are unstable, and a growing body of work addresses the issue of their robustness to purposeful attacks \citep{dombrowski-manipulated-explanations, slack2020fooling}. Other objections relate to the mathematical assumptions underlying these methods, which often lead to erroneous conclusions. For instance, \cite{lipton2018mythos} argues that these errors are caused by differences between real-world goals and what we can convey with mathematical formulas. Some critical voices argue that we should move away from explanations altogether and focus instead on simple interpretable models \citep{rudin2019stop}. An overview of global explanation methods and identification of misleading interpretations is provided by \cite{molnar2020pitfalls}. A key takeaway from this survey is a~more cautious approach to interpreting results. The contextual nature of explanatory techniques is also pointed out by \cite{miller2019}; we always have to make a number of simplifications and assumptions, and only in this context can we form conclusions. A very important aspect of explainable machine learning, which thus far has not received adequate attention, is the actual distribution of the data for which the explanation is created. In this work, we consider the distribution to be a part of the context for which the explanation is created. When it comes to testing the quality of a model, we recognize the importance of distributional assumptions made about the analyzed data. The dataset on which the model is trained and the dataset on which the model is tested are independent, but generated by the same process.
However, when we talk about the contexts for which explanations are created, these are not necessarily the same data (see Figure~\ref{fig:schema}). An obvious example is the out-of-time sample. The importance of the data distribution, however intuitive, is also apparent if we look at the definitions of the explainable machine learning methods mentioned in the previous paragraphs. A common technique is to perturb the data, locally or globally, and then examine the behavior of the model on such modified data. In most cases, the effect of such perturbations is averaged by calculating the expected value of a prediction or a loss function, thus implicitly relying on the data distribution. In many publications, the authors address the implications of making distributional assumptions on the mathematical properties of explanation methods, but do not discuss the differences revealed in the formulation of conclusions from the explanations~\citep{kumar2020problems,janzing2020feature,chen2020true}. \cite{merrick2020explanation} recognize that the data distribution selection may be a relevant source of differences between various implementations of Shapley-based explanations, and extend this remark to the suggestion of creating targeted reference distributions, in the context of which SHAP values should be analyzed. In this paper, we take a critical view of previously developed methods and pay attention to the context in which explanations should be interpreted. The question of the underlying distribution is fundamental, and practitioners need to address it when delivering such explanations to interested parties. It is especially important when providing counterfactual explanations, because the distribution of hypothetical events is uncertain, so assumptions have to be made and documented. \begin{figure} \caption{Distributions in the model life cycle. Training data are used to train a model. Validation data is an independent sample used to assess the performance of the model.
Exploratory data are selected to derive an explanation. They need not coincide with either the training or the validation data. Explanations for different questions and contexts, or addressed to different stakeholders, can be determined on different sets of data.} \label{fig:schema} \end{figure} \section{Distribution matters} \label{sec:distribution} We argue that the selection of an adequate data distribution is a fundamental aspect of explainable machine learning methods. In this section we provide an overview of the most common explainable machine learning techniques and demonstrate the distributional dependence directly from their definitions. We discuss how the aspect of sampling from a proper distribution has been addressed so far. In the following definitions we point out where the assumption of a proper distribution is required using $\sim \mathcal{D}$, but we do not specify these distributions in detail. \subsection{Variable importance} One of the most elementary techniques are model-agnostic variable importance measures \citep{fisher2018model}. To assess the relevance of a single variable we try to eliminate its impact on the model prediction. One approach is to distort the dependency between the target variable and the examined variable. For instance, a permutation-based variable importance measure depends on the change in the model performance before and after perturbations in a subset of variables $X_2$: \begin{align*} v(i) = \frac{\mathbb{E}_{(Y,X_1,X_2) \sim \mathcal{D}_2} \mathcal{L}( Y, f(X_1, X_2))}{\mathbb{E}_{(Y,X_1,X_2) \sim \mathcal{D}_1} \mathcal{L}( Y, f(X_1, X_2))}, \end{align*} where $\mathcal{D}_1$ is the original data distribution and $\mathcal{D}_2$ is a distribution that ignores the relationship between $X_2$ and the random vector $(Y, X_1)$; it is usually the product of the marginal distributions of $X_2$ and $(Y, X_1)$ under $\mathcal{D}_1$. In a basic implementation, $X_2$ consists of just one variable, which becomes uncorrelated with the other variables through permutations.
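To make the dependence on the reference distribution concrete, the following pure-Python sketch (our own toy setup with illustrative names, not taken from any XAI library) evaluates a permutation-based importance ratio for one fixed model on two reference samples that differ only in the spread of the permuted variable:

```python
import random

random.seed(0)

def model(x1, x2):
    # A fixed "fitted" model: the prediction depends only on x1.
    return 2.0 * x1

def mse(data):
    return sum((y - model(x1, x2)) ** 2 for x1, x2, y in data) / len(data)

def permutation_importance(data):
    """Loss after permuting x1, divided by the original loss (cf. v(i) above)."""
    base = mse(data)
    x1s = [x1 for x1, _, _ in data]
    random.shuffle(x1s)
    permuted = [(p, x2, y) for p, (_, x2, y) in zip(x1s, data)]
    return mse(permuted) / base

def sample(n, x1_spread):
    # y = 2*x1 + noise, so x1 is genuinely informative in both samples.
    out = []
    for _ in range(n):
        x1 = random.uniform(0.0, x1_spread)
        x2 = random.uniform(0.0, 1.0)
        out.append((x1, x2, 2.0 * x1 + random.gauss(0.0, 1.0)))
    return out

imp_narrow = permutation_importance(sample(2000, 0.1))  # x1 varies little
imp_wide = permutation_importance(sample(2000, 10.0))   # x1 varies a lot
```

The same model appears far more "important" when evaluated on the wide reference sample, purely because of the distribution of the data used to compute the explanation.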
It is clear that the magnitude of the change in measured performance depends on the distribution from which the data are drawn. The very method of generating perturbed observations, which affects the joint distribution of the data, is often pointed out as a drawback of this approach, and alternatives are proposed \citep{hooker2019please}. In addition, the importance of the selected samples has been raised repeatedly in the context of data drift. If the model does not generalize well, then the initial model performance will be lower, and thus the importance of the variables will also appear lower. \subsection{Variable dependence} Variable profile analysis methods such as Partial Dependence Plots (PDP) are based on the expected value of model predictions for observations with a fixed value of the analyzed variable, taken over the marginal distribution of the others: \begin{align*} g_{PDP}^{j}(z) &= \mathbb{E}_{X^{-j} \sim \mathcal{D}} f(X^{j|=z}), \end{align*} where $X^{-j}$ indicates the random vector with the $j$-th variable excluded and $X^{j|=z}$ stands for the random vector with the $j$-th variable fixed to the value $z$. The remaining variables $X^{-j}$ are sampled from the marginal distribution, so if there are interactions in the model, this estimation is biased. This problem is partially addressed with Accumulated Local Effects (ALE), which are based on the expected value of the model prediction for observations over the conditional distribution of the remaining variables: \begin{align*} g_{ALE}^{j}(z) &= \int_{z_0}^z \left[\mathbb{E}_{\underline{X}^{-j}|X^j=v \sim \mathcal{D}}\left\{ q^j(\underline{X}^{j|=v}) \right\}\right] dv + c, \end{align*} where $q^j(\underline{u})=\left\{ \frac{\partial f(\underline{x})}{\partial x^j} \right\}_{\underline{x}=\underline{u}}$ is the partial derivative of the model prediction and $c$ is a constant. In both cases, the resulting profiles depend on the selected data sample, and hence on the distribution.
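A minimal sketch of the PDP estimator makes the role of the reference sample explicit. The toy model and the two reference samples below are our own illustrative choices; because the model contains an interaction, the same model yields a flat profile under one reference distribution of $x_2$ and an increasing profile under another:

```python
def model(x1, x2):
    # Toy model with an interaction term.
    return x1 * x2

def pdp(model, z_grid, x2_sample):
    """Partial dependence for the first variable over a reference sample of x2."""
    n = len(x2_sample)
    return [sum(model(z, x2) for x2 in x2_sample) / n for z in z_grid]

grid = [0.0, 1.0, 2.0]
ref_a = [-1.0, 0.0, 1.0]  # reference sample of x2 with mean 0
ref_b = [1.0, 2.0, 3.0]   # reference sample of x2 with mean 2

pdp_a = pdp(model, grid, ref_a)  # flat profile: [0.0, 0.0, 0.0]
pdp_b = pdp(model, grid, ref_b)  # increasing profile: [0.0, 2.0, 4.0]
```

The same model thus shows "no trend" or a "clear trend" depending solely on the reference sample used to marginalize the remaining variables.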
The choice of the distribution from which the observations should come is limited to the marginal and conditional distributions, but the conclusions of the two approaches may be different and should not be implicitly regarded as equivalent \citep{molnar2020pitfalls}. \subsection{Variable attributions} \label{sec:shap_def} Local methods, such as Shapley values or Break-down, aim to determine how a~single variable contributes to the prediction for a~single observation $\mathbf{x}$. A basic assumption which local variable importance methods should satisfy is the completeness property~\citep{sundararajan2017axiomatic}: \begin{align*} f(\mathbf{x}) - \mathbb{E}_{\mathbf{X} \sim \mathcal{D}} (f(\mathbf{X})) = \sum_i attr_i(\mathbf{x}, f) . \end{align*} The decomposition is performed with respect to the baseline prediction; in the original approach this is the expected value of the prediction, approximated by the sample mean \citep{lundberg2017unified}. The way in which the attribution values $attr_i$ are estimated varies, but all implementations are also based on the expected value of the prediction for observations from the marginal (sometimes called interventional) or conditional distribution~\citep{datta2016algorithmic,lundberg2017unified,lundberg2018consistent,frye2020nips}. \cite{kumar2020problems} and \cite{janzing2020feature} discuss these differences in the context of satisfying the axioms of additive variable attributions. These mathematical properties affect the interpretation of SHAP values as applied to a counterfactual explanation. \cite{chen2020true} take a different perspective to compare interventional and conditional distributions and conclude that this choice depends on what one wants to explain, the model structure or the nature of the process. It is worth noting that \cite{merrick2020explanation} address not only the problem of estimating variable attributions, but also the selection of a baseline prediction and its implications.
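For a purely additive model $f(\mathbf{x}) = f_1(x_1) + f_2(x_2)$, the Shapley values have the closed form $attr_i = f_i(x_i) - \mathbb{E}_{\mathcal{D}} f_i(X_i)$, so the role of the reference distribution can be isolated exactly. The following sketch (a toy example of our own) verifies the completeness property and shows an attribution changing sign when the reference sample changes:

```python
def f1(x1):
    return 2.0 * x1

def f2(x2):
    return x2 ** 2

def model(x):
    # Additive model: f(x) = f1(x1) + f2(x2).
    return f1(x[0]) + f2(x[1])

def shap_additive(x, reference):
    """Exact Shapley values for an additive model: attr_i = f_i(x_i) - E_ref[f_i]."""
    n = len(reference)
    base1 = sum(f1(r[0]) for r in reference) / n
    base2 = sum(f2(r[1]) for r in reference) / n
    return [f1(x[0]) - base1, f2(x[1]) - base2]

x = (1.0, 2.0)
ref_all = [(0.0, 0.0), (2.0, 2.0)]  # a broad reference sample
ref_sub = [(2.0, 2.0)]              # a sub-population reference

attr_all = shap_additive(x, ref_all)  # [0.0, 2.0]
attr_sub = shap_additive(x, ref_sub)  # [-2.0, 0.0]
```

In both cases the attributions sum to $f(\mathbf{x})$ minus the mean reference prediction (completeness), yet the attribution of the first variable is neutral against one reference and negative against the other.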
Not all explanation methods depend directly on the data distribution. Examples of indirectly dependent methods are, e.g., saliency maps based on model gradients or Local Interpretable Model-agnostic Explanations (LIME). But even these methods are often aggregated to determine global explanations based on the local ones. \section{Context-sensitive explanations}\label{sec:pdp_credit} In this section, we show an example of how a small change in the dataset can significantly affect the model's explanations. Explanations are generally considered to be robust, but in fact they may be very sensitive to various factors, which we describe next. The choice of a distribution is very important, and one cannot rely on an explanation without taking into account the specific data on the basis of which it was built. In the following sections we discuss how to choose the right distribution to answer questions posed by the model's user. Conclusions drawn from explanatory machine learning techniques cannot be considered absolutely valid and accurately reflecting the model structure if we do not provide the data for which the explanation has been prepared. \begin{figure} \caption{Original and perturbed Heart Disease data (panel A) with the corresponding PDP (panel B) and ALE (panel C) curves} \label{fig:attackPDP} \end{figure} In Section~\ref{sec:distribution}, we showed that estimators of some of the most popular explanatory methods depend on the data distribution. But by how much? Let us consider the example of the Heart Disease dataset\footnote{\url{https://www.kaggle.com/ronitf/heart-disease-uci}}, which deals with the binary classification for the presence of a heart disease. A Support Vector Classification model was built for this dataset and then a technique described by \cite{baniecki2021fooling} was applied to perturb the dataset in a way that alters the PDP curve, which could be done maliciously to influence the result. The new perturbed dataset has the same dimension as the original dataset.
Panel A in Figure~\ref{fig:attackPDP} shows the marginal distributions of the variables in the original and modified datasets. We can see that for each variable the distributions are very similar. The model performance determined on the original data achieved AUC = 0.93, and the MAE between predictions for the original data and predictions for the changed data equals 0.07. So from the global perspective, these datasets are comparable. Although the two datasets seem similar, the PDP explanations (panel B) and ALE curves (panel C) look quite different for our model. While for the original data we can see a clear monotonic relationship between the variable X and the model output, for the modified data this relationship is not visible. This change has potentially serious consequences. If, for instance, we use the PDP explanations to audit the model, it carries a different message whether we show an increasing relationship or no trend at all. On what data, then, is the auditor to verify the PDP and ALE curves? This example shows that explanations such as PDP and ALE convey information not only about the model on the basis of which they were created, but also about the data. If our aim is to verify the behaviour of the model using PD or ALE curves, then it is a good idea to verify them on several datasets, e.g., both the training and test datasets. \section{Target-sensitive counterfactual explanation}\label{sec:shap_credit} In this section, we show that we do not need to artificially perturb the distribution to obtain a different inference from explanatory machine learning techniques. By restricting the explored data to a sub-population of observations for which the explanation was created, we can also obtain a different decomposition of the model prediction and performance. In our view, a deliberate selection of data sub-distributions allows us to answer questions closer to those posed by stakeholders.
We discuss this on the example of the SHAP method and local explanations for credit scoring data. As an illustration, we use the GiveMeSomeCredit dataset\footnote{\url{https://www.kaggle.com/c/GiveMeSomeCredit}} to predict the probability of a loan delinquency and build a gradient boosting model on $10$ variables summarising the economic situation of customers. To understand how particular variables impact individual predictions of a credit scoring model, Shapley values are frequently used. But in this example, we show that this can only be done properly if we choose the reference distribution very carefully. \begin{figure} \caption{SHAP values for the same observation, but with three different reference distributions: A - all applicants, B - only defaulters and C - only customers who paid their loans.} \label{fig:cd} \end{figure} As we saw in Section~\ref{sec:shap_def}, any model prediction is decomposed relative to the baseline prediction. A different choice of a reference distribution results in a SHAP explanation that answers a different question and covers a different context. A typical inquiry could be \textit{Why did the customer not get the loan?} or \textit{Why did the customer get the loan?}. On the other hand, a classic approach to SHAP estimation is to utilize all training data, with instances of customers who have and have not repaid the loan, and, according to Section~\ref{sec:shap_def}, this suggests that the question we answer is \textit{Why does the prediction for a given customer differ from the baseline?} To demonstrate the difference between explanations obtained by changing the reference distribution, we choose an applicant who defaulted on the credit but whose model prediction is close to the median prediction. We consider three approaches with three different reference distributions: the average prediction for the whole population, the prediction for the customers who paid, and finally, for the customers who defaulted on the credit.
Figure~\ref{fig:cd} shows the SHAP explanations for the selected applicant in these three scenarios. We see that the importance of a variable depends on the baseline and reference data. If we consider the whole population and the sub-population of payers, the most important variable is \textit{MonthlyIncome}, which decreases the probability of paying the loan. If we compare the prediction with the sub-population of defaulters, the relevance of the \textit{MonthlyIncome} variable is less pronounced and yields to \textit{NumberOfTimes90DaysLate} and \textit{RevolvingUtilization}. For the \textit{RevolvingUtilization} variable, not only the importance changes but also the effect: in Panel B, \textit{RevolvingUtilization} has a negative impact, while for the other references this variable drives the prediction above the mean score. We see that the explanation for the sub-population of solvent customers is more similar to that for the whole distribution; this is due to the data being unbalanced in terms of the target class. Recall that, in reality, the considered customer defaulted on the loan, so from the bank's point of view we would like to know why the model prediction was below the mean prediction rather than higher. Considering the whole population of customers can be misleading: it only gives us information on which variables caused the prediction to differ from the average for the whole population. A~better solution is to restrict the reference distribution to the customers who have not repaid. \section{Covariate-sensitive explanations} In this section, using a FIFA dataset\footnote{\url{https://www.kaggle.com/karangadiya/fifa19} }, we demonstrate that the choice of the reference distribution based on both the target variable and the explanatory variables is important. Consider the following problem. Based on the FIFA data, we build a predictive model that predicts a footballer's worth based on their characteristics.
We want to use this model to understand what influences the valuation of a young footballer, Yuriy Lodygin, who happens to be a goalkeeper, and to suggest in what areas he needs to improve in order to increase his value. Figure~\ref{figfifa} shows a set of explanations for this goalkeeper. Panel A shows SHAP attributions computed on the population of all players. The most important variable, with the highest value, is \textit{goalkeeping reflexes}. This means that, compared to several thousand other players, the value of \textit{goalkeeping reflexes} is very high and it contributes positively to the valuation of the player. But is this useful information, given that players other than goalkeepers have this variable poorly developed? A~better benchmark would be the population of goalkeepers. Panel B shows Shapley values computed for the population of goalkeepers. The variable \textit{goalkeeping reflexes} still has a positive contribution to the pricing, but lower than for the whole population of footballers. But if we want to suggest areas for improvement in order to raise the valuation of a player, then why are we referring to all goalkeepers? A better reference population is the population of the best goalkeepers. Panel C shows the Shapley values calculated for the highest valued goalkeepers. What does it take to get there? This time the Shapley values are different from those in the previous panels. The \textit{goalkeeping reflexes} variable now has a relatively low value, which drags the valuation down. Why the difference? \begin{figure} \caption{Explanations for the selected goalkeeper: SHAP values with respect to all players (Panel A), all goalkeepers (Panel B) and the highest valued goalkeepers (Panel C), together with the Ceteris Paribus plot (Panel D) and the histogram of \textit{goalkeeping reflexes} (Panel E)} \label{figfifa} \end{figure} In Panels D and E, the Ceteris Paribus plot and the histogram reveal the secret. The \textit{goalkeeping reflexes} variable is a~key characteristic in the development of goalkeepers; the top goalkeepers have even higher values of this characteristic than 82.
Therefore, when we refer to the best goalkeepers, the contribution of the \textit{goalkeeping reflexes} value of 82 becomes negative, while it is positive when we take into account the whole population of goalkeepers or of all football players. Each of these explanations presents a correct value, but means something different due to the context of the data used as a reference. So in this case, if our explanation is supposed to show how to increase the value of a goalkeeper, it seems better to use the population of goalkeepers or only the best goalkeepers. Again, the distribution is important. Moreover, this time the choice of the distribution depended both on an explanatory variable (being a goalkeeper) and on the target variable (the value of the footballer). \section{Toy example}\label{sec:toy} \begin{figure} \caption{Shapley values for a customer with $(savings,\;wages) = (40,\;35)$ computed against the whole population and against the sub-population of payers (Panel A), and the two sub-populations of the synthetic credit data (Panel B).}\label{fig:toyExample} \end{figure} In the previous sections, we discussed the importance of a proper reference distribution in real world scenarios, and demonstrated its relevance for different explanation methods and different application domains. Someone could argue that the problems with different interpretations are due to the complexity and instability of the models we analyze. To address this objection, in this section we show that a similar problem arises in a linear model built on synthetic data. Consider the following problem. We are interested in a binary classification into two groups: will pay back the credit / will not pay back the credit. We have two explanatory variables: \textit{savings} and \textit{wages}. In this synthetic example, we assume that for loan defaulters $$(savings,\;wages) \sim \mathcal N ((25,\;40)^T,\; 5),$$ while for loan payers $$(savings,\;wages) \sim \mathcal N ((75,\;60)^T,\; 5).$$ The scoring model is based on a simple linear and additive formula $$ credit\_score = 5/3 \cdot wages + 2/3 \cdot savings. $$ Let us also assume that we are interested in explanations for a customer with $(savings,\;wages) = (40,\;35)$. 
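For an additive model with independent features, the Shapley value of feature $j$ has a closed form: $\phi_j = w_j\,(x_j - \mathbb{E}_{\mathrm{ref}}[X_j])$, where the expectation is taken over the chosen reference distribution. The following sketch (in Python) makes the dependence on the reference explicit; the 70/30 split between defaulters and payers is our own assumption, as the text only states that defaulters constitute a large share of the portfolio.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical mixture weights: defaulters dominate the portfolio (70/30).
defaulters = rng.normal(loc=(25.0, 40.0), scale=5.0, size=(7000, 2))  # (savings, wages)
payers = rng.normal(loc=(75.0, 60.0), scale=5.0, size=(3000, 2))
everyone = np.vstack([defaulters, payers])

weights = np.array([2.0 / 3.0, 5.0 / 3.0])   # credit_score = 2/3*savings + 5/3*wages
customer = np.array([40.0, 35.0])            # the customer (savings, wages) = (40, 35)

def shapley_linear(x, reference, w):
    """Exact Shapley values of an additive linear model against a reference
    sample: phi_j = w_j * (x_j - mean_j(reference))."""
    return w * (x - reference.mean(axis=0))

phi_all = shapley_linear(customer, everyone, weights)   # reference: whole population
phi_paid = shapley_linear(customer, payers, weights)    # reference: payers only

print("phi vs all :", phi_all)    # (savings, wages) attributions
print("phi vs paid:", phi_paid)
```

Only the reference sample changes between the two calls, yet the attributions differ substantially: against the payers, \textit{wages} receives the most negative attribution, roughly $5/3 \cdot (35-60) \approx -41.7$.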
Figure \ref{fig:toyExample} shows both sub-populations (Panel B) and the explanations determined by the Shapley value method (Panel A). The example is constructed in such a way that the Shapley values determined over the whole distribution suggest a~positive contribution of the \textit{wages} variable and a negative contribution of the \textit{savings} variable (Panel A, 'all'). Such Shapley values show, of course, what pushed the score above the mean credit score for the whole population. But for this customer, the reference to the average score is not interesting, because in this example the average is determined by the large share of customers who have defaulted. To answer the question '\textit{what do I have to do to obtain the credit}', one has to compare with the values determined for the reference distribution of people who have paid the credit off. The Shapley values calculated for this population are different (see Panel A, 'paid'). We see that the variable that decreases the score for our client the most is \textit{wages}. To summarise, for the same single customer, the variable \textit{wages} had the most positive effect when the Shapley values were calculated on the whole population, and the most negative effect when they were calculated only on the population of customers that paid the credit off. Which of these explanations should we believe? Looking at Panel B, we are inclined to believe that both variables, \textit{wages} and \textit{savings}, are far too low. Thus, we find that the more valuable suggestions come from the Shapley values calculated for the 'paid off' distribution. \section{Conclusion: you do not explain without a context} In this paper, we showed that the most common methods for model explanation, such as PDP, ALE, SHAP and Break-down, cannot be treated as fully automatic tools. Expectations that some method will automatically explain the behaviour of an arbitrarily complex model in answer to an arbitrarily posed question must be abandoned. This does not mean that explanations are not useful. 
Explanations are still a fantastic tool for explanatory model analysis, but they need to be used skillfully. They offer very valuable techniques for detailed exploratory model analysis once precise hypotheses have been defined. However, to get the right answer, the right choice of the reference distribution is essential. The key conclusion is that there is no point in applying explanations without a detailed analysis of which reference distribution is related to the question posed. Do not explain without considering the context! Moreover, it does not matter whether the model is simple or complex. The model analysed here had two additive linear variables, and the explanatory problem was solely due to the choice of the reference distribution. Explanations can be manipulated, sometimes in a very significant way, as we showed in Section \ref{sec:pdp_credit}. If we want to use these tools to audit models, then the auditor should specify precisely on which data the explanations are to be determined. A good choice is to use the training data, as this is the dataset that is explicitly linked to the model being trained. If a~certain type of response is expected by the auditor, then it is also a good idea to probe the model on several datasets and check whether the model responses are stable. Such cross analysis is sometimes called contrastive explanations \citep{ema2021}. For counterfactual questions, such as \emph{``What do I need to do to pay the loan?''} or \emph{``to be a highly valued goalkeeper?''}, the choice of the reference population is crucial. It can be determined by a target variable or by an explanatory variable, as we showed in Section \ref{sec:shap_credit}. Choosing the wrong population can result in attributions of the opposite sign. Importantly, problems with model explanations do not derive solely from the complexity of the model. 
Even simple models that are considered transparent, such as a simple logistic regression with two variables, can lead to significantly different explanations depending on the reference distribution, as we showed in Section \ref{sec:toy}. This means that the problem is neither the complexity of the model nor the design of methods such as SHAP, PDP or ALE. The actual problem to be solved by the research community is to determine exactly what questions the model stakeholders can pose in explanatory model analysis, and what the desired properties of the answers to these questions are \citep{miller2019, adebayo-debugging-test-explanations, iema, arrieta-responsible-ai, sokol-interactive-customizable-explanations, srinivasan-survey-cognitive-science}. \end{document}
\begin{document} \title{A Simple and Efficient Estimation Method for Models with Nonignorable Missing Data } \author{Chunrong Ai\\Department of Economics, University of Florida\\ [email protected] \and Oliver Linton\\Faculty of Economics, University of Cambridge \\ [email protected] \and Zheng Zhang\\Institute of Statistics and Big Data, Renmin University of China\\ [email protected]} \maketitle \begin{abstract} This paper proposes a simple and efficient estimation procedure for the model with non-ignorable missing data studied by {\color{black}\cite{morikawa2016semiparametric}}. Their semiparametrically efficient estimator requires explicit nonparametric estimation, and so it suffers from the curse of dimensionality and requires a bandwidth selection. We propose an estimation method based on the Generalized Method of Moments (hereafter GMM). Our method is consistent and asymptotically normal regardless of the number of moments chosen. Furthermore, if the number of moments increases appropriately, our estimator can achieve the semiparametric efficiency bound derived in {\color{black}\cite{morikawa2016semiparametric}}, but under weaker regularity conditions. Moreover, our proposed estimator and its consistent covariance matrix are easily computed with the widely available GMM package. We propose two data-based methods for the selection of the number of moments. A small scale simulation study reveals that the proposed estimator indeed outperforms the existing alternatives in finite samples. \end{abstract} \textbf{Keywords}: Nonignorable nonresponse; Generalized method of moments; Semiparametric efficiency. \section{Introduction} Missing data is common in many fields of application. One way to deal with the missing data problem is to delete observations containing missing data. In doing so, we may produce biased estimates and erroneous conclusions, depending on the missing data mechanism. 
If data are missing completely at random, standard estimation and inference procedures are still consistent when the observations with missing data are ignored, see \cite{heitjan1996distinguishing}, \cite{little1988test} among others. If data are missing at random (MAR) in the sense that the propensity of missingness depends only on the observed covariates, consistent estimation can still be obtained through covariate balancing, see \cite{rubin1976comparing, rubin1976inference}, \cite{little1989analysis}, \cite{robins1995semiparametric}, \cite{robins1995analysis}, \cite{bang2005doubly}, \cite{qin2007empirical}, \cite{chen2008semiparametric}, \cite{tan2010bounded}, \cite{rotnitzky2012improved}, \cite{little2014statistical} among others. In many applications, however, data are missing not at random (MNAR). For example, the income question in sample surveys is often not answered by people at the top end of the distribution; that is, their response frequency depends on an outcome variable that is often the key focus. For another example, suppose an investigator examines the effect of sleep on pain by calling subjects daily to ask them about last night's sleep and their pain today. Patients who are experiencing severe pain are more likely not to come to the phone, leaving the data missing for that particular day; again, this violates the MAR assumption. From political science, roll-call votes, which measure legislators' ideological positions, are subject to non-ignorable nonresponse because, unsurprisingly, politicians behave strategically. In the MNAR case, the parameter of interest may not even be identified (e.g., \cite{robins1997toward}), let alone be consistently estimated. To be more specific, let $T\in \{0,1\}$ denote the binary random variable indicating the missing status of the outcome variable $Y$: $Y$ is observed if $T$ takes the value one and $Y$ is not observed if $T$ takes the value zero. 
Let $\boldsymbol{X}$ denote a vector of explanatory variables, let $\pi (\boldsymbol{x},y)=\mathbb{P}(T=1| \boldsymbol{X}=\boldsymbol{x},Y=y)$ denote the propensity score function and let $f_{Y|\boldsymbol{X}}(y|\boldsymbol{x})$ denote the conditional density function of $Y$ given $\boldsymbol{X}$. \cite{robins1997toward} shows that if both the propensity score function and the conditional density function are completely unknown, the joint distribution of $(T,Y)$ given $\boldsymbol{X}$ is not point identifiable. In this case, a necessary identification condition is the parameterization of either the propensity score function or the conditional density function. Molenberghs and Kenward (2007) propose the parameterization of both the propensity score function and the conditional density function as an identification strategy, while \cite{sverchkov2008new} and \cite{riddles2016propensity} parameterize the propensity score function and only a component of the conditional density function: $f_{Y|\boldsymbol{X},T}(y|\boldsymbol{x},T=1)$. \\ If the joint distribution is not the parameter of interest, the identification strategy above can be modified. For example, if the parameter of interest is the conditional density of $Y$ given $\boldsymbol{X}$ (i.e., $f_{Y|\boldsymbol{X}}(y|\boldsymbol{x})$), parameterization of the propensity score function is not needed. However, parameterization of $f_{Y|\boldsymbol{X}}(y|\boldsymbol{x})$ in this case is not sufficient for identification due to missing data. \cite{tang2003analysis} suggests parameterization of the marginal density $f_{\boldsymbol{X}}(\boldsymbol{x})$ as well, while \cite{zhao2015semiparametric} imposes an exclusion restriction. In both studies, $f_{Y|\boldsymbol{X}}(y|\boldsymbol{x})$ is identified and consistently estimated. \\ We consider estimation of the parameter $\theta _{0}=\mathbb{E}[U(\boldsymbol{X},Y)]$, where $U(\cdot )$ is any known function. 
We suppose that the propensity score $\pi$ is parameterized but do not restrict the conditional density function of the outcome variable. In earlier work in this framework, either the coefficients in the propensity score function are known or consistently estimated from an external sample (\cite{kim2011semiparametric}), or an exclusion restriction is imposed (\cite{wang2014instrumental} and \cite{shao2016semiparametric}). \cite{morikawa2016semiparametric} study the efficient estimation of $\theta _{0}$. They derive the efficient score function (and hence the semiparametric efficiency bound) for $\theta _{0}$ in this model. They propose to estimate the efficient score function by estimating $f_{Y|\boldsymbol{X},T}(y|\boldsymbol{x},1)$ by a working parametric model (MK1) or by kernel nonparametric estimation (MK2). Their approach MK1 is consistent, but it is not efficient unless the working parametric model is correct. Their method MK2 suffers from the curse of dimensionality (their smoothness conditions depend on the dimensionality of the covariates through their condition C14) and the bandwidth selection problem (about which they give no guidance). \\ We study the same estimation problem as in \cite{morikawa2016semiparametric} but propose a simpler yet equally efficient estimation procedure. Our proposed method does not require explicit nonparametric estimation and hence does not suffer from the curse of dimensionality. The proposed estimator is motivated by the key insight that the model parameter satisfies a parametric conditional moment restriction, of which the semiparametric efficiency bound is identical to the bound derived in \cite{morikawa2016semiparametric}. The conditional moment restriction is then turned into an expanding set of unconditional moment restrictions and the parameter of interest is estimated by applying the widely available and easy to compute GMM estimation (see \cite{hansen1982large}). 
Under some sufficient conditions, we establish that the proposed estimator is consistent and asymptotically normally distributed even if the set of unconditional moment restrictions does not expand, thereby freeing us from the curse of dimensionality and the bandwidth selection problem; when the set does expand, the proposed estimator attains the semiparametric efficiency bound. This is in contrast with the MK2 method of \cite{morikawa2016semiparametric}, which is inconsistent if the bandwidth does not go to zero at a certain rate.\\ The paper is organized as follows. Section 2 describes the estimation procedure, Section 3 derives the large sample properties of the estimator, Section 4 provides a consistent asymptotic variance estimator, Section 5 suggests two data driven approaches to determine the number of unconditional moment restrictions, and Section 6 reports on a small scale simulation study, followed by some concluding remarks in Section 7. All technical proofs are relegated to the Appendix. \section{Basic Framework and Estimation} We begin by setting up the basic framework. Denote $\boldsymbol{Z}=( \boldsymbol{X}^{\top },Y)^{\top }$. The following assumption shall be maintained throughout the paper:\\ \textbf{Assumption 2.1}. 
(i) Parameterization of the missing data mechanism: $\mathbb{P}(T=1|Y,\boldsymbol{X})=\pi (Y,\boldsymbol{X};\gamma _{0})=\pi ( \boldsymbol{Z};\gamma _{0})$ holds for some known function $\pi (\cdot;\cdot)$, where $\gamma _{0}\in \mathbb{R}^{p}$ for some known $p\in \mathbb{N}$ is the true (unknown) value; (ii) exclusion restriction: there exist nonresponse instrument variables $\boldsymbol{X}_{1}$ in $\boldsymbol{X}=( \boldsymbol{X}_{1}^{\top },\boldsymbol{X}_{2}^{\top })^{\top }$ such that $\boldsymbol{X}_{2}$ is independent of $T$ given both $\boldsymbol{X}_{1}$ and $Y$; and (iii) the parameter of interest is $\theta _{0}=\mathbb{E}[U(\boldsymbol{Z})]$ for some known function $U(\cdot )$.\\ Under Assumption 2.1 and by applying the law of iterated expectations, we obtain the following conditional moment restrictions: \begin{align} & \mathbb{E}\left[ 1-\frac{T}{\pi (\boldsymbol{Z};\gamma _{0})}\bigg|\boldsymbol{X}\right] =0, \label{sequential1} \\ & \mathbb{E}\left[ \theta _{0}-\frac{T}{\pi (\boldsymbol{Z};\gamma _{0})}U(\boldsymbol{Z})\right] =0, \label{sequential2} \end{align} which will form the basis for the proposed estimation. We notice that the parameters of interest in (\ref{sequential1})-(\ref{sequential2}) are finite dimensional (and there is no explicit infinite dimensional nuisance parameter) and can easily be estimated by GMM. We also notice that this is a special case of the model studied in \cite{ai2012semiparametric}. By applying their result (Remark 2.1, p.~446), we obtain the semiparametric efficiency bound for model (\ref{sequential1})-(\ref{sequential2}), which is identical to the bound derived in \cite{morikawa2016semiparametric}, thereby suggesting a simple and efficient estimation procedure. \\ The (nuisance) parameter $\gamma _{0}$ is identified by (\ref{sequential1}) and the parameter of interest $\theta _{0}$ is identified by (\ref{sequential2}). 
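As a quick sanity check of (\ref{sequential1})-(\ref{sequential2}), the following Monte Carlo sketch (in Python) verifies that the sample analogues of both restrictions vanish at the true parameter values. The logistic propensity and the Gaussian design below are hypothetical choices of our own, used for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000

# Hypothetical MNAR data-generating process: the response indicator T depends
# on the (possibly missing) outcome Y through a logistic propensity.
X = rng.normal(size=N)
Y = 1.0 + 0.5 * X + rng.normal(size=N)
gamma0 = np.array([1.0, -0.5, 0.3])          # "true" propensity coefficients
pi = 1.0 / (1.0 + np.exp(-(gamma0[0] + gamma0[1] * Y + gamma0[2] * X)))
T = rng.binomial(1, pi)

# Restriction (1): E[(1 - T/pi(Z; gamma0)) u(X)] = 0 for any function u of X.
moments = [np.mean((1.0 - T / pi) * u) for u in (np.ones(N), X, X**2)]

# Restriction (2): theta0 = E[U(Z)] is recovered by inverse-propensity
# weighting of the observed outcomes (here U(Z) = Y).
theta_ipw = np.mean(T / pi * Y)

print("moment means:", moments)
print("theta_ipw:", theta_ipw, "full-sample mean of Y:", Y.mean())
```

All three weighted moment means are numerically close to zero, and the inverse-propensity weighted mean of the observed outcomes is close to the full-sample mean of $Y$, even though $T$ depends on $Y$ itself.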
The following condition shall also be maintained throughout the paper:\\ \textbf{Assumption 2.2}. The parameter space $\Gamma $ is a compact subset of $\mathbb{R}^{p}$. The true value $\gamma _{0}$ lies in the interior of $\Gamma $ and is the only solution to (\ref{sequential1}). {\color{black}The parameter space $\Theta$ is a compact subset of $\mathbb{R}$ and the true value $\theta_0$ lies in the interior of $\Theta$.} \\ To estimate model (\ref{sequential1})-(\ref{sequential2}), we first turn it into a set of unconditional moment restrictions. We work with a set of known basis functions: for each integer $K\in \mathbb{N}$ with $K\geq p$, let \begin{equation*} u_{K}(\boldsymbol{X})=(u_{1K}(\boldsymbol{X}),\ldots ,u_{KK}(\boldsymbol{X}))^{\top }. \end{equation*} {\color{black}Discussion of the choice of $u_K(\boldsymbol{X})$ and its properties can be found in Section \ref{sec:uK} in the Appendix}. Model (\ref{sequential1})-(\ref{sequential2}) {\color{black}implies the unconditional moment restrictions}: \begin{align} & \mathbb{E}\left[ \left( 1-\frac{T}{\pi (\boldsymbol{Z};\gamma _{0})}\right) u_{K}(\boldsymbol{X})\right] =0, \label{uncond1} \\ & \mathbb{E}\left[ \theta _{0}-\frac{T}{\pi (\boldsymbol{Z};\gamma _{0})}U(\boldsymbol{Z})\right] =0. \label{uncond2} \end{align} To avoid redundant moment restrictions, we require $\mathbb{E}\left[ u_{K}(\boldsymbol{X})u_{K}(\boldsymbol{X})^{\top }\right] $ to be nonsingular for every $K$. The following somewhat stronger identification condition shall be maintained throughout the paper: \textbf{Assumption 2.2'}. The parameter space $\Gamma $ is a compact subset of $\mathbb{R}^{p}$. The true value $\gamma _{0}$ lies in the interior of $\Gamma $ and is the only solution to (\ref{uncond1}). {\color{black}The parameter space $\Theta$ is a compact subset of $\mathbb{R}$ and the true value $\theta_0$ lies in the interior of $\Theta$.}\\ We can estimate the parameter of interest by the GMM method. 
Let $\{T_{i},\boldsymbol{Z}_{i}\}_{i=1}^{N}$ denote an i.i.d.\ sample drawn from the joint distribution of $(T,\boldsymbol{Z})$. Denote \begin{eqnarray*} \boldsymbol{G}_{K}(\gamma ,\theta ) :&=&\left( \sum_{i=1}^{N}\left[ 1-\frac{T_{i}}{\pi (\boldsymbol{Z}_{i};\gamma )}\right] u_{K}(\boldsymbol{X}_{i})^{\top },\sum_{i=1}^{N}\left[ \theta -\frac{T_{i}}{\pi (\boldsymbol{Z}_{i};\gamma )}U(\boldsymbol{Z}_{i})\right] \right) ^{\top } \\ &=&\sum_{i=1}^{N}g_{K}(T_{i},\boldsymbol{Z}_{i};\gamma ,\theta )\text{,} \end{eqnarray*} {\color{black} where $g_{K}(T,\boldsymbol{Z};\gamma ,\theta ):=\left( \left[ 1-\frac{T}{\pi (\boldsymbol{Z};\gamma )}\right] u_{K}(\boldsymbol{X})^{\top }, \theta -\frac{T}{\pi (\boldsymbol{Z};\gamma )}U(\boldsymbol{Z})\right) ^{\top }$.} The GMM estimator of $\gamma _{0}$ and $\theta _{0}$ is defined as \begin{equation*} (\check{\gamma},\check{\theta})=\arg \underset{\gamma \in \Gamma ,\theta\in\Theta }{\min }\boldsymbol{G}_{K}(\gamma ,\theta )^{\top }\cdot \mathbf{W}\cdot \boldsymbol{G}_{K}(\gamma ,\theta ) \end{equation*} where $\mathbf{W}$ is a $(K+1)\times (K+1)$ symmetric weighting matrix. For every fixed $K\geq p$, \cite{hansen1982large} shows that, under some regularity conditions, the estimator satisfies \begin{equation} (\check{\gamma}-\gamma _{0},\check{\theta}-\theta _{0})=O_{p}(N^{-1/2}) \label{Hansen1} \end{equation} and is asymptotically normally distributed, but it is generally not the best unless the best weighting matrix is used. The best weighting matrix is the inverse of \begin{equation*} \boldsymbol{D}_{(K+1)\times (K+1)}:=\mathbb{E}\left[ g_{K}(T,\boldsymbol{Z};\gamma _{0},\theta _{0})g_{K}(T,\boldsymbol{Z};\gamma _{0},\theta _{0})^{\top } \right] . 
\end{equation*} The best estimator (within the class defined by the specific unconditional moments) is defined as \begin{equation*} (\overline{\gamma },\overline{\theta })=\arg \underset{\gamma \in \Gamma ,\theta\in\Theta }{\min }\boldsymbol{G}_{K}(\gamma ,\theta )^{\top }\cdot \boldsymbol{D}_{(K+1)\times (K+1)}^{-1}\cdot \boldsymbol{G}_{K}(\gamma ,\theta ). \end{equation*} Suppose that the propensity score function is differentiable with respect to $\gamma $. Denote \begin{align*} \boldsymbol{B}_{(K+1)\times (p+1)} =\nabla _{\gamma ,\theta }\mathbb{E}\left[ \frac{1}{N}\boldsymbol{G}_{K}(\gamma _{0},\theta _{0})\right] =\left( \begin{array}{cc} \mathbb{E}\left[ u_{K}(\boldsymbol{X})\frac{\nabla _{\gamma }\pi (\boldsymbol{Z};\gamma _{0})^{\top}}{\pi (\boldsymbol{Z};\gamma _{0})}\right] \text{,} & \mathbf{0}_{K\times 1} \\ \mathbb{E}\left[ U(\boldsymbol{Z})\frac{\nabla _{\gamma }\pi (\boldsymbol{Z};\gamma _{0})^{\top}}{\pi (\boldsymbol{Z};\gamma _{0})}\right] \text{,} & 1 \end{array} \right) \end{align*} and \begin{align*} \boldsymbol{V}_{K} =\left\{ \left( \boldsymbol{B}_{(K+1)\times (p+1)}\right) ^{\top }\boldsymbol{D}_{(K+1)\times (K+1)}^{-1}\left( \boldsymbol{B}_{(K+1)\times (p+1)}\right) \right\} ^{-1}. \end{align*} \cite{hansen1982large} shows that, for every fixed $K\geq p$, \begin{equation} \boldsymbol{V}_{K}^{-1/2}\binom{\sqrt{N}(\overline{\gamma }-\gamma _{0})}{\sqrt{N}(\overline{\theta }-\theta _{0})}\rightarrow N\left( 0,I_{(p+1)\times (p+1)}\right) \text{ in distribution.} \label{Hansen2} \end{equation} Since the best weighting matrix depends on the unknown parameter value, the best estimator $(\overline{\gamma },\overline{\theta })$ is infeasible. \cite{hansen1982large} suggests the following two-step procedure:\\ \qquad Step I. 
Compute the initial $\sqrt{N}$-consistent estimator {\color{black}\begin{align*} &\widehat{\boldsymbol{W}}_0:=\begin{pmatrix} \frac{1}{N}\sum_{i=1}^Nu_K(\boldsymbol{X}_i)u_K(\boldsymbol{X}_i)^{\top} & \boldsymbol{0}_{K\times 1} \\[2mm] \boldsymbol{0}_{K\times 1}^{\top} & 1 \end{pmatrix}\ ,\\ &(\check{\gamma},\check{\theta})=\arg \underset{(\gamma,\theta) \in \Gamma \times \Theta }{ \min }\boldsymbol{G}_{K}(\gamma ,\theta )^{\top }\cdot \widehat{\boldsymbol{W}}^{-1}_0\cdot \boldsymbol{G}_{K}(\gamma ,\theta ). \end{align*}} \qquad Step II. Compute the best weighting matrix and the best estimator \begin{equation*} \widehat{\boldsymbol{D}}_{(K+1)\times (K+1)}:=\frac{1}{N}\sum_{i=1}^{N}g_{K}(T_{i},\boldsymbol{Z}_{i};\check{\gamma},\check{\theta})g_{K}(T_{i},\boldsymbol{Z}_{i};\check{\gamma},\check{\theta})^{\top }\text{,} \end{equation*} \begin{equation*} (\widehat{\gamma },\widehat{\theta })=\arg \underset{\gamma \in \Gamma ,\theta \in \Theta }{\min }\boldsymbol{G}_{K}(\gamma ,\theta )^{\top }\cdot \widehat{\boldsymbol{D}}_{(K+1)\times (K+1)}^{-1}\cdot \boldsymbol{G}_{K}(\gamma ,\theta ). 
\end{equation*} \cite{hansen1982large} establishes that, for every fixed $K\geq p$, \begin{equation} \boldsymbol{V}_{K}^{-1/2}\binom{\sqrt{N}(\widehat{\gamma }-\gamma _{0})}{\sqrt{N}(\widehat{\theta }-\theta _{0})}\rightarrow N\left( 0,I_{(p+1)\times (p+1)}\right) \text{ in distribution.} \label{Hansen3} \end{equation} Moreover, denote \begin{align*} \widehat{\boldsymbol{B}}_{(K+1)\times (p+1)} :=\left( \begin{array}{cc} N^{-1}\sum_{i=1}^{N}u_{K}(\boldsymbol{X}_{i})\frac{\nabla _{\gamma }\pi (\boldsymbol{Z}_{i};\widehat{\gamma })^{\top}}{\pi (\boldsymbol{Z}_{i};\widehat{\gamma })}\text{,} & \mathbf{0}_{K\times 1} \\ N^{-1}\sum_{i=1}^{N}U(\boldsymbol{Z}_{i})\frac{\nabla _{\gamma }\pi (\boldsymbol{Z}_{i};\widehat{\gamma })^{\top}}{\pi (\boldsymbol{Z}_{i};\widehat{\gamma })}\text{,} & 1 \end{array} \right) \end{align*} and \begin{align*} \widehat{\boldsymbol{V}}_{K} :=\left\{ \left( \widehat{\boldsymbol{B}}_{(K+1)\times (p+1)}\right) ^{\top }\widehat{\boldsymbol{D}}_{(K+1)\times (K+1)}^{-1}\left( \widehat{\boldsymbol{B}}_{(K+1)\times (p+1)}\right) \right\} ^{-1}. \end{align*} \cite{hansen1982large} proves that, for every fixed $K\geq p$, \begin{equation} \widehat{\boldsymbol{V}}_{K}\rightarrow \boldsymbol{V}_{K}\text{ in probability.} \label{Hansen4} \end{equation} The best estimator (within the class defined by the specific unconditional moments) is generally not semiparametrically efficient. To obtain the efficient estimator, we shall allow $K$ to increase with the sample size at the rate {\color{black}$o(N^{1/3})$} so that $\{u_{K}(\boldsymbol{X})\}$ span the space of measurable functions (see also \cite{geman1982nonparametric} and \cite{newey1997convergence}). In the next two sections, we shall establish that the results in (\ref{Hansen1})-(\ref{Hansen4}) still hold with expanding {\color{black}$K=o(N^{1/3})$}. \\ The advantage of our proposed estimator over the existing estimators is evident. 
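For concreteness, the two-step procedure above can be sketched numerically. The following is a minimal illustration in Python under assumptions of our own choosing (a logistic propensity depending only on $Y$, $U(\boldsymbol{Z})=Y$, a polynomial basis $u_K$, and an identity weighting matrix in Step I in place of $\widehat{\boldsymbol{W}}_0$); it is not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
N = 5_000

# Hypothetical MNAR design for illustration.
X = rng.normal(size=N)
Y = 1.0 + 0.5 * X + rng.normal(size=N)
gamma_true = np.array([1.0, -0.5])
pi_true = 1.0 / (1.0 + np.exp(-(gamma_true[0] + gamma_true[1] * Y)))
T = rng.binomial(1, pi_true)
Yobs = np.where(T == 1, Y, 0.0)  # Y enters the moments only multiplied by T

def g_bar(params, K):
    """Sample moment vector G_K/N and the N x (K+1) matrix of moment functions."""
    gamma, theta = params[:2], params[2]
    pi = 1.0 / (1.0 + np.exp(-(gamma[0] + gamma[1] * Yobs)))
    r = 1.0 - T / pi                         # residual of restriction (5)
    uK = np.vander(X, K, increasing=True)    # basis u_K(X) = (1, X, ..., X^{K-1})
    g = np.column_stack([r[:, None] * uK, theta - T / pi * Yobs])
    return g.mean(axis=0), g

def objective(params, K, W):
    m, _ = g_bar(params, K)
    return m @ W @ m

K = 4
start = np.array([0.5, 0.0, 0.5])
# Step I: preliminary estimator with a simple (identity) weighting matrix.
step1 = minimize(objective, start, args=(K, np.eye(K + 1)), method="BFGS")
# Step II: re-weight with the inverse of the estimated moment covariance D.
_, g1 = g_bar(step1.x, K)
D = g1.T @ g1 / N
step2 = minimize(objective, step1.x, args=(K, np.linalg.inv(D)), method="BFGS")
gamma_hat, theta_hat = step2.x[:2], step2.x[2]
print("gamma_hat:", gamma_hat, "theta_hat:", theta_hat)
```

Note that the moment functions never evaluate $Y$ for nonrespondents: the term $1 - T_i/\pi(\boldsymbol{Z}_i;\gamma)$ equals one whenever $T_i = 0$, so only the observed outcomes enter the computation.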
Our estimation problem is a parametric one, requiring neither modeling nor nonparametric estimation of $f_{Y|\boldsymbol{X},T}(y|x,1)$. In contrast, the estimators proposed in \cite{riddles2016propensity} and \cite{morikawa2016semiparametric} could be inconsistent if $f_{Y|\boldsymbol{X},T}(y|x,1)$ is incorrectly specified, or suffer from the curse of dimensionality and the bandwidth selection problem of the nonparametric estimation of $f_{Y|\boldsymbol{X},T}(y|x,1)$. \section{Asymptotic Theory} In this section, we show that the results in (\ref{Hansen1})-(\ref{Hansen3}) still hold with expanding $K$; all technical proofs can be found in the supplemental material \cite{alz2018}. First, we establish the convergence rate of the first step estimator $(\check{\gamma},\check{\theta})$. \begin{theorem} Under Assumptions 2.1-2.2 and Assumptions \ref{as:id}, \ref{as:suppX}, \ref{as:eigen}, \ref{as:iid}, \ref{as:pi}, and \ref{as:K&N} listed in the Appendix, with $K=o(N^{1/3})$, the first step estimator satisfies \begin{equation*} (\check{\gamma}-\gamma _{0},\check{\theta}-\theta _{0})=O_{p}\left(N^{-1/2}\right). \end{equation*} \end{theorem} Next, we establish the large sample properties of the infeasible best estimator $(\overline{\gamma },\overline{\theta })$ without imposing the smoothness Assumptions \ref{as:proj_smooth} and \ref{as:boundS} listed in the Appendix. 
\begin{theorem} Under Assumptions 2.1-2.2 and Assumptions \ref{as:id}, \ref{as:suppX}, \ref{as:eigen}, \ref{as:iid}, \ref{as:pi}, and \ref{as:K&N} listed in the Appendix, with $K=o(N^{1/3})$, the infeasible best estimator satisfies \begin{equation*} \boldsymbol{V}_{K}^{-1/2}\binom{\sqrt{N}(\overline{\gamma }-\gamma _{0})}{\sqrt{N}(\overline{\theta }-\theta _{0})}\rightarrow N\left( 0,I_{(p+1)\times (p+1)}\right) \text{ in distribution.} \end{equation*} \end{theorem} If, in addition, the smoothness Assumptions \ref{as:proj_smooth} and \ref{as:boundS} are satisfied, the next result shows that $\boldsymbol{V}_{K}\rightarrow \boldsymbol{V}_{eff}$ in probability, where $\boldsymbol{V}_{eff}$ is the semiparametric efficiency bound of $(\gamma _{0},\theta _{0})$ {\color{black}derived in \cite{morikawa2016semiparametric}; see Lemma 1 in Section 8.3 of the Appendix.} \begin{theorem} Under Assumptions 2.1-2.2 and Assumptions 1-\ref{as:K&N} listed in the Appendix, with $K=o(N^{1/3})$, we obtain \begin{equation*} \boldsymbol{V}_{K}\rightarrow \boldsymbol{V}_{eff}\text{ in probability.} \end{equation*} \end{theorem} By Theorems 1-3, the infeasible best estimator attains the semiparametric efficiency bound. The next result establishes the asymptotic equivalence between the best estimator $(\widehat{\gamma },\widehat{\theta })$ and the infeasible best estimator $(\overline{\gamma },\overline{\theta })$, implying that the best estimator also attains the semiparametric efficiency bound. \begin{theorem} Under Assumptions 2.1-2.2 and Assumptions 1-\ref{as:K&N} listed in the Appendix, with $K=o(N^{1/3})$, we obtain \begin{equation*} \binom{\sqrt{N}(\overline{\gamma }-\widehat{\gamma })}{\sqrt{N}(\overline{\theta }-\widehat{\theta })}=o_{p}(1). \end{equation*} \end{theorem} \section{Variance Estimation} In order to conduct statistical inference, we need a consistent covariance estimator. 
Notice that (\ref{Hansen4}) states that $\widehat{\boldsymbol{V}}_{K}$ is a consistent estimator of $\boldsymbol{V}_{K}$ for every fixed $K\geq p$. We now show that this result still holds true with expanding $K$, thereby providing a consistent covariance estimator. \begin{theorem} Under Assumptions 2.1-2.2 and Assumptions 1-\ref{as:K&N} listed in the Appendix, with $K=o(N^{1/3})$, we obtain \begin{equation*} \widehat{\boldsymbol{V}}_{K}\rightarrow \boldsymbol{V}_{K}\text{ in probability.} \end{equation*} \end{theorem} We notice that our covariance estimator is much simpler and more natural than the one suggested in \cite{morikawa2016semiparametric}, which requires nonparametric estimation of $f_{Y|\boldsymbol{X},T}(y|x,1)$ and tends to have poor performance in finite samples. Our covariance estimator is the standard GMM covariance estimator and is easily computed by existing statistical packages. \section{Selection of $K$} The large sample properties of the proposed estimator established in the previous sections allow for a wide range of values of $K$, and theoretically the sensitivity of the estimator to the choice of $K$ is not so pronounced: the choice of $K$ affects higher order terms in a way that does not affect consistency and asymptotic normality. Nevertheless, there may be some higher order effect of the choice of $K$ on performance. In this section, we present two data-driven approaches to select $K$. Both approaches strike a balance between bias and variance. \\ \textbf{Covariate balancing approach}. The first approach attempts to balance the distribution of the covariates between the whole population and the non-missing population through weighting. Notice that \begin{equation*} \mathbb{E}\left[ \frac{T}{\pi (\boldsymbol{Z};\gamma _{0})}I(X_{j}\leq x_{j})\right] =\mathbb{E}[I(X_{j}\leq x_{j})]\ ,\ j\in \{1,...,r\}\ , \end{equation*} where $X_{j}$ is the $j^{th}$ component of $\boldsymbol{X}$ and $I(X_{j}\leq x_{j})$ is the indicator function. 
Obviously the propensity score function $\pi (\boldsymbol{Z};\gamma _{0})$ plays the role of balancing. Notice that the estimator $\hat{\gamma}$ depends on $K$. For a given $K$, we compute \begin{equation*} \hat{F}_{N,K}^{j}(x_{j}):=\frac{1}{N}\sum_{i=1}^{N}\frac{T_{i}}{\pi (\boldsymbol{Z}_{i};\hat{\gamma})}I(X_{ij}\leq x_{j}),\;j\in \{1,\ldots ,r\}\text{.} \end{equation*} We also compute the empirical distributions of the covariates \begin{equation*} \tilde{F}_{N}^{j}(x_{j}):=\frac{1}{N}\sum_{i=1}^{N}I(X_{ij}\leq x_{j}),\;j\in \{1,\ldots ,r\}\text{.} \end{equation*} We choose the lowest $K$ such that the difference between $\{\hat{F}_{N,K}^{j}\}_{j=1}^{r}$ and $\{\tilde{F}_{N}^{j}\}_{j=1}^{r}$ is small. Denote the upper bound of $K$ by $\bar{K}$ (e.g. $\bar{K}=7$ in our simulation studies). We choose $K\in \{1,...,\bar{K}\}$ to minimize the aggregate Kolmogorov-Smirnov distance between $\{\hat{F}_{N,K}^{j}\}_{j=1}^{r}$ and $\{\tilde{F}_{N}^{j}\}_{j=1}^{r}$: \begin{equation*} \hat{K}=\arg \min_{K\in \{1,...,\bar{K}\}}{D}_{N}(K),\quad {D}_{N}(K):=\sum_{j=1}^{r}\sup_{x_{j}\in \mathbb{R}}\left\vert \tilde{F}_{N}^{j}(x_{j})-\hat{F}_{N,K}^{j}(x_{j})\right\vert . \end{equation*} {\color{black} \textbf{Higher order MSE approach}. The second approach chooses $K$ to minimize the mean-squared error of the estimator. \cite{donald2009choosing} derive the higher-order asymptotic mean-square error (MSE) of a linear combination $\boldsymbol{t}^{\top } \hat{\gamma }$ for some fixed $\boldsymbol{t} \in \mathbb{R}^{p}$. \\ Let $\check{\gamma }$ be some preliminary estimator. 
Define: \begin{align*} & \widehat{\Pi } (K ;\boldsymbol{t}) =\sum _{i =1}^{N}\hat{\xi }_{i i} \rho (T_{i} ,\boldsymbol{X}_{i} ,Y_{i} ;\check{\gamma }) \cdot (\mathbf{t}^{\top } \hat{\Omega }_{p \times p}^{ -1} \tilde{\mathbf{\eta }}_{i})\text{,} \\ & \hat{\Phi } (K ;\boldsymbol{t}) =\sum _{i =1}^{N}\hat{\xi }_{i i} \left \{\mathbf{t}^{\top } \hat{\Omega }_{p \times p}^{ -1} \left [\widehat{\mathbf{D}}_{i}^{ \ast } \rho (T_{i} ,\boldsymbol{X}_{i} ,Y_{i} ;\check{\gamma })^{2} - \nabla _{\gamma }\rho (T_{i} ,\boldsymbol{X}_{i} ,Y_{i} ;\check{\gamma })\right ]\right \}^{2} \\ & \text{\quad \quad \quad \quad \quad \quad } -\mathbf{t}^{\top } \hat{\Omega }_{p \times p}^{ -1} (\hat{\Gamma }_{K \times p})^{\top } \hat{\Upsilon }_{K \times K}^{ -1} \hat{\Gamma }_{K \times p} \hat{\Omega }_{p \times p}^{ -1} \boldsymbol{t}\;\text{,}\end{align*} where $\rho(T_i,\boldsymbol{X}_i,Y_i;\check{\gamma})$, $\hat{\Omega}_{p\times p}$, $\tilde{\eta}_i$, $\hat{\xi}_{ii}$, $\hat{\boldsymbol{D}}_i^*$, $\hat{\Gamma}_{K\times p}$, and $\hat{\Upsilon}_{K\times K}$ are defined in Section 8.2 of the Appendix. Notice that $\widehat{\Pi}(K;\boldsymbol{t})^2/N$ is an estimate of the squared bias term derived in \cite{newey2004higher} and $\hat{\Phi } (K;\boldsymbol{t})$ is an estimate of the asymptotic variance. \\ The second approach chooses $K$ to minimize the following higher-order MSEs of $\hat{\gamma }_{j} ,j =1 ,\ldots ,p$: \begin{align}S_{G M M} (K) =\sum _{j =1}^{p}\left \{\frac{1}{N} \widehat{\Pi } (K ;e_{j})^{2} +\hat{\Phi } (K ;e_{j})\right \}\;\text{,} \label{eq:criteria_K}\end{align}where $e_{j}$ is the $j$th column of the $p$-dimensional identity matrix. In practice, we set the upper bound $\bar{K}$ and then choose $K \in \{1 ,2 ,\ldots ,\bar{K}\}$ to minimize the criterion \eqref{eq:criteria_K}}.
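As a sketch of how the criterion \eqref{eq:criteria_K} can be assembled, the following code computes $S_{GMM}(K)$ from the residuals $\rho_i$, their gradients $\nabla_\gamma\rho_i$, and the matching functions $u_K(\boldsymbol{X}_i)$, following the definitions of $\hat{\Upsilon}$, $\hat{\Gamma}$, $\hat{\Omega}$, $\hat{\xi}_{ii}$, $\tilde{\eta}_i$ and $\widehat{\boldsymbol{D}}_i^*$ given in the Appendix. It is an illustrative implementation under those definitions, not the authors' code.

```python
import numpy as np

def s_gmm_criterion(rho, grad_rho, u):
    """Higher-order MSE criterion S_GMM(K).
    rho: (N,) residuals rho(T_i, X_i, Y_i; gamma_check);
    grad_rho: (N, p) gradients w.r.t. gamma; u: (N, K) matching functions."""
    N, p = grad_rho.shape
    Ups = (u * rho[:, None] ** 2).T @ u / N        # Upsilon_{K x K}
    Gam = u.T @ grad_rho / N                       # Gamma_{K x p}
    Ups_inv = np.linalg.inv(Ups)
    Omega = Gam.T @ Ups_inv @ Gam                  # Omega_{p x p}
    Om_inv = np.linalg.inv(Omega)
    Q_inv = np.linalg.inv(u.T @ u / N)             # ((1/N) sum_j u_j u_j^T)^{-1}
    xi_ii = np.einsum('ik,kl,il->i', u, Ups_inv, u) / N   # xi_{ii}
    d = u @ Q_inv @ Gam                            # rows: tilde d_i
    eta = grad_rho - d                             # rows: tilde eta_i
    Dstar = u @ Ups_inv @ Gam                      # rows: D*_i
    S = 0.0
    for j in range(p):
        t = np.zeros(p); t[j] = 1.0                # e_j
        a = Om_inv @ t
        Pi = np.sum(xi_ii * rho * (eta @ a))       # hat Pi(K; e_j)
        inner = (Dstar * rho[:, None] ** 2 - grad_rho) @ a
        Phi = np.sum(xi_ii * inner ** 2) \
            - t @ Om_inv @ (Gam.T @ Ups_inv @ Gam) @ Om_inv @ t  # hat Phi(K; e_j)
        S += Pi ** 2 / N + Phi
    return S
```

One would evaluate `s_gmm_criterion` for each candidate $K \in \{1,\ldots,\bar{K}\}$ (rebuilding `u` each time) and keep the minimizer.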
\begin{table} \centering \caption{Simulation results under Scenario I} \begin{threeparttable} {\fontsize{12pt}{15pt} \selectfont \begin{tabular}{c|ccccccccccc}
\hline \hline \multicolumn{10}{c}{$n = 200$} \\ \hline \\
& $\hat{\alpha}$& $\hat{\beta}$ & $\hat{\theta}$ & $\hat{\alpha}_{MK}$ & $\hat{\beta}_{MK}$ & $\hat{\theta}_{MK}$ & $\tilde{\alpha}_{MAR}$ & $\tilde{\beta}_{MAR}$ & $\tilde{\theta}_{MAR}$ \\[1mm] \hline\\
Bias & 0.028 & -0.125 & 0.039 & 0.055 & 0.120 & 0.106 & -0.997 & 0.167 & 0.301 \\
Stdev & 0.254 & 0.413 & 0.129 & 0.229 & 0.272 & 0.118 & 0.197 & 0.266 & 0.101 \\
MSE & 0.065 & 0.186 & 0.018 & 0.055 & 0.088 & 0.025 & 1.033 & 0.099 & 0.101 \\
CP & --- & --- & 0.908 & --- & --- & 0.908 & --- & --- & 0.22 \\
\hline \hline \multicolumn{10}{c}{$n = 500$} \\ \hline \\
& $\hat{\alpha}$& $\hat{\beta}$ & $\hat{\theta}$ & $\hat{\alpha}_{MK}$ & $\hat{\beta}_{MK}$ & $\hat{\theta}_{MK}$ & $\tilde{\alpha}_{MAR}$ & $\tilde{\beta}_{MAR}$ & $\tilde{\theta}_{MAR}$ \\[1mm] \hline\\
Bias & 0.011 & -0.067 & 0.016 & 0.048 & 0.058 & 0.063 & -0.966 & 0.220 & 0.299 \\
Stdev & 0.161 & 0.282 & 0.090 & 0.151 & 0.193 & 0.077 & 0.126 & 0.160 & 0.063 \\
MSE & 0.026 & 0.084 & 0.008 & 0.025 & 0.040 & 0.010 & 0.949 & 0.074 & 0.093 \\
CP & --- & --- & 0.928 & --- & --- & 0.892 & --- & --- & 0.034 \\
\hline \hline \multicolumn{10}{c}{} \\[1mm] \hline \hline \multicolumn{10}{c}{$n = 1000$} \\ \hline \\
& $\hat{\alpha}$& $\hat{\beta}$ & $\hat{\theta}$ & $\hat{\alpha}_{MK}$ & $\hat{\beta}_{MK}$ & $\hat{\theta}_{MK}$ & $\tilde{\alpha}_{MAR}$ & $\tilde{\beta}_{MAR}$ & $\tilde{\theta}_{MAR}$\\[1mm] \hline\\
Bias & 0.005 & -0.040 & 0.008 & 0.034 & 0.023 & 0.040 & -0.962 & 0.235 & 0.298\\
Stdev & 0.103 & 0.187 & 0.065 & 0.102 & 0.132 & 0.055 & 0.078 & 0.099 & 0.045 \\
MSE & 0.010 & 0.036 & 0.004 & 0.011 & 0.018 & 0.004 & 0.932 & 0.065 & 0.091\\
CP & --- & --- & 0.934 & --- & --- & 0.906 & --- & --- & 0.012 \\
\hline \hline \end{tabular}} {\fontsize{9.5pt}{12pt} \selectfont Stdev: standard deviation;
MSE: mean squared error; CP: coverage probability. The bandwidth used in computing the nonparametric kernel estimators $(\hat{\alpha}_{MK},\hat{\beta}_{MK},\hat{\theta}_{MK})$ is $h=0.15$. } \end{threeparttable} \end{table}
\begin{table} \centering \caption{Simulation results under Scenario II} \begin{threeparttable} {\fontsize{12pt}{15pt} \selectfont \begin{tabular}{c|ccccccccccc}
\hline \hline \multicolumn{10}{c}{$n = 200$} \\ \hline \\
& $\hat{\alpha}$& $\hat{\beta}$ & $\hat{\theta}$ & $\hat{\alpha}_{MK}$ & $\hat{\beta}_{MK}$ & $\hat{\theta}_{MK}$ & $\tilde{\alpha}_{MAR}$ & $\tilde{\beta}_{MAR}$ & $\tilde{\theta}_{MAR}$ \\[1mm] \hline\\
Bias & -0.208 & 0.096 & 0.084 & -0.552 & 0.588 & 0.173 & -2.053 & 1.215 & 0.530 \\
Stdev & 0.646 & 0.555 & 0.201 & 0.372 & 0.245 & 0.125 & 0.809 & 0.148 & 0.205 \\
MSE & 0.462 & 0.318 & 0.047 & 0.443 & 0.406 & 0.045 & 4.873 & 1.498 & 0.323 \\
CP & --- & --- & 0.95 & --- & --- & 0.784 & --- & --- & 0.138 \\
\hline \hline \multicolumn{10}{c}{$n = 500$} \\ \hline \\
& $\hat{\alpha}$& $\hat{\beta}$ & $\hat{\theta}$ & $\hat{\alpha}_{MK}$ & $\hat{\beta}_{MK}$ & $\hat{\theta}_{MK}$ & $\tilde{\alpha}_{MAR}$ & $\tilde{\beta}_{MAR}$ & $\tilde{\theta}_{MAR}$ \\[1mm] \hline\\
Bias & -0.081 & 0.040 & 0.044 & -0.313 & 0.392 & 0.122 & -1.924 & 1.203 & 0.583 \\
Stdev & 0.406 & 0.363 & 0.131 & 0.261 & 0.186 & 0.085 & 0.175 & 0.064 & 0.132 \\
MSE & 0.171 & 0.134 & 0.019 & 0.166 & 0.188 & 0.022 & 3.732 & 1.451 & 0.357 \\
CP & --- & --- & 0.932 & --- & --- & 0.764 & --- & --- & 0.06 \\
\hline \hline \multicolumn{10}{c}{} \\[1mm] \hline \hline \multicolumn{10}{c}{$n = 1000$} \\ \hline \\
& $\hat{\alpha}$& $\hat{\beta}$ & $\hat{\theta}$ & $\hat{\alpha}_{MK}$ & $\hat{\beta}_{MK}$ & $\hat{\theta}_{MK}$ & $\tilde{\alpha}_{MAR}$ & $\tilde{\beta}_{MAR}$ & $\tilde{\theta}_{MAR}$\\[1mm] \hline\\
Bias & -0.036 & 0.019 & 0.019 & -0.198 & 0.268 & 0.085 & -1.900 & 1.201 & 0.590\\
Stdev & 0.260 & 0.225 & 0.086 & 0.203 & 0.164 & 0.061 & 0.086 & 0.044 & 0.078 \\
MSE & 0.069
& 0.051 & 0.007 & 0.080 & 0.098 & 0.011 & 3.618 & 1.445 & 0.354 \\
CP & --- & --- & 0.932 & --- & --- & 0.768 & --- & --- & 0.018 \\
\hline \hline \end{tabular}} {\fontsize{9.5pt}{14pt} \selectfont Stdev: standard deviation; MSE: mean squared error; CP: coverage probability. The bandwidth used in computing the nonparametric kernel estimators $(\hat{\alpha}_{MK},\hat{\beta}_{MK},\hat{\theta}_{MK})$ is $h=0.05$. } \end{threeparttable} \end{table}
\begin{table} \caption{Simulation results under Scenario III} \centering \begin{threeparttable} {\fontsize{12pt}{15pt} \selectfont \begin{tabular}{c|ccccccccccc}
\hline \hline \multicolumn{10}{c}{$n = 200$} \\ \hline \\
& $\hat{\alpha}$& $\hat{\beta}$ & $\hat{\theta}$ & $\hat{\alpha}_{MK}$ & $\hat{\beta}_{MK}$ & $\hat{\theta}_{MK}$ & $\tilde{\alpha}_{MAR}$ & $\tilde{\beta}_{MAR}$ & $\tilde{\theta}_{MAR}$ \\[1mm] \hline\\
Bias & 0.155 & -0.171 & 0.003 & 0.047 & 0.015 & 0.071 & -2.794 & 0.954 & -1.146 \\
Stdev & 0.584 & 0.585 & 0.155 & 0.376 & 0.190 & 0.131 & 1.395 & 0.396 & 0.263 \\
MSE & 0.365 & 0.372 & 0.024 & 0.144 & 0.036 & 0.022 & 9.758 & 1.069 & 1.384 \\
CP & --- & --- & 0.934 & --- & --- & 0.884 & --- & --- & 0.032 \\
\hline \hline \multicolumn{10}{c}{$n = 500$} \\ \hline \\
& $\hat{\alpha}$& $\hat{\beta}$ & $\hat{\theta}$ & $\hat{\alpha}_{MK}$ & $\hat{\beta}_{MK}$ & $\hat{\theta}_{MK}$ & $\tilde{\alpha}_{MAR}$ & $\tilde{\beta}_{MAR}$ & $\tilde{\theta}_{MAR}$ \\[1mm] \hline\\
Bias & 0.034 & -0.036 & 0.000 & 0.012 & 0.012 & 0.034 & 0.782 & 0.355 & 0.123 \\
Stdev & 0.305 & 0.224 & 0.103 & 0.250 & 0.128 & 0.085 & 0.433 & 0.113 & 0.101 \\
MSE & 0.094 & 0.051 & 0.010 & 0.062 & 0.016 & 0.008 & 0.799 & 0.139 & 0.025 \\
CP & --- & --- & 0.902 & --- & --- & 0.894 & --- & --- & 0.698 \\
\hline \hline \multicolumn{10}{c}{} \\[1mm] \hline \hline \multicolumn{10}{c}{$n = 1000$} \\ \hline\\
& $\hat{\alpha}$& $\hat{\beta}$ & $\hat{\theta}$ & $\hat{\alpha}_{MK}$ & $\hat{\beta}_{MK}$ & $\hat{\theta}_{MK}$ & $\tilde{\alpha}_{MAR}$ &
$\tilde{\beta}_{MAR}$ & $\tilde{\theta}_{MAR}$ \\[1mm] \hline\\
Bias & 0.009 & -0.010 & 0.002 & 0.002 & 0.009 & 0.017 & 0.728 & 0.372 & 0.126 \\
Stdev & 0.215 & 0.157 & 0.069 & 0.167 & 0.083 & 0.056 & 0.302 & 0.078 & 0.067 \\
MSE & 0.046 & 0.024 & 0.004 & 0.028 & 0.007 & 0.003 & 0.621 & 0.144 & 0.020 \\
CP & --- & --- & 0.932 & --- & --- & 0.934 & --- & --- & 0.454 \\
\hline \hline \end{tabular}} {\fontsize{9.5pt}{14pt} \selectfont Stdev: standard deviation; MSE: mean squared error; CP: coverage probability. The bandwidth used in computing the nonparametric kernel estimators $(\hat{\alpha}_{MK},\hat{\beta}_{MK},\hat{\theta}_{MK})$ is $h=0.1$. } \end{threeparttable} \end{table}
\begin{table} \caption{Simulation results under Scenario IV} \centering \begin{threeparttable} {\fontsize{12pt}{15pt} \selectfont \begin{tabular}{c|ccccccccccc}
\hline \hline \multicolumn{10}{c}{$n = 200$} \\ \hline \\
& $\hat{\alpha}$& $\hat{\beta}$ & $\hat{\theta}$ & $\hat{\alpha}_{MK}$ & $\hat{\beta}_{MK}$ & $\hat{\theta}_{MK}$ & $\tilde{\alpha}_{MAR}$ & $\tilde{\beta}_{MAR}$ & $\tilde{\theta}_{MAR}$ \\[1mm] \hline\\
Bias & 0.097 & -0.114 & 0.005 & -0.018 & 0.027 & 0.043 & -1.002 & 1.003 & 0.136 \\
Stdev & 1.140 & 0.721 & 0.118 & 0.308 & 0.185 & 0.103 & 0.081 & 0.139 & 0.348 \\
MSE & 1.310 & 0.533 & 0.014 & 0.095 & 0.035 & 0.013 & 1.011 & 1.026 & 0.139 \\
CP & --- & --- & 0.914 & --- & --- & 0.92 & --- & --- & 0.998 \\
\hline \hline \multicolumn{10}{c}{$n = 500$}\\ \hline \\
& $\hat{\alpha}$& $\hat{\beta}$ & $\hat{\theta}$ & $\hat{\alpha}_{MK}$ & $\hat{\beta}_{MK}$ & $\hat{\theta}_{MK}$ & $\tilde{\alpha}_{MAR}$ & $\tilde{\beta}_{MAR}$ & $\tilde{\theta}_{MAR}$ \\[1mm] \hline\\
Bias & -0.001 & -0.026 & 0.003 & -0.042 & 0.041 & 0.022 & -1.003 & 1.000 & 0.146 \\
Stdev & 0.203 & 0.139 & 0.071 & 0.172 & 0.100 & 0.067 & 0.048 & 0.088 & 0.199 \\
MSE & 0.041 & 0.020 & 0.005 & 0.031 & 0.011 & 0.005 & 1.010 & 1.009 & 0.061 \\
CP & --- & --- & 0.944 & --- & --- & 0.946 & --- & --- & 1.000 \\
\hline \hline
\multicolumn{10}{c}{} \\[1mm] \hline \hline \multicolumn{10}{c}{$n = 1000$}\\ \hline \\
& $\hat{\alpha}$& $\hat{\beta}$ & $\hat{\theta}$ & $\hat{\alpha}_{MK}$ & $\hat{\beta}_{MK}$ & $\hat{\theta}_{MK}$ & $\tilde{\alpha}_{MAR}$ & $\tilde{\beta}_{MAR}$ & $\tilde{\theta}_{MAR}$ \\[1mm] \hline\\
Bias & 0.010 & -0.034 & -0.001 & -0.027 & 0.024 & 0.011 & -1.000 & 0.997 & 0.134 \\
Stdev & 0.262 & 0.264 & 0.052 & 0.122 & 0.070 & 0.048 & 0.035 & 0.065 & 0.148 \\
MSE & 0.068 & 0.070 & 0.002 & 0.015 & 0.005 & 0.002 & 1.003 & 1.000 & 0.039\\
CP & --- & --- & 0.936 & --- & --- & 0.932 & --- & --- & 1.000\\
\hline\hline \end{tabular}} {\fontsize{9.5pt}{14pt} \selectfont Stdev: standard deviation; MSE: mean squared error; CP: coverage probability. The bandwidth used in computing the nonparametric kernel estimators $(\hat{\alpha}_{MK},\hat{\beta}_{MK},\hat{\theta}_{MK})$ is $h=0.2$. } \end{threeparttable} \end{table}
\section{Simulations} After establishing the large sample properties of the proposed estimator, we now evaluate its finite sample performance through a small-scale simulation study. We consider four scenarios. In all scenarios, the parameter of interest is $\theta _{0}=\mathbb{E}[Y]$ and the sample size is set at $N=200$, $500$ and $1000$, respectively. \begin{itemize} \item {\color{black} \textbf{Scenario I}: $X$ is generated from the normal distribution $N(0,1)$, and the outcome $Y$ is generated from the normal distribution with mean $X+1$ and unit variance, i.e.\ $Y\sim N(X+1,1)$. The relationship between the outcome variable and the covariate is linear, and the distribution of the outcome is normal. The missing mechanism is modeled by $$\mathbb{P}(T=1|Y,X)=[1+\exp(\alpha_0 +\beta_0 Y)]^{-1}\ ,$$ with the true value $(\alpha_0,\beta_0)=(0,-1.2)$.
The true value of the parameter of interest is $\theta_0=\mathbb{E}[Y]=1$}.\\ \item \textbf{Scenario II}: $X$ is generated from the normal distribution $N(0,1)$, and the outcome $Y$ is generated from the normal distribution with mean $X^{2}+1$ and unit variance, i.e.\ $Y\sim N(X^{2}+1,1)$. Thus the relationship between the outcome variable and the covariate is nonlinear, and the distribution of the outcome is non-normal. The missing mechanism is modeled as \begin{equation*} \mathbb{P}(T=1|Y,X)=[1+\exp (\alpha _{0}+\beta _{0}Y)]^{-1}\ \end{equation*} with the true value $(\alpha _{0},\beta _{0})=(1.25,-1.2)$. The true value of the parameter of interest is $\theta _{0}=\mathbb{E}[Y]=2$. \\ \item \textbf{Scenario III}: The design follows \cite{qin2002estimation}. We generate the outcome from \begin{equation*} Y=0.1X^{2}+ZX^{1/2}/5\ , \end{equation*} where $Z$ and $X$ are independent, $Z$ is a standard normal random variable, and $X$ follows the $\chi _{(6)}^{2}/2$ distribution. The missing mechanism is modeled as \begin{equation*} \mathbb{P}(T=1|Y,X)=\left[ 1+\exp (\alpha _{0}+\beta _{0}Y)\right] ^{-1}\ \end{equation*} with the true value $(\alpha _{0},\beta _{0})=(3,-1)$. The true value of the target parameter is $\theta _{0}=\mathbb{E}[Y]=1.2$.\\ \item \textbf{Scenario IV}: The design is similar to that in \cite{kang2007demystifying}. $\boldsymbol{Z}=(Z_{1},Z_{2})$ is generated from the standard bivariate normal distribution, and $Y$ is generated from the normal distribution with mean $2+Z_{1}$ and unit variance. The missing mechanism is modeled as \begin{equation*} \mathbb{P}(T=1|Y,X_{1},X_{2})=[1+\exp (\alpha _{0}Z_{1}+\beta _{0}Y)]^{-1}\ \end{equation*} with $(\alpha _{0},\beta _{0})=(1,-1)$. The true value of the parameter of interest is $\theta _{0}=\mathbb{E}[Y]=2$. Instead of directly observing the covariates $\boldsymbol{Z}$, we observe a non-linear transformation of $\boldsymbol{Z}$: $X_{1}=\exp (Z_{1}/2)$ and $X_{2}=Z_{2}/(1+\exp (Z_{1}))$.
\end{itemize} \ In all scenarios, we generate $J=500$ random samples, and for each sample, we compute the following three estimators: \begin{enumerate} \item Naive estimator. We compute the missing at random estimator $( \tilde{\alpha}_{MAR},\tilde{\beta}_{MAR},\tilde{\theta}_{MAR})$ as \begin{equation*} \tilde{\theta}_{MAR}=\frac{1}{N}\sum_{i=1}^{N}\frac{T_{i}}{\pi (\boldsymbol{X} _{i};\tilde{\alpha}_{MAR},\tilde{\beta}_{MAR})}Y_{i}\ , \end{equation*} where $\pi (\boldsymbol{X}_{i};\tilde{\alpha}_{MAR},\tilde{\beta}_{MAR})$ is an estimated response model. In Scenarios I, II \& III, $ \pi (\boldsymbol{X}_{i};\tilde{\alpha}_{MAR},\tilde{\beta}_{MAR})=\left[ 1+\exp (\tilde{\alpha}_{MAR}+\tilde{\beta}_{MAR}X_{i})\right] ^{-1}$ and in Scenario IV $\pi (\boldsymbol{X}_{i};\tilde{\alpha}_{MAR},\tilde{\beta} _{MAR})=\left[ 1+\exp (\tilde{\alpha}_{MAR}Z_{1i}+\tilde{\beta}_{MAR}X_{2i}) \right] ^{-1}$, where $(\tilde{\alpha}_{MAR},\tilde{\beta}_{MAR})$ are estimated by GMM. \item MK2 estimator. We compute $(\hat{\alpha}_{MK},\hat{\beta}_{MK},\hat{ \theta}_{MK})$ using the approach of \cite{morikawa2016semiparametric}, i.e. 
$(\hat{\alpha}_{MK},\hat{\beta}_{MK},\hat{\theta}_{MK})$ is the solution of $$\sum_{i=1}^N\left(\hat{\boldsymbol{S}}_{1}(T_i,\boldsymbol{Z}_i;\alpha,\beta)^{\top}, \hat{S}_2(T_i,\boldsymbol{Z}_i;\alpha,\beta,\theta)\right)^{\top}=0\ ,$$ where \begin{align*} &\hat{\boldsymbol{S}}_{1}(T,\boldsymbol{Z};\alpha,\beta)=-\left(1-\frac{T}{\pi(\boldsymbol{Z};\alpha,\beta)}\right)\mathbb{E}^\star\left[\frac{\nabla_{\gamma}\pi(\boldsymbol{Z};\alpha,\beta)}{1-\pi(\boldsymbol{Z};\alpha,\beta)}\bigg|\boldsymbol{X}\right]\ , \\ &\hat{S}_2(T,\boldsymbol{Z};\alpha,\beta,\theta)=-\frac{T}{\pi(\boldsymbol{Z};\alpha,\beta)}U(\boldsymbol{Z})+\theta-\left(1-\frac{T}{\pi(\boldsymbol{Z};\alpha,\beta)}\right) \mathbb{E}^\star\left[U(\boldsymbol{Z})|\boldsymbol{X}\right]\ , \end{align*} and for any function $g(\boldsymbol{Z})$ the quantity $\mathbb{E}^\star[g(\boldsymbol{Z})|\boldsymbol{X}]$ is defined by \begin{align*} &\mathbb{E}^\star[g(\boldsymbol{Z})|\boldsymbol{X}=\boldsymbol{x}]:=\frac{\sum_{j=1}^NK_h(\boldsymbol{x}-\boldsymbol{X}_j)T_j\pi(\boldsymbol{Z}_j;\alpha,\beta)^{-1}O(\boldsymbol{x},Y_j;\alpha,\beta)g(\boldsymbol{x},Y_j)}{\sum_{j=1}^NK_h(\boldsymbol{x}-\boldsymbol{X}_j)T_j\pi(\boldsymbol{Z}_j;\alpha,\beta)^{-1}O(\boldsymbol{x},Y_j;\alpha,\beta)}\ ; \\ &O(\boldsymbol{z};\alpha,\beta)=\frac{1-\pi(\boldsymbol{z};\alpha,\beta)}{\pi(\boldsymbol{z};\alpha,\beta)} \ , \end{align*} $K_h(\boldsymbol{x}-\boldsymbol{w})=K\left((\boldsymbol{x}-\boldsymbol{w})/h\right)$, $K(\cdot)$ is the Gaussian kernel function and $h$ is the bandwidth. \item Our GMM estimator. We compute $(\hat{\alpha},\hat{\beta},\hat{\theta})$ using the proposed approach and the covariate balancing approach to select $K$, {\color{black}with $\bar{K}=7$ in Scenarios I, II, III, and with $\bar{K}=10$ in Scenario IV.
Here $\bar{K}$ is the maximal number of candidate moments to be considered.} \end{enumerate} The simulation results (the bias, the standard deviation (Stdev), the mean squared error (MSE), and the coverage probability (CP) at significance level $\alpha =0.05$ of the point estimates) for all scenarios are reported in Tables 1, 2, 3 and 4, respectively. {\color{black} The histogram of the selected $K$'s (based on $500$ Monte Carlo samples) in all scenarios is reported in Figure 1.} Examining these tables, we find: \begin{enumerate} \item In all scenarios, the naive estimator based on the missing at random assumption has a large bias, because this assumption does not hold. \item In all scenarios, our proposed estimator of $\mathbb{E}[Y]$ outperforms the MK estimator. \item In all scenarios, our proposed variance estimator has coverage probability close to $95\%$, even when the sample size is small. MK's variance estimator performs well in Scenario IV, but badly in the other scenarios: in Scenario I, the coverage probability using MK's approach converges to $90\%$ rather than $95\%$; in Scenario II, the CP values are far from $95\%$ whether the sample size is small or large; in Scenario III, MK's variance estimator is consistent only when the sample size is large. \item {\color{black}When the sample size is small, the optimal $K$ tends to be $2$. When the sample size is large, the optimal $K$ tends to be $3$. The growth rate of $K$ is extremely slow compared to that of the sample size $n$, which is consistent with our theoretical Assumption \ref{as:K&N}.} \end{enumerate} These results clearly show that the proposed approach has better finite sample performance. \begin{figure} \caption{Histogram of $K$} \end{figure} \section{Discussion} The missing not at random problem is common in applications. \cite{morikawa2016semiparametric} studies the efficient estimation of a class of missing not at random problems.
But their approach requires nonparametric estimation of the conditional density function and thus suffers from the curse of dimensionality and the smoothing parameter selection problem. In this paper, we study the same class of missing not at random problems but present a much simpler and more natural efficient estimator. Our approach is based on a parametric moment restriction model that does not require nonparametric estimation and hence suffers from neither the curse of dimensionality nor the bandwidth selection problem. Indeed, the simulation results confirm that the proposed approach outperforms their approach in finite samples. The GMM approach is also easy to adapt to stratified sampling and other sampling schemes common in survey data. \\ Both approaches require correct parameterization of the propensity score function. If the propensity score function is misspecified, then both approaches yield inconsistent estimates. There have been some attempts in the literature to mitigate this problem. For instance, Zhao and Shao (2015) introduce a partially linear index to model the missing mechanism. The proposed approach can be extended in this direction. Such an extension will be pursued in a future study. \section{Appendix} \subsection{Assumptions} We first introduce the smoothness classes of functions used in the nonparametric estimation; see e.g.\ \cite{stone1982optimal, stone1994use}, \cite{robinson1988root}, \cite{newey1997convergence}, \cite{horowitz2012semiparametric} and \cite{chen2007large}. Suppose that $\mathcal{X}$ is the Cartesian product of $r$ compact intervals. Let $0<\delta \leq 1$. A function $f$ on $\mathcal{X}$ is said to satisfy a H\"{o}lder condition with exponent $\delta$ if there is a positive constant $L$ such that $\|f(\boldsymbol{x}_1)-f(\boldsymbol{x}_2)\|\leq L \|\boldsymbol{x}_1-\boldsymbol{x}_2\|^{\delta}$ for all $\boldsymbol{x}_1,\boldsymbol{x}_2\in\mathcal{X}$.
Given an $r$-tuple $\boldsymbol{\alpha}=(\alpha_1,...,\alpha_r)$ of nonnegative integers, denote $[\boldsymbol{\alpha}]=\alpha_1+\cdots+\alpha_r$ and let $D^{\boldsymbol{\alpha}}$ denote the differential operator defined by $D^{\boldsymbol{\alpha}}=\frac{\partial^{[\boldsymbol{\alpha}]}}{\partial x_1^{\alpha_1}\cdots \partial x_r^{\alpha_r}}$, where $\boldsymbol{x}=( x_1,..., x_r)$. \begin{definition} \emph{Let $s_0$ be a nonnegative integer and $s:=s_0+\delta$. The function $f$ on $\mathcal{X}$ is said to be} $s$-smooth \emph{if it is $s_0$ times continuously differentiable on $\mathcal{X}$ and $D^{\boldsymbol{\alpha}}f$ satisfies a H\"{o}lder condition with exponent $\delta$ for all $\boldsymbol{\alpha}$ with $[\boldsymbol{\alpha}]=s_0$}. \end{definition} The following notations are needed for presenting the efficiency bounds: \begin{align} & O (\boldsymbol{Z}):=\frac{1 -\pi (\boldsymbol{Z} ;\gamma _{0})}{\pi (\boldsymbol{Z} ;\gamma _{0})} ,\text{\quad }\boldsymbol{S}_{0} (\boldsymbol{Z}):= -\frac{ \nabla _{\gamma }\pi (\boldsymbol{Z} ;\gamma _{0})}{1 -\pi (\boldsymbol{Z} ;\gamma _{0})} \label{def:os}\ , \\ &m(\boldsymbol{X}):=\frac{\mathbb{E}[O(\boldsymbol{Z})\boldsymbol{S}_{0} (\boldsymbol{Z})\vert \boldsymbol{X}]}{\mathbb{E}[O(\boldsymbol{Z})\vert \boldsymbol{X}]} ,\text{\quad }R (\boldsymbol{X}):=\frac{\mathbb{E}[O(\boldsymbol{Z})U (\boldsymbol{Z})\vert \boldsymbol{X}]}{\mathbb{E}[O(\boldsymbol{Z})\vert \boldsymbol{X}]}\; \label{def:osmr}\ , \\ & \boldsymbol{S}_{1} (T ,\boldsymbol{Z} ;\gamma _{0}) :=\left (1 -\frac{T}{\pi (\boldsymbol{Z} ;\gamma _{0})}\right ) m (\boldsymbol{X})\;\text{,} \label{def:S1}\\ & S_{2} (T ,\boldsymbol{Z} ;\gamma _{0} ,\theta _{0}) := -\frac{T}{\pi (\boldsymbol{Z} ;\gamma _{0})} U (\boldsymbol{Z}) +\theta _{0} -\left (1 -\frac{T}{\pi (\boldsymbol{Z} ;\gamma _{0})}\right ) R (\boldsymbol{X})\;\text{.} \label{def:S2} \end{align} The following assumptions are required in this paper: \begin{assumption} \label{as:id} There exists a
nonresponse instrumental variable $\boldsymbol{X}_{2}$, i.e., $\boldsymbol{X} =(\boldsymbol{X}_{1}^{\intercal} ,\boldsymbol{X}_{2}^{\intercal})^{\intercal}\text{,}$ such that $\boldsymbol{X}_{2}$ is independent of $T$ given $\boldsymbol{X}_{1}$ and $Y$; furthermore, $\boldsymbol{X}_{2}$ is correlated with $Y$. \end{assumption} \begin{assumption} \label{as:suppX} The support of $\boldsymbol{X}$, which is denoted by $\mathcal{X}$, is a Cartesian product of $r$ compact intervals, and we denote $\boldsymbol{X}=(X_1,...,X_r)^{\top}$. \end{assumption} \begin{assumption} \label{as:proj_smooth} The functions $\mathbb{E}[O(\boldsymbol{Z})S_{0} (\boldsymbol{Z})\vert \boldsymbol{X} =\boldsymbol{x}]$, $\mathbb{E}[O(\boldsymbol{Z})U (\boldsymbol{Z})\vert \boldsymbol{X} =\boldsymbol{x}]$ and $\mathbb{E}[O(\boldsymbol{Z})\vert \boldsymbol{X} =\boldsymbol{x}]$ are $s$-smooth in $\boldsymbol{x}$, where $s > 0$. \end{assumption} \begin{assumption}\label{as:eigen} There exist two finite positive constants $\underline{a}$ and $\overline{a}$ such that the smallest (resp. largest) eigenvalue of $\mathbb{E}[u_K(\boldsymbol{X})u_K^{\top}(\boldsymbol{X})]$ is bounded below by $\underline{a}$ (resp. bounded above by
$\overline{a}$) uniformly in $K$, i.e., $$0<\underline{a}\leq \lambda_{\min}(\mathbb{E}[u_{K}(\boldsymbol{X})u_{K} (\boldsymbol{X})^{\top }])\leq \lambda_{\max}(\mathbb{E}[u_{K}(\boldsymbol{X})u_{K} (\boldsymbol{X})^{\top }])\leq \overline{a}<\infty\ .$$ \end{assumption} \textbf{Remark}: Assumption \ref{as:eigen} implies the following results: \begin{enumerate} \item \begin{align}\label{bound:Eu_K^2} \mathbb{E}[\|u_K(\boldsymbol{X})\|^2]=\tr\left(\mathbb{E}\left[u_K(\boldsymbol{X})u_K(\boldsymbol{X})^{\top}\right]\right)=O(K)\ ; \end{align} \item the matrices $\overline{a}\cdot I_{K\times K}-\mathbb{E}[u_{K}(\boldsymbol{X})u_{K} (\boldsymbol{X})^{\top }]$ and $\mathbb{E}[u_{K}(\boldsymbol{X})u_{K} (\boldsymbol{X})^{\top }]-\underline{a}\cdot I_{K\times K}$ are positive semidefinite, and \begin{align}\label{bound_u_kK} \underline{a}\leq \inf_{k\in\{1,...,K\}}\mathbb{E}[u_{kK}(\boldsymbol{X})^2]\leq \sup_{k\in\{1,...,K\}}\mathbb{E}[u_{kK}(\boldsymbol{X})^2] \leq \overline{a}\ . \end{align} \end{enumerate} \begin{assumption} \label{as:iid} The full data $\{(T_{i} ,\boldsymbol{X}_{i} ,Y_{i})\}_{i =1}^{N}$ are independently and identically distributed. \end{assumption} \begin{assumption} \label{as:boundS} $\boldsymbol{S}_{eff}(T,\boldsymbol{Z};\gamma,\theta ):=(\boldsymbol{S}_{1}^{\intercal}(T,\boldsymbol{Z};\gamma ),S_{2}(T,\boldsymbol{Z};\gamma,\theta))^{\intercal}$ is continuously differentiable at each $(\gamma ,\theta ) \in \Gamma \times \Theta $ with probability one, and $\mathbb{E} \left [ \partial \boldsymbol{S}_{e f f} (\gamma ,\theta )/ \partial (\gamma ^{\top } ,\theta )\right ]$ is nonsingular at $(\gamma _{0} ,\theta _{0})$.
\end{assumption} \begin{assumption} \label{as:pi} The response probability $\pi $ satisfies the following conditions: \begin{enumerate} \item there exist two positive constants $\bar{c}$ and $\underline{c}$ such that $0 <\underline{c} \leq \pi (\boldsymbol{x} ,y ;\gamma ) \leq \bar{c} <1$ for all $\gamma \in \Gamma $ and $(\boldsymbol{x} ,y) \in \mathcal{X} \times \mathbb{R}$; \item the propensity score $\pi (\boldsymbol{x},y;\gamma )$ is twice continuously differentiable in $\gamma \in \Gamma $, and the derivatives are uniformly bounded; \item for any $\gamma\in \Gamma$, the conditional functions $\mathbb{E}\left[1-\frac{T}{\pi(\boldsymbol{Z};\gamma)}|\boldsymbol{X}=\boldsymbol{x}\right]$ and $\mathbb{E}\left[\frac{\nabla_{\gamma}\pi(\boldsymbol{Z};\gamma)}{\pi(\boldsymbol{Z};\gamma)}\bigg|\boldsymbol{X}=\boldsymbol{x}\right]$ are $s$-smooth in $\boldsymbol{x}$, where $s> 0$. \end{enumerate} \end{assumption} \begin{assumption} \label{as:K&N} Suppose $K \rightarrow \infty $ and {\color{black}$K^3/N \rightarrow 0$}. \end{assumption} Assumption \ref{as:id} is used for the identification of the model, as discussed in \cite{wang2014instrumental}. Assumptions \ref{as:suppX} and \ref{as:proj_smooth} are required for uniform boundedness of the approximations. Assumption \ref{as:eigen} is a standard assumption in nonparametric sieve approximation; see also \cite{newey1997convergence}. Assumption \ref{as:iid} is a standard condition for statistical sampling. Assumptions \ref{as:boundS}--\ref{as:pi} are required for the convergence of our estimator as well as the boundedness of the asymptotic variance. Assumption \ref{as:K&N} is the same as Assumption 2 in \cite{newey2003instrumental}; it is required to control the stochastic order of the residual terms. It is also desirable in practice: since $K$ grows very slowly with $N$, a relatively small number of moment conditions suffices for the proposed method to perform well.
{\color{black}\subsection{Discussion on $u_K$}\label{sec:uK} To construct the GMM estimator, we need to specify the matching function $u_K(\boldsymbol{X})$. Although the approximation theory is derived for general sequences of approximating functions, the most common class of approximating functions is power series. Suppose the dimension of the covariate $\boldsymbol{X}$ is $r\in\mathbb{N}$, namely $\boldsymbol{X}=(X_1,...,X_r)^{\top}$. Let $\lambda =(\lambda _{1} ,\ldots ,\lambda _{r})^{\top }$ be an $r$-dimensional vector of nonnegative integers (a multi-index), with norm $\vert \lambda \vert =\sum _{j =1}^{r}\lambda _{j}$. Let $(\lambda (k))_{k =1}^{\infty }$ be a sequence that includes all distinct multi-indices and satisfies $\vert \lambda (k)\vert \leq \vert \lambda (k +1)\vert $, and let $\boldsymbol{X}^{\lambda } =\prod _{j =1}^{r}X_{j}^{\lambda _{j}}$. For such a sequence $\lambda (k)$ we consider the series $u_{k K} (\boldsymbol{X}) =\boldsymbol{X}^{\lambda (k)}$, $k\in\{1,...,K\}$. \cite{newey1997convergence} showed the following property of power series: there exists a universal constant $C >0$ such that \begin{align}\zeta (K):=\underset{\boldsymbol{x} \in \mathcal{X}}{\sup }\Vert u_{K} (\boldsymbol{x})\Vert \leq C K \label{eq:zeta}\ , \end{align}where $\Vert \cdot \Vert $ denotes the usual matrix norm $\Vert A\Vert =\sqrt{\tr (A^{\top } A)}$. \\ Another important issue is the choice of the number of matching functions $K$ in finite samples.
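The degree-ordered multi-index enumeration above is straightforward to implement. The following sketch (illustrative, not the authors' code) builds the first $K$ power-series terms $u_{kK}(\boldsymbol{x})=\boldsymbol{x}^{\lambda(k)}$ with $\vert\lambda(k)\vert\leq\vert\lambda(k+1)\vert$:

```python
import numpy as np
from itertools import combinations_with_replacement

def power_series_basis(X, K):
    """Return the (N, K) matrix whose k-th column is X^{lambda(k)},
    with multi-indices lambda(k) enumerated by increasing total degree."""
    N, r = X.shape
    lambdas = []
    deg = 0
    while len(lambdas) < K:
        # all multi-indices of total degree `deg`: choose which coordinate
        # each of the `deg` powers goes to, with repetition
        for combo in combinations_with_replacement(range(r), deg):
            lam = np.bincount(np.array(combo, dtype=int), minlength=r)
            lambdas.append(lam)
            if len(lambdas) == K:
                break
        deg += 1
    # each column: prod_j X_j^{lambda_j}
    return np.column_stack([np.prod(X ** lam, axis=1) for lam in lambdas])
```

For $r=2$ and $K=4$ this yields the columns $1$, $x_1$, $x_2$, $x_1^2$, matching the degree-ordered enumeration of $\lambda(k)$.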
\cite{donald2009choosing} proposed a strategy for an appropriate choice of $K$ by minimizing the higher order MSE defined in \eqref{eq:criteria_K}, and the following notations are needed to describe this criterion: \begin{align*} & \rho (T_{i} ,\boldsymbol{X}_{i} ,Y_{i} ;\check{\gamma }) =1 -\frac{T_{i}}{\pi (\boldsymbol{X}_{i} ,Y_{i} ;\check{\gamma })} ,\text{\quad }\hat{\Upsilon }_{K \times K} =\frac{1}{N} \sum _{i =1}^{N}\rho (T_{i} ,\boldsymbol{X}_{i} ,Y_{i} ;\check{\gamma })^{2} u_{K} (\boldsymbol{X}_{i})^{ \otimes 2}\text{,} \\ & \widehat{\Gamma }_{K \times p} =\frac{1}{N} \sum _{i =1}^{N}u_{K} (\boldsymbol{X}_{i}) \nabla _{\gamma }\rho (T_{i} ,\boldsymbol{X}_{i} ,Y_{i} ;\check{\gamma })^{\top } ,\text{\quad }\hat{\Omega }_{p \times p} =(\hat{\Gamma }_{K \times p})^{\top } \hat{\Upsilon }_{K \times K}^{ -1} \hat{\Gamma }_{K \times p}\text{,} \\ & \tilde{\mathbf{d}}_{i} =(\hat{\Gamma }_{K \times p})^{\top } \left (\frac{1}{N} \sum _{j =1}^{N}u_{K} (\boldsymbol{X}_{j})^{ \otimes 2}\right )^{ -1} u_{K} (\boldsymbol{X}_{i}) ,\text{\quad }\tilde{\mathbf{\eta }}_{i} = \nabla _{\gamma }\rho (T_{i} ,\boldsymbol{X}_{i} ,Y_{i} ;\check{\gamma }) -\tilde{\mathbf{d}}_{i}\text{,} \end{align*} \begin{align*} & \hat{\xi }_{i j} =\frac{1}{N} u_{K} (\boldsymbol{X}_{i})^{\top } \hat{\Upsilon }_{K \times K}^{-1} u_{K} (\boldsymbol{X}_{j}) ,\text{\quad }\widehat{\boldsymbol{D}}_{i}^{ \ast } =(\hat{\Gamma }_{K \times p})^{\top } \hat{\Upsilon }_{K \times K}^{ -1} u_{K} (\boldsymbol{X}_{i})\;\text{.}\end{align*}} \subsection{Semiparametric Efficiency Bounds} The following lemma is Theorem 1 in \cite{morikawa2016semiparametric}.
\begin{lemma}[\cite{morikawa2016semiparametric}] \label{lemma:MK} The efficient variance bound of $(\gamma _{0} ,\theta _{0})$ is $\boldsymbol{V}_{eff}:=\mathbb{E}\left[\boldsymbol{S}_{eff}(T,\boldsymbol{Z};\gamma_0,\theta_0)^{\otimes 2}\right]^{-1}$, where $\boldsymbol{S}_{eff}=(\boldsymbol{S}_1^{\top},S_2)^{\top}$ and $\boldsymbol{S}_1$, $S_2$ are defined in \eqref{def:S1} and \eqref{def:S2} respectively. \end{lemma} Let $\boldsymbol{V}_{\gamma _{0}}$ (resp. $V_{\theta _{0}}$) be the efficient variance bound of $\gamma_0$ (resp. $\theta_0$). After some simple computation, we obtain \begin{align}\boldsymbol{V}_{\gamma _{0}} = & \mathbb{E} \left [\frac{1 -\pi (\boldsymbol{Z} ;\gamma _{0})}{\pi (\boldsymbol{Z} ;\gamma _{0})} m (\boldsymbol{X})^{ \otimes 2}\right ]^{ -1} \label{effbound_gamma}\end{align} and \begin{align}V_{\theta _{0}} =\mathrm{Var} \left (S_{2} (T ,\boldsymbol{Z} ;\gamma _{0} ,\theta _{0}) -\mathbf{\kappa }^{\intercal } \boldsymbol{S}_{1} (T ,\boldsymbol{Z} ;\gamma _{0})\right )\;\text{,} \label{effbound_theta}\end{align} where \begin{align}\mathbf{\kappa }^{\intercal }:=\mathbb{E} \left [\frac{ \nabla _{\gamma }\pi (\boldsymbol{Z} ;\gamma _{0})^{\intercal }}{\pi (\boldsymbol{Z} ;\gamma _{0})} \{R (\boldsymbol{X}) -U (\boldsymbol{Z})\}\right ] \cdot \mathbb{E} \left [\frac{m (\boldsymbol{X})}{\pi (\boldsymbol{Z} ;\gamma _{0})} \nabla _{\gamma }\pi (\boldsymbol{Z} ;\gamma _{0})^{\intercal }\right ]^{ -1}\;\text{.} \label{def:kappa}\end{align} \end{document}
\begin{document} \title{ Lost-customers approximation of semi-open queueing networks with backordering\\ - An application to minimise the number of robots in robotic mobile fulfilment systems} \date{} \renewcommand\Affilfont{\small \itshape} \author[a,c]{Sonja Otten} \author[a]{Ruslan Krenzler} \author[a]{Lin Xie} \affil[a]{Leuphana University of L\"uneburg, Universit\"atsallee 1, 21335 L\"uneburg } \author[b]{Hans Daduna} \affil[b]{University of Hamburg, Bundesstra\ss e 55, 20146 Hamburg} \author[c]{Karsten Kruse} \affil[c]{Hamburg University of Technology, Am Schwarzenberg-Campus 3, 21073 Hamburg} \renewcommand\Authands{, } \maketitle \begin{abstract} \noindent We consider a semi-open queueing network (SOQN), where a customer requires exactly one resource from the resource pool for service. If there is a resource available, the customer is immediately served and the resource enters an inner network. If there is no resource available, the new customer has to wait in an external queue until one becomes available (``backordering''). When a resource exits the inner network, it is returned to the resource pool and waits for another customer. In this paper, we present a new solution approach. To approximate the inner network with the resource pool of the SOQN, we consider a modification, where newly arriving customers will decide not to join the external queue and are lost if the resource pool is empty (``{lost customers}''). We prove that we can adjust the arrival rate of the modified system so that the throughputs in each node are pairwise identical to those in the original network. We also prove that the probabilities that the nodes with constant service rates are idling are pairwise identical too. Moreover, we provide a closed-form expression for these throughputs and probabilities of idle nodes. 
To approximate the external queue of the SOQN with backordering, we construct a reduced SOQN with backordering, whose inner network consists of only one node, by using Norton's theorem and results from the lost-customers modification. In a final step, we use the closed-form solution of this reduced SOQN to estimate the performance of the original SOQN. We apply our results to robotic mobile fulfilment systems (RMFSs). These are a new type of warehousing system, which has received attention recently due to increasing growth in the e-commerce sector. Instead of sending pickers to the storage area to search for the ordered items and pick them, robots carry shelves with ordered items from the storage area to picking stations. We model the RMFS as an SOQN, analyse its stability and determine the minimal number of robots for such systems using the results from the first part. \begingroup \renewcommand\thefootnote{} \footnote{ ORCID IDs and email addresses:\\ Sonja Otten \includegraphics[width=0.8em]{image/orcid_icon}\hspace{0.4em}\url{https://orcid.org/0000-0002-3124-832X}, [email protected],\\ Ruslan Krenzler \includegraphics[width=0.8em]{image/orcid_icon}\hspace{0.4em}\url{https://orcid.org/0000-0002-6637-1168}, [email protected],\\ Lin Xie \includegraphics[width=0.8em]{image/orcid_icon}\hspace{0.4em}\url{https://orcid.org/0000-0002-3168-4922}, [email protected], \\ Hans Daduna \includegraphics[width=0.8em]{image/orcid_icon}\hspace{0.4em}\url{https://orcid.org/0000-0001-6570-3012}, [email protected], \\ Karsten Kruse \includegraphics[width=0.8em]{image/orcid_icon}\hspace{0.4em}\url{https://orcid.org/0000-0003-1864-4915}, [email protected] } \addtocounter{footnote}{-1} \endgroup \end{abstract} \emph{MSC 2010 Subject Classification:} 60K25, 90B22, 90C59, 90B05 \noindent \emph{Keywords:} semi-open queueing network, backordering, {lost customers}, lost sales, product form approximation, robotic mobile fulfilment system, warehousing \selectlanguage{english}
\global\long\def\etaVect{\boldsymbol{\eta}}
\global\long\def\muOne{\nu_{\text{p}_{1}}}
\global\long\def\muTwo{\nu_{\text{p}_{2}}}
\global\long\def\abrevC#1{b(#1)}
\global\long\def\ThBO#1{TH_{\text{BO},#1}}
\global\long\def\ThLC#1{TH_{\text{LC},#1}}
\global\long\def\meanVisits#1{V_{#1}}
\selectlanguage{british}

\tableofcontents{}

\section{Introduction}

Queueing networks can be broadly classified into three categories, see e.g.~\citet[p. 21ff.]{yao2001fundamentals}, \citet{roy.2016}, \citet{azadeh2017robot}: open queueing networks (OQNs), closed queueing networks (CQNs) and semi-open queueing networks (SOQNs). In an OQN, customers arrive from an external source, request service at several nodes and then leave the system. In contrast, a CQN has no external arrivals or departures; the number of customers cycling in the system is fixed. An SOQN combines characteristics of OQNs and CQNs, see e.g.~\citet{Jia.Heragu.2009}. It resembles an OQN in that customers arrive from an external source and leave the system after service. It resembles a CQN in that there is an overall capacity constraint for the inner network. A customer needs a resource from an associated resource pool for service. If a resource is available, the customer is immediately served and the resource enters an inner network. If no resource is available, the new customer has to wait in an external queue until one becomes available -- we call this waiting regime ``backordering''. When a resource exits the inner network, it returns to the resource pool and waits for the next customer.
There are several application areas of SOQNs. For example, they are adopted for performance analysis of manufacturing systems and service systems (logistics, communication, warehousing and health care), see \citet{roy.2016}. \subsection[Contribution I: New solution approach to SOQN-BO]{Contribution I: New solution approach to SOQN with backordering} In this paper we focus on semi-open queueing networks (SOQNs). The literature on SOQNs is vast, so we point only to the most relevant sources for our present investigation. A detailed overview of SOQNs and their solution methods is presented in \citet{Jia.Heragu.2009}, \citet{EKREN201478} and \citet{roy.2016}. \citet{roy.2016} also compares the numerical accuracy of the solution methods. A recent article, which is not included in these reviews, is provided by \citet{Kim20181}. The most common solution approaches are the matrix-geometric method, aggregation method, network decomposition approach, parametric decomposition method and performance measure bounds. For SOQNs with backordering whose inner network consists of more than one node, closed-form expressions for the steady-state distributions are not available. In this paper we present a new solution approach, which is depicted in \prettyref{fig:overview-models}. To approximate the resource network ($=$ inner network and resource pool) of the SOQN, we consider a modification, where newly arriving customers are lost if the resource pool is empty (``{lost customers}''). For such a modification, closed-form expressions for the steady-state distribution in product form are available. We prove that we can adjust the arrival rate in the modified network so that the throughputs in each node are pairwise identical to those in the original network. We also prove that the probabilities that the nodes with constant service rates are idling are pairwise identical.
Moreover, we provide a closed-form expression both for the throughputs and for these special idle probabilities. To approximate the external queue of the SOQN, we use an indirect two-step approach. In the first step, we reduce the complexity of the modified SOQN by applying Norton's theorem. In the second step, we remove the lost-customer property of the reduced SOQN to recover the backordering property. The idea to approximate backordering systems by lost-customer systems follows \citet[Section 2.1.4, p. 34ff.]{krenzler:16}. This approximation method is attractive because the literature already contains several results for queueing networks with lost customers, e.g.~lost sales models in \citet{otten:18}, \citet{schwarz;sauer;daduna;kulik;szekli:06} and \citet{krishnamoorthy:2011}. For an overview of the literature on systems with lost sales, we refer to \citet{bijvank;vis:11}. \subsection{Contribution II: Application to robotic mobile fulfilment systems} We focus on robotic mobile fulfilment systems (RMFSs). RMFSs are a new type of warehousing system that has recently received attention due to increasing growth in the e-commerce sector. Instead of sending pickers to the storage area to search for the ordered items and pick them, robots carry shelves -- here called \emph{pods} -- with ordered items from the storage area to picking stations. At every picking station there is a person -- the picker -- who takes items from the pods and packs them into boxes according to customers' orders. When the picker does not need the pod any more, the robot either transports the pod directly back to the storage area or first makes a stopover at a replenishment station. Such a fulfilment system poses many decision problems. An overview is presented in \citet[Section 4]{RawSimDecisionRules}. These problems can be classified at strategic, tactical and operational levels.
The literature on RMFSs is, like that on SOQNs, already vast, so we only point to some references closely related to our investigations. RMFSs are modelled as SOQNs in several articles. For an overview of the literature, we refer to \citet[Section 7.2 and Table 4]{azadeh2017.2} and \citet[Section 6]{azadeh2017robot}. They classify articles according to the decision problem of interest and the relevant methodology. Furthermore, a recent overview is presented in \citet{dynamicPolicies}. In our paper, we focus on decisions on the optimal number of robots. Most of these articles analyse decision problems different from the ones considered in this paper. \citet{Yuan.2017} calculate the optimal number of robots. They compare two protocols: pooled and dedicated robots. Their investigations are based on OQNs rather than SOQNs, and they do not consider a replenishment station in their model. \citet{zou2018} determine the number of robots for different battery recovery strategies. Their investigations are based on a nested SOQN. \subsection{Structure of the paper} In \prettyref{sec:A-general-semi-open}, we describe a general SOQN with backordering and analyse its stability. In \prettyref{sec:throughputs-and-idle-times-LC}, we calculate the throughputs and special idle probabilities in steady state. After that, we introduce our new approximation method for an SOQN with backordering. Our approach is depicted in \prettyref{fig:overview-models}. In \prettyref{sec:Approximation}, we present an approximation of the resource network. For that, in \prettyref{sec:Lost-network}, we analyse a modification where newly arriving customers are lost if the resource pool is empty. Then, in \prettyref{sec:Adjustment}, we adjust this system. In \prettyref{sect:TH_BO}, we calculate the throughputs and special idle probabilities in steady state. In \prettyref{sec:Approximation-of-the-external-queue}, we present an approximation for the external queue.
For that, in \prettyref{sec:small-modified}, we reduce the complexity of the modified SOQN with lost customers by applying Norton's theorem. Then, in \prettyref{sub:small-with-backordering}, we remove the lost-customer property to recover the backordering property. In \prettyref{sec:application}, we present an application of our results. We model the RMFS as an SOQN with backordering and apply our results to the problem: ``What is the optimal number of robots?'' We formulate an algorithm to calculate the minimum number of robots and present numerical examples. Finally, \prettyref{sec:Conclusion} concludes the paper.

\begin{figure}
\caption{Overview of the models.}
\label{fig:overview-models}
\end{figure}

\subsection{Notation and preliminaries}

$\mathbb{N}:=\left\{ 1,2,3,\ldots\right\} $, $\mathbb{N}_{0}:=\{0\}\cup\mathbb{N}$, $\mathbb{R}_{0}^{+}:=[0,\infty)$ and $\mathbb{R}^{+}:=(0,\infty)$. The vector $\mathbf{0}$ is a row vector of appropriate size with all entries equal to $0$. The vector $\mathbf{e}$ is a column vector of appropriate size with all entries equal to $1$. The vector $\mathbf{e}_{i}=(0,\ldots,0,\underbrace{1}_{\mathclap{i\text{-th element}}},0,\ldots,0)$ is a vector of appropriate size. $1_{\left\{ expression\right\} }$ is the indicator function, which is $1$ if $expression$ is true and $0$ otherwise. Empty sums are $0$, and empty products are $1$. We call a matrix $M\in\mathbb{R}^{K\times K}$ with countable index set $K$ a ``generator'' if all its off-diagonal elements are non-negative and all its row sums are equal to zero. Throughout this paper it is assumed that all random variables are defined on a common probability space $(\Omega,\mathcal{F},P)$. Furthermore, by ``Markov process'' we mean a time-homogeneous continuous-time strong Markov process with discrete state space -- also known as a Markov jump process.
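As a quick numerical illustration of the ``generator'' definition above (a sketch, not part of the paper's development; the two-state matrix below is a hypothetical example):

```python
import numpy as np

def is_generator(M, tol=1e-12):
    """Check the generator property: all off-diagonal entries are
    non-negative and every row sums to zero."""
    M = np.asarray(M, dtype=float)
    off_diagonal = M - np.diag(np.diag(M))
    return bool(np.all(off_diagonal >= -tol)
                and np.all(np.abs(M.sum(axis=1)) <= tol))

# A two-state Markov jump process with switching rates 1 and 2:
Q = np.array([[-1.0,  1.0],
              [ 2.0, -2.0]])
```

Here `is_generator(Q)` returns `True`, while a matrix with a negative off-diagonal entry or a non-zero row sum fails the check.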
All Markov processes are assumed to be regular and have c\`adl\`ag paths, i.e.\ each path of a process is right-continuous and has left limits everywhere. We call a Markov process regular if it is non-explosive (i.e.\ the sequence of jump times of the process diverges almost surely) and its transition intensity matrix is a generator matrix.

\section{SOQN with backordering\label{sec:A-general-semi-open}}

\subsection{Description of the model\label{sec:RMFS-general-model}}

An SOQN with backordering is shown in \prettyref{fig:SOQN}. Henceforth, we call it an \emph{SOQN-BO}. It represents a queueing network with an additional resource pool.

\begin{figure}
\caption{An SOQN with backordering.}
\label{fig:SOQN}
\end{figure}

Customers arrive one by one according to a Poisson process with rate $\lambda_{\text{BO}}>0$. Every customer requires exactly one resource from the resource pool for service. If there is a resource available, the customer is immediately served and the resource enters an inner network. If there is no resource available, the new customer has to wait in an external queue under the first-come, first-served (FCFS) regime until a resource becomes available (``backordering''). When the resource exits the inner network, it returns to the resource pool, which is henceforth referred to as node $0$, and waits for the next customer. Whenever the external queue is not empty and a resource item is returned to the resource pool, this item is instantaneously synchronised with the customer at the head of the line. The resources therefore move in a closed network, which we will call the \emph{resource network}. The maximal number of resources in the resource pool is $N$. The inner network consists of $J\geq1$ numbered service stations (nodes), denoted by $\overline{J}:=\left\{ 1,\ldots,J\right\} $. Each station $j$ consists of a single server with infinite waiting room under the FCFS regime or the processor-sharing regime. Customers in the network are indistinguishable.
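Since the $N$ resources move in a closed network over node $0$ and the $J$ inner stations, the feasible resource configurations for a small instance can be enumerated directly; a minimal sketch with illustrative sizes ($N=3$, $J=2$), not taken from the paper:

```python
from math import comb

def allocations(total, positions):
    """All tuples of `positions` non-negative integers summing to `total`,
    i.e. all ways to distribute indistinguishable resources over nodes."""
    if positions == 1:
        return [(total,)]
    return [(n,) + rest
            for n in range(total + 1)
            for rest in allocations(total - n, positions - 1)]

# N = 3 resources over node 0 plus J = 2 inner stations:
states = allocations(3, 3)
# By stars and bars there are C(N + J, J) such configurations.
```

Each tuple $(n_{0},n_{1},n_{2})$ lists the number of resources at the resource pool and at the two inner nodes.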
The service times are exponentially distributed random variables with mean $1$. If there are $n_{j}>0$ customers present at node $j$, service at node $j$ is provided with intensity $\nu_{j}(n_{j})>0$. All service and inter-arrival times constitute an independent family of random variables. Movements of resources in the inner network are governed by a Markovian routing mechanism: After synchronisation with a customer, a resource visits node $j$ with probability $r(0,j)\geq0$. A resource, when leaving node $i$, selects with probability $r(i,j)\geq0$ to visit node $j$ next, and then enters node $j$ immediately. It starts service if it finds the server idle, otherwise it joins the tail of the queue at node $j$. This resource can also leave the inner network with probability $r(i,0)\geq0$. We have $\sum_{j=0}^{J}r(i,j)=1$, with $r(0,0):=0$, for all $i\in\overline{J}_{0}:=\left\{ 0,1,\ldots,J\right\} $. Given the departure node $i$, the resource's routing decision is made independently of the network's history. We assume that the routing matrix $\mathcal{R}:=\left(r(i,j):i,j\in\overline{J}_{0}\right)$ is irreducible. To obtain a Markovian process description, we denote by $X_{\text{ex}}(t)$ the number of customers in the external queue at time $t\geq0$, by $Y_{0}(t)$ the number of resources in the resource pool at time $t\geq0$ and by $Y_{j}(t)$, $j\in\overline{J}$, the number of resources present at node $j$ in the inner network at time $t\geq0$, either waiting or in service. We call $Y_{j}(t)$ the queue length at node $j\in\overline{J}$ at time $t\geq0$. Then $\mathbf{Y}(t):=\left(Y_{j}(t):j\in\overline{J}_{0}\right)$ is the queue length vector of the resource network at time $t\geq0$. We define the joint queue length process of the semi-open network with \textbf{b}ack\textbf{o}rdering by
\[
Z_{\text{BO}}:=\left(\left(X_{\text{ex}}(t),\mathbf{Y}(t)\right):t\geq0\right).
\] Then, due to the independence and memorylessness assumptions,\label{independence-memorylessness} $Z_{\text{BO}}$ is a homogeneous Markov process with state space
\begin{align*}
E & :=\big\{\left(0,n_{0},n_{1},\ldots,n_{J}\right):n_{j}\in\left\{ 0,\ldots,N\right\} \:\forall j\in\overline{J}_{0},\sum_{j\in\overline{J}_{0}}n_{j}=N\big\}\\
 & \mathrel{\phantom{=}}\cup\big\{\left(n_{\text{ex}},0,n_{1},\ldots,n_{J}\right):n_{\text{ex}}\in\mathbb{N},\:n_{j}\in\left\{ 0,\ldots,N\right\} \:\forall j\in\overline{J},\sum_{j\in\overline{J}}n_{j}=N\big\}.
\end{align*}
$Z_{\text{BO}}$ is irreducible on $E$.

\subsection{Stability\label{sec:RMFS-general-stat-distr}}

In this section, we analyse the stability of our system. Stability is important from a practical point of view: it ensures that the external queue does not grow without bound, i.e.\ that the system processes customers quickly enough. The Markov process $Z_{\text{BO}}$ has an infinitesimal generator $\mathbf{Q}:=\left(q(z;\tilde{z}):z,\tilde{z}\in E\right)$ with the following transition rates for $\left(n_{\text{ex}},\mathbf{n}\right),\ \left(0,\mathbf{n}\right)\in E$, where $\mathbf{n}:=\left(n_{j}:j\in\overline{J}_{0}\right)$:
\begin{align*}
 & q\left(\left(n_{\text{ex}},\mathbf{n}\right);\left(n_{\text{ex}}+1,\mathbf{n}\right)\right)=\lambda_{\text{BO}}\cdot1_{\left\{ n_{0}=0\right\} },\ n_{\text{ex}}\geq0,\\
 & q\left(\left(0,\mathbf{n}\right);\left(0,\mathbf{n}-\mathbf{e}_{0}+\mathbf{e}_{i}\right)\right)=\lambda_{\text{BO}}\cdot r(0,i)\cdot1_{\left\{ n_{0}>0\right\} },\ i\in\overline{J},\\
 & q\left(\left(n_{\text{ex}},\mathbf{n}\right);\left(n_{\text{ex}},\mathbf{n}-\mathbf{e}_{i}+\mathbf{e}_{j}\right)\right)=\nu_{i}(n_{i})\cdot r(i,j)\cdot1_{\left\{ n_{i}>0\right\} },\ n_{\text{ex}}\geq0,\ i,j\in\overline{J},\\
 & q\left(\left(0,\mathbf{n}\right);\left(0,\mathbf{n}-\mathbf{e}_{i}+\mathbf{e}_{0}\right)\right)=\nu_{i}(n_{i})\cdot r(i,0)\cdot1_{\left\{ n_{i}>0\right\} },\ i\in\overline{J},\\
 &
q\left(\left(n_{\text{ex}},\mathbf{n}\right);\left(n_{\text{ex}}-1,\mathbf{n}-\mathbf{e}_{i}+\mathbf{e}_{j}\right)\right)=\nu_{i}(n_{i})\cdot r(i,0)\cdot r(0,j)\cdot1_{\left\{ n_{\text{ex}}>0\right\} }\cdot1_{\left\{ n_{i}>0\right\} },\ n_{\text{ex}}>0,\ i,j\in\overline{J}.
\end{align*}
Furthermore, $q(z;\tilde{z})=0$ for any other pair $z\neq\tilde{z}$, and
\[
q\left(z;z\right)=-\sum_{\substack{\tilde{z}\in E,\\
\tilde{z}\neq z
}
}q\left(z;\tilde{z}\right)\qquad\forall z\in E.
\]
The Markov process $Z_{\text{BO}}$ is a level-independent quasi-birth-and-death process in the sense of \citet[Def. 1.3.1, p. 12]{latouche.ramaswami:99}. The level is the length $n_{\text{ex}}$ of the external queue. The phase is the state of the resource network. For a level equal to zero, the phase space is
\[
\widetilde{E}_{0}:=\big\{\left(n_{0},n_{1},\ldots,n_{J}\right):\ n_{j}\in\left\{ 0,\ldots,N\right\} \:\forall j\in\overline{J}_{0},\sum_{j\in\overline{J}_{0}}n_{j}=N\big\}.
\]
When the level is positive, there is at least one customer in the external queue. This is only possible if the resource pool is empty. Hence, for all positive levels the phase space is
\[
\widetilde{E}_{\text{+}}:=\big\{\left(0,n_{1},\ldots,n_{J}\right):\:n_{j}\in\left\{ 0,\ldots,N\right\} \:\forall j\in\overline{J},\:\sum_{j\in\overline{J}}n_{j}=N\big\}.
\] Arranging the states by level, the corresponding infinitesimal generator $\mathbf{Q}$ of $Z_{\text{BO}}$ can be written as
\[
\mathbf{Q}=\left(\begin{array}{ccccc}
\mathbf{B}_{0} & \mathbf{B}_{1}\\
\mathbf{B}_{2} & \mathbf{A}_{0} & \mathbf{A}_{1}\\
 & \mathbf{A}_{-1} & \mathbf{A}_{0} & \mathbf{A}_{1}\\
 &  & \ddots & \ddots & \ddots
\end{array}\right),
\]
where $\mathbf{B}_{0}\in\mathbb{R}^{\widetilde{E}_{0}\times\widetilde{E}_{0}}$, $\mathbf{B}_{1}\in\mathbb{R}^{\widetilde{E}_{0}\times\widetilde{E}_{\text{+}}}$, $\mathbf{B}_{2}\in\mathbb{R}^{\widetilde{E}_{\text{+}}\times\widetilde{E}_{0}}$ and $\mathbf{A}_{-1}$, $\mathbf{A}_{0}$, $\mathbf{A}_{1}\in\mathbb{R}^{\widetilde{E}_{\text{+}}\times\widetilde{E}_{\text{+}}}$ are matrices.\\
$\mathbf{A}_{1}$ is a non-negative matrix with the following positive elements:
\[
a_{1}\left(\left(0,n_{1},\ldots,n_{J}\right);\left(0,n_{1},\ldots,n_{J}\right)\right)=\lambda_{\text{BO}}.
\]
$\mathbf{A}_{-1}$ is a non-negative matrix whose only possibly positive elements are
\begin{align}
 & a_{-1}\left(\left(0,n_{1},\ldots,n_{J}\right);\left(0,n_{1},\ldots,n_{i}-1,\ldots,n_{j}+1,\ldots,n_{J}\right)\right)\nonumber \\
 & =\nu_{i}(n_{i})\cdot r(i,0)\cdot r(0,j)\cdot1_{\left\{ n_{i}>0\right\} },\quad i,j\in\overline{J}.\label{eq:alpha-minus1}
\end{align}
$\mathbf{A}_{0}$ has non-negative off-diagonal elements and strictly negative diagonal elements. The off-diagonal elements are
\begin{align*}
 & a_{0}\left(\left(0,n_{1},\ldots,n_{J}\right);\left(0,n_{1},\ldots,n_{i}-1,\ldots,n_{j}+1,\ldots,n_{J}\right)\right)\\
 & =\nu_{i}(n_{i})\cdot r(i,j)\cdot1_{\left\{ n_{i}>0\right\} },\quad i,j\in\overline{J}.
\end{align*}
Let $\boldsymbol{\pi}_{\text{BO}}:=\left(\pi_{\text{BO}}\left(n_{\text{ex}},\mathbf{n}\right):\left(n_{\text{ex}},\mathbf{n}\right)\in E\right)$ be the steady-state distribution of the Markov process $Z_{\text{BO}}$. The global balance equations $\boldsymbol{\pi}_{\text{BO}}\cdot\mathbf{Q}=\mathbf{0}$ are given as follows.
\noindent For $n_{\text{ex}}=0$:
\begin{align*}
 & \pi_{\text{BO}}\left(0,\mathbf{n}\right)\cdot\Big(\lambda_{\text{BO}}+\sum_{i\in\overline{J}}\sum_{j\in\overline{J}\setminus\{i\}}\nu_{i}(n_{i})\cdot r(i,j)\cdot1_{\left\{ n_{i}>0\right\} }+\sum_{i\in\overline{J}}\nu_{i}(n_{i})\cdot r(i,0)\cdot1_{\left\{ n_{i}>0\right\} }\Big)\\
 & =\sum_{i\in\overline{J}}\pi_{\text{BO}}\left(0,\mathbf{n}+\mathbf{e}_{0}-\mathbf{e}_{i}\right)\cdot\lambda_{\text{BO}}\cdot r(0,i)\cdot1_{\left\{ n_{i}>0\right\} }\\
 & \mathrel{\phantom{=}}+\sum_{i\in\overline{J}}\sum_{j\in\overline{J}\setminus\{i\}}\pi_{\text{BO}}\left(0,\mathbf{n}+\mathbf{e}_{i}-\mathbf{e}_{j}\right)\cdot\nu_{i}(n_{i}+1)\cdot r(i,j)\cdot1_{\left\{ n_{j}>0\right\} }\\
 & \mathrel{\phantom{=}}+\sum_{i\in\overline{J}}\pi_{\text{BO}}\left(0,\mathbf{n}+\mathbf{e}_{i}-\mathbf{e}_{0}\right)\cdot\nu_{i}(n_{i}+1)\cdot r(i,0)\cdot1_{\left\{ n_{0}>0\right\} }\\
 & \mathrel{\phantom{=}}+\sum_{i\in\overline{J}}\sum_{j\in\overline{J}\setminus\{i\}}\pi_{\text{BO}}\left(1,\mathbf{n}+\mathbf{e}_{i}-\mathbf{e}_{j}\right)\cdot\nu_{i}(n_{i}+1)\cdot r(i,0)\cdot r(0,j)\cdot1_{\left\{ n_{j}>0\right\} }\cdot1_{\left\{ n_{0}=0\right\} }\\
 & \mathrel{\phantom{=}}+\sum_{i\in\overline{J}}\pi_{\text{BO}}\left(1,\mathbf{n}\right)\cdot\nu_{i}(n_{i})\cdot r(i,0)\cdot r(0,i)\cdot1_{\left\{ n_{i}>0\right\} }\cdot1_{\left\{ n_{0}=0\right\} }.
\end{align*}
For $n_{\text{ex}}>0$, which implies $n_{0}=0$:
\begin{align*}
 & \pi_{\text{BO}}\left(n_{\text{ex}},\mathbf{n}\right)\cdot\Big(\lambda_{\text{BO}}+\sum_{i\in\overline{J}}\sum_{j\in\overline{J}\setminus\{i\}}\nu_{i}(n_{i})\cdot r(i,j)\cdot1_{\left\{ n_{i}>0\right\} }+\sum_{i\in\overline{J}}\nu_{i}(n_{i})\cdot r(i,0)\cdot1_{\left\{ n_{i}>0\right\} }\Big)\\
 & =\pi_{\text{BO}}\left(n_{\text{ex}}-1,\mathbf{n}\right)\cdot\lambda_{\text{BO}}\\
 & \mathrel{\phantom{=}}+\sum_{j\in\overline{J}}\sum_{i\in\overline{J}\setminus\{j\}}\pi_{\text{BO}}\left(n_{\text{ex}},\mathbf{n}+\mathbf{e}_{i}-\mathbf{e}_{j}\right)\cdot\nu_{i}(n_{i}+1)\cdot r(i,j)\cdot1_{\left\{ n_{j}>0\right\} }\\
 & \mathrel{\phantom{=}}+\sum_{j\in\overline{J}}\sum_{i\in\overline{J}\setminus\{j\}}\pi_{\text{BO}}\left(n_{\text{ex}}+1,\mathbf{n}+\mathbf{e}_{i}-\mathbf{e}_{j}\right)\cdot\nu_{i}(n_{i}+1)\cdot r(i,0)\cdot r(0,j)\cdot1_{\left\{ n_{j}>0\right\} }\\
 & \mathrel{\phantom{=}}+\sum_{i\in\overline{J}}\pi_{\text{BO}}\left(n_{\text{ex}}+1,\mathbf{n}\right)\cdot\nu_{i}(n_{i})\cdot r(i,0)\cdot r(0,i)\cdot1_{\left\{ n_{i}>0\right\} }.
\end{align*}
There is no closed-form expression for the steady-state distribution in the case $J>1$. Latouche and Ramaswami developed a logarithmic reduction algorithm for level-independent quasi-birth-and-death processes to compute the steady-state distribution (\citet{latouche;ramaswami:93}, \citet[Theorem 6.4.1 and Lemma 6.4.3, p.~142ff.]{latouche.ramaswami:99}). For the case $J=1$, we calculate a closed-form expression for the steady-state distribution, which we present in \prettyref{sec:Special-case}. Next we determine the stability condition of the system.
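To make the matrix-geometric structure concrete: for a stable QBD the stationary vectors satisfy $\boldsymbol{\pi}_{n+1}=\boldsymbol{\pi}_{n}R$, where the rate matrix $R$ solves $\mathbf{A}_{1}+R\mathbf{A}_{0}+R^{2}\mathbf{A}_{-1}=0$. The sketch below uses a plain fixed-point iteration (not the logarithmic reduction algorithm cited above) on the single-phase case, where the QBD reduces to an M/M/1 queue with known solution $R=\lambda/\mu$; the rates are illustrative assumptions:

```python
import numpy as np

# Block matrices of a QBD with one phase (an M/M/1 queue):
lam, mu = 2.0, 5.0
A1 = np.array([[lam]])            # one level up (arrival)
A0 = np.array([[-(lam + mu)]])    # within the level
Am1 = np.array([[mu]])            # one level down (service completion)

# Fixed-point iteration R_{k+1} = -(A1 + R_k^2 Am1) A0^{-1},
# which converges monotonically from R_0 = 0 for stable QBDs.
R = np.zeros_like(A0)
for _ in range(200):
    R = -(A1 + R @ R @ Am1) @ np.linalg.inv(A0)

# The stationary vectors then satisfy pi_{n+1} = pi_n R.
```

For genuinely multi-phase QBDs the same iteration applies with matrix blocks, although the logarithmic reduction algorithm of Latouche and Ramaswami converges much faster.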
For that we define the following traffic equation:
\begin{equation}
\eta_{j}=\sum_{i\in\overline{J}_{0}}\eta_{i}\cdot r(i,j),\qquad j\in\overline{J}_{0}.\label{eq:traffic-equation-LS}
\end{equation}
In matrix notation, this equation reads $\boldsymbol{\eta}\cdot\mathcal{R}=\boldsymbol{\eta}$ with $\boldsymbol{\eta}:=\left(\eta_{j}\in\mathbb{R}^{+}:j\in\overline{J}_{0}\right)$. \citet{lavenberg-1978} showed that the system with backordering is stable if the arrival rate $\lambda_{\text{BO}}$ is smaller than the maximal arrival rate $\lambda_{\text{BO,max}}$, where $\lambda_{\text{BO,max}}$ is the throughput through node $0$ of the closed network in \prettyref{fig:SOQN-stability-network}. In this network, every resource which goes through node~$0$ spends zero time at node $0$ and moves immediately to the next node according to the branching vector $(r(0,j):j\in\overline{J})$. \citeauthor{lavenberg-1978} calls this network \emph{saturated} -- it is obtained when there are infinitely many customers in the external queue. We will call the network in \prettyref{fig:SOQN-stability-network} the \emph{stability network}. \citeauthor{lavenberg-1978} not only proved that the system is stable for $\lambda_{\text{BO}}<\lambda_{\text{BO,max}}$, but also that it is unstable if $\lambda_{\text{BO}}>\lambda_{\text{BO,max}}$; however, he did not show what happens if $\lambda_{\text{BO}}=\lambda_{\text{BO,max}}$. In our stability analysis we use matrix-geometric methods, which cover all the cases ``$<$'', ``$=$'' and ``$>$''. In order to simplify notation we define the following constant:
\[
C^{\text{stb}}(\overline{J},N):=\sum_{\sum_{j\in\overline{J}}n_{j}=N}\prod_{j=1}^{J}\left(\prod_{i=1}^{n_{j}}\frac{\eta_{j}}{\nu_{j}(i)}\right).
\]
\begin{prop}
\label{prop:stability-condition}The system is stable if and only if
\[
\lambda_{\text{BO}}<\lambda_{\text{BO,max}}
\]
with
\begin{equation}
\lambda_{\text{BO,max}}=\eta_{0}\cdot\frac{C^{\text{stb}}(\overline{J},N-1)}{C^{\text{stb}}(\overline{J},N)}.\label{eq:max-arrival-from-norm-constants}
\end{equation}
\end{prop}
\begin{proof}
We use matrix-geometric methods to prove the stability condition. According to \citet[Theorem 1]{latouche:10}, given the irreducible inter-level generator matrix $\mathbf{A}:=\mathbf{A}_{-1}+\mathbf{A}_{0}+\mathbf{A}_{1}$ of $Z_{\text{BO}}$ and the stochastic solution $\boldsymbol{\alpha}:=(\alpha(\widetilde{\mathbf{n}}):\widetilde{\mathbf{n}}\in\widetilde{E}_{\text{+}})$ of the equation $\boldsymbol{\alpha}\cdot\mathbf{A}=\mathbf{0}$, the process $Z_{\text{BO}}$ is stable if and only if
\begin{equation}
\boldsymbol{\alpha}\cdot\mathbf{A}_{1}\cdot\mathbf{e}<\boldsymbol{\alpha}\cdot\mathbf{A}_{-1}\cdot\mathbf{e}.\label{eq:stability-equation}
\end{equation}
Because $\mathbf{A}_{1}$ is a diagonal matrix with $\lambda_{\text{BO}}$ on its diagonal and $\boldsymbol{\alpha}$ is a stochastic vector, we immediately obtain for the left-hand side of \prettyref{eq:stability-equation}
\begin{equation}
\boldsymbol{\alpha}\cdot\mathbf{A}_{1}\cdot\mathbf{e}=\lambda_{\text{BO}}.\label{eq:left-hand-side-of-the-stability-equation}
\end{equation}
Therefore, in a stable system, the arrival rate $\lambda_{\text{BO}}$ must be strictly less than the right-hand side of inequality \prettyref{eq:stability-equation}. We define
\[
\lambda_{\text{BO,max}}:=\boldsymbol{\alpha}\cdot\mathbf{A}_{-1}\cdot\mathbf{e}.
\]
To calculate $\lambda_{\text{BO,max}}$, we first need to calculate the stochastic vector $\boldsymbol{\alpha}$.
The non-negative off-diagonal elements of the generator $\mathbf{A}$ are of the form
\begin{align*}
 & a\left(\left(0,n_{1},\ldots,n_{J}\right);\left(0,n_{1},\ldots,n_{i}-1,\ldots,n_{j}+1,\ldots,n_{J}\right)\right)\\
 & =\nu_{i}(n_{i})\cdot\Big(r(i,j)+r(i,0)\cdot r(0,j)\Big)\cdot1_{\left\{ n_{i}>0\right\} },
\end{align*}
for $i\neq j$. For the diagonal elements it holds that
\[
a\left(z;z\right)=-\sum_{\substack{\tilde{z}\in E,\\
z\neq\tilde{z}
}
}a\left(z;\tilde{z}\right)\qquad\forall z\in E.
\]
We now solve for all $\widetilde{nALL}:=\left(0,n_{1},\ldots,n_{J}\right)\in\widetilde{E}_{\text{+}}$
\begin{align}
 & \alpha\left(\widetilde{nALL}\right)\cdot\sum_{i\in\overline{J}}\nu_{i}(n_{i})\cdot\sum_{j\in\overline{J}}\Big(r(i,j)+r(i,0)\cdot r(0,j)\Big)\cdot1_{\left\{ n_{i}>0\right\} }\nonumber \\
 & =\sum_{j\in\overline{J}}\sum_{i\in\overline{J}}\alpha\left(\widetilde{nALL}+\mathbf{e}_{i}-\mathbf{e}_{j}\right)\cdot\nu_{i}(n_{i}+1)\cdot\Big(r(i,j)+r(i,0)\cdot r(0,j)\Big)\cdot1_{\left\{ n_{j}>0\right\} }.\label{eq:envirenment-generalized-gordon-newell-gbe}
\end{align}
\begin{figure}
\caption{The stability network: the closed network obtained from the resource network when node $0$ is skipped (zero service time at node $0$).}
\label{fig:SOQN-stability-network}
\end{figure}
Equation \eqref{eq:envirenment-generalized-gordon-newell-gbe} is the global balance equation of a generalised Gordon-Newell network with node set $\overline{J}_{0}$, $N$ customers and zero service time at node $0$. We will call this network the \emph{stability network}, see \prettyref{fig:SOQN-stability-network}. Equation \eqref{eq:envirenment-generalized-gordon-newell-gbe} also has another important interpretation, which allows us to use standard algorithms from performance analysis of Gordon-Newell networks.
Note that the state of node $0$ never changes, therefore we define
\begin{equation}
\alpha'(n_{1},\ldots,n_{J}):=\alpha\left(0,n_{1},\ldots,n_{J}\right)\label{eq:def-alpha-prime}
\end{equation}
and the routing matrix $\mathcal{R}':=\left(r'(i,j):i,j\in\overline{J}\right)$ with
\begin{equation}
r'(i,j):=r(i,j)+r(i,0)\cdot r(0,j).\label{eq:jump-over-routing-matrix}
\end{equation}
The routing matrix $\mathcal{R}'$ results from the routing matrix $\mathcal{R}$ when every resource which wants to go to node~$0$ skips this node and goes immediately to the next node according to $(r(0,j):j\in\overline{J})$. Now equation \eqref{eq:envirenment-generalized-gordon-newell-gbe} can be written as
\begin{align}
 & \alpha'\left(n_{1},\ldots,n_{J}\right)\cdot\sum_{i\in\overline{J}}\nu_{i}(n_{i})\sum_{j\in\overline{J}}r'(i,j)\cdot1_{\left\{ n_{i}>0\right\} }\nonumber \\
 & =\sum_{j\in\overline{J}}\sum_{i\in\overline{J}}\alpha'\left((n_{1},\ldots,n_{J})+\mathbf{e}_{i}-\mathbf{e}_{j}\right)\cdot\nu_{i}(n_{i}+1)\cdot r'(i,j)\cdot1_{\left\{ n_{j}>0\right\} }.\label{eq:envirenment-normal-gordon-newell-gbe}
\end{align}
Equation \eqref{eq:envirenment-normal-gordon-newell-gbe} is the global balance equation of a Gordon-Newell network with nodes $\overline{J}=\left\{ 1,2,\ldots,J\right\} $, with $N$ customers, and with routing matrix $\mathcal{R}'$. The steady-state distribution of this network is
\begin{equation}
\alpha'(n_{1},\ldots,n_{J})=\left[C^{\text{stb}\prime}(\overline{J},N)\right]^{-1}\prod_{j=1}^{J}\left(\prod_{i=1}^{n_{j}}\frac{\eta'_{j}}{\nu_{j}(i)}\right)\label{eq:stability-distribution-explicite-with-jumps}
\end{equation}
where $\etaVect':=\left(\eta'_{j}:j\in\overline{J}\right)$ is a solution of the traffic equation $\etaVect'\cdot\mathcal{R}'=\etaVect'$ and
\[
C^{\text{stb}\prime}(\overline{J},N):=\sum_{\sum_{j\in\overline{J}}n_{j}=N}\prod_{j=1}^{J}\left(\prod_{i=1}^{n_{j}}\frac{\eta'_{j}}{\nu_{j}(i)}\right)
\]
is the normalisation constant.
Because of the special structure \eqref{eq:jump-over-routing-matrix} of the matrix $\mathcal{R}'$, $\eta'_{j}:=\eta_{j}$ for all $j\in\overline{J}$ is a solution of $\etaVect'\cdot\mathcal{R}'=\etaVect'$, see \citet[Proposition 2.1]{krenzler-daduna-otten:2016}. Consequently, $C^{\text{stb}\prime}(\overline{J},N)=C^{\text{stb}}(\overline{J},N)$. Now we can switch between both interpretations without recalculating $\etaVect'$ and $C^{\text{stb}\prime}(\overline{J},N)$:
\[
\alpha'(n_{1},\ldots,n_{J})=\left[C^{\text{stb}}(\overline{J},N)\right]^{-1}\prod_{j=1}^{J}\left(\prod_{i=1}^{n_{j}}\frac{\eta_{j}}{\nu_{j}(i)}\right).
\]
We now calculate $\lambda_{\text{BO,max}}$ explicitly.\begingroup
\allowdisplaybreaks
\begin{align*}
 & \mathrel{\phantom{=}}\lambda_{\text{BO,max}}=\alpha\cdot\mathbf{A}_{-1}\cdot\mathbf{e}\\
 & =\sum_{(0,m_{1},\ldots,m_{J})\in\widetilde{E}_{\text{+}}}\big(\alpha\cdot\mathbf{A}_{-1}\big)\left(0,m_{1},\ldots,m_{J}\right)\\
 & =\sum_{(0,m_{1},\ldots,m_{J})\in\widetilde{E}_{\text{+}}}\Bigg[\sum_{\left(0,n_{1},\ldots,n_{J}\right)\in\widetilde{E}_{\text{+}}}\alpha\left(0,n_{1},\ldots,n_{J}\right)\cdot a_{-1}\left(\left(0,n_{1},\ldots,n_{J}\right);\left(0,m_{1},\ldots,m_{J}\right)\right)\Bigg]\\
 & =\sum_{\left(0,n_{1},\ldots,n_{J}\right)\in\widetilde{E}_{\text{+}}}\alpha\left(0,n_{1},\ldots,n_{J}\right)\sum_{(0,m_{1},\ldots,m_{J})\in\widetilde{E}_{\text{+}}}a_{-1}\left(\left(0,n_{1},\ldots,n_{J}\right);\left(0,m_{1},\ldots,m_{J}\right)\right)\\
 & =\sum_{\left(0,n_{1},\ldots,n_{J}\right)\in\widetilde{E}_{\text{+}}}\alpha\left(0,n_{1},\ldots,n_{J}\right)\cdot\Bigg[\sum_{i=1}^{J}\sum_{j=1}^{J}\nu_{i}(n_{i})\cdot r(i,0)\cdot r(0,j)\cdot1_{\left\{ n_{i}>0\right\} }\Bigg]\\
 & =\sum_{\left(0,n_{1},\ldots,n_{J}\right)\in\widetilde{E}_{\text{+}}}\underbrace{\alpha\left(0,n_{1},\ldots,n_{J}\right)}_{=\alpha'(n_{1},\ldots,n_{J})}\cdot\Bigg[\sum_{i=1}^{J}\nu_{i}(n_{i})\cdot r(i,0)\cdot1_{\left\{ n_{i}>0\right\}
}\cdot\underbrace{\sum_{j=1}^{J}r(0,j)}_{=1}\Bigg]\\
 & =\sum_{\left(n_{1},\ldots,n_{J}\right)\in\widetilde{E}_{\text{+}}}\alpha'(n_{1},\ldots,n_{J})\cdot\Bigg[\sum_{i=1}^{J}\nu_{i}(n_{i})\cdot1_{\left\{ n_{i}>0\right\} }\cdot r(i,0)\Bigg]\\
 & =\sum_{i=1}^{J}\Bigg[\underbrace{\sum_{n_{i}=0}^{N}\sum_{\stackrel{n_{j}\in\left\{ 0,\ldots,N\right\} ,\ j\in\overline{J}\backslash\left\{ i\right\} }{\sum_{j\in\overline{J}\backslash\left\{ i\right\} }n_{j}=N-n_{i}}}\alpha'(n_{1},\ldots,n_{J})\cdot\nu_{i}(n_{i})\cdot1_{\left\{ n_{i}>0\right\} }}_{(*)}\Bigg]\cdot r(i,0).
\end{align*}
\endgroup The expression ($*$) is the throughput through node $i$ in the Gordon-Newell network with routing matrix $\mathcal{R}'$:
\[
\mathrm{Th}_{i}(N):=\sum_{n_{i}=0}^{N}\sum_{\stackrel{n_{j}\in\left\{ 0,\ldots,N\right\} ,\ j\in\overline{J}\backslash\left\{ i\right\} }{\sum_{j\in\overline{J}\backslash\left\{ i\right\} }n_{j}=N-n_{i}}}\alpha'(n_{1},\ldots,n_{J})\cdot\nu_{i}(n_{i})\cdot1_{\left\{ n_{i}>0\right\} },\qquad i\in\overline{J}.
\]
We can now write
\begin{equation}
\lambda_{\text{BO,max}}=\sum_{i=1}^{J}\mathrm{Th}_{i}(N)\cdot r(i,0).\label{eq:rhs-stability-with-thi}
\end{equation}
According to \citet[p. 374, (8.14)]{Bolch:1998:QNM:289350}
\[
\mathrm{Th}_{i}(N)=\eta_{i}\cdot\frac{C^{\text{stb}}(\overline{J},N-1)}{C^{\text{stb}}(\overline{J},N)},
\]
therefore,
\begin{align*}
\lambda_{\text{BO,max}} & =\sum_{i=1}^{J}\eta_{i}\cdot\frac{C^{\text{stb}}(\overline{J},N-1)}{C^{\text{stb}}(\overline{J},N)}\cdot r(i,0)=\frac{C^{\text{stb}}(\overline{J},N-1)}{C^{\text{stb}}(\overline{J},N)}\underbrace{\sum_{i=1}^{J}\eta_{i}\cdot r(i,0)}_{=\eta_{0}}.
\end{align*}
\end{proof}
\begin{rem}
\label{rem:lambdamax-th0}The right-hand side of equation \eqref{eq:rhs-stability-with-thi} is the throughput through node $0$ in the stability network in \prettyref{fig:SOQN-stability-network}.
Formally, this means
\begin{equation}
\lambda_{\text{BO,max}}=\mathrm{Th}_{0}(N)\text{ with }\mathrm{Th}_{0}(N):=\sum_{i=1}^{J}\mathrm{Th}_{i}(N)\cdot r(i,0).\label{eq:th0-by-thi}
\end{equation}
The advantage of representation \eqref{eq:th0-by-thi} is that it uses the throughputs $\mathrm{Th}_{i}(N)$, $i\in\overline{J}$, of a classical Gordon-Newell network with routing matrix $\mathcal{R}'$. We can calculate these throughputs very efficiently with standard methods, such as mean value analysis (MVA). With \eqref{eq:max-arrival-from-norm-constants} we have another representation of $\mathrm{Th}_{0}(N)$:
\begin{equation}
\mathrm{Th}_{0}(N)=\eta_{0}\cdot\frac{C^{\text{stb}}(\overline{J},N-1)}{C^{\text{stb}}(\overline{J},N)}.\label{eq:stable-th0-by-thi}
\end{equation}
To calculate the constants on the right-hand side of \eqref{eq:stable-th0-by-thi} efficiently, we can use, for example, the convolution algorithm. Both algorithms are illustrated in \citet[p. 371ff., Section 8.1 and p. 384ff., Section 8.2]{Bolch:1998:QNM:289350}. From a practical point of view, it is important to know the long-term behaviour of the system, which coincides with its behaviour in steady state. We analyse it in the following Sections \ref{sec:throughputs-and-idle-times-LC} and \ref{sec:Special-case}.
\end{rem}

\subsection{Throughputs and idle times\label{sec:throughputs-and-idle-times-LC}}

We consider a stable SOQN-BO in steady state. Let $\etaVect:=(\eta_{j}:j\in\overline{J}_{0})$ be a solution of $\boldsymbol{x}=\boldsymbol{x}\cdot\mathcal{R}$. Recall that $\etaVect$ is uniquely defined up to a constant factor.
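As a small numerical illustration (all data here are hypothetical and not from the paper): for cyclic routing $0\rightarrow1\rightarrow2\rightarrow0$ the traffic equation has the obvious solution $\etaVect=(1,1,1)$, and $\lambda_{\text{BO,max}}$ from \prettyref{eq:max-arrival-from-norm-constants} can be evaluated by direct enumeration of the normalisation constants. The convolution algorithm or MVA mentioned in \prettyref{rem:lambdamax-th0} would compute the same quantities more efficiently; the sketch below only checks the formulas.

```python
from itertools import product

# Hypothetical example (not from the paper): cycle routing 0 -> 1 -> 2 -> 0,
# so eta = (1, 1, 1) solves the traffic equation eta = eta * R.
eta = {0: 1.0, 1: 1.0, 2: 1.0}
nu = {1: 2.0, 2: 3.0}   # constant service rates, nu_j(i) = nu_j
N = 3                   # number of resources

def C_stb(n_total):
    """C^stb({1,...,J}, n_total) by direct enumeration of all (n_1,...,n_J)."""
    inner = sorted(nu)
    total = 0.0
    for ns in product(range(n_total + 1), repeat=len(inner)):
        if sum(ns) != n_total:
            continue
        term = 1.0
        for j, n_j in zip(inner, ns):
            for i in range(1, n_j + 1):
                term *= eta[j] / nu[j]   # nu_j(i) = nu_j here
        total += term
    return total

# Maximal arrival rate from the stability condition:
lambda_bo_max = eta[0] * C_stb(N - 1) / C_stb(N)
print(lambda_bo_max)   # 114/65, i.e. about 1.754
```

With $N=3$ resources and constant rates $\nu_{1}=2$, $\nu_{2}=3$, this yields $\lambda_{\text{BO,max}}=114/65\approx1.754$, below the bottleneck rate $\min_{j}\nu_{j}=2$.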
\begin{prop}
\label{prop:THL-BO} The local throughput at the nodes $j\in\overline{J}_{0}$ is
\begin{equation}
\ThBO j=\lambda_{\text{BO}}\cdot\frac{\eta_{j}}{\eta_{0}}.\label{eq:TH-BO1}
\end{equation}
\end{prop}
Note that formula \eqref{eq:TH-BO1} occurs as approximation (26) in a different setting in \citet{dallery:1990}. We give a proof of its correctness in our setting.
\begin{proof}
We define for $j\in\overline{J}_{0}$ in steady state:
\begin{itemize}
\item the mean number of departures from $j$ per time unit, $D_{j}$, and
\item the mean number of arrivals at $j$ per time unit, $\meanVisits j$.
\end{itemize}
From the steady-state assumption it follows that
\[
\ThBO j=\meanVisits j=D_{j}.
\]
For any $j\in\overline{J}_{0}$ it holds that
\[
\meanVisits j=\sum_{i\in\overline{J}_{0}}D_{i}\cdot r(i,j).
\]
Therefore, the vector $\meanVisits{}:=(\meanVisits j:j\in\overline{J}_{0})$ fulfils the set of equations
\[
\meanVisits j=\sum_{i\in\overline{J}_{0}}\meanVisits i\cdot r(i,j),\quad j\in\overline{J}_{0},~~~\text{which is }~~~\meanVisits{}=\meanVisits{}\cdot\mathcal{R}.
\]
Because the solution of the traffic equation is unique up to a constant factor, this implies that $\meanVisits j=\eta_{j}\cdot K$ for some constant $K>0$. Because of $\lambda_{\text{BO}}=\meanVisits 0=\eta_{0}\cdot K$ we have
\[
K=\frac{\lambda_{\text{BO}}}{\eta_{0}},
\]
and therefore,
\[
\meanVisits j=\lambda_{\text{BO}}\cdot\frac{\eta_{j}}{\eta_{0}},\quad j\in\overline{J}_{0}.
\]
\end{proof}
Note that \eqref{eq:TH-BO1} shows that $\ThBO j$ does not depend on the normalisation of $\etaVect$.
\begin{cor}
\label{cor:idle-times-BO}Let $Y_{\text{BO}}:=(Y_{\text{BO},j}:j\in\overline{J})$ denote a random vector which is distributed according to the stationary queue length at the nodes in $\overline{J}$ of the SOQN-BO. If the service rate at node $j$ does not depend on the queue length, i.e.~$\nu_{j}(\cdot)=\nu_{j}$, $j\in\overline{J}$, then the probability that node $j$ is idling is
\[
P(Y_{\text{BO},j}=0)=1-\lambda_{\text{BO}}\cdot\frac{\eta_{j}}{\eta_{0}}\cdot\nu_{j}^{-1}.
\]
Note that the formula above also describes the proportion of time that node $j$ is idle.
\end{cor}
\pagebreak{}
\begin{proof}
We define for $j\in\overline{J}$ in steady state:
\begin{itemize}
\item the mean number of customers in service, $B_{j}$,
\item the mean service time, $S_{j}$, and
\item the arrival intensity, $\lambda_{j}$.
\end{itemize}
According to Little's formula, for every node $j$ it holds that $B_{j}=\lambda_{j}\cdot S_{j}$. In steady state the arrival rate $\lambda_{j}$ at node $j$ is equal to its throughput $\ThBO j$. Hence, from \prettyref{prop:THL-BO}, we know that $\lambda_{j}=\ThBO j=\lambda_{\text{BO}}\cdot\frac{\eta_{j}}{\eta_{0}}$. At every node $j$ with constant service rate $\nu_{j}$, the mean service time is $S_{j}=\nu_{j}^{-1}$. Substituting these results for $\lambda_{j}$ and $S_{j}$ into Little's formula, we get for the mean number of customers in service
\[
B_{j}=\lambda_{\text{BO}}\cdot\frac{\eta_{j}}{\eta_{0}}\cdot\nu_{j}^{-1}.
\]
Consequently, the probability that node $j$ is idling is
\[
P(Y_{\text{BO},j}=0)=1-P(Y_{\text{BO},j}>0)=1-E\left[1_{\left\{ Y_{\text{BO},j}>0\right\} }\right]=1-B_{j}=1-\lambda_{\text{BO}}\cdot\frac{\eta_{j}}{\eta_{0}}\cdot\nu_{j}^{-1}.
\]
\end{proof}

\subsection{Special case: $J=1$\label{sec:Special-case}}

In this section, we consider the special case where the inner network consists of only one node ($J=1$), as shown in \prettyref{fig:SOQN-small-1}. For this special case, \citet{avi-itzhak-heyman-1973} calculated the steady-state distribution.
\begin{figure}
\caption{The SOQN with a single inner node ($J=1$).}
\label{fig:SOQN-small-1}
\end{figure}
\begin{thm}
For $J=1$, $Z_{\text{BO}}$ is stable if and only if $\lambda_{\text{BO}}<\nu_{1}(N)$.
If $Z_{\text{BO}}$ is stable, the limiting and steady-state distribution $\boldsymbol{\pi}_{\text{BO}}:=\left(\pi_{\text{BO}}\left((n_{\text{ex}},n_{0},n_{1})\right):(n_{\text{ex}},n_{0},n_{1})\in E\right)$ of the process $Z_{\text{BO}}$ is
\begin{equation}
\pi_{\text{BO}}(n_{\text{ex}},n_{0},n_{1})=\pi_{\text{BO}}(n_{\text{ex}},N-n_{1},n_{1})=\left[C_{\text{BO}}(\left\{ 1\right\} ,N)\right]^{-1}\cdot\prod_{m=1}^{n_{1}}\frac{\lambda_{\text{BO}}}{\nu_{1}(m)}\cdot\left(\frac{\lambda_{\text{BO}}}{\nu_{1}(N)}\right)^{n_{\text{ex}}}\label{eq:MODEL-ALL-TOGETHER-stationary-distribution-1}
\end{equation}
with normalisation constant
\begin{equation}
C_{\text{BO}}(\left\{ 1\right\} ,N)\coloneqq\sum_{m=0}^{N-1}\prod_{\ell=1}^{m}\frac{\lambda_{\text{BO}}}{\nu_{1}(\ell)}+\prod_{\ell=1}^{N}\frac{\lambda_{\text{BO}}}{\nu_{1}(\ell)}\cdot\left(\frac{1}{1-\frac{\lambda_{\text{BO}}}{\nu_{1}(N)}}\right).\label{eq:Modell-all-together-normierungskonstante}
\end{equation}
\end{thm}
\begin{proof}
See \citep[equations (20) and (21)]{avi-itzhak-heyman-1973}. In that paper, the authors calculated the steady-state probabilities $P\left(X_{\text{ex}}+Y_{1}=m\right)$. From these results we can easily obtain the probabilities of the states of all queues, because for $m\leq N$ it holds that $X_{\text{ex}}+Y_{1}=m\Leftrightarrow X_{\text{ex}}=0\land Y_{1}=m\land Y_{0}=N-m$ and otherwise $X_{\text{ex}}+Y_{1}=m\Leftrightarrow X_{\text{ex}}=m-N\land Y_{1}=N\land Y_{0}=0$.
\end{proof}
\begin{prop}
\label{prop:marginal-distribution-of-a-simple-system}Let $(X_{\text{ex}},Y_{0},Y_{1})$ denote a random vector which is distributed according to the limiting and steady-state distribution of $Z_{\text{BO}}$ in the SOQN-BO with $J=1$.
\begin{enumerate}
\item [(i)]For the marginal distributions it holds:
\begin{equation}
P(X_{\text{ex}}=0)=\left[C_{\text{BO}}(\left\{ 1\right\} ,N)\right]^{-1}\cdot\sum_{n_{1}=0}^{N}\prod_{m=1}^{n_{1}}\frac{\lambda_{\text{BO}}}{\nu_{1}(m)}\label{eq:eq:small-backordering-p-X-ex-0}
\end{equation}
and for $n_{\text{ex}}>0$
\begin{equation}
P(X_{\text{ex}}=n_{\text{ex}})=\left[C_{\text{BO}}(\left\{ 1\right\} ,N)\right]^{-1}\cdot\prod_{m=1}^{N}\frac{\lambda_{\text{BO}}}{\nu_{1}(m)}\cdot\left(\frac{\lambda_{\text{BO}}}{\nu_{1}(N)}\right)^{n_{\text{ex}}}.\label{eq:small-backordering-p-X-ex-n}
\end{equation}
For $0\leq n_{1}<N$
\begin{equation}
P(Y_{0}=N-n_{1},Y_{1}=n_{1})=\left[C_{\text{BO}}(\left\{ 1\right\} ,N)\right]^{-1}\cdot\prod_{m=1}^{n_{1}}\frac{\lambda_{\text{BO}}}{\nu_{1}(m)}\label{eq:small-backordering-p-Y-n}
\end{equation}
and for $n_{1}=N$
\begin{equation}
P(Y_{0}=0,Y_{1}=N)=\left[C_{\text{BO}}(\left\{ 1\right\} ,N)\right]^{-1}\cdot\prod_{m=1}^{N}\frac{\lambda_{\text{BO}}}{\nu_{1}(m)}\cdot\frac{1}{1-\frac{\lambda_{\text{BO}}}{\nu_{1}(N)}}.\label{eq:randvtlg-y}
\end{equation}
\item [(ii)]The average external queue length is
\begin{align}
L_{\text{ex}} & =P(Y_{0}=0,Y_{1}=N)\cdot\frac{\lambda_{\text{BO}}}{\nu_{1}(N)-\lambda_{\text{BO}}}\label{eq:small-backordering-L-ex-0}
\end{align}
and the average waiting time of customers in the external queue is
\begin{equation}
W_{\text{ex}}=\frac{L_{\text{ex}}}{\lambda_{\text{BO}}}=P(Y_{0}=0,Y_{1}=N)\cdot\frac{1}{\nu_{1}(N)-\lambda_{\text{BO}}}.\label{eq:waiting-time-ex}
\end{equation}
\end{enumerate}
\end{prop}
\begin{proof}
$ $
\begin{enumerate}
\item [(i)]For \eqref{eq:eq:small-backordering-p-X-ex-0} we calculate
\begin{align*}
P(X_{\text{ex}}=0) & \overset{\hphantom{\eqref{eq:MODEL-ALL-TOGETHER-stationary-distribution-1}}}{=}\sum_{n_{1}=0}^{N}P(X_{\text{ex}}=0,Y_{0}=N-n_{1},Y_{1}=n_{1})\\
 & \overset{\eqref{eq:MODEL-ALL-TOGETHER-stationary-distribution-1}}{=}\left[C_{\text{BO}}(\left\{
1\right\} ,N)\right]^{-1}\cdot\sum_{n_{1}=0}^{N}\prod_{m=1}^{n_{1}}\frac{\lambda_{\text{BO}}}{\nu_{1}(m)}.
\end{align*}
For \eqref{eq:small-backordering-p-X-ex-n} we calculate
\begin{align*}
P(X_{\text{ex}}=n_{\text{ex}}) & \overset{\hphantom{\eqref{eq:MODEL-ALL-TOGETHER-stationary-distribution-1}}}{=}P(X_{\text{ex}}=n_{\text{ex}},Y_{0}=0,Y_{1}=N)\\
 & \overset{\eqref{eq:MODEL-ALL-TOGETHER-stationary-distribution-1}}{=}\left[C_{\text{BO}}(\left\{ 1\right\} ,N)\right]^{-1}\cdot\prod_{m=1}^{N}\frac{\lambda_{\text{BO}}}{\nu_{1}(m)}\cdot\left(\frac{\lambda_{\text{BO}}}{\nu_{1}(N)}\right)^{n_{\text{ex}}}.
\end{align*}
For \eqref{eq:small-backordering-p-Y-n} we calculate
\begin{align*}
P(Y_{0}=N-n_{1},Y_{1}=n_{1}) & \overset{\hphantom{\eqref{eq:MODEL-ALL-TOGETHER-stationary-distribution-1}}}{=}P(X_{\text{ex}}=0,Y_{0}=N-n_{1},Y_{1}=n_{1})\\
 & \overset{\eqref{eq:MODEL-ALL-TOGETHER-stationary-distribution-1}}{=}\left[C_{\text{BO}}(\left\{ 1\right\} ,N)\right]^{-1}\cdot\prod_{m=1}^{n_{1}}\frac{\lambda_{\text{BO}}}{\nu_{1}(m)}.
\end{align*}
For \eqref{eq:randvtlg-y} we calculate
\begin{align*}
P(Y_{0}=0,Y_{1}=N) & \overset{\hphantom{\eqref{eq:MODEL-ALL-TOGETHER-stationary-distribution-1}}}{=}\sum_{n_{\text{ex}}=0}^{\infty}P(X_{\text{ex}}=n_{\text{ex}},Y_{0}=0,Y_{1}=N)\\
 & \overset{\eqref{eq:MODEL-ALL-TOGETHER-stationary-distribution-1}}{=}\left[C_{\text{BO}}(\left\{ 1\right\} ,N)\right]^{-1}\cdot\prod_{m=1}^{N}\frac{\lambda_{\text{BO}}}{\nu_{1}(m)}\cdot\sum_{n_{\text{ex}}=0}^{\infty}\left(\frac{\lambda_{\text{BO}}}{\nu_{1}(N)}\right)^{n_{\text{ex}}}.
\end{align*}
\item [(ii)]We use the marginal distributions in (i) to calculate the average external queue length in steady state.
\begin{align*}
L_{\text{ex}} & =\sum_{n_{\text{ex}}=0}^{\infty}n_{\text{ex}}\cdot P(X_{\text{ex}}=n_{\text{ex}})=\left[C_{\text{BO}}(\left\{ 1\right\} ,N)\right]^{-1}\cdot\prod_{m=1}^{N}\frac{\lambda_{\text{BO}}}{\nu_{1}(m)}\cdot\sum_{n_{\text{ex}}=1}^{\infty}n_{\text{ex}}\cdot\left(\frac{\lambda_{\text{BO}}}{\nu_{1}(N)}\right)^{n_{\text{ex}}}\\
 & =\left[C_{\text{BO}}(\left\{ 1\right\} ,N)\right]^{-1}\cdot\prod_{m=1}^{N}\frac{\lambda_{\text{BO}}}{\nu_{1}(m)}\cdot\frac{\frac{\lambda_{\text{BO}}}{\nu_{1}(N)}}{\left(1-\frac{\lambda_{\text{BO}}}{\nu_{1}(N)}\right)^{2}}\overset{\eqref{eq:randvtlg-y}}{=}P(Y_{0}=0,Y_{1}=N)\cdot\frac{\frac{\lambda_{\text{BO}}}{\nu_{1}(N)}}{1-\frac{\lambda_{\text{BO}}}{\nu_{1}(N)}}\\
 & =P(Y_{0}=0,Y_{1}=N)\cdot\frac{\frac{\lambda_{\text{BO}}}{\nu_{1}(N)}}{\frac{\nu_{1}(N)-\lambda_{\text{BO}}}{\nu_{1}(N)}}=P(Y_{0}=0,Y_{1}=N)\cdot\frac{\lambda_{\text{BO}}}{\nu_{1}(N)-\lambda_{\text{BO}}}.
\end{align*}
Equation \eqref{eq:waiting-time-ex} follows from Little's law, see, for example, \citet{little-graves:2008}.
\end{enumerate}
\end{proof}
An important practical question about the SOQN systems in this paper is: ``Do more resources allow more system throughput?'' \prettyref{prop:monoton-for-1} shows that, even for the simple system with $J=1$, the perhaps surprising answer is: ``It depends.'' We also know that in systems with $J\geq1$ whose service rates are non-decreasing in the number of customers, more resources lead to equal or higher throughput.
\begin{prop}
\label{prop:monoton-for-more-1}Let $J\geq1$ and let all service rates be non-decreasing in the number of customers, i.e.~$\nu_{j}(n+1)\geq\nu_{j}(n)$, $n\in\{0,\ldots,N-1\}$, for all $j\in\overline{J}$. Then $\lambda_{\text{BO,max}}=\mathrm{Th}_{0}(N)$, viewed as a function of $N$, is non-decreasing on $\mathbb{N}$.
\end{prop}
\begin{proof}
See \citet{vanderWal1989}.
\end{proof}
\begin{prop}
\label{prop:monoton-for-1}Let $J=1$.
Then the following are equivalent:
\begin{itemize}
\item [(i)]$\lambda_{\text{BO,max}}=\mathrm{Th}_{0}(N)$, viewed as a function of $N$, is non-decreasing on $\mathbb{N}$.
\item [(ii)]$\nu_{1}$ is non-decreasing on $\mathbb{N}$.
\end{itemize}
\end{prop}
\begin{proof}
Because of \eqref{eq:max-arrival-from-norm-constants} and \eqref{eq:th0-by-thi}, (i) is equivalent to
\begin{equation}
\forall\;N\in\mathbb{N}:\;C^{\text{stb}}(\left\{ 1\right\} ,N-1)\cdot C^{\text{stb}}(\left\{ 1\right\} ,N+1)\leq C^{\text{stb}}(\left\{ 1\right\} ,N)^{2}.\label{eq:1}
\end{equation}
We note that
\begin{align*}
C^{\text{stb}}(\left\{ 1\right\} ,N) & =\prod_{i=1}^{N}\frac{\eta_{1}}{\nu_{1}(i)},\\
C^{\text{stb}}(\left\{ 1\right\} ,N+1) & =C^{\text{stb}}(\left\{ 1\right\} ,N)\cdot\frac{\eta_{1}}{\nu_{1}(N+1)},\\
C^{\text{stb}}(\left\{ 1\right\} ,N-1) & =C^{\text{stb}}(\left\{ 1\right\} ,N)\cdot\frac{\nu_{1}(N)}{\eta_{1}},
\end{align*}
implying that \prettyref{eq:1} is equivalent to
\[
\forall\;N\in\mathbb{N}:\;\frac{\nu_{1}(N)}{\nu_{1}(N+1)}\cdot C^{\text{stb}}(\left\{ 1\right\} ,N)^{2}\leq C^{\text{stb}}(\left\{ 1\right\} ,N)^{2},
\]
and because $C^{\text{stb}}(\left\{ 1\right\} ,N)^{2}>0$, this is equivalent to
\[
\forall\;N\in\mathbb{N}:\;\nu_{1}(N)\leq\nu_{1}(N+1),
\]
which is (ii).
\end{proof}

\section{Approximation of the resource network\label{sec:Approximation}}

Unfortunately, the steady-state distribution of the SOQN-BO for $J>1$ is not known. However, for a modified system, we get closed-form expressions and product-form results for $J\geq1$, as shown in the next section.

\subsection{SOQN with lost customers\label{sec:Lost-network}}

In this section, we consider a modification of the model from \prettyref{sec:RMFS-general-model}, where newly arriving customers are lost if the resource pool is empty. Henceforth, we call it \emph{SOQN-LC}.
In the literature, this behaviour is called ``lost customers'', ``lost sales'', ``lost arrivals'' or ``loss systems''. These customers arrive one by one according to a Poisson process with rate $\lambda_{\text{LS}}>0$. But because of lost customers, the actual arrival rate to the synchronisation node is smaller. We call this effective arrival rate $\lambda_{\text{eff}}(\lambda_{\text{LS}})$. The SOQN-LC is on the right of \prettyref{fig:SOQN-lostsales}, and the original SOQN-BO from \prettyref{sec:RMFS-general-model} is on the left of \prettyref{fig:SOQN-lostsales}. It is well known that such an SOQN-LC turns formally into a closed Gordon-Newell network which consists only of the resource network, see \citep[p. 21]{yao2001fundamentals}.
\begin{figure}
\caption{The original SOQN-BO (left) and the SOQN-LC with lost customers (right).}
\label{fig:SOQN-lostsales}
\end{figure}
To obtain a Markovian process description, we denote by $Y_{0}(t)$ the number of resources in the resource pool at time $t\geq0$ and by $Y_{j}(t)$, $j\in\overline{J}$, the number of resources present at node $j$ in the inner network at time $t\geq0$, either waiting or in service. We call this number the queue length at node $j\in\overline{J}$. Then $YALL(t):=\left(Y_{j}(t):j\in\overline{J}_{0}\right)$ is the local queue length vector of the resource network at time $t\geq0$. We define the joint queue length process of the semi-open network by
\[
Z_{\text{LC}}:=\left(YALL(t):t\geq0\right).
\]
Then, due to the usual independence and memorylessness assumptions (see the assumptions \vpageref{independence-memorylessness}), $Z_{\text{LC}}$ is a homogeneous Markov process, which is irreducible on the state space
\begin{align*}
E_{LC} & :=\big\{\left(n_{0},n_{1},\ldots,n_{J}\right):\:n_{j}\in\left\{ 0,\ldots,N\right\} \:\forall j\in\overline{J}_{0},\:\sum_{j\in\overline{J}_{0}}n_{j}=N\big\}.
\end{align*}
For such a system, closed-form expressions for the steady-state distribution\\
$\boldsymbol{\pi}_{\text{LC}}:=\left(\pi_{\text{LC}}\left(nALL\right):nALL\in E_{LC}\right)$ in product form are available, see \citet[p. 22, Theorem 2.5]{yao2001fundamentals}. For $nALL:=\left(n_{j}:j\in\overline{J}_{0}\right)\in E_{LC}$ it is
\begin{equation}
\pi_{\text{LC}}\left(nALL\right)=\left[C_{\text{LC}}(\overline{J}_{0},N)\right]^{-1}\cdot\left(\frac{\eta_{0}}{\lambda_{\text{LS}}}\right)^{n_{0}}\cdot\prod_{j=1}^{J}\left(\prod_{i=1}^{n_{j}}\frac{\eta_{j}}{\nu_{j}(i)}\right)\label{eq:BO-GEN-APROX-stationary-distribution-1}
\end{equation}
with normalisation constant
\begin{equation}
C_{\text{LC}}(\overline{J}_{0},N):=\sum_{\sum_{j\in\overline{J}_{0}}n_{j}=N}\left(\frac{\eta_{0}}{\lambda_{\text{LS}}}\right)^{n_{0}}\cdot\prod_{j=1}^{J}\left(\prod_{i=1}^{n_{j}}\frac{\eta_{j}}{\nu_{j}(i)}\right).\label{eq:BO-GEN-normalize}
\end{equation}

\subsection{Adjustment\label{sec:Adjustment}}

We want to use the modified system (SOQN-LC), for which results are known, to approximate the SOQN-BO. But before we do so, we have to ensure that both systems process in the mean the same number of external customers, that is, that they have the same throughput through synchronisation node $0$. Our main idea is: in order to compensate for the loss of customers, we adjust the input rate $\lambda_{\text{LS}}$ of the modified system until it reaches the desired throughput. But is this even possible? Yes, it is! We will prove it in \prettyref{thm:theorem-ls}. But first we need to calculate the throughputs of both systems.
\begin{lem}
\label{lem:effektive-throuput-with-lost-customrers}The throughput of the SOQN-BO in steady state is $\lambda_{\text{BO}}$.
The throughput of the SOQN-LC in steady state is
\[
\lambda_{\text{eff}}(\lambda_{\text{LS}})=\lambda_{\text{LS}}\cdot\Bigg(1-\underbrace{\frac{C^{\text{stb}}(\overline{J},N)}{C_{\text{LC}}(\overline{J}_{0},N)}}_{=\pi_{\text{LC,0}}(0)}\Bigg),
\]
where $\pi_{\text{LC,0}}(0)$ is the probability that there are no resources in the resource pool.
\end{lem}
\begin{proof}
The throughput of the SOQN-BO in steady state is equal to the arrival rate $\lambda_{\text{BO}}$ because all customers pass through the system. In contrast, in the SOQN-LC, the proportion $\pi_{\text{LC,0}}(0)$ of the customers is lost. From the steady-state distribution \prettyref{eq:BO-GEN-APROX-stationary-distribution-1}, we directly calculate for the SOQN-LC the probability $\pi_{\text{LC,0}}(0)$ of the resource pool being empty as
\begin{align}
\pi_{\text{LC,0}}(0) & :=\sum_{\sum_{j\in\overline{J}}n_{j}=N}\pi_{\text{LC}}\left(0,n_{1},\ldots,n_{J}\right)\overset{\prettyref{eq:BO-GEN-APROX-stationary-distribution-1}}{=}\frac{C^{\text{stb}}(\overline{J},N)}{C_{\text{LC}}(\overline{J}_{0},N)}.\label{eq:BO-GEN-LS-theta-null-1}
\end{align}
Then the effective arrival rate $\lambda_{\text{eff}}(\lambda_{\text{LS}})$, which is the throughput $\ThLC 0$ of the system, is
\[
\lambda_{\text{eff}}(\lambda_{\text{LS}})=\ThLC 0=\lambda_{\text{LS}}\cdot\left(1-\pi_{\text{LC,0}}(0)\right).
\]
\end{proof}
Now we can adjust $\lambda_{\text{LS}}$ in such a way that both systems have the same throughput. We assume that both systems -- with backordering and with lost customers -- are stable. For the SOQN-BO, according to \prettyref{prop:stability-condition}, stability is equivalent to $\lambda_{\text{BO}}\in\left(0,\lambda_{\text{BO,max}}\right)$. For the SOQN-LC, stability is guaranteed for any arrival rate $\lambda_{\text{LS}}\in\left(0,\infty\right)$, because the state space $E_{LC}$ is finite.
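The effective arrival rate from \prettyref{lem:effektive-throuput-with-lost-customrers} is easy to evaluate once the constants $C^{\text{stb}}(\overline{J},\cdot)$ are available. A minimal sketch with hypothetical data (a cycle $0\rightarrow1\rightarrow2\rightarrow0$ with $\etaVect=(1,1,1)$, constant service rates and $N=3$ resources; none of these numbers are from the paper):

```python
from itertools import product

# Hypothetical example (not from the paper): cycle 0 -> 1 -> 2 -> 0,
# eta = (1, 1, 1), constant service rates, N = 3 resources.
eta = {0: 1.0, 1: 1.0, 2: 1.0}
nu = {1: 2.0, 2: 3.0}
N = 3

def C_stb(n_total):
    """Normalisation constant of the stability network, by enumeration."""
    inner = sorted(nu)
    total = 0.0
    for ns in product(range(n_total + 1), repeat=len(inner)):
        if sum(ns) == n_total:
            term = 1.0
            for j, n_j in zip(inner, ns):
                term *= (eta[j] / nu[j]) ** n_j   # constant rates nu_j(i) = nu_j
            total += term
    return total

def lambda_eff(lam_ls):
    """Effective arrival rate of the SOQN-LC: lam_ls * (1 - pi_LC,0(0))."""
    c_lc = sum((eta[0] / lam_ls) ** n0 * C_stb(N - n0) for n0 in range(N + 1))
    return lam_ls * (1.0 - C_stb(N) / c_lc)

# lambda_eff stays strictly below lam_ls (a positive fraction is lost)
# and approaches lambda_BO,max = 114/65 as lam_ls grows.
print(lambda_eff(1.0), lambda_eff(10.0))
```

In this example $\lambda_{\text{eff}}(1)=102/115\approx0.887$, and the values increase towards $114/65\approx1.754$ as the input rate grows.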
\begin{thm}
\label{thm:theorem-ls}For every stable SOQN-BO, there exists an SOQN-LC with arrival rate $\lambda_{\text{LS}}$ such that both systems have the same throughput in steady state. Formally, this means:
\[
\text{For all }\lambda_{\text{BO}}\in\left(0,\lambda_{\text{BO,max}}\right)\ \text{there exists }\lambda_{\text{LS}}\in\left(0,\infty\right)\ \text{with}\ \lambda_{\text{eff}}(\lambda_{\text{LS}})=\lambda_{\text{BO}},
\]
where $\lambda_{\text{BO,max}}$ is given in \prettyref{eq:max-arrival-from-norm-constants}.
\end{thm}
\begin{proof}
The idea of the proof is to show that for any $\lambda_{\text{BO}}\in\left(0,\lambda_{\text{BO,max}}\right)$, the function $\lambda_{\text{eff}}(\lambda_{\text{LS}})$ from \prettyref{lem:effektive-throuput-with-lost-customrers} takes values larger than the prescribed $\lambda_{\text{BO}}$ as well as values smaller than $\lambda_{\text{BO}}$, and is continuous. Thus, by the intermediate value theorem, there exists $\lambda_{\text{LS}}$ such that $\lambda_{\text{eff}}(\lambda_{\text{LS}})=\lambda_{\text{BO}}$. We first show that $\lambda_{\text{eff}}(\lambda_{\text{LS}})=\lambda_{\text{LS}}\cdot\left(1-\frac{C^{\text{stb}}(\overline{J},N)}{C_{\text{LC}}(\overline{J}_{0},N)}\right)$ can be larger than any given $\lambda_{\text{BO}}$.
We analyse the normalisation constant $C_{\text{LC}}(\overline{J}_{0},N)$:
\begin{align}
C_{\text{LC}}(\overline{J}_{0},N) & =\sum_{\sum_{j\in\overline{J}_{0}}n_{j}=N}\left(\frac{\eta_{0}}{\lambda_{\text{LS}}}\right)^{n_{0}}\cdot\prod_{j=1}^{J}\left(\prod_{i=1}^{n_{j}}\frac{\eta_{j}}{\nu_{j}(i)}\right)\nonumber \\
 & =\sum_{n_{0}=0}^{N}\left(\frac{\eta_{0}}{\lambda_{\text{LS}}}\right)^{n_{0}}\cdot\sum_{\sum_{j\in\overline{J}}n_{j}=N-n_{0}}\prod_{j=1}^{J}\left(\prod_{i=1}^{n_{j}}\frac{\eta_{j}}{\nu_{j}(i)}\right)\nonumber \\
 & =\sum_{n_{0}=0}^{N}\left(\frac{\eta_{0}}{\lambda_{\text{LS}}}\right)^{n_{0}}\cdot C^{\text{stb}}(\overline{J},N-n_{0}).\label{eq:C_LC-depends-from-CB}
\end{align}
To simplify the notation, we define the constants $\abrevC{n_{0}}:=C^{\text{stb}}(\overline{J},N-n_{0})$, $n_{0}\in\left\{ 0,\ldots,N\right\} $. Then $\lambda_{\text{eff}}(\lambda_{\text{LS}})$ can be expressed as
\begin{align*}
\lambda_{\text{eff}}(\lambda_{\text{LS}}) & =\lambda_{\text{LS}}\cdot\left(1-\frac{C^{\text{stb}}(\overline{J},N)}{C_{\text{LC}}(\overline{J}_{0},N)}\right)=\lambda_{\text{LS}}\cdot\left(1-\frac{\abrevC 0}{\sum_{n_{0}=0}^{N}\left(\frac{\eta_{0}}{\lambda_{\text{LS}}}\right)^{n_{0}}\abrevC{n_{0}}}\right)\\
 & =\lambda_{\text{LS}}\cdot\left(\frac{\sum_{n_{0}=1}^{N}\left(\frac{\eta_{0}}{\lambda_{\text{LS}}}\right)^{n_{0}}\abrevC{n_{0}}}{\sum_{n_{0}=0}^{N}\left(\frac{\eta_{0}}{\lambda_{\text{LS}}}\right)^{n_{0}}\abrevC{n_{0}}}\right)=\eta_{0}\cdot\left(\frac{\sum_{n_{0}=1}^{N}\left(\frac{\eta_{0}}{\lambda_{\text{LS}}}\right)^{n_{0}-1}\abrevC{n_{0}}}{\abrevC 0+\sum_{n_{0}=1}^{N}\left(\frac{\eta_{0}}{\lambda_{\text{LS}}}\right)^{n_{0}}\abrevC{n_{0}}}\right)\\
 & =\eta_{0}\cdot\left(\frac{\abrevC 1+\sum_{n_{0}=2}^{N}\left(\frac{\eta_{0}}{\lambda_{\text{LS}}}\right)^{n_{0}-1}\abrevC{n_{0}}}{\abrevC 0+\sum_{n_{0}=1}^{N}\left(\frac{\eta_{0}}{\lambda_{\text{LS}}}\right)^{n_{0}}\abrevC{n_{0}}}\right).
\end{align*}
Hence, it holds that
\[
\lim_{\lambda_{\text{LS}}\rightarrow\infty}\lambda_{\text{eff}}(\lambda_{\text{LS}})=\eta_{0}\cdot\frac{\abrevC 1}{\abrevC 0}=\lambda_{\text{BO,max}}.
\]
Therefore, $\lambda_{\text{eff}}(\lambda_{\text{LS}})$ can be larger than any stable arrival rate $\lambda_{\text{BO}}\in\left(0,\lambda_{\text{BO,max}}\right)$. Now we show that $\lambda_{\text{eff}}(\lambda_{\text{LS}})$ can be smaller than any stable $\lambda_{\text{BO}}\in\left(0,\lambda_{\text{BO,max}}\right)$. This follows from
\[
\lim_{\lambda_{\text{LS}}\rightarrow0}\lambda_{\text{eff}}(\lambda_{\text{LS}})=\lim_{\lambda_{\text{LS}}\rightarrow0}\lambda_{\text{LS}}\cdot\underbrace{\left(1-\pi_{\text{LC,0}}(0)\right)}_{>0\ \text{and }<1}=0.
\]
Finally, from $\lambda_{\text{eff}}(\lambda_{\text{LS}})=\lambda_{\text{LS}}\cdot\left(1-\frac{\abrevC 0}{\sum_{n_{0}=0}^{N}\left(\frac{\eta_{0}}{\lambda_{\text{LS}}}\right)^{n_{0}}\cdot\abrevC{n_{0}}}\right)$ it follows that $\lambda_{\text{eff}}$ is a continuous function of $\lambda_{\text{LS}}\in\left(0,\infty\right)$, which proves our claim by the intermediate value theorem.
\end{proof}
Henceforth, we will call $\lambda_{\text{LS}}$ with $\lambda_{\text{eff}}(\lambda_{\text{LS}})=\lambda_{\text{BO}}$ the \emph{adjusted} arrival rate for $\lambda_{\text{BO}}$.
\begin{prop}
\label{prop:unique}If the service rates $\nu_{j}(\cdot)$, $j\in\overline{J}$, are non-decreasing, then $\lambda_{\text{LS}}$ in \prettyref{thm:theorem-ls} is unique.
\end{prop}
The proof of \prettyref{prop:unique} is presented in \prettyref{appx:omitted-calculation}. Interested readers will find the explicit results for the adjusted $\lambda_{\text{LS}}$ in the special cases $N=1$ and $N=2$ in \prettyref{rem:lambda-eff-N-1-N-2} in \prettyref{appx:omitted-calculation}.
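The intermediate-value argument above translates directly into a numerical procedure for the adjusted arrival rate: grow an interval until it brackets the target throughput and then bisect. A minimal sketch with hypothetical data (a cycle $0\rightarrow1\rightarrow2\rightarrow0$, $\etaVect=(1,1,1)$, constant rates, $N=3$; all function names are ours, not from the paper, and the target rate must lie below $\lambda_{\text{BO,max}}$):

```python
from itertools import product

# Hypothetical example (not from the paper): cycle 0 -> 1 -> 2 -> 0,
# eta = (1, 1, 1), constant service rates nu_1 = 2, nu_2 = 3, N = 3.
eta = {0: 1.0, 1: 1.0, 2: 1.0}
nu = {1: 2.0, 2: 3.0}
N = 3

def C_stb(n_total):
    """Normalisation constant of the stability network, by enumeration."""
    inner = sorted(nu)
    total = 0.0
    for ns in product(range(n_total + 1), repeat=len(inner)):
        if sum(ns) == n_total:
            term = 1.0
            for j, n_j in zip(inner, ns):
                term *= (eta[j] / nu[j]) ** n_j
            total += term
    return total

def lambda_eff(lam_ls):
    """Effective arrival rate of the SOQN-LC."""
    c_lc = sum((eta[0] / lam_ls) ** n0 * C_stb(N - n0) for n0 in range(N + 1))
    return lam_ls * (1.0 - C_stb(N) / c_lc)

def adjusted_rate(lam_bo, tol=1e-12):
    """Bisection for lam_ls with lambda_eff(lam_ls) = lam_bo.

    Requires lam_bo < lambda_BO,max; lambda_eff is continuous, tends to 0
    as lam_ls -> 0 and to lambda_BO,max as lam_ls -> infinity, so a
    bracketing interval always exists."""
    lo, hi = 1e-9, 1.0
    while lambda_eff(hi) < lam_bo:   # grow until the interval brackets the root
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if lambda_eff(mid) < lam_bo:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

lam_ls = adjusted_rate(1.0)
print(lam_ls)   # the adjusted rate exceeds 1.0, since part of the arrivals is lost
```

Since a positive fraction of customers is lost, the adjusted rate is always strictly larger than the backordering rate it compensates for.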
\paragraph*{Some arguments for applying the resource network of the SOQN-LC as an approximation for that of the SOQN-BO.} The result of \prettyref{thm:theorem-ls} only guarantees that for a stable SOQN-BO with a prescribed external arrival rate $\lambda_{\text{BO}}$ there exists an SOQN-LC with the same resource network in which the resource pool has the same throughput. Because the SOQN-LC is a standard Gordon-Newell network, its local throughputs can be computed directly by standard algorithms, and we can compare them with those of the SOQN-BO. Surprisingly, it is not only the throughput at the resource pool that coincides by construction: the throughputs at the respective nodes of the two resource networks are pairwise identical, too. This observation suggests using the local characteristics of the queues in the resource network of the SOQN-LC as approximations for the respective performance measures of the SOQN-BO. The point is: the performance characteristics of the SOQN-BO are not directly accessible, while those of the SOQN-LC are explicitly known from product-form network theory; moreover, well-established algorithmic procedures are at hand to evaluate them. In the next section, we prove the coincidence of the respective local throughputs. Thereafter, we show that at nodes with constant service rates even the probabilities of an empty queue are pairwise identical in both resource networks. \subsection{Throughputs and idle times\label{sect:TH_BO}} \begin{prop} \label{prop:THL-LC}The local throughput $\ThLC j$ at node $j\in\overline{J}_{0}$ of the SOQN-LC with adjusted arrival rate is the same as that of the respective node in the SOQN-BO given in \prettyref{prop:THL-BO}.
With \begin{equation} C_{\text{LC}}(\overline{J}_{0},N)=\sum_{n_{0}=0}^{N}\left(\frac{\eta_{0}}{\lambda_{\text{BO}}LS}\right)^{n_{0}}\sum_{\sum_{j\in\overline{J}}n_{j}=N-n_{0}}\prod_{j=1}^{J}\left(\prod_{\ell=1}^{n_{j}}\frac{\eta_{j}}{\nu_{j}(\ell)}\right)\label{eq:NormingConst} \end{equation} we have \[ \ThLC j=\eta_{j}\cdot\frac{C_{\text{LC}}(\overline{J}_{0},N-1)}{C_{\text{LC}}(\overline{J}_{0},N)}=\lambda_{\text{BO}}\cdot\frac{\eta_{j}}{\eta_{0}},\quad j\in\overline{J}_{0}. \] \end{prop} \begin{proof} It was shown in the proof of \prettyref{thm:theorem-ls} that \[ \ThLC 0=\lambda_{\text{eff}}(\lambda_{\text{BO}}LS)=\eta_{0}\cdot\frac{C_{\text{LC}}(\overline{J}_{0},N-1)}{C_{\text{LC}}(\overline{J}_{0},N)}, \] which is the standard formula for the throughput at node $0$ of the resource network, a standard Gordon-Newell network. Therefore, \[ \lambda_{\text{BO}}=\eta_{0}\cdot\frac{C_{\text{LC}}(\overline{J}_{0},N-1)}{C_{\text{LC}}(\overline{J}_{0},N)}, \] and the standard formula for local throughputs in Gordon-Newell networks yields \[ \ThBO j=\lambda_{\text{BO}}\cdot\frac{\eta_{j}}{\eta_{0}}=\eta_{j}\cdot\frac{C_{\text{LC}}(\overline{J}_{0},N-1)}{C_{\text{LC}}(\overline{J}_{0},N)}=\ThLC j,\quad j\in\overline{J}_{0}. \] \end{proof} The explicit formula for the throughputs in \prettyref{prop:THL-LC} has a further interesting consequence. It allows us to determine efficiently the steady-state marginal distribution of the queue length at every node $j\in\overline{J}$ without knowing the adjusted value $\lambda_{\text{eff}}(\lambda_{\text{BO}}LS)$ of $\lambda_{\text{BO}}LS$. \begin{prop} Let $Y_{\text{LC}}:=(Y_{\text{LC},j}:j\in\overline{J})$ denote a random vector which is distributed according to the stationary queue length at the nodes in $\overline{J}$ of the SOQN-LC with adjusted arrival rate. If the service rate at node $j$ does not depend on the queue length, i.e.
$\nu_{j}(\cdot)=\nu_{j}$, $j\in\overline{J}$, then the probabilities that the nodes $j\in\overline{J}_{0}$ in the SOQN-LC with adjusted arrival rate are idling are pairwise the same as those of the respective nodes in the SOQN-BO given in \prettyref{cor:idle-times-BO}: \[ P(Y_{\text{LC},j}=0)=1-\lambda_{\text{BO}}\cdot\frac{\eta_{j}}{\eta_{0}}\cdot\nu_{j}^{-1}. \] \end{prop} \begin{proof} According to \prettyref{prop:THL-LC}, the throughputs are equal. The rest of the proof is the same as the proof of \prettyref{cor:idle-times-BO}. \end{proof} \section{Approximation of the external queue\label{sec:Approximation-of-the-external-queue}} Although, after adjusting $\lambda_{\text{BO}}LS$, the behaviour of the resources in both systems -- the original SOQN-BO and the modified SOQN-LC -- is very similar, their external queues are still very different. No matter how much we increase $\lambda_{\text{BO}}LS$, the external queue length of the SOQN-LC is always zero, while the external queue length of the SOQN-BO has strictly positive mean. Therefore, we cannot use the modified system directly to estimate the external queue of the original system. Instead, in the following two sections, we use a two-step approach to approximate the external queue: \begin{description} \item [{step~1}] In \prettyref{sec:small-modified}, we reduce the modified system to a simple system with lost customers. \item [{step~2}] In \prettyref{sub:small-with-backordering}, we combine the results from \prettyref{sec:small-modified} and the results from \prettyref{sec:Special-case} for a simple system with backordering to approximate the external queue.
\end{description} \subsection[Reduced SOQN-LC]{Reduced SOQN with lost customers\label{sec:small-modified}} Because the SOQN-LC from \prettyref{sec:Lost-network} is a Gordon-Newell network, we can reduce the complexity further by applying Norton's theorem, proved by \citet{chandy;herzog;woo75:}, to construct a two-node Gordon-Newell network with the same throughput. \begin{figure} \caption{Step 1: Reduction of complexity.\label{fig:Reduction-of-complexity}} \end{figure} The inner network is replaced by a single composite node ($\overline{J}:=\left\{ 1\right\} $), which consists of a single server with infinite waiting room under the FCFS regime. The service time is exponentially distributed with mean $1$, and the service speed is determined by a queue-length-dependent service intensity. According to \citet[p. 39, eq. (20)]{chandy;herzog;woo75:}, the service intensity $\varphi$ is given by \begin{align} \varphi(0) & =0,\nonumber \\ \varphi(m) & =\eta_{0}\cdot\frac{C_{\text{}}^{\text{stb}}(\overline{J},m-1)}{C_{\text{}}^{\text{stb}}(\overline{J},m)},\quad m\in\left\{ 1,\ldots,N\right\} .\label{eq:small-model-phi} \end{align} Remarkably, $\varphi(N)$ is the same as $\lambda_{\text{BO,max}}$ in \eqref{eq:max-arrival-from-norm-constants}. We deduce from \eqref{eq:stable-th0-by-thi} that \begin{equation} \varphi(m)=\mathrm{Th}_{0}^{\text{stb}}(m),\quad m\in\left\{ 1,\ldots,N\right\} ,\label{eq:small-model-phi-from-th-0} \end{equation} and that it does not depend on $\lambda_{\text{BO}}LS$. The normalisation constants $C_{\text{}}^{\text{stb}}(\overline{J},m)$, $m\in\left\{ 0,\ldots,N\right\} $, can be calculated by the convolution algorithm or by mean value analysis (MVA); both are described in \citet[p. 371ff., Section 8.1 and p. 384ff., Section 8.2]{Bolch:1998:QNM:289350}.
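The ratios in \eqref{eq:small-model-phi} fall out of a single sweep of the convolution algorithm: convolve the per-node factors $\prod_{\ell\leq n_{j}}\eta_{j}/\nu_{j}(\ell)$ node by node to obtain $C_{\text{}}^{\text{stb}}(\overline{J},m)$ for all $m\leq N$ at once. A Python sketch with illustrative node data (not taken from the paper):

```python
# Convolution algorithm: after convolving all nodes, g[m] = C^stb(J, m);
# Norton's composite service rates are phi(m) = eta_0 * g[m-1] / g[m].
# Visit ratios and service rates below are made-up example data.
eta0 = 1.0
nodes = [(1.0, lambda i: 2.0),   # (eta_j, nu_j(i)) for j = 1..J
         (0.5, lambda i: 1.5)]
N = 4

g = [1.0] + [0.0] * N            # empty network: C(-, 0) = 1
for eta_j, nu_j in nodes:
    f, fac = [1.0], 1.0
    for n in range(1, N + 1):    # per-node factors prod_i eta_j / nu_j(i)
        fac *= eta_j / nu_j(n)
        f.append(fac)
    # convolve this node's factors with the constants built so far
    g = [sum(f[k] * g[m - k] for k in range(m + 1)) for m in range(N + 1)]

phi = [0.0] + [eta0 * g[m - 1] / g[m] for m in range(1, N + 1)]
```

By construction, `phi[N]` equals the maximal stable arrival rate $\eta_{0}\cdot C_{\text{}}^{\text{stb}}(\overline{J},N-1)/C_{\text{}}^{\text{stb}}(\overline{J},N)$ of this example network.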
\subsection{Back to backordering\label{sub:small-with-backordering}} We use the result of \prettyref{thm:theorem-ls} that for every stable SOQN-BO there exists an SOQN-LC with adjusted arrival rate $\lambda_{\text{BO}}LS$ such that both systems have the same throughput in the steady state. So we can remove the lost-customers property to regain the backordering property, as shown in \prettyref{fig:Transition-from-lost-to-back}. \begin{figure} \caption{Step 2: Transition from lost customers back to backordering.\label{fig:Transition-from-lost-to-back}} \end{figure} Having done this, we can approximate the external queue of the large SOQN-BO with $J>1$ by a reduced SOQN-BO with $J=1$. We calculate $\mathrm{Th}_{0}^{\text{stb}}(m)$, $m\in\left\{ 1,\ldots,N\right\} $, for the large system and substitute these throughputs into the reduced system as in \eqref{eq:small-model-phi-from-th-0}: $\nu_{1}(m):=\varphi(m)=\mathrm{Th}_{0}^{\text{stb}}(m)$. Then we insert these rates $\nu_{1}(m)$ into the formulas for $P(X_{\text{ex}}=n_{\text{ex}})$, $L_{\text{ex}}$ and $W_{\text{ex}}$ in \prettyref{prop:marginal-distribution-of-a-simple-system}, which are exact for $J=1$. For $J>1$, we expect the results to be close to the true values, but we cannot give error bounds at present.
\[ P(X_{\text{ex}}=0)\overset{\eqref{eq:small-backordering-p-X-ex-0}}{\approx}\left[C_{\text{BO}}(\left\{ 1\right\} ,N)\right]^{-1}\cdot\sum_{n_{1}=0}^{N}\prod_{m=1}^{n_{1}}\frac{\lambda_{\text{BO}}}{\mathrm{Th}_{0}^{\text{stb}}(m)} \] and for $n_{\text{ex}}>0$ \[ P(X_{\text{ex}}=n_{\text{ex}})\overset{\eqref{eq:small-backordering-p-X-ex-n}}{\approx}\left[C_{\text{BO}}(\left\{ 1\right\} ,N)\right]^{-1}\cdot\prod_{m=1}^{N}\frac{\lambda_{\text{BO}}}{\mathrm{Th}_{0}^{\text{stb}}(m)}\cdot\left(\frac{\lambda_{\text{BO}}}{\mathrm{Th}_{0}^{\text{stb}}(N)}\right)^{n_{\text{ex}}} \] with \[ C_{\text{BO}}(\left\{ 1\right\} ,N)\overset{\eqref{eq:Modell-all-together-normierungskonstante}}{=}\sum_{n_{1}=0}^{N-1}\prod_{m=1}^{n_{1}}\frac{\lambda_{\text{BO}}}{\mathrm{Th}_{0}^{\text{stb}}(m)}+\prod_{m=1}^{N}\frac{\lambda_{\text{BO}}}{\mathrm{Th}_{0}^{\text{stb}}(m)}\cdot\frac{1}{1-\frac{\lambda_{\text{BO}}}{\mathrm{Th}_{0}^{\text{stb}}(N)}}. \] We approximate the average number of customers in the external queue with \eqref{eq:randvtlg-y} and \eqref{eq:small-backordering-L-ex-0}: \begin{equation} L_{\text{ex}}\approx L_{\text{ex}}^{\text{apprx}}:=\left[C_{\text{BO}}(\left\{ 1\right\} ,N)\right]^{-1}\cdot\prod_{m=1}^{N}\frac{\lambda_{\text{BO}}}{\mathrm{Th}_{0}^{\text{stb}}(m)}\cdot\frac{1}{1-\frac{\lambda_{\text{BO}}}{\mathrm{Th}_{0}^{\text{stb}}(N)}}\cdot\frac{\lambda_{\text{BO}}}{\mathrm{Th}_{0}^{\text{stb}}(N)-\lambda_{\text{BO}}}.\label{eq:external-length} \end{equation} Note that with our approximation method we arrive at the same formula for $L_{\text{ex}}^{\text{apprx}}$ as \citet[eq. (22)]{dallery:1990}, and thus our method reproduces the aggregation technique of \citet[Section 6]{dallery:1990}.
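As a sanity check, the displayed formulas define a proper distribution (the probabilities sum to $1$), and \eqref{eq:external-length} agrees with $\sum_{n_{\text{ex}}}n_{\text{ex}}\cdot P(X_{\text{ex}}=n_{\text{ex}})$. A Python sketch with illustrative values of $\varphi(m)$ and $\lambda_{\text{BO}}$ (not computed from a real inner network):

```python
# Sketch of the external-queue approximation: birth-death-type formulas
# with state-dependent rates phi(1..N) from the reduced model.
# The phi values and lambda_BO below are illustrative assumptions.
phi = [0.0, 1.2, 1.6, 1.8, 1.85]   # phi[m], m = 0..N  (here N = 4)
N = len(phi) - 1
lam = 1.0                          # lambda_BO; must satisfy lam < phi[N]

def prod_ratio(n1):                # prod_{m=1}^{n1} lam / phi[m]
    p = 1.0
    for m in range(1, n1 + 1):
        p *= lam / phi[m]
    return p

rho = lam / phi[N]
C_bo = sum(prod_ratio(n1) for n1 in range(N)) + prod_ratio(N) / (1.0 - rho)

def p_ex(n_ex):                    # P(X_ex = n_ex)
    if n_ex == 0:
        return sum(prod_ratio(n1) for n1 in range(N + 1)) / C_bo
    return prod_ratio(N) * rho ** n_ex / C_bo

# L_ex^apprx as in the displayed formula
L_ex = prod_ratio(N) / C_bo * (1.0 / (1.0 - rho)) * (lam / (phi[N] - lam))
```

The geometric tail in `p_ex` makes both checks elementary: summing the tail reproduces the normalisation constant, and the first moment of the geometric part reproduces `L_ex`.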
We approximate the average waiting time of customers in the external queue with \eqref{eq:waiting-time-ex}: \begin{equation} W_{\text{ex}}\approx\frac{L_{\text{ex}}^{\text{apprx}}}{\lambda_{\text{BO}}}.\label{eq:waiting-time-ex-approx} \end{equation} \section{Application to RMFS\label{sec:application}} In this section, we use our approximation algorithm to calculate the minimal number of robots for a robotic mobile fulfilment system (RMFS). In an RMFS, robots are expensive resources. Therefore, we want to keep their number small while still maintaining the necessary quality of service. We model an RMFS as an SOQN-BO and evaluate its performance analytically because this is much faster than evaluating a simulation model. Because of the large state space, it is impractical to solve this SOQN-BO exactly with matrix-geometric methods, even for a small number of robots. For example, for 10 robots, we would need to calculate ca.~$\left(9\cdot10^{4}\right)^{2}$ entries of a special matrix. That is why we use our approximation methods instead, in order to quickly estimate the main performance metrics. \subsection{\label{sec:description-of-RMFS}Description of RMFS} First, we define the components and the order fulfilment processes in an RMFS, together with an illustrated example in \prettyref{fig:rmfs-example}. The central components are: \begin{itemize} \item movable shelves, called \emph{pods}, on which items are stored, \item the \emph{storage area} -- the area where the pods are stored, \item workstations, where \begin{itemize} \item the items are picked from pods by pickers (\emph{picking stations}) or \item the items are stored to pods (\emph{replenishment stations}), \end{itemize} \item mobile \emph{robots}, which can move underneath pods and carry them to workstations. \end{itemize} \prettyref{fig:rmfs-example} illustrates an example of order fulfilment processes in an RMFS. On the upper left-hand side, we have three customers' orders.
The orders contain different items, which are illustrated with different colours. To fulfil customers' orders, we send them or parts of them to picking stations. To the same station, we send pods with all the necessary items. Each pod is carried by a robot. In this way, customers' orders generate tasks for the robots. The robots, with their pods, queue up in front of the picking stations. A picker takes all the necessary items from the pod at the head of the queue and then sends it, with its robot, back to the storage area. As soon as the customer's order or part of it is fulfilled, we remove it from the picking station. The order in which we send the customers' orders, how we split them apart, and which pods we send is a complex topic; see, for example, \citet{orderpicking}. In the present paper, we focus on the generated robots' tasks, which we will simply call \emph{tasks}. In this example, each customer's order is split into two parts. Three parts are sent to picking station~1, and three other parts are sent to picking station~2. To fulfil these partial orders, a robot transports one pod from the storage area to picking station~1, and another robot transports one pod to picking station~2. From time to time, we need to refill the pods. To do this, we send these pods to the replenishment station. There, employees refill the pods and send them back to the storage area. In this example, after picking, pod~2 is sent to the replenishment station to refill it with the blue items. \begin{figure} \caption{Example of order fulfilment processes in an RMFS.\label{fig:rmfs-example}} \end{figure} \subsection{Modelling as SOQN} A robotic mobile fulfilment system can be modelled as an SOQN-BO. This is depicted in \prettyref{fig:RMFS-image}. An RMFS is open with respect to tasks and closed with respect to robots, which are the resources in this model.
In this section, we consider only an RMFS with two picking stations and one replenishment station, but the results from Sections \ref{sec:A-general-semi-open} and \ref{sec:Approximation} about general SOQNs can also be applied to an RMFS with more than two picking stations and more than one replenishment station. \begin{figure} \caption{An RMFS modelled as an SOQN-BO.\label{fig:RMFS-image}} \end{figure} Customers' orders arrive at the RMFS one by one with rate $\lambda_{\text{CO}}$ and generate tasks. The number of tasks that a single customer order can generate depends on many parameters. In particular, it depends on the efficiency of the algorithm which tries to find an optimal match between customers' orders and pods. The matching problem is NP-hard, and, to the best of our knowledge, there are no known formulas which determine how many pods an order will require. Therefore, we assume that there exists some average pod/order ratio $\sigma_{\text{pod/order}}$ which we can find empirically for a particular RMFS. We assume that this ratio depends only on the pods' contents and the customers' order contents, and that it does not depend on the number of robots. The matching algorithm also adds some delay in order to assign pods to orders. We assume that this delay depends only on the pods' contents, the customers' order contents and the order input rate, and that it does not depend on the number of robots. We assume that we can find this delay empirically for our particular RMFS and that it is on average $W_{\text{alg}}$. Thus, the customers' orders generate a stream of ``bring a pod to a picking station'' tasks. This stream has the rate $\lambda_{\text{BO}}=\lambda_{\text{CO}}\cdot\sigma_{\text{pod/order}}>0$. The delay introduced by the matching algorithm does not change this rate. We simplify all the complexity that occurs until each task is created and model the task stream as a Poisson arrival stream with rate $\lambda_{\text{BO}}=\lambda_{\text{CO}}\cdot\sigma_{\text{pod/order}}$.
To be processed ($=$ to enter the inner network), each such task requires exactly one idle robot from the robot pool (resource pool), which is henceforth referred to as node $0$. If there is no idle robot available, the new task has to wait in an external queue until a robot becomes available. The maximal number of robots in the resource pool is $N$. The inner network in the example in \prettyref{fig:RMFS-image} consists of 11 nodes, denoted by \[ \overline{J}:=\left\{ n_{\text{sp}}State,m_{2}OneState,m_{2}TwoState,nOneState,nTwoState,m_{3}OneState,m_{3}TwoState,\text{p}_{1}\text{r},\text{p}_{2}\text{r},\text{r},\text{rs}\right\} . \] The notation for the nodes is presented in Table \ref{tab:RMFS-nodes-overview} \vpageref{tab:RMFS-nodes-overview}. The robot with its assigned task moves through the network. The following processes occur from the perspective of a robot: \begin{itemize} \item The idle robot waits to be assigned to a task (bring a particular pod). \item The robot moves with the assigned task to a pod. \item With this pod, the robot moves to one of the picking stations, more precisely, with probability $q_{\text{pp}_{1}}\in\left(0,1\right)$ to picking station $1$ and with probability $q_{\text{pp}_{2}}\in\left(0,1\right)$ to picking station $2$, whereby $q_{\text{pp}_{1}}+q_{\text{pp}_{2}}=1$. \item The robot queues with the pod at the picking station.
\item After picking at picking station $1$ resp.~picking station 2, the robot: \begin{itemize} \item either \begin{itemize} \item carries the pod directly back to the storage area with probability $q_{\text{p}_{1}\text{s}}\in\left(0,1\right)$ resp.~$q_{\text{p}_{2}\text{s}}\in\left(0,1\right)$, or \item moves to the replenishment station with probability $q_{\text{p}_{1}\text{r}}\in\left(0,1\right)$ resp.~$q_{\text{p}_{2}\text{r}}\in\left(0,1\right)$, whereby $q_{\text{p}_{1}\text{s}}+q_{\text{p}_{1}\text{r}}=1$ resp.~$q_{\text{p}_{2}\text{s}}+q_{\text{p}_{2}\text{r}}=1$, \end{itemize} \item queues at the replenishment station and \item carries the pod back to the storage area and waits for the next task. \end{itemize} \end{itemize} Each of these processes is modelled as a queue. All the movements of the robots are modelled by processor-sharing nodes with exponentially distributed service times. Their intensities $\nu_{j}(n_{j}):=\mu_{j}\cdot\phi_{j}(n_{j})$, $j\in\overline{J}\setminus\left\{ nOneState,nTwoState,\text{r}\right\} $, are presented in Table \ref{tab:RMFS-nodes-overview}. The two picking stations and the replenishment station, which are referred to as node $nOneState$, node $nTwoState$ resp.~node $\text{r}$, consist of a single server with waiting room under the FCFS regime. The picking times and the replenishment times are exponentially distributed with rates $\muOne$, $\muTwo$ resp.~$\nu_{r}$. 
\begin{table}[h] \caption{\label{tab:RMFS-nodes-overview}Overview of the nodes in the network} \begin{tabular}{ccccc} \hline \multirow{2}{*}{\textbf{Node}} & \textbf{Service} & \textbf{Random} & \multirow{2}{*}{\textbf{State}} & \textbf{Description}\tabularnewline & \textbf{intensity} & \textbf{variable} & & \textbf{(number of robots at time $t\geq0$)}\tabularnewline \hline \multirow{2}{*}{$n_{\text{sp}}State$} & \multirow{2}{*}{$\mu_{\text{sp}}\cdot\phi_{\text{sp}}(n_{\text{sp}})$} & \multirow{2}{*}{$Y_{\text{sp}}(t)$} & \multirow{2}{*}{$n_{\text{sp}}$} & \multirow{2}{*}{moving in the storage area to a pod}\tabularnewline & & & & \tabularnewline \hline \multirow{2}{*}{$m_{2}OneState$} & \multirow{2}{*}{$\nu_{\text{2}}One\cdot\phi_{2}One(m_{2}One)$} & \multirow{2}{*}{$Y_{\text{to picker}}One(t)$} & \multirow{2}{*}{$m_{2}One$} & moving a pod from the storage area\tabularnewline & & & & to picking station $1$\tabularnewline \hline \multirow{2}{*}{$m_{2}TwoState$} & \multirow{2}{*}{$\nu_{\text{2}}Two\cdot\phi_{2}Two(m_{2}Two)$} & \multirow{2}{*}{$Y_{\text{to picker}}Two(t)$} & \multirow{2}{*}{$m_{2}Two$} & moving a pod from the storage area\tabularnewline & & & & to picking station $2$\tabularnewline \hline \multirow{2}{*}{$nOneState$} & \multirow{2}{*}{$\muOne$} & \multirow{2}{*}{$Y_{\text{picker}}One(t)$} & \multirow{2}{*}{$nOne$} & \multirow{2}{*}{in the queue of picking station $1$}\tabularnewline & & & & \tabularnewline \hline \multirow{2}{*}{$nTwoState$} & \multirow{2}{*}{$\muTwo$} & \multirow{2}{*}{$Y_{\text{picker}}Two(t)$} & \multirow{2}{*}{$nTwo$} & \multirow{2}{*}{in the queue of picking station $2$}\tabularnewline & & & & \tabularnewline \hline \multirow{2}{*}{$m_{3}OneState$} & \multirow{2}{*}{$\nu_{3}One\cdot\phi_{\text{rs}}One(m_{3}One)$} & \multirow{2}{*}{$Y_{\text{to storage}}One(t)$} & \multirow{2}{*}{$m_{3}One$} & moving a pod from picking station $1$\tabularnewline & & & & to the storage area and entering node $0$\tabularnewline \hline 
$m_{3}TwoState$ & $\nu_{3}Two\cdot\phi_{\text{rs}}Two(m_{3}Two)$ & $Y_{\text{to storage}}Two(t)$ & $m_{3}Two$ & moving a pod from picking station $2$\tabularnewline & & & & to the storage area and entering node $0$\tabularnewline \hline \multirow{2}{*}{$\text{p}_{1}\text{r}$} & \multirow{2}{*}{$\mu_{\text{p}_{1}\text{r}}\cdot\phi_{\text{p}_{1}\text{r}}(n_{\text{p}_{1}\text{r}})$} & \multirow{2}{*}{$Y_{\text{p}_{1}\text{r}}(t)$} & \multirow{2}{*}{$n_{\text{p}_{1}\text{r}}$} & moving a pod from picking station $1$\tabularnewline & & & & to the replenishment station\tabularnewline \hline \multirow{2}{*}{$\text{p}_{2}\text{r}$} & \multirow{2}{*}{$\mu_{\text{p}_{2}\text{r}}\cdot\phi_{\text{p}_{2}\text{r}}(n_{\text{p}_{2}\text{r}})$} & \multirow{2}{*}{$Y_{\text{p}_{2}\text{r}}(t)$} & \multirow{2}{*}{$n_{\text{p}_{2}\text{r}}$} & moving a pod from picking station $2$\tabularnewline & & & & to the replenishment station\tabularnewline \hline \multirow{2}{*}{$\text{r}$} & \multirow{2}{*}{$\nu_{r}$} & \multirow{2}{*}{$Y_{\text{r}}(t)$} & \multirow{2}{*}{$n_{\text{r}}$} & \multirow{2}{*}{in the queue of the replenishment station}\tabularnewline & & & & \tabularnewline \hline \multirow{2}{*}{$\text{rs}$} & \multirow{2}{*}{$\mu_{\text{rs}}\cdot\phi_{\text{rs}}(n_{\text{rs}})$} & \multirow{2}{*}{$Y_{\text{r}}ToStorage(t)$} & \multirow{2}{*}{$n_{\text{rs}}$} & moving a pod from the replenishment station\tabularnewline & & & & to the storage area and entering node $0$\tabularnewline \hline \end{tabular} \end{table} The robots travel among the nodes following a fixed routing matrix $\mathcal{R}:=\left(r(i,j):i,j\in\overline{J}_{0}\right)$, whereby $\overline{J}_{0}:=\left\{ 0\right\} \cup\overline{J}$, which is given by \[ \mathcal{R}=\left(\begin{array}{c|cccccccccccc} & k_{\text{idle robots}}State & n_{\text{sp}}State & m_{2}OneState & m_{2}TwoState & nOneState & nTwoState & m_{3}OneState & m_{3}TwoState & \text{p}_{1}\text{r} & \text{p}_{2}\text{r} & \text{r} & \text{rs}\\ \hline 
k_{\text{idle robots}}State & & 1\\ n_{\text{sp}}State & & & q_{\text{pp}_{1}} & q_{\text{pp}_{2}}\\ m_{2}OneState & & & & & 1\\ m_{2}TwoState & & & & & & 1\\ nOneState & & & & & & & q_{\text{p}_{1}\text{s}} & & q_{\text{p}_{1}\text{r}}\\ nTwoState & & & & & & & & q_{\text{p}_{2}\text{s}} & & q_{\text{p}_{2}\text{r}}\\ m_{3}OneState & 1\\ m_{3}TwoState & 1\\ \text{p}_{1}\text{r} & & & & & & & & & & & 1\\ \text{p}_{2}\text{r} & & & & & & & & & & & 1\\ \text{r} & & & & & & & & & & & & 1\\ \text{rs} & 1 \end{array}\right). \] The routing matrix $\mathcal{R}$ is irreducible by construction. We define the joint stochastic process $Z$ of this system by \begin{align*} Z & :=\Big(\Big(X_{\text{ex}}(t),Y_{0}(t),Y_{\text{sp}}(t),Y_{\text{to picker}}One(t),Y_{\text{to picker}}Two(t),Y_{\text{picker}}One(t),Y_{\text{picker}}Two(t),Y_{\text{to storage}}One(t),Y_{\text{to storage}}Two(t),\\ & \mathrel{\phantom{=}}\quad\ \ Y_{\text{p}_{1}\text{r}}(t),Y_{\text{p}_{2}\text{r}}(t),Y_{\text{r}}(t),Y_{\text{r}}ToStorage(t)\Big):t\geq0\Big). \end{align*} Due to the usual independence and memorylessness assumptions (see the assumptions \vpageref{independence-memorylessness}), $Z$ is a homogeneous Markov process with state space \begin{align*} E & :=\phantom{\cup}\big\{\left(0,k_{\text{idle robots}},n_{\text{sp}},m_{2}One,m_{2}Two,nOne,nTwo,m_{3}One,m_{3}Two,n_{\text{p}_{1}\text{r}},n_{\text{p}_{2}\text{r}},n_{\text{r}},n_{\text{rs}}\right):\\ & \mathrel{\phantom{=}}\phantom{\cup\big\{\:}n_{j}\in\left\{ 0,\ldots,N\right\} \ \forall j\in\overline{J}_{0},\ \sum_{j\in\overline{J}_{0}}n_{j}=N\big\}\\ & \mathrel{\phantom{=}}\cup\big\{\left(n_{\text{ex}},0,n_{\text{sp}},m_{2}One,m_{2}Two,nOne,nTwo,m_{3}One,m_{3}Two,n_{\text{p}_{1}\text{r}},n_{\text{p}_{2}\text{r}},n_{\text{r}},n_{\text{rs}}\right):\\ & \mathrel{\phantom{=}}\phantom{\cup\big\{\:}n_{\text{ex}}\in\mathbb{N},\:n_{j}\in\left\{ 0,\ldots,N\right\} \ \forall j\in\overline{J},\ \sum_{j\in\overline{J}}n_{j}=N\big\}. 
\end{align*} $Z$ is irreducible on $E$. \subsection{Determining the minimal number of robots} We see that the throughput $\mathrm{Th}_{0}^{\text{stb}}(N)$ depends on $N$, the number of robots. This raises the question: ``How many robots do we need to stabilise a given system?'' Consequently, to find the minimal number of robots, we first check the stability criterion from \prettyref{prop:stability-condition}. The maximal number of robots $N^{\max}$ is equal to the number of pods or is determined by financial restrictions. In the following \prettyref{alg:algorithm-stable}, we determine the set $\overline{N}^{*}$ of feasible numbers of robots for a stable system. \begin{algorithm} \input{algorithm-stable-robot-set.tex}\caption{\label{alg:algorithm-stable}Calculate set of feasible numbers of robots for a stable system} \end{algorithm} \begin{rem} If $\mathrm{Th}_{0}^{\text{stb}}(\cdot)$ is non-decreasing on $\mathbb{N}$, then the algorithm can be improved: it does not have to check all possible numbers of robots. It starts with one robot and adds a robot in each step until the stability criterion is satisfied for the first time. Some simple sufficient conditions for non-decreasing $\mathrm{Th}_{0}^{\text{stb}}(\cdot)=\lambda_{\text{BO,max}}$ can be found in \prettyref{prop:monoton-for-more-1} and \ref{prop:monoton-for-1}. \end{rem} Unfortunately, stability does not say anything about the turnover time of an order. We do not know whether only 2 minutes are required to satisfy a customer\textquoteright s order or whether the order requires 2 years to be processed. In the latter case, the system is stable, but the customers may not be happy. Now, in addition to stability, we want to consider the turnover time of a customer's order as a measure of the quality of service. The turnover time of a customer's order can be split into three main parts: \begin{enumerate} \item Waiting time until the matching algorithm has assigned all required pods to that order.
By assumption, this time does not depend on the number of robots and is on average $W_{\text{alg}}>0$. \item Waiting time of the first matched pod for an idle robot, time for transport to the picking station, and waiting time for the picker at the picking station. During all these times, the order is coupled with at least one task. We call this the turnover time $TO_{\text{task}}(\lambda_{\text{BO}}LS,N)$ of the task. \item Time of an order between the start of picking and its completion. This time is complex and depends on many factors, for example: How many orders can a picker complete with the same pod? Will all completed orders wait until a pod leaves? Is the order's content in multiple pods? Will these pods arrive right after each other, or will there be many pods for other orders in between? Is there any complex merging procedure outside of the picking station? In our model, we use the simplifying assumption that the order needs on average $W_{\text{assembled}}>0$ from the time its first pod arrives at the picking station until the time picking for this order is completed. \end{enumerate} Under these assumptions, the turnover time of an order is \[ TO_{\text{order}}(\lambda_{\text{BO}}LS,N):=W_{\text{alg}}+TO_{\text{task}}(\lambda_{\text{BO}}LS,N)+W_{\text{assembled}}. \] Even in the case where $W_{\text{alg}}$ and $W_{\text{assembled}}$ are not known, we can still use $TO_{\text{task}}(\lambda_{\text{BO}}LS,N)$ as a lower bound for $TO_{\text{order}}(\lambda_{\text{BO}}LS,N)$. Because of the simplifying assumptions about $W_{\text{alg}}$ and $W_{\text{assembled}}$, only the turnover time $TO_{\text{task}}(\lambda_{\text{BO}}LS,N)$ of a task depends on $N$, so for the minimal number of robots we can focus on it.
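Returning to the stability scan: the feasibility check of \prettyref{alg:algorithm-stable} can be sketched compactly by computing $\lambda_{\text{BO,max}}(N)=\eta_{0}\cdot C_{\text{}}^{\text{stb}}(\overline{J},N-1)/C_{\text{}}^{\text{stb}}(\overline{J},N)$ by convolution for each candidate $N$ and keeping those $N$ that satisfy the stability criterion. A Python sketch; the inner network below is an illustrative two-node example, not the RMFS:

```python
# Sketch of the feasibility scan: N is feasible iff lambda_BO < lambda_BO,max(N),
# with lambda_BO,max(N) = eta_0 * C_stb(J, N-1) / C_stb(J, N).
# Visit ratios and constant service rates are made-up example data.
eta0, nodes = 1.0, [(1.0, 2.0), (0.5, 1.5)]   # (eta_j, nu_j)

def lam_max(n_robots):
    """Maximal stable arrival rate for a given number of robots."""
    g = [1.0] + [0.0] * n_robots
    for eta_j, nu_j in nodes:
        f = [(eta_j / nu_j) ** k for k in range(n_robots + 1)]
        g = [sum(f[k] * g[m - k] for k in range(m + 1))
             for m in range(n_robots + 1)]
    return eta0 * g[n_robots - 1] / g[n_robots]

def feasible_set(lam_bo, n_max):
    """All robot counts up to n_max for which the system is stable."""
    return [n for n in range(1, n_max + 1) if lam_bo < lam_max(n)]
```

When `lam_max` is non-decreasing, as in the remark above, the scan can stop at the first feasible $N$ instead of testing all candidates.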
The turnover time $TO_{\text{task}}(\lambda_{\text{BO}}LS,N)$ of a task is measured from the time the task is received until the time the picker starts to process it: \[ TO_{\text{task}}(\lambda_{\text{BO}}LS,N):=W_{\text{ex}}(N)+W_{\text{in}}(\lambda_{\text{BO}}LS,N). \] $W_{\text{ex}}(N)$ is the average time which a task spends waiting in the external queue until it enters the inner network. We can calculate it with \eqref{eq:waiting-time-ex-approx}. $W_{\text{in}}(\lambda_{\text{BO}}LS,N)$ is the average time which a task spends in the inner network until a picker starts to process it at one of the picking stations. Given the average waiting times $W_{j}(\lambda_{\text{BO}}LS,N)$ at nodes $j\in\overline{J}=\left\{ n_{\text{sp}}State,m_{2}OneState,m_{2}TwoState,nOneState,nTwoState,m_{3}OneState,m_{3}TwoState,\text{p}_{1}\text{r},\text{p}_{2}\text{r},\text{r},\text{rs}\right\} $ from arrival until service completion, and constant service rates $\nu_{j}$ at nodes $j\in\left\{ nOneState,nTwoState\right\} $, we have \begin{align*} W_{\text{in}}(\lambda_{\text{BO}}LS,N) & :=W_{n_{\text{sp}}State}(\lambda_{\text{BO}}LS,N)+r(n_{\text{sp}}State,m_{2}OneState)\cdot\left(W_{m_{2}OneState}(\lambda_{\text{BO}}LS,N)+W_{nOneState}(\lambda_{\text{BO}}LS,N)-1/\muOne\right)\\ & \mathrel{\phantom{=}}+r(n_{\text{sp}}State,m_{2}TwoState)\cdot\left(W_{m_{2}TwoState}(\lambda_{\text{BO}}LS,N)+W_{nTwoState}(\lambda_{\text{BO}}LS,N)-1/\muTwo\right). \end{align*} We calculate $W_{j}(\lambda_{\text{BO}}LS,N)$, $j\in\overline{J}$, with MVA. So the question is: \textquotedblleft How many additional robots do we need so that the turnover time of a task is also acceptable?\textquotedblright{} In the following \prettyref{alg:algorithm-minimal-to}, we determine the minimal number of robots for an acceptable turnover time of a task. We will call this maximal acceptable time $TO_{\text{task}}^{\max}$.
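The MVA step just mentioned can be sketched as follows: exact mean value analysis for a closed network with constant-rate FCFS nodes and infinite-server (travel) nodes, returning the per-node waiting times $W_{j}$, the throughput and the mean queue lengths. The node data in the usage line are illustrative, not the RMFS parameters:

```python
# Sketch of exact MVA for a closed Gordon-Newell network.
# nodes: list of (eta_j, nu_j, kind) with kind in {'fcfs', 'is'};
# 'fcfs' = single server FCFS with constant rate, 'is' = infinite server.
def mva(nodes, n_customers):
    L = [0.0] * len(nodes)                     # mean queue lengths
    W, X = [0.0] * len(nodes), 0.0
    for k in range(1, n_customers + 1):
        # arrival theorem: an arriving customer sees the (k-1)-network
        W = [(1.0 + L[j]) / nu if kind == 'fcfs' else 1.0 / nu
             for j, (eta, nu, kind) in enumerate(nodes)]
        X = k / sum(nodes[j][0] * W[j] for j in range(len(nodes)))
        L = [X * nodes[j][0] * W[j] for j in range(len(nodes))]
    return W, X, L                             # waiting times, throughput, queue lengths

# usage with illustrative data (two constant-rate FCFS nodes, 4 customers)
W, X, L = mva([(1.0, 2.0, 'fcfs'), (0.5, 1.5, 'fcfs')], 4)
```

For constant-rate FCFS and infinite-server nodes, MVA is exact and its throughput agrees with the convolution-based ratio $C(N-1)/C(N)$; by Little's law the mean queue lengths always sum to the population size.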
\begin{algorithm}[h] \input{algorithm-turnover-robot-set.tex}\caption{\label{alg:algorithm-minimal-to}Calculate the minimal number of robots for acceptable turnover time of a task} \end{algorithm} \subsection{Numerical experiments\label{sec:numerical-experiments}} We use parameters from \citet[Table 5.3 and Table 5.4]{dynamicPolicies} in our experiments. We set the number of pods $N^{\max}=550$, the arrival rate of tasks\footnote{Note that in \citet[Table 5.3 and Table 5.4]{dynamicPolicies} the arrival rates are in {[}order/hour{]}, but because in that paper each order generates one task, we use {[}task/hour{]} directly.} $468\:\frac{\text{tasks}}{\text{h}}=0.13\:\frac{\text{tasks}}{\text{s}}$, average travel time at node $n_{\text{sp}}State$ $\mu_{\text{sp}}^{-1}=\SI{18.4}{\second}$, average travel time at node $m_{2}OneState$ $\nu_{\text{2}}One^{-1}=\SI{34.5}{\second}$, average travel time at node $m_{2}TwoState$ $\nu_{\text{2}}Two^{-1}=\SI{34.5}{\second}$, average pick time of picking station $1$ $\muOne^{-1}=\SI{10}{\second}$, average pick time of picking station $2$ $\muTwo^{-1}=\SI{10}{\second}$, average travel time at node $m_{3}OneState$ $\nu_{3}One^{-1}=\SI{34.5}{\second}$, average travel time at node $m_{3}TwoState$ $\nu_{3}Two^{-1}=\SI{34.5}{\second}$, average travel time at node $\text{p}_{2}\text{r}$ $\mu_{\text{p}_{2}\text{r}}^{-1}=\SI{34.5}{\second}$, average travel time at node $\text{p}_{1}\text{r}$ $\mu_{\text{p}_{1}\text{r}}^{-1}=\SI{34.5}{\second}$, average replenishment time (node $\text{r}$) $\nu_{r}^{-1}=\SI{30}{\second}$, and average travel time at node $\text{rs}$ $\mu_{\text{rs}}^{-1}=\SI{34.5}{\second}$. For our numerical example, we assume that the robots do not interfere with each other when they move. Hence, our processor-sharing queues are infinite-server queues. That means $\phi_{j}(n_{j})=n_{j}$ for all $j\in\overline{J}\setminus\left\{ nOneState,nTwoState,\text{r}\right\} $.
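With infinite-server travel nodes, only the FCFS stations limit the arrival rate as $N\rightarrow\infty$, so a quick plausibility check of such data is $\lambda_{\text{BO}}<\min_{j}\nu_{j}/\eta_{j}$ over the FCFS nodes. In the Python sketch below, the routing probabilities are assumptions chosen for illustration; they are not given in the parameter list above:

```python
# Asymptotic bottleneck check for an RMFS-like network: as N -> infinity the
# infinite-server travel nodes never saturate, so stability for large N only
# requires lambda_BO < min over FCFS nodes of nu_j / eta_j.
# The routing probabilities q_* are assumptions, not data from the cited tables.
lam_bo = 0.13                    # tasks per second (0.13 tasks/s from the text)
q_pp1 = q_pp2 = 0.5              # split between the picking stations (assumed)
q_p1r = q_p2r = 0.2              # share of pods sent on to replenishment (assumed)

fcfs = {                         # node: (service rate [1/s], visit ratio eta_j)
    'picker1': (1 / 10.0, q_pp1),
    'picker2': (1 / 10.0, q_pp2),
    'replenish': (1 / 30.0, q_pp1 * q_p1r + q_pp2 * q_p2r),
}
lam_bound = min(nu / eta for nu, eta in fcfs.values())
stable_for_large_N = lam_bo < lam_bound
```

Under these assumed routing probabilities, the replenishment station is the bottleneck, and the prescribed task rate lies below the asymptotic bound, consistent with the stability observed in the experiments.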
We implemented our algorithm in R, using the package \emph{queueing}, see \citet{Rqueueing}. The minimal number of robots for this system to be stable is 18. The minimal number of robots under an additional waiting-time requirement depends on that waiting time. In the worst-case scenario -- when we need to try all $550-18+1=533$ candidate numbers of robots -- our implementation takes on average 83 seconds on a notebook with an Intel i7-7600U CPU (2.80GHz) and 16GB RAM. We plotted some important parameters of the system in Figures \ref{fig:stable-arrivals} to \ref{fig:waiting-times}. For better readability, we plotted data for a limited number of robots; due to the asymptotic behaviour, one can estimate how these parameters look with more robots. \prettyref{fig:stable-arrivals} shows the maximal arrival rates $\lambda_{\text{BO}}$ for given numbers of robots that keep the system stable. We see asymptotic behaviour of the rate $\lambda_{\text{BO}}$ for $N\rightarrow\infty$. In particular, beyond about 40 robots, additional robots do not allow significantly higher arrival rates. \prettyref{fig:throughputs} shows the throughputs for each node. In \prettyref{prop:THL-BO}, we showed that these throughputs do not depend on the number of robots, and that they are pairwise the same for the original system with backordering and for the adjusted lost-customers approximation. The probabilities that the nodes $nOneState$, $nTwoState$ and $\text{r}$ are idling are 0.35, 0.35, and 0.22, respectively; we calculated them with \prettyref{cor:idle-times-BO}. \prettyref{fig:adjusted-arrivals} shows the adjusted arrival rate $\lambda_{\text{BO}}LS$ for a system with lost customers, chosen so that the effective arrival rate is $\lambda_{\text{BO}}$. We observe the asymptotic behaviour $\lim_{N\rightarrow\infty}\lambda_{\text{BO}}LS\approx\lambda_{\text{BO}}$.
This happens because, in our test system with lost customers, the probability of an empty resource pool is almost $0$ when there are many robots in the system, so only a few customers are lost. When only a few customers are lost, we do not need to adjust $\lambda_{\text{BO}}LS$ much. \prettyref{fig:waiting-times} shows the average waiting time of an order from its arrival into the system until it is completed at a picking station. We see that an order spends a lot of time in a system with only 18 robots, even though this system is stable. We also see how dramatically the waiting time improves with only one additional robot. \begin{figure} \caption{Maximal arrival rates $\lambda_{\text{BO}}$ for given numbers of robots that keep the system stable.} \label{fig:stable-arrivals} \end{figure} \begin{figure} \caption{Throughputs for each node of the RMFS example.} \label{fig:throughputs} \end{figure} \begin{figure} \caption{Adjusted arrival rate $\lambda_{\text{BO}}LS$ for the system with lost customers.} \label{fig:adjusted-arrivals} \end{figure} \begin{figure} \caption{Turnover times of a task $TO_{\text{task}}$ for different numbers of robots.} \label{fig:waiting-times} \end{figure} \paragraph{Simulation} We simulated the RMFS with backordering for 365 days, 20 times for each number of robots. For the simulation we used SimPy 3.0. \prettyref{fig:simulation-external-waiting-times} shows the results for the waiting times, beginning with 19 robots. The approximation shows the same qualitative behaviour as the original system with backordering. In the interesting region of 19--25 robots, the approximation is not very precise, but it answers the essential question \textquotedblleft how many robots do we need?\textquotedblright{} quite well. Beginning with 26 robots, the approximation reflects the asymptotic behaviour of the original system well. In \prettyref{fig:simulation-external-waiting-times}, we have omitted the results for 18 robots because they have a very large mean and a very large standard deviation. This is because the system with 18 robots operates on the edge of instability.
This behaviour of the system under these settings is interesting from a theoretical point of view, which is why we ran more intensive tests of this system with 200 simulations. The results are shown in \prettyref{fig:simulation-external-18-histogram}; they show how different the average waiting time can be. From a practical point of view, we do not recommend operating a real system under these conditions. In this case, we also recommend not to trust simulation results that were obtained from only a few simulation runs. \prettyref{fig:simulation-turn-over-times} shows that our approximation estimates the turnover times very well. To judge the quality of the approximation properly, we need to consider that the turnover times consist of transportation times and waiting times for a picker. The average transportation times are pairwise equal in the original system and in the approximation. We can calculate them exactly from the service times at the appropriate nodes without any approximation: $\mu_{\text{sp}}^{-1}+r(n_{\text{sp}}State,m_{2}OneState)\cdot\nu_{\text{2}}One^{-1}+r(n_{\text{sp}}State,m_{2}TwoState)\cdot\nu_{\text{2}}Two^{-1}=0.0147\:\text{h}$. The hard part is to estimate the average waiting times for the picker; \prettyref{fig:simulation-waiting-for-picker} shows the results, and our approximation is still good. Other waiting times, which we cannot calculate directly, are the waiting times for replenishment, shown in \prettyref{fig:simulation-waiting-for-replenisher}. We do not need these times for our optimisation problem; however, they demonstrate how well our algorithm estimates other parts of the network. Although we are happy with these results, we remark that the approximation worked well for our test system, but systems with less impressive results are possible, too. \begin{figure} \caption{Average waiting times in the external queue for different numbers of robots, simulation vs.\ approximation.} \label{fig:simulation-external-waiting-times} \end{figure} \begin{figure} \caption{Distribution of waiting times in the external queue for a system with 18 robots, obtained with 200 simulations.} \label{fig:simulation-external-18-histogram} \end{figure} \begin{figure} \caption{Average turnover times for different numbers of robots, simulation vs.\ approximation. The graph for the approximation lies right on the graph for the simulation. The large fraction of the turnover times due to transportation is equal in both systems by construction.} \label{fig:simulation-turn-over-times} \end{figure} \begin{figure} \caption{Waiting times for pickers with different numbers of robots, simulation vs.\ approximation.} \label{fig:simulation-waiting-for-picker} \end{figure} \enlargethispage{1cm} \begin{figure} \caption{Waiting times until replenishment begins with different numbers of robots, simulation vs.\ approximation.} \label{fig:simulation-waiting-for-replenisher} \end{figure} \section{Conclusion\label{sec:Conclusion}} In this paper, we have focused on semi-open queueing networks (SOQNs), which include an external queue for customers and a resource network consisting of an inner network and a resource pool. We have determined closed-form expressions for stability, throughputs and idle probabilities of some nodes. For other performance metrics, we have proposed a new approximation approach for an SOQN with backordering. To approximate the resource network of the SOQN with backordering, in \prettyref{sec:Approximation} we have considered a modification in which newly arriving customers do not join the external queue and are lost if the resource pool is empty (``lost customers'').
We have proved that we can adjust the arrival rate so that the throughputs at each node are pairwise identical to those in the original network. We have also proved that the idle probabilities of the nodes with constant service rates are pairwise identical. To approximate the external queue of the SOQN with backordering, in \prettyref{sec:Approximation-of-the-external-queue} we have used a two-step approach. In step one, we have constructed a reduced SOQN with lost customers, in which the inner network consists of only one node, by using Norton's theorem. In step two, we have used the closed-form solution of this reduced SOQN to estimate the performance of the original SOQN with backordering. Based on this theoretical foundation, we have modelled a real-world automated warehousing system (a robotic mobile fulfilment system, in short: RMFS) as an SOQN with backordering. For our experiment, we have selected an RMFS with two picking stations and one replenishment station. Based on the stability analysis in \prettyref{sec:RMFS-general-stat-distr}, we obtained the minimum number of robots for a stable system. However, stability alone says nothing about the turnover time of an order ($=$ waiting time in the external queue $+$ processing time in the inner network), which matters because customers' orders are expected to be processed as quickly as possible in the e-commerce sector. Therefore, we have calculated the processing time in the inner network based on the approximation model in \prettyref{sec:Approximation} and the waiting time in the external queue based on the approximation model in \prettyref{sec:Approximation-of-the-external-queue}. Due to the short computational time for trying different numbers of robots, we obtained all the important metrics within a couple of seconds. We have plotted the data to show the relationship between the number of robots and the average waiting time of a task.
Based on our results, we have seen a dramatic reduction of the waiting time in the external queue with only one additional robot, while the average waiting time stagnates as further robots are added. This provides an interesting insight for practitioners and researchers who need to make a trade-off between low investment cost and good service for customers. We ran simulations to analyse the quality of our approximation method; for our test system, it shows good results. \section*{Acknowledgments} Ruslan Krenzler and Sonja Otten are funded by the industrial project ``Robotic Mobile Fulfillment System'', which is financially supported by Ecopti GmbH (Paderborn, Germany) and Beijing Hanning Tech Co., Ltd.~(Beijing, China). \appendix \begin{appendices} \section{Omitted proofs\label{appx:omitted-calculation}} \begin{proof} [Proof of \prettyref{prop:unique}]We will show the strict isotonicity of the effective arrival rate $\lambda_{\text{eff}}(\lambda_{\text{BO}}LS)$ for any $N\geq1$ by induction on $N$. To distinguish systems with different numbers of resources, we will use the notation $\lambda_{\text{eff}}^{(N)}(\lambda_{\text{BO}}LS)$ for a system with $N$ resources. We recall the following constants from \prettyref{eq:BO-GEN-normalize} for $L=1,2,\ldots,N$: \[ C_{\text{LC}}(\overline{J}_{0},L)=\sum_{n_{0}=0}^{L}\left(\frac{\eta_{0}}{\lambda_{\text{BO}}LS}\right)^{n_{0}}\sum_{\sum_{j\in\overline{J}}n_{j}=L-n_{0}}\prod_{j=1}^{J}\left(\prod_{i=1}^{n_{j}}\frac{\eta_{j}}{\nu_{j}(i)}\right). \] We set $C_{\lambda_{\text{BO}}LS}(\overline{J}_{0},L):=C_{\text{LC}}(\overline{J}_{0},L)$ to emphasise that these constants are functions of $\lambda_{\text{BO}}LS$.
We have \begin{align*} \lambda_{\text{eff}}^{(N)}(\lambda_{\text{BO}}LS) & =\lambda_{\text{BO}}LS\cdot\left(1-\frac{C^{\text{stb}}(\overline{J},N)}{C_{\text{LC}}(\overline{J}_{0},N)}\right)\\ & =\lambda_{\text{BO}}LS\cdot\left(1-\frac{\sum_{\sum_{j\in\overline{J}}n_{j}=N}\prod_{j=1}^{J}\left(\prod_{i=1}^{n_{j}}\frac{\eta_{j}}{\nu_{j}(i)}\right)}{\sum_{\sum_{j\in\overline{J}_{0}}n_{j}=N}\left(\frac{\eta_{0}}{\lambda_{\text{BO}}LS}\right)^{n_{0}}\cdot\prod_{j=1}^{J}\left(\prod_{i=1}^{n_{j}}\frac{\eta_{j}}{\nu_{j}(i)}\right)}\right)\\ & =\eta_{0}\cdot\frac{\sum_{n_{0}=1}^{N}\left(\frac{\eta_{0}}{\lambda_{\text{BO}}LS}\right)^{n_{0}-1}\sum_{\sum_{j\in\overline{J}}n_{j}=N-n_{0}}\prod_{j=1}^{J}\left(\prod_{i=1}^{n_{j}}\frac{\eta_{j}}{\nu_{j}(i)}\right)}{\sum_{n_{0}=0}^{N}\left(\frac{\eta_{0}}{\lambda_{\text{BO}}LS}\right)^{n_{0}}\sum_{\sum_{j\in\overline{J}}n_{j}=N-n_{0}}\prod_{j=1}^{J}\left(\prod_{i=1}^{n_{j}}\frac{\eta_{j}}{\nu_{j}(i)}\right)}\\ & =\eta_{0}\cdot\frac{\sum_{n_{0}=0}^{N-1}\left(\frac{\eta_{0}}{\lambda_{\text{BO}}LS}\right)^{n_{0}}\sum_{\sum_{j\in\overline{J}}n_{j}=N-1-n_{0}}\prod_{j=1}^{J}\left(\prod_{i=1}^{n_{j}}\frac{\eta_{j}}{\nu_{j}(i)}\right)}{\sum_{n_{0}=0}^{N}\left(\frac{\eta_{0}}{\lambda_{\text{BO}}LS}\right)^{n_{0}}\sum_{\sum_{j\in\overline{J}}n_{j}=N-n_{0}}\prod_{j=1}^{J}\left(\prod_{i=1}^{n_{j}}\frac{\eta_{j}}{\nu_{j}(i)}\right)}\\ & =\eta_{0}\cdot\frac{C_{\lambda_{\text{BO}}LS}(\overline{J}_{0},N-1)}{C_{\lambda_{\text{BO}}LS}(\overline{J}_{0},N)}.
\end{align*} We further define for $L=1,2,\ldots,N$ \[ C(\overline{J},L):=\sum_{\sum_{j\in\overline{J}}n_{j}=L}\prod_{j=1}^{J}\left(\prod_{i=1}^{n_{j}}\frac{\eta_{j}}{\nu_{j}(i)}\right) \] and obtain \begin{equation} C_{\lambda_{\text{BO}}LS}(\overline{J}_{0},L)=C(\overline{J},L)+\frac{\eta_{0}}{\lambda_{\text{BO}}LS}\cdot C_{\lambda_{\text{BO}}LS}(\overline{J}_{0},L-1),\quad L=1,2,\ldots,N.\label{eq:C-gleichung} \end{equation} Now we prove by induction that \[ \lambda_{\text{eff}}^{(N)}(\lambda_{\text{BO}}LS+\varepsilon)>\lambda_{\text{eff}}^{(N)}(\lambda_{\text{BO}}LS)\qquad\forall\lambda_{\text{BO}}LS>0,\ \varepsilon>0,\ \forall N=1,2,\ldots \] \textit{Base step}: For $N=1$: \[ \lambda_{\text{eff}}^{(1)}(\lambda_{\text{BO}}LS+\varepsilon)-\lambda_{\text{eff}}^{(1)}(\lambda_{\text{BO}}LS)=\frac{\eta_{0}}{\frac{\eta_{0}}{\lambda_{\text{BO}}LS+\varepsilon}+\sum_{j=1}^{J}\frac{\eta_{j}}{\nu_{j}(1)}}-\frac{\eta_{0}}{\frac{\eta_{0}}{\lambda_{\text{BO}}LS}+\sum_{j=1}^{J}\frac{\eta_{j}}{\nu_{j}(1)}}>0. \] \textit{Induction step:} Assume the inequality holds for all $1\leq L\leq N-1$. Then for $L=N$ we obtain \begin{align*} & \frac{1}{\eta_{0}}\cdot\left(\lambda_{\text{eff}}^{(N)}(\lambda_{\text{BO}}LS+\varepsilon)-\lambda_{\text{eff}}^{(N)}(\lambda_{\text{BO}}LS)\right)=\frac{C_{\lambda_{\text{BO}}LS+\varepsilon}(\overline{J}_{0},N-1)}{C_{\lambda_{\text{BO}}LS+\varepsilon}(\overline{J}_{0},N)}-\frac{C_{\lambda_{\text{BO}}LS}(\overline{J}_{0},N-1)}{C_{\lambda_{\text{BO}}LS}(\overline{J}_{0},N)}\\ & =\frac{C_{\lambda_{\text{BO}}LS+\varepsilon}(\overline{J}_{0},N-1)\cdot C_{\lambda_{\text{BO}}LS}(\overline{J}_{0},N)-C_{\lambda_{\text{BO}}LS}(\overline{J}_{0},N-1)\cdot C_{\lambda_{\text{BO}}LS+\varepsilon}(\overline{J}_{0},N)}{C_{\lambda_{\text{BO}}LS+\varepsilon}(\overline{J}_{0},N)\cdot C_{\lambda_{\text{BO}}LS}(\overline{J}_{0},N)}. \end{align*} Because the denominator is strictly positive, it suffices to show that the numerator is strictly positive.
Using the induction assumption we obtain from \prettyref{eq:C-gleichung} the following: \begin{align*} & C_{\lambda_{\text{BO}}LS+\varepsilon}(\overline{J}_{0},N-1)\cdot C_{\lambda_{\text{BO}}LS}(\overline{J}_{0},N)-C_{\lambda_{\text{BO}}LS}(\overline{J}_{0},N-1)\cdot C_{\lambda_{\text{BO}}LS+\varepsilon}(\overline{J}_{0},N)\\ & \stackrel{}{=}\left(C(\overline{J},N-1)+\frac{\eta_{0}}{\lambda_{\text{BO}}LS+\varepsilon}\cdot C_{\lambda_{\text{BO}}LS+\varepsilon}(\overline{J}_{0},N-2)\right)\\ & \mathrel{\phantom{=}}\mathrel{\phantom{=}}\cdot\left(C(\overline{J},N)+\frac{\eta_{0}}{\lambda_{\text{BO}}LS}\cdot C_{\lambda_{\text{BO}}LS}(\overline{J}_{0},N-1)\right)\\ & \mathrel{\phantom{=}}-\left(C(\overline{J},N-1)+\frac{\eta_{0}}{\lambda_{\text{BO}}LS}\cdot C_{\lambda_{\text{BO}}LS}(\overline{J}_{0},N-2)\right)\\ & \mathrel{\phantom{=}}\mathrel{\phantom{=}}\cdot\left(C(\overline{J},N)+\frac{\eta_{0}}{\lambda_{\text{BO}}LS+\varepsilon}\cdot C_{\lambda_{\text{BO}}LS+\varepsilon}(\overline{J}_{0},N-1)\right)\\ & =C(\overline{J},N-1)\cdot C(\overline{J},N)+C(\overline{J},N-1)\cdot\frac{\eta_{0}}{\lambda_{\text{BO}}LS}\cdot C_{\lambda_{\text{BO}}LS}(\overline{J}_{0},N-1)\\ & \mathrel{\phantom{=}}\quad+\frac{\eta_{0}}{\lambda_{\text{BO}}LS+\varepsilon}\cdot C_{\lambda_{\text{BO}}LS+\varepsilon}(\overline{J}_{0},N-2)\cdot C(\overline{J},N)\\ & \mathrel{\phantom{=}}\quad+\frac{\eta_{0}}{\lambda_{\text{BO}}LS+\varepsilon}\cdot C_{\lambda_{\text{BO}}LS+\varepsilon}(\overline{J}_{0},N-2)\cdot\frac{\eta_{0}}{\lambda_{\text{BO}}LS}\cdot C_{\lambda_{\text{BO}}LS}(\overline{J}_{0},N-1)\\ & \mathrel{\phantom{=}}-C(\overline{J},N-1)\cdot C(\overline{J},N)-C(\overline{J},N-1)\cdot\frac{\eta_{0}}{\lambda_{\text{BO}}LS+\varepsilon}\cdot C_{\lambda_{\text{BO}}LS+\varepsilon}(\overline{J}_{0},N-1)\\ & \mathrel{\phantom{=}}\quad-\frac{\eta_{0}}{\lambda_{\text{BO}}LS}\cdot C_{\lambda_{\text{BO}}LS}(\overline{J}_{0},N-2)\cdot C(\overline{J},N)\\ & 
\mathrel{\phantom{=}}\quad-\frac{\eta_{0}}{\lambda_{\text{BO}}LS}\cdot C_{\lambda_{\text{BO}}LS}(\overline{J}_{0},N-2)\cdot\frac{\eta_{0}}{\lambda_{\text{BO}}LS+\varepsilon}\cdot C_{\lambda_{\text{BO}}LS+\varepsilon}(\overline{J}_{0},N-1)\\ & =\frac{\eta_{0}}{\lambda_{\text{BO}}LS+\varepsilon}\cdot\frac{\eta_{0}}{\lambda_{\text{BO}}LS}\cdot\Big[C_{\lambda_{\text{BO}}LS+\varepsilon}(\overline{J}_{0},N-2)\cdot C_{\lambda_{\text{BO}}LS}(\overline{J}_{0},N-1)\\ & \mathrel{\phantom{=}}\underbrace{\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad-C_{\lambda_{\text{BO}}LS}(\overline{J}_{0},N-2)\cdot C_{\lambda_{\text{BO}}LS+\varepsilon}(\overline{J}_{0},N-1)\Big]}_{=:A_{N}}\\ & \mathrel{\phantom{=}}+\frac{\eta_{0}}{\lambda_{\text{BO}}LS}\cdot C(\overline{J},N-1)\cdot C_{\lambda_{\text{BO}}LS}(\overline{J}_{0},N-1)+\frac{\eta_{0}}{\lambda_{\text{BO}}LS+\varepsilon}\cdot C_{\lambda_{\text{BO}}LS+\varepsilon}(\overline{J}_{0},N-2)\cdot C(\overline{J},N)\\ & \mathrel{\phantom{=}}-\frac{\eta_{0}}{\lambda_{\text{BO}}LS+\varepsilon}\cdot C(\overline{J},N-1)\cdot C_{\lambda_{\text{BO}}LS+\varepsilon}(\overline{J}_{0},N-1)-\frac{\eta_{0}}{\lambda_{\text{BO}}LS}\cdot C_{\lambda_{\text{BO}}LS}(\overline{J}_{0},N-2)\cdot C(\overline{J},N)\\ & =A_{N}+\frac{\eta_{0}}{\lambda_{\text{BO}}LS}\cdot\left[C(\overline{J},N-1)\cdot C_{\lambda_{\text{BO}}LS}(\overline{J}_{0},N-1)-C_{\lambda_{\text{BO}}LS}(\overline{J}_{0},N-2)\cdot C(\overline{J},N)\right]\\ & \mathrel{\phantom{=}}+\frac{\eta_{0}}{\lambda_{\text{BO}}LS+\varepsilon}\cdot\left[C_{\lambda_{\text{BO}}LS+\varepsilon}(\overline{J}_{0},N-2)\cdot C(\overline{J},N)-C(\overline{J},N-1)\cdot C_{\lambda_{\text{BO}}LS+\varepsilon}(\overline{J}_{0},N-1)\right]. 
\end{align*} $A_{N}$ is a positive factor $\frac{\eta_{0}}{\lambda_{\text{BO}}LS+\varepsilon}\cdot\frac{\eta_{0}}{\lambda_{\text{BO}}LS}$ multiplied by the numerator of $\lambda_{\text{eff}}^{(N-1)}(\lambda_{\text{BO}}LS+\varepsilon)-\lambda_{\text{eff}}^{(N-1)}(\lambda_{\text{BO}}LS)$. Therefore, $A_{N}>0$ by the induction assumption. Thus, it suffices to prove \begin{align*} & \frac{\eta_{0}}{\lambda_{\text{BO}}LS}\cdot\left[C(\overline{J},N-1)\cdot C_{\lambda_{\text{BO}}LS}(\overline{J}_{0},N-1)-C_{\lambda_{\text{BO}}LS}(\overline{J}_{0},N-2)\cdot C(\overline{J},N)\right]\\ & \mathrel{\phantom{=}}+\frac{\eta_{0}}{\lambda_{\text{BO}}LS+\varepsilon}\cdot\left[C_{\lambda_{\text{BO}}LS+\varepsilon}(\overline{J}_{0},N-2)\cdot C(\overline{J},N)-C(\overline{J},N-1)\cdot C_{\lambda_{\text{BO}}LS+\varepsilon}(\overline{J}_{0},N-1)\right]\\ & \geq0, \end{align*} which is equivalent to \begin{align*} & \frac{\eta_{0}}{\lambda_{\text{BO}}LS}\cdot\left[\frac{C(\overline{J},N-1)}{C(\overline{J},N)}-\frac{C_{\lambda_{\text{BO}}LS}(\overline{J}_{0},N-2)}{C_{\lambda_{\text{BO}}LS}(\overline{J}_{0},N-1)}\right]\cdot C(\overline{J},N)\cdot C_{\lambda_{\text{BO}}LS}(\overline{J}_{0},N-1)\\ & -\frac{\eta_{0}}{\lambda_{\text{BO}}LS+\varepsilon}\cdot\left[\frac{C(\overline{J},N-1)}{C(\overline{J},N)}-\frac{C_{\lambda_{\text{BO}}LS+\varepsilon}(\overline{J}_{0},N-2)}{C_{\lambda_{\text{BO}}LS+\varepsilon}(\overline{J}_{0},N-1)}\right]\cdot C(\overline{J},N)\cdot C_{\lambda_{\text{BO}}LS+\varepsilon}(\overline{J}_{0},N-1)\\ & =\Bigg\{\underbrace{\left[\frac{C(\overline{J},N-1)}{C(\overline{J},N)}-\frac{C_{\lambda_{\text{BO}}LS}(\overline{J}_{0},N-2)}{C_{\lambda_{\text{BO}}LS}(\overline{J}_{0},N-1)}\right]}_{=:B}\cdot\underbrace{\frac{\eta_{0}}{\lambda_{\text{BO}}LS}\cdot C_{\lambda_{\text{BO}}LS}(\overline{J}_{0},N-1)}_{=:D}\\ & 
\mathrel{\phantom{=}}-\underbrace{\left[\frac{C(\overline{J},N-1)}{C(\overline{J},N)}-\frac{C_{\lambda_{\text{BO}}LS+\varepsilon}(\overline{J}_{0},N-2)}{C_{\lambda_{\text{BO}}LS+\varepsilon}(\overline{J}_{0},N-1)}\right]}_{=:C}\cdot\underbrace{\frac{\eta_{0}}{\lambda_{\text{BO}}LS+\varepsilon}\cdot C_{\lambda_{\text{BO}}LS+\varepsilon}(\overline{J}_{0},N-1)}_{=:E}\Bigg\}\cdot C(\overline{J},N)\\ & \geq0. \end{align*} (i) We show that $B\geq0$ and $C\geq0$. Van der Wal \citep{vanderWal1989} shows that an increasing population size increases the throughput, hence \[ \frac{C(\overline{J},N-1)}{C(\overline{J},N)}\geq\frac{C(\overline{J},N-2)}{C(\overline{J},N-1)}. \] Furthermore, we show \[ \frac{C(\overline{J},N-2)}{C(\overline{J},N-1)}\geq\frac{C_{\lambda_{\text{BO}}LS}(\overline{J}_{0},N-2)}{C_{\lambda_{\text{BO}}LS}(\overline{J}_{0},N-1)}. \] We consider the right-hand side as the throughput of a cyclic Gordon-Newell network\footnote{$\frac{C(\overline{J},N-2)}{C(\overline{J},N-1)}$ and $\frac{C_{\lambda_{\text{BO}}LS}(\overline{J}_{0},N-2)}{C_{\lambda_{\text{BO}}LS}(\overline{J}_{0},N-1)}$ as given in our derivations are the (average) throughput of a cycle following \citet[Definition 2.6]{Daduna2008}.} with node set $\overline{J}_{0}:=\left\{ 0,1,\ldots,J\right\} $, service rates $\mu_{j}(n):=\frac{\nu_{j}(n)}{\eta_{j}}$, $j=1,\ldots,J$, $n=0,1,\ldots,N$, $\mu_{0}(n):=\frac{\lambda_{\text{BO}}LS}{\eta_{0}}$, and the solution $\underbrace{(1,1,\ldots,1)}_{J+1\text{-times}}$ of the traffic equation of the cycle. The left-hand side is the throughput of the cyclic Gordon-Newell network obtained from the first cycle by deleting node $0$, with the cycling customers skipping the gap. Lemma 2.8 in \citet{Daduna2008} states that deleting any node of a cycle, with the customers skipping the gap, increases the throughput. Combining these two inequalities yields $B\geq0$; similarly, $C\geq0$. (ii) From the definitions of $D$ and $E$, direct comparison yields $D>E$.
(iii) The proof will be finished if we can show $B\geq C$. To do so, by \citet{SHANTHIKUMAR1986259} it is sufficient to prove \begin{equation} \frac{C_{\lambda_{\text{BO}}LS}(\overline{J}_{0},N-2)}{C_{\lambda_{\text{BO}}LS}(\overline{J}_{0},N-1)}\leq\frac{C_{\lambda_{\text{BO}}LS+\varepsilon}(\overline{J}_{0},N-2)}{C_{\lambda_{\text{BO}}LS+\varepsilon}(\overline{J}_{0},N-1)}.\label{eq:Daduna-2} \end{equation} The left-hand side is the throughput of the Gordon-Newell network according to Definition (2.2) in \citet{SHANTHIKUMAR1986259}. The right-hand side is the throughput of the Gordon-Newell network with the service rate at node $0$ increased to $\lambda_{\text{BO}}LS+\varepsilon$. Corollary 3.1(i) in \citet{SHANTHIKUMAR1986259} states \[ \eta_{0}\cdot\frac{C_{\lambda_{\text{BO}}LS}(\overline{J}_{0},N-2)}{C_{\lambda_{\text{BO}}LS}(\overline{J}_{0},N-1)}\leq\eta_{0}\cdot\frac{C_{\lambda_{\text{BO}}LS+\varepsilon}(\overline{J}_{0},N-2)}{C_{\lambda_{\text{BO}}LS+\varepsilon}(\overline{J}_{0},N-1)}, \] which verifies \prettyref{eq:Daduna-2}. \end{proof} \begin{rem} \label{rem:lambda-eff-N-1-N-2} For $\lambda_{\text{BO}}\in\left(0,\lambda_{\text{BO,max}}\right)=\left(0,\eta_{0}\cdot\tfrac{\abrevC 1}{\abrevC 0}\right)$ we have \[ \lambda_{\text{BO}}LS=\begin{cases} \frac{\eta_{0}\cdot\lambda_{\text{BO}}}{\eta_{0}-\lambda_{\text{BO}}\cdot\abrevC 0}, & N=1,\\ -\frac{\eta_{0}}{2\cdot\big(\eta_{0}\cdot\abrevC 1-\lambda_{\text{BO}}\cdot\abrevC 0\big)}\left(\eta_{0}-\lambda_{\text{BO}}\cdot\abrevC 1-\sqrt{\big(\eta_{0}+\lambda_{\text{BO}}\cdot\abrevC 1\big)^{2}-4\cdot\lambda_{\text{BO}}^{2}\cdot\abrevC 0}\right), & N=2.
\end{cases} \] \end{rem} \begin{proof} Due to \prettyref{thm:theorem-ls} we have \[ \lambda_{\text{eff}}(\lambda_{\text{BO}}LS)=\lambda_{\text{BO}}LS\cdot\left(1-\frac{\abrevC 0}{\sum_{n=0}^{N}\left(\frac{\eta_{0}}{\lambda_{\text{BO}}LS}\right)^{n}\cdot\abrevC n}\right)=\frac{\sum_{n=1}^{N}\abrevC n\cdot\lambda_{\text{BO}}LS\cdot\left(\frac{\lambda_{\text{BO}}LS}{\eta_{0}}\right)^{N-n}}{\sum_{n=0}^{N}\abrevC n\cdot\left(\frac{\lambda_{\text{BO}}LS}{\eta_{0}}\right)^{N-n}} \] for $\lambda_{\text{BO}}LS\in(0,\infty)$ and \[ \abrevC{N}=1,\quad\abrevC{N-1}=\sum_{j=1}^{J}\frac{\eta_{j}}{\nu_{j}(1)} \] and \[ \abrevC{N-2}=\sum_{j=1}^{J}\frac{\eta_{j}}{\nu_{j}(1)\cdot\nu_{j}(2)}+\sum_{j=1}^{J-1}\sum_{k=j+1}^{J}\frac{\eta_{j}\cdot\eta_{k}}{\nu_{j}(1)\cdot\nu_{k}(1)}. \] Let $\lambda_{\text{BO}}\in\left(0,\eta_{0}\cdot\tfrac{\abrevC 1}{\abrevC 0}\right)$. First, we note that \begin{equation} \eta_{0}\cdot\abrevC 1-\lambda_{\text{BO}}\cdot\abrevC 0>\eta_{0}\cdot\abrevC 1-\eta_{0}\cdot\frac{\abrevC 1}{\abrevC 0}\cdot\abrevC 0=0.\label{eq:uniqueness_1} \end{equation} The equation $\lambda_{\text{eff}}(\lambda_{\text{BO}}LS)=\lambda_{\text{BO}}$ is equivalent to \[ \frac{\sum_{n=1}^{N}\left(\frac{\lambda_{\text{BO}}LS}{\eta_{0}}\right)^{N+1-n}\cdot\eta_{0}\cdot\abrevC n}{\sum_{n=0}^{N}\left(\frac{\lambda_{\text{BO}}LS}{\eta_{0}}\right)^{N-n}\cdot\abrevC n}=\lambda_{\text{BO}} \] and this in turn to \begin{equation} \sum_{n=1}^{N}\big(\eta_{0}\cdot\abrevC n-\lambda_{\text{BO}}\cdot\abrevC{n-1}\big)\cdot\left(\frac{\lambda_{\text{BO}}LS}{\eta_{0}}\right)^{N+1-n}-\lambda_{\text{BO}}\cdot\abrevC{N}=0.\label{eq:uniqueness_2} \end{equation} If $N=1$, then \eqref{eq:uniqueness_2} is equivalent to \[ \lambda_{\text{BO}}LS=\frac{\eta_{0}\cdot\lambda_{\text{BO}}\cdot\abrevC 1}{\eta_{0}\cdot\abrevC 1-\lambda_{\text{BO}}\cdot\abrevC 0}=\frac{\eta_{0}\cdot\lambda_{\text{BO}}}{\eta_{0}-\lambda_{\text{BO}}\cdot\abrevC 0}.
\] If $N=2$, then \eqref{eq:uniqueness_2} is equivalent to \[ \big(\eta_{0}\cdot\abrevC 1-\lambda_{\text{BO}}\cdot\abrevC 0\big)\cdot\left(\frac{\lambda_{\text{BO}}LS}{\eta_{0}}\right)^{2}+\big(\eta_{0}\cdot\abrevC 2-\lambda_{\text{BO}}\cdot\abrevC 1\big)\cdot\left(\frac{\lambda_{\text{BO}}LS}{\eta_{0}}\right)-\abrevC 2\cdot\lambda_{\text{BO}}=0. \] Since the discriminant fulfils \[ \big(\eta_{0}\cdot\abrevC 2-\lambda_{\text{BO}}\cdot\abrevC 1\big)^{2}+4\cdot\big(\eta_{0}\cdot\abrevC 1-\lambda_{\text{BO}}\cdot\abrevC 0\big)\cdot\abrevC 2\cdot\lambda_{\text{BO}}>0 \] by \eqref{eq:uniqueness_1}, it follows that \begin{align*} \frac{\lambda_{\text{BO}}LS}{\eta_{0}} & =-\frac{1}{2\cdot\big(\eta_{0}\cdot\abrevC 1-\lambda_{\text{BO}}\cdot\abrevC 0\big)}\\ & \mathrel{\phantom{=}}\cdot\Bigl(\eta_{0}\cdot\abrevC 2-\lambda_{\text{BO}}\cdot\abrevC 1\pm\sqrt{\big(\eta_{0}\cdot\abrevC 2-\lambda_{\text{BO}}\cdot\abrevC 1\big)^{2}+4\cdot\big(\eta_{0}\cdot\abrevC 1-\lambda_{\text{BO}}\cdot\abrevC 0\big)\cdot\abrevC 2\cdot\lambda_{\text{BO}}}\Bigr). \end{align*} Due to \eqref{eq:uniqueness_1}, the solution $\lambda_{\text{BO}}LS$ is positive only if \[ \eta_{0}\cdot\abrevC 2-\lambda_{\text{BO}}\cdot\abrevC 1\pm\sqrt{\big(\eta_{0}\cdot\abrevC 2-\lambda_{\text{BO}}\cdot\abrevC 1\big)^{2}+4\cdot\big(\eta_{0}\cdot\abrevC 1-\lambda_{\text{BO}}\cdot\abrevC 0\big)\cdot\abrevC 2\cdot\lambda_{\text{BO}}}<0. \] Let $c:=\eta_{0}\cdot\abrevC 2-\lambda_{\text{BO}}\cdot\abrevC 1$ and $d:=4\cdot\big(\eta_{0}\cdot\abrevC 1-\lambda_{\text{BO}}\cdot\abrevC 0\big)\cdot\abrevC 2\cdot\lambda_{\text{BO}}$; by \eqref{eq:uniqueness_1} we have $d>0$. If $c\geq0$, then obviously $c+\sqrt{c^{2}+d}>0$. If $c<0$, then \[ c+\sqrt{c^{2}+d}>c+\sqrt{c^{2}}=c+|c|=c-c=0.
\] Hence, the only positive solution (existence guaranteed by \prettyref{thm:theorem-ls}) has to be \begin{align*} \lambda_{\text{BO}}LS & =-\frac{\eta_{0}}{2\cdot\big(\eta_{0}\cdot\abrevC 1-\lambda_{\text{BO}}\cdot\abrevC 0\big)}\\ & \mathrel{\phantom{=}}\cdot\Bigl(\eta_{0}\cdot\abrevC 2-\lambda_{\text{BO}}\cdot\abrevC 1-\sqrt{\big(\eta_{0}\cdot\abrevC 2-\lambda_{\text{BO}}\cdot\abrevC 1\big)^{2}+4\cdot\big(\eta_{0}\cdot\abrevC 1-\lambda_{\text{BO}}\cdot\abrevC 0\big)\cdot\abrevC 2\cdot\lambda_{\text{BO}}}\Bigr)\\ & =-\frac{\eta_{0}}{2\cdot\big(\eta_{0}\cdot\abrevC 1-\lambda_{\text{BO}}\cdot\abrevC 0\big)}\cdot\left(\eta_{0}-\lambda_{\text{BO}}\cdot\abrevC 1-\sqrt{\big(\eta_{0}+\lambda_{\text{BO}}\cdot\abrevC 1\big)^{2}-4\cdot\lambda_{\text{BO}}^{2}\cdot\abrevC 0}\right). \end{align*} \end{proof} \end{appendices} \end{document}
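The appendix results lend themselves to a quick numerical sanity check: evaluating the constants $C_{\lambda_{\text{BO}}LS}(\overline{J}_{0},L)$ via the recursion \prettyref{eq:C-gleichung} (here for constant service rates, writing $\rho_{j}:=\eta_{j}/\nu_{j}$) confirms the strict isotonicity of $\lambda_{\text{eff}}$ on examples, and plugging the closed forms of \prettyref{rem:lambda-eff-N-1-N-2} back into $\lambda_{\text{eff}}$ returns $\lambda_{\text{BO}}$. A sketch with illustrative constants of ours, not values from the paper:

```python
import math

def c_inner(rho, L):
    """C(J, l) for l = 0..L with constant service rates: convolution over the
    nodes, C(J, l) = sum over n_1+...+n_J = l of prod_j rho_j^{n_j}."""
    dp = [1.0] + [0.0] * L
    for r in rho:
        for l in range(1, L + 1):
            dp[l] += r * dp[l - 1]   # dp[l-1] already includes node r
    return dp

def lambda_eff(lam, eta0, rho, N):
    """lambda_eff^(N) = eta0 * C_lam(J0, N-1) / C_lam(J0, N), using the
    recursion C_lam(J0, L) = C(J, L) + (eta0/lam) * C_lam(J0, L-1)."""
    cj = c_inner(rho, N)
    c_prev, c = None, 1.0            # c = C_lam(J0, 0)
    for L in range(1, N + 1):
        c_prev, c = c, cj[L] + (eta0 / lam) * c
    return eta0 * c_prev / c

def lambda_ls(lam_bo, eta0, c0, c1, N):
    """Closed forms for N = 1, 2 (c0, c1 stand for C~_0, C~_1; C~_N = 1)."""
    if N == 1:
        return eta0 * lam_bo / (eta0 - lam_bo * c0)
    disc = (eta0 + lam_bo * c1) ** 2 - 4.0 * lam_bo ** 2 * c0
    return (-eta0 / (2.0 * (eta0 * c1 - lam_bo * c0))
            * (eta0 - lam_bo * c1 - math.sqrt(disc)))
```

For a single inner node with $\rho=0.5$ and $N=2$, the constants are $\tilde{C}_{2}=1$, $\tilde{C}_{1}=\rho$, $\tilde{C}_{0}=\rho^{2}$, and the round trip $\lambda_{\text{eff}}(\lambda_{\text{BO}}LS)=\lambda_{\text{BO}}$ holds up to floating-point error.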
\begin{document} \maketitle \begin{abstract} We study two computational problems, parameterised by a fixed tree~$H$. $\nHom{H}$ is the problem of counting homomorphisms from an input graph~$G$ to~$H$. $\wHom{H}$ is the problem of counting weighted homomorphisms to~$H$, given an input graph~$G$ and a weight function for each vertex~$v$ of~$G$. Even though $H$ is a tree, these problems turn out to be sufficiently rich to capture all of the known approximation behaviour in $\mathrm{\#P}$. We give a complete trichotomy for $\wHom{H}$. If $H$ is a star then $\wHom{H}$ is in $\mathrm{FP}$. If $H$ is not a star but it does not contain a certain induced subgraph~$J_3$ then $\wHom{H}$ is equivalent under approximation-preserving (AP) reductions to $\textsc{\#BIS}$, the problem of counting independent sets in a bipartite graph. This problem is complete for the class $\text{\#RH$\Pi_1$}$ under AP-reductions. Finally, if $H$ contains an induced~$J_3$ then $\wHom{H}$ is equivalent under AP-reductions to $\textsc{\#Sat}$, the problem of counting satisfying assignments to a CNF Boolean formula. Thus, $\wHom{H}$ is complete for $\mathrm{\#P}$ under AP-reductions. The results are similar for $\nHom{H}$ except that a rich structure emerges if $H$ contains an induced~$J_3$. We show that there are trees~$H$ for which $\nHom{H}$ is $\textsc{\#Sat}$-equivalent (disproving a plausible conjecture of Kelk). However, it is still not known whether $\nHom{H}$ is $\textsc{\#Sat}$-hard for \emph{every} tree~$H$ which contains an induced~$J_3$. It turns out that there is an interesting connection between these homomorphism-counting problems and the problem of approximating the partition function of the \emph{ferromagnetic Potts model}. In particular, we show that for a family of graphs $J_q$, parameterised by a positive integer~$q$, the problem $\nHom{J_q}$ is AP-interreducible with the problem of approximating the partition function of the $q$-state Potts model. 
It was not previously known that the Potts model had a homomorphism-counting interpretation. We use this connection to obtain some additional upper bounds for the approximation complexity of $\nHom{J_q}$. \end{abstract} \section{Introduction} A \emph{homomorphism} from a graph~$G$ to a graph~$H$ is a mapping $\sigma:V(G)\rightarrow V(H)$ such that the image $(\sigma(u),\sigma(v))$ of every edge $(u,v) \in E(G)$ is in $E(H)$. Let $\Hom GH$ denote the set of homomorphisms from~$G$ to~$H$ and let $Z_H(G)=|\Hom GH|$. For each fixed~$H$, we consider the following computational problem. \begin{description} \item[Problem] $\nHom{H}$. \item[Instance] Graph $G$. \item[Output] $Z_H(G)$. \end{description} The vertices of~$H$ are often referred to as ``colours'' and a homomorphism from~$G$ to~$H$ can be thought of as an assignment of colours to the vertices of $G$ which satisfies certain constraints along each edge of~$G$. The constraints guarantee that adjacent vertices in~$G$ are assigned colours which are adjacent in~$H$. A homomorphism in $\Hom GH$ is therefore often called an ``$H$-colouring'' of~$G$. When $H=K_q$, the complete graph with $q$~vertices, the elements of $\Hom G{K_q}$ are proper $q$-colourings of~$G$. There has been much work on determining the complexity of the $H$-colouring decision problem, which is the problem of determining whether $Z_H(G)=0$, given input~$G$. This work will be described in Section~\ref{sec:prev}, but at this point it is worth mentioning the dichotomy result of Hell and Ne{\v{s}}et{\v{r}}il~\cite{HN}, which shows that the decision problem is solvable in polynomial time if $H$ is bipartite and that it is NP-hard otherwise. There has also been work~\cite{DG,Kelk} on determining the complexity of exactly or approximately solving the related counting problem~$\nHom{H}$. This paper is concerned with the computational difficulty of $\nHom{H}$ when $H$ is bipartite, and particularly when $H$ is a tree. 
As an example, consider the case where $H$ is the four-vertex path~$P_4$ (of length three). Label the vertices (or colours) $1,2,3,4$, in sequence. If $G$ is not bipartite then $\Hom GH=\emptyset$, so the interesting case is when $G$ is bipartite. Suppose for simplicity that $G$ is connected. Then one side of the vertex bipartition of $G$ must be assigned even colours and the other side must be assigned odd colours. It is easy to see that the vertices assigned colours~$1$ and~$4$ form an independent set of $G$, and that every independent set arises in exactly two ways as a homomorphism. Thus, $Z_{P_4}(G)$ is equal to twice the number of independent sets in the bipartite graph~$G$. We will return to this example presently. It will sometimes be useful to consider a weighted generalisation of the homomorphism-counting problem. Suppose, for each $v\in V(G)$, that $w_v:V(H)\rightarrow \mathbb{Q}_{\geq 0}$ is a weight function, assigning a non-negative rational weight to each colour. Let $W(G,H)$ be an indexed set of weight functions, containing one weight function for each vertex $v\in V(G)$. Thus, $$W(G,H) =\{ w_v \mid v\in V(G)\}.$$ Our goal is to compute the weighted sum of homomorphisms from~$G$ to~$H$, which is expressed as the partition function $$Z_{H}(G, W(G,H)) = \sum_{\sigma\in \Hom GH} \prod_{v\in V(G)} w_v(\sigma(v)).$$ Given a fixed~$H$, each weight function $w_v\in W(G,H)$ can be represented succinctly as a list of $|V(H)|$ rational numbers. This representation is used in the following computational problem. \begin{description} \item[Problem] $\wHom{H}$. \item[Instance] A graph $G$ and an indexed set of weight functions $W(G,H)$. \item[Output] $Z_{H}(G,W(G,H))$. \end{description} The complexity of \emph{exactly} solving $\nHom{H}$ and $\wHom{H}$ is already understood. Dyer and Greenhill have observed \cite[Lemma 4.1]{DG} that $\nHom{H}$ is in~$\mathrm{FP}$ if $H$ is a complete bipartite graph.
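The partition function $Z_H(G,W(G,H))$ defined above can likewise be evaluated by brute force, simply weighting each enumerated homomorphism by the product of its vertex weights. The following Python sketch is an illustration of the definition only (the graph, vertex names and weights in the example are made up for the purpose).

```python
# Brute-force evaluation of the weighted partition function
# Z_H(G, W) = sum over homomorphisms sigma of prod_v w_v(sigma(v)).
# Exponential time; illustration of the definition only.
from itertools import product
from fractions import Fraction

def weighted_Z(G_vertices, G_edges, H_vertices, H_adj, weights):
    """weights[v] maps each colour of H to a non-negative rational."""
    total = Fraction(0)
    for colours in product(H_vertices, repeat=len(G_vertices)):
        sigma = dict(zip(G_vertices, colours))
        if all((sigma[u], sigma[v]) in H_adj for (u, v) in G_edges):
            w = Fraction(1)
            for v in G_vertices:
                w *= weights[v][sigma[v]]
            total += w
    return total

# Example: H = K_2 with vertices a, b; G is a single edge (u, v).
H_adj = {("a", "b"), ("b", "a")}
W = {"u": {"a": Fraction(2), "b": Fraction(3)},
     "v": {"a": Fraction(5), "b": Fraction(7)}}
# Two homomorphisms: u->a, v->b (weight 2*7) and u->b, v->a (weight 3*5).
print(weighted_Z(["u", "v"], [("u", "v")], ["a", "b"], H_adj, W))  # 29
```

With all weights equal to $1$ the partition function reduces to the unweighted count $Z_H(G)$, as the definition requires.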
It is easy to see (see Observation~\ref{obs:star}) that the same is true of $\wHom{H}$. On the other hand, Dyer and Greenhill showed that $\nHom{H}$ is $\mathrm{\#P}$-complete for every bipartite graph~$H$ that is not complete. Since $\nHom{H}$ is a special case of the more general problem $\wHom{H}$, we conclude that both problems are in~$\mathrm{FP}$ if $H$ is a star (a tree in which some ``centre'' vertex is an endpoint of every edge), and that both problems are $\mathrm{\#P}$-complete for every other tree~$H$. This paper maps the complexity of \emph{approximately} solving $\nHom{H}$ and $\wHom{H}$ when $H$ is a tree. Dyer, Goldberg, Greenhill and Jerrum~\cite{APred} introduced the concept of ``AP-reduction'' for studying the complexity of approximate counting problems. Informally, an AP-reduction is an efficient reduction from one counting problem to another, which preserves closeness of approximation; two counting problems that are interreducible using this kind of reduction have the same complexity when it comes to finding good approximate solutions. We have already encountered an extremely simple example of two AP-interreducible problems, namely $\nHom{P_4}$ and $\textsc{\#BIS}$, the problem of counting independent sets in a bipartite graph. Using less trivial reductions, Dyer et al.\ showed (\cite[Theorem 5]{APred}) that several natural counting problems in addition to $\nHom{P_4}$ are interreducible with $\textsc{\#BIS}$, and moreover that they are all complete for the complexity class $\text{\#RH$\Pi_1$}$ with respect to AP-reductions. The class $\text{\#RH$\Pi_1$}$ is conjectured to contain problems that do not have an FPRAS; however it is not believed to contain $\textsc{\#Sat}$, the classical hard problem of computing the number of satisfying assignments to a CNF Boolean formula. Refer to Section~\ref{sec:prelim} for more detail on the technical concepts mentioned here and elsewhere in the introduction. 
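The numerical fact behind this simple example, namely that $Z_{P_4}(G)$ is twice the number of independent sets of a connected bipartite~$G$, can be checked by brute force on small instances. The following Python sketch (illustration only) compares the two counts on the $4$-cycle.

```python
# Check, by exhaustive enumeration, that Z_{P_4}(G) equals twice the number
# of independent sets of a connected bipartite G.  Illustration only.
from itertools import product, combinations

def count_homs(G_vertices, G_edges, H_vertices, H_adj):
    total = 0
    for colours in product(H_vertices, repeat=len(G_vertices)):
        sigma = dict(zip(G_vertices, colours))
        if all((sigma[u], sigma[v]) in H_adj for (u, v) in G_edges):
            total += 1
    return total

def count_independent_sets(vertices, edges):
    count = 0
    for r in range(len(vertices) + 1):
        for S in combinations(vertices, r):
            if not any(u in S and v in S for (u, v) in edges):
                count += 1
    return count

# P_4 with colours 1,2,3,4 in sequence (symmetric adjacency).
P4_adj = {(1, 2), (2, 1), (2, 3), (3, 2), (3, 4), (4, 3)}
# G: the connected bipartite 4-cycle.
C4 = ([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)])
print(count_homs(*C4, [1, 2, 3, 4], P4_adj))      # 14
print(2 * count_independent_sets(*C4))            # 2 * 7 = 14
```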
Steven Kelk's PhD thesis~\cite{Kelk} examined the approximation complexity of the problem $\nHom{H}$ for general~$H$. He identified certain families of graphs~$H$ for which $\nHom{H}$ is AP-interreducible with $\textsc{\#BIS}$ and other large families for which $\nHom{H}$ is AP-interreducible with $\textsc{\#Sat}$. He noted \cite[Section 5.7.1]{Kelk} that, during the study, he did not encounter \emph{any} bipartite graphs~$H$ for which $\textsc{\#Sat} \leq_\mathrm{AP} \nHom{H}$, and that he suspected \cite[Section 7.3]{Kelk} that there were ``structural barriers'' which would prevent homomorphism-counting problems to bipartite graphs from being $\textsc{\#Sat}$-hard. An interesting test case is the tree $J_3$ which is depicted in Figure~\ref{fig:J3}. Kelk referred to this tree \cite[Section 7.4]{Kelk} as ``the junction'', and conjectured that $\nHom{J_3}$ is neither $\textsc{\#BIS}$-easy nor $\textsc{\#Sat}$-hard. Thus, he conjectured that unlike the setting of Boolean constraint satisfaction, where every parameter leads to a computational problem which is FPRASable, $\textsc{\#BIS}$-equivalent, or $\textsc{\#Sat}$-equivalent \cite{trichotomy}, the complexity landscape for approximate $H$-colouring may be more nuanced, in the sense that there might be graphs~$H$ for which none of these hold. The purpose of this paper is to describe the interesting complexity landscape of the approximation problems $\nHom{H}$ and $\wHom{H}$ when $H$ is a tree. It turns out that even the case in which $H$ is a tree is sufficiently rich to include all of the known approximation complexity behaviour in $\mathrm{\#P}$. First, consider the weighted problem~$\wHom{H}$. For this problem, we show that there is a complexity trichotomy, and the trichotomy depends upon the induced subgraphs of~$H$. We say that $H$ {\it contains an induced $H'$} if $H$ has an induced subgraph that is isomorphic to~$H'$. Here is the result. 
If $H$ contains no induced $P_4$ then it is a star, so $\wHom{H}$ is in $\mathrm{FP}$ (Observation~\ref{obs:star}). If $H$ contains an induced $P_4$ but it does not contain an induced $J_3$ then it turns out that $\wHom{H}$ is AP-interreducible with $\textsc{\#BIS}$ (Lemma~\ref{lem:intermediate}). Finally, if $H$ contains an induced $J_3$, then $\textsc{\#Sat} \leq_\mathrm{AP} \wHom{H}$ (Lemma~\ref{lem:hardweighted}). Thus, the complexity of $\wHom{H}$ is completely determined by the induced subgraphs of the tree~$H$, and there are no possibilities other than those that arise in the Boolean constraint satisfaction trichotomy~\cite{trichotomy}. Now consider the problem~$\nHom{H}$. Like its weighted counterpart, the unweighted problem $\nHom{H}$ is in $\mathrm{FP}$ if $H$ is a star, and it is $\textsc{\#BIS}$-equivalent if $H$ contains an induced~$P_4$ but it does not contain an induced~$J_3$. However, it is not known whether $\nHom{H}$ is $\textsc{\#Sat}$-hard for every $H$ which contains an induced $J_3$. The structure that has emerged is already quite rich. First, we have discovered (Theorem~\ref{thm:hardH}) that there are trees $H$ for which $\nHom{H}$ is $\textsc{\#Sat}$-hard. This result is surprising --- it disproves the plausible conjecture of Kelk that $\nHom{H}$ is not $\textsc{\#Sat}$-hard for any bipartite graph~$H$. Second, we have discovered an interesting connection between these homomorphism-counting problems and the problem of approximating the partition function of the \emph{ferromagnetic Potts model}. In particular, Theorem~\ref{thm:junction} shows that for a family of graphs $J_q$, parameterised by a positive integer~$q$, the problem $\nHom{J_q}$ is AP-interreducible with the problem of approximating the partition function of the $q$-state Potts model.
This is surprising because it was not known that the Potts model had a homomorphism-counting interpretation. The Potts-model connection allows us to give a non-trivial upper bound for the complexity of $\nHom{J_q}$. In particular, Corollary~\ref{cor:bqcol} shows that this problem is AP-reducible to the problem of counting proper $q$-colourings of bipartite graphs. We are not aware of any complexity relationships between the problems $\nHom{J_q}$, for $q>2$. At one extreme, they might all be AP-interreducible; at the other, they might all be incomparable. Another conceivable situation is that $\nHom{J_q}$ is AP-reducible to $\nHom{J_{q'}}$ exactly when $q\leq q'$. There is no real evidence for or against any of these or other possibilities. However, in the final section we exhibit a natural problem that provides an upper bound on the complexity of infinite families of problems of the form $\nHom{J_q}$ where $q$ is a prime power. Specifically, we show (Corollary~\ref{newcor}) that $\nHom{J_{p^k}}$ is AP-reducible to the weight enumerator of a linear code over the field~$\mathbb{F}_p$. \subsection{Previous Work} \label{sec:prev} We have already mentioned Hell and Ne{\v{s}}et{\v{r}}il's classic work~\cite{HN} on the complexity of the $H$-colouring decision problem. They showed that this problem is solvable in polynomial time if $H$ is bipartite, and that it is NP-complete otherwise. Our paper is concerned with the situation in which $H$ is an undirected graph (specifically, an undirected tree) but it is worth noting that the decision problem becomes much more complicated if $H$ is allowed to be a \emph{directed} graph. Indeed, Feder and Vardi showed~\cite{FV} that every constraint satisfaction problem (CSP) is equivalent to some digraph homomorphism problem. Despite much research, a complete dichotomy theorem for the digraph homomorphism decision problem is not known.
Bang-Jensen and Hell~\cite{BJH} had conjectured a dichotomy for the special case in which the digraph~$H$ has no sources and no sinks. This conjecture was proved in important recent work of Barto, Kozik and Niven~\cite{BKN}. Given the conjecture, Hell, Ne{\v{s}}et{\v{r}}il, and Zhu~\cite{HNZ} stated that ``digraphs with sources and sinks, and in particular oriented trees, seem to be the hard part of the problem.'' Gutjahr, Woeginger and Welzl~\cite{GWW} constructed a directed tree~$H$ such that determining whether a digraph~$G$ has a homomorphism to~$H$ is NP-complete. Of course, for some other trees, this problem is solvable in polynomial time. For example, they showed that it is solvable in polynomial time whenever $H$ is an oriented path (a path in which edges may go in either direction). Hell, Ne{\v{s}}et{\v{r}}il\ and Zhu~\cite{HNZ} construct a whole family of directed trees for which the homomorphism decision problem is NP-hard, and study the problem of characterising NP-hard trees by forbidden subtrees. The reader is referred to Hell and Ne{\v{s}}et{\v{r}}il's book~\cite{HNbook} and to their survey paper~\cite{HNsurvey} for more details about these decision problems. As mentioned in the introduction, there is already some existing work~\cite{DG, Kelk} on determining the complexity of exactly or approximately counting homomorphisms. This work is discussed in more detail elsewhere in this paper. The problem of sampling homomorphisms uniformly at random (or, in the weighted case, of sampling homomorphisms with probability proportional to their contributions to the partition function) is closely related to the approximate counting problem. We will later discuss some existing work~\cite{GKP} on the complexity of the homomorphism-sampling problem. First, we describe some related results on a particular approach to this problem, namely the application of the Markov chain Monte Carlo (MCMC) method.
Here the idea is to simulate a Markov chain whose states correspond to homomorphisms from~$G$ to~$H$. The chain will be constructed so that the probability of a particular homomorphism~$\sigma$ in the stationary distribution of the chain is proportional to the contribution of~$\sigma$ to the partition function. If the Markov chain is \emph{rapidly mixing} then it is possible to efficiently sample homomorphisms from a distribution that is very close to the appropriate distribution. This, in turn, leads to a good approximate counting algorithm~\cite{HColSampleCount}. First, Cooper, Dyer and Frieze~\cite{CDF} considered the unweighted problem. They showed that, for any non-trivial $H$, any Markov chain on $H$-colourings that changes the colours of up to some constant fraction of the vertices of~$G$ in a single step will have exponential mixing time (so will not lead to an efficient approximate counting algorithm). When $H$ is a tree with a self-loop on every vertex, they construct a weight function $w_H\colon V(H) \to \mathbb{Q}_{\geq 0}$ so that rapid mixing does occur for the special case of the weighted homomorphism problem in which every vertex $v$ of~$G$ has weight function $w_v=w_H$. Thus, their result gives an FPRAS for this special case of $\wHom{H}$. The slow-mixing results of \cite{CDF} have been extended in~\cite{BS} and in \cite{BCDT}. In particular, Borgs et al.~\cite{BCDT} considered the case in which $H$ is a rectangular subset of the hypercubic lattice, and constructed a weight function~$w_H$ for which quasi-local Markov chains (which change the colours of up to some constant fraction of the vertices in a small sublattice at each step) have slow mixing. \section{Preliminaries} \label{sec:prelim} This section brings together the main complexity-theoretic notions that are specific to the study of approximate counting problems. A more detailed account can be found in~\cite{APred}. 
A \emph{randomised approximation scheme\/} is an algorithm for approximately computing the value of a function~$f:\Sigma^*\rightarrow \mathbb{R}_{\geq 0}$. The approximation scheme has a parameter~$\varepsilon\in(0,1)$ which specifies the error tolerance. Formally, a \emph{randomised approximation scheme\/} for~$f$ is a randomised algorithm that takes as input an instance $ x\in \Sigma^{\ast }$ (e.g., in the case of $\nHom{H}$, the input would be an encoding of a graph~$G$) and a rational error tolerance $\varepsilon \in(0,1)$, and outputs a rational number $z$ (a random variable depending on the ``coin tosses'' made by the algorithm) such that, for every instance~$x$, $\Pr \big[e^{-\varepsilon} f(x)\leq z \leq e^\varepsilon f(x)\big]\geq \tfrac{3}{4}$. We adopt the convention that~$z$ is represented as a pair of integers representing the numerator and the denominator. The randomised approximation scheme is said to be a \emph{fully polynomial randomised approximation scheme}, or \emph{FPRAS}, if it runs in time bounded by a polynomial in $ |x| $ and $ \varepsilon^{-1} $. As in~\cite{FerroPotts}, we say that a real number~$z$ is \emph{efficiently approximable} if there is an FPRAS for the constant function $f(x)=z$. Our main tool for understanding the relative difficulty of approximate counting problems is \emph{approximation-preserving reductions}. We use the notion of approximation-preserving reduction from Dyer et al.~\cite{APred}. Suppose that $f$ and $g$ are functions from $\Sigma^{\ast }$ to~$\mathbb{R}_{\geq 0}$. An AP-reduction from~$f$ to~$g$ gives a way to turn an FPRAS for~$g$ into an FPRAS for~$f$. The actual definition in~\cite{APred} applies to functions whose outputs are natural numbers. The generalisation that we use here follows McQuillan~\cite{McQuillan}. An {\it approximation-preserving reduction\/} (AP-reduction) from $f$ to~$g$ is a randomised algorithm~$\mathcal{A}$ for computing~$f$ using an oracle for~$g$.
The algorithm~$\mathcal{A}$ takes as input a pair $(x,\varepsilon)\in\Sigma^*\times(0,1)$, and satisfies the following three conditions: (i)~every oracle call made by~$\mathcal{A}$ is of the form $(w,\delta)$, where $w\in\Sigma^*$ is an instance of~$g$, and $\delta \in (0,1)$ is an error bound satisfying $\delta^{-1}\leq\mathop{\mathrm{poly}}(|x|, \varepsilon^{-1})$; (ii) the algorithm~$\mathcal{A}$ meets the specification for being a randomised approximation scheme for~$f$ (as described above) whenever the oracle meets the specification for being a randomised approximation scheme for~$g$; and (iii)~the run-time of~$\mathcal{A}$ is polynomial in $|x|$ and $\varepsilon^{-1}$ and the bit-size of the values returned by the oracle. If an approximation-preserving reduction from $f$ to~$g$ exists we write $f\leq_\mathrm{AP} g$, and say that {\it $f$ is AP-reducible to~$g$}. Note that if $f\leq_\mathrm{AP} g$ and $g$ has an FPRAS then $f$ has an FPRAS\null. (The definition of AP-reduction was chosen to make this true.) If $f\leq_\mathrm{AP} g$ and $g\leq_\mathrm{AP} f$ then we say that {\it $f$ and $g$ are AP-interreducible}, and write $f\equiv_\mathrm{AP} g$. A word of warning about terminology: the notation $\leq_\mathrm{AP}$ has been used (see, e.g., \cite{CrescenziGuide}) to denote a different type of approximation-preserving reduction which applies to optimisation problems. We will not study optimisation problems in this paper, so hopefully this will not cause confusion. Dyer et al.~\cite{APred} studied counting problems in \#P and identified three classes of counting problems that are interreducible under approx\-imation-preserving reductions. The first class contains the problems that have an FPRAS; these are trivially AP-interreducible since all the work can be embedded into the reduction (which declines to use the oracle).
The second class is the set of problems that are AP-interreducible with \textsc{\#Sat}, the problem of counting satisfying assignments to a Boolean formula in CNF\null. Zuckerman~\cite{zuckerman} has shown that \textsc{\#Sat}{} cannot have an FPRAS unless $\mathrm{RP}=\mathrm{NP}$. The same is obviously true of any problem to which \textsc{\#Sat}{} is AP-reducible. The third class appears to be of intermediate complexity. It contains all of the counting problems expressible in a certain logically-defined complexity class, $\text{\#RH$\Pi_1$}$. Typical complete problems include counting the downsets in a partially ordered set~\cite{APred}, computing the partition function of the ferromagnetic Ising model with local external magnetic fields~\cite{Ising}, and counting the independent sets in a bipartite graph, a problem which is defined as follows. \begin{description} \item[Problem] $\textsc{\#BIS}$. \item[Instance] A bipartite graph $G$. \item[Output] The number of independent sets in $G$. \end{description} In \cite{APred} it was shown that $\textsc{\#BIS}$ is complete for the logically-defined complexity class $\text{\#RH$\Pi_1$}$ with respect to approximation-preserving reductions. We conjecture~\cite{FerroPotts} that there is no FPRAS for $\textsc{\#BIS}$. A problem that is closely related to approximate counting is the problem of sampling configurations almost uniformly at random. The analogue of an FPRAS in the context of sampling problems is the PAUS, or \emph{Polynomial Almost Uniform Sampler}. Goldberg, Kelk, and Paterson~\cite{GKP} have studied the problem of sampling $H$-colourings almost uniformly at random. They gave a hardness result for every fixed tree~$H$ that is not a star. In particular, their theorem \cite[Theorem 2]{GKP} shows that there is no PAUS for sampling $H$-colourings unless $\textsc{\#BIS}$ has an FPRAS. In general, there is a close connection between approximate counting and almost-uniform sampling.
Indeed, in the presence of a technical condition called ``self-reducibility'', the counting and sampling variants of two problems are interreducible~\cite{JVV}. The weighted problem $\wHom{H}$ is self-reducible, so the result of~\cite{GKP} immediately gives an AP-reduction from~$\textsc{\#BIS}$ to $\wHom{H}$ for every tree~$H$ that is not a star. However, it is not known whether the unweighted problem~$\nHom{H}$ is self-reducible. As mentioned in Section~\ref{sec:prev}, the paper \cite{HColSampleCount} shows how to turn a PAUS for $H$-colourings into an FPRAS for $\nHom{H}$, but it is not known whether there is a reduction in the other direction. Thus, we cannot directly apply the hardness result of~\cite{GKP} to reduce~$\textsc{\#BIS}$ to~$\nHom{H}$. However, we will see in the next section that the complexity gap between problems with an FPRAS and those that are $\textsc{\#BIS}$-equivalent still holds for $\nHom{H}$ in the special case when $H$ is a tree, which is the focus of this paper. \section{Weighted tree homomorphisms} \label{sec:weighted} First, we introduce some notation and a few graphs that are of special interest. In this paper, the graphs that we consider are undirected and simple --- they do not have self-loops or multiple edges between vertices. For every positive integer~$n$, let $[n]$ denote $\{1,2,\ldots,n\}$. We use $\Gamma_H(v)$ to denote the set of neighbours of vertex~$v$ in graph~$H$ and we use $d_H(v)$ to denote the degree of~$v$, which is~$|\Gamma_H(v)|$. Let $P_n$ be the $n$-vertex path (with $n-1$ edges). An $n$-leaf \emph{star} is the complete bipartite graph $K_{1,n}$. Let $J_q$ be the graph with vertex set $$V(J_q) = \{w\} \cup \{c_i\mid i\in[q]\} \cup \{c'_i\mid i\in[q]\},$$ and edge set $$ E(J_q) = \{(c_i,c'_i) \mid i\in[q]\} \cup \{(c'_i,w)\mid i\in[q]\}.$$ $J_3$ is depicted in Figure~\ref{fig:J3}.
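The family $J_q$ just defined can be generated programmatically. The following small Python sketch (an illustration only) builds $J_q$ and checks two immediate consequences of the definition: $J_q$ has $2q+1$ vertices and $2q$ edges (so, being connected, it is a tree), and $J_2$ is the $5$-vertex path $c_1$--$c'_1$--$w$--$c'_2$--$c_2$, i.e., $P_5$.

```python
# Construct the tree J_q: a centre w, and q legs c_i - c'_i - w.
# Vertex labels mirror the notation in the text; illustration only.
def J(q):
    vertices = ["w"] + [f"c{i}" for i in range(1, q + 1)] \
                     + [f"c'{i}" for i in range(1, q + 1)]
    edges = [(f"c{i}", f"c'{i}") for i in range(1, q + 1)] \
          + [(f"c'{i}", "w") for i in range(1, q + 1)]
    return vertices, edges

V3, E3 = J(3)
print(len(V3), len(E3))  # prints "7 6": 2q+1 vertices, 2q edges

# J_2 is the 5-vertex path P_5, so its degree sequence is [1, 1, 2, 2, 2].
V2, E2 = J(2)
degrees = {v: sum(v in e for e in E2) for v in V2}
print(sorted(degrees.values()))  # [1, 1, 2, 2, 2]
```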
\begin{figure} \caption{The tree $J_3$.} \label{fig:J3} \end{figure} \subsection{Stars} As Dyer and Greenhill observed \cite[Lemma 4.1]{DG}, $\nHom{H}$ is in $\mathrm{FP}$ if $H$ is a complete bipartite graph. We now show that $\wHom{H}$ is also in $\mathrm{FP}$ in this case. Suppose that $H$ is a complete bipartite graph with bipartition $(U,U')$ where $U=\{u_1,\ldots,u_h\}$ and $U'=\{u'_1,\ldots,u'_{h'}\}$. Let $G$ be an input to $\wHom{H}$ with connected components $G^1,\ldots,G^\kappa$. Clearly, $Z_H(G)=\prod_{i=1}^\kappa Z_{H}(G^i)$ (here we suppress the weight functions in the notation). Also, if $G^i$ is non-bipartite then $Z_{H}(G^i)=0$. Suppose that $G^i$ is a connected bipartite graph with bipartition $(V,V')$ where $V=\{v_1,\ldots,v_n\}$ and $V'=\{v'_1,\ldots,v'_{n'}\}$. Then $$Z_{H}(G^i) = \prod_{j=1}^n\sum_{c=1}^{h} w_{v_j}(u_c) \prod_{j'=1}^{n'}\sum_{c'=1}^{h'} w_{v'_{j'}}(u'_{c'}) + \prod_{j=1}^{n'}\sum_{c=1}^{h} w_{v'_{j}}(u_c) \prod_{j'=1}^{n}\sum_{c'=1}^{h'} w_{v_{j'}}(u'_{c'}) .$$ In the context of this paper, where $H$ is a tree, we can draw the following conclusion. \begin{observation} \label{obs:star}\label{obs:triv} Suppose that $H$ is a star. Then $\wHom{H}$ is in $\mathrm{FP}$. \end{observation} \subsection{Trees with intermediate complexity} The purpose of this section is to prove Lemma~\ref{lem:intermediate}, which says that if $H$ is a tree that is not a star and has no induced~$J_3$ then $\textsc{\#BIS} \equiv_\mathrm{AP}\nHom{H}$ and $\textsc{\#BIS}\equiv_\mathrm{AP} \wHom{H}$. The main work of the section is in the proof of Lemma~\ref{lem:intermediate}, but first we need some existing results. In particular, Lemma~\ref{lem:kelk} below is due to Kelk, and Lemma~\ref{lem:CSP} is an easy consequence of earlier work by the authors and their coauthors on counting CSPs.
We have chosen to include a proof sketch of the former because the work of Kelk is unpublished~\cite{Kelk} and a proof of the latter because we did not state or prove it explicitly in earlier work, and it might be rather difficult for the reader to see why it is implied by that work. If $H$ is a tree with no induced~$P_4$ then it is a star, so, by Observation~\ref{obs:triv}, $\wHom{H}$ is in $\mathrm{FP}$. On the other hand, the following lemma shows that if $H$ contains an induced~$P_4$ then even the unweighted problem $\nHom{H}$ is $\textsc{\#BIS}$-hard. To motivate the lemma, suppose that $H$ contains an induced~$P_4$. Then it is a bipartite graph which is not complete, so by Goldberg et al.~\cite[Theorem 2]{GKP} the (uniform) sampling problem for $H$-colourings of a graph is as hard as the sampling problem for independent sets in a bipartite graph. This is not quite the result we are seeking, but it is close in spirit, given the close connection between sampling and approximate counting. The following lemma, which is a special case of \cite[Lemma 2.19]{Kelk}, is exactly what we need. \begin{lemma} [Kelk] \label{lem:kelk} Let $H$ be a tree containing an induced~$P_4$. Then $$\textsc{\#BIS} \leq_\mathrm{AP} \nHom{H}.$$ \end{lemma} \begin{proof} (Proof sketch) We will not give a complete proof of Lemma~\ref{lem:kelk} since it is a special case of a lemma of Kelk, but here is a sketch to give the reader a high-level idea of the construction. Let $\Delta$ be the maximum degree of vertices of~$H$ and let $\Delta'\leq \Delta$ be the maximum degree taken by a neighbour of a degree-$\Delta$ vertex in~$H$. Note that $\Delta'\geq2$ since $H$ cannot be a star. Let $(c,c')$ be any edge in~$H$ with $d_H(c)=\Delta$ and $d_H(c')=\Delta'$. Let $N_c$ be the set $\Gamma_H(c)-\{c'\}$ and let $N_{c'} = \Gamma_H(c')-\{c\}$. Since $H$ is a tree, there are no edges in $H$ between $N_c$ and $N_{c'}$.
Now consider a connected instance $G$ of $\textsc{\#BIS}$ with bipartition $V(G)=(V,V')$. Let $G'$ be the bipartite graph with vertex set $V(G)\cup \{C,C'\}$ (where~$C$ and~$C'$ are new vertices that are not in $V(G)$) and edge set $E(G) \cup \{(C,C')\} \cup \{C\}\times V' \cup \{C'\} \times V$. Consider an $H$-colouring $\sigma$ of~$G'$ with $\sigma(C)=c$ and $\sigma(C')=c'$. (Standard constructions can be used to augment $G'$ so that almost all homomorphisms to $H$ have this property.) For every vertex $v\in V$, $\sigma(v) \in N_{c'} \cup \{c\}$ and for every vertex $v'\in V'$, $\sigma(v') \in N_c \cup \{c'\}$. Also, $\{v \in V \mid \sigma(v)\in N_{c'} \} \cup \{ v'\in V' \mid \sigma(v')\in N_c\}$ is an independent set of~$G$. Thus, there is an injection from independent sets of~$G$ into these $H$-colourings of~$G'$. Standard tricks can be used to adjust the construction so that almost all of the homomorphisms correspond to \emph{maximum} independent sets of~$G$ and so that all maximum independent sets correspond to approximately the same number of homomorphisms. The proof follows from the fact that counting maximum independent sets in a bipartite graph is equivalent to~$\textsc{\#BIS}$~\cite{APred}. \end{proof} As mentioned above, the main result of this section is Lemma~\ref{lem:intermediate}, which will be presented below. Its proof relies on earlier work on counting \emph{constraint satisfaction problems} (CSPs). Suppose that $x$ and $x'$ are Boolean variables. An assignment $\sigma: \{x,x'\}\to \{0,1\}$ is said to satisfy the implication constraint $\mathrm{IMP}(x,x')$ if $(\sigma(x),\sigma(x'))$ is in $\{ (0,0),(0,1),(1,1)\}$. The idea is that ``$\sigma(x)=1$'' implies ``$\sigma(x')=1$''. The assignment~$\sigma$ is said to satisfy the ``pinning'' constraint $\delta_0(x)$ if $\sigma(x)=0$ and the pinning constraint $\delta_1(x)$ if $\sigma(x)=1$.
If $X$ is a set of Boolean variables then a set~$C$ of $\{\mathrm{IMP},\delta_0,\delta_1\}$ constraints on~$X$ is a set of constraints of the form $\delta_0(x)$, $\delta_1(x)$ and $\mathrm{IMP}(x,x')$ for $x$ and $x'$ in $X$. The set $S(X,C)$ of \emph{satisfying assignments} is the set of all assignments $\sigma: X \to \{0,1\}$ which simultaneously satisfy all of the constraints in~$C$. We will consider the following computational problem. \begin{description} \item[Problem] $\textsc{\#CSP}(\mathrm{IMP},\delta_0,\delta_1)$. \item[Instance] A set $X$ of Boolean variables and a set $C$ of $\{\mathrm{IMP},\delta_0,\delta_1\}$ constraints on~$X$. \item[Output] $|S(X,C)|$. \end{description} We will also consider the following weighted version of $\textsc{\#CSP}(\mathrm{IMP},\delta_0,\delta_1)$. Suppose, for each $x\in X$, that $\gamma_x:\{0,1\} \rightarrow \mathbb{Q}_{> 0}$ is a weight function. For an indexed set $\gamma(X) = \{\gamma_x \mid x\in X\}$ of weight functions, let $$Z(X,C,\gamma) = \sum_{\sigma\in S(X,C)} \prod_{x\in X} \gamma_x(\sigma(x)).$$ \begin{description} \item[Problem] $\textsc{\#CSP}^*(\mathrm{IMP},\delta_0,\delta_1)$. \item[Instance] A set $X$ of Boolean variables, a set $C$ of $\{\mathrm{IMP},\delta_0,\delta_1\}$ constraints on $X$, and an indexed set $\gamma(X)$ of weight functions. \item[Output] $Z(X,C,\gamma)$. \end{description} We will use the following lemma, which follows from earlier work on counting CSPs. \begin{lemma} \label{lem:CSP} $\textsc{\#CSP}^*(\mathrm{IMP},\delta_0,\delta_1) \equiv_\mathrm{AP} \textsc{\#BIS}$. \end{lemma} \begin{proof} Dyer, Goldberg, and Jerrum~\cite[Theorem 3]{trichotomy} showed that $\textsc{\#CSP}(\mathrm{IMP},\delta_0, \delta_1) \equiv_\mathrm{AP} \textsc{\#BIS}$. $\textsc{\#CSP}(\mathrm{IMP},\delta_0,\delta_1)$ trivially reduces to $\textsc{\#CSP}^*(\mathrm{IMP},\delta_0,\delta_1)$ since it is a special case.
Thus, it suffices to give an AP-reduction from $\textsc{\#CSP}^*(\mathrm{IMP},\delta_0,\delta_1)$ to $\textsc{\#CSP}(\mathrm{IMP},\delta_0,\delta_1)$. The idea behind the construction that we use comes from Bulatov et al.~\cite[Lemma 36, Item~(i)]{LSM}. We give the details in order to translate the construction into the current context. Let $(X,C,\gamma)$ be an instance of $\textsc{\#CSP}^*(\mathrm{IMP},\delta_0,\delta_1)$. We can assume without loss of generality that all of the weights $\gamma_x(b)$ are positive integers by multiplying all of the weights by the product of the denominators. The construction that follows is not difficult but the details are a little bit complicated, so we use the following running example to illustrate. Let $X=\{y,z\}$, $C = \{\mathrm{IMP}(y,z)\}$, $\gamma_y(0)=5$, $\gamma_y(1) = 2$, $\gamma_z(0)=1$ and $\gamma_z(1)=1$. For every variable $x\in X$, consider the weight function~$\gamma_x$. Let $k_x = \max(\lceil \lg \gamma_x(0) \rceil ,\lceil \lg \gamma_x(1) \rceil)$. For every $b\in \{0,1\}$, write the bit-expansion of $\gamma_x(1\oplus b)$ as $$ \gamma_x(1\oplus b) = a_{x,b,0} + a_{x,b,1} 2^1 + \cdots + a_{x,b,k_{x}} 2^{k_{x}},$$ where each $a_{x,b,i}\in \{0,1\}$. Note that $\gamma_x(1\oplus b)>0$ so there is at least one~$i$ with $a_{x,b,i}=1$. Let $\min_{x,b} = \min\{i \mid a_{x,b,i}=1\}$ and $\max_{x,b} = \max \{ i \mid a_{x,b,i} = 1\}$. If $i<\max_{x,b}$ and $a_{x,b,i}=1$ then let $\text{next}_{x,b,i} = \min\{j>i \mid a_{x,b,j}=1\}$. If $i>\min_{x,b}$ and $a_{x,b,i}=1$ then let $\text{prev}_{x,b,i} = \max\{j<i \mid a_{x,b,j}=1\}$. For the running example, \begin{itemize} \item $k_y= \lceil \lg 5 \rceil = 3$ and $k_z = \lceil \lg 1 \rceil =0$. \item For the variable $y$, taking $b=0$ we have $\gamma_y(1\oplus 0) = 2^1$ so $a_{y,0,0}=0$, $a_{y,0,1}=1$, and $a_{y,0,2}= a_{y,0,3}=0$. Also, $\min_{y,0}=1=\max_{y,0}$.
\item Similarly, taking $b=1$ gives $\gamma_y(1\oplus 1) = 2^0+2^2$ so $a_{y,1,0}=1$, $a_{y,1,1}=0$, $a_{y,1,2}=1$ and $a_{y,1,3}=0$. Thus $\min_{y,1}=0$ and $\max_{y,1}=2$. Then $\text{next}_{y,1,0}=2$ and $\text{prev}_{y,1,2}=0$. \item Finally, for the variable $z$ and $b\in \{0,1\}$, we have $\gamma_z(1\oplus b)=2^0$ so $a_{z,b,0}=1$ and $\min_{z,b}=0=\max_{z,b}$. \end{itemize} Now for every $x\in X$, for every $ i \in \{1,\ldots,k_x\}$ and every $b\in\{0,1\}$ with $a_{x,b,i}=1$ let $A_{x,b,i}$ be the set of $i+2$ variables $\{x_{b,i,1},\ldots,x_{b,i,i}\} \cup \{ L_{x,b,i},R_{x,b,i}\}$. Let $C_{x,b,i}$ be the set of implication constraints $\bigcup_{j\in[i]} \{\mathrm{IMP}(L_{x,b,i},x_{b,i,j}),\mathrm{IMP}(x_{b,i,j},R_{x,b,i})\}$. Note that there are $2^i+2$ satisfying assignments to the $\textsc{\#CSP}$ instance $(A_{x,b,i},C_{x,b,i})$: one with $\sigma(L_{x,b,i})=\sigma(R_{x,b,i})=0$, one with $\sigma(L_{x,b,i})=\sigma(R_{x,b,i})=1$, and $2^i$ with $\sigma(L_{x,b,i})=0$ and $\sigma(R_{x,b,i})=1$. The point here is that the sets $A_{x,b,i}$ will be combined for different values of~$i$. The satisfying assignments with $\sigma(L_{x,b,i})=\sigma(R_{x,b,i})=0$ will correspond to contributions from a different index $i'>i$ and the satisfying assignments with $\sigma(L_{x,b,i})=\sigma(R_{x,b,i})=1$ will correspond to contributions from a different index $i'<i$. There are exactly $2^i$ satisfying assignments with $\sigma(L_{x,b,i})=0$ and $\sigma(R_{x,b,i})=1$ and these will correspond to the $a_{x,b,i} 2^i$ summand in the bit-expansion of $\gamma_x(1\oplus b)$. For the running example, \begin{itemize} \item for the variable $y$ and for $b=0$ and $i=1$ we have $A_{y,0,1} = \{y_{0,1,1} \} \cup \{ L_{y,0,1},R_{y,0,1}\}$. Then $C_{y,0,1}$ contains $ \{\mathrm{IMP}(L_{y,0,1},y_{0,1,1}),\mathrm{IMP}(y_{0,1,1},R_{y,0,1})\}$ and there are $2+2^1=4$ solutions. 
\item For the variable $y$ and for $b=1$ and $i=2$ we have $A_{y,1,2} = \{y_{1,2,1}, y_{1,2,2}\} \cup \{ L_{y,1,2},R_{y,1,2}\}$. Then $C_{y,1,2}$ contains the constraints $\mathrm{IMP}(L_{y,1,2},y_{1,2,1})$, $\mathrm{IMP}(y_{1,2,1},R_{y,1,2})$, $\mathrm{IMP}(L_{y,1,2},y_{1,2,2})$, and $\mathrm{IMP}(y_{1,2,2},R_{y,1,2})$ and there are $2+2^2=6$~solutions. \end{itemize} We now add some constraints corresponding to the $i=0$ case above. For every $x\in X$ and every $b\in \{0,1\}$ with $a_{x,b,0}=1$ let $A_{x,b,0}$ be the set of variables $\{ L_{x,b,0},R_{x,b,0}\}$. Let $C_{x,b,0}$ be the set containing the constraint $\mathrm{IMP}(L_{x,b,0},R_{x,b,0})$. Note that there are $2^0+2=3$ satisfying assignments to the $\textsc{\#CSP}$ instance $(A_{x,b,0},C_{x,b,0})$: one with $\sigma(L_{x,b,0})=\sigma(R_{x,b,0})=0$, one with $\sigma(L_{x,b,0})=\sigma(R_{x,b,0})=1$, and $2^0=1$ with $\sigma(L_{x,b,0})=0$ and $\sigma(R_{x,b,0})=1$. For the running example, \begin{itemize} \item $A_{y,1,0} = \{ L_{y,1,0},R_{y,1,0}\}$ and $C_{y,1,0} = \{ \mathrm{IMP}(L_{y,1,0},R_{y,1,0})\}$. \item For $b\in \{0,1\}$, $A_{z,b,0} = \{ L_{z,b,0},R_{z,b,0}\}$ and $C_{z,b,0} = \{ \mathrm{IMP}(L_{z,b,0},R_{z,b,0})\}$. \end{itemize} Now for every $x\in X$ and $b\in\{0,1\}$ let $C'_{x,b}$ be the set of constraints forcing equality of $\sigma(R_{x,b,i})$ and $\sigma(L_{x,b,j})$ when $i$ and $j$ are adjacent one-bits in the bit-expansion of $\gamma_x(1\oplus b)$. In particular, $$C'_{x,b} = \bigcup_{\text{next}_{x,b,i}=j, \text{prev}_{x,b,j}=i} \{ \mathrm{IMP}(R_{x,b,i},L_{x,b,j}), \mathrm{IMP}(L_{x,b,j},R_{x,b,i}) \} $$ For the running example, \begin{itemize} \item $C'_{y,0} = C'_{z,0} = C'_{z,1} = \emptyset$ since these variables have only one positive coefficient in the bit expansion. \item For the variable $y$ and $b=1$ the relevant non-zero coefficients are $i=0$ and $j=2$ so we get $$C'_{y,1} = \{ \mathrm{IMP}(R_{y,1,0},L_{y,1,2}), \mathrm{IMP}(L_{y,1,2},R_{y,1,0}) \}. 
$$ \end{itemize} Now consider $x\in X$. Let $C''_{x,0}=C'_{x,0} \cup \{\delta_0(L_{x,0,\min_{x,0}})\}$ and let $C''_{x,1} = C'_{x,1} \cup \{\delta_1(R_{x,1,\max_{x,1}})\}$. For $x\in X$ and $b\in \{0,1\}$ let $$A_{x,b} = \bigcup_{i\in \{0,\ldots,k_x\}, a_{x,b,i}=1} A_{x,b,i}$$ and let $$C_{x,b} = C''_{x,b} \cup \bigcup_{i\in\{0,\ldots,k_x\}, a_{x,b,i}=1} C_{x,b,i}. $$ We will now show that there are $\gamma_x(1)$ satisfying assignments to the $\textsc{\#CSP}$ instance $(A_{x,0},C_{x,0})$ which have the property that $\sigma(R_{x,0,\max_{x,0}})=1$ and one satisfying assignment in which $\sigma(R_{x,0,\max_{x,0}})=0$. To see this, note that the constraint $\delta_0(L_{x,0,\min_{x,0}})$ forces $\sigma(L_{x,0,\min_{x,0}})=0$. If $\sigma(R_{x,0,\max_{x,0}})=0$ then all of the variables in $A_{x,0}$ are assigned spin~$0$ by~$\sigma$. Otherwise, there is exactly one~$i$ with $a_{x,0,i}=1$ and $\sigma(L_{x,0,i})=0$ and $\sigma(R_{x,0,i})=1$. As we noted above, there are $2^i$ such assignments to the variables in~$A_{x,0,i}$. But $\sum_{i: a_{x,0,i}=1} 2^i = \gamma_x(1)$, as required. Similarly, there are $\gamma_x(0)$ satisfying assignments to the $\textsc{\#CSP}$ instance $(A_{x,1},C_{x,1})$ in which $\sigma(L_{x,1,\min_{x,1}})=0$ and there is one satisfying assignment in which $\sigma(L_{x,1,\min_{x,1}})=1$. Let us quickly apply this to the running example. \begin{itemize} \item Taking variable $y$ and $b=0$ we have $A_{y,0} = A_{y,0,1}$ and $C_{y,0} = \{\delta_0(L_{y,0,1})\} \cup C_{y,0,1}$. Then $\max_{y,0}=1$. From above, there is one solution~$\sigma$ with $\sigma(R_{y,0,\max_{y,0}})=0$ and there are $2^1=\gamma_y(1)$ solutions $\sigma$ with $\sigma(R_{y,0,\max_{y,0}})=1$. \item Taking variable $y$ and $b=1$ we have $$A_{y,1} = A_{y,1,0} \cup A_{y,1,2}$$ and $$C_{y,1} = \{ \delta_1(R_{y,1,2}), \mathrm{IMP}(R_{y,1,0},L_{y,1,2}), \mathrm{IMP}(L_{y,1,2},R_{y,1,0}) \} \cup C_{y,1,0} \cup C_{y,1,2} .$$ There is one solution $\sigma$ with $\sigma(L_{y,1,0})=1$.
There are $2^0+2^2=\gamma_y(0)$ solutions $\sigma$ with $\sigma(L_{y,1,0})=0$. \item Taking variable $z$ we have $A_{z,b} = A_{z,b,0} = \{L_{z,b,0},R_{z,b,0}\}$. Then, taking $b=0$, $C_{z,0} = \{ \delta_0(L_{z,0,0}),\mathrm{IMP}(L_{z,0,0},R_{z,0,0})\}$, so there is $2^0=1=\gamma_z(1)$ assignment with $\sigma(R_{z,0,0})=1$ and one with $\sigma(R_{z,0,0})=0$. Taking $b=1$, $C_{z,1} = \{\delta_1(R_{z,1,0}),\mathrm{IMP}(L_{z,1,0},R_{z,1,0})\}$ so there is $2^0=1=\gamma_z(0)$ assignment with $\sigma(L_{z,1,0})=0$ and one with $\sigma(L_{z,1,0})=1$. \end{itemize} Finally, consider $x\in X$. Let $C_x$ be the set of constraints containing the four implications $\mathrm{IMP}(x,R_{x,0,\max_{x,0}})$, $\mathrm{IMP}(R_{x,0,\max_{x,0}},x)$, $\mathrm{IMP}(x,L_{x,1,\min_{x,1}})$, and $\mathrm{IMP}(L_{x,1,\min_{x,1}},x)$. Now there are $\gamma_x(1)$ solutions to $(A_{x,0} \cup A_{x,1} \cup \{x\},C_{x,0} \cup C_{x,1} \cup C_x)$ with $\sigma(x)=1$ and $\gamma_x(0)$ solutions with $\sigma(x)=0$. Thus, we have simulated the weight function $\gamma_x$ with $\{\mathrm{IMP},\delta_0,\delta_1\}$ constraints. For the running example, \begin{itemize} \item first consider the variable $y$. \begin{itemize} \item With $\sigma(y)=1$ the constraints in $C_y$ force $\sigma(R_{y,0,\max_{y,0}})=1$ which, from above, gives $\gamma_y(1)$ solutions to $(A_{y,0},C_{y,0})$. The constraints in $C_y$ also force $\sigma(L_{y,1,\min_{y,1}})=1$, which, from above, gives one solution to $(A_{y,1},C_{y,1})$. \item With $\sigma(y)=0$ the constraints in $C_y$ force $\sigma(R_{y,0,\max_{y,0}})=0$ so there is only one solution to $(A_{y,0},C_{y,0})$. The constraints in $C_y$ also force $\sigma(L_{y,1,\min_{y,1}})=0$ so there are $\gamma_y(0)$ solutions to $(A_{y,1},C_{y,1})$. \end{itemize} \item The argument for variable~$z$ is similar.
\end{itemize} Thus, the correct output for the $\textsc{\#CSP}^*(\mathrm{IMP},\delta_0,\delta_1)$ instance $(X,C,\gamma)$ is the same as the correct output for the $\textsc{\#CSP}(\mathrm{IMP},\delta_0,\delta_1)$ instance obtained from $(X,C,\gamma)$ by adding new variables and constraints to simulate each weight function $\gamma_x$. \end{proof} We can now prove the main lemma of this section. \begin{lemma} \label{lem:intermediate} Suppose that $H$ is a tree which is not a star and which has no induced~$J_3$. Then $$\textsc{\#BIS} \equiv_\mathrm{AP}\nHom{H} \mbox{ and } \textsc{\#BIS}\equiv_\mathrm{AP} \wHom{H}.$$ \end{lemma} \begin{proof} $\nHom{H}$ is a special case of $\wHom{H}$ so it is certainly AP-reducible to $\wHom{H}$. By Lemma~\ref{lem:kelk}, $\textsc{\#BIS}$ is AP-reducible to $\nHom{H}$ and therefore it is AP-reducible to $\wHom{H}$. So it suffices to give an AP-reduction from $\wHom{H}$ to $\textsc{\#BIS}$. Applying Lemma~\ref{lem:CSP}, it suffices to give an AP-reduction from $\wHom{H}$ to $\textsc{\#CSP}^*(\mathrm{IMP},\delta_0,\delta_1)$. In order to do the reduction, we will order the vertices of~$H$ using the fact that it has no induced~$J_3$. (This ordering is similar to the one arising from the ``crossing property'' of the authors that is mentioned in \cite[Section 7.3.3]{Kelk}.) A ``convex ordering'' of a connected bipartite graph with bipartition $(U,U')$ with $|U|=h$ and $|U'|=h'$ and edge set $E\subseteq U\times U'$ is a pair of bijections $\pi:U \rightarrow [h]$ and $\pi':U' \rightarrow [h']$ such that there are monotonically non-decreasing functions $m:[h]\to[h']$, $M:[h]\to[h']$, $m':[h']\to[h]$ and $M':[h']\to[h]$ satisfying the following conditions. \begin{itemize} \item If $\pi(u)=i$ then $\{\pi'(u') \mid (u,u')\in E \} = \{ \ell \in [h'] \mid m(i) \leq \ell \leq M(i)\}$. \item If $\pi'(u')=i$ then $\{\pi(u) \mid (u,u')\in E \} = \{ \ell \in [h] \mid m'(i) \leq \ell \leq M'(i)\}$.
\end{itemize} The purpose of~$\pi$ and~$\pi'$ is just to put the vertices in the correct order. For example, in Figure~\ref{fig:referee}, \begin{figure} \caption{An example of a convex ordering} \label{fig:referee} \end{figure} $\pi$ is the identity map on the set $U=\{1,2,3,4\}$ and $\pi'$ is the identity map on the set $U'=\{1,2,3\}$. Vertex~$3$ in~$U$ is connected to the sequence containing vertices $1$, $2$ and $3$ in~$U'$, so $m(3)=1$ and $M(3)=3$. Every other vertex in~$U$ has degree~$1$ and in particular $m(1)=M(1)=1$, $m(2)=M(2)=1$ and $m(4)=M(4)=3$. Similarly, vertex~$1$ in~$U'$ is attached to the sequence containing vertices $1$, $2$ and $3$ in $U$ so $m'(1)=1$ and $M'(1)=3$ but $m'(2)=M'(2)=3$ and $m'(3)=M'(3)=4$. To see that a convex ordering of~$H$ always exists, consider the following algorithm. The input is a tree~$H$ with no induced~$J_3$, a bipartition $(U,U')$ of the vertices of~$H$, and a distinguished leaf~$u\in U$ whose parent~$u'$ is adjacent to at most one non-leaf. (Note that such a leaf~$u$ always exists since $H$ is a tree.) The output is a convex ordering of~$H$ in which $\pi(u)=h$ and $\pi'(u')=h'$. Here is what the algorithm does. If all of the neighbours of~$u'$ are leaves, then $h'=1$, so take any bijection $\pi$ from $U-\{u\}$ to $[h-1]$ and set $\pi(u)=h$ and $\pi'(u')=h'$. Return this output. Otherwise, let $u''$ be the neighbour of $u'$ that is not a leaf. Let $H'$ be the graph formed from $H$ by removing all of the $d_H(u')-1$ neighbours of $u'$ other than $u''$. Since $H$ has no induced $J_3$, the graph $H'$ has the following property: $u'$ is a leaf whose parent, $u''$, is adjacent to at most one non-leaf. Recursively construct a convex ordering for $H'$ in which $\pi'(u')=h'$ and $\pi(u'')=h-(d_H(u')-1)$. Extend $\pi$ by assigning the values $h-(d_H(u')-1)+1,\ldots,h$ to the removed leaf-neighbours of~$u'$, ensuring that $\pi(u)=h$. We will now show how to reduce $\wHom{H}$ to $\textsc{\#CSP}^*(\mathrm{IMP},\delta_0,\delta_1)$.
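Before turning to the reduction, the convexity conditions just described can be checked mechanically on the example of Figure~\ref{fig:referee}. The following Python sketch (the helper names are ours, not part of the construction) verifies that each vertex's neighbourhood on the other side is a contiguous interval and that the interval endpoints are monotonically non-decreasing.

```python
# A sketch (helper names are ours) checking the convex-ordering conditions:
# each vertex's neighbourhood on the other side must be a contiguous
# interval, and the endpoints m, M (and m', M') must be non-decreasing.

def interval_endpoints(h, edges):
    """Return [(m(i), M(i)) for i = 1..h], checking contiguity."""
    ends = []
    for i in range(1, h + 1):
        nbrs = sorted(j for (a, j) in edges if a == i)
        assert nbrs == list(range(nbrs[0], nbrs[-1] + 1)), "not an interval"
        ends.append((nbrs[0], nbrs[-1]))
    return ends

def is_convex_ordering(h, h_prime, edges):
    m_M = interval_endpoints(h, edges)                                # m, M
    m_M_prime = interval_endpoints(h_prime,
                                   [(j, i) for (i, j) in edges])      # m', M'
    nondecreasing = lambda xs: all(a <= b for a, b in zip(xs, xs[1:]))
    return all(nondecreasing([e[k] for e in ends])
               for ends in (m_M, m_M_prime) for k in (0, 1))

# Edges of the tree in the figure: vertex 3 of U sees vertices 1,2,3 of U'.
edges = [(1, 1), (2, 1), (3, 1), (3, 2), (3, 3), (4, 3)]
print(is_convex_ordering(4, 3, edges))  # True
```

On this input the computed endpoints agree with the values $m(3)=1$, $M(3)=3$, $m'(1)=1$, $M'(1)=3$ read off in the figure discussion above.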
Let $G$ be a connected bipartite graph with bipartition $(V,V')$ and let $W(G,H)$ be an indexed set of weight functions. Let $$Z'_{H}(G,W(G,H)) = \sum_{\sigma\in \Hom GH\text{ with $\sigma(V)\subseteq U$}}\, \prod_{v\in V(G)} w_v(\sigma(v))$$ and let $$Z''_{H}(G,W(G,H)) = \sum_{\sigma\in \Hom GH\text{ with $\sigma(V)\subseteq U'$}}\, \prod_{v\in V(G)} w_v(\sigma(v)).$$ Clearly, $Z_{H}(G,W(G,H)) = Z'_{H}(G,W(G,H))+Z''_{H}(G,W(G,H))$. We will show how to reduce the computation of $Z'_{H}(G,W(G,H))$, given the input $(G,W(G,H))$, to the problem $\textsc{\#CSP}^*(\mathrm{IMP},\delta_0,\delta_1)$. In the same way, we can reduce the computation of $Z''_{H}(G,W(G,H))$ to $\textsc{\#CSP}^*(\mathrm{IMP},\delta_0,\delta_1)$. Since we are considering assignments which map $V$ to $U$ and $V'$ to $U'$, the vertices in~$U$ will not get mixed up with the vertices in~$U'$. We can simplify the notation by relabelling the vertices so that $\pi$ and~$\pi'$ are the identity permutations. Then, given the convex ordering property, we can assume that $U=[h]$ and that $U'=[h']$ and that we have monotonically non-decreasing functions $m:[h]\to[h']$, $M:[h]\to[h']$, $m':[h']\to[h]$ and $M':[h']\to[h]$ such that \begin{itemize} \item for $i\in U$, $\Gamma_H(i) = \{ \ell \in [h'] \mid m(i) \leq \ell \leq M(i)\}$, and \item for $i\in U'$, $\Gamma_H(i) = \{ \ell \in [h] \mid m'(i) \leq \ell \leq M'(i)\}$. \end{itemize} A configuration $\sigma$ contributing to $Z'_{H}(G,W(G,H))$ is a map from $V$ to $[h]$ together with a map from $V'$ to $[h']$ such that the following is true for every edge $(v,v')$ of~$G$, where $v\in V$ and $v'\in V'$. \begin{enumerate}[(1)] \item \label{one} $m({\sigma(v)}) \leq \sigma(v') \leq M({\sigma(v)})$, and \item \label{two} $m'({\sigma(v')}) \leq \sigma(v) \leq M'({\sigma(v')})$. \end{enumerate} Since $m$, $M$, $m'$ and $M'$ are monotonically non-decreasing, we can re-write the conditions in a less natural way which will be straightforward to apply below.
\begin{enumerate}[($1'$)] \item \label{onep} $\sigma(v) \leq i$ implies $\sigma(v') \leq M(i)$, \item \label{twop} $\sigma(v') \leq i'$ implies $\sigma(v) \leq M'(i')$, \item \label{threep} $\sigma(v') \leq m(i)-1$ implies $\sigma(v) \leq i-1$, and \item \label{fourp} $\sigma(v) \leq m'(i')-1$ implies $\sigma(v') \leq i'-1$. \end{enumerate} Using monotonicity, (\ref{onep}$'$) and (\ref{twop}$'$) follow from the right-hand side of (\ref{one}) and (\ref{two}). Suppose that $\sigma(v') < m(i)$. Then the left-hand side of (\ref{one}) gives $m(\sigma(v))< m(i)$, so by monotonicity, $\sigma(v)< i$. Equation~(\ref{threep}$'$) follows. In the same way, Equation~(\ref{fourp}$'$) follows from the left-hand side of (\ref{two}). Going the other direction, the right-hand sides of (\ref{one}) and (\ref{two}) follow from (\ref{onep}$'$) and (\ref{twop}$'$). To derive the left-hand side of (\ref{one}), take the contrapositive of (\ref{threep}$'$), which says that $\sigma(v) \geq i$ implies $\sigma(v') \geq m(i)$, and then substitute $i=\sigma(v)$. The derivation of the left-hand side of (\ref{two}) is similar. We now construct an instance of $\textsc{\#CSP}^*(\mathrm{IMP},\delta_0,\delta_1)$. For each vertex $v\in V$ introduce Boolean variables $v_0,\ldots,v_{h}$. Introduce constraints $\delta_0(v_0)$ and $\delta_1(v_{h})$ and, for every $i\in[h]$, $\mathrm{IMP}(v_{i-1},v_i)$. For each vertex $v'\in V'$ introduce Boolean variables $v'_0,\ldots,v'_{h'}$. Introduce constraints $\delta_0(v'_0)$ and $\delta_1(v'_{h'})$ and, for every $i'\in[h']$, $\mathrm{IMP}(v'_{i'-1},v'_{i'})$. Now there is a one-to-one correspondence between assignments $\sigma$ mapping $V$ to~$U$ and $V'$ to~$U'$, and assignments $\tau$ to the Boolean variables that satisfy the above constraints. In particular, $\sigma(v)=\min\{i \mid \tau(v_i)=1\}$. Similarly, $\sigma(v')=\min\{i' \mid \tau(v'_{i'})=1\}$. Now, $\sigma(v) \leq i$ is exactly equivalent to $\tau(v_i) =1$.
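The one-to-one correspondence between $\sigma$ and the Boolean chains can be checked by brute force on a small example. The following sketch (with an illustrative value of $h$; the variable names are ours) enumerates the assignments allowed by the constraints $\delta_0(v_0)$, $\delta_1(v_h)$ and $\mathrm{IMP}(v_{i-1},v_i)$, and recovers $\sigma(v)$ from each.

```python
from itertools import product

# Sanity check (h is an illustrative value): the chains v_0,...,v_h
# satisfying delta_0(v_0), delta_1(v_h) and IMP(v_{i-1}, v_i) are exactly
# the monotone "thermometer" strings 0...01...1, and
# sigma(v) = min{ i : tau(v_i) = 1 } ranges over 1,...,h, once each.
h = 4

def satisfies(tau):
    imps = all(tau[i - 1] <= tau[i] for i in range(1, h + 1))  # IMP is "<="
    return tau[0] == 0 and tau[h] == 1 and imps

chains = [tau for tau in product((0, 1), repeat=h + 1) if satisfies(tau)]
sigmas = sorted(min(i for i, t in enumerate(tau) if t == 1) for tau in chains)
print(sigmas)  # [1, 2, 3, 4]: one chain for each value of sigma(v)
```

Each satisfying chain is determined by the position of its first $1$, which is exactly the claimed bijection with the values $1,\ldots,h$.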
Thus, we can add the following further constraints to rule out assignments $\sigma$ that do not satisfy (\ref{onep}$'$), (\ref{twop}$'$), (\ref{threep}$'$) and (\ref{fourp}$'$). Add all of the following constraints, where $(v,v')$ is an edge of~$G$ with $v\in V$ and $v'\in V'$, $i\in [h]$ and $i'\in [h']$: $\mathrm{IMP}(v_{i},v'_{M(i)})$, $\mathrm{IMP}(v'_{i'}, v_{M'(i')})$, $\mathrm{IMP}(v'_{m(i)-1},v_{i-1})$, and $\mathrm{IMP}(v_{m'(i')-1},v'_{i'-1})$. Now the assignments~$\tau$ of Boolean values to the variables satisfy all of the constraints if and only if they correspond to assignments~$\sigma$ which satisfy (\ref{onep}$'$), (\ref{twop}$'$), (\ref{threep}$'$) and (\ref{fourp}$'$), and so should contribute to $$Z'_{H}(G,W(G,H)) = \sum_{\sigma\in \Hom GH\text{ with $\sigma(V)\subseteq U$}} \, \prod_{v\in V(G)} w_v(\sigma(v)).$$ We will next construct weight functions for the instance of $\textsc{\#CSP}^*(\mathrm{IMP},\delta_0,\delta_1)$ in order to reproduce the effect of the weight functions in $W(G,H)$. In order to avoid division by~$0$, we first modify the construction. Suppose that for some variable $v\in V$ and some $i\in [h]$, $w_v(i)=0$. Configurations $\sigma$ with $\sigma(v)=i$ make no contribution to $Z'_{H}(G,W(G,H))$. Thus, it does no harm to rule out such configurations by modifying the $\textsc{\#CSP}^*(\mathrm{IMP},\delta_0,\delta_1)$ instance to ensure that $\tau(v_i)=1$ implies $\tau(v_{i-1})=1$. We do this by adding the constraint $\mathrm{IMP}(v_i,v_{i-1})$. Similarly, if $w_{v'}(i')=0$ for $v'\in V'$ and $i'\in[h']$ then we add the constraint $\mathrm{IMP}(v'_{i'},v'_{i'-1})$. Once we've made this change, we can replace $W(G,H)$ with an equivalent indexed set of weight functions $W'(G,H)$ where $w'_v(i)=w_v(i)$ if $w_v(i)>0$ and $w'_v(i)=1$ otherwise. The weight functions for the $\textsc{\#CSP}^*(\mathrm{IMP},\delta_0,\delta_1)$ instance are then constructed as follows, for each $v\in V$. For each $i\in[h]$, let $\gamma_{v_{i-1}}(0)=1$. Let $\gamma_{v_h}(1)=w'_v(h)$.
For each $i\in [h-1]$, let $\gamma_{v_i}(1)=w'_v(i)/w'_v(i+1)$. Note that $\gamma_{v_h}(0)$ and $\gamma_{v_0}(1)$ have not yet been defined --- these values can be chosen arbitrarily. They will not be relevant given the constraints $\delta_0(v_0)$ and $\delta_1(v_h)$. Now if $\sigma(v)=i$ we have $\tau(v_0)=\cdots = \tau(v_{i-1})=0$ and $\tau(v_i)=\cdots = \tau(v_h)=1$ so $\prod_j \gamma_{v_j}(\tau(v_j)) = w'_v(i)$, as required. Similarly, for each $v'\in V'$, define the weight functions as follows. For each $i\in[h']$, let $\gamma_{v'_{i-1}}(0)=1$. Let $\gamma_{v'_{h'}}(1)=w'_{v'}(h')$. For each $i\in [h'-1]$, let $\gamma_{v'_i}(1)=w'_{v'}(i)/w'_{v'}(i+1)$. Using these weight functions, we obtain the desired reduction from the computation of $Z'_{H}(G,W(G,H))$ to $\textsc{\#CSP}^*(\mathrm{IMP},\delta_0,\delta_1)$. \end{proof} \subsection{Intractable trees} Lemma~\ref{lem:intermediate} shows that if $H$ has no induced~$J_3$ then $\wHom{H}$ is AP-reducible to $\textsc{\#BIS}$. The purpose of this section is to prove Lemma~\ref{lem:hardweighted}, below, which shows, by contrast, that if $H$ does have an induced~$J_3$, then $\wHom{H}$ is $\textsc{\#Sat}$-hard. In order to prepare for the proof of Lemma~\ref{lem:hardweighted}, we introduce the notion of a multiterminal cut. Given a graph~$G=(V,E)$ with distinguished vertices~$\alpha$, $\beta$ and~$\gamma$, which we refer to as ``terminals'', a {\it multiterminal cut\/} is a set $E'\subseteq E$ whose removal disconnects the terminals in the sense that the graph $(V,E\setminus E')$ does not contain a path between any two distinct terminals. The size of the multiterminal cut is the number of edges in~$E'$. Consider the following computational problem. \begin{description} \item[Problem] \MultiCutCount{$3$}. \item[Instance] A positive integer~$b$, a connected graph $G=(V,E)$ and $3$ distinct vertices $\alpha$, $\beta$ and $\gamma$ from~$V$. The input has the property that every multiterminal cut has size at least~$b$. 
\item[Output] The number of size-$b$ multiterminal cuts for $G$ with terminals $\alpha$, $\beta$, and $\gamma$. \end{description} We will use the following technical lemma, which we used before in~\cite{Ising} (without stating it formally). \begin{lemma} \label{lem:cut} \MultiCutCount{$3$} $\equiv_\mathrm{AP} \textsc{\#Sat}$. \end{lemma} \begin{proof} This follows essentially from the proof of Dahlhaus et al.~\cite{Dalhaus} that the decision version of \MultiCutCount{$3$} is NP-hard and from the fact \cite[Theorem 1]{APred} that the NP-hardness of a decision problem implies that the corresponding counting problem is AP-interreducible with $\textsc{\#Sat}$. The details are given in \cite[Section 4]{Ising}. \end{proof} \begin{lemma} \label{lem:hardweighted} Suppose that $H$ is a tree with an induced~$J_3$. Then $$\textsc{\#Sat} \leq_\mathrm{AP}\wHom{H}.$$ \end{lemma} \begin{proof} We will prove the lemma by giving an AP-reduction from \MultiCutCount{$3$} to $\wHom{H}$. The lemma will then follow from Lemma~\ref{lem:cut}. Suppose that $H$ has an induced subgraph which is isomorphic to~$J_3$. To simplify the notation, label the vertices and edges of~$H$ in such a way that the induced subgraph is (identically) the graph $J$ depicted in Figure~\ref{fig:J}. \begin{figure} \caption{The tree $J$.} \label{fig:J} \end{figure} Let $b$, $G=(V,E)$, $\alpha$, $\beta$ and $\gamma$ be an input to \MultiCutCount{$3$}. Let $s= 2 + |E(G)|+2|V(G)|$. (The exact size of~$s$ is not important, but it has to be at least this big to make the calculation work, and it has to be at most a polynomial in the size of~$G$.) Let $G'$ be the graph defined as follows. First, let $V'(G)= \{(e,i) \mid e\in E,i\in[s]\}$. Thus, $V'(G)$ contains $s$ vertices for each edge $e$ of~$G$.
Then let $G'$ be the graph with vertex set $V(G') = V(G) \cup V'(G)$ and edge set $$E(G') = \{(u,(e,i)) \mid u\in V(G), (e,i)\in V'(G), \mbox{and $u$ is an endpoint of~$e$} \}.$$ We will define weight functions $w_v$ for $v\in V(G')$ so that an approximation to the number of size-$b$ multi-terminal cuts for $G$ with terminals $\alpha$, $\beta$ and $\gamma$ can be obtained from an approximation to $Z_H(G',W(G',H))$. We start by defining the set of pairs $(v,c)\in V(G')\times V(H)$ for which we will specify $w_v(c)>0$. In particular, define the set $\Omega$ as follows. $$\Omega = \{(\alpha,x_0),(\beta,y_0),(\gamma,z_0)\} \cup \big((V(G)-\{\alpha,\beta,\gamma\}) \times \{x_0,y_0,z_0\} \big) \cup \left(V'(G) \times \{w,x_1,y_1,z_1\}\right).$$ Let $w_v(c)=1$ if $(v,c)\in \Omega$. Otherwise, let $w_v(c)=0$. Thus, $Z_H(G',W(G',H))$ is the number of homomorphisms $\sigma$ from~$G'$ to~$H$ with $\sigma(V(G)) = \{x_0,y_0,z_0\}$, $\sigma(V'(G)) \subseteq \{w,x_1,y_1,z_1\}$, $\sigma(\alpha)=x_0$, $\sigma(\beta)=y_0$ and $\sigma(\gamma)=z_0$. We will refer to these as ``valid'' homomorphisms. If $\sigma$ is a valid homomorphism, then let \begin{align*} \bichrom{\sigma} = \{ e \in E(G) \mid \quad & \mbox{the vertices of~$V(G)$ corresponding to } \\ & \mbox{the endpoints of~$e$ are mapped to different colours by~$\sigma$} \}.\end{align*} Note that, for every valid homomorphism~$\sigma$, $\bichrom{\sigma}$ is a multiterminal cut for the graph~$G$ with terminals~$\alpha$, $\beta$ and~$\gamma$. For every multiterminal cut $E'$, let $\components{E'}$ denote the number of components in the graph $(V,E\setminus E')$. For each multiterminal cut~$E'$, let $Z_{E'}$ denote the number of valid homomorphisms~$\sigma$ from~$G'$ to~$H$ such that $\bichrom{\sigma} = E'$. From the definition of multiterminal cut, $\components{E'}\geq 3$. If $\components{E'}=3$ then $$Z_{E'} = 2^{s(|E(G)|-|E'|)}$$ since there are two choices for the colours of each vertex $(e,i)$ with $e\in E(G)\setminus E'$.
(Since the endpoints of each such edge~$e$ are assigned the same colour by~$\sigma$, the vertex $(e,i)$ can either be coloured~$w$, or it can be coloured with one other colour.) Also, $$Z_{E'} \leq 2^{s(|E(G)|-|E'|)} 3^{\components{E'}-3},$$ since the component of~$\alpha$ is mapped to~$x_0$ by~$\sigma$, the component of~$\beta$ is mapped to~$y_0$, the component of~$\gamma$ is mapped to~$z_0$, and each remaining component is mapped to a colour in $\{x_0,y_0,z_0\}$. Let $Z^*= 2^{s(|E(G)|-b)}$. If $E'$ has size~$b$ then $\components{E'}=3$. (Otherwise, there would be a smaller multiterminal cut, contrary to the definition of \MultiCutCount{$3$}.) So, in this case, \begin{equation} Z_{E'} = Z^*. \label{eq:smgoodcuts} \end{equation} If $E'$ has size $b'>b$ then $$Z_{E'} \leq 2^{s(|E(G)|-b')} 3^{\components{E'}-3} = 2^{-s(b'-b)} 3^{\components{E'}-3} Z^* \leq 2^{-s} 3^{|V(G)|} Z^*. $$ Clearly, there are at most $2^{|E(G)|}$ multiterminal cuts~$E'$. So, using the definition of~$s$, \begin{equation} \label{eq:smbigcuts} \sum_{E' : |E'|>b} Z_{E'} \leq \frac{Z^*}{4}. \end{equation} From Equation~(\ref{eq:smgoodcuts}), we find that, if there are $N$ size-$b$ multiterminal cuts then $$Z_H(G',W(G',H)) = N Z^* + \sum_{E' : |E'|>b} Z_{E'} .$$ So applying Equation~(\ref{eq:smbigcuts}), we get $$ N \leq \frac{Z_H(G',W(G',H))}{Z^*} \leq N + \frac{1}{4}.$$ Thus, we have an AP-reduction from \MultiCutCount{$3$} to $\wHom{H}$. To determine the accuracy with which~$Z_H(G',W(G',H))$ should be approximated in order to achieve a given accuracy in the approximation to~$N$, see the proof of Theorem 3 of \cite{APred}. \end{proof} \section{Tree homomorphisms capture the ferromagnetic Potts model.} \label{sec:potts} The problem $\nHom{H}$ counts colourings of a graph satisfying ``hard'' constraints: two colours (corresponding to vertices of $H$) are either allowed on adjacent vertices of the instance or disallowed.
By contrast, the Potts model (to be described presently) is ``permissive'': every pair of colours is allowed on adjacent vertices, but some pairs are favoured relative to others. The strength of interactions between colours is controlled by a real parameter~$\gamma$. In this section, we will show that approximating the number of homomorphisms to $J_q$ is equivalent in difficulty to the problem of approximating the partition function of the ferromagnetic $q$-state Potts model. Since the latter problem is not known to be \textsc{\#BIS}-easy for any $q>2$, we might speculate that approximating $\nHom{J_q}$ is not \textsc{\#BIS}-easy for any $q>2$. If so, $J_3$ would be the smallest tree with this property. It is interesting that, for fixed~$q$, a continuously parameterised class of permissive problems can be shown to be computationally equivalent to a single counting problem with hard constraints. Suppose, for example, that we wanted to investigate the possibility that computing the partition function of the $q$-state ferromagnetic Potts model formed a hierarchy of problems of increasing complexity with increasing~$q$. We could equivalently investigate the sequence of problems $\nHom{J_q}$, which seems intuitively to be an easier proposition. We start with some definitions. Let $q$ be a positive integer. The $q$-state Potts model is a statistical mechanical model of Potts~\cite{Potts} which generalises the classical Ising model from two to $q$~spins. In this model, spins interact along edges of a graph~$G=(V,E)$. The strength of each interaction is governed by a parameter~$\gamma$ (a real number which is always at least~$-1$, and is greater than~$0$ in the \emph{ferromagnetic} case which we study, where like spins attract each other). The $q$-state Potts partition function is defined as follows. 
\begin{equation}\label{eq:PottsGph} Z_\mathrm{Potts}(G;q,\gamma) = \sum_{\sigma:V\rightarrow [q]} \prod_{e=\{u,v\}\in E} \big(1+\gamma\,\delta(\sigma(u) ,\sigma(v))\big), \end{equation} where $\delta(s,s')$ is~$1$ if $s=s'$, and is~$0$ otherwise. The Potts partition function is well-studied. In addition to the complexity-theory literature mentioned below, we refer the reader to Sokal's survey~\cite{Sokal05}. In order to state our results in the strongest possible form, we use the notion of ``efficiently approximable real number'' from Section~\ref{sec:prelim}. Recall that a real number $\gamma$ is efficiently approximable if there is an FPRAS for the problem of computing it. The notion of ``efficiently approximable'' is not important to the constructions below --- the reader who prefers to assume that the parameters are rational will still appreciate the essence of the reductions. Let $q$ be a positive integer and let $\gamma$ be a positive efficiently approximable real. Consider the following computational problem, which is parameterised by~$q$ and~$\gamma$. \begin{description} \item[Problem] $\textsc{Potts}(q,\gamma)$. \item[Instance] A graph $G=(V,E)$. \item[Output] $Z_\mathrm{Potts}(G;q,\gamma)$. \end{description} This problem may be defined more generally for non-integers~$q$ via the Tutte polynomial. We will use some results from \cite{FerroPotts} which are more general, but we do not need the generality here. In an important paper, Jaeger, Vertigan and Welsh~\cite{JVW90} examined the problem of evaluating the Tutte polynomial. Their result gave a complete classification of the computational complexity of $\textsc{Potts}(q,\gamma)$. For every fixed positive integer~$q$, apart from the trivial case $q=1$, and for every fixed~$\gamma$, they showed that this computational problem is \#P-hard. When $q=1$ and $\gamma$ is rational, $Z_\mathrm{Potts}(G;q,\gamma)$ can easily be exactly evaluated in polynomial time.
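As a concrete check of definition~(\ref{eq:PottsGph}), the partition function can be evaluated by brute force on tiny graphs. The following sketch (exponential time, for illustration only; the function name is ours) computes it for a triangle.

```python
from itertools import product

# Brute-force evaluation of Z_Potts(G; q, gamma) directly from the
# definition: sum over all spin assignments sigma : V -> [q] of the
# product over edges of (1 + gamma * [sigma(u) == sigma(v)]).
# Exponential time; a sketch for illustration only.
def potts_Z(n, edges, q, gamma):
    total = 0
    for sigma in product(range(q), repeat=n):
        weight = 1
        for (u, v) in edges:
            weight *= 1 + gamma * (sigma[u] == sigma[v])
        total += weight
    return total

triangle = [(0, 1), (1, 2), (0, 2)]
print(potts_Z(3, triangle, q=2, gamma=1))  # 28 (the Ising case q = 2)
print(potts_Z(3, triangle, q=1, gamma=1))  # 8 = (1 + gamma)^{|E|}
```

For the triangle with $q=2$, $\gamma=1$: the $2$ monochromatic assignments contribute $2^3=8$ each and the $6$ remaining assignments contribute $2$ each, giving $28$; for $q=1$ the single assignment gives $(1+\gamma)^{|E|}=8$, matching the trivial case noted above.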
The complexity of the approximation problem has also been partially resolved. In the positive direction, Jerrum and Sinclair~\cite{JS93} gave an FPRAS for the case $q=2$. In the negative direction, Goldberg and Jerrum~\cite{FerroPotts} showed that approximation is $\textsc{\#BIS}$-hard for every fixed $q>2$. They left open the question of whether approximating $Z_\mathrm{Potts}(G;q,\gamma)$ is as easy as $\textsc{\#BIS}$ (or whether it might be even harder). In this paper, we show that the approximation problem is equivalent in complexity to a tree homomorphism problem. In particular, we show that $\textsc{Potts}(q,\gamma)$ is AP-equivalent to the problem of approximately counting homomorphisms to the tree~$J_q$. We first give an AP-reduction from $\textsc{Potts}(q,1)$ to $\nHom{J_q}$. \begin{lemma} \label{lem:tocol} Let $q>2$ be a positive integer. $$\textsc{Potts}(q,1) \leq_\mathrm{AP} \nHom{J_q}.$$ \end{lemma} \begin{proof} Let $G$ be an instance of $\textsc{Potts}(q,1)$. We can assume without loss of generality that $G$ is connected, since it is clear from~(\ref{eq:PottsGph}) that a graph~$G$ with connected components $G_1,\ldots,G_\kappa$ satisfies $Z_\mathrm{Potts}(G;q,\gamma)=\prod_{i=1}^{\kappa} Z_\mathrm{Potts}(G_i;q,\gamma)$. Let $G'$ be the graph with $$V(G') = V(G) \cup E(G)$$ and $$E(G') = \{(u,e) \mid u\in V(G), e \in E(G), \mbox{and $u$ is an endpoint of~$e$} \}.$$ $G'$ is sometimes referred to as the ``$2$-stretch'' of~$G$. For clarity, when we consider an element $e\in E(G)$ as a vertex of $G'$ (rather than an edge of $G$), we shall refer to it as the ``midpoint vertex corresponding to edge~$e$''. Let $s$ be an integer satisfying \begin{equation} \label{eq:firsts} 8 q {(q+1)}^{|V(G)|+|E(G)|} \leq {\left(\frac{q}{2}\right)}^s . \end{equation} For concreteness, take $s$ to be the smallest integer satisfying (\ref{eq:firsts}). The exact size of~$s$ is not so important. 
The calculation below relies on the fact that $s$ is large enough to satisfy~(\ref{eq:firsts}). On the other hand, $s$ must be at most a polynomial in the size of~$G$, to make the reduction feasible. We will construct an instance~$G''$ of $\nHom{J_q}$ by adding some gadgets to~$G'$. Fix a vertex $v\in V(G)$. Let $G''$ be the graph with $V(G'')=V(G) \cup E(G) \cup \{v_0,\ldots,v_s\}$ and $E(G'') = E(G') \cup \{(v,v_0)\} \cup \{(v_0,v_i) \mid i\in [s]\}$. See Figure~\ref{fig:firstinstance}. \begin{figure} \caption{The instance~$G''$. The thick curved line between $V(G)$ and $E(G)$ indicates that the edges in~$E(G')$ go between elements of~$V(G)$ and elements of~$E(G)$, but these are not shown. } \label{fig:firstinstance} \end{figure} We say that a homomorphism~$\sigma$ from~$G''$ to~$J_q$ is \emph{typical} if $\sigma(v_0)=w$. Note that, in a typical homomorphism, every vertex in $V(G)$ is mapped by~$\sigma$ to one of the colours from $\{c'_1,\ldots,c'_q\}$. Let $Z_{J_q}^t(G'')$ denote the number of typical homomorphisms from~$G''$ to~$J_q$. Given a mapping $\sigma: V(G) \rightarrow \{c'_1,\ldots,c'_q\}$, the number of typical homomorphisms which induce this mapping is $2^{\mathrm{mono}(\sigma)} q^s$, where $\mathrm{mono}(\sigma)$ is the number of edges $e\in E(G)$ whose endpoints in $V(G)$ are mapped to the same colour by~$\sigma$. (To see this, note that there are two possible colours for the midpoint vertices corresponding to such edges, whereas the other midpoint vertices have to be mapped to~$w$ by~$\sigma$. Also, there are $q$ possible colours for each vertex in $\{v_1,\ldots,v_s\}$.) Thus, using the definition~(\ref{eq:PottsGph}), we conclude that $$Z_{J_q}^t(G'') = \sum_{\sigma: V(G) \rightarrow \{c'_1,\ldots,c'_q\}} 2^{\mathrm{mono}(\sigma)} q^s = q^s Z_\mathrm{Potts}(G;q,1).$$ The number of atypical homomorphisms from~$G''$ to~$J_q$, which we denote by $Z_{J_q}^a(G'')$, is at most $2q 2^s {(q+1)}^{|V(G)|+|E(G)|}$. 
(To see this, note that there are $2q$ alternative colours for~$v_0$. For each of these, there are at most~$2$ colours for each vertex in $\{v_1,\ldots,v_s\}$ and at most $q+1$ colours for each vertex in $V(G)\cup E(G)$.) Using Equation~(\ref{eq:firsts}), we conclude that $Z_{J_q}^a(G'') \leq q^s/4$. Since $Z_{J_q}(G'') = Z_{J_q}^t(G'') + Z_{J_q}^a(G'')$, we have \begin{equation} \label{done} Z_\mathrm{Potts}(G;q,1) \leq \frac{Z_{J_q}(G'')}{q^s} \leq Z_\mathrm{Potts}(G;q,1) + \frac{1}{4}. \end{equation} Equation~(\ref{done}) guarantees that the construction is an AP-reduction from $\textsc{Potts}(q,1)$ to the problem $\nHom{J_q}$. To determine the accuracy with which $Z_{J_q}(G'')$ should be approximated in order to achieve a given desired accuracy in the approximation to $Z_\mathrm{Potts}(G;q,1)$, see the proof of Theorem 3 of \cite{APred}. \end{proof} In order to get a reduction going in the other direction, we need to generalise the Potts partition function to a hypergraph version. Let $\mathcal{H}=(\mathcal{V},\mathcal{E})$ be a hypergraph with vertex set $\mathcal{V}$ and hyperedge (multi)set~$\mathcal{E}$. Let $q$ be a positive integer. The $q$-state Potts partition function of $\mathcal{H}$ is defined as follows: $$ Z_\mathrm{Potts}(\mathcal{H};q,\gamma) = \sum_{\sigma:\mathcal{V}\rightarrow [q]} \prod_{f\in\mathcal{E}} \big(1+\gamma\, \delta(\{\sigma(v) \mid v\in f\})\big),$$ where $\delta(S)$ is~$1$ if the set $S$ is a singleton and is~$0$ otherwise. Let $q$ be a positive integer and let $\gamma$ be a positive efficiently approximable real. We consider the following computational problem, which is parameterised by~$q$ and~$\gamma$. \begin{description} \item[Problem] $\textsc{HyperPotts}(q,\gamma)$. \item[Instance] A hypergraph $\mathcal{H}=(\mathcal{V},\mathcal{E})$. \item[Output] $Z_\mathrm{Potts}(\mathcal{H};q,\gamma)$.
\end{description} We start by reducing $\nHom{J_q}$ to the problem of approximating the Potts partition function of a hypergraph with parameters~$q$ and~$1$. \begin{lemma}\label{lem:fromcol} Let $q$ be a positive integer. $$\nHom{J_q} \leq_\mathrm{AP} \textsc{HyperPotts}(q,1).$$ \end{lemma} \begin{proof} We can assume without loss of generality that the instance of $\nHom{J_q}$ is bipartite, since otherwise the output is zero. We can also assume that it is connected since a graph~$G$ with connected components $G_1,\ldots,G_\kappa$ satisfies $Z_{J_q}(G) = \prod_{i=1}^\kappa Z_{J_q}(G_i)$. Finally, it is easy to find a bipartition of a connected bipartite graph in polynomial time, so we can assume without loss of generality that this is provided as part of the input. Let $B=(U,V,E)$ be a connected instance of $\nHom{J_q}$ consisting of vertex sets~$U$ and~$V$ and edge set $E$ (a subset of $U\times V$). Let $Z_{J_q}^U(B)$ be the number of homomorphisms from $B$ to $J_q$ in which vertices in~$U$ are coloured with colours in~$\{c'_1,\ldots,c'_q\}$. Similarly, let $Z_{J_q}^V(B)$ be the number of homomorphisms from $B$ to $J_q$ in which vertices in~$V$ are coloured with colours in~$\{c'_1,\ldots,c'_q\}$. Clearly, $Z_{J_q}(B) = Z_{J_q}^U(B) + Z_{J_q}^V(B)$. We will show how to approximate $Z_{J_q}^U(B)$ using an approximation oracle for $\textsc{HyperPotts}(q,1)$. The approximation of $Z_{J_q}^V(B)$ is similar. The construction is straightforward. For every $v\in V$, let $\Gamma(v)$ denote the set of neighbours of vertex~$v$ in~$B$. Let $F$ be the multiset $\{\Gamma(v) \mid v\in V\}$, with one hyperedge for each $v\in V$. Let $H=(U,F)$ be an instance of $\textsc{HyperPotts}(q,1)$. The reduction is immediate, because $Z_{J_q}^U(B) = Z_\mathrm{Potts}(H;q,1)$. To see this, note that every configuration $\sigma: U \rightarrow \{c'_1,\ldots,c'_q\}$ contributes weight $2^{\mathrm{mono}(\sigma)}$ to $Z_\mathrm{Potts}(H;q,1)$, where ${\mathrm{mono}(\sigma)}$ is the number of hyperedges in~$F$ that are monochromatic in~$\sigma$.
Also, the configuration~$\sigma$ can be extended in exactly $2^{\mathrm{mono}(\sigma)}$ ways to homomorphisms from~$B$ to~$J_q$. \end{proof} The next step is to reduce the problem of approximating the Potts partition function of a hypergraph to the problem of approximating the Potts partition function of a \emph{uniform} hypergraph, which is a hypergraph in which all hyperedges have the same size. The reason for this step is that the paper \cite{FerroPotts} shows how to reduce the latter to the approximation of the Potts partition function of a \emph{graph}, which is the desired target of our reduction. Let $q$ be a positive integer and let $\gamma$ be a positive efficiently approximable real. We consider the following computational problem, which, like $\textsc{HyperPotts}(q,\gamma)$, is parameterised by~$q$ and~$\gamma$. \begin{description} \item[Problem] $\textsc{UniformHyperPotts}(q,\gamma)$. \item[Instance] A uniform hypergraph $\mathcal{H}=(\mathcal{V},\mathcal{E})$. \item[Output] $Z_\mathrm{Potts}(\mathcal{H};q,\gamma)$. \end{description} We will actually only use the following lemma with $\gamma=1$, but we state and prove the more general version, since it is no more difficult. \begin{lemma}\label{lem:touniform} Let $q$ be a positive integer and let $\gamma$ be a positive efficiently approximable real. Then $$\textsc{HyperPotts}(q,\gamma) \leq_\mathrm{AP} \textsc{UniformHyperPotts}(q,\gamma).$$ \end{lemma} \begin{proof} Let $\mathcal{H}=(\mathcal{V},\mathcal{E})$ be an instance of $\textsc{HyperPotts}(q,\gamma)$ with $|\mathcal{V}|=n$ and $|\mathcal{E}|=m$ and $\max(|f| \mid f \in \mathcal{E})=t$.
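(As an aside, the hypergraph partition function $Z_\mathrm{Potts}(\mathcal{H};q,\gamma)$ defined above can be checked by brute force on very small instances. The sketch below is illustrative only — it runs in time exponential in $|\mathcal{V}|$, and the function name is our own.)

```python
from itertools import product

def potts_hypergraph(vertices, hyperedges, q, gamma):
    """Brute-force Z_Potts(H; q, gamma): each hyperedge contributes a
    factor (1 + gamma) when monochromatic under sigma, and 1 otherwise."""
    total = 0.0
    for colours in product(range(q), repeat=len(vertices)):
        sigma = dict(zip(vertices, colours))
        weight = 1.0
        for f in hyperedges:  # hyperedges may repeat (multiset)
            if len({sigma[v] for v in f}) == 1:
                weight *= 1 + gamma
        total += weight
    return total

# Ordinary graphs are the 2-uniform case.  A single edge with q = 3 and
# gamma = 1 has 3 monochromatic colourings (weight 2) and 6 others (weight 1).
print(potts_hypergraph(["u", "v"], [("u", "v")], q=3, gamma=1))  # -> 12.0
```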
Let~$s$ be any positive integer that is at least $$ \frac{\log(4 q^{n+m(t-1)}{(1+\gamma)}^m)} {\log(1+\gamma)}.$$ As with our other reductions, the exact value of~$s$ is not important: it suffices that $s$ satisfies the above inequality, is bounded from above by a polynomial in~$n$ and~$m$, and can be computed in polynomial time (as a function of~$n$ and~$m$). An appropriate~$s$ can be readily computed by computing crude upper and lower bounds for~$\gamma$ and evaluating different values of~$s$ one-by-one to find one that is sufficiently large, in terms of these bounds. For every hyperedge $f\in\mathcal{E}$, fix some vertex $v_f\in f$. Introduce new vertices $\{u_{f,i}\mid f\in\mathcal{E},i\in[t-1]\}$, and let $\mathcal{V}' = \mathcal{V} \cup \{u_{f,i}\mid f\in\mathcal{E}, i\in[t-1]\}$. Let $$\mathcal{E}' = \Big\{ f \cup \big\{u_{f,i}\bigm| i\in[\, t-|f|\, ]\big\} \Bigm| f\in\mathcal{E} \Big\} \cup \Big\{ \{v_f,u_{f,1},\ldots,u_{f,t-1}\} \times [s] \Bigm| f \in \mathcal{E} \Big\}.$$ That is, the multiset $\mathcal{E}'$ has $s$ copies of the hyperedge $\{v_f,u_{f,1},\ldots,u_{f,t-1}\}$ and one copy of the hyperedge $f \cup \{u_{f,i}\mid i\in[t-|f|\,]\}$ for each hyperedge $f\in \mathcal{E}$. Let $\mathcal{H}' = (\mathcal{V}', \mathcal{E}')$. Note that $\mathcal{H}'$ is $t$-uniform. Now, the total contribution to $Z_\mathrm{Potts}(\mathcal{H}';q,\gamma)$ from configurations~$\sigma$ which are monochromatic on every hyperedge $\{v_f,u_{f,1},\ldots,u_{f,t-1}\}$ is exactly $Z_\mathrm{Potts}(\mathcal{H};q,\gamma) {(1+\gamma)}^{s m}$. Also, the total contribution to $Z_\mathrm{Potts}(\mathcal{H}';q,\gamma)$ from any other configurations~$\sigma$ is at most $q^{n+m(t-1)} {(1+\gamma)}^{m} {(1+\gamma)}^{s(m-1)}$, since there are at most $q^{n+m(t-1)}$ such configurations and $\gamma>0$.
So \begin{align*} Z_\mathrm{Potts}(\mathcal{H};q,\gamma) \leq \frac{Z_\mathrm{Potts}(\mathcal{H}';q,\gamma)}{{(1+\gamma)}^{s m}} &\leq Z_\mathrm{Potts}(\mathcal{H};q,\gamma) + \frac{q^{n+m(t-1)} {(1+\gamma)}^{m}} { {(1+\gamma)}^s}\\ &\leq Z_\mathrm{Potts}(\mathcal{H};q,\gamma) + \frac14, \end{align*} which completes the reduction. \end{proof} Finally, we are ready to put together the pieces to show that, for every integer $q>2$, the problem of approximating the Potts partition function is equivalent to a tree homomorphism problem. \begin{theorem} Let $q>2$ be a positive integer and let $\gamma$ be a positive efficiently approximable real. Then $\textsc{Potts}(q,\gamma)\equiv_\mathrm{AP} \nHom{J_q}$. \label{thm:junction} \end{theorem} \begin{proof} We start by establishing the reduction from $\nHom{J_q}$ to $\textsc{Potts}(q,\gamma)$. By Lemmas \ref{lem:fromcol} and~\ref{lem:touniform}, $$\nHom{J_q}\leq_\mathrm{AP}\textsc{HyperPotts}(q,1)\leq_\mathrm{AP}\textsc{UniformHyperPotts}(q,1).$$ To complete the sequence of reductions we need to know that the last problem is reducible to $\textsc{Potts}(q,\gamma)$. Fortunately, this step already appears in the literature in a slightly different guise, so we just need to explain how to translate the terminology from the earlier result to the current setting. For every positive integer~$q$, the partition function $Z_\mathrm{Potts}(\mathcal{H};q,\gamma)$ of the Potts model on hypergraphs is equal to the \emph{Tutte polynomial} $Z_\mathrm{Tutte}(\mathcal{H};q,\gamma)$ (whose definition we will not need here). This equality is proved in \cite[Observation 2.1]{FerroPotts}, using the same basic line of argument that Fortuin and Kasteleyn~\cite{FK} used in the graph case.
Furthermore, for $q>2$, Lemmas~9.1 and~10.1 of \cite{FerroPotts} reduce the problem of approximating the Tutte partition function $Z_\mathrm{Tutte}(\mathcal{H};q,1)$, where $\mathcal{H}$ is a \emph{uniform hypergraph}, to that of approximating the Tutte partition function $Z_\mathrm{Tutte}(G;q,\gamma)$, where $G$ is a \emph{graph}. Given the equivalence between $Z_\mathrm{Tutte}(G;q,\gamma)$ and $Z_\mathrm{Potts}(G;q,\gamma)$ mentioned earlier, we see that $$\textsc{UniformHyperPotts}(q,1)\leq_\mathrm{AP}\textsc{Potts}(q,\gamma),$$ completing the chain of reductions. For the other direction, we will establish an AP-reduction from $\textsc{Potts}(q,\gamma)$ to the problem $\nHom{J_q}$. To start, we note that since a graph is a special case of a uniform hypergraph, Lemmas~9.1 and 10.1 of \cite{FerroPotts} give an AP-reduction from $\textsc{Potts}(q,\gamma)$ to $\textsc{Potts}(q,1)$. (It is definitely not necessary to go via hypergraphs for this reduction, but here it is easier to use the stated result than to repeat the work.) Finally, Lemma~\ref{lem:tocol} shows that $\textsc{Potts}(q,1) \leq_\mathrm{AP} \nHom{J_q}$. \end{proof} \section{Inapproximability of counting tree homomorphisms} \label{sec:hard} Until now, it was not known whether or not a bipartite graph~$H$ exists for which approximating $\nHom H$ is \textsc{\#Sat}-hard. It is perhaps surprising, then, to discover that $\nHom H$ may be \textsc{\#Sat}-hard even when $H$ is a tree. However, the hardness result from Section~\ref{sec:weighted} provides a clue. There it was shown that the weighted version $\wHom{H}$ is \textsc{\#Sat}-hard whenever $H$ is a tree containing $J_3$ as an induced subgraph. If we were able to construct a tree~$H$, containing $J_3$, that is able, at least in some limited sense, to simulate vertex weights, then we might obtain a reduction from $\wHom{J_3}$ to $\nHom{H}$. That is roughly how we proceed in this section. 
We will obtain our hard tree~$H$ by ``decorating'' the leaves of~$J_3$. These decorations will match certain structures in the instance~$G$, so that particular distinguished vertices in $G$ will preferentially be coloured with particular colours. Carrying through this idea requires~$H$ to have a certain level of complexity, and the tree~$J_3^*$ that we actually use (see Figure~\ref{fig:JS}) is about the smallest for which this approach works. Presumably the same approach could also be applied starting at $J_q$, for $q>3$. It is possible that there are trees~$H$ that are much smaller than~$J_3^*$ for which $\nHom{H}$ is \textsc{\#Sat}-hard. It is even possible that $\nHom{J_3}$ is \textsc{\#Sat}-hard. But demonstrating this would require new ideas. Define vertex sets \begin{align*} X &=\{x_0,x_1\} \cup \{x_{2,i}\mid i\in[5]\} ,\\ Y &= \{y_0,y_1\} \cup \{y_{2,i}\mid i\in[4]\} \cup \{y_{3,i,j}\mid i\in[4],j\in[3]\}, \\ Z &= \{z_0,z_1\} \cup \{z_{2,i}\mid i\in[3]\} \cup \{z_{3,i,j}\mid i\in[3],j\in[3]\} \cup \{z_{4,i,j,k}\mid i\in[3],j\in[3],k\in[2]\}, \end{align*} and edge sets \begin{align*} E_X &=\{(x_0,x_1)\} \cup \{(x_1,x_{2,i})\mid i\in[5]\} ,\\ E_Y &= \{(y_0,y_1)\} \cup \{(y_1,y_{2,i})\mid i\in[4]\} \cup \{(y_{2,i},y_{3,i,j})\mid i\in[4],j\in[3]\} ,\\ E_Z &= \{(z_0,z_1)\} \cup \{(z_1,z_{2,i})\mid i\in[3]\} \cup \{(z_{2,i},z_{3,i,j})\mid i\in[3],j\in[3]\} \\ &\qquad\null\cup \{ ( z_{3,i,j},z_{4,i,j,k})\mid i\in[3],j\in[3],k\in[2]\}. \end{align*} Let $J_3^*$ be the tree with vertex set $V(J_3^*)=\{w\} \cup X \cup Y \cup Z$ and edge set $$E(J_3^*)=\{(w,x_0),(w,y_0),(w,z_0)\} \cup E_X \cup E_Y \cup E_Z.$$ See Figure~\ref{fig:JS}. Consider the equivalence relation on $V(J_3^*)$ defined by graph isomorphism --- two vertices of~$J_3^*$ are in the same equivalence class if there is an isomorphism of~$J_3^*$ mapping one to the other. 
The canonical representatives of the equivalence classes are the vertices $w$, $x_0$, $x_1$, $x_{2,1}$, $y_0$, $y_1$, $y_{2,1}$, $y_{3,1,1}$, $z_0$, $z_1$, $z_{2,1}$, $z_{3,1,1}$ and $z_{4,1,1,1}$. These are shown in the figure. \begin{figure} \caption{The tree $J_3^*$.} \label{fig:JS} \end{figure} In this section, we will show that $\textsc{\#Sat}$ is AP-reducible to~$\nHom{J_3^*}$. We start by identifying relevant structure in~$J_3^*$. A simple path in a graph is a path in which no vertices are repeated. For every vertex~$h$ of~$J_3^*$, and every positive integer~$k$, let $d_k(h)$ be the number of simple length-$k$ paths from~$h$. The values $d_1(h)$, $d_2(h)$ and $d_3(h)$ can be calculated for each canonical representative $h\in V(J_3^*)$ by inspecting the definition of~$J_3^*$ (or its drawing in Figure~\ref{fig:JS}). These values are recorded in the first four columns of the table in Figure~\ref{JStable}. \begin{figure} \caption{ For each canonical representative $h\in V(J_3^*)$, we record the values of $w_1(h)=d_1(h)$, $w_2(h)=d_1(h)+d_2(h)$ and $w_3(h)=d_1(h)^2+d_2(h)+d_3(h)$.} \label{JStable} \end{figure} Now let $w_k(h)$ denote the number of length-$k$ walks from~$h$ in~$J_3^*$. Clearly, $w_1(h)=d_1(h)$ since $J_3^*$ has no self-loops, so all length-$1$ walks are simple paths. Next, note that $w_2(h)=d_1(h) + d_2(h)$. To see this, note that every length-$2$ walk from~$h$ is either a simple length-$2$ path from~$h$, or it is a walk obtained by taking an edge from~$h$ and then going back to~$h$. Finally, $w_3(h) = d_1(h)^2 + d_2(h)+d_3(h)$ since every length-$3$ walk from~$h$ is one of the following: \begin{itemize} \item a simple length-$3$ path from~$h$, \item a simple length-$2$ path from~$h$, with the last edge repeated in reverse, or \item a simple length-$1$ path from~$h$ with the last edge repeated in reverse, followed by another simple length-$1$ path from~$h$.
\end{itemize} These values are recorded, for each canonical representative $h\in V(J_3^*)$, in the last three columns of the table in Figure~\ref{JStable}. The important fact that we will use is that $w_1(h)$ is uniquely maximised at $h=x_1$, $w_2(h)$ is uniquely maximised at $h=y_1$, and $w_3(h)$ is uniquely maximised at $h=z_1$. (These are shown in boldface in the table.) We are now ready to prove the following theorem. \begin{theorem} \label{thm:hardH} $\textsc{\#Sat} \leq_\mathrm{AP} \nHom{J_3^*}$. \end{theorem} \begin{proof} By Lemma~\ref{lem:cut}, it suffices to give an AP-reduction from \MultiCutCount{$3$} to $\nHom{J_3^*}$. The basic construction follows the outline of the reduction developed in the proof of Lemma~\ref{lem:hardweighted}. However, unlike the situation of Lemma~\ref{lem:hardweighted}, the target problem $\nHom{J_3^*}$ does not include weights, so we must develop gadgetry to simulate the role of these. Let $b$, $G=(V,E)$, $\alpha$, $\beta$ and $\gamma$ be an input to \MultiCutCount{$3$}. Let $s= 3 + |E(G)|+2|V(G)|$. (As before, the exact value of~$s$ is not important, but it has to be at least this big to make the calculation work, and it has to be at most a polynomial in the size of~$G$.) Let $G'$ be the graph defined in the proof of Lemma~\ref{lem:hardweighted}: setting $V'(G)= \{(e,i) \mid e\in E(G),i\in[s]\}$, the graph $G'$ has vertex set $V(G') = V(G) \cup V'(G)$ and edge set $$E(G') = \{(u,(e,i)) \mid u\in V(G), (e,i)\in V'(G), \mbox{and $u$ is an endpoint of~$e$} \}.$$ Now let $r$ be any positive integer such that \begin{equation} \label{eq:r} {\left( \frac{46}{40} \right)}^r \geq 8 {|V(J_3^*)|}^{|V(G)|+ s |E(G)| + 7}. \end{equation} For concreteness, take $r$ to be the smallest integer satisfying (\ref{eq:r}). Once again, the exact value of~$r$ is not so important. Any~$r$ would work as long as it is at most a polynomial in the size of~$G$ and it satisfies (\ref{eq:r}).
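The walk counts above, and the claim that $x_1$, $y_1$ and $z_1$ are the unique maximisers of $w_1$, $w_2$ and $w_3$ respectively, can also be double-checked mechanically. Here is a small brute-force sketch (illustrative only; the vertex names are ad hoc encodings of the definition of $J_3^*$):

```python
def build_j3_star():
    """Adjacency lists for J_3^*, following the vertex and edge sets above."""
    edges = [("w", "x0"), ("w", "y0"), ("w", "z0"),
             ("x0", "x1"), ("y0", "y1"), ("z0", "z1")]
    edges += [("x1", f"x2_{i}") for i in range(1, 6)]
    edges += [("y1", f"y2_{i}") for i in range(1, 5)]
    edges += [(f"y2_{i}", f"y3_{i}_{j}") for i in range(1, 5) for j in range(1, 4)]
    edges += [("z1", f"z2_{i}") for i in range(1, 4)]
    edges += [(f"z2_{i}", f"z3_{i}_{j}") for i in range(1, 4) for j in range(1, 4)]
    edges += [(f"z3_{i}_{j}", f"z4_{i}_{j}_{k}")
              for i in range(1, 4) for j in range(1, 4) for k in range(1, 3)]
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    return adj

def walk_counts(adj, k):
    """w_k(h): number of length-k walks starting at h (row sums of A^k)."""
    w = {h: 1 for h in adj}  # one length-0 walk from each vertex
    for _ in range(k):
        w = {h: sum(w[u] for u in adj[h]) for h in adj}
    return w

adj = build_j3_star()
for k, best in [(1, "x1"), (2, "y1"), (3, "z1")]:
    w = walk_counts(adj, k)
    assert max(w.values()) == w[best]
    assert sum(1 for h in adj if w[h] == w[best]) == 1  # unique maximiser
print(walk_counts(adj, 1)["x1"], walk_counts(adj, 2)["y1"],
      walk_counts(adj, 3)["z1"])  # -> 6 18 46
```

In particular $w_1(x_1)=6$, $w_2(y_1)=18$ and $w_3(z_1)=46$, matching the factors $6^r$, $18^r$ and $46^r$ in the counting argument.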
We will construct an instance~$G''$ of $\nHom{J_3^*}$ by adding some gadgets to~$G'$. First, we define the gadgets. \begin{itemize} \item Let $\Gamma_{x}$ be a graph with vertex set $V(\Gamma_{x}) = \{ v_{x_1} \} \cup \bigcup_{i\in[r]} \{v_{x,i}\}$ and edge set $E(\Gamma_x) = \bigcup_{i\in [r]} \{(v_{x_1},v_{x,i})\}$. \item Let $\Gamma_y$ be a graph with vertex set $V(\Gamma_y) = \{v_{y_1}\} \cup \bigcup_{i\in[r]} \{v_{y,i},v'_{y,i}\} $ and edge set $E(\Gamma_y) = \bigcup_{i\in [r]} \{(v_{y_1},v_{y,i}),(v_{y,i},v'_{y,i}) \}$. \item Let $\Gamma_z$ be a graph with vertex set $V(\Gamma_z) = \{v_{z_1}\} \cup \bigcup_{i\in[r]} \{ v_{z,i}, v'_{z,i}, v''_{z,i} \} $ and edge set $E(\Gamma_z) = \bigcup_{i\in [r]} \{ (v_{z_1},v_{z,i}),(v_{z,i},v'_{z,i}),(v'_{z,i},v''_{z,i}) \}$. \end{itemize} Finally, let $$V(G'') = V(G') \cup \{v_w,v_{x_0},v_{y_0},v_{z_0}\} \cup V(\Gamma_x) \cup V(\Gamma_y) \cup V(\Gamma_z),$$ and \begin{align*} E(G'') &= \{ (v_w,v_{x_0}),(v_w,v_{y_0}),(v_w,v_{z_0}), (v_{x_0},v_{x_1}),(v_{y_0},v_{y_1}),(v_{z_0},v_{z_1}), (v_{x_1},\alpha),(v_{y_1},\beta),(v_{z_1},\gamma) \} \\ & \cup E(G') \cup \{(v_w,v) \mid v\in V(G)\} \cup E(\Gamma_x) \cup E(\Gamma_y) \cup E(\Gamma_z). \end{align*} A picture of the instance $G''$ is shown in Figure~\ref{fig:GInstance}. \begin{figure} \caption{The instance~$G''$. The thick curved line between $V(G)$ and $V'(G)$ indicates that the edges in~$E(G')$ go between vertices in~$V(G)$ and vertices in~$V'(G)$, but these are not shown. Vertex $v_w$ is connected to each vertex in~$V(G)$. } \label{fig:GInstance} \end{figure} We say that a homomorphism~$\sigma$ from~$G''$ to~$J_3^*$ is \emph{typical} if $\sigma(v_{x_1})=x_1$, $\sigma(v_{y_1})=y_1$, and $\sigma(v_{z_1})=z_1$. Note that, in a typical homomorphism, $\sigma(v_w)=w$, so $\sigma(V(G))=\{x_0,y_0,z_0\}$ and $\sigma(V'(G)) \subseteq \{w,x_1,y_1,z_1\}$. Also, $\sigma(\alpha)=x_0$, $\sigma(\beta)=y_0$, and $\sigma(\gamma)=z_0$.
If $\sigma$ is a typical homomorphism, then let \begin{align*} \bichrom{\sigma} = \{ e \in E(G) \mid \quad & \mbox{the vertices of~$V(G)$ corresponding to } \\ & \mbox{the endpoints of~$e$ are mapped to different colours by~$\sigma$} \}.\end{align*} Note that, for every typical homomorphism~$\sigma$, $\bichrom{\sigma}$ is a multiterminal cut for the graph~$G$ with terminals~$\alpha$, $\beta$ and~$\gamma$. For every multiterminal cut $E'$ of~$G$, let $\components{E'}$ denote the number of components in the graph $(V,E\setminus E')$. For each multiterminal cut~$E'$, let $Z_{E'}$ denote the number of typical homomorphisms~$\sigma$ from~$G''$ to~$J_3^*$ such that $\bichrom{\sigma} = E'$. As in the proof of Lemma~\ref{lem:hardweighted}, $\components{E'}\geq 3$. If $\components{E'}=3$ then $$Z_{E'} = 2^{s|E(G)-E'|} 6^r 18^r 46^r = 2^{s|E(G)-E'|} 4968^r.$$ The $2^{s|E(G)-E'|}$ comes from the two choices for the colour of each vertex $(e,i)$ with $e\in E(G)-E'$, as before. The $6^r$ comes from the choices for the vertices in $V(\Gamma_x)\setminus \{v_{x_1}\}$ according to column~5 of the table in Figure~\ref{JStable}. The $18^r$ comes from the choices for the vertices in $V(\Gamma_y)\setminus \{v_{y_1}\}$ (in column~6) and the $46^r$ comes from the choices for the vertices in $V(\Gamma_z)\setminus \{v_{z_1}\}$ (in column~7). Also, for any multiterminal cut $E'$ of~$G$, $$Z_{E'} \leq 2^{s|E(G)-E'|} 3^{\components{E'}-3} 4968^r,$$ since in any typical homomorphism~$\sigma$, the component of~$\alpha$ is mapped to~$x_0$ by~$\sigma$, the component of~$\beta$ is mapped to~$y_0$, the component of~$\gamma$ is mapped to~$z_0$, and each remaining component is mapped to a colour in $\{x_0,y_0,z_0\}$. Let $Z^*= 2^{s(|E(G)|-b)} 4968^r$. If $E'$ has size~$b$ then $\components{E'}=3$. (Otherwise, there would be a smaller multiterminal cut, contrary to the definition of \MultiCutCount{$3$}.) So, in this case, \begin{equation} Z_{E'} = Z^*.
\label{eq:goodcuts} \end{equation} If $E'$ has size $b'>b$ then $$Z_{E'} \leq 2^{s(|E(G)|-b')} 3^{\components{E'}-3} 4968^r = 2^{-s(b'-b)} 3^{\components{E'}-3} Z^* \leq 2^{-s} 3^{|V(G)|} Z^*. $$ Clearly, there are at most $2^{|E(G)|}$ multiterminal cuts~$E'$. So, using the definition of~$s$, \begin{equation} \label{eq:bigcuts} \sum_{E' : |E'|>b} Z_{E'} \leq \frac{Z^*}{8}. \end{equation} Now let $Z^-$ denote the number of homomorphisms from~$G''$ to~$J_3^*$ that are not typical. Then $$ Z^- \leq |V(J_3^*)|^{|V(G)|+|V'(G)|+7 } {(40/46)}^r 4968^r, $$ since there are at most $|V(J_3^*)|$ colours for each of the vertices in $$V(G)\cup V'(G) \cup \{v_w,v_{x_0},v_{y_0},v_{z_0},v_{x_1},v_{y_1},v_{z_1}\}.$$ Also, given that the assignment to $v_{x_1}$, $v_{y_1}$ and $v_{z_1}$ is not precisely $x_1$, $y_1$ and $z_1$, respectively, it can be seen from the table in Figure~\ref{JStable} that the number of possibilities for the remaining vertices is at most $(40/46)^r$ times as large as it would otherwise have been. (For example, from the last column of the table, colouring $v_{z_1}$ with $y_1$ instead of with~$z_1$ would give exactly $40^r$ choices for the colours of the vertices in $\Gamma_z \setminus \{v_{z_1}\}$ instead of $46^r$ choices. The differences in the other columns are more substantial than this.) Since $|V'(G)|=s |E(G)|$, $$Z^- \leq {|V(J_3^*)|}^{|V(G)|+s|E(G)|+7} {(40/46)}^r 4968^r.$$ We can assume that $b\leq |E(G)|$ (otherwise, the number of size-$b$ multiterminal cuts is trivially~$0$), so from the definition of~$Z^*$, $$ Z^- \leq {|V(J_3^*)|}^{|V(G)|+s|E(G)|+7} {(40/46)}^r Z^*. $$ Using Equation~(\ref{eq:r}), we get \begin{equation} \label{eq:nocut} Z^- \leq \frac{Z^*}{8}.
\end{equation} From Equation~(\ref{eq:goodcuts}), we find that, if there are $N$ size-$b$ multiterminal cuts then $$Z_{J_3^*}(G'') = N Z^* + \sum_{E' : |E'|>b} Z_{E'} + Z^-.$$ So applying Equations (\ref{eq:bigcuts}) and (\ref{eq:nocut}), we get $$ N \leq \frac{Z_{J_3^*}(G'')}{Z^*} \leq N + \frac{1}{4}.$$ Thus, we have an AP-reduction from \MultiCutCount{$3$} to $\nHom{J_3^*}$. To determine the accuracy with which~$Z_{J_3^*}(G'')$ should be approximated in order to achieve a given accuracy in the approximation to~$N$, see the proof of Theorem 3 of \cite{APred}. \end{proof} \section{The Potts partition function and proper colourings of bipartite graphs} \label{sec:bqcol} Let $q$ be any integer greater than~$2$. Consider the following computational problem. \begin{description} \item[Problem] $\bqcol q$. \item[Instance] A bipartite graph $G$. \item[Output] The number of proper $q$-colourings of $G$. \end{description} Dyer et al.~\cite[Theorem 13]{APred} showed that $\textsc{\#BIS} \leq_\mathrm{AP} \bqcol q$. However, it may be the case that $\bqcol q$ is easier to approximate than $\textsc{\#Sat}$. Certainly, no AP-reduction from $\textsc{\#Sat}$ to $\bqcol q$ has been discovered (despite some effort!). Therefore, it seems worth recording the following upper bound on the complexity of $\nHom{J_q}$, which is an easy consequence of Theorem~\ref{thm:junction}. \begin{corollary}\label{cor:bqcol} Let $q>2$ be a positive integer. Then $\nHom{J_q} \leq_\mathrm{AP} \bqcol q$. \end{corollary} Corollary~\ref{cor:bqcol} follows immediately from Lemma~\ref{lem:bqcol} below by applying Theorem~\ref{thm:junction} with $\gamma=1/(q-2)$. \begin{lemma}\label{lem:bqcol} Let $q>2$ be a positive integer. Then $\textsc{Potts}(q,1/(q-2)) \leq_\mathrm{AP} \bqcol q$. \end{lemma} \begin{proof} Let $G=(V(G),E(G))$ be an input to $\textsc{Potts}(q,1/(q-2))$. Let $G'$ be the two-stretch of $G$ constructed as in the proof of Lemma~\ref{lem:tocol}.
In particular, $G'$ is the bipartite graph with $$V(G') = V(G) \cup E(G)$$ and $$E(G') = \{(u,e) \mid u\in V(G), e \in E(G), \mbox{and $u$ is an endpoint of~$e$} \}.$$ Consider an assignment $\sigma\colon V(G) \to [q]$ and an edge $e=(u,v)$ of $G$. If $\sigma(u)\neq \sigma(v)$ then there are $q-2$ ways to colour the midpoint vertex corresponding to~$e$ so that it receives a different colour from~$\sigma(u)$ and~$\sigma(v)$. However, if $\sigma(u)=\sigma(v)$ then there are $q-1$ possible colours for the midpoint vertex. Let $N$ denote the number of proper $q$-colourings of~$G'$. Then since $(q-1)/(q-2)-1=1/(q-2)$, we have $$ N = {(q-2)}^{|E(G)|} \sum_{\sigma:V(G)\rightarrow[q]} {\left(\frac{q-1}{q-2}\right)}^{\mathrm{mono}(\sigma)} = {(q-2)}^{|E(G)|} Z_\mathrm{Potts}(G;q, 1/(q-2)),$$ where $\mathrm{mono}(\sigma)$ is the number of edges $e\in E(G)$ whose endpoints in $V(G)$ are mapped to the same colour by~$\sigma$. \end{proof} \section{The Potts partition function and the weight enumerator of a code} \label{sec:we} A {\it linear code\/} $C$ of length $N$ over a finite field $\mathbb{F}_q$ is a linear subspace of $\mathbb{F}_q^N$. If the subspace has dimension~$r$ then the code may be specified by an $r\times N$ {\it generating matrix}~$M$ over~$\mathbb{F}_q$ whose rows form a basis for the code. For any real number $\lambda$, the weight enumerator of the code is given by $W_M(\lambda)=\sum_{w\in C}\lambda^{\|w\|}$, where $\|w\|$ is the number of non-zero entries in~$w$. ($\|w\|$ is usually called the {\it Hamming weight\/} of~$w$.) We consider the following computational problem, parameterised by $q$ and~$\lambda$. \begin{description} \item[Problem] $\WE q\lambda$. \item[Instance] A generating matrix $M$ over $\mathbb{F}_q$. \item[Output] $W_M(\lambda)$. \end{description} In \cite{WeightEnum}, the authors considered the special case $q=2$ and obtained various results on the complexity of $\WE 2\lambda$, depending on $\lambda$.
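For intuition, $W_M(\lambda)$ can be computed directly from the definition on tiny instances by enumerating the row space of~$M$. A brute-force sketch (prime fields only; the function name is our own):

```python
from itertools import product

def weight_enumerator(M, p, lam):
    """Brute-force W_M(lam) = sum over codewords of lam^(Hamming weight),
    where the code is spanned by the rows of M over the prime field F_p."""
    r, n = len(M), len(M[0])
    codewords = set()  # dedup is harmless when the rows form a basis
    for coeffs in product(range(p), repeat=r):
        codewords.add(tuple(
            sum(c * M[i][j] for i, c in enumerate(coeffs)) % p
            for j in range(n)))
    return sum(lam ** sum(1 for x in w if x != 0) for w in codewords)

# Length-3 binary repetition code: codewords 000 and 111, so
# W_M(lam) = 1 + lam^3.
print(weight_enumerator([[1, 1, 1]], p=2, lam=0.5))  # -> 1.125
```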
Here we show that, for any prime~$p$, $\WE p\lambda$ provides an upper bound on the complexity of $\textsc{Potts}(p^k,\gamma)$. \begin{theorem}\label{thm:PottsToWE} Suppose that $p$ is a prime, $k$ is a positive integer satisfying $p^k>2$ and $\lambda\in(0,1)$ is an efficiently computable real. Then $$\textsc{Potts}(p^k,1)\leq_\mathrm{AP}\WE p\lambda.$$ \end{theorem} The following corollary follows immediately from Theorem~\ref{thm:PottsToWE} and Theorem~\ref{thm:junction}. \begin{corollary} \label{newcor} Suppose that $p$ is a prime, $k$ is a positive integer satisfying $p^k>2$ and $\lambda\in(0,1)$ is an efficiently computable real. Then $\nHom{J_{p^k}} \leq_\mathrm{AP} \WE p\lambda$. \end{corollary} The condition $p^k>2$ can in fact be removed from Corollary~\ref{newcor}, even though the result does not follow from Theorem~\ref{thm:PottsToWE} in this situation. For the missing case where $p=2$ and $k=1$, Lemma~\ref{lem:intermediate} gives $\nHom{J_{2}} \leq_\mathrm{AP} \textsc{\#BIS}$ and \cite[Cor.~7, Part~(4)]{WeightEnum} shows $\textsc{\#BIS} \leq_\mathrm{AP} \WE {2}{\lambda}$. A striking feature of Corollary~\ref{newcor} is that it provides a uniform upper bound on the complexity of the infinite sequence of problems $\nHom{J_{p^k}}$, with $p$ fixed and $k$ varying. This uniform upper bound is interesting if (as we suspect) $\WE p\lambda$ is not itself equivalent to \textsc{\#Sat}{} via AP-reducibility. \begin{proof}[Proof of Theorem~\ref{thm:PottsToWE}] Let $q=p^k$ and let $\gamma=\lambda^{-q(p-1)/p}-1>0$. Since Theorem~\ref{thm:junction} shows $\textsc{Potts}(p^k,1)\equiv_\mathrm{AP} \nHom{J_{p^k}}\equiv_\mathrm{AP}\textsc{Potts}(p^k,\gamma)$, it is enough to give an AP-reduction from $\textsc{Potts}(p^k,\gamma)$ to $\WE p\lambda$. So suppose $G=(V,E)$ is a graph with $n$ vertices and $m$ edges; we may assume that $G$ is connected, since $Z_\mathrm{Potts}$ is multiplicative over connected components. We wish to evaluate \begin{equation}\label{eq:PottsDef} Z_\mathrm{Potts}(G;q,\gamma)=\sum_{\sigma:V\to[q]}(1+\gamma)^{\mathop{\mathrm{mono}}(\sigma)}.
\end{equation} Our aim is to construct an instance of the weight enumerator problem whose solution is the above expression, modulo an easily computable factor. Introduce a collection of variables $X=\{x^v_i\mid v\in V \text{ and }i\in[k]\}$. To each assignment $\sigma:V\to[q]$ we associate an assignment $\hat\sigma:X\to\mathbb{F}_p$ as follows: for all $v\in V$, $$ \big(\hat\sigma(x_1^v),\hat\sigma(x_2^v), \ldots,\hat\sigma(x_k^v)\big)=\phi(\sigma(v)), $$ where $\phi$ is any fixed bijection $[q]\to \mathbb{F}_p^k$. Note that $\sigma\mapsto\hat\sigma$ is a bijection from assignments $V\to[q]$ to assignments $X\to\mathbb{F}_p$. (Informally, we have coded the spin at each vertex as a $k$-tuple of variables taking values in~$\mathbb{F}_p$.) Let $\ell_1(z_1,\ldots,z_k),\ldots,\ell_q(z_1,\ldots,z_k)$ be an enumeration of all linear forms $\alpha_1z_1+\alpha_2z_2+\cdots+\alpha_kz_k$ over $\mathbb{F}_p$, where $(\alpha_1,\alpha_2,\ldots,\alpha_k)$ ranges over $\mathbb{F}_p^k$. This collection of linear forms has the following property: \begin{equation}\label{eq:prop} \begin{split} &\text{If $z_1=z_2=\cdots=z_k=0$, then all of $\ell_1(z_1,\ldots,z_k),\ldots,\ell_q(z_1,\ldots,z_k)$ are zero;}\\ &\text{otherwise, precisely $q/p=p^{k-1}$ of $\ell_1(z_1,\ldots,z_k),\ldots,\ell_q(z_1,\ldots,z_k)$ are zero.} \end{split} \end{equation} The first claim in~(\ref{eq:prop}) is trivial. To see the second, assume without loss of generality that $z_1\not=0$. Then, for any choice of $(\alpha_2,\ldots,\alpha_k)\in\mathbb{F}_p^{k-1}$, there is precisely one choice for $\alpha_1\in\mathbb{F}_p$ that makes $\alpha_1z_1+\cdots+\alpha_kz_k=0$. Now give an arbitrary direction to each edge $(u,v)\in E$ and consider the system $\Lambda$ of linear equations $$ \Big\{ \ell_j \big(\hat\sigma(x^v_1)-\hat\sigma(x^u_1),\,\hat\sigma(x^v_2)-\hat\sigma(x^u_2),\, \ldots,\,\hat\sigma(x^v_k)-\hat\sigma(x^u_k)\big)=0: j\in[q] \text{ and } (u,v) \in E\Big\}.
$$ (We view $\Lambda$ as a multiset, so the trivial equation $0=0$ arising from the linear form $\ell_j$ with $\alpha_1=\alpha_2 = \cdots = \alpha_k=0$ occurs $m$ times, a convention that makes the following calculation simpler.) Denote by $\mathop{\mathrm{sat}}(\hat\sigma)$ the number of satisfied equations in $\Lambda$. Then, from~(\ref{eq:prop}), $$\mathop{\mathrm{sat}}(\hat\sigma)=q\mathop{\mathrm{mono}}(\sigma)+\frac qp(m-\mathop{\mathrm{mono}}(\sigma)),$$ and hence $$ \mathop{\mathrm{mono}}(\sigma)=\frac p{(p-1)q}\mathop{\mathrm{sat}}(\hat\sigma)-\frac m{p-1}. $$ Noting that $1+\gamma=\lambda^{-q(p-1)/p}$, \begin{align} \sum_{\sigma:V\to[q]}(1+\gamma)^{\mathop{\mathrm{mono}}(\sigma)} &=\sum_{\hat\sigma: X \to\mathbb{F}_p}(1+\gamma)^{(p/(p-1)q)\mathop{\mathrm{sat}}(\hat\sigma)-m/(p-1)}\notag\\ &= \lambda^{qm/p} \sum_{\hat\sigma: X \to\mathbb{F}_p}\lambda^{-\mathop{\mathrm{sat}}(\hat\sigma)}\notag\\ &= \lambda^{-(1-1/p)qm} \sum_{\hat\sigma: X \to\mathbb{F}_p}\lambda^{\mathop{\mathrm{unsat}}(\hat\sigma)},\label{eq:unsat} \end{align} where $\mathop{\mathrm{unsat}}(\hat\sigma)=qm-\mathop{\mathrm{sat}}(\hat\sigma)$ is the number of unsatisfied equations in $\Lambda$. The system $\Lambda$ has $qm$ equations in $kn$ variables, so we may write it in matrix form $A\boldsymbol{\hat\sigma}=\mathbf0$, where $A$ is a $(qm\times kn)$-matrix, and $\boldsymbol{\hat\sigma}$ is a $kn$-vector over~$\mathbb{F}_p$. The columns of $A$ and the components of $\boldsymbol{\hat\sigma}$ are indexed by pairs $(i,v)\in[k]\times V$, and the $(i,v)$-component of $\boldsymbol{\hat\sigma}$ is $\hat\sigma(x_i^v)$. Enumerating the columns of $A$ as $\mathbf a_i^v\in\mathbb{F}_p^{qm}$ for $(i,v)\in[k]\times V$, we may re-express $\Lambda$ in the form $$ \sum_{i\in[k],v\in V}\hat\sigma(x_i^v)\,\mathbf a_i^v=\mathbf0, $$ where $\mathbf0$ is the length-$qm$ zero vector. 
Then $\mathop{\mathrm{unsat}}(\hat\sigma)$ is the Hamming weight of the length-$qm$ vector $\mathbf b(\hat\sigma)=\sum_{i,v}\hat\sigma(x_i^v)\,\mathbf a_i^v$. As $\hat\sigma$ ranges over all assignments $X\to\mathbb{F}_p$, so $\mathbf b(\hat\sigma)$ ranges over the vector space (or code) $$C=\Big\{\sum_{i,v}\hat\sigma(x_i^v)\,\mathbf a_i^v\Bigm| \hat\sigma: X\to\mathbb{F}_p\Big\} =\langle \mathbf a_i^v\mid i\in[k],v\in V\rangle$$ generated by the vectors $\{\mathbf a_i^v\}$. We will argue that the mapping sending $\hat\sigma$ to $\mathbf b(\hat\sigma)$ is $q$ to~1, from which it follows that $\sum_{\hat\sigma}\lambda^{\mathop{\mathrm{unsat}}(\hat\sigma)}$ is $q$ times the weight enumerator of the code~$C$. Then, from (\ref{eq:PottsDef}) and~(\ref{eq:unsat}), letting $M$ be any generating matrix for~$C$, $$ Z_\mathrm{Potts}(G;q,\gamma)=q\lambda^{-(1-1/p)qm} \,W_M(\lambda). $$ To see where the factor~$q$ comes from, consider the assignments~$\hat\sigma$ satisfying \begin{equation}\label{eq:qto1} \sum_{i\in[k],v\in V}\hat\sigma(x_i^v)\,\mathbf a_i^v=\mathbf b, \end{equation} for some $\mathbf b\in\mathbb{F}_p^{qm}$. For every $i\in [k]$ and every edge $(u,v)\in E$, there is an equation in $\Lambda$ specifying the value of $\hat\sigma(x_i^v)-\hat\sigma(x_i^u)$. Thus, since $G$ is connected, the vector $\mathbf b$ determines $\hat\sigma$ once the partial assignment $(\hat\sigma(x_1^r),\ldots,\hat\sigma(x_k^r))$ is specified for some distinguished vertex $r\in V$. Conversely, each of the $q$ partial assignments $(\hat\sigma(x_1^r),\ldots,\hat\sigma(x_k^r))$ extends to a total assignment satisfying~(\ref{eq:qto1}). \end{proof} \end{document}
\begin{document} \title[Automorphisms of buildings]{Automorphisms of Non-Spherical Buildings\\ Have Unbounded Displacement} \author[Abramenko]{Peter Abramenko} \address{Department of Mathematics\\ University of Virginia\\ Charlottesville, VA 22904} \email{[email protected]} \author[Brown]{Kenneth S. Brown} \address{Department of Mathematics\\ Cornell University\\ Ithaca, NY 14853} \email{[email protected]} \date{October 5, 2007} \begin{abstract} If $\phi$ is a nontrivial automorphism of a thick building~$\Delta$ of purely infinite type, we prove that there is no bound on the distance that $\phi$ moves a chamber. This has the following group-theoretic consequence: If $G$ is a group of automorphisms of~$\Delta$ with bounded quotient, then the center of~$G$ is trivial. \end{abstract} \maketitle \section*{Introduction} \label{sec:introduction} A well-known folklore result says that a nontrivial automorphism~$\phi$ of a thick Euclidean building~$X$ has unbounded displacement. Here we are thinking of $X$ as a metric space, and the assertion is that there is no bound on the distance that $\phi$ moves a point. [For the proof, consider the action of~$\phi$ on the boundary~$X_\infty$ at infinity. If $\phi$ had bounded displacement, then $\phi$ would act as the identity on~$X_\infty$, and one would easily conclude that $\phi=\id$.] In this note we generalize this result to buildings that are not necessarily Euclidean. We work with buildings~$\Delta$ as combinatorial objects, whose set $\C$ of chambers has a discrete metric (``gallery distance''). We say that $\Delta$ is of \emph{purely infinite type} if every irreducible factor of its Weyl group is infinite. \begin{theorem*} Let $\phi$ be a nontrivial automorphism of a thick building~$\Delta$ of purely infinite type. Then $\phi$, viewed as an isometry of the set \C\ of chambers, has unbounded displacement. 
\end{theorem*} The crux of the proof is a result about Coxeter groups (Lemma \ref{lem:3}) that may be of independent interest. We prove the lemma in Section~\ref{sec:lemma-about-coxeter}, after a review of the Tits cone in Section~\ref{sec:preliminaries}. We then prove the theorem in Section~\ref{sec:proof-theorem}, and we obtain the following (almost immediate) corollary: If $G$ is a subgroup of~$\Aut(\Delta)$ such that there is a bounded set of representatives for the $G$\h-orbits in~\C, then the center of~$G$ is trivial. We conclude the paper with a brief discussion of displacement in the spherical case. We are grateful to Hendrik Van Maldeghem for providing us with some counterexamples in this connection (see Example~\ref{exam:1} and Remark~\ref{rem:5}). \section{Preliminaries on the Tits cone} \label{sec:preliminaries} In this section we review some facts about the Tits cone associated to a Coxeter group \cite{abramenko08:_approac_to_build,bourbaki81:_group_lie,humphreys90:_reflec_coxet,tits61:_group_coxet,vinberg71:_discr}. We will use \cite{abramenko08:_approac_to_build} as our basic reference, but much of what we say can also be found in one or more of the other cited references. Let $(W,S)$ be a Coxeter system with $S$ finite. Then $W$ admits a canonical representation, which turns out to be faithful (see Lemma~\ref{lem:7} below), as a linear reflection group acting on a real vector space $V$ with a basis $\{e_s \mid s\in S\}$. There is an induced action of~$W$ on the dual space~$V^*$. We denote by~$C_0$ the simplicial cone in~$V^*$ defined by \[ C_0 := \{x\in V^* \mid \<x,e_s> >0 \text{ for all } s\in S\}; \] here $\<-,->$ denotes the canonical evaluation pairing between $V^*$ and~$V$. We call~$C_0$ the \emph{fundamental chamber}. For each subset $J\subseteq S$, we set \[ A_J := \{x \in V^* \mid \<x,e_s> = 0 \text{ for } s \in J \text{ and } \<x,e_s> > 0 \text{ for } s \in S\setminus J\}. 
\] The sets $A_J$ are the (relatively open) \emph{faces} of~$C_0$ in the standard terminology of polyhedral geometry. They form a partition of the closure~$\Cbar_0$ of $C_0$ in~$V^*$. For each $s\in S$, we denote by~$H_s$ the hyperplane in~$V^*$ defined by the linear equation $\<-,e_s> =0$. It follows from the explicit definition of the canonical representation of~$W$ (which we have not given) that $H_s$ is the fixed hyperplane of $s$ acting on~$V^*$. The complement of $H_s$ in~$V^*$ is the union of two open halfspaces~$U_\pm(s)$ that are interchanged by~$s$. Here \[ U_+(s) := \{x \in V^* \mid \<x,e_s> > 0\}, \] and \[ U_-(s) := \{x \in V^* \mid \<x,e_s> < 0\}. \] The hyperplanes $H_s$ are called the \emph{walls} of~$C_0$. We denote by $\H_0$ the set of walls of~$C_0$. The \emph{support} of the face $A= A_J$, denoted~$\supp A$, is defined to be the intersection of the walls of~$C_0$ containing~$A$, i.e., $\supp A = \bigcap_{s\in J} H_s$. Note that $A$ is open in~$\supp A$ and that $\supp A$ is the linear span of~$A$. Although our definitions above made use of the basis $\{e_s \mid s \in S\}$ of~$V$, there are also intrinsic geometric characterizations of walls and faces. Namely, the walls of~$C_0$ are the hyperplanes $H$ in~$V^*$ such that $H$ does not meet~$C_0$ and $H\cap\Cbar_0$ has nonempty interior in~$H$. And the faces of~$C_0$ correspond to subsets $\H_1\subseteq \H_0$. Given such a subset, let $L := \bigcap_{H\in\H_1} H$; the corresponding face~$A$ is then the relative interior (in~$L$) of the intersection $L\cap\Cbar_0$. We now make everything $W$-equivariant. We call a subset $C$ of~$V^*$ a \emph{chamber} if it is of the form $C = wC_0$ for some $w \in W$, and we call a subset $A$ of~$V^*$ a \emph{cell} if it is of the form $A = wA_J$ for some $w \in W$ and $J \subseteq S$. Each chamber~$C$ is a simplicial cone and hence has well-defined walls and faces, which can be characterized intrinsically as above.
If $C=wC_0$ with $w\in W$, the walls of~$C$ are the transforms $wH_s$ ($s \in S$), and the faces of~$C$ are the cells $wA_J$ ($J\subseteq S$). Finally, we call a hyperplane $H$ in~$V^*$ a \emph{wall} if it is a wall of some chamber, and we denote by~\H\ the set of all walls; thus \[ \H = \{wH_s \mid w \in W,\, s \in S\}. \] The set of all faces of all chambers is equal to the set of all cells. The union of these cells is called the \emph{Tits cone} and will be denoted by~$X$ in the following. Equivalently, \[ X = \bigcup_{w\in W} w\Cbar_0. \] We now record, for ease of reference, some standard facts about the Tits cone. The first fact is Lemma~2.58 in \cite[Section~2.5]{abramenko08:_approac_to_build}. See also the proof of Theorem~1 in \cite[Section~V.4.4]{bourbaki81:_group_lie}. \begin{lemma} \label{lem:6} For any $w \in W$ and $s \in S$, we have \[ wC_0 \subseteq U_+(s) \iff l(sw) > l(w) \] and \[ wC_0 \subseteq U_-(s) \iff l(sw) < l(w). \] Here $l(-)$ is the length function on~$W$ with respect to~$S$. \qed \end{lemma} This immediately implies: \begin{lemma} \label{lem:7} $W$ acts simply transitively on the set of chambers. \qed \end{lemma} The next result allows one to talk about separation of cells by walls. It is part of Theorem~2.80 in \cite[Section~2.6]{abramenko08:_approac_to_build}, and it can also be deduced from Proposition~5 in \cite[Section~V.4.6]{bourbaki81:_group_lie}. \begin{lemma} \label{lem:8} If $H$ is a wall and $A$ is a cell, then either $A$ is contained in~$H$ or $A$ is contained in one of the two open halfspaces determined by~$H$. \qed \end{lemma} We turn now to reflections. The following lemma is an easy consequence of the stabilizer calculation in \cite[Theorem~2.80]{abramenko08:_approac_to_build} or \cite[Section~V.4.6]{bourbaki81:_group_lie}. \begin{lemma} \label{lem:9} For each wall $H\in\H$, there is a unique nontrivial element $s_H\in W$ that fixes $H$ pointwise. \qed \end{lemma} We call $s_H$ the \emph{reflection} with respect to~$H$. 
In view of a fact stated above, we have $s_{H_s} = s$ for all $s\in S$. Thus $S$ is the set of reflections with respect to the walls in~$\H_0$. It follows immediately from Lemma~\ref{lem:9} that \begin{equation} \label{eq:1} s_{wH} = ws_Hw^{-1} \end{equation} for all $H \in \H$ and $w \in W$. Hence $wSw^{-1}$ is the set of reflections with respect to the walls of~$wC_0$. \begin{corollary} \label{cor:2} For $s \in S$ and $w \in W$, $\,H_s$ is a wall of~$wC_0$ if and only if $w^{-1}sw$ is in~$S$. \end{corollary} \begin{proof} $H_s$ is a wall of~$wC_0$ if and only if $s$ is the reflection with respect to a wall of~$wC_0$. In view of the observations above, this is equivalent to saying $s\in wSw^{-1}$, i.e., $w^{-1} s w \in S$. \end{proof} Finally, we record some special features of the infinite case. \begin{lemma} \label{lem:1} Assume that $(W,S)$ is irreducible and $W$ is infinite. \begin{enumerate} \item\label{item:1} If two chambers $C,D$ have the same walls, then $C=D$. \item\label{item:2} The Tits cone $X$ does not contain any pair~$\pm x$ of opposite nonzero vectors. \end{enumerate} \end{lemma} \begin{proof} (\ref{item:1}) We may assume that $C=C_0$ and~$D =wC_0$ for some $w\in W$. Then Corollary~\ref{cor:2} implies that $C$ and~$D$ have the same walls if and only if $w$ normalizes~$S$. So the content of~(\ref{item:1}) is that the normalizer of $S$ in~$W$ is trivial. This is a well known fact. See \cite[Section~V.4, Exercise~3]{bourbaki81:_group_lie}, \cite[Proposition~4.1]{deodhar82:_coxet}, or \cite[Section~2.5.6]{abramenko08:_approac_to_build}. Alternatively, there is a direct geometric proof of~(\ref{item:1}) outlined in the solution to \cite[Exercise~3.118]{abramenko08:_approac_to_build}. (\ref{item:2}) This is a result of Vinberg~\cite[p.~1112, Lemma~15]{vinberg71:_discr}. See also \cite[Section~2.6.3]{abramenko08:_approac_to_build} and \cite[Theorem~2.1.6]{krammer94:_coxet} for alternate proofs. 
\end{proof} \section{A lemma about Coxeter groups} \label{sec:lemma-about-coxeter} We begin with a geometric version of our lemma, and then we translate it into algebraic language. \begin{lemma} \label{lem:2} Let $(W,S)$ be an infinite irreducible Coxeter system with $S$ finite. If $C$ and~$D$ are distinct chambers in the Tits cone, then $C$ has a wall~$H$ with the following two properties: \begin{enumerate}[\rm(a)] \item $H$ is not a wall of~$D$. \item $H$ does not separate $C$ from~$D$. \end{enumerate} \end{lemma} \begin{proof} For convenience (and without loss of generality), we assume that $C$ is the fundamental chamber~$C_0$. Define $J \subseteq S$ by \[ J := \{s \in S \mid H_s \text{ is a wall of } D\}, \] and set $L := \bigcap_{s\in J} H_s$. Thus $L$ is the support of the face $A = A_J$ of~$C$. By Lemma~\ref{lem:1}(\ref{item:1}), $J\neq S$, hence $L \neq \{0\}$. Since $L$ is an intersection of walls of~$D$, it is also the support of a face $B$ of~$D$. Note that $A$ and~$B$ are contained in precisely the same walls, since they have the same span~$L$. In particular, $B$ is not contained in any of the walls~$H_s$ with $s\in S\setminus J$, so, by Lemma~\ref{lem:8}, $B$ is contained in either $U_+(s)$ or~$U_-(s)$ for each such~$s$. Suppose that $B\subseteq U_-(s)$ for each $s \in S\setminus J$. Then, in view of the definition of~$A = A_J$ by linear equalities and inequalities, $B \subseteq -A$. But $B$ contains a nonzero vector~$x$ (since $B$ spans~$L$), so we have contradicted Lemma~\ref{lem:1}(\ref{item:2}). Thus there must exist $s\in S\setminus J$ with $B \subseteq U_+(s)$. This implies that $D\subseteq U_+(s)$, and the wall $H=H_s$ then has the desired properties (a) and~(b). \end{proof} We now prove the algebraic version of the lemma, for which we relax the hypotheses slightly. We do not even have to assume that $S$ is finite. Recall that $(W,S)$ is said to be \emph{purely infinite} if each of its irreducible factors is infinite. 
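For instance, any irreducible affine Weyl group, such as the infinite dihedral group $W = \langle s, t \mid s^2 = t^2 = 1 \rangle$, is purely infinite. On the other hand, a direct product such as
\[
W = \langle s, t \mid s^2 = t^2 = 1 \rangle \times \langle u \mid u^2 = 1 \rangle
\]
is infinite but not purely infinite, since its second irreducible factor is finite.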
\begin{lemma} \label{lem:3} Let $(W,S)$ be a purely infinite Coxeter system. If $w \neq 1$ in~$W$, then there exists $s\in S$ such that: \begin{enumerate}[\rm(a)] \item $w^{-1} s w \notin S$. \item $l(sw) > l(w)$. \end{enumerate} \end{lemma} \begin{proof} Let $(W_i,S_i)$ be the irreducible factors of~$(W,S)$, which are all infinite. Suppose the lemma is true for each factor~$(W_i,S_i)$, and consider any $w\neq 1$ in~$W$. Then $w$ has components $w_i\in W_i$, at least one of which (say~$w_1$) is nontrivial. So we can find $s \in S_1$ with $w_1^{-1}sw_1 \notin S_1$ and $l(sw_1) > l(w_1)$. One easily deduces (a) and~(b). We are now reduced to the case where $(W,S)$ is irreducible. If $S$ is finite, we apply Lemma~\ref{lem:2} with $C$ equal to the fundamental chamber~$C_0$ and $D = w C_0$. Then $H = H_s$ for some $s\in S$. Property~(a) of that lemma translates to (a) of the present lemma by Corollary~\ref{cor:2}, and property~(b) of that lemma translates to (b) of the present lemma by Lemma~\ref{lem:6}. If $S$ is infinite, we use a completely different method. The result in this case follows from Lemma~\ref{lem:4} below. \end{proof} Recall that for any Coxeter system $(W,S)$ and any $w\in W$, there is a (finite) subset $S(w) \subseteq S$ such that every reduced decomposition of~$w$ involves precisely the generators in~$S(w)$. This follows, for example, from Tits's solution to the word problem~\cite{tits69:_word_prob}. (See also \cite[Theorem~2.35]{abramenko08:_approac_to_build}). \begin{lemma} \label{lem:4} Let $(W,S)$ be an irreducible Coxeter system, and let $w \in W$ be nontrivial. If $S(w) \neq S$, then there exists $s\in S$ satisfying conditions (a) and~(b) of Lemma~\ref{lem:3}. \end{lemma} \begin{proof} By irreducibility, there exists an element $s\in S \setminus S(w)$ that does not commute with all elements of~$S(w)$. 
Condition~(b) then follows from the fact that $s\notin S(w)$ and standard properties of Coxeter groups; see \cite[Lemma~2.15]{abramenko08:_approac_to_build}. To prove~(a), suppose $sw=wt$ with $t\in S$. We have $s\notin S(w)$ but $s \in S(sw)$ (since $l(sw) > l(w)$), so necessarily $t = s$. Using induction on~$l(w)$, one now deduces from Tits's solution to the word problem that $s$ commutes with every element of~$S(w)$ (see \cite[Lemma~2.39]{abramenko08:_approac_to_build}), contradicting the choice of~$s$. \end{proof} \section{Proof of the theorem} \label{sec:proof-theorem} In this section we assume familiarity with basic concepts from the theory of buildings \cite{abramenko08:_approac_to_build,ronan89:_lectur,scharlau95:_build,tits74:_build_bn,weiss03}. Let $\Delta$ be a building with Weyl group~$(W,S)$, let \C\ be the set of chambers of~$\Delta$, and let $\delta\colon \C\times\C\to W$ be the Weyl distance function. (See \cite[Section 4.8 or~5.1]{abramenko08:_approac_to_build} for the definition and standard properties of~$\delta$.) Recall that \C\ has a natural \emph{gallery metric}~$d(-,-)$ and that \[ d(C,D) = l\bigl(\delta(C,D)\bigr) \] for $C,D\in\C$. Let $\phi \colon \Delta\to\Delta$ be an automorphism of~$\Delta$ that is not necessarily type-preserving. Recall that $\phi$ induces an automorphism $\sigma$ of~$(W,S)$. From the simplicial point of view, we can think of $\sigma$ (restricted to~$S$) as describing the effect of~$\phi$ on types of vertices. From the point of view of Weyl distance, $\sigma$ is characterized by the equation \[ \delta(\phi(C),\phi(D)) = \sigma(\delta(C,D)) \] for $C,D\in\C$. Our main theorem will be obtained from the following technical lemma: \begin{lemma} \label{lem:5} Assume that $\Delta$ is thick. Fix a chamber $C \in \C$, and set $w := \delta(C,\phi(C))$. Suppose there exists $s \in S$ such that $l(sw) > l(w)$ and $w^{-1}sw \neq t := \sigma(s)$. 
Then there is a chamber $D$ $s$\h-adjacent to~$C$ such that $d(D,\phi(D)) > d(C, \phi(C))$. \end{lemma} \begin{proof} We will choose $D$ $s$\h-adjacent to~$C$ so that $u := \delta(C,\phi(D))$ satisfies $l(u) \geq l(w)$ and $l(su) > l(u)$. We will then have $\delta(D,\phi(D)) = su$, as illustrated in the following schematic diagram: \[ \def\textstyle{\textstyle} \xymatrix@+1pc{C \ar[r]^w \ar@{-}[d]_s \ar[dr]_u & \phi(C) \ar@{-}[d]^t\\ D \ar[r]_{su} & \phi(D)} \] Hence \[ d(D,\phi(D)) = l(su) > l(u) \geq l(w) = d(C,\phi(C)). \] Case 1. $l(wt) < l(w)$. Then there is a unique chamber $E_0$ $t$\h-adjacent to~$\phi(C)$ such that $\delta(C,E_0) = wt$. For all other $E$ that are $t$\h-adjacent to~$\phi(C)$, we have $\delta(C,E) = w$. So we need only choose $D$ so that $\phi(D) \neq E_0$; this is possible by thickness. Then $u = w$. Case 2. $l(wt) > l(w)$. Then $l(swt) > l(wt)$ because the conditions $l(swt) < l(wt)$, $\,l(wt) > l(w)$, and $l(sw) > l(w)$ would imply (e.g., by the deletion condition for Coxeter groups) $swt = w$, and the latter is excluded by assumption. Hence in this case we can choose $D$ $s$\h-adjacent to~$C$ arbitrarily, and we then have $u = wt$. \end{proof} Suppose now that $(W,S)$ is purely infinite and $\phi$ is nontrivial. Then we can start with any chamber~$C$ such that $\phi(C)\neq C$, and Lemma~\ref{lem:3} shows that the hypothesis of Lemma~\ref{lem:5} is satisfied. We therefore obtain a chamber~$D$ such that $d(D,\phi(D)) > d(C,\phi(C))$. Our main theorem as stated in the introduction follows at once. We restate it here for ease of reference: \begin{theorem} \label{thr:1} Let $\phi$ be a nontrivial automorphism of a thick building~$\Delta$ of purely infinite type. Then $\phi$, viewed as an isometry of the set \C\ of chambers, has unbounded displacement, i.e., the set $\{d(C,\phi(C)) \mid C\in\C\}$ is unbounded. 
\qed \end{theorem} \begin{remark} \label{rem:1} Note that, in view of the generality under which we proved Lemma \ref{lem:3}, the building~$\Delta$ is allowed to have infinite rank. \end{remark} \begin{remark} \label{rem:2} In view of the existence of translations in Euclidean Coxeter complexes, the thickness assumption in the theorem cannot be dropped. \end{remark} \begin{corollary} \label{cor:1} Let $\Delta$ and~\C\ be as in the theorem, and let $G$ be a group of automorphisms of~$\Delta$. If there is a bounded set of representatives for the $G$\h-orbits in~\C, then $G$ has trivial center. \end{corollary} \begin{proof} Let \M\ be a bounded set of representatives for the $G$\h-orbits in~\C, and let $z\in G$ be central. Then there is an upper bound~$M$ on the distances $d(C,zC)$ for $C\in\M$; we can take $M$ to be the diameter of the bounded set $\M \cup z\M$, for instance. Now every chamber $D\in\C$ has the form $D=gC$ for some $g\in G$ and $C\in\M$, hence \[ d(D,zD) = d(gC,zgC) = d(gC,gzC) = d(C,zC) \leq M. \] Thus $z$ has bounded displacement and therefore $z=1$ by the theorem. \end{proof} \begin{remark} \label{rem:4} Although Corollary~\ref{cor:1} is stated for faithful group actions, we can also apply it to actions that are not necessarily faithful and conclude (under the hypothesis of the corollary) that the center of~$G$ acts trivially. \end{remark} \begin{remark} \label{rem:6} Note that the hypothesis of the corollary is satisfied if the action of $G$ is chamber transitive. In particular, it is satisfied if the action is strongly transitive and hence corresponds to a BN-pair in~$G$. In this case, however, the result is trivial (and does not require the building to be of purely infinite type). Indeed, the stabilizer of every chamber is a parabolic subgroup and hence is self-normalizing, so it automatically contains the center of~$G$. 
To obtain other examples, consider a cocompact action of a group on a locally finite thick Euclidean building (e.g., a thick tree). The corollary then implies that the center of the group must act trivially. \end{remark} \begin{remark} \label{rem:3} The conclusion of Theorem~\ref{thr:1} is obviously false for spherical buildings, since the metric space~\C\ is bounded in this case. But one can ask instead whether or not \begin{equation} \label{eq:2} \disp \phi = \diam \Delta, \end{equation} where $\diam \Delta$ denotes the diameter of the metric space~\C, and $\disp\phi$ is the \emph{displacement} of~$\phi$; the latter is defined by \[ \disp \phi := \sup \{d(C,\phi(C)) \mid C\in\C\}. \] Note that, in the spherical case, Equation~\eqref{eq:2} holds if and only if there is a chamber~$C$ such that $\phi(C)$ and~$C$ are opposite. This turns out to be false in general. The following counterexample was pointed out to us by Hendrik Van Maldeghem. \end{remark} \begin{example} \label{exam:1} Let $k$ be a field and $n$ an integer $\geq 2$. Let $\Delta$ be the building associated to the vector space $V=k^{2n}$. Thus the vertices of~$\Delta$ are the subspaces $U$ of~$V$ such that $0<U<V$, and the simplices are the chains of such subspaces. A chamber is a chain \[ U_1<U_2<\cdots <U_{2n-1} \] with $\dim U_i = i$ for all~$i$, and two such chambers $(U_i)$ and~$(U'_i)$ are opposite if and only if $U_i + U'_{2n-i} = V$ for all~$i$. Now choose a non-degenerate alternating bilinear form $B$ on~$V$, and let $\phi$ be the (type-reversing) involution of~$\Delta$ that sends each vertex $U$ to its orthogonal subspace~$U^\perp$ with respect to~$B$. For any chamber $(U_i)$ as above, its image under~$\phi$ is the chamber~$(U'_i)$ with $U'_{2n-i} = U_i^\perp$ for all~$i$. Since $U_1\leq U_1^\perp = U'_{2n-1}$, these two chambers are not opposite. \end{example} Even though \eqref{eq:2} is false in general, one can still use Lemma~\ref{lem:5} to obtain lower bounds on~$\disp\phi$.
Consider the rank~2 case, for example. Then $\Delta$ is a generalized $m$-gon for some~$m$, its diameter is~$m$, and its Weyl group~$W$ is the dihedral group of order~$2m$. Lemma~\ref{lem:5} in this case yields the following result: \begin{corollary} \label{cor:3} Let $\phi$ be a nontrivial automorphism of a thick generalized $m$-gon. Then the following hold: \begin{enumerate}[\rm(a)] \item $\disp\phi \geq m-1$. \item If $\phi$ is type preserving and $m$ is odd, or if $\phi$ is type reversing and $m$ is even, then $\disp\phi=m$. \end{enumerate} \end{corollary} \begin{proof} (a) The hypothesis of Lemma~\ref{lem:5} is always satisfied as long as $w \neq 1$ and $l(w) < m-1$ since then there exists $s \in S$ with $l(sw) > l(w)$, and $sw$ has a unique reduced decomposition (which is in particular not of the form $wt$ with $t \in S$). (b) Suppose $l(w) = m-1$, and let $s \in S$ be the unique element such that $sw = w_0$, where the latter is the longest element of~$W$. Then $s' \in S$ satisfies $ws' = sw$ if and only if $l(ws') > l(w)$. Since $w$ does not start with~$s$, this is equivalent to $s' \neq s$ if $m$ is odd and to $s' = s$ if $m$ is even. So the hypothesis of Lemma~\ref{lem:5} is satisfied for \emph{any} $w$ different from 1 and~$w_0$ if $m$ is odd and $\sigma=\id$ or if $m$ is even and $\sigma \neq \id$. \end{proof} We conclude by mentioning another family of examples, again pointed out to us by Van Maldeghem. \begin{remark} \label{rem:5} For even $m = 2n$, type-preserving automorphisms $\phi$ of generalized $m$-gons with $\disp \phi = m-1$ arise as follows. Assume that there exists a vertex $x$ in the generalized $m$-gon~$\Delta$ such that the ball $B(x, n)$ is fixed pointwise by~$\phi$. Here $B(x,n)$ is the set of vertices with $d(x,y) \leq n$, where $d(-,-)$ now denotes the usual graph metric, obtained by minimizing lengths of paths. 
Recall that there are two types of vertices in~$\Delta$ and that opposite vertices always have the same type since $m$ is even. Let $y$ be any vertex that does not have the same type as~$x$. Then $y$ is at distance at most $n-1$ from some vertex in $B(x,n)$. Since $\phi$ fixes $B(x,n)$ pointwise, $d(y, \phi(y)) \leq 2n-2$. So $C$ and~$\phi(C)$ are not opposite for any chamber~$C$ having $y$ as a vertex. Since this is true for any vertex $y$ that does not have the same type as~$x$, $\disp \phi \neq m$ and hence, by Corollary~\ref{cor:3}(a), $\disp \phi = m-1$ if $\phi \neq \id$. Now it is a well-known fact (see for instance \cite[Corollary 5.4.7]{maldeghem98:_gener}) that every \emph{Moufang} $m$-gon possesses nontrivial type-preserving automorphisms $\phi$ fixing some ball $B(x,n)$ pointwise. (In the language of incidence geometry, these automorphisms are called central or axial collineations, depending on whether $x$ is a point or a line in the corresponding rank~2 geometry.) So for $m = 4$, $6$, or~$8$, all Moufang $m$-gons admit type-preserving automorphisms $\phi$ with $\disp \phi = m-1$. \end{remark} \end{document}
\begin{document} \title{Almost Gorenstein homogeneous rings and their $h$-vectors} \author{Akihiro Higashitani} \thanks{ {\bf 2010 Mathematics Subject Classification:} Primary 13H10; Secondary 13D40, 13H15. \\ \;\;\;\; {\bf Keywords:} Almost Gorenstein, Cohen--Macaulay, Gorenstein, $h$-vector, homogeneous domain. } \address{Akihiro Higashitani, Department of Mathematics, Kyoto Sangyo University, Motoyama, Kamigamo, Kita-Ku, Kyoto, Japan, 603-8555} \email{[email protected]} \begin{abstract} In this paper, for the further development of the study of almost Gorenstein graded rings, we discuss some relations between the almost Gorensteinness of Cohen--Macaulay homogeneous rings and their $h$-vectors. Concretely, for a Cohen--Macaulay homogeneous ring $R$, we give a sufficient condition for $R$ to be almost Gorenstein in terms of the $h$-vector of $R$ (Theorem \ref{suff}), and we also characterize almost Gorenstein homogeneous domains with small socle degrees in terms of the $h$-vector of $R$ (Theorem \ref{hvector}). Moreover, we provide examples of almost Gorenstein homogeneous domains arising from lattice polytopes. \end{abstract} \maketitle \section{Introduction} Recently, {\em almost Gorenstein} local and graded rings were introduced and have been studied as a new class of rings which are Cohen--Macaulay but not Gorenstein. In this paper, for the further study of almost Gorenstein rings, we concentrate on almost Gorenstein homogeneous rings and investigate their $h$-vectors. Originally, the notion of almost Gorenstein local rings of dimension one was introduced by Barucci and Fr{\"o}berg \cite{BF} in the case where the local rings are analytically unramified. After this work, Goto, Matsuoka and Phuong \cite{GMP} modified the definition of one-dimensional almost Gorenstein local rings so that it also works well in the case where the rings are analytically ramified.
As a further investigation of almost Gorenstein local rings, the definition of almost Gorenstein local or graded rings of higher dimension was suggested by Goto, Takahashi and Taniguchi \cite{GTT}. We refer the reader to \cite{BF, GMP, GTT} for detailed information on almost Gorenstein rings. In addition, Matsuoka and Murai \cite{MaMu} have studied almost Gorenstein Stanley--Reisner rings. They introduced the notion of ``uniformly Cohen--Macaulay'' complexes and proved that a simplicial complex $\Delta$ is uniformly Cohen--Macaulay if and only if there exists an injection from the Stanley--Reisner ring $k[\Delta]$ to its canonical module $\omega_{k[\Delta]}$. This is a necessary condition for $k[\Delta]$ to be almost Gorenstein (with $a$-invariant 0). They also characterized the almost Gorenstein$^*$ simplicial complexes of dimension at most 2. See \cite[Section 3.3]{MaMu}. Inspired by these works (especially \cite{GTT} and \cite{MaMu}), we will study higher-dimensional almost Gorenstein {\em homogeneous} rings. The main goal of this paper is to give a characterization of almost Gorensteinness in terms of the Hilbert series, namely, the $h$-vector, of a Cohen--Macaulay homogeneous ring. For a Cohen--Macaulay homogeneous domain $R$ with $h$-vector $(h_0,h_1,\ldots,h_s)$, it is known by the work of Stanley \cite[Theorem 4.4]{StanleyHF} that $R$ is Gorenstein if and only if $h_i=h_{s-i}$ for $i=0,1,\ldots,\lfloor s/2 \rfloor$. (See also \cite[Corollary 4.4.6]{BH}.) This says that the ``symmetry'' of the $h$-vector of $R$ characterizes the Gorensteinness of $R$. Hence, it is natural to expect that some ``almost symmetry'' of the $h$-vector of $R$ may characterize the almost Gorensteinness of $R$. Although a complete characterization might be impossible, we will prove that a certain almost symmetry of the $h$-vector of a Cohen--Macaulay homogeneous ring is a sufficient condition for it to be almost Gorenstein (Theorem \ref{suff}).
In addition, in the case where the ring is a domain and the socle degree is small, we will characterize the almost Gorensteinness in terms of its $h$-vector (Theorem \ref{hvector}). The structure of this paper is as follows. In Section \ref{pre}, we recall the definition of almost Gorenstein graded rings and prepare some propositions for the discussions later. In Section \ref{juubun}, we prove that a certain almost symmetry of the $h$-vector of a Cohen--Macaulay homogeneous ring implies almost Gorensteinness. More precisely, let $R$ be a Cohen--Macaulay homogeneous ring and $(h_0,h_1,\ldots,h_s)$ its $h$-vector. We prove that if $h_i=h_{s-i}$ for $i=0,1,\ldots,\lfloor s/2 \rfloor -1$, then $R$ is almost Gorenstein (Theorem \ref{suff}). In Section \ref{domain}, we characterize almost Gorenstein homogeneous domains in terms of their $h$-vectors. More precisely, for a Cohen--Macaulay homogeneous domain $R$, let $(h_0,h_1,\ldots,h_s)$ be the $h$-vector of $R$. When $s=2$, the following three conditions are equivalent: (i) $R$ is almost Gorenstein; (ii) $R$ is Gorenstein; (iii) $h_2=1$. Moreover, when $s=3$, $R$ is almost Gorenstein if and only if $h_3=1$. (See Theorem \ref{hvector}.) Finally, in Section \ref{rei}, we supply some examples of almost Gorenstein homogeneous domains arising from lattice polytopes. \subsection*{Acknowledgements} The author is grateful to Shiro Goto, Naoyuki Matsuoka and Naoki Taniguchi for their helpful comments and instructive discussions. The author is partially supported by JSPS Grant-in-Aid for Young Scientists (B) $\sharp$26800015. \bigskip \section{Preliminaries}\label{pre} Let $R$ be a Cohen--Macaulay homogeneous ring of dimension $d$ over an algebraically closed field $k$ of characteristic 0. Let $a=a(R)$ be the $a$-invariant of $R$ and let $\omega_R$ be a canonical module of $R$. Note that $a(R)=-\min\{ j : (\omega_R)_j \not= 0\}$.
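For example, if $R=k[x_1,\ldots,x_d]$ is a polynomial ring with the standard grading, then $\omega_R \cong R(-d)$, so that $\min\{ j : (\omega_R)_j \not= 0\} = d$ and hence $a(R)=-d$.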
\subsection{Definitions} We collect some notation used throughout this paper. \begin{itemize} \item For a graded $R$-module $M$, \begin{itemize} \item let $\mu(M)$ denote the number of elements in a minimal system of generators of $M$ as an $R$-module; \item let $e(M)$ denote the multiplicity of $M$; \item note that we have in general the inequality \begin{align}\label{ookii} e(M) \geq \mu(M); \end{align} \item let $M(-\ell)$ denote the $R$-module whose underlying $R$-module is the same as that of $M$ and whose grading is given by $(M(-\ell))_n=M_{n-\ell}$ for all $n \in {\NZQ Z}$; \item let $[[ M ]]$ denote the Hilbert series of $M$, i.e., $$[[M]]=\sum_{n \in {\NZQ Z}}(\dim_k M_n) t^n.$$ \end{itemize} \item We denote the Cohen--Macaulay type of $R$ by $r(R)$. Note that $r(R)=\mu(\omega_R)$. \item We say that $(h_0,h_1,\ldots,h_s)$ is the {\em $h$-vector} of $R$ if $$[[R]]=\sum_{n \geq 0}(\dim_k R_n) t^n=\frac{h_0+h_1t+\cdots+h_st^s}{(1-t)^d}$$ with $h_s \not=0$. It is well known that each $h_i$ is a nonnegative integer (since $R$ is Cohen--Macaulay) and $h_0=1$. When $(h_0,h_1,\ldots,h_s)$ is the $h$-vector of $R$, the index $s$ is called the {\em socle degree} of $R$. \end{itemize} Let us recall the definition of an almost Gorenstein {\em graded} ring. \begin{Definition}[{\cite[Definition 1.5]{GTT}}]{\em We say that a Cohen--Macaulay graded ring $R$ is {\em almost Gorenstein} if there exists an exact sequence \begin{align}\label{ex_seq} 0 \rightarrow R \xrightarrow{\phi} \omega_R(-a) \rightarrow C \rightarrow 0 \end{align} of graded $R$-modules with $\mu(C)=e(C)$, where $\phi$ is an injection of degree 0. }\end{Definition} \subsection{Properties on $C$} For a while, we consider the condition: \begin{align}\label{condition} \text{there exists an injection $\phi : R \rightarrow \omega_R(-a)$ of degree 0}. \end{align} This is a necessary condition for $R$ to be almost Gorenstein. Let $C=\cok(\phi)$.
Then $C$ is a Cohen--Macaulay $R$-module of dimension $d-1$ if $C\not=0$ (see \cite[Lemma 3.1]{GTT}). First, we see that the condition \eqref{condition} is satisfied in some cases. For example: \begin{Proposition}\label{domeinnobaai} When $R$ is a domain, $R$ always satisfies the condition \eqref{condition}. \end{Proposition} \begin{proof} When $R$ is a domain, a canonical module $\omega_R$ can be taken as an ideal $I_R \subset R$ (\cite[Section 3.3]{BH}). Replace $\omega_R$ by a canonical ideal $I_R$ of $R$. Consider any $R$-homomorphism $\phi : R \rightarrow I_R(-a)$ of degree 0 with $\phi \not=0$. Take $x \in \ker(\phi)$. Then $\phi(x)=x\cdot\phi(1)=0$. Since $x \in R$, $\phi(1) \in I_R \subset R$ and $R$ is a domain, we see $x=0$ or $\phi(1)=0$. Since $\phi(1) \not=0$, we obtain $x=0$, i.e., $\ker(\phi)=0$, as desired. \end{proof} Next, we would like to describe $\mu(C)$ and $e(C)$ in terms of some invariants of $R$. \begin{Proposition}\label{myu-} Assume that $R$ satisfies \eqref{condition}. Then $\mu(C)=r(R)-1$. \end{Proposition} \begin{proof} Since $\phi$ is a degree 0 injection and every nonzero element of $(\omega_R(-a))_0$ can be part of a minimal system of generators of $\omega_R(-a)$, the element $\phi(1)$ can be taken as one of the minimal generators of $\omega_R(-a)$. Hence, the assertion follows from the exact sequence \eqref{ex_seq}. \end{proof} \begin{Proposition}\label{i-} Assume that $R$ satisfies \eqref{condition}. Let $(h_0,h_1,\ldots,h_s)$ be the $h$-vector of $R$. Then we have \begin{align}\label{ccc} [[C]] = \frac{\sum_{j=0}^{s-1}((h_s+\cdots+h_{s-j})-(h_0+\cdots+h_j))t^j}{(1-t)^{d-1}}. \end{align} In particular, we have $e(C)=\sum_{j=0}^{s-1}((h_s+\cdots+h_{s-j})-(h_0+\cdots+h_j))$. \end{Proposition} \begin{proof} We know that $[[ R ]]=\sum_{j=0}^sh_jt^j/(1-t)^d$ and it is also known (e.g., see \cite[Corollary 4.4.6]{BH}) that \begin{align*} [[ \omega_R(-a)]]=\frac{\sum_{j=0}^s h_{s-j}t^j}{(1-t)^d}.
\end{align*} From the exact sequence \eqref{ex_seq}, we obtain $[[C]]=[[\omega_R(-a)]]-[[R]]$. On the other hand, since $C$ is a Cohen--Macaulay module of dimension $d-1$, $[[C]]$ is of the form $\sum_{j \geq 0}h_j't^j/(1-t)^{d-1}$, where each $h_j'$ is a nonnegative integer. Hence we obtain the equality $$\frac{\sum_{j=0}^s (h_{s-j}-h_j)t^j}{(1-t)^d}=\frac{\left(\sum_{j \geq 0}h_j't^j\right)(1-t)}{(1-t)^d}.$$ This implies that $h_j'=(h_s+\cdots+h_{s-j})-(h_0+\cdots+h_j)$ for $j=0,1,\ldots,s$, as required. \end{proof} From this proposition, we obtain: \begin{Corollary}\label{prop1} Assume that $R$ satisfies \eqref{condition}. Let $(h_0,h_1,\ldots,h_s)$ be the $h$-vector of $R$. Then we have the inequality $$h_s+\cdots+h_{s-j} \geq h_0+\cdots+h_j$$ for each $j=0,1,\ldots,\lfloor s/2 \rfloor$, and $R$ is Gorenstein if and only if all the equalities hold, namely, $h_j=h_{s-j}$ for each $j=0,1,\ldots,\lfloor s/2 \rfloor$. \end{Corollary} \begin{proof} The above inequality directly follows from \eqref{ccc} and the Cohen--Macaulayness of $C$. Moreover, since $R$ is Gorenstein if and only if $C=0$, the second assertion also follows. \end{proof} \begin{Remark}{\em Corollary \ref{prop1} is just an analogue of Stanley's inequality for the $h$-vectors of semi-standard Cohen--Macaulay domains (\cite[Theorem 2.1]{StanleyCMD}). In the case of the Stanley--Reisner ring of a uniformly Cohen--Macaulay simplicial complex, this inequality is also shown in \cite[Proposition 2.7]{MaMu}.
}\end{Remark} \begin{Corollary}\label{tokuchou} The following four conditions are equivalent: \begin{itemize} \item[(a)] there exists an injection $\phi : R \rightarrow \omega_R(-a)$ of degree 0 such that $C=\cok(\phi)$ satisfies $\mu(C)=e(C)$, namely, $R$ is almost Gorenstein; \item[(b)] every injection $\phi : R \rightarrow \omega_R(-a)$ of degree 0 satisfies $\mu(C)=e(C)$; \item[(c)] we have \begin{align*} r(R)-1=\sum_{j=0}^{s-1}((h_s+\cdots+h_{s-j})-(h_0+\cdots+h_j)); \end{align*} \item[(d)] we have \begin{align}\label{hutousiki} \dim_k (\omega_R \otimes k)_{-a+j} = (h_s+\cdots+h_{s-j})-(h_0+\cdots+h_j) \end{align} for each $j=1,\ldots,s-1$. \end{itemize} In particular, the choice of $\phi$ does not matter for the almost Gorensteinness of $R$. \end{Corollary} \begin{proof} First, we observe that every nonzero element of $\omega_R(-a)$ of degree 0 can be taken as a member of a minimal system of generators of $\omega_R(-a)$. Thus, by Proposition \ref{myu-} and Proposition \ref{i-}, we obtain the equivalence of (a), (b) and (c). Next, the implication (d) $\Rightarrow$ (c) easily follows from the facts $r(R)=\sum_{j=0}^{s-1} \dim_k (\omega_R \otimes k)_{-a+j}$ and $h_s=\dim_k (\omega_R \otimes k)_{-a}$. Finally, consider the implication (c) $\Rightarrow$ (d). From the exact sequence \eqref{ex_seq}, we obtain the exact sequence $R \otimes k \rightarrow \omega_R(-a) \otimes k \rightarrow C \otimes k \rightarrow 0.$ Since $(R \otimes k)_j = 0$ for $j \geq 1$, by taking the degree-$j$ components with $j \geq 1$ of this sequence, we obtain the isomorphism $$(\omega_R \otimes k)_{-a+j} \cong (C \otimes k)_j$$ for $j \geq 1$. In general, for a graded $R$-module $M$ of dimension $e$, we see that $\dim_k (M \otimes k)_j \leq h_j'$ if $[[M]]=\sum_{j \in {\NZQ Z}}h_j't^j/(1-t)^e$.
Hence, we obtain the inequality $$\dim_k(\omega_R \otimes k)_{-a+j} \leq (h_s+\cdots+h_{s-j})-(h_0+\cdots+h_j)$$ for each $j=1,\ldots,s-1$ by Proposition \ref{i-}. Therefore, if $\dim_k(\omega_R \otimes k)_{-a+j} < (h_s+\cdots+h_{s-j})-(h_0+\cdots+h_j)$ for some $j$, then, since $r(R)=\sum_{j=0}^{s-1} \dim_k (\omega_R \otimes k)_{-a+j}$ and $h_s=\dim_k (\omega_R \otimes k)_{-a}$, we conclude that $r(R)-1 < \sum_{j=0}^{s-1} ((h_s+\cdots+h_{s-j})-(h_0+\cdots+h_j))$, as desired. \end{proof} \bigskip \section{A sufficient condition to be almost Gorenstein in terms of $h$-vectors}\label{juubun} Let $R$ be a Cohen--Macaulay homogeneous ring of dimension $d$ satisfying the condition \eqref{condition} and let $(h_0,h_1,\ldots,h_s)$ be the $h$-vector of $R$. One of the main results of this paper is the following: \begin{Theorem}\label{suff} If $h_i=h_{s-i}$ for $i=0,1,\ldots, \lfloor s/2 \rfloor -1$, then $R$ is almost Gorenstein. \end{Theorem} This theorem should be compared with the following theorem of Stanley: \begin{Theorem}[{\cite[Theorem 4.4]{StanleyHF}}]\label{symm} Assume that $R$ is a domain. Then $R$ is Gorenstein if and only if $h_i=h_{s-i}$ for $i=0,1,\ldots, \lfloor s/2 \rfloor$. \end{Theorem} This theorem says that, under the assumption that $R$ is a domain, the symmetry of the $h$-vector of $R$ is a necessary and sufficient condition for $R$ to be Gorenstein. On the other hand, Theorem \ref{suff} says that a certain ``almost symmetry'' of the $h$-vector of $R$ is a sufficient condition for $R$ to be almost Gorenstein. \begin{proof}[Proof of Theorem \ref{suff}] When $s$ is even, $h_{\lfloor s/2 \rfloor}=h_{s-\lfloor s/2 \rfloor}$ is automatically satisfied. Hence, $R$ is Gorenstein by Corollary \ref{prop1}; in particular, it is almost Gorenstein. Assume that $s$ is odd. Let $C=\cok \phi$, where $\phi$ is an injection as in \eqref{condition}. It suffices to show that $\mu(C)=e(C)$.
By Proposition \ref{i-}, we have $$[[C]]=\frac{(h_{(s+1)/2}-h_{(s-1)/2})t^{(s-1)/2}}{(1-t)^{d-1}}.$$ Hence, it is easy to see that $C$ satisfies $\mu(C)=e(C)(=h_{(s+1)/2}-h_{(s-1)/2})$, as required. \end{proof} \bigskip \section{Almost Gorenstein homogeneous domains with small socle degrees}\label{domain} Assume that $R$ satisfies the condition \eqref{condition}. Let $(h_0,h_1,\ldots,h_s)$ be the $h$-vector of $R$. In this section, we discuss conditions for $R$ to be almost Gorenstein in terms of $h$-vectors in the case where $s$ is small. First, we consider the case $s=1$. \begin{Theorem}[{\cite[Theorem 10.4]{GTT}}] In the case $s=1$, $R$ is always almost Gorenstein. Moreover, in this case, $R$ is Gorenstein if and only if $h_1=1$. \end{Theorem} \begin{proof} In the statement of \cite[Theorem 10.4]{GTT}, the condition $a=1-d$ is equivalent to $s=1$, and the condition ``$Q(R)$ is a Gorenstein ring'' is automatically satisfied since $R$ satisfies \eqref{condition} (\cite[Lemma 3.1]{GTT}). Thus, \cite[Theorem 10.4]{GTT} directly implies the assertion. Moreover, the condition for $R$ to be Gorenstein follows from Corollary \ref{prop1}. \end{proof} Next, let us discuss the case $s>1$. It is well known that the inequality $h_s \leq r(R)$ holds in general. We say that $R$ is {\em level} if $h_s=r(R)$ (see, e.g., \cite{HibiASL}). One can see from \cite[Lemma 10.2]{GTT} that if $R$ is level and $s>1$, then $R$ is almost Gorenstein if and only if $R$ is Gorenstein. Before considering the almost Gorensteinness of $R$, we give an upper bound for the Cohen--Macaulay types of Cohen--Macaulay homogeneous {\em domains} in Theorem \ref{upper}; we will use this for the proof of Theorem \ref{hvector}. Most of the discussion for Theorem \ref{upper} is derived from \cite[Section 3]{Yanagawa}. \begin{Theorem}\label{upper} Let $R$ be a Cohen--Macaulay homogeneous domain. Then \begin{align}\label{ineq} r(R) \leq \sum_{i=2}^s h_i - (s-2)h_1.
\end{align} \end{Theorem} \begin{proof} Take a nonzerodivisor $x \in R_1$ (any nonzero linear form will do, since $R$ is a domain). Then we have $r(R)=r(R/(x))$ and the $h$-vectors of $R$ and $R/(x)$ are equal. Hence, by an argument similar to \cite[Lemma 3.1]{Yanagawa}, it suffices to show that for any set of points $X \subset {\NZQ P}^r$ ($r \geq 2$) in uniform position with homogeneous coordinate ring $R$, the inequality \eqref{ineq} holds. Assume that $R$ is the coordinate ring of a set of points $X \subset {\NZQ P}^r$ ($r \geq 2$) in uniform position. Let $s$ be the socle degree of $R$. For $i=1,\ldots,s-1$, let $$\varphi_i : S_1 \otimes (\omega_R)_{-s+i} \longrightarrow (\omega_R)_{-s+i+1}$$ be the multiplication map, where $S=\mathrm{Sym}_k R_1$ is the coordinate ring of ${\NZQ P}^r$. Then this map $\varphi_i$ is 1-generic (\cite[Proposition 1.5]{Yanagawa}). By the theory of 1-generic maps, we have $\dim_k \img \varphi_i \geq \dim_k S_1 + \dim_k (\omega_R)_{-s+i} -1$ for each $i=1,\ldots,s-1$ (see \cite{E}). Hence \begin{align*} \dim_k\img\varphi_i &\geq \dim_k S_1 + \dim_k (\omega_R)_{-s+i} -1 \\ &=h_1+1+(h_s+h_{s-1}+\cdots+h_{s-i+1})-1\\ &=h_1+h_s+h_{s-1}+\cdots+h_{s-i+1}. \end{align*} On the other hand, we have $\dim_k (\omega_R)_{-s+i+1}=h_s+h_{s-1}+\cdots+h_{s-i}$. Thus, we see that $\dim_k (\omega_R)_{-s+i+1}-\dim_k\img\varphi_i \leq h_{s-i}-h_1$. By $\dim_k (\omega_R \otimes k)_{-s+i+1} \leq \dim_k (\omega_R)_{-s+i+1}-\dim_k\img\varphi_i$, we obtain the inequality \begin{align}\label{kagi} \dim_k (\omega_R \otimes k)_{-a+i} \leq h_{s-i}-h_1 \end{align} for $i=1,\ldots,s-1$. (Note that $-a=-s+1$ in this setting.) Since we can see that $(\omega_R \otimes k)_i = 0$ for each $i \geq 1$ and $i \leq -s$ (\cite[Lemma 3.9]{Yanagawa}), we have $\mu (\omega_R) \leq \dim_k (\omega_R)_{-a}+\sum_{i=1}^{s-1}(\dim_k (\omega_R)_{-a+i}-\dim_k\img\varphi_i )$.
Therefore, \begin{align*} r(R)=\mu(\omega_R) &\leq \dim_k (\omega_R)_{-a} + \sum_{i=1}^{s-1}(\dim_k (\omega_R)_{-a+i}-\dim_k\img\varphi_i ) \\ &\leq h_s + \sum_{i=1}^{s-1}(h_{s-i}-h_1) =\sum_{i=2}^s h_i -(s-2)h_1. \end{align*} \end{proof} As an immediate corollary of Theorem \ref{upper}, we obtain the following, which was already proved in \cite{Yanagawa}. \begin{Corollary}[{\cite[Corollary 3.11]{Yanagawa}}]\label{kei_yanagawa} If the socle degree of a Cohen--Macaulay homogeneous domain $R$ is $2$, then $R$ is level. \end{Corollary} \begin{proof} In this case, the inequality \eqref{ineq} implies $r(R) \leq h_2$. On the other hand, as mentioned above, we always have $r(R) \geq h_2$. Hence, $r(R)=h_2$. \end{proof} \bigskip When $s=2$ or $s=3$ and $R$ is a domain, we can characterize the almost Gorensteinness of $R$ in terms of the $h$-vector as follows: \begin{Theorem}\label{hvector} Assume that $R$ is a domain. \begin{itemize} \item[(a)] When $s=2$, the following conditions are equivalent: \begin{itemize} \item[(i)] $R$ is almost Gorenstein; \item[(ii)] $R$ is Gorenstein; \item[(iii)] $h_2=1$. \end{itemize} \item[(b)] When $s=3$, $R$ is almost Gorenstein if and only if $h_3=1$. \end{itemize} \end{Theorem} \begin{proof} (a) When $s=2$, $R$ is always level by Corollary \ref{kei_yanagawa}. Hence, the equivalence (i) $\Leftrightarrow$ (ii) follows. Moreover, (ii) $\Leftrightarrow$ (iii) also holds by Theorem \ref{symm}. \noindent (b) Let $C=\cok \phi$, where $\phi$ is as in \eqref{condition}. From Proposition \ref{myu-} and Proposition \ref{i-}, we have $\mu(C)=r(R)-1$ and $e(C)=3(h_3-h_0)+h_2-h_1$, respectively. By the inequality \eqref{ineq} given in Theorem \ref{upper}, we see that \begin{align*} e(C)-\mu(C) &= 3(h_3-h_0)+h_2-h_1 - (r(R) - 1) \\ &\geq 3(h_3-1)+h_2-h_1 - (h_2+h_3 - h_1 -1) \\ &=2(h_3-1). \end{align*} Hence, if $h_3 > 1$, then $R$ is never almost Gorenstein, since $e(C) - \mu(C) > 0$.
On the other hand, if $h_3=1$, then $R$ is almost Gorenstein by Theorem \ref{suff}, as required. \end{proof} \begin{Remark}{\em In the case $s \geq 4$, we do not know any characterization of almost Gorensteinness in terms of $h$-vectors. In fact, a characterization of almost Gorensteinness purely in terms of $h$-vectors might be impossible. For example, whether $R$ is level or not cannot be determined by the $h$-vector; see \cite[Section 3]{HibiASL}. The examples described in \cite[Section 3]{HibiASL} show that the Cohen--Macaulay type cannot be determined in terms of $h$-vectors in general. Hence, in view of Corollary \ref{tokuchou}, it is natural to expect that we cannot characterize almost Gorensteinness in terms of $h$-vectors, either. }\end{Remark} We leave the following problem open: \begin{Problem} Find, if they exist, examples of Cohen--Macaulay homogeneous domains $R$ and $R'$ such that \begin{itemize} \item the $h$-vectors of $R$ and $R'$ are equal; \item $R$ is almost Gorenstein; \item $R'$ is not almost Gorenstein. \end{itemize} \end{Problem} Moreover, we are also interested in the following question: if $R$ is an almost Gorenstein homogeneous domain with $s \geq 2$, does $h_s=1$ hold? This is true when $s=2$ and $s=3$ by Theorem \ref{hvector}. On the other hand, when $R$ is not a domain, this is not true even in the case $s=2$. In fact, by taking the {\em ridge sum} of Gorenstein simplicial complexes, we can obtain an almost Gorenstein Stanley--Reisner ring with $h_s \geq 2$; for details, see \cite{MaMu}. For domains, however, this is true in general: \begin{Theorem}\label{new} Let $R$ be an almost Gorenstein homogeneous domain and $(h_0,h_1,\ldots,h_s)$ its $h$-vector with $s \geq 2$. Then $h_s=1$. \end{Theorem} \begin{proof} From Corollary \ref{tokuchou} (d), the equality \eqref{hutousiki} holds.
In particular, we see that $$\dim_k(\omega_R \otimes k)_{-a+1}=(h_s+h_{s-1})-(h_0+h_1).$$ On the other hand, since $R$ is a domain, we also obtain $$\dim_k(\omega_R \otimes k)_{-a+1} \leq h_{s-1}-h_1$$ by \eqref{kagi} for $i=1$. Therefore, we conclude that $h_s \leq h_0=1$, i.e., $h_s=1$. \end{proof} \bigskip \section{Examples: almost Gorenstein Ehrhart rings}\label{rei} In this section, we provide some examples of almost Gorenstein homogeneous domains which are given by {\em Ehrhart rings} of lattice polytopes. Before giving examples, let us recall what the Ehrhart ring of a lattice polytope is. Let ${\mathcal P} \subset {\NZQ R}^d$ be a lattice polytope of dimension $d$, i.e., a convex polytope of dimension $d$ all of whose vertices belong to the standard lattice ${\NZQ Z}^d$. Let $k$ be a field. Then we define the $k$-algebra $k[{\mathcal P}]$ as follows: \begin{align*} k[{\mathcal P}]=k[ {\bf X}^{\alpha} Z^n : \alpha \in n{\mathcal P} \cap {\NZQ Z}^d, \; n \in {\NZQ Z}_{\geq 0}], \end{align*} where for $\alpha=(\alpha_1,\ldots,\alpha_d) \in {\NZQ Z}^d$, ${\bf X}^{\alpha} Z^n=X_1^{\alpha_1} \cdots X_d^{\alpha_d} Z^n$ denotes a Laurent monomial in $k[X_1^\pm, \ldots,X_d^\pm, Z]$ and $n{\mathcal P}=\{nv : v \in {\mathcal P}\}$. This $k[{\mathcal P}]$ is a normal Cohen--Macaulay graded domain of dimension $d+1$, where the grading is defined by $\deg ({\bf X}^{\alpha} Z^n) =n$ for $\alpha \in n{\mathcal P} \cap {\NZQ Z}^d$. The ring $k[{\mathcal P}]$ is called the {\em Ehrhart ring} of ${\mathcal P}$. Note that the Ehrhart ring is not necessarily standard graded, but it is semi-standard graded, i.e., $k[{\mathcal P}]$ is a finitely generated module over the subring of $k[{\mathcal P}]$ generated by all the elements of degree 1.
Hence, the Hilbert series of $k[{\mathcal P}]$ (called the {\em Ehrhart series} of ${\mathcal P}$) is of the form: $$[[\; k[{\mathcal P}]\; ]]=\frac{\sum_{i=0}^s h_i^* t^i}{(1-t)^{d+1}}, \;\;\; h_i^* \in {\NZQ Z}_{\geq 0}.$$ The sequence of the coefficients $h^*({\mathcal P})=(h_0^*,h_1^*,\ldots,h_s^*)$ appearing in the numerator of the Ehrhart series is called the {\em $h^*$-vector} (or the $\delta$-vector) of ${\mathcal P}$. It is known that the socle degree $s$ of the Ehrhart ring of ${\mathcal P}$ can be written as follows: \begin{align}\label{socle} s=d+1-\min\{ m \in {\NZQ Z}_{> 0} : m \inte({\mathcal P}) \cap {\NZQ Z}^d \not= \emptyset\}, \end{align} where $\inte({\mathcal P})$ denotes the interior of ${\mathcal P}$. Thus, in particular, $s \leq d$. For more details on Ehrhart rings or $h^*$-vectors, we refer the reader to \cite[Part 2]{HibiRedBook}. First, we supply examples of Ehrhart rings whose almost Gorensteinness is guaranteed by Theorem \ref{suff}. Let ${\bold e}_1,\ldots,{\bold e}_d \in {\NZQ Z}^d$ be the standard basis of ${\NZQ R}^d$, where each ${\bold e}_i$ is the $i$th unit vector, and let $\con(X)$ denote the convex hull of a set $X \subset {\NZQ R}^d$. \begin{Example}{\em Let ${\mathcal P}_1=\con(\{\pm{\bold e}_1,\pm{\bold e}_2,\pm{\bold e}_3, {\bold e}_1+{\bold e}_2+2{\bold e}_3\}) \subset {\NZQ R}^3$. Then ${\mathcal P}_1$ is a lattice polytope of dimension 3 and we see that $k[{\mathcal P}_1]$ is standard graded and $h^*({\mathcal P}_1)=(1,4,7,1)$. Hence, by Theorem \ref{suff}, $k[{\mathcal P}_1]$ is almost Gorenstein, while it is not Gorenstein. Moreover, let ${\mathcal P}_2=\con(\{\pm{\bold e}_1,\ldots,\pm{\bold e}_5, {\bold e}_1+\cdots+{\bold e}_4+2{\bold e}_5\}) \subset {\NZQ R}^5$ be a lattice polytope of dimension 5. Then we see that $k[{\mathcal P}_2]$ is standard graded and $h^*({\mathcal P}_2)=(1,6,16,26,6,1)$.
Hence, by Theorem \ref{suff} again, $k[{\mathcal P}_2]$ is also almost Gorenstein but not Gorenstein. In general, let $d=2e+1$ with $e \geq 1$ and let ${\mathcal P}_e$ be the convex hull of $\{\pm {\bold e}_1,\ldots,\pm {\bold e}_d, {\bold e}_1+\cdots+{\bold e}_{d-1}+2{\bold e}_d\} \subset {\NZQ R}^d$. We computed $h^*({\mathcal P}_e)$ for small values of $e$; the results are as follows: \begin{align*} &e=3: \;\; h^*({\mathcal P}_3)=(1, 8, 29, 64, 99, 29, 8, 1), \\ &e=4: \;\; h^*({\mathcal P}_4)=(1, 10, 46, 130, 256, 382, 130, 46, 10, 1), \\ &e=5: \;\; h^*({\mathcal P}_5)=(1, 12, 67, 232, 562, 1024, 1486, 562, 232, 67, 12, 1), \\ &e=6: \;\; h^*({\mathcal P}_6)=(1, 14, 92, 378, 1093, 2380, 4096, 5812, 2380, 1093, 378, 92, 14, 1). \end{align*} The Ehrhart rings of these polytopes are standard graded, and they are almost Gorenstein by Theorem \ref{suff}. We expect that $k[{\mathcal P}_e]$ is always standard graded and almost Gorenstein for every $e$, although it seems difficult to give a precise proof. }\end{Example} Next, we provide other examples of almost Gorenstein Ehrhart rings. The examples which we will give are obtained from lattice polytopes arising from finite partially ordered sets (posets, for short). Let $P=\{x_1,\ldots,x_n\}$ be a poset with a partial order $\prec$ and let $${\mathcal O}(P)=\{(a_1,\ldots,a_n) \in {\NZQ R}^n : a_i \geq a_j \text{ if }x_i \preceq x_j \text{ in }P, \;\; 0 \leq a_i \leq 1 \text{ for }i=1,\ldots,n\}.$$ This convex polytope ${\mathcal O}(P)$ is called the {\em order polytope} of $P$. It is known that ${\mathcal O}(P)$ is a lattice polytope, and the Ehrhart ring $k[{\mathcal O}(P)]$ is often called the {\em Hibi ring} of $P$.
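As a quick computational illustration of these definitions, the following sketch (our own toy example, not one of the posets studied in this section) brute-forces the lattice-point counts $\sharp(n{\mathcal O}(P) \cap {\NZQ Z}^3)$ for the three-element poset with the single relation $x_1 \prec x_2$; the 0/1 lattice points of ${\mathcal O}(P)$ correspond to the order ideals of $P$, and the counts agree with the Ehrhart series $(1+2t)/(1-t)^4$ of this poset, which we computed by hand.

```python
from itertools import product

# Toy example (ours, not from the text): P = {x1, x2, x3} with the
# single relation x1 < x2; x3 is incomparable to the other elements.
relations = [(1, 2)]          # x1 < x2, encoded by 1-based indices
n_elems = 3

def ehrhart_count(n):
    """Number of lattice points of n*O(P), i.e. integer vectors
    (a_1, a_2, a_3) with 0 <= a_i <= n and a_i >= a_j whenever x_i <= x_j."""
    count = 0
    for a in product(range(n + 1), repeat=n_elems):
        if all(a[i - 1] >= a[j - 1] for (i, j) in relations):
            count += 1
    return count

# The 0/1 lattice points of O(P) correspond to the order ideals of P.
ideals = [a for a in product((0, 1), repeat=n_elems)
          if all(a[i - 1] >= a[j - 1] for (i, j) in relations)]

print(ehrhart_count(1), ehrhart_count(2), ehrhart_count(3), len(ideals))
# Since the Ehrhart series of this poset is (1+2t)/(1-t)^4, the counts
# agree with 1*C(n+3,3) + 2*C(n+2,3), i.e. 6, 18, 40 for n = 1, 2, 3.
```

In particular, the six 0/1 points are exactly the indicator vectors of the six order ideals of this poset.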
It is known that: \begin{itemize} \item $\dim {\mathcal O}(P)=\sharp P$; \item $k[{\mathcal O}(P)]$ is standard graded and an algebra with straightening laws on $P$ (\cite{Hibilat}); \item the socle degree of $k[{\mathcal O}(P)]$ is equal to $\sharp P -\max\{ \sharp C : C \text{ is a chain in }P\}$, where a chain is a totally ordered subset of $P$. \end{itemize} For more detailed information on order polytopes, we refer the reader to \cite{StanleyEC}; for Hibi rings, see \cite{EHHM, Hibilat, HibiASL} and the references therein. We now describe the $h^*$-vector of ${\mathcal O}(P)$ in terms of $P$. Let $P=\{x_1,\ldots,x_n\}$ be a poset with a {\em natural} partial order $\prec$, i.e., if $x_i \prec x_j$, then $i<j$. We say that a bijection $\sigma : P \rightarrow [n]$ from $P$ to the $n$-element chain $[n]$ is {\em order-preserving} if $x \preceq y$ implies $\sigma(x) \leq \sigma(y)$. We can identify an order-preserving bijection $\sigma : P \rightarrow [n]$ with the permutation $\begin{pmatrix} 1 &\cdots &n \\ \sigma^{-1}(1) &\cdots &\sigma^{-1}(n) \end{pmatrix} \in S_n$. The set of all the permutations obtained in this way is denoted by ${\mathcal Q}(P)$. For example, if $P=\{x_1,x_2,x_3,x_4\}$ is a poset having a natural partial order $x_1 \prec x_3$, $x_2 \prec x_3$ and $x_2 \prec x_4$, then ${\mathcal Q}(P)=\{1234, 2134, 1243, 2143, 2413\} \subset S_4$. Here, we write $a_1a_2 \cdots a_n$ instead of the usual notation $\begin{pmatrix} 1 &2 &\cdots &n \\ a_1 &a_2 &\cdots &a_n \end{pmatrix} \in S_n$. Moreover, for $\pi \in S_n$, let $D(\pi)=\{ i \in \{1,\ldots,n-1\}: \pi(i)>\pi(i+1)\}$ and $d(\pi)=\sharp D(\pi)$. \begin{Proposition}[{cf. \cite[Theorem 4.5.14]{StanleyEC}}]\label{h_order_poly} Let $P=\{x_1,\ldots,x_n\}$ be a poset with a natural partial order and let $h^*({\mathcal O}(P))(=h(k[{\mathcal O}(P)]))=(h_0,h_1,\ldots,h_s)$.
Then $$h_i=\sharp \{ \pi \in {\mathcal Q}(P) : d(\pi)=i\}$$ for each $i=0,1,\ldots,s$. \end{Proposition} Now, we consider the following poset: for $m \geq 3$, let $P_m=\{x_1,x_2,\ldots,x_{2m-1},x_{2m}\}$ be the poset with the natural partial order $x_1 \prec x_3 \prec \cdots \prec x_{2m-1}$, $x_2 \prec x_4 \prec \cdots \prec x_{2m}$ and $x_1 \prec x_{2m}$. Figure \ref{poset} below is the Hasse diagram of $P_m$. \begin{figure}[htb!] \centering \includegraphics[scale=0.3]{poset.eps} \caption{the poset $P_m$}\label{poset} \end{figure} \begin{Theorem}\label{Hibiring} Let $P_m$ be as above. Then $k[{\mathcal O}(P_m)]$ is an almost Gorenstein homogeneous domain whose $h$-vector $(h_0,h_1,\ldots,h_m)$ satisfies $h_i=h_{m-i}$ for $i=0,2,3,\ldots,\lfloor m/2 \rfloor$ and $h_{m-1}=h_1+1$. \end{Theorem} Theorem \ref{Hibiring} says that there is an almost Gorenstein homogeneous ring whose $h$-vector exhibits a different kind of ``almost symmetry''. \subsection{The $h$-vector of $R_m$} Let $R_m=k[{\mathcal O}(P_m)]$. Then we can compute by \eqref{socle} that the socle degree of $R_m$ is equal to $m$. First, for the proof of Theorem \ref{Hibiring}, let us examine the set ${\mathcal Q}(P_m)$ in detail. Let \begin{align*} {\mathcal S}=\{ \sigma \in S_{2m} : \; \sigma(1)<\sigma(3)<\cdots<\sigma(2m-1), \sigma(2)<\sigma(4)<\cdots<\sigma(2m), \sigma(1)<\sigma(2m)\} \end{align*} and let ${\mathcal S}^{-1}=\{\sigma^{-1} : \sigma \in {\mathcal S}\}$. Then ${\mathcal S}^{-1}={\mathcal Q}(P_m)$. Let $\tau=1325476\cdots (2m-4)(2m-1)(2m-2)2m \in S_{2m}$. Then we see that $\tau \in {\mathcal S}^{-1}$. Let ${\mathcal T}={\mathcal S}^{-1} \setminus \{\tau\}$ and let ${\mathcal T}_i=\{\pi \in {\mathcal T} : d(\pi)=i\}$ for $i=0,1,\ldots,m$. We construct a map $\xi_i : {\mathcal T}_i \rightarrow {\mathcal T}_{m-i}$ for each $i=0,1,\ldots,m$ as follows. Fix $\pi \in {\mathcal T}_i$.
\begin{itemize} \item Let $A=\{x_1,x_3,\ldots,x_{2m-1}\} \setminus \{x_{\pi(j)},x_{\pi(j+1)} : j \in D(\pi)\}$ and $B=\{x_2,x_4,\ldots,x_{2m}\} \setminus \{x_{\pi(j)},x_{\pi(j+1)} : j \in D(\pi)\}$. Since the parities of $\pi(j)$ and $\pi(j+1)$ are different for each $j \in D(\pi)$ and $j'+1 \not\in D(\pi)$ for each $j' \in D(\pi)$, we obtain that $\sharp A = \sharp B = m-i$. Let $p_1,p_2,\ldots,p_{m-i}$ (resp. $q_1,q_2,\ldots,q_{m-i}$) be the positive integers with $A=\{x_{p_j} : 1 \leq j \leq m-i\}$ (resp. $B=\{x_{q_j} : 1 \leq j \leq m-i\}$) such that $p_1<p_2<\cdots<p_{m-i}$ (resp. $q_1<q_2<\cdots<q_{m-i}$). \item We construct the permutation $\widetilde{\pi}=a_1a_2 \cdots a_{2m}$ by defining each $a_\ell$ inductively as follows: Let $a_1=1$ if $p_1>1$ or $a_1=2$ if $p_1=1$, and \begin{itemize} \item for $1 \leq \ell <(p_1+q_1+1)/2$, let \begin{align*} a_{\ell+1}= \begin{cases} a_\ell + 1, \; &\text{ if } a_\ell \leq \min\{p_1,q_1\}-2, \\ p_1, &\text{ if }a_\ell=q_1>p_1, \\ q_1, &\text{ if }a_\ell=p_1>q_1, \\ a_\ell+2, &\text{ otherwise}; \end{cases} \end{align*} \item for $(p_{j-1}+q_{j-1}+1)/2 \leq \ell < (p_j+q_j+1)/2$ with some $j \in \{2,\ldots,m-i\}$, let \begin{align*} a_{\ell+1}= \begin{cases} a_\ell + 1, \; &\text{ if } \max\{p_{j-1},q_{j-1}\} + 1 \leq a_\ell \leq \min\{p_j,q_j\}-2, \\ p_{j-1}+2, &\text{ if }a_\ell \leq p_{j-1}, \; p_{j-1} > q_{j-1} \text{ and }a_\ell+2=q_j, \\ q_{j-1}+2, &\text{ if }a_\ell \leq q_{j-1}, \; q_{j-1} > p_{j-1} \text{ and }a_\ell+2=p_j, \\ p_j, &\text{ if }a_\ell = q_j>p_j, \\ q_j, &\text{ if }a_\ell = p_j>q_j, \\ a_\ell+2, &\text{ otherwise}; \end{cases} \end{align*} \item for $\ell \geq (p_{m-i}+q_{m-i}+1)/2$, let \begin{align*} a_{\ell+1}= \begin{cases} a_\ell + 1, \; &\text{ if } a_\ell \geq \max\{p_{m-i},q_{m-i}\} + 1, \\ a_\ell+2, &\text{ otherwise}. \end{cases} \end{align*} \end{itemize} See Example \ref{rensyuu} below for this construction.
\item We define $\xi_i(\pi)=\widetilde{\pi}$. \end{itemize} \begin{Example}\label{rensyuu}{\em Let $m=4$ and consider $\pi=13246857$. Then $d(\pi)=2$ and $D(\pi)=\{2,6\}$. Thus $A=\{x_1,x_7\}$, $B=\{x_4,x_6\}$ and $p_1=1, p_2=7, q_1=4, q_2=6$. We construct $\widetilde{\pi}=a_1a_2\cdots a_8$ as follows: \begin{align*} &a_1=2 \text{ since }p_1=1, \; a_2=a_1+2=4, \; a_3=p_1=1, \\ &a_4=a_3+2=3, \; a_5=a_4+2=5, \; a_6=a_5+2=7, \; a_7=q_2=6 \text{ (since }a_6=p_2>q_2), \\ &a_8=a_7+2=8. \end{align*} Thus, for this $\pi \in {\mathcal T}_2$, we have $\widetilde{\pi}=24135768 \in {\mathcal T}_{4-2}$. }\end{Example} \begin{Lemma}\label{hodai1} For this construction, \begin{itemize} \item[(a)] $\widetilde{\pi} \in S_{2m}$ with $d(\widetilde{\pi})=m-i$; \item[(b)] the map $\overline{\pi}^{-1} : P \rightarrow [2m]$ defined by $\overline{\pi}^{-1}(x_j)=\widetilde{\pi}^{-1}(j)$ is order-preserving; \item[(c)] $\xi_i$ is bijective. \end{itemize} \end{Lemma} \begin{proof} (a) For $(p_{j-1}+q_{j-1}+1)/2 \leq \ell < (p_j+q_j+1)/2$ with some $j \in \{2,\ldots,m-i\}$, by the definition of $a_\ell$, we can observe that $p_{j-1} \leq a_\ell \leq p_j$ when $a_\ell$ is odd and $q_{j-1} \leq a_\ell \leq q_j$ when $a_\ell$ is even. Moreover, we can also observe that for $(p_{j-1}+q_{j-1}+1)/2 \leq \ell, \ell' < (p_j+q_j+1)/2$ with $\ell \not= \ell'$, when the parities of $a_\ell$ and $a_{\ell'}$ are equal, $a_\ell < a_{\ell'}$ if and only if $\ell < \ell'$. Similar assertions also hold in the cases where $\ell < (p_1+q_1+1)/2$ and $\ell \geq (p_{m-i}+q_{m-i}+1)/2$. Thus, $a_1,a_2,\ldots,a_{2m}$ are pairwise distinct integers with $1 \leq a_\ell \leq 2m$ for each $\ell$. Hence, $\widetilde{\pi}$ indeed defines a permutation, i.e., $\widetilde{\pi} \in S_{2m}$. In addition, we also have $D(\widetilde{\pi})=\{(p_j+q_j-1)/2 : j=1,\ldots,m-i\}$. In particular, $d(\widetilde{\pi})=m-i$.
\noindent (b) As in (a) above, we know that when the parities of $a_\ell$ and $a_{\ell'}$ are equal, $a_\ell<a_{\ell'}$ if and only if $\ell<\ell'$. Moreover, one has $\widetilde{\pi}^{-1}(a_\ell)=\ell$ for each $\ell$. On the other hand, we know that if $a_\ell = 1$ and $a_{\ell'}=2m$, then $\ell < \ell'$. In fact, the situation $a_\ell=1$, $a_{\ell'}=2m$ and $\ell>\ell'$ could happen only if $(p_1, q_1)=(1,2m)$ and $m-i=1$. However, this never happens because $\pi \not= 1325476\cdots (2m-4)(2m-1)(2m-2)2m = \tau$. Hence, the map $\overline{\pi}^{-1} : P \rightarrow [2m]$ defined by $\overline{\pi}^{-1}(x_{a_\ell})=\ell$ is order-preserving. \noindent (c) Since $\widetilde{\pi}$ is uniquely determined by $p_1,\ldots,p_{m-i},q_1,\ldots,q_{m-i}$, it is obvious that $\xi_i$ is injective. By the injectivity of $\xi_i : {\mathcal T}_i \rightarrow {\mathcal T}_{m-i}$ and of $\xi_{m-i} : {\mathcal T}_{m-i} \rightarrow {\mathcal T}_i$ for each $i=0,1,\ldots,m$, we obtain the bijectivity of $\xi_i$ for each $i$, as required. \end{proof} \subsection{The Cohen--Macaulay type of $R_m$} Next, let us estimate the Cohen--Macaulay type of $R_m$ by considering its canonical ideal. Given a poset $P$, let $\hat{P} = P \cup \{\hat{0}, \hat{1}\}$ with $\hat{0} \prec x \prec \hat{1}$ for every $x \in P$. A map $v : \hat{P} \rightarrow {\NZQ Z}_{\geq 0}$ is called {\em order-reversing} (resp. {\em strictly order-reversing}) if $v(x) \leq v(y)$ (resp. $v(x)<v(y)$) for each $x,y \in \hat{P}$ with $x \succeq y$ (resp. $x \succ y$). Let $S(\hat{P})$ (resp. $T(\hat{P})$) denote the set of all order-reversing (resp. all strictly order-reversing) maps $v : \hat{P} \rightarrow {\NZQ Z}_{\geq 0}$ with $v(\hat{1})=0$.
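By a result of Hibi recalled in the next paragraph, the maps $v \in T(\hat{P})$ with $v(\hat{0})=n$ index a $k$-basis of the degree-$n$ part of the canonical ideal of $k[{\mathcal O}(P)]$. As an illustration (ours, not part of the proof), one can check this by brute force for the four-element example poset used earlier ($x_1 \prec x_3$, $x_2 \prec x_3$, $x_2 \prec x_4$): its Hibi ring has $h$-vector $(1,3,1)$ and dimension $5$, so one expects $[[I_R]]=t^3(1+3t+t^2)/(1-t)^5$.

```python
from itertools import product

# Illustration (ours): the four-element poset from the earlier example,
# x1 < x3, x2 < x3, x2 < x4, encoded with 0-based indices.  We count
# strictly order-reversing maps v : P-hat -> Z_{>=0} with v(1-hat) = 0
# and prescribed v(0-hat) = n; by Hibi's result these counts should be
# the graded dimensions of the canonical ideal of k[O(P)].
relations = [(0, 2), (1, 2), (1, 3)]   # (i, j) means x_i < x_j

def dim_canonical(n):
    """Degree-n count: strict order reversal forces 0 < v(x_i) < n for
    every element, and v(x_j) < v(x_i) whenever x_i < x_j."""
    return sum(
        1
        for v in product(range(1, n), repeat=4)
        if all(v[j] < v[i] for (i, j) in relations)
    )

print([dim_canonical(n) for n in range(3, 6)])
# The expected series t^3(1+3t+t^2)/(1-t)^5 has coefficients
# 1, 8, 31 in degrees 3, 4, 5.
```

The counts $1, 8, 31$ match the coefficients of $t^3(1+3t+t^2)/(1-t)^5$ in degrees $3$, $4$, $5$; here the $h$-vector $(1,3,1)$ is symmetric, so this particular Hibi ring is Gorenstein, unlike the rings $R_m$ studied below.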
It is shown in \cite{Hibilat} that $k[{\mathcal O}(P)]$ has a $k$-basis consisting of the monomials of the form $$Z^{v(\hat{0})} \prod_{p \in P}X_p^{v(p)}, \;\;\; v \in S(\hat{P}), $$ and it is also shown there that the monomials of the form $$Z^{v(\hat{0})} \prod_{p \in P}X_p^{v(p)}, \;\;\; v \in T(\hat{P}), $$ form a $k$-basis of the canonical ideal $I_{k[{\mathcal O}(P)]} \subset k[{\mathcal O}(P)]$. Let $T_0(\hat{P}) \subset T(\hat{P})$ denote the subset corresponding to a minimal set of generators of $I_{k[{\mathcal O}(P)]}$. Note that $k[{\mathcal O}(P)]$ is standard graded with the grading $\deg (Z^{v(\hat{0})} \prod_{p \in P}X_p^{v(p)}) = v(\hat{0})$. Concerning the Cohen--Macaulay type of $R_m$, we have the following: \begin{Lemma}\label{hodai2} We have $\sharp T_0(\widehat{P_m}) \geq m-1$, i.e., $r(R_m)=\mu(I_{R_m}) \geq m-1$. \end{Lemma} \begin{proof} For $i=1,\ldots,m-1$, let $v_i : P_m \rightarrow {\NZQ Z}_{\geq 0}$ be the map defined by \begin{align*} v_i(x_j)= \begin{cases} m+1-\frac{j+1}{2}, \;\; &\text{ if $j$ is odd}, \\ m+i-\frac{j}{2}, \;\; &\text{ if $j$ is even} \end{cases} \end{align*} for $j=1,\ldots,2m$. Then each $v_i$ is a strictly order-reversing map. We will show that $v_i \in T_0(\widehat{P_m})$ for each $i=1,\ldots,m-1$. Assume that $v_i$ can be written as a sum of $v \in T(\widehat{P_m})$ and $w \in S(\widehat{P_m})$, i.e., $v_i(x)=v(x)+w(x)$ for each $x \in P$. Then we have $v_i(x_j)-v(x_j)=w(x_j) \geq 0$ for each $j$. Moreover, since $\hat{1} \succ x_{2m-1} \succ x_{2m-3} \succ \cdots \succ x_1 \succ \hat{0}$, $v \in T(\widehat{P_m})$ and $v(\hat{1})=0$, we have $v(x_{2q-1}) \geq m+1-q = v_i(x_{2q-1})$, equivalently, $v_i(x_{2q-1})- v(x_{2q-1}) \leq 0$ for each $q=1,2,\ldots,m$. Hence, we obtain $v_i(x_j)=v(x_j)$ for each $j=1,3,\ldots,2m-1$.
On the other hand, if there is $1 \leq r < m$ with $v(x_{2r})-v(x_{2r+2}) \geq 2$, then we see that \begin{align*} w(x_{2r+2})&=v_i(x_{2r+2})-v(x_{2r+2})=(v_i(x_{2r+2})+1)-(v(x_{2r+2})+1) \\ &\geq v_i(x_{2r})-(v(x_{2r}) - 1) > w(x_{2r}), \end{align*} a contradiction to $w \in S(\widehat{P_m})$. Hence, $v(x_{2r})-v(x_{2r+2}) \leq 1$ for each $1 \leq r < m$, i.e., $v(x_{2r})-v(x_{2r+2}) = 1$ for each $r$ because $v \in T(\widehat{P_m})$. Let $v(x_{2m})=\alpha \in {\NZQ Z}_{>0}$. \begin{itemize} \item Suppose that $\alpha>i$. Then we have $w(x_{2m})=v_i(x_{2m})-v(x_{2m})=i-\alpha < 0$, a contradiction. \item Suppose that $\alpha<i$. Then we have $w(x_{2m})> 0$. However, as observed above, we also have $w(x_1)=v_i(x_1)-v(x_1)=0$. Since $x_1 \prec x_{2m}$ in $P_m$, we should have $w(x_1) \geq w(x_{2m})$, a contradiction. \end{itemize} Thus, $v(x_{2m})=i$. By $v(x_{2r})-v(x_{2r+2}) = 1$ for each $1 \leq r < m$, we also have $v(x_{2r})=m+i-r$. Therefore, $v=v_i$ and $w=0$. This implies that $v_i \in T_0(\widehat{P_m})$, as required. \end{proof} \subsection{Proof of Theorem \ref{Hibiring}} We are now in a position to give the proof of Theorem \ref{Hibiring}. We work with the same notation as above. Let $C=\cok \phi$, where $\phi : R_m \rightarrow I_{R_m}(-a)$ is an injection, and let $(h_0,h_1,\ldots,h_m)$ be the $h$-vector of $R_m$. By the above discussions and Lemma \ref{hodai1}, there exists a bijection $\xi_i$ between ${\mathcal T}_i$ and ${\mathcal T}_{m-i}$. In particular, we have $\sharp {\mathcal T}_i = \sharp {\mathcal T}_{m-i}$. On the other hand, by Proposition \ref{h_order_poly}, we have $h_i=\sharp \{\pi \in {\mathcal S}^{-1} : d(\pi)=i\}$ for each $i=0,1,\ldots,m$. Since ${\mathcal S}^{-1} = \bigcup_{i=0}^m {\mathcal T}_i \cup \{\tau\}$ and $d(\tau)=m-1$, we conclude that $(h_0,h_1,\ldots,h_m)$ satisfies $h_i=h_{m-i}$ for $i=0,2,3,\ldots,\lfloor m/2 \rfloor$ and $h_{m-1}=h_1+1$.
Therefore, by Proposition \ref{i-}, we obtain $$e(C)=\sum_{j=0}^m ((h_m+\cdots+h_{m-j})-(h_0+\cdots+h_j))=\sum_{j=1}^{m-2}(h_{m-1}-h_1)=m-2.$$ In addition, it follows from Proposition \ref{myu-} and Lemma \ref{hodai2} that $\mu(C)=r(R_m)-1 \geq m-2.$ Hence, we see that $e(C) - \mu(C) \leq 0$. Since $e(C) - \mu(C) \geq 0$ is also satisfied by \eqref{ookii}, we conclude that $C$ satisfies $e(C)=\mu(C)$. Therefore, by Corollary \ref{tokuchou}, $R_m$ is almost Gorenstein, as desired. \qedsymbol \begin{Example}{\em Let $Q_m=\{x_1,\ldots,x_{2m}\}$ be the poset with the partial order $x_1 \prec x_3 \prec \cdots \prec x_{2m-1}$, $x_2 \prec x_4 \prec \cdots \prec x_{2m}$, $x_1 \prec x_{2m}$ and $x_2 \prec x_{2m-1}$. By arguments similar to the above, we see that $k[{\mathcal O}(Q_m)]$ is also almost Gorenstein with $r(k[{\mathcal O}(Q_m)])=2m-3$. }\end{Example} \bigskip \begin{thebibliography}{10} \bibitem{BF} V. Barucci and R. Fr\"oberg, One-dimensional almost Gorenstein rings, {\em J. Algebra} {\bf 188} (1997), 418--442. \bibitem{BH} W. Bruns and J. Herzog, ``Cohen--Macaulay rings,'' Cambridge University Press, 1993. \bibitem{E} D. Eisenbud, Linear sections of determinantal varieties, {\em Amer. J. Math.} {\bf 110} (1988), 541--575. \bibitem{EHHM} V. Ene, J. Herzog, T. Hibi and S. S. Madani, Pseudo-Gorenstein and level Hibi rings, {\em J. Algebra} {\bf 431} (2015), 138--161. \bibitem{GMP} S. Goto, N. Matsuoka and T. T. Phuong, Almost Gorenstein rings, {\em J. Algebra} {\bf 379} (2013), 355--381. \bibitem{GTT} S. Goto, R. Takahashi and N. Taniguchi, Almost Gorenstein rings -- towards a theory of higher dimension, {\em J. Pure Appl. Algebra} {\bf 219} (2015), 2666--2712. \bibitem{Hibilat} T. Hibi, Distributive lattices, affine semigroup rings and algebras with straightening laws, in: ``Commutative Algebra and Combinatorics'' (M. Nagata and H. Matsumura, Eds.), Adv. Stud.
{\em Pure Math.} {\bf 11}, North-Holland, Amsterdam, (1987), 93--109. \bibitem{HibiASL} T. Hibi, Level rings and algebras with straightening laws, {\em J. Algebra} {\bf 117} (1988), 343--362. \bibitem{HibiRedBook} T. Hibi, ``Algebraic Combinatorics on Convex Polytopes,'' Carslaw Publications, Glebe NSW, Australia, 1992. \bibitem{MaMu} N. Matsuoka and S. Murai, Uniformly Cohen--Macaulay simplicial complexes, arXiv:1405.7438v1. \bibitem{Yanagawa} K. Yanagawa, Castelnuovo's Lemma and $h$-vectors of Cohen--Macaulay homogeneous domains, {\em J. Pure Appl. Algebra} {\bf 105} (1995), 107--116. \bibitem{StanleyEC} R. P. Stanley, ``Enumerative Combinatorics, Volume 1,'' Wadsworth \& Brooks/Cole, Monterey, Calif., 1986. \bibitem{StanleyHF} R. P. Stanley, Hilbert functions of graded algebras, {\em Adv. Math.} {\bf 28} (1978), 57--83. \bibitem{StanleyCMD} R. P. Stanley, On the Hilbert function of a graded Cohen--Macaulay domain, {\em J. Pure Appl. Algebra} {\bf 73} (1991), 307--314. \end{thebibliography} \end{document}
\begin{document} \title[On finite groups with exactly two non-abelian centralizers]{On finite groups with exactly two non-abelian centralizers} \author[S. J. Baishya ]{Sekhar Jyoti Baishya} \address{S. J. Baishya, Department of Mathematics, Pandit Deendayal Upadhyaya Adarsha Mahavidyalaya, Behali, Biswanath-784184, Assam, India.} \email{[email protected]} \begin{abstract} In this paper, we characterize finite groups $G$ with a unique proper non-abelian centralizer. This improves \cite[Theorem 1.1]{nab}. Among other results, we prove that if $C(a)$ is the proper non-abelian centralizer of $G$ for some $a \in G$, then $\frac{C(a)}{Z(G)}$ is the Fitting subgroup of $\frac{G}{Z(G)}$, $C(a)$ is the Fitting subgroup of $G$ and $G' \subseteq C(a)$, where $G'$ is the commutator subgroup of $G$. \end{abstract} \subjclass[2010]{20D60, 20D99} \keywords{Finite group, Centralizer, Partition of a group} \maketitle \section{Introduction} \label{S:intro} Throughout this paper $G$ is a finite group with center $Z(G)$ and commutator subgroup $G'$. Given a group $G$, let $\Cent(G)$ denote the set of centralizers of $G$, i.e., $\Cent(G)=\lbrace C(x) \mid x \in G\rbrace $, where $C(x)$ is the centralizer of the element $x$ in $G$. The study of finite groups in terms of $|\Cent(G)|$ has become an active research topic in the last few years. Starting with Belcastro and Sherman \cite{ctc092} in 1994, many authors have studied and characterised finite groups $G$ in terms of $\mid \Cent(G)\mid$. More information on this and related concepts may be found in \cite{ed09, amiri2019, amiri20191, rostami, en09, ctc09, ctc091, ctc099, baishya, baishya1, baishya2,baishya5, zarrin0941, zarrin0942, non, con}. Amiri and Rostami \cite{nab} in 2015 introduced the notion of $nacent(G)$, the set of all non-abelian centralizers of $G$. Schmidt \cite{schmidt} characterized all groups $G$ with $\mid nacent(G)\mid=1$; such groups are called CA-groups.
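The sets $\Cent(G)$ and $nacent(G)$ are easy to experiment with computationally; the following sketch computes them for small symmetric groups, with elements represented as permutation tuples (an illustration of the definitions only, not a result of this paper):

```python
# Compute Cent(G) and nacent(G) for small symmetric groups.
from itertools import permutations

def compose(a, b):
    """Composition (a*b)(i) = a(b(i)) of permutation tuples."""
    return tuple(a[b[i]] for i in range(len(a)))

def cent(G):
    """The set Cent(G) = { C(x) : x in G } of element centralizers."""
    return {frozenset(g for g in G if compose(g, x) == compose(x, g))
            for x in G}

def is_abelian(H):
    return all(compose(a, b) == compose(b, a) for a in H for b in H)

S3 = set(permutations(range(3)))
S4 = set(permutations(range(4)))

nacent_S3 = [C for C in cent(S3) if not is_abelian(C)]
nacent_S4 = [C for C in cent(S4) if not is_abelian(C)]

assert len(cent(S3)) == 5
assert len(nacent_S3) == 1          # only G itself: S3 is a CA-group
assert len(nacent_S4) == 4          # G and its three Sylow 2-subgroups
```

For $S_4$ the four non-abelian centralizers are the group itself together with the three dihedral Sylow $2$-subgroups, each arising as the centralizer of a double transposition.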
The authors in \cite{nab} initiated the study of finite groups with $\mid nacent(G)\mid=2$ and proved the following result (\cite[Theorem 1.1]{nab}): \begin{thm}\label{nabcentamiri} Let $G$ be a finite group such that $\mid nacent(G)\mid=2$. If $C(a)$ is a proper non-abelian centralizer for some $a \in G$, then one of the following assertions holds: \begin{enumerate} \item $\frac{G}{Z(G)}$ is a $p$-group for some prime $p$. \item $C(a)$ is the Fitting subgroup of $G$ of prime index $p$, $p$ divides $|C(a)|$ and $\mid \Cent(G) \mid= \mid \Cent(C(a)) \mid+ j+1 $, where $j$ is the number of distinct centralizers $C(g)$ for $g \in G \setminus C(a)$. \item $\frac{G}{Z(G)}$ is a Frobenius group with cyclic Frobenius complement $\frac{C(x)}{Z(G)}$ for some $x \in G$. \end{enumerate} \end{thm} In this paper, we revisit finite groups $G$ with $\mid nacent(G)\mid=2$ and improve this result. Among other results, we also prove that if $C(a)$ is the proper non-abelian centralizer of $G$, then $\frac{C(a)}{Z(G)}$ is the Fitting subgroup of $\frac{G}{Z(G)}$, $C(a)$ is the Fitting subgroup of $G$ and $G' \subseteq C(a)$. \section{The main results} In this section, we prove the main results of the paper. We record the following remark from \cite[pp. 571--575]{zappa}, which will be used in the sequel. \begin{rem}\label{rem1} A collection $\Pi$ of non-trivial subgroups of a group $G$ is called a partition if every non-trivial element of $G$ belongs to a unique subgroup in $\Pi$. If $\mid \Pi \mid=1$, the partition is said to be trivial. The subgroups in $\Pi$ are called components of $\Pi$. Following Miller, if $\Pi$ is a non-trivial partition of a non-abelian $p$-group $G$ ($p$ a prime), then all the elements of $G$ having order $>p$ belong to the same component of $\Pi$. A partition $\Pi$ of a group $G$ is said to be normal if $g^{-1}Xg \in \Pi$ for every $X \in \Pi$ and $g \in G$.
A non-trivial partition $\Pi$ of a group $G$ is said to be elementary if $G$ has a normal subgroup $K$ such that all cyclic subgroups which are not contained in $K$ have order $p$ ($p$ a prime) and are components of $\Pi$. All normal non-trivial partitions of a $p$-group of exponent $> p$ are elementary. A non-trivial partition $\Pi$ of a group $G$ is said to be non-simple if there exists a proper normal subgroup $N$ of $G$ such that for every component $X \in \Pi$, either $X \leq N$ or $X \cap N=1$. Let $G$ be a group and $\Pi$ a normal non-trivial partition. Suppose $\Pi$ is not a Frobenius partition and is non-simple. Then $G$ has a normal subgroup $K$ of index $p$ ($p$ a prime) in $G$ which is generated by all elements of $G$ having order $\neq p$. So $\Pi$ is elementary. Let $G$ be a group and $p$ be a prime. We recall that the subgroup generated by all the elements of $G$ whose order is not $p$ is called the Hughes subgroup and is denoted by $H_p(G)$. The group $G$ is said to be a group of Hughes-Thompson type if $G$ is not a $p$-group and $H_p(G) \neq G$ for some prime $p$. In such a group we have $\mid G : H_p(G) \mid=p$ and $H_p(G)$ is nilpotent. \end{rem} We now determine the structure of finite groups $G$ with $\mid nacent(G)\mid=2$, which improves \cite[Theorem 1.1]{nab}. \begin{thm}\label{nabcent1} Let $G$ be a finite group and $a \in G \setminus Z(G)$. Then $nacent(G)=\lbrace G, C(a) \rbrace$ if and only if one of the following assertions holds: \begin{enumerate} \item $\frac{G}{Z(G)}$ is a non-abelian $p$-group of exponent $>p$ ($p$ a prime), $\mid \frac{G}{Z(G)}: H_p(\frac{G}{Z(G)})\mid=p$, $H_p(\frac{G}{Z(G)})=\frac{C(a)}{Z(G)}$, $\mid \frac{C(x)}{Z(G)} \mid=p$ for any $x \in G \setminus C(a)$ and $C(a)$ is a CA-group. \item $\frac{G}{Z(G)}$ is a group of Hughes-Thompson type, $H_p(\frac{G}{Z(G)})=\frac{C(a)}{Z(G)}$ ($p$ a prime), $\mid \frac{C(x)}{Z(G)} \mid=p$ for any $x \in G \setminus C(a)$ and $C(a)$ is a CA-group.
\item $\frac{G}{Z(G)}= \frac{C(a)}{Z(G)} \rtimes \frac{C(x)}{Z(G)}$ is a Frobenius group with Frobenius kernel $\frac{C(a)}{Z(G)}$ and cyclic Frobenius complement $\frac{C(x)}{Z(G)}$ for some $x \in G\setminus C(a)$, and $C(a)$ is a CA-group. \end{enumerate} \end{thm} \begin{proof} Let $G$ be a finite group such that $nacent(G)=\lbrace G, C(a) \rbrace$, $a \in G \setminus Z(G)$. Then clearly $C(a)$ is a CA-group. Note that we have $C(s) \subseteq C(a)$ for any $s \in C(a)\setminus Z(G)$, $C(a) \cap C(x)=Z(G)$ for any $x \in G \setminus C(a)$ and $C(x) \cap C(y)=Z(G)$ for any $x, y \in G \setminus C(a)$ with $C(x) \neq C(y)$. Hence $\Pi= \lbrace \frac{C(a)}{Z(G)}, \frac{C(x)}{Z(G)} \mid x \in G \setminus C(a)\rbrace$ is a non-trivial partition of $\frac{G}{Z(G)}$. In the present scenario we have $(gZ(G))^{-1}\frac{C(x)}{Z(G)}gZ(G)=\frac{g^{-1}C(x)g}{Z(G)}=\frac{C(g^{-1}xg)}{Z(G)}$ for any $gZ(G) \in \frac{G}{Z(G)}$ and $\frac{C(a)}{Z(G)}\lhd \frac{G}{Z(G)}$. Therefore $(gZ(G))^{-1}XgZ(G) \in \Pi$ for every $X \in \Pi$ and $gZ(G) \in \frac{G}{Z(G)}$. Hence $\Pi$ is a normal non-simple partition of $\frac{G}{Z(G)}$. In the present scenario, if $\Pi$ is a Frobenius partition of $\frac{G}{Z(G)}$, then $\frac{G}{Z(G)}= \frac{C(a)}{Z(G)} \rtimes \frac{C(x)}{Z(G)}$ is a Frobenius group with Frobenius kernel $\frac{C(a)}{Z(G)}$ and cyclic Frobenius complement $\frac{C(x)}{Z(G)}$ for some $x \in G\setminus C(a)$. Next, suppose $\Pi$ is not a Frobenius partition. Then in view of Remark \ref{rem1}, $\frac{G}{Z(G)}$ has a normal subgroup of index $p$ ($p$ a prime) in $\frac{G}{Z(G)}$ which is generated by all elements of $\frac{G}{Z(G)}$ having order $\neq p$. In the present situation, if $\frac{G}{Z(G)}$ is not a $p$-group ($p$ a prime), then in view of Remark \ref{rem1}, $\frac{G}{Z(G)}$ is a group of Hughes-Thompson type and $\Pi$ is elementary.
That is, $\frac{G}{Z(G)}$ has a normal subgroup $\frac{K}{Z(G)}$ such that all cyclic subgroups which are not contained in $\frac{K}{Z(G)}$ have order $p$ ($p$ a prime) and are components of $\Pi$. In the present scenario we have $\frac{K}{Z(G)}=H_p(\frac{G}{Z(G)})$. Therefore $\Pi$ has $\frac{\mid G \mid}{p}$ components of order $p$, and these are precisely $\frac{C(x)}{Z(G)}$, $x \in G \setminus C(a)$. Consequently, we have $H_p(\frac{G}{Z(G)})=\frac{C(a)}{Z(G)}$. On the other hand, if $\frac{G}{Z(G)}$ is a $p$-group ($p$ a prime), then in view of Remark \ref{rem1}, $\frac{G}{Z(G)}$ is non-abelian of exponent $>p$ and $\mid \frac{G}{Z(G)}: H_p(\frac{G}{Z(G)})\mid=p$. In the present situation, by Remark \ref{rem1}, $\Pi$ is elementary. Therefore, using Remark \ref{rem1} again, $H_p(\frac{G}{Z(G)})=\frac{C(a)}{Z(G)}$ and all cyclic subgroups which are not contained in $\frac{C(a)}{Z(G)}$ have order $p$ and are components of $\Pi$. Therefore $\Pi$ has $\frac{\mid G \mid}{p}$ components of order $p$, and these are precisely $\frac{C(x)}{Z(G)}$, $x \in G \setminus C(a)$. Conversely, suppose $G$ is a finite group such that one of (a), (b) or (c) holds. Then it is easy to see that $nacent(G)=\lbrace G, C(a) \rbrace$ for some $a \in G \setminus Z(G)$. \end{proof} As an immediate consequence we have the following result. Recall that for a finite group $G$, the Fitting subgroup, denoted by $F(G)$, is the largest normal nilpotent subgroup of $G$. \begin{thm}\label{sjb1} Let $G$ be a finite group with a unique proper non-abelian centralizer $C(a)$ for some $a \in G$. Then we have \begin{enumerate} \item $\mid \Cent(G) \mid= \mid \Cent(C(a)) \mid+ \frac{\mid G \mid}{p}+1$ ($p$ a prime), or $\mid \Cent(G) \mid= \mid \Cent(C(a)) \mid+ \mid \frac{C(a)}{Z(G)} \mid+1$. \item $G' \subseteq C(a)$. \item $\frac{C(a)}{Z(G)}$ is the Fitting subgroup of $\frac{G}{Z(G)}$. \item $C(a)$ is the Fitting subgroup of $G$.
\item $C(a)=P \times A$, where $A$ is an abelian subgroup and $P$ is a CA-group of prime power order. \item $\frac{G}{C(a)}$ is cyclic. \end{enumerate} \end{thm} \begin{proof} (a) In view of Theorem \ref{nabcent1}, if $\frac{G}{Z(G)}$ is a Frobenius group, then by \cite[Proposition 3.1]{amiri2019}, we have $\mid \Cent(G) \mid= \mid \Cent(C(a)) \mid+ \mid \frac{C(a)}{Z(G)} \mid+1$. On the other hand, if $\frac{G}{Z(G)}$ is not a Frobenius group, then it follows from the proof of Theorem \ref{nabcent1} that the non-trivial partition $\Pi= \lbrace \frac{C(a)}{Z(G)}, \frac{C(x)}{Z(G)} \mid x \in G \setminus C(a)\rbrace$ of $\frac{G}{Z(G)}$ has $\frac{\mid G \mid}{p}$ components of order $p$, and these are precisely $\frac{C(x)}{Z(G)}$, $x \in G \setminus C(a)$. Hence $\mid \Cent(G) \mid= \mid \Cent(C(a)) \mid+ \frac{\mid G \mid}{p}+1$. (b) If $\frac{G}{Z(G)}$ is a Frobenius group, then by Theorem \ref{nabcent1}, $\frac{G}{Z(G)}= \frac{C(a)}{Z(G)} \rtimes \frac{C(x)}{Z(G)}$ with cyclic Frobenius complement $\frac{C(x)}{Z(G)}$ for some $x \in G\setminus C(a)$. Therefore $\frac{G}{C(a)}$ is cyclic and hence $G' \subseteq C(a)$. On the other hand, if $\frac{G}{Z(G)}$ is not a Frobenius group, then it follows from Theorem \ref{nabcent1} that $C(a) \lhd G$ and $\mid \frac{G}{C(a)} \mid=p$ ($p$ a prime). Hence $G' \subseteq C(a)$. (c) If $\frac{G}{Z(G)}$ is a Frobenius group, then by Theorem \ref{nabcent1}, $\frac{G}{Z(G)}= \frac{C(a)}{Z(G)} \rtimes \frac{C(x)}{Z(G)}$ with cyclic Frobenius complement $\frac{C(x)}{Z(G)}$ for some $x \in G\setminus C(a)$. In the present scenario, by \cite[p. 3]{mukti}, $\frac{C(a)}{Z(G)}$ is the Fitting subgroup of $\frac{G}{Z(G)}$. On the other hand, if $\frac{G}{Z(G)}$ is not a Frobenius group, then it follows from Theorem \ref{nabcent1} that $H_p(\frac{G}{Z(G)})=\frac{C(a)}{Z(G)}$ and $\mid \frac{G}{Z(G)}: H_p(\frac{G}{Z(G)})\mid=p$. Hence $\frac{C(a)}{Z(G)}$ is the Fitting subgroup of $\frac{G}{Z(G)}$, noting that we have $C(a) \lhd G$.
(d) It follows from (c), noting that $F(\frac{G}{Z(G)})=\frac{F(G)}{Z(G)}=\frac{C(a)}{Z(G)}$. (e) By (d), $C(a)$ is a nilpotent CA-group. Therefore, using \cite[Theorem 3.10 (5)]{abc}, $C(a)=P \times A$, where $A$ is an abelian subgroup and $P$ is a CA-group of prime power order. (f) It is clear from Theorem \ref{nabcent1} that if $\frac{G}{Z(G)}$ is a Frobenius group, then $\frac{G}{C(a)}$ is cyclic, and $\mid \frac{G}{C(a)} \mid=p$ ($p$ a prime) otherwise. \end{proof} \end{document}
\begin{document} \title{Probability backflow for correlated quantum states} \author{Arseni Goussev} \affiliation{School of Mathematics and Physics, University of Portsmouth, Portsmouth PO1 3HF, United Kingdom} \date{\today} \begin{abstract} In its original formulation, quantum backflow (QB) is an interference effect that manifests itself as a negative probability transfer for free-particle states comprised of plane waves with only positive momenta. Quantum reentry (QR) is another interference effect in which a wave packet expanding from a spatial region of its initial confinement partially returns to the region in the absence of any external forces. Here we show that both QB and QR are special cases of a more general classically-forbidden probability flow for quantum states with certain position-momentum correlations. We further demonstrate that it is possible to construct correlated quantum states for which the amount of probability transferred in the ``wrong'' (classically impossible) direction exceeds the least upper bound on the corresponding probability transfer in the QB and QR problems, known as the Bracken-Melloy constant. \end{abstract} \maketitle \section{Introduction} Quantum backflow (QB) is a quantum-mechanical interference effect with no counterpart in classical mechanics. It manifests itself as a flow of probability density in the direction opposite to the direction of the momentum of a quantum particle. For a more concrete definition of QB, let us consider a free particle travelling along the $x$-axis. Suppose that it is known {\it with certainty} that the momentum $p$ of the particle is positive, i.e. $p > 0$. Let $P_<(t)$ denote the probability that, at time $t$, the particle is located to the left of the origin, i.e. at $x < 0$. The quantity of interest is \begin{equation} \Delta(\tau, T) = P_<(\tau + T) - P_<(\tau) \,, \label{Delta_def} \end{equation} where $\tau \in \mathbb{R}$ and $T > 0$.
The physical meaning of $\Delta(\tau, T)$ is the amount of probability that has been transported from the right of the origin, $x > 0$, to the left of the origin, $x < 0$, during the time interval $\tau < t < \tau + T$. If the motion of the particle were governed by the laws of classical mechanics, it would be impossible for $\Delta(\tau, T)$ to exceed zero: in classical mechanics, the direction of the probability flow is the same as that of the particle momentum. In quantum mechanics, however, there exist wave packets, composed entirely of plane waves with positive momenta, for which $\Delta(\tau, T) > 0$; this is the QB effect. The first argument for the existence of the QB effect was made in Refs.~\cite{All69time-c, Kij74time}. There it was pointed out that a (non-normalizable) linear combination of two plane waves with positive momenta may generate a negative probability current. QB for normalized wave packets was first analysed by Bracken and Melloy~\cite{BM94Probability}. In particular, they showed that the supremum of the right-to-left probability transfer -- $\sup \Delta(\tau, T)$, where the supremum is taken over all normalizable states comprised of plane waves with positive momenta -- coincides with the supremum $\lambda_{\sup}$ of the eigenvalue spectrum in the following integral eigenproblem: \begin{equation} -\frac{1}{\pi} \int_{0}^{\infty} du' \, \frac{\sin \left( u^2 - {u'}^2 \right)}{u - u'} \psi(u') = \lambda \psi(u) \,, \label{int_eig_prob} \end{equation} where $\psi$ belongs to the class of square-integrable functions defined on the positive semi-axis. That is, $\sup \Delta = \lambda_{\sup} \equiv \sup_{\psi} \lambda.$ As of today, the exact value of $\lambda_{\sup}$ remains unknown, while the most accurate numerical estimate stands at \cite{PGKW06new} \begin{equation} \lambda_{\sup} \simeq 0.0384517 \,.
\label{BM_const} \end{equation} It is intriguing that the constant $\lambda_{\sup}$, called the Bracken-Melloy constant, is independent not only of the particle mass and of times $\tau$ and $T$, but also of the Planck constant $\hbar$. This observation prompted Bracken and Melloy to suggest that $\lambda_{\sup}$ is a ``new dimensionless quantum number'' that ``reflects the structure of Schr\"odinger's equation, rather than the values of the parameters appearing there''~\cite{BM94Probability, BM14Waiting}. (It was later pointed out in Ref.~\cite{YHHW12Analytical} that there is a conceptual analogy between, on the one hand, the $\hbar$-independence of $\lambda_{\sup}$ and, on the other hand, the $\hbar$-independence of the reflection coefficient in the problem of quantum-mechanical scattering off a step potential.) Following the pioneering work of Bracken and Melloy~\cite{BM94Probability}, numerous studies of the QB effect have been reported in the literature. Accurate numerical estimates of the Bracken-Melloy constant were obtained in Refs.~\cite{EFV05Quantum, PGKW06new}. Analytical examples of states with large QB were constructed in Refs.~\cite{YHHW12Analytical, HGL+13Quantum}. The relation between QB and the arrival-time problem was discussed in Refs.~\cite{MPL99Arrival, ML00Arrival, HBLO19Quasiprobability}. Some aspects of the spatial extent of QB were studied in Refs.~\cite{EFV05Quantum, Ber10Quantum, BCL17Quantum}. Probability backflow in quantum systems with rotational motion, such as an electron in a constant magnetic field, was addressed in Ref.~\cite{Str12Large}. A scheme for observing QB in experiments with Bose-Einstein condensates was proposed in Ref.~\cite{PTMM13Detecting}. QB has also been investigated under the action of a constant force~\cite{MB98velocity}, in the presence of spin-orbit coupling~\cite{MPM+14Interference}, thermal noise~\cite{AGP16Quantum}, dissipation~\cite{MM20Dissipative}, and in scattering situations~\cite{BCL17Quantum}.
QB in relativistic quantum mechanics was studied in Refs.~\cite{MB98Probability, SC18Quantum, ALS19Relativistic}. While an experimental observation of QB in a truly {\it quantum} system is still missing, an {\it optical} equivalent of the effect has been recently realized in the laboratory~\cite{EZB20Observation}. Recently, it has been shown \cite{Gou19Equivalence} that classically-forbidden probability transfer may also occur in a seemingly different problem of quantum reentry (QR) that can be formulated as follows. Suppose that initially, at $t = 0$, the particle is localized on the negative position semi-axis, $x < 0$, so that $P_<(0) = 1$; the particle momentum $p$ is unconstrained. At later times, $t > 0$, as a result of free motion the particle may cross the origin and enter the region $x > 0$, yielding $P_<(t) < 1$ for $t > 0$. Just as in the case of QB, the quantity of interest is the probability transfer $\Delta(\tau, T)$, defined by Eq.~\eqref{Delta_def}, with the only modification that one now requires not only $T > 0$, but also $\tau > 0$. In classical mechanics, once the free particle has left the $x < 0$ region, it can no longer reenter the region, meaning that the classical-mechanical value of $\Delta(\tau, T)$ cannot be positive -- a classical reentry is impossible. The situation is different in quantum mechanics: there exist states, initially localized in the $x < 0$ region, for which $\Delta(\tau, T)$ is positive. This is the manifestation of the QR effect. QR may take place not only in free space, as described here, but also in the presence of an external potential, e.g., when a particle ``leaks'' out of a quasi-stable trap through a $\delta$-potential barrier~\cite{DT19Decay}. Interestingly, the least upper bound on the QR probability appears to equal the Bracken-Melloy constant, $\lambda_{\sup}$ \cite{Gou19Equivalence}. This suggests the existence of a deep connection between QB and QR. 
In this paper, we elucidate this connection by showing that both QB and QR effects can be viewed as special cases of a generalized backflow problem. More surprisingly, we show that the least upper bound on the classically-forbidden probability transfer in this generalized backflow problem exceeds the Bracken-Melloy constant. This paper is organized as follows. In Section~\ref{Sec:QB-QR} we discuss the phase-space interpretations of QB and QR, and make a connection between the two effects. In Section~\ref{Sec:general} we formulate a generalized backflow problem, and study in detail one particular example. We summarize and discuss our findings in Section~\ref{Sec:end}. Some calculations are deferred to the Appendixes. \section{A unified view on quantum backflow and quantum reentry} \label{Sec:QB-QR} In what follows, we denote the position and momentum operators by $\hat{x}$ and $\hat{p}$, and their corresponding eigenstates by $| x \rangle$ and $| p \rangle$, respectively. The eigenstates are normalized as $\langle x | x' \rangle = \delta(x - x')$ and $\langle p | p' \rangle = \delta(p - p')$, where $\delta(\cdot)$ is the Dirac $\delta$-function. We consider the motion of a quantum particle of mass $m$ under the action of the free-space Hamiltonian \begin{equation} \hat{H} = \frac{\hat{p}^2}{2 m} \,. \label{Hamiltonian} \end{equation} The corresponding evolution operator is given by \begin{equation} \hat{U}(t) = \exp \left(-\frac{i t}{2 \hbar m} \hat{p}^2 \right) \,. \label{U_def} \end{equation} \subsection{Qualitative relation between quantum backflow and quantum reentry} \begin{figure} \caption{Classical-mechanical representation of a particle with a positive momentum, $p > 0$. The phase-space probability density is supported by the upper half-plane (hatched area).
The probability flow through the spatial point $x = 0$ is parallel to the position axis (red arrows).} \label{fig1} \end{figure} In the QB setting, the state of a particle at time $t = 0$ has the form \begin{equation} \int_0^{\infty} dp \, f(p) | p \rangle \,, \end{equation} where $f$ is a complex-valued function normalized as $\int_0^{\infty} dp \, |f(p)|^2 = 1$. The state of the particle during the time interval $\tau < t < \tau + T$, relevant to backflow analysis (see Eq.~\eqref{Delta_def}), is given by \begin{equation} \int_0^{\infty} dp \, f(p) \hat{U}(t) | p \rangle = \int_0^{\infty} dp \, f(p) e^{-i p^2 t / 2 \hbar m} | p \rangle \,. \end{equation} It is clear that any momentum measurement performed on this state is bound to give a positive result, $p > 0$, with the probability density $|f(p)|^2$. So, from the classical-mechanical viewpoint, the particle is located in the upper half-plane of phase space, and any probability transfer through the spatial point $x = 0$ may only occur in the left-to-right direction (see Fig.~\ref{fig1}). From the quantum-mechanical perspective, however, the fact that the outcome of a measurement of $\hat{p}$ is (at all times) certain to be positive does not prevent the right-to-left probability transfer $\Delta(\tau,T)$ from being positive. \begin{figure} \caption{Classical-mechanical representation of a particle with a negative position, $x < 0$. The phase-space probability density is supported by the left half-plane (hatched area). The instantaneous probability flow through the spatial point $x = 0$ is illustrated with red arrows.} \label{fig2} \end{figure} \begin{figure} \caption{Classical-mechanical representation at $t > 0$ of a particle that was initially (at $t = 0$) confined to the region of negative positions. The phase-space probability density is supported by the half-plane above the line $p = \frac{m}{t}\, x$ (hatched area).} \label{fig3} \end{figure} Let us now look at the QR problem.
At time $t = 0$, the particle is localized in the region of negative positions (see Fig.~\ref{fig2}). This means that the corresponding quantum state has the form \begin{equation} \int_{-\infty}^0 dx \, g(x) | x \rangle \,, \end{equation} where $g$ is some complex-valued function normalized as $\int_{-\infty}^0 dx \, |g(x)|^2 = 1$. A measurement of $\hat{x}$ performed on this state is guaranteed to return a negative result, $x < 0$, with the probability density $|g(x)|^2$. In the course of its evolution through time $t > 0$, the particle state becomes \begin{equation} \int_{-\infty}^0 dx \, g(x) \hat{U}(t) | x \rangle \,. \label{QR_t>0_state} \end{equation} It is straightforward to show (see Appendix~\ref{App:U|x>}) that $\hat{U}(t) | x \rangle$ is an eigenstate of the Hermitian operator $\hat{p} - \frac{m}{t} \hat{x}$, with the corresponding eigenvalue equal to $-\frac{m x}{t}$, i.e. \begin{equation} \left( \hat{p} - \frac{m}{t} \hat{x} \right) \hat{U}(t) | x \rangle = -\frac{m x}{t} \hat{U}(t) | x \rangle \,. \label{U|x>} \end{equation} Therefore, a measurement of $\hat{p} - \frac{m}{t} \hat{x}$, performed on the state given by Eq.~\eqref{QR_t>0_state}, is guaranteed to give a positive result. From the classical-mechanical viewpoint, this means that the particle at time $t > 0$ is located above the $p = \frac{m}{t} x$ line in phase space (see Fig.~\ref{fig3}). Therefore, according to the laws of classical mechanics, any probability flow through the spatial point $x = 0$ may only take place in the left-to-right direction. However, the quantum-mechanical analysis of this problem presented below shows that the system {\it can} give rise to a positive (classically-forbidden) probability transfer in the right-to-left direction, i.e. one might have $\Delta(\tau, T) > 0$ with $\tau, T > 0$.
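The eigenvalue relation of Eq.~\eqref{U|x>} can be checked symbolically in the momentum representation, where $\hat{x}$ acts as $i\hbar\,\partial/\partial p$ and $\langle p | \hat{U}(t) | x \rangle$ is, up to the constant $(2\pi\hbar)^{-1/2}$, equal to $e^{-i p^2 t/2\hbar m - i p x/\hbar}$ (a sketch using the sympy computer-algebra package):

```python
# Symbolic check of (p - (m/t) x) U(t)|x> = -(m x / t) U(t)|x> in the
# momentum representation, where the position operator acts as i*hbar*d/dp.
import sympy as sp

p, x = sp.symbols('p x', real=True)
t, m, hbar = sp.symbols('t m hbar', positive=True)

# <p|U(t)|x> up to the constant prefactor (2*pi*hbar)**(-1/2)
phi = sp.exp(-sp.I * p**2 * t / (2 * hbar * m) - sp.I * p * x / hbar)

lhs = p * phi - (m / t) * (sp.I * hbar * sp.diff(phi, p))
rhs = -(m * x / t) * phi

assert sp.simplify(lhs - rhs) == 0
```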
Figures~\ref{fig1} and \ref{fig3} elucidate the connection between QB and QR: both phenomena can be regarded as classically-forbidden probability flow for quantum states with position-momentum correlations. In general, one is interested in the right-to-left probability transfer, $\Delta(\tau, T)$, produced by a quantum state whose classical phase-space probability density, at $t = \tau$, vanishes below the line $p = \frac{m}{\tau} x$. The QR problem corresponds to the case of a finite $\tau > 0$. The QB problem is recovered in the limit $\tau \to \infty$. \subsection{Probability transfer operator} We begin our analysis of backflow for position-momentum correlated states by introducing the probability transfer operator $\hat{D}$. The right-to-left probability transfer $\Delta(\tau, T)$, defined by Eq.~\eqref{Delta_def}, can be written as the following expectation value: \begin{equation} \Delta(\tau, T) = \langle \tau | \hat{D}(T) | \tau \rangle \,, \label{Delta_in_terms_of_D} \end{equation} where $| \tau \rangle$ is the particle state at time $t = \tau$, and \begin{equation} \hat{D}(t) = \hat{U}^{\dag}(t) \Theta(-\hat{x}) \hat{U}(t) - \Theta(-\hat{x}) \,. \label{D_1} \end{equation} Here, $\Theta(\cdot)$ is the Heaviside step function, and the dagger $\dag$ denotes Hermitian conjugation. It is convenient to rewrite the probability transfer operator in a symmetric form as \begin{equation} \hat{D}(t) = \hat{U}^{\dag}( \tfrac{t}{2} ) \hat{B}(t) \hat{U}( \tfrac{t}{2} ) \label{D_2} \end{equation} with \begin{equation} \hat{B}(t) = \hat{U}^{\dag}(\tfrac{t}{2}) \Theta(-\hat{x}) \hat{U}(\tfrac{t}{2}) - \hat{U}(\tfrac{t}{2}) \Theta(-\hat{x}) \hat{U}^{\dag}(\tfrac{t}{2}) \,. \label{B_def} \end{equation} It is clear from their definitions that both $\hat{B}$ and $\hat{D}$ are Hermitian operators. 
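The equivalence of the two forms of $\hat{D}$, Eqs.~\eqref{D_1} and \eqref{D_2}, rests only on unitarity and $\hat{U}(t/2)\hat{U}(t/2)=\hat{U}(t)$; it can be sanity-checked in a finite-dimensional analogue, with a random Hermitian matrix standing in for the Hamiltonian and a random orthogonal projector standing in for $\Theta(-\hat{x})$ (the matrices are stand-ins, not the actual operators):

```python
# Finite-dimensional analogue check that D = U(t)^† P U(t) - P equals
# U(t/2)^† B U(t/2) with B = U(t/2)^† P U(t/2) - U(t/2) P U(t/2)^†.
import numpy as np

rng = np.random.default_rng(0)
n, t = 6, 0.7

A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = (A + A.conj().T) / 2                 # random Hermitian "Hamiltonian"

V = rng.standard_normal((n, 3))
P = V @ np.linalg.pinv(V)                # projector onto a random subspace

w, W = np.linalg.eigh(H)
def U(s):
    """Unitary evolution exp(-i s H) via the eigendecomposition of H."""
    return (W * np.exp(-1j * s * w)) @ W.conj().T

D1 = U(t).conj().T @ P @ U(t) - P
B = U(t/2).conj().T @ P @ U(t/2) - U(t/2) @ P @ U(t/2).conj().T
D2 = U(t/2).conj().T @ B @ U(t/2)

assert np.allclose(D1, D2)                                   # Eq. (D_1) vs (D_2)
assert np.allclose(B, B.conj().T) and np.allclose(D1, D1.conj().T)  # Hermiticity
```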
A straightforward calculation (see Appendix~\ref{App:B_op}) yields the following momentum representation of $\hat{B}$: \begin{equation} \langle p | \hat{B}(t) | p' \rangle = -\frac{1}{\pi (p - p')} \sin \left[ \frac{t}{4 \hbar m} \left( p^2 - {p'}^2 \right) \right] \,. \label{B_mom_rep} \end{equation} The function on the right-hand side of the last equation is the same as the backflow kernel function originally derived by Bracken and Melloy~\cite{BM94Probability}. Finally, in view of Eqs.~\eqref{D_2} and \eqref{B_mom_rep}, we have \begin{align} \langle p | \hat{D}(t) | p' \rangle = -&\frac{1}{\pi (p - p')} \sin \left[ \frac{t}{4 \hbar m} \left( p^2 - {p'}^2 \right) \right] \nonumber \\ &\times \exp \left[ \frac{i t}{4 \hbar m} \left( p^2 - {p'}^2 \right) \right] \,. \label{D_mom_rep} \end{align} This representation of $\hat{D}$ will be used below. \subsection{Supremum of classically-forbidden probability transfer} We now address the right-to-left probability transfer $\Delta(\tau, T)$, given by Eq.~\eqref{Delta_in_terms_of_D}, for states $| \tau \rangle$ with position-momentum correlations of the type illustrated in Fig.~\ref{fig3}. More specifically, we consider the particle state at time $t = \tau$ to have the form \begin{equation} | \tau \rangle = \int_0^{\infty} dz \, \Psi(z) | z \rangle \,, \label{psi_z_rep} \end{equation} where $\Psi$ is a complex-valued function of a real variable, and states $| z \rangle$ are orthonormal eigenstates of the Hermitian operator $\hat{p} - k \hat{x}$ with $k = \frac{m}{\tau} > 0$, i.e. \begin{equation} \left( \hat{p} - k \hat{x} \right) | z \rangle = z | z \rangle \,, \label{z_def} \end{equation} with \begin{equation} \langle z | z' \rangle = \delta(z - z') \,.
\label{z_norm} \end{equation} Since, in momentum representation, $\hat{x}$ is given by $i \hbar \frac{\partial}{\partial p}$, it is easy to see that \begin{equation} \langle p | z \rangle = \frac{1}{\sqrt{2 \pi \hbar k}} \exp \left( -\frac{i}{2 \hbar k} p^2 + \frac{i z}{\hbar k} p \right) \,. \label{z_mom_rep} \end{equation} The normalization condition $\langle \tau | \tau \rangle = 1$ imposes the following constraint on $\Psi$: \begin{equation} \int_0^{\infty} dz \, |\Psi(z)|^2 = 1 \,. \label{Psi_norm} \end{equation} The right-to-left probability transfer, generated by $| \tau \rangle$ during the time interval $\tau < t < \tau + T$, is obtained by substituting Eq.~\eqref{psi_z_rep} into Eq.~\eqref{Delta_in_terms_of_D}: \begin{equation} \Delta(\tau,T) = \int_0^{\infty} dz \int_0^{\infty} dz' \, \Psi^*(z) \langle z | \hat{D}(T) | z' \rangle \Psi(z') \,. \label{Delta_vs_<z|D|z'>} \end{equation} A straightforward calculation yields (see Appendix~\ref{App:<z|D|z'>}, or, alternatively, follow the method adopted in Section~\ref{Sec:general}) \begin{equation} \langle z | \hat{D}(T) | z' \rangle = -\frac{1}{\pi} e^{-i \alpha z^2} \frac{\sin \left[ \beta \left( z^2 - {z'}^2 \right) \right]}{z - z'} e^{i \alpha {z'}^2} \,, \label{<z|D|z'>} \end{equation} with \begin{equation} \alpha = \frac{1}{4 \hbar k} \left( 1 + \frac{1}{1 + k T / m} \right) \label{alpha} \end{equation} and \begin{equation} \beta = \frac{1}{4 \hbar k} \left( 1 - \frac{1}{1 + k T / m} \right) \,. \label{beta} \end{equation} Then, substituting Eq.~\eqref{<z|D|z'>} into Eq.~\eqref{Delta_vs_<z|D|z'>}, and defining \begin{equation} \psi(u) = \beta^{-1/4} \exp \left( -\frac{i \alpha}{\beta} u^2 \right) \Psi \left( \frac{u}{\sqrt{\beta}} \right) \,, \end{equation} we obtain \begin{equation} \Delta(\tau, T) = -\frac{1}{\pi} \int_0^{\infty} du \int_0^{\infty} du' \, \psi^*(u) \frac{\sin \left( u^2 - {u'}^2 \right)}{u - u'} \psi(u') \,. 
\label{Delta_in_terms_of_varphi} \end{equation} The normalization condition, Eq.~\eqref{Psi_norm}, now reads \begin{equation} \int_0^{\infty} du \, |\psi(u)|^2 = 1 \,. \label{varphi_norm} \end{equation} The supremum of the classically-forbidden right-to-left probability transfer is obtained by optimizing $\Delta(\tau, T)$, Eq.~\eqref{Delta_in_terms_of_varphi}, subject to the normalization constraint on $\psi$, Eq.~\eqref{varphi_norm}. This is the variational problem originally considered in Ref.~\cite{BM94Probability}. The corresponding Euler-Lagrange equation is given by Eq.~\eqref{int_eig_prob}. The supremum of $\Delta(\tau, T)$ equals the Bracken-Melloy constant, Eq.~\eqref{BM_const}, and is independent of the slope $k = \frac{m}{\tau}$ characterizing the position-momentum correlation of the initial state (see Eqs.~\eqref{psi_z_rep} and \eqref{z_def}). In other words, the supremum of the right-to-left probability transfer is the same in the QB and QR problems. \section{Generalized backflow problem} \label{Sec:general} In the previous section, we have shown that both QB and QR effects manifest themselves as (classically forbidden) right-to-left probability transfer for initial states with linear position-momentum correlations, for which the measurement of $\hat{p} - k \hat{x}$, with $k \ge 0$, is guaranteed to yield a positive result. We now extend our study to states with nonlinear position-momentum correlations, for which the outcome of measuring $S(\hat{p}) - \hat{x}$ is certain to be positive. The real function $S$ is such that the curve $S(p) - x = 0$ does not intersect the fourth quadrant of the phase-space plane, i.e. the intersection of $\{ (x,p) : S(p) - x > 0 \}$ and $\{ (x,p) : x > 0 \; \& \; p < 0 \}$ is empty. 
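The geometric condition on $S$ is easy to test numerically for a candidate boundary function. A small hedged sketch (the grid resolution and the two sample boundary functions are illustrative only):

```python
import numpy as np

def avoids_fourth_quadrant(s, p_grid, x_grid):
    """Check on a grid that {(x,p): s(p) - x > 0} misses {(x,p): x > 0, p < 0}."""
    P, X = np.meshgrid(p_grid, x_grid, indexing='ij')
    bad = (s(P) - X > 0) & (X > 0) & (P < 0)
    return not bad.any()

p = np.linspace(-10.0, 10.0, 801)
x = np.linspace(-10.0, 10.0, 801)
ok_straight = avoids_fourth_quadrant(lambda q: 2*q, p, x)      # admissible straight boundary
ok_shifted = avoids_fourth_quadrant(lambda q: q + 1.0, p, x)   # dips into the quadrant
```

Here the straight line $s(p) = 2p$ passes the test, while the shifted line $s(p) = p + 1$ enters the fourth quadrant and is rejected.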
Clearly, from the classical-mechanical viewpoint, the fact that the support of the phase-space density of a state has no overlap with $\{ (x,p) : x > 0 \; \& \; p < 0 \}$ implies that the state cannot give rise to any positive right-to-left probability transfer. More concretely, we consider particle states $| \tau \rangle$ of the form \begin{equation} | \tau \rangle = \int_0^{\infty} dw \, \Phi(w) | w \rangle \,, \label{phi_w_rep} \end{equation} where $\Phi$ is a complex-valued function of a real variable, and states $| w \rangle$ are orthonormal eigenstates of the Hermitian operator $S(\hat{p}) - \hat{x}$ \footnote{In general, the sum of two unbounded Hermitian operators is not necessarily Hermitian. However, the operator $S(\hat{p}) - \hat{x}$ is a unitary transformation of the operator $-\hat{x}$, and as such is Hermitian. Indeed, it is straightforward to check that $S(\hat{p}) - \hat{x} = \hat{U}^{-1} (-\hat{x}) \hat{U}$, with $\hat{U} = e^{i R(\hat{p}) / \hbar}$ and $R(p) = \int^{p} dq \, S(q)$ (cf.~Eq.~\eqref{w_mom_rep})}, i.e. \begin{equation} \big( S(\hat{p}) - \hat{x} \big) | w \rangle = w | w \rangle \,, \label{w_def} \end{equation} with \begin{equation} \langle w | w' \rangle = \delta(w - w') \,. \label{w_norm} \end{equation} In momentum representation, $\hat{x}$ is given by $i \hbar \frac{\partial}{\partial p}$, and so \begin{equation} \langle p | w \rangle = \frac{1}{\sqrt{2 \pi \hbar}} \exp \left( -\frac{i}{\hbar} \int^p dq \, S(q) + i \frac{w p}{\hbar} \right) \,. \label{w_mom_rep} \end{equation} The normalization condition $\langle \tau | \tau \rangle = 1$ imposes the following constraint on $\Phi$: \begin{equation} \int_0^{\infty} dw \, |\Phi(w)|^2 = 1 \,. 
\label{Phi_norm} \end{equation} The right-to-left probability transfer during the time interval $\tau < t < \tau + T$ is given by Eq.~\eqref{Delta_in_terms_of_D}, which, using the $w$-representation, can be written as \begin{equation} \Delta(\tau,T) = \int_0^{\infty} dw \int_0^{\infty} dw' \, \Phi^*(w) \langle w | \hat{D}(T) | w' \rangle \Phi(w') \,. \label{Delta_vs_<w|D|w'>} \end{equation} Repeating the steps that led to Eq.~\eqref{App:<z|D|z'>:eq0}, we obtain the following expression for the kernel $\langle w | \hat{D}(T) | w' \rangle$: \begin{widetext} \begin{align} \langle w | \hat{D}(T) | w' \rangle = -\frac{1}{2 \pi^2 \hbar} \int_{-\infty}^{+\infty} dp \int_{-\infty}^{+\infty} dp' &\, \frac{\sin \left[ \frac{T}{4 \hbar m} \left( p^2 - {p'}^2 \right) \right]}{p - p'} \nonumber \\ &\times \exp \left[ \frac{i}{\hbar} \int_{p'}^p dq \, S(q) + i \frac{T}{4 \hbar m} \left( p^2 - {p'}^2 \right) - i \frac{w p - w' p'}{\hbar} \right] \,. \label{<w|D|w'>} \end{align} It is convenient to introduce the following dimensionless versions of the functions $\Phi$ and $S$: \begin{equation} \phi(u) = \left( \frac{\hbar T}{4 m} \right)^{1/4} \Phi \left(u \sqrt{\frac{\hbar T}{4 m}} \right) \,, \qquad s(u) = \sqrt{\frac{4 m}{\hbar T}} \, S \left( u \sqrt{\frac{4 \hbar m}{T}} \right) \,. \label{dimensionless_variables} \end{equation} This corresponds to taking $\sqrt{\frac{\hbar T}{4 m}}$ for the unit of position and $\sqrt{\frac{4 \hbar m}{T}}$ for the unit of momentum. 
Now the expression for the right-to-left probability transfer, Eqs.~\eqref{Delta_vs_<w|D|w'>} and \eqref{<w|D|w'>}, takes the form \begin{equation} \Delta(\tau, T) = \int_0^{\infty} du \int_0^{\infty} du' \, \phi^*(u) K(u,u') \phi(u') \,, \label{Delta_dimless} \end{equation} with \begin{equation} K(u, u') = -\frac{1}{2 \pi^2} \int_{-\infty}^{\infty} d\xi \int_{-\infty}^{\infty} d\xi' \, \frac{\sin \left( \xi^2 - {\xi'}^2 \right)}{\xi - \xi'} \exp \left[ i \int_{\xi'}^{\xi} dp \, s(p) + i \left( {\xi}^2 - {\xi'}^2 \right) - i u \xi + i u' \xi' \right] \,. \label{K_def} \end{equation} \end{widetext} We also note that the normalization condition for $\Phi$, given by Eq.~\eqref{Phi_norm}, translates into \begin{equation} \int_0^{\infty} du \, |\phi(u)|^2 = 1 \,. \label{phi_norm} \end{equation} Optimization of $\Delta(\tau, T)$ with respect to $\phi$, subject to the normalization constraint, Eq.~\eqref{phi_norm}, yields the following eigenproblem: \begin{equation} \int_0^{\infty} du' \, K(u, u') \phi(u') = \mu \phi(u) \,. \label{K_eigprob} \end{equation} The supremum of $\Delta$ is given by the supremum of the eigenvalue spectrum, $\mu_{\sup} = \sup_{\phi} \mu$. It is easy to show that in the case of a straight boundary, $s(p) = p/k$ with $k > 0$, this eigenproblem reduces to the Bracken-Melloy one, Eq.~\eqref{int_eig_prob}, and in this case $\mu_{\sup}$ coincides with $\lambda_{\sup}$. However, for curved boundaries $s(p) - x = 0$, the supremum of the right-to-left probability transfer, $\mu_{\sup}$, generally differs from the Bracken-Melloy constant, $\lambda_{\sup}$. Moreover, as we argue below, one can choose $s(p)$ such that $\mu_{\sup} > \lambda_{\sup}$. 
\subsection{Small deformation of a straight phase-space boundary} Let us consider the case when the phase-space boundary curve $s(p) - x = 0$ is only a small deformation of a straight line: \begin{equation} s(p) = \frac{p}{k} + \epsilon s_1(p) \,, \end{equation} where $k > 0$, the parameter $\epsilon$ ranges over an interval containing zero, and $s_1(p)$ is a bounded function such that the curve $s(p) - x = 0$ does not cross the fourth quadrant of the phase space. (A concrete example of such a function will be considered in Section~\ref{Sec:Ex}.) Assuming continuity around $\epsilon = 0$, we write \begin{align*} K(u,u') &= K_0(u,u') + \epsilon K_1(u,u') + O(\epsilon^2) \,, \\ \phi(u) &= \phi_0(u) + \epsilon \phi_1(u) + O(\epsilon^2) \,, \\ \mu &= \mu_0 + \epsilon \mu_1 + O(\epsilon^2) \,. \end{align*} Substituting these expansions into Eq.~\eqref{K_eigprob} and comparing terms of the same order in $\epsilon$, we obtain \begin{equation} \int_0^{\infty} du' \, K_0(u, u') \phi_0(u') = \mu_0 \phi_0(u) \label{K0_eigprob} \end{equation} and \begin{align} \int_0^{\infty} du' &\Big( K_0(u,u') \phi_1(u') + K_1(u,u') \phi_0(u') \Big) \nonumber \\ &= \mu_0 \phi_1(u) + \mu_1 \phi_0(u) \,. \label{K1_eigprob} \end{align} Then, we multiply both sides of Eq.~\eqref{K1_eigprob} by $\phi^*_0(u)$, integrate over $u$, and make use of the facts that $K_0(u',u) = \big[ K_0(u,u') \big]^*$ and that $\phi_0$ fulfils Eqs.~\eqref{K0_eigprob} and \eqref{phi_norm}. This yields \begin{equation} \mu_1 = \int_0^{\infty} du \int_0^{\infty} du' \, \phi_0^*(u) K_1(u,u') \phi_0(u') \,. \label{mu_1} \end{equation} In fact, Eq.~\eqref{mu_1} is nothing but the prediction of standard non-degenerate perturbation theory for the eigenvalue spectrum of a linear Hermitian operator. We now put forward the following argument. Let us suppose that $\phi_0(u)$ is a normalized eigenfunction corresponding to the eigenvalue $\mu_0 = \lambda_{\sup}$. 
Then, for small $\epsilon$, we expect to have \begin{equation} \mu_{\sup} \simeq \lambda_{\sup} + \epsilon \mu_1 \,, \label{mu_sup_linear_approx} \end{equation} where $\mu_1$ is determined by Eq.~\eqref{mu_1}. In general, there is no reason for $\mu_1$ to vanish. In turn, the assumption $\mu_1 \not= 0$ implies that there is a value of $\epsilon$ for which $\mu_{\sup} > \lambda_{\sup}$. Below we demonstrate that this intuitive argument is indeed valid by numerically computing $\mu_{\sup}$ for a specific example of a phase-space boundary curve. \subsection{Example} \label{Sec:Ex} \begin{figure} \caption{Phase-space boundary curve $s(p) - x = 0$, with $s(p)$ given by Eq.~\eqref{s}.} \label{fig4} \end{figure} Let us consider the case \begin{equation} s(p) = 2 p \left( 1 + \epsilon e^{-p^2} \right) \,. \label{s} \end{equation} Provided $\epsilon > -1$, the curve $s(p) - x = 0$ does not penetrate the fourth quadrant of the phase-space plane (see Fig.~\ref{fig4}). Our aim is to demonstrate that, for some values of $\epsilon$, the supremum of the right-to-left probability transfer, $\mu_{\sup}$, exceeds the Bracken-Melloy constant, $\lambda_{\sup}$. We start by recasting the kernel $K(u,u')$, defined by Eq.~\eqref{K_def}, in a form more convenient for the present computation. Substituting the identity \begin{equation} \frac{\sin \left( \xi^2 - {\xi'}^2 \right)}{\xi - \xi'} = \frac{\xi + \xi'}{2} \int_{-1}^{1} d\nu \, \exp \left[ i \left( \xi^2 - {\xi'}^2 \right) \nu \right] \end{equation} into Eq.~\eqref{K_def}, and then changing the integration order, we get \begin{equation} K(u, u') = -\frac{i}{4 \pi^2} \int_{-1}^1 d\nu \, \left( \frac{\partial}{\partial u} - \frac{\partial}{\partial u'} \right) I(u, \nu) I^*(u', \nu) \,, \label{K_new_representation} \end{equation} where \begin{equation} I(u,\nu) = \int_{-\infty}^{+\infty} d\xi \, \exp \left( i \int^{\xi} dp \, s(p) + i (1 + \nu) \xi^2 - i u \xi \right) \,. 
\end{equation} In the case of $s(p)$ given by Eq.~\eqref{s}, we have $\int^{\xi} dp \, s(p) = \xi^2 - \epsilon e^{-\xi^2}$, and so \begin{align} I(u, \nu) &= \int_{-\infty}^{+\infty} d\xi \, \exp \left( -i \epsilon e^{-\xi^2} + i (2 + \nu) \xi^2 - i u \xi \right) \nonumber \\ &= \sum_{n = 0}^{\infty} \frac{(-i \epsilon)^n}{n!} \int_{-\infty}^{+\infty} d\xi \, e^{-[n - i (2 + \nu)] \xi^2 - i u \xi} \nonumber \\ &= \sum_{n = 0}^{\infty} \frac{(-i \epsilon)^n}{n!} \sqrt{\frac{\pi}{n - i (2 + \nu)}} e^{ -\frac{u^2}{4 [n - i (2 + \nu)]} } \,. \label{I_as_sum} \end{align} Then, substituting Eq.~\eqref{I_as_sum} into Eq.~\eqref{K_new_representation}, and performing some straightforward manipulations (see Appendix~\ref{App:K_expansion}), we arrive at the following expansion: \begin{equation} K(u, u') = \sum_{n=0}^{\infty} \epsilon^n K_n(u, u') \,, \label{K_expansion} \end{equation} where \begin{widetext} \begin{equation} K_n(u, u') = -\frac{(-i)^n}{8 \pi} \sum_{k=0}^n \frac{(-1)^k}{(n-k)! k!} \int_1^3 dz \, \left( \frac{u}{z + i (n-k)} + \frac{u'}{z - i k} \right) \frac{\exp \left( -i \frac{u^2}{4 [z + i (n-k)]} + i \frac{{u'}^2}{4 (z - i k)} \right)}{\sqrt{[z + i (n-k)] (z - i k)}} \,. \label{K_n} \end{equation} \end{widetext} For $n = 0$ (and, consequently, $k = 0$), the integral in Eq.~\eqref{K_n} can be done analytically (see Appendix~\ref{App:K_expansion}). This yields the expression \begin{equation} K_0(u,u') = -\frac{1}{\pi} e^{-i u^2 / 6} \frac{\sin \left[ \frac{1}{12} \left( u^2 - {u'}^2 \right) \right]}{u - u'} e^{i {u'}^2 / 6} \,, \label{K_0} \end{equation} which, as expected, is consistent with Eq.~\eqref{<z|D|z'>}. For $n \ge 1$, the integral and sum in Eq.~\eqref{K_n} have to be evaluated numerically. The numerical evaluation of $\mu_{\sup}$ is based on the method originally used in Ref.~\cite{BM94Probability}. 
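As an illustration of this type of computation, here is a hedged Python sketch that discretizes the $\epsilon$-independent kernel $K_0$ of Eq.~\eqref{K_0} on a uniform grid and extracts the largest eigenvalue of the resulting Hermitian matrix. The grid parameters are illustrative choices, not those of the paper; on the diagonal we use the limit $\sin [ (u^2 - u'^2)/12 ] / (u - u') \to u/6$.

```python
import numpy as np

# Illustrative truncation length and grid size; the paper extrapolates both.
L_grid, N = 40.0, 800
h = L_grid / N
u = h * np.arange(N + 1)                  # u_k = (L/N) k, k = 0..N

U, Up = np.meshgrid(u, u, indexing='ij')
d = U - Up
# Real symmetric core of K_0; the u' -> u limit of sin[...]/(u - u') is u/6.
with np.errstate(divide='ignore', invalid='ignore'):
    core = np.where(d == 0.0, U/6.0, np.sin((U**2 - Up**2)/12.0)/d)
# Eq. (K_0): phases e^{-iu^2/6}, e^{iu'^2/6} form a diagonal unitary similarity.
K0 = -(1.0/np.pi)*np.exp(-1j*U**2/6.0)*core*np.exp(1j*Up**2/6.0)

mu = np.linalg.eigvalsh(h*K0)             # K0 is Hermitian: real eigenvalues
mu_max = mu[-1]
```

For this grid the largest eigenvalue should already lie in the vicinity of $\lambda_{\sup} \approx 0.0384$; the double extrapolation in $N$ and $L$ described below sharpens such raw estimates.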
We consider the following discretized version of Eq.~\eqref{K_eigprob}: $\frac{L}{N} \sum_{l = 0}^N K(u_k, u_l) \phi_{L,N}(u_l) = \mu_{L,N} \phi_{L,N}(u_k)$ with $u_k = \frac{L}{N} k$. The kernel $K(u_k, u_l)$ is computed using Eqs.~\eqref{K_expansion} and \eqref{K_n}. We limit our consideration to $-0.5 < \epsilon < 0.5$. For this range, it appears to be sufficient to retain only terms up to (and including) order $\epsilon^4$ in Eq.~\eqref{K_expansion}. Fixing the value of $L$, we solve the discretized eigenproblem numerically for different values of $N$, and observe the following scaling: $\max \mu_{L,N} \simeq \mu_L + C_L N^{-1}$, where $C_L$ is a constant independent of $N$. Extrapolating to $N \to \infty$, we determine $\mu_L$. Repeating this computation for different values of $L$, we observe that $\mu_L \simeq \mu_{\sup} + C L^{-1}$, where $C$ is a constant independent of $L$. Finally, extrapolating to $L \to \infty$, we find $\mu_{\sup}$. \begin{figure} \caption{Thick blue curve: the supremum of the classically-forbidden probability transfer, $\mu_{\sup}$, as a function of $\epsilon$.} \label{fig5} \end{figure} The thick blue curve in Fig.~\ref{fig5} shows the numerically computed function $\mu_{\sup} = \mu_{\sup}(\epsilon)$. The Bracken-Melloy bound, $\lambda_{\sup}$, is shown as the thin red line. The dashed black line shows the linear approximation to $\mu_{\sup}(\epsilon)$, as given by Eq.~\eqref{mu_sup_linear_approx} with the slope evaluated numerically using Eq.~\eqref{mu_1}. The central message conveyed by Fig.~\ref{fig5} is that the classically-forbidden probability transfer from wave packets with nonlinear position-momentum correlations can indeed exceed the Bracken-Melloy bound. \section{Conclusion} \label{Sec:end} The results of this work are twofold. First, we elucidate the reason why the suprema of the classically-impossible probability transfer in the QB and QR problems coincide. 
We achieve this by showing that both effects can be viewed as a backflow for quantum states with linear position-momentum correlations, defined by Eqs.~\eqref{psi_z_rep} and \eqref{z_def}. For such states, the supremum of the backflow is given by the Bracken-Melloy constant. Second, we formulate a generalized backflow problem for quantum states with nonlinear position-momentum correlations, defined by Eqs.~\eqref{phi_w_rep} and \eqref{w_def}. QB and QR can be viewed as special cases of the generalized backflow. We further present analytical and numerical arguments demonstrating that the supremum of the classically-forbidden probability transfer in the generalized backflow problem can exceed the Bracken-Melloy constant. As of today, probability backflow in quantum systems has not been observed experimentally. One of the reasons for this is that, when considered in the original QB formulation, the effect is weak: the QB probability transfer is bounded by less than $3.9\%$ (see Eq.~\eqref{BM_const}). As shown in this paper, the class of quantum states exhibiting classically-forbidden probability flow is larger than that of QB states, and the backflow probability transfer can exceed $3.9\%$. For instance, in the system considered in Section~\ref{Sec:Ex}, more than $4.3\%$ of the total probability can be transported in the ``wrong'' direction (see Fig.~\ref{fig5}). This suggests it could be possible to devise quantum states with large probability backflow that would facilitate future experiments and offer technological applications. 
\appendix \onecolumngrid \section{Derivation of Eq.~\eqref{U|x>}} \label{App:U|x>} Using the Baker–Campbell–Hausdorff formula, \begin{equation*} e^{\hat{A}} \hat{B} e^{-\hat{A}} = \hat{B} + [\hat{A}, \hat{B}] + \frac{1}{2!} [\hat{A}, [\hat{A}, \hat{B}]] + \frac{1}{3!} [\hat{A}, [\hat{A}, [\hat{A}, \hat{B}]]] + \ldots \,, \end{equation*} where $\hat{A}$ and $\hat{B}$ are operators and $[\cdot,\cdot]$ denotes the commutator, we have \begin{equation*} \hat{U}^{-1}(t) \hat{x} \hat{U}(t) = \exp \left(\frac{i t}{2 \hbar m} \hat{p}^2 \right) \hat{x} \exp \left(-\frac{i t}{2 \hbar m} \hat{p}^2 \right) = \hat{x} + \frac{i t}{2 \hbar m} [\hat{p}^2 , \hat{x}] = \hat{x} + \frac{t}{m} \hat{p} \,. \end{equation*} From here, it follows that \begin{equation*} \hat{x} \hat{U}(t) = \hat{U}(t) \hat{x} + \frac{t}{m} \hat{p} \hat{U}(t) \,, \end{equation*} or \begin{equation*} \left( \hat{p} - \frac{m}{t} \hat{x} \right) \hat{U}(t) = -\frac{m}{t} \hat{U}(t) \hat{x} \,. \end{equation*} Applying this operator identity to a position eigenstate $| x \rangle$, we obtain Eq.~\eqref{U|x>}. \section{Derivation of Eq.~\eqref{B_mom_rep}} \label{App:B_op} Starting from Eq.~\eqref{B_def}, and using Eq.~\eqref{U_def}, we write \begin{align} \langle p | \hat{B}(t) | p' \rangle &= \langle p | \hat{U}^{\dag}(\tfrac{t}{2}) \Theta(-\hat{x}) \hat{U}(\tfrac{t}{2}) | p' \rangle - \langle p | \hat{U}(\tfrac{t}{2}) \Theta(-\hat{x}) \hat{U}^{\dag}(\tfrac{t}{2}) | p' \rangle \nonumber \\ &= \left\{ \exp \left[ \frac{i t}{4 \hbar m} \left( p^2 - {p'}^2 \right) \right] - \exp \left[ -\frac{i t}{4 \hbar m} \left( p^2 - {p'}^2 \right) \right] \right\} \langle p | \Theta(-\hat{x}) | p' \rangle \nonumber \\ &= 2 i \sin \left[ \frac{t}{4 \hbar m} \left( p^2 - {p'}^2 \right) \right] \langle p | \Theta(-\hat{x}) | p' \rangle \,. 
\label{App:B_op:eq1} \end{align} Then, \begin{align} \langle p | \Theta(-\hat{x}) | p' \rangle &= \int_{-\infty}^0 dx \, \langle p | x \rangle \langle x | p' \rangle = \frac{1}{2 \pi \hbar} \int_{-\infty}^0 dx \, e^{-i (p - p') x / \hbar} = \frac{1}{2 \pi} \int_0^{\infty} d\xi \, e^{i (p - p') \xi} \nonumber \\ &= \frac{1}{2 \pi} \left( \pi \delta(p - p') + i \operatorname{P} \frac{1}{p - p'} \right) \,, \label{App:B_op:eq2} \end{align} where $\operatorname{P}$ denotes the Cauchy principal value. Substituting Eq.~\eqref{App:B_op:eq2} into Eq.~\eqref{App:B_op:eq1}, we arrive at Eq.~\eqref{B_mom_rep}. \section{Derivation of Eq.~\eqref{<z|D|z'>}} \label{App:<z|D|z'>} Using Eqs.~\eqref{D_mom_rep} and \eqref{z_mom_rep}, we write \begin{align} \langle z | \hat{D}(T) | z' \rangle &= \int_{-\infty}^{+\infty} dp \int_{-\infty}^{+\infty} dp' \, \langle z | p \rangle \langle p | \hat{D}(T) | p' \rangle \langle p' | z' \rangle \nonumber \\ &= \int_{-\infty}^{+\infty} dp \int_{-\infty}^{+\infty} dp' \, \frac{1}{\sqrt{2 \pi \hbar k}} \exp \left( \frac{i}{2 \hbar k} p^2 - \frac{i z}{\hbar k} p \right) \nonumber \\ &\hspace*{0.165\textwidth} \times \frac{-1}{\pi (p - p')} \sin \left[ \frac{T}{4 \hbar m} \left( p^2 - {p'}^2 \right) \right] \exp \left[ \frac{i T}{4 \hbar m} \left( p^2 - {p'}^2 \right) \right] \nonumber \\ &\hspace*{0.165\textwidth} \times \frac{1}{\sqrt{2 \pi \hbar k}} \exp \left( -\frac{i}{2 \hbar k} {p'}^2 + \frac{i z'}{\hbar k} p' \right) \nonumber \\ &= -\frac{1}{2 \pi^2 \hbar k} \int_{-\infty}^{+\infty} dp \int_{-\infty}^{+\infty} dp' \, \frac{\sin \left[ \frac{T}{4 \hbar m} \left( p^2 - {p'}^2 \right) \right]}{p - p'} \exp \left[ i \frac{1 + k T / 2 m}{2 \hbar k} \left( p^2 - {p'}^2 \right) - i \frac{z p - z' p'}{\hbar k} \right] \,. 
\label{App:<z|D|z'>:eq0} \end{align} Changing the integration variables as \begin{equation*} u = p - p' \,, \qquad v = \frac{p + p'}{2} \,, \end{equation*} so that \begin{equation*} p = \frac{u}{2} + v \,, \qquad p' = -\frac{u}{2} + v \,, \end{equation*} we obtain \begin{equation} \langle z | \hat{D}(T) | z' \rangle = -\frac{1}{2 \pi^2 \hbar k} \int_{-\infty}^{+\infty} du \, \frac{I(u)}{u} \exp \left( -i \frac{z + z'}{2 \hbar k} u \right) \,, \label{App:<z|D|z'>:eq1} \end{equation} where \begin{align} I(u) &= \int_{-\infty}^{+\infty} dv \, \sin \left( \frac{T}{2 \hbar m} u v \right) \exp \left( i \frac{1 + k T / 2 m}{\hbar k} u v - i \frac{z - z'}{\hbar k} v \right) \nonumber \\ &= \frac{1}{2 i} \left[ \int_{-\infty}^{+\infty} dv \, \exp \left( i \frac{(1 + k T / m) u - z + z'}{\hbar k} v \right) - \int_{-\infty}^{+\infty} dv \, \exp \left( i \frac{u - z + z'}{\hbar k} v \right) \right] \nonumber \\ &= -i \frac{\pi \hbar k}{1 + k T / m} \delta \left( u - \frac{z - z'}{1 + k T / m} \right) + i \pi \hbar k \delta(u - z + z') \,. \label{App:<z|D|z'>:eq2} \end{align} Substituting Eq.~\eqref{App:<z|D|z'>:eq2} into Eq.~\eqref{App:<z|D|z'>:eq1} and evaluating the integral over $u$, we find \begin{equation*} \langle z | \hat{D}(T) | z' \rangle = -\frac{1}{2 \pi i (z - z')} \left[ \exp \left( -i \frac{z^2 - {z'}^2}{2 \hbar k (1 + k T / m)} \right) - \exp \left( -i \frac{z^2 - {z'}^2}{2 \hbar k} \right) \right] \,. \end{equation*} Now, introducing constants $\alpha$ and $\beta$ such that (cf.~Eqs.~\eqref{alpha} and \eqref{beta}) \begin{equation*} \frac{1}{2 \hbar k (1 + k T / m)} = \alpha - \beta \,, \qquad \frac{1}{2 \hbar k} = \alpha + \beta \,, \end{equation*} we obtain \begin{equation*} \langle z | \hat{D}(T) | z' \rangle = -\frac{e^{-i \alpha \left( z^2 - {z'}^2 \right)}}{\pi (z - z')} \frac{e^{i \beta \left( z^2 - {z'}^2 \right)} - e^{-i \beta \left( z^2 - {z'}^2 \right)}}{2 i} \,, \end{equation*} which is equivalent to Eq.~\eqref{<z|D|z'>}. 
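The identifications of $\alpha$ and $\beta$ used in the last step can be spot-checked numerically against Eqs.~\eqref{alpha} and \eqref{beta}; a minimal sketch (the random parameter ranges are arbitrary):

```python
import numpy as np

# Sample random positive values of hbar, k, T, m (illustrative ranges).
rng = np.random.default_rng(0)
hbar, k, T, m = rng.uniform(0.1, 10.0, size=(4, 1000))

alpha = (1 + 1/(1 + k*T/m)) / (4*hbar*k)   # Eq. (alpha)
beta = (1 - 1/(1 + k*T/m)) / (4*hbar*k)    # Eq. (beta)

# Residuals of alpha - beta = 1/(2 hbar k (1 + kT/m)) and alpha + beta = 1/(2 hbar k).
err1 = np.max(np.abs(alpha - beta - 1/(2*hbar*k*(1 + k*T/m))))
err2 = np.max(np.abs(alpha + beta - 1/(2*hbar*k)))
```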
\section{Derivation of Eq.~\eqref{K_expansion}} \label{App:K_expansion} Starting from Eq.~\eqref{I_as_sum}, we have \begin{align*} I(u,\nu) I^*(u',\nu) = &\sum_{n = 0}^{\infty} \frac{(-i \epsilon)^n}{n!} \sqrt{\frac{\pi}{n - i (2 + \nu)}} \exp \left( -\frac{u^2}{4 [n - i (2 + \nu)]} \right) \\ &\times \sum_{k = 0}^{\infty} \frac{(i \epsilon)^k}{k!} \sqrt{\frac{\pi}{k + i (2 + \nu)}} \exp \left( -\frac{{u'}^2}{4 [k + i (2 + \nu)]} \right) \,. \end{align*} Using the Cauchy product formula \begin{equation*} \sum_{n=0}^{\infty} a_n \sum_{k=0}^{\infty} b_k = \sum_{n=0}^{\infty} \sum_{k=0}^n a_{n-k} b_k \,, \end{equation*} and the fact that $(-i)^{n-k} i^k = (-1)^{n-k} i^n = (-i)^n (-1)^k$, we obtain \begin{equation*} I(u, \nu) I^*(u', \nu) = \pi \sum_{n=0}^{\infty} (-i \epsilon)^n \sum_{k=0}^n \frac{(-1)^k}{(n-k)! k!} \frac{\exp \left( -\frac{u^2}{4 [n - k - i (2 + \nu)]} - \frac{{u'}^2}{4 [k + i (2 + \nu)]} \right)}{\sqrt{[n - k - i (2 + \nu)] [k + i (2 + \nu)]}} \end{equation*} Substituting the last expression into Eq.~\eqref{K_new_representation}, we get \begin{align} K(u, u') &= -\frac{i}{4 \pi} \sum_{n=0}^{\infty} (-i \epsilon)^n \sum_{k=0}^n \frac{(-1)^k}{(n-k)! k!} \int_{-1}^1 d\nu \, \left( \frac{\partial}{\partial u} - \frac{\partial}{\partial u'} \right) \frac{\exp \left( -\frac{u^2}{4 [n - k - i (2 + \nu)]} - \frac{{u'}^2}{4 [k + i (2 + \nu)]} \right)}{\sqrt{[n - k - i (2 + \nu)] [k + i (2 + \nu)]}} \nonumber \\ &= -\frac{1}{8 \pi} \sum_{n=0}^{\infty} (-i \epsilon)^n \sum_{k=0}^n \frac{(-1)^k}{(n-k)! 
k!} Q_{n,k}(u,u') \label{K_intermediate} \,, \end{align} where \begin{align} Q_{n,k}(u,u') &= 2 i \int_{-1}^1 d\nu \, \left( \frac{\partial}{\partial u} - \frac{\partial}{\partial u'} \right) \frac{\exp \left( -\frac{u^2}{4 [n - k - i (2 + \nu)]} - \frac{{u'}^2}{4 [k + i (2 + \nu)]} \right)}{\sqrt{[n - k - i (2 + \nu)] [k + i (2 + \nu)]}} \nonumber \\ &= 2 i \int_1^3 dz \, \left( \frac{\partial}{\partial u} - \frac{\partial}{\partial u'} \right) \frac{\exp \left( -i \frac{u^2}{4 [z + i (n-k)]} + i \frac{{u'}^2}{4 (z - i k)} \right)}{\sqrt{[z + i (n-k)] (z - i k)}} \nonumber \\ &= \int_1^3 dz \, \left( \frac{u}{z + i (n-k)} + \frac{u'}{z - i k} \right) \frac{\exp \left( -i \frac{u^2}{4 [z + i (n-k)]} + i \frac{{u'}^2}{4 (z - i k)} \right)}{\sqrt{[z + i (n-k)] (z - i k)}} \,. \label{Q_nk} \end{align} Finally, substituting Eq.~\eqref{Q_nk} into Eq.~\eqref{K_intermediate}, we arrive at Eq.~\eqref{K_expansion}. For $n=k=0$, the integral in Eq.~\eqref{Q_nk} can be evaluated as follows: \begin{align*} Q_{0,0}(u,u') &= (u + u') \int_1^3 \frac{dz}{z^2} e^{-i \left( u^2 - {u'}^2 \right) / 4 z} = (u + u') \int_{1/3}^1 d\zeta \, e^{-i \left( u^2 - {u'}^2 \right) \zeta / 4} \\ &= (u + u') \frac{e^{-i \left( u^2 - {u'}^2 \right) / 4} - e^{-i \left( u^2 - {u'}^2 \right) / 12}}{-i \left( u^2 - {u'}^2 \right)/4} = 8 e^{-i u^2 / 6} \frac{\sin \left[ \frac{1}{12} \left( u^2 - {u'}^2 \right) \right]}{u - u'} e^{i {u'}^2 / 6} \,. \end{align*} \twocolumngrid \end{document}
\begin{document} \title{Branch dependence in the ``consistent histories'' approach to quantum mechanics} \author{Thomas \surname{M\"{u}ller}} \email{[email protected]} \affiliation{Institut f\"ur Philosophie, Lenn\'estr.~39, 53113 Bonn, Germany} \date{12 November 2006} \pacs{03.65.Ca, 03.65.Ta, 05.30.-d} \begin{abstract} In the consistent histories formalism one specifies a family of histories as an exhaustive set of pairwise exclusive descriptions of the dynamics of a quantum system. We define \emph{branching families} of histories, which strike a middle ground between the two available mathematically precise definitions of families of histories, viz., product families and Isham's history projector operator formalism. The former are too narrow for applications, and the latter's generality comes at a certain cost, barring an intuitive reading of the ``histories''. Branching families retain the intuitiveness of product families, they allow for the interpretation of a history's weight as a probability, and they allow one to distinguish two kinds of coarse-graining, leading to reconsidering the motivation for the consistency condition. \end{abstract} \maketitle \section{Introduction} The consistent histories approach to quantum mechanics \cite{griffiths84,gellmann90,gellmann93,omnes94,dowker_kent96,kent2000,griffiths2003} studies the dynamics of closed quantum systems as a stochastic process within a framework of alternative possible histories. Such a framework, or \emph{family of histories}, must consist of pairwise exclusive and jointly exhaustive descriptions of the system's dynamics. This intuitive characterization does not yet state what a family of histories is mathematically. There are two formal definitions in the literature: So-called \emph{product families} are straightforward generalizations of one-time descriptions of a system's properties in terms of projectors to the case of multiple times \cite{griffiths84}. 
The so-called \emph{history projector operator} formalism, introduced by Isham \cite{isham94}, is vastly more general. However, histories in that formalism do not necessarily have an intuitive interpretation in terms of temporal sequences of one-time descriptions. In our paper we make formally precise the notion of a \emph{branching} family of histories, which is meant to balance generality and intuitiveness: histories in such a family do correspond to temporal sequences of one-time descriptions, yet branching families are much more general than product families. In the context of quantum histories, the notion of branching, or branch dependence, was originally proposed by Gell-Mann and Hartle \cite{gellmann90}. It is invoked informally in many publications, but a formal definition is so far lacking. Our definition of \emph{branching families of histories} is based on the theory of branching temporal logic. Apart from providing a precise reading of a useful concept, our approach allows us to comment on the relation between the consistency condition for families of histories and probability measures on such families. It turns out that the consistency condition is best viewed not as a precondition for introducing probabilities, as some authors suggest, but as the requirement that interference effects be absent from the description of a system's dynamics. Our definition allows for consistent as well as inconsistent families of histories. It is therefore neutral with respect to the discussion about the pros and cons of consistency \cite{dowker_kent96,griffiths98,kent2000}, and we refrain from taking a stance in that discussion. Our paper is structured as follows: In section~\ref{sec:conshist}, we introduce some basic facts about consistent histories and probabilities and review the definition of product families of histories and the history projection operator approach. We also sketch the intuitive motivation for a notion of branch-dependent families. 
In our central section~\ref{sec:branching}, we give our formal definition of branch-dependent families of histories and prove some relevant properties of such families. In the final section~\ref{sec:discussion}, we discuss the relation between our new definition and the two mentioned definitions of families of histories, and we comment on the consistency condition. \section{Consistent histories} \label{sec:conshist} \subsection{Histories, chain operators, and weights} In the consistent histories approach, a \emph{history} is specified via properties of the system in question at a finite number of times. \footnote{Continuous extensions of the theory have also been studied \cite{isham_etal98}, but these will not be considered in this paper.---This section closely follows the notational conventions of \cite{griffiths2003}, which book provides a detailed and readable introduction to consistent histories.} The system's properties are expressed through orthogonal projectors \footnote{More generally, one can specify POVMs or completely positive maps; cf.\ \cite{peres2000a}. We will only consider projectors in this paper.} on (closed) subspaces of the system's Hilbert space \ensuremath{\mathcal{H}}, i.e., operators $P$ for which \begin{equation} \label{eq:projector} P\cdot P = P^\dag = P. \end{equation} Thus, a single history $Y^\alpha$ consists of a number of projectors $P^i_\alpha$ at given times $t_i$, $i=1,\ldots,n$: \begin{equation}\label{eq:nonrelhist} Y^\alpha = P_\alpha^1 \odot P_\alpha^2 \odot \ldots \odot P_\alpha^n. \end{equation} So far, the symbol ``$\odot$'' should be read as ``and then''; in Isham's \emph{history projection operator} version of the history formalism, the symbol can be read as a tensor product (cf.\ section~\ref{sec:hpo}). A \emph{family of histories} $\ensuremath{\mathfrak{F}}$ (sometimes also called a \emph{framework}) is an exhaustive set of alternative histories. 
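To make Eqs.~(\ref{eq:projector}) and (\ref{eq:nonrelhist}) concrete, the following Python sketch (a qubit example invented purely for illustration, not taken from the text) builds one-time projectors, checks the projector property, and represents a two-time product family as a set of projector sequences:

```python
import numpy as np

def projector(v):
    """Orthogonal projector |v><v| onto the normalization of v."""
    v = np.asarray(v, dtype=complex)
    v = v / np.linalg.norm(v)
    return np.outer(v, v.conj())

# One-time properties of a qubit: z-basis at t1, x-basis at t2.
Pz = [projector([1, 0]), projector([0, 1])]
Px = [projector([1, 1]), projector([1, -1])]

# Eq. (eq:projector): P.P = P^dag = P for every projector.
for P in Pz + Px:
    assert np.allclose(P @ P, P)
    assert np.allclose(P.conj().T, P)

# A two-time history P^1 'and then' P^2 is stored as a plain tuple;
# the product family consists of all four such combinations.
family = [(P1, P2) for P1 in Pz for P2 in Px]
```

The two decompositions are complete (each pair of projectors sums to the identity), so the four histories are pairwise exclusive and jointly exhaustive in the intended sense.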
In line with most of the literature on consistent histories, we will only consider finite families in this paper. Associated with a history $Y^\alpha$ is a \emph{chain operator} $K(Y^\alpha)$, which is formed by multiplying together the projectors $P_\alpha^i$ associated with the times $t_i$ ($i=1,\ldots,n$), interleaved with the respective unitary time development operators $T(t_i,t_{i+1})$. Employing the convention of \cite{griffiths2003}, we define \begin{equation} K^\dag(Y^\alpha) = P_\alpha^1\cdot T(t_1,t_2)\cdot P_\alpha^2\cdot \ldots \cdot T(t_{n-1},t_n)\cdot P_\alpha^n. \end{equation} These chain operators are often taken to be \emph{representations} of the respective histories. This is appropriate in that $K(Y^\alpha)$ correctly describes the successive action of the projectors forming $Y^\alpha$ on the system. However, the representation relation is in general many-to-one, which may be seen as a disadvantage in that one cannot uniquely recover a history from the associated chain operator. The system's dynamics explicitly enters the definition of the chain operators through the time development operators. Thus, a history $Y^\alpha$ can have a zero chain operator even though the history involves no zero projectors; such histories are \emph{dynamically impossible}, i.e., ruled out by the system's dynamics. We assume that the initial state of the system is described by a density matrix $\rho$, which might be proportional to unity if no information is given. \footnote{Many definitions implicitly take $\rho=I$ in the case of lacking information, which leads to a wrong scaling of inner products, and thus of weights, by a factor of dim($\ensuremath{\mathcal{H}}$).} The inner product of two operators, $\langle K_1,K_2\rangle_\rho$, given $\rho$, is defined via \begin{equation} \langle K_1,K_2\rangle_\rho = Tr[ \rho\cdot K_1^\dag\cdot K_2 ].
\end{equation} The chain operators allow us to associate with any history $Y^\alpha$ a \emph{weight} $W(Y^\alpha)$, which is the inner product of the history's chain operator with itself: \begin{equation} W(Y^\alpha) = \langle K(Y^\alpha),K(Y^\alpha)\rangle_\rho. \end{equation} In general, one would hope that these weights correspond to probabilities for histories from a given family. We will comment on that issue below, but first we review a few notions from probability theory. \subsection{Probabilities} \label{sec:probabilities} A probability space is a triple $\ensuremath{\mathfrak{P}} = \langle S,A,\mu\rangle$, where $S$ is the sample space (the set of alternatives), $A$ is a Boolean $\sigma$-algebra on $S$, and $\mu$ is a normalized, countably additive measure on $A$, i.e., a function \begin{equation} \mu: A\rightarrow [0,1]\quad\quad \mbox{s.t.}\quad \mu(S) = 1, \end{equation} and such that for any countable family $(a_j)_{j\in J}$ of disjoint elements of $A$, $\mu$ is additive: \begin{equation}\label{eq:additivity_of_measure} \mu\left(\bigcup_{j\in J} a_j\right) = \sum_{j\in J} \mu(a_j). \end{equation} For a finite set $S$, the algebra $A$ is isomorphic to the so-called power set algebra, i.e., the Boolean algebra of subsets of $S$, with minimal element $\emptyset$ and maximal element $S$; the operations of join, meet, and complement are set-theoretic union, intersection, and set-theoretic complement, respectively. In the finite case, $\mu$ is uniquely specified by its value on the singletons (atoms), and normalization is expressed by the condition \begin{equation}\label{eq:sumatoms} \sum_{s\in S} \mu(\{s\}) = 1. \end{equation} For our treatment of quantum histories, which follows the literature in assuming finite families, these latter, simplified conditions are sufficient: A finite probability space is completely specified by giving a finite set $S$ of alternatives and an assignment $\mu$ of nonnegative numbers fulfilling (\ref{eq:sumatoms}). 
The algebra $A$ is then given as the power set algebra of $S$, and $\mu$ is extended to all of $A$ via (\ref{eq:additivity_of_measure}). \subsection{Families of histories: Product families} \label{sec:productfamilies} Intuitively, a family $\ensuremath{\mathfrak{F}}$ of histories should consist of an exhaustive set of exclusive alternatives. Thus any one history $h\in\ensuremath{\mathfrak{F}}$ should rule out all other histories from $\ensuremath{\mathfrak{F}}$, and $\ensuremath{\mathfrak{F}}$ should have available enough histories to describe any possible dynamic evolution of the system in question. Exclusiveness must in some way be linked to orthogonality of projectors. This rules out taking $\ensuremath{\mathfrak{F}}$ to be the set of all time-ordered sets of projectors of the form (\ref{eq:nonrelhist})---such a family would be far too large. The simplest way to ensure the requirements of exclusiveness and exhaustiveness is to fix a sequence of time points \begin{equation} t_1<t_2<\ldots<t_n \end{equation} and to specify, for each time $t_i$, a decomposition of the identity operator $I$ on the system's Hilbert space \ensuremath{\mathcal{H}}\ into $n_i$ orthogonal projectors $\{P^i_1,\ldots,P^i_{n_i}\}$, so that \begin{equation} P^i_j\cdot P^i_{j'} = \delta_{jj'}\,P^i_j,\quad\quad \sum_{j=1}^{n_i} P^i_j = I. \end{equation} In this case, the index $\alpha$ specifying a history $Y^\alpha$ can be taken to be the list of numbers $(\alpha_1,\ldots,\alpha_n)$, where $1\leq\alpha_i\leq n_i$. The size $|\ensuremath{\mathfrak{F}}|$ of the family $\ensuremath{\mathfrak{F}}$ is given by \begin{equation} |\ensuremath{\mathfrak{F}}| = n_1\times n_2\times \cdots\times n_n. \end{equation} Histories in $\ensuremath{\mathfrak{F}}$ are pairwise exclusive, since any two different histories use different, orthogonal projectors at some time $t_i$. Such a family is also exhaustive, as at every time, the decomposition of the identity specifies an exhaustive set of alternatives. 
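Both requirements can be checked mechanically. The following sketch (a 2-dimensional system with two times; the concrete bases are our illustrative choice) builds such a product family and verifies orthogonality, completeness, and the size formula:

```python
# Sketch of a product family on a 2-dimensional Hilbert space with two
# times; the z- and x-basis projectors are an illustrative choice.
import itertools
import numpy as np

def proj(v):
    v = np.asarray(v, dtype=complex)
    v = v / np.linalg.norm(v)
    return np.outer(v, v.conj())

I2 = np.eye(2)
decomp_t1 = [proj([1, 0]), proj([0, 1])]    # {P^1_1, P^1_2} at t_1
decomp_t2 = [proj([1, 1]), proj([1, -1])]   # {P^2_1, P^2_2} at t_2

# Each decomposition consists of orthogonal projectors summing to I:
for decomp in (decomp_t1, decomp_t2):
    for j, P in enumerate(decomp):
        for k, Q in enumerate(decomp):
            expected = P if j == k else np.zeros((2, 2))
            assert np.allclose(P @ Q, expected)
    assert np.allclose(sum(decomp), I2)

# The product family: one history per choice of projector at each time.
family = list(itertools.product(decomp_t1, decomp_t2))
assert len(family) == len(decomp_t1) * len(decomp_t2)   # |F| = n_1 x n_2
```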
Formally, $\ensuremath{\mathfrak{F}}$ corresponds to a cartesian product of decompositions of the identity at different times. Product families are thus the obvious generalization of one-time descriptions of a system's properties in terms of projectors to the case of multiple times. \subsection{Branch-dependent families of histories} \label{sec:branchingintuitive} While the construction of a product family of histories is the simplest way to ensure exclusiveness and exhaustiveness, that construction is by no means the only possibility. Many authors have noted that in applications, it will often be necessary to choose a time point $t_{i+1}$, or the decomposition of $I$ at $t_{i+1}$, dependent on the projector $P^i_\alpha$ employed at time $t_i$. Thus, e.g., in order to describe a ``delayed choice'' quantum correlation experiment \cite{aspect82}, one chooses the direction of spin projection at time $t_2$ depending on the outcome of a previous selection event at time $t_1$. It is intuitively quite clear what such ``branch dependence'' would mean; eqs.\ (\ref{eq:branch_no_prod1})--(\ref{eq:branch_no_prod4}) below give an example of a branch-dependent family of histories. However, no formally rigorous description of such families of histories is available so far. Before we go on to give such a description in section~\ref{sec:branching}, we introduce Isham's \cite{isham94} generalized definition of families of histories in terms of history projection operators (HPOs), with which our definition of branch dependence will be compared below. \subsection{Isham's history projection operators (HPO)} \label{sec:hpo} The guiding idea of the history projection operator framework is to single out, as for product families, $n$ times $t_1,\ldots,t_n$ at which the system's properties will be described. 
One then forms the $n$-fold tensor product of the system's Hilbert space: \begin{equation} \ensuremath{\tilde{\mathcal{H}}} = \ensuremath{\mathcal{H}} \otimes \ldots \otimes \ensuremath{\mathcal{H}} \quad\quad \mbox{($n$ times)}. \end{equation} In that large history Hilbert space $\ensuremath{\tilde{\mathcal{H}}}$, a history $Y^\alpha$ is read as a tensor product of projectors: \begin{equation} Y^\alpha = P_\alpha^1 \odot P_\alpha^2 \odot \ldots \odot P_\alpha^n = P_\alpha^1 \otimes P_\alpha^2 \otimes \ldots \otimes P_\alpha^n. \end{equation} That tensor product operator $Y^\alpha$ is itself a projection operator on $\ensuremath{\tilde{\mathcal{H}}}$, a so-called \emph{history projection operator}, fulfilling \begin{equation} Y^\alpha\cdot Y^\alpha=(Y^\alpha)^\dag=Y^\alpha. \end{equation} Along these lines one can give an abstract definition of a family of histories $\ensuremath{\mathfrak{F}} = \{Y^1,\ldots,Y^N\}$ as a decomposition of the history Hilbert space identity $\tilde{I}$: \begin{equation} Y^\alpha Y^\beta=\delta_{\alpha\beta}Y^\alpha,\quad \sum_{Y^\alpha\in\ensuremath{\mathfrak{F}}}Y^\alpha = \tilde{I}. \end{equation} This generalization is formally rigorous, and it allows for further (e.g., continuous) extensions. However, the generality comes at a certain cost, since there is no condition that would ensure that a history projector $Y^\alpha$ should factor into a product of $n$ projectors on the system's Hilbert space at the $n$ given times, as in (\ref{eq:nonrelhist}). History projectors that do factor in this way are called \emph{homogeneous histories}. As the main motivation for introducing histories is given in terms of homogeneous histories, some authors have expressed doubts as to whether the full generality of HPO is really appropriate \cite[p.~118]{griffiths2003}.
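For a small concrete case (a 2-dimensional system with $n=2$ times; the bases are again our illustrative choice), the tensor-product reading and the decomposition of $\tilde{I}$ can be verified numerically:

```python
# History projection operators for n = 2 times on a 2-dimensional system:
# each homogeneous history becomes np.kron(P, Q) on the history space
# H~ = H (x) H (illustrative sketch).
import itertools
import numpy as np

def proj(v):
    v = np.asarray(v, dtype=complex)
    v = v / np.linalg.norm(v)
    return np.outer(v, v.conj())

decomp_t1 = [proj([1, 0]), proj([0, 1])]
decomp_t2 = [proj([1, 1]), proj([1, -1])]

# Homogeneous history projectors Y = P (x) Q:
Ys = [np.kron(P, Q) for P, Q in itertools.product(decomp_t1, decomp_t2)]

for Y in Ys:                 # each Y is itself a projector on H~
    assert np.allclose(Y @ Y, Y)
    assert np.allclose(Y.conj().T, Y)

I_tilde = sum(Ys)            # the family decomposes the identity of H~
assert np.allclose(I_tilde, np.eye(4))
```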
One possibility for constructing a narrower framework is to restrict the HPO formalism to the homogeneous case by requiring that all the $Y^\alpha\in\ensuremath{\mathfrak{F}}$ be products of projectors of the form (\ref{eq:nonrelhist}). Such a restriction may be implicitly at work in \cite{griffiths2003}, but it is to some extent alien to the HPO formalism. Nor does it single out a useful class of families of histories: as we will show below (eqs.~(\ref{eq:hom_no_branch1})--(\ref{eq:hom_no_branch4})), not all homogeneous families are branch-dependent families, and some homogeneous families do not admit an interpretation of weights in terms of probabilities. We now describe an alternative approach in which branch dependence is the natural result of an inductive definition. Furthermore, the framework allows one to distinguish two different types of coarse-graining for families of histories. \section{A formal framework for branch-dependent histories} \label{sec:branching} The idea of a branching family of histories is based on the theory of branching temporal logic that originated in the work of Prior \cite{prior67}. \footnote{Branching temporal logic has already found many applications in computer science, linguistics, and philosophy. The interested reader is referred to \cite{belnap2001} for an overview.} \subsection{Branching structures} For the purpose of constructing finite families of branching histories, branching temporal logic boils down to the following inductive definition of a \emph{branching structure}, which is a set $M$ of \emph{moments} $m_i$ partially ordered by $\preceq$: \footnote{One should think of $\preceq$ in analogy to ``less than or equal'' ($\leq$), so there is a companion strict order (excluding equality), which is denoted by $\prec$.
However, note that $\preceq$ is only a partial order, i.e., some elements of $M$ may be incomparable.} \begin{Def}[Branching structure] (i) A singleton set $M=\{m_0\}$ together with the relation $m_0\preceq m_0$ is a branching structure. (ii) Let $\langle M,\preceq\rangle$ be a branching structure and $m\in M$ a maximal element, and let $m^*_1,\ldots,m^*_n$ be new elements. Let $\preceq^*$ be the reflexive and transitive closure of the relation $\preceq$ together with the new relations $m\preceq^* m^*_1$, \ldots, $m\preceq^* m^*_n$. Then the set $M\cup \{m^*_1,\ldots,m^*_n\}$ together with the relation $\preceq^*$ is again a branching structure. \end{Def} By taking a finite number of steps along this definition, one constructs a finite branching tree in the form of a partially ordered set with the unique root element $m_0$. \footnote{Note that the construction ensures the following formal features: The ordering $\preceq$ is transitive (if $x\preceq y$ and $y\preceq z$, then $x\preceq z$), reflexive ($x\preceq x$) and antisymmetric (if $x\preceq y$ and $y\preceq x$, then $x=y$). Furthermore, the ordering fulfills the axioms of ``no backward branching'' (if $x\preceq z$ and $y\preceq z$, then either $x\preceq y$ or $y\preceq x$) and of ``historical connection'', meaning the existence of a common lower bound for any two elements (for any $x,y\in M$, there is $z\in M$ s.t.\ $z\preceq x$ and $z\preceq y$).---In a more general approach to branching temporal logic, these formal features are taken as axioms of a logical framework.} The maximal nodes in the tree are called ``leaves''. Figure~\ref{fig:bs} illustrates the inductive process. Except for the root element $m_0$, each node has a unique direct predecessor, and except for the leaves, each node $m$ has one or more direct successors, which correspond to branching at $m$. 
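The inductive definition translates directly into a small data structure; the following sketch (representation and names are ours) builds a branching structure by repeated application of clause (ii) and recovers the maximal chains below its leaves:

```python
# Inductive construction of a finite branching structure, represented
# by a parent map (illustrative sketch; clause numbers refer to the
# definition above).
class BranchingStructure:
    def __init__(self):
        self.parent = {'m0': None}          # clause (i): root element only

    def add_successors(self, m, new_nodes):
        """Clause (ii): attach new elements above a maximal element m."""
        assert m in self.parent and m not in self.parent.values(), \
            "successors may only be added above a maximal element"
        for node in new_nodes:
            self.parent[node] = m

    def leaves(self):
        """Maximal nodes, i.e., nodes that are nobody's parent."""
        return [m for m in self.parent if m not in self.parent.values()]

    def path_to_root(self, m):
        """The maximal chain below m, from the root upwards."""
        path = []
        while m is not None:
            path.append(m)
            m = self.parent[m]
        return list(reversed(path))

bs = BranchingStructure()
bs.add_successors('m0', ['m1', 'm2'])       # branching at m0
bs.add_successors('m1', ['m3', 'm4', 'm5']) # branching at m1

assert sorted(bs.leaves()) == ['m2', 'm3', 'm4', 'm5']
assert bs.path_to_root('m4') == ['m0', 'm1', 'm4']
```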
Paths in the tree, i.e., maximal linearly ordered subsets, extend from the root to one of the leaves and are thus in one-to-one correspondence with the leaves. These paths are often called \emph{histories} by logicians, and they can indeed be given an interpretation in terms of quantum histories, as we will show in the next section. \begin{figure} \caption{Inductive construction of a branching structure.} \label{fig:bs} \end{figure} \subsection{Branching families of histories} A branching family of histories can be viewed as a quantum-mechanical interpretation of a branching structure $\langle M,\preceq\rangle$. We assume that a system with Hilbert space $\ensuremath{\mathcal{H}}$ (identity operator $I$) is given. The interpretation is given by two functions $\tau$ and $P$ that associate times and projectors with the elements of $M$, respectively. Formally, we define: \begin{Def}[Branching family of histories]\label{def:branchingfamily} A \emph{branching family of histories} is a quadruple \begin{equation} \ensuremath{\mathfrak{F}} = \langle M,\preceq,\tau,P\rangle, \end{equation} where \mbox{$\langle M,\preceq\rangle$} is a finite branching structure and $\tau$ is a function from $M$ to the real numbers respecting the partial ordering $\preceq$: \begin{equation}\label{eq:respectordering} \mbox{if $m\prec m'$, then $\tau(m)< \tau(m')$}. \end{equation} $P$ is a function from $M$ to the set of projectors on $\ensuremath{\mathcal{H}}$ that assigns projection operators to the elements of $M$ in the following way: If $m\in M$ is not a maximal element, and $m_1,\ldots,m_{n_m}$ are the $n_m$ immediate successors of $m$, then a set of orthogonal projectors $P^m_1,\ldots,P^m_{n_m}$ forming a decomposition of the identity, \begin{equation}\label{eq:associate_projectors} P^m_i P^m_j=\delta_{ij} P^m_i,\quad \sum_{i=1}^{n_m} P^m_i = I, \end{equation} is assigned to the $m_1,\ldots,m_{n_m}$ via $P(m_i)=P^m_i$. \end{Def} The number $\tau(m)\in \mathbb{R}$ is the time associated with $m\in M$.
While the same time may be assigned to moments in different histories, we require that $\tau$ respect the partial ordering, as expressed via (\ref{eq:respectordering}). \footnote{The time parameter will only be needed in specifying the time development operators in the definition of the chain operators later on. Working in the Heisenberg picture, the function $\tau$ would not be needed; the condition on $\tau$ would be replaced by requiring an appropriate temporal ordering of the Heisenberg operators specified through $P$. This already points towards a relativistic generalization of the current approach to quantum histories, which is currently under preparation \cite{muller_qmbst}.} As regards the assignment of projection operators, note that $P(m_i)=P^m_i$ means that at $m$ (the unique predecessor of $m_i$, not $m_i$ itself), the system had the property expressed by $P^m_i$. \footnote{The slightly awkward reference to the previous node in (\ref{eq:associate_projectors}) can be avoided if one assigns projectors more properly not to nodes, but to \emph{elementary transitions} in the branching structure; cf.\ \cite{xu97,belnap2005} for details. We stick to our simplified exposition in order not to clutter this paper with technicalities---which will, however, be relevant for an extension to infinite structures, or to a relativistic version employing \emph{branching space-times} \cite{belnap92,muller2005,muller_qmbst}.} The function $P$ thus associates decompositions of the Hilbert space identity with instances of branching. In this way, each maximal path $\alpha$ of length $n_\alpha+1$ in $\langle M,\preceq\rangle$, \begin{equation} m^\alpha_0 \prec m^\alpha_1 \prec \cdots \prec m^\alpha_{n_\alpha}, \end{equation} corresponds to the chain of $n_\alpha$ projection operators \begin{equation} P(m^\alpha_1) \odot P(m^\alpha_2) \odot \cdots \odot P(m^\alpha_{n_\alpha}).
\end{equation} Here, $m^\alpha_0=m_0$ is the root node, and $m^\alpha_{n_\alpha}$ is one of the maximal elements. The projectors give information for the $n_\alpha$ many times \begin{equation} \tau(m^\alpha_0) < \tau(m^\alpha_1) < \cdots < \tau(m^\alpha_{n_\alpha-1}). \end{equation} \begin{figure} \caption{An example of a branching family of histories.} \label{fig:branchingfamily} \end{figure} Our Definition~\ref{def:branchingfamily} captures the intuitive notion of branch dependence described in section~\ref{sec:branchingintuitive} in a formally exact manner, as illustrated by Figure~\ref{fig:branchingfamily}. In that example of a branching family of histories, the root node $m_0$ has two successors, $m_1$ and $m_2$, corresponding to the system's having the properties expressed by the projection operators $P_0^1$ and $P_0^2$ at $\tau(m_0)$, respectively. The vertical position of $m_1$ vs.\ $m_2$ indicates that $\tau(m_1)\neq \tau(m_2)$, which is one aspect of branch dependence that is not available in product families: the time for which the system's property is described after $\tau(m_0)$ depends on the system's property at $\tau(m_0)$. Furthermore, the decompositions of the Hilbert space identity at $m_1$ ($\{P_1^1,P_1^2,P_1^3\}$) and at $m_2$ ($\{P_2^1,P_2^2\}$) are different, thus exhibiting the second form of branch dependence that is not available in product families. \subsection{Properties of quantum branching histories} We first note that every product family of histories (cf.\ section~\ref{sec:productfamilies}) is a branching family of a very symmetric kind. \begin{lem}[Branching vs.\ product families]\label{lem:branching_product} Every product family of histories is a branching family of histories, but not conversely. \end{lem} \noindent \emph{Proof:} In order to see that product families are branching families, let a product family corresponding to $n$ decompositions of the identity $\{P^i_1,\ldots,P^i_{n_i}\}$ at times $t_1,\ldots,t_n$ be given.
An equivalent branching family is constructed in $n+1$ steps as follows: We start with a single node, $M_0 = \{m_0\}$ (stage $0$). Then at each stage $i$ ($1\leq i\leq n$), we add new nodes and enlarge the structure. We assign the time $t_i$ to all of the maximal elements of $M_{i-1}$, and we add the decomposition $\{P^i_1,\ldots,P^i_{n_i}\}$ above all these maxima by introducing $n_i$ new elements above \emph{each} maximum, thus arriving at the new set of nodes $M_i$. When $M_n$ has been constructed, we finally assign some time $t^*>t_n$ to all the maximal elements in $M_n$. This construction yields a symmetrically growing tree that in the end (at stage $n$) corresponds to the original product family of histories. For a branching family that is not a product family, let $\ensuremath{\mathcal{H}}$ have dimension~2, and let $\{|\phi_1\rangle,|\phi_2\rangle\}$ and $\{|\psi_1\rangle,|\psi_2\rangle\}$ be two different orthonormal bases of $\ensuremath{\mathcal{H}}$. The family \begin{eqnarray} \label{eq:branch_no_prod1} h_1 &=& |\phi_1\rangle\langle\phi_1| \odot |\phi_1\rangle\langle\phi_1|\\ \label{eq:branch_no_prod2} h_2 &=& |\phi_1\rangle\langle\phi_1| \odot |\phi_2\rangle\langle\phi_2|\\ \label{eq:branch_no_prod3} h_3 &=& |\phi_2\rangle\langle\phi_2| \odot |\psi_1\rangle\langle\psi_1|\\ \label{eq:branch_no_prod4} h_4 &=& |\phi_2\rangle\langle\phi_2| \odot |\psi_2\rangle\langle\psi_2| \end{eqnarray} describes the system at two times $t_1$ and $t_2$, yielding a branching family, but not a product family. \ensuremath{\square} Branching families of histories $\ensuremath{\mathfrak{F}} = \langle M,\preceq,\tau,P\rangle$ thus yield a natural generalization of product histories while retaining a strong link between the formalism and the intended temporal interpretation of the histories. Furthermore, the inductive definition of the structure makes it easy to prove two key properties of branching families of histories. 
Firstly, the construction immediately shows that $\ensuremath{\mathfrak{F}}$ is exclusive and exhaustive: The one-element family has that property, and it is retained by adding inductively further (exclusive and exhaustive) decompositions of the identity at a maximal node. \footnote{``Exhaustive'' does not mean ``maximally detailed''. The latter notion is quite dubious anyway, as for every finite description of a quantum system's dynamics one can give a more detailed one.} Secondly, one can show that the weights of histories in a branching family $\ensuremath{\mathfrak{F}}$ always add up to one. Thus, the weights immediately induce probabilities on the power set Boolean algebra of $\ensuremath{\mathfrak{F}}$ (cf.\ section~\ref{sec:probabilities}). \begin{lem}[Weights in branching families] \label{lem:addtoone} In a branching family of histories $\ensuremath{\mathfrak{F}}$, the weights sum to one: \begin{equation} \sum_{h\in\ensuremath{\mathfrak{F}}} W(h) = 1. \end{equation} \end{lem} \noindent \emph{Proof:} Assume that an initial state of the system is given by a density matrix $\rho_0$. \footnote{In the case of complete ignorance, $\rho_0 = I / \mbox{dim}(\ensuremath{\mathcal{H}})$.} For the trivial family consisting of only one history $\{m_0\}$, the sum of weights reduces to \begin{equation} W(\{m_0\}) = Tr [\rho_0] = 1. \end{equation} Now assume that the property in question holds for the family $\ensuremath{\mathfrak{F}}$ corresponding to $\langle M,\preceq,\tau,P\rangle$, let $m$ be a maximal node, $m^-$ its (unique) direct predecessor, and let \begin{equation} P^m_i P^m_j=\delta_{ij} P^m_i,\quad \sum_{i=1}^n P^m_i = I, \end{equation} be the decomposition of the identity that is to be employed at the $n$ new maximal elements $m^*_1,\ldots,m^*_n$ that are to be added after $m$. Suppose that $\ensuremath{\mathfrak{F}}$ had $N$ elements $h_1,\ldots,h_N$, whose weights add to one. 
In order to facilitate book-keeping, suppose further that $m$ is the final node of $h_N$. To the new quantum branching structure $\langle M',\preceq',\tau',P'\rangle$ there corresponds a family $\ensuremath{\mathfrak{F}}'$ of $N+n-1$ histories, where for $i=1,\ldots,N-1$, $h'_i=h_i$, whereas $h_N$ is replaced by the $n$ new histories $h'_N,\ldots, h'_{N+n-1}$ ending in the new elements $m^*_1,\ldots,m^*_n$. In order to show that in $\ensuremath{\mathfrak{F}}'$, the weights still add to one, we only need to show that \begin{equation} W(h_N) = \sum_{i=1}^{n} W(h'_{N+i-1}), \end{equation} i.e., the histories replacing old $h_N$ must together have the same weight as $h_N$. Now in terms of the chain operator $K(h_N)$ for $h_N$, the chain operators for the new histories are \begin{equation} K(h'_{N+i-1}) = P^m_i\cdot T(\tau(m),\tau(m^-)) \cdot K(h_N). \end{equation} The initial density matrix $\rho_0$, evolved along $h_N$, becomes \begin{equation} \rho_m = K(h_N)\,\rho_0\,K^\dag(h_N), \end{equation} and the weight of $h_N$ can be expressed as \begin{equation} W(h_N) = Tr[K(h_N)\,\rho_0\,K^\dag(h_N)] = Tr[\rho_m]. \end{equation} The weights for the new histories can then be written as \begin{eqnarray} \lefteqn{W(h'_{N+i-1}) = } \\ \nonumber & & Tr[P^m_i\, T(\tau(m),\tau(m^-))\, \rho_m\, T(\tau(m^-),\tau(m))\, (P^m_i)^\dag] = \\ \nonumber & & Tr[T(\tau(m^-),\tau(m))\, P^m_i\, T(\tau(m),\tau(m^-))\, \rho_m], \end{eqnarray} where we used the cyclic property of the trace and $P^\dag = P^2 = P$. Now as $T$ is unitary, the $n$ operators \begin{equation} \tilde{P}^m_i = T(\tau(m^-),\tau(m))\, P^m_i\, T(\tau(m),\tau(m^-)) \end{equation} are again projectors forming a decomposition of the identity, so that by the linearity of the trace, \begin{eqnarray} \sum_{i=1}^n W(h'_{N+i-1}) &=& \sum_{i=1}^n Tr[\tilde{P}^m_i\, \rho_m]\\ &=& Tr\left[\sum_{i=1}^n \tilde{P}^m_i\, \rho_m\right]\\ &=& Tr[I\cdot \rho_m] = W(h_N), \end{eqnarray} which was to be proved. 
\ensuremath{\square} \section{Coarse graining, probabilities and the consistency condition} \label{sec:discussion} The general idea of coarse graining is that it should be possible to move from a more to a less detailed description of a given system in a coherent way. If probabilities are attached to a fine-grained description, then an obvious requirement is that the probability of a coarse-grained alternative should be the sum of the probabilities of the corresponding fine-grained alternatives. Considerations of coarse graining are important for quantum histories because of the interplay between weights of histories and probability measures in a family of histories. Our discussion will show that one needs to distinguish two notions of coarse graining. One notion of coarse graining comes for free in any probability space: Due to the additivity of the measure, if $b^*$ is the disjoint union of $b_1,\ldots,b_n$ in the event algebra, then $\mu(b^*) = \sum_{i=1}^n \mu(b_i)$. \footnote{In the infinite case, that equation holds for countable unions.} By Lemma~\ref{lem:addtoone}, for any branching family (and thus, by Lemma~\ref{lem:branching_product}, for any product family) of quantum histories, the weights $W(h)$ of the histories $h\in\ensuremath{\mathfrak{F}}$ induce a probability measure on $\ensuremath{\mathfrak{F}}$ via $\mu(\{h\}) = W(h)$, which is extended to the power set Boolean algebra of $\ensuremath{\mathfrak{F}}$ via (\ref{eq:additivity_of_measure}): \begin{equation}\label{eq:cg_sets} \mu(\{h_1,\ldots,h_n\}) = \sum_{i=1}^n \mu(\{h_i\}) = \sum_{i=1}^n W(h_i). \end{equation} If $h_1,\ldots,h_n$ are fine-grained descriptions of a system's dynamics, eq.~(\ref{eq:cg_sets}) shows that the coarse-grained description $\{h_1,\ldots,h_n\}$ automatically is assigned the correct probability. Thus, branching families of histories unconditionally and naturally support this notion of coarse graining. 
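Lemma~\ref{lem:addtoone} and eq.~(\ref{eq:cg_sets}) can also be checked numerically. The following sketch evaluates the weights of the branch-dependent family of eqs.~(\ref{eq:branch_no_prod1})--(\ref{eq:branch_no_prod4}), assuming, purely for simplicity, trivial dynamics $T=I$ and a maximally mixed initial state:

```python
# Numerical check: weights in the branch-dependent family h_1, ..., h_4
# sum to one (trivial dynamics T = I assumed for simplicity).
import numpy as np

def proj(v):
    v = np.asarray(v, dtype=complex)
    v = v / np.linalg.norm(v)
    return np.outer(v, v.conj())

phi1, phi2 = proj([1, 0]), proj([0, 1])    # basis {|phi_1>, |phi_2>}
psi1, psi2 = proj([1, 1]), proj([1, -1])   # basis {|psi_1>, |psi_2>}

rho = np.eye(2) / 2                        # rho_0 = I / dim(H)

# Chain operators K(h) = P^2 . T . P^1 with T = I, one per history:
chains = [phi1 @ phi1, phi2 @ phi1, psi1 @ phi2, psi2 @ phi2]

def weight(K, rho=rho):
    """W(h) = <K, K>_rho = Tr[rho K^dag K]."""
    return np.trace(rho @ K.conj().T @ K).real

weights = [weight(K) for K in chains]
assert np.isclose(sum(weights), 1.0)       # the lemma: weights sum to one

# Coarse graining by sets, eq. (cg_sets): mu({h_1, h_2}) = W(h_1) + W(h_2),
# which here equals the weight of the one-time history with projector phi1.
assert np.isclose(weights[0] + weights[1], weight(phi1))
```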
If all branching families support probabilities and coarse graining, then what is behind the consistency condition? In the literature it is often suggested that the possibility of defining probabilities or the possibility of coarse graining for a family of histories is conditional upon the so-called \emph{consistency condition}, \begin{equation}\label{eq:consistency} \langle K(Y^\alpha), K(Y^\beta)\rangle_\rho = 0\quad \mbox{if $\alpha\neq\beta$}, \end{equation} which demands that the chain operators of different histories must be orthogonal. Condition (\ref{eq:consistency}) is also called ``medium decoherence'' \cite{gellmann93}. The above considerations show that for branching families, both probabilities and one notion of coarse graining are unproblematic, independent of any condition like (\ref{eq:consistency}). However, that condition does play an important role with respect to a second, different notion of coarse graining. That second notion of coarse graining is based on the idea of constructing from the histories $h\in\ensuremath{\mathfrak{F}}$ not \emph{sets of histories}, as in eq.~(\ref{eq:cg_sets}), but \emph{new histories}, by something like addition. Whether such additive combination of histories is possible at all, generally depends on what histories are mathematically. We will see below that additive combination is not always possible for histories in a branching family. Accordingly, in order not to suggest that addition of histories is always unproblematic, we will use the formal notation ``$sum(h_1,h_2)$'' when we wish to leave open the question whether that sum is in fact defined. 
If we consider the additive combination of two histories, $h_1$ and $h_2$, \begin{equation}\label{eq:sumanyhist} h = sum(h_1,h_2), \end{equation} the idea behind coarse graining suggests that for $h$, which is a less detailed description than the two fine-grained histories, the probabilities should just add: \begin{equation}\label{eq:cg_hist} \mu(h) = \mu(sum(h_1,h_2)) = \mu(h_1) + \mu(h_2). \end{equation} Even apart from the question of whether $sum(h_1,h_2)$ can be defined, eq.~(\ref{eq:cg_hist}) is problematic as it stands: No family of histories can contain the two histories $h_1$ and $h_2$ together with their sum $h$, as that would violate the requirement of exclusiveness. Thus, $\mu$ in (\ref{eq:cg_hist}) cannot be a probability measure in a single family of histories. At this point the idea of weights $W(h)$ as probabilities enters. Assuming that $h=sum(h_1,h_2)$ is indeed a history, $W$ is defined for all three of $h_1$, $h_2$, and $h$, and the main idea of (\ref{eq:cg_hist}) can be reformulated as \begin{equation}\label{eq:cg_hist_weights} W(h) = W(sum(h_1,h_2)) = W(h_1) + W(h_2). \end{equation} The validity of (\ref{eq:cg_hist_weights}) is indeed linked to the consistency condition (\ref{eq:consistency}). However, depending on which type of family of histories one considers, there are some subtle issues, as the following sections point out. \subsection{Coarse graining in product families} If $\ensuremath{\mathfrak{F}}$ is a product family of histories, the sketched idea of coarse graining makes immediate sense, as the sum of any two histories in a given product family can be defined. To consider the basic case, let two histories $h_1,h_2\in\ensuremath{\mathfrak{F}}$ be given, \begin{equation} h_\alpha = P^1_\alpha\odot \cdots \odot P^n_\alpha,\quad\quad \alpha=1,2, \end{equation} such that they coincide everywhere except for the $j$-th position: $P^i_1 = P^i_2$ for $i\neq j$, $P^j_1\neq P^j_2$.
In this case, one can define their sum \begin{equation}\label{eq:sumprodhist} h = sum(h_1,h_2) := P^1_1 \odot \cdots \odot P^{j-1}_1 \odot (P^j_1+P^j_2) \odot P^{j+1}_1 \odot \cdots \odot P^n_1, \end{equation} i.e., at each time $t_i$ for which the histories $h_1$ and $h_2$ are defined, their sum, $h$, specifies either the same projector as each of $h_1$ and $h_2$, or gives a less detailed description in terms of the projector $P^j_1+P^j_2$. The assumption of a product family is crucial in this definition, as it guarantees that the $P^i_1$ and $P^i_2$ are both defined at the same times, and that $P^j_1$ and $P^j_2$ commute---for a branch-dependent family, $P^j_1+P^j_2$, even if defined, would not generally be a projector. For a history like $h$ in (\ref{eq:sumprodhist}), the weight function $W$ is naturally defined even though $h\not\in\ensuremath{\mathfrak{F}}$, and it appears natural to demand that \begin{equation}\label{eq:addweights} W(h) = W(sum(h_1,h_2)) = W(h_1) + W(h_2). \end{equation} This equation does not hold in general in product families of histories. A family of histories $\ensuremath{\mathfrak{F}}$ must satisfy the above-mentioned condition of \emph{consistency} if it is to satisfy (\ref{eq:addweights}) for all $h_1,h_2\in\ensuremath{\mathfrak{F}}$, and it was along these lines that Griffiths \cite{griffiths84} originally motivated the consistency condition for product families of histories. \footnote{\label{fn:consistency} An extended discussion of questions of uniqueness conditions for probability assignments in product families is given in \cite{nistico99}.---The notion of consistency in \cite{griffiths84}, which corresponds to (\ref{eq:addweights}), is weaker than the consistency condition (\ref{eq:consistency}) formulated above: For (\ref{eq:addweights}) to hold, it is sufficient that the \emph{real part} of $\langle K(Y^\alpha),K(Y^\beta)\rangle_\rho$ vanish for $\alpha\neq\beta$. The latter condition is known as \emph{weak consistency}.
In what follows, we will not differentiate between medium and weak consistency.} However, as the next section shows, the symmetric nature of product families hides an important asymmetry in adding histories. \subsection{Coarse graining in branching families} \label{sec:2cg} We have already seen that in product families $\ensuremath{\mathfrak{F}}$, the formal addition $sum(h_1,h_2)$ can be defined for any $h_1,h_2\in\ensuremath{\mathfrak{F}}$. For branching families, this is not always possible. In fact, we will see that with respect to formula (\ref{eq:addweights}) one should distinguish two types of coarse graining, which we call \emph{intra-branch} and \emph{trans-branch} coarse graining. \emph{Intra-branch} coarse graining means that maximal nodes from an otherwise shared branch are added, whereas \emph{trans-branch} coarse graining means adding ``across branches''. The notion of intra-branch coarse graining and the respective summation of histories for the basic case can be defined as follows: \begin{Def}[Intra-branch coarse graining] \label{def:intrabranch} In a branching family $\ensuremath{\mathfrak{F}}$, the formal summation $h=sum(h_1,h_2)$ of two histories $h_1,h_2\in\ensuremath{\mathfrak{F}}$, \begin{equation} h_\alpha = P^1_\alpha\odot \cdots \odot P^{n_\alpha}_\alpha,\quad\quad \alpha=1,2, \end{equation} is called \emph{intra-branch coarse graining} iff $n_1=n_2$, the histories are defined at the same times, and for $1\leq i<n_1$, $P^i_1 = P^i_2$. In that case, the sum is defined to be \begin{equation}\label{eq:intra_cg} h=sum(h_1,h_2) := P^1_1\odot \cdots \odot P^{n_1-1}_1 \odot (P^{n_1}_1 + P^{n_1}_2).
\end{equation} \end{Def} Intra-branch coarse graining is both well-defined and probabilistically unproblematic for all branching families: \begin{lem}\label{lem:intrabranch} For intra-branch coarse graining $h=sum(h_1,h_2)$ as in (\ref{eq:intra_cg}) in a branching family of histories, the weights add according to (\ref{eq:addweights}), i.e., \[ W(h) = W(sum(h_1,h_2)) = W(h_1) + W(h_2). \] \end{lem} \noindent \emph{Proof:} We can follow the lines of the inductive proof of Lemma~\ref{lem:addtoone}: Let two histories $h_1$ and $h_2$ fulfilling Definition~\ref{def:intrabranch} be given, and assume that $m$ is the immediate predecessor of maximal nodes $m^*_1$ and $m^*_2$ of histories $h_1$ and $h_2$, respectively. Let $m^-$ be the unique direct predecessor of $m$. Let $K(h_m)$ be the chain operator for the path from the root node to $m$, and set $T=T(\tau(m),\tau(m^-))$. Then the chain operators for the histories $h_\alpha$ are: \begin{equation} K(h_\alpha) = P(m^*_\alpha)\cdot T\cdot K(h_m). \end{equation} The sum of the two final projectors, $P(m^*_1)+P(m^*_2)$, is again a projector in virtue of (\ref{eq:associate_projectors}). Accordingly, the weight of the coarse-grained history $h=h_1+h_2$ is \begin{eqnarray} \nonumber W(h) &=& \langle K(h), K(h)\rangle_\rho = \langle K(h_1+h_2), K(h_1+h_2)\rangle_\rho\\ \nonumber &=& Tr [(P(m^*_1)+P(m^*_2))\, T\, K(h_m)\, \rho\, \\ \nonumber & & \quad\quad K^\dag(h_m)\, T^\dag\, (P(m^*_1)+P(m^*_2))^\dag] \\ &=& Tr [ T^\dag\, P(m^*_1)\, T\, K(h_m)\, \rho\, K^\dag(h_m)] + \\ \nonumber & & Tr [ T^\dag\, P(m^*_2)\, T\, K(h_m)\, \rho\, K^\dag(h_m)]\\ \nonumber &=& W(h_1) + W(h_2), \end{eqnarray} where we employed $P^\dag(m^*_\alpha) = P(m^*_\alpha) = (P(m^*_\alpha))^2$, the fact that $P(m^*_1)\cdot P(m^*_2) = 0$ (\ref{eq:associate_projectors}), and linearity and the cyclic property of the trace. 
\ensuremath{\square} So all is well probabilistically if histories are formed as sums of histories that differ only at the last node, i.e., via intra-branch coarse graining. Note that this result carries over to product families of histories: for intra-branch coarse graining in branching families and in product families, eq.~(\ref{eq:addweights}) holds automatically, without having to presuppose a consistency condition like (\ref{eq:consistency}). What about trans-branch coarse graining? For a product family, eq.~(\ref{eq:sumprodhist}) shows how to build histories from other histories quite generally, and we have mentioned the fact that the validity of eq.~(\ref{eq:addweights}) for trans-branch coarse graining in a product family generally depends on a consistency condition like (\ref{eq:consistency}) \cite{griffiths84,griffiths2003}. For the more general case of branching families, so far the summation of histories for trans-branch coarse graining has not been defined. It is possible to define that type of summation in the extended framework of Isham's HPO formalism, but this amounts to discarding the intuitive idea that histories are temporal sequences of one-time descriptions (cf.\ the next subsection). The problem of defining trans-branch coarse graining while holding on to the intuitive interpretation of a history may be illustrated by considering a two-dimensional Hilbert space and the two histories $h_1$ and $h_3$ from eq.~(\ref{eq:branch_no_prod1}) and eq.~(\ref{eq:branch_no_prod3}), respectively, taken to be defined at the two times $t_1$ and $t_2$. How should one define $sum(h_1,h_3)$? Surely one can coarse-grain by considering the \emph{set} of histories $\{h_1,h_3\}$, and the sum of $h_1$ and $h_3$ in Isham's formalism amounts to this exactly. But no temporal interpretation in terms of a single history is forthcoming, as the projectors involved at $t_2$ do not commute. 
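The contrast between the two kinds of coarse graining can be made concrete in a small numerical sketch (the three-level toy system and helper names below are purely illustrative assumptions, with trivial dynamics $T=\openone$ between the relevant times): intra-branch sums of orthogonal final projectors always satisfy eq.~(\ref{eq:addweights}), whereas the sum of two non-commuting projectors fails to be a projector at all.

```python
import numpy as np

def proj(v):
    """Rank-one projector |v><v| onto the normalized vector v."""
    v = np.asarray(v, dtype=float)
    v = v / np.linalg.norm(v)
    return np.outer(v, v)

def weight(K, rho):
    """W(h) = <K(h), K(h)>_rho = Tr[K rho K^dagger]."""
    return np.trace(K @ rho @ K.conj().T).real

e = np.eye(3)                        # three-level toy system
rho = proj(e[0] + e[1] + e[2])       # initial state

# Intra-branch: shared earlier node (trivial here), orthogonal final nodes.
P1, P2 = proj(e[0]), proj(e[1])      # P1 P2 = 0
K1, K2, Ksum = P1, P2, P1 + P2       # chain operators with T = identity
assert np.isclose(weight(Ksum, rho), weight(K1, rho) + weight(K2, rho))

# Trans-branch attempt: non-commuting projectors at the final time.
Q = proj(e[0] + e[1])                # does not commute with P1
S = P1 + Q
print(np.allclose(S @ S, S))         # False: P1 + Q is not a projector
```

Only the orthogonality $P_1P_2=0$ enters the first assertion, mirroring the role of (\ref{eq:associate_projectors}) in the proof of Lemma~\ref{lem:intrabranch}; no consistency condition is presupposed.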
Thus, for trans-branch coarse graining, the formal summation $sum(h_1,h_2)$ generally must remain undefined. With respect to the coarse-graining criterion (\ref{eq:addweights}), this means that in all cases in which it generally makes sense to ask whether it is satisfied, it is satisfied automatically: Eq.~(\ref{eq:addweights}) is generally defined only for intra-branch coarse graining, and for that case, Lemma~\ref{lem:intrabranch} has shown the equation to hold unconditionally. This result does not, of course, mean that all branching families of histories are consistent. The test of eq.~(\ref{eq:consistency}) can still be applied for any branching family, and many branching families will be classified as inconsistent. However, in these cases, the link with eq.~(\ref{eq:addweights}), which holds for product families, can no longer be made in our branching histories framework. This points to a somewhat different interpretation of the consistency condition: In a branching family, that condition should not be read as a precondition for the assignment of probabilities via weights (which is unproblematic in view of Lemma~\ref{lem:addtoone}), but rather as the condition that the different descriptions of the system's dynamics given by the histories in the family amount to wholly separate, interference-free alternatives. This is just what it means for the chain operators to be orthogonal, i.e., to satisfy eq.~(\ref{eq:consistency}). We hold it to be an advantage of the branching family formalism proposed here that by leaving trans-branch coarse graining undefined, it forces us to rethink the interpretation of the consistency condition. The formalism of product histories is deceptively smooth in treating all times in the same way, thus blurring the distinction between intra-branch and trans-branch coarse graining. \subsection{Coarse graining in Isham's HPO} The comparison between branching families and Isham's HPO scheme is illuminating in another respect.
HPO is more general and more abstract than branching families. However, that abstractness comes at a price: we will argue that much of the intuitiveness of branching families is lost by moving to the HPO scheme. In Isham's HPO scheme, histories are themselves represented as projectors on the (large) history Hilbert space, not as chain operators on the (much smaller) system Hilbert space. As an HPO-family $\ensuremath{\mathfrak{F}}$ must correspond to a decomposition of the history identity operator, sums of histories in $\ensuremath{\mathfrak{F}}$ will again correspond to history projectors (even though not from the given family); the formal addition $h=sum(h_1,h_2)$ has a direct interpretation as the literal addition of history projectors. Thus one can form a Boolean algebra with elements \begin{equation}\label{eq:hpo_algebra} Y = \sum_{Y^\alpha\in\ensuremath{\mathfrak{F}}} \pi_\alpha Y^\alpha,\quad\quad \pi_\alpha\in\{0,1\}, \end{equation} that is isomorphic to the power set algebra of $\ensuremath{\mathfrak{F}}$ (the $\pi_\alpha$ playing the role of characteristic functions)---just like in the first type of coarse graining considered at the beginning of this section. From one point of view, addition here always forms like objects from like objects: sums of history projectors are again history projectors. From another point of view, however, addition remains problematic: Even if the $Y^\alpha$ are homogeneous histories, i.e., have an intuitive interpretation as temporally ordered sequences of projectors, that will generally not be so for their sums. Thus in general, the elements (\ref{eq:hpo_algebra}) of the Boolean algebra are inhomogeneous histories without an intuitive interpretation. At the level of abstraction of eq.~(\ref{eq:hpo_algebra}), one need not distinguish between different kinds of coarse graining. 
Accordingly, it seems more natural to demand additivity of weights, \begin{equation}\label{eq:addweights_hpo} W\left(\sum_{Y^\alpha\in\ensuremath{\mathfrak{F}}} \pi_\alpha Y^\alpha\right) = \sum_{Y^\alpha\in\ensuremath{\mathfrak{F}}} \pi_\alpha W(Y^\alpha), \end{equation} which corresponds to satisfaction of the consistency condition (\ref{eq:consistency}). However, at the same level of abstraction, one can note that $W$ is a quadratic function, whereas (\ref{eq:addweights_hpo}) demands linearity---not a natural demand at all. The motivation for additivity was, after all, given in terms of the time-ordered sequences of projectors that formed the basis of the history framework, not in terms of some abstract algebra. Furthermore, even when HPO is restricted to families of homogeneous histories, a probability interpretation of the weights is not forthcoming generally; families of HPO histories can violate a number of seemingly straightforward assumptions \cite{isham_linden94}. As an example, consider the following family of histories: Let $\ensuremath{\mathcal{H}}$ have dimension 2, and let $\{|\phi\rangle,|\psi\rangle\}$ be an orthonormal basis, so that $\langle\phi|\psi\rangle = 0$. Then, define \begin{equation} |\chi\rangle = \frac{1}{\sqrt{2}} (|\phi\rangle + |\psi\rangle);\quad\quad |\chi'\rangle = \frac{1}{\sqrt{2}} (|\phi\rangle - |\psi\rangle). \end{equation} Note that $\langle\chi|\chi'\rangle=0$. 
We now construct a family of two-time histories; the corresponding history Hilbert space $\ensuremath{\tilde{\mathcal{H}}}$ has dimension 4: \begin{eqnarray} \label{eq:hom_no_branch1} h_1 &=& |\chi\rangle\langle\chi| \odot |\psi\rangle\langle\psi|\\ h_2 &=& |\chi'\rangle\langle\chi'| \odot |\psi\rangle\langle\psi|\\ h_3 &=& |\phi\rangle\langle\phi| \odot |\phi\rangle\langle\phi|\\ \label{eq:hom_no_branch4} h_4 &=& |\psi\rangle\langle\psi| \odot |\phi\rangle\langle\phi| \end{eqnarray} These four histories are pairwise orthogonal, and their sum is the identity operator in $\ensuremath{\tilde{\mathcal{H}}}$. Thus, $\{h_1,\ldots,h_4\}$ is a homogeneous family of histories. Now, taking the initial density matrix $\rho$ to be the pure state $\rho=|\phi\rangle\langle\phi|$, one can compute the following: \begin{eqnarray} K(h_1)\,\rho\,K^\dag(h_1) = \frac{1}{4} |\psi\rangle\langle\psi| ;\quad W(h_1) &=& \frac{1}{4}\\ K(h_2)\,\rho\,K^\dag(h_2) = \frac{1}{4} |\psi\rangle\langle\psi| ;\quad W(h_2) &=& \frac{1}{4}\\ K(h_3)\,\rho\,K^\dag(h_3) = |\phi\rangle\langle\phi| ;\quad W(h_3) &=& 1\\ K(h_4)\,\rho\,K^\dag(h_4) = 0 ;\quad W(h_4) &=& 0 \end{eqnarray} Thus, the sum of weights in this HPO family of histories is $3/2$, barring any straightforward probability interpretation. This family is \emph{not} a branching family of histories in the sense of section~\ref{sec:branching}, showing that branching families of histories form a proper subclass even of the homogeneous HPO families. \footnote{Note that the temporally reversed family \emph{is} a branching family, and the weights do sum to unity (as only the mirror image of $h_3$, which is $h_3$ itself, contributes a non-zero weight).} \subsection{Discussion} The consistent history approach to quantum mechanics offers a view of quantum theory that honours many classical intuitions while remaining, of course, faithful to the empirical predictions of orthodox quantum mechanics.
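As a side remark on the four-history example of the previous subsection, its weights are easy to reproduce numerically. The sketch below is an illustrative assumption on our part: it takes the dynamics between $t_1$ and $t_2$ to be trivial, so that each chain operator is just the product of the two projectors.

```python
import numpy as np

def proj(v):
    """Rank-one projector |v><v|."""
    v = np.asarray(v, dtype=complex)
    return np.outer(v, v.conj()) / np.vdot(v, v).real

phi, psi = np.array([1.0, 0.0]), np.array([0.0, 1.0])
chi   = (phi + psi) / np.sqrt(2)
chi_p = (phi - psi) / np.sqrt(2)
rho = proj(phi)                                   # initial state |phi><phi|

# Two-time histories (projector at t1, projector at t2);
# chain operator K(h) = P(t2) P(t1), assuming trivial dynamics in between.
histories = [(proj(chi),   proj(psi)),            # h1
             (proj(chi_p), proj(psi)),            # h2
             (proj(phi),   proj(phi)),            # h3
             (proj(psi),   proj(phi))]            # h4
weights = [np.trace(P2 @ P1 @ rho @ P1 @ P2).real for P1, P2 in histories]
print([round(w, 3) for w in weights], round(sum(weights), 3))
# [0.25, 0.25, 1.0, 0.0] 1.5
```

The family is a legitimate decomposition of the history identity, yet the weights sum to $3/2$, in line with the discussion above.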
While the initial motivation of the approach in terms of product families makes good pedagogical sense, it is too narrow for applications. Furthermore, as we have shown in section~\ref{sec:2cg}, product families may be misleading because they fail to distinguish between two importantly different notions of coarse-graining. A number of applications demand branching families of histories. However, so far no formally rigorous definition of that class of families has been available. Isham's HPO formalism, while offering the necessary generality, is in danger of losing touch with the intuitive motivation of the history approach. To be sure, this does not amount to any fundamental criticism, but it gives additional support for providing a less general definition that stays closely tied to the intuitive motivation of branch-dependent histories. Through our definition of branching families of histories we have here provided the sought-for formal framework. Branching families are more general than product families, and they are general enough for applications while retaining a natural interpretation. \begin{figure} \caption{\label{fig:fam} Overview of the various kinds of families of histories considered in this paper.} \end{figure} Figure~\ref{fig:fam} gives a graphical overview of the various kinds of families of histories that we considered in this paper. Branching families always admit an interpretation of weights in terms of probabilities, as the weights of the histories in such a family add to one. Consistent branching families are, in addition, free from interference effects. The formal framework presented here is neutral with respect to the question of whether families of histories that are not consistent in this sense can be put to good physical use or not. \begin{acknowledgments} I would like to thank audiences at Oxford and Pittsburgh for stimulating discussions. Special thanks to Nuel Belnap, Jeremy Butterfield, Jens Eisert, Robert Griffiths, and Tomasz Placek.
Thanks to Przemys\l{}aw Gralewicz and to four anonymous reviewers for additional helpful comments. Support by the Polish KBN and by the Alexander von Humboldt-Stiftung is gratefully acknowledged. \end{acknowledgments} \end{document}
\begin{document} \title{Reexamination of determinant based separability test for two qubits} \author{Maciej Demianowicz}\email{[email protected]} \affiliation{Faculty of Applied Physics and Mathematics, Gda\'nsk University of Technology, PL-80-952 Gda\'nsk, Poland} \affiliation{National Quantum Information Center of Gdansk, PL-81-824 Sopot, Poland} \begin{abstract} It was shown in [Augusiak {\it et al.},\;Phys. Rev. A \textbf{77}, 030301(R) (2008)] that discrimination between entanglement and separability in a two-qubit state can be achieved by a measurement of a single observable on four copies of it. Moreover, a pseudo entanglement monotone $\pi$ was proposed to quantify entanglement in such states. The main goal of the present paper is to show that the close relationship between $\pi$ and concurrence reported there is a result of sharing the same underlying construction of a spin-flipped matrix. We also show that the monogamy of entanglement can be rephrased in terms of $\pi$, and we prove the factorization law for $\pi$. \end{abstract} \pacs{03.67.Mn} \maketitle Entanglement, first recognized by Schr\"odinger, Einstein, Podolsky, and Rosen \cite{schrodingerepr}, lies at the heart of quantum information theory. It is without doubt the most important resource of this rapidly developing branch of science and serves as a building block for a huge number of information tasks, e.g., teleportation \cite{teleportation} and dense coding \cite{dense}. From this point of view, a full recognition of this ``spooky action at a distance'' \cite{einstein} is fundamental to our understanding of quantum mechanics. Much effort has been put into understanding its nature and, not surprisingly, the major progress has been achieved in the case of the simplest bipartite quantum states: states of two qubits.
One of the most important qualitative results concerning such systems is the necessary and sufficient condition for inseparability: the celebrated Peres--Horodecki criterion of nonpositive partial transposition \cite{ppt}. On the other hand, research towards a quantitative description of entanglement of two-qubit states has culminated in the introduction of entanglement measures, among which the most notable are the entanglement of formation \cite{formation} and the concurrence, for which closed expressions have been found \cite{hillwoot}. Unfortunately, neither of them has been shown to be directly measurable, and it is reasonable to conjecture that they are not in general. However, very recently it has been demonstrated that a {\it single} collective measurement of a specially prepared observable on four copies of an unknown two-qubit state can {\it unambiguously} discriminate between entanglement and separability, additionally {\it quantifying} to some extent the entanglement contained in the system by providing sharp lower and upper bounds on concurrence \cite{nasza}. In the present paper we continue research on the pseudo entanglement monotone $\pi$, which was introduced in Ref. \cite{nasza} for entanglement quantification purposes. Let us start with an introduction of the necessary concepts. Consider a two-qubit mixed state $\rho_{AB}$. Define the spin-flipped state (conjugation is taken in the standard basis) \cite{hillwoot} \beq \tilde{\rho}_{AB}=\sigma_y \otimes \sigma_y \rho_{AB}^* \sigma_y \otimes \sigma_y. \eeq Let $\lambda_1 \ge \lambda_2 \ge \lambda_3\ge \lambda_4$ be the square roots of the eigenvalues of $\rho_{AB}\tilde{\rho}_{AB}:=M_{AB}$. Note that we can safely write these inequalities since the $\lambda_i$ are real (moreover, they are nonnegative). We define the concurrence to be \beq\label{concurrence} C(\rho_{AB})=\max \{0,\lambda_1-\lambda_2 -\lambda_3-\lambda_4\}=:C_{AB}.
\eeq The eligibility of the quantity so constructed as a good measure of entanglement is justified by the invariance of the eigenvalues $\lambda_i$ under local unitary operations and by the fact that $0\le C_{AB}\le 1$, with the extreme values taken on separable and maximally entangled states, respectively. When $\rho_{AB}$ is a partial trace over $C$ of the tripartite pure qubit state $\proj{\psi_{ABC}}=:\psi_{ABC}$, there are only two nonzero eigenvalues, so we simply have $C_{AB}=\lambda_1-\lambda_2$. We then also define the tangle \cite{ckw} to be \beq\label{tangle} \tau_{ABC}=4\lambda_1 \lambda_2. \eeq It was shown that the eigenvalues of any of the matrices $M_{AB}$, $M_{BC}$, $M_{AC}$ can be used in the above. These quantities can be combined to give the so-called monogamy relation \cite{ckw,osborn} \beq\label{mono-ckw} C_{AB}^2+ C_{BC}^2+\tau_{ABC}=C_{B(AC)}^2=4\det \rho_B,\quad \rho_B=\tr_A \;\rho_{AB}. \eeq The concurrence $C_{B(AC)}$ is meaningful since we consider a pure state of three qubits, so effectively $B(AC)$ is a two-qubit-like state. This relation provides an interpretation of the tangle as a measure of tripartite correlations. One also defines \cite{assist} the concurrence of assistance $C^a$, which is the maximum over ensembles of the average concurrence of the pure states in the ensemble. In the case of two qubits we simply have $C^a=\lambda_1+\lambda_2+\lambda_3+\lambda_4$. In Ref. \cite{nasza} it was shown that the separability of an unknown two-qubit state $\rho$ can be unambiguously settled in a single collective measurement on four copies of this state, i.e., one needs $\rho ^{\otimes 4}$ at one time. This was obtained on the basis of two facts: (i) the partially transposed density matrix $\rho^{\Gamma}$ of an entangled two-qubit state $\rho$ is of full rank (has four nonzero eigenvalues), (ii) there can be only one negative eigenvalue of $\rho^{\Gamma}$.
The above led to the conclusion that it is sufficient to measure $\det \rho^{\Gamma}$: strict negativity of the latter indicates entanglement. The authors of the mentioned paper showed that indeed such a measurement is possible using a noiseless circuit \cite{nasza2}. They also proposed a simple alternative scheme to measure this determinant. The question of the usage of $\det \rho^{\Gamma}$ for a quantitative description was further addressed. It was shown that the quantity, which we will call in this paper the {\it determinant-based measure}, \beqn\label{miara} \pi(\rho)=\left\{\begin{array}{ccc} 0 & \mathrm{for} & \det{\rho^{\Gamma}} \ge 0 \\ 2\sqrt[4]{|\det{\rho^{\Gamma}}|} & \mathrm{for} & \det{\rho^{\Gamma}} < 0 \end{array}.\right. \eeqn is a monotone under pure local operations preserving dimensions and classical communication and provides tight upper and lower bounds on concurrence as follows \beq\label{boundy} C(\rho) \le \pi(\rho)\le \sqrt[4]{C(\rho) \left( \frac{C(\rho)+2}{3}\right)^3}. \eeq The normalization in Eq.~(\ref{miara}) is chosen to impose agreement of the determinant-based measure and the concurrence on pure states. From the above inequalities we also have immediate bounds for the entanglement of formation $E_f$ \cite{hillwoot} as follows $E(r^{-1}(\pi(\rho)))\le E_f(\rho)\le E(\pi(\rho))$, where $E(x)=H(\frac{1+\sqrt{1-x^2}}{2})$ with $H(y)$ being the Shannon entropy of a probability distribution $(y,1-y)$ and $r(x)=\sqrt[4]{x \left( \frac{x+2}{3}\right)^3}$. One can also prove that $\pi$ shares the nice property of being continuous in the input density operator \cite{contin}. For the purpose of the present paper we propose the extension of our definition for entanglement between qubit $A$ and qubits $BC$ in a pure state $\psi_{ABC}$ to $\pi_{A(BC)}\equiv 2\sqrt{\det \rho_A}$, i.e., we define it to be equal to $C_{A(BC)}$ on such states.
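The determinant-based test and the bounds (\ref{boundy}) are easy to illustrate numerically. The sketch below (the state family and helper names are our own illustrative choices) evaluates $C$ and $\pi$ on a one-parameter mixture of a Bell state with white noise; for this Bell-diagonal family the upper bound in (\ref{boundy}) is in fact saturated.

```python
import numpy as np

def partial_transpose(rho):
    """Partial transpose on the second qubit of a 4x4 density matrix."""
    return rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)

def concurrence(rho):
    sy = np.array([[0, -1j], [1j, 0]])
    flip = np.kron(sy, sy)
    lam = np.linalg.eigvals(rho @ flip @ rho.conj() @ flip).real
    lam = np.sort(np.sqrt(np.maximum(lam, 0.0)))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def pi_measure(rho):
    d = np.linalg.det(partial_transpose(rho)).real
    return 0.0 if d >= 0 else 2.0 * abs(d) ** 0.25

psi_plus = np.array([0.0, 1.0, 1.0, 0.0]) / np.sqrt(2)
for p in (0.2, 0.5, 0.9):
    rho = p * np.outer(psi_plus, psi_plus) + (1 - p) * np.eye(4) / 4
    C, P = concurrence(rho), pi_measure(rho)
    upper = (C * ((C + 2) / 3) ** 3) ** 0.25
    assert C <= P + 1e-12          # lower bound of the inequality above
    assert P <= upper + 1e-12      # upper bound of the inequality above
```

For $p\le 1/3$ the mixture is separable, $\det\rho^{\Gamma}\ge 0$, and both $C$ and $\pi$ vanish.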
Such an extension is the most natural since we keep the two most important properties of $\pi$: the mentioned equality with concurrence and the possibility of direct measurement. \begin{figure} \caption{\label{fidelity-random} Singlet fraction $F_{AB}$ versus the determinant-based measure $\pi_{AB}$ for randomly generated channels.} \end{figure} Let us now turn to the main body of the paper. We start with considerations analogous to those of Ref. \cite{ishizaka}, where the local unitary interaction of one part of a maximally entangled state $\psi^+_{AB}$ with a two-level environment $E$ was considered, i.e., the global state after the evolution was $\ket{\psi_{ABE}}=\jedynka_A\otimes U_{BE}(\ket{\psi^{+}}_{AB}\otimes \ket{0}_E)$. Its bipartite reductions of interest will be denoted $\rho_{AB}$ and $\rho_{AE}$. In the mentioned paper the author showed by random sampling that there is no correlation between the singlet fraction \cite{fraction} $F(\rho_{AB}):=F_{AB}$ after the action of the channel and the concurrence $C_{AB}$ of the decohered state, and showed analytically for a chosen class of channels that $F_{AB}=\frac{1}{4}(1+C_{AB})(1+\sqrt{1-C_{AE}^2})$. It turned out that this relation holds true for all channels, which was shown by random generation of channels. We pursue the same approach using the determinant-based measure instead of concurrence. First let us consider the relation of $F_{AB}$ and $\pi_{AB}$ after the action of a random channel. The result is shown in Fig.~\ref{fidelity-random}, which was obtained by random generation of $10000$ $U_{BE}$'s \cite{zyczkowski}. One can see that in our case there is some connection between the two quantities; however, there is still no analytical formula linking them. We will comment on this connection later, when the monogamy equality is obtained. Following Ref. \cite{ishizaka} let us consider a class of local channels implemented by unitaries defined by: \beqn \ket{00}_{BE}\to \sqrt{1-q} \ket{00}_{BE} + \sqrt{q} \ket{11}_{BE}\\\ \ket{10}_{BE}\to \sqrt{1-p} \ket{10}_{BE} + \sqrt{p} \ket{01}_{BE}.
\eeqn For such channels we obtain the following (with the previously established notation): \beqn \displaystyle \pi_{AB}=\sqrt{|p+q-1|}\\\ \pi_{AE}=\sqrt{|p-q|}\\\ F_{AB}=\left\{\begin{array}{ccc} \frac{2-p-q+2\sqrt{(1-p)(1-q)}}{4} & \mathrm{for} & p+q-1 < 0 \\ \frac{p+q+2\sqrt{pq}}{4} & \mathrm{for} & p+q-1\ge 0 \end{array}.\right. \eeqn Direct calculation reveals that \beq \displaystyle F_{AB}=\frac{1}{4}(1+\pi_{AB}^2)\left(1+\sqrt{1-\frac{\pi_{AE}^4}{(\pi_{AB}^2+1)^2}}\;\right).\eeq As in the case of concurrence, the relation we obtained can be shown to hold for all channels and is independent of which maximally entangled state we choose to be the input. Note the close resemblance of both forms. The closed formula for the singlet fraction in terms of the determinant-based measure $\pi$ opens hope for a monogamy relation of entanglement expressed through it. In what follows we prove the existence of such an equation. Consider a pure state of three qubits $\psi_{ABC}$. As was shown in \cite{3szmit}, as far as the entanglement properties are concerned, such a state can be parameterized by five real numbers as \beqn \ket{\psi_{ABC}}=\gamma_0 \ket{000}+\gamma_1\mathrm{e}^{\mathrm{i}\varphi}\ket{100}+\gamma_2\ket{101}+\nonumber \\ \gamma_3\ket{110}+\gamma_4\ket{111} \eeqn with $\gamma_i\ge 0$, $\sum_i \gamma_i^2=1$, and $\varphi\in [0,\pi]$. From this we obtain the eigenvalues of the matrix $M_{AB}$ and the determinant of the partially transposed matrix of the reduced state of qubits $A$ and $B$: \beqn &&\lambda_1^2 =\gamma_0^2 \left(2\gamma_3^2+\gamma_4^2+2 \gamma_3\sqrt{\gamma_3^2+\gamma_4^2}\right), \\\ &&\lambda_2^2 =\gamma_0^2 \left(2\gamma_3^2+\gamma_4^2-2 \gamma_3\sqrt{\gamma_3^2+\gamma_4^2}\right),\\\ &&\det \rho_{AB}^{\Gamma}=-\gamma_0^4\gamma_3^2(\gamma_3^2+\gamma_4^2), \eeqn which immediately yields \beq \pi_{AB}=\sqrt{\lambda_1^2-\lambda_2^2}. \eeq Recalling Eqs.
(\ref{concurrence}) and (\ref{tangle}) we obtain an analytical relationship between $\pi_{AB}$, $C_{AB}$, and $\tau_{ABC}$ in a pure three-qubit state \beq\label{pi} \pi_{AB}=\sqrt{C_{AB}\sqrt{C_{AB}^2+\tau_{ABC}}}. \eeq This can be put into a nice compact form \beq \pi_{AB}=\sqrt{C_{AB}C^a_{AB}}\;, \eeq which means that in the case of rank-two states the determinant-based measure is the geometric mean of the concurrence and the concurrence of assistance. We also conclude that the bound in Eq. (\ref{boundy}) can be tightened for such states to obtain \beq \pi_{AB}\le \sqrt{C_{AB}}. \eeq Eq. (\ref{pi}) leads us to a simple corollary stating that for a given pure three-qubit state $\psi_{ABC}$ one has $\pi_{AB}=C_{AB}$ if and only if $C_{AB}=0$ or $\tau_{ABC}=0$. With the results of \cite{corollary} this means that both measures agree when $\psi_{ABC}$ belongs to one of the following classes: $GHZ$ with separable reduction $AB$, $W$, biseparable, or product. One can now argue that the pattern in the plot of the singlet fraction (Fig.~\ref{fidelity-random}) is the result of the determinant-based measure quantifying, to some extent, tripartite correlations. Now let us invert (\ref{pi}) to get \beq C_{AB}^2=\frac{-\tau_{ABC}+\sqrt{\tau_{ABC}^2+4\pi_{AB}^4}}{2}. \eeq Inserting this into Eq. (\ref{mono-ckw}) one obtains the advertised elegant monogamy relation in terms of the determinant-based measure \beq\label{monogamia}\displaystyle \sqrt{\left(\frac{\tau_{ABC}}{2}\right)^2+\pi_{AB}^4}+\sqrt{\left(\frac{\tau_{ABC}}{2}\right)^2+\pi_{BC}^4}=\pi^2_{B(AC)}. \eeq This also gives a recipe to measure the tangle on ten copies of the state, relying directly on measurements of the determinants of two partially transposed density matrices (four plus four copies) and of the determinant of the reduced qubit density matrix (two copies). The question of the optimality of such a measurement is beyond the scope of this paper (see \cite{carteret}).
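The geometric-mean form $\pi_{AB}=\sqrt{C_{AB}C^a_{AB}}$ and the monogamy relation (\ref{monogamia}) can be verified numerically on a random three-qubit pure state; the sketch below (all helper names are our own) does exactly this, computing the $\lambda_i$ from $M_{AB}$ and $\pi$ from the determinant of the partial transpose.

```python
import numpy as np

SY = np.array([[0, -1j], [1j, 0]])
FLIP = np.kron(SY, SY)

def lambdas(rho):
    """Decreasingly ordered square roots of the eigenvalues of rho * rho~."""
    lam = np.linalg.eigvals(rho @ FLIP @ rho.conj() @ FLIP).real
    return np.sort(np.sqrt(np.maximum(lam, 0.0)))[::-1]

def pi_measure(rho):
    d = np.linalg.det(rho.reshape(2, 2, 2, 2)
                         .transpose(0, 3, 2, 1).reshape(4, 4)).real
    return 0.0 if d >= 0 else 2.0 * abs(d) ** 0.25

rng = np.random.default_rng(7)
v = rng.normal(size=8) + 1j * rng.normal(size=8)
v /= np.linalg.norm(v)                                 # random |psi_ABC>
r = np.outer(v, v.conj()).reshape(2, 2, 2, 2, 2, 2)    # [a,b,c,a',b',c']

rho_AB = np.trace(r, axis1=2, axis2=5).reshape(4, 4)   # Tr_C
rho_BC = np.trace(r, axis1=0, axis2=3).reshape(4, 4)   # Tr_A
rho_B = np.trace(rho_AB.reshape(2, 2, 2, 2), axis1=0, axis2=2)

l = lambdas(rho_AB)                                    # rank 2: l[2], l[3] ~ 0
C_AB, Ca_AB, tau = l[0] - l[1], l[0] + l[1], 4 * l[0] * l[1]
pi_AB, pi_BC = pi_measure(rho_AB), pi_measure(rho_BC)

assert np.isclose(pi_AB, np.sqrt(C_AB * Ca_AB))        # geometric-mean form
lhs = np.sqrt((tau / 2) ** 2 + pi_AB ** 4) + np.sqrt((tau / 2) ** 2 + pi_BC ** 4)
assert np.isclose(lhs, 4 * np.linalg.det(rho_B).real)  # monogamy relation
```

Note that $\pi^2_{B(AC)}=4\det\rho_B$ by the extension of the definition adopted above.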
Now we will argue that the determinant-based measure $\pi$ in the general case of mixed states of arbitrary rank is an analytic function of the eigenvalues of the matrix $M$. Consider states \beqn \rho_{Bdiag}=p_1 \psi_+ +p_2 \psi_- +p_3 \phi_+ + p_4 \phi_-, \eeqn which are diagonal in the Bell basis $\ket{\psi_{\pm}}=\frac{1}{\sqrt{2}}(\ket{01}\pm \ket{10})$, $\ket{\phi_{\pm}}=\frac{1}{\sqrt{2}}(\ket{00}\pm \ket{11})$. Such states are entangled iff one of the probabilities is larger than $\frac{1}{2}$; w.l.o.g. assume that this holds for $p_1$. One then has \bwn \pi(\rho_{Bdiag})=\sqrt[4]{|-p_1+p_2+p_3+p_4|(p_1-p_2+p_3+p_4)(p_1+p_2-p_3+p_4)(p_1+p_2+p_3-p_4)}. \ewn We also easily compute that $\lambda_i=p_i$. Motivated by the form of $\pi$ for $\rho_{Bdiag}$ we further define \beqn C_1=\max \{0,\lambda_1-\lambda_2 -\lambda_3-\lambda_4\}\equiv C,\\\ C_2=\lambda_1-\lambda_2 +\lambda_3+\lambda_4,\\\ C_3=\lambda_1+\lambda_2 -\lambda_3+\lambda_4,\\\ C_4=\lambda_1+\lambda_2 +\lambda_3-\lambda_4 \eeqn and \beqn \hat{\pi}(\rho)=\sqrt[4]{C_1 C_2 C_3 C_4}. \eeqn Notice that $C_2,C_3,C_4$ are always nonnegative. The main result of this part of the paper is the following. \newline\noindent \textit{Theorem 1.} For any two-qubit state $\rho$ one has \beq\label{formula} \pi(\rho)=\hat{\pi}(\rho). \eeq {\it Proof.} Let $A$ and $B$ be nonsingular local filters. The initial state $\varrho_1$ after the transformation under these filters is $\varrho_2=(1/p) A\otimes B \varrho_1 A^{\dagger}\otimes B^{\dagger}$, where $p=\tr A\otimes B \varrho_1 A^{\dagger}\otimes B^{\dagger}$. It follows from the results of Ref. [20] that $C_i(\varrho_2)=(|\det AB|/p)C_i(\varrho_1)$. Moreover, it holds true that $\pi(\varrho_2)=(|\det AB|/p)\pi(\varrho_1)$ [8]. Assume now that $\varrho$ is a rank-$4$ state. It was shown [20] that such states can be reversibly obtained with $A$, $B$ from a Bell diagonal state $\varrho_{Bdiag}$. As we already know, the assertion of the theorem is true for the latter.
We thus have $\pi(\varrho)=(|\det AB|/p)\sqrt[4]{\Pi_i C_i(\varrho_{Bdiag})}$, and because of the transformation rule for $C_i$ it follows that $\pi(\varrho)=\hat{\pi}(\varrho)$ for full-rank states. For singular states, we can take their full-rank perturbations $\sigma_{\epsilon}=\epsilon \varrho+(1-\epsilon)\mathbb{I}/4$. Then $\sigma_{\epsilon}$ is of full rank, so $\pi(\sigma_{\epsilon})=\hat{\pi}(\sigma_{\epsilon})$ by the preceding argument, and we can take the limit $\epsilon\to 0$. The result then follows from the continuity of $\pi$ and $C_i$. $\blacksquare$ We see that $\pi$ can be regarded as some kind of symmetrization of the concurrence allowing for direct experimental accessibility. A natural question is to what extent the determinant-based measure also quantifies tripartite correlations in the general case. Unfortunately, we have not been able to find a definite answer so far \cite{step}. Finally, we prove the factorization law, which was originally stated for concurrence \cite{konrad}. \noindent \textit{Theorem 2.} The determinant-based measure $\pi$ obeys the factorization law, i.e., for an arbitrary channel $\kan$, a pure state $\phi$, and a Bell state $\psi_+$ it holds that \beq \pi({\cal I}\otimes \kan (\phi))=\pi ({\cal I}\otimes \kan (\psi_+))\pi(\phi). \eeq \textit{Proof.} The assertion is trivially true for separable $\mathcal{I}\otimes \Lambda (\psi_+)$ ({\it i.e.}, when $\Lambda$ is entanglement breaking), so we may assume entanglement of the latter. Any state $\ket{\phi}$ can be written as $A\otimes \mathbb{I}(\ket{\psi_+})$ with $\tr A^{\dagger}A=2$.
We then have $\pi(\mathcal{I}\otimes \Lambda (\phi))=2\sqrt[4]{|\det[ \mathcal{I}\otimes \Lambda (A\otimes \mathbb{I}(\psi_+)A^{\dagger}\otimes \mathbb{I})]^{\Gamma_B}|}=2\sqrt[4]{|\det[A\otimes \mathbb{I}(\varrho_{\Lambda}^{\Gamma_B})A^{\dagger}\otimes \mathbb{I}]|}$, where $\varrho_{\Lambda}=\mathcal{I}\otimes \Lambda(\psi_+)$ and we have used the fact that $(X\otimes \mathbb{I} \varrho Y\otimes \mathbb{I})^{\Gamma_{B}}=X \otimes \mathbb{I} \varrho^{\Gamma_B} Y\otimes \mathbb{I}$. Using now the multiplicativity of the determinant, $\det XY=\det X \det Y$, and the fact that $\pi(\phi)=|\det A|$ [8], we arrive at $\pi(\mathcal{I}\otimes \Lambda (\phi))=\pi(\phi)\pi(\mathcal{I}\otimes \Lambda (\psi_+))$, which is the desired result. $\blacksquare$ Thus the determinant-based measure provides a {\it factorizable} measurable bound on concurrence (see \cite{factor} for a recent effort in this direction). We have not been able to find an analytical proof of the extension of the factorization law to the mixed-state domain, as was done in \cite{konrad}; nevertheless, by random sampling, we have verified that such an extension is indeed valid, that is, $\pi(\mathcal{I}\otimes \Lambda (\varrho))\le\pi(\varrho)\pi(\mathcal{I}\otimes \Lambda (\psi_+))$. In conclusion, we have provided a monogamy relation for entanglement quantified by the determinant-based measure $\pi$. As a byproduct we obtained explicit formulas for the latter in terms of other entanglement quantities. We showed that its close relation with concurrence is the result of sharing the same underlying construction. We also provided evidence that the disagreement of $\pi$ and $C$ on general mixed states stems from the fact that $\pi$ quantifies to some extent both bipartite and tripartite correlations.
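Both Theorem 1 and the factorization law are easy to test numerically. In the sketch below, the random-state construction and the choice of channel are our own illustrative assumptions (an amplitude-damping channel merely serves as a concrete non-trivial $\Lambda$); the code checks $\pi=\hat{\pi}$ on a random full-rank two-qubit state and the factorization law for a locally filtered Bell state.

```python
import numpy as np

SY = np.array([[0, -1j], [1j, 0]])
FLIP = np.kron(SY, SY)

def pt(rho):
    """Partial transpose on the second qubit."""
    return rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)

def pi_measure(rho):
    d = np.linalg.det(pt(rho)).real
    return 0.0 if d >= 0 else 2.0 * abs(d) ** 0.25

def pi_hat(rho):
    lam = np.sort(np.sqrt(np.maximum(
        np.linalg.eigvals(rho @ FLIP @ rho.conj() @ FLIP).real, 0.0)))[::-1]
    C1 = max(0.0, lam[0] - lam[1] - lam[2] - lam[3])
    C2 = lam[0] - lam[1] + lam[2] + lam[3]
    C3 = lam[0] + lam[1] - lam[2] + lam[3]
    C4 = lam[0] + lam[1] + lam[2] - lam[3]
    return (C1 * C2 * C3 * C4) ** 0.25

rng = np.random.default_rng(3)

# Theorem 1 on a random full-rank two-qubit state.
G = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = G @ G.conj().T
rho /= np.trace(rho).real
assert np.isclose(pi_measure(rho), pi_hat(rho))

# Factorization law, with amplitude damping acting on qubit B.
g = 0.3
K0 = np.array([[1, 0], [0, np.sqrt(1 - g)]])
K1 = np.array([[0, np.sqrt(g)], [0, 0]])
def channel_B(rho):
    return sum(np.kron(np.eye(2), K) @ rho @ np.kron(np.eye(2), K).conj().T
               for K in (K0, K1))

bell_vec = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))   # local filter
phi_vec = np.kron(A, np.eye(2)) @ bell_vec
phi_vec /= np.linalg.norm(phi_vec)
phi = np.outer(phi_vec, phi_vec.conj())
bell = np.outer(bell_vec, bell_vec.conj())
assert np.isclose(pi_measure(channel_B(phi)),
                  pi_measure(channel_B(bell)) * pi_measure(phi))
```

The factorization check relies only on $(X\otimes\mathbb{I}\,\varrho\,Y\otimes\mathbb{I})^{\Gamma_B}=X\otimes\mathbb{I}\,\varrho^{\Gamma_B}\,Y\otimes\mathbb{I}$ and the multiplicativity of the determinant, exactly as in the proof above.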
A natural question motivated by the results of the present paper concerns the possibility of constructing other measurable quantifiers of entanglement that are based on an analogous procedure and provide better bounds on the concurrence of an unknown state. We hope our results will stimulate research on this topic and provide some tools for an improved understanding of two-qubit entanglement. The issue of using the determinant-based measure for detecting and quantifying entanglement in higher-dimensional systems is the subject of ongoing research \cite{prep}. Discussions with R. Augusiak and P. Horodecki are gratefully acknowledged. The author is supported by Ministerstwo Nauki i Szkolnictwa Wyzszego grant No. N N202 191734. \end{document}
\begin{document} \title{ Conditional Density Matrix: \ Systems and Subsystems in Quantum Mechanics } \begin{abstract} A new quantum mechanical notion --- the Conditional Density Matrix --- proposed by the authors \cite{Kh}, \cite{Sol}, is discussed and applied to the description of some physical processes. This notion is a natural generalization of the von Neumann density matrix for such processes as divisions of quantum systems into subsystems and reunifications of subsystems into new joint systems. The Conditional Density Matrix assigns a quantum state to a subsystem of a composite system under the condition that another part of the composite system is in some pure state. \end{abstract} \section{Introduction} \noindent The problem of a correct quantum mechanical description of divisions of quantum systems into subsystems and reunifications of subsystems into new joint systems attracts great interest due to the present development of quantum communication. Although the theory of such processes finds room in the general scheme of quantum mechanics proposed by von Neumann in 1927 \cite{Neu}, even now they are often described in a fictitious manner. For example, the authors of the classical photon teleportation experiment \cite{Tel} write {\it The entangled state contains no information on the individual particles; it only indicates that two particles will be in the opposite states. The important property of an entangled pair is that as soon as a measurement on one of the particles projects it, say, onto $|\leftrightarrow >$ the state of the other one is determined to be $|\updownarrow >$, and vice versa. How could a measurement on one of the particles instantaneously influence the state of the other particle, which can be arbitrarily far away? Einstein, among many other distinguished physicists, could simply not accept this "spooky action at a distance". But this property of entangled states has been demonstrated by numerous experiments.
} \section{ The General Scheme of Quantum Mechanics } \noindent It was W.~Heisenberg who in 1925 formulated the kinematic postulate of quantum mechanics \cite{Hei}. He proposed that there exists a connection between matrices and physical variables: $$ variable \quad {\cal F} \quad \Longleftrightarrow \quad matrix \quad (\hat F)_{mn}. $$ In modern language the kinematic postulate reads: {\it Each dynamical variable $\cal F$ of a system $\cal S$ corresponds to a linear operator $\hat F$ in a Hilbert space $\cal H$} $$ dynamical \quad variable \quad {\cal F} \quad \Longleftrightarrow \quad linear \quad operator \quad {\hat F}. $$ The dynamics is given by Heisenberg's famous equations formulated in terms of commutators: $$ {d{\hat F} \over dt} \quad = \quad {i \over \hbar}[{\hat H}, {\hat F}]. $$ To compare predictions of the theory with experimental data, it was necessary to understand how one can determine the values of dynamical variables in a given state. W.~Heisenberg gave a partial answer to this problem: {\it If the matrix that corresponds to the dynamical variable is diagonal, then its diagonal elements define the possible values of the dynamical variable, i.e. its spectrum.} $$ (\hat F)_{mn} = f_{m}{\delta}_{mn} \quad \Longleftrightarrow \quad \lbrace f_{m} \rbrace \quad is \quad the \quad spectrum \quad of \quad {\cal F}. $$ The general solution of the problem was given by von Neumann in 1927. He proposed the following procedure for the calculation of average values of physical variables: $$ < {\cal F} > \quad = \quad Tr({\hat F}{\hat {\rho}}). $$ Here the operator $\hat \rho$ satisfies three conditions: $$ 1) \quad {\hat \rho}^{+} \quad = \quad {\hat \rho}, $$ $$ 2) \quad Tr{\hat \rho} \quad = \quad 1, $$ $$ 3) \quad \forall \psi \in {\cal H} \quad <\psi|{\hat \rho}\psi> \quad \geq 0.
$$ Through the formula for average values, von Neumann established the correspondence between linear operators $\hat \rho$ and states of quantum systems: $$ \quad state \quad of\quad a\quad system \quad \rho \quad \Longleftrightarrow \quad linear \quad operator \quad {\hat \rho}. $$ In this way, the formula for average values becomes the quantum mechanical definition of the notion of "a state of a system". The operator $\hat \rho$ is called the {\bf Density Matrix}. From the relation $$ (<{\cal F>})^{*} \quad = \quad Tr({\hat F}^{+}{\hat \rho}) $$ one can conclude that Hermitian-conjugate operators correspond to complex-conjugate variables and Hermitian operators correspond to real variables: $$ {\cal F} \leftrightarrow {\hat F} \quad \Longleftrightarrow \quad {\cal F}^{*} \leftrightarrow {\hat F}^{+}, $$ $$ {\cal F} = {\cal F}^{*} \quad \Longleftrightarrow \quad {\hat F} = {\hat F}^{+}. $$ The real variables are called {\it observables.} From the properties of the density matrix and the definition of positive definite operators, $$ {\hat F}^{+} = {\hat F}, \quad \quad \forall \psi \in {\cal H} \quad <\psi|{\hat F}{\psi}> \quad \geq 0, $$ it follows that the average value of a nonnegative variable is nonnegative. Moreover, the average value of a nonnegative variable is equal to zero if and only if this variable equals zero. Now it is easy to give the following definition: {\it a variable $\cal F$ has a definite value in the state $\rho$ if and only if its dispersion in the state $\rho$ is equal to zero. } In accordance with the general definition of the dispersion of an arbitrary variable, $$ {\cal D}(A) \quad = \quad <A^{2}> \quad - \quad (<A>)^{2}\,, $$ the expression for the dispersion of a quantum variable $\cal F$ in the state $\rho$ has the form $$ {\cal D}_{\rho}({\cal F}) \quad = \quad Tr({\hat Q}^{2}{\hat \rho}), $$ where $\hat Q$ is the operator $$ \hat Q \quad = \quad {\hat F} - <{\cal F}>{\hat E}. $$ If $\cal F$ is an observable, then $Q^{2}$ is a positive definite variable.
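The average-value and dispersion formulas above are easy to test numerically. A small sketch (an illustration, not part of the original text) with $\hat F=\sigma_z$ on a qubit:

```python
import numpy as np

F = np.diag([1.0, -1.0])                  # observable: sigma_z

def avg(F, rho):                          # <F> = Tr(F rho)
    return np.trace(F @ rho).real

def dispersion(F, rho):                   # D_rho(F) = Tr(Q^2 rho), Q = F - <F> E
    Q = F - avg(F, rho) * np.eye(2)
    return np.trace(Q @ Q @ rho).real

eigenstate = np.outer([1.0, 0.0], [1.0, 0.0])   # |0><0|: F has the definite value +1
mixed = np.eye(2) / 2.0                          # maximally mixed state
print(dispersion(F, eigenstate), dispersion(F, mixed))  # 0.0 1.0
```

The eigenstate has zero dispersion, in agreement with the definition of a definite value; the maximally mixed state does not.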
It follows that the dispersion of $\cal F$ is nonnegative, which makes the above definition sensible. Since the density matrix is a positive definite operator with trace equal to 1, its spectrum is purely discrete and it can be written in the form $$ {\hat \rho} \quad = \quad \sum_{n}p_{n}{\hat P}_{n}, $$ where the ${\hat P}_{n}$ form a complete set of self-conjugate projection operators: $$ {{\hat P}_{n}}^{+} = {\hat P}_{n}, \quad {\hat P}_{m}{\hat P}_{n} = {\delta}_{mn}{\hat P}_{m}, \quad \sum_{n}{\hat P}_{n} = {\hat E}. $$ The numbers $\lbrace p_{n} \rbrace$ satisfy the conditions $$ p_{n}^{*} = p_{n}, \quad 0 \le p_{n}, \quad \sum_{n}p_{n}\,Tr{\hat P}_{n} = 1. $$ It follows that $\hat \rho$ acts according to the formula $$ {\hat \rho}{\Psi} \quad = \quad \sum_{n} p_{n} \sum_{\alpha \in {\Delta}_{n}} {\phi}_{n\alpha}\langle \phi_{n\alpha}|{\Psi} \rangle. $$ The vectors $\phi_{n\alpha}$ form an orthonormal basis in the space $\cal H$. The sets ${\Delta}_{n} = \lbrace 1,...,k_{n} \rbrace$ are determined by the degeneracy multiplicities $k_n$ of the eigenvalues $p_{n}$. Now the dispersion of the observable $\cal F$ in the state $\rho$ is given by the equation $$ {\cal D}_{\rho}({\cal F}) \quad = \quad \sum_{n} p_{n} \sum_{\alpha \in {\Delta}_{n}} ||{\hat Q}{\phi}_{n\alpha}||^{2}. $$ All terms in this sum are nonnegative. Hence, if the dispersion is equal to zero, then $$ if \quad p_{n} \not= 0, \quad then \quad {\hat Q}{\phi}_{n\alpha} = 0. $$ Using the definition of the operator $\hat Q$, we obtain $$ if \quad p_{n} \not= 0, \quad then \quad {\hat F}{\phi}_{n\alpha} = {\phi}_{n\alpha}\langle F \rangle. $$ In other words, {\it if an observable ${\cal F}$ has a definite value in the given state ${\rho}$, then this value is equal to one of the eigenvalues of the operator ${\hat F}$.
} In this case we have $$ {\hat \rho}{\hat F}{\phi}_{n\alpha} \quad = \quad {\phi}_{n\alpha}p_{n}\langle {\cal F} \rangle\,, $$ $$ {\hat F}{\hat \rho}{\phi}_{n\alpha} \quad = \quad {\phi}_{n\alpha}\langle {\cal F} \rangle p_{n}\,, $$ which proves that the operators $\hat F$ and $\hat \rho$ commute. It is well known that if $\hat A$ and $\hat B$ are commuting self-conjugate operators, then there exists a self-conjugate operator $\hat T$ with non-degenerate spectrum such that $\hat A$ and $\hat B$ are functions of $\hat T$: $$ {\hat T}{\Psi} \quad = \quad \sum_{n\alpha} {\phi}_{n\alpha}t_{n\alpha} \langle{\phi}_{n\alpha}|{\Psi}\rangle, $$ $$ t_{n\alpha}^{*} = t_{n\alpha}, \quad t_{n\alpha} \not= t_{n^{'}{\alpha}^{'}}, \quad if \quad (n,{\alpha}) \neq (n^{'},{\alpha}^{'}), $$ $$ {\hat F}{\Psi} \quad = \quad \sum_{n\alpha} {\phi}_{n\alpha}f_{1}(t_{n\alpha}) \langle{\phi}_{n\alpha}|{\Psi}\rangle, $$ $$ {\hat \rho}{\Psi} \quad = \quad \sum_{n\alpha} {\phi}_{n\alpha}f_{2}(t_{n\alpha}) \langle{\phi}_{n\alpha}|{\Psi}\rangle. $$ Suppose that $\hat F$ is an operator with non-degenerate spectrum; then {\it if the observable ${\cal F}$ with non-degenerate spectrum has a definite value in the state ${\rho}$, then it is possible to represent the density matrix of this state as a function of the operator ${\hat F}$. } The operator $\hat F$ can be written in the form $$ {\hat F} \quad = \quad \sum_{n}f_{n}{\hat P}_{n}, $$ $$ {{\hat P}_{n}}^{+} = {\hat P}_{n}, \quad {\hat P}_{m}{\hat P}_{n} = {\delta}_{mn}{\hat P}_{m}, \quad tr({\hat P}_{n}) = 1, \quad \sum_{n}{\hat P}_{n} = {\hat E}. $$ The numbers $\lbrace f_{n} \rbrace$ satisfy the conditions $$ f_{n}^{*} = f_{n}, \quad f_{n} \neq f_{n^{'}}, \quad if \quad n \neq n^{'}. $$ In this representation we have $$ {\hat \rho} \quad = \quad \sum_{n}p_{n}{\hat P}_{n}.
$$ From $$ \langle F \rangle \quad = \quad \sum_{n} p_{n}f_{n} \quad = \quad f_{N}, $$ $$ \langle F^2 \rangle \quad = \quad \sum_{n} p_{n}f_{n}^{2} \quad = \quad f_{N}^{2} $$ we get $$ p_{n} \quad = \quad {\delta}_{nN}. $$ In this case the density matrix is a projection operator satisfying the condition $$ {\hat \rho}^{2} \quad = \quad {\hat \rho}. $$ It acts as $$ {\hat \rho}{\Psi} \quad = \quad {\Psi}_{N}\langle {\Psi}_{N}|{\Psi} \rangle, $$ where $|{\Psi}_{N} \rangle$ is a vector in the Hilbert space. The average value of an arbitrary variable in this state is equal to $$ \langle {\cal A} \rangle \quad = \quad \langle {\Psi}_{N}|{\hat A}{\Psi}_{N} \rangle. $$ This is the so-called {\it PURE} state. If a state is not pure, it is known as {\it mixed.} Suppose that every vector in $\cal H$ is a square integrable function $\Psi (x)$, where $x$ is a set of continuous and discrete variables. The scalar product is defined by the formula $$ \langle \Psi|\Phi \rangle \quad = \quad \int dx{\Psi}^{*}(x){\Phi}(x). $$ For simplicity we assume that every operator $\hat F$ in $\cal H$ acts as follows: $$ ({\hat F}{\Psi})(x) \quad = \quad \int F(x,x^{'})dx^{'}{\Psi}(x^{'}). $$ That is, for any operator $\hat F$ there is an integral kernel $F(x,x^{'})$ associated with this operator: $$ {\hat F} \quad \Longleftrightarrow \quad F(x,x^{'}). $$ Certainly, we may use the $\delta$-function if necessary. Now the average value of the variable $\cal F$ in the state $\rho$ is given by the equation $$ \langle {\cal F} \rangle_{\rho} \quad = \quad \int F(x,x^{'})dx^{'}{\rho}(x^{'},x)dx. $$ Here the kernel ${\rho}(x,x^{'})$ satisfies the conditions $$ {\rho}^{*}(x,x^{'}) \quad = \quad {\rho}(x^{'},x), $$ $$ \int {\rho}(x,x)dx \quad = \quad 1, $$ $$ \forall {\Psi} \in {\cal H} \quad \int{\Psi}^{*}(x)dx{\rho}(x,x^{'})dx^{'}{\Psi}(x^{'}) \geq 0. $$ \section{Composite System and Reduced Density Matrix} \noindent Suppose the variables $x$ are divided into two parts: $x = \lbrace y,z \rbrace$.
Suppose also that the space $\cal H$ is a direct product of two spaces ${\cal H}_{1}$, ${\cal H}_{2}$: $$ {\cal H} \quad = \quad {\cal H}_{1}\otimes{\cal H}_{2}. $$ Then there is a basis in the space $\cal H$ that can be written in the form $$ {\phi}_{an}(y,z) \quad = \quad f_{a}(y)v_{n}(z)\,. $$ The kernel of an operator $\hat F$ in this basis looks like $$ {\hat F} \quad \Longleftrightarrow \quad F(y,z;y^{'},z^{'})\,. $$ In quantum mechanics this means that the system $S$ is a unification of two subsystems $S_{1}$ and $S_{2}$: $$ S \quad = \quad S_{1} \cup S_{2}\,. $$ The Hilbert space $\cal H$ corresponds to the system $S$, and the spaces ${\cal H}_{1}$ and ${\cal H}_{2}$ correspond to the subsystems $S_{1}$ and $S_{2}$. Now suppose that a physical variable ${\cal F}_{1}$ depends only on the variables ${y}$. The operator that corresponds to ${\cal F}_{1}$ has the kernel $$ F_{1}(y,z;y^{'},z^{'}) \quad = \quad F_{1}(y,y^{'}){\delta}(z - z^{'})\,. $$ The average value of $F_{1}$ in the state $\rho$ is equal to $$ \langle F_{1} \rangle_{\rho} \quad = \quad \int F_{1}(y,y^{'})dy^{'}{\rho}_{1}(y^{'},y)dy\,, $$ where the kernel ${\rho}_{1}$ is defined by the formula $$ {\rho}_{1}(y,y^{'}) \quad = \quad \int {\rho}(y,z;y^{'},z)dz\,. $$ The operator ${\hat \rho}_{1}$ satisfies all the properties of a Density Matrix in $S_1$. Indeed, we have $$ {{\rho}_1}^{*}(y,y^{'}) \quad = \quad {\rho _1}(y^{'},y)\,, $$ $$ \int {\rho _1}(y,y)dy \quad = \quad 1\,, $$ $$ \forall {\Psi _1} \in {\cal H_1} \quad \int{\Psi _1}^{*}(y)dy{\rho _1}(y,y^{'})dy^{'}{\Psi _1}(y^{'}) \geq 0\,. $$ The operator $$ {\hat \rho}_{1} \quad = \quad Tr_{2}{\hat \rho}_{1+2} $$ is called the {\bf Reduced Density Matrix}. Thus, the state of the subsystem $S_1$ is defined by the reduced density matrix. The reduced density matrix for the subsystem $S_2$ is defined analogously: $$ {\hat \rho}_{2} \quad = \quad Tr_{1}{\hat \rho}_{1+2}.
$$ Quantum states $\rho_{1}$ and $\rho_{2}$ of the subsystems are defined uniquely by the state $\rho_{1+2}$ of the composite system. Suppose the system $S$ is in a pure state; then the quantum state of the subsystem $S_{1}$ is defined by the kernel $$ {\rho}_{1}(y,y^{'}) \quad = \quad \int{\Psi}(y,z)dz{\Psi}^{*}(y^{'},z). $$ If the function ${\Psi}(y,z)$ is the product $$ {\Psi}(y,z) \quad = \quad f(y)w(z), \quad \int|w(z)|^{2}dz = 1, $$ then the subsystem $S_{1}$ is in a pure state, too: $$ {\rho}_{1}(y,y^{'}) \quad = \quad f(y)f^{*}(y^{'}), \quad \int|f(y)|^{2}dy = 1. $$ As was proved by von Neumann, this is the only case when the purity of a composite system is inherited by its subsystems. Let us consider an example of a system in a pure state having subsystems in mixed states. Let the wave function of the composite system be $$ {\Psi}(y,z) \quad = \quad {1 \over \sqrt{2}}(f(y)w(z) \pm f(z)w(y)), $$ where $<f|w> = 0$ and $<f|f>=<w|w>=1$. The density matrix of the subsystem $S_{1}$ has the kernel $$ {\rho}_{1}(y,y^{'}) \quad = \quad {1 \over 2} (f(y)f^{*}(y^{'}) + w(y)w^{*}(y^{'})). $$ The kernel of the operator ${{\hat \rho}_{1}}^{2}$ has the form $$ {{\rho}_{1}}^{2}(y,y^{'}) \quad = \quad {1 \over 4} (f(y)f^{*}(y^{'}) + w(y)w^{*}(y^{'})). $$ Therefore, the subsystem $S_{1}$ is in a mixed state. Moreover, its density matrix is proportional to the unit operator on the subspace spanned by $f$ and $w$. This property resolves the perplexities connected with the Einstein--Podolsky--Rosen paradox. \section{EPR - paradox} \noindent It was Schr\"{o}dinger who introduced the term "EPR paradox". The authors of EPR themselves always considered their article as a demonstration of an inconsistency in the quantum mechanics of their time rather than as a particular curiosity. The main conclusion of the paper \cite{EPR} "Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?"
published in 1935 (8 years after von Neumann's work) is the statement: {\it ...we proved that (1) the quantum-mechanical description of reality given by the wave function is not complete or (2) when the operators corresponding to two physical quantities do not commute the two quantities cannot have simultaneous reality. Starting then with the assumption that the wave function does give a complete description of the physical reality, we arrived at the conclusion that two physical quantities, with noncommuting operators, can have simultaneous reality. Thus the negation of (1) leads to the negation of the only other alternative (2). We are thus forced to conclude that the quantum-mechanical description of physical reality given by wave functions is not complete. } After von Neumann's works this statement appears obvious. However, in order to clarify this point of view completely, we must understand what "physical reality" means in EPR. In the EPR paper, physical reality is defined as follows: {\it If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of physical reality corresponding to this physical quantity. } Such a definition of physical reality is a step back compared to von Neumann's definition. By the EPR definition, a state is actual only when at least one observable has an exact value. This point of view is incomplete and leads to inconsistency. When a subsystem is separated, "the loss of observables" results directly from the definition of the density matrix for the subsystem. "The occurrence" of observables in the chosen subsystem when quantities are measured in another "subsidiary" subsystem can be naturally explained in terms of the conditional density matrix.
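Before turning to the conditional density matrix, the reduced-density-matrix computations of the preceding section can be reproduced numerically. A sketch for two qubits (an illustration, not part of the original text; the discrete basis replaces the integral kernels):

```python
import numpy as np

def reduce_to_1(rho):
    # rho_1 = Tr_2 rho for H = H_1 (x) H_2 with dim H_1 = dim H_2 = 2
    return np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)

f, w = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# product state Psi = f (x) w: purity is inherited by the subsystem
prod = np.kron(f, w)
rho1_prod = reduce_to_1(np.outer(prod, prod))
print(np.allclose(rho1_prod @ rho1_prod, rho1_prod))   # True: rho_1 is pure

# antisymmetric state: the subsystem is maximally mixed, rho_1 = E/2
psi = (np.kron(f, w) - np.kron(w, f)) / np.sqrt(2)
rho1 = reduce_to_1(np.outer(psi, psi))
print(np.allclose(rho1, np.eye(2) / 2))                # True
```

The two cases match von Neumann's observation quoted above: the subsystem state is pure exactly when the composite wave function factorizes.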
\section{Conditional Density Matrix} \noindent The average value of a variable with the kernel $$ F^c (x,x' ) = F_1 (y,y')u(z)u^* (z'), \quad \int |u(z)|^2 dz =1, $$ is equal to $$ \langle F^c \rangle_{\rho} \quad = \quad p \int F_1 (y,y^{'})dy^{'}{\rho}^{c}(y^{'},y)dy, $$ where $$ {\rho}^{c}(y,y^{'}) = {1 \over p} \int u^*(z)dz\, \rho (y,z;y',z')\,u(z')dz'\,, $$ $$ p \quad = \quad \int u^* (z)dz \,\rho (y,z;y,z')\,u(z')dz' dy. $$ Since we can represent $p$ in the form $$ p\quad = \quad \int P(z,z')dz'\, \rho _2 (z';z)dz, $$ $$ P(z,z') \quad = \quad u(z)u^*(z'), $$ we see that $p$ is the average value of a variable $P$ of the subsystem $S_2$. The operator $\hat P$ is a projector ($\hat P^2 = \hat P$). Therefore it is possible to interpret the value $p$ as a probability. It is easy to demonstrate that the operator $\hat{\rho}^c $ satisfies all the properties of a density matrix. So the kernel $\rho^c (y,y')$ defines some state of the subsystem $S_1$. What is this state? According to the decomposition of the $\delta$-function $$ \delta (z-z')=\sum_{n} \phi _n (z) {\phi _n}^* (z'), $$ $\{ \phi _n (z) \}$ being a basis in the space ${\cal H}_2$, the reduced density matrix can be represented in the form of the sum $$ \rho _1 (y,y') = \sum_n p_n {\rho}_{n}^{c} (y,y'). $$ Here $$ {\rho}_{n}^{c} (y,y') = {1 \over p_n } \int {\phi _n}^* (z)dz \, \rho (y,z;y',z')\, {\phi}_n (z')dz' $$ and $$ p_n \quad = \quad \int {\phi _n}^* (z)dz \,\rho (y,z;y,z')\,{\phi _n}(z')dz'dy $$ $$ \quad = \quad \int P_n (z,z')dz'\, {\rho}_2 (z',z)dz. $$ The numbers $p_n$ satisfy the conditions $$ {p_n}^* =p_n,\qquad p_n \geq 0, \qquad \sum_{n} p_n =1, $$ and define a probability distribution. The basis $\{ \phi _n \}$ in the space ${\cal H}_2$ corresponds to some observable $\hat G_2$ of the subsystem $S_2$ with discrete non-degenerate spectrum. It is determined by the kernel $$ G_2 (z,z')=\sum_{n} g_n {\phi }_n (z) {\phi }_n^* (z'), \quad g_n = g_n^* ; \quad g_n \not= g_{n'} \quad if \quad n \not= n'.
$$ The average value of $G_2$ in the state $\rho _2$ is equal to $$ \int \rho _2 (z,z')dz'\,G_2(z',z)dz = \sum_{n} g_n \int \rho _2 (z,z')\, \phi _n (z') {\phi _n}^* (z)\, dz' dz = \sum_{n} p_n g_n. $$ Thus the number $p_n$ gives the probability that the observable $\hat G_2$ has the value $g_n$ in the state $\rho _2$. Obviously, the kernel $\rho _{n}^{c} (y,y')$ in this case defines the state of the system $S_1$ under the condition that the value of the variable $G_2$ is equal to $g_n$. Hence it is natural to call the operator $\hat \rho _{n}^{c}$ the Conditional Density Matrix (CDM) \cite{Kh}, \cite{Sol}: $$ \hat \rho _{c1|2} = {Tr_2 (\hat P_2 \hat \rho) \over Tr(\hat P_2 \hat \rho ) }. $$ It is the ({\it conditional}) density matrix for the subsystem $S_1$ under the condition that the subsystem $S_2$ is selected in the pure state $\hat \rho _2 =\hat P_2 $. This is the most important case for quantum communication. The conditional density matrix satisfies all the properties of a density matrix, and it helps to clarify the meaning of operations in some of the finest experiments. \section{Examples: System and Subsystems} \subsection{ Parapositronium} \noindent As an example we consider parapositronium, i.e. the system consisting of an electron and a positron whose total spin is equal to zero. In this case the nonrelativistic approximation is valid and the state vector of the system is represented in the form of the product $$ {\Psi}({\vec r}_{e},{\sigma}_{e}; {\vec r}_{p}, {\sigma}_{p}) \quad = \quad {\Phi}({\vec r}_{e},{\vec r}_{p}) \chi({\sigma}_{e},{\sigma}_{p}). $$ The spin wave function is equal to $$ \chi({\sigma}_{e},{\sigma}_{p}) \quad = \quad {1 \over \sqrt{2}} ({\chi}_{\vec n}({\sigma}_{e}){\chi}_{-{\vec n}}({\sigma}_{p}) \quad - \quad {\chi}_{\vec n}({\sigma}_{p}){\chi}_{-{\vec n}}({\sigma}_{e})).
$$ Here ${\chi}_{\vec n}(\sigma)$ and ${\chi}_{(-{\vec n})}(\sigma)$ are the eigenvectors of the operator projecting the spin onto the vector $\vec n$: $$ ({\vec {\sigma}}{ \vec n})\ {\chi}_{\pm \vec n}(\sigma) \quad = \quad \pm {\chi}_{\pm \vec n}(\sigma). $$ The spin density matrix of the system is determined by the operator with the kernel $$ \rho({\sigma};{\sigma}^{'}) \quad = \quad {\chi}({\sigma}_{e},{\sigma}_{p})\ {{\chi}}^{*}({\sigma}^{'}_{e},{\sigma}^{'}_{p}). $$ The spin density matrix of the electron is $$ {\rho}_{e}({\sigma},{\sigma}^{'}) \quad = \quad \sum_{\xi}\ {\chi}({\sigma},\xi)\ {{\chi}}^{*}({\sigma}^{'}, \xi) \quad = \quad $$ $$ {1 \over 2}\ ({\chi}_{\vec n}(\sigma)\ {\chi}^{*}_{\vec n}({\sigma}^{'}) \ + \ {\chi}_{(-{\vec n})}(\sigma)\ {\chi}^{*}_{(-{\vec n})}({\sigma}^{'})) \quad = \quad {1 \over 2}\delta (\sigma - {\sigma}^{'}). $$ In this state the electron is completely unpolarized. If the electron passes through a polarization filter, then the pass probability is independent of the filter orientation. The same holds for the positron if its spin state is measured independently of the electron. Now let us consider quite a different experiment. Namely, the positron passes through a polarization filter and the electron polarization is measured simultaneously. The operator that projects the positron spin onto the vector $\vec m$ (determined by the filter) is given by the kernel $$ P(\sigma,{\sigma}^{'}) \quad = \quad {\chi}_{\vec m}(\sigma)\ {\chi}^{*}_{{\vec m}}({\sigma}^{'}). $$ Now the conditional density matrix of the electron is equal to $$ {\rho}_{e/p}(\sigma,{\sigma}^{'}) \ = \ {\sum_{(\sigma,{\sigma}^{'})} {\chi}_{\vec m}(\sigma)\ {\chi}^{*}_{\vec m}({\sigma}^{'})\ {\chi}({\sigma}_{e},{\sigma}^{'})\ {{\chi}^{*}}({\sigma}^{'}_{e},\sigma) \over \sum_{(\xi,\sigma,{\sigma}^{'})} {\chi}_{\vec m}(\sigma)\ {\chi}^{*}_{\vec m}({\sigma}^{'})\ {\chi}(\xi,{\sigma}^{'})\ {{\chi}^{*}}(\xi,\sigma)}.
$$ The result of the summation is $$ {\rho}_{e/p}(\sigma,{\sigma}^{'}) \quad = \quad {\chi}_{(-\vec m )}(\sigma)\ {\chi}^{*}_{(-\vec m )}({\sigma}^{'}). $$ Thus, if the polarization of the positron is selected with the help of a polarizer into a state with well-defined spin, then the electron appears to be polarized in the opposite direction. Of course, this result is in agreement with the fact that the total spin of the composite system is equal to zero. Nevertheless, this natural result is obtained only when the positron and electron spins are measured simultaneously. Otherwise, the simpler experiment shows that the directions of the electron and positron spins are completely indefinite. A.~Einstein said, "{\it Raffiniert ist der Herrgott, aber boshaft ist er nicht}". \subsection{ Quantum Photon Teleportation} \noindent In the Innsbruck experiment \cite{Tel} on photon state teleportation, the initial state of the system is the result of the unification of the pair of photons 1 and 2, being in the antisymmetric state ${\chi}({\sigma}_{1},{\sigma}_{2})$ with total angular momentum equal to zero, and the photon 3, being in the state ${\chi}_{\vec m}({\sigma}_{3})$ (that is, polarized along the vector $\vec m $). The joint system state is given by the density matrix $$ \rho(\sigma, {\sigma}^{'}) \quad = \quad {\Psi}(\sigma){{\Psi}^{*}}({\sigma}^{'}), $$ where the wave function of the joint system is the product $$ {\Psi}(\sigma) \quad = \quad {\chi}({\sigma}_{1},{\sigma}_{2})\ {\chi}_{\vec m}({\sigma}_{3}). $$ Considering then the photon 2 only (without fixing the states of the photons 1 and 3), we find the photon 2 to be completely unpolarized, with the density matrix $$ {\rho}({\sigma}_{2},{\sigma}_{2}^{'}) \ = \ Tr_{(1,3)}\ {\rho}({\sigma}_{1},{\sigma}_{2},{\sigma}_{3}; {\sigma}_{1},{\sigma}_{2}^{'},{\sigma}_{3}) \ = \ {1 \over 2}\ \delta ({\sigma}_{2} - {\sigma}_{2}^{'}).
$$ However, if the photon 2 is registered when the state of the photons 1 and 3 has been determined to be ${\chi}({\sigma}_{1},{\sigma}_{3})$, then the state of the photon 2 is given by the conditional density matrix $$ {\rho}_{2/\lbrace 1,3 \rbrace} \quad = \quad {Tr_{(1,3)}\ (P_{1,3}\ {\rho}_{1,2,3}) \over Tr\ (P_{1,3}\ {\rho}_{1,2,3})}. $$ Here $ P_{1,3}$ is the projection operator $$ P_{1,3} \quad = \quad {\chi}({\sigma}_{1},{\sigma}_{3})\ {\chi}^{*}({\sigma}_{1},{\sigma}_{3}). $$ To evaluate the conditional density matrix, it is convenient first to find the vectors $$ {\phi}({\sigma}_{1}) \quad = \quad \sum_{3}\ {\chi}^{*}_{\vec m}({\sigma}_{3})\ {\chi}({\sigma}_{1},{\sigma}_{3}) $$ and $$ {\theta}({\sigma}_{2}) \quad = \quad \sum_{1}\ {{\phi}^{*}}({\sigma}_{1})\ {\chi}({\sigma}_{1},{\sigma}_{2}). $$ The vector $\theta$ equals $$ {\theta}({\sigma}_{2}) \quad = \quad - {1 \over 2}\ {\chi}_{\vec m}({\sigma}_{2}), $$ and the conditional density matrix of the photon 2 appears to be equal to $$ {\rho}_{2/\lbrace 1,3 \rbrace} \quad = \quad {\chi}_{\vec m}({\sigma}_{2})\ {{\chi}}^{*}_{\vec m}({\sigma}_{2}^{'}). $$ Thus, if the subsystem consisting of the photons 1 and 3 is projected onto the antisymmetric state ${\chi}({\sigma}_{1},{\sigma}_{3})$ (with total angular momentum equal to zero), then the photon 2 appears to be polarized along the vector ${\vec m}$. \subsection{ Entanglement Swapping} \noindent In the recent experiment \cite{Swa}, two pairs of correlated photons emerge in the setup simultaneously. The state of the system is described by the wave function $$ {\Psi}(\sigma) \quad = \quad {\Psi}({\sigma}_{1}, {\sigma}_{2}, {\sigma}_{3}, {\sigma}_{4}) \quad = \quad {\chi}({\sigma}_{1},{\sigma}_{2}){\chi}({\sigma}_{3},{\sigma}_{4}). $$ The photons 2 and 3 are selected into the antisymmetric state $ {\chi}({\sigma}_{2},{\sigma}_{3})$. What is the state of the pair of photons 1 and 4?
The conditional density matrix of the pair (1--4) is $$ {\hat \rho}_{14/23} \quad = \quad {Tr_{23}({\hat P}_{23}{\hat \rho}_{1234}) \over Tr({\hat P}_{23}{\hat \rho}_{1234})}, $$ where the operator that selects the pair (2--3) is defined by $$ P_{23}(\sigma, {\sigma}^{'}) \quad = \quad {\chi}({\sigma}_{2},{\sigma}_{3}) {\chi}^{*}({\sigma}_{2}^{'},{\sigma}_{3}^{'}) $$ and the density matrix of the four-photon system is determined by the kernel $$ {\rho}_{1234}(\sigma, {\sigma}^{'}) \quad = \quad {\Psi}({\sigma}_{1}, {\sigma}_{2}, {\sigma}_{3}, {\sigma}_{4}) {\Psi}^{*}({\sigma}_{1}^{'}, {\sigma}_{2}^{'}, {\sigma}_{3}^{'}, {\sigma}_{4}^{'}). $$ Direct calculation shows that the pair of photons 1 and 4 has to be in the pure state with the wave function $$ \Phi({\sigma}_{1},{\sigma}_{4}) \quad = \quad {\chi}({\sigma}_{1},{\sigma}_{4}). $$ The experiment confirms this prediction. \subsection{ Pairs of Polarized Photons} \noindent Now consider a modification of the Innsbruck experiment. Let there be two pairs of photons $(1,\ 2)$ and $(3,\ 4)$. Suppose that each pair is in the pure antisymmetric state $\chi$. The spin part of the density matrix of the total system is given by the equation $$ {\rho}(\sigma,{\sigma}^{'}) \quad = \quad {\Psi}(\sigma)\ {{\Psi}^{*}}({\sigma}^{'}), $$ where $$ {\Psi}(\sigma) \quad = \quad {\chi}({\sigma}_{1},{\sigma}_{2})\ {\chi}({\sigma}_{3},{\sigma}_{4}). $$ If the photons 2 and 4 pass through polarizers and become polarized along ${\chi}_{\vec m}({\sigma}_{2})$ and ${\chi}_{\vec s}({\sigma}_{4})$, then the wave function of the system is transformed into $$ {\Phi}(\sigma) \quad = \quad {\chi}_{\vec n}({\sigma}_{1})\ {\chi}_{\vec m}({\sigma}_{2}) \ {\chi}_{\vec r}({\sigma}_{3})\ {\chi}_{\vec s}({\sigma}_{4}). $$ Here ${\vec n},\ {\vec m}$ and ${\vec r},\ {\vec s}$ are pairs of mutually orthogonal vectors.
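Conditional-density-matrix computations of this kind are easy to check numerically. The following sketch (an illustration, not part of the original text; qubits stand in for photon polarizations) verifies the entanglement-swapping prediction of the previous subsection: projecting photons 2 and 3 onto the antisymmetric state leaves photons 1 and 4 in the antisymmetric state.

```python
import numpy as np

# antisymmetric two-qubit state chi = (|01> - |10>)/sqrt(2)
chi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)

Psi = np.kron(chi, chi)                 # chi(1,2) chi(3,4), qubit order 1,2,3,4
rho = np.outer(Psi, Psi)

# projector selecting the (adjacent) pair (2,3) in the antisymmetric state
P23 = np.kron(np.eye(2), np.kron(np.outer(chi, chi), np.eye(2)))

def cdm_14(rho, P):
    # rho_{14/23} = Tr_{23}(P rho P) / Tr(P rho)
    sel = P @ rho @ P
    r = sel.reshape([2] * 8)            # bra indices 1..4, then ket indices 1..4
    red = np.einsum('abcdebch->adeh', r).reshape(4, 4)   # trace over qubits 2, 3
    return red / np.trace(P @ rho).real

rho14 = cdm_14(rho, P23)
print(np.allclose(rho14, np.outer(chi, chi)))   # True: pair (1,4) is antisymmetric
```

The same helper can be used to check the parapositronium and teleportation examples by replacing the projector.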
Now the conditional density matrix of the pair of photons 1 and 3 is $$ {\rho}_{(1,3)/(2,4)}(\sigma,{\sigma}^{'}) \quad = \quad {\Theta}({\sigma}_{1},{\sigma}_{3})\ {{\Theta}^{*}}({\sigma}_{1}^{'},{\sigma}_{3}^{'}). $$ The wave function of the pair is the product of the wave functions of the two photons with definite polarizations: $$ {\Theta}({\sigma}_{1},{\sigma}_{3}) \quad = \quad {\chi}_{\vec n}({\sigma}_{1})\ {\chi}_{\vec r}({\sigma}_{3}) . $$ We note that the initial correlation properties of the system appear only when the photons pass through polarizers. Although the wave function of the system looks like a wave function of independent particles, the initial correlation manifests itself in the correlations of polarizations within each pair. Pairs of polarized photons appear to be very useful in quantum communication. \subsection{ Quantum Realization of the Vernam Communication Scheme} \noindent Let us recall the main idea of the Vernam communication scheme \cite{Ver}. In this scheme, Alice encrypts her message (a string of bits denoted by the binary number $m_{1}$) using a randomly generated key $k$. She simply adds each bit of the message to the corresponding bit of the key to obtain the scrambled text ($ s = m_{1} \oplus k $, where $\oplus$ denotes binary addition modulo 2 without carry). It is then sent to Bob, who decrypts the message by subtracting the key ($s \ominus k = m_{1} \oplus k \ominus k = m_{1}$). Because the bits of the scrambled text are as random as those of the key, they do not contain any information. This cryptosystem is thus provably secure in the sense of information theory. Actually, today this is the only provably secure cryptosystem! The problem with this security is that Alice and Bob must possess a common secret key, which must be at least as long as the message itself. They can only use the key for a single encryption.
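The encryption and decryption arithmetic of the Vernam scheme is plain bitwise XOR. A minimal sketch (the message value is an illustrative assumption):

```python
import secrets

message = 0b10110010                 # m_1: the plaintext bits
key = secrets.randbits(8)            # k: a random one-time key, same length
scrambled = message ^ key            # s = m_1 XOR k  (sent to Bob)
recovered = scrambled ^ key          # s XOR k = m_1  (Bob subtracts the key)
print(recovered == message)          # True
```

Since each bit of `scrambled` is the XOR of a plaintext bit with an independent uniformly random key bit, the ciphertext alone carries no information about the message.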
If they used the key more than once, Eve could record all of the scrambled messages and start to build up a picture of the plain texts and thus also of the key. (If Eve recorded two different messages encrypted with the same key, she could add the scrambled texts to obtain the sum of the plain texts: $s_{1} \oplus s_{2} = m_{1} \oplus k \oplus m_{2} \oplus k = m_{1} \oplus m_{2} \oplus k \oplus k = m_{1} \oplus m_{2}$, where we used the fact that $\oplus$ is commutative.) Furthermore, the key has to be transmitted by some trusted means, such as a courier, or through a personal meeting between Alice and Bob. This procedure may be complex and expensive, and may even lead to a loophole in the system. With the help of pairs of polarized photons, we can overcome the shortcomings of the classical realization of the Vernam scheme. Suppose Alice sends to Bob pairs of polarized photons obtained according to the rules described in the previous section. Note that the concrete photon polarizations are set up in Alice's laboratory and Eve does not know them. If the polarization of the photon 1 is set by a random binary number $p_{i}$ and the polarization of the photon 3 is set by the number $m_{i} \oplus p_{i}$, then each photon (when considered separately) does not carry any information. However, after obtaining these photons, Bob can add the corresponding binary numbers and get the number $m_{i}$ containing the information ($m_{i} \oplus p_{i} \oplus p_{i}=m_{i}$). In this scheme, a secret code is created during the process of sending and is transferred to Bob together with the information. This makes the usage of the scheme completely secure. \section{Conclusion} \noindent Provided that the subsystem $S_2$ of a composite quantum system $S=S_1 + S_2$ is selected (or will be selected) in a pure state $\hat P_n$, the quantum state of the subsystem $S_1$ is the conditional density matrix $\hat {\rho}_{1c/2n}$.
The reduced density matrix $\hat {\rho }_1$ is connected with the conditional density matrices by the expansion: $$ \hat \rho _1 = \sum p_n {\hat \rho}_{1n/2n}; $$ here $$ \sum \hat P_n = \hat E, \qquad \sum p_n = 1. $$ The coefficients $p_n$ are the probabilities of finding the subsystem $S_2$ in the pure states $\hat P_n$. \end{document}
\begin{document} \title[L\"owner equation]{Exponential driving function for the L\"owner equation} \author[D.~Prokhorov]{Dmitri Prokhorov} ubjclass[2010]{Primary 30C35; Secondary 30C20, 30C80} \keywords{L\"owner equation, singular solution} \address{D.~Prokhorov: Department of Mathematics and Mechanics, Saratov State University, Saratov 410012, Russia} \email{[email protected]} \begin{abstract} We consider the chordal L\"owner differential equation with the model driving function $\root3\of t$. Holomorphic and singular solutions are represented by their series. It is shown that a disposition of values of different singular and branching solutions is monotonic, and solutions to the L\"owner equation map slit domains onto the upper half-plane. The slit is a $C^1$-curve. We give an asymptotic estimate for the ratio of harmonic measures of the two slit sides. \end{abstract} \maketitle ection{Introduction} The L\"owner differential equation introduced by K.~L\"owner \cite{Loewner} served a source to study properties of univalent functions on the unit disk. Nowadays it is of growing interest in many areas, see, e.g., \cite{Markina}. The L\"owner equation for the upper half-plane $\mathbb H$ appeared later (see, e.g., \cite{Aleksandrov}) and became popular during the last decades. Define a function $w=f(z,t)$, $z\in\mathbb H$, $t\geq0$, \begin{equation} f(z,t)=z+\frac{2t}{z}+O\left(\frac{1}{z^2}\right), \;\;\; z\to\infty, \label{exp} \end{equation} which maps $\mathbb H etminus K_t$ onto $\mathbb H$ and solves the {\it chordal} L\"owner ordinary differential equation \begin{equation} \frac{df(z,t)}{dt}=\frac{2}{f(z,t)-\lambda(t)},\;\;\;f(z,0)=z,\;\;\;z\in\mathbb H, \label{Leo} \end{equation} where the driving function $\lambda(t)$ is continuous and real-valued. The conformal maps $f(z,t)$ are continuously extended onto $z\in\mathbb R$ minus the closure of $K_t$ and the extended map also satisfies equation (\text{\rm Re }f{Leo}). 
Following \cite{LMR}, we pay attention to the old problem of determining, in terms of $\lambda$, when $K_t$ is a Jordan arc, $K_t=\gamma(t)$, $t\geq0$, emanating from the real axis $\mathbb R$. In this case $f(z,t)$ are continuously extended onto the two sides of $\gamma(t)$, \begin{equation} \lambda(t)=f(\gamma(t),t),\;\;\;\gamma(t)=f^{-1}(\lambda(t),t). \label{gam} \end{equation} Points $\gamma(t)$ are treated as prime ends which are different for the two sides of the arc. Note that Kufarev \cite{Kufarev} proposed a counterexample of a non-slit mapping for the {\it radial} L\"owner equation in the disk. For the chordal L\"owner equation, Kufarev's example corresponds to $\lambda(t)=3\sqrt2\sqrt{1-t}$, see \cite{Kager}, \cite{LMR} for details. Equation (\ref{Leo}) admits integration in quadratures for particular cases of $\lambda(t)$ studied in \cite{Kager}, \cite{PrZ}. The integrability cases of (\ref{Leo}) are invariant under linear and scaling transformations of $\lambda(t)$, see, e.g., \cite{LMR}. Therefore, assume without loss of generality that $\lambda(0)=0$ and, equivalently, $\gamma(0)=0$. The picture of singularity lines for driving functions $\lambda(t)$ belonging to the Lipschitz class $\text{Lip}(1/2)$ with the exponent 1/2 is well studied, see, e.g., \cite{LMR} and references therein. This article aims to show that in the case of the cubic root driving function $\lambda(t)=\root3\of t$ in (\ref{Leo}), that is, \begin{equation} \frac{df(z,t)}{dt}=\frac{2}{f(z,t)-\root3\of t},\;\;\;f(z,0)=z,\;\;\;\text{\rm Im } z\geq0, \label{Le3} \end{equation} the solution $w=f(z,t)$ is a slit mapping for $t>0$ small enough, i.e., $K_t=\gamma(t)$, $0<t<T$. The driving function $\lambda(t)=\root3\of t$ is chosen as a typical function of the Lipschitz class $\text{Lip}(1/3)$. We do not try to cover the most general case but hope that the model driving function serves as a demonstration for a wider class.
By the way, the case when the trace $\gamma$ is a circular arc meeting the real axis tangentially is studied in \cite{ProkhVas}. The explicit solution for the inverse function gave a driving term of the form $\lambda(t)=C t^{1\over3}+\dots$ which corresponds to the above driving function asymptotically. The main result of the article is contained in the following theorem, which shows that $f(z,t)$ is a mapping from a slit domain $D(t)=\mathbb H\setminus\gamma(t)$. \begin{theorem} Let $f(z,t)$ be a solution to the L\"owner equation (\ref{Le3}). Then $f(\cdot,t)$ maps $D(t)=\mathbb H\setminus\gamma(t)$ onto $\mathbb H$ for $t>0$ small enough, where $\gamma(t)$ is a $C^1$-curve, except possibly for the point $\gamma(0)=0$. \end{theorem} The preliminary results of Section 2 concern the theory of differential equations and preparations for the main proof. Theorem 1, together with auxiliary lemmas, is proved in Section 3. Section 4 is devoted to estimates for the harmonic measures of the two sides of the slit generated by the L\"owner equation (\ref{Le3}). Theorem 2 in that section gives the asymptotic relation for the ratio of these harmonic measures as $t\to0$. In Section 5 we consider holomorphic solutions to (\ref{Le3}) represented by power series and propose an asymptotic estimate for the radius of convergence of the series. \section{Preliminary statements} Change variables $t\to\tau^3$, $g(z,\tau):=f(z,\tau^3)$, and reduce equation (\ref{Le3}) to \begin{equation} \frac{dg(z,\tau)}{d\tau}=\frac{6\tau^2}{g(z,\tau)-\tau}, \;\;\;g(z,0)=z, \;\;\;\text{\rm Im } z\geq0. \label{Le2} \end{equation} Note that differential equations $$\frac{dy}{dx}=\frac{Q(x,y)}{P(x,y)}$$ with holomorphic functions $P(x,y)$ and $Q(x,y)$ are well known both for complex and real variables, especially in the case of polynomials $P$ and $Q$, see, e.g., \cite{Bendixson}, \cite{Borel}, \cite{Golubev}, \cite{Poincare}, \cite{Sansone-1}, \cite{Sansone-2}.
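The substitution $t\to\tau^3$ can be cross-checked numerically: integrating (\ref{Le3}) in $t$ and (\ref{Le2}) in $\tau$ from the same starting point $z$ should produce $g(z,\tau)=f(z,\tau^3)$. A minimal fixed-step Runge--Kutta sketch (step counts are arbitrary choices):

```python
def rk4(rhs, y0, t0, t1, n=20000):
    # classical Runge-Kutta for a complex-valued ODE y' = rhs(t, y)
    y, h, t = y0, (t1 - t0) / n, t0
    for _ in range(n):
        k1 = rhs(t, y)
        k2 = rhs(t + h / 2, y + h * k1 / 2)
        k3 = rhs(t + h / 2, y + h * k2 / 2)
        k4 = rhs(t + h, y + h * k3)
        y += (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

z, T = 1 + 1j, 1e-3

# equation (Le3): df/dt = 2 / (f - t^{1/3})
f = rk4(lambda t, w: 2 / (w - t ** (1 / 3)), z, 0.0, T)

# equation (Le2) after t = tau^3: dg/dtau = 6 tau^2 / (g - tau)
g = rk4(lambda tau, w: 6 * tau ** 2 / (w - tau), z, 0.0, T ** (1 / 3))

assert abs(f - g) < 1e-6   # g(z, tau) = f(z, tau^3)
```

The starting point is kept away from the singular point $(0,0)$, so both right-hand sides stay bounded along the integration.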
If $z\neq0$, then $g(z,0)\neq0$, and there exists a {\it regular} solution $g(z,\tau)$ to (\ref{Le2}) holomorphic in $\tau$ for $|\tau|$ small enough which is unique for every $z\neq0$. We are interested mostly in studying {\it singular} solutions to (\ref{Le2}), i.e., those which do not satisfy the uniqueness conditions for equation (\ref{Le2}). Every point $(g(z_0,\tau_0),\tau_0)$ such that $g(z_0,\tau_0)=\tau_0$ is a {\it singular point} for equation (\ref{Le2}). If $\tau_0\neq0$, then $(g(z_0,\tau_0),\tau_0)$ is an {\it algebraic solution critical point}, and the corresponding singular solutions to (\ref{Le2}) through this point are expanded in series in terms of $(\tau-\tau_0)^{1/m}$, $m\in\mathbb N$. So these singular solutions are different branches of the same analytic function, see [17,~Chap.9, \S1]. The point $(g(z_0,\tau_0),\tau_0)=(0,0)$ is the only {\it singular point of indefinite character} for (\ref{Le2}). It is determined by the condition that the numerator and denominator in the right-hand side of (\ref{Le2}) vanish simultaneously. All the singular solutions to (\ref{Le2}) which are not branches of the same analytic function pass through this point $(0,0)$ [17,~Chap.9, \S1]. Regular and singular solutions to (\ref{Le2}) behave according to the Poincar\'e-Bendixson theorems \cite{Poincare}, \cite{Bendixson}, [17,~Chap.9, \S1]. Namely, two integral curves of differential equation (\ref{Le2}) intersect only at the singular point $(0,0)$. An integral curve of (\ref{Le2}) can have multiple points only at $(0,0)$. Bendixson \cite{Bendixson} considered real integral curves globally and stated that they have endpoints at nodes and foci and have an extension through a saddle.
Under these assumptions, the Bendixson theorem \cite{Bendixson} makes possible only three cases for equation (\ref{Le2}) in a neighborhood of $(0,0)$: (a) an integral curve is closed, i.e., it is a cycle; (b) an integral curve is a spiral which tends to a cycle asymptotically; (c) an integral curve has its endpoint at $(0,0)$. Recall the integrability case \cite{Kager} of the L\"owner differential equation (\ref{Leo}) with the square root forcing $\lambda(t)=c\sqrt t$. After changing variables $t\to\tau^2$, the singular point $(0,0)$ in this case is a saddle according to the Poincar\'e classification \cite{Poincare} for linear differential equations. On the other hand, another integrability case \cite{Kager} with the square root forcing $\lambda(t)=c\sqrt{1-t}$, after changing variables $t\to1-\tau^2$, leads to a focus at $(0,0)$. Going back to equation (\ref{Le2}), remark that its solutions are infinitely differentiable with respect to the real variable $\tau$, see [4,~Chap.1, \S1], [17,~Chap.9, \S1]. Hence recurrent evaluations of Taylor coefficients can help to find singular solutions provided that the resulting series has a positive radius of convergence [16,~Chap.3, \S1]. Apply this method to equation (\ref{Le2}). Let \begin{equation} g_s(0,\tau)=\sum_{n=1}^{\infty}a_n\tau^n \label{Si1} \end{equation} be a formal power series for singular solutions to (\ref{Le2}). Note that $g_s$ is not necessarily unique. It depends on the path along which $z$ approaches 0, $z\notin K_{\tau}$. Substitute (\ref{Si1}) into (\ref{Le2}) and see that \begin{equation} \sum_{n=1}^{\infty}na_n\tau^{n-1}\left(\sum_{n=1}^{\infty}a_n\tau^n-\tau\right)=6\tau^2. \label{Si2} \end{equation} Equating coefficients at the same powers in both sides of (\ref{Si2}) we obtain that \begin{equation} a_1(a_1-1)=0.
\label{Si3} \end{equation} This equation gives two possible values $a_1=1$ and $a_1=0$ for the two singular solutions $g^+(0,\tau)$ and $g^-(0,\tau)$. In both cases equation (\ref{Si2}) implies recurrent formulas for the coefficients $a_n^+$ and $a_n^-$ of $g^+(0,\tau)$ and $g^-(0,\tau)$ respectively, \begin{equation} a_1^+=1,\;\;a_2^+=6,\;\;a_n^+=-\sum_{k=2}^{n-1}ka_k^+a_{n+1-k}^+, \;\;n\geq3, \label{Si4} \end{equation} \begin{equation} a_1^-=0,\;\;a_2^-=-3,\;\; a_n^-=\frac{1}{n}\sum_{k=2}^{n-1}ka_k^-a_{n+1-k}^-,\;\;n\geq3. \label{Si5} \end{equation} Let us show that the series $\sum_{n=1}^{\infty}a_n^+\tau^n$ formally representing $g^+(0,\tau)$ diverges for all $\tau\neq0$. \begin{lemma} For $n\geq2$, the inequalities \begin{equation} 6^{n-1}(n-1)!\leq|a_n^+|\leq12^{n-1}n^{n-3} \label{Si6} \end{equation} hold. \end{lemma} \proof For $n=2$, the estimate (\ref{Si6}) from below holds with the equality sign. Suppose that these estimates are true for $k=2,\dots,n-1$ and substitute them in (\ref{Si2}). For $n\geq3$, we have $$|a_n^+|=\sum_{k=2}^{n-1}k|a_k^+||a_{n+1-k}^+| \geq\sum_{k=2}^{n-1}k6^{k-1}(k-1)!6^{n-k}(n-k)!=$$ $$6^{n-1}\sum_{k=2}^{n-1}k!(n-k)!\geq6^{n-1}(n-1)!\;.$$ This confirms by induction the estimate (\ref{Si6}) from below. Similarly, for $n=2,3$, the estimate (\ref{Si6}) from above is easily verified. Suppose that these estimates are true for $k=2,\dots,n-1$ and substitute them in (\ref{Si2}). For $n\geq4$, we have $$|a_n^+|=\sum_{k=2}^{n-1}k|a_k^+||a_{n+1-k}^+| \leq\sum_{k=2}^{n-1}k12^{k-1}k^{k-3}12^{n-k}(n+1-k)^{n-2-k}=$$ $$12^{n-1}\sum_{k=2}^{n-1}k^{k-2}(n+1-k)^{n-2-k} \leq12^{n-1}\left(\sum_{k=2}^{n-2}(n-1)^{k-2}(n-1)^{n-2-k}+\frac{(n-1)^{n-3}}{2}\right)$$ $$<12^{n-1}\left(\sum_{k=2}^{n-2}(n-1)^{n-4}+(n-1)^{n-4}\right)<12^{n-1}n^{n-3}$$ which completes the proof. \endproof Evidently, the upper estimates (\ref{Si6}) are preserved for $|a_n^-|$, $n\geq2$.
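The recurrence (\ref{Si4}) and the bounds (\ref{Si6}) can be checked directly with exact integer arithmetic; a small sketch:

```python
from math import factorial

def coeffs_plus(N):
    # a_1^+ = 1, a_2^+ = 6, a_n^+ = -sum_{k=2}^{n-1} k a_k^+ a_{n+1-k}^+  (Si4)
    a = {1: 1, 2: 6}
    for n in range(3, N + 1):
        a[n] = -sum(k * a[k] * a[n + 1 - k] for k in range(2, n))
    return a

a = coeffs_plus(12)
assert a[3] == -72          # the value used later in the proof of Lemma 2

# bounds of Lemma 1: 6^{n-1}(n-1)! <= |a_n^+| <= 12^{n-1} n^{n-3}
for n in range(2, 13):
    assert 6 ** (n - 1) * factorial(n - 1) <= abs(a[n]) <= 12 ** (n - 1) * n ** (n - 3)
```

Since the computed $a_n^+$ alternate in sign, the sum in (\ref{Si4}) has no cancellation, which is exactly what the induction in the proof exploits.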
The lower estimates (\ref{Si6}) imply divergence of $\sum_{n=1}^{\infty}a_n^+\tau^n$ for $\tau\neq0$. Therefore equation (\ref{Le2}) does not have any holomorphic solution in a neighborhood of $\tau_0=0$. There exist methods to sum the series $\sum_{n=1}^{\infty}a_n^+\tau^n$, the Borel regular method among them \cite{Borel}, [16,~Chap.3, \S1]. Let $$G(\tau)=\sum_{n=1}^{\infty}\frac{a_n^+}{n!}\tau^n;$$ this series has a positive radius of convergence according to the upper estimates of Lemma 1. The Borel sum equals $$h(\tau)=\int_0^{\infty}e^{-x}G(\tau x)dx$$ and solves (\ref{Le2}) provided it determines an analytic function. The same approach is applied to $\sum_{n=1}^{\infty}a_n^-\tau^n$. In any case solutions $g_1(0,\tau)$, $g_2(0,\tau)$ to (\ref{Le2}) emanating from the singular point $(0,0)$ satisfy the asymptotic relations $$g_1(0,\tau)=\sum_{k=1}^na_k^+\tau^k+o(\tau^n),\;\;\; g_2(0,\tau)=\sum_{k=1}^na_k^-\tau^k+o(\tau^n),\;\;\;\tau\to0,$$ for all $n\geq2$; the terms $o(\tau^n)$ in both representations depend on $n$. Let $f_1(0,t):=g_1(0,\root3\of t)$, $f_2(0,t):=g_2(0,\root3\of t)$. Since $f_1(0,t)=\root3\of t+6\root3\of{t^2}+o(\root3\of{t^2})$ and $f_2(0,t)=-3\root3\of{t^2}+o(\root3\of{t^2})$ as $t\to0$, the inequality $$f_2(0,t)<\root3\of t<f_1(0,t)$$ holds for all $t>0$ small enough. Let us find representations for all other singular solutions to equation (\ref{Le3}) which appear at $t>0$. Suppose there are $z_0\in\mathbb H$ and $t_0>0$ such that $f(z_0,t_0)=\root3\of{t_0}$. Then $(f(z_0,t_0),t_0)$ is a singular point of equation (\ref{Le3}), and $f(z_0,t)$ is expanded in a series with powers $(t-t_0)^{n/m}$, $m\in\mathbb N$, \begin{equation} f(z_0,t)=\root3\of{t_0}+\sum_{n=1}^{\infty}b_{n/m}(t-t_0)^{n/m}.
\label{Si7} \end{equation} Substitute (\ref{Si7}) into (\ref{Le3}) and see that $$\sum_{n=1}^{\infty}\frac{nb_{n/m}(t-t_0)^{n/m-1}}{m}\times$$ \begin{equation} \left(\sum_{n=1}^{\infty}b_{n/m} (t-t_0)^{n/m}-\sum_{n=1}^{\infty}\frac{(-1)^{n-1}2\cdot5\dots(3n-4)}{n!} \frac{(t-t_0)^n}{3^nt_0^{n-1/3}}\right)=2. \label{Si8} \end{equation} Equating coefficients at the same powers in both sides of (\ref{Si8}) we obtain that $m=2$ and \begin{equation} (b_{1/2})^2=4. \label{Si9} \end{equation} This equation gives two possible values $b_{1/2}=2$ and $b_{1/2}=-2$ for the two branches $f_1(z_0,t)$ and $f_2(z_0,t)$ of the solution (\ref{Si7}). Indeed, we can accept only one of the possibilities, for example $b_{1/2}=2$, while the second case is obtained by going to another branch of $(t-t_0)^{n/2}$ when passing through $t=t_0$. So we have recurrent formulas for the coefficients $b_{n/2}$ of $f_1(z_0,t)$ and $f_2(z_0,t)$, \begin{equation} b_{1/2}=2,\;\;b_{n/2}=\frac{1}{n+1}\left(c_{n/2}-\frac{1}{2}\sum_{k=2}^{n-1}kb_{k/2} (b_{(n+1-k)/2}-c_{(n+1-k)/2})\right),\;\;n\geq2, \label{Si10} \end{equation} where \begin{equation} c_{(2k-1)/2}=0,\;\;c_k=\frac{(-1)^{k-1}2\cdot5\dots(3k-4)}{3^kt_0^{k-1/3}k!},\;\; k=1,2,\dots\;. \label{Si11} \end{equation} Since $$f_1(z_0,t)=\root3\of{t_0}+2\sqrt{t-t_0}+o(\sqrt{t-t_0}),\; f_2(z_0,t)=\root3\of{t_0}-2\sqrt{t-t_0}+o(\sqrt{t-t_0}),$$ $$\root3\of t= \root3\of{t_0}+\frac{t-t_0}{3\root3\of{t_0^2}}+o(t-t_0),\;\;\;t\to t_0+0,$$ the inequality $$f_2(z_0,t)<\root3\of t<f_1(z_0,t)$$ holds for all $t>t_0$ close to $t_0$. \section{Proof of the main results} The theory of differential equations claims that integral curves of equation (\ref{Le3}) intersect only at the singular point $(0,0)$ [17,~Chap.9, \S1]. In particular, this implies the local inequalities $f_2(0,t)<f_2(z_0,t)<\root3\of t<f_1(z_0,t)<f_1(0,t)$ where $(f(z_0,t_0),t_0)$ is an algebraic solution critical point for equation (\ref{Le3}).
We will give an independent proof of these inequalities which can be useful for more general driving functions. \begin{lemma} For $t>0$ small enough and a singular point $(f(z_0,t_0),t_0)$ for equation (\ref{Le3}), $0<t_0<t$, the following inequalities $$f_2(0,t)<f_2(z_0,t)<\root3\of t<f_1(z_0,t)<f_1(0,t)$$ hold. \end{lemma} \proof To show that $f_1(z_0,t)<f_1(0,t)$, let us subtract the equations $$\frac{df_1(0,t)}{dt}=\frac{2}{f_1(0,t)-\root3\of t},\;\;\;f_1(0,0)=0,$$ $$\frac{df_1(z_0,t)}{dt}=\frac{2}{f_1(z_0,t)-\root3\of t}, \;\;\;f_1(z_0,t_0)=\root3\of{t_0},$$ and obtain $$\frac{d(f_1(0,t)-f_1(z_0,t))}{dt}=\frac{2(f_1(z_0,t)-f_1(0,t))} {(f_1(0,t)-\root3\of t)(f_1(z_0,t)-\root3\of t)},$$ which can be written in the form $$\frac{d\log(f_1(0,t)-f_1(z_0,t))}{dt}=\frac{-2} {(f_1(0,t)-\root3\of t)(f_1(z_0,t)-\root3\of t)}.$$ Suppose that $T>t_0$ is the smallest number for which $f_1(0,T)=f_1(z_0,T)$. This implies that \begin{equation} \int_{t_0}^T\frac{dt}{(f_1(0,t)-\root3\of t)(f_1(z_0,t)-\root3\of t)}= \infty. \label{Si12} \end{equation} To evaluate the integral in (\ref{Si12}) we should study the behavior of $f_1(z_0,t)-\root3\of t$ with the help of the differential equation \begin{equation} \frac{d(f_1(z_0,t)-\root3\of t)}{dt}= \frac{2}{f_1(z_0,t)-\root3\of t}-\frac{1}{3\root3\of{t^2}}=\frac{\root3\of t+6\root3\of{t^2}-f_1(z_0,t)}{3\root3\of{t^2} (f_1(z_0,t)-\root3\of t)}. \label{Si13} \end{equation} Calculate that $a_3^+=-72$ and write the asymptotic relation $$f_1(0,t)=\root3\of t+6\root3\of{t^2}-72t+o(t),\;\;t\to+0.$$ There exists a number $T'>0$ such that for $0<t<T'$, $\root3\of t+6\root3\of{t^2}>f_1(0,t)$. Consequently, the right-hand side in (\ref{Si13}) is positive for $0<t<T'$. Note that $T'$ does not depend on $t_0$. The condition ``$t>0$ small enough'' in Lemma 2 is understood from now on as $0<t<T'$. We see from (\ref{Si13}) that for such $t$, $f_1(z_0,t)-\root3\of t$ is increasing with $t$, $t_0<t<T<T'$.
Therefore, the integral in the left-hand side of (\ref{Si12}) is finite. The contradiction with equality (\ref{Si12}) denies the existence of $T$ with the prescribed properties, which proves the third and the fourth inequalities in Lemma 2. The remaining inequalities in Lemma 2 are proved similarly and even more easily. To show that $f_2(z_0,t)>f_2(0,t)$, let us subtract the equations $$\frac{df_2(0,t)}{dt}=\frac{2}{f_2(0,t)-\root3\of t},\;\;\;f_2(0,0)=0,$$ $$\frac{df_2(z_0,t)}{dt}=\frac{2}{f_2(z_0,t)-\root3\of t}, \;\;\;f_2(z_0,t_0)=\root3\of{t_0},$$ and obtain $$\frac{d(f_2(0,t)-f_2(z_0,t))}{dt}=\frac{2(f_2(z_0,t)-f_2(0,t))} {(f_2(0,t)-\root3\of t)(f_2(z_0,t)-\root3\of t)},$$ which can be written in the form $$\frac{d\log(f_2(z_0,t)-f_2(0,t))}{dt}=\frac{-2} {(f_2(0,t)-\root3\of t)(f_2(z_0,t)-\root3\of t)}.$$ Suppose that $T>t_0$ is the smallest number for which $f_2(z_0,T)=f_2(0,T)$. This implies that \begin{equation} \int_{t_0}^T\frac{dt}{(f_2(0,t)-\root3\of t)(f_2(z_0,t)-\root3\of t)}= \infty. \label{Si14} \end{equation} To evaluate the integral in (\ref{Si14}) we should study the behavior of $f_2(z_0,t)-\root3\of t$ with the help of the differential equation \begin{equation} \frac{d(f_2(z_0,t)-\root3\of t)}{dt}= \frac{2}{f_2(z_0,t)-\root3\of t}-\frac{1}{3\root3\of{t^2}}=\frac{\root3\of t+6\root3\of{t^2}-f_2(z_0,t)}{3\root3\of{t^2} (f_2(z_0,t)-\root3\of t)}. \label{Si15} \end{equation} Since $$f_2(0,t)=-3\root3\of{t^2}+o(\root3\of{t^2}),\;\;t\to+0,$$ there exists a number $T''>0$ such that for $0<t<T''$, $\root3\of t+6\root3\of{t^2}>f_2(0,t)$. Consequently, the right-hand side in (\ref{Si15}) is positive for $0<t<T''$. We see from (\ref{Si15}) that for such $t$, $f_2(z_0,t)-\root3\of t$ is decreasing with $t$, $t_0<t<T<T''$. Therefore, the integral in the left-hand side of (\ref{Si14}) is finite.
The contradiction with equality (\ref{Si14}) denies the existence of $T$ with the prescribed properties, which completes the proof. \endproof We now supplement the inequalities of Lemma 2 with the following statements demonstrating a monotonic disposition of the values of different singular solutions. \begin{lemma} For $t>0$ small enough and singular points $(f(z_1,t_1),t_1)$, $(f(z_0,t_0),t_0)$ for equation (\ref{Le3}), $0<t_1<t_0<t$, the following inequalities $$f_2(z_1,t)<f_2(z_0,t),\;\;\;f_1(z_0,t)<f_1(z_1,t)$$ hold. \end{lemma} \proof Similarly to Lemma 2, subtract the equations $$\frac{df_1(z_1,t)}{dt}=\frac{2}{f_1(z_1,t)-\root3\of t}, \;\;\;f_1(z_1,t_1)=\root3\of{t_1},$$ $$\frac{df_1(z_0,t)}{dt}=\frac{2}{f_1(z_0,t)-\root3\of t}, \;\;\;f_1(z_0,t_0)=\root3\of{t_0},$$ and obtain $$\frac{d(f_1(z_1,t)-f_1(z_0,t))}{dt}= \frac{2(f_1(z_0,t)-f_1(z_1,t))}{(f_1(z_1,t)-\root3\of t)(f_1(z_0,t)-\root3\of t)},$$ which can be written in the form $$\frac{d\log(f_1(z_1,t)-f_1(z_0,t))}{dt}=\frac{-2} {(f_1(z_1,t)-\root3\of t)(f_1(z_0,t)-\root3\of t)}.$$ Suppose that $T>t_0$ is the smallest number for which $f_1(z_1,T)=f_1(z_0,T)$. This implies that \begin{equation} \int_{t_0}^T\frac{dt}{(f_1(z_1,t)-\root3\of t)(f_1(z_0,t)-\root3\of t)}= \infty. \label{Si16} \end{equation} To evaluate the integral in (\ref{Si16}), appeal to (\ref{Si13}) and obtain that there exists a number $T'>0$ such that for $0<t<T'$, $f_1(z_0,t)-\root3\of t$ is increasing with $t$, $t_0<t<T<T'$. Therefore, the integral in the left-hand side of (\ref{Si16}) is finite. The contradiction with equality (\ref{Si16}) denies the existence of $T$ with the prescribed properties, which proves the second inequality of Lemma 3.
To prove the first inequality of Lemma 3, subtract the equations $$\frac{df_2(z_1,t)}{dt}=\frac{2}{f_2(z_1,t)-\root3\of t},\;\;\; f_2(z_1,t_1)=\root3\of{t_1},$$ $$\frac{df_2(z_0,t)}{dt}=\frac{2}{f_2(z_0,t)-\root3\of t}, \;\;\;f_2(z_0,t_0)=\root3\of{t_0},$$ and obtain after dividing by $f_2(z_1,t)-f_2(z_0,t)$ $$\frac{d\log(f_2(z_0,t)-f_2(z_1,t))}{dt}=\frac{-2} {(f_2(z_1,t)-\root3\of t)(f_2(z_0,t)-\root3\of t)}.$$ Suppose that $T>t_0$ is the smallest number for which $f_2(z_0,T)=f_2(z_1,T)$. This implies that \begin{equation} \int_{t_0}^T\frac{dt}{(f_2(z_1,t)-\root3\of t)(f_2(z_0,t)-\root3\of t)}=\infty. \label{Si17} \end{equation} To evaluate the integral in (\ref{Si17}), appeal to (\ref{Si15}) and obtain that $\root3\of t+6\root3\of{t^2}>\root3\of t>f_2(z_0,t)$. Consequently, the right-hand side in (\ref{Si15}) is positive and we see that $f_2(z_0,t)-\root3\of t$ is decreasing with $t$, $t_0<t<T$. Therefore, the integral in the left-hand side of (\ref{Si17}) is finite. The contradiction with equality (\ref{Si17}) denies the existence of $T$ with the prescribed properties, which completes the proof. \endproof {\it Proof of Theorem 1.} For $t_0>0$, there is a hull $K_{t_0}\subset\mathbb H$ such that $f(\cdot,t_0)$ maps $\mathbb H\setminus K_{t_0}$ onto $\mathbb H$. We refer to \cite{LMR} for definitions and more details. The hull $K_{t_0}$ is driven by $\root3\of t$. The function $f(\cdot,t_0)$ is extended continuously onto the set of prime ends on $\partial(\mathbb H\setminus K_{t_0})$ and maps this set onto $\mathbb R$. One of the prime ends is mapped onto $\root3\of{t_0}$. Let $z_0=z_0(t_0)$ represent this prime end. Lemmas 2 and 3 describe the structure of the pre-image of $\mathbb H$ under $f(\cdot,t)$. All the singular solutions $f_1(0,t)$, $f_2(0,t)$, $f_1(z_0,t)$, $f_2(z_0,t)$, $0<t_0<t<T'$, are real-valued and satisfy the inequalities of Lemmas 2 and 3.
So the segment $I=[f_2(0,t),f_1(0,t)]$ is the union of the segments $I_2=[f_2(0,t),\root3\of t]$ and $I_1=[\root3\of t,f_1(0,t)]$. The segment $I_2$ consists of the points $f_2(z(\tau),t)$, $0\leq\tau<t$, and the segment $I_1$ consists of the points $f_1(z(\tau),t)$, $0\leq\tau<t$. All these points belong to the boundary $\mathbb R=\partial\mathbb H$. This means that all the points $z(\tau)$, $0\leq\tau<t$, belong to the boundary $\partial(\mathbb H\setminus K_t)$ of $\mathbb H\setminus K_t$. Moreover, every point $z(\tau)$ except for the tip determines exactly two prime ends corresponding to $f_1(z(\tau),t)$ and $f_2(z(\tau),t)$. Evidently, $z(\tau)$ is continuous on $[0,t]$. This proves that $z(\tau):=\gamma(\tau)$ represents a curve $\gamma:=K_t$ with prime ends corresponding to points on different sides of $\gamma$. This proves that $f^{-1}(w,t)$ maps $\mathbb H$ onto the slit domain $\mathbb H\setminus\gamma(t)$ for $t>0$ small enough. It remains to show that $\gamma(t)$ is a $C^1$-curve. Fix $t_0>0$ from a neighborhood of $t=0$. Denote by $g(w,t)=f^{-1}(w,t)$ the inverse of $f(z,t)$, and set $h(w,t):=f(g(w,t_0),t)$, $t\geq t_0$. The arc $\gamma[t_0,t]:=K_t\setminus K_{t_0}$ is mapped by $f(z,t_0)$ onto a curve $\gamma_1(t)$ in $\mathbb H$ emanating from $\root3\of{t_0}\in\mathbb R$. So the function $h(w,t)$ is well defined on $\mathbb H\setminus\gamma_1(t)$, $t\geq t_0$. Expand $h(w,t)$ near infinity, $$h(w,t)=g(w,t_0)+\frac{2t}{g(w,t_0)}+O\left(\frac{1} {g^2(w,t_0)}\right)=w+\frac{2(t-t_0)}{w}+O\left(\frac{1}{w^2}\right).$$ Such an expansion satisfies (\ref{exp}) after changing variables $t\to t-t_0$. The function $h(w,t)$ satisfies the differential equation $$\frac{dh(w,t)}{dt}=\frac{2} {h(w,t)-\root3\of t},\;\;\;h(w,t_0)=w,\;\;\;w\in\mathbb H.$$ This equation becomes the L\"owner differential equation if $t_1:=t-t_0$, $h_1(w,t_1):=h(w,t_0+t_1)$, \begin{equation} \frac{dh_1(w,t_1)}{dt_1}=\frac{2}{h_1(w,t_1)-\root3\of{t_1+t_0}},\;\;\; h_1(w,0)=w,\;\;\;w\in\mathbb H.
\label{Cu1} \end{equation} The driving function $\lambda(t_1)=\root3\of{t_1+t_0}$ in (\ref{Cu1}) is analytic for $t_1\geq0$. It is known [1,~p.59] that under this condition $h_1(w,t_1)$ maps $\mathbb H\setminus\gamma_1$ onto $\mathbb H$ where $\gamma_1$ is a $C^1$-curve in $\mathbb H$ emanating from $\lambda(0)=\root3\of{t_0}$. The same holds for the function $h(w,t)$. Go back to $f(z,t)=h(f(z,t_0),t)$ and see that $f(z,t)$ maps $\mathbb H\setminus\gamma(t)$ onto $\mathbb H$, $\gamma(t)=\gamma[0,t_0]\cup\gamma[t_0,t]$, and $\gamma[t_0,t]$ is a $C^1$-curve. Letting $t_0$ tend to 0, we prove that $\gamma(t)$ is a $C^1$-curve, except possibly for the point $\gamma(0)=0$. This completes the proof. \section{Harmonic measures of the slit sides} The function $f(z,t)$ solving (\ref{Le3}) maps $\mathbb H\setminus\gamma(t)$ onto $\mathbb H$. The curve $\gamma(t)$ has two sides. Denote by $\gamma_1=\gamma_1(t)$ the side of $\gamma$ which is mapped by the extended function $f(z,t)$ onto $I_1=[\root3\of t,f_1(0,t)]$. Similarly, $\gamma_2=\gamma_2(t)$ is the side of $\gamma$ which is the pre-image of $I_2=[f_2(0,t),\root3\of t]$ under $f(z,t)$. Recall that the harmonic measures $\omega(f^{-1}(i,t);\gamma_k,\mathbb H\setminus\gamma(t),t)$ of $\gamma_k$ at $f^{-1}(i,t)$ with respect to $\mathbb H\setminus\gamma(t)$ are defined by the functions $\omega_k$ which are harmonic on $\mathbb H\setminus\gamma(t)$ and continuously extended to its closure except for the endpoints of $\gamma$, $\omega_k|_{\gamma_k(t)}=1$, $\omega_k|_{\mathbb R\cup(\gamma(t)\setminus\gamma_k(t))}=0$, $k=1,2$, see, e.g., [6,~Chap.3, \S3.6]. Denote $$m_k(t):=\omega(f^{-1}(i,t);\gamma_k,\mathbb H\setminus\gamma(t),t),\;\;\;k=1,2.$$ \begin{theorem} Let $f(z,t)$ be a solution to the L\"owner equation (\ref{Le3}). Then \begin{equation} \lim_{t\to+0}\frac{m_1(t)}{m_2^2(t)}=6\pi. \label{har} \end{equation} \end{theorem} \proof The harmonic measure is invariant under conformal transformations.
So $$\omega(f^{-1}(i,t);\gamma_k,\mathbb H\setminus\gamma(t),t)=\Omega(i;f(\gamma_k,t),\mathbb H,t)$$ are given by the functions $\Omega_k$ which are harmonic on $\mathbb H$ and continuously extended to $\mathbb R$ except for the endpoints of $f(\gamma_k,t)$, $\Omega_k|_{f(\gamma_k,t)}=1$, $\Omega_k|_{\mathbb R\setminus f(\gamma_k,t)}=0$, $k=1,2$. The solution of this problem is known, see, e.g., [5,~p.334]. Namely, $$m_k(t)=\frac{\alpha_k(t)}{\pi}$$ where $\alpha_k(t)$ is the angle under which the segment $I_k=I_k(t)$ is observed from the point $w=i$, $k=1,2$. It remains to find asymptotic expansions for $\alpha_k(t)$. Since $$f_1(0,t)=\root3\of t+6\root3\of{t^2}+O(t),\;\;\;f_2(0,t)=-3\root3\of{t^2}+O(t),\;\;\;t\to+0,$$ after elementary geometrical considerations we have $$\alpha_1(t)=\arctan f_1(0,t)-\arctan\root3\of t=6\root3\of{t^2}+O(t),\;\;\;t\to+0,$$ $$\alpha_2(t)=\arctan\root3\of t-\arctan f_2(0,t)=\root3\of t+3\root3\of{t^2}+O(t),\;\;\;t\to+0.$$ This implies that $$\frac{m_1(t)}{m_2^2(t)}=\pi\frac{6\root3\of{t^2}+O(t)}{(\root3\of t+3\root3\of{t^2}+O(t))^2}= 6\pi(1+O(\root3\of t)),\;\;\;t\to+0,$$ which leads to (\ref{har}) and completes the proof. \endproof \begin{remark} A relation similar to (\ref{har}) follows from \cite{ProkhVas} for the two sides of the circular slit $\gamma(t)$ in $\mathbb H$ such that $\gamma(t)$ is tangential to $\mathbb R$ at $z=0$. \end{remark} \section{Representation of holomorphic solutions} Holomorphic solutions to (\ref{Le3}) or, equivalently, to (\ref{Le2}) appear in a neighborhood of every non-singular point $(z_0,0)$. We will be interested in real solutions corresponding to $z_0\in\mathbb R$. Put $z_0=\epsilon>0$ and let \begin{equation} f(\epsilon,t)=\epsilon+\sum_{n=1}^{\infty}a_n(\epsilon)t^{n/3} \label{hol} \end{equation} be a solution of equation (\ref{Le3}) holomorphic with respect to $\tau=\root3\of t$.
Replace $\root3\of t$ by $\tau$ and substitute (\ref{hol}) in (\ref{Le2}) to get that \begin{equation} \sum_{n=1}^{\infty}na_n(\epsilon)\tau^{n-1}\left[\epsilon-\tau+\sum_{n=1}^{\infty} a_n(\epsilon)\tau^n\right]=6\tau^2. \label{coe} \end{equation} Equate coefficients at the same powers in both sides of (\ref{coe}) and obtain the equations \begin{equation} a_1(\epsilon)=0,\;\;\;a_2(\epsilon)=0,\;\;\;a_k(\epsilon)=\frac{6}{k\epsilon^{k-2}}, \;\;\;k=3,4,5, \label{de1} \end{equation} and \begin{equation} a_n(\epsilon)=\frac{1}{n\epsilon}\left[(n-1)a_{n-1}(\epsilon)-\sum_{k=3}^{n-3}(n-k)a_{n-k}(\epsilon)a_k(\epsilon)\right], \;\;\;n\geq6. \label{de2} \end{equation} The series in (\ref{hol}) converges for $|\tau|=|\root3\of t|<R(\epsilon)$. \begin{theorem} The series in (\ref{hol}) converges for \begin{equation} |t|<\epsilon^3+o(\epsilon^3),\;\;\;\epsilon\to+0. \label{rad} \end{equation} \end{theorem} \proof Estimate the convergence radius $R(\epsilon)$ following the Cauchy majorant method, see, e.g., [4,~Chap.1, \S\S2-3], [16,~Chap.3, \S1]. The Cauchy theorem states: if the right-hand side in (\ref{Le2}) is holomorphic on a product of the closed disks $|g-\epsilon|\leq\rho_1$ and $|\tau|\leq r_1$ and is bounded there by $M$, then the series $\sum_{n=1}^{\infty}a_n(\epsilon)\tau^n$ converges in the disk $$|\tau|<R(\epsilon)=r_1\left(1-\exp\left\{-\frac{\rho_1}{2Mr_1}\right\}\right).$$ In the case of equation (\ref{Le2}) we have $$\rho_1+r_1<\epsilon,\;\;\text{and}\;\;M=\frac{6r_1^2}{\epsilon-(\rho_1+r_1)}.$$ This implies that for $\rho_1+r_1=\epsilon-\delta,$ $\delta>0$, $$R(\epsilon)=r_1\left(1-\exp\left\{-\frac{\epsilon- \delta-r_1}{12r_1^3}\delta\right\}\right).$$ So $R(\epsilon)$ depends on $\delta$ and $r_1$. The maximum of $R$ with respect to $\delta$ is attained for $\delta=(\epsilon-r_1)/2$.
Hence, this maximum is equal to \begin{equation} R_1(\epsilon)=r_1\left(1-\exp\left\{-\frac{(\epsilon-r_1)^2}{48r_1^3}\right\}\right), \label{eR1} \end{equation} where $R_1(\epsilon)$ depends on $r_1$. Let us find the maximum of $R_1$ with respect to $r_1\in(0,\epsilon)$. Notice that $R_1$ vanishes for $r_1=0$ and $r_1=\epsilon$. Therefore the maximum of $R_1$ is attained at a certain root $r_1=r_1(\epsilon)\in(0,\epsilon)$ of the derivative of $R_1$ with respect to $r_1$. To simplify the calculations we put $r_1(\epsilon)=\epsilon c(\epsilon)$, $0<c(\epsilon)<1$. Now the derivative of $R_1$ vanishes for $c=c(\epsilon)$ satisfying \begin{equation} 1-\exp\left\{-\frac{(1-c)^2}{48\epsilon c^3}\right\}\left(1+\frac{(1-c)(3-c)}{48\epsilon c^3}\right)=0. \label{der} \end{equation} Choose a sequence $\{\epsilon_n\}_{n=1}^{\infty}$ of positive numbers, $\lim_{n\to\infty}\epsilon_n=0$, such that $c(\epsilon_n)$ converges to $c_0$ as $n\to\infty$. Suppose that $c_0<1$. Then $$\exp\left\{-\frac{(1-c(\epsilon_n))^2}{48\epsilon_nc^3(\epsilon_n)}\right\} \left(1+\frac{(1-c(\epsilon_n))(3-c(\epsilon_n))}{48\epsilon_nc^3(\epsilon_n)}\right)<1$$ for $n$ large enough. Therefore $c(\epsilon_n)$ is not a root of equation (\ref{der}) for $\epsilon=\epsilon_n$ and $n$ large enough. This contradiction shows that $c_0=1$ for every sequence $\{\epsilon_n>0\}_{n=1}^{\infty}$ tending to 0 with $\lim_{n\to\infty}c(\epsilon_n)=c_0$. So we have proved that $c(\epsilon)\to1$ as $\epsilon\to+0$. Consequently, the maximum of $R_1$ with respect to $r_1$ is attained for $r_1(\epsilon)=\epsilon c(\epsilon)=\epsilon(1+o(1))$ as $\epsilon\to+0$. Let $R_2=R_2(\epsilon)$ denote the maximum of $R_1$ with respect to $r_1$.
It follows from (\ref{eR1}) that \begin{equation} R_2(\epsilon)=r_1(\epsilon) \left(1-\exp\left\{-\frac{(\epsilon-r_1(\epsilon))^2}{48r_1^3(\epsilon)}\right\}\right)= \epsilon c(\epsilon)\left(1-\exp\left\{-\frac{(1-c(\epsilon))^2}{48\epsilon c^3(\epsilon)}\right\}\right). \label{max} \end{equation} Let us examine how fast $c(\epsilon)$ tends to $1$ as $\epsilon\to+0$. Choose a sequence $\{\epsilon_n>0\}_{n=1}^{\infty}$, $\lim_{n\to\infty}\epsilon_n=0$, such that the sequence $(1-c(\epsilon_n))^2/\epsilon_n$ converges to a non-negative number or to $\infty$. Denote $$l:=\lim_{n\to\infty}\frac{(1-c(\epsilon_n))^2}{\epsilon_n},\;\;\;0\leq l\leq\infty.$$ If $0<l<\infty$, then $(1-c(\epsilon_n))/\epsilon_n$ tends to $\infty$, and equation (\ref{der}) with $\epsilon=\epsilon_n$ has no roots for $n$ large enough. If $l=0$, then, according to (\ref{der}), $\lim_{n\to\infty}(1-c(\epsilon_n))/\epsilon_n=0$, and $$\exp\left\{-\frac{(1-c(\epsilon_n))^2}{48\epsilon_n c^3(\epsilon_n)}\right\}\left(1+\frac{(1-c(\epsilon_n))(3-c(\epsilon_n))}{48\epsilon_n c^3(\epsilon_n)}\right)=$$ $$\left(1-\frac{(1-c(\epsilon_n))^2}{48\epsilon_n c^3(\epsilon_n)}+o\left(\frac{(1-c(\epsilon_n))^2}{\epsilon_n}\right)\right) \left(1+\frac{(1-c(\epsilon_n))(3-c(\epsilon_n))}{48\epsilon_n c^3(\epsilon_n)}\right)=$$ $$1+\frac{1-c(\epsilon_n)}{24\epsilon_n}+ o\left(\frac{1-c(\epsilon_n)}{\epsilon_n}\right), \;\;\;n\to\infty.$$ This implies again that equation (\ref{der}) with $\epsilon=\epsilon_n$ has no roots for $n$ large enough. Thus the only possible case is $l=\infty$ for all sequences $\{\epsilon_n>0\}_{n=1}^{\infty}$ converging to $0$. It follows from (\ref{max}) that \begin{equation} R_2(\epsilon)=\max_{0<r_1<\epsilon}R_1(\epsilon)=\epsilon+o(\epsilon), \;\;\;\epsilon\to0.
\label{R2} \end{equation} In other words, the series in (\ref{hol}) converges for $|t|<(\epsilon+o(\epsilon))^3$, $\epsilon\to0$, which implies the statement of Theorem 3 and completes the proof. \endproof \begin{remark} Evidently, a similar conclusion with the same formulas (\ref{de1}) and (\ref{de2}) is true for $\epsilon<0$. \end{remark} \end{document}
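As a mechanical check of the formulas (de1) and the recurrence (de2) above, one can generate the coefficients $a_n(\epsilon)$ for a fixed rational $\epsilon$ and verify that every coefficient of the expanded left-hand side of (coe) matches the right-hand side. A minimal sketch in Python; the function names are ours, and exact rational arithmetic is used to avoid round-off:

```python
from fractions import Fraction

def coeffs(eps, N):
    """Coefficients a_n(eps), n <= N, from (de1)-(de2):
    a_1 = a_2 = 0, a_k = 6/(k*eps^(k-2)) for k = 3, 4, 5, and, for n >= 6,
    a_n = [(n-1)*a_{n-1} - sum_{k=3}^{n-3} (n-k)*a_{n-k}*a_k] / (n*eps)."""
    a = {0: Fraction(0), 1: Fraction(0), 2: Fraction(0)}
    for k in (3, 4, 5):
        a[k] = Fraction(6, k) / eps ** (k - 2)
    for n in range(6, N + 1):
        conv = sum((n - k) * a[n - k] * a[k] for k in range(3, n - 2))
        a[n] = ((n - 1) * a[n - 1] - conv) / (n * eps)
    return a

def residuals(eps, N):
    """Coefficients of tau^(n-1), n <= N, in
    f'(tau)*(eps - tau + f(tau)) - 6*tau^2, where f(tau) = sum a_n tau^n;
    the recurrence should make every one of them zero."""
    a = coeffs(eps, N)
    res = []
    for n in range(1, N + 1):
        c = eps * n * a[n] - (n - 1) * a[n - 1]
        c += sum(k * a[k] * a[n - k] for k in range(1, n))
        if n == 3:
            c -= 6  # the 6*tau^2 term on the right-hand side of (coe)
        res.append(c)
    return res
```

With, say, $\epsilon=1/2$ and $N=12$, all residuals vanish, confirming that (de1)-(de2) solve (coe) order by order.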
\begin{document} \title{ {\Large{\bf The Metric Dimension of Lexicographic Product of Graphs}}} {\small \author{ {\sc Mohsen Jannesari} and {\sc Behnaz Omoomi }\\ [1mm] {\small \it Department of Mathematical Sciences}\\ {\small \it Isfahan University of Technology} \\ {\small \it 84156-83111, Isfahan, Iran}} \maketitle \baselineskip15truept \begin{abstract} For an ordered set $W=\{w_1,w_2,\ldots,w_k\}$ of vertices and a vertex $v$ in a connected graph $G$, the ordered $k$-vector $r(v|W):=(d(v,w_1),d(v,w_2),\ldots,d(v,w_k))$ is called the (metric) representation of $v$ with respect to $W$, where $d(x,y)$ is the distance between the vertices $x$ and $y$. The set $W$ is called a resolving set for $G$ if distinct vertices of $G$ have distinct representations with respect to $W$. The minimum cardinality of a resolving set for $G$ is its metric dimension. In this paper, we study the metric dimension of the lexicographic product of graphs $G$ and $H$, $G[H]$. First, we introduce a new parameter, called the adjacency metric dimension of a graph. Then, we obtain the metric dimension of $G[H]$ in terms of the order of $G$ and the adjacency metric dimension of $H$. \end{abstract} {\bf Keywords:} Lexicographic product; Resolving set; Metric dimension; Basis; Adjacency metric dimension. \section{Introduction} In this section, we present some definitions and known results which are necessary to prove our main theorems. Throughout this paper, $G=(V,E)$ is a finite simple graph. We use $\overline G$ for the complement of the graph $G$. The distance between two vertices $u$ and $v$, denoted by $d_G(u,v)$, is the length of a shortest path between $u$ and $v$ in $G$. Also, $N_G(v)$ is the set of all neighbors of the vertex $v$ in $G$. We write simply $d(u,v)$ and $N(v)$ when no confusion can arise. The notations $u\sim v$ and $u\nsim v$ denote the adjacency and non-adjacency relations between $u$ and $v$, respectively.
The symbols $(v_1,v_2,\ldots, v_n)$ and $(v_1,v_2,\ldots,v_n,v_1)$ represent a path of order $n$, $P_n$, and a cycle of order $n$, $C_n$, respectively. For an ordered set $W=\{w_1,w_2,\ldots,w_k\}\subseteq V(G)$ and a vertex $v$ of $G$, the $k$-vector $$r(v|W):=(d(v,w_1),d(v,w_2),\ldots,d(v,w_k))$$ is called the ({\it metric}) {\it representation} of $v$ with respect to $W$. The set $W$ is called a {\it resolving set} for $G$ if distinct vertices have distinct representations. In this case, we say that the set $W$ resolves $G$. Elements of a resolving set are called {\it landmarks}. A resolving set $W$ for $G$ with minimum cardinality is called a {\it basis} of $G$, and its cardinality is the {\it metric dimension} of $G$, denoted by $\beta(G)$. The concept of (metric) representation was introduced by Slater~\cite{Slater1975} (see~\cite{Harary}). For more results related to these concepts see~\cite{trees,bounds,sur1,Discrepancies}. \par We say that an ordered set $W$ {\it resolves} a set $T$ of vertices in $G$ if the representations of the vertices in $T$ with respect to $W$ are distinct. When $W=\{x\}$, we say that the vertex $x$ resolves $T$. To decide whether a given set $W$ is a resolving set for $G$, it is sufficient to look at the representations of the vertices in $V(G)\backslash W$, because $w\in W$ is the unique vertex of $G$ for which $d(w,w)=0$. \par Two distinct vertices $u,v$ are called {\it twins} if $N(v)\backslash\{u\}=N(u)\backslash\{v\}$. We write $u\equiv v$ if and only if $u=v$ or $u,v$ are twins. In~\cite{extermal}, it is proved that ``$\equiv$" is an equivalence relation. The equivalence class of a vertex $v$ is denoted by $v^*$. Hernando et al.~\cite{extermal} proved that $v^*$ is a clique or an independent set in $G$. As in~\cite{extermal}, we say that $v^*$ is of type (1), (K), or (N) if $v^*$ is a class of size $1$, a clique of size at least $2$, or an independent set of size at least $2$, respectively.
We denote the number of equivalence classes of $G$ with respect to ``$\equiv$" by $\iota(G)$. By $\iota_{_K}(G)$ and $\iota_{_N}(G)$ we mean the number of classes of type (K) and of type (N) in $G$, respectively. We also use $a(G)$ and $b(G)$ for the number of all vertices of $G$ which have at least one adjacent twin and at least one non-adjacent twin in $G$, respectively. In other words, $a(G)$ is the number of all vertices in the classes of type (K) and $b(G)$ is the number of all vertices in the classes of type (N). Clearly, $\iota(G)=n(G)-a(G)-b(G)+\iota_{_N}(G)+\iota_{_K}(G)$. \begin{obs}~{\rm\cite{extermal}}\label{twins} Suppose that $u,v$ are twins in a graph $G$ and $W$ resolves $G$. Then $u$ or $v$ is in $W$. Moreover, if $u\in W$ and $v\notin W$, then $(W\setminus\{u\})\cup \{v\}$ also resolves $G$. \end{obs} \begin{lem}~\rm\cite{Ollerman}\label{B=1,B=n-1} Let $G$ be a connected graph of order $n$. Then,\begin{description}\item (i) $\beta(G)=1$ if and only if $G=P_n$,\item (ii) $\beta(G)=n-1$ if and only if $G=K_n$. \end{description} \end{lem} \begin{lem}~\rm\cite{K dimensional,idea}\label{B(P_n) B(C_N)} \begin{description} \item (i) If $n\notin\{3,6\}$, then $\beta(C_n\vee K_1)=\lfloor{{2n+2}\over 5}\rfloor$, \item (ii) If $n\notin\{1,2,3,6\}$, then $\beta(P_n\vee K_1)=\lfloor{{2n+2}\over 5}\rfloor$. \end{description} \end{lem} The metric dimension of the Cartesian product of graphs was studied by C\'aceres et al. in~\cite{cartesian product}. They obtained the metric dimension of the Cartesian product $G\square H$ for $G,H\in \{P_n,C_n,K_n\}$. \par The {\it lexicographic product} of graphs $G$ and $H$, denoted by $G[H]$, is the graph with vertex set \linebreak $V(G)\times V(H):=\{(v,u)~|~v\in V(G),u\in V(H)\}$, where two vertices $(v,u)$ and $(v',u')$ are adjacent whenever $v\sim v'$, or $v=v'$ and $u\sim u'$. When the order of $G$ is at least $2$, it is easy to see that $G[H]$ is a connected graph if and only if $G$ is a connected graph.
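The adjacency rule defining $G[H]$ translates directly into code. A small sketch in Python, with graphs given as dictionaries mapping each vertex to its neighbor set (the helper names are ours, for illustration only):

```python
from itertools import product

def lexicographic_product(G, H):
    """Build G[H]: vertices are pairs (v, u), and (v, u) ~ (v', u')
    iff v ~ v' in G, or v = v' and u ~ u' in H."""
    V = list(product(G, H))
    return {(v, u): {(vp, up) for (vp, up) in V
                     if vp in G[v] or (v == vp and up in H[u])}
            for (v, u) in V}

def path(n):
    """P_n on vertices 0, ..., n-1, as a neighbor-set dictionary."""
    return {i: {j for j in (i - 1, i + 1) if 0 <= j < n} for i in range(n)}
```

For instance, `lexicographic_product(path(2), path(2))` is $P_2[P_2]=K_4$, and taking $H$ edgeless illustrates that $G[H]$ is connected exactly when $G$ is.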
This paper investigates the metric dimension of the lexicographic product of graphs. The main goal of Section~\ref{Adjacency Resolving Sets} is to introduce a new parameter, which we call the adjacency metric dimension. In Section~\ref{Lexicographic product}, we prove some relations that determine the metric dimension of the lexicographic product $G[H]$ in terms of the order of $G$ and the adjacency metric dimension of $H$. As a corollary of our main theorems, we obtain the exact value of the metric dimension of $G[H]$, where $G=C_n(n\geq 5)$ or $G=P_n (n\geq4)$, and $H\in\{P_m,C_m,\overline P_m,\overline C_m,K_{m_1,\ldots,m_t},\overline K_{m_1,\ldots,m_t}\}$. \section{Adjacency Resolving Sets}\label{Adjacency Resolving Sets} S. Khuller et al.~\cite{landmarks} considered an application of the metric dimension of a connected graph to robot navigation. In this setting, a robot moves from node to node of a graph space. If the robot knows its distances to a sufficiently large set of landmarks, its position on the graph is uniquely determined. This suggests the problem of finding the fewest landmarks needed, and where they should be located, so that the distances to the landmarks uniquely determine the robot's position on the graph. The solution of this problem is given by the metric dimension and a basis of the graph. \par Now suppose that there is a large number of landmarks, but computing distances is costly for the robot. In this case, the robot can determine its position on the graph only by knowing which landmarks are adjacent to it. Here, the problem of finding the fewest landmarks needed, and where they should be located, so that the adjacency and non-adjacency to the landmarks uniquely determine the robot's position on the graph, is a different problem. This problem is one of the motivations for introducing {\it adjacency resolving sets} in graphs.
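On small graphs, this minimum landmark set (formalized in Definition~\ref{Adjacency metric dimension} below) can be found by exhaustive search. A brute-force sketch in Python; the helper names are ours:

```python
from itertools import combinations

def adjacency_vector(G, v, W):
    """0 if v is the landmark itself, 1 if adjacent to it, 2 otherwise."""
    return tuple(0 if v == w else (1 if w in G[v] else 2) for w in W)

def min_adjacency_resolving_set(G):
    """Smallest ordered set W whose adjacency vectors separate all
    vertices of G; G maps each vertex to its neighbor set."""
    V = list(G)
    for k in range(1, len(V)):
        for W in combinations(V, k):
            if len({adjacency_vector(G, v, W) for v in V}) == len(V):
                return W
    return tuple(V[:-1])  # any n-1 vertices work for the remaining cases
```

Running this on $C_6$ returns a set of size $2$, matching the value $\beta_2(C_6)=2$ quoted after the definition below.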
\begin{deff}\label{Adjacency metric dimension} Let $G$ be a graph and $W=\{w_1,w_2,\ldots,w_k\}$ be an ordered subset of $V(G)$. For each vertex $v\in V(G)$ the adjacency representation of $v$ with respect to $W$ is the $k$-vector $$r_2(v|W):=(a_G(v,w_1),a_G(v,w_2),\ldots,a_G(v,w_k)),$$ where $$a_G(v,w_i)=\left\{ \begin{array}{ll} 0 & {\rm if}~v=w_i, \\ 1 & {\rm if}~v\sim w_i,\\ 2 & {\rm if}~v\nsim w_i. \end{array}\right.$$ If distinct vertices of $G$ have distinct adjacency representations, $W$ is called an adjacency resolving set for $G$. The minimum cardinality of an adjacency resolving set is called the adjacency metric dimension of $G$, denoted by $\beta_2(G)$. An adjacency resolving set of cardinality $\beta_2(G)$ is called an adjacency basis of $G$. \end{deff} \par By the definition, if $G$ is a connected graph with diameter $2$, then $\beta(G)=\beta_2(G)$. The converse is false; it can be seen that $\beta_2(C_6)=2=\beta(C_6)$, while $diam(C_6)=3$. \par In the following, we obtain some useful results on the adjacency metric dimension of graphs. \begin{pro}\label{B(H)<B2(H)} For every connected graph $G$, $\beta(G)\leq \beta_2(G)$. \end{pro} \begin{proof}{ Let $W$ be an adjacency basis of $G$. Thus, for each pair of vertices $u,v\in V(G)$ there exists a vertex $w\in W$ such that $a_G(u,w)\neq a_G(v,w)$. Therefore, $d_G(u,w)\neq d_G(v,w)$, and hence $W$ is a resolving set for $G$. }\end{proof} \begin{pro}\label{B2(H)=B2(overlineH)} For every graph $G$, $\beta_2(G)=\beta_2(\overline{G})$. \end{pro} \begin{proof}{Let $W$ be an adjacency basis of $G$. For each pair of vertices $u,v\in V(G)$, there exists a vertex $w\in W$ such that $a_G(u,w)\neq a_G(v,w)$. Without loss of generality, assume that $a_G(u,w)< a_G(v,w)$. Thus, if $a_G(u,w)=0$, then $a_{\overline{G}}(u,w)=0$ and $a_{\overline{G}}(v,w)>0$. Also, if $a_G(u,w)=1$, then $a_G(v,w)=2$ and hence, $a_{\overline{G}}(u,w)=2$ and $a_{\overline{G}}(v,w)=1$.
Therefore, $W$ is an adjacency resolving set for $\overline{G}$ and $\beta_2(\overline{G})\leq \beta_2(G)$. Since $\overline{\overline{G}}=G$, we conclude that $\beta_2(G)\leq \beta_2(\overline{G})$ and consequently, $\beta_2(G)= \beta_2(\overline{G})$. }\end{proof} Let $G$ be a graph of order $n$. It is easy to see that $1\leq \beta_2(G)\leq n-1$. In the following proposition, we characterize all graphs $G$ with $\beta_2(G)=1$ and all graphs $G$ of order $n$ with $\beta_2(G)=n-1$. \begin{pro}\label{characterization 1, m-1} If $G$ is a graph of order $n$, then \begin{description} \item (i) $\beta_2(G)=1$ if and only if $G\in\{P_1,P_2,P_3,\overline{P}_2,\overline{P}_3\}$. \item (ii) $\beta_2(G)=n-1$ if and only if $G=K_n$ or $G=\overline{K}_n$. \end{description} \end{pro} \begin{proof}{(i) It is easy to see that for $G\in\{P_1,P_2,P_3,\overline{P}_2,\overline{P}_3\}$, $\beta_2(G)=1$. Conversely, let $G$ be a graph with $\beta_2(G)=1$. If $G$ is a connected graph, then by Proposition~\ref{B(H)<B2(H)}, $\beta(G)\leq \beta_2(G)=1$. Thus, by Lemma~\ref{B=1,B=n-1}, $G=P_n$. If $n\geq 4$, then $\beta_2(P_n)\geq 2$. Hence, $n\leq3$. If $G$ is a disconnected graph and $\beta_2(G)=1$, then $\overline{G}$ is a connected graph and by Proposition~\ref{B2(H)=B2(overlineH)}, $\beta_2(\overline{G})=1$. Thus, $\overline{G}=P_n$, $n\in\{2,3\}.$ Therefore, $G=\overline{P}_2$ or $G=\overline{P}_3$. \noindent (ii) By Proposition~\ref{B(H)<B2(H)}, we have $n-1=\beta(K_n)\leq \beta_2(K_n)$. On the other hand, $\beta_2(G)\leq n-1$. Therefore, $\beta_2(K_n)=n-1$ and by Proposition~\ref{B2(H)=B2(overlineH)}, $\beta_2(\overline{K}_n)=\beta_2(K_n)=n-1$. Conversely, let $G$ be a connected graph with $\beta_2(G)=n-1$. Suppose on the contrary that $G\neq K_n$. Then $P_3$ is an induced subgraph of $G$; let $P_3=(x_1,x_2,x_3)$. Therefore, $a_G(x_2,x_1)=1$ and $a_G(x_3,x_1)=2$. Consequently, $V(G)\backslash\{x_2,x_3\}$ is an adjacency resolving set for $G$ of cardinality $n-2$.
That is, $\beta_2(G)\leq n-2$, which is a contradiction. Hence, $G=K_n$. If $G$ is a disconnected graph with $\beta_2(G)=n-1$, then $\overline{G}$ is a connected graph and by Proposition~\ref{B2(H)=B2(overlineH)}, $\beta_2(\overline{G})=n-1$. Thus, $\overline{G}=K_n$. }\end{proof} \begin{lemma}\label{delta=n-1} If $u$ is a vertex of degree $n(G)-1$ in a connected graph $G$, then $G$ has a basis which does not include $u$. \end{lemma} \begin{proof}{Let $B$ be a basis of $G$ which contains $u$. Thus, $r(u|B\backslash\{u\})=(1,\ldots,1)$. Since $B$ is a basis of $G$, the set $B\backslash\{u\}$ does not resolve $G$; hence there exist two vertices $v,w\in V(G)\backslash(B\backslash\{u\})$ such that $r(v|B\backslash\{u\})=r(w|B\backslash\{u\})$, and, since $B$ resolves $G$, $d_G(u,v)\neq d_G(u,w)$. If $u\notin\{v,w\}$, then $d(u,v)=d(u,w)=1$, which is a contradiction. Hence, $u\in\{v,w\}$, say $u=v$. Therefore, $r(w|B\backslash\{u\})=r(u|B\backslash\{u\})=(1,1,\ldots,1)$ and for each $x,y\in V(G)\backslash\{u,w\}$, $r(x|B\backslash\{u\})\neq r(y|B\backslash\{u\})$. Note that $r(w|B)=(1,1,\ldots,1)$, because $u\sim w$. Since $B$ is a basis of $G$, $w$ is the unique vertex of $G$ whose representation with respect to $B$ is entirely $1$. This implies that $w$ is the unique vertex of $G\backslash B$ with $r(w|B\backslash\{u\})=(1,1,\ldots,1)$. Therefore, the set $(B\backslash\{u\})\cup\{w\}$ is a basis of $G$ which does not contain $u$. }\end{proof} \begin{pro}\label{H join K_1} For every graph $G$, $\beta(G\vee K_1)-1\leq \beta_2(G)\leq \beta(G\vee K_1)$. Moreover, $\beta_2(G)=\beta(G\vee K_1)$ if and only if $G$ has an adjacency basis for which no vertex has adjacency representation entirely $1$ with respect to it. \end{pro} \begin{proof}{ Let $V(G)=\{v_1,v_2,\ldots,v_n\}$ and $V(K_1)=\{u\}$. Note that $d_{G\vee K_1}(v_i,v_j)=a_G(v_i,v_j)$, $1\leq i,j \leq n$. By Lemma~\ref{delta=n-1}, $G\vee K_1$ has a basis $B=\{b_1,b_2,\ldots,b_k\}$ such that $u\notin B$.
Therefore, $$r(v_i|B)=(d_{G\vee K_1}(v_i,b_1),d_{G\vee K_1}(v_i,b_2),\ldots,d_{G\vee K_1}(v_i,b_k))=r_2(v_i|B)$$ for each $v_i$, $1\leq i\leq n$. Thus, $B$ is an adjacency resolving set for $G$ and $\beta_2(G)\leq \beta(G\vee K_1)$. \par Now let $W=\{w_1,w_2,\ldots,w_t\}$ be an adjacency basis of $G$. Since $d_{G\vee K_1}(v_i,w_j)=a_G(v_i,w_j)$, $1\leq i \leq n$, $1\leq j\leq t$, we have $r(v_i|W)=r_2(v_i|W)$, $1\leq i\leq n$. Hence, $W$ resolves $V(G\vee K_1)\backslash \{u\}$ and $\beta(G\vee K_1)-1\leq \beta_2(G)$. On the other hand, $r(u|W)$ is entirely $1$. Therefore, $W$ is a resolving set for $G\vee K_1$ if and only if $r_2(v_i|W)$ is not entirely $1$ for every $v_i$, $1\leq i\leq n$. Since $\beta_2(G)\leq \beta(G\vee K_1)$, we have $\beta_2(G)=\beta(G\vee K_1)$ if and only if $r_2(v_i|W)$ is not entirely $1$ for every $v_i$, $1\leq i\leq n$. }\end{proof} \begin{pro}\label{B_2(P_m),B_2(C_m)} If $n\geq 4$, then $\beta_2(C_n)=\beta_2(P_n)=\lfloor{{2n+2}\over 5}\rfloor$. \end{pro} \begin{proof}{ If $n\leq 8$, then by a simple computation we can see that $\beta_2(C_n)=\beta_2(P_n)=\lfloor{{2n+2}\over 5}\rfloor$. Now, let $G\in\{P_n,C_n\}$ and $n\geq 9$. By Lemma~\ref{B(P_n) B(C_N)}, $\beta(G\vee K_1)=\lfloor{{2n+2}\over 5}\rfloor\geq 4$. Hence, by Proposition~\ref{H join K_1}, we have $\beta_2(G)\geq 3$. If $W$ is an adjacency basis of $G$, then for each vertex $v\in V(G)$, $r_2(v|W)$ is not entirely $1$, because $v$ has at most two neighbors. Therefore, by Proposition~\ref{H join K_1}, $\beta_2(G)=\beta(G\vee K_1)=\lfloor{{2n+2}\over 5}\rfloor$. }\end{proof} \begin{pro}\label{B_2 multipartite} If $K_{m_1,m_2,\ldots,m_t}$ is the complete $t$-partite graph, then $$\beta_2(K_{m_1,m_2,\ldots,m_t})=\beta(K_{m_1,m_2,\ldots,m_t})=\left\{ \begin{array}{ll} m-r-1 & ~{\rm if}~r\neq t, \\ m-r & ~{\rm if}~r=t, \end{array}\right.$$ where $m_1, m_2,\ldots,m_r$ are at least $2$, $m_{r+1}=\cdots=m_t=1$, and $\sum_{i=1}^tm_i=m$.
\end{pro} \begin{proof}{ Since $diam(K_{m_1,m_2,\ldots,m_t})=2$, we have $\beta_2(K_{m_1,m_2,\ldots,m_t})= \beta(K_{m_1,m_2,\ldots,m_t})$. Let $M_i$ be the partite set of size $m_i$, $1\leq i\leq t$. For each $i,~1\leq i\leq r$, all vertices of $M_i$ are non-adjacent twins. Also, all vertices of $\cup_{i=r+1}^tM_i$ are adjacent twins. Let $x_i$ be a fixed vertex in $M_i$, $1\leq i\leq r$. If $r=t$, then by Observation~\ref{twins}, $\beta(K_{m_1,m_2,\ldots,m_t})\geq\sum_{i=1}^tm_i-r$. Also, the set $\cup_{i=1}^tM_i\backslash\{x_1,x_2,\ldots,x_r\}$ is a resolving set for $K_{m_1,m_2,\ldots,m_t}$ of cardinality $\sum_{i=1}^tm_i-r$. Thus, $\beta(K_{m_1,m_2,\ldots,m_t})=\sum_{i=1}^tm_i-r=m-r$. If $r\neq t$, then $\cup_{i=r+1}^t M_i\neq\emptyset$. Let $x_{r+1}\in \cup_{i=r+1}^t M_i$. Observation~\ref{twins} implies that $\beta(K_{m_1,m_2,\ldots,m_t})\geq\sum_{i=1}^tm_i-r-1$. On the other hand, the set $\cup_{i=1}^tM_i\backslash\{x_1,x_2,\ldots,x_{r+1}\}$ is a resolving set for $K_{m_1,m_2,\ldots,m_t}$ of cardinality $\sum_{i=1}^tm_i-r-1=m-r-1$. } \end{proof} \section{Lexicographic Product of Graphs}\label{Lexicographic product} Throughout this section, $G$ is a connected graph of order $n$, $V(G)=\{v_1,v_2,\ldots,v_n\}$, and $H$ is a graph of order $m$ with $V(H)=\{u_1,u_2,\ldots,u_m\}$. Therefore, $G[H]$ is a connected graph. For convenience, we denote the vertex $(v_i,u_j)$ of $G[H]$ by $v_{ij}$. Note that, for each pair of vertices $v_{ij},v_{rs}\in V(G[H])$, $$d_{G[H]}(v_{ij},v_{rs})=\left\{ \begin{array}{ll} d_G(v_i,v_r) & ~{\rm if}~v_i\neq v_r, \\ 1 & ~{\rm if}~v_i=v_r~{\rm and}~u_j\sim u_s,\\ 2 & ~{\rm if}~v_i=v_r~{\rm and}~u_j\nsim u_s. \end{array}\right.$$ In other words, $$d_{G[H]}(v_{ij},v_{rs})=\left\{ \begin{array}{ll} d_G(v_i,v_r) & ~{\rm if}~v_i\neq v_r, \\ a_H(u_j,u_s)& ~{\rm otherwise.} \end{array}\right.$$ Let $S$ be a subset of $V(G[H])$. The {\it projection} of $S$ onto $H$ is the set $\{u_j\in V(H)\ |\ v_{ij}\in S\}$.
Also, the $i$th {\it row} of $G[H]$, denoted by $H_i$, is the set $\{v_{ij}\in V(G[H])\ |\ 1\leq j\leq m\}.$ \begin{lemma}\label{S i Resolves H_i} If $W\subseteq V(G[H])$ is a resolving set for $G[H]$, then $W\cap H_i$ resolves $H_i$ for each $i,~1\leq i\leq n$. Moreover, the projection of $W\cap H_i$ onto $H$ is an adjacency resolving set for $H$, for each $i,~1\leq i\leq n$. \end{lemma} \begin{proof}{ Since $W$ resolves $G[H]$, for each pair of vertices $v_{ij},v_{iq}\in H_i$ there exists a vertex $v_{rt}\in W$ such that $d_{G[H]}(v_{rt},v_{ij})\neq d_{G[H]}(v_{rt},v_{iq})$. If $r\neq i$, then $d_{G[H]}(v_{rt},v_{ij})=d_G(v_r,v_i)= d_{G[H]}(v_{rt},v_{iq})$, which is a contradiction. Therefore, $i=r$ and $W\cap H_i$ resolves $H_i$. \par Now, let $u_j,u_q\in V(H)$. Since $W\cap H_i$ resolves $H_i$, there exists a vertex $v_{it}\in W\cap H_i$ such that $d_{G[H]}(v_{it},v_{ij})\neq d_{G[H]}(v_{it},v_{iq})$. Hence, $a_H(u_t,u_j)=d_{G[H]}(v_{it},v_{ij})\neq d_{G[H]}(v_{it},v_{iq})=a_H(u_t,u_q)$. Consequently, the projection of $W\cap H_i$ onto $H$ is an adjacency resolving set for $H$. } \end{proof} By Lemma~\ref{S i Resolves H_i}, every basis of $G[H]$ contains at least $\beta_2(H)$ vertices from each copy of $H$ in $G[H]$. Thus, the following lower bound for $\beta(G[H])$ is obtained. \begin{equation}\label{l bound} \beta(G[H])\geq n\beta_2(H). \end{equation} \begin{thm}\label{B1 B2} Let $G$ be a connected graph of order $n$ and $H$ be an arbitrary graph. If there exist two adjacency bases $W_1$ and $W_2$ of $H$ such that there is no vertex with adjacency representation entirely~$1$ with respect to $W_1$ and no vertex with adjacency representation entirely $2$ with respect to $W_2$, then $\beta(G[H])=\beta(G[\overline H])=n\beta_2(H)$. \end{thm} \begin{proof}{ By Inequality~\ref{l bound}, we have $\beta(G[H])\geq n\beta_2(H)$. To prove the equality, it is enough to provide a resolving set for $G[H]$ of size $n\beta_2(H)$.
To this end, let $$S=\{v_{ij}\in V(G[H])\ |\ v_i\in K(G),~u_j\in W_1\}\cup \{v_{ij}\in V(G[H])\ |\ v_i\notin K(G),~u_j\in W_2\},$$ where $K(G)$ is the set of all vertices of $G$ in equivalence classes of type (K); in other words, $K(G)$ is the set of all vertices of $G$ which have adjacent twins. We show that $S$ is a resolving set for $G[H]$. Let $v_{rt},v_{pq}\in V(G[H])\backslash S$ be two distinct vertices. The following cases can occur. \\ 1. $r=p$. Note that $v_{rt}\neq v_{pq}$ implies that $t\neq q$. Since $W_1$ and $W_2$ are adjacency resolving sets, there exist vertices $u_j\in W_1$ and $u_l\in W_2$ such that $a_H(u_t,u_j)\neq a_H(u_q,u_j)$ and $a_H(u_t,u_l)\neq a_H(u_q,u_l)$. If $v_r\in K(G)$, then $v_{rj}\in S$ and $d_{G[H]}(v_{rt},v_{rj})=a_H(u_t,u_j)\neq a_H(u_q,u_j)=d_{G[H]}(v_{pq},v_{rj})$. Similarly, if $v_r\notin K(G)$, then $v_{rl}\in S$ and $d_{G[H]}(v_{rt},v_{rl})\neq d_{G[H]}(v_{pq},v_{rl})$. \\ 2. $r\neq p$ and $v_r,v_p\in K(G)$. If $v_r$ and $v_p$ are not twins, then there exists a vertex $v_i\in V(G)\backslash\{v_r,v_p\}$ which is adjacent to exactly one of the vertices $v_r$ and $v_p$. Hence, for each $u_j\in W_1$, we have $v_{ij}\in S$ and $d_{G[H]}(v_{rt},v_{ij})=d_G(v_r,v_i)\neq d_G(v_p,v_i)= d_{G[H]}(v_{pq},v_{ij})$. If $v_r$ and $v_p$ are twins, then $v_r\sim v_p$, because $v_r,v_p\in K(G)$. Since $r_2(u_t|W_1)$ is not entirely $1$, there exists a vertex $u_l\in W_1$ such that $a_H(u_t,u_l)=2$. Therefore, $v_{rl}\in S$ and $d_{G[H]}(v_{rt},v_{rl})=a_H(u_t,u_l)=2$. On the other hand, $d_{G[H]}(v_{pq},v_{rl})=d_G(v_p,v_r)=1$. Thus, $d_{G[H]}(v_{rt},v_{rl})\neq d_{G[H]}(v_{pq},v_{rl})$. \\ 3. $r\neq p$, $v_r\in K(G)$, and $v_p\notin K(G)$. In this case, $v_r$ and $v_p$ are not twins. Therefore, there exists a vertex $v_i\in V(G)\backslash\{v_r,v_p\}$ which is adjacent to exactly one of the vertices $v_r$ and $v_p$. Let $u_j$ be a vertex of $W_1\cup W_2$ such that $v_{ij}\in S$.
Hence, $d_{G[H]}(v_{rt},v_{ij})=d_G(v_r,v_i)\neq d_G(v_p,v_i)= d_{G[H]}(v_{pq},v_{ij})$. \\ 4. $r\neq p$ and $v_r,v_p\notin K(G)$. If $v_r$ and $v_p$ are not twins, then there exists a vertex $v_i\in V(G)\backslash\{v_r,v_p\}$ which is adjacent to exactly one of the vertices $v_r$ and $v_p$. Thus, for each $u_j\in W_2$, we have $v_{ij}\in S$ and $d_{G[H]}(v_{rt},v_{ij})=d_G(v_r,v_i)\neq d_G(v_p,v_i)= d_{G[H]}(v_{pq},v_{ij})$. If $v_r$ and $v_p$ are twins, then $v_r\nsim v_p$, because $v_r,v_p\notin K(G)$. Since $r_2(u_t|W_2)$ is not entirely $2$, there exists a vertex $u_l\in W_2$ such that $a_H(u_t,u_l)=1$. Therefore, $v_{rl}\in S$ and $d_{G[H]}(v_{rt},v_{rl})=a_H(u_t,u_l)=1$. On the other hand, $d_{G[H]}(v_{pq},v_{rl})=d_G(v_p,v_r)=2$, since $v_r$ and $v_p$ are non-adjacent twins in the connected graph $G$. Hence, $d_{G[H]}(v_{rt},v_{rl})\neq d_{G[H]}(v_{pq},v_{rl})$. \par Thus, $r(v_{rt}|S)\neq r(v_{pq}|S)$. Therefore, $S$ is a resolving set for $G[H]$ with cardinality $n\beta_2(H)$. \par Clearly, in $\overline H$, for each $u\in V(\overline H)$, $r_2(u|W_1)$ is not entirely $2$ and $r_2(u|W_2)$ is not entirely $1$. Since $\beta_2(H)=\beta_2(\overline H)$, by interchanging the roles of $W_1$ and $W_2$ for $\overline H$, we conclude $\beta(G[\overline H])=n\beta_2(\overline H)=n\beta_2(H)$. }\end{proof} In the following three theorems, we obtain $\beta(G[H])$ when $H$ does not satisfy the assumption of Theorem~\ref{B1 B2}. \begin{thm}\label{thm generalG[H]} Let $G$ be a connected graph of order $n$ and $H$ be an arbitrary graph. If for each adjacency basis $W$ of $H$ there exist vertices with adjacency representations entirely $1$ and entirely $2$ with respect to $W$, then $\beta(G[H])=\beta(G[\overline H])=n(\beta_2(H)+1)-\iota(G).$ \end{thm} \begin{proof}{ Let $B$ be a basis of $G[H]$ and $B_i$ be the projection of $B\cap H_i$ onto $H$, for each $i$, $1\leq i\leq n$. By Lemma~\ref{S i Resolves H_i}, the $B_i$'s are adjacency resolving sets for $H$.
Therefore, $|B\cap H_i|=|B_i|\geq \beta_2(H)$ for each $i$, $1\leq i\leq n$. Let $I=\{i\ |\ |B_i|=\beta_2(H)\}$. We claim that $|I|\leq\iota(G)$; otherwise, by the pigeonhole principle, there exists a pair of twin vertices $v_r,v_p\in V(G)$ such that $|B_r|=|B_p|=\beta_2(H)$. Since $B_r$ and $B_p$ are adjacency bases of $H$, by the assumption there are vertices $u_t$ and $u_q$ with adjacency representations entirely $1$ with respect to $B_r$ and $B_p$, respectively. Also, there are vertices $u_{t'}$ and $u_{q'}$ with adjacency representations entirely $2$ with respect to $B_r$ and $B_p$, respectively. Hence, for each $u\in B_r$ and $u^\prime\in B_p$, we have $u_{t}\sim u,~u_{t^\prime}\nsim u,~u_{q}\sim u^\prime$, and $u_{q^\prime}\nsim u^\prime$. If $v_r\sim v_p$, then for each $v_{ij}\in B$ one of the following cases occurs. \\ 1. $i\notin\{r,p\}$. Since $v_r$ and $v_p$ are twins, we have $d_G(v_r,v_i)=d_G(v_p,v_i)$. On the other hand, $d_{G[H]}(v_{rt},v_{ij})=d_G(v_r,v_i)$ and $d_{G[H]}(v_{pq},v_{ij})=d_G(v_p,v_i)$. Thus, $d_{G[H]}(v_{rt},v_{ij})=d_{G[H]}(v_{pq},v_{ij})$. \\ 2. $i=p\neq r$. In this case, $d_{G[H]}(v_{pq},v_{ij})=a_H(u_{q},u_j)$ and $d_{G[H]}(v_{rt},v_{ij})=d_G(v_r,v_i)$. Since $v_i=v_p\sim v_r$, we have $d_G(v_r,v_i)=1$. On the other hand, $u_j\in B_p$ and hence, $a_H(u_{q},u_j)=1$. Therefore, $d_{G[H]}(v_{rt},v_{ij})=d_{G[H]}(v_{pq},v_{ij})$. \\ 3. $i=r\neq p$. As in the previous case, $d_{G[H]}(v_{rt},v_{ij})=a_H(u_{t},u_j)=1$ and $d_{G[H]}(v_{pq},v_{ij})=d_G(v_p,v_i)=1$. Consequently, $d_{G[H]}(v_{rt},v_{ij})=d_{G[H]}(v_{pq},v_{ij})$. \\ 4. $i=p=r$. In this case, $d_{G[H]}(v_{pq},v_{ij})=a_H(u_{q},u_j)$ and $d_{G[H]}(v_{rt},v_{ij})=a_H(u_{t},u_j)$. Since $u_j\in B_p=B_r$, we have $a_H(u_{q},u_j)=1=a_H(u_{t},u_j)$. Thus, $d_{G[H]}(v_{rt},v_{ij})=d_{G[H]}(v_{pq},v_{ij})$. \\ Hence, $v_r\sim v_p$ implies that $r(v_{rt}|B)=r(v_{pq}|B)$, which is a contradiction. Therefore, $v_r\nsim v_p$.
Since $G$ is a connected graph, the non-adjacent twin vertices $v_r$ and $v_p$ have at least one common neighbor and thus $d_G(v_r,v_p)=2$. Consequently, by the same method as in the case $v_r\sim v_p$, we can see that $r(v_{rt^\prime}|B)=r(v_{pq^\prime}|B)$, which contradicts the assumption that $B$ is a basis of $G[H]$. Hence $|I|\leq \iota(G)$. On the other hand, every basis of $G[H]$ has at least $\beta_2(H)+1$ vertices in $H_i$, where $i\notin I$. Therefore, \begin{eqnarray*} \beta(G[H])=|B|=|\cup_{i=1}^n(B\cap H_i)|&\geq &|I|\beta_2(H)+(n-|I|)(\beta_2(H)+1)\\ &=&n\beta_2(H)+n-|I|\\ &\geq & n(\beta_2(H)+1)-\iota(G). \end{eqnarray*} Now let $W$ be an adjacency basis of $H$. By assumption, there exist vertices $u_1,u_2\in V(H)\backslash W$ such that $u_1$ is adjacent to all vertices of $W$ and $u_2$ is not adjacent to any vertex of $W$. Also, let $K(G)$ be the set of all classes of type (K) and $N(G)$ be the set of all classes of type (N) in $G$. Choose a fixed vertex $v\in v^*$ for each class $v^*\in N(G)\cup K(G)$. We claim that the set $$S=\{v_{ij}\in V(G[H])\,|\,u_j\in W\}\cup\{v_{t1}\,|\,v_t\in \cup_{v^*\in K(G)}(v^*\backslash\{v\})\} \cup\{v_{t2}\,|\,v_t\in \cup_{v^*\in N(G)}(v^*\backslash\{v\})\}$$ is a resolving set for $G[H]$. Let $v_{rt},v_{pq}\in V(G[H])\backslash S$. One of the following cases can occur. \\ 1. $r=p$. Since $W$ is an adjacency basis of $H$, there exists a vertex $u_j\in W$ such that $a_H(u_q,u_j)\neq a_H(u_t,u_j)$. Therefore, $d_{G[H]}(v_{pq},v_{rj})=a_H(u_q,u_j)\neq a_H(u_t,u_j)=d_{G[H]}(v_{rt},v_{rj})$. Consequently, $r(v_{rt}|S)\neq r(v_{pq}|S)$. \\ 2. $r\neq p$ and $v_r,v_p$ are not twins. Hence, there exists a vertex $v_i\in V(G)$ which is adjacent to exactly one of the vertices $v_r$ and $v_p$. Thus, for each vertex $u_j\in W$, $d_{G[H]}(v_{rt},v_{ij})=d_G(v_r,v_i)\neq d_G(v_p,v_i)=d_{G[H]}(v_{pq},v_{ij})$. This yields $r(v_{rt}|S)\neq r(v_{pq}|S)$. \\ 3. $v_r$ and $v_p$ are adjacent twins.
Therefore, at least one of the vertices $v_{r1}$ and $v_{p1}$, say $v_{r1}$, belongs to $S$. Since $v_{rt}\notin S$, we have $t\neq 1$. Hence, there exists a vertex $u_j\in W$ such that $a_H(u_t,u_j)=2$, otherwise $t=1$. Consequently, $d_{G[H]}(v_{rt},v_{rj})=a_H(u_t,u_j)=2$. On the other hand, $d_{G[H]}(v_{pq},v_{rj})=d_G(v_p,v_r)=1$, because $v_r\sim v_p$. This gives $r(v_{rt}|S)\neq r(v_{pq}|S)$. \\ 4. $v_r$ and $v_p$ are non-adjacent twins. In this case, at least one of the vertices $v_{r2}$ and $v_{p2}$, say $v_{r2}$, belongs to $S$. Hence, $t\neq 2$ and there exists a vertex $u_j\in W$ such that $a_H(u_t,u_j)=1$, otherwise $t=2$. Therefore, $d_{G[H]}(v_{rt},v_{rj})=a_H(u_t,u_j)=1\neq2=d_G(v_p,v_r)=d_{G[H]}(v_{pq},v_{rj})$. Thus, $r(v_{rt}|S)\neq r(v_{pq}|S)$. \\ Consequently, $S$ is a resolving set for $G[H]$ with cardinality $$|S|=n\beta_2(H)+a(G)-\iota_{_K}(G)+b(G)-\iota_{_N}(G)=n(\beta_2(H)+1)-\iota(G). $$ Since the adjacency bases of $H$ and $\overline H$ are the same, $\overline H$ satisfies the condition of the theorem. Hence, $\beta(G[\overline H])=n(\beta_2(H)+1)-\iota(G)$ and the proof is completed. }\end{proof} \begin{thm}\label{nB2+a(G) K(G)} Let $G$ be a connected graph of order $n$ and $H$ be an arbitrary graph. If $H$ has the following properties \begin{description}\item (i) for each adjacency basis of $H$ there exists a vertex with adjacency representation entirely $1$, \item (ii) there exists an adjacency basis $W$ of $H$ such that there is no vertex with adjacency representation entirely $2$ with respect to $W$, \end{description} then $\beta(G[H])=n\beta_2(H)+a(G)-\iota_{_K}(G).$ \end{thm} \begin{proof}{ Let $B$ be a basis of $G[H]$ and $B_i$ be the projection of $B\cap H_i$ onto $H$, for each $i$, $1\leq i\leq n$. By Lemma~\ref{S i Resolves H_i}, the $B_i$'s are adjacency resolving sets for $H$. Therefore, $|B\cap H_i|=|B_i|\geq \beta_2(H)$ for each $i$, $1\leq i\leq n$. Let $I=\{i\ |\ |B_i|=\beta_2(H)\}$.
We claim that $|I|\leq n-a(G)+\iota_{_K}(G)$; otherwise, by the pigeonhole principle, there exists a pair of adjacent twin vertices $v_r,v_p\in V(G)$ such that $|B_r|=|B_p|=\beta_2(H)$. Since $B_r$ and $B_p$ are adjacency bases of $H$, by assumption $(i)$ there exist vertices $u_{t},u_q\in V(H)$ with adjacency representations entirely $1$ with respect to $B_r$ and $B_p$, respectively. Hence, for each $u\in B_r$ and each $u^\prime\in B_p$, we have $u_{t}\sim u$ and $u_q\sim u'$. Since $v_r\sim v_p$, for each $v_{ij}\in B$ one of the following cases occurs. \\ 1. $i\notin\{r,p\}$. Since $v_r$ and $v_p$ are twins, we have $d_G(v_r,v_i)=d_G(v_p,v_i)$. On the other hand, $d_{G[H]}(v_{rt},v_{ij})=d_G(v_r,v_i)$ and $d_{G[H]}(v_{pq},v_{ij})=d_G(v_p,v_i)$. Thus, $d_{G[H]}(v_{rt},v_{ij})=d_{G[H]}(v_{pq},v_{ij})$. \\ 2. $i=p\neq r$. In this case, $d_{G[H]}(v_{pq},v_{ij})=a_H(u_{q},u_j)$ and $d_{G[H]}(v_{rt},v_{ij})=d_G(v_r,v_i)$. Since $v_i=v_p\sim v_r$, we have $d_G(v_r,v_i)=1$. On the other hand, $u_j\in B_p$ and hence, $a_H(u_{q},u_j)=1$. Therefore, $d_{G[H]}(v_{rt},v_{ij})=d_{G[H]}(v_{pq},v_{ij})$. \\ 3. $i=r\neq p$. As in the previous case, $d_{G[H]}(v_{rt},v_{ij})=a_H(u_{t},u_j)=1$ and $d_{G[H]}(v_{pq},v_{ij})=d_G(v_p,v_i)=1$. Consequently, $d_{G[H]}(v_{rt},v_{ij})=d_{G[H]}(v_{pq},v_{ij})$. \\ 4. $i=p=r$. In this case, $d_{G[H]}(v_{pq},v_{ij})=a_H(u_{q},u_j)$ and $d_{G[H]}(v_{rt},v_{ij})=a_H(u_{t},u_j)$. Since $u_j\in B_p=B_r$, we have $a_H(u_{q},u_j)=1=a_H(u_{t},u_j)$. Thus, $d_{G[H]}(v_{rt},v_{ij})=d_{G[H]}(v_{pq},v_{ij})$. \\ Hence, $r(v_{rt}|B)=r(v_{pq}|B)$, which is a contradiction. Therefore, $|I|\leq n-a(G)+\iota_{_K}(G)$. On the other hand, every basis of $G[H]$ has at least $\beta_2(H)+1$ vertices in $H_i$, where $i\notin I$. Thus, \begin{eqnarray*} \beta(G[H])=|B|&\geq &|I|\beta_2(H)+(n-|I|)(\beta_2(H)+1)\\ &=&n\beta_2(H)+n-|I|\\ &\geq & n\beta_2(H)+a(G)-\iota_{_K}(G).
\end{eqnarray*} Now let $K(G)$ be the set of all twin classes of type (K) in $G$ and, for each class $v^*\in K(G)$, fix a vertex $v\in v^*$. Also, let $u_1\in V(H)\backslash W$ be such that $r_2(u_1|W)$ is entirely $1$. Consider $$S=\{v_{ij}\in V(G[H])\,|\,u_j\in W\}\cup\{v_{t1}\,|\,v_t\in \cup_{v^*\in K(G)}(v^*\backslash\{v\})\}$$ and let $v_{rt},v_{pq}\in V(G[H])\backslash S$. If $v_r$ and $v_p$ are not non-adjacent twins, then, as in the proof of Theorem~\ref{thm generalG[H]}, we have $r(v_{rt}|S)\neq r(v_{pq}|S)$. Now, let $v_r$ and $v_p$ be non-adjacent twin vertices of $G$. By assumption, there exists a vertex $u_j\in W$ such that $a_H(u_t,u_j)=1$. Therefore, $d_{G[H]}(v_{rt},v_{rj})=a_H(u_t,u_j)=1$. On the other hand, $d_{G[H]}(v_{pq},v_{rj})=d_G(v_p,v_r)=2$, since $v_r$ and $v_p$ are non-adjacent twins in the connected graph $G$. Hence, $r(v_{rt}|S)\neq r(v_{pq}|S)$. This implies that $S$ is a resolving set for $G[H]$ with cardinality $n\beta_2(H)+a(G)-\iota_{_K}(G)$. }\end{proof} By a similar proof, we have the following theorem. \begin{thm}\label{nB2+b(G) N(G)} Let $G$ be a connected graph of order $n$ and let $H$ be an arbitrary graph. If $H$ has the following properties: \begin{description}\item (i) for each adjacency basis of $H$ there exists a vertex with adjacency representation entirely $2$, \item (ii) there exists an adjacency basis $W$ of $H$ such that no vertex has adjacency representation entirely~$1$ with respect to $W$, \end{description} then $\beta(G[H])=n\beta_2(H)+b(G)-\iota_{_N}(G).$ \end{thm} \begin{cor}\label{no twin} If $G$ has no pair of twin vertices, then $\beta (G[H])=n\beta_2(H)$. \end{cor} \begin{proof}{ The adjacency bases of $H$ satisfy one of the conditions of Theorems~\ref{B1 B2}, \ref{thm generalG[H]}, \ref{nB2+a(G) K(G)}, and~\ref{nB2+b(G) N(G)}. Now, if $G$ does not have any pair of twin vertices, then $\iota(G)=n,~\iota_{_K}(G)=a(G)=0$, and $\iota_{_N}(G)=b(G)=0$.
Therefore, $\beta (G[H])=n\beta_2(H)$.}\end{proof} By Theorems~\ref{B1 B2}, \ref{thm generalG[H]}, \ref{nB2+a(G) K(G)}, and~\ref{nB2+b(G) N(G)}, the exact value of $\beta(G[H])$ can be determined for many graphs $G$ and $H$. In the following two corollaries, $\beta(G[H])$ is obtained for some well-known graphs. \begin{cor}\label{G=P_n or C_n} Let $G=P_n$, $n\geq4$, or $G=C_n$, $n\geq5$. Then $G$ does not have any pair of twin vertices. Thus, by Corollary~\ref{no twin}, $\beta(G[H])=n\beta_2(H)$ for each graph $H$. In particular, by Propositions~\ref{B2(H)=B2(overlineH)} and \ref{B_2(P_m),B_2(C_m)}, $\beta_2(P_m)=\beta_2(C_m)=\beta_2(\overline P_m)=\beta_2(\overline C_m)=\lfloor{2m+2\over5}\rfloor$. Therefore, $\beta(G[P_m])=\beta(G[C_m])=\beta(G[\overline P_m])=\beta(G[\overline C_m])=n\lfloor{2m+2\over5}\rfloor$. Also, by Propositions~\ref{B2(H)=B2(overlineH)} and \ref{B_2 multipartite}, we have $$\beta(G[\overline K_{m_1,m_2,\ldots,m_t}])=\beta(G[K_{m_1,m_2,\ldots,m_t}])=\left\{ \begin{array}{ll} n(m-r-1) & ~{\rm if}~r\neq t, \\ n(m-r) & ~{\rm if}~r=t, \end{array}\right.$$ where $m_1, m_2,\ldots,m_r$ are at least $2$, $m_{r+1}=\cdots=m_t=1$, and $\sum_{i=1}^tm_i=m$. \end{cor} \begin{cor}\label{H=K_m1,m2,..,mt} Let $H=K_{m_1,m_2,\ldots,m_t}$, where $m_1, m_2,\ldots,m_r$ are at least $2$, $m_{r+1}=\cdots=m_t=1$, and $\sum_{i=1}^tm_i=m$. Then, for each adjacency basis of $H$, there is no vertex of $H$ with adjacency representation entirely $2$. \par If $r=t$, then for each adjacency basis of $H$ there is no vertex of $H$ with adjacency representation entirely $1$. Therefore, by Theorem~\ref{B1 B2}, $\beta(G[H])=n\beta_2(H)$ for each connected graph $G$ of order $n$. If $r\neq t$, then for each adjacency basis of $H$ there exists a vertex with adjacency representation entirely~$1$. Thus, by Theorem~\ref{nB2+a(G) K(G)}, $\beta(G[H])=n\beta_2(H)+a(G)-\iota_{_K}(G)$ for each connected graph $G$ of order $n$.
\par In particular, if $G=K_n$, then all vertices of $K_n$ are adjacent twins. Thus, $a(K_n)=n$ and $\iota_{_K}(K_n)=1$, hence, $\beta(K_n[H])=n\beta_2(H)+n-1$. Therefore, by Proposition~\ref{B_2 multipartite}, $$\beta(K_n[H])=\left\{ \begin{array}{ll} n(m-r)-1 & ~{\rm if}~r\neq t, \\ n(m-r) & ~{\rm if}~r=t. \end{array}\right.$$ \end{cor} \end{document}
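The closed formulas above lend themselves to brute-force verification on small instances. The following sketch (our own illustrative Python, assuming only the definitions of the lexicographic product $G[H]$ and of the metric dimension; all function names are hypothetical) checks the prediction $\beta(P_4[P_2]) = 4\lfloor(2\cdot 2+2)/5\rfloor = 4$ from the corollary on paths:

```python
# Brute-force check of beta(P_4[P_2]) = 4*floor((2*2+2)/5) = 4.
# All names below are our own illustrative choices.
from itertools import combinations
from collections import deque

def path_graph(k):
    """Adjacency sets of the path P_k on vertices 0, ..., k-1."""
    return {i: {j for j in (i - 1, i + 1) if 0 <= j < k} for i in range(k)}

def lexicographic(G, H):
    """Lexicographic product G[H]: (g1,h1) ~ (g2,h2) iff g1 ~ g2,
    or g1 = g2 and h1 ~ h2."""
    V = [(g, h) for g in G for h in H]
    return {(g1, h1): {(g2, h2) for (g2, h2) in V
                       if g2 in G[g1] or (g1 == g2 and h2 in H[h1])}
            for (g1, h1) in V}

def bfs_distances(adj, s):
    d, q = {s: 0}, deque([s])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in d:
                d[w] = d[u] + 1
                q.append(w)
    return d

def metric_dimension(adj):
    """Smallest |S| whose distance vectors separate all vertices."""
    V = list(adj)
    dist = {v: bfs_distances(adj, v) for v in V}
    for k in range(1, len(V) + 1):
        for S in combinations(V, k):
            if len({tuple(dist[v][s] for s in S) for v in V}) == len(V):
                return k

GH = lexicographic(path_graph(4), path_graph(2))
print(metric_dimension(GH))  # expected: 4 = n*floor((2m+2)/5) with n=4, m=2
```

In $P_4[P_2]$ the two copies of each path vertex are adjacent twins, so any resolving set must meet each of the four twin pairs; this is why the brute force cannot succeed with fewer than $4$ vertices.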
\begin{document} \title[On the Krull filtration of ${\mathcal U}$]{The Krull filtration of the category of unstable modules over the Steenrod algebra} \author[Kuhn]{Nicholas J.~Kuhn} \address{Department of Mathematics \\ University of Virginia \\ Charlottesville, VA 22904} \email{[email protected]} \thanks{This research was partially supported by grants from the National Science Foundation} \date{June 25, 2013.} \subjclass[2000]{Primary 55S10; Secondary 18E10} \begin{abstract} In the early 1990's, Lionel Schwartz gave a lovely characterization of the Krull filtration of ${\mathcal U}$, the category of unstable modules over the mod $p$ Steenrod algebra. Soon after, this filtration was used by the author as an organizational tool in posing and studying some topological nonrealization conjectures. In recent years the Krull filtration of ${\mathcal U}$ has been similarly used by Castellana, Crespo, and Scherer in their study of H--spaces with finiteness conditions. Recently, Gaudens and Schwartz have given a proof of some of my conjectures. In light of these topological applications, it seems timely to better expose the algebraic properties of the Krull filtration. \end{abstract} \maketitle \section{Introduction} \label{introduction} The mod $p$ cohomology of a space $H^*(X)$ is naturally an object in both ${\mathcal U}$ and ${\mathcal K}$, the categories of unstable modules and algebras over the mod $p$ Steenrod algebra ${\mathcal A}$. Over the past quarter century, much important work in unstable homotopy theory has been grounded in the tremendous advances made during the 1980's in our understanding of the abelian category ${\mathcal U}$, and the central role played by $H^*(B{\mathbb Z}/p)$. In his 1962 paper \cite{gabriel}, Gabriel introduced the Krull filtration of a general abelian category.
Applied to ${\mathcal U}$, the Krull filtration is the increasing sequence of localizing subcategories $$ {\mathcal U}_0 \subset {\mathcal U}_1 \subset {\mathcal U}_2 \subset {\mathcal U}_3 \subset {\mathcal U}_4 \subset \dots$$ recursively defined as follows: ${\mathcal U}_0$ is the full subcategory of locally finite unstable ${\mathcal A}$-modules, and for $n>0$, ${\mathcal U}_n$ is the full subcategory of unstable modules which project to a locally finite object in the quotient category ${\mathcal U}/{\mathcal U}_{n-1}$. In the early 1990's, Lionel Schwartz gave a lovely characterization of the Krull filtration of ${\mathcal U}$ in terms of Lannes' $T$-functor. Soon after, this filtration was used by the author \cite{k1} as an organizational tool in posing and studying some topological nonrealization conjectures. In particular, the author's Strong Realization Conjecture posited that if $H^*(X) \in {\mathcal U}_n$ for some finite $n$, then $H^*(X) \in {\mathcal U}_0$. A proof of this was recently given by Schwartz and G. Gaudens \cite{gaudens schwartz}. (Special cases were proved earlier in \cite{k1,s3,s4}.) The Krull filtration of ${\mathcal U}$ has been similarly used in recent years by Castellana, Crespo, and Scherer in their study of H--spaces with finiteness conditions \cite{ccs1,ccs2}. The few results about the Krull filtration are scattered in the literature, and are not particularly comprehensive. Moreover, Schwartz just sketched a proof of his key characterization in \cite[\S6.2]{s2}, and we suspect many readers would have trouble filling in the details. In light of the recent topological applications, it seems timely to better expose the properties of this interesting bit of the structure of ${\mathcal U}$. 
\subsection{Schwartz's characterization of ${\mathcal U}_n$} Lannes \cite{L} defines $T: {\mathcal U} \rightarrow {\mathcal U}$ to be the left adjoint to the functor sending $M$ to $H^*(B{\mathbb Z}/p) \otimes M$, and proves that this functor satisfies many remarkable properties; in particular, it is exact and commutes with tensor products. We need the reduced version. Let ${\bar T}: {\mathcal U} \rightarrow {\mathcal U}$ be left adjoint to the functor sending $M$ to $\tilde H^*(B{\mathbb Z}/p) \otimes M$, so that $TM = {\bar T} M \oplus M$. Then let ${\bar T}^n$ denote its $n$th iterate. Schwartz's elegant characterization of ${\mathcal U}_n$ \cite[Thm.6.2.4]{s2} goes as follows. It generalizes the $n=0$ case, proved earlier by Lannes and Schwartz \cite{ls2}. \begin{thm} \label{schwartz thm} \label{T thm} ${\mathcal U}_n = \{ M \in {\mathcal U} \ | \ {\bar T}^{n+1}M = 0 \}$. \end{thm} We will give a proof of this theorem which is simpler than the proof outlined in \cite{s2}. All of our subsequent results about ${\mathcal U}_n$ use this theorem. As an example, since ${\bar T}(M \otimes N) = ({\bar T} M \otimes N) \oplus (M \otimes {\bar T} N) \oplus ({\bar T} M \otimes {\bar T} N)$, the theorem has the following first consequence. \begin{cor} If $M \in {\mathcal U}_m$ and $N \in {\mathcal U}_n$, then $M \otimes N \in {\mathcal U}_{m+n}$. \end{cor} \subsection{The quotient category ${\mathcal U}_n/{\mathcal U}_{n-1}$ and consequences.} Our next theorem usefully identifies the quotient category ${\mathcal U}_n/{\mathcal U}_{n-1}$. We need to introduce some basic unstable modules. Let $F(1)$ be the free unstable module on a $1$--dimensional class. Explicitly, $F(1) = {\mathcal A}\cdot x \subset H^*(B{\mathbb Z}/p)$, where $x$ generates $H^1(B{\mathbb Z}/p)$.
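As a computational illustration at $p = 2$ (our own sketch, assuming only the Cartan-formula action $Sq^k x^n = \binom{n}{k}x^{n+k} \bmod 2$ on ${\mathbb Z}/2[x]$): closing up $x$ under the Steenrod operations produces exactly the powers $x^{2^j}$, since $\binom{2^j}{k}$ is odd only for $k = 0$ and $k = 2^j$.

```python
# Sketch (our own): at p = 2, Sq^k(x^n) = C(n,k) x^{n+k} mod 2 in Z/2[x],
# and Sq^k(x^n) = 0 for k > |x^n| = n by the unstable condition.
# Closing {x} up under this action recovers F(1) = <x, x^2, x^4, ...>.
from math import comb

def sq_image(n):
    """Degrees of the nonzero Sq^k(x^n) for 0 <= k <= n."""
    return {n + k for k in range(n + 1) if comb(n, k) % 2 == 1}

span, frontier, BOUND = set(), {1}, 2 ** 7
while frontier:
    span |= frontier
    frontier = {m for n in frontier for m in sq_image(n) if m <= BOUND} - span

assert span == {2 ** j for j in range(8)}  # only powers x^(2^j) appear
print(sorted(span))  # [1, 2, 4, 8, 16, 32, 64, 128]
```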
Thus when $p=2$, $$ F(1) = \langle x, x^2, x^4, \dots \rangle \subset {\mathbb Z}/2[x],$$ and when $p$ is odd, $$ F(1) = \langle x, y, y^p, y^{p^2}, \dots \rangle \subset \Lambda(x) \otimes {\mathbb Z}/p[y].$$ If $M \in {\mathcal U}_n$, then ${\bar T}^nM \in {\mathcal U}_0$. But ${\bar T}^nM$ has more structure than just this: note that ${\bar T}^n$ is left adjoint to tensoring with $\tilde H^*(B{\mathbb Z}/p)^{\otimes n}$, and thus ${\bar T}^n(M)$ has a natural action of the $n$th symmetric group $\Sigma_n$. Let $\Sigma_n\text{--}{\mathcal U}_0$ denote the category of $M \in {\mathcal U}_0$ equipped with a $\Sigma_n$--action. \begin{thm} \label{Un/Un-1 thm} The exact functor ${\bar T}^n: {\mathcal U}_n \rightarrow \Sigma_n\text{--}{\mathcal U}_0$ has as right adjoint the functor $$N \mapsto (N \otimes F(1)^{\otimes n})^{\Sigma_n},$$ and together these functors induce an equivalence $${\mathcal U}_n/{\mathcal U}_{n-1} \simeq \Sigma_n\text{--}{\mathcal U}_0.$$ \end{thm} Using this theorem, one quite easily obtains a recursive description of modules in ${\mathcal U}_n$. \begin{thm} \label{recursive Un thm} $M \in {\mathcal U}_n$ if and only if there exist $K,Q \in {\mathcal U}_{n-1}$, $N \in \Sigma_n\text{--}{\mathcal U}_0$, and an exact sequence $$ 0 \rightarrow K \rightarrow M \rightarrow (N \otimes F(1)^{\otimes n})^{\Sigma_n} \rightarrow Q \rightarrow 0.$$ Furthermore, $\bar T^n M \simeq N$. \end{thm} The case $n=1$, already useful, was previously known \cite[Prop.2.3]{s4}. One obtains a simple generating set for ${\mathcal U}_n$. \begin{thm} \label{F(1) thm} ${\mathcal U}_n$ is the smallest localizing subcategory containing all suspensions of the modules $F(1)^{\otimes m}$ for $0 \leq m \leq n$. \end{thm} \begin{rem} Let $F(n)$ be the free unstable module on an $n$--dimensional class.
Though $F(n) \in {\mathcal U}_n$, ${\mathcal U}_n$ is generally strictly larger than the localizing subcategory generated by all suspensions of the modules $F(m)$ for $0\leq m \leq n$. See \exref{F(n) example}. \end{rem} \subsection{Interaction with the nilpotent filtration} To put \thmref{Un/Un-1 thm} in perspective, we remind readers of how the Krull filtration of ${\mathcal U}$ interacts with the nilpotent filtration \cite[\S2]{k1}. The nilpotent filtration of ${\mathcal U}$, introduced in \cite{s1}, is the decreasing filtration $$ {\mathcal U} = {\mathcal Nil}_0 \supset {\mathcal Nil}_1 \supset {\mathcal Nil}_2 \supset {\mathcal Nil}_3 \supset \dots$$ defined by letting ${\mathcal Nil}_s$ be the smallest localizing subcategory containing all $s$--fold suspensions of unstable modules. We write ${\mathcal Nil}$ for ${\mathcal Nil}_1$. H.-W. Henn, Lannes, and Schwartz \cite{hls1} identify ${\mathcal U}/{\mathcal Nil}$ as follows. Let ${\mathcal F}$ be the category of functors from finite dimensional ${\mathbb Z}/p$--vector spaces to ${\mathbb Z}/p$--vector spaces. There is a difference operator $\Delta: {\mathcal F} \rightarrow {\mathcal F}$ defined by $$ \Delta F(V) = F(V \oplus {\mathbb Z}/p)/F(V).$$ $F$ is said to be polynomial of degree $n$ if $\Delta^{n+1}F = 0$, and analytic if it is locally polynomial. Let ${\mathcal F}^n$ and ${\mathcal F}^{an}$ denote the subcategories of such functors. The authors of \cite{hls1} show that there is an equivalence $ {\mathcal U}/{\mathcal Nil} \simeq {\mathcal F}^{an}$. Under this equivalence, it is not hard to show that $\bar T$ corresponds to $\Delta$, thus ${\mathcal U}_n$ projects to ${\mathcal F}^n$. Schwartz \cite{s1} further observes that, for all $s$, $M \mapsto \Sigma^s M$ induces an equivalence $ {\mathcal U}/{\mathcal Nil} \simeq {\mathcal Nil}_s/{\mathcal Nil}_{s+1}$. 
Restricting these equivalences to ${\mathcal U}_n$, one learns that, for all $s$, there are equivalences of abelian categories $$ ({\mathcal Nil}_s \cap {\mathcal U}_n)/({\mathcal Nil}_{s+1} \cap {\mathcal U}_n) \simeq {\mathcal F}^n.$$ It is classic (see \cite{pirashvili}, \cite[\S 5.5]{s1}, or \cite{filt gen rep}) that there is an equivalence \begin{equation} \label{Fn/Fn-1thm} {\mathcal F}^n / {\mathcal F}^{n-1} \simeq {\mathbb Z}/p[\Sigma_n]\text{--modules}. \end{equation} The equivalence ${\mathcal U}_n/{\mathcal U}_{n-1} \simeq \Sigma_n\text{--}{\mathcal U}_0$ of \thmref{Un/Un-1 thm} can thus be seen as the correct lift of this result to ${\mathcal U}$. \begin{rem} The strongest form of (\ref{Fn/Fn-1thm}) says that the quotient functor ${\mathcal F}^n \rightarrow {\mathbb Z}/p[\Sigma_n]\text{--modules}$ admits both a left and right adjoint, forming a recollement setting \cite[Example 1.5]{filt gen rep}. By contrast, as $\bar T^n$ does not commute with products, the exact quotient functor ${\mathcal U}_n \rightarrow \Sigma_n\text{--} {\mathcal U}_0$ does not admit a left adjoint. See \remref{no adjoint remark} for related comments. \end{rem} \subsection{The Krull filtration of a module} Let $k_n: {\mathcal U} \rightarrow {\mathcal U}_n$ be right adjoint to the inclusion ${\mathcal U}_n \hookrightarrow {\mathcal U}$. Explicitly, $k_nM \subset M$ is the largest submodule of $M$ contained in ${\mathcal U}_n$. (So $k_0M$ is the locally finite part of $M$.) We obtain a natural increasing filtration of $M$: $$ k_0M \subset k_1M \subset k_2M \subset k_3M \subset \dots .$$ It is useful to let $\bar k_nM$ denote the composition factor $k_nM/k_{n-1}M$. A basic calculation from \cite{hls1} can be interpreted as saying the following. \begin{prop} \label{pnH prop} \cite[Lem.7.6.6]{hls1} $k_n \tilde H^*(B{\mathbb Z}/p)$ is the span of products of elements in $F(1)$ of length at most $n$. \end{prop} The next theorem lists some basic properties.
In this theorem, $\Phi$ is the Frobenius functor, $nil_sM$ is the maximal submodule of $M$ in ${\mathcal Nil}_s$, $R_sM$ is the reduced module defined by $nil_sM/nil_{s+1}M = \Sigma^s R_sM$, and $\bar R_sM$ is its ${\mathcal Nil}$--closure. (These will be recalled in more detail in \secref{U section}.) \begin{thm} \label{kn thm} The Krull filtration of a module satisfies the following properties, for all $M,N \in {\mathcal U}$. \noindent{\bf (a)} \ $k_n$ is left exact, commutes with filtered colimits, and $\displaystyle \bigcup_{n=0}^{\infty} k_nM = M$. \\ \noindent{\bf (b)} \ $k_n$ commutes with the functors $\Sigma^s$, $\Phi$, $nil_s$, and $\bar R_s$. \\ \noindent{\bf (c)} \ $k_n$ preserves ${\mathcal Nil}$--reduced modules, ${\mathcal Nil}$--closed modules, and \\ ${\mathcal Nil}$--isomorphisms. \\ \noindent{\bf (d)} \ $\displaystyle \bigoplus_{l+m=n} \bar k_lM \otimes \bar k_mN = \bar k_n(M \otimes N)$. \end{thm} In contrast to (b), the natural map $$ R_s(k_nM) \rightarrow k_n(R_sM)$$ need only be a monomorphism (with cokernel in ${\mathcal Nil}$). See \exref{Rs ex}. \subsection{Symmetric sequences of locally finite modules} We construct a functor from ${\mathcal U}$ to the category of symmetric sequences of locally finite modules that seems to nicely encode much of the information about the Krull filtration of a module. A {\em symmetric sequence} in ${\mathcal U}_0$ is a sequence $M = \{M_0,M_1,M_2, \dots\}$, with $M_n \in \Sigma_n\text{--}{\mathcal U}_0$. The category of these, $\Sigma_*\text{--}{\mathcal U}_0$, has a symmetric monoidal structure with product $$ (M \boxtimes N)_n = \bigoplus_{l+m=n} \operatorname{Ind}_{\Sigma_l \times \Sigma_m}^{\Sigma_n}(M_l \otimes N_m).$$ \begin{defn} Let $\sigma_*: {\mathcal U} \longrightarrow \Sigma_*\text{--}{\mathcal U}_0$ be defined by $$ \sigma_nM = {\bar T}^n k_nM.$$ \end{defn} As ${\bar T}$ is exact and ${\bar T}^n k_{n-1}M = 0$, $\sigma_n M$ also equals ${\bar T}^n \bar k_nM$.
Thus, under the correspondence of \thmref{Un/Un-1 thm}, $\sigma_nM$ corresponds to the image of the ${\mathcal U}_{n-1}$--reduced composition factor $\bar k_nM$ in ${\mathcal U}_n/{\mathcal U}_{n-1}$. \begin{thm} \label{sigma thm} $\sigma_*$ satisfies the following properties. \\ \noindent{\bf (a)} \ $\sigma_n$ is left exact and commutes with filtered colimits.\\ \noindent{\bf (b)} \ $\sigma_n$ commutes with the functors $\Sigma^s$, $\Phi$, $nil_s$, and $\bar R_s$. \\ \noindent{\bf (c)} \ $\sigma_n$ preserves ${\mathcal Nil}$--reduced modules, ${\mathcal Nil}$--closed modules, and \\ ${\mathcal Nil}$--isomorphisms. \\ \noindent{\bf (d)} \ $\sigma_*$ is symmetric monoidal: $ \sigma_*(M \otimes N) = \sigma_*M \boxtimes \sigma_*N$. \\ \noindent{\bf (e)} \ For all $s$ and $n$, there is a natural isomorphism of ${\mathbb Z}/p[\Sigma_n]$--modules $$(\sigma_nM)^s \simeq \operatorname{Hom}_{{\mathcal U}}(F(1)^{\otimes n}, \bar R_sM).$$ \end{thm} \subsection{Organization of the paper} The next three sections respectively contain needed background material on abelian categories, unstable modules, and polynomial functors. \thmref{schwartz thm}, Schwartz's characterization of ${\mathcal U}_n$, is then proved in \secref{schwartz thm section}. Properties of the Krull filtration of a module are proved in \secref{krull mod section}, and some of these are used in our proof of the identification of ${\mathcal U}_n/{\mathcal U}_{n-1}$ given in \secref{quotient cat section}. In \secref{sym sequences section}, we discuss \thmref{sigma thm} and give examples illustrating it. \section{Background: abelian categories} \label{abelian categories} We recall some standard concepts regarding abelian categories, as in \cite{gabriel}. \subsection{Basic notions} All the categories in this paper satisfy Grothendieck's axioms AB1--AB5 \cite{weibel} and have a set of generators. Standard consequences include that such categories have enough injectives.
An object in an abelian category ${\mathcal C}$ is {\em Noetherian} if its poset of subobjects satisfies the ascending chain condition. ${\mathcal C}$ itself is said to be {\em locally Noetherian} if every object is the union of its Noetherian subobjects. Standard consequences include that direct sums of injectives are again injective, and objects admit injective envelopes. A full subcategory ${\mathcal B}$ of an abelian category ${\mathcal C}$ is {\em localizing} if it is closed under sub and quotient objects, extensions, and direct sums. $f: M \rightarrow N$ is a {\em ${\mathcal B}$-isomorphism} if $\ker f, \operatorname{coker} f \in {\mathcal B}$. The quotient category ${\mathcal C}/{\mathcal B}$ has the same objects as ${\mathcal C}$, with morphisms from $M$ to $N$ given by equivalence classes of triples $$ M \overset{f}{\hookleftarrow} M^{\prime} \xrightarrow{g} N^{\prime} \overset{h}{\twoheadleftarrow} N$$ with $\operatorname{coker} f, \ker h \in {\mathcal B}$. It is the initial category under ${\mathcal C}$ in which all ${\mathcal B}$-isomorphisms have been inverted. $M \in {\mathcal C}$ is {\em ${\mathcal B}$--reduced} if $\operatorname{Hom}_{{\mathcal C}}(N,M) = 0$ for all $N \in {\mathcal B}$, and is {\em ${\mathcal B}$--closed} if also $\operatorname{Ext}^1_{{\mathcal C}}(N,M) = 0$ for all $N \in {\mathcal B}$. The exact quotient functor $l: {\mathcal C} \rightarrow {\mathcal C}/{\mathcal B}$ has a right adjoint $r$. The counit of the adjunction $\epsilon_M: lr(M) \rightarrow M$ is always an isomorphism. The unit of the adjunction $\eta_M: M \rightarrow rl(M)$ is thus idempotent, and is called {\em localization away from ${\mathcal B}$} or {\em ${\mathcal B}$--closure}. The functor $k: {\mathcal C} \rightarrow {\mathcal B}$ right adjoint to the inclusion ${\mathcal B} \hookrightarrow {\mathcal C}$ can be computed by the formula $k(M) = \ker(\eta_M)$.
\subsection{Locally finite objects and the Krull filtration} An object in an abelian category ${\mathcal C}$ is {\em simple} if it has no non-zero proper subobjects, {\em finite} if it has a filtration of finite length with simple composition factors, and {\em locally finite} if it is the union of its finite subobjects. We let ${\mathcal C}_0$ denote the full subcategory consisting of the locally finite objects of ${\mathcal C}$. If ${\mathcal C}$ is locally Noetherian, ${\mathcal C}_0$ will be closed under extensions, and it follows that ${\mathcal C}_0$ is localizing. \begin{defn} \cite[p.382]{gabriel} The {\em Krull filtration} of a locally Noetherian abelian category ${\mathcal C}$ is the increasing sequence of localizing subcategories $$ {\mathcal C}_0 \subset {\mathcal C}_1 \subset {\mathcal C}_2 \subset \dots$$ recursively defined for $n\geq 1$ by letting ${\mathcal C}_n$ be the full subcategory of ${\mathcal C}$ whose objects are the objects in ${\mathcal C}$ that represent locally finite objects in ${\mathcal C}/{\mathcal C}_{n-1}$. \end{defn} \begin{rem} Gabriel defines ${\mathcal C}_{\lambda}$ for any ordinal $\lambda$. For example, ${\mathcal C}_{\omega}$ is defined as the smallest localizing category containing all the ${\mathcal C}_n$. \thmref{F(1) thm} implies that ${\mathcal U}_{\omega} = {\mathcal U}$, so the Krull filtration for ${\mathcal U}$ stops at this point. \end{rem} \section{Background: unstable modules} \label{U section} In this section we recall some basic material about unstable modules. A general reference for this is \cite{s2}. \subsection{The categories ${\mathcal U}$ and ${\mathcal K}$} The mod $p$ Steenrod algebra ${\mathcal A}$ is generated by $Sq^k$, $k\geq 0$, when $p=2$, and $P^k$, $k \geq 0$, together with the Bockstein $\beta$ when $p$ is odd, and satisfying the usual {Adem relations}.
The category ${\mathcal U}$ is then defined to be the full subcategory of ${\mathcal A}$--modules $M$ satisfying the {unstable condition}. When $p=2$ this means that, for all $x\in M$, $Sq^k x = 0$ if $k> |x|$. When $p$ is odd, the condition is that, for all $x\in M$ and $e = 0$ or $1$, $\beta^{e}P^k x = 0$ if $2k +e > |x|$. The abelian category ${\mathcal U}$ has a tensor product coming from the {Cartan formula}, and ${\mathcal K}$ is then defined to be the category of commutative algebras $K$ in ${\mathcal U}$ also satisfying the {restriction condition}: $Sq^{|x|}x = x^2$ for all $x \in K$ when $p=2$, and $P^{|x|/2}x = x^p$ for all even degree $x \in K$ when $p$ is odd. All of these definitions are, of course, motivated by the fact that, if $X$ is a topological space, its mod $p$ cohomology $H^*(X)$ is naturally an object in both ${\mathcal U}$ and ${\mathcal K}$. \subsection{The Frobenius functor $\Phi$} For $M \in {\mathcal U}$, $P_0: M \rightarrow M$ is defined by \begin{equation*} P_0 x = \begin{cases} Sq^k x & \text{if } k=|x| \text{ and } p=2 \\ \beta^eP^kx & \text{if } 2k + e = |x| \text{ and $p$ is odd}. \end{cases} \end{equation*} When $p=2$, the Frobenius functor $\Phi: {\mathcal U} \rightarrow {\mathcal U}$ is defined by letting \begin{equation*} (\Phi M)^{m} = \begin{cases} M^n & \text{if } m=2n \\ 0 & \text{otherwise, } \end{cases} \end{equation*} with $Sq^{2k}\phi(x) = \phi(Sq^kx)$, where $\phi(x) \in (\Phi M)^{2n}$ corresponds to $x \in M^n$. At odd primes, $\Phi: {\mathcal U} \rightarrow {\mathcal U}$ is defined by letting \begin{equation*} (\Phi M)^{m} = \begin{cases} M^{2n+e} & \text{if } m=2pn+2e, \text{ with } e=0,1 \\ 0 & \text{otherwise,} \end{cases} \end{equation*} with $P^{pk}\phi(x) = \phi(P^kx)$, and $P^{pk+1}\phi(x) = \phi(\beta P^kx)$ when $|x|$ is odd. $\Phi$ is an exact functor, and there is a natural transformation of unstable modules $$ \lambda: \Phi M\rightarrow M$$ defined by $\lambda(\phi(x)) = P_0x$.
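A dimension-level sketch of these definitions at $p = 2$ (illustrative only; the choice $M = {\mathbb Z}/2[x] = H^*(B{\mathbb Z}/2)$ and all names are our own): $\Phi$ doubles degrees, and on this $M$ the map $\lambda$ sends $\phi(x^n)$ to $P_0 x^n = Sq^n x^n = x^{2n}$, which is injective.

```python
# Dimension-level sketch (our own) for M = Z/2[x]: record dim M^d,
# apply Phi, which puts M^n in degree 2n, and model lambda on the
# monomial basis by phi(x^n) |-> Sq^n x^n = x^(2n).
TOP = 16
M = {d: 1 for d in range(TOP + 1)}  # dim M^d = 1 in every degree

def frobenius(dims):
    """(Phi M)^{2n} = M^n at p = 2; odd degrees vanish."""
    return {2 * d: dim for d, dim in dims.items()}

PhiM = frobenius(M)
assert all(d % 2 == 0 for d in PhiM)        # Phi M lives in even degrees
assert PhiM[10] == M[5]                     # (Phi M)^10 = M^5

lam = {n: 2 * n for n in range(TOP + 1)}    # lambda on basis monomials
assert len(set(lam.values())) == len(lam)   # injective on this module
print("Phi doubles degrees; lambda is injective on Z/2[x]")
```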
Let $\Omega: {\mathcal U} \rightarrow {\mathcal U}$ be left adjoint to $\Sigma$. Explicitly, $\Omega M$ is the largest unstable submodule of $\Sigma^{-1}M$. This has just one nonzero derived functor $\Omega^1$, and these can be calculated via an exact sequence $$ 0 \rightarrow \Sigma \Omega^1 M \rightarrow \Phi(M) \xrightarrow{\lambda} M \rightarrow \Sigma \Omega M \rightarrow 0.$$ $\lambda: \Phi M\rightarrow M$ is monic exactly when $M$ is ${\mathcal Nil}$--reduced, and, in this case, the evident iterated natural map $\lambda_k: \Phi^k M \rightarrow M$ is still monic. \subsection{The nilpotent filtration of a module} Let $nil_s: {\mathcal U} \rightarrow {\mathcal Nil}_s$ be right adjoint to the inclusion ${\mathcal Nil}_s \hookrightarrow {\mathcal U}$. Explicitly, $nil_sM \subset M$ is the largest submodule of $M$ contained in ${\mathcal Nil}_s$. We obtain a natural decreasing filtration of $M$: $$ M = nil_0M \supset nil_1M \supset nil_2M \supset nil_3M \supset \dots .$$ This filtration is complete as $nil_s M$ is $(s-1)$--connected. \begin{prop/def} $nil_sM/nil_{s+1}M = \Sigma^s R_sM$, where $R_sM$ is ${\mathcal Nil}$-reduced. \end{prop/def} For a proof see \cite[Lemma 6.4.1]{s2} or \cite[Prop. 2.2]{k1}. We let $\bar R_sM$ denote the ${\mathcal Nil}$--closure of $R_sM$. \begin{prop} \cite[Cor. 3.2]{k5} $\bar R_s: {\mathcal U} \rightarrow {\mathcal U}$ is left exact. \end{prop} \begin{prop} \cite[Prop. 2.11]{k1} If $M$ is locally finite, then the nilpotent filtration equals the skeletal filtration, so that $M^s = R_s M = \bar R_s M$. \end{prop} \subsection{Finitely generated modules} We need various results about unstable modules which are finitely generated over ${\mathcal A}$. \begin{thm} \label{finite gen thm} {\bf (a)} \ A submodule of a finitely generated unstable module is again finitely generated. Thus ${\mathcal U}$ is locally Noetherian.
\noindent{\bf (b)} \ The nilpotent filtration of a finitely generated module has finite length. \noindent{\bf (c)} \ A finitely generated unstable module represents a finite object in ${\mathcal U}/{\mathcal Nil}$. \end{thm} Proofs of these appear in \cite{s2} and \cite{k1}. \begin{cor} \label{gen cor} Any localizing subcategory of ${\mathcal U}$ will be generated by modules of the form $\Sigma^s M$, with $M$ finitely generated, ${\mathcal Nil}$--reduced, and representing a simple object in ${\mathcal U}/{\mathcal Nil}$. \end{cor} We will also need the following lemma. \begin{lem} \label{phi lemma} Let $M$ be ${\mathcal Nil}$--reduced and finitely generated, and let $i: N \hookrightarrow M$ be the inclusion of a submodule with $M/N \in {\mathcal Nil}$. Then, for $k\gg 0$, the inclusion $\lambda_k: \Phi^k M \hookrightarrow M$ factors through $i$. \end{lem} \begin{proof} Let $x_1, \dots, x_l$ generate $M$. Since $M/N \in {\mathcal Nil}$, there exists $k$ such that $P_0^k(x_i) \in N$ for all $i$. But then the image of $\lambda_k$ will be contained in $N$. \end{proof} \subsection{${\mathcal U}$--injectives and properties of $\bar T$} Let $J(n)$ be the $n$th `Brown--Gitler module': the finite injective representing $M \rightsquigarrow (M^n)^{\vee}$ \cite[\S2.3]{s2}. \begin{thm} $H^*(BV) \otimes J(n)$ is injective in ${\mathcal U}$, and every ${\mathcal U}$--injective is a direct summand of a direct sum of such modules. \end{thm} \begin{thm} {\bf (a)} $T$, and thus $\bar T$, is exact. \noindent{\bf (b)} The natural map $T(M \otimes N) \rightarrow TM \otimes TN$ is an isomorphism. \noindent{\bf (c)} $T$, and thus $\bar T$, preserves both ${\mathcal Nil}$--reduced and ${\mathcal Nil}$--closed modules. \noindent{\bf (d)} $T$, and thus $\bar T$, commutes with the following functors: $\Sigma^s$, $\Phi$, $nil_s$, $R_s$, and $\bar R_s$. \end{thm} All of this is in the literature: see \cite{L}, \cite{ls2}, \cite{lz1}, \cite{lz2}, \cite{k1}.
\section{Background: polynomial functors} \subsection{Definitions and examples} Recall that ${\mathcal F}$ is the category of functors from finite dimensional ${\mathbb Z}/p$--vector spaces to ${\mathbb Z}/p$--vector spaces. This is an abelian category in the standard way, e.g., $F \rightarrow G \rightarrow H$ is exact at $G$ means that $F(V) \rightarrow G(V) \rightarrow H(V)$ is exact at $G(V)$ for all $V$. Some objects in ${\mathcal F}$ are $S^n$, $H_n$, $P_W$, and $I_W$, defined by $S^n(V) = (V^{\otimes n})_{\Sigma_n}$, $H_n(V) = H_n(BV)$, $P_W(V) = {\mathbb Z}/p[\operatorname{Hom}(W,V)]$ and $I_W(V) = {\mathbb Z}/p^{\operatorname{Hom}(V,W)}$. Note that $H_0$ is the constant functor `${\mathbb Z}/p$', and $H_1$ is the identity functor $\operatorname{Id}$. Using Yoneda's lemma, one sees that $\operatorname{Hom}_{{\mathcal F}}(P_W,F) \simeq F(W)$, and thus $P_W$ is projective. Similarly $\operatorname{Hom}_{{\mathcal F}}(F, I_W) \simeq F(W)^{\vee}$, and thus $I_W$ is injective. Note also that $P_V \otimes P_W \simeq P_{V \oplus W}$ and $I_V \otimes I_W \simeq I_{V \oplus W}$. The functors $P_W$ and $I_W$ canonically split as $P_W \simeq {\mathbb Z}/p \oplus \bar P_W$ and $I_W \simeq {\mathbb Z}/p \oplus \bar I_W$, and we write $\bar P_{{\mathbb Z}/p}$ and $\bar I_{{\mathbb Z}/p}$ as $\bar P$ and $\bar I$. $F(V)$ is a canonical retract of $F(V \oplus {\mathbb Z}/p)$ and, as in the introduction, one defines the exact functor $\Delta: {\mathcal F} \rightarrow {\mathcal F}$ by $$ (\Delta F)(V) = F(V \oplus {\mathbb Z}/p)/F(V).$$ One easily checks that $\Delta$ has a left adjoint given by $F \mapsto F \otimes \bar P$, and a right adjoint given by $F \mapsto F \otimes \bar I$. $F$ is {\em polynomial of degree $n$} if $\Delta^{n+1}F = 0$. As explained in \cite{genrep1} or \cite{s2}, this agrees with the Eilenberg--MacLane definition used in \cite{hls1}.
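Since $F(V)$ is a canonical retract of $F(V \oplus {\mathbb Z}/p)$, at the level of dimensions $\dim (\Delta F)(V) = \dim F(V \oplus {\mathbb Z}/p) - \dim F(V)$, so iterating $\Delta$ takes finite differences of $f(d) = \dim F(({\mathbb Z}/p)^d)$. The following sketch (our own; a monomial basis gives $\dim S^n(({\mathbb Z}/p)^d) = \binom{d+n-1}{n}$ in any characteristic) checks that the $(n+1)$-st difference vanishes for $F = S^n$, consistent with $S^n \in {\mathcal F}^n$:

```python
# Sketch (our own): dim S^n((Z/p)^d) = C(d+n-1, n) via monomials, and
# dim(Delta F) is the finite difference of d |-> dim F((Z/p)^d).
# The (n+1)-st difference of a degree-n polynomial in d vanishes, so
# Delta^{n+1} S^n = 0 at the level of dimensions.
from math import comb

def diff(f):
    """Finite-difference operator mirroring dim(Delta F)(d) = f(d+1) - f(d)."""
    return lambda d: f(d + 1) - f(d)

for n in range(1, 6):
    f = lambda d, n=n: comb(d + n - 1, n)  # dim S^n of a d-dimensional space
    g = f
    for _ in range(n + 1):
        g = diff(g)
    assert all(g(d) == 0 for d in range(8))
print("Delta^{n+1} S^n vanishes on dimensions for n = 1..5")
```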
One then lets ${\mathcal F}^n$ be the category of all degree $n$ polynomial functors and ${\mathcal F}^{an}$ be the category of all locally polynomial functors. (As shown in \cite{genrep1}, ${\mathcal F}^{an}$ is also the locally finite category ${\mathcal F}_0 \subset {\mathcal F}$.) As examples, $\operatorname{Id}^{\otimes n}, S^n, H_n \in {\mathcal F}^n$, $\bar P_W$ has no nonzero polynomial subfunctors, while $I_W \in {\mathcal F}^{an}$. We explain this last fact. There is an identification $$ I_{{\mathbb Z}/p}(V) = S^*(V)/(x^p-x)$$ and thus $I_W$ is visibly a quotient of the sum of the polynomial functors $$ V \mapsto S^n(\operatorname{Hom}(W,V)).$$ \subsection{The polynomial filtration of a functor} The inclusion ${\mathcal F}^n \hookrightarrow {\mathcal F}$ has both a left adjoint $q_n$ and a right adjoint $p_n$. Explicitly $$ q_nF = \operatorname{coker} \{ \epsilon_F: \Delta^{n+1}F \otimes \bar P^{\otimes (n+1)} \rightarrow F\}$$ and $$ p_nF = \ker \{ \eta_F: F \rightarrow \Delta^{n+1}F \otimes \bar I^{\otimes (n+1)} \}.$$ \begin{thm} \label{poly filt thm} {\bf (a)} \ $\displaystyle p_nI_W(V) = S^{* \leq n}(\operatorname{Hom}(W,V))/(x^p-x)$. \\ \noindent{\bf (b)} \ $\displaystyle \sum_{l+m=n} p_lF \otimes p_mG = p_n(F \otimes G)$. \end{thm} Both of these statements are well known. For (a), see \cite[Lemma 7.6.6]{hls1} or the discussion in \cite[\S 6]{genrepsurvey}. Statement (b) is then a consequence as follows. We are asserting that two natural filtrations on $F \otimes G$ agree, with the one filtration clearly including in the other. From (a), one can visibly see that these filtrations agree when $F$ and $G$ are sums of $I_W$'s. For the general case, one finds exact sequences $$ 0 \rightarrow F \rightarrow I_0 \rightarrow I_1$$ and $$ 0 \rightarrow G \rightarrow J_0 \rightarrow J_1,$$ where $I_0$, $I_1$, $J_0$, and $J_1$ are all sums of $I_W$'s.
Tensoring these sequences together yields an exact sequence $$ 0 \rightarrow F \otimes G \rightarrow I_0 \otimes J_0 \rightarrow I_1 \otimes J_0 \oplus I_0 \otimes J_1.$$ The two filtrations agree on the last two terms, and thus on $F\otimes G$. \subsection{The equivalence ${\mathcal U}/{\mathcal Nil} \simeq {\mathcal F}^{an}$} \label{U/Nil subsection} One has adjoint functors $$ {\mathcal U} \begin{array}{c} l \\[-.08in] \longrightarrow \\[-.1in] \longleftarrow \\[-.1in] r \end{array} {\mathcal F}$$ defined by $l(M)(V) = \operatorname{Hom}_{{\mathcal U}}(M,H^*(BV))^{\vee}$ and $r(F)^n = \operatorname{Hom}_{{\mathcal F}}(H_n,F)$. As examples, $l(F(n)) = H_n$, and $r(I_W) = H^*(BW)$. \begin{thm} \cite{hls1} The functor $l$ is exact, and $l$ and $r$ induce an equivalence of abelian categories $$ {\mathcal U}/{\mathcal Nil} \begin{array}{c} l \\[-.08in] \longrightarrow \\[-.1in] \longleftarrow \\[-.1in] r \end{array} {\mathcal F}^{an}.$$ \end{thm} It follows that $M \rightarrow rlM$ is ${\mathcal Nil}$--closure. \begin{prop} \cite{hls1} There are natural isomorphisms $$ l(M \otimes N) \simeq l(M) \otimes l(N) \text{ and } r(F \otimes G) \simeq r(F) \otimes r(G).$$ \end{prop} \begin{prop} There are natural isomorphisms $$ l (\bar T M) \simeq \Delta l(M) \text{ and } r(\Delta F) \simeq \bar T r(F).$$ \end{prop} The first of these is easily checked, and the second formally follows, once one knows that $\bar T$ preserves ${\mathcal Nil}$--closed modules. \section{Proof of \thmref{schwartz thm}} \label{schwartz thm section} In this section, we prove Schwartz' characterization of ${\mathcal U}_n$: $${\mathcal U}_n = {\mathcal U}_n^T,$$ where ${\mathcal U}_n^T$ is the full subcategory of ${\mathcal U}$ with objects $\{ M \in {\mathcal U} \ | \ {\bar T}^{n+1}M = 0 \}$. We prove this by induction on $n$.
As our inductive step needs a lemma (\corref{deg 0 cor} below) that is at the heart of the $n=0$ case, we begin with the proof that ${\mathcal U}_0 = {\mathcal U}_0^T$, roughly following the proofs in \cite{ls2,s2}. It is easy to characterize modules in ${\mathcal U}_0$, noting that any module has its decreasing skeletal filtration, with subquotient modules concentrated in one degree. \begin{lem} \ Let $M$ be an unstable module. $M$ is simple if and only if it is isomorphic to $\Sigma^s {\mathbb Z}/p$ for some $s$. $M$ is finite in the categorical sense if and only if it is finite. $M$ is locally finite if and only if ${\mathcal A}\cdot x \subseteq M$ is finite for all $x \in M$. \end{lem} \begin{prop} If $M$ is locally finite, then $\bar T M = 0$. Thus ${\mathcal U}_0 \subseteq {\mathcal U}_0^T$. \end{prop} \begin{proof} As $\bar T$ is exact and commutes with suspensions and directed colimits, this follows from the lemma and the calculation that $\bar T {\mathbb Z}/p = 0$. \end{proof} \begin{prop} If $\bar T M = 0$, then $M$ is locally finite. Thus ${\mathcal U}_0^T \subseteq {\mathcal U}_0$. \end{prop} \begin{proof}[Sketch proof] It suffices to show this when $M$ is finitely generated. Such an $M$ embeds in an injective module of the form $\displaystyle \bigoplus_{i=1}^r H^*(BV_i) \otimes J(n_i)$. Meanwhile, $\bar T M = 0$ implies that $\operatorname{Hom}_{{\mathcal U}}(M, \tilde H^*(BV) \otimes J(n)) = 0$ for all $V$ and $n$. Thus $M$ will embed in $\displaystyle \bigoplus_{i=1}^r H^0(BV_i) \otimes J(n_i) = \bigoplus_{i=1}^r J(n_i)$ and so is finite. \end{proof} \begin{cor} \label{deg 0 cor} If $M$ is ${\mathcal Nil}$--reduced and $\bar T M = 0$, then $M$ is concentrated in degree 0. \end{cor} This corollary is used in the last step of the proof of the following lemma.
\begin{lem} \label{omega lemma} If $M$ is ${\mathcal Nil}$--reduced, then $$\bar T^{n+1}M = 0 \Leftrightarrow \bar T^n\Omega M = 0.$$ \end{lem} \begin{proof} If $M$ is ${\mathcal Nil}$--reduced, there is a short exact sequence $$ 0 \rightarrow \Phi M \rightarrow M \rightarrow \Sigma \Omega M \rightarrow 0.$$ Since $\bar T^n$ is exact and commutes with $\Phi$ and $\Sigma$, one deduces that \begin{equation*} \begin{split} \bar T^n \Omega M = 0 & \Leftrightarrow \Phi \bar T^n M \simeq \bar T^n M \\ & \Leftrightarrow \bar T^n M \text{ is concentrated in degree 0} \\ & \Leftrightarrow \bar T^{n+1} M = 0. \end{split} \end{equation*} \end{proof} Armed with this lemma, we now give the inductive step of the proof that ${\mathcal U}_n = {\mathcal U}_n^T$. So assume by induction that ${\mathcal U}_{n-1} = {\mathcal U}_{n-1}^T$. \begin{proof}[Proof that ${\mathcal U}_n \subseteq {\mathcal U}_n^T$] A simple object in ${\mathcal U}/{\mathcal U}_{n-1}$ can be represented by $\Sigma^s M$, where $M$ is reduced. We show that then $\Sigma^s M \in {\mathcal U}_n^T$. Consider the exact sequence $$ 0 \rightarrow \Sigma^s \Phi M \rightarrow \Sigma^s M \rightarrow \Sigma^{s+1} \Omega M \rightarrow 0.$$ As $\Sigma^s M$ is simple in ${\mathcal U}/{\mathcal U}_{n-1}$, either $\Sigma^s \Phi M \in {\mathcal U}_{n-1}$ or $\Sigma^{s+1} \Omega M \in {\mathcal U}_{n-1}$. In the first case, we have \begin{equation*} \begin{split} \Sigma^s \Phi M \in {\mathcal U}_{n-1} & \Rightarrow \bar T^n \Sigma^s \Phi M = 0 \Rightarrow \Phi \bar T^n M = 0 \\ & \Rightarrow \bar T^n M = 0 \Rightarrow \bar T^n \Sigma^s M = 0 \\ & \Rightarrow \Sigma^s M \in {\mathcal U}_{n-1}. \end{split} \end{equation*} But this contradicts that $\Sigma^sM$ is nonzero in ${\mathcal U}/{\mathcal U}_{n-1}$. Thus $\Sigma^{s+1} \Omega M \in {\mathcal U}_{n-1}$.
But then \begin{equation*} \begin{split} \Sigma^{s+1} \Omega M \in {\mathcal U}_{n-1} & \Rightarrow \bar T^n \Sigma^{s+1} \Omega M = 0 \Rightarrow \bar T^n \Omega M = 0 \\ & \Rightarrow \bar T^{n+1} M = 0 \Rightarrow \bar T^{n+1} \Sigma^s M = 0 \\ & \Rightarrow \Sigma^s M \in {\mathcal U}_{n}^T. \end{split} \end{equation*} \end{proof} \begin{proof}[Proof that ${\mathcal U}_n^T \subseteq {\mathcal U}_n$] Suppose that $\Sigma^s M \in {\mathcal U}_n^T$, with $M$ finitely generated, ${\mathcal Nil}$--reduced, and representing a simple object in ${\mathcal U}/{\mathcal Nil}$. We show that then $\Sigma^s M$ is either in ${\mathcal U}_{n-1}$ or represents a simple object in ${\mathcal U}/{\mathcal U}_{n-1}$, and is thus in ${\mathcal U}_n$. Using \lemref{omega lemma}, $\Sigma^s M \in {\mathcal U}_n^T$ implies that $\Omega M \in {\mathcal U}_{n-1}^T$, so that $\Phi M \rightarrow M$ is a ${\mathcal U}_{n-1}^T$--isomorphism. It follows that, for all $k$, $\Sigma^s \Phi^k M \rightarrow \Sigma^s M$ will be a ${\mathcal U}_{n-1}^T$--isomorphism (and thus a ${\mathcal U}_{n-1}$--isomorphism). Any nonzero sub-object of $\Sigma^s M$ has the form $\Sigma^s N$ with $0 \neq N < M$. As $M$ is ${\mathcal Nil}$--reduced and simple in ${\mathcal U}/{\mathcal Nil}$, $M/N \in {\mathcal Nil}$. By \lemref{phi lemma}, there exists $k>0$ such that $\Phi^k M \subset N$. Since $\Sigma^s M/\Sigma^s \Phi^kM \in {\mathcal U}_{n-1}$, we deduce that $\Sigma^s M/\Sigma^s N \in {\mathcal U}_{n-1}$. \end{proof} \begin{rem} We comment on how the proof of \thmref{schwartz thm} differs from the one outlined in \cite{s2}. The proof in \cite{s2} makes use of the theorem that ${\mathcal U}/{\mathcal Nil} \simeq {\mathcal F}^{an}$ together with substantial analysis of how the polynomial filtration of ${\mathcal F}^{an}$ is reflected in the category of reduced modules.
Our proof instead uses \thmref{finite gen thm}(c), which says that finitely generated modules represent finite objects in ${\mathcal U}/{\mathcal Nil}$. The proof of this also uses that ${\mathcal U}/{\mathcal Nil} \simeq {\mathcal F}^{an}$, but replaces the analysis of the polynomial filtration by the observation that the functor in ${\mathcal F}$ given by $V \mapsto H_n(BV)$ is a finite functor, which has a very elementary proof \cite[Prop. 4.10]{genrep1}. \end{rem} \section{Properties of the Krull filtration of a module} \label{krull mod section} For simplicity, we write $\bar H$ for $\tilde H^*(B{\mathbb Z}/p)$. Recall that $M \in {\mathcal U}_n$ if and only if $\bar T^{n+1}M = 0$, and $\bar T^{n+1}$ is both exact and left adjoint to the functor $M \mapsto M \otimes \bar H^{\otimes n+1}$. It follows formally that \begin{equation*} \label{kn defn} k_nM = \ker\{ \eta_M: M \rightarrow \bar T^{n+1} M \otimes \bar H^{\otimes n+1}\}, \end{equation*} where $\eta_M$ is the unit of the adjunction. We run through proofs of various properties of $k_nM$. \\ \noindent{\bf (a)} \ $k_n$ is left exact and commutes with filtered colimits. \begin{proof} This is immediate. \end{proof} \noindent{\bf (b)} \ $\displaystyle \bigcup_{n=0}^{\infty} k_nM = M$. \begin{proof} One easily computes that $\bar T F(n) = F(n-1)$, so that $F(n) \in {\mathcal U}_n$. As every unstable module is a quotient of a sum of $F(n)$'s, the claim follows. \end{proof} \noindent{\bf (c)} \ $k_n(N \otimes M) \simeq N \otimes k_nM$ if $N \in {\mathcal U}_0$. In particular, $k_n\Sigma^s M = \Sigma^s k_nM$. \begin{proof} As $\bar TN=0$, $$\eta_{N \otimes M}: N \otimes M \rightarrow \bar T^{n+1}(N \otimes M) \otimes \bar H^{\otimes n+1}$$ identifies with $$ N \otimes \eta_M: N \otimes M \rightarrow N \otimes \bar T^{n+1}M \otimes \bar H^{\otimes n+1}.$$ \end{proof} \noindent{\bf (d)} \ $k_n\Phi M \simeq \Phi k_nM$.
\begin{proof} As $\Phi$ commutes with tensor products and $\bar T$, one has a commutative diagram \begin{equation*} \xymatrix{ \Phi M \ar@{=}[d] \ar[r]^-{\Phi(\eta_M)} & \Phi (\bar T^{n+1} M \otimes \bar H^{\otimes n+1})\ar[r]^-{\sim} & \bar T^{n+1} \Phi M \otimes (\Phi \bar H)^{\otimes n+1} \ar[d] \\ \Phi M \ar[rr]^-{\eta_{\Phi M}} && \bar T^{n+1} \Phi M \otimes \bar H^{\otimes n+1} } \end{equation*} As $\Phi \bar H \rightarrow \bar H$ is monic, the kernels of the two horizontal maps are equal. The kernel of the bottom map is $k_n\Phi M$, and, since $\Phi$ is exact, the kernel of the top map is $\Phi k_n M$. \end{proof} \noindent{\bf (e)} \ $k_n nil_s M \simeq nil_s k_nM$. \begin{proof} One easily checks that $k_n nil_s M \simeq k_nM \cap nil_s M \simeq nil_s k_nM$. \end{proof} \noindent{\bf (f)} \ If $M = r(F)$, then $k_n M = r(p_n F)$. Thus if $M$ is ${\mathcal Nil}$--closed, so is $k_nM$. \begin{proof} Applying the left exact functor $r$ to the exact sequence $$ 0 \rightarrow p_n F \rightarrow F \rightarrow \Delta^{n+1} F \otimes \bar I^{\otimes n+1}$$ shows that $r(p_n F)$ is the kernel of the map $$ M \rightarrow r(\Delta^{n+1} F \otimes \bar I^{\otimes n+1}).$$ But since $r$ commutes with tensor products and $r \circ \Delta^{n+1} = \bar T^{n+1} \circ r$, this map rewrites as $$ M \rightarrow \bar T^{n+1} M \otimes \bar H^{\otimes n+1},$$ which has kernel $k_n M$. \end{proof} \noindent{\bf (g)} \ $k_n$ preserves ${\mathcal Nil}$--reduced modules. \begin{proof} This follows from (f), noting that ${\mathcal Nil}$--reduced modules are submodules of ${\mathcal Nil}$--closed modules, and $k_n$ preserves monomorphisms. \end{proof} \noindent{\bf (h)} \ If $F = l(M)$, then $p_nF = l(k_nM)$. It follows that $k_n$ preserves ${\mathcal Nil}$--isomorphisms. 
\begin{proof} Applying the exact functor $l$ to the exact sequence $$ 0 \rightarrow k_n M \rightarrow M \rightarrow \bar T^{n+1} M \otimes \bar H^{\otimes n+1}$$ shows that $l(k_n M)$ is the kernel of the map $$ F \rightarrow l(\bar T^{n+1} M \otimes \bar H^{\otimes n+1}).$$ But since $l$ commutes with tensor products and $\Delta^{n+1} \circ l= l \circ \bar T^{n+1}$, this map rewrites as $$ F \rightarrow \Delta^{n+1} F \otimes \bar I^{\otimes n+1},$$ which has kernel $p_n F$. A map $f: M \rightarrow N$ in ${\mathcal U}$ is a ${\mathcal Nil}$--isomorphism precisely when $l(f)$ is an isomorphism. When this happens, $l(k_nf)=p_nl(f)$ will be an isomorphism, so $k_nf$ will be a ${\mathcal Nil}$--isomorphism. \end{proof} \noindent{\bf (i)} \ The natural map $\bar R_sk_nM \rightarrow k_n\bar R_sM$ is an isomorphism. \begin{proof} Applying the left exact functor $\bar R_s$ to the exact sequence $$ 0 \rightarrow k_nM \rightarrow M \rightarrow \bar T^{n+1} M \otimes \bar H^{\otimes n+1}$$ shows that $\bar R_sk_nM$ is the kernel of the map $$ \bar R_sM \rightarrow \bar R_s(\bar T^{n+1} M \otimes \bar H^{\otimes n+1}).$$ But since $\bar H$ is ${\mathcal Nil}$--closed, and $\bar R_s$ commutes with $\bar T$, this map rewrites as $$ \bar R_sM \rightarrow \bar T^{n+1} \bar R_s M \otimes \bar H^{\otimes n+1},$$ which has kernel $k_n\bar R_sM$. \end{proof} \noindent{\bf (j)} \ $\displaystyle \sum_{l+m=n} k_lM \otimes k_mN = k_n(M \otimes N)$. \begin{proof} As with \thmref{poly filt thm}, this says that two natural filtrations of $M \otimes N$ agree, with one filtration certainly included in the other. Combined with (f), \thmref{poly filt thm} says that this is true if both modules are ${\mathcal Nil}$--closed. Then (c) implies that the statement is also true if both $M$ and $N$ are sums of ${\mathcal Nil}$--closed modules tensored with locally finite modules. This includes all ${\mathcal U}$--injectives.
The statement then holds for all modules, using the same argument as in the proof of \thmref{poly filt thm}. \end{proof} From the above, one can deduce that the natural map $R_sk_nM \rightarrow k_nR_sM$ is always an inclusion with cokernel in ${\mathcal Nil}$, but the next example shows that this need {\em not} be an isomorphism. \begin{ex} \label{Rs ex} With $p=2$, let $M \in {\mathcal U}$ be defined by the pullback square \begin{equation*} \xymatrix{ M \ar[d] \ar[r] & \Phi^2(F(1)) \ar@{->>}[d] \\ \Sigma F(3) \ar@{->>}[r] & \Sigma^4 {\mathbb Z}/2. } \end{equation*} We claim that, for this $M$, the natural monomorphism $R_0(k_1(M)) \rightarrow k_1(R_0(M))$ identifies with the proper inclusion $ \Phi^3(F(1)) \hookrightarrow \Phi^2(F(1))$, and is thus {\em not} an isomorphism. To see this, recall that $R_0M = M/nil_1 M$. Applying $R_0$ to the short exact sequence $$0 \rightarrow \Sigma(F(3)^{\geq 4}) \rightarrow M \rightarrow \Phi^2F(1) \rightarrow 0$$ shows that $R_0M$, and thus $k_1R_0M$, equals $\Phi^2F(1)$. Meanwhile, applying the left exact functor $k_1$ to the short exact sequence $$ 0 \rightarrow \Phi^3F(1) \rightarrow M \rightarrow \Sigma F(3) \rightarrow 0$$ shows that $k_1M$, and thus $R_0k_1M$, equals $\Phi^3F(1)$. \end{ex} \begin{rem} \label{no adjoint remark} Being a right adjoint, $k_n$ is left exact. However, it is easily seen that $k_n$ does not in general preserve surjections. To see this, let $M^{>r} \subset M$ denote the submodule of all elements of degree greater than $r$. Then $F(1) \rightarrow F(1)/F(1)^{>1} = \Sigma {\mathbb Z}/p$ is a surjection, $k_0(F(1)) = 0$, but $k_0(F(1)/F(1)^{>1}) = \Sigma {\mathbb Z}/p$. A similar argument shows that ${\mathcal U}_n \hookrightarrow {\mathcal U}$ does not admit a left adjoint. To see this, note that, for all $M \in {\mathcal U}$ and all $r$, $M/M^{>r}$ is in ${\mathcal U}_0$. Now suppose a left adjoint $q_n: {\mathcal U} \rightarrow {\mathcal U}_n$ exists for some $n$.
It would follow that $M \rightarrow M/M^{>r}$ would factor through the natural map $M \rightarrow q_n(M)$ for all $r$, and we could deduce that $M \rightarrow q_n(M)$ would be monic. Since this would be true for all $M \in {\mathcal U}$, it would follow that ${\mathcal U}_n = {\mathcal U}$. \end{rem} We end this section with a discussion of \propref{pnH prop}: $k_n \bar H$ is the span of products of elements in $F(1)$ of length at most $n$. This is essentially the content of \cite[Lemma 7.6.6]{hls1}, and a proof goes along the following lines. $I = I_{{\mathbb Z}/p}$ is a Hopf algebra object in ${\mathcal F}$, with addition ${\mathbb Z}/p \oplus {\mathbb Z}/p \rightarrow {\mathbb Z}/p$ inducing the coproduct $$ \Psi: I_{{\mathbb Z}/p} \rightarrow I_{{\mathbb Z}/p \oplus {\mathbb Z}/p} \simeq I_{{\mathbb Z}/p} \otimes I_{{\mathbb Z}/p}.$$ Then one observes that $p_n\bar I$ is precisely the kernel of the iterated reduced coproduct $$ \Psi^n: \bar I \rightarrow \bar I^{\otimes n+1}.$$ Applying $r$ to this, it follows that $k_n\bar H$ is the kernel of the iterated reduced coproduct $$ \Psi^n: \bar H \rightarrow \bar H^{\otimes n+1},$$ i.e., is the $n$th stage of the primitive filtration of $H^*(B{\mathbb Z}/p)$. This is then checked to be the span of products of elements in $F(1)$ of length at most $n$. \section{The identification of ${\mathcal U}_n/{\mathcal U}_{n-1}$} \label{quotient cat section} \thmref{Un/Un-1 thm} is a consequence of the next two lemmas. \begin{lem} \label{adjoint lem} The functor $\Sigma_n\text{--}{\mathcal U}_0 \rightarrow {\mathcal U}_n$ given by $$N \mapsto (N \otimes F(1)^{\otimes n})^{\Sigma_n}$$ is right adjoint to ${\bar T}^n: {\mathcal U}_n \rightarrow \Sigma_n\text{--}{\mathcal U}_0$. \end{lem} \begin{lem} \label{counit lem} The counit of the adjunction $$ {\bar T}^n((N \otimes F(1)^{\otimes n})^{\Sigma_n}) \rightarrow N$$ is an isomorphism for all $N \in \Sigma_n\text{--}{\mathcal U}_0$.
\end{lem} \begin{proof}[Proof of \lemref{adjoint lem}] Given $M \in {\mathcal U}_n$ and $N \in \Sigma_n\text{--}{\mathcal U}_0$, we compute: \begin{equation*} \begin{split} \operatorname{Hom}_{\Sigma_n\text{--}{\mathcal U}_0}({\bar T}^n M, N) & = \operatorname{Hom}_{{\mathcal U}}^{\Sigma_n}({\bar T}^n M, N) \\ & = \operatorname{Hom}_{{\mathcal U}}^{\Sigma_n}(M, N \otimes \bar H^{\otimes n}) \\ & = \operatorname{Hom}_{{\mathcal U}}^{\Sigma_n}(M, k_n(N \otimes \bar H^{\otimes n})) \text{\ (since } M \in {\mathcal U}_n) \\ & = \operatorname{Hom}_{{\mathcal U}}^{\Sigma_n}(M, N \otimes F(1)^{\otimes n}) \\ & = \operatorname{Hom}_{{\mathcal U}_n}(M, (N \otimes F(1)^{\otimes n})^{\Sigma_n}). \\ \end{split} \end{equation*} \end{proof} \begin{proof}[Proof of \lemref{counit lem}] Starting from the calculation that ${\bar T} F(1) \simeq {\mathbb Z}/p$, it is easy to check that ${\bar T}^n (F(1)^{\otimes n}) = {\mathbb Z}/p[\Sigma_n]$. Then one has isomorphisms \begin{equation*} \begin{split} {\bar T}^n((N \otimes F(1)^{\otimes n})^{\Sigma_n}) & \simeq ({\bar T}^n(N \otimes F(1)^{\otimes n}))^{\Sigma_n} \\ & \simeq (N \otimes {\bar T}^n(F(1)^{\otimes n}))^{\Sigma_n} \\ & \simeq (N \otimes {\mathbb Z}/p[\Sigma_n])^{\Sigma_n} \\ & \simeq N. \end{split} \end{equation*} \end{proof} \begin{proof}[Proof of \thmref{recursive Un thm}] Given $M \in {\mathcal U}$, suppose that there exist $K,Q \in {\mathcal U}_{n-1}$, $N \in \Sigma_n\text{--}{\mathcal U}_0$, and an exact sequence $$ 0 \rightarrow K \rightarrow M \rightarrow (N \otimes F(1)^{\otimes n})^{\Sigma_n} \rightarrow Q \rightarrow 0.$$ Applying the exact functor ${\bar T}^n$ to this sequence shows that ${\bar T}^n M \simeq N$, and then applying ${\bar T}$ once more shows that ${\bar T}^{n+1}M = 0$, so that $M \in {\mathcal U}_n$. 
Conversely, given $M \in {\mathcal U}_n$, \thmref{Un/Un-1 thm} tells us that the unit of the adjunction $$ M \rightarrow ({\bar T}^n M \otimes F(1)^{\otimes n})^{\Sigma_n}$$ has kernel and cokernel in ${\mathcal U}_{n-1}$. \end{proof} \begin{ex} \label{F(n) example} ${\mathcal U}_n$ is generally not generated as a localizing category by the modules $\Sigma^s F(m)$ with $m \leq n$. If it were, then ${\mathcal F}^n$ would be generated as a localizing category by the functors $H_m$, for $m\leq n$. But this is not always true. The first example of this is ${\mathcal F}^3$, when $p=2$. There is a splitting of functors $$ \Lambda^2(V) \otimes V \simeq \Lambda^3(V) \oplus L(V).$$ $L$ is a simple object in ${\mathcal F}^3$ which is not among the composition factors of the functors $H_m$, $m\leq 3$: these are just the functors $\Lambda^m$, $m \leq 3$. \end{ex} \section{A functor to symmetric sequences in ${\mathcal U}_0$} \label{sym sequences section} Recall that $\Sigma_*\text{--}{\mathcal U}_0$ is the category of symmetric sequences of locally finite modules, and that the functor $$\sigma_*: {\mathcal U} \longrightarrow \Sigma_*\text{--}{\mathcal U}_0$$ is defined by $\sigma_nM = {\bar T}^n k_nM$. We prove the various properties of this functor asserted in \thmref{sigma thm}. Properties listed in parts (a)--(c) of the theorem are true because the analogous properties have been shown to be true for the functors ${\bar T}$ and $k_n$. \\ \noindent{\bf (d)} \ $\sigma_*$ is symmetric monoidal: $ \sigma_*(M \otimes N) = \sigma_*M \boxtimes \sigma_*N$. \begin{proof} Recall that $\sigma_nM = {\bar T}^n k_nM = {\bar T}^n \bar k_nM$, where $\bar k_nM = k_nM/k_{n-1}M$.
Then we compute: \begin{equation*} \begin{split} \sigma_n(M\otimes N) & = {\bar T}^n\bar k_n(M\otimes N) \\ & =\bigoplus_{l+m=n}{\bar T}^n(\bar k_lM \otimes \bar k_mN) \\ & =\bigoplus_{l+m=n} \operatorname{Ind}_{\Sigma_l \times \Sigma_m}^{\Sigma_n}({\bar T}^l\bar k_lM \otimes {\bar T}^m\bar k_mN) \\ & = (\sigma_*M \boxtimes \sigma_*N)_n. \end{split} \end{equation*} \end{proof} \noindent{\bf (e)} \ For all $s$ and $n$, there is a natural isomorphism of ${\mathbb Z}/p[\Sigma_n]$--modules $$(\sigma_nM)^s \simeq \operatorname{Hom}_{{\mathcal U}}(F(1)^{\otimes n}, \bar R_sM).$$ \begin{proof} We first look at the special case when $M$ is ${\mathcal Nil}$--closed. We claim that then $\sigma_n M$ is concentrated in degree 0, and $$(\sigma_nM)^0 \simeq \operatorname{Hom}_{{\mathcal U}}(F(1)^{\otimes n}, M).$$ To see this, note that $M = r(F)$ for some $F \in {\mathcal F}$. Thus $$ \sigma_n M = {\bar T}^n k_n r(F)= r(\Delta^n p_n F).$$ Since $p_nF$ is polynomial of degree $n$, $\Delta^n p_nF$ will be polynomial of degree 0, and is thus constant. It follows that \begin{multline*} r(\Delta^n p_n F) = \operatorname{Hom}_{{\mathcal F}}({\mathbb Z}/p, \Delta^n p_n F) = \operatorname{Hom}_{{\mathcal F}}(\bar P^{\otimes n}, p_n F) \\ = \operatorname{Hom}_{{\mathcal F}}(q_n(\bar P^{\otimes n}), F) = \operatorname{Hom}_{{\mathcal F}}(\operatorname{Id}^{\otimes n}, F) = \operatorname{Hom}_{{\mathcal U}}(F(1)^{\otimes n}, M). \end{multline*} Now we consider the case of a general unstable module $M$. For locally finite modules, the skeletal filtration equals the nilpotent filtration, so $$ (\sigma_nM)^s = \bar R_s \sigma_n M = \sigma_n \bar R_s M = \operatorname{Hom}_{{\mathcal U}}(F(1)^{\otimes n}, \bar R_s M).$$ \end{proof} We make a couple of general comments regarding the values of $\sigma_*M$. Firstly, from part (e) of \thmref{sigma thm}, one sees that if the nilpotence length of $M$ is finite, there will be a uniform bound on the nonzero degrees of the modules $\sigma_nM$.
More precisely, if $nil_{s+1}M = 0$, then $(\sigma_nM)^t = 0$ for all $t>s$ and all $n$. This is the case if $M$ is a module over a Noetherian unstable algebra $K$ which is finitely generated using both the $K$--module and ${\mathcal A}$--module structures \cite[Lemma 6.3.10]{meyer}. (The more accessible paper \cite{henn} has a weaker version, and both \cite{hls2} and \cite{k5} give examples of bounds on nilpotence.) Our second comment is that if $K$ is an unstable algebra, then $\sigma_*K$ will be an algebra in symmetric sequences. Explicitly, the multiplication on $K$ induces $\Sigma_i \times \Sigma_j$ equivariant maps $\sigma_iK \otimes \sigma_jK \rightarrow \sigma_{i+j}K$ which are suitably associative, commutative, and unital. In particular, each $\sigma_nK$ will be a module over the unstable algebra $\sigma_0K$, the locally finite part of $K$. We end this section with various examples, all restricted to $p=2$. \begin{prop} Suppose $M$ is `homogeneous of degree $m$', i.e., $\bar k_m M = M$. Then ${\bar T}^m M \in \Sigma_m\text{--}{\mathcal U}_0$, and $$ \sigma_nM = \begin{cases} {\bar T}^mM & \text{if } n=m \\ 0 & \text{otherwise}. \end{cases} $$ \end{prop} \begin{proof} The hypothesis is equivalent to saying that the unit of the adjunction $M \rightarrow ({\bar T}^m M \otimes F(1)^{\otimes m})^{\Sigma_m}$ is an embedding, and the result follows from \thmref{Un/Un-1 thm}. \end{proof} \begin{ex} As special cases of the proposition, one has $$ \sigma_nF(m) = \begin{cases} {\mathbb Z}/2 & \text{if } n=m \\ 0 & \text{otherwise}, \end{cases} $$ and $$ \sigma_nF(1)^{\otimes m} = \begin{cases} {\mathbb Z}/2[\Sigma_m] & \text{if } n=m \\ 0 & \text{otherwise}. \end{cases} $$ \end{ex} Let $U: {\mathcal U} \rightarrow {\mathcal K}$ be the usual free functor, left adjoint to the forgetful functor. The next proposition describes $\sigma_*K$ when $K= U(M)$ where $M$ is as in the last example. 
To state this, we define a functor $$Sh^m: \Sigma_m\text{--}{\mathcal U}_0 \rightarrow \text{algebras in } \Sigma_*\text{--}{\mathcal U}_0$$ as follows. Given a $\Sigma_m$--module $N$, let $$ Sh^m_n(N) = \begin{cases} \operatorname{Ind}_{\Sigma_k \wr \Sigma_m}^{\Sigma_{km}}(N^{\otimes k}) & \text{if } n=km \\ 0 & \text{otherwise}. \end{cases} $$ Given $i+j=k$, the multiplication map $ Sh^m_{im}(N) \otimes Sh^m_{jm}(N) \rightarrow Sh^m_{km}(N)$ is defined by the composite $$\operatorname{Ind}_{\Sigma_i \wr \Sigma_m}^{\Sigma_{im}} N^{\otimes i} \otimes \operatorname{Ind}_{\Sigma_j \wr \Sigma_m}^{\Sigma_{jm}} N^{\otimes j} \rightarrow \operatorname{Ind}_{(\Sigma_i \times \Sigma_j) \wr \Sigma_m}^{\Sigma_{im} \times \Sigma_{jm}} N^{\otimes k} \rightarrow \operatorname{Ind}_{\Sigma_k \wr \Sigma_m}^{\Sigma_{km}} N^{\otimes k},$$ where the first map is $\operatorname{Ind}_{(\Sigma_i \times \Sigma_j) \wr \Sigma_m}^{\Sigma_{im} \times \Sigma_{jm}}$ applied to the shuffle product $$ N^{\otimes i} \otimes N^{\otimes j} \rightarrow N^{\otimes k}.$$ \begin{prop} Suppose $M$ satisfies $\bar k_mM = M$. Then $ \sigma_*(U(M)) = Sh^m_*({\bar T}^mM)$. \end{prop} This follows from the next two lemmas. \begin{lem} Suppose $M$ satisfies $\bar k_mM = M$. Then \begin{equation*} \bar k_nU(M) = \begin{cases} \Lambda^k M & \text{if } n = km \\ 0 & \text{otherwise}. \end{cases} \end{equation*} \end{lem} \begin{proof}[Sketch proof] We first observe that $k_{km-1}\Lambda^k M \subseteq k_{km-1}M^{\otimes k} = 0$, so that $\bar k_{km}\Lambda^k M = \Lambda^k M$. Now we recall that $U(M) = S^*(M)/(Sq^{|x|}x-x^2)$.
This is filtered with $U^k(M)$ equal to the image of $M^{\otimes k}$ in $U(M)$, and there are exact sequences $$ 0 \rightarrow U^{k-1}(M) \rightarrow U^{k}(M) \rightarrow \Lambda^{k} M \rightarrow 0.$$ Applying $k_n$ to this yields exact sequences $$ 0 \rightarrow k_nU^{k-1}(M) \rightarrow k_nU^{k}(M) \rightarrow k_n\Lambda^{k} M,$$ and one deduces that $$ U^{k-1}(M) = k_{km-1}U^k(M) \text{ so that } \bar k_{km}U^k(M) = \Lambda^k M$$ and $$ k_{km+r}U^k(M) = k_{km+r}U(M)$$ for $0\leq r < m$. \end{proof} \begin{lem} If ${\bar T}^{m+1}M = 0$, then $\bar T^{km}\Lambda^k M = \operatorname{Ind}_{\Sigma_k \wr \Sigma_m}^{\Sigma_{km}}(({\bar T}^mM)^{\otimes k})$. \end{lem} \begin{proof}[Sketch proof] This follows by iterated use of the following facts: $T\Lambda^kM = \Lambda^kTM$, $TM = {\bar T} M \oplus M$, and $\Lambda^k(M \oplus N) = \bigoplus_{i+j=k} \Lambda^i M \otimes \Lambda^j N$. \end{proof} The proposition is illustrated with the next example. \begin{ex} $\sigma_*(H^*(K(V,m))) = Sh^m_*(V^{\vee})$, where $V^{\vee}$ is the dual of $V$, and is given trivial $\Sigma_m$--module structure. \end{ex} We end with a group cohomology example. \begin{ex} Let $Q_8$ be the quaternion group of order 8. Then $$\sigma_*(H^*(BQ_8)) = H^*(S^3/Q_8) \otimes Sh^1_*({\mathbb Z}/2),$$ with $\sigma_0(H^*(BQ_8)) = H^*(S^3/Q_8) = {\mathbb Z}/2[x,y]/(x^2+xy+y^2,x^2y+xy^2)$. This can be seen quite easily. $Q_8 \subset S^3$ induces an embedding $$\Phi^2 H^*(B{\mathbb Z}/2) = H^*(BS^3) \subset H^*(BQ_8).$$ The image in cohomology of $Q_8 \rightarrow Q_8/[Q_8,Q_8] = ({\mathbb Z}/2)^2$ is $\sigma_0H^*(BQ_8)$, and the composite $\sigma_0H^*(BQ_8) \hookrightarrow H^*(BQ_8) \rightarrow H^*(S^3/Q_8)$ is an isomorphism. These maps combine to define an isomorphism in ${\mathcal K}$: $$ H^*(S^3/Q_8) \otimes \Phi^2 H^*(B{\mathbb Z}/2) \simeq H^*(BQ_8),$$ and the calculation of $\sigma_*H^*(BQ_8)$ follows. \end{ex} \end{document}
\begin{document} \title{Bayesian Quantile Factor Models} \begin{abstract} Factor analysis is a flexible technique for assessment of multivariate dependence and codependence. Besides being an exploratory tool used to reduce the dimensionality of multivariate data, it allows estimation of common factors that often have an interesting theoretical interpretation in real problems. However, in some specific cases the interest involves the effects of latent factors not only in the mean, but in the entire response distribution, represented by a quantile. This paper introduces a new class of models, named quantile factor models, which combines factor model theory with distribution-free quantile regression, producing a robust statistical method. Bayesian estimation for the proposed model is performed using an efficient Markov chain Monte Carlo algorithm. The proposed model is evaluated using synthetic datasets in different settings, in order to assess its robustness and performance under different quantiles compared to more usual methods. The model is also applied to a financial sector dataset and a heart disease experiment. \end{abstract} \section{Introduction} Technological advancement and the strengthening of computational tools have made increasingly large amounts of data available, turning dimensionality reduction into an essential aspect of data analysis. Among the multivariate analysis techniques for that purpose is factor analysis, which aims to describe the dependence structure of a set of observed correlated variables by a smaller number of unobservable latent variables called latent factors. Factor models are structured from a linear model that relates observable variables to the latent factors plus a random error component. The dependence structure of the original variables is explained by the matrix of factor loadings.
Although in standard factor analysis a multivariate normal distribution is assumed for the errors \cite{lopes2004bayesian}, the literature has introduced various factor models, such as a general class of factor-analytic models for the analysis of multivariate (truncated) count data \cite{wedel2003factor}, binary data coming from an unobserved heterogeneous population \cite{cagnone2012factor}, robust analysis replacing the Gaussian factor analysis model with a multivariate Student-t distribution \cite{zhang2014robust}, censored non-normal random variables with influential observations \cite{castro2015likelihood}, and categorical variables \cite{CapdevilleGoncPer}. For all these cases, the latent factors are a representation of the original variables, obtained from the expected value, which entails two possibly restrictive features: (i) hidden factors that may shift characteristics (moments or quantiles) of the distribution of the original variable rather than its mean are not captured; and (ii) the factor loadings are not allowed to vary across the distributional characteristics of each unit. Interest in investigating associations of random variables in the tails of their distributions has arisen in various fields. In finance, for example, recurrent global financial crises have shown that the risky status, or Value at Risk (VaR), of one financial institution may cause a series of negative impacts on other financial institutions or the entire financial system \cite{tobias2016covar}. In environmental science, the growing frequency of abnormal climate events has increased the importance of identifying associations of environmental factors in the extreme tail part of the distribution \cite{villarini2011frequency}. In particular, in this work, one of the applications discussed in Section \ref{sec4} shows how the association between several world market indices may be influenced when analyzing lower quantiles, for example.
More specifically, the effect of the Russell 2000 index of the New York Stock Exchange on other market indices is greater when observed at the 10\% quantile than at the expected value. In the context of regression analysis, quantile regression \cite{koenker1978regression} has been an appealing approach in cases where the interest lies in the effects of covariates not only in the mean, but in the entire response distribution, represented by a quantile. Quantile regression is advantageous compared to standard mean regression because, besides providing richer information about the covariate effects, it is robust to heteroscedasticity and outliers, and accommodates the non-normal errors often encountered in practical applications. Bayesian inference for quantile regression operates by forming the likelihood function based on the asymmetric Laplace distribution \cite{yu2001bayesian}. In particular, \cite{kozumi2011gibbs} proposed a Gibbs sampling algorithm to sample from the posterior distribution of the unknown quantities using a normal-exponential location-scale mixture representation for the asymmetric Laplace distribution. Based on these ideas, the main aim of this work is to extend the standard factor models to a flexible new class, named quantile factor models, which allows capturing both the quantile-dependent loadings and extra latent factors. In the proposed method a linear function of the latent factors is set equal to a quantile of the response variables, similar to the quantile regression of \cite{koenker1978regression}. Besides being an exploratory tool used to reduce the dimensionality of multivariate data based on a quantile correlation structure, the method allows estimation of common factors that often have an interesting theoretical interpretation in real problems and that may vary depending on the quantile.
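The normal-exponential mixture representation of the asymmetric Laplace distribution mentioned above can be illustrated numerically. The sketch below is ours, not an implementation from \cite{kozumi2011gibbs}; it uses the standard constants $\theta = (1-2\tau)/(\tau(1-\tau))$ and $\psi^2 = 2/(\tau(1-\tau))$ for a unit-scale asymmetric Laplace, and checks that errors built from the mixture place probability $\tau$ below zero, i.e., that their $\tau$-th quantile is zero:

```python
import numpy as np

def sample_al_mixture(tau, size, rng):
    """Draw from the asymmetric Laplace AL(0, 1, tau) via the
    normal-exponential location-scale mixture:
        eps = theta*z + psi*sqrt(z)*u,  z ~ Exp(1),  u ~ N(0, 1)."""
    theta = (1 - 2 * tau) / (tau * (1 - tau))
    psi = np.sqrt(2.0 / (tau * (1 - tau)))
    z = rng.exponential(1.0, size)       # latent exponential variables
    u = rng.standard_normal(size)        # standard normal draws
    return theta * z + psi * np.sqrt(z) * u

rng = np.random.default_rng(0)
for tau in (0.1, 0.5, 0.9):
    eps = sample_al_mixture(tau, 200_000, rng)
    # The tau-th quantile of AL(0, 1, tau) is 0, so P(eps < 0) should be near tau.
    print(tau, round(float(np.mean(eps < 0)), 3))
```

Conditionally on the latent $z$, the error is Gaussian, which is what makes the full conditional distributions of location parameters tractable in the Gibbs sampler.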
To follow the Bayesian paradigm, as in \cite{yu2001bayesian}, a multivariate asymmetric Laplace distribution is assumed for the random errors of the proposed factor model. Since the kernel of the posterior distribution does not correspond to a known distribution, we make use of Markov chain Monte Carlo (MCMC) methods to sample from it. The remainder of the paper is organized as follows. Section \ref{sec-mfq} presents the proposed model. It starts by discussing some aspects of the method when applied to the simplest case, namely the univariate one, assuming the latent factors to be fixed. Then, the proposed method is introduced as an extension of those ideas to the multivariate case with unknown factors. Some model properties are discussed together with the inference procedure, which follows the Bayesian paradigm and makes use of Gibbs sampling for almost all the parameters, since their posterior full conditional distributions have a closed analytical form, except for some scale parameters, for which the Metropolis-Hastings algorithm is used. Section \ref{sec3} presents the analysis of two synthetic datasets. First, a dataset is generated from a transformation of a Normal factor model, for which correlation appears to be larger for lower quantiles. Then, the results obtained from the quantile factor model fit for some quantiles are compared to those obtained under Normal and Student-t \cite{zhang2014robust} factor model fits. The second illustration consists of a comparison of the results obtained with the proposed model at the median with the Normal and Student-t factor models using a synthetic dataset with outliers, in order to study the robustness of the method. In Section \ref{sec4}, we illustrate the approach with two real datasets. Model comparison is performed using different criteria, and we concentrate on the practical interpretations based on the factor loadings estimates. Finally, Section \ref{sec5} discusses our main findings.
\section{Quantile factor model}\label{sec-mfq} Let a random $p$-vector $y_i$, $i=1,\dots,n$, be a measurement of $k$ ($k \leq p$) latent factors, $f_i^{(\tau)} = (f_{1i}^{(\tau)}, \dots, f_{ki}^{(\tau)})$. The proposed model assumes that the associations among the observed variables in the $\tau$-th quantile ($0<\tau<1$) are wholly explained by the $k$ latent variables, such that: \begin{equation}\label{eq1} \mathcal{Q}_\tau(y_i | f_i) = \beta_\tau f_i^{(\tau)}, \end{equation} where $\mathcal{Q}_\tau(y_i | f_i)$ is the vector of conditional $\tau$-th quantiles of the components of $y_i$, defined componentwise by $\mathcal{Q}_\tau(y) = \inf \{y^* : P(y \leq y^* ) \geq \tau\}$, and $\beta_\tau$ is the $p\times k$ $\tau$-factor loadings matrix, analogous to the factor loadings in standard factor models. Besides the quantile-specific factor loadings, the method allows the factors to exhibit heterogeneous effects across different parts of the conditional distribution. Conditioning equation (\ref{eq1}) on the latent factors and assuming $p=1$, so that $y_i$ is a scalar, the problem reduces to standard quantile regression. To motivate the construction of the proposed model, we first describe this particular case. From now on, the sub and superscript $\tau$ will be omitted in order to keep the notation as simple as possible. \subsection{A particular introductory case}\label{univ} The standard quantile regression is obtained conditionally on the latent factors, assuming $p=1$ and adding to the term $\beta f_i$ in equation (\ref{eq1}) an error term $\epsilon_i$ whose distribution (with density, say, $g_\tau(.)$) is restricted so that its $\tau$-th quantile is equal to zero, that is, $\int_{-\infty}^{0} g_\tau(\epsilon_i) d\epsilon_i = \tau$.
For this case, \cite{koenker1978regression} defined the $\tau$-th quantile regression estimator of $\beta$ as any solution of the following minimization problem: \begin{equation*} \min_{\beta}\sum_{i=1}^n{\phi_\tau\left(y_i-\beta f_i\right)}, \end{equation*} where $\phi_\tau(.)$ is the loss (or check) function defined by $\phi_\tau(u) = u( \tau - I(u<0)),$ with $I(\cdot)$ denoting the indicator function. In this particular case, Bayesian inference for quantile regression proceeds from the observation that minimizing the loss function $\phi_\tau(\cdot)$ is equivalent to maximizing the likelihood function of an asymmetric Laplace ($\mathcal{AL}$) distribution \cite{yu2001bayesian}. However, instead of maximizing the likelihood, \cite{yu2001bayesian} obtained the posterior distribution of the $\tau$-th quantile regression coefficients using the $\mathcal{AL}$ distribution. Thus, to use the Bayesian inference paradigm for quantile regression, it is enough to assume that, regardless of the distribution of $y_i$, $\epsilon_i\sim \mathcal{AL}(0, \sigma, \tau)$, whose density function is given by: \begin{equation}\label{eq:fdp-laplace-assimetrica-univariada} g_\tau(\epsilon_i) = \frac{\tau(1-\tau)}{\sigma}\exp \left( -\frac{\phi_\tau(\epsilon_i)}{\sigma} \right), \end{equation} with $\epsilon_i \in \mathbb{R}$, $0 < \tau < 1$ a skewness parameter representing the quantile of interest and $\sigma > 0$ a scale parameter. Conditional on $f_i$, we have that $y_i\mid f_i\sim \mathcal{AL}(\beta f_i, \sigma, \tau)$. \cite{kotz2012laplace} presented a location-scale mixture representation of the $\mathcal{AL}$ distribution that allows finding analytical expressions for the conditional posterior densities of the model.
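Before proceeding, a minimal numerical sketch of the check function $\phi_\tau$ and the $\mathcal{AL}(0,\sigma,\tau)$ density defined above may be helpful (illustrative only; the paper's own implementation was written in R):

```python
import numpy as np

def check_loss(u, tau):
    # phi_tau(u) = u * (tau - I(u < 0)), the quantile (check) loss
    u = np.asarray(u, dtype=float)
    return u * (tau - (u < 0))

def al_density(eps, tau, sigma=1.0):
    # density of the asymmetric Laplace AL(0, sigma, tau) displayed above
    return tau * (1 - tau) / sigma * np.exp(-check_loss(eps, tau) / sigma)
```

By construction, the mass of this density to the left of zero equals $\tau$, matching the restriction $\int_{-\infty}^{0} g_\tau(\epsilon)\, d\epsilon = \tau$.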
Then, conditional on $f_i$, we can write the distribution of $y_i$ using the following mixture representation: \begin{equation}\label{eq:mistura-laplace-assimetrica-univariada} y_i = \beta f_i + a_\tau w_i + b_\tau \sqrt{\sigma w_i} v_i, \end{equation} for $v_i \sim N(0, 1)$, $w_i \sim Exp(\sigma)$, where $Exp(\lambda)$ denotes the Exponential distribution with mean $\lambda$, \begin{equation*} a_\tau = \frac{1 - 2\tau}{\tau(1-\tau)} \hspace{0.4cm} \textrm{and} \hspace{0.4cm} b^2_\tau = \frac{2}{\tau(1-\tau)}. \end{equation*} Moreover, its mean and variance are, respectively, given by: \begin{equation*} E(y_i) = \beta f_i + \sigma a_\tau \hspace{0.4cm} \textrm{ and } \hspace{0.4cm} Var(y_i) = \sigma^2 (a_\tau^2+ b_\tau^2). \end{equation*} The quantile factor model, presented next, can be viewed as a multivariate extension ($p>1$) of the model in (\ref{eq:mistura-laplace-assimetrica-univariada}), in which $f_i$ is also assumed to be unknown. \subsection{Proposed model} The quantile factor model relates each $p$-vector $y_i$ to the underlying $k$-vector of random variables $f_i$ such that, for $i=1,\dots,n$, \begin{equation}\label{eq:definicao-mfq} y_i = \beta f_i + \epsilon_i, \;\; \epsilon_i \sim \mathcal{AL}_p(m, \Delta), \end{equation} where the $f_i$ are independent with $f_i\sim N_k(0,I_k)$, for $I_k$ the $k \times k$ identity matrix; $\mathcal{AL}_p(\mu,\Psi)$ denotes the $p$-multivariate asymmetric Laplace distribution with $\mu \in \mathbb{R}^p$ and $\Psi$ a symmetric positive-definite $p \times p$ matrix \cite[ch. 6]{kotz2012laplace}; and $\epsilon_i$ and $f_s$ are independent for all $i$ and $s$. A brief presentation of the $p$-multivariate asymmetric Laplace ($\mathcal{AL}_p$) distribution, its moments and its characteristic function is given in Appendix A.
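As a quick sanity check (a sketch, not part of the paper), one can simulate from the mixture representation (\ref{eq:mistura-laplace-assimetrica-univariada}) and verify the stated moments, as well as the defining property that the $\tau$-th quantile of $y_i$ equals the location $\beta f_i$:

```python
import numpy as np

rng = np.random.default_rng(42)
tau, sigma, loc = 0.25, 1.5, 0.7   # 'loc' plays the role of beta * f_i

a_tau = (1 - 2 * tau) / (tau * (1 - tau))
b_tau = np.sqrt(2 / (tau * (1 - tau)))

n = 200_000
w = rng.exponential(scale=sigma, size=n)          # w_i ~ Exp(sigma), mean sigma
v = rng.standard_normal(n)                        # v_i ~ N(0, 1)
y = loc + a_tau * w + b_tau * np.sqrt(sigma * w) * v

mean_theory = loc + sigma * a_tau                 # E(y_i)
var_theory = sigma**2 * (a_tau**2 + b_tau**2)     # Var(y_i)
```

The empirical $\tau$-th quantile of the simulated `y` concentrates around `loc`, which is exactly the property exploited when the $\mathcal{AL}$ likelihood is used for quantile estimation.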
From the definition in equation (\ref{eq:definicao-mfq}) and the marginal properties presented in Appendix A, we get, conditional on the latent factors, that for all $l,h=1,\dots,p$, \begin{align*} Cov(y_{li}, y_{hi} | f_i ) &= m_lm_h + \delta_{lh}=\left(\frac{1-2\tau}{\tau(1-\tau)}\right) ^ 2 \sigma_l \sigma_h + \delta_{lh} \mbox{ and}\\ Var(y_{li} | f_i) &= m^2_l+\delta_{ll} = \sigma_l^2 \frac{(1-2\tau + 2\tau^2)}{\tau^2(1-\tau)^2}. \end{align*} Analogously, the pairwise covariance and the variance, marginal on the latent factors, are, respectively, given by: \begin{align} Cov(y_{li}, y_{hi}) = & \displaystyle \sum_{j=1}^{k} \beta_{lj} \beta_{hj} + m_lm_h + \delta_{lh} = \displaystyle \sum_{j=1}^{k} \beta_{lj} \beta_{hj} + \left(\frac{1-2\tau}{\tau(1-\tau)}\right)^2 \sigma_l \sigma_h + \delta_{lh}\nonumber\\ \mbox{and } Var(y_{li}) = & \sum_{j = 1}^{k} \beta_{lj}^2 + m^2_l + \delta_{ll} = \displaystyle \sum_{j = 1}^{k} \beta_{lj}^2 + \sigma_l^2 \frac{(1-2\tau + 2\tau^2)}{\tau^2(1-\tau)^2}.\label{eq2} \end{align} According to (\ref{eq2}), the variance is decomposed into a part explained by the common factors and the uniquenesses, which measure the residual variability of each variable once the contribution of the factors is accounted for. This feature is in agreement with the normal factor model. Note that while in the normal factor model the conditional covariance is always zero, this only happens in the proposed model in (\ref{eq:definicao-mfq}) if we assume $\delta_{lh} = 0$ and $\tau = 1/2$. Thus, we assume in this paper that $\delta_{lh}=0$, in order to have an approach as close as possible to the standard theory.
The variance decomposition, which is a fairly standard way to summarize the importance of a common factor by its percentage contribution to the variability of a given attribute, is given by: \begin{equation}\label{DV} DV_l = 100\frac{ \displaystyle \sum_{j = 1}^{k} \beta_{lj}^2}{ \displaystyle \sum_{j = 1}^{k} \beta_{lj}^2 + \sigma_l^2 \frac{(1-2\tau + 2\tau^2)}{\tau^2(1-\tau)^2}}\%, \end{equation} for $l = 1,\dots, p.$ Since the uniqueness $\sigma^2_l$ is multiplied by a factor that depends on $\tau$, the variance decomposition in (\ref{DV}) should be used carefully, mainly when the interest lies in comparing different quantiles, or in comparing the proposed model with other ones whose penalty is different. Compared to the variance decomposition in the Normal factor model, the uniqueness is inflated eightfold when the median is tracked, and this inflation becomes larger as the quantile moves to the tails of the distribution. Moreover, as happens in the normal factor model, the proposed model (\ref{eq:definicao-mfq}) suffers from an identifiability problem, due to the invariance of factor models under orthogonal transformations. That is, if one considers $\beta^*= \beta \Gamma^{-1}$ and $f_i^*= \Gamma f_i$, for any orthogonal matrix $\Gamma$, the same model defined in (\ref{eq:definicao-mfq}) is obtained. To deal with this invariance, the alternative adopted here to identify the model is to constrain $\beta$ to be a block lower triangular matrix, assumed to be of full rank, with strictly positive diagonal elements. This form provides both identification and often a useful interpretation of the factor model \cite{lopes2004bayesian}. With a specified $k$-factor model, Bayesian analysis using MCMC methods is straightforward. \cite{lopes2004bayesian} treated the case where uncertainty about the number of latent factors is assumed in a Normal factor model. They also discussed reversible jump MCMC methods and alternative MCMC methods based on bridge sampling.
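The $\tau$-dependent inflation of the uniqueness mentioned above is easy to quantify; the following sketch (illustrative, with hypothetical loadings) computes the multiplier and the variance decomposition in equation (\ref{DV}):

```python
import numpy as np

def inflation(tau):
    # multiplier of the uniqueness sigma_l^2 in the variance decomposition
    return (1 - 2 * tau + 2 * tau**2) / (tau**2 * (1 - tau)**2)

def variance_decomposition(beta_l, sigma2_l, tau):
    # percentage of Var(y_l) explained by the k common factors
    common = float(np.sum(np.asarray(beta_l, dtype=float) ** 2))
    return 100.0 * common / (common + sigma2_l * inflation(tau))
```

At the median the multiplier equals 8, and at $\tau = 0.1$ (or, by symmetry, $\tau = 0.9$) it is roughly 101, which is why comparisons across quantiles should rely on a decomposition free of this factor, such as the modified version defined later in the paper.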
In the identifiable model, the loadings matrix has $pk - k(k-1)/2$ free parameters. With the $p$ non-zero parameters $\sigma^2_j$, $j=1,\dots,p$, the resulting factor form of $\Delta$ has $p(k+1)-k(k-1)/2$ parameters, compared with the total of $p(p + 1)/2$ in an unconstrained (or $p = k$) model, leading to the constraint $p(p + 1)/2 - p(k + 1) + k(k - 1)/2 \geq 0$, which provides at least an upper bound on $k$ \cite{lopes2004bayesian}. In this work, we assume $k$ known and use an exploratory pairwise quantile correlation plot as a preliminary method to infer the value of $k$. Moreover, the method is flexible enough to allow the use of different values of $k$, depending on the quantile of interest. The choice of the quantile to be tracked in the following applications depends on the specific aims of the problem. In each example we arbitrarily fix a small quantile, 10\%, the median, and a large quantile, 90\%, to be monitored, with the purpose of illustrating the method for different scenarios and quantile correlation values. However, in some contexts, there is a practical rationale behind this choice. For example, in Value-at-Risk (VaR) estimation, used by financial institutions and their regulators as the standard measure of market risk, the $\tau$-th quantile is often set to 0.01 or 0.05. Similarly, in survival analysis, tracking extreme quantiles is especially useful in mean residual life estimation when the tail behavior of the distribution is of interest. The same rationale applies to the quantile factor model fit. \subsubsection{Inference procedure} The $\mathcal{AL}_p$ distribution admits a location-scale mixture representation that allows finding analytical expressions for the conditional posterior densities of the model \cite{kozumi2011gibbs}.
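The counting argument above translates directly into a bound on $k$; a small sketch:

```python
def free_parameters(p, k):
    # free parameters of the constrained loadings matrix plus the p uniquenesses
    return p * (k + 1) - k * (k - 1) // 2

def max_factors(p):
    # largest k satisfying p(p+1)/2 - p(k+1) + k(k-1)/2 >= 0
    k = 0
    while k + 1 <= p and p * (p + 1) // 2 - free_parameters(p, k + 1) >= 0:
        k += 1
    return k
```

For $p = 5$ (as in case study 1) the bound gives $k \leq 2$, and for $p = 6$ (as in case study 2) it gives $k \leq 3$, consistent with the numbers of factors entertained in Section \ref{sec3}.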
Therefore, the quantile factor model in equation (\ref{eq:definicao-mfq}) can be rewritten as the following hierarchical model: \begin{align}\label{mod_repr_mist} \begin{array}{rl} y_i | \beta, f_i,\Delta,\tau, w_i &\sim N_p(\beta f_i + m w_i, w_i \Delta),\\ w_i &\sim Exp(1), \end{array} \end{align} with $f_i \sim N_k(0, I_k).$ Let $\Theta= (\beta,f_1,\dots,f_n,\sigma^2_1,\dots,\sigma^2_p, w_1,\dots,w_n)$ be the parameter vector. The inference procedure is performed under the Bayesian paradigm assuming the number of factors $k$ to be known, and model specification is completed after assigning a prior distribution for $\Theta$, $p(\Theta)$. An advantage of following the Bayesian paradigm is that the inference procedure is performed in a single framework, and uncertainty about parameter estimation is naturally accounted for. We assume some components of $\Theta$ to be independent, {\it a priori}. More specifically, \begin{align*} p(\Theta)=\left[\prod_{i=1}^n{g(f_i)}\right]\left[\prod_{j=1}^p{p(\sigma^2_j)}\right]p(\beta), \end{align*} where $g(f_i)$ is the pdf of the $k$-multivariate normal distribution with zero mean vector and identity covariance matrix. We assume further that, {\it a priori}, $\beta_{jl} \sim N(0,C_0)$, when $j\neq l$, $\beta_{jj} \sim N(0,C_0)I(\beta_{jj}>0)$ and $\sigma_j^{2}\sim IG(\nu/2,\nu s^2/2)$, where $IG(a,b)$ denotes the inverse gamma distribution, with mode approximately $s^2$ and $\nu$ playing the role of a prior degrees of freedom hyperparameter.
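To illustrate the hierarchical representation (\ref{mod_repr_mist}), the sketch below simulates data from it, taking $\Delta$ diagonal with $m_l = a_\tau\sigma_l$ and $\delta_{ll} = b_\tau^2\sigma_l^2$ (the componentwise analogue of the univariate mixture; the exact parametrization is given in Appendix A, so this should be read as an assumption) and a lower-triangular loadings matrix with positive diagonal:

```python
import numpy as np

rng = np.random.default_rng(0)
p, k, n, tau = 5, 2, 2000, 0.25    # hypothetical dimensions for illustration

sigma = np.full(p, 0.8)                             # scale parameters sigma_l
m = sigma * (1 - 2 * tau) / (tau * (1 - tau))       # location shift vector m
delta = sigma**2 * 2 / (tau * (1 - tau))            # diagonal of Delta (delta_lh = 0)

beta = np.tril(rng.normal(size=(p, k)))             # block lower-triangular loadings
np.fill_diagonal(beta, np.abs(np.diag(beta)))       # positive diagonal for identifiability

f = rng.standard_normal((n, k))                     # f_i ~ N_k(0, I_k)
w = rng.exponential(1.0, size=n)                    # w_i ~ Exp(1)
eps = rng.standard_normal((n, p)) * np.sqrt(delta)  # N_p(0, Delta) draws
y = f @ beta.T + np.outer(w, m) + np.sqrt(w)[:, None] * eps
```

Under this construction each component of $y_i - \beta f_i$ has an $\mathcal{AL}(0,\sigma_l,\tau)$ marginal, so its empirical $\tau$-th quantile should be close to zero.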
Following Bayes' theorem, the posterior distribution of $\Theta$, which is proportional to the product of the likelihood function and the prior distribution of $\Theta$, is given by: \begin{equation*} \begin{array}{ll} p(\Theta | y_1,\dots,y_n) & \propto \displaystyle \prod_{i=1}^{n} p(y_i | f_i, w_i) p(f_i) p(w_i) \displaystyle \prod_{j=1}^{k} p(\beta_j) \displaystyle \prod_{j=k+1}^{p} p(\beta_j) \displaystyle \prod_{j=1}^{p} p(\sigma^{-2}_j) \\ & \propto \exp \left(-\frac{1}{2} \displaystyle \sum_{i=1}^{n}(y_i - \beta f_i - mw_i)'(w_i \Delta)^{-1}(y_i - \beta f_i - mw_i) \right) \times \\ & \displaystyle \prod_{i=1}^{n} |w_i \Delta|^{-\frac{1}{2}} \displaystyle \prod_{i=1}^{n} \exp \left( -w_i \right) \times \exp \left(-\frac{1}{2} \displaystyle \sum_{i=1}^{n}f_i'I_kf_i \right) \times \\ & \displaystyle \prod_{j=1}^{p} (\sigma^{-2}_j)^{\frac{\nu}{2}-1} \textrm{exp} \left( - \frac{\nu s^2}{2} \sigma^{-2}_j \right) \times \\ & \displaystyle \prod_{j=1}^k \textrm{exp} \left( -\frac{1}{2} \beta_j (C_0 I_j)^{-1} {\beta_j}' \mathbb{I}(\beta_{jj} \geq 0) \right) \displaystyle \prod_{j=k+1}^p \textrm{exp} \left( -\frac{1}{2} \beta_j (C_0I_k)^{-1} {\beta_j}' \right), \end{array} \end{equation*} for $i = 1, \dots, n$, $j = 1, \dots, p$, and $\beta_j$ the $j$-th row of the matrix $\beta$. Moreover, we write $f_i = (f_{i1}, \dots, f_{ik})$ and denote by $f_i^j$ the $j$-vector containing the first $j$ elements of $f_i$. The kernel of this distribution does not correspond to a known distribution, so we make use of MCMC methods to obtain samples from the resulting posterior. In particular, we use the Gibbs sampling algorithm for all the parameters except $\sigma^2_1, \ldots, \sigma^2_p$, whose full conditional distributions do not have a closed form; for these we make use of the Metropolis-Hastings algorithm with a random walk proposal distribution.
However, it is quite interesting that, when $\tau = 1/2$, those full conditional distributions also have a closed form, so Gibbs sampling alone could be used. The full conditional posterior distributions are described in Appendix B. In the following applications, we also perform model comparison with different values of $k$ using the Akaike information criterion (AIC), the Bayesian information criterion (BIC), a variant BIC* and the informational complexity criterion (ICOMP). Each of these criteria is described in more detail in Appendix C. \section{Illustrations with synthetic data}\label{sec3} In this section we analyze two synthetic datasets generated from standard models, to check the proposed model's performance in different scenarios and under different quantiles. The main aim here is to compare the proposed model with standard models, not only with respect to the fit, but also with respect to the practical interpretations that may be drawn in each case. In the first case study we generated a synthetic dataset from a transformation of a multivariate Normal distribution, with the purpose of comparing the proposed model for several quantiles, denoted by QFM, with the Normal and Student-t factor models, denoted by NFM and TFM, respectively. The generation process reflects a tail-dependent degree of pairwise association. In the second study, a synthetic dataset is generated from a multivariate Student-t model, with the purpose of verifying the quantile factor model fit at the median in comparison to the fits of the Student-t and Normal factor models. We considered the following hyperparameters in the prior distributions: $C_0=100$, $\nu=0.02$ and $s^2=1$, which yield vague priors. The MCMC algorithm was implemented in the \texttt{R} programming language, version 3.4.1 \cite{teamR}, on a computer with an Intel(R) Core(TM) i5-4590 processor with 3.30 GHz and 8 GB of RAM.
For each sample and fitted model, we ran two parallel chains starting from different values, letting each chain run for 160,000 iterations, discarding the first 10,000 as burn-in, and storing every 50th iteration to avoid possible autocorrelation within the chains. We used the diagnostic tools available in the CODA package \cite{plummer2006coda} to check convergence of the chains. A preliminary analysis of the artificial datasets is done using scatterplots and a Bayesian version of the quantile correlation measure, denoted by $\rho_\tau$, proposed by \cite{choi2018quantile} and described next. \subsection{Bayesian quantile correlation} The usual correlation coefficients, such as the Pearson correlation coefficient, which is a measure of linear association, tend to fail in the measurement of tail-specific relationships. The quantile correlation coefficient measures tail dependence in this context. In particular, \cite{choi2018quantile} defined the quantile correlation for two random variables $x$ and $y$ by: \begin{equation}\label{choishin} \rho_\tau = sign(\beta_{2.1}(\tau))\sqrt{\beta_{2.1}(\tau)\beta_{1.2}(\tau)}, \end{equation} which is the geometric mean of the two $\tau$-quantile regression slopes, $\beta_{2.1}(\tau)$ of $y$ on $x$ and $\beta_{1.2}(\tau)$ of $x$ on $y$. While the Pearson correlation coefficient measures the sensitivity of the conditional mean of a random variable with respect to a change in the other variable, the quantile correlation $\rho_\tau$ instead measures the sensitivity of conditional quantiles by considering $\tau$-quantile regressions. The quantile correlation coefficient satisfies the basic features of a correlation coefficient, such as: being zero for independent random variables; being bounded by 1 in absolute value for a wide class of distributions, with $1$ and $-1$ indicating perfectly linearly related random variables; and satisfying commutativity and scale-location invariance.
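A minimal numpy-only sketch of the coefficient in (\ref{choishin}) follows; it brute-forces each one-covariate $\tau$-quantile regression by scanning slopes defined by pairs of observations (an optimal fit of a two-parameter quantile regression passes through two data points), which is adequate for small samples but is only an illustration, not the estimator used in the paper:

```python
import numpy as np

def total_check_loss(u, tau):
    return float(np.sum(u * (tau - (u < 0))))

def qr_slope(x, y, tau):
    # brute-force tau-quantile regression slope of y on x (single covariate):
    # for each candidate slope b, the optimal intercept is the tau-quantile
    # of the residuals y - b*x
    best_loss, best_b = np.inf, 0.0
    n = len(x)
    for i in range(n):
        for j in range(i + 1, n):
            if x[j] == x[i]:
                continue
            b = (y[j] - y[i]) / (x[j] - x[i])
            a = np.quantile(y - b * x, tau)
            loss = total_check_loss(y - a - b * x, tau)
            if loss < best_loss:
                best_loss, best_b = loss, b
    return best_b

def quantile_correlation(x, y, tau):
    # rho_tau = sign(b_{2.1}) * sqrt(b_{2.1} * b_{1.2}); the two slopes are
    # assumed to share the same sign, as for linearly related data
    b_yx = qr_slope(x, y, tau)
    b_xy = qr_slope(y, x, tau)
    return np.sign(b_yx) * np.sqrt(abs(b_yx * b_xy))
```

Commutativity holds by construction, since both orderings yield the same product of slopes.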
The larger the absolute value of $\rho_\tau$, the more sensitive the conditional $\tau$-quantile of one random variable is to changes in the other. More details about the quantile correlation coefficient can be found in \cite{choi2018quantile}. In this work, we make use of a Bayesian version of the coefficient. With the posterior samples obtained from MCMC for $\beta_{2.1}(\tau)$ and $\beta_{1.2}(\tau)$, we can propagate them and obtain a sample from the posterior distribution of $\rho_\tau$. In particular, we define the Bayesian quantile correlation estimator as the posterior mean of $\rho_\tau$. \subsection{Case study 1}\label{studycase1} In this illustration, a sample with $n=150$ observations was generated from a multivariate Normal distribution with $p=5$, zero mean vector and covariance matrix $ \Psi = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0.95 & 0.95 \\ 0 & 0 & 0.95 & 1 & 0.95 \\ 0 & 0 & 0.95 & 0.95 & 1 \\ \end{pmatrix}.$ An auxiliary variable $e$ was generated from $N(0, 9)$ and the following transformations were applied to the original dataset: \begin{align*} y_{i1} & = y_{i1} + e \mathbb{I}(y_{i1} < c) \mathbb{I}(y_{i2} < c) \\ y_{i2} & = y_{i2} + e \mathbb{I}(y_{i1} < c) \mathbb{I}(y_{i2} < c), \,\, \mbox{ for } c = -0.4 \mbox{ and } i = 1, \dots, n. \end{align*} The lower panels of Figure \ref{fig:pairs-y-ilustr1} show the scatterplot for each pair of components of the variable $y_i$, while the upper panels show the posterior mean (solid line) and the respective 95\% credible interval (dashed line) of the pairwise quantile correlation varying by quantile. The gray scale highlights regions for which the correlation is weak ($|\rho_\tau| < 0.3$), moderate ($ 0.3 < |\rho_\tau| < 0.7$) and strong ($|\rho_\tau| > 0.7$). As expected from the covariance structure considered in the data generation, variables 3, 4 and 5 have strong linear and quantile correlation for all the quantiles considered.
However, due to the transformation adopted, variables 1 and 2 seem to have a significant correlation only for lower quantiles, and this value decreases as the quantile increases. The upper panels with the quantile correlations show that for $\tau \leq 0.3$ there is a moderate correlation between these variables, while for $\tau>0.3$ it becomes weak. The other variables seem to be uncorrelated with $y_{i1}$ and $y_{i2}$, for $i = 1, \dots, n$, regardless of the quantile considered. \begin{figure} \caption{Lower panels: Scatterplots of the dataset generated in case study 1. Upper panels: posterior mean (solid line) with respective 95\% credible interval (dashed line) of the pairwise quantile correlation varying by quantile. The gray scale highlights regions for which the correlation is weak ($|\rho_\tau| < 0.3$), moderate ($ 0.3 < |\rho_\tau| < 0.7$) and strong ($|\rho_\tau| > 0.7$). } \label{fig:pairs-y-ilustr1} \end{figure} Figure \ref{fig:matcor-datasim1} presents the quantile correlations estimated for $\tau = 0.1, 0.5, 0.9$ and the respective Pearson coefficients. While variables $y_{i3}$, $y_{i4}$ and $y_{i5}$, for $i = 1, \dots, n$, are correlated for the three quantiles considered as well as in the mean, components 1 and 2 seem to have a significant correlation only for $\tau=0.1$. Note that the Pearson coefficient indicates weak correlation, acting like an average across quantiles: it is not as high as at the 0.1-th quantile, nor as low as at the other quantiles. \begin{figure} \caption{Quantile correlation matrix estimated for $\tau=0.1, 0.5$ and $0.9$ and Pearson correlation matrix for the dataset generated in case study 1.} \label{fig:matcor-datasim1} \end{figure} This exploratory analysis motivates the use of the proposed quantile factor model, since the correlations are very different depending on the quantile considered.
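For reproducibility, the generating mechanism of this case study can be sketched as follows (seed and sampler are, of course, arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, c = 150, 5, -0.4

# covariance matrix Psi: variables 3-5 share 0.95 correlations
Psi = np.eye(p)
Psi[2:, 2:] = 0.95
np.fill_diagonal(Psi, 1.0)

y = rng.multivariate_normal(np.zeros(p), Psi, size=n)

# add a common N(0, 9) shock to variables 1 and 2 when both fall below c,
# inducing dependence between them in the lower tail only
e = rng.normal(0.0, 3.0, size=n)
mask = (y[:, 0] < c) & (y[:, 1] < c)
y[:, 0] += e * mask
y[:, 1] += e * mask
```

Because the shock is shared only on the event that both variables are small, the association between variables 1 and 2 concentrates in the lower quantiles, as seen in the exploratory analysis above.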
Therefore, we fit the proposed quantile factor model for $\tau = 0.1, 0.25, 0.5, 0.75 \textrm{ and } 0.9$, as well as the Student-t and Normal factor models, to the synthetic dataset, assuming $k=1$ and $k=2$. Table \ref{table:medidas-resumo-ilustr1} shows the different model comparison criteria computed. In a preliminary analysis, the Normal factor model performs best compared to the quantile factor model with respect to the ICOMP, AIC, BIC and BIC* values. This occurs because these criteria are completely based on the likelihood function, and the quantile model is not designed to fit the whole data distribution, but rather to estimate relations at quantiles. Thus, we suggest the use of those comparison criteria for the assessment of the quantile or of the number of factors only among models in the same distribution class. For the Normal factor model, all the different criteria indicated that $k=1$ performed best among the fitted ones, while for the quantile factor model, the best results were achieved assuming $\tau=0.5$ and $k=2$. Moreover, it is quite interesting that the best value of $k$ indicated by those criteria differs depending on the quantile considered. For example, while for $\tau=0.1$ the model with $k=2$ performed better than with $k=1$, for $\tau=0.75$ this behavior changed. On the other hand, for the RPS, MAE and MSE criteria, the model that performed best was the quantile factor model assuming $\tau = 0.1$ and $k = 2$. In general, models fitted assuming $k=2$ performed better on these criteria than the models with $k=1$. \begin{table}[h!]
\centering \caption{Model comparison criteria for the quantile, Student-t and Normal factor models assuming $\tau = 0.1, 0.25, 0.5, 0.75 \textrm{ and } 0.9$, $k=1$ and $k=2$.\label{table:medidas-resumo-ilustr1}} \begin{tabular}{lcccccccc} \hline Model & $k$ & ICOMP & AIC & BIC & BIC* & RPS & MAE & MSE \\ \hline $\textrm{QFM}_{\tau = 0.10}$ & 1 & 1690.42 & 1697.42 & 1727.53 & 1727.25 & 0.57 & 0.73 & 2.68 \\ \hline $\textrm{QFM}_{\tau = 0.25}$ & 1 & 1576.49 & 1583.63 & 1613.74 & 1613.46 & 0.49 & 0.64 & 1.96 \\ \hline $\textrm{QFM}_{\tau = 0.50}$ & 1 & 1442.58 & 1450.18 & 1480.28 & 1480.00 & 0.38 & 0.51 & 0.97 \\ \hline $\textrm{QFM}_{\tau = 0.75}$ & 1 & 1530.12 & 1537.91 & 1568.02 & 1567.74 & 0.43 & 0.57 & 1.04 \\ \hline $\textrm{QFM}_{\tau = 0.90}$ & 1 & 1635.20 & 1642.99 & 1673.09 & 1672.81 & 0.49 & 0.64 & 1.34 \\ \hline TFM & 1 & 391.34 & 383.85 & 353.77 & 354.05 & 0.36 & 0.46 & 0.75 \\ \hline NFM & 1 & 20.72 & 30.21 & 60.32 & 60.04 & 0.34 & 0.45 & 0.61 \\ \hline $\textrm{QFM}_{\tau = 0.10}$ & 2 & 1563.63 & 1582.34 & 1624.49 & 1624.03 & 0.29 & 0.33 & 0.28 \\ \hline $\textrm{QFM}_{\tau = 0.25}$ & 2 & 1639.70 & 1644.99 & 1687.14 & 1686.68 & 0.32 & 0.37 & 0.93 \\ \hline $\textrm{QFM}_{\tau = 0.50}$ & 2 & 1373.74 & 1384.87 & 1427.01 & 1426.56 & 0.32 & 0.42 & 0.59 \\ \hline $\textrm{QFM}_{\tau = 0.75}$ & 2 & 1560.27 & 1571.71 & 1613.86 & 1613.40 & 0.34 & 0.41 & 0.70 \\ \hline $\textrm{QFM}_{\tau = 0.90}$ & 2 & 1631.60 & 1646.15 & 1688.30 & 1687.84 & 0.34 & 0.37 & 0.41 \\ \hline TFM & 2 & 86.97 & 96.21 & 138.38 & 137.90 & 0.35 & 0.44 & 0.67 \\ \hline NFM & 2 & 23.65 & 38.36 & 80.51 & 80.05 & 0.31 & 0.36 & 0.34 \\ \hline \end{tabular} \end{table} Figure \ref{fig:mat-beta-datasim1-k1} displays the posterior mean of the factor loadings for the quantile factor model with varying $\tau$, for $k=1$ (a) and $k=2$ (b). The factor loadings estimated in the Normal factor model fit for $k=1$ and $k=2$ are presented in gray in the figure just for comparison. 
When $k=1$ is fixed, independently of the value of $\tau$, the factor loadings estimated are higher for variables $y_{i3}, y_{i4} \textrm{ and } y_{i5}$, showing that these variables are more correlated and better explained by the latent factor than variables $y_{i1} \textrm{ and } y_{i2}$. Even so, the highest factor loadings estimated for these variables occur for $\tau=0.1$, which is exactly the quantile with the highest value of quantile correlation, as shown in Figure \ref{fig:pairs-y-ilustr1}. However, Figure \ref{fig:mat-beta-datasim1-k1} (b) shows that the factor loadings matrix is more influenced by the quantile considered when $k=2$ is assumed than when $k=1$. When $\tau = 0.1$, the first factor is clearly related to variables $y_{i1} \textrm{ and } y_{i2}$, while the second factor is related to the other variables. Thus, the quantile factor model assuming $\tau = 0.1$ captured the correlation structure, while in the Normal factor model this structure is less evident. \begin{figure} \caption{Posterior mean of the factor loadings matrix obtained under the quantile factor model fit assuming $\tau = 0.1, 0.25, 0.5, 0.75 \textrm{ and } 0.9$, for $k=1$ (a) and $k=2$ (b).} \label{fig:mat-beta-datasim1-k1} \end{figure} Therefore, the quantile factor model allows more flexibility to summarize the association between the components for any quantile considered, either through the factor loadings, the latent factor dimension, or their interpretation. Figure \ref{fig:boxplot} displays the boxplots of the posterior means of the latent factors obtained from the quantile factor model fit for the quantiles considered, for $k=1$ and $k=2$. In general, when $k=1$ is assumed, as the quantile increases, the factor mean increases with similar variability. However, when $k=2$, this increase is only present in the first factor, while the second factor remains at a similar level but with different variability depending on the quantile considered.
\begin{figure} \caption{Boxplots of the posterior means of the latent factors obtained in the quantile factor model fit for $\tau = 0.1, 0.25, 0.5, 0.75 \textrm{ and } 0.9$, for $k=1$ and $k=2$.} \label{fig:boxplot} \end{figure} Finally, Table \ref{dv-ilustr1} reports the posterior mean of the variance decomposition defined in equation (\ref{DV}) obtained in the quantile factor model fit assuming $k=1$ and $2$ and $\tau=0.1, 0.5$ and $0.9$. We also include the variance decomposition from the Normal and Student-t factor model fits. The values are reported for each variable and separated by factor, when appropriate. Variables 3, 4 and 5 are in general well explained by all the models considered. However, the quantile factor model with $k=2$ and $\tau = 0.1$ is the only one for which the latent factors explain variables 1 and 2 well. The results obtained assuming $\tau=0.9$ should also be highlighted for the first variable. However, since the variance decomposition presented in equation (\ref{DV}) contains a portion of the uniqueness that depends on $\tau$, it must be carefully evaluated. In particular, for $\tau=0.1$ and 0.9, the uniqueness $\sigma^2_l$ is multiplied by roughly 101 in both cases, while at the median it is multiplied by only 8. Thus, although this inflation in the uniqueness part is lower for $\tau=0.5$ than for $\tau=0.1$, the quantile factor model for $\tau=0.1$ still obtains better results. For comparison purposes, we include a modified version of the variance decomposition (\ref{DV}) that may be useful when the interest lies in comparing results across different quantiles. The modified variance decomposition for variable $l$ is defined in a similar way to the Normal factor model, as: \begin{equation}\label{DVmod} DV^{\textrm{\textit{mod}}}_l = 100\frac{ \displaystyle \sum_{j = 1}^{k} \beta_{lj}^2}{ \displaystyle \sum_{j = 1}^{k} \beta_{lj}^2 + \sigma_l^2 }\%. \end{equation} \begin{table}[h!]
\centering \caption{Posterior means of the variance decomposition and of the modified variance decomposition obtained in the quantile factor model fits with $\tau=0.1$, 0.5 and 0.9, and of the variance decomposition in the Student-t and Normal factor model fits, assuming $k=1$ and $2$.}\label{dv-ilustr1} \begin{tabular}{c|c|ccc|c|ccc} \hline \multicolumn{1}{c}{} & \multicolumn{4}{c|}{Variance decomposition} & \multicolumn{4}{c}{Modified variance decomposition} \\ \cline{2-9} \multicolumn{1}{c}{} & k = 1 & \multicolumn{3}{c|}{k = 2} & k = 1 & \multicolumn{3}{c}{k = 2} \\ \cline{2-9} \multicolumn{1}{c}{} & Fac 1 & Fac 1 & Fac 2 & Total & Fac 1 & Fac 1 & Fac 2 & Total \\ \hline \multicolumn{1}{l}{} & \multicolumn{8}{c}{$\textrm{QFM}_{\tau = 0.1}$ } \\ \hline $y_{i1}$ & 1.3 & 60.5 & 0.0 & 60.5 & 22.6 & 97.1 & 0.0 & 97.1 \\ $y_{i2}$ & 1.5 & 64.0 & 0.2 & 64.2 & 25.6 & 97.3 & 0.3 & 97.6 \\ $y_{i3}$ & 77.8 & 5.4 & 85.8 & 91.2 & 98.7 & 5.9 & 93.7 & 99.6 \\ $y_{i4}$ & 78.0 & 5.5 & 86.0 & 91.5 & 98.8 & 6.0 & 93.6 & 99.6 \\ $y_{i5}$ & 75.9 & 7.0 & 83.1 & 90.1 & 98.6 & 7.7 & 91.8 & 99.5 \\ \hline \multicolumn{1}{l}{} & \multicolumn{8}{c}{$\textrm{QFM}_{\tau = 0.5}$ } \\ \hline \multicolumn{1}{l|}{$y_{i1}$ } & 0.8 & 1.1 & 0.0 & 1.1 & 6.2 & 8.3 & 0.0 & 8.3 \\ \multicolumn{1}{l|}{$y_{i2}$ } & 0.5 & 0.3 & 2.9 & 3.2 & 4.0 & 1.8 & 19.1 & 20.9 \\ \multicolumn{1}{l|}{$y_{i3}$ } & 94.5 & 41.8 & 51.8 & 93.6 & 99.3 & 44.3 & 54.9 & 99.2 \\ $y_{i4}$ & 93.1 & 40.6 & 52.1 & 92.7 & 99.1 & 43.4 & 55.7 & 99.0 \\ $y_{i5}$ & 91.8 & 55.0 & 39.7 & 94.7 & 98.9 & 57.7 & 41.6 & 99.3 \\ \hline \multicolumn{1}{c}{} & \multicolumn{8}{c}{$\textrm{QFM}_{\tau = 0.9}$ } \\ \hline $y_{i1}$ & 0.1 & 60.5 & 0.0 & 60.5 & 2.2 & 97.2 & 0.0 & 97.2 \\ $y_{i2}$ & 0.1 & 32.6 & 0.1 & 32.7 & 2.2 & 91.3 & 0.3 & 91.5 \\ $y_{i3}$ & 78.5 & 0.5 & 88.9 & 89.4 & 98.8 & 0.5 & 98.9 & 99.5 \\ $y_{i4}$ & 79.4 & 0.5 & 88.9 & 89.4 & 98.8 & 0.5 & 98.9 & 99.5 \\ $y_{i5}$ & 77.2 & 0.4 & 88.3 & 88.7 & 98.7 & 0.5 & 98.9 & 99.4 \\ \hline \multicolumn{9}{c}{Variance
decomposition} \\ \hline \multicolumn{1}{c}{} & \multicolumn{4}{c|}{NFM} & \multicolumn{4}{c}{TFM} \\ \hline $y_{i1}$ & 0.6 & 20.0 & 0.0 & 20.0 & 1.5 & 2.0 & 0.0 & 2.0 \\ $y_{i2}$ & 0.1 & 28.7 & 2.1 & 30.8 & 0.0 & 0.0 & 3.6 & 3.6 \\ $y_{i3}$ & 94.7 & 2.6 & 87.9 & 90.6 & 92.5 & 75.4 & 16.1 & 91.5 \\ $y_{i4}$ & 94.2 & 2.3 & 87.3 & 89.6 & 94.8 & 78.5 & 15.1 & 93.6 \\ $y_{i5}$ & 94.0 & 2.2 & 87.2 & 89.3 & 92.3 & 77.6 & 13.3 & 90.9 \\ \hline \end{tabular} \end{table} \subsection{Case study 2} The main aim of this illustration is to evaluate the robustness of the quantile factor model for the median in the presence of outliers, compared to the Normal and Student-t factor models. With this purpose, a sample was generated from a multivariate Student-t model with $p=6$ variables, that is, $y \sim t_6(\mu_0, \Sigma_0, \nu)$, with $\nu = 2.5$ degrees of freedom and shape matrix $\Sigma_0 = \begin{pmatrix} 1 & 0.95 & 0 & 0 & 0 & 0 \\ 0.95 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0.95 & 0 & 0 \\ 0 & 0 & 0.95 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0.95 \\ 0 & 0 & 0 & 0 & 0.95 & 1 \end{pmatrix}.$ In a preliminary exploratory analysis, the scatterplots and quantile correlations estimated for several values of $\tau$, presented in Figure \ref{fig:pairs-y-datasim3t}, show some evidence that three latent factors are necessary to explain the variability, since there are three independent pairs of correlated variables. Note that in general the quantile correlations do not vary across quantiles, except at the higher quantiles, where they tend to increase, probably due to the presence of outliers. \begin{figure} \caption{Lower panels: Scatterplots of the dataset generated in case study 2. Upper panels: posterior mean (solid line) with respective 95\% credible interval (dashed line) of the pairwise quantile correlation varying by quantile.
The gray scale highlights regions for which the correlation is weak ($|\rho_\tau| < 0.3$), moderate ($ 0.3 < |\rho_\tau| < 0.7$) and strong ($|\rho_\tau| > 0.7$). } \label{fig:pairs-y-datasim3t} \end{figure} Thus, we fit the three factor models to the generated dataset assuming $k = 1, 2 \textrm{ and } 3$ and compared their performance in a similar way to case study 1 in Subsection \ref{studycase1}. Table \ref{table-medidas-resumo-datasim3t} reports the results of the model comparison criteria considered. The proposed quantile factor model is the only one for which the criteria point to $k=3$ as the best choice. In the Normal factor model fit, for example, the AIC indicates $k=1$, while in the Student-t factor model fit it suggests $k=2$. For a fixed number of latent factors, note that RPS, MAE and MSE in general indicate that either the quantile factor model for the median or the Student-t model performs best. \begin{table}[h!] \centering \caption{Model comparison criteria for the quantile factor model with $\tau=0.5$, the Normal and Student-t factor models assuming $k=1, 2$ and $3$.}\label{table-medidas-resumo-datasim3t} \begin{tabular}{lcccccccc} \hline Model & k & ICOMP & AIC & BIC & BIC* & RPS & MAE & MSE \\ \hline $\textrm{QFM}_{\tau = 0.5}$ & 1 & 1023.44 & 1042.73 & 1073.99 & 1073.44 & 0.35 & 0.47 & 0.76 \\ \hline NFM & 1 & 21.29 & 38.85 & 70.11 & 69.56 & 0.39 & 0.48 & 0.71 \\ \hline TFM & 1 & 714.77 & 732.88 & 764.15 & 763.57 & 0.37 & 0.46 & 0.74 \\ \hline $\textrm{QFM}_{\tau = 0.5}$ & 2 & 936.05 & 958.66 & 1002.95 & 1002.05 & 0.23 & 0.29 & 0.38 \\ \hline NFM & 2 & 27.53 & 46.28 & 90.57 & 89.66 & 0.26 & 0.29 & 0.35 \\ \hline TFM & 2 & 264.74 & 283.80 & 328.15 & 327.18 & 0.23 & 0.28 & 0.29 \\ \hline $\textrm{QFM}_{\tau = 0.5}$ & 3 & 866.05 & 907.77 & 962.48 & 961.22 & 0.12 & 0.11 & 0.02 \\ \hline NFM & 3 & 11.98 & 53.94 & 108.65 & 107.38 & 0.13 & 0.11 & 0.02 \\ \hline TFM & 3 & 694.56 & 736.47 & 791.18 & 789.91 & 0.11 & 0.10 & 0.02 \\ \hline \end{tabular} \end{table}
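For concreteness, the modified variance decomposition in equation (\ref{DVmod}) reduces to a few lines of array arithmetic applied to the posterior means of the loadings and uniquenesses. A minimal sketch in Python; the loadings and uniquenesses below are purely illustrative, not estimates from the paper:

```python
import numpy as np

def variance_decomposition_mod(beta, sigma2):
    """Modified variance decomposition of equation (DVmod):
    for variable l, 100 * sum_j beta_lj^2 / (sum_j beta_lj^2 + sigma2_l),
    split by factor. Returns (per_factor, total), both in percent."""
    beta = np.atleast_2d(np.asarray(beta, dtype=float))  # p x k loadings
    common = beta ** 2                                   # variance explained per factor
    total_var = common.sum(axis=1) + np.asarray(sigma2)  # common part + uniqueness
    per_factor = 100.0 * common / total_var[:, None]
    return per_factor, per_factor.sum(axis=1)

# Purely illustrative posterior means (p = 3 variables, k = 2 factors)
beta_hat = np.array([[0.9, 0.0],
                     [0.1, 0.8],
                     [0.5, 0.5]])
sigma2_hat = np.array([0.2, 0.3, 0.5])
per_factor, total = variance_decomposition_mod(beta_hat, sigma2_hat)
```

The per-factor columns sum to the total, mirroring the "Fac 1", "Fac 2" and "Total" columns reported in the tables.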
The posterior mean of the factor loadings for the three models considered is presented in Table \ref{matriz-beta-datasim3t}. When models assuming $k = 1$ are fitted, the latent factor only captures the correlation structure between variables 5 and 6. In this case, the estimates obtained in the Normal fit are significantly higher than those obtained in the quantile factor model at the median and the Student-t model fit, which in turn give similar results. When $k=2$ is fixed, the quantile and Normal factor models capture the correlation between variables 1 and 2 in the first latent factor and between variables 5 and 6 in the second latent factor. However, the second latent factor in the Student-t factor model is associated with variables 3 and 4. Finally, when $k=3$ is fixed, the three pairs of correlated variables are correctly identified in both the quantile and Student-t factor models, while the factor loadings estimated in the Normal model fit do not provide a clear distinction between the three pairs of variables, allowing $y_{i3}$, $y_{i4}$, $y_{i5}$ and $y_{i6}$ to be a combination of two latent factors. These results suggest that, since the quantile factor model with $\tau=0.5$ presented performance similar to the Student-t model, it can be viewed as an efficient alternative for studying datasets with outliers, whereas the Normal model seems less suitable in this scenario. The proposed model could be even more advantageous in this case, as it does not require estimation of the degrees of freedom, which is usually a hard parameter to estimate \cite{zhang2014robust}. \begin{table}[h!]
\begin{center} \caption{Posterior mean of the factor loadings matrix obtained in the quantile factor model fit assuming $\tau =0.5$, and the Normal and Student-t factor models, for $k=1, 2$ and $3$.\label{matriz-beta-datasim3t}} \begin{tabular}{c|c|c|cc|cc|cc} \hline \multicolumn{3}{c|}{$k=1$} & \multicolumn{6}{c}{$k=2$} \\ \hline $\textrm{QFM}_{\tau = 0.5}$ & NFM & TFM & \multicolumn{2}{c|}{$\textrm{QFM}_{\tau = 0.5}$} & \multicolumn{2}{c|}{NFM} & \multicolumn{2}{c}{TFM}\\ \hline 0.06 & 0.25 & 0.04 & 0.94 & 0.00 & 1.03 & 0.00 & 0.53 & 0.00 \\ 0.07 & 0.37 & 0.03 & 1.01 & 0.05 & 1.12 & 0.11 & 0.59 & 0.02 \\ -0.03 & 0.20 & 0.07 & -0.30 & 0.04 & -0.16 & 0.25 & 0.18 & 0.61 \\ 0.01 & 0.19 & 0.08 & -0.28 & 0.05 & -0.14 & 0.24 & 0.18 & 0.61 \\ 0.84 & 0.90 & 0.60 & 0.20 & 0.87 & 0.25 & 0.86 & 0.04 & 0.05 \\ 0.89 & 0.97 & 0.62 & 0.08 & 0.88 & 0.17 & 0.96 & 0.03 & 0.11 \\ \hline \multicolumn{9}{c}{$k=3$} \\ \hline \multicolumn{3}{c|}{$\textrm{QFM}_{\tau = 0.5}$} & \multicolumn{3}{c|}{NFM} & \multicolumn{3}{c}{TFM}\\ \hline \multicolumn{1}{c}{1.00} & \multicolumn{1}{c}{0.00} & 0.00 & 1.03 & \multicolumn{1}{c}{0.00} & \multicolumn{1}{c|}{0.00} & \multicolumn{1}{c}{0.54} & \multicolumn{1}{c}{0.00} & 0.00 \\ \multicolumn{1}{c}{1.06} & \multicolumn{1}{c}{0.08} & 0.00 & 1.13 & \multicolumn{1}{c}{0.03} & \multicolumn{1}{c|}{0.00} & \multicolumn{1}{c}{0.60} & \multicolumn{1}{c}{0.07} & 0.00 \\ \multicolumn{1}{c}{-0.17} & \multicolumn{1}{c}{0.14} & 0.74 & -0.14 & \multicolumn{1}{c}{0.34} & \multicolumn{1}{c|}{0.42} & \multicolumn{1}{c}{0.18} & \multicolumn{1}{c}{0.02} & 0.61 \\ \multicolumn{1}{c}{-0.16} & \multicolumn{1}{c}{0.13} & 0.76 & -0.12 & \multicolumn{1}{c}{0.35} & \multicolumn{1}{c|}{0.41} & \multicolumn{1}{c}{0.18} & \multicolumn{1}{c}{0.01} & 0.61 \\ \multicolumn{1}{c}{0.25} & \multicolumn{1}{c}{0.67} & 0.02 & 0.30 & \multicolumn{1}{c}{-0.28} & \multicolumn{1}{c|}{0.35} & \multicolumn{1}{c}{0.02} & \multicolumn{1}{c}{0.62} & 0.02 \\ \multicolumn{1}{c}{0.15} & 
\multicolumn{1}{c}{0.70} & 0.05 & 0.23 & \multicolumn{1}{c}{-0.28} & \multicolumn{1}{c|}{0.41} & \multicolumn{1}{c}{0.08} & \multicolumn{1}{c}{0.66} & 0.07 \\ \hline \end{tabular} \end{center} \end{table} Finally, from the variance decomposition, presented in Table \ref{dv-datasim3t}, it is possible to see that the quantile and Student-t factor model fits for $k=3$ show that all the variables are equally well explained. In the Normal factor model just variables 1 and 2 are well explained by only one latent factor, while variables 3 to 6 are partially explained by the three latent factors, reflecting the estimated factor loadings. \begin{table}[h!] \centering \caption{Posterior mean of the variance decomposition obtained in the quantile factor model fit with $\tau=0.5$, and Normal and Student-t factor model fits, assuming $k=1$, 2 and 3.\label{dv-datasim3t}} \begin{tabular}{ccccccccc} \hline & \multicolumn{1}{c|}{$k=1$} & \multicolumn{3}{c|}{$k=2$} & \multicolumn{4}{c}{$k=3$} \\ \cline{2-9} & \multicolumn{1}{c|}{Fac 1} & Fac 1 & Fac 2 & \multicolumn{1}{c|}{Total} & Fac 1 & Fac 2 & Fac 3 & Total \\ \cline{2-9} \multicolumn{9}{c}{Variance decomposition - $\textrm{QFM}_{\tau = 0.5}$} \\ \hline \multicolumn{1}{c|}{$y_{i1}$} & \multicolumn{1}{c|}{0.5} & 90.9 & 0.0 & \multicolumn{1}{c|}{90.9} & 93.9 & 0.0 & 0.0 & 93.9 \\ \multicolumn{1}{c|}{$y_{i2}$} & \multicolumn{1}{c|}{0.6} & 92.5 & 0.3 & \multicolumn{1}{c|}{92.7} & 93.2 & 0.5 & 0.0 & 93.7 \\ \multicolumn{1}{c|}{$y_{i3}$} & \multicolumn{1}{c|}{0.1} & 10.6 & 0.2 & \multicolumn{1}{c|}{10.8} & 4.6 & 2.9 & 82.1 & 89.5 \\ \multicolumn{1}{c|}{$y_{i4}$} & \multicolumn{1}{c|}{0.1} & 9.2 & 0.3 & \multicolumn{1}{c|}{9.5} & 3.5 & 2.3 & 85.5 & 91.3 \\ \multicolumn{1}{c|}{$y_{i5}$} & \multicolumn{1}{c|}{90.6} & 4.8 & 89.4 & \multicolumn{1}{c|}{94.2} & 10.7 & 80.5 & 0.1 & 91.3 \\ \multicolumn{1}{c|}{$y_{i6}$} & \multicolumn{1}{c|}{88.3} & 0.8 & 88.5 & \multicolumn{1}{c|}{89.2} & 3.6 & 82.9 & 0.5 & 87.0 \\ \hline 
\multicolumn{9}{c}{Variance decomposition - NFM} \\ \hline \multicolumn{1}{c|}{$y_{i1}$} & \multicolumn{1}{c|}{5.6} & 94.3 & 0.0 & \multicolumn{1}{c|}{94.3} & 93.7 & 0.0 & 0.0 & 93.7 \\ \multicolumn{1}{c|}{$y_{i2}$} & \multicolumn{1}{c|}{10.4} & 93.7 & 0.9 & \multicolumn{1}{c|}{94.6} & 94.3 & 0.1 & 0.0 & 94.4 \\ \multicolumn{1}{c|}{$y_{i3}$} & \multicolumn{1}{c|}{3.5} & 2.2 & 5.5 & \multicolumn{1}{c|}{7.7} & 5.0 & 30.5 & 47.1 & 82.7 \\ \multicolumn{1}{c|}{$y_{i4}$} & \multicolumn{1}{c|}{3.2} & 1.7 & 5.1 & \multicolumn{1}{c|}{6.8} & 3.9 & 32.1 & 46.0 & 82.1 \\ \multicolumn{1}{c|}{$y_{i5}$} & \multicolumn{1}{c|}{92.5} & 7.3 & 85.1 & \multicolumn{1}{c|}{92.3} & 25.5 & 22.5 & 34.4 & 82.3 \\ \multicolumn{1}{c|}{$y_{i6}$} & \multicolumn{1}{c|}{92.4} & 2.9 & 90.6 & \multicolumn{1}{c|}{93.6} & 13.8 & 21.8 & 45.9 & 81.5 \\ \hline \multicolumn{9}{c}{Variance decomposition - TFM} \\ \hline \multicolumn{1}{c|}{$y_{i1}$} & \multicolumn{1}{c|}{0.2} & 80.4 & 0.0 & \multicolumn{1}{c|}{80.4} & 82.3 & 0.0 & 0.0 & 82.3 \\ \multicolumn{1}{c|}{$y_{i2}$} & \multicolumn{1}{c|}{0.1} & 85.5 & 0.1 & \multicolumn{1}{c|}{85.7} & 86.9 & 1.1 & 0.0 & 88.1 \\ \multicolumn{1}{c|}{$y_{i3}$} & \multicolumn{1}{c|}{0.4} & 7.1 & 79.3 & \multicolumn{1}{c|}{86.4} & 7.2 & 0.1 & 80.1 & 87.4 \\ \multicolumn{1}{c|}{$y_{i4}$} & \multicolumn{1}{c|}{0.5} & 7.0 & 79.5 & \multicolumn{1}{c|}{86.5} & 7.0 & 0.0 & 80.6 & 87.6 \\ \multicolumn{1}{c|}{$y_{i5}$} & \multicolumn{1}{c|}{84.3} & 0.2 & 0.3 & \multicolumn{1}{c|}{0.6} & 0.1 & 88.1 & 0.1 & 88.4 \\ \multicolumn{1}{c|}{$y_{i6}$} & \multicolumn{1}{c|}{81.2} & 0.1 & 1.1 & \multicolumn{1}{c|}{1.2} & 1.4 & 85.9 & 0.9 & 99.2 \\ \hline \end{tabular} \end{table} \section{Applications to real data} \label{sec4} In this section, the quantile factor model, Normal and Student-t factor models are evaluated using two real datasets with two very different structures. 
In the first application, 5 daily price indexes are analyzed, for which the correlation structure seems to be highly dependent on the quantile considered, motivating the use of our proposed model. The second application is to a dataset from a heart disease study. In this case, the correlation structure does not seem a priori to be quantile dependent, which does not favor the use of the proposed model. For both cases, we considered the same hyperparameters described in Section \ref{sec3} for the prior distributions. The MCMC algorithm was also implemented in the \texttt{R} programming language, version 3.4.1 \cite{teamR}, on a computer with an Intel(R) Core(TM) i5-4590 processor at 3.30 GHz and 8 GB of RAM. For each sample and fitted model, we again ran two parallel chains starting from different values, letting each chain run for 160,000 iterations, discarding the first 10,000 as burn-in, and storing every 50th iteration to avoid possible autocorrelation within the chains. We also used the diagnostic tools available in the CODA package \cite{plummer2006coda} to check convergence of the chains. \subsection{Financial sector dataset}\label{sec:aplicacao3-realdata-1-setorfinanc} The dataset consists of 5 daily price indexes during 2018. In particular, variables $y_{i1}$ to $y_{i5}$, for $i = 1, \dots, n$, represent, respectively, the Russell 2000 (New York, USA), Bell 20 (Brussels, Belgium), IBEX 35 (Madrid, Spain), Straits Times (Singapore) and Hang Seng (Hong Kong, China) indexes.
This dataset was also analyzed to illustrate the quantile correlation measure in \cite{choi2018quantile} and is available at \url{https://realized.oxford-man.ox.ac.uk/}. Figure \ref{fig:pairs-y-realdata} shows that $y_{i2}$, $y_{i3}$, $y_{i4}$ and $y_{i5}$ are correlated for all the quantiles, but the quantile correlation between the Russell 2000 index ($y_{i1}$) and the other indexes is weak in the intermediate quantiles and moderate in the extreme ones; moreover, it is positive in the lowest quantiles and negative in the highest ones. \begin{figure} \caption{Lower panels: Scatterplots of the price indexes. Upper panels: posterior mean (solid line) with respective 95\% credible interval (dashed line) of the pairwise quantile correlation varying by quantile. The gray scale highlights regions for which the correlation is weak ($|\rho_\tau| < 0.3$), moderate ($ 0.3 < |\rho_\tau| < 0.7$) and strong ($|\rho_\tau| > 0.7$).} \label{fig:pairs-y-realdata} \end{figure} Figure \ref{fig:matcor-realdata} displays the quantile correlation matrices for $\tau = 0.1, 0.5 \textrm{ and } 0.9$ and the linear correlation matrix. Similar conclusions are obtained in this case. The linear correlation matrix is similar to the quantile correlations for all the indexes, except the Russell 2000 ($y_{i1}$), for which the highest values of the quantile correlation are obtained when $\tau = 0.1$. It is interesting to observe that for $\tau=0.9$ the quantile correlations between the Russell 2000 and the other indexes are all negative. \begin{figure} \caption{Quantile correlation matrix estimated for $\tau=0.1, 0.5$ and $0.9$ and Pearson correlation matrix for the financial index dataset.} \label{fig:matcor-realdata} \end{figure} Table \ref{table-medidas-resumo-realdata} reports the comparison criteria values for the quantile factor model at several values of $\tau$, and for the Student-t and Normal factor model fits.
When $k=2$ is assumed, RPS, MAE and MSE point to the quantile factor model as the best, especially with $\tau = 0.1$ and $0.9$. On the other hand, ICOMP, AIC, BIC and BIC* suggest that quantile factor models assuming $k=2$ always perform best for all the quantiles considered, while for the Student-t and Normal factor model fits they suggest $k=1$. \begin{table}[h!] \centering \caption{Model comparison criteria for the quantile, Normal and Student-t factor models assuming $\tau = 0.1, 0.25, 0.5, 0.75 \textrm{ and } 0.9$, $k=1$ and $k=2$.} \label{table-medidas-resumo-realdata} \begin{tabular}{lcccccccc} \hline Model & $k$ & ICOMP & AIC & BIC & BIC* & RPS & MAE & MSE \\ \hline $\textrm{QFM}_{\tau = 0.10}$ & 1 & 1334.66 & 1350.15 & 1380.26 & 1379.98 & 0.33 & 0.42 & 0.55 \\ \hline $\textrm{QFM}_{\tau = 0.25}$ & 1 & 1236.79 & 1251.15 & 1281.26 & 1280.97 & 0.30 & 0.39 & 0.41 \\ \hline $\textrm{QFM}_{\tau = 0.50}$ & 1 & 1168.63 & 1181.54 & 1211.65 & 1211.37 & 0.26 & 0.36 & 0.28 \\ \hline $\textrm{QFM}_{\tau = 0.75}$ & 1 & 1256.85 & 1271.38 & 1301.49 & 1301.21 & 0.28 & 0.36 & 0.34 \\ \hline $\textrm{QFM}_{\tau = 0.90}$ & 1 & 1371.69 & 1387.18 & 1417.29 & 1417.01 & 0.32 & 0.40 & 0.46 \\ \hline TFM & 1 & 853.85 & 840.63 & 810.52 & 810.80 & 0.26 & 0.35 & 0.28 \\ \hline NFM & 1 & 14.20 & 28.58 & 58.68 & 58.40 & 0.26 & 0.35 & 0.27 \\ \hline $\textrm{QFM}_{\tau = 0.10}$ & 2 & 1018.54 & 1045.98 & 1088.13 & 1087.67 & 0.14 & 0.16 & 0.05 \\ \hline $\textrm{QFM}_{\tau = 0.25}$ & 2 & 1011.01 & 1037.61 & 1079.76 & 1079.30 & 0.16 & 0.18 & 0.06 \\ \hline $\textrm{QFM}_{\tau = 0.50}$ & 2 & 1031.39 & 1051.75 & 1093.90 & 1093.44 & 0.18 & 0.20 & 0.09 \\ \hline $\textrm{QFM}_{\tau = 0.75}$ & 2 & 1053.68 & 1079.38 & 1121.53 & 1121.07 & 0.17 & 0.18 & 0.06 \\ \hline $\textrm{QFM}_{\tau = 0.90}$ & 2 & 1096.82 & 1124.38 & 1166.53 & 1166.07 & 0.15 & 0.17 & 0.05 \\ \hline TFM & 2 & 1443.50 & 1424.90 & 1382.27 & 1383.21 & 0.18 & 0.22 & 0.11 \\ \hline NFM & 2 & 15.09 & 35.51 & 77.66 & 77.20 & 0.19 & 
0.22 & 0.11 \\ \hline \end{tabular} \end{table} Figures \ref{fig:mat-beta-dadosreais-k1} (a) and \ref{fig:mat-beta-dadosreais-k1} (b) show the posterior mean of the factor loadings obtained under the quantile (in black) and Normal (in gray) factor model fits assuming $k=1$ and 2. When $k = 1$, the latent factor is strongly correlated to variables $y_{i2}, y_{i3}, y_{i4} \textrm{ and } y_{i5}$, as expected. Although the latent factor does not explain the Russell 2000 index ($y_{i1}$) well in general, some improvement can be seen in the factor loadings at the extreme quantiles. On the other hand, from Figure \ref{fig:mat-beta-dadosreais-k1} (b), we notice that while the second latent factor is correlated to variables $y_{i2}, y_{i3}, y_{i4} \textrm{ and } y_{i5}$ for both models and all the quantiles, the first latent factor is more correlated to the Russell 2000 index ($y_{i1}$), mainly at the 10\% and 90\% quantiles, although it also seems to be correlated to the Bell 20 and IBEX 35 indexes ($y_{i2} \textrm{ and } y_{i3}$). The Normal factor model does not capture the Russell 2000 index well in its structure. \begin{figure} \caption{Posterior mean of the factor loadings matrix obtained in the quantile factor model fit assuming $\tau = 0.1, 0.25, 0.5, 0.75 \textrm{ and } 0.9$ (black) and in the Normal factor model fit (gray), for $k=1$ and $2$.} \label{fig:mat-beta-dadosreais-k1} \end{figure} Moreover, from Table \ref{dv-realdata} with the variance decomposition for each model considered, we conclude that while the variability of the Russell 2000 index ($y_{i1}$) explained by the latent factor in the Normal and Student-t model fits does not exceed 50\%, it is higher than 75\% with just one factor for $\tau = 0.1$ and $0.9$. In the two-factor quantile model, all the indexes are well explained, mainly for $\tau = 0.1$ and $0.9$, for which the first latent factor is clearly related to $y_{i1}$ and the second latent factor to the other components.
The modified variance decomposition in equation (\ref{DVmod}) is presented just to reinforce the previous conclusion. \begin{table}[h!] \centering \caption{Posterior mean of the variance decomposition and the modified variance decomposition obtained in the quantile factor model fits with $\tau=0.1$, 0.5 and 0.9, and of the variance decomposition in the Normal and Student-t factor model fits, assuming $k=1$ and $2$.\label{dv-realdata}} \begin{tabular}{c|c|ccc|c|ccc} \hline \multicolumn{1}{c}{} & \multicolumn{4}{c|}{Variance decomposition} & \multicolumn{4}{c}{Modified variance decomposition} \\ \cline{2-9} \multicolumn{1}{c}{} & k = 1 & \multicolumn{3}{c|}{k = 2} & \multicolumn{1}{c}{k = 1} & \multicolumn{3}{c}{k = 2} \\ \cline{2-9} \multicolumn{1}{c}{} & Fac 1 & Fac 1 & Fac 2 & Total & Fac 1 & Fac 1 & Fac 2 & Total \\ \hline \multicolumn{1}{l}{} & \multicolumn{8}{c}{$\textrm{QFM}_{\tau = 0.1}$ } \\ \hline $y_{i1}$ & 12.8 & 94.8 & 0.0 & 94.8 & 76.6 & 99.8 & 0.0 & 99.8 \\ $y_{i2}$ & 74.3 & 26.1 & 61.2 & 87.3 & 98.5 & 29.7 & 69.6 & 99.4 \\ $y_{i3}$ & 73.3 & 21.9 & 65.0 & 86.9 & 98.4 & 25.1 & 74.3 & 99.3 \\ $y_{i4}$ & 76.6 & 3.0 & 87.8 & 90.8 & 98.6 & 3.2 & 96.3 & 99.5 \\ $y_{i5}$ & 77.0 & 3.7 & 86.8 & 90.5 & 98.7 & 4.1 & 95.5 & 99.5 \\ \hline \multicolumn{1}{l}{} & \multicolumn{8}{c}{$\textrm{QFM}_{\tau = 0.5}$ } \\ \hline \multicolumn{1}{l|}{$y_{i1}$ } & 5.7 & 56.8 & 0.0 & 56.8 & 32.5 & 91.3 & 0.0 & 91.3 \\ \multicolumn{1}{l|}{$y_{i2}$ } & 76.6 & 33.8 & 57.9 & 91.6 & 96.3 & 36.4 & 62.4 & 98.9 \\ \multicolumn{1}{l|}{$y_{i3}$ } & 75.1 & 29.8 & 54.4 & 87.2 & 96.0 & 33.6 & 94.6 & 98.2 \\ $y_{i4}$ & 93.4 & 4.8 & 90.7 & 95.5 & 99.1 & 5.0 & 94.4 & 99.4 \\ $y_{i5}$ & 95.3 & 5.2 & 89.7 & 94.9 & 99.4 & 5.4 & 93.9 & 99.3 \\ \hline \multicolumn{1}{c}{} & \multicolumn{8}{c}{$\textrm{QFM}_{\tau = 0.9}$ } \\ \hline $y_{i1}$ & 12.0 & 89.8 & 0.0 & 89.8 & 75.1 & 99.5 & 0.0 & 99.5 \\ $y_{i2}$ & 77.1 & 32.2 & 58.4 & 90.6 & 98.7 & 35.3 & 64.2 & 99.5 \\ $y_{i3}$ & 75.7 & 29.5 & 61.0 & 90.4 & 98.6 & 32.4 &
67.1 & 99.5 \\ $y_{i4}$ & 79.1 & 10.4 & 83.9 & 93.3 & 98.8 & 11.0 & 88.7 & 99.7 \\ $y_{i5}$ & 80.1 & 12.3 & 81.8 & 94.0 & 98.9 & 13.0 & 86.7 & 99.7 \\ \hline \multicolumn{9}{c}{Variance decomposition} \\ \hline \multicolumn{1}{c}{} & \multicolumn{4}{c|}{NFM} & \multicolumn{4}{c}{TFM} \\ \hline $y_{i1}$ & 3.3 & 46.8 & 0.0 & 46.8 & 0.1 & 43.8 & 0.0 & 43.8 \\ $y_{i2}$ & 82.8 & 25.9 & 66.4 & 92.3 & 80.9 & 29.5 & 63.9 & 93.4 \\ $y_{i3}$ & 81.7 & 27.7 & 64.6 & 92.3 & 80.0 & 31.0 & 62.1 & 93.1 \\ $y_{i4}$ & 93.2 & 1.8 & 93.3 & 95.1 & 94.8 & 3.0 & 93.3 & 96.3 \\ $y_{i5}$ & 94.1 & 2.8 & 92.1 & 94.9 & 96.2 & 4.2 & 91.7 & 95.9 \\ \hline \end{tabular} \end{table} \subsection{Heart disease dataset}\label{sec:aplicacao4-realdata-wcgs} Here we analyze a sample extracted from the dataset {\tt wcgs} available in the \textit{faraway} package of R \cite{citation_faraway}. It consists of data on 3154 healthy young men aged from 39 to 59 years old from the San Francisco area. All patients were free from coronary heart disease at the start of the study. Eight and a half years later, changes in this situation were recorded. In particular, this application was performed with a random sample of $n=100$ individuals and is concentrated in the following variables: weight ($y_{i1}$), height ($y_{i2}$), systolic blood pressure ($y_{i3}$), diastolic blood pressure ($y_{i4}$) and fasting serum cholesterol ($y_{i5}$). From Figure \ref{fig:pairs-y-realdata2} it can be seen that weight and height ($y_{i1}$ and $y_{i2}$, respectively) are strongly correlated, as are the systolic and diastolic blood pressure ($y_{i3}$ and $y_{i4}$, respectively). The aim of this application is to evaluate the proposed quantile model's performance in comparison with the Student-t and the Normal ones, in a real scenario that does not seem to favor a quantile analysis a priori. \begin{figure} \caption{Lower panels: Scatterplots of the heart disease dataset. 
Upper panels: posterior mean (solid line) with respective 95\% credible interval (dashed line) of the pairwise quantile correlation varying by quantile. The gray scale highlights regions for which the correlation is weak ($|\rho_\tau| < 0.3$), moderate ($ 0.3 < |\rho_\tau| < 0.7$) and strong ($|\rho_\tau| > 0.7$).} \label{fig:pairs-y-realdata2} \end{figure} Additionally, Figure \ref{fig:matcor-realdata2} presents the quantile correlation matrix for $\tau = 0.1, 0.5 \textrm{ and } 0.9$ and the linear correlation matrix. In fact, the quantile correlation is very similar to the linear one, mainly when $\tau=0.5$ is assumed. Thus, in this application we considered only the proposed quantile factor model with $\tau=0.5$ and compared it to the Normal and Student-t factor model fits, assuming $k=1$ and $k=2$. \begin{figure} \caption{Quantile correlation matrix estimated for $\tau=0.1, 0.5$ and $0.9$ and Pearson correlation matrix for the heart disease dataset.} \label{fig:matcor-realdata2} \end{figure} Table \ref{table-medidas-resumo-realdata2} reports the model comparison criteria results for the three models fitted. First, with respect to the number of factors, while all the criteria point to $k=2$ as the best setting in the quantile and Student-t factor models, AIC, BIC and BIC* point to $k=1$ as the best in the Normal fit. Then, for $k = 2$ fixed, the RPS, MAE and MSE criteria do not differ much across the different models. However, while the RPS criterion suggests that the quantile factor model at the median performs best, MAE favors the Student-t factor model and MSE points equally to the Student-t and Normal factor models. \begin{table}[h!]
\centering \caption{Model comparison criteria for the quantile factor model for $\tau=0.5$, and the Normal and Student-t factor models for $k=1$ and $k=2$.} \label{table-medidas-resumo-realdata2} \begin{tabular}{lcccccccc} \hline Model & k & ICOMP & AIC & BIC & BIC* & RPS & MAE & MSE \\ \hline $\textrm{QFM}_{\tau = 0.5}$ & 1 & 1259.28 & 1276.97 & 1303.02 & 1302.59 & 0.43 & 0.58 & 0.66 \\ \hline NFM & 1 & 14.70 & 33.51 & 59.57 & 59.14 & 0.45 & 0.59 & 0.65 \\ \hline TFM & 1 & 243.45 & 261.86 & 287.91 & 287.48 & 0.44 & 0.57 & 0.64 \\ \hline $\textrm{QFM}_{\tau = 0.5}$ & 2 & 1222.10 & 1247.25 & 1283.72 & 1283.03 & 0.35 & 0.42 & 0.38 \\ \hline NFM & 2 & 13.99 & 40.96 & 77.44 & 76.74 & 0.37 & 0.43 & 0.34 \\ \hline TFM & 2 & 7.85 & 34.58 & 71.06 & 70.36 & 0.36 & 0.41 & 0.34\\\hline \end{tabular} \end{table} Table \ref{matriz-beta-realdata2} reports the posterior mean of the factor loadings for the models considered. The results are, in general, very similar. When $k = 1$, the latent factor is mainly related to weight and height ($y_{i1}$ and $y_{i2}$), while fasting serum cholesterol ($y_{i5}$) presents the lowest value of the associated factor loading, reflecting the structure presented in Figures \ref{fig:pairs-y-realdata2} and \ref{fig:matcor-realdata2}. On the other hand, when $k=2$ is fixed, there is a clear separation of the original variables between the two latent factors considered. While the first factor is related to weight and height ($y_{i1}$ and $y_{i2}$), the second factor explains the systolic and diastolic blood pressure ($y_{i3}$ and $y_{i4}$). The fasting serum cholesterol is almost equally explained by both factors. Moreover, since Figures \ref{fig:pairs-y-realdata2} and \ref{fig:matcor-realdata2} show a possible inverse correlation between the cholesterol component and the weight and height components, which are well explained by the first factor, we would expect a negative value for the factor loading of this component.
However, this happens only for the quantile factor model at the median and the Normal one. In general, the results are very similar for the three models fitted. \begin{table}[h!] \centering \caption{Posterior mean of the factor loadings matrix obtained in the quantile factor model fit assuming $\tau =0.5$, and the Normal and Student-t factor models for $k=1$ and $2$.\label{matriz-beta-realdata2}} \begin{tabular}{c|c|c|cc|cc|cc} \hline \multicolumn{3}{c|}{$k=1$} & \multicolumn{6}{c}{$k=2$} \\ \hline $\textrm{QFM}_{\tau = 0.5}$ & NFM & TFM & \multicolumn{2}{c|}{$\textrm{QFM}_{\tau = 0.5}$} & \multicolumn{2}{c|}{NFM} & \multicolumn{2}{c}{TFM} \\ \hline 0.57 & 0.68 & 0.59 & 0.66 & 0.00 & 0.85 & 0.00 & 0.80 & 0.00 \\ 0.97 & 0.90 & 0.86 & 0.90 & 0.30 & 0.73 & 0.40 & 0.69 & 0.36 \\ 0.24 & 0.30 & 0.34 & 0.06 & 0.64 & 0.02 & 0.72 & 0.05 & 0.63 \\ 0.29 & 0.29 & 0.31 & 0.05 & 0.88 & 0.00 & 0.76 & 0.01 & 0.76 \\ -0.16 & -0.14 & 0.12 & -0.20 & 0.16 & -0.32 & 0.27 & 0.30 & 0.21 \\ \hline \end{tabular} \end{table} Finally, Table \ref{dv-realdata2} presents the posterior mean of the variance decomposition for the three models considered. In general, the two-factor structure explains the original variability well for the three models considered, except for the fasting serum cholesterol component ($y_{i5}$). The Normal and Student-t factor models present similar results. On the other hand, for comparison purposes, these results reveal a case for which the modified variance decomposition in equation (\ref{DVmod}) may be useful. For example, when $k=1$, although the latent factor is strongly related to the weight and height components, the proposed model better explains the systolic and diastolic blood pressure than the other models according to the modified variance decomposition.
Moreover, under the variance decomposition the quantile factor model evaluated at the median explains about 40\% of the variability assuming the two-factor structure, while under the modified variance decomposition this percentage increases to 80\%. \begin{table}[h!] \centering \caption{Posterior mean of the modified variance decomposition and variance decomposition obtained in the quantile factor model fit with $\tau=0.5$, and Normal and Student-t factor model fits, assuming $k=1$ and 2.\label{dv-realdata2}} \begin{tabular}{ccccccccc} \hline & \multicolumn{8}{c}{$\textrm{QFM}_{\tau = 0.5}$} \\ \cline{2-9} & \multicolumn{1}{c|}{k = 1} & \multicolumn{3}{c|}{k = 2} & \multicolumn{1}{c|}{k = 1} & \multicolumn{3}{c}{k = 2} \\ \cline{2-9} & \multicolumn{1}{c|}{Fac 1} & Fac 1 & Fac 2 & \multicolumn{1}{c|}{Total} & \multicolumn{1}{c|}{Fac 1} & Fac 1 & Fac 2 & Total \\ \cline{2-9} & \multicolumn{4}{c|}{Variance decomposition} & \multicolumn{4}{c}{Modified variance decomposition} \\ \hline \multicolumn{1}{c|}{$y_{i1}$} & \multicolumn{1}{c|}{28.5} & 38.8 & 0.0 & \multicolumn{1}{c|}{38.8} & \multicolumn{1}{c|}{76.1} & 83.6 & 0.0 & 83.6 \\ \multicolumn{1}{c|}{$y_{i2}$} & \multicolumn{1}{c|}{87.6} & 74.5 & 8.5 & \multicolumn{1}{c|}{83.0} & \multicolumn{1}{c|}{98.3} & 87.5 & 10.0 & 97.5 \\ \multicolumn{1}{c|}{$y_{i3}$} & \multicolumn{1}{c|}{5.4} & 0.3 & 40.7 & \multicolumn{1}{c|}{41.0} & \multicolumn{1}{c|}{31.4} & 0.7 & 84.1 & 84.7 \\ \multicolumn{1}{c|}{$y_{i4}$} & \multicolumn{1}{c|}{7.4} & 0.2 & 71.4 & \multicolumn{1}{c|}{71.6} & \multicolumn{1}{c|}{38.9} & 0.3 & 95.0 & 95.3 \\ \multicolumn{1}{c|}{$y_{i5}$} & \multicolumn{1}{c|}{2.0} & 3.2 & 2.1 & \multicolumn{1}{c|}{5.3} & \multicolumn{1}{c|}{14.1} & 18.7 & 12.3 & 30.9 \\ \hline \multicolumn{1}{l}{} & \multicolumn{4}{c|}{NFM} & \multicolumn{4}{c}{TFM} \\ \cline{2-9} \multicolumn{1}{l}{} & \multicolumn{4}{c|}{Variance decomposition} & \multicolumn{4}{c}{Variance decomposition} \\ \hline \multicolumn{1}{l|}{$y_{i1}$} & 
\multicolumn{1}{c|}{45.2} & 70.1 & 0.0 & \multicolumn{1}{c|}{70.1} & \multicolumn{1}{c|}{38.4} & 70.0 & 0.0 & 70.0 \\ \multicolumn{1}{l|}{$y_{i2}$} & \multicolumn{1}{c|}{77.0} & 52.2 & 15.2 & \multicolumn{1}{c|}{67.4} & \multicolumn{1}{c|}{82.6} & 50.3 & 13.4 & 63.6 \\ \multicolumn{1}{l|}{$y_{i3}$} & \multicolumn{1}{c|}{8.9} & 0.0 & 51.1 & \multicolumn{1}{c|}{51.1} & \multicolumn{1}{c|}{11.4} & 0.3 & 47.1 & 47.4 \\ \multicolumn{1}{c|}{$y_{i4}$} & \multicolumn{1}{c|}{8.0} & 0.0 & 56.8 & \multicolumn{1}{c|}{56.8} & \multicolumn{1}{c|}{8.8} & 0.0 & 64.4 & 64.4 \\ \multicolumn{1}{c|}{$y_{i5}$} & \multicolumn{1}{c|}{1.9} & 9.7 & 7.7 & \multicolumn{1}{c|}{17.4} & \multicolumn{1}{c|}{1.6} & 9.6 & 4.6 & 14.2 \\ \hline \end{tabular} \end{table} \section{Conclusions} \label{sec5} We propose a new class of models, named quantile factor models, which is advantageous compared to standard factor models since it provides richer information about the latent factors, is robust to heteroscedasticity and outliers, and can accommodate the non-normal errors often encountered in practical applications. The method not only allows factor loadings to vary across the quantiles, but also the latent factors. For the inference procedure, we presented a MCMC algorithm based mostly on Gibbs sampling for the location-scale mixture representation of the $\mathcal{AL}_p$ distribution. The method is also an alternative to the quantile correlation, which may be used in some applications as a preliminary exploratory measure. We evaluated the QFM in artificial and real datasets and compared it with the Normal and Student-t factor models. We concluded that the proposed model is a robust alternative to these, in some cases having similar performance to the Student-t one. On the other hand, the flexibility of the method shows that the quantile to be tracked depends on the specific aims of the problem and different results and interpretations can be obtained for each case. 
Model comparison criteria and variance decomposition were used not only to evaluate the model's performance under different quantiles but also to infer the latent factor dimension for each quantile considered. \appendix \section{Multivariate asymmetric Laplace distribution} A random vector $X \in \mathbb{R}^p$ is said to have a multivariate asymmetric Laplace ($\mathcal{AL}_p$) distribution with parameters $\mu$ and $\Psi$, that is, $X\sim \mathcal{AL}_p(\mu, \Psi)$, if its characteristic function is given by: \begin{equation*} \psi(t) = \frac{1}{1 + \frac{1}{2}t'\Psi t - i\mu't}, \end{equation*} where $\mu \in \mathbb{R}^p$ and $\Psi$ is a $p \times p$ non-negative definite symmetric matrix \cite[ch. 6]{kotz2012laplace}. Moreover, we have that $E(X) = \mu$ and $Var(X) = \mu\mu' + \Psi.$ As in the univariate case, the $\mathcal{AL}_p$ distribution admits a location-scale mixture representation in terms of Normal and Exponential distributions. Let $V \sim N_p(0, \Psi)$ and $W \sim Exp(1)$; then we can write $X$ using the following mixture representation: \begin{equation}\label{eq:mistura-laplace-assimetrica-multivariada} X = \mu W + \sqrt{W} V. \end{equation} Moreover, all univariate marginals are $\mathcal{AL}$-distributed. That is, for $X = (X_1, \dots, X_p)'$, $\mu = (\mu_1, \dots, \mu_p)'$ and $\Psi=(\psi_{ij})_{i,j=1}^p$, we have that $X_{l} \sim \mathcal{AL}(\mu_l, \psi_{ll})$ for all $l = 1, \dots, p$ and: \begin{equation*} E(X_{l}) = \mu_l ,\; Var(X_{l}) = \mu_l^2+\psi_{ll} \, \mbox { and } \, Cov(X_{l}, X_{h}) = \mu_l\mu_{h} + \psi_{lh}. \end{equation*} Coming back to the notation used in the proposed model in (\ref{eq:definicao-mfq}), we get $\mu=m$ and $\Psi=\Delta$.
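The mixture representation (\ref{eq:mistura-laplace-assimetrica-multivariada}) also gives a direct way to simulate from the $\mathcal{AL}_p$ distribution and to check the moment formulas above by Monte Carlo. A minimal sketch in Python, with purely illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(0)

def rmv_al(n, mu, Psi):
    """Simulate n draws of X ~ AL_p(mu, Psi) via the mixture
    X = mu * W + sqrt(W) * V, with W ~ Exp(1) and V ~ N_p(0, Psi)."""
    mu = np.asarray(mu, dtype=float)
    W = rng.exponential(1.0, size=n)
    V = rng.multivariate_normal(np.zeros(len(mu)), Psi, size=n)
    return mu * W[:, None] + np.sqrt(W)[:, None] * V

# Illustrative parameters (p = 2)
mu = np.array([0.5, -0.3])
Psi = np.array([[1.0, 0.4],
                [0.4, 1.0]])
X = rmv_al(200_000, mu, Psi)
# Monte Carlo check of the stated moments:
# X.mean(axis=0) should be close to mu, and
# np.cov(X, rowvar=False) close to mu mu' + Psi
```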
On the other hand, the univariate quantile regression in (\ref{eq:mistura-laplace-assimetrica-univariada}) is obtained in this case by assuming that $\mu_l= m_l= \sigma_{l} a_\tau $ and $\psi_{ll}= \delta_{ll} = \sigma^2_{l} b_\tau^2$, for $a_\tau$ and $b_\tau^2$ defined in equation (\ref{eq:mistura-laplace-assimetrica-univariada}). More details about the $\mathcal{AL}_p$ distribution can be found in \cite[ch. 6]{kotz2012laplace}. \section{Full conditional posterior distributions of the parameters in the proposed model} In this section we present the posterior full conditional distributions of the components of the parameter vector $\Theta$. We denote the posterior full conditional of a parameter $\theta$ in $\Theta$ by $p(\theta\mid \dots)$. \subsection{Full conditional posterior distribution of \texorpdfstring{$w_i$}{wi}} For $i = 1, \dots, n$, the posterior full conditional of the location-scale mixture parameter $w_i$ is proportional to: \begin{eqnarray*} p(w_i\mid \dots) &\propto & p(y_i \mid f_i, w_i)\, p(w_i)\\ & \propto & w_i^{-p/2}\exp\left\{-\frac{1}{2}\left[(2 + m'\Delta^{-1}m)w_i+(y_i - \beta f_i)'\Delta^{-1}(y_i - \beta f_i)w_i^{-1}\right]\right\}. \end{eqnarray*} Therefore, \begin{equation*} w_i \mid \cdot \sim \mathcal{GIG}\left(1-\frac{p}{2}, 2 + m'\Delta^{-1}m, (y_i - \beta f_i)'\Delta^{-1}(y_i - \beta f_i) \right) , \end{equation*} where $\mathcal{GIG}$ denotes the generalized inverse Gaussian distribution \cite{jorgensen2012statistical}.
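As an implementation note (our own sketch; the function and variable names are ours), the three parameters of this $\mathcal{GIG}$ full conditional can be assembled from the current values of the chain as follows, in the same order as displayed above.

```python
import numpy as np

def gig_params_wi(y_i, f_i, beta, m, Delta):
    """Parameters of the GIG full conditional of w_i, in the order
    (1 - p/2,  2 + m' Delta^{-1} m,  (y_i - beta f_i)' Delta^{-1} (y_i - beta f_i))."""
    p = len(y_i)
    Dinv = np.linalg.inv(Delta)
    resid = y_i - beta @ f_i
    return 1.0 - p / 2.0, 2.0 + m @ Dinv @ m, resid @ Dinv @ resid
```

A draw from the $\mathcal{GIG}$ with these parameters can then be obtained from any dedicated GIG sampler.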
\subsection{Full conditional posterior distribution of \texorpdfstring{$f_i$}{fi}} For $i = 1, \dots, n$, the posterior full conditional of the latent factors $f_i$ is proportional to: \begin{eqnarray*} p(f_i\mid \dots) &\propto & p(y_i \mid f_i, w_i)\, p(f_i) \\ & \propto & \exp \left[-\frac{1}{2} \left((y_i - \beta f_i - mw_i)'(w_i \Delta)^{-1}(y_i - \beta f_i - mw_i) + f_i'I_kf_i \right)\right]. \end{eqnarray*} Therefore, \begin{equation*} f_i \mid \cdot \sim N\displaystyle \left( \Omega \beta'(w_i \Delta)^{-1}(y_i - m w_i), \Omega \right), \quad \textrm{where} \quad \Omega^{-1} = \beta'(w_i \Delta)^{-1}\beta + I_k. \end{equation*} \subsection{Full conditional posterior distribution of \texorpdfstring{$\beta_j$}{betaj}} Recall that $\beta_j$ denotes the $j$-th row of the factor loadings matrix $\beta$. For $j = 1, \dots, p$, the posterior full conditional of $\beta_j$ is proportional to: \begin{eqnarray*} p(\beta_j\mid \dots) &\propto & \displaystyle \prod_{i=1}^{n} p(y_i \mid \beta, f_i, w_i) \displaystyle \prod_{j=1}^{k} p(\beta_j) \displaystyle \prod_{j=k+1}^{p} p(\beta_j) \\ & \propto & \exp \left(-\frac{1}{2} \displaystyle \sum_{i=1}^{n}(y_i - \beta f_i - mw_i)'(w_i \Delta)^{-1}(y_i - \beta f_i - mw_i)\right)\\ & \times & \displaystyle \prod_{j=1}^k \exp \left( -\frac{1}{2} \beta_j (C_0 I_j)^{-1} {\beta_j}' \right) \mathbb{I}(\beta_{jj} \geq 0) \displaystyle \prod_{j=k+1}^p \exp \left( -\frac{1}{2} \beta_j (C_0I_k)^{-1} {\beta_j}' \right). \end{eqnarray*} Therefore, \begin{itemize} \item for $j \leq k$: \begin{align*} & \beta_j | \cdot \sim N(C_1^* r^*, C_1^*) \mathbb{I}(\beta_{jj}>0), \quad \textrm{where} \\ & \quad {C_1^*}^{-1} = C_0 I_j + \displaystyle \sum_{i=1}^{n} w_i^{-1} \delta_{jj}^{-2} f_i^j {f_i^j}' \quad \textrm{and} \quad r^* = \displaystyle \sum_{i=1}^{n} f_i^j \delta_{jj}^{-2} (w_i^{-1} y_{ij} - m_j) \end{align*} \item for $j > k$: \begin{align*} & \beta_j | \cdot
\sim N(C_1 r, C_1), \quad \textrm{ where } \\ & \quad C_1^{-1} = C_0 I_k + \displaystyle \sum_{i=1}^{n} w_i^{-1} \delta_{jj}^{-2} f_i f'_i \quad \textrm{and} \quad r = \displaystyle \sum_{i=1}^{n} f_i \delta_{jj}^{-2} (w_i^{-1} y_{ij} - m_j) \end{align*} \end{itemize} \subsection{Full conditional posterior distribution of \texorpdfstring{$\sigma^{-2}_j$}{sigmaj}} For $j=1,\dots,p$, the posterior full conditional of the inverse idiosyncratic variance $\sigma^{-2}_j$ is proportional to: \begin{equation*} \begin{array}{lll} p(\sigma^{-2}_j \mid \dots ) \propto & (\sigma^{-2}_j)^{\frac{n + \nu}{2}-1} \exp \left\{ (-\sigma^{-2}_j) \left( \frac{\nu s^2}{2} + \frac{\tau(1-\tau)}{4} \displaystyle \sum_{i=1}^n \frac{(y_{ij} - \beta_j f_i)^2}{w_i} \right) \right\}\\ & \exp \left\{ (-\sigma^{-1}_j) \left( \frac{1-2\tau}{2} \displaystyle \sum_{i=1}^n (y_{ij} - \beta_j f_i) \right) \right\}, \end{array} \end{equation*} which does not have a closed analytical form. We use the Metropolis--Hastings algorithm with a lognormal proposal, in which the mean of the associated normal is the logarithm of the current value of the parameter and the variance is tuned beforehand to provide acceptance rates between 25\% and 45\%. However, in the particular case $\tau = 1/2$, $ p(\sigma^{-2}_j \mid \dots )$ has a closed analytical form and \begin{equation*} \sigma^{-2}_j \mid \cdot \sim Gamma \left( \frac{n + \nu}{2} , \frac{\nu s^2}{2} + \frac{\tau(1-\tau)}{4} \displaystyle \sum_{i=1}^n \frac{(y_{ij} - \beta_j f_i)^2} {w_i} \right). \end{equation*} Thus, when $\tau = 1/2$, all the full conditional distributions have closed form and the Gibbs sampler alone is used to obtain samples from $p(\Theta\mid\dots)$. \section{Model comparison criteria}\label{sec:criterio-selecao-modelo} In this section we briefly describe the model comparison criteria used to compare the fitted models in Section \ref{sec3}.
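The Metropolis--Hastings update described above can be sketched as follows (a generic single-parameter illustration; the step size and the toy target are placeholders of ours, not values from the paper). Proposing on the log scale introduces the Jacobian factor $\theta^{*}/\theta$ in the acceptance ratio.

```python
import math
import random

def mh_lognormal_step(theta, log_target, step_sd, rng):
    """One Metropolis-Hastings update for a positive parameter, using a
    lognormal proposal whose associated normal is centred at log(theta).
    The log(prop) - log(theta) term is the Jacobian correction."""
    prop = theta * math.exp(rng.gauss(0.0, step_sd))
    log_ratio = (log_target(prop) - log_target(theta)
                 + math.log(prop) - math.log(theta))
    if math.log(rng.random()) < log_ratio:
        return prop, True    # accepted
    return theta, False      # rejected

# toy target: unnormalized Gamma(3, 2) log-density
log_target = lambda t: 2.0 * math.log(t) - 2.0 * t

rng = random.Random(42)
theta, accepted = 1.0, 0
for _ in range(5000):
    theta, ok = mh_lognormal_step(theta, log_target, 0.8, rng)
    accepted += ok
```

In practice `step_sd` would be tuned until the empirical acceptance rate falls in the desired 25--45\% range.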
\subsection{Likelihood and information criteria} Traditional likelihood-based model selection criteria include variants of the Akaike information criterion (AIC) \cite{akaike1974new}, the Bayesian information criterion (BIC) \cite{schwarz1978estimating}, and related information criteria such as the informational complexity (ICOMP) \cite{bozdogan1998bayesian}. From equation (\ref{eq:definicao-mfq}), marginally on the latent factors we have: \begin{equation*} y_i \mid \beta, \Delta, w_i \sim N_p(m w_i, \Lambda), \end{equation*} where $\Lambda = \beta \beta' + w_i \Delta$. It follows that the likelihood function is given by: \begin{equation*} L = \prod_{i=1}^n |\Lambda|^{-1/2} (2\pi)^{-p/2} \exp \left[ -\frac{1}{2} (y_i - m w_i)'\Lambda^{-1} (y_i - m w_i) \right]. \end{equation*} Define $l = -2\log(L)$. Then, the various model selection criteria are defined as follows: (i) $AIC = l + 2 p_k$, (ii) $BIC = l + \log(n) p_k$, (iii) $BIC^* = l + \log(\tilde{n}) p_k$ and (iv) $ICOMP = l + g( \hat{\Delta})$, where \begin{equation*} p_k = p(k+1) - \frac{k}{2}(k-1), \,\, \tilde{n} = n - \frac{2p + 11}{6} - \frac{2k}{3} \textrm{ and } \end{equation*} \begin{equation*} g(\Delta) = 2(k+1) \left[\frac{p}{2}\log \left(\operatorname{trace}(\Delta) \frac{1}{m}\right) - \frac{1}{2}\log |\Delta| \right]. \end{equation*} Smaller values of these criteria indicate the best model among the fitted ones. \subsection{Ranked probability scores} \cite{gneiting2007probabilistic} considered scoring rules to assess the quality of probabilistic forecasts. In particular, the continuous ranked probability score (CRPS) is computed as follows.
For each observation, the RPS can be expressed as: \begin{equation*} RPS = \frac{1}{pn} \displaystyle \sum_{j=1}^p \sum_{i=1}^n \mathbb{E} \left[ | y_{\textrm{rep},ij} - y_{ij} | \right] - \frac{1}{2} \mathbb{E} \left[ | y_{\textrm{rep},ij} - \tilde{y}_{\textrm{rep},ij} | \right], \end{equation*} where $y_{\textrm{rep},ij}$ and $\tilde{y}_{\textrm{rep},ij}$ are independent replicates from the posterior predictive distribution. Assuming there is a sample of size $T$ from the posterior distribution of the parameters in the model, the previous expectations are approximated by $\frac{1}{T}\sum_{t=1}^T | y_{\textrm{rep},ij}^{(t)} - y_{ij} |$ and $\frac{1}{T}\sum_{t=1}^T | y_{\textrm{rep},ij}^{(t)} - \tilde{y}_{\textrm{rep},ij}^{(t)} | $. Smaller values of RPS indicate the best model among the fitted ones. \subsection{Mean absolute error and mean square error} Standard goodness-of-fit measures were also considered in this study for comparison purposes. They are the mean square error (MSE) and the mean absolute error (MAE): \begin{equation*} MSE = \frac{1}{pn} \displaystyle \sum_{j=1}^p \sum_{i=1}^n \left( y_{ij} - \hat{y}_{ij} \right) ^2 \textrm{ and } MAE = \frac{1}{pn} \displaystyle \sum_{j=1}^p \sum_{i=1}^n | y_{ij} - \hat{y}_{ij} |, \end{equation*} where $\hat{y}_{ij}$ is obtained through a Monte Carlo estimate of the posterior mean of the predictive distribution, across $T$ draws. Smaller values of MSE and MAE indicate the best model among the fitted ones. \end{document}
\begin{document} \maketitle \normalem \begin{abstract} There are two bijections from unit interval orders on $n$ elements to Dyck paths from $(0,0)$ to $(n,n)$. One is to consider the pairs of incomparable elements, which form the set of boxes between some Dyck path and the diagonal. Another is to find a particular part listing (in the sense of Guay-Paquet) which yields an isomorphic poset, and to interpret the part listing as the area sequence of a Dyck path. Matherne, Morales, and Selover conjectured that, for any unit interval order, these two Dyck paths are related by Haglund's well-known zeta bijection. In this paper we prove their conjecture. \end{abstract} \section{Introduction}\label{intro} Let $\mathcal I=\{I_1,\dots,I_n\}$ be a set of $n$ intervals of unit length, numbered from left to right. $U(\mathcal I)$ is the poset on the elements 1 to $n$ which is defined by $i\prec j$ if and only if interval $I_i$ is strictly to the left of interval $I_j$. The posets that can be described in this way are called \emph{unit interval orders}. We write $\mathcal U_n$ for the collection of unit interval orders on $\{1,\dots,n\}$. A \emph{Dyck path} of length $n$ is a path which proceeds by steps of $(1,0)$ and $(0,1)$ from $(0,0)$ to $(n,n)$, while not passing below the diagonal line from $(0,0)$ to $(n,n)$. Dyck paths are counted by the well-known Catalan numbers. We write $\mathcal D_n$ for the collection of Dyck paths of length $n$. A Dyck path can be specified by identifying the collection of complete $1\times 1$ boxes with vertices at lattice points which lie between the Dyck path and the diagonal line. We refer to the collection of these boxes as the \emph{area set} of the Dyck path. The boxes which may appear in the area set of a Dyck path in $\mathcal D_n$ can be indexed by pairs $(i,j)$ with $1\leq i <j \leq n$, where the pair $(i,j)$ corresponds to the box $i-1\leq x \leq i$, $j-1\leq y\leq j$. 
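To make the construction concrete, here is a small sketch (added by us for illustration; the names are ours) that builds the strict order of $U(\mathcal I)$ from the left endpoints of the unit intervals and extracts the pairs of incomparable elements, which index boxes $(i,j)$ as above.

```python
def unit_interval_order(lefts):
    """Strict order of U(I): i < j (1-based) with I_i strictly to the left
    of I_j, i.e. lefts[i] + 1 < lefts[j] for unit-length intervals.
    `lefts` must be sorted, so intervals are numbered left to right."""
    n = len(lefts)
    return {(i + 1, j + 1) for i in range(n) for j in range(n)
            if lefts[i] + 1 < lefts[j]}

def incomparable_boxes(lefts):
    """Boxes (i, j), i < j, with i and j incomparable in U(I)."""
    rel = unit_interval_order(lefts)
    n = len(lefts)
    return {(i, j) for i in range(1, n + 1) for j in range(i + 1, n + 1)
            if (i, j) not in rel}
```

For example, for intervals with left endpoints $0$, $0.5$ and $2$, intervals $1$ and $2$ overlap while interval $3$ is strictly to the right of both.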
The \emph{area sequence} of a Dyck path $D\in\mathcal D_n$ is the sequence $(a_1,\dots,a_n)$ consisting of the number of boxes in the area set on each horizontal line, from bottom to top. The possible area sequences of Dyck paths are characterized by the fact that $a_1=0$ and $a_i\leq a_{i-1}+1$. For $U$ a unit interval order, let $a(U)$ be the Dyck path whose area set is given by the boxes $(i,j)$ such that $i<j$ and $i\not\prec j$. In Section \ref{Area}, we will recall the proof that this defines a bijection from $\mathcal U_n$ to $\mathcal D_n$. We now turn to the definition of a second map from unit interval orders to Dyck paths, as described in Section \ref{PL}. Let $w=(w_1,\dots,w_n)$ be a sequence of non-negative integers. Associated to $w$ is a poset $P(w)$ defined as follows \cite[Section 2]{GP}. For $1\leq i,j\leq n$, we set $i\prec j$ if either of the following is satisfied: \begin{itemize} \item $w_j -w_i\geq 2$, \item or $w_j-w_i =1$ and $i<j$. \end{itemize} For any $w$, the poset $P(w)$ is isomorphic to a unique unit interval order. The word $w$ is referred to as a \emph{part listing} for $P(w)$. Note that the labelling of the elements of $P(w)$ and the labelling of the elements of the isomorphic unit interval order may not be the same. For a unit interval order $U\in \mathcal U_n$, there is a unique part listing $w$ such that $P(w)$ is isomorphic to $U$ and $w$ is the area sequence of a Dyck path. Define $\tilde p(U)$ to be this part listing. Define $p(U)$ to be the Dyck path whose area sequence is $\tilde p(U)$. We have now recalled two ways to associate a Dyck path to a unit interval order, namely the maps $a$ and $p$. Considering these two maps, Matherne, Morales, and Selover raised the question of how they are related. (In fact, the definition of $p$ used in \cite{MMS} is slightly different from what we have taken as the definition, and seems to include a small inaccuracy.
The authors of \cite{MMS} have confirmed to us that the map $p$ which we consider is the one which they intended.) On the basis of computer evidence, Matherne, Morales, and Selover made the following conjecture: \begin{conjecture}\cite{MMS} \label{con} For $U\in \mathcal U_n$, we have that $a(U)=\zeta(p(U))$.\end{conjecture} The map $\zeta$ which relates $a(U)$ and $p(U)$ in the conjecture of Matherne, Morales, and Selover is the famous zeta map of Haglund \cite{H}. It is a bijection from $\mathcal D_n$ to $\mathcal D_n$ which plays an important rôle in $(q,t)$-Catalan combinatorics. We recall its definition in Section \ref{Zeta} below. This unexpected connection between $\zeta$ and unit interval orders seems worthy of further investigation. The main result of this paper is a proof of Conjecture \ref{con} (Theorem \ref{theoremConj}), which we carry out in Section \ref{proof}. Additionally, in Section \ref{grevlex}, we establish that the part listing $\tilde p(U)$ can be characterized among those part listings $w$ with $P(w)$ isomorphic to $U$ as the one which is minimal with respect to graded reverse lexicographic order. This is not needed for our argument, but provides an interesting alternative description of the map $\tilde p$ (and thus also of $p$). \section{The map $a$}\label{Area} Let us give an equivalent definition of unit interval orders. \begin{lemma} \label{defequivalent} An order $\prec$ on $\{1,\dots,n\}$ is a unit interval order if and only if for all $x,y\in \{1,\dots,n\}$, we have the two properties \begin{itemize} \item $x\prec y$ implies $x<y$, \item $x\prec y$ implies that for all $x'\leq x$ and $y'\geq y$, we have $x'\prec y'$. \end{itemize} \end{lemma} \begin{proof} It is obvious from the definition that a unit interval order satisfies the two properties. We prove the converse direction by contradiction. The claim is clearly true for $n=1$.
Suppose that $\prec$ is an order on $\{1,\dots,n \}$, with $n\geq 2$ minimal, that satisfies the two properties but is not a unit interval order. The two properties are such that when $\prec$ is restricted to $\{1,\dots,n-1\}$, it still satisfies the two properties. By minimality of $n$, the order $\prec$ restricted to $\{1,\dots,n-1\}$ is a unit interval order. Let $\{I_1,\dots,I_{n-1}\}$ be a set of intervals of unit length realizing this unit interval order. If the element $n$ is not comparable to any other element with respect to $\prec$, then the second property implies that there are no relations for $\prec$ at all: any relation $x\prec y$ would yield $x\prec n$. The poset $\prec$ can therefore be realized as the unit interval order corresponding to $n$ intervals all of which overlap. This contradicts our assumption. Otherwise let $k$ be the greatest integer such that $k\prec n$. By the second property, this means that the intervals $I_i$ for $k<i<n$ (if any) pairwise intersect. We can therefore choose $I_n$ so that it intersects exactly these intervals and no others. Using the second property, the unit interval order defined by $\{I_1,\dots,I_n\}$ is precisely $\prec$, which is a contradiction and finishes the proof of the lemma. \end{proof} Given $U\in \mathcal U_n$, define $$\tilde a(U) = \{(x,y)\mid x\not\prec y, 1\leq x<y \leq n\}. $$ \begin{lemma} \label{area} For $U\in \mathcal U_n$, we have that $\tilde a(U)$ is the area set of a Dyck path in $\mathcal D_n$. \end{lemma} \begin{proof} Since $(x,y)\in\tilde a(U)$ implies $x<y$, we only have boxes above the diagonal. Let $(x,y)\in\tilde a(U)$. Then for any $x\leq x'<y'\leq y$, we have that $(x',y')\in\tilde a(U)$. Indeed, since the intervals $I_x$ and $I_y$ overlap, so do all the intervals indexed by numbers between $x$ and $y$. We have just proved that if there is a box $(x,y)$ in $\tilde a(U)$, then all the boxes above the diagonal and weakly south-east of $(x,y)$ are also in $\tilde a(U)$. This implies that $\tilde a(U)$ is the area set of a Dyck path.
\end{proof} Thanks to Lemma \ref{area}, for $U$ a unit interval order, we can define $a(U)$ to be the Dyck path whose area set is given by $\tilde a(U)$. \begin{lemma} The map $a$ is a bijection from $\mathcal U_n$ to $\mathcal D_n$. \end{lemma} \begin{proof} We give the inverse map. Let $D\in \mathcal D_n$ be a Dyck path with area set $A$. Let $\prec$ be the order on $\{1,\dots,n\}$ defined by $x\prec y$ if $x<y$ and $(x,y)\not\in A$. Proving that $\prec$ is a unit interval order would finish the proof, as the map sending $D$ to $\prec$ would be the inverse of $a$. Since $A$ is the area set of a Dyck path we know that if $(x,y)\not\in A$, then $\forall x'\leq x$ and $\forall y'\geq y$, we have that $(x',y')\not\in A$. The order $\prec$ is therefore a unit interval order by Lemma \ref{defequivalent}. \end{proof} \section{Part listings} \label{PL} In this section, we study the way that unit interval orders can be defined via part listings, as already described in Section \ref{intro}. We begin by giving an algorithm which, starting from a unit interval order $U\in\mathcal U_n$, gives the part listing $\tilde p(U)$ defined in the introduction as the unique part listing whose associated poset is isomorphic to $U$ and such that it is the area sequence of a Dyck path. Given $U$ a unit interval order in $\mathcal U_n$, we inductively define a function $\operatorname{\ell}$ from $\{1,\dots,n\}$ to $\mathbb Z_{\geq 0}$, as follows. We fix that $\operatorname{\ell}(1)=0$. We suppose that $\operatorname{\ell}(i)$ has been defined for all $1\leq i \leq j-1$. Now define $\operatorname{\ell}(j)$ to be $\max_{i\prec j} \operatorname{\ell}(i)+1$. If $j$ is a minimal element of $U$, so that the set over which we are taking the maximum is empty, we define $\operatorname{\ell}(j)=0$. We call $\ell(i)$ the level of $i$ (or of the interval $I_i$). \begin{Algorithm} \label{algo} Let $U\in \mathcal U_n$. We will successively define words $q_1$, $q_2$, \dots, $q_n$. 
The word $q_i$ is of length $i$, and is obtained by inserting a copy of $\ell(i)$ into $q_{i-1}$. \begin{itemize} \item We begin by defining $q_1=0$. Now suppose that $q_{i-1}$ has already been constructed. \item Let $C_i$ be the number of elements of level $\ell(i)-1$ comparable to $i$. (Note that they are necessarily to the left of $i$.) The letter $\ell(i)$ is added into $q_{i-1}$ directly after the occurrences of the letter $\ell(i)$ (if any) immediately following the $C_i$-th letter $\ell(i)-1$. \end{itemize} Finally, define $q(U)=q_n$. \end{Algorithm} See Figure \ref{fig:algo} for an example of this algorithm. \begin{figure} \caption{Example of Algorithm \ref{algo}} \label{fig:algo} \end{figure} \begin{lemma} \label{asequ} For $U\in \mathcal U_n$, we have that $q(U)$ is a part listing corresponding to the area sequence of a Dyck path in $\mathcal D_n$. \end{lemma} \begin{proof} Recall that the area sequences of Dyck paths in $\mathcal D_n$ are the sequences of non-negative integers $(a_1,\dots,a_n)$ characterized by the two properties that $a_1=0$ and $a_i \leq a_{i-1}+1$. Let $U$ be a unit interval order and $q(U)$ be the sequence constructed by the above algorithm. By definition of the algorithm, $0$ is the first element added to the list, and it remains in the first position of $q(U)$ since every subsequent level $\ell(i)$ is added after this $0$. Thus, we have that $q(U)$ starts with a $0$. Also by construction, we have that $\ell(i)$ is inserted directly after a letter which is either $\ell(i)-1$ or $\ell(i)$. Thus, when $\ell(i)$ is inserted, it satisfies the condition that its value is at most one more than the value of the preceding letter. And since at each step we add a letter greater than or equal to the maximal letter of the previous word, the condition remains true as we carry out all subsequent insertions. \end{proof} Given a part listing $w$ of length $n$, we define a partial order on $\{1,\dots,n\}$ as described in the introduction.
We denote it by $P(w)$, and we write $\prec_{P(w)}$ for its order relation. \begin{definition} Given a unit interval order $U\in \mathcal U_n$ and its corresponding part listing $w=q(U)$, we define a permutation $f=f_U:\{1,\dots,n\}\rightarrow \{1,\dots,n\}$ as follows. We first consider the positions $i$ in the part listing with $w_i=0$, and we number them starting with 1 from left to right. We then number the positions $i$ with $w_i=1$ from left to right, and continue in the same way. The number that is eventually assigned to position $i$ is $f(i)$. \end{definition} We now define a new order $\prec_f$ on $\{1,\dots,n\}$ by saying that $i\prec_f j$ if and only if $f^{-1}(i) \prec_{P(w)} f^{-1}(j)$. By definition, this poset is isomorphic to $P(w)$. \begin{proposition}\label{iso} For $U$ a unit interval order, $w=q(U)$ the corresponding part listing, and $f$ the bijection defined above, $\prec_f$ agrees with the original order on $U$. \end{proposition} \begin{proof} Note that $\ell$ is a weakly increasing function on $\{1,\dots,n\}$. Let $1\leq i\leq n$. Because the copies of $\ell$ in $w$ are inserted from left to right, $f^{-1}(i)$ is the position in $w$ where $\ell(i)$ wound up, that is to say, $w_{f^{-1}(i)}=\ell(i)$. Let $1\leq i<j\leq n$. Suppose $I_i$ is strictly to the left of $I_j$ in $U$. Thus, $\ell(j)>\ell(i)$. Suppose first that $\ell(j) \geq \ell(i)+2$. In this case, $w_{f^{-1}(j)} \geq w_{f^{-1}(i)}+2$, so $j \succ_f i$, as desired. Suppose now that $\ell(j)=\ell(i)+1$. Suppose that $I_j$ is strictly to the right of $C$ intervals of level $\ell(i)$. Note that $I_i$ is one of them by assumption. Thus, when $\ell(j)$ was inserted into $w$, it was inserted to the right of the letter $\ell(i)$; this persists as other letters are inserted. It follows that also in this case, $j\succ_f i$, as desired. Suppose now that $I_i$ and $I_j$ overlap.
In this case, $\ell(j)\leq \ell(i)+1$, because the intervals $I_k$ strictly to the left of $I_j$ are weakly to the left of $I_i$, so they have level at most $\ell(i)$. We must therefore consider two cases: $\ell(j)=\ell(i)$ and $\ell(j)=\ell(i)+1$. Consider first the case where $\ell(j)=\ell(i)$. In this case, $w_{f^{-1}(j)}=w_{f^{-1}(i)}$, so $i$ and $j$ are incomparable with respect to $\prec_f$, as desired. The case where $\ell(j)=\ell(i)+1$ is handled similarly to the earlier case $\ell(j)=\ell(i)+1$ in which $I_i$ and $I_j$ do not overlap; here we obtain $f^{-1}(j)<f^{-1}(i)$, so that $i$ and $j$ are incomparable with respect to $\prec_f$, as desired. This completes the proof. \end{proof} In the introduction, for $U$ a unit interval order, we defined $\tilde p(U)$ to be the unique part listing $w$ such that $P(w)$ is isomorphic to $U$ and $w$ is also an area sequence of a Dyck path. We are now in a position to establish that the map $\tilde p$ is well-defined. \begin{proposition} For $U$ a unit interval order, the part listing $\tilde p(U)$ is well-defined and $\tilde p(U)=q(U)$. \end{proposition} \begin{proof} For each unit interval order $U\in \mathcal U_n$, the part listing $q(U)$ is the area sequence of a Dyck path by Lemma \ref{asequ}. By Proposition \ref{iso}, the poset $P(q(U))$ is isomorphic to $U$. This shows in particular that the map $q$ must be injective. Since there are as many unit interval orders in $\mathcal U_n$ as Dyck paths in $\mathcal D_n$, the map $q$ must be a bijection. Thus, Proposition \ref{iso} tells us that if $w$ and $w'$ are two different area sequences of Dyck paths, then $P(w)$ and $P(w')$ cannot be isomorphic. It follows that, for any $U$, there is exactly one area sequence $w$ of a Dyck path such that $P(w)$ is isomorphic to $U$, namely, $q(U)$. Thus, $\tilde p(U)$ is well-defined and equals $q(U)$. \end{proof} \section{The zeta map} \label{Zeta} We now describe the map $\zeta:\mathcal D_n\rightarrow \mathcal D_n$.
Start with $D\in \mathcal D_n$. We begin by labelling the lattice points that make up the path $D$ (except the very first): we label the top endpoint of an up step with the letter $a$, and we label the right endpoint of a right step with the letter $b$. We then read the labels: first on the line $y=x$, from bottom left to top right, then on the line $y=x+1$, again in the same direction, then on the line $y=x+2$, etc. Interpret $b$ as designating an up step, and $a$ as designating a right step. This defines a lattice path from $(0,0)$ to $(n,n)$, which we define to be $\zeta(D)$. See Figure \ref{fig:zeta} for an example of this map. \begin{lemma}\label{app} Starting from $D\in\mathcal D_n$, the path $\zeta(D)$ is a Dyck path. \end{lemma} \begin{figure} \caption{Example of the zeta map} \label{fig:zeta} \end{figure} \begin{proof} Let $D\in\mathcal D_n$. We define a matching between the up steps and the right steps of $D$ as follows. For each non-negative integer $t$, look at the part of $D$ between the lines $y=x+t$ and $y=x+(t+1)$. This necessarily consists of an alternating sequence of the same number of up steps and right steps. We define our pairing by matching the $i$-th up step with the $i$-th right step. By definition of $\zeta$, these two matched edges contribute an up step and a right step to $\zeta(D)$, and the up step comes before the right step. Thus $\zeta(D)$ always stays above the diagonal. \end{proof} \section{Proof of the conjecture}\label{proof} The proof of the conjecture (Theorem \ref{theoremConj}) will proceed by induction. We suppose that for a unit interval order $U$, we know that $\zeta(p(U))$ and $a(U)$ coincide. We then consider what happens when we add a new rightmost interval to $U$. Proving that this changes the result of applying each of the maps in the same way, we conclude that the two maps also coincide on the larger poset. This proves the conjecture by induction.
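The description of $\zeta$ above translates directly into code; the following sketch (our illustration, encoding up and right steps by the characters 'U' and 'R') is one way to implement it.

```python
def zeta(path):
    """Haglund's zeta map on a Dyck path given as a string of
    'U' (up) and 'R' (right) steps.  Each step endpoint is labelled
    a (up step) or b (right step); the labels are read along the
    diagonals y = x + t for t = 0, 1, 2, ... from bottom left to
    top right, and reinterpreted as steps: b -> up, a -> right."""
    x = y = 0
    labelled = []          # (diagonal of endpoint, position along path, label)
    for k, step in enumerate(path):
        if step == 'U':
            y += 1
            labelled.append((y - x, k, 'a'))
        else:
            x += 1
            labelled.append((y - x, k, 'b'))
    labelled.sort()        # by diagonal, then by order along the path
    return ''.join('U' if lab == 'b' else 'R' for _, _, lab in labelled)
```

For instance, on $\mathcal D_2$ the map exchanges the two paths: `zeta("URUR")` gives `"UURR"` and vice versa.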
\begin{definition} A peak in a Dyck path consists of an up step followed by a right step. In the Dyck word, this amounts to an occurrence of the consecutive pair of letters `$ab$', and in the corresponding area sequence $(a_1,\dots,a_n)$ it amounts to having $a_{i}\leq a_{i-1}$ for the corresponding $i$. By adding a peak to a Dyck path, we mean the insertion, at some position, of `$ab$' into the Dyck path. The result of adding a peak is again a Dyck path. Note that adding a peak does not necessarily increase the number of peaks: if the peak is added after a letter `$a$' and before a letter `$b$', the number of peaks does not change. We say that a peak of a Dyck path is a maximal peak if the top of the up step lies on the highest line of slope $1$ that touches the Dyck path. We will refer to the last peak of the Dyck path as the final peak, and to the last maximal peak as the final maximal peak. We say that a peak is of height $i$ if the top of the up step is on the line $y=x+i$. \end{definition} In the sequel, let $U\in \mathcal U_n$ and let $U'$ be obtained from $U$ by adding an $(n+1)$-st interval of unit length to the right of those of $U$. We denote by $\ell$ the level of this added interval. For an illustration of the following lemmas, see Figure \ref{fig:expan}. \begin{lemma}\label{add} Let $U$ be a unit interval order, and let $U'$ be obtained from it by adding an interval to the right of the intervals of $U$. The Dyck path $p(U')$ is obtained from $p(U)$ by adding a final maximal peak. \end{lemma} \begin{proof} By Algorithm \ref{algo}, in going from $p(U)$ to $p(U')$, we insert a letter $\ell$ into $p(U)$ such that the letter before it is either $\ell$ or $\ell-1$. The letters before are at most $\ell$ and the letters after are at most $\ell-1$. Adding a letter $\ell$ in this way adds a final maximal peak, since we add the rightmost maximal letter $\ell$ to the area sequence $p(U)$.
\end{proof} \begin{figure} \caption{Addition of an interval of level $2$ that creates a new peak in $p$.} \label{fig:expan} \end{figure} \begin{lemma}\label{deb} Let $U\in \mathcal U_n$, and let $U'$ be obtained from it by adding an interval to the right of the intervals of $U$. The Dyck path $\zeta(p(U'))$ is obtained from $\zeta(p(U))$ by adding a final peak in position $(n-r,n+1)$, where $r$ is the sum of the number of occurrences of the letter $\ell$ in $p(U)$ and of the number of occurrences of the letter $\ell-1$ appearing after the position of the added letter $\ell$ in $p(U')$. \end{lemma} \begin{proof} By Lemma \ref{add}, we know that $p(U')$ is obtained from $p(U)$ by adding a final maximal peak. The height of this peak is $\ell+1$, since we added the letter $\ell$ to the area sequence $p(U)$ to obtain it. The fact that this is the final maximal peak means that the peaks after it have height smaller than $\ell+1$. Let us recall that the map $\zeta$ builds a Dyck path by scanning along the lines of slope $1$ from bottom left to top right, in order of increasing height. Adding the peak of height $\ell+1$ does not change what we read on any height below $\ell$, nor what we read on height $\ell$ before we reach the right endpoint of the right step of the added peak. When we reach this right endpoint, in $\zeta(p(U'))$ we put an up step `$a$'. Then on the same height $\ell$ we read the top endpoints of peaks of height $\ell$ appearing after the added peak (if any). Such peaks correspond to the letters $\ell-1$ appearing after the position of the added letter $\ell$ in $p(U')$, and they give right steps `$b$' in $\zeta(p(U'))$. Finally, we read the line of height $\ell+1$, putting a right step `$b$' in $\zeta(p(U'))$ for each maximal peak of $p(U')$; these correspond to the occurrences of the letter $\ell$ in $p(U')$. Thus after the last up step `$a$' of $\zeta(p(U'))$ we have exactly $r+1$ right steps `$b$', so the added peak is in position $(n-r,n+1)$.
\end{proof} \begin{lemma} \label{lemmea} The Dyck path $a(U')$ is obtained from $a(U)$ by adding a final peak in position $ (n-s,n+1)$, where $s$ is the number of intervals in $U'$ not comparable to the rightmost interval $I_{n+1}$. \end{lemma} \begin{proof} Let us recall that by definition $a(U)$ is the Dyck path whose area set is given by the boxes $(i,j)$ such that $i<j$ and $i\not\prec j$. We then add $I_{n+1}$, which is not comparable to the last $s$ intervals in $U$ (those of level $\ell$ together with a subset of those of level $\ell-1$). Thus in $U'$ the only new incomparable pairs are $i\not\prec n+1$ for $i\in\{n-s+1,\dots,n\}$, giving the corresponding boxes $(i,n+1)$ in the area set. This proves the lemma. \end{proof} We can now prove the main result of the paper. \begin{Theorem} \label{theoremConj} The maps $\zeta \circ p$ and $a$ coincide. \end{Theorem} \begin{proof} We proceed by induction. We know that $a(U)=\zeta \circ p(U)$ holds for the unique $U\in \mathcal U_1$, so the base case is verified. Let $n\geq 1$. Suppose that $a(U)=\zeta \circ p(U)$ holds for all $U\in \mathcal U_n$. Let $U'\in \mathcal U_{n+1}$. There exists a unit interval order $U\in \mathcal U_n$ such that $U'$ is obtained from $U$ by adding an $(n+1)$-st interval $I_{n+1}$ of level $\ell$ to the right of those of $U$. We now establish that the number $r$ in Lemma \ref{deb} is equal to the number $s$ in Lemma \ref{lemmea}. Indeed, in $U'$ the intervals not comparable to $I_{n+1}$ are all the intervals of level $\ell$ in $U$ and a subset of those of level $\ell-1$. The latter are exactly those which correspond to the occurrences of the letter $\ell-1$ appearing after the position of the added $\ell$ in $p(U')$. Then using Lemma \ref{deb} and Lemma \ref{lemmea}, together with the induction hypothesis $a(U) = \zeta(p(U))$, we obtain that $a(U') = \zeta(p(U'))$, thereby finishing the induction and the proof of the theorem.
\end{proof} \section{Graded reverse lexicographic minimality of $\tilde p(U)$} \label{grevlex} We define the graded reverse lexicographic order on finite sequences of $n$ non-negative integers as follows. We say that $(a_1,\dots,a_n)<(b_1,\dots,b_n)$ if $\sum_i a_i<\sum_i b_i$, or, in the case that the sums are equal, if $a_j>b_j$, where $j$ is the index of the first position where the two strings differ. (Note that the inequality $a_j>b_j$ is reversed from what one might expect! The expected inequality, $a_j<b_j$, defines graded lexicographic order.) \begin{lemma} \label{grevlex-lemma} Let $U$ be a unit interval order. The graded reverse lexicographically minimal part listing for $U$ is the area sequence of a Dyck path. \end{lemma} \begin{proof} Let $(a_1,\dots, a_n)$ be the graded reverse lexicographically minimal part listing for $U$. Suppose, seeking a contradiction, that $a_i>a_{i-1}+1$. Consider the part listing in which $a_i$ and $a_{i-1}$ have swapped positions. This part listing defines an isomorphic poset: since $a_i-a_{i-1}\geq 2$, the relations involving these two positions do not depend on their order. The new part listing is lower in graded reverse lexicographic order, so we would have preferred it. This is a contradiction. Now suppose that $a_1>0$. Consider the part listing in which $a_1$ is removed and $a_1-1$ is inserted at the end. This part listing produces an isomorphic poset, and since its sum is lower, it is lower in graded reverse lexicographic order, so we would have preferred it. This, too, is a contradiction. It follows that the graded reverse lexicographically minimal part listing for $U$ is the area sequence of a Dyck path. \end{proof} \begin{corollary} For $U$ a unit interval order, the part listing $\tilde p(U)$ is the graded reverse lexicographically minimal part listing for $U$. \end{corollary} \begin{proof} By Lemma \ref{grevlex-lemma}, the graded reverse lexicographically minimal part listing for $U$ is the area sequence of some Dyck path.
We know that $\tilde p$ defines a bijection from unit interval orders to area sequences of Dyck paths. Thus, there is at most one area sequence of a Dyck path which, when interpreted as a part listing, yields a poset isomorphic to $U$. The graded reverse lexicographically minimal part listing for $U$ is therefore $\tilde p(U)$. \end{proof} \end{document}
\begin{document} \title{Jump processes as Generalized Gradient Flows} \author{Mark A.\ Peletier} \address{M.\ A.\ Peletier, Department of Mathematics and Computer Science and Institute for Complex Molecular Systems, TU Eindhoven, 5600 MB Eindhoven, The Netherlands} \email{M.A.Peletier\,@\,tue.nl} \author{Riccarda Rossi} \address{R.\ Rossi, DIMI, Universit\`a degli studi di Brescia. Via Branze 38, I--25133 Brescia -- Italy} \email{riccarda.rossi\,@\,unibs.it} \author{Giuseppe Savar\'e} \address{G.\ Savar\'e, Dipartimento di Matematica ``F.\ Casorati'', Universit\`a degli studi di Pavia. Via Ferrata 27, I--27100 Pavia -- Italy} \email{giuseppe.savare\,@\,unipv.it} \author{Oliver Tse} \address{O.\ Tse, Department of Mathematics and Computer Science, Eindhoven University of Technology, 5600 MB Eindhoven, The Netherlands} \email{o.t.c.tse\,@\,tue.nl} \begin{abstract} We develop a functional framework for a class of non-metric gradient systems. The state space is a space of nonnegative measures, and the class of systems includes the Forward Kolmogorov equations for the laws of Markov jump processes on Polish spaces. The framework comprises a notion of solutions, a method to prove their existence, and an archetypal uniqueness result. We use only the structure provided directly by the dissipation functional, which need not be homogeneous, and we do not appeal to any metric structure. \end{abstract} \maketitle \tableofcontents \section{Introduction} The study of dissipative variational evolution equations has seen tremendous activity in the last two decades.
A general class of such systems is that of \emph{generalized gradient flows}, which formally can be written as \begin{equation} \label{eq:GGF-intro-intro} \dot \rho = \mathrm{D}_\upzeta {\mathsf R}^*(\rho,-\mathrm{D}_\rho {\mathsf E}(\rho)) \end{equation} in terms of a \emph{driving functional} ${\mathsf E}$ and a \emph{dual dissipation potential} ${\mathsf R}^* = {\mathsf R}^*(\rho,\upzeta)$, where $\mathrm{D}_\upzeta$ and $\mathrm{D}_\rho$ denote derivatives with respect to $\upzeta$ and $\rho$. The most well-studied of these are classical gradient flows~\cite{AmbrosioGigliSavare08}, for which $\upzeta \mapsto \mathrm{D}_\upzeta {\mathsf R}^*(\rho,\upzeta) = \mathbb K(\rho) \upzeta$ is a linear operator $\mathbb K(\rho)$, and rate-independent systems~\cite{MielkeRoubicek15}, for which $\upzeta\mapsto \mathrm{D}_\upzeta{\mathsf R}^*(\rho,\upzeta)$ is zero-homogeneous. However, various models naturally lead to gradient structures that are neither classical nor rate-independent. For these systems, the map $\upzeta\mapsto \mathrm{D}_\upzeta{\mathsf R}^*(\rho,\upzeta)$ is neither linear nor zero-homogeneous, and in many cases it is not even homogeneous of any order.
Some examples are \begin{enumerate} \item Models of chemical reactions, where ${\mathsf R}^*$ depends exponentially on $\upzeta$~\cite{Feinberg72,Grmela10,ArnrichMielkePeletierSavareVeneroni12,LieroMielkePeletierRenger17}, \item The Boltzmann equation, also with exponential ${\mathsf R}^*$~\cite{Grmela10}, \item Nonlinear viscosity relations such as the Darcy-Forchheimer equation for porous media flow~\cite{KnuppLage95,GiraultWheeler08}, \item Effective, upscaled descriptions in materials science, where the effective potential~${\mathsf R}^*$ arises through a cell problem, and can have many different types of dependence on~$\upzeta$ \cite{ElHajjIbrahimMonneau09,PerthameSouganidis09,PerthameSouganidis09a,MirrahimiSouganidis13,LieroMielkePeletierRenger17,DondlFrenzelMielke18TR,PeletierSchlottke19TR,MielkeMontefuscoPeletier20TR}, \item Gradient structures that arise from large-deviation principles for sequences of stochastic processes, in particular jump processes~\cite{MielkePeletierRenger14,MielkePattersonPeletierRenger17}. \end{enumerate} The last example is the inspiration for this paper. Regardless of whether ${\mathsf R}^*$ is classical, rate-independent, or otherwise, equation~\eqref{eq:GGF-intro-intro} is typically only formal, and it is a major mathematical challenge to construct an appropriate functional framework for it. Such a functional framework should give the equation a rigorous meaning, and provide the means to prove well-posedness, stability, regularity, and approximation results that facilitate the study of the equation. For classical gradient systems, in which $\mathrm{D}_\upzeta{\mathsf R}^*$ is linear and ${\mathsf R}^*$ is quadratic in $\upzeta$ (such systems are therefore also called `quadratic' gradient systems), and when ${\mathsf R}^*$ generates a metric space, a rich framework has been created by Ambrosio, Gigli, and Savar\'e~\cite{AmbrosioGigliSavare08}.
For rate-independent systems, in which ${\mathsf R}$ is $1$-homogeneous in the rate, the complementary concepts of `Global Energetic solutions' and `Balanced Viscosity solutions' give rise to two different frameworks~\cite{MielkeTheilLevitas02,Dal-MasoDeSimoneMora06,MielkeRossiSavare12a,MRS13,MielkeRoubicek15}. For the examples (1--5) listed above, however, ${\mathsf R}^*$ is not homogeneous in~$\upzeta$, and neither the rate-independent frameworks nor the metric-space theory applies. Nonetheless, the existence of such models of real-world systems with a formal variational-evolutionary structure suggests that there may exist a functional framework for such equations that relies on this structure. In this paper we build exactly such a framework for an important class of equations of this type, those that describe Markov jump processes. We expect the approach advanced here to be applicable to a broader range of systems. \subsection{Generalized gradient systems for Markov jump processes} Some generalized gradient-flow structures of evolution equations are generated by the large deviations of an underlying, more microscopic stochastic process~\cite{AdamsDirrPeletierZimmer11,AdamsDirrPeletierZimmer13,DuongPeletierZimmer13,MielkePeletierRenger14,MielkePeletierRenger16,LieroMielkePeletierRenger17}. This explains the origin and interpretation of such structures, and it can be used to identify hitherto unknown gradient-flow structures~\cite{PeletierRedigVafayi14,GavishNyquistPeletier19TR}. It is the example of Markov \emph{jump} processes that inspires the results of this paper, and we describe this example here; nonetheless, the general setup that starts in Section~\ref{ss:assumptions} has wider application.
We think of Markov jump processes as jumping from one `vertex' to another `vertex' along an `edge' of a `graph'; we place these terms between quotes because the space $V$ of vertices may be finite, countable, or even uncountable, and similarly the space $E:= V\times V$ of edges may be finite, countable, or uncountable (see Assumption~\ref{ass:V-and-kappa} below). In this paper, $V$ is a standard Borel space. The laws of such processes are time-dependent measures $t\mapsto \rho_t\in \mathcal{M}^+(V)$ (with $\mathcal{M}^+(V)$ the space of positive finite Borel measures---see Section~\ref{ss:3.1}). These laws satisfy the Forward Kolmogorov equation \begin{align}\label{eq:fokker-planck} \partial_t\rho_t = Q^*\rho_t, \qquad (Q^*\rho)(\mathrm{d} x) = \int_{y\in V} \rho(\mathrm{d} y) \kappa(y,\mathrm{d} x) - \rho(\mathrm{d} x)\int_{y\in V} \kappa(x,\mathrm{d} y). \end{align} Here $Q^*:\mathcal{M}(V)\to \mathcal{M}(V)$ is the dual of the infinitesimal generator $Q:\mathrm{B}_{\mathrm b}(V)\to \mathrm{B}_{\mathrm b}(V)$ of the process, which for an arbitrary bounded Borel function $\varphi\in \mathrm{B}_{\mathrm b}(V)$ is given by \begin{equation} \label{eq:def:generator} (Q\varphi)(x) = \int_V [\varphi(y)-\varphi(x)]\,\kappa(x,\mathrm{d} y). \end{equation} The jump kernel $\kappa$ in these definitions characterizes the process: $\kappa(x,\cdot)\in \mathcal{M}^+(V)$ is the infinitesimal rate of jumps of a particle from the point $x$ to points in $V$.
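On a finite state space, the generator \eqref{eq:def:generator} and its dual reduce to matrix operations. The following sketch is our illustration only (the rate matrix \texttt{kappa} and all names are hypothetical); it checks numerically the duality $\langle Q\varphi,\rho\rangle=\langle\varphi,Q^*\rho\rangle$ and the mass conservation $\int_V Q^*\rho = 0$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5                                   # number of vertices
kappa = rng.random((n, n))              # jump rates kappa(x, y) (hypothetical)
np.fill_diagonal(kappa, 0.0)            # no self-jumps

def Q(phi):
    # (Q phi)(x) = sum_y [phi(y) - phi(x)] kappa(x, y)
    return kappa @ phi - kappa.sum(axis=1) * phi

def Q_star(rho):
    # (Q* rho)(x) = sum_y rho(y) kappa(y, x) - rho(x) sum_y kappa(x, y)
    return kappa.T @ rho - rho * kappa.sum(axis=1)

rho = rng.random(n)                     # a nonnegative measure on V
phi = rng.random(n)                     # a bounded test function

assert abs(np.dot(Q(phi), rho) - np.dot(phi, Q_star(rho))) < 1e-12  # duality
assert abs(Q_star(rho).sum()) < 1e-12                               # mass conservation
```

In particular, total mass is conserved by \eqref{eq:fokker-planck} because $Q\mathbf 1 = 0$.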
Here we address the reversible case, which means that the process has an invariant measure $\pi\in \mathcal{M}^+(V)$, i.e., $Q^*\pi=0$, and that the joint measure $\pi(\mathrm{d} x) \kappa(x,\mathrm{d} y)$ is symmetric in $x$ and~$y$. In this paper we consider evolution equations of the form~\eqref{eq:fokker-planck} for the nonnegative measure $\rho$, as well as various linear and nonlinear generalizations. We will view them as gradient systems of the form~\eqref{eq:GGF-intro-intro}, and use this gradient structure to study their properties. The gradient structure for equation~\eqref{eq:fokker-planck} consists of the state space $\mathcal{M}^+(V)$, a driving functional $\mathscr E:\mathcal{M}^+(V)\to[0,{+\infty}]$, and a dual dissipation potential $\mathscr R^*:\mathcal{M}^+(V)\times \mathrm{B}_{\mathrm b}(E)\to[0,{+\infty}]$ (where $\mathrm{B}_{\mathrm b}(E)$ denotes the space of bounded Borel functions on $E$). We now describe this structure in formal terms; making it rigorous is one of the aims of this paper. The functional that drives the evolution is the relative entropy with respect to the invariant measure $\pi$, namely \begin{equation} \label{eq:def:S} \mathscr E(\rho) = \mathscr F_{\upphi}(\rho|\pi):= \begin{cases} \displaystyle \int_{V} \upphi\bigl(u(x)\bigr) \pi(\mathrm{d} x) & \displaystyle \text{ if } \rho \ll \pi, \text{ with } u =\frac{\mathrm{d} \rho }{\mathrm{d} \pi}, \\ {+\infty} & \text{ otherwise}, \end{cases} \end{equation} where for the example of Markov jump processes the `energy density' $\upphi$ is given by \begin{equation} \label{def:phi-log-intro} \upphi(s) := s\log s - s + 1.
\end{equation} (In the general development below we consider more general functions $\upphi$, such as those that arise in strongly interacting particle systems; see e.g.~\cite{KipnisOllaVaradhan89,DirrStamatakisZimmer16}.) The dissipation potential ${\mathsf R}^*$ is best written in terms of an alternative potential $\mathscr R^*$, \[ {\mathsf R}^*(\rho,\upzeta) := \mathscr R^*(\rho,\dnabla \upzeta). \] Here the `graph gradient' $\dnabla:\mathrm{B}_{\mathrm b}(V) \to \mathrm{B}_{\mathrm b}(E)$ and its negative dual, the `graph divergence operator' $ \odiv:\mathcal{M}(E)\to\mathcal{M}(V)$, are defined as follows: \begin{subequations} \label{eq:def:ona-div} \begin{align} \label{eq:def:ona-grad} (\dnabla \varphi)(x,y) &:= \varphi(y)-\varphi(x) &&\text{for any }\varphi\in \mathrm{B}_{\mathrm b}(V),\\ (\odiv {\boldsymbol j} )(\mathrm{d} x) &:= \int_{y\in V} \bigl[{\boldsymbol j} (\mathrm{d} x,\mathrm{d} y)-{\boldsymbol j} (\mathrm{d} y,\mathrm{d} x)\bigr] &&\text{for any }{\boldsymbol j} \in \mathcal{M}(E), \label{eq:def:div} \end{align} \end{subequations} and are linked by \begin{equation} \label{eq:nabladiv} \iint_E \dnabla\varphi(x,y)\,{\boldsymbol j} (\mathrm{d} x,\mathrm{d} y)= -\int_V \varphi(x) \,\odiv {\boldsymbol j} (\mathrm{d} x)\quad \text{for every }\varphi\in \mathrm{B}_{\mathrm b}(V). \end{equation} The dissipation functional $\mathscr R^*$ is defined for $\xi\in \mathrm{B}_{\mathrm b}(E)$ by \begin{align} \label{eq:def:R*-intro} &\mathscr R^*(\rho,\xi) := \frac 12 \int_{E} \Psi^*(\xi(x,y)) \, \boldsymbol\upnu_\rho(\mathrm{d} x \,\mathrm{d} y), \end{align} where the function $\Psi^*$ and the `edge' measure $\boldsymbol\upnu_\rho$ will be fixed in \eqref{eq:def:alpha} below.
With these definitions, the gradient-flow equation~\eqref{eq:GGF-intro-intro} can be written alternatively as \begin{equation} \label{eq:GF-intro} \partial_t \rho_t = - \odiv \Bigl[ \mathrm{D}_\xi\mathscr R^*\Bigl(\rho_t,-\dnabla\upphi'\Bigl(\frac{\mathrm{d} \rho_t}{\mathrm{d}\pi}\Bigr)\Bigr)\Bigr], \end{equation} which can be recognized by observing that \[ \bigl\langle \mathrm{D}_\upzeta{\mathsf R}^*(\rho,\upzeta),\tilde \upzeta\bigr \rangle = \frac{\mathrm{d} }{\mathrm{d} h} \mathscr R^*(\rho,\dnabla \upzeta+h\dnabla \tilde \upzeta)\Big|_{h=0} =\bigl\langle \mathrm{D}_\xi\mathscr R^*(\rho,\dnabla \upzeta),\dnabla \tilde \upzeta\bigr \rangle =\bigl\langle -\odiv \mathrm{D}_\xi\mathscr R^*(\rho,\dnabla \upzeta), \tilde \upzeta\bigr \rangle, \] and $\mathrm{D} \mathscr E(\rho) = \upphi'(u)$ (which corresponds to $\log u$ for the logarithmic entropy \eqref{def:phi-log-intro}). This $(\odiv,\dnabla)$-duality structure is a common feature of both physical and probabilistic models, and has its origin in the distinction between `states' and `processes'; see~\cite[Sec.~3.3]{PeletierVarMod14TR} and~\cite{Ottinger19} for discussions. For this example of Markov jump processes we consider a class of generalized gradient structures of the type above, given by $\mathscr E$ and $\mathscr R^*$ (or equivalently by the densities $\upphi$, $\Psi^*$, and the measure $\boldsymbol\upnu_\rho$), with the property that equations~\eqref{eq:GGF-intro-intro} and~\eqref{eq:GF-intro} coincide with~\eqref{eq:fokker-planck}. Even for fixed $\mathscr E$ there exists a range of choices for $\Psi^*$ and $\boldsymbol\upnu_\rho$ that achieve this (see also the discussion in~\cite{GlitzkyMielke13,MielkePeletierRenger14}).
A simple calculation (see the discussion at the end of Section \ref{ss:assumptions}) shows the following: if one chooses for the measure $\boldsymbol\upnu_\rho$ the form \begin{equation} \label{eq:def:alpha} \boldsymbol\upnu_\rho(\mathrm{d} x\,\mathrm{d} y) = \upalpha(u(x),u(y))\, \pi(\mathrm{d} x)\kappa(x,\mathrm{d} y), \end{equation} for a suitable function $\upalpha:[0,\infty)\times [0,\infty)\to [0,\infty)$, and one introduces the map $\mathrm F:(0,\infty)\times(0,\infty)\to\mathbb{R}$, \begin{equation} \label{eq:184} \mathrm F(u,v):= (\Psi^*)'\big[\upphi'(v)-\upphi'(u)\big]\upalpha(u,v)\quad u,v>0, \end{equation} then \eqref{eq:GF-intro} takes the form of the integro-differential equation \begin{equation} \partial_t u_t(x) = \int_{y\in V} \mathrm F\bigl(u_t(x),u_t(y)\bigr)\, \kappa(x,\mathrm{d} y),\label{eq:180} \end{equation} in terms of the density $u_t$ of $\rho_t$ with respect to $\pi$. Therefore, a pair $(\Psi^*,\boldsymbol\upnu_\rho)$ leads to equation~\eqref{eq:fokker-planck} whenever the triple $(\Psi^*,\upphi,\upalpha)$ satisfies the \emph{compatibility property} \begin{equation} \label{cond:heat-eq-2} \mathrm F(u,v)=v-u \quad \text{for every }u,v>0. \end{equation} The classical quadratic-energy, quadratic-dissipation choice \begin{equation} \label{eq:68} \Psi^*(\xi)=\tfrac 12\xi^2,\quad \upphi(s)=\tfrac 12s^2,\quad \upalpha(u,v)=1 \end{equation} corresponds to the Dirichlet-form approach to \eqref{eq:fokker-planck} in $L^2(V,\pi)$.
Here $\mathscr R^*(\rho,{\boldsymbol j})=\mathscr R^*({\boldsymbol j})$ is in fact independent of $\rho$: if one introduces the symmetric bilinear form \begin{equation} \label{eq:185} \llbracket u,v\rrbracket:=\frac 12\iint_E \dnabla u(x,y)\,\dnabla v(x,y)\,\boldsymbol\vartheta_\pi(\mathrm{d} x,\mathrm{d} y),\quad \llbracket u,u\rrbracket=\iint_E \Psi(\dnabla u)\,\mathrm{d} \boldsymbol\vartheta_\pi, \end{equation} with $\boldsymbol\vartheta_\pi (\mathrm{d} x, \mathrm{d} y) = \pi(\mathrm{d} x ) \kappa(x, \mathrm{d} y)$ (cf.\ \eqref{nu-pi} ahead), then \eqref{eq:180} can also be formulated as \begin{equation} \label{eq:186} (\dot u_t, v)_{L^2(V,\pi)}+ \llbracket u_t,v\rrbracket=0\quad \text{for every }v\in L^2(V,\pi). \end{equation} Two other choices have received attention in the recent literature. Both of these are based not on the quadratic energy $\upphi(s)=\tfrac 12s^2$, but on the Boltzmann entropy functional $\upphi(s) = s\log s - s + 1$: \begin{subequations} \label{choices} \begin{enumerate} \item The large-deviation characterization~\cite{MielkePeletierRenger14} leads to the choice \begin{equation} \label{choice:cosh} \Psi^*(\xi) := 4\bigl(\cosh (\xi/2) - 1\bigr) \quad \text{and}\quad \upalpha(u,v) := \sqrt{uv}. \end{equation} The corresponding primal dissipation potential $\Psi := (\Psi^*)^*$ is given by \[ \Psi(s) := 2s\log \left(\frac{s+\sqrt{s^2+4}}2 \right) - 2\sqrt{s^2 + 4} + 4. \] \item The `quadratic-dissipation' choice introduced independently by Maas~\cite{Maas11}, Mielke \cite{Mielke13CALCVAR}, and Chow, Huang, Li, and Zhou~\cite{ChowHuangLiZhou12} for Markov processes on \emph{finite} graphs, \begin{equation} \label{choice:quadratic} \Psi^*(\xi) := \tfrac12 \xi^2, \quad \Psi(s) = \tfrac12 s^2 , \quad \text{and}\quad \upalpha(u,v) := \frac{ u-v }{ \log(u) - \log(v) }. \end{equation} \end{enumerate} \end{subequations} Other examples are discussed in \S \ref{subsec:examples-intro}.
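As a consistency check (this short verification is ours, spelling out a computation implicit in the text), both choices satisfy the compatibility property \eqref{cond:heat-eq-2}. Since $\upphi'(s)=\log s$, for \eqref{choice:cosh} we have $(\Psi^*)'(\xi)=2\sinh(\xi/2)$ and
\[
\mathrm F(u,v) = 2\sinh\Bigl(\tfrac12\log\tfrac vu\Bigr)\sqrt{uv}
= \Bigl(\sqrt{\tfrac vu}-\sqrt{\tfrac uv}\Bigr)\sqrt{uv} = v-u,
\]
while for \eqref{choice:quadratic} we have $(\Psi^*)'(\xi)=\xi$ and
\[
\mathrm F(u,v) = \bigl(\log v-\log u\bigr)\,\frac{v-u}{\log v-\log u} = v-u.
\]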
With the quadratic choice~\eqref{choice:quadratic}, the gradient system fits into the metric-space structure (see e.g.~\cite{AmbrosioGigliSavare08}), and this feature has been used extensively to investigate the properties of general Markov jump processes~\cite{Maas11,Mielke13CALCVAR,ErbarMaas12,Erbar14,Erbar16TR,ErbarFathiLaschosSchlichting16TR}. In this paper, however, we focus on functions $\Psi^*$ that are not homogeneous, as in~\eqref{choice:cosh}, so that the corresponding structure is not covered by the usual metric framework. On the other hand, there are various arguments for why this structure is nonetheless `natural' (see Section~\ref{ss:comments}), and these motivate our aim to develop a functional framework based on it. \subsection{Challenges} Constructing a `functional framework' for the gradient-flow equation~\eqref{eq:GF-intro} with the choices~\eqref{def:phi-log-intro} and~\eqref{choice:cosh} presents a number of independent challenges. \subsubsection{Definition of a solution} \label{ss:def-sol-intro} As it stands, the formulation of equation~\eqref{eq:GF-intro} and of the functional $\mathscr R^*$ of \eqref{eq:def:R*-intro} presents many difficulties: the definition of $\mathscr R^*$ and of the measure $\boldsymbol\upnu_\rho$ when $\rho$ is not absolutely continuous with respect to~$\pi$; the notion of time differentiability for the curve of measures $\rho_t$; whether $\rho_t$ remains absolutely continuous with respect to $\pi$ along an evolution; what happens if $\mathrm{d} \rho_t /\mathrm{d} \pi$ vanishes while $\upphi$ is not differentiable at $0$, as in the case of the logarithmic entropy; and so on. As a result of these difficulties, it is not clear what constitutes a solution of equation~\eqref{eq:GF-intro}, let alone whether such solutions exist.
In addition, a good solution concept should be robust under taking limits, and the formulation~\eqref{eq:GF-intro} does not seem to satisfy this requirement either. For quadratic and rate-independent systems, successful functional frameworks have been constructed on the basis of the Energy-Dissipation balance~\cite{Sandier-Serfaty04,Serfaty11,MRS2013,LieroMielkePeletierRenger17,MielkePattersonPeletierRenger17}, and we follow that example here. In fact, the same large-deviation principle that gives rise to the `cosh' structure above formally yields the `EDP' functional \begin{equation} \label{eq:def:mathscr-L} \mathscr L(\rho,{\boldsymbol j} ) := \begin{cases} \displaystyle \int_0^T \Bigl[ \mathscr R(\rho_t, {\boldsymbol j} _t) + \mathscr R^*\Bigl(\rho_t, -\dnabla \upphi'\Bigl(\frac{\mathrm{d} \rho_t}{\mathrm{d}\pi}\Bigr) \Bigr) \Bigr]\mathrm{d} t + \mathscr E(\rho_T) - \mathscr E(\rho_0)\hskip-8cm&\\ &\text{if }\partial_t \rho_t + \odiv {\boldsymbol j} _t = 0 \text{ and } \rho_t \ll \pi \text{ for all $t\in [0,T]$,}\\ {+\infty} &\text{otherwise.} \end{cases} \end{equation} In this formulation, $\mathscr R$ is the Legendre dual of $\mathscr R^*$ with respect to the $\xi$ variable, which can be written in terms of the Legendre dual $\Psi:=\Psi^{**}$ of $\Psi^*$ as \begin{equation} \label{eq:def:R-intro} \mathscr R(\rho,{\boldsymbol j} ) := \frac 12\int_{E} \Psi\left( 2\frac{\mathrm{d} {\boldsymbol j}}{\mathrm{d} \boldsymbol\upnu_\rho}\right)\mathrm{d}\boldsymbol\upnu_\rho.
\end{equation} Along smooth curves $\rho_t=u_t\pi$ with strictly positive densities, the functional $\mathscr L$ is nonnegative, since \begin{align} \notag \frac{\mathrm{d}}{\mathrm{d} t} \mathscr E(\rho_t) &= \int_V \upphi'(u_t)\partial_t u_t\, \mathrm{d}\pi =\int_V \upphi'(u_t(x)) \partial_t\rho_t(\mathrm{d} x) = - \int_V \upphi'(u_t(x)) (\odiv {\boldsymbol j}_t)(\mathrm{d} x)\\ & = \iint_E \dnabla \upphi'(u_t) (x,y) \,{\boldsymbol j}_t(\mathrm{d} x\,\mathrm{d} y) = \iint_E \dnabla \upphi'(u_t) (x,y) \frac{\mathrm{d} {\boldsymbol j}_t}{\mathrm{d} \boldsymbol\upnu_{\rho_t}}(x,y)\;\boldsymbol\upnu_{\rho_t} (\mathrm{d} x\,\mathrm{d} y) \label{eq:174} \\ &\geq - \frac 12 \iint_E \left[ \Psi\left( 2\, \frac{\mathrm{d} {\boldsymbol j}_t}{\mathrm{d} \boldsymbol\upnu_{\rho_t}}(x,y)\right) + \Psi^*\left(- \dnabla \upphi'(u_t) (x,y) \right) \right] \boldsymbol\upnu_{\rho_t}(\mathrm{d} x\,\mathrm{d} y). \label{ineq:deriv-GF} \end{align} After time integration we find that $\mathscr L(\rho,{\boldsymbol j} )$ is nonnegative for any pair $(\rho,{\boldsymbol j} )$. The minimum of $\mathscr L$ is formally achieved at value zero, at pairs $(\rho,{\boldsymbol j} )$ satisfying \begin{align}\label{eq:flux-identity} 2{\boldsymbol j}_t = (\Psi^*)'\left(- \dnabla\upphi'\Bigl(\frac{\mathrm{d} \rho_t}{\mathrm{d}\pi}\Bigr)\right)\boldsymbol\upnu_{\rho_t} \qquad \text{and} \qquad \partial_t \rho_t + \odiv {\boldsymbol j}_t = 0, \end{align} which is an equivalent way of writing the gradient-flow equation~\eqref{eq:GF-intro}. This can be recognized, as usual for gradient systems, by observing that achieving equality in the inequality~\eqref{ineq:deriv-GF} requires equality in the Legendre duality of $\Psi$ and $\Psi^*$, which reduces to the equations above.
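For the reader's convenience we state this last step explicitly (the restatement is ours): the bound \eqref{ineq:deriv-GF} is the Fenchel--Young inequality
\[
s\,\xi \;\le\; \Psi(s) + \Psi^*(\xi), \qquad \text{with equality if and only if } \xi\in\partial\Psi(s),
\]
applied pointwise with $s = 2\,\frac{\mathrm{d} {\boldsymbol j}_t}{\mathrm{d} \boldsymbol\upnu_{\rho_t}}(x,y)$ and $\xi = -\dnabla\upphi'(u_t)(x,y)$, so that
\[
\dnabla \upphi'(u_t)\,\frac{\mathrm{d} {\boldsymbol j}_t}{\mathrm{d} \boldsymbol\upnu_{\rho_t}} = -\tfrac12\, s\,\xi \;\ge\; -\tfrac12\bigl[\Psi(s)+\Psi^*(\xi)\bigr];
\]
integrating against $\boldsymbol\upnu_{\rho_t}$ yields \eqref{ineq:deriv-GF}, with equality exactly under the first condition in \eqref{eq:flux-identity}.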
\begin{remark} \label{rem:alpha-concave} It is worth noticing that the joint convexity of the functional $\mathscr R$ of \eqref{eq:def:R-intro} (a crucial property for the development of our analysis) is equivalent to the \emph{convexity} of $\Psi$ and \emph{concavity} of the function $\upalpha$. \end{remark} \begin{remark} \label{rem:choice-of-2} Let us add a comment concerning the choice of the factor $1/2$ in front of $\Psi^*$ in \eqref{eq:def:R*-intro}, and the corresponding factors $1/2$ and $2$ in \eqref{eq:def:R-intro}. The cosh-entropy combination~\eqref{choice:cosh} satisfies the linear-equation condition $\mathrm F(u,v) = v-u$ (equation~\eqref{cond:heat-eq-2}) because of the elementary identity \[ 2\,\sqrt{uv} \,\sinh \Bigl(\frac12 \log \frac vu\Bigr) = v-u. \] The factor $1/2$ inside the $\sinh$ can be included in different ways. In~\cite{MielkePeletierRenger14} it was included explicitly, by writing expressions of the form $\mathrm{D} {\mathsf R}^*(\rho,-\tfrac12 \mathrm{D}{\mathsf E}(\rho))$; in this paper we follow~\cite{LieroMielkePeletierRenger17} and include this factor in the definition of $\mathscr R^*$. \end{remark} \begin{remark} The continuity equation $\partial_t \rho_t + \odiv {\boldsymbol j} _t = 0 $ is invariant with respect to skew-symmetrization of ${\boldsymbol j}$, i.e.\ with respect to the transformation ${\boldsymbol j}\mapsto {\boldsymbol j}^\flat$ with ${\boldsymbol j}^\flat(\mathrm{d} x,\mathrm{d} y):= \frac12 \bigl({\boldsymbol j}(\mathrm{d} x,\mathrm{d} y)-{\boldsymbol j}(\mathrm{d} y,\mathrm{d} x)\bigr)$.
Therefore we could also write the second integral in \eqref{eq:174} as \begin{align*} & \iint_E \dnabla \upphi'(u_t) (x,y) \frac{\mathrm{d} {\boldsymbol j}^\flat_t}{\mathrm{d} \boldsymbol\upnu_{\rho_t}}(x,y)\;\boldsymbol\upnu_{\rho_t} (\mathrm{d} x\,\mathrm{d} y) \\ &\qquad\geq - \frac 12 \iint_E \left[ \Psi\left( \frac{\mathrm{d} (2 {\boldsymbol j}^\flat_t)}{\mathrm{d} \boldsymbol\upnu_{\rho_t}}(x,y)\right) + \Psi^*\left(- \dnabla \upphi'(u_t) (x,y) \right) \right] \boldsymbol\upnu_{\rho_t}(\mathrm{d} x\,\mathrm{d} y), \end{align*} thus replacing $\Psi\left( 2 \frac{\mathrm{d} {\boldsymbol j}_t}{\mathrm{d} \boldsymbol\upnu_{\rho_t}}(x,y)\right)$ with the smaller term $\Psi\left( \frac{\mathrm{d} (2 {\boldsymbol j}^\flat_t)}{\mathrm{d} \boldsymbol\upnu_{\rho_t}}(x,y)\right)$ (cf.\ Remark \ref{rem:skew-symmetric}), and obtaining an equation corresponding to \eqref{eq:flux-identity} for $2{\boldsymbol j}_t^\flat$ instead of $2{\boldsymbol j}_t$. This would lead to a weaker gradient system: the choice \eqref{eq:def:R-intro} forces ${\boldsymbol j}_t$ to be skew-symmetric, whereas a dissipation involving only ${\boldsymbol j}^\flat$ would not control the symmetric part of ${\boldsymbol j}$. On the other hand, the evolution equation generated by the gradient system would remain the same. \end{remark} Since, at least formally, equation~\eqref{eq:GF-intro} is equivalent to the requirement $\mathscr L(\rho,{\boldsymbol j} )\leq0$, we adopt this variational point of view to define solutions to the generalized gradient system $(\mathscr E,\mathscr R,\mathscr R^*)$. This inequality is in fact the basis for the variational Definition~\ref{def:R-Rstar-balance} below.
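The skew-symmetrization invariance used in the remark above is elementary to verify on a finite state space; the following sketch (ours, with hypothetical random data) checks that $\odiv {\boldsymbol j} = \odiv {\boldsymbol j}^\flat$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
j = rng.random((n, n))          # an arbitrary flux on E = V x V (hypothetical data)

def odiv(j):
    # (odiv j)(x) = sum_y [j(x, y) - j(y, x)]
    return j.sum(axis=1) - j.sum(axis=0)

j_flat = 0.5 * (j - j.T)        # skew-symmetrization j^flat

# the divergence, and hence the continuity equation, sees only j^flat
assert np.allclose(odiv(j), odiv(j_flat))
assert np.allclose(j_flat, -j_flat.T)
```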
In order to do this in a rigorous manner, however, we will need \begin{enumerate} \item A study of the continuity equation \begin{equation} \label{eq:ct-eq-intro} \partial_t \rho_t + \odiv {\boldsymbol j}_t = 0, \end{equation} which appears in the definition of the functional $\mathscr L$ (Section~\ref{sec:ct-eq}); \item A rigorous definition of the measure $\boldsymbol\upnu_{\rho_t}$ and of the functional $\mathscr R$ (Definition~\ref{def:R-rigorous}); \item A class of curves ``of finite action'' in $\mathcal{M}^+(V)$ along which the functional $\mathscr R$ has finite integral (equation~\eqref{adm-curves}); \item An appropriate definition of the \emph{Fisher-information} functional (see Definition~\ref{def:Fisher-information}) \begin{equation}\label{eq:formal-Fisher-information} \rho \mapsto \mathscr{D}(\rho) := \mathscr R^*\bigl(\rho,- \dnabla \upphi'(\mathrm{d} \rho/\mathrm{d} \pi)\bigr); \end{equation} \item A proof of the lower bound $\mathscr L\geq 0$ (Theorem~\ref{th:chain-rule-bound}) via a suitable chain-rule inequality. \end{enumerate} \subsubsection{Existence of solutions} The first test of a new solution concept is whether solutions exist under reasonable conditions. In this paper we provide two existence proofs that complement each other. The first existence proof is based on a reformulation of equation~\eqref{eq:fokker-planck} as a differential equation in the Banach space $L^1(V,\pi)$, driven by a continuous dissipative operator. Under general compatibility conditions on $\upphi$, $\Psi$, and $\upalpha$, we show that the solution provided by this abstract approach is also a solution in the variational sense discussed above. The proof is presented in Section~\ref{s:ex-sg} and is quite robust for initial data whose density takes values in a compact interval $[a,b]\subset (0,\infty)$.
In order to deal with a more general class of data, we will adopt two different viewpoints. A first possibility is to take advantage of the robust stability properties of the $(\mathscr E,\mathscr R, \mathscr R^*)$ Energy-Dissipation balance when the Fisher information $\mathscr{D}$ is lower semicontinuous. A second possibility is to exploit the monotonicity properties of \eqref{eq:180} when the map $\mathrm F$ in~\eqref{eq:184} exhibits good behaviour at the boundary of $\mathbb{R}_+^2$ and at infinity. Since we believe that the variational formulation reveals a relevant structure of such systems, and since we expect that it may also be useful in dealing with more singular cases and their stability issues, we also present a more intrinsic approach by adapting the well-established `JKO-Min\-i\-miz\-ing-Movement' method to the structure of this equation. This method has been used, e.g., for metric-space gradient flows~\cite{JordanKinderlehrerOtto98,AmbrosioGigliSavare08}, for rate-independent systems~\cite{Mielke05a}, for some non-metric systems with formal metric structure~\cite{AlmgrenTaylorWang93,LuckhausSturzenhecker95}, and also for Lagrangian systems with local transport~\cite{FigalliGangboYolcu11}. This approach relies on the {\em Dynamical-Variational Transport cost} (DVT) $\DVT \tau{\mu}{\nu}$, which is the $\tau$-dependent transport cost between two measures $\mu,\nu\in\mathcal{M}^+(V)$ induced by the dissipation potential $\mathscr R$ via \begin{equation} \label{def:W-intro} \DVT\tau{\mu}{\nu} := \inf\left\{ \int_0^\tau \mathscr R(\rho_t,{\boldsymbol j}_t)\,\mathrm{d} t \, : \, \partial_t \rho_t + \odiv {\boldsymbol j}_t = 0, \ \rho_0 = \mu, \text{ and }\rho_ \tau = \nu\right\}.
\end{equation} In the Minimizing-Movement scheme a single increment with time step $\tau>0$ is defined by the minimization problem \begin{equation} \label{MM-intro} \rho^n \in \mathop{\rm argmin}_\rho \, \left( \DVT\tau{\rho^{n-1}}\rho + \mathscr E(\rho)\right) . \end{equation} By concatenating such solutions, constructing appropriate interpolations, and proving a compactness result---all steps similar to the procedure in~\cite[Part~I]{AmbrosioGigliSavare08}---we find a curve $(\rho_t,{\boldsymbol j}_t)_{t\in [0,T]}$ satisfying the continuity equation \eqref{eq:ct-eq-intro} such that \begin{equation} \label{ineq:soln-rel-gen-slope-intro} \int_0^t \bigl[\mathscr R(\rho_r,{\boldsymbol j}_r) + \mathscr{S}^-(\rho_r)\bigr]\, \mathrm{d} r + \mathscr E(\rho_t) \le \mathscr E(\rho_0)\qquad\text{for all $t\in[0,T]$}, \end{equation} where $\mathscr{S}^-:\mathrm{D}(\mathscr E)\to[0,{+\infty})$ is a suitable {\em relaxed slope} of the energy functional $\mathscr E$ with respect to the cost $\mathscr{W}$ (see~\eqref{relaxed-nuovo}). Under a lower-semicontinuity condition on $\mathscr{D}$ we show that $\mathscr{S}^-\ge \mathscr{D} $. It then follows that $\rho$ is a solution as defined above (see Definition~\ref{def:R-Rstar-balance}). Section~\ref{s:MM} is devoted to developing the `Minimizing-Movement' approach for general DVTs.
This requires establishing \begin{enumerate}[resume] \item Properties of $\mathscr{W}$ that generalize those of the `metric version' $\DVT\tau\mu\nu = \frac1{2\tau}d(\mu,\nu)^2$ (Section~\ref{ss:aprio}); \item A generalization of the `Moreau-Yosida approximation' and of the `De Giorgi variational interpolant' to the non-metric case, and a generalization of their properties (Sections~\ref{ss:MM} and~\ref{ss:aprio}); \item A compactness result as $\tau\to0$, based on the properties of $\mathscr{W}$ (Section~\ref{ss:compactness}); \item A proof of $\mathscr{S}^-\ge \mathscr{D} $ (Corollary~\ref{cor:cor-crucial}). \end{enumerate} This procedure leads to our existence result, Theorem \ref{thm:construction-MM}, for solutions in the sense of Definition \ref{def:R-Rstar-balance}. \subsubsection{Uniqueness of solutions} We prove uniqueness of variational solutions under suitable convexity conditions on $\mathscr{D} $ and $\mathscr E$ (Theorem~\ref{thm:uniqueness}), following an idea by Gigli~\cite{Gigli10}. \normalcolor \subsection{Examples} \label{subsec:examples-intro} We will use the following two guiding examples to illustrate the results of this paper. Precise assumptions are given in Section~\ref{ss:assumptions}. In both examples the state space consists of measures $\rho$ on a standard Borel space $(V,\mathfrak B)$ endowed with a reference Borel measure $\pi$. The kernel $x\mapsto \kappa(x,\cdot)$ is a measurable family of nonnegative measures with uniformly bounded mass, such that the pair $(\pi,\kappa)$ satisfies detailed balance (see Section~\ref{ss:assumptions}). \normalcolor \emph{Example 1: Linear equations \color{ddcyan} driven by the Boltzmann entropy. \color{black}} This is the example that we have been using in this introduction.
The equation is the linear equation~\eqref{eq:fokker-planck}, \[ \partial_t\rho_t(\mathrm{d} x) = \int_{y\in V} \rho_t(\mathrm{d} y) \kappa(y,\mathrm{d} x) - \rho_t(\mathrm{d} x)\int_{y\in V} \kappa(x,\mathrm{d} y), \] which can also be written in terms of the density $u =\mathrm{d}\rho/\mathrm{d} \pi$ as \[ \partial_t u_t(x) = \int_{y\in V} \bigl[u_t(y)-u_t(x)\bigr] \, \kappa(x,\mathrm{d} y), \] \color{ddcyan} and corresponds to the linear field $\rmF$ of \eqref{cond:heat-eq-2}. Apart from the classical quadratic setting of \eqref{eq:68}, \color{black} two gradient structures for this equation have recently received attention in the literature, both driven by the Boltzmann entropy \eqref{def:phi-log-intro} $\upphi(s) = s\log s - s + 1$ as described in~\eqref{choices}: \begin{enumerate}[label=\textit{(\arabic*)}] \item The `cosh' structure: $\Psi^*(\xi) = 4\bigl(\cosh(\xi/2) \normalcolor -1\bigr)$ and $\upalpha(u,v) = \sqrt{uv}$; \item The `quadratic' structure: $\Psi^*(\xi) = \tfrac12 \xi^2$ and $\upalpha (u,v) = (u-v)/\log(u/v)$. \end{enumerate} However, the approach of this paper applies to more general combinations $(\upphi,\Psi^*,\upalpha)$ that lead to the same equation. \color{ddcyan} Due to the particular structure of \eqref{eq:184}, it is clear that the $1$-homogeneity of the linear map $\rmF$ \eqref{cond:heat-eq-2} and the $0$-homogeneity of the term $\upphi'(v)-\upphi'(u)$ associated with \color{black} the Boltzmann entropy \eqref{def:phi-log-intro} restrict the range of possible $\upalpha$ to \emph{$1$-homogeneous functions} like the `mean functions' $\upalpha(u,v) = \sqrt{uv}$ (geometric) and $\upalpha (u,v) = (u-v)/\log(u/v)$ (logarithmic).
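Both structures can indeed be checked to produce the same edge flux, and hence the same linear equation. The following sketch is our own illustrative computation (the edge-flux form $\upalpha(u,v)\,(\Psi^*)'(\log(u/v))$ and all helper names are assumptions for this check, not notation from the paper); it verifies numerically that both choices reduce to $u-v$:

```python
import math

def flux_cosh(u, v):
    # 'cosh' structure: alpha(u,v) = sqrt(uv), (Psi*)'(xi) = 2*sinh(xi/2),
    # evaluated at the discrete quantity xi = log(u) - log(v)
    return math.sqrt(u * v) * 2.0 * math.sinh(0.5 * math.log(u / v))

def flux_quadratic(u, v):
    # 'quadratic' structure: alpha(u,v) = (u-v)/log(u/v), (Psi*)'(xi) = xi
    return (u - v) / math.log(u / v) * math.log(u / v)

# both fluxes agree with the linear expression u - v (for u != v, u, v > 0)
for u, v in [(2.0, 0.5), (1.5, 3.0), (0.1, 0.7)]:
    assert abs(flux_cosh(u, v) - (u - v)) < 1e-9
    assert abs(flux_quadratic(u, v) - (u - v)) < 1e-9
```

The identity $\sqrt{uv}\cdot 2\sinh\bigl(\tfrac12\log(u/v)\bigr)=u-v$ is the algebraic reason why two very different dissipation mechanisms generate the same linear evolution.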
Confining \color{black} the analysis to concave functions (according to Remark \ref{rem:alpha-concave}), \color{black} we observe that every concave and $1$-homogeneous function $\upalpha$ can be obtained from a concave generating function $\mathfrak f:(0,{+\infty})\to (0,{+\infty})$ via \begin{equation} \label{eq:150} \upalpha(u,v)=u\mathfrak f(v/u)=v\mathfrak f(u/v),\quad \mathfrak f(r):=\upalpha(r,1),\quad u,v,r>0. \end{equation} The symmetry of $\upalpha $ corresponds to the property \begin{equation} \label{eq:151} r\mathfrak f(1/r)=\mathfrak f(r)\quad\text{for every }r>0, \end{equation} and shows that the function \begin{equation} \label{eq:152} \mathfrak g(s):=\frac{\exp(s)-1}{\mathfrak f(\exp(s))}\quad s\in \mathbb{R}, \text{ is odd}. \end{equation} The concavity of $\mathfrak f$ also shows that $\mathfrak g$ is increasing, so that we can define \begin{equation} \label{eq:149} \Psi^*(\xi):=\int_0^{\xi} \mathfrak g(s)\,\mathrm{d} s =\int_1^{\exp(\xi)}\frac{r-1}{\mathfrak f (r)}\frac{\mathrm{d} r}r,\quad \xi\in \mathbb{R}, \end{equation} which is convex, even, and superlinear if \begin{equation} \label{eq:153} \upalpha(0,1)=\mathfrak f(0)= \lim_{r\to0}r\mathfrak f\Bigl(\frac1r\Bigr)=0.
\end{equation} A natural class of concave and $1$-homogeneous weight functions is provided by the \color{red} {\em Stolarsky means} $\mathfrak c_{p,q}(u,v)$ with appropriate $p,q\in\mathbb{R}$, and any $u,v>0$ \cite[Chapter VI]{Bullen2003handbook}: \[ \upalpha(u,v) = \mathfrak c_{p,q}(u,v) := \begin{cases} \Bigl(\frac pq\frac{v^q-u^q}{v^p-u^p}\Bigr)^{1/(q-p)} &\text{if $p\ne q$, $q\ne 0$},\\ \Bigl( \frac{1}{p}\frac{v^p-u^p}{\log(v) - \log(u)}\Bigr)^{1/p} &\text{if $p\ne 0$, $q= 0$}, \\ e^{-1/p}\Bigl(\frac{v^{v^p}}{u^{u^p}}\Bigr)^{1/(v^p-u^p)} &\text{if $p= q\ne 0$}, \\ \sqrt{uv} &\text{if $p= q= 0$}, \end{cases} \] from which we identify other simpler means, such as the {\em power means} $\mathfrak m_p(u,v) = \mathfrak c_{p,2p}(u,v)$ with $p\in [-\infty, 1]$: \begin{equation} \label{eq:147} \mathfrak m_p(u,v) = \begin{cases} \Big(\frac 12\big(u^p+v^p\big)\Big)^{1/p}&\text{if $0<p\le 1$, or $-\infty<p<0$ and $u,v\neq0$},\\ \sqrt{uv}&\text{if }p=0,\\ \min(u,v)&\text{if }p=-\infty,\\ 0&\text{if }p<0\text{ and }uv=0, \end{cases} \end{equation} and the generalized logarithmic mean $\mathfrak l_p(u,v)=\mathfrak c_{1,p+1}(u,v)$, $p\in[-\infty,-1]$. \color{black} The power means are obtained from the concave generating functions \begin{equation} \label{eq:148} \mathfrak f_p(r):=2^{-1/p}(r^p+1)^{1/p} \quad \text{if }p\neq 0,\quad \mathfrak f_0(r)=\sqrt r,\quad \mathfrak f_{-\infty}(r)=\min(r,1),\quad r>0. \end{equation} We can thus define \begin{equation} \label{eq:149p} \Psi_p^*(\xi):=2^{1/p}\int_1^{\exp \xi} \frac{r-1}{(r^p+1)^{1/p}}\,\frac{\mathrm{d} r}r,\quad \xi\in \mathbb{R}, \quad p\in (-\infty,1]\setminus \{0\}, \end{equation} with the obvious changes when $p=0$ (the case $\Psi_0^*(\xi)=4(\cosh(\xi/2)-1)$) or $p=-\infty$ (the case $\Psi_{-\infty}^*(\xi)= \exp(|\xi|)-|\xi|-1$). It is interesting to note that the case $p=-1$ (harmonic mean) corresponds to \begin{equation} \label{eq:154} \Psi_{-1}^*(\xi)=\cosh(\xi)-1.
\end{equation} We finally note that the arithmetic mean $\upalpha(u,v)=\mathfrak m_1(u,v)=(u+v)/2$ would yield $\Psi_1^*(\xi)=4\log\bigl(\tfrac12(1+\rme^\xi)\bigr)-2\xi$, which is not superlinear. \emph{Example 2: Nonlinear equations.} We consider a combination of \color{black} $\upphi$, $\Psi^*$, and $\upalpha$ such that the function $\rmF$ introduced in \eqref{eq:184} has a continuous extension up to the boundary of $[0,{+\infty})^2$ and satisfies a suitable growth and monotonicity condition (see Section~\ref{s:ex-sg}). The resulting integro-differential equation is given by \eqref{eq:180}. Here is a list of some interesting cases (we will neglect all the issues \color{black} concerning growth and regularity). \begin{enumerate} \item A field of the form $\rmF(u,v)=f(v)-f(u)$ with $f:\mathbb{R}_+\to \mathbb{R}$ monotone corresponds to the equation \[ \partial_t u_t(x) = \int_{y\in V} \bigl(f(u_t(y))-f(u_t(x))\bigr)\, \kappa(x,\mathrm{d} y), \] and can be classically considered in the framework of Dirichlet forms, i.e.~$\upalpha \equiv \color{black} 1$, $\Psi^*(r)= r^2/2$, with energy $\upphi$ satisfying $\upphi' = f$. \item The case $\rmF(u,v)=g(v-u)$, with $g:\mathbb{R}\to \mathbb{R}$ monotone and odd, yields the equation \[ \partial_t u_t(x) = \int_{y\in V} g\bigl(u_t(y)-u_t(x)\bigr)\, \kappa(x,\mathrm{d} y), \] and can be obtained with the choices $\upalpha \equiv \color{black} 1$, $\upphi(s):=s^2/2$ and $\Psi^*(r):=\int_0^r g(s)\,\mathrm{d} s$. \item Consider now the case when $\rmF$ is positively $q$-homogeneous, with $q\in [0,1]$. It is then natural to consider a $q$-homogeneous $\upalpha$ and the logarithmic entropy $\upphi(r)=r\log r-r+1$. If the function $h:(0,\infty)\to \mathbb{R}$, $h(r):=\rmF(r,1)/\upalpha(r,1)$ is increasing, then, setting as in \eqref{eq:149p} \begin{displaymath} \Psi^*(\xi):=\int_1^{\exp (\xi)}h(r)\,\mathrm{d} r, \end{displaymath} equation \eqref{eq:180} provides an example of a generalized gradient system $(\mathscr E,\mathscr R,\mathscr R^*)$.
Simple examples are $\rmF(u,v)=v^q-u^q$, corresponding to the equation \[ \partial_t u_t(x) = \int_{y\in V} \bigl(u^q_t(y)-u^q_t(x)\bigr)\, \kappa(x,\mathrm{d} y), \] with $\upalpha(u,v):= \mathfrak m_p(u^q,v^q)$ and $\Psi^*(\xi):=\frac 1q\Psi_p^*(q\xi)$, where $\Psi^*_p$ has been defined in~\eqref{eq:149p}. In the case $p=0$ we get $\Psi^*(\xi)=\frac 4q\big(\cosh(q\xi/2)-1\big)$. As a last example, we can consider $\rmF(u,v)=\operatorname{sign} (v-u)|v^m-u^m|^{1/m}$, $m>0$, and $\upalpha(u,v)=\min(u, v)$; in this case, the function $h$ given by \color{black} $h(r)=(r^m-1)^{1/m}$ when $r\ge1$, and $h(r)=-(r^{-m}-1)^{1/m}$ when $r<1$, satisfies the required monotonicity property. \end{enumerate} \subsection{Comments} \label{ss:comments} \emph{Rationale for studying this structure.} \color{ddcyan} We think that the structure of generalized gradient systems $(\mathscr E,\mathscr R,\mathscr R^*)$ is sufficiently rich and interesting to deserve a careful analysis. It provides a genuine extension of the more familiar quadratic gradient-flow structure of Maas, Mielke, and Chow--Huang--Zhou, which fits better into the metric framework of \cite{AmbrosioGigliSavare08}. In Section~\ref{s:ex-sg} we will also show its connection with the theory of dissipative evolution equations.
Moreover, \color{black} the specific non-homogeneous structure based on the $\cosh$ function~\eqref{choice:cosh} has a number of arguments in its favor, which can be summarized in the statement that it is `natural' in various different ways: \begin{enumerate} \item It appears in the characterization of large deviations of Markov processes; see Section~\ref{ss:ldp-derivation} or~\cite{MielkePeletierRenger14,BonaschiPeletier16}; \item It arises in evolutionary limits of other gradient structures (including quadratic ones) \cite{ArnrichMielkePeletierSavareVeneroni12,Mielke16,LieroMielkePeletierRenger17,MielkeStephan19TR}; \item It `responds naturally' to external forcing \cite[Prop.~4.1]{MielkeStephan19TR}; \item It can be generalized to nonlinear equations \cite{Grmela84,Grmela10}. \end{enumerate} We will explore these claims in more detail in a forthcoming paper. Last but not least, the very fact that non-quadratic, generalized gradient flows may arise in the limit of gradient flows suggests that allowing for a broad class of dissipation mechanisms is crucial in order to (1) fully exploit the flexibility of the gradient-structure \color{black} formulation, and (2) explore its robustness with respect to $\Gamma$-converging energies and dissipation potentials. \color{black} \emph{Potential for generalization.} In this paper we have chosen to concentrate on the consequences of non-homogeneity of the dissipation potential $\Psi$ for the techniques that are commonly used in gradient-flow theory. Until now, the lack of a sufficiently general rigorous construction of the functional $\mathscr R$ and its minimal integral over curves $\mathscr{W}$ has impeded the use of this variational structure in rigorous proofs, and a main aim of this paper is to provide a way forward by constructing a rigorous framework for these objects, while keeping the setup (in particular, the ambient space $V$) as general as possible.
\color{black} In order to restrict the length of this paper, we considered only simple driving functionals $\mathscr E$, which are of the local variety $\mathscr E(\rho) = \int \upphi(\mathrm{d}\rho/\mathrm{d}\pi)\,\mathrm{d}\pi$. \normalcolor Many gradient systems appearing in the literature are driven by more general functionals that include interaction and other nonlinearities~\cite{ErbarFathiLaschosSchlichting16TR,ErbarFathiSchlichting19TR,RengerZimmer19TR,HudsonVanMeursPeletier20TR}, and we expect that the techniques of this paper will be of use in the study of such systems. As one specific direction of generalization, we note that the Minimizing-Movement construction on which the proof of Theorem \ref{thm:construction-MM} is based has a scope wider than that of the generalized gradient structure $(\mathscr E, \mathscr R, \mathscr R^*)$ \color{black} under consideration. In fact, as we show in Section~\ref{s:MM}, Theorem~\ref{thm:construction-MM} yields the existence of (suitably formulated) gradient flows in a general \emph{topological space} endowed with a cost fulfilling suitable properties. While we do not develop this discussion in this paper, at places throughout the paper \color{black} we hint at this prospective generalization: the `abstract-level' properties of the DVT cost are addressed in Section~\ref{ss:4.5}, and the whole proof of Theorem \ref{thm:construction-MM} is carried out under more general conditions than those required on the `concrete' system set up in Section \ref{s:assumptions}. \color{black} \emph{Challenges for generalization.} A well-formed functional framework includes a concept of solutions that behaves well under the taking of limits, and the existence proof is the first test of this.
Our existence proof highlights a central challenge here, in the appearance of \emph{two} slope functionals $\mathscr{S}^-$ and $\mathscr{D}$ that both represent rigorous versions of the `Fisher information' term $\mathscr R^*\bigl(\rho,-\dnabla\upphi'(\mathrm{d} \rho/\mathrm{d} \pi)\bigr)$. The chain-rule lower-bound inequality holds under general conditions for $\mathscr{D}$ (Theorem~\ref{th:chain-rule-bound}), but the Minimizing-Movement construction leads to the more abstract object $\mathscr{S}^-$. Passing to the limit in the minimizing-movement approach requires connecting the two through the inequality $\mathscr{S}^-\geq \mathscr{D}$. We prove it by first obtaining the inequality $\mathscr{S} \geq \mathscr{D}$, cf.\ Proposition \ref{p:slope-geq-Fish}, under the condition that a solution to the $(\mathscr E, \mathscr R, \mathscr R^*)$ system exists (for instance, by the approach developed in Section \ref{s:ex-sg}). We then deduce the inequality $\mathscr{S}^- \geq \mathscr{D}$ under the further condition that $\mathscr{D}$ be lower semicontinuous, which can in turn be proved under a suitable convexity condition (cf.\ Prop.\ \ref{PROP:lsc}). \color{black} We hope that more effective ways of dealing with these issues will be found in the future. \emph{Comparison with the Weighted Energy-Dissipation method.} It would be interesting to develop the analogous variational approach based on studying the \color{black} limit behaviour as $\varepsilon\downarrow0$ of the minimizers $(\rho_t,{\boldsymbol j}_t)_{t\ge0}$ of the Weighted Energy-Dissipation (\textrm{WED}) \color{black} functional \begin{equation} \label{eq:69} \mathscr{W}_\varepsilon(\rho,{\boldsymbol j} ):=\int_0^{+\infty} \mathrm e^{-t/\varepsilon} \Big(\mathscr R(\rho_t,{\boldsymbol j}_t)+\frac1\varepsilon\mathscr E(\rho_t)\Big)\,\mathrm{d} t \end{equation} among the solutions to the continuity equation with initial datum $\rho_0$, see \cite{RSSS19}.
Indeed, the \emph{intrinsic character} of the \textrm{WED} functional, which only features the dissipation potential $\mathscr R$, makes it suitable for the present non-metric framework. \color{black} \subsection{Notation} The following table collects the notation used throughout the paper. \begin{center} \newcommand{\specialcell}[2][c]{ \begin{tabular}[#1]{@{}l@{}}#2\end{tabular}} \begin{small} \begin{longtable}{lll} $\dnabla$, $\odiv$ & graph gradient and divergence &\eqref{eq:def:ona-div}\\ $\upalpha(\cdot,\cdot)$ & multiplier in flux rate $\boldsymbol\upnu_\rho$ & Ass.~\ref{ass:Psi}\\ $\upalpha^\infty$, $\upalpha_*$ & recession function, Legendre transform & Section~\ref{subsub:convex-functionals} \\ $\upalpha[\cdot|\cdot]$, $\hat\upalpha$ & measure map, perspective function & Section~\ref{subsub:convex-functionals} \\ $\mathbb{C}ER ab$ & set of curves $\rho$ with finite action & \eqref{def:Aab}\\ $ \|\kappa_V\|_\infty$ \color{black} & upper bound on $\kappa$ & Ass.~\ref{ass:V-and-kappa}\\ $ \mathbb{C}b $ & space of bounded continuous functions with supremum norm\\ $\mathbb{C}E ab$ & set of pairs $(\rho,{\boldsymbol j} )$ satisfying the continuity equation & Def.~\ref{def-CE}\\ ${\mathrm D}_\upphi(u,v)$, ${\mathrm D}^\pm_\upphi(u,v)$ & integrands defining the Fisher information $\mathscr{D}$ & \eqref{subeq:D}\\ $\mathscr{D}$ & Fisher-information functional & Def.~\ref{def:Fisher-information}\\ $E = V\times V$ & space of edges & Ass.~\ref{ass:V-and-kappa}\\ $\mathscr E$, ${\mathrm D}(\mathscr E)$ & driving entropy \color{black} functional and its domain & \eqref{eq:def:S} \& Ass.~\ref{ass:S}\\ $\rmF$ & vector field & \eqref{eq:184}\\ ${\boldsymbol\vartheta}_\rho^\pm$ & $\rho$-adjusted jump rates & \eqref{def:teta}\\ ${\boldsymbol\vartheta}_\pi$ & equilibrium jump rate & \eqref{nu-pi}\\ $\kappa$ & jump kernel &\eqref{eq:def:generator} \&
Ass.~\ref{ass:V-and-kappa}\\ $\kernel\kappa\gamma$ & $\gamma \otimes \kappa$ & \eqref{eq:84}\\ $\mathscr L$ & Energy-Dissipation balance functional &\eqref{eq:def:mathscr-L}\\ ${\mathcal M}(\Omega;\mathbb{R}^m)$, ${\mathcal M}^+(\Omega)$ & vector (positive) measures on $\Omega$ & Sec.~\ref{ss:3.1}\\ $\boldsymbol\upnu_\rho$ & edge measure in definition of $\mathscr R^*$, $\mathscr R$ &\eqref{eq:def:R*-intro}, \eqref{eq:def:R-intro}, \eqref{eq:def:alpha}\\ $Q$, $Q^*$ & generator and dual generator & \eqref{eq:fokker-planck}\\ $\mathscr R$, $\mathscr R^*$ & dual pair of dissipation potentials & \eqref{eq:def:R*-intro}, \eqref{eq:def:R-intro}, Def.~\ref{def:R-rigorous}\\ $\mathbb{R}_+ := [0,\infty)$ \\ ${\mathsf s}$ & symmetry map $(x,y) \mapsto (y,x)$ & \eqref{eq:87}\\ $\mathscr{S}^-$ & relaxed slope & \eqref{relaxed-nuovo}\\ $\Upsilon$ & perspective function associated with $\Psi$ and $\upalpha$& \eqref{Upsilon}\\ $V$ & space of states & Ass.~\ref{ass:V-and-kappa}\\ $\upphi$ & density of $\mathscr E$ & \eqref{eq:def:S} \& Ass.~\ref{ass:S}\\ $\Psi$, $\Psi^*$ & dual pair of dissipation functions & Ass.~\ref{ass:Psi}, Lem.~\ref{l:props:Psi}\\ $\mathscr{W}$ & Dynamical-Variational Transport cost & \eqref{def:W-intro} \& Sec.~\ref{sec:cost}\\ $\mathbb W$ & $\mathscr{W}$-action & \eqref{def-tot-var}\\ ${\mathsf x},{\mathsf y}$ & coordinate maps $(x,y) \mapsto x$ and $(x,y)\mapsto y$ & \eqref{eq:87}\\ \end{longtable} \end{small} \end{center} \subsubsection*{\bf Acknowledgements} M.A.P.\ acknowledges support from NWO grant 613.001.552, ``Large Deviations and Gradient Flows: Beyond Equilibrium''. R.R.\ and G.S.\ acknowledge support from the MIUR--PRIN project 2017TEXA3H ``Gradient flows, Optimal Transport and Metric Measure Structures''. O.T.\ acknowledges support from NWO Vidi grant 016.Vidi.189.102, ``Dynamical-Variational Transport Costs and Application to Variational Evolutions''.
Finally, the authors thank Jasper Hoeksema for insightful and valuable comments during the preparation of this manuscript. \section{Preliminary results} \label{ss:3.1} \subsection{Measure theoretic preliminaries} Let $(Y,\mathfrak B)$ be a measurable space. When $Y$ is endowed with \color{black} a (metrizable and separable) topology $\tau_Y$ we will often assume that $\mathfrak B$ coincides with the Borel $\sigma$-algebra $\mathfrak B(Y,\tau_Y)$ induced by $\tau_Y$. We recall that $(Y,\mathfrak B)$ is called a \emph{standard Borel space} if it is isomorphic (as a measurable space) to a Borel subset of a complete and separable metric space; equivalently, one can find a Polish topology $\tau_Y$ \color{black} on $Y$ such that $\mathfrak B=\mathfrak B(Y,\tau_Y)$. \par We will denote by ${\mathcal M}(Y;\mathbb{R}^m)$ the space of $\sigma$-additive measures $\mu: \mathfrak B \to \mathbb{R}^m$ of \emph{finite} total variation $\|\mu\|_{TV}:=|\mu|(Y)<{+\infty}$, where for every $B\in\mathfrak B$ \[ |\mu|(B):= \sup \left\{ \sum_{i=0}^{+\infty} |\mu(B_i)|\, : \ B_i \in \mathfrak B,\, \ B_i \text{ pairwise disjoint}, \ B = \bigcup_{i=0}^{+\infty} B_i \right\}. \] The set function $|\mu|: \mathfrak B \to [0,{+\infty})$ is a positive finite measure on $\mathfrak B$ \cite[Thm.\ 1.6]{AmFuPa05FBVF} and $({\mathcal M}(Y;\mathbb{R}^m),\|\cdot\|_{TV})$ is a Banach space. In the case $m=1$, we will simply write ${\mathcal M}(Y)$, and we shall denote the space of \emph{positive} finite measures on $\mathfrak B$ by ${\mathcal M}^+(Y)$.
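For a measure with finitely many atoms, the supremum in the definition of $|\mu|$ is attained by the partition that separates the atoms with positive mass from those with negative mass (the Hahn decomposition). A minimal illustrative sketch (our own; the helper names are hypothetical, not notation from the paper):

```python
# atoms of a signed measure mu on the finite set Y = {a, b, c}
mu = {'a': 2.0, 'b': -1.5, 'c': 0.5}

def measure(mu, B):
    # value mu(B) of the measure on a subset B of Y
    return sum(mu[y] for y in B)

def tv(mu):
    # |mu|(Y): the optimal partition isolates each atom,
    # so the supremum equals the sum of |mu({y})|
    return sum(abs(m) for m in mu.values())

# a coarser partition under-counts, illustrating why a supremum is needed:
# |mu({a,b})| + |mu({c})| = |0.5| + |0.5| = 1.0, while |mu|(Y) = 4.0
coarse = abs(measure(mu, ['a', 'b'])) + abs(measure(mu, ['c']))
assert coarse <= tv(mu) == 4.0
```

Cancellation between the positive atom at `a` and the negative atom at `b` is exactly what the supremum over partitions rules out.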
For $m>1$, we will identify any element $\mu \in {\mathcal M}(Y;\mathbb{R}^m)$ with a vector $(\mu^1,\ldots,\mu^m)$, with $\mu^i \in {\mathcal M}(Y)$ for all $i=1,\ldots, m$. If $\varphi =(\varphi^1,\ldots,\varphi^m)\in \mathrm{B}_{\mathrm b}(Y;\mathbb{R}^m)$, the set of bounded $\mathbb{R}^m$-valued $\mathfrak B$-measurable maps, the duality between $\mu \in {\mathcal M}(Y;\mathbb{R}^m)$ and $\varphi$ can be expressed by \normalcolor \[ \langle\mu,\varphi\rangle := \int_{Y} \varphi \cdot \mu (\mathrm{d} x) = \sum_{i=1}^m \int_Y \varphi^i(x) \mu^i(\mathrm{d} x). \] For every $\mu\in {\mathcal M}(Y;\mathbb{R}^m)$ and $B\in \mathfrak B$ we will denote by $\mu\mres B$ the restriction of $\mu$ to $B$, i.e.\ $\mu\mres B(A):=\mu(A\cap B)$ for every $A\in \mathfrak B$. Let $(X,\mathfrak A)$ be another measurable space and let ${\mathsf p}:X\to Y$ be a measurable map. For every $\mu\in {\mathcal M}(X;\mathbb{R}^m)$ we will denote by ${\mathsf p}_\sharp\mu$ the push-forward measure defined by \begin{equation} \label{eq:82} {\mathsf p}_\sharp\mu(B):=\mu({\mathsf p}^{-1}(B))\quad\text{for every }B\in \mathfrak B.
\end{equation} For every pair $\mu\in {\mathcal M}(Y;\mathbb{R}^m)$ and $\gamma\in {\mathcal M}^+(Y)$ there exist a unique (up to modification on a $\gamma$-negligible set) $\gamma$-integrable map $\frac{\mathrm{d}\mu}{\mathrm{d}\gamma}: Y\to\mathbb{R}^m$, a $\gamma$-negligible set $N\in \mathfrak B$ and a unique measure $\mu^\perp\in {\mathcal M}(Y;\mathbb{R}^m)$ yielding the \emph{Lebesgue decomposition} \begin{equation} \label{eq:Leb} \begin{gathered} \mu=\mu^a+\mu^\perp,\quad \mu^a=\frac{\mathrm{d}\mu}{\mathrm{d}\gamma}\,\gamma= \mu\mres(Y\setminus N),\quad \mu^\perp=\mu\mres N,\quad \gamma(N)=0\\ |\mu^\perp|\perp \gamma,\quad |\mu|(Y)=\int_Y \left|\frac{\mathrm{d} \mu}{\mathrm{d}\gamma}\right|\,\mathrm{d}\gamma+|\mu^\perp|(Y). \end{gathered} \end{equation} \subsection{Convergence of measures} \label{subsub:convergence-measures} Besides the topology of convergence in total variation (induced by the norm $\|\cdot\|_{TV}$), we will also consider \color{ddcyan} the topology of \emph{setwise convergence}, i.e.~the coarsest topology on ${\mathcal M}(Y;\mathbb{R}^m)$ making all the functions \begin{displaymath} \mu\mapsto \mu(B),\quad B\in \mathfrak B, \end{displaymath} continuous. \color{black} For a sequence $(\mu_n)_{n\in\mathbb{N}}$ and a candidate limit $\mu$ in ${\mathcal M}(Y;\mathbb{R}^m)$ we have the following equivalent characterizations of the corresponding convergence \cite[\S 4.7(v)]{Bogachev07}: \begin{enumerate} \item Setwise convergence: \begin{equation} \label{eq:71} \lim_{n\to{+\infty}}\mu_n(B)=\mu(B)\qquad \text{for every set $B\in \mathfrak B$}.
\end{equation} \item Convergence in duality with $\mathrm{B}_{\mathrm b}(Y;\mathbb{R}^m)$: \begin{equation} \label{eq:70} \lim_{n\to{+\infty}}\langle \mu_n,\varphi\rangle= \langle \mu,\varphi\rangle \qquad \text{for every $\varphi\in \mathrm{B}_{\mathrm b}(Y;\mathbb{R}^m)$}. \end{equation} \item Weak topology of the Banach space: the sequence $\mu_n$ converges to $\mu$ in the weak topology of the Banach space $({\mathcal M}(Y;\mathbb{R}^m),\|\cdot\|_{TV})$. \item Weak $L^1$-convergence of the densities: there exists a common dominating measure $\gamma\in {\mathcal M}^+(Y)$ such that $\mu_n\ll\gamma$, $\mu\ll\gamma$ and \begin{equation} \label{eq:72} \frac{\mathrm{d}\mu_n}{\mathrm{d}\gamma}\rightharpoonup \frac{\mathrm{d}\mu}{\mathrm{d}\gamma} \quad\text{weakly in }L^1(Y,\gamma;\mathbb{R}^m). \end{equation} \item Alternative form of weak $L^1$-convergence: \eqref{eq:72} holds \emph{for every} common dominating measure $\gamma$. \end{enumerate} We will refer to \emph{setwise convergence} for sequences satisfying one of the equivalent properties above. The above topologies also share the same notion of compact subsets, as stated in the following useful theorem, cf.\ \cite[Theorem 4.7.25]{Bogachev07}, where we shall denote by $\sigma({\mathcal M}(Y;\mathbb{R}^m) ; \mathrm{B}_{\mathrm b}(Y;\mathbb{R}^m) )$ the weak topology on ${\mathcal M}(Y;\mathbb{R}^m)$ induced by the duality with $\mathrm{B}_{\mathrm b}(Y;\mathbb{R}^m)$. \color{black} \begin{theorem} \label{thm:equivalence-weak-compactness} For every set $\emptyset\neq M\subset {\mathcal M}(Y;\mathbb{R}^m)$ the following properties are equivalent: \begin{enumerate} \item $M$ has a compact closure in the topology of setwise convergence.
\item $M$ has a compact closure in the topology $\sigma({\mathcal M}(Y;\mathbb{R}^m) ; \mathrm{B}_{\mathrm b}(Y;\mathbb{R}^m) )$. \item $M$ has a compact closure in the weak topology of $({\mathcal M}(Y;\mathbb{R}^m),\|\cdot\|_{TV})$. \item Every sequence in $M$ has a subsequence converging \color{black} on every set of $\mathfrak B$. \item There exists a measure $\gamma\in {\mathcal M}^+(Y)$ such that \begin{equation} \label{eq:73} \forall\,\varepsilon>0\ \exists\,\delta>0: \quad B\in \mathfrak B,\ \gamma(B)\le \delta\quad \Rightarrow\quad \sup_{\mu\in M}|\mu|(B)\le \varepsilon. \end{equation} \item There exists a measure $\gamma\in {\mathcal M}^+(Y)$ such that $\mu\ll\gamma$ for every $\mu\in M$ and the set $\{\mathrm{d}\mu/\mathrm{d}\gamma:\mu\in M\}$ has compact closure in the weak topology of $L^1(Y,\gamma;\mathbb{R}^m)$. \end{enumerate} \end{theorem} We also recall a useful characterization of weak compactness in $L^1$. \begin{theorem} \label{thm:L1-weak-compactness} Let $\gamma\in {\mathcal M}^+(Y)$ and $\emptyset\neq F\subset L^1(Y,\gamma;\mathbb{R}^m)$. The following properties are equivalent: \begin{enumerate} \item $F$ has compact closure in the weak topology of $L^1(Y,\gamma;\mathbb{R}^m)$; \item $F$ is bounded in $L^1(Y,\gamma;\mathbb{R}^m)$ and \color{ddcyan} equi-absolutely continuous, i.e.~\color{black} \begin{equation} \label{eq:73bis} \forall\,\varepsilon>0\ \exists\,\delta>0: \quad B\in \mathfrak B,\ \gamma(B)\le \delta\quad \Rightarrow\quad \sup_{f\in F}\int_B |f|\,\mathrm{d}\gamma\le \varepsilon.
\end{equation} \label{cond:setwise-compactness-superlinear} \item There exists a convex and superlinear function $\beta:\mathbb{R}_+\to\mathbb{R}_+$ such that \begin{equation} \label{eq:74} \sup_{f\in F}\int_Y \beta(|f|)\,\mathrm{d}\gamma<{+\infty}. \end{equation} \end{enumerate} \end{theorem} The name `equi-absolute continuity' above derives from the interpretation that the {measure} $f\gamma$ is absolutely continuous with respect to $\gamma$ in a uniform manner; `equi-absolute continuity' is a shortening of Bogachev's terminology `$F$ has uniformly absolutely continuous integrals'~\cite[Def.~4.5.2]{Bogachev07}. A fourth equivalent property is equi-integrability with respect to $\gamma$~\cite[Th.~4.5.3]{Bogachev07}, a fact that we will not use. When $Y$ is endowed with a (separable and metrizable) topology $\tau_Y$, \color{black} we will use the symbol $\mathbb{C}b(Y;\mathbb{R}^m) $ to denote the space of bounded $\mathbb{R}^m$-valued continuous functions on $(Y,\tau_Y)$. \color{black} We will consider the corresponding weak topology $\sigma({\mathcal M}(Y;\mathbb{R}^m);\mathbb{C}b(Y;\mathbb{R}^m))$ induced by the duality with $\mathbb{C}b(Y;\mathbb{R}^m)$. Prokhorov's Theorem yields that a subset $M\subset {\mathcal M}(Y;\mathbb{R}^m)$ has compact closure in this topology if it is bounded in the total variation norm and uniformly tight, i.e.\ \begin{equation} \label{eq:47} \forall\varepsilon>0\ \exists\, K\text{ compact in $Y$}: \quad \sup_{\mu\in M}|\mu|(Y\setminus K)\le \varepsilon. \end{equation} It is obvious that for a sequence $(\mu_n)_{n\in \mathbb{N}}$ convergence in total variation implies setwise convergence (or in duality with bounded measurable functions), and setwise convergence implies weak convergence in duality with bounded continuous functions.
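This chain of implications is strict: already for Dirac masses $\mu_n=\delta_{1/n}$ one has weak convergence to $\delta_0$ while setwise and total-variation convergence fail. A small numerical sketch (our own illustration with hypothetical helper names, representing measures by their finitely many atoms):

```python
# a measure with finitely many atoms is stored as {point: mass}
def pair(mu, f):
    # duality <mu, f> = integral of f against mu
    return sum(m * f(x) for x, m in mu.items())

def tv_norm(mu):
    # total variation norm of a purely atomic signed measure
    return sum(abs(m) for m in mu.values())

delta0 = {0.0: 1.0}
f_cont = lambda x: 1.0 / (1.0 + x)            # bounded continuous test function
ind0 = lambda x: 1.0 if x == 0.0 else 0.0     # bounded measurable indicator of {0}

for n in [10, 100, 1000]:
    mu_n = {1.0 / n: 1.0}                     # Dirac mass at 1/n
    # weak convergence: integrals of continuous functions converge
    assert abs(pair(mu_n, f_cont) - pair(delta0, f_cont)) < 1.0 / n
    # setwise convergence fails: mu_n({0}) = 0 while delta0({0}) = 1
    assert abs(pair(mu_n, ind0) - pair(delta0, ind0)) == 1.0
    # total-variation convergence fails: ||mu_n - delta0||_TV = 2 for every n
    assert tv_norm({1.0 / n: 1.0, 0.0: -1.0}) == 2.0
```

The same example shows why Prokhorov-type compactness in the weak topology is so much cheaper to obtain than compactness in the setwise topology of Theorem~\ref{thm:equivalence-weak-compactness}.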
\subsection{Convex functionals and concave transformations of measures} \label{subsub:convex-functionals} We will use the following construction several times. Let $\uppsi:\mathbb{R}^m\to [0,{+\infty}]$ be convex and lower semicontinuous and let us denote by $\uppsi^\infty:\mathbb{R}^m\to [0,{+\infty}]$ its recession function \begin{equation} \label{eq:3} \uppsi^\infty(z):=\lim_{t\to{+\infty}}\frac{\uppsi(tz)}t=\sup_{t>0}\frac{\uppsi(tz)-\uppsi(0)}t, \end{equation} which is a convex, lower semicontinuous, and positively $1$-homogeneous map with $\uppsi^\infty(0)=0$. \color{black} We define the functional $\mathscr F_\uppsi:{\mathcal M}(Y;\mathbb{R}^m) \times {\mathcal M}^+(Y)\to [0,{+\infty}]$ by \begin{equation} \label{def:F-F} \mathscr F_\uppsi(\mu|\nu) := \int_Y \uppsi \Bigl(\frac{\mathrm{d} \mu}{\mathrm{d} \nu}\Bigr)\,\mathrm{d}\nu+ \int_Y \uppsi^\infty\Bigl(\frac{\mathrm{d} \mu^\perp}{\mathrm{d} |\mu^\perp|}\Bigr) \, \mathrm{d} |\mu^\perp|,\qquad \text{for }\mu=\frac{\mathrm{d} \mu}{\mathrm{d} \nu}\nu+\mu^\perp. \end{equation} Note that when $\uppsi$ is superlinear then $\uppsi^\infty(z)={+\infty}$ for every $z\in\mathbb{R}^m\setminus\{0\}$. Consequently, \begin{equation} \label{eq:5} \text{$\uppsi$ superlinear,}\quad \mathscr F_\uppsi(\mu|\nu)<\infty\quad\Rightarrow\quad \mu\ll\nu,\quad \mathscr F_\uppsi(\mu|\nu)= \int_Y \uppsi \Bigl(\frac{\mathrm{d} \mu}{\mathrm{d} \nu}\Bigr)\,\mathrm{d}\nu. \end{equation} We collect in the next lemma a list of useful properties.
\begin{lemma} \label{l:lsc-general}\ \begin{enumerate} \item\label{l:lsc-general:i3} When $\uppsi$ is also positively $1$-homogeneous, then $\uppsi\equiv \uppsi^\infty$, $\mathscr F_\uppsi(\cdot|\nu)$ is independent of $\nu$ and will also be denoted by $\mathscr F_\uppsi(\cdot)$: it satisfies \begin{equation} \label{eq:78} \mathscr F_\uppsi(\mu) \color{black} =\int_Y \uppsi\left(\frac{\mathrm{d}\mu}{\mathrm{d}\gamma}\right) \,\mathrm{d}\gamma\quad \text{for every }\gamma\in {\mathcal M}^+(Y)\text{ such that } \mu\ll\gamma. \end{equation} \item If $\hat \uppsi:\mathbb{R}^{m+1}\to[0,\infty]$ denotes the positively $1$-homogeneous, convex perspective function associated with \color{black} $\uppsi$ by \begin{equation} \label{eq:76} \hat \uppsi(z,t):= \begin{cases} \uppsi(z/t)t&\text{if }t>0,\\ \uppsi^\infty(z)&\text{if }t=0,\\ {+\infty}&\text{if }t<0, \end{cases} \end{equation} then \begin{equation} \label{eq:77} \mathscr F_\uppsi(\mu|\nu)=\mathscr F_{\hat \uppsi}(\mu,\nu)\quad \text{for every } (\mu,\nu) \color{black} \in {\mathcal M}(Y;\mathbb{R}^m)\times {\mathcal M}^+(Y) \end{equation} with $\mathscr F_{\hat \uppsi}$ defined as in \eqref{eq:78}. \color{black} \item In particular, if $\gamma\in {\mathcal M}^+(Y)$ is a common dominating measure such that $\mu=u\gamma$, $\nu=v\gamma$, and $Y':=\{x\in Y:v(x)>0\}$ we also have \begin{equation} \label{eq:67} \mathscr F_\uppsi(\mu|\nu)= \int_Y \hat\uppsi(u,v)\,\mathrm{d}\gamma=\int_{Y'} \uppsi(u/v)v\,\mathrm{d}\gamma+ \int_{Y\setminus Y'} \uppsi^\infty(u)\,\mathrm{d}\gamma.
\end{equation} \item The functional $\mathscr F_\uppsi$ is convex; if $\uppsi$ is also positively $1$-homogeneous then \begin{equation} \label{eq:81} \begin{aligned} \mathscr F_\uppsi(\mu+\mu')&\le \mathscr F_\uppsi(\mu)+\mathscr F_\uppsi(\mu')\\ \mathscr F_\uppsi(\mu+\mu')&= \mathscr F_\uppsi(\mu)+\mathscr F_\uppsi(\mu')\quad \text{if }\mu\perp\mu'. \end{aligned} \end{equation} \item Jensen's inequality: \begin{equation} \label{eq:79} \hat\uppsi(\mu^a(B),\nu(B)) +\uppsi^\infty(\mu^\perp(B)) \le \mathscr F_\uppsi(\mu\mres B|\nu\mres B)\quad \text{for every }B\in \mathfrak B \end{equation} (with $\mu = \mu^a + \mu^\perp$ the Lebesgue decomposition of $\mu$ w.r.t.\ $\nu$). \item If $\uppsi(0)=0$ then for every $\mu\in {\mathcal M}(Y;\mathbb{R}^m)$, $\nu,\nu'\in {\mathcal M}^+(Y)$ \begin{equation} \label{eq:80} \nu\le \nu'\quad\Rightarrow\quad \mathscr F_\uppsi(\mu|\nu)\ge \mathscr F_\uppsi(\mu|\nu'). \end{equation} \item \label{l:lsc-general:i1} $\mathscr F_\uppsi$ is sequentially lower semicontinuous in ${\mathcal M}(Y;\mathbb{R}^m) \times {\mathcal M}^+(Y)$ with respect to the topology of setwise convergence. \item \label{l:lsc-general:i2} If $\mathfrak B$ is the Borel family induced by a Polish topology $\tau_Y$ on $Y$, then $\mathscr F_\uppsi$ is lower semicontinuous with respect to weak convergence (in duality with continuous bounded functions). \end{enumerate} \end{lemma} \begin{proof} The above properties are mostly well known; we give a quick sketch of the proofs for the reader's convenience.
\noindent \textit{(1)} Let us set $u:=\mathrm{d} \mu/\mathrm{d}\nu$, $u^\perp:=\mathrm{d}\mu^\perp/\mathrm{d}|\mu^\perp|$ and let $N\in\mathfrak B$ be a $\nu$-negligible set such that $\mu^\perp=\mu\mres N$. We also set $N':=\{y\in Y\setminus N:|u(y)|> 0\}$; notice that $\nu\mres N'\ll |\mu|$. If $v$ is the Lebesgue density of $|\mu|$ w.r.t.~$\gamma$, since $\uppsi=\uppsi^\infty$ is positively $1$-homogeneous and $\uppsi(0)=0$, we have \begin{align*} \mathscr F_\uppsi(\mu|\nu) &=\int_{N'}\uppsi(u)\,\mathrm{d} \nu+ \int_N \uppsi(u^\perp)\,\mathrm{d}|\mu^\perp| =\int_{N'}\uppsi(u)/|u|\,\mathrm{d} |\mu|+ \int_N \uppsi(u^\perp)\,\mathrm{d}|\mu^\perp| \\&=\int_{N'}v\uppsi(u)/|u|\,\mathrm{d} \gamma+ \int_N v\uppsi(u^\perp)\,\mathrm{d}\gamma= \int_{N'}\uppsi(uv/|u|)\,\mathrm{d} \gamma+ \int_N \uppsi(u^\perp v)\,\mathrm{d}\gamma \\&= \int_{N'}\uppsi(\mathrm{d}\mu/\mathrm{d}\gamma)\,\mathrm{d} \gamma+ \int_N \uppsi(\mathrm{d} \mu/\mathrm{d}\gamma)\,\mathrm{d}\gamma= \int_Y \uppsi(\mathrm{d} \mu/\mathrm{d}\gamma)\,\mathrm{d}\gamma=\mathscr F_\uppsi(\mu|\gamma), \end{align*} where we also used the fact that $|\mu|(Y\setminus (N\cup N'))=0$, so that $\mathrm{d} \mu/\mathrm{d}\gamma=0$ $\gamma$-a.e.~on $Y\setminus (N\cup N')$. \noindent\textit{(2)} Since $\hat\uppsi$ is $1$-homogeneous, we can apply the previous claim and evaluate $\mathscr F_{\hat\uppsi}(\mu,\nu)$ by choosing the dominating measure $\gamma:=\nu+\mu^\perp$. \noindent\textit{(3)} This is an immediate consequence of the first two claims. \noindent\textit{(4)} By \eqref{eq:77} it is sufficient to consider the $1$-homogeneous case. The convexity then follows from the convexity of $\uppsi$ by choosing a common dominating measure to represent the integrals. Relations \eqref{eq:81} are also immediate.
\noindent\textit{(5)} Using \eqref{eq:77} and selecting a dominating measure $\gamma$ with $\gamma(B)=1$, Jensen's inequality applied to the convex function $\hat\uppsi$ yields \begin{displaymath} \hat\uppsi(\mu(B),\nu(B))= \hat\uppsi\Big(\int_B \frac{\mathrm{d}\mu}{\mathrm{d}\gamma}\,\mathrm{d}\gamma, \int_B \frac{\mathrm{d}\nu}{\mathrm{d}\gamma}\,\mathrm{d}\gamma\Big)\le \int_B \hat\uppsi\Big(\frac{\mathrm{d}\mu}{\mathrm{d}\gamma},\frac{\mathrm{d}\nu}{\mathrm{d}\gamma}\Big)\,\mathrm{d}\gamma =\mathscr F_{\hat\uppsi}(\mu\mres B,\nu\mres B). \end{displaymath} Applying now the above inequality to the mutually singular couples $(\mu^a,\nu)$ and $(\mu^\perp,0)$ and using the second identity of \eqref{eq:81} we obtain \eqref{eq:79}. \noindent\textit{(6)} We apply \eqref{eq:77} and the first identity of \eqref{eq:67}, observing that if $\uppsi(0)=0$ then $\hat\uppsi$ is decreasing with respect to its second argument. \noindent\textit{(7)} By \eqref{eq:77} it is not restrictive to assume that $\uppsi$ is $1$-homogeneous. If $(\mu_n)_n$ is a sequence setwise converging to $\mu$ in ${\mathcal M}(Y;\mathbb{R}^m)$ we can find a common dominating measure $\gamma$ such that \eqref{eq:72} holds. The claimed property is then reduced to the weak lower semicontinuity of the functional \begin{equation} \label{eq:121} u\mapsto \int_Y \uppsi(u)\,\mathrm{d}\gamma \end{equation} in $L^1(Y,\gamma;\mathbb{R}^m)$. Since the functional in \eqref{eq:121} is convex and strongly lower semicontinuous in $L^1(Y,\gamma;\mathbb{R}^m)$ (thanks to Fatou's Lemma), it is weakly lower semicontinuous as well. \noindent\textit{(8)} This follows by the same argument as in \cite[Theorem 2.34]{AmFuPa05FBVF}, using a suitable dual formulation which also holds in Polish spaces, where all finite Borel measures are Radon (see e.g.~\cite[Theorem 2.7]{LMS18} for positive measures).
\end{proof} \subsubsection*{Concave transformation of vector measures} \label{subsub:concave-transformation} Let us set $\mathbb{R}_+:=[0,{+\infty}[$, $\mathbb{R}^m_+:=(\mathbb{R}_+)^m$, and let $\upalpha:\mathbb{R}^m_+\to\mathbb{R}_+$ be a continuous and concave function. It is obvious that $\upalpha$ is non-decreasing with respect to each variable. As for \eqref{eq:3}, the recession function $\upalpha^\infty$ is defined by \begin{equation} \label{eq:1} \upalpha^\infty(z):=\lim_{t\to{+\infty}}\frac{\upalpha(tz)}t=\inf_{t>0}\frac{\upalpha(tz)-\upalpha(0)}t,\quad z\in \mathbb{R}^m_+. \end{equation} We define the corresponding map $\upalpha:{\mathcal M}(Y;\mathbb{R}^m_+)\times{\mathcal M}^+(Y)\to{\mathcal M}^+(Y)$ by \begin{equation} \label{eq:6} \upalpha[\mu|\gamma]:= \upalpha\Bigl(\frac{\mathrm{d}\mu}{\mathrm{d}\gamma}\Bigr)\gamma+ \upalpha^\infty\Bigl(\frac{\mathrm{d}\mu^\perp}{\mathrm{d} |\mu^\perp|}\Bigr)|\mu^\perp|\quad \mu\in {\mathcal M}(Y;\mathbb{R}^m_+),\ \gamma\in {\mathcal M}^+(Y), \end{equation} where as usual $\mu=\frac{\mathrm{d}\mu}{\mathrm{d}\gamma}\gamma+\mu^\perp$ is the Lebesgue decomposition of $\mu$ with respect to~$\gamma$; in what follows, we will use the short-hand $\mu_\gamma := \frac{\mathrm{d}\mu}{\mathrm{d}\gamma}\gamma$. We also mention in advance that, for shorter notation, we will write $\upalpha[\mu_1,\mu_2|\gamma]$ in place of $\upalpha[(\mu_1,\mu_2)|\gamma]$. As for $\mathscr F$, it is not difficult to check that $\upalpha[\mu|\gamma]$ is independent of $\gamma$ if $\upalpha$ is positively $1$-homogeneous (and thus coincides with $\upalpha^\infty$).
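To illustrate the role of the recession function in this construction, consider the scalar case $m=1$ with the purely illustrative (not from the text) choice $\upalpha(u)=\sqrt u$, which is continuous, concave, and not positively $1$-homogeneous; its recession function vanishes, so \eqref{eq:6} discards the singular part of $\mu$:

```latex
% Illustration (m=1, sketch): for \upalpha(u)=\sqrt{u} one has
%   \upalpha^\infty(z)=\lim_{t\to+\infty}\sqrt{tz}/t=0 for every z\ge 0,
% so the second term in \eqref{eq:6} drops out and
\[
  \upalpha[\mu|\gamma]=\sqrt{\frac{\mathrm{d}\mu}{\mathrm{d}\gamma}}\;\gamma ,
\]
% i.e.\ the singular part \mu^\perp gives no contribution. By contrast,
% for a positively 1-homogeneous \upalpha the transformation does not
% depend on \gamma, as observed above.
```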
If we define the perspective function $\hat \upalpha:\mathbb{R}_+^{m+1}\to\mathbb{R}_+$ by \begin{equation} \label{eq:83} \hat \upalpha(z,t):= \begin{cases} \upalpha(z/t)t&\text{if }t>0,\\ \upalpha^\infty(z)&\text{if }t=0, \end{cases} \end{equation} we also get $\upalpha[\mu|\gamma]=\hat\upalpha(\mu,\gamma)$. We denote by $\upalpha_*:\mathbb{R}^m_+\to[-\infty,0]$ the upper semicontinuous concave conjugate of $\upalpha$ \begin{equation} \label{eq:4} \upalpha_*(y):=\inf_{x\in \mathbb{R}^m_+} \left( y\cdot x-\upalpha(x)\right),\quad D(\upalpha_*):=\big\{y\in \mathbb{R}^m_+:\upalpha_*(y)>-\infty\big\}. \end{equation} The function $\upalpha_*$ provides simple affine upper bounds for $\upalpha$ \begin{equation} \label{eq:8} \upalpha(x)\le x\cdot y-\upalpha_*(y)\quad\text{for every }y\in D(\upalpha_*) \end{equation} and Fenchel duality yields \begin{equation} \label{eq:7} \upalpha(x)=\inf_{y\in \mathbb{R}^m_+} \left( y\cdot x-\upalpha_*(y) \right)= \inf_{y\in D(\upalpha_*)}\left( y\cdot x-\upalpha_*(y) \right). \end{equation} We will also use that \begin{equation} \label{form-4-alpha-infty} \upalpha^\infty(z) = \inf_{y\in D(\upalpha_*)} y \cdot z\,. \end{equation} Indeed, on the one hand, for every $y \in D(\upalpha_*)$ and $t>0$ we have \[ \upalpha^\infty(z) \leq \frac1t \left( \upalpha(tz) - \upalpha(0)\right) \leq \frac1t \left( y \cdot (tz) -\upalpha(0) - \upalpha_*(y) \right); \] letting $t\to{+\infty}$, we conclude that $\upalpha^\infty(z) \leq y \cdot z$ for every $y \in D(\upalpha_*)$.
On the other hand, by \eqref{eq:7} we have \[ \begin{aligned} \upalpha^\infty(z) = \inf_{t>0}\frac{\upalpha(tz)-\upalpha(0)}t & = \inf_{t>0} \inf_{y\in D(\upalpha_*)} \frac{y \cdot (tz) -\upalpha_*(y) - \upalpha(0)}t \\ & = \inf_{y\in D(\upalpha_*)} \left( y \cdot z +\inf_{t>0} \frac{-\upalpha_*(y) - \upalpha(0)}t \right) = \inf_{y\in D(\upalpha_*)} y \cdot z, \end{aligned} \] where we have used that $-\upalpha_*(y) - \upalpha(0) \geq 0$ since $\upalpha(0) = \inf_{y\in D(\upalpha_*) } ({-}\upalpha_*(y))$. \par For every Borel set $B\subset Y$, Jensen's inequality yields (recall the notation $\mu_\gamma = \frac{\mathrm{d}\mu}{\mathrm{d}\gamma}\gamma$) \begin{equation} \label{eq:9} \begin{aligned} \upalpha[\mu|\gamma](B)&\le \upalpha\Bigl(\frac{\mu_\gamma(B)}{\gamma(B)}\Bigr)\gamma(B)+ \upalpha^\infty(\mu^\perp(B))\\ \upalpha[\mu|\gamma](B)&\le\upalpha(\mu(B))\quad\text{if }\upalpha=\upalpha^\infty \text{ is $1$-homogeneous.} \end{aligned} \end{equation} In fact, for every $y,y'\in D(\upalpha_*)$, \begin{align*} \upalpha[\mu|\gamma](B) &=\int_B \upalpha[\mu|\gamma]\le \int_B \Bigl(y\cdot \frac{\mathrm{d}\mu}{\mathrm{d}\gamma}-\upalpha_*(y)\Bigr)\,\mathrm{d}\gamma+ \int_B \Bigl(y'\cdot \frac{\mathrm{d}\mu}{\mathrm{d} |\mu^\perp|}\Bigr)\,\mathrm{d}|\mu^\perp| \\&=y\cdot \mu_\gamma(B)-\upalpha_*(y)\gamma(B) +y'\cdot \mu^\perp(B). \end{align*} Taking the infimum with respect to~$y$ and $y'$, and recalling \eqref{eq:7} and \eqref{form-4-alpha-infty}, we find \eqref{eq:9}. Choosing $y=y'$ in the previous formula we also obtain the linear upper bound \begin{equation} \label{eq:31} \upalpha[\mu|\gamma]\le y\cdot\mu-\upalpha_*(y)\gamma \quad\text{for every }y\in D(\upalpha_*).
\end{equation} \subsection{Disintegration and kernels} \label{subsub:kernels} Let $(X,\mathfrak A)$ and $(Y,\mathfrak B)$ be measurable spaces and let $\big(\kappa(x,\cdot)\big)_{x\in X}$ be an $\mathfrak A$-measurable family of measures in ${\mathcal M}^+(Y)$, i.e. \begin{equation} \label{eq:17} \text{for every $B\in \mathfrak B$,}\quad x\mapsto \kappa(x,B)\ \text{is $\mathfrak A$-measurable}. \end{equation} We will set \begin{equation} \label{eq:155} \kappa_Y(x):=\kappa(x,Y),\quad \|\kappa_Y\|_\infty :=\sup_{x\in X} \kappa(x,Y), \end{equation} and we say that $\kappa$ is a bounded kernel if $\|\kappa_Y\|_\infty$ is finite. If $\gamma\in {\mathcal M}^+(X)$ and \begin{equation} \label{eq:89} \text{$\kappa_Y$ is $\gamma$-integrable, i.e.}\quad \int_X \kappa(x,Y)\,\gamma(\mathrm{d} x)<{+\infty}, \end{equation} then Fubini's Theorem \cite[II, 14]{Dellacherie-Meyer78} shows that there exists a unique measure $\kernel\kappa\gamma(\mathrm{d} x,\mathrm{d} y)= \gamma(\mathrm{d} x)\kappa(x,\mathrm{d} y)$ on $(X\times Y,\mathfrak A\otimes \mathfrak B)$ such that \begin{equation} \label{eq:84} \kernel\kappa\gamma(A\times B)=\int_A \kappa(x,B)\,\gamma(\mathrm{d} x)\quad\text{for every }A\in \mathfrak A,\ B\in \mathfrak B. \end{equation} If $X=Y$, the measure $\gamma$ is called \emph{invariant} if $\kernel\kappa\gamma$ has the same marginals; equivalently, \begin{equation} \label{eq:156} {\mathsf y}_\sharp \kernel\kappa\gamma(\mathrm{d} y)= \int_X \kappa(x,\mathrm{d} y)\gamma(\mathrm{d} x)= \kappa_Y(y)\gamma(\mathrm{d} y), \end{equation} where $ {\mathsf y}:E \to V $ denotes the projection on the second component, cf.\ \eqref{eq:87} ahead. We say that $\gamma$ is \emph{reversible} if it satisfies the \emph{detailed balance condition}, i.e.\ $\kernel\kappa\gamma$ is symmetric: ${\mathsf s}_\sharp \kernel\kappa\gamma=\kernel\kappa\gamma$.
The concepts of invariance and detailed balance correspond to the analogous concepts in stochastic-process theory; see Section~\ref{ss:assumptions}. It is immediate to check that reversibility implies invariance. If $f:X\times Y\to \mathbb{R}$ is a positive or bounded measurable function, then \begin{equation} \label{eq:86} \text{the map $x\mapsto \kappa f(x):=\int_Y f(x,y)\kappa(x,\mathrm{d} y)$ is $\mathfrak A$-measurable} \end{equation} and \begin{equation} \label{eq:85} \int_{X\times Y}f(x,y)\,\kernel\kappa\gamma(\mathrm{d} x,\mathrm{d} y)= \int_X\Big(\int_Y f(x,y)\,\kappa(x,\mathrm{d} y)\Big)\gamma(\mathrm{d} x). \end{equation} Conversely, if $X,Y$ are standard Borel spaces, $\boldsymbol\kappa\in {\mathcal M}^+(X\times Y)$ (with the product $\sigma$-algebra) and the first marginal ${\mathsf p}^X_\sharp \boldsymbol\kappa$ of $\boldsymbol\kappa$ is absolutely continuous with respect to~$\gamma\in {\mathcal M}^+(X)$, then we may apply the disintegration Theorem \cite[Corollary 10.4.15]{Bogachev07} to find a $\gamma$-integrable kernel $(\kappa(x,\cdot))_{x\in X}$ such that $\boldsymbol\kappa=\kernel\kappa\gamma$. We will often apply the above construction in two cases: when $X=Y:=V$, the main domain of our evolution problems (see Assumptions \ref{ass:V-and-kappa} below), and when $X:=I=(a,b)$ is an interval of the real line endowed with the Lebesgue measure $\lambda$. In this case, we will denote by $t$ the variable in $I$ and by $(\mu_t)_{t\in I}$ a measurable family in ${\mathcal M}(Y)$ parametrized by $t\in I$: \begin{equation} \label{eq:99} \text{if }\int_I \mu_t(Y)\,\mathrm{d} t<{+\infty} \text{ then we set } \mu_\lambda\in {\mathcal M}(I\times Y),\quad \mu_\lambda(\mathrm{d} t,\mathrm{d} y)=\lambda(\mathrm{d} t)\mu_t(\mathrm{d} y).
\end{equation} \begin{lemma} \label{le:kernel-convergence} If $\mu_n\in {\mathcal M}(X)$ is a sequence converging to $\mu$ setwise and $(\kappa(x,\cdot))_{x\in X}$ is a \emph{bounded} measurable kernel in ${\mathcal M}^+(Y)$, then $\kernel\kappa{\mu_n}\to \kernel\kappa\mu$ setwise in ${\mathcal M}(X\times Y,\mathfrak A\otimes\mathfrak B)$. If $X,Y$ are Polish spaces and $\kappa$ also satisfies the weak Feller property, i.e. \begin{equation} \label{eq:Feller} x\mapsto \kappa(x,\cdot)\quad\text{is weakly continuous in }{\mathcal M}^+(Y) \end{equation} (where `weak' means in duality with continuous bounded functions), then for every weakly converging sequence $\mu_n\to\mu$ in ${\mathcal M}(X)$ we have $\kernel\kappa{\mu_n}\to \kernel\kappa\mu$ weakly as well. \end{lemma} \begin{proof} If $f:X\times Y\to \mathbb{R}$ is a bounded $\mathfrak A\otimes\mathfrak B$-measurable map, then by \eqref{eq:86} the map $\kappa f$ is also bounded and $\mathfrak A$-measurable, so that \begin{displaymath} \lim_{n\to{+\infty}}\int_{X\times Y} f\,\mathrm{d} \kernel\kappa{\mu_n}= \lim_{n\to{+\infty}}\int_{X} \kappa f\,\mathrm{d} \mu_n= \int_X \kappa f\,\mathrm{d} \mu=\int_{X\times Y} f\,\mathrm{d} \kernel\kappa\mu, \end{displaymath} showing the setwise convergence. The other statement follows by a similar argument.
\end{proof} \section{Jump processes, large deviations, and their generalized gradient structures} \label{s:assumptions} \subsection{The systems of this paper} \label{ss:assumptions} In the Introduction we described jump processes on~$V$ with kernel $\kappa$, and showed that the evolution equation $\partial_t\rho_t = Q^*\rho_t$ for the law $\rho_t$ of the process is a generalized gradient flow characterized by a driving functional $\mathscr E$ and a dissipation potential~$\mathscr R^*$. The mathematical setup of this paper is slightly different. Instead of starting with an evolution equation and proceeding to the generalized gradient system, our mathematical development starts with the generalized gradient system; we then consider the equation to be defined by this system. In this section, therefore, we describe the assumptions that we make on~$\mathscr E$ and $\mathscr R^*$ in order to set up a rigorous functional framework for the evolution equation~\eqref{eq:GF-intro}. We first state the assumptions about the set $V$ of `vertices' and the set $E:=V\times V$ of `edges'; `edges' are identified with ordered pairs $(x,y)$ of vertices $x,y\in V$. We will denote by $\mathsf x,\mathsf y:E\to V$ and $\symmapn:E\to E$ the coordinate and symmetry maps defined by \begin{equation} \mathsf x(x,y):=x,\quad \mathsf y(x,y):=y,\quad \symmapn(x,y): = (y,x)\quad \label{eq:87} \text{for every }x,y\in V.
\end{equation} \begin{Assumptions}{$V\!\pi\kappa$} \label{ass:V-and-kappa} We assume that \begin{equation} \label{locally-compact} \begin{gathered} \text{$(V,\mathfrak B,\pi)$ is a standard Borel measure space, $\pi\in {\mathcal M}^+(V)$,} \end{gathered} \end{equation} \noindent $(\kappa(x,\cdot))_{x\in V}$ is a bounded kernel in ${\mathcal M}^+(V)$ (see \S \ref{subsub:kernels}), satisfying the \emph{detailed-balance condition} \begin{equation}\label{detailed-balance} \int_{A} \kappa(x,B)\,\pi(\mathrm{d} x) = \int_{B} \kappa(y,A)\,\pi(\mathrm{d} y) \qquad \text{for all } A,B\in \mathfrak B, \end{equation} and the uniform upper bound \begin{equation} \label{bound-unif-kappa} \|\kappa_V\|_\infty= \sup_{x\in V} \,\kappa(x,V) <{+\infty}. \end{equation} \end{Assumptions} The measure $\pi\in{\mathcal M}^+(V)$ is often referred to as the \emph{invariant measure}, and it will be stationary under the evolution generated by the generalized gradient system. By Fubini's Theorem (see \S\,\ref{subsub:kernels}) we also introduce the measure ${\boldsymbol\vartheta}_\pi$ on $E$ given by \begin{equation} \label{nu-pi} {\boldsymbol\vartheta}_\pi(\mathrm{d} x\,\mathrm{d} y) = \kernel\kappa\pi(\mathrm{d} x,\mathrm{d} y)=\pi(\mathrm{d} x)\kappa (x,\mathrm{d} y),\quad {\boldsymbol\vartheta}_\pi(A{\times}B) = \int_{A} \kappa(x,B)\, \pi(\mathrm{d} x) \,. \end{equation} Note that the invariance of the measure $\pi$ and the detailed balance condition \eqref{detailed-balance} can be rephrased in terms of ${\boldsymbol\vartheta}_\pi$ as \begin{equation} \label{symmetry-nu-pi} \mathsf x_\sharp {\boldsymbol\vartheta}_\pi=\mathsf y_\sharp{\boldsymbol\vartheta}_\pi,\qquad \symmap {\boldsymbol\vartheta}_\pi = {\boldsymbol\vartheta}_\pi\,.
\end{equation} Conversely, if we choose a symmetric measure ${\boldsymbol\vartheta}_\pi\in {\mathcal M}^+(E)$ such that \begin{equation} \label{eq:88} \mathsf x_\sharp {\boldsymbol\vartheta}_\pi\ll\pi,\quad \frac{\mathrm{d} (\mathsf x_\sharp {\boldsymbol\vartheta}_\pi)}{\mathrm{d} \pi}\le \|\kappa_V\|_\infty <{+\infty} \quad\text{$\pi$-a.e.,} \end{equation} then the disintegration Theorem \cite[Corollary 10.4.15]{Bogachev07} shows the existence of a bounded measurable kernel $(\kappa(x,\cdot))_{x\in V}$ satisfying \eqref{detailed-balance} and \eqref{nu-pi}. We next turn to the driving functional, which is given by the construction in \eqref{def:F-F} and \eqref{eq:5} for a superlinear density $\uppsi=\upphi$ and for the choice $\nu=\pi$. \begin{Assumptions}{$\mathscr E\upphi$} \label{ass:S} The driving functional $\mathscr E: {\mathcal M}^+(V) \to [0,{+\infty}]$ is of the form \begin{equation} \label{driving-energy} \mathscr E(\rho): = \mathscr F_{\upphi}(\rho|\pi)= \begin{cases} \displaystyle \int_V \upphi\Bigl(\frac{\mathrm{d} \rho}{\mathrm{d}\pi}\Bigr)\,\mathrm{d}\pi &\text{if } \rho \ll \pi, \\ {+\infty} &\text{otherwise,} \end{cases} \end{equation} with \begin{multline} \label{cond-phi} \upphi\in \mathrm{C}([0,{+\infty}))\cap\mathrm{C}^1((0,{+\infty})),\; \min \upphi= 0, \text{ and $\upphi$ is convex}\\ \text{with superlinear growth at infinity.} \end{multline} \end{Assumptions} \noindent Under these assumptions the functional $\mathscr E$ is lower semicontinuous on ${\mathcal M}^+(V)$ both with respect to the topology of setwise convergence and with respect to any compatible weak topology (see Lemma~\ref{l:lsc-general}).
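It may help to record a minimal discrete instance of Assumptions~\ref{ass:V-and-kappa} (a sketch; the finite state space and the matrix $Q$ below are illustrative and not taken from the text):

```latex
% Minimal discrete instance (sketch): V=\{1,\dots,N\} with the discrete
% sigma-algebra, \pi=\sum_{x\in V}\pi_x\,\delta_x with \pi_x>0, and the
% kernel \kappa(x,\cdot)=\sum_{y\in V}Q(x,y)\,\delta_y with Q(x,y)\ge 0.
% The detailed-balance condition \eqref{detailed-balance} reduces to
\[
  \pi_x\,Q(x,y)=\pi_y\,Q(y,x)\qquad\text{for all }x,y\in V,
\]
% and the bound \eqref{bound-unif-kappa} to \max_{x}\sum_{y}Q(x,y)<+\infty,
% which is automatic for finite N. The edge measure of \eqref{nu-pi} is then
% the symmetric matrix {\boldsymbol\vartheta}_\pi(\{(x,y)\})=\pi_x\,Q(x,y).
```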
A central example was already mentioned in the introduction, i.e.\ the Boltzmann--Shannon entropy function \begin{equation} \label{logarithmic-entropy} \upphi(s) = s\log s - s + 1, \qquad s\geq 0. \end{equation} Finally, we state our assumptions on the dissipation. \begin{Assumptions}{$\mathscr R^*\Psi\upalpha$} \label{ass:Psi} We assume that the {dual} dissipation density $\Psi^*$ satisfies \begin{equation} \left.\begin{gathered} \label{Psi1} \Psi^*: \mathbb{R} \to [0,{+\infty}) \text{ is convex, differentiable, even, with $\Psi^*(0)=0$, and} \\ \lim_{|\xi|\to\infty} \frac{\Psi^*(\xi)}{|\xi|} ={+\infty}\, . \end{gathered}\quad\right\} \end{equation} The flux density map $\upalpha: [0,{+\infty}) \times [0,{+\infty}) \to [0,{+\infty})$, with $\upalpha\not\equiv0$, is continuous, concave, and symmetric: \begin{equation} \label{alpha-symm} \upalpha(u_1,u_2) = \upalpha(u_2,u_1) \quad\text{ for all } u_1,\, u_2 \in [0,{+\infty}), \end{equation} and its recession function $\upalpha^\infty$ vanishes on the boundary of $\mathbb{R}_+^2$: \begin{equation} \label{alpha-0} \text{for every }(u_1,u_2)\in \mathbb{R}^2_+:\quad u_1u_2=0\quad\Longrightarrow\quad \upalpha^\infty(u_1,u_2) = 0. \end{equation} \end{Assumptions} Note that since $\upalpha$ is nonnegative, concave, and not identically $0$, it cannot vanish in the interior of $\mathbb{R}^2_+$, i.e.~ \begin{equation} \label{eq:38} u_1u_2>0\quad\Rightarrow\quad \upalpha(u_1,u_2)>0. \end{equation} The examples that we gave in the introduction of the cosh-type dissipation~\eqref{choice:cosh} and the quadratic dissipation~\eqref{choice:quadratic} both fit these assumptions; other examples are \[ \upalpha(u,v) = 1 \qquad\text{and}\qquad \upalpha(u,v) = u+v. \] In some cases we will use an additional property, namely that $\upalpha$ is positively $1$-homogeneous, i.e.\ $\upalpha(\lambda u_1,\lambda u_2) = \lambda \upalpha(u_1,u_2)$ for all $\lambda \geq0$.
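As a concrete check (a sketch using the concave conjugate \eqref{eq:4} and formula \eqref{form-4-alpha-infty} from \S\ref{subsub:convex-functionals}), the cosh-related choice $\upalpha(u,v)=\sqrt{uv}$ (cf.\ \eqref{choices:ldp} below) satisfies all the requirements above:

```latex
% Check (sketch) for \upalpha(u,v)=\sqrt{uv}: symmetry is immediate, and
% concavity is the standard concavity of the geometric mean on \mathbb{R}^2_+.
% By 1-homogeneity \upalpha^\infty=\upalpha, so \upalpha^\infty(u_1,u_2)=0
% whenever u_1u_2=0, i.e.\ \eqref{alpha-0} holds. The concave conjugate
% \eqref{eq:4} can be computed explicitly:
\[
  \upalpha_*(y_1,y_2)=
  \begin{cases}
    0 & \text{if }y_1y_2\ge\tfrac14,\\[1mm]
    -\infty & \text{otherwise,}
  \end{cases}
  \qquad\text{since}\quad y_1u+y_2v\ge 2\sqrt{y_1y_2}\,\sqrt{uv},
\]
% and minimizing y_1z_1+y_2z_2 over D(\upalpha_*)=\{y: y_1y_2\ge 1/4\}
% returns \sqrt{z_1z_2}, confirming \eqref{form-4-alpha-infty} in this case.
```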
This $1$-homogeneity is automatically satisfied under the compatibility condition \eqref{cond:heat-eq-2}, with the Boltzmann entropy function $\upphi(s) = s\log s - s + 1$. Concavity of $\upalpha$ is a natural assumption in view of the convexity of $\mathscr R$ (cf.\ Remark \ref{rem:alpha-concave} and Lemma \ref{l:alt-char-R} ahead), while $1$-homogeneity will make the definition of $\mathscr R$ independent of the choice of a reference measure. It is interesting to observe that the concavity and symmetry conditions, which one naturally assumes to ensure the aforementioned properties of $\mathscr R$, were singled out for the analogue of the function $\upalpha$ in the construction of the distance yielding the quadratic gradient structure of \cite{Maas11}. The choices for $\Psi^*$ above generate corresponding properties for the Legendre dual $\Psi$: \begin{lemma} \label{l:props:Psi} Under Assumption~\ref{ass:Psi}, the function $\Psi: \mathbb{R}\to\mathbb{R}$ is even and satisfies \begin{subequations} \label{props:Psi} \begin{gather} \label{basic-props-Psi} 0=\Psi(0) < \Psi(s) < {+\infty} \text{ for all }s\in \mathbb{R}\setminus \{0\},\\ \Psi\text{ is strictly convex, strictly increasing on $[0,{+\infty})$, and superlinear.} \label{eq:propsPsi:Psi-strictly-convex} \end{gather} \end{subequations} \end{lemma} \begin{proof} The superlinearity of $\Psi^*$ implies that $\Psi(s)<{+\infty}$ for all $s\in \mathbb{R}$, and similarly the finiteness of $\Psi^*$ on $\mathbb{R}$ implies that $\Psi$ is superlinear.
Since $\Psi^*$ is even, $\Psi$ is convex and even, and therefore $\Psi(s) \geq \Psi(0) = \sup_{\xi\in \mathbb{R}} [-\Psi^*(\xi)] = 0.$ Furthermore, since for all $p\in \mathbb{R}$, $\mathop{\rm argmin}_{s\in\mathbb{R}} (\Psi(s) -p s)= \partial \Psi^*(p)$ (see e.g.~\cite[Thm.\ 11.8]{Rockafellar-Wets98}) and $\Psi^*$ is differentiable at every $p$, we conclude that $\mathop{\rm argmin}_s (\Psi(s)-ps ) = \{(\Psi^*)'(p)\}$; therefore each point of the graph of $\Psi$ is an exposed point. It follows that $\Psi$ is strictly convex, and $\Psi(s)>0$ for all $s\not=0$. \end{proof} As described in the introduction, we use $\Psi$, $\Psi^*$, and $\upalpha$ to define the dual pair of dissipation potentials $\mathscr R$ and~$\mathscr R^*$, which for a pair of measures $\rho=u\pi\in {\mathcal M}^+(V)$ and ${\boldsymbol j}\in {\mathcal M}(E)$ are formally given by \begin{equation}\label{eq:dissipation-pair} \mathscr R(\rho,{\boldsymbol j} ) := \frac12 \int_{E} \Psi\left(2 \frac{\mathrm{d} {\boldsymbol j} }{\mathrm{d} \boldsymbol\upnu_\rho}\right)\mathrm{d}\boldsymbol\upnu_\rho, \qquad \mathscr R^*(\rho,\xi) := \frac12\int_{E} \Psi^*(\xi) \, \mathrm{d}\boldsymbol\upnu_\rho, \end{equation} with \begin{equation} \label{def:nu_rho} \boldsymbol\upnu_\rho(\mathrm{d} x\,\mathrm{d} y) := \upalpha\big(u(x),u(y)\big)\,{\boldsymbol\vartheta}_\pi(\mathrm{d} x\,\mathrm{d} y) = \upalpha\big(u(x),u(y)\big)\,\pi(\mathrm{d} x)\kappa(x,\mathrm{d} y). \end{equation} This expression for the edge measure $\boldsymbol\upnu_\rho$ is also implicitly present in the structure built in \cite{Maas11,Erbar14}. The above definitions are made rigorous in Definition~\ref{def:R-rigorous} and in \eqref{eq:27} below. The three sets of conditions above, Assumptions~\ref{ass:V-and-kappa}, \ref{ass:S}, and~\ref{ass:Psi}, are the main assumptions of this paper.
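For instance, for the cosh-type choice $\Psi^*(\xi)=4\bigl(\cosh(\xi/2)-1\bigr)$ (cf.\ \eqref{choices:ldp} below) the Legendre dual can be written in closed form; the following standard computation, sketched here only as an illustration of Lemma~\ref{l:props:Psi}, is not taken from the text:

```latex
% Legendre dual of \Psi^*(\xi)=4(\cosh(\xi/2)-1): the supremum in
% \Psi(s)=\sup_{\xi}(s\xi-\Psi^*(\xi)) is attained where s=2\sinh(\xi/2),
% i.e. \xi=2\,\operatorname{arcsinh}(s/2), which gives
\[
  \Psi(s)=2s\operatorname{arcsinh}\Bigl(\frac s2\Bigr)-2\sqrt{s^2+4}+4 .
\]
% One checks directly that this \Psi is even, superlinear, strictly convex,
% and vanishes only at s=0, in accordance with \eqref{props:Psi}.
```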
Under these assumptions, the evolution equation~\eqref{eq:GF-intro} may be linear or nonlinear in $\rho$. The equation coincides with the Forward Kolmogorov equation~\eqref{eq:fokker-planck} if and only if condition~\eqref{cond:heat-eq-2} is satisfied, as shown below. \subsubsection*{Calculation for \eqref{cond:heat-eq-2}.} Let us call $\mathscr Q[\rho]$ the right-hand side of \eqref{eq:GF-intro} and let us compute \[ \langle \mathscr Q[\rho], \varphi \rangle =\bigl \langle -\odiv \Bigl[ {\mathrm D}_\xi\mathscr R^*\Bigl(\rho,- \dnabla\upphi'\Bigl(\frac{\mathrm{d} \rho}{\mathrm{d}\pi}\Bigr)\Bigr)\Bigr], \varphi \bigr\rangle \] for every $ \varphi \in \mathrm{B}_{\mathrm b}(V) $ and $\rho \in {\mathcal M}^+(V)$ with $\rho \ll \pi$. With $u = \frac{\mathrm{d} \rho}{\mathrm{d} \pi} $ we thus obtain \begin{align} \notag \langle \mathscr Q[\rho], \varphi \rangle&= \bigl\langle {\mathrm D}_\xi\mathscr R^*\bigl(\rho,- \dnabla\upphi'(u) \bigr),\dnabla \varphi \bigr \rangle \\ & = \label{almost-therepre} \frac12 \iint_{E} \big(\Psi^*\big)' \left( - \dnabla\upphi'(u)(x,y) \right) \dnabla \varphi (x,y)\, \boldsymbol\upnu_\rho(\mathrm{d} x,\mathrm{d} y)\,.
\end{align} Recalling the definitions~\eqref{def:nu_rho} of $\boldsymbol\upnu_\rho$ and~\eqref{eq:184} of $\rmF$, \eqref{almost-therepre} thus becomes \begin{align} \notag \langle \mathscr Q[\rho], \varphi \rangle&= \frac 12\iint_{E} \bigl(\Psi^*\bigr)' \bigl( \upphi'(u(x)) - \upphi'(u(y)) \bigr) \,\dnabla \varphi(x,y) \,\upalpha(u(x),u(y))\, {\boldsymbol\vartheta}_\pi(\mathrm{d} x,\mathrm{d} y) \\&= \frac12 \iint_{E} \rmF(u(x),u(y)) \big(\varphi(x)-\varphi(y)\big)\,{\boldsymbol\vartheta}_\pi(\mathrm{d} x,\mathrm{d} y)\label{eq:187} \\& \stackrel{(*)}{=} \notag \iint_{E} \rmF(u(x),u(y))\, \varphi(x)\,{\boldsymbol\vartheta}_\pi(\mathrm{d} x,\mathrm{d} y)= \int_V \varphi(x) \Big(\int_V \rmF(u(x),u(y))\kappa(x,\mathrm{d} y)\Big)\pi(\mathrm{d} x), \end{align} where for $(*)$ we used the symmetry of ${\boldsymbol\vartheta}_\pi$ (i.e.\ the detailed-balance condition). This calculation justifies \eqref{eq:180}. In the linear case of \eqref{eq:fokker-planck} it is immediate to see that \begin{align} \langle Q^*\rho,\varphi\rangle\notag = \langle \rho,Q\varphi\rangle &=\iint_{E} [\varphi(y)-\varphi(x)]\,\kappa(x,\mathrm{d} y)\rho(\mathrm{d} x ) \\&= \frac12 \iint_{E} \dnabla \varphi(x,y) \bigl[\kappa(x,\mathrm{d} y)\rho(\mathrm{d} x) - \kappa(y,\mathrm{d} x)\rho(\mathrm{d} y)\bigr] \notag \\\label{for-later} & =\frac12 \iint_{E} \dnabla \varphi(x,y) \bigl[u(x) -u(y)\bigr] {\boldsymbol\vartheta}_\pi(\mathrm{d} x,\mathrm{d} y). \end{align} Comparing \eqref{for-later} and \eqref{eq:187} we obtain that $\rmF$ has to fulfill \eqref{cond:heat-eq-2}. \subsection{Derivation of the cosh-structure from large deviations} \label{ss:ldp-derivation} We mentioned in the introduction that the choices \begin{equation} \label{choices:ldp} \upphi(s) = s\log s - s + 1,\qquad \Psi^*(\xi) = 4\bigl(\cosh(\xi/2)-1\bigr), \qquad\text{and}\qquad \upalpha(u,v) = \sqrt{uv} \end{equation} arise in the context of large deviations.
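As a consistency check (a sketch; it presumes, as in \eqref{eq:187}, that $\rmF(u,v)=(\Psi^*)'\bigl(\upphi'(u)-\upphi'(v)\bigr)\,\upalpha(u,v)$), the choices \eqref{choices:ldp} do satisfy the compatibility condition \eqref{cond:heat-eq-2}:

```latex
% With \upphi'(u)=\log u, (\Psi^*)'(\xi)=2\sinh(\xi/2), \upalpha(u,v)=\sqrt{uv}:
\[
  \rmF(u,v)
  =2\sinh\Bigl(\tfrac12\log\frac uv\Bigr)\sqrt{uv}
  =\Bigl(\sqrt{\tfrac uv}-\sqrt{\tfrac vu}\Bigr)\sqrt{uv}
  =u-v
  \qquad\text{for }u,v>0,
\]
% which is exactly the linear flux appearing in \eqref{for-later}.
```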
In this section we describe this context. Throughout this section we work under Assumptions~\ref{ass:V-and-kappa}, \ref{ass:S}, and~\ref{ass:Psi}, and since we are interested in the choices above, we will also assume~\eqref{choices:ldp}, implying that \[ \boldsymbol\upnu_\rho(\mathrm{d} x\,\mathrm{d} y) = \sqrt{u(x)u(y)}\, \pi(\mathrm{d} x)\kappa(x,\mathrm{d} y), \qquad \text{if }\rho = u\pi \ll \pi. \] Consider a sequence of independent and identically distributed stochastic processes $X^i$, $i=1, 2, \dots$ on $V$, each described by the jump kernel $\kappa$, or equivalently by the generator $Q$ in~\eqref{eq:def:generator}. With probability one, a realization of each process has a countable number of jumps in the time interval $[0,{+\infty})$, and we can assume that $X^i$ is a c\`adl\`ag function of time. We next define the empirical measure $\rho^n$ and the empirical flux ${\boldsymbol j} ^n$ by \begin{align*} &\rho^n: [0,T]\to {\mathcal M}^+(V), &\rho^n_t &:= \frac1n \sum_{i=1}^n \delta_{X^i_t}, \\ &{\boldsymbol j}^n\in {\mathcal M}^+((0,T)\times E), &\qquad {\boldsymbol j}^n(\mathrm{d} t\,\mathrm{d} x\,\mathrm{d} y) &:= \frac1n \sum_{i=1}^n \sum_{k=1}^\infty \delta_{t^i_k}(\mathrm{d} t) \delta_{(X^i_{t-},X^i_{t})}(\mathrm{d} x\,\mathrm{d} y), \end{align*} where $t^i_k$ is the $k^{\mathrm{th}}$ jump time of $X^i$ and $X^i_{t-}$ is the left limit (pre-jump state) of $X^i$ at time~$t$. Equivalently, ${\boldsymbol j}^n$ is defined by \[ \langle {\boldsymbol j}^n, \varphi\rangle := \frac1n \sum_{i=1}^n \sum_{k=1}^\infty \varphi\bigl(t_k^i,X^i_{t_k^i-},X^i_{t_k^i}\bigr), \qquad \text{for }\varphi\in \mathrm{C}_{\mathrm b}([0,T]\times E).
\]
A standard application of Sanov's theorem yields a large-deviation characterization of the pair $(\rho^n,{\boldsymbol j}^n)$ in terms of two rate functions $I_0$ and $I$,
\[
\mathrm{Prob}\bigl((\rho^n,{\boldsymbol j}^n)\approx (\rho,{\boldsymbol j} )\bigr) \sim \exp\Bigl[ -n \bigl(I_0(\rho_0) + I(\rho,{\boldsymbol j} )\bigr)\Bigr], \qquad\text{as }n\to\infty.
\]
The rate function $I_0$ describes the large deviations of the initial datum $\rho^n_0$; this functional is determined by the choice of the initial data $X^i_0$ and is independent of the stochastic process itself, and we therefore disregard it here. The functional $I$ characterizes the large-deviation properties of the dynamics of the pair $(\rho^n,{\boldsymbol j}^n)$ conditional on the initial state, and has the expression
\begin{equation}
\label{eq:def:I}
I(\rho,{\boldsymbol j} ) = \int_0^T \mathscr F_\upeta({\boldsymbol j}_t | {\boldsymbol\vartheta}_{\rho_t}^-)\, \mathrm{d} t.
\end{equation}
In this expression we write ${\boldsymbol\vartheta}_{\rho_t}^-$ for the measure $\rho_t(\mathrm{d} x)\kappa(x,\mathrm{d} y)\in {\mathcal M}(E)$ (see also~\eqref{def:teta} ahead). The function $\upeta$ is the Boltzmann entropy function that we have seen above,
\[
\upeta(s) := s\log s - s+ 1,\qquad\text{for }s\geq0,
\]
and the functional $\mathscr F_\upeta:{\mathcal M}^+(E)\times {\mathcal M}^+(E) \to [0,\infty]$ is given by~\eqref{def:F-F}.
Even though the function $\upeta$ coincides in this section with $\upphi$, we choose a different notation to emphasize that the roles of $\upphi$ and $\upeta$ are different: the function $\upphi$ defines the entropy of the system, which is related to the large deviations of the empirical measures $\rho^n$ in equilibrium (see~\cite{MielkePeletierRenger14}); the function $\upeta$ characterizes the large deviations of the time courses of $\rho^n$ and ${\boldsymbol j}^n$.
\begin{remark}
Sanov's theorem can be found in many references on large deviations (e.g.~\cite[Sec.~6.2]{DemboZeitouni98}); the derivation of the expression~\eqref{eq:def:I} is fairly well known and can be found in e.g.~\cite[Eq.~(8)]{MaesNetocny08} or \cite[App.~A]{KaiserJackZimmer18}. Instead of proving~\eqref{eq:def:I} we give an interpretation of the expression~\eqref{eq:def:I} and the function $\upeta$ in terms of exponential clocks. An exponential clock with rate parameter $r$ has large-deviation behaviour given by $r\upeta(\cdot/r)$ (see~\cite[Exercise 5.2.12]{DemboZeitouni98} or~\cite[Th.~1.5]{Moerters10}) in the following sense: for each $t>0$,
\[
\mathrm{Prob}\bigl( \text{ $\approx \beta nt$ firings in time $nt$ }\bigr) \sim \exp \Bigl[ -n tr\,\upeta(\beta/r)\Bigr]\qquad\text{as }n\to\infty.
\]
The expression~\eqref{eq:def:I} generalizes this to a field of exponential clocks, one for each edge~$(x,y)$. In this case, the rescaled rate parameter $r$ for the clock at edge $(x,y)$ is equal to $\rho_t(\mathrm{d} x)\kappa(x,\mathrm{d} y)$, since it is proportional to the number of particles $n\rho_t(\mathrm{d} x)$ at $x$ and to the jump rate $\kappa(x,\mathrm{d} y)$ from $x$ to $y$. The flux $n{\boldsymbol j}_t(\mathrm{d} x\,\mathrm{d} y)$ is the observed number of jumps from $x$ to $y$, corresponding to firings of the clock associated with the edge $(x,y)$.
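For the reader's convenience we sketch the elementary computation behind this scaling. The number $N$ of firings in the time interval $[0,nt]$ of a clock with rate $r$ is Poisson distributed with mean $ntr$, so that for $k=\beta nt$, using Stirling's approximation $\log k!=k\log k-k+o(k)$,
\[
\frac1n\log \mathrm{Prob}(N=k) = \frac1n\Bigl(k\log (ntr)-ntr-\log k!\Bigr)
= -t\Bigl(\beta\log\frac\beta r-\beta+r\Bigr)+o(1) = -tr\,\upeta(\beta/r)+o(1).
\]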
In this way, the functional $I$ in~\eqref{eq:def:I} can be interpreted as characterizing the large-deviation fluctuations in the clock-firings for each edge $(x,y)\in E$.
\end{remark}
The expression~\eqref{eq:def:I} leads to the functional $\mathscr L$ in~\eqref{eq:def:mathscr-L} after a symmetry reduction, which we now describe (see also~\cite[App.~A]{KaiserJackZimmer18}). Assuming that we are more interested in the fluctuation properties of $\rho$ than in those of ${\boldsymbol j}$, we may decide to minimize $I(\rho,{\boldsymbol j} )$ over a class of fluxes ${\boldsymbol j} $ for a fixed choice of $\rho$. Here we choose to minimize over the class of fluxes with the same skew-symmetric part,
\[
A_{\boldsymbol j} := \bigl\{{\boldsymbol j}'\in {\mathcal M}([0,T]\times E): {\boldsymbol j}' - \symmap{\boldsymbol j}' = {\boldsymbol j} - \symmap {\boldsymbol j} \bigr\}.
\]
By the form~\eqref{eq:ct-eq-intro} of the continuity equation and the definition~\eqref{eq:def:div} of the divergence we have $\odiv {\boldsymbol j}' = \odiv {\boldsymbol j} $ for all ${\boldsymbol j}'\in A_j$, so that replacing ${\boldsymbol j} $ by ${\boldsymbol j}'$ preserves the continuity equation.
\begin{Flemma}
The minimum of $I(\rho,{\boldsymbol j}'\,)$ over all ${\boldsymbol j}'\in A_j$ is achieved for the `skew-sym\-me\-tri\-za\-tion' ${\boldsymbol j}^\flat = \tfrac12({\boldsymbol j} -\symmap {\boldsymbol j} )$, and for ${\boldsymbol j}^\flat$ the result equals $\tfrac12\mathscr L$:
\begin{equation}
\label{eq:infI=infL}
\inf_{{\boldsymbol j}'\,\in A_j} I(\rho,{\boldsymbol j}'\,) = \inf_{{\boldsymbol j}'\in A_j} \tfrac12\mathscr L(\rho,{\boldsymbol j}'\,) = \tfrac12\mathscr L(\rho,{\boldsymbol j}^\flat).
\end{equation}
Consequently, for a given curve $\rho:[0,T]\to{\mathcal M}^+(V)$,
\begin{align*}
\inf_j \Bigl\{ I(\rho,{\boldsymbol j} ) : \partial_t \rho + \odiv {\boldsymbol j} = 0\Bigr\} &= \inf_j \Bigl\{ \tfrac12\mathscr L(\rho,{\boldsymbol j} ) : \partial_t \rho + \odiv {\boldsymbol j} = 0\Bigr\}, \\
\noalign{\noindent and in this final expression the flux can be assumed to be skew-symmetric:}
&=\inf_j \Bigl\{ \tfrac12\mathscr L \bigl(\rho,{\boldsymbol j} \bigr): \partial_t \rho + \odiv {\boldsymbol j} = 0 \text{ and } \symmap {\boldsymbol j} = - {\boldsymbol j} \Bigr\}.
\end{align*}
\end{Flemma}
This implies that the two functionals $I$ and $\mathscr L$ can be considered to be the same, if one is only interested in $\rho$ and not in ${\boldsymbol j} $. By the Contraction Principle (e.g.~\cite[Sec.~4.2.1]{DemboZeitouni98}) the functional $\rho \mapsto \inf_j I(\rho,{\boldsymbol j} ) = \inf_j \tfrac12\mathscr L(\rho,{\boldsymbol j} )$ can also be viewed as the large-deviation rate function of the sequence of empirical measures $\rho^n$. The above lemma is only formal because we have not given a rigorous definition of the functional~$\mathscr L$. While it would be possible to do so, using the construction of Lemma~\ref{l:lsc-general} and the arguments of the proof below, the rest of this paper in fact treats this question in greater detail. Moreover, at this stage the lemma only serves to explain why we consider this specific class of functionals~$\mathscr L$. Therefore we only give heuristic arguments here.
\begin{proof}
We assume throughout this (formal) proof that all measures are absolutely continuous, strictly positive, and finite where necessary.
Note that writing $\rho_t = u_t \pi$ we have ${\boldsymbol\vartheta}_{\rho_t}^-(\mathrm{d} x\,\mathrm{d} y) = u_t(x) {\boldsymbol\vartheta}(\mathrm{d} x\,\mathrm{d} y)$, and using~\eqref{choices:ldp} we therefore have
\begin{gather*}
\sqrt{{\boldsymbol\vartheta}_{\rho_t}^- \, \symmap{\boldsymbol\vartheta}_{\rho_t}^-}\; (\mathrm{d} x\,\mathrm{d} y) = \sqrt{u_t(x)u_t(y)}\;{\boldsymbol\vartheta}_\pi(\mathrm{d} x\,\mathrm{d} y) = \boldsymbol\upnu_{\rho_t}(\mathrm{d} x\,\mathrm{d} y), \qquad\text{and}\\
\log \frac{\mathrm{d} \symmap {\boldsymbol\vartheta}_{\rho_t}^-}{\mathrm{d} {\boldsymbol\vartheta}_{\rho_t}^-} (x,y) = \log \frac {u_t(y)}{u_t(x)} = \dnabla \upphi'(u_t)(x,y).
\end{gather*}
For the length of this proof we write $\hat \upeta$ for the perspective function corresponding to $\upeta$ (see \eqref{eq:76} in Lemma~\ref{l:lsc-general})
\[
\hat{\upeta}(a,b) := \begin{cases} a\log \dfrac ab - a + b & \text{if $a,b>0$,}\\ b &\text{if }a = 0,\\ +\infty &\text{if $a>0$, $b=0$.} \end{cases}
\]
We now rewrite $\inf_{{\boldsymbol j}'\,\in A_j} I(\rho,{\boldsymbol j}'\,)$ as
\begin{align*}
\inf_{{\boldsymbol j}' \,\in A_j} &\int_0^T \iint_E \upeta\biggl( \frac{\mathrm{d} {\boldsymbol j}'_t}{\mathrm{d} {\boldsymbol\vartheta}_{\rho_t}^-}\biggr)\, \mathrm{d} {\boldsymbol\vartheta}_{\rho_t}^- \mathrm{d} t = \inf_{{\boldsymbol j}' \,\in A_j} \int_0^T \iint_E u_t\, \upeta\biggl( \frac1 {u_t} \frac{\mathrm{d} {\boldsymbol j}'_t}{\mathrm{d} {\boldsymbol\vartheta}}\biggr) \,\mathrm{d} {\boldsymbol\vartheta}\, \mathrm{d} t\\
&=\inf_{{\boldsymbol j}' = \zeta {\boldsymbol\vartheta} \,\in A_j} \int_0^T \iint_E \hat \upeta\bigl( \zeta_t(x,y), u_t(x)\bigr) {\boldsymbol\vartheta}(\mathrm{d} x,\mathrm{d} y)\,\mathrm{d} t \\
&= \frac12 \inf_{{\boldsymbol j}' = \zeta {\boldsymbol\vartheta}_\pi \,\in A_j} \int_0^T \iint_E \Bigl\{\hat \upeta\bigl( \zeta_t(x,y),
u_t(x)\bigr) + \hat \upeta\bigl( \zeta_t(y,x), u_t(y)\bigr) \Bigr\}\,{\boldsymbol\vartheta}(\mathrm{d} x,\mathrm{d} y)\,\mathrm{d} t .
\end{align*}
Since $\zeta(x,y) - \zeta(y,x) = \mathrm{d} ({\boldsymbol j}' - \symmap {\boldsymbol j}'\,)/\mathrm{d}{\boldsymbol\vartheta}_\pi$ is fixed for ${\boldsymbol j}'\in A_j$, we are led by the integrand above to set
\[
\psi:\mathbb{R}\times[0,{+\infty})^2 \to [0,{+\infty}], \qquad \psi(s\,; c,d) := \inf_{a,b\geq0} \Bigl\{ \bigl[\hat \upeta(a,c) + \hat \upeta(b,d)\bigr] : a-b = 2s\Bigr\},
\]
for which a calculation gives the explicit formula (for $c,d>0$)
\[
\psi(s\,; c,d) = \frac{\sqrt{cd}}2\; \biggl\{\Psi\biggl( \frac { 2s}{\sqrt{cd}}\biggr) + \Psi^*\biggl( - \log \frac dc\biggr) \biggr\} + s \; \log \frac dc,
\]
in terms of the function $\Psi^*(\xi) = 4\bigl(\cosh(\xi/2) - 1\bigr)$ and its Legendre dual $\Psi$. This minimization corresponds to minimizing over all fluxes for which the `net flux' ${\boldsymbol j} - \symmap {\boldsymbol j} = 2 \tj $ is the same; see e.g.~\cite{Renger18,KaiserJackZimmer18} for discussions. Let $\cw(x,y) := w(x,y) - w(y,x) = \frac{\mathrm{d}(2\tj)}{\mathrm{d} {\boldsymbol\vartheta}_\pi}$ and $\upalpha_t(x,y) := \sqrt{u_t(x)u_t(y)}$.
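The explicit formula for $\psi$ can be verified as follows. For $c,d>0$ the infimum defining $\psi$ is attained at an interior point, where the stationarity conditions give $\log(a/c)=-\log(b/d)=:\lambda$, hence $ab=cd$; combined with the constraint $a-b=2s$ this yields
\[
a=s+\sqrt{s^2+cd},\qquad b=-s+\sqrt{s^2+cd},\qquad \lambda=\operatorname{arcsinh}\Bigl(\frac{s}{\sqrt{cd}}\Bigr)+\frac12\log\frac dc,
\]
and the minimal value $2s\lambda-2\sqrt{s^2+cd}+c+d$. Inserting the explicit Legendre dual
\[
\Psi(s)=2s\operatorname{arcsinh}(s/2)-2\sqrt{s^2+4}+4
\]
of $\Psi^*$, one checks that this value coincides with the formula for $\psi(s\,;c,d)$ above.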
We find
\begin{align}
\inf_{{\boldsymbol j}'\, \in A_j} &\int_0^T \mathscr F_\upeta\bigl( {\boldsymbol j}'_t | {\boldsymbol\vartheta}_{\rho_t}^-\bigr) \,\mathrm{d} t \notag \\
&= \frac12 \int_0^T \iint_E \psi\bigl( \tfrac12 \cw_t(x,y) \,;\, u_t(x),u_t(y)\bigr)\, {\boldsymbol\vartheta}_\pi(\mathrm{d} x,\mathrm{d} y)\, \mathrm{d} t \notag\\
&=\frac 12 \int_0^T \iint_E \biggl\{ \frac{\upalpha_t}2 \Psi\biggl(\frac{ \cw_t}{\upalpha_t}\biggr) + \frac{\upalpha_t}2 \Psi^*\biggl(- \dnabla \upphi'(u_t)\biggr) + \frac12 \cw_t\, \dnabla \upphi'(u_t) \biggr\}\,\mathrm{d}{\boldsymbol\vartheta}\,\mathrm{d} t \notag\\
&= \frac12 \int_0^T \iint_E \frac12 \biggl\{ \Psi\biggl(\frac{2\,\mathrm{d}\tj_t}{\mathrm{d} \boldsymbol\upnu_{\rho_t}}\biggr) + \Psi^*\biggl(- \dnabla \upphi'(u_t)\biggr)\biggr\}\,\mathrm{d}\boldsymbol\upnu_{\rho_t} \, \mathrm{d} t + \frac12 \mathscr E(\rho_T) - \frac12 \mathscr E(\rho_0). \label{eq:formal-motivation-L}
\end{align}
In the last identity we used the fact that, since $\odiv \tj_t = -\partial_t \rho_t$, formally we have
\[
\int_0^T \iint_E \frac12 \cw_t\, \dnabla \upphi'(u_t) \,\mathrm{d}{\boldsymbol\vartheta}\,\mathrm{d} t = \int_0^T \iint_E \dnabla \upphi'(u_t) \, \mathrm{d}\tj_t \,\mathrm{d} t = \int_0^T \langle \upphi'(u_t) ,\partial_t \rho_t \rangle\,\mathrm{d} t = \mathscr E(\rho_T)- \mathscr E(\rho_0).
\]
The expression on the right-hand side of~\eqref{eq:formal-motivation-L} is one half times the functional $\mathscr L$ defined in~\eqref{eq:def:mathscr-L} (see also~\eqref{ineq:deriv-GF}). This proves that
\[
\inf_{{\boldsymbol j}'\,\in A_j} I(\rho,{\boldsymbol j}'\,) = \frac12 \mathscr L\bigl(\rho, \tj \,\bigr).
\]
From the convexity of $\Psi$ and the symmetry of $\boldsymbol\upnu_\rho$ we deduce that $\mathscr L(\rho, \tj) \leq \mathscr L(\rho,{\boldsymbol j} )$ for any ${\boldsymbol j} $; see Remark~\ref{rem:skew-symmetric}.
The identity $\mathscr L\bigl(\rho,\tj\,\bigr) = \inf_{{\boldsymbol j}'\,\in A_j} \mathscr L (\rho,{\boldsymbol j}'\,)$ then follows immediately; this proves~\eqref{eq:infI=infL}. To prove the second part of the lemma, we write
\begin{align*}
\inf_j \Bigl\{ I(\rho,{\boldsymbol j} ) : \partial_t \rho + \odiv {\boldsymbol j} = 0\Bigr\} &= \inf_j \Bigl\{ \Bigl[\,\inf _{{\boldsymbol j}'\in A_j} I(\rho,{\boldsymbol j}'\,) \Bigr]: \partial_t \rho + \odiv {\boldsymbol j} = 0\Bigr\},\\
&= \inf_j \Bigl\{ \Bigl[\,\inf _{{\boldsymbol j}'\in A_j} \tfrac12 \mathscr L(\rho,{\boldsymbol j}'\,) \Bigr]: \partial_t \rho + \odiv {\boldsymbol j} = 0\Bigr\},\\
&= \inf_j \Bigl\{ \tfrac12\mathscr L(\rho, \tj ) : \partial_t \rho + \odiv {\boldsymbol j} = 0\Bigr\},\\
&= \inf_j \Bigl\{ \tfrac12\mathscr L(\rho, \tj ) : \partial_t \rho + \odiv \tj = 0\Bigr\}.
\end{align*}
This concludes the proof.
\end{proof}
\section{Curves in \texorpdfstring{${\mathcal M}^+(V)$}{M+(V)}}
A major challenge in any rigorous treatment of an equation such as~\eqref{eq:GGF-intro-intro} is finding a way to deal with the time derivative. The Ambrosio--Gigli--Savar\'e framework for metric-space gradient systems, for instance, is organized around absolutely continuous curves. These are a natural choice because, on the one hand, this class admits a `metric velocity' that generalizes the time derivative, while on the other hand solutions are automatically absolutely continuous owing to the superlinear growth of the dissipation potential. For the systems of this paper, a similar role is played by curves along which the `action' $\int \mathscr R\,\mathrm{d} t$ is finite; we show below that the superlinearity of $\mathscr R(\rho,{\boldsymbol j} )$ in ${\boldsymbol j} $ leads to similarly beneficial properties.
In order to exploit this aspect, however, a number of intermediate steps need to be taken:
\begin{enumerate}[label=(\alph*)]
\item \label{intr:curves:1} We define the class $\mathbb{C}E0T$ of solutions $(\rho,{\boldsymbol j} )$ of the continuity equation~\eqref{eq:ct-eq-intro} (Definition~\ref{def-CE}).
\item For such solutions, $t\mapsto \rho_t$ is continuous in the total variation distance (Corollary~\ref{c:narrow-ct}).
\item We give a rigorous definition of the functional $\mathscr R$ (Definition~\ref{def:R-rigorous}), and describe its behaviour on the absolutely continuous and singular parts of $(\rho,{\boldsymbol j})$ (Lemma~\ref{l:alt-char-R} and Theorem~\ref{thm:confinement-singular-part}).
\item If the action functional $\int \mathscr R$ is finite along a solution $(\rho,{\boldsymbol j})$ of the continuity equation on $[0,T]$, then the property that $\rho_t$ is absolutely continuous with respect to~$\pi$ at some time $t\in [0,T]$ propagates to the whole interval $[0,T]$ (Corollary~\ref{cor:propagation-AC}).
\item We prove a chain rule for the derivative of convex entropies along curves of finite $\mathscr R$-action (Theorem~\ref{th:chain-rule-bound}) and derive an estimate involving $\mathscr R$ and a Fisher-information-like term (Corollary~\ref{th:chain-rule-bound2}).
\item\label{intr:curves:5} If the action $\int\mathscr R$ is uniformly bounded along a sequence $(\rho^n,{\boldsymbol j}^n)\in\mathbb{C}E0T$, then the sequence is compact in an appropriate sense (Proposition~\ref{prop:compactness}).
\end{enumerate}
Once properties~\ref{intr:curves:1}--\ref{intr:curves:5} have been established, the next step is to consider finite-action curves that also connect two given values $\mu,\nu$, leading to the definition of the Dynamical-Variational Transport (DVT) cost
\begin{equation}
\label{def-psi-rig-intro-section}
\DVT \tau\mu\nu : = \inf\left\{ \int_0^\tau \mathscr R(\rho_t,{\boldsymbol j}_t)\, \mathrm{d} t \, : \, (\rho,{\boldsymbol j} ) \in \mathbb{C}E 0\tau, \ \rho_0 = \mu, \ \rho_\tau = \nu \right\}\,.
\end{equation}
This definition is in the spirit of the celebrated Benamou--Brenier formula for the Wasserstein distance \cite{Benamou-Brenier}, generalized to a broader family of transport distances \cite{DolbeaultNazaretSavare09} and to jump processes \cite{Maas11,Erbar14}. However, a major difference with those constructions is that $\mathscr{W}$ also depends on the time variable $\tau$ and that $\DVT\tau\cdot\cdot$ is not a (power of a) distance, since $\Psi$ is not, in general, positively homogeneous of any order. Indeed, when $\mathscr R$ is $p$-homogeneous in ${\boldsymbol j} $, for $p\in (1,{+\infty})$, we have (see also the discussion at the beginning of Sec.\ \ref{ss:MM})
\begin{equation}
\label{eq:DVT=JKO}
\DVT \tau\mu\nu = \frac1{\tau^{p-1}} \DVT 1\mu\nu= \frac1{p\tau^{p-1}}d_{\mathscr R}^p(\mu,\nu),
\end{equation}
where $d_{\mathscr R}$ is an extended distance and a central object in the usual Minimizing-Movement construction. In Section~\ref{s:MM}, the DVT cost~$\mathscr{W}$ will replace the rescaled $p$-power of the distance and play a similar role for the Minimizing-Movement approach.
For the rigorous construction of $\mathscr{W}$,
\begin{enumerate}[label=(\alph*),resume]
\item we show that minimizers of~\eqref{def-psi-rig-intro-section} exist (Corollary~\ref{c:exist-minimizers});
\item we establish properties of $\mathscr{W}$ that generalize those of the metric-space version~\eqref{eq:DVT=JKO} (Theorem~\ref{thm:props-cost}).
\end{enumerate}
Finally,
\begin{enumerate}[label=(\alph*),resume]
\item we close the circle by showing that integrals of the form $\int_a^b\mathscr R$ can be reconstructed from a given functional $\mathscr{W}$ (Proposition~\ref{t:R=R}).
\end{enumerate}
Throughout this section we adopt \textbf{Assumptions~\ref{ass:V-and-kappa} and \ref{ass:Psi}}.
\subsection{The continuity equation}
\label{sec:ct-eq}
We now introduce the formulation of the continuity equation we will work with. Hereafter, for a given function $\mu :I \to {\mathcal M}(V)$, or $\mu : I \to {\mathcal M}(E)$, with $I=[a,b]\subset\mathbb{R}$, we shall often write $\mu_t$ in place of $\mu(t)$ for a given $t\in I$ and denote the time-dependent function $\mu $ by $(\mu_t)_{t\in I}$. We will write $\lambda$ for the Lebesgue measure on $I$. The following definition mimics those given in \cite[Sec.~8.1]{AmbrosioGigliSavare08} and \cite[Def.~4.2]{DNS09}.
\begin{definition}[Solutions $(\rho,{\boldsymbol j})$ of the continuity equation]
\label{def-CE}
Let $I=[a,b]$ be a closed interval of $\mathbb{R}$.
We denote by $\mathbb{C}EI I$ the set of pairs $(\rho,{\boldsymbol j})$ given by
\begin{itemize}
\item a family of time-dependent measures $\rho=(\rho_t)_{t\in I} \subset {\mathcal M}^+(V)$, and
\item a measurable family $({\boldsymbol j}_t)_{t\in I} \subset {\mathcal M}(E)$ with $\int_I |{\boldsymbol j}_t|(E)\,\mathrm{d} t <{+\infty}$, satisfying the continuity equation
\begin{equation}
\label{eq:ct-eq-def}
\dot\rho + \odiv {\boldsymbol j}=0 \quad\text{ in } I\times V,
\end{equation}
in the following sense:
\begin{equation}
\label{2ndfundthm}
\int_V \varphi\,\mathrm{d} \rho_{t_2}-\int_V \varphi\,\mathrm{d}\rho_{t_1}= \iint_{J\times E} \overline\nabla \varphi \,\mathrm{d} {\boldsymbol j}_\lambda \quad\text{for all $\varphi\in \mathrm{B}_{\mathrm b}(V)$, $J=[t_1,t_2]\subset I$},
\end{equation}
where ${\boldsymbol j}_\lambda(\mathrm{d} t,\mathrm{d} x,\mathrm{d} y):=\lambda(\mathrm{d} t){\boldsymbol j}_t(\mathrm{d} x,\mathrm{d} y)$.
\end{itemize}
Given $\rho_0,\, \rho_1 \in {\mathcal M}^+(V)$, we will use the notation
\[
\mathbb{C}EIP I{\rho_0}{\rho_1} : = \bigl\{(\rho,{\boldsymbol j} ) \in \mathbb{C}EI I\, : \ \rho(a)=\rho_0, \ \rho(b) = \rho_1\bigr\}\,.
\]
\end{definition}
\begin{remark}
\label{rem:expand}
The requirement~\eqref{2ndfundthm} shows in particular that $t\mapsto \rho_t$ is continuous with respect to the total variation metric. Choosing $\varphi\equiv1$ in~\eqref{2ndfundthm}, one immediately finds that
\begin{equation}
\label{eq:91}
\text{the total mass }\rho_t(V)\text{ is constant in $I$}.
\end{equation}
By the disintegration theorem, it is equivalent to assign the measurable family $({\boldsymbol j}_t)_{t\in I}$ in ${\mathcal M}(E)$ or the measure ${\boldsymbol j}_\lambda$ in ${\mathcal M}(I\times E)$.
\end{remark}
We can in fact prove a more refined property. The proof of the corollary below is postponed to Appendix \ref{appendix:proofs}.
\begin{cor}
\label{c:narrow-ct}
If $(\rho,{\boldsymbol j} )\in\mathbb{C}E0T$, then there exist a \emph{common} dominating measure $\gamma\in {\mathcal M}^+(V)$ (i.e., $\rho_t \ll \gamma$ for all $t\in [0,T]$) and an absolutely continuous map $\tilde u:[0,T]\to L^1(V,\gamma)$ such that $\rho_t=\tilde u_t\gamma$ for every $t\in [0,T]$.
\end{cor}
The interpretation of the continuity equation in Definition \ref{def-CE}---in duality with all bounded measurable functions---is quite strong, and in particular much stronger than the more common continuity in duality with \emph{continuous} and bounded functions. However, this continuity equation can be recovered starting from a much weaker formulation. The following result illustrates this; it is a translation of \cite[Lemma 8.1.2]{AmbrosioGigliSavare08} (cf.\ also \cite[Lemma 4.1]{DNS09}) to the present setting. The proof adapts the argument for \cite[Lemma 8.1.2]{AmbrosioGigliSavare08} and is given in Appendix~\ref{appendix:proofs}.
\begin{lemma}[Continuous representative]
\label{l:cont-repr}
Let $(\rho_t)_{t\in I} \subset {\mathcal M}^+(V)$ and $({\boldsymbol j}_t)_{t\in I}$ be measurable families that are integrable with respect to~$\lambda$, and let $\tau$ be any separable and metrizable topology inducing $\mathfrak B$.
If
\begin{equation}
-\int_a^b \eta'(t) \left( \int_V \zeta(x) \rho_t (\mathrm{d} x ) \right) \mathrm{d} t = \int_a^b \eta(t)\Big(\iint_E \overline\nabla\zeta(x,y)\, {\boldsymbol j}_t(\mathrm{d} x\,\mathrm{d} y)\Big)\,\mathrm{d} t \,,\label{eq:90}
\end{equation}
holds for every $\eta \in \mathrm{C}_\mathrm{c}^\infty((a,b))$ and $\zeta \in \mathbb{C}b (V,\tau)$, then there exists a unique curve $I \ni t \mapsto \tilde{\rho}_t \in {\mathcal M}^+ (V)$ such that $\tilde{\rho}_t = \rho_t$ for $\lambda$-a.e.\ $t\in I$. The curve $\tilde\rho$ is continuous in the total-variation norm with the estimate
\begin{equation}
\label{est:ct-eq-TV}
\|\tilde \rho_{t_2}-\tilde \rho_{t_1}\|_{TV} \leq 2 \int_{t_1}^{t_2} |{\boldsymbol j}_t|(E)\, \mathrm{d} t \qquad \text{ for all } t_1 \leq t_2,
\end{equation}
and satisfies
\begin{equation}
\label{maybe-useful}
\int_V \varphi(t_2,\cdot) \,\mathrm{d}\tilde\rho_{t_2} - \int_V \varphi(t_1,\cdot) \,\mathrm{d}\tilde\rho_{t_1} = \int_{t_1}^{t_2} \int_V \partial_t \varphi \,\mathrm{d}\tilde\rho_t\,\mathrm{d} t + \int_{J\times E} \overline\nabla \varphi \,\mathrm{d} {\boldsymbol j}_\lambda
\end{equation}
for all $\varphi \in \mathrm{C}^1(I;\mathrm{B}_{\mathrm b}(V))$ and $J=[t_1,t_2]\subset I$.
\end{lemma}
\begin{remark}
In \eqref{2ndfundthm} we can always replace ${\boldsymbol j} $ with the positive measure ${\boldsymbol j} ^+:=({\boldsymbol j} -\symmap {\boldsymbol j} )_+ = (2\tj)_+$, since $\odiv {\boldsymbol j} = \odiv {\boldsymbol j}^+$ (see Lemma~\ref{le:A1}); therefore we can assume without loss of generality that ${\boldsymbol j} $ is a positive measure.
\end{remark}
As another immediate consequence of \eqref{2ndfundthm}, the concatenation of two solutions of the continuity equation is again a solution; the result below also contains a statement about time rescaling of solutions, whose proof follows by a straightforward adaptation of \cite[Lemma 8.1.3]{AmbrosioGigliSavare08} and is thus omitted.
\begin{lemma}[Concatenation and time rescaling]
\label{l:concatenation&rescaling}
\begin{enumerate}
\item Let $(\rho^i,{\boldsymbol j}^i) \in \mathbb{C}E 0{T_i}$, $i=1,2$, with $\rho_{T_1}^1 = \rho_0^2$. Define $(\rho_t,{\boldsymbol j}_t)_{t\in [0,T_{1}+T_2]}$ by
\[
\rho_t: = \begin{cases} \rho_t^1 & \text{ if } t \in [0,T_1], \\ \rho_{t-T_1}^2 & \text{ if } t \in [T_1,T_1+T_2], \end{cases} \qquad \qquad {\boldsymbol j}_t: = \begin{cases} {\boldsymbol j}_t^1 & \text{ if } t \in [0,T_1], \\ {\boldsymbol j}_{t-T_1}^2 & \text{ if } t \in [T_1,T_1+T_2]\,. \end{cases}
\]
Then, $(\rho,{\boldsymbol j} ) \in \mathbb{C}E 0{T_1+T_2}$.
\item Let $\mathsf{t} : [0,\hat{T}] \to [0,T]$ be strictly increasing and absolutely continuous, with inverse $\mathsf{s}: [0,T]\to [0,\hat{T}]$. Then, $(\rho, {\boldsymbol j} ) \in \mathbb{C}E 0T$ if and only if $\hat \rho: = \rho \circ \mathsf{t}$ and $\hat {\boldsymbol j} : = \mathsf{t}' ({\boldsymbol j} {\circ} \mathsf{t})$ satisfy $(\hat \rho, \hat {\boldsymbol j} ) \in \mathbb{C}E 0{\hat T}$.
\end{enumerate}
\end{lemma}
\subsection{Definition of the dissipation potential \texorpdfstring{$\mathscr R$}R}
\label{ss:def-R}
In this section we give a rigorous definition of the dissipation potential $\mathscr R$, following the formal descriptions above. In the special case when $\rho$ and ${\boldsymbol j}$ are absolutely continuous, i.e.
\begin{equation}
\rho=u\pi\ll\pi \qquad\text{and}\qquad 2{\boldsymbol j} = w{\boldsymbol\vartheta}_\pi\ll{\boldsymbol\vartheta}_\pi,
\end{equation}
we set
\begin{equation}
\label{concentration-set}
E': = \{ (x,y) \in E\, : \upalpha(u(x),u(y))>0 \},
\end{equation}
and in this case we can define the functional $\mathscr R$ by the direct formula
\begin{equation}
\label{eq:21}
\mathscr R(\rho,{\boldsymbol j} )= \begin{cases} \displaystyle \frac12\int_{E'} \Psi\Bigl(\frac{w(x,y)}{\upalpha(u(x),u(y))}\Bigr)\upalpha(u(x),u(y))\,{\boldsymbol\vartheta}_\pi(\mathrm{d} x,\mathrm{d} y) &\text{if }|{\boldsymbol j} |(E\setminus E') =0,\\ {+\infty}&\text{if }|{\boldsymbol j} |(E\setminus E')>0. \end{cases}
\end{equation}
Recalling the definition \eqref{eq:76} of the perspective function $\hat\Psi$, we can also write \eqref{eq:21} in the equivalent and more compact form
\begin{equation}
\label{eq:92}
\mathscr R(\rho,{\boldsymbol j} )= \frac12\iint_E \hat\Psi\big(w(x,y), \upalpha(u(x),u(y)) \big)\, {\boldsymbol\vartheta}_\pi(\mathrm{d} x,\mathrm{d} y),\quad 2{\boldsymbol j}=w{\boldsymbol\vartheta}_\pi\,,
\end{equation}
so that it is natural to introduce the function $\Upsilon : [0,{+\infty})\times[0,{+\infty})\times\mathbb{R}\to[0,{+\infty}]$,
\begin{equation}
\label{Upsilon}
\Upsilon (u,v,w) := \hat\Psi(w,\upalpha(u,v)),
\end{equation}
observing that
\begin{equation}
\label{eq:22}
\mathscr R(\rho,{\boldsymbol j} )= \frac12\iint_E \Upsilon(u(x),u(y),w(x,y))\,{\boldsymbol\vartheta}_\pi(\mathrm{d} x,\mathrm{d} y)\quad\text{for } 2{\boldsymbol j}=w{\boldsymbol\vartheta}_\pi.
\end{equation}
\begin{lemma}\label{lem:Upsilon-properties}
The function $\Upsilon:[0,{+\infty})\times[0,{+\infty})\times\mathbb{R}\to[0,{+\infty}]$ defined above is convex and lower semicontinuous, with recession function
\begin{equation}
\label{eq:23}
\Upsilon^\infty(u,v,w)= \hat\Psi(w, \upalpha^\infty(u,v))= \begin{cases} \displaystyle \Psi\left( \frac{w}{\upalpha^\infty(u,v)}\right) \upalpha^\infty(u,v) & \text{ if $\upalpha^\infty(u,v)>0$,} \\ 0 &\text{ if $w=0$,} \\ {+\infty} & \text{ if $w\ne 0$ and $\upalpha^\infty(u,v)=0$.} \end{cases}
\end{equation}
For any $u,v\in [0,\infty)$ with $\upalpha^\infty(u,v)>0$, the map $w\mapsto \Upsilon(u,v,w)$ is strictly convex. If $\upalpha$ is positively $1$-homogeneous, then $\Upsilon$ is positively $1$-homogeneous as well.
\end{lemma}
\begin{proof}
Note that $\Upsilon$ may be equivalently represented in the form
\begin{equation}
\label{eq:dual-formulation-Upsilon}
\Upsilon(u,v,w) = \sup_{\xi\in\mathbb{R}} \bigl\{\xi w - \upalpha(u,v)\Psi^*(\xi)\bigr\} =: \sup_{\xi\in\mathbb{R}} f_\xi(u,v,w)\,.
\end{equation}
The convexity of $f_\xi$ for each $\xi\in\mathbb{R}$ readily follows from its linearity in $w$ and the convexity of $-\upalpha$ in $(u,v)$. Therefore, $\Upsilon$ is convex and lower semicontinuous as the pointwise supremum of a family of convex continuous functions. The characterization~\eqref{eq:23} of $\Upsilon^\infty$ follows from observing that $\Upsilon(0,0,0)=\hat\Psi(0,0)=0$ and using the $1$-homogeneity of $\hat\Psi$:
\begin{align*}
\lim_{t\to{+\infty}} t^{-1}\Upsilon(tu,tv,tw) &= \lim_{t\to{+\infty}} t^{-1} \hat\Psi\Big(tw,\upalpha( tu,tv)\Big) =\lim_{t\to{+\infty}} \hat\Psi\Big(w,t^{-1}\upalpha( tu,tv)\Big) \\&= \hat\Psi\Big(w,\upalpha^\infty(u,v)\Big)\,,
\end{align*}
where the last equality follows from the continuity of $r\mapsto \hat\Psi(w,r)$ for all $w\in \mathbb{R}$.
The strict convexity of $w\mapsto \Upsilon(u,v,w)$ for any $u,v\in [0,\infty)$ with $\upalpha^\infty(u,v)>0$ follows directly from the strict convexity of $\Psi$ (cf.\ Lemma~\ref{l:props:Psi}).
\end{proof}
The formula~\eqref{eq:22} provides a rigorous definition of $\mathscr R$ for couples of measures $(\rho,{\boldsymbol j})$ that are absolutely continuous with respect to $\pi$ and ${\boldsymbol\vartheta}_\pi$. In order to extend $\mathscr R$ to pairs $(\rho,{\boldsymbol j})$ that are not absolutely continuous, it is useful to interpret the measure
\begin{equation}
\boldsymbol\upnu_\rho(\mathrm{d} x,\mathrm{d} y):=\upalpha(u(x),u(y))\,{\boldsymbol\vartheta}_\pi(\mathrm{d} x,\mathrm{d} y)
\label{eq:26}
\end{equation}
appearing in the integral of \eqref{eq:21} in terms of a suitable concave transformation, as in \eqref{eq:6}, of two couplings generated by $\rho$. We therefore introduce the measures
\begin{equation}
\label{def:teta}
\begin{aligned}
{\boldsymbol\vartheta}_{\rho}^-(\mathrm{d} x\,\mathrm{d} y) := \rho(\mathrm{d} x)\kappa(x,\mathrm{d} y),\qquad {\boldsymbol\vartheta}_{\rho}^+(\mathrm{d} x\,\mathrm{d} y) := \rho(\mathrm{d} y)\kappa(y,\mathrm{d} x)= s_{\#}{\boldsymbol\vartheta}_\rho^-(\mathrm{d} x\,\mathrm{d} y),
\end{aligned}
\end{equation}
observing that
\begin{equation}
\label{eq:24}
\rho=u\pi\ll\pi\quad\Longrightarrow\quad {\boldsymbol\vartheta}^\pm_\rho\ll{\boldsymbol\vartheta}_\pi,\qquad \frac{\mathrm{d} {\boldsymbol\vartheta}_\rho^-}{\mathrm{d} {\boldsymbol\vartheta}_\pi}(x,y) = u(x), \quad \frac{\mathrm{d} {\boldsymbol\vartheta}_\rho^+}{\mathrm{d} {\boldsymbol\vartheta}_\pi}(x,y) = u(y).
\end{equation}
We thus obtain that \eqref{eq:26}, \eqref{eq:21} and \eqref{eq:22} can be equivalently written as
\begin{equation}
\label{eq:27}
\boldsymbol\upnu_\rho=\upalpha[{\boldsymbol\vartheta}^-_\rho,{\boldsymbol\vartheta}^+_\rho|{\boldsymbol\vartheta}_\pi],\quad \mathscr R(\rho,{\boldsymbol j} )=\frac12\mathscr F_\Psi(2{\boldsymbol j} |\boldsymbol\upnu_\rho)\,,
\end{equation}
where $\upalpha[{\boldsymbol\vartheta}^-_\rho,{\boldsymbol\vartheta}^+_\rho|{\boldsymbol\vartheta}_\pi]$ stands for $\upalpha[({\boldsymbol\vartheta}^-_\rho,{\boldsymbol\vartheta}^+_\rho)|{\boldsymbol\vartheta}_\pi]$ and the functional $\mathscr F_\Psi(\cdot | \cdot)$ is the one from \eqref{def:F-F}, and also
\begin{equation}
\label{eq:28}
\mathscr R(\rho,{\boldsymbol j} )=\frac12\mathscr F_\Upsilon({\boldsymbol\vartheta}^-_\rho,{\boldsymbol\vartheta}^+_\rho,2{\boldsymbol j} |{\boldsymbol\vartheta}_\pi)\,,
\end{equation}
again writing, for brevity, $\mathscr F_\Upsilon({\boldsymbol\vartheta}^-_\rho,{\boldsymbol\vartheta}^+_\rho,2{\boldsymbol j} |{\boldsymbol\vartheta}_\pi)$ in place of $\mathscr F_\Upsilon(({\boldsymbol\vartheta}^-_\rho,{\boldsymbol\vartheta}^+_\rho,2{\boldsymbol j}) |{\boldsymbol\vartheta}_\pi)$. Therefore we can use the same expressions \eqref{eq:27} and \eqref{eq:28} to extend the functional $\mathscr R$ to measures $\rho$ and ${\boldsymbol j} $ that need not be absolutely continuous with respect to~$\pi$ and ${\boldsymbol\vartheta}_\pi$; the next lemma shows that they provide equivalent characterizations. We introduce the functions $u^\pm:E\to\mathbb{R}$, adopting the notation
\begin{multline}
\label{eq:93}
u^-:=u\circ {\mathsf x}\quad \text{and}\quad u^+:=u\circ {\mathsf y},\\ \text{or equivalently} \quad u^-(x,y):=u(x),\quad u^+(x,y):=u(y).
\end{multline}
(Recall that ${\mathsf x}$ and ${\mathsf y}$ denote the coordinate maps from $E$ to $V$).
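As an illustration, for the cosh choices~\eqref{choices:ldp}, i.e.\ $\upalpha(u,v)=\sqrt{uv}$ and $\Psi^*(\xi)=4\bigl(\cosh(\xi/2)-1\bigr)$, the function $\Upsilon$ can be computed explicitly: for $uv>0$ the supremum in \eqref{eq:dual-formulation-Upsilon} is attained at $\xi=2\operatorname{arcsinh}\bigl(w/(2\sqrt{uv})\bigr)$ and gives
\[
\Upsilon(u,v,w) = 2w\operatorname{arcsinh}\Bigl(\frac{w}{2\sqrt{uv}}\Bigr)-2\sqrt{w^2+4uv}+4\sqrt{uv}.
\]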
\begin{lemma} \label{l:4.8} For every $\rho\in {\mathcal M}^+(V)$ and ${\boldsymbol j} \in {\mathcal M}(E)$ we have \begin{equation} \label{eq:29} \mathscr F_\Upsilon({\boldsymbol\vartheta}^-_\rho,{\boldsymbol\vartheta}^+_\rho, 2{\boldsymbol j} |{\boldsymbol\vartheta}_\pi) =\mathscr F_\Psi( 2{\boldsymbol j} |\boldsymbol\upnu_\rho). \end{equation} If $\rho=\rho^a+\rho^\perp$ and ${\boldsymbol j}={\boldsymbol j}^a+{\boldsymbol j}^\perp$ are the Lebesgue decompositions of $\rho$ and ${\boldsymbol j}$ with respect to~$\pi$ and ${\boldsymbol\vartheta}_\pi$, respectively, we have \begin{equation} \label{eq:94} \mathscr F_\Upsilon({\boldsymbol\vartheta}^-_\rho,{\boldsymbol\vartheta}^+_\rho, 2{\boldsymbol j} |{\boldsymbol\vartheta}_\pi)= \mathscr F_\Upsilon({\boldsymbol\vartheta}^-_{\rho^a},{\boldsymbol\vartheta}^+_{\rho^a}, 2{\boldsymbol j}^a |{\boldsymbol\vartheta}_\pi)+ \mathscr F_{\Upsilon^\infty}({\boldsymbol\vartheta}^-_{\rho^\perp},{\boldsymbol\vartheta}^+_{\rho^\perp}, 2{\boldsymbol j}^\perp). \end{equation} \end{lemma} \begin{proof} Let us consider the Lebesgue decomposition $\rho=\rho^a+\rho^\perp$, $\rho^a=u\pi$, and a corresponding partition of $V$ into two disjoint Borel sets $R,P$ such that $\rho^a=\rho\mres R$, $\rho^\perp=\rho\mres P$ and $\pi(P)=0$, which yields \begin{equation} \label{eq:30} {\boldsymbol\vartheta}^\pm_\rho={\boldsymbol\vartheta}^\pm_{\rho^a}+{\boldsymbol\vartheta}^\pm_{\rho^\perp},\quad {\boldsymbol\vartheta}^\pm_{\rho^a}\ll{\boldsymbol\vartheta}_\pi,\quad {\boldsymbol\vartheta}^-_{\rho^\perp}:={\boldsymbol\vartheta}^-_\rho\mres{P\times V}, \quad {\boldsymbol\vartheta}^+_{\rho^\perp}:={\boldsymbol\vartheta}^+_\rho\mres{V\times P}.
\end{equation} Since ${\boldsymbol\vartheta}_\pi(P\times V)={\boldsymbol\vartheta}_\pi(V\times P) \le \|\kappa_V\|_\infty \pi(P)=0$, the measures ${\boldsymbol\vartheta}^\pm_{\rho^\perp}$ are singular with respect to~${\boldsymbol\vartheta}_\pi$. Let us also consider the Lebesgue decomposition ${\boldsymbol j}={\boldsymbol j}^a+{\boldsymbol j}^\perp$ of ${\boldsymbol j}$ with respect to~${\boldsymbol\vartheta}_\pi$. We can select a measure ${\boldsymbol \varsigma}\in {\mathcal M}^+(E)$ such that ${\boldsymbol\vartheta}^\pm_{\rho^\perp}=z^\pm{\boldsymbol \varsigma}\ll{\boldsymbol \varsigma}$, ${\boldsymbol j}^\perp\ll{\boldsymbol \varsigma}$ and ${\boldsymbol \varsigma}\perp{\boldsymbol\vartheta}_\pi$, obtaining \begin{equation} \label{eq:34} \begin{aligned} \boldsymbol\upnu_\rho=\upalpha[{\boldsymbol\vartheta}_\rho^-,{\boldsymbol\vartheta}_\rho^+|{\boldsymbol\vartheta}_\pi]= \boldsymbol\upnu_\rho^1+\boldsymbol\upnu_\rho^2, \quad \boldsymbol\upnu_\rho^1:=\upalpha(u^-,u^+){\boldsymbol\vartheta}_\pi,\quad \boldsymbol\upnu_\rho^2:=\upalpha^\infty(z^-,z^+){\boldsymbol \varsigma}.
\end{aligned} \end{equation} Since ${\boldsymbol j} \ll {\boldsymbol\vartheta}_\pi+{\boldsymbol \varsigma}$, we can decompose \begin{equation} \label{decomp-DF} 2{\boldsymbol j} =w{\boldsymbol\vartheta}_\pi+w'{\boldsymbol \varsigma}, \end{equation} and by the additivity property \eqref{eq:81} we obtain \begin{equation} \label{heartsuit} \begin{aligned} \mathscr F_\Psi &( 2{\boldsymbol j} |\boldsymbol\upnu_\rho) = \mathscr F_{\hat \Psi}( 2{\boldsymbol j}, \boldsymbol\upnu_\rho)= \mathscr F_{\hat\Psi}(w{\boldsymbol\vartheta}_\pi,\boldsymbol\upnu_\rho^1)+ \mathscr F_{\hat\Psi}(w'{\boldsymbol \varsigma},\boldsymbol\upnu_\rho^2) \\&\stackrel{(*)}= \iint_E \Upsilon(u(x),u(y),w(x,y))\,{\boldsymbol\vartheta}_\pi(\mathrm{d} x,\mathrm{d} y)+ \iint_E \Upsilon^\infty(z^-(x,y),z^+(x,y),w'(x,y))\,{\boldsymbol \varsigma}(\mathrm{d} x,\mathrm{d} y) \\&= \mathscr F_\Upsilon({\boldsymbol\vartheta}^-_{\rho^a},{\boldsymbol\vartheta}^+_{\rho^a}, 2{\boldsymbol j}^a |{\boldsymbol\vartheta}_\pi)+ \mathscr F_{\Upsilon^\infty}({\boldsymbol\vartheta}^-_{\rho^\perp},{\boldsymbol\vartheta}^+_{\rho^\perp}, 2{\boldsymbol j}^\perp) =\mathscr F_\Upsilon({\boldsymbol\vartheta}^-_\rho,{\boldsymbol\vartheta}^+_\rho, 2{\boldsymbol j} |{\boldsymbol\vartheta}_\pi). \end{aligned} \end{equation} Indeed, identity (*) follows from the fact that, since $\hat{\Psi}$ is $1$-homogeneous, \[ \mathscr F_{\hat\Psi}(w{\boldsymbol\vartheta}_\pi,\boldsymbol\upnu_\rho^1) = \iint_{E} \hat{\Psi} \left( \frac{\mathrm{d}(w{\boldsymbol\vartheta}_\pi,\boldsymbol\upnu_\rho^1)}{\mathrm{d} \gamma}\right) \mathrm{d} \gamma \] for every $\gamma \in {\mathcal M}^+(E)$ such that $w{\boldsymbol\vartheta}_\pi \ll \gamma$ and $\boldsymbol\upnu_\rho^1 \ll \gamma$, cf.\ \eqref{eq:78}.
Then, it suffices to observe that $w{\boldsymbol\vartheta}_\pi \ll {\boldsymbol\vartheta}_\pi$ and $\boldsymbol\upnu_\rho^1 \ll {\boldsymbol\vartheta}_\pi$ with $\frac{\mathrm{d} \boldsymbol\upnu_\rho^1}{\mathrm{d} {\boldsymbol\vartheta}_\pi} = \upalpha(u^-,u^+)$. The same argument applies to $ \mathscr F_{\hat\Psi}(w'{\boldsymbol \varsigma},\boldsymbol\upnu_\rho^2)$, cf.\ also Lemma~\ref{l:lsc-general}(3). \end{proof} \begin{definition} \label{def:R-rigorous} The \textit{dissipation potential} $\mathscr R: {\mathcal M}^+(V)\times{\mathcal M}(E) \to [0,{+\infty}]$ is defined by \begin{equation} \label{def:action} \mathscr R(\rho,{\boldsymbol j} ) := \frac12 \mathscr F_\Upsilon({\boldsymbol\vartheta}^-_\rho,{\boldsymbol\vartheta}^+_\rho,2{\boldsymbol j} |{\boldsymbol\vartheta}_\pi) = \frac12 \mathscr F_\Psi(2{\boldsymbol j} |\boldsymbol\upnu_\rho), \end{equation} where ${\boldsymbol\vartheta}_{\rho}^\pm$ are defined by \eqref{def:teta}. If $\upalpha$ is $1$-homogeneous, then $\mathscr R(\rho,{\boldsymbol j})$ is independent of ${\boldsymbol\vartheta}_\pi$. \end{definition} \begin{lemma} \label{l:alt-char-R} Let $\rho=\rho^a+\rho^\perp\in {\mathcal M}^+(V)$ and ${\boldsymbol j} ={\boldsymbol j}^a+{\boldsymbol j}^\perp\in {\mathcal M}(E)$, with $\rho^a=u\pi$, $2{\boldsymbol j}^a=w{\boldsymbol\vartheta}_\pi$, and $\rho^\perp$, ${\boldsymbol j}^\perp$ as in Lemma \ref{l:4.8}, satisfy $\mathscr R(\rho,{\boldsymbol j} )<{+\infty}$, and let $P\in \calB(V)$ be a $\pi$-negligible set such that $\rho^\perp=\rho\mres P$.
\begin{enumerate}[label=(\arabic*)] \item We have $|{\boldsymbol j} |(P\times (V\setminus P))= |{\boldsymbol j} |((V\setminus P)\times P)=0$, ${\boldsymbol j}^\perp={\boldsymbol j} \mres(P\times P)$, and \begin{equation} \label{eq:37} \mathscr R(\rho,{\boldsymbol j} )= \mathscr R(\rho^a,{\boldsymbol j}^a)+ \frac12 \mathscr F_{\Upsilon^\infty}({\boldsymbol\vartheta}^-_{\rho^\perp},{\boldsymbol\vartheta}^+_{\rho^\perp},2{\boldsymbol j}^\perp). \end{equation} In particular, if $\upalpha$ is $1$-homogeneous we have the decomposition \begin{equation} \label{eq:175} \mathscr R(\rho,{\boldsymbol j} )= \mathscr R(\rho^a,{\boldsymbol j}^a)+ \mathscr R(\rho^\perp,{\boldsymbol j}^\perp). \end{equation} \item \label{l:alt-char-R:i2} If $\rho\ll \pi$, or $\upalpha$ is sub-linear, i.e.~$\upalpha^\infty\equiv0$, or $\kappa(x,\cdot)\ll\pi$ for every $x\in V$, then ${\boldsymbol j} \ll{\boldsymbol\vartheta}_\pi$ and ${\boldsymbol j}^\perp\equiv0$. In any of these three cases, $\mathscr R(\rho,{\boldsymbol j} ) = \mathscr R(\rho^a,{\boldsymbol j})$, and setting $E'$ as in \eqref{concentration-set} we have $w=0$ ${\boldsymbol\vartheta}_\pi$-a.e.\ on $E\setminus E'$, and \eqref{eq:21} holds. \item Furthermore, $\mathscr R$ is convex and lower semicontinuous with respect to~setwise convergence in $(\rho,{\boldsymbol j})$. If $\kappa$ satisfies the weak Feller property, then $\mathscr R$ is also lower semicontinuous with respect to weak convergence in duality with continuous bounded functions. \end{enumerate} \end{lemma} \begin{proof} \textit{(1)} Equation~\eqref{eq:37} is an immediate consequence of \eqref{eq:94}. To prove the properties of ${\boldsymbol j}$, set $R = V\setminus P$ for convenience.
By using the decompositions ${\boldsymbol j} =w{\boldsymbol\vartheta}_\pi+w'{\boldsymbol \varsigma}$ and ${\boldsymbol\vartheta}_{\rho}^\pm = {\boldsymbol\vartheta}_{\rho^a}^\pm + {\boldsymbol\vartheta}_{\rho^\perp}^\pm = {\boldsymbol\vartheta}_{\rho^a}^\pm + z^\pm {\boldsymbol \varsigma}$ introduced in the proof of the previous lemma, the definition~\eqref{eq:30} implies that ${\boldsymbol\vartheta}^+_{\rho^\perp}(P\times R)=0$, so that $z^+=0$ ${\boldsymbol \varsigma}$-a.e.~in $P\times R$; analogously, $z^-=0$ ${\boldsymbol \varsigma}$-a.e.~in $R\times P$. By \eqref{alpha-0} we find that $\upalpha^\infty(z^-,z^+)=0$ ${\boldsymbol \varsigma}$-a.e.~in $(P\times R)\cup (R\times P)$, and therefore $w'=0$ as well, since $\Upsilon^\infty(z^-,z^+,w')<{+\infty}$ ${\boldsymbol \varsigma}$-a.e.\ (see \eqref{heartsuit}). We eventually deduce that ${\boldsymbol j}^\perp={\boldsymbol j} \mres(P\times P)$. \textit{(2)} When $\rho\ll\pi$ we can choose $P=\emptyset$, so that ${\boldsymbol j}^\perp={\boldsymbol j}\mres P=0$. When $\upalpha$ is sub-linear, then $\boldsymbol\upnu_\rho\ll {\boldsymbol\vartheta}_\pi$, so that ${\boldsymbol j} \ll{\boldsymbol\vartheta}_\pi$ since $\Psi$ is superlinear. If $\kappa(x,\cdot)\ll \pi$ for every $x\in V$, then ${\mathsf y}_\sharp {\boldsymbol\vartheta}^-_{\rho^\perp}\ll \pi$ and ${\mathsf x}_\sharp {\boldsymbol\vartheta}^+_{\rho^\perp}\ll \pi$, so that ${\boldsymbol\vartheta}^\pm_{\rho^\perp}(P\times P)=0$, since $P$ is $\pi$-negligible. We deduce that ${\boldsymbol j}^\perp(P\times P)=0$ as well. \noindent \textit{(3)} The convexity of $\mathscr R$ follows from the convexity of the functional $\mathscr F_\Upsilon$. The lower semicontinuity follows by combining Lemma \ref{le:kernel-convergence} with Lemma \ref{l:lsc-general}.
\end{proof} \begin{cor} \label{cor:decomposition} Let $\pi_1,\pi_2\in {\mathcal M}^+(V)$ be mutually singular measures satisfying the detailed balance condition with respect to~$\kappa$, and let ${\boldsymbol\vartheta}_{\pi_i}=\boldsymbol \kappa_{\pi_i}$ be the corresponding symmetric measures in ${\mathcal M}^+(E)$ (see Section~\ref{subsub:kernels}). For every pair $(\rho,{\boldsymbol j})$ with $\rho=\rho_1+\rho_2$ and ${\boldsymbol j}={\boldsymbol j}_1+{\boldsymbol j}_2$, where $\rho_i\ll\pi_i$ and ${\boldsymbol j}_i\ll{\boldsymbol\vartheta}_{\pi_i}$, we have \begin{equation} \label{eq:176} \mathscr R(\rho,{\boldsymbol j})=\mathscr R_1(\rho_1,{\boldsymbol j}_1)+\mathscr R_2(\rho_2,{\boldsymbol j}_2), \end{equation} where $\mathscr R_i$ is the dissipation functional induced by ${\boldsymbol\vartheta}_{\pi_i}$. When $\upalpha$ is $1$-homogeneous, $\mathscr R_i=\mathscr R$. \end{cor} \subsection{Curves with finite \texorpdfstring{$\mathscr R$}R-action} In this section, we study the properties of curves with finite $\mathscr R$-action, i.e., elements of \begin{equation} \label{def:Aab} \mathbb{CE}_{\mathscr R}(a,b) := \biggl\{ (\rho,{\boldsymbol j} ) \in \mathbb{CE}(a,b):\ \int_a^b \mathscr R(\rho_t,{\boldsymbol j}_t)\, \mathrm{d} t <{+\infty} \biggr\}. \end{equation} The finiteness of the $\mathscr R$-action leads to the following remarkable property: a curve $(\rho,{\boldsymbol j})$ with finite $\mathscr R$-action can be separated into two mutually singular curves $(\rho^a,{\boldsymbol j}^a),\ (\rho^\perp,{\boldsymbol j}^\perp)\in \mathbb{CE}_{\mathscr R}(a,b)$ that evolve independently and contribute independently to~$\mathscr R$. Consequently, finite $\mathscr R$-action preserves $\pi$-absolute continuity of $\rho$: if $\rho_t\ll\pi$ for some $t$, then $\rho_t\ll\pi$ for all~$t$.
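As a simple illustration of this confinement of the singular part (a sketch, which relies on Theorem~\ref{thm:confinement-singular-part}(3) below and is not needed in the sequel), consider the following situation.
\begin{example}
Assume that $\kappa(x,\cdot)\ll\pi$ for every $x\in V$, and let $(\rho,{\boldsymbol j})\in \mathbb{CE}_{\mathscr R}(a,b)$ with $\rho_t=u_t\pi+c_t\,\delta_{x_0}$, where $\pi(\{x_0\})=0$ and $c_t\ge 0$. Then ${\boldsymbol j}^\perp\equiv0$ and $t\mapsto c_t$ is constant: no flux of finite $\mathscr R$-action can move (or create) mass sitting on a $\pi$-negligible set, and only the absolutely continuous part $u_t\pi$ evolves.
\end{example}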
These properties and others are proved in Theorem~\ref{thm:confinement-singular-part} and Corollary~\ref{cor:propagation-AC} below. \begin{remark}\label{rem:skew-symmetric} If $(\rho,{\boldsymbol j} )\in \mathbb{CE}_{\mathscr R}(a,b)$ then the `skew-symmetrization' $\tj=({\boldsymbol j} -\symmap {\boldsymbol j} )/2$ of ${\boldsymbol j} $ gives rise to a pair $(\rho,\tj)\in \mathbb{CE}_{\mathscr R}(a,b)$ as well, with lower $\mathscr R$-action: \[ \int_a^b \mathscr R(\rho_t, \tj_t)\, \mathrm{d} t \leq \int_a^b \mathscr R(\rho_t,{\boldsymbol j}_t)\, \mathrm{d} t. \] This follows from the convexity of $w\mapsto\Upsilon(u_1,u_2,w)$, the symmetry of $(u_1,u_2)\mapsto\Upsilon(u_1,u_2,w)$, and the invariance of the continuity equation~\eqref{eq:ct-eq-def} under the `skew-symmetrization' ${\boldsymbol j} \mapsto \tj$ (cf.\ also the calculations in the proof of Corollary \ref{th:chain-rule-bound2}). As a result, we can often assume without loss of generality that a flux ${\boldsymbol j} $ is skew-symmetric, i.e.\ that $\symmap {\boldsymbol j} = -{\boldsymbol j} $. \end{remark} \begin{theorem} \label{thm:confinement-singular-part} Let $(\rho,{\boldsymbol j} )\in \mathbb{CE}_{\mathscr R}(a,b)$ and let us consider the Lebesgue decompositions $\rho_t=\rho_t^a+\rho_t^\perp$ and ${\boldsymbol j}_t ={\boldsymbol j}_t^a+{\boldsymbol j}_t^\perp$ of $\rho_t$ with respect to~$\pi$ and of ${\boldsymbol j}_t $ with respect to~${\boldsymbol\vartheta}_\pi$. \begin{enumerate} \item We have $(\rho^a,{\boldsymbol j}^a)\in \mathbb{CE}_{\mathscr R}(a,b)$ with \begin{equation} \label{eq:55} \int_a^b \mathscr R(\rho^a_t,{\boldsymbol j}^a_t)\, \mathrm{d} t \le \int_a^b \mathscr R(\rho_t,{\boldsymbol j}_t)\, \mathrm{d} t . \end{equation} In particular, $t\mapsto \rho_t^a(V)$ and $t\mapsto\rho_t^\perp(V)$ are constant.
\item If $\upalpha$ is $1$-homogeneous, then also $(\rho^\perp,{\boldsymbol j}^\perp)\in \mathbb{CE}_{\mathscr R}(a,b)$ and \begin{equation} \label{eq:55bis} \int_a^b \mathscr R(\rho^a_t,{\boldsymbol j}^a_t)\, \mathrm{d} t + \int_a^b \mathscr R(\rho^\perp_t,{\boldsymbol j}^\perp_t)\, \mathrm{d} t= \int_a^b \mathscr R(\rho_t,{\boldsymbol j}_t)\, \mathrm{d} t . \end{equation} \item If $\upalpha$ is sub-linear or $\kappa(x,\cdot)\ll\pi$ for every $x\in V$, then $\rho_t^\perp$ is constant in $[a,b]$ and ${\boldsymbol j}^\perp\equiv0$. \end{enumerate} \end{theorem} \begin{proof} \textit{(1)} Let $\gamma\in {\mathcal M}^+(V)$ be a dominating measure for the curve $\rho$ according to Corollary~\ref{c:narrow-ct}, and let us denote by $\gamma=\gamma^a+\gamma^\perp$ the Lebesgue decomposition of $\gamma$ with respect to~$\pi$; we also denote by $P\in\calB(V)$ a $\pi$-negligible Borel set such that $\gamma^\perp=\gamma\mres P$. Setting $R:=V\setminus P$, since $\rho_t\ll \gamma$ we thus obtain $\rho^a_t=\rho_t\mres R$ and $\rho^\perp_t=\rho_t\mres P$. By Lemma \ref{l:alt-char-R}, for $\lambda$-a.e.~$t\in (a,b)$ we obtain ${\boldsymbol j}^\perp_t={\boldsymbol j}_t \mres(P\times P)$ and ${\boldsymbol j}^a_t={\boldsymbol j}_t \mres(R\times R)$ with $|{\boldsymbol j}_t|(R\times P)=|{\boldsymbol j}_t|(P\times R)=0$.
For every function $\varphi\in\mathrm{B}_{\mathrm b}$ we have $\dnabla(\varphi\chi_R)\equiv0$ on $P\times P$, so that we get \begin{align*} \int_V \varphi\,\mathrm{d} \rho_{t_2}^a- \int_V \varphi\,\mathrm{d} \rho_{t_1}^a &= \int_R \varphi\,\mathrm{d} \rho_{t_2}- \int_R \varphi\,\mathrm{d} \rho_{t_1}= \int_{t_1}^{t_2} \iint_{E} \dnabla(\varphi \chi_R)\,\mathrm{d} ({\boldsymbol j}^a_t+{\boldsymbol j}^\perp_t)\,\mathrm{d} t \\&=\int_{t_1}^{t_2} \iint_{R\times R} \dnabla(\varphi \chi_R)\,\mathrm{d} {\boldsymbol j}^a_t\,\mathrm{d} t =\int_{t_1}^{t_2} \iint_{E} \dnabla\varphi\,\mathrm{d} {\boldsymbol j}^a_t\,\mathrm{d} t, \end{align*} showing that $(\rho^a,{\boldsymbol j}^a)$ belongs to $\mathbb{CE}(a,b)$. Estimate \eqref{eq:55} follows by \eqref{eq:37}. From Lemma~\ref{l:cont-repr} we deduce that $\rho_t^a(V)$ and $\rho_t^\perp(V)$ are constant. \textit{(2)} This follows by the linearity of the continuity equation and \eqref{eq:175}. \textit{(3)} If $\upalpha$ is sub-linear or $\kappa(x,\cdot)\ll\pi$ for every $x\in V$, then Lemma~\ref{l:alt-char-R} shows that ${\boldsymbol j}^\perp\equiv 0$. Since by linearity $(\rho^\perp,{\boldsymbol j}^\perp)\in \mathbb{CE}(a,b)$, we deduce that $\rho^\perp_t$ is constant. \end{proof} \begin{cor}\label{cor:propagation-AC} Let $(\rho,{\boldsymbol j} )\in \mathbb{CE}_{\mathscr R}(a,b)$. If there exists $t_0\in [a,b]$ such that $\rho_{t_0}\ll\pi$, then we have $\rho_t\ll\pi$ for every $t\in [a,b]$, ${\boldsymbol j}^\perp\equiv0$, and $\odiv {\boldsymbol j}_t\ll \pi$ for $\lambda$-a.e.~$t\in (a,b)$. In particular, there exist an absolutely continuous and a.e.~differentiable map $u:[a,b]\to L^1(V,\pi)$ and a map $w\in L^1(E,\lambda\otimes{\boldsymbol\vartheta}_\pi)$ such that \begin{equation} \label{eq:42} 2{\boldsymbol j}_\lambda=w\, \lambda\otimes{\boldsymbol\vartheta}_\pi,\quad \partial_t u_t(x)=\frac12\int_V \big(w_t(y,x)-w_t(x,y)\big)\,\kappa(x,\mathrm{d} y) \quad\text{for a.e.~}t\in (a,b).
\end{equation} Moreover, there exists a measurable map $\xi:(a,b)\times E\to \mathbb{R}$ such that $w=\xi\upalpha(u^-,u^+)$ $\lambda\otimes{\boldsymbol\vartheta}_\pi$-a.e.~and \begin{equation} \label{eq:45} \mathscr R(\rho_t,{\boldsymbol j}_t)= \frac12\iint_E \Psi(\xi_t(x,y))\upalpha(u_t(x),u_t(y))\,{\boldsymbol\vartheta}_\pi(\mathrm{d} x,\mathrm{d} y) \quad\text{for a.e.~$t\in (a,b)$.} \end{equation} If $w$ is skew-symmetric, then $\xi$ is skew-symmetric as well and \eqref{eq:42} reads as \begin{equation} \label{eq:177} \partial_t u_t(x)=\int_V w_t(y,x)\,\kappa(x,\mathrm{d} y)= \int_V \xi_t(y,x)\upalpha(u_t(x),u_t(y))\,\kappa(x,\mathrm{d} y) \quad\text{a.e.~in }(a,b). \end{equation} \end{cor} \begin{remark} \label{rem:general-fact} Relations \eqref{eq:42} and \eqref{eq:177} hold both in the sense of a.e.~differentiability of maps with values in $L^1(V,\pi)$ and pointwise a.e.~with respect to~$x\in V$: more precisely, there exists a set $U\subset V$ of full $\pi$-measure such that for every $x\in U$ the map $t\mapsto u_t(x)$ is absolutely continuous and equations \eqref{eq:42} and \eqref{eq:177} hold for every $x\in U$, a.e.~with respect to~$t\in (0,T)$. \end{remark} \begin{proof} The first part of the statement is an immediate consequence of Theorem~\ref{thm:confinement-singular-part}, which yields $\rho^\perp_t(V)= 0$ for every $t\in [a,b]$. We can thus write $2{\boldsymbol j} =w(\lambda\otimes {\boldsymbol\vartheta}_\pi)$ for some measurable map $w:(a,b)\times E\to \mathbb{R}$.
Moreover, $\odiv {\boldsymbol j} \ll\lambda\otimes \pi$, since ${\mathsf s}_\sharp{\boldsymbol j}\ll{\mathsf s}_\sharp (\lambda\otimes{\boldsymbol\vartheta}_\pi)= \lambda\otimes{\boldsymbol\vartheta}_\pi$, and therefore \begin{equation} \label{eq:46} 2 {\boldsymbol j}^\flat={\boldsymbol j}-{\mathsf s}_\sharp{\boldsymbol j}\ll \lambda\otimes{\boldsymbol\vartheta}_\pi\quad\Longrightarrow\quad \odiv {\boldsymbol j}= {\mathsf x}_\sharp (2{\boldsymbol j}^\flat)\ll {\mathsf x}_\sharp(\lambda\otimes{\boldsymbol\vartheta}_\pi)\ll \lambda\otimes\pi. \end{equation} Setting $z_t=\mathrm{d}(\odiv {\boldsymbol j}_t)/\mathrm{d}\pi$, we get for a.e.~$t\in (a,b)$ \begin{align*} \partial_t u_t&=-z_t,\\ -2\int_V \varphi \,z_t\,\mathrm{d}\pi&= \iint_E (\varphi(y)-\varphi(x))w_t(x,y){\boldsymbol\vartheta}_\pi(\mathrm{d} x,\mathrm{d} y) = \iint_E \varphi(x) (w_t(y,x)-w_t(x,y)){\boldsymbol\vartheta}_\pi(\mathrm{d} x,\mathrm{d} y) \\&= \int_V \varphi(x) \Big(\int_V (w_t(y,x)-w_t(x,y)) \kappa(x,\mathrm{d} y)\Big)\pi(\mathrm{d} x). \end{align*} The existence of $\xi$ and formula~\eqref{eq:45} follow from Lemma \ref{l:alt-char-R}\ref{l:alt-char-R:i2}. \end{proof} \subsection{Chain rule for convex entropies} \label{subsec:chain-rule} Let us now consider a continuous convex function $\upbeta:\mathbb{R}_+\to\mathbb{R}_+$ that is differentiable in $(0,+\infty)$. The main choice for $\upbeta$ will be the function~$\upphi$ that appears in the definition of the driving functional~$\mathscr E$ (see Assumption~\ref{ass:S}), and the example of the Boltzmann-Shannon entropy function~\eqref{logarithmic-entropy} illustrates why we only assume differentiability away from zero.
By setting $\upbeta'(0)=\lim_{r\downarrow0} \upbeta'(r)\in [-\infty,{+\infty})$, we define the function ${\mathrm A}_\upbeta:\mathbb{R}_+\times \mathbb{R}_+\to[-\infty,+\infty]$ by \begin{equation} \label{eq:102} {\mathrm A}_\upbeta(u,v):= \begin{cases} \upbeta'(v)-\upbeta'(u)&\text{if }(u,v)\in \mathbb{R}_+\times \mathbb{R}_+\setminus \{(0,0)\},\\ 0&\text{if }u=v=0. \end{cases} \end{equation} Note that ${\mathrm A}_\upbeta$ is continuous (with extended real values) in $\mathbb{R}_+\times \mathbb{R}_+\setminus\{(0,0)\}$, and is finite and continuous whenever $\upbeta'(0)>-\infty$. When $\upbeta'(0)=-\infty$ we have ${\mathrm A}_\upbeta(0,v)=-{\mathrm A}_\upbeta(u,0)={+\infty}$ for every $u,v>0$. In the following we will adopt the convention \begin{equation} \label{eq:75} |\pm\infty|={+\infty},\quad a\cdot ({+\infty}):= \begin{cases} {+\infty}&\text{if }a>0,\\ 0&\text{if }a=0,\\ -\infty&\text{if }a<0, \end{cases} \quad a\cdot(-\infty):=-a\cdot ({+\infty}) \end{equation} for every $a\in [-\infty,+\infty]$ and, using this convention, we define the extended-valued function ${\mathrm B}_\upbeta:\mathbb{R}_+\times\mathbb{R}_+\times \mathbb{R}\to [-\infty,+\infty]$ by \begin{equation} \label{eq:105} {\mathrm B}_\upbeta(u,v,w):={\mathrm A}_\upbeta(u,v)w. \end{equation} We want to study the differentiability properties of the functional $\mathscr F_\upbeta(\cdot|\pi)$ along solutions $(\rho,{\boldsymbol j})\in \mathbb{CE}(I)$ of the continuity equation. Note that if $\upbeta$ is superlinear and $\mathscr F_\upbeta$ is finite at a time $t_0\in I$, then Corollary \ref{cor:propagation-AC} shows that $\rho_t\ll\pi$ for every $t\in I$.
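Before proceeding, we record (as a sketch, under the standard normalization of the Boltzmann entropy) what \eqref{eq:102} and \eqref{eq:105} look like in the main example $\upbeta=\upphi$.
\begin{example}
For the Boltzmann-Shannon entropy function $\upphi(s)=s\log s-s+1$ of \eqref{logarithmic-entropy} we have $\upphi'(s)=\log s$ and $\upphi'(0)=-\infty$, so that
\begin{equation*}
  {\mathrm A}_\upphi(u,v)=\log\frac vu \quad\text{for } u,v>0,\qquad
  {\mathrm A}_\upphi(0,v)={+\infty},\quad {\mathrm A}_\upphi(u,0)=-\infty \quad\text{for } u,v>0,
\end{equation*}
and, by the convention \eqref{eq:75}, ${\mathrm B}_\upphi(u,v,w)=w\log(v/u)$ for $u,v>0$.
\end{example}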
If $\upbeta$ has linear growth, then \begin{equation} \label{eq:100} \mathscr F_\upbeta(\rho_t|\pi)= \int_V \upbeta(u_t)\,\mathrm{d}\pi+\upbeta^\infty(1)\rho^\perp(V),\quad \rho_t=u_t\pi+\rho_t^\perp, \end{equation} where we have used that $t \mapsto \rho_t^\perp(V)$ is constant. Thus, we are reduced to studying $\mathscr F_\upbeta$ along $(\rho^a,{\boldsymbol j}^a)$, which is still a solution of the continuity equation. The absolute continuity of $\rho_t$ with respect to~$\pi$ is therefore quite a natural assumption in the next result. \begin{theorem}[Chain rule I] \label{th:chain-rule-bound} Let $(\rho,{\boldsymbol j} )\in \mathbb{CE}_{\mathscr R}(a,b)$ with $\rho_t=u_t\pi\ll \pi$, and let $2{\boldsymbol j}^\flat={\boldsymbol j}-{\mathsf s}_\sharp {\boldsymbol j}=w^\flat\lambda\otimes{\boldsymbol\vartheta}_\pi$ as in Corollary \ref{cor:propagation-AC} satisfy \begin{equation} \label{ass:th:CR} \int_V \upbeta(u_a)\,\mathrm{d}\pi<{+\infty},\quad \int_a^b\iint_E\Big({\mathrm B}_\upbeta(u_t(x),u_t(y),w^\flat_t(x,y))\Big)_+\,{\boldsymbol\vartheta}_\pi(\mathrm{d} x,\mathrm{d} y)\,\mathrm{d} t<{+\infty}. \end{equation} Then the map $t\mapsto \int_V \upbeta(u_t)\,\mathrm{d}\pi$ is absolutely continuous in $[a,b]$, the map ${\mathrm B}_\upbeta(u^-,u^+,w^\flat)$ is $\lambda\otimes{\boldsymbol\vartheta}_\pi$-integrable, and \begin{equation} \label{eq:CR} \frac{\mathrm{d}}{\mathrm{d} t}\int_V \upbeta(u_t)\,\mathrm{d}\pi = \frac12\iint_{E} {\mathrm B}_\upbeta(u_t(x),u_t(y),w^\flat_t(x,y))\, {\boldsymbol\vartheta}_\pi(\mathrm{d} x,\mathrm{d} y) \quad\text{for a.e.~}t\in (a,b). \end{equation} \end{theorem} \begin{remark} At first sight, condition~\eqref{ass:th:CR} on the positive part of ${\mathrm B}_\upbeta$ is remarkable: we only require the positive part of ${\mathrm B}_\upbeta$ to be integrable, but in the assertion we obtain integrability of the negative part as well.
This integrability arises from the combination of the upper bound on $\int_V \upbeta(u_a)\,\mathrm{d} \pi$ in~\eqref{ass:th:CR} with the lower bound $\upbeta\geq0$. \end{remark} \begin{proof} {\em Step 1:\ Chain rule for an approximation.} Define for $k\in \mathbb{N}$ an approximation $\upbeta_k$ of $\upbeta$ as follows: let $\upbeta_k'(\sigma):=\max\{-k,\min\{\upbeta'(\sigma),k\}\}$ be the truncation of $\upbeta'$ to the interval $[-k,k]$. Due to the assumptions on $\upbeta$, we may assume that $\upbeta$ attains a minimum at some point $s_0\in[0,{+\infty})$. Now set $\upbeta_k(s) := \upbeta(s_0) + \int_{s_0}^s \upbeta_k'(\sigma)\,\mathrm{d} \sigma$. Note that $\upbeta_k$ is differentiable and globally Lipschitz, and that $\upbeta_k(s)$ converges monotonically to $\upbeta(s)$ for all $s\geq0$ as $k\to\infty$. For each $k\in\mathbb{N}$ and $t\in [a,b]$ we define \[ S_{k}(t): = \int_{V} \upbeta_k(u_t)\, \mathrm{d} \pi,\quad S(t): = \int_{V} \upbeta(u_t)\, \mathrm{d}\pi. \] By the convexity and Lipschitz continuity of $\upbeta_k$, we have \begin{align*} \upbeta_k(u_t(x))-\upbeta_k(u_s(x)) \le \upbeta_k'(u_t(x))( u_t(x)-u_s(x)) \le k| u_t(x)-u_s(x)|\,. \end{align*} Hence, we deduce by Corollary \ref{cor:propagation-AC} that for every $a\le s<t\le b$ \begin{align*} S_{k}(t) - S_{k}(s) &= \int_{V} \bigl[\upbeta_k(u_t(x))-\upbeta_k(u_s(x))\bigr]\pi(\mathrm{d} x) \\ &\le k\|u_t-u_s\|_{L^1(V;\pi)} \le k\int_s^t \|\partial_r u_r\|_{L^1(V;\pi)}\,\mathrm{d} r. \end{align*} We conclude that the function $t\mapsto S_k(t)$ is absolutely continuous.
Let us pick a point $t\in (a,b)$ of differentiability for $t\mapsto S_k(t)$: it is easy to check that \begin{align*} \frac{\mathrm{d} }{\mathrm{d} t}S_k(t) &= \int_{V} \upbeta'_k(u_t)\,\partial_t u_t \,\mathrm{d}\pi = \frac12\iint_{E} \dnabla \upbeta'_k(u_t)w^\flat_t \, \mathrm{d}{\boldsymbol\vartheta}_\pi\,, \end{align*} which by integrating over time yields \begin{equation}\label{ineq:est-S-k} S_k(t) - S_k(s) = \frac12\int_s^t\iint_{E} \dnabla \upbeta'_k(u_r)w^\flat_r\, \mathrm{d} {\boldsymbol\vartheta}_\pi\,\mathrm{d} r \qquad \text{for all } a \leq s \leq t \leq b. \end{equation} \paragraph{\em Step 2:\ The limit $k\to\infty$.} Since $0\leq \upbeta_k''\leq \upbeta''$ we have \begin{equation} \label{eq:103} 0\le {\mathrm A}_{\upbeta_k}(u,v)=\upbeta_k'(v)-\upbeta_k'(u)\le \upbeta'(v)-\upbeta'(u)={\mathrm A}_\upbeta(u,v)\quad\text{whenever }0\le u\le v, \end{equation} and \begin{equation} \label{eq:104} |\upbeta_k'(v)-\upbeta_k'(u)|\le |{\mathrm A}_\upbeta(u,v)|\quad \text{for every }u,v\in \mathbb{R}_+. \end{equation} We can thus estimate the right-hand side in \eqref{ineq:est-S-k} by \begin{align} (B_k)_+=\left( \dnabla \upbeta'_k(u)\, w^\flat\right)_+ & \le \left({\mathrm A}_\upbeta(u^-,u^+) w^\flat\right)_+=B_+, \label{est:CR:w-ona-phi-k} \end{align} where we have used the short-hand notation \begin{equation} \label{eq:106} B_k(r,x,y):={\mathrm B}_{\upbeta_k}(u_r(x),u_r(y),w^\flat_r(x,y)),\quad B(r,x,y):={\mathrm B}_\upbeta(u_r(x),u_r(y),w^\flat_r(x,y)). \end{equation} Assumption~\eqref{ass:th:CR} implies that the right-hand side in~\eqref{est:CR:w-ona-phi-k} is an element of $L^1([a,b]\times E;\lambda\otimes{\boldsymbol\vartheta}_\pi)$, so that in particular $B_+\in \mathbb{R}$ for $(\lambda\otimes{\boldsymbol\vartheta}_\pi)$-a.e.~$(t,x,y)$.
Moreover, \eqref{ineq:est-S-k} yields \begin{align} \int_a^b \iint_E (B_k)_-\,\mathrm{d} {\boldsymbol\vartheta}_\pi\,\mathrm{d} t &=\notag \int_a^b \iint_E (B_k)_+\,\mathrm{d} {\boldsymbol\vartheta}_\pi\,\mathrm{d} t+ S_k(a)-S_k(b) \\ &\le \int_a^b \iint_E (B)_+\,\mathrm{d} {\boldsymbol\vartheta}_\pi\,\mathrm{d} t+ S(a)<{+\infty}.\label{eq:193} \end{align} Note that the sequence $k\mapsto (B_k)_-$ either is eventually $0$ or increases monotonically to $B_-$. Beppo Levi's Monotone Convergence Theorem and the uniform estimate \eqref{eq:193} then yield that $B_-\in L^1((a,b)\times E,\lambda\otimes{\boldsymbol\vartheta}_\pi)$, thus showing that ${\mathrm B}_\upbeta(u^-,u^+,w^\flat)$ is $(\lambda\otimes{\boldsymbol\vartheta}_\pi)$-integrable as well. In order to pass to the limit in \eqref{ineq:est-S-k} as $k\to{+\infty}$, we observe that \begin{equation} \lim_{k\to{+\infty}} \dnabla \upbeta'_k(u)\, w^\flat =B \quad \text{$\lambda\otimes {\boldsymbol\vartheta}_\pi$-a.e.~in $(a,b)\times E$.} \label{eq:57} \end{equation} The identity~\eqref{eq:57} is obvious if $\upbeta'(0)$ is finite; if $\upbeta'(0)=-\infty$, it follows from the upper bound \eqref{est:CR:w-ona-phi-k} and the fact that the right-hand side of \eqref{est:CR:w-ona-phi-k} is finite almost everywhere. The Dominated Convergence Theorem then implies that \[ \int_s^t\iint_{E} \dnabla \upbeta'_k(u_r)\, w^\flat_r \,\mathrm{d} {\boldsymbol\vartheta}_\pi\,\mathrm{d} r \quad\longrightarrow\quad \int_s^t\iint_{E}B\, \mathrm{d} {\boldsymbol\vartheta}_\pi\,\mathrm{d} r \qquad\text{as}\quad k\to\infty\,. \] By the Monotone Convergence Theorem, $S(t) = \lim_{k\to {+\infty}} S_k(t)\in [0,{+\infty}]$ for all $t\in [a,b]$, and the limit is finite for $t=a$. For all $t\in [a,b]$, therefore, \[ S(t) = S(a)+ \frac12\int_a^t \iint_E B \, \mathrm{d} {\boldsymbol\vartheta}_\pi \,\mathrm{d} r, \] which shows that $S$ is absolutely continuous and \eqref{eq:CR} holds.
\end{proof} \par We now introduce three functions associated with the (general) continuous convex function $\upbeta:\mathbb{R}_+\to\mathbb{R}_+$, differentiable in $(0,+\infty)$, that we have considered so far, and whose main example will be the entropy density $\upphi$ from \eqref{cond-phi}. Recalling the definition \eqref{eq:102}, the convention \eqref{eq:75}, and setting $\Psi^*(\pm\infty):={+\infty}$, let us now introduce the functions ${\mathrm D}^+_\upbeta, {\mathrm D}^-_\upbeta,{\mathrm D}_\upbeta:\mathbb{R}_+^2\to[0,{+\infty}]$: \begin{subequations} \label{subeq:D} \begin{align} \label{eq:181} {\mathrm D}^-_\upbeta(u,v)&:= \Psi^*({\mathrm A}_\upbeta(u,v))\upalpha(u,v)\\ &\phantom:= \begin{cases} \Psi^*({\mathrm A}_\upbeta(u,v))\upalpha(u,v)&\text{if }\upalpha(u,v)>0,\\ 0&\text{otherwise,} \end{cases}\notag\\[2\jot] \label{eq:52} {\mathrm D}^+_\upbeta(u,v)&:= \begin{cases} \Psi^*({\mathrm A}_\upbeta(u,v))\upalpha(u,v)&\text{if }\upalpha(u,v)>0,\\ 0&\text{if }u=v=0,\\ {+\infty}&\text{otherwise, i.e.~if }\upalpha(u,v)=0,\ u\neq v, \end{cases}\\[2\jot] \label{eq:182} {\mathrm D}_\upbeta(\cdot,\cdot)&:= \text{the lower semicontinuous envelope of ${\mathrm D}_\upbeta^+$ in $\mathbb{R}_+^2$}. \end{align} \end{subequations} The function ${\mathrm D}_\upphi$ corresponding to the choice $\upbeta = \upphi$ shall feature in the (rigorous) definition of the \emph{Fisher information} functional $\mathscr{D}$, cf.\ \eqref{eq:def:D} ahead.
Nonetheless, it is significant to introduce the functions ${\mathrm D}^-_\upphi$ and ${\mathrm D}^+_\upphi$ as well, cf.\ Remarks \ref{rmk:why-interesting-1} and \ref{rmk:Mark} ahead. \begin{example}[The functions ${\mathrm D}^\pm_\upphi$ and ${\mathrm D}_\upphi$ in the quadratic and in the $\cosh$ case] \label{ex:Dpm} In the two examples of the linear equation~\eqref{eq:fokker-planck}, with Boltzmann entropy function~$\upphi$ and with quadratic and $\cosh$-type potentials $\Psi^*$ (see~\eqref{choice:cosh} and~\eqref{choice:quadratic}), the functions ${\mathrm D}^\pm_\upphi$ and ${\mathrm D}_\upphi$ take the following forms: \begin{enumerate} \item If $\Psi^*(s)=s^2/2$ and, accordingly, $\upalpha(u,v)=(u-v)/(\log(u)-\log(v))$ for all $u,v >0$ (with $\upalpha(u,v)=0$ otherwise), then \begin{align*} {\mathrm D}^-_\upphi(u,v) &= \begin{cases} \frac{1}{2}(\log(u)-\log(v))(u-v) & \text{if } u,\, v>0, \\ 0 & \text{if $u=0$ or $v=0$}, \end{cases}\\ {\mathrm D}_\upphi(u,v) = {\mathrm D}^+_\upphi(u,v)&= \begin{cases} \frac{1}{2}(\log(u)-\log(v))(u-v) & \text{if } u,\, v>0, \\ 0 & \text{if } u=v=0, \\ {+\infty} & \text{if } u=0 \text{ and } v \neq 0, \text{ or vice versa}.
\end{cases}
\end{align*}
For this example ${\mathrm D}_\upphi^+$ and ${\mathrm D}_\upphi$ are convex, and all three functions are lower semicontinuous.
\item If $\Psi^*(s)=4\bigl(\cosh(s/2)-1\bigr)$ and, accordingly, $\upalpha(u,v)=\sqrt{u v}$ for all $u,v \geq 0$, then
\begin{align*}
{\mathrm D}^-_\upphi(u,v) &= \begin{cases}
2\Bigl(\sqrt{u}-\sqrt{v}\Bigr)^2 & \text{if } u,\, v>0, \\
0 & \text{if $u=0$ or $v=0$},
\end{cases}\\
{\mathrm D}_\upphi(u,v) &= 2\Bigl(\sqrt{u}-\sqrt{v}\Bigr)^2\qquad \text{for all }u,v\geq 0,\\
{\mathrm D}^+_\upphi(u,v) &= \begin{cases}
2\Bigl(\sqrt{u}-\sqrt{v}\Bigr)^2 & \text{if $u, v>0$ or $u=v=0$}, \\
{+\infty} & \text{if } u=0 \text{ and } v \neq 0, \text{ or vice versa}.
\end{cases}
\end{align*}
For this example, ${\mathrm D}_\upphi^+$ and ${\mathrm D}_\upphi$ are again convex, but only ${\mathrm D}^-_\upphi$ and ${\mathrm D}_\upphi$ are lower semicontinuous.
\end{enumerate}
\end{example}
We now collect a number of general properties of ${\mathrm D}_\upbeta$ and ${\mathrm D}_\upbeta^\pm$.
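The closed forms in Example~\ref{ex:Dpm} are easy to sanity-check numerically. The following Python sketch is a purely illustrative aside (not part of the mathematical development); it assumes ${\mathrm A}_\upphi(u,v)=\log u-\log v$, which is consistent with both formulas above, and compares $\Psi^*({\mathrm A}_\upphi(u,v))\,\upalpha(u,v)$ with the stated expressions at a few sample points with $u,v>0$.

```python
import math

# A_phi(u, v) = log(u) - log(v): an assumption consistent with both
# closed forms derived in Example ex:Dpm (phi = Boltzmann entropy).
def A_phi(u, v):
    return math.log(u) - math.log(v)

# Case (1): Psi*(s) = s^2/2, alpha(u, v) = (u - v)/(log u - log v).
def D_quad(u, v):
    alpha = (u - v) / (math.log(u) - math.log(v)) if u != v else u
    return 0.5 * A_phi(u, v) ** 2 * alpha

# Case (2): Psi*(s) = 4(cosh(s/2) - 1), alpha(u, v) = sqrt(u v).
def D_cosh(u, v):
    return 4.0 * (math.cosh(A_phi(u, v) / 2.0) - 1.0) * math.sqrt(u * v)

# Compare with the closed forms stated in the example.
for u, v in [(1.0, 4.0), (0.3, 2.5), (7.0, 0.2)]:
    assert math.isclose(D_quad(u, v), 0.5 * (math.log(u) - math.log(v)) * (u - v))
    assert math.isclose(D_cosh(u, v), 2.0 * (math.sqrt(u) - math.sqrt(v)) ** 2)
print("closed forms verified")
```

Both checks pass to machine precision, reflecting the algebraic identities $\tfrac12(\log u-\log v)^2\,\upalpha(u,v)=\tfrac12(\log u-\log v)(u-v)$ and $4\bigl(\cosh(\tfrac12\log(u/v))-1\bigr)\sqrt{uv}=2(\sqrt u-\sqrt v)^2$.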
\begin{lemma}
\label{le:trivial-but-useful}
\begin{enumerate}[ref=(\arabic*)]
\item ${\mathrm D}_\upbeta^-\leq {\mathrm D}_\upbeta\leq {\mathrm D}_\upbeta^+$;
\item ${\mathrm D}_\upbeta^-$ and ${\mathrm D}_\upbeta$ are lower semicontinuous;
\item \label{le:trivial-but-useful:ineq} For every $u,v\in \mathbb{R}_+$ and $w\in \mathbb{R}$ we have
\begin{equation}
\label{eq:107}
\bigl|\rmB_\upbeta(u,v,w)\bigr| \le \Upsilon(u,v,w)+{\mathrm D}^-_\upbeta(u,v).
\end{equation}
\item Moreover, if the right-hand side of \eqref{eq:107} is finite, then the equality
\begin{equation}
\label{eq:107a}
-\rmB_\upbeta(u,v,w) = \Upsilon(u,v,w)+{\mathrm D}^-_\upbeta(u,v)
\end{equation}
is equivalent to the condition
\begin{equation}
\label{eq:109}
\upalpha(u,v)=w=0\quad\text{or}\quad\biggl[ \upalpha(u,v)>0,\ {\mathrm A}_\upbeta(u,v)\in \mathbb{R},\ -w=(\Psi^*)'\big({\mathrm A}_\upbeta(u,v)\big)\upalpha(u,v)\biggr].
\end{equation}
\end{enumerate}
\end{lemma}
\begin{proof}
It is not difficult to check that ${\mathrm D}^-_\upbeta$ is lower semicontinuous: this property is trivial where $\upalpha$ vanishes, and in all other cases it suffices to use the positivity and continuity of $\Psi^*$ in $[-\infty,+\infty]$, the continuity of ${\mathrm A}_\upbeta$ in $\mathbb{R}_+^2\setminus\{(0,0)\}$, and the continuity and positivity of $\upalpha$.
It is also obvious that ${\mathrm D}^-_\upbeta\le {\mathrm D}^+_\upbeta$, and therefore ${\mathrm D}^-_\upbeta\le {\mathrm D}_\upbeta\le {\mathrm D}^+_\upbeta$.
For the inequality~\eqref{eq:107}, let us distinguish the various cases:
\begin{itemize}
\item If $w=0$ or $u=v=0$, then $\rmB_\upbeta(u,v,w) =0$, so that \eqref{eq:107} is trivially satisfied. We can thus assume $w\neq0$ and $u+v>0$.
\item When $\upalpha(u,v)=0$, then $\Upsilon(u,v,w) ={+\infty}$, so that \eqref{eq:107} is trivially satisfied as well. We can thus assume $\upalpha(u,v)>0$.
\item If ${\mathrm A}_\upbeta(u,v)\in \{\pm\infty\}$, then ${\mathrm D}_\upbeta^-(u,v)={+\infty}$ and the right-hand side of \eqref{eq:107} is infinite.
\item It remains to consider the case when ${\mathrm A}_\upbeta(u,v)\in \mathbb{R}$, $\upalpha(u,v)>0$ and $w\neq0$. In this situation
\begin{align}
\bigl|\rmB_\upbeta(u,v,w)\bigr|&=\bigl|{\mathrm A}_\upbeta(u,v)w\bigr|= \bigg|{\mathrm A}_\upbeta(u,v)\frac{w}{\upalpha(u,v)}\bigg| \upalpha(u,v) \notag \\&\le \Psi\Big(\frac w{\upalpha(u,v)}\Big) \upalpha(u,v)+ \Psi^*\Big({\mathrm A}_\upbeta(u,v)\Big) \upalpha(u,v) \notag\\
&=\Upsilon(u,v,w)+{\mathrm D}_\upbeta^-(u,v). \label{eq:108}
\end{align}
This proves~\eqref{eq:107}.
\end{itemize}
It is now easy to study the case of equality \eqref{eq:107a}, when the (common) right-hand side of \eqref{eq:107} and \eqref{eq:107a} is finite.
This in particular implies that either $\upalpha(u,v)>0$ and ${\mathrm A}_\upbeta(u,v)\in \mathbb{R}$, or $\upalpha(u,v)=0$ and $w=0$. In the former case, calculations similar to \eqref{eq:108} show that $-w=(\Psi^*)'\big({\mathrm A}_\upbeta(u,v)\big)\upalpha(u,v)$. In the latter case, $\upalpha(u,v) = w =0$ yields $\rmB_\upbeta(u,v,w)=0$, $\Upsilon (u,v,w) = \hat{\Psi}(w,\upalpha(u,v)) = \hat{\Psi}(0,0) = 0$, and ${\mathrm D}^-_\upbeta(u,v) = \Psi^*({\mathrm A}_\upbeta(u,v))\, \upalpha(u,v)=0$.
\end{proof}
\par
As a consequence of Lemma \ref{le:trivial-but-useful}, we deduce a chain-rule inequality involving the smallest functional ${\mathrm D}_\upbeta^-$ and thus, a fortiori, the functional ${\mathrm D}_\upbeta$ which, for $\upbeta=\upphi$, will enter the definition of the Fisher information $\mathscr{D}$.
\begin{cor}[Chain rule II]
\label{th:chain-rule-bound2}
Let $(\rho,{\boldsymbol j} )\in \mathcal{CE}(a,b)$ with $\rho_t=u_t\pi\ll \pi$ and $2{\boldsymbol j}_\lambda=w (\lambda\otimes{\boldsymbol\vartheta}_\pi)$ satisfy
\begin{equation}
\label{ass:th:CR2}
\int_V \upbeta(u_a)\,\mathrm{d}\pi<{+\infty}, \quad \int_a^b \iint_E {\mathrm D}^-_\upbeta(u_t(x),u_t(y))\,{\boldsymbol\vartheta}_\pi(\mathrm{d} x,\mathrm{d} y)\,\mathrm{d} t<{+\infty}.
\end{equation}
Then the map $t\mapsto \int_V \upbeta(u_t)\,\mathrm{d}\pi$ is absolutely continuous in $[a,b]$ and
\begin{equation}
\label{eq:CR2}
\left|\frac{\mathrm{d}}{\mathrm{d} t}\int_V \upbeta(u_t)\,\mathrm{d}\pi\right| \le \mathscr R(\rho_t,{\boldsymbol j}_t)+\frac12\iint_E {\mathrm D}^-_\upbeta(u_t(x),u_t(y))\,{\boldsymbol\vartheta}_\pi(\mathrm{d} x,\mathrm{d} y)\quad\text{for a.e.~}t\in (a,b).
\end{equation}
If moreover
\[
-\frac{\mathrm{d}}{\mathrm{d} t}\int_V \upbeta(u_t)\,\mathrm{d}\pi =\mathscr R(\rho_t,{\boldsymbol j}_t)+\frac12\iint_E {\mathrm D}^-_\upbeta(u_t(x),u_t(y))\,{\boldsymbol\vartheta}_\pi(\mathrm{d} x,\mathrm{d} y)
\]
then $2{\boldsymbol j}={\boldsymbol j}^\flat$ and
\begin{equation}
\label{eq:110}
-w_t(x,y)=(\Psi^*)'\big({\mathrm A}_\upbeta(u_t(x),u_t(y))\big)\upalpha(u_t(x),u_t(y)) \quad\text{for ${\boldsymbol\vartheta}_\pi$-a.e.~$(x,y)\in E$}.
\end{equation}
In particular, $ w_t=0$ ${\boldsymbol\vartheta}_\pi$-a.e.~in $\big\{(x,y)\in E: \upalpha(u_t(x),u_t(y))=0\big\}.$
\end{cor}
\begin{proof}
We recall that for $\lambda$-a.e.~$t\in (a,b)$
\begin{displaymath}
\mathscr R(\rho_t,{\boldsymbol j}_t)= \frac12\iint_E \Upsilon(u_t(x),u_t(y),w_t(x,y))\,{\boldsymbol\vartheta}_\pi(\mathrm{d} x,\mathrm{d} y).
\end{displaymath}
We can then apply Lemma \ref{le:trivial-but-useful} and Theorem \ref{th:chain-rule-bound}, observing that
\begin{equation}
\label{eq:178}
\iint_E \Upsilon(u_t(x),u_t(y),w^\flat_t(x,y))\,{\boldsymbol\vartheta}_\pi(\mathrm{d} x,\mathrm{d} y)\le \iint_E \Upsilon(u_t(x),u_t(y), w_t(x,y))\,{\boldsymbol\vartheta}_\pi(\mathrm{d} x,\mathrm{d} y)
\end{equation}
since, by the convexity of $\Upsilon(u,v,\cdot)$ and the symmetry of $\Psi$,
\begin{align*}
\Upsilon(u_t(x),u_t(y),w^\flat_t(x,y))&= \Upsilon\Bigl(u_t(x),u_t(y),\frac12 \bigl(w_t(x,y)-w_t(y,x)\bigr)\Bigr) \\&\le \frac12\Upsilon(u_t(x),u_t(y),w_t(x,y)) +\frac12\Upsilon(u_t(x),u_t(y),w_t(y,x))
\end{align*}
and the integral of the last term coincides with the right-hand side of \eqref{eq:178} thanks to the symmetry of ${\boldsymbol\vartheta}_\pi$.
\end{proof}
\subsection{Compactness properties of curves with uniformly bounded $\mathscr R$-action}
\label{subsec:compactness}
The next result shows an important compactness property for collections of curves in $\mathcal{CE}(a,b)$ with uniformly bounded action. Recalling the discussion and the notation of Section~\ref{subsub:kernels}, we will systematically associate with a given $(\rho,{\boldsymbol j})\in \mathcal{CE}(I)$, $I=[a,b]$, a pair of measures $\rho_\lambda\in {\mathcal M}^+(I\times V)$, ${\boldsymbol j}_\lambda\in {\mathcal M}(I\times E)$ obtained by integrating with respect to the Lebesgue measure $\lambda$ in $I$:
\begin{equation}
\label{eq:95}
\rho_\lambda(\mathrm{d} t,\mathrm{d} x)= \lambda(\mathrm{d} t)\rho_t(\mathrm{d} x),\quad {\boldsymbol j}_\lambda(\mathrm{d} t,\mathrm{d} x,\mathrm{d} y)=\lambda(\mathrm{d} t){\boldsymbol j}_t(\mathrm{d} x,\mathrm{d} y).
\end{equation}
Similarly, we define
\begin{equation}
\label{eq:97}
\begin{aligned}
{\boldsymbol\vartheta}_{\rho,\lambda}^\pm(\mathrm{d} t,\mathrm{d} x,\mathrm{d} y):={}& ({\boldsymbol\vartheta}_{\rho}^\pm)_\lambda(\mathrm{d} t,\mathrm{d} x,\mathrm{d} y)= \lambda(\mathrm{d} t){\boldsymbol\vartheta}_{\rho_t}^\pm(\mathrm{d} x,\mathrm{d} y) \\={}& \lambda(\mathrm{d} t)\rho_t(\mathrm{d} x)\kappa(x,\mathrm{d} y) ={\boldsymbol\vartheta}_{\rho_\lambda}^\pm(\mathrm{d} t,\mathrm{d} x,\mathrm{d} y).
\end{aligned}
\end{equation}
It is not difficult to check that
\begin{equation}
\label{eq:96}
\int_I \mathscr R(\rho_t,{\boldsymbol j}_t)\,\mathrm{d} t= \frac12\mathscr F_\Upsilon({\boldsymbol\vartheta}_{\rho,\lambda}^-,{\boldsymbol\vartheta}_{\rho,\lambda}^+,2{\boldsymbol j}_\lambda|\lambda\otimes{\boldsymbol\vartheta}_\pi).
\end{equation}
\begin{prop}[Bounded $\int\mathscr R$ implies compactness and lower semicontinuity]
\label{prop:compactness}
Let $(\rho^n,{\boldsymbol j}^n)_n \subset \mathcal{CE}(a,b)$ be a sequence such that the initial states $\rho^n_a$ are absolutely continuous with respect to $\pi$ and relatively compact with respect to setwise convergence. Assume that
\begin{gather}
\label{eq:lem:compactness:assumptions}
M:=\sup_{n\in\mathbb{N}}\int_a^b \mathscr R(\rho_t^n, {\boldsymbol j}_t^n) \,\mathrm{d} t<{+\infty}.
\end{gather}
Then, there exist a subsequence (not relabelled) and a pair $(\rho,{\boldsymbol j} )\in \mathcal{CE}(a,b)$ such that, for the measures ${\boldsymbol j}_\lambda^n \in {\mathcal M}([a,b]\times E)$ defined as in \eqref{eq:95}, there holds
\begin{subequations}
\label{cvs-rho-n-j-n}
\begin{align}
& \label{converg-rho-n} \rho_t^n\to \rho_t\quad\text{setwise in ${\mathcal M}^+(V)$ for all $t\in[a,b]$}\,,\\
& \label{converg-j-n} {\boldsymbol j}_\lambda^n\rightharpoonup {\boldsymbol j}_\lambda \quad\text{setwise in ${\mathcal M}([a,b]\times E)$}\,,
\end{align}
\end{subequations}
where ${\boldsymbol j}_\lambda$ is induced (in the sense of \eqref{eq:95}) by a $\lambda$-integrable family $({\boldsymbol j}_t)_{t\in [a,b]}\subset {\mathcal M}(E)$. In addition, for any sequence $(\rho^n,{\boldsymbol j}^n)$ converging to $(\rho,{\boldsymbol j} )$ in the sense of~\eqref{cvs-rho-n-j-n}, we have
\begin{equation}
\label{ineq:lsc-R}
\int_a^b \mathscr R(\rho_t,{\boldsymbol j}_t)\, \mathrm{d} t \leq \liminf_{n\to\infty} \int_a^b \mathscr R(\rho^n_t,{\boldsymbol j}^n_t)\, \mathrm{d} t .
\end{equation}
\end{prop}
\begin{proof}
Let us first remark that the mass conservation property of the continuity equation yields
\begin{equation}
\label{eq:62}
\rho_t^n(V)=\rho_a^n(V)\le M_1\quad\text{for every }t\in [a,b],\ n\in \mathbb{N}
\end{equation}
for a suitable finite constant $M_1$ independent of $n$.
We deduce that for every $t\in [a,b]$ the measures ${\boldsymbol\vartheta}_{\rho_t^n}^\pm$ have total mass bounded by $M_1 \|\kappa_V\|_\infty$, so that estimate \eqref{eq:31} for $y=(c,c)\in D(\upalpha_*)$ yields
\begin{equation}
\label{eq:63}
\boldsymbol\upnu_{\rho^n_t}(E)= \upalpha[{\boldsymbol\vartheta}^+_{\rho_t^n},{\boldsymbol\vartheta}^-_{\rho_t^n}|{\boldsymbol\vartheta}_\pi](E) \le M_2 \quad \text{for every }t\in [a,b],\ n\in \mathbb{N},
\end{equation}
where $M_2:=2 c\,M_1 \|\kappa_V\|_\infty -\upalpha_*(c,c){\boldsymbol\vartheta}_\pi(E)$. Jensen's inequality \eqref{eq:79} and the monotonicity property \eqref{eq:80} yield
\begin{equation}
\label{eq:64}
\mathscr R(\rho^n_t,{\boldsymbol j}^n_t)\ge \frac12 \hat\Psi\Bigl(2{\boldsymbol j}_t^n(E),\boldsymbol\upnu_{\rho^n_t}(E)\Bigr)\ge \frac12 \hat\Psi\Bigl(2{\boldsymbol j}_t^n(E),M_2\Bigr)= \frac12 \Psi\Bigl(\frac{2{\boldsymbol j}_t^n(E)} {M_2}\Bigr) M_2,
\end{equation}
with $\hat\Psi$ the perspective function associated with $\Psi$, cf.~\eqref{eq:76}. Since $\Psi$ has superlinear growth, we deduce that the sequence of functions $t\mapsto |{\boldsymbol j}_t^n|(E)$ is equi-integrable. Since the sequence $(\rho_a^n)_n$, with $\rho_a^n = u_a^n \pi \ll \pi$, is relatively compact with respect to setwise convergence, by Theorems \ref{thm:equivalence-weak-compactness}(6) and \ref{thm:L1-weak-compactness}(3) there exist a convex superlinear function $\upbeta:\mathbb{R}_+\to\mathbb{R}_+$ and a constant $M_3<{+\infty}$ such that
\begin{equation}
\label{eq:58}
\mathscr F_\upbeta(\rho^n_a|\pi)= \int_V \upbeta(u_a^n)\,\mathrm{d}\pi\le M_3\quad \text{for every }n\in \mathbb{N}.
\end{equation}
Possibly adding $M_1$ to $M_3$, it is not restrictive to assume that $\upbeta'(r)\ge1$. We can then apply Lemma \ref{le:slowly-increasing-entropy} and find a smooth convex superlinear function $\upomega:\mathbb{R}_+\to\mathbb{R}_+$ such that \eqref{eq:41} holds.
In particular,
\begin{align}
\label{eq:59}
\int_V \upomega(u_a^n)\,\mathrm{d}\pi&\le M_1,\\
\notag \int_a^b \iint_E {\mathrm D}^-_\upomega(u_r^n(x),u_r^n(y))\,{\boldsymbol\vartheta}_\pi(\mathrm{d} x,\mathrm{d} y)\,\mathrm{d} r&\le \int_a^b \iint_E (u_r^n(x)+u^n_r(y))\,{\boldsymbol\vartheta}_\pi(\mathrm{d} x,\mathrm{d} y)\,\mathrm{d} r \\&\leq 2(b-a)M_1\|\kappa_V\|_\infty.\label{eq:60}
\end{align}
By Corollary~\ref{th:chain-rule-bound2} we obtain
\begin{equation}
\label{eq:61}
\int_V \upomega(u_t^n)\,\mathrm{d}\pi\le M+(b-a)M_1 \|\kappa_V\|_\infty + M_1\quad \text{for every }t\in [a,b].
\end{equation}
By \eqref{est:ct-eq-TV} we deduce that
\begin{equation}
\label{eq:65}
\|u^n_t-u^n_s\|_{L^1(V,\pi)}\le \zeta(s,t)\quad \text{where}\quad \zeta(s,t): = 2\sup_{n\in \mathbb{N}} \int_{s}^{t} |{\boldsymbol j}_r^n|(E)\,\mathrm{d} r \,.
\end{equation}
Since $t\mapsto |{\boldsymbol j}^n_t|(E)$ is equi-integrable, we have
\[
\lim_{(s,t)\to (r,r)} \zeta(s,t) =0 \qquad \text{for all } r \in [a,b]\,.
\]
We conclude that the sequence of maps $(u_t^n)_{t\in [a,b]}$ satisfies the conditions of the compactness result \cite[Prop.\ 3.3.1]{AmbrosioGigliSavare08}, which yields the existence of a (not relabelled) subsequence and of an $L^1(V,\pi)$-continuous (thus also weakly continuous) function $[a,b]\ni t \mapsto u_t\in L^1(V,\pi)$ such that $u^n_t\rightharpoonup u_t$ weakly in $L^1(V,\pi)$ for every $t\in [a,b]$. By \eqref{eq:72} we also deduce that \eqref{converg-rho-n} holds, i.e.
\[
\rho_t^n\to \rho_t=u_t\pi\quad\text{setwise in ${\mathcal M}^+(V)$ for all $t\in[a,b]$}.
\]
It is also clear that for every $t\in [a,b]$ we have ${\boldsymbol\vartheta}_{\rho_t^n}^\pm\to{\boldsymbol\vartheta}_{\rho_t}^\pm$ setwise.
The Dominated Convergence Theorem and~\eqref{eq:70}, \eqref{eq:85} imply that the corresponding measures ${\boldsymbol\vartheta}_{\rho^n,\lambda}^\pm$ converge setwise to ${\boldsymbol\vartheta}_{\rho,\lambda}^\pm$, and are therefore equi-absolutely continuous with respect to~${\boldsymbol\vartheta}_{\pi,\lambda}=\lambda\otimes{\boldsymbol\vartheta}_\pi$ (recall \eqref{eq:73bis}). Let us now show that the sequence $({\boldsymbol j}^n_\lambda)_{n}$ is also equi-absolutely continuous with respect to~${\boldsymbol\vartheta}_{\pi,\lambda}$, so that \eqref{converg-j-n} holds up to extracting a further subsequence. Selecting a constant $c>0$ sufficiently large so that $\upalpha(u_1,u_2)\le c(1+u_1+u_2)$, the trivial estimate $\boldsymbol\upnu_\rho\le c({\boldsymbol\vartheta}_\pi+{\boldsymbol\vartheta}_\rho^-+{\boldsymbol\vartheta}_\rho^+)$ and the monotonicity property \eqref{eq:80} yield
\begin{equation}
\label{eq:66}
M\ge \int_a^b\mathscr R(\rho^n_t,{\boldsymbol j}^n_t )\,\mathrm{d} t= \frac12\mathscr F_\Psi(2{\boldsymbol j}^n_\lambda |\boldsymbol\upnu_{\rho^n_\lambda})\ge \mathscr F_\Psi({\boldsymbol j}^n_\lambda |{\boldsymbol \varsigma}^n ),\quad {\boldsymbol \varsigma}^n :=c({\boldsymbol\vartheta}_{\pi,\lambda}+{\boldsymbol\vartheta}_{\rho^n,\lambda}^++{\boldsymbol\vartheta}_{\rho^n,\lambda}^-).
\end{equation}
For every $B\in \mathfrak A\otimes \mathfrak B$, $\mathfrak A$ being the Borel $\sigma$-algebra of $[a,b]$, with ${\boldsymbol\vartheta}_{\pi,\lambda}(B)>0$, Jensen's inequality \eqref{eq:79} yields
\begin{equation}
\label{eq:111}
\Psi\biggl(\frac{{\boldsymbol j}_\lambda^n(B)}{{\boldsymbol \varsigma}^n(B)}\biggr) {\boldsymbol \varsigma}^n(B)\le \mathscr F_\Psi({\boldsymbol j}_\lambda^n\mres B|{\boldsymbol \varsigma}^n\mres B)\le M.
\end{equation}
Denoting by $U:\mathbb{R}_+\to\mathbb{R}_+$ the inverse function of $\Psi$, we thus find
\begin{equation}
\label{eq:112}
{\boldsymbol j}_\lambda^n(B)\le {\boldsymbol \varsigma}^n(B)\,U\biggl(\frac{M}{{\boldsymbol \varsigma}^n(B)}\biggr).
\end{equation}
Since $\Psi$ is superlinear, $U$ is sublinear, so that
\begin{equation}
\label{eq:118}
\lim_{\delta\downarrow0}\delta U(M/\delta)=0.
\end{equation}
For every $\varepsilon>0$ there exists $\delta_0>0$ such that $\delta U(M/\delta)\le \varepsilon$ for every $\delta\in (0,\delta_0)$. Since ${\boldsymbol \varsigma}^n$ is equi-absolutely continuous with respect to~${\boldsymbol\vartheta}_{\pi,\lambda}$, we can also find $\delta_1>0$ such that ${\boldsymbol\vartheta}_{\pi,\lambda} (B)<\delta_1$ yields ${\boldsymbol \varsigma}^n(B)\le \delta_0$. By \eqref{eq:112} we eventually conclude that ${\boldsymbol j}^n_\lambda(B)\le \varepsilon$. It is then easy to pass to the limit in the integral formulation \eqref{2ndfundthm} of the continuity equation. Finally, concerning \eqref{ineq:lsc-R}, it is sufficient to use the equivalent representation given by \eqref{eq:96}.
\end{proof}
\subsection{Definition and properties of the cost}\label{sec:cost}
We now define the Dynamical-Variational Transport cost $\mathscr{W} : (0,{+\infty}) \times {\mathcal M}^+(V)\times {\mathcal M}^+(V) \to [0,{+\infty}]$ by
\begin{equation}
\label{def-psi-rig}
\DVT \tau{\rho_0}{\rho_1} : = \inf\left\{ \int_0^\tau \mathscr R(\rho_t,{\boldsymbol j}_t) \,\mathrm{d} t \, : \, (\rho,{\boldsymbol j} ) \in \mathcal{CE}(0,\tau;\rho_0,\rho_1) \right\}\,.
\end{equation}
In studying the properties of $\mathscr{W}$, we will also often use the notation
\begin{equation}
\label{adm-curves}
\ADM 0\tau{\rho_0}{\rho_1}: = \biggl\{ (\rho,{\boldsymbol j} )\in\mathcal{CE}(0,\tau)\ :\ \rho(0)=\rho_0, \ \rho(\tau) = \rho_1 \biggr\}\,,
\end{equation}
with $\mathcal{CE}(0,\tau)$ the class from \eqref{def:Aab}. For given $\tau>0$ and $\rho_0,\, \rho_1 \in {\mathcal M}^+(V)$, if the set $\ADM 0\tau{\rho_0}{\rho_1}$ is non-empty, then it contains a minimizer for $\DVT {\tau}{\rho_0}{\rho_1}$. This is stated in the following result, which is a direct consequence of Proposition~\ref{prop:compactness}.
\begin{cor}[Existence of minimizers]
\label{c:exist-minimizers}
If $\rho_0,\rho_1\in {\mathcal M}^+(V)$ and $\ADM 0\tau{\rho_0}{\rho_1} $ is not empty, then the infimum in~\eqref{def-psi-rig} is achieved.
\end{cor}
\begin{remark}[Scaling invariance]
\label{rem:scaling-invariance}
Let us consider the perspective function $\hat \Psi(r,s)$ associated with $\Psi$ as in \eqref{eq:76}, $\hat \Psi(r,s)=s\Psi(r/s)$ if $s>0$. We call $\mathscr R_s(\rho,{\boldsymbol j})$ the dissipation functional induced by $\hat{\Psi}(\cdot, s)$, with corresponding cost $\mathscr W_s$. For every $\tau>0$ and $\rho_0,\rho_1\in {\mathcal M}^+(V)$, a rescaling argument yields
\begin{equation}
\label{eq:188}
\mathscr{W}(\tau,\rho_0, \rho_1) =\mathscr W_{\tau/\sigma}(\sigma,\rho_0,\rho_1) = \inf\left\{ \int_0^{\sigma} \mathscr R_{\tau/\sigma}(\rho_t,{\boldsymbol j}_t) \,\mathrm{d} t \, : \, (\rho,{\boldsymbol j} ) \in \mathcal{CE}(0,\sigma;\rho_0,\rho_1) \right\}\,.
\end{equation}
In particular, choosing $\sigma=1$ we find
\begin{equation}
\label{eq:188bis}
\mathscr{W}(\tau,\rho_0, \rho_1) =\mathscr W_{\tau}(1,\rho_0,\rho_1).
\end{equation}
Since $\hat\Psi(\cdot,\tau)$ is convex, lower semicontinuous, and decreasing with respect to~$\tau$, we find that $\tau\mapsto \mathscr{W}(\tau,\rho_0, \rho_1) $ is decreasing and convex as well.
\end{remark}
\par
Currently, proving that \emph{any} pair of measures can be connected by a curve with finite action $\int \mathscr R$ under general conditions on $V$, $\Psi$ and $\upalpha$ is an open problem: in other words, in the general case we cannot exclude that $\ADM 0\tau{\rho_0}{\rho_1} = \emptyset$, which would make $\DVT {\tau}{\rho_0}{\rho_1} = {+\infty}$. Nonetheless, in a more specific situation, Proposition \ref{prop:sufficient-for-connectivity} below provides sufficient conditions for this connectivity property, between two measures $\rho_0, \, \rho_1 \in {\mathcal M}^+(V) $ with the same mass and such that $\rho_i \ll \pi$ for $i\in \{0,1\}$. As a preliminary, we give the following definition.
\begin{definition}
Let $q\in (1,{+\infty})$. We say that the measures $(\pi,{\boldsymbol\vartheta}_\pi) $ satisfy a \emph{$q$-Poincar\'e inequality} if there exists a constant $C_P>0$ such that for every $\xi \in L^q(V;\pi)$ with $\int_{V}\xi(x)\, \pi(\mathrm{d} x) =0$ there holds
\begin{equation}
\label{q-Poinc}
\int_{V} |\xi(x)|^q\, \pi(\mathrm{d} x) \leq C_P \int_{E} |\dnabla \xi(x,y)|^q\, {\boldsymbol\vartheta}_\pi(\mathrm{d} x, \mathrm{d} y) .
\end{equation}
\end{definition}
We are now in a position to state the connectivity result, where we specialize the discussion to dissipation densities with $p$-growth for some $p \in (1,{+\infty})$.
\begin{prop}
\label{prop:sufficient-for-connectivity}
Suppose that
\begin{equation}
\label{psi-p-growth}
\exists\, p \in (1,{+\infty}), \, \overline{C}_p>0 \ \ \forall\,r \in \mathbb{R} \, : \qquad \Psi(r) \leq \overline{C}_p(1{+}|r|^p),
\end{equation}
and that the measures $(\pi,{\boldsymbol\vartheta}_\pi) $ satisfy a $q$-Poincar\'e inequality for $q=\tfrac p{p-1}$.
Let $\rho_0, \rho_1 \in {\mathcal M}^+(V) $ with the same mass be given by $\rho_i = u_i \pi$, with positive $u_i \in L^1(V; \pi) \cap L^\infty (V; \pi) $, for $i \in \{0,1\}$. Then, for every $\tau>0$ the set $\ADM 0\tau{\rho_0}{\rho_1} $ is non-empty, and thus $\DVT \tau{\rho_0}{\rho_1}<{+\infty}$.
\end{prop}
We postpone the proof of Proposition \ref{prop:sufficient-for-connectivity} to Appendix \ref{s:app-2}, where some preliminary results, also motivating the role of the $q$-Poincar\'e inequality, will be provided.
\subsection{Abstract-level properties of \texorpdfstring{$\mathscr{W}$}W}
\label{ss:4.5}
The main result of this section collects a series of properties of the cost that will play a key role in the study of the \emph{Minimizing Movement} scheme \eqref{MM-intro}. Indeed, as already hinted in the Introduction, the analysis that we will carry out in Section \ref{s:MM} ahead might well be extended to a scheme set up in a general topological space, endowed with a cost functional enjoying the properties \eqref{assW} below. We will now check them for the cost $\mathscr{W}$ associated with a generalized gradient structure $(\mathscr E,\mathscr R,\mathscr R^*)$ fulfilling \textbf{Assumptions~\ref{ass:V-and-kappa} and \ref{ass:Psi}}. In this section \emph{all convergences will be with respect to the setwise topology}.
\begin{theorem}
\label{thm:props-cost}
The cost $\mathscr{W}$ enjoys the following properties:
\begin{subequations}\label{assW}
\begin{enumerate}[label={(\arabic*)}]
\item \label{tpc:1} For all $\tau>0,\, \rho_0,\, \rho_1 \in {\mathcal M}^+(V)$,
\begin{equation}\label{e:psi2}
\mathscr{W}(\tau,\rho_0,\rho_1)= 0 \ \Leftrightarrow \ \rho_0=\rho_1.
\end{equation}
\item \label{tpc:2} For all $\rho_1,\, \rho_2,\,\rho_3\in{\mathcal M}^+(V)$ and $\tau_1, \tau_2 \in (0,{+\infty})$ with $\tau=\tau_1 +\tau_2$,
\begin{equation}\label{e:psi3}
\mathscr{W}(\tau,\rho_1,\rho_3) \leq \mathscr{W}(\tau_1,\rho_1,\rho_2) + \mathscr{W}(\tau_2,\rho_2,\rho_3).
\end{equation}
\item \label{tpc:3} For $\tau_n\to \tau>0, \ \rho_0^n \to \rho_0, \ \rho_1^n \to \rho_1$ in ${\mathcal M}^+(V)$,
\begin{equation}\label{lower-semicont}
\liminf_{n \to {+\infty}} \mathscr{W}(\tau_n,\rho_0^n, \rho_1^n) \geq \mathscr{W}(\tau,\rho_0,\rho_1).
\end{equation}
\item \label{tpc:4} For all $\tau_n \downarrow 0$ and for all $(\rho_n)_n$, $ \rho \in {\mathcal M}^+(V)$,
\begin{equation}\label{e:psi4}
\sup_{n\in\mathbb{N}} \mathscr{W}(\tau_n, \rho_n,\rho) <{+\infty} \quad \Rightarrow \quad \rho_n \to \rho.
\end{equation}
\item \label{tpc:5} For all $\tau_n \downarrow 0$ and all $(\rho_n)_n$, $ (\nu_n)_n \subset {\mathcal M}^+(V)$ with $\rho_n \to \rho, \ \nu_n \to\nu$,
\begin{equation}\label{e:psi6}
\limsup_{n\to\infty} \mathscr{W}(\tau_n, \rho_n,\nu_n) <{+\infty} \quad \Rightarrow \quad \rho = \nu.
\end{equation}
\end{enumerate}
\end{subequations}
\end{theorem}
\begin{proof}
\textit{\ref{tpc:1}} Since $\Psi(s)$ is strictly positive for $s\neq0$, it is immediate to check that $\mathscr R(\rho,{\boldsymbol j})=0\ \Rightarrow\ {\boldsymbol j}=0$. For an optimal pair $(\rho,{\boldsymbol j} ) $ satisfying $ \int_0^\tau \mathscr R(\rho_t,{\boldsymbol j}_t)\,\mathrm{d} t =0 $ we deduce that ${\boldsymbol j}_t=0$ for a.e.~$t\in (0,\tau)$. The continuity equation then implies $\rho_0=\rho_1$.
\textit{\ref{tpc:2}} This can easily be checked by using the existence of minimizers for $\mathscr{W}(\tau,\rho_0, \rho_1)$.
\textit{\ref{tpc:3}} Assume without loss of generality that $\liminf_{n\to+\infty} \mathscr{W}(\tau_n,\rho_0^n, \rho_1^n) < \infty$. Setting $\overline \tau = \sup_n \tau_n$ and using \eqref{eq:188bis} together with the monotonicity of $s\mapsto \hat\Psi(\cdot,s)$, for every $n\in \mathbb{N}$ we have
\[
\mathscr{W}(\tau_n,\rho_0^n, \rho_1^n) = \mathscr{W}_{\tau_n} (1,\rho_0^n, \rho_1^n) \stackrel{(*)}= \int_0^{1} \mathscr R_{\tau_n}(\rho_t^n,{\boldsymbol j}_t^n) \,\mathrm{d} t \geq \int_0^{1} \mathscr R_{\overline \tau}(\rho_t^n,{\boldsymbol j}_t^n) \,\mathrm{d} t,
\]
where the identity $(*)$ holds for an optimal pair $(\rho^n,{\boldsymbol j}^n) \in \mathcal{CE}(0,1;\rho_0^n,\rho_1^n)$. Applying Proposition~\ref{prop:compactness}, we obtain the existence of $(\rho, {\boldsymbol j} ) \in \mathcal{CE}(0,1;\rho_0,\rho_1)$ such that, up to a subsequence,
\begin{equation}
\label{tilde-rho-j-n}
\begin{aligned}
&{\rho}_s^n \to {\rho}_s \text{ setwise in } {\mathcal M}^+(V) \quad \text{for all } s \in [0,1]\,,\\
&{{\boldsymbol j}}^n \to {{\boldsymbol j}} \text{ setwise in } {\mathcal M}([0,1]{\times}E)\,.
\end{aligned}
\end{equation}
Arguing as in Proposition~\ref{prop:compactness} and using the joint lower semicontinuity of $\hat \Psi$, we find that
\[
\liminf_{n\to\infty} \int_0^{1} \mathscr R_{\tau_n}\left( {\rho}_s^n , {{\boldsymbol j}}_s^n \right) \mathrm{d} s \geq \int_0^{1} \mathscr R_\tau\left( {\rho}_s , {{\boldsymbol j}}_s \right) \mathrm{d} s \ge \mathscr W_\tau(1,\rho_0,\rho_1)= \mathscr{W}(\tau,\rho_0,\rho_1).
\]
\textit{\ref{tpc:4}} If we denote by $\mathscr R_0$ the dissipation associated with $\hat\Psi(\cdot,0)$, given by $\hat\Psi(w,0) = {+\infty}$ for $w\not=0$ and $\hat\Psi(0,0)=0$, we find
\begin{equation}
\label{eq:189}
\mathscr R_0(\rho,{\boldsymbol j})<{+\infty}\quad\Rightarrow\quad {\boldsymbol j}=0.
\end{equation}
By the same argument as for part~\ref{tpc:3}, every subsequence of $(\rho_n)_n$ has a further subsequence converging in the setwise topology; the lower semicontinuity result in the proof of part~\ref{tpc:3} shows that any limit point must coincide with $\rho$.
\textit{\ref{tpc:5}} The argument combines \eqref{eq:189} and part~\ref{tpc:3}.
\end{proof}
\subsection{The action functional \texorpdfstring{$\mathbb W$}W and its properties}
The construction of $\mathscr R$ and $\mathscr{W}$ above proceeded in the order $\mathscr R \rightsquigarrow \mathscr{W}$: we first constructed $\mathscr R$, and then $\mathscr{W}$ was defined in terms of~$\mathscr R$. It is a natural question whether one can invert this construction: given $\mathscr{W}$, can one reconstruct~$\mathscr R$, or at least integrals of the form $\int_a^b \mathscr R\,\mathrm{d} t$? The answer is positive, as we show in this section. Given a functional $\mathscr{W}$ satisfying the properties~\eqref{assW}, we define the `$\mathscr{W}$-action' of a curve $\rho:[a,b]\to{\mathcal M}^+(V)$ as
\begin{equation}
\label{def-tot-var}
\VarW \rho ab: = \sup \left \{ \sum_{j=1}^M \DVT{t^j - t^{j-1}}{\rho(t^{j-1})}{\rho(t^j)} \, : \ (t^j)_{j=0}^M \in \mathfrak{P}_f([a,b]) \right\} ,
\end{equation}
for all $[a,b]\subset [0,T]$, where $\mathfrak{P}_f([a,b])$ denotes the set of all partitions of a given interval $[a,b]$. If $\mathscr{W}$ is defined by~\eqref{def-psi-rig}, then each term in the sum above is defined as an optimal version of $\int_{t^{j-1}}^{t^j} \mathscr R(\rho_t,\cdot)\,\mathrm{d} t$, and we might expect that $\VarW \rho ab$ is an optimal version of $\int_a^b \mathscr R(\rho_t,\cdot)\,\mathrm{d} t$.
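This expectation can be illustrated in a toy one-dimensional setting. Purely for illustration (the following model and code are ours, not part of the argument), take the model cost $\mathscr{W}(\tau,a,b)=(b-a)^2/(2\tau)$ for scalar ``states'', mimicking the quadratic case $\Psi(s)=s^2/2$ with $\upalpha\equiv1$: the partition sums in \eqref{def-tot-var} then increase under refinement and converge to the action $\int \tfrac12|\dot u|^2\,\mathrm{d} t$ of a smooth curve.

```python
# Toy model cost (an assumption, not the paper's W): W(tau, a, b) = (b - a)^2 / (2 tau).
# It satisfies the subadditivity (e:psi3) by the Cauchy-Schwarz inequality.
def W(tau, a, b):
    return (b - a) ** 2 / (2.0 * tau)

# Partition sum defining the W-action of a curve u over the given nodes.
def var_W(u, nodes):
    return sum(W(t1 - t0, u(t0), u(t1)) for t0, t1 in zip(nodes, nodes[1:]))

# Smooth curve u(t) = t^2 on [0, 1]; its action is int_0^1 |u'(t)|^2 / 2 dt = 2/3.
u = lambda t: t * t
sums = []
for n in (4, 16, 64, 256):
    nodes = [j / n for j in range(n + 1)]
    sums.append(var_W(u, nodes))

# On the uniform n-partition the sum equals 2/3 - 1/(6 n^2): it increases with n
# (Jensen's inequality gives sums below the action) and converges to 2/3.
assert all(s0 < s1 for s0, s1 in zip(sums, sums[1:]))
assert abs(sums[-1] - 2.0 / 3.0) < 1e-4
print(sums[-1])
```

The monotone convergence of the partition sums towards the action of the curve is exactly the behaviour that Proposition~\ref{t:R=R} below establishes in the measure-valued setting.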
This is indeed the case, as is illustrated by the following analogue of~\cite[Th.~5.17]{DolbeaultNazaretSavare09}:
\begin{prop}
\label{t:R=R}
Let $\mathscr{W}$ be given by~\eqref{def-psi-rig}, and let $\rho:[0,T]\to {\mathcal M}^+(V)$. Then $\VarW \rho 0T<{+\infty}$ if and only if there exists a measurable map ${\boldsymbol j} :[0,T]\to {\mathcal M}(E)$ such that $(\rho,{\boldsymbol j} )\in\mathcal{CE}(0,T)$ and $\int_0^T \mathscr R(\rho_t,{\boldsymbol j}_t)\,\mathrm{d} t<{+\infty}$\,. In that case,
\begin{equation}
\label{calR-leq-VarW}
\VarW \rho0T\leq \int_0^T \mathscr R(\rho_t,{\boldsymbol j}_t)\,\mathrm{d} t,
\end{equation}
and there exists a unique ${\boldsymbol j}_{\rm opt}$ for which equality is achieved. The optimal ${\boldsymbol j}_{\rm opt}$ is skew-symmetric, i.e.\ ${\boldsymbol j}_{\rm opt}= \tj_{\rm opt}$ (cf.~Remark~\ref{rem:skew-symmetric}).
\end{prop}
Prior to proving Proposition \ref{t:R=R}, we establish the following approximation result.
\begin{lemma}
\label{l:convergence-of-interpolations}
Let $\rho:[0,T]\to {\mathcal M}^+(V)$ satisfy $\VarW {\rho}0T<{+\infty}$. For a sequence of partitions $P_n =(t_n^j)_{j=0}^{M_n}\in \mathfrak{P}_f([0,T])$ with fineness $\tau_n = \max_{j=1,\ldots, M_n} (t_n^j{-}t_n^{j-1})$ converging to zero, let $\rho^n :[0,T]\to {\mathcal M}^+(V)$ satisfy
\[
\rho^n(t_n^j) = \rho(t_n^j)\quad\text{for all } j =1,\ldots, M_n \qquad\text{and}\qquad \sup\nolimits_{n\in\mathbb{N}} \VarW {\rho^n}0T < {+\infty}.
\]
Then $\rho^n(t) \to \rho(t)$ setwise for all $t\in[0,T]$ as $n\to\infty$.
\end{lemma}
\begin{proof}
First of all, observe that, by the symmetry of $\Psi$, the time-reversed curve $\check \rho(t):= \rho(T-t)$ also satisfies $\VarW {\check{\rho}}0T<{+\infty}$.
Let $\piecewiseConstant {\mathsf{t}}{n}$ and $\underpiecewiseConstant {\mathsf{t}}{n}$ be the piecewise constant interpolants associated with the partitions $P_n$, cf.\ \eqref{nodes-interpolants}. Fix $t\in [0,T]$; we estimate
\begin{align*}
\mathscr{W}\bigl(2(\piecewiseConstant\mathsf{t} n-t),\rho^n(t),\rho(t)\bigr) &\stackrel{(1)}{\leq} \mathscr{W}\bigl(\piecewiseConstant\mathsf{t} n-t,\rho^n(t),\rho^n(\piecewiseConstant {\mathsf{t}}{n}(t))\bigr) + \mathscr{W}\bigl(\piecewiseConstant\mathsf{t} n-t,\rho(\piecewiseConstant {\mathsf{t}}{n}(t)),\rho(t)\bigr) \\
&= \mathscr{W}\bigl(\piecewiseConstant\mathsf{t} n-t,\rho^n(t),\rho^n(\piecewiseConstant {\mathsf{t}}{n}(t))\bigr) + \mathscr{W}\bigl(\piecewiseConstant\mathsf{t} n-t,\check \rho(T-\piecewiseConstant {\mathsf{t}}{n}(t)),\check \rho(T-t)\bigr)\\
&\leq \VarW {\rho^n}t{\piecewiseConstant {\mathsf{t}}{n}(t)} + \VarW{\check \rho}{T-\piecewiseConstant {\mathsf{t}}{n}(t)}{T-t}\\
&\leq \sup_{n\in\mathbb{N}} \VarW {\rho^n} 0T + \VarW{\check \rho}0T =: C<{+\infty},
\end{align*}
where (1) follows from property \eqref{e:psi3} of $\mathscr{W}$. Consequently, by property~\eqref{e:psi4} it follows that $\rho^n(t) \to \rho(t)$ setwise in ${\mathcal M}^+(V)$ for all $t\in[0,T]$.
\end{proof}
We are now in a position to prove Proposition~\ref{t:R=R}:
\begin{proof}[Proof of Proposition~\ref{t:R=R}]
One implication is straightforward: if a pair $(\rho,{\boldsymbol j} )$ exists, then
\[
\mathscr{W}(t-s,\rho_s,\rho_t) \stackrel{\eqref{def-psi-rig}}\leq \int_s^t \mathscr R(\rho_r,{\boldsymbol j}_r)\,\mathrm{d} r,\qquad \text{for all }0\leq s<t\leq T,
\]
and therefore $\VarW \rho0T<{+\infty}$ and \eqref{calR-leq-VarW} holds. To prove the other implication, assume that $\VarW \rho0T<{+\infty}$. Choose a sequence of partitions $P_n =(t_n^j)_{j=0}^{M_n}\in \mathfrak{P}_f([0,T])$ that becomes dense in the limit $n\to\infty$.
For each $n\in\mathbb{N}$, construct a pair $(\rho^n,{\boldsymbol j}^n)\in \mathbb{C}E 0T$ as follows: on each time interval $[t_n^{j-1},t_n^j]$, let $(\rho^n,{\boldsymbol j}^n)$ be given by Corollary~\ref{c:exist-minimizers} as the minimizer under the constraint $\rho^n(t_n^{j-1}) = \rho(t_n^{j-1})$ and $\rho^n(t_n^j) = \rho(t_n^j)$, namely
\begin{equation}
\label{minimizer-cost}
\DVT{t_n^{j}{-}t_n^{j-1}}{\rho(t_n^{j-1})}{ \rho(t_n^{j})} = \int_{t_n^{j-1}}^{t_n^{j}}\mathscr R(\rho_r^n,{\boldsymbol j}_r^n) \,\mathrm{d} r\,.
\end{equation}
By concatenating the minimizers on each of the intervals, a pair $(\rho^n,{\boldsymbol j}^n)\in \mathbb{C}E0T$ is obtained, thanks to Lemma \ref{l:concatenation&rescaling}. By construction we have the property
\begin{align}
\label{eq:VarW-rhon-calR}
&\VarW{\rho^n}0T = \int_0^T \mathscr R(\rho^n_t,{\boldsymbol j}^n_t)\,\mathrm{d} t.
\end{align}
By optimality we also have
\[
\VarW{\rho^n}{t_n^{j-1}}{t_n^j} = \mathscr{W}\bigl(t_n^j-t_n^{j-1},\rho(t_n^{j-1}),\rho(t_n^j)\bigr) \leq \VarW\rho{t_n^{j-1}}{t_n^j},
\]
which implies by summing that
\begin{equation}
\label{ineq:VarW-rhon-rho}
\VarW{\rho^n}0T\leq \VarW\rho0T.
\end{equation}
By Lemma~\ref{l:convergence-of-interpolations} we then find that $\rho^n(t)\to \rho(t)$ setwise as $n\to\infty$ for each $t\in[0,T]$. Applying Proposition~\ref{prop:compactness}, we find that ${\boldsymbol j}^n(\mathrm{d} t\,\mathrm{d} x\,\mathrm{d} y):= {\boldsymbol j}_t^n(\mathrm{d} x\,\mathrm{d} y)\,\mathrm{d} t$ converges setwise along a subsequence to a limit ${\boldsymbol j}$. The limit ${\boldsymbol j}$ can be disintegrated as ${\boldsymbol j}(\mathrm{d} t\,\mathrm{d} x\,\mathrm{d} y) = \lambda(\mathrm{d} t) \, {\boldsymbol j}_t(\mathrm{d} x\,\mathrm{d} y)$ for a measurable family $({\boldsymbol j}_t)_{t\in[0,T]}$, and the pair $(\rho,{\boldsymbol j})$ is an element of $\mathbb{C}E0T$.
In addition we have the lower-semicontinuity property
\begin{equation}
\label{ineq:lsc:j-tilde-j}
\liminf_{n\to\infty} \int_0^T \mathscr R(\rho^n_t,{\boldsymbol j}^n_t)\,\mathrm{d} t \geq \int_0^T \mathscr R(\rho_t,{\boldsymbol j}_t)\,\mathrm{d} t.
\end{equation}
We then have the series of inequalities
\begin{align*}
\VarW\rho 0T &\stackrel{\eqref{ineq:VarW-rhon-rho}} \geq \limsup_{n\to\infty} \VarW{\rho^n}0T \stackrel{\eqref{eq:VarW-rhon-calR}} = \limsup_{n\to\infty} \int_0^T \mathscr R(\rho^n_t,{\boldsymbol j}^n_t)\,\mathrm{d} t\\
& \stackrel{\eqref{ineq:lsc:j-tilde-j}}{\geq} \int_0^T \mathscr R(\rho_t, {\boldsymbol j}_t)\,\mathrm{d} t \stackrel{\eqref{calR-leq-VarW}}{\geq} \VarW \rho0T,
\end{align*}
which implies that $\int_0^T \mathscr R(\rho_t, {\boldsymbol j}_t)\,\mathrm{d} t = \VarW \rho0T$. Finally, the uniqueness of ${\boldsymbol j}$ is a consequence of the strict convexity of $\Upsilon(u_1,u_2,\cdot)$, cf.\ Lemma~\ref{lem:Upsilon-properties}. Similarly, the skew-symmetry of ${\boldsymbol j}$ follows from the strict convexity of $\Upsilon(u_1,u_2,\cdot)$, the symmetry of $\Upsilon(\cdot,\cdot,w)$, and the invariance of the continuity equation~\eqref{eq:ct-eq-def} under the `skew-symmetrization' ${\boldsymbol j} \mapsto \tj$, cf.\ Remark \ref{rem:skew-symmetric}.
\end{proof}
\section{The Fisher information \texorpdfstring{$\mathscr{D} $}{D} and the definition of solutions}
\label{s:Fisherinformation}
With the definitions and the properties that we established in the previous section, we have given a rigorous meaning to the first term in the functional $\mathscr L$ in~\eqref{eq:def:mathscr-L}. In this section we continue with the second term in the integral, often called \emph{Fisher information}, after the canonical version in diffusion problems~\cite{Otto01}.
Section \ref{ss:5.2} is devoted to
\begin{enumerate}[label=(\alph*)]
\item a rigorous definition of the Fisher information $\mathscr{D}(\rho)$ (Definition~\ref{def:Fisher-information}).
\end{enumerate}
In several practical settings, such as the proof of existence that we give in Section~\ref{s:MM}, it is important to have lower semicontinuity of $\mathscr{D}$: this is proved in Proposition \ref{PROP:lsc}.
\par We are then in a position to give
\begin{enumerate}[resume,label=(\alph*)]
\item a rigorous definition of solutions to the $(\mathscr E, \mathscr R, \mathscr R^*)$ system (Definition~\ref{def:R-Rstar-balance}).
\end{enumerate}
In Section~\ref{ss:def-sol-intro} we explained that the Energy-Dissipation balance approach to defining solutions is based on the fact that $\mathscr L(\rho,{\boldsymbol j} ) \geq 0$ for all $(\rho,{\boldsymbol j} )$ by the validity of a suitable chain-rule inequality.
\begin{enumerate}[resume,label=(\alph*)]
\item A rigorous proof of this chain-rule inequality, involving $\mathscr R$ and $\mathscr{D}$, is given in Corollary~\ref{cor:CH3}, which is based on Theorem~\ref{th:chain-rule-bound}.
\end{enumerate}
This establishes the inequality $\mathscr L(\rho,{\boldsymbol j} ) \geq 0$. Hence, we can rigorously deduce that the opposite inequality $\mathscr L(\rho,{\boldsymbol j} ) \leq 0$ characterizes the property that $(\rho,{\boldsymbol j} )$ is a solution to the $(\mathscr E, \mathscr R, \mathscr R^*)$ system. Theorem \ref{thm:characterization} provides an additional characterization of this solution concept.
\par Finally, in Sections~\ref{subsec:main-properties} and \ref{ss:5.4},
\begin{enumerate}[resume,label=(\alph*)]
\item we prove existence, uniqueness and stability of solutions under suitable convexity/l.s.c.~conditions on $\mathscr{D}$ (Theorems \ref{thm:existence-stability} and \ref{thm:uniqueness}). We also discuss their asymptotic behaviour and the role of the invariant measures $\pi$.
\end{enumerate}
Throughout this section we adopt \textbf{Assumptions~\ref{ass:V-and-kappa}, \ref{ass:Psi}, and~\ref{ass:S}}.
\subsection{The Fisher information \texorpdfstring{$\mathscr{D} $}{D}}
\label{ss:Fisher}
Formally, the Fisher information is the second term in~\eqref{eq:def:mathscr-L}, namely
\[
\mathscr{D}(\rho) = \mathscr R^*\Bigl(\rho,-\dnabla \upphi'(u)\Bigr) = \frac12 \iint_E \Psi^*\bigl( -(\upphi'(u(y))-\upphi'(u(x)))\bigr) \boldsymbol\upnu_\rho(\mathrm{d} x \, \mathrm{d} y),\qquad \rho = u\pi\, .
\]
In order to give a precise meaning to this formulation when $\upphi$ is not differentiable at $0$ (as, for instance, in the case of the Boltzmann entropy function~\eqref{logarithmic-entropy}), we use the function ${\mathrm D}_\upphi$ defined in \eqref{eq:182}.
\begin{definition}[The Fisher-information functional $\mathscr{D} $]
\label{def:Fisher-information}
The Fisher information $\mathscr{D}: \mathrm{D}(\mathscr E)\to [0,{+\infty}]$ is defined as
\begin{equation}
\label{eq:def:D}
\mathscr{D} (\rho) := \frac12\iint_E {\mathrm D}_\upphi\bigl(u(x),u(y)\bigr)\, {\boldsymbol\vartheta}_\pi(\mathrm{d} x\,\mathrm{d} y) \qquad \text{for } \rho = u\pi\,.
\end{equation}
\end{definition}
\begin{example}[The Fisher information in the quadratic and in the $\cosh$ case]
\label{ex:D}
For illustration we recall the two expressions for ${\mathrm D}_\upphi$ from Example~\ref{ex:Dpm} for the linear equation~\eqref{eq:fokker-planck} with quadratic and cosh-type potentials $\Psi^*$:
\begin{enumerate}
\item If $\Psi^*(s)=s^2/2$, then
\[
{\mathrm D}_\upphi(u,v) =
\begin{cases}
\frac{1}{2}(\log(u)-\log(v))(u-v) & \text{if } u,\, v>0, \\
0 & \text{if } u=v=0, \\
{+\infty} & \text{if } u=0 \text{ and } v \neq 0, \text{ or vice versa}.
\end{cases}
\]
\item If $\Psi^*(s)=4\bigl(\cosh(s/2)-1\bigr)$, then
\[
{\mathrm D}_\upphi(u,v) = 2\Bigl(\sqrt{u}-\sqrt{v}\Bigr)^2\qquad \forall\, (u,v) \in [0,{+\infty}) \times [0,{+\infty}).
\]
\end{enumerate}
In both of these examples ${\mathrm D}_\upphi$ is convex.
\end{example}
Let us discuss the lower-semicontinuity properties of $\mathscr{D}$. In accordance with the Minimizing-Movement approach carried out in Section \ref{ss:MM}, we will only be interested in lower semicontinuity of $\mathscr{D}$ along sequences with bounded energy $\mathscr E$. Since sublevel sets of the energy~$\mathscr E$ are relatively compact with respect to setwise convergence (by part~\ref{cond:setwise-compactness-superlinear} of Theorem~\ref{thm:L1-weak-compactness}), there is no difference between narrow and setwise lower semicontinuity of $\mathscr{D}$.
\begin{prop}[\textbf{Lower semicontinuity of $\mathscr{D} $}]
\label{PROP:lsc}
\upshape
Assume either that $\pi$ is purely atomic or that the function ${\mathrm D}_\upphi$ is convex on $\mathbb{R}_+^2$. Then $\mathscr{D}$ is (sequentially) lower semicontinuous with respect to setwise convergence, i.e.\ for all $(\rho^n)_n,\, \rho \in \mathrm{D}(\mathscr E)$,
\begin{equation}
\label{lscD}
\rho^n \to \rho \text{ setwise in } {\mathcal M}^+(V)\quad \Longrightarrow \quad \mathscr{D}(\rho) \leq \liminf_{n\to\infty} \mathscr{D}(\rho^n)\,.
\end{equation}
\end{prop}
\begin{proof}
When $\pi$ is purely atomic, setwise convergence implies pointwise convergence $\pi$-a.e.~of the densities, so that \eqref{lscD} follows by Fatou's Lemma.
A standard argument, still based on Fatou's Lemma, shows that the functional
\begin{equation}
\label{eq:98}
u\mapsto \iint_E {\mathrm D}_\upphi(u(x),u(y))\,{\boldsymbol\vartheta}_\pi(\mathrm{d} x,\mathrm{d} y)
\end{equation}
is lower semicontinuous with respect to the strong topology in $L^1(V,\pi)$: it is sufficient to check that $u_n\to u$ in $L^1(V,\pi)$ implies $(u_n^-,u_n^+)\to (u^-,u^+)$ in $L^1(E,{\boldsymbol\vartheta}_\pi)$. If ${\mathrm D}_\upphi$ is convex on $\mathbb{R}_+^2$, then the functional \eqref{eq:98} is also lower semicontinuous with respect to the weak topology in $L^1(V,\pi)$. On the other hand, since $\rho_n$ and $\rho$ are absolutely continuous with respect to~$\pi$, $\rho_n\to\rho$ setwise if and only if $\mathrm{d}\rho_n/\mathrm{d}\pi\rightharpoonup \mathrm{d}\rho/\mathrm{d}\pi$ weakly in $L^1(V,\pi)$ (see Theorem~\ref{thm:equivalence-weak-compactness}).
\end{proof}
\subsection{The definition of solutions: \texorpdfstring{$\mathscr R/\mathscr R^*$}{R/R*} Energy-Dissipation balance}
\label{ss:5.2}
We are now in a position to formalize the concept of solution.
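Before stating the definition, let us recall, at a purely formal level, the mechanism behind it; the computation below is heuristic, and its rigorous counterpart is the chain-rule inequality of Corollary~\ref{cor:CH3}. If $(\rho,{\boldsymbol j})\in\mathbb{C}E 0T$ with $\rho_t=u_t\pi$, the chain rule and the continuity equation formally give
\[
-\frac{\mathrm{d}}{\mathrm{d} t}\mathscr E(\rho_t)
 = \iint_E \bigl(-\dnabla \upphi'(u_t)\bigr)\,\mathrm{d} {\boldsymbol j}_t
 \le \mathscr R(\rho_t,{\boldsymbol j}_t)+\mathscr R^*\bigl(\rho_t,-\dnabla \upphi'(u_t)\bigr)
 = \mathscr R(\rho_t,{\boldsymbol j}_t)+\mathscr{D}(\rho_t)
\]
by the Fenchel--Young inequality, with equality precisely when ${\boldsymbol j}_t$ is the flux generated by the force $-\dnabla \upphi'(u_t)$. Integrating in time, the inequality `$\le$' holds along every curve in $\mathbb{C}E 0T$, and it is the corresponding \emph{equality} that singles out solutions.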
\begin{definition}[$(\mathscr E,\mathscr R,\mathscr R^*)$ Energy-Dissipation balance]
\label{def:R-Rstar-balance}
We say that a curve $\rho: [0,T] \to {\mathcal M}^+(V)$ is a solution of the $(\mathscr E,\mathscr R,\mathscr R^*)$ evolution system if it satisfies the {\em $(\mathscr E,\mathscr R,\mathscr R^*)$ Energy-Dissipation balance}:
\begin{enumerate}
\item $\mathscr E(\rho_0)<{+\infty}$;
\item there exists a measurable family $({\boldsymbol j}_t)_{t\in [0,T]} \subset {\mathcal M}(E)$ such that $(\rho,{\boldsymbol j})\in \mathbb{C}E0T$ with
\begin{equation}
\label{R-Rstar-balance}
\int_s^t \left( \mathscr R(\rho_r, {\boldsymbol j}_r) + \mathscr{D}(\rho_r) \right) \mathrm{d} r+ \mathscr E(\rho_t) = \mathscr E(\rho_s) \qquad \text{for all } 0 \leq s \leq t \leq T.
\end{equation}
\end{enumerate}
\end{definition}
\begin{remark}\label{rem:properties}
\begin{enumerate}
\item Since $(\rho,{\boldsymbol j})\in \mathbb{C}E 0T$, the curve $\rho$ is absolutely continuous with respect to the total variation distance.
\item The Energy-Dissipation balance \eqref{R-Rstar-balance} written for $s=0$ and $t=T$ implies that $(\rho,{\boldsymbol j})\in \mathbb{C}ER 0T$ as well. Moreover, $t\mapsto \mathscr E(\rho_t)$ takes finite values and is absolutely continuous on the interval $[0,T]$.
\item The chain-rule estimate \eqref{eq:CR2} implies the following important corollary:
\begin{cor}[Chain-rule estimate III]
\label{cor:CH3}
For any curve $(\rho,{\boldsymbol j})\in \mathbb{C}E 0T$,
\begin{equation}
\label{eq:CR3}
\mathscr L_T(\rho,{\boldsymbol j}):= \int_0^T \left( \mathscr R(\rho_r, {\boldsymbol j}_r) + \mathscr{D}(\rho_r) \right) \mathrm{d} r+ \mathscr E(\rho_T) -\mathscr E(\rho_0)\geq 0 .
\end{equation}
\end{cor}
\noindent It follows that the Energy-Dissipation balance \eqref{R-Rstar-balance} is equivalent to the Energy-Dissipation Inequality
\begin{equation}
\label{EDineq}
\mathscr L_T(\rho,{\boldsymbol j})\leq 0.
\end{equation}
\end{enumerate}
\end{remark}
Let us give an equivalent characterization of solutions to the $(\mathscr E,\mathscr R,\mathscr R^*)$ evolution system. Recalling the definition~\eqref{eq:184} of the map $\rmF$ in the interior of $\mathbb{R}_+^2$ and the definition~\eqref{eq:102} of ${\mathrm A}_\upphi$, we first note that $\rmF$ can be extended to a function defined on $\mathbb{R}_+^2$ with values in the extended real line $[-\infty,+\infty]$ by
\begin{equation}
\label{eq:101}
\mathrm F_0(u,v):=
\begin{cases}
(\Psi^*)'\big({\mathrm A}_\upphi(u,v)\big)\upalpha(u,v)&\text{if }\upalpha(u,v)>0,\\
0&\text{if }\upalpha(u,v)=0,
\end{cases}
\end{equation}
where we set $(\Psi^*)'(\pm\infty):=\pm\infty$. The function $\mathrm F_0$ is skew-symmetric.
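Indeed, since $\Psi^*$ is even, its derivative $(\Psi^*)'$ is odd (also under the convention $(\Psi^*)'(\pm\infty):=\pm\infty$); combining this with the symmetry of $\upalpha$ and the antisymmetry of ${\mathrm A}_\upphi$ under the exchange of its arguments, for $\upalpha(u,v)>0$ we obtain
\[
\mathrm F_0(v,u)=(\Psi^*)'\bigl({\mathrm A}_\upphi(v,u)\bigr)\upalpha(v,u)
=(\Psi^*)'\bigl(-{\mathrm A}_\upphi(u,v)\bigr)\upalpha(u,v)
=-\mathrm F_0(u,v),
\]
while both sides vanish when $\upalpha(u,v)=0$.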
\begin{theorem}
\label{thm:characterization}
A curve $(\rho_t)_{t\in [0,T]}$ in ${\mathcal M}^+(V)$ is a solution of the $(\mathscr E,\mathscr R,\mathscr R^*)$ system if and only if
\begin{enumerate}[label=(\arabic*)]
\item\label{thm:characterization-first} $\rho_t=u_t\pi\ll\pi$ for every $t\in [0,T]$ and $t\mapsto u_t$ is an absolutely continuous, a.e.~differentiable map with values in $L^1(V,\pi)$;
\item $\mathscr E(\rho_0)<{+\infty}$;
\item \label{thm:characterization-finite-F0} We have
\begin{equation}
\int_0^T \iint_E |\mathrm F_0(u_t(x),u_t(y))|\,{\boldsymbol\vartheta}_\pi(\mathrm{d} x,\mathrm{d} y)\,\mathrm{d} t<{+\infty};\label{eq:190}
\end{equation}
and
\begin{equation}
\label{eq:191}
{\mathrm D}_\upphi(u_t(x),u_t(y))={\mathrm D}^-_\upphi(u_t(x),u_t(y))\quad \text{for $\lambda\otimes{\boldsymbol\vartheta}_\pi$-a.e.~$(t,x,y)\in [0,T]\times E$}.
\end{equation}
In particular the complement $U'$ of the set
\begin{equation}
U:=\{(t,x,y)\in [0,T]\times E: \mathrm F_0(u_t(x),u_t(y))\in \mathbb{R}\}\label{eq:192}
\end{equation}
is $(\lambda \otimes {\boldsymbol\vartheta}_\pi)$-negligible, i.e.\ $\mathrm F_0$ takes finite values $(\lambda\otimes{\boldsymbol\vartheta}_\pi)$-a.e.~in $[0,T]\times E$;
\item\label{thm:characterization-last} Setting
\begin{equation}
\label{eq:183}
2{\boldsymbol j}_t(\mathrm{d} x,\mathrm{d} y)=-\mathrm F_0(u_t(x),u_t(y))\,{\boldsymbol\vartheta}_\pi(\mathrm{d} x,\mathrm{d} y),
\end{equation}
we have $(\rho,{\boldsymbol j})\in \mathbb{C}E 0T$. In particular,
\begin{equation}
\label{eq:179}
\dot u_t(x)=\int_V \mathrm F_0(u_t(x),u_t(y))\,\kappa(x,\mathrm{d} y) \quad\text{for $(\lambda \otimes \pi)$-a.e.~}(t,x)\in [0,T]\times V.
\end{equation}
\end{enumerate}
\end{theorem}
\begin{proof}
Let $\rho_t=u_t\pi$ be a solution of the $(\mathscr E,\mathscr R,\mathscr R^*)$ system with the corresponding flux ${\boldsymbol j}_t$. By Corollary \ref{cor:propagation-AC} we can find a skew-symmetric measurable map $\xi:(0,T)\times E\to \mathbb{R}$ such that ${\boldsymbol j}_\lambda=\xi\,\upalpha(u^-,u^+)\,\lambda\otimes{\boldsymbol\vartheta}_\pi$ and \eqref{eq:42}, \eqref{eq:45} hold. Taking into account that ${\mathrm D}^-_\upphi\le {\mathrm D}_\upphi$ and applying the equality case of Corollary \ref{th:chain-rule-bound2}, we complete the proof of one implication.
Suppose now that $\rho_t$ satisfies all the above conditions \ref{thm:characterization-first}--\ref{thm:characterization-last}; we want to apply formula \eqref{eq:CR} of Theorem \ref{th:chain-rule-bound} for $\upbeta=\upphi$. For this we write the shorthand $u^-,u^+$ for $u_t(x),u_t(y)$ and set $w=-\mathrm F_0(u^-,u^+)$. We verify the equality conditions~\eqref{eq:109} of Lemma~\ref{le:trivial-but-useful}:
\begin{itemize}
\item At $(t,x,y)$ where $\upalpha(u^-,u^+) = 0$, we have by definition $w = -\rmF_0(u^-,u^+)=0$;
\item At $(\lambda\otimes{\boldsymbol\vartheta}_\pi)$-a.e.\ $(t,x,y)$ where $\upalpha(u^-,u^+) >0$, $\rmF_0(u^-,u^+)$ is finite by condition~\ref{thm:characterization-finite-F0}, and by~\eqref{eq:101} it follows that $(\Psi^*)'\bigl({\mathrm A}_\upphi(u^-,u^+)\bigr)$ is finite and therefore ${\mathrm A}_\upphi(u^-,u^+)$ is finite. The final condition $-w=(\Psi^*)'\big({\mathrm A}_\upbeta(u,v)\big)\upalpha(u,v)$ then follows by the definition of $w$.
\end{itemize}
Therefore, by Lemma~\ref{le:trivial-but-useful}, at $(\lambda\otimes{\boldsymbol\vartheta}_\pi)$-a.e.\ $(t,x,y)$ we have
\begin{align*}
-\rmB_\upphi(u^-,u^+,w) = \Upsilon(u^-,u^+,-w)+{\mathrm D}_\upphi^-(u^-,u^+) \stackrel{\eqref{eq:191}}= \Upsilon(u^-,u^+,-w)+{\mathrm D}_\upphi(u^-,u^+).
\end{align*}
In particular $\rmB_\upphi$ is nonpositive, and the integrability condition \eqref{ass:th:CR} is trivially satisfied. Integrating~\eqref{eq:CR} in time we find~\eqref{R-Rstar-balance}.
\end{proof}
\begin{remark}
\label{rmk:why-interesting-1}
\upshape
By Theorem \ref{thm:characterization}\ref{thm:characterization-finite-F0}, along a solution $\rho_t = u_t \pi$ of the $(\mathscr E, \mathscr R, \mathscr R^*)$ system the functions ${\mathrm D}_\upphi$ and ${\mathrm D}^-_\upphi$ coincide. Recall that, in general, we only have ${\mathrm D}_\upphi^- \leq {\mathrm D}_\upphi$, and the inequality can be strict, as in the examples of the linear equation \eqref{eq:fokker-planck} with the Boltzmann entropy and the quadratic and $\cosh$-dissipation potentials discussed in Ex.\ \ref{ex:Dpm}. There, ${\mathrm D}_\upphi$ and ${\mathrm D}^-_\upphi$ differ on the boundary of $\mathbb{R}_+^2$. Therefore, \eqref{eq:191} encompasses the information that the pair $(u_t(x),u_t(y))$ stays in the interior of $\mathbb{R}_+^2$ $(\lambda{\otimes}{\boldsymbol\vartheta}_\pi)$-a.e.\ in $[0,T]\times E$.
\end{remark}
\subsection{Existence and uniqueness of solutions of the $(\mathscr E,\mathscr R,\mathscr R^*)$ system}
\label{subsec:main-properties}
Let us now collect a few basic structural properties of solutions of the $(\mathscr E,\mathscr R,\mathscr R^*)$ Energy-Dissipation balance. Recall that we always adopt \textbf{Assumptions~\ref{ass:V-and-kappa}, \ref{ass:Psi}, and~\ref{ass:S}}.
Following an argument by Gigli~\cite{Gigli10}, we first use the convexity of $\mathscr{D}$ to deduce uniqueness.
\begin{theorem}[Uniqueness]
\label{thm:uniqueness}
Suppose that $\mathscr{D}$ is convex and the energy density $\upphi$ is strictly convex. Suppose that $\rho^1,\, \rho^2$ satisfy the $(\mathscr E,\mathscr R,\mathscr R^*)$ Energy-Dissipation balance \eqref{R-Rstar-balance} and coincide at time zero. Then $\rho_t^1 = \rho_t^2$ for every $t\in [0,T]$.
\end{theorem}
\begin{proof}
Let ${\boldsymbol j}^i\in {\mathcal M}((0,T)\times E)$ satisfy $\mathscr L_t(\rho^i,{\boldsymbol j}^i)=0$ for every $t\in[0,T]$, and let us set
\begin{displaymath}
\rho_t:=\frac 12(\rho_t^1+\rho_t^2),\quad {\boldsymbol j}:=\frac12({\boldsymbol j}^1+{\boldsymbol j}^2).
\end{displaymath}
By the linearity of the continuity equation we have that $(\rho,{\boldsymbol j})\in \mathbb{C}E 0T$ with $\rho_0=\rho^1_0=\rho^2_0$, so that by convexity
\begin{align*}
\mathscr E(\rho_t) &\ge\mathscr E(\rho_0)- \int_0^t \left( \mathscr R(\rho_r, {\boldsymbol j}_r) + \mathscr{D}(\rho_r) \right) \mathrm{d} r \\&\ge \mathscr E(\rho_0)- \frac12\int_0^t \left( \mathscr R(\rho^1_r, {\boldsymbol j}^1_r) + \mathscr{D}(\rho^1_r) \right) \mathrm{d} r - \frac12\int_0^t \left( \mathscr R(\rho^2_r, {\boldsymbol j}^2_r) + \mathscr{D}(\rho^2_r) \right) \mathrm{d} r \\& =\frac 12\mathscr E(\rho^1_t)+\frac12\mathscr E(\rho^2_t).
\end{align*}
Since $\mathscr E$ is strictly convex (by the strict convexity of $\upphi$), we deduce $\rho^1_t=\rho^2_t$.
\end{proof}
\begin{theorem}[Existence and stability]
\label{thm:existence-stability}
Let us suppose that the Fisher-information functional $\mathscr{D}$ is lower semicontinuous with respect to setwise convergence (e.g.\ if $\pi$ is purely atomic, or if ${\mathrm D}_\upphi$ is convex, see Proposition \ref{PROP:lsc}).
\begin{enumerate}[label=(\arabic*)]
\item \label{thm:existence-stability:p1} For every $\rho_0\in {\mathcal M}^+(V)$ with $\mathscr E(\rho_0)<{+\infty}$ there exists a solution $\rho:[0,T]\to {\mathcal M}^+(V)$ of the $(\mathscr E,\mathscr R,\mathscr R^*)$ evolution system starting from $\rho_0$.
\item \label{thm:existence-stability:p2} Every sequence $(\rho^n_t)_{t\in [0,T]}$ of solutions to the $(\mathscr E,\mathscr R,\mathscr R^*)$ evolution system such that
\begin{equation}
\label{eq:195}
\sup_{n\in \mathbb{N}} \mathscr E(\rho^n_0)<{+\infty}
\end{equation}
has a subsequence converging setwise to a limit $(\rho_t)_{t\in [0,T]}$ for every $t\in [0,T]$.
\item \label{thm:existence-stability:p3} Let $(\rho^n_t)_{t\in [0,T]}$ be a sequence of solutions, with corresponding fluxes $({\boldsymbol j}^n_t)_{t\in[0,T]}$.
Let $\rho^n_t$ converge setwise to $\rho_t$ for every $t\in [0,T]$, and assume that
\begin{equation}
\lim_{n\to\infty}\mathscr E(\rho^n_0)=\mathscr E(\rho_0).\label{eq:194}
\end{equation}
Then $\rho$ is a solution as well, with a suitable flux ${\boldsymbol j}$, and the following additional convergence properties hold:
\begin{subequations}
\label{eq:196}
\begin{align}
\label{eq:196a}
\lim_{n\to\infty}\int_0^T\mathscr R(\rho_t^n,{\boldsymbol j}_t^n)\,\mathrm{d} t&= \int_0^T\mathscr R(\rho_t,{\boldsymbol j}_t)\,\mathrm{d} t,\\
\label{eq:196b}
\lim_{n\to\infty}\int_0^T\mathscr{D}(\rho_t^n)\,\mathrm{d} t&= \int_0^T\mathscr{D}(\rho_t)\,\mathrm{d} t,\\
\label{eq:196c}
\lim_{n\to\infty}\mathscr E(\rho^n_t)&=\mathscr E(\rho_t)\quad \text{for every }t\in [0,T].
\end{align}
\end{subequations}
If moreover $\mathscr E$ is strictly convex, then $\rho^n$ converges to $\rho$ uniformly in $[0,T]$ with respect to the total variation distance.
\end{enumerate}
\end{theorem}
\begin{proof}
Part \textit{\ref{thm:existence-stability:p2}} follows immediately from Proposition \ref{prop:compactness}.
For part \textit{\ref{thm:existence-stability:p3}}, the three statements of~\eqref{eq:196} first hold \emph{as `$\liminf\,\geq$' inequalities}: for~\eqref{eq:196a} this follows again from Proposition~\ref{prop:compactness}, for~\eqref{eq:196b} from Proposition~\ref{PROP:lsc}, and for~\eqref{eq:196c} from Lemma~\ref{l:lsc-general}. Using these inequalities to pass to the limit in the identity $\mathscr L_T(\rho^n,{\boldsymbol j}^n)=0$, we obtain that $\mathscr L_T(\rho,{\boldsymbol j})\le 0$. On the other hand, since $\mathscr L_T(\rho,{\boldsymbol j})\geq0$ by the chain-rule estimate~\eqref{eq:CR3}, standard arguments yield the equalities in~\eqref{eq:196}.
When $\mathscr E$ is strictly convex, we obtain the convergence in $L^1(V,\pi)$ of the densities $u^n_t=\mathrm{d} \rho^n_t/\mathrm{d}\pi$ for every $t\in [0,T]$. We then use the equicontinuity estimate \eqref{eq:65} of Proposition \ref{prop:compactness} to conclude uniform convergence of the sequence $(\rho^n)_n$ with respect to the total variation distance.
For part \textit{\ref{thm:existence-stability:p1}}, when the density $u_0$ of $\rho_0$ takes values in a compact interval $[a,b]$ with $0<a<b<\infty$, the existence of a solution follows by Theorem \ref{thm:sg-sol-is-var-sol} below. The general case follows by a standard approximation of $u_0$ by truncation, applying the stability properties of parts \textit{\ref{thm:existence-stability:p2}} and \textit{\ref{thm:existence-stability:p3}}.
\end{proof}
\subsection{Stationary states and attraction}
\label{ss:5.4}
Let us finally make a few comments on stationary measures and on the asymptotic behaviour of solutions of the $(\mathscr E,\mathscr R,\mathscr R^*)$ system. The definition of invariant measures was already given in Section~\ref{subsub:kernels}; we recall it here for convenience.
\begin{definition}[Invariant and stationary measures]
Let $\rho=u\pi\in \mathrm{D}(\mathscr E)$ be given.
\begin{enumerate}
\item We say that $\rho$ is \emph{invariant} if $\kernel\kappa\rho(\mathrm{d} x\,\mathrm{d} y )= \rho(\mathrm{d} x)\kappa(x,\mathrm{d} y)$ has equal marginals, i.e.\ ${\mathsf x}_\# \kernel\kappa\rho = {\mathsf y}_\# \kernel\kappa\rho$.
\item We say that $\rho$ is \emph{stationary} if the constant curve $\rho_t\equiv \rho$ is a solution of the $(\mathscr E,\mathscr R,\mathscr R^*)$ system.
\end{enumerate}
\end{definition}
Note that we always assume that $\pi$ is invariant (see Assumption~\ref{ass:V-and-kappa}).
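By way of illustration, consider the elementary two-point situation (the data below are chosen purely as an example): $V=\{1,2\}$, $\kappa(1,\cdot)=a\,\delta_2$ and $\kappa(2,\cdot)=b\,\delta_1$ with $a,b>0$, and $\pi=p_1\delta_1+p_2\delta_2$ with $p_1a=p_2b$, so that $\pi$ is invariant. For $\rho=u\pi$ the measure $\kernel\kappa\rho$ charges only the pairs $(1,2)$ and $(2,1)$, with masses $u_1p_1a$ and $u_2p_2b$ respectively, so that
\[
\rho\ \text{is invariant}\quad\Longleftrightarrow\quad u_1p_1a=u_2p_2b\quad\Longleftrightarrow\quad u_1=u_2,
\]
consistently with \eqref{eq:198} below: the densities of invariant measures are constant along the edges charged by the kernel.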
It is immediate to check that
\begin{align}
\label{eq:197}
\rho\text{ is stationary}\quad \Longleftrightarrow\quad \mathscr{D}(\rho)=0 \quad &\Longleftrightarrow\quad {\mathrm D}_\upphi(u(x),u(y))=0\quad\text{${\boldsymbol\vartheta}_\pi$-a.e.}
\end{align}
If a measure $\rho$ is invariant, then $u=\mathrm{d} \rho/\mathrm{d}\pi$ satisfies
\begin{equation}
\label{eq:198}
u(x)=u(y)\quad\text{for ${\boldsymbol\vartheta}_\pi$-a.e.~$(x,y)\in E$},
\end{equation}
which implies~\eqref{eq:197}; therefore invariant measures are stationary. Depending on the system, the set of stationary measures may also contain non-invariant measures, as the next example shows.
\begin{example}
Consider the example of the cosh-type dissipation~\eqref{choice:cosh},
\[
\upalpha(u,v) := \sqrt{uv}, \quad\Psi^*(\xi) := 4\Bigl(\cosh\frac\xi2-1\Bigr),
\]
but combine it with a Boltzmann entropy carrying an additional multiplicative constant $0<\gamma\leq 1$:
\[
\upphi(s) := \gamma(s\log s - s + 1).
\]
The case $\gamma=1$ corresponds to the example of~\eqref{choice:cosh}, and for general $0<\gamma\leq 1$ we find that
\[
\rmF(u,v) = u^{\frac{1-\gamma}2}v^{\frac{1+\gamma}2} - u^{\frac{1+\gamma}2}v^{\frac{1-\gamma}2},
\]
resulting in the evolution equation (see~\eqref{eq:180})
\[
\partial_t u(x) = \int_{y\in V} \Bigl[u(x)^{\frac{1-\gamma}2}u(y)^{\frac{1+\gamma}2} - u(x)^{\frac{1+\gamma}2}u(y)^{\frac{1-\gamma}2}\Bigr]\, \kappa(x,\mathrm{d} y).
\]
When $0<\gamma<1$, any function of the form $u(x) = \mathbbm{1}\{x\in A\}$ for measurable $A\subset V$ is a stationary point of this equation, and equivalently any measure $\pi \mres A$ is a stationary solution of the $(\mathscr E,\mathscr R,\mathscr R^*)$ system. Therefore, for $0<\gamma<1$, the set of stationary measures is strictly larger than the set of invariant measures.
\end{example}
As in the case of linear evolutions, $(\mathscr E,\mathscr R,\mathscr R^*)$ systems behave well with respect to decompositions of $\pi$ into mutually singular invariant measures.
\begin{theorem}[Decomposition]
\label{thm:decomposition}
Let us suppose that $\pi=\pi^1+\pi^2$ with $\pi^1,\pi^2\in {\mathcal M}^+(V)$ mutually singular and invariant. Let $\rho:[0,T]\to{\mathcal M}^+(V)$ be a curve with $\rho_t=u_t\pi\ll\pi$ and let $\rho^i_t:=u_t\pi^i$ be the decomposition of $\rho_t$ with respect to $\pi^1$ and $\pi^2$. Then $\rho$ is a solution of the $(\mathscr E,\mathscr R,\mathscr R^*)$ system if and only if each curve $\rho^i_t$, $i=1,2$, is a solution of the $(\mathscr E^i,\mathscr R^i,(\mathscr R^i)^*)$ system, where $\mathscr E^i(\mu):=\mathscr F_\upphi(\mu|\pi^i)$ is the relative entropy with respect to the measure $\pi^i$ and $\mathscr R^i,(\mathscr R^i)^*$ are induced by $\pi^i$.
\end{theorem}
\begin{remark}
It is worth noting that when $\upalpha$ is $1$-homogeneous, the functionals $\mathscr R^i=\mathscr R$ and $(\mathscr R^i)^*=\mathscr R^*$ do not depend on $\pi^i$, cf.\ Corollary \ref{cor:decomposition}. The decomposition is thus driven solely by the splitting of the entropy $\mathscr E$.
\end{remark}
\begin{proof}[Proof of Theorem~\ref{thm:decomposition}]
Note that the assumptions of invariance and mutual singularity of $\pi^1$ and $\pi^2$ imply that~${\boldsymbol\vartheta}_\pi$ has a singular decomposition ${\boldsymbol\vartheta}_\pi = {\boldsymbol\vartheta}^1 + {\boldsymbol\vartheta}^2 := \kernel\kappa{\pi^1} + \kernel\kappa{\pi^2}$, where the $\kernel\kappa{\pi^i}$ are symmetric.
It then follows that $\mathscr E(\rho_t)=\mathscr E^1(\rho^1_t)+\mathscr E^2(\rho^2_t)$ and $\mathscr{D}(\rho_t)=\mathscr{D}^1(\rho^1_t)+\mathscr{D}^2(\rho^2_t)$, where
\begin{displaymath}
\mathscr{D}^i(\rho^i)=\frac12\iint_E {\mathrm D}_\upphi(u(x),u(y))\,{\boldsymbol\vartheta}^i(\mathrm{d} x,\mathrm{d} y).
\end{displaymath}
Finally, Corollary~\ref{cor:decomposition} shows that, decomposing ${\boldsymbol j}$ as the sum ${\boldsymbol j}^1+{\boldsymbol j}^2$ with ${\boldsymbol j}^i\ll{\boldsymbol\vartheta}^i$, the pairs $(\rho^i,{\boldsymbol j}^i)$ belong to $\mathbb{C}E 0T$ and $\mathscr R(\rho_t,{\boldsymbol j}_t)= \mathscr R^1(\rho^1_t,{\boldsymbol j}^1_t)+ \mathscr R^2(\rho^2_t,{\boldsymbol j}^2_t)$.
\end{proof}
\begin{theorem}[Asymptotic behaviour]
Let us suppose that the only stationary measures are multiples of $\pi$, and that $\mathscr{D}$ is lower semicontinuous with respect to setwise convergence. Then every solution $\rho:[0,\infty)\to {\mathcal M}^+(V)$ of the $(\mathscr E,\mathscr R,\mathscr R^*)$ evolution system converges setwise, as $t\to{+\infty}$, to $c\pi$, where $c:=\rho_0(V)/\pi(V)$.
\end{theorem}
\begin{proof}
Let us fix a vanishing sequence $\tau_n\downarrow0$ such that $\sum_n\tau_n={+\infty}$. Let $\rho_\infty$ be any limit point with respect to setwise convergence of the curve $\rho_t$ along a diverging sequence of times $t_n\uparrow{+\infty}$. Such a limit point exists, since the curve $\rho$ is contained in a sublevel set of $\mathscr E$ and such sets are relatively compact with respect to setwise convergence. Up to extracting a further subsequence, it is not restrictive to assume that $t_{n+1}\ge t_n+\tau_n$.
Since \begin{align*} \sum_{n\in \mathbb{N}}\int_{t_n}^{t_n+\tau_n}\Big(\mathscr R(\rho_t,{\boldsymbol j}_t)+\mathscr{D}(\rho_t)\Big)\,\mathrm{d} t \le \int_0^{{+\infty}}\Big(\mathscr R(\rho_t,{\boldsymbol j}_t)+\mathscr{D}(\rho_t)\Big)\,\mathrm{d} t\le \mathscr E(\rho_0)<\infty \end{align*} and the series of $\tau_n$ diverges, we find \begin{displaymath} \liminf_{n\to{+\infty}}\frac1{\tau_n}\int_{t_n}^{t_n+\tau_n}\mathscr{D}(\rho_t)\,\mathrm{d} t=0,\quad \lim_{n\to\infty}\int_{t_n}^{t_n+\tau_n}\mathscr R(\rho_t,{\boldsymbol j}_t)\,\mathrm{d} t=0. \end{displaymath} Up to extracting a further subsequence, we can suppose that the above $\liminf$ is a limit and we can select $t'_n\in [t_n,t_n+\tau_n]$ such that \begin{displaymath} \lim_{n\to\infty}\mathscr{D}(\rho_{t_n'})=0,\quad \lim_{n\to\infty}\int_{t_n}^{t_n'}\mathscr R(\rho_t,{\boldsymbol j}_t)\,\mathrm{d} t=0. \end{displaymath} Recalling the definition \eqref{def-psi-rig} of the Dynamical-Variational Transport cost and the monotonicity with respect to $\tau$, we also get $\lim_{n\to\infty} \mathscr W(\tau_n,\rho_{t_n},\rho_{t_n'})=0$, so that Theorem \ref{thm:props-cost}(5) and the relative compactness of the sequence $(\rho_{t_n'})_n$ yield $\rho_{t_n'}\to \rho_\infty$ setwise. The lower semicontinuity of $\mathscr{D}$ yields $\mathscr{D}(\rho_\infty)=0$ so that $\rho_\infty=c\pi$ thanks to the uniqueness assumption and to the conservation of the total mass. Since we have uniquely identified the limit point, we conclude that the \emph{whole} curve $\rho_t$ converges setwise to $\rho_\infty$ as $t\to{+\infty}$. \end{proof} \section{Dissipative evolutions in $L^1(V,\pi)$} \label{s:ex-sg} In this section we construct solutions of the $(\mathscr E,\mathscr R,\mathscr R^*)$ formulation by studying their equivalent characterization as abstract evolution equations in $L^1(V,\pi)$. Throughout this section we adopt Assumption~\ref{ass:V-and-kappa}. 
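Before developing the general theory, it may help to keep a concrete model in mind. The following sketch of the finite-state specialization is included only for orientation; it anticipates the notation of the next subsection and assumes, in addition, a symmetry condition on the rates.

\begin{remark}[Finite-state case]
Let $V=\{1,\dots,N\}$, $\pi=\sum_{i=1}^N \pi_i\delta_i$ with $\pi_i>0$, and $\kappa(i,\cdot)=\sum_{j=1}^N \kappa_{ij}\delta_j$ with nonnegative rates satisfying $\pi_i\kappa_{ij}=\pi_j\kappa_{ji}$, so that the associated edge measure is symmetric. Then $L^1(V,\pi)\cong\mathbb{R}^N$ and the abstract Cauchy problems studied below reduce to systems of ordinary differential equations of the form
\begin{displaymath}
  \dot u_i(t)=\sum_{j=1}^N {\mathrm G}\bigl(i,j;u_i(t),u_j(t)\bigr)\,\kappa_{ij},
  \qquad i=1,\dots,N,
\end{displaymath}
for which the statements of this section can be verified by elementary ODE arguments.
\end{remark}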
\subsection{Integro-differential equations in $L^1$} Let $J\subset \mathbb{R}$ be a closed interval (not necessarily bounded) and let us first consider a map ${\mathrm G}:E\times J^2\to \mathbb{R}$ with the following properties: \begin{subequations} \label{subeq:G} \begin{enumerate} \item measurability with respect to~$(x,y)\in E$: \begin{equation} \label{eq:113} \text{for every $u,v\in J$ the map } (x,y)\mapsto {\mathrm G}(x,y;u,v)\text{ is measurable}; \end{equation} \item continuity with respect to~$u,v$ and linear growth: there exists a constant $M>0$ such that \begin{equation} \label{eq:114} \begin{gathered} \text{for every }(x,y)\in E\quad (u,v)\mapsto {\mathrm G}(x,y;u,v)\text{ is continuous and } \\ |{\mathrm G}(x,y;u,v)|\le M(1+|u|+|v|) \quad \text{for every }u,v\in J, \end{gathered} \end{equation} \item skew-symmetry: \begin{equation} \label{eq:129} {\mathrm G}(x,y;u,v)=-{\mathrm G}(y,x;v,u)\quad \text{for every } (x,y)\in E,\ u,v\in J, \end{equation} \item $\ell$-dissipativity: there exists a constant $\ell\ge0$ such that for every $(x,y)\in E$, $u,u',v\in J$: \begin{equation} \label{eq:130} u\le u'\quad\Rightarrow\quad {\mathrm G}(x,y;u',v)- {\mathrm G}(x,y;u,v)\le \ell(u'-u). \end{equation} \end{enumerate} \end{subequations} \begin{remark} \label{rem:spoiler} Note that \eqref{eq:130} is surely satisfied if ${\mathrm G}$ is $\ell$-Lipschitz in $(u,v)$, uniformly with respect to~$(x,y)$.
The `one-sided Lipschitz condition'~\eqref{eq:130}, however, is weaker than the standard Lipschitz condition; this type of condition is common in the study of ordinary differential equations, since it is still strong enough to guarantee uniqueness and non-blowup of the solutions (see e.g.~\cite[Ch.~IV.12]{HairerWanner96}). Let us also remark that \eqref{eq:129} and \eqref{eq:130} imply the reverse monotonicity property of ${\mathrm G}$ with respect to~$v$, \begin{equation} \label{eq:130bis} v\ge v'\quad\Rightarrow\quad {\mathrm G}(x,y;u,v')- {\mathrm G}(x,y;u,v)\le \ell(v-v')\,, \end{equation} and the joint estimate \begin{equation} \label{eq:120} u\le u',\ v\ge v'\quad\Rightarrow\quad {\mathrm G}(x,y;u',v')- {\mathrm G}(x,y;u,v)\le \ell\big[(u'-u)+(v-v')\big]. \end{equation} \end{remark} Let us set $L^1(V,\pi;J):=\{u\in L^1(V,\pi):u(x)\in J\ \text{for $\pi$-a.e.~$x\in V$}\}$. \begin{lemma} \label{le:tedious} Let $u:V\to J$ be a measurable $\pi$-integrable function. \begin{enumerate} \item We have \begin{equation} \label{eq:140} \int_V\big|{\mathrm G}(x,y;u(x),u(y))\big|\,\kappa(x,\mathrm{d} y)<{+\infty} \quad\text{for $\pi$-a.e.~$x\in V$}, \end{equation} and the formula \begin{equation} \label{eq:115} \boldsymbol G[u](x):= \int_V {\mathrm G}(x,y;u(x),u(y))\,\kappa(x,\mathrm{d} y) \end{equation} defines a function $\boldsymbol G[u]$ in $L^1(V,\pi)$ that only depends on the Lebesgue equivalence class of $u$ in $L^1(V,\pi)$. \item The map $\boldsymbol G:L^1(V,\pi;J)\to L^1(V,\pi)$ is continuous.
\item The map $\boldsymbol G$ is $(2\ell\, \|\kappa_V\|_\infty)$-dissipative, in the sense that for all $h>0$, \begin{equation} \label{eq:141} \big\|(u_1- u_2)-h (\boldsymbol G[u_1]-\boldsymbol G[u_2])\big\|_{L^1(V,\pi)}\ge (1-2 \ell \|\kappa_V\|_\infty \,h)\|u_1-u_2\|_{L^1(V,\pi)} \end{equation} for every $u_1,u_2\in L^1(V,\pi;J)$. \item If $a\in J$ satisfies \begin{equation} \label{eq:123a} 0={\mathrm G}(x,y;a,a)\le {\mathrm G}(x,y;a,v)\quad \text{for every }(x,y)\in E,\ v\ge a\,, \end{equation} then for every function $u\in L^1(V,\pi;J)$ we have \begin{equation} \label{eq:142} \begin{aligned} u\ge a\text{ $\pi$-a.e.}\quad &\Rightarrow\quad \lim_{h\downarrow0}\frac1h \int_V \Big(a-(u+h\boldsymbol G[u])\Big)_+\,\mathrm{d} \pi=0\,. \end{aligned} \end{equation} If $b\in J$ satisfies \begin{equation} \label{eq:123b} 0={\mathrm G}(x,y;b,b)\ge {\mathrm G}(x,y;b,v)\quad \text{for every }(x,y)\in E,\ v\le b, \end{equation} then for every function $u\in L^1(V,\pi;J)$ we have \begin{equation} u\le b\text{ $\pi$-a.e.}\quad \Rightarrow\quad \lim_{h\downarrow0}\frac 1h\int_V \Big(u+h\boldsymbol G[u]-b\Big)_+\,\mathrm{d} \pi=0\,.\label{eq:145} \end{equation} \end{enumerate} \end{lemma} \begin{proof} \textit{(1)} Since ${\mathrm G}$ is a Carath\'eodory function, for every measurable $u$ the map $(x,y)\mapsto {\mathrm G}(x,y;u(x),u(y))$ is measurable.
Since \begin{equation} \label{eq:143} \begin{aligned} \iint_E |{\mathrm G}(x,y;u(x),u(y))|\,\kappa(x,\mathrm{d} y)\pi(\mathrm{d} x)&= \iint_E |{\mathrm G}(x,y;u(x),u(y))|\,{\boldsymbol\vartheta}_\pi(\mathrm{d} x,\mathrm{d} y) \\&\le M \|\kappa_V\|_\infty \bigg(1+2\int_V |u|\,\mathrm{d}\pi\bigg)\,, \end{aligned} \end{equation} the first claim follows by Fubini's Theorem \cite[II, 14]{Dellacherie-Meyer78}. \noindent \textit{(2)} Let $(u_n)_{n\in \mathbb{N}}$ be a sequence of functions strongly converging to $u$ in $L^1(V,\pi;J)$. Up to extracting a further subsequence, it is not restrictive to assume that $u_n$ also converges to $u$ pointwise $\pi$-a.e. We have \begin{equation} \label{eq:144} \big\|\boldsymbol G[u_n]-\boldsymbol G[u]\big\|_{L^1(V,\pi)}\le \iint_E \Big|{\mathrm G}(x,y;u_n(x),u_n(y))- {\mathrm G}(x,y;u(x),u(y))\Big|\,{\boldsymbol\vartheta}_\pi(\mathrm{d} x,\mathrm{d} y)\, . \end{equation} Since the integrand $g_n$ in \eqref{eq:144} vanishes ${\boldsymbol\vartheta}_\pi$-a.e.\ in $E$ as $n\to\infty$, by the generalized Dominated Convergence Theorem (see for instance \cite[Thm.\ 4, page 21]{Evans-Gariepy}) it is sufficient to show that there exist positive functions $h_n$ pointwise converging to $h$ such that \begin{displaymath} g_n\le h_n\ {\boldsymbol\vartheta}_\pi\text{-a.e.~in $E$},\qquad \lim_{n\to\infty}\iint_E h_n\,\mathrm{d}{\boldsymbol\vartheta}_\pi=\iint_E h\,\mathrm{d}{\boldsymbol\vartheta}_\pi. \end{displaymath} We select $h_n(x,y):=M(2+|u_n(x)|+|u_n(y)|+|u(x)|+|u(y)|)$ and $ h(x,y):=2M(1+|u(x)|+|u(y)|)$. This proves the result.
\noindent \textit{(3)} Let us set \begin{displaymath} \mathfrak s(r):= \begin{cases} 1&\text{if }r>0\,,\\ -1&\text{if }r\le 0\,, \end{cases} \end{displaymath} and observe that the left-hand side of \eqref{eq:141} may be estimated from below by \begin{align*} \big\|(u_1- u_2)-h (\boldsymbol G[u_1]-\boldsymbol G[u_2])\big\|_{L^1(V,\pi)} &\ge \|u_1-u_2\|_{L^1(V,\pi)} \\ &\hspace{2em}- h\int_V \mathfrak s(u_1-u_2)\big(\boldsymbol G[u_1]-\boldsymbol G[u_2]\big) \,\mathrm{d}\pi \end{align*} for all $h>0$. Therefore, estimate \eqref{eq:141} follows if we prove that \begin{equation} \label{eq:132} \delta:=\int_V \mathfrak s(u_1-u_2)\big(\boldsymbol G[u_1]-\boldsymbol G[u_2]\big) \,\mathrm{d}\pi \le 2\ell \|\kappa_V\|_\infty \, \|u_1-u_2\|_{L^1(V,\pi)}. \end{equation} Let us set \begin{displaymath} \Delta_{\mathrm G}(x,y):= {\mathrm G}(x,y;u_{1}(x),u_{1}(y))- {\mathrm G}(x,y;u_{2}(x),u_{2}(y)), \end{displaymath} and \begin{equation} \label{eq:133} \Delta_\mathfrak s(x,y):=\mathfrak s(u_{1}(x)-u_{2}(x))-\mathfrak s(u_{1}(y)-u_{2}(y)). \end{equation} Since $ \Delta_{\mathrm G}(x,y)=-\Delta_{\mathrm G}(y,x)$, using \eqref{eq:129} we have \begin{align*} \delta= \int_V \mathfrak s\big(u_{1}-u_{2}\big)\,\big(\boldsymbol G[u_{1}]-\boldsymbol G[u_{2}]\big)\,\mathrm{d}\pi &=\iint_E \mathfrak s(u_{1}(x)-u_{2}(x))\Delta_{\mathrm G}(x,y) \,{\boldsymbol\vartheta}_\pi(\mathrm{d} x,\mathrm{d} y) \\&= \frac12\iint_E \Delta_\mathfrak s(x,y) \Delta_{\mathrm G}(x,y) \,{\boldsymbol\vartheta}_\pi(\mathrm{d} x,\mathrm{d} y) .
\end{align*} Setting $\Delta(x):=u_{1}(x)-u_{2}(x)$ we observe that by \eqref{eq:120} \begin{align*} \Delta(x)>0,\ \Delta(y)>0\quad&\Rightarrow\quad \Delta_\mathfrak s (x,y)=0,\\ \Delta(x)\le 0,\ \Delta(y)\le 0\quad&\Rightarrow\quad \Delta_\mathfrak s (x,y)=0,\\ \Delta(x)\le0,\ \Delta(y)>0 \quad&\Rightarrow\quad \Delta_\mathfrak s (x,y)=-2,\ \Delta_{\mathrm G}(x,y)\ge-\ell\big(\Delta(y)-\Delta(x)\big),\\ \Delta(x)>0,\ \Delta(y)\le 0\quad&\Rightarrow\quad \Delta_\mathfrak s (x,y)=2,\ \Delta_{\mathrm G}(x,y)\le \ell\big(\Delta(x)-\Delta(y)\big). \end{align*} We deduce that \begin{displaymath} \delta\le \ell \iint_E \Big[|u_{1}(x)-u_{2}(x)|+ |u_{1}(y)-u_{2}(y)|\Big]\,{\boldsymbol\vartheta}_\pi(\mathrm{d} x,\mathrm{d} y)\le 2\ell \|\kappa_V\|_\infty \,\|u_1-u_2\|_{L^1(V,\pi)}. \end{displaymath} \noindent \textit{(4)} We will only address the proof of property \eqref{eq:142}, as the argument for \eqref{eq:145} is completely analogous. Suppose that $u\ge a$ $\pi$-a.e. Let us first observe that if $u(x)=a$, then from \eqref{eq:123a}, \begin{displaymath} \boldsymbol G[u](x)= \int_V {\mathrm G}(x,y;a,u(y))\,\kappa(x,\mathrm{d} y)\ge 0\,. \end{displaymath} We set $f_h(x):=h^{-1}(a-u(x))-\boldsymbol G[u](x)$, observing that $f_h(x)$ decreases monotonically to $-\infty$ as $h\downarrow0$ if $u(x)>a$, and $f_h(x)=-\boldsymbol G[u](x)\le 0$ if $u(x)=a$, so that $\lim_{h\downarrow0}\big(f_h(x)\big)_+=0$.
Since $\big(f_h\big)_+\le \big(\!-\!\boldsymbol G[u]\big)_+$ we can apply the Dominated Convergence Theorem to obtain \begin{displaymath} \lim_{h\downarrow0} \int_V \big(f_h(x)\big)_+\,\pi(\mathrm{d} x)=0\,, \end{displaymath} thereby concluding the proof. \end{proof} In what follows, we shall address the Cauchy problem \begin{subequations} \label{eq:119-Cauchy} \begin{align} \label{eq:119} \dot u_t&=\boldsymbol G[u_t]\quad\text{in $L^1(V,\pi)$ for every }t\ge0,\\ \label{eq:119-0} u\restr{t=0}&=u_0. \end{align} \end{subequations} \begin{lemma}[Comparison principles] \label{le:positivity} Let us suppose that the map ${\mathrm G}$ satisfies {\rm (\ref{subeq:G}a,b,c)} with $J=\mathbb{R}$. \begin{enumerate} \item If $\bar u\in \mathbb{R}$ satisfies \begin{equation} \label{eq:123abis} 0={\mathrm G}(x,y;\bar u,\bar u)\le {\mathrm G}(x,y;\bar u,v)\quad \text{for every }(x,y)\in E,\ v\ge \bar u, \end{equation} then for every initial datum $u_0\ge\bar u$ the solution $u$ of \eqref{eq:119-Cauchy} satisfies $u_t\ge \bar u$ $\pi$-a.e.~for every $t\ge0$. \item If $\bar u\in \mathbb{R}$ satisfies \begin{equation} \label{eq:123bbis} 0={\mathrm G}(x,y;\bar u,\bar u)\ge {\mathrm G}(x,y;\bar u,v)\quad \text{for every }(x,y)\in E,\ v\le \bar u, \end{equation} then for every initial datum $u_0\le\bar u$ the solution $u$ of \eqref{eq:119-Cauchy} satisfies $u_t\le \bar u$ $\pi$-a.e.~for every $t\ge0$. \end{enumerate} \end{lemma} \begin{proof} \textit{(1)} Let us first consider the case $\bar u=0$.
We define a new map $\overline {\mathrm G}$ by symmetry: \begin{equation} \label{eq:126} \overline {\mathrm G}(x,y;u,v):={\mathrm G}(x,y;u,|v|), \end{equation} which satisfies the same structural properties (\ref{subeq:G}a,b,c), and moreover \begin{equation} \label{eq:127} 0=\overline{\mathrm G}(x,y;0,0) \le \overline{\mathrm G}(x,y;0,v)\quad \text{for every } x,y\in V,\ v\in \mathbb{R}. \end{equation} We call $\overline{\boldsymbol G}$ the operator induced by $\overline{\mathrm G}$, and $\bar u$ the solution curve of the corresponding Cauchy problem starting from the same (nonnegative) initial datum $u_0$. If we prove that $\bar u_t\ge0$ for every $t\ge0$, then $\bar u_t$ is also the unique solution of the original Cauchy problem \eqref{eq:119-Cauchy} induced by ${\mathrm G}$, so that we obtain the positivity of $u_t$. Note that \eqref{eq:127} and property \eqref{eq:130} yield \begin{equation} \label{eq:124} \overline{\mathrm G}(x,y;u,v)\ge \overline{\mathrm G}(x,y;u,v)-\overline{\mathrm G}(x,y;0,v)\ge \ell\,u\qquad\text{for $u\le 0$}\,. \end{equation} We set $\upbeta(r):=r_-=\max(0,-r)$ and $P_t:=\{x\in V:\bar u_t(x)<0\}$ for each $t\ge 0$. Due to the Lipschitz continuity of $\upbeta$, the map $t\mapsto b(t):=\int_V \upbeta(\bar u_t)\,\mathrm{d}\pi$ is absolutely continuous.
Hence, the chain-rule formula applies and, together with \eqref{eq:124}, gives \begin{align*} \frac{\mathrm{d}}{\mathrm{d} t} b(t) &= -\int_{P_t} \overline{\boldsymbol G}[\bar u_t](x)\,\pi(\mathrm{d} x) = -\iint_{P_t\times V} \overline{\mathrm G}(x,y;\bar u_t(x),\bar u_t(y))\,{\boldsymbol\vartheta}_\pi(\mathrm{d} x,\mathrm{d} y) \\ &\le \ell \iint_{P_t\times V} (-\bar u_t(x))\,{\boldsymbol\vartheta}_\pi(\mathrm{d} x,\mathrm{d} y) = \ell \iint_E \upbeta(\bar u_t(x))\,{\boldsymbol\vartheta}_\pi(\mathrm{d} x,\mathrm{d} y) \le \ell \|\kappa_V\|_\infty b(t)\,. \end{align*} Since $b$ is nonnegative and $b(0)=0$, we conclude, by Gronwall's inequality, that $b(t)=0$ for every $t\ge0$ and therefore $\bar u_t\ge0$. In order to prove the statement for a general $\bar u\in \mathbb{R}$ it is sufficient to consider the new operator $\widetilde {\mathrm G}(x,y;u,v):={\mathrm G}(x,y;u+\bar u,v+\bar u)$, and to consider the curve $\widetilde u_t:=u_t-\bar u$ starting from the nonnegative initial datum $\widetilde u_0:=u_0-\bar u$. \noindent \textit{(2)} It suffices to apply the transformation $\widetilde{\mathrm G}(x,y;u,v):=-{\mathrm G}(x,y;-u,-v)$ and set $\widetilde u_t:=-u_t$. We then apply the previous claim, yielding the upper bound $\bar u$. \end{proof} We can now state our main result concerning the well-posedness of the Cauchy problem~\eqref{eq:119-Cauchy}. \begin{theorem} \label{thm:localization-G} Let $J\subset \mathbb{R}$ be a closed interval and let ${\mathrm G}:E\times J^2\to\mathbb{R}$ be a map satisfying conditions {\rm(\ref{subeq:G})}.
Let us also suppose that, if $a=\inf J>-\infty$, then \eqref{eq:123a} holds, and that, if $b=\sup J<+\infty$, then \eqref{eq:123b} holds. \begin{enumerate} \item For every $u_0\in L^1(V,\pi;J)$ there exists a unique curve $u\in \mathrm{C}^1([0,\infty);L^1(V,\pi; J))$ solving the Cauchy problem \eqref{eq:119-Cauchy}. \item $\int_V u_t\,\mathrm{d}\pi=\int_V u_0\,\mathrm{d} \pi$ for every $t\ge0$. \item If $u,v$ are two solutions with initial data $u_0,v_0\in L^1(V,\pi;J)$ respectively, then \begin{equation} \label{eq:146} \|u_t-v_t\|_{L^1(V,\pi)}\le \mathrm{e}^{2 \|\kappa_V\|_\infty \ell\, t}\|u_0-v_0\|_{L^1(V,\pi)}\quad \text{for every }t\ge0. \end{equation} \item If $\bar a\in J$ satisfies condition \eqref{eq:123a} and $u_0\ge \bar a$, then $u_t\ge \bar a$ for every $t\ge0$. Similarly, if $\bar b\in J$ satisfies condition \eqref{eq:123b} and $u_0\le \bar b$, then $u_t\le \bar b$ for every $t\ge0$. \item If $\ell=0$, then the evolution is order preserving: if $u,v$ are two solutions with initial data $u_0,v_0$ then \begin{equation} \label{eq:128} u_0\le v_0\quad\Rightarrow\quad u_t\le v_t\quad\text{for every }t\ge0. \end{equation} \end{enumerate} \end{theorem} \begin{proof} Claims \textit{(1), (3), (4)} follow by the abstract generation result of \cite[\S 6.6, Theorem 6.1]{Martin76} applied to the operator $\boldsymbol G$ defined on the closed convex subset $D:=L^1(V,\pi;J)$ of the Banach space $L^1(V,\pi)$. For the theorem to apply, one has to check the continuity of $\boldsymbol G:D\to L^1(V,\pi)$ (Lemma \ref{le:tedious}(2)), its dissipativity \eqref{eq:141}, and the property \begin{displaymath} \liminf_{h\downarrow0} h^{-1} \inf_{v\in D} \|u+h\boldsymbol G[u]-v\|_{L^1(V,\pi)}=0 \quad\text{for every }u\in D\,.
\end{displaymath} When $J=\mathbb{R}$, the inner infimum is always zero; if $J$ is a bounded interval $[a,b]$, then the property above follows from the estimates of Lemma \ref{le:tedious}(4), since for any $u\in D$, \[ \inf_{v\in D}\int_V |u + h \boldsymbol G[u]-v|\,\mathrm{d}\pi \le \int_V \Bigl(a- (u+h\boldsymbol G[u])\Bigr)_+\mathrm{d}\pi + \int_V \Bigl(u+h\boldsymbol G[u]-b\Bigr)_+\mathrm{d}\pi\,. \] When $J=[a,\infty)$ or $J = (-\infty,b]$ a similar reasoning applies. Claim \textit{(2)} is an immediate consequence of \eqref{eq:129}. Finally, when $\ell=0$, claim \textit{(5)} follows from the Crandall--Tartar Theorem \cite{Crandall-Tartar80}, stating that a non-expansive map in $L^1$ (cf.\ \eqref{eq:146}) that preserves the integral (claim \textit{(2)}) is also order preserving. \end{proof} \subsection{Applications to dissipative evolutions} Let us now consider the map $\mathrm F: (0,+\infty)^2 \to \mathbb{R}$ induced by the system $(\Psi^*,\upphi,\upalpha)$, first introduced in~\eqref{eq:184}, \begin{equation} \label{eq:appA-w-Psip-alpha} \mathrm F(u,v) := (\Psi^*)'\bigl( \upphi'(v)-\upphi'(u)\bigr)\, \upalpha(u,v) \quad\text{for every }u,v>0\,, \end{equation} with the corresponding integral operator \begin{equation} \label{eq:162} \boldsymbol F[u](x):=\int_V \mathrm F(u(x),u(y))\,\kappa(x,\mathrm{d} y)\,. \end{equation} Since $\Psi^*$ and $\upphi$ are $\mathrm{C}^1$ convex functions on $(0,{+\infty})$ and $\upalpha$ is locally Lipschitz in $(0,{+\infty})^2$, it is easy to check that $\mathrm F$ satisfies properties (\ref{subeq:G}a,b,c,d) in every compact subinterval $J\subset (0,{+\infty})$ and conditions \eqref{eq:123a}, \eqref{eq:123b} at every point $a,b\in J$.
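A classical special case may serve as orientation; the following computation is elementary and is recorded here only as an illustration of \eqref{eq:appA-w-Psip-alpha}.

\begin{example}
For the quadratic choice $\Psi^*(\xi)=\frac12\xi^2$, the Boltzmann entropy function $\upphi(s)=s\log s-s+1$, and the logarithmic-mean mobility
\begin{displaymath}
  \upalpha(u,v)=\frac{u-v}{\log u-\log v}\ \ (u\neq v),\qquad \upalpha(u,u)=u,
\end{displaymath}
formula \eqref{eq:appA-w-Psip-alpha} gives
\begin{displaymath}
  \mathrm F(u,v)=\bigl(\log v-\log u\bigr)\,\frac{v-u}{\log v-\log u}=v-u,
\end{displaymath}
so that \eqref{eq:162} reduces to the linear operator $\boldsymbol F[u](x)=\int_V \bigl(u(y)-u(x)\bigr)\,\kappa(x,\mathrm{d} y)$. In this case $\mathrm F$ extends continuously to $[0,{+\infty})^2$, the one-sided estimate below in \eqref{eq:158} holds with $\ell_R=0$, and $|\mathrm F(u,v)|\le u+v$, so all the assumptions introduced next are satisfied.
\end{example}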
In order to focus on the structural properties of the associated evolution problem, cf.\ \eqref{eq:159} below, we will mostly confine our analysis to the regular case, according to the following: \begin{Assumptions}{$\mathrm F$} \label{ass:F} The map $\mathrm F$ defined by \eqref{eq:appA-w-Psip-alpha} satisfies the following properties: \begin{gather} \label{eq:157} \mathrm F\text{ admits a continuous extension to $[0,\infty)^2$, } \intertext{and for every $R>0$ there exists $\ell_R\ge 0$ such that } \label{eq:158} v\le v'\quad\Rightarrow\quad \mathrm F(u,v)-\mathrm F(u,v')\le \ell_R\, (v'-v) \quad\text{for every }u,v,v'\in [0,R]. \end{gather} If moreover \eqref{eq:158} is satisfied in $[0,{+\infty})$ for some constant $\ell_\infty\ge 0$ and there exists a constant $M$ such that \begin{equation} \label{eq:161} |\mathrm F(u,v)|\le M(1+u+v)\quad\text{for every }u,v\ge0\,, \end{equation} we say that $(\mathrm F_\infty)$ holds. \end{Assumptions} Note that \eqref{eq:157} is always satisfied if $\upphi$ is differentiable at $0$. Estimate \eqref{eq:158} is also true if in addition $\upalpha$ is Lipschitz. However, as we have shown in Section \ref{subsec:examples-intro}, there are important examples in which $\upphi'(0)=-\infty$, but \eqref{eq:157} and \eqref{eq:158} hold nonetheless. Theorem \ref{thm:localization-G} yields the following general result: \begin{theorem} \label{thm:ODE-well-posedness} Consider the Cauchy problem \begin{equation} \label{eq:159} \dot u_t=\boldsymbol F[u_t], \quad t\ge0,\qquad u\restr{t=0}=u_0, \end{equation} for a given nonnegative $u_0\in L^1(V,\pi)$. \begin{enumerate}[label=(\arabic*)] \item \label{thm:ODE-well-posedness-ex} For every $u_0\in L^1(V,\pi;J)$ with $J$ a compact subinterval of $(0,{+\infty})$ there exists a unique bounded and nonnegative solution $u\in \mathrm{C}^1([0,\infty);L^1(V,\pi;J))$ of \eqref{eq:159}.
We will denote by $({\mathsf S}_t)_{t\ge0}$ the corresponding $\mathrm{C}^1$-semigroup of nonlinear operators, mapping $u_0$ to the value $u_t={\mathsf S}_t[u_0]$ at time $t$ of the solution $u$. \item $\int_V u_t\,\mathrm{d}\pi=\int_V u_0\,\mathrm{d}\pi$ for every $t\ge0$. \item If $a\le u_0\le b$ $\pi$-a.e.~in $V$, then $a\le u_t\le b$ $\pi$-a.e.~for every $t\ge0$. \item\label{thm:ODE-well-posedness-Lip} The solution satisfies the Lipschitz estimate \eqref{eq:146} (with $\ell=\ell_R$) and the order-preserving property if $\ell_R=0$. \item If Assumption \ref{ass:F} holds, then $({\mathsf S}_t)_{t\ge0}$ can be extended to a semigroup defined on every essentially bounded nonnegative $u_0\in L^1(V,\pi)$ and satisfying the same properties \ref{thm:ODE-well-posedness-ex}--\ref{thm:ODE-well-posedness-Lip} above. \item If additionally $(\mathrm F_\infty)$ holds, then $({\mathsf S}_t)_{t\ge0}$ can be extended to a semigroup defined on every nonnegative $u_0\in L^1(V,\pi)$ and satisfying the same properties \ref{thm:ODE-well-posedness-ex}--\ref{thm:ODE-well-posedness-Lip} above. \end{enumerate} \end{theorem} We now show that the solution $u$ given by Theorem~\ref{thm:ODE-well-posedness} is also a solution in the sense of the $(\mathscr E,\mathscr R,\mathscr R^*)$ Energy-Dissipation balance. \begin{theorem} \label{thm:sg-sol-is-var-sol} Assume~\ref{ass:V-and-kappa}, \ref{ass:Psi}, \ref{ass:S}. Let $u_0 \in L^1(V,\pi)$ be nonnegative and $\pi$-essentially valued in a compact interval $J$ of $(0,\infty)$ and let $u={\mathsf S}[u_0] \in \mathrm{C}^1 ([0,{+\infty});L^1(V,\pi;J))$ be the solution to \eqref{eq:159} given by Theorem~\ref{thm:ODE-well-posedness}.
Then the pair $(\rho,{\boldsymbol j} )$ given by \begin{align*} \rho_t (\mathrm{d} x)&: = u_t(x)\, \pi(\mathrm{d} x)\,,\\ 2{\boldsymbol j}_t (\mathrm{d} x\, \mathrm{d} y)&:= w_t(x,y)\, {\boldsymbol\vartheta}_\pi(\mathrm{d} x\, \mathrm{d} y)\,,\qquad w_t(x,y):=-\mathrm F(u_t(x),u_t(y))\,, \end{align*} is an element of $\mathcal{CE}(0,{+\infty})$ and satisfies the $(\mathscr E,\mathscr R,\mathscr R^*)$ Energy-Dissipation balance \eqref{R-Rstar-balance}. If\/ $\mathrm F$ satisfies the stronger assumption\/ \ref{ass:F}, then the same result holds for every essentially bounded and nonnegative initial datum. Finally, if also $(\mathrm F_\infty)$ holds, the above result is valid for every nonnegative $u_0\in L^1(V,\pi)$ with $\rho_0=u_0\pi\in D(\mathscr E)$. \end{theorem} \begin{proof} Let us first consider the case when $u_0$ satisfies $0<a\le u_0\le b<{+\infty}$ $\pi$-a.e. Then the solution $u={\mathsf S}[u_0]$ satisfies the same bounds, the map $w_t$ is uniformly bounded, and $\upalpha(u_t(x),u_t(y))\ge \upalpha(a,a)>0$, so that $(\rho,{\boldsymbol j})\in \mathcal{CE}(0,T)$ for every $T>0$. We can thus apply Theorem \ref{thm:characterization}, obtaining the Energy-Dissipation balance \begin{equation} \label{eq:164L} \mathscr E(\rho_0)-\mathscr E(\rho_T)= \int_0^T \mathscr R(\rho_t,{\boldsymbol j}_t)\,\mathrm{d} t+ \int_0^T \mathscr{D}(\rho_t)\,\mathrm{d} t, \qquad\text{or equivalently}\qquad \mathscr L(\rho,{\boldsymbol j})=0. \end{equation} In the case $0\leq u_0\leq b$ we can argue by approximation, setting $u_{0}^a:=\max\{u_0, a\}$, $a>0$, and considering the solution $u_t^a:={\mathsf S}_t[u_0^a]$ with divergence field $2{\boldsymbol j}_t^a(\mathrm{d} x,\mathrm{d} y)=-\mathrm F(u_t^{a}(x),u_t^{a}(y))\,{\boldsymbol\vartheta}_\pi(\mathrm{d} x,\mathrm{d} y)$.
Theorem \ref{thm:ODE-well-posedness}(4) shows that $u_t^a\to u_t$ strongly in $L^1(V,\pi)$ as $a\downarrow0$, and consequently also ${\boldsymbol j}_t^a\to {\boldsymbol j}_t$ setwise. Hence, we can pass to the limit in \eqref{eq:164L} (written for $(\rho^a,{\boldsymbol j}^a)$) thanks to Proposition \ref{prop:compactness} and Proposition \ref{PROP:lsc}, obtaining $\mathscr L(\rho,{\boldsymbol j})\le 0$, which is still sufficient to conclude that $(\rho,{\boldsymbol j})$ is a solution thanks to Remark \ref{rem:properties}(3). Finally, if $(\mathrm F_\infty)$ holds, we obtain the general result by a completely analogous argument, approximating $u_0$ by the sequence $u_0^b:=\min\{u_0, b\}$ and letting $b\uparrow{+\infty}$. \end{proof} \section{Existence via Minimizing Movements} \label{s:MM} In this section we construct solutions to the $(\mathscr E,\mathscr R,\mathscr R^*)$ formulation via the \emph{Minimizing Movement} approach. The method uses only fairly general properties of $\mathscr{W}$, $\mathscr E$, and the underlying space, and it may well have broader applicability than the measure-space setting that we consider here (see Remark \ref{rmk:generaliz-topol}). Therefore we formulate the results in a slightly more general setup. We consider a topological space \begin{equation} \label{ambient-topological} (X,\sigma) = {\mathcal M}^+(V) \text{ endowed with the setwise topology}. \end{equation} For consistency with the above definition, in this section we will use the abstract notation $\weaksigmatoabs$ to denote setwise convergence in $X= {\mathcal M}^+(V)$.
Although throughout this paper we adopt the Assumptions~\ref{ass:V-and-kappa}, \ref{ass:Psi}, and~\ref{ass:S}, in this section we will base the discussion only on the following properties: \begin{Assumptions}{Abs} \label{ass:abstract} \begin{enumerate} \item the Dynamical-Variational Transport (DVT) cost $\mathscr{W}$ enjoys properties \eqref{assW}; \item the driving functional $\mathscr E$ enjoys the typical lower-semicontinuity and coercivity properties underlying the variational approach to gradient flows: \begin{subequations}\label{conditions-on-S} \begin{align} \label{e:phi1} &\mathscr E \geq 0 \quad \text{and} \quad \mathscr E \ \text{is $\sigma$-sequentially lower semicontinuous};\\ &\exists \rho^{*} \in X \quad\text{such that}\quad \forall\, \tau>0, \notag\\ \label{e:phipsi1} &\qquad \text{the map $\rho \mapsto \mathscr{W}(\tau,\rho^{*}, \rho) + \mathscr E(\rho)$ has $\sigma$-sequentially compact sublevels.} \end{align} \end{subequations} \end{enumerate} \end{Assumptions} Assumption~\ref{ass:abstract} is implied by Assumptions~\ref{ass:V-and-kappa}, \ref{ass:Psi}, and~\ref{ass:S}. The properties~\eqref{assW} are the content of Theorem~\ref{thm:props-cost}; condition \eqref{e:phi1} follows from Assumption~\ref{ass:S} and Lemma~\ref{PROP:lsc}; condition~\eqref{e:phipsi1} follows from the superlinearity of $\upphi$ at infinity and Prokhorov's characterization of compactness in the space of finite measures~\cite[Th.~8.6.2]{Bogachev07}. \subsection{The Minimizing Movement scheme and the convergence result} \label{ss:MM} The classical `Minimizing Movement' scheme for metric-space gradient flows~\cite{DeGiorgiMarinoTosques80,AmbrosioGigliSavare08} starts by defining approximate solutions through incremental minimization, \[ \rho^n \in \mathop{\rm argmin}_{\rho} \left( \frac1{2\tau} d(\rho^{n-1},\rho)^2 + \mathscr E(\rho)\right).
\] In the context of this paper the natural generalization of the expression to be minimized is $\DVT\tau{\rho^{n-1}}\rho + \mathscr E(\rho)$. This can be understood by remarking that if $\mathscr R(\rho,\cdot)$ is quadratic, then it formally generates a metric \begin{align*} \frac12 d(\mu,\nu)^2 &= \inf\left\{ \int_0^1 \mathscr R(\rho_t,{\boldsymbol j}_t)\,\mathrm{d} t \, : \, \partial_t \rho_t + \odiv {\boldsymbol j}_t = 0, \ \rho_0 = \mu, \text{ and }\rho_1 = \nu\right\}\\ &= \tau \inf\left\{ \int_0^\tau \mathscr R(\rho_t,{\boldsymbol j}_t)\,\mathrm{d} t \, : \, \partial_t \rho_t + \odiv {\boldsymbol j}_t = 0, \ \rho_0 = \mu, \text{ and }\rho_\tau = \nu\right\}\\ &= \tau \DVT\tau{\mu}\nu. \end{align*} In this section we set up the approximation scheme featuring the cost $\mathscr{W}$. We consider a partition $ \{t_{\tau}^0 =0< t_{\tau}^1< \ldots<t_{\tau}^n < \ldots< t_{\tau}^{N_\tau-1}<T\le t_{\tau}^{N_\tau}\} $ of the time interval $[0,T]$, with fineness $\tau : = \max_{n=1,\ldots, N_\tau} (t_{\tau}^{n} {-} t_{\tau}^{n-1})$. The sequence of approximations $(\rho_\tau^n)_n$ is defined by the following recursive minimization scheme. Fix $\rho^\circ\in X$. \begin{problem} \label{pr:time-incremental} Given $\rho_\tau^0:=\rho^\circ,$ find $\rho_\tau^1, \ldots, \rho_\tau^{N_\tau} \in X $ fulfilling \begin{equation} \label{eq:time-incremental} \rho_\tau^n \in \mathop{\rm argmin}_{v \in X} \Bigl\{ \mathscr{W}(t_{\tau}^n -t_{\tau}^{n-1}, \rho_\tau^{n-1}, v) +\mathscr E(v)\Bigr\} \quad \text{for $n=1, \ldots, {N_\tau}.$} \end{equation} \end{problem} \begin{lemma} \label{lemma:exist-probl-increme} Under assumption \ref{ass:abstract}, for any $\tau >0$ Problem \ref{pr:time-incremental} admits a solution $\{\rho_\tau^n\}_{n=1}^{{N_\tau}}\subset X$.
\end{lemma} \par We denote by $\piecewiseConstant \rho \tau$ and $\underpiecewiseConstant \rho \tau$ the left-continuous and right-continuous piecewise constant interpolants of the values $\{\rho_\tau^n\}_{n=1}^{{N_\tau}}$ on the nodes of the partition, fulfilling $\piecewiseConstant \rho \tau(t_{\tau}^n)=\underpiecewiseConstant \rho \tau(t_{\tau}^n)=\rho_\tau^n$ for all $n=1,\ldots, {N_\tau}$, i.e., \begin{equation} \label{pwc-interp} \piecewiseConstant \rho \tau(t)=\rho_\tau^n \quad \forall t \in (t_{\tau}^{n-1},t_{\tau}^n], \quad \quad \underpiecewiseConstant \rho \tau(t)=\rho_\tau^{n-1} \quad \forall t \in [t_{\tau}^{n-1},t_{\tau}^n), \quad n=1,\ldots, {N_\tau}. \end{equation} Likewise, we denote by $\piecewiseConstant {\mathsf{t}}{\tau}$ and $\underpiecewiseConstant {\mathsf{t}}{\tau}$ the piecewise constant interpolants $\piecewiseConstant {\mathsf{t}}{\tau}(0):= \underpiecewiseConstant {\mathsf{t}}{\tau}(0):=0$, $ \piecewiseConstant {\mathsf{t}}{\tau}(T):= \underpiecewiseConstant {\mathsf{t}}{\tau}(T):=T$, and \begin{equation} \label{nodes-interpolants} \piecewiseConstant {\mathsf{t}} \tau(t)=t_{\tau}^n \quad \forall t \in (t_{\tau}^{n-1},t_{\tau}^n], \quad \quad \underpiecewiseConstant {\mathsf{t}} \tau(t)=t_{\tau}^{n-1} \quad \forall t \in [t_{\tau}^{n-1},t_{\tau}^n)\,.
\end{equation} \par We also consider a further interpolant of the discrete values $\{\rho_\tau^n\}_{n=0}^{N_\tau}$, introduced by De Giorgi, namely the \emph{variational interpolant} $\pwM\rho\tau : [0,T]\to X$, which is defined in the following way: the map $t\mapsto \pwM \rho\tau(t)$ is Lebesgue measurable in $(0,T)$ and satisfies \begin{equation} \label{interpmin} \begin{cases}\quad \pwM \rho\tau(0)=\rho^\circ, \quad \text{and, for } t=t_{\tau}^{n-1} + r \in (t_{\tau}^{n-1}, t_{\tau}^{n}], \\\quad \pwM \rho\tau(t) \in \displaystyle \mathop{\rm argmin}_{\mu \in X} \left\{ \mathscr{W}(r, \rho_\tau^{n-1}, \mu) +\mathscr E(\mu)\right\} \end{cases} \end{equation} The existence of a measurable selection is guaranteed by \cite[Cor. III.3, Thm. III.6]{Castaing-Valadier77}. \par It is natural to introduce the following extension of the notion of \emph{(Generalized) Minimizing Movement}, which is typically given in a metric setting \cite{Ambr95MM,AmbrosioGigliSavare08}. For simplicity, we will continue to use the classical terminology. \begin{definition} \label{def:GMM} We say that a curve $\rho: [0,T] \to X$ is a \emph{Generalized Minimizing Movement} for the energy functional $\mathscr E$ starting from the initial datum $\rho^\circ\in \mathrm{D}(\mathscr E)$, if there exist a sequence of partitions with fineness $(\tau_k)_k$, $\tau_k\downarrow 0$ as $k\to\infty$, and, correspondingly, a sequence of discrete solutions $(\piecewiseConstant \rho {\tau_k})_k$ such that, as $k\to\infty$, \begin{equation} \label{sigma-conv-GMM} \piecewiseConstant \rho {\tau_k}(t) \weaksigmatoabs \rho(t) \qquad \text{for all } t \in [0,T]. \end{equation} We shall denote by $\GMM{\mathscr E,\mathscr{W}}{\rho^\circ}$ the collection of all Generalized Minimizing Movements for $\mathscr E$ starting from $\rho^\circ$. \end{definition} We can now state the main result of this section.
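Although no explicit example is needed for the abstract theory, the scheme of Problem~\ref{pr:time-incremental} is easy to visualize in a finite-dimensional toy case. The following sketch (ours, purely illustrative: the choices $X=\mathbb{R}$, $\mathscr{W}(\tau,x,y)=(y-x)^2/(2\tau)$, and $\mathscr E(x)=x^2/2$ are hypothetical stand-ins for the measure-space objects of this paper) computes the discrete solutions by brute-force one-dimensional minimization; they coincide with the implicit Euler iterates $\rho^n=\rho^{n-1}/(1+\tau)$, and as $\tau\downarrow 0$ the interpolants converge to the gradient-flow solution $t\mapsto e^{-t}\rho^\circ$, the simplest instance of a Generalized Minimizing Movement.

```python
# Toy finite-dimensional instance of the incremental minimization scheme
# (illustrative only -- NOT the measure-space setting of the paper):
# X = R, energy E(x) = x^2/2, quadratic cost W(tau, x, y) = (y - x)^2/(2 tau).
# Each incremental problem
#     rho^n in argmin_y { W(tau, rho^{n-1}, y) + E(y) }
# has the closed-form solution rho^n = rho^{n-1}/(1 + tau), i.e. the
# implicit Euler step for the gradient flow rho' = -rho.

def argmin_1d(f, lo=-10.0, hi=10.0, iters=100):
    """Golden-section search for the minimizer of a strictly convex f on [lo, hi]."""
    phi = (5 ** 0.5 - 1) / 2
    a, b = lo, hi
    for _ in range(iters):
        c, d = b - phi * (b - a), a + phi * (b - a)
        if f(c) < f(d):
            b = d
        else:
            a = c
    return 0.5 * (a + b)

def minimizing_movement(rho0, tau, n_steps):
    """Discrete solutions rho^0, ..., rho^N of the incremental scheme."""
    traj = [rho0]
    for _ in range(n_steps):
        prev = traj[-1]
        traj.append(argmin_1d(lambda y: (y - prev) ** 2 / (2 * tau) + y ** 2 / 2))
    return traj

tau, n_steps = 0.01, 100   # uniform partition of [0, 1] with step tau
traj = minimizing_movement(1.0, tau, n_steps)
# traj[n] agrees with (1 + tau)**(-n); as tau -> 0 the piecewise constant
# interpolant converges to the exact gradient-flow solution t -> exp(-t).
```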
\begin{theorem} \label{thm:construction-MM} Under \textbf{Assumptions~\ref{ass:V-and-kappa}, \ref{ass:Psi}, and~\ref{ass:S}}, let the lower-semicontinuity Property \eqref{lscD} be satisfied. Then $\GMMT{\mathscr E,\mathscr{W}}{0}{T}{\rho^\circ} \neq \emptyset$ and every $\rho \in \GMMT{\mathscr E,\mathscr{W}}{0}{T}{\rho^\circ}$ satisfies the $(\mathscr E,\mathscr R,\mathscr R^*)$ Energy-Dissipation balance (Definition~\ref{def:R-Rstar-balance}). \end{theorem} In Sections \ref{ss:aprio}--\ref{ss:compactness} we will first prove an abstract version of this theorem as Theorem~\ref{thm:abstract-GMM} below, under \textbf{Assumption~\ref{ass:abstract}}. Indeed, therein we can `move away' from the context of the `concrete' gradient structure for the Markov processes, and carry out our analysis in a general topological setup (cf.\ Remark \ref{rmk:generaliz-topol} ahead). In Section~\ref{ss:pf-of-existence} we will `return' to the problem under consideration and deduce the proof of Theorem\ \ref{thm:construction-MM} from Theorem\ \ref{thm:abstract-GMM}. \subsection{Moreau-Yosida approximation and generalized slope} \label{ss:aprio} Preliminarily, let us observe some straightforward consequences of the properties of the transport cost: \begin{enumerate} \item the `generalized triangle inequality' from \eqref{e:psi3} entails that for all $m \in \mathbb{N}$, for all $(m+1)$-tuples $(t, t_1, \ldots, t_m) \in (0,{+\infty})^{m+1}$, and all $(\rho_0, \rho_1, \ldots, \rho_m) \in X^{m+1}$, we have \begin{equation} \label{eq1} \mathscr{W}(t,\rho_0,\rho_{m}) \leq \sum_{k=1}^{m} \mathscr{W}(t_k,\rho_{k-1},\rho_{k}) \qquad \text{if\, $t=\sum_{k=1}^m t_k$.} \end{equation} \item Combining \eqref{e:psi2} and \eqref{e:psi3} we deduce that \begin{equation} \label{monotonia} \mathscr{W}(t,\rho,\mu) \leq \mathscr{W}(s,\rho,\mu) \quad \text{ for all } 0<s<t \text{ and for all } \rho, \mu \in X.
\end{equation} \end{enumerate} In the context of metric gradient-flow theory, the `Moreau-Yosida approximation' (see e.g.~\cite[Ch.~7]{Brezis11} or~\cite[Def.~3.1.1]{AmbrosioGigliSavare08}) provides an approximation of the driving functional that is finite and sub-differentiable everywhere, and can be used to define a generalized slope. We now construct the analogous objects in the situation at hand. Given $r>0$ and $\rho \in X$, we define the subset $J_r(\rho)\subset X$ by $$ J_r(\rho) := \mathop{\rm argmin}_{\mu \in X} \Bigl\{ \mathscr{W}(r,\rho,\mu) + \mathscr E(\mu)\Bigr\} $$ (by Lemma \ref{lemma:exist-probl-increme}, this set is non-empty) and define \begin{equation} \label{def:gen} \gen(r,\rho):= \inf_{\mu \in X} \left\{ \mathscr{W}(r,\rho,\mu) + \mathscr E(\mu)\right\}= \mathscr{W}(r,\rho, \rho_r) + \mathscr E(\rho_r) \quad \forall\, \rho_r \in J_r(\rho). \end{equation} In addition, for all $\rho \in {\mathrm D}(\mathscr E)$, we define the \emph{generalized slope} \begin{equation} \label{def:nuovo} \mathscr{S}(\rho):= \limsup_{r \downarrow 0}\frac{\mathscr E(\rho) -\gen(r,\rho) }{r} = \limsup_{r \downarrow 0}\frac{\sup_{\mu \in X} \left\{ \mathscr E(\rho) -\mathscr{W}(r,\rho,\mu) -\mathscr E(\mu)\right\} }{r}\,. \end{equation} Recalling the \emph{duality formula} for the local slope (cf.\ \cite[Lemma 3.15]{AmbrosioGigliSavare08}) and the fact that $\mathscr{W}(\tau,\cdot, \cdot)$ is a proxy for $\frac1{2\tau}d^2(\cdot, \cdot)$, it is immediate to recognize that the generalized slope is a surrogate of the local slope. Furthermore, as we will see, its definition is tailored to the validity of Lemma \ref{lemma-my-1} ahead. Heuristically, the generalized slope $\mathscr{S}(\rho)$ coincides with the Fisher information $\mathscr{D}(\rho) = \mathscr R^*(\rho,-{\mathrm D}\mathscr E(\rho))$.
This can be recognized, again heuristically, by fixing a point $\rho_0$ and considering curves $\rho_t := \rho_0 -t\odiv {\boldsymbol j} $, for a class of fluxes ${\boldsymbol j} $. We then calculate \begin{align*} \mathscr R^*(\rho_0,-{\mathrm D}\mathscr E(\rho_0)) &= \sup_{{\boldsymbol j} }\, \bigl\{ -{\mathrm D}\mathscr E(\rho_0)\cdot {\boldsymbol j} - \mathscr R(\rho_0,{\boldsymbol j} )\bigr\}\\ &= \sup_{\boldsymbol j} \lim_{r\to0} \frac1r \biggl\{ \mathscr E(\rho_0) - \mathscr E(\rho_r) - \int_0^r \mathscr R(\rho_t,{\boldsymbol j} )\, \mathrm{d} t\biggr\}. \end{align*} In Theorem~\ref{th:slope-Fish} below we rigorously prove that $\mathscr{S}\geq \mathscr{D}$ using this approach. The following result collects some properties of $\genn r$ and $\mathscr{S}$. \begin{lemma} \label{lemma-my-1} For all $\rho \in {\mathrm D}(\mathscr E)$ and for every selection $ \rho_r \in J_r(\rho)$ \begin{align} & \label{gen1} \gen(r_2, \rho) \leq \gen(r_1,\rho) \leq \mathscr E(\rho) \quad \text{for all } 0 <r_1<r_2; \\ & \label{gen2} \rho_r \weaksigmatoabs\rho \ \text{as $r \downarrow 0$,} \quad \mathscr E(\rho)= \lim_{r \downarrow 0} \gen(r,\rho); \\ & \label{gen3} \frac{\rm d}{{\rm d}r} \gen(r,\rho) \leq - \mathscr{S}(\rho_r) \quad \text{for a.e.\ }\ r>0. \end{align} In particular, for all $\rho \in {\mathrm D}(\mathscr E)$ \begin{align} & \label{nuovopos} \mathscr{S}(\rho) \geq 0 \quad \ \text{and} \\ & \label{ineqenerg} \mathscr{W}(r_0, \rho, \rho_{r_0}) + \int_{0}^{r_0} \mathscr{S}(\rho_r) \, {\rm d}r \leq \mathscr E(\rho) -\mathscr E(\rho_{r_0}) \end{align} for every $ r_{0}>0$ and $ \rho_{r_0} \in J_{r_0}(\rho)$. \end{lemma} \begin{proof} Let $r>0$, $\rho\in {\mathrm D}(\mathscr E)$, and $\rho_r\in J_r(\rho)$.
It follows from \eqref{def:gen} and \eqref{e:psi2} that \begin{equation} \label{eq3} \gen(r,\rho)= \mathscr{W}(r,\rho,\rho_r) + \mathscr E(\rho_r) \leq \mathscr{W}(r,\rho,\rho) + \mathscr E(\rho) = \mathscr E(\rho) \quad \forall \, r>0, \rho \in X; \end{equation} in the same way, one checks that for all $\rho\in X$ and $0 <r_1<r_2$, \[ \gen(r_2,\rho) -\gen(r_1, \rho) \leq \mathscr{W}(r_2, \rho, \rho_{r_1}) + \mathscr E(\rho_{r_1}) -\mathscr{W}(r_1, \rho, \rho_{r_1}) - \mathscr E(\rho_{r_1}) \stackrel{\eqref{monotonia}}\leq 0, \] which implies \eqref{gen1}. Thus, the map $r \mapsto \gen(r,\rho)$ is non-increasing on $(0,{+\infty})$, and hence almost everywhere differentiable. Let us fix a point of differentiability $r>0$. For $h>0$ and $\rho_r \in J_r (\rho)$ we then have \begin{align*} \frac{\gen(r+h,\rho)-\gen(r,\rho)}{h} & = \frac1h\, {\inf_{v \in X} \Bigl\{\mathscr{W}(r+h, \rho,v) +\mathscr E(v) -\mathscr{W}(r, \rho,\rho_r) -\mathscr E(\rho_r) \Bigr\}} \\ & \leq \frac1h \, {\inf_{v \in X} \Bigl\{\mathscr{W}(h, \rho_r,v) +\mathscr E(v) -\mathscr E(\rho_r) \Bigr\}}, \end{align*} the latter inequality due to \eqref{e:psi3}, so that $$ \begin{aligned} \frac{\rm d}{{\rm d}r} \gen(r,\rho) & \leq \liminf_{h \downarrow 0} \,\frac1h \,{\inf_{v \in X} \Bigl\{\mathscr{W}(h, \rho_r,v) +\mathscr E(v) -\mathscr E(\rho_r) \Bigr\}} \\ & = - \limsup_{h \downarrow 0}\, \frac1h \, {\sup_{v \in X} \Bigl\{-\mathscr{W}(h, \rho_r,v) -\mathscr E(v) +\mathscr E(\rho_r) \Bigr\}}, \end{aligned} $$ whence \eqref{gen3}. Finally, \eqref{eq3} yields that, for any $\rho \in {\mathrm D}(\mathscr E)$ and any selection $\rho_r \in J_r(\rho)$, one has $ \sup_{r>0} \mathscr{W}(r,\rho,\rho_r) <+ \infty .$ Therefore, \eqref{e:psi4} entails the first convergence in \eqref{gen2}.
Furthermore, we have $$ \mathscr E(\rho) \geq \limsup_{r \downarrow 0} \gen(r,\rho)\geq \liminf_{r \downarrow 0} \left(\mathscr{W}(r,\rho,\rho_r) + \mathscr E(\rho_r)\right) \geq \liminf_{r \downarrow 0}\mathscr E(\rho_r)\geq \mathscr E(\rho), $$ where the first inequality again follows from \eqref{eq3}, and the last one from the $\sigma$-lower semicontinuity of $\mathscr E$. This implies the second statement of~\eqref{gen2}. \end{proof} \subsection{\emph{A priori} estimates} Our next result collects the basic estimates on the discrete solutions. In order to properly state it, we need to introduce the `density of dissipated energy' associated with the interpolant $\piecewiseConstant \rho{\tau}$, namely the piecewise constant function $\piecewiseConstant {\mathsf{W}}{\tau}:[0,T] \to [0,{+\infty})$ defined by \begin{align} \notag \piecewiseConstant {\mathsf{W}}{\tau}(t)&:= \frac{\mathscr{W}(t_{\tau}^{n}-t_{\tau}^{n-1}, \rho_\tau^{n-1}, \rho_\tau^{n})}{t_{\tau}^{n}-t_{\tau}^{n-1}} \quad t\in (t_{\tau}^{n-1}, t_{\tau}^n], \quad n=1,\ldots, {N_\tau}, \\ \text{so that}\quad \int_{t_{\tau}^{j-1}}^{t_{\tau}^n}\piecewiseConstant {\mathsf{W}}{\tau}(t)\, {\rm d}t&= \sum_{k=j}^n \mathscr{W}(t_{\tau}^{k}-t_{\tau}^{k-1}, \rho_\tau^{k-1}, \rho_\tau^{k}) \quad \text{for all } 1 \leq j < n \leq {N_\tau}. 
\label{density-W} \end{align} \begin{prop}[Discrete energy-dissipation inequality and \emph{a priori} estimates] We have \begin{align} \label{discr-enineq-var} & \mathscr{W}(t-\underpiecewiseConstant \mathsf{t}{\tau}(t), \underpiecewiseConstant \rho{\tau}(t), \pwM {\rho}{\tau}(t)) + \int_{\underpiecewiseConstant \mathsf{t}{\tau}(t)}^{t} \mathscr{S}(\pwM {\rho}{\tau}(r)) \, {\rm d}r +\mathscr E(\pwM {\rho}{\tau}(t))\leq \mathscr E(\underpiecewiseConstant \rho{\tau}(t)) \quad \text{for all } 0 \leq t \leq T\,, \\ & \label{discr-enineq} \int_{\underpiecewiseConstant \mathsf{t}{\tau}(s)}^{\piecewiseConstant \mathsf{t}{\tau}(t)} \piecewiseConstant {\mathsf{W}}{\tau}(r)\, {\rm d}r + \int_{\underpiecewiseConstant \mathsf{t}{\tau}(s)}^{\piecewiseConstant \mathsf{t}{\tau}(t)} \mathscr{S}(\pwM {\rho}{\tau}(r)) \, {\rm d}r +\mathscr E(\piecewiseConstant \rho\tau(t)) \leq \mathscr E(\underpiecewiseConstant \rho\tau(s)) \qquad \text{for all } 0\leq s \leq t \leq T\,, \end{align} and there exists a constant $C>0$ such that for all $\tau>0$ \begin{gather} \label{est-diss} \int_0^T \piecewiseConstant{\mathsf{W}}\tau (t)\, \mathrm{d} t \leq C, \qquad \int_0^T \mathscr{S}(\pwM {\rho}{\tau}(t)) \, {\rm d}t \leq C. \end{gather} Finally, there exists a $\sigma$-sequentially compact subset $K\subset X$ such that \begin{equation}\label{aprio1} \piecewiseConstant \rho{\tau}(t),\, \underpiecewiseConstant \rho{\tau}(t),\, \pwM \rho{\tau}(t)\, \in K \quad \text{$\forall\, t \in [0,T]$ and $\tau >0$}. 
\end{equation} \end{prop} \begin{proof} From \eqref{ineqenerg} we directly deduce, for $t \in (t_{\tau}^{j-1}, t_{\tau}^j]$, \begin{equation} \label{eq4} \mathscr{W}(t-t_{\tau}^{j-1}, \rho_\tau^{j-1}, \pwM {\rho}{\tau}(t)) + \int_{t_{\tau}^{j-1}}^{t} \mathscr{S}(\pwM {\rho}{\tau}(r)) \, {\rm d}r +\mathscr E(\pwM {\rho}{\tau}(t))\leq \mathscr E(\rho_\tau^{j-1}), \end{equation} which implies \eqref{discr-enineq-var}; in particular, for $t= t_{\tau}^j$ one has \begin{equation} \label{eq4bis} \int_{t_{\tau}^{j-1}}^{t_{\tau}^j}\piecewiseConstant {\mathsf{W}}{\tau}(t)\, {\rm d}t + \int_{t_{\tau}^{j-1}}^{t_{\tau}^j} \mathscr{S}(\pwM {\rho}{\tau}(t)) \, {\rm d}t +\mathscr E(\rho_\tau^{j})\leq \mathscr E(\rho_\tau^{j-1}). \end{equation} The estimate~\eqref{discr-enineq} follows upon summing \eqref{eq4bis} over the index $j$. Furthermore, applying \eqref{eq1}--\eqref{monotonia} one deduces for all $1 \leq n \leq N_\tau$ that \begin{equation} \label{est-basis} \mathscr{W}(n\tau,\rho_0,\rho_\tau^{n}) +\mathscr E(\rho_\tau^{n}) \leq \int_0^{t_{\tau}^n}\piecewiseConstant {\mathsf{W}}{\tau}(r)\, {\rm d}r + \int_0^{t_{\tau}^n} \mathscr{S}(\pwM {\rho}{\tau}(r)) \, {\rm d}r +\mathscr E(\rho_\tau^{n}) \leq \mathscr E(\rho_0). \end{equation} In particular, \eqref{est-diss} follows, as well as $\sup_{n=0,\ldots,N_\tau} \mathscr E(\rho_\tau^{n}) \leq C$. Then, \eqref{eq4} also yields $\sup_{t\in [0,T]} \mathscr E(\pwM {\rho}{\tau}(t)) \leq C$. Next we show the two estimates \begin{align} \label{est1} \mathscr{W}(2T, \rho^*,\piecewiseConstant \rho{\tau}(t)) +\mathscr E(\piecewiseConstant \rho{\tau}(t)) \leq C, \\ \label{est2} \mathscr{W}(2T, \rho^*, \pwM {\rho}{\tau}(t)) + \mathscr E(\pwM {\rho}{\tau}(t)) \leq C \,. \end{align} Recall that $\rho^*$ is introduced in Assumption~\ref{ass:abstract}. To deduce \eqref{est1}, we use the triangle inequality for $\mathscr{W}$. Preliminarily, we observe that $ \mathscr{W}(t, \rho^*, \rho_0) <{+\infty}$ for all $t>0$. 
In particular, let us fix an arbitrary $ m \in \{1,\ldots, N_\tau\}$ and let $C^* := \mathscr{W}(t_{\tau}^{ m}, \rho^*, \rho_0) $. We have for any $n$, \begin{align*} \mathscr{W}(2T, \rho^*,\rho_\tau^n) &\leq \mathscr{W}(2T-t_{\tau}^n, \rho^*, \rho_0) +\mathscr{W}(t_{\tau}^n, \rho_0,\rho_\tau^n) \stackrel{(1)}{\leq} \mathscr{W}(t_{\tau}^{ m}, \rho^*, \rho_0) +\mathscr{W}(t_{\tau}^n, \rho_0,\rho_\tau^n) \\ & \leq C^* +\mathscr{W}(t_{\tau}^n, \rho_0,\rho_\tau^n) \quad \text{for all } n \in \{1, \ldots,N_\tau\}, \end{align*} where for (1) we have used that $ \mathscr{W}(2T-t_{\tau}^n, \rho^*, \rho_0) \leq \mathscr{W}(t_{\tau}^{ m}, \rho^*, \rho_0) $ since $2T- t_{\tau}^n \geq t_{\tau}^{ m} $. Thus, in view of \eqref{est-basis} we deduce \begin{align} \notag \mathscr{W}(2T, \rho^*,\piecewiseConstant \rho{\tau}(t)) +\mathscr E(\piecewiseConstant \rho{\tau}(t)) &\leq C^* +\mathscr{W}(\piecewiseConstant \mathsf{t} \tau(t), \rho_0, \piecewiseConstant \rho{\tau}(t) ) +\mathscr E(\piecewiseConstant \rho{\tau}(t)) \\ &\leq C^* +\mathscr E(\rho_0) \leq C \quad \text{for all } t \in [0, T]\,, \label{eq5} \end{align} i.e.\ the desired \eqref{est1}. \par Likewise, combining \eqref{eq4} and \eqref{eq4bis} by means of the triangle inequality \eqref{eq1}, one has $ \mathscr{W}(t, \rho_0, \pwM {\rho}{\tau}(t)) + \mathscr E(\pwM {\rho}{\tau}(t)) \leq \mathscr E(\rho_0) $, whence \eqref{est2} with arguments similar to those in the previous lines. \end{proof} \subsection{Compactness result} \label{ss:compactness} The main result of this section, Theorem \ref{thm:abstract-GMM} below, states that $\GMMT{\mathscr E,\mathscr{W}}0T{\rho^\circ} $ is non-empty, and that any curve $\rho \in \GMMT{\mathscr E,\mathscr{W}}0T{\rho^\circ}$ fulfills an `abstract' version \eqref{limit-enineq} of the $(\mathscr E,\mathscr R,\mathscr R^*)$ Energy-Dissipation estimate~\eqref{EDineq}, obtained by passing to the limit in the discrete inequality~\eqref{discr-enineq}.
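Before entering the compactness argument, the role of the generalized slope \eqref{def:nuovo} in the limiting Energy-Dissipation estimate may be illustrated by a one-dimensional toy computation (ours, purely illustrative, not part of the framework): take $X=\mathbb{R}$, $\mathscr{W}(r,x,y)=(y-x)^2/(2r)$, and $\mathscr E(x)=x^2/2$, so that $J_r(x)=\{x/(1+r)\}$ and

```latex
\gen(r,x)=\min_{y\in\mathbb{R}}\Bigl\{\frac{(y-x)^2}{2r}+\frac{y^2}{2}\Bigr\}
         =\frac{x^2}{2(1+r)},
\qquad
\mathscr{S}(x)=\lim_{r\downarrow 0}\frac{\mathscr E(x)-\gen(r,x)}{r}
             =\lim_{r\downarrow 0}\frac{x^2}{2(1+r)}=\frac{x^2}{2}\,,
```

i.e.\ $\mathscr{S}(x)=\frac12|\mathscr E'(x)|^2=\mathscr R^*(x,-\mathscr E'(x))$ for the quadratic dissipation $\mathscr R^*(x,\xi)=\xi^2/2$: in this smooth setting the generalized slope coincides with the Fisher information, consistently with the inequality $\mathscr{S}\geq\mathscr{D}$ established in Theorem~\ref{th:slope-Fish} below.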
We recall the $\mathscr{W}$-action of a curve $\rho:[0,T]\to X$, defined in~\eqref{def-tot-var} as \begin{equation*} \VarW \rho ab := \sup \left \{ \sum_{j=1}^M \DVT{t^j - t^{j-1}}{\rho(t^{j-1})}{\rho(t^j)} \, : \ (t^j)_{j=0}^M \in \mathfrak{P}_f([a,b]) \right\} \end{equation*} for all $[a,b]\subset [0,T]$, where $\mathfrak{P}_f([a,b])$ is the set of all finite partitions of the interval $[a,b]$. We also introduce the \emph{relaxed generalized slope} $\mathscr{S}^-: {\mathrm D}(\mathscr E) \to [0,{+\infty}]$ of the driving energy functional $\mathscr E$, namely the relaxation of the generalized slope $\mathscr{S}$ along sequences with bounded energy: \begin{equation} \label{relaxed-nuovo} \mathscr{S}^-(\rho) := \inf\biggl\{ \liminf_{n\to\infty} \mathscr{S}(\rho_n) \, : \ \rho_n\weaksigmatoabs \rho, \ \sup_{n\in \mathbb{N}} \mathscr E(\rho_n) <{+\infty}\biggr\}\,. \end{equation} We are now in a position to state and prove the `abstract version' of Theorem \ref{thm:construction-MM}. \begin{theorem} \label{thm:abstract-GMM} Under \textbf{Assumption~\ref{ass:abstract}}, let $\rho^\circ \in \mathrm{D}(\mathscr E)$. Then, for every vanishing sequence $(\tau_k)_k$ there exist a (not relabeled) subsequence and a $\sigma$-continuous curve $\rho : [0,T]\to X$ such that $\rho(0) = \rho^\circ$, and \begin{equation} \label{convergences-interpolants} \piecewiseConstant \rho{\tau_k}(t),\, \underpiecewiseConstant \rho{\tau_k}(t),\, \pwM \rho{\tau_k}(t) \weaksigmatoabs\rho(t) \qquad \text{for all } t \in [0,T], \end{equation} and $\rho$ satisfies the Energy-Dissipation estimate \begin{equation} \label{limit-enineq} \VarW \rho0t + \int_0^t \mathscr{S}^-(\rho(r)) \mathrm{d} r +\mathscr E(\rho(t)) \leq \mathscr E(\rho^\circ) \qquad \text{for all } t \in [0,T].
\end{equation} \end{theorem} \begin{remark} \label{rmk:generaliz-topol} \upshape Theorem \ref{thm:abstract-GMM} could be extended to a topological space where the cost $\mathscr{W}$ and the energy functional $\mathscr E$ satisfy the properties listed at the beginning of the section. \end{remark} \begin{proof} Consider a sequence $\tau_k \downarrow 0$ as $k\to\infty$. \emph{Step 1: Construct the limit curve $\ol\rho$. } We first define the limit curve $\ol\rho$ on the set $A:= \{0\} \cup N$, with $N$ a countable dense subset of $(0,T]$. Indeed, in view of \eqref{aprio1}, with a diagonalization procedure we find a function $\overline\rho : A \to X$ and a (not relabeled) subsequence such that \begin{equation} \label{step1-constr} \piecewiseConstant \rho{\tau_k}(t) \weaksigmatoabs\overline\rho(t) \quad \text{for all } t \in A \quad \text{and} \quad \overline\rho(t) \in K \text{ for all } t \in A . \end{equation} In particular, $ \overline\rho(0)=\rho^\circ$. We next show that $\ol\rho$ can be uniquely extended to a $\sigma$-continuous curve $\ol\rho:[0,T]\to X$. Let $s,t\in A$ with $s<t$. By the lower-semicontinuity property~\eqref{lower-semicont} we have \begin{align*} \DVT {t-s} {\ol\rho(s)} {\ol\rho(t)} &\leq\liminf_{k\to\infty} \DVT {t-s} {\piecewiseConstant\rho{\tau_k}(s)} {\piecewiseConstant\rho{\tau_k}(t)} \stackrel{\eqref{density-W}}\leq\liminf_{k\to\infty} \int_{\underpiecewiseConstant {\mathsf{t}}{\tau_{k}} (s)}^{\piecewiseConstant {\mathsf{t}}{\tau_{k}} (t)} \piecewiseConstant {\mathsf{W}}{\tau_{k}} (r) \,\mathrm{d} r\\ & \stackrel{(1)}{\leq} \liminf_{k\to\infty} \mathscr E(\underpiecewiseConstant \rho{\tau_{k}} (s) ) \stackrel{(2)}{\leq} \mathscr E(\rho^\circ), \end{align*} where {(1)} follows from \eqref{discr-enineq} (using the lower bound on $\mathscr E$), and {(2)} is due to the fact that the sequence of discrete energies $n \mapsto \mathscr E(\rho_{\tau_k}^n)$ is nonincreasing.
By the property~\eqref{e:psi6} of $\mathscr{W}$, this estimate is a form of uniform continuity of $\ol\rho$, and we now use this to extend $\ol\rho$. Fix $t\in [0,T]\setminus A$, and choose a sequence $t_m\in A$, $t_m\to t$, with the property that $\ol\rho(t_m)$ $\sigma$-converges to some $\tilde\rho$. For any sequence $s_m\in A$, $s_m\to t$, we then have \[ \sup_{m} \DVT {|t_m-s_m|}{\ol\rho(s_m)}{\ol\rho(t_m)} < {+\infty}, \] and since $|t_m-s_m|\to0$, property~\eqref{e:psi6} implies that ${\ol\rho(s_m)}\weaksigmatoabs \tilde \rho$. This implies that along any converging sequence $t_m\in A$, $t_m\to t$ the sequence $\ol\rho(t_m)$ has the same limit; therefore there is a unique extension of $\ol\rho$ to $[0,T]$, that we again indicate by $\ol\rho$. By again applying the lower-semicontinuity property~\eqref{lower-semicont} we find that \[ \DVT {|t-s|}{\ol\rho(s)}{\ol\rho(t)} \leq \mathscr E(\rho_0) \qquad \text{for all }t,s\in [0,T], \ s\not=t, \] and therefore the curve $[0,T]\ni t\mapsto \ol\rho(t)$ is $\sigma$-continuous. \emph{Step 2: Show convergence at all $t\in [0,T]$.} Now fix $t\in [0,T]$; we show that $\piecewiseConstant \rho{\tau_k}(t)$, $\underpiecewiseConstant\rho{\tau_k}(t)$, and $\pwM\rho{\tau_k}(t)$ each converge to $\ol\rho(t)$. Since $\piecewiseConstant \rho{\tau_k}(t)\in K$, there exists a convergent subsequence $\piecewiseConstant\rho{\tau_{k_j}}(t)\weaksigmatoabs\tilde \rho$. Take any $s\in A$ with $s\not=t$. Then \begin{align*} \DVT {|t-s|}{\tilde \rho}{\ol\rho(s)} \leq \liminf_{j\to\infty} \DVT {|t-s|}{\piecewiseConstant\rho{\tau_{k_j}}(t)}{\piecewiseConstant\rho{\tau_{k_j}}(s)} \leq \mathscr E(\rho_0)\leq C, \end{align*} by the same argument as above. Taking the limit $s\to t$, property~\eqref{e:psi6} and the continuity of $\ol\rho$ imply $\tilde\rho= \ol\rho(t)$. Therefore $\piecewiseConstant\rho{\tau_{k_j}}(t)\weaksigmatoabs \ol\rho(t)$ along each subsequence $\tau_{k_j}$, and consequently also along the whole sequence $\tau_k$. 
Estimates \eqref{discr-enineq-var} \& \eqref{discr-enineq} also give at each $t\in (0,T]$ \[ \limsup_{k\to\infty} \mathscr{W}(t-\underpiecewiseConstant \mathsf{t}{\tau_k}(t), \underpiecewiseConstant \rho{\tau_k}(t), \piecewiseConstant {\rho}{\tau_k}(t)) \leq \mathscr E(\rho_0), \qquad \limsup_{k\to\infty} \mathscr{W}(t-\underpiecewiseConstant \mathsf{t}{\tau_k}(t), \underpiecewiseConstant \rho{\tau_k}(t), \pwM {\rho}{\tau_k}(t)) \leq \mathscr E(\rho_0), \] so that, again using the compactness information provided by \eqref{aprio1} and property \eqref{e:psi6} of the cost $\mathscr{W}$, it is immediate to conclude \eqref{convergences-interpolants}. \emph{Step 3: Derive the energy-dissipation estimate.} Finally, let us observe that \begin{equation} \label{liminf-var} \liminf_{k\to\infty} \int_0^{\piecewiseConstant {\mathsf{t}}{\tau_k}(t)} \piecewiseConstant {\mathsf{W}}{\tau_k}(r) \mathrm{d} r \geq \VarW \rho0t \quad \text{for all } t \in [0,T]. \end{equation} Indeed, for any partition $\{ 0=t^0<\ldots <t^j <\ldots<t^M = t\}$ of $[0,t]$ we find that \begin{align*} \sum_{j=1}^{M} \DVT{t^j-t^{j-1}}{\rho(t^{j-1})}{\rho(t^j)} &\stackrel{(1)}{\leq} \liminf_{k\to\infty} \sum_{j=1}^{M} \DVT{\piecewiseConstant {\mathsf{t}}{\tau_k}(t^j)-{\piecewiseConstant {\mathsf{t}}{\tau_k}(t^{j-1})}}{\piecewiseConstant \rho{\tau_k}(t^{j-1})}{\piecewiseConstant \rho{\tau_k}(t^j)} \\ &= \liminf_{k\to\infty} \int_0^{\piecewiseConstant {\mathsf{t}}{\tau_k}(t)} \piecewiseConstant {\mathsf{W}}{\tau_k}(r) \,\mathrm{d} r, \end{align*} with (1) due to \eqref{lower-semicont}. Then \eqref{liminf-var} follows by taking the supremum over all partitions. 
On the other hand, by Fatou's Lemma we find that \[ \liminf_{k\to\infty} \int_0^{\piecewiseConstant {\mathsf{t}}{\tau_k}(t)} \mathscr{S} (\pwM \rho{\tau_k}(r)) \,\mathrm{d} r \geq \int_0^t \mathscr{S}^-(\rho(r)) \mathrm{d} r, \] while the lower semicontinuity of $\mathscr E$ gives \[ \liminf_{k\to\infty} \mathscr E(\piecewiseConstant \rho{\tau_k}(t)) \geq \mathscr E(\rho(t)) \] so that \eqref{limit-enineq} follows from taking the $\liminf_{k\to\infty}$ in \eqref{discr-enineq} for $s=0$. \end{proof} \subsection{Proof of Theorem~\ref{thm:construction-MM}} \label{ss:pf-of-existence} Having established the abstract compactness result of Theorem~\ref{thm:abstract-GMM}, we now apply this to the proof of Theorem~\ref{thm:construction-MM}. As described above, under \textbf{Assumptions~\ref{ass:V-and-kappa}, \ref{ass:Psi}, and~\ref{ass:S}} the conditions of Theorem~\ref{thm:abstract-GMM} are fulfilled, and Theorem~\ref{thm:abstract-GMM} provides us with a curve $\rho:[0,T]\to{\mathcal M}^+(V)$, continuous with respect to setwise convergence, such that \begin{equation} \label{ineq:pf-abs-to-concrete} \VarW \rho 0t+ \int_0^t \mathscr{S}^-(\rho(r)) \mathrm{d} r +\mathscr E(\rho(t)) \leq \mathscr E(\rho^\circ) \qquad \text{for all } t \in [0,T]. \end{equation} To conclude the proof of Theorem~\ref{thm:construction-MM}, we now show that the Energy-Dissipation inequality \eqref{EDineq} can be derived from \eqref{ineq:pf-abs-to-concrete}. We first note that Corollary~\ref{c:exist-minimizers} implies the existence of a flux ${\boldsymbol j}$ such that $(\rho,{\boldsymbol j})\in \mathbb{C}E 0T$ and $\VarW \rho 0T = \int_0^T \mathscr R(\rho_t,{\boldsymbol j}_t)\,\mathrm{d} t$. Then from Corollary~\ref{cor:cor-crucial} below, we find that $\mathscr{S}^-(\rho(r))\geq \mathscr{D}(\rho(r))$ for all $r \in [0,T]$.
Combining these results with~\eqref{ineq:pf-abs-to-concrete} we find the required estimate~\eqref{EDineq}. It remains to prove the inequality $\mathscr{S}^-\geq \mathscr{D}$, which follows from the corresponding inequality $ \mathscr{S}\geq \mathscr{D}$ for the non-relaxed slope (Theorem~\ref{th:slope-Fish}) combined with the lower semicontinuity of $\mathscr{D}$ that is assumed in Theorem~\ref{thm:construction-MM}. This is the topic of the next section. \subsection{The generalized slope bounds the Fisher information} We recall the definition of the generalized slope $\mathscr{S}$ from~\eqref{def:nuovo}: \[ \mathscr{S}(\rho):= \limsup_{r \downarrow 0}\sup_{\mu \in X} \frac1r \Bigl\{ \mathscr E(\rho) -\mathscr E(\mu)-\mathscr{W}(r,\rho,\mu) \Bigr\} \,. \] Given the structure of this definition, the proof of the inequality $\mathscr{S}\geq \mathscr{D}$ naturally proceeds by constructing an admissible curve $(\rho_t, {\boldsymbol j}_t)\in\mathbb{C}E0T$ with $\rho_t\restr{t=0}=\rho$ and such that the expression in braces can be related to $\mathscr{D}(\rho)$. For the systems of this paper, the construction of such a curve faces three technical difficulties: the first is that $\rho_t$ needs to remain nonnegative, the second is that $\upphi'$ may be unbounded at zero, and the third is that the function ${\mathrm D}_\upphi(u,v)$ in~\eqref{eq:182} that defines $\mathscr{D}$ may be infinite when $u$ or $v$ is zero (see Example~\ref{ex:D}). We first prove a lower bound for the generalized slope $\mathscr{S}$ involving ${\mathrm D}_\upphi^-$, under the basic conditions on the $(\mathscr E, \mathscr R, \mathscr R^*)$ system presented in Section \ref{s:assumptions}. \begin{theorem} \label{th:slope-Fish} Assume \ref{ass:V-and-kappa}, \ref{ass:Psi}, and~\ref{ass:S}.
Then \begin{equation} \label{ineq:nuovo-FI1} \mathscr{S} (\rho) \geq \frac12\iint_E {\mathrm D}^-_\upphi(u(x),u(y))\,{\boldsymbol\vartheta}_\pi(\mathrm{d} x,\mathrm{d} y) \quad \text{for all } \rho=u\pi \in {\mathrm D}(\mathscr E). \end{equation} \end{theorem} \begin{proof} Let us fix $\rho_0=u_0\pi\in {\mathrm D}(\mathscr E)$, a bounded measurable skew-symmetric map \begin{displaymath} \xi:E\to \mathbb{R} \quad\text{with } \xi(y,x)=-\xi(x,y),\quad |\xi(x,y)|\le \Xi<\infty \quad\text{for every $(x,y)\in E$,} \end{displaymath} the Lipschitz functions $q(r):=\min(r, 2(r-1/2)_+)$ (approximating the identity far from $0$) and $h(r):=\max(0,\min(2-r, 1))$ (cutoff for $r\ge 2$), and the Lipschitz regularization of $\upalpha$ \begin{displaymath} \upalpha_\varepsilons(u,v):=\varepsilons q(\upalpha(u,v)/\varepsilons). \end{displaymath} We introduce the field ${\mathrm G}_\varepsilons:E\times\mathbb{R}_+^2\to \mathbb{R}$ \begin{equation} \label{eq:166} {\mathrm G}_\varepsilons(x,y;u,v):=\xi(x,y)g_\varepsilons(u,v)\,, \end{equation} where \[ g_\varepsilons(u,v):=\upalpha_\varepsilons(u,v)\,h(\varepsilons \max(u, v))q(\min(1,\min(u,v)/\varepsilons))\,, \] which vanishes if $\upalpha(u,v)<\varepsilons/2$ or $\min(u,v)<\varepsilons/2$ or $\max(u, v)\ge 2/\varepsilons$, and coincides with $\upalpha$ if $\upalpha\ge \varepsilons$, $\min(u,v)\ge \varepsilons$, and $\max(u, v)\le 1/\varepsilons$.
Since $g_\varepsilons$ is Lipschitz, it is easy to check that ${\mathrm G}_\varepsilons$ satisfies all the assumptions (\ref{subeq:G}a,b,c,d) and also \eqref{eq:123a} for $a=0$, since $0=g_\varepsilons(0,0)\le g_\varepsilons(0,v)$ for every $v\ge 0$ and every $(x,y)\in E$. It follows that for every nonnegative $u_0\in L^1(V,\pi)$ there exists a unique nonnegative solution $u^\varepsilons\in \rmC^1([0,\infty);L^1(V,\pi))$ of the Cauchy problem \eqref{eq:119-Cauchy} induced by ${\mathrm G}_\varepsilons$ with initial datum $u_0$ and the same total mass. Henceforth, we set $\rho_t^\varepsilons = u_t^\varepsilons\pi$ for all $t\ge 0$. Setting $2{\boldsymbol j}_t^\varepsilons(\mathrm{d} x,\mathrm{d} y):=w_t^\varepsilons(x,y)\, {\boldsymbol\vartheta}_\pi(\mathrm{d} x,\mathrm{d} y) $, where $w_t^\varepsilons(x,y):={\mathrm G}_\varepsilons(x,y;u_t^\varepsilons(x),u_t^\varepsilons(y))$, it is also easy to check that $(\rho^\varepsilons,{\boldsymbol j}^\varepsilons)\in \mathbb{C}ER 0T$, since $g_\varepsilons(u,v)\le \upalpha(u,v)$ and \begin{displaymath} |w_t^\varepsilons(x,y)|\le \Xi\, \upalpha(u_t^\varepsilons(x),u_t^\varepsilons(y))\,\chi_{U_\varepsilons(t)}(x,y)\qquad \text{for $(x,y)\in E$}\,, \end{displaymath} where $U_\varepsilons(t):=\{(x,y)\in E: g_\varepsilons(u_t^\varepsilons(x),u_t^\varepsilons(y))>0\}$, thereby yielding \[ \Upsilon(u_t^\varepsilons(x),u_t^\varepsilons(y),w_t^\varepsilons(x,y))\le \Psi(\Xi)\upalpha(2/\varepsilons,2/\varepsilons)\,.
\] Finally, recalling \eqref{eq:102} and \eqref{eq:105}, we get \begin{displaymath} |\rmB_\upphi(u_t^\varepsilon(x),u_t^\varepsilon(y),w_t^\varepsilon(x,y))| \le \Xi \big(\upphi'(2/\varepsilon)-\upphi'(\varepsilon/2)\big) \upalpha(2/\varepsilon,2/\varepsilon). \end{displaymath} Thus, we can apply Theorem \ref{th:chain-rule-bound}, obtaining \begin{equation} \label{eq:167} \mathscr E(\rho_0)-\mathscr E(\rho_\tau^\varepsilon)= -\frac{1}{2}\int_0^\tau \iint_E\rmB_\upphi(u_t^\varepsilon(x),u_t^\varepsilon(y),w_t^\varepsilon(x,y))\,{\boldsymbol\vartheta}_\pi(\mathrm{d} x,\mathrm{d} y)\,\mathrm{d} t, \end{equation} and consequently \begin{equation} \label{eq:168} \begin{aligned} \mathscr{S}(\rho_0)&\ge \limsup_{\tau\downarrow0}\tau^{-1} \Big(\mathscr E(\rho_0)-\mathscr E(\rho_\tau^\varepsilon)- \int_0^\tau \mathscr R(\rho_t^\varepsilon,{\boldsymbol j}_t^\varepsilon)\,\mathrm{d} t\Big) \\&= \frac{1}{2}\iint_E \Big(\rmB_\upphi(u_0(x),u_0(y),w_0^\varepsilon(x,y))- \Upsilon(u_0(x),u_0(y), w_0^\varepsilon(x,y))\Big)\,{\boldsymbol\vartheta}_\pi(\mathrm{d} x,\mathrm{d} y). \end{aligned} \end{equation} Let us now set $\Delta_k$ to be the truncation of $\upphi'(u_0(x))-\upphi'(u_0(y))$ to $[-k,k]$, i.e.\ \[ \Delta_k(x,y):=\max\Bigl\{-k,\min\bigl[k, \upphi'(u_0(x))-\upphi'(u_0(y))\bigr]\Bigr\}\,, \] and $\xi_k(x,y):= (\Psi^*)'(\Delta_k(x,y))$ for each $k\in\mathbb{N}$. Notice that $\xi_k$ is a bounded measurable skew-symmetric map satisfying $|\xi_k(x,y)|\le (\Psi^*)'(k)$ for every $(x,y)\in E$ and $k\in\mathbb{N}$. Therefore, inequality \eqref{eq:168} holds for $w_0^\varepsilon(x,y) = \xi_k(x,y)\,g_\varepsilon(u_0(x),u_0(y))$, $(x,y)\in E$.
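For a concrete feel for the truncation step (illustrative only): with the quadratic choice $\Psi(r)=r^2/2$, so that $\Psi^*(s)=s^2/2$ and $(\Psi^*)'(s)=s$, the field $\xi_k$ is just the clamped difference of the entropy derivatives.

```python
def clamp(r, k):
    # Delta_k: truncation of r to [-k, k]
    return max(-k, min(k, r))

def xi_k(dphi, k):
    # xi_k = (Psi^*)'(Delta_k); for Psi(r) = r^2/2 one has (Psi^*)'(s) = s
    return clamp(dphi, k)

d = 3.7
assert xi_k(-d, 2.0) == -xi_k(d, 2.0)   # skew-symmetry under swapping x and y
assert abs(xi_k(d, 2.0)) <= 2.0         # boundedness used in the proof
```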
We then observe from Lemma~\ref{le:trivial-but-useful}\ref{le:trivial-but-useful:ineq} that \begin{equation} \label{eq:171} \begin{aligned} (\upphi'(u_0(x))-\upphi'(u_0(y)))\cdot \xi_k(x,y)&\ge \Delta_k(x,y)\xi_k(x,y) \\ &= \Psi(\xi_k(x,y))+\Psi^*(\Delta_k(x,y))\,, \end{aligned} \end{equation} and from $g_\varepsilon(u,v)\le \upalpha(u,v)$ that \begin{equation} \label{eq:172} \begin{aligned} \Upsilon(u_0(x),u_0(y), w_0^\varepsilon(x,y)) &=\Psi\left(\frac{\xi_k(x,y) g_\varepsilon(u_0(x),u_0(y))}{\upalpha(u_0(x),u_0(y))}\right) \upalpha(u_0(x),u_0(y)) \\ &\le \Psi(\xi_k(x,y))\upalpha(u_0(x),u_0(y))\,. \end{aligned} \end{equation} Substituting these bounds in \eqref{eq:168} and passing to the limit as $\varepsilon\downarrow0$, we obtain \begin{equation} \label{eq:173} \mathscr{S}(\rho_0)\ge \frac{1}{2}\iint_E \Psi^*(\Delta_k(x,y))\upalpha(u_0(x),u_0(y))\, {\boldsymbol\vartheta}_\pi(\mathrm{d} x,\mathrm{d} y)\,. \end{equation} We eventually let $k\uparrow\infty$ and obtain \eqref{ineq:nuovo-FI1}. \end{proof} In the next proposition we finally bound $\mathscr{S}$ from below by the Fisher information, relying on the existence of a solution to the $(\mathscr E,\mathscr R,\mathscr R^*)$ system, as shown in Section~\ref{s:ex-sg}. \begin{prop} \label{p:slope-geq-Fish} Let us suppose that for every $\rho\in {\mathrm D}(\mathscr E)$ there exists a solution to the $(\mathscr E,\mathscr R,\mathscr R^*)$ system. Then the generalized slope bounds the Fisher information from above: \begin{equation} \label{ineq:nuovo-FI} \mathscr{S} (\rho) \geq \mathscr{D} (\rho) \quad \text{for all } \rho \in {\mathrm D}(\mathscr E). \end{equation} \end{prop} \begin{proof} Let $\rho_t = u_t \pi$ be a solution to the $(\mathscr E,\mathscr R,\mathscr R^*)$ system with initial datum $\rho_0\in {\mathrm D}(\mathscr E)$.
Then we can find a family $({\boldsymbol j}_t)_{t\ge 0}$ in ${\mathcal M}(E)$ such that $(\rho,{\boldsymbol j} )\in \CE 0{+\infty}$ and \[ \mathscr E(\rho_t) +\int_0^t \bigl[\mathscr R(\rho_r,{\boldsymbol j}_r)+\mathscr{D}(\rho_r)\bigr] \,\mathrm{d} r = \mathscr E(\rho_0)\qquad \text{for all }t\geq0. \] Therefore \begin{align*} \mathscr{S}(\rho_0) &\geq \liminf_{t\downarrow 0} \frac1t \Bigl[ \mathscr E(\rho_0) - \mathscr E(\rho_t) - \DVT t{\rho_0}{\rho_t}\Bigr]\\ &\geq \liminf_{t\downarrow 0} \frac1t \Bigl[ \mathscr E(\rho_0) - \mathscr E(\rho_t) - \int_0^t \mathscr R(\rho_r,{\boldsymbol j}_r) \,\mathrm{d} r\Bigr] = \liminf_{t\downarrow 0} \frac1t \int_0^t \mathscr{D}(\rho_r)\, \mathrm{d} r\,. \end{align*} Since $u_t\to u_0$ in $L^1(V;\pi)$ as $t\to0$ and since $\mathscr{D}$ is lower semicontinuous with respect to $L^1(V,\pi)$-convergence (see the proof of Proposition~\ref{PROP:lsc}), with a change of variables we find \[ \mathscr{S}(\rho_0) \geq \liminf_{t\downarrow 0} \int_0^1 \mathscr{D}(\rho_{ts})\, \mathrm{d} s \geq \mathscr{D}(\rho_0). \qedhere \] \end{proof} We then easily get the desired lower bound for $\mathscr{S}^-$ in terms of $\mathscr{D}$, under the condition that the latter functional is lower semicontinuous (recall that Proposition \ref{PROP:lsc} provides sufficient conditions for the lower semicontinuity of $\mathscr{D}$): \begin{cor} \label{cor:cor-crucial} Let us suppose that \textbf{Assumptions~\ref{ass:V-and-kappa}, \ref{ass:Psi}, \ref{ass:S}} hold and that $\mathscr{D}$ is lower semicontinuous with respect to setwise convergence. Then \begin{equation} \label{desired-rel} \mathscr{S}^- (\rho) \geq \mathscr{D} (\rho) \quad \text{for all } \rho \in {\mathrm D}(\mathscr E).
\end{equation} \end{cor} \begin{remark} \label{rmk:Mark} The combination of Theorem \ref{th:slope-Fish}, Proposition \ref{p:slope-geq-Fish}, and Corollary~\ref{cor:cor-crucial} illustrates why we introduced both ${\mathrm D}_\upphi$ and ${\mathrm D}^-_\upphi$. For the duration of this remark, consider both the functional $\mathscr{D}$ that is defined in~\eqref{eq:def:D} in terms of ${\mathrm D}_\upphi$, and a corresponding functional $\mathscr{D}^-$ defined in terms of the function ${\mathrm D}_\upphi^-$: \[ \mathscr{D}^- (\rho) := \frac12\iint_E {\mathrm D}^-_\upphi\bigl(u(x),u(y)\bigr)\, {\boldsymbol\vartheta}_\pi(\mathrm{d} x,\mathrm{d} y) \qquad \text{for } \rho = u\pi\,. \] In the two guiding cases of Example~\ref{ex:Dpm}, ${\mathrm D}_\upphi$ is convex and lower semicontinuous, whereas ${\mathrm D}_\upphi^-$ is lower semicontinuous but not convex. As a result, $\mathscr{D}$ is lower semicontinuous with respect to setwise convergence, but $\mathscr{D}^-$ is not: consider, e.g., a sequence $\rho_n$ converging setwise to $\rho$, with $\mathrm{d}\rho_n/\mathrm{d}\pi$ given by characteristic functions of sets $A_n$ chosen in such a way that the limit density $\mathrm{d}\rho/\mathrm{d}\pi$ is strictly positive and non-constant; then $\mathscr{D}^-(\rho_n)=0$ for all $n$, while $\mathscr{D}^-(\rho)>0$.
Setwise lower semicontinuity of $\mathscr{D}$ is important for two reasons: first, it is required for the stability of solutions of the Energy-Dissipation balance under convergence in some parameter (evolutionary $\Gamma$-convergence), which is a hallmark of a good variational formulation; secondly, the proof of existence via the Minimizing-Movement approach requires the bound~\eqref{desired-rel}, for which $\mathscr{D}$ also needs to be lower semicontinuous. This explains the importance of ${\mathrm D}_\upphi$, and it also explains why we defined the Fisher information $\mathscr{D}$ in terms of ${\mathrm D}_\upphi$ and not in terms of ${\mathrm D}_\upphi^-$. On the other hand, ${\mathrm D}_\upphi^-$ is straightforward to determine, and in addition the weaker control provided by ${\mathrm D}_\upphi^-$ is still sufficient for the chain rule: it is ${\mathrm D}_\upphi^-$ that appears on the right-hand side of~\eqref{eq:CR2}. Note that if ${\mathrm D}_\upphi^-$ itself is convex, then it coincides with ${\mathrm D}_\upphi$. \end{remark} \appendix \section{Continuity equation} \label{appendix:proofs} In this Section we complete the analysis of the continuity equation by carrying out the proofs of Lemma \ref{l:cont-repr} and Corollary \ref{c:narrow-ct}.
\begin{proof}[Proof of Lemma \ref{l:cont-repr}] The distributional identity \eqref{eq:90} yields that for every $\zeta\in \mathrm{C}_{\mathrm b}(V,\tau)$ the map \[ t\mapsto \rho_t(\zeta): = \int_V \zeta(x)\, \rho_t (\mathrm{d} x )\quad\text{belongs to $W^{1,1}(a,b)$,} \] with distributional derivative \begin{equation} \frac{\mathrm{d}}{\mathrm{d} t}\rho_t(\zeta) = \iint_{E} \overline\nabla\zeta\,\mathrm{d} {\boldsymbol j}_t= -\int_V \zeta\,\mathrm{d}\odiv {\boldsymbol j}_t \quad\text{for almost all $ t \in [a,b]$.}\label{eq:13} \end{equation} Hence, setting $\mathfrak d_t:=|\odiv {\boldsymbol j}_t|\in {\mathcal M}^+(V)$, we have \begin{equation} \label{distributional-derivative} \left| \frac{\mathrm{d}}{\mathrm{d} t}\rho_t(\zeta)\right| \leq \int_V |\zeta| \,\mathrm{d} \mathfrak d_t\le \|\zeta\|_{\mathrm{C}_{\mathrm b}(V)}|\odiv {\boldsymbol j}_t|(V)\le 2\|\zeta\|_{\mathrm{C}_{\mathrm b}(V)}| {\boldsymbol j}_t|(E), \end{equation} where we used the fact that \begin{equation*} \label{eq:53} \mathfrak d_t=|{\mathsf x}_\sharp ({\boldsymbol j}_t-{\mathsf s}_\sharp {\boldsymbol j}_t)| = |{\mathsf x}_\sharp {\boldsymbol j}_t-{\mathsf y}_\sharp {\boldsymbol j}_t|\le | {\mathsf x}_\sharp {\boldsymbol j}_t|+ |{\mathsf y}_\sharp{\boldsymbol j}_t|, \end{equation*} which implies \[ \mathfrak d_t (V) \le 2|{\boldsymbol j}_t|(E). \] Hence, the set $L_\zeta$ of the Lebesgue points of $t\mapsto \rho_t(\zeta)$ has full Lebesgue measure. Choosing $\zeta\equiv 1$ one immediately recognizes that $\rho_t(V)$ is (essentially) constant: it is not restrictive to normalize it to $1$ for convenience.
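On a finite graph the identity \eqref{eq:13} and the conservation of total mass can be checked directly. The sketch below assumes the gradient convention $\overline\nabla\zeta(x,y)=\zeta(y)-\zeta(x)$ and realizes the divergence as the difference of the two marginals, matching $\mathfrak d_t=|{\mathsf x}_\sharp {\boldsymbol j}_t-{\mathsf y}_\sharp {\boldsymbol j}_t|$ above.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
J = rng.random((n, n))          # a flux j(x, y) on E = V x V (off-diagonal)
np.fill_diagonal(J, 0.0)

# graph divergence: (div j)(x) = x-marginal minus y-marginal
div_J = J.sum(axis=1) - J.sum(axis=0)

zeta = rng.random(n)            # an observable on V
# assumed gradient convention: (nabla zeta)(x, y) = zeta(y) - zeta(x)
grad_zeta = zeta[None, :] - zeta[:, None]

# d/dt rho_t(zeta) = sum_E grad(zeta) * j  =  - sum_V zeta * div(j)
lhs = (grad_zeta * J).sum()
rhs = -(zeta * div_J).sum()
assert np.isclose(lhs, rhs)

# choosing zeta == 1 shows that the total mass rho_t(V) is constant
assert np.isclose(div_J.sum(), 0.0)
```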
Let us now consider a countable set $Z=\{\zeta_k\}_{k\in \mathbb{N}}$ of uniformly bounded functions in $\mathrm{C}_{\mathrm b}(V)$ such that \begin{displaymath} |\zeta_k|\le 1,\quad \mathsf d(\mu,\nu):=\sum_{k=1}^\infty 2^{-k}\Big|\int_V \zeta_k\,\mathrm{d}(\mu-\nu)\Big| \end{displaymath} is a distance inducing the weak topology in $ {\mathcal M}^+(V) $ (see e.g.~\cite[\S\,5.1.1]{AmbrosioGigliSavare08}). By introducing the set $L_Z := \bigcap_{\zeta \in Z} L_\zeta$, it follows from \eqref{distributional-derivative} that \begin{equation} \label{eq:10} \mathsf d(\rho_s,\rho_t)\le 2 \int_s^t |{\boldsymbol j}_r|(E)\, \mathrm{d} r, \end{equation} showing that the restriction of $\rho$ to $L_Z$ is continuous in ${\mathcal M}^+(V)$. Estimate~\eqref{distributional-derivative} also shows that for all $s,t\in L_Z$ with $s \leq t$ we have \begin{equation} |\rho_t(\zeta) {-} \rho_s(\zeta) | \leq \int_s^t \int_V |\zeta|\,\mathrm{d}\mathfrak d_r\,\mathrm{d} r\le 2\| \zeta\|_{\mathrm{C}_{\mathrm b}(V)} \int_s^t |{\boldsymbol j}_r|(E)\, \mathrm{d} r \qquad \text{for all } \zeta \in \mathrm{C}_{\mathrm b}(V). \label{eq:11} \end{equation} Taking the supremum with respect to $\zeta$ we obtain \begin{equation} \label{eq:12} \|\rho_t-\rho_s\|_{TV}\le 2 \int_s^t |{\boldsymbol j}_r|(E)\, \mathrm{d} r \qquad \text{for all } s,t\in L_Z,\ s \leq t, \end{equation} which shows that the measures $(\rho_t)_{t\in L_Z}$ are uniformly continuous with respect to the total variation metric in ${\mathcal M}^+(V)$ and thus can be extended to an absolutely continuous curve $\tilde\rho\in \mathrm{AC}(I;{\mathcal M}^+(V))$ satisfying \eqref{eq:12} for every $s,t\in I$. When $\varphi\in \mathrm{C}_{\mathrm b}(V)$, \eqref{2ndfundthm} immediately follows from \eqref{eq:13}.
By a standard argument based on the functional monotone class theorem \cite[\S 2.12]{Bogachev07} we can extend the validity of \eqref{2ndfundthm} to every bounded Borel function. If $\varphi\in \mathrm C^1([a,b];\mathrm{B}_{\mathrm b}(V))$, combining \eqref{eq:13} with the fact that the map $t\mapsto \int_V \varphi(t,x)\,\tilde\rho_t(\mathrm{d} x)$ is absolutely continuous, we easily get \eqref{maybe-useful}. \end{proof} \begin{proof}[Proof of Corollary \ref{c:narrow-ct}] Keeping the same notation as in the previous proof, if we define \begin{displaymath} \gamma:=\rho_0+\int_0^T \mathfrak d_t \,\mathrm{d} t, \end{displaymath} then the estimate \eqref{distributional-derivative} shows that \begin{displaymath} \rho_t(B)\le \gamma(B)\quad\text{for every }B\in \mathfrak B, \end{displaymath} thus showing that $\rho_t=\tilde u_t\gamma$ for every $t\in [0,T]$ and \begin{equation} \label{eq:54} \|\rho_t-\rho_s\|_{TV}=\int_V |\tilde u_t-\tilde u_s|\,\mathrm{d}\gamma\le 2\int_s^t |{\boldsymbol j}_r|(E)\,\mathrm{d} r \quad\text{for every }0\le s<t\le T. \end{equation} \end{proof} We conclude with a result on the decomposition of the measure $ {\boldsymbol j} -\symmap {\boldsymbol j} = 2\tj $ into its positive and negative parts. \begin{lemma} \label{le:A1} If ${\boldsymbol j} \in {\mathcal M}(E)$ and we set \begin{equation} \label{eq:14} {\boldsymbol j}^+:=( {\boldsymbol j} {-}\symmap {\boldsymbol j} )_+,\quad {\boldsymbol j}^-:=( {\boldsymbol j} {-}\symmap {\boldsymbol j} )_-, \end{equation} then we have \begin{equation} \label{eq:15} {\boldsymbol j}^-=\symmap {\boldsymbol j}^+,\quad \odiv {\boldsymbol j}^+= \odiv {\boldsymbol j}. \end{equation} When ${\boldsymbol j}$ is skew-symmetric, we also have \begin{equation} \label{eq:16} {\boldsymbol j}^+=2{\boldsymbol j}_+,\quad {\boldsymbol j}^-=-2{\boldsymbol j}_-.
\end{equation} \end{lemma} \begin{proof} By definition, we have ${\boldsymbol j}^+=2\tj_+$, ${\boldsymbol j}^-=2\tj_-$. Furthermore, $ \tj =-\symmap \tj =\symmap \tj _--\symmap \tj_+$, where the first equality follows from the fact that $\tj$ is skew-symmetric. Since $\symmap \tj_-\perp \symmap \tj_+$, we deduce that $\symmap \tj_+=\tj_-$, $\symmap \tj_-=\tj_+$, and $\tj=\tj_+-\symmap \tj_+$, so that $\odiv {\boldsymbol j} = \odiv \tj =2\odiv \tj_+=\odiv {\boldsymbol j}^+$. \end{proof} \section{Slowly increasing superlinear entropies} The main result of this Section is Lemma \ref{le:slowly-increasing-entropy} ahead, invoked in the proof of Proposition \ref{prop:compactness}. It provides the construction of a \emph{smooth} function estimating the entropy density $\upphi$ from below and such that the function $(r,s) \mapsto \Psi^*(A_\upomega(r,s)) \upalpha(r,s)$ fulfills a suitable bound, cf.\ \eqref{eq:41} ahead. Prior to that, we prove the preliminary Lemmas \ref{le:alpha-behaviour} and \ref{le:sub} below. \begin{lemma} \label{le:alpha-behaviour} Let us suppose that $\upalpha$ satisfies Assumptions \ref{ass:Psi}. Then for every $a\ge0$ \begin{equation} \label{eq:40} \lim_{r\to{+\infty}}\frac{\upalpha(r,a)}r= \lim_{r\to{+\infty}}\frac{\upalpha(a,r)}r=0. \end{equation} \end{lemma} \begin{proof} Since $\upalpha$ is symmetric, it is sufficient to prove the first limit. Let us first observe that the concavity of $\upalpha$ yields the existence of the limit, since the map $r\mapsto r^{-1}(\upalpha(r,a)-\upalpha(0,a))$ is decreasing, so that \begin{displaymath} \lim_{r\to{+\infty}}\frac{\upalpha(r,a)}r= \lim_{r\to{+\infty}}\frac{\upalpha(r,a)-\upalpha(0,a)}r= \inf_{r>0}\frac{\upalpha(r,a)-\upalpha(0,a)}r. \end{displaymath} Let us call $L(a)\in \mathbb{R}_+$ the above quantity.
The inequality (which follows from the concavity of $\upalpha$ and the fact that $\upalpha(0,0)\ge0$) \begin{equation} \label{eq:concave-elementary} \upalpha(r,a)\le \lambda\upalpha(r/\lambda,a/\lambda)\quad\text{for every }\lambda\ge1 \end{equation} yields \begin{equation} \label{eq:32} L(a)=\lim_{r\to{+\infty}}\frac{\upalpha(r,a)}r\le \lim_{r\to{+\infty}}\frac{\upalpha(r/\lambda,a/\lambda)}{r/\lambda}= L(a/\lambda) \quad\text{for every }\lambda\ge1. \end{equation} For every $b\in (0,a)$ and $r>0$, setting $\lambda:=a/b>1$, we thus obtain \begin{displaymath} L(a)\le L(b)\le \frac{\upalpha(r,b)-\upalpha(0,b)}r. \end{displaymath} Passing first to the limit as $b\downarrow0$ and using the continuity of $\upalpha$ we get \begin{displaymath} L(a)\le \frac{\upalpha(r,0)-\upalpha(0,0)}r \quad\text{for every $r>0$}. \end{displaymath} Eventually, we pass to the limit as $r\uparrow{+\infty}$ and we get $L(a)\le \upalpha^\infty(1,0)=0$ thanks to \eqref{alpha-0}. \end{proof} \begin{lemma} \label{le:sub} Let $f:\mathbb{R}_+\to \mathbb{R}_+$ be an increasing continuous function and $f_0\ge0$ with \begin{equation} \label{eq:43} \lim_{r\to{+\infty}}f(r)=\sup f={+\infty},\qquad \liminf_{r\downarrow0}\frac{f(r)-f_0}r\in (0,{+\infty}]. \end{equation} Then for every $g_0\in [0,f_0]$ there exists a $\rmC^\infty$ concave function $g:\mathbb{R}_+\to\mathbb{R}_+$ such that \begin{equation} \label{eq:35} \forall\,r\in \mathbb{R}_+:g(r)\le f(r),\qquad g(0)=g_0,\qquad \lim_{r\to{+\infty}}g(r)={+\infty}. \end{equation} \end{lemma} \begin{proof} By subtracting $f_0$ and $g_0$ from $f$ and $g$, respectively, it is not restrictive to assume $f_0=g_0=0$. We will use a recursive procedure to construct a concave piecewise-linear function $g$ satisfying \eqref{eq:35}; a standard regularization then yields a $\rmC^\infty$ map.
We set \begin{equation} \label{eq:44} a:=\frac 13\liminf_{r\downarrow0}\frac{f(r)}r,\quad x_1:=\sup\Big\{x\in (0,1]:f(r)\ge 2 ar\text{ for every }r\in (0,x]\Big\}, \end{equation} and $\delta:=ax_1.$ We consider a strictly increasing sequence $(x_n)_{n\in\mathbb{N}}$, defined by induction starting from $x_0=0$ and $x_1$ as in \eqref{eq:44}, according to \begin{equation} \label{eq:25} x_{n+1}:=\min\Big\{x\ge 2x_n-x_{n-1}: f(x)\ge f(x_n)+\delta\Big\},\quad n\ge 1. \end{equation} Since $\lim_{r\to{+\infty}}f(r)={+\infty}$, the set over which the minimum in \eqref{eq:25} is taken is closed and nonempty, so that the algorithm is well defined. It yields a sequence $(x_n)$ satisfying \begin{equation} \label{eq:33} x_{n+1}-x_n\ge x_{n}-x_{n-1},\quad x_{n+1}\ge x_n+\delta\quad\text{for every }n\ge0, \end{equation} so that $(x_n)_{n\in \mathbb{N}}$ is strictly increasing and unbounded, and induces a partition $\{0=x_0<x_1<x_2<\cdots<x_n<\cdots\}$ of $\mathbb{R}_+$. We can thus consider the piecewise linear function $g:\mathbb{R}_+\to\mathbb{R}_+$ such that \begin{equation} \label{eq:36} g(x_n):=n\delta,\quad g((1-t)x_n+t x_{n+1}):=(n+t)\delta\quad \text{for every }n\in \mathbb{N},\ t\in[0,1]. \end{equation} We observe that $g$ is increasing, $\lim_{r\to{+\infty}}g(r)={+\infty}$, and it is concave since \begin{displaymath} \frac{g(x_{n+1})-g(x_n)}{x_{n+1}-x_n}= \frac{\delta}{x_{n+1}-x_n}\topref{eq:33}\le \frac\delta{x_{n}-x_{n-1}}=\frac{g(x_{n})-g(x_{n-1})}{x_{n}-x_{n-1}}. \end{displaymath} Furthermore, $g$ is also dominated by $f$: in the interval $[x_0,x_1]$ this follows by \eqref{eq:44}.
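The recursion \eqref{eq:25} is easy to run numerically. The sketch below (with the illustrative choice $f(r)=\log(1+r)$, for which $a=1/3$ and $x_1=1$) produces the nodes $x_n$ and checks that the increments are nondecreasing (concavity of the interpolant $g$) and that $g(x_n)=n\delta$ stays below $f$:

```python
import math

# illustrative test function: increasing, continuous, f(0) = 0,
# liminf_{r->0} f(r)/r = 1 and f(r) -> +infinity
f = lambda r: math.log1p(r)

a = 1.0 / 3.0        # a = (1/3) liminf_{r->0} f(r)/r
x1 = 1.0             # f(r) >= 2ar = 2r/3 holds on all of (0,1], so x_1 = 1
delta = a * x1

# recursion (25): x_{n+1} = min{ x >= 2 x_n - x_{n-1} : f(x) >= f(x_n) + delta }
xs = [0.0, x1]
for _ in range(6):
    x = 2.0 * xs[-1] - xs[-2]          # enforce x >= 2 x_n - x_{n-1}
    while f(x) < f(xs[-1]) + delta:    # crude scan for f(x) >= f(x_n) + delta
        x += 1e-4
    xs.append(x)

# increments x_{n+1} - x_n are nondecreasing, hence g is concave ...
incs = [b - a_ for a_, b in zip(xs, xs[1:])]
assert all(j >= i - 1e-9 for i, j in zip(incs, incs[1:]))
# ... and g is dominated by f at the nodes: g(x_n) = n*delta <= f(x_n)
assert all(n * delta <= f(x) + 1e-9 for n, x in enumerate(xs))
```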
For $x\in [x_n,x_{n+1}]$ and $n\ge 1$, we observe that \eqref{eq:25} yields $f(x_{n+1})\ge f(x_n)+\delta$, so that by induction $f(x_n)\ge (n+1)\delta$; on the other hand, \begin{displaymath} \text{for every $x\in [x_{n},x_{n+1}]$}:\quad g(x)\le g(x_{n+1})=(n+1) \delta\le f(x_n)\le f(x).\qedhere \end{displaymath} \end{proof} \begin{lemma} \label{le:slowly-increasing-entropy} Let $\Psi^*,\upalpha$ satisfy Assumptions \ref{ass:Psi} and let $\upbeta:\mathbb{R}_+\to\mathbb{R}_+$ be a convex superlinear function with $\upbeta'(r)\ge \upbeta_0'>0$ for a.e.~$r\in \mathbb{R}_+$. Then there exists a $\rmC^\infty$ convex superlinear function $\upomega:\mathbb{R}_+\to\mathbb{R}_+$ such that \begin{equation} \label{eq:41} \upomega(r)\le \upbeta(r),\qquad \Psi^*(\upomega'(s)-\upomega'(r))\upalpha(s,r)\le r+s\quad\text{for every }r,s\in \mathbb{R}_+. \end{equation} \end{lemma} \begin{proof} By a standard regularization, we can always approximate $\upbeta$ by a smooth convex superlinear function $\tilde\upbeta\le \upbeta$ whose derivative is strictly positive, so that it is not restrictive to assume that $\upbeta$ is of class $\rmC^2$. Let us set $r_0:=\inf\{r>0:\Psi^*(r)>0\}$ and let $P:(0,{+\infty})\to (r_0,{+\infty})$ be the inverse map of $\Psi^*$: $P$ is continuous, strictly increasing, and of class $\rmC^1$. Since $\upalpha$ is concave, the function $x \mapsto \upalpha(x,1)/x$ is nonincreasing in $(0,{+\infty})$; we can thus define the nondecreasing function $Q(x):=P(x/\upalpha(x,1))$ and the function \begin{displaymath} \gamma(x):=2g_0+\int_1^x \min(\upbeta''(y),Q'(y))\,\mathrm{d} y\quad \text{for every }x\ge 1,\quad g_0:=\frac12\min(\upbeta_0', Q(1))>0. \end{displaymath} By construction $\gamma(1)=2g_0= \min(\upbeta'_0,Q(1))\le \upbeta'(1)$, so that $\gamma(x)\le \min(\upbeta'(x),Q(x))$ for every $x\ge1$. We eventually set \begin{displaymath} f(t):=\frac{\rme^t}{\gamma(\rme^t)}\quad\text{for } t\ge0. \end{displaymath} Clearly, we have $f(0)=2g_0$.
Furthermore, we combine the estimate $\gamma(\rme^t) \leq Q(\rme^t) = P(\rme^t/\upalpha(\rme^t,1))$ with the facts that $\rme^t/\upalpha(\rme^t,1) \to +\infty$ as $t\to +\infty$, thanks to Lemma \ref{le:alpha-behaviour}, and that $P$ has sublinear growth at infinity, being the inverse function of the superlinear $\Psi^*$. All in all, we conclude that \[ \lim_{t\to{+\infty}}f(t)={+\infty}. \] Therefore, we are in a position to apply Lemma \ref{le:sub}, obtaining an increasing concave function $g:\mathbb{R}_+\to\mathbb{R}_+$ such that $g_0=g(0)\le g(t)\le f(t)$ and $\lim_{t\to{+\infty}}g(t)={+\infty}$. Since $g(0)\ge0$, the concavity of $g$ yields $g(t'')-g(t')\le g(t''-t')$ for every $0\le t'\le t''$, so that the function $h(x):=g(\log (x\lor 1))$ satisfies $h(x)=g_0\le \upbeta'(x)$ for $x\in [0,1]$, and \begin{equation} \label{eq:48} h(z)\le \min(\upbeta'(z),Q(z))\quad\text{for every }z\ge 1,\quad h(y)-h(x)\le h(y/x)\quad \text{for every }0< x\le y. \end{equation} In fact, if $x\le 1$ we get \begin{displaymath} h(y)-h(x)=h(y)-g_0\le h(y)\le h(y/x), \end{displaymath} and if $x\ge 1$ we get \begin{displaymath} h(y)-h(x)\le g(\log y)-g(\log x)\le g(\log y-\log x)=g(\log(y/x))= h(y/x). \end{displaymath} Let us now define the convex function $\upomega(x):=\int_0^x h(y)\,\mathrm{d} y$, with $\upomega(0)=0$ and $\upomega'=h$. In particular, $\upomega(x)\le \upbeta(x)$ for every $x\ge 0$. It remains to check the second inequality of \eqref{eq:41}. The case $r,s\le 1$ is trivial, since $\upomega'(s)-\upomega'(r)=h(s)-h(r)=0$; the inequality is also trivial if $\upomega'(r)=\upomega'(s)$ or $\upalpha(r,s)=0$, so we can assume $\upomega'(r)\neq \upomega'(s)$ and $\upalpha(r,s)>0$; since \eqref{eq:41} is also symmetric, it is not restrictive to assume $r\le s$; by continuity, we can assume $r>0$. Recalling that $\upalpha(s,r)\le r\upalpha(s/r,1)$ if $0<r\le s$, and $(r+s)/r>s/r$, \eqref{eq:41} is surely satisfied if \begin{equation} \label{eq:49} \Psi^*(\upomega'(s)-\upomega'(r))\upalpha(s/r,1)\le s/r\quad \text{for every }0<r<s.
\end{equation} Recalling that $\upomega'(s)-\upomega'(r)\le \upomega'(s/r)$ by \eqref{eq:48} and that $\Psi^*$ is nondecreasing, \eqref{eq:49} is satisfied if \begin{equation} \label{eq:50} \Psi^*(\upomega'(s/r))\upalpha(s/r,1)\le s/r\quad \text{for every }0<r<s. \end{equation} After the substitution $t:=s/r$, \eqref{eq:50} corresponds to \begin{equation} \label{eq:51} \upomega'(t)\le P(t/\upalpha(t,1))=Q(t)\quad\text{for every }t\ge 1, \end{equation} which is a consequence of the first inequality of \eqref{eq:48}. \end{proof} \section{Connectivity by curves of finite action} \label{s:app-2} Preliminarily, with the reference measure $\pi \in {\mathcal M}_+(V)$ and with the `jump equilibrium rate' ${\boldsymbol\vartheta}_\pi$ from \eqref{nu-pi} we associate the `graph divergence' operator $\odiv_{\pi,{\boldsymbol\vartheta}_\pi}: L^p(E;{\boldsymbol\vartheta}_\pi) \to L^p(V;\pi) $, $p\in [1,{+\infty}]$, defined as the transpose of the `graph gradient' $\dnabla:L^q(V;\pi) \to L^{q}(E;{\boldsymbol\vartheta}_\pi)$, with $q = p'$. Namely, \[ \begin{gathered} \text{for } \zeta \in L^p(E;{\boldsymbol\vartheta}_\pi), \qquad \xi = - \overline{\mathrm{div}}_{\pi,{\boldsymbol\vartheta}_\pi}( \zeta)\qquad \text{if and only if} \\ \int_{V} \xi(x) \omega(x)\, \pi(\mathrm{d} x) = \iint_{E} \zeta(x,y) \dnabla \omega(x,y)\, {\boldsymbol\vartheta}_\pi(\mathrm{d} x, \mathrm{d} y) \quad \text{for all } \omega \in L^q(V;\pi), \end{gathered} \] or, equivalently, \begin{equation} \label{in-terms-of-measures} \xi \pi = - \odiv( \zeta {\boldsymbol\vartheta}_\pi) \end{equation} (with $\odiv$ the divergence operator from \eqref{eq:def:ona-div}) in the sense of measures. \par We can now first address the connectivity problem in the very specific setup \begin{equation} \label{alpha-equiv-1} \upalpha(u,v) \equiv 1 \qquad \text{for all } (u,v) \in [0,{+\infty}) \times [0,{+\infty}).
\end{equation} Then, the action functional $\int \mathscr R$ is translation-invariant. Let us consider two measures $\rho_0, \rho_1 \in {\mathcal M}_+(V)$ such that for $i\in \{0,1\}$ there holds $\rho_i = u_i \pi$ with $u_i \in L_+^p(V;\pi)$ for some $p\in (1,{+\infty})$. Thus, we look for curves $\rho \in \ADM 0\tau{\rho_0}{\rho_1}$, with finite action, such that $\rho_t \ll \pi$, with density $u_t$, for almost all $t\in (0,\tau)$. Consequently, any flux $({\boldsymbol j}_t)_{t\in (0,\tau)}$ must satisfy ${\boldsymbol j}_t \ll {\boldsymbol\vartheta}_\pi$ for a.a.\ $t\in (0,\tau)$ (cf.\ Lemma \ref{l:alt-char-R}). Taking into account \eqref{in-terms-of-measures}, the continuity equation reduces to \begin{equation} \label{cont-eq-densities} \dot{u}_t = - \overline{\mathrm{div}}_{\pi,{\boldsymbol\vartheta}_\pi}( \zeta_t) \qquad \text{for a.e.\ } t \in (0,\tau), \end{equation} with $ \zeta_t = \frac{\mathrm{d} {\boldsymbol j}_t}{\mathrm{d} {\boldsymbol\vartheta}_\pi}$. Furthermore, we look for a connecting curve $\rho_t = u_t \pi$ with $u_t = (1{-}t)u_0 +t u_1$, so that \eqref{cont-eq-densities} becomes $ - \overline{\mathrm{div}}_{\pi,{\boldsymbol\vartheta}_\pi}( \zeta_t) \equiv u_1 -u_0$. Hence, we can restrict to flux densities that are constant in time, i.e.\ $\zeta_t \equiv \zeta$ with $\zeta\in L^p(E;{\boldsymbol\vartheta}_\pi)$.
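In the finite-state case the time-independent flux density can be computed explicitly: the divergence equation $-\overline{\mathrm{div}}_{\pi,{\boldsymbol\vartheta}_\pi}(\zeta)\equiv u_1-u_0$ becomes a linear system for $\zeta$. The sketch below (a uniform $\pi$ and a random positive edge measure are illustrative assumptions) solves it by least squares, using the measure-level identity $\xi\pi=-\odiv(\zeta{\boldsymbol\vartheta}_\pi)$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
pi = np.ones(n) / n                        # reference measure (uniform, for simplicity)
theta = rng.random((n, n))                 # edge measure vartheta_pi on E
np.fill_diagonal(theta, 0.0)

u0 = rng.random(n); u0 /= (u0 * pi).sum()  # normalize so rho_0(V) = pi(V) = 1
u1 = np.ones(n)                            # target density u_1 == 1

def div_edge(zeta):
    # divergence of the edge measure m = zeta * theta: (div m)(x) = sum_y m(x,y) - m(y,x)
    m = zeta * theta
    return m.sum(axis=1) - m.sum(axis=0)

# assemble the linear map zeta -> -div(zeta * theta) column by column, then solve
# -div_{pi,theta}(zeta) = u1 - u0, i.e. A vec(zeta) = (u1 - u0) * pi, by least squares
A = np.zeros((n, n * n))
for k in range(n * n):
    e = np.zeros(n * n); e[k] = 1.0
    A[:, k] = -div_edge(e.reshape(n, n))
zeta, *_ = np.linalg.lstsq(A, (u1 - u0) * pi, rcond=None)
zeta = zeta.reshape(n, n)

# the constant-in-time flux drives u_t = (1-t) u0 + t u1 through the continuity equation
assert np.allclose(-div_edge(zeta), (u1 - u0) * pi)
```

The right-hand side has zero total mass (both densities were normalized), which is exactly the compatibility condition mirrored by the $q$-Poincar\'e inequality in the discussion that follows.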
In this specific context, and if we further confine the discussion to the case $\Psi(r) = \frac1p |r|^p $ for $p\in (1,{+\infty})$, the minimal action problem becomes \begin{equation} \label{minimal-action} \inf\left \{ \frac1p \iint_{E} |w|^p\, {\boldsymbol\vartheta}_\pi(\mathrm{d} x,\mathrm{d} y) \, : \ w = 2\zeta \in L^p(E;{\boldsymbol\vartheta}_\pi), \ - \overline{\mathrm{div}}_{\pi,{\boldsymbol\vartheta}_\pi}(\zeta) \equiv u_1 -u_0 \right\}. \end{equation} Now, by a general duality result on linear operators, the operator $- \odiv_{\pi,{\boldsymbol\vartheta}_\pi}: L^p(E;{\boldsymbol\vartheta}_\pi) \to L^p(V;\pi)$ is surjective if and only if the graph gradient $\dnabla:L^q(V;\pi) \to L^{q}(E;{\boldsymbol\vartheta}_\pi)$ fulfills the following property: \[ \exists\, C>0 \ \ \forall\, \xi \in L^q(V;\pi) \text{ with } \int_V \xi \,\mathrm{d}\pi =0 \text{ there holds } \| \xi\|_{L^q(V;\pi)} \leq C \| \dnabla \xi\|_{L^q(E;{\boldsymbol\vartheta}_\pi)}, \] namely the $q$-Poincar\'e inequality \eqref{q-Poinc}. We can thus conclude the following result. \begin{lemma} \label{l:intermediate-conn} Suppose that $\upalpha \equiv 1$, that $\Psi$ has $p$-growth (cf.\ \eqref{psi-p-growth}), and that the measures $(\pi,{\boldsymbol\vartheta}_\pi) $ satisfy a $q$-Poincar\'e inequality for $q=\tfrac p{p-1}$. Let $\rho_0, \rho_1 \in {\mathcal M}^+(V) $ be given by $\rho_i = u_i \pi$, with positive $u_i \in L^p(V; \pi)$, for $i \in \{0,1\}$. Then, for every $\tau\in (0,1)$ we have $\DVT{\tau}{\rho_0}{\rho_1}<{+\infty}$. If $\Psi(r) = \frac1p |r|^p$, the $q$-Poincar\'e inequality is also necessary for having $\DVT{\tau}{\rho_0}{\rho_1}<{+\infty}$. \end{lemma} We are now in a position to carry out the \begin{proof}[Proof of Proposition \ref{prop:sufficient-for-connectivity}] Assume that $\rho_0(V) = \int_V u_0 (x)\, \pi(\mathrm{d} x) = \pi(V)$.
Hence, it is sufficient to provide a solution for the connectivity problem between $u_0$ and $u_1 \equiv 1$. We may also assume without loss of generality that $\upalpha(u,v) \geq \upalpha_0(u,v)$ with $\upalpha_0(u,v) = c_0 \min(u,v,1)$ for some $c_0>0$, so that \begin{equation} \label{inequ} \begin{aligned} \Psi\left(\frac w{\upalpha(u,v)} \right)\upalpha(u,v) \leq \Psi\left(\frac w{\upalpha_0(u,v)} \right)\upalpha_0(u,v) & \leq C_p \left( 1+ \left| \frac w{\upalpha_0(u,v)} \right|^p \right) \upalpha_0(u,v) \\ & \leq C_p c_0 + C_p |w|^p (\upalpha_0(u,v))^{1-p}\,, \end{aligned} \end{equation} where the first estimate follows from the convexity of $\Psi$ and the fact that $\Psi(0)=0$, yielding that $\lambda \mapsto \lambda \Psi(w/\lambda)$ is non-increasing. It is therefore sufficient to consider the case in which $c_0=C_p=1$, $\upalpha_0(u,v) = \min(u,v,1)$, and to solve the connectivity problem for $\tilde\Psi(r) = \frac1p |r|^p$. By Lemma \ref{l:intermediate-conn}, we may first find $w\in L^p(E;{\boldsymbol\vartheta}_\pi)$ solving the minimum problem \eqref{minimal-action} in the case $\upalpha \equiv 1$, so that the flux density $\zeta_t \equiv \frac12 w$ is associated with the curve $u_t = (1{-}t)u_0 +t u_1$, $t\in [0,\tau]$. Then we fix an exponent $\gamma>0$ and consider the rescaled curve $\tilde{u}_t: = u_{t^\gamma}$, which fulfills $\partial_t \tilde{u}_t = - \overline{\mathrm{div}}_{\pi,{\boldsymbol\vartheta}_\pi}(\tilde{\zeta}_t) $ with $\tilde{\zeta}_t = \frac12 \tilde{w}_t = \frac12 \gamma t^{\gamma-1} w$. Moreover, \begin{align*} \upalpha_0(\tilde{u}_t (x), \tilde{u}_t (y)) &= \min \{ (1{-}t^\gamma) u_0(x) + t^\gamma u_1(x), (1{-}t^\gamma) u_0(y) + t^\gamma u_1(y), 1\} \\ &\geq \min(t^\gamma, 1) = t^\gamma, \end{align*} since $u_1(x) = u_1(y) =1$.
By \eqref{inequ} we thus get \[ \begin{aligned} & \iint_{E} \Psi \left(\frac{\tilde{w}_t(x,y)}{\upalpha(\tilde{u}_t (x), \tilde{u}_t (y))} \right) \upalpha(\tilde{u}_t (x), \tilde{u}_t (y))\, {\boldsymbol\vartheta}_\pi(\mathrm{d} x,\mathrm{d} y) \\ &\quad \leq C_p c_0 {\boldsymbol\vartheta}_\pi(E)+ \iint_{E} \gamma^p t^{p(\gamma{-}1)} |w(x,y)|^p t^{\gamma(1{-}p)}\, {\boldsymbol\vartheta}_\pi(\mathrm{d} x,\mathrm{d} y) = C_p c_0 {\boldsymbol\vartheta}_\pi(E)+ \gamma^p t^{\gamma-p} \|w\|_{L^p(E;{\boldsymbol\vartheta}_\pi)}^p\,. \end{aligned} \] Choosing $\gamma>p-1$ we conclude that \[ \int_0^\tau \iint_{E} \Psi \left(\frac{\tilde{w}_t(x,y)}{\upalpha(\tilde{u}_t (x), \tilde{u}_t (y))} \right) \upalpha(\tilde{u}_t (x), \tilde{u}_t (y))\, {\boldsymbol\vartheta}_\pi(\mathrm{d} x,\mathrm{d} y)\,\mathrm{d} t <{+\infty}\,, \] hence $\ADM 0\tau{\rho_0}{\rho_1} \neq \emptyset$. \end{proof} \end{document}
\begin{document} \title{Dynamical singularity of the rate function for quench dynamics in finite-size quantum systems} \author{Yumeng Zeng} \affiliation{Beijing National Laboratory for Condensed Matter Physics, Institute of Physics, Chinese Academy of Sciences, Beijing 100190, China} \affiliation{School of Physical Sciences, University of Chinese Academy of Sciences, Beijing 100049, China } \author{Bozhen Zhou} \affiliation{Beijing National Laboratory for Condensed Matter Physics, Institute of Physics, Chinese Academy of Sciences, Beijing 100190, China} \author{Shu Chen} \email{Corresponding author: [email protected] } \affiliation{Beijing National Laboratory for Condensed Matter Physics, Institute of Physics, Chinese Academy of Sciences, Beijing 100190, China} \affiliation{School of Physical Sciences, University of Chinese Academy of Sciences, Beijing 100049, China } \affiliation{Yangtze River Delta Physics Research Center, Liyang, Jiangsu 213300, China } \date{\today} \begin{abstract} The dynamical quantum phase transition is characterized by the emergence of nonanalytic behaviors in the rate function, corresponding to the occurrence of exact zero points of the Loschmidt echo in the thermodynamic limit. In general, exact zeros of the Loschmidt echo are not accessible in a finite-size quantum system, except for some fine-tuned quench parameters. In this work, we study the realization of the dynamical singularity of the rate function for finite-size systems under the twist boundary condition, which can be introduced by applying a magnetic flux. By tuning the magnetic flux, we illustrate that exact zeros of the Loschmidt echo can always be achieved when the quench parameter crosses the underlying equilibrium phase transition point, and thus the rate function of a finite-size system is divergent at a series of critical times.
We demonstrate our theoretical scheme through detailed calculations on the Su-Schrieffer-Heeger model and the Creutz model and exhibit its applicability to more general cases. Our result unveils that the emergence of dynamical singularity in the rate function can be viewed as a signature for detecting dynamical quantum phase transition in finite-size systems. We also unveil that the critical times in our theoretical scheme are independent of the system size, which provides a convenient way to determine them by tuning the magnetic flux to achieve the dynamical singularity of the rate function. \end{abstract} \maketitle \section{Introduction} Since the dynamical quantum phase transition (DQPT) was proposed \citep{Heyl2013PRL}, it has become an important concept in describing a class of nonequilibrium critical phenomena associated with singular behavior in the real-time evolution of the Loschmidt echo (LE) \citep{Heyl2013PRL,Karrasch2013PRB,Hickey2014PRB,Andraschko2014PRB,Schmitt2015PRB,Vajna2014PRB,Sharma2015PRB,Heyl2018RPP,Canovi2014PRL, Mera2018PRB,Zvyagin2016LTP,Heyl2018RPP,YangC,Budich2016PRB,Heyl2015PRL,Heyl2014PRL,Zhou2019PRB,BoBo2020PRB,Halimeh,Dora,Halimeh2,Jafari2}. Let $\langle\psi_{i}|\psi(t)\rangle$ denote the overlap of an initial ground state $|\psi_{i}\rangle$ and its time evolution state $|\psi(t)\rangle=e^{-iH_{f}t}|\psi_{i}\rangle$ governed by a postquench Hamiltonian $H_{f}$; the LE is then defined as \begin{equation} \mathcal{L}(t)=|\langle\psi_{i}|\psi(t)\rangle|^{2}, \end{equation} which represents the return probability of the time evolution state to the initial state \citep{Gorin2006PR}. The LE plays a particularly important role in the characterization of the DQPT \citep{Zvyagin2016LTP,Heyl2018RPP}. When the phase-driving parameter is quenched across an underlying equilibrium phase transition point, a series of zero points of LE emerge at some critical times.
In general, exact zeros of LE only occur when the system size tends to infinity \citep{Heyl2013PRL,Liska,ZhouBZ2021}. Meanwhile, LE always approaches zero in the thermodynamical limit, even when the quench parameter does not cross the transition point. This can be attributed to the Anderson orthogonality catastrophe \cite{OC}, since an infinite product of factors with magnitude less than 1 equals 0. To eliminate the effect of system size properly, it is convenient to introduce the rate function of LE given by \begin{equation} \lambda(t)=-\frac{1}{L}\ln\mathcal{L}(t).\label{rf} \end{equation} As the LE is analogous to a dynamical boundary partition function, the rate function $\lambda(t)$ can be viewed as a dynamical free energy. Thus the DQPT is characterized by nonanalytic behaviors in the rate function of LE in the thermodynamical limit. According to the theory of DQPT, the nonanalyticity of the rate function occurs at the critical times $t_{n}^{*}$ when the quench parameter crosses the equilibrium phase transition point, corresponding to the emergence of exact zeros of LE in the thermodynamical limit. For finite-size systems, LE usually has no exact zeros, except for fine-tuned postquench parameters which fulfill specific constraint conditions \cite{ZhouBZ2021}. Therefore, to study the DQPT and extract the critical times in finite-size quantum systems, one needs to resort to finite-size scaling analysis to obtain the non-analytical properties and critical times in the limit of $L\rightarrow\infty$. With the increase of $L$, $\mathcal{L}(t)$ approaches zero at critical times $t_{n}^{*}$, and thus $\ln\mathcal{L}(t_{n}^{*})\rightarrow-\infty$ when $L\rightarrow\infty$. However, $\lambda(t_{n}^{*})$ is not divergent and only displays a cusp, because the divergence is offset by the factor $L$ in the denominator. For a finite system with size $L$, $t_{n}^{*}(L)$ are determined by the times at which $\lambda$ takes its local maxima.
As we shall demonstrate later, $t_{n}^{*}(L)$ does not fulfill a simple fitting relation with $L$. Thanks to the advance of quantum simulators, quantum simulations of DQPT have already been reported in various systems \cite{GuoXY,XueP,Monroe2017Nature,Jurcevic2017PRL,Bernien-Nature,Flaschner2018Nature,Smale,DuanLM}, such as trapped ions \cite{Monroe2017Nature,Jurcevic2017PRL}, Rydberg atoms \cite{Bernien-Nature}, and ultracold atoms \cite{Flaschner2018Nature,Smale,DuanLM}, with finite sizes. Therefore, extracting the non-analytical signature of DQPT in finite-size systems is important from both experimental and theoretical aspects. In this work, we study the non-analytical behaviors of DQPT in finite-size systems with a twist boundary condition which can be realized by introducing a magnetic flux $\phi$ into the periodic system. When the quench parameter crosses the equilibrium phase transition point, we demonstrate that, by tuning the flux, exact zeros of LE can always be achieved at critical times $t_{n}^{*}$ even for a finite-size system. It is interesting that the critical times obtained in this way are independent of the system size and match exactly the critical times obtained in the thermodynamical limit of the corresponding periodic system. For a finite size $L$, the rate function $\lambda(t)$ is then divergent at the critical times $t_{n}^{*}$ corresponding to the exact zeros of LE. On the other hand, no exact zeros can be achieved if there is no DQPT, when the postquench and prequench parameters are in the same phase region. Correspondingly, the rate function is not sensitive to the flux and does not show any singular behavior. Our theoretical work unveils that the emergence of the dynamical singularity in the rate function can be viewed as a signature for detecting DQPT in finite-size systems.
Since the critical times in our theoretical scheme do not depend on the system size, this provides a convenient way to determine them in finite-size systems. \section{Models and scheme for achieving the dynamical singularity} To illustrate how the singularity of the rate function arises as a result of the emergence of zero points of LE, we consider general one-dimensional (1D) two-band systems with the Hamiltonian in momentum space described by \begin{equation} \hat{h}(\gamma,k)=\sum_{\alpha=x,y,z}d_{\alpha}(\gamma,k)\hat{\sigma}_{\alpha}+d_{0}(\gamma,k)\hat{\mathbb{I}},\label{eq:hk} \end{equation} where $\gamma$ denotes a phase transition driving parameter; $\hat{\sigma}_{\alpha}$ are Pauli matrices with $\alpha=x,y,z$; $d_{\alpha}(\gamma,k)$ and $d_{0}(\gamma,k)$ are the corresponding vector components of $\hat{h}(\gamma,k)$; and $\hat{\mathbb{I}}$ is the unit matrix. Such systems are widely studied in the literature \cite{Dora,Budich2016PRB,YangC} and include, e.g., the transverse-field Ising model, the quantum \textit{XY} model, the Su-Schrieffer-Heeger (SSH) model, and the Creutz model as special cases. Consider a quench process described by a sudden change of driving parameter $\gamma=\gamma_{i}\theta(-t)+\gamma_{f}\theta(t)$ with the initial state prepared as the ground state of the prequench Hamiltonian $H(\gamma_{i})$. The LE following the quench can be written as \begin{equation} \mathcal{L}=\prod_{k}\mathcal{L}_{k}=\prod_{k}\left|\langle\psi_{k}^{i}|e^{-i\hat{h}(\gamma_{f},k)t}|\psi_{k}^{i}\rangle\right|^{2}, \end{equation} where $\hat{h}(\gamma_{f},k)$ is the postquench Hamiltonian with mode $k$.
Choosing $|\psi_{k}^{i}\rangle$ as the $k$-mode of the ground state of the prequench Hamiltonian, we have \begin{equation} \mathcal{L}_{k}=1-\Lambda_{k}\sin^{2}[\epsilon_{f}(k)t], \end{equation} with \[ \Lambda_{k}=1-\left[\frac{\sum_{\alpha=x,y,z}d_{\alpha}(\gamma_{i},k)d_{\alpha}(\gamma_{f},k)}{\epsilon_{i}(k)\epsilon_{f}(k)}\right]^{2}, \] where $\epsilon_{i}(k)=\sqrt{\underset{\alpha}{\sum}d_{\alpha}^{2}(\gamma_{i},k)}$ and $\epsilon_{f}(k)=\sqrt{\underset{\alpha}{\sum}d_{\alpha}^{2}(\gamma_{f},k)}$. The singularity of the rate function $\lambda(t)=-\frac{1}{L}\ln\mathcal{L}(t)$ occurs when $\mathcal{L}(t)=0$, which requires at least one $k$-mode fulfilling $\Lambda_{k}=1$ and gives rise to the constraint relation \begin{equation} \sum_{\alpha=x,y,z}d_{\alpha}(\gamma_{i},k)d_{\alpha}(\gamma_{f},k)=0.\label{eq:dvalue} \end{equation} To make our discussion concrete, we consider the SSH model \cite{SSH} and the Creutz model \cite{Creutz1999} as examples and show the details of the calculation in this section. \subsection{SSH model} First, we consider the SSH model with the vector components of the Hamiltonian given by \begin{eqnarray} d_{x}(k) & = & J_{1}+J_{2}\cos k,\\ d_{y}(k) & = & -J_{2}\sin k \end{eqnarray} and $d_{z}(k)=d_{0}(k)=0$, where $J_{1}$ and $J_{2}$ represent the intracellular and intercellular tunneling amplitudes, respectively. The SSH model possesses two topologically different phases for $J_{2}>J_{1}$ and $J_{2}<J_{1}$ with a phase transition occurring at $J_{2c}/J_{1}=1$ \cite{SSH,LiLH2014}.
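As a minimal numerical cross-check (ours, not part of the original analysis; the function names are illustrative), the per-mode formula $\mathcal{L}_{k}=1-\Lambda_{k}\sin^{2}[\epsilon_{f}(k)t]$ can be compared with direct $2\times2$ evolution generated by Eq. (\ref{eq:hk}) for the SSH $d$-vector, with $J_{1}=1$ and $\gamma=J_{2}/J_{1}$:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def h_ssh(gamma, k):
    """SSH Bloch Hamiltonian d_x sigma_x + d_y sigma_y with J1 = 1."""
    return (1 + gamma * np.cos(k)) * sx - gamma * np.sin(k) * sy

def le_mode_direct(gi, gf, k, t):
    """|<psi_k^i| exp(-i h(gamma_f, k) t) |psi_k^i>|^2 by exact diagonalization."""
    _, vi = np.linalg.eigh(h_ssh(gi, k))
    psi = vi[:, 0]                        # lower-band (ground) state
    wf, vf = np.linalg.eigh(h_ssh(gf, k))
    U = vf @ np.diag(np.exp(-1j * wf * t)) @ vf.conj().T
    return abs(psi.conj() @ U @ psi) ** 2

def le_mode_closed(gi, gf, k, t):
    """Closed form 1 - Lambda_k sin^2[eps_f(k) t] quoted in the text."""
    ei = np.sqrt(1 + 2 * gi * np.cos(k) + gi ** 2)
    ef = np.sqrt(1 + 2 * gf * np.cos(k) + gf ** 2)
    lam = 1 - ((1 + (gi + gf) * np.cos(k) + gi * gf) / (ei * ef)) ** 2
    return 1 - lam * np.sin(ef * t) ** 2
```

Both routes agree to machine precision for generic $(k,t)$, confirming the substitution of the SSH $d$-vector into the general $\Lambda_{k}$ above.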
We then quench the parameter $J_{2}$ from $J_{2i}$ to $J_{2f}$ at $t=0$ and obtain the LE of the SSH model \begin{align} \mathcal{L}(t) & =\prod_{k}\left\{ 1-\Lambda_{k}\sin^{2}[\epsilon_{f}(k)t]\right\} ,\label{Lt} \end{align} where $\epsilon_{f}(k)$ and $\Lambda_{k}$ are given by \begin{equation} \epsilon_{f}(k)=J_{1}\sqrt{1+2\gamma_{f}\cos k+\gamma_{f}^{2}},\label{Ekf} \end{equation} \begin{equation} \Lambda_{k}=1-\frac{[1+(\gamma_{i}+\gamma_{f})\cos k+\gamma_{i}\gamma_{f}]^{2}}{(1+2\gamma_{i}\cos k+\gamma_{i}^{2})(1+2\gamma_{f}\cos k+\gamma_{f}^{2})}. \end{equation} Here $\gamma_{i}=\frac{J_{2i}}{J_{1}}$ and $\gamma_{f}=\frac{J_{2f}}{J_{1}}$. For convenience, we shall fix $J_{1}=1$ and take it as the energy unit in the following calculation. For a finite-size system under the periodic boundary condition (PBC), the momentum $k$ takes discrete values $k=2\pi m/L$ with $m=-L/2,-L/2+1,\ldots,L/2-1$ if $L$ is even or $m=-(L-1)/2,-(L-1)/2+1,\ldots,(L-1)/2$ if $L$ is odd. \begin{figure} \caption{(a) The rate function $\lambda(t)$ of the SSH model versus $t$ for different system sizes $L=40$, $60$, $100$, and $1100$. Vertical dashed lines guide the values of the critical times $t_{1}^{*}$ and $t_{2}^{*}$.\label{Fig1}} \end{figure} For a finite-size system under the PBC, we can utilize Eqs. (\ref{rf}) and (\ref{Lt}) to calculate the rate function numerically. In Fig. \ref{Fig1}(a), we display the rate function $\lambda(t)$ versus time $t$ for different system sizes $L$. Around the critical times $t_{n}^{*}$, the rate function exhibits a series of peaks, and the times $t_{n}^{*}(L)$ corresponding to these local maxima can be used to extrapolate numerically the critical times in the limit of $L\rightarrow\infty$. When we increase the size, $t_{n}^{*}(L)$ does not change linearly with $L$, but approaches the critical times $t_{n}^{*}$ in an oscillating way as shown in Figs. \ref{Fig1}(b) and \ref{Fig1}(c).
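The finite-size evaluation of Eqs. (\ref{rf}) and (\ref{Lt}) can be sketched as follows (our illustrative code; `ssh_rate_function` is a name we introduce, and the optional momentum offset `phi` defaults to $0$, i.e., the PBC):

```python
import numpy as np

def ssh_rate_function(gi, gf, L, ts, phi=0.0):
    """Rate function lambda(t) = -(1/L) ln prod_k L_k(t) for the SSH model
    (J1 = 1), with quantized momenta k = (2 pi m + phi)/L; phi = 0 is the PBC."""
    m = np.arange(-L // 2, L - L // 2)    # covers both even and odd L
    k = (2 * np.pi * m + phi) / L
    ei = np.sqrt(1 + 2 * gi * np.cos(k) + gi ** 2)
    ef = np.sqrt(1 + 2 * gf * np.cos(k) + gf ** 2)
    lam = 1 - ((1 + (gi + gf) * np.cos(k) + gi * gf) / (ei * ef)) ** 2
    Lk = 1 - lam[:, None] * np.sin(ef[:, None] * np.asarray(ts)[None, :]) ** 2
    return -np.log(Lk).sum(axis=0) / L
```

For the quench $\gamma_{i}=1.5\rightarrow\gamma_{f}=0.5$ the resulting $\lambda(t)$ vanishes at $t=0$ and develops a finite peak of order $0.6$ near $t_{1}^{*}\approx2.565$, consistent with Fig. \ref{Fig1}(a).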
In the thermodynamical limit, the non-analytical behaviors of $\lambda(t)$ are characterized by the emergence of a cusp at $t_{n}^{*}$. Using $\lambda_{\mathrm{max}}$ to represent the first local maximum of $\lambda(t)$, we find that the value of $\lambda_{\mathrm{max}}$ does not increase linearly with the increase of system size but approaches a finite number in an oscillating way. Our numerical result unveils $\lambda_{\mathrm{max}}\sim0.643$ as $L\rightarrow\infty$. In the thermodynamical limit $L\rightarrow\infty$, the momentum $k$ is distributed continuously and we have \[ \lambda(t)=-\frac{1}{2\pi}\int_{0}^{2\pi}\ln[1-\Lambda_{k}\sin^{2}[\epsilon_{f}(k)t]]dk, \] from which we numerically evaluate the value $\lambda(t_{1}^{*})\approx0.643$ at the critical time $t_{1}^{*}$. It is evident that $\lambda(t_{1}^{*})$ is equal to $\lambda_{\mathrm{max}}$ in the thermodynamical limit. \begin{figure} \caption{(a) The images of $k_{c,+}/\pi$ and $k_{c,-}/\pi$ versus $\gamma_{f}$.\label{Fig2}} \end{figure} The non-analytical behaviors of the rate function occurring at the critical times $t_{n}^{*}$ are associated with the emergence of zeros of LE. We notice that the constraint relation for ensuring $\mathcal{L}(t)=0$ is \begin{equation} \gamma_{f}=-\frac{1+\gamma_{i}\cos k}{\gamma_{i}+\cos k}.\label{eq:gammaf} \end{equation} If $|\gamma_{i}|<1$, Eq. (\ref{eq:gammaf}) is fulfilled only for $|\gamma_{f}|>1$. On the other hand, if $|\gamma_{i}|>1$, Eq. (\ref{eq:gammaf}) is fulfilled only for $|\gamma_{f}|<1$. This means that exact zeros of LE emerge only when the quench parameter $\gamma$ crosses the underlying phase transition point.
When $\gamma_{i}$ and $\gamma_{f}$ are in different phase regions, there always exists a pair of momentum modes given by \begin{equation} k_{c,\pm}=\pm\arccos\left[-\frac{1+\gamma_{i}\gamma_{f}}{\gamma_{i}+\gamma_{f}}\right],\label{kc} \end{equation} which leads to the occurrence of a series of zero points of LE at \begin{equation} t_{n}^{*}=\frac{\pi}{2\epsilon_{f}(k_{c,\pm})}(2n-1), \end{equation} with \begin{equation} \epsilon_{f}(k_{c,\pm})/J_{1}=\sqrt{\frac{(1-\gamma_{f}^{2})(\gamma_{i}-\gamma_{f})}{\gamma_{i}+\gamma_{f}}},\label{Ekc} \end{equation} and $n$ being a positive integer. Since $\epsilon_{f}(k_{c,+})=\epsilon_{f}(k_{c,-})$, we omit the subscript $\pm$ in $t_{n}^{*}$ as either $\epsilon_{f}(k_{c,+})$ or $\epsilon_{f}(k_{c,-})$ gives the same contribution to critical times. In Fig. \ref{Fig2}(a), we exhibit the images of $k_{c,+}/\pi$ and $k_{c,-}/\pi$ versus $\gamma_{f}$ for $\gamma_{i}=0.5$ and $\gamma_{i}=1.5$ according to Eq. (\ref{kc}), and the two red circles denote $k_{c,+}/\pi\approx0.839$ and $k_{c,-}/\pi\approx-0.839$ for $\gamma_{i}=1.5$ and $\gamma_{f}=0.5$. For finite-size systems, $k$ takes discrete values. According to Eq. (\ref{kc}), $k_{c,\pm}$ is usually not equal to the quantized momentum values $k=2\pi m/L$ enforced by the PBC except for some fine-tuned postquench parameters \citep{ZhouBZ2021}. As the system size increases, $k_{c,\pm}$ can be approached, since $\min_{k}|k-k_{c,\pm}|\leq\pi/L$, and thus exact zeros of LE are usually only achievable in the thermodynamical limit $L\rightarrow\infty$. Although exact zeros of LE generally do not exist for a finite-size system, we next unveil that they can be achieved even in a finite-size system if we introduce a magnetic flux $\phi$ into the system. The effect of the magnetic flux is described by the introduction of a twist boundary condition in real space, $c_{L+1}^{\dagger}=c_{1}^{\dagger}e^{i\phi}$ with $\phi\in(0,\pi]$.
Under the twist boundary condition, the quantized momentum is shifted by $\phi/L$, i.e., $k=\frac{2\pi m+\phi}{L}$ with $m=-L/2,-L/2+1,\ldots,L/2-1$ if $L$ is even or $m=-(L-1)/2,-(L-1)/2+1,\ldots,(L-1)/2$ if $L$ is odd. Therefore, for a given lattice size $L$ we can always achieve $k_{c,+}$ or $k_{c,-}$ by tuning the flux $\phi$ to \begin{equation} \phi_{c}=\min\{\mathrm{mod}[Lk_{c,+},2\pi],\,\mathrm{mod}[Lk_{c,-},2\pi]\}.\label{ssh-phic} \end{equation} In Fig. \ref{Fig2}(b), we display the image of $\phi_{c}/\pi$ versus $\gamma_{f}$ according to Eq. (\ref{ssh-phic}) for the system with $\gamma_{i}=1.5$ and $L=20$, and the red point in the picture denotes $\phi_{c}/\pi\approx0.783$ for $\gamma_{i}=1.5$ and $\gamma_{f}=0.5$. Setting $\Delta=\phi-\phi_{c}$, at the time $t=t_{n}^{*}$ we have \begin{equation} \lambda(t_{n}^{*})=-\frac{1}{L}[\ln\mathcal{L}_{k^{*}}(t_{n}^{*})+\sum_{k\neq k^{*}}\ln\mathcal{L}_{k}(t_{n}^{*})], \end{equation} where $\mathcal{L}_{k^{*}}(t_{n}^{*})$ comes from the contribution of the $k^{*}$-mode which is closest to $k_{c}$, i.e., $k^{*}=k_{c}+\Delta/L$. In the limit $\Delta\rightarrow0$, we obtain \begin{equation} \mathcal{L}_{k^{*}}(t_{n}^{*})\approx\frac{(\gamma_{i}+\gamma_{f})^{3}+\gamma_{f}^{2}t_{n}^{*2}(\gamma_{f}-\gamma_{i})(1-\gamma_{i}^{2})}{(\gamma_{f}-\gamma_{i})^{2}(\gamma_{f}+\gamma_{i})L^{2}}\Delta^{2}.\label{eq:eta} \end{equation} When $\Delta\rightarrow0$, $\mathcal{L}_{k^{*}}(t_{n}^{*})\rightarrow0$ and thus $\ln\mathcal{L}_{k^{*}}(t_{n}^{*})$ is divergent, i.e., when $\phi$ reaches $\phi_{c}$, the rate function becomes divergent at the critical times. \begin{figure} \caption{(a) The rate function $\lambda(t)$ versus $t$ for the SSH model with $\gamma_{i}=1.5$ and $\gamma_{f}=0.5$ for various $\phi$.\label{Fig3}} \end{figure} In Fig. \ref{Fig3}(a), we demonstrate rate functions versus $t$ for various $\phi$ with $L=20$, $\gamma_{i}=1.5$ and $\gamma_{f}=0.5$.
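The quoted numbers ($k_{c}/\pi\approx0.839$, $\phi_{c}/\pi\approx0.783$, $t_{1}^{*}\approx2.565$) can be reproduced with a short script (ours; variable names illustrative), which also checks that at $\phi=\phi_{c}$ one quantized momentum hits $k_{c}$ exactly, so that $\mathcal{L}_{k^{*}}(t_{1}^{*})=0$ up to rounding:

```python
import math

gi, gf, L = 1.5, 0.5, 20                              # values used in the text
kc = math.acos(-(1 + gi * gf) / (gi + gf))            # k_{c,+}, Eq. (kc)
ef_kc = math.sqrt((1 - gf ** 2) * (gi - gf) / (gi + gf))  # Eq. (Ekc), J1 = 1
t1 = math.pi / (2 * ef_kc)                            # first critical time
phic = min((L * kc) % (2 * math.pi), (-L * kc) % (2 * math.pi))  # Eq. (ssh-phic)

def le_mode(k, t):
    """Per-mode LE, 1 - Lambda_k sin^2[eps_f(k) t]."""
    ei = math.sqrt(1 + 2 * gi * math.cos(k) + gi ** 2)
    ef = math.sqrt(1 + 2 * gf * math.cos(k) + gf ** 2)
    lam = 1 - ((1 + (gi + gf) * math.cos(k) + gi * gf) / (ei * ef)) ** 2
    return 1 - lam * math.sin(ef * t) ** 2

# twisted momenta k = (2 pi m + phi_c)/L; one of them coincides with k_c
ks = [(2 * math.pi * m + phic) / L for m in range(-L // 2, L - L // 2)]
Lk_min = min(le_mode(k, t1) for k in ks)
```

Here `Lk_min` vanishes to machine precision, which is the finite-size exact zero of LE discussed above.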
It is shown that the rate function is divergent at the critical times $t_{1}^{*}\approx2.565$ and $t_{2}^{*}\approx7.695$ when $\phi$ is tuned to the critical value $\phi_{c}$ which is shown in Fig. \ref{Fig2}(b). In comparison with Fig. \ref{Fig1}(a), both the nonanalytical behaviors occur at the same critical times $t_{1}^{*}$ and $t_{2}^{*}$. While the nonanalyticity of the rate function in the thermodynamical limit is characterized by a cusp or a kink, the nonanalyticity of the rate function of a finite-size system induced by tuning the flux $\phi$ is characterized by the appearance of singularity at the critical times. Such a singularity of the rate function for the finite-size system is a kind of dynamical singularity, which corresponds to the occurrence of exact zeros of LE. The existence of dynamical singularity for a finite-size system means that the initial state can evolve to its orthogonal state at a series of times by tuning the magnetic flux. For given $\gamma_{i}$ and $\gamma_{f}$, tuning $\phi$ from $0$ to $\pi$, we can see from Fig. \ref{Fig3}(b) that if $\gamma_{i}$ and $\gamma_{f}$ belong to the same phase, $\lambda_{\mathrm{max}}$ barely changes with $\phi$, which means no singularity of the rate function can be observed; if $\gamma_{i}$ and $\gamma_{f}$ belong to different phases, $\lambda_{\mathrm{max}}$ will diverge at $\phi_{c}/\pi\approx0.783$, which gives a signal of DQPT. Therefore, we can judge whether a DQPT happens by observing the change of $\lambda_{\mathrm{max}}$ as $\phi$ varies continuously from $0$ to $\pi$. By tuning $\phi$ in finite-size systems, we also obtain the critical times of DQPT, which are usually defined in the thermodynamical limit and in previous studies could only be extracted from finite-size-scaling analysis.
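The $\lambda_{\mathrm{max}}$-versus-$\phi$ diagnostic above can be sketched numerically (our own code, with illustrative names; on a finite $(\phi,t)$ grid the divergence at $\phi_{c}$ appears as a large but finite peak):

```python
import numpy as np

def lambda_max(gi, gf, L, phi, ts):
    """Maximum of the SSH rate function over a sampled time window (J1 = 1)."""
    m = np.arange(-L // 2, L - L // 2)
    k = (2 * np.pi * m + phi) / L
    ei = np.sqrt(1 + 2 * gi * np.cos(k) + gi ** 2)
    ef = np.sqrt(1 + 2 * gf * np.cos(k) + gf ** 2)
    lam = 1 - ((1 + (gi + gf) * np.cos(k) + gi * gf) / (ei * ef)) ** 2
    Lk = 1 - lam[:, None] * np.sin(ef[:, None] * ts[None, :]) ** 2
    return float((-np.log(Lk).sum(axis=0) / L).max())

ts = np.linspace(0.01, 10.0, 1500)
phis = np.linspace(0.0, np.pi, 201)
cross = max(lambda_max(1.5, 0.5, 20, p, ts) for p in phis)  # quench across gamma_c = 1
same = max(lambda_max(1.5, 2.5, 20, p, ts) for p in phis)   # quench within one phase
```

The quench across the transition produces a sharp enhancement of $\lambda_{\mathrm{max}}$ near $\phi_{c}$, while the same-phase quench stays small and nearly flat in $\phi$.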
\subsection{Creutz model} Next we consider the Creutz model \cite{Creutz1999}, which describes the dynamics of a spinless electron moving in a ladder system governed by the Hamiltonian \begin{align} H & =-\sum_{j}[J_{h}(e^{i\theta}c_{j+1}^{p\dagger}c_{j}^{p}+e^{-i\theta}c_{j+1}^{q\dagger}c_{j}^{q})\nonumber \\ & \qquad\qquad+J_{d}(c_{j+1}^{p\dagger}c_{j}^{q}+c_{j+1}^{q\dagger}c_{j}^{p})+J_{v}c_{j}^{q\dagger}c_{j}^{p}+\mathrm{H.c.}], \end{align} where $c_{j}^{p(q)\dagger}$ and $c_{j}^{p(q)}$ are fermionic creation and annihilation operators on the $j$th site of the lower (upper) chain; $J_{h}$, $J_{d}$, and $J_{v}$ represent hopping amplitudes for horizontal, diagonal, and vertical bonds, respectively; and $\theta\in[-\pi/2,\pi/2]$ represents the magnetic flux per plaquette induced by a magnetic field piercing the ladder \citep{Jafari,Creutz1999}. Via the Fourier transformation, the vector components of the Hamiltonian in momentum space can be expressed as $d_{x}(k)=-2J_{d}\cos k-J_{v},\ d_{y}(k)=0,\ d_{z}(k)=-2J_{h}\sin k\sin\theta$, and $d_{0}(k)=-2J_{h}\cos k\cos\theta$. For simplicity, in the following we will focus on the case of $J_{h}=J_{d}=J$ and $J_{v}/2J<1$, and take $J=1$ as the unit of energy. In this case, the Creutz model has two distinct topologically nontrivial phases for $-\pi/2\leq\theta<0$ and $0<\theta\leq\pi/2$ with a phase transition occurring at $\theta=0$ \cite{LiLH}.
We then quench the parameter $\theta$ from $\theta_{i}$ to $\theta_{f}$ at $t=0$ and obtain the LE of the Creutz model \begin{align} \mathcal{L}(t) & =\prod_{k}\left\{ 1-\Lambda_{k}\sin^{2}[\epsilon_{f}(k)t]\right\} , \end{align} where \begin{equation} \Lambda_{k}=1-\frac{16J^{4}[(\cos k+\tilde{J_{v}})^{2}+\sin^{2}k\sin\theta_{i}\sin\theta_{f}]^{2}}{\epsilon_{i}^{2}(k)\epsilon_{f}^{2}(k)}, \end{equation} \begin{equation} \epsilon_{i}(k)=2J\sqrt{(\cos k+\tilde{J_{v}})^{2}+\sin^{2}k\sin^{2}\theta_{i}}, \end{equation} and \begin{equation} \epsilon_{f}(k)=2J\sqrt{(\cos k+\tilde{J_{v}})^{2}+\sin^{2}k\sin^{2}\theta_{f}}\label{Ekf-1} \end{equation} with $\tilde{J_{v}}=J_{v}/2J$. The corresponding constraint relation of Eq. (\ref{eq:dvalue}) for the occurrence of exact zeros of LE is \begin{equation} \sin\theta_{f}=-\frac{(\cos k+\tilde{J_{v}})^{2}}{\sin^{2}k\sin\theta_{i}}.\label{eq:thetaf} \end{equation} If $\sin\theta_{i}<0$, Eq. (\ref{eq:thetaf}) is fulfilled only for $\sin\theta_{f}>0$. On the other hand, if $\sin\theta_{i}>0$, Eq. (\ref{eq:thetaf}) is fulfilled only for $\sin\theta_{f}<0$. This means that the dynamical singularity of the rate function exists only when the quench parameter $\theta$ crosses the underlying phase transition point.
When $\theta_{i}$ and $\theta_{f}$ are in different phase regions, there are always two pairs of momentum modes given by \begin{equation} k_{c,1\pm}=\pm\arccos\left[\frac{-\tilde{J_{v}}+\sqrt{A(\tilde{J}_{v}^{2}-1+A)}}{1-A}\right],\label{kc-1} \end{equation} and \begin{equation} k_{c,2\pm}=\pm\arccos\left[\frac{-\tilde{J_{v}}-\sqrt{A(\tilde{J}_{v}^{2}-1+A)}}{1-A}\right],\label{kc-2} \end{equation} with $A=\sin\theta_{i}\sin\theta_{f}$, which lead to the occurrence of a series of dynamical singularities of the rate function at \begin{equation} t_{n,1/2}^{*}=\frac{\pi}{2\epsilon_{f}(k_{c,1\pm/2\pm})}(2n-1), \end{equation} with \begin{equation} \epsilon_{f}(k_{c,1\pm/2\pm})/2J=\sqrt{(\sin\theta_{f}-\sin\theta_{i})\sin\theta_{f}\sin^{2}k_{c,1\pm/2\pm}}\label{Ekc-1} \end{equation} and $n$ being a positive integer. Since $\epsilon_{f}(k_{c,1+/2+})=\epsilon_{f}(k_{c,1-/2-})$, we omit the subscript $\pm$ in $t_{n,1/2}^{*}$. Similarly, $k_{c,1\pm}$ and $k_{c,2\pm}$ are usually not equal to the quantized momentum values $k=2\pi m/L$ enforced by the PBC except for some fine-tuned postquench parameters. This means that exact zeros of LE generally do not exist in a finite-size system for arbitrary $\theta_{i}$ and $\theta_{f}$. As the system size increases, $k_{c,1\pm/2\pm}$ can be approached, since $\min_{k}|k-k_{c,1\pm/2\pm}|\leq\pi/L$, and thus dynamical singularities of the rate function are usually only achieved in the limit $L\rightarrow\infty$. \begin{figure} \caption{(a) The rate function $\lambda(t)$ versus $t$ for the Creutz model with different system sizes $L=40$, $60$, $100$, and $1200$. Vertical dashed lines guide the values of the critical times $t_{1,1}^{*}$ and $t_{1,2}^{*}$.\label{Fig4}} \end{figure} In Fig. \ref{Fig4}(a), we display the rate function $\lambda(t)$ versus time $t$ for different system sizes $L$. From Figs.
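Equations (\ref{kc-1})--(\ref{Ekc-1}) can be evaluated directly; the sketch below (ours, with illustrative names) reproduces the values quoted later in the text, $k_{c,1+}/\pi\approx0.536$, $k_{c,2+}/\pi\approx0.772$, $t_{1,1}^{*}\approx1.435$, and $t_{1,2}^{*}\approx2.176$, for $\theta_{i}=0.4$, $\theta_{f}=-0.4$, $\tilde{J}_{v}=0.5$, and $J=1$:

```python
import math

ti, tf, Jv = 0.4, -0.4, 0.5               # theta_i, theta_f, J_v/2J (J = 1)
A = math.sin(ti) * math.sin(tf)
root = math.sqrt(A * (Jv ** 2 - 1 + A))
kc1 = math.acos((-Jv + root) / (1 - A))   # k_{c,1+}, Eq. (kc-1)
kc2 = math.acos((-Jv - root) / (1 - A))   # k_{c,2+}, Eq. (kc-2)

def eps_f(k):
    """Postquench dispersion, Eq. (Ekf-1), with 2J = 2."""
    return 2 * math.sqrt((math.cos(k) + Jv) ** 2
                         + math.sin(k) ** 2 * math.sin(tf) ** 2)

t11 = math.pi / (2 * eps_f(kc1))          # first critical time t_{1,1}^*
t12 = math.pi / (2 * eps_f(kc2))          # first critical time t_{1,2}^*
```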
\ref{Fig4}(b) and \ref{Fig4}(c) we can see that $t_{1,1}^{*}(L)$ and $t_{1,2}^{*}(L)$ approach the critical times in an oscillating way as the size $L$ increases. With the increase of the size $L$, we find that the value of $\lambda_{\mathrm{max}}$ also approaches a finite number in an oscillating way and $\lambda_{\mathrm{max}}\sim0.621$ when $L\rightarrow\infty$. In the thermodynamical limit, the momentum $k$ is distributed continuously and we have $\lambda(t)=-\frac{1}{2\pi}\int_{0}^{2\pi}\ln[1-\Lambda_{k}\sin^{2}[\epsilon_{f}(k)t]]dk$, from which we numerically evaluate the value $\lambda(t_{1,1}^{*})\approx0.621$ at the critical time $t_{1,1}^{*}$, agreeing with $\lambda_{\mathrm{max}}$ in the thermodynamical limit. \begin{figure} \caption{(a) The images of $k_{c,1+}$, $k_{c,1-}$, $k_{c,2+}$, and $k_{c,2-}$ versus $\theta_{f}$.\label{Fig5}} \end{figure} In Fig. \ref{Fig5}(a), we exhibit the images of $k_{c,1+}$, $k_{c,1-}$, $k_{c,2+}$, and $k_{c,2-}$ versus $\theta_{f}$ for $\theta_{i}=-0.4$ and $\theta_{i}=0.4$ according to Eqs. (\ref{kc-1}) and (\ref{kc-2}), and the four red circles denote $k_{c,1\pm}/\pi\approx\pm0.536$ and $k_{c,2\pm}/\pi\approx\pm0.772$ for $\theta_{i}=0.4$ and $\theta_{f}=-0.4$. Since the quantized momenta $k$ usually do not include $k_{c,1\pm}$ and $k_{c,2\pm}$ under the PBC, we introduce the twist boundary condition here. For a system with a given finite size $L$, we can always achieve $k_{c,1+/2+}$ or $k_{c,1-/2-}$ by using the twist boundary condition with \begin{equation} \phi_{c,1/2}=\min\{\mathrm{mod}[Lk_{c,1+/2+},2\pi],\,\mathrm{mod}[Lk_{c,1-/2-},2\pi]\}.\label{Creutz-phic} \end{equation} Figure \ref{Fig5}(b) displays the images of $\phi_{c,1}/\pi$ and $\phi_{c,2}/\pi$ versus $\theta_{f}$ according to Eq. (\ref{Creutz-phic}) for the system with $\theta_{i}=0.4$, $\tilde{J}_{v}=0.5$, and $L=20$, and the two red points denote $\phi_{c,1}/\pi\approx0.721$ and $\phi_{c,2}/\pi\approx0.550$ for $\theta_{i}=0.4$ and $\theta_{f}=-0.4$.
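The two critical fluxes quoted above can be checked with a few lines (our illustrative sketch for $\theta_{i}=0.4$, $\theta_{f}=-0.4$, $\tilde{J}_{v}=0.5$, $L=20$), applying Eq. (\ref{Creutz-phic}) to the momenta of Eqs. (\ref{kc-1}) and (\ref{kc-2}):

```python
import math

ti, tf, Jv, L = 0.4, -0.4, 0.5, 20        # values used in the text
A = math.sin(ti) * math.sin(tf)
root = math.sqrt(A * (Jv ** 2 - 1 + A))
kc1 = math.acos((-Jv + root) / (1 - A))   # k_{c,1+}
kc2 = math.acos((-Jv - root) / (1 - A))   # k_{c,2+}

def phi_c(kc):
    """Critical flux, Eq. (Creutz-phic): minimum over the momenta +-k_c."""
    return min((L * kc) % (2 * math.pi), (-L * kc) % (2 * math.pi))

phic1, phic2 = phi_c(kc1), phi_c(kc2)
```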
Setting $\Delta_{1/2}=\phi-\phi_{c,1/2}$, at the time $t=t_{n,1/2}^{*}$ we have \begin{equation} \lambda(t_{n,1/2}^{*})=-\frac{1}{L}\left[\ln\mathcal{L}_{k_{1/2}^{*}}(t_{n,1/2}^{*})+\sum_{k\neq k_{1/2}^{*}}\ln\mathcal{L}_{k}(t_{n,1/2}^{*})\right], \end{equation} where $\mathcal{L}_{k_{1/2}^{*}}(t_{n,1/2}^{*})$ comes from the contribution of the $k_{1/2}^{*}$-mode which is closest to $k_{c,1/2}$, i.e., $k_{1/2}^{*}=k_{c,1/2}+\Delta_{1/2}/L$. In the limit $\Delta_{1/2}\rightarrow0$, we obtain \begin{equation} \mathcal{L}_{k_{1/2}^{*}}(t_{n,1/2}^{*})\approx B_{1/2}\Delta_{1/2}^{2},\label{eq:eta-1} \end{equation} where \[ B_{1/2}=\frac{4[t_{n,1/2}^{*2}(\tilde{J}_{v}+\cos k_{c,1/2}\cos^{2}\theta_{f})^{2}-C]}{(\sin\theta_{f}-\sin\theta_{i})\sin\theta_{f}L^{2}}, \] with $C=\frac{\sin\theta_{f}(\tilde{J}_{v}^{2}-1+\sin\theta_{i}\sin\theta_{f})}{\sin^{2}k_{c,1/2}(\sin\theta_{f}-\sin\theta_{i})}$. This means that when $\Delta_{1/2}\rightarrow0$, i.e., $\phi\rightarrow\phi_{c,1/2}$, we have $\mathcal{L}_{k_{1/2}^{*}}(t_{n,1/2}^{*})\propto\Delta_{1/2}^{2}$. When $\phi$ reaches $\phi_{c,1/2}$, there is a $k_{1/2}^{*}$-mode which satisfies $k_{1/2}^{*}=k_{c,1/2}$ and $\mathcal{L}_{k_{1/2}^{*}}(t_{n,1/2}^{*})=0$, and thus the rate function is divergent at $t_{n,1/2}^{*}$. \begin{figure} \caption{(a) The rate function $\lambda(t)$ versus $t$ for the Creutz model with $\theta_{i}=0.4$ and $\theta_{f}=-0.4$ for various $\phi$.\label{Fig6}} \end{figure} In Fig. \ref{Fig6}(a), we demonstrate rate functions versus $t$ for various $\phi$ with $L=20$, $\tilde{J}_{v}=0.5$, $\theta_{i}=0.4$, and $\theta_{f}=-0.4$. It is shown that the rate functions are divergent at the critical times $t_{1,1}^{*}\approx1.435$ and $t_{2,1}^{*}\approx4.306$, when $\phi$ is tuned to the critical value $\phi_{c,1}\approx0.721\pi$, and divergent at the critical times $t_{1,2}^{*}\approx2.176$ and $t_{2,2}^{*}\approx6.527$, when $\phi$ is tuned to the critical value $\phi_{c,2}\approx0.550\pi$. In comparison with Fig.
\ref{Fig4}(a), all the nonanalytical behaviors occur at the same critical times obtained by finite-size-scaling analysis. For a given pair of $\theta_{i}$ and $\theta_{f}$, Fig. \ref{Fig6}(b) shows that if $\theta_{i}$ and $\theta_{f}$ belong to the same phase, $\lambda_{\mathrm{max}}$ only changes slightly with $\phi$, which means the absence of DQPT; if $\theta_{i}$ and $\theta_{f}$ belong to different phases, $\lambda_{\mathrm{max}}$ will diverge at $\phi_{c,1}/\pi$ and $\phi_{c,2}/\pi$, indicating the occurrence of DQPT. It is a remarkable fact that there are two critical magnetic fluxes $\phi_{c,1}$ and $\phi_{c,2}$, which are generated by two pairs of momentum modes $k_{c,1\pm}$ and $k_{c,2\pm}$, respectively. While $\phi_{c,1}$ only produces singularities at $t_{n,1}^{*}$, $\phi_{c,2}$ produces singularities at $t_{n,2}^{*}$. \section{Application to other model systems} To exhibit the applicability of our theoretical scheme to more general cases, here we study more examples by taking into account the effects of long-range hopping, dimensionality, and interaction. We shall explore the dynamical singularity of the rate function in the SSH model with long-range hopping, the two-dimensional Qi-Wu-Zhang model \citep{QiWuZhang2006PRB}, and the interacting SSH model, respectively. \subsection{SSH model with long-range hopping} Consider the SSH model with long-range hopping described by \begin{align} H & =\sum_{n=1}^{L}\sum_{r=1}^{L/2}(V_{1,r}c_{A,n}^{\dagger}c_{B,n+r-1}+V_{2,r}c_{A,n+r}^{\dagger}c_{B,n}\nonumber \\ & +V_{3,r}c_{A,n}^{\dagger}c_{A,n+r}+V_{4,r}c_{B,n}^{\dagger}c_{B,n+r}+\mathrm{H.c.}), \end{align} where $V_{1,r}=J_{1}e^{-\alpha(r-1)},\ V_{2,r}=J_{2}e^{-\alpha(r-1)},\ V_{3,r}=J_{3}e^{-\alpha(r-1)},\ V_{4,r}=J_{4}e^{-\alpha(r-1)}$, and $\alpha$ is a tunable positive parameter. The model is schematically depicted in Fig. \ref{fig:long-range}(a).
By using the Fourier transformation $c_{A/B,n}^{\dagger}=\frac{1}{\sqrt{L}}\sum_{k}e^{ikn}c_{A/B,k}^{\dagger}$, we get \begin{align} H & =\sum_{k}\sum_{r=1}^{L/2}[(V_{1,r}e^{-ik(r-1)}+V_{2,r}e^{ikr})c_{A,k}^{\dagger}c_{B,k}\nonumber \\ & +\cos[kr](V_{3,r}c_{A,k}^{\dagger}c_{A,k}+V_{4,r}c_{B,k}^{\dagger}c_{B,k})+\mathrm{H.c.}]. \end{align} \begin{figure} \caption{(a) A scheme for the SSH model with long-range hopping. (b) The rate function $\lambda(t)$ versus $t$ with $\phi=0$, $0.487\pi$, $0.8\pi$, and $\pi$ for the system with $\alpha=1$. Vertical dashed lines guide the divergent points $t_{1}^{*}$ and $t_{2}^{*}$.\label{fig:long-range}} \end{figure} The vector components of the Hamiltonian in momentum space are \begin{align} d_{x}(k) & =\sum_{r=1}^{L/2}(V_{1,r}\cos[k(r-1)]+V_{2,r}\cos[kr]),\\ d_{y}(k) & =\sum_{r=1}^{L/2}(V_{1,r}\sin[k(r-1)]-V_{2,r}\sin[kr]),\\ d_{z}(k) & =\sum_{r=1}^{L/2}(V_{3,r}-V_{4,r})\cos[kr],\\ d_{0}(k) & =\sum_{r=1}^{L/2}(V_{3,r}+V_{4,r})\cos[kr]. \end{align} For simplicity, we set $V_{3,r}=V_{4,r}$ and choose $V_{2,r}$ as the quench parameter. The phase transition point is $V_{2c,r}/V_{1,r}=1$. According to Eq. (\ref{eq:dvalue}), the corresponding constraint relation for the occurrence of divergence of the rate function is \begin{align} \sum_{r=1}^{L/2}(V_{1,r}\cos[k(r-1)]+V_{2i,r}\cos[kr])\nonumber \\ \times\sum_{r=1}^{L/2}(V_{1,r}\cos[k(r-1)]+V_{2f,r}\cos[kr])\nonumber \\ +\sum_{r=1}^{L/2}(V_{1,r}\sin[k(r-1)]-V_{2i,r}\sin[kr])\nonumber \\ \times\sum_{r=1}^{L/2}(V_{1,r}\sin[k(r-1)]-V_{2f,r}\sin[kr]) & =0.\label{eq:kc} \end{align} If we choose $J_{1}=1,\ J_{2i}=1.5,\ J_{2f}=0.5,\ \alpha=1$, and $L=20$, we can get $k_{c}\approx0.676\pi$ from Eq. (\ref{eq:kc}). Then it follows that $\phi_{c}\approx0.487\pi$, $t_{1}^{*}\approx3.163$, and $t_{2}^{*}\approx9.490$. In Fig. \ref{fig:long-range}(b), we show the rate function for the case of $\alpha=1$ for various $\phi$.
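The root of Eq. (\ref{eq:kc}) and the resulting $\phi_{c}$ and $t_{1}^{*}$ can be obtained by a simple bracketing search (our own sketch; names illustrative). Writing $d_{x}+id_{y}=\sum_{r}(V_{1,r}e^{ik(r-1)}+V_{2,r}e^{-ikr})$, the constraint is the real part of the product of the prequench and conjugated postquench amplitudes:

```python
import numpy as np

J1, J2i, J2f, alpha, L = 1.0, 1.5, 0.5, 1.0, 20
r = np.arange(1, L // 2 + 1)              # r = 1, ..., L/2
decay = np.exp(-alpha * (r - 1))

def h(J2, k):
    """d_x + i d_y of the long-range SSH chain (V_{3,r} = V_{4,r})."""
    return np.sum(J1 * decay * np.exp(1j * k * (r - 1))
                  + J2 * decay * np.exp(-1j * k * r))

def constraint(k):
    """Left-hand side of Eq. (eq:kc): d_x^i d_x^f + d_y^i d_y^f."""
    return (h(J2i, k) * np.conj(h(J2f, k))).real

# bracket the sign change on (0, pi) and bisect it down to the root k_c
ks = np.linspace(1e-3, np.pi - 1e-3, 2000)
vals = np.array([constraint(k) for k in ks])
i = int(np.flatnonzero(np.sign(vals[:-1]) != np.sign(vals[1:]))[0])
a, b = ks[i], ks[i + 1]
for _ in range(60):
    c = 0.5 * (a + b)
    if np.sign(constraint(c)) == np.sign(constraint(a)):
        a = c
    else:
        b = c
kc = 0.5 * (a + b)
ef_kc = abs(h(J2f, kc))                   # eps_f(k_c); d_z = 0 since V3 = V4
t1 = np.pi / (2 * ef_kc)                  # first divergent time
phic = min(np.mod(L * kc, 2 * np.pi), np.mod(-L * kc, 2 * np.pi))
```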
It is obvious that the rate function diverges at $t_{1}^{*}$ and $t_{2}^{*}$ for $\phi_{c}\approx0.487\pi$, while the rate function is analytic for other values of $\phi$. As a comparison, we choose $\alpha=10$ and keep the other parameters the same as the case of $\alpha=1$, and the rate function is displayed in Fig. \ref{fig:long-range}(c). In this case, the amplitude of hopping decays rapidly so that the dynamical behavior of the model resembles the SSH model without long-range hopping. Similar to Fig. \ref{Fig3}(a), the result for the case of $\alpha=10$ in Fig. \ref{fig:long-range}(c) shows that the rate function diverges at $t_{1}^{*}\approx{2.565}$ and $t_{2}^{*}\approx{7.696}$ for $\phi_{c}\approx0.782\pi$. \subsection{Qi-Wu-Zhang model} Next we consider the Qi-Wu-Zhang model, which is a two-dimensional two-band model described by \begin{align} H & =-\frac{1}{2}\sum_{n_{x},n_{y}}[(c_{n_{x},n_{y}}^{\dagger}c_{n_{x}+1,n{}_{y}}+c_{n_{x},n_{y}}^{\dagger}c_{n_{x},n{}_{y}+1})\nonumber \\ & +ic_{n_{x},n_{y}}^{\dagger}c_{n_{x},n{}_{y}+1}^{\dagger}-c_{n_{x},n_{y}}^{\dagger}c_{n_{x}+1,n{}_{y}}^{\dagger}\nonumber \\ & +\mu c_{n_{x},n_{y}}^{\dagger}c_{n_{x},n{}_{y}}+\mathrm{H.c.}], \end{align} where $\mu$ is the chemical potential. After the Fourier transformation, we get the vector components of the Hamiltonian in momentum space as \begin{align} d_{x} & =\sin k_{y},\\ d_{y} & =-\sin k_{x},\\ d_{z} & =-\cos k_{x}-\cos k_{y}-\mu,\\ d_{0} & =-2\mu. \end{align} Depending on the value of $\mu$, the Qi-Wu-Zhang model is known to have three different topological phases characterized by different band Chern numbers with transition points at $\mu_{c}=0$ and $\pm2$ \citep{QiWuZhang2006PRB}. According to Eq. 
(\ref{eq:dvalue}), the corresponding constraint relation for the occurrence of divergence of the rate function is \begin{equation} \mu_{i}\mu_{f}+(\cos k_{x}+\cos k_{y})(\mu_{i}+\mu_{f})+2\cos k_{x}\cos k_{y}+2=0.\label{eq:2D} \end{equation} For the Qi-Wu-Zhang model, we find that many pairs $(k_{xc},k_{yc})$ satisfy Eq. (\ref{eq:2D}), and the value of $k_{yc}$ is determined by \begin{equation} k_{yc}=\pm\arccos\left[\frac{-\mu_{i}\mu_{f}-(\mu_{i}+\mu_{f})\cos k_{xc}-2}{\mu_{i}+\mu_{f}+2\cos k_{xc}}\right]. \end{equation} Taking $\mu$ as the quench parameter, we choose $\mu_{i}\in(0,2)$ and $\mu_{f}\in(-2,0)$. To make sure that $k_{yc}$ is real, the value of $k_{xc}$ is bounded as follows: \begin{align} \cos k_{xc}\in & \left[-1,-\frac{\mu_{i}\mu_{f}}{\mu_{i}+\mu_{f}+2}-1\right]\nonumber \\ & \cup\left[-\frac{\mu_{i}\mu_{f}}{\mu_{i}+\mu_{f}-2}+1,1\right]. \end{align} It should be noted that the critical time $t_{n}^{*}$ is no longer a single value but lies in an interval: \begin{equation} t_{n}^{*}\in\left[\frac{(2n-1)\pi}{2\sqrt{\frac{\mu_{f}(\mu_{f}-2)(\mu_{f}-\mu_{i})}{\mu_{i}+\mu_{f}-2}}},\frac{(2n-1)\pi}{2\sqrt{\frac{\mu_{f}(\mu_{f}+2)(\mu_{f}-\mu_{i})}{\mu_{i}+\mu_{f}+2}}}\right], \end{equation} where $n$ is a positive integer. \begin{figure} \caption{(a) The rate function $\lambda(t)$ versus $t$ for the Qi-Wu-Zhang model with $\mu_{i}=0.3$, $\mu_{f}=-0.9$, and $L_{x}=L_{y}=12$.} \label{fig:Qi-Wu-Zhang} \end{figure} It is worth noting that, for a finite-size system, both $k_{x}$ and $k_{y}$ take discrete values. Thus the number of pairs $(k_{xc},k_{yc})$ satisfying Eq. (\ref{eq:2D}) is finite. If we choose $\mu_{i}=0.3,\ \mu_{f}=-0.9$, and $L_{x}=L_{y}=12$, we can get 16 pairs of $(k_{xc},k_{yc})$ from Eq. (\ref{eq:2D}), given by $(k_{xc,1},k_{yc,1})\approx(\pm0.146\pi,\pi)$ or $(\pi,\pm0.146\pi),$ $(k_{xc,2},k_{yc,2})\approx(\pm0.0849\pi,\pm5\pi/6)$ or $(\pm5\pi/6,\pm0.0849\pi)$, and $(k_{xc,3},k_{yc,3})\approx(\pm0.799\pi,0)$ or $(0,\pm0.799\pi)$.
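The closed-form expression for $k_{yc}$ can be verified directly against Eq.~(\ref{eq:2D}); a short illustrative sketch that also recovers the pair $(k_{xc,3},k_{yc,3})\approx(0.799\pi,0)$ listed above:

```python
import math

mu_i, mu_f = 0.3, -0.9

def cos_kyc(cos_kxc):
    """cos(k_yc) from the closed-form solution of Eq. (eq:2D)."""
    return (-mu_i * mu_f - (mu_i + mu_f) * cos_kxc - 2) / (mu_i + mu_f + 2 * cos_kxc)

def residual(cos_kx, cos_ky):
    """Left-hand side of Eq. (eq:2D); it vanishes at a critical pair."""
    return (mu_i * mu_f + (cos_kx + cos_ky) * (mu_i + mu_f)
            + 2 * cos_kx * cos_ky + 2)

# Setting k_yc = 0 (cos k_yc = 1), Eq. (eq:2D) becomes linear in cos(k_xc).
cos_kx = (-mu_i * mu_f - (mu_i + mu_f) - 2) / (mu_i + mu_f + 2)
kxc = math.acos(cos_kx)  # approximately 0.799*pi
```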
By applying the twist boundary conditions $(k_{x}=\frac{2\pi m_{x}+\phi_{xc}}{L_{x}},k_{y}=\frac{2\pi m_{y}}{L_{y}})$ or $(k_{x}=\frac{2\pi m_{x}}{L_{x}},k_{y}=\frac{2\pi m_{y}+\phi_{yc}}{L_{y}})$ with $\phi_{xc/yc,1}\approx0.244\pi,$ $\phi_{xc/yc,2}\approx0.981\pi$, and $\phi_{xc/yc,3}\approx0.412\pi$, we get the corresponding critical times given by $t_{n,1}^{*}\approx1.431(2n-1),\ t_{n,2}^{*}\approx1.602(2n-1)$, and $t_{n,3}^{*}\approx1.705(2n-1)$, respectively. In Fig. \ref{fig:Qi-Wu-Zhang}(a), we show the rate function for the Qi-Wu-Zhang model for various $\phi_{x/y}$, where $\mu_{i}=0.3,\ \mu_{f}=-0.9$, and $L_{x}=L_{y}=12$. The rate functions distinctly diverge at the corresponding critical times for $\phi_{x/y}=\phi_{xc/yc,1},\ \phi_{x/y}=\phi_{xc/yc,2}$, and $\phi_{x/y}=\phi_{xc/yc,3}$, while the rate function under the periodic boundary condition is smooth for all times. Figure \ref{fig:Qi-Wu-Zhang}(b) shows that $\lambda_{\mathrm{max}}$ diverges at $\phi_{xc/yc,1}\approx0.244\pi,$ $\phi_{xc/yc,2}\approx0.981\pi$, and $\phi_{xc/yc,3}\approx0.412\pi$ when $\mu_{i}$ and $\mu_{f}$ are in different phase regions, while there is no divergence when $\mu_{i}$ and $\mu_{f}$ belong to the same phase. It is important to note that each $\phi_{xc/yc}$ produces only part of the singularities. To obtain the full sets of critical times $t_{1}^{*}\in[1.431,1.705]$, $t_{2}^{*}\in[4.294,5.116]$, and so on, one needs to apply appropriate twist boundary conditions in both the $x$ and $y$ directions simultaneously.
\subsection{Interacting SSH model} To check whether our theoretical scheme works for interacting systems, we now consider the interacting SSH model with the twist boundary condition, \begin{align} H= & \sum_{j=1}^{L-1}\left(J_{1}c_{j,A}^{\dagger}c_{j,B}+J_{2}c_{j,B}^{\dagger}c_{j+1,A}+\mathrm{H.c.}\right)\nonumber \\ & +(J_{1}c_{L,A}^{\dagger}c_{L,B}+J_{2}e^{-i\phi}c_{L,B}^{\dagger}c_{1,A}+\mathrm{H.c.})\nonumber \\ & +U\sum_{j=1}^{L}\left(n_{j,A}n_{j,B}+n_{j,B}n_{j+1,A}\right), \end{align} where $U>0$ characterizes the strength of the nearest-neighbor repulsive interaction, $n_{j,A/B}$ denotes the fermion occupation number operator of sublattice $A/B$ in the unit cell $j$, and $n_{L+1,A}=n_{1,A}$. Here we consider the half-filling case. The topological phase transition of the interacting SSH model under the periodic boundary condition was discussed in Ref. \cite{Tang}. When $U$ is much larger than $J_{1}$ and $J_{2}$, the system is in a density-wave phase with the ground state approximately described by $|1010\cdots\rangle$ or $|0101\cdots\rangle$. Here we consider the case with $U$ much smaller than $J_{1}$ and $J_{2}$, for which there is still a phase transition when we change the parameter $J_{2}/J_{1}$, with the transition point close to $J_{2}/J_{1}=1$ when $U$ is small. \begin{figure} \caption{The rate function $\lambda(t)$ versus $t$ for the interacting SSH model with $J_{1}=1$.} \label{fig:issh} \end{figure} Since our motivation is to observe the signature of dynamical singularity in a small-size system, we shall not attempt to determine the phase boundary of the system precisely. By applying the finite-size scaling of fidelity \cite{Tang}, we obtain the approximate phase transition points $J_{2c}/J_{1}\approx1.038$ and $J_{2c}/J_{1}\approx1.103$ for $U=0.1$ and $U=0.6$, respectively. We numerically calculate the rate function by exact diagonalization of a system with $L=5$, fixing $U$ and $J_{1}=1$ and quenching the parameter $J_{2}$.
The numerical results are shown in Fig. \ref{fig:issh}. For a quench from $J_{2}/J_{1}<1$ to $J_{2}/J_{1}>1$, we observe that the rate functions exhibit peaks at certain values of the magnetic flux $\phi$, as shown in Figs. \ref{fig:issh}(c) and \ref{fig:issh}(d). However, these peaks are absent for a quench from $J_{2}/J_{1}<1$ to $J_{2}/J_{1}<1$, as shown in Figs. \ref{fig:issh}(a) and \ref{fig:issh}(b). To scrutinize these peaks, we select three rate functions from each of Figs. \ref{fig:issh}(c) and \ref{fig:issh}(d) and show them in Figs. \ref{fig:issh}(e) and \ref{fig:issh}(f), respectively. We can see that the rate function with $\phi=1.4111$ in Fig. \ref{fig:issh}(e) and the rate function with $\phi=0.2012$ in Fig. \ref{fig:issh}(f) exhibit obvious peaks, while the other four rate functions display no divergence. Our numerical results indicate a clear signature of dynamical singularity even in a small-size interacting system by tuning the magnetic flux. \section{Conclusion and discussion} In summary, we proposed a theoretical scheme for studying the dynamical singularity of the rate function in finite-size quantum systems which exhibit DQPT in the thermodynamic limit. The dynamical singularity of the rate function occurs whenever the corresponding LE has exact zero points, which are, however, not accessible in a finite-size quantum system with the PBC because the momentum takes quantized values $k=2\pi m/L$. To realize the exact zeros of LE, we consider the twist boundary condition obtained by threading a magnetic flux through the system, which enables us to shift the quantized momenta continuously and achieve the exact zeros of LE.
Taking the SSH model and the Creutz model as concrete examples, we demonstrate that tuning the magnetic flux can lead to the occurrence of divergence in the rate function of a finite-size system at the same critical times as in the thermodynamic limit, when the quench parameter crosses the underlying equilibrium phase transition point. We also demonstrate the applicability of our theoretical scheme to more general cases, including the SSH model with long-range hopping, the Qi-Wu-Zhang model, and the interacting SSH model. Our work unveils that the singularity of the rate function is accessible in finite-size quantum systems by introducing an additional magnetic flux, which provides a possible way to experimentally detect DQPT and the critical times in finite-size quantum systems. For the experimental setup in a trapped-ion quantum simulator \cite{Monroe2017Nature,Jurcevic2017PRL}, it remains a challenging task to create a tunable magnetic flux in the setup. However, for cold atomic systems, one can implement discrete momentum states by using multifrequency Bragg lasers to realize the SSH model on a momentum lattice \cite{YanB}, where the synthetic magnetic flux through the ring is tunable \cite{YanB2020}. We expect that the momentum lattice of cold atomic systems might be a promising platform to observe the dynamical singularity of the rate function related to the DQPT. \begin{acknowledgments} The work is supported by the National Key Research and Development Program of China (Grant No. 2021YFA1402104), the NSFC under Grants No. 12174436 and No. T2121001, and the Strategic Priority Research Program of the Chinese Academy of Sciences under Grant No. XDB33000000. \end{acknowledgments} \end{document}
\begin{document} \title{Handling Sub-symmetry in Integer Programming using Activation Handlers} \author{Christopher Hojny \and Tom Verhoeff \and Sten Wessel } \authorrunning{C.\ Hojny et al.} \institute{Eindhoven University of Technology, Eindhoven, The Netherlands\\\email{\{c.hojny,t.verhoeff,s.wessel\}@tue.nl}} \maketitle \begin{abstract} Symmetry in integer programs (IPs) can be exploited in order to reduce solving times. Usually only symmetries of the original IP are handled, but new symmetries may arise at some nodes of the branch-and-bound tree. While symmetry-handling inequalities (SHIs) can easily be used to handle original symmetries, handling sub-symmetries arising later on is more intricate. To handle sub-symmetries, it has recently been proposed to add SHIs that are activated by auxiliary variables. This, however, may increase the size of the IP substantially as all sub-symmetries need to be modeled explicitly. As an alternative, we propose a new framework for generically activating SHIs, so-called \emph{activation handlers}. This framework allows for a direct implementation of routines that check for active sub-symmetries, eliminating the need for auxiliary variables. In particular, activation handlers can activate symmetry-handling techniques that are more powerful than SHIs. We show that our approach is flexible, with applications in the multiple-knapsack, unit commitment, and graph coloring problems. Numerical results show a substantial performance improvement over the existing sub-symmetry-handling methods. \keywords{Symmetry handling \and Sub-symmetries \and Integer programming.} \end{abstract} \section{Introduction} \label{sec:introduction} One of the most popular methods to solve integer programs is branch-and-bound (B\&B), which iteratively splits the integer program into smaller subproblems that are solved in turn~\cite{LandDoig1960}.
While B\&B can solve problems with thousands of variables and constraints in adequate time, it is well-known that the presence of symmetries leads to unnecessarily large B\&B trees. The main reason is that B\&B explores symmetric subproblems, which all provide essentially the same information. Therefore, symmetry-handling is an important ingredient of modern B\&B implementations that substantially improves the running time~\cite{pfetsch2019computational}. We consider permutation symmetries of binary programs~$\max\set{ \sprod{c}{x} | x \in \mathcal{X} }$, where~$c \in \ensuremath{\mathbb Z}^d$ and~$\mathcal{X} \subseteq \B{d}$. A \emph{permutation} is a bijection of~$[d] \coloneqq \set{1,\dots,d}$; the set of all permutations is~$\Sym{d}$. We assume that a permutation~$\pi \in \Sym{d}$ acts on~$x \in \B{d}$ by permuting its coordinates, i.e., $\pi(x) \coloneqq (x_{\pi^{-1}(1)},\dots,x_{\pi^{-1}(d)})$. A \emph{symmetry} of the binary program is a permutation~$\pi \in \Sym{d}$ that preserves the objective, i.e., $\sprod{c}{\pi(x)} = \sprod{c}{x}$, and feasibility, i.e., $x \in \mathcal{X}$ if and only if~$\pi(x) \in \mathcal{X}$. Note that the set of all symmetries forms a group under composition. We refer to this group as the \emph{symmetry group} of the binary program, denoted by~$G$. Since computing~$G$ is NP-hard~\cite{margot2010symmetry}, one usually only handles a subgroup of~$G$, which can either be detected automatically~\cite{pfetsch2019computational,salvagnin2005dominance} or is provided by an expert. To solve binary programs, B\&B generates subproblems~$\max\set{\sprod{c}{x} | x \in Q}$, where~$Q \subseteq \mathcal{X}$. Each subproblem is a binary program with symmetry group~$G_Q$. In general, $G_Q$ is different from~$G$ and neither is a subgroup of the other. Following~\cite{bendotti2020symmetry}, we call symmetries in~$G_Q$ \emph{sub-symmetries} of the initial binary program. 
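The coordinate action and the definition of a symmetry can be made concrete in a few lines; an illustrative Python sketch (our own notation: permutations are stored $0$-based as lists, and writing $x_i$ to position $\pi(i)$ realizes $\pi(x)_j = x_{\pi^{-1}(j)}$):

```python
def act(pi, x):
    """Apply the permutation pi to x: act(pi, x)[pi[i]] = x[i],
    equivalently act(pi, x)[j] = x[pi^{-1}(j)]."""
    y = [None] * len(x)
    for i, j in enumerate(pi):
        y[j] = x[i]
    return y

def preserves_objective(pi, c, points):
    """Check the objective-preservation part of the symmetry definition,
    <c, pi(x)> = <c, x>, on a given list of points."""
    return all(
        sum(ci * xi for ci, xi in zip(c, act(pi, x)))
        == sum(ci * xi for ci, xi in zip(c, x))
        for x in points
    )
```

For instance, swapping two coordinates with equal objective coefficients preserves the objective, while swapping coordinates with distinct coefficients in general does not.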
If sub-symmetries appear frequently during the solving process, it can be beneficial to handle them. But since computing (subgroups of)~$G_Q$ might be costly, providing knowledge about sub-symmetries via experts can substantially reduce the complexity of handling sub-symmetries. This, however, leads to a new challenge: how to efficiently provide expert knowledge to a binary programming solver. Recently, \cite{bendotti2020symmetry} suggested introducing, for each possible sub-symmetry, a simple symmetry-handling inequality (SHI) that is coupled with auxiliary variables that enable (resp.\ disable) the SHI if the sub-symmetry is active (resp.\ inactive) at a subproblem. This, however, might lead to a significant increase in the size of the problem formulation as all sub-symmetries need to be explicitly modeled. Moreover, simple SHIs might not lead to the strongest symmetry reductions. In this paper, we propose an alternative approach. Instead of using auxiliary variables to activate SHIs, we introduce a simple framework that allows one to activate arbitrary symmetry-handling techniques. Our so-called \emph{activation handlers} receive expert knowledge as input, i.e., rules to decide which sub-symmetries are present at a subproblem. These rules are then evaluated automatically and tell the solver how to handle the sub-symmetries. We thus mimic the process of how an expert would handle sub-symmetries and avoid reformulating the original problem. In particular, our framework is more flexible than auxiliary variables as not all activation rules can be compactly expressed by variables. In the remainder of this section, we provide basic notation and terminology as well as a brief overview of symmetry-handling methods. Sec.~\ref{sec:subsym} summarizes the state-of-the-art of handling sub-symmetries, which is complemented by a description of our activation handler framework in Sec.~\ref{sec:ah}.
Then, we illustrate for three classes of problems how activation handlers can be used to handle sub-symmetries (Sec.~\ref{sec:applications}), and we evaluate our activation handler framework on a broad set of instances (Sec.~\ref{sec:experiments}). Numerical results show that our novel framework substantially improves upon the state-of-the-art in handling sub-symmetries. \paragraph{Notation and Terminology} Let~$G$ be the symmetry group of a binary program with feasible region~$\mathcal{X}$. The \emph{orbit} of~$x$ is~$\mathrm{orb}_G(x) = \set{\pi(x) | \pi \in G}$ and contains all solutions equivalent to~$x$ w.r.t.~$G$. Note that the orbits partition~$\mathcal{X}$. To handle symmetries, it suffices to restrict~$\mathcal{X}$ to (usually lexicographically maximal) representatives of orbits. A vector~$y \in \ensuremath{\mathbb Z}^d$ is \emph{lexicographically greater than} $z \in \ensuremath{\mathbb Z}^d$, denoted~$y \succ z$, if there is~$i \in [d]$ such that $y_i > z_i$ and $y_j = z_j$~for all~$j < i$. We write~$y \succeq z$ when~$y \succ z$ or~$y = z$ holds. Then, a solution~$x \in \mathcal X$ is \emph{lexicographically maximal} in its orbit under~$G$ when~$x \succeq \pi(x)$ for all~$\pi \in G$. \paragraph{Related Literature} A great variety of symmetry-handling methods exist in the literature, including variable branching and fixing rules~\cite{margot2003exploiting,OstrowskiEtAl2011}, pruning rules~\cite{Margot2002,margot2003exploiting,ostrowski2009symmetry}, model reformulation techniques~\cite{FischettiLiberti2012}, and symmetry-handling constraints~\cite{Friedman2007,hojny2019polytopes,Hojny2020,kaibel2008packing,Liberti2008,Liberti2012,Liberti2012a,LibertiOstrowski2014,Salvagnin2018}. In the following, we provide details about some constraint-based techniques that we also use in our experiments. For details on other techniques, we refer to the overview~\cite{margot2010symmetry} and the computational survey~\cite{pfetsch2019computational}.
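The lexicographic notions above translate directly into code; an illustrative sketch (the explicit orbit enumeration is exponential in general and serves only for exposition):

```python
def lex_ge(y, z):
    """True iff y is lexicographically greater than or equal to z."""
    for yi, zi in zip(y, z):
        if yi != zi:
            return yi > zi
    return True  # all entries equal

def act(pi, x):
    """act(pi, x)[j] = x[pi^{-1}(j)], with pi stored 0-based as a list."""
    y = [None] * len(x)
    for i, j in enumerate(pi):
        y[j] = x[i]
    return y

def is_lex_max_in_orbit(x, group):
    """x is a representative iff x >= pi(x) for every pi in the
    (explicitly listed) group."""
    return all(lex_ge(x, act(pi, x)) for pi in group)
```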
The common ground of most symmetry-handling constraints is to enforce that only lexicographically maximal representatives of symmetric solutions are computed. For a single symmetry~$\pi \in G$, $x \succeq \pi(x)$ can be enforced in linear time by propagation and separation techniques~\cite{DoornmalenHojny2022,hojny2019polytopes}. For general groups, however, it is coNP-complete to decide whether a solution is lexicographically maximal in its orbit if~$G$ is given by a set of generators~\cite{babai1983canonical}. Attention has thus been focused on groups that arise frequently in practice. One such case assumes that the variables are organized in a matrix~$x = (x_{i,j})_{i \in [m],j \in [n]}$ and that the symmetries in~$G$ permute the columns of~$x$ arbitrarily. Such symmetries arise frequently in benchmark instances~\cite{pfetsch2019computational}, and in Sec.~\ref{sec:applications}, we illustrate some applications. There, we also discuss how orbitopes can be used to handle these symmetries. The \emph{full orbitope} is the convex hull of all binary matrices with lexicographically non-increasingly sorted columns. If one restricts to matrices all of whose rows have at most (resp.\ exactly) one~1-entry, the corresponding convex hull is called \emph{packing orbitope} (resp.\ \emph{partitioning orbitope}). The above-mentioned matrix symmetries can be handled by separating valid inequalities for orbitopes. For packing/partitioning orbitopes, a facet description can be separated in linear time~\cite{kaibel2008packing}; for full orbitopes, efficiently separable IP formulations are known~\cite{hojny2019polytopes}. Moreover, efficient propagation algorithms for both full and packing/partitioning orbitopes are known~\cite{bendotti2021orbitopal,KaibelEtAl2011}. The so-called \emph{orbitopal fixing} algorithms receive local variable bounds at a subproblem and derive variables that need to be fixed at a certain value to guarantee that a solution is contained in the orbitope.
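As a concrete illustration of the three orbitope variants, a short sketch of the corresponding membership tests for a binary matrix (illustrative only; it implements none of the separation or fixing algorithms cited above):

```python
def in_full_orbitope(x):
    """Columns of the binary matrix x (a list of rows) are
    lexicographically non-increasing from left to right."""
    cols = list(zip(*x))  # Python compares tuples lexicographically
    return all(cols[j] >= cols[j + 1] for j in range(len(cols) - 1))

def in_packing_orbitope(x):
    """Full-orbitope condition plus: every row has at most one 1-entry."""
    return in_full_orbitope(x) and all(sum(row) <= 1 for row in x)

def in_partitioning_orbitope(x):
    """Full-orbitope condition plus: every row has exactly one 1-entry."""
    return in_full_orbitope(x) and all(sum(row) == 1 for row in x)
```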
\section{Sub-symmetry in Integer Programming} \label{sec:subsym} In this section, we provide a detailed explanation of sub-symmetries. We discuss how sub-symmetries arise at a subproblem~$\max\set{c^\top x | x \in Q}$ (Sec.~\ref{sec:subsymbinary}) and how one can handle them by state-of-the-art techniques (Sec.~\ref{sec:handlesubsym}). We use the multiple knapsack problem as a running example for illustration purposes. \begin{example}\label{ex:mkp} The \emph{multiple knapsack problem}~(MKP) considers~$m$~items with associated profit~$p_i$ and weight~$w_i$, $i \in [m]$, as well as~$n$~knapsacks with capacity~$c_j$, $j \in [n]$. The objective is to assign each item to at most one knapsack such that the total weight of items assigned to knapsack~$j \in [n]$ does not exceed~$c_j$ and the total profit of assigned items is maximized. The MKP is NP-hard as it generalizes the NP-hard single-knapsack problem~\cite{GareyJohnson1979}. A standard IP formulation is given below as~\ref{eq:ip:mks}. There, variable~$y_{i,j}$ indicates whether item~$i \in [m]$ is packed in knapsack~$j \in [n]$, see~\cite{MARTELLO1987213}. Let~$\mathcal Y_\text{MKP}$ denote the set of all feasible solution matrices~$y$. This formulation exhibits multiple types of symmetries. From any feasible solution, permuting indices of knapsacks with equal capacity yields another feasible solution. For example, when all knapsacks have the same capacity~$c = c_j$ for all~$j \in [n]$, this symmetry corresponds to permutations of the columns of the solution matrix~$y$. Symmetries also arise from items with identical properties, i.e., identical weight and profit. In any feasible solution these items can be permuted, corresponding to permutations of the respective rows in the solution matrix~$y$. Sub-symmetries also occur in this formulation, arising from the following observation on partially-filled knapsacks. Consider two knapsacks~$j$ and $j'$, and an item with index~$i$.
Suppose that items~$\set{1,\dots,i-1}$ are placed such that the capacities remaining in knapsacks~$j$ and~$j'$ are equal. Then, the placement of the remaining items~$\set{i, \dots, m}$ can be permuted between the two knapsacks. We call sub-symmetries of this type \emph{capacity sub-symmetries}. Note that, by this definition, knapsacks~$j$ and~$j'$ with~$c_j \neq c_{j'}$ can also become sub-symmetric. \begin{subequations} \label{eq:ip:mks} \begin{alignat}{5} \text{maximize}\quad && \sum_{i=1}^m \sum_{j=1}^n p_i y_{i,j} \span \label{eq:ip:mks:start} \\ \text{subject to}\quad && \sum_{i=1}^m w_i y_{i,j} &\le c_j &&\quad\forall_{j \in [n]}, \label{eq:ip:mks:capacity} \\ && \sum_{j=1}^n y_{i,j} &\le 1 &&\quad\forall_{i \in [m]}, \label{eq:ip:mks:packing} \\ && y_{i,j} &\in \set{0, 1} &&\quad\forall_{i \in [m], j \in [n]}, \label{eq:ip:mks:end} \end{alignat} \end{subequations} \end{example} \subsection{Sub-symmetries in General Binary Programs}\label{sec:subsymbinary} In general, sub-symmetries can be defined for arbitrary collections of subproblems~$\mathbb S = \set{Q_s \subset \mathcal X | s \in [q]}$ for some~$q$. The sub-symmetries then correspond to permutations in~$G_{Q_s}$, $s \in [q]$, and are either automatically detected by an IP solver or are provided by a user. In the example of the MKP, the solution subsets for which the capacity sub-symmetries occur can be defined as \begin{equation} Q^i_{j,j'} = \left\{y \in \mathcal Y_\text{MKP} \;\middle|\; c_j - \sum_{k=1}^{i-1} w_k y_{k,j} = c_{j'} - \sum_{k=1}^{i-1} w_k y_{k,j'}\right\} \end{equation} for all pairs~$j,j' \in [n]$, $j < j'$, and all~$i \in [m]$. We denote by~$\mathbb S_\text{MKP}$ the collection of all these solution subsets. During the branch-and-bound process, a subproblem corresponding to a node in the B\&B~tree may belong to one or multiple solution subsets~$Q_s$. We then say that the sub-symmetries in~$G_{Q_s}$ become \emph{active}.
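Membership in a subset $Q^i_{j,j'}$ is simply an equality of remaining capacities; a small illustrative sketch (helper names are ours, indices $0$-based, with a fixed partial placement given as `place[k]` = knapsack of item $k$, or `None` if unassigned):

```python
def remaining_capacity(c, w, place, j, i):
    """Capacity left in knapsack j after placing items 0..i-1."""
    return c[j] - sum(w[k] for k in range(i) if place[k] == j)

def capacity_subsymmetric_pairs(c, w, place, i):
    """All pairs (j, j') with equal remaining capacity after items 0..i-1:
    the placement of items i..m-1 can then be swapped between j and j',
    i.e. the solution lies in Q^i_{j,j'}."""
    n = len(c)
    rem = [remaining_capacity(c, w, place, j, i) for j in range(n)]
    return [(j, jp) for j in range(n) for jp in range(j + 1, n)
            if rem[j] == rem[jp]]
```

Note how knapsacks with different capacities (e.g. 5 and 7) become sub-symmetric once an item of weight 2 is placed in the larger one.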
To exploit these sub-symmetries, the solver needs to detect when it is in a subproblem where the sub-symmetry is active. Then, the sub-symmetry must be handled, which can be done using methods similar to those for handling global symmetries. As observed by~\cite{bendotti2021orbitopal}, when handling the sub-symmetries of all solution subsets in~$\mathbb S$ simultaneously, one needs to be slightly more careful than for global symmetries. The general idea remains the same: we want to disregard solutions such that for each orbit at least one representative remains. For a solution subset~$Q_s$, let~$\sigma^s_k$ for~$k \in [o_s]$ denote the orbits defined by $G_{Q_s}$. Furthermore, let~$O = \set{\sigma_k^s | s \in [q],\, k \in [o_s]}$ denote the family of all orbits of the considered solution subsets~$Q_s$. Notice that~$O$ does not necessarily partition the solutions of the solution subsets, as the orbits of different solution subsets may overlap. As a consequence, we need to be more careful in choosing the representative solution~$r(\sigma)$ for each orbit~$\sigma \in O$, as for a given orbit~$\sigma$ the set~$\sigma \setminus \set{r(\sigma)}$ may contain a representative of another orbit~$\sigma' \in O$. Bendotti et al.~\cite{bendotti2021orbitopal} propose to choose \emph{orbit-compatible representatives} that ensure that there always remains at least one solution in every orbit, when restricting to generalized representatives. If one only handles symmetries arising from permutations of columns in the matrix of binary variables, the following structure in the set of sub-symmetries~$\mathbb S$ ensures that generalized representatives can be defined easily~\cite{bendotti2021orbitopal}. Let~$Q \in \mathbb S$. For a solution matrix~$x \in Q$, let~$x(R, C)$ denote the submatrix of~$x$ obtained by restricting to rows~$R \subseteq [m]$ and columns~$C \subseteq [n]$.
The symmetry group~$G_Q$ is the \emph{sub-symmetric group} with respect to~$(R, C)$ if it contains all the permutations of the columns of~$x(R, C)$. If~$G_Q$ is the sub-symmetric group, then~$Q$ is called \emph{sub-symmetric} with respect to $(R, C)$. Now, let~$\mathbb S$ be a set of solution subsets such that every~$Q_s \in \mathbb S$ is sub-symmetric with respect to~$(R_s, C_s)$. For every orbit~$\sigma_s^i$ of~$G_{Q_s}$, choose the representative~$x_s^i \in \sigma_s^i$ such that the submatrix~$x_s^i(R_s, C_s)$ is lexicographically maximal in its orbit, i.e., its columns are lexicographically non-increasing. Then, these representatives are orbit-compatible~\cite{bendotti2021orbitopal}. For the MKP example, the capacity sub-symmetries arise in the solution subsets~$\mathbb S_\text{MKP}$. A solution subset~$Q^i_{j,j'} \in \mathbb S_\text{MKP}$ is sub-symmetric with respect to~$(\set{i, \dots, m}, \set{j, j'})$. Whenever the sub-symmetry is active, one can thus handle it by enforcing that the columns of the submatrix~$y(\set{i, \dots, m}, \set{j, j'})$ are lexicographically non-increasing. \subsection{Handling Sub-symmetries}\label{sec:handlesubsym} An approach to handle a sub-symmetry is to add sub-symmetry-handling inequalities to the model~\cite{bendotti2020symmetry}. We briefly describe the framework of~\cite{bendotti2020symmetry}. Let~$Q_s$ be a solution subset that is sub-symmetric with respect to~$(R_s, C_s)$. We introduce an integer variable~$z_s$ such that~$z_s = 0$ if and only if~$x \in Q_s$ and $z_s \ge 1$ otherwise. Let~$c_{j},c_{j+1}$ be two consecutive columns in the submatrix~$x(R_s, C_s)$. Then, the \emph{partial sub-symmetry-handling inequality} is \begin{equation} \label{eq:partsubsym} x_{r_1,c_{j+1}} \le z_s + x_{r_1,c_j}\qquad \text{where $r_1 = \min R_s$.} \end{equation} If~$z_s = 0$, \eqref{eq:partsubsym} ensures that, for all pairs of consecutive columns, the first row of the submatrix~$x(R_s, C_s)$ is lexicographically non-increasing.
Otherwise, if~$z_s \ge 1$, the inequality is trivially satisfied. In the MKP~example, the auxiliary variable~$z$ for~$Q^i_{j,j'} \in \mathbb S_\text{MKP}$ can be expressed as \begin{equation} \label{eq:actMKP} z = \left|c_j - c_{j'} - \sum_{k=1}^{i-1} w_k (y_{k,j} - y_{k,j'})\right|. \end{equation} To express~$z$ in the IP formulation, we need to linearize~\eqref{eq:actMKP}. Therefore, we write~$\alpha = c_j - c_{j'} - \sum_{k=1}^{i-1} w_k (y_{k,j} - y_{k,j'})$ for brevity, and introduce non-negative variables~$\alpha^+,\alpha^- \ge 0$, binary variables~$z^+,z^- \in \set{0,1}$, and constraints \begin{equation} \alpha^+ \le Mz^+,\, \alpha^- \le M z^-,\, \alpha^+ + \alpha^- \ge z^+ + z^-,\, \alpha = \alpha^+ - \alpha^-,\, z^+ + z^- \le 1. \end{equation} Here, $M$ is a sufficiently large constant, e.g.,~$\max\{c_j, c_{j'}\} + \sum_{k=1}^{i-1} w_k$. Then, $z = z^+ + z^-$ indicates whether the sub-symmetry is active. Note that the constraints indeed ensure that~$|\alpha| = \alpha^+ + \alpha^- = 0$ if and only if~$z = 0$. The sub-symmetry-handling inequality thus is \begin{equation} y_{i,j'} \le z + y_{i,j}. \end{equation} Notice that the Inequalities~\eqref{eq:partsubsym} are not sufficient to fully handle the symmetry. Indeed, they only ensure that the first row of the submatrix is lexicographically ordered. If the entries on the first row are equal, subsequent rows need to be considered until a \emph{tie-break} row is found. To this end, the set~$\mathbb S$ is extended to~$\tilde{\mathbb S}$ with additional \emph{tie-break subsets} for which Inequalities~\eqref{eq:partsubsym} break the symmetry on rows where the previous rows have equal entries. 
The set~$\tilde{\mathbb S}$ can be defined as \begin{equation} \tilde{\mathbb S} = \set{\tilde Q_s(i, j) | \text{$s \in [q]$, $i \in \set{1, \dots, {|R_s|}}$, $j \in \set{2, \dots, {|C_s|}}$}} \end{equation} where \begin{equation} \tilde Q_s(i, j) = \set{x \in Q_s | \text{$x_{r,c^s_{j-1}} = x_{r,c^s_j}$ for all $r \in \set{r^s_1, \dots, r^s_{i-1}}$}}. \end{equation} For our MKP example, one can verify that the set~$\mathbb S_\text{MKP}$ already includes the tie-break sets for every sub-symmetry. Therefore, it is not necessary to extend~$\mathbb S_\text{MKP}$ with additional tie-break subsets. \section{Activation Handler} \label{sec:ah} The existing method of handling sub-symmetries with inequalities has a number of limitations. For every sub-symmetry that we want to handle, it is necessary to add explicit SHIs to the formulation, leading to a blow-up of the IP\@. The size of the formulation increases even more with the addition of tie-break sets, and for problems where additional variables or constraints are necessary to express the auxiliary~$z$-variable in the formulation. Additionally, the inequalities are rather weak for symmetry handling. In particular, the variable-based approach is not immediately able to activate more sophisticated symmetry-handling methods such as orbitopal fixing. To circumvent these issues, we introduce a new approach for handling sub-symmetries. In our framework, we decouple the activation of sub-symmetries from the explicit IP~formulation itself, and instead adopt a more flexible approach. The modeler of the problem can provide to the IP~solver a set of rules that define when a sub-symmetry becomes active. We call this set of rules the \emph{activation handler}. The activation handler checks for a node~$a$ in the B\&B tree whether the rules hold. If that is the case, the activation handler activates a symmetry-handling method in the solver to handle the sub-symmetry.
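The decoupling just described fits in a few lines; a minimal sketch of the interface (class and method names are ours, not those of any particular solver):

```python
class ActivationHandler:
    """Couples a user-supplied activation rule with a symmetry-handling
    callback, mimicking how an expert would react at a B&B node."""

    def __init__(self, rule, handle):
        self.rule = rule      # node state -> list of active sub-symmetries
        self.handle = handle  # callback applying a symmetry-handling method

    def check_node(self, node):
        """Evaluate the rule at a node and handle every reported
        sub-symmetry; returns the list of active sub-symmetries."""
        active = self.rule(node)
        for subsym in active:
            self.handle(subsym, node)
        return active
```

In a real solver, `rule` would inspect the local variable bounds at the node, and `handle` would invoke, e.g., orbitopal fixing on the reported submatrix.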
In our framework, both the activation and the handling of sub-symmetries are flexible. The modeler can describe the rules for activation with any custom implementation that uses information from the current state of the solver, such as variable fixings at the current node of the B\&B tree, to determine whether a subproblem with sub-symmetry is active. Furthermore, the activation handler can be used with any symmetry-handling method from the literature, and is not restricted to inequality-based approaches. Neither activation nor symmetry handling needs to be encoded in the formulation directly, keeping the IP compact. As the rules-based approach of activation handlers is rather generic, we will illustrate the concept for several problems in Sec.~\ref{sec:applications}. First, we describe sub-symmetries that arise in certain applications and compare how the SHI-based approach and activation handlers can be used to handle them. Afterwards, we compare the numerical performance of the different approaches in Sec.~\ref{sec:experiments}. Our implementation of the activation handler framework is publicly available\footnote{See \url{https://github.com/stenwessel/activation-handler}.}, as well as the setup of our experiments. The generic framework can easily be adapted by practitioners for use in other applications. \section{Applications} \label{sec:applications} In this section, we discuss how sub-symmetries arise for three types of problems: the multiple knapsack problem, the unit commitment problem, and the maximum $k$-colorable subgraph problem. We describe how SHIs can be applied to handle sub-symmetries, as well as the activation handler framework. The activation handler uses information from the solver about variable fixings at a node of the B\&B tree. To this end, we define for a node~$a$ of the B\&B tree the sets~$F^a_0$ and~$F^a_1$, which denote the variables that are fixed to~$0$ or~$1$ at node~$a$, respectively.
\subsection{Multiple Knapsack Problem} We introduced the MKP, its symmetries, and the capacity sub-symmetries in Example~\ref{ex:mkp}. Notice that the number of solution subsets in~$\mathbb S_\text{MKP}$ is rather large, as we consider all pairs of knapsacks. When handling the symmetry with inequalities, considering all solution subsets is intractable, as potentially every subset of knapsacks might define a sub-symmetry. That is, exponentially many SHIs as well as auxiliary variables and constraints need to be added to the problem. Therefore, we only consider SHIs for \emph{consecutive} pairs of knapsacks, i.e.,~$j' = j + 1$. We hence add~$\ensuremath{\mathcal O}(mn)$~SHIs to the formulation, with four auxiliary variables and five auxiliary constraints for each SHI. \paragraph{Sub-symmetry handling with activation handler} When handling sub-symmetries via activation handlers, we are more flexible in implementing the activation rules. Instead of enumerating every solution subset~$Q^i_{j_1,j_2}$ separately and checking if the sub-symmetry is active, we can use a single activation handler in the model. The activation handler returns all submatrices of~$y$ that contain active sub-symmetries at a given node of the B\&B tree. The activation handler identifies whether the placement of items~$\set{1, \dots, i-1}$ is fixed at node~$a$, according to the variable fixings~$F^a_0$ and~$F^a_1$. For every item~$i$ for which the previous holds, the activation handler checks whether there are knapsacks of equal remaining capacity, after placement of items~$\set{1, \dots, i-1}$. Suppose that for item~$i$ the knapsacks~$j_{k_1}, \dots, j_{k_r}$ have equal remaining capacity. Then, the activation handler reports the submatrix~$y(\set{i, \dots, m}, \set{j_{k_1}, \dots, j_{k_r}})$, for which the capacity sub-symmetry is now active at node~$a$.
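The scan just described can be sketched as follows (illustrative; indices $0$-based, with $F^a_0$ and $F^a_1$ passed as sets of fixed $(i,j)$ pairs):

```python
def active_submatrices(c, w, F0, F1, m, n):
    """Scan the rows of y top-down: while the placement of items 0..i-1
    is fully decided by the local fixings, report (i, columns) for every
    group of knapsacks with equal remaining capacity, i.e. the capacity
    sub-symmetries active at this node."""
    rem = list(c)  # remaining capacities, updated row by row
    found = []
    for i in range(m):
        groups = {}
        for j in range(n):
            groups.setdefault(rem[j], []).append(j)
        found.extend((i, cols) for cols in groups.values() if len(cols) > 1)
        # stop once row i is not fully decided by the fixings at this node
        if any((i, j) not in F0 and (i, j) not in F1 for j in range(n)):
            break
        for j in range(n):
            if (i, j) in F1:
                rem[j] -= w[i]
    return found
```

Each reported pair $(i, \text{cols})$ corresponds to a submatrix $y(\set{i,\dots,m}, \text{cols})$, which can then be passed to orbitopal fixing.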
In this way, finding all active capacity sub-symmetries is linear in the number of variables in the matrix~$y$: we can simply perform a linear scan over the rows of~$y$ and check the variable fixings. Note that checking whether a variable is fixed can be done in constant time, as this information is available from the solver. The activated submatrices are then passed to a high-level symmetry-handling constraint in the solver. Several methods can be used to handle symmetry in the submatrix. In our implementation, we use orbitopal fixing for packing orbitopes~\cite{KaibelEtAl2011} to handle the active sub-symmetries. However, when all knapsacks are in the activated submatrix (i.e., all columns), the sub-symmetry is instead handled with orbitopal fixing for the stronger partitioning orbitope. \subsection{Unit Commitment Problem} Another problem in which we can handle sub-symmetries is the min-up/min-down unit commitment problem~(MUCP), as introduced in~\cite{bendotti2020symmetry}. We are given a set of production units~$\mathcal U$ with~$|\mathcal U| = n$, and a discrete time horizon~$\mathcal T = \set{1, \dots, T}$ for which a certain non-negative demand~$D_t$ needs to be satisfied at every time~$t \in \mathcal T$. Every production unit~$j \in \mathcal U$ can be either \emph{up} or \emph{down} at every~$t \in \mathcal T$. When a unit is up, its production is between a minimum and a maximum production capacity~$P_\text{min}^j$ and~$P_\text{max}^j$, and it must remain up for at least~$L^j$~time steps. When a unit is down, its production is zero and it must remain down for at least~$\ell^j$~time steps. We furthermore have for every unit~$j$ a start-up cost~$c_0^j$, a fixed cost~$c_f^j$ for every time step the unit is up, and a production cost~$c_p^j$ proportional to its production. The goal is to find a \emph{production schedule} satisfying the production demand at every time step and the min-up and min-down constraints, while minimizing the total cost.
Let the variables~$x_{t,j} \in \set{0, 1}$ indicate whether unit~$j \in \mathcal U$ is up at time~$t \in \mathcal T$, and~$u_{t,j} \in \set{0,1}$ whether unit~$j$ starts up at time~$t$. We omit further details of the IP~formulation we use for this problem, as they are not relevant for symmetry handling, and refer to~\cite{bendotti2020symmetry} for details. Let~$\mathcal X_\text{MUCP}$ denote the set of matrices~$(x_{t,j})$ that are feasible. Notice that the solution matrix~$x$ completely characterizes a solution, as the corresponding matrix~$u$ can be derived completely from~$x$. \paragraph{Symmetry in the MUCP} Symmetries are present globally in the MUCP when production units have identical properties, i.e., units where all of the properties~$(P_\text{min}, P_\text{max}, L, \ell, c_0, c_f, c_p)$ are equal. To make this more explicit, we partition the production units into $H$~\emph{types}, where a type~$h \in \set{1, \dots, H}$ consists of~$n_h$ identical units that we denote by~$\mathcal U_h = \set{j^h_1, \dots, j^h_{n_h}}$. For a type~$h$, we slightly abuse notation to denote its properties as $(P_\text{min}^h, P_\text{max}^h, L^h, \ell^h, c_0^h, c_f^h, c_p^h)$. We can then also partition the matrix variable~$x$ into~$H$~matrices~$x^h = (x_{t,j})_{t \in \mathcal T, j \in \mathcal U_h}$ for every type of production unit. The production units within each type are identical, and we can hence permute their production schedules. This corresponds to permuting the columns of~$x^h$. One possible way of breaking the symmetry is to restrict $x^h$ to the full orbitope for binary matrices of size~$T \times n_h$, i.e., by imposing that the columns of~$x^h$ are lexicographically non-increasing. The MUCP also exhibits sub-symmetries, as introduced in~\cite{bendotti2020symmetry}. Call a production unit~$j \in \mathcal U$ \emph{ready to start up} at some time~$t \in \mathcal T$ if the unit has been down continuously for at least the minimum downtime~$\ell^j$. 
In other words, when~$x_{t',j} = 0$ for all~$t' = t-\ell^j, \dots, t-1$ and $t \ge \ell^j + 1$. Now, suppose there are at least two units~$j_1, \dots, j_k \in \mathcal U_h$ of type~$h$ that are all ready to start up at some time~$t$. Then, their production schedules can be permuted from time~$t$ onwards, regardless of their schedule up to time~$t$. This thus defines a sub-symmetry where the columns of the submatrix~$x(\set{t, \dots, T}, \set{j_1, \dots, j_k})$ can be permuted. Analogously, one can identify sub-symmetries for two units ready to shut down at some time~$t \in \mathcal T$. These sub-symmetries are referred to as the \emph{start-up} and \emph{shut-down sub-symmetries}, respectively. \paragraph{Sub-symmetry-handling inequalities} Following the approach in~\cite{bendotti2020symmetry}, the start-up sub-symmetries can be handled with inequalities as follows. The handling of shut-down sub-symmetries is analogous, and we omit the details here. Let~$j_k^h, j^h_{k+1}$ be a pair of consecutive units of the same type~$h$. For brevity, let~$j = j_k^h$ and~$j' = j^h_{k+1}$. Then, the solution subsets \begin{equation} \check Q_{k,h}^t = \set{x \in \mathcal X_\text{MUCP} | \text{$x_{t',j} = x_{t',j'} = 0$ for all $t' = t - \ell^h, \dots, t-1$}} \end{equation} for all~$t \ge \ell^h + 1$, define when the start-up sub-symmetries occur. Note that no additional tie-break sets are necessary. For~$\check Q_{k,h}^t$, the corresponding auxiliary variable can be expressed as~$z = \sum_{t'=t-\ell^h}^{t-1} [x_{t',j} + x_{t',j'}]$, leading to the SHI \begin{equation} \label{eq:shi:mucp} x_{t,j'} \le z + x_{t,j}. \end{equation} Note that the~$z$-variable has a linear description in~$x$, and hence it is not necessary to add~$z$ as a new variable to the formulation. Instead, we can simply replace~$z$ directly with its linear expression in the SHI~\eqref{eq:shi:mucp}.
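Substituting this linear expression for~$z$ into~\eqref{eq:shi:mucp} yields the SHI written explicitly in the original variables:
\begin{equation*}
x_{t,j'} \le x_{t,j} + \sum_{t'=t-\ell^h}^{t-1} \bigl[ x_{t',j} + x_{t',j'} \bigr].
\end{equation*}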
The SHIs for the start-up sub-symmetries can be slightly strengthened to \begin{equation} \label{eq:strengthenedSHI} u_{t,j'} \le x_{t-\ell^h,j} + x_{t,j} + \sum_{t'=t-\ell^h + 1}^{t-1} u_{t',j}, \end{equation} leading to a stronger LP~relaxation. A similar inequality can be obtained for the shut-down sub-symmetries; see~\cite{bendotti2020symmetry} for the derivation of the strengthened SHIs. \paragraph{Sub-symmetry-handling with activation handler} Handling sub-symmetry with an activation handler is similar to the approach for the MKP\@. We add a single activation handler to the model that identifies all submatrices corresponding to active sub-symmetries in the following manner. For the sake of presentation, we assume that all production units~$\mathcal U$ have the same type. In the more general case where we have multiple types of production units, we can simply apply our method to the unit types separately. Let~$a$ be a node of the B\&B tree. Define for every~$t \in \set{\ell + 1, \dots, T}$, \begin{equation} \check S^a_t = \set{j \in \mathcal U | \text{$x_{t',j} \in F^a_0$ for all $t' \in \set{t - \ell, \dots, t-1}$}}. \end{equation} That is, $\check S^a_t$ contains the production units that are \emph{fixed} to be ready to start up at time~$t$ at node~$a$. For every subset~$\check S^a_{t}$ with $|\check S^a_{t}| \ge 2$, the corresponding start-up sub-symmetry becomes active. Hence, the symmetry corresponds to column permutations of the submatrix~$x(\set{t, \dots, T},\check S^a_{t})$. We then use orbitopal fixing for full orbitopes to handle the sub-symmetry in the activated submatrix. Notice that we can find the units that are ready to start up for every time~$t \in \mathcal T$ in~$\ensuremath{\mathcal O}(nT)$~time, by iterating over the time horizon with a dynamic-programming approach. The shut-down sub-symmetries are activated and handled with an analogous approach.
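The $\ensuremath{\mathcal O}(nT)$ computation of the sets~$\check S^a_t$ can be sketched as follows (illustrative only: the 0-based time indexing, the single-type assumption, and the representation of~$F^a_0$ as a set of index pairs~$(t,j)$ are ours). A running counter of consecutive fixed-to-zero time steps replaces an explicit look-back window:

```python
# Sketch of the MUCP start-up activation rule for a single unit type
# with minimum down time ell.  fixed0 is the set of pairs (t, j) with
# x_{t,j} fixed to 0 at the current node; times are 0-based.

def mucp_startup_activate(T, units, ell, fixed0):
    """Return, for every time t, the units fixed to be ready to start
    up at t, keeping only sets of size at least two (active ones)."""
    ready = {t: [] for t in range(T)}
    for j in units:
        run = 0  # consecutive time steps fixed to 0 ending at t-1
        for t in range(T):
            if run >= ell:
                ready[t].append(j)
            run = run + 1 if (t, j) in fixed0 else 0
    return {t: s for t, s in ready.items() if len(s) >= 2}
```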
\subsection{Graph Coloring Problem} In this section, we apply the sub-symmetry-handling framework to a variant of the graph coloring problem. Consider an undirected graph~$G = (V, E)$ with~$|V| = n$ and let~$k$ be a positive integer. A \emph{$k$-coloring} of the graph is a function~$c\colon V \to [k]$ that assigns colors to the vertices, such that for any pair~$\set{i,j} \in E$ it holds that~$c(i) \neq c(j)$. In the \emph{max-$k$-colorable subgraph problem}~(MKCS), we want to find a subset~$V' \subseteq V$ of vertices of maximum size, such that~$V'$ induces a subgraph that admits a~$k$-coloring. Let the binary variables~$x_{i,r} \in \set{0,1}$ indicate whether vertex~$i \in V$ is colored with color~$r \in [k]$. We then have the standard IP~formulation~\cite{JanuschowskiPfetsch2011} \begin{subequations} \label{eq:ip:mkcs} \begin{alignat}{5} \text{maximize}\quad && \sum_{i\in V}\sum_{r=1}^k x_{i,r} \span \label{eq:ip:mkcs:obj} \\ \text{subject to}\quad && x_{i,r} + x_{j,r} &\le 1 &&\qquad \forall_{\set{i,j} \in E},\ \forall_{r \in [k]}, \label{eq:ip:mkcs:edge} \\ && \sum_{r=1}^k x_{i,r} &\le 1 &&\qquad\forall_{i \in V}, \label{eq:ip:mkcs:vertex}\\ && x_{i,r} &\in \set{0, 1} &&\qquad\forall_{i \in V},\ \forall_{r \in [k]}. \label{eq:ip:mkcs:x} \end{alignat} \end{subequations} Let~$\mathcal{X}_\text{MKCS}$ denote the set of all feasible solution matrices~$x$. The MKCS~problem is NP-hard for any fixed~$k$ with~$1 \le k < n$~\cite{JanuschowskiPfetsch2011}. The type of symmetry we consider in this problem is the equivalence between the colors. Indeed, in any feasible solution, the color indices can be permuted to obtain a different, equivalent, feasible solution. This corresponds to the permutation of the columns in the solution matrix~$(x_{i,r})_{i\in V,r \in [k]}$. \paragraph{Sub-symmetry in the MKCS problem} The formulation~\eqref{eq:ip:mkcs} also exhibits sub-symmetries, cf.~\cite{bendotti2020symmetry}.
Consider two distinct colors~$c_1,c_2 \in [k]$ and a subset of the vertices~$R \subseteq V$ such that the neighbors of~$R$, denoted by~$N(R)$, are colored with neither~$c_1$ nor~$c_2$. Then, the colors~$c_1$ and~$c_2$ can be permuted within~$R$. Hence, the sub-symmetry occurs within the solution subset \begin{equation} Q^R_{c_1,c_2} = \set{x \in \mathcal{X}_\text{MKCS} | \text{$x_{i,c_1} = x_{i,c_2} = 0$ for all $i \in N(R)$}} \end{equation} with the sub-symmetry corresponding to the permutations of the columns in the submatrix~$x(R,\set{c_1,c_2})$. Notice that there are exponentially many such solution subsets, because of the exponential number of subsets of vertices. Handling all such sub-symmetries with inequalities is therefore infeasible in practice, because of the blow-up in the number of auxiliary constraints. The approach adopted in~\cite{bendotti2020symmetry} is to only consider some subsets of vertices to handle sub-symmetries, leading to a quadratic number of SHIs. Within our activation handler framework, we can instead approach this differently and dynamically find relevant vertex subsets at the nodes of the B\&B tree. \paragraph{Sub-symmetry-handling with activation handler} Let~$a$ be a node of the B\&B tree and fix a pair of distinct colors~$c_1,c_2$. Let~$S \subseteq V$ be the set of vertices~$i$ with~$x_{i,c_1},x_{i,c_2} \in F^a_0$. Let now~$R \coloneqq V \setminus S$, and notice that indeed~$N(R)$ only contains vertices that are fixed to be colored by neither~$c_1$ nor~$c_2$. In particular, the connected components~$R_1, \dots, R_\ell$ of the subgraph induced by~$R$ all satisfy this property. Hence, we can handle sub-symmetry in the submatrices~$x(R_j,\set{c_1,c_2})$ for all connected components~$R_j$. An activation handler can implement this activation rule via an iterated depth-first-search approach that efficiently identifies the connected components.
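This iterated depth-first search can be sketched as follows for one color pair (illustrative only: the adjacency-map data layout and the representation of~$F^a_0$ as a set of index pairs~$(i,c)$ are our assumptions):

```python
# Sketch of the MKCS activation rule for one color pair (c1, c2).
# adj maps each vertex to its list of neighbours; fixed0 is the set
# of pairs (i, c) with x_{i,c} fixed to 0 at the current node.

def mkcs_activate(vertices, adj, c1, c2, fixed0):
    """Return the connected components of R = V minus S, where S holds
    the vertices with both colors fixed to 0; columns {c1, c2} of each
    non-trivial component give an activated submatrix."""
    R = {i for i in vertices
         if not ((i, c1) in fixed0 and (i, c2) in fixed0)}
    seen, components = set(), []
    for start in R:                      # iterated depth-first search
        if start in seen:
            continue
        stack, comp = [start], []
        seen.add(start)
        while stack:
            v = stack.pop()
            comp.append(v)
            for w in adj[v]:
                if w in R and w not in seen:
                    seen.add(w)
                    stack.append(w)
        if len(comp) >= 2:               # non-trivial components only
            components.append(sorted(comp))
    return components
```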
For non-trivial connected components, the sub-symmetry is handled by orbitopal fixing for packing orbitopes on the identified submatrix. In particular, in contrast to the SHI-based approach, the activation handler can handle \emph{all} possible color-pair-based sub-symmetries without increasing the size of the IP formulation. \section{Experimental Results}\label{sec:experiments} In this section, we compare the sub-symmetry-handling methods using experiments on instances of the problems introduced above. \subsection{Instances} For the MKP, we generate random instances in the four standard classes of problems from the literature on MKP~\cite{MARTELLO1987213,PISINGER1999528,fukunaga2011branch}: \begin{itemize} \item \emph{uncorrelated}, where the weights~$w_i$ and profits~$p_i$ are uniformly distributed integers in the closed range~$[\ell, L]$, \item \emph{weakly correlated}, where the weights~$w_i$ are uniformly distributed integers in the closed range~$[\ell, L]$ and~$p_i$ are uniformly distributed integers in the closed range~$[\max\set{1, w_i - (L - \ell)/10}, w_i + (L - \ell)/10]$, \item \emph{strongly correlated}, where the weights~$w_i$ are uniformly distributed integers in the closed range~$[\ell, L]$ and~$p_i = w_i + (L - \ell)/10$, \item \emph{multiple subset-sum}, where the weights~$w_i$ are uniformly distributed integers in the closed range~$[\ell, L]$ and~$p_i = w_i$. \end{itemize} We generate our instances with~$\ell = 10$, $L = 1000$. The capacity of every knapsack is set to~$c_j = \lfloor \frac 1 2 \sum_{i=1}^m w_i/n \rfloor$, such that the total capacity is approximately half of the total weight of all items. To introduce symmetry in the problem, we generate multiple items with the same weight with an approach similar to Bendotti et al.~\cite{bendotti2020symmetry}. 
Generated weights are duplicated~$d$ times, where~$d$ is a uniformly random integer in~$[1, fm]$, with~$f \in \set{\frac 1 2, \frac 1 3, \frac 1 4, \frac 1 8}$ the \emph{symmetry factor}. Larger values of~$f$ generate larger groups of items with equal weight, leading to a more symmetric instance. For generating the profit values for the items within an equal-weight group, we consider two types of instances: \begin{itemize} \item \emph{equal profit}, where every item in the equal-weight group also has equal profit, generated according to the item class above, \item \emph{free profit}, where every item in the equal-weight group has an independently generated profit value, according to the item class above. \end{itemize} Note that for the \emph{strongly correlated} and \emph{multiple subset-sum} classes, we only generate instances for \emph{equal profit}, as both types of instances are equivalent. We generate groups of equal-weight items until we have generated~$m$ items. For every pair of~$(m, n) \in \set{(48, 12), (60, 10), (60, 30), (75, 15), (100, 10)}$, we generate~$20$ instances for every combination of item class, symmetry factor, and duplication type, yielding a total of $2400$~instances. For the MUCP, we use the same generated instances that are used for the experimental evaluation in~\cite{bendotti2020symmetry}, and we are grateful to the authors for providing them. For the MKCS problem, we use the standard DIMACS Color02 set of graph coloring instances\footnote{Obtained from~\url{https://mat.tepper.cmu.edu/COLOR02/}.}, with~$k \in \set{5,6,8,10}$. \subsection{Experimental setup} All experiments are run with the development version of SCIP~7.0.3 (Git hash \texttt{3671128c}) with the SoPlex LP solver (Release~600)~\cite{scip}, on a single core of an Intel Xeon Platinum~8260~CPU running at~$2.4$~GHz, with a memory limit of $10$~GB of RAM\@ and a solving-time limit of~$3600$~seconds for MKP and MUCP, and $7200$~seconds for the MKCS problem.
The IP model is constructed in Python~3.10 using the PySCIPOpt interface that exposes the SCIP API in Python. The activation handler is implemented in SCIP as a new plugin, and can be added to the model with the PySCIPOpt interface. Every instance is solved with five different settings, in order to compare the performance of the different symmetry-handling methods: \begin{itemize} \item No-Sym: Formulation with SCIP internal symmetry handling turned off. \item Default: Formulation with SCIP default parameters. \item Orbitope: Formulation with orbitope constraints for (global) symmetry handling. \item Ineq: Formulation with SHIs. \item Act: Formulation with orbitope constraints for (global) symmetry handling and activation handler for sub-symmetries. \end{itemize} For the MKP, all models except for No-Sym include orbitope constraints for handling symmetry between identical items. In the orbitopes for symmetries between identical items and symmetries between identical knapsacks, we use a compatible ordering of the variables such that orbitopal fixing for all orbitopes can be performed simultaneously, without introducing any conflicts. For the MUCP, we use the strengthened SHIs~\eqref{eq:strengthenedSHI} in the Ineq model. For MKCS, we consider two variants of the Act model: Act-AllPairs and Act-Consec, which activate the sub-symmetries for all pairs of colors or only for pairs of consecutive colors, respectively. We do not consider a formulation with SHIs for MKCS, to avoid making arbitrary choices for which vertex subsets to consider. The orbitope constraints in SCIP use orbitopal fixing, as discussed in Sec.~\ref{sec:introduction}. \subsection{Results} The results are summarized in Tables~\ref{tab:mkp}, \ref{tab:mucp}, \ref{tab:mkcs:small}, and \ref{tab:mkcs:large} for the MKP, MUCP, MKCS with~$k=5,6$, and MKCS with~$k=8,10$, respectively. In the reported results, we group the instances into classes based on the solving time of the tested models.
We use the notation~$[a, b)$ to denote the set of instances for which all models have a solving time of at least~$a$ and below~$b$~seconds. We exclude instances from our test set where all models reach the time limit. For every instance class, we report the number of instances in the class~(\#). For every model, the number of instances solved to optimality (Opt) is reported, as well as the mean solving time, in seconds, of all instances. The mean solving time is the shifted geometric mean, with a shift of~$1$~second. For instances that are not solved to optimality within the time limit, the solving time is set to~$3600$~seconds for MKP and MUCP, and $7200$~seconds for the MKCS~problem. \begin{table}[p] \centering \caption{Summarized numerical results for MKP\@. For $532$~instances, all models reach the time limit and are excluded from the table.} \label{tab:mkp} \scriptsize \begin{tabular}{l *{11}{@{\hskip 1em}r}} \toprule && \multicolumn{2}{@{\hskip .5em}c@{\hskip 1em}}{No-Sym} & \multicolumn{2}{@{\hskip .5em}c@{\hskip 1em}}{Default} & \multicolumn{2}{@{\hskip .5em}c@{\hskip 1em}}{Ineq} & \multicolumn{2}{@{\hskip .5em}c@{\hskip 1em}}{Orbitope} & \multicolumn{2}{@{\hskip .5em}c@{\hskip 1em}}{Act} \\ \cmidrule(r){3-4} \cmidrule(r){5-6} \cmidrule(r){7-8} \cmidrule(r){9-10} \cmidrule{11-12} Instances & {$\#$} & {Opt} & {Time} & {Opt} & {Time} & {Opt} & {Time} & {Opt} & {Time} & {Opt} & {Time} \\ \midrule All & 1868 & 1166 & 52.7 & 1387 & 35.3 & 625 & 1122.9 & 1636 & 17.6 & 1819 & 13.9 \\ $[0,100)$ & 277 & 277 & 0.0 & 277 & 0.2 & 277 & 14.4 & 277 & 0.1 & 277 & 0.2 \\ $[100,1800)$ & 269 & 269 & 0.4 & 269 & 0.5 & 269 & 370.5 & 269 & 0.3 & 269 & 0.4 \\ $[1800,\infty)$ & 1322 & 620 & 256.8 & 841 & 139.1 & 79 & 3460.2 & 1090 & 56.0 & 1273 & 40.3 \\ \bottomrule \end{tabular} \end{table} \begin{table}[p] \centering \caption{Summarized numerical results for MUCP\@. 
For $12$~instances, all models reach the time limit and are excluded from the table.} \label{tab:mucp} \scriptsize \begin{tabular}{l *{9}{@{\hskip 1em}r}} \toprule && \multicolumn{2}{@{\hskip .5em}c@{\hskip 1em}}{No-Sym} & \multicolumn{2}{@{\hskip .5em}c@{\hskip 1em}}{Default} & \multicolumn{2}{@{\hskip .5em}c@{\hskip 1em}}{Ineq} & \multicolumn{2}{@{\hskip .5em}c@{\hskip 1em}}{Act} \\ \cmidrule(r){3-4} \cmidrule(r){5-6} \cmidrule(r){7-8} \cmidrule{9-10} Instances & {$\#$} & {Opt} & {Time} & {Opt} & {Time} & {Opt} & {Time} & {Opt} & {Time} \\ \midrule All & 268 & 172 & 162.7 & 216 & 70.8 & 240 & 131.4 & 266 & 39.0 \\ $[0,10)$ & 26 & 26 & 3.1 & 26 & 3.0 & 26 & 5.0 & 26 & 3.6 \\ $[10,300)$ & 101 & 101 & 19.2 & 101 & 14.1 & 101 & 31.3 & 101 & 15.8 \\ $[300,\infty)$ & 34 & 1 & 3353.4 & 7 & 2789.0 & 18 & 1972.0 & 32 & 751.3 \\ \bottomrule \end{tabular} \end{table} \begin{table}[p] \centering \caption{Summarized numerical results for MKCS instances with~$k = 5, 6$. For $92$~instances, all models reach the time limit and are excluded from the table.} \label{tab:mkcs:small} \scriptsize \begin{tabular}{l *{11}{@{\hskip 1em}r}} \toprule && \multicolumn{2}{@{\hskip .5em}c@{\hskip 1em}}{No-Sym} & \multicolumn{2}{@{\hskip .5em}c@{\hskip 1em}}{Default} & \multicolumn{2}{@{\hskip .5em}c@{\hskip 1em}}{Orbitope} & \multicolumn{2}{@{\hskip .5em}c@{\hskip 1em}}{Act-Consec} & \multicolumn{2}{@{\hskip .5em}c@{\hskip 1em}}{Act-AllPairs} \\ \cmidrule(r){3-4} \cmidrule(r){5-6} \cmidrule(r){7-8} \cmidrule(r){9-10} \cmidrule{11-12} Instances & {$\#$} & {Opt} & {Time} & {Opt} & {Time} & {Opt} & {Time} & {Opt} & {Time} & {Opt} & {Time} \\ \midrule All & 147 & 139 & 18.4 & 145 & 17.4 & 141 & 20.4 & 141 & 22.0 & 141 & 21.9 \\ $[0,10)$ & 55 & 55 & 0.4 & 55 & 0.5 & 55 & 0.7 & 55 & 0.7 & 55 & 0.9 \\ $[10,600)$ & 63 & 63 & 26.1 & 63 & 24.9 & 63 & 31.8 & 63 & 36.2 & 63 & 35.0 \\ $[600,\infty)$ & 13 & 7 & 3482.0 & 11 & 3263.3 & 8 & 2990.4 & 8 & 3660.4 & 8 & 3703.8 \\ \bottomrule \end{tabular} 
\end{table} \begin{table}[p] \centering \caption{Summarized numerical results for MKCS instances with~$k = 8, 10$. For $91$~instances, all models reach the time limit and are excluded from the table.} \label{tab:mkcs:large} \scriptsize \begin{tabular}{l *{11}{@{\hskip 1em}r}} \toprule && \multicolumn{2}{@{\hskip .5em}c@{\hskip 1em}}{No-Sym} & \multicolumn{2}{@{\hskip .5em}c@{\hskip 1em}}{Default} & \multicolumn{2}{@{\hskip .5em}c@{\hskip 1em}}{Orbitope} & \multicolumn{2}{@{\hskip .5em}c@{\hskip 1em}}{Act-Consec} & \multicolumn{2}{@{\hskip .5em}c@{\hskip 1em}}{Act-AllPairs} \\ \cmidrule(r){3-4} \cmidrule(r){5-6} \cmidrule(r){7-8} \cmidrule(r){9-10} \cmidrule{11-12} Instances & {$\#$} & {Opt} & {Time} & {Opt} & {Time} & {Opt} & {Time} & {Opt} & {Time} & {Opt} & {Time} \\ \midrule All & 147 & 140 & 7.6 & 143 & 9.0 & 138 & 16.9 & 136 & 20.1 & 139 & 25.2 \\ $[0,10)$ & 60 & 60 & 0.1 & 60 & 0.3 & 60 & 0.5 & 60 & 0.8 & 60 & 1.1 \\ $[10,600)$ & 60 & 60 & 7.9 & 60 & 12.1 & 60 & 26.8 & 60 & 36.2 & 60 & 45.7 \\ $[600,\infty)$ & 7 & 5 & 4810.1 & 3 & 5452.5 & 4 & 6204.7 & 3 & 5284.1 & 4 & 4515.3 \\ \bottomrule \end{tabular} \end{table} In Table~\ref{tab:mkp}, we see that over all instances, the activation handler method solves more instances to optimality within the time limit than the other models. We can see that symmetry handling is highly relevant for the MKP\@. SCIP's state-of-the-art symmetry-handling methods reduce the running time by roughly~\SI{33}{\percent}, and our activation handler approach reduces the running time of this already very competitive setting by a further~\SI{60}{\percent}. Comparing the different sub-symmetry-handling methods, the additional overhead necessary for the inequalities method is too large for handling this type of sub-symmetry. From the small and medium instance classes, we see that the inequalities method has substantially larger solving times than all other methods.
There is also a slight improvement in running time for the global orbitope and activation handler methods, compared to default SCIP and the model where no symmetry handling is performed. For the large instances, we see that the activation handler method solves more instances to optimality, with a clear improvement in solving time. In Table~\ref{tab:mucp} we see similar results. Overall, SCIP's symmetry-handling methods improve the running time by roughly~\SI{55}{\percent}, whereas the activation handler reduces it by a further~\SI{45}{\percent}. We omit the Orbitope model in the results, as SCIP automatically finds these orbitopes in the Default model. The inequalities method, in contrast with the results for the MKP, shows an improvement over the default models for the large instances, confirming the results of~\cite{bendotti2020symmetry}. This difference compared to the results for the MKP is likely caused by the auxiliary $z$-variables, which can here be expressed linearly with no additional constraints. The activation handler outperforms all other models; it solves considerably more large instances to optimality. For the MKCS instances in Table~\ref{tab:mkcs:small}, (sub-)symmetry handling on the colors seems to have an adverse effect on the performance, and the default SCIP model performs best on average. This is likely caused by SCIP's automatic symmetry detection, which might discover symmetries in the graph that are not handled in the global orbitope and activation handler models. In this case, it seems that graph symmetries lead to more powerful performance improvements for this problem. In Table~\ref{tab:mkcs:large}, the results are similar, but for the large instances the average solving times are increased for every model, compared to the instances in Table~\ref{tab:mkcs:small} where~$k$ is smaller.
Here, we see that the activation handler outperforms handling only global symmetries in the Orbitope model, although the model with no symmetry handling generally outperforms both methods. This suggests that for very large instances, the activation handler scales better with the instance size than handling only global symmetries. \section{Conclusion} \label{sec:conclusion} We have introduced a new framework for sub-symmetry handling in integer programming. The approach is flexible, as it allows modelers to generically implement rules that define sub-symmetries, and it can be used in conjunction with any kind of symmetry-handling method in the existing literature. We have shown via a number of applications that it can be used in various settings, and computational experiments show that our framework substantially improves on the performance of the state-of-the-art methods. For future work, we are interested in looking into the interface of the activation handler framework. Currently, its flexibility allows users to write custom code for the implementation of the activation rules. For some problems, the activation rules may have a common structure that can be exploited, e.g., the capacity sub-symmetries for the MKP can also be applied to other bin-packing or scheduling problems. Such activation handlers may be generalized to be applicable to a broader set of formulations, for which a \emph{domain-specific language} might allow practitioners to more easily specify sub-symmetries in their models. \subsubsection{Acknowledgments} \label{subsubsec:acknowledgements} We thank the authors of~\cite{bendotti2020symmetry} for providing us with the MUCP~instances originally used in their experiments. \end{document}
\begin{document} \title{\textsc{\LARGE Common factors, trends, and cycles\\ in large datasets}} \begin{center} \begin{tabular}{cp{1cm}c} \large Matteo Barigozzi && \large Matteo Luciani\\[.1cm] \small London School of Economics &&\small Federal Reserve Board\\[.1cm] \footnotesize [email protected] &&\footnotesize [email protected]\\[1cm] \end{tabular} \small \today\\[1.5cm] \end{center} \begin{abstract} This paper considers a non-stationary dynamic factor model for large datasets to disentangle long-run from short-run co-movements. We first propose a new Quasi Maximum Likelihood estimator of the model based on the Kalman Smoother and the Expectation Maximisation algorithm. The asymptotic properties of the estimator are discussed. Then, we show how to separate trends and cycles in the factors by means of eigenanalysis of the estimated non-stationary factors. Finally, we employ our methodology on a panel of US quarterly macroeconomic indicators to estimate aggregate real output, or Gross Domestic Output, and the output gap.\\ \noindent \textit{JEL classification:} C32, C38, C55, E0.\\ \noindent \textit{Keywords:} Non-stationary Approximate Dynamic Factor Model; Trend-Cycle Decomposition; Quasi Maximum Likelihood; EM Algorithm; Kalman Smoother; Gross Domestic Output; Output Gap. \end{abstract} \renewcommand{\thefootnote}{$\ast$} \thispagestyle{empty} \footnotetext{We thank for helpful comments the participants to the conferences: ``Inference in Large Econometric Models'', CIREQ, Montr\' eal, May 2017; ``Big Data in Dynamic Predictive Econometric Modelling'', University of Pennsylvania, Philadelphia, May 2017; ``Computing in Economics and Finance'', Fordham University, New York City, June 2017; and to the seminars at: Federal Reserve Board, September 2016; Warwick Business School, February 2017; Department of Statistics, Universidad Carlos III, Madrid, May 2017.
We would also like to thank: Stephanie Aaronson, Gianni Amisano, Massimo Franchi, Marco Lippi, Filippo Pellegrino, Ivan Petrella, Lucrezia Reichlin, John Roberts, and Esther Ruiz.\\ Disclaimer: the views expressed in this paper are those of the authors and do not necessarily reflect the views and policies of the Board of Governors or the Federal Reserve System. } \renewcommand{\thefootnote}{\arabic{footnote}} \section{Introduction} This paper is about two stylized facts of macroeconomic time series: co-movements and non-stationarity \citep{lippireichlin94}. More precisely, this paper is about disentangling long-run co-movements (common trends) from short-run co-movements (common cycles) in a large dataset of non-stationary US macroeconomic indicators. Since the seminal work of \citet{beveridgenelson1981}, the issue of decomposing GDP into a trend and a cycle has been a central question in both time series econometrics and policy analysis. This is not surprising, as long-run trends are mainly influenced by supply-side factors, while short-run cycles are mainly associated with demand-side factors, and therefore different estimates of the trend and of the cycle can lead to different policy recommendations. Given the relevance of the issue, in the last 30 years many papers have suggested different ways to obtain a Trend-Cycle (TC) decomposition of GDP. Roughly speaking, those works can be grouped under two main approaches: one based on univariate methods \citep[e.g.][]{watson86,lippireichlin94b,mnz03,dungeyetal15}, and another using multivariate, but low-dimensional, time series techniques \citep[e.g.][]{stockwatson88JASA,lippireichlin94,gonzalogranger,garratetal06,crealetal10}. In this paper we use a novel approach, based on large datasets, to decompose GDP into a trend and a cycle.
We first disentangle common and idiosyncratic dynamics by using a Non-Stationary Approximate Dynamic Factor Model (DFM), and then we disentangle common trends from common cycles by applying a non-parametric TC decomposition to the latent common factors. Our methodology builds on four points: first, focusing on a high-dimensional setting is crucial, as only in a high-dimensional setting is it possible to disentangle common from idiosyncratic dynamics in a consistent way \citep{FHLR00,baing02, stockwatson02JASA} --- i.e., we can separate macroeconomic fluctuations from sectoral dynamics and measurement error only in a high-dimensional setting. Second, assuming the existence of a factor structure is a realistic and convenient way to represent co-movements in large macroeconomic datasets. Third, considering non-stationary data is necessary to account for the presence of common trends or, equivalently, cointegration \citep{bai04,baing04,BLL1,BLL2}. And fourth, by using a non-parametric TC decomposition we do not have to make assumptions on the law of motion of either the trend or the cycle. Our approach is deliberately reduced-form, and therefore our empirical analysis is conducted ``without pretending to have too much a priori economic theory'' \citep{SS77}, thus letting the data speak as freely as possible. The first contribution of this paper is methodological. Namely, we propose a Quasi Maximum Likelihood estimator of the non-stationary DFM based on the Expectation Maximisation (EM) algorithm combined with the Kalman Filter and the Kalman Smoother estimators of the factors. The theoretical properties of this approach in the large stationary DFM case have been studied in \citet{DGRfilter,DGRqml}, and here we extend their results to the non-stationary case by proving consistency and by providing rates of convergence for the factors and the parameters of the model.
Compared to the non-stationary principal component estimator \citep{baing04}, the estimator proposed in this paper is more efficient, and it is more flexible in that, thanks to the use of the Kalman Filter, it allows us to explicitly model the idiosyncratic dynamics and to impose economically meaningful restrictions. The second contribution of this paper is to show how to isolate common trends and common cycles in large macroeconomic datasets. In detail, we use a non-parametric approach that identifies the common trends as those linear combinations of the factors obtained from the leading eigenvectors of the long-run covariance matrix (\citealp{bai04,penaponcela04}), and the common cycles as deviations from the long-run equilibria, lying in the space orthogonal to that of the common trends --- i.e., the cointegration space \citep{ZRY}. Because our approach is non-parametric, we impose no particular form on the trend, which is not constrained to be a random walk, nor on the cycle. This is what differentiates our approach from the standard state-space approach, which is normally applied to a handful of variables and in which the trend and the cycle dynamics are explicitly specified and jointly estimated with the parameters of the model (\citealp{harvey90}). Our final contributions are empirical. Specifically, we employ our methodology to analyse a large panel of US quarterly macroeconomic time series with the goal of estimating the cyclical position of the economy and the observation error. With the expression ``estimating the observation error,'' we mean estimating aggregate real output. With the expression ``estimating the cyclical position of the economy,'' we mean decomposing aggregate real output into potential output and output gap. To the best of our knowledge, \citet{charlesjohn} and \citet{ADNSS} are the only works that, so far, have used (small) factor models to estimate aggregate real output.
On the other hand, a few papers have used low-dimensional factor models to estimate the cyclical position of the economy \citep[e.g.][]{charlesjohn,jarocinskilenza}, and a few more to estimate long-run trends \citep[e.g.][]{ADP16}. Finally, \citet{aastveittrovik14} and \citet{morleywong2017} have used a high-dimensional setting for estimating the output gap by means of a factor model and a large Bayesian VAR, respectively. However, in both works the variables are transformed to stationarity prior to model estimation. The first part of our empirical analysis is about estimating aggregate real output, to which we refer as Gross Domestic Output (GDO). We first show that our model naturally produces an estimate of GDO as that part of GDP/GDI that is driven by the macroeconomic (common) shocks. We then compare our estimate of GDO with ``the average of GDP and GDI'' released by the Bureau of Economic Analysis, and with ``GDPplus'' proposed by \citet{ADNSS} and released by the Philadelphia Fed. Our results show that these three measures are very similar, which is not surprising, as they are attempting to estimate the same thing. However, we estimate that since 2010 quarterly annualized GDO growth was on average \sfrac{1}{2} of a percentage point higher than estimated by the BEA or the Philadelphia Fed, thus pointing out that --- based on the commonality in the data --- the US economy grew at a faster pace than measured by national account statistics. The second part of our empirical analysis is about estimating the output gap. To this end, we use the above-mentioned TC decomposition in order to separate long-run from short-run co-movements, and in particular we focus on the decomposition derived for GDO. 
We compare our estimate with the one produced by the Congressional Budget Office (CBO), which estimates potential output as that level of output consistent with current technologies and normal utilisation of capital and labour, and the output gap as the residual part of output. Although these two estimates are obtained in completely different ways, in practice they look very similar. The two estimates are comparable for most of the sample considered, except from the late nineties to the financial crisis, when our measure suggests that a greater part of the produced output was driven by transitory factors. In particular, according to our estimate, between 2001:Q1 and 2005:Q4 the output gap was on average 2\sfrac{1}{2} percentage points higher than estimated by the CBO. The rest of this paper is structured as follows. In Section \ref{sec:Representation} we discuss representation of large non-stationary panels of time series. In this section we first present the non-stationary dynamic factor model and define the concept of commonality --- i.e., the common factors. Then we discuss how to disentangle long-run co-movements from short-run co-movements --- i.e., we define what common trends and common cycles are. In Section \ref{sec:Estimation} we discuss estimation. We first introduce in Section \ref{sec:StaticFM} the {\it static} representation of the DFM, which is just a convenient way to approach estimation of the dynamic model presented in Section \ref{sec:Representation}. We then present in Section \ref{sec:ML} our estimator, discuss its properties, and compare it with existing methods. Finally, in Section \ref{sec:TC2} we present the non-parametric TC decomposition that we use in the empirical section. Then, Section \ref{sec:emp} presents the empirical analysis.
This section is split in two, with the first part presenting our estimate of GDO (Section \ref{sec_gdo}), and the second part presenting our estimate of the output gap (Section \ref{sec_ogap}). To conclude, in Section \ref{sec:Conclusions} we discuss our findings and the advantages and limitations of our methodology, and we propose directions for further research. In the Appendix we report all technical proofs and the description of the data used and their transformation. \subsection*{Notation} A vector $\mathbf z_t$ is $I(1)$ if the highest order of integration among all its components is 1; thus, under this definition, some components of $\mathbf z_t$ can be stationary. Eigenvalues are always considered as ordered from the largest to the smallest, so for a given set of eigenvalues $\{\mu_j\}_{j=1}^m$, we have $\mu_1\ge\mu_2\ge\ldots\ge\mu_{m-1}\ge \mu_m$. Accordingly, the spectral norm of $\bm A$ is defined via $\Vert \bm A\Vert^2=\mu_1^{A'A}$, the largest eigenvalue of $\bm A'\bm A$. The $j$-th largest eigenvalue of a spectral density matrix at frequency $\omega$ is denoted as $\mu_j(\omega)$. The generic $(i,j)$-th entry of a matrix $\bm A$ is denoted as $[\bm A]_{ij}$. We denote by $L$ the lag operator, such that $L^k y_t =y_{t-k}$, for any $k\in\mathbb Z$, and we use the notation $\Delta y_t:=(1-L)y_t$. Finally, we let $M,M_0,M_1,\ldots$ denote generic positive and finite constants that do not depend on the panel dimensions $n$ or $T$, and whose value may change from line to line. \section{Representation of non-stationary panels of time series}\label{sec:Representation} Assume we observe a vector of $n$ time series $\{\mathbf y_t =(y_{1t}\cdots y_{nt})':t=1,\ldots ,T\}$ such that \begin{equation}\label{eq:trend} y_{it} = \mathcal D_{it} + x_{it}, \end{equation} where $\mathcal D_{it}$ is a deterministic component --- e.g., a linear trend --- and $\mathbf x_t=(x_{1t}\cdots x_{nt})'$ is such that $\mathbf x_t\sim I(1)$.
We also assume that $\E[x_{it}]= 0$, for any $i$ and $t$; therefore, $\mathbf x_{t}$ contains all the stochastic trends but no deterministic component. Throughout, the spectral density matrix of $\Delta \mathbf x_t$ is assumed to exist. In a high-dimensional setting, it is reasonable to assume that there are common trends and common cycles, but also idiosyncratic terms. Thus, for each variable $x_{it}$ we write \begin{equation}\label{eq:TC} x_{it} = {\mathcal T}_{it} + {\mathcal C}_{it} + \xi_{it}, \end{equation} where $\mathcal T_{it}\sim I(1)$ is the trend component, $\mathcal C_{it}\sim I(0)$ is the cycle component, and $\xi_{it}$ is the idiosyncratic component, which is allowed to be either $I(1)$ (in the presence of idiosyncratic trends) or $I(0)$ (e.g.\ measurement errors). The trend and the cycle capture the common dynamics across series, and thus constitute the common component, defined as $\chi_{it}={\mathcal T}_{it} + {\mathcal C}_{it}$. Hence, \eqref{eq:TC} can also be written as \begin{equation} x_{it} = \chi_{it} + \xi_{it}.\label{eq:commonidio} \end{equation} We define the vectors of common and idiosyncratic components as $\bm\chi_t=(\chi_{1t}\cdots \chi_{nt})'$ and $\bm\xi_t=(\xi_{1t}\cdots \xi_{nt})'$, respectively. Finally, notice that, consistently with the data considered in this paper: (i) some (but not all) components of $\mathbf x_t$ are allowed to be stationary, and (ii) the deterministic components $\mathcal D_{it}$ are not common to all series --- i.e., there are no common deterministic trends. We assume that the co-movements in $\bm \chi_t$ are driven by $q$ ``structural'' shocks, with $q\ll n$, which are collected in a weak white noise vector process $\mathbf u_t=(u_{1t}\cdots u_{qt})'$.
Then, for a given $q$, we decompose each element of $\mathbf x_t$ as \begin{align} x_{it}&=\bm b_i'(L) \bm f_t+\xi_{it},\label{eq:gdfm0}\\ \Delta\bm f_t &= \mathbf C(L)\mathbf u_t, \label{eq:dynf} \end{align} where from \eqref{eq:commonidio} the common component is given by $\chi_{it}=\bm b_i'(L) \bm f_t$ and the following properties hold: \begin{enumerate} \item [A1.] $\mathbf u_t\stackrel{w.n.}{\sim} (\mathbf 0_q,\mathbf I_q)$, with $q$ independent of $n$; \item [A2.] $\E[u_{jt} \xi_{is}]=0$, for any $j=1,\ldots, q$, $i=1,\ldots, n$, and $s,t=1,\ldots,T$; \item [A3.] $\mathbf B(L)=(\bm b_1'(L)\cdots\bm b_n'(L))'$ is an $n\times q$ one-sided matrix polynomial of finite order $s$, and $\bm f_t\sim I(1)$ is of dimension $q$; \item [A4.] $\mathbf C(L)=(\bm c_1'(L)\cdots\bm c_q'(L))'$ is a $q\times q$ one-sided, infinite matrix polynomial with square-summable coefficients and such that $\mathsf {rk}(\mathbf C(1))=(q-d)$ with $0<d< q$; \item [A5.] the $q$-th largest eigenvalue $\mu_q^{\Delta \chi}(\omega)$ of the spectral density matrix of $\Delta \bm\chi_t$ is such that $$ M_1\le\liminf_{n\to\infty} n^{-1}{\mu_q^{\Delta \chi}(\omega)}\le \limsup_{n\to\infty} n^{-1}{\mu_q^{\Delta \chi}(\omega)}\le M_2,\qquad \omega\mbox{-a.e.}\in [-\pi,\pi], $$ while the largest eigenvalue $\mu_{1}^{\Delta \xi}(\omega)$ of the spectral density matrix of $\Delta \bm\xi_t$ is such that $$ M_3\le \liminf_{n\to\infty}\mu_{1}^{\Delta \xi}(\omega)\le \limsup_{n\to\infty}\mu_{1}^{\Delta \xi}(\omega)\le M_4, \qquad \omega\mbox{-a.e.}\in [-\pi,\pi]. $$ \end{enumerate} Equations \eqref{eq:gdfm0} and \eqref{eq:dynf}, together with properties A1-A5, define a Non-Stationary Approximate Dynamic Factor Model (DFM). In the case of stationary time series, our model is a special case of the Generalised Dynamic Factor Model originally proposed by \citet{FHLR00}.
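To fix ideas, the model \eqref{eq:gdfm0}-\eqref{eq:dynf} can be simulated in a few lines. The sketch below is ours, not part of the paper, and all parameter values are illustrative: it generates $q=2$ dynamic shocks whose long-run matrix $\mathbf C(1)$ has rank $q-d=1$, so that the two $I(1)$ factors share a single common trend.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T, q, d = 100, 500, 2, 1           # q shocks, q - d = 1 common trend

# Dynamic factors: Delta f_t = C(L) u_t with rank(C(1)) = q - d.
# Take C(L) = C0 + C1 L with C1 = -C0 @ P, where P projects onto the
# direction killed in the long run, so C(1) = C0 (I - P) has rank 1.
C0 = np.array([[1.0, 0.3], [0.2, 1.0]])
v = np.array([0.0, 1.0])              # long-run zero direction (illustrative)
P = np.outer(v, v)
C1 = -C0 @ P
u = rng.standard_normal((T + 1, q))
df = u[1:] @ C0.T + u[:-1] @ C1.T     # Delta f_t = C0 u_t + C1 u_{t-1}
f = np.cumsum(df, axis=0)             # f_t ~ I(1), one shared trend

# Observables: x_it = b_i' f_t + xi_it, with stationary idiosyncratics.
B = rng.standard_normal((n, q))
xi = rng.standard_normal((T, n))
x = f @ B.T + xi

assert np.linalg.matrix_rank(C0 + C1) == q - d   # rk(C(1)) = q - d
```

Here the lag polynomial $\bm b_i(L)$ is taken of order $s=0$ for brevity; higher-order loadings only stack extra terms.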
Condition A5 is crucial: it allows for identification of the common component by defining it according to its spectral properties. An explanation for A5 in the time domain is provided by \citet{hallinlippi13}, who show that this condition is equivalent to defining the common and idiosyncratic components by requiring that, for any dynamic aggregation scheme given by an $n$-dimensional vector of weights $\mathbf a_k$ such that $\sum_{k\in\mathbb Z}\mathbf a_k'\mathbf a_k=1$, the following holds: \begin{align} 0< \lim_{n\to\infty}\Var\left(\frac 1n\sum_{k=-\infty}^\infty \mathbf a_k'\Delta \bm\chi_{t-k}\right)\le M\;\mbox{ and }\;\lim_{n\to\infty} \Var\left(\frac 1n\sum_{k=-\infty}^{\infty} \mathbf a_k'\Delta \bm\xi_{t-k}\right)= 0.\label{example} \end{align} The following asymptotic conditions for the eigenvalues $\mu_{i}(\omega)$ of the spectral density of $\Delta \mathbf x_t$ are a direct consequence of A4, A5, and Weyl's inequality: \begin{enumerate} \item [B1.] for $\omega\mbox{-a.e.}\in [-\pi,\pi]$ the following holds:\\ $M_1\le \liminf_{n\to\infty}n^{-1}{\mu_q(\omega)}\le\limsup_{n\to\infty}n^{-1}{\mu_q(\omega)} \le M_2$,\\ $M_3\le \liminf_{n\to\infty}\mu_{q+1}(\omega)\le \limsup_{n\to\infty}\mu_{q+1}(\omega)\le M_4$; \item [B2.] for $\omega = 0$ the following holds:\\ $M_1\le \liminf_{n\to\infty}n^{-1}{\mu_{q-d}(0)}\le\limsup_{n\to\infty}n^{-1}{\mu_{q-d}(0)} \le M_2$, \\ $M_3\le \liminf_{n\to\infty}\mu_{q-d+1}(0)\le \limsup_{n\to\infty}\mu_{q-d+1}(0)\le M_4$. \end{enumerate} By means of B1, the number of shocks $q$ can then be identified (\citealp{hallinliska07}, \citealp{onatski09}). Similarly, by means of B2, the number of common trends, $(q-d)$, can be identified (\citealp{BLL2}). In particular, from the intuition given in \eqref{example} and because of B1 and B2, it is clear that the DFM is identifiable only in the limit $n\to\infty$.
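The eigenvalue separation behind B1 and B2 can be seen in a small simulation. The sketch below is ours and deliberately crude: it replaces the spectral density with the sample covariance of first differences (for B1) and proxies the zero-frequency behaviour with the covariance of the levels (for B2), with all parameter values chosen for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T, q = 100, 2000, 2                 # q = 2 shocks, q - d = 1 common trend

# One common trend (random walk) and one common cycle (stationary AR(1)).
trend = np.cumsum(rng.standard_normal(T))
cycle = np.zeros(T)
for t in range(1, T):
    cycle[t] = 0.5 * cycle[t - 1] + rng.standard_normal()
f = np.column_stack([trend, cycle])
x = f @ rng.standard_normal((q, n)) + rng.standard_normal((T, n))

# B1 analogue: eigenvalues of cov(Delta x) separate the q factor-driven
# directions (growing with n) from the bounded idiosyncratic ones.
eig_diff = np.linalg.eigvalsh(np.cov(np.diff(x, axis=0).T))[::-1]
q_hat = int((eig_diff > 10.0).sum())
assert q_hat == q

# B2 analogue: in the levels covariance only the q - d = 1 trend
# direction dominates, the cycle direction stays an order smaller.
eig_lvl = np.linalg.eigvalsh(np.cov(x.T))[::-1]
assert eig_lvl[0] > 10.0 * eig_lvl[1]
```

In practice $q$ and $q-d$ are selected with the formal criteria cited in the text rather than with a fixed threshold as here.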
Condition A4 allows for the presence of $(q-d)$ common trends in the factors $\bm f_t$. In line with our empirical results in Section \ref{sec:emp}, we rule out the degenerate cases $d=0$ and $d=q$. This implies that the vector $\bm f_t$ admits a VECM representation with $d$ cointegration relations (\citealp{englegranger1987}), as well as the factor representation (\citealp{EP1994}): \begin{equation}\label{eq:EP} \bm f_t = \bm\Psi \bm\tau_t + \bm\gamma_t, \end{equation} where $\bm\Psi$ is $q\times (q-d)$ and $\bm \tau_t$ is the vector of $(q-d)$ common trends with components $\tau_{jt}\sim I(1)$ for $j=1,\ldots, (q-d)$, while $\bm\gamma_t$ is a $q$-dimensional stationary vector.\footnote{Notice that in general all factors are non-stationary, unless some \textit{ad hoc} zero-constraint is imposed on the elements of $\mathbf C(1)$. On the other hand, if we were to ask for one of the factors to be stationary, then the corresponding row of $\bm\Psi$ would have to be set to zero. However, we do not consider this case further, since it could easily be included in our framework by imposing the appropriate identifying assumptions.} Notice that \eqref{eq:EP} differs from the common trends representation (or multivariate Beveridge-Nelson decomposition) of \citet{stockwatson88JASA} in that the trend $\bm\tau_t$ is not constrained to be a vector random walk, a property advocated for by many authors (e.g.\ \citealp{lippireichlin94}). For a given choice of $\bm\Psi$, the $(q-d)$ common trends can then be obtained by linear projection onto the space spanned by the columns of $\bm \Psi$: \begin{equation}\nonumber \bm \tau_t = (\bm\Psi '\bm\Psi )^{-1}\bm\Psi '\bm f_t=\bm\Psi '\bm f_t, \end{equation} where the second equality holds because, without loss of generality, we can always impose the identifying constraint $\bm\Psi'\bm\Psi=\mathbf I_{(q-d)}$. Different choices of $\bm\Psi$ lead to different definitions of common trends.
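Whatever the choice, the projection algebra is elementary. The following sketch, ours and with toy factors, takes $\bm\Psi$ to be the leading eigenvectors of the factors' second-moment matrix, anticipating the principal-component choice adopted next, and verifies the orthogonal decomposition numerically.

```python
import numpy as np

rng = np.random.default_rng(1)
T, q, d = 300, 2, 1

# Toy factors: one shared random walk (trend) plus stationary noise.
tau_true = np.cumsum(rng.standard_normal(T))
f = np.column_stack([tau_true + rng.standard_normal(T),
                     0.5 * tau_true + rng.standard_normal(T)])

# Psi: leading (q - d) eigenvectors of the factors' second-moment matrix.
eigval, eigvec = np.linalg.eigh(f.T @ f / T)   # eigenvalues ascending
Psi = eigvec[:, -(q - d):]                     # orthonormal columns
Psi_perp = eigvec[:, :d]                       # orthogonal complement

tau = f @ Psi                                  # common trends, tau_t = Psi' f_t
c = f @ Psi_perp                               # common cycles, c_t = Psi_perp' f_t

# Exact orthogonal TC decomposition: f_t = Psi tau_t + Psi_perp c_t.
assert np.allclose(f, tau @ Psi.T + c @ Psi_perp.T)
assert np.allclose(Psi.T @ Psi_perp, 0.0)
```

Since $(\bm\Psi,\bm\Psi_\perp)$ is an orthonormal basis of $\mathbb R^q$, the reconstruction holds exactly, whatever the data.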
Here we opt for a non-parametric approach and identify the elements of $\bm\tau_t$ as the first $(q-d)$ principal components of $\bm f_t$, as proposed by \citet{bai04} and \citet{penaponcela04} (see Section \ref{sec:TC2} for details on estimation). Given this definition, the columns of $\bm\Psi$ are orthonormal, and therefore there exists a $q\times d$ matrix $\bm\Psi_{\perp}$ such that $\bm\Psi_{\perp}'\bm\Psi_{\perp}=\mathbf I_d$ and $\bm\Psi_{\perp}'\bm\Psi=\mathbf 0_{d\times (q-d)}$. Now, consider the $d$-dimensional process obtained by projecting $\bm f_t$ onto the space orthogonal to the common trends: \begin{equation}\nonumber \bm c_t = (\bm\Psi_{\perp}'\bm\Psi_{\perp})^{-1}\bm\Psi_\perp'\bm f_t=\bm\Psi_\perp'\bm f_t=\bm\Psi_\perp'\bm \gamma_t. \end{equation} It is straightforward to see that ${\bm c}_t\sim I(0)$, that its components are $d$ common cycles in the sense of \citet{vahidengle93}, and that the columns of $\bm\Psi_{\perp}$ are a basis of the cointegration space of $\bm f_t$; thus, these common cycles represent deviations from long-run equilibria --- see also e.g.\ \citet{johansen91} and \citet{kasa1992} for similar definitions.\footnote{Other TC decompositions, based on definitions of cycles different from the one used here, are in \citet{gonzalogranger} and \citet{GN01}.} According to our definition, common trends and common cycles are orthogonal by construction, and we have the TC decomposition of the factors: \begin{align}\label{eq:tc1} \bm f_t = \bm\Psi \bm\Psi '\bm f_t + \bm\Psi_\perp \bm\Psi_\perp '\bm f_t= \bm\Psi \bm\tau_t + \bm\Psi_\perp \bm c_t, \end{align} and therefore, by combining \eqref{eq:trend}, \eqref{eq:gdfm0} and \eqref{eq:tc1}, we have the TC decomposition of the data: \begin{equation}\label{eq:tc2} y_{it} = \mathcal D_{it}+ \bm b_i'(L) \bm\Psi\bm\tau_t + \bm b_i'(L) \bm\Psi_\perp \bm c_t + \xi_{it}= \mathcal D_{it}+\mathcal T_{it} +\mathcal C_{it}+\xi_{it}.
\end{equation} \section{Estimation} \label{sec:Estimation} In order to estimate \eqref{eq:tc2}, we need to estimate the factors $\bm f_t$ and their TC decomposition. We opt for a two-step approach, where we first extract the common factors and then estimate their TC decomposition. In particular, we first introduce a convenient re-parametrization of the DFM based on its \textit{static} state-space representation (Section \ref{sec:StaticFM}), which is then used for retrieving the factor space by means of the EM algorithm (Section \ref{sec:ML}). Then, in a second step, we use principal component analysis to extract common trends and cycles (Section \ref{sec:TC2}). Notice that, compared to the classical state-space approach (e.g.\ \citealp{charlesjohn}) or to the Bayesian approach (e.g.\ \citealp{jarocinskilenza}), in which the trend and the cycle are estimated in one step together with the parameters of the model, our approach has the advantage that it does not require us to specify a law of motion for the trend and the cycles. For simplicity of exposition, we assume in this section that there is no deterministic component, and we refer to Section \ref{sec:emp} and to \ref{sec:data} for the treatment of these terms in practice. \subsection{The \textit{static} representation of dynamic factor models}\label{sec:StaticFM} Consider the state-space form of the DFM in \eqref{eq:gdfm0}-\eqref{eq:dynf} (\citealp{stockwatson05}; \citealp{FGLR09}): \begin{align} x_{it} &= \bm\lambda_i'\mathbf F_t + \xi_{it},\label{eq:static}\\ \Delta\mathbf F_t &= \mathbf D(L)\mathbf u_t\label{eq:static2}, \end{align} where from \eqref{eq:commonidio} the common component is now given by $\chi_{it}=\bm\lambda_i'\mathbf F_t$ and $\mathbf u_t$ is the same as in \eqref{eq:dynf}. We assume that A1, A2 and A5 still hold, and in addition we require: \begin{enumerate} \item [C1.]
$\mathbf D(L)=(\bm d_1'(L)\cdots\bm d_r'(L))'$ is an $r\times q$ one-sided, infinite matrix polynomial with square-summable coefficients and such that $\mathsf {rk}(\mathbf D(1))=(q-d)$ with $0<d< q$; \item [C2.] $\bm\Lambda=(\bm\lambda_1 \cdots\bm\lambda_n)'$ is an $n\times r$ loadings matrix such that $\lim_{n\to\infty}\Vert n^{-1}\bm\Lambda'\bm\Lambda-\mathbf I_r\Vert=0$ and $\vert[\bm\Lambda]_{ij}\vert<M$, for any $i=1,\ldots, n$ and $j=1,\ldots, r$; \item [C3.] $\mathbf F_t\sim I(1)$ of dimension $r$, with $\E[\Delta\mathbf F_t\Delta\mathbf F_t']$ positive definite. \end{enumerate} Condition C1 is equivalent to A4 in that it requires the existence of $(q-d)$ common trends driving the common component. Conditions C2 and C3 are standard in the literature and imply that the eigenvalues of the covariance of $\Delta\bm\chi_t$ diverge as $n\to\infty$ at rate $n$ (\citealp{stockwatson02JASA,baing02,FLM13}). Finally, from A5 we immediately have that the largest eigenvalue of the covariance of $\Delta\bm\xi_t$ is finite for any $n$. Given the way $\mathbf F_t$ and $\bm f_t$ are loaded by the data, hereafter we call $\mathbf F_t$ \textit{static} factors and $\bm f_t$ \textit{dynamic} factors. Let us stress once more that here the DFM and the related TC decomposition are our focus, while the \textit{static} representation is just a convenient way to approach estimation of the dynamic model. In particular, for \eqref{eq:static}-\eqref{eq:static2} to be equivalent to \eqref{eq:gdfm0}-\eqref{eq:dynf}, we need the following restrictions to hold: \begin{enumerate} \item [R1.] there exists an invertible $r\times r$ matrix $\mathbf K$ such that $\mathbf F_t=\mathbf K (\bm f_t'\cdots \bm f_{t-s}')'$ and $\bm\lambda_i' = (\bm b_{i0}'\cdots \bm b_{is}')\mathbf K^{-1}$, for any $i=1,\ldots, n$, where $\bm b_{ik}$, for $k=0,\ldots, s$, are the coefficients of $\bm b_i(L)$ defined in A3; \item [R2.]
the dimension of $\mathbf F_t$ is $r=q(s+1)$; \item [R3.] the cointegration rank of $\mathbf F_t$ is $d$. \end{enumerate} Let us consider each restriction in detail. Restriction R1 implies that the spectral density of $\Delta\mathbf F_t$ has reduced rank $q$. In the following, we impose this restriction when estimating the model, but we do not attempt to identify $\mathbf K$. Restriction R2 offers an alternative way to determine $r$ with respect to the typical methods available in the literature, which are based on the behavior of the eigenvalues of the covariance matrix of $\Delta\mathbf x_t$ and therefore on C2, C3, and A5 (e.g.\ \citealp{baing02}). Specifically, by virtue of restriction R2, once we set $q$ using B1, we can choose $r$ such that the share of variance explained by the \textit{static} factors $\mathbf F_t$ coincides with the share of variance explained by the $q$ \textit{dynamic} factors $\bm f_t$ --- see also \citet{dagostinogiannone12}. Finally, restriction R3 tells us that the autoregressive representation of \eqref{eq:static2} is a VECM with $d$ cointegration relations (a proof is in \ref{sec:proofsRepresentation}). Moreover, since the vector $\mathbf F_t$ is singular, the autoregressive representation has a finite order (\citealp{BLL1}). However, in the next section we do not estimate a VECM; rather, we estimate an unrestricted VAR in the levels (\citealp{simsstockwatson}). We use the knowledge of the cointegration rank to determine the dimension of the common cycles space (see Section \ref{sec:TC2}). Summing up, by not fully imposing R1 and R3 when estimating the factors, we opt for simplicity of estimation over the complexity of a more realistic representation, which implies that the model considered is deliberately mis-specified. The effects of such mis-specification will become clear in Section \ref{sec:TC2}, when we consider TC decompositions of $\mathbf F_t$ as opposed to those of $\bm f_t$.
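Restriction R1 can be made concrete in a few lines. The sketch below is ours, with $\mathbf K=\mathbf I$, $s=1$ and arbitrary coefficients: stacking $s+1$ lags of the dynamic factors reproduces the common component exactly, and the static dimension is $r=q(s+1)$ as in R2.

```python
import numpy as np

rng = np.random.default_rng(2)
T, q, s = 200, 2, 1
r = q * (s + 1)                       # restriction R2: r = q(s+1) = 4

f = np.cumsum(rng.standard_normal((T, q)), axis=0)   # dynamic factors, I(1)
b0, b1 = rng.standard_normal(q), rng.standard_normal(q)

# Dynamic loading: chi_t = b0' f_t + b1' f_{t-1}, i.e. a lag polynomial b(L).
chi_dynamic = f[1:] @ b0 + f[:-1] @ b1

# Static form with K = I: F_t = (f_t', f_{t-1}')' and lambda = (b0', b1')'.
F = np.hstack([f[1:], f[:-1]])        # stacked static factors, dimension r
lam = np.concatenate([b0, b1])
chi_static = F @ lam

assert F.shape[1] == r
assert np.allclose(chi_dynamic, chi_static)   # identical common component
```

Any invertible $\mathbf K$ rotates $F$ and $\lambda$ in offsetting ways, which is why $\mathbf K$ itself need not be identified.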
\subsection{Estimating the space of factors and loadings} \label{sec:ML} We consider the following state-space form of \eqref{eq:static}-\eqref{eq:static2}, in which we assume a VAR(2) for the \textit{static} factors, as in the empirical analysis of Section \ref{sec:emp}: \begin{align} x_{it} &= \bm\lambda_i'\mathbf F_t + \xi_{it},\label{eq:SS1}\\ \mathbf F_t&= \mathbf A_1\mathbf F_{t-1}+\mathbf A_2\mathbf F_{t-2}+\mathbf H\mathbf u_t,\label{eq:SS2}\\ \xi_{it}&=\rho_i\xi_{it-1}+e_{it}.\label{eq:SS3} \end{align} We estimate \eqref{eq:SS1}-\eqref{eq:SS3} via the EM algorithm (\citealp{DLR77}), combined with the Kalman Filter (KF) and the Kalman Smoother (KS) estimators of the factors (\citealp{AM79,harvey90}). In the stationary, low-dimensional --- i.e., finite $n$ --- setting, estimation of a factor model by means of the EM algorithm can be found in \citet{shumwaystoffer82} and \citet{watsonengle83}, while the asymptotic properties of this factor estimator are studied by \citet{DGRfilter,DGRqml} under the joint limit $n,T\to\infty$.\footnote{For recent applications of this approach see e.g.\ \citet{reiswatson10,banburamodugno14,juvenalpetrella2015,smokinggun,CGM16}.} In the non-stationary case, applications of the EM algorithm can be found in \citet{quahsargent93} and \citet{SAZ13} in a low-dimensional setting. Here, we study the theoretical properties in the non-stationary case when $n,T\to\infty$. In order to run the KF-KS, it is necessary to make some additional assumptions on the idiosyncratic component. Let $\mathbf R$ be the covariance matrix of the vector $\bm e_t=(e_{1t}\cdots e_{nt})'$ of the idiosyncratic innovations in \eqref{eq:SS3}; then we assume: \begin{enumerate} \item [D1.] $\rho_i=1$ if $\xi_{it}\sim I(1)$ or $\rho_i=0$ if $\xi_{it}\sim I(0)$; \item [D2.]
$\bm e_{t}\stackrel{w.n.}{\sim} \mathcal N(\mathbf 0_n, \mathbf R)$, with $[\mathbf R]_{ii}>0$ and $[\mathbf R]_{ij}=0$ for any $i\neq j$ and $i,j=1,\ldots, n$; \item [D3.] $\mathbf u_t\stackrel{w.n.}{\sim} \mathcal N(\mathbf 0_q, \mathbf I_q)$. \end{enumerate} It is clear from D1, D2 and \eqref{eq:SS3} that, if some idiosyncratic components are $I(1)$, we can still consider a factor model for $\mathbf x_t$ with stationary errors in \eqref{eq:SS1} by adding latent states with unit loadings that evolve as random walks. Notice that the dimension of the parameter space does not grow with the number of $I(1)$ idiosyncratic components. On the other hand, modelling the dynamics of $I(0)$ idiosyncratic components would increase the complexity of the estimation problem. For this reason, in D1 we choose to leave the dynamics of the stationary idiosyncratic components unspecified --- see Section \ref{sec:emp} for the practical implementation of this assumption. Assumptions D1-D3 define a mis-specified approximating model of the true DFM, and in this sense our EM approach delivers Quasi Maximum Likelihood (QML) estimators. The effects of these mis-specifications are discussed at the end of this section, but before discussing them we present the asymptotic properties of the estimated factors and loadings. We collect all unknown parameters of the model into the vector $$\bm\Theta:=(\mbox{vec}(\bm\Lambda)' \;\mbox{vec}(\mathbf A_1)'\;\mbox{vec}(\mathbf A_2)'\; \mbox{vec}(\mathbf H)'\;\mbox{diag}(\mathbf R)')'.$$ We denote by $Q$ the dimension of ${\bm\Theta}$ and assume that the true values of the parameters satisfy: \begin{enumerate} \item [D4.] $\bm\Theta \in \mbox{int} (\Omega)$, with $\Omega\subseteq \mathbb R^Q$ and compact. \end{enumerate} This condition is standard in QML theory and ensures existence of the true values of the parameters. The EM algorithm is based on the iteration of two steps.
In the E-step, for a given estimator of the parameters $\widehat{\bm\Theta}_k$, we compute the expected likelihood conditional on all observed data $\{\mathbf x_1,\ldots,\mathbf x_T\}$. This is in turn a function of the first and second conditional moments of the \textit{static} factors, which are computed by means of the KS when using $\widehat{\bm\Theta}_k$. Note that, under the assumption of normality, as in D2 and D3, and for a given value of the parameters $\bm\Theta$, the KF-KS give the conditional expectations: \begin{align}\nonumber &{\mathbf F}_{t|t-1} := \E_{{\bm\Theta}}[\mathbf F_t |\mathbf x_1,\ldots,\mathbf x_{t-1}],\qquad {\mathbf F}_{t|t} := \E_{{\bm\Theta}}[\mathbf F_t |\mathbf x_1,\ldots,\mathbf x_t],\qquad &{\mathbf F}_{t|T} := \E_{{\bm\Theta}}[\mathbf F_t |\mathbf x_1,\ldots,\mathbf x_T], \end{align} with the associated covariance matrices denoted as ${\mathbf P}_{t|t-1}$, ${\mathbf P}_{t|t}$, and ${\mathbf P}_{t|T}$, respectively. These are therefore optimal estimators of the \textit{static} factors, since they minimize the associated Mean Squared Error (MSE) for a given value of the parameters. In the M-step, a new estimator of the parameters $\widehat{\bm\Theta}_{k+1}$ is computed by maximizing the expected likelihood. At convergence of the EM algorithm, say at iteration $k^*$, we obtain the estimator of the parameters, which we denote by $\widehat{\bm\Theta}:= \widehat{\bm\Theta}_{k^*}$. The estimator of the factors is then obtained by running the KS one last time using $\widehat{\bm\Theta}$, and it is denoted by $\widehat{\mathbf F}_{t}:=\E_{\widehat{\bm\Theta}}[\mathbf F_t |\mathbf x_1,\ldots,\mathbf x_T]$. The estimated common and idiosyncratic components are then given by $\widehat{\chi}_{it}=\widehat{\bm \lambda}_i'\widehat{\mathbf F}_{t}$ and $\widehat{\xi}_{it}=x_{it}-\widehat{\chi}_{it}$. Details of the EM algorithm, as well as closed form expressions for all the estimators, are in \ref{sec:proofsDetails}.
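The conditional moments delivered by the KF-KS, and the steady-state behaviour of $\mathbf P_{t|t-1}$ invoked later, can be illustrated on a univariate local-level model, a deliberately simplified stand-in for \eqref{eq:SS1}-\eqref{eq:SS3} with all parameter values ours:

```python
import numpy as np

rng = np.random.default_rng(3)
T, sig_u2, sig_e2 = 200, 1.0, 4.0

# Scalar local-level model: F_t = F_{t-1} + u_t,  x_t = F_t + e_t.
F_true = np.cumsum(np.sqrt(sig_u2) * rng.standard_normal(T))
x = F_true + np.sqrt(sig_e2) * rng.standard_normal(T)

# Kalman filter, tracking P_{t|t-1} (one-step-ahead MSE) and P_{t|t}.
F_filt, P_filt, P_pred = np.zeros(T), np.zeros(T), np.zeros(T)
a, P = 0.0, 1e6                       # diffuse-like prior variance
for t in range(T):
    P_pred[t] = P + sig_u2                        # predict (state is a RW)
    K = P_pred[t] / (P_pred[t] + sig_e2)          # Kalman gain
    a = a + K * (x[t] - a)                        # update
    P = (1.0 - K) * P_pred[t]
    F_filt[t], P_filt[t] = a, P

# Rauch-Tung-Striebel smoother: F_{t|T} and P_{t|T}.
F_sm, P_sm = F_filt.copy(), P_filt.copy()
for t in range(T - 2, -1, -1):
    J = P_filt[t] / P_pred[t + 1]
    F_sm[t] = F_filt[t] + J * (F_sm[t + 1] - F_filt[t])  # RW: predicted mean = F_filt[t]
    P_sm[t] = P_filt[t] + J**2 * (P_sm[t + 1] - P_pred[t + 1])

assert abs(P_pred[-1] - P_pred[-2]) < 1e-8       # steady state reached quickly
assert P_sm[T // 2] < P_filt[T // 2] < P_pred[T // 2]
```

The final assertions mirror the ordering $\mathbf P_{t|T}\le\mathbf P_{t|t}\le\mathbf P_{t|t-1}$ and the fast convergence of the Riccati recursion; with these variances the steady-state $P_{t|t-1}$ solves $P^2-\sigma_u^2 P-\sigma_u^2\sigma_e^2=0$.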
To initialise the EM algorithm, we use as initial estimator of the loadings the $r$ leading eigenvectors of the covariance of $\Delta \mathbf x_t$, from which we obtain an estimator of the \textit{static} factors as the integrated principal components of $\Delta \mathbf x_t$ (\citealp{baing04}). This factor estimator is in turn used to: (i) initialize the KF, together with a diffuse prior for the factors' covariance (\citealp{koopman97,KD00}), and (ii) estimate the VAR parameters (\citealp{BLL2}). Define as $\mathbf V$ the $n\times r$ matrix having as columns the $r$ leading normalised eigenvectors of the covariance of $\Delta \bm \chi_t$; then the following identifying assumptions are convenient for proving consistency: \begin{enumerate} \item [E1.] $\bm\Lambda= \sqrt n\, \mathbf V$ with $[\bm \Lambda]_{1j}>0$ for all $j=1,\ldots, r$; \item [E2.] $\mathbf F_t=n^{-1/2}\,\mathbf V'\bm\chi_t$ with $\mathbf F_0=\mathbf 0_r$. \end{enumerate} Since the \textit{static} factors have no economic meaning, these identifying assumptions are perfectly valid and --- together with assumption C2 on the loadings scale --- they rule out any indeterminacy in the estimators used to initialize the EM algorithm --- see \citet{DGRfilter} for similar assumptions. We have the following consistency result. \begin{prop}\label{prop:EMcons} Let A1, A2, A5, C1, C2, C3, D1, D2, D3, D4, E1, and E2 hold, and let $\bar t(T)>0$ be such that \begin{equation} \limsup_{T\to\infty} T e^{-\bar t(T)}\le M.\label{eq:exprate} \end{equation} Define $\mathbf F_t^\dag:=(\bm f_t'\cdots \bm f_{t-s}')'$ and $\bm\lambda_i^\dag:=(\bm b_{i0}'\cdots \bm b_{is}')'$.
Then, there exists an invertible $r\times r$ matrix $\mathbf K$ such that, as $n,T\to\infty$, for all $\, \bar t (T) \le t\le T$ and any given $i=1,\ldots, n$, \begin{align} & \sqrt T\,\Vert\widehat{\bm \lambda}_i-\mathbf K^{-1'}{\bm\lambda_i^\dag} \Vert=O_p(1),\label{eq:consEM2}\\ &\min(\sqrt n, \sqrt T)\,\Vert\widehat{\mathbf F}_{t}-\mathbf K \mathbf F_t^\dag\Vert=O_p(1), \label{eq:consEM3}\\ &\min(\sqrt n, \sqrt T)\,\vert\widehat{\chi}_{it}-\chi_{it}\vert=O_p(1). \label{eq:consEM4} \end{align} \end{prop} \noindent Proposition \ref{prop:EMcons} states that, under the assumptions presented before, we can consistently estimate the common component, as well as the spaces spanned by the \textit{dynamic} factors $\bm f_t$ and by the corresponding \textit{dynamic} loadings, which are the coefficients of $\bm b_i(L)$ defined in A3. Our proof, which is presented in detail in \ref{sec:proofsProposition}, is based on the same approach followed by \citet{PR15} in the one-factor case, and it consists of two main parts, which we summarize here. \noindent \textbf{Population results}. We first show that, when the parameters are known, the one-step-ahead factor MSE, $\mathbf P_{t|t-1}$, converges to a steady state, while both the MSE of the KF, $\mathbf P_{t|t}$, and that of the KS, $\mathbf P_{t|T}$, tend to zero as $n\to\infty$ (Lemmas \ref{lem:steady1} and \ref{lem:steady3}). Notice that this is true also when initializing with a diffuse prior, since this has an effect only for a finite number of initial periods, say $t_0$ (\citealp{koopman97}). In particular, convergence to the steady state is exponentially fast \citep{AM79}; hence, our result holds for any $t\ge \bar t(T)>t_0$, where $\bar t(T)$ satisfies condition \eqref{eq:exprate}, which asymptotically requires $\bar t(T)=O(\log T)$.
In practice, though, the steady state is reached very quickly, as shown in Figure \ref{fig_Ptt}, where we report the trace of $\mathbf P_{t|t-1}$ (solid line), $\mathbf P_{t|t}$ (dashed line) and $\mathbf P_{t|T}$ (dashed-dotted line), computed for the data analysed in Section \ref{sec:emp}.

\noindent \textbf{Estimation results}. In the second step of the proof, we prove consistency of the KF and KS estimators of the \textit{static} factors when using estimated parameters (Lemma \ref{lem:steady_est1}). This is done by taking into account an additional parameter estimation error which has two components: (i) the error of the QML estimator of the parameters for the case of known factors, say $\widehat{\bm\Theta}^*$ (Lemma \ref{lem:eststar}), and (ii) the error due to the numerical approximation of $\widehat{\bm\Theta}$ to $\widehat{\bm\Theta}^*$, which is related to the stopping rule of the EM algorithm (\citealp{MR94}, and Lemma \ref{lem:convEM}). In particular, the latter error is shown to be negligible with respect to the former. Therefore, the rate of convergence of the loadings estimated via the EM algorithm is the same one would obtain by QML estimation were the true factors observable; moreover, because of assumption D2 the loadings are estimated equation by equation, so this error depends only on $T$. Results similar to \eqref{eq:consEM2} hold also for all other estimated parameters in $\widehat{\bm\Theta}$. The rate of convergence for the estimated \textit{static} factors, on the other hand, is standard in the literature.
\begin{figure}
\caption{Conditional Mean Squared Errors}
\end{figure}
The results in Proposition \ref{prop:EMcons} extend those by \citet{DGRfilter,DGRqml} to the non-stationary case. A major difference between the EM algorithm in levels proposed in this paper and the EM algorithm in first differences proposed by \citet{DGRqml} lies in the way the idiosyncratic components are modelled.
Indeed, while by considering first differences it is implicitly assumed that all idiosyncratic components have a unit root, in our case we can distinguish between stationary and non-stationary idiosyncratic components --- i.e., we can allow for idiosyncratic trends in only some variables. This is not a minor difference, as it has substantial implications for the properties of the estimators. First, we model non-stationary idiosyncratic components as additional latent states rather than differencing them, thus improving efficiency (see also Remark 2 below). Second, when $\xi_{it}\sim I(0)$, under D1 and D2 the QML estimator of the loadings of the $i$-th variable is obtained by minimizing the sample variance of $\xi_{it}$. This is not the same as differencing before estimation, in which case the loadings would be estimated by minimizing the sample variance of $\Delta\xi_{it}$. The resulting common component of the $i$-th variable therefore has different empirical properties: compared to our non-stationary approach, the common component estimated in first differences is likely to provide a better fit of the first-differenced data, but not necessarily of the levels. Conversely, the common component obtained with our approach is likely to provide a better fit of the levels, thus capturing better the lower frequencies --- and so the long-run trends --- and resulting in a smoother estimator, which however might fit the differenced data worse. We conclude this section by briefly discussing the possible mis-specifications introduced by assumptions D1, D2 and D3. In particular, we assume the vector of idiosyncratic shocks $\bm e_t$ to be i.i.d.~Gaussian, thus imposing four restrictions, on: (1) the cross-sectional dependence; (2) the variances; (3) the serial dependence; (4) the distribution.
Let us consider the implications of each of these restrictions for the properties of the estimators --- see also \citet{DGRfilter} for a similar discussion.
\begin{rem}\upshape{
If the idiosyncratic components have some cross-sectional dependence, as allowed by A5, then the state-space form of the model is mis-specified; however, by inspecting the proofs we see that, as long as we use an invertible estimator of $\mathbf R$, consistency is not affected as $n\to\infty$. As a consequence of this asymptotic argument, we do not attempt here to model the off-diagonal terms of $\mathbf R$. This is better illustrated by a simple example showing the properties of the KF (an analogous argument holds for the KS). Denote as $\mathbf P$ the steady state of $\mathbf P_{t|t-1}$; then it can be shown that $\mathbf P=\mathbf H\mathbf H'$ (Lemma \ref{lem:steady1}). Consider the case in which the parameters are given, $\bm\xi_{t}\sim I(0)$, and $r=q$, so that $\mathbf P$ is invertible; then for $t\ge \bar t(T)$ the KF estimator is such that
\begin{align}
\mathbf F_{t|t}&= \mathbf F_{t|t-1} + \mathbf P \bm\Lambda'(\bm\Lambda\mathbf P\bm\Lambda'+\mathbf R)^{-1}(\mathbf x_t-\bm\Lambda\mathbf F_{t|t-1})\nonumber\\
&=\mathbf F_{t|t-1} +(\bm\Lambda'\mathbf R^{-1}\bm\Lambda+\mathbf P^{-1})^{-1}\bm\Lambda'\mathbf R^{-1}(\mathbf x_t-\bm\Lambda\mathbf F_{t|t-1})\nonumber\\
&= (\bm\Lambda'\mathbf R^{-1}\bm\Lambda)^{-1}\bm\Lambda'\mathbf R^{-1}\mathbf x_t + O(n^{-1})\nonumber\\
&=\mathbf F_t + (\bm\Lambda'\mathbf R^{-1}\bm\Lambda)^{-1}\bm\Lambda'\mathbf R^{-1}\bm\xi_t+ O(n^{-1})\nonumber\\
&=\mathbf F_t +O_p(n^{-1/2}),\nonumber
\end{align}
where we used (in order) the Woodbury formula, assumption C2, the definition of $\mathbf x_t$ in \eqref{eq:SS1}, and assumption A5. Clearly, consistency of the KF does not depend on the specific assumption for $\mathbf R$, as long as it is invertible.
However, for finite $n$ the KF depends on $\mathbf R$, and modelling also its off-diagonal terms could in principle improve its efficiency (e.g., \citealp{bailiao16}).
}
\end{rem}
\begin{rem}\upshape{
From the example in Remark 1 it is clear that for finite $n$ the KF estimator is a weighted average of the data in which the heteroskedasticity of the idiosyncratic components is accounted for. Again, the same argument holds also for the KS. In this respect the KF-KS approach is analogous to the generalized principal component estimator, which is however derived in a stationary setting and without explicitly addressing the dynamics of the data (\citealp{choi12}).
}
\end{rem}
\begin{rem} \upshape{If the idiosyncratic components are autocorrelated, then, unless we model them explicitly as additional latent states, optimality is lost; in particular, the loadings' estimators are still consistent but not efficient. By means of D1 we partially solve this problem, at least for the series with $I(1)$ idiosyncratic components.
}
\end{rem}
\begin{rem}\upshape{
If the idiosyncratic components are non-Gaussian, then the estimator is not optimal, being only the best linear estimator. Nevertheless, it has to be noticed that typical macroeconomic data show little deviation from normality, so we are minimally concerned by the restrictions imposed by this assumption.
}
\end{rem}
Summing up, regardless of these mis-specifications, even though we might not have the most efficient estimator, we are likely to have gains in efficiency with respect to those estimators obtained by integrating the principal components of first differences of the data (\citealp{baing04}).~Indeed, principal components are optimal only in the case of serially and cross-sectionally i.i.d.~Gaussian idiosyncratic components (\citealp{lawleymaxwell71,tippingbishop99}), and such conditions clearly do not hold in a time series context, especially when non-stationarities are present and the cross-sectional dimension is large.
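The chain of equalities in Remark 1 can also be verified numerically. The following sketch uses toy values for $\mathbf P$, $\mathbf R$ and $\bm\Lambda$ (our assumptions, for illustration only) to check that the Kalman gain equals its Woodbury form exactly, and that it is close to the GLS projection for large $n$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 500, 3
Lam = rng.standard_normal((n, r))        # toy loadings (our assumption)
R = np.diag(rng.uniform(0.5, 1.5, n))    # diagonal idiosyncratic covariance
P = np.eye(r)                            # toy value for the steady state P = HH'
Rinv = np.linalg.inv(R)

# Kalman gain and its Woodbury form: identical by the matrix inversion lemma
gain1 = P @ Lam.T @ np.linalg.inv(Lam @ P @ Lam.T + R)
gain2 = np.linalg.inv(Lam.T @ Rinv @ Lam + np.linalg.inv(P)) @ Lam.T @ Rinv
# GLS projection: the gain approaches this as n grows, up to O(1/n)
gls = np.linalg.inv(Lam.T @ Rinv @ Lam) @ Lam.T @ Rinv

woodbury_gap = np.max(np.abs(gain1 - gain2))   # numerically zero
gls_gap = np.max(np.abs(gain2 - gls))          # small, shrinking with n
```

Consistency of the KF does not hinge on the particular (diagonal) choice of `R`, only on its invertibility, mirroring the asymptotic argument of the remark.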
On the contrary, our approach explicitly takes into account the autocorrelation in the factors and in the idiosyncratic components, as well as their heteroskedasticity, and, as discussed above, it delivers consistent estimates even when some degree of cross-sectional dependence is present but not modelled.
\subsection{Trend and cycles}\label{sec:TC2}
We now turn to estimation of common trends and common cycles. Notice that since we do not fully impose R1, the \textit{dynamic} factors $\bm f_t$ are not identified, and instead we have to deal with a TC decomposition of the \textit{static} factors $\mathbf F_t$, which can be carried out analogously to the one described in Section \ref{sec:Representation} for $\bm f_t$. Because of assumption C1 and restriction R3, for given values of $q$ and $d$, the vector $\mathbf F_t$ admits the factor representation:
\begin{align}\nonumber
{\mathbf F}_t &= \bm\Phi_1\mathbf T_t + \bm\Gamma_t,
\end{align}
where $\bm\Gamma_t\sim I(0)$, $\bm\Phi_1$ is $r\times (q-d)$, and $\mathbf T_t$ is the vector of $(q-d)$ common trends with components $T_{jt}\sim I(1)$ for $j=1,\ldots, (q-d)$. Hence, in general the common trends admit the MA representation:
\[
\Delta \mathbf T_t = \bm{\mathcal B}(L) \bm \eta_t,
\]
where $\bm\eta_t\stackrel{w.n.}{\sim} (\mathbf 0_{q-d},\bm\Sigma_{\eta})$ with $\bm\Sigma_{\eta}$ positive definite, and $\bm{\mathcal B}(L)$ is a $(q-d)\times (q-d)$ one-sided, infinite matrix polynomial with square-summable coefficients and $\mathsf{rk}(\bm{\mathcal B}(1))=(q-d)$.
As a consequence of the results by \citet{penaponcela97} and Proposition \ref{prop:EMcons} above, given the estimated factors $\widehat{\mathbf F}_t$, it is clear that, as $n,T\to\infty$,
\begin{equation}\label{eq:PPGK}
\widehat{\mathbf S}:=\frac 1{T^2}\sum_{t=1}^{T} \widehat{\mathbf F}_t\widehat{\mathbf F}_{t}'\stackrel{}{\Rightarrow} \bm\Phi_1\,\bm{\mathcal B}(1)\,\bm\Sigma_{\eta}^{1/2}\left(\int_0^1\bm{\mathcal W}(u)\bm{\mathcal W}(u)'\mathrm d u\right)\bm\Sigma_{\eta}^{1/2}\,\bm{\mathcal B}(1)' \,\bm\Phi_1',
\end{equation}
where convergence is in the sense of weak convergence of the associated probability measures and $\{\bm{\mathcal W}(u),\ 0\le u\le 1\}$ is a $(q-d)$-dimensional standard Wiener process. Hence, by virtue of \eqref{eq:PPGK}, we can estimate the common trends $\mathbf T_t$ as the first $(q-d)$ principal components of the estimated \textit{static} factors $\widehat{\mathbf F}_t$ (\citealp{bai04,penaponcela04}). Specifically, we denote by $(\widehat{\bm \Phi}_1\;\widehat{\bm \Phi}_0)$ the $r\times r$ matrix with columns given by the normalized eigenvectors of $\widehat{\mathbf S}$, ordered according to the decreasing value of the corresponding eigenvalues, and such that $\widehat{\bm \Phi}_1$ is $r\times (q-d)$ and $\widehat{\bm \Phi}_0$ is $r\times (r-q+d)$. This leads to the estimator of the common trends as the projection:
\begin{align}
\widehat{\mathbf T}_t = \widehat{\bm\Phi}_1'\widehat{\mathbf F}_t. \nonumber
\end{align}
As for the common cycles, notice first that, by projecting $\widehat{\mathbf F}_t$ onto the columns of $\widehat{\bm \Phi}_0$, we obtain the $(r-q+d)$-dimensional process
\begin{equation}\nonumber
\widehat{\mathbf G}_t=\widehat{\bm\Phi}_{0}'\widehat{\mathbf F}_t,
\end{equation}
which, by construction, is orthogonal to $\widehat{\mathbf T}_t$. Moreover, $\widehat{\mathbf G}_t$ is stationary, since it belongs to the cointegration space of $\widehat{\mathbf F}_t$ \citep{ZRY}.
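The trend extraction just described (eigenanalysis of $\widehat{\mathbf S}$ followed by projection) can be sketched as follows. This is a minimal illustration with our own function and variable names, not the authors' code:

```python
import numpy as np

def tc_split(F_hat, q, d):
    """Split the estimated static factors (T x r) into (q-d) common trends
    and an (r-q+d)-dimensional stationary remainder G_t, via the
    eigendecomposition of S = T^{-2} * sum_t F_t F_t'. Illustrative only."""
    T, r = F_hat.shape
    S = (F_hat.T @ F_hat) / T**2                 # long-run covariance estimate
    eigval, eigvec = np.linalg.eigh(S)           # ascending eigenvalue order
    Phi = eigvec[:, np.argsort(eigval)[::-1]]    # re-sort descending
    Phi1, Phi0 = Phi[:, :q - d], Phi[:, q - d:]
    T_hat = F_hat @ Phi1                         # common trends
    G_hat = F_hat @ Phi0                         # stationary part (cointegration space)
    return T_hat, G_hat, Phi1, Phi0
```

Because the eigenvectors are orthonormal, the trend and remainder components are orthogonal by construction, as in the text.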
However, by R3 we know that the cointegration space must have dimension $d$, but we do not impose R3 when estimating the \textit{static} factors. Thus, we face the problem of identifying $d$ cycles from the higher-dimensional stationary process $\widehat{\mathbf G}_t$. In order to identify the common cycles, we then look for the $d$-dimensional projection of $\widehat{\mathbf G}_t$ with maximum spectral density. In the empirical analysis of Section \ref{sec:emp}, we consider the VAR(2):
\begin{equation}\label{eq:VARG}
\widehat{\mathbf G}_t = \bm{\mathcal A}_1\widehat{\mathbf G}_{t-1}+\bm{\mathcal A}_2\widehat{\mathbf G}_{t-2}+ \bm v_t,
\end{equation}
where $\bm v_t\stackrel{w.n.}{\sim} (\mathbf 0_{r-q+d},\bm\Sigma_v)$ and $\det(\mathbf I_{r-q+d}- \bm{\mathcal A}_1z-\bm{\mathcal A}_2z^2)\neq 0$ for $|z|\le 1$. Once we estimate \eqref{eq:VARG}, we obtain its residuals $\widehat{\bm v}_t$ and their covariance matrix $\widehat{\bm\Sigma}_v$. Denote as $\widehat{\bm{\mathcal H}}$ the $(r-q+d)\times d$ matrix having as columns the leading $d$ normalized eigenvectors of $\widehat{\bm\Sigma}_v$.
We then define the estimated cycle component as the $d$-dimensional projection:
\begin{equation}
\widehat{\mathbf C}_t = \widehat{\bm{\mathcal H}}'\widehat{\mathbf G}_t.\nonumber
\end{equation}
The estimated TC decomposition is then given by
\begin{align}
\widehat{\mathbf F}_t &= \widehat{\bm\Phi}_1 \widehat{\bm\Phi}_1'\widehat{\mathbf F}_t+ \widehat{\bm\Phi}_{0} \widehat{\bm\Phi}_{0}'\widehat{\mathbf F}_t\nonumber\\
&=\widehat{\bm\Phi}_1 \widehat{\mathbf T}_t + \widehat{\bm\Phi}_{0}\widehat{\mathbf G}_t \nonumber\\
&=\widehat{\bm\Phi}_1 \widehat{\mathbf T}_t + \widehat{\bm\Phi}_{0}\widehat{\bm{\mathcal H}}\widehat{\bm{\mathcal H}}'\widehat{\mathbf G}_t + \widehat{\bm\Phi}_{0}\widehat{\bm{\mathcal H}}_\perp\widehat{\bm{\mathcal H}}'_\perp\widehat{\mathbf G}_t \nonumber\\
&=\widehat{\bm\Phi}_1 \widehat{\mathbf T}_t + \widehat{\bm\Phi}_{0}\widehat{\bm{\mathcal H}}\widehat{\mathbf C}_t + \widehat{\bm\Phi}_{0}(\widehat{\mathbf G}_t-\widehat{\bm{\mathcal H}}\widehat{\mathbf C}_t), \label{eq:FTCW}
\end{align}
where $\widehat{\bm{\mathcal H}}_\perp$ is $(r-q+d)\times (r-q)$ and such that $\widehat{\bm{\mathcal H}}_\perp'\widehat{\bm{\mathcal H}}=\mathbf 0_{(r-q)\times d}$. The last term on the right-hand side of \eqref{eq:FTCW} appears because of the mis-specification caused by not fully imposing R1 and R3; in particular, it has covariance of rank $(r-q)$ and, since $r>q$, it is in general not zero.
\begin{figure}
\caption{Spectral Densities of Common Trends and Common Cycles}
\end{figure}
To appreciate the meaning and the appropriateness of decomposition \eqref{eq:FTCW}, in Figure \ref{fig_TC} we show the spectral densities of the first differences of the three components of $\widehat{\mathbf F}_t$ for the data analyzed in Section \ref{sec:emp}, where $r=6$, $q=3$, and $d=2$.
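The cycle-identification steps above (fit the VAR(2), take the residual covariance, project $\widehat{\mathbf G}_t$ on its $d$ leading eigenvectors) can be sketched as follows; a minimal illustration with our own names, not the authors' code:

```python
import numpy as np

def common_cycles(G_hat, d):
    """Fit a VAR(2) to G_hat (T x m) by equation-by-equation OLS, then
    project G_hat on the d leading eigenvectors of the residual covariance
    to obtain the estimated common cycles. Illustrative sketch."""
    T, m = G_hat.shape
    Y = G_hat[2:]                                # left-hand side, t = 3,...,T
    Z = np.hstack([G_hat[1:-1], G_hat[:-2]])     # lags 1 and 2 as regressors
    A, *_ = np.linalg.lstsq(Z, Y, rcond=None)    # OLS coefficients
    V = Y - Z @ A                                # VAR(2) residuals
    Sv = np.cov(V, rowvar=False)                 # residual covariance
    eigval, eigvec = np.linalg.eigh(Sv)
    H = eigvec[:, np.argsort(eigval)[::-1][:d]]  # d leading eigenvectors
    C_hat = G_hat @ H                            # estimated common cycles
    return C_hat, H
```

The columns of `H` are orthonormal, so the residual term $\widehat{\mathbf G}_t-\widehat{\bm{\mathcal H}}\widehat{\mathbf C}_t$ in the decomposition is the projection of $\widehat{\mathbf G}_t$ on the orthogonal complement of `H`.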
As expected, the estimated common trend $\widehat{\mathbf T}_t$ (black line) contributes most at the lowest frequencies --- i.e., lower than $\frac \pi {10}$ --- which correspond to periods longer than five years. Once we remove the common trend, of the remaining five processes $\widehat{\mathbf G}_t$, the two estimated common cycles $\widehat{\mathbf C}_t$ (red lines) capture most of the variation at almost all frequencies: one cycle dominates at periods longer than two years --- i.e., frequencies lower than $\frac\pi 4$ --- and the other cycle dominates at periods shorter than two years --- i.e., frequencies higher than $\frac \pi 4$. With respect to these two cycles, the residual three cycles $(\widehat{\mathbf G}_t-\widehat{\bm{\mathcal H}}\widehat{\mathbf C}_t)$ (blue lines) give a negligible contribution to the total variation. Given this empirical result, the extra term in \eqref{eq:FTCW} can be neglected and treated as a mis-specification error. Finally, from \eqref{eq:FTCW}, the estimated TC decomposition of the data immediately follows:
\begin{align}
x_{it} &= \widehat{\bm\lambda}_i' \widehat{\bm\Phi}_1 \widehat{\mathbf T}_t +\widehat{\bm\lambda}_i' \widehat{\bm\Phi}_{0} \widehat{\bm{\mathcal H}}\widehat{\mathbf C}_t +\widehat{\bm\lambda}_i' \widehat{\bm\Phi}_{0}(\widehat{\mathbf G}_t-\widehat{\bm{\mathcal H}}\widehat{\mathbf C}_t)+\widehat{\xi}_{it},\nonumber
\end{align}
which is the estimated counterpart of the representation given in \eqref{eq:tc2}.
\section{Estimating the cyclical position of the economy\\ and the observation error}\label{sec:emp}
We now use our model to estimate the cyclical position of the US economy and the observation error. In particular, in Section \ref{sec_gdo} we will estimate ``the observation error'' by estimating the non-stationary approximate DFM as explained in Section \ref{sec:ML}.
And, in Section \ref{sec_ogap} we will estimate ``the cyclical position of the economy'' by decomposing the common factors into common trends and common cycles using the TC decomposition discussed in Sections \ref{sec:Representation} and \ref{sec:TC2}. The following analysis is carried out on a large macroeconomic dataset comprising $n=103$ quarterly series from 1960:Q1 to 2017:Q1 describing the US economy. The complete list of variables and transformations is reported in \ref{sec:data}. Compared to the papers that use small DFMs to estimate the cyclical position of the economy, which typically estimate the output gap using only high-level variables such as GDP, the unemployment rate, and PCE price inflation, we include several other indicators, thus being able to capture information coming from a wider spectrum of the economy. Specifically, our dataset includes national account statistics, industrial production indexes, various price indexes including CPIs, PPIs, and PCE price indexes, various labor market indicators including indicators from both the household survey and the establishment survey as well as labor cost and compensation indexes, monetary aggregates, credit and loans indicators, housing market indicators, interest rates, the oil price, and the S\&P500 index. Broadly speaking, all the variables that are $I(1)$ are not transformed, while all the variables that are $I(2)$ are differenced once. Notice that some variables should, from a theoretical economic point of view, always be considered as $I(0)$ (e.g., inflation rates, the unemployment rate, and interest rates), but since they exhibit a great deal of persistence they are here treated as $I(1)$. Finally, a linear trend is estimated where necessary before applying our methodology, thus accounting for the deterministic component in \eqref{eq:trend}. A thorough empirical analysis requires tackling two main preliminary problems.
First, we need to determine the number of common trends $(q-d)$, of common shocks $q$, and of \textit{static} factors $r$. To determine the number of common trends $(q-d)$ we use the criterion by \citet{BLL2}, which exploits the behaviour of the eigenvalues described in condition B2. This criterion indicates the presence of $(q-d)=1$ common trend, which is in line with many theoretical models assuming a common productivity trend as the sole driver of long-run dynamics \citep[e.g.][]{negro07}. To determine the number of common shocks $q$ we use the test by \citet{onatski09} and the criterion by \citet{hallinliska07}, which exploit the behaviour of the eigenvalues described in condition B1. Both methods indicate the presence of $q=3$ common shocks. Having determined $q$, as explained in Section \ref{sec:StaticFM}, by virtue of R2 we can set the number of \textit{static} factors $r$ according to their explained variance. By looking at Table \ref{tab_explainedvariance} we can clearly see that $r\simeq2q$, and therefore in our benchmark specification we set $q=3$ and $r=6$.\footnote{An alternative way to select the number of \textit{static} factors $r$ is to resort to one of the many available methods, such as, for example, the criterion of \citet{baing02}, which for our dataset gives results in line with our choice of $r$.}
\begin{table}[t!]\caption{Percentage of explained variance}\label{tab_explainedvariance}
\centering
\begin{tabular*}{.95\textwidth}{@{}@{\extracolsep{\fill}}ccccccccccccc@{}}
\hline\hline
& & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 &\\\hline
& $q$ & 33.4 & 45.8 & 53.3 & 58.9 & 63.6 & 67.4 & 70.6 & 73.4 & 75.8 & 77.9& \\
& $r$ & 23.4 & 33.9 & 42.1 & 47.9 & 51.8 & 55.3 & 58.2 & 60.6 & 62.7 & 64.9& \\\hline
\end{tabular*}
\begin{tabular}{p{.95\textwidth}}
\scriptsize This table reports the percentage of total variance explained by the $q$ largest eigenvalues of the spectral density matrix of $\Delta \mathbf x_t$ and by the $r$ largest
eigenvalues of the covariance matrix of $\Delta \mathbf x_t$.
\end{tabular}
\end{table}
Second, we need to choose which idiosyncratic components to model as random walks, and which as white noises. Following the methodology proposed by \citet{baing04}, we can explicitly test the null hypothesis $H_0$: ${\rho}_i=1$; if we do not reject $H_0$, we set $\rho_i=1$, while if we reject $H_0$, we set $\rho_i=0$. This approach is applied to all variables in the dataset except GDP, GDI, the unemployment rate, the Federal funds rate, and CPI, core CPI, PCE, and core PCE inflation, for which we impose a priori $\rho_i=0$. That is, while for most of the variables in the dataset we let the data determine what is driving their long-run dynamics, we impose that the long-run dynamics of GDP, GDI, the unemployment rate, the Federal funds rate, and CPI, core CPI, PCE, and core PCE inflation are driven exclusively by macroeconomic shocks, with the idiosyncratic shocks accounting only for short-run movements.
\subsection{Measuring Gross Domestic Output} \label{sec_gdo}
A fundamental issue in economics is the measurement of aggregate real output, henceforth Gross Domestic Output (GDO). Historically, GDO has been measured mainly by Gross Domestic Product (GDP), but GDP, which tracks all expenditures on final goods and services produced, is just an estimate of GDO. An equally acceptable estimate of the concept of GDO is represented by Gross Domestic Income (GDI), which tracks all income received by those who produced the output.
GDP is almost always preferred to GDI, the main reason being that it is released before GDI.\footnote{The first estimate of GDP is released one month after the reference quarter, while GDI is generally released two months after the reference quarter, together with the second release of GDP.} However, it has been shown that GDI reflects the business cycle fluctuations in true output growth better than GDP, and moreover GDI is better than GDP at recognising the start of a recession (\citealp{jeremy10,jeremy12}). In recent years, there has been interest in combining GDP and GDI to come up with a better estimate of GDO, the rationale being that the difference between GDP and GDI --- in NIPA table terminology, the ``statistical discrepancy'' --- is exclusively the result of measurement error, as these two statistics are in fact measuring the same thing. For example, starting from November 4, 2013, the Philadelphia Fed releases an estimate of GDO, called ``GDPplus'', proposed by \citet{ADNSS}, which is defined as the common component of a bivariate one-factor model built with GDP and GDI growth rates. Similarly, starting from July 30, 2015, the Bureau of Economic Analysis (BEA) releases ``the average of GDP and GDI'', which the Council of Economic Advisers refers to as GDO \citep*{CEA15}. Our approach differs from those mentioned above in that our estimate of GDO is not obtained by combining GDP and GDI; rather, it is obtained by using all 103 variables included in our dataset. In detail, we define GDO as that part of GDP/GDI that is driven by the macroeconomic (common) shocks, i.e., $\textrm{GDO}_t=\chi_t^\textrm{GDP}=\chi_t^\textrm{GDI}$. To estimate GDO in this way, we estimate a constrained version of model \eqref{eq:SS1}-\eqref{eq:SS2}, where we impose the restriction of equal common components: $\chi_t^\textrm{GDP}=\chi_t^\textrm{GDI}$.
This restriction is indeed corroborated by the data: even if we do not impose it, the estimated $\chi_t^\textrm{GDP}$ and $\chi_t^\textrm{GDI}$ are nearly identical. In numbers, the standard deviation of $(\Delta y_t^\textrm{GDP}-\Delta y_t^\textrm{GDI})$ is 1.93, while the standard deviation of $(\Delta \chi_t^\textrm{GDP}-\Delta \chi_t^\textrm{GDI})$ is reduced to 0.28.
\begin{figure}
\caption{Gross Domestic Output}
\end{figure}
Figure \ref{fig_gdo} shows our proposed estimate of GDO (red line) together with ``GDPplus'' (blue line) and ``the average of GDP and GDI'' released by the BEA (black line). Overall, the three measures are very similar, which is not surprising, as they are attempting to estimate the same quantity. However, three important differences emerge. First, our estimate of GDO is smoother than the other two. This is not surprising: compared to ``GDPplus'' and ``the average of GDP and GDI'', our estimate of GDO is constructed to contain a larger low-frequency component, because it is estimated on data in levels rather than on growth rates. Moreover, because it is derived under the assumption that the idiosyncratic components of GDP and GDI are stationary, by construction our estimate of GDO captures all the low-frequency movements of GDP and GDI. Second, our estimate of GDO does not show any kind of residual seasonality in the last fifteen years, where the term ``residual seasonality'' refers to the presence of ``lingering seasonal effects even after seasonal adjustment processes have been applied to the data'' \citep{Moulton16}. Mainly motivated by the fact that since 2010 GDP growth in Q1 has been on average more than 1 percentage point lower than in the other quarters (NW plot of Figure \ref{fig_residseas}), in recent years there has been much discussion on whether US GDP exhibits residual seasonality or not.
The profession is not in agreement on this issue, as some authors \citepalias[e.g.][]{Claudia1,Claudia2} conclude that US GDP does not exhibit residual seasonality, while others \citep[e.g.][]{rudebuschetal2017,lunsford2017} find evidence of residual seasonality --- see \citet{Moulton16} for a technical discussion on causes of, and remedies for, residual seasonality in US GDP. Figure \ref{fig_residseas} shows average real GDO growth by quarter for our estimate of GDO (SE plot), ``GDPplus'' (SW plot), and ``the average of GDP and GDI'' (NE plot). As can be clearly seen, our estimate of GDO exhibits no residual seasonality whatsoever in the last 15 years.
\begin{figure}
\caption{Residual Seasonality}
\end{figure}
Third, our estimate of GDO in recent years gives a different signal about the economy than the one given by ``GDPplus'' and ``the average of GDP and GDI''. According to our estimate, since 2010 quarterly annualized GDO growth was on average \sfrac{1}{2} of a percentage point higher than estimated by the BEA or the Philadelphia Fed, where this difference comes mainly from our estimate of GDO growth in the first quarter (see Figure \ref{fig_residseas}), and therefore from the fact that our measure does not suffer from residual seasonality. In other words, based on the commonality in the data, the US economy grew at a faster pace than measured by national account statistics.
\subsection{Measuring the output gap}\label{sec_ogap}
Decomposing aggregate real output into potential output and the output gap is a critical task for both monetary and fiscal policy, as the former is a key input for long-term projections, and the latter can be an important gauge of inflationary pressure. There exist many definitions of potential output and of the output gap --- see \citealp{outputgaps}, for a survey of different methods and definitions. Here we use the definition implied by the TC decomposition discussed in Sections \ref{sec:Representation} and \ref{sec:TC2}.
Among the many existing approaches, the most similar to ours are those of \citet{charlesjohn} and \citet{jarocinskilenza}, who use small dynamic factor models, \citet{aastveittrovik14}, who use a large stationary dynamic factor model combined with the Hodrick-Prescott filter, and \citet{morleywong2017}, who use a large stationary BVAR combined with the Beveridge-Nelson decomposition. We compare our output gap estimate with the one produced by the Congressional Budget Office (CBO). The CBO estimates potential output and the output gap by using the so-called ``production function approach'', according to which potential output is the level of output consistent with current technologies and normal utilisation of capital and labour, and the output gap is the residual part of output. Specifically, the CBO model is based upon a textbook Solow growth model with a neoclassical production function. Labour and productivity trends are estimated by using a variant of Okun's law, so that actual output is above its potential (the output gap is positive) when the unemployment rate is below the natural rate of unemployment, which is in turn defined as the non-accelerating inflation rate of unemployment (NAIRU), i.e., the level of unemployment consistent with stable inflation --- for further details see \citet*{CBO01}. In Figure \ref{fig_OG}, we compare our measure of the output gap (red line) with the one produced by the CBO (blue line), where the left plot shows the level of the output gap, while the right plot shows the 4-quarter percentage change of the output gap. The main result emerging from Figure \ref{fig_OG} is that our estimate of the output gap is remarkably similar to that of the CBO. However, there are a few periods in which the two estimates diverge, the main one being from the late nineties to the financial crisis.
In particular, while according to the CBO the level of the output gap was negative between 2001:Q1 and 2005:Q4, according to our estimate the output gap in that same period was positive --- on average 2\sfrac{1}{2} percentage points higher than estimated by the CBO. Therefore, according to our estimate the level of the output gap right before the great financial crisis, in 2007:Q4, was 1.3\%, while according to the CBO it was -0.7\%; hence we estimate that the level of slack in the economy at the trough of the crisis, in 2009:Q2, was -4.5\%, approximately 1\sfrac{3}{4} percentage points higher than estimated by the CBO.
\begin{figure}
\caption{Output gap}
\end{figure}
To conclude, let us emphasize that the closeness of our estimate of the output gap to that of the CBO is a remarkable result, particularly so because our estimate is very different from the CBO's from both a technical and an interpretational point of view.~Indeed, while the CBO constructs the output gap so that its level has a specific economic meaning, our measure of the output gap is simply the transitory/stationary part of the common component of output --- i.e., that part of aggregate real output that will disappear in the long run.\footnote{Notice that also for the CBO the output gap is assumed to revert to zero in the long run, as the CBO imposes in its forecast that in 10 years the output gap will be zero --- see e.g. \citet*{CBO04}.} Therefore, our output gap estimate provides different and complementary information on the cyclical position of the economy than that contained in the CBO estimate. In particular, our estimate of the output gap seems more suitable to answer the question ``which part of current growth is due to temporary factors?'', while the measure of the CBO is certainly more suitable as a gauge of inflationary pressure. This can explain in part the divergence of the two estimates in the 2000s.
This period is characterized by stable and low inflation --- on average, core CPI inflation between 2001:Q1 and 2007:Q4 was approximately 2.1\%. Accordingly, the CBO estimates that slack was positive (i.e., the output gap negative). By contrast, our measure, which is not specifically affected by inflation but is more broadly influenced by the co-movement in the data, estimates that a part of aggregate real output was transitory. This makes sense given that the years before the crisis were characterized by several factors that indeed proved transitory, such as the housing boom, a historically high share of sub-prime loan origination \citep{Haughwout2009}, and a large amount of equity withdrawal from housing \citep{HouseATM}. And, since our model includes a large number of variables, including housing indicators as well as loan and credit indicators, these transitory factors are captured by our model.
\section{Discussion and conclusions}\label{sec:Conclusions}
In this paper we disentangle long-run co-movements (common trends) from short-run co-movements (common cycles) in large datasets. To this end, we first estimate a non-stationary dynamic factor model by means of a Quasi Maximum Likelihood estimator based on the Expectation Maximisation algorithm, combined with the Kalman Filter and Kalman Smoother estimators of the factors. We then disentangle common trends from common cycles by applying to the latent common factors a non-parametric Trend-Cycle decomposition based on eigenanalysis of their long-run covariance. The asymptotic properties of these estimators are derived and discussed in the paper. We estimate our model on a large panel of US quarterly macroeconomic time series with the goal of estimating the cyclical position of the economy and the observation error. After backing out the observation error, we show that our model naturally produces an estimate of aggregate real output, which we refer to as Gross Domestic Output (GDO).
According to our estimate of GDO, since 2010 the US economy grew at a faster pace than measured by national account statistics. We then use a Trend-Cycle decomposition to estimate the output gap. We compare our estimate of the output gap, which is entirely data-driven, with that produced by the Congressional Budget Office (CBO), which is instead based on theoretical economic models. It turns out that our estimate of the output gap is remarkably similar to that of the CBO except from the late nineties to the financial crisis, when our measure suggests that a greater part of the produced output was driven by transitory factors. There are a number of aspects of our model that we have not fully developed in our empirical analysis and that are left for future research. First, due to the use of the Kalman Filter, our factor estimator is in principle able to handle both mixed frequency and missing data \citep[e.g.][]{marianomurasawa03,JKVW2011,banburamodugno14} and, therefore, it can be used for real-time analysis \citep{Nowcasting}. This aspect is well-known to be particularly relevant when estimating the output gap, since as shown by \citet{orphanidesvannorden}, end-of-sample revisions of GDP are of the same order of magnitude as the gap itself. Second, the use of the Kalman Filter makes our model suitable for scenario and counterfactual analysis based on conditional forecasts \citep{banburagiannonelenza15}. Third, as shown in equation \eqref{eq:FTCW}, our model naturally produces a Trend-Cycle decomposition for each variable in the dataset, and therefore it is possible to estimate other policy-relevant indicators, such as the unemployment gap (in our framework, the cycle component of the unemployment rate) or trend inflation (in our framework, the trend component of core CPI or the core PCE price indexes). 
Our approach has so far been deliberately and entirely data-driven, and we have been careful to impose as few restrictions as possible in order to let the data speak freely. This approach has undeniably some important merits, as estimation of GDO fits naturally in our framework, and the Trend-Cycle decomposition that we obtain for GDO is economically sensible. However, we believe that imposing the statistical restrictions described in Section \ref{sec:StaticFM}, thus eliminating the mis-specification error when computing the Trend-Cycle decomposition, as well as imposing economically meaningful constraints, is an essential step forward. Our view is that one way to proceed is to consider Bayesian estimation of the model, so that our economic and statistical knowledge of the data can be included by means of suitable priors. All this is the subject of our current research. \singlespacing {\small{ \setlength{\bibsep}{.2cm} }} \setcounter{section}{0} \setcounter{subsection}{0} \setcounter{equation}{0} \setcounter{table}{0} \setcounter{figure}{0} \setcounter{footnote}{0} \gdef\thesection{Appendix \Alph{section}} \gdef\thesubsection{\Alph{section}.\arabic{subsection}} \gdef\thefigure{\Alph{section}\arabic{figure}} \gdef\theequation{\Alph{section}\arabic{equation}} \gdef\thetable{\Alph{section}\arabic{table}} \gdef\thefootnote{\Alph{section}\arabic{footnote}} \section{Representation results}\label{sec:proofsRepresentation} Hereafter, and throughout all appendices, we consider restriction R2 with $s=1$, as found empirically in Section \ref{sec:emp}. Therefore, $r=2q$.
\subsection{Proof of restriction R3} For the dynamic factors consider the VECM(2) \begin{equation}\label{vecmdyn} \Delta \bm f_t = - \mathbf a\mathbf b' \bm f_{t-3} + \bm\Gamma_1 \Delta \bm f_{t-1}+ \bm\Gamma_2 \Delta \bm f_{t-2} +\mathbf u_t, \end{equation} where $\mathbf a$ and $\mathbf b$ are $q\times d$; for simplicity we consider just the case of two lags, since this implies a VECM(1), and therefore a VAR(2), for the static factors, as implemented in \eqref{eq:SS2}. \noindent First assume that in R1 we have $\mathbf K=\mathbf I_r$. Our aim is then to find the correct VECM representation for $\mathbf F_t=(\bm f_t'\;\bm f_{t-1}')'$ when the VECM in \eqref{vecmdyn} and restrictions R1 and R2 hold. Since we model $\mathbf F_t$ as a VAR(2), by R1 we must have a VECM(1) with reduced-rank innovations, hence \begin{equation}\label{vecmsta} \Delta \mathbf F_t = -\bm\alpha\bm\beta'\mathbf F_{t-2}+ \mathbf M\Delta\mathbf F_{t-1} + \mathbf H\mathbf u_t, \end{equation} where $\bm\alpha$ and $\bm\beta$ are $r\times c$ with $c<r$ and $\mathbf H$ is $r\times q$. Moreover, from \citet{BLL1} we have $d\le c\le (r-q+d)$. We are then interested in finding $c$, and the expressions of $\mathbf M$, $\bm\alpha$, $\bm\beta$, and $\mathbf H$ as functions of the parameters $\mathbf a$, $\mathbf b$, $\bm\Gamma_1$, and $\bm\Gamma_2$ in \eqref{vecmdyn}. Let us write $\bm\alpha=(\bm\alpha_1'\;\bm\alpha_2')'$ and $\bm\beta=(\bm\beta_1'\;\bm\beta_2')'$, where $\bm\alpha_1$, $\bm\alpha_2$, $\bm\beta_1$, $\bm\beta_2$ are all $q\times c$. We also denote by $\mathbf M_{ij}$, for $i,j=1,2$, the four $q\times q$ blocks of $\mathbf M$, and by $\mathbf H_1$ and $\mathbf H_2$ the two $q\times q$ blocks of $\mathbf H$.
Following \citet{proietti97}, we define the $(2r+c)$-dimensional vector \begin{equation} \mathbf G_t=\left(\begin{array}{l} \Delta \mathbf F_t\\ \Delta \mathbf F_{t-1}\\ \bm \beta'\mathbf F_{t-2} \end{array} \right)=\left(\begin{array}{l} \Delta \bm f_t\\ \Delta \bm f_{t-1}\\ \Delta \bm f_{t-1}\\ \Delta \bm f_{t-2}\\ \bm \beta_1'\bm f_{t-2}+\bm \beta_2'\bm f_{t-3} \end{array} \right).\nonumber \end{equation} Then, the state-space form of \eqref{vecmsta} is given by \begin{align} \Delta \mathbf F_t &= {\mathbf Z} \mathbf G_t,\nonumber\\ \mathbf G_t &={\mathbf T}\mathbf G_{t-1} +{\mathbf Z}'\mathbf H\mathbf u_t,\label{ss2b} \end{align} with the $r\times (2r+c)$ matrix ${\mathbf Z}=(\mathbf I_r \;\mathbf 0_r\;\mathbf 0_{r\times c})$. Then, \begin{equation}\label{Ztilde} {\mathbf Z}'\mathbf H= \left(\begin{array}{l} \mathbf I_r \\ \mathbf 0_r\\ \mathbf 0_{c\times r} \end{array} \right)\left(\begin{array}{l} \mathbf H_1\\ \mathbf H_2 \end{array} \right) = \left(\begin{array}{l} \mathbf H_1\\ \mathbf H_2\\ \mathbf 0_q\\ \mathbf 0_q\\ \mathbf 0_{c\times q} \end{array} \right),\nonumber \end{equation} and the $(2r+c)\times (2r+c)$ matrix ${\mathbf T}$ is given by \begin{equation}\label{Ttilde} {\mathbf T} = \left(\begin{array}{lll} \mathbf M & -\bm\alpha\bm\beta' & -\bm\alpha\\ \mathbf I_r & \mathbf 0_r & \mathbf 0_{r\times c}\\ \mathbf 0_{c\times r} &\bm\beta'&\mathbf I_c \end{array} \right) =\left(\begin{array}{ll | ll | l} \mathbf M_{11} & \mathbf M_{12}& -\bm \alpha_1\bm \beta_1'& -\bm \alpha_1\bm \beta_2'& -\bm \alpha_1\\ \mathbf M_{21} & \mathbf M_{22}& -\bm \alpha_2\bm \beta_1'& -\bm \alpha_2\bm \beta_2'& -\bm \alpha_2\\ \hline \mathbf I_q & \mathbf 0_q& \mathbf 0_q & \mathbf 0_q & \mathbf 0_{q\times c}\\ \mathbf 0_q & \mathbf I_q & \mathbf 0_q & \mathbf 0_q & \mathbf 0_{q\times c}\\ \hline \mathbf 0_{c\times q} & \mathbf 0_{c\times q} & \bm \beta_1'& \bm \beta_2' & \mathbf I_{c}\\ \end{array} \right).\nonumber \end{equation} Now substituting these definitions
into \eqref{ss2b} we obtain five $q$-dimensional equations. The first one is \begin{align} \Delta \bm f_t &=\mathbf M_{11} \Delta \bm f_{t-1}+ \mathbf M_{12} \Delta \bm f_{t-2}-\bm\alpha_1\bm\beta_1' \bm f_{t-2}-\bm\alpha_1\bm\beta_2' \bm f_{t-3}+\mathbf H_1\mathbf u_t,\nonumber \end{align} which is equivalent to \eqref{vecmdyn} when \begin{equation}\label{map2} \mathbf M_{11}=\bm\Gamma_1,\quad \mathbf M_{12}=\bm\Gamma_2, \quad \bm\alpha_1=\mathbf a,\quad\bm\beta_1 =\mathbf 0_{q\times c},\quad\bm\beta_2 =\mathbf b, \quad \mathbf H_1=\mathbf I_q,\quad c=d. \end{equation} The second equation is \begin{align} \Delta\bm f_{t-1} &=\mathbf M_{21}\Delta\bm f_{t-1}+\mathbf M_{22}\Delta \bm f_{t-2}-\bm\alpha_2\bm\beta_1'\Delta\bm f_{t-2} -\bm\alpha_2\bm\beta_2'\Delta\bm f_{t-3} -\bm\alpha_2\bm \beta_1'\bm f_{t-3}-\bm\alpha_2\bm \beta_2'\bm f_{t-4}+\mathbf H_2\mathbf u_t,\nonumber \end{align} from which we see that we must also have \begin{equation}\label{map3} \mathbf M_{21}=\mathbf I_q,\qquad \mathbf M_{22}=\mathbf 0_q, \qquad\bm \alpha_2 =\mathbf 0_{q\times c}, \qquad \mathbf H_2=\mathbf 0_q. \end{equation} Under \eqref{map2} and \eqref{map3} the third, fourth, and fifth equations in \eqref{ss2b} are just identities. \noindent By imposing these restrictions we obtain the mapping between the VECM(1) for $\mathbf F_t$ in \eqref{vecmsta} and the VECM(2) for $\bm f_t$ in \eqref{vecmdyn}: \begin{equation}\label{map4} \mathbf M = \left(\begin{array}{ll} \bm\Gamma_1 &\bm\Gamma_2\\ \mathbf I_q &\mathbf 0_q \end{array} \right),\qquad \bm\alpha = \left(\begin{array}{l} \mathbf a\\ \mathbf 0_{q\times d} \end{array} \right),\qquad \bm\beta=\left(\begin{array}{l} \mathbf 0_{q\times d}\\ \mathbf b \end{array} \right), \qquad \mathbf H=\left(\begin{array}{c} \mathbf I_q\\ \mathbf 0_q \end{array} \right).
\end{equation} If we now consider a generic $\mathbf K$ in R1, then \eqref{vecmsta} holds for $\mathbf F_t= \mathbf K(\bm f_t'\; \bm f_{t-1}')'$ and \eqref{map4} becomes \begin{equation}\label{map5} \mathbf M = \mathbf K\left(\begin{array}{ll} \bm\Gamma_1 &\bm\Gamma_2\\ \mathbf I_q &\mathbf 0_q \end{array} \right)\mathbf K^{-1},\qquad \bm\alpha = \mathbf K\left(\begin{array}{l} \mathbf a\\ \mathbf 0_{q\times d} \end{array} \right),\qquad \bm\beta={\mathbf K^{-1}}'\left(\begin{array}{l} \mathbf 0_{q\times d}\\ \mathbf b \end{array} \right), \qquad \mathbf H=\mathbf K\left(\begin{array}{c} \mathbf I_q\\ \mathbf 0_q \end{array} \right).\nonumber \end{equation} The cointegration rank $c$ of $\mathbf F_t$ is given by $\mathsf{rk}(\bm\alpha\bm\beta')=d$. \subsection{Reduced and structural form of the state-space representation} Consider \eqref{eq:SS1}-\eqref{eq:SS2} written in matrix notation and using the companion form of the VAR: \begin{align} \mathbf x_{t}&=\bm\Lambda \mathbf F_t + \bm\xi_{t},\label{eq:kf1_app}\\ \left(\begin{array}{l} \mathbf F_t\\ \mathbf F_{t-1} \end{array} \right) &=\left( \begin{array}{cc} \mathbf A_1&\mathbf A_2\\ \mathbf I_r &\mathbf 0_r \end{array} \right) \left(\begin{array}{c} \mathbf F_{t-1}\\ \mathbf F_{t-2} \end{array} \right) +\left(\begin{array}{c} \mathbf H\\ \mathbf 0_{r\times q} \end{array} \right) \mathbf u_t,\label{eq:kf2_app} \end{align} with $\bm\Lambda=(\bm\lambda_1\cdots \bm\lambda_n)'$ the $n\times r$ loadings matrix. We call \eqref{eq:kf1_app}-\eqref{eq:kf2_app} the reduced form of the model.
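As a quick numerical sanity check (not part of the paper's derivation), one can simulate the VECM(2) for $\bm f_t$ and verify that the stacked vector $\mathbf F_t=(\bm f_t'\;\bm f_{t-1}')'$ satisfies the VECM(1) with the coefficients given by the mapping in \eqref{map4}, taking $\mathbf K=\mathbf I_r$ and the error-correction term at $\mathbf F_{t-2}$, matching the $\bm f_{t-3}$ timing in \eqref{vecmdyn}; all parameter values below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
q, d, T = 2, 1, 60
a = 0.5 * rng.normal(size=(q, d)); b = 0.5 * rng.normal(size=(q, d))
G1 = 0.2 * rng.normal(size=(q, q)); G2 = 0.1 * rng.normal(size=(q, q))

# simulate the VECM(2):  Δf_t = -a b' f_{t-3} + G1 Δf_{t-1} + G2 Δf_{t-2} + u_t
f = np.zeros((T, q)); u = rng.normal(size=(T, q))
for t in range(3, T):
    f[t] = f[t-1] + (-a @ b.T @ f[t-3] + G1 @ (f[t-1] - f[t-2])
                     + G2 @ (f[t-2] - f[t-3]) + u[t])

# stacked factors F_t = (f_t', f_{t-1}')' and the mapping in (map4), with K = I_r
Fst = np.array([np.concatenate([f[t], f[t-1]]) for t in range(T)])
M = np.block([[G1, G2], [np.eye(q), np.zeros((q, q))]])
alpha = np.vstack([a, np.zeros((q, d))])
beta = np.vstack([np.zeros((q, d)), b])
H = np.vstack([np.eye(q), np.zeros((q, q))])

# check  ΔF_t = -αβ' F_{t-2} + M ΔF_{t-1} + H u_t  at every usable t
for t in range(4, T):
    lhs = Fst[t] - Fst[t-1]
    rhs = -alpha @ beta.T @ Fst[t-2] + M @ (Fst[t-1] - Fst[t-2]) + H @ u[t]
    assert np.allclose(lhs, rhs)
# cointegration rank of F_t: rk(αβ') = d
assert np.linalg.matrix_rank(alpha @ beta.T) == d
```

The check is purely algebraic, so it holds path by path regardless of the stability of the simulated process.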
Similarly, consider the structural form, where, for convenience, we write the same equation twice in the VAR: \begin{align} \mathbf x_{t} &= \mathbf B_0 \bm f_t+\mathbf B_1\bm f_{t-1}+ \bm\xi_{t},\label{eq:gdfm1_app}\\ \left(\begin{array}{c} \bm f_{t}\\ \bm f_{t-1}\\ \bm f_{t-1}\\ \bm f_{t-2} \end{array} \right)&= \left(\begin{array}{cccc} \bm\Pi_1&\bm\Pi_2&\mathbf 0_q &\bm\Pi_3\\ \mathbf I_q &\mathbf 0_q&\mathbf 0_q&\mathbf 0_q\\ \mathbf I_q &\mathbf 0_q&\mathbf 0_q&\mathbf 0_q\\ \mathbf 0_q &\mathbf I_q&\mathbf 0_q&\mathbf 0_q\\ \end{array} \right) \left(\begin{array}{c} \bm f_{t-1}\\ \bm f_{t-2}\\ \bm f_{t-2}\\ \bm f_{t-3} \end{array} \right)+\left(\begin{array}{c} \mathbf I_q\\ \mathbf 0_q\\ \mathbf 0_q\\ \mathbf 0_q \end{array} \right) \mathbf u_t,\label{eq:gdfm2_app} \end{align} where $\mathbf B_0=(\bm b_{01}\cdots \bm b_{0n})'$ and $\mathbf B_1=(\bm b_{11}\cdots \bm b_{1n})'$ are both $n\times q$. Because of R1 there exists an invertible $r\times r$ matrix $\mathbf K$ such that \begin{align} \mathbf F_t = \mathbf K (\bm f_t'\,\bm f_{t-1}')', &\qquad (\bm f_t'\,\bm f_{t-1}')'= \mathbf K^{-1}\mathbf F_t,\label{eq:R1_app}\\ \bm\Lambda = (\mathbf B_0\,\mathbf B_1)\mathbf K^{-1}, &\qquad (\mathbf B_0\,\mathbf B_1)=\bm\Lambda \mathbf K.\label{eq:R1_app_2} \end{align} By comparing \eqref{eq:kf1_app}-\eqref{eq:kf2_app} with \eqref{eq:gdfm1_app}-\eqref{eq:gdfm2_app} and using \eqref{eq:R1_app}-\eqref{eq:R1_app_2}, we obtain the parameters of the reduced form: \begin{align} \mathbf A_1 =\mathbf K\left(\begin{array}{cc} \bm\Pi_1&\bm\Pi_2\\ \mathbf I_q&\mathbf 0_q \end{array}\right)\mathbf K^{-1},\qquad \mathbf A_2 =\mathbf K\left(\begin{array}{cc} \mathbf 0_q&\bm\Pi_3\\ \mathbf 0_q&\mathbf 0_q \end{array}\right)\mathbf K^{-1},\qquad \mathbf H &= \mathbf K\left(\begin{array}{c} \mathbf I_q\\ \mathbf 0_q \end{array}\right).\label{eq:R1_app_3} \end{align} The relations \eqref{eq:R1_app}, \eqref{eq:R1_app_2}, and \eqref{eq:R1_app_3} are used throughout the following.
Moreover, since a VAR(2) of dimension $r$ can always be written as a VAR(1) of dimension $2r$, to avoid introducing further notation, hereafter we consider the case of a VAR(1) for $\mathbf F_t$, where $\mathbf A\equiv \mathbf A_1$. \subsection{Properties of the structural and reduced form of the linear system} \begin{lem}\label{lem:control} The structural model \eqref{eq:gdfm1_app}-\eqref{eq:gdfm2_app} is stabilizable and detectable. \end{lem} \noindent{\bf Proof.} Equations \eqref{eq:gdfm1_app}-\eqref{eq:gdfm2_app} define a linear system with $r=2q$ latent states $(\bm f_t'\;\bm f_{t-1}')'$. We say that a linear system is stabilizable if its unstable (non-stationary) states are controllable and all uncontrollable states are stable (see \citealp{AM79}, page 342), where stability is dictated by the eigenvalues of the matrix of VAR coefficients, which we denote as \begin{align} \widetilde{\mathbf A}={\left(\begin{array}{cc} \bm\Pi_1&\bm\Pi_2\\ \mathbf I_q &\mathbf 0_q \end{array} \right)}.\nonumber \end{align} Because of cointegration, $\widetilde{\mathbf A}$ has $(q-d)$ unit eigenvalues corresponding to $(q-d)$ unstable states. Moreover, $(\mathbf I_q-\bm\Pi_1-\bm\Pi_2)=\mathbf a\mathbf b'$, where $\mathbf a$ and $\mathbf b$ are full column-rank $q\times d$ matrices, so that $\mathsf{rk}(\mathbf a\mathbf b')=d$. Define the $q\times (q-d)$ matrices $\mathbf a_{\perp}$ and $\mathbf b_{\perp}$ such that $\mathbf a_{\perp}'\mathbf a=\mathbf b_{\perp}'\mathbf b=\mathbf 0_{(q-d)\times d}$. Then, since $\mathsf{rk}(\mathbf a_{\perp}'\mathbf I_q)=(q-d)$, the unstable states are controllable because they satisfy the Popov-Belevitch-Hautus rank test (see \citealp{franchi}, Theorem 2.1, and \citealp{AM07}, Corollary 6.11, page 249). \noindent Now, by looking at \eqref{eq:gdfm2_app}, we see that $\widetilde{\mathbf A}$ also has $(r-q+d)=(q+d)$ eigenvalues which are smaller than one in absolute value.
Of these, $q$ correspond to states which are uncontrollable, because they are not driven by any shock, but are also stable, since they have no dynamics (see the second equation in \eqref{eq:gdfm2_app}). The remaining $d$ states follow a stable VAR, hence are controllable. \noindent Similarly, we say that a linear system is detectable if its unstable states are observable and all unobservable states are stable (see \citealp{AM79}, page 342). First, notice that $\mathsf{rk}(\mathbf B_0)=q$ and $\mathsf{rk}(\mathbf B_1)=q$ because of C2 and \eqref{eq:R1_app_2}, therefore $\mathsf{rk}(\mathbf B_0 \mathbf b_{\perp})=(q-d)$ and $\mathsf{rk}(\mathbf B_1 \mathbf b_{\perp})=(q-d)$, which implies that the unstable states are observable because they satisfy the Popov-Belevitch-Hautus rank test (see \citealp{franchi}, Theorem 2.1, and \citealp{AM07}, Corollary 6.11, page 249). Since $\mathbf B_0$ and $\mathbf B_1$ have full column-rank, there are no unstable unobservable states. This completes the proof. $\Box$ \setcounter{footnote}{0} \setcounter{equation}{0} \setcounter{subsection}{0} \section{Details of estimation}\label{sec:proofsDetails} This appendix provides details on the estimation of factors and parameters, which are necessary to introduce the notation required for the proofs in \ref{sec:proofsProposition}. The model considered is \eqref{eq:SS1}-\eqref{eq:SS3}, where for simplicity of exposition we consider a VAR(1) for the factors. As explained in the text, without loss of generality we can assume $\bm\xi_t\sim I(0)$, thus considering as latent states only the $r$ static factors. \noindent Adding $I(1)$ idiosyncratic components as latent states does not increase the dimension of the parameter space, but it increases the dimension of the latent state vector.
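The unit-eigenvalue count and the Popov-Belevitch-Hautus (PBH) rank test used in the proof above can be illustrated numerically; the parametrization below is a hypothetical example, not taken from the paper:

```python
import numpy as np

q, d = 2, 1
a = np.array([[1.0], [0.0]]); b = np.array([[1.0], [0.0]])   # rank-d cointegration
G1 = 0.5 * np.eye(q)
# VECM(1)  Δf_t = -a b' f_{t-1} + G1 Δf_{t-1} + u_t  →  VAR(2) coefficients
P1 = np.eye(q) - a @ b.T + G1
P2 = -G1
Atilde = np.block([[P1, P2], [np.eye(q), np.zeros((q, q))]])  # companion matrix
H = np.vstack([np.eye(q), np.zeros((q, q))])                  # loading of u_t

eig = np.linalg.eigvals(Atilde)
unit = eig[np.abs(eig - 1) < 1e-8]
assert len(unit) == q - d      # (q-d) unit eigenvalues, i.e. the common trends

# PBH test: a unit eigenvalue λ is controllable iff rank([λI - A, H]) = 2q
for lam in unit:
    pbh = np.hstack([lam * np.eye(2 * q) - Atilde, H])
    assert np.linalg.matrix_rank(pbh) == 2 * q
```

Here the $q$ shock-free duplicate states of the structural form are omitted, so the check is run on the $2q$-dimensional companion representation only.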
However, since the idiosyncratic components are assumed to be orthogonal (see D2), and moreover they are orthogonal to the static factors (see A2), the results in \ref{sec:proofsProposition} can be generalized to this case by treating each new state separately. \subsection{Expectation Maximization algorithm} In what follows we denote the whole sample of observed data as $\bm{\mathcal X}_T:= (\mathbf x_1\cdots \mathbf x_T)'$ and the whole history of the unknown factors as $\bm{\mathcal F}_T:= (\mathbf F_1\cdots\mathbf F_T)'$. Recall that the vector of parameters is given by ${\bm\Theta}:=(\mbox{vec}({\bm\Lambda})' \;\mbox{vec}({\mathbf A})'\; \mbox{vec}({\mathbf H})'\; \;\mbox{diag}({\mathbf R}))'$. To avoid heavier notation we use $\bm\Theta$ to indicate both a generic value of the parameters and the true value; whether we refer to one or the other is either clearly implied by the context or explicitly stated. \noindent The joint pdf of data and factors is denoted as $f(\bm{\mathcal X}_T,\bm{\mathcal F}_T;\bm\Theta)$ and the corresponding joint log-likelihood is denoted as $\ell(\bm{\mathcal X}_T,\bm{\mathcal F}_T;\bm\Theta):= \log f(\bm{\mathcal X}_T,\bm{\mathcal F}_T;\bm\Theta)$; it is such that \begin{equation}\label{eq:LL1} \ell(\bm{\mathcal X}_T,\bm{\mathcal F}_T;\bm\Theta)=\ell(\bm{\mathcal X}_T|\bm{\mathcal F}_T;\bm\Theta)+\ell(\bm{\mathcal F}_T;\bm\Theta), \end{equation} where $\ell(\bm{\mathcal X}_T|\bm{\mathcal F}_T;\bm\Theta)$ is the log-likelihood of the data conditional on the factors and $\ell(\bm{\mathcal F}_T;\bm\Theta)$ is the marginal log-likelihood of the factors.
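The decomposition \eqref{eq:LL1} is just the factorization $f(\bm{\mathcal X}_T,\bm{\mathcal F}_T)=f(\bm{\mathcal X}_T|\bm{\mathcal F}_T)\,f(\bm{\mathcal F}_T)$ taken in logs; for jointly Gaussian vectors it can be verified numerically on a toy example (dimensions and covariances below are arbitrary, unrelated to the empirical model):

```python
import numpy as np

def mvn_logpdf(z, mu, S):
    """Log-density of a multivariate normal N(mu, S) at z."""
    k = len(mu)
    d = z - mu
    return -0.5 * (k * np.log(2 * np.pi) + np.log(np.linalg.det(S))
                   + d @ np.linalg.solve(S, d))

rng = np.random.default_rng(3)
nx, nf = 3, 2
B = rng.normal(size=(nx + nf, nx + nf))
S = B @ B.T + np.eye(nx + nf)            # joint covariance of (x, f), SPD
Sxx, Sxf, Sff = S[:nx, :nx], S[:nx, nx:], S[nx:, nx:]

z = rng.normal(size=nx + nf)
x, f = z[:nx], z[nx:]

lhs = mvn_logpdf(z, np.zeros(nx + nf), S)           # ℓ(x, f)
cond_mu = Sxf @ np.linalg.solve(Sff, f)             # E[x | f]
cond_S = Sxx - Sxf @ np.linalg.solve(Sff, Sxf.T)    # Var[x | f]
rhs = mvn_logpdf(x, cond_mu, cond_S) + mvn_logpdf(f, np.zeros(nf), Sff)
assert np.isclose(lhs, rhs)                          # ℓ(x,f) = ℓ(x|f) + ℓ(f)
```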
Because of D2 and D3 all log-likelihoods are Gaussian, and in particular \begin{align} \ell(\bm{\mathcal X}_T|\bm{\mathcal F}_T;\bm\Theta) &=-\frac{nT}2\log(2\pi)-\frac T2\log\det (\mathbf R)-\frac 12\mathsf{tr}\left[(\bm{\mathcal X}_T-\bm{\mathcal F}_T\bm\Lambda')\mathbf R^{-1}(\bm{\mathcal X}_T-\bm{\mathcal F}_T\bm\Lambda')'\right].\label{eq:loglikXcondF} \end{align} We first briefly review the steps of the EM algorithm, while in Section \ref{sec:EMML} we prove that the values of the parameters obtained at convergence of the EM algorithm converge to the QML estimator. \subsubsection{Initialization} The EM algorithm is initialised with estimated parameters \begin{equation}\label{initparam} \widehat{\bm\Theta}_0:=(\mbox{vec}(\widehat{\bm\Lambda}_0)' \;\mbox{vec}(\widehat{\mathbf A}_0)'\; \mbox{vec}(\widehat{\mathbf H}_0)'\; \;\mbox{diag}(\widehat{\mathbf R}_0))'. \end{equation} These are obtained as follows. From the integration of the first $r$ principal components of $\Delta\mathbf x_t$ we obtain an estimator of the factors, $\widetilde{\mathbf F}_t$, and then of the loadings, $\widehat{\bm\Lambda}_0$. The VAR parameters $\widehat{\mathbf A}_0$ are obtained by fitting a VAR on the estimated factors $\widetilde{\mathbf F}_t$, and the columns of $\widehat{\mathbf H}_0$ are given by the $q$ leading eigenvectors of the covariance matrix of the VAR residuals. Finally, the diagonal entries of $\widehat{\mathbf R}_0$ are obtained as sample variances of $\widehat{\xi}_{it,0}=x_{it}-\widehat{\bm\lambda}_{i,0}'\widetilde{\mathbf F}_t$. Consistency of these estimators is discussed in Section \ref{app:KFBLL}.
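A minimal sketch of this initialization step on simulated data (variable names are illustrative, not from the paper's code; the VAR step for $\widehat{\mathbf A}_0$ and $\widehat{\mathbf H}_0$ proceeds analogously on the estimated factors):

```python
import numpy as np

rng = np.random.default_rng(1)
n, T, r = 20, 200, 2
F_true = np.cumsum(rng.normal(size=(T, r)), axis=0)       # r I(1) factors
x = F_true @ rng.normal(size=(n, r)).T + 0.5 * rng.normal(size=(T, n))

# principal components of Δx_t, then integration, give initial factor estimates
dx = np.diff(x, axis=0)
dx = dx - dx.mean(axis=0)
_, _, Vt = np.linalg.svd(dx, full_matrices=False)
F0 = np.vstack([np.zeros(r), np.cumsum(dx @ Vt[:r].T, axis=0)])

# loadings by OLS of x on the integrated PCs (stored transposed, r x n)
Lam0_T, *_ = np.linalg.lstsq(F0, x, rcond=None)
resid = x - F0 @ Lam0_T
R0 = resid.var(axis=0)                                    # diag entries of R̂₀
```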
\subsubsection{E-step} At iteration $k\ge 0$, given $\bm{\mathcal X}_T$ and an estimate of the parameters $\widehat{\bm\Theta}_k$, we compute the expected log-likelihood as a function of a generic value of the parameters $\bm\Theta$, where the expectation is computed with respect to the conditional distribution of $\bm{\mathcal F}_T$ given $\bm{\mathcal X}_T$ when using $\widehat{\bm\Theta}_k$: \begin{equation}\label{eq:Estep} \mathcal Q(\bm\Theta;\widehat{\bm\Theta}_k) := \int_{\mathbb R^{r\times T}} \ell(\bm{\mathcal X}_T, \bm{\mathcal F}_T;\bm\Theta) f(\bm{\mathcal F}_T|\bm{\mathcal X}_T;\widehat{\bm\Theta}_k) \mathrm d \bm{\mathcal F}_T=\E_{\widehat{\bm\Theta}_k} [\ell(\bm{\mathcal X}_T,\bm{\mathcal F}_T;\bm\Theta)|\bm{\mathcal X}_T]. \end{equation} In the Gaussian case \eqref{eq:Estep} depends on the conditional mean of the factors and their conditional second moments, which are obtained with the KS when using the parameters $\widehat{\bm\Theta}_k$ and are given by (see Section \ref{sec:KFKS} for details) \begin{align}\label{eq:FKS_PKS} {\mathbf F}_{t|T,k} := \E_{\widehat{\bm\Theta}_k}[\mathbf F_t |\bm{\mathcal X}_T],\qquad{\mathbf P}_{t|T,k} := \E_{\widehat{\bm\Theta}_k}[(\mathbf F_t-{\mathbf F}_{t|T,k})(\mathbf F_t-{\mathbf F}_{t|T,k})' |\bm{\mathcal X}_T]. \end{align} \subsubsection{M-step} A new estimator of the parameters is obtained by maximising the expected log-likelihood over all possible values of the parameters: \begin{equation}\label{eq:Mstep} \widehat{\bm\Theta}_{k+1} = \arg\!\!\!\!\!\max_{\bm\Theta\in\Omega\subseteq \mathbb R^Q}\; \mathcal Q(\bm\Theta;\widehat{\bm\Theta}_k).
\end{equation} Thus, maximizing the conditional expectation of \eqref{eq:loglikXcondF} and using \eqref{eq:FKS_PKS}, we have the loadings estimator \begin{align}\label{eq:loadEM} \widehat{\bm\Lambda}_{k+1}&= \left(\sum_{t=1}^T \E_{\widehat{\bm\Theta}_k}[ \mathbf x_t\mathbf F_t' |\bm{\mathcal X}_T]\right)\left(\sum_{t=1}^T \E_{\widehat{\bm\Theta}_k}[ \mathbf F_t\mathbf F_t' |\bm{\mathcal X}_T]\right)^{-1}\nonumber\\ &=\left(\sum_{t=1}^T \mathbf x_t {\mathbf F}_{t|T,k}'\right)\left(\sum_{t=1}^T \left({\mathbf F}_{t|T,k}{\mathbf F}_{t|T,k}'+{\mathbf P}_{t|T,k}\right)\right)^{-1}.\nonumber \end{align} Similarly we can obtain estimates of the other parameters, $\widehat{\mathbf A}_{k+1}$ and $\widehat{\mathbf R}_{k+1}$ (see, e.g., \citealp{banburamodugno14}, for their expressions). The columns of $\widehat{\mathbf H}_{k+1}$ are obtained as the $q$ leading eigenvectors of the matrix \begin{equation}\label{eq:sigmaEM} \widehat{\bm\Sigma}_{k+1} = \frac 1 T\left(\sum_{t=1}^T \E_{\widehat{\bm\Theta}_k}[ \mathbf F_t\mathbf F_t' |\bm{\mathcal X}_T] -\widehat{\mathbf A}_{k+1}\sum_{t=1}^T\E_{\widehat{\bm\Theta}_k}[ \mathbf F_{t-1}\mathbf F_{t}' |\bm{\mathcal X}_T] \right),\nonumber \end{equation} which is an estimator of the covariance of the VAR residuals, and where the second expectation can also be computed from the output of the KS. \subsubsection{Convergence}\label{sec:EMML} Denote the QML estimator of the parameters as \begin{equation}\label{MLparam} \widehat{\bm\Theta}^*:=(\mbox{vec}(\widehat{\bm\Lambda}^*)' \;\mbox{vec}(\widehat{\mathbf A}^*)'\; \mbox{vec}(\widehat{\mathbf H}^*)'\; \;\mbox{diag}(\widehat{\mathbf R}^*))'; \end{equation} then by definition we have \begin{equation} \widehat{\bm\Theta}^*= \arg\!\!\!\!\!\max_{\bm\Theta\in\Omega\subseteq \mathbb R^Q}\;\ell(\bm{\mathcal X}_T;\bm\Theta).
\end{equation} where $\ell(\bm{\mathcal X}_T;\bm\Theta)$ is the log-likelihood of the data, such that \begin{equation}\label{eq:LLX} \ell(\bm{\mathcal X}_T;\bm\Theta) = \ell(\bm{\mathcal X}_T, \bm{\mathcal F}_T;\bm\Theta)-\ell(\bm{\mathcal F}_T|\bm{\mathcal X}_T;\bm\Theta), \end{equation} where the first term on the rhs is given by \eqref{eq:LL1} and the second can be computed using the output of the KS for a given value of $\bm\Theta$. Define the expectation \begin{align} \mathcal H(\bm\Theta;\widehat{\bm\Theta}_k) &:= \int_{\mathbb R^{r\times T}}\ell(\bm{\mathcal F}_T|\bm{\mathcal X}_T;\bm\Theta) f(\bm{\mathcal F}_T|\bm{\mathcal X}_T;\widehat{\bm\Theta}_k) \mathrm d \bm{\mathcal F}_T=\E_{\widehat{\bm\Theta}_k}[\ell(\bm{\mathcal F}_T|\bm{\mathcal X}_T;\bm\Theta)|\bm{\mathcal X}_T],\label{eq:H} \end{align} and recall the definition of $\mathcal Q(\bm\Theta;\widehat{\bm\Theta}_k)$ in the E-step in \eqref{eq:Estep}. Since the lhs of \eqref{eq:LLX} does not depend on $\bm{\mathcal F}_T$, by taking its expectation with respect to the conditional distribution of $\bm{\mathcal F}_T$ given $\bm{\mathcal X}_T$ when using $\widehat{\bm\Theta}_k$, for any $\bm\Theta\in \Omega$, we have \begin{align} \ell(\bm{\mathcal X}_T;\bm\Theta) &=\mathcal Q(\bm\Theta;\widehat{\bm\Theta}_k)-\mathcal H(\bm\Theta;\widehat{\bm\Theta}_k).\label{eq:LLX2} \end{align} Now, by definition of the Kullback-Leibler divergence, we have (see also Lemma 1 in \citealp{DLR77}) \begin{align} \mathcal H(\widehat{\bm\Theta}_{k+1};\widehat{\bm\Theta}_k)&\leq \mathcal H(\widehat{\bm\Theta}_k;\widehat{\bm\Theta}_k).\label{eq:HHH} \end{align} Hence, from \eqref{eq:LLX2} and \eqref{eq:HHH}, for any $k$, \[ \ell(\bm{\mathcal X}_T;\widehat{\bm\Theta}_{k+1})- \ell(\bm{\mathcal X}_T;\widehat{\bm\Theta}_{k})\geq \mathcal Q(\widehat{\bm\Theta}_{k+1};\widehat{\bm\Theta}_{k})- \mathcal Q(\widehat{\bm\Theta}_k;\widehat{\bm\Theta}_k)\geq 0, \] where the last inequality is a consequence of the M-step in
\eqref{eq:Mstep}. This shows that the log-likelihood increases monotonically as $k$ increases. Moreover, since due to Gaussianity $\mathcal Q(\bm{\Theta};\bm{\Theta}')$ is continuous in $\bm{\Theta}$ and $\bm{\Theta}'$ and its gradient $\nabla_{\bm\Theta}\mathcal Q(\bm{\Theta};\bm{\Theta}')$ is continuous in $\bm{\Theta}$, the conditions of Theorems 1 and 2 and Corollary 1 in \citet{wu83} are satisfied, and we have convergence of the log-likelihood to its unique maximum and of the parameters to the corresponding QML estimators: \begin{align}\label{eq:EMtoML} &\lim_{k\to\infty}\ell(\bm{\mathcal X}_T;\widehat{\bm\Theta}_{k})=\ell(\bm{\mathcal X}_T;\widehat{\bm\Theta}^*),\qquad \lim_{k\to\infty}\widehat{\bm\Theta}_{k} = \widehat{\bm\Theta}^*. \end{align} The previous result holds in the limit $k\to\infty$, but in practice we can run the EM algorithm only for a finite number of iterations $k_{\max}$. Define, for any $k$, \[ \Delta\ell_k = \frac{\vert \ell(\bm{\mathcal X}_T,{\bm{\mathcal F}}_{T,k+1};\widehat{\bm\Theta}_{k+1}) -\ell(\bm{\mathcal X}_T,{\bm{\mathcal F}}_{T,k};\widehat{\bm\Theta}_{k})\vert} {\vert \ell(\bm{\mathcal X}_T,{\bm{\mathcal F}}_{T,k+1};\widehat{\bm\Theta}_{k+1}) \vert+\vert\ell(\bm{\mathcal X}_T,{\bm{\mathcal F}}_{T,k};\widehat{\bm\Theta}_{k})\vert}, \] where ${\bm{\mathcal F}}_{T,k}:=({\mathbf F}_{1|T,k} \cdots {\mathbf F}_{T|T,k})'$. We say that the algorithm has converged at iteration $k^*<k_{\max}$ according to the following rule, defined for a given threshold $\eta$: \begin{equation} \Delta\ell_{k^*}<\eta, \;\mbox{ but }\; \Delta\ell_{k^*-1}\ge \eta.\nonumber \end{equation} Once we find $k^*$, our estimator of the parameters is defined as $\widehat{\bm\Theta}:= \widehat{\bm\Theta}_{k^*}$. The corresponding estimator of the factors is then defined as $\widehat{\mathbf F}_{t}:= {\mathbf F}_{t|T,k^*}$, obtained by running the KS one last time using $\widehat{\bm\Theta}_{k^*}$.
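The stopping rule amounts to monitoring the relative change of the log-likelihood across iterations; a minimal sketch (the threshold value is illustrative):

```python
def em_converged(ll_new, ll_old, eta=1e-4):
    """Relative log-likelihood change Δℓ_k used to stop the EM iterations."""
    delta = abs(ll_new - ll_old) / (abs(ll_new) + abs(ll_old))
    return delta < eta
```

Inside the EM loop one would stop at the first iteration $k^*$ for which `em_converged` returns `True`, having returned `False` at the previous iteration.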
The rate of convergence of $\widehat{\bm\Theta}$ to $\widehat{\bm\Theta}^*$ in \eqref{eq:EMtoML} is studied in Lemma \ref{lem:convEM} below. \subsection{Kalman filter and Kalman smoother}\label{sec:KFKS} For ease of notation, assume that the true parameters, collected in the vector $\bm\Theta$, are known. When using the KF-KS in the EM algorithm at a given iteration $k$, the factor estimators given below are obtained by replacing $\bm\Theta$ with $\widehat{\bm\Theta}_k$ throughout this section. We denote the conditional expectation and covariance of the factors as \begin{equation}\label{eq:condmean} \mathbf F_{t|s} := \E_{\bm\Theta}[\mathbf F_t|\bm{\mathcal X}_s], \qquad \mathbf P_{t|s} := \E_{\bm\Theta}[(\mathbf F_t-\mathbf F_{t|s})(\mathbf F_t-\mathbf F_{t|s})'|\bm{\mathcal X}_s], \end{equation} where $\bm{\mathcal X}_s:=(\mathbf x_1\cdots \mathbf x_s)'$. Under Gaussianity (D2 and D3) these can be computed with the KF-KS. Specifically, when $s=t-1$ we have the optimal one-step-ahead prediction, when $s=t$ we have the optimal in-sample estimator, and when $s=T$ we have the optimal smoother. The KF gives the first two, while the KS gives the last. In particular, we denote the KF-KS estimators as $\mathbf F_{t|t}$ and $\mathbf F_{t|T}$ when using the true value $\bm\Theta$, and as ${\mathbf F}_{t|t,k}$ and ${\mathbf F}_{t|T,k}$ when using $\widehat{\bm\Theta}_k$ (see also \eqref{eq:FKS_PKS}).
\subsubsection{Forward iterations - Filtering} For given initial conditions $\mathbf F_{0|0}$ and $\mathbf P_{0|0}$, the KF is based on the forward iterations for $t=1,\ldots, T$: \begin{align} &\mathbf F_{t|t-1} = \mathbf A \mathbf F_{t-1|t-1},\label{eq:pred1}\\ &\mathbf P_{t|t-1} = \mathbf A\mathbf P_{t-1|t-1} \mathbf A' + \mathbf H\mathbf H',\label{eq:pred2}\\ &\mathbf F_{t|t} =\mathbf F_{t|t-1}+\mathbf P_{t|t-1}\bm\Lambda'(\bm\Lambda\mathbf P_{t|t-1}\bm\Lambda'+\mathbf R)^{-1}(\mathbf x_t-\bm\Lambda\mathbf F_{t|t-1}),\label{eq:up1}\\ &\mathbf P_{t|t} =\mathbf P_{t|t-1}-\mathbf P_{t|t-1}\bm\Lambda'(\bm\Lambda\mathbf P_{t|t-1}\bm\Lambda'+\mathbf R)^{-1}\bm\Lambda\mathbf P_{t|t-1}.\label{eq:up2} \end{align} Moreover, by combining \eqref{eq:pred2} and \eqref{eq:up2}, we obtain the Riccati difference equation \begin{equation}\label{eq:riccati} \mathbf P_{t+1|t}-\mathbf A\mathbf P_{t|t-1}\mathbf A'+\mathbf A\mathbf P_{t|t-1}\bm\Lambda'(\bm\Lambda\mathbf P_{t|t-1}\bm\Lambda'+\mathbf R)^{-1}\bm\Lambda\mathbf P_{t|t-1}\mathbf A'=\mathbf H\mathbf H'. \end{equation} The KF is started with given values of ${\mathbf F}_{0|0}$ and ${\mathbf P}_{0|0}$; the latter can be obtained with a diffuse prior run for $t<0$ (see \citealp{koopman97}, and \citealp{KD00}, for details). \subsubsection{Backward iterations - Smoothing} The KS is then based on the backward iterations for $t=T,\ldots, 1$: \begin{align} \mathbf F_{t|T} &=\mathbf F_{t|t}+\mathbf P_{t|t}\mathbf A'\mathbf P_{t+1|t}^{-1}(\mathbf F_{t+1|T}-\mathbf F_{t+1|t}),\label{eq:KS1}\\ \mathbf P_{t|T}&=\mathbf P_{t|t} + \mathbf P_{t|t} \mathbf A' \mathbf P_{t+1|t}^{-1} (\mathbf P_{t+1|T}-\mathbf P_{t+1|t})\mathbf P_{t+1|t}^{-1} \mathbf A \mathbf P_{t|t}.\label{eq:KS2} \end{align} The KS iterations in \eqref{eq:KS1} require $T$ inversions of $\mathbf P_{t|t-1}$, and in the singular case $r>q$ these matrices are likely to be singular (see also Lemma \ref{lem:steady3}).
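The forward recursions \eqref{eq:pred1}-\eqref{eq:up2} translate directly into code; the following is a minimal numpy sketch (no diffuse initialization, dense matrix inversion) rather than a production implementation, and the simulated system used to exercise it is purely illustrative:

```python
import numpy as np

def kalman_filter(x, Lam, A, H, R, F0, P0):
    """Forward pass: prediction and update steps, returns filtered moments."""
    T, r = x.shape[0], A.shape[0]
    F_filt = np.zeros((T, r)); P_filt = np.zeros((T, r, r))
    Q = H @ H.T
    F_prev, P_prev = F0, P0
    for t in range(T):
        F_pred = A @ F_prev                           # state prediction
        P_pred = A @ P_prev @ A.T + Q                 # covariance prediction
        S = Lam @ P_pred @ Lam.T + R                  # innovation covariance
        K = P_pred @ Lam.T @ np.linalg.inv(S)         # Kalman gain
        F_prev = F_pred + K @ (x[t] - Lam @ F_pred)   # state update
        P_prev = P_pred - K @ Lam @ P_pred            # covariance update
        F_filt[t], P_filt[t] = F_prev, P_prev
    return F_filt, P_filt

# toy usage on simulated data (dimensions are illustrative)
rng = np.random.default_rng(2)
n, r, q, T = 3, 2, 1, 50
A = 0.6 * np.eye(r); H = rng.normal(size=(r, q)); Lam = rng.normal(size=(n, r))
F = np.zeros((T, r))
for t in range(1, T):
    F[t] = A @ F[t-1] + H @ rng.normal(size=q)
x = F @ Lam.T + rng.normal(size=(T, n))
F_filt, P_filt = kalman_filter(x, Lam, A, H, np.eye(n), np.zeros(r), np.eye(r))
```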
There are two possible solutions to this problem. \citet{KA83} suggest using a generalized inverse of $\mathbf P_{t|t-1}$, like the Moore-Penrose one. Alternatively, it can be proved that \eqref{eq:KS1} can be written in an equivalent way, which does not require matrix inversion, and which is defined by the backward iterations for $t=T,\ldots, 1$: \begin{align} &\mathbf F_{t|T}=\mathbf F_{t|t-1}+\mathbf P_{t|t-1}\mathbf r_{t-1},\label{eq:KS3}\\ &\mathbf r_{t-1}=\bm\Lambda'(\bm\Lambda\mathbf P_{t|t-1}\bm\Lambda'+\mathbf R)^{-1}(\mathbf x_t-\bm\Lambda\mathbf F_{t|t-1})+\mathbf L'_t\mathbf r_t,\label{eq:KS4}\\ &\mathbf P_{t|T}=\mathbf P_{t|t-1}-\mathbf P_{t|t-1}\mathbf N_{t-1}\mathbf P_{t|t-1},\label{eq:KS5}\\ &\mathbf N_{t-1}=\bm\Lambda'(\bm\Lambda\mathbf P_{t|t-1}\bm\Lambda'+\mathbf R)^{-1}\bm\Lambda+\mathbf L_t'\mathbf N_t\mathbf L_t,\label{eq:KS6}\\ &\mathbf L_t= \mathbf A-\mathbf A \mathbf P_{t|t-1} \bm\Lambda' (\bm\Lambda\mathbf P_{t|t-1}\bm\Lambda'+\mathbf R)^{-1} \bm\Lambda,\label{eq:KS7} \end{align} where $\mathbf r_T=\mathbf 0_{r\times 1}$, $\mathbf N_T=\mathbf 0_{r}$, and by construction $\mathbf A\mathbf P_{t|t}=\mathbf L_t\mathbf P_{t|t-1}$ (see also \citealp{DK01}, pp.~70-73). Although numerically no appreciable differences emerge between the two methods, \eqref{eq:KS3}-\eqref{eq:KS7} are particularly useful for our proofs. \section{Consistency of the EM algorithm}\label{sec:proofsProposition} \subsection{Preliminary results} \begin{lem}\label{lem:wood} For $m< n$, given symmetric positive definite matrices $\bm A$ of dimension $m\times m$ and $\bm B$ of dimension $n\times n$, and given $\bm C$ of dimension $n\times m$ with full column-rank, the following holds: \begin{equation}\label{statement} \bm A \bm C' (\bm C\bm A\bm C'+\bm B)^{-1} = (\bm A^{-1}+\bm C'\bm B^{-1}\bm C)^{-1}\bm C'\bm B^{-1}.
\end{equation}
\end{lem}
\noindent{\bf Proof.} Recall the Woodbury formula
\begin{align}\label{woodbury}
(\bm C\bm A\bm C'+\bm B)^{-1}=\bm B^{-1}-\bm B^{-1}\bm C(\bm A^{-1}+\bm C'\bm B^{-1}\bm C)^{-1}\bm C'\bm B^{-1}.
\end{align}
Denote $\bm D=(\bm A^{-1}+\bm C'\bm B^{-1}\bm C)^{-1}$; then, from \eqref{woodbury}, the lhs of \eqref{statement} equals
\begin{align}
\bm A\bm C'\left[\bm B^{-1}-\bm B^{-1}\bm C\bm D\bm C'\bm B^{-1}\right] = \bm A\left[\bm C'\bm B^{-1}-\bm C'\bm B^{-1}\bm C\bm D\bm C'\bm B^{-1}\right]=\bm A\left[\bm I-\bm C'\bm B^{-1}\bm C\bm D\right]\bm C'\bm B^{-1}.\nonumber
\end{align}
Then, \eqref{statement} becomes
\[
\bm A\left[\bm I-\bm C'\bm B^{-1}\bm C\bm D\right]\bm C'\bm B^{-1}=\bm D\bm C'\bm B^{-1},
\]
or equivalently, multiplying both sides on the right by $\bm B\bm C(\bm C'\bm C)^{-1}$,
\begin{equation}\label{statement2}
\bm A\left[\bm I-\bm C'\bm B^{-1}\bm C\bm D\right]=\bm D.
\end{equation}
Now notice that
\begin{align}
\bm D = (\bm A^{-1}+\bm C'\bm B^{-1}\bm C)^{-1}&= \bm A(\bm I+\bm C'\bm B^{-1}\bm C\bm A)^{-1}.\label{D1}
\end{align}
Substituting \eqref{D1} in \eqref{statement2} and multiplying both sides on the left by $\bm A^{-1}$
\[
\bm I-\bm C'\bm B^{-1}\bm C\bm A(\bm I+\bm C'\bm B^{-1}\bm C\bm A)^{-1}=(\bm I+\bm C'\bm B^{-1}\bm C\bm A)^{-1}.
\]
Multiplying both sides on the right by $(\bm I+\bm C'\bm B^{-1}\bm C\bm A)$ we have that \eqref{statement} is equivalent to
\begin{equation}\label{proof1}
\bm I+\bm C'\bm B^{-1}\bm C\bm A-\bm C'\bm B^{-1}\bm C\bm A = \bm I,
\end{equation}
which holds trivially. Therefore \eqref{statement} is correct.
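The identity \eqref{statement} is also easy to verify numerically; a quick sketch with randomly generated matrices of the form required by the lemma (dimensions chosen by us for illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 3, 6
# A, B symmetric positive definite; C of full column-rank (a.s. for a random draw)
Ma = rng.standard_normal((m, m)); A = Ma @ Ma.T + m * np.eye(m)
Mb = rng.standard_normal((n, n)); B = Mb @ Mb.T + n * np.eye(n)
C = rng.standard_normal((n, m))

# lhs and rhs of the identity (push-through form of the Woodbury formula)
lhs = A @ C.T @ np.linalg.inv(C @ A @ C.T + B)
rhs = np.linalg.inv(np.linalg.inv(A) + C.T @ np.linalg.inv(B) @ C) @ C.T @ np.linalg.inv(B)
print(np.max(np.abs(lhs - rhs)))  # numerically zero
```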
$\Box$

\begin{lem}\label{lem:denom}
For $m<n$, with $m$ independent of $n$, and given
\begin{enumerate}
\item[(a)] an $m\times m$ matrix $\bm A$, symmetric and positive definite, with $\mu_j^{A}\le M$ for $j=1,\ldots,m$;
\item[(b)] an $n\times n$ matrix $\bm B$, symmetric and positive definite, with $\mu_j^{B}\le M$ for $j=1,\ldots,n$;
\item[(c)] an $n\times m$ matrix $\bm C$ such that $\bm C'\bm C$ is positive definite with $\mu_j^{C'C}=M_j n$ for $j=1,\ldots ,m$;
\end{enumerate}
the following holds:
\[
(\bm A^{-1}+\bm C'\bm B^{-1}\bm C)^{-1}\bm C'\bm B^{-1}\bm C = \mathbf I_m+ O(n^{-1}).
\]
\end{lem}
\noindent{\bf Proof.} First notice that, for any two matrices $\bm K$ and $\bm H$ such that $\bm K$ and $\bm H+\bm K$ are invertible, we have
\begin{align}
(\bm H+ \bm K)^{-1}&= (\bm H + \bm K)^{-1}- \bm K^{-1} + \bm K^{-1}= (\bm H+ \bm K)^{-1}(\bm K - (\bm H + \bm K))\bm K^{-1}+ \bm K^{-1}\nonumber\\
&= (\bm H + \bm K)^{-1}(-\bm H)\bm K^{-1} + \bm K^{-1}= \bm K^{-1}- (\bm H + \bm K)^{-1}\bm H\bm K^{-1}.\label{eq:inverse}
\end{align}
Then, setting $\bm K=\bm C'\bm B^{-1}\bm C$ and $\bm H=\bm A^{-1}$, from \eqref{eq:inverse} we have
\begin{align}\nonumber
(\bm A^{-1}+\bm C'\bm B^{-1}\bm C)^{-1}=(\bm C'\bm B^{-1}\bm C)^{-1}-(\bm A^{-1}+\bm C'\bm B^{-1}\bm C)^{-1}\bm A^{-1}(\bm C'\bm B^{-1}\bm C)^{-1},
\end{align}
which implies
\begin{equation}\label{eq:abcinv}
(\bm A^{-1}+\bm C'\bm B^{-1}\bm C)^{-1}\bm C'\bm B^{-1}\bm C = \mathbf I_m - (\bm A^{-1}+\bm C'\bm B^{-1}\bm C)^{-1}\bm A^{-1}.
\end{equation}
Now consider the second term on the rhs of \eqref{eq:abcinv}:
\begin{align}
\Vert (\bm A^{-1}+\bm C'\bm B^{-1}\bm C)^{-1}\bm A^{-1}\Vert^2&\le\Vert (\bm A^{-1}+\bm C'\bm B^{-1}\bm C)^{-1}\Vert^2\,\Vert\bm A^{-1}\Vert^2\nonumber\\
&\le\Vert (\bm C'\bm B^{-1}\bm C)^{-1}\Vert^2\,(\mu_m^A)^{-2}\le \Vert (\bm C'\bm B^{-1}\bm C)^{-1}\Vert^2\, M_1^{-2},\label{eq:abcinv2}
\end{align}
where we use norm sub-multiplicativity and the fact that, by condition (a), $\bm A$ and $\bm A^{-1}$ are positive definite, and therefore $\mu_m^{A}\geq M_1>0$ and, moreover, $\mu_m^{A^{-1}}\geq M_2>0$; thus, by Weyl's inequality,
\[
\mu_m^{A^{-1}+ C' B^{-1} C} \ge \mu_m^{A^{-1}}+\mu_m^{ C' B^{-1} C}\ge \mu_m^{ C' B^{-1} C},
\]
and, therefore,
\[
\Vert (\bm A^{-1}+\bm C'\bm B^{-1}\bm C)^{-1}\Vert = (\mu_m^{A^{-1}+ C' B^{-1} C})^{-1}\le( \mu_m^{ C' B^{-1} C})^{-1} = \Vert (\bm C'\bm B^{-1}\bm C)^{-1}\Vert.
\]
Then, the first factor on the rhs of \eqref{eq:abcinv2} is
\begin{align}
\Vert (\bm C'\bm B^{-1}\bm C)^{-1}\Vert^2\le \mathsf{tr}\left[(\bm C'\bm B^{-1}\bm C)^{-2}\right]=\sum_{j=1}^m \frac 1{{(\mu_j^{C'B^{-1}C})}^2}=O(n^{-2}).\label{eq:abcinv3}
\end{align}
Indeed, the $m$ eigenvalues of $\bm C'\bm B^{-1}\bm C$ are also the $m$ non-zero eigenvalues of $\bm B^{-1/2}\bm C\bm C' \bm B^{-1/2}$, which are all $O(n)$ by conditions (b) and (c). By using \eqref{eq:abcinv2} and \eqref{eq:abcinv3} in \eqref{eq:abcinv} we prove the lemma. $\Box$

\subsection{Consistency of KF and KS using the true value of the parameters}
\begin{lem}\label{lem:steady1}
For the conditional covariance $\mathbf P_{t|t-1}$ of the static factors given $\bm{\mathcal X}_{t-1}$, there exists a steady state ${\mathbf P}$ for the reduced form, solving the algebraic Riccati equation (ARE), such that
\[
{\mathbf P}_{t|t-1} = {\mathbf P}+ O(e^{-t}).
\]
Moreover, as $n\to\infty$,
\[
{\mathbf P} = \mathbf K\left(\begin{array}{cc} \mathbf I_q &\mathbf 0_q\\ \mathbf 0_q&\mathbf 0_q \end{array} \right)\mathbf K' + O(n^{-1}) = \mathbf H\mathbf H'+ O(n^{-1}).
\]
\end{lem}
\noindent{\bf Proof.} Define $\widetilde{\mathbf P}_{t|t-1}$ as the conditional covariance matrix of the vector $(\bm f_t'\,\bm f_{t-1}')'$ given $\bm{\mathcal X}_{t-1}$. Then, due to the stabilizability and detectability proved in Lemma \ref{lem:control}, there exists a steady state $\widetilde{\mathbf P}$ for the structural model, solving the algebraic Riccati equation (ARE), such that (see \citealp{AM79}, pp. 76--77, and \citealp{harvey90}, pp. 118--119)
\[
\widetilde{\mathbf P}_{t|t-1} = \widetilde{\mathbf P}+ O(e^{-t}).
\]
In the presence of a diffuse prior, its effect is limited to the first few periods, say $t_0$ (see \citealp{koopman97}); then, the result above holds for $t>t_0$. The ARE for the structural model is then (see also \eqref{eq:riccati})
\begin{equation}\label{eq:AREstruct}
\widetilde{\mathbf P}-\widetilde{\mathbf A}\widetilde{\mathbf P}\widetilde{\mathbf A}'+\widetilde{\mathbf A}\widetilde{ \mathbf P}\widetilde{\mathbf B}'(\widetilde{\mathbf B}\widetilde{ \mathbf P}\widetilde{\mathbf B}'+\mathbf R)^{-1}\widetilde{\mathbf B}\widetilde{ \mathbf P}\widetilde{\mathbf A}'=\left(\begin{array}{cc} \mathbf I_q&\mathbf 0_q\\ \mathbf 0_q&\mathbf 0_q \end{array}\right),
\end{equation}
where
\begin{equation}
\widetilde{\mathbf A}=\left(\begin{array}{cc} \bm\Pi_1&\bm\Pi_2\\ \mathbf I_q&\mathbf 0_q \end{array} \right), \qquad \widetilde{\mathbf B} =(\mathbf B_0\,\mathbf B_1).\label{AtildeBtilde}
\end{equation}
Now, since the structural model has only $q$ controllable and observable states (see Lemma \ref{lem:control}), and $\widetilde{\mathbf P}$ is the steady-state covariance of those states, $\mathrm{rk}(\widetilde{\mathbf P})=q$.
Define as $\mathbf V$ the $r\times r$ matrix of eigenvectors of $\widetilde{\mathbf P}$ and as $\mathbf D$ the $q\times q$ diagonal matrix of its nonzero eigenvalues; then
\begin{equation}\label{eq:PWtilde}
\widetilde{\mathbf P}=\mathbf V\left(\begin{array}{cc} \mathbf D&\mathbf 0_q\\ \mathbf 0_q&\mathbf 0_q \end{array}\right)\mathbf V'=\mathbf V\left(\begin{array}{cc} \mathbf D^{1/2}&\mathbf 0_q\\ \mathbf 0_q&\mathbf 0_q \end{array}\right)\left(\begin{array}{cc} \mathbf D^{1/2}&\mathbf 0_q\\ \mathbf 0_q&\mathbf 0_q \end{array}\right)\mathbf V'= \mathbf W\left(\begin{array}{cc} \mathbf I_q&\mathbf 0_q\\ \mathbf 0_q&\mathbf 0_q \end{array}\right) \mathbf W',
\end{equation}
with
\[
\mathbf W = \mathbf V\left(\begin{array}{cc} \mathbf D^{1/2}&\mathbf 0_q\\ \mathbf 0_q &\mathbf I_q \end{array} \right).
\]
Define $\mathbf B_0^*$ and $\mathbf B_1^*$ as the $n\times q$ matrices such that $\widetilde{\mathbf B}\mathbf W=(\mathbf B_0^*\,\mathbf B_1^*)$. Then, from \eqref{eq:PWtilde},
\begin{equation}\label{eq:PWtilde1}
\widetilde{\mathbf B}\widetilde{\mathbf P}\widetilde{\mathbf B}'=\widetilde{\mathbf B}\mathbf W \left(\begin{array}{cc} \mathbf I_q&\mathbf 0_q\\ \mathbf 0_q&\mathbf 0_q \end{array}\right)\mathbf W'\widetilde{\mathbf B}' = (\mathbf B_0^*\,\mathbf B_1^*) \left(\begin{array}{cc} \mathbf I_q&\mathbf 0_q\\ \mathbf 0_q&\mathbf 0_q \end{array}\right) \left(\begin{array}{c} {\mathbf B_0^*}'\\ {\mathbf B_1^*}' \end{array}\right) =\mathbf B_0^*\mathbf {B_0^*}',
\end{equation}
and
\begin{equation}\label{eq:PWtilde2}
\left(\begin{array}{cc} \mathbf I_q&\mathbf 0_q\\ \mathbf 0_q&\mathbf 0_q \end{array}\right)\mathbf W'\widetilde{\mathbf B}' = \left(\begin{array}{cc} \mathbf I_q&\mathbf 0_q\\ \mathbf 0_q&\mathbf 0_q \end{array}\right) \left(\begin{array}{c} {\mathbf B_0^*}'\\ {\mathbf B_1^*}' \end{array}\right)= \left(\begin{array}{c} {\mathbf B_0^*}'\\ \mathbf 0_{q\times n} \end{array}\right).
\end{equation}
From \eqref{eq:PWtilde}, \eqref{eq:PWtilde1}, \eqref{eq:PWtilde2}, and Lemmas \ref{lem:wood} and \ref{lem:denom}, we have
\begin{align}
\left(\begin{array}{cc} \mathbf I_q&\mathbf 0_q\\ \mathbf 0_q&\mathbf 0_q \end{array}\right)\mathbf W'\widetilde{\mathbf B}'(\widetilde{\mathbf B}\widetilde{\mathbf P}\widetilde{\mathbf B}'+\mathbf R)^{-1}\widetilde{\mathbf B}\mathbf W\left(\begin{array}{cc} \mathbf I_q&\mathbf 0_q\\ \mathbf 0_q&\mathbf 0_q \end{array}\right)&= \left(\begin{array}{c} {\mathbf B_0^*}'({\mathbf B_0^*}{\mathbf B_0^*}'+\mathbf R)^{-1}\\ \mathbf 0_{q\times n} \end{array}\right)(\mathbf B_0^*\,\mathbf B_1^*)\left(\begin{array}{cc} \mathbf I_q&\mathbf 0_q\\ \mathbf 0_q&\mathbf 0_q \end{array}\right)\nonumber\\
&= \left(\begin{array}{cc} ({\mathbf B_0^*}'\mathbf R^{-1}{\mathbf B_0^*}+\mathbf I_q)^{-1}{\mathbf B_0^*}'\mathbf R^{-1}{\mathbf B_0^*}&\mathbf 0_{q}\\ \mathbf 0_{q} & \mathbf 0_{q} \end{array}\right)\nonumber\\
&=\left(\begin{array}{cc} \mathbf I_q +O(n^{-1})&\mathbf 0_q\\ \mathbf 0_q &\mathbf 0_q \end{array} \right). \label{eq:PWtilde4}
\end{align}
Notice that we can apply Lemma \ref{lem:denom} to the top-left $q\times q$ block of \eqref{eq:PWtilde4} since: $\mathbf I_q$ trivially satisfies condition (a); $\mathbf R^{-1}$ satisfies condition (b) because of D2; and ${\mathbf B_0^*}'\mathbf B_0^*$ satisfies condition (c). Indeed, from definition \eqref{eq:R1_app_2} we have
\[
\frac1 n \left(\begin{array}{cc} {\mathbf B_0^*}'\\ {\mathbf B_1^*}' \end{array} \right)(\mathbf B_0^*\,\mathbf B_1^*) =\mathbf W'\mathbf K'\frac{\bm\Lambda'\bm\Lambda}n\mathbf K\mathbf W,
\]
and, because of assumption C2, the top-left $q\times q$ block of this matrix, which is $n^{-1}{\mathbf B_0^*}'\mathbf B_0^*$, has full rank for any $n$.
By substituting \eqref{eq:PWtilde} and \eqref{eq:PWtilde4} into \eqref{eq:AREstruct} we have
\begin{equation}\label{Ptildesteady}
\widetilde{\mathbf P} = \left(\begin{array}{cc} \mathbf I_q &\mathbf 0_q\\ \mathbf 0_q&\mathbf 0_q \end{array} \right) + O(n^{-1}).
\end{equation}
Now, notice that by construction $\mathbf P_{t|t-1} =\mathbf K\widetilde {\mathbf P}_{t|t-1}\mathbf K'$, and since $\mathbf K$ is full-rank, the reduced-form system is also stabilizable and detectable; thus, it has a steady state $\mathbf P$ such that
\begin{equation}\label{Ptt1}
\mathbf P_{t|t-1} = \mathbf P + O(e^{-t}).
\end{equation}
Moreover, since $\mathbf K$ depends on neither $t$ nor $n$, we have $\mathbf P=\mathbf K\widetilde {\mathbf P}\mathbf K'$, and the result follows directly from \eqref{Ptildesteady}. Last, the definition of $\mathbf H$ in \eqref{eq:R1_app_3} gives the second equality in the statement. $\Box$

\begin{lem}\label{lem:steady3}
For the static factors estimated via the KF and KS using the true value of the parameters $\bm\Theta$, under condition \eqref{eq:exprate} in the text, the following hold, for all $\bar t\le t \le T$ and as $n\to\infty$:
\begin{align}
&\sqrt n\, \Vert\mathbf F_{t|t}-\mathbf F_t\Vert = O_p(1),\nonumber\\
&\sqrt n\,\Vert\mathbf F_{t|T}-\mathbf F_t\Vert = O_p(1).\nonumber
\end{align}
\end{lem}
\noindent{\bf Proof.} By Lemma \ref{lem:steady1}, the conditional covariance $\mathbf P_{t|t}$ of the static factors given $\bm{\mathcal X}_{t}$ has a steady state $\mathbf S$ such that (see \eqref{eq:up2})
\begin{equation}\label{Pkf1}
\mathbf S = \mathbf P-\mathbf P\bm\Lambda'(\bm\Lambda\mathbf P\bm\Lambda'+\mathbf R)^{-1}\bm\Lambda\mathbf P.
\end{equation}
Then, notice that by Lemma \ref{lem:steady1} and \eqref{eq:R1_app_2}
\begin{align}
\mathbf P\bm\Lambda' &= \mathbf K\left(\begin{array}{cc} \mathbf I_q &\mathbf 0_q\\ \mathbf 0_q&\mathbf 0_q \end{array} \right)\mathbf K'(\mathbf K')^{-1}\left(\begin{array}{c}\mathbf B_0'\\\mathbf B_1'\end{array}\right)+O(n^{-1})=\mathbf K\left(\begin{array}{c} \mathbf B_0'\\ \mathbf 0_q \end{array} \right)+O(n^{-1}),\label{Pkf2}\\
\bm\Lambda\mathbf P\bm\Lambda' &= (\mathbf B_0\,\mathbf B_1)\mathbf K^{-1}\mathbf K\left(\begin{array}{cc} \mathbf I_q &\mathbf 0_q\\ \mathbf 0_q&\mathbf 0_q \end{array} \right)\mathbf K'(\mathbf K')^{-1}\left(\begin{array}{c}\mathbf B_0'\\\mathbf B_1'\end{array}\right)+O(n^{-1})=\mathbf B_0\mathbf B_0'+O(n^{-1}).\label{Pkf3}
\end{align}
Using \eqref{Pkf2} and \eqref{Pkf3} and applying Lemmas \ref{lem:wood} and \ref{lem:denom}, we have
\begin{align}
\mathbf P\bm\Lambda'(\bm\Lambda\mathbf P\bm\Lambda'+\mathbf R)^{-1}\bm\Lambda\mathbf P &= \mathbf K\left(\begin{array}{cc} \mathbf B_0'(\mathbf B_0\mathbf B_0'+\mathbf R)^{-1}\mathbf B_0&\mathbf 0_q\\ \mathbf 0_q&\mathbf 0_q \end{array}\right) \mathbf K'+O(n^{-1})\nonumber\\
&=\mathbf K\left(\begin{array}{cc} (\mathbf B_0'\mathbf R^{-1}\mathbf B_0+\mathbf I_q)^{-1}\mathbf B_0'\mathbf R^{-1}\mathbf B_0&\mathbf 0_q\\ \mathbf 0_q&\mathbf 0_q \end{array}\right) \mathbf K'+O(n^{-1})\nonumber\\
&=\mathbf K\left(\begin{array}{cc} \mathbf I_q+O(n^{-1})&\mathbf 0_q\\ \mathbf 0_q&\mathbf 0_q \end{array}\right) \mathbf K'+O(n^{-1}).\label{Pkf4}
\end{align}
Notice that we can apply Lemma \ref{lem:denom} to the top-left $q\times q$ block of \eqref{Pkf4} since: $\mathbf I_q$ trivially satisfies condition (a); $\mathbf R^{-1}$ satisfies condition (b) because of D2; and ${\mathbf B_0}'\mathbf B_0$ satisfies condition (c) because of assumption C2 and definition \eqref{eq:R1_app_2} (see also \eqref{eq:PWtilde4} in the proof of Lemma \ref{lem:steady1}).
By substituting \eqref{Pkf4} into \eqref{Pkf1}, and because of Lemma \ref{lem:steady1}, we have
\begin{equation}\label{Pkf5}
\mathbf S = \mathbf K\left(\begin{array}{cc} \mathbf I_q &\mathbf 0_q\\ \mathbf 0_q&\mathbf 0_q \end{array} \right)\mathbf K' + O(n^{-1}) - \mathbf K\left(\begin{array}{cc} \mathbf I_q&\mathbf 0_q\\ \mathbf 0_q&\mathbf 0_q \end{array}\right) \mathbf K'+O(n^{-1})=O(n^{-1}).
\end{equation}
By substituting \eqref{Ptt1} in \eqref{eq:up2}, from \eqref{Pkf1} we have
\begin{equation}\label{Pkf0}
\mathbf P_{t|t} = \mathbf S +O(e^{-t}).
\end{equation}
Therefore, by substituting \eqref{Pkf5} into \eqref{Pkf0} and letting $n=T^{\gamma}$ for $\gamma>0$ and $\bar t\equiv \bar t(T)$, because of \eqref{eq:exprate}, for $\bar t\le t \le T$ we have
\begin{equation}\label{Ptt}
\mathbf P_{t|t} = O(n^{-1})+O(e^{-t})=O(n^{-1}).
\end{equation}
Now, let us consider $\mathbf P_{t|T}$ defined in \eqref{eq:KS5}. From \eqref{eq:up2}
\begin{equation}\label{eq:B6}
\mathbf P_{t|t-1} = \mathbf P_{t|t}+\mathbf P_{t|t-1}\bm\Lambda'(\bm\Lambda\mathbf P_{t|t-1}\bm\Lambda'+\mathbf R)^{-1}\bm\Lambda\mathbf P_{t|t-1}.
\end{equation}
By substituting \eqref{eq:B6} and \eqref{eq:KS6} in \eqref{eq:KS5} we have
\begin{align}
\mathbf P_{t|T} &=\mathbf P_{t|t}+\mathbf P_{t|t-1}\mathbf L_t'\mathbf N_{t}\mathbf L_t\mathbf P_{t|t-1}.\label{eq:PtTeasy}
\end{align}
Since $\mathbf N_t$ is a function of $\mathbf P_{t|t-1}$, because of Lemma \ref{lem:steady1} it has a steady state $\mathbf N$ such that $\Vert\mathbf N\Vert=O(1)$ and
\begin{equation}\label{steadyN}
\mathbf N_t = \mathbf N + O(e^{-t}).
\end{equation}
Now, since $\mathbf A\mathbf P_{t|t}=\mathbf L_t\mathbf P_{t|t-1}$, using \eqref{Ptt} and \eqref{steadyN}, because of \eqref{eq:exprate}, for $\bar t\le t \le T$ we have
\begin{equation}\label{eq:L}
\mathbf P_{t|t-1}\mathbf L_t'\mathbf N_{t}\mathbf L_t\mathbf P_{t|t-1}=\mathbf P_{t|t}\mathbf A'\mathbf N_t\mathbf A\mathbf P_{t|t}=O(n^{-2}).
\end{equation}
By using \eqref{Ptt} and \eqref{eq:L} in \eqref{eq:PtTeasy}, for $\bar t\le t \le T$ we have
\begin{equation}\label{PtT}
\mathbf P_{t|T} = O(n^{-1}).
\end{equation}
By the law of iterated expectations, for $\bar t\le t \le T$ we have (see also the definitions in \eqref{eq:condmean})
\begin{align}
&\E_{\bm\Theta}[(\mathbf F_t-\mathbf F_{t|t})(\mathbf F_t-\mathbf F_{t|t})'] = \E_{\bm\Theta}[\E_{\bm\Theta}[(\mathbf F_t-\mathbf F_{t|t})(\mathbf F_t-\mathbf F_{t|t})'|\bm {\mathcal X}_t] ]= \E_{\bm\Theta}[\mathbf P_{t|t}]=O(n^{-1}),\nonumber\\
&\E_{\bm\Theta}[(\mathbf F_t-\mathbf F_{t|T})(\mathbf F_t-\mathbf F_{t|T})'] = \E_{\bm\Theta}[\E_{\bm\Theta}[(\mathbf F_t-\mathbf F_{t|T})(\mathbf F_t-\mathbf F_{t|T})'|\bm {\mathcal X}_T] ]= \E_{\bm\Theta}[\mathbf P_{t|T}]=O(n^{-1}),\nonumber
\end{align}
which imply mean-square convergence of the KF and KS when the parameters are known, for all $\bar t\le t \le T$:
\begin{align}
&\E_{\bm\Theta}[\Vert\mathbf F_t-\mathbf F_{t|t}\Vert^2] = \sum_{j=1}^r\E_{\bm\Theta}[(F_{j,t}-F_{j,t|t})^2] = \mathsf{tr}\left\{\E_{\bm\Theta}[\mathbf P_{t|t}]\right\} = O(n^{-1}),\nonumber\\
&\E_{\bm\Theta}[\Vert\mathbf F_t-\mathbf F_{t|T}\Vert^2] = \sum_{j=1}^r\E_{\bm\Theta}[(F_{j,t}-F_{j,t|T})^2] = \mathsf{tr}\left\{\E_{\bm\Theta}[\mathbf P_{t|T}]\right\} = O(n^{-1}).\nonumber
\end{align}
The result follows from Chebyshev's inequality.
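The $O(n^{-1})$ rate of $\mathbf P_{t|t}$ in \eqref{Ptt} can be illustrated in a toy one-factor model (a sketch under simplifying choices of our own: a scalar AR(1) factor, unit loadings, and $\mathbf R=\mathbf I_n$, so that the $n\times n$ inverse in \eqref{eq:up2} collapses to a scalar via the push-through identity of Lemma \ref{lem:wood}):

```python
def steady_p_filt(n, a=0.5, h=1.0, iters=200):
    """Steady-state filtering variance p_{t|t} for a scalar factor observed
    through n series with unit loadings and unit idiosyncratic variance."""
    p_pred = 1.0
    for _ in range(iters):
        p_filt = p_pred / (1.0 + n * p_pred)  # update (eq:up2), scalar form
        p_pred = a * a * p_filt + h * h       # prediction (eq:pred2)
    return p_filt

# n * p_{t|t} stabilizes as n grows, i.e. p_{t|t} = O(1/n)
print([n * steady_p_filt(n) for n in (100, 1000, 10000)])
```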
$\Box$

\subsection{Consistency of KF and KS using estimated parameters}\label{app:KFBLL}
\begin{lem}\label{lem:eststar}
Consider the QML estimator of the parameters $\widehat{\bm\Theta}^*$, defined in \eqref{MLparam} and obtained using the true values of the static factors $\mathbf F_t$; then, as $T\to\infty$:
\begin{align}
&\sqrt T\,\Vert\widehat{\bm\lambda}_{i}^{*}-\bm\lambda_i \Vert = O_p(1), \quad i=1,\ldots, n,\nonumber\\
&\sqrt T\,\Vert\widehat{\mathbf A}^*-\mathbf A\Vert = O_p(1),\nonumber\\
&\sqrt T\,\Vert\widehat{\mathbf H}^*-\mathbf H\Vert = O_p(1),\nonumber\\
&\sqrt T\,\vert[\widehat{\mathbf R}]_{ii}^*-[\mathbf R]_{ii}\vert = O_p(1), \quad i=1,\ldots, n.\nonumber
\end{align}
\end{lem}
\noindent{\bf Proof.} The QML estimator of the loadings, for any $i=1,\ldots,n$, is given by
\begin{align}
\widehat{\bm\lambda}_i^{*'}=\left(\sum_{t=1}^T x_{it} {\mathbf F}_{t}'\right) \left(\sum_{t=1}^T {\mathbf F}_{t}{\mathbf F}_{t}'\right)^{-1}.
\end{align}
We know that $\mathbf F_t$ is driven by $(q-d)$ common trends (see C2); therefore, we can find an orthonormal linear basis of dimension $(q-d)$ such that the projection of $\mathbf F_t$ onto this basis spans the same space as the common trends. Collect the elements of this basis in the $r\times (q-d)$ matrix $\bm\gamma$, and denote by $\bm\gamma_\perp$ the $r\times(r-q+ d)$ matrix such that $\bm\gamma_\perp'\bm\gamma=\mathbf 0_{(r-q+d)\times (q-d)}$. Then, consider the $r\times r$ linear transformation
\begin{equation}\label{DFZ}
\bm{\mathcal D}\mathbf F_t = \left(\begin{array}{c} \bm\gamma'\mathbf F_t\\ \bm\gamma_\perp'\mathbf F_t\\ \end{array} \right)=\left(\begin{array}{c} \mathbf Z_{1t}\\ \mathbf Z_{0t} \end{array}\right),
\end{equation}
where all $(q-d)$ components of $\mathbf Z_{1t}$ are $I(1)$, while $\mathbf Z_{0t}\sim I(0)$ and has dimension $(r-q+d)$.
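The rotation \eqref{DFZ} only requires completing $\bm\gamma$ to an orthonormal basis of dimension $r$; one standard way to obtain $\bm\gamma_\perp$ is via a full QR decomposition (a sketch with illustrative dimensions of our own choosing):

```python
import numpy as np

rng = np.random.default_rng(2)
r, k = 4, 2                       # k plays the role of q - d common trends
# gamma: an orthonormal r x k basis (here the Q factor of a random matrix)
gamma = np.linalg.qr(rng.standard_normal((r, k)))[0]
# gamma_perp: orthonormal complement, from the complete QR of gamma
gamma_perp = np.linalg.qr(gamma, mode='complete')[0][:, k:]
# the rotation D = (gamma, gamma_perp)'
D = np.vstack([gamma.T, gamma_perp.T])
```

By construction $D$ is orthonormal ($\bm{\mathcal D}'\bm{\mathcal D}=\mathbf I_r$) and `gamma_perp.T @ gamma` vanishes, as required above.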
Moreover, for $\mathbf Z_{1t}$ we have the MA representation
\begin{equation}\label{z1Q}
\Delta \mathbf Z_{1t} = \mathbf Q(L) \bm\zeta_t,
\end{equation}
where $\bm\zeta_t\stackrel{w.n.}{\sim} (\mathbf 0_{q-d},\bm\Sigma_{\zeta})$ with $\bm\Sigma_{\zeta}$ positive definite, and ${\mathbf Q}(L)$ is a $(q-d)\times (q-d)$ one-sided, infinite matrix polynomial with square-summable coefficients and $\mathsf{rk}({\mathbf Q}(1))=(q-d)$.

\noindent By orthonormality, $\bm{\mathcal D}'\bm{\mathcal D}=\mathbf I_r$. Then, the corresponding transformation of the loadings gives $\bm\lambda_i'\bm{\mathcal D}'=(\bm\lambda_{i1}'\; \bm\lambda_{i0}')$, such that $x_{it}=\bm\lambda_{i1}'\mathbf Z_{1t}+\bm\lambda_{i0}'\mathbf Z_{0t}+\xi_{it}$, and we also have $\widehat{\bm\lambda}_i^{*'}\bm{\mathcal D}'=(\widehat{\bm\lambda}_{i1}^{*'}\;\widehat{\bm\lambda}_{i0}^{*'})$. Recall that $\mathbf Z_{1t}$ and $\mathbf Z_{0t}$ are orthogonal by construction; then we have
\begin{align}
&\left(\begin{array}{c} \widehat{\bm\lambda}_{i1}^{*'}-\bm\lambda_{i1}'\\ \widehat{\bm\lambda}_{i0}^{*'}-\bm\lambda_{i0}' \end{array}\right)\label{lambdaDD}\\
&=\left(\begin{array}{cc} \left(\frac 1{T^2} \sum_{t=1}^T \xi_{it} {\mathbf Z}_{1t}'\right) \left(\frac 1{T^2}\sum_{t=1}^T {\mathbf Z}_{1t}{\mathbf Z}_{1t}'\right)^{-1}&\mathbf 0_{(q-d)\times (r-q+d)}\\ \mathbf 0_{(r-q+d)\times (q-d)}& \left(\frac 1{T} \sum_{t=1}^T \xi_{it} {\mathbf Z}_{0t}'\right) \left(\frac 1{T}\sum_{t=1}^T {\mathbf Z}_{0t}{\mathbf Z}_{0t}'\right)^{-1} \end{array} \right).\nonumber
\end{align}
By Theorem 1 in \citet{penaponcela97,penaponcela04}, under C1 and C3, and from \eqref{z1Q}, as $T\to\infty$,
\begin{equation}
\frac 1{T^2}\sum_{t=1}^T {\mathbf Z}_{1t}{\mathbf Z}_{1t}'\Rightarrow \mathbf Q(1)\bm\Sigma_{\zeta}^{1/2}\left(\int_0^1\bm{\mathcal W}(u)\bm{\mathcal W}(u)'\mathrm d u\right)\bm\Sigma_{\zeta}^{1/2}\mathbf Q(1)',\label{PPAPP}
\end{equation}
where $\bm{\mathcal W}(\cdot)$ is a
$(q-d)$-dimensional standard Wiener process. Thus, this term is $O_p(1)$ and positive definite, hence invertible. Last, from \eqref{z1Q} we see that each component of $\Delta \mathbf Z_{1t}$ has an MA representation with square-summable coefficients ($\Delta \mathbf Z_{1t}\sim I(0)$ by construction); therefore, $\Var(t^{-1}Z_{1jt})= O(1)$ for any $j=1,\ldots, (q-d)$ and any $t=1,\ldots, T$. Thus, by using Gaussianity (see D2 and D3) and, by A2, the independence of factors and idiosyncratic components, we can prove that, as $T\to\infty$,
\begin{equation}\label{casino}
\frac 1 {T^2}\sum_{t=1}^T\xi_{it} {\mathbf Z}_{1t}' = O_p\left({T}^{-1}\right).
\end{equation}
Moreover, from C1 it is easy to see that ${\mathbf Z}_{0t}$ has an MA representation with square-summable coefficients (it is stationary), and because of A2 and C3 we have, as $T\to\infty$,
\begin{align}
&\frac 1{T} \sum_{t=1}^T \xi_{it} {\mathbf Z}_{0t}' = O_p(T^{-1/2}),\qquad \frac 1{T} \sum_{t=1}^T \mathbf Z_{0t} {\mathbf Z}_{0t}' = \E[\mathbf Z_{0t} {\mathbf Z}_{0t}']+O_p(T^{-1/2}) = O_p(1).\label{lambda0cons1}
\end{align}
From \eqref{lambdaDD}, \eqref{PPAPP}, \eqref{casino}, and \eqref{lambda0cons1}, and since $\bm{\mathcal D}$ does not depend on $T$, we obtain the result.

\noindent Consider the VAR
\begin{equation}
\bm{\mathcal D}\mathbf F_t = (\bm{\mathcal D}\mathbf A\bm{\mathcal D}')\bm{\mathcal D}\mathbf F_{t-1}+\bm{\mathcal D}\mathbf H\mathbf u_t,
\end{equation}
such that $\bm{\mathcal D}\mathbf H\mathbf u_t=(\mathbf e_{1t}'\;\mathbf {e}_{0t}')'$, where $\mathbf e_{1t}$ and $\mathbf e_{0t}$ are white noise processes of dimensions $(q-d)$ and $(r-q+d)$, respectively.
Then, similarly to \eqref{lambdaDD}, we have
\begin{align}
&\bm{\mathcal D}(\widehat{\mathbf A}^*-\mathbf A)\bm{\mathcal D}'=\label{DAD}\\
&=\left(\begin{array}{cc} \left(\frac 1{T^2} \sum_{t=1}^T \mathbf e_{1t} {\mathbf Z}_{1t-1}'\right) \left(\frac 1{T^2}\sum_{t=1}^T {\mathbf Z}_{1t-1}{\mathbf Z}_{1t-1}'\right)^{-1}& \left(\frac 1{T} \sum_{t=1}^T \mathbf e_{1t} {\mathbf Z}_{0t-1}'\right) \left(\frac 1{T}\sum_{t=1}^T {\mathbf Z}_{0t-1}{\mathbf Z}_{0t-1}'\right)^{-1}\\ \left(\frac 1{T^2} \sum_{t=1}^T \mathbf e_{0t} {\mathbf Z}_{1t-1}'\right) \left(\frac 1{T^2}\sum_{t=1}^T {\mathbf Z}_{1t-1}{\mathbf Z}_{1t-1}'\right)^{-1}& \left(\frac 1{T} \sum_{t=1}^T \mathbf e_{0t} {\mathbf Z}_{0t-1}'\right) \left(\frac 1{T}\sum_{t=1}^T {\mathbf Z}_{0t-1}{\mathbf Z}_{0t-1}'\right)^{-1} \end{array} \right)\nonumber
\end{align}
Then, using the fact that $\mathbf e_{1t}$ and $\mathbf e_{0t}$ are white noise, it can be shown that
\begin{align}
&\frac 1{T^2} \sum_{t=1}^T \mathbf e_{1t} {\mathbf Z}_{1t-1}' = O_p(T^{-1}), && \frac 1{T} \sum_{t=1}^T \mathbf e_{1t} {\mathbf Z}_{0t-1}' = O_p(T^{-1/2}),\label{altriOP}\\
&\frac 1{T^2} \sum_{t=1}^T \mathbf e_{0t} {\mathbf Z}_{1t-1}' = O_p(T^{-1}),&&\frac 1{T} \sum_{t=1}^T \mathbf e_{0t} {\mathbf Z}_{0t-1}' = O_p(T^{-1/2}).\nonumber
\end{align}
Substituting \eqref{PPAPP}, \eqref{lambda0cons1}, and \eqref{altriOP} into \eqref{DAD}, and since $\bm{\mathcal D}$ does not depend on $T$, we have the result for the VAR parameters. Similar results can be proved for all other parameters.
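The block-diagonal structure exploited in \eqref{lambdaDD} and \eqref{DAD} rests on the orthogonality of the two regressor blocks: under exact orthogonality, full OLS and blockwise OLS coincide, as the following deterministic sketch (with synthetic data of our own) illustrates:

```python
import numpy as np

rng = np.random.default_rng(3)
T, k1, k0 = 200, 2, 3
# two regressor blocks with exactly orthogonal columns (via QR)
Q = np.linalg.qr(rng.standard_normal((T, k1 + k0)))[0]
Z1, Z0 = Q[:, :k1], Q[:, k1:]
y = rng.standard_normal(T)

# full OLS on the stacked regressors
Z = np.hstack([Z1, Z0])
beta_full = np.linalg.solve(Z.T @ Z, Z.T @ y)
# blockwise OLS, block by block
beta_block = np.concatenate([
    np.linalg.solve(Z1.T @ Z1, Z1.T @ y),
    np.linalg.solve(Z0.T @ Z0, Z0.T @ y),
])
print(np.allclose(beta_full, beta_block))  # True: the cross blocks vanish
```

In the proofs above, the orthogonality is only asymptotic, which is why the off-diagonal blocks appear as the remainder terms controlled in \eqref{altriOP}.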
$\Box$

\begin{lem}\label{lem:steady_est1}
Define the KF and KS estimators of the static factors, and their conditional covariances, when using the QML estimator of the parameters $\widehat{\bm\Theta}^*$, as
\begin{align}
&\mathbf F_{t|t}^* = \E_{\widehat{\bm\Theta}^*}[\mathbf F_t|\bm{\mathcal X}_t], \qquad \mathbf P_{t|t}^* = \E_{\widehat{\bm\Theta}^*}[(\mathbf F_t-\mathbf F_{t|t}^*)(\mathbf F_t-\mathbf F_{t|t}^*)'|\bm{\mathcal X}_t],\nonumber\\
&\mathbf F_{t|T}^* = \E_{\widehat{\bm\Theta}^*}[\mathbf F_t|\bm{\mathcal X}_T],\qquad \mathbf P_{t|T}^* = \E_{\widehat{\bm\Theta}^*}[(\mathbf F_t-\mathbf F_{t|T}^*)(\mathbf F_t-\mathbf F_{t|T}^*)'|\bm{\mathcal X}_T].\nonumber
\end{align}
Then, under condition \eqref{eq:exprate} in the text, the following hold, for all $\bar t\le t \le T$ and as $n,T\to\infty$,
\begin{align}
&\min(\sqrt n,\sqrt T)\, \Vert\mathbf F_{t|t}^*-\mathbf F_t\Vert = O_p(1),\nonumber\\
&\min(\sqrt n,\sqrt T)\,\Vert\mathbf F_{t|T}^*-\mathbf F_t\Vert = O_p(1),\nonumber\\
&\min( n,\sqrt T)\,\Vert\mathbf P_{t|t}^*\Vert = O_p(1),\nonumber\\
&\min( n,\sqrt T)\,\Vert\mathbf P_{t|T}^*\Vert = O_p(1).\nonumber
\end{align}
\end{lem}
\noindent{\bf Proof.} We start with three preliminary results. First, from \eqref{Ptt} in the proof of Lemma \ref{lem:steady3} we have
\begin{equation}\label{res1}
\mathbf F_{t-1|t-1}-\mathbf F_{t-1}= O(e^{-(t-1)/2})+O(n^{-1/2}).
\end{equation}
Second, because of Lemma \ref{lem:steady3} and \eqref{res1}, we have
\begin{align}\label{eq:OP1}
\left\Vert\frac{\mathbf x_t}{\sqrt n}-\frac{{\bm\Lambda}\mathbf F_{t|t-1}}{\sqrt n}\right\Vert&=\left\Vert\frac{\bm\Lambda\mathbf F_t+\bm \xi_t}{\sqrt n}-\frac{{\bm\Lambda}\mathbf F_{t|t-1}}{\sqrt n}\right\Vert\le \left\Vert\frac{{\bm\Lambda}}{\sqrt n}\right\Vert\left(\Vert \mathbf A\Vert\;\Vert \mathbf F_{t-1}-\mathbf F_{t-1|t-1}\Vert+\Vert\mathbf H\mathbf u_t\Vert\right) +\left\Vert \frac{\bm\xi_t}{\sqrt n}\right\Vert\nonumber\\
&=\left\Vert\frac{{\bm\Lambda}}{\sqrt n}\right\Vert\;\left[\Vert \mathbf A\Vert\; \left(O(e^{-(t-1)/2})+O(n^{-1/2})\right)+\Vert\mathbf H\mathbf u_t\Vert\right] +\left\Vert \frac{\bm\xi_t}{\sqrt n}\right\Vert= O_p(1),
\end{align}
since $\mathbf u_t\stackrel{w.n.}{\sim}(\mathbf 0_q,\mathbf I_q)$ and $\bm\xi_t\sim I(0)$.

\noindent Third, from the transformation \eqref{DFZ} defined in the proof of Lemma \ref{lem:eststar}, $\bm{\mathcal D}\mathbf F_t=(\mathbf Z_{1t}'\;\mathbf Z_{0t}')'$, we have $\Vert\mathbf Z_{1t}\Vert=O_p(\sqrt T)$ and $\Vert\mathbf Z_{0t}\Vert=O_p(1)$.
Then, since $\bm{\mathcal D}'\bm{\mathcal D}=\mathbf I_r$, as a consequence of Lemma \ref{lem:eststar} (see in particular \eqref{lambdaDD} and \eqref{DAD}), the following hold
\begin{align}
&(\widehat{\mathbf A}^*-\mathbf A)\mathbf F_t=\bm{\mathcal D}'\bm{\mathcal D}(\widehat{\mathbf A}^*-\mathbf A)\bm{\mathcal D}'\bm{\mathcal D}\mathbf F_t=O_p(T^{-1/2}),\label{DF1}\\
&(\widehat{\bm{\lambda}}_i^{*'}-{\bm{\lambda}}_i')\mathbf A\mathbf F_t=(\widehat{\bm{\lambda}}_i^{*'}-{\bm{\lambda}}_i')\bm{\mathcal D}'\bm{\mathcal D}\mathbf A\bm{\mathcal D}'\bm{\mathcal D}\mathbf F_t=O_p(T^{-1/2}), \quad i=1,\ldots, n.\label{DF2}
\end{align}
Now, we compare the KF iterations, \eqref{eq:pred1}-\eqref{eq:up2}, with those obtained when using $\widehat{\bm\Theta}^*$:
\begin{align}
&\mathbf F_{t|t-1}^* = \widehat{\mathbf A}^* \mathbf F_{t-1|t-1}^*,\label{eq:pred1hat}\\
&\mathbf P_{t|t-1}^* = \widehat{\mathbf A}^* \mathbf P_{t-1|t-1}^* \widehat{\mathbf A}^{*'} + \widehat{\mathbf H}^*\widehat{\mathbf H}^{*'},\label{eq:pred2hat}\\
&\mathbf F_{t|t}^* =\mathbf F_{t|t-1}^*+\mathbf P_{t|t-1}^*\widehat{\bm\Lambda}^{*'}(\widehat{\bm\Lambda}^*\mathbf P_{t|t-1}^*\widehat{\bm\Lambda}^{*'}+\widehat{\mathbf R}^*)^{-1}(\mathbf x_t-\widehat{\bm\Lambda}^*\mathbf F_{t|t-1}^*),\label{eq:up1hat}\\
&\mathbf P_{t|t}^* =\mathbf P_{t|t-1}^*-\mathbf P_{t|t-1}^*\widehat{\bm\Lambda}^{*'}(\widehat{\bm\Lambda}^*\mathbf P_{t|t-1}^*\widehat{\bm\Lambda}^{*'}+\widehat{\mathbf R}^*)^{-1}\widehat{\bm\Lambda}^*\mathbf P_{t|t-1}^*.\label{eq:up2hat}
\end{align}
From \eqref{eq:pred1hat} we have
\begin{align}
\mathbf F_{t|t-1}^*-\mathbf F_{t|t-1} &=\mathbf A(\mathbf F_{t-1|t-1}^*-\mathbf F_{t-1|t-1})+(\widehat{\mathbf A}^*-\mathbf A)(\mathbf F_{t-1|t-1}^*-\mathbf F_{t-1|t-1})+(\widehat{\mathbf A}^*-\mathbf A)\mathbf F_{t-1|t-1}\nonumber\\
&=\mathbf A(\mathbf F_{t-1|t-1}^*-\mathbf F_{t-1|t-1})+(\widehat{\mathbf A}^*-\mathbf A)(\mathbf F_{t-1|t-1}^*-\mathbf
F_{t-1|t-1})+O_p(T^{-1/2}),\label{predFerror}
\end{align}
since
\begin{align}
(\widehat{\mathbf A}^*-\mathbf A)\mathbf F_{t-1|t-1}&=(\widehat{\mathbf A}^*-\mathbf A)(\mathbf F_{t-1|t-1}-\mathbf F_{t-1})+(\widehat{\mathbf A}^*-\mathbf A)\mathbf F_{t-1}\nonumber\\
&=O_p(T^{-1/2})O(e^{-(t-1)/2})+O_p(T^{-1/2})O(n^{-1/2})+O_p(T^{-1/2}),\nonumber
\end{align}
because of Lemma \ref{lem:eststar}, \eqref{res1} and \eqref{DF1}. Similarly, from \eqref{eq:pred2hat} we have
\begin{align}
\mathbf P_{t|t-1}^* &-\mathbf P_{t|t-1} =\mathbf A(\mathbf P_{t-1|t-1}^*-\mathbf P_{t-1|t-1})\mathbf A'+(\widehat{\mathbf A}^*-\mathbf A)(\mathbf P_{t-1|t-1}^*-\mathbf P_{t-1|t-1})\mathbf A'\label{predPerror}\\
&\;+(\widehat{\mathbf A}^*-\mathbf A)(\mathbf P_{t-1|t-1}^*-\mathbf P_{t-1|t-1})(\widehat{\mathbf A}^*-\mathbf A)'+\mathbf A(\mathbf P_{t-1|t-1}^*-\mathbf P_{t-1|t-1})(\widehat{\mathbf A}^*-\mathbf A)'\nonumber\\
&\;+(\widehat{\mathbf A}^*-\mathbf A)\mathbf P_{t-1|t-1}\mathbf A'+(\widehat{\mathbf A}^*-\mathbf A)\mathbf P_{t-1|t-1}(\widehat{\mathbf A}^*-\mathbf A)'+\mathbf A\mathbf P_{t-1|t-1}(\widehat{\mathbf A}^*-\mathbf A)'\nonumber\\
&\;+(\widehat{\mathbf H}^*-{\mathbf H})\mathbf H'+(\widehat{\mathbf H}^*-{\mathbf H})(\widehat{\mathbf H}^*-{\mathbf H})'+\mathbf H(\widehat{\mathbf H}^*-{\mathbf H})'\nonumber\\
&=\mathbf A(\mathbf P_{t-1|t-1}^*-\mathbf P_{t-1|t-1})\mathbf A'+(\widehat{\mathbf A}^*-\mathbf A)(\mathbf P_{t-1|t-1}^*-\mathbf P_{t-1|t-1})\mathbf A'\nonumber\\
&\;+(\widehat{\mathbf A}^*-\mathbf A)(\mathbf P_{t-1|t-1}^*-\mathbf P_{t-1|t-1})(\widehat{\mathbf A}^*-\mathbf A)'+\mathbf A(\mathbf P_{t-1|t-1}^*-\mathbf P_{t-1|t-1})(\widehat{\mathbf A}^*-\mathbf A)'+O_p(T^{-1/2}),\nonumber
\end{align}
since
\begin{align}
&(\widehat{\mathbf A}^*-\mathbf A)\mathbf P_{t-1|t-1}\mathbf A'=\mathbf A\mathbf P_{t-1|t-1}(\widehat{\mathbf A}^*-\mathbf A)'=O_p(T^{-1/2})O(e^{-(t-1)})+O_p(T^{-1/2})O(n^{-1}),\nonumber\\
&(\widehat{\mathbf A}^*-\mathbf A)\mathbf
P_{t-1|t-1}(\widehat{\mathbf A}^*-\mathbf A)'=O_p(T^{-1})O(e^{-(t-1)})+O_p(T^{-1})O(n^{-1}),\nonumber
\end{align}
because of Lemma \ref{lem:eststar} and \eqref{Ptt} in the proof of Lemma \ref{lem:steady3}, and
\begin{equation}
\widehat{\mathbf H}^*\widehat{\mathbf H}^{*'}-{\mathbf H}{\mathbf H}'=(\widehat{\mathbf H}^*-{\mathbf H})\mathbf H'+\mathbf H(\widehat{\mathbf H}^*-{\mathbf H})'+(\widehat{\mathbf H}^*-{\mathbf H})(\widehat{\mathbf H}^*-{\mathbf H})'=O_p(T^{-1/2}),\nonumber
\end{equation}
because of Lemma \ref{lem:eststar}.

\noindent Define
\[
{\bm{\mathcal K}}_t = \mathbf P_{t|t-1}{\bm\Lambda}'({\bm\Lambda}\mathbf P_{t|t-1}{\bm\Lambda}'+{\mathbf R})^{-1},
\]
and analogously define $\widehat{\bm{\mathcal K}}^*_t$ when using $ \mathbf P_{t|t-1}^*$, $\widehat{\bm\Lambda}^*$ and $\widehat{\mathbf R}^*$. From \eqref{eq:up1hat} we have
\begin{align}
\mathbf F_{t|t}^*-\mathbf F_{t|t}&=\mathbf F_{t|t-1}^*-\mathbf F_{t|t-1}+(\widehat{\bm{\mathcal K}}^*_t-{\bm{\mathcal K}}_t)(\mathbf x_t-\bm\Lambda \mathbf F_{t|t-1})\nonumber\\
&\;+(\widehat{\bm{\mathcal K}}^*_t-{\bm{\mathcal K}}_t)(\bm\Lambda \mathbf F_{t|t-1}-\widehat{\bm\Lambda}^* \mathbf F_{t|t-1}^*)+{\bm{\mathcal K}}_t(\bm\Lambda \mathbf F_{t|t-1}-\widehat{\bm\Lambda}^* \mathbf F_{t|t-1}^*).\label{upFerror}
\end{align}
Moreover, because of \eqref{DF2},
\begin{align}
\frac{\widehat{\bm\Lambda}^* \mathbf F_{t|t-1}^*-{\bm\Lambda} \mathbf F_{t|t-1}}{\sqrt n}&=\frac{\bm\Lambda}{\sqrt n}(\mathbf F_{t|t-1}^*-\mathbf F_{t|t-1})+\left(\frac{\widehat{\bm\Lambda}^*-\bm\Lambda}{\sqrt n}\right)(\mathbf F_{t|t-1}^*-\mathbf F_{t|t-1})+\left(\frac{\widehat{\bm\Lambda}^*-\bm\Lambda}{\sqrt n}\right)\mathbf F_{t|t-1}\nonumber\\
&=\frac{\bm\Lambda}{\sqrt n}(\mathbf F_{t|t-1}^*-\mathbf F_{t|t-1})+\left(\frac{\widehat{\bm\Lambda}^*-\bm\Lambda}{\sqrt n}\right)(\mathbf F_{t|t-1}^*-\mathbf F_{t|t-1})+O_p(T^{-1/2}),\label{upFerror2}
\end{align}
since
\begin{align}
\left(\frac{\widehat{\bm\Lambda}^*-\bm\Lambda}{\sqrt
n}\right)\mathbf F_{t|t-1}&=\left(\frac{\widehat{\bm\Lambda}^*-\bm\Lambda}{\sqrt n}\right)\mathbf A\mathbf F_{t-1|t-1}=\left(\frac{\widehat{\bm\Lambda}^*-\bm\Lambda}{\sqrt n}\right)\mathbf A(\mathbf F_{t-1|t-1}-\mathbf F_{t-1})+\left(\frac{\widehat{\bm\Lambda}^*-\bm\Lambda}{\sqrt n}\right)\mathbf A\mathbf F_{t-1}\nonumber\\ &=O_p(T^{-1/2})O(e^{-(t-1)/2})+O_p(T^{-1/2})O(n^{-1/2})+O_p(T^{-1/2}),\nonumber \end{align} because of Lemma \ref{lem:eststar}, \eqref{res1}, \eqref{DF2} and since $\Vert\mathbf A\Vert=O(1)$. Similarly, from \eqref{eq:up2hat} \begin{align} \mathbf P_{t|t}^*-\mathbf P_{t|t} &=\mathbf P_{t|t-1}^*-\mathbf P_{t|t-1}-\left[(\widehat{\bm{\mathcal K}}_t^*-{\bm{\mathcal K}}_t)\bm\Lambda\mathbf P_{t|t-1}\right.\nonumber\\ &+\;\left.(\widehat{\bm{\mathcal K}}_t^*-{\bm{\mathcal K}}_t)(\widehat{\bm\Lambda}^*\mathbf P_{t|t-1}^*-\bm\Lambda\mathbf P_{t|t-1}) + {\bm{\mathcal K}}_t(\widehat{\bm\Lambda}^*\mathbf P_{t|t-1}^*-\bm\Lambda\mathbf P_{t|t-1})\right].\label{upPerror} \end{align} Moreover, \begin{align} \frac{\widehat{\bm\Lambda}^*\mathbf P_{t|t-1}^*-\bm\Lambda\mathbf P_{t|t-1}}{\sqrt n}&=\frac{\bm\Lambda}{\sqrt n}(\mathbf P_{t|t-1}^*-\mathbf P_{t|t-1})+\left(\frac{\widehat{\bm\Lambda}^*-\bm\Lambda}{\sqrt n}\right)(\mathbf P_{t|t-1}^*-\mathbf P_{t|t-1})+\left(\frac{\widehat{\bm\Lambda}^*-\bm\Lambda}{\sqrt n}\right)\mathbf P_{t|t-1}\nonumber\\ &=\frac{\bm\Lambda}{\sqrt n}(\mathbf P_{t|t-1}^*-\mathbf P_{t|t-1})+\left(\frac{\widehat{\bm\Lambda}^*-\bm\Lambda}{\sqrt n}\right)(\mathbf P_{t|t-1}^*-\mathbf P_{t|t-1})+O_p(T^{-1/2}),\label{upPerror2} \end{align} since \begin{align} \left(\frac{\widehat{\bm\Lambda}^*-\bm\Lambda}{\sqrt n}\right)\mathbf P_{t|t-1}&=\left(\frac{\widehat{\bm\Lambda}^*-\bm\Lambda}{\sqrt n}\right)\left(\mathbf A\mathbf P_{t-1|t-1}\mathbf A'+\mathbf H\mathbf H'\right)\nonumber\\ &=O_p(T^{-1/2})O(e^{-(t-1)})+O_p(T^{-1/2})O(n^{-1})+O_p(T^{-1/2}),\nonumber \end{align} because of Lemma \ref{lem:eststar} and \eqref{Ptt} in the 
proof of Lemma \ref{lem:steady3} and since $\Vert\mathbf A\Vert=O(1)$ and $\Vert\mathbf H\Vert=O(1)$. Following the same reasoning we also have \begin{align} \frac{\bm\Lambda\mathbf P_{t|t-1}}{\sqrt n}&=\frac{\bm\Lambda}{\sqrt n}\left(\mathbf A\mathbf P_{t-1|t-1}\mathbf A'+\mathbf H\mathbf H'\right)=\frac{\bm\Lambda}{\sqrt n}\left(O(e^{-(t-1)})+O(n^{-1})+\mathbf H\mathbf H'\right). \label{upPerror3} \end{align} \noindent Now, set $t=1$. Then, by noticing that $\mathbf F_{0|0}^*=\mathbf F_{0|0}$ and $\mathbf P_{0|0}^*=\mathbf P_{0|0}$, from \eqref{predFerror} and \eqref{predPerror} at $t=1$ we have \begin{align} \mathbf F_{1|0}^*-\mathbf F_{1|0} = O_p(T^{-1/2}), \qquad \mathbf P_{1|0}^*-\mathbf P_{1|0} = O_p(T^{-1/2}).\label{F10P10} \end{align} Then, because of \eqref{eq:OP1}, \eqref{F10P10} and Lemma \ref{lem:eststar}, at $t=1$ we have \begin{align} &\sqrt n(\widehat{\bm{\mathcal K}}^*_1-{\bm{\mathcal K}}_1)\left(\frac{\mathbf x_1}{\sqrt n}-\frac{\bm\Lambda \mathbf F_{1|0}}{\sqrt n}\right)= \label{res2}\\ &=\left[\mathbf P_{1|0}^*\frac{\widehat{\bm\Lambda}^{*'}}{\sqrt n}\left(\frac{\widehat{\bm\Lambda}^*}{\sqrt n}\mathbf P_{1|0}^*\frac{\widehat{\bm\Lambda}^{*'}}{\sqrt n}+\frac{\widehat{\mathbf R}^*}n\right)^{-1}-\mathbf P_{1|0}\frac{{\bm\Lambda}'}{\sqrt n}\left(\frac{\bm\Lambda}{\sqrt n}\mathbf P_{1|0}\frac{\bm\Lambda'}{\sqrt n}+\frac{\mathbf R}n\right)^{-1}\right]\left(\frac{\mathbf x_1}{\sqrt n}-\frac{\bm\Lambda \mathbf F_{1|0}}{\sqrt n}\right)\nonumber\\ &=O_p(T^{-1/2}).\nonumber \end{align} Moreover, from \eqref{upPerror3} \begin{align} \sqrt n(\widehat{\bm{\mathcal K}}_1^*-{\bm{\mathcal K}}_1)\frac{\bm\Lambda\mathbf P_{1|0}}{\sqrt n}=O_p(T^{-1/2}).\label{res3} \end{align} From \eqref{upFerror} and using \eqref{predFerror}, \eqref{upFerror2}, \eqref{F10P10}, and \eqref{res2} at $t=1$ we have \begin{align} \mathbf F_{1|1}^*-\mathbf F_{1|1}&=\mathbf F_{1|0}^*-\mathbf F_{1|0}+(\widehat{\bm{\mathcal K}}^*_1-{\bm{\mathcal K}}_1)(\mathbf 
x_1-\bm\Lambda \mathbf F_{1|0})\nonumber\\ &\;+(\widehat{\bm{\mathcal K}}^*_1-{\bm{\mathcal K}}_1)(\bm\Lambda \mathbf F_{1|0}-\widehat{\bm\Lambda}^* \mathbf F_{1|0}^*)+{\bm{\mathcal K}}_1(\bm\Lambda \mathbf F_{1|0}-\widehat{\bm\Lambda}^* \mathbf F_{1|0}^*)= O_p(T^{-1/2}).\label{upFerror11} \end{align} Similarly, from \eqref{upPerror} and using \eqref{predPerror}, \eqref{upPerror2}, \eqref{F10P10}, and \eqref{res3} at $t=1$ we have \begin{align} \mathbf P_{1|1}^*-\mathbf P_{1|1} &=\mathbf P_{1|0}^*-\mathbf P_{1|0}-\left[(\widehat{\bm{\mathcal K}}_1^*-{\bm{\mathcal K}}_1)\bm\Lambda\mathbf P_{1|0}\right.\nonumber\\ &+\;\left.(\widehat{\bm{\mathcal K}}_1^*-{\bm{\mathcal K}}_1)(\widehat{\bm\Lambda}^*\mathbf P_{1|0}^*-\bm\Lambda\mathbf P_{1|0}) + {\bm{\mathcal K}}_1(\widehat{\bm\Lambda}^*\mathbf P_{1|0}^*-\bm\Lambda\mathbf P_{1|0})\right]=O_p(T^{-1/2}).\label{upPerror11} \end{align} Then substituting \eqref{upFerror11} into \eqref{predFerror} and \eqref{upPerror11} into \eqref{predPerror} we have \begin{equation} \mathbf F_{2|1}^*-\mathbf F_{2|1}=O_p(T^{-1/2}), \qquad \mathbf P_{2|1}^*-\mathbf P_{2|1}=O_p(T^{-1/2}).\label{F21P21} \end{equation} Then, because of \eqref{eq:OP1}, \eqref{F21P21} and Lemma \ref{lem:eststar}, at $t=2$ we have \begin{align} &\sqrt n(\widehat{\bm{\mathcal K}}^*_2-{\bm{\mathcal K}}_2)\left(\frac{\mathbf x_2}{\sqrt n}-\frac{\bm\Lambda \mathbf F_{2|1}}{\sqrt n}\right)= O_p(T^{-1/2}),\label{res4} \end{align} and from \eqref{upPerror3} \begin{align} \sqrt n(\widehat{\bm{\mathcal K}}_2^*-{\bm{\mathcal K}}_2)\frac{\bm\Lambda\mathbf P_{2|1}}{\sqrt n}=O_p(T^{-1/2}).\label{res5} \end{align} From \eqref{upFerror} and using \eqref{predFerror}, \eqref{upFerror2}, \eqref{F21P21}, and \eqref{res4}, at $t=2$ we have \[ \mathbf F_{2|2}^*-\mathbf F_{2|2}= O_p(T^{-1/2}), \] and from \eqref{upPerror} and using \eqref{predPerror}, \eqref{upPerror2}, \eqref{F21P21}, and \eqref{res5}, at $t=2$ we have \[ \mathbf P_{2|2}^*-\mathbf P_{2|2}= O_p(T^{-1/2}). 
\] By repeating the same reasoning for $t=3,\ldots ,T$ we have \begin{align} &\Vert\mathbf F_{t|t}^*-\mathbf F_{t|t} \Vert = O_p(T^{-1/2}),\qquad \Vert\mathbf P_{t|t}^*-\mathbf P_{t|t} \Vert = O_p(T^{-1/2}),\label{F110_6} \end{align} and also \begin{align} &\Vert\mathbf F_{t|t-1}^*-\mathbf F_{t|t-1} \Vert = O_p(T^{-1/2}),\qquad \Vert\mathbf P_{t|t-1}^*-\mathbf P_{t|t-1} \Vert = O_p(T^{-1/2}).\label{F110_8} \end{align} Because of Lemma \ref{lem:steady3} and \eqref{F110_6}, we have for $\bar t\le t \le T$, \begin{align} &\Vert\mathbf F_{t|t}^*-\mathbf F_{t} \Vert \le \Vert\mathbf F_{t|t}^*-\mathbf F_{t|t} \Vert + \Vert\mathbf F_{t|t}-\mathbf F_{t} \Vert = O_p(T^{-1/2}) + O(n^{-1/2}),\nonumber\\ &\Vert\mathbf P_{t|t}^* \Vert \le \Vert\mathbf P_{t|t}^*-\mathbf P_{t|t} \Vert + \Vert\mathbf P_{t|t} \Vert = O_p(T^{-1/2}) + O(n^{-1}).\nonumber \end{align} Now compare the KS iterations, \eqref{eq:KS3}-\eqref{eq:KS7}, with those obtained when using $\widehat{\bm\Theta}^*$: \begin{align} &\mathbf F_{t|T}^*=\mathbf F_{t|t-1}^*+\mathbf P_{t|t-1}^*\mathbf r_{t-1}^*,\label{eq:KS3hat}\\ &\mathbf r_{t-1}^*=\widehat{\bm\Lambda}^{*'}(\widehat{\bm\Lambda}^*\mathbf P_{t|t-1}^*\widehat{\bm\Lambda}^{*'}+\widehat{\mathbf R}^*)^{-1}(\mathbf x_t-\widehat{\bm\Lambda}^*\mathbf F_{t|t-1}^*)+\mathbf L^{*'}_{t}\mathbf r_{t}^*,\label{eq:KS4hat}\\ &\mathbf P_{t|T}^*=\mathbf P_{t|t-1}^*-\mathbf P_{t|t-1}^*\mathbf N_{t-1}^*\mathbf P_{t|t-1}^*,\label{eq:KS5hat}\\ &\mathbf N_{t-1}^*=\widehat{\bm\Lambda}^{*'}(\widehat{\bm\Lambda}^*\mathbf P_{t|t-1}^*\widehat{\bm\Lambda}^{*'}+\widehat{\mathbf R}^*)^{-1}\widehat{\bm\Lambda}^*+\mathbf L_{t}^{*'}\mathbf N_{t}^*\mathbf L_{t}^*,\label{eq:KS6hat}\\ &\mathbf L_{t}^*= \widehat{\mathbf A}^*-\widehat{\mathbf A}^* \mathbf P_{t|t-1}^* \widehat{\bm\Lambda}^{*'} (\widehat{\bm\Lambda}^*\mathbf P_{t|t-1}^*\widehat{\bm\Lambda}^{*'}+\widehat{\mathbf R}^*)^{-1} \widehat{\bm\Lambda}^*,\label{eq:KS7hat} \end{align} where 
$\mathbf r_{T}^*=\mathbf 0_{r\times 1}$, $\mathbf N_{T}^*=\mathbf 0_{r}$. First notice that at $t=T$ both KF and KS obviously give the same result, hence \eqref{F110_6} applies also in this case, and because of Lemma \ref{lem:eststar}, \eqref{eq:OP1}, \eqref{upFerror2}, and \eqref{F110_8}, we have \begin{equation}\label{rT1} \mathbf r_{T-1}^*-\mathbf r_{T-1}=O_p(T^{-1/2}),\qquad \mathbf N_{T-1}^*-\mathbf N_{T-1}=O_p(T^{-1/2}). \end{equation} Moreover, from \eqref{eq:KS7hat}, because of Lemma \ref{lem:eststar} and \eqref{F110_8}, we have \begin{align} \mathbf L_{t}^*-\mathbf L_t&=\widehat{\mathbf A}^*-\mathbf A-\sqrt n\left[\widehat{\mathbf A}^* \widehat{\bm{\mathcal K}}_t^* \frac{\widehat{\bm\Lambda}^*}{\sqrt n}-{\mathbf A} {\bm{\mathcal K}}_t \frac{{\bm\Lambda}}{\sqrt n}\right]= O_p(T^{-1/2}).\label{rNL_2} \end{align} Then, from \eqref{eq:KS4hat}, because of \eqref{eq:OP1}, \eqref{upFerror2}, \eqref{F110_8}, \eqref{rT1} and \eqref{rNL_2}, at $t=T-1$ we have \begin{align}\label{rT2} \mathbf r_{T-2}^*-\mathbf r_{T-2}=O_p(T^{-1/2}),\qquad \mathbf N_{T-2}^*-\mathbf N_{T-2}=O_p(T^{-1/2}). \end{align} Therefore, from \eqref{eq:KS3hat} and \eqref{eq:KS5hat}, because of \eqref{F110_8} and \eqref{rT2}, we have \begin{align} \mathbf F_{T-1|T}^*-\mathbf F_{T-1|T}= O_p(T^{-1/2}), \qquad \mathbf P_{T-1|T}^*-\mathbf P_{T-1|T}= O_p(T^{-1/2}). \end{align} By repeating the same reasoning for $t=(T-2),\ldots ,1$, we have \begin{equation}\label{FtT00cons} \Vert\mathbf F_{t|T}^*-\mathbf F_{t|T}\Vert=O_p(T^{-1/2}), \qquad \Vert\mathbf P_{t|T}^*-\mathbf P_{t|T}\Vert=O_p(T^{-1/2}). 
\end{equation} Because of Lemma \ref{lem:steady3} and \eqref{FtT00cons}, we have for $\bar t\le t \le T$ \begin{align} &\Vert\mathbf F_{t|T}^*-\mathbf F_{t} \Vert \le \Vert\mathbf F_{t|T}^*-\mathbf F_{t|T} \Vert + \Vert\mathbf F_{t|T}-\mathbf F_{t} \Vert = O_p(T^{-1/2}) + O(n^{-1/2}),\nonumber\\ &\Vert\mathbf P_{t|T}^* \Vert \le \Vert\mathbf P_{t|T}^*-\mathbf P_{t|T} \Vert + \Vert\mathbf P_{t|T} \Vert = O_p(T^{-1/2}) + O(n^{-1}),\nonumber \end{align} which completes the proof. $\Box$ \begin{lem}\label{lem:est0} Consider the initial estimator of the parameters $\widehat{\bm\Theta}_0$ defined in \eqref{initparam}, then there exists an invertible $r\times r$ matrix $\mathbf J$ such that, as $n,T\to\infty$: \begin{align} &\min(\sqrt n,\sqrt T)\,\Vert\widehat{\bm\lambda}_{i0}'-\bm\lambda_i'\mathbf J^{-1}\Vert = O_p(1), \quad i=1,\ldots, n,\nonumber\\ &\min(\sqrt n,\sqrt T)\,\Vert\widehat{\mathbf A}_0-\mathbf J\mathbf A\mathbf J^{-1}\Vert = O_p(1),\nonumber\\ &\min(\sqrt n,\sqrt T)\,\Vert\widehat{\mathbf H}_0-\mathbf J\mathbf H\Vert = O_p(1),\nonumber\\ &\min(\sqrt n,\sqrt T)\,\vert[\widehat{\mathbf R}]_{ii,0}-[\mathbf R]_{ii}\vert = O_p(1), \quad i=1,\ldots, n.\nonumber \end{align} Moreover, under E1 and E2 we have $\mathbf J=\mathbf I_r$. \end{lem} \noindent{\bf Proof.} When E2 holds, the result follows from the proofs of Lemmas 3 and 5 in \citet{BLL2}, where it is shown that $\mathbf J$ is a diagonal matrix with entries $\pm 1$. If we also impose E1, the sign indeterminacy is fixed and $\mathbf J=\mathbf I_r$. 
$\Box$ \begin{lem}\label{lem:convEM} Consider the estimator of the parameters obtained at convergence of the EM algorithm $\widehat{\bm\Theta}:= \widehat{\bm\Theta}_{k^*}$, then, as $n,T\to\infty$: \begin{align} &\sqrt T\,\Vert\widehat{\bm\lambda}_{ik^*}-\bm\lambda_i \Vert = O_p(1), \quad i=1,\ldots, n,\nonumber\\ &\sqrt T\,\Vert\widehat{\mathbf A}_{k^*}-\mathbf A\Vert = O_p(1),\nonumber\\ &\sqrt T\,\Vert\widehat{\mathbf H}_{k^*}-\mathbf H\Vert = O_p(1),\nonumber\\ &\sqrt T\,\vert[\widehat{\mathbf R}]_{ii,k^*}-[\mathbf R]_{ii}\vert = O_p(1), \quad i=1,\ldots, n.\nonumber \end{align} \end{lem} \noindent{\bf Proof.} Define the $Q\times Q$ matrices \begin{align} &\bm{\mathcal I}(\bm\Theta)=-\nabla^2_{\bm\Theta\bm\Theta'}\ \ell(\bm{\mathcal X}_T;\bm\Theta),\nonumber\\ &\bm{\mathcal I}_0(\bm\Theta) = -\int_{\mathbb R^{r\times T}}\nabla^2_{\bm\Theta\bm\Theta'}\ \ell(\bm{\mathcal X}_T,\bm{\mathcal F}_T;\bm\Theta) f(\bm{\mathcal F}_T|\bm{\mathcal X}_T;\bm\Theta)\mathrm d\bm{\mathcal F}_T = -\E_{\bm\Theta}[\nabla^2_{\bm\Theta\bm\Theta'}\ \ell(\bm{\mathcal X}_T,\bm{\mathcal F}_T;\bm\Theta)|\bm{\mathcal X}_T],\nonumber\\ &\bm{\mathcal I}_1(\bm\Theta) = -\int_{\mathbb R^{r\times T}}\nabla^2_{\bm\Theta\bm\Theta'}\ \ell(\bm{\mathcal F}_T|\bm{\mathcal X}_T;\bm\Theta) f(\bm{\mathcal F}_T|\bm{\mathcal X}_T;\bm\Theta)\mathrm d\bm{\mathcal F}_T= -\E_{\bm\Theta}[\nabla^2_{\bm\Theta\bm\Theta'}\ \ell(\bm{\mathcal F}_T|\bm{\mathcal X}_T;\bm\Theta)|\bm{\mathcal X}_T],\nonumber \end{align} then, since $\bm{\mathcal I}(\bm\Theta)$ does not depend on $\bm{\mathcal F}_T$, from \eqref{eq:LL1} and \eqref{eq:LLX2} \begin{equation}\label{III} \bm{\mathcal I}(\bm\Theta)= \bm{\mathcal I}_0(\bm\Theta)-\bm{\mathcal I}_1(\bm\Theta). 
\end{equation} Moreover, at convergence of the EM algorithm (iteration $k^*$) we have the Taylor approximation \begin{align}\label{taylorIII} (\widehat{\bm\Theta}_{k^*}-\widehat{\bm\Theta}^*) &= (\widehat{\bm\Theta}_{k^*-1}-\widehat{\bm\Theta}^*)\bm{\mathcal R}(\widehat{\bm\Theta}^*) + O\left(\Vert\widehat{\bm\Theta}_{k^*}-\widehat{\bm\Theta}^*\Vert^2\right), \end{align} where, following \citet{MR94}, we have (see also \eqref{III}) \[ \bm{\mathcal R}(\widehat{\bm\Theta}^*) = \bm{\mathcal I}_1(\widehat{\bm\Theta}^*)\left(\bm{\mathcal I}_0(\widehat{\bm\Theta}^*)\right)^{-1} = \mathbf I_Q-\bm{\mathcal I}(\widehat{\bm\Theta}^*)\left(\bm{\mathcal I}_0(\widehat{\bm\Theta}^*)\right)^{-1}. \] Hence, by iterating \eqref{taylorIII} $k^*$ times and neglecting the second term on the rhs, which at convergence is always smaller than the first term, we obtain \begin{align} \Vert\widehat{\bm\Theta}_{k^*}-\widehat{\bm\Theta}^*\Vert& \le \Vert \widehat{\bm\Theta}_{0}-\widehat{\bm\Theta}^*\Vert \;\Vert\bm{\mathcal R}(\widehat{\bm\Theta}^*)\Vert^{k^*}.\label{taylorIII2} \end{align} Hereafter, denote $\zeta_{nT}:=\max(n^{-1/2},T^{-1/2})$. 
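The geometric bound in \eqref{taylorIII2} says that each EM iteration shrinks the distance to the fixed point by the contraction factor $\Vert\bm{\mathcal R}(\widehat{\bm\Theta}^*)\Vert<1$. A minimal numerical sketch of this mechanism (a hypothetical scalar map; the values of the contraction factor and fixed point are purely illustrative, not the paper's quantities):

```python
# Hypothetical scalar analogue of the EM fixed-point iteration: near the
# fixed point theta*, one step acts like
#   theta_k - theta* ~ rho * (theta_{k-1} - theta*),  with |rho| < 1,
# so after k steps: |theta_k - theta*| <= |theta_0 - theta*| * rho**k.
rho = 0.6            # stand-in for the contraction factor ||R(theta*)||
theta_star = 1.0     # stand-in fixed point
theta = 2.0          # initial estimate theta_0
errors = []
for k in range(20):
    theta = theta_star + rho * (theta - theta_star)  # one EM-like step
    errors.append(abs(theta - theta_star))

# Geometric decay matching the bound: error_k <= |theta_0 - theta*| * rho**k
bounds = [abs(2.0 - theta_star) * rho ** (k + 1) for k in range(20)]
print(all(e <= b + 1e-12 for e, b in zip(errors, bounds)))
```

After 20 iterations the error is of order $0.6^{20}\approx 4\times 10^{-5}$, which is why the $o_p(T^{-k^*/2})$ terms below are dominated once the algorithm is run to convergence.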
Consider \eqref{taylorIII2} for the estimated loadings, for any $i=1,\ldots,n$, and using Gaussianity (see D2 and D3), we have \begin{align} \Vert\widehat{\bm\lambda}_{ik^*}-\widehat{\bm\lambda}_i^*\Vert &\le \Vert \widehat{\bm\lambda}_{i0}-\widehat{\bm\lambda}_i^*\Vert \;\Big\Vert\mathbf I_r- \Big(\sum_{t=1}^T\mathbf F_t\mathbf F_t'\Big)\Big(\sum_{t=1}^T\E_{\widehat{\bm\Theta}^*}[\mathbf F_t\mathbf F_t'|\bm{\mathcal X}_T]\Big)^{-1}\Big\Vert^{k^*}\nonumber\\ &=\Vert \widehat{\bm\lambda}_{i0}-\widehat{\bm\lambda}_i^*\Vert \;\Big\Vert\mathbf I_r- \Big(\frac 1 {T^2}\sum_{t=1}^T\mathbf F_t\mathbf F_t'\Big)\Big(\frac 1 {T^2}\sum_{t=1}^T\left(\mathbf F_{t|T}^*\mathbf F_{t|T}^{*'}+\mathbf P^*_{t|T}\right)\Big)^{-1}\Big\Vert^{k^*}.\label{convEMML} \end{align} Therefore, by condition \eqref{eq:exprate} in the text, ${\bar t}=O(\log T)$, because of Lemma \ref{lem:steady_est1}, we have \begin{align}\label{convEMML1} \Big(\frac 1 {T^2} \sum_{t=1}^T\left(\mathbf F_{t|T}^*\mathbf F_{t|T}^{*'}+\mathbf P^*_{t|T}\right)\Big)^{-1}&=\Big(\frac 1 {T^2} \sum_{t=1}^{\bar t-1}\left(\mathbf F_{t|T}^*\mathbf F_{t|T}^{*'}+\mathbf P^*_{t|T}\right)+\frac 1 {T^2}\sum_{t=\bar t}^{T}\left(\mathbf F_{t|T}^*\mathbf F_{t|T}^{*'}+\mathbf P^*_{t|T}\right)\Big)^{-1}\nonumber\\ &=\bigg(\frac 1 {T^2}\sum_{t=1}^{T}\mathbf F_{t}\mathbf F_{t}'+O_p\bigg(\frac{\zeta_{nT}}{\sqrt T}\bigg)+O_p\bigg(\frac {\log T}{T}\bigg)\bigg)^{-1}\nonumber\\ &=\Big(\frac 1 {T^2}\sum_{t=1}^{T}\mathbf F_{t}\mathbf F_{t}'\Big)^{-1}+o_p(T^{-1/2}), \end{align} since $\Vert\mathbf F_{t|T}^*\mathbf F_{t|T}^{*'}+\mathbf P^*_{t|T}\Vert = O_p(T)$ and $\Vert\mathbf F_t\mathbf F_t'\Vert = O_p(T)$, because $\mathbf F_t\sim I(1)$ and $\mathbf F_{t|T}^*\sim I(1)$, and \begin{align} &\frac 1 {T^2} \sum_{t=1}^{\bar t-1}\left(\mathbf F_{t|T}^*\mathbf F_{t|T}^{*'}+\mathbf P^*_{t|T}\right)=O_p\left(\frac {\log T}{T}\right),\qquad \frac 1{T^2}\sum_{t=1}^{T}\mathbf 
F_{t}\mathbf F_{t}'- \frac 1 {T^2}\sum_{t=\bar t}^{T}\mathbf F_{t}\mathbf F_{t}'= O_p\left(\frac {\log T}{T}\right).\nonumber \end{align} Moreover, because of Lemmas \ref{lem:est0} and \ref{lem:eststar}, we have \begin{align} \Vert \widehat{\bm\lambda}_{i0}-\widehat{\bm\lambda}_i^*\Vert&\le \Vert \widehat{\bm\lambda}_{i0}-{\bm\lambda}_i\Vert +\Vert \widehat{\bm\lambda}_i^*-{\bm\lambda}_i\Vert = O_p(\zeta_{nT}) + O_p(T^{-1/2}). \label{convEMML2} \end{align} By substituting \eqref{convEMML1} and \eqref{convEMML2} into \eqref{convEMML}, we have \begin{align}\label{convEMML3} \Vert\widehat{\bm\lambda}_{ik^*}-\widehat{\bm\lambda}_i^*\Vert=o_p(\zeta_{nT}\,T^{-k^*/2}). \end{align} Finally, because of Lemma \ref{lem:eststar} and \eqref{convEMML3}, we have \begin{align} \Vert \widehat{\bm\lambda}_{ik^*}-{\bm\lambda}_i\Vert&\le \Vert \widehat{\bm\lambda}_{ik^*}-\widehat{\bm\lambda}_i^*\Vert + \Vert \widehat{\bm\lambda}_{i}^*-{\bm\lambda}_i\Vert =o_p(T^{-k^*/2})+O_p(T^{-1/2}).\nonumber \end{align} The proof for the other parameters follows the same steps by taking the appropriate second derivatives and applying the results in Lemma \ref{lem:steady_est1}. $\Box$ \subsection{Proof of Proposition \ref{prop:EMcons}}\label{app:KFKSEM} Consistency of the estimated loadings is proved in Lemma \ref{lem:convEM}. Recalling that $\widehat{\bm\Theta}:= \widehat{\bm\Theta}_{k^*}$ and also $\bm\lambda_i = \mathbf K^{-1'} (\bm b_{i0}'\,\bm b_{i1}')'$, because of \eqref{eq:R1_app_2} we prove \eqref{eq:consEM2}. \noindent Consistency of the estimated static factors is then proved as in Lemma \ref{lem:steady_est1} but using the results of Lemma \ref{lem:convEM} for the estimated parameters. 
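For intuition, the forward KF predictions and updates together with the backward KS recursions in $\mathbf r_t$/$\mathbf N_t$ form, which the consistency argument above repeatedly compares at true and estimated parameters, can be sketched numerically. This is a generic textbook implementation on a toy model, not the paper's estimation code: all dimensions and values are illustrative assumptions, and the toy transition is stationary whereas the paper's factors are $I(1)$.

```python
import numpy as np

# Toy state-space model: x_t = Lam F_t + e_t,  F_t = A F_{t-1} + H u_t.
rng = np.random.default_rng(0)
n, rdim, T = 8, 2, 300
Lam = rng.normal(size=(n, rdim))
A = 0.9 * np.eye(rdim)        # stationary toy transition (illustrative)
HH = 0.01 * np.eye(rdim)      # H H'
R = 0.5 * np.eye(n)

F = np.zeros((T + 1, rdim)); X = np.zeros((T + 1, n))
for t in range(1, T + 1):
    F[t] = A @ F[t - 1] + rng.multivariate_normal(np.zeros(rdim), HH)
    X[t] = Lam @ F[t] + rng.multivariate_normal(np.zeros(n), R)

# Forward pass: predictions (F_{t|t-1}, P_{t|t-1}) and updates (F_{t|t}, P_{t|t}).
Fp = np.zeros((T + 1, rdim)); Pp = np.zeros((T + 1, rdim, rdim))
Fu = np.zeros((T + 1, rdim)); Pu = np.zeros((T + 1, rdim, rdim))
Pu[0] = np.eye(rdim)
for t in range(1, T + 1):
    Fp[t] = A @ Fu[t - 1]
    Pp[t] = A @ Pu[t - 1] @ A.T + HH
    iS = np.linalg.inv(Lam @ Pp[t] @ Lam.T + R)
    K = Pp[t] @ Lam.T @ iS                    # gain K_t
    Fu[t] = Fp[t] + K @ (X[t] - Lam @ Fp[t])
    Pu[t] = Pp[t] - K @ Lam @ Pp[t]

# Backward pass in r/N form, starting from r_T = 0 and N_T = 0:
#   r_{t-1} = Lam' iS v_t + L_t' r_t,   N_{t-1} = Lam' iS Lam + L_t' N_t L_t,
#   F_{t|T} = F_{t|t-1} + P_{t|t-1} r_{t-1},   P_{t|T} = P - P N_{t-1} P.
Fs = np.zeros((T + 1, rdim)); Ps = np.zeros((T + 1, rdim, rdim))
rv = np.zeros(rdim); Nv = np.zeros((rdim, rdim))
for t in range(T, 0, -1):
    iS = np.linalg.inv(Lam @ Pp[t] @ Lam.T + R)
    L = A - A @ Pp[t] @ Lam.T @ iS @ Lam      # L_t
    rv = Lam.T @ iS @ (X[t] - Lam @ Fp[t]) + L.T @ rv
    Nv = Lam.T @ iS @ Lam + L.T @ Nv @ L
    Fs[t] = Fp[t] + Pp[t] @ rv
    Ps[t] = Pp[t] - Pp[t] @ Nv @ Pp[t]

# Smoothing never increases the state uncertainty: tr P_{t|T} <= tr P_{t|t-1}.
print(all(np.trace(Ps[t]) <= np.trace(Pp[t]) + 1e-10 for t in range(1, T + 1)))
```

Running the same two passes with perturbed $\widehat{\bm\Lambda}$, $\widehat{\mathbf A}$, $\widehat{\mathbf H}$, $\widehat{\mathbf R}$ and differencing the outputs is the numerical counterpart of the error decompositions tracked above.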
In particular, for all $\bar t\le t \le T$ \begin{align} \Vert{\mathbf F}_{t|T,k^*}-\mathbf F_{t} \Vert \le \Vert{\mathbf F}_{t|T,k^*}-\mathbf F_{t|t} \Vert + \Vert\mathbf F_{t|t}-\mathbf F_{t} \Vert = O_p(T^{-1/2}) + O(n^{-1/2}).\nonumber \end{align} Recalling that $\widehat{\mathbf F}_t:= \mathbf F_{t|T,k^*}$ and also $\mathbf F_t=\mathbf K (\bm f_t'\, \bm f_{t-1}')'$, because of \eqref{eq:R1_app} we prove \eqref{eq:consEM3}. A similar result can be proved for the KF estimator of the static factors, $\mathbf F_{t|t,k^*}$, using again Lemmas \ref{lem:steady_est1} and \ref{lem:convEM}. \noindent Denote $\zeta_{nT}:=\max(n^{-1/2},T^{-1/2})$. Then, recalling that $\widehat{\bm\lambda}_i'\widehat{\mathbf F}_t:= \widehat{\bm\lambda}_{ik^*}'{\mathbf F}_{t|T,k^*}$, because of \eqref{eq:consEM2}, \eqref{eq:consEM3} and Lemma \ref{lem:steady3}, we have \begin{align} \vert \widehat{\chi}_{it} - \chi_{it}\vert &= \vert \widehat{\bm\lambda}_{i}'\widehat{\mathbf F}_{t}-\bm\lambda_i'\mathbf F_t\vert \le \Vert(\widehat{\bm\lambda}_i-\bm\lambda_i)\mathbf F_t\Vert + \Vert\bm\lambda_i\Vert\;\Vert\widehat{\mathbf F}_t-\mathbf F_t\Vert+ \Vert\widehat{\bm\lambda}_i-\bm\lambda_i\Vert\;\Vert\widehat{\mathbf F}_t-\mathbf F_t\Vert\nonumber\\ &=\Vert(\widehat{\bm\lambda}_i-\bm\lambda_i)\mathbf F_t\Vert + O_p(\zeta_{nT})\nonumber\\ &=\Vert\widehat{\bm\lambda}_i-\widehat{\bm\lambda}_i^*\Vert \;\Vert\mathbf F_t\Vert + \Vert(\widehat{\bm\lambda}_i^*-\bm\lambda_i)\mathbf F_t\Vert + O_p(\zeta_{nT}).\label{ultimo} \end{align} Now, from Lemma \ref{lem:convEM} (see in particular \eqref{convEMML3}) and since $\Vert\mathbf F_t\Vert=O_p(T^{1/2})$, we have \begin{equation} \Vert\widehat{\bm\lambda}_i-\widehat{\bm\lambda}_i^*\Vert \;\Vert\mathbf F_t\Vert= o_p(\zeta_{nT}\,T^{-(k^*-1)/2}).\label{ultimo2} \end{equation} Moreover, from transformation \eqref{DFZ}, defined in the proof of Lemma \ref{lem:eststar}, 
$\bm{\mathcal D}\mathbf F_t=(\mathbf Z_{1t}'\;\mathbf Z_{0t}')'$, we have $\Vert\mathbf Z_{1t}\Vert=O_p(\sqrt T)$ and $\Vert\mathbf Z_{0t}\Vert=O_p(1)$. Then, since $\bm{\mathcal D}'\bm{\mathcal D}=\mathbf I_r$, as a consequence of Lemma \ref{lem:eststar} (see in particular \eqref{lambdaDD}), we have \begin{equation} (\widehat{\bm\lambda}_{i}^*-\bm\lambda_i)'\bm{\mathcal D}'\bm{\mathcal D}\mathbf F_t= O_p(T^{-1/2}).\label{ultimo3} \end{equation} By substituting \eqref{ultimo2} and \eqref{ultimo3} into \eqref{ultimo}, and since $k^*\ge 1$ because we run the EM algorithm at least once after initialization, we prove \eqref{eq:consEM4}. $\Box$ \setcounter{footnote}{0} \section{Data Description and Data Treatment}\label{sec:data} This Appendix presents the dataset used in the analysis.~All variables were downloaded from Haver on June 16$^{th}$, 2017. None of the variables were adjusted for outliers except variables 57, 83, 87, and 94. All variables are from the USECON database except variable 103, which is from the DAILY database. All monthly and daily series are transformed into quarterly observations by simple averages. In order to choose whether or not to de-trend a variable we apply the following procedure: let $m_i$ be the sample mean of $\Delta y_{it}$, $\gamma_{i}(j)$ be the auto-covariance of order $j$ of $\Delta y_{it}$, and $\bar{\gamma}_i=\sqrt{\frac{1}{T}\sum_{j=1}^J \gamma_i(j)}$, then if $\frac{|m_i|}{\bar{\gamma}_i}\ge 1.96$ we estimate $a_i$ and $b_i$ from an OLS regression of $y_{it}$ on a constant and a time trend, whereas if $\frac{|m_i|}{\bar{\gamma}_i}< 1.96$ we set $\widehat{a}_i=m_i$ and $\widehat{b}_i=0$. \begin{center} \begin{tabular}{ p{.21\textwidth} p{.21\textwidth} p{.1\textwidth} p{.1\textwidth} } \multicolumn{4}{c}{\textsc{List of Abbreviations}}\\ \hline\hline \multicolumn{4}{l}{Source:}\\\hline \multicolumn{4}{l}{BLS=U.S. Department of Labor: Bureau of Labor Statistics}\\ \multicolumn{4}{l}{BEA=U.S. 
Department of Commerce: Bureau of Economic Analysis} \\ \multicolumn{4}{l}{ISM = Institute for Supply Management} \\ \multicolumn{4}{l}{CB=U.S. Department of Commerce: Census Bureau} \\ \multicolumn{4}{l}{FRB=Board of Governors of the Federal Reserve System} \\ \multicolumn{4}{l}{EIA=Energy Information Administration} \\ \multicolumn{4}{l}{WSJ=Wall Street Journal} \\ \multicolumn{4}{l}{CBO=Congressional Budget Office} \\ \multicolumn{4}{l}{FRBPHIL=Federal Reserve Bank of Philadelphia} \\\hline \\ \hline\hline F = Frequency & T=Transformation & SA & $\xi$=Idiosyncratic \\ \hline Q = Quarterly & 0 = None & 0 = no & 0=$I(0)$ \\ M = Monthly & 1 = $\log$ & 1 = yes & 1=$I(1)$ \\ D = Daily & 2 = $\Delta \log$ & & \\ \hline \\ \hline\hline \multicolumn{2}{l}{D = Deterministic Component} & \multicolumn{2}{l}{U=Units}\\ \hline \multicolumn{2}{l}{0 = $\widehat{a}_i=\frac{1}{T}\sum_{t=1}^T\Delta y_{it}$, $\widehat{b}_i=0$} & \multicolumn{2}{l}{1000--P = Thousands of Persons}\\ \multicolumn{2}{l}{1 = OLS Detrending } & \multicolumn{2}{l}{1000--U = Thousands of Units}\\ & & \multicolumn{2}{l}{BoC = Billions of Chained}\\ & & \multicolumn{2}{l}{\$--B = Dollars per Barrel}\\\hline \end{tabular} \end{center} \setlength{\tabcolsep}{.01\textwidth} \centering \footnotesize \begin{tabular}{ p{.015\textwidth} p{.1\textwidth} p{.45\textwidth} p{.1\textwidth} p{.02\textwidth} p{.04\textwidth} P{.025\textwidth} p{.01\textwidth} p{.01\textwidth} p{.01\textwidth} }\hline\hline \bf N & \bf Series ID & \bf Definition & \bf Unit & \bf F & \bf S & \bf SA & \bf T & \bf D & $\bm \xi$ \\\hline 1 & GDPH & Real Gross Domestic Product & BoC 2009\$ & Q & BEA & 1 & 1 & 1 & 0 \\ 2 & GDYH & Real Gross Domestic Income & BoC 2009\$ & Q & BEA & 1 & 1 & 1 & 0 \\ 3 & FSH & Real Final Sales of Domestic Product & BoC 2009\$ & Q & BEA & 1 & 1 & 1 & 1 \\ 4 & IH & Real Gross Private Domestic Investment & BoC 2009\$ & Q & BEA & 1 & 1 & 1 & 1 \\ 5 & GSH & Real State \& Local$^\ast$ & BoC 2009\$ & Q & BEA & 1 & 
1 & 1 & 1 \\ 6 & FRH & Real Private Residential Fixed Investment & BoC 2009\$ & Q & BEA & 1 & 1 & 1 & 0 \\ 7 & FNH & Real Private Nonresidential Fixed Investment & BoC 2009\$ & Q & BEA & 1 & 1 & 1 & 1 \\ 8 & MH & Real Imports of Goods \& Services & BoC 2009\$ & Q & BEA & 1 & 1 & 1 & 0 \\ 9 & GH & Real Government$^\ast$ & BoC 2009\$ & Q & BEA & 1 & 1 & 1 & 1 \\ 10 & XH & Real Exports of Goods \& Services & BoC 2009\$ & Q & BEA & 1 & 1 & 1 & 0 \\ 14 & CH & Real Personal Consumption Expenditures (PCE)& BoC 2009\$ & Q & BEA & 1 & 1 & 1 & 0 \\ 11 & CNH & Real PCE: Nondurable Goods & BoC 2009\$ & Q & BEA & 1 & 1 & 1 & 1 \\ 12 & CSH & Real PCE: Services & BoC 2009\$ & Q & BEA & 1 & 1 & 1 & 0 \\ 13 & CDH & Real PCE: Durable Goods & BoC 2009\$ & Q & BEA & 1 & 1 & 1 & 0 \\ 15 & GFDIH & Real National Defense Gross Investment & BoC 2009\$ & Q & BEA & 1 & 1 & 1 & 0 \\ 16 & GFNIH & Real Federal Nondefense Gross Investment & BoC 2009\$ & Q & BEA & 1 & 1 & 1 & 0 \\ 17 & YPDH & Real Disposable Personal Income & BoC 2009\$ & Q & BEA & 1 & 1 & 1 & 0 \\ \hline 18 & JI & Gross Private Domestic Investment:$^\star$ & 2009=100 & Q & BEA & 1 & 2 & 0 & 0 \\ 19 & JGDP & Gross Domestic Product:$^\star$ & 2009=100 & Q & BEA & 1 & 2 & 0 & 1 \\\hline 20 & LXNFU & Unit Labor Cost$^\dag$ & 2009=100 & Q & BLS & 1 & 1 & 1 & 1 \\ 21 & LXNFR & Real Compensation Per Hour$^\dag$ & 2009=100 & Q & BLS & 1 & 1 & 1 & 1 \\ 22 & LXNFC & Compensation Per Hour$^\dag$ & 2009=100 & Q & BLS & 1 & 1 & 1 & 1 \\ 23 & LXNFH & Hours of All Persons$^\dag$ & 2009=100 & Q & BLS & 1 & 1 & 1 & 0 \\ 24 & LXNFA & Output Per Hour of All Persons$^\dag$ & 2009=100 & Q & BLS & 1 & 1 & 1 & 0 \\ 25 & LXMU & Unit Labor Cost$^\ddag$ & 2009=100 & Q & BLS & 1 & 1 & 1 & 1 \\ 26 & LXMR & Real Compensation Per Hour$^\ddag$ & 2009=100 & Q & BLS & 1 & 1 & 1 & 1 \\ 27 & LXMC & Compensation Per Hour$^\ddag$ & 2009=100 & Q & BLS & 1 & 1 & 1 & 0 \\ 28 & LXMH & Hours of All Persons$^\ddag$ & 2009=100 & Q & BLS & 1 & 1 & 1 & 0 \\ 29 & LXMA & 
Output Per Hour of All Persons$^\ddag$ & 2009=100 & Q & BLS & 1 & 1 & 1 & 1 \\ \hline 30 & IP & Industrial Production (IP) Index & 2012=100 & M & FRB & 1 & 1 & 1 & 0 \\ 31 & IP521 & IP: Business Equipment & 2012=100 & M & FRB & 1 & 1 & 1 & 1 \\ 32 & IP511 & IP: Durable Consumer Goods & 2012=100 & M & FRB & 1 & 1 & 1 & 0 \\ 33 & IP531 & IP: Durable Materials & 2012=100 & M & FRB & 1 & 1 & 1 & 1 \\ 34 & IP512 & IP: Nondurable Consumer Goods & 2012=100 & M & FRB & 1 & 1 & 1 & 0 \\ 35 & IP532 & IP: Nondurable Materials & 2012=100 & M & FRB & 1 & 1 & 1 & 0 \\\hline \end{tabular} \begin{tabular}{p{\textwidth}} \scriptsize $^\ast$ Consumption Expenditures \& Gross Investment\\ \scriptsize $^\star$ Chain-type Price Index\\ \scriptsize $^\dag$ Nonfarm Business Sector\\ \scriptsize $^\ddag$ Manufacturing Sector \end{tabular} \begin{tabular}{ p{.015\textwidth} p{.1\textwidth} p{.45\textwidth} p{.1\textwidth} p{.02\textwidth} p{.04\textwidth} P{.025\textwidth} p{.01\textwidth} p{.01\textwidth} p{.01\textwidth} }\hline\hline \bf N & \bf Series ID & \bf Definition & \bf Unit & \bf F & \bf S & \bf SA & \bf T & \bf D & $\bm \xi$ \\\hline 36 & PCU & CPI-U: All Items & 82-84=100 & M & BLS & 1 & 2 & 0 & 0 \\ 37 & PCUSE & CPI-U: Energy & 82-84=100 & M & BLS & 1 & 2 & 0 & 0 \\ 38 & PCUSLFE & CPI-U: All Items Less Food and Energy & 82-84=100 & M & BLS & 1 & 2 & 0 & 0 \\ 39 & PCUFO & CPI-U: Food & 82-84=100 & M & BLS & 1 & 2 & 0 & 0 \\ \hline 40 & JCBM & PCE: Chain Price Index & 2009=100 & M & BEA & 1 & 2 & 0 & 0 \\ 41 & JCEBM & PCE: Energy Goods \& Services--price index & 2009=100 & M & BEA & 1 & 2 & 0 & 0 \\ 42 & JCNFOM & PCE: Food \& Beverages--price index$^\ast$ & 2009=100 & M & BEA & 1 & 2 & 0 & 0 \\ 43 & JCXFEBM & PCE less Food \& Energy--price index & 2009=100 & M & BEA & 1 & 2 & 0 & 0 \\ 44 & JCSBM & PCE: Services--price index & 2009=100 & M & BEA & 1 & 2 & 0 & 0 \\ 45 & JCDBM & PCE: Durable Goods--price index & 2009=100 & M & BEA & 1 & 2 & 0 & 0 \\ 46 & JCNBM & PCE: Nondurable 
Goods--price index & 2009=100 & M & BEA & 1 & 2 & 0 & 0 \\\hline 47 & PC1 & PPI: Intermediate Demand Processed Goods & 1982=100 & M & BLS & 1 & 2 & 0 & 0 \\ 48 & P05 & PPI: Fuels and Related Products and Power & 1982=100 & M & BLS & 0 & 2 & 0 & 0 \\ 49 & SP3000 & PPI: Finished Goods & 1982=100 & M & BLS & 1 & 2 & 0 & 0 \\ 50 & PIN & PPI: Industrial Commodities & 1982=100 & M & BLS & 0 & 2 & 0 & 0 \\ 51 & PA & PPI: All Commodities & 1982=100 & M & BLS & 0 & 2 & 0 & 0 \\ 52 & PC1 & PPI: Intermediate Demand Processed Goods & 1982=100 & M & BLS & 1 & 2 & 0 & 0 \\ \hline 53 & FMC & Money Stock: Currency & Bil. of \$ & M & FRB & 1 & 2 & 0 & 0 \\ 54 & FM1 & Money Stock: M1 & Bil. of \$ & M & FRB & 1 & 2 & 0 & 1 \\ 55 & FM2 & Money Stock: M2 & Bil. of \$ & M & FRB & 1 & 2 & 0 & 0 \\ \hline 56 & FABWC & C \& I Loans in Bank Credit:$^\dag$ & Bil. of \$ & M & FRB & 1 & 1 & 1 & 1 \\ 57 & FABWQ & Consumer Loans in Bank Credit:$^\dag$ & Bil. of \$ & M & FRB & 1 & 1 & 1 & 1 \\ 58 & FAB & Bank Credit:$^\dag$ & Bil. of \$ & M & FRB & 1 & 1 & 1 & 1 \\ 59 & FABW & Loans \& Leases in Bank Credit:$^\dag$ & Bil. of \$ & M & FRB & 1 & 1 & 1 & 1 \\ 60 & FABYO & Other Securities in Bank Credit:$^\dag$ & Bil. of \$ & M & FRB & 1 & 1 & 1 & 1 \\ 61 & FABWR & Real Estate Loans in Bank Credit:$^\dag$ & Bil. of \$ & M & FRB & 1 & 1 & 1 & 0 \\ 62 & FOT & Consumer Credit Outstanding & Bil. 
of \$ & M & FRB & 1 & 1 & 1 & 0 \\\hline 63 & HSTMW & Housing Starts: Midwest & 1000--U & M & CB & 1 & 1 & 0 & 0 \\ 64 & HSTNE & Housing Starts: Northeast & 1000--U & M & CB & 1 & 1 & 0 & 0 \\ 65 & HSTS & Housing Starts: South & 1000--U & M & CB & 1 & 1 & 0 & 0 \\ 66 & HSTGW & Housing Starts: West & 1000--U & M & CB & 1 & 1 & 0 & 0 \\ 67 & HPT & Building Permits$^\star$ & 1000--U & M & CB & 1 & 1 & 0 & 0 \\\hline 68 & FBPR & Bank Prime Loan Rate & Percent & M & FRB & 0 & 0 & 0 & 0 \\ 69 & FFED & Federal Funds [effective] Rate & Percent & M & FRB & 0 & 0 & 0 & 0 \\ 70 & FCM1 & 1-Year Treasury Bill Yield$^\ddag$ & Percent & M & FRB & 0 & 0 & 0 & 0 \\ 71 & FCM10 & 10-Year Treasury Note Yield$^\ddag$ & Percent & M & FRB & 0 & 0 & 0 & 0 \\\hline \end{tabular} \begin{tabular}{p{\textwidth}} \scriptsize $^\ast$ Purchased for Off-Premises Consumption\\ \scriptsize $^\dag$ All Commercial Banks\\ \scriptsize $^\star$ New Private Housing Units Authorized by Building Permits\\ \scriptsize $^\ddag$ At Constant Maturity \end{tabular} \begin{tabular}{ p{.015\textwidth} p{.1\textwidth} p{.45\textwidth} p{.1\textwidth} p{.02\textwidth} p{.04\textwidth} P{.025\textwidth} p{.01\textwidth} p{.01\textwidth} p{.01\textwidth} }\hline\hline \bf N & \bf Series ID & \bf Definition & \bf Unit & \bf F & \bf S & \bf SA & \bf T & \bf D & $\bm \xi$ \\\hline 72 & LP & Civilian Participation Rate: 16 yr + & Percent & M & BLS & 0 & 0 & 0 & 1 \\ 73 & LQ & Civilian Employment/Population Ratio: 16 yr + & Percent & M & BLS & 0 & 0 & 0 & 1 \\ 74 & LE & Civilian Employment: Sixteen Years \& Over & 1000--P & M & BLS & 0 & 1 & 1 & 0 \\ 75 & LR & Civilian Unemployment Rate: 16 yr + & Percent & M & BLS & 0 & 0 & 0 & 0 \\ 76 & LU0 & Civilians Unemployed for Less Than 5 Weeks & 1000--P & M & BLS & 0 & 1 & 0 & 0 \\ 77 & LU5 & Civilians Unemployed for 5-14 Weeks & 1000--P & M & BLS & 0 & 1 & 0 & 1 \\ 78 & LU15 & Civilians Unemployed for 15-26 Weeks & 1000--P & M & BLS & 0 & 1 & 0 & 1 \\ 79 & LUT27 & Civilians Unemployed 
for 27 Weeks and Over & 1000--P & M & BLS & 0 & 1 & 0 & 1 \\\hline 80 & LUAD & Average [Mean] Duration of Unemployment & Weeks & M & BLS & 0 & 1 & 0 & 0 \\ 81 & LANAGRA & All Employees: Total Nonfarm & 1000--P & M & BLS & 0 & 1 & 1 & 1 \\ 82 & LAPRIVA & All Employees: Total Private Industries & 1000--P & M & BLS & 0 & 1 & 1 & 0 \\ 83 & LANTRMA & All Employees: Mining and Logging & 1000--P & M & BLS & 0 & 1 & 1 & 1 \\ 84 & LACONSA & All Employees: Construction & 1000--P & M & BLS & 0 & 1 & 1 & 1 \\ 85 & LAMANUA & All Employees: Manufacturing & 1000--P & M & BLS & 0 & 1 & 1 & 0 \\ 86 & LATTULA & All Employees: Trade, Transportation \& Utilities & 1000--P & M & BLS & 0 & 1 & 1 & 1 \\ 87 & LAINFOA & All Employees: Information Services & 1000--P & M & BLS & 0 & 1 & 1 & 1 \\ 88 & LAFIREA & All Employees: Financial Activities & 1000--P & M & BLS & 0 & 1 & 1 & 1 \\ 89 & LAPBSVA & All Employees: Professional \& Business Services & 1000--P & M & BLS & 0 & 1 & 1 & 1 \\ 90 & LAEDUHA & All Employees: Education \& Health Services & 1000--P & M & BLS & 0 & 1 & 1 & 1 \\ 91 & LALEIHA & All Employees: Leisure \& Hospitality & 1000--P & M & BLS & 0 & 1 & 1 & 1 \\ 92 & LASRVOA & All Employees: Other Services & 1000--P & M & BLS & 0 & 1 & 1 & 1 \\ 93 & LAGOVTA & All Employees: Government & 1000--P & M & BLS & 0 & 1 & 1 & 0 \\ 94 & LAFGOVA & All Employees: Federal Government & 1000--P & M & BLS & 0 & 1 & 1 & 1 \\ 95 & LASGOVA & All Employees: State Government & 1000--P & M & BLS & 0 & 1 & 1 & 0 \\ 96 & LALGOVA & All Employees: Local Government & 1000--P & M & BLS & 0 & 1 & 1 & 0 \\\hline 97 & PETEXA & West Texas Intermediate Spot Price FOB$^\ast$ & \$--B & M & EIA & 0 & 2 & 0 & 0 \\\hline 98 & NAPMNI & ISM Mfg: New Orders Index & Index & M & ISM & 1 & 0 & 0 & 1 \\ 99 & NAPMOI & ISM Mfg: Production Index & Index & M & ISM & 1 & 0 & 0 & 1 \\ 100 & NAPMEI & ISM Mfg: Employment Index & Index & M & ISM & 1 & 0 & 0 & 1 \\ 101 & NAPMVDI & ISM Mfg: Supplier Deliveries Index & Index & M & ISM & 
1 & 0 & 0 & 0 \\ 102 & NAPMII & ISM Mfg: Inventories Index & Index & M & ISM & 1 & 0 & 0 & 0 \\\hline 103 & SP500 & Standard \& Poor's 500 Stock Price Index & Index & D & WSJ & 0 & 1 & 1 & 0 \\\hline \end{tabular} \begin{tabular}{p{\textwidth}} \scriptsize $^\ast$ Cushing, Oklahoma \\ \end{tabular} \begin{tabular}{ p{.125\textwidth} p{.4\textwidth} p{.1\textwidth} p{.02\textwidth} p{.1\textwidth} p{.02\textwidth} p{.01\textwidth} }\hline\hline \bf Series ID & \bf Definition & \bf Unit & \bf F & \bf Source \\\hline GDPPOTHQ & Real Potential Gross Domestic Product & BoC 2009\$ & Q & CBO \\ NAIRUQ & Natural Rate of Unemployment & percent & Q & CBO \\ GDPPLUS & US GDPplus & percent & Q & FRBPHIL \\\hline \end{tabular} \end{document}
\begin{document} \title{Placeholder Substructures III:~~ A Bit-String-Driven ``Recipe Theory'' for Infinite-Dimensional Zero-Divisor Spaces} \makeatother \begin{abstract} Zero-divisors (ZDs) derived by the Cayley-Dickson Process (CDP) from $N$-dimensional hypercomplex numbers ($N$ a power of $2$, and at least $4$) can represent singularities and, as $N \rightarrow \infty$, fractals -- and thereby, scale-free networks. Any integer $> 8$ and not a power of $2$ generates a meta-fractal or \textit{Sky} when it is interpreted as the \textit{strut constant} (S) of an ensemble of octahedral vertex figures called \textit{Box-Kites} (the fundamental ZD building blocks). Remarkably simple bit-manipulation rules or \textit{recipes} provide tools for transforming one fractal genus into others within the context of Wolfram's Class 4 complexity. \end{abstract} \section{The Argument So Far} In Parts I [1] and II [2], the basic facts concerning zero-divisors (ZDs) as they arise in the hypercomplex context were presented and proved. ``Basic,'' in the context of this monograph, means seven things. First, they emerged as a side-effect of applying CDP a minimum of 4 times to the Real Number Line, doubling dimension to the Complex Plane, Quaternion 4-Space, Octonion 8-Space, and 16-D Sedenions. With each such doubling, new properties were found: as the price of sacrificing counting order, the Imaginaries made a general theory of equations and solution-spaces possible; the non-commutative nature of Quaternions mapped onto the realities of the manner in which forces deploy in the real world, and led to vector calculus; the non-associative nature of Octonions, meanwhile, has only come into its own with the need for necessarily unobservable quantities (because of conformal field-theoretical constraints) in String Theory.
In the Sedenions, however, the most basic assumptions of all -- well-defined notions of field and algebraic norm (and, therefore, measurement) -- break down, as the phenomena correlated with their absence, zero-divisors, appear onstage (never to leave it for all higher CDP dimension-doublings). Second thing: ZDs require at least two differently-indexed imaginary units to be defined, the index being an integer larger than 0 (the CDP index of the Real Unit) and less than $2^{N}$ for a given CDP-generated collection of $2^{N}$-ions. In ``pure CDP,'' the enormous number of alternative labeling schemes possible in any given $2^{N}$-ion level is drastically reduced by assuming that units with such indices interact by XOR-ing: the index of the product of any two is the XOR of their indices. Signing is trickier; but, when CDP is reduced to a 2-rule construction kit, it becomes easy: for index $u < {\bf G}$, ${\bf G}$ the Generator of the $2^{N}$-ions (i.e., the power of 2 immediately larger than the highest index of the predecessor $2^{N-1}$-ions), Rule 1 says $i_{u} \cdot i_{\bf G} = + i_{(u + {\bf G})}$. Rule 2 says: take an associative triplet $(a, b, c)$, assumed written in CPO (short for ``cyclically positive order'': to wit, $a \cdot b = +c$, $b \cdot c = +a$, and $c \cdot a = + b$). Consider, for instance, any $(u, {\bf G}, {\bf G} + u)$ index set. Then three more such associative triplets (henceforth, \textit{trips}) can be generated by adding ${\bf G}$ to two of the three, then switching their resultants' places in the CPO scheme. Hence, starting with the Quaternions' $(1, 2, 3)$ (which we'll call a \textit{Rule 0} trip, as it's inherited from a prior level of CDP induction), Rule 1 gives us the trips $(1, 4, 5)$, $(2, 4, 6)$, and $(3, 4, 7)$, while Rule 2 yields the remaining 3 trips defining the Octonions: $(1, 7, 6)$, $(2, 5, 7)$, and $(3, 6, 5)$.
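Because indices multiply by XOR, Rules 0 through 2 are easy to check mechanically. Below is a minimal Python sketch (the function names are our own, not the monograph's notation) that regenerates the seven Octonion trips from the Quaternions' Rule 0 trip $(1, 2, 3)$ with ${\bf G} = 4$, and confirms that each trip's indices XOR to one another:

```python
def rule1(u, G):
    """Rule 1: index u < G yields the CPO trip (u, G, G + u)."""
    return (u, G, u + G)

def rule2(trip, G):
    """Rule 2: add G to two members of a CPO trip and swap their places."""
    out = []
    for i, j in ((0, 1), (0, 2), (1, 2)):
        t = list(trip)
        t[i], t[j] = trip[j] + G, trip[i] + G
        out.append(tuple(t))
    return out

def cpo(trip):
    """Rotate a trip so its smallest index leads (CPO normal form)."""
    k = trip.index(min(trip))
    return trip[k:] + trip[:k]

G = 4                         # the Octonion Generator
rule0 = (1, 2, 3)             # the trip inherited from the Quaternions
octonion_trips = [rule0] \
    + [cpo(rule1(u, G)) for u in rule0] \
    + [cpo(t) for t in rule2(rule0, G)]

# the index of a product is the XOR of the factors' indices
assert all(a ^ b == c for (a, b, c) in octonion_trips)
assert len(octonion_trips) == 7
```

Rotating each Rule 2 result into CPO normal form (smallest index first) makes the output directly comparable with the trips listed above.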
Any ZD in a given level of $2^{N}$-ions will then have units with one index $< {\bf G}$, written in lowercase, and the other index $> {\bf G}$, written in uppercase. Such pairs, alternately called ``dyads'' or ``Assessors,'' saturate the diagonal lines of their planes, which diagonals never mutually zero-divide each other (or make \textit{DMZs}, for ``divisors (or dyads) making zero''), but only make DMZs with other such diagonals, in other such Assessors. (This is, of course, the opposite situation from the projection operators of quantum mechanics, which are diagonals in the planes formed by Reals and dimensions spanned by Pauli spin operators contained within the 4-space created by the Cartesian product of two standard imaginaries.) Third thing: Such ZDs are not the only ones possible in CDP spaces; but they define the ``primitive'' variety from which ZD spaces saturating more than 1-D regions can be articulated. A not quite complete catalog of these can be found in our first monograph on the theme [3]; a critical kind which was overlooked there, involving the Reals (and hence providing the backdrop from which to see the projection-operator kind as a degenerate type), was first discussed more recently [4]. (Ironically, these latter are the easiest sorts of composites to derive of any: place the two diagonals of a DMZ pairing with differing internal signing on axes of the same plane, and consider the diagonals \textit{they} make with each other!) All the primitive ZDs in the Sedenions can be collected on the vertices of one of 7 copies of an Octahedron in the \textit{Box-Kite} representation, each of whose 12 edges indicates a two-way ``DMZ pathway,'' evenly divided between 2 varieties. For any vertex V, and $k$ any real scalar, indicate the diagonals this way: $\verb|(V,/)| = k \cdot (i_{v} + i_{V})$, while $\verb|(V, \)| = k \cdot (i_{v} - i_{V})$.
6 edges on a Box-Kite will always have \textit{negative edge-sign} (with \textit{unmarked} ET cell entries: see the ``sixth thing''). For vertices M and N, exactly two DMZs run along the edge joining them, written thus: \begin{center} $\verb|(M,/)| \cdot \verb|(N, \)|$ $= \verb|(M, \)|$ $\cdot$ $\verb|(N,/)|$ $ = 0$ \end{center} The other 6 all have \textit{positive edge-sign}, the diagonals of their two DMZs having the same slope (and \textit{marked} -- with leading dashes -- ET cell entries): \begin{center} $\verb|(Z,/)| \cdot \verb|(V, /)| = $ $\verb|(Z, \)| \cdot \verb|(V,\)| = 0$ \end{center} Fourth thing: The edges always cluster similarly, with two opposite faces among the 8 triangles on the Box-Kite being spanned by 3 negative edges (conventionally painted red in color renderings), with all other edges being positive (painted blue). One of the red triangles has its vertices' 3 low-index units forming a trip; writing their vertex labels conventionally as A, B, C, we find there are in fact always 4 such trips cycling among them: $(a, b, c)$, the \textit{L-trip}; and the three \textit{U-trips} obtained by replacing all but one of the lowercase labels in the L-trip with uppercase: $(a, B, C)$; $(A, b, C)$; $(A, B, c)$. Such a 4-trip structure is called a \textit{Sail}, and a Box-Kite has 4 of them: the \textit{Zigzag}, with all negative edges, and the 3 \textit{Trefoils}, each containing two positive edges extending from one of the Zigzag vertices to the two vertices opposite its \textit{Sailing partners}. These opposite vertices are always joined by one of the 3 negative edges comprising the Vent, which is the Zigzag's opposite face. Again by convention, the vertices opposite A, B, C are written F, E, D in that order; hence, the Trefoil Sails are written $(A, D, E)$; $(F, D, B)$; and $(F, C, E)$, ordered so that their lowercase renderings are equivalent to their CPO L-trips.
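Index-wise (signs aside), the three U-trips of a Sail come from XOR-ing two L-trip members with ${\bf X} = {\bf G} + {\bf S}$, since an Assessor's U-index is its L-index XOR-ed with ${\bf X}$ (the Third Vizier of the next ``thing,'' taken unsigned). A small Python sketch, taking the Sedenion values ${\bf G} = 8$, ${\bf S} = 1$ and the L-trip $(2, 4, 6)$ as an assumed example:

```python
def sail_trips(l_trip, X):
    """The 4 trips of a Sail: the L-trip plus the three U-trips obtained by
    promoting all but one member to its U-index (L-index XOR X)."""
    a, b, c = l_trip
    A, B, C = a ^ X, b ^ X, c ^ X
    return [(a, b, c), (a, B, C), (A, b, C), (A, B, c)]

G, S = 8, 1                   # Sedenions, strut constant 1 (assumed example)
X = G ^ S                     # X = G + S, which is also G XOR S here
for p, q, r in sail_trips((2, 4, 6), X):
    assert p ^ q == r         # each Sail trip is an index trip
```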
The graphical convention is to show the Sails as filled in, while the other 4 faces, like the Vent, are left empty: they show ``where the wind blows'' that keeps the Box-Kite aloft. A real-world Box-Kite, meanwhile, would be held together by 3 dowels (of wood or plastic, say) spanning the joins between the only vertices left unconnected in our Octahedral rendering: the \textit{Struts} linking the \textit{strut-opposite} vertices (A, F); (B, E); (C, D). Fifth thing: In the Sedenions, the 7 isomorphic Box-Kites are differentiated by which Octonion index is missing from the vertices, and this index is designated by the letter ${\bf S}$, for ``signature,'' ``suppressed index,'' or \textit{strut constant}. This last designation derives from the invariant relationship obtaining in a given Box-Kite between ${\bf S}$ and the indices in the Vent and Zigzag termini (V and Z respectively) of any of the 3 Struts, which we call the ``First Vizier'' or VZ1. This is one of 3 rules, involving the three Sedenion indices always missing from a Box-Kite's vertices: ${\bf G}$, ${\bf S}$, and their simple sum ${\bf X}$ (which is also their XOR product, since ${\bf G}$ is always to the left of the left-most bit in ${\bf S}$). The Second Vizier tells us that the L-index of either terminus, together with the U-index of the other, always forms a trip with ${\bf G}$, and it is true as written for all $2^{N}$-ions. The Third shows the relationship between the L- and U-indices of a given Assessor, which always form a trip with ${\bf X}$. Like the First, it is true as written only in the Sedenions, but as an unsigned statement about indices only, it is true universally. (For that reason, references to VZ1 and VZ3 from here on out will be assumed to refer to the \textit{unsigned} versions.)
First derived in the last section of Part I, reprised in the intro of Part II, we write them out now for the third and final time in this monograph: \begin{center} VZ1: $v \cdot z = V \cdot Z = {\bf S}$ VZ2: $Z \cdot v = V \cdot z = {\bf G}$ VZ3: $V \cdot v = z \cdot Z = {\bf X}$. \end{center} Rules 1 and 2, the Three Viziers, plus the standard Octonion labeling scheme derived from the simplest finite projective group, usually written as PSL(2,7), provide the basis of our toolkit. This last becomes powerful due to its capacity for recursive re-use at all levels of CDP generation, not just the Octonions. The simplest way to see this comes from placing the unique Rule 0 trip provided by the Quaternions on the circle joining the 3 sides' midpoints, with the Octonion Generator's index, 4, being placed in the center. Then the 3 lines leading from the Rule 0 trip's (1, 2, 3) midpoints to their opposite angles -- placed conventionally in clockwise order in the midpoints of the left, right, and bottom sides of a triangle whose apex is at 12 o'clock -- are CPO trips forming the Struts, while the 3 sides themselves are the Rule 2 trips. These 3 form the L-index sets of the Trefoil Sails, while the Rule 0 trip provides the same service for the Zigzag. By a process analogized to tugging on a slipcover (Part I) and pushing things into the central zone of hot oil while wok-cooking (Part II), all 7 possible values of {\bf S} in the Sedenions, not just the 4, can be moved into the center while keeping orientations along all 7 lines of the Triangle unchanged. Part II's critical Roundabout Theorem tells us, moreover, that all $2^{N}$-ion ZDs, for all $N > 3$, are contained in Box-Kites as their minimal ensemble size. Hence, by placing the appropriate ${\bf G}$, ${\bf S}$, or ${\bf X}$ in the center of a PSL(2,7) triangle, with a suitable Rule 0 trip's indices populating the circle, any and all \textit{candidate} primitive ZDs can be discovered and situated. 
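Read unsigned, the Three Viziers are pure XOR statements about indices, so they can be spot-checked in a few lines of Python (a hedged sketch; the helper name is ours). Given a Vent terminus's L-index $v$, VZ1 fixes the strut-opposite Zigzag L-index as $z = v$ XOR ${\bf S}$, and VZ3 fixes both U-indices:

```python
def strut_indices(v, G, S):
    """L- and U-indices (v, V, z, Z) of a strut's two termini, built from the
    Vent L-index v via the unsigned Viziers (pure XOR statements)."""
    X = G ^ S                 # = G + S: G's bit lies left of all bits of S
    z = v ^ S                 # unsigned VZ1
    V, Z = v ^ X, z ^ X       # unsigned VZ3
    return v, V, z, Z

G, S = 16, 11                 # Pathions (N = 5), an S with 8 < S < 16
for v in range(1, G):
    if v == S:                # S is the suppressed index
        continue
    v_, V, z, Z = strut_indices(v, G, S)
    assert v_ ^ z == S                         # VZ1
    assert Z ^ v_ == G and V ^ z == G          # VZ2
    assert V ^ v_ == G ^ S and z ^ Z == G ^ S  # VZ3
```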
Sixth thing: The word ``candidate'' in the above is critical; its exploration was the focus of Part II. For, starting with $N = 5$ and hence ${\bf G} = 16$ (which is to say, in the 32-D Pathions), whole Box-Kites can be suppressed (meaning, all 12 edges, and not just the Struts, no longer serve as DMZ pathways). But for all $N$, the full set of candidate Box-Kites is viable when ${\bf S} \leq 8$ or equal to some higher power of 2. For all other ${\bf S}$ values, though, the phenomenon of \textit{carrybit overflow} intervenes -- leading, ultimately, to the ``meta-fractal'' behavior claimed in our abstract. To see this, we need another mode of representation, less tied to 3-D visualizing than the Box-Kite can provide. The answer is a matrix-like method of tabulating the products of candidate ZDs with each other, called \textit{Emanation Tables} or \textit{ETs}. The L-indices of all candidate ZDs are all we need to indicate (the U-indices being forced once ${\bf G}$ is specified); these will saturate the list of allowed indices $< {\bf G}$, save for the value of ${\bf S}$, whose choice, along with that of ${\bf G}$, fixes an ET. Hence, the unique ET for given ${\bf G}$ and ${\bf S}$ will fill a square spreadsheet whose edge has length $2^{N-1} - 2$. Moreover, a cell entry $(r,c)$ is only filled when row and column labels R and C form a DMZ, which can never be the case along an ET's long diagonals: for the diagonal starting in the upper left corner, R xor R = 0, and the two diagonals within the same Assessor can never zero-divide each other; for the righthand diagonal, the convention for ordering the labels (ascending counting order from the left and top, with any such label's strut-opposite index immediately being entered in the mirror-opposite positions on the right and bottom) makes R and C strut-opposites, hence also unable to form DMZs.
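The labeling convention and the two forcibly blank diagonals are simple enough to sketch in code. The following Python fragment (a sketch under our own naming, producing only \textit{candidate} cell contents -- whether a cell is actually filled still depends on R and C forming a DMZ) builds the label line for a $2^{N}$-ion ET and blanks the two long diagonals:

```python
def label_line(N, S):
    """Label ordering for a 2^N-ion ET: ascending from the left and top, with
    each label's strut-opposite (its XOR with S) mirrored on the right/bottom."""
    G = 1 << (N - 1)
    todo = {u for u in range(1, G) if u != S}
    line = [0] * len(todo)
    left, right = 0, len(line) - 1
    while todo:
        u = min(todo)
        line[left], line[right] = u, u ^ S
        todo -= {u, u ^ S}
        left, right = left + 1, right - 1
    return line

def candidate_cell(r, c, S):
    """Long-diagonal cells (r = c, or r and c strut-opposite) stay empty;
    any other cell's candidate content is P = r XOR c."""
    return None if r == c or r == (c ^ S) else r ^ c

line = label_line(4, 1)       # Sedenions, S = 1: a 6 x 6 ET
assert line == [2, 4, 6, 7, 5, 3]
assert len(line) == 2 ** (4 - 1) - 2
```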
For the Sedenions, we get a $6 \times 6$ table, 12 of whose cells (those on long diagonals) are empty: the 24 filled cells, then, correspond to the two-way traffic of ``edge-currents'' one imagines flowing between vertices on a Box-Kite's 12 edges. A computational corollary to the Roundabout Theorem, dubbed the \textit{Trip-Count Two-Step}, is of seminal importance. It connects this most basic theorem of ETs to the most basic fact of associative triplets, indicated in the opening pages of Part I, namely: for any $N$, the number $Trip_{N}$ of associative triplets is found, by simple combinatorics, to be $(2^{N} - 1)(2^{N} - 2)/3!$ -- 35 for the Sedenions, 155 for the Pathions, and so on. But, by the Trip-Count Two-Step, we also know that \textit{the maximum number of Box-Kites that can fill a $2^{N}$-ion ET = $Trip_{N-2}$.} For ${\bf S}$ a power of 2, beginning in the Pathions (for ${\bf S} = 2^{5 - 2} = 8$), the Number Hub Theorem says the upper left quadrant of the ET is an unsigned multiplication table of the $2^{N-2}$-ions in question, with the 0's of the long diagonal (indicating Real negative units) replaced by blanks -- a result effectively synonymous with the Trip-Count Two-Step. Seventh thing: We found, as Part II's argument wound down, that the 2 classes of ETs found in the Pathions -- the ``normal'' for ${\bf S} \leq 8$, filled with indices for all 7 possible Box-Kites, and the ``sparse'' so-called Sand Mandalas, showing only 3 Box-Kites when $8 < {\bf S} < 16$ -- were just the beginning of the story. A simple formula involving just the bit-strings of ${\bf s}$ and ${\bf g}$, where the lowercase indicates the values of ${\bf S}$ and ${\bf G}$ modulo ${\bf G}/2$, gave the prototype of our first \textit{recipe}: all and only cells with labels R or C, or content P ( = R xor C ), are filled in the ET. The 4 ``missing Box-Kites'' were those whose L-index trip would have been that of a Sail in the $2^{N-1}$ realm with ${\bf S} = {\bf s}$ and ${\bf G} = {\bf g}$.
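The Trip-Count arithmetic is easy to tabulate, and the combinatorial formula can even be confirmed by brute force over XOR triples. A minimal sketch:

```python
def trip_count(N):
    """Number of associative triplets among the 2^N-ion imaginaries:
    (2^N - 1)(2^N - 2) / 3!"""
    return (2 ** N - 1) * (2 ** N - 2) // 6

def max_box_kites(N):
    """Trip-Count Two-Step: at most Trip_{N-2} Box-Kites fill a 2^N-ion ET."""
    return trip_count(N - 2)

assert trip_count(4) == 35 and trip_count(5) == 155
assert max_box_kites(4) == 1      # one Box-Kite per Sedenion ET
assert max_box_kites(5) == 7      # seven fill a "normal" Pathion ET

# brute force: each unordered index trip {a, b, c} with a XOR b = c is
# counted exactly once via its largest member (the XOR of the other two)
assert all(
    sum(1 for a in range(1, 2 ** N) for b in range(a + 1, 2 ** N)
        if a ^ b > b) == trip_count(N)
    for N in (2, 3, 4))
```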
The sequence of 7 ETs, viewed in ${\bf S}$-increasing succession, had an obvious visual logic leading to their being dubbed a \textit{flip-book}. These 7 were obviously indistinguishable from many vantages, hence formed a \textit{spectrographic band}. There were 3 distinct such bands, though, each typified by a Box-Kite count common to all band-members, demonstrable in the ETs for the 64-D Chingons. Each band contained ${\bf S}$ values bracketed by multiples of 8 (either less than or equal to the higher, depending upon whether the latter was or wasn't a power of 2). These were claimed to underwrite behaviors in all higher $2^{N}$-ion ETs, according to 3 rough patterns in need of algorithmic refining in this Part III. Corresponding to the first unfilled band, with ETs always missing $4^{N-4}$ of their candidate Box-Kites for $N > 4$, we spoke of \textit{recursivity}, meaning the ETs for constant ${\bf S}$ and increasing $N$ would all obey the same recipe, properly abstracted from that just cited above, empirically found among the Pathions for ${\bf S} > 8$. The second and third behaviors, dubbed, for ${\bf S}$ ascending, \textit{(s,g)-modularity} and \textit{hide/fill involution} respectively, make their first showings in the Chingons, in the bands where $16 < {\bf S} \leq 24$, and then where $24 < {\bf S} < 32$. In all such cases, we are concerned with seeing the ``period-doubling'' inherent in CDP and Chaotic attractors both become manifest in a repeated doubling of ET edge-size, leading to the fixed-${\bf S}$, $N$ increasing analog of the fixed-$N, {\bf S}$ increasing flip-books first observed in the Pathions, which we call \textit{balloon-rides}. Specifying and proving their workings, and combining all 3 of the above-designated behaviors into the ``fundamental theorem of zero-division algebra,'' will be our goals in this final Part III. 
Anyone who has read this far is encouraged to bring up the graphical complement to this monograph, the 78-slide Powerpoint show presented at NKS 2006 [5], in another window. (Slides will be referenced by number in what follows.) \section{$8 < {\bf S} < 16, N \rightarrow \infty$: Recursive Balloon Rides in the Whorfian Sky} We know that any ET for the $2^{N}$-ions is a square whose edge is $2^{N-1} - 2$ cells. How, then, can any simply recursive rule govern exporting the structure of one such box to analogous boxes for progressively higher $N$? The answer: \textit{include the label lines} -- not just the column and row headers running across the top and left margins, but their strut-opposite values, placed along the bottom and right margins, which are mirror-reversed copies of the label-lines \textit{(LLs)} proper to which they are parallel. This increases the edge-size of the ET box to $2^{N-1}$. \noindent {\small Theorem 11.} For any fixed ${\bf S} > 8$ and not a power of $2$, the row and column indices comprising the Label Lines (LLs) run along the left and top borders of the $2^{N}$-ion ET ``spreadsheet'' for that ${\bf S}$. Treat them as included in the spreadsheet, \textit{as labels}, by adding a row and column to the given square of cells, of edge $2^{N-1} - 2$, which comprises the ET proper. Then add another row and column to include the strut-opposite values of these labels' indices in ``mirror LLs,'' running along the opposite edges of a now $2^{N-1}$-edge-length box, whose four corner cells, like the long diagonals they extend, are empty. When, for such a fixed ${\bf S}$, the ET for the $2^{N+1}$-ions is produced, the values of the 4 sets of LL indices, bounding the contained $2^{N}$-ion ET, correspond, \textit{as cell values}, to actual DMZ P-values in the bigger ET, residing in the rows and columns labeled by the contained ET's ${\bf G}$ and ${\bf X}$ (the containing ET's $g$ and $g + {\bf S}$).
Moreover, all cells contained in the box they bound in the containing ET have P-values (else blanks) exactly corresponding to -- and \textit{including edge-sign markings} of -- the positionally identical cells in the $2^{N}$-ion ET: those, that is, for which the LLs act as labels. \noindent \textit{Proof.} For all strut constants of interest, ${\bf S} < g ( = {\bf G/2})$; hence, all labels up to and including that immediately adjoining its own strut constant (that is, the first half of them) will have indices monotonically increasing, up to and at least including the midline bound, from $1$ to $g - 1$. When $N$ is incremented by $1$, the row and column midlines separating adjoining strut-opposites will be cut and pulled apart, making room for the labels for the $2^{N+1}$-ion ET for same ${\bf S}$, which middle range of label indices will also monotonically increase, this time from the current $2^N$-ion generation's $g$ (and prior generation's ${\bf G}$), up to and at least including its own midline bound, which will be $g$ plus the number of cells in the LL inherited from the prior generation, or $g/2 - 1$. The LLs are therefore contained in the rows and columns headed by $g$ and its strut opposite, $g + {\bf S}$. To say that the immediately prior CDP generation's ET labels are converted to the current generation's P-values in the just-specified rows and columns is equivalent to asserting the truth of the following calculation: \begin{center} $(g + u) + (sg) \cdot ({\bf G} + g + u_{opp})$ \underline{$\;\;\;\;\;\;g \;\;\;\;\; + \;\;\;\;\; ({\bf G} + g + {\bf S})\;\;\;\;\;\;$} $- (vz)\cdot ({\bf G} + u_{opp}) \;\;\; + (vz) \cdot (sg) \cdot u$ \underline{$ + u \;\;\;\;\; - \;\;\;\;\; (sg) \cdot ({\bf G} + u_{opp})$} $0$ only if $vz = (-sg)$ \end{center} Here, we use two binary variables, the inner-sign-setting $sg$, and the Vent-or-Zigzag test, based on the First Vizier. Using the two in tandem lets us handle the normal and ``Type II'' box-kites in the same proof. 
Recall (and see Appendix B of Part II for a quick refresher) that while ``Type I'' is the only type we find in the Sedenions, a second variety emerges in the Pathions, indistinguishable from Type I in most contexts of interest to us here: the orientation of 2 of the 3 struts will be reversed (which is why VZ1 and VZ3 are only true generally when unsigned). For a Type I, since ${\bf S} < g$, we know by Rule 1 that we have the trip $({\bf S}, g, g + {\bf S})$; hence, $g$ -- for all $2^{N}$-ions beyond the Pathions, where the Sand Mandalas' $g = 8$ is the L-index of the Zigzag B Assessor -- must be a Vent (and its strut-opposite, $g + {\bf S}$, a Zigzag). For a Type II, however, this is necessarily so only for 1 of the 3 struts -- which means, per the equation above, that $sg$ must be reversed to obtain the same result. Said another way, we are free to assume either signing of $vz$ means $+1$, so the ``only if'' qualifying the zero result is informative. It is $u$ and its relationship to $g + u$ that is of interest here, and this formulation makes it easier to see that the products hold for arbitrary LL indices $u$ \textit{or} their strut-opposites. But for this, the term-by-term computations should seem routine: the left bottom is the Rule 1 outcome of $(u, g, g+u)$: obviously, any $u$ index must be less than $g$. To its right, we use the trip $(u_{opp}, g, g + u_{opp}) \rightarrow ({\bf G} + g + u_{opp}, g, {\bf G} + u_{opp})$, whose CPO order is opposite that of the multiplication. For the top left, we use $(u, {\bf S}, u_{opp})$ as limned above, then augment by $g$, then ${\bf G}$, leaving $u_{opp}$ unaffected in the first augmenting, and $g + u$ in the second. Finally, the top right (ignoring $sg$ and $vz$ momentarily) is obtained this way: $(u,{\bf S}, u_{opp}) \rightarrow (u, g + u_{opp}, g + {\bf S}) \rightarrow (u, {\bf G} + g + {\bf S}, {\bf G} + g + u_{opp})$; ergo, $+u$.
Note that we cannot eke out any information about edge-sign marks from this setup: since labels, as such, have no marks, we have nothing to go on -- unlike all other cells which our recursive operations will work on. Indeed, the exact algorithmic determination of edge-sign marks for labels is not so trivial: as one iterates through higher $N$ values, some segments of LL indexing will display reversals of marks found in the ascending or descending left midline column, while other segments will show them unchanged -- with key values at the beginnings and ends of such octaves (multiples of $8$, and sums of such multiples with ${\bf S}\; mod \; 8$) sometimes being reversed or kept the same irrespective of the behavior of the terms they bound. Fortunately, such behaviors are of no real concern here -- but they are, nevertheless, worth pointing out, given the easy predictability of other edge-sign marks in our recursion operations. Now for the ET box within the labels: if all values (including edge-sign marks) remain unchanged as we move from the $2^{N}$-ion ET to that for the $2^{N+1}$-ions, then one of 3 situations must obtain: the inner-box cells have labels $u, v$ which belong to some Zigzag L-trip $(u, v, w)$; or, on the contrary, they correspond to Vent L-indices -- the first two terms in the CPO triplet $(w_{opp}, v_{opp}, u)$, for instance; else, finally, one term is a Vent, the other a Zigzag (so that inner-signs of their multiplied dyads are both positive): we will write them, in CPO order, $v_{opp}$ and $u$, with third trip member $w_{opp}$. Clearly, we want all the products in the containing ET to indicate DMZs only if the inner ET's cells do similarly. This is easily arranged: for the containing ET's cells have indices identical to those of the contained ET's, save for the appending of $g$ to both (and ditto for the U-indices). 
\textit{Case 1:} If $(u, v, w)$ form a Zigzag L-index set, then so do $(g + v, g + u, w)$, so markings remain unchanged; and if the $(u,v)$ cell entry is blank in the contained, so will be that for $(g + u, g + v)$ in its container. In other words, the following holds: \begin{center} $(g + v) + (sg) \cdot ({\bf G} + g + v_{opp})$ \underline{$\;\;\;(g + u) \;\;\;\; + \;\;\;({\bf G} + g + u_{opp})\;\;\;$} $- ({\bf G} + w_{opp}) \;\;\; - (sg) \cdot w$ \underline{$ - w \;\;\;\;\; - (sg) \cdot ({\bf G} + w_{opp})$} $0$ only if $sg = (-1)$ \end{center} \pagebreak $(g + u) \cdot (g + v) = P:$ $(u,v,w) \rightarrow (g + v, g + u, w)$; hence, $(- w)$. $(g + u) \cdot (sg) \cdot ({\bf G} + g + v_{opp}) = P:$ $(u, w_{opp}, v_{opp})$ $\rightarrow (g + v_{opp}, w_{opp}, g + u)$ $\rightarrow ({\bf G} + w_{opp}, {\bf G} + g + v_{opp}, g + u)$; hence, $(sg) \cdot ( - ({\bf G} + w_{opp}))$. $({\bf G} + g + u_{opp}) \cdot (g + v) = P:$ $(u_{opp}, w_{opp}, v)$ $\rightarrow (g + v, w_{opp}, g + u_{opp})$ $\rightarrow (g + v, {\bf G} + g + u_{opp}, {\bf G} + w_{opp})$; hence, $(- ({\bf G} + w_{opp}))$. $({\bf G} + g + u_{opp}) \cdot ({\bf G} + g + v_{opp}) = P:$ Rule 2 twice to the same two terms yields the same result as the terms in the raw, hence $(- w)$. Clearly, cycling through $(u,v,w)$ to consider $(g + v) \cdot (g + w)$ will give the exactly analogous result, forcing two (hence three) negative inner-signs in the candidate Sail; hence, if we have DMZs at all, we have a Zigzag Sail. \textit{Case 2:} The product of two Vents must have negative edge-sign, and there's no cycling through same-inner-signed products as with the Zigzag, so we'll just write our setup as a one-off, with upper inner-sign explicitly negative, and claim its outcome true. 
\begin{center} $(g + v_{opp}) - ({\bf G} + g + v)$ \underline{$\;(g + w_{opp}) \; + \;({\bf G} + g + w)\;$} $+ ({\bf G} + u_{opp}) \;\;\;\;\; + u$ \underline{$ - u \;\;\;\;\; - ({\bf G} + u_{opp})$} $0$ \end{center} $(g + w_{opp}) \cdot (g + v_{opp}) = P:$ $(w_{opp}, v_{opp}, u)$ $\rightarrow (g + v_{opp}, g + w_{opp}, u)$; hence, $(- u)$. $(g + w_{opp}) \cdot ({\bf G} + g + v) = P:$ $(w_{opp}, v, u_{opp})$ $\rightarrow (g + v, g + w_{opp}, u_{opp})$ $\rightarrow ({\bf G} + u_{opp}, g + w_{opp}, {\bf G} + g + v)$; but inner sign of upper dyad is negative, so $(- ({\bf G} + u_{opp}))$. $({\bf G} + g + w) \cdot (g + v_{opp}) = P:$ $(v_{opp}, u_{opp}, w)$ $\rightarrow (g + w, u_{opp}, g + v_{opp})$ $\rightarrow ({\bf G} + u_{opp}, {\bf G} + g + w, g + v_{opp})$; hence, $(+ ({\bf G} + u_{opp}))$. $({\bf G} + g + w) \cdot ({\bf G} + g + v) = P:$ Rule 2 twice to the same two terms yields the same result as the terms in the raw; but inner sign of upper dyad is negative, so $(+ u)$. \textit{Case 3:} The product of Vent and Zigzag displays the same inner sign in both dyads; hence the following arithmetic holds: \pagebreak \begin{center} $(g + u) + ({\bf G} + g + u_{opp})$ \underline{$\; (g + v_{opp}) \; + \;({\bf G} + g + v)\;$} $- ({\bf G} + w) \;\;\;\;\; + w_{opp}$ \underline{$ - w_{opp} \;\;\;\;\; + ({\bf G} + w)$} $0$ \end{center} The calculations are sufficiently similar to the two prior cases as to make their writing out tedious. It is clear that, in each of our three cases, content and marking of each cell in the contained ET and the overlapping portion of the container ET are identical. $\;\; \blacksquare$ To highlight the rather magical label/content involution that occurs when $N$ is in- or de-cremented, graphical realizations of such nested patterns, as in Slides 60-61, paint LLs (and labels proper) a sky-blue color. The bottom-most ET being overlaid in the central box has $g =$ the maximum high-bit in ${\bf S}$, and is dubbed the \textit{inner skybox}.
The degree of nesting is strictly measured by counting the number of bits $B$ that a given skybox's $g$ is to the left of this strut-constant high-bit. If we partition the inner skybox into quadrants defined by the midlines, and count the number $Q$ of quadrant-sized boxes along one or the other long diagonal, it is obvious that the inner skybox itself has $B = 0$ and $Q = 1$; the nested skyboxes containing it have $Q = 2^{B}$. If recursion of skybox nesting be continued indefinitely -- to the fractal limit, which terminology we will clarify shortly -- the indices contained in filled cells of any skybox can be interpreted in $B$ distinct ways, $B \rightarrow \infty$, as representations of distinct ZDs with differing ${\bf G}$ and, therefore, differing U-indices. By obvious analogy to the theory of Riemann surfaces in complex analysis, each such skybox is a separate ``sheet''; as with even such simple functions as the logarithmic, the number of such sheets is infinite. We could then think of the infinite sequence of skyboxes as so many cross-sections, at constant distances, of a flashlight beam whose intensity (one over the ET's cell count) follows Kepler's inverse square law. Alternatively, we could ignore the sheeting and see things another way. Where we called fixed-$N$, ${\bf S}$ varying sequences of ETs flip-books, we refer to fixed-${\bf S}$, $N$ varying sequences as balloon rides: the image is suggested by David Niven's role as Phileas Fogg in the movie made of Jules Verne's \textit{Around the World in 80 Days}: to ascend higher, David would drop a sandbag over the side of his hot-air balloon's basket; if coming down, he would pull a cord that released some of the balloon's hot air. Each such navigational tactic is easy to envision as a bit-shift, pushing ${\bf G}$ further to the left to cross LLs into a higher skybox, else moving it rightward to descend.
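The balloon-ride bookkeeping reduces to bit-lengths. A hedged Python sketch (our own helper, not the monograph's notation) computes, for fixed ${\bf S}$, the label-line-included edge-size $2^{N-1}$, the nesting degree $B$, and the diagonal quadrant-count $Q = 2^{B}$:

```python
def skybox_stats(N, S):
    """For fixed S: edge of the label-line-included ET box (2^(N-1)), the
    nesting degree B (bits that g = G/2 sits left of S's high bit), and the
    count Q = 2^B of quadrant-sized boxes along a long diagonal."""
    g = 1 << (N - 2)                      # G = 2^(N-1), so g = G/2
    B = g.bit_length() - S.bit_length()   # S's high bit is 2^(S.bit_length()-1)
    return {'edge': 1 << (N - 1), 'B': B, 'Q': 1 << B}

# S = 15: high bit 8, so the Pathion (N = 5) ET is the inner skybox
for N, expected in zip((5, 6, 7), ((0, 1), (1, 2), (2, 4))):
    s = skybox_stats(N, 15)
    assert (s['B'], s['Q']) == expected
```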
Using ${\bf S = 15}$ as the basis of a 3-stage balloon-ride, we see how increasing $N$ from $5$ to $6$ to $7$ approaches the white-space complement of one of the simplest (and least efficient) plane-filling fractals, the Ces\`aro double sweep [6, p. 65]. \begin{figure} \caption{ETs for S=15, N=5,6,7 (nested skyboxes in blue)$\cdots$ and ``fractal limit.''} \end{figure} The graphics were programmatically generated prior to the proving of the theorems we're elaborating: their empirical evidence was what informed (indeed, demanded) the theoretical apparatus. And we are not quite finished with the current task the apparatus requires of us. We need two more theorems to finish the discussion of skybox recursion. For both, suppose some skybox with $B = k$, $k$ any non-negative integer, is nested in one with $B = k + 1$. Divide the former along midlines to frame its four quadrants, then block out the latter skybox into a $4 \times 4$ grid of same-sized window panes, partitioned by the one-cell-thick borders of its own midlines into quadrants, each of which is further subdivided by the outside edges of the 4 one-cell-thick label lines and their extensions to the window's frame. These extended LLs are themselves NSLs, and have $R, C$ values of $g$ and $g + {\bf S}$; for ${\bf S} = 15$, they also adjoin NSLs along their outer edges whose $R, C$ values are multiples of $8$ plus ${\bf S} \; mod \; 8$. These pane-framing pairs of NSLs we will henceforth refer to (as a windowmaker would) as \textit{muntins}. It is easy to calculate that while the inner skybox has but one muntin each among its rows and columns, each further nesting has $2^{B+1} - 1$. But we are getting ahead of ourselves, as we still have two proofs to finish. Let's begin with Four Corners, or \noindent {\small Theorem 12.} The 4 panes in the corners of the 16-paned $B = k + 1$ window are identical in contents and marks to the analogously placed quadrants of the $B = k$ skybox. 
\pagebreak \noindent \textit{Proof.} Invoke the Zero-Padding Lemma with regard to the U-indices, as the labels of the boxes in the corners of the $B = k + 1$ ET are identical to those of the same-sized quadrants in the $B = k$ ET, all labels $\geq$ the latter's $g$ only occurring in the newly inserted region. $\;\; \blacksquare$ \noindent \textit{Remarks.} For $N = 6$, all filled Four Corners cells indicate edges belonging to $3$ Box-Kites, whose edges they in fact exhaust. These $3$, not surprisingly, are the zero-padded versions of the identically L-indexed trio which span the entirety of the $N = 5$ ET. By calculations we'll see shortly, however, the inner skybox, when considered as part of the $N = 6$ ET, has filled cells belonging to all the other 16 Box-Kites, even though the contents of these cells are identical to those in the $N = 5$ ET. As $B$ increases, then, the ``sheets'' covering this same central region must draw upon progressively more extensive networks of interconnected Box-Kites. As we approach the fractal limit -- and ``the Sky is the limit'' -- these networks hence become scale-free. (Corollarily, for $N = 7$, the Four Corners' cells exhaust all the edges of the $N = 6$ ET's 19 Box-Kites, and so on.) Unlike a standard fractal, however, such a Sky merits the prefix ``meta'': for each empty ET cell corresponds to a point in the usual fractal variety; and each pair of filled ET cells, having $(r,c)$ of one $=$ $(c,r)$ of the other, corresponds to diagonal-pairs in Assessor planes, orthogonal to all other such diagonal-pairs belonging to the other cells. Each empty ET cell, in other words, not only corresponds to a point in the usual plane-confined fractal, but belongs to the complement of the filled cells' infinite number of \textit{dimensions} framing the Sky's \textit{meta-}fractal. We've one last thing to prove here.
The French Windows Theorem shows us the way the cell contents of the pairs of panes contained between the $B = k + 1$ skybox's corners are generated from those of the analogous pairings of quadrants in the $B = k$ skybox, by adding $g$ to L-indices. \noindent {\small Theorem 13}. For each half-square array of cells created by one or the other midline (the French windows), each cell in the half-square parallel to that adjoining the midline (one of the two shutters), but itself adjacent to the label-line delimiting the former's bounds, has content equal to $g$ plus that of the cell on the same line orthogonal to the midline, and at the same distance from it, as \textit{it} is from the label-line. All the empty long-diagonal cells then map to $g$ (and are marked), or $g + {\bf S}$ (and are unmarked). Filled cells in extensions of the label-lines bounding each shutter are calculated similarly, but with reversed markings; all other cells in a shutter have the same marks as their French-window counterparts. \noindent \textit{Preamble.} Note that there can be (as we shall see when we speak of \textit{hide/fill involution}) cells left empty for rule-based reasons other than $P \; = \; R \veebar C \; = \; 0 \;| \; {\bf S}$. The shutter-based counterparts of such French-window cells, unlike those of long-diagonal cells, remain empty. \noindent \textit{Proof.} The top and left (bottom and right) shutters are equivalent: one merely switches row for column labels. Top/left and bottom/right shutter-sets are likewise equivalent by the symmetry of strut-opposites. We hence make the case for the left shutter only. 
But for the novelties posed by the initially blank cells and the label lines (with the only real subtleties involving markings), the proof proceeds in a manner very similar to Theorem 11: split into 3 cases, based on whether (1) the L-index trip implied by the $R, C, P$ values is a Zigzag; (2) $u, v$ are both Vents; or, (3) the edge signified by the cell content is the emanation of same-inner-signed dyads (that is, one is a Vent, the other a Zigzag). \textit{Case 1:} Assume $(u, v, w)$ a Zigzag L-trip in the French window's contained skybox; the general product in its shutter is \begin{center} $v \;\; - \;\; ({\bf G} + v_{opp})$ \underline{$\;\;\;(g + u) \;\; + \;\;({\bf G} + g + u_{opp})\;\;\;$} $- ({\bf G} + g + w_{opp}) \;\;\; + (g + w)$ \underline{$ - (g + w) \;\;\; + ({\bf G} + g + w_{opp})$} $0$ \end{center} $(g + u) \cdot v = P:$ $(u,v,w) \rightarrow (g + w, v, g + u)$; hence, $(- (g + w))$. $(g + u) \cdot ({\bf G} + v_{opp}) = P:$ $(u, w_{opp}, v_{opp})$ $\rightarrow (g + w_{opp}, g + u, v_{opp})$ $\rightarrow ({\bf G} + v_{opp}, g + u, {\bf G} + g + w_{opp})$; dyads' opposite inner signs make $({\bf G} + g + w_{opp})$ positive. $({\bf G} + g + u_{opp}) \cdot v = P:$ $(u_{opp}, w_{opp}, v)$ $\rightarrow (g + w_{opp}, g + u_{opp}, v)$ $\rightarrow ({\bf G} + g + u_{opp}, {\bf G} + g + w_{opp}, v)$; hence, $(- ({\bf G} + g + w_{opp}))$. $({\bf G} + g + u_{opp}) \cdot ({\bf G} + v_{opp}) = P:$ $(v_{opp}, u_{opp}, w)$ $\rightarrow$ $(v_{opp}, g + w, g + u_{opp})$ $\rightarrow$ $({\bf G} + g + u_{opp}, g + w, {\bf G} + v_{opp})$; dyads' opposite inner signs make $(g + w)$ positive. \textit{Case 2:} The product of two Vents must have negative edge-sign, hence negative inner sign in top dyad to lower dyad's positive. 
The shutter product thus looks like this: \pagebreak \begin{center} $(u_{opp}) - ({\bf G} + u)$ \underline{$\;(g + v_{opp}) \; + \;({\bf G} + g + v)\;$} $+ ({\bf G} + g + w_{opp}) \;\;\;\;\; + (g + w)$ \underline{$ - (g + w) \;\;\;\;\; - ({\bf G} + g + w_{opp})$} $0$ \end{center} $(g + v_{opp}) \cdot u_{opp} = P:$ $(v_{opp}, u_{opp}, w)$ $\rightarrow (g + w, u_{opp}, g + v_{opp})$; hence, $(- (g + w))$. $(g + v_{opp}) \cdot ({\bf G} + u) = P:$ $(v_{opp}, u, w_{opp})$ $\rightarrow (g + w_{opp}, u, g + v_{opp})$ $\rightarrow ({\bf G} + u, {\bf G} + g + w_{opp}, g + v_{opp})$; but dyads' inner signs are opposite, so $(- ({\bf G} + g + w_{opp}))$. $({\bf G} + g + v) \cdot u_{opp} = P:$ $(u_{opp}, w_{opp}, v)$ $\rightarrow (u_{opp}, g + v, g + w_{opp})$ $\rightarrow (u_{opp}, {\bf G} + g + w_{opp}, {\bf G} + g + v)$; hence, $(+ ({\bf G} + g + w_{opp}))$. $({\bf G} + g + v) \cdot ({\bf G} + u) = P:$ $(u, v, w)$ $\rightarrow$ $(u, g + w, g + v)$ $\rightarrow$ $({\bf G} + g + v, g + w, {\bf G} + u)$; but dyads' inner signs are opposite, so $(+ (g + w))$. \textit{Case 3:} The product of Vent and Zigzag displays same inner sign in both dyads; hence the following arithmetic holds: \begin{center} $(u_{opp}) + ({\bf G} + u)$ \underline{$\; (g + v) \; + \;({\bf G} + g + v_{opp})\;$} $+({\bf G} + g + w) \;\;\;\;\; +( g + w_{opp})$ \underline{$ - (g + w_{opp}) \;\;\;\;\; - ({\bf G} + g + w)$} $0$ \end{center} As with the last case in Theorem 11, we omit the term-by-term calculations for this last case, as they should seem ``much of a muchness'' by this point. What is clear in all three cases is that index values of shutter cells have same markings as their French-window counterparts, at least for all cells which \textit{have} markings in the contained skybox; but, in all cases, indices are augmented by $g$. 
The assignment of marks to the shutter-cells linked to blank cells in French windows is straightforward for Type I box-kites: since any containing skybox must have $g > {\bf S}$, and since $g + s$ has $g$ as its strut opposite, then the First Vizier tells us that any $g$ must be a Vent. But then the $R, C$ indices of the cell containing $g$ must belong to a Trefoil in such a box-kite; hence, one is a Vent, the other a Zigzag, and $g$ must be marked. Only if the $R, C, P$ entry in the ET is necessarily confined to Type II box-kites will this not necessarily be so. But Part II's Appendix B made clear that Type II's are generated by \textit{excluding} $g$ from their L-indices: recall that, in the Pathions, for all ${\bf S} < 8$, all and only Type II box-kites are created by placing one of the Sedenion Zigzag L-trips on the ``Rule 0'' circle of the PSL(2,7) triangle with $8$ in the middle (and hence excluded). This is a box-kite in its own right (one of the 7 ``Atlas'' box-kites with ${\bf S} = 8$); its 3 sides are ``Rule 2'' triplets, and generate Type II box-kites when made into zigzag L-index sets. Conversely, all Pathion box-kites containing an '8' in an L-index (dubbed ``strongboxes'' in Appendix B) are Type I. Whether something peculiar might occur for large $N$ (where there might be multiple powers of 2 playing roles in the same box-kite) is a matter of marginal interest to present concerns, and will be left as an open question for the present. We merely note that, by a similar argument, and with the same restrictions assumed, $g + {\bf S}$ must be a Zigzag L-index, and $R, C$ either both be likewise (hence, $g + {\bf S}$ is unmarked); or, both are Vents in a Trefoil (so $g + {\bf S}$ must be unmarked here too). The last detail -- reversal of label-line markings in their $g$-augmented shutter-cell extensions -- is demonstrated as follows, with the same caveat concerning Type II box-kites assumed to apply.
Such cells house DMZs (just swap $u$ for $g + u$ in Theorem 11's first setup -- they form a Rule 1 trip -- and compute). The LL extension on top has row-label $g$; that along the bottom, the strut-opposite $g + {\bf S}$. Given trip $(u,v,w)$, the shutter-cell index for $R, C = (g, u)$ corresponds to the French-window index for $R, C = (g, g + u)$. But $(u, g, g+u)$ is a Trefoil, since $g$ is a Vent. So if $u$ is one too, $g + u$ isn't; hence marks are reversed as claimed. $\;\;\blacksquare$ \section{Maximal High-Bit Singletons: (s,g)-Modularity for ${\bf 16 < S \leq 24}$} The Whorfian Sky, having but one high bit in its strut constant, is the simplest possible meta-fractal -- the first of an infinite number of such infinite-dimensional zero-divisor-spanned spaces. We can consider the general case of such singleton high-bit recursiveness in two different, complementary ways. First, we can supplement the just-concluded series of theorems and proofs with a calculational interlude, where we consider the iterative embeddings of the Pathion Sand Mandalas in the infinite cascade of boxes-within-boxes that a Sky oversees. Then, we can generalize what we saw in the Pathions to consider the phenomenology of strut constants with singleton high-bits, which we take to be any bit representing a power $2^{k}$ with exponent $k \geq 3$ if ${\bf S}$ contains low bits (is not a multiple of $8$), else with $k$ strictly greater than $3$. Per our earlier notation, $g = {\bf G/2}$ is the highest such singleton bit possible. We can think of its exponential increments -- equivalent to left-shifts in bit-string terms -- as the side-effects of conjoint zero-padding of $N$ and ${\bf S}$. This will be our second topic in this section. Maintaining our use of ${\bf S = 15}$ as exemplary, we have already seen that NSLs come in quartets: a row and column are each headed by ${\bf S} \; mod \; g$ (henceforth, $s$) and $g$, hence $7$ and $8$ in the Sand Mandalas.
But each recursive embedding of the current skybox in the next creates further quartets. Division down the midlines to insert the indices new to the next CDP generation induces the Sand Mandala's adjoining strut-opposite sets of $s$ and $g$ lines (the pane-framing muntins) to be displaced to the borders of the four corners and shutters, with the new skybox's $g$ and $g + s$ now adjoining the old $s$ and $g$ to form new muntins, on the right and left respectively, while $g + g/2$ (the old ${\bf G} + g$) and its strut opposite form a third muntin along the new midlines. Continuing this recursive nesting of skyboxes generates 1, 3, 7, $\cdots$, $2^{B+1}-1$ row-and-column muntin pairs involving multiples of $8$ and their supplementings by $s$, where (recalling earlier notation) $B = 0$ for the inner skybox, and increments by $1$ with each further nesting. Put another way, we then have a muntin number $\mu = (2^{N-4} - 1)$, or $4\mu$ NSLs in all. The ET for given $N$ has $(2^{N-1}-2)$ cells in each row and column. But NSLs divvy them up into boxes, so that each line is crossed by $2 \mu$ others, with the 0, 2 or 4 cells in their overlap also belonging to diagonals. The number of cells in the overlap-free segments of the lines, or $\omega$, is then just $4 \mu \cdot (2^{N-1} - 2 - 2 \mu ) = 24 \mu ( \mu + 1 )$: an integer number of Box-Kites. For our ${\bf S = 15}$ case, the minimized line shuffling makes this obvious: all boxes are $6 \times 6$, with 2-cell-thick boundaries (the muntins separating the panes), with $\mu$ boundaries, and $( \mu + 1)$ overlap-free cells per each row or column, per each quartet of lines. The contribution from diagonals, or $\delta$, is a little more difficult, but straightforward in our case of interest: 4 sets of $1, 2, 3, \cdots, \mu$ boxes are spanned by moving along \textit{one} empty long diagonal before encountering the \textit{other}, with each box contributing 6, and each overlap zone between adjacent boxes adding 2.
Hence, $\delta = 24 \cdot (2^{N-3} - 1) (2^{N-3} - 2)/6$ -- a formula familiar from associative-triplet counting: it also contributes an integer number of Box-Kites. The one-liner we want, then, is this: \begin{center} {\small $BK_{N,\; 8 < {\bf S} < 16} = (\omega + \delta)/24 = (2^{N-4})(2^{N-4} - 1) \; + \; (2^{N-3} - 1)(2^{N-3} - 2)/6$} \end{center} For $N = 4, 5, 6, 7, 8, 9, 10$, this formula gives $0, 3, 19, 91, 395$, $1643, 6699$. Add $4^{N-4}$ to each -- the immediate side-effect of the offing of all four Rule 0 candidate trips of the Sedenion Box-Kite exploded into the Sand Mandala that begins the recursion -- and one gets ``d\'ej\`a vu all over again'': $1$, $7$, $35$, $155$, $651$, $2667$, $10795$ -- the full set of Box-Kites for ${\bf S \leq 8}$. It would be nice if such numbers showed up in unsuspected places, having nothing to do with ZDs. Such a candidate context does, in fact, present itself, in Ed Pegg's regular MAA column on ``Math Games'' focusing on ``Tournament Dice.'' [7] He asks us, ``What dice make a non-transitive four player game, so that if three dice are chosen, a fourth die in the set beats all three? How many dice are needed for a five player non-transitive game, or more?'' The low solution of 3 explicitly involves PSL(2,7); the next solution of 19 entails calculations that look a lot like those involved in computing row and column headers in ETs. No solutions to the dice-selecting game beyond 19 are known. The above formulae, though, suggest the next should be 91. Here, ZDs have no apparent role save as dummies, like the infinity of complex dimensions in a Fourier-series convergence problem, tossed out the window once the solution is in hand. Can a number-theory fractal, with intrinsically structured cell content (something other, non-meta, fractals lack) be of service in this case -- and, if not in this particular problem, in others like it?
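These counts are quick to reproduce mechanically. The following Python snippet (our own sketch, not part of the original apparatus) implements the one-liner and the $4^{N-4}$ augmentation, recovering both sequences quoted above:

```python
def bk_count(N):
    # Box-Kite count for 8 < S < 16: omega-term plus delta-term, per the one-liner.
    a = 2 ** (N - 4)
    b = 2 ** (N - 3)
    return a * (a - 1) + (b - 1) * (b - 2) // 6

counts = [bk_count(N) for N in range(4, 11)]
assert counts == [0, 3, 19, 91, 395, 1643, 6699]

# Adding 4^(N-4) recovers the full Box-Kite counts for S <= 8:
full = [bk_count(N) + 4 ** (N - 4) for N in range(4, 11)]
assert full == [1, 7, 35, 155, 651, 2667, 10795]
```

The assertions pass as written, confirming that the $\omega$- and $\delta$-terms, taken per 24 cells, really do sum to the tabulated Box-Kite counts.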
Now let's consider the more general situation, where the singleton high-bit can be progressively left-shifted. Reverting to the use of the simplest case as exemplary, use ${\bf S = g + 1 = 9}$ in the Pathions, then do tandem left-shifts to produce this sequence: $N = 6, \; {\bf S = g + 1}$ ${\bf = 17}$; $N = 7$, ${\bf S = }$ ${\bf g + 1 = 33};\; \cdots;\; N = K$, ${\bf S = g + 1} = 2^{K-2} + 1$. A simple rule governs these ratchetings: in all cases, the number of filled cells = $6 \cdot (2^{N-1} - 4)$, since there are two sets of parallel sides which are filled but for long-diagonal intersections, and two sets of $g$ and $1$ entries distributed one per row along orthogonals to the empty long diagonals. Hence, for the series just given, we have cell counts of $72,\; 168,\; \cdots,\; 6 \cdot (2^{N - 1} - 4)$ for $BK_{N,\;S} = 3,\;7,\; \cdots,\;2^{N - 3} - 1$, for $g < {\bf S} < g + 8 = {\bf G}$ in the Pathions, and all $g < {\bf S} \leq g + 8$ in the Chingons, $2^{7}$-ions, and general $2^{N}$-ions, in that order. Algorithmically, the situation is just as easy to see: the splitting of dyads, sending U- and L- indices to strut-opposite Assessors, while incorporating the ${\bf S}$ and ${\bf G}$ of the current CDP generation as strut-opposites in the next, continues. For ${\bf S = 17}$ in the Chingons, there are now $2^{N-3}-1 = 7$, not $3$, Box-Kites sharing the new $g = 16$ (at B) and ${\bf S} \;mod \;g = 1$ (at E) in our running example. The U- indices of the Sand Mandala Assessors for ${\bf S = g + 1 = 9}$ are now L-indices, and so on: every integer $< G$ and $\neq {\bf S}$ gets to be an L-index of one of the $30 (= 2^{N-1} - 2)$ Assessors, as $16$ and ${\bf S} \;mod \; g = 1$ appear in each of the $7$ Box-Kites, with each other eligible integer appearing once only in one of the $7 \cdot 4 = 28$ available L-index slots. 
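The bookkeeping behind the cell counts just given reduces to a one-line identity: since each Box-Kite accounts for $24$ filled ET cells (as the $24\mu(\mu+1)$ computation of the previous section presumed), the filled-cell count $6 \cdot (2^{N-1} - 4)$ and the Box-Kite count $2^{N-3} - 1$ must agree. A quick Python check (ours, purely illustrative):

```python
# Each Box-Kite accounts for 24 filled ET cells, so the filled-cell
# count 6*(2^(N-1) - 4) must equal 24 times the Box-Kite count 2^(N-3) - 1.
for N in range(5, 16):
    assert 6 * (2 ** (N - 1) - 4) == 24 * (2 ** (N - 3) - 1)

# e.g. N = 5 gives 72 cells for 3 Box-Kites; N = 6 gives 168 for 7.
assert 6 * (2 ** 4 - 4) == 72
assert 6 * (2 ** 5 - 4) == 168
```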
As an aside, in all 7 cases, writing the smallest Zigzag L-index at $a$ mandates all the Trefoil trips be ``precessed'' -- a phenomenon also observed in the ${\bf S = 8}$ Pathion case, as tabulated on p. 14 of [8]. For Zigzag L-index set $(2, 16, 18)$, for instance, $(a,d,e)$ $=$ $(2,3,1)$ instead of $(1,2,3)$; $(f,c,e)$ $=$ $(19,18,1)$ not $(1,19,18)$; and $(f,d,b)$ $=$ $(19,3,16)$. But otherwise, there are no surprises: for $N=7$, there are $(2^{7 - 3} - 1) = 15$ Box-Kites, with all $62 ( = 2^{N-1} - 2)$ available cells in the rows and columns linked to labels $g$ and ${\bf S}\; mod \; g$ being filled, and so on. Note that this formulation obtains for any and all ${\bf S > 8}$ where the maximum high-bit (that is, $g$) is included in its bitstring: for, with $g$ at B and ${\bf S} \;mod \; g$ at E, whichever ${\bf R, C}$ label is not one of these suffices to completely determine the remaining Assessor L-indices, so that no other bits in ${\bf S}$ play a role in determining any of them. Meanwhile, cell \textit{contents} ${\bf P}$ containing either $g$ or ${\bf S} \; mod \; g$, but created by XORing of row and column labels equal to neither, are arrayed in off-diagonal pairs, forming disjoint sets parallel or perpendicular to the two empty ones. If we write ${\bf S} \;mod \; g$ with a lower-case $s$, then we could call the rule in play here \textit{(s,g)-modularity}. Using the vertical pipe for logical or, and recalling the special handling required by the 8-bit when ${\bf S}$ is a multiple of 8 (which we signify with the asterisk suffixed to ``mod''), we can shorthand its workings this way: \noindent {\small Theorem 14}. 
For a $2^{N}$-ion inner skybox whose strut constant ${\bf S}$ has a singleton high-bit which is maximal (that is, equal to $g = {\bf G/2} = 2^{N-2}$), the recipe for its filled cells can be condensed thus: \begin{center} ${\bf R\; |\; C |\; P} = g\; |\; {\bf S} \; mod^{*} \; g$ \end{center} Under recursion, the recipe needs to be modified so as to include not just the inner-skybox $g$ and ${\bf S} \; mod^{*} \; g$ (henceforth, simply lowercase $s$), but all integer multiples $k$ of $g$ less than the ${\bf G}$ of the outermost skybox, plus their strut opposites $k \cdot g + s$. \noindent \textit{Proof}. The theorem merely boils down the computational arguments of prior paragraphs in this section, then applies the last section's recursive procedures to them. The first claim of the proof is identical to what we've already seen for Sand Mandalas, with zero-padding injected into the argument. The second claim merely assumes the area quadrupling based on midline splitting, with the side-effects already discussed. No formal proof, then, is called for beyond these points. $\;\;\blacksquare$ \noindent \textit{Remarks}. Using the computations from two paragraphs prior to the theorem's statement, we can readily calculate the box-kite count for any skybox, no matter how deeply nested: recall the formula $6 \cdot (2^{N - 1} - 4)$ for $BK_{N,\;S} = 2^{N - 3} - 1$. It then becomes a straightforward matter to calculate, as well, the limiting ratio of this count to the maximal full count possible for the ET as $N \rightarrow \infty$, with each cell approaching a point in a standard 2-D fractal. Hence, for any ${\bf S}$ with a singleton high-bit in evidence, there exists a Sky containing all recursive redoublings of its inner skybox, and computations like those just considered can further be used to specify fractal dimensions and the like. (Such computations, however, will not concern us.) 
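Theorem 14's recipe is directly executable. The sketch below is our own illustration, under the conventions assumed in the text: row and column labels run over the positive integers below ${\bf G}$ other than ${\bf S}$, a cell is empty when $P = R \veebar C$ is $0$ or ${\bf S}$, and ${\bf S}$ is taken not to be a multiple of $8$ (so the starred-mod handling is sidestepped):

```python
def filled_cells(N, S):
    """Count filled ET cells via Theorem 14's recipe: R | C | P = g | s.
    Sketch only; assumes S is not a multiple of 8 (no mod* handling)."""
    G = 1 << (N - 1)
    g = G >> 1
    s = S % g                              # the residue, lowercase s
    labels = [x for x in range(1, G) if x != S]
    count = 0
    for R in labels:
        for C in labels:
            P = R ^ C                      # cell content (before marks)
            if P == 0 or P == S:           # the two empty long diagonals
                continue
            if g in (R, C, P) or s in (R, C, P):
                count += 1
    return count

# Matches the counts quoted for the balloon-ride series S = g + 1:
assert filled_cells(5, 9) == 72      # 3 Box-Kites x 24 cells
assert filled_cells(6, 17) == 168    # 7 Box-Kites x 24 cells
```

Running the counter for larger $N$ continues to track the $6 \cdot (2^{N-1} - 4)$ formula, which is the point of the Remarks above.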
Finally, recall that, by spectrographic equivalence, all such computations will lead to the same results for each ${\bf S}$ value in the same spectral band or octave. \section{Hide/Fill Involution: Further-Right High-Bits with ${\bf 24 < S < 32}$.} Recall that, in the Sand Mandala flip-book, each increment of ${\bf S}$ moved the two sets of orthogonal parallel lines one cell closer toward their opposite numbers: while ${\bf S = 9}$ had two filled-in rows and columns forming a square missing its corners, the progression culminating in ${\bf S = 15}$ showed a cross-hairs configuration: the parallel lines of cells now abutted each other in 2-ply horizontal and vertical arrays. The same basic progression is on display in the Chingons, starting with ${\bf S = 17}$. But now the number of strut-opposite cell pairs in each row and column is 15, not 7, so the cross-hairs pattern can't arise until ${\bf S = 31}$. Yet it never arises in quite the manner expected, as something quite singular transpires just after flipping past the ET in the middle, for ${\bf S = 24}$. Here, rows and columns labeled $8$ and $16$ constrain a square of empty cells in the center $\cdots$ quickly followed by an ET which seems to continue the expected trajectory -- except that almost all the non-long-diagonal cells left empty in its predecessor ETs are now inexplicably filled. More, there is a method to the ``almost all'' as well: for we now see not 2, but 4 rows and columns, all being blanked out while those labeled with $g$ and ${\bf S} \; mod \; g$ are being filled in. This is an inevitable side effect of a second high-bit in ${\bf S}$: we call this phenomenon, first appearing in the Chingons, \textit{hide/fill involution}. 
There are 4, not 2, line-pairs, because ${\bf S}$ and ${\bf G}$, modulo a lower power of 2 (because devolving upon a prior CDP generation's $g$), offer twice the possibilities: for ${\bf S = 25}$, ${\bf S} \; mod \; 16$ is now $9$, but ${\bf S} \; mod\; 8$ can result in either $1$ or $17$ as well -- with correlated \textit{multiples} of $8$ ($8$ proper, and $24$) defining the other two pairings. All cells with ${\bf R \; | C \; | \; P}$ equal to one of these 4 values, but for the handful already set to ``on'' by the first high-bit, will now be set to ``off,'' while all other non-long-diagonal cells set to ``off'' in the Pathion Sand Mandalas are suddenly ``on.'' What results for each Chingon ET with $24 < {\bf S} < 32$ is an ensemble comprised of $23$ Box-Kites. (For the flip-book, see Slides 40 -- 54.) Why does this happen? The logic is as straightforward as the effect can seem mysterious, and is akin, for good reason, to the involutory effect on trip orientation induced by Rule 2 addings of ${\bf G}$ to 2 of the trip's 3 indices. In order to grasp it, we need only to consider another pair of abstract calculation setups, of the sort we've seen already many times. The first is the core of the Two-Bit Theorem, which we state and prove as follows: \noindent {\small Theorem 15}. $2^{N}$-ion dyads making DMZs before augmenting ${\bf S}$ with a new high-bit no longer do so after the fact. \noindent \textit{Proof}. Suppose the high-bit in the bitstring representation of ${\bf S}$ is $2^{K},\;K < (N-1)$. Suppose further that, for some L-index trip $(u,v,w)$, the Assessors $U$ and $V$ are DMZ's, with their dyads having same inner signs. (This last assumption is strictly to ease calculations, and not substantive: we could, as earlier, use one or more binary variables of the $sg$ type to cover all cases explicitly, including Type I vs. Type II box-kites. To keep things simple, we assume Type I in what follows.) 
We then have $(u + u \cdot X)(v + v \cdot X) = (u + U)(v + V) = 0$. But now suppose, without changing $N$, we add a bit somewhere further to the left to ${\bf S}$, so that ${\bf S} < (2^{J} = L) < {\bf G}$, with $J > K$. The augmented strut constant now equals ${\bf S_{L}} = {\bf S + L}$. One of our L-indices, say $v$, belongs to a Vent Assessor thanks to the assumed inner signing; hence, by Rule 2 and the Third Vizier, $(V,v,X) \rightarrow (X + L, v, V + L)$. Its DMZ partner $u$, meanwhile, must thereby be a Zigzag L-index, which means $(u,U,X) \rightarrow (u, X + L, U + L)$. We claim the truth of the following arithmetic: \begin{center} $v \; + \; (V + L)$ \\ \underline{$ \;\;\; u \; + \; (U + L)\;\;\;$} \\ $+(W \;+ \; L) \; + w$ \\ \underline{$+ \; w \;\; - (W + L)$} \\ NOT ZERO (+w's don't cancel) \\ \end{center} The left bottom product is given. The product to its right is derived as follows: since $u$ is a Zigzag L-index, the Trefoil U-trip $(u,V,W)$ has the same orientation as $(u,v,w)$, so that Rule 2 $\rightarrow (u, W+L, V+L)$, implying the negative result shown. The left product on the top line, though, has terms derived from a Trefoil U-trip lacking a Zigzag L-index, so that only after Rule 2 reversal are the letters arrayed in Zigzag L-trip order: $(U + L, v, W + L)$. Ergo, $+ (W+L)$. Similarly for the top right: Rule 2 reversal ``straightens out'' the Trefoil U-trip, to give $(U + L, V + L, w)$; therefore, $(+ w)$ results. If we explicitly covered further cases by using an $sg$ variable, we would be faced with a Theorem 2 situation: one or the other product pair cancels, but not both. $\;\;\blacksquare$ \noindent \textit{Remark.} The prototype for the phenomenon this theorem covers is the ``explosion'' of a Sedenion box-kite into a trio of interconnected ones in a Pathion sand mandala, with the ${\bf S}$ of the latter = the ${\bf X}$ of the former. As part of this process, 4 of the expected 7 are ``hidden'' box-kites (HBKs), with no DMZs along their edges.
These have zigzag L-trips which are precisely the L-trips of the 4 Sedenion Sails. Here, an empirical observation which will spur more formal investigations in a sequel study: for the 3 HBKs based on trefoil L-trips, exactly 1 strut has reversed orientation (a different one in each of them), with the orientation of the triangular side whose midpoint it ends in also being reversed. For the HBK based on the zigzag L-trip, all 3 struts are reversed, so that the flow along the sides is exactly the reverse of that shown in the ``Rule 0'' circle. (Hence, all possible flow patterns along struts are covered, with only those entailing 0 or 2 reversals corresponding to functional box-kites: our Type I and Type II designations.) It is not hard to show that this zigzag-based HBK has another surprising property: the 8 units defined by its own zigzag's Assessors plus ${\bf X}$ and the real unit form a ZD-free copy of the Octonions. This is also true when the analogous Type II situation is explored, albeit for a slightly different reason: in the former case, all 3 Catamaran ``twistings'' take the zigzag edges to other HBKs; in the latter, though, the pair of Assessors in some other Type II box-kite reached by ``twisting'' -- $(a,B)$ and $(A,b)$, say, if the edge be that joining Assessors A and B, with strut-constant $c_{opp} = d$ -- are \textit{strut opposites}, and hence also bereft of ZDs. The general picture seems to mirror this concrete case, and will be studied in ``Voyage by Catamaran'' with this expectation: the bit-twiddling logic that generates meta-fractal ``Skies'' also underwrites a means for jumping between ZD-free Octonion clones in an infinite number of HBKs housed in a Sky. Given recent interest in pure ``E8'' models giving a privileged place to the basis of zero-divisor theory, namely ``G2'' projections (viz., A. 
Garrett Lisi's ``An Exceptionally Simple Theory of Everything''); a parallel vogue for many-worlds approaches; and, the well-known correspondence between 8-D closest-packing patterns, the loop of the 240 unit Octonions which Coxeter discovered, and E8 algebras -- given all this, tracking the logic of the links across such Octonionic ``brambles'' might prove of great interest to many researchers. Now, we still haven't explained the flipside of this off-switch effect, to which prior CDP generation Box-Kites -- appropriately zero-padded to become Box-Kites in the current generation until the new high-bit is added to the strut-constant -- are subjected. How is it that previously empty cells \textit{not} associated with the second high-bit's blanked-out R, C, P values are now \textit{full}? The answer is simple, and is framed in the Hat-Trick Theorem this way. \noindent {\small Theorem 16}. Cells in an ET which represent DMZ edges of some $2^{N}$-ion Box-Kites for some fixed ${\bf S}$, and which are offed in turn upon augmenting of ${\bf S}$ by a new leftmost bit, are turned on once more if ${\bf S}$ is augmented by yet another new leftmost bit. \noindent \textit{Proof}. We begin an induction based upon the simplest case (which the Chingons are the first $2^{N}$-ions to provide): consider Box-Kites with ${\bf S \leq 8}$. If a high-bit be appended to ${\bf S}$, then the associated Box-Kites are offed. However, if \textit{another} high-bit be affixed, these dormant Box-Kites are re-awakened -- the second half of \textit{hide/fill involution}. We simply assume an L-index set $(u,v,w)$ underwriting a Sail in the ET for the pre-augmented ${\bf S}$, with Assessors $(u, U)$ and $(v, V)$. Then, we introduce a more leftified bit $2^{Q} = M$, where pre-augmented ${\bf S} < L < M < {\bf G}$, then compute the term-by-term products of $(u + (U + L + M))$ and $(v + sg \cdot (V + L + M))$, using the usual methods.
And as these methods tell us that two applications of Rule 2 have the same effect as none in such a setup, we have no more to prove. $\;\;\blacksquare$ \noindent \textit{Corollary}. The induction just invoked makes it clear that strut constants equal to multiples of $8$ that are not powers of $2$ are included in the same spectral band as all other integers larger than the prior multiple. The promissory note issued in the second paragraph of Part II's concluding section, on 64-D Spectrography, can now be deemed redeemed. In the Chingons, high-bits $L$ and $M$ are necessarily adjacent in the bitstring for ${\bf S < G = 32}$; but in the general $2^{N}$-ion case, $N$ large, zero-padding guarantees that things will work in just the same manner, with only one difference: the recursive creation of ``harmonics'' of relatively small-$g$ $(s,g)$-modular ${\bf R, C, P}$ values will propagate to further levels, thereby affecting overall Box-Kite counts. In general terms, we have echoes of the formula given for $(s,g)$-modular calculations, but with this signal difference: there will be \textit{one} such rule for \textit{each} high-bit $2^{H}$ in ${\bf S}$, where residues of ${\bf S}$ modulo $2^{H}$ will generate their own near-solid lines of rows and columns, be they hidden or filled. Likewise for multiples of $2^{H} < {\bf G}$ which are not covered by prior rules, and multiples of $2^H$ supplemented by the bit-specific residue (regardless of whether $2^{H}$ itself is available for treatment by this bit-specific rule). In the simplest, no-zero-padding instances, all even multiples are excluded, as they will have occurred already in prior rules for higher bits, and fills or hides, once fixed by a higher bit's rule, cannot be overridden. Cases with some zero-padding are not so simple.
Consider this two-bit instance, ${\bf S = 73}, N = 8$: the fill-bit is 64, the hide-bit is just 8, so that only 9 and 64 generate NSLs of filled values; all other multiples of 8, and their supplements by 1 (including 65), are NSLs of hidden values. Now look at a variation on this example, with the single high-bit of zero-padding removed -- i.~e., ${\bf S = 41}, N = 8$. Here, the fill-bit is 32, and its multiples 64 and 96, as well as their supplements by ${\bf S} \; modulo \; 32 \; = \; 9$, or 9 and 73 and 105, label NSLs of filled values; but all other multiples of 8, plus all multiples of 8 supplemented by 1 not equal to 9 or 73 or 105, label NSLs of hidden values. Cases with multiple fill and hide bits, with or without additional zero-padding, are obviously even more complicated to handle explicitly on a case-by-case basis, but the logic framing the rules remains simple; hence, even such messy cases are programmatically easy to handle. Hide/fill involution means, then, that the first, third, and any further odd-numbered high-bits (counting from the left) will generate ``fill'' rules, whereas all the even-numbered high-bits generate ``hide'' rules -- with all cells not touched by a rule being either hidden (if the total number of high-bits $B$ is odd) or filled ($B$ is even). Two further examples should make the workings of this protocol more clear. First, the Chingon test case of ${\bf S = 25}$: for $({\bf R \; | \; C \; | \; P} \; = \; 9 \;| \; 16)$, all the ET cells are filled; however, for $({\bf R \; | \; C \; | \; P} \; = 1 \;|\; 8 \;|\; 17 \;|\; 24)$, ET cells not already filled by the first rule (and, as visual inspection of Slide 48 indicates, there are only 8 cells in the entire 840-cell ET already filled by the prior rule which the current rule would like to operate on) are hidden from view. Because the 16- and 8-bits are the only high-bits, the count of same is even, meaning all remaining ET cells not covered by these 2 rules are filled.
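The rule protocol just described lends itself to exactly the programmatic handling the text recommends. Here is a minimal Python sketch of our own devising; it assumes ${\bf S}$ is not a multiple of $8$ (sidestepping the starred-mod handling), and takes the high-bits to be the set bits $2^{k}$, $k \geq 3$:

```python
def rule_values(S, N):
    """Per high-bit of S (left to right), the sorted R|C|P values its rule
    governs.  rules[0], rules[2], ... are fill rules; rules[1], ... hide.
    Sketch only: S not a multiple of 8 is assumed."""
    G = 1 << (N - 1)
    high_bits = [1 << k for k in range(N - 2, 2, -1) if S & (1 << k)]
    covered, rules = set(), []
    for H in high_bits:
        s = S % H                          # bit-specific residue
        vals = set()
        for m in range(0, G, H):           # multiples of H below G
            for v in (m, m + s):           # the multiple, and its supplement
                if v not in (0, S) and v < G and v not in covered:
                    vals.add(v)
        covered |= vals                    # a value, once fixed, is not overridden
        rules.append(sorted(vals))
    return rules

# Chingon test case S = 25: fill on 9|16, then hide on 1|8|17|24.
assert rule_values(25, 6) == [[9, 16], [1, 8, 17, 24]]
# Three-high-bit case N = 7, S = 57: fill, hide, fill, as quoted below.
assert rule_values(57, 7) == [[25, 32], [9, 16, 41, 48],
                              [1, 8, 17, 24, 33, 40, 49, 56]]
```

The assertions reproduce both worked examples of this section, including the precedence principle that a fill or hide, once fixed by a higher bit's rule, cannot be overridden.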
We get a Box-Kite count of 23 as follows. First, the 16-bit rule gives us 7 Box-Kites, per earlier arguments; the 8-bit rule, which gives 3 filled Box-Kites in the Pathions, recursively propagates to cover 19 hidden Box-Kites in the Chingons, according to the formula produced in the last section. But hide/fill involution says that, of the 35 maximum possible Box-Kites in a Chingon ET, $35 - 19 = 16$ are now made visible. As none of these have the Pathion ${\bf G = 16}$ as an L-index, and all the 7 Box-Kites from the 16-bit rule \textit{do}, we therefore have a grand total of $7 + 16 = 23$ Box-Kites in the ${\bf S = 25}$ ET, as claimed (and as cell-counting on the cited Slide will corroborate). The concluding Slides 76--78 present a trio of color-coded ``histological slices'' of the hiding and filling sequence (beginning with the blanking of the long diagonals) for the simplest 3-high-bit case, $N = 7, {\bf S = 57}$. Here, the first fill rule works on 25 and 32; the first hide rule, on 9, 16, 41, and 48; the second fill rule, on 1, 8, 17, 24, 33, 40, 49, and 56; and the rest of the cells, since the count of high-bits is odd, are left blank. We do not give an explicit algorithmic method here, however, for computing the number of Box-Kites contained in this 3,720-cell ET. Such recursiveness is best handled programmatically, rather than by cranking out an explicit (hence, long and tedious) formula meant for time-consuming hand calculation. What we can do, instead, is conclude with a brief finale, embodying all our results in the simple ``recipe theory'' promised originally, and offer some reflections on future directions.
Like the role played by its Gaussian predecessor in the legitimizing of another ``new kind of [complex] number theory,'' its simultaneous simplicity and generality open out on extensive new vistas at once alien and inviting. The Theorem proper can be subdivided into a Proposition concerning all integers, and a ``Recipe Theory'' pragmatics for preparing and ``cooking'' the meta-fractal entities whose existence the proposition asserts, but cannot tell us how to construct. \noindent \textit{Proposition:} Any integer $K > 8$ not a power of $2$ can uniquely be associated with a Strut Constant ${\bf S}$ of ZD ensembles, whose inner skybox resides in the $2^{N}$-ions with $2^{N-2} < K < 2^{N-1}$. The bitstring representation of ${\bf S}$ completely determines an infinite-dimensional analog of a standard plane-confined fractal, with each of the latter's points associated with an empty cell in the infinite Emanation Table, with all non-empty cells comprised wholly of mutually orthogonal primitive zero-divisors, one line of same per cell. \noindent \textit{Preparation:} Prepare each suitable ${\bf S}$ by producing its bitstring representation, then determining the number of high-bits it contains: if ${\bf S}$ is a multiple of 8, right-shift 4 times; otherwise, right-shift 3 times. Then count the number $B$ of 1's in the shortened bitstring that results. For this set \verb|{B}| of $B$ elements, construct two same-sized arrays, whose indices range from $1$ to $B$: the array \verb|{i}| which indexes the left-to-right counting order of the elements of \verb|{B}|; and, the array \verb|{P}| which indexes the powers of $2$ of the same element in the same left-to-right order. 
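This preparation step transcribes almost literally into code. In the sketch below (Python), the index array \verb|{i}| is implicit in list order, and \verb|P| holds the powers of $2$ of the high-bits, leftmost first; we treat the worked example's $K = 613$ directly as the bitstring to prepare, as the example that follows does.

```python
def prepare(S):
    """Preparation step: count the high-bits of S and record their powers.

    Right-shift 4 times if S is a multiple of 8, else 3 times; B is the
    number of 1's that remain, and P lists their powers of 2 in
    left-to-right (most-significant-first) order.
    """
    shift = 4 if S % 8 == 0 else 3
    high = S >> shift
    P = [i + shift for i in range(high.bit_length() - 1, -1, -1)
         if (high >> i) & 1]
    return len(P), P
```

For the example that follows, `prepare(613)` yields $B = 3$ with $P_1 = 9$, $P_2 = 6$, $P_3 = 5$.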
(Example: if $K = 613$, the inner skybox is contained in the $2^{11}$-ions; as the number is not a multiple of $8$, the bitstring representation $1001100101$ is right-shifted thrice to yield the substring of high-bits $1001100$; $B = 3$, and for $1 \; \leq \; i \; \leq \; 3$, $P_{1} = 9, \; P_{2} = 6, \; P_{3} = 5$.) \noindent \textit{Cookbook Instructions:} \begin{description} \item \verb|[0]| ~ For a given strut-constant ${\bf S}$, compute the high-bit count $B$ and bitstring arrays \verb|{i}| and \verb|{P}|, per preparation instructions. \item \verb|[1]| ~ Create a square spreadsheet-cell array, of edge-length $2^{I}$, where $I \geq {\bf G/2} = g$ of the inner skybox for ${\bf S}$, with the Sky as the limit when $I \rightarrow \infty$. \item \verb|[2]| ~ Fill in the labels along all four edges, with those running along the right (bottom) borders identical to those running along the left (top), except in reversed left-right (top-bottom) order. Refer to those along the top as column numbers $C$, and those along the left edge, as row numbers $R$, setting candidate contents of any cell $(R,C)$ to $R \veebar C = P$. \item \verb|[3]| ~ Paint all cells along the long diagonals of the spreadsheet just constructed a color indicating BLANK, so that all cells with $R = C$ (running down from upper left corner) else $R \veebar C = {\bf S}$ (running down from upper right) have their $P$-values hidden. \item \verb|[4]| ~ For $1 \; \leq \; i \; \leq \; B$, consider for painting only those cells in the spreadsheet created in \verb|[1]| with $R \; | \; C \; | \; P \; = \; m \cdot 2^{\gamma} \; | \; m \cdot 2^{\gamma} \; + \; \sigma$, where $\gamma = P_{i}$, $\sigma = {\bf S} \bmod 2^{\gamma}$, and $m$ is any integer $\geq 0$ (with $m = 0$ only producing a legitimate candidate for the right-hand's second option, as an XOR of $0$ indicates a long-diagonal cell).
\item \verb|[5]| ~ If a candidate cell has already been painted by a prior application of these instructions to a prior value of $i$, leave it as is. Otherwise, paint it with $R \veebar C$ if $i$ is odd, else paint it BLANK. \item \verb|[6]| ~ Increment $i$ and loop to \verb|[4]|; repeat until the rule for $i = B$ has been applied, then proceed to the next step. \item \verb|[7]| ~ If $B$ is odd, paint all cells not already painted BLANK; for $B$ even, paint them with $R \veebar C$. \end{description} In these pseudocode instructions, no attention is given to edge-mark generation, performance optimization, or other embellishments. Recursive expansion beyond the chosen limits of the $2^{N}$-ion starting point is also not addressed. (Just keep all painted cells as is, then redouble until the expanded size desired is attained; compute appropriate insertions to the label lines, then paint all new cells according to the same recipe.) What should be clear, though, is that any optimization cannot fail to be qualitatively more efficient than the code in the appendix to [9], which computes on a cell-by-cell basis. For ${\bf S} > 8, \; N > 4$, we've reached the onramp to the Metafractal Superhighway: new kinds of efficiency, synergy, connectedness, and so on, would seem to more than compensate for the increase in dimension. It is well-known that Chaotic attractors are built up from fractals; hence, our results make it quite thinkable to consider Chaos Theory from the vantage of pure Number $\cdots$ and hence the switch from one mode of Chaos to another as a bitstring-driven -- or, put differently, a cellular automaton-type -- process, of Wolfram's Class 4 complexity. Such switching is of the utmost importance in coming to terms with the most complex finite systems known: human brains.
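Read as code, cookbook steps \verb|[0]|--\verb|[7]| amount to the following sketch (Python). The label convention, rows and columns running over $1, \ldots, 2g-1$ with ${\bf S}$ omitted (which reproduces the 840- and 3,720-cell ET sizes quoted earlier), is our inference, not something the recipe states; a cell maps to its XOR value $P$ when filled and to \verb|None| when BLANK.

```python
def paint_et(S, g):
    """Sketch of cookbook steps [0]-[7] for strut constant S.

    Assumes row/column labels 1..2g-1 with S excluded (an inference from
    the ET cell counts in the text).  A cell maps to R^C when filled,
    or to None when painted BLANK.
    """
    shift = 4 if S % 8 == 0 else 3                        # step [0]
    high = S >> shift
    gammas = [i + shift for i in range(high.bit_length() - 1, -1, -1)
              if (high >> i) & 1]                         # high-bits, left to right
    B = len(gammas)
    labels = [x for x in range(1, 2 * g) if x != S]       # steps [1]-[2]
    grid = {}
    for r in labels:                                      # step [3]: long diagonals
        for c in labels:
            if r == c or (r ^ c) == S:
                grid[(r, c)] = None
    for i, gamma in enumerate(gammas, start=1):           # steps [4]-[6]
        step, sigma = 2 ** gamma, S % (2 ** gamma)
        hits = {m * step + s for m in range(2 * g // step + 1)
                for s in (0, sigma) if m > 0 or s > 0}
        for r in labels:
            for c in labels:
                if (r, c) not in grid and (r in hits or c in hits
                                           or (r ^ c) in hits):
                    grid[(r, c)] = (r ^ c) if i % 2 else None   # step [5]
    for r in labels:                                      # step [7]: the rest
        for c in labels:
            grid.setdefault((r, c), (r ^ c) if B % 2 == 0 else None)
    return grid
```

Running `paint_et(57, 32)` reproduces the Slides 76--78 sequence: rows and columns 25 and 32 get filled by the first rule, 9, 16, 41, and 48 get hidden by the second, 1, 8, 17, 24, 33, 40, 49, and 56 get filled by the third, and untouched cells stay blank since $B = 3$ is odd.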
The late Francisco Varela, both a leading visionary in neurological research and its computer modeling, and a long-time follower of Madhyamika Buddhism who'd collaborated with the Dalai Lama in his ``Tibetan Buddhists talk with brain scientists'' dialogues [10], pointed to just the sorts of problems being addressed here as the next frontier. In a review essay he co-authored in 2001 just before his death [11, p. 237], we read these concluding thoughts on the theme of what lies ``Beyond Synchrony'' in the brain's workings: \begin{quote} The transient nature of coherence is central to the entire idea of large-scale synchrony, as it underscores the fact that the system does not behave dynamically as having stable attractors [e.g., Chaos], but rather metastable patterns -- a succession of self-limiting recurrent patterns. In the brain, there is no ``settling down'' but an ongoing change marked only by transient coordination among populations, as the attractor itself changes owing to activity-dependent changes and modulations of synaptic connections. \end{quote} Varela and Jean Petitot (whose work was the focus of the intermezzo concluding Part I, in which semiotically inspired context the Three Viziers were introduced) were long-time collaborators, as evidenced in the last volume on \textit{Naturalizing Phenomenology} [12] which they co-edited. It is only natural then to re-inscribe the theme of mathematizing semiotics into the current context: Petitot offers separate studies, at the ``atomic'' level where Greimas' ``Semiotic Square'' resides; and at the large-scale and architectural, where one must place L\'evi-Strauss's ``Canonical Law of Myths.'' But the pressing problem is finding a smooth approach that lets one slide the same modeling methodology from the one scale to the other: a fractal-based ``scale-free network'' approach, in other words. 
What makes this distinct from the problem we just saw Varela consider is the focus on the structure, rather than dynamics, of transient coherence -- a focus, then, in the last analysis, on a characterization of \textit{database architecture} that can at once accommodate meta-chaotic transiency and structural linguists' cascades of ``double articulations.'' Starting at least with C. S. Peirce over a century ago, and receiving more recent elaboration in the hands of J. M. Dunn and the research into the ``Semantic Web'' devolving from his work, data structures which include metadata at the same level as the data proper have led to a focus on ``triadic logic,'' as perhaps best exemplified in the recent work of Edward L. Robertson. [13] His exploration of a natural triadic-to-triadic query language deriving from Datalog, which he calls Trilog, is not (unlike our Skies) intrinsically recursive. But his analysis depends upon recursive arguments built atop it, and his key constructs are strongly resonant with our own (explicitly recursive) ones. We focus on just a few to make the point, with the aim of provoking interest in fusing approaches, rather than in proving any particular results. The still-standard technology of relational databases based on SQL statements (most broadly marketed under the Oracle label) was itself derived from Peirce's triadic thinking: the creator of the relational formalism, Edgar F. ``Ted'' Codd, was a PhD student of Peirce editor and scholar Arthur W. Burks. Codd's triadic ``relations,'' as Robertson notes (and as Peirce first recognized, he tells us, in 1885), are ``the minimal, and thus most uniform'' representations ``where metadata, that is data about data, is treated uniformly with regular data.'' In Codd's hands (and in those of his market-oriented imitators in the SQL arena), metadata was ``relegated to an essentially syntactic role'' [13, p. 
1] -- a role quite appropriate to the applications and technological limitations of the 1970's, but inadequate for the huge and/or highly dynamic schemata that are increasingly proving critical in bioinformatics, satellite data interpretation, Google server-farm harvesting, and so on. As Robertson sums up the situation motivating his own work, \begin{quote} Heterogeneous situations, where diverse schemata represent semantically similar data, illustrate the problems which arise when one person's semantics is another's syntax -- the physical ``data dependence'' that relational technology was designed to avoid has been replaced by a structural data dependence. Hence we see the need to [use] a simple, uniform relational representation where the data/metadata distinction is not frozen in syntax. [13, pp. 1-2] \end{quote} As in relational database theory and practice, the forming and exploiting of inner and outer \textit{joins} between variously keyed tables of data is seminal to Robertson's approach as well as Codd's. And while the RDF formalism of the Semantic Web (the representational mechanism for describing structures as well as contents of web artifacts on the World Wide Web) is likewise explicitly triadic, there has, to date, been no formal mechanism put in place for manipulating information in RDF format. Hence, ``there is no natural way to restrict output of these mechanisms to triples, except by fiat'' [13, p. 4], much less any sophisticated rule-based apparatus like Codd's ``normal forms'' for querying and tabulating such data. It is no surprise, then, that Robertson's ``fundamental operation on triadic relations is a particular three-way join which takes explicit advantage of the triadic structure of its operands.'' This \textit{triadic join}, meanwhile, ``results in another triadic relation, thus providing the closure required of an algebra.'' [13, p. 
6] Parsing Robertson's compact symbolic expressions into something close to standard English, the trijoin of three triadic relations R, S, T is defined as some $(a, b, c)$ selected from the universe of possibilities $(x, y, z)$, such that $(a, x, z) \in R$, $(x, b, y) \in S$, and $(z, y, c) \in T$. This relation, he argues, is the most fundamental of all the operators he defines. When supplemented with a few constant relations (analogs of Tarski's ``infinite constants'' embodied in the four binary relations of universality of all pairs, identity of all equal pairs, diversity of all unequal pairs, and the empty set), it can express all the standard monotonic operators (thereby excluding, among his primitives, only the relative complement). How does this compare with our ZD setup, and the workings of Skies? For one thing, infinite constants, of a type akin to Tarski's, are embodied in the fact that any full meta-fractal requires the use of an infinite ${\bf G}$, which sits atop an endless cascade of singleton leftmost bits, determining for any given ${\bf S}$ an indefinite tower of ZDs. One of the core operators massaging Robertson's triads is the \textit{flip}, which fixes one component of a relation while interchanging the other two $\cdots$ but our Rule 2 is just the recursive analog of this, allowing one to move up and down towers of values with great flexibility (allowing, as well, on and off switching affecting whole ensembles). The integer triads upon which our entire apparatus depends are a gift of nature, not dictated ``by fiat,'' and give us a natural basis for generating and tracking unique IDs with which to ``tag'' and ``unpack'' data (with ``storage'' provided free of charge by the empty spaces of our meta-fractals: the ``atoms'' of Semiotic Squares have four long-diagonal slots each, one per each of the ``controls'' Petitot's Catastrophe Theory reading calls for, and so on.)
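Robertson's trijoin, as just paraphrased, is only a few lines of code; the sketch below (Python) uses toy relations of our own invention, purely for illustration.

```python
def trijoin(R, S, T):
    """Robertson's three-way join, per the prose definition above:
    (a, b, c) is output iff there exist x, y, z with
    (a, x, z) in R, (x, b, y) in S, and (z, y, c) in T."""
    return {(a, b, c)
            for (a, x, z) in R
            for (x2, b, y) in S if x2 == x
            for (z2, y2, c) in T if z2 == z and y2 == y}

# Toy example (our own data): the shared x, y, z values thread the
# three relations together, yielding the single triple (1, 2, 3).
result = trijoin({(1, 0, 7)}, {(0, 2, 5)}, {(7, 5, 3)})
```

Note that the output is again a set of triples, i.e., a triadic relation, which is exactly the closure property Robertson requires of his algebra.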
Finally, consider two dual constructions that are the core of our own triadic number theory: if the $(a, b, c)$ of the last paragraph, for instance, be taken as a Zigzag's L-index set, then the other trio of triples correlates quite exactly with the Zigzag U-trips. And this 3-to-1 relation, recall, exactly parallels that between the 3 Trefoil, and 1 Zigzag, Sails defining a Box-Kite, with this very parallel forming the support for the recursion that ultimately lifts us up into a Sky. We can indeed make this comparison to Robertson's formalism exceedingly explicit: if his X, Y, Z be considered the angular nodes of PSL(2,7) situated at the 12 o'clock apex and the right and left corners respectively, then his $(a, b, c)$ correspond exactly to our own Rule 0 trip's same-lettered indices! Here, we would point out that these two threads of reflection -- on underwriting Chaos with cellular-automaton-tied Number Theory, and designing new kinds of database architectures -- are hardly unrelated. It should be recalled that two years prior to his revolutionary 1970 paper on relational databases [14], Codd published a pioneering book on cellular automata [15]. It is also worth noting that one of the earliest technologies to be spawned by fractals arose in the arena of data compression of images, as epitomized in the work of Michael Barnsley and his Iterated Systems company. The immediate focus of the author's own commercial efforts is on fusing meta-fractal mathematics with the context-sensitive adaptive-parsing ``Meta-S'' technology of business associate Quinn Tyler Jackson. [16] And as that focus, tautologically, is not mathematical \textit{per se}, we pass it by and leave it, like so many other themes just touched on here, for later work. \pagebreak \section*{References} \begin{description} \item \verb|[1]| Robert P. C.
de Marrais, ``Placeholder Substructures I: The Road From NKS to Scale-Free Networks is Paved with Zero Divisors,'' \textit{Complex Systems}, 17 (2007), 125-142; arXiv:math.RA/0703745 \item \verb|[2]| Robert P. C. de Marrais, ``Placeholder Substructures II: Meta-Fractals, Made of Box-Kites, Fill Infinite-Dimensional Skies,'' arXiv:0704.0026 [math.RA] \item \verb|[3]| Robert P. C. de Marrais, ``The 42 Assessors and the Box-Kites They Fly,'' arXiv:math.GM/0011260 \item \verb|[4]| Robert P. C. de Marrais, ``The Marriage of Nothing and All: Zero-Divisor Box-Kites in a `TOE' Sky,'' in Proceedings of the $26^{\textrm{th}}$ International Colloquium on Group Theoretical Methods in Physics, The Graduate Center of the City University of New York, June 26-30, 2006, forthcoming from Springer-Verlag. \item \verb|[5]| Robert P. C. de Marrais, ``Placeholder Substructures: The Road from NKS to Small-World, Scale-Free Networks Is Paved with Zero-Divisors,'' http:// \newline wolframscience.com/conference/2006/ presentations/materials/demarrais.ppt (Note: the author's surname is listed under ``M,'' not ``D.'') \item \verb|[6]| Benoit Mandelbrot, \textit{The Fractal Geometry of Nature} (W. H. Freeman and Company, San Francisco, 1983) \item \verb|[7]| Ed Pegg, Jr., ``Tournament Dice,'' \textit{Math Games} column for July 11, 2005, on the MAA website at http://www.maa.org/editorial/ mathgames/mathgames \verb|_07_11_05|.html \item \verb|[8]| Robert P. C. de Marrais, ``The `Something From Nothing' Insertion Point,'' http://www.wolframscience.com/conference/2004/ presentations/ \newline materials/rdemarrais.pdf \item \verb|[9]| Robert P. C. de Marrais, ``Presto! Digitization,'' arXiv:math.RA/0603281 \item \verb|[10]| Francisco Varela, editor, \textit{Sleeping, Dreaming, and Dying: An Exploration of Consciousness with the Dalai Lama} (Wisdom Publications: Boston, 1997). \item \verb|[11]| F. J. Varela, J.-P. Lachaux, E. Rodriguez and J.
Martinerie, ``The brainweb: phase synchronization and large-scale integration,'' \textit{Nature Reviews Neuroscience}, 2 (2001), pp. 229-239. \item \verb|[12]| Jean Petitot, Francisco J. Varela, Bernard Pachoud and Jean-Michel Roy, \textit{Naturalizing Phenomenology: Issues in Contemporary Phenomenology and Cognitive Science} (Stanford University Press: Stanford, 1999) \item \verb|[13]| Edward L. Robertson, ``An Algebra for Triadic Relations,'' Technical Report No. 606, Computer Science Department, Indiana University, Bloomington IN 47404-4101, January 2005; online at http://www.cs.indiana.edu/ \newline pub/techreports/TR606.pdf \item \verb|[14]| E. F. Codd, \textit{The Relational Model for Database Management: Version 2} (Addison-Wesley: Reading MA, 1990) is the great visionary's most recent and comprehensive statement. \item \verb|[15]| E. F. Codd, \textit{Cellular Automata} (Academic Press: New York, 1968) \item \verb|[16]| Quinn Tyler Jackson, \textit{Adapting to Babel -- Adaptivity and Context-Sensitivity in Parsing: From $a^{n}b^{n}c^{n}$ to RNA} (Ibis Publishing: P.O. Box 3083, Plymouth MA 02361, 2006; for purchasing information, contact Thothic Technology Partners, LLC, at their website, www.thothic.com). \end{description} \end{document}
\begin{document} \title{Tight Bound on Randomness for Violating the Clauser-Horne-Shimony-Holt Inequality} \author{Yifeng~Teng,~Shenghao~Yang,~\IEEEmembership{Member,~IEEE},~Siwei~Wang~and~Mingfei~Zhao \thanks{This work was supported in part by the National Natural Science Foundation of China (NSFC) under Grant 61471215. This work was partially funded by a grant from the University Grants Committee of the Hong Kong Special Administrative Region (Project No.\ AoE/E-02/08).} \thanks{Y. Teng is with the Department of Computer Sciences, University of Wisconsin-Madison, Madison, USA (e-mail: [email protected]).} \thanks{S. Yang is with the School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen, P. R. China (e-mail: [email protected]).} \thanks{S. Wang is with the Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing, P. R. China (e-mail: [email protected]).} \thanks{M. Zhao is with the School of Computer Science, McGill University, Montreal, Canada (e-mail: [email protected]).} } \maketitle \begin{abstract} Free will (or randomness) has been studied to achieve a loophole-free Bell's inequality test and to provide device-independent quantum key distribution security proofs. The randomness required for a local hidden variable model (LHVM) to violate the Clauser-Horne-Shimony-Holt (CHSH) inequality has been studied, but a tight bound has not been proved for the practical case in which i) the device settings of the two parties in the Bell test are independent; and ii) the device settings of each party can be correlated or biased across different runs. Using information-theoretic techniques, we prove in this paper a tight bound on the required randomness for this case such that the CHSH inequality can be violated by some LHVM. Our proof has a clear achievability and converse style. The achievability part is proved using type counting.
To prove the converse part, we introduce a concept called profile for a set of binary sequences and study the properties of profiles. Our profile-based converse technique is also of independent interest. \end{abstract} \begin{IEEEkeywords} Bell's inequality test, CHSH inequality, randomness loophole, randomness bound \end{IEEEkeywords} \section{Introduction} Bell's inequality test \cite{bell1964einstein} provides an approach to verify the existence of physical phenomena that cannot be explained by local hidden variable models (LHVMs). The Clauser-Horne-Shimony-Holt (CHSH) inequality \cite{Clauser69} is the inequality most often used in Bell test experiments. Experimental demonstrations of the violation of CHSH inequalities have been conducted since 1982 \cite{Aspect82} (see also Giustina et al.'s work \cite{Giustina13} and the references therein). These Bell tests, however, suffer from an inherent loophole in that the settings of the participating devices may not be chosen totally randomly, called the \emph{randomness (free will)} loophole. A small amount of correlation between the device settings makes it possible for a LHVM to reproduce the predictions of quantum mechanics \cite{Feldmann95,Kofler06,Hall10,Barrett11,Hall11}. This loophole also weakens the Bell's inequality based security proofs of device-independent quantum key distribution \cite{Mayers1998,Acin06,Vazirani14} and randomness expansion \cite{Pironio10,Colbeck12,Dhara14}. One of the essential questions about the randomness loophole is the bound on randomness such that the correctness of Bell tests can (or cannot) be guaranteed \cite{Hall10,Barrett11,Hall11,Koh12,Thinh13,Pop13,yuan2014randomness,Putz14}. Using a min-entropy type randomness measure, the bound on the randomness required in a CHSH inequality test can be formulated as an optimization problem, and various special cases have been solved \cite{Koh12,Pop13,yuan2014randomness}.
One case that has not been completely resolved in the literature is that the two parties of the test have independent settings, but the setting of each party can be biased or correlated across different runs. In this paper, we study this case and obtain the asymptotic optimal value explicitly. \subsection{Problem Formulation} Let $n$ be a positive integer, and $X, Y$ be two random variables over $\{0,1\}^n$ with a joint distribution $p_{XY}$. We may consider that $X$ and $Y$ are the device settings of the two parties in an $n$-run Bell test, respectively. The following randomness measure has been used in the literature: \begin{equation*} P = \left(\max_{\mathbf{x},\mathbf{y}\in\{0,1\}^n}p_{XY}(\mathbf{x},\mathbf{y}) \right)^{1/n}. \end{equation*} When $X$ and $Y$ are independent and uniformly distributed, $P = 1/4$, which is the minimum value of $P$ and corresponds to the case of complete randomness. When $X$ and $Y$ are deterministic, $P=1$, which corresponds to the case of zero randomness. Note that $P$ is related to the min-entropy: \begin{equation*} H_{\infty}(X,Y) := -\log \max_{\mathbf{x},\mathbf{y}\in\{0,1\}^n}p_{XY}(\mathbf{x},\mathbf{y}) = -n \log P. \end{equation*} Regard the vectors $\mathbf{x} \in \{0,1\}^n$ as column vectors and denote by $\mathbf{x}^\mathrm{T}$ the transpose of $\mathbf{x}$. The optimization problem of interest is \begin{equation} \label{eq:13} \begin{array}{cl} \displaystyle{\min_{p_{XY}}} & P \\ \text{s.t.} & \displaystyle\frac{1}{n} \sum_{\mathbf{x},\mathbf{y}}\mathbf{x}^{\mathrm{T}}\mathbf{y} p_{XY}(\mathbf{x},\mathbf{y})\leq \frac{4-S_Q}{8}, \end{array} \end{equation} where $S_Q=2\sqrt{2}$ is a quantum constant. Readers may refer to \cite{Hall10,Koh12,Pop13} to see how this problem is obtained. Optimization \eqref{eq:13} can be simplified to a linear program \cite{Pop13}. When $n=1$, the optimal value of \eqref{eq:13} is $(S_Q + 4)/24 \approx 0.285$, which was shown by Hall \cite{Hall10} and Koh et al. \cite{Koh12}.
When $n\rightarrow \infty$, Pope and Kay \cite{Pop13} showed that the optimal value of \eqref{eq:13} converges to $3^{\frac{-S_Q-4}{8}}2^{-h_{\mathrm{b}}\left(\frac{4-S_Q}{8}\right)} \approx 0.258$, where \begin{equation*} h_{\mathrm{b}}(t)=-t\log_2t-(1-t)\log_2(1-t) \end{equation*} is the binary entropy function. The case that $X$ and $Y$ are independent is of particular interest. Towards a loophole-free Bell test, physicists have designed experiments with independent device settings \cite{Gallicchio14}. In quantum key distribution, the experimental devices of the two parties may be manufactured independently and separated spatially, reducing the potential correlation of the device settings generated by the adversary. For independent device settings, the corresponding optimization problem becomes \begin{equation} \label{eq:7} \begin{array}{cl} \displaystyle{\min_{p_{X},p_{Y}}} & P \\ \text{s.t.} & \displaystyle\frac{1}{n} \sum_{\mathbf{x},\mathbf{y}}\mathbf{x}^{\mathrm{T}}\mathbf{y} p_{XY}(\mathbf{x},\mathbf{y})\leq \frac{4-S_Q}{8}\\ & p_{XY}(\mathbf{x},\mathbf{y}) = p_{X}(\mathbf{x})p_{Y}(\mathbf{y}). \end{array} \end{equation} Note that the above problem is not derived by directly imposing the constraint $p_{XY}(\mathbf{x},\mathbf{y}) = p_{X}(\mathbf{x})p_{Y}(\mathbf{y})$ on \eqref{eq:13}. For completeness, we briefly discuss how \eqref{eq:7} is derived from the corresponding CHSH inequality test problem in Appendix~\ref{sec:deriv-optim-probl}. When $n=1$, Koh et al. \cite{Koh12} showed that the optimal value of \eqref{eq:7} is $S_Q/8 \approx 0.354$. Let $P_Q$ be the limit of the optimal value of \eqref{eq:7} when $n\rightarrow \infty$. The value of $P_Q$ has the following interpretation. For any independent device settings with randomness less than $P_Q$, it is not possible to have a LHVM that violates the CHSH inequality.
But for any value $P>P_Q$, there exists a LHVM that violates CHSH inequality where the device settings are independent, but have randomness less than or equal to $P$. Therefore, we are motivated to study the value of $P_Q$ for CHSH inequality test. Yuan, Cao and Ma \cite{yuan2014randomness} have shown numerically that $P_Q \lessapprox 0.264$. \subsection{Our Contribution} In this paper, we provide an exact characterization of $P_Q$, and hence close the unresolved case in Table~\ref{tab:1}. Particularly, we show that \begin{equation*} P_Q = 4^{-h_{\mathrm{b}}(\sqrt{c_Q})} = 0.26428\ldots, \end{equation*} where \(c_Q=\frac{4-S_Q}{8} \approx 0.1464\). Our formula has a min-entropy interpretation: $-n\log_2 P_Q = 2n h_{\mathrm{b}}(\sqrt{c_Q})$, i.e., each bit in $X$ and $Y$ has an average min-entropy $h_{\mathrm{b}}(\sqrt{c_Q})$. To prove achievability, we simplify \eqref{eq:7} by introducing an extra constraint that both $X$ and $Y$ have the uniform distribution over $\mathcal{A}_{n,l}$, the set of sequences in $\{0,1\}^n$ with at most $nl$ $1$s, and obtain a new optimization problem \begin{equation} \tag{$2'$} \label{eq:72} \begin{array}{cl} \displaystyle{\min_{l}} & (1/|\mathcal{A}_{n,l}|)^{2/n} \\ \text{s.t.} & \displaystyle\frac{1}{n |\mathcal{A}_{n,l}|^2} \sum_{\mathbf{x},\mathbf{y}\in \mathcal{A}_{n,l}}\mathbf{x}^{\mathrm{T}}\mathbf{y} \leq \frac{4-S_Q}{8}, \end{array} \end{equation} which is essentially the same problem studied in \cite[Section IV-B]{yuan2014randomness}. The asymptotic optimal value of \eqref{eq:72} when $n \rightarrow \infty$, denoted by $\hat P_Q$, gives an upper bound on $P_Q$ since \eqref{eq:72} is obtained by reducing the feasible region of \eqref{eq:7}. The numerical bound on $\hat P_Q$ in \cite{yuan2014randomness} can be made analytical, and it shows that $\hat P_Q\leq 4^{-h_{\mathrm{b}}(\sqrt{c_Q})}$ and hence $P_Q \leq 4^{-h_{\mathrm{b}}(\sqrt{c_Q})}$. 
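As a numerical sanity check on the constants quoted above and in Table~\ref{tab:1} (a sketch; the values themselves are all from the cited works):

```python
import math

def h_b(t):
    """Binary entropy function h_b(t)."""
    return -t * math.log2(t) - (1 - t) * math.log2(1 - t)

S_Q = 2 * math.sqrt(2)            # quantum constant in the CHSH constraint
c_Q = (4 - S_Q) / 8               # approx 0.1464

# Asymptotic bound for independent settings (this paper's formula)
P_Q = 4 ** (-h_b(math.sqrt(c_Q)))                      # 0.26428...

# Benchmarks from Table I
hall_koh_n1 = (S_Q + 4) / 24                           # approx 0.285
koh_indep_n1 = S_Q / 8                                 # approx 0.354
pope_kay = 3 ** (-(S_Q + 4) / 8) * 2 ** (-h_b(c_Q))    # approx 0.258
```

The min-entropy reading also checks out: $-\log_2 P_Q$ equals $2 h_{\mathrm{b}}(\sqrt{c_Q})$ by construction.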
The major part of our paper is to show the converse that no distributions of $X$ and $Y$ with randomness less than $4^{-h_{\mathrm{b}}(\sqrt{c_Q})}$ can be feasible for \eqref{eq:7}, i.e., $P_Q \geq 4^{-h_{\mathrm{b}}(\sqrt{c_Q})}$. Note that we cannot use \eqref{eq:72} as the starting point to prove the converse since the derivation of \eqref{eq:72} implies $\hat{P}_Q \geq P_Q$. It is possible to show that $\hat{P}_Q \geq 4^{-h_{\mathrm{b}}(\sqrt{c_Q})}$, but not $P_Q \geq 4^{-h_{\mathrm{b}}(\sqrt{c_Q})}$, by studying only \eqref{eq:72}. To prove the converse, we introduce a concept called \emph{profile} to characterize a set of binary sequences. We study some properties of profiles, based on which optimization \eqref{eq:7} is simplified and the converse is proved. The profile technique seems to be used here for the first time and may be of independent interest for other problems. In the remainder of this paper, our techniques used to prove the main result are summarized in the next section, followed by the details in Section~\ref{sec:proofs-main-results}. Some concluding remarks are given in Section~\ref{sec:concluding-remarks}. \begin{table} \centering \caption{\label{tab:1} Previous results.} \begin{tabular}{ccc} \hline\hline & correlated devices & independent devices \\ \hline $n=1$ & $(S_Q+4)/24 \approx 0.285$ & $S_Q/8 \approx 0.354$ \\ $n\rightarrow\infty$ & $3^{-\frac{S_Q+4}{8}}2^{-h_{\mathrm{b}}\left(\frac{4-S_Q}{8}\right)} \approx 0.258$ & $ \lessapprox 0.264$ \\ \hline\hline \end{tabular} \end{table} \section{Outline of the Proofs} As described in the previous section, we formulate an optimization problem as follows.
\begin{problem}\label{problem1} For any given $c\in (0,1/4]$ and every positive integer $n$, consider the following program \begin{equation*} \begin{array}{cl} \displaystyle{\min_{p_X,p_Y}} & \displaystyle{\left(\max_{\mathbf{x}}p_X(\mathbf{x}) \max_{\mathbf{y}}p_Y(\mathbf{y})\right)^{1/n}},\nonumber \\ \text{s.t.} & \displaystyle{\frac{1}{n}\sum_{\mathbf{x},\mathbf{y}\in \{0,1\}^n}p_X(\mathbf{x})p_Y(\mathbf{y})\mathbf{x}^\mathrm{T}\mathbf{y}\leq c}, \end{array} \end{equation*} where $p_X$ and $p_Y$ are probability distributions over $\{0,1\}^n$. Let $P_n$ be the optimal value of the above program. We are interested in the limit of the sequence $\{P_n\}$ when $n\rightarrow\infty$. \end{problem} Specifically, we will need the case $c=c_Q$ for the physics problem of interest. Now we state the following theorem. \begin{theorem}\label{thm1} For Problem~\ref{problem1} with $c=c_Q$, \(\displaystyle \lim_{n\to\infty}P_n=4^{-h_{\mathrm{b}}(\sqrt{c_Q})}\), where \begin{equation}\label{eq:8} h_{\mathrm{b}}(t)=-t\log_2t-(1-t)\log_2(1-t) \end{equation} is the binary entropy function. \end{theorem} In the remainder of this section, we give an outline of the main techniques used to prove this theorem. We have the following bound for $P_n$. \begin{proposition}\label{prop:1} For all sufficiently large $n$, $1/4 \leq P_n < 1/2$. \end{proposition} \subsection{Simplified Problem} Let $S_X$ and $S_Y$ be the supports of the distributions $p_X$ and $p_Y$, respectively. Problem \ref{problem1} can be simplified if we only consider distributions that are uniform over their supports. Suppose that \begin{eqnarray*} p_X(\mathbf{x}) & = & \frac{1}{|S_X|},\ \forall \mathbf{x}\in S_X, \\ p_Y(\mathbf{y}) & = & \frac{1}{|S_Y|},\ \forall \mathbf{y}\in S_Y.
\end{eqnarray*} Then we have \begin{equation*} \left(\max_{\mathbf{x}}p_X(\mathbf{x}) \max_{\mathbf{y}}p_Y(\mathbf{y})\right)^{1/n} = \frac{1}{\sqrt[n]{|S_X|\cdot|S_Y|}}, \end{equation*} and \begin{equation*} \frac{1}{n}\sum_{\mathbf{x},\mathbf{y}\in \{0,1\}^n}p_X(\mathbf{x})p_Y(\mathbf{y})\mathbf{x}^\mathrm{T}\mathbf{y} = \frac{\sum_{\mathbf{x}\in S_X,\mathbf{y}\in S_Y}\mathbf{x}^\mathrm{T}\mathbf{y}}{n|S_X|\cdot|S_Y|}. \end{equation*} Define a new problem as follows: \begin{problem}\label{problem2} For any given $c\in (0,1/4]$ and every positive integer \(n\), consider the following program \begin{eqnarray} \displaystyle{\min_{S_X,S_Y}} & \ & \displaystyle{\frac{1}{\sqrt[n]{|S_X|\cdot|S_Y|}}},\nonumber \\ \text{s.t.} & & \displaystyle{\frac{1}{n|S_X|\cdot|S_Y|} \sum_{\mathbf{x}\in S_X,\mathbf{y}\in S_Y}\mathbf{x}^\mathrm{T}\mathbf{y} \leq c}, \label{constraint2} \end{eqnarray} where $S_X$ and $S_Y$ are subsets of \(\{0,1\}^n\). Let $P_n'$ be the optimal value of the above program. We are interested in the limit of the sequence $\{P_n'\}$ when $n\rightarrow\infty$. \end{problem} It is obvious that $P_n \leq P_n'$ since only distributions that are uniform over their supports are considered in Problem~\ref{problem2}. The following theorem enables us to focus on $\lim_n P_n'$. \begin{theorem}\label{t2} $\lim_{n\to\infty}P_n'/P_n=1$. \end{theorem} \subsection{Profiles} To study the properties of a set of binary vectors, we introduce the concept of a \textit{profile}. For any positive integer \(m\), we call a vector \(a=(a_1,a_2,\cdots,a_m)\in[0,1]^m\) a \emph{profile}, or an \textit{$m$-profile}. For each \(S\subseteq\{0,1\}^n\), define the \textit{profile} of the set \(S\) as \begin{equation*} \Gamma(S)=\begin{cases} \displaystyle \frac{1}{|S|}\sum_{s\in S}{s},& \text{\(|S|>0\)};\\ (0,0,\ldots,0), & \text{\(|S|=0\)}. \end{cases} \end{equation*} We see that \(\Gamma(S)\) is an \(n\)-profile.
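To make the definition concrete, here is a minimal Python sketch that computes the profile $\Gamma(S)$ of a set of binary vectors; the set $S$ below is an arbitrary illustrative choice, not taken from the paper.

```python
from fractions import Fraction

def profile(S, n):
    """Profile Gamma(S): the componentwise average of the vectors in S,
    or the all-zero vector when S is empty."""
    if not S:
        return [Fraction(0)] * n
    return [Fraction(sum(x[i] for x in S), len(S)) for i in range(n)]

# Illustrative set S subset of {0,1}^3 (hypothetical example)
S = [(1, 0, 0), (1, 1, 0), (0, 1, 0), (1, 0, 1)]
print(profile(S, 3))  # -> [Fraction(3, 4), Fraction(1, 2), Fraction(1, 4)]
```

Each component of the result lies in $[0,1]$, so the output is indeed an $n$-profile in the sense just defined.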
Define the \emph{characteristic function} of an $m$-profile \(a\) as \(\displaystyle f_a:[0,1]\to[0,1]\) such that \begin{equation*} f_a(t)=\begin{cases} a_1,& t=0;\\ a_{\lceil tm\rceil},& \forall 0<t\leq 1. \end{cases} \end{equation*} The characteristic function of a profile is a step function. For two profiles \(a\) and \(b\), we say \(a\leq b\) if \(f_a(r)\leq f_b(r)\) for all \(0\leq r\leq 1\); here $a$ and $b$ need not have the same number of components. For a vector $a$, we denote by $a_i$ the $i$-th component of $a$. \begin{lemma}\label{lm:1} For two $n$-profiles $a$ and $b$, $\frac{1}{n} a^{\mathrm{T}}b =\int_{0}^{1}{f_{a}(t)f_{b}(t)}\mathrm{d}t$. \end{lemma} \begin{IEEEproof} We write according to the definition that \begin{IEEEeqnarray*}{rCl} \frac{1}{n}a^{\mathrm{T}}b &=&\frac{1}{n} \sum_{i=1}^n a_ib_i \\ &=& \sum_{i=1}^n \int_{(i-1)/n}^{i/n} f_a(t) f_b(t) \mathrm{d}t \\ &=&\int_{0}^{1}{f_{a}(t)f_{b}(t)}\mathrm{d}t, \end{IEEEeqnarray*} where the second equality holds due to the fact that the characteristic function of a profile is a step function, equal to $a_i$ (resp.\ $b_i$) on the interval $((i-1)/n, i/n]$. \end{IEEEproof} The following lemma tells us how to represent the constraint in Problem~\ref{problem2} in a simple way using profiles. \begin{lemma} \label{lemma4} In Problem~\ref{problem2}, the left hand side of the constraint (\ref{constraint2}) can be expressed as \begin{equation*} \frac{1}{n|S_X|\cdot|S_Y|} \sum_{\mathbf{x}\in S_X,\mathbf{y}\in S_Y}\mathbf{x}^\mathrm{T}\mathbf{y} = \frac{1}{n} a^{\mathrm{T}}b, \end{equation*} where \(a=\Gamma(S_X)\) and \(b=\Gamma(S_Y)\).
\end{lemma} \begin{IEEEproof} We can write \begin{IEEEeqnarray*}{rCl} \IEEEeqnarraymulticol{3}{l}{\frac{1}{n}\cdot\frac{1}{|S_X|}\cdot\frac{1}{|S_Y|}\sum_{\mathbf{x}\in S_X, \mathbf{y}\in S_Y}\mathbf{x}^\mathrm{T}\mathbf{y}} \\ \quad &=& \frac{1}{n}\cdot\frac{1}{|S_X|}\cdot\frac{1}{|S_Y|}\left(\sum_{\mathbf{x}\in S_X}{\mathbf{x}}\right)^\mathrm{T}\left(\sum_{\mathbf{y}\in S_Y}{\mathbf{y}}\right) \\ &=&\frac{1}{n}\cdot\frac{1}{|S_X|}\cdot\frac{1}{|S_Y|}(|S_X|a)^{\mathrm{T}}(|S_Y|b) \IEEEyesnumber \label{eq:ds} \\ &=&\frac{1}{n}a^{\mathrm{T}}b, \end{IEEEeqnarray*} where \eqref{eq:ds} follows from the definition of the profile of a set of binary vectors. \end{IEEEproof} The following theorem states that to get the value of \(P_n'\), we only need to consider \(S_X\) and \(S_Y\) whose profiles have a certain monotonicity property. \begin{theorem}\label{t3} For all $n$, there exist $S_X, S_Y\subseteq \{0,1\}^n$ that achieve \(P_n'\) in Problem \ref{problem2} such that for \(a=\Gamma(S_X)\) and \(b=\Gamma(S_Y)\), \(0.5\geq a_1\geq a_2\geq\ldots\geq a_n\geq0\) and \(0\leq b_1\leq b_2\leq\ldots\leq b_n\leq 0.5\). \end{theorem} By Theorem \ref{t3}, it is sufficient for us to consider only profiles \(a\in[0,0.5]^m\). For each \(m\)-profile \(a\), define its \textit{$n$-volume} to be \begin{equation} V_n(a)=\max\left\{|S|:S\subseteq\{0,1\}^n,\ \Gamma(S)\leq a\right\}, \end{equation} where $n$ need not equal $m$. \begin{lemma}\label{volumelemma} For any two profiles \(p\) and \(q\), if \(p\leq q\), we have \(V_n(p)\leq V_n(q)\) for every positive integer \(n\). \end{lemma} \begin{IEEEproof} Notice that for any \(n\), any $n$-profile smaller than \(p\) is also smaller than \(q\), from which the lemma follows. \end{IEEEproof} The following theorem gives an upper bound on the volume of a profile, which will be used in the proof of the lower bound on $P_n'$. \begin{theorem}\label{theorem1} Fix an integer $m$ and let \(\displaystyle a\in\left[0,0.5\right]^m\) be an \(m\)-profile.
For any positive integer \(n\), the \textit{n-volume} of profile \(a\) satisfies \begin{equation} V_n(a)\leq 2^{\frac{n}{m}(\sum_{i=1}^{m}{h_{\mathrm{b}}(a_i)}+o(1))}, \end{equation} where $h_{\mathrm{b}}$ is the binary entropy function defined in \eqref{eq:8} and $o(1)\rightarrow 0$ as $n\rightarrow\infty$. \end{theorem} \subsection{Converse and Achievability} \begin{theorem}\label{maintheorem} For any sequence of \(S_X,S_Y\subseteq \{0,1\}^n\) such that \begin{equation*} \frac{1}{n|S_X|\cdot|S_Y|} \sum_{\mathbf{x}\in S_X,\mathbf{y}\in S_Y} \mathbf{x}^{\mathrm{T}}\mathbf{y}\leq c_Q, \end{equation*} we have \begin{equation*} \liminf_{n\to\infty}{\frac{1}{\sqrt[n]{|S_X||S_Y|}}}\geq 4^{-h_{\mathrm{b}}\left(\sqrt{c_Q}\right)}. \end{equation*} \end{theorem} We then give a construction of $S_X$ and $S_Y$ to show that the bound in Theorem \ref{maintheorem} is tight. \begin{theorem}\label{finaltheorem} There exists a sequence of \(S_X,S_Y\subseteq \{0,1\}^n\) such that \begin{equation*} \frac{1}{n|S_X|\cdot|S_Y|} \sum_{\mathbf{x}\in S_X,\mathbf{y}\in S_Y}\mathbf{x}^{\mathrm{T}}\mathbf{y}\leq c_Q, \end{equation*} and \begin{equation*} \lim_{n\to\infty}{\frac{1}{\sqrt[n]{|S_X||S_Y|}}}=4^{-h_{\mathrm{b}}\left(\sqrt{c_Q}\right)}. \end{equation*} \end{theorem} Now we are ready to prove Theorem~\ref{thm1}. \begin{IEEEproof}[Proof of Theorem~\ref{thm1}] Theorem \ref{maintheorem} implies that \begin{equation*} \liminf_{n\to\infty} P_n' \geq 4^{-h_{\mathrm{b}}\left(\sqrt{c_Q}\right)}, \end{equation*} and Theorem \ref{finaltheorem} implies that \begin{equation*} \limsup_{n\to\infty} P_n' \leq 4^{-h_{\mathrm{b}}\left(\sqrt{c_Q}\right)}. \end{equation*} Thus $\lim_{n\to\infty} P_n' = 4^{-h_{\mathrm{b}}\left(\sqrt{c_Q}\right)}$, which together with Theorem~\ref{t2} proves Theorem \ref{thm1}. 
\end{IEEEproof} \section{Proofs} \label{sec:proofs-main-results} \subsection{Proof of Proposition~\ref{prop:1}} The lower bound follows from $\max_\mathbf{x} p_X(\mathbf{x}) \geq 1/2^n$ for any distribution $p_X$ over $\{0,1\}^n$. To prove the upper bound, consider the following two distributions: \begin{equation*} p_X(\mathbf{x}) = \begin{cases} 1-2c, & \mathbf{x} = \mathbf{0} \\ 2c/(2^n-1), & \mathbf{x} \neq \mathbf{0}, \end{cases} \end{equation*} where $c\in (0,1/4]$ is as given in Problem~\ref{problem1}, and $p_Y(\mathbf{y})=1/2^n$ for all $\mathbf{y}\in\{0,1\}^n$. We then have \begin{IEEEeqnarray*}{rCl} \IEEEeqnarraymulticol{3}{l}{\frac{1}{n}\sum_{\mathbf{x},\mathbf{y}\in \{0,1\}^n}p_X(\mathbf{x})p_Y(\mathbf{y})\mathbf{x}^\mathrm{T}\mathbf{y}} \\ \quad & = & \frac{2c}{2^n(2^n-1)} \cdot \frac{1}{n}\sum_{\mathbf{x},\mathbf{y}\in \{0,1\}^n} \mathbf{x}^\mathrm{T}\mathbf{y} \\ & = & \frac{c }{2^{n-1}(2^n-1)} 2^{2(n-1)} \leq c, \end{IEEEeqnarray*} and \begin{IEEEeqnarray*}{rCl} \IEEEeqnarraymulticol{3}{l}{P_n \leq \left(\max_{\mathbf{x}}p_X(\mathbf{x}) \max_{\mathbf{y}}p_Y(\mathbf{y})\right)^{1/n}} \\ \quad & = & \frac{1}{2} \left(\max\{1-2c, 2c/(2^n-1)\} \right)^{1/n}\\ & = & \frac{1}{2}(1-2c)^{1/n} < \frac{1}{2}, \end{IEEEeqnarray*} where the second equality follows from $c\leq 1/4$ and the last inequality follows from $c>0$. \subsection{Proof of Theorem \ref{t2}} Suppose that $p_X$ and $p_Y$ on $\{0,1\}^n$ achieve the minimum objective value $P_n$ in Problem \ref{problem1}. Write \begin{equation*} \sum_{\mathbf{x},\mathbf{y}\in \{0,1\}^n}p_X(\mathbf{x})p_Y(\mathbf{y})\mathbf{x}^{\mathrm{T}}\mathbf{y}=\sum_\mathbf{x} p_X(\mathbf{x})\theta_{p_Y}(\mathbf{x}), \end{equation*} where \begin{equation*} \theta_{p_Y}(\mathbf{x})=\mathbf{x}^{\mathrm{T}}\left(\sum_\mathbf{y} p_Y(\mathbf{y})\mathbf{y}\right). \end{equation*} Let $P_X=\max_\mathbf{x} p_X(\mathbf{x})$. We know that $P_X>0$. If $P_X=1$, then there exists $\mathbf{x}_0$ such that $p_X(\mathbf{x}_0) = 1$.
In this case, the objective value satisfies $\left(\max_{\mathbf{x}}p_X(\mathbf{x})\max_{\mathbf{y}}p_Y(\mathbf{y})\right)^{1/n}=\left(\max_{\mathbf{y}}p_Y(\mathbf{y})\right)^{1/n}\geq \left(2^{-n}\right)^{1/n}=1/2$, which contradicts $P_n<1/2$ (see Proposition~\ref{prop:1}). Therefore, $0<P_X<1$. Now consider the following linear program: \begin{equation}\label{eq:9a} \begin{aligned} \min_{p_X} \qquad &\sum_\mathbf{x} p_X(\mathbf{x}) \theta_{p_Y}(\mathbf{x}),\\ \text{s.t.}\qquad &p_X(\mathbf{x})\leq P_X, \ \forall \mathbf{x}\in\{0,1\}^n. \end{aligned} \end{equation} Let $p_X^{*}$ be an optimal distribution that minimizes the objective of \eqref{eq:9a}. Since a linear program achieves its optimal value at an extreme point of the feasible region, there must be $\lfloor\frac{1}{P_X}\rfloor$ sequences $\mathbf{x}$ with $p_X^*(\mathbf{x})=P_X$ and one sequence $\mathbf{z}$ with $p_X^*(\mathbf{z}) = 1-\lfloor\frac{1}{P_X}\rfloor P_X$. For any other sequence $\mathbf{x}$, we have $p_X^*(\mathbf{x})=0$. We then have \begin{equation*} \sum_\mathbf{x} p_X^{*}(\mathbf{x})\theta_{p_Y}(\mathbf{x})\leq \sum_\mathbf{x} p_X(\mathbf{x})\theta_{p_Y}(\mathbf{x})\leq nc, \end{equation*} and \begin{equation*} \left(\max_\mathbf{x} p_X^{*}(\mathbf{x})\max_\mathbf{y} p_Y(\mathbf{y})\right)^{1/n}=\left(P_X\cdot\max_{\mathbf{y}}p_Y(\mathbf{y})\right)^{1/n}=P_n. \end{equation*} Therefore, $p_X^{*}$ and $p_Y$ also attain the minimum objective value $P_n$ in Problem~\ref{problem1}. Let $S_X$ be the support of $p_X^*$. We have $|S_X|= \lceil \frac{1}{P_X} \rceil$, and $\mathbf{z}$ can be chosen so that $\theta_{p_Y}(\mathbf{z}) \geq \theta_{p_Y}(\mathbf{x})$ for any $\mathbf{x} \in S_X$. Let $\bar p_X$ be the uniform distribution over $S_X\backslash\{\mathbf{z}\}$. Notice that for all $\mathbf{x} \in S_X\backslash\{\mathbf{z}\}$, \begin{equation*} \bar p_X(\mathbf{x}) \geq p_X^{*}(\mathbf{x}), \end{equation*} and \begin{equation*} \sum_{\mathbf{x}\in S_X\backslash\{\mathbf{z}\}}(\bar p_X(\mathbf{x})-p_X^{*}(\mathbf{x}))=p_X^{*}(\mathbf{z}).
\end{equation*} We have \begin{eqnarray*} &&\sum_{\mathbf{x},\mathbf{y}}\bar p_X(\mathbf{x})p_Y(\mathbf{y})\mathbf{x}^{\mathrm{T}}\mathbf{y}-\sum_{\mathbf{x},\mathbf{y}}p_X^{*}(\mathbf{x})p_Y(\mathbf{y})\mathbf{x}^{\mathrm{T}}\mathbf{y}\\ &=&\sum_{\mathbf{x}\in S_X\backslash\{\mathbf{z}\}}(\bar p_X(\mathbf{x})-p_X^{*}(\mathbf{x})) \theta_{p_Y}(\mathbf{x}) -p_X^{*}(\mathbf{z}) \theta_{p_Y}(\mathbf{z})\\ &\leq &\sum_{\mathbf{x}\in S_X\backslash\{\mathbf{z}\}}(\bar p_X(\mathbf{x})-p_X^{*}(\mathbf{x})) \theta_{p_Y}(\mathbf{z}) -p_X^{*}(\mathbf{z}) \theta_{p_Y}(\mathbf{z})\\ &=& 0. \end{eqnarray*} Thus \begin{equation}\label{eq:10} \sum_{\mathbf{x},\mathbf{y}}\bar p_X(\mathbf{x})p_Y(\mathbf{y})\mathbf{x}^{\mathrm{T}}\mathbf{y} \leq \sum_{\mathbf{x},\mathbf{y}}p_X^{*}(\mathbf{x})p_Y(\mathbf{y})\mathbf{x}^{\mathrm{T}}\mathbf{y} \leq nc. \end{equation} Let $P_n^\dagger = \min_{p_X,p_Y} \left(\max_{\mathbf{x}} p_X(\mathbf{x}) \max_{\mathbf{y}}p_Y(\mathbf{y})\right)^{1/n}$ such that $p_X$ and $p_Y$ satisfy the constraint of Problem~\ref{problem1} and $p_X$ is uniform over its support. We have \begin{eqnarray*} P_n \leq P_n^\dagger & \leq & \left(\max_{\mathbf{x}}\bar p_X(\mathbf{x})\max_{\mathbf{y}}p_Y(\mathbf{y})\right)^{1/n}\\ & = & \left(\frac{1}{\lfloor 1/P_X\rfloor}\max_{\mathbf{y}}p_Y(\mathbf{y})\right)^{1/n} \\ & \leq & \left(\frac{1}{\lceil 1/P_X\rceil - 1}\max_{\mathbf{y}}p_Y(\mathbf{y})\right)^{1/n}\\ & \leq & \left(3P_X\max_{\mathbf{y}}p_Y(\mathbf{y})\right)^{1/n} \\ & = & 3^{1/n} P_n, \end{eqnarray*} where the second inequality follows from the fact that $\bar p_X$ and $p_Y$ satisfy the constraint of Problem~\ref{problem1} (see \eqref{eq:10}), and the last inequality follows from $0<P_X<1$ and Lemma~\ref{l3} (to be proved later in this section). Therefore, $\lim_{n\rightarrow \infty} P_n^\dagger/P_n = 1$. A similar technique can be used to show that $\lim_{n\rightarrow \infty} P_n'/P_n^\dagger = 1$, which completes the proof of this theorem.
Specifically, suppose that $p_X,p_Y$ on $\{0,1\}^n$ achieve $P_n^\dagger$, where $p_X$ is uniform on its support. Define $P_Y=\max_{\mathbf{y}}p_Y(\mathbf{y})$ and $P_X=\max_{\mathbf{x}}p_X(\mathbf{x})$. Similar to the above argument, there exists a distribution $p_Y^{*}$ such that \begin{enumerate} \item for $\lfloor\frac{1}{P_Y}\rfloor$ sequences $\mathbf{y}$, $p_Y^*(\mathbf{y}) = P_Y$, for one more sequence $\mathbf{y}_0$, $p_Y^*(\mathbf{y}_0)=1-\lfloor\frac{1}{P_Y}\rfloor P_Y$, and for all other sequences $\mathbf{y}$, $p_Y^*(\mathbf{y})=0$; \item $\sum_{\mathbf{x},\mathbf{y}}p_X(\mathbf{x})p_Y^*(\mathbf{y}) \mathbf{x}^\mathrm{T}\mathbf{y} \leq \sum_{\mathbf{x},\mathbf{y}}p_X(\mathbf{x})p_Y(\mathbf{y}) \mathbf{x}^\mathrm{T}\mathbf{y} \leq nc$; and \item $(\max_\mathbf{x} p_X(\mathbf{x}) \max_{\mathbf{y}}p_Y^*(\mathbf{y}))^{1/n} = (P_XP_Y)^{1/n}$. \end{enumerate} Let $S_Y$ be the support of $p_Y^{*}$, and let $\bar p_Y$ be the uniform distribution over $S_Y\backslash\{\mathbf{y}_0\}$. By reasoning similar to that of \eqref{eq:10}, we have \begin{equation*} \sum_{\mathbf{x},\mathbf{y}} p_X(\mathbf{x})\bar p_Y(\mathbf{y})\mathbf{x}^{\mathrm{T}}\mathbf{y} \leq \sum_{\mathbf{x},\mathbf{y}}p_X(\mathbf{x})p_Y^*(\mathbf{y})\mathbf{x}^{\mathrm{T}}\mathbf{y} \leq nc. \end{equation*} Again, according to Lemma \ref{l3}, \begin{eqnarray*} P_n^\dagger \leq P_n'& \leq & \left(P_X \max_{\mathbf{y}}\bar p_Y(\mathbf{y})\right)^{1/n} \\ & = & \left(P_X \frac{1}{\lfloor 1/P_Y\rfloor}\right)^{1/n} \\ & \leq & \left(P_X \frac{1}{\lceil 1/P_Y\rceil -1 }\right)^{1/n} \\ & \leq & (3P_XP_Y)^{1/n} \\ &=&\sqrt[n]{3}P_n^\dagger, \end{eqnarray*} and hence $\lim_{n\rightarrow\infty} P_n'/P_n^\dagger = 1$. \begin{lemma}\label{l3} For every $x\in(0,1)$, \begin{equation*} x(\lceil 1/x \rceil-1)\geq \frac{1}{3}. \end{equation*} \end{lemma} \begin{IEEEproof} If $x \geq \frac{1}{3}$, then \begin{equation*} x(\lceil 1/x \rceil-1)\geq x \geq \frac{1}{3}.
\end{equation*} If $x<\frac{1}{3}$, then \begin{equation*} x(\lceil 1/x \rceil-1)\geq x(1/x-2)\geq 1-2x>\frac{1}{3}. \end{equation*} \end{IEEEproof} \subsection{Proof of Theorem \ref{t3}} We first show that we only need to consider $S_X$ and $S_Y$ with profiles $a,b\in [0,0.5]^n$. Suppose that for some \(i\) we have \(a_i>\frac{1}{2}\). We obtain a new set $S_X'$ by flipping the $i$-th bit of all vectors in $S_X$. Let $a' = \Gamma(S_X')$. We have $a_k'=a_k$ for $k\neq i$ and $a_i'=1-a_i$. We know from Lemma~\ref{lemma4} that the constraint \eqref{constraint2} still holds with $S_X'$ in place of $S_X$, since $a_i'<0.5< a_i$. The objective function of Problem~\ref{problem2} does not change either, since $|S_X'|=|S_X|$. Similarly, we can modify $S_Y$ such that all \(b_i\leq \frac{1}{2}\). Without loss of generality, we assume $a_1\geq a_2\geq\cdots\geq a_n$; otherwise we simply permute the bit positions of all strings. Now rearrange $b_1,\ldots,b_n$ in non-decreasing order as $b_1'\leq\cdots\leq b_n'$. There must exist a set $S_Y'\subseteq \{0,1\}^n$ with $\Gamma(S_Y')=(b_1',\ldots,b_n')^{\mathrm{T}}$, obtained by permuting the bit positions of each string in $S_Y$. Then, by the rearrangement inequality, we have \begin{equation} \frac{1}{n|S_X||S_Y'|}\sum_{\mathbf{x}\in S_X,\mathbf{y}\in S_Y'}\mathbf{x}^{\mathrm{T}}\mathbf{y}=\frac{1}{n}\sum_{i=1}^{n}{a_ib_i'}\leq \frac{1}{n}\sum_{i=1}^{n}{a_ib_i}\leq c. \end{equation} The proof is completed by noting that $|S_Y'|=|S_Y|$, so the objective value is unchanged. \subsection{Proof of Theorem \ref{theorem1}} Logarithms in this proof are taken to base $2$. Consider a subset $S\subseteq \{0,1\}^n$ with $\Gamma(S)\leq a$. Define a random vector $X=(X_1,X_2,\ldots,X_n)$ over $\{0,1\}^n$ with support $S$ and $\Pr\{X=\mathbf{x}\} = \frac{1}{|S|}$ for each $\mathbf{x}\in S$. Recall that the $i$-th component of $\mathbf{x} \in \{0,1\}^n$ is denoted by $\mathbf{x}_i$. Let $l_k=\lfloor\frac{kn}{m}\rfloor$ for $k=0,1,\ldots,m$.
Since $(\text{E}[X_1],\text{E}[X_2],\ldots,\text{E}[X_n]) = \Gamma(S) \leq a$, we have for $k=1,\ldots,m$ and $i=1,\ldots,l_k-l_{k-1}$, $\text{E}[X_{l_{k-1}+i}]=f_{\Gamma(S)}(\frac{l_{k-1}+i}{n}) \leq f_{a}(\frac{l_{k-1}+i}{n})=a_k$. Note that $X_i$ is a binary random variable. Hence the entropy $H(X_{l_{k-1}+i}) \leq h_{\mathrm{b}}(a_k)$ for $k=1,\ldots,m$ and $i=1,\ldots,l_k-l_{k-1}$, since $h_{\mathrm{b}}$ is increasing on $[0,0.5]$ and $a_k\leq 0.5$. Therefore, \begin{eqnarray*} \log |S| = H(X) & \leq & \sum_{k=1}^m \sum_{i=1}^{l_k-l_{k-1}}H(X_{l_{k-1}+i}) \\ & \leq & \sum_{k=1}^m (l_k-l_{k-1}) h_{\mathrm{b}}(a_k) \\ & \leq & \frac{n}{m} \left(\sum_i h_{\mathrm{b}}(a_i)+o(1)\right), \end{eqnarray*} where the last inequality follows from $l_k-l_{k-1} \leq \frac{n}{m}+1$, and $o(1)$ tends to zero as $n$ tends to $\infty$ for fixed $m$. Since the above inequality holds for all subsets $S\subseteq \{0,1\}^n$ with $\Gamma(S)\leq a$, we have \begin{equation*} V_n(a) \leq 2^{\frac{n}{m} (\sum_i h_{\mathrm{b}}(a_i)+o(1))}. \end{equation*} \subsection{Proof of Theorem \ref{maintheorem}} Let \(a=\Gamma(S_X)\), \(b=\Gamma(S_Y)\). By Theorem \ref{t3}, it is sufficient for us to consider $S_X$ and $S_Y$ such that $0.5\geq a_1 \geq \ldots \geq a_n \geq 0$ and $0\leq b_1 \leq \ldots \leq b_n \leq 0.5$. Hence \(f_a\) is decreasing on \([0,1]\), and \(f_b\) is increasing on \([0,1]\). Define two $m$-profiles \(\bar{a}\) and \(\underline{a}\) such that for \(1\leq i\leq m\), \begin{equation*} \bar a_i=\frac{\lceil mf_a\left(\frac{i-1}{m}\right) \rceil}{m}, \ \underline a_i= \frac{\lfloor mf_a\left(\frac{i}{m}\right) \rfloor}{m}. \end{equation*} Both \(f_{\bar a}\) and \(f_{\underline{a}}\) are decreasing on \([0,1]\). \begin{lemma}\label{p1} \(\underline{a}\leq a\leq \bar{a}\). \end{lemma} \begin{IEEEproof} Notice that \(f_a\) is a decreasing function.
For every \(0\leq r\leq 1\), \begin{equation*} f_{\bar{a}}(r)=\bar{a}_{\lceil rm\rceil}\geq f_a\left(\frac{\lceil rm\rceil-1}{m}\right)\geq f_a(r), \end{equation*} and similarly, \begin{equation*} f_{\underline{a}}(r)=\underline{a}_{\lceil rm\rceil}\leq f_a\left(\frac{\lceil rm\rceil}{m}\right)\leq f_a(r). \end{equation*} Thus \(\underline{a}\leq a\leq \bar a\). \end{IEEEproof} Define two $m$-profiles \(\bar{b}\) and \(\underline{b}\) such that for \(1\leq i\leq m\), \begin{equation*} \bar b_i=\frac{\lceil mf_b\left(\frac{i}{m}\right) \rceil}{m}, \ \underline b_i= \frac{\lfloor mf_b\left(\frac{i-1}{m}\right) \rfloor}{m}. \end{equation*} Both \(f_{\bar{b}}\) and \(f_{\underline{b}}\) are increasing on \([0,1]\), and, similar to Lemma~\ref{p1}, we have the following lemma. \begin{lemma} \(\underline{b}\leq b\leq \bar{b}\). \end{lemma} Now we can prove the following lemma. \begin{lemma}\label{l10} For $m\geq 2$, \begin{equation} \frac{1}{m}\sum_{i=1}^{m}{\bar{a}_i\bar{b}_i}-\frac{1}{n}\sum_{i=1}^{n}a_ib_i<\frac{2}{m}. \end{equation} \end{lemma} \begin{IEEEproof} Observe that \begin{IEEEeqnarray*}{rCl} \frac{1}{m}\sum_{i=1}^{m}{\bar{a}_i\bar{b}_i}-\frac{1}{n}\sum_{i=1}^{n}a_ib_i&=& \frac{1}{m}\sum_{i=1}^{m}{\bar{a}_i\bar{b}_i}-\int_{0}^{1}{f_a(t)f_b(t)}\mathrm{d}t\\ &\leq & \frac{1}{m}\sum_{i=1}^{m}{\bar{a}_i\bar{b}_i} - \int_{0}^{1}{f_{\underline{a}}(t)f_{\underline{b}}(t)}\mathrm{d}t \\ &=&\frac{1}{m}\sum_{i=1}^{m}{\bar{a}_i\bar{b}_i}-\frac{1}{m}\sum_{i=1}^{m}{\underline{a}_i\underline{b}_i}, \end{IEEEeqnarray*} where the first and the last equalities follow from Lemma~\ref{lm:1} (all the characteristic functions involved are step functions), and the inequality holds since \(f_{\underline{a}}\leq f_a\) and \(f_{\underline{b}}\leq f_b\) pointwise and all these functions are nonnegative. By definition, we have for $1\leq i\leq m-1$, $m\underline{a}_{i}\geq m\bar{a}_{i+1}-1$ and $m\underline{b}_{i+1} \geq m\bar{b}_{i}-1$.
Hence \begin{IEEEeqnarray*}{rCl} \IEEEeqnarraymulticol{3}{l}{\frac{1}{m}\sum_{i=1}^{m}{\bar{a}_i\bar{b}_i}-\frac{1}{m}\sum_{i=1}^{m}{\underline{a}_i\underline{b}_i}} \\ & \leq & \frac{1}{m}\sum_{i=1}^{m}\bar{a}_i\bar{b}_i-\frac{1}{m}\sum_{i=2}^{m-1} \left(\bar{a}_{i+1}-\frac{1}{m}\right)\left(\bar{b}_{i-1}-\frac{1}{m}\right)\\ & = & \frac{1}{m}\Bigg(\bar{a}_1\bar{b}_1+\bar{a}_2\bar{b}_2+\sum_{i=3}^{m}\bar{a}_i(\bar{b}_i-\bar{b}_{i-2})+ \\ & & \quad \quad \sum_{i=2}^{m-1}\left(\frac{\bar{a}_{i+1}}{m}+\frac{\bar{b}_{i-1}}{m}-\frac{1}{m^2}\right)\Bigg) \\ & \leq & \frac{1}{m}\Bigg(0.25+0.25+\sum_{i=3}^{m}0.5(\bar{b}_i-\bar{b}_{i-2})+ \\ & & \quad \quad \sum_{i=2}^{m-1}\left(\frac{0.5}{m}+\frac{0.5}{m}\right)\Bigg) \\ & = & \frac{1}{m}\left(1.5-\frac{2}{m}+0.5\bar{b}_m+0.5\bar{b}_{m-1}-0.5\bar{b}_2-0.5\bar{b}_{1}\right) \\ & \leq & \frac{2}{m}, \end{IEEEeqnarray*} where we use the fact that $\bar{a}_i,\bar{b}_i \leq 0.5$. \end{IEEEproof} By Lemma \ref{l10} and the condition of the theorem (using the form given in Lemma~\ref{lemma4}), we have \begin{equation}\label{eq:12} \frac{1}{m}\sum_{i=1}^{m}{\bar{a}_i\bar{b}_i}\leq \frac{1}{n}\sum_{i=1}^{n}{a_ib_i}+\frac{2}{m}\leq c_Q+\frac{2}{m}. \end{equation} From Lemma \ref{volumelemma} and Theorem \ref{theorem1}, we know that \begin{IEEEeqnarray*}{rCl} |S_X||S_Y| & = & V_n(a)V_n(b) \\ & \leq & V_n(\bar{a})V_n(\bar{b}) \\ & \leq & 2^{\frac{n}{m}(\sum_{i=1}^{m}{\left(h_{\mathrm{b}}(\bar{a}_i)+h_{\mathrm{b}}(\bar{b}_i)\right)}+o(1))}, \end{IEEEeqnarray*} where $o(1)\rightarrow 0$ as $n\rightarrow \infty$. For $0\leq t \leq 0.25$, define \begin{equation} \label{eq:11} f(t)=\max_{2t\leq x\leq \frac{1}{2}}{\left(h_{\mathrm{b}}(x)+h_{\mathrm{b}}\left(\frac{t}{x}\right)\right)}. \end{equation} Some properties of the above function are given in Appendix~\ref{sec:convexity-function} (see Lemma \ref{lemma7} -- \ref{concavelemma}). 
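As a quick numerical sanity check (outside the proof), one can approximate $f$ on a grid and confirm that $f(c_Q)=2h_{\mathrm{b}}(\sqrt{c_Q})$, consistent with Lemma~\ref{lemma7}. The sketch below assumes $c_Q=(2-\sqrt{2})/4\approx 0.146$, the value corresponding to $S_Q=2\sqrt{2}$:

```python
import math

def hb(t):
    """Binary entropy to base 2, with hb(0) = hb(1) = 0."""
    if t in (0.0, 1.0):
        return 0.0
    return -t * math.log2(t) - (1 - t) * math.log2(1 - t)

def f(t, steps=20000):
    """Grid approximation of f(t) = max_{2t <= x <= 1/2} hb(x) + hb(t/x)."""
    lo, hi = 2 * t, 0.5
    best = 0.0
    for k in range(steps + 1):
        x = lo + (hi - lo) * k / steps
        best = max(best, hb(x) + hb(t / x))
    return best

c_Q = (2 - math.sqrt(2)) / 4        # assumed value of c_Q
target = 2 * hb(math.sqrt(c_Q))     # claimed value of f(c_Q)
print(f(c_Q) - target)              # close to 0
print(4 ** (-hb(math.sqrt(c_Q))))   # ≈ 0.264, the limit in Theorem 1
```

The maximizer sits at $x=\sqrt{t}$ (a stationary point of $h_{\mathrm{b}}(x)+h_{\mathrm{b}}(t/x)$), which lies inside $[2t,1/2]$ for $t=c_Q$.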
We have \begin{IEEEeqnarray*}{rCl} \frac{1}{m}\sum_{i=1}^{m}\left(h_{\mathrm{b}}(\bar{a}_i)+h_{\mathrm{b}}(\bar{b}_i)\right) & = & \frac{1}{m}\sum_{i=1}^{m}\left(h_{\mathrm{b}}(\bar{a}_i)+h_{\mathrm{b}}\left(\frac{\bar{a}_i\bar{b}_i}{\bar{b}_i}\right)\right) \\ & \leq & \frac{1}{m}\sum_{i=1}^{m}f(\bar{a}_i\bar{b}_i)\leq f\left(c_Q+\frac{2}{m}\right), \end{IEEEeqnarray*} where the first inequality follows from the definition of $f$ in \eqref{eq:11} (with $t=\bar{a}_i\bar{b}_i$ and $x=\bar{a}_i$) and the second inequality is obtained by applying \eqref{eq:12} and Lemma \ref{concavelemma}. Thus for any sufficiently large $m$, \begin{equation*} \liminf_{n\to\infty}{\frac{1}{\sqrt[n]{|S_X||S_Y|}}}\geq 2^{-f\left(c_Q+\frac{2}{m}\right)}. \end{equation*} Taking \(m\to\infty\), we have \begin{equation} \liminf_{n\to\infty}{\frac{1}{\sqrt[n]{|S_X||S_Y|}}}\geq 2^{-f(c_Q)}=4^{-h_{\mathrm{b}}(\sqrt{c_Q})}, \end{equation} where the last equality is implied by Lemma \ref{lemma7}. \subsection{Proof of Theorem \ref{finaltheorem}} For every $n$, let \begin{equation*} S_X=S_Y=\{\mathbf{x}\in\{0,1\}^{n}:\mathbf{x}\textrm{ contains at most }n\sqrt{c_Q}\textrm{ ones}\}. \end{equation*} Then \begin{equation} |S_X|=|S_Y|=\sum_{i=0}^{\lfloor n\sqrt{c_Q}\rfloor}{\binom{n}{i}}=2^{n(h_{\mathrm{b}}(\sqrt{c_Q})+o(1))}, \end{equation} where $o(1)\rightarrow 0$ as $n\rightarrow\infty$. Thus \begin{equation*} \lim_{n\to\infty}\frac{1}{\sqrt[n]{|S_X||S_Y|}}=\frac{1}{2^{2h_{\mathrm{b}}(\sqrt{c_Q})}}=4^{-h_{\mathrm{b}}(\sqrt{c_Q})}. \end{equation*} From the constructions of $S_X$ and $S_Y$, we know that \begin{equation*} \Gamma(S_X)=\Gamma(S_Y)\leq\left(\sqrt{c_Q}, \sqrt{c_Q},\cdots,\sqrt{c_Q}\right). \end{equation*} Therefore \begin{IEEEeqnarray*}{rCl} \frac{1}{n}\sum_{\mathbf{x}\in S_X, \mathbf{y}\in S_Y} \frac{1}{|S_X||S_Y|}\mathbf{x}^{\mathrm{T}}\mathbf{y} & = & \frac{1}{n} (\Gamma(S_X))^{\mathrm{T}}\Gamma(S_Y) \\ & \leq & \frac{1}{n}\sum_{i=1}^{n}{(\sqrt{c_Q})^2} \\ & = & c_Q.
\end{IEEEeqnarray*} Thus \(S_X\) and \(S_Y\) satisfy the constraint in Theorem \ref{finaltheorem}. \section{Concluding Remarks} \label{sec:concluding-remarks} In this paper, we determine for Problem~\ref{problem1} that when $c=c_Q$, \begin{equation} \label{eq:16} \lim_{n\rightarrow \infty} P_n = 4^{-h_{\mathrm{b}}(\sqrt{c})}, \end{equation} which is of particular interest for quantum information. Note that our technique also shows that \eqref{eq:16} holds for $c_Q \leq c < 1/4$. However, the existing technique in this paper does not imply \eqref{eq:16} for $c < c_Q$; it would, if we could show that $f(t)$ (defined in \eqref{eq:11}) is concave on $[0,0.25]$. But we can only show the concavity of $f(t)$ on the interval $[0.0625, 0.25]$ (see Appendix~\ref{sec:convexity-function}). Whether $f(t)$ is concave on $[0,0.25]$ is a question of independent mathematical interest. \section*{Acknowledgments} We thank Xiongfeng Ma, Xiao Yuan and Zhu Cao for introducing this problem to us and for providing insightful comments on our work. \appendices \section{Background of the Optimization Problem} \label{sec:deriv-optim-probl} \subsection{CHSH Inequality} A Bell test experiment has two spatially separated parties, Alice and Bob, who can randomly choose their device settings $X$ and $Y$ from the set $\{0,1\}$ and generate random output bits $A$ and $B$, respectively. The Clauser-Horne-Shimony-Holt (CHSH) inequality states that \begin{equation}\label{eq:1} S^{(1)} := \sum_{a,b,x,y\in\{0,1\}}(-1)^{a\oplus b + xy} q_{AB|XY}(a,b|x,y) \leq 2, \end{equation} where $\oplus$ denotes the exclusive-or of two bits, and $q_{AB|XY}(a,b|x,y)$ is the probability that outputs $a$ and $b$ are generated when the device settings are $x$ and $y$. To simplify the notation, we may also write $q_{AB|XY}(a,b|x,y)$ as $q(a,b|x,y)$, and use a similar convention for other probability distributions. The theory of quantum mechanics predicts a maximum value of $S^{(1)}$ equal to $S_Q=2\sqrt{2}$.
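The classical bound of $2$ in \eqref{eq:1} can be checked by brute force: with deterministic outputs $a(x)$, $b(y)$ and freely (uniformly) chosen settings, $S^{(1)}$ reduces to $\sum_{x,y}(-1)^{a(x)\oplus b(y)+xy}$, and a short sketch enumerates all $16$ local strategies:

```python
from itertools import product

# Enumerate all deterministic local strategies a(x), b(y) with x, y in {0,1}.
# For deterministic outputs, q(a,b|x,y) = 1 iff (a,b) = (a(x), b(y)), so
# S^(1) = sum over x,y of (-1)^(a(x) XOR b(y) + x*y).
best = max(
    sum((-1) ** ((a[x] ^ b[y]) + x * y) for x in (0, 1) for y in (0, 1))
    for a in product((0, 1), repeat=2)
    for b in product((0, 1), repeat=2)
)
print(best)  # -> 2, the classical CHSH bound
```

Every deterministic strategy attains $\pm 2$, so no local hidden variable model with free settings can reach the quantum value $S_Q=2\sqrt{2}$.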
In a local hidden variable model (LHVM), assume that an adversary Eve controls a variable $\lambda$ taking discrete values so that \begin{equation*} q(a,b|x,y) = \sum_{\lambda} q(a|x,\lambda)q(b|y,\lambda) q(\lambda|x,y), \end{equation*} where $q(a|x,\lambda)$ (resp.\ $q(b|y,\lambda)$) is the probability that $a$ is output when the setting of Alice (resp.\ Bob) is $x$ (resp.\ $y$), and $q(\lambda|x,y)$ is the conditional probability distribution of the variable $\lambda$ given $x$ and $y$. \emph{Free will} is assumed in the derivation of the CHSH inequality, i.e., \begin{equation}\label{eq:14} q(\lambda|x,y) = q(\lambda). \end{equation} With this assumption, the inequality \eqref{eq:1} holds for any LHVM. We consider the case that the device settings may not be chosen freely, i.e., \eqref{eq:14} may not hold. By Bayes' rule, \begin{equation*} q(\lambda|x,y) = \frac{q(x,y|\lambda) q(\lambda)}{q(x,y)} = 4 q(x,y|\lambda) q(\lambda), \end{equation*} where $q(x,y)$ is assumed to be $1/4$ so that Alice and Bob cannot detect the existence of the adversary Eve. In this case, \begin{equation}\label{eq:4} S = \sum_{\lambda} S_\lambda q(\lambda), \end{equation} where \begin{equation*} S_{\lambda} = 4 \sum_{a,b,x,y\in\{0,1\}} (-1)^{a\oplus b + xy} q(a|x,\lambda) q(b|y,\lambda) q(x,y|\lambda). \end{equation*} The adversary can pick the probabilities $q(\lambda)$, $q(x,y|\lambda)$, $q(a|x,\lambda)$ and $q(b|y,\lambda)$ to fake a violation of the Bell inequality. The following randomness measure is used in the literature \cite{Koh12,Pop13,yuan2014randomness}: \begin{equation*} P = \max_{x,y,\lambda}q(x,y|\lambda). \end{equation*} Note that $P$ takes values from $1/4$ to $1$. When $P=1/4$, all the device settings are picked uniformly, independently of $\lambda$. When $P=1$, for at least one value of $\lambda$, the device settings are deterministic.
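To illustrate how a threshold like $c_Q$ in Problem~\ref{problem1} emerges, fix the output functions $a(x,\lambda)=b(y,\lambda)=0$; then $S_\lambda = 4(1-2q(1,1|\lambda))$, and requiring $S_\lambda \geq S_Q = 2\sqrt{2}$ caps $q(1,1|\lambda)$ at $(4-S_Q)/8=(2-\sqrt{2})/4\approx 0.146$. A sketch of this arithmetic (we assume this value is the threshold $c_Q$ used in the main text):

```python
import math

S_Q = 2 * math.sqrt(2)

def S_lambda(q11):
    """S_lambda for output functions a = b = 0: 4 * (1 - 2*q(1,1|lambda))."""
    return 4 * (1 - 2 * q11)

# Largest q(1,1|lambda) for which S_lambda >= S_Q can still hold
c_Q = (4 - S_Q) / 8
print(c_Q)             # ≈ 0.146
print(S_lambda(c_Q))   # equals S_Q = 2*sqrt(2) exactly at the threshold
```

So faking the quantum value with these outputs forces the joint setting probability $q(1,1|\lambda)$ below $c_Q$, which is exactly the kind of constraint on $p_Xp_Y$ that appears in Problem~\ref{problem1}.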
We are interested in the minimum value of $P$ such that $S\geq S_Q$ for certain LHVMs in the independent device setting scenario, i.e., $q(x,y|\lambda) = q(x|\lambda)q(y|\lambda)$. In other words, we want to solve the following problem \begin{equation}\label{eq:2} \begin{array}{cl} \displaystyle{\min} & \max_{x,y,\lambda}q(x,y|\lambda)\\ \text{s.t.} & \sum_{\lambda} S_\lambda q(\lambda) \geq S_Q, \\ & \sum_{\lambda} q(x,y|\lambda)q(\lambda) = \frac{1}{4}, \\ & q(x,y|\lambda) = q(x|\lambda)q(y|\lambda), \end{array} \end{equation} where the minimization is over all the possible (conditional) distributions $q(\lambda)$, $q(a|x,\lambda)$, $q(b|y,\lambda)$ and $q(x,y|\lambda)$ with $q(x,y|\lambda)=q(x|\lambda)q(y|\lambda)$. Due to the convexity of the constraints with respect to $q(a|x,\lambda)$ and $q(b|y,\lambda)$, we can consider only deterministic distributions $q(a|x,\lambda)$ and $q(b|y,\lambda)$ without changing the optimal value of \eqref{eq:2}. Let $a = a(x,\lambda)$ and $b = b(y,\lambda)$. Rewrite \begin{equation}\label{eq:15} S_{\lambda} = 4 \sum_{x,y\in\{0,1\}} (-1)^{a(x,\lambda)\oplus b(y,\lambda) + xy} q(x,y|\lambda). \end{equation} In the above formulations, only a \emph{single run} of the test is performed. It is more realistic to consider that the device settings in different runs are correlated, which is referred to as the \emph{multiple-run} scenario, where the device settings $\mathbf{x}=(x_1,\ldots,x_n)^\mathrm{T}$ and $\mathbf{y}=(y_1,\ldots,y_n)^\mathrm{T}$ in $n$ runs of the tests follow a joint distribution $q(\mathbf{x},\mathbf{y}|\lambda)$. 
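A useful bookkeeping identity in the multiple-run setting is that the per-run average $\frac{1}{n}\sum_i(-1)^{a(x_i,\lambda)\oplus b(y_i,\lambda)+x_iy_i}$ depends on $(\mathbf{x},\mathbf{y})$ only through the empirical fractions of the four setting pairs $(x,y)$. A minimal sketch checks this on random inputs (the deterministic output functions $a$, $b$ below are arbitrary illustrative choices):

```python
import random

random.seed(1)
n = 12
a = {0: 0, 1: 1}   # illustrative deterministic outputs a(x)
b = {0: 0, 1: 0}   # illustrative deterministic outputs b(y)

xs = [random.randint(0, 1) for _ in range(n)]
ys = [random.randint(0, 1) for _ in range(n)]

# Direct per-run average
direct = sum((-1) ** ((a[x] ^ b[y]) + x * y) for x, y in zip(xs, ys)) / n

# The same average via the empirical fractions pi(x,y | xs, ys)
pi = {(x, y): sum(1 for u, v in zip(xs, ys) if (u, v) == (x, y)) / n
      for x in (0, 1) for y in (0, 1)}
via_pi = sum(pi[x, y] * (-1) ** ((a[x] ^ b[y]) + x * y)
             for x in (0, 1) for y in (0, 1))

print(abs(direct - via_pi) < 1e-12)  # -> True
```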
Similar to the discussion of the single-run scenario, for multiple runs, we have the CHSH inequality $S^{(n)}=\sum_{\lambda}S^{(n)}_{\lambda}q(\lambda) \leq 2$ with \begin{IEEEeqnarray*}{rCl} S^{(n)}_{\lambda} & = & \frac{4}{n} \sum_{\mathbf{x},\mathbf{y}\in \{0,1\}^n} q(\mathbf{x},\mathbf{y}|\lambda) \sum_{i=1}^n (-1)^{a(x_i,\lambda)\oplus b(y_i,\lambda) + x_iy_i} \\ & = & 4 \sum_{\mathbf{x},\mathbf{y}\in \{0,1\}^n} q(\mathbf{x},\mathbf{y}|\lambda) \Big[\pi(0,0|\mathbf{x},\mathbf{y}) (-1)^{a(0,\lambda)\oplus b(0,\lambda)} \\ & & + \pi(0,1|\mathbf{x},\mathbf{y}) (-1)^{a(0,\lambda)\oplus b(1,\lambda)} \\ & & + \pi(1,0|\mathbf{x},\mathbf{y}) (-1)^{a(1,\lambda)\oplus b(0,\lambda)} \\ & & + \pi(1,1|\mathbf{x},\mathbf{y}) (-1)^{a(1,\lambda)\oplus b(1,\lambda)+1} \Big] \\ & = & 4 \sum_{x,y\in \{0,1\}} (-1)^{a(x,\lambda)\oplus b(y,\lambda)+xy}\pi(x,y|\lambda) \IEEEyesnumber \label{eq:6} \end{IEEEeqnarray*} where $\pi(x,y|\mathbf{x},\mathbf{y})$ is the fraction of $(x,y)$ pairs among the pairs $(x_k,y_k), k=1,\ldots,n$, and \begin{equation*} \pi(x,y|\lambda) = \sum_{\mathbf{x},\mathbf{y}\in \{0,1\}^n} q(\mathbf{x},\mathbf{y}|\lambda) \pi(x,y|\mathbf{x},\mathbf{y}). \end{equation*} Note that \eqref{eq:6} shares the same form as \eqref{eq:15}. Define the measure of measurement dependence for multiple runs as \begin{equation*} P^{(n)} = \left(\max_{\mathbf{x},\mathbf{y},\lambda}q(\mathbf{x},\mathbf{y}|\lambda)\right)^{1/n}. 
\end{equation*} Under the independent device setting condition that $q(\mathbf{x},\mathbf{y}|\lambda) = q(\mathbf{x}|\lambda) q(\mathbf{y}|\lambda)$, the problem of interest now becomes \begin{equation}\label{eq:9} \begin{array}{cl} \displaystyle{\min} & \displaystyle{\left(\max_{\mathbf{x},\mathbf{y},\lambda} q(\mathbf{x},\mathbf{y}|\lambda)\right)^{1/n}} \\ \text{s.t.} & \displaystyle{\sum_{\lambda}S^{(n)}_\lambda q(\lambda) \geq S_Q} \\ & \sum_{\lambda} q(\mathbf{x},\mathbf{y}|\lambda) q(\lambda) = \frac{1}{4^n}, \\ & q(\mathbf{x},\mathbf{y}|\lambda) = q(\mathbf{x}|\lambda) q(\mathbf{y}|\lambda), \end{array} \end{equation} where $S^{(n)}_\lambda$ is defined in \eqref{eq:6}. Note that when $n=1$, \eqref{eq:9} becomes \eqref{eq:2}. \subsection{Simplification} We use the case $n=1$ to illustrate how to simplify the above optimization problem. First, we determine the choice of the output functions $a(x,\lambda)$ and $b(y,\lambda)$ using the approach in \cite{Pop13}. For a given value of $\lambda$, there are $16$ possible pairs of output functions $(a,b)$. Table~\ref{tab:outputfunction} lists the eight possible pairs of output functions with $a(0,\lambda)=0$. It is not necessary to consider the other eight possible pairs of output functions with $a(0,\lambda)=1$ since they give the same set of $S_\lambda$ as listed in the last column in Table~\ref{tab:outputfunction}. Since the output functions with indices $1,2,3,4$ are better than the output functions with indices $5,6,7,8$, respectively, we use the former four choices of the output functions.
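The $S_\lambda/4$ column of Table~\ref{tab:outputfunction} can be reproduced mechanically from \eqref{eq:15}: the coefficient of $q(x,y|\lambda)$ is $(-1)^{a(x,\lambda)\oplus b(y,\lambda)+xy}$. A short sketch recomputes the sign patterns and checks that rows $1$--$4$ dominate rows $5$--$8$ componentwise:

```python
from itertools import product

# Output function assignments (a(0), a(1), b(0), b(1)) from the table
rows = {
    1: (0, 0, 0, 0), 2: (0, 0, 0, 1), 3: (0, 1, 0, 0), 4: (0, 1, 1, 0),
    5: (0, 0, 1, 0), 6: (0, 0, 1, 1), 7: (0, 1, 0, 1), 8: (0, 1, 1, 1),
}

def signs(a0, a1, b0, b1):
    """Signs of q(0,0), q(0,1), q(1,0), q(1,1) in S_lambda/4."""
    a, b = (a0, a1), (b0, b1)
    return tuple((-1) ** ((a[x] ^ b[y]) + x * y)
                 for x, y in product((0, 1), repeat=2))

# Row 1 should give q(0,0)+q(0,1)+q(1,0)-q(1,1), i.e. (+1, +1, +1, -1)
print(signs(*rows[1]))
# Rows 1-4 dominate rows 5-8 componentwise, so they are never worse
print(all(s1 >= s2 for i in range(1, 5)
          for s1, s2 in zip(signs(*rows[i]), signs(*rows[i + 4]))))
```

The componentwise dominance is what justifies discarding rows $5$--$8$: for any fixed $q(\cdot,\cdot|\lambda)$, row $i$ yields an $S_\lambda$ at least as large as row $i+4$.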
\begin{table*} \centering \caption{\label{tab:outputfunction} Output function assignment.} \setlength{\tabcolsep}{10pt} \begin{tabular}{cccccc} \hline\hline & $a(0,\lambda)$ & $a(1,\lambda)$ & $b(0,\lambda)$ & $b(1,\lambda)$ & $S_\lambda/4$ \\ \hline 1 & 0 & 0 & 0 & 0 & $q(0,0|\lambda)+q(0,1|\lambda)+q(1,0|\lambda)-q(1,1|\lambda)$ \\ 2 & 0 & 0 & 0 & 1 & $q(0,0|\lambda)-q(0,1|\lambda)+q(1,0|\lambda)+q(1,1|\lambda)$ \\ 3 & 0 & 1 & 0 & 0 & $q(0,0|\lambda)+q(0,1|\lambda)-q(1,0|\lambda)+q(1,1|\lambda)$ \\ 4 & 0 & 1 & 1 & 0 & $-q(0,0|\lambda)+q(0,1|\lambda)+q(1,0|\lambda)+q(1,1|\lambda)$ \\ 5 & 0 & 0 & 1 & 0 & $-q(0,0|\lambda)+q(0,1|\lambda)-q(1,0|\lambda)-q(1,1|\lambda)$ \\ 6 & 0 & 0 & 1 & 1 & $-q(0,0|\lambda)-q(0,1|\lambda)-q(1,0|\lambda)+q(1,1|\lambda)$ \\ 7 & 0 & 1 & 0 & 1 & $q(0,0|\lambda)-q(0,1|\lambda)-q(1,0|\lambda)-q(1,1|\lambda)$ \\ 8 & 0 & 1 & 1 & 1 & $-q(0,0|\lambda)-q(0,1|\lambda)+q(1,0|\lambda)-q(1,1|\lambda)$ \\ \hline\hline \end{tabular} \end{table*} With the choices of the output functions as specified above, the constraint $\sum_{\lambda} q(x,y|\lambda)q(\lambda) = \frac{1}{4}$ is redundant. To show this, we consider a LHVM (denoted by $L^*$) with a constant $\lambda$, and output functions $a^*(x)=b^*(y)=0$. (Other choices of $a^*(x)$ and $b^*(y)$ can be shown similarly.) We use $q^*(x,y)$ to denote the device setting distribution related to this LHVM. Define a new LHVM (denoted by $L$) with $\lambda = 0, 1, 2, 3$ and $q(\lambda)=1/4$ as follows: The output functions are assigned according to Table~\ref{tab:output}, and the device setting distributions are assigned according to Table~\ref{tab:setting}. 
It can be verified that \begin{IEEEeqnarray*}{rCl} P & = & \max_{x,y\in\{0,1\},\lambda\in\{0,1,2,3\}} q(x,y|\lambda) \\ & = & \max_{x,y\in\{0,1\}} q^*(x,y), \end{IEEEeqnarray*} and \begin{IEEEeqnarray*}{rCl} S & = & \sum_{\lambda\in\{0,1,2,3\}} q(\lambda) 4 \sum_{x,y\in\{0,1\}} (-1)^{a(x,\lambda)\oplus b(y,\lambda) + xy} q(x,y|\lambda) \\ & = & 4\left[q^*(0,0) + q^*(0,1) + q^*(1,0) - q^*(1,1)\right]. \end{IEEEeqnarray*} Hence, if LHVM $L^*$ achieves the optimal value of \eqref{eq:2}, so does LHVM $L$, which has $q(x,y) = 1/4$. \begin{table} \centering \caption{\label{tab:output} Output function assignment.} \setlength{\tabcolsep}{10pt} \begin{tabular}{ccccc} \hline\hline $\lambda$ & $a(0,\lambda)$ & $a(1,\lambda)$ & $b(0,\lambda)$ & $b(1,\lambda)$ \\ \hline 0 & 0 & 0 & 0 & 0\\ 1 & 0 & 0 & 0 & 1\\ 2 & 0 & 1 & 0 & 0\\ 3 & 0 & 1 & 1 & 0 \\ \hline\hline \end{tabular} \end{table} \begin{table} \centering \caption{\label{tab:setting} Assignment of the device setting distributions.} \setlength{\tabcolsep}{10pt} \begin{tabular}{ccccc} \hline\hline $\lambda$ & $q(0,0|\lambda)$ & $q(0,1|\lambda)$ & $q(1,0|\lambda)$ & $q(1,1|\lambda)$ \\ \hline 0 & $q^*(0,0)$ & $q^*(0,1)$ & $q^*(1,0)$ & $q^*(1,1)$ \\ 1 & $q^*(1,0)$ & $q^*(1,1)$ & $q^*(0,0)$ & $q^*(0,1)$ \\ 2 & $q^*(0,1)$ & $q^*(0,0)$ & $q^*(1,1)$ & $q^*(1,0)$ \\ 3 & $q^*(1,1)$ & $q^*(1,0)$ & $q^*(0,1)$ & $q^*(0,0)$ \\ \hline\hline \end{tabular} \end{table} Further, for each of the four pairs of output functions with indices $1,2,3,4$ in Table~\ref{tab:outputfunction}, the corresponding $S_{\lambda}$ involves only one summand with a negative coefficient. Since the four probability masses $q(0,0|\lambda)$, $q(0,1|\lambda)$, $q(1,0|\lambda)$ and $q(1,1|\lambda)$ play symmetric roles, these four pairs of output functions achieve the same optimal value. Here we use $a(x,\lambda)=b(y,\lambda)=0$, so that \begin{equation*} \sum_{\lambda} S^{(1)}_\lambda q(\lambda) = 4 - 8 q_{XY}(1,1).
\end{equation*} With these simplifications, the above minimization problem becomes \begin{equation}\label{eq:42} \begin{array}{cl} \displaystyle{\min} & \max_{x,y,\lambda}q(x,y|\lambda)\\ \text{s.t.} & q_{XY}(1,1) \leq \dfrac{4-S_Q}{8},\\ & q(x,y|\lambda) = q(x|\lambda)q(y|\lambda). \end{array} \end{equation} For any $\lambda$ and $c\in [0,0.5]$, let $P(c)$ be the minimum value of $\max_{x,y} q(x,y|\lambda)$ such that $q(1,1|\lambda) \leq c, q(x,y|\lambda) = q(x|\lambda)q(y|\lambda)$. Note that $P(c)$ does not depend on the choice of $\lambda$, and $P(c)$ is a non-increasing function of $c$. It is clear that if we use only a constant $\lambda$ in \eqref{eq:42}, the optimal value is $P(\frac{4-S_Q}{8})$. Now we show that it is sufficient to consider a constant $\lambda$. Suppose that $q^*(x,y|\lambda)$ achieves the optimal value of \eqref{eq:42}. Let $c_{\lambda} = q^*(1,1|\lambda)$. By the first constraint of \eqref{eq:42}, we have $\sum_{\lambda} q^*(\lambda) c_{\lambda} \leq \frac{4-S_Q}{8}$, which implies the existence of some $\lambda^*$ such that $c_{\lambda^*} \leq \frac{4-S_Q}{8}$. By the definition of $P(c)$, we have, for every $\lambda$, \begin{equation*} \max_{x,y} q^*(x,y|\lambda) \geq P(c_{\lambda}), \end{equation*} which implies \begin{equation*} \max_{\lambda,x,y} q^*(x,y|\lambda) \geq \max_{\lambda} P(c_{\lambda}) \geq P(c_{\lambda^*}) \geq P((4-S_Q)/8). \end{equation*} In other words, using a LHVM with $\lambda$ taking multiple values cannot achieve a smaller optimal value than $P(\frac{4-S_Q}{8})$.
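The redundancy argument above can also be checked numerically. The following sketch (ours; all names are our own) draws an arbitrary setting distribution $q^*(x,y)$, builds the four-valued model of Tables~\ref{tab:output} and~\ref{tab:setting}, and confirms that $S$ and $P$ are unchanged while the averaged setting distribution becomes uniform:

```python
import random

random.seed(1)

# An arbitrary device-setting distribution q*(x,y) for the single-lambda
# model L* with output functions a*(x) = b*(y) = 0.
keys = [(0, 0), (0, 1), (1, 0), (1, 1)]
w = [random.random() for _ in keys]
q_star = dict(zip(keys, (v / sum(w) for v in w)))

# Four-valued model L (Tables tab:output and tab:setting): lambda = 1
# relabels x, lambda = 2 relabels y, lambda = 3 relabels both.
a = {0: (0, 0), 1: (0, 0), 2: (0, 1), 3: (0, 1)}
b = {0: (0, 0), 1: (0, 1), 2: (0, 0), 3: (1, 0)}
flip = {0: (0, 0), 1: (1, 0), 2: (0, 1), 3: (1, 1)}

def q(x, y, lam):
    fx, fy = flip[lam]
    return q_star[(x ^ fx, y ^ fy)]

# CHSH value of L* (with a* = b* = 0) and of L with q(lambda) = 1/4.
S_star = 4 * sum((-1) ** (x * y) * q_star[(x, y)] for x, y in keys)
S = sum(0.25 * 4 * sum((-1) ** ((a[l][x] ^ b[l][y]) + x * y) * q(x, y, l)
                       for x, y in keys) for l in range(4))

assert abs(S - S_star) < 1e-12   # CHSH value is unchanged
assert max(q(x, y, l) for x, y in keys for l in range(4)) == max(q_star.values())
# The averaged setting distribution of L is uniform: q(x, y) = 1/4.
assert all(abs(sum(0.25 * q(x, y, l) for l in range(4)) - 0.25) < 1e-12
           for x, y in keys)
```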
Hence, it is sufficient to consider a constant $\lambda$, and \eqref{eq:42} becomes \begin{equation*} \begin{array}{cl} \displaystyle{\min} & \max_{x,y}q(x)q(y)\\ \text{s.t.} & q_{X}(1)q_{Y}(1)\leq \dfrac{4-S_Q}{8}. \end{array} \end{equation*} By reasoning similar to the single-run case, we can use a deterministic strategy \(\lambda\) with \(a(x,\lambda)=b(y,\lambda)=0\), and simplify problem \eqref{eq:9} to \begin{equation*} \begin{array}{cl} \displaystyle{\min} & \displaystyle{\left(\max_{\mathbf{x},\mathbf{y}}q(\mathbf{x},\mathbf{y})\right)^{1/n}} \\ \text{s.t.} & \displaystyle{\frac{1}{n}\sum_{\mathbf{x},\mathbf{y}\in \{0,1\}^n}q(\mathbf{x},\mathbf{y})\mathbf{x}^\mathrm{T}\mathbf{y}\leq \frac{4-S_Q}{8}},\\ & q(\mathbf{x},\mathbf{y}) = q(\mathbf{x}) q(\mathbf{y}), \end{array} \end{equation*} which is \eqref{eq:7}. \section{Properties of a Function} \label{sec:convexity-function} We study some properties of the function $f(t)$ defined in \eqref{eq:11}. Recall that \begin{equation*} f(t)=\max_{2t\leq x\leq \frac{1}{2}}{\left(h_{\mathrm{b}}(x)+h_{\mathrm{b}}\left(\frac{t}{x}\right)\right)},\quad 0\leq t \leq 0.25. \end{equation*} The next lemma implies that $f(t) = 2h_{\mathrm{b}}(\sqrt{t})$ for \(0.0625\leq t\leq 0.25\). \begin{lemma}\label{lemma7} For \(0.0625\leq t\leq 0.25\) and \(2t\leq x\leq 0.5\), we have \begin{equation*}\label{eq:7.1} h_{\mathrm{b}}(x)+h_{\mathrm{b}}\left(\frac{t}{x}\right)\leq 2h_{\mathrm{b}}(\sqrt{t}), \end{equation*} where equality holds at $x=\sqrt{t}$. That is, $f(t)=2h_{\mathrm{b}}(\sqrt{t})$ for $t \in [0.0625, 0.25]$. \end{lemma} \begin{IEEEproof} Fix \(t\). Let \( u(x)=h_{\mathrm{b}}(x)+h_{\mathrm{b}}\left(\frac{t}{x}\right)\). Observe that \( u(x)=u\left(\frac{t}{x}\right)\). Thus it suffices to show \(u(x)\leq 2h_{\mathrm{b}}(\sqrt{t})\) for \(2t\leq x\leq \sqrt{t}\).
Taking the derivative of \(u\), we have \begin{equation*} u'(x)=-\log x+\log (1-x)+\frac{t}{x^2}\log\left(\frac{t}{x}\right)-\frac{t}{x^2}\log\left(1-\frac{t}{x}\right). \end{equation*} Letting \(v(x)=-x\log x+x\log(1-x)\), we have \begin{equation}\label{eq:7.2} xu'(x)=v(x)-v\left(\frac{t}{x}\right). \end{equation} From \( t\geq\frac{1}{16}\) we have \begin{equation}\label{eq:7.3} \frac{t}{x}\geq\frac{1}{2}-x\geq\frac{1}{4}. \end{equation} We may verify that \(v\) is decreasing on \([0.25,0.5]\). If \(x\geq0.25\), then \(xu'(x)\geq0\), since \(\displaystyle x\leq\frac{t}{x}\). Otherwise, we may verify \(v(x)\geq v(0.5-x)\) for \(x\leq 0.25\); then, applying (\ref{eq:7.3}) to (\ref{eq:7.2}), we have \begin{equation}\label{eq:7.4} xu'(x)=v(x)-v\left(\frac{t}{x}\right)\geq v(x)-v\left(0.5-x\right)\geq 0. \end{equation} Therefore \(u\) is an increasing function on \([2t,\sqrt{t}]\), which implies \(u(x)\leq u(\sqrt{t})=2h_{\mathrm{b}}(\sqrt{t})\). \end{IEEEproof} \begin{lemma} Function \(f(t)\) is increasing on \(\displaystyle \left[0,0.25\right]\). \end{lemma} \begin{IEEEproof} To show that \(f\) is increasing, fix any \(0\leq t_1<t_2\leq0.25\). We write \( f(t_1)=h_{\mathrm{b}}(x_1)+h_{\mathrm{b}}(y_1)\), where \(x_1\) maximizes \( h_{\mathrm{b}}(x)+h_{\mathrm{b}}\left(\frac{t_1}{x}\right)\) for \(x\in[2t_1,0.5]\) and \(y_1=t_1/x_1\). We know that \(0\leq x_1,y_1\leq 0.5\). Choose \(x_2\) and \(y_2\) with \( x_1\leq x_2\leq \frac{1}{2}\) and \( y_1\leq y_2\leq\frac{1}{2}\) such that \(x_2y_2=t_2\); this is possible since \(t_2\leq 0.25\). Since \(h_{\mathrm{b}}\) is increasing on \([0,0.5]\), \begin{equation*} f(t_1)=h_{\mathrm{b}}(x_1)+h_{\mathrm{b}}(y_1)\leq h_{\mathrm{b}}(x_2)+h_{\mathrm{b}}(y_2)\leq f(t_2). \end{equation*} \end{IEEEproof} \begin{lemma}\label{concavelemma} For any \(c'\geq c_Q = \frac{2-\sqrt{2}}{4} \approx 0.1464 \), if \(k\) real numbers \(\displaystyle t_1,t_2,\ldots,t_k\in\left[0,0.25\right]\) satisfy \(\frac{1}{k}\sum_{i=1}^{k}{t_i}\leq c'\), then \begin{equation*} \frac{1}{k}\sum_{i=1}^{k}{f(t_i)}\leq f(c').
\end{equation*} \end{lemma} \begin{IEEEproof} Let \(f_0(t)=2h_{\mathrm{b}}\left(\sqrt{t}\right)\), \(\displaystyle 0\leq t\leq 0.25\). By Lemma \ref{lemma7}, \(\displaystyle f(t)=f_0(t)\) for \(t\geq 0.0625\). Let \(f_1\) be the tangent line to \(f_0\) at the point \(\left(0.14,f_0(0.14)\right)\). Since \(h_{\mathrm{b}}\) is concave and increasing on \([0,0.5]\) and \(\sqrt{t}\) is concave, the composition \(f_0(t)\) is concave on \([0,0.25]\). Since \(f_0\) is concave and increasing on \(\left[0,\frac{1}{4}\right]\), the tangent line \(f_1\) is increasing, and \(f_0(t)\leq f_1(t)\) for every \(\displaystyle t\in[0,0.25]\). Let \(g(t)\) be a function defined on \(\left[0,0.25\right]\) such that \begin{equation*} g(t)=\begin{cases} f_1(t)& 0\leq t\leq 0.14;\\ f_0(t)& 0.14<t\leq0.25. \end{cases} \end{equation*} Observe that \(g\) is linear on \([0,0.14]\) and concave on \([0.14,0.25]\); since the two pieces agree in value and slope at \(t=0.14\) (as \(f_1\) is tangent to \(f_0\) there), \(g\) is concave on \([0,0.25]\). For \(0\leq t<0.0625\), \begin{IEEEeqnarray*}{rCl} f(t) & \leq & f(0.0625) \\ & = & f_0(0.0625) \ (\approx 1.623) \\ & < & g(0) \ (\approx 1.630) \\ & \leq & g(t). \end{IEEEeqnarray*} For \(0.0625\leq t\leq0.25\), \begin{equation*} f(t)=f_0(t)\leq g(t). \end{equation*} Thus \(g\geq f\) on \([0,0.25]\). Take \(t_1',t_2',\ldots,t_k'\leq 0.25\) such that \(t_i\leq t_i'\) for all \(1\leq i\leq k\) and \(\frac{1}{k}\sum_{i=1}^{k}{t_i'}=c'\). Then \begin{IEEEeqnarray*}{rCl} \frac{1}{k}\sum_{i=1}^{k}{f(t_i)} & \leq & \frac{1}{k}\sum_{i=1}^{k}{f(t_i')} \\ & \leq & \frac{1}{k}\sum_{i=1}^{k}{g(t_i')} \\ & \leq & g\left(\frac{1}{k}\sum_{i=1}^{k}{t_i'}\right) \\ & = & g(c') \\ & = & f(c'), \end{IEEEeqnarray*} where the first inequality holds since \(f\) is increasing, the second since \(g\geq f\), the third is Jensen's inequality applied to the concave function \(g\), and the last equality follows from $c'\geq c_Q > 0.14$, so that \(g(c')=f_0(c')=f(c')\). \end{IEEEproof} \end{document}
\begin{document} \renewcommand{\thefootnote}{\fnsymbol{footnote}} \makeatletter \renewcommand\@makefnmark{\hbox{\@textsuperscript{\normalfont\color{white}\@thefnmark}}} \renewcommand\@makefntext[1]{ \parindent 1em\noindent \hb@xt@ 1.8em{ \hss\@textsuperscript{\normalfont\@thefnmark}}#1} \makeatother \title{Consistent circuits for indefinite causal order} \author{Augustin Vanrietvelde} \orcid{0000-0001-9022-8655} \email{[email protected]} \thanks{\\ Current address: Inria Paris-Saclay} \affiliation{Quantum Group, Department of Computer Science, University of Oxford} \affiliation{Department of Physics, Imperial College London} \affiliation{HKU-Oxford Joint Laboratory for Quantum Information and Computation} \author{Nick Ormrod$^*$} \orcid{0000-0003-2717-8709} \affiliation{Quantum Group, Department of Computer Science, University of Oxford} \affiliation{HKU-Oxford Joint Laboratory for Quantum Information and Computation} \email{[email protected]} \author{Hl\'er Kristj\'ansson$^*$} \orcid{0000-0003-4465-2863} \email{[email protected]} \thanks{\\ Current address: Department of Physics, Graduate School of Science, The University of Tokyo} \affiliation{Quantum Group, Department of Computer Science, University of Oxford} \affiliation{HKU-Oxford Joint Laboratory for Quantum Information and Computation} \author{Jonathan Barrett} \orcid{0000-0002-2222-0579} \email{[email protected]} \affiliation{Quantum Group, Department of Computer Science, University of Oxford} \begin{abstract} Over the past decade, a number of quantum processes have been proposed which are logically consistent, yet feature a cyclic causal structure. However, there is no general formal method to construct a process with an exotic causal structure in a way that ensures, and makes clear why, it is consistent. Here we provide such a method, given by an extended circuit formalism.
This only requires directed graphs endowed with Boolean matrices, which encode basic constraints on operations. Our framework (a) defines a set of elementary rules for checking the validity of any such graph, (b) provides a way of constructing consistent processes as a circuit from valid graphs, and (c) yields an intuitive interpretation of the causal relations within a process and an explanation of why they do not lead to inconsistencies. We display how several standard examples of exotic processes, including ones that violate causal inequalities, are among the class of processes that can be generated in this way. We conjecture that this class in fact includes all unitarily extendible processes. {\footnote{These two authors made equal contributions to this work.}} \end{abstract} \maketitle \tableofcontents \renewcommand{\thefootnote}{\arabic{footnote}} \makeatletter \renewcommand\@makefnmark{\hbox{\@textsuperscript{\normalfont\color{black}\@thefnmark}}} \makeatother \section{Introduction} The standard formalism of quantum information theory allows any set of quantum operations to be combined in parallel and in sequence. This gives rise to the intuition that quantum theory is \textit{compositional}, in the sense that combining elementary quantum processes always gives rise to other valid processes, and that all valid quantum processes can be constructed in this way. Compositionality finds its clearest expression in the graphical quantum circuit formalism \cite{deutsch1989quantum,aharonov1998quantum,abramsky2004categorical,nielsen2000quantum,coecke_kissinger_2017}. There, as long as a circuit is formed by wiring together quantum operations, without allowing any feedback loops, the whole circuit forms a valid quantum process -- no matter what quantum operations are used.
In the last decade, a class of quantum processes was proposed that does not fit into this picture \cite{hardy2005probability,chiribella2009beyond,chiribella2013quantum,oreshkov2012quantum, baumeler2014maximal, baumeler2016space, wechs2021quantum}. These processes are logically consistent (in the sense of not implying a contradiction), yet the quantum operations they are built from are in an \textit{indefinite causal order}. This means that the operations cannot be wired up to form a circuit representing the process unless feedback loops are used, i.e. unless one of the global output wires of the circuit is plugged into one of its inputs. Now, if one does allow feedback loops, then one can write any process with indefinite causal order as a quantum circuit \cite{chiribella2009beyond, chiribella2013quantum, araujo2017quantum}. However, most quantum circuits with feedback loops do not form valid quantum processes \cite{oreshkov2012quantum, Baumeler_2021}. This is because feedback loops will almost always lead to logical inconsistencies, exemplified by the famous paradox in which a time traveller kills her grandfather. This means that there is currently no way to construct general processes by wiring together elementary operations, with the assurance that they will remain consistent. Instead, the most common way to formally specify a process with indefinite causal order so far has been to write down its process matrix \cite{oreshkov2012quantum} or, equivalently, its supermap \cite{chiribella2009beyond,chiribella2013quantum}. The inability to construct processes by specifying the connectivity between operations leads to three major problems. Firstly, given a mathematical object that one suspects is a process matrix, it is difficult to verify whether it actually is one. And without knowing whether it is a process matrix, one does not know whether it represents any logically consistent process.
The explicit verification that an object is a process matrix is a brute-force procedure, requiring calculations on high-dimensional linear operators: this procedure becomes highly impractical when the dimensions and the number of parties increase. Secondly, this procedure supplies no intuition at all: given two mathematical objects, there is no clear interpretation of their components that explains why one is consistent while the other is not. This sharply contrasts with the situation for definite-order processes, whose consistency can be regarded as an inevitable consequence of the acyclic form of their circuit representation. As a result, it is not surprising that processes with indefinite causal order have mostly been presented as isolated examples \cite{chiribella2009beyond, chiribella2013quantum,oreshkov2012quantum}, or classes of similar processes \cite{baumeler2016space, wechs2021quantum}. Thirdly, this situation prevents the design of formal methods for the study and manipulation of processes featuring indefinite causal order -- which would yield, for example, general theorems clarifying their properties, or a steady conceptual architecture for describing the different possible classes of examples. Indeed, the only available formal representation of an exotic process -- as a process matrix or a supermap -- is an undifferentiated blend of all of the causal influences involved, fused into a high-dimensional linear operator. Parsing its structure and behaviour is possible, at best, only through a careful inspection of it by a person sufficiently experienced with the quirks of the framework; no formal tools will be there to help. Here again, this contrasts with the situation for ordered circuits, for which such formal tools have been extensively developed \cite{chiribella2008transforming, coecke_kissinger_2017}.
In particular, Ref.\ \cite{wechs2021quantum} gave a general framework for constructing any process with coherent (and possibly dynamical) control of causal orders. Yet, this method is limited to these processes, which cannot violate causal inequalities \cite{branciard2015simplest, wechs2021quantum}. Since the focus of this framework was on the physical realisability of processes, it moreover put less emphasis on the connectivity of the processes. As a result, it does not provide the same sort of intuitive understanding of processes with indefinite causal order that quantum circuits provide for standard quantum processes. Circuit models for indefinite causal order (often featuring diagrammatic calculi) have also been introduced recently \cite{clement2020, Chardonnet2022, Arrighi2021}, but without providing a way to check the consistency of the processes. Consequently, there is much to be gained from developing a method for constructing processes with indefinite causal order in a way that ensures that they are consistent and gives a legible description of their structure -- including processes that violate causal inequalities. Such a method could extend deep intuitions about what makes a process consistent to processes with indefinite causal order. It would also help us to move on from the piecemeal introduction of indefinite-order processes, to a paradigm in which one can construct large classes of processes, whose consistency is made obvious from the construction itself. Finally, it would make it easier to understand and manipulate these processes. In this work, we introduce a method for generating a large class of quantum processes with indefinite causal order, one that both guarantees and explains their consistency.
Our method relies on an extended circuit formalism, based on the framework of routed quantum circuits, introduced in Ref.\ \cite{vanrietvelde2021routed} (see also Refs.\ \cite{barrett2019, lorenz2020, vanrietvelde2021coherent, wilson2021composable}). In addition to the (now possibly cyclic) directed graph of the connectivity of operations, one only has to specify basic constraints on the allowed operations (called \textit{sectorial constraints}), encoded in Boolean matrices (called \textit{routes}). One can then define a set of elementary rules for checking the validity of this graph. We prove that any quantum process based on a valid graph is logically consistent -- as long as the quantum operations connected according to the graph respect the routes. Our rules for validity rely on the display and inspection of meaningful information about the causal structure expressed by such graphs, making the behaviour and consistency of the process more intelligible at the conceptual level. This sheds light on the conceptual differences between those cyclic causal structures that lead to logical paradoxes, and those that are logically consistent, and offers a simple and intuitive description of the inner structure of the latter. From our construction, we can derive a large class of valid `unitarily extendible processes', a class which is thought to correspond to those processes which could be physically realisable \cite{araujo2017purification}. In fact, we can construct so many unitarily extendible processes that we are led to conjecture that \textit{all} such processes can be built this way. We emphasise that this paper does not aim to address the question of the \textit{physical realisability} of processes (though its ideas might help tackle it in the future). Rather, we are interested in understanding the abstract, logical structure lying at the heart of valid processes with indefinite causal order.
Given a valid process, we do not ask whether it could be implemented in practice, or even in principle, given the laws of physics that govern our particular universe. We ask only whether a purported quantum process is \textit{logically consistent} (i.e.\ whether it implies any contradiction), what makes it so, and how it can be mathematically constructed in a way that makes this obvious. In summary, this paper seeks to restore compositionality to processes with indefinite causal order. More precisely, it uncovers the means by which elementary processes may be combined in a way that inevitably leads to valid composite processes, even when those composite processes can feature indefinite causal order. The benefits of this are both mathematical -- since we can eschew brute-force methods for checking that processes with indefinite causal order are consistent -- and conceptual -- since we get a better sense of what makes a process logically consistent even when it exhibits indefinite causal order. The paper is structured as follows. We begin by introducing our framework through a reconstruction of the quantum switch, in order to provide a pedagogical introduction to its main notions in a simple case. We then present our framework in full generality, which describes how to construct processes from elementary operations and their connectivity, in a way that guarantees logical consistency. Following this, we display how our framework allows us to reconstruct three further examples from the literature, namely the quantum 3-switch \cite{colnaghi2012quantum}, the recently proposed process that we shall call the Grenoble process \cite{wechs2021quantum}, and the Lugano process (also called Baumeler-Wolf or Ara\'ujo-Feix) \cite{baumeler2014maximal, baumeler2016space, araujo2017purification}. We explain how the route structure displays the core behaviour of the processes in a compact way.
We embed these examples into larger families of similar processes, and thus highlight the conceptual intuitions for their validity. We conclude with a short discussion and outlook, in which we spell out a conjecture that all unitarily extendible processes can be built using our method. \section{Reconstructing the quantum switch} \label{sec:2switch} The quantum switch \cite{chiribella2009beyond, chiribella2013quantum} is a process with indefinite causal order whose logical consistency is relatively easy to understand. However, the obvious ways of grasping its consistency -- such as recognising it as the coherent control of processes with definite causal orders -- do not generalise to more exotic instances of indefinite causal order, such as the Grenoble \cite{wechs2021quantum} or Lugano \cite{baumeler2014maximal, baumeler2016space, araujo2017purification} processes (which we will discuss later in Section \ref{sec: examples}).
In this section, we reconstruct the quantum switch in a way that guarantees that it is a valid supermap\footnote{Note that in this paper, we will adopt the \textit{supermap} representation of higher-order processes \cite{chiribella2008transforming, chiribella2009beyond, chiribella2013quantum}, as opposed to the (mathematically equivalent) \textit{process matrix} representation \cite{oreshkov2012quantum, araujo2017purification} also used in the literature. Because the relationship and equivalence between the two pictures, both at the conceptual and mathematical levels, can be a source of confusion, and in the interest of readers more accustomed to process matrices, we spell out the connection between the representations in detail in Appendix \ref{app: process matrices and supermaps}.}, using a method that does generalise to more exotic unitarily extendible processes. In so doing, we sketch out the main ingredients of our framework before it is formally introduced in Section \ref{sec: theorem}. We start this section by defining supermaps, and, in particular, the supermap describing the quantum switch. We then briefly summarise some key concepts of the routed circuits framework \cite{vanrietvelde2021routed, vanrietvelde2021coherent, wilson2021composable}, which we will use for the reconstruction. Finally, we perform the reconstruction in a pedagogical way. \subsection{Supermaps and the quantum switch} The quantum switch is a process in which the order of application of two transformations is coherently controlled. Mathematically, it can be represented by a \textit{supermap}, called \texttt{SWITCH}. In the literature, a supermap is typically defined as a linear map that transforms quantum channels to quantum channels \cite{chiribella2008transforming, chiribella2009beyond, chiribella2013quantum}.
However, since we are only interested in unitarily extendible supermaps in this paper, we shall define a supermap as a linear map that transforms linear operators on a Hilbert space to linear operators on a Hilbert space. Given two input operators $U$ and $V$ of the same dimension $d$, \texttt{SWITCH} returns an operator of the form \begin{equation} \label{switch def} \textnormal{\texttt{SWITCH}}(U, V) = \ket{0}\bra{0} \otimes VU + \ket{1}\bra{1} \otimes UV \, . \end{equation} More generally, $U$ and $V$ could be acting on their own local ancillary systems, $X$ and $Y$ respectively. Then \texttt{SWITCH} is defined as follows: \begin{equation} \label{switch def 2} \texttt{SWITCH}(U, V) = \ket{0}\bra{0} \otimes (I_X \otimes V) (U \otimes I_Y) + \ket{1}\bra{1} \otimes (U \otimes I_Y) (I_X \otimes V) \, . \end{equation} If $U$ and $V$ are both unitary operators, then $\texttt{SWITCH}(U, V)$ is also a unitary operator. We call supermaps like this, which always map unitary operators to unitary operators, \textit{superunitaries}. A little reflection reveals that any superunitary uniquely defines a supermap in the traditional sense as a map on channels via the Stinespring dilation of the channels. An equally formal but more intuitive representation of supermaps can be provided using diagrams \cite{coecke_kissinger_2017, Kissinger2019}. The idea, illustrated for a monopartite supermap in Figure \ref{fig:monopartite smap}, is to represent unitary operators using boxes, and supermaps as shapes that give another box once one inserts a box into each of its `nodes'. \begin{figure*} \centering $\tikzfig{monopartite_smap}: \ \ \ \tikzfig{unitary_operator} \mapsto \ \ \tikzfig{monopartite_smap_output_box} := \tikzfig{monopartite_smap_output}$ \caption{Diagrammatic representation of a monopartite superunitary.
$\mathcal S$ is a linear map that, for any pair of ancillary systems $X^\textrm{\upshape in}$ and $X^\textrm{\upshape out}$, sends unitaries of the form $U: \mathcal H_{X^\textrm{\upshape in}} \otimes \mathcal H_{A^\textrm{\upshape in}} \rightarrow \mathcal H_{X^\textrm{\upshape out}} \otimes \mathcal H_{A^\textrm{\upshape out}}$ to unitaries of the form $(\mathcal I \otimes \mathcal S)(U): \mathcal H_{X^\textrm{\upshape in}} \otimes \mathcal H_P \rightarrow \mathcal H_{X^\textrm{\upshape out}} \otimes \mathcal H_F$. This definition can be extended in an obvious way for supermaps acting on an arbitrary finite number of unitary operators.} \label{fig:monopartite smap} \end{figure*} In general, one can use a circuit decomposition with sums to represent a supermap. Such a decomposition is provided for the switch in Figure \ref{fig:switch pic}. The meaning of this circuit decomposition is that the action of the supermap is given by Figure \ref{fig:switch output}, the right-hand side of which has precisely the same formal meaning as (\ref{switch def 2}). \begin{figure*} \tikzfig{switch3} \caption{Circuit decomposition of the switch. The left term in the sum projects onto the $\ket{0}$ state of the control and implements Alice's transformation before Bob's. The right term has a similar interpretation. Formally, the wire bent into a `U' shape can be interpreted as the unnormalised Bell ket $\ket{00}+\ket{11}$, and the upside-down `U' as the corresponding bra.} \label{fig:switch pic} \end{figure*} \begin{figure*} \scalebox{.9}{\tikzfig{switch_with_boxes_2} } \caption{The action of the switch on a pair of unitary operators.
The order of implementation of unitaries on the target system and ancillas is coherently controlled.}
\label{fig:switch output}
\end{figure*}
However, if we do not allow for sums,\footnote{This is good practice because 1) sums lead to an exponential multiplication of the number of diagrams to consider, and 2) an intuitive presentation as a sum will not be available at all in more involved cases, like the Grenoble process or the Lugano process.} then it is impossible to draw $\texttt{SWITCH}(U, V)$ as a standard circuit in which both $U$ and $V$ appear exactly once, unless we allow feedback loops \cite{chiribella2009beyond, chiribella2013quantum, Oreshkov_2019}. This means that we cannot write $\texttt{SWITCH}(U, V)$ as a standard circuit whose form makes it immediate that the switch is a consistent process. This feature of the switch, while somewhat inconvenient, is also a hallmark of indefinite causal order. Such processes combine their input operations in a way that cannot be understood as wiring them up to form a circuit, while avoiding feedback loops. While this is precisely what makes these processes interesting, it also can leave us guessing as to what makes them consistent. If their consistency is not guaranteed by the possibility of constructing them as a standard, acyclic circuit, then what is it guaranteed by? We will show that the consistency of the switch is guaranteed through a presentation of it as a \textit{routed} quantum circuit \cite{vanrietvelde2021routed, vanrietvelde2021coherent, wilson2021composable}. Our method involves writing down a decorated directed graph, called the \textit{routed graph}, which captures the basic compositional structure of a large class of circuits. To check that all circuits that display this compositional structure are valid ones, it suffices to show that the graph satisfies two principles.
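Before developing that machinery, the defining superunitarity property of (\ref{switch def}) can be checked numerically. The following is an illustrative sketch only, not part of the framework; the helper \texttt{random\_unitary} and the use of \texttt{numpy} are our own choices.

```python
import numpy as np

def switch(U, V):
    """SWITCH(U, V) = |0><0| (x) VU + |1><1| (x) UV, as in the defining equation."""
    P0 = np.array([[1, 0], [0, 0]])
    P1 = np.array([[0, 0], [0, 1]])
    return np.kron(P0, V @ U) + np.kron(P1, U @ V)

def random_unitary(d, rng):
    # QR decomposition of a complex Gaussian matrix yields a Haar-random unitary
    q, r = np.linalg.qr(rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d)))
    return q * (np.diag(r) / np.abs(np.diag(r)))

rng = np.random.default_rng(0)
d = 3
U, V = random_unitary(d, rng), random_unitary(d, rng)
S = switch(U, V)
# S is unitary on the control (x) target space: cross terms vanish since P0 P1 = 0
assert np.allclose(S.conj().T @ S, np.eye(2 * d))
```

The assertion holds for any pair of unitaries, since each block of the direct sum is a product of unitaries.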
Moreover, this method will generalise to other prominent unitarily extendible processes, such as Grenoble and Lugano. All of this is in spite of the fact that the routed graph may contain feedback loops. Thus while the usual framework of quantum circuits shows how one can validly combine quantum processes in a definite causal order, our framework shows how quantum processes can be validly combined in an indefinite order. At the heart of what makes a routed graph succeed or fail to generate consistent processes are the \textit{sectorial constraints} it enforces at its nodes. These restrict the transformations that they accept, and, in valid graphs, they can play a vital role in outlawing those transformations that would lead to inconsistencies. But these constraints are not captured by standard quantum circuits. To handle sectorial constraints properly, we need to understand the basics of routed quantum circuits, to which we now turn. \subsection{Routed quantum circuits} We start our summary of the framework of routed quantum circuits by considering a simple physical scenario that is conveniently represented with routed circuits. Suppose a photon with an internal degree of freedom is sent to a superposition of two paths, either going to the left or to the right depending on the logical value of a control qubit $C$. This can be represented with the following transformation $W: \mathcal H_C \otimes \mathcal H_T \rightarrow \mathcal H_L \otimes \mathcal H_R$, where $T$ is the internal degree of freedom of the photon. 
\begin{equation}
\begin{split}
\label{sup channels def}
& W(\ket{0} \otimes\ket{\psi}) = \ket{\psi} \otimes \ket{\textnormal{vac}} \\
& W(\ket{1} \otimes\ket{\psi}) = \ket{\textnormal{vac}} \otimes \ket{\psi}
\end{split}
\end{equation}
Each of the output spaces $\mathcal H_L$ and $\mathcal H_R$ is the direct sum of a vacuum sector spanned by $\ket{\textnormal{vac}}$ and a single-particle sector with the dimension of the target space $\mathcal H_T$ \cite{chiribella2019shannon}: $\mathcal H_L = \mathcal H_L^\textnormal{vac} \oplus \mathcal H_L^\textnormal{par}$ and $\mathcal H_R = \mathcal H_R^\textnormal{vac} \oplus \mathcal H_R^\textnormal{par}$. The physical evolution here is clearly reversible; yet, $W$ is not a unitary operator, because its output space, $\mathcal H_L \otimes \mathcal H_R = \bigoplus_{i, j \in \{\textrm{vac}, \textrm{par}\}} \mathcal H_L^i \otimes \mathcal H_R^j$, is `too large'. To fix this, we could consider an extension of $W$ to a unitary acting on a larger input space. However, this would require us to consider additional, possibly physically irrelevant degrees of freedom just to represent our original scenario, even though this original scenario was already reversible. Another solution is to restrict the definition of $W$ to the subspace of its output space that can actually be populated -- this is the one-particle subspace $\mathcal H^\textnormal{prac} := (\mathcal H_L^\textnormal{par} \otimes \mathcal H_R^\textnormal{vac}) \oplus (\mathcal H_L^\textnormal{vac} \otimes \mathcal H_R^\textnormal{par})$. The resulting operator $\tilde{W} : \mathcal H_C \otimes \mathcal H_T \rightarrow \mathcal H^\textnormal{prac}$ is indeed unitary, but it cannot be represented as a standard circuit with one output for $L$ and another for $R$.
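These claims can be verified concretely for a small target dimension. The following is a minimal numerical sketch; the basis ordering of $\mathcal H_L$ and $\mathcal H_R$ (vacuum first, then the particle sector) is our own convention.

```python
import numpy as np

d = 2  # dimension of the internal (target) space H_T
dim_out = 1 + d  # H_L and H_R: one vacuum dimension plus a d-dimensional particle sector

def ket(i, dim):
    v = np.zeros(dim)
    v[i] = 1.0
    return v

# W : H_C (x) H_T -> H_L (x) H_R, defined on basis states as in the equation above
W = np.zeros((dim_out * dim_out, 2 * d))
for psi in range(d):
    # control |0>: particle goes left, vacuum on the right
    W[:, 0 * d + psi] = np.kron(ket(1 + psi, dim_out), ket(0, dim_out))
    # control |1>: vacuum on the left, particle goes right
    W[:, 1 * d + psi] = np.kron(ket(0, dim_out), ket(1 + psi, dim_out))

# W is an isometry: W^dag W = identity on H_C (x) H_T ...
assert np.allclose(W.T @ W, np.eye(2 * d))
# ... but not unitary onto H_L (x) H_R, whose dimension (1+d)^2 exceeds 2d.
# Its image is a 2d-dimensional subspace (the one-particle subspace),
# onto which the restriction of W is unitary.
assert np.linalg.matrix_rank(W) == 2 * d
```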
The reason is that the formal meaning of putting two wires next to each other in a standard circuit is taking the tensor product of the associated Hilbert spaces, while our $\mathcal H^\textnormal{prac}$ is composed from $\mathcal H_L$ and $\mathcal H_R$ via direct sums as well as tensor products. In order to achieve a better circuit representation, we can label the output subspaces in the following way.
\begin{equation}
\begin{split}
& \mathcal H_{L^\textnormal{par}} := \mathcal H_L^0 \\
& \mathcal H_{L^\textnormal{vac}} := \mathcal H_L^1 \\
& \mathcal H_{R^\textnormal{par}} := \mathcal H_R^1 \\
& \mathcal H_{R^\textnormal{vac}} := \mathcal H_R^0
\end{split}
\end{equation}
Then the physically relevant output space $\mathcal H^\textnormal{prac}$ is the direct sum of the subspaces $\mathcal H_L^i \otimes \mathcal H_R^j$ for which the indices match. This is equivalent to saying that $W$ \textit{follows the route} given by the Kronecker delta $\delta^{ij}$. This just means that it respects the equation
\begin{equation} \label{wroute}
W = \sum_{ij} \delta^{ij} \cdot (\pi_L^i \otimes \pi_R^j) \circ W \, ,
\end{equation}
where the $\pi_L^i$ and $\pi_R^j$ are projectors onto $\mathcal H_L^i$ and $\mathcal H_R^j$ respectively. Now, we can formally represent our transformation as a unitary by adding indices to the outputs that represent the sectorisation of the Hilbert spaces, and decorating $W$ with the route matrix $\delta^{ij}$. This gives the following diagram.
\begin{equation} \label{sup channels 1}
\tikzfig{sup_of_channels}
\end{equation}
The interpretation of this diagram is that we have a transformation $W$ which follows the route $\delta$. The route matrix tells us that $W$ only maps states to its so-called \textit{practical output space}, which is now defined via the route as $\mathcal H^\textnormal{prac} := \bigoplus_{ij} \delta^{ij} \, \mathcal H_L^i \otimes \mathcal H_R^j$.
$W$ is unitary with respect to this output subspace, so the routed map $(\delta, W)$ is called a \textit{routed unitary} transformation, even though $W$ is not strictly speaking a unitary operator. We can simplify the diagram (\ref{sup channels 1}) by introducing a shorthand called \textit{index-matching}. Since the effect of $\delta^{ij}$ is just to `match up' the value of the output indices in the practical output space, we can avoid writing the matrix explicitly and instead just match the output indices directly:
\begin{equation} \label{sup channels 2}
\tikzfig{sup_of_channels_chan} \, .
\end{equation}
We now explain the notion of a routed linear map in full generality. Given a transformation $U: \mathcal H_A \rightarrow \mathcal H_B$, we can construct a routed transformation $(\lambda, U)$ by first sectorising $U$'s input and output Hilbert spaces into a set of orthogonal subspaces:
\begin{equation}
\begin{split}
& \mathcal H_A = \bigoplus_i \mathcal H_A^i \\
& \mathcal H_B = \bigoplus_j \mathcal H_B^j
\end{split}
\end{equation}
We then specify a Boolean matrix $\lambda$, which is there to tell us which $\mathcal H_A^i$ may be mapped to which $\mathcal H_B^j$. We call this sort of restriction on the form of $U$ a \textit{sectorial constraint}. Specifically, $U$ \textit{follows} $\lambda$ if it satisfies
\begin{equation} \label{route}
U = \sum_{ij} \lambda_i^j \cdot \pi^j_B \circ U \circ \mu_A^i \, ,
\end{equation}
where the $\pi_B^j$'s and $\mu_A^i$'s project onto the sectors $\mathcal H_B^j$ and $\mathcal H_A^i$ respectively. For $(\lambda, U)$ to be a valid routed transformation, we need $U$ to follow $\lambda$. Routes can be composed in parallel and in sequence by the Cartesian product and matrix multiplication, respectively.
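The route-following condition (\ref{route}) is straightforward to check numerically for a concrete $U$ and $\lambda$. A minimal sketch follows; the encoding of sectors as lists of basis indices is our own.

```python
import numpy as np

def follows_route(U, lam, in_sectors, out_sectors):
    """Check the route condition: U = sum_ij lam[i][j] * pi_B^j @ U @ mu_A^i,
    where in_sectors / out_sectors list the basis indices of each sector."""
    total = np.zeros_like(U)
    for i, rows_in in enumerate(in_sectors):
        mu = np.zeros((U.shape[1], U.shape[1]))  # projector onto input sector i
        mu[np.ix_(rows_in, rows_in)] = np.eye(len(rows_in))
        for j, rows_out in enumerate(out_sectors):
            pi = np.zeros((U.shape[0], U.shape[0]))  # projector onto output sector j
            pi[np.ix_(rows_out, rows_out)] = np.eye(len(rows_out))
            total = total + lam[i][j] * (pi @ U @ mu)
    return np.allclose(U, total)

# A sector-preserving unitary follows the identity route delta_i^j ...
U = np.diag([1, 1j])
delta = [[1, 0], [0, 1]]
sectors = [[0], [1]]
assert follows_route(U, delta, sectors, sectors)
# ... while a sector-swapping unitary does not.
X = np.array([[0, 1], [1, 0]], dtype=complex)
assert not follows_route(X, delta, sectors, sectors)
```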
Routed maps can be composed by composing their elements pairwise: the composition of the linear maps will necessarily follow the composition of the routes. This enables us to build up large routed circuits using elementary routed maps as building blocks. The route $\lambda$ defines $U$'s practical input and output spaces $\bigoplus_i (\sum_j \lambda_i^j) \mathcal H_A^i$ and $\bigoplus_j (\sum_i \lambda_i^j) \mathcal H_B^j$ respectively. $U$ is a routed unitary if the transformation is unitary when we restrict its definition to these spaces. The last thing to introduce is the notion of a \textit{routed supermap} \cite{vanrietvelde2021coherent}. In a standard superunitary, any unitary operator that maps the ingoing space $\mathcal H_{A^\textrm{\upshape in}}$ to the outgoing space $\mathcal H_{A^\textrm{\upshape out}}$ of one of the nodes is considered a valid input to that node. For a routed supermap, a node might be equipped with some route $\lambda$ given a sectorisation of its input and output spaces. Then, the only valid unitary transformations for that node are those that follow $\lambda$. Formally, the valid inputs are those that respect $U = \sum_{ij} \lambda_i^j \cdot \pi^j_{B} \circ U \circ \mu^i_{A}$, or $U = \sum_{ij} \lambda_i^j \cdot (\pi^j_{B} \otimes I_X) \circ U \circ (\mu^i_{A} \otimes I_X)$ if the unitary also acts on some ancillary system $X$.
\subsection{Extracting the relevant structure: routed circuit decomposition, skeletal supermap, routed graph}
Armed with an understanding of routed maps, we can now give the promised routed circuit decomposition of the switch. Luckily, all we really need is a routed unitary of the form $(\delta, W)$, represented in (\ref{sup channels 2}).
\begin{figure*}
\centering
\tikzfig{switch_routed_circuit}
\caption{Routed circuit decomposition of the switch, using index matching.
$W$ is the routed unitary defined in (\ref{sup channels def}). The wires bent into `cup' or `cap' shapes represent the (unnormalised) perfectly correlated entangled states \cite{coecke_kissinger_2017}. Overall, the `$i=0$' sectors correspond to the branch where Alice's intervention is implemented before Bob's, while `$i=1$' corresponds to the branch where Bob's intervention is implemented before Alice's. Thus the cycle is constructed from two acyclic components corresponding to definite orders of implementation.}
\label{fig:switch routed circuit}
\end{figure*}
Our decomposition of the switch is presented in Figure \ref{fig:switch routed circuit}. The basic intuition behind the diagram lies in the following interpretation. When we prepare the control qubit in $\ket{0}$, the target system may only enter the sectors of the wires inside the diagram corresponding to $i=0$. Recalling that $\mathcal H_L^0$ was a $d$-dimensional sector, and that $\mathcal H_R^0$ was a trivial, one-dimensional `dummy sector', this means that the particle will exit via the left output port of every $W$ it enters. Meanwhile, the right output will receive a one-dimensional dummy system, analogous to the vacuum in the interferometric example above (although this is merely a formal analogy -- we are not committing to any particular physical interpretation of this dummy system). This means that the particle will go through Alice's node first, then Bob's, then out to the future. The opposite is true when we prepare the control in $\ket{1}$. We now want to nail down how the route structure in Figure \ref{fig:switch routed circuit} can be leveraged to certify that the supermap is a valid one. To do this, we first need to consider a further pruned version of the circuit, in which only the essential information appears.
This is given by what we call a \textit{skeletal supermap}: a supermap that includes nothing other than wires, without any boxes representing non-identity unitary transformations. The idea is that we can obtain the original supermap from the skeletal supermap by `fleshing it out', i.e.\ inserting some unitary transformations into the nodes. If we can show that this skeletal supermap is a valid superunitary, then it follows immediately that our original supermap is a valid superunitary.
\begin{figure}
\centering
\tikzfig{switch_skeletal}
\caption{Skeletal supermap for the switch. The nodes suffer sectorial constraints represented by the index-matching of the input and output wires, making this a \textit{routed} supermap. The routed circuit for the switch in Figure \ref{fig:switch routed circuit} is obtained by inserting unitary transformations into the nodes $P$ and $F$, and the monopartite superunitary (\ref{comb}) into $A$ and $B$.}
\label{fig:switch skeletal}
\end{figure}
A suitable skeletal supermap for the switch is represented in Figure \ref{fig:switch skeletal}. Inserting $W$ and $W^\dag$ into the bottom and top nodes respectively, and inserting the superunitary
\begin{equation} \label{comb}
\tikzfig{comb}
\end{equation}
into each of the middle nodes yields \texttt{SWITCH}. We can represent the skeletal supermap using an even simpler graph. All we need to consider is the connectivity between the nodes, the routes and indices, and the specific index values that represent one-dimensional sectors. A representation of all this information, and nothing more, is provided by the \textit{routed graph}.
This consists of
\begin{itemize}
\item a vertex for each node in the skeletal supermap, decorated with its route;
\item arrows representing the wires connecting the nodes in the skeletal supermap;
\item next to each arrow, the index of the corresponding wire;
\item next to each arrow, the specific values of its index that correspond to a one-dimensional sector.
\end{itemize}
When a node has a `delta route' -- that is, a route that is equal to 1 if and only if all the indices take the same value -- we can adopt the shorthand index-matching representation, where we decorate each of its ingoing and outgoing arrows with the same index. The routed graph for the switch's skeletal counterpart is given in Figure \ref{fig:rswitch routed graph}, with and without the index-matching shorthand. Remarkably, this elementary object contains all the information we need to confirm that the switch is a valid superunitary, or in other words, that it is consistent.
\begin{figure}
\centering
\subfloat{\tikzfig{figures/RoutedGraphSwitch}}
\qquad \qquad \qquad
\subfloat{\tikzfig{figures/RoutedGraphSwitchMatching}}
\caption{Routed graph for the switch, with and without index-matching. The vertices represent nodes from the skeletal supermap. On the left, each wire is equipped with its own index, and the numbers in red denote the values of the index that correspond to a one-dimensional sector. Each node is decorated with a $\delta$ matrix representing the route, which is equal to 1 if and only if all of its arguments are equal. Lower indices refer to input wires, upper indices refer to output wires.
Since all the routes are `delta' routes, we can use the convenient shorthand of index-matching to produce a simpler diagram with the same meaning, as on the right.}
\label{fig:rswitch routed graph}
\end{figure}
\subsection{Checking for validity}\label{sec:checking_validity}
In our framework, one can just consider the routed graph depicting the connectivity of the supermap, and infer from it that the supermap is valid. This amounts to checking that the routed graph conforms to a couple of principles. Here we shall present these principles and the way to check them in a pedagogical manner, taking advantage of the relative simplicity of the switch's case. To motivate these principles, a good place to start is with the intuition that in a self-consistent protocol, information should not genuinely be able to flow in a circle. This is because, if it did, then at any point on the circle we could make the outgoing information depend on the incoming information in a way that is inconsistent with it. This happens in the grandfather paradox, where Alice's grandfather is killed if Alice exists, even though Alice's existence is incompatible with his murder. Yet from the present routed graph it seems as if information does flow in a circle between $A$ and $B$. What we need to do is use the information in the graph to obtain a more fine-grained perspective from which the cycle disappears (or is at least shown to be harmless). We start by fine-graining each node into a number of \textit{branches}. If the route of a node dictates that there are exactly $n$ disjoint subspaces of the input space that must be mapped one-to-one to $n$ disjoint regions of the output space, we say that there are $n$ branches. To make this clear, we can represent the route matrices as diagrams, with an arrow from the input sector $\mathcal H^i_\textrm{\upshape in}$ to the output sector $\mathcal H^j_\textrm{\upshape out}$ being present when the corresponding route matrix element $\lambda_i^j$ is equal to $1$.
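Counting branches then amounts to counting the connected components of this bipartite diagram. A minimal sketch follows; the representation of route matrices as nested 0/1 lists, and the union-find encoding, are our own.

```python
def branches(lam):
    """Number of branches of a route matrix lam: connected components of the
    bipartite graph with an edge (i, j) whenever lam[i][j] == 1."""
    n_in, n_out = len(lam), len(lam[0])
    # union-find over vertices labelled ("in", i) and ("out", j)
    parent = {("in", i): ("in", i) for i in range(n_in)}
    parent.update({("out", j): ("out", j) for j in range(n_out)})

    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x

    for i in range(n_in):
        for j in range(n_out):
            if lam[i][j]:
                parent[find(("in", i))] = find(("out", j))

    return len({find(v) for v in parent})

# The delta route of node A has two disconnected islands, hence two branches:
assert branches([[1, 0], [0, 1]]) == 2
# The fully connected route of node P (one input sector, two output sectors)
# is a single island, hence one branch:
assert branches([[1, 1]]) == 1
```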
For the node $A$, we have a route of the form $\delta_i^j$, which is represented in Figure \ref{fig:branches}. In this sort of diagram, each disconnected `island', circled in red, corresponds to a distinct branch. Thus $A$ has two distinct branches, which we label $A^i$ in correspondence with the value of $i$. On the other hand, although the node $P$'s outgoing space has two sectors, $P$ only has one branch, since its graph is fully connected, as represented in Figure \ref{fig:branches2}.
\begin{figure}
\centering
\begin{tikzpicture}
\node (1) at (0, 0) {$\mathcal H_\textrm{\upshape in}^0$};
\node (2) at (5, 0) {$\mathcal H_\textrm{\upshape in}^1$};
\node (3) at (0, 4) {$\mathcal H_\textrm{\upshape out}^0$};
\node (4) at (5, 4) {$\mathcal H_\textrm{\upshape out}^1$};
\draw[->] (1) to (3);
\draw[->] (2) to (4);
\draw[red, dashed] (0cm,2.3cm) ellipse[x radius=1.25,y radius=3.5];
\draw[red, dashed] (5cm,2.3cm) ellipse[x radius=1.25,y radius=3.5];
\end{tikzpicture}
\caption{The route for the $A$ node of the skeletal supermap.}
\label{fig:branches}
\end{figure}
\begin{figure}
\centering
\begin{tikzpicture}
\node (1) at (2.5, 0) {$\mathcal H_\textrm{\upshape in}$};
\node (3) at (0, 4) {$\mathcal H_\textrm{\upshape out}^0$};
\node (4) at (5, 4) {$\mathcal H_\textrm{\upshape out}^1$};
\draw[->] (1) to (3);
\draw[->] (1) to (4);
\draw[red, dashed] (2.5, 2.7) ellipse[x radius=3.9, y radius=3.9];
\end{tikzpicture}
\caption{The route for the $P$ node of the skeletal supermap.}
\label{fig:branches2}
\end{figure}
Intuitively, branches correspond to \textit{alternatives} in a node: for example, in node $A$, either the branch $A^0$ or the branch $A^1$ will happen.\footnote{Of course, because we are in quantum theory, both could happen in a superposition. But a remarkable feature of our framework is that, in order to check the validity of the routed graph, we do not have to consider superpositions: it is sufficient to reason as if the branches were mutually exclusive. Therefore this is what we will do throughout this section.} It is these branches, rather than the original nodes, that we will study. In particular, we will check whether the branches themselves form informational loops; this will be the subject of our second principle.
\subsubsection{Bifurcation choices and bi-univocality}
Before proceeding to investigate informational loops, we first need to check another, more fundamental, property. As we remarked, branches correspond to alternatives. Thus, roughly speaking, we should make sure that the route structure is detailed enough to specify that exactly one branch happens at each node. This is captured by the notion of a \textit{bifurcation choice}. Let us introduce it with an example, resorting to agents for intuition.
In the route for the $P$ node in Figure \ref{fig:branches2}, the ingoing space may be mapped to two different sectors of the outgoing space, meaning that an agent can `choose' to send information to just one of these sectors. More generally, an agent at a node can make a `bifurcation choice' for each branch of that node (for the branches that contain only one output value, the choice is trivial). In the routed graph of Figure \ref{fig:rswitch routed graph}, only $P$ features a bifurcation choice. Furthermore, this bifurcation choice amounts to picking the value of the index $i$ through the graph; thus, if the agent at $P$ picks, say, $i=0$, this leads (through the other routes) to that value getting instantiated through the graph, and therefore to the branches $A^0$ and $B^0$ `happening'. The symmetric situation happens for the $i=1$ choice. In other words, each possible bifurcation choice determines exactly one branch to happen at every node. It is this behaviour, rather elementary in the case of the switch, that we want to ask for in general. This leads to a principle that we will call \textit{univocality}:\footnote{This is an unashamed gallicism. `Univocal' means `speaking with one voice', i.e., yielding exactly one output. For instance, functions are univocal, while relations (represented here by Boolean matrices) are generically equivocal.} any tuple of choices made at every branch leads to exactly one branch happening at every node. In other words, once the agents at the nodes of our skeletal supermap make all their bifurcation choices, there is a determinate fact, for each branch, about whether the quantum state will pass through it. More formally, this will be defined as the fact that the routed graph defines a function (as opposed to a relation) from bifurcation choices to `branch statuses', where branch statuses are bits representing whether a given branch has happened or not. (Section \ref{sec: theorem} describes how this function is defined.) 
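For the switch, this function from bifurcation choices to branch statuses can be written out explicitly. The following is a small sketch; the encoding of branch statuses as a Python dict is our own, and the single-branch nodes $P$ and $F$ are assigned status 1 by convention.

```python
def branch_statuses(choice_at_P):
    """Branch statuses of the switch's routed graph, given the bifurcation
    choice i made at P (the only nontrivial bifurcation choice)."""
    i = choice_at_P
    return {"P": 1, "F": 1,  # single-branch nodes always happen
            "A0": int(i == 0), "A1": int(i == 1),
            "B0": int(i == 0), "B1": int(i == 1)}

# Univocality: each bifurcation choice yields exactly one status assignment,
# with exactly one branch happening per node.
for i in (0, 1):
    s = branch_statuses(i)
    assert s["A0"] + s["A1"] == 1
    assert s["B0"] + s["B1"] == 1
```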
This can be seen as forbidding situations where bifurcation choices would either underdetermine branch statuses (i.e.\ lead to several possible branch assignments) or overdetermine them (i.e.\ lead to no possible assignment at all).\footnote{On the relationship between underdetermination and overdetermination in cyclic processes, see Ref.\ \cite{Baumeler_2021}.} For the switch, this is satisfied because the bifurcation choice at the $P$ node of the skeletal supermap determines which branches of $A$ and $B$ we end up in. This corresponds to the fact that in \texttt{SWITCH}, the logical state of the control qubit fixes the causal order (recalling that the causal order is what defined the different branches of $A$ and $B$). We also require that the `time-reversed' routed graph, obtained by reversing the direction of the arrows on the original routed graph, satisfies univocality as well. This is satisfied by the switch, corresponding to the fact that the information about which causal order took place ends up recorded in the control qubit at the end of the protocol. If both the routed graph and its time-reversed version satisfy univocality, we say that the routed graph satisfies \textbf{bi-univocality}. Thus the entire bi-univocality condition is satisfied by \texttt{SWITCH}. We summarise the condition as follows:
\begin{quote}
\textit{Bi-univocality:} The routed graph and the time-reversed routed graph define functions from bifurcation choices to branch statuses.
\end{quote}
\subsubsection{The branch graph and weak loops}
We now turn to our second principle, which deals with whether influences between branches flow in a circle. To check this, we construct a directed `branch graph' representing the flow of information between different branches in the routed graph, depicted in Figure \ref{fig:branch graph}. Causal/informational loops among the branches will be understood as loops in this graph.
The branch graph contains solid, dashed green, and dashed red arrows. The solid arrows represent the flow of quantum information along `paths' in the routed graph permitted by the routes, while the dashed arrows represent the flow of information via choices of which path to follow, when multiple paths are permitted by the routes. We explain each of these in turn at an intuitive level; the general formal procedure for constructing the branch graph from a routed graph is described in Section \ref{sec: theorem}. To understand the solid arrows, note that there are two possible joint value assignments to all of the indices in the routed graph: either $i=0$ everywhere, or $i=1$ everywhere. For the $i=0$ assignment, the arrows $P \rightarrow B$ and $B \rightarrow A$ in Figure \ref{fig:rswitch routed graph} correspond to one-dimensional sectors, as indicated by the red zeroes. What this shows is that no quantum information flows from $P$ to the branch $B^0$ or from $B^0$ to $A^0$. For this reason, there are no solid arrows $P \rightarrow B^0$ or $B^0 \rightarrow A^0$ in the branch graph. On the other hand, quantum information does flow from $P$ into the branch $A^0$, then into $B^0$, and then finally into $F$. Thus we have the path $P \rightarrow A^0 \rightarrow B^0 \rightarrow F$ of solid arrows in the branch graph. By following precisely analogous reasoning for the $i=1$ assignment, we arrive at the solid arrows in Figure \ref{fig:branch graph}. Evidently, the solid arrows in the branch graph do not form a loop\footnote{We note that this corresponds to an observation from Ref.\ \cite{barrett2021cyclic}, that, although the switch has a cyclic causal structure, it can still be written as a direct sum of (pure) processes with a definite causal order.
We want to stress, however, that such an observation is in general \textit{not} sufficient to ensure the consistency of the process, as it overlooks the need to 1) check bi-univocality, and 2) also represent dashed arrows in the branch graph.}.
\begin{figure}
\centering
\tikzfig{figures/SwitchBranchGraph}
\caption{The branch graph for the routed graph in Figure \ref{fig:rswitch routed graph}. Each vertex represents the branch of some node in the routed graph. The branches of $A$ and $B$ are labelled with superscripts corresponding to the relevant value of $i$ in the index-matching routed graph. The other nodes have only one branch, and we denote this branch with the same letter we used for the original nodes. The solid arrows are attributed by considering the connections between the branches encoded in the routed graph; the dashed green and red arrows represent relations of functional dependence in the functions from bifurcation choices to branch statuses required by bi-univocality. The graph contains no cycles of any kind, so it trivially satisfies the weak loops condition.}
\label{fig:branch graph}
\end{figure}
To rule out informational loops, it is necessary that the solid arrows do not form a loop, but it is not sufficient. What the lack of this kind of loop shows is that the quantum information confined within particular branches by the routes does not flow in a circle. But there is another type of information flowing in the routed circuit: the information that determines \textit{which branch happens}. This information is represented by the dashed arrows in Figure \ref{fig:branch graph}. It is possibilistic in nature, and can therefore be captured entirely using routes, based on the theory of finite relations (i.e.\ Boolean matrices).
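The absence of solid-arrow loops can be checked mechanically with a standard depth-first search. A minimal sketch follows; the solid-arrow list is transcribed from Figure \ref{fig:branch graph} (an assumption on our part).

```python
# Solid arrows of the switch's branch graph: P -> A0 -> B0 -> F for i = 0,
# and P -> B1 -> A1 -> F for i = 1.
solid = {"P": ["A0", "B1"], "A0": ["B0"], "B1": ["A1"],
         "B0": ["F"], "A1": ["F"], "F": []}

def has_cycle(graph):
    """Three-colour depth-first search for directed-cycle detection."""
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {v: WHITE for v in graph}

    def visit(v):
        colour[v] = GREY  # on the current search path
        for w in graph[v]:
            if colour[w] == GREY or (colour[w] == WHITE and visit(w)):
                return True  # back edge found: a directed cycle exists
        colour[v] = BLACK  # fully explored
        return False

    return any(colour[v] == WHITE and visit(v) for v in graph)

assert not has_cycle(solid)
```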
Fortunately, if univocality is satisfied, then we already know that the routed graph defines a function from bifurcation choices to the statuses of the branches (i.e.\ the binary variables encoding whether each branch happened). We can thus define the green dashed arrows as representing the functional dependencies within this function. Namely, there is a green dashed arrow from $X^\alpha$ to $Y^\beta$ just in case the branch status at $Y^\beta$ depends on the bifurcation choice at $X^\alpha$. For example, there is a green dashed arrow from $P$ to $A^0$ because one can choose whether or not $A^0$ happens by choosing which logical state to prepare the control qubit in at $P$. If a similar influence relation holds from $Y^\beta$ to $X^\alpha$ in the time-reversed version of the protocol, then we draw a red dashed arrow from $X^\alpha$ to $Y^\beta$. Doing this for all the branches gives us the dashed arrows in Figure \ref{fig:branch graph}. The full branch graph gives a complete account of the flow of information in the skeletal supermap of the switch. It represents both the quantum information that flows within the branches (the solid arrows) and the `which-branch' information that is affected by bifurcation choices (the dashed arrows). This second sort of information can be thought of classically, since it corresponds to preferred sectorisations of the state spaces. We also call it possibilistic, since it is purely about the binary question of whether a branch does or does not happen given certain bifurcation choices, and can accordingly be represented by the routes using the theory of relations, i.e.\ Boolean matrices. From this fine-grained perspective, it is clear that no information actually flows in a loop in \texttt{SWITCH}, since the branch graph of Figure \ref{fig:branch graph} satisfies
\begin{quote}
\textit{No loops:} There are no directed loops in the branch graph.
\end{quote}
According to the upcoming Theorem \ref{thm: Main} -- the main theorem of this paper -- the satisfaction of bi-univocality and no loops is enough to demonstrate the validity of the skeletal supermap, and hence of the switch itself. Thus the logical consistency of the switch is a consequence of the satisfaction of these principles. Remarkably though, a principle logically weaker than no loops is enough to ensure the validity of the supermap. We did not need to show that the branch graph contains no loops at all, but only that any loops it contains are of a weak, and harmless, type. Specifically, we needed to show that the graph satisfies the following principle, which we call \textbf{weak loops}.
\begin{quote}
\textit{Weak loops:} Any given loop in the branch graph is entirely made up of dashed arrows of a single colour.
\end{quote}
Our main theorem states that bi-univocality together with weak loops implies that a skeletal supermap is valid, and hence that any associated protocol is self-consistent. While all protocols we have studied that do not violate causal inequalities satisfy no loops, Section \ref{sec:lugano} will show that the Lugano process has green loops (see Figure \ref{fig: Lugano branch graph}). This will lead us to conjecture that \textit{the presence of monochromatic loops is precisely what enables the violation of causal inequalities}.
\subsection{Why do we need bi-univocality?}
Naively, one might imagine that the lack of a causal/informational loop among the branches is enough to guarantee that a protocol is consistent. In this subsection, we explain why this intuition fails. To this end, consider the supermap in Figure \ref{fig:grandfather smap}. A single wire is bent round in a loop, and serves both as input and as output to a node. The wire represents a qubit partitioned into sectors spanned by $\ket{0}$ and $\ket{1}$ respectively. We impose a delta route on the node, so that the transformations we insert must map each sector to itself.
\begin{figure} \centering \tikzfig{grandfather_smap} \caption{A routed skeletal supermap that leads to a grandfather paradox, where a qubit is sent back to itself. Formally, the wires bent into `cup' and `cap' shapes can be thought of as the unnormalised perfectly correlated entangled ket and bra respectively. The qubit is partitioned into sectors spanned by the logical states $\ket{0}$ and $\ket{1}$. The index-matching means that the agent at the node must map the logical states $\ket{i}$ to themselves, up to dephasing.} \label{fig:grandfather smap} \end{figure} The node has two branches, each corresponding to a one-dimensional sector. Hence, no information flows between the branches. Clearly, then, there can be no question of an informational loop within a branch. Nevertheless, the supermap is invalid, and fails to represent any logically consistent process. This is a consequence of the fact that \textit{nothing can determine the value of the index} $i$. For if something did, then univocality would be respected, and our main theorem would imply the validity of the supermap. The key problem here is one of underdetermination. The fact that $i$ is not fixed means that there is no point in the circuit where an agent could make a bifurcation choice determining which branch happens. So, even though there is no \textit{over}determination \textit{within} a branch (which would result in a grandfather paradox), there is still \textit{under}determination \textit{of} the branch. To see how this leads to concrete problems, one needs only to notice that inserting a CNOT gate into the supermap (where the NOT part acts on an ancillary system) leads to a trace-decreasing output transformation, and that this is a direct consequence of the underdetermination of $i$.\footnote{The reader might also notice that inserting a $Z$ Pauli matrix into the node results in a sort of grandfather paradox associated with the Fourier basis.
But the foregoing considerations show that even in the classical case, where there is no Fourier basis, we still need bi-univocality.} \subsection{Summary} We offer a brief summary of this section, where we showed how to represent the switch as a routed circuit and how to certify the validity of this circuit despite its feedback loops. We wrote the switch as a routed circuit (Figure \ref{fig:switch routed circuit}). We captured this circuit's basic structure by trimming it down to a `skeletal' routed supermap (Figure \ref{fig:switch skeletal}), from which the switch can be constructed, by inserting unitary transformations into the nodes. We represented the structure of the skeletal supermap as an equivalent routed graph (Figure \ref{fig:rswitch routed graph}). We then showed that this routed graph satisfies two conditions, bi-univocality and weak loops, which, by our main theorem, imply that the skeletal supermap is valid (i.e.\ takes unitaries to unitaries), which in turn implies that any routed circuit with its connectivity is valid as well. Bi-univocality requires that choices of bifurcation in the routed graph lead to a definite fact about the branch that happens at each node. It also requires a similar statement to hold about the time-reversed version of the routed graph, obtained by reversing the direction of the arrows. If bi-univocality holds, then we can ask whether the routed graph satisfies the weak loops condition. To evaluate this condition, we form a branch graph, in which solid arrows represent the ability of quantum information to flow between the different branches. Green dashed arrows indicate that bifurcation choices at one branch can influence whether another branch happens in the routed graph. Red dashed arrows represent the same thing for a time-reversed version of the routed graph. The weak loops condition states that any given loop in the branch graph must be formed entirely of dashed arrows of the same colour. 
The switch satisfies this trivially since its branch graph contains no loops at all. Our constructions do not only provide a technical way to certify the consistency of a process, but also make its inner structure evident. Indeed, the routed graph gives an intuition of the crucial structural features of the switch. Furthermore, its branch graph displays the order in which its branches happen, and tells us which branches control whether other branches happen. This will be particularly valuable when we perform the same reconstruction for more elaborate processes in Section \ref{sec: examples}. \section{The framework} \label{sec: theorem} In this section, we present our framework in detail and state our main theorem, which says that any routed graph satisfying bi-univocality and weak loops defines a valid superunitary. To keep things readable, we will give definitions at a semi-formal level; a fully formal account is given in Appendix \ref{app: Theorem}. \begin{figure} \centering \begin{subfigure}[c]{0.4\textwidth} \centering \tikzfig{IndexedGraph} \caption{An indexed graph.} \label{fig:Indexed Graph} \end{subfigure} \hspace{1cm}$\xrightarrow{\substack{\textrm{specifying}\\ \text{routes}}}$\hspace{0.2cm} \begin{subfigure}[c]{0.4\textwidth} \centering \tikzfig{RoutedGraph} \caption{A routed graph.} \label{fig:Routed Graph} \end{subfigure} \caption{Examples of an indexed graph and of a routed graph; the latter is obtained from the former by specifying a branched route at every node. The arrows not bearing indices have a trivial (i.e.\ a singleton) set of index values.} \label{fig:Indexed and Routed Graphs} \end{figure} The most basic notion we need is that of a routed graph: this is a directed multi\footnote{A multigraph is a graph in which there can be several arrows between two given nodes.
In the interest of generality, we will allow them, even though for the purposes of the certification of supermaps' validity, any multigraph could just be turned into an equivalent graph by merging wires.}-graph with decorated nodes and arrows. The nodes are decorated with routes, and the arrows are decorated with indices that are in turn equipped with a `dimension' for each index value. A routed graph with its routes still unspecified will be called an indexed graph. Examples are given in Figure \ref{fig:Indexed and Routed Graphs}. \begin{definition}[Indexed and routed graphs] An \emph{indexed graph} $\Gamma$ is a directed multigraph in which each arrow is attributed a non-empty set of index values. Each of these values is furthermore attributed a non-zero natural number, called its dimension\footnote{This will be the dimension of the corresponding sector in the interpretation of the graph as a supermap. Note that for our theorem, all we need to know is which sectors are one-dimensional.}. A \emph{routed graph} $(\Gamma, (\lambda_N)_{N \in \texttt{Nodes}_\Gamma})$ is an indexed graph for which a relation (or `route') has been specified at every node. The route $\lambda_N$ at node $N$ goes from the Cartesian product of the sets of indices of the arrows going into $N$, to that of the sets of indices of the arrows going out of $N$. \end{definition} We also allow these graphs to feature arrows `coming from nowhere' (resp.\ `going nowhere'): these will be interpreted as global inputs (resp.\ global outputs) of the supermap. We ask for these not to be indexed, that is, to have trivial (i.e.\ singleton) sets of index values.\footnote{This requirement is there only to make the statement of univocality simpler, as otherwise one would have to distinguish several cases.
Any routed graph with indexed input and output arrows can be turned into one without, by adjoining to it a global input node and a global output node.} \begin{figure} \centering $\tikzfig{Lambda} {\LARGE \cong} \quad \quad \quad \tikzfig{branchedrel}$ \caption{`Looking inside' a branched route $\lambda_N$ (which is to be used for the node $N$ of an indexed graph). On the left, we see $\lambda_N$ as a box: each of its input (resp.\ output) wires is the set of index values of one of the arrows going into $N$ (resp.\ coming out of it). On the right, we see its `unfolded' structure, specifying how $\lambda_N$ connects input values to output values; each of the input black dots corresponds to a possible value (more precisely, a possible tuple of values) of the input indices, and similarly for the outputs. } \label{fig: Looking inside} \end{figure} We will in fact not need all types of relations: we will restrict ourselves to considering \textit{branched} ones. \begin{definition}[Branched routes] A route $\lambda_N$ is \emph{branched} if any two of its input values are either connected to the exact same output values, or have no output values in common. \end{definition} An example is given in Figure \ref{fig: Looking inside}. As seen in this figure, a branched relation $\lambda_N$ defines compatible (partial) partitions of its input and output sets, which we call $\lambda_N$'s \textit{branches} (or, in a slight abuse of notation, $N$'s branches, which will be denoted $N^\alpha$ with $\alpha$ varying), with each input value of a branch being connected to all output values of this branch and vice versa. There can also be input (resp.\ output) values that are not connected to anything by $\lambda_N$; these will be said to be outside its practical inputs (resp.\ outputs), and are considered to be part of no branch at all.
These values correspond to sectors which are just there for formal purposes and will never be used in practice -- part of the role of bi-univocality will be to ensure that this does not lead to any inconsistencies. A skeletal routed supermap can be naturally defined from a routed graph. \begin{definition}[Skeletal supermap associated to a routed graph] Given a routed graph $(\Gamma, (\lambda_N)_N)$, its associated skeletal (routed) supermap is obtained by interpreting each wire as a sectorised Hilbert space, whose sectors are labelled by the set of index values of this wire, with each sector having the dimension that was assigned to its corresponding index value; and interpreting each node as a slot for a linear map, going from the tensor product of the Hilbert spaces associated to its incoming arrows, to that of the Hilbert spaces associated to its outgoing arrows, and following the route associated to that node. The supermap acts on linear maps by connecting them along the graph of $\Gamma$\footnote{Note that this procedure has an unambiguous meaning, despite the cycles in $\Gamma$, due to the fact that finite-dimensional complex linear maps form a traced monoidal category \cite{joyal_street_verity_1996}.}. \end{definition} Our goal is to define structural requirements on routed graphs ensuring that their associated supermap is a (routed) \textit{superunitary}; i.e.\ that it yields a unitary map when arbitrary unitary maps, following the routes, are plugged at each of its nodes. Note that a map being unitary, in this context, means that it is unitary when restricted to act only on its practical input space, consisting of the input sectors whose indices are practical inputs of the route, and to map to its similarly defined practical output space. Our first principle will be univocality.
The idea is that some branches feature \textit{bifurcations}, i.e.\ include several output values (e.g.\ branches $N^\textrm{I}$ and $N^\textrm{III}$ in Figure \ref{fig: Looking inside}). `Bifurcation choices' in a branch at a node -- i.e.\ choosing a single output value for this branch, and erasing the arrows to the other output values -- will in general lead to some branches at other nodes `not happening' -- i.e.\ to none of their input values being instantiated. Univocality tells us that \textit{any tuple of bifurcation choices} throughout the graph should lead to \textit{one and exactly one} branch happening at every node. To make this requirement formal, we will `augment' our relations, i.e.\ supplement them with ancillary wires: ancillary input wires with which bifurcation choices in each branch can be specified; and ancillary output wires which record, in a binary variable, whether each branch happened or not. \begin{figure} \centering $ \tikzfig{LambdaNoComment} \quad \xrightarrow{\textrm{augmenting}} \quad \scalebox{.92}{\tikzfig{LambdaAug}}$ \caption{The `augmented' version of the branched route $\lambda_N$ described in Figure \ref{fig: Looking inside}. The ancillary input wires for branches $N^\mathrm{II}$ and $N^\mathrm{IV}$ are not written, as they are trivial: each of these branches has only one output value.} \label{fig: Augmenting} \end{figure} \begin{definition}[Augmenting] We take a branched route $\lambda_N$. For each of its branches $N^\alpha$ we denote the set of output values of this branch as $\texttt{Ind}^\textrm{out}_{N^\alpha}$, and define a binary set $\texttt{Happens}_{N^\alpha} \cong \{0,1\}$.
The \emph{augmented version} $\lambda_N^\mathrm{aug}$ of $\lambda_N$ is the partial function going from $\lambda_N$'s input values and from the $\texttt{Ind}^\textrm{out}_{N^\alpha}$'s, to $\lambda_N$'s output values and the $\texttt{Happens}_{N^\alpha}$'s, defined in the following way: \begin{itemize} \item if its argument from $\lambda_N$'s input values is among the input values of a branch $N^\alpha$, then it returns its $\texttt{Ind}^\textrm{out}_{N^\alpha}$ argument, value $1$ in $\texttt{Happens}_{N^\alpha}$, and value $0$ in $\texttt{Happens}_{N^{\alpha'}}$ for $\alpha' \neq \alpha$; \item if its argument from $\lambda_N$'s input values is not among the input values of any branch -- i.e.\ if it is outside of $\lambda_N$'s practical input values -- then the output is undefined. \end{itemize} \end{definition} \begin{figure} \centering \tikzfig{Univocality} \caption{The `choice relation' for the routed graph of Figure \ref{fig:Routed Graph}. Here, we are assuming that $\nu$ has one branch, that $\lambda$ and $\sigma$ have two (with one of $\sigma$'s branches having trivial bifurcation choices), and that $\mu$ has three. For better readability, trivial wires are left implicit and ancillary wires are written in blue. } \label{fig: Univocality} \end{figure} To illustrate this definition, let us display what it gives in the case of the routed graph of the switch, depicted in Figure \ref{fig:rswitch routed graph}. The augmented version of $P$'s route has one extra binary input (encoding the bifurcation choice that can be made at this node), and no extra output (as $P$ only features one branch); this augmented version is just the identity function from the extra input to the original output (remembering that the original input of the route was trivial).
As for the route of node $A$, its augmented version features no extra input (as $A$ features no bifurcation choice), and two extra outputs: the first one encodes whether branch $A^0$ happened, the second one encodes whether branch $A^1$ happened. This augmented version (which has two inputs and four outputs) is the partial function defined by $(0,0) \mapsto (0,0,1,0)$ and $(1,1) \mapsto (1,1,0,1)$, and undefined on inputs $(0,1)$ and $(1,0)$ (which are outside the practical input values). The same goes for $B$. Finally, the augmented version of $F$'s route is identical to the original version (there are no bifurcation choices and only one branch). As represented in Figure \ref{fig: Augmenting}, the augmented version of a route features extra ancillary wires. One can then form a relation by connecting the non-ancillary wires of the $\lambda_N^\mathrm{aug}$'s according to the indexed graph $\Gamma$ (see Figure \ref{fig: Univocality} for an example)\footnote{Similarly to before, this procedure makes sense because relations form a traced monoidal category. The $\lambda_N^\mathrm{aug}$'s are here viewed as relations, as any partial function can be.}. We call this the `choice relation', which we will write $\Lambda_{(\Gamma, (\lambda_N)_N)}$. It goes from the bifurcation choices to the $\texttt{Happens}$ binary variables that tell us whether each branch happened. The requirement that the former unambiguously determine the latter then takes a natural form. \begin{principle}[Univocality and bi-univocality] A routed graph $(\Gamma, (\lambda_N)_N)$ satisfies the principle of \emph{univocality} if its choice relation $\Lambda_{(\Gamma, (\lambda_N)_N)}$ is a function. $(\Gamma, (\lambda_N)_N)$ satisfies the principle of \emph{bi-univocality} if both it and its adjoint $(\Gamma^\top, (\lambda_N^\top)_N)$ satisfy univocality.
\end{principle} The adjoint of a routed graph is simply the routed graph obtained by reversing the direction of its arrows, and taking the adjoints of its routes: it can be interpreted as its time-reversed version. Being bi-univocal thus means being `univocal both ways'. When univocality is satisfied, the choice relation -- which is then a choice function -- plays another role: its causal structure (defined by functional dependence) tells us which bifurcation choices can affect the status of which branch. This will define the green dashed arrows in the branch graph, whereas the analogous information in the choice function of the adjoint graph will define the (reverse of the) red dashed arrows. Our last job is to define the solid arrows in the branch graph. The idea is that the `$N^\alpha$' branch of node $N$ has a direct influence on the `$M^\beta$' branch of node $M$ if there is an arrow from $N$ to $M$ that doesn't become either inconsistent or trivial (i.e.\ reduce to either zero sectors or to a single one-dimensional one) when one fixes $N$ to be in branch $\alpha$ and $M$ to be in branch $\beta$. To capture this, we will have to talk about \textit{consistent assignments of values} to the indices of all arrows in the graph. \begin{definition}[Consistent assignment] A \emph{consistent assignment} of values to $(\Gamma, (\lambda_N)_N)$'s indices is an assignment of a value to each arrow's index, such that for any node $N$, the tuple of values for $N$'s inputs is related by $\lambda_N$ to the tuple of values for $N$'s outputs. \end{definition} Note that (as proven in Appendix \ref{app: Theorem}) an assignment is consistent if and only if at every node, the tuple of input values and the tuple of output values that it yields are in the same branch (and in particular are not outside the practical inputs/outputs). In that sense, one can talk about this consistent assignment of values as, in particular, assigning a given branch to every node.
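Consistent assignments can be enumerated by brute force. The following sketch does this for a toy routed graph of our own devising (not one of the paper's examples): two nodes $N$ and $M$ joined in a loop by arrows carrying binary indices $a$ and $b$, with identity routes at both nodes.

```python
from itertools import product

# Toy routed graph (hypothetical): arrow N->M with index 'a',
# arrow M->N with index 'b', both binary.
index_values = {"a": [0, 1], "b": [0, 1]}

# Routes as sets of (input-tuple, output-tuple) pairs; lambda_N relates
# N's incoming indices ('b',) to its outgoing ones ('a',).
routes = {
    "N": {"in": ("b",), "out": ("a",), "rel": {((0,), (0,)), ((1,), (1,))}},
    "M": {"in": ("a",), "out": ("b",), "rel": {((0,), (0,)), ((1,), (1,))}},
}

def consistent_assignments():
    """Yield every assignment of index values allowed by all routes."""
    names = sorted(index_values)
    for vals in product(*(index_values[n] for n in names)):
        assign = dict(zip(names, vals))
        if all((tuple(assign[i] for i in r["in"]),
                tuple(assign[o] for o in r["out"])) in r["rel"]
               for r in routes.values()):
            yield assign

print(list(consistent_assignments()))
# -> [{'a': 0, 'b': 0}, {'a': 1, 'b': 1}]
```

In line with the note above, each surviving assignment picks out one branch at every node: here the two assignments correspond to both nodes jointly sitting in their `$0$' branch or jointly in their `$1$' branch.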
The idea of solid arrows, embodied by the following definition, is then that one draws a solid arrow from $N^\alpha$ to $M^\beta$ if there is an arrow $A$ joining $N$ to $M$, except if $N^\alpha$ and $M^\beta$ can never happen jointly, or if there is a single value of $A$'s index compatible with both of them happening, and this value makes $A$ trivial. \begin{definition} Taking a branch $N^\alpha$ of node $N$ and a branch $M^\beta$ of node $M$, we say that there is a solid arrow $N^\alpha \to M^\beta$ if there exists an arrow from $N$ to $M$, except if: \begin{itemize} \item there are no consistent assignments of values that assign branch $\alpha$ to $N$ and branch $\beta$ to $M$; \item or if all such assignments assign the same value to the index of the arrow $N \to M$, and this value has dimension 1 (i.e.\ corresponds to a one-dimensional sector). \end{itemize} If there are several arrows from $N$ to $M$, then we say that there is a solid arrow $N^\alpha \to M^\beta$ unless the above applies to all of them. \end{definition} With this in our toolbox, we can define the branch graph.
\begin{definition}[Branch graph] The \emph{branch graph} of a routed graph $(\Gamma, (\lambda_N)_N)$ that satisfies bi-univocality is the graph in which: \begin{itemize} \item the nodes are given by the branches of $(\Gamma, (\lambda_N)_N)$'s nodes; \item solid arrows are given by the previous definition; \item there is a green dashed arrow $N^\alpha \to M^\beta$ if the choice function $\Lambda_{(\Gamma, (\lambda_N)_N)}$ features causal influence (i.e.\ functional dependence) from $\texttt{Ind}^\textrm{out}_{N^\alpha}$ to $\texttt{Happens}_{M^\beta}$; \item there is a red dashed arrow $N^\alpha \to M^\beta$ if the choice function of the adjoint graph, $\Lambda_{\left(\Gamma^\top, \left( \lambda_N^\top \right)_N \right)}$, features causal influence (i.e.\ functional dependence) from $\texttt{Ind}^\textrm{in}_{M^\beta}$ to $\texttt{Happens}_{N^\alpha}$. \end{itemize} \end{definition} \begin{figure} \centering \begin{subfigure}[b]{0.2\textwidth} \centering \tikzfig{branchgraph1} \caption{} \end{subfigure} \quad \begin{subfigure}[b]{0.2\textwidth} \centering \tikzfig{branchgraph2} \caption{} \end{subfigure} \quad \begin{subfigure}[b]{0.2\textwidth} \centering \tikzfig{branchgraph3} \caption{} \end{subfigure} \quad \begin{subfigure}[b]{0.2\textwidth} \centering \tikzfig{branchgraph4} \caption{} \end{subfigure} \caption{Examples of branch graphs. (a) and (b) satisfy the weak loops principle, but (c) and (d) do not. For (d), this is due to the presence of a bi-coloured $\infty$-shaped loop in the central layer.} \label{fig: Branch Graphs} \end{figure} Examples of branch graphs are shown in Figure \ref{fig: Branch Graphs}. Now that the branch graph is defined, we can check whether it satisfies our second principle.
\begin{principle}[Weak loops] We say that a loop in a branch graph is \emph{weak} if it is entirely made of dashed arrows of the same colour. A routed graph satisfies the principle of weak loops if every loop in its branch graph is weak. \end{principle} Note that, as a particular case, any routed graph whose branch graph features no loops trivially satisfies this principle. This will be sufficient to check the consistency of processes featuring (possibly dynamical) coherent control of causal order. We will conjecture that the more exotic processes, which violate causal inequalities, are characterised by the existence of weak loops in their branch graph. Finally, we can state our main theorem. \begin{theorem} \label{thm: Main} Let $(\Gamma, (\lambda_N)_N)$ be a routed graph satisfying the principles of bi-univocality and weak loops. (We then say that it is valid.) Then its associated skeletal supermap is a routed superunitary. \end{theorem} The proof of Theorem \ref{thm: Main} is given in Appendix \ref{app: Theorem}. The next corollary, which is direct, stresses the fact that there are many supermaps which can be obtained from this skeletal supermap, and that the validity of the latter implies that they are valid as well. \begin{corollary} Let $(\Gamma, (\lambda_N)_N)$ be a valid routed graph. Then, any supermap built from its associated skeletal supermap by plugging in unitaries at some of its nodes and unitary monopartite supermaps at other nodes is a superunitary. \end{corollary} \section{Examples of constructing processes with indefinite causal order} \label{sec: examples} In this section, we reconstruct three further examples of processes with indefinite causal order from valid routed graphs, namely the quantum 3-switch, the Grenoble process and the Lugano process. This will enable us to see each of these processes as a member of a large family of processes that can be constructed `in the same way' -- i.e.
from the same routed graph. This in turn will allow us to distinguish between those features of the process that are `accidental', and those that are essential for its consistency. What results is reminiscent of the situation for processes without indefinite causal order. Such processes can be represented as circuits, in which it is immediate that changing the particular transformations will preserve the consistency of the process, so long as the connectivity of the circuit is maintained. In our reconstructions of processes with indefinite causal order, it is likewise immediate that changing the particular transformations in a routed circuit decomposition preserves consistency, so long as the resulting routed circuit is still `fleshing out' the same (valid) routed graph. Before turning to the examples, we briefly explain a shorthand way of presenting the routed graphs. Rather than directly stating the route associated with each node, it is sometimes simpler to specify a \textit{global index constraint}, from which the individual routes can be derived. This global index constraint specifies the allowed joint value-assignments for all of the indices in the graph. Formally, it can be represented as a Boolean tensor $G$ over the Cartesian product of all the indices in the graph. We set the coefficient $G_i$ to be equal to 1 for allowed joint value-assignments $i$ (note that $i$ here denotes a list of index values), and $G_i=0$ for those that are not allowed. Then, we can calculate the route $\lambda_N$ at some specific node $N$ as the least restrictive route consistent with the global index constraint.
Assuming that the set of indices $\texttt{Ind}_N^\textnormal{in}$ going into the node and the set of outgoing indices $\texttt{Ind}_N^\textnormal{out}$ are disjoint, so that no arrows start and finish at that same node,\footnote{Note that the requirements of bi-univocality and weak loops imply that any indices on these `self-loops' would have to have values that either never happen, or else correspond to one-dimensional branches, and that the values of these indices are fixed by the other ingoing and outgoing arrows. This means that one gets exactly the same unitary processes from the routed graph if one removes the self-loops. Accordingly, the assumption of no self-loops does not sacrifice any generality.} we can calculate this by marginalising over the indices $\texttt{Ind} \backslash \{\texttt{Ind}_N^\textnormal{in} \sqcup \texttt{Ind}_N^\textnormal{out}\}$ that do not come out of or go into the node. Writing the indices as $i=(i_\textnormal{in}, i_\textnormal{out}, i')$, where $i'$ denotes the joint value-assignment of those `irrelevant' indices in $\texttt{Ind} \backslash \{\texttt{Ind}_N^\textnormal{in} \sqcup \texttt{Ind}_N^\textnormal{out}\}$, the marginalisation is performed by taking the Boolean sum over $i'$, \begin{equation} (\lambda_N)_{i_\textnormal{in}}^{i_\textnormal{out}} := \sum_{i'} G_{i_\textnormal{in}i_\textnormal{out}i'} \, . \end{equation} In the examples below, we represent the global index constraint by using a combination of index-matching on the routed graph and `floating' equations relating the indices written beside the graph. The idea with index-matching is that when indices on two different arrows are matched, the global index constraint must be 0 for all joint value-assignments in which they are not equal. Similarly, the global index constraint is 0 for all joint value-assignments not satisfying the floating equations.
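The marginalisation above can be sketched in a few lines. In this illustration of our own, three binary indices $j$, $k$, $m$ and the constraint $j=k$ are hypothetical, standing in for an index-matching; the derived route is the Boolean sum of $G$ over the irrelevant index:

```python
from itertools import product

# Hypothetical global index constraint G: indices j, k, m, all binary,
# with the index-matching "j == k" and m unconstrained. G is stored as
# the set of allowed joint value-assignments (its 1-entries).
indices = ("j", "k", "m")
G = {vals for vals in product([0, 1], repeat=3) if vals[0] == vals[1]}

def route_at_node(in_idx, out_idx):
    """Least restrictive route at a node with the given in/out indices:
    (lambda_N)_{i_in}^{i_out} = OR over i' of G_{i_in i_out i'}."""
    pos = {name: i for i, name in enumerate(indices)}
    route = set()
    for vals in G:  # Boolean sum over the irrelevant indices i'
        route.add((tuple(vals[pos[i]] for i in in_idx),
                   tuple(vals[pos[o]] for o in out_idx)))
    return route

# A node with incoming index j and outgoing index k inherits the matching:
print(sorted(route_at_node(("j",), ("k",))))
# -> [((0,), (0,)), ((1,), (1,))]
```

A node touching only $j$ and $m$ would instead get the full (unconstrained) relation, since $k$ is summed out.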
The global index constraint of the routed graph is then the most general Boolean matrix compatible with the index-matching and the equations. Similarly, we will also present routed \textit{circuits} using a global index constraint. In that case, we derive the routes associated with the individual \textit{transformations} (i.e.\ the boxes) that make up the circuit by marginalising over the global index constraint of the circuit. We want to stress, once again, that the use of index-matching and global index constraints is only a graphical shorthand: in order to study the graphs and check the principles, they have to be formally translated into routes for the nodes. \subsection{The quantum 3-switch} The quantum 3-switch \cite{colnaghi2012quantum} is a unitary process defined analogously to the quantum switch, but with three intermediate agents: Alice ($A$), Bob ($B$) and Charlie ($C$). The Past ($P$) consists of a 6-dimensional control qudit $P_C$ and a $d$-dimensional target qudit $P_T$. Depending on the initial state of the control qudit, the three agents receive the target qudit in a different order, as outlined in Table \ref{tab:3switch}. At the end, the target qudit is sent to the Future ($F$). \begin{table}[ht] \centering \begin{tabular}{c|c} Control state & Order \\ \hline $\ket{1}$ & $A-B-C$ \\ $\ket{2}$ & $A-C-B$ \\ $\ket{3}$ & $B-C-A$ \\ $\ket{4}$ & $B-A-C$ \\ $\ket{5}$ & $C-A-B$ \\ $\ket{6}$ & $C-B-A$ \\ \end{tabular} \caption{The relative order of the agents Alice ($A$), Bob ($B$) and Charlie ($C$) depending on the value of the control state.} \label{tab:3switch} \end{table} \subsubsection{The routed graph} We start by drawing a routed graph from which the quantum 3-switch, amongst other processes, can be constructed. This routed graph is given in Figure \ref{fig:3switch_routedgraph}.
The global index constraint is represented by matching the indices on different arrows, and by the floating equation $l+m+n+p+q+r=1$. This equation enforces that precisely one of the six summed-over indices is equal to one. Thus the global index constraint is the Boolean matrix that ensures that matched indices take the same value, and that exactly one of the six distinct indices is equal to 1. \begin{figure}[ht] \centering \tikzfig{figures/3switch_routedgraph} \caption{The routed graph for the quantum 3-switch, using a global index constraint.} \label{fig:3switch_routedgraph} \end{figure} The route at node $P$ (which we denote $\eta$) is, by definition, the most liberal route compatible with the global index constraint. This is the route that forces exactly one of its indices to be equal to $1$: \begin{equation} \begin{cases} \eta^{100000} = \eta^{010000} = \eta^{001000}=\eta^{000100} =\eta^{000010}=\eta^{000001} = 1 \, ; \\ \eta^{lmnpqr} =0 \quad {\rm otherwise}. \end{cases} \end{equation} The route $\eta$ also has a convenient graphical representation, depicted in Figure \ref{fig:3switchpast}. $\eta$ has a single branch with a bifurcation choice between six options, each corresponding to one of the indices $l,m,n,p,q,r$ being equal to 1. Each option enforces one of the six possible causal orders. \begin{figure} \centering \tikzfig{3switchPastRoute} \caption{The route and branch structure of the node $P$ of the quantum $3$-switch. There is one unique branch with a bifurcation choice between six options, each of which enforces a causal order.} \label{fig:3switchpast} \end{figure} Let us explain how this works in detail. In the routed graph for the standard switch, the arrow $P\rightarrow A$ came with two index values, corresponding to whether or not Alice received the message first.
But for the 3-switch, if Alice does receive the message first, then there are two further possibilities: either she comes first and the causal order is clockwise ($A-B-C$), or she comes first and the order is anticlockwise ($A-C-B$). For this reason, the arrow from $P$ to $A$ has three index values overall. The sectors where she gets the message first correspond to $(l=1, p=0)$ and $(l=0, p=1)$; while $(l=0, p=0)$ corresponds to a one-dimensional `dummy' sector. Likewise, all internal wires are associated with three sectors: two non-`dummy' sectors for when one of their indices equals one, and a `dummy' sector for when both are equal to zero\footnote{Note that the sectors with both indices equal to $1$, although formally present, are irrelevant: they correspond to impossible joint assignments of values.}. \begin{figure} \centering \tikzfig{figures/3switchaliceroute} \caption{The route and branch structure for the intermediate agents in the $3$-switch. There are six branches, each corresponding to a causal order.} \label{fig:3switchalice} \end{figure} Now suppose that the agent at $P$ makes the `$l=1$' bifurcation choice, so that the message is sent to Alice. The global index constraint then enforces the route at the nodes $A$, $B$, $C$ depicted in Figure \ref{fig:3switchalice}. Thus Alice's route implies that she has no choice but to preserve the value of $l$, meaning that she must send the message along the arrow from $A$ to $B$, since this is her only outgoing arrow that does not correspond to a dummy sector when $l=1$. Then Bob similarly has no choice but to pass the message to Charlie, and finally Charlie is forced to send the message into the Future. The net result is that the message moves inexorably along the path $P \rightarrow A \rightarrow B \rightarrow C \rightarrow F$ of arrows decorated with an $l$ index, giving the causal order $A-B-C$.
Thus, if an agent at $P$ makes the bifurcation choice that $l=1$, they pick out this causal order. Similarly, any option from the bifurcation choice enforces one of the six possible causal orders. In this sense, the bifurcation choice at $P$ is a choice between causal orders, just as in the case of the original quantum switch. This state of affairs -- that the causal order is determined by a bifurcation choice at the Past node -- is characteristic of the (non-dynamical) coherent control of causal orders.

Now let us show that the routed graph satisfies our two principles. It is clear that the bifurcation choice at $P$, picking which index is equal to 1, determines the status of all branches of the intermediate nodes, since these branches are all defined by a certain index equalling 1 (see Figure \ref{fig:3switchalice}). This bifurcation choice is the only one in the routed graph, and $P$ and $F$ each have just one branch (the route at $F$ is just the time-reversed version of the one at $P$, obtained by reversing the direction of the arrows in Figure \ref{fig:3switchpast}). Thus the sole bifurcation choice in the routed graph leads to a single branch happening at each node; formally speaking, we have a function from bifurcation choices to branch statuses. That is, the routed graph satisfies univocality. Since the routed graph is time-symmetric, it follows that it satisfies bi-univocality.

This allows us to draw the branch graph, following the rules in Section \ref{sec:checking_validity}: we display it in Figure \ref{fig:3switch_branchgraph}. In this graph, the six branches for each of the nodes $A$, $B$ and $C$ are denoted by the specification of which index is equal to 1 (with all the others equal to 0), e.g.\ $A^{l=1}, A^{p=1}$, etc. There are no loops in the branch graph, meaning that the routed graph trivially satisfies the weak loops condition.
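As a quick sanity check on the counting above, the route $\eta$ can be enumerated in a few lines of Python (an illustration in our own encoding, not part of the formalism): exactly six of the $2^6$ index assignments survive the floating equation, one per causal order.

```python
import itertools

# A minimal sketch (our own encoding): the route eta assigns 1 exactly to
# the index assignments with a single index equal to 1, i.e. those
# satisfying the floating equation l+m+n+p+q+r = 1.
eta = {idx: int(sum(idx) == 1) for idx in itertools.product((0, 1), repeat=6)}

options = [idx for idx, value in eta.items() if value == 1]
assert len(options) == 6  # one bifurcation option per causal order
assert len(options) == len(list(itertools.permutations('ABC')))
```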
We can thus invoke Theorem \ref{thm: Main} to conclude that any process that can be obtained from the routed graph of Figure \ref{fig:3switch_routedgraph}, including the quantum 3-switch, is a consistent quantum process.

\begin{figure}[ht]
	\centering
	\tikzfig{figures/3switch_branchgraph}
	\caption{The branch graph for the quantum 3-switch.}
	\label{fig:3switch_branchgraph}
\end{figure}

\subsubsection{The routed circuit}

\begin{figure}[ht]
	\centering
	\tikzfig{figures/3switch}
	\caption{A routed circuit diagram for the quantum 3-switch, using a global index constraint. To avoid too much clutter, instead of explicitly drawing loops, output lines that end with dots are to be interpreted as being looped back to join the corresponding input lines with dots (with the same system labels, including indices). Some wires are coloured for readability.}
	\label{fig:3switch}
\end{figure}

The routed circuit for the quantum 3-switch can be constructed from the routed graph in Figure \ref{fig:3switch_routedgraph} by inserting unitary transformations into the corresponding skeletal supermap. This is displayed in Figure \ref{fig:3switch}, where we have again used the shorthand of global index constraints. The routes of the transformations can be derived from the global index constraint, just like the routes of the nodes in the routed graph. The systems in the routed circuit have the following properties:
\begin{itemize}
	\item The systems $P_T,F_T,A_{\rm in}, A_{\rm out}, B_{\rm in}, B_{\rm out}, C_{\rm in}, C_{\rm out}$ are all isomorphic, and correspond to a $d$-dimensional space.
	\item $P_C, F_C$ are 6-dimensional control systems.
	\item The routed system $C_k$ is also a 6-dimensional control system, with an explicit partition into six one-dimensional sectors.
	\item The routed systems $R^{lp}, S^{mq}, T^{nr}, X^{ln}, Y^{pq}$ are all $(2d+1)$-dimensional systems, this time partitioned into \textit{two} $d$-dimensional sectors and a single 1-dimensional `dummy' sector. For example, $R^{lp} = R^{00}\oplus R^{10} \oplus R^{01}$, where $R^{00}$ is the 1-dimensional sector. The presence of two separate $d$-dimensional sectors corresponds to the fact that each of these wires can carry the message in two separate causal orders. We denote the unique state in the 1-dimensional sectors by $\ket{\rm dum}$.
\end{itemize}

The unitary $U_P$ at the bottom of the diagram is given by the isomorphism
\begin{equation}
	U_P : \begin{cases}
		\ket{1}_{P_C} \otimes \ket{\psi}_{P_T} \mapsto \ket{\psi}_{R^{10}} \otimes \ket{\rm dum}_{S^{00}} \otimes \ket{\rm dum}_{T^{00}} \\
		\ket{2}_{P_C} \otimes \ket{\psi}_{P_T} \mapsto \ket{\psi}_{R^{01}} \otimes \ket{\rm dum}_{S^{00}} \otimes \ket{\rm dum}_{T^{00}} \\
		\ket{3}_{P_C} \otimes \ket{\psi}_{P_T} \mapsto \ket{\rm dum}_{R^{00}} \otimes \ket{\psi}_{S^{10}} \otimes \ket{\rm dum}_{T^{00}} \\
		\ket{4}_{P_C} \otimes \ket{\psi}_{P_T} \mapsto \ket{\rm dum}_{R^{00}} \otimes \ket{\psi}_{S^{01}} \otimes \ket{\rm dum}_{T^{00}} \\
		\ket{5}_{P_C} \otimes \ket{\psi}_{P_T} \mapsto \ket{\rm dum}_{R^{00}} \otimes \ket{\rm dum}_{S^{00}} \otimes \ket{\psi}_{T^{10}} \\
		\ket{6}_{P_C} \otimes \ket{\psi}_{P_T} \mapsto \ket{\rm dum}_{R^{00}} \otimes \ket{\rm dum}_{S^{00}} \otimes \ket{\psi}_{T^{01}} \\
	\end{cases}
	\label{eq:U_3switch}
\end{equation}
between the non-routed system $P_C \otimes P_T$ ($6d$-dimensional) and the routed system\\
$\bigoplus_{lmnpqr} \eta^{lmnpqr} R^{lp} \otimes S^{mq} \otimes T^{nr}$ [also of dimension $2(d\times1\times 1) +2(1\times d\times 1) +2(1\times 1\times d)=6d$]. $U_F$ has the same form as $U_P$: the $\ket{1}_{F_C}$ state of the control system is again mapped to the $l=1$ sector, $\ket{2}_{F_C}$ is again mapped to the $p=1$ sector, and so on.
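The net effect of this circuit can be cross-checked numerically. The following Python sketch (our own illustration, not the paper's formalism) builds the resulting 3-switch supermap output $\sum_k \ket{k}\bra{k} \otimes U_{\pi_k}$, with the control selecting the order in which three arbitrary unitaries act, and verifies that it is unitary on the $6d$-dimensional space $P_C \otimes P_T$.

```python
import itertools
import numpy as np

d = 2  # target dimension; any d works in this sketch
rng = np.random.default_rng(0)

def random_unitary(n):
    # random unitary via QR decomposition, with phases fixed
    q, r = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    return q * (np.diag(r) / np.abs(np.diag(r)))

A, B, C = (random_unitary(d) for _ in range(3))

# one block per control state |1>,...,|6>, one causal order per block
blocks = [U3 @ U2 @ U1 for (U1, U2, U3) in itertools.permutations((A, B, C))]

W = np.zeros((6 * d, 6 * d), dtype=complex)
for k, blk in enumerate(blocks):
    W[k * d:(k + 1) * d, k * d:(k + 1) * d] = blk

# the 3-switch output is unitary on the 6d-dimensional space P_C (x) P_T
assert np.allclose(W @ W.conj().T, np.eye(6 * d))
```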
The other unitaries denoted by $U$ are the unique unitaries of the form above that respect the index-matching.

\subsection{The Grenoble process}

In their 2021 paper \cite{wechs2021quantum}, Wechs and co-authors from Grenoble presented a new tripartite process with \textit{dynamical} indefinite causal order, that is, where the causal order is not predetermined at the start of the process, but can be influenced by the intermediate agents themselves. In the present work, we shall call this process the \textit{Grenoble process}.

Like the 3-switch, the Grenoble process involves three intermediate agents, who receive information from the global Past and ultimately send information into the global Future.\footnote{Note that in the original formulation in Ref.\ \cite{wechs2021quantum}, the Future is split into more than one party, whilst in this work, to simplify the presentation, we consider only one Future party.} The Past (P) consists of a 3-dimensional control qutrit $P_C$ and a 2-dimensional target qubit $P_T$. As with the previous processes we have studied, the logical state of the control system determines which of the intermediate agents will receive the message first. However, unlike the previous processes, this control system does not enforce a single causal order. This is because the agent who receives the message first is free to choose which agent will receive it second. In particular, the logical state of the target qubit after it passes through the first node will determine who gets it second: $\ket{0}$ means it will be sent in clockwise order (for example, to Bob if Alice was first), while $\ket{1}$ means it will be sent in anticlockwise order (for example, to Charlie if Alice was first). Finally, before the action of the third and final agent, the information about the relative order of the first two agents is scrambled onto an ancillary qubit, which is transferred directly to the Future (F).
In the Grenoble process, the emergent causal order depends not only on the global Past, but also on the actions of the intermediate agents. This is the hallmark of \textit{dynamical} coherent control of causal order. In our terms, this will correspond to the fact that a causal order (and the branch statuses that fix it) is determined not only by a bifurcation choice at the Past, but also by bifurcation choices of the intermediate agents.

\subsubsection{The routed graph}

To begin with, we write down a routed graph from which the Grenoble process, amongst others, can be constructed. This graph is given in Figure \ref{fig:grenoble_routedgraph}, again using global index constraints.

\begin{figure}[ht]
	\centering
	\scalebox{1}{\tikzfig{figures/grenoble_routedgraph}}
	\caption{The routed graph for the Grenoble process, using a global index constraint.}
	\label{fig:grenoble_routedgraph}
\end{figure}

For each arrow, the sector corresponding to all of its indices being equal to zero is a one-dimensional sector. The global index constraint (in particular, the floating equation $l+m+n=1$) imposes a route at the node $P$ that forces exactly one of the outgoing indices to equal 1, depicted in Figure \ref{fig:grenoblepast}. The route at the $F$ node is just the time-reverse of the route at the Past. The global index constraint also gives rise to a route at $A$ depicted in Figure \ref{fig:grenoble route}. The routes at $B$ and $C$ are closely analogous.

\begin{figure}
	\centering
	\tikzfig{GrenoblePast}
	\caption{The branch structure for the node $P$ of the Grenoble process.
The bifurcation choice in the unique branch determines which agent comes first.}
	\label{fig:grenoblepast}
\end{figure}

\begin{figure}
	\centering
	\tikzfig{GrenobleRoutes2}
	\caption{The branch structure for the node $A$ of the Grenoble process.}
	\label{fig:grenoble route}
\end{figure}

Just like the routed graph for the 3-switch, the bifurcation choice at $P$ determines which agent comes first. But unlike the 3-switch, this bifurcation does not enforce an entire causal order. Rather, it is left to the first intermediate agent to decide which one should come second. For example, suppose that an agent at $P$ makes the bifurcation choice $l=1$. This sends the message to Alice (since the only outgoing arrow from $P$ that is not associated with a trivial dummy sector in this case is $P \rightarrow A$). This leads to Alice having the binary bifurcation choice associated with the branch $A^{l=1}$, depicted in Figure \ref{fig:grenoble route}. This bifurcation choice determines which agent comes second.

For example, suppose Alice chooses the bifurcation option $l_1=1$. Then the message is passed along the `$l_1n_2$'-indexed arrow to Bob. Bob's route then implies that he has no such choice: he is forced to preserve the value of $l_1=1$, and is thereby compelled to send the message along the $m_1l_1$ arrow to Charlie (he is confined to a branch $B^{l_1=1}$, analogous to $A^{n_1=1}$ in Figure \ref{fig:grenoble route}). Finally, Charlie, confined to a branch $C^{h=1}$ analogous to $A^{f=1}$ in Figure \ref{fig:grenoble route}, is forced to send the message off into the Future. Thus Alice's choice $l_1=1$ enforces the clockwise causal order $A-B-C$. On the other hand, choosing $l_2=1$ leads to the anticlockwise order $A-C-B$. The situation is analogous if another one of the agents comes first. If Bob comes first, he makes a bifurcation choice between $m_1=1$ and $m_2=1$ that enforces either the clockwise order $B-C-A$ or the anticlockwise order $B-A-C$, respectively.
Finally, if Charlie comes first, he chooses between $n_1=1$, giving the clockwise order $C-A-B$, and $n_2=1$, giving the anticlockwise order $C-B-A$.

This scenario also allows for the disappearance of the information about the order of the agents that have already acted. Indeed, suppose that Alice comes last. This means she has either received the message coming clockwise from Charlie, or anticlockwise from Bob: i.e., either $m_1=1$ or $n_2=1$, respectively. In both cases, the floating equation $f=m_1+n_2$ guarantees that $f=1$, meaning that the information about which agent came first and which came second is lost\footnote{More specifically, this information is no longer tracked by the route structure. Of course, depending on Alice's choice of intervention, this information could still be sent to the Future and be read there; but the scenario does not ask for it to be preserved in a specific structural way.}. This can be seen in the structure of the `$f=1$' branch in Figure \ref{fig:grenoble route}. Again, the situation is analogous if another agent comes last.

To construct the branch graph, consider the following. The node $P$ consists of a single branch with a bifurcation choice between three options, each corresponding to the case when one of the three indices $l,m,n$ equals 1. In the time-reversed version of the routed graph, the node $F$ has a bifurcation with three options, each corresponding to the case when one of the three indices $f,g,h$ equals 1.

\begin{figure}[ht]
	\centering
	\tikzfig{figures/grenoble_branchgraph}
	\caption{The branch graph for the Grenoble process. For clarity, we have omitted the green and red dashed arrows from $P$ and to $F$, respectively: they simply point upwards in the diagram to/from all of the other branched nodes.}
	\label{fig:grenoble_branchgraph}
\end{figure}

The routes at the nodes $A,B,C$ consist of four branches, as illustrated in Figure \ref{fig:grenoble route}.
One of these branches corresponds to the case when the index on the wire coming directly from $P$ equals 1, with a bifurcation between two options splitting this index into an index of the same name with subscript either 1 or 2 (corresponding to whether the message is sent clockwise or anticlockwise). Another branch corresponds to the case when the index on the wire going directly to $F$ equals 1, with a bifurcation in the time-reversed routed graph combining the second index of each of the two wires coming in from the other agents (corresponding to whether the message came from the clockwise or anticlockwise direction). The final two branches correspond to the cases when one of the two indices that appear on both the input and output wires of the node is equal to 1. Following Figure \ref{fig:grenoble route}, we shall denote these branches by superscripts labelling which index is equal to 1.

The fact that the routed graph satisfies univocality is implicit in the above explanation of how the bifurcation choices pick out a causal order. For they do so precisely by determining which branch happens at each node. For example, the bifurcation choice $l=1$ at $P$ leads to the branch $A^{l=1}$ happening, corresponding to Alice coming first. Then Alice's bifurcation choice determines which branches happen at $B$ and $C$: $B^{l_1=1}$ and $C^{h=1}$ if she chooses $l_1=1$; $C^{l_2=1}$ and $B^{g=1}$ if she chooses $l_2=1$. In general, the bifurcation choices at $P$ and at the resulting first intermediate node always determine which branches happen. Thus, the routed graph satisfies univocality. Since the routed graph is time-symmetric, it immediately follows that it satisfies bi-univocality. This allows us to draw the branch graph, which is shown in Figure \ref{fig:grenoble_branchgraph}. Since this branch graph has no loops, it trivially satisfies our weak loops condition. Thus the routed graph is valid.
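The univocality function just described can be encoded as a small lookup. The sketch below is our own toy encoding (labels and helper names are ours, not the paper's): the Past's choice picks the first agent, the first agent's binary choice picks the direction, and together they fix one happening branch per node.

```python
# Toy encoding of the univocality function for the Grenoble routed graph:
# (first agent, direction) -> (causal order, happening branch per node).
CYCLE = ('A', 'B', 'C')

def branch_statuses(first, clockwise):
    i = CYCLE.index(first)
    step = 1 if clockwise else -1
    order = tuple(CYCLE[(i + k * step) % 3] for k in range(3))
    # one happening branch per node: a 'from-P' branch for the first agent,
    # a subscripted branch for the second, a 'to-F' branch for the last
    return order, {order[0]: 'from-P', order[1]: 'middle', order[2]: 'to-F'}

# every pair of bifurcation choices yields a total causal order over {A,B,C}
for first in CYCLE:
    for clockwise in (True, False):
        order, statuses = branch_statuses(first, clockwise)
        assert sorted(order) == sorted(CYCLE)

assert branch_statuses('A', True)[0] == ('A', 'B', 'C')   # Alice picks l_1 = 1
assert branch_statuses('A', False)[0] == ('A', 'C', 'B')  # Alice picks l_2 = 1
```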
It follows that the Grenoble process -- and any other process constructed from this routed graph -- is consistent.

\subsubsection{The routed circuit}

\begin{figure}[ht]
	\centering
	\tikzfig{figures/grenoble}
	\caption{A routed circuit diagram for the Grenoble process, using a global index constraint. To avoid graphical clutter, we have avoided explicitly drawing loops. Instead, output lines that end with dots are to be interpreted as being looped back to join the corresponding input lines with dots (with the same system labels, including indices). Wires are coloured for better readability.}
	\label{fig:grenoble}
\end{figure}

By inserting suitable unitary transformations into the skeletal supermap associated with Figure \ref{fig:grenoble_routedgraph}, we can now construct the Grenoble process. A routed circuit for the Grenoble process is given in Figure \ref{fig:grenoble}:
\begin{itemize}
	\item The systems $P_T,F_T,A_{\rm in}, A_{\rm out}, B_{\rm in}, B_{\rm out}, C_{\rm in}, C_{\rm out}$ are all isomorphic, and correspond to a 2-dimensional target Hilbert space (encoding the message).
	\item $P_C$ and $F_C$ are 3-dimensional control systems; $F_A$ is a 2-dimensional ancillary system.
	\item The routed systems $Q^{fl n_1 m_2}, Q^{gm l_1 n_2}, Q^{hn m_1 l_2}$ are 4-dimensional control systems, with an explicit partition into four 1-dimensional sectors, each corresponding to exactly one of their four indices being equal to one.
	\item The routed systems $R^{l}, R^{f}, S^m, S^g, T^n, T^h$ are 3-dimensional systems, with an explicit partition into one 1-dimensional `dummy' sector and one 2-dimensional `message' sector; for example, $R^l = R^0 \oplus R^1$, where $R^{0}$ is the 1-dimensional sector.
	\item The routed systems $D^f, E^g, K^h$ are similarly 3-dimensional systems, with an explicit partition into one 1-dimensional `dummy' sector and one 2-dimensional `ancillary' sector; for example, $D^f = D^0 \oplus D^1$, where $D^{0}$ is the 1-dimensional sector. The 2-dimensional `ancillary' system will be used to store the information about whether the message was sent clockwise or anticlockwise (or in a superposition of the two) after the first agent, conditional on the state of the qubit before the action of the third agent.
	\item The routed systems $X^{n_1 m_1}, Y^{m_2 n_2}, X^{l_1 n_1}, Y^{n_2 l_1}, X^{m_1 l_1}, Y^{l_2 m_2}$ are all $4$-dimensional systems, partitioned into \textit{one} $2$-dimensional `message' sector (corresponding to the message travelling from the second to the third agent), \textit{one} 1-dimensional `message' sector (corresponding to the message travelling from the first to the second agent, in which case the space is only one-dimensional because the state of the message itself determines to whom it is sent next), and \textit{one} 1-dimensional `dummy' sector. For example, $X^{n_1 m_1} = X^{00}\oplus X^{10} \oplus X^{01}$, where $X^{00}$ is the 1-dimensional `dummy' sector and $X^{10}$ is the 1-dimensional `message' sector.
\end{itemize}

The global index constraint imposes a route $\delta^{(l+m+n), 1}$ on $U_P$ that forces exactly one of its output indices to be equal to 1. In other words, its practical output space is $\bigoplus_{lmn} \delta^{(l+m+n), 1} R^{l} \otimes S^{m} \otimes T^{n}$. $U_P$ is a three-party generalisation of the superposition-of-paths unitary (\ref{sup channels def}) from Section \ref{sec:2switch}.
Its action is given by the following, where we label the kets by individual sectors, rather than by systems:
\begin{equation}
	U_P : \begin{cases}
		\ket{1}_{P_C} \otimes \ket{\psi}_{P_T} \mapsto \ket{\psi}_{R^{1}} \otimes \ket{\rm dum}_{S^{0}} \otimes \ket{\rm dum}_{T^{0}} \\
		\ket{2}_{P_C} \otimes \ket{\psi}_{P_T} \mapsto \ket{\rm dum}_{R^{0}} \otimes \ket{\psi}_{S^{1}} \otimes \ket{\rm dum}_{T^{0}} \\
		\ket{3}_{P_C} \otimes \ket{\psi}_{P_T} \mapsto \ket{\rm dum}_{R^{0}} \otimes \ket{\rm dum}_{S^{0}} \otimes \ket{\psi}_{T^{1}} \\
	\end{cases}
	\label{eq:U_grenoble}
\end{equation}
Thus $U_P$ defines a unitary transformation from $P_C \otimes P_T$ to $\bigoplus_{lmn} \delta^{(l+m+n), 1} R^{l} \otimes S^{m} \otimes T^{n}$. Since the global index constraint (in particular, the floating equation $l+m+n=1$) restricts $U_P$'s practical output space to precisely this direct sum, $U_P$ defines a routed unitary transformation.
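The dimension count behind this claim can be verified directly. The following short sketch (our own illustration) sums the sector dimensions of the practical output space and checks that they match $\dim(P_C \otimes P_T) = 3 \times 2$, as required for a routed unitary.

```python
import itertools

d = 2  # dimension of the target qubit
sector_dim = {0: 1, 1: d}  # index 0: 1-dim dummy sector; index 1: message sector

# practical output space of U_P: only triples (l, m, n) with l + m + n = 1
out_dim = sum(sector_dim[l] * sector_dim[m] * sector_dim[n]
              for l, m, n in itertools.product((0, 1), repeat=3)
              if l + m + n == 1)

assert out_dim == 3 * d  # matches dim(P_C (x) P_T), so U_P can be unitary
```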
$V_A$ is defined below. Note that here the labelling by sectors is necessary to distinguish between states belonging to different sectors that we label with the same ket, e.g.\ $\ket{0}_{X^{01}}$ and $\ket{0}_{X^{10}}$.
\begin{equation}
	\label{eq:v_grenoble}
	V_A: \begin{cases}
		\ket{0}_{R^1} \otimes \ket{{\rm dum}}_{X^{00}} \otimes \ket{{\rm dum}}_{Y^{00}} \mapsto \ket{0}_{A_{\rm in}} \otimes \ket{0}_Q \otimes \ket{{\rm dum}}_{D^0} \\
		\ket{1}_{R^1} \otimes \ket{{\rm dum}}_{X^{00}} \otimes \ket{{\rm dum}}_{Y^{00}} \mapsto \ket{1}_{A_{\rm in}} \otimes \ket{0}_Q \otimes \ket{{\rm dum}}_{D^0} \\
		\ket{{\rm dum}}_{R^0} \otimes \ket{0}_{X^{10}} \otimes \ket{{\rm dum}}_{Y^{00}} \mapsto \ket{0}_{A_{\rm in}} \otimes \ket{1}_Q \otimes \ket{{\rm dum}}_{D^0} \\
		\ket{{\rm dum}}_{R^0} \otimes \ket{{\rm dum}}_{X^{00}} \otimes \ket{1}_{Y^{10}} \mapsto \ket{1}_{A_{\rm in}} \otimes \ket{2}_Q \otimes \ket{{\rm dum}}_{D^0} \\
		\ket{{\rm dum}}_{R^0} \otimes \ket{0}_{X^{01}} \otimes \ket{{\rm dum}}_{Y^{00}} \mapsto \ket{0}_{A_{\rm in}} \otimes \ket{3}_Q \otimes \ket{0}_{D^1} \\
		\ket{{\rm dum}}_{R^0} \otimes \ket{1}_{X^{01}} \otimes \ket{{\rm dum}}_{Y^{00}} \mapsto \ket{1}_{A_{\rm in}} \otimes \ket{3}_Q \otimes \ket{1}_{D^1} \\
		\ket{{\rm dum}}_{R^0} \otimes \ket{{\rm dum}}_{X^{00}} \otimes \ket{0}_{Y^{01}} \mapsto \ket{0}_{A_{\rm in}} \otimes \ket{3}_Q \otimes \ket{1}_{D^1} \\
		\ket{{\rm dum}}_{R^0} \otimes \ket{{\rm dum}}_{X^{00}} \otimes \ket{1}_{Y^{01}} \mapsto \ket{1}_{A_{\rm in}} \otimes \ket{3}_Q \otimes \ket{0}_{D^1} \\
	\end{cases}
\end{equation}
Since the global index constraint restricts $V_A$'s practical input and output spaces to those sectors where exactly one index is equal to 1, it also defines a routed isometry\footnote{The notion of a routed isometry is defined similarly to that of a routed unitary \cite{vanrietvelde2021routed}.}. $V_B$ and $V_C$ are defined similarly.
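That the eight mappings of $V_A$ above form an isometry can be checked mechanically: their images are distinct basis vectors, so the columns of the resulting matrix are orthonormal. In the sketch below, the basis bookkeeping (sector orderings and index conventions) is our own assumption for illustration.

```python
import numpy as np

# Our own basis bookkeeping (an assumption of this sketch): A_in has 2
# states, Q has 4, and the D-wire is encoded with index 0 = dummy,
# 1 = |0>_{D^1}, 2 = |1>_{D^1}.
def out_index(a, q, d_sec):
    return (a * 4 + q) * 3 + d_sec

# images of the eight input basis states of V_A, in the order listed
images = [(0, 0, 0), (1, 0, 0),  # message arriving from P on R^1
          (0, 1, 0),             # message on X^{10}
          (1, 2, 0),             # message on Y^{10}
          (0, 3, 1), (1, 3, 2),  # Alice last, message on X^{01}
          (0, 3, 2), (1, 3, 1)]  # Alice last, message on Y^{01}

V = np.zeros((2 * 4 * 3, len(images)))
for col, (a, q, d_sec) in enumerate(images):
    V[out_index(a, q, d_sec), col] = 1.0

# isometry check: the columns of V_A are orthonormal
assert np.allclose(V.T @ V, np.eye(len(images)))
```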
The routed unitary $W_A$ is defined as follows:
\begin{equation}
	\label{eq:w_grenoble}
	W_A: \begin{cases}
		\ket{0}_{A_{\rm out}} \otimes \ket{0}_Q \mapsto \ket{{\rm dum}}_{R^0} \otimes \ket{0}_{X^{10}} \otimes \ket{{\rm dum}}_{Y^{00}} \\
		\ket{1}_{A_{\rm out}} \otimes \ket{0}_Q \mapsto \ket{{\rm dum}}_{R^0} \otimes \ket{{\rm dum}}_{X^{00}} \otimes \ket{1}_{Y^{10}} \\
		\ket{0}_{A_{\rm out}} \otimes \ket{1}_Q \mapsto \ket{{\rm dum}}_{R^0} \otimes \ket{0}_{X^{01}} \otimes \ket{{\rm dum}}_{Y^{00}} \\
		\ket{1}_{A_{\rm out}} \otimes \ket{1}_Q \mapsto \ket{{\rm dum}}_{R^0} \otimes \ket{1}_{X^{01}} \otimes \ket{{\rm dum}}_{Y^{00}} \\
		\ket{0}_{A_{\rm out}} \otimes \ket{2}_Q \mapsto \ket{{\rm dum}}_{R^0} \otimes \ket{{\rm dum}}_{X^{00}} \otimes \ket{0}_{Y^{01}} \\
		\ket{1}_{A_{\rm out}} \otimes \ket{2}_Q \mapsto \ket{{\rm dum}}_{R^0} \otimes \ket{{\rm dum}}_{X^{00}} \otimes \ket{1}_{Y^{01}} \\
		\ket{0}_{A_{\rm out}} \otimes \ket{3}_Q \mapsto \ket{0}_{R^1} \otimes \ket{{\rm dum}}_{X^{00}} \otimes \ket{{\rm dum}}_{Y^{00}} \\
		\ket{1}_{A_{\rm out}} \otimes \ket{3}_Q \mapsto \ket{1}_{R^1} \otimes \ket{{\rm dum}}_{X^{00}} \otimes \ket{{\rm dum}}_{Y^{00}} \\
	\end{cases}
\end{equation}
The routed unitaries $W_B$ and $W_C$ are defined in a similar way.
Finally, the routed unitary $\textit{\AE}_F$ is given by the following:
\begin{equation}
	\label{eq:ae_grenoble}
	\textit{\AE}_F: \begin{cases}
		\ket{\psi}_{R^1} \otimes \ket{\xi}_{D^1} \otimes \ket{{\rm dum}}_{S^0} \otimes \ket{{\rm dum}}_{E^0} \otimes \ket{{\rm dum}}_{T^0} \otimes \ket{{\rm dum}}_{K^0} \mapsto \ket{\xi}_{F_A} \otimes \ket{1}_{F_C} \otimes \ket{\psi}_{F_T} \\
		\ket{{\rm dum}}_{R^0} \otimes \ket{{\rm dum}}_{D^0} \otimes \ket{\psi}_{S^1} \otimes \ket{\xi}_{E^1} \otimes \ket{{\rm dum}}_{T^0} \otimes \ket{{\rm dum}}_{K^0} \mapsto \ket{\xi}_{F_A} \otimes \ket{2}_{F_C} \otimes \ket{\psi}_{F_T} \\
		\ket{{\rm dum}}_{R^0} \otimes \ket{{\rm dum}}_{D^0} \otimes \ket{{\rm dum}}_{S^0} \otimes \ket{{\rm dum}}_{E^0} \otimes \ket{\psi}_{T^1} \otimes \ket{\xi}_{K^1} \mapsto \ket{\xi}_{F_A} \otimes \ket{3}_{F_C} \otimes \ket{\psi}_{F_T} \\
	\end{cases}
\end{equation}
Note that the Grenoble process is an isometric process, with the overall output dimension greater than the overall input dimension (in particular, each $V$ is a routed isometry). The process can be made unitary in a natural way, by adding an extra 2-dimensional ancillary qubit to the input of the Past and adding routed wires of dimension $1+2$ from the Past to each of the routed unitaries $W$, bearing the same index as the wire from the Past to the corresponding $V$. This makes the process symmetric in time. As a result, this increases the dimension of the Hilbert space of the sector carrying the message between the first and second agents from 1 to 2. In turn, this increases the dimensionality of the input space of the unitaries $V$, making the entire process unitary.
Note also that the Future cannot necessarily determine the relative order of the first two agents from their control and ancillary qubits $F_C, F_A$, if the third agent performs a non-unitary operation (because the order information encoded in the ancillary qubit relied on knowledge of the state of the message before the action of the third agent). One peculiar feature of the Grenoble process is that the qubit that we have called the `target qubit' -- that is, the system that passes between the intermediate agents -- plays a dual role. On the one hand, it is the `message' that the agents receive. On the other hand, it also plays a role in determining the causal order. In particular, after it passes through the first agent, its logical state determines which agent receives it next. Thus if Alice comes first and wants to send the target qubit to Bob, she must send him the $\ket{0}$ state, but if she wants to send it to Charlie, she must send him $\ket{1}$. Our reconstruction of the Grenoble process makes it obvious that this feature is not necessary to make the process consistent. Starting from the same routed graph, one can easily define a variation on the Grenoble process, in which Alice is also given a second, `control' qubit. This control qubit determines which agent comes second, leaving Alice free to send that agent whatever state on the target qubit she likes. Bob and Charlie can also be given their own qubits. Since this process can be obtained by fleshing out a routed graph whose validity we have already checked, it is immediate that this new process is also consistent. This illustrates a useful feature of our framework for constructing processes with indefinite causal order; namely, that variations on a process can be defined in a straightforward way, leading to a clearer understanding of which features of the original process were essential for its logical consistency, and which other features can be changed at will. 
\subsection{The Lugano process}
\label{sec:lugano}

The Lugano process, discovered by Ara\'ujo and Feix and then published and further studied by Baumeler and Wolf \cite{baumeler2014maximal, baumeler2016space} (and therefore sometimes also called BW or AF/BW), is the seminal example of a unitary process violating causal inequalities. It was first presented as a classical process, whose unitary extension to quantum theory can be derived in a straightforward way \cite{araujo2017purification}. As we place ourselves in a general quantum framework here, we will primarily focus on this quantum version of the process; we note that the classical version can be obtained from the quantum one by feeding it specific input states and introducing decoherence in each of the wires of the circuit. This shows, more generally, that at least some exotic classical processes are also part of the class of processes that can be built through our procedure.

Indeed, we will show here how the (unitary) Lugano process can be constructed from a valid routed graph; this will provide an example of a process violating causal inequalities that can also be accommodated by our framework. In fact, we will derive a larger family of processes, defined by the same valid routed graph, and show how the Lugano process can be obtained as the simplest instance of this family. The other processes in this family share the basic behaviour of the Lugano process, but can feature, on top of it, arbitrarily large dimensions and arbitrarily complex operations.

\subsubsection{The logical structure}

Before we present the routed graph for the construction of the Lugano process, let us start with an intuitive account of the logical structure lying at its heart. This logical structure can be presented as a voting protocol involving three agents, in which each of the agents receives part of the result of the vote before having even cast their vote.
Why this is possible without leading to any logical paradox, of the grandfather type, is the central point to understand. In this voting protocol, each agent casts a vote for which of the other two agents they would like to see come last in the causal order. Alice, for instance, can either vote for Bob or for Charlie to come last. If there is a majority, then the winning agent can both i) learn that they won the vote, and ii) receive (arbitrarily large) messages from each of the two losing agents. As for the two losers, each of them can only learn that they lost (i.e.\ no majority was obtained in favour of them), and they cannot receive any messages from the other agents. If no majority is obtained, then all agents learn that they lost, and none of them can signal to any other. This voting protocol would have nothing surprising if it assumed that the winner learns of their victory and receives messages from the losers `after' all the votes are cast. Yet in the Lugano process, the crucial fact is that Alice, for instance, learns whether she won, and (if she won) receives Bob and Charlie's messages, \textit{before} she casts her own vote; and the same goes for Bob and Charlie. This sounds dangerously close to a grandfather paradox, since each agent contributes to an outcome that they might become aware of before they make their contribution. It seems likely that the agents could somehow take advantage of this system to send messages back to their own past, and decide what they do based on those messages, leading to logical inconsistencies. Why this never happens -- why, more precisely, the agents still have no way to send information back to themselves -- can be figured out with a bit of analysis of the voting system. Indeed, Alice, for instance, finds herself in either of two cases. The first one is that she won: a majority `was' obtained in favour of her. This means that she cannot send messages to either of the agents, since only the winner can receive messages. 
Nor can she signal to the other agents by casting her vote: her victory implies that both Bob and Charlie voted for her, in which case her own vote is irrelevant to the outcome. Therefore, if Alice wins, then she cannot send any information back to herself via the other agents. Alternatively, Alice could lose the vote. If so, then she cannot receive any messages from the other agents, so she has no hope of sending information back to herself through their messages to her. Therefore, if she wants to send information to herself, she will have to try to change the outcome of the vote in her favour (thus creating a grandfather-type paradox). But she cannot do this by simply changing her own vote, as there being a majority in her favour only depends on how the other agents vote. Nor can she make herself win by encouraging the other agents to vote differently: Alice can only send a message to (say) Bob if Charlie voted for Bob as well, so, whatever Bob does, there will never be a majority in favour of Alice. Therefore, if Alice loses, she cannot send any information back to herself.

For this reason, the Lugano process, despite conflicting with intuitions about causal and temporal structure, does not lead to any logical paradoxes after all.

\subsubsection{The routed graph}

\begin{figure}
	\centering
	\scalebox{1.3}{\tikzfig{LuganoWithZeroes}}
	\caption{The routed graph for the Lugano process. To help intuition, we used different colours to denote the arrows that pertain to some particular agent (i.e.\ the ones whose indices encode the `votes' or the `vote result' for that agent). Each of the indices has only two possible values, $0$ or $1$. To reduce clutter, we have used arrows with dotted ends to avoid drawing all the arrows explicitly; pairs of dotted arrows with the same index are shorthand for a single unbroken arrow.
For example, the pair of arrows with the index $k_2$ denotes a single indexed arrow $C \rightarrow X$.} \label{fig: Lugano routed graph} \end{figure} Of course, our description of the Lugano process so far has only been pitched at an intuitive level. The point of the routed graph that we will now present is precisely to formalise this intuitive description; while the validity of this graph -- defined as the satisfaction of our two principles -- will provide a formal counterpart to our argument that no logical paradoxes should arise from this protocol. Our routed graph is depicted in Figure \ref{fig: Lugano routed graph}. In this routed graph, the nodes $A$, $B$, and $C$, representing the three agents, will be supplemented with three other nodes $X$, $Y$ and $Z$ which can be thought of -- continuing with our metaphor -- as `vote-counting stations', in which the votes for each of the agents will be centralised and counted. $X$ will deal with the votes for Alice, $Y$ those for Bob, and $Z$ those for Charlie. Let us determine the routes in the graph and explain their meaning, starting with node $A$. The index $l$ of the arrow going into $A$ from its vote-counting station $X$ indicates whether Alice won the vote (it has value $1$ if she wins, and $0$ if she loses). Furthermore, the indices $i_1$ and $i_2$ encode respectively whether Alice voted for Charlie or Bob, taking the value 1 when she votes for the corresponding agent. The index-matching of $l$ ensures that Alice cannot change whether she wins or loses, while the floating equation $i_1+i_2=1$ implies that Alice must vote for precisely one other agent. This leads to the route for $A$ depicted in Figure \ref{fig: LuganoNodes}. The route consists of two branches corresponding to Alice's victory or defeat, and a binary bifurcation choice representing her own vote in each case. The routes for $B$ and $C$ are fully analogous.
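As an informal illustration (not part of the formal development), the route for node $A$ can be enumerated as a set of allowed index tuples. The encoding below is our own illustrative choice: the index-matched copy of $l$ is left implicit, and the floating equation $i_1+i_2=1$ is imposed as a filter.

```python
from itertools import product

def route_A():
    """Enumerate the allowed index values (l, i1, i2) for node A's route.

    Index-matching forces the outgoing copy of l to equal the incoming one
    (so we keep a single l), and the floating equation i1 + i2 = 1 forces
    Alice to vote for exactly one other agent.
    """
    allowed = set()
    for l, i1, i2 in product((0, 1), repeat=3):
        if i1 + i2 == 1:  # Alice votes for precisely one other agent
            allowed.add((l, i1, i2))
    return allowed

# Group the allowed tuples into branches by the value of l: each branch
# contains exactly one binary bifurcation choice (Alice's own vote).
branches = {}
for l, i1, i2 in route_A():
    branches.setdefault(l, set()).add((i1, i2))
```

Each of the two branches ($l=0$ and $l=1$) contains the same two vote assignments, reproducing the branch structure of Figure \ref{fig: LuganoNodes}.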
\begin{figure} \centering \tikzfig{LuganoNodes} \caption{The branch structure of the route for node $A$. $B$ and $C$ are fully analogous.} \label{fig: LuganoNodes} \end{figure} Now let us consider $X$, the vote-counting station for Alice. This node receives the index $k_2$, which tells us whether Charlie voted for Alice; and $j_1$, which tells us whether Bob did. It preserves these indices, sending them to the global Future, but also generates from them a new index $l$ whose value is given by their product, $l=k_2 \cdot j_1$. This ensures that $l=1$ just in case Alice wins the vote. The route for the $X$ node has four branches, corresponding to the possible joint values of $k_2$ and $j_1$ (which we will denote $X^{k_2 j_1}$), and none of them include bifurcations. $Y$ and $Z$ work in the same way. Finally, the `global Future' node $F$ just serves to channel out the remaining information. Since it receives all the distinct indices in the graph, its route is just given by the global index constraint. In other words, its practical input set of values is just the set of values permitted by the index-matching and the floating equations. For the arrows $X \to A$, $Y \to B$, and $Z \to C$, the $0$ value corresponds to a one-dimensional `dummy' sector. The interpretation of this is once again natural: the messages are sent to an agent only if this agent won. We can now check that the routed graph of Figure \ref{fig: Lugano routed graph} satisfies our two principles. We start with univocality. The choice relation for this graph can be checked to be a function from the six binary bifurcation choices to the statuses of the branches. This function can be meaningfully presented in the following algorithmic way: \begin{itemize} \item Look at the votes of the losing branches ($A^0$, $B^0$ and $C^0$).
If a majority is found in these votes (say, in favour of Alice), then set the `result' indices accordingly (in this case, $l=1$, $m=n=0$) and use the bifurcation choices of the losing branches ($B^0$ and $C^0$) to set the values of the losers' votes ($j_1, j_2, k_1, k_2$); use the bifurcation choice of the winning branch ($A^1$) to define the value of the winner's vote ($i_1$ and $i_2$); \item If no majority is found, define $l=m=n=0$ and use the bifurcation choices of the losing branches to set all votes. \item Now that the values of all indices in the graph have been fixed, derive which branches happened and which didn't. \end{itemize} \begin{figure} \centering \tikzfig{LuganoBranch} \caption{A simplified version of the branch graph for the Lugano process. We do not draw all the arrows, as this would create a lot of clutter and would be superfluous for our purposes of checking for cycles; we rather just organise the branches in layers, such that all unspecified arrows only ever go `up' with respect to this partition.} \label{fig: Lugano branch graph} \end{figure} Univocality is thus satisfied. Its time-reversed version can be checked to be satisfied as well: all bifurcation choices in the reverse graph are located in $F$, and they have the effect of fixing all indices to consistent joint values. A simplified version of the branch graph is presented in Figure \ref{fig: Lugano branch graph}. We see that there \textit{are} loops in the branch graph, specifically in its bottom layer; yet they are only composed of green dashed arrows. This entails that the routed graph of Figure \ref{fig: Lugano routed graph} satisfies the weak loops principle, and is thus valid. In Section \ref{sec: conclusion}, we will formulate the conjecture that this presence of weak loops is a signature of the process's causal-inequality-violating nature.
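The algorithm above can be paraphrased as a small Python sketch. The encoding is our own illustrative choice (votes are written as the name of the agent voted for, and the function and variable names are ours, not part of the formal framework):

```python
def lugano_choice(losing_vote, winning_vote):
    """Choice function for the Lugano routed graph (a paraphrase of the
    algorithm in the text). `losing_vote[a]` and `winning_vote[a]` give
    the agent that agent `a` votes for in their losing and winning
    branches, respectively. Returns (results, votes): results[a] is 1
    iff agent a won, and votes[a] is the vote that is actually cast.
    """
    agents = ('A', 'B', 'C')
    # Step 1: look at the votes of the losing branches.
    tentative = dict(losing_vote)
    # Step 2: is there a majority, i.e. do both other agents vote for w?
    winner = None
    for w in agents:
        if all(tentative[a] == w for a in agents if a != w):
            winner = w
    results = {a: int(a == winner) for a in agents}
    votes = dict(tentative)
    if winner is not None:
        # The winner's vote comes from their winning-branch choice.
        votes[winner] = winning_vote[winner]
    return results, votes
```

One can check, for instance, that whether Alice wins is independent of her own bifurcation choices, in line with the argument that she cannot signal back to herself.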
\subsubsection{The routed circuit} \begin{figure}[ht] \centering \scalebox{1.2}{\tikzfig{LuganoFilled}} \caption{A routed circuit diagram for the Lugano process. We follow the same graphical conventions as in Figure \ref{fig: Lugano routed graph}. The gates at the bottom are CNOTs, controlled on the coloured wires.} \label{fig: Lugano routed circuit} \end{figure} We proved that our routed graph was a valid one, and therefore that any routed unitary circuit built from it will define a valid process. In particular, the Lugano process (as defined e.g.\ in equation (27) of Ref.\ \cite{araujo2017quantum}) is obtained by taking all sectors in all wires to be one-dimensional and by fleshing out the circuit as depicted in Figure \ref{fig: Lugano routed circuit}. In this figure, $V$ serves to encode an agent's vote in the values of the outgoing indices; for example, the $V$ above Alice's node can be written as \begin{equation} V:= \ket{i_1=1}\ket{i_2=0}\bra{0} + \ket{i_1=0}\ket{i_2=1}\bra{1} \, , \end{equation} where by $\ket{i_1=1}$ we denote a state in the $i_1=1$ sector. $W$ sends the information about the value of its incoming indices to the Future, while also sending the information about the product of those values to the wire that loops back around to the Past. For example, the $W$ above Alice's node is defined as \begin{equation} W := \sum_{x, y =0}^1 \ket{l=x \cdot y} \ket{(k_2, j_1)=(x, y)} \bra{k_2=x}\bra{j_1=y} \, . \end{equation} Finally, $U$ simply embeds its practical input space (defined by the global index constraint) into the global Future. Its precise form is irrelevant to our concerns, so we leave it out. This shows how a paradigmatic unitary process that violates causal inequalities can be rebuilt using our method.
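As a sanity check, the isometries $V$ and $W$ defined above can be written out explicitly, encoding each binary index as a qubit (a minimal numpy sketch; the `ket` helper is ours):

```python
import numpy as np

def ket(value, dim=2):
    """Computational basis vector |value> in C^dim (our helper)."""
    v = np.zeros(dim)
    v[value] = 1.0
    return v

# V encodes a vote in the sector indices: |0> -> |i1=1>|i2=0>, |1> -> |i1=0>|i2=1>.
V = (np.outer(np.kron(ket(1), ket(0)), ket(0))
     + np.outer(np.kron(ket(0), ket(1)), ket(1)))

# W copies (k2, j1) to the Future while computing l = k2 * j1 on the
# wire that loops back to the Past: |k2=x>|j1=y> -> |l=x*y>|k2=x>|j1=y>.
W = sum(np.outer(np.kron(ket(x * y), np.kron(ket(x), ket(y))),
                 np.kron(ket(x), ket(y)))
        for x in (0, 1) for y in (0, 1))
```

One can verify that $V^\dagger V$ and $W^\dagger W$ are identity matrices, i.e.\ that both maps are isometries, and that $l=1$ is produced only when $k_2=j_1=1$.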
We emphasise, however, that the Lugano process is merely the simplest example of a process obtained from fleshing out a routed circuit of the form of Figure \ref{fig: Lugano routed graph}; one could instead take this routed circuit to feature arbitrarily large dimensions (as long as the crucial sectors we specified remain one-dimensional), and fill it up with arbitrary operations (as long as they follow the routes). In other words, we have in fact defined a large \textit{family} of processes that all rely on the same core behaviour as the Lugano process. It is particularly worth noting that, while in the Lugano process the message sent to the winner is trivial (it is necessarily the $\ket{1}$ state), this family of consistent processes includes those where each losing agent can send arbitrarily large messages to the winner. Thus, the routed graph makes clear that the triviality of the messages in the original Lugano process is an arbitrary feature, one that is not essential to its consistency. \section{Discussion and outlook} \label{sec: conclusion} In this paper, we presented a method for constructing processes with indefinite causal order, based on a decorated directed graph called the routed graph. The method of construction ensures, and hence explains, the consistency of the process, and makes its logical structure evident. In particular, we proved that any process constructed from a routed graph satisfying our two principles, bi-univocality and weak loops, is necessarily consistent. Our method can be used to construct a number of unitarily extendible processes. We explicitly constructed the quantum switch, the 3-switch, the Grenoble process, and the Lugano process. For each of these processes, our method can also construct a large family of processes obtainable from the same routed graph.
We expect that the other currently known examples of unitary processes that are built from classical processes analogous to Lugano can also be constructed using our method. Ultimately, we are led to the following conjecture. \begin{conjecture} Any unitary process -- and therefore any unitarily extendible process -- can be obtained by `fleshing out' a valid routed graph. \end{conjecture} Another fact pointing towards this conjecture is that bipartite unitarily extendible processes were recently proven \cite{barrett2021cyclic, yokojima2021consequences} to reduce to the coherent control of causal orders analogous to the quantum switch, which can therefore be written as valid routed circuits. Our conjecture can be thought of as a tentative generalisation of this result to $\geq 3$-partite processes. We expect that significant progress in this direction could be obtained if one were to prove another conjecture, that of the existence of causal decompositions of unitary channels in the general case \cite{lorenz2020}. This leads us to a limitation of our current results: they offer no systematic way to \textit{decompose} a known process into a routed circuit (except, in some cases, through a careful conceptual analysis of it). An important subject for future work, deeply related to the above conjecture, would be to come up with ways to supplement the bottom-up procedure presented in this paper with a top-down procedure, in which one would start with a `black-box' unknown process and extract a way of writing it as the fleshing-out of a valid routed graph. Another limitation is that this paper had no concern for the \textit{physicality} of processes, i.e.\ for the question of whether and how they could be implemented in practice, using either standard or exotic physics. This was a conscious choice on our part, as we wanted to rather focus on the question of their logical \textit{conceivability}.
However, we expect that our way of dealing with the latter question might, through the clarifications and the diagrammatic method it provides, pave the way for work on possible implementations or on physical principles constraining them. An important consequence of our work is that it shows how at least a large class of valid quantum processes can be derived from the sole study of \textit{possibilistic} structures, encapsulated by routes. These possibilistic structures impose constraints on quantum operations, but there is nothing specifically quantum about them; they could be interpreted as constraints on classical operations as well. This adds to the idea, already conveyed by the discovery of classical exotic processes, that the logical possibility for indefinite causal order does not always arise from the specifics of quantum structures. If our above conjecture turned out to be true, this would warrant this conclusion for any unitary and unitarily extendible process, whose quantum nature is nothing more than coherence between the branches of an equally admissible classical process. By contrast, some non-unitarily extendible processes, such as the OCB process \cite{oreshkov2012quantum, araujo2017purification}, appear to feature a more starkly quantum behaviour in their display of indefinite causal order. This can be seen for example in the fact that the violation of causal inequalities by the OCB process relies on a choice between the use of maximally incompatible bases on the part of one agent. A more quantitative clue is the fact that the OCB process saturates a Tsirelson-like bound on non-causal correlations \cite{Brukner2014}. It is therefore unlikely that such processes could be built using our method, as routes do not capture any specifically quantum (i.e.\ linear algebraic) behaviour. In particular, the display of a unitary process with OCB-like features would probably provide a counter-example to our conjecture.
In the course of the presentation of the framework and of the main examples, we commented on the fact that the presence of (necessarily weak) loops in the branch graph was associated with the violation of causal inequalities: processes showcasing (possibly dynamical) coherent control of causal order, and therefore incapable of violating causal inequalities \cite{wechs2021quantum} -- such as the switch, the 3-switch and the Grenoble processes -- featured no such loops; while the Lugano process, which does violate causal inequalities, had loops in its branch graph. This leads us to the following conjecture. \begin{conjecture} The skeletal supermap corresponding to a routed graph violates causal inequalities if and only if its branch graph features (necessarily weak) loops. \end{conjecture} Proving this conjecture would unlock a remarkable correspondence between, on the one hand, the structural features of processes, and, on the other hand, their operational properties. An interesting question is how this would connect to (partial) characterisations of causal-inequality-violating processes via their causal structure \cite{TselentisInPrep}. Our work facilitates a transition from a paradigm of defining processes with indefinite causal order one by one and checking their consistency by hand, to one of generating large classes of such processes from the study of elementary graphs, with their consistency baked in. In that, it follows the spirit of Ref.\ \cite{wechs2021quantum}, with more emphasis on the connectivity of processes and on the formal language with which one can describe the consistent ones. Another difference is that the framework presented here also allows us to build at least some of the unitary processes that violate causal inequalities \cite{branciard2015simplest}. A natural application would be to build and study new exotic processes using our framework; we leave this for future work.
More generally, the fact that our rules for validity only rely on the study of graphs decorated with Boolean matrices opens the way for a systematic algorithmic search for instances, using numerical methods. A final feature of our framework is how, through the use of graphical methods and meaningful principles, it makes more intelligible, and more amenable to intuition, the reasons why a process can be both cyclic and consistent -- a notoriously obscure behaviour, especially in the case of processes violating causal inequalities. Our two rules for validity, however, are still high-level; further work is needed to investigate their structural implications. This could eventually lead to a reasoned classification of the graphs that satisfy them, and therefore of (at least a large class of) exotic processes. \begin{acknowledgments} It is a pleasure to thank Alastair Abbott, Giulio Chiribella, Ognyan Oreshkov, Nicola Pinzani, Eleftherios Tselentis, Julian Wechs, Matt Wilson, and Wataru Yokojima for helpful discussions and comments. AV also thanks Alexandra Elbakyan for her help to access the scientific literature. AV is supported by the EPSRC Centre for Doctoral Training in Controlled Quantum Dynamics. HK acknowledges funding from the UK Engineering and Physical Sciences Research Council (EPSRC) through grant EP/R513295/1. This publication was made possible through the support of the grant 61466 `The Quantum Information Structure of Spacetime (QISS)' (qiss.fr) from the John Templeton Foundation. The opinions expressed in this publication are those of the authors and do not necessarily reflect the views of the John Templeton Foundation.
\end{acknowledgments} \appendix \section{The relationship between the supermap and process matrix representations} \label{app: process matrices and supermaps} Two equivalent but distinct mathematical frameworks are in use in the indefinite causal order literature to represent higher-order processes, stemming from two independent lines of work: one is that of supermaps \cite{chiribella2008transforming,chiribella2013quantum} (also called superchannels), and the other is that of process matrices \cite{oreshkov2012quantum, araujo2017purification} (also called W-matrices). This can lead to some confusion. In this Appendix, we spell out the equivalence between the two pictures, in order to help readers more accustomed to the process matrix picture to translate our results and concepts from the supermap picture, which we use in this paper. In broad terms, supermaps and process matrices are equivalent mathematical representations of the same higher-order process, connected by the Choi-Jamiołkowski (CJ) isomorphism. What can add to the confusion is that they stem from different conceptual points of view on the situations being modelled, and that the equivalence between these points of view might not be obvious at first sight. We will thus start with a conceptual discussion, before spelling out the mathematical equivalence. We will then further comment on how super\textit{unitaries}, which are the focus of this paper, can be translated to super\textit{channels}: the jump is simply the standard one between the linear representation of pure quantum theory and the completely positive representation of mixed quantum theory. \subsection{At the conceptual level} The point of superchannels is to model higher-order transformations, mapping channels to channels, in the same way that channels map states to states.
More precisely, in analogy with the fact that channels can be characterised as the only linear mappings $\mathcal C \in \Lin \left[ \Lin(\mathcal H_A^\textrm{\upshape in}) \to \Lin(\mathcal H_A^\textrm{\upshape out}) \right]$ that preserve all quantum states -- including quantum states on an extended system $\rho \in \Lin[\mathcal H_A^\textrm{\upshape in} \otimes \mathcal H_X]$ --, superchannels are characterised \cite{chiribella2008transforming} as the linear mappings \begin{equation} \mathcal S \in \Lin \left[ \Lin \left[ \Lin(\mathcal H_A^\textrm{\upshape in}) \to \Lin(\mathcal H_A^\textrm{\upshape out}) \right] \to \Lin \left[ \Lin(\mathcal H_P) \to \Lin(\mathcal H_F) \right] \right] \end{equation} that preserve all quantum channels -- including quantum channels on an extended system $\mathcal C \in \Lin \left[ \Lin(\mathcal H_A^\textrm{\upshape in} \otimes \mathcal H_X) \to \Lin(\mathcal H_A^\textrm{\upshape out} \otimes \mathcal H_Y) \right]$. Moreover, \textit{multipartite} superchannels \cite{chiribella2009beyond, chiribella2013quantum} can act on pairs, or generally tuples, of channels, mapping them to one `global' channel\footnote{More precisely, bipartite superchannels were originally defined as acting on the larger space of all non-signalling channels on the tensor product of their two slots. However, it was proven at the same time that the well-defined superchannels on pairs of channels are exactly the same ones as superchannels on non-signalling channels, so we can overlook this difference.}. The conceptual idea is thus to combine `Alice's channel' and `Bob's channel' into a larger channel; it stems from an emphasis on a \textit{computational} picture, focused on the study of architectures for quantum computation.
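The characterisation of channels used above -- preservation of all states, including states on extended systems -- can be illustrated with a minimal numpy sketch; the dephasing channel and the entangled test state below are our own toy choices, not examples from the text.

```python
import numpy as np

Z = np.diag([1.0, -1.0])  # Pauli Z

def dephase(rho, p=0.5):
    """Qubit dephasing channel, acting on the first qubit of `rho`
    (which may live on system (x) ancilla)."""
    n = rho.shape[0] // 2
    K = np.kron(Z, np.eye(n))
    return (1 - p) * rho + p * (K @ rho @ K)

# Apply the channel to one half of a (normalised) maximally entangled state:
# the output must again be a valid state (positive, unit trace).
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)
rho = np.outer(phi, phi)
out = dephase(rho)
```

The output remains positive with unit trace, as it must for any channel acting on half of an entangled state; at $p=1/2$ the off-diagonal coherence is fully destroyed.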
Another line of research, developed independently, adopts an \textit{operational} picture, insisting instead on the idea of local agents performing quantum measurements -- and crucially, getting classical outcomes \cite{oreshkov2012quantum}. Therefore, rather than on a notion of combining operations, it focuses on the task of computing joint probability distributions for these local outcomes. This is where process matrices come in naturally: taking $\mathcal M_i \in \Lin[\Lin[\mathcal H_A^\textrm{\upshape in}] \to \Lin[\mathcal H_A^\textrm{\upshape out}]]$ as the CP map corresponding to Alice obtaining outcome $i$, and $M_i \in \Lin[\mathcal H_A^\textrm{\upshape in} \otimes \mathcal H_A^\textrm{\upshape out}]$ as its CJ representation (see below for its mathematical definition) -- and similarly taking $N_j$ for Bob obtaining the outcome $j$ --, one can write the joint probability compactly as \begin{equation} \label{eq: proba process matrix} \mathcal P(i,j) = \Tr \left[ (M_i^T \otimes N_j^T) \circ W \right] \, , \end{equation} where $W \in \Lin[\mathcal H_A^\textrm{\upshape in} \otimes \mathcal H_A^\textrm{\upshape out} \otimes \mathcal H_B^\textrm{\upshape in} \otimes \mathcal H_B^\textrm{\upshape out}] $ is the \textit{process matrix}, which one asks to yield well-defined probabilities, through (\ref{eq: proba process matrix}), for any choice of measurements on Alice and Bob's parts. In order to allow for a notion of purification, the process matrix formalism was then extended \cite{araujo2017purification} to model general higher-order operations, with $W$ now also acting on a global past $P$ and a global future $F$, and the RHS of (\ref{eq: proba process matrix}) being taken to be a partial trace on all other systems, so that the LHS yields (the CJ representation of) a quantum operation $P \to F$. One can now see how this gets us closer, at least conceptually, to the notion of a superchannel.
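The process matrix probability formula above can be checked on a toy example. The sketch below is our own illustrative choice (not an example from the text): it uses the simplest causally ordered process matrix -- Alice before Bob, with a fixed input state, an identity channel from Alice's output to Bob's input, and Bob's output discarded -- together with computational-basis measurements.

```python
import numpy as np

d = 2
I = np.eye(d)
ket0 = np.array([1.0, 0.0])

# Unnormalised maximally entangled state |Phi+> = sum_i |ii>, as |Phi+><Phi+|.
phi_vec = sum(np.kron(I[:, i], I[:, i]) for i in range(d))
Phi = np.outer(phi_vec, phi_vec)

def cj_projector(P):
    """CJ matrix of the CP map rho -> P rho P, for a projector P:
    M = (I (x) P) |Phi+><Phi+| (I (x) P)."""
    K = np.kron(I, P)
    return K @ Phi @ K

# Causally ordered process matrix (factor order: A_in (x) A_out (x) B_in (x) B_out):
# fixed state |0><0| at A_in, identity channel A_out -> B_in (CJ matrix Phi),
# and B_out discarded (identity operator).
rho = np.outer(ket0, ket0)
W = np.kron(rho, np.kron(Phi, I))

proj = [np.outer(I[:, i], I[:, i]) for i in range(d)]  # Z-basis outcome projectors

def prob(i, j):
    """P(i, j) = Tr[(M_i^T (x) N_j^T) W] for computational-basis measurements."""
    Mi = cj_projector(proj[i])
    Nj = cj_projector(proj[j])
    return np.trace(np.kron(Mi.T, Nj.T) @ W).real
```

With the input state $\ket{0}$ relayed unchanged from Alice to Bob, both agents deterministically obtain outcome $0$, and the four probabilities sum to one.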
The original process matrices as defined in \cite{oreshkov2012quantum} can then be understood as akin to superchannels with a trivial output. One might, however, worry that this overlooks the key conceptual difference: that process matrices relate not only channels, but also the probabilities of measurement outcomes. Shouldn't that be more general than superchannels? The reason this worry is, in fact, unwarranted is that the obtaining of any measurement outcome can equivalently be modelled as the implementation of a deterministic channel acting on the system at hand and on an ancilla, followed by a measurement of the ancilla from which the outcome is obtained. In this way, one can recover the probabilities for measurements in the superchannel picture as well. Before we turn to the mathematical equivalence, let us briefly comment on the respective strengths and weaknesses of the two representations. The main advantage of process matrices is the ability, in an operational picture, to compute joint probabilities in a straightforward and compact way, via the Hilbert-Schmidt inner product of (\ref{eq: proba process matrix}). However, this strength becomes a weakness once one is interested in higher-order processes with a non-trivial output: because the process matrix is then a `flattened' CJ representation of the process -- i.e., an operator acting indistinctly on all the input and output Hilbert spaces involved, e.g.\ on $A^\textrm{\upshape in} \otimes A^\textrm{\upshape out} \otimes B^\textrm{\upshape in} \otimes B^\textrm{\upshape out} \otimes P \otimes F$ --, it smears out the distinction i) between the inputs and outputs of local operations (e.g.
between $A^\textrm{\upshape in}$ and $A^\textrm{\upshape out}$), and ii) between the inputs and the output of the higher-order transformation (e.g.\ between $A^\textrm{\upshape in}$ and $A^\textrm{\upshape out}$, which correspond to one of the input channels, and $P$ and $F$, which correspond to the output channel). This flattening is the reason why the connectivity of a higher-order process is particularly difficult to parse in a process matrix: identities between systems, for instance, have to be represented not by identity operators but by maximally entangled states. Similarly, the tracing-out move of (\ref{eq: proba process matrix}) lends itself badly to intuition and generally messes up the input/output structure. By contrast, the superchannel's type, as a map ${\rm Chan}(A^\textrm{\upshape in} \to A^\textrm{\upshape out}) \times {\rm Chan}(B^\textrm{\upshape in} \to B^\textrm{\upshape out}) \to {\rm Chan}(P \to F)$, neatly encodes the different roles of the different spaces, and facilitates intuitions about the connectivity. In particular, as we are especially interested in a faithful and direct representation of the connectivity of processes, we found the superchannel picture more practical for the needs of the present paper. \subsection{At the mathematical level} The process matrix picture relies on the Choi-Jamiołkowski (CJ) representation of CP maps \cite{jamiolkowski1972linear, choi1975completely}, which can be defined as follows. Consider a CP map $\mathcal M_A: A^{\rm{in}} \rightarrow A^{\rm{out}}$. We make a copy of the input system $A^{\rm{in}}$, and consider the (unnormalised) maximally entangled state $\ket{\Phi^+}=\sum_i{\ket{ii}}$ on $A^{\rm{in}} \otimes A^{\rm{in}}$.
The CJ representation $M_A$ of $\mathcal M_A$ is then the positive operator on $A^{\rm{in}} \otimes A^{\rm{out}}$ obtained by feeding one half of this entangled state into $\mathcal M_A$: \begin{equation} M_A := (\mathcal I \otimes \mathcal M_A)\ket{\Phi^+}\bra{\Phi^+} \end{equation} Process matrices were originally defined as operators mapping CJ representations of CP maps to probabilities via (\ref{eq: proba process matrix}). In the bipartite case, they were therefore required to satisfy: \begin{equation} \label{originalpms} \begin{split} W \in \Lin[\mathcal H_A^\textrm{\upshape in} \otimes \mathcal H_A^\textrm{\upshape out} \otimes \mathcal H_B^\textrm{\upshape in} \otimes \mathcal H_B^\textrm{\upshape out}] \\ W \geq 0 \\ {\rm Tr} \left[(M^T \otimes N^T) \circ W \right] =1 \ \ \ \ \ \forall M\forall N \end{split} \end{equation} where $M$ and $N$ can be CJ matrices for any pair of \textit{channels} for Alice and Bob. The positivity requirement ensures positive probabilities; the last requirement ensures that our probability distributions are normalised. In \cite{araujo2017purification}, the definition of process matrices was extended so that they output a CJ matrix for a CP map from a `past' system $P$ to a `future' system $F$. In the bipartite case, the extended process matrix $W \in \Lin[\mathcal H_P \otimes \mathcal H_A^\textrm{\upshape in} \otimes \mathcal H_A^\textrm{\upshape out} \otimes \mathcal H_B^\textrm{\upshape in} \otimes \mathcal H_B^\textrm{\upshape out} \otimes \mathcal H_F]$ maps CJ matrices $M$ and $N$ to a CJ matrix $G := {\rm Tr}_{A^\textrm{\upshape in} A^\textrm{\upshape out} B^\textrm{\upshape in} B^\textrm{\upshape out}}\left[(M^T \otimes N^T) \circ W \right] \in \Lin[\mathcal H_P \otimes \mathcal H_F]$. Now, rather than requiring that we map CP maps to positive and normalised probabilities, we need to require that we map CP maps to CP maps, and channels to channels.
This is guaranteed by the following conditions: \begin{equation} \label{extendedpms} \begin{split} W \in \Lin[\mathcal H_P \otimes \mathcal H_A^\textrm{\upshape in} \otimes \mathcal H_A^\textrm{\upshape out} \otimes \mathcal H_B^\textrm{\upshape in} \otimes \mathcal H_B^\textrm{\upshape out} \otimes \mathcal H_F] \\ W \geq 0 \\ {\rm Tr}_{A^\textrm{\upshape in} A^\textrm{\upshape out} B^\textrm{\upshape in} B^\textrm{\upshape out}} \left[(M^T \otimes N^T) \circ W \right] =G \ \ \ \ \ \forall M\forall N \end{split} \end{equation} where $M$ and $N$ represent any channels for Alice and Bob, and we require that $G$ represents a channel from $P$ to $F$. The definitions for the original and the extended process matrices generalise in an obvious way to the multipartite case. The original process matrices are special cases of these more general process matrices, in which the global `past' and `future' systems $P$ and $F$ are one-dimensional, since in this case probabilities are CP maps and the number 1 is a channel. On the other hand, when one considers any particular state-preparation at $P$, and traces out $F$, any of these new, extended process matrices gives rise to a process matrix as originally defined in (\ref{originalpms}) \cite{wechs2021quantum}. We now demonstrate the equivalence of the extended process matrices and superchannels. More precisely, we show that every extended process matrix uniquely defines a valid superchannel, and vice versa.
A bipartite process matrix $W$ defines a superchannel $\mathcal S$ in the following way: \begin{equation} \mathcal S(\mathcal M, \mathcal N) := {\rm Choi}^{-1} \left( {\rm Tr}_{A^\textrm{\upshape in} A^\textrm{\upshape out} B^\textrm{\upshape in} B^\textrm{\upshape out}}\left[(M^T \otimes N^T) \circ W \right] \right) \end{equation} where $\mathcal M: \tilde{A}^\textrm{\upshape in} \otimes A^\textrm{\upshape in} \rightarrow \tilde{A}^\textrm{\upshape out} \otimes A^\textrm{\upshape out}$ is Alice's channel, which also acts on ancillas, and $M$ is its CJ representation, and similarly for $\mathcal N$ and $N$. One might initially worry that $\mathcal S$ need not always be a superchannel, since the extended process matrices were only defined with respect to input CP maps without ancillas, but a superchannel must also preserve channels with ancillas. However, the positivity of $W$ ensures that $G \geq 0$ where \begin{equation} \begin{split} G:&={\rm Tr}_{A^\textrm{\upshape in} A^\textrm{\upshape out} B^\textrm{\upshape in} B^\textrm{\upshape out}}\left[(M^T \otimes N^T) \circ W \right] \\ & \in \Lin[\mathcal H_P \otimes \mathcal H_{\tilde{A}}^\textrm{\upshape in} \otimes \mathcal H_{\tilde{B}}^\textrm{\upshape in} \otimes \mathcal H_F \otimes \mathcal H_{\tilde{A}}^\textrm{\upshape out} \otimes \mathcal H_{\tilde{B}}^\textrm{\upshape out}] \end{split} \end{equation} A positive CJ matrix always represents a CP map, meaning that ${\rm Choi}^{-1}(G)$ is indeed a CP map from the past and ancillary inputs, to the future and ancillary outputs. Then the last condition in (\ref{extendedpms}) ensures that $\mathcal S$ maps channels with ancillas to other channels, meaning that $\mathcal S$ is indeed a superchannel. To see how a bipartite superchannel $\mathcal S$ defines a process matrix $W$, we suppose that Alice and Bob both insert swap channels into the superchannel.
The process matrix is then the CJ representation of the resulting channel: \begin{equation} W := {\rm Choi}\left(\mathcal S(\texttt{SWAP}, \texttt{SWAP})\right) \end{equation} The positivity of $W$ follows from the complete positivity of the channel $\mathcal S(\texttt{SWAP}, \texttt{SWAP})$. One can show that the mapping on (CJ representations of) channels provided by $W$ is the same as the mapping provided by $\mathcal S$ \begin{equation} {\rm Tr}_{A^\textrm{\upshape in} A^\textrm{\upshape out} B^\textrm{\upshape in} B^\textrm{\upshape out}}\left[(M^T \otimes N^T) \circ W \right]= G = {\rm Choi} (\mathcal S(\mathcal M, \mathcal N)) \end{equation} meaning the last requirement of (\ref{extendedpms}) will also be satisfied. Rather than superchannels, this work concentrates on what we call \textit{superunitaries} -- that is, linear mappings from a set of unitary operators to another unitary operator. To connect these to process matrices, we note that every superunitary uniquely defines a `unitary superchannel' -- that is, a superchannel that always returns a unitary channel when you feed it a set of unitary channels. Given a superunitary $S$, we define a unitary superchannel in an obvious way: \begin{equation} \mathcal S(\mathcal U, \mathcal V) := \left[S(U, V) \right](\cdot) \left[S(U, V) \right]^\dag \end{equation} That is, the action of $\mathcal S$ on the channels $\mathcal U:=U(\cdot)U^\dag$ and $\mathcal V:=V(\cdot)V^\dag$ is just the channel corresponding to the action of $S$ on $U$ and $V$. The action of this superchannel on more general channels can then be calculated using the Stinespring dilation. Conversely, any unitary superchannel defines a superunitary, up to an irrelevant global phase. We have shown that i) superchannels are equivalent to process matrices and ii) superunitaries are equivalent to unitary superchannels (up to phase).
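The passage from a superunitary $S$ to its unitary superchannel can be spelled out concretely; in the sketch below, the sequential-composition superunitary $S(U,V)=VU$ is our own toy example, not one from the text.

```python
import numpy as np

def S(U, V):
    """A toy superunitary: compose the two input unitaries in sequence."""
    return V @ U

def superchannel(U, V, rho):
    """The induced unitary superchannel: act on a state by conjugation
    with S(U, V), i.e. the channel [S(U,V)] (.) [S(U,V)]^dag."""
    K = S(U, V)
    return K @ rho @ K.conj().T

# Check against applying the two channels U(.)U^dag and V(.)V^dag in sequence.
X = np.array([[0, 1], [1, 0]], dtype=complex)                  # Pauli X
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)    # Hadamard
rho = np.array([[1, 0], [0, 0]], dtype=complex)                # |0><0|

lhs = superchannel(X, H, rho)
rhs = H @ (X @ rho @ X.conj().T) @ H.conj().T
```

The two sides agree by construction, and $S(U,V)$ is again unitary, so the induced superchannel indeed maps unitary channels to a unitary channel.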
It then follows that superunitaries are equivalent to unitary process matrices -- that is, the process matrices that map unitary channels (possibly with ancillas) to unitary channels. As proven in \cite{araujo2017purification}, these are precisely the process matrices that can be written in the form $W=\ket{U_w}\bra{U_w}$, where \begin{equation} \ket{U_w} := (I \otimes U_w)\ket{\Phi^+} \end{equation} is a CJ \textit{vector} representing a unitary operator $U_w$. \onecolumn \section{Technical definitions and proofs} \label{app: Theorem} \subsection{Notations} In this appendix, we spell out our framework in fully technical terms and prove its central theorem, which ensures that the routed graphs satisfying the two principles define routed superunitaries. To do this, and in particular to prove the theorem, we will have to deal with complicated operators, often acting on arbitrary numbers of tensor factors. We therefore introduce some notational techniques to avoid unnecessary clutter, which we present and motivate here. The first of these techniques is \textit{padding}. The idea is to have operators act on spaces larger than the ones they were originally defined to act on, simply by tensoring them with identity operators. For instance, we can have $f \in \mathcal L(\mathcal H_A)$ act on $\mathcal H_A \otimes \mathcal H_B$ by considering $f \otimes I_B$. However, when -- as will often be the case in what follows -- there is an arbitrarily large number of factors, and $f$ formally acts only on an arbitrary subset of them, it becomes notationally heavy, and of limited mathematical interest, to keep track explicitly of which identity operators $f$ should be tensored with. We will thus allow ourselves to make this procedure implicit.
The idea is then the following: for an operator $f$, its padded version $f_\textrm{\upshape pad}$ is defined as `$f$ tensored with the identity operators required to make its action meaningful, in the context of the expression at hand'. For instance, taking $f$ to act on $\mathcal H_A \otimes \mathcal H_B$, and $g$ to act on $\mathcal H_B \otimes \mathcal H_C$, $g_\textrm{\upshape pad} \circ f_\textrm{\upshape pad}$ will be taken to mean $(I_A \otimes g) \circ (f \otimes I_C)$, an operator acting on $\mathcal H_A \otimes \mathcal H_B \otimes \mathcal H_C$. For another example, taking $h$ acting on $\mathcal H_C \otimes \mathcal H_D$, the equation $g_\textrm{\upshape pad} \circ f_\textrm{\upshape pad} = h_\textrm{\upshape pad}$ will be taken to mean $(I_A \otimes g \otimes I_D) \circ (f \otimes I_C \otimes I_D) = I_A \otimes I_B \otimes I_C \otimes h$. This notation extends to supermaps as well: for instance, if $\mathcal S$ is a supermap of type $(\mathcal H_A^\textrm{\upshape in} \to \mathcal H_A^\textrm{\upshape out}) \to (\mathcal H_P \to \mathcal H_F)$, we define its action on a map $f: \mathcal H_A^\textrm{\upshape in} \otimes \mathcal H_B^\textrm{\upshape in} \to \mathcal H_A^\textrm{\upshape out} \otimes \mathcal H_B^\textrm{\upshape out}$ as $\mathcal S_\textrm{\upshape pad}[f]$, which should be understood as $(\mathcal S \otimes \mathcal I)[f]$, where $\mathcal I$ is the identity supermap on $\mathcal L(\mathcal H_B^\textrm{\upshape in} \to \mathcal H_B^\textrm{\upshape out})$. Another related technique we will use in order to avoid clutter is to disregard the ordering of factors. Indeed, factors in a given tensor product are usually regarded as being labelled by ordered lists, rather than by sets, of indices. For instance, $\mathcal H_A \otimes \mathcal H_B$ and $\mathcal H_B \otimes \mathcal H_A$ are usually regarded as different (albeit isomorphic) spaces.
Accordingly, suppose we take a map $h$ on $\mathcal H_A \otimes \mathcal H_B \otimes \mathcal H_C$ which decomposes as the tensor product of a map $f$ on $\mathcal H_A \otimes \mathcal H_C$ and a map $g$ on $\mathcal H_B$. In the usual picture, this fact could not be expressed as $h = f \otimes g$, as the RHS there acts on $\mathcal H_A \otimes \mathcal H_C \otimes \mathcal H_B$. Rather, one should write $h = (I \otimes \texttt{swap}_{C,B}) \circ (f \otimes g) \circ (I \otimes \texttt{swap}_{B,C})$. Another feature of the standard view is that it is not possible to write $\bigotimes_{X \in \{A, B\}} \mathcal H_X$, as this expression would leave the order of the factors ambiguous. For the expressions we will consider, keeping to this usage would force us to 1) explicitly introduce arbitrary orderings of all the sets of indices we use to label factors in tensor products, and 2) overload our expressions with swaps, in order to always place next to each other the spaces on which a given map acts. This would once again create a lot of clutter with little relevance. We will therefore free ourselves from that constraint, and take the view that tensor products are labelled with sets, rather than ordered lists, of indices. This will allow us to write $h = f \otimes g$ in the case described above, or to write Hilbert spaces of the form $\bigotimes_{X \in \{A, B\}} \mathcal H_X$. The expressions we write in this way could always be recast in the standard view, using arbitrary orderings of the sets at hand and large amounts of swaps. One might wonder whether either padding or disregarding the ordering of factors might lead to ambiguities. In fact, such ambiguities only arise if some Hilbert spaces in a tensor product are labelled with the same index, for instance if one is dealing with a Hilbert space like $\mathcal H_A \otimes \mathcal H_B \otimes \mathcal H_A$.
This is why we will carefully avoid such situations, by only ever tensoring different -- although possibly isomorphic -- Hilbert spaces. For instance, if we need a tensor product of $\mathcal H_A$ with itself, we will write it as $\mathcal H_A \otimes \mathcal H_{A'}$, with $\mathcal H_A \cong \mathcal H_{A'}$. Note that the same techniques and notations will be used for relations. \subsection{Technical definitions on supermaps} \begin{definition}[Superrelation] A \emph{superrelation} of type $\bigtimes_{N} (K_N \to L_N) \to (P \to F)$, where $P$, $F$, the $K_N$'s and the $L_N$'s are all sets, is a map $\mathcal S^{\mathsf{Rel}}: \mathsf{Rel}(\bigtimes_N K_N, \bigtimes_{N'} L_{N'}) \to \mathsf{Rel}(P,F)$. With a slight abuse of notation, we denote $\mathcal S^{\mathsf{Rel}}[(\lambda_N)_N] := \mathcal S^{\mathsf{Rel}}[\bigotimes_N \lambda_N]$. \end{definition} \begin{definition}[Supermap] A \emph{supermap} of type $\bigtimes_{N} (\mathcal H^\textrm{\upshape in}_N \to \mathcal H^\textrm{\upshape out}_N) \to (\mathcal H_P \to \mathcal H_F)$, where $\mathcal H_P$, $\mathcal H_F$, the $\mathcal H^\textrm{\upshape in}_N$'s and the $\mathcal H^\textrm{\upshape out}_N$'s are all finite-dimensional Hilbert spaces, is a linear map $\mathcal S: \mathcal L(\bigotimes_N \mathcal H^\textrm{\upshape in}_N, \bigotimes_{N'} \mathcal H^\textrm{\upshape out}_{N'}) \to \mathcal L(\mathcal H_P,\mathcal H_F)$. With a slight abuse of notation, we denote $\mathcal S[(f_N)_N] := \mathcal S[\bigotimes_N f_N]$.
\end{definition} \begin{definition}[Superunitary] A \emph{superunitary} of type $\bigtimes_{N} (\mathcal H^\textrm{\upshape in}_N \to \mathcal H^\textrm{\upshape out}_N) \to (\mathcal H_P \to \mathcal H_F)$ is a supermap of the same type such that, for any choice of ancillary input and output spaces $\mathcal H^{\textrm{\upshape in},\textrm{\upshape anc}}_N$ and $\mathcal H^{\textrm{\upshape out},\textrm{\upshape anc}}_N$ at every $N$, and any choice of unitary maps $U_N : \mathcal H^\textrm{\upshape in}_N \otimes \mathcal H^{\textrm{\upshape in},\textrm{\upshape anc}}_N \to \mathcal H^\textrm{\upshape out}_N \otimes \mathcal H^{\textrm{\upshape out},\textrm{\upshape anc}}_N$ at every $N$, one has: \begin{equation} \label{eq:superunitary} \mathcal S_\textrm{\upshape pad}[(U_N)_N] \textrm{ is a unitary from } \mathcal H_P \otimes \left(\bigotimes_N \mathcal H^{\textrm{\upshape in},\textrm{\upshape anc}}_N \right) \textrm{ to } \mathcal H_F \otimes \left(\bigotimes_N \mathcal H^{\textrm{\upshape out},\textrm{\upshape anc}}_N \right) \, . \end{equation} \end{definition} \begin{definition}[Routed Supermap] A \emph{routed supermap} of type $\bigtimes_{N} (\mathcal H^\textrm{\upshape in}_N \overset{\lambda_N}{\to} \mathcal H^\textrm{\upshape out}_N) \to (\mathcal H_P \overset{\mu}{\to} \mathcal H_F)$, where the $\mathcal H^\textrm{\upshape in}_N$'s and the $\mathcal H^\textrm{\upshape out}_N$'s are sectorised finite-dimensional Hilbert spaces and the $\lambda_N$'s are relations $\texttt{Ind}^\textrm{\upshape in}_N \to \texttt{Ind}^\textrm{\upshape out}_N$, is a supermap: i) which is restricted to act only on the maps of $\mathcal L(\bigotimes_N \mathcal H^\textrm{\upshape in}_N, \bigotimes_{N'} \mathcal H^\textrm{\upshape out}_{N'})$ that follow the route $\bigtimes_N \lambda_N$; and ii) whose output always follows the route $\mu$.
We say it is superunitary if it satisfies (\ref{eq:superunitary}) when acting on routed unitaries $U_N : \mathcal H^\textrm{\upshape in}_N \otimes \mathcal H^{\textrm{\upshape in},\textrm{\upshape anc}}_N \overset{\lambda_N}{\to} \mathcal H^\textrm{\upshape out}_N \otimes \mathcal H^{\textrm{\upshape out},\textrm{\upshape anc}}_N$ that follow the routes. \end{definition} \subsection{Technical presentation of the framework} \begin{definition}[Indexed graph] An \emph{indexed graph} $\Gamma$ consists of \begin{itemize} \item a finite set of nodes (or vertices) $\texttt{Nodes}_\Ga$; \item a finite set of arrows (or edges) $\texttt{Arr}_\Ga = \texttt{Arr}_\Ga^\textrm{\upshape in} \sqcup \texttt{Arr}_\Ga^\textrm{\upshape int} \sqcup \texttt{Arr}_\Ga^\textrm{\upshape out}$; \item functions $\texttt{head} : \texttt{Arr}_\Ga^\textrm{\upshape in} \sqcup \texttt{Arr}_\Ga^\textrm{\upshape int} \to \texttt{Nodes}_\Ga$ and $\texttt{tail} : \texttt{Arr}_\Ga^\textrm{\upshape int} \sqcup \texttt{Arr}_\Ga^\textrm{\upshape out} \to \texttt{Nodes}_\Ga$; \item for each arrow $A \in \texttt{Arr}_\Ga$, a finite set of indices $\texttt{Ind}_A$, satisfying: $A \not\in \texttt{Arr}_\Ga^\textrm{\upshape int} \implies \texttt{Ind}_A$ is trivial (i.e.\ is a singleton); \item a function $\texttt{dim}: \bigsqcup_{A \in \texttt{Arr}_\Ga} \texttt{Ind}_A \to \mathbb{N}^*$. \end{itemize} \end{definition} We further define $\texttt{Ind}^\textrm{\upshape in}_\Gamma := \bigtimes_{A \in \texttt{Arr}_\Ga^\textrm{\upshape in}} \texttt{Ind}_A$, $\texttt{Ind}^\textrm{\upshape out}_\Gamma := \bigtimes_{A \in \texttt{Arr}_\Ga^\textrm{\upshape out}} \texttt{Ind}_A$, and for any $N \in \texttt{Nodes}_\Ga$: $\textrm{\upshape in}(N) := \texttt{head}^{-1} (N)$, $\textrm{\upshape out}(N) := \texttt{tail}^{-1} (N)$, $\texttt{Ind}^\textrm{\upshape in}_N := \bigtimes_{A \in \textrm{\upshape in}(N)} \texttt{Ind}_A$ and $\texttt{Ind}^\textrm{\upshape out}_N := \bigtimes_{A \in \textrm{\upshape out}(N)} \texttt{Ind}_A$.
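To make the combinatorial part of this definition concrete, here is a minimal Python encoding of an indexed graph (an illustrative sketch of ours, not part of the framework; all names are hypothetical). It stores the three classes of arrows and recovers $\textrm{\upshape in}(N) = \texttt{head}^{-1}(N)$ and $\textrm{\upshape out}(N) = \texttt{tail}^{-1}(N)$:

```python
from dataclasses import dataclass


@dataclass
class IndexedGraph:
    nodes: set      # Nodes_Gamma
    arr_in: dict    # global input arrow  -> head node
    arr_out: dict   # global output arrow -> tail node
    arr_int: dict   # internal arrow -> (tail node, head node)
    ind: dict       # arrow -> finite set of index values

    def ins(self, n):
        """in(N) = head^{-1}(N): arrows pointing into node n."""
        return ({a for a, h in self.arr_in.items() if h == n}
                | {a for a, (_, h) in self.arr_int.items() if h == n})

    def outs(self, n):
        """out(N) = tail^{-1}(N): arrows leaving node n."""
        return ({a for a, t in self.arr_out.items() if t == n}
                | {a for a, (t, _) in self.arr_int.items() if t == n})
```

In line with the definition, one would additionally require every non-internal arrow to carry a trivial (singleton) index set.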
To prepare for the interpretation of this graph in terms of complex linear maps, we also define the following sectorised Hilbert spaces: for all $A \in \texttt{Arr}_\Ga$, $\mathcal H_A := \bigoplus_{k \in \texttt{Ind}_A} \mathcal H_A^k$, where $\mathcal H_A^k \cong \mathbb{C}^{\texttt{dim}(k)}$; $\mathcal H_P := \bigotimes_{A \in \texttt{Arr}_\Ga^\textrm{\upshape in}} \mathcal H_A = \bigoplus_{{\Vec{k}} \in \texttt{Ind}^\textrm{\upshape in}_\Gamma} \bigotimes \mathcal H_A^{k_A}$ and $\mathcal H_F := \bigotimes_{A \in \texttt{Arr}_\Ga^\textrm{\upshape out}} \mathcal H_A = \bigoplus_{{\Vec{k}} \in \texttt{Ind}^\textrm{\upshape out}_\Gamma} \bigotimes \mathcal H_A^{k_A}$; and for all $N \in \texttt{Nodes}_\Ga$, $\mathcal H^\textrm{\upshape in}_N := \bigotimes_{A \in \textrm{\upshape in}(N)} \mathcal H_A = \bigoplus_{{\Vec{k}} \in \texttt{Ind}^\textrm{\upshape in}_N} \bigotimes \mathcal H_A^{k_A}$ and $\mathcal H^\textrm{\upshape out}_N := \bigotimes_{A \in \textrm{\upshape out}(N)} \mathcal H_A = \bigoplus_{{\Vec{k}} \in \texttt{Ind}^\textrm{\upshape out}_N} \bigotimes \mathcal H_A^{k_A}$. \begin{definition}[Branched relation] A relation $\lambda: K \to L$ is said to be \emph{branched} if, when seen as a function $K \to \mathcal P(L)$, it satisfies \begin{equation} \forall k,k' \in K, \; \lambda(k) \cap \lambda(k') = \lambda(k) \textrm{ or } \emptyset \, , \end{equation} i.e.\ $\lambda(k)$ and $\lambda(k')$ are either the same or disjoint. \end{definition} Note that $\lambda$ is branched if and only if $\lambda^\top$ is branched. Branched relations define compatible, non-complete partitions of their domain and codomain, corresponding to \textit{branches}. Formally, a branch $\alpha$ of the branched relation $\lambda: K \to L$ is a pair of non-empty sets $K^\alpha \subseteq K$ and $L^\alpha \subseteq L$ such that $\lambda(K^\alpha) = L^\alpha$ and $\lambda^\top(L^\alpha) = K^\alpha$. We denote the set of branches of $\lambda$ by $\texttt{Bran}(\lambda)$. Note that the partitions are not complete, i.e.
$\bigsqcup_{\alpha \in\texttt{Bran}(\lambda)} K^\alpha$ might not be equal to $K$ (and the same goes for the outputs); the discrepancy corresponds to the indices that are sent by $\lambda$ to the empty set, as we consider these indices to be part of no branch at all. \begin{definition}[Routed graph] A \emph{routed graph} $(\Gamma, (\lambda_N)_{N \in \texttt{Nodes}_\Ga})$ consists of an indexed graph $\Gamma$ and, for every node $N$, of a branched relation $\lambda_N : \texttt{Ind}^\textrm{\upshape in}_N \to \texttt{Ind}^\textrm{\upshape out}_N$, called the route for node $N$. \end{definition} We will write routed graphs as $(\Gamma, (\lambda_N)_{N})$ for brevity. We denote elements of $\texttt{Bran}(\lambda_N)$ by $N^\alpha$, and denote the set of input (resp.\ output) indices of $N^\alpha$ by $\texttt{Ind}^\textrm{\upshape in}_{N^\alpha} \subseteq \texttt{Ind}^\textrm{\upshape in}_N$ (resp.\ $\texttt{Ind}^\textrm{\upshape out}_{N^\alpha} \subseteq \texttt{Ind}^\textrm{\upshape out}_N$). We also define $\texttt{Bran}_{(\Gamma,(\lambda_N)_N)} := \bigsqcup_{N \in \texttt{Nodes}_\Ga} \texttt{Bran}(\lambda_N)$, the set of all branches in the whole routed graph. We will now define the notion of a branch being a \textit{strong parent} of another: this will correspond to solid arrows in the branch graph. First, we introduce the set of possible tuples of index values, in order to exclude inconsistent assignments of values. \begin{definition} We define $\texttt{PossVal}_\Ga$ as the subset of $\bigtimes_{A \in \texttt{Arr}_\Ga} \texttt{Ind}_A$ defined by \begin{equation} \forall (k_A)_{A \in \texttt{Arr}_\Ga} \in \texttt{PossVal}_\Ga, \, \forall N \in \texttt{Nodes}_\Ga, \,\, (k_A)_{A \in \textrm{\upshape in}(N)} \overset{\lambda_N}{\sim} (k_A)_{A \in \textrm{\upshape out}(N)} \,. \end{equation} \end{definition} A tuple of values is possible if and only if, at every node, it yields input and output values that lie in the same branch.
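The branching condition is easy to check mechanically. The following Python sketch (our own illustration; the encoding of a relation as a dict of sets is an assumption) tests whether a relation is branched and extracts its branches as pairs $(K^\alpha, L^\alpha)$:

```python
def is_branched(lam):
    """lam encodes a relation K -> L as a dict sending each k in K to the
    set lam[k] of values it is related to (possibly empty).  Quantifying
    over all ordered pairs makes the condition equivalent to: any two
    image sets are either equal or disjoint."""
    vals = list(lam.values())
    return all(s == t or not (s & t) for s in vals for t in vals)


def branches(lam):
    """Branches of a branched relation: pairs (K^alpha, L^alpha) with
    lam(K^alpha) = L^alpha and lam^T(L^alpha) = K^alpha.  Inputs sent to
    the empty set belong to no branch, so the partition need not cover K."""
    assert is_branched(lam)
    out_sets = {frozenset(s) for s in lam.values() if s}
    return {(frozenset(k for k in lam if lam[k] == set(s)), s)
            for s in out_sets}
```

For instance, the relation sending $0, 1 \mapsto \{0,1\}$, $2 \mapsto \{2\}$ and $3 \mapsto \emptyset$ is branched with two branches, and the input $3$ lies in no branch, matching the remark above about non-complete partitions.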
\begin{lemma} \label{lem: mu} Let ${\Vec{k}} = (k_A)_{A \in \texttt{Arr}_\Ga} \in \bigtimes_{A \in \texttt{Arr}_\Ga} \texttt{Ind}_A$. Then ${\Vec{k}} \in \texttt{PossVal}_\Ga$ if and only if it meets the following two conditions: \begin{itemize} \item $\forall N \in \texttt{Nodes}_\Ga$, $(k_A)_{A \in \textrm{\upshape in}(N)}$ is in $\lambda_N$'s practical inputs and $(k_A)_{A \in \textrm{\upshape out}(N)}$ is in $\lambda_N$'s practical outputs; \item denoting, for every $N$, by $\mu_N^\textrm{\upshape in}({\Vec{k}})$ the element of $\texttt{Bran}_N$ such that $(k_A)_{A \in \textrm{\upshape in}(N)} \in \texttt{Ind}^\textrm{\upshape in}_{N^{\mu_N^\textrm{\upshape in}({\Vec{k}})}}$, and $\mu_N^\textrm{\upshape out}({\Vec{k}})$ similarly, we have: $\forall N, \, \mu_N^\textrm{\upshape in}({\Vec{k}}) = \mu_N^\textrm{\upshape out}({\Vec{k}})$. \end{itemize} \end{lemma} \begin{proof} This derives directly from the structure of the routes: a branched route relates an input value to an output value if and only if they are both in the same branch (and, in particular, are not outside of its practical in/outputs). \end{proof} For ${\Vec{k}} \in \texttt{PossVal}_\Ga$, we can therefore denote, for every $N$, by $\mu_N({\Vec{k}})$ the branch equal to both $\mu^\textrm{\upshape in}_N({\Vec{k}})$ and $\mu^\textrm{\upshape out}_N({\Vec{k}})$. \begin{definition}[Strong parents] Let $(\Gamma, (\lambda_N)_{N \in \texttt{Nodes}_\Ga})$ be a routed graph, and $N^\alpha$ and $M^\beta$ two branches in it. We define the set of arrows from $N$ to $M$ as $\texttt{Link}(N,M) := \textrm{\upshape out}(N) \cap \textrm{\upshape in}(M)$.
We define $\texttt{LinkVal}(N^\alpha,M^\beta)$, the set of values linking $N^\alpha$ to $M^\beta$, as \begin{equation} \texttt{LinkVal}(N^\alpha,M^\beta) := \left\{ (k_A)_{A \in \texttt{Link}(N,M)} \,\middle|\, \exists (k_A)_{A \in \texttt{Arr}_\Ga \setminus \texttt{Link}(N,M)} \textrm{ such that } \begin{cases} \mu_N\left((k_A)_{A \in \texttt{Arr}_\Ga} \right) = \alpha \\ \mu_M\left((k_A)_{A \in \texttt{Arr}_\Ga} \right) = \beta \end{cases} \right\} \end{equation} We say that the branch $N^\alpha$ is \emph{not a strong parent} of the branch $M^\beta$ if at least one of the following holds: \begin{itemize} \item $\texttt{Link}(N,M) = \emptyset$; \item $\texttt{LinkVal}(N^\alpha,M^\beta) = \emptyset$; \item $\texttt{LinkVal}(N^\alpha,M^\beta)$ is a singleton and its unique element $(k_A)_{A \in \texttt{Link}(N,M)}$ satisfies $\forall A \in \texttt{Link}(N,M), \, \texttt{dim}(k_A) = 1$. \end{itemize} \end{definition} \begin{definition}[Adjoint of a graph] \label{def:adjoint of a graph} If $\Gamma$ is an indexed graph, its \emph{adjoint} $\Gamma^\top$ is the indexed graph given by swapping the roles of $\texttt{Arr}_\Ga^\textrm{\upshape in}$ and $\texttt{Arr}_\Ga^\textrm{\upshape out}$ and those of $\texttt{head}$ and $\texttt{tail}$, and leaving the rest invariant. If $(\Gamma, (\lambda_N)_N)$ is a routed graph, its adjoint is the routed graph $(\Gamma^\top, (\lambda_N^\top)_N)$.
\end{definition} \begin{definition}[Skeletal superrelation of an indexed graph] Given an indexed graph $\Gamma$, its associated \emph{skeletal superrelation} is the superrelation $\mathcal S^{\mathsf{Rel}}_\Gamma: \bigtimes_{N} (\texttt{Ind}^\textrm{\upshape in}_N \to \texttt{Ind}^\textrm{\upshape out}_N) \to (\texttt{Ind}^\textrm{\upshape in}_\Gamma \to \texttt{Ind}^\textrm{\upshape out}_\Gamma)$ defined by \begin{equation} \label{eq:skeletal superrelation} \mathcal S^{\mathsf{Rel}}_\Gamma[(\lambda_N)_N] := \Tr_{\texttt{Ind}_A, A \in \texttt{Arr}_\Ga^\textrm{\upshape int}} \left[ \bigotimes_N \lambda_N \right] \, . \end{equation} Note that this is well-typed because $\bigtimes_N \texttt{Ind}^\textrm{\upshape in}_N = \bigtimes_{A \in \texttt{Arr}_\Ga^\textrm{\upshape in} \sqcup \texttt{Arr}_\Ga^\textrm{\upshape int}} \texttt{Ind}_A$ and $\bigtimes_N \texttt{Ind}^\textrm{\upshape out}_N = \bigtimes_{A \in \texttt{Arr}_\Ga^\textrm{\upshape out} \sqcup \texttt{Arr}_\Ga^\textrm{\upshape int}} \texttt{Ind}_A$. \end{definition} \begin{definition}[Skeletal supermap of a routed graph] Given a routed graph $(\Gamma, (\lambda_N)_N)$, its associated (routed) \emph{skeletal supermap} is the supermap $\mathcal S_{(\Gamma, (\lambda_N)_N)}$ of type $\bigtimes_{N} (\mathcal H^\textrm{\upshape in}_N \overset{\lambda_N}{\to} \mathcal H^\textrm{\upshape out}_N) \to (\mathcal H_P \overset{\mathcal S^{\mathsf{Rel}}_\Gamma[(\lambda_N)_N]}{\to} \mathcal H_F)$ defined by \begin{equation} \label{eq:skeletal supermap} \mathcal S_{(\Gamma, (\lambda_N)_N)}[(f_N)_N] := \Tr_{\mathcal H_A, A \in \texttt{Arr}_\Ga^\textrm{\upshape int}} \left[ \bigotimes_N f_N \right] \, . \end{equation} Note that the fact that $\mathcal S_{(\Gamma, (\lambda_N)_N)}[(f_N)_N]$ follows the route $\mathcal S^{\mathsf{Rel}}_\Gamma[(\lambda_N)_N]$ when the $f_N$'s follow the $\lambda_N$'s is ensured by the fact that routed maps form a compact closed category \cite{vanrietvelde2021routed,wilson2021composable}.
\end{definition} \begin{definition}[Augmented relation] Given a relation $\lambda_N : \texttt{Ind}^\textrm{\upshape in}_N \to \texttt{Ind}^\textrm{\upshape out}_N$ serving as a route for node $N$, its \emph{augmented} version is the partial function (encoded as a relation) $\lambda_N^\textrm{\upshape aug} : \texttt{Ind}^\textrm{\upshape in}_N \times \left( \bigtimes_{\alpha \in \texttt{Bran}(\lambda_N)} \texttt{Ind}^\textrm{\upshape out}_{N^\alpha} \right) \to \texttt{Ind}^\textrm{\upshape out}_N \times \left( \bigtimes_{\alpha \in \texttt{Bran}(\lambda_N)} \texttt{Happens}_{N^\alpha} \right)$ -- where $\forall \alpha, \, \texttt{Happens}_{N^\alpha} \cong \{0,1\}$ -- given by \begin{equation} \label{eq:augmented relation} \lambda_N^\textrm{\upshape aug}(k, (l^\alpha)_{\alpha \in \texttt{Bran}(\lambda_N)}) = \begin{cases} \{ (l^\alpha, (\delta^\alpha_{\alpha'})_{\alpha' \in \texttt{Bran}(\lambda_N)} ) \} &\textrm{ if } k \in \texttt{Ind}^\textrm{\upshape in}_{N^\alpha} \\ \emptyset &\textrm{ if } \forall \alpha, \, k \not\in \texttt{Ind}^\textrm{\upshape in}_{N^\alpha} \,. \end{cases} \end{equation} \end{definition} \begin{definition}[Univocality] \label{def:univocality} A routed graph $(\Gamma, (\lambda_N)_N)$ is \emph{univocal} if \begin{equation} \label{eq:univocality} \mathcal S^{\mathsf{Rel}}_{\Gamma, \textrm{\upshape pad}} \left[{(\lambda_N^\textrm{\upshape aug})}_N \right] \textrm{ is a function.} \end{equation} We then denote this function by $\Lambda_{(\Gamma, (\lambda_N)_N)}$. $(\Gamma, (\lambda_N)_N)$ is \emph{bi-univocal} if both it and its adjoint $\left( \Gamma^\top, (\lambda_N^\top)_N \right)$ are univocal. \end{definition} \begin{definition}[Branch graph] If $(\Gamma, (\lambda_N)_N)$ is a bi-univocal routed graph, its \emph{branch graph} $\Gamma^\textrm{Bran}$ is the graph in which \begin{itemize} \item the nodes are the branches of $(\Gamma, (\lambda_N)_N)$, i.e.
the elements of $\texttt{Bran}_{(\Gamma, (\lambda_N)_N)}$; \item there is a green dashed arrow from $N^\alpha$ to $M^\beta$ if $\Lambda_{(\Gamma, (\lambda_N)_N)}$ features influence from $\texttt{Ind}^\textrm{\upshape out}_{N^\alpha}$ to $\texttt{Happens}_{M^\beta}$; \item there is a red dashed arrow from $N^\alpha$ to $M^\beta$ if $\Lambda_{\left( \Gamma^\top, (\lambda_N^\top)_N \right)}$ features influence from $\texttt{Ind}^\textrm{\upshape in}_{M^\beta}$ to $\texttt{Happens}_{N^\alpha}$; \item there is a solid arrow from $N^\alpha$ to $M^\beta$ if $N^\alpha$ is a strong parent of $M^\beta$. \end{itemize} \end{definition} \begin{definition}[Weak loops] Let $(\Gamma, (\lambda_N)_N)$ be a bi-univocal routed graph. We say that a loop in $\Gamma^\textrm{Bran}$ is \emph{weak} if it only contains green dashed arrows, or if it only contains red dashed arrows. \end{definition} \begin{theorem}[Main theorem] \label{thm:Main Theorem} Let $(\Gamma, (\lambda_N)_N)$ be a routed graph which is bi-univocal, and whose branch graph $\Gamma^\textrm{Bran}$ only features weak loops. Then its associated skeletal supermap $\mathcal S_{(\Gamma, (\lambda_N)_N)}$ is a superunitary. \end{theorem} The rest of this appendix is dedicated to the proof of this theorem. \subsection{Proof} \subsubsection{Preliminary lemmas and definitions} \label{sec: preliminary} \begin{lemma} \label{lem: simplification} To prove Theorem \ref{thm:Main Theorem}, it is sufficient to prove that, for any valid routed graph $(\Gamma, (\lambda_N)_N)$ (i.e.\ one that is bi-univocal and whose branch graph only features weak loops), $\mathcal S_{(\Gamma, (\lambda_N)_N)}$ preserves unitarity when acting on input operations without ancillas. \end{lemma} \begin{proof} Suppose it was proven that for any valid $(\Gamma, (\lambda_N)_N)$, and for any set of routed unitaries $U_N : \mathcal H^\textrm{\upshape in}_N \overset{\lambda_N}{\to} \mathcal H^\textrm{\upshape out}_N$, $\mathcal S_{(\Gamma, (\lambda_N)_N)}[(U_N)_N]$ is a unitary.
Now take a valid $(\Gamma, (\lambda_N)_N)$ and, for every $N$, a choice of ancillary input and output spaces $\mathcal H_N^{\textrm{\upshape in}, \textrm{\upshape anc}}$ and $\mathcal H_N^{\textrm{\upshape out}, \textrm{\upshape anc}}$, together with a routed unitary $U_N : \mathcal H^\textrm{\upshape in}_N \otimes \mathcal H^{\textrm{\upshape in},\textrm{\upshape anc}}_N \overset{\lambda_N}{\to} \mathcal H^\textrm{\upshape out}_N \otimes \mathcal H^{\textrm{\upshape out},\textrm{\upshape anc}}_N$. One can then define a new indexed graph $\tilde{\Gamma}$ by adding, for each $N$, a new arrow in $\texttt{Arr}_{\tilde{\Gamma}}^\textrm{\upshape in}$, with Hilbert space $\mathcal H^{\textrm{\upshape in},\textrm{\upshape anc}}_N$, and a new arrow in $\texttt{Arr}_{\tilde{\Gamma}}^\textrm{\upshape out}$, with Hilbert space $\mathcal H^{\textrm{\upshape out},\textrm{\upshape anc}}_N$. The routed graph $(\tilde{\Gamma}, (\lambda_N)_N)$ then has the same choice relation and the same branch graph as $(\Gamma, (\lambda_N)_N)$; it is therefore valid as well. We can thus apply our assumption to it, which entails that $\mathcal S_{(\tilde{\Gamma}, (\lambda_N)_N)}[(U_N)_N] = \mathcal S^\textrm{\upshape pad}_{(\Gamma, (\lambda_N)_N)}[(U_N)_N]$ is unitary. This proves the theorem in the general case. \end{proof} From now on, we will therefore work with a fixed routed graph $(\Gamma, (\lambda_N)_N)$ (which we will often denote by $\Gamma$ for simplicity) satisfying bi-univocality and weak loops, and a fixed collection of routed unitary maps $U_N : \mathcal H^\textrm{\upshape in}_N \overset{\lambda_N}{\to} \mathcal H^\textrm{\upshape out}_N$ following the $\lambda_N$'s. Writing $\mathcal S := \mathcal S_{(\Gamma, (\lambda_N)_N)}$ for simplicity, our goal is to prove that $\mathcal S[(U_N)_N]: \mathcal H_P \to \mathcal H_F$ is a unitary.
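A routed unitary following a branched route acts block-diagonally across the route's branches, sending each input sector onto the matching output sector. The following Python sketch (our own illustration, with hypothetical names; sectors are modelled as consecutive blocks of coordinates) assembles such a unitary from one unitary block per branch:

```python
import numpy as np


def routed_unitary(blocks):
    """Assemble a block-diagonal unitary from one unitary block per branch;
    the k-th block maps the k-th input sector onto the k-th output sector,
    and all cross-sector matrix elements vanish."""
    n = sum(b.shape[0] for b in blocks)
    u = np.zeros((n, n), dtype=complex)
    r = 0
    for b in blocks:
        d = b.shape[0]
        u[r:r + d, r:r + d] = b
        r += d
    return u
```

For instance, a Hadamard block on a two-dimensional sector together with a phase on a one-dimensional sector yields a $3 \times 3$ unitary with no matrix elements connecting the two sectors.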
For each branch $N^\alpha$, we define $\mathcal H^\textrm{\upshape in}_{N^\alpha} := \bigoplus_{(k_A)_{A \in \textrm{\upshape in}(N)} \in \texttt{Ind}^\textrm{\upshape in}_{N^\alpha}} \bigotimes_{A \in \textrm{\upshape in}(N)} \mathcal H_A^{k_A} \subseteq \mathcal H^\textrm{\upshape in}_N$ and $\mathcal H^\textrm{\upshape out}_{N^\alpha} := \bigoplus_{(k_A)_{A \in \textrm{\upshape out}(N)} \in \texttt{Ind}^\textrm{\upshape out}_{N^\alpha}} \bigotimes_{A \in \textrm{\upshape out}(N)} \mathcal H_A^{k_A} \subseteq \mathcal H^\textrm{\upshape out}_N$. We also define the projection $p_N^\alpha: \mathcal H^\textrm{\upshape in}_N \to \mathcal H^\textrm{\upshape in}_{N^\alpha}$ and the injection $i_N^\alpha : \mathcal H^\textrm{\upshape out}_{N^\alpha} \to \mathcal H^\textrm{\upshape out}_N$. We define the \textit{exchange} gate for $N$, $\textsc{exch}_N : \mathcal H^\textrm{\upshape in}_N \otimes \left(\bigotimes_{\alpha \in \texttt{Bran}(\lambda_N)} \mathcal H^\textrm{\upshape out}_{N^\alpha} \right) \to \mathcal H^\textrm{\upshape out}_N \otimes \left( \bigotimes_{\alpha \in \texttt{Bran}(\lambda_N)} \mathcal H^\textrm{\upshape in}_{N^\alpha} \right)$, by \begin{equation} \label{eq: def exchange} \textsc{exch}_N := \sum_{\alpha \in \texttt{Bran}(\lambda_N)} i_{N,\textrm{\upshape pad}}^\alpha \circ \left( \textsc{swap}_{N^\alpha_\textrm{\upshape in}, N^\alpha_\textrm{\upshape out}} \otimes \Big(\bigotimes_{\beta \neq \alpha} \Theta_{N^\beta}\Big) \right) \circ p_{N,\textrm{\upshape pad}}^\alpha \, , \end{equation} where, for every $N^\beta$, $\Theta_{N^\beta}$ is an arbitrarily chosen unitary from $\mathcal H^\textrm{\upshape out}_{N^\beta}$ to $\mathcal H^\textrm{\upshape in}_{N^\beta}$. Note that $\textsc{exch}_N$ follows $\lambda_N$ by construction. We note that the fact that the $U_N$'s follow the $\lambda_N$'s entails that one can find a block decomposition for them, i.e., one can define unitaries $U_N^\alpha: \mathcal H^\textrm{\upshape in}_{N^\alpha} \to \mathcal H^\textrm{\upshape out}_{N^\alpha}$ such that \begin{equation} \forall N, \quad U_N = \sum_{\alpha \in \texttt{Bran}(\lambda_N)} i_{N,\textrm{\upshape pad}}^\alpha \circ U_N^\alpha \circ p_{N,\textrm{\upshape pad}}^\alpha \, . \end{equation} As a first preliminary to the proof, we will study in detail how bifurcation choices are in correspondence with assignments of values to the arrows' indices. The following definition and lemma prove two things.
First, univocality implies that any tuple of bifurcation choices fixes not only the branch at every node, but also the specific index values picked in that branch. Second, for a fixed tuple of bifurcation choices, the bifurcation choices at the branches not happening have no effect -- i.e.\ modifying them to any other value wouldn't affect any of the index values in the graph; while, on the contrary, modifying the bifurcation choice at any of the branches happening always changes at least one of the index values in the graph. In that sense, any tuple of values of the graph's indices corresponds either to no tuple of bifurcation choices at all, or to exactly one bifurcation choice at the branches that happen for this tuple of values, with no dependence on the bifurcation choices at branches that don't happen. \begin{definition} For every $N$ in $\texttt{Nodes}_\Ga$, we take $\texttt{Ind}^\textrm{\upshape out}_N{}' \cong \texttt{Ind}^\textrm{\upshape out}_N$ and define the partial function (encoded as a relation) $\lambda_N^\textrm{\upshape sec} : \texttt{Ind}^\textrm{\upshape in}_N \times \left( \bigtimes_{\alpha \in \texttt{Bran}_N} \texttt{Ind}^\textrm{\upshape out}_{N^\alpha} \right) \to \texttt{Ind}^\textrm{\upshape out}_N \times \texttt{Ind}^\textrm{\upshape out}_N{}'$ given by \begin{equation} \label{eq:sec relation} \lambda_N^\textrm{\upshape sec}(k, (l^\alpha)_{\alpha \in \texttt{Bran}_N}) = \begin{cases} \{ (l^\alpha, l^\alpha) \} &\textrm{ if } k \in \texttt{Ind}^\textrm{\upshape in}_{N^\alpha} \\ \emptyset &\textrm{ if } \forall \alpha, \, k \not\in \texttt{Ind}^\textrm{\upshape in}_{N^\alpha} \,. \end{cases} \end{equation} \end{definition} \begin{lemma} \label{lem: lambdasec} If $(\Gamma, (\lambda_N)_N)$ is univocal, then $\mathcal S^{\mathsf{Rel}} \left[{(\lambda_N^\textrm{\upshape sec})}_N \right]$ is an injective function $\bigtimes_{N^\alpha \in \texttt{Bran}_\Gamma} \texttt{Ind}^\textrm{\upshape out}_{N^\alpha} \to \bigtimes_{N} \texttt{Ind}^\textrm{\upshape out}_N{}'$, which we denote by $\Lambda^\textrm{\upshape sec}$.
Furthermore, its preimage sets are given by \begin{equation} \label{eq: reverse lambdasec} \forall (k_N)_{N}, \, \left( \Lambda^\textrm{\upshape sec} \right)^{-1} \left((k_N)_{N}\right) = \textrm{ either } \emptyset \textrm{ or } \left( \bigtimes_N \{k_N\} \right) \times \left( \bigtimes_{N^\alpha \,|\, k_N \not\in \texttt{Ind}^\textrm{\upshape out}_{N^\alpha}} \texttt{Ind}^\textrm{\upshape out}_{N^\alpha} \right) \, . \end{equation} \end{lemma} \begin{proof} We will use bra-ket notations for relational states and effects. For every branch $N^\alpha$, we define $\mathrm{copy}_{N^\alpha_\textrm{\upshape out}}: \texttt{Ind}^\textrm{\upshape out}_{N^\alpha} \to \texttt{Ind}^\textrm{\upshape out}_{N^\alpha} \times \texttt{Ind}^\textrm{\upshape out}_{N^\alpha}{}'$, with $\texttt{Ind}^\textrm{\upshape out}_{N^\alpha}{}' \cong \texttt{Ind}^\textrm{\upshape out}_{N^\alpha}$, by $\mathrm{copy}_{N^\alpha_\textrm{\upshape out}} \ket{l} = \ket{l} \otimes \ket{l}$. For every node $N$, we define the partial function (encoded as a relation) $\sigma^N: \bigtimes_{\alpha \in \texttt{Bran}(\lambda_N)} \texttt{Happens}_{N^\alpha} \to \texttt{Bran}_N$ by \begin{equation} \label{eq: sigma def} \sigma^N((\varepsilon^{\alpha'})_{\alpha' \in \texttt{Bran}_N}) = \begin{cases} \{ \alpha \} &\textrm{ if } \forall \alpha', \, \varepsilon^{\alpha'} = \delta^{\alpha'}_{\alpha} \\ \emptyset &\textrm{ otherwise.} \end{cases} \end{equation} For every node $N$, we define the function $\nu^N: \texttt{Bran}_N \times \left( \bigtimes_{\alpha \in \texttt{Bran}_N} \texttt{Ind}^\textrm{\upshape out}_{N^\alpha}{}' \right) \to \texttt{Ind}^\textrm{\upshape out}_N{}'$, with $\texttt{Ind}^\textrm{\upshape out}_N{}' \cong \texttt{Ind}^\textrm{\upshape out}_N$, by \begin{equation} \nu^N(\alpha, \Vec{l}\,) = l^\alpha \, .
\end{equation} One can then compute that $\lambda_N^\textrm{\upshape sec} = \nu^N_\textrm{\upshape pad} \circ \sigma^N_\textrm{\upshape pad} \circ \lambda^\textrm{\upshape aug}_{N, \textrm{\upshape pad}} \circ \left( \prod_{\alpha} \mathrm{copy}_{N^\alpha_\textrm{\upshape out}, \textrm{\upshape pad}} \right)$; we can thus re-express $\Lambda^\textrm{\upshape sec}$ in terms of the choice function $\Lambda$: \begin{equation} \Lambda^\textrm{\upshape sec} = \mathcal S^{\mathsf{Rel}}\left[{(\lambda_N^\textrm{\upshape sec})}_N \right] = \left( \prod_N \nu^N_\textrm{\upshape pad} \right) \circ \left( \prod_N \sigma^N_\textrm{\upshape pad} \right) \circ \Lambda \circ \left( \prod_{N^\alpha} \mathrm{copy}_{N^\alpha_\textrm{\upshape out}, \textrm{\upshape pad}} \right) \, . \end{equation} Given that the outputs of a $\lambda^\textrm{\upshape aug}_N$ are within the domain of definition of the corresponding $\sigma^N$, the fact that $\Lambda$ is a function implies that $\left( \prod_N \sigma^N_\textrm{\upshape pad} \right) \circ \Lambda$ is a function as well. Given that the $\mathrm{copy}_{N^\alpha_\textrm{\upshape out}}$'s and $\nu^N$'s are functions, $\Lambda^\textrm{\upshape sec}$ is a function as well. Furthermore, let us fix an $N$ and $k_N \in \texttt{Ind}^\textrm{\upshape out}_N$. If $k_N$ is outside of $\lambda_N$'s practical outputs, it immediately has no preimage through $\nu^N$. Taking the other case, we denote by $\alpha$ the branch of $N$ such that $k_N \in \texttt{Ind}^\textrm{\upshape out}_{N^\alpha}$.
Then, \begin{equation} \begin{split} &\bra{k_N}_{\texttt{Ind}^\textrm{\upshape out}_{N^\alpha}{}', \textrm{\upshape pad}} \circ \lambda_N^\textrm{\upshape sec} \\ &= \bra{k_N}_{\texttt{Ind}^\textrm{\upshape out}_{N^\alpha}{}', \textrm{\upshape pad}} \circ \nu^N_\textrm{\upshape pad} \circ \sigma^N_\textrm{\upshape pad} \circ \lambda^\textrm{\upshape aug}_{N, \textrm{\upshape pad}} \circ \left( \bigotimes_{\alpha' \in \texttt{Bran}_N} \mathrm{copy}_{N^{\alpha'}_\textrm{\upshape out}} \right)_\textrm{\upshape pad}\\ &= \bra{k_N}_{\texttt{Ind}^\textrm{\upshape out}_{N^\alpha}{}', \textrm{\upshape pad}} \circ \left( \bigotimes_{\alpha' \in \texttt{Bran}_N \setminus \{\alpha\}} \bra{\texttt{Ind}^\textrm{\upshape out}_{N^{\alpha'}}} \right)_\textrm{\upshape pad} \circ \sigma^N_\textrm{\upshape pad} \circ \lambda^\textrm{\upshape aug}_{N, \textrm{\upshape pad}} \circ \left( \bigotimes_{\alpha' \in \texttt{Bran}_N} \mathrm{copy}_{N^{\alpha'}_\textrm{\upshape out}} \right)_\textrm{\upshape pad}\\ &= \bra{k_N}_{\texttt{Ind}^\textrm{\upshape out}_{N^\alpha}{}', \textrm{\upshape pad}} \circ \sigma^N_\textrm{\upshape pad} \circ \lambda^\textrm{\upshape aug}_{N, \textrm{\upshape pad}} \circ \left( \, \ketbra{k_N}{k_N} \, \right)_{\texttt{Ind}^\textrm{\upshape out}_{N^\alpha}, \textrm{\upshape pad}} \\ &= \left( \bigotimes_{\alpha' \in \texttt{Bran}_N} \bra{\delta^{\alpha'}_{\alpha}}_{\texttt{Happens}_{N^{\alpha'}}} \right)_\textrm{\upshape pad} \circ \lambda^\textrm{\upshape aug}_{N, \textrm{\upshape pad}} \circ \left( \, \ketbra{k_N}{k_N} \, \right)_{\texttt{Ind}^\textrm{\upshape out}_{N^\alpha}, \textrm{\upshape pad}} \\ &= \ket{k_N}_{\texttt{Ind}^\textrm{\upshape out}_N} \, \left( \bra{\texttt{Ind}^\textrm{\upshape in}_{N^\alpha}}_{\texttt{Ind}^\textrm{\upshape in}_N} \otimes \bra{k_N}_{\texttt{Ind}^\textrm{\upshape out}_{N^\alpha}} \otimes \left( \bigotimes_{\alpha' \in \texttt{Bran}_N \setminus \{\alpha\}} \bra{\texttt{Ind}^\textrm{\upshape out}_{N^{\alpha'}}} \right) \right) \, .
\end{split} \end{equation} Therefore, we find that $\left( \Lambda^\textrm{sec} \right)^{-1} ((k_N)_N)$ is empty if at least one of the $k_N$'s is outside of the practical outputs of the corresponding $\lambda_N$, and that otherwise -- denoting, for every $N$, by $\alpha(k_N)$ the branch such that $k_N \in \texttt{Ind}^\textrm{out}_{N^{\alpha(k_N)}}$ --, \begin{equation} \begin{split} &\left( \bigotimes_{N \in \texttt{Nodes}_\Gamma} \bra{k_N}_{\texttt{Ind}^\textrm{out}_N{}'} \right) \circ \Lambda^\textrm{sec} \\ &= \left( \bigotimes_{N \in \texttt{Nodes}_\Gamma} \bra{k_N}_{\texttt{Ind}^\textrm{out}_N{}'} \right) \circ \mathcal{S}^{\mathsf{Rel}} \left[{(\lambda^\textrm{sec}_N)}_N \right] \\ &= \mathcal{S}^{\mathsf{Rel}} \left[ \left(\ket{k_N}_{\texttt{Ind}^\textrm{out}_N} \bra{\texttt{Ind}^\textrm{in}_{N^{\alpha(k_N)}}}_{\texttt{Ind}^\textrm{in}_N} \right)_N \right] \otimes \left( \bigotimes_{N \in \texttt{Nodes}_\Gamma} \bra{k_N}_{\texttt{Ind}^\textrm{out}_{N^{\alpha(k_N)}}} \otimes \left( \bigotimes_{\alpha' \in \texttt{Bran}_N \setminus \{\alpha(k_N)\}} \bra{\texttt{Ind}^\textrm{out}_{N^{\alpha'}}} \right)\right) \, . \\ \end{split} \end{equation} $\mathcal{S}^{\mathsf{Rel}} \left[ \left(\ket{k_N}_{\texttt{Ind}^\textrm{out}_N} \bra{\texttt{Ind}^\textrm{in}_{N^{\alpha(k_N)}}}_{\texttt{Ind}^\textrm{in}_N} \right)_N \right]$ is just a scalar in the theory of relations, i.e.\ $0$ or $1$; $\left( \Lambda^\textrm{sec} \right)^{-1} ((k_N)_N)$ is thus non-empty if and only if this scalar is equal to $1$, and the rest of the expression yields (\ref{eq: reverse lambdasec}). This also shows that $\Lambda^\textrm{sec}_\Gamma$ is injective.
\end{proof} Note that we defined $\Lambda^\textrm{sec}_\Gamma$ as having codomain $\bigtimes_{N \in \texttt{Nodes}_\Gamma} \texttt{Ind}^\textrm{out}_N$; but, given that for each $N$ we have $\texttt{Ind}^\textrm{out}_N = \bigtimes_{A \in \textrm{out}(N)} \texttt{Ind}_A$, we can also see it as a function to $\bigtimes_{A \in \texttt{Arr}_\Gamma} \texttt{Ind}_A$ (we neglect the discrepancy due to global input arrows of the graph, as their sets of index values are trivial). $\Lambda^\textrm{sec}_\Gamma$ can thus be interpreted as telling us how bifurcation choices fix all indices in the graph. $\Lambda^\textrm{sec}_{\Gamma^\top}$, obtained from considering the adjoint graph, tells us the same about reverse bifurcation choices. From that perspective, in the above lemma, the case of an empty set of preimages corresponds exactly to impossible assignments of values to the arrows, i.e.\ to ones that are outside of $\texttt{PossVal}_\Gamma$. \begin{lemma} \label{lem: PossVal} Given ${\Vec{k}} = (k_A)_{A \in \texttt{Arr}_\Gamma}$, $\left( \Lambda^\textrm{sec} \right)^{-1} ({\Vec{k}})$ is empty if and only if ${\Vec{k}} \not\in \texttt{PossVal}_\Gamma$. \end{lemma} \begin{proof} First, if there exists an $N$ such that $k_N = (k_A)_{A \in \textrm{out}(N)}$ is outside $\lambda_N$'s practical outputs, then $\left( \Lambda^\textrm{sec} \right)^{-1} ({\Vec{k}})$ is empty (as pointed out in the previous proof), and ${\Vec{k}} \not\in \texttt{PossVal}_\Gamma$ (as pointed out in Lemma \ref{lem: mu}). Otherwise, we know from the previous proof that $\left( \Lambda^\textrm{sec} \right)^{-1} ({\Vec{k}})$ is non-empty if and only if $\mathcal{S}^{\mathsf{Rel}} \left[ \left(\ket{k_N}_{\texttt{Ind}^\textrm{out}_N} \bra{\texttt{Ind}^\textrm{in}_{N^{\alpha(k_N)}}}_{\texttt{Ind}^\textrm{in}_N} \right)_N \right] = 1$.
But given how $\mathcal{S}^{\mathsf{Rel}}$ was defined in (\ref{eq:skeletal superrelation}), and the form of the $\lambda_N$'s, this is the case if and only if, for all $N$, $(k_A)_{A \in \textrm{in}(N)}$ is in the branch $\alpha(k_N)$. As the function $\alpha_N$ is precisely the function $\mu^\textrm{out}_N$ defined in Lemma \ref{lem: mu}, we thus find that the condition $\mu^\textrm{out}_N({\Vec{k}}) = \mu^\textrm{in}_N({\Vec{k}})$ shown in that lemma is necessary and sufficient for ${\Vec{k}} \in \texttt{PossVal}_\Gamma$. \end{proof} Finally, we draw the consequences of the fact that branches satisfy the weak loops condition. Given a branch $N^\alpha$, we define the following subsets of $\texttt{Bran}_\Gamma$. By a `path' in $\Gamma^\texttt{Bran}$, we mean any sequence of arrows, without a distinction between the solid, green dashed or red dashed types. \begin{itemize} \item $\mathcal{P}(N^\alpha) := \{O^\gamma \neq N^\alpha \, | \, \exists \textrm{ a path } O^\gamma \to N^\alpha \textrm{ in } \Gamma^\texttt{Bran} \}$, $N^\alpha$'s past; \item $\mathcal{F}(N^\alpha) := \{O^\gamma \neq N^\alpha \, | \, \exists \textrm{ a path } N^\alpha \to O^\gamma \textrm{ in } \Gamma^\texttt{Bran} \}$, $N^\alpha$'s future; \item $\mathcal{L}(N^\alpha) := \mathcal{P}(N^\alpha) \cap \mathcal{F}(N^\alpha)$, $N^\alpha$'s layer (i.e.\ the branches that form a loop with $N^\alpha$); \item $\mathcal{P}^\textrm{str}(N^\alpha) := \mathcal{P}(N^\alpha) \setminus \mathcal{L}(N^\alpha)$, $N^\alpha$'s strict past; \item $\mathcal{F}^\textrm{str}(N^\alpha) := \mathcal{F}(N^\alpha) \setminus \mathcal{L}(N^\alpha)$, $N^\alpha$'s strict future. \end{itemize} It is clear that the relation $\sim$, defined by ``$N^\alpha \sim O^\gamma$ if $N^\alpha = O^\gamma$ or $O^\gamma \in \mathcal{L}(N^\alpha)$'', is an equivalence relation on $\texttt{Bran}_\Gamma$, partitioning it into a collection of layers.
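The layers defined above are exactly the strongly connected components of $\Gamma^{\texttt{Bran}}$ (classes of mutual reachability), so they can be computed, and listed compatibly with reachability, by standard graph algorithms. As a side illustration (the function and variable names are ours, purely for this sketch, and not part of the construction in the text), a minimal Python implementation using Kosaraju's two-pass algorithm:

```python
from collections import defaultdict

def layers_and_order(edges, vertices):
    """Partition a directed graph into its strongly connected components
    (the 'layers') and return them in a topological order of the acyclic
    condensation: every edge goes from a component to a later one."""
    adj, radj = defaultdict(list), defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        radj[v].append(u)

    # First pass: record finishing order via an iterative DFS.
    seen, order = set(), []
    for s in vertices:
        if s in seen:
            continue
        seen.add(s)
        stack = [(s, iter(adj[s]))]
        while stack:
            node, it = stack[-1]
            nxt = next(it, None)
            if nxt is None:
                order.append(node)
                stack.pop()
            elif nxt not in seen:
                seen.add(nxt)
                stack.append((nxt, iter(adj[nxt])))

    # Second pass: DFS on the reversed graph, in reverse finishing order.
    comp, comps = {}, []
    for s in reversed(order):
        if s in comp:
            continue
        comps.append([])
        comp[s] = len(comps) - 1
        stack = [s]
        while stack:
            node = stack.pop()
            comps[-1].append(node)
            for m in radj[node]:
                if m not in comp:
                    comp[m] = len(comps) - 1
                    stack.append(m)
    return comps

# A 2-cycle {a,b} feeding into c: two layers, [a,b] before [c].
comps = layers_and_order([("a", "b"), ("b", "a"), ("b", "c")], ["a", "b", "c"])
assert sorted(comps[0]) == ["a", "b"]
assert comps[1] == ["c"]
```

Flattening the returned list of components gives a total order in which members of the same layer are adjacent, and in which an earlier element never lies in the strict future of a later one.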
The fact that all loops in $\texttt{Bran}_\Gamma$ are weak then allows us to say that a given layer either only contains green dashed arrows between its elements (in which case we will call it a green layer), or only contains red dashed arrows (in which case we will call it a red layer)\footnote{Note that single-branch layers can be considered to be either green or red: the choice will not affect the proof.}. Furthermore, merging the nodes of each layer transforms $\Gamma^\texttt{Bran}$ into an acyclic graph. One can thus define a partial order between layers. Arbitrarily turning it into a total order, and picking arbitrary orderings within each layer, leads to a total ordering $<$ of $\texttt{Bran}_\Gamma$ in which branches of the same layer are all next to each other, and in which $N^\alpha < O^\gamma \implies N^\alpha \not\in \mathcal{F}^\textrm{str}(O^\gamma)$. We can use this total ordering to label the branches with natural numbers, as $\texttt{Bran}_\Gamma = \{ B(i) \, | \, 1 \leq i \leq n \}$. For a given $i$ and a given branch $N^\alpha > B(i)$, we define $\mathcal{P}_i(N^\alpha) := \mathcal{P}(N^\alpha) \cap \{O^\gamma > B(i) \}$, $\mathcal{F}_i(N^\alpha) := \mathcal{F}(N^\alpha) \cap \{O^\gamma > B(i) \}$, etc. \subsubsection{The induction hypothesis} This ordering of $\Gamma$'s branches will allow us to define an induction. The idea is to start from $\mathcal{S}_\textrm{pad}[(\textsc{exch}_N)_N]$, and then to `refill' the branches one by one, making sure that the unitary obtained at each step is sufficiently well-behaved for us to be able to move to the next step. To define this induction, we first need to define the `partially filled exchanges' used at every step $i$, which we shall call the $V_{N,i}$'s.
We do so by defining how they act on each branch: i.e., $\forall i, \forall N^\alpha$, we define $V_{N,i}^\alpha : \mathcal{H}^\textrm{in}_N \otimes \left( \bigotimes_{\beta \, | \, N^\beta > B(i)} \mathcal{H}^\textrm{out}_{N^\beta} \right) \to \mathcal{H}^\textrm{out}_N \otimes \left( \bigotimes_{\beta \, | \, N^\beta > B(i)} \mathcal{H}^\textrm{in}_{N^\beta} \right)$ by \begin{equation} V_{N,i}^\alpha = \begin{cases} i^\alpha_{N,\textrm{pad}} \circ \left( \textsc{swap}_{N^\alpha_\textrm{in}, N^\alpha_\textrm{out}} \otimes \left(\bigotimes_{\beta > B(i), \beta \neq \alpha} \Theta_{N^\beta} \right) \right) \circ p^\alpha_{N,\textrm{pad}} &\textrm{ if } N^\alpha > B(i) \, ,\\ \left( i_N^\alpha \circ U_N^\alpha \circ p_N^\alpha \right) \otimes \left( \bigotimes_{\beta > B(i)} \Theta_{N^\beta} \right) &\textrm{ if } N^\alpha \leq B(i) \,, \end{cases}\end{equation} and we use them to define \begin{equation} V_{N,i} := \sum_{\alpha \in \texttt{Bran}(\lambda_N)} V_{N,i}^\alpha \, .\end{equation} We will write the input (resp.\ output) space of $\mathcal{S}_\textrm{pad}[(V_{N,i})_N]$ as $\mathcal{H}_i^\textrm{out} := \mathcal{H}_P \otimes \left( \bigotimes_{N^\beta > B(i)} \mathcal{H}^\textrm{out}_{N^\beta} \right)$ (resp.\ $\mathcal{H}_i^\textrm{in} := \mathcal{H}_F \otimes \left( \bigotimes_{N^\beta > B(i)} \mathcal{H}^\textrm{in}_{N^\beta} \right)$). We also write $\Bar{V}_{N,i}^\alpha := V_{N,i} - V_{N,i}^\alpha$. Note that the $V_{N,i}$'s follow the $\lambda_N$'s by construction, and that one has $V_{N,0} = \textsc{exch}_N$ and $V_{N,n} = U_N$. The core of the induction will be the hypothesis that, at step $i$, $\mathcal{S}_\textrm{pad}[(V_{N,i})_N]$ is unitary. However, this will not be sufficient: we will also need other conditions ensuring that $\mathcal{S}_\textrm{pad}[(V_{N,i})_N]$ features the structural properties that allow us to move to step $i+1$.
More precisely, these conditions will encode the fact that at every step $i$, and for every branch $N^\alpha$ that has not been filled yet (i.e.\ such that $N^\alpha > B(i)$), one can find projectors on $\mathcal{S}_\textrm{pad}[(V_{N,i})_N]$'s inputs and outputs that control whether $N^\alpha$ happens or not, and that all these projectors play well with one another. One subtlety is that, if $B(i)$ is in a \textit{red} layer and if there are still unfilled branches in that layer, then the projectors controlling the status of branches above that layer cannot be defined. This is ultimately not problematic, as one can wait for the whole layer to have been filled before redefining them; but this will force us to amend parts of the induction hypothesis when it is the case. Finally, another part of the induction hypothesis will rely on the causal properties of $\mathcal{S}_\textrm{pad} \left[(V_{N,i})_N \right]$. We will describe these by using the behaviour of $\mathcal{S}_\textrm{pad}[(V_{N,i})_N]$ seen as an isomorphism of operator algebras, defining $\mathcal{V}_i : \Lin \left[ \mathcal{H}_P \otimes \left( \bigotimes_{O^\gamma > B(i)} \mathcal{H}_{O^\gamma_\textrm{out}} \right) \right] \to \Lin \left[ \mathcal{H}_F \otimes \left( \bigotimes_{O^\gamma > B(i)} \mathcal{H}_{O^\gamma_\textrm{in}} \right) \right]$ by \begin{equation} \forall f, \quad \mathcal{V}_i[f] := \mathcal{S}_\textrm{pad}[(V_{N,i})_N] \circ f \circ \mathcal{S}_\textrm{pad}[(V_{N,i})_N]^\dagger \, . \end{equation} When $\mathcal{S}_\textrm{pad}[(V_{N,i})_N]$ is unitary, this defines an isomorphism of operator algebras, preserving sums, compositions, and the dagger. This implies that, more generally, $\mathcal{V}_i$ will preserve commutation relations, self-adjointness, idempotency, etc. We now turn to our induction hypotheses at step $i$. \begin{hyp}[H1]\label{H1} $\mathcal{S}_\textrm{pad}[(V_{N,i})_N]$ is unitary. \end{hyp} As we mentioned, H1 is the core of the induction, and will allow us to conclude in the end that $\mathcal{S}[(V_{N,n})_N] = \mathcal{S}[(U_{N})_N]$ is indeed unitary.
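As a toy finite-dimensional check of the closing remark (conjugation by a unitary, $f \mapsto V f V^\dagger$, preserves products, adjoints, idempotency and commutation relations), here is a short self-contained Python sketch; the $2 \times 2$ matrices are our own illustrative choices, not objects from the construction:

```python
import math

# Plain 2x2 matrix helpers (no external libraries).
def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dag(a):  # conjugate transpose
    return [[a[j][i].conjugate() for j in range(2)] for i in range(2)]

def close(a, b, eps=1e-9):
    return all(abs(a[i][j] - b[i][j]) < eps for i in range(2) for j in range(2))

t = 0.3
V = [[math.cos(t), -math.sin(t)],
     [math.sin(t),  math.cos(t)]]   # a rotation, hence unitary

def conj(f):
    """The isomorphism f -> V f V^dagger."""
    return mul(mul(V, f), dag(V))

P = [[1, 0], [0, 0]]   # an orthogonal projector: P^2 = P = P^dagger
Q = [[0, 0], [0, 1]]   # commutes with P (in fact PQ = QP = 0)

# The image of a projector is again a self-adjoint idempotent...
assert close(mul(conj(P), conj(P)), conj(P))
assert close(conj(P), dag(conj(P)))
# ...and commutation relations are preserved.
assert close(mul(conj(P), conj(Q)), mul(conj(Q), conj(P)))
```

The same algebra ($V f V^\dagger V g V^\dagger = V f g V^\dagger$, using $V^\dagger V = \mathbb{1}$) is what makes $\mathcal{V}_i$ structure-preserving in the infinite-precision setting of the text.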
\begin{hyp}[H2] One has defined, for all $N^\alpha > B(i)$, orthogonal projectors: \begin{itemize} \item $\zeta^\textrm{out}_i(N^\alpha)$, acting on $\begin{cases} \mathcal{H}_P \otimes \left( \bigotimes_{O^\gamma \in \mathcal{P}_i(N^\alpha)} \mathcal{H}_{O^\gamma_\textrm{out}} \right) \, &\textrm{ if } N^\alpha \textrm{ is in a green layer;}\\ \mathcal{H}_P \otimes \left( \bigotimes_{O^\gamma \in \mathcal{P}^\textrm{str}_i(N^\alpha)} \mathcal{H}_{O^\gamma_\textrm{out}} \right) \, &\textrm{ if } N^\alpha \textrm{ is in a red layer;} \end{cases}$ \item $\zeta^\textrm{in}_i(N^\alpha)$, acting on $\begin{cases} \mathcal{H}_F \otimes \left( \bigotimes_{O^\gamma \in \mathcal{F}^\textrm{str}_i(N^\alpha)} \mathcal{H}_{O^\gamma_\textrm{in}} \right) \, &\textrm{ if } N^\alpha \textrm{ is in a green layer;}\\ \mathcal{H}_F \otimes \left( \bigotimes_{O^\gamma \in \mathcal{F}_i(N^\alpha)} \mathcal{H}_{O^\gamma_\textrm{in}} \right) \, &\textrm{ if } N^\alpha \textrm{ is in a red layer;} \end{cases}$ \end{itemize} such that (once correctly padded) the $\zeta^\textrm{out}_{i,\textrm{pad}}(N^\alpha)$'s for different $N^\alpha$'s all commute pairwise, the $\zeta^\textrm{in}_{i,\textrm{pad}}(N^\alpha)$'s commute pairwise as well, and \begin{equation} \label{eq:zetain to zetaout} \forall N^\alpha, \, \zeta^\textrm{in}_{i,\textrm{pad}}(N^\alpha) = \mathcal{V}_i[\zeta^\textrm{out}_{i,\textrm{pad}}(N^\alpha)] \, . \end{equation} If $B(i)$ and $B(i+1)$ are in the same red layer, then all of the former definitions have only been made for the $N^\alpha$'s in that same layer, i.e.\ in $\mathcal{L}_i(B(i))$. When this happens, we say that \emph{$i$ is a special step}. \end{hyp} H2 introduces the projectors that will be used to control the status of the still-unfilled branches. The fact that the $\zeta$'s commute pairwise ensures that these controls can always be meaningfully combined.
Note that the out-projector for $N^\alpha$ only acts on $N^\alpha$'s past, while its in-projector only acts on $N^\alpha$'s future; and furthermore, that for $N^\alpha$ in a green layer its in-projector only acts on $N^\alpha$'s \textit{strict} future, while for $N^\alpha$ in a red layer its out-projector only acts on its \textit{strict} past. In particular, the $\zeta(N^\alpha)$'s never act on $N^\alpha$ itself: this ensures that, at any step, a branch never holds some part of its own controls. We will also write $\Bar{\zeta}^\textrm{in}_i(N^\alpha) := \mathbb{1} - \zeta^\textrm{in}_i(N^\alpha)$ and $\Bar{\zeta}^\textrm{out}_i(N^\alpha) := \mathbb{1} - \zeta^\textrm{out}_i(N^\alpha)$. \begin{hyp}[H3] The $\zeta^\textrm{out}_i$'s satisfy \begin{equation} \forall N^\alpha, \forall O^\gamma, \quad \zeta^\textrm{out}_{i,\textrm{pad}}(N^\alpha) \circ \Bar{\zeta}^\textrm{out}_{i,\textrm{pad}}(O^\gamma) \textrm{ acts trivially on } \mathcal{H}_{O^\gamma_\textrm{out}} \, , \end{equation} and the $\zeta^\textrm{in}_i$'s satisfy \begin{equation} \forall N^\alpha, \forall O^\gamma, \quad \zeta^\textrm{in}_{i,\textrm{pad}}(N^\alpha) \circ \Bar{\zeta}^\textrm{in}_{i,\textrm{pad}}(O^\gamma) \textrm{ acts trivially on } \mathcal{H}_{O^\gamma_\textrm{in}} \, . \end{equation} \end{hyp} This hypothesis encodes the fact that, when a branch $O^\gamma$ does not happen, it holds no control on other branches $N^\alpha$. Note that the $\forall N^\alpha, O^\gamma$ only runs over the branches for which the $\zeta$'s have been defined in H2, i.e.\ it only runs over $\mathcal{L}_i(B(i))$ if $i$ is a special step. The same will apply in the other hypotheses. \begin{hyp}[H4] The $\zeta^\textrm{out}_i$'s satisfy: \begin{equation} \forall N^\alpha, N^\beta \textrm{ branches of the same node}, \, \zeta^\textrm{out}_{i,\textrm{pad}}(N^\alpha) \circ \zeta^\textrm{out}_{i,\textrm{pad}}(N^\beta) = 0 \,. \end{equation} \end{hyp} The meaning is that two branches of the same node are incompatible. Note that one can infer, using (\ref{eq:zetain to zetaout}), that the $\zeta^\textrm{in}_i$'s satisfy the same property.
\begin{hyp}[H5] Let $Q \subseteq \{B(i') \, | \, i' \geq i\}$ be a set of branches on different nodes; i.e., one can define $\Tilde{Q} \subseteq \texttt{Nodes}_\Gamma$ and a function $\alpha$ such that $Q = \{ N^{\alpha(N)} \, | \, N \in \Tilde{Q}\}$. Then, \begin{equation} \mathcal{S}_\textrm{pad}[(V_{N,i})_N] \circ \prod_{N \in \Tilde{Q}} \zeta^\textrm{out}_{i,\textrm{pad}} \left( N^{\alpha(N)} \right) = \mathcal{S}_\textrm{pad} \left[(V_{N,i})_{N \in \texttt{Nodes}_\Gamma \setminus \Tilde{Q}} \times \left(V_{N,i}^{\alpha(N)} \right)_{N \in \Tilde{Q}} \right] \, . \end{equation} \end{hyp} H5 formalises the fact that the $\zeta^\textrm{out}_i$'s control whether branches happen or not. Note that, using (\ref{eq:zetain to zetaout}), one could have written the same equation using the $\zeta^\textrm{in}_i$'s. \begin{hyp}[H6] For a branch $N^\alpha$ in a green layer, we have \begin{equation} \label{eq:(H6)} \begin{split} &\forall f \in \Lin[\mathcal{H}_{N^\alpha_\textrm{in}}], \, \exists f' \in \Lin \left[\mathcal{H}_P \otimes \left( \bigotimes_{O^\gamma \in \mathcal{P}^\textrm{str}_i (N^\alpha)} \mathcal{H}_{O^\gamma_\textrm{out}} \right) \right] \, \textrm{ such that} \\ &\mathcal{V}_i^\dagger[f_\textrm{pad}] \circ \zeta^\textrm{out}_{i,\textrm{pad}}(N^\alpha) = {f'}_\textrm{pad} \circ \zeta^\textrm{out}_{i,\textrm{pad}}(N^\alpha) \,. \end{split} \end{equation} \end{hyp} H6 means that, for a branch $N^\alpha$ in a green layer, provided that one is in the subspace in which branch $N^\alpha$ happens, $\mathcal{S}_\textrm{pad}[(V_{N,i})_N]$'s causal structure only has the \textit{strict} past of $N^\alpha$ signalling to $N^\alpha_\textrm{in}$. This will be important to ensure that, when $N^\alpha$ is `refilled', the action of any $\zeta^\textrm{out}_i$ on it becomes an action on its strict past.
\begin{hyp}[H7] For a branch $N^\alpha$ in a red layer, we have \begin{equation} \label{eq:(H7)} \begin{split} &\forall f \in \Lin[\mathcal{H}_{N^\alpha_\textrm{out}}], \, \exists f' \in \Lin \left[\mathcal{H}_F \otimes \left( \bigotimes_{O^\gamma \in \mathcal{F}^\textrm{str}_i (N^\alpha)} \mathcal{H}_{O^\gamma_\textrm{in}} \right) \right] \, \textrm{ such that} \\ &\mathcal{V}_i[f_\textrm{pad}] \circ \zeta^\textrm{in}_{i,\textrm{pad}}(N^\alpha) = {f'}_\textrm{pad} \circ \zeta^\textrm{in}_{i,\textrm{pad}}(N^\alpha) \,. \end{split} \end{equation} \end{hyp} H7 plays the same role as H6 in the reverse time direction. \subsubsection{Proof of the base case} \paragraph{H1} The proof that $\mathcal{S}_\textrm{pad}[(\textsc{exch}_N)_N]$ is unitary will rely on the lemmas of Section \ref{sec: preliminary}. To use them, we will first introduce a way to show how bifurcation choices are enforced through the use of the $\textsc{exch}$'s. For every $A$ in $\texttt{Arr}_\Gamma$, we define $\textsc{witness}_A: \mathcal{H}_A \to \mathcal{H}_A \otimes \mathbb{C}^{\abs{\texttt{Ind}_A}}$ by \begin{equation} \textsc{witness}_A := \sum_{k_A \in \texttt{Ind}_A} \pi^{k_A}_A \otimes \ket{k_A} \, , \end{equation} where the $\pi^{k_A}_A$'s are the projectors on the different sectors of $A$, and we have introduced an arbitrary basis of $\mathbb{C}^{\abs{\texttt{Ind}_A}}$ labelled by $A$'s index values. The point of $\textsc{witness}_A$ is simply to channel out the information about each arrow's index value. For a given $N$, with respect to the sectorisations of the $\mathcal{H}_{N^\alpha_\textrm{out}}$'s, of the $\mathcal{H}_A$'s for the $A$'s in $\textrm{in}(N)$ and $\textrm{out}(N)$, and to the sectorisation of the $\mathbb{C}^{\abs{\texttt{Ind}_A}}$'s given by the previous basis, $\lambda^\textrm{sec}_N$ is a route for $\left( \bigotimes_{A \in \textrm{out}(N)} \textsc{witness}_{A} \right)_\textrm{pad} \circ \textsc{exch}_N$.
Thus (because the compatibility with routes is preserved by the dagger compact structure \cite{vanrietvelde2021routed}), $\mathcal{S}^{\mathsf{Rel}}_\textrm{pad}[(\lambda^\textrm{sec}_N)_N] = \Lambda^\textrm{sec}_\Gamma$ is a route for $\mathcal{S}_\textrm{pad} \left[\left (\left( \bigotimes_{A \in \textrm{out}(N)} \textsc{witness}_{A} \right)_\textrm{pad} \circ \textsc{exch}_N \right)_N \right]$. Therefore, \begin{equation} \label{eq: base case H1 1} \begin{split} &\mathcal{S}_\textrm{pad} \left[\left (\left( \bigotimes_{A \in \textrm{out}(N)} \textsc{witness}_{A} \right)_\textrm{pad} \circ \textsc{exch}_N \right)_N \right] \\ &= \sum_{{\Vec{k}} \in \bigtimes_A \texttt{Ind}_A} \left( \bigotimes_A \ketbra{k_A}{k_A} \right)_\textrm{pad} \circ \mathcal{S}_\textrm{pad} \left[\left (\left( \bigotimes_{A \in \textrm{out}(N)} \textsc{witness}_{A} \right)_\textrm{pad} \circ \textsc{exch}_N \right)_N \right] \\ &\quad \circ \left( \sum_{\Vec{q} \in \left( \Lambda^\textrm{sec} \right)^{-1} ({\Vec{k}})} \bigotimes_{N^\alpha \in \texttt{Bran}_\Gamma} \pi^{q_{N^\alpha}}_{N^\alpha_\textrm{out}} \right)_\textrm{pad} \\ &\overset{\textrm{Lemma \ref{lem: PossVal}}}{=} \sum_{{\Vec{k}} \in \texttt{PossVal}_\Gamma} \left( \bigotimes_A \ketbra{k_A}{k_A} \right)_\textrm{pad} \circ \mathcal{S}_\textrm{pad} \left[\left (\left( \bigotimes_{A \in \textrm{out}(N)} \textsc{witness}_{A} \right)_\textrm{pad} \circ \textsc{exch}_N \right)_N \right] \\ &\quad \circ \left( \sum_{\Vec{q} \in \left( \Lambda^\textrm{sec} \right)^{-1} ({\Vec{k}})} \bigotimes_{N^\alpha \in \texttt{Bran}_\Gamma} \pi^{q_{N^\alpha}}_{N^\alpha_\textrm{out}} \right)_\textrm{pad} \\ &\overset{\textrm{Lemma \ref{lem: lambdasec}}}{=} \sum_{{\Vec{k}} \in \texttt{PossVal}_\Gamma} \left( \bigotimes_A
\ketbra{k_A}{k_A} \right)_\textrm{pad} \circ \mathcal{S}_\textrm{pad} \left[\left (\left( \bigotimes_{A \in \textrm{out}(N)} \textsc{witness}_{A} \right)_\textrm{pad} \circ \textsc{exch}_N \right)_N \right] \\ &\quad \circ \left(\bigotimes_{N \in \texttt{Nodes}_\Gamma} \pi^{(k_A)_{A \in \textrm{out}(N)}}_{N^{\mu_N({\Vec{k}})}_\textrm{out}} \right)_\textrm{pad} \, , \end{split} \end{equation} where $\pi^{(k_A)_{A \in \textrm{out}(N)}}_{N^{\mu_N({\Vec{k}})}_\textrm{out}}$ is the projector on $\mathcal{H}_{N^{\mu_N({\Vec{k}})}_\textrm{out}}$'s sector labelled by $(k_A)_{A \in \textrm{out}(N)}$ (remember that, for a given $N^\alpha$, we have $\mathcal{H}_{N^\alpha_\textrm{out}} = \bigoplus_{(k_A)_{A \in \textrm{out}(N)} \in \texttt{Ind}^\textrm{out}_{N^\alpha}} \bigotimes_{A \in \textrm{out}(N)} \mathcal{H}_A^{k_A}$). Moreover, we have $\left( \sum_{k \in \texttt{Ind}_A} \bra{k} \right)_\textrm{pad} \circ \textsc{witness}_A = \mathbb{1}_A$, so \begin{equation} \label{eq: base case H1 2} \begin{split} &\mathcal{S}_\textrm{pad} \left[(\textsc{exch}_N)_N \right] \\ &= \mathcal{S}_\textrm{pad} \left[\left( \sum_{(k_A)_A \in \texttt{Ind}^\textrm{out}_N} \left(\bigotimes_{A \in \textrm{out}(N)} \bra{k_A} \circ \textsc{witness}_A \right)_\textrm{pad} \circ \textsc{exch}_N \right)_N \right] \\ &= \sum_{{\Vec{k}} \in \bigtimes_A \texttt{Ind}_A} \left( \bigotimes_{A} \bra{k_A} \right)_\textrm{pad} \circ \mathcal{S}_\textrm{pad} \left[\left( \left( \textsc{witness}_A \right)_\textrm{pad} \circ \textsc{exch}_N \right)_N \right] \\ &\overset{\textrm{(\ref{eq: base case H1 1})}}{=} \sum_{{\Vec{k}} \in \texttt{PossVal}_\Gamma} \left( \bigotimes_A \bra{k_A} \right)_\textrm{pad} \circ \mathcal{S}_\textrm{pad} \left[\left (\left( \bigotimes_{A \in \textrm{out}(N)} \textsc{witness}_{A}
\right)_\textrm{pad} \circ \textsc{exch}_N \right)_N \right] \\ &\quad \circ \left(\bigotimes_{N \in \texttt{Nodes}_\Gamma} \pi^{(k_A)_{A \in \textrm{out}(N)}}_{N^{\mu_N({\Vec{k}})}_\textrm{out}} \right)_\textrm{pad} \\ &= \sum_{{\Vec{k}} \in \texttt{PossVal}_\Gamma} \mathcal{S}_\textrm{pad} \left[\left (\left( \bigotimes_{A \in \textrm{out}(N)} \pi^{k_A}_{A} \right)_\textrm{pad} \circ \textsc{exch}_N \right)_N \right] \circ \left(\bigotimes_{N \in \texttt{Nodes}_\Gamma} \pi^{(k_A)_{A \in \textrm{out}(N)}}_{N^{\mu_N({\Vec{k}})}_\textrm{out}} \right)_\textrm{pad} \, . \end{split} \end{equation} A symmetric argument relying on $\Gamma^\top$ leads to \begin{equation} \label{eq: base case H1 2 sym} \begin{split} &\mathcal{S}_\textrm{pad} \left[(\textsc{exch}_N)_N \right] \\ &= \sum_{{\Vec{k}} \in \texttt{PossVal}_\Gamma} \left(\bigotimes_{N \in \texttt{Nodes}_\Gamma} \pi^{(k_A)_{A \in \textrm{in}(N)}}_{N^{\mu_N({\Vec{k}})}_\textrm{in}} \right)_\textrm{pad} \circ \mathcal{S}_\textrm{pad} \left[\left( \textsc{exch}_N \circ \left( \bigotimes_{A \in \textrm{in}(N)} \pi^{k_A}_{A} \right)_\textrm{pad} \right)_N \right] \, . \end{split} \end{equation} Furthermore, the projectors $\left(\bigotimes_{N \in \texttt{Nodes}_\Gamma} \pi^{(k_A)_{A \in \textrm{out}(N)}}_{N^{\mu_N({\Vec{k}})}_\textrm{out}} \right)_\textrm{pad}$, for ${\Vec{k}} \in \texttt{PossVal}_\Gamma$, form a sectorisation of the input space of $\mathcal{S}_\textrm{pad} \left[(\textsc{exch}_N)_N \right]$.
Indeed, by Lemma \ref{lem: lambdasec}, the sets \begin{equation} \left( \bigtimes_N \{(k_A)_{A \in \textrm{out}(N)} \} \right) \times \left( \bigtimes_{N^\alpha \, | \, \alpha \neq \mu_N({\Vec{k}})} \texttt{Ind}^\textrm{out}_{N^\alpha} \right) \end{equation} are the preimage sets of the injective function $\Lambda^\textrm{sec}_\Gamma$, and therefore form a partition of its domain $\bigtimes_{N^\alpha} \texttt{Ind}^\textrm{out}_{N^\alpha}$. The sectorisation is thus obtained as a coarse-graining of that given by the $\bigotimes_{N^\alpha} \pi^{(k_A)_{A \in \textrm{out}(N)}}_{N^\alpha_\textrm{out}}$'s. Symmetrically, the $\left(\bigotimes_{N \in \texttt{Nodes}_\Gamma} \pi^{(k_A)_{A \in \textrm{in}(N)}}_{N^{\mu_N({\Vec{k}})}_\textrm{in}} \right)_\textrm{pad}$'s form a sectorisation of $\mathcal{S}_\textrm{pad} \left[(\textsc{exch}_N)_N \right]$'s codomain. Crucially, $\mathcal{S}_\textrm{pad} \left[(\textsc{exch}_N)_N \right]$ is block diagonal with respect to these two sectorisations: indeed, for a given ${\Vec{k}}$, \begin{equation} \label{eq: base block diagonal} \begin{split} &\mathcal{S}_\textrm{pad} \left[(\textsc{exch}_N)_N \right] \circ \left(\bigotimes_{N \in \texttt{Nodes}_\Gamma} \pi^{(k_A)_{A \in \textrm{out}(N)}}_{N^{\mu_N({\Vec{k}})}_\textrm{out}} \right)_\textrm{pad} \\ &\overset{\textrm{(\ref{eq: base case H1 2})}}{=} \mathcal{S}_\textrm{pad} \left[\left (\left( \bigotimes_{A \in \textrm{out}(N)} \pi^{k_A}_{A} \right)_\textrm{pad} \circ \textsc{exch}_N \right)_N \right] \\ &\overset{\textrm{(\ref{eq:skeletal supermap})}}{=} \mathcal{S}_\textrm{pad} \left[\left( \textsc{exch}_N \circ \left( \bigotimes_{A \in \textrm{in}(N)} \pi^{k_A}_{A} \right)_\textrm{pad} \right)_N \right] \\ &\overset{\textrm{(\ref{eq: base case H1 2 sym})}}{=} \left(\bigotimes_{N \in \texttt{Nodes}_\Gamma} \pi^{(k_A)_{A \in \textrm{in}(N)}}_{N^{\mu_N({\Vec{k}})}_\textrm{in}} \right)_\textrm{pad} \circ \mathcal{S}_\textrm{pad} \left[(\textsc{exch}_N)_N \right] \, . \\ \end{split} \end{equation} All that is left for us to prove is that all of these blocks, which we will denote as $T^{{\Vec{k}}}$'s, are unitary (with respect to the suitable restrictions of their domains and codomains). We start by computing \begin{equation} \begin{split} T^{\Vec{k}} &= \mathcal{S}_\textrm{pad} \left[\left (\left( \bigotimes_{A \in \textrm{out}(N)} \pi^{k_A}_{A} \right)_\textrm{pad} \circ \textsc{exch}_N \right)_N \right] \\ &\overset{\textrm{(\ref{eq: def exchange})}}{=} \mathcal{S}_\textrm{pad} \left[\left (\left( \bigotimes_{A \in \textrm{out}(N)} \pi^{k_A}_{A} \right)_\textrm{pad} \circ i^{\mu_N({\Vec{k}})}_{N,\textrm{pad}} \circ \left( \textsc{swap}_{N^{\mu_N({\Vec{k}})}_\textrm{in}, N^{\mu_N({\Vec{k}})}_\textrm{out}} \otimes \left( \bigotimes_{\beta \neq \mu_N({\Vec{k}})} \Theta_{N^\beta} \right) \right) \circ p^{\mu_N({\Vec{k}})}_{N,\textrm{pad}} \right)_N \right] \\ &\overset{\textrm{(\ref{eq:skeletal supermap})}}{=} \Tr_{A \in \texttt{Arr}^\textrm{int}_\Gamma} \left[ \bigotimes_N \left( \bigotimes_{A \in \textrm{out}(N)} \pi^{k_A}_{A} \right)_\textrm{pad} \circ i^{\mu_N({\Vec{k}})}_{N,\textrm{pad}} \circ \textsc{swap}_{N^{\mu_N({\Vec{k}})}_\textrm{in}, N^{\mu_N({\Vec{k}})}_\textrm{out}} \circ p^{\mu_N({\Vec{k}})}_{N,\textrm{pad}} \right] \otimes \left( \bigotimes_{M, \beta \neq \mu_M({\Vec{k}})} \Theta_{M^\beta} \right) \\ &= \left( \left( \bigotimes_N p_N^{\mu_N({\Vec{k}})} \right) \circ \left( \bigotimes_{A \in \texttt{Arr}_\Gamma} \pi^{k_A}_{A} \right) \circ \left( \bigotimes_N i_{N}^{\mu_N({\Vec{k}})} \right) \right) \otimes \left( \bigotimes_{M, \beta \neq \mu_M({\Vec{k}})}
\Theta_{M^\beta} \right) \,. \end{split} \end{equation} Remember that $i_N^{\mu_N({\Vec{k}})}$ is the injection $\mathcal{H}^\textrm{out}_{N^{\mu_N({\Vec{k}})}} \to \mathcal{H}^\textrm{out}_N = \bigotimes_{A \in \textrm{out}(N)} \mathcal{H}_A$, and $p_N^{\mu_N({\Vec{k}})}$ is the projection $\mathcal{H}^\textrm{in}_N = \bigotimes_{A \in \textrm{in}(N)} \mathcal{H}_A \to \mathcal{H}^\textrm{in}_{N^{\mu_N({\Vec{k}})}}$. We will also define the injection $i_N^{(k_A)_{A \in \textrm{out}(N)}}: \bigotimes_{A \in \textrm{out}(N)} \mathcal{H}_A^{k_A} \to \mathcal{H}^\textrm{out}_{N^{\mu_N({\Vec{k}})}}$ and the projection $p_N^{(k_A)_{A \in \textrm{in}(N)}}: \mathcal{H}^\textrm{in}_{N^{\mu_N({\Vec{k}})}} \to \bigotimes_{A \in \textrm{in}(N)} \mathcal{H}_A^{k_A}$: these map the $T^{\Vec{k}}$'s to the suitable domains and codomains. Note that we then have \begin{subequations} \begin{equation} \bigotimes_N i_N^{\mu_N({\Vec{k}})} \circ i_N^{(k_A)_{A \in \textrm{out}(N)}} = \bigotimes_A i_A^{k_A} \, , \end{equation} \begin{equation} \bigotimes_N p_N^{(k_A)_{A \in \textrm{in}(N)}} \circ p_N^{\mu_N({\Vec{k}})} = \bigotimes_A p_A^{k_A} \, , \end{equation} \end{subequations} where $i_A^{k_A}$ is the injection $\mathcal{H}_A^{k_A} \to \mathcal{H}_A$ and $p_A^{k_A}$ is the projection $\mathcal{H}_A \to \mathcal{H}_A^{k_A}$.
Thus, \begin{equation} \label{eq: Tk H1} \begin{split} &\left( \bigotimes_N p_N^{(k_A)_{A \in \textrm{in}(N)}} \right)_\textrm{pad} \circ T^{\Vec{k}} \circ \left( \bigotimes_N i_N^{(k_A)_{A \in \textrm{out}(N)}} \right)_\textrm{pad} \\ &= \left( \bigotimes_A p_A^{k_A} \circ \pi_A^{k_A} \circ i_A^{k_A} \right) \otimes \left( \bigotimes_{M, \beta \neq \mu_M({\Vec{k}})} \Theta_{M^\beta} \right)\\ &= \left( \bigotimes_A \mathbb{1}_{A^{k_A}} \right) \otimes \left( \bigotimes_{M, \beta \neq \mu_M({\Vec{k}})} \Theta_{M^\beta} \right) \,. \end{split} \end{equation} Each of the blocks composing $\mathcal{S}_\textrm{pad} \left[(\textsc{exch}_N)_N \right]$ is thus unitary once restricted to the suitable subspaces, so $\mathcal{S}_\textrm{pad} \left[(\textsc{exch}_N)_N \right]$ is unitary as well. \paragraph{H2} We define, for all branches $N^\alpha$, \begin{subequations} \begin{equation} \label{eq: Z base} Z^\textrm{out}(N^\alpha) := \Lambda_\Gamma^{-1} \left( \{1\}_{\texttt{Happens}_{N^\alpha}} \times \bigtimes_{M^\beta \neq N^\alpha} \texttt{Happens}_{M^\beta} \right) \, , \end{equation} \begin{equation} \label{eq: zeta base} \zeta^\textrm{out}(N^\alpha) := \sum_{(l_{M^\beta})_{M^\beta \in \texttt{Bran}_\Gamma} \in Z^\textrm{out}(N^\alpha)} \left( \bigotimes_{M^\beta} \pi^{l_{M^\beta}}_{M^\beta_\textrm{out}} \right) \, , \end{equation} \end{subequations} and similarly for the $Z^\textrm{in}$'s and $\zeta^\textrm{in}$'s.
Note that \begin{equation} \label{eq: Z from Lasec} Z^\textrm{out}(N^\alpha) = \bigsqcup_{\substack{{\Vec{k}} \in \texttt{PossVal}_\Gamma \\ \mu_N({\Vec{k}}) = \alpha}} \left( \Lambda^\textrm{sec}_\Gamma \right)^{-1} \left( {\Vec{k}} \right) \, . \end{equation} Given their definition, the $\zeta^\textrm{out}$'s are commuting orthogonal projectors. Furthermore, as green dashed arrows in $\Gamma^\texttt{Bran}$ represent $\Lambda_\Gamma$'s causal structure, we have, for any branch $N^\alpha$, \begin{equation} Z^\textrm{out}(N^\alpha) = \tilde{Z}^\textrm{out}(N^\alpha) \times \left( \bigtimes_{\exists \textrm{ no green dashed arrow } M^\beta \to N^\alpha} \texttt{Ind}^\textrm{out}_{M^\beta} \right) \, . \end{equation} Through (\ref{eq: zeta base}), this implies that $\zeta^\textrm{out}(N^\alpha)$ acts trivially on the $M^\beta$'s that are not linked to $N^\alpha$ by a green dashed arrow. We can thus in particular see it as the padding of an operator acting only on $\mathcal{P}(N^\alpha)$, or on $\mathcal{P}^\textrm{str}(N^\alpha)$ if $N^\alpha$ is in a red layer. The same applies symmetrically for the $\zeta^\textrm{in}$'s.
Finally, (\ref{eq: base block diagonal}) implies
\begin{equation} \label{eq: base zein/zeout} \begin{split}
\mathcal S_\textrm{\upshape pad} \left[(\textsc{exch}_N)_N \right] \circ \zetaout({N^\al}) &= \sum_{\substack{{\Vec{k}} \in \texttt{PossVal}_\Ga \\ \mu_N({\Vec{k}}) = \alpha}} \mathcal S_\textrm{\upshape pad} \left[(\textsc{exch}_N)_N \right] \circ \left(\bigotimes_{N \in \texttt{Nodes}_\Ga} \pi^{(k_A)_{A \in \textrm{\upshape out}(N)}}_{N^{\mu_N({\Vec{k}})}_\textrm{\upshape out}} \right)_\textrm{\upshape pad} \\
&= \sum_{\substack{{\Vec{k}} \in \texttt{PossVal}_\Ga \\ \mu_N({\Vec{k}}) = \alpha}} \left(\bigotimes_{N \in \texttt{Nodes}_\Ga} \pi^{(k_A)_{A \in \textrm{\upshape in}(N)}}_{N^{\mu_N({\Vec{k}})}_\textrm{\upshape in}} \right)_\textrm{\upshape pad} \circ \mathcal S_\textrm{\upshape pad} \left[(\textsc{exch}_N)_N \right] \\
&= \zetain({N^\al}) \circ \mathcal S_\textrm{\upshape pad} \left[(\textsc{exch}_N)_N \right] \, ; \\
\end{split}\end{equation}
thus, $\mathcal V_0[\zetaout({N^\al})] = \zetain({N^\al})$.
\paragraph{H3} We will prove that $Z^\textrm{\upshape out}({N^\al}) \cap \bar{Z}^\textrm{\upshape out}({M^\bet})$ is of the form $\tilde{Z} \times \texttt{Ind}^\textrm{\upshape out}_{M^\bet}$, from which (H3) follows. This set can be computed, using bra-ket notation in $\cat{Rel}$, as $\left( \bra{1}_{\texttt{Happens}_{N^\al}} \otimes \bra{0}_{\texttt{Happens}_{M^\bet}} \right)_\textrm{\upshape pad} \circ \Lambda_\Gamma$.
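For readers less used to computations in $\cat{Rel}$, the following reminder (ours, in hedged form, with generic sets $X$ and $Y$) spells out the bra-ket notation being used: relations are $\{0,1\}$-matrices over the Boolean semiring, and composing with a costate computes a preimage.

```latex
% A relation R \subseteq X \times Y is the Boolean matrix with entries
% R_{yx} = 1 iff (x,y) \in R; a subset S \subseteq X corresponds to the
% vector \ket{S}. Composition is matrix product over (or, and), so that
\bra{y} \circ R = \{\, x \in X \mid (x,y) \in R \,\} \, .
% The partial application of \Lambda_\Gamma above is of this form: it
% extracts the set of index assignments compatible with the fixed
% happens/doesn't-happen outcomes for the two branches.
```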
Yet, one can see from the definition of the $\lambda^\textrm{\upshape aug}$'s that
\begin{equation} \bra{0}_{\texttt{Happens}_{M^\bet}, \textrm{\upshape pad}} \circ \lambda_M^\textrm{\upshape aug} = \bra{0}_{\texttt{Happens}_{M^\bet}, \textrm{\upshape pad}} \circ \lambda_M^\textrm{\upshape aug} \circ \ketbra{\texttt{Ind}^\textrm{\upshape out}_{M^\bet}}{\texttt{Ind}^\textrm{\upshape out}_{M^\bet}}_{\texttt{Ind}^\textrm{\upshape out}_{M^\bet}, \textrm{\upshape pad}} \, ; \end{equation}
thus,
\begin{equation} \begin{split}
&\left( \bra{1}_{\texttt{Happens}_{N^\al}} \otimes \bra{0}_{\texttt{Happens}_{M^\bet}} \right)_\textrm{\upshape pad} \circ \Lambda_\Gamma \\
&= \left( \bra{1}_{\texttt{Happens}_{N^\al}} \otimes \bra{0}_{\texttt{Happens}_{M^\bet}} \right)_\textrm{\upshape pad} \circ \cs^\Rel_{\Ga}\left[(\lambda_N^\textrm{\upshape aug})_N \right] \\
&= \left( \bra{1}_{\texttt{Happens}_{N^\al}} \otimes \bra{0}_{\texttt{Happens}_{M^\bet}} \right)_\textrm{\upshape pad} \circ \cs^\Rel_{\Ga}\left[(\lambda_N^\textrm{\upshape aug})_{N\neq M} \times \left(\lambda_M^\textrm{\upshape aug} \circ \ketbra{\texttt{Ind}^\textrm{\upshape out}_{M^\bet}}{\texttt{Ind}^\textrm{\upshape out}_{M^\bet}}_{\texttt{Ind}^\textrm{\upshape out}_{M^\bet}, \textrm{\upshape pad}} \right) \right] \\
&= \left( \left( \bra{1}_{\texttt{Happens}_{N^\al}} \otimes \bra{0}_{\texttt{Happens}_{M^\bet}} \right)_\textrm{\upshape pad} \circ \cs^\Rel_{\Ga}\left[(\lambda_N^\textrm{\upshape aug})_N \right] \circ \ket{\texttt{Ind}^\textrm{\upshape out}_{M^\bet}}_{\texttt{Ind}^\textrm{\upshape out}_{M^\bet}} \right) \otimes \bra{\texttt{Ind}^\textrm{\upshape out}_{M^\bet}}_{\texttt{Ind}^\textrm{\upshape out}_{M^\bet}} \, ,
\end{split} \end{equation}
which shows that indeed $Z^\textrm{\upshape out}({N^\al}) \cap \bar{Z}^\textrm{\upshape out}({M^\bet}) = \tilde{Z} \times \texttt{Ind}^\textrm{\upshape out}_{M^\bet}$. The proof for the $Z^\textrm{\upshape in}$'s is symmetric.
\paragraph{H4} (H4) comes from the fact that, for $\alpha \neq \beta$, one has $Z^\textrm{\upshape out}({N^\al}) \cap Z^\textrm{\upshape out}({N^\bet}) = \emptyset$, which can be derived directly from (\ref{eq: Z from Lasec}).
\paragraph{H5} We take $Q = \{ N^{\alpha(N)} \, | \, N \in \Tilde{Q}\} \subseteq \texttt{Bran}_\Ga$. Then,
\begin{equation} \begin{split}
&\mathcal S_\textrm{\upshape pad} \left[(\textsc{exch}_N)_N \right] \circ \prod_{N \in \tilde{Q}} \zetaout(N^{\alpha(N)}) \\
&\overset{(\ref{eq: Z from Lasec})}{=} \sum_{\substack{{\Vec{k}} \in \texttt{PossVal}_\Ga \\ \forall N \in \tilde{Q},\, \mu_N({\Vec{k}}) = \alpha(N)}} \mathcal S_\textrm{\upshape pad} \left[(\textsc{exch}_N)_N \right] \circ \left(\bigotimes_{N \in \texttt{Nodes}_\Ga} \pi^{(k_A)_{A \in \textrm{\upshape out}(N)}}_{N^{\mu_N({\Vec{k}})}_\textrm{\upshape out}} \right)_\textrm{\upshape pad} \\
&\overset{(\ref{eq: base case H1 2})}{=} \sum_{\substack{{\Vec{k}} \in \texttt{PossVal}_\Ga \\ \forall N \in \tilde{Q},\, \mu_N({\Vec{k}}) = \alpha(N)}} \mathcal S_\textrm{\upshape pad} \left[\left( \left( \bigotimes_{A \in \textrm{\upshape out}(N)} \pi^{k_A}_{A} \right)_\textrm{\upshape pad} \circ \textsc{exch}_N \right)_N \right] \\
&= \mathcal S_\textrm{\upshape pad} \left[ \left( \textsc{exch}_N \right)_{N \not\in \tilde{Q}} \times \left( \sum_{(k_A)_{A \in \textrm{\upshape out}(N)} \in \texttt{Ind}^\textrm{\upshape out}_{N^{\alpha(N)}}} \left( \bigotimes_{A \in \textrm{\upshape out}(N)} \pi^{k_A}_{A} \right)_\textrm{\upshape pad} \circ \textsc{exch}_N \right)_{N \in \tilde{Q}} \right] \\
&= \mathcal S_\textrm{\upshape pad} \left[ \left( \textsc{exch}_N \right)_{N \not\in \tilde{Q}} \times \left( \pi^{\alpha(N)}_{N^\textrm{\upshape out}} \circ \textsc{exch}_N \right)_{N \in \tilde{Q}} \right] \\
&\overset{(\ref{eq: def exchange})}{=} \mathcal S_\textrm{\upshape pad} \left[ \left( \textsc{exch}_N \right)_{N \not\in \tilde{Q}} \times \left( V_{0,N}^{\alpha(N)} \right)_{N \in \tilde{Q}} \right] \,.
\end{split} \end{equation}
\paragraph{H6} We take ${N^\al}$ in a green layer, and $f \in \Lin \left[ \mathcal H_{N^\al_\textrm{\upshape in}} \right]$. We then have (note that $\zetain({N^\al})$ doesn't act on $N^\al_\textrm{\upshape in}$)
\begin{equation} \begin{split}
&\mathcal V_{0}^\dagger \left[ f_\textrm{\upshape pad} \right] \circ \zetapadout({N^\al}) = \mathcal S_\textrm{\upshape pad} \left[(\textsc{exch}_N)_N \right]^\dagger \circ f_\textrm{\upshape pad} \circ \mathcal S_\textrm{\upshape pad} \left[(\textsc{exch}_N)_N \right] \circ \zetapadout({N^\al}) \\
&\overset{(\ref{eq: base zein/zeout})}{=} \mathcal S_\textrm{\upshape pad} \left[(\textsc{exch}_N)_N \right]^\dagger \circ f_\textrm{\upshape pad} \circ \zetapadin({N^\al}) \circ \mathcal S_\textrm{\upshape pad} \left[(\textsc{exch}_N)_N \right] \circ \zetapadout({N^\al}) \\
&= \mathcal S_\textrm{\upshape pad} \left[(\textsc{exch}_N)_N \right]^\dagger \circ \zetapadin({N^\al}) \circ f_\textrm{\upshape pad} \circ \mathcal S_\textrm{\upshape pad} \left[(\textsc{exch}_N)_N \right] \circ \zetapadout({N^\al}) \\
&\overset{(\ref{eq: base zein/zeout})}{=} \zetapadout({N^\al}) \circ \mathcal S_\textrm{\upshape pad} \left[(\textsc{exch}_N)_N \right]^\dagger \circ f_\textrm{\upshape pad} \circ \mathcal S_\textrm{\upshape pad} \left[(\textsc{exch}_N)_N \right] \circ \zetapadout({N^\al}) \\
&\overset{(\ref{eq: zeta base}),\, (\ref{eq: Z from Lasec})}{=} \sum_{\substack{{\Vec{k}}, {\Vec{l}} \in \texttt{PossVal}_\Ga \\ \mu_N({\Vec{k}}) = \mu_N({\Vec{l}}) = \alpha}} \left( T^{\Vec{l}} \, \right)^\dagger \circ f_\textrm{\upshape pad} \circ T^{\Vec{k}}\, .
\end{split} \end{equation}
Furthermore, taking ${M^\bet} \not\in {\mathcal P^\str}({N^\al})$: because ${N^\al}$ is in a green layer, we know that there is no red dashed arrow from ${M^\bet}$ to ${N^\al}$, and thus $\zetain({M^\bet})$ doesn't act on $N^\al_\textrm{\upshape in}$. We can thus apply the same computation to it as well, which leads to
\begin{equation} \begin{split}
&\mathcal V_{0}^\dagger \left[ f_\textrm{\upshape pad} \right] \circ \zetapadout({N^\al}) = \mathcal V_{0}^\dagger \left[ f_\textrm{\upshape pad} \right] \circ \zetapadout({N^\al}) \circ \left( \zetapadout({M^\bet}) + {\Bar{\ze}_{\pad}^\out} ({M^\bet}) \right) \\
&= \zetapadout({N^\al}) \circ \zetapadout({M^\bet}) \circ \mathcal V_{0}^\dagger \left[ f_\textrm{\upshape pad} \right] \circ \zetapadout({N^\al}) \circ \zetapadout({M^\bet})\\
&+ \zetapadout({N^\al}) \circ {\Bar{\ze}_{\pad}^\out}({M^\bet}) \circ \mathcal V_{0}^\dagger \left[ f_\textrm{\upshape pad} \right] \circ \zetapadout({N^\al}) \circ {\Bar{\ze}_{\pad}^\out}({M^\bet})\\
&= \sum_{\substack{{\Vec{k}}, {\Vec{l}} \in \texttt{PossVal}_\Ga \\ \mu_N({\Vec{k}}) = \mu_N({\Vec{l}}) = \alpha \\ \mu_M({\Vec{k}}) = \beta \iff \mu_M({\Vec{l}}) = \beta}} \left( T^{\Vec{l}} \, \right)^\dagger \circ f_\textrm{\upshape pad} \circ T^{\Vec{k}}\, ;
\end{split} \end{equation}
in other words, in the sum above, the values of ${\Vec{k}}$ and ${\Vec{l}}$ that lead to attributing different statuses to ${M^\bet}$ correspond to null terms, so that one can skip them in the summation.
More generally, one can apply this reasoning to all branches ${M^\bet} \not\in {\mathcal P^\str}({N^\al})$, leading to
\begin{equation} \label{eq: base H6 1} \mathcal V_{0}^\dagger \left[ f_\textrm{\upshape pad} \right] \circ \zetapadout({N^\al}) = \sum_{\substack{{\Vec{k}}, {\Vec{l}} \in \texttt{PossVal}_\Ga \\ \mu_N({\Vec{k}}) = \mu_N({\Vec{l}}) = \alpha \\ \forall {M^\bet} \not\in {\mathcal P^\str}({N^\al}),\, \mu_M({\Vec{k}}) = \beta \iff \mu_M({\Vec{l}}) = \beta}} \left( T^{\Vec{l}} \, \right)^\dagger \circ f_\textrm{\upshape pad} \circ T^{\Vec{k}}\, . \end{equation}
Using (\ref{eq: Tk H1}), we rewrite $T^{\Vec{k}}$, for an arbitrary ${\Vec{k}}$, as
\begin{equation} \begin{split} T^{\Vec{k}} &= \left( \bigotimes_M {p_M^{(k_A)_{A \in \textrm{\upshape in}(M)}}} \right)^\dagger_\textrm{\upshape pad} \circ \left( \left( \bigotimes_A \mathbb{1}_{A^{k_A}} \right) \otimes \left( \bigotimes_{M,\, \beta \neq \mu_M({\Vec{k}})} \Theta_{M^\bet} \right)\right) \circ \left( \bigotimes_M {i_M^{(k_A)_{A \in \textrm{\upshape out}(M)}}} \right)^\dagger_\textrm{\upshape pad} \, .
\end{split} \end{equation}
Now, we take ${\Vec{k}}, {\Vec{l}} \in \texttt{PossVal}_\Ga$ satisfying the requirements we pinned down earlier; we can then compute
\begin{equation} \begin{split}
&{T^{\Vec{l}}}^\dagger \circ f_\textrm{\upshape pad} \circ T^{\Vec{k}} \\
&= \left( \bigotimes_{M} i_M^{(l_A)_{A \in \textrm{\upshape out}(M)}} \right)_\textrm{\upshape pad} \circ \left( \left( \bigotimes_A \mathbb{1}_{A^{l_A}} \right) \otimes \left( \bigotimes_{M,\, \beta \neq \mu_M({\Vec{l}})} \Theta_{M^\bet}^\dagger \right)\right) \\
&\circ \left( \bigotimes_M {p_M^{(l_A)_{A \in \textrm{\upshape in}(M)}}} \right)_\textrm{\upshape pad} \circ f_\textrm{\upshape pad} \circ \left( \bigotimes_M {p_M^{(k_A)_{A \in \textrm{\upshape in}(M)}}} \right)^\dagger_\textrm{\upshape pad} \\
&\circ \left( \left( \bigotimes_A \mathbb{1}_{A^{{k_A}}} \right) \otimes \left( \bigotimes_{M,\, \beta \neq \mu_M({\Vec{k}})} \Theta_{M^\bet} \right)\right) \circ \left( \bigotimes_M {i_M^{(k_A)_{A \in \textrm{\upshape out}(M)}}} \right)^\dagger_\textrm{\upshape pad} \\
&= \left( \bigotimes_{M} i_M^{(l_A)_{A \in \textrm{\upshape out}(M)}} \right)_\textrm{\upshape pad} \circ \left[ \left( p_N^{(l_A)_{A \in \textrm{\upshape in}(N)}} \circ f \circ \left( p_N^{(k_A)_{A \in \textrm{\upshape in}(N)}} \right)^\dagger \right) \right. \\
&\otimes \left( \bigotimes_{\substack{M \neq N \\ \mu_M({\Vec{l}}) \neq \mu_M({\Vec{k}})}} \left( \Theta_{M^{\mu_M({\Vec{k}})}}^\dagger \circ \left( p_M^{(k_A)_{A \in \textrm{\upshape in}(M)}} \right)^\dagger \right) \otimes \left( p_M^{(l_A)_{A \in \textrm{\upshape in}(M)}} \circ \Theta_{M^{\mu_M({\Vec{l}})}} \right) \right) \\
&\otimes \left. \left( \bigotimes_{\substack{M \neq N \\ \mu_M({\Vec{l}}) = \mu_M({\Vec{k}})}} {p_M^{(l_A)_{A \in \textrm{\upshape in}(M)}}} \circ \left( p_M^{(k_A)_{A \in \textrm{\upshape in}(M)}} \right)^\dagger \right) \right]_\textrm{\upshape pad} \circ \left( \bigotimes_M {i_M^{(k_A)_{A \in \textrm{\upshape out}(M)}}} \right)^\dagger_\textrm{\upshape pad} \, .
\end{split} \end{equation}
Note that each of the ${p_M^{(l_A)_{A \in \textrm{\upshape in}(M)}}} \circ \left(p_M^{(k_A)_{A \in \textrm{\upshape in}(M)}} \right)^\dagger$ terms, for $M$ such that $M^{\mu_M({\Vec{k}})} \not\in {\mathcal P^\str}({N^\al}) \cup \{{N^\al}\}$, can be rewritten as $\bigotimes_{A \in \textrm{\upshape in}(M)} p_A^{l_A} \circ i_A^{k_A}$ (which is the identity if $k_A = l_A$ for all $A \in \textrm{\upshape in}(M)$, and zero otherwise). Now, for any $M$ such that $M^{\mu_M({\Vec{k}})} \not\in {\mathcal P^\str}({N^\al})$, and for any $O$ such that $O^{\mu_O({\Vec{k}})} \in {\mathcal P^\str}({N^\al}) \cup \{ {N^\al} \}$, there is no arrow $A \in \texttt{Link}(M,O)$ such that $\abs{A^{k_A}} \neq 1$, as that would imply the existence of a solid arrow from $M^{\mu_M({\Vec{k}})}$ to $O^{\mu_O({\Vec{k}})}$, which would contradict $M^{\mu_M({\Vec{k}})} \not\in {\mathcal P^\str}({N^\al})$. All of the non-trivial arrows in $\textrm{\upshape out}(M)$ thus go to $O$'s such that $O \neq N$ and $\mu_O({\Vec{l}}) = \mu_O({\Vec{k}})$. This implies that, if one doesn't have $k_A = l_A$ for all $A \in \textrm{\upshape out}(M)$, then the whole expression is null; otherwise, the term in square brackets acts trivially on each of the $A \in \textrm{\upshape out}(M)$ -- in other words, the arrows coming out of $M$ are never acted on and simply link $i_M^{(k_A)_{A \in \textrm{\upshape out}(M)}}{}^\dagger$ directly to $i_M^{(l_A)_{A \in \textrm{\upshape out}(M)}}$.
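The reduction just performed rests on elementary identities for the projection/injection pair of a direct-sum decomposition; we record them in hedged form (with $\mathcal H_A = \bigoplus_k \mathcal H_{A^k}$, $i_A^k$ the isometric injection and $p_A^k = (i_A^k)^\dagger$, matching the usage above):

```latex
% The injections have mutually orthogonal images, so
p_A^{l_A} \circ i_A^{k_A} = \delta_{k_A, l_A} \, \mathbb{1}_{A^{k_A}} \, ,
% and consequently, factorwise over the in-arrows of a node M,
p_M^{(l_A)_{A \in \textrm{\upshape in}(M)}} \circ
  \left( p_M^{(k_A)_{A \in \textrm{\upshape in}(M)}} \right)^\dagger
  = \bigotimes_{A \in \textrm{\upshape in}(M)} p_A^{l_A} \circ i_A^{k_A} \, ,
% which is an identity when k_A = l_A for all A, and zero otherwise.
```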
One can thus reorganise this expression (neglecting the existence of all the trivial spaces) as
\begin{equation} \label{eq: comp H6 base}\begin{split}
&{T^{\Vec{l}}}^\dagger \circ f_\textrm{\upshape pad} \circ T^{\Vec{k}} = \left[ \left( \bigotimes_{\substack{M \\ M^{\mu_M({\Vec{l}})} \in {\mathcal P^\str}({N^\al})}} i_M^{(l_A)_{A \in \textrm{\upshape out}(M)}} \right)_\textrm{\upshape pad} \circ \left[ \vphantom{\bigotimes_{\substack{a \\ b \\ c}}} \left( p_N^{(l_A)_{A \in \textrm{\upshape in}(N)}} \circ f \circ \left( p_N^{(k_A)_{A \in \textrm{\upshape in}(N)}} \right)^\dagger \right) \right. \right. \\
&\otimes \left( \bigotimes_{\substack{M \\ M^{\mu_M({\Vec{l}})}, M^{\mu_M({\Vec{k}})} \in {\mathcal P^\str}({N^\al}) \\ \mu_M({\Vec{l}}) \neq \mu_M({\Vec{k}})}} \left( \Theta_{M^{\mu_M({\Vec{k}})}}^\dagger \circ \left( p_M^{(k_A)_{A \in \textrm{\upshape in}(M)}} \right)^\dagger \right) \otimes \left( p_M^{(l_A)_{A \in \textrm{\upshape in}(M)}} \circ \Theta_{M^{\mu_M({\Vec{l}})}} \right) \right) \\
&\otimes \left. \left( \bigotimes_{\substack{M \\ M^{\mu_M({\Vec{l}})}, M^{\mu_M({\Vec{k}})} \in {\mathcal P^\str}({N^\al}) \\ \mu_M({\Vec{l}}) = \mu_M({\Vec{k}})}} \bigotimes_{A \in \textrm{\upshape in}(M)} p_A^{l_A} \circ i_A^{k_A} \right) \otimes \left( \bigotimes_{\substack{O \\ O^{\mu_O({\Vec{k}})} \not\in {\mathcal P^\str}({N^\al})}} \bigotimes_{\substack{M \\ M^{\mu_M({\Vec{k}})} \in {\mathcal P^\str}({N^\al})}} \bigotimes_{A \in \texttt{Link}(M,O)} p_A^{l_A} \circ i_A^{k_A} \right) \right] \\
&\circ \left. \left( \bigotimes_{\substack{M \\ M^{\mu_M({\Vec{k}})} \in {\mathcal P^\str}({N^\al})}} {i_M^{(k_A)_{A \in \textrm{\upshape out}(M)}}} \right)^\dagger_\textrm{\upshape pad} \right]_\textrm{\upshape pad} \otimes \left( \left( \bigotimes_{\substack{M \\ M^{\mu_M({\Vec{k}})} \not\in {\mathcal P^\str}({N^\al})}} \pi^{({k_A})_{A \in \textrm{\upshape out}(M)}}_{M^{\mu_M({\Vec{k}})}_\textrm{\upshape out}} \right) \otimes \left( \bigotimes_{\substack{M,\, \beta \neq \mu_M({\Vec{k}}) \\ {M^\bet} \not\in {\mathcal P^\str}({N^\al})}} \mathbb{1}_{M^\bet_\textrm{\upshape out}} \right)\right) \, ;
\end{split} \end{equation}
note how the action on the $M^\bet_\textrm{\upshape out}$'s for ${M^\bet} \not\in {\mathcal P^\str}({N^\al})$ now only consists of a projector independent of $f$. Note that in the second bracket of the third line, the condition $M^{\mu_M({\Vec{k}})} \in {\mathcal P^\str}({N^\al})$ could equivalently have been replaced with $M^{\mu_M({\Vec{l}})} \in {\mathcal P^\str}({N^\al})$, because we know that $M^{\mu_M({\Vec{k}})} \not\in {\mathcal P^\str}({N^\al}) \implies M^{\mu_M({\Vec{l}})} = M^{\mu_M({\Vec{k}})} \not\in {\mathcal P^\str}({N^\al})$, and conversely; so that we have the equivalence $M^{\mu_M({\Vec{k}})} \not\in {\mathcal P^\str}({N^\al}) \iff M^{\mu_M({\Vec{l}})} \not\in {\mathcal P^\str}({N^\al})$.
Both bracketed terms in the third line can be rewritten simply as Kronecker deltas, of the form
\begin{equation} \left( \prod_{\substack{M \\ M^{\mu_M({\Vec{l}})}, M^{\mu_M({\Vec{k}})} \in {\mathcal P^\str}({N^\al}) \\ \mu_M({\Vec{l}}) = \mu_M({\Vec{k}})}} \prod_{A \in \textrm{\upshape in}(M)} \delta_{k_A, l_A} \right) \cdot \left( \prod_{\substack{O \\ O^{\mu_O({\Vec{k}})} \not\in {\mathcal P^\str}({N^\al})}} \prod_{\substack{M \\ M^{\mu_M({\Vec{k}})} \in {\mathcal P^\str}({N^\al})}} \prod_{A \in \texttt{Link}(M,O)} \delta_{k_A, l_A} \right) \, .\end{equation}
We now take $\vec{q} = (q^{O^\ga})_{{O^\ga} \in \texttt{Bran}_\Ga}, \vec{r} = (r^{O^\ga})_{{O^\ga} \in \texttt{Bran}_\Ga} \in \bigtimes_{{O^\ga} \in \texttt{Bran}_\Ga} \texttt{Ind}^\textrm{\upshape out}_{O^\ga}$. We then have
\begin{equation} \label{eq: comp H6 base 2}\begin{split}
&\left( \bigotimes_{{O^\ga} \in \texttt{Bran}_\Ga} \pi^{r^{O^\ga}}_{O^\ga_\textrm{\upshape out}} \right) \circ \mathcal V_0^\dagger[f_\textrm{\upshape pad}] \circ \zetapadout({N^\al}) \circ \left( \bigotimes_{{O^\ga} \in \texttt{Bran}_\Ga} \pi^{q^{O^\ga}}_{O^\ga_\textrm{\upshape out}} \right) \\
&= \left( \bigotimes_{{O^\ga} \in \texttt{Bran}_\Ga} \pi^{r^{O^\ga}}_{O^\ga_\textrm{\upshape out}} \right) \circ \mathcal S_\textrm{\upshape pad} \left[(\textsc{exch}_N)_N \right]^\dagger \circ f_\textrm{\upshape pad} \circ \mathcal S_\textrm{\upshape pad} \left[(\textsc{exch}_N)_N \right] \circ \left( \bigotimes_{{O^\ga} \in \texttt{Bran}_\Ga} \pi^{q^{O^\ga}}_{O^\ga_\textrm{\upshape out}} \right) \circ \zetapadout({N^\al}) \\
&= \left( \bigotimes_{{O^\ga} \in \texttt{Bran}_\Ga} \pi^{r^{O^\ga}}_{O^\ga_\textrm{\upshape out}} \right) \circ \left( T^{\Lambda^\textrm{\upshape sec}_\Gamma \left(\vec{r} \right)} \right)^\dagger \circ f_\textrm{\upshape pad} \circ T^{\Lambda_\Gamma^\textrm{\upshape sec} \left( \vec{q} \right)} \circ \left( \bigotimes_{{O^\ga} \in \texttt{Bran}_\Ga} \pi^{q^{O^\ga}}_{O^\ga_\textrm{\upshape out}} \right) \circ \zetapadout({N^\al}) \, .
\end{split} \end{equation}
Note that $\mu_M \circ \Lambda_\Gamma^\textrm{\upshape sec} \left( \vec{q} \right)$ denotes the only branch $\beta$ of $M$ such that the ${M^\bet}$ term of $\Lambda_\Gamma \left( \vec{q} \right)$, which we denote $\Lambda^{M^\bet}_\Gamma \left( \vec{q} \right)$, is $1$. By the previous considerations, the term above can thus be non-null only if $\Lambda^{N^\al}_\Gamma \left( \vec{q} \right) = \Lambda^{N^\al}_\Gamma \left( \vec{r} \right) = 1$ and if $\Lambda^{M^\bet}_\Gamma \left( \vec{q} \right) = \Lambda^{M^\bet}_\Gamma \left( \vec{r} \right)$ for all ${M^\bet} \not\in {\mathcal P^\str}({N^\al})$. Furthermore, when this is the case, we can use (\ref{eq: comp H6 base}) and get
\begin{equation} \label{eq: comp H6 base 3}\begin{split}
&\left( \bigotimes_{{O^\ga} \in \texttt{Bran}_\Ga} \pi^{r^{O^\ga}}_{O^\ga_\textrm{\upshape out}} \right) \circ \mathcal V_0^\dagger[f_\textrm{\upshape pad}] \circ \zetapadout({N^\al}) \circ \left( \bigotimes_{{O^\ga} \in \texttt{Bran}_\Ga} \pi^{q^{O^\ga}}_{O^\ga_\textrm{\upshape out}} \right) \\
&= F^{(r^{O^\ga})_{{O^\ga} \in {\mathcal P^\str}({N^\al})}, (q^{O^\ga})_{{O^\ga} \in {\mathcal P^\str}({N^\al})}} \otimes \left(\bigotimes_{{M^\bet} \not\in {\mathcal P^\str}({N^\al})} \pi^{q^{M^\bet}}_{M^\bet_\textrm{\upshape out}} \right) \cdot \prod_{{O^\ga} \not\in {\mathcal P^\str}({N^\al})} \delta_{q^{O^\ga}, r^{O^\ga}}\, ,
\end{split} \end{equation}
where
\begin{equation} \label{eq: F^rq} \begin{split}
&F^{(r^{O^\ga})_{{O^\ga} \in {\mathcal P^\str}({N^\al})}, (q^{O^\ga})_{{O^\ga} \in {\mathcal P^\str}({N^\al})}} \\
&= \left( \bigotimes_{{O^\ga} \in {\mathcal P^\str}({N^\al})} \pi^{r^{O^\ga}}_{O^\ga_\textrm{\upshape out}} \right) \circ \left( \bigotimes_{\substack{{M^\bet} \in {\mathcal P^\str}({N^\al}) \\ \Lambda^{M^\bet}_\Gamma \left( \vec{r} \right) =1}} i_M^{(l_A)_{A \in \textrm{\upshape out}(M)}} \right)_\textrm{\upshape pad} \circ \left[ \vphantom{\bigotimes_{\substack{a \\ b \\ c}}} \left( p_N^{(l_A)_{A \in \textrm{\upshape in}(N)}} \circ f \circ \left( p_N^{(k_A)_{A \in \textrm{\upshape in}(N)}} \right)^\dagger \right) \right. \\
&\otimes \left. \left( \bigotimes_{\substack{M \textrm{ s.t.\ } \exists \beta, \beta': \\ {M^\bet}, M^{\beta'} \in {\mathcal P^\str}({N^\al}) \\ \Lambda^{M^\bet}_\Gamma \left( \vec{q} \right) = \Lambda^{M^{\beta'}}_\Gamma \left( \vec{r} \right) = 1}} \left( \Theta_{{M^\bet}}^\dagger \circ \left( p_M^{(k_A)_{A \in \textrm{\upshape in}(M)}} \right)^\dagger \right) \otimes \left( p_M^{(l_A)_{A \in \textrm{\upshape in}(M)}} \circ \Theta_{M^{\beta'}} \right) \right) \right] \\
&\circ \left( \bigotimes_{\substack{{M^\bet} \in {\mathcal P^\str}({N^\al}) \\ \Lambda^{M^\bet}_\Gamma \left( \vec{q} \right) =1}} {i_M^{(k_A)_{A \in \textrm{\upshape out}(M)}}} \right)^\dagger_\textrm{\upshape pad} \circ \left( \bigotimes_{{O^\ga} \in {\mathcal P^\str}({N^\al})} \pi^{q^{O^\ga}}_{O^\ga_\textrm{\upshape out}} \right) \cdot \left( \prod_{\substack{{M^\bet} \in {\mathcal P^\str}({N^\al}) \\ \Lambda^{M^\bet}_\Gamma \left( \vec{r} \right) = \Lambda^{M^\bet}_\Gamma \left( \vec{q} \right) = 1}} \prod_{A \in \textrm{\upshape in}(M)} \delta_{k_A, l_A} \right) \\
&\cdot \left( \prod_{\substack{O \\ \sum_{{O^\ga} \in {\mathcal P^\str}({N^\al})} \Lambda^{O^\ga}_\Gamma \left( \vec{q} \right) = 0}} \quad \prod_{\substack{M \\ \sum_{{M^\bet} \in {\mathcal P^\str}({N^\al})} \Lambda^{M^\bet}_\Gamma \left( \vec{q} \right) = 1}} \quad \prod_{A \in \texttt{Link}(M,O)} \delta_{k_A, l_A} \right)\, ,
\end{split}\end{equation}
with ${\Vec{k}} := \Lambda_\Gamma^\textrm{\upshape sec}(\vec{q})$, ${\Vec{l}} := \Lambda_\Gamma^\textrm{\upshape sec}(\vec{r})$. Note how $F^{(r^{O^\ga})_{{O^\ga} \in {\mathcal P^\str}({N^\al})}, (q^{O^\ga})_{{O^\ga} \in {\mathcal P^\str}({N^\al})}}$, which comes from the term in square brackets in (\ref{eq: comp H6 base}) and acts only on $P$ and on ${\mathcal P^\str}({N^\al})$, doesn't depend on the value of $(q^{O^\ga})_{{O^\ga} \not\in {\mathcal P^\str}({N^\al})}$: indeed, the values of $\Lambda^{M^\bet}_\Gamma \left( \vec{q} \right)$ and of $\Lambda^{M^\bet}_\Gamma \left( \vec{r} \right)$ don't depend on the values of the $q^{O^\ga}$ for ${O^\ga} \not\in {\mathcal P^\str}({N^\al})$, as there are no green dashed arrows from these ${O^\ga}$'s to any ${M^\bet} \in {\mathcal P^\str}({N^\al})$. Similarly, the relevant values of the $k_A$'s are those such that $A \in \textrm{\upshape out}(M)$ for some ${M^\bet} \in {\mathcal P^\str}({N^\al})$ such that $\Lambda^{M^\bet}_\Gamma \left( \vec{q} \right) = 1$; they are thus just equal to the $A$-values of $q^{M^\bet}$. The same goes for the $l_A$'s. So far, we have proven that (\ref{eq: comp H6 base 3}) holds for $\vec{q}$ and $\vec{r}$ satisfying $\Lambda^{N^\al}_\Gamma \left( \vec{q} \right) = \Lambda^{N^\al}_\Gamma \left( \vec{r} \right) = 1$ and $\Lambda^{M^\bet}_\Gamma \left( \vec{q} \right) = \Lambda^{M^\bet}_\Gamma \left( \vec{r} \right)$ for all ${M^\bet} \not\in {\mathcal P^\str}({N^\al})$. We now want to prove that the same holds when the latter condition is not satisfied -- in other words, that in this case the RHS is also null.
We will thus prove that if the condition $\Lambda^{M^\bet}_\Gamma \left( \vec{q} \right) = \Lambda^{M^\bet}_\Gamma \left( \vec{r} \right)$ for all ${M^\bet} \not\in {\mathcal P^\str}({N^\al})$ does not hold, then (\ref{eq: F^rq}) is null, and thus the RHS in (\ref{eq: comp H6 base 3}) is null as well. We suppose (\ref{eq: F^rq}) is not null, and take ${M^\bet} \not\in {\mathcal P^\str}({N^\al})$ such that $\Lambda^{M^\bet}_\Gamma \left( \vec{q} \right) = 1$. Taking $A \in \textrm{\upshape out}(M)$, and denoting $O := \texttt{head}(A)$ and $\gamma$ the branch such that $\Lambda^{O^\ga}_\Gamma \left( \vec{q} \right) = 1$, we have: either ${O^\ga} \in {\mathcal P^\str}({N^\al})$, in which case $k_A = l_A$ by the penultimate term in (\ref{eq: F^rq}); or ${O^\ga} \not\in {\mathcal P^\str}({N^\al})$, in which case $k_A = l_A$ by the last term in (\ref{eq: F^rq}). We thus have $k_A = l_A$ for all $A \in \textrm{\upshape out}(M)$, and therefore $\Lambda^{M^\bet}_\Gamma \left( \vec{r} \right) = 1$. Symmetrically, $\Lambda^{M^\bet}_\Gamma \left( \vec{r} \right) = 1$ implies $\Lambda^{M^\bet}_\Gamma \left( \vec{q} \right) = 1$, so we indeed get $\Lambda^{M^\bet}_\Gamma \left( \vec{r} \right) = \Lambda^{M^\bet}_\Gamma \left( \vec{q} \right)$. Therefore, (\ref{eq: comp H6 base 3}) holds for any $\vec{q}$ satisfying $\Lambda^{N^\al}_\Gamma \left( \vec{q} \right) = 1$.
We can thus finally compute
\begin{equation} \begin{split}
&\mathcal V_0^\dagger[f_\textrm{\upshape pad}] \circ \zetapadout({N^\al}) \\
&= \sum_{\substack{\vec{q}, \vec{r} \\ \Lambda^{N^\al}_\Gamma \left( \vec{q} \right) = 1}} \left( \bigotimes_{{O^\ga} \in \texttt{Bran}_\Ga} \pi^{r^{O^\ga}}_{O^\ga_\textrm{\upshape out}} \right) \circ \mathcal V_0^\dagger[f_\textrm{\upshape pad}] \circ \zetapadout({N^\al}) \circ \left( \bigotimes_{{O^\ga} \in \texttt{Bran}_\Ga} \pi^{q^{O^\ga}}_{O^\ga_\textrm{\upshape out}} \right) \\
&\overset{(\ref{eq: comp H6 base 3})}{=} \sum_{\substack{\vec{q}, \vec{r} \\ \Lambda^{N^\al}_\Gamma \left( \vec{q} \right) = 1}} F^{(r^{O^\ga})_{{O^\ga} \in {\mathcal P^\str}({N^\al})}, (q^{O^\ga})_{{O^\ga} \in {\mathcal P^\str}({N^\al})}} \otimes \left(\bigotimes_{{M^\bet} \not\in {\mathcal P^\str}({N^\al})} \pi^{q^{M^\bet}}_{M^\bet_\textrm{\upshape out}} \right) \cdot \prod_{{O^\ga} \not\in {\mathcal P^\str}({N^\al})} \delta_{q^{O^\ga}, r^{O^\ga}} \\
&= \sum_{\substack{\left( q^{O^\ga} \right)_{{O^\ga} \in {\mathcal P^\str}({N^\al})} \\ \left( r^{O^\ga} \right)_{{O^\ga} \in {\mathcal P^\str}({N^\al})}}} \left( F^{(r^{O^\ga})_{{O^\ga} \in {\mathcal P^\str}({N^\al})}, (q^{O^\ga})_{{O^\ga} \in {\mathcal P^\str}({N^\al})}} \otimes \sum_{\substack{\left( q^{O^\ga} \right)_{{O^\ga} \not\in {\mathcal P^\str}({N^\al})} \\ \Lambda^{N^\al}_\Gamma \left( \vec{q} \right) = 1}} \left(\bigotimes_{{M^\bet} \not\in {\mathcal P^\str}({N^\al})} \pi^{q^{M^\bet}}_{M^\bet_\textrm{\upshape out}} \right) \right) \\
&= \left( \sum_{\substack{\left( q^{O^\ga} \right)_{{O^\ga} \in {\mathcal P^\str}({N^\al})} \\ \left( r^{O^\ga} \right)_{{O^\ga} \in {\mathcal P^\str}({N^\al})}}} F^{(r^{O^\ga})_{{O^\ga} \in {\mathcal P^\str}({N^\al})}, (q^{O^\ga})_{{O^\ga} \in {\mathcal P^\str}({N^\al})}} \right) \circ \left( \sum_{\substack{\left( s^{O^\ga} \right)_{{O^\ga} \in \texttt{Bran}_\Ga} \\ \Lambda^{N^\al}_\Gamma \left( \vec{s} \right) = 1}} \bigotimes_{{M^\bet} \not\in {\mathcal P^\str}({N^\al})} \pi^{s^{M^\bet}}_{M^\bet_\textrm{\upshape out}} \right)_\textrm{\upshape pad} \\
&= \left( \sum_{\substack{\left( q^{O^\ga} \right)_{{O^\ga} \in {\mathcal P^\str}({N^\al})} \\ \left( r^{O^\ga} \right)_{{O^\ga} \in {\mathcal P^\str}({N^\al})}}} F^{(r^{O^\ga})_{{O^\ga} \in {\mathcal P^\str}({N^\al})}, (q^{O^\ga})_{{O^\ga} \in {\mathcal P^\str}({N^\al})}} \right) \circ \zetapadout({N^\al}) \, ;
\end{split} \end{equation}
denoting the left-hand factor as $f'$ yields (\ref{eq:(H6)}).
\paragraph{H7} The proof of (H7) is symmetric to that of (H6).
\subsubsection{Proof of the induction step} We suppose that the induction hypotheses are all satisfied up to step $i$. We write ${M^\bet} := B(i+1)$ for the branch we have to refill at this induction step. We first consider the case in which neither $i$ nor $i+1$ is a special step. Note that, because the branches have been ordered so that all branches of a same layer are next to each other, the fact that $i+1$ is not a special step entails: if ${M^\bet}$ is in a red layer, then ${\mathcal P_i}({M^\bet}) = \emptyset$.
\paragraph*{H1} Let us first prove (H1) at step $i+1$. From (\ref{eq:zetain to zetaout}) and (H5) applied to $Q = \{{M^\bet}\}$, we have
\begin{equation} \begin{split} \cs_\pad[(V_{N,i})_N] &= \, \zetaipadin({M^\bet}) \circ \cs_\pad[(V_{N,i})_{N \neq M} \times (V_{M,i}^\beta)] \circ \zetaipadout({M^\bet}) \\ &+ \, {\Bar{\ze}_{i,\pad}^\inn}({M^\bet}) \circ \cs_\pad[(V_{N,i})_{N \neq M} \times (\Bar{V}_{M,i}^\beta)] \circ {\Bar{\ze}_{i,\pad}^\out}({M^\bet}) \,.
\end{split} \end{equation}
Furthermore,
\begin{subequations} \begin{equation} \cs_\pad[(V_{N,i})_{N \neq M} \times (V_{M,i+1}^\beta)] = \Tr_{M^\bet_\textrm{\upshape out}} [U_\textrm{\upshape pad}^\beta \circ \cs_\pad[(V_{N,i})_{N \neq M} \times (V_{M,i}^\beta)] ] \,, \end{equation} \begin{equation} \cs_\pad[(V_{N,i})_{N \neq M} \times (\Bar{V}_{M,i+1}^\beta)] = \frac{1}{\texttt{dim} ({M^\bet})} \Tr_{M^\bet_\textrm{\upshape out}} [\Theta_{{M^\bet},\textrm{\upshape pad}}^\dagger \circ \cs_\pad[(V_{N,i})_{N \neq M} \times (\Bar{V}_{M,i}^\beta)]] \,, \end{equation} \end{subequations}
so, because $\zetaiin({M^\bet})$ doesn't act on $M^\bet_\textrm{\upshape in}$, and $\zetaiout({M^\bet})$ doesn't act on $M^\bet_\textrm{\upshape out}$,
\begin{equation} \label{eq: step for H1} \begin{split} \cs_\pad[(V_{N,i+1})_N] &= \, \cs_\pad[(V_{N,i})_{N \neq M} \times (V_{M,i+1}^\beta)] + \cs_\pad[(V_{N,i})_{N \neq M} \times (\Bar{V}_{M,i+1}^\beta)] \\ &= \, \zetaipadin({M^\bet}) \circ \Tr_{M^\bet_\textrm{\upshape out}} [U_\textrm{\upshape pad}^\beta \circ \cs_\pad[(V_{N,i})_{N \neq M} \times (V_{M,i}^\beta)] ] \circ \zetaipadout({M^\bet}) \\ &+ \, {\Bar{\ze}_{i,\pad}^\inn}({M^\bet}) \circ \frac{1}{\texttt{dim} ({M^\bet})} \Tr_{M^\bet_\textrm{\upshape out}} [\Theta_{{M^\bet},\textrm{\upshape pad}}^\dagger \circ \cs_\pad[(V_{N,i})_{N \neq M} \times (\Bar{V}_{M,i}^\beta)]] \circ {\Bar{\ze}_{i,\pad}^\out}({M^\bet}) \,.
\end{split} \end{equation}
Therefore, $\cs_\pad[(V_{N,i+1})_N]$ can be decomposed into two terms: one that can be considered as a linear map from the subspace of $\mathcal H_i^\textrm{\upshape out}$ defined by the projector $\zetaipadout({M^\bet})$ to the subspace of $\mathcal H_i^\textrm{\upshape in}$ defined by the projector $\zetaipadin({M^\bet})$, and one that can be considered as a linear map from, and to, the subspaces orthogonal to these. We now have to prove that each of these two terms is unitary. We start with the first term. (H6) implies that $U_\textrm{\upshape pad}^\beta \circ \cs_\pad[(V_{N,i})_{N \neq M} \times (V_{M,i}^\beta)]$ features no causal influence from $M^\bet_\textrm{\upshape out}$ to $M^\bet_\textrm{\upshape in}$, via the characterisation of causal influence in terms of algebras \cite{Ormrod2022} (note that it makes sense to talk about the factors $M^\bet_\textrm{\upshape out}$ and $M^\bet_\textrm{\upshape in}$ of its input and output spaces because the $\zeta({M^\bet})$'s do not act on these). Therefore, one can find a unitary causal decomposition of it as $W^2 \circ (\textsc{swap}_{M^\bet_\textrm{\upshape out}, M^\bet_\textrm{\upshape in}} \otimes \mathbb{1}) \circ W^1$, where $W^1$ doesn't act on $M^\bet_\textrm{\upshape out}$ and $W^2$ doesn't act on $M^\bet_\textrm{\upshape in}$. The first term in (\ref{eq: step for H1}) -- with its input and output spaces suitably restricted -- is thus of the form $W^2 \circ (U^\beta \otimes \mathbb{1}) \circ W^1$, which is unitary. As for the second term, one can see from the definition of the $V_{M,i}^\alpha$'s that $\cs_\pad[(V_{N,i})_{N \neq M} \times (\Bar{V}_{M,i}^\beta)]$ is of the form $\Theta_{{M^\bet}} \otimes W$, with $W$ a unitary (once restricted to the suitable subspaces). The second term can therefore simply be rewritten as $W$.
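The reduction of the second term can be spelled out in one line; the following is our hedged sketch of it, under the assumption (suggested by the $1/\texttt{dim}({M^\bet})$ normalisation) that $\Theta_{M^\bet}$ is unitary on $\mathcal H_{M^\bet_\textrm{\upshape out}}$:

```latex
% Plugging the form \Theta_{M^\bet} \otimes W into the traced expression:
\frac{1}{\texttt{dim}({M^\bet})} \Tr_{M^\bet_\textrm{\upshape out}}
  \left[ \Theta_{{M^\bet},\textrm{\upshape pad}}^\dagger \circ
         \left( \Theta_{M^\bet} \otimes W \right) \right]
= \frac{\Tr \left[ \Theta_{M^\bet}^\dagger \Theta_{M^\bet} \right]}
       {\texttt{dim}({M^\bet})} \, W \, ,
% and the prefactor is 1 whenever \Theta_{M^\bet} is an isometry.
```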
We have therefore proven (H1) at rank $i+1$. \paragraph*{A Lemma.} Before turning to the other induction hypotheses, we prove a Lemma that we will need to use a few times to compute how $\mathcal V_{i+1}^\dagger$ acts on sufficiently well-behaved linear operators. \begin{lemma} \label{lem: MovingThrough} Let $g \in \Lin[\mathcal H_i^\textrm{\upshape in}]$, not acting (i.e.\ acting trivially) on ${M^\beta}in$, commuting with $\zetaipadin({M^\beta})$, and satisfying: $\mathcal V_i^\dagger[g] \circ {\Bar{\ze}_{i,\pad}^\out}({M^\beta})$ doesn't act on ${M^\beta}out$. We fix an orthonormal (with respect to the Hilbert-Schmidt inner product) basis $(E_m)_{0 \leq m \leq \dim({M^\beta}out)^2 - 1}$ of $\Lin[\mathcal H_{M^\beta}out]$, with $E_0 = \mathbb{1}$, and decompose $\mathcal V_i^\dagger[g]$ as \begin{equation} \label{eq: g decomposition} \mathcal V_i^\dagger[g] = \sum_m \chi_m \otimes E_m \, ,\end{equation} with the $\chi_m$'s acting on $\mathcal H_P \otimes \left( \bigotimes_{{O^\ga} > B(i+1)} \mathcal H_{O^\ga}out \right)$. With padding, we can also write $\mathcal V_i^\dagger[g] = \sum_m \chi_{m, \textrm{\upshape pad}} \circ E_{m, \textrm{\upshape pad}}$, with the terms commuting.
We then have \begin{equation} \label{eq: Lemma's result} \mathcal V_{i+1}^\dagger[g] = \chi_{0, \textrm{\upshape pad}} + \sum_{m \neq 0} E'_{m, \textrm{\upshape pad}} \circ \chi_{m, \textrm{\upshape pad}} \, , \end{equation} where the $E'_m$'s are defined, through the use of (H6) at step $i$, by $\mathcal V_{i}^\dagger[({U^{\beta}}^\dagger E_m U^\beta)_\textrm{\upshape pad}] \circ \zetaipadout({M^\beta}) = E'_{m,\textrm{\upshape pad}} \circ \zetaipadout({M^\beta})$, with the $E'_m$'s only acting on $\mathcal H_P$ (because $\mathcal P_i^\textrm{\upshape str}({M^\beta}) = \emptyset$). \end{lemma} \begin{proof} We will compute $\mathcal V_{i+1}^\dagger[g]$ by looking at how $g$ `moves through' $\cs_\pad[(V_{N,i+1})_N]$. First, we rewrite (\ref{eq: step for H1}) more compactly as \begin{equation} \label{eq: compact rewriting} \begin{split} &\cs_\pad[(V_{N,i+1})_N] \\ &= \, \Tr_{{M^\beta}out} \left[ \left(\zetaipadin({M^\beta}) \circ U_\textrm{\upshape pad}^\beta + {\Bar{\ze}_{i,\pad}^\inn}({M^\beta}) \circ \frac{1}{\dim ({M^\beta})} \Theta_{{M^\beta},\textrm{\upshape pad}}^\dagger \right) \circ \cs_\pad[(V_{N,i})_{N}] \right] \, .
\end{split} \end{equation} Thus (because $g$ doesn't act on ${M^\beta}in$, and commutes with $\zetaipadin({M^\beta})$), \begin{equation} \label{eq: computation for Lemma} \begin{split} & g \circ \cs_\pad[(V_{N,i+1})_N] \\ &= \, g \circ \Tr_{{M^\beta}out} \left[ \left( \zetaipadin({M^\beta}) \circ U_\textrm{\upshape pad}^\beta + {\Bar{\ze}_{i,\pad}^\inn}({M^\beta}) \circ \frac{1}{\dim ({M^\beta})} \Theta_{{M^\beta},\textrm{\upshape pad}}^\dagger \right) \circ \cs_\pad[(V_{N,i})_{N}] \right] \\ &= \, \Tr_{{M^\beta}out} \left[ \left(\zetaipadin({M^\beta}) \circ U_\textrm{\upshape pad}^\beta + {\Bar{\ze}_{i,\pad}^\inn}({M^\beta}) \circ \frac{1}{\dim ({M^\beta})} \Theta_{{M^\beta},\textrm{\upshape pad}}^\dagger \right) \circ g \circ \cs_\pad[(V_{N,i})_{N}] \right] \\ &= \, \Tr_{{M^\beta}out} \left[ \left(\zetaipadin({M^\beta}) \circ U_\textrm{\upshape pad}^\beta + {\Bar{\ze}_{i,\pad}^\inn}({M^\beta}) \circ \frac{1}{\dim ({M^\beta})} \Theta_{{M^\beta},\textrm{\upshape pad}}^\dagger \right) \circ \cs_\pad[(V_{N,i})_{N}] \circ \mathcal V_i^\dagger[g] \right] \, .
\end{split} \end{equation} We now consider the decomposition (\ref{eq: g decomposition}) of $\mathcal V_i^\dagger[g]$, and we look at \begin{equation} \begin{split} \sum_{m \neq 0} (\chi_{m,\textrm{\upshape pad}} \circ {\Bar{\ze}_{i,\pad}^\out}({M^\beta})) \otimes E_m &= \sum_{m \neq 0} (\chi_{m} \otimes E_m)_\textrm{\upshape pad} \circ {\Bar{\ze}_{i,\pad}^\out}({M^\beta}) \\ &= \mathcal V_i^\dagger[g] \circ {\Bar{\ze}_{i,\pad}^\out}({M^\beta}) - (\chi_0 \otimes \mathbb{1}_{{M^\beta}out})_\textrm{\upshape pad} \circ {\Bar{\ze}_{i,\pad}^\out}({M^\beta}) \, . \end{split}\end{equation} Both terms of the second line's RHS act trivially on ${M^\beta}out$: the first term by assumption, and the second because it is a composition of operators acting trivially on ${M^\beta}out$. From the form of the LHS, we can thus deduce that $\forall m \neq 0, \chi_{m,\textrm{\upshape pad}} \circ {\Bar{\ze}_{i,\pad}^\out}({M^\beta}) = 0$, which can be rewritten as \begin{equation} \label{eq: absorbing zeta} \forall m \neq 0, \chi_{m,\textrm{\upshape pad}} = \chi_{m,\textrm{\upshape pad}} \circ \zetaipadout({M^\beta}) \, . \end{equation} In the same way, we can prove that $\chi_{m,\textrm{\upshape pad}} = \zetaipadout({M^\beta}) \circ \chi_{m,\textrm{\upshape pad}}$. We are now in a position to continue the computation started in (\ref{eq: computation for Lemma}); we write $\kappa_i^\textrm{\upshape in}({M^\beta}) := \zetaipadin({M^\beta}) \circ U_\textrm{\upshape pad}^\beta + {\Bar{\ze}_{i,\pad}^\inn}({M^\beta}) \circ \frac{1}{\dim ({M^\beta})} \Theta_{{M^\beta},\textrm{\upshape pad}}^\dagger$.
\begin{equation} \begin{split} & g \circ \cs_\pad[(V_{N,i+1})_N] \\ &\overset{(\ref{eq: absorbing zeta})}{=} \, \Tr_{{M^\beta}out} \left[ \left(\zetaipadin({M^\beta}) \circ U_\textrm{\upshape pad}^\beta + {\Bar{\ze}_{i,\pad}^\inn}({M^\beta}) \circ \frac{1}{\dim ({M^\beta})} \Theta_{{M^\beta},\textrm{\upshape pad}}^\dagger \right) \circ \cs_\pad[(V_{N,i})_{N}] \circ \chi_{0, \textrm{\upshape pad}} \right] \\ &+ \, \sum_{m \neq 0} \Tr_{{M^\beta}out} \left[ \zetaipadin({M^\beta}) \circ U_\textrm{\upshape pad}^\beta \circ \cs_\pad[(V_{N,i})_{N}] \circ \chi_{m, \textrm{\upshape pad}} \circ E_{m, \textrm{\upshape pad}} \right] \\ &= \, \cs_\pad[(V_{N,i+1})_N] \circ \chi_{0, \textrm{\upshape pad}} \\ &+ \, \sum_{m \neq 0} \Tr_{{M^\beta}out} \left[ E_{m, \textrm{\upshape pad}} \circ \zetaipadin({M^\beta}) \circ U_\textrm{\upshape pad}^\beta \circ \cs_\pad[(V_{N,i})_{N}] \circ \chi_{m, \textrm{\upshape pad}} \right] \\ &= [\ldots] + \sum_{m \neq 0} \Tr_{{M^\beta}out} \left[ \zetaipadin({M^\beta}) \circ U_\textrm{\upshape pad}^\beta \circ \cs_\pad[(V_{N,i})_{N}] \circ \mathcal V_{i}^\dagger[({U^{\beta}}^\dagger E_m U^\beta)_\textrm{\upshape pad}] \right] \circ \chi_{m, \textrm{\upshape pad}}\\ &= [\ldots] + \sum_{m \neq 0} \Tr_{{M^\beta}out} \left[ \kappa_{i,\textrm{\upshape pad}}^\textrm{\upshape in}({M^\beta}) \circ \zetaipadin({M^\beta}) \circ \cs_\pad[(V_{N,i})_{N}] \circ \mathcal V_{i}^\dagger[({U^{\beta}}^\dagger E_m U^\beta)_\textrm{\upshape pad}] \right] \circ \chi_{m,
\textrm{\upshape pad}}\\ &= [\ldots] + \sum_{m \neq 0} \Tr_{{M^\beta}out} \left[ \kappa_{i,\textrm{\upshape pad}}^\textrm{\upshape in}({M^\beta}) \circ \cs_\pad[(V_{N,i})_{N}] \circ \mathcal V_{i}^\dagger[({U^{\beta}}^\dagger E_m U^\beta)_\textrm{\upshape pad}] \circ \zetaipadout({M^\beta}) \right] \circ \chi_{m, \textrm{\upshape pad}}\\ &\overset{\textrm{(H6)}}{=} [\ldots] + \sum_{m \neq 0} \Tr_{{M^\beta}out} \left[ \kappa_{i,\textrm{\upshape pad}}^\textrm{\upshape in}({M^\beta}) \circ \cs_\pad[(V_{N,i})_{N}] \circ E'_{m, \textrm{\upshape pad}} \circ \zetaipadout({M^\beta}) \right] \circ \chi_{m, \textrm{\upshape pad}}\\ &= \, [\ldots] + \sum_{m \neq 0} \Tr_{{M^\beta}out} \left[ \kappa_{i,\textrm{\upshape pad}}^\textrm{\upshape in}({M^\beta}) \circ \cs_\pad[(V_{N,i})_{N}] \right] \circ E'_{m, \textrm{\upshape pad}} \circ \zetaipadout({M^\beta}) \circ \chi_{m, \textrm{\upshape pad}}\\ &\overset{(\ref{eq: compact rewriting})}{=} \, [\ldots] + \sum_{m \neq 0} \cs_\pad[(V_{N,i+1})_N] \circ E'_{m, \textrm{\upshape pad}} \circ \zetaipadout({M^\beta}) \circ \chi_{m, \textrm{\upshape pad}}\\ &\overset{(\ref{eq: absorbing zeta})}{=} \, [\ldots] + \sum_{m \neq 0} \cs_\pad[(V_{N,i+1})_N] \circ E'_{m, \textrm{\upshape pad}} \circ \chi_{m, \textrm{\upshape pad}}\\ &= \, \cs_\pad[(V_{N,i+1})_N] \circ \left( \chi_{0, \textrm{\upshape pad}} + \sum_{m \neq 0} E'_{m, \textrm{\upshape pad}} \circ \chi_{m, \textrm{\upshape pad}} \right) \,.
\end{split}\end{equation} In the previous computation, we used (H6) to replace $\mathcal V_{i}^\dagger[({U^{\beta}}^\dagger E_m U^\beta)_\textrm{\upshape pad}] \circ \zetaipadout({M^\beta})$ with $E'_{m,\textrm{\upshape pad}} \circ \zetaipadout({M^\beta})$, with the $E'_m$'s only acting on $\mathcal H_P$ (because $\mathcal P_i^\textrm{\upshape str}({M^\beta}) = \emptyset$). This then allowed us to take this term out of the trace. The computation allows us to conclude that (\ref{eq: Lemma's result}) holds. \end{proof} \paragraph*{H2} We now turn to (H2). We will take: $\forall {N^\al} > B(i+1), \zetaiplin({N^\al}) := \zetaiin({N^\al})$. There are two things to check in order to ensure that this makes sense. The first is that the $\zetaiin({N^\al})$'s are indeed all defined, which holds here because $i$ is not a special step. The second is that, for an arbitrary ${N^\al}$, $\zetaiin({N^\al})$ wasn't acting on ${M^\beta}in$. This follows from the fact that $i+1$ is not a special step. Indeed, the way we defined the ordering of the branches ensures that ${M^\beta} \not\in \mathcal F_i^\textrm{\upshape str}({N^\al})$. This ensures that $\zetaiin({N^\al})$ doesn't act on ${M^\beta}in$ if ${N^\al}$ is in a green layer; while if ${N^\al}$ is in a red layer, then the fact that $i+1$ is not a special step implies that ${M^\beta}$ is not in this red layer, i.e.\ that ${M^\beta} \not\in \mathcal F_i({N^\al})$ and thus that $\zetaiin({N^\al})$ doesn't act on ${M^\beta}in$. We then want to define, from there, $\zetaiplpadout({N^\al}) := \mathcal V_{i+1}^\dagger[\zetaiplpadin({N^\al})], \, \forall {N^\al}$. The fact (which derives from (H1)) that $\mathcal V_{i+1}$ is an isomorphism of operator algebras will then ensure that the $\zetaiplpadout$'s are pairwise commuting orthogonal projectors, as the $\zetaiplpadin$'s are.
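In more detail (a routine check, using only that $\mathcal V_{i+1}^\dagger$ is multiplicative and adjoint-preserving), for projectors $p$ and $q$ one has

```latex
% Images of commuting orthogonal projectors under a *-isomorphism are again
% commuting orthogonal projectors.
\begin{equation*}
\mathcal V_{i+1}^\dagger[p]^2 = \mathcal V_{i+1}^\dagger[p^2] = \mathcal V_{i+1}^\dagger[p] \,, \qquad
\mathcal V_{i+1}^\dagger[p]^\dagger = \mathcal V_{i+1}^\dagger[p^\dagger] = \mathcal V_{i+1}^\dagger[p] \,, \qquad
\left[ \mathcal V_{i+1}^\dagger[p], \mathcal V_{i+1}^\dagger[q] \right] = \mathcal V_{i+1}^\dagger \left[ [p,q] \right] = 0 \,.
\end{equation*}
```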
What is left to prove is that, fixing an ${N^\al}$ whose layer is green, $\mathcal V_{i+1}^\dagger[\zetaiplpadin({N^\al})]$ (which formally acts on the whole $\mathcal H_{i+1}^\textrm{\upshape out}$) can indeed be seen as the padding of an operator acting on $\mathcal H_P \otimes \left( \bigotimes_{{O^\ga} \in \mathcal P_{i+1}({N^\al})} \mathcal H_{O^\ga}out \right)$ -- i.e.\ that it acts trivially on the other factors; and similarly, that for an ${N^\al}$ whose layer is red, $\mathcal V_{i+1}^\dagger[\zetaiplpadin({N^\al})]$ can be seen as only acting on $\mathcal H_P \otimes \left( \bigotimes_{{O^\ga} \in \mathcal P_{i+1}^\textrm{\upshape str}({N^\al})} \mathcal H_{O^\ga}out \right)$. For this, fixing an ${N^\al} > B(i+1)$, we will make use of Lemma \ref{lem: MovingThrough} to compute $\mathcal V_{i+1}^\dagger[\zetaipadin({N^\al})]$. The latter satisfies the lemma's assumptions: $\zetaipadin({N^\al})$ doesn't act on ${M^\beta}in$ and commutes with $\zetaipadin({M^\beta})$ by (H1) at step $i$, and $\mathcal V_i^\dagger[\zetaipadin({N^\al})] = \zetaipadout({N^\al})$, by (H3) at step $i$, satisfies: $\zetaipadout({N^\al}) \circ {\Bar{\ze}_{i,\pad}^\out}({M^\beta})$ acts trivially on ${M^\beta}out$. By Lemma \ref{lem: MovingThrough}, writing $\zetaiout({N^\al}) = \sum_m \chi_m \otimes E_m$, we can thus conclude \begin{equation} \label{eq: result for (H2)} \mathcal V_{i+1}^\dagger[\zetaiplpadin({N^\al})] = \chi_{0, \textrm{\upshape pad}} + \sum_{m \neq 0} E'_{m, \textrm{\upshape pad}} \circ \chi_{m, \textrm{\upshape pad}} \, .
\end{equation} If ${N^\al}$ is in a green layer, then in this expression the $\chi_{m}$'s act on $\mathcal H_P \otimes \left( \bigotimes_{{O^\ga} \in \mathcal P_{i+1}({N^\al})} \mathcal H_{O^\ga}out \right)$ and the $E'_m$'s act on $\mathcal H_P$; thus, $\mathcal V_{i+1}^\dagger [\zetaiplpadin({N^\al})]$ only acts non-trivially on $\mathcal H_P \otimes \left( \bigotimes_{{O^\ga} \in \mathcal P_{i+1}({N^\al})} \mathcal H_{O^\ga}out \right)$. If ${N^\al}$ is in a red layer, the same can be said replacing $\mathcal P$'s with $\mathcal P^\textrm{\upshape str}$'s. This concludes the proof of (H2). \paragraph*{H3} The proof of (H3) is direct for the $\zetaiplin$'s, as they are equal to the $\zetaiin$'s. For the $\zetaiplout$'s, fixing ${N^\al}$ and ${O^\ga}$, one can compute $\zetaiplpadout({N^\al}) \circ {\Bar{\ze}_{i+1,\pad}^\out}({O^\ga}) = \mathcal V_{i+1}^\dagger[\zetaipadin({N^\al}) \circ {\Bar{\ze}_{i,\pad}^\inn}({O^\ga})]$ by once again invoking Lemma \ref{lem: MovingThrough}, writing \begin{equation} \zetaiout({N^\al}) \circ {\Bar{\ze}_i^\out}({O^\ga}) = \sum_m \xi_m \otimes E_m \, ,\end{equation} where the $\xi_m$'s act trivially on ${O^\ga}out$ because $\zetaiout({N^\al}) \circ {\Bar{\ze}_i^\out}({O^\ga})$ does, by (H3) at step $i$. $\zetaipadin({N^\al}) \circ {\Bar{\ze}_{i,\pad}^\inn}({O^\ga})$ commutes with $\zetaipadin({M^\beta})$ and doesn't act on ${M^\beta}in$; to apply the Lemma, we thus have to prove that $\zetaipadout({N^\al}) \circ {\Bar{\ze}_{i,\pad}^\out}({O^\ga}) \circ {\Bar{\ze}_{i,\pad}^\out}({M^\beta})$ acts trivially on ${M^\beta}out$.
This follows from the rewriting \begin{equation} \begin{split} &\zetaipadout({N^\al}) \circ {\Bar{\ze}_{i,\pad}^\out}({O^\ga}) \circ {\Bar{\ze}_{i,\pad}^\out}({M^\beta}) \\ &= \left( \zetaipadout({N^\al}) \circ {\Bar{\ze}_{i,\pad}^\out}({M^\beta}) \right) \circ \left( {\Bar{\ze}_{i,\pad}^\out}({M^\beta}) - \zetaipadout({O^\ga}) \circ {\Bar{\ze}_{i,\pad}^\out}({M^\beta}) \right) \, , \end{split} \end{equation} all of the terms in which, as one can conclude by (H3) at step $i$, act trivially on ${M^\beta}out$. Lemma \ref{lem: MovingThrough} thus leads to \begin{equation} \zetaiplpadout({N^\al}) \circ {\Bar{\ze}_{i+1,\pad}^\out}({O^\ga}) = \mathcal V_{i+1}^\dagger[\zetaipadin({N^\al}) \circ {\Bar{\ze}_{i,\pad}^\inn}({O^\ga})] = \xi_{0, \textrm{\upshape pad}} + \sum_{m \neq 0} E'_{m, \textrm{\upshape pad}} \circ \xi_{m, \textrm{\upshape pad}} \, , \end{equation} with neither the $\xi_m$'s nor the $E'_m$'s acting on ${O^\ga}out$, which concludes the proof of (H3). \paragraph*{H4} The proof of (H4) at step $i+1$ is immediate, as it derives from (H4) at step $i$ for the $\zetaiplin$'s, which are equal to the $\zetaiin$'s. \paragraph*{H5} For the proof of (H5), we take a set $Q \subseteq \{B(i') \, | \, i' \geq i+1 \}$ of branches on different nodes, together with $\Tilde{Q} \subseteq \texttt{Nodes}_\Ga$ and a function $\alpha$ such that $Q = \{ N^{\alpha(N)} \, | \, N \in \Tilde{Q}\}$. We will prove the version of (H5) written with $\zetaiplin$'s. We first consider the case $M \in \Tilde{Q}$.
Then by (H4) we have $\zetaipadin(M^{\alpha(M)}) = \zetaipadin(M^{\alpha(M)}) \circ {\Bar{\ze}_{i,\pad}^\inn}({M^\beta})$, and we can therefore write \begin{equation} \begin{split} &\prod_{N \in \Tilde{Q}} \zetaiplpadin(N^{\alpha(N)}) \circ \cs_\pad[(V_{N,i+1})_N] \\ &= \, \prod_{N \in \Tilde{Q}} \zetaipadin(N^{\alpha(N)}) \circ {\Bar{\ze}_{i,\pad}^\inn}({M^\beta}) \circ \cs_\pad[(V_{N,i+1})_N] \\ &= \, \prod_{N \in \Tilde{Q}} \zetaipadin(N^{\alpha(N)}) \circ {\Bar{\ze}_{i,\pad}^\inn}({M^\beta}) \circ \frac{1}{\dim({M^\beta})} \Tr_{{M^\beta}out} \left[ \Theta_{{M^\beta},\textrm{\upshape pad}}^\dagger \circ \cs_\pad[(V_{N,i})_N] \right] \\ &= \, \prod_{N \in \Tilde{Q}} \zetaipadin(N^{\alpha(N)}) \circ \frac{1}{\dim({M^\beta})} \Tr_{{M^\beta}out} \left[ \Theta_{{M^\beta},\textrm{\upshape pad}}^\dagger \circ \cs_\pad[(V_{N,i})_N] \right] \\ &\overset{\textrm{(H5)}}{=} \, \frac{1}{\dim({M^\beta})} \Tr_{{M^\beta}out} \left[ \Theta_{{M^\beta},\textrm{\upshape pad}}^\dagger \circ \cs_\pad[(V_{N,i})_{N \not\in \Tilde{Q}} \times (V^{\alpha(N)}_{N,i})_{N \in \Tilde{Q}}] \right] \\ &= \, \frac{1}{\dim({M^\beta})} \Tr_{{M^\beta}out} \left[ \Theta_{{M^\beta},\textrm{\upshape pad}}^\dagger \circ \Theta_{{M^\beta},\textrm{\upshape pad}} \circ \cs_\pad[(V_{N,i+1})_{N \not\in \Tilde{Q}} \times (V^{\alpha(N)}_{N,i+1})_{N \in \Tilde{Q}}] \right] \\ &= \, \cs_\pad[(V_{N,i+1})_{N \not\in \Tilde{Q}} \times (V^{\alpha(N)}_{N,i+1})_{N \in \Tilde{Q}}] \, .
\end{split} \end{equation} In the case $M \not\in \Tilde{Q}$, defining $\alpha(M) := \beta$, we have \begin{equation} \begin{split} &\prod_{N \in \Tilde{Q}} \zetaiplpadin(N^{\alpha(N)}) \circ \cs_\pad[(V_{N,i+1})_N] \\ &= \, \Tr_{{M^\beta}out} \left[ \prod_{N \in \Tilde{Q} \cup \{M\}} \zetaipadin(N^{\alpha(N)}) \circ U_\textrm{\upshape pad}^\beta \circ \cs_\pad[(V_{N,i})_{N}] \right] \\ &+ \,\frac{1}{\dim ({M^\beta})} \Tr_{{M^\beta}out} \left[ \prod_{N \in \Tilde{Q}} \zetaipadin(N^{\alpha(N)}) \circ {\Bar{\ze}_{i,\pad}^\inn}({M^\beta}) \circ \Theta_{{M^\beta},\textrm{\upshape pad}}^\dagger \circ \cs_\pad[(V_{N,i})_{N}] \right] \\ &\overset{\textrm{(H5)}}{=} \, \Tr_{{M^\beta}out} \left[ U_\textrm{\upshape pad}^\beta \circ \cs_\pad[(V_{N,i})_{N \not\in \Tilde{Q} \cup \{M\}} \times (V_{N,i}^{\alpha(N)})_{N \in \Tilde{Q} \cup \{M\}}] \right] \\ &+ \, \frac{1}{\dim ({M^\beta})} \Tr_{{M^\beta}out} \left[ \Theta_{{M^\beta},\textrm{\upshape pad}}^\dagger \circ \cs_\pad[(V_{N,i})_{N \not\in \Tilde{Q} \cup \{M\}} \times (V_{N,i}^{\alpha(N)})_{N \in \Tilde{Q}} \times (\bar{V}_{M,i}^\beta)] \right] \\ &= \, \cs_\pad[(V_{N,i+1})_{N \not\in \Tilde{Q} \cup \{M\}} \times (V_{N,i+1}^{\alpha(N)})_{N \in \Tilde{Q}} \times (V_{M,i+1}^\beta)] \\ &+ \, \cs_\pad[(V_{N,i+1})_{N \not\in \Tilde{Q} \cup \{M\}} \times (V_{N,i+1}^{\alpha(N)})_{N \in \Tilde{Q}} \times (\bar{V}_{M,i+1}^\beta)]\\ &= \, \cs_\pad[(V_{N,i+1})_{N \not\in \Tilde{Q}} \times (V_{N,i+1}^{\alpha(N)})_{N \in \Tilde{Q}}] \,. \end{split} \end{equation} \paragraph*{H6} To prove (H6), we fix a branch ${N^\al} > B(i+1)$ in a green layer, and $f \in \Lin[\mathcal H_{N^\al}in]$.
We first consider the case ${N^\al} \not\in \mathcal F_i^\textrm{\upshape str}({M^\beta})$. $\zetaiin({M^\beta})$ then doesn't act on ${N^\al}in$: indeed, either ${M^\beta}$ is in a green layer and $\zetaiin({M^\beta})$ doesn't act outside of $\mathcal F_i^\textrm{\upshape str}({M^\beta})$, or ${M^\beta}$ is in a red layer and then, because $i+1$ is not a special step, ${N^\al}$ is not in this layer and thus ${N^\al} \not\in \mathcal F_i({M^\beta})$. Furthermore, since, as we saw, $\cs_\pad[(V_{N,i})_N] \circ {\Bar{\ze}_{i,\pad}^\out}({M^\beta})$ is of the form $W \otimes \Theta_{M^\beta}$, and $f$ doesn't act on ${M^\beta}in$, it follows that $\mathcal V_i^\dagger[f]\circ {\Bar{\ze}_{i,\pad}^\out}({M^\beta})$ doesn't act on ${M^\beta}out$. We can therefore apply Lemma \ref{lem: MovingThrough} and get \begin{equation} \mathcal V_{i+1}^\dagger[f_\textrm{\upshape pad}] = \phi_{0, \textrm{\upshape pad}} + \sum_{m \neq 0} E'_{m, \textrm{\upshape pad}} \circ \phi_{m, \textrm{\upshape pad}} \, \overset{\textrm{(H6)}}{=} \sum_m \mathcal V_{i}^\dagger[({U^{\beta}}^\dagger E_m U^\beta)_\textrm{\upshape pad}] \circ \phi_{m, \textrm{\upshape pad}} \, , \end{equation} where $\mathcal V_i^\dagger[f_\textrm{\upshape pad}] = \sum_m \phi_m \otimes E_m$. Furthermore, as we've seen, we have $\zetaiplpadout({N^\al}) = \sum_n \mathcal V_{i}^\dagger[({U^{\beta}}^\dagger E_n U^\beta)_\textrm{\upshape pad}] \circ \chi_n$.
We are therefore led to \begin{equation} \label{eq: comp H6} \begin{split} &\mathcal V_{i+1}^\dagger[f_\textrm{\upshape pad}] \circ \zetaiplpadout({N^\al}) = \sum_{mn} \phi_m \circ \chi_n \circ \mathcal V_{i}^\dagger[({U^{\beta}}^\dagger \circ E_m \circ E_n \circ U^\beta)_\textrm{\upshape pad}]\\ &= \, \sum_l \left( \sum_{mn} \sigma_{lmn} \phi_m \circ \chi_n \right) \circ \mathcal V_{i}^\dagger[({U^{\beta}}^\dagger \circ E_l \circ U^\beta)_\textrm{\upshape pad}] \, , \end{split} \end{equation} where the $\sigma_{lmn}$'s are the structure constants of $\Lin[{M^\beta}out]$, i.e.\ $E_m \circ E_n = \sum_l \sigma_{lmn} E_l$. Yet (H6) at step $i$ gives us that there exists an $f'$ acting on $P$ and $\mathcal P_i^\textrm{\upshape str}({N^\al})$ (and therefore not on ${M^\beta}out$) such that $\mathcal V_i^\dagger[f_\textrm{\upshape pad}] \circ \zetaipadout({N^\al}) = {f'}_\textrm{\upshape pad} \circ \zetaipadout({N^\al})$, which can be rewritten as \begin{equation} \sum_l \left( \sum_{mn} \sigma_{lmn} \phi_m \circ \chi_n \right) \otimes E_l = \sum_l \left( f'_\textrm{\upshape pad} \circ \chi_l \right) \otimes E_l \, , \end{equation} leading to \begin{equation} \forall l, \sum_{mn} \sigma_{lmn} \phi_m \circ \chi_n = f'_\textrm{\upshape pad} \circ \chi_l \, .
\end{equation} Reinserting this into (\ref{eq: comp H6}), we find \begin{equation} \begin{split} \mathcal V_{i+1}^\dagger[f_\textrm{\upshape pad}] \circ \zetaiplpadout({N^\al}) &= \, f'_\textrm{\upshape pad} \circ \sum_l \chi_l \circ \mathcal V_{i}^\dagger[({U^{\beta}}^\dagger \circ E_l \circ U^\beta)_\textrm{\upshape pad}] \\ &= f'_\textrm{\upshape pad} \circ \zetaiplpadout({N^\al}) \, , \end{split} \end{equation} where $f'$ acts on $P$ and on $\mathcal P_i^\textrm{\upshape str}({N^\al})$, and the latter is equal to $\mathcal P_{i+1}^\textrm{\upshape str}({N^\al})$ as ${M^\beta} \not\in \mathcal P_i^\textrm{\upshape str}({N^\al})$. We now consider the case ${N^\al} \in \mathcal F_i^\textrm{\upshape str}({M^\beta})$. We will use the fact that (\ref{eq:(H6)}) can be equivalently written as $\zetaipadout({N^\al}) \circ \mathcal V_i^\dagger[f_\textrm{\upshape pad}] = f'_\textrm{\upshape pad} \circ \zetaipadout({N^\al})$; indeed, $\zetaipadin({N^\al})$ doesn't act on ${N^\al}in$, so $f_\textrm{\upshape pad}$ and $\zetaipadin({N^\al})$ commute, so $\mathcal V_i^\dagger[f]$ and $\mathcal V_i^\dagger[\zetaipadin({N^\al})] = \zetaipadout({N^\al})$ commute as well. We write $\kappa_i^\textrm{\upshape out}({M^\beta}) := \zetaipadout({M^\beta}) \circ U_\textrm{\upshape pad}^\beta + {\Bar{\ze}_{i,\pad}^\out}({M^\beta}) \circ \frac{1}{\dim ({M^\beta})} \Theta_{{M^\beta},\textrm{\upshape pad}}^\dagger$.
\begin{equation} \label{eq: H6 computation 1} \begin{split} &\zetaiplpadout({N^\al}) \circ \cs_\pad[(V_{N,i+1})_N]^\dagger \circ f_\textrm{\upshape pad} \\ &= \, \cs_\pad[(V_{N,i+1})_N]^\dagger \circ \zetaipadin({N^\al}) \circ f_\textrm{\upshape pad} \\ &= \, \Tr_{{M^\beta}in} \left[ \kappa^\textrm{\upshape out}_{i,\textrm{\upshape pad}}({M^\beta})^\dagger \circ \cs_\pad[(V_{N,i})_N]^\dagger \right] \circ \zetaipadin({N^\al}) \circ f_\textrm{\upshape pad} \\ &= \, \Tr_{{M^\beta}in} \left[ \kappa_{i,\textrm{\upshape pad}}^\textrm{\upshape out}({M^\beta})^\dagger \circ \zetaipadout({N^\al}) \circ \mathcal V_i^\dagger[f_\textrm{\upshape pad}] \circ \cs_\pad[(V_{N,i})_N]^\dagger \right] \\ &\overset{\textrm{(H6)}}{=} \, \Tr_{{M^\beta}in} \left[ \kappa_{i,\textrm{\upshape pad}}^\textrm{\upshape out}({M^\beta})^\dagger \circ f'_\textrm{\upshape pad} \circ \zetaipadout({N^\al}) \circ \cs_\pad[(V_{N,i})_N]^\dagger \right] \\ &= \, \Tr_{{M^\beta}in} \left[ \kappa_{i,\textrm{\upshape pad}}^\textrm{\upshape out}({M^\beta})^\dagger \circ f'_\textrm{\upshape pad} \circ \cs_\pad[(V_{N,i})_N]^\dagger \right] \circ \zetaiplpadin({N^\al}) \, , \\ \end{split} \end{equation} from which we get \begin{equation} \label{eq: H6 computation 2} \begin{split} &\zetaiplpadout({N^\al}) \circ \mathcal V_{i+1}^\dagger[f_\textrm{\upshape pad}] \\ &= \, \zetaiplpadout({N^\al}) \circ \cs_\pad[(V_{N,i+1})_N]^\dagger \circ f_\textrm{\upshape pad} \circ \cs_\pad[(V_{N,i+1})_N] \\ &= \, \Tr_{{M^\beta}in} \left[ \kappa_{i,\textrm{\upshape pad}}^\textrm{\upshape out}({M^\beta})^\dagger \circ f'_\textrm{\upshape pad} \circ \cs_\pad[(V_{N,i})_N]^\dagger \right] \\ &\circ \Tr_{{M^\beta}in} \left[
\cs_\pad[(V_{N,i})_N] \circ {\kappa_{i,\textrm{\upshape pad}}^\textrm{\upshape out}}({M^\beta}) \right] \circ \zetaiplpadout({N^\al}) \, . \end{split} \end{equation} We will rewrite the traces in another way, defining $\ket{\Phi^+({M^\beta})} := \sum_k \ket{k}_{{M^\beta}in} \otimes \ket{k}_{{M^\beta}in'}$, where ${M^\beta}in' \cong {M^\beta}in$, and $(\ket{k})_k$ is an arbitrary orthonormal basis. The above can then be expressed as \begin{equation} \label{eq: H6 computation 3}\begin{split} &\zetaiplpadout({N^\al}) \circ \mathcal V_{i+1}^\dagger[f_\textrm{\upshape pad}] \\ &= \, \bra{\Phi^+({M^\beta})}_\textrm{\upshape pad} \circ \kappa_{i,\textrm{\upshape pad}}^\textrm{\upshape out}({M^\beta})^\dagger \circ f'_\textrm{\upshape pad} \circ \cs_\pad[(V_{N,i})_N]^\dagger \circ \ket{\Phi^+({M^\beta})}_\textrm{\upshape pad} \\ &\circ \bra{\Phi^+({M^\beta})}_\textrm{\upshape pad} \circ \cs_\pad[(V_{N,i})_N] \circ {\kappa_{i,\textrm{\upshape pad}}^\textrm{\upshape out}}({M^\beta}) \circ \ket{\Phi^+({M^\beta})}_\textrm{\upshape pad} \circ \zetaiplpadout({N^\al})\\ &= \, \bra{\Phi^+({M^\beta})}_\textrm{\upshape pad} \circ \kappa_{i,\textrm{\upshape pad}}^\textrm{\upshape out}({M^\beta})^\dagger \circ f'_\textrm{\upshape pad} \circ \mathcal V_{i, \textrm{\upshape pad}}^\dagger \left[\ket{\Phi^+({M^\beta})}_\textrm{\upshape pad} \bra{\Phi^+({M^\beta})}_\textrm{\upshape pad} \right] \\ & \circ {\kappa_{i,\textrm{\upshape pad}}^\textrm{\upshape out}}({M^\beta}) \circ \ket{\Phi^+({M^\beta})}_\textrm{\upshape pad} \circ \zetaiplpadout({N^\al}) \, .
\\ \end{split} \end{equation} In this expression, $\bra{\Phi^+({M^\beta})}$ and $\ket{\Phi^+({M^\beta})}$ act on ${M^\beta}out$ and ${M^\beta}out'$; $\kappa_i^\textrm{\upshape out}({M^\beta})$ acts on $P$, ${M^\beta}out$ / ${M^\beta}in$ (on its domain/codomain), and $\mathcal P_i({M^\beta}) \subseteq \mathcal P_i^\textrm{\upshape str}({N^\al})$; $f'$ acts on $P$ and $\mathcal P_i^\textrm{\upshape str}({N^\al})$; and $\mathcal V_{i, \textrm{\upshape pad}}^\dagger \left[\ket{\Phi^+({M^\beta})}_\textrm{\upshape pad} \bra{\Phi^+({M^\beta})}_\textrm{\upshape pad} \right]$ acts on $P$, ${M^\beta}out$, ${M^\beta}in'$ and $\mathcal P_i({M^\beta}) \subseteq \mathcal P_i^\textrm{\upshape str}({N^\al})$. Their composition -- which doesn't act on ${M^\beta}out$ and ${M^\beta}out'$, as these are explicitly terminated by $\bra{\Phi^+({M^\beta})}$ and $\ket{\Phi^+({M^\beta})}$ -- thus acts trivially outside of $P$ and $\mathcal P_i^\textrm{\upshape str}({N^\al}) \setminus \{ {M^\beta} \} = \mathcal P_{i+1}^\textrm{\upshape str}({N^\al})$. Therefore, we can write \begin{equation} \zetaiplpadout({N^\al}) \circ \mathcal V_{i+1}^\dagger[f_\textrm{\upshape pad}] = f''_\textrm{\upshape pad} \circ \zetaiplpadout({N^\al}) \, , \end{equation} with $f'' \in \Lin \left[\mathcal H_P \otimes (\bigotimes_{{O^\ga} \in \mathcal P_{i+1}^\textrm{\upshape str}({N^\al})} \mathcal H_{O^\ga}out) \right]$. \paragraph*{H7} We take ${N^\al} > {M^\beta}$ in a red layer, and $f \in \Lin[\mathcal H_{{N^\al}out}]$. Because ${N^\al}$ is in a red layer and $i+1$ is not a special step, we have ${N^\al} \not\in \mathcal P_i({M^\beta})$. Thus $f$ commutes with $\kappa_i^\textrm{\upshape out}({M^\beta})$, as the latter only acts non-trivially on $P$, $\mathcal P_i({M^\beta})$ and ${M^\beta}$.
Thus, \begin{equation} \label{eq: computation H7 1} \begin{split} &f_\textrm{\upshape pad} \circ \cs_\pad[(V_{N,i+1})_N]^\dagger \circ \zetaiplpadin({N^\al}) \\ &= \, f_\textrm{\upshape pad} \circ \cs_\pad[(V_{N,i+1})_N]^\dagger \circ \zetaipadin({N^\al}) \\ &= \, \Tr_{{M^\beta}in} \left[ \kappa_{i,\textrm{\upshape pad}}^\textrm{\upshape out}({M^\beta})^\dagger \circ \cs_\pad[(V_{N,i})_N]^\dagger \circ \mathcal V_i^\dagger[f_\textrm{\upshape pad}] \circ \zetaipadin({N^\al}) \right] \\ &= \, \Tr_{{M^\beta}in} \left[ \kappa_{i,\textrm{\upshape pad}}^\textrm{\upshape out}({M^\beta})^\dagger \circ \cs_\pad[(V_{N,i})_N]^\dagger \circ \zetaipadin({N^\al}) \circ \mathcal V_i^\dagger[f_\textrm{\upshape pad}] \right] \\ &\overset{\textrm{(H7)}}{=} \, \Tr_{{M^\beta}in} \left[ \kappa_{i,\textrm{\upshape pad}}^\textrm{\upshape out}({M^\beta})^\dagger \circ \cs_\pad[(V_{N,i})_N]^\dagger \circ \zetaipadin({N^\al}) \circ f'_\textrm{\upshape pad} \right] \\ &= \, \Tr_{{M^\beta}in} \left[ \kappa_{i,\textrm{\upshape pad}}^\textrm{\upshape out}({M^\beta})^\dagger \circ \cs_\pad[(V_{N,i})_N]^\dagger \right] \circ \zetaipadin({N^\al}) \circ f'_\textrm{\upshape pad} \\ &= \, \cs_\pad[(V_{N,i+1})_N]^\dagger \circ \zetaipadin({N^\al}) \circ f'_\textrm{\upshape pad} \, , \end{split} \end{equation} where $f'$ acts on $\mathcal F_i^\textrm{\upshape str}({N^\al}) = \mathcal F_{i+1}^\textrm{\upshape str}({N^\al})$.
We can use this to find (noting that $\mathcal V_{i+1}[f]$ and $\zeta_{i+1,\textrm{\upshape pad}}^\textrm{\upshape in}(N^\alpha)$ commute because $f$ and $\zeta_{i+1,\textrm{\upshape pad}}^\textrm{\upshape out}(N^\alpha)$ do) \begin{equation} \label{eq: computation H7 2} \zeta_{i+1,\textrm{\upshape pad}}^\textrm{\upshape in}(N^\alpha) \circ \mathcal V_{i+1}[f_\textrm{\upshape pad}] = \mathcal V_{i+1}[f_\textrm{\upshape pad}] \circ \zeta_{i+1,\textrm{\upshape pad}}^\textrm{\upshape in}(N^\alpha) = \zeta_{i+1,\textrm{\upshape pad}}^\textrm{\upshape in}(N^\alpha) \circ f'_\textrm{\upshape pad} \, . \end{equation} \paragraph*{The special steps} The previous proofs relied on the assumption that neither $i$ nor $i+1$ was a special step. We now consider the other cases. Note first that the proof of (H1) presented earlier is valid in these cases as well. We start with the case in which $i+1$ is a special step (regardless of the status of $i$). One then only has to define, and check the properties of, the $\zeta_{i+1}(N^\alpha)$'s for $N^\alpha \in \mathcal L_{i+1}(M^\beta)$. Note that for such $N^\alpha$'s, the $\zeta_{i}(N^\alpha)$'s were defined at step $i$; indeed, either $i$ is not a special step and all the $\zeta_i$'s were defined, or $i$ is a special step, which entails that $\mathcal L_{i+1}(M^\beta) \subset \mathcal L_{i}(B(i))$, and the $\zeta_i$'s were defined for elements of this latter set. One can then follow a proof strategy that is a time-reversed version of the one presented earlier, except that only the $N^\alpha$'s in $\mathcal L_{i+1}(M^\beta)$ are considered. Namely, we define, for $N^\alpha \in \mathcal L_{i+1}(M^\beta)$, $\zeta_{i+1}^\textrm{\upshape out}(N^\alpha) := \zeta_i^\textrm{\upshape out}(N^\alpha)$ and $\zeta_{i+1,\textrm{\upshape pad}}^\textrm{\upshape in}(N^\alpha) := \mathcal V_i[\zeta_{i+1,\textrm{\upshape pad}}^\textrm{\upshape out}(N^\alpha)]$, and the rest of the proof can be obtained by following the earlier proof, simply replacing in's with out's, looking at daggered versions of maps, etc.
Indeed, the induction hypotheses are fully invariant under time-symmetry, except for one crucial thing: the fact that, when looking in the forward direction, all branches of the layers in the strict past of the branch under consideration have been refilled already, and thus have no $\zeta$'s. Here, however, we are only redefining, and proving properties of, the $\zeta_{i+1}(N^\alpha)$'s of the layer under consideration; everything thus goes as if the branches in its strict future didn't exist. Moreover, the fact that $i+1$ is a special step implies that $B(i+1)$ is in a red layer, which means that, when considering things from a time-reversed perspective, $B(i+1)$ is in a green layer and thus neither $i$ nor $i+1$ is a special step. Thus, for the purposes of defining these $\zeta_{i+1}(N^\alpha)$'s, the situation is exactly symmetric to the one considered previously. A final case to consider is: $i$ is a special step but $i+1$ is not. In this case, $B(i)$ and $M^\beta = B(i+1)$ are in the same red layer, but $B(i+2)$ and the rest of the $N^\alpha > B(i+1)$ aren't. The interpretation is that we have just finished filling up the branches of a red layer, a procedure during which we didn't define $\zeta$'s for the branches above it; so we now have to redefine them. The strategy for this case is to define the $\zeta_{i+1}^\textrm{\upshape in}(N^\alpha)$'s to be equal, not to the $\zeta_{i}^\textrm{\upshape in}(N^\alpha)$'s -- which were not defined -- but to the $\zeta_{j}^\textrm{\upshape in}(N^\alpha)$'s, where $j$ is the latest step that was not special, i.e.\ the latest step at which these were defined. $B(j)$ is then the first branch of the red layer we finished refilling. One can then follow a strategy similar to the previous proof, now deriving that (H2)--(H7) hold at step $i+1$ from the fact that they hold at step $j$. Let us highlight the main steps.
First, from (\ref{eq: step for H1}) holding at all steps between $j$ and $i+1$, we can deduce \begin{equation} \label{eq: expression for special step} \mathcal S_\textrm{\upshape pad}[(V_{N,i+1})_N] = \Tr_{B(t)^\textrm{\upshape in}, \, j < t \leq i+1} \left[ \mathcal S_\textrm{\upshape pad}[(V_{N,j})_N] \circ \left( \prod_{t=j+1}^{i+1} \kappa_{j,\textrm{\upshape pad}}^\textrm{\upshape out}(B(t)) \right) \right] \, , \end{equation} where we also used the fact that, due to how we defined the $\zeta^\textrm{\upshape out}$'s to remain the same when filling up a red layer, we have $\forall t \in \llbracket j+1, i+1 \rrbracket, \; \kappa_t^\textrm{\upshape out}(B(t)) = \kappa_j^\textrm{\upshape out}(B(t))$. For (H2), we fix $N^\alpha > B(i+1)$; note that we then have $N^\alpha \in \mathcal F_i^\textrm{\upshape str}(B(i+1))$, because $B(i+1)$ is the last branch in its (red) layer. As before, we have to prove that, defining $\zeta_{i+1,\textrm{\upshape pad}}^\textrm{\upshape in}(N^\alpha) := \zeta_{j,\textrm{\upshape pad}}^\textrm{\upshape in}(N^\alpha)$ and $\zeta_{i+1,\textrm{\upshape pad}}^\textrm{\upshape out}(N^\alpha) := \mathcal V_{i+1}^\dagger[\zeta_{i+1,\textrm{\upshape pad}}^\textrm{\upshape in}(N^\alpha)]$, the latter doesn't act outside of $P$ and $\mathcal P_{i+1}(N^\alpha)$ (or $\mathcal P_{i+1}^\textrm{\upshape str}(N^\alpha)$ if $N^\alpha$ is in a red layer).
Using (\ref{eq: expression for special step}) and techniques similar to before, we are led to \begin{equation} \label{eq: computation H2 special} \begin{split} \mathcal V_{i+1}^\dagger[\zeta_{j,\textrm{\upshape pad}}^\textrm{\upshape in}(N^\alpha)] &= \left( \prod_{t=j+1}^{i+1} \bra{\phi^+(B(t))}_\textrm{\upshape pad} \right) \circ \left( \prod_{t=j+1}^{i+1} \kappa_{j,\textrm{\upshape pad}}^\textrm{\upshape out}(B(t))^\dagger \right) \\ &\circ \mathcal V_{j, \textrm{\upshape pad}}^\dagger \left[ \prod_{t=j+1}^{i+1} \ket{\Phi^+(B(t))}_\textrm{\upshape pad} \bra{\Phi^+(B(t))}_\textrm{\upshape pad} \right] \circ \zeta_{j,\textrm{\upshape pad}}^\textrm{\upshape out}(N^\alpha) \\ &\circ \left( \prod_{t=j+1}^{i+1} \kappa_{j,\textrm{\upshape pad}}^\textrm{\upshape out}(B(t)) \right) \circ \left( \prod_{t=j+1}^{i+1} \ket{\phi^+(B(t))}_\textrm{\upshape pad} \right) \, , \end{split} \end{equation} in which $\zeta_{j,\textrm{\upshape pad}}^\textrm{\upshape out}(N^\alpha)$ acts only on $P$ and $\mathcal P_j(N^\alpha)$ ($\mathcal P_j^\textrm{\upshape str}(N^\alpha)$ if $N^\alpha$ is in a red layer), and the other terms act only on $\mathcal L(B(i+1))$ and on $P$. Given that all the wires in $\mathcal L(B(i+1))$ are explicitly terminated by the $\phi^+$'s, it follows that $\mathcal V_{i+1}^\dagger[\zeta_{j,\textrm{\upshape pad}}^\textrm{\upshape in}(N^\alpha)]$ only acts on $P$ and on $\mathcal P_j(N^\alpha) \setminus \mathcal L(B(i+1)) = \mathcal P_{i+1}(N^\alpha)$ (or $\mathcal P_j^\textrm{\upshape str}(N^\alpha) \setminus \mathcal L(B(i+1)) = \mathcal P_{i+1}^\textrm{\upshape str}(N^\alpha)$ if $N^\alpha$ is in a red layer).
The proof of (H3) is fully analogous: computing $\mathcal V_{i+1}^\dagger[\zeta_{j,\textrm{\upshape pad}}^\textrm{\upshape in}(N^\alpha) \circ \bar{\zeta}_{j,\textrm{\upshape pad}}^\textrm{\upshape in}(M^\beta)]$ leads to (\ref{eq: computation H2 special}) with $\zeta_{j,\textrm{\upshape pad}}^\textrm{\upshape out}(N^\alpha)$ replaced with $\zeta_{j,\textrm{\upshape pad}}^\textrm{\upshape out}(N^\alpha) \circ \bar{\zeta}_{j,\textrm{\upshape pad}}^\textrm{\upshape out}(M^\beta)$, so that invoking (H3) at step $j$ leads to (H3) at step $i+1$. (H4), as before, is direct, and the proof of (H5) is analogous to the one for the non-special cases. For the proof of (H6), we take $N^\alpha > B(i+1)$ in a green layer and $f \in \Lin[\mathcal H_{(N^\alpha)^\textrm{\upshape in}}]$. Then, the computation is similar to (\ref{eq: H6 computation 1}), (\ref{eq: H6 computation 2}) and (\ref{eq: H6 computation 3}), yielding \begin{equation} \label{eq: computation H6 special} \begin{split} &\zeta_{i+1,\textrm{\upshape pad}}^\textrm{\upshape out}(N^\alpha) \circ \mathcal V_{i+1}^\dagger[f_\textrm{\upshape pad}] \\ &= \left( \prod_{t=j+1}^{i+1} \bra{\phi^+(B(t))}_\textrm{\upshape pad} \right) \circ \left( \prod_{t=j+1}^{i+1} \kappa_{j,\textrm{\upshape pad}}^\textrm{\upshape out}(B(t))^\dagger \right) \circ \mathcal V_{j, \textrm{\upshape pad}}^\dagger \left[ \prod_{t=j+1}^{i+1} \ket{\Phi^+(B(t))}_\textrm{\upshape pad} \bra{\Phi^+(B(t))}_\textrm{\upshape pad} \right] \\ &\circ f'_\textrm{\upshape pad} \circ \left( \prod_{t=j+1}^{i+1} \kappa_{j,\textrm{\upshape pad}}^\textrm{\upshape out}(B(t)) \right) \circ \left( \prod_{t=j+1}^{i+1} \ket{\phi^+(B(t))}_\textrm{\upshape pad} \right) \circ \zeta_{i+1,\textrm{\upshape pad}}^\textrm{\upshape out}(N^\alpha) \, , \end{split} \end{equation} where $f' \in \Lin \left[\mathcal H_P \otimes \left(\bigotimes_{O^\gamma \in \mathcal P_j^\textrm{\upshape str}(N^\alpha)} \mathcal H_{(O^\gamma)^\textrm{\upshape out}}\right)\right]$ was defined through $\zeta_{j, \textrm{\upshape pad}}^\textrm{\upshape out}(N^\alpha) \circ \mathcal V_{j}^\dagger[f_\textrm{\upshape pad}] = f'_\textrm{\upshape pad} \circ \zeta_{j, \textrm{\upshape pad}}^\textrm{\upshape out}(N^\alpha)$ by using (H6) at step $j$. Once again, by looking at where the operators are acting, we can conclude that this defines an $f''$ acting only on $P$ and on $\mathcal P_j^\textrm{\upshape str}(N^\alpha) \setminus \mathcal L(B(i+1)) = \mathcal P_{i+1}^\textrm{\upshape str}(N^\alpha)$. Finally, for (H7), one can follow computations (\ref{eq: computation H7 1}) and (\ref{eq: computation H7 2}) to get \begin{equation} \zeta_{i+1,\textrm{\upshape pad}}^\textrm{\upshape in}(N^\alpha) \circ \mathcal V_{i+1}[f_\textrm{\upshape pad}] = \zeta_{j, \textrm{\upshape pad}}^\textrm{\upshape in}(N^\alpha) \circ f'_\textrm{\upshape pad} \, , \end{equation} where $f'$, obtained by the use of (H7) at step $j$, only acts on $\mathcal F_j^\textrm{\upshape str}(N^\alpha) = \mathcal F_{i+1}^\textrm{\upshape str}(N^\alpha)$. This concludes the proof of the induction step. \paragraph*{Conclusion} As the base case and induction step are true, the induction hypotheses hold at every step up to $n$. In particular, (H1) at step $n$ then reads: \begin{equation} \mathcal S[(U_N)_N] \textrm{ is unitary.} \end{equation} As this was done for $\mathcal S = \mathcal S_{(\Gamma, (\lambda_N)_N)}$ for an arbitrary valid routed graph $(\Gamma, (\lambda_N)_N)$, and for an arbitrary collection of routed unitaries $U_N : \mathcal H_N^\textrm{\upshape in} \overset{\lambda_N}{\to} \mathcal H_N^\textrm{\upshape out}$ following the $\lambda_N$'s, we can invoke Lemma \ref{lem: simplification} and conclude that Theorem \ref{thm:Main Theorem} holds. \qed \end{document}
\begin{document} \title{Affine toric varieties with an open orbit of an $SL_n$ action} \author{N.\,Yu.~Medved} \address{CS Dept. at Higher School of Economics, Moscow} \email{[email protected]} \keywords {Affine algebraic varieties, toric varieties, algebraic group actions, Cox ring.} \begin{abstract} We study affine toric varieties with an action of the group $SL_n$ with a dense orbit. A characterisation in terms of $SL_n \times Q$-modules is given, where $Q$ is a quasitorus. This characterisation is made more explicit in the case $n=3$. It is shown that for $n=3$ the rank of the divisor class group is at most $3$, whereas for $n\geqslant 4$ it is unbounded. \end{abstract} \maketitle \section{Introduction} The ground field $\mathbb{K}$ is assumed to be algebraically closed of characteristic zero. In this paper we study irreducible affine toric varieties with a regular action of the group $SL_n$ with an open orbit. A normal irreducible variety $X$ is said to be toric if it admits an effective action of an algebraic torus $T$ with an open orbit. A classical problem in the study of algebraic group actions is describing orbit closures, i.e.\ varieties with a dense orbit of a group action. For $G=SL_2$ all normal varieties with a dense $SL_2$-orbit were described by V.L.~Popov in \cite{popov_quasihomogeneous_1973} using $U$-invariants. Unfortunately, no such description seems to be available for $SL_n$ with $n>2$. A well-studied class of algebraic varieties is provided by toric varieties. One aspect of their importance is that they provide useful examples, as many of their properties can be computed explicitly. In this paper we are interested in affine toric varieties with an open orbit of a regular action of the group $SL_n$. Their properties may be studied to gain intuition into the possible properties of arbitrary affine varieties with a dense orbit of an $SL_n$ action.
The paper \cite{gaifullin_affine_2008} provided a description of affine toric varieties with a dense orbit of an $SL_2$ action. Extending the methods of that paper, we describe all irreducible affine toric varieties with an open orbit of an $SL_n$ action. In Section \ref{Prelim} we recall basic facts about affine toric varieties and introduce the Cox construction and the total coordinate space of a variety. In this section we also introduce the notion of a prehomogeneous vector space, that is, a vector space with a regular action of an algebraic group with an open orbit. We show that a unique prehomogeneous vector space may be associated with every affine toric variety with a dense $SL_n$-orbit. In Section \ref{Gale} we obtain conditions for a prehomogeneous vector space to lie in the image of this correspondence. This allows us to reduce our problem to the classification of the prehomogeneous vector spaces satisfying those conditions. For that we use the classification from \cite{sato_kimura_1977}. In Section \ref{config} we provide a criterion for an affine space with a linear quasitorus action to be the total coordinate space of an affine toric variety. Applying this result in Section \ref{n=3}, we establish which prehomogeneous vector spaces may appear in the case $n=3$. The result is contained in Theorem~\ref{thm_n_3}. One aspect of this classification is that all of these varieties have class group rank equal to $0$, $1$, $2$ or $3$. This corresponds nicely to the case $n=2$, where it follows from a well-known result of V.L.~Popov \cite{popov_quasihomogeneous_1973} that all varieties with a dense $SL_2$-orbit, not necessarily toric ones, have class group rank either $0$ or $1$. In Section \ref{n=4} we show that this behaviour does not persist for $n \geqslant 4$: in fact, for any $n \geqslant 4$ and any $d \in \mathbb{N}$ there is such a variety with class group rank equal to $d$.
Moreover, the constructed example has trivial stabilizer of the generic point. The author would like to thank his scientific advisor S.\,A.\,Gaifullin for many helpful discussions. The author would also like to thank I.\,V.\,Arzhantsev for providing valuable feedback on the subject of the paper. \section{Preliminaries}\label{Prelim} Let $X$ be a toric variety, that is, a variety with an open orbit of an effective action of an algebraic torus. It can be described in combinatorial terms by introducing a lattice $N$ and a set of strongly convex polyhedral cones in the space $N \otimes_{\mathbb{Z}} \mathbb{Q}$ satisfying several properties, called its fan. For details we refer the reader to \cite{cox2011toric}. We shall only require the case when $X$ is affine, in which case the fan has only one cone of maximal dimension. Let its rays have $\rho_1, \ldots, \rho_\tau$ as their primitive vectors in the lattice $N$ of one-parameter subgroups. There is a bijection between these rays and the prime $T$-invariant Weil divisors; let us denote the divisor corresponding to $\rho_i$ by $D_{\rho_i}$. The notion of the \textit{Cox ring} $\mathcal{R}(X)$ of a toric variety $X$ was formulated by D.~Cox as follows: $\mathcal{R}(X)$ is the polynomial ring in $\tau$ variables $x_1, \ldots, x_\tau$ graded by the divisor class group of $X$: a monomial $\prod x_i^{a_i}$ has degree $\left[\sum a_i D_{\rho_i}\right]$, where $[D]$ denotes the class of the divisor $D$. A similar construction can be introduced in the non-toric case if certain additional conditions hold (see, for example,~\cite{arzhantsev_cox_2015}). In the non-toric case the ring $\mathcal{R}(X)$ is not necessarily a polynomial ring. The Cox ring of a variety is unique up to an isomorphism of graded rings. By $\overline{X}$ we denote $\operatorname{Spec} \mathcal{R}(X)$~--- the \textit{total coordinate space} of the variety $X$.
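To make the grading concrete, here is a standard worked example (added for illustration; it is not taken from this paper): the Cox ring of the affine quadric cone.

```latex
% Standard example: X = V(xy - z^2) \subset \mathbb{A}^3, the affine toric
% variety of the cone in N = \mathbb{Z}^2 spanned by \rho_1 = (1,0), \rho_2 = (1,2).
% The map M = \mathbb{Z}^2 \to \mathbb{Z}^2, m \mapsto (\langle m, \rho_1 \rangle,
% \langle m, \rho_2 \rangle), has cokernel \mathbb{Z}/2\mathbb{Z}, so
\operatorname{Cl}(X) \cong \mathbb{Z}/2\mathbb{Z}, \qquad
\mathcal{R}(X) = \mathbb{K}[x_1, x_2], \qquad
\deg x_1 = \deg x_2 = \bar{1}.
% The degree-\bar{0} component is \mathbb{K}[x_1^2,\, x_1 x_2,\, x_2^2] \cong \mathbb{K}[X],
% so \overline{X} = \mathbb{A}^2, and the characteristic quasitorus Q = \mu_2 acts by
% (x_1, x_2) \mapsto (-x_1, -x_2), with Cox realization \mathbb{A}^2 \to \mathbb{A}^2 /\!/ \mu_2 = X.
```

The relation $(x_1^2)(x_2^2) = (x_1 x_2)^2$ among the invariants recovers the defining equation $xy = z^2$.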
Since the homogeneous component of degree $[0]$ in $\mathcal{R}(X)$ is isomorphic to $\mathbb{K}[X]$, we may consider the morphism $\pi: \overline{X} \to X$, which is called the \textit{Cox realization} of the variety $X$. It may be expressed as the categorical quotient by the action of the characteristic quasitorus $Q=\operatorname{Spec} \mathbb{K}[\operatorname{Cl}(X)]$. Consider an abelian group $K$ and a $K$-graded integral $\mathbb{K}$-algebra $R$. We say that a nonzero nonunit element $f$ is $K$-\textit{prime} if it is homogeneous and for any homogeneous $g, h$ such that $f \mid gh$ we have either $f \mid g$ or $f \mid h$. We say that $R$ is \textit{factorially $K$-graded} if every homogeneous nonzero nonunit element is a product of $K$-primes. Let us denote the categorical quotient of $Z$ by the action of $Q$ by $Z/\!/Q$. Let us state a criterion for a variety $Z$ with an action of a quasitorus $Q$ to be the total coordinate space of an affine variety $X$: \begin{propos}{\rm(Corollary of Statement 2.3 in \cite{arzhantsev_cox_2010})}\label{universal} Consider an action of a quasitorus $Q$ on a normal irreducible affine variety $Z$ with an open $Q$-invariant subset $U$ such that the following conditions hold: \begin{enumerate} \item $\mathbb{K}[Z]$ is factorially graded by the character group of $Q$; \item $\mathrm{codim}_Z (Z \setminus U) \geqslant 2$; \item $Q$ acts freely on $U$; \item every fiber of the quotient morphism $\pi: Z \to Z/\!/Q$ intersecting $U$ consists of one $Q$-orbit. \end{enumerate} Set $X = Z/\!/Q$. Then $Q$ is isomorphic to the characteristic quasitorus of $X$ and $Z$ is isomorphic to the total coordinate space $\overline{X}$. \end{propos} \begin{propos}\label{1} Let $G$ be a simply connected semisimple algebraic group acting on an affine variety $X$ with an open orbit, and let $Q$ be the characteristic quasitorus of the variety $X$.
Then $X$ is toric if and only if there exists a $(G \times Q)$-module $V$ with an open $(G \times Q)$-orbit such that $X$ is $G$-equivariantly isomorphic to $V/\!/Q$ and $V \to V/\!/Q$ is the Cox realisation. \end{propos} \begin{proof} ($\Rightarrow$) The existence of such a module follows immediately from applying \cite[Th. 4.2.3.2]{arzhantsev_cox_2015} to the total coordinate space of $X$. The orbit is open by \cite[Lemma~1]{arzhantsev2010homogeneous}.\\ ($\Leftarrow$) The quotient is toric since the action of $Q$ is linear. The orbit is open, again by \cite[Lemma~1]{arzhantsev2010homogeneous}. \end{proof} Let us now introduce the notion of a prehomogeneous vector space. \begin{definition} A vector space $V$ with a linear action of a connected algebraic group $G$ is called a \textit{prehomogeneous vector space} if this action has a dense orbit. \end{definition} In \cite{KIMURA1983} (see also \cite{kimura2002introduction}) there is a list of all prehomogeneous vector spaces for groups of type~$G_s \times Q$, where $G_s$ is simple and $Q$ is a quasitorus. From this list we obtain all prehomogeneous vector spaces of the group $SL_n \times Q$. Let $\Lambda_i$ denote the irreducible representation (Weyl module) of $SL_n$ in $\Lambda^i (\mathbb{K}^n)$. We identify the trivial representation $\mathbb{K}^1$ with~$\Lambda_0$. It was shown in \cite{sato_kimura_1977} that any prehomogeneous vector space of~$SL_n \times Q$ must decompose into a direct sum of copies of $\Lambda_0, \Lambda_1, \Lambda_2, \Lambda_3, S^2(\mathbb{K}^n)$ and their duals. Moreover,~$\Lambda_3$ may appear only if $n=6,7,8$. The full list of possible prehomogeneous vector spaces is rather long, so we present only the case $n=3$ and a special case for $n>3$, which we will require later.
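Before the classification, a minimal example illustrating the definition (ours, added for illustration): take $G = SL_2$ and $Q = \mathbb{K}^{\times}$.

```latex
% Minimal example: V = \Lambda_1 \oplus \Lambda_1 \cong \operatorname{Mat}_{2 \times 2}(\mathbb{K}),
% with the action (g, t) \cdot A = t\, g A of SL_2 \times \mathbb{K}^{\times}.
% Then \det((g, t) \cdot A) = t^2 \det A, and for invertible A, B one solves
% B = t\, g A by choosing t with t^2 = \det(B A^{-1}) and g = t^{-1} B A^{-1} \in SL_2.
% Hence the invertible matrices form a single open orbit:
(SL_2 \times \mathbb{K}^{\times}) \cdot E
  = \{\, A \in \operatorname{Mat}_{2 \times 2}(\mathbb{K}) : \det A \neq 0 \,\},
% so V is a prehomogeneous vector space. Here \mathbb{K}^{\times} acts on
% \left\langle \det \right\rangle with weight 2, hence with an open orbit.
```

This is the $n = 2$, $r = 0$ instance of the situation described in this section.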
\begin{propos}\label{case_dim_3} Every prehomogeneous vector space $V$ of the group $SL_3 \times Q$ is either one of the following or conjugate to one of them: \begin{enumerate} \item $\underbrace{\Lambda_1 \oplus \ldots \oplus \Lambda_1}_l \oplus \underbrace{\Lambda_0 \oplus \ldots \oplus \Lambda_0}_r ,$ where $l=0,1,2$ and $Q$ acts with an open orbit on $\Theta(V)=\underbrace{\Lambda_0 \oplus \ldots \oplus \Lambda_0}_r$; \item $\Lambda_1 \oplus \Lambda_1 \oplus \Lambda_1 \oplus \underbrace{\Lambda_0 \oplus \ldots \oplus \Lambda_0}_r ,$ where $Q$ acts with an open orbit \\ on $\Theta(V)=\left\langle \det\right\rangle \oplus \underbrace{\Lambda_0 \oplus \ldots \oplus \Lambda_0}_r$; \item $\underbrace{\Lambda_1 \oplus \ldots \oplus \Lambda_1}_{l-1} \oplus (\Lambda_1)^{\ast} \oplus \underbrace{\Lambda_0 \oplus \ldots \oplus \Lambda_0}_r,$ where $l$ is either $2$ or $3$ and $Q$ acts with an open orbit on $\Theta(V) = \underbrace{\Lambda_0 \oplus \ldots \oplus \Lambda_0}_r \oplus \left\langle g_1 \right\rangle \oplus \ldots \oplus \left\langle g_{l-1} \right\rangle,$ where $g_i$ is the polynomial representing the pairing between the $i$-th copy of $\Lambda_1$ and~$(\Lambda_1)^{\ast}$. \end{enumerate} \end{propos} \begin{propos}\label{n_by_n} For any $n \geqslant 2$ the representation $\underbrace{\Lambda_1 \oplus \ldots \oplus \Lambda_1}_n \oplus \underbrace{\Lambda_0 \oplus \ldots \oplus \Lambda_0}_r $ is a prehomogeneous vector space of the group $SL_n \times Q$ whenever $Q$ acts with an open orbit on $\left\langle \det\right\rangle \oplus \underbrace{\Lambda_0 \oplus \ldots \oplus \Lambda_0}_r$. \end{propos} \section{Gale duality and positively 2-spanning polyhedra}\label{Gale} We provide a brief introduction to Gale duality; for more details see, for example, \cite{matousek_lectures_2002}.
When we refer to a \textit{collection} of some objects, we mean that any element may belong to the collection in several copies, which we consider to be separate. Note that when we \textit{delete} a member of a collection, this refers to only one copy of the member, so if it existed in several copies, the others remain. \begin{definition} A \textit{point configuration} $\mathcal{A}$ in a $d$-dimensional affine space $\mathbb{A}$ over $\mathbb{Q}$ is an arbitrary collection of points $a_1, \ldots, a_n$ not lying in an affine subspace of smaller dimension. \end{definition} \begin{remark} It immediately follows from the definition that $n \geqslant d+1$. In what follows we shall consider only the case $n \geqslant d+2$; some of the statements remain applicable in the case $n=d+1$, but not all of them. \end{remark} \begin{definition} A \textit{vector configuration} $\mathcal{G}$ in a vector space $W$ over $\mathbb{Q}$ is an arbitrary collection of vectors $g_1, \ldots, g_n$. We call $\mathcal{G}$ a \textit{vector configuration with zero sum} whenever $\sum g_i = 0$. \end{definition} Let us consider an affine space identified with $\mathbb{Q}^d$ and a point configuration $\mathcal{A}$ consisting of $n$ points $a_1, \ldots, a_{n}$. The Gale transform of $\mathcal{A}$ is a vector configuration with zero sum $\mathcal{G}$, consisting of $n$ vectors in $\mathbb{Q}^{n-d-1}$, defined as follows. Consider the $d \times n$ matrix $A$ having the coordinates of the points $a_i$ as its columns. Let us denote the row vector $(\underbrace{1,1,\ldots,1}_n)$ by $e$. We append $e$ to the matrix $A$, obtaining a $(d+1) \times n$ matrix $\tilde{A}$. As the points $a_i$ do not lie in an affine subspace of smaller dimension, this matrix has rank $d+1$. Now we consider the $(d+1)$-dimensional subspace $W$ in $\mathbb{Q}^n$ generated by its rows. Let $b_1, \ldots, b_{n-d-1}$ be a basis of the orthogonal complement $W^{\perp}$. We write these vectors into the rows of a matrix $B$.
Finally, we let $g_i$ be the columns of the matrix $B$. The collection $\mathcal{G}$ of these $g_i$ is called the Gale transform of $\mathcal{A}$. \begin{remark} We note that the resulting $n$ vectors do not correspond to the $n$ points individually. The Gale transform only establishes a correspondence between collections. Moreover, this correspondence is not one-to-one (see the next lemmas). \end{remark} \begin{remark} The Gale transform image $\mathcal{G}$ has zero sum, as the rows of $B$ are orthogonal to $e$. \end{remark} \begin{remark} The addition of $e$ can be thought of as a projectivization of the configuration in the affine space. A linear version of Gale duality can be defined similarly by skipping this step. Note that the Gale transform of an arbitrary vector configuration might not have zero sum, as $e$ no longer necessarily lies in the linear span of the rows of $\tilde{A}$. Moreover, for the linear duality another description exists in terms of tensor products; for details refer to~\cite[Chapter~2.2.1]{arzhantsev_cox_2015}. \end{remark} \begin{remark} \begin{enumerate} \item For a different choice of basis in $W^{\perp}$ the Gale transform images $\mathcal{G}=\{g_1,\ldots,g_n\}$ and $\mathcal{G}'=\{g_1', \ldots, g_n'\}$ coincide up to a linear transformation. \item If two point configurations $\mathcal{A}$ and $\mathcal{A}'$ coincide up to an affine transformation, then the subspace $W$, and hence the matrix $B$, can be chosen to be the same, so the Gale transform images coincide. \end{enumerate} \end{remark} Let us denote by $\operatorname{AffDep}(\mathcal{A})$ the set of all affine dependencies among the columns of the matrix $\tilde{A}$, that is, $\{\alpha \in \mathbb{Q}^n: \alpha_1 a_1 + \ldots + \alpha_n a_n =0,\ \alpha_1+ \ldots + \alpha_n = 0\}$. By $\operatorname{AffVal}(\mathcal{A})$ we denote the set $\{(f(a_1), \ldots, f (a_n)) \mid f: \mathbb{Q}^d \to \mathbb{Q} \text{~--- an affine function} \}$.
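The construction above is straightforward to carry out in exact arithmetic: the rows of $B$ form a basis of the kernel $\{x : \tilde{A} x = 0\} = W^{\perp}$. The following sketch (our code; the names `nullspace` and `gale_transform` are ours) computes the Gale transform of the configuration $0, 1, 2, 3$ on the affine line ($d = 1$, $n = 4$).

```python
from fractions import Fraction

def nullspace(rows):
    """Basis of {x : M x = 0}, for M given as a list of rows,
    via exact Gaussian elimination over the rationals."""
    m = [[Fraction(v) for v in row] for row in rows]
    ncols = len(m[0])
    pivots, r = [], 0
    for c in range(ncols):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        m[r] = [v / m[r][c] for v in m[r]]          # normalize pivot row
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                factor = m[i][c]
                m[i] = [a - factor * b for a, b in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
        if r == len(m):
            break
    basis = []
    for fc in (c for c in range(ncols) if c not in pivots):
        x = [Fraction(0)] * ncols
        x[fc] = Fraction(1)
        for row_idx, pc in enumerate(pivots):
            x[pc] = -m[row_idx][fc]                 # back-substitute free variable
        basis.append(x)
    return basis

def gale_transform(points):
    """Gale transform of a point configuration given as coordinate tuples."""
    n, d = len(points), len(points[0])
    # \tilde{A}: point coordinates as columns, plus the all-ones row e
    a_tilde = [[Fraction(p[i]) for p in points] for i in range(d)]
    a_tilde.append([Fraction(1)] * n)
    b_rows = nullspace(a_tilde)                     # rows of B: basis of W^perp
    return [tuple(row[i] for row in b_rows) for i in range(n)]

G = gale_transform([(0,), (1,), (2,), (3,)])        # four vectors in Q^2 with zero sum
```

Since the four points are distinct (hence in general position on the line), any $n-d-1 = 2$ of the resulting vectors are linearly independent, in line with the lemmas of this section.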
Similarly, $\operatorname{LinDep}(\mathcal{G})$ stands for $\{\alpha \in \mathbb{Q}^{n}: \alpha_1 g_1 + \ldots + \alpha_n g_{n} =0\}$ and $\operatorname{LinVal}(\mathcal{G})$ for $\{(f(g_1), \ldots, f (g_n)) \mid f: \mathbb{Q}^{n-d-1} \to \mathbb{Q} \text{~--- a linear function} \}$. \begin{lemma}\label{dual} Let $\mathcal{A}$ be a point configuration in the space $\mathbb{Q}^d$ and $\mathcal{G}$~--- the Gale transform of $\mathcal{A}$. Then $\operatorname{AffVal}(\mathcal{A})=\operatorname{LinDep}(\mathcal{G})$ and $\operatorname{AffDep}(\mathcal{A})=\operatorname{LinVal}(\mathcal{G})$. \end{lemma} \begin{proof} For any $(b_1, \ldots, b_n) \in \operatorname{AffVal}(\mathcal{A})$ there exists an affine function $f: \mathbb{Q}^d \to \mathbb{Q}$ such that $f(a_i)=b_i$ for all $i$. Let us consider the coordinates of $f$, denoting its linear part by a row vector $f_1$ and its constant term by $f_0$. As the rows of $\tilde{A}$ and $B$ are orthogonal, we get the matrix equations $AB^T = 0$ and $eB^T = 0$. Applying the equation $b = f_1 A + f_0 e$, we get the following: $$\sum b_i g_i = bB^T = (f_1A+f_0e)B^T = f_1AB^T+f_0eB^T = 0.$$ To prove the converse, note that if $\sum b_i g_i = 0,$ then $b$ is orthogonal to all the rows of the matrix $B$, and is thus a linear combination of the rows of $\tilde{A}$. The proof of the second equality is essentially the same.\end{proof} \begin{lemma}\label{general_position} The points of $\mathcal{A}$ lie in general position if and only if any $n-d-1$ vectors of $\mathcal{G}$ form a basis of $\mathbb{Q}^{n-d-1}$. \end{lemma} \begin{proof} Suppose some $n-d-1$ vectors of $\mathcal{G}$ do not form a basis of $\mathbb{Q}^{n-d-1}$. Then there is a linear dependence $\sum b_i g_i = 0$ with at most $n-d-1$ nonzero coordinates. By the previous Lemma, $b$ may be interpreted as an element of $\operatorname{AffVal}(\mathcal{A})$, which means that there exists an affine function $f$ vanishing at at least $d+1$ points of $\mathcal{A}$.
This means there is a hyperplane containing at least $d+1$ of the points. These arguments may easily be followed backwards to prove the converse. \end{proof} \begin{lemma}\label{in_one_face} Let $I$ be a subset of $\{1,\ldots,n\}$. Then the points $\{a_i \mid i \in I\}$ lie in one face of the convex hull $\operatorname{conv}(\mathcal{A})$ if and only if $0 \in \operatorname{conv}\{g_j \mid j \not\in I\}$. \end{lemma} \begin{proof} If the points $\{a_i \mid i\in I\}$ lie in a common face of $\operatorname{conv}(\mathcal{A})$, then there is an affine function $f$ which is zero at the $a_i$, $i \in I$, and nonnegative at all other points of the configuration. Moreover, at some point of the configuration it is positive, as the points of $\mathcal{A}$ cannot all lie in a hyperplane. Let us consider the vector $b = \left(f(a_1),f(a_2), \ldots, f(a_n)\right)$; it belongs to $\operatorname{LinDep}(\mathcal{G})$, which implies $\sum b_j g_j = 0$. As $b_i = 0$ for $i \in I$, we can delete those terms from the sum, obtaining $\sum\limits_{j \not\in I} b_j g_j = 0$, where all $b_j$ are nonnegative. This equation means exactly that $0 \in \operatorname{conv}\{g_j \mid j \not\in I\}$. It is easy to see that all the logical transitions are reversible. \end{proof} \section{A criterion for the Cox ring of an affine toric variety}\label{config} For the next definition and its discussion in the (unrelated) context of combinatorial geometry, refer to \cite{2013arXiv1304.7186P}. \begin{definition} We call a vector configuration in $W$ \textit{positively 2-spanning} if for any linear hyperplane $H$ both open halfspaces $H^+$ and $H^-$ contain at least 2 vectors from the configuration, not necessarily different. An equivalent definition is that whenever we delete any vector from the configuration, the remaining vectors do not all lie in one closed halfspace.
\end{definition} \begin{lemma}\label{dual_to_polygon} A vector configuration $g_1, \ldots, g_n$ spanning $\mathbb{Q}^d$ is positively 2-spanning if and only if the Gale dual point configuration $a_1, \ldots, a_n$ in $\mathbb{Q}^{n-d-1}$ is the set of vertices of a convex polyhedron, without repetitions. \end{lemma} \begin{proof} Suppose the configuration $\mathcal{G}$ is not positively 2-spanning. This means that there exists a vector $g_j$ such that all the others lie in a closed halfspace, i.e.\ there exists a linear function $h$ such that $h(g_i) \geqslant 0$ for all $i \neq j$. Let $b_i = h(g_i)$. By Lemma~\ref{dual} we obtain $b \in \operatorname{AffDep}(\mathcal{A})$, so $\sum b_i a_i = 0$ and $\sum b_i = 0$. As the configuration spans the entire space, $h$ cannot vanish on all the vectors of the configuration, thus $\sum\limits_{i\neq j} b_i > 0$, which implies $b_j<0$. Dividing by $|b_j|$ and isolating $a_j$ on one side of the equation, we obtain $\sum\limits_{i \neq j} \frac{b_i}{|b_j|} a_i = a_j$, and from $\sum b_i = 0$ it follows that $\sum\limits_{i\neq j} \frac{b_i}{|b_j|} = 1$. Thus the existence of such a vector $b\in \operatorname{AffDep}(\mathcal{A})$ with one negative coordinate $b_j$ and all others nonnegative is equivalent to the existence of a point $a_j$ lying in the convex hull of the others. This condition means exactly that the configuration $\{a_i\}$ is not the vertex set of a convex polyhedron. \end{proof} \begin{lemma}\label{scale_one_vector} Suppose a configuration $g_1, \ldots, g_n$ is positively 2-spanning. Then for any positive rationals $\lambda_i$ the configuration $\{\lambda_i g_i \}$ is positively 2-spanning too. \end{lemma} \begin{proof} Obviously, if a vector $g_i$ lies in some open halfspace, then the vector $\lambda_i g_i$ lies in the same halfspace. Thus the lemma is trivial.
\end{proof} \begin{lemma}\label{monom} Let $g_1, \ldots, g_s$ be a vector configuration whose convex hull contains some neighbourhood of $0$, and fix an index $k$. Then there exists a nonnegative linear dependence $\sum \alpha_i g_i = 0$ with $\alpha_k > 0$. \end{lemma} \begin{proof} If we triangulate the surface of the convex hull, we obtain a partition of the space into strongly convex simplicial cones. Consider the cone containing $-g_k$. Then, by a well-known lemma, there is a positive coefficient $N$ such that $-Ng_k$ may be expressed as a nonnegative linear combination of the elements generating the rays of this cone. \end{proof} Let $\overline{M}$ be a finitely generated abelian group. We set $M$ to be the factor group $\overline{M}/\operatorname{Tor}(\overline{M})$. For an element $\overline{w}$ of $\overline{M}$, let $w$ denote its image in $M$. Let $M_{\mathbb{Q}} = M \otimes_{\mathbb{Z}} \mathbb{Q}$. \begin{definition} In the above notation, we say that a collection $\{\overline{w_i}\}$ in $\overline{M}$ satisfies condition~$\ast$ if the following two conditions hold: \begin{enumerate} \item the configuration $\{w_i\}$ in $M_{\mathbb{Q}}$ is positively 2-spanning; \item if we delete any $\overline{w_i}$, the rest generate $\overline{M}$. \end{enumerate} \end{definition} If $Q$ is a quasitorus, let $\mathfrak{X}(Q)$ denote the character group of $Q$, that is, the group of all homomorphisms $Q \to \mathbb{K}^{\times}$. Let us prove the following criterion, which allows one to determine whether an affine space with a quasitorus action is the total coordinate space of an affine toric variety: \begin{propos}\label{crit} Suppose a quasitorus $Q$ acts linearly on $V = \mathbb{K}^l$. Let $\overline{w_i} \in \overline{M}=\mathfrak{X}(Q)$ be the weights of the coordinate functions $x_i$. Then $V$ equipped with this action of $Q$ is the Cox realisation of an affine toric variety $V/\!/Q$ if and only if $\{\overline{w_i}\}$ satisfies condition~$\ast$.
\end{propos} \begin{proof} ($\Rightarrow$) Let $\pi: V \stackrel{/\!/Q}{\twoheadrightarrow} X$ be the Cox realisation of a toric variety $X \cong V/\!/Q$. Let us prove that condition~$\ast$ holds. By Proposition \ref{universal} there is an open subset $U \subset V$ such that: \begin{itemize} \item $\mathrm{codim}_{V} V\setminus U \geqslant 2$, \item $Q$ acts freely on $U$, \item every fiber of $\pi$ intersecting $U$ consists of exactly one $Q$-orbit. \end{itemize} Suppose there is an index $j$ such that $\{\overline{w_1}, \ldots, \overline{w_{j-1}}, \overline{w_{j+1}}, \ldots, \overline{w_l}\}$ do not generate $\overline{M}$. We can rephrase this as follows: the subgroup $$B=\left<\overline{w_1}, \ldots, \overline{w_{j-1}}, \overline{w_{j+1}}, \ldots, \overline{w_l}\right> \subset \overline{M}$$ is proper. \begin{lemma}\label{proper_subgroup} Assume $\overline{M} = \mathfrak{X}(Q)$. Then a subgroup $B\subset \overline{M}$ is proper if and only if there is an element $s\in Q$ such that $s$ is not the identity and $b(s)=1$ for all $b\in B$. \end{lemma} \begin{proof} Let $F$ be a finitely generated free abelian group and $\tau: F \to \overline{M}$ a surjection. Let $C=\tau^{-1}(B)$. From $B \varsubsetneq \overline{M}$ it immediately follows that $C \varsubsetneq F$. Let us choose compatible bases $\{f_i\}$ of $F$ and $\{d_i f_i\}$ of $C$. If $d_k=0$ for some index $k$, consider $\phi: F \to \mathbb{K}^{\times}$ given by $\phi(f_k) = \zeta$ and $\phi(f_i) = 1$ for all $i \neq k$, where $\zeta$ is an arbitrary element of $\mathbb{K}^{\times}$ different from $1$. If all $d_i \neq 0$, then pick an index $k$ with $d_k > 1$ (it exists since $C \varsubsetneq F$) and set $\phi(f_k)$ equal to a primitive $d_k$-th root of unity and $\phi(f_i) = 1$ for all $i \neq k$. In both cases $\phi \neq 1$, but $\phi(C) = 1$ and hence $\phi(\ker \tau)=1$. Thus $\phi$ factors through $\tau$ and we obtain a character $s = \phi \circ \tau^{-1}: \overline{M} \to \mathbb{K}^{\times}$. This is the required $s$.
The second part of the statement is obvious. \end{proof} Let us consider the element $s \in Q$ from Lemma \ref{proper_subgroup}. Let us show that $Q$ does not act freely on $U$. Indeed, $\overline{w_i}(s) = 1$ for all $i \neq j$, which means that all the coordinates $x_i$ with $i \neq j$ are invariant with respect to $s$. Thus $s$ acts trivially on $U \cap \{x_j=0\}$. But this set is nonempty since $\mathrm{codim}_{V} V\setminus U \geqslant 2$. Now let us assume that the configuration $\{w_i\}$ is not positively 2-spanning. This means there are an index $j$ and a closed halfspace $\alpha^+$ such that $w_i \in \alpha^+$ for all $i \neq j$. Let $\alpha$ denote the hyperplane that is the boundary of $\alpha^+$. Let $K \subset \{1,\ldots, l\} \setminus\{j\}$ be the set of indices $k$ such that $w_k \not \in \alpha$. If $K$ is nonempty, pick an arbitrary $k \in K$ and consider the set $U_k=(\{x_j=0\} \cap U) \cap \{x_k \neq 0\}$. It is nonempty as an intersection of nonempty open subsets of $\{x_j = 0\}$. Thus we may consider an arbitrary vector $v \in U_k$. Consider another vector $v'$ obtained from $v$ by replacing the $k$-th coordinate by $0$. Every regular $Q$-invariant monomial containing $x_k$ must also contain $x_j$, and thus vanishes at both $v$ and $v'$. The invariant monomials not containing $x_k$ take equal values at $v$ and $v'$. Thus all regular $Q$-invariants take the same values at $v$ and $v'$, yet these points lie in different orbits of the quasitorus, which contradicts the assumptions. If $K$ is empty, then there is no regular $Q$-invariant containing $x_j$. Analogously to the previous paragraph, we may consider $v \in U \setminus \{x_j = 0\}$ and $v'$ obtained by replacing the $j$-th coordinate by $0$. The same arguments apply. ($\Leftarrow$) Let us assume that condition~$\ast$ holds. We again apply Proposition~\ref{universal}.
The factoriality condition holds automatically: the polynomial ring is factorial, hence homogeneously factorial with respect to any grading. As $U$ let us take the set $$U = V \setminus \bigcup\limits_{i\neq k} \{ x_i = 0, x_k = 0\},$$ that is, the set of points with at most one zero coordinate. Then $\mathrm{codim}_V V\setminus U \geqslant 2$. Let us show that $Q$ acts freely on $U$. Indeed, suppose there is an element $s\in Q$ stabilizing a point $u\in U$. This means that $s$ acts trivially on all the coordinate functions not vanishing at $u$. So there is an index $j$ such that $s$ acts trivially on all the coordinate functions except possibly $x_j$. As $\{\overline{w_i} \mid i \neq j\}$ generate $\overline{M}$, for every $\overline{w} \in \overline{M}$ there is a representation $\overline{w} = \sum\limits_{i\neq j} \alpha_i \overline{w_i}$. Since $\overline{w_i}(s)=1$ for all $i\neq j$, we get $\overline{w}(s)=1$ for all $\overline{w} \in \overline{M}$. Thus $s$ equals $1$. Finally, let us show that for every $u \in U$ the preimage $\pi^{-1}(\pi(u))$ is exactly one $Q$-orbit. Fix a point $u \in U$ and suppose that all its coordinates except possibly $x_j$ are nonzero. Let $u'$ be another point in $\pi^{-1}(\pi(u))$; we are going to show that all its coordinates except possibly $x_j$ are also nonzero. This will imply that $u' \in U$. Pick a coordinate $x_k$ with $k \neq j$. As the configuration $\{w_i\}$ is positively 2-spanning, there is a nonnegative linear dependence $\sum\limits_{i \neq j} \alpha_i w_i = 0$, and by Lemma~\ref{monom} it can be chosen with the coefficient $\alpha_k$ positive. Consider the corresponding $M$-homogeneous monomial $m = \prod\limits_{i \neq j} x_i^{\alpha_i}$. We raise it to some power $d$ so that $m^d$ is $\overline{M}$-homogeneous, that is, $Q$-invariant. Note that $m^d$ is nonzero at $u$, thus it is nonzero at every point in $\pi^{-1}(\pi(u))$. This implies that for any $u'$ in $\pi^{-1}(\pi(u))$ its $k$-th coordinate is nonzero.
As this holds for every $k \neq j$, we obtain $u' \in U$; in other words, $\pi^{-1}(\pi(u)) \subset U$. All the $Q$-orbits in $U$ are of the same dimension, thus none of them can lie in the closure of another, so $\pi^{-1}(\pi(u))$ consists of only one $Q$-orbit. \end{proof} The following proposition is an immediate consequence of Prop.~\ref{1} and Prop.~\ref{crit}: \begin{propos}\label{result} Affine toric varieties with an action of a simply connected semisimple group $G$ with an open orbit are exactly the categorical factors by the action of $Q$ of $(G\times Q)$-modules with an open $(G\times Q)$-orbit for which condition~$\ast$ holds. \end{propos} We also provide two lemmas about positively 2-spanning configurations that will be used later. \begin{lemma}\label{d+3} Suppose a collection $\{w_1,\ldots, w_n\}$ in an $s$-dimensional space is positively 2-spanning. Then either $s=0$ or $n \geqslant s+3$. \end{lemma} \begin{proof} Suppose $s \neq 0$ and consider a hyperplane spanned by $s-1$ vectors of the collection. By definition there are at least 2 vectors on each side of this hyperplane, thus there are at least $s+3$ elements in the collection. \end{proof} \begin{lemma}\label{projection} Let $M=A\oplus B$ and $w_1, \ldots, w_s \in M$. If the collection $\{w_i\}$ is positively 2-spanning, then the collection of projections $\{\operatorname{proj}_{B} (w_i)\}$ onto the second summand is also positively 2-spanning. \end{lemma} The proof immediately follows from the definition.
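As an aside, and not as part of the formal argument, the planar case of these notions is easy to test on a computer. The following Python sketch is ours (the function names are invented for illustration, and floating-point angles stand in for exact rational arithmetic): a configuration in $\mathbb{Q}^2$ is positively 2-spanning exactly when deleting any single vector leaves a configuration whose consecutive angular gaps are all strictly less than $\pi$.

```python
import math

def positively_spans_r2(vectors):
    """A planar configuration positively spans R^2 iff every gap
    between consecutive angular directions is strictly less than pi
    (equivalently, 0 lies in the interior of the convex hull)."""
    angles = sorted(math.atan2(y, x) for x, y in vectors)
    gaps = [b - a for a, b in zip(angles, angles[1:])]
    gaps.append(2 * math.pi - (angles[-1] - angles[0]))  # wrap-around gap
    return all(g < math.pi - 1e-12 for g in gaps)

def positively_2_spanning_r2(vectors):
    """Positively 2-spanning: deleting any single vector still leaves a
    positively spanning configuration, so every open halfspace contains
    at least two of the vectors."""
    return all(
        positively_spans_r2(vectors[:i] + vectors[i + 1:])
        for i in range(len(vectors))
    )
```

For example, the five fifth roots of unity pass the test, while the four coordinate directions $(\pm 1, 0), (0, \pm 1)$ positively span the plane but are not positively 2-spanning, in agreement with Lemma~\ref{d+3} ($n \geqslant s+3 = 5$).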
\section{Affine toric varieties with an action of the group $SL_3$ with an open orbit}\label{n=3} \begin{propos}\label{prop_n_3} a) Every $(SL_3\times Q)$-module $V$ with an open $SL_3\times Q$-orbit for which condition~$\ast$ holds is either one of the following or conjugate to one of them: \begin{enumerate} \item \begin{enumerate} \item $\{0\}$, where $\dim Q = 0$; \item $\Lambda_1$, where $\dim Q = 0$; \item $\Lambda_1 \oplus \Lambda_1$, where $\dim Q = 0$; \item $\Lambda_1 \oplus \Lambda_1 \oplus \underbrace{\Lambda_0 \oplus \ldots \oplus \Lambda_0}_r ,$ where $r=0,1$ and $\dim Q = 1$; \item $\Lambda_1 \oplus \Lambda_1 \oplus \underbrace{\Lambda_0 \oplus \ldots \oplus \Lambda_0}_r ,$ where $r=2$ and $\dim Q = 2$; \end{enumerate} \item \begin{enumerate} \item $\Lambda_1 \oplus \Lambda_1 \oplus \Lambda_1 \oplus \underbrace{\Lambda_0 \oplus \ldots \oplus \Lambda_0}_r ,$ where $r=0$ and $\dim Q = 1$; \item $\Lambda_1 \oplus \Lambda_1 \oplus \Lambda_1 \oplus \underbrace{\Lambda_0 \oplus \ldots \oplus \Lambda_0}_r ,$ where $r=0,1$ and $\dim Q = 2$; \item $\Lambda_1 \oplus \Lambda_1 \oplus \Lambda_1 \oplus \underbrace{\Lambda_0 \oplus \ldots \oplus \Lambda_0}_r ,$ where $r=2$ and $\dim Q = 3$; \end{enumerate} \item $\underbrace{\Lambda_1 \oplus \ldots \oplus \Lambda_1}_{l-1} \oplus (\Lambda_1)^{\ast},$ where $l=2,3$ and $\dim Q = l-1$, \end{enumerate} where we assume that $Q$ acts with an open orbit on $\Theta(V)$ as in Prop.~\ref{case_dim_3}.\\ b) Each of the listed cases occurs, that is, there exist a group $\overline{M}$ and a set of weights such that $Q$ acts with an open orbit on $\Theta(V)$ and condition~$\ast$ holds. \end{propos} \begin{proof} Let $d$ denote $\dim Q$.
Let us start with the first case of Prop.~\ref{case_dim_3}: let $$V=\underbrace{\Lambda_1 \oplus \ldots \oplus \Lambda_1}_l \oplus \underbrace{\Lambda_0 \oplus \ldots \oplus \Lambda_0}_r,$$ where $l=0,1,2$ and $Q$ acts with an open orbit on $\underbrace{\Lambda_0 \oplus \ldots \oplus \Lambda_0}_r$. We skip the case $l=0$, which immediately reduces to case (1). Let us denote the $Q$-weights on the summands by $\overline{v_1}, \ldots, \overline{v_l}, \overline{w_1}, \ldots, \overline{w_r}$, where the first $l$ correspond to the copies of $\Lambda_1$ and the next $r$ to the copies of $\Lambda_0$. Thus we want to find inequalities for $r$ by checking condition~$\ast$ for the collection $$\overline{v_1}, \overline{v_1}, \overline{v_1}, \overline{v_2}, \overline{v_2}, \overline{v_2}, \ldots, \overline{v_l}, \overline{v_l}, \overline{v_l}, \overline{w_1}, \overline{w_2}, \ldots ,\overline{w_r}.$$ Now consider the subspace $A$ spanned by the $v_i$ and let $a$ be its dimension. After applying Lemma~\ref{projection} we obtain a $(d-a)$-dimensional space with at most $r$ nonzero projections, coming from $w_1, \ldots, w_r$. By Lemma~\ref{d+3} either $d-a=0$ or $r \geqslant d-a+3$. Suppose the second case holds. As $Q$ acts with an open orbit on $\underbrace{\Lambda_0 \oplus \ldots \oplus \Lambda_0}_r$, it immediately follows that $r \leqslant d$. Combining the inequalities we see that $d-a+3 \leqslant d$, which means that $a \geqslant 3$. On the other hand, $a \leqslant l \leqslant 2$. Thus $r \geqslant d-a+3$ is impossible, so only the case $d-a=0$ remains. \begin{lemma}\label{max} If $l \geqslant d$, the following inequality holds: $l + \frac{r}{2} \geqslant d+1$. \end{lemma} \begin{proof} As $l \geqslant d$, we may pick $d-1$ elements $v_1, \ldots, v_{d-1}$ and consider a linear hyperplane $\alpha$ through them.
On each side of $\alpha$ there are at least 2 elements of the collection, so each side contains either at least one of the remaining elements $v_d, \ldots, v_{l}$ or at least 2 of the elements $w_1, \ldots, w_r$. Taking into account both sides, we obtain $2(l-(d-1))+r \geqslant 4$, which is equivalent to the claimed inequality. \end{proof} If $d=a=0$ then $r$ is also $0$ and we are in case (2) or (3). If $d=a=1$ then $r \leqslant d$, thus $r=0,1$. By Lemma~\ref{max} we get $l \geqslant d+1 = 2$, thus $l=2$. We obtain case (4). Finally, if $d=a=2$ then, as $l \geqslant a$, we have $l=2$. By Lemma~\ref{max} we get $2+\frac{r}{2} \geqslant 2+1$, thus $r \geqslant 2$. As $r \leqslant d = 2$, we have $r=2$. This is case (5). Now let us consider the second case of Prop.~\ref{case_dim_3}: let $$V = \Lambda_1 \oplus \Lambda_1 \oplus \Lambda_1 \oplus \underbrace{\Lambda_0 \oplus \ldots \oplus \Lambda_0}_r ,$$ where $Q$ acts with an open orbit on $\left\langle \det\right\rangle \oplus \underbrace{\Lambda_0 \oplus \ldots \oplus \Lambda_0}_r$. We keep the notation of the previous case. By applying Lemma~\ref{projection} we obtain a $(d-a)$-dimensional space with at most $r$ nonzero projections, coming from $w_1, \ldots, w_r$. By Lemma~\ref{d+3} either $d-a=0$ or $r \geqslant d-a+3$. Suppose the second case holds. As $Q$ acts with an open orbit on $\left\langle \det\right\rangle \oplus \underbrace{\Lambda_0 \oplus \ldots \oplus \Lambda_0}_r$, we obtain $d \geqslant r+1$. Combining the inequalities we see that $d-a+3 \leqslant d-1$, which means that $a \geqslant 4$. On the other hand, $a \leqslant l = 3$, thus $r \geqslant d-a+3$ is impossible, so only the case $d-a=0$ remains. By Lemma~\ref{max} with $l=3$ we get $d \leqslant 3$. As $d \geqslant r+1$, we know that $d \geqslant 1$. If $d=a=1$ then, as $r+1 \leqslant d$, we obtain $r=0$. If $d=a=2$ then by Lemma~\ref{max} we get $3+\frac{r}{2} \geqslant 2+1$, which adds no restriction. As $r+1 \leqslant d = 2$, we have $r \leqslant 1$.
Finally, if $d=a=3$ then by Lemma~\ref{max} we get $3+\frac{r}{2} \geqslant 3+1$, thus $r \geqslant 2$. On the other hand, we know that $r+1 \leqslant d = 3$. Thus $r=2$. Now let us consider the last case of Prop.~\ref{case_dim_3}: let $$V=\underbrace{\Lambda_1 \oplus \ldots \oplus \Lambda_1}_{l-1} \oplus (\Lambda_1)^{\ast} \oplus \underbrace{\Lambda_0 \oplus \ldots \oplus \Lambda_0}_r,$$ where $l$ is either $2$ or $3$ and $Q$ acts with an open orbit on $\underbrace{\Lambda_0 \oplus \ldots \oplus \Lambda_0}_r \oplus \left\langle g_1 \right\rangle \oplus \ldots \oplus \left\langle g_{l-1} \right\rangle,$ where $g_i$ is the polynomial representing the pairing between the $i$-th copy of $\Lambda_1$ and $(\Lambda_1)^{\ast}.$ We again keep the notation of the previous cases. By applying Lemma~\ref{projection} we obtain a $(d-a)$-dimensional space with at most $r$ nonzero projections, coming from $w_1, \ldots, w_r$. By Lemma~\ref{d+3} either $d-a=0$ or $r \geqslant d-a+3$. Suppose the second case holds. As $Q$ acts with an open orbit on $\underbrace{\Lambda_0 \oplus \ldots \oplus \Lambda_0}_r \oplus \left\langle g_1 \right\rangle \oplus \ldots \oplus \left\langle g_{l-1} \right\rangle,$ we obtain $d \geqslant r+(l-1)$. Combining the inequalities we see that $d-a+3 \leqslant d-l+1$, which means that $a \geqslant l+2$. On the other hand, $a \leqslant l$, thus $r \geqslant d-a+3$ is impossible, so only the case $d-a=0$ remains. As $d \geqslant r+(l-1) \geqslant r+1$, we know that $d \geqslant 1$. If $a=d=1$ then, as $d\geqslant r+1$, we obtain $r=0$. Also, as $d \geqslant l-1$, we see that $l=3$ is impossible in this case, thus $l=2$. If $a=d=2$ then from $d \geqslant r+1$ we obtain $r \leqslant 1$, and by Lemma~\ref{max} we get $l+\frac{r}{2} \geqslant 2 + 1 = 3$, thus $l = 3$. As $d \geqslant r+(l-1) = r+2$, we now get $r=0$. This completes the proof of part a), as we have gone through all the cases of Prop.~\ref{case_dim_3}.
Now, to prove b), we present a set of weights in $M=\mathbb{Z}^{d}$ satisfying the constraints. \begin{tabular}{cccccc} Case & $v_1$ & $v_2$ & $v_3$ & $w_1$ & $w_2$\\ \hline 1a & - & - & - & - & - \\ 1b & $0$ & - & - & - & - \\ 1c & $0$ & $0$ & - & - & - \\ 1d & $1$ & $-1$ & - & $1$ if $r=1$ & - \\ 1e & $(1,0)$ & $(0,1)$ & - & $(-1,-1)$ & $(-1,-2)$ \\ 2a & $1$ & $1$ & $-1$ & - & - \\ 2b & $(1,0)$ & $(0,1)$ & $(-1,-2)$ & $(1,0)$ if $r=1$ & - \\ 2c & $(1,0,0)$ & $(0,1,0)$ & $(0,0,1)$ & $(-1,-1,-2)$ & $(-1,-2,-1)$ \\ 3,$l=2$ & $1$ & $-1$ & - & - & - \\ 3,$l=3$ & $(1,0)$ & $(0,1)$ & $(-1,-1)$ & - & - \\ \end{tabular} \end{proof} The following theorem immediately follows from Proposition~\ref{prop_n_3} and Proposition~\ref{result}. \begin{theorem}\label{thm_n_3} Every affine toric variety $X$ with an action of the group $SL_3$ with an open orbit is the categorical factor of a module $V(X)$ from Prop.~\ref{prop_n_3}. Moreover, if $X\underset{SL_3}{\cong}Y$ then $V(X)\underset{SL_3 \times Q}{\cong} V(Y)$. \end{theorem} \section{A series of examples with arbitrarily large class group rank}\label{n=4} As we discussed in the introduction, in the case $n=2$ there is a result due to V.\,L.\,Popov: the variety is either a homogeneous space or there is exactly one divisor outside of the open orbit. This means that the divisor class group in this case has rank 1; in fact, it is shown to be equal to $\mathbb{Z}\oplus\mathbb{Z}_m$. Our result in the previous section shows that in the case $n=3$, at least for toric varieties, the dimension of the characteristic quasitorus is at most $3$, which means that the rank of the class group is at most $3$. The following theorem shows that this behaviour does not persist for $n>3$. \begin{theorem}\label{thm_n_4} For every $n\geqslant 4$ and $d \geqslant 1$ there exists an $SL_n$-embedding with class group of rank $d$.
\end{theorem} \begin{proof} Consider the module $\underbrace{\mathbb{K}^n \oplus \dots \oplus \mathbb{K}^n}_n \oplus \underbrace{\mathbb{K} \oplus \dots \oplus \mathbb{K}}_{d-1}$. Let $x_{ij}$ denote the coordinates in the $i$-th copy of $\mathbb{K}^n$ and $y_i$ the coordinate in the $i$-th copy of $\mathbb{K}$. We equip the module with an action of a $d$-dimensional torus $T$ with weights $v_1, \ldots, v_n, w_1, \ldots, w_{d-1}$. We want to choose the weights so that condition~$\ast$ holds and there is an open orbit of $SL_n \times Q$. Let us set all the weights $v_i$ for $i \geqslant 5$ equal to $0$. Now consider a convex polygon with $d+3$ vertices in $\mathbb{Q}^2$ and let the weights $v_1, v_2, v_3, v_4, w_1, \ldots, w_{d-1}$ be the images of its vertices under the Gale transform. They generate a lattice isomorphic to $\mathbb{Z}^d$; let us denote it by $W$. Lemma~\ref{dual_to_polygon} ensures that the configuration $\{v_1,v_2,v_3,v_4,w_1,\ldots,w_{d-1}\}$ is positively 2-spanning. For the $(SL_n \times Q)$-orbit to be open we need the weights of the independent generators of the algebra of $SL_n$-invariants, that is, of $y_1, \ldots, y_{d-1}, \det (x_{ij})$, to be linearly independent, as Proposition~\ref{n_by_n} tells us. Those weights are $w_1, \ldots, w_{d-1}$ and $nv_1+nv_2+\ldots+nv_n$. However, by construction we have $v_1+v_2+v_3+v_4 = -(w_1+\ldots+w_{d-1})$ and all the other $v_i$ equal $0$. Thus we have to adjust our set of weights as follows. We multiply all the weights except $v_1$ by a factor of $2$, denoting the new weights by $v_i'$ and $w_j'$. By Lemma~\ref{scale_one_vector} the result is still positively 2-spanning.
By Lemma~\ref{general_position} the vectors $v_1, w_1, w_2, \ldots, w_{d-1}$ are linearly independent, thus the vector $$\chi_{det}=n(v_1'+v_2'+\ldots+v_n')=n(v_1+2v_2+\ldots+2v_n)=$$ $$=-2n(w_1+\ldots+w_{d-1})-nv_1=-n(w_1'+\ldots+w_{d-1}')-nv_1'$$ is linearly independent of the system $w_1', w_2', \ldots, w_{d-1}'$. Let us denote the lattice generated by the new weights by $W'$. Now we need to check condition~$\ast$ for the new set of weights. By Lemma~\ref{scale_one_vector} the new configuration is positively 2-spanning. It remains to establish that if we delete any element, the remaining ones still generate $W'$. As the configuration contains multiple copies of each $v_i'$, their deletion cannot change the generated lattice. Thus we only consider the case of deleting some $w_j'$. But $$w_j' = -v_1' - (v_1'+v_2'+\ldots+v_n')-(w_1'+\ldots+w_{j-1}'+w_{j+1}'+\ldots+w_{d-1}'),$$ so it can be recovered from the other weights, which implies that the lattice remains the same. This concludes the proof, as the obtained configuration satisfies condition~$\ast$ and provides an open $SL_n \times Q$-orbit. \end{proof} \printbibliography \end{document}
\begin{document} \begin{center} {\bf INTEGRAL EQUATION FOR THE TRANSITION DENSITY \\ OF THE MULTIDIMENSIONAL MARKOV RANDOM FLIGHT} \end{center} \begin{center} Alexander D. KOLESNIK\\ Institute of Mathematics and Computer Science\\ Academy Street 5, Kishinev 2028, Moldova\\ E-Mail: [email protected] \end{center} \vskip 0.2cm \begin{abstract} We consider the Markov random flight $\bold X(t)$ in the Euclidean space $\Bbb R^m, \; m\ge 2,$ starting from the origin $\bold 0\in\Bbb R^m$ that, at Poisson-paced times, changes its direction at random according to an arbitrary distribution on the unit $(m-1)$-dimensional sphere $S^m(\bold 0,1)$ having an absolutely continuous density. For any time instant $t>0$, convolution-type recurrent relations for the joint and conditional densities of the process $\bold X(t)$ and of the number of changes of direction are obtained. Using these relations, we derive an integral equation for the transition density of $\bold X(t)$ whose solution is given in the form of a uniformly converging series composed of the multiple double convolutions of the singular component of the density with itself. Two important particular cases, the uniform distribution on $S^m(\bold 0,1)$ and the Gaussian distributions on the unit circle $S^2(\bold 0,1)$, are considered separately. \end{abstract} \vskip 0.1cm {\it Keywords:} Random flight, random evolution, joint density, conditional density, convolution, Fourier transform, characteristic function \vskip 0.2cm {\it AMS 2010 Subject Classification:} 60K35, 60K99, 60J60, 60J65, 82C41, 82C70 \section{Introduction} \numberwithin{equation}{section} Random motions at finite speed in the multidimensional Euclidean spaces $\Bbb R^m, \; m\ge 2$, also called random flights, have become the subject of intense research in recent decades.
The majority of published works deal with the case of isotropic Markov random flights, when the motions are controlled by a homogeneous Poisson process and their directions are taken uniformly on the unit $(m-1)$-dimensional sphere \cite{kol2,kol3,kol4,kol5,kol6,kol7}, \cite{mas}, \cite{sta1,sta2}. The limiting behaviour of the Markov random flight with a finite number of fixed directions in $\Bbb R^m$ was examined in \cite{ghosh}. In recent years the non-Markovian multidimensional random walks with Erlang- and Dirichlet-distributed displacements were studied in a series of works \cite{lecaer1,lecaer2,lecaer3,letac}, \cite{pogor1,pogor2}. Such random motions at finite velocities are of great interest due to both their theoretical importance and their numerous fruitful applications in physics, chemistry, biology and other fields. When studying such a motion, its explicit distribution is, undoubtedly, the most attractive aim of research. However, despite many efforts, closed-form expressions for the distributions of Markov random flights have been obtained in only a few cases. In the spaces of low even-order dimensions such distributions were obtained in explicit form by different methods (see \cite{sta2}, \cite{mas}, \cite{kol5}, \cite{kol7} for the Euclidean plane $\Bbb R^2$, \cite{kol6} for the space $\Bbb R^4$ and \cite{kol2} for the space $\Bbb R^6$). Moreover, in the spaces $\Bbb R^2$ and $\Bbb R^4$ such distributions are surprisingly expressed in terms of elementary functions, while in the space $\Bbb R^6$ the distribution has the form of a series composed of some polynomials. As far as random flights in odd-dimensional Euclidean spaces are concerned, their analysis is much more complicated in comparison with the even-dimensional cases.
A formula for the transition density of the symmetric Markov random flight with unit speed in the space $\Bbb R^3$ was given in \cite{sta1}, however it has the very complicated form of an integral with variable limits whose integrand depends on the inverse tangent function of the integration variable (see \cite[formulas (1.3) and (4.21)]{sta1}). Moreover, the density presented in that work raises some questions, since its absolutely continuous (integral) part is discontinuous at the origin $\bold 0\in\Bbb R^3$, which seems rather strange. The characteristic functions of multidimensional random flights are much more convenient objects for analysis than their densities. This is due to the fact that, while the densities are finitary functions (that is, functions supported on compact subsets of $\Bbb R^m$), their characteristic functions (Fourier transforms) are analytic real functions defined everywhere in $\Bbb R^m$. That is why the characteristic functions have become the subject of the vast research whose results were published in \cite{kol3} and \cite{kol8}. In particular, in \cite{kol3} the time-convolutional recurrent relations for the joint and conditional characteristic functions of the Markov random flight in the Euclidean space $\Bbb R^m$ of arbitrary dimension $m\ge 2$ were obtained. By using these recurrent relations, the Volterra integral equation of the second kind with continuous kernel for the unconditional characteristic function was derived and a closed-form expression for its Laplace transform was given. Such a convolutional structure of the characteristic functions suggests a similar one for the respective densities. Discovering such convolutional relations for the densities of Markov random flights in $\Bbb R^m, \; m\ge 2,$ is the main subject of this article. The paper is organized as follows.
In Section 2 we introduce the general Markov random flight in the Euclidean spaces $\Bbb R^m, \; m\ge 2$, with arbitrary dissipation function and describe the structure of its distribution. Some basic properties of the joint, conditional and unconditional characteristic functions of the process are also given. In Section 3 we derive the recurrent relations for the joint and conditional densities of the process and of the number of changes of direction in the form of double convolutions in the space and time variables. Based on these recurrent relations, an integral equation for the transition density of the process is obtained in Section 4, whose solution is given in the form of a uniformly converging series composed of the multiple double convolutions of the singular component of the density with itself. This solution is unique in the class of finitary functions in $\Bbb R^m$. Two important particular cases, the uniform distribution on $S^m(\bold 0,1)$ and the Gaussian distributions on the unit circle $S^2(\bold 0,1)$, are considered in Section 5. \section{Description of the Process and Its Basic Properties} \numberwithin{equation}{section} Consider the following stochastic motion. A particle starts from the origin $\bold 0 = (0, \dots, 0)$ of the Euclidean space $\Bbb R^m, \; m\ge 2,$ at the initial time instant $t=0$ and moves with some constant speed $c$ (note that $c$ is treated as the constant norm of the velocity). The initial direction is a random $m$-dimensional vector with arbitrary distribution (also called the dissipation function) on the unit sphere $$S^m(\bold 0, 1) = \left\{ \bold x=(x_1, \dots ,x_m)\in \Bbb R^m: \; \Vert\bold x\Vert^2 = x_1^2+ \dots +x_m^2=1 \right\} $$ having an absolutely continuous bounded density $\chi(\bold x), \; \bold x\in S^m(\bold 0, 1)$.
We emphasize that here and hereafter the upper index $m$ indicates the dimension of the space in which the sphere $S^m(\bold 0, 1)$ is considered, not its own dimension, which is clearly $m-1$. The motion is controlled by a homogeneous Poisson process $N(t)$ of rate $\lambda>0$ as follows. At each Poissonian instant, the particle instantaneously takes on a new random direction on $S^m(\bold 0, 1)$ with the same density $\chi(\bold x), \; \bold x\in S^m(\bold 0, 1),$ independently of its previous motion and keeps moving with the same speed $c$ until the next Poisson event occurs; then it takes a new random direction again and so on. Let $\bold X(t)=(X_1(t), \dots ,X_m(t))$ be the particle's position at time $t>0$, which is referred to as the $m$-dimensional random flight. At an arbitrary time instant $t>0$ the particle, with probability 1, is located in the closed $m$-dimensional ball of radius $ct$ centered at the origin $\bold 0$: $$\bold B^m(\bold 0, ct) = \left\{ \bold x=(x_1, \dots ,x_m)\in \Bbb R^m : \; \Vert\bold x\Vert^2 = x_1^2+ \dots +x_m^2\le c^2t^2 \right\} .$$ Consider the probability distribution function $$\Phi(\bold x, t) = \text{Pr} \left\{ \bold X(t)\in d\bold x \right\}, \qquad \bold x\in\bold B^m(\bold 0, ct), \quad t\ge 0,$$ of the process $\bold X(t)$, where $d\bold x$ is the infinitesimal element in the space $\Bbb R^m$ with Lebesgue measure $\mu(d\bold x) = dx_1 \dots dx_m$. For arbitrary fixed $t>0$, the distribution $\Phi(\bold x, t)$ consists of two components.
The singular component corresponds to the case when no Poisson event occurs in the time interval $(0,t)$ and it is concentrated on the sphere $$S^m(\bold 0, ct) =\partial\bold B^m(\bold 0, ct) = \left\{ \bold x=(x_1, \dots ,x_m)\in \Bbb R^m: \; \Vert\bold x\Vert^2 = x_1^2+ \dots +x_m^2=c^2t^2 \right\} .$$ In this case the particle is located on the sphere $S^m(\bold 0, ct)$ and the probability of this event is $$\text{Pr} \left\{ \bold X(t)\in S^m(\bold 0, ct) \right\} = e^{-\lambda t} .$$ If at least one Poisson event occurs by time instant $t$, then the particle is located strictly inside the ball $\bold B^m(\bold 0, ct)$ and the probability of this event is $$\text{Pr} \left\{ \bold X(t)\in \text{int} \; \bold B^m(\bold 0, ct) \right\} = 1 - e^{-\lambda t} .$$ The part of the distribution $\Phi(\bold x, t)$ corresponding to this case is concentrated in the interior $$\text{int} \; \bold B^m(\bold 0, ct) = \left\{ \bold x=(x_1, \dots ,x_m)\in \Bbb R^m: \; \Vert\bold x\Vert^2 = x_1^2+ \dots +x_m^2<c^2t^2 \right\}$$ of the ball $\bold B^m(\bold 0, ct)$ and forms its absolutely continuous component. Let $p(\bold x,t) = p(x_1, \dots ,x_m,t), \;\; \bold x\in\bold B^m(\bold 0, ct) , \; t>0,$ be the density of the distribution $\Phi(\bold x, t)$. It has the form \begin{equation}\label{dens} p(\bold x,t) = p_s(\bold x,t) + p_{ac}(\bold x,t) , \qquad \bold x\in\bold B^m(\bold 0, ct), \quad t>0, \end{equation} where $p_s(\bold x,t)$ is the density (in the sense of generalized functions) of the singular component of $\Phi(\bold x, t)$ concentrated on the sphere $S^m(\bold 0, ct)$ and $p_{ac}(\bold x,t)$ is the density of the absolutely continuous component of $\Phi(\bold x, t)$ concentrated in $\text{int} \; \bold B^m(\bold 0, ct)$.
The density $\chi(\bold x), \; \bold x\in S^m(\bold 0, 1),$ on the unit sphere $S^m(\bold 0, 1)$ generates the absolutely continuous and bounded (in $\bold x$ for any fixed $t$) density $\varrho(\bold x,t), \; \bold x\in S^m(\bold 0, ct),$ on the sphere $S^m(\bold 0, ct)$ of radius $ct$ according to the formula $\varrho(\bold x,t) = \chi(\frac{1}{ct}\bold x), \; \bold x\in S^m(\bold 0, ct), \; t>0$. Therefore, the singular part of density (\ref{dens}) has the form: \begin{equation}\label{densS} p_s(\bold x,t) = e^{-\lambda t} \varrho(\bold x,t) \delta(c^2t^2-\Vert\bold x\Vert^2) , \qquad t>0, \end{equation} where $\delta(x)$ is the Dirac delta-function. The absolutely continuous part of density (\ref{dens}) has the form: \begin{equation}\label{densAC} p_{ac}(\bold x,t) = f_{ac}(\bold x,t) \Theta(ct-\Vert\bold x\Vert) , \qquad t>0, \end{equation} where $f_{ac}(\bold x,t)$ is some positive function absolutely continuous in $\text{int} \; \bold B^m(\bold 0, ct)$ and $\Theta(x)$ is the Heaviside step function given by \begin{equation}\label{heaviside} \Theta(x) = \left\{ \aligned 1, \qquad & \text{if} \; x>0,\\ 0, \qquad & \text{if} \; x\le 0. \endaligned \right. \end{equation} Consider the conditional densities $p_n(\bold x,t), \; n\ge 0,$ of process $\bold X(t)$ conditioned by the random events $\{ N(t)=n \}, \; n\ge 0,$ where, recall, $N(t)$ is the number of the Poisson events that have occurred in the time interval $(0, t)$. Obviously, $p_0(\bold x,t)=\varrho(\bold x,t) \delta(c^2t^2-\Vert\bold x\Vert^2)$. Therefore, our aim is to examine the conditional densities $p_n(\bold x,t)$ for $n\ge 1$.
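This two-component structure is easy to observe by simulation. The following Python sketch is purely illustrative and not part of the formal development (the sampler name, the choice $m=2$, and the uniform dissipation function are our assumptions): paths with $N(t)=0$ land exactly on the sphere of radius $ct$ and carry mass $e^{-\lambda t}$, while all other paths land strictly inside the ball.

```python
import math
import random

def sample_flight_2d(c, lam, t, rng):
    """Simulate one path of a planar Markov random flight with speed c,
    switching rate lam, horizon t, and directions chosen uniformly on
    the unit circle.  Returns the final position and the number of
    Poisson events (direction changes) in (0, t)."""
    x = y = 0.0
    remaining = t
    n_events = -1
    while remaining > 0:
        n_events += 1                       # first pass chooses the initial direction
        theta = rng.uniform(0.0, 2.0 * math.pi)
        dwell = rng.expovariate(lam)        # time until the next Poisson event
        step = min(dwell, remaining)        # do not move past the horizon t
        x += c * step * math.cos(theta)
        y += c * step * math.sin(theta)
        remaining -= dwell
    return x, y, n_events
```

Averaging over many simulated paths, the empirical fraction with $n = 0$ approaches $e^{-\lambda t}$, matching the mass of the singular component on $S^2(\bold 0, ct)$.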
Consider the conditional characteristic functions (Fourier transforms) of the process $\bold X(t)$: \begin{equation}\label{rec1} G_n(\boldsymbol\alpha, t) = E \left\{ e^{i\langle\boldsymbol\alpha, \bold X(t)\rangle} \vert \; N(t)=n \right\} , \qquad n\ge 1, \end{equation} where $\boldsymbol\alpha =(\alpha_1, \dots ,\alpha_m) \in \Bbb R^m$ is the real $m$-dimensional vector of inversion parameters and $\langle\boldsymbol\alpha, \bold X(t)\rangle$ is the inner product of the vectors $\boldsymbol\alpha$ and $\bold X(t)$. According to \cite[formula (6.8)]{kol3}, functions (\ref{rec1}) are given by the formula: \begin{equation}\label{rec2} G_n(\boldsymbol\alpha, t) = \frac{n!}{t^n} \int_0^t d\tau_1 \int_{\tau_1}^t d\tau_2 \dots \int_{\tau_{n-1}}^t d\tau_n \left\{ \prod_{j=1}^{n+1} \psi(\boldsymbol\alpha, \tau_j-\tau_{j-1}) \right\} , \qquad n\ge 1, \end{equation} where $\tau_0 \overset{\text{def}}{=} 0$, $\tau_{n+1} \overset{\text{def}}{=} t$ and \begin{equation}\label{rec3} \psi(\boldsymbol\alpha, t) = \mathcal F_{\bold x} \left[ \varrho(\bold x,t) \delta(c^2t^2-\Vert\bold x\Vert^2) \right](\boldsymbol\alpha) = \int_{S^m(\bold 0, ct)} e^{i\langle\boldsymbol\alpha, \bold x\rangle} \; \varrho(\bold x, t) \; \nu(d\bold x) \end{equation} is the characteristic function (Fourier transform) of the density $\varrho(\bold x,t)$ concentrated on the sphere $S^m(\bold 0, ct)$ of radius $ct$ and $\nu(d\bold x)$ is the surface Lebesgue measure on $S^m(\bold 0, ct)$. Consider separately the integral factor in (\ref{rec2}): \begin{equation}\label{rec4} \mathcal J_n(\boldsymbol\alpha, t) = \int_0^t d\tau_1 \int_{\tau_1}^t d\tau_2 \dots \int_{\tau_{n-1}}^t d\tau_n \left\{ \prod_{j=1}^{n+1} \psi(\boldsymbol\alpha, \tau_j-\tau_{j-1}) \right\} , \qquad n\ge 1.
\end{equation} This function has a clear probabilistic meaning, namely \begin{equation}\label{recc4} \tilde G_n(\boldsymbol\alpha, t) = \mathcal F_{\bold x} \bigl[ p_n(\bold x, t) \bigr](\boldsymbol\alpha) = \frac{(\lambda t)^n \; e^{-\lambda t}}{n!} \; G_n(\boldsymbol\alpha, t) = \lambda^n e^{-\lambda t} \mathcal J_n(\boldsymbol\alpha, t), \end{equation} $$\boldsymbol\alpha\in\Bbb R^m, \qquad t>0, \qquad n\ge 1,$$ is the characteristic function (Fourier transform) of the joint probability density $p_n(\bold x, t)$ of the particle's position at time instant $t$ and of the number $N(t)=n$ of the Poisson events that have occurred by this time instant. It is known (see \cite[Theorem 5]{kol3}) that, for arbitrary $n\ge 1$, functions (\ref{rec4}) are connected with each other by the following recurrent relation: \begin{equation}\label{rec5} \mathcal J_n(\boldsymbol\alpha, t) = \int_0^t \psi(\boldsymbol\alpha, t-\tau) \; \mathcal J_{n-1}(\boldsymbol\alpha, \tau) \; d\tau = \int_0^t \psi(\boldsymbol\alpha, \tau) \; \mathcal J_{n-1}(\boldsymbol\alpha, t-\tau) \; d\tau, \qquad n\ge 1, \end{equation} where, by definition, ${\mathcal J}_0(\boldsymbol\alpha, t) \overset{\text{def}}{=} \psi(\boldsymbol\alpha, t)$. Formula (\ref{rec5}) can also be represented in the following time-convolutional form: \begin{equation}\label{rec6} \mathcal J_n(\boldsymbol\alpha, t) = \psi(\boldsymbol\alpha, t) \overset{t}{\ast} \mathcal J_{n-1}(\boldsymbol\alpha, t), \qquad n\ge 1, \end{equation} where the symbol $\overset{t}{\ast}$ means the convolution operation with respect to the time variable $t$. From (\ref{rec6}) it follows that \begin{equation}\label{rec7} \mathcal J_n(\boldsymbol\alpha, t) = \left[ \psi(\boldsymbol\alpha, t) \right]^{\overset{t}{\ast}(n+1)} , \qquad n\ge 1, \end{equation} where $\overset{t}{\ast}(n+1)$ means the $(n+1)$-fold convolution in $t$.
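The recurrent relation (\ref{rec5}) can be sanity-checked numerically against the iterated-integral definition (\ref{rec4}). As a concrete $\psi$ we take $J_0(ct\Vert\boldsymbol\alpha\Vert)$, the characteristic function of the uniform distribution on a circumference of radius $ct$ (the planar symmetric case considered in Section 5); this choice, and the parameter values, are illustrative assumptions.

```python
import numpy as np
from scipy.special import j0
from scipy.integrate import quad, dblquad

# Illustrative assumption: psi(alpha, t) = J_0(c t |alpha|), the characteristic
# function of the uniform law on a circumference of radius ct (planar case).
c, a, t = 1.0, 1.5, 1.5
psi = lambda u: j0(c * a * u)

# J_2 from the iterated-integral definition (rec4) with n = 2:
J2_direct, _ = dblquad(
    lambda t1, t2: psi(t1) * psi(t2 - t1) * psi(t - t2),
    0.0, t,                        # tau_2 ranges over (0, t)
    lambda t2: 0.0, lambda t2: t2  # tau_1 ranges over (0, tau_2)
)

# J_2 from the recurrent relation (rec5): J_1 = psi * psi, then J_2 = psi * J_1.
J1 = lambda s: quad(lambda v: psi(v) * psi(s - v), 0.0, s)[0]
J2_recurrent = quad(lambda u: psi(t - u) * J1(u), 0.0, t)[0]
```

The two values agree to quadrature accuracy, as the recurrence predicts.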
Applying the Laplace transformation $\mathcal L_t$ (in the time variable $t$) to (\ref{rec7}), we arrive at the formula \begin{equation}\label{rec8a} \mathcal L_t \left[ \mathcal J_n(\boldsymbol\alpha, t) \right] (s) = \bigl( \mathcal L_t \left[ \psi(\boldsymbol\alpha, t) \right] (s) \bigr)^{n+1} , \qquad n\ge 1. \end{equation} It is also known (see \cite[Corollary 5.3]{kol3}) that the conditional characteristic functions (\ref{rec2}) satisfy the following recurrent relation \begin{equation}\label{rec8} G_n(\boldsymbol\alpha, t) = \frac{n}{t^n} \int_0^t \tau^{n-1} \psi(\boldsymbol\alpha, t-\tau) \; G_{n-1}(\boldsymbol\alpha, \tau) \; d\tau, \qquad G_0(\boldsymbol\alpha, t) \overset{\text{def}}{=} \psi(\boldsymbol\alpha, t), \quad n\ge 1. \end{equation} The unconditional characteristic function \begin{equation}\label{rec9} G(\boldsymbol\alpha, t) = E \left\{ e^{i\langle\boldsymbol\alpha, \bold X(t)\rangle} \right\} \end{equation} of process $\bold X(t)$ satisfies the Volterra integral equation of the second kind (see \cite[Theorem 6]{kol3}): \begin{equation}\label{rec10} G(\boldsymbol\alpha, t) = e^{-\lambda t} \psi(\boldsymbol\alpha, t) + \lambda \int_0^t e^{-\lambda (t-\tau)} \psi(\boldsymbol\alpha, t-\tau) \; G(\boldsymbol\alpha, \tau) \; d\tau , \qquad t\ge 0, \end{equation} or, in the convolutional form, \begin{equation}\label{rec11} G(\boldsymbol\alpha, t) = e^{-\lambda t} \psi(\boldsymbol\alpha, t) + \lambda \bigl[ \left( e^{-\lambda t} \psi(\boldsymbol\alpha, t) \right) \overset{t}{\ast} G(\boldsymbol\alpha, t) \bigr] . \end{equation} This is the renewal equation for the Markov random flight $\bold X(t)$. In the class of continuous functions, integral equation (\ref{rec10}) (or (\ref{rec11})) has a unique solution given by the uniformly converging series \begin{equation}\label{rec12} G(\boldsymbol\alpha, t) = e^{-\lambda t} \sum_{n=0}^{\infty} \lambda^n \; \left[ \psi(\boldsymbol\alpha, t) \right]^{\overset{t}{\ast} (n+1)} \; .
\end{equation} From (\ref{rec11}) we obtain the general formula for the Laplace transform of characteristic function (\ref{rec9}): \begin{equation}\label{rec13} \mathcal L_t \left[ G(\boldsymbol\alpha, t) \right] (s) = \frac{\mathcal L_t \left[ \psi(\boldsymbol\alpha, t) \right] (s+\lambda)}{1 - \lambda \; \mathcal L_t \left[ \psi(\boldsymbol\alpha, t) \right](s+\lambda)} \; , \qquad \text{Re} \; s > 0. \end{equation} These properties will be used in the next section for deriving recurrent relations for the joint and conditional densities of the Markov random flight $\bold X(t)$. \section{Recurrent Relations} \numberwithin{equation}{section} Consider the joint probability densities $p_n(\bold x, t), \; n\ge 0, \; \bold x\in\bold B^m(\bold 0, ct), \; t>0,$ of the particle's position $\bold X(t)$ at time instant $t>0$ and of the number of Poisson events $\{ N(t)=n\}$ that have occurred by this instant $t$. For $n=0$, we have \begin{equation}\label{joint0} p_0(\bold x, t) = p_s(\bold x, t) = e^{-\lambda t} \varrho(\bold x,t) \delta(c^2t^2-\Vert\bold x\Vert^2) , \qquad t>0, \end{equation} where, recall, $p_s(\bold x, t)$ is the singular part of density (\ref{dens}) concentrated on the surface of the sphere $S^m(\bold 0, ct) = \partial\bold B^m(\bold 0, ct)$ and given by (\ref{densS}). If $n\ge 1$, then, according to (\ref{densAC}), the joint densities $p_n(\bold x, t)$ have the form: \begin{equation}\label{jointN} p_n(\bold x, t) = f_n(\bold x,t) \Theta(ct-\Vert\bold x\Vert) , \qquad n\ge 1, \quad t>0, \end{equation} where $f_n(\bold x,t), \; n\ge 1,$ are some positive functions absolutely continuous in $\text{int} \; \bold B^m(\bold 0, ct)$ and $\Theta(x)$ is the Heaviside step function. The joint density $p_{n+1}(\bold x,t)$ can be expressed through the previous one $p_n(\bold x,t)$ by means of a recurrent relation. This result is given by the following theorem.
{\bf Theorem 1.} {\it The joint densities} $p_n(\bold x, t), \; n\ge 1,$ {\it are connected with each other by the following recurrent relation:} \begin{equation}\label{rec14} p_{n+1}(\bold x, t) = \lambda \int_0^t \bigl[ p_0(\bold x, t-\tau) \overset{\bold x}{\ast} p_n(\bold x, \tau) \bigr] \; d\tau , \qquad n\ge 1, \quad \bold x\in\text{int} \; \bold B^m(\bold 0, ct), \quad t>0. \end{equation} \vskip 0.2cm \begin{proof} Applying the Fourier transformation to the right-hand side of (\ref{rec14}), we have: \begin{equation}\label{rec17} \aligned \lambda \; \mathcal F_{\bold x} & \left[ \int_0^t \bigl[ p_0(\bold x, t-\tau) \overset{\bold x}{\ast} p_n(\bold x, \tau) \bigr] \; d\tau \right](\boldsymbol\alpha) \\ & = \lambda \int_0^t \mathcal F_{\bold x} \left[ p_0(\bold x, t-\tau) \overset{\bold x}{\ast} p_n(\bold x, \tau) \right](\boldsymbol\alpha) \; d\tau \\ & = \lambda \int_0^t \mathcal F_{\bold x} \bigl[ p_0(\bold x, t-\tau) \bigr](\boldsymbol\alpha) \; \mathcal F_{\bold x} \bigl[ p_n(\bold x, \tau) \bigr](\boldsymbol\alpha) \; d\tau \\ & = \lambda \int_0^t e^{-\lambda(t-\tau)} \mathcal F_{\bold x} \bigl[ \varrho(\bold x, t-\tau) \delta(c^2(t-\tau)^2-\Vert\bold x\Vert^2) \bigr](\boldsymbol\alpha) \;\; \mathcal F_{\bold x} \bigl[ p_n(\bold x, \tau) \bigr](\boldsymbol\alpha) \; d\tau \\ & = \lambda \int_0^t e^{-\lambda(t-\tau)} \psi(\boldsymbol\alpha, t-\tau) \; \lambda^n e^{-\lambda\tau} \mathcal J_n(\boldsymbol\alpha, \tau) \; d\tau \\ & = \lambda^{n+1} e^{-\lambda t} \int_0^t \psi(\boldsymbol\alpha, t-\tau) \; \mathcal J_n(\boldsymbol\alpha, \tau) \; d\tau \\ & = \lambda^{n+1} e^{-\lambda t} \mathcal J_{n+1}(\boldsymbol\alpha, t) \\ & = \mathcal F_{\bold x} \bigl[ p_{n+1}(\bold x, t) \bigr](\boldsymbol\alpha) , \endaligned \end{equation} where we have used formulas (\ref{rec3}), (\ref{recc4}), (\ref{rec5}). Thus, the functions on the left- and right-hand sides of (\ref{rec14}) have the same Fourier transform and, therefore, they coincide.
The change of integration order in the first step of (\ref{rec17}) is justified because the convolution $p_0(\bold x, t-\tau) \overset{\bold x}{\ast} p_n(\bold x, \tau)$ of the singular part $p_0(\bold x, t-\tau)$ of the density with the absolutely continuous one $p_n(\bold x, \tau), \; n\ge 1,$ is an absolutely continuous (and, therefore, uniformly bounded in $\bold x$) function. From this fact it follows that, for any $n\ge 1$, the integral in square brackets on the left-hand side of (\ref{rec17}) converges uniformly in $\bold x$ for any fixed $t$. The theorem is proved. \end{proof} {\it Remark 1.} In view of (\ref{densS}) and (\ref{densAC}), formula (\ref{rec14}) can be represented in the following expanded form: \begin{equation}\label{rec15} \aligned p_{n+1}(\bold x, t) & = \lambda \int_0^t e^{-\lambda(t-\tau)} \\ & \quad \times \left\{ \int \varrho(\bold x-\boldsymbol\xi,t-\tau) \delta(c^2(t-\tau)^2-\Vert\bold x-\boldsymbol\xi\Vert^2) \; f_n(\boldsymbol\xi, \tau) \Theta(c\tau-\Vert\boldsymbol\xi\Vert) \; \nu(d\boldsymbol\xi) \right\} d\tau , \endaligned \end{equation} $$n\ge 1, \quad \bold x\in\text{int} \; \bold B^m(\bold 0, ct), \quad t>0,$$ where the function $f_n(\boldsymbol\xi, \tau)$ is absolutely continuous in the variable $\boldsymbol\xi=(\xi_1,\dots,\xi_m)\in\Bbb R^m$ and $\nu(d\boldsymbol\xi)$ is the surface Lebesgue measure. The integration area in the interior integral on the right-hand side of (\ref{rec15}) is determined by all $\boldsymbol\xi$ for which the integrand takes non-zero values, that is, by the system $$\boldsymbol\xi\in\Bbb R^m \; : \; \left\{ \aligned & \Vert\bold x-\boldsymbol\xi\Vert^2 = c^2(t-\tau)^2 , \\ & \Vert\boldsymbol\xi\Vert < c\tau. \endaligned \right.$$ The first relation of this system determines a sphere $S^m(\bold x, c(t-\tau))$ of radius $c(t-\tau)$ centred at point $\bold x$, while the second one represents an open ball $\text{int} \; \bold B^m(\bold 0, c\tau)$ of radius $c\tau$ centred at the origin $\bold 0$.
Their intersection \begin{equation}\label{setM} M(\bold x, \tau) = S^m(\bold x, c(t-\tau))\cap\text{int} \; \bold B^m(\bold 0, c\tau), \end{equation} which is a part of (or the whole) surface of the sphere $S^m(\bold x, c(t-\tau))$ located inside the ball $\bold B^m(\bold 0, c\tau)$, represents the integration area of dimension $m-1$ in the interior integral of (\ref{rec15}). Note that the sum of the radii of $S^m(\bold x, c(t-\tau))$ and $\text{int} \; \bold B^m(\bold 0, c\tau)$ is $c(t-\tau)+c\tau = ct$, which is greater than the distance $\Vert\bold x\Vert$ between their centres $\bold 0$ and $\bold x$. This fact, as well as some simple geometric reasoning, shows that intersection (\ref{setM}) depends on $\tau\in (0,t)$ as follows. $\bullet$ If $\tau\in (0, \; \frac{t}{2} - \frac{\Vert\bold x\Vert}{2c}]$, then intersection (\ref{setM}) is empty, that is, $M(\bold x, \tau) = \varnothing$. $\bullet$ If $\tau\in (\frac{t}{2} - \frac{\Vert\bold x\Vert}{2c}, \; \frac{t}{2} + \frac{\Vert\bold x\Vert}{2c}]$, then intersection $M(\bold x, \tau)$ is not empty and represents some hypersurface of dimension $m-1$. $\bullet$ If $\tau\in (\frac{t}{2} + \frac{\Vert\bold x\Vert}{2c}, \; t]$, then $S^m(\bold x, c(t-\tau))\subset \text{int} \; \bold B^m(\bold 0, c\tau)$ and, therefore, in this case $M(\bold x, \tau) = S^m(\bold x, c(t-\tau))$.
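The three regimes above can be confirmed with a few lines of code, using the observation that the distances from the origin to the points of a sphere $S^m(\bold x, R)$ fill the interval $[\,|\Vert\bold x\Vert-R|, \; \Vert\bold x\Vert+R\,]$. The specific point $\bold x$ and the values of $c$, $t$ below are illustrative.

```python
import numpy as np

# Illustrative point x inside B^3(0, ct) and illustrative parameter values.
c, t = 1.0, 1.0
x = np.array([0.3, 0.2, 0.1])
r = np.linalg.norm(x)          # r < ct, so x lies inside the ball

def regime(tau):
    """Classify M(x, tau) = S(x, c(t - tau)) intersected with int B(0, c tau)."""
    R = c * (t - tau)                         # radius of the sphere around x
    d_min, d_max = abs(r - R), r + R          # range of |y| over y in S(x, R)
    if d_min >= c * tau:
        return "empty"
    if d_max < c * tau:
        return "full sphere"                  # the whole sphere is inside the ball
    return "partial"

tau_lo = t / 2 - r / (2 * c)                  # boundaries of the middle regime
tau_hi = t / 2 + r / (2 * c)
```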
Thus, formula (\ref{rec15}), as well as (\ref{rec14}), can be rewritten in the expanded form: \begin{equation}\label{rec16} \aligned p_{n+1}(\bold x, t) & = \lambda \int\limits_{\frac{t}{2} - \frac{\Vert\bold x\Vert}{2c}}^{\frac{t}{2} + \frac{\Vert\bold x\Vert}{2c}} e^{-\lambda(t-\tau)} \left\{ \int\limits_{M(\bold x, \tau)} \varrho(\bold x-\boldsymbol\xi,t-\tau) \; f_n(\boldsymbol\xi, \tau) \; \nu(d\boldsymbol\xi) \right\} d\tau \\ & + \lambda \int\limits_{\frac{t}{2} + \frac{\Vert\bold x\Vert}{2c}}^t e^{-\lambda(t-\tau)} \left\{ \int\limits_{S^m(\bold x, c(t-\tau))} \varrho(\bold x-\boldsymbol\xi,t-\tau) \; f_n(\boldsymbol\xi, \tau) \; \nu(d\boldsymbol\xi) \right\} d\tau \endaligned \end{equation} where the expressions in curly brackets in (\ref{rec16}) represent surface integrals over $M(\bold x, \tau)$ and $S^m(\bold x, c(t-\tau))$, respectively. {\it Remark 2.} By means of the double convolution operation of two arbitrary generalized functions $f_1(\bold x, t), f_2(\bold x, t) \in\mathscr{S'}, \; \bold x\in\Bbb R^m, \; t>0,$ \begin{equation}\label{rec16a} f_1(\bold x, t) \overset{\bold x}{\ast} \overset{t}{\ast} f_2(\bold x, t) = \int_0^t \int_{\Bbb R^m} f_1(\boldsymbol\xi, \tau) \; f_2(\bold x - \boldsymbol\xi, t-\tau) \; d\boldsymbol\xi \; d\tau \end{equation} formula (\ref{rec14}) can be represented in the succinct convolutional form \begin{equation}\label{rec16b} p_{n+1}(\bold x, t) = \lambda \left[ p_0(\bold x, t) \overset{\bold x}{\ast} \overset{t}{\ast} p_n(\bold x, t) \right] . \end{equation} Taking into account the well-known connections between the joint and conditional densities, we can extract from Theorem 1 a convolution-type recurrent relation for the conditional probability densities $\tilde p_n(\bold x, t), \; n\ge 1$.
{\bf Corollary 1.1.} {\it The conditional densities} $\tilde p_n(\bold x, t), \; n\ge 1,$ {\it are connected with each other by the following recurrent relation:} \begin{equation}\label{rec18} \tilde p_{n+1}(\bold x, t) = \frac{n+1}{t^{n+1}} \int_0^t \tau^n \bigl[ \tilde p_0(\bold x, t-\tau) \overset{\bold x}{\ast} \; \tilde p_n(\bold x, \tau) \bigr] \; d\tau , \qquad n\ge 1, \quad \bold x\in\text{int} \; \bold B^m(\bold 0, ct), \quad t>0, \end{equation} {\it where $\tilde p_0(\bold x, t) = \varrho(\bold x,t) \delta(c^2t^2-\Vert\bold x\Vert^2)$ is the conditional density corresponding to the case when no Poisson event occurs up to time instant $t$.} \vskip 0.2cm \begin{proof} The proof immediately follows from Theorem 1 and recurrent formula (\ref{rec8}). \end{proof} {\it Remark 3.} Formulas (\ref{rec14}) and (\ref{rec18}) are also valid for $n=0$. In this case, for arbitrary $t>0$, they take the form: \begin{equation}\label{rec19} p_1(\bold x, t) = \lambda \int_0^t \bigl[ p_0(\bold x, t-\tau) \overset{\bold x}{\ast} \; p_0(\bold x, \tau) \bigr] \; d\tau , \end{equation} \begin{equation}\label{rec20} \tilde p_1(\bold x, t) = \frac{1}{t} \int_0^t \bigl[ \tilde p_0(\bold x, t-\tau) \overset{\bold x}{\ast} \; \tilde p_0(\bold x, \tau) \bigr] \; d\tau , \end{equation} where, recall, the function $p_0(\bold x, t)$ defined by (\ref{joint0}) is the singular part of the density concentrated on the surface of the sphere $S^m(\bold 0, ct)$. The derivation of (\ref{rec19}) is a simple repetition of the proof of Theorem 1, taking into account the boundedness of $p_0(\bold x, t)$, which justifies the change of integration order in (\ref{rec17}). Formula (\ref{rec20}) follows from (\ref{rec19}).
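The renewal structure behind these recurrences can be verified numerically: solving the Volterra equation (\ref{rec10}) by a trapezoidal marching scheme and summing a truncation of the series (\ref{rec12}) should produce the same function. As before, $\psi(\boldsymbol\alpha,t)=J_0(ct\Vert\boldsymbol\alpha\Vert)$ (the planar uniform case) and the parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.special import j0

# Illustrative assumption: psi(alpha, t) = J_0(c t |alpha|) (planar uniform case).
lam, c, a = 1.0, 1.0, 1.5
h = 0.005
tt = np.arange(0.0, 2.0 + h, h)
psi = j0(c * a * tt)
k = np.exp(-lam * tt) * psi          # kernel e^{-lambda t} psi(alpha, t)

# Trapezoidal marching scheme for the Volterra equation (rec10).
G = np.zeros_like(tt)
G[0] = k[0]                          # G(alpha, 0) = 1
for i in range(1, len(tt)):
    acc = 0.5 * k[i] * G[0] + np.sum(k[i - 1:0:-1] * G[1:i])
    G[i] = (k[i] + lam * h * acc) / (1.0 - 0.5 * lam * h * k[0])

# Truncation of the series (rec12) built from discrete time convolutions.
def conv(f, g):
    out = np.zeros_like(f)
    for i in range(1, len(f)):
        w = f[:i + 1] * g[i::-1]
        out[i] = h * (np.sum(w) - 0.5 * (w[0] + w[-1]))
    return out

power, series = psi.copy(), psi.copy()
for n in range(1, 30):
    power = conv(psi, power)         # discrete psi^{* (n+1)}
    series = series + lam ** n * power
series = np.exp(-lam * tt) * series

err = np.max(np.abs(G - series))     # the two computations should agree
```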
\section{Integral Equation for Transition Density} \numberwithin{equation}{section} The transition probability density $p(\bold x, t)$ of the multidimensional Markov random flight $\bold X(t)$ is defined by the formula \begin{equation}\label{int1} p(\bold x, t) = \sum_{n=0}^{\infty} p_n(\bold x, t) , \qquad \bold x\in\bold B^m(\bold 0, ct), \quad t>0, \end{equation} where the joint densities $p_n(\bold x, t), \; n\ge 0,$ are given by (\ref{joint0}) and (\ref{jointN}). The density (\ref{int1}) is defined everywhere in the ball $\bold B^m(\bold 0, ct)$, while the function \begin{equation}\label{int2} p_{ac}(\bold x, t) = \sum_{n=1}^{\infty} p_n(\bold x, t) \end{equation} forms its absolutely continuous part concentrated in the interior $\text{int} \; \bold B^m(\bold 0, ct)$ of the ball. Therefore, series (\ref{int2}) converges uniformly everywhere in the closed ball $\bold B^m(\bold 0, ct-\varepsilon)$ for any small $\varepsilon>0$. In the following theorem we state an integral equation for density (\ref{int1}). {\bf Theorem 2.} {\it The transition probability density $p(\bold x, t)$ of the Markov random flight $\bold X(t)$ satisfies the integral equation:} \begin{equation}\label{int3} p(\bold x, t) = p_0(\bold x, t) + \lambda \int_0^t \bigl[ p_0(\bold x, t-\tau) \overset{\bold x}{\ast} \; p(\bold x, \tau) \bigr] \; d\tau , \qquad \bold x\in\bold B^m(\bold 0, ct), \quad t>0.
\end{equation} {\it In the class of finitary functions (that is, generalized functions with compact support in $\Bbb R^m$), integral equation} (\ref{int3}) {\it has a unique solution given by the series} \begin{equation}\label{int4} p(\bold x,t) = \sum_{n=0}^{\infty} \lambda^n \left[ p_0(\bold x, t) \right]^{\overset{\bold x}{\ast} \overset{t}{\ast}(n+1)} , \end{equation} {\it where the symbol $\overset{\bold x}{\ast} \overset{t}{\ast}(n+1)$ means the $(n+1)$-fold double convolution with respect to the spatial and time variables defined by} (\ref{rec16a}), {\it that is,} $$\left[ p_0(\bold x, t) \right]^{\overset{\bold x}{\ast} \overset{t}{\ast}(n+1)} = \underbrace{p_0(\bold x, t) \overset{\bold x}{\ast} \overset{t}{\ast} p_0(\bold x, t) \overset{\bold x}{\ast} \overset{t}{\ast} \dots \overset{\bold x}{\ast} \overset{t}{\ast} p_0(\bold x, t)}_{(n+1) \; \text{times}} .$$ {\it Series} (\ref{int4}) {\it is convergent everywhere in the open ball} $\text{int} \; \bold B^m(\bold 0, ct)$. {\it For any small $\varepsilon>0$, series} (\ref{int4}) {\it converges uniformly (in $\bold x$ for any fixed $t>0$) in the closed ball $\bold B^m(\bold 0, ct-\varepsilon)$ and, therefore, it determines the density $p(\bold x, t)$, which is continuous and bounded in this ball}.
\vskip 0.2cm \begin{proof} Applying Theorem 1 and taking into account the uniform convergence of series (\ref{int2}) and of the integral in formula (\ref{rec14}), we have: $$\aligned p(\bold x, t) & = \sum_{n=0}^{\infty} p_n(\bold x, t) \\ & = p_0(\bold x, t) + \sum_{n=1}^{\infty} p_n(\bold x, t) \\ & = p_0(\bold x, t) + \lambda \sum_{n=1}^{\infty} \int_0^t \bigl[ p_0(\bold x, t-\tau) \overset{\bold x}{\ast} \; p_{n-1}(\bold x, \tau) \bigr] \; d\tau \\ & = p_0(\bold x, t) + \lambda \int_0^t \sum_{n=1}^{\infty} \bigl[ p_0(\bold x, t-\tau) \overset{\bold x}{\ast} \; p_{n-1}(\bold x, \tau) \bigr] \; d\tau \\ & = p_0(\bold x, t) + \lambda \int_0^t \left[ p_0(\bold x, t-\tau) \overset{\bold x}{\ast} \; \left\{ \sum_{n=1}^{\infty} p_{n-1}(\bold x, \tau) \right\} \right] \; d\tau \\ & = p_0(\bold x, t) + \lambda \int_0^t \left[ p_0(\bold x, t-\tau) \overset{\bold x}{\ast} \; \left\{ \sum_{n=0}^{\infty} p_n(\bold x, \tau) \right\} \right] \; d\tau \\ & = p_0(\bold x, t) + \lambda \int_0^t \bigl[ p_0(\bold x, t-\tau) \overset{\bold x}{\ast} \; p(\bold x, \tau) \bigr] \; d\tau , \endaligned$$ proving (\ref{int3}). Another way to prove the theorem is to apply the Fourier transformation to both sides of (\ref{int3}). Justifying the change of the order of integration as was done in (\ref{rec17}), we arrive at the Volterra integral equation (\ref{rec10}) for the Fourier transforms. Using notation (\ref{rec16a}), equation (\ref{int3}) can be represented in the convolutional form \begin{equation}\label{int5} p(\bold x, t) = p_0(\bold x, t) + \lambda \bigl[ p_0(\bold x, t) \overset{\bold x}{\ast} \overset{t}{\ast} p(\bold x, t) \bigr] , \qquad \bold x\in\bold B^m(\bold 0, ct), \quad t>0. \end{equation} Let us check that series (\ref{int4}) satisfies equation (\ref{int5}).
Substituting (\ref{int4}) into the right-hand side of (\ref{int5}), we have: $$\aligned p_0(\bold x, t) + \lambda \biggl[ p_0(\bold x, t) \overset{\bold x}{\ast} \overset{t}{\ast} \biggl( \sum_{n=0}^{\infty} \lambda^n \left[ p_0(\bold x, t) \right]^{\overset{\bold x}{\ast} \overset{t}{\ast}(n+1)} \biggr) \biggr] & = p_0(\bold x, t) + \sum_{n=0}^{\infty} \lambda^{n+1} \left[ p_0(\bold x, t) \right]^{\overset{\bold x}{\ast} \overset{t}{\ast}(n+2)} \\ & = p_0(\bold x, t) + \sum_{n=1}^{\infty} \lambda^n \left[ p_0(\bold x, t) \right]^{\overset{\bold x}{\ast} \overset{t}{\ast}(n+1)} \\ & = \sum_{n=0}^{\infty} \lambda^n \left[ p_0(\bold x, t) \right]^{\overset{\bold x}{\ast} \overset{t}{\ast}(n+1)} \\ & = p(\bold x, t) \endaligned$$ and, therefore, series (\ref{int4}) is indeed the solution of equation (\ref{int5}). Note that, applying the Fourier transformation to (\ref{int3}) and (\ref{int4}) and taking into account (\ref{densS}), we arrive at the known results (\ref{rec11}) and (\ref{rec12}), respectively. The uniqueness of solution (\ref{int4}) in the class of finitary functions follows from the uniqueness of the solution of the Volterra integral equation (\ref{rec10}) for its Fourier transform (i.e. the characteristic function (\ref{rec12})) in the class of continuous functions. Since the transition density $p(\bold x,t)$ is absolutely continuous in the open ball $\text{int} \; \bold B^m(\bold 0, ct)$, it is, for any $\varepsilon>0$, continuous and uniformly bounded in the closed ball $\bold B^m(\bold 0, ct-\varepsilon)$. From this fact, taking into account the uniqueness of the solution of integral equation (\ref{int3}) in the class of finitary functions, we conclude that series (\ref{int4}) converges uniformly in $\bold B^m(\bold 0, ct-\varepsilon)$ for any small $\varepsilon>0$. This completes the proof.
\end{proof} \section{Some Particular Cases} \numberwithin{equation}{section} In this section we consider two important particular cases of the general Markov random flight described in Section 2, in which the dissipation function is the uniform distribution on the unit sphere $S^m(\bold 0,1)$ or the Gaussian distribution on the unit circumference $S^2(\bold 0,1)$. \subsection{Symmetric Random Flights} Suppose that the initial and every new direction are chosen according to the uniform distribution on the unit sphere $S^m(\bold 0,1)$. Such processes in the Euclidean spaces $\Bbb R^m$ of different dimensions $m\ge 2$, which are referred to as the symmetric Markov random flights, have been the subject of a series of works \cite{kol2,kol3,kol4,kol5,kol6,kol7}, \cite{mas}, \cite{sta1,sta2}. In this symmetric case the function $\varrho(\bold x,t)$ is the density of the uniform distribution on the surface of the sphere $S^m(\bold 0, ct)$ and, therefore, it does not depend on the spatial variable $\bold x$. Then, according to (\ref{densS}), the singular part of the transition density of process $\bold X(t)$ takes the form: \begin{equation}\label{sym1} p_s(\bold x,t) = e^{-\lambda t} \frac{\Gamma\left( \frac{m}{2} \right)}{2\pi^{m/2}\; (ct)^{m-1}} \; \delta(c^2t^2-\Vert\bold x\Vert^2) , \qquad m\ge 2, \quad t>0.
\end{equation} Therefore, according to Theorem 1, for arbitrary dimension $m\ge 2$, the absolutely continuous parts $f_n(\bold x, t)$, \; $n\ge 0,$ of the joint probability densities of the symmetric Markov random flight are connected with each other by the following recurrent relation: \begin{equation}\label{sym2} f_{n+1}(\bold x, t) = \frac{\lambda \; \Gamma\left( \frac{m}{2} \right)}{2\pi^{m/2}\; c^{m-1}} \int_0^t \frac{e^{-\lambda(t-\tau)}}{(t-\tau)^{m-1}} \biggl\{ \int\limits_{M(\bold x,\tau)} f_n(\boldsymbol\xi,\tau) \; d\boldsymbol\xi \biggr\} \; d\tau , \end{equation} $$\bold x=(x_1,\dots,x_m) \in\text{int} \; \bold B^m(\bold 0, ct), \quad m\ge 2, \quad n\ge 0, \quad t>0,$$ where the integration area $M(\bold x,\tau)$ is given by (\ref{setM}). It is known (see \cite[formula (7)]{kol4}) that, in arbitrary dimension $m\ge 2$, the joint density of the symmetric Markov random flight $\bold X(t)$ and of a single change of direction is given by the formula \begin{equation}\label{sym3} f_1(\bold x,t) = \lambda e^{-\lambda t} \; \frac{2^{m-3} \Gamma\left( \frac{m}{2} \right)}{\pi^{m/2} c^m t^{m-1}} \; F\left( \frac{m-1}{2}, -\frac{m}{2}+2; \; \frac{m}{2}; \; \frac{\Vert\bold x\Vert^2}{c^2t^2} \right) , \end{equation} $$\bold x=(x_1,\dots,x_m)\in\text{int} \; \bold B^m(\bold 0, ct), \quad m\ge 2, \quad t>0,$$ where $$F(\alpha,\beta;\gamma;z) = \sum_{k=0}^{\infty} \frac{(\alpha)_k (\beta)_k}{(\gamma)_k} \; \frac{z^k}{k!}$$ is the Gauss hypergeometric function.
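For $m=3$ the hypergeometric factor in (\ref{sym3}) is $F(1, \frac12; \frac32; z)$, which reduces to the elementary function $\operatorname{artanh}(\sqrt z)/\sqrt z$, so that in dimension $3$ the joint density $f_1$ takes a purely logarithmic form. A quick numerical check of this classical reduction:

```python
import math
from scipy.special import hyp2f1

# m = 3 in (sym3): F((m-1)/2, -m/2 + 2; m/2; z) = F(1, 1/2; 3/2; z),
# which equals artanh(sqrt(z)) / sqrt(z).
diffs = []
for z in [0.01, 0.25, 0.5, 0.9]:
    w = math.sqrt(z)
    closed = math.log((1 + w) / (1 - w)) / (2 * w)   # artanh(w) / w
    diffs.append(abs(hyp2f1(1.0, 0.5, 1.5, z) - closed))
```

Together with $\Gamma(\frac32)=\frac{\sqrt\pi}{2}$, this reduction turns (\ref{sym3}) for $m=3$ into an elementary logarithmic expression.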
Then, by substituting (\ref{sym3}) into (\ref{sym2}) (for $n=1$), we obtain the following formula for the joint density of process $\bold X(t)$ and of two changes of direction: \begin{equation}\label{sym4} \aligned f_2(\bold x,t) & = \lambda^2 e^{-\lambda t} \; \frac{2^{m-4} \left[\Gamma\left( \frac{m}{2} \right)\right]^2}{\pi^m \; c^{2m-1}} \\ & \qquad\times \int_0^t \biggl\{ \int\limits_{M(\bold x,\tau)} F\left( \frac{m-1}{2}, -\frac{m}{2}+2; \; \frac{m}{2}; \; \frac{\Vert\boldsymbol\xi\Vert^2}{c^2\tau^2} \right) d\boldsymbol\xi \biggr\} \; \frac{d\tau}{(\tau(t-\tau))^{m-1}} , \endaligned \end{equation} $$\bold x=(x_1,\dots,x_m)\in\text{int} \; \bold B^m(\bold 0, ct), \quad m\ge 2, \quad t>0.$$ In the three-dimensional Euclidean space $\Bbb R^3$, joint density (\ref{sym3}) was computed explicitly by different methods and it has the form (see \cite[formula (25)]{kol5} or \cite[the second term of formulas (1.3) and (4.21)]{sta1}): \begin{equation}\label{sym5} f_1(\bold x,t) = \frac{\lambda e^{-\lambda t}}{4\pi c^2 t \Vert\bold x\Vert} \; \ln\left( \frac{ct+\Vert\bold x\Vert}{ct-\Vert\bold x\Vert} \right) , \end{equation} $$\bold x=(x_1,x_2,x_3)\in\text{int} \; \bold B^3(\bold 0, ct), \quad \Vert\bold x\Vert=\sqrt{x_1^2+x_2^2+x_3^2}, \quad t>0.$$ By substituting this joint density into (\ref{sym2}) (for $n=1, \; m=3$), we arrive at the formula: \begin{equation}\label{sym6} f_2(\bold x,t) = \frac{\lambda^2 e^{-\lambda t}}{16 \pi^2 c^4} \int_0^t \biggl\{ \int\limits_{M(\bold x,\tau)} \ln\left( \frac{c\tau+\Vert\boldsymbol\xi\Vert}{c\tau-\Vert\boldsymbol\xi\Vert} \right) \frac{d\boldsymbol\xi}{\Vert\boldsymbol\xi\Vert} \biggr\} \; \frac{d\tau}{\tau(t-\tau)^2} , \end{equation} $$\bold x=(x_1,x_2,x_3)\in\text{int} \; \bold B^3(\bold 0, ct), \quad t>0.$$ Formula (\ref{sym6}) can also be obtained by setting $m=3$ in (\ref{sym4}). 
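Formula (\ref{sym5}) can also be probed by direct simulation: given exactly one change of direction, the switching instant is uniformly distributed on $(0,t)$, and the particle's position is $\bold X(t)=c\tau\bold d_1 + c(t-\tau)\bold d_2$ with independent uniform directions $\bold d_1, \bold d_2$ on the unit sphere. Dividing (\ref{sym5}) by $\text{Pr}\{N(t)=1\}=\lambda t e^{-\lambda t}$ and integrating, one finds $E\Vert\bold X(t)\Vert^2 = \frac{2}{3}c^2t^2$ under this conditioning, which the sketch below reproduces (parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
c, t, n = 1.0, 1.0, 200_000          # illustrative parameter values

def unit_vectors(k, rng):
    """k independent directions uniformly distributed on the unit sphere."""
    v = rng.standard_normal((k, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

tau = rng.uniform(0.0, t, size=n)    # the single switching instant on (0, t)
d1, d2 = unit_vectors(n, rng), unit_vectors(n, rng)
X = c * tau[:, None] * d1 + c * (t - tau)[:, None] * d2
r2 = np.sum(X * X, axis=1)

mean_r2 = r2.mean()                  # should approximate (2/3) c^2 t^2
```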
According to Theorem 2 and (\ref{sym1}), the transition density of the $m$-dimensional symmetric Markov random flight solves the integral equation \begin{equation}\label{sym7} \aligned p(\bold x, t) & = \frac{\Gamma\left( \frac{m}{2} \right)}{2\pi^{m/2}\; c^{m-1}} \biggl\{ \frac{e^{-\lambda t}}{t^{m-1}} \; \delta(c^2t^2-\Vert\bold x\Vert^2) \\ & \qquad + \lambda \int_0^t \biggl[ \left( \frac{e^{-\lambda (t-\tau)}}{(t-\tau)^{m-1}} \; \delta(c^2(t-\tau)^2-\Vert\bold x\Vert^2) \right) \overset{\bold x}{\ast} \; p(\bold x, \tau) \biggr] \; d\tau \biggr\} , \endaligned \end{equation} $$\bold x=(x_1,\dots,x_m)\in\bold B^m(\bold 0, ct), \quad t>0.$$ In the class of finitary functions, equation (\ref{sym7}) has a unique solution given by the series \begin{equation}\label{sym8} p(\bold x,t) = \sum_{n=0}^{\infty} \lambda^n \left( \frac{\Gamma\left( \frac{m}{2} \right)}{2\pi^{m/2} \; c^{m-1}} \right)^{n+1} \left[ \frac{e^{-\lambda t}}{t^{m-1}} \; \delta(c^2t^2-\Vert\bold x\Vert^2) \right]^{\overset{\bold x}{\ast} \overset{t}{\ast}(n+1)} . \end{equation} \subsection{Gaussian Distribution on Circumference} Consider now the case of the non-symmetric planar random flight in which the initial and each new direction are chosen according to the Gaussian distribution on the unit circumference $S^2(\bold 0,1)$ with the two-dimensional density \begin{equation}\label{gauss1} \chi_k(\bold x) = \frac{1}{2\pi \; I_0(k)} \; \exp\left( \frac{k x_1}{\Vert\bold x\Vert} \right) \; \delta(1-\Vert\bold x\Vert^2) , \end{equation} $$\bold x =(x_1,x_2)\in\Bbb R^2, \qquad \Vert\bold x\Vert=\sqrt{x_1^2+x_2^2}, \qquad k\in\Bbb R,$$ where $I_0(z)$ is the modified Bessel function of order 0. Formula (\ref{gauss1}) determines the one-parameter family of Gaussian densities $\left\{ \chi_k(\bold x), \; k\in\Bbb R \right\}$, and for any fixed real $k\in\Bbb R$ the density $\chi_k(\bold x)$ is absolutely continuous and uniformly bounded on $S^2(\bold 0,1)$.
If $k=0$, then formula (\ref{gauss1}) reduces to the density of the uniform distribution on the unit circumference $S^2(\bold 0,1)$, while for $k\neq 0$ it produces pure Gaussian-type densities. In polar coordinates $x_1=\cos\theta, \; x_2=\sin\theta$ on the unit circumference, formula (\ref{gauss1}) takes the form of the circular Gaussian law: \begin{equation}\label{gauss2} \chi_k(\theta) = \frac{e^{k \cos\theta}}{2\pi \; I_0(k)} , \qquad \theta\in [-\pi, \pi), \quad k\in\Bbb R. \end{equation} For arbitrary real $k\in\Bbb R$, Gaussian density (\ref{gauss1}) on the unit circumference $S^2(\bold 0,1)$ generates the Gaussian density \begin{equation}\label{gauss3} p_s(\bold x,t) = \frac{e^{-\lambda t}}{2\pi ct \; I_0(k)} \; \exp\left( \frac{k x_1}{\Vert\bold x\Vert} \right) \; \delta(c^2t^2-\Vert\bold x\Vert^2) , \end{equation} $$\bold x =(x_1,x_2)\in\Bbb R^2, \quad \Vert\bold x\Vert=\sqrt{x_1^2+x_2^2}, \quad t>0, \quad k\in\Bbb R,$$ concentrated on the circumference $S^2(\bold 0, ct)$ of radius $ct$.
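The circular law (\ref{gauss2}) is the classical von Mises density, and its normalization through $I_0(k)$ follows from the integral representation $I_0(k)=\frac{1}{2\pi}\int_{-\pi}^{\pi} e^{k\cos\theta}\,d\theta$; a direct quadrature check for several values of $k$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import i0

# Normalization of the circular Gaussian (von Mises) density (gauss2)
# for several values of the parameter k, including the uniform case k = 0.
totals = []
for k in [0.0, 0.5, 2.0, 10.0]:
    total, _ = quad(lambda th: np.exp(k * np.cos(th)) / (2.0 * np.pi * i0(k)),
                    -np.pi, np.pi)
    totals.append(total)
```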
Then, according to Theorem 1, the joint densities are connected with each other by the recurrent relation \begin{equation}\label{gauss4} \aligned & f_{n+1}(\bold x, t) \\ & = \frac{\lambda}{2\pi c I_0(k)} \int_0^t \biggl\{ \int\limits_{M(\bold x,\tau)} \exp\left( \frac{k (x_1-\xi_1)}{\sqrt{(x_1-\xi_1)^2+(x_2-\xi_2)^2}} \right) \; f_n(\xi_1,\xi_2,\tau) \; d\xi_1 d\xi_2 \biggr\} \frac{e^{-\lambda(t-\tau)}}{t-\tau} \; d\tau , \endaligned \end{equation} $$\bold x =(x_1,x_2)\in \; \text{int} \; \bold B^2(\bold 0, ct), \quad n\ge 0, \quad t>0, \quad k\in\Bbb R.$$ According to Theorem 2 and (\ref{gauss3}), the transition density of the planar Markov random flight with Gaussian dissipation function (\ref{gauss1}) satisfies the integral equation \begin{equation}\label{gauss5} \aligned p(\bold x,t) & = \frac{e^{-\lambda t}}{2\pi ct \; I_0(k)} \; \exp\left( \frac{k x_1}{\Vert\bold x\Vert} \right) \; \delta(c^2t^2-\Vert\bold x\Vert^2) \\ & \qquad + \frac{\lambda}{2\pi c \; I_0(k)} \int_0^t \left[ \left( \frac{e^{-\lambda\tau}}{\tau} \; \exp\left( \frac{k x_1}{\Vert\bold x\Vert} \right) \; \delta(c^2\tau^2-\Vert\bold x\Vert^2) \right) \overset{\bold x}{\ast} \; p(\bold x, \tau) \right] d\tau , \endaligned \end{equation} $$\bold x=(x_1,x_2)\in\bold B^2(\bold 0, ct), \quad \Vert\bold x\Vert=\sqrt{x_1^2+x_2^2}, \quad t>0, \quad k\in\Bbb R.$$ In the class of finitary functions, equation (\ref{gauss5}) has the unique solution given by the series \begin{equation}\label{gauss6} p(\bold x,t) = \sum_{n=0}^{\infty} \lambda^n \left( \frac{1}{2\pi c \; I_0(k)} \right)^{n+1} \left[ \frac{e^{-\lambda t}}{t} \; \exp\left( \frac{k x_1}{\Vert\bold x\Vert} \right) \; \delta(c^2 t^2-\Vert\bold x\Vert^2) \right]^{\overset{\bold x}{\ast} \overset{t}{\ast}(n+1)} . \end{equation} \end{document}
\begin{document} \title{Conditions for Boundedness into Hardy spaces} \author[Grafakos]{Loukas Grafakos} \address{Department of Mathematics, University of Missouri, Columbia, MO 65211} \email{[email protected]} \author[Nakamura]{Shohei Nakamura} \address{Department of Mathematical Science and Information Science, Tokyo Metropolitan University, 1-1 Minami-Ohsawa, Hachioji, Tokyo, 192-0397, Japan} \email{[email protected]} \author[Nguyen]{Hanh Van Nguyen} \address{Department of Mathematics, University of Alabama, Tuscaloosa, AL 35487} \email{[email protected]} \author[Sawano]{Yoshihiro Sawano} \address{Department of Mathematical Science and Information Science, Tokyo Metropolitan University, 1-1 Minami-Ohsawa, Hachioji, Tokyo, 192-0397, Japan} \email{[email protected]} \thanks{The first author would like to thank the Simons Foundation.} \thanks{MSC 42B15, 42B30} \begin{abstract} We obtain boundedness from a product of Lebesgue or Hardy spaces into Hardy spaces under suitable cancellation conditions for a large class of multilinear operators that includes the Coifman-Meyer class, sums of products of linear Calder\'{o}n-Zygmund operators and combinations of these two types. \end{abstract} \maketitle \tableofcontents \section{Introduction} In this work, we obtain boundedness for multilinear singular operators of various types from products of Lebesgue or Hardy spaces into Hardy spaces, under suitable cancellation conditions. This particular line of investigation was initiated in the work of Coifman, Lions, Meyer and Semmes \cite{CLMS} who showed that certain bilinear operators with vanishing integral map $L^q\times L^{q'} $ into the Hardy space $H^1$ for $1<q<\infty$ with $q'=q/( q-1)$. This result was extended by Dobyinski \cite{Do} to Coifman-Meyer multiplier operators and by Coifman and Grafakos \cite{CG} to finite sums of products of Calder\'{o}n-Zygmund operators.
In \cite{CG} boundedness was extended to $H^{p_1}\times H^{p_2}\to H^p$ for the entire range $0<p_1,p_2,p<\infty$ and $1/p=1/p_1+1/p_2$, under the necessary cancellation conditions. Additional proofs of these results were provided by Grafakos and Li \cite{GLLXW00}, Hu and Meng \cite{HuMe12}, and Huang and Liu \cite{HuangLiu13}. The aforementioned accounts are based on different approaches and address two classes of operators, but the proofs in \cite{CG}, \cite{HuMe12}, and \cite{HuangLiu13} seem to contain flaws; in fact, as of this writing, only the approach in \cite{GLLXW00} stands, which deals with the case of finite sums of products of Calder\'{o}n-Zygmund operators. In this work we revisit this line of investigation via a new method based on $(p,\infty)$-atomic decompositions. Our approach is powerful enough to encompass many types of multilinear operators, including all those previously studied (Coifman-Meyer type and finite sums of products of Calder\'on-Zygmund operators) as well as mixed types. An alternative approach to Hardy space estimates for bilinear operators has appeared in the recent work of Hart and Lu \cite{HartLu}. Recall that the Hardy space $H^p$ with $0<p<\infty$ is the space of all tempered distributions $f$ for which \[ \|f\|_{H^p}=\big\|\sup_{t>0}|e^{t\Delta}f|\big\|_{L^p} \] is finite, where $e^{t\Delta}$ denotes the heat semigroup; this definition is meaningful for all $0<p \le \infty$. Note that $H^p$ and $L^p$ coincide, with equivalent norms, when $1<p \le \infty$. In this work we study the boundedness into $H^p$ of the following three types of operators: \begin{itemize} \item multilinear singular integral operators of Coifman-Meyer type; \item sums of $m$-fold products of linear Calder\'{o}n-Zygmund singular integrals; \item multilinear singular integrals of mixed type (i.e., combinations of the previous two types). \end{itemize} Let $m,n$ be positive integers.
For a bounded function $\sigma$ on $({\mathbb R}^n)^m$ we consider the multilinear operator \[ {\mathcal T}_\sigma(f_1,\ldots,f_m)(x) = \int_{({\mathbb R}^n)^m} \sigma(\xi_1,\ldots,\xi_m) \widehat{f_1}(\xi_1)\cdots\widehat{f_m}(\xi_m) e^{2\pi i x\cdot (\xi_1+\cdots+\xi_m)} \,d\xi_1\cdots\,d\xi_m \quad (x \in {\mathbb R}^n) \] for $f_1,\ldots,f_m \in {\mathscr S}$. Here $\mathscr S$ is the space of Schwartz functions and $\widehat f (\xi) = \int_{\mathbb R^n} f(x) e^{-2\pi i x\cdot \xi}dx$ is the Fourier transform of a given Schwartz function $f$ on $\mathbb R^n$. The space of tempered distributions is denoted by ${\mathscr S}'$. Certain conditions on $\sigma$ imply that ${\mathcal T}_\sigma$ extends to a bounded linear operator from $L^{p_1} \times \cdots \times L^{p_m}$ to $L^p$ as long as $1<p_1,\ldots,p_m \le \infty$ and $0<p<\infty$ satisfy \begin{equation}\label{Holder} \frac{1}{p} = \frac{1}{p_1}+\cdots+\frac{1}{p_m}. \end{equation} One such condition is the following Coifman-Meyer condition (modeled after the classical Mihlin linear multiplier condition): \begin{equation}\label{CMcond} |\partial^\alpha \sigma(\xi_1,\ldots,\xi_m)| \lesssim (|\xi_1|+\cdots+|\xi_m|)^{-|\alpha|}, \quad (\xi_1,\ldots,\xi_m) \in ({\mathbb R}^n)^m \setminus \{0\} \end{equation} for $\alpha \in ({\mathbb N}_0{}^n)^m$ satisfying $|\alpha| \le M$ for some large $M$. Such operators are called $m$-linear Calder\'{o}n-Zygmund operators and there is a rich theory for them analogous to the linear one. An $m$-linear Calder\'{o}n-Zygmund operator associated with a Calder\'{o}n-Zygmund kernel $K$ on $\mathbb{R}^{mn}$ is defined by \begin{equation}\label{eq.CalZygOPT} {\mathcal T}_\sigma(f_1,\ldots,f_m)(x) = \int_{({\mathbb R}^n)^m} K(x-y_1,\ldots,x-y_m) f_1(y_1)\cdots f_m(y_m)\; dy_1\cdots dy_m, \end{equation} where $\sigma$ is the distributional Fourier transform of $K$ on $({\mathbb R}^n)^m$ that satisfies (\ref{CMcond}).
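For orientation, a simple example of a symbol satisfying \eqref{CMcond} in the bilinear case $m=2$ is any function that is smooth away from the origin and homogeneous of degree $0$, such as
\[
\sigma(\xi_1,\xi_2) = \frac{\xi_1\cdot\xi_2}{|\xi_1|^2+|\xi_2|^2}, \qquad (\xi_1,\xi_2)\in(\mathbb{R}^n)^2\setminus\{0\};
\]
indeed, each derivative $\partial^\alpha\sigma$ is then homogeneous of degree $-|\alpha|$ and bounded on the unit sphere, which yields \eqref{CMcond} for every $\alpha$.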
When $m=1$, these operators reduce to classical Calder\'{o}n-Zygmund singular integral operators. An $m$-linear operator of {\it product type} on $\mathbb{R}^{mn}$ is defined by \begin{equation}\label{eq.CalZygOPT-2} \sum_{\rho=1}^T T_{\sigma_1^\rho}(f_1)(x)\cdots T_{\sigma_m^\rho}(f_m)(x) \quad (x \in {\mathbb R}^n), \end{equation} where the $T_{\sigma_j^\rho}$'s are linear Calder\'{o}n-Zygmund operators associated with the multipliers $\sigma_j^\rho$. In terms of kernels these operators can be expressed as \[ {\mathcal T}_\sigma(f_1,\ldots,f_m)(x) = \sum_{\rho=1}^T \prod_{j=1}^m \int_{{\mathbb R}^n} K^\rho_{\sigma_j}(x-y_j) f_j(y_j) dy_j, \] where $K^\rho_{\sigma_1},\ldots,K^\rho_{\sigma_m}$ are the Calder\'{o}n-Zygmund kernels of the operators $T_{\sigma_1^\rho},\ldots,T_{\sigma_m^\rho}$, respectively, for $\rho=1,\ldots,T$. In this work we also consider operators of {\it mixed type}, i.e., of the form \begin{equation}\label{eq.CalZygOPT-3} {\mathcal T}_\sigma(f_1,\ldots,f_m)(x) = \sum_{\rho=1}^T \sum_{\substack{I_1^\rho,\ldots,I_{G(\rho)}^\rho }} \prod_{g=1}^{G(\rho)} T_{\sigma_{I_g^\rho}}(\{f_l\}_{l \in I_g^{{\rho}} })(x), \end{equation} where for each $\rho=1,\ldots,T$, $I^\rho_1, \ldots ,I^\rho_{G(\rho)}$ is a partition of $\{1,\ldots,m\}$ and each $T_{\sigma_{I_g^\rho}}$ is an $| I_g^\rho|$-linear Coifman-Meyer multiplier operator. We write $I^\rho_1+ \cdots + I^\rho_{G(\rho)}=\{1,\ldots,m\}$ to denote such partitions. In this work, we study operators of the form \eqref{eq.CalZygOPT}, \eqref{eq.CalZygOPT-2}, and \eqref{eq.CalZygOPT-3}. We will be working with indices in the following range: \[ 0<p_1,\ldots,p_m \le \infty, \quad 0<p< \infty, \] satisfying (\ref{Holder}). Throughout this paper we reserve the letter $s$ to denote the following index: \begin{equation}\label{eq:s} s=[n(1/p-1)]_+ \end{equation} and we fix a sufficiently large integer $N\gg s$, say $N=m(n+1+2s)$.
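To illustrate these indices, consider the bilinear case $m=2$, $n=1$ with $p_1=p_2=1$. Then \eqref{Holder} gives $p=\tfrac12$, so
\[
s=\big[\,1\cdot(2-1)\,\big]_+=1, \qquad N=m(n+1+2s)=2(1+1+2)=8;
\]
thus the atoms employed below have vanishing moments up to order $8$, while the cancellation conditions imposed on ${\mathcal T}_\sigma$ in the theorems below involve only moments up to order $s=1$.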
We recall that a $(p,\infty)$-atom is an $L^\infty$-function $a$ that satisfies $|a|\le \chi_Q$, where $Q$ is a cube in $\mathbb R^n$ with sides parallel to the axes, and \[ \int_{{\mathbb R}^n}x^\alpha a(x)\,dx=0 \] for all $\alpha$ with $|\alpha| \le N$. By convention, when $p=\infty$, $a$ is called an $(\infty,\infty)$-atom if $Q= \mathbb{R}^n$ and $\|a\|_{L^{\infty}} \le 1.$ No cancellations are required for $(\infty,\infty)$-atoms. Our main results are as follows: \begin{theorem}\label{ThmMain} Let ${\mathcal T}_\sigma$ be the operator defined in \eqref{eq.CalZygOPT} and assume that it satisfies \eqref{CMcond}. Let $0<p_1,\ldots,p_m \le \infty$ and $0<p<\infty$ satisfy $(\ref{Holder})$. Assume that \begin{equation}\label{eq.TmCan} \int_{{\mathbb R}^n} x^{\alpha}{\mathcal T}_\sigma(a_1,\ldots,a_m)(x)\; dx=0 \end{equation} for all $|\alpha|\le s$ and all $(p_l,\infty)$-atoms $a_l$. Then ${\mathcal T}_\sigma$ can be extended to a bounded map from $H^{p_1}\times\cdots\times H^{p_m}$ to $H^p$. \end{theorem} \begin{theorem}\label{ThmMain-2} Let ${\mathcal T}_\sigma$ be the operator defined in \eqref{eq.CalZygOPT-2}, where each $\sigma_j^\rho$ satisfies \eqref{CMcond} with $m=1$, and let $0<p_1,\ldots,p_m< \infty$ and $0<p<\infty$ satisfy $(\ref{Holder})$. Assume that $(\ref{eq.TmCan})$ holds for all $|\alpha|\le s$. Then ${\mathcal T}_\sigma$ can be extended to a bounded map from $H^{p_1}\times\cdots\times H^{p_m}$ to $H^p$. \end{theorem} \begin{theorem}\label{ThmMain-3} Let ${\mathcal T}_\sigma$ be the operator defined in \eqref{eq.CalZygOPT-3}, and let $0<p_1,\ldots,p_m \le \infty$ and $0<p<\infty$ satisfy $(\ref{Holder})$. Suppose that each $\sigma_{I_g^\rho}$ satisfies \eqref{CMcond} with $m=|I_g^\rho|$. Assume that $(\ref{eq.TmCan})$ holds for all $|\alpha|\le s$ and that \begin{equation}\label{160902-1} \sup_{\rho\, =1,\ldots,T}\ \sup_{I^\rho_1+ \cdots + I^\rho_{G(\rho)}=\{1,\ldots,m\}}\ \sup_{g=1,\ldots,G(\rho)}\ \inf_{l \in I_g^\rho}p_l<\infty.
\end{equation} Then ${\mathcal T}_\sigma$ can be extended to a bounded map from $H^{p_1}\times\cdots\times H^{p_m}$ to $H^p$. \end{theorem} \begin{remark} \noindent (1) In Theorem \ref{ThmMain-2}, we require $p_l<\infty$ for all $l=1,\ldots, m$. In fact, one cannot expect the mapping property of $\mathcal{T}_\sigma$ in \eqref{eq.CalZygOPT-2} if $p_l=\infty$ for some $l=1,\ldots,m$. Similarly, in Theorem \ref{ThmMain-3}, we assume \eqref{160902-1} instead of excluding the case $p_l=\infty$ for some $l=1,\ldots,m$. (2) The convergence of the integral in \eqref{eq.TmCan} is a consequence of Lemma \ref{lm.3A1} for all $x$ outside the union of a fixed multiple of the supports of the $a_l$, while the function $\mathcal{T}_\sigma(a_1, \dots, a_m)$ is integrable on any compact set. \end{remark} A few comments about the notation. For brevity we write $d\vec{y}=dy_1\,\cdots\,dy_m$ and we use the symbol $C$ to denote a nonessential constant whose value may vary at different occurrences. For $(k_1,\ldots,k_m) \in {\mathbb Z}^m$, we write $\vec{k}=(k_1,\ldots,k_m)$. We use the notation $A\lesssim B$ to indicate that $A\le C\, B$ for some constant $C$. We denote the Hardy-Littlewood maximal operator by $M$: \begin{equation}\label{eq:160918-1} M f(x)=\sup_{r>0}\frac{1}{r^n}\int_{B(x,r)}|f(y)|\,dy. \end{equation} We say that $A\approx B$ if both $A\lesssim B$ and $B\lesssim A$ hold. The cardinality of a finite set $J$ is denoted by either $|J|$ or $\sharp J$. A cube $Q$ in $\mathbb R^n$ has sides parallel to the axes. For a cube $Q$, we denote by $Q^*$ the concentric cube dilated by the length scale factor $3\sqrt{n}$; then \begin{equation}\label{eq:QStar} Q^*=3\sqrt{n}\,Q, \quad Q^{**}=9n\, Q. \end{equation} \section{Preliminaries and related results} \label{s:160725-2} \subsection{Equivalent definitions of Hardy spaces} We begin this section by recalling Hardy spaces.
Let $\phi \in C^\infty_{\rm c}$ satisfy \begin{equation}\label{eq:160716-101} {\rm supp}(\phi) \subset \left\{x\in \mathbb{R}^n\ :\ |x|\le 1\right\} \end{equation} and \begin{equation}\label{eq:160716-102} \int_{{\mathbb R}^n} \phi(y)\; dy=1. \end{equation} For $t>0$, we set $\phi_t(x)=t^{-n}\phi(t^{-1}x)$. The maximal function $M_\phi$ associated with the smooth bump $\phi$ is given by \begin{equation}\label{eq:M phi} M_\phi(f)(x) = \sup_{t>0}\big| (\phi_t*f)(x) \big| = \sup_{t>0}\Big| t^{-n}\int_{{\mathbb R}^n} \phi\big(y/t\big)f(x-y)\; dy \Big| \end{equation} for $f\in \mathscr S'(\mathbb R^n)$. For $0<p<\infty$, the Hardy space $H^{p}$ is characterized as the space of all tempered distributions $f$ for which $M_\phi(f)\in L^p$; moreover, the $H^p$ quasinorm satisfies \[ \|f\|_{H^p} \approx \|M_\phi(f)\|_{L^p}. \] Denote by $\mathscr C^\infty_{\rm c}$ the space of all smooth functions on $\mathbb{R}^n$ with compact support. The following density property of Hardy spaces will be useful in the proof of the main theorems. \begin{proposition}[{\rm \cite[Chapter III, 5.2(b)]{SteinHA}}] \label{prop:160726-1} Let $N \gg s$ be fixed. Then the following space is dense in $H^p$: \[ \mathcal{O}_N(\mathbb{R}^n)= \bigcap_{\alpha \in {\mathbb N}_0^n, |\alpha| \le N} \left\{f \in \mathscr C^\infty_{\rm c}\,:\, \int_{{\mathbb R}^n}x^\alpha f(x)\,dx=0\right\}. \] \end{proposition} This definition of the Hardy space is convenient to work with thanks to the following atomic decomposition: \begin{theorem}[\cite{Nakai-Sawano-2014}] \label{th-molecule} Let $0<p<\infty$.
If $f \in H^p$, then there exist a collection of $(p,\infty)$-atoms $\{a_k\}_{k=1}^\infty$ supported in cubes $\{Q_k\}_{k=1}^\infty$ and a nonnegative sequence $\{\lambda_k\}_{k=1}^\infty$ such that \[ f=\sum_{k=1}^\infty \lambda_k a_k \] in ${\mathscr S}'({\mathbb R}^n)$, and \begin{equation*} \Big\| \sum_{k=1}^\infty \lambda_k\chi_{Q_k} \Big\|_{L^p} \lesssim \|f\|_{H^p}. \end{equation*} Moreover, if $f \in \mathscr C^\infty_{\rm c}$ and $\displaystyle \int_{{\mathbb R}^n}x^\alpha f(x)\,dx=0 $ for all $\alpha$ with $|\alpha| \le [n(1/p-1)]_+$, then we can arrange that $\lambda_k=0$ for all but finitely many $k$. \end{theorem} The following lemma, whose proof is just an application of the Fefferman-Stein vector-valued inequality for the maximal function, will be used frequently in the next sections. \begin{lemma} \label{lm.2B00} If $\gamma >\max(1,\frac1p)$, $0<p<\infty$, $\lambda_k\ge 0$ and $\{Q_k\}_k$ is a sequence of cubes, then \[ \Big\| \sum_{k}\lambda_k (M\chi_{Q_k})^{\gamma} \Big\|_{L^p} \lesssim \Big\|\sum_{k}\lambda_k \chi_{Q_k}\Big\|_{L^p}. \] In particular, \[ \Big\|\sum_{k}\lambda_k \chi_{Q_k^{**}}\Big\|_{L^p} \lesssim \Big\|\sum_{k}\lambda_k \chi_{Q_k}\Big\|_{L^p}. \] \end{lemma} We will also make use of the following result: \begin{lemma}\label{lm.2A00} Let $p \in(0,\infty)$. Assume that $q \in (p,\infty] \cap [1,\infty]$. Suppose that we are given a sequence of cubes $\{Q_j\}_{j=1}^\infty$ and a sequence of non-negative $L^q$-functions $\{F_j\}_{j=1}^\infty$. Then \[ \Big\| \sum_{j=1}^\infty \chi_{Q_j}F_j \Big\|_{L^p} \lesssim \Big\| \sum_{j=1}^\infty \left( \frac{1}{|Q_j|} \int_{Q_j}F_j(y)^q\,dy\right)^{1/q} \chi_{Q_j} \Big\|_{L^p}. \] \end{lemma} \begin{proof} See \cite{HuMe12} for the case $0<p \le 1$ and \cite{Nakai-Sawano-2014}, \cite{Sawano13} for the case $1<p<\infty$.
\end{proof} \subsection{Reductions in the proof of main results} To start the proof of the main results, let $p_1,\ldots,p_m$ and $p$ be given as in Theorems \ref{ThmMain}, \ref{ThmMain-2} or \ref{ThmMain-3} and note that $H^{p_l}\cap\mathcal{O}_N(\mathbb{R}^n)$ is dense in $H^{p_l}$ for $1\le l\le m$ and $0<p_l<\infty$. Recall the integer $N\gg s$ and fix $f_l\in H^{p_l}\cap\mathcal{O}_N(\mathbb{R}^n)$ for those $l$ with $0<p_l<\infty$. By Theorem \ref{th-molecule}, we can decompose $f_l = \sum_{k_l =1}^\infty \lambda_{l,k_l} a_{l,k_l}$, where $\{\lambda_{l,k_l}\}_{k_l=1}^\infty$ is a non-negative sequence with only finitely many nonzero terms and $\{a_{l,k_l}\}_{k_l=1}^\infty$ is a sequence of $(p_l,\infty)$-atoms such that $a_{l,k_l}$ is supported in a cube $Q_{l,k_l}$ satisfying \[ |a_{l,k_l}| \le \chi_{Q_{l,k_l}},\quad \int_{{\mathbb R}^n}x^\alpha a_{l,k_l}(x)\,dx=0,\quad \ |\alpha| \le N, \] and such that \begin{equation}\label{crucial77} \Big\| \sum_{k_l=1}^\infty \lambda_{l,k_l}\chi_{Q_{l,k_l}} \Big\|_{L^{p_l}} \lesssim \|f_l\|_{H^{p_l}}. \end{equation} If $p_l=\infty$ and $f_l\in L^{\infty}$, then we conventionally write $f_l=\lambda_{l,k_l} a_{l,k_l}$, where $\lambda_{l,k_l}=\|f_l\|_{L^{\infty}}$ and $a_{l,k_l}=\|f_l\|_{L^{\infty}}^{-1}f_l$ is an $(\infty,\infty)$-atom supported in $Q_{l,k_l}=\mathbb{R}^n$. In this case the summation in \eqref{crucial77} is ignored since there is only one summand. By the multi-sublinearity of $M_\phi\circ \mathcal{T}_\sigma$, we can estimate \[ M_\phi\circ \mathcal{T}_\sigma(f_1,\ldots,f_m)(x)\le \sum_{k_1,\ldots,k_m=1}^\infty \Big(\prod_{l=1}^m \lambda_{l,k_l}\Big) M_\phi \circ {\mathcal T}_\sigma(a_{1,k_1},\ldots,a_{m,k_m})(x). \] To prove Theorems~\ref{ThmMain}, ~\ref{ThmMain-2}, and~\ref{ThmMain-3}, it now suffices to establish the following result: \begin{proposition} \label{LM.Key-31} Let ${\mathcal T}_\sigma$ be the operator defined in \eqref{eq.CalZygOPT}, \eqref{eq.CalZygOPT-2} or \eqref{eq.CalZygOPT-3}.
Let $p_1,\ldots,p_m$ and $p$ be given as in the corresponding Theorem \ref{ThmMain}, \ref{ThmMain-2} or \ref{ThmMain-3}. Then we have \begin{equation}\label{eq.PWEST} \Big\|\sum_{k_1,\ldots,k_m=1}^\infty \Big(\prod_{l=1}^m \lambda_{l,k_l}\Big) M_\phi \circ {\mathcal T}_\sigma(a_{1,k_1},\ldots,a_{m,k_m})\Big\|_{L^p} \lesssim \prod_{l=1}^m \Big\| \sum_{k_l=1}^\infty \lambda_{l,k_l}\chi_{Q_{l,k_l}} \Big\|_{L^{p_l}}. \end{equation} \end{proposition} Notice that in view of \eqref{crucial77} and Proposition \ref{LM.Key-31}, one obtains the required estimate $$ \| {\mathcal T}_\sigma(f_1,\dots , f_m) \|_{H^p} \approx \| M_\phi\circ \mathcal{T}_\sigma(f_1,\ldots,f_m) \|_{L^p} \lesssim \| f_1\|_{H^{p_1}} \cdots \| f_m\|_{H^{p_m}}\, . $$ We may therefore focus on the proof of Proposition~\ref{LM.Key-31}, that is, on establishing \eqref{eq.PWEST}. The proof depends on whether ${\mathcal T}_\sigma$ is of type \eqref{eq.CalZygOPT}, \eqref{eq.CalZygOPT-2} or \eqref{eq.CalZygOPT-3}; the detailed proof for each type is given in the subsequent sections. \section{The Coifman-Meyer type} Throughout this section, $\mathcal{T}_\sigma$ denotes the operator defined in \eqref{eq.CalZygOPT}. The main purpose of this section is to establish \eqref{eq.PWEST} for $\mathcal{T}_\sigma$. \subsection{Fundamental estimates for the Coifman-Meyer type} We treat the case of Coifman-Meyer multiplier operators whose symbols satisfy \eqref{CMcond}. The study of such operators was initiated by Coifman and Meyer \cite{CM2}, \cite{CM3} and was later pursued by Grafakos and Torres \cite{GrafakosTorresAdvances}; see also \cite{G} for an account.
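For orientation, when $m=1$ the kernel estimates used below reduce to the familiar Calder\'{o}n-Zygmund bounds
\[
|\partial^{\beta}K(y)| \lesssim |y|^{-n-|\beta|}, \qquad y\ne 0,
\]
and the multilinear case simply replaces $|y|$ by $|y_1|+\cdots+|y_m|$ and the dimension $n$ by $mn$.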
Denoting by $K$ the inverse Fourier transform of $\sigma$, in view of \eqref{CMcond} we have \[ |\partial^{\beta}_{y} K(y_1,\ldots, y_m)| \lesssim \big( \sum_{i=1}^m |y_i| \big)^{-mn -|\beta|},\quad (y_1,\ldots,y_m)\ne (0,\ldots,0) \] for all $\beta=(\beta_1,\ldots,\beta_m)\in{\mathbb N}_0{}^{mn}=({\mathbb N}_0{}^n)^m$ with $|\beta|\le N$. Examining the smoothness of the kernel carefully, we obtain the following estimates: \begin{lemma} \label{lm.3A1} Let $a_k$ be $(p_k,\infty)$-atoms supported in $Q_k$ for all $1\le k\le m$, and let $c_k$ denote the center of $Q_k$. Let $\Lambda$ be a non-empty subset of $\{1,\ldots,m\}$. Then we have \[ |\mathcal{T}_\sigma(a_1,\ldots,a_m)(y)| \lesssim \dfrac{\min\{\ell(Q_k) : k\in \Lambda\}^{n+N+1}} {\big(\sum_{k\in \Lambda}|y-c_k|\big)^{n+N+1}} \] for all $y\notin \cup_{k\in \Lambda}Q_k^*$. \end{lemma} \begin{proof} We may suppose that $\Lambda=\{1,\ldots,r\}$ for some $1\le r\le m$ and that \[ \ell(Q_1) = \min\{\ell(Q_k) : k\in \Lambda\}. \] Fix $y\notin \cup_{k\in \Lambda}Q_k^*$. Using the cancellation of $a_1$ we can rewrite \begin{align} \notag \mathcal{T}_\sigma(a_1,\ldots,a_m)(y) =& \int_{\mathbb{R}^{mn}}K(y-y_1,\ldots,y-y_m)a_1(y_1)\cdots a_m(y_m)d\vec{y}\\ \notag =& \int_{\mathbb{R}^{mn}}\big[K(y-y_1,\ldots,y-y_m)-P_N(y,y_1,y_2,\ldots,y_m)\big] a_1(y_1)\cdots a_m(y_m)d\vec{y}\\ =& \int_{\mathbb{R}^{mn}}K^1(y,y_1,y_2,\ldots,y_m) a_1(y_1)\cdots a_m(y_m)d\vec{y}, \label{eq.3A5} \end{align} where \begin{equation*} P_N(y,y_1,y_2,\ldots,y_m) = \sum_{|\alpha|\le N}\frac{1}{\alpha!} \partial^{\alpha}_{1}K(y-c_1,y-y_2,\ldots,y-y_m)(c_1-y_1)^{\alpha} \end{equation*} is the Taylor polynomial of degree $N$ of $K(y-\cdot,y-y_2,\ldots,y-y_m)$ at $c_1$ and \begin{equation} \label{eq.3A6} K^1(y,y_1,\ldots,y_m) = K(y-y_1,\ldots,y-y_m)-P_N(y,y_1,y_2,\ldots,y_m).
\end{equation} By the smoothness condition of the kernel and the fact that \[ |y-y_k|\approx |y-c_k| \] for all $k \in\Lambda$ and $y_k\in Q_k$, we can estimate \begin{align*} \big|K^1(y,y_1,y_2,\ldots,y_m)\big|\lesssim& |y_1-c_1|^{N+1}\Big( \sum_{k\in \Lambda}|y-c_k|+\sum_{j=2}^m|y-y_j| \Big)^{-mn-N-1}. \end{align*} Thus, \begin{align*} |\mathcal{T}_\sigma(a_1,\ldots,a_m)(y)| \lesssim& \int_{\mathbb{R}^{mn}}\frac{|y_1-c_1|^{N+1}|a_1(y_1)|\cdots |a_m(y_m)|} {\Big( \sum_{k\in \Lambda}|y-c_k|+\sum_{j=2}^m|y-y_j| \Big)^{mn+N+1}} d\vec{y}\\ \lesssim& \int_{\mathbb{R}^{(m-1)n}}\frac{ \ell(Q_1)^{n+N+1}} {\Big( \sum_{k\in \Lambda}|y-c_k|+\sum_{j=2}^m|y_j| \Big)^{mn+N+1}} dy_2\cdots dy_m\\ \lesssim& \frac{\ell(Q_1)^{n+N+1}} {\Big( \sum_{k\in \Lambda}|y-c_k| \Big)^{n+N+1}}. \end{align*} \end{proof} \begin{lemma}\label{lem:160726-51} Let $a_k$ be $(p_k,\infty)$-atoms supported in $Q_k$ for all $1\le k\le m$. Suppose $Q_1$ is the cube such that $\ell(Q_1) = \min\{\ell(Q_k) : 1\le k\le m\}$. Then for fixed $1\le r<\infty$ and $j\in \mathbb{N}$, we have \begin{align} \label{eq.3A2} \|\mathcal{T}_\sigma(a_1,\ldots,a_m)\chi_{Q_1^{**}}\|_{L^{r}}&\lesssim |Q_1|^{\frac{1}{r}} \prod_{l=1}^m \inf_{z\in Q_1^{*}} M\chi_{Q_l}(z)^\frac{n+N+1}{mn},\\ \label{eq.3A3} \|M\circ \mathcal{T}_\sigma(a_1,\ldots,a_m)\chi_{Q_1^{**}}\|_{L^{r}}&\lesssim |Q_1|^{\frac{1}{r}} \prod_{l=1}^m \inf_{z\in Q_1^{*}} M\chi_{Q_l}(z)^\frac{n+N+1}{mn}. \end{align} Furthermore, if $Q_0$ is a cube such that $\ell(Q_0)\le \ell(Q_1)$ and $2^jQ_0^{**}\cap 2^jQ_l^{**}=\emptyset$ for some $l$, then \begin{equation}\label{eq.3D3} \|\mathcal{T}_\sigma(a_1,\ldots,a_m)\chi_{2^jQ_0^{**}}\|_{L^\infty} \lesssim \prod_{l=1}^m \inf_{z\in 2^jQ_0^{*}} M\chi_{2^jQ_l^{**}}(z)^\frac{n+N+1}{mn}.
\end{equation} In particular, under the above assumption, \begin{align}\label{eq.3D4} \Big( \frac{1}{|2^jQ_0^{**}|} \int_{2^jQ_0^{**}} |\mathcal{T}_\sigma(a_1,\ldots,a_m)(y)|^rdy \Big)^{\frac1r} &\lesssim \prod_{l=1}^m \inf_{z\in 2^jQ_0^{*}} M\chi_{2^jQ_l^{**}}(z)^\frac{n+N+1}{mn}. \end{align} \end{lemma} \begin{proof} To check \eqref{eq.3A2}, it is enough to consider $1<r<\infty$ and the following two cases. First, if $Q_1^{**}\cap Q_k^{**}\ne \emptyset$ for all $2\le k\le m$, then, by the assumption $\ell(Q_1) = \min\{\ell(Q_k) : 1\le k\le m\}$, we have $Q_1^{**}\subset 3Q_k^{**}$ for all $1\le k\le m$. This implies \[ \inf_{z\in Q_1^{*}}M\chi_{3Q_k^{**}}(z)\ge 1 \] for all $1\le k\le m.$ Now the boundedness of $\mathcal{T}_\sigma$ from $L^r\times L^\infty\times\cdots\times L^\infty$ to $L^r$ yields \begin{align} \notag \|\mathcal{T}_\sigma(a_1,\ldots,a_m)\chi_{Q_1^{**}}\|_{L^r} \le& \|\mathcal{T}_\sigma(a_1,\ldots,a_m)\|_{L^r}\\ \notag \lesssim& \|a_1\|_{L^r}\|a_2\|_{L^\infty}\cdots \|a_m\|_{L^\infty}\\ \label{eq.3A4} \lesssim& |Q_1|^{\frac{1}{r}} \prod_{k=1}^m\inf_{z\in Q_1^{*}}M\chi_{3Q_k^{**}}(z)^\frac{n+N+1}{mn}. \end{align} Second, if $Q_1^{**}\cap Q_k^{**}=\emptyset$ for some $k$, then the set \[ \Lambda = \{2\le k\le m : Q_1^{**}\cap Q_k^{**}=\emptyset\} \] is a non-empty subset of $\{1,\ldots, m\}$. Fix an arbitrary $y\in \mathbb{R}^n$. By the cancellation of $a_1$, rewrite \[ \mathcal{T}_\sigma(a_1,\ldots,a_m)(y)= \int_{\mathbb{R}^{mn}}K^1(y,y_1,y_2,\ldots,y_m) a_1(y_1)\cdots a_m(y_m)d\vec{y}, \] where $K^1(y,y_1,\ldots,y_m)$ is defined in \eqref{eq.3A6}. For $y_1\in Q_1$ we estimate \begin{align*} \big|K^1(y,y_1,\ldots,y_m)\big|\le& C\ell(Q_1)^{N+1}\Big( |y-\xi_1|+\sum_{j=2}^m|y-y_j| \Big)^{-mn-N-1}, \end{align*} for some $\xi_1\in Q_1$ and for all $y_l\in Q_l$. Since $Q_1^{**}\cap Q_k^{**}=\emptyset$ for all $k\in \Lambda$, we have $|y-\xi_1|+|y-y_k|\ge |\xi_1-y_k| \ge C|c_1-c_k|$ for all $y_k\in Q_k$ and $k\in \Lambda$.
Therefore \[ \big|K^1(y,y_1,\ldots,y_m)\big|\lesssim \ell(Q_1)^{N+1}\Big( \sum_{k\in \Lambda}|c_1-c_k| +\sum_{j=2}^{m}|y-y_j| \Big)^{-mn-N-1}, \] for all $y_1\in Q_1^*$ and $y_k\in Q_k$ for $k\in \Lambda$. Insert the above inequality into \eqref{eq.3A5} to obtain \[ |\mathcal{T}_\sigma(a_1,\ldots,a_m)(y)| \lesssim \dfrac{\ell(Q_1)^{n+N+1}} {\big(\sum_{k\in \Lambda}|c_1-c_k|\big)^{n+N+1}} \lesssim \dfrac{\ell(Q_1)^{n+N+1}} {\sum_{k\in \Lambda}\big[\ell(Q_1)+|c_1-c_k|+\ell(Q_k)\big]^{n+N+1}}. \] Noting that $Q_1^{**}\subset 3Q_l^{**}$ for $l\notin \Lambda,$ the last inequality gives \begin{equation} \label{eq.3A8} \|\mathcal{T}_\sigma(a_1,\ldots,a_m)\|_{L^\infty} \lesssim \prod_{k=1}^m\inf_{z\in Q_1^{*}}M\chi_{3Q_k^{**}}(z)^\frac{n+N+1}{mn}, \end{equation} which yields \begin{equation} \label{eq.3A9} \|\mathcal{T}_\sigma(a_1,\ldots,a_m)\chi_{Q_1^{**}}\|_{L^r} \lesssim |Q_1|^{\frac{1}{r}} \prod_{k=1}^m\inf_{z\in Q_1^{*}}M\chi_{3Q_k^{**}}(z)^\frac{n+N+1}{mn}. \end{equation} Combining \eqref{eq.3A4} and \eqref{eq.3A9} and noting that $M\chi_{3Q} \lesssim M\chi_Q$, we obtain \eqref{eq.3A2}. Similarly, we can prove \eqref{eq.3A3}--\eqref{eq.3D3}. For example, to show \eqref{eq.3A3}, we again consider the case where $Q_1^{**}\cap Q_l^{**}\neq \emptyset$ holds for all $l$ and the case where this fails. In the first case, using the boundedness of $M$ on $L^r$, we arrive at the same situation as above. In the second case, we use the boundedness of $M$ on $L^\infty$ to see \[ \|M\circ \mathcal{T}_\sigma(a_1,\dots,a_m)\chi_{Q_1^{**}}\|_{L^r} \lesssim |Q_1|^\frac{1}{r} \| \mathcal{T}_\sigma(a_1,\dots,a_m)\|_{L^\infty}. \] Notice that the right-hand side is already treated in \eqref{eq.3A8}. \end{proof} Lemma \ref{lem:160726-51} will be used to study the behavior of the operator $M_\phi\circ \mathcal{T}_\sigma$ inside $Q_1^{**}$. For the region outside of $Q_1^{**}$, we need the following estimates.
\begin{lemma} \label{lm.3A10} Let $a_k$ be $(p_k,\infty)$-atoms supported in $Q_k$ for all $1\le k\le m$. If $p_k=\infty$ then $Q_k=\mathbb{R}^n$. Suppose that $Q_1$ is the cube for which $\ell(Q_1) = \min\{\ell(Q_k) : 1\le k\le m\}$. Fix $0<t<\infty$. \begin{enumerate} \item If $x \notin Q_1^{**}$ and $c_1 \notin B(x,100n^2t)$, then \begin{equation}\label{eq.3A10} \frac{1}{t^n} \int_{B(x,t)}|{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy \lesssim \prod_{l=1}^m M\chi_{Q_l}(x)^{\frac{n+N+1}{m n}}. \end{equation} \item If $x \notin Q_1^{**}$ and $c_1 \in B(x,100n^2t)$, then \begin{equation}\label{eq.3A11} \frac{\ell(Q_1)^{s+1}}{t^{n+s+1}} \int_{Q_1^*}|{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy \lesssim M\chi_{Q_1}(x)^{\frac{n+s+1}{n}} \prod_{l=1}^m \inf_{z \in Q_1^*} M\chi_{Q_l}(z)^{\frac{n+N+1}{m n}}, \end{equation} and \begin{equation}\label{eq.3A12} \frac{1}{t^{n+s+1}} \int_{(Q_1^*)^c} |y-c_1|^{s+1}|{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy \lesssim M\chi_{Q_1}(x)^{\frac{n+s+1}{n}} \prod_{l=1}^m \inf_{z \in Q_1^*} M\chi_{Q_l}(z)^{\frac{N-s}{m n}}. \end{equation} \item For all $x\notin Q_1^{**}$, we have \begin{align} \label{eq.3B12} M_\phi\circ \mathcal{T}_\sigma(a_1,\ldots,a_m)(x)\lesssim \prod_{l=1}^m M\chi_{Q_l}(x)^{\frac{n+N+1}{m n}} + M\chi_{Q_1}(x)^{\frac{n+s+1}{n}} \prod_{l=1}^m \inf_{z \in Q_1^*} \Big( M\chi_{Q_l}(z)^{\frac{N-s}{m n}} \Big). \end{align} \end{enumerate} \end{lemma} \begin{proof} Fix $x\notin Q_1^{**}$ and denote $\Lambda = \{1\le k\le m : x\notin Q_k^{**}\}$. \textit{(1)} Suppose $c_1 \notin B(x,100n^2t)$. For $y\in B(x,t)$, from \eqref{eq.3A5} we rewrite \[ \mathcal{T}_\sigma(a_1,\ldots,a_m)(y) = \int_{\mathbb{R}^{mn}}K^1(y,y_1,\ldots,y_m) a_1(y_1)\cdots a_m(y_m)d\vec{y}, \] where $K^1$ is defined in \eqref{eq.3A6}. Note that for $y\in B(x,t)$, $y_1\in Q_1$ and $c_1\notin B(x,100n^2t)$, we have \[ t\lesssim |x-c_1|\lesssim |y-y_1|.
\] Since $x\notin Q_k^{**}$ for all $k\in \Lambda$, \[ |x-c_k|\lesssim |x-y_k|\lesssim t+ |y-y_k|\lesssim |y-y_1|+ |y-y_k| \] for all $k \in \Lambda$ and $y_k \in Q_k$. Consequently, \begin{align} \label{eq.3A13} \left|K^{1}(y,y_1,\ldots,y_m)\prod_{l=1}^m a_l(y_l)\right| \lesssim \frac{\ell(Q_1)^{N+1}\chi_{Q_1}(y_1)}{\displaystyle \left(\sum_{l=2}^m |y-y_l|+ \sum_{k \in \Lambda} |x-c_k|\right)^{m n+N+1}}. \end{align} Integrating (\ref{eq.3A13}) over $({\mathbb R}^n)^m$, and using that $\ell(Q_1)\leq \ell(Q_l)$ for all $2\le l\le m$, we obtain that \begin{align*} |{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)| &\lesssim \frac{\ell(Q_1)^{n+N+1}}{\displaystyle \left(\sum_{l \in \Lambda} |x-c_l|\right)^{n+N+1}} \\ &\lesssim \prod_{l\in \Lambda} \frac{\ell(Q_l)^{\frac{n+N+1}{|\Lambda|}}}{\displaystyle |x-c_l|^{\frac{n+N+1}{|\Lambda|}}} \chi_{(Q_l^{**})^c}(x) \cdot \prod_{k\notin \Lambda}\chi_{Q_k^{**}}(x) \lesssim \prod_{l=1}^m M\chi_{Q_l}(x)^{\frac{n+N+1}{m n}}. \end{align*} This pointwise estimate proves \eqref{eq.3A10}. \textit{(2)} Assume $c_1 \in B(x,100n^2t)$. Fix $1<r<\infty$ and estimate the left-hand side of \eqref{eq.3A11} by \[ \frac{\ell(Q_1)^{s+1}}{t^{n+s+1}}|Q_1|^{1-\frac1{r}} \|\mathcal{T}_\sigma(a_1,\ldots,a_m)\chi_{Q_1^{**}}\|_{L^r} \lesssim \frac{\ell(Q_1)^{n+s+1}}{t^{n+s+1}} \prod_{l=1}^m \inf_{z \in Q_1^*} M\chi_{Q_l}(z)^{\frac{n+N+1}{m n}}, \] where we used \eqref{eq.3A2} in the above inequality. Since $x\notin Q_1^{**}$ and $c_1\in B(x,100n^2t)$, we have $Q_1^*\subset B(x,1000n^2t)$ and hence $\ell(Q_1)/t\lesssim M\chi_{Q_1}(x)^{1/n}$. This combined with the last inequality implies \eqref{eq.3A11}. To verify \eqref{eq.3A12}, we recall the expression of $\mathcal{T}_\sigma(a_1,\ldots,a_m)(y)$ in \eqref{eq.3A5} and the pointwise estimate for $K^1(y,y_1,\ldots,y_m)$ defined in \eqref{eq.3A6}. Denote $J=\{2\le k\le m : Q_1^{**}\cap Q_k^{**}=\emptyset\}$.
Using the facts that $|y-y_1|\sim |y-c_1|\geq \ell(Q_1)$ for $y\notin Q_1^*$, $y_1\in Q_1$, and $|y-y_1|+|y-y_l|\geq |y_1-y_l|\gtrsim |z-c_l|$ for all $z\in Q_1^*$ and $l\in J$, we now estimate \begin{align*} |{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)| \lesssim \int_{({\mathbb R}^n)^m} \frac{\ell(Q_1)^{N+1}\chi_{Q_1}(y_1)\,d\vec{y}}{ \left( \ell(Q_1)+|y-c_1|+ \displaystyle \sum_{l\in J} |z-c_l| +\sum_{l=2}^m|y-y_l| \right)^{m n+N+1}} \end{align*} for all $y \in (Q_1^*)^c$ and $z\in Q_1^*$. Thus, \begin{align*} &\frac{1}{t^{n+s+1}} \int_{(Q_1^*)^c}|y-c_1|^{s+1}|{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy \\ &\lesssim \frac{1}{t^{n+s+1}} \int_{{\mathbb R}^n \times ({\mathbb R}^n)^m} \frac{|y-c_1|^{s+1}\ell(Q_1)^{N+1}\chi_{Q_1}(y_1)\,d\vec{y}dy} {\displaystyle\bigg(\ell(Q_1)+|y-c_1| +\sum_{l\in J}|c_1-c_l| +\sum_{l=2}^m |y-y_l|\bigg)^{m n+N+1}} \\ &\lesssim \left( \frac{\ell(Q_1)}{t} \right)^{n+s+1} \prod_{l\in J} \left( \frac{\ell(Q_l)}{|z-c_l|} \right)^\frac{N-s}{m}. \end{align*} Note that $ 1\lesssim\inf_{z\in Q_1^*}M\chi_{2Q_l^{**}}(z) $ if $Q_1^{**} \cap Q_l^{**} \ne \emptyset$; otherwise, $\ell(Q_l)/|z-c_l|\lesssim M\chi_{2Q_l^{**}}(z)^\frac{1}{n}$ for all $z\in Q_1^*$. Consequently, \begin{align*} \frac{1}{t^{n+s+1}} \int_{(Q_1^*)^c}|y-c_1|^{s+1}|{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy &\lesssim M\chi_{Q_1}(x)^{\frac{n+s+1}{n}} \prod_{l=1}^m \inf_{z \in Q_1^*} M\chi_{Q_l}(z)^{\frac{N-s}{mn}}, \end{align*} which yields \eqref{eq.3A12}. \textit{(3)} It remains to prove \eqref{eq.3B12}. Fix $x\notin Q_1^{**}$. To calculate $M_{\phi}\circ \mathcal{T}_{\sigma}(a_1,\ldots,a_m)(x)$, we need to estimate \[ \Big| \int_{{\mathbb R}^n} \phi_t(x-y){\mathcal T}_\sigma(a_1,\ldots,a_m)(y)\,dy \Big| \] for each $t\in (0,\infty)$. Let us consider two cases: $c_1 \notin B(x,100n^2t)$ and $c_1 \in B(x,100n^2t)$.
In the first case, since $\phi$ is supported in the unit ball, \begin{equation*} \Big| \int_{{\mathbb R}^n} \phi_t(x-y){\mathcal T}_\sigma(a_1,\ldots,a_m)(y)\,dy \Big| \lesssim \frac{1}{t^n} \int_{B(x,t)}|{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy. \end{equation*} Since $c_1 \notin B(x,100n^2t)$, \eqref{eq.3A10} implies that \begin{equation}\label{eq.3A14} \Big| \int_{{\mathbb R}^n} \phi_t(x-y){\mathcal T}_\sigma(a_1,\ldots,a_m)(y)\,dy \Big| \lesssim \prod_{l=1}^m M\chi_{Q_l}(x)^{\frac{n+N+1}{m n}}. \end{equation} In the second case, we will exploit the moment condition of ${\mathcal T}_\sigma(a_1,\ldots,a_m)$. Denote \begin{equation} \label{eq.3C01} \delta_1^s(t;x,y) = \phi_t(x-y) - \sum_{|\alpha|\le s}\frac{\partial^{\alpha}[\phi_t](x-c_1)}{\alpha!}(c_1-y)^{\alpha}. \end{equation} Since $|\delta_1^s(t;x,y)|\lesssim t^{-n-s-1}|y-c_1|^{s+1}$ for all $x$ and $y$, the cancellation condition \eqref{eq.TmCan} gives \begin{align} \notag \Big| \int_{{\mathbb R}^n} \phi_t(x-y){\mathcal T}_\sigma(a_1,\ldots,a_m)(y)\,dy \Big|&= \Big| \int_{{\mathbb R}^n} \delta^s_1(t;x,y){\mathcal T}_\sigma(a_1,\ldots,a_m)(y)\,dy \Big| \\ \notag \lesssim& \frac{1}{t^{n+s+1}} \int_{{\mathbb R}^n}|y-c_1|^{s+1}|{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy \\ \notag =& \frac{1}{t^{n+s+1}} \int_{Q_1^*}|y-c_1|^{s+1}|{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy\\ \notag &+ \frac{1}{t^{n+s+1}} \int_{(Q_1^*)^c}|y-c_1|^{s+1}|{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy\\ &\lesssim \frac{\ell(Q_1)^{s+1}}{t^{n+s+1}} \int_{Q_1^*}|{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy \label{eq.3C02}\\ \notag &+ \frac{1}{t^{n+s+1}} \int_{(Q_1^*)^c}|y-c_1|^{s+1}|{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy.
\end{align} Invoking \eqref{eq.3A11} and \eqref{eq.3A12}, we obtain \begin{align} \notag & \Big| \int_{{\mathbb R}^n} \phi_t(x-y){\mathcal T}_\sigma(a_1,\ldots,a_m)(y)\,dy \Big|\\ \notag &\lesssim M\chi_{Q_1}(x)^{\frac{n+s+1}{n}} \prod_{l=1}^m \inf_{z \in Q_1^*} \Big[ M\chi_{Q_l}(z)^{\frac{n+N+1}{m n}} + M\chi_{Q_l}(z)^{\frac{N-s}{m n}} \Big]\\ \label{eq.3A15} &\lesssim M\chi_{Q_1}(x)^{\frac{n+s+1}{n}} \prod_{l=1}^m \inf_{z \in Q_1^*} \Big( M\chi_{Q_l}(z)^{\frac{N-s}{m n}} \Big). \end{align} Combining \eqref{eq.3A14} and \eqref{eq.3A15} yields the required estimate \eqref{eq.3B12}. The proof of Lemma \ref{lm.3A10} is now complete. \end{proof} \subsection{The proof of Proposition \ref{LM.Key-31} for Coifman-Meyer type} We now turn to the proof of \eqref{eq.PWEST}, i.e., we estimate \begin{equation} \label{eq.3C3} A= \Big\|\sum_{k_1,\ldots,k_m=1}^\infty \Big(\prod_{l=1}^m \lambda_{l,k_l}\Big) M_\phi \circ {\mathcal T}_\sigma(a_{1,k_1},\ldots,a_{m,k_m})\Big\|_{L^p} . \end{equation} For each $\vec{k}=(k_1,\ldots,k_m)$, we denote by $R_{\vec{k}}$ the cube with the smallest side length among $Q_{1,k_1},\ldots,Q_{m,k_m}$. Then we have $A\lesssim B+G$, where \begin{equation} \label{eq.3C4} B = \Big\|\sum_{k_1,\ldots,k_m=1}^\infty \Big(\prod_{l=1}^m \lambda_{l,k_l}\Big) M_\phi \circ {\mathcal T}_\sigma(a_{1,k_1},\ldots,a_{m,k_m})\chi_{R_{\vec{k}}^{**}}\Big\|_{L^p} \end{equation} and \begin{equation} \label{eq.3C5} G = \Big\|\sum_{k_1,\ldots,k_m=1}^\infty \Big(\prod_{l=1}^m \lambda_{l,k_l}\Big) M_\phi \circ {\mathcal T}_\sigma(a_{1,k_1},\ldots,a_{m,k_m})\chi_{(R_{\vec{k}}^{**})^c}\Big\|_{L^p}.
\end{equation} To estimate $B$, fix $r$ with $\max(1,p)<r<\infty$; then Lemma \ref{lm.2A00} and \eqref{eq.3A3} imply \begin{align*} B\lesssim & \Big\|\sum_{k_1,\ldots,k_m=1}^\infty \Big(\prod_{l=1}^m \lambda_{l,k_l}\Big) \frac{\chi_{R_{\vec{k}}^{**}}}{|R_{\vec{k}}^{**}|^{\frac1r}} \| M_\phi \circ {\mathcal T}_\sigma(a_{1,k_1},\ldots,a_{m,k_m})\chi_{R_{\vec{k}}^{**}} \|_{L^r} \Big\|_{L^p} \\ \lesssim& \Big\|\sum_{k_1,\ldots,k_m=1}^\infty \Big(\prod_{l=1}^m \lambda_{l,k_l}\Big) \Big( \prod_{l=1}^m \inf_{z\in R_{\vec{k}}^{*}} M\chi_{Q_{l,k_l}}(z)^{\frac{n+N+1}{mn}} \Big) \chi_{R_{\vec{k}}^{**}} \Big\|_{L^p}\\ =& \Big\|\sum_{k_1,\ldots,k_m=1}^\infty \Big( \prod_{l=1}^m \inf_{z\in R_{\vec{k}}^{*}} \lambda_{l,k_l} M\chi_{Q_{l,k_l}}(z)^{\frac{n+N+1}{mn}} \Big) \chi_{R_{\vec{k}}^{**}} \Big\|_{L^p}\\ \lesssim& \Big\|\sum_{k_1,\ldots,k_m=1}^\infty \Big( \prod_{l=1}^m \inf_{z\in R_{\vec{k}}^{*}} \lambda_{l,k_l} M\chi_{Q_{l,k_l}}(z)^{\frac{n+N+1}{mn}} \Big) \chi_{R_{\vec{k}}^{*}} \Big\|_{L^p}, \end{align*} where we used Lemma \ref{lm.2B00} in the last inequality. Now we can remove the infimum and apply H\"older's inequality to obtain \begin{align} \notag B\lesssim & \Big\|\sum_{k_1,\ldots,k_m=1}^\infty \prod_{l=1}^m \lambda_{l,k_l} \Big(M\chi_{Q_{l,k_l}}\Big)^{\frac{n+N+1}{mn}} \Big\|_{L^p} = \Big\| \prod_{l=1}^m \sum_{k_l=1}^\infty \lambda_{l,k_l} \Big( M\chi_{Q_{l,k_l}}\Big)^{\frac{n+N+1}{mn}} \Big\|_{L^p}\\ \notag \le& \prod_{l=1}^m \Big\| \sum_{k_l=1}^\infty \lambda_{l,k_l} \Big( M\chi_{Q_{l,k_l}}\Big)^{\frac{n+N+1}{mn}} \Big\|_{L^{p_l}} \lesssim \prod_{l=1}^m \Big\| \sum_{k_l=1}^\infty \lambda_{l,k_l} \chi_{Q_{l,k_l}^{**}} \Big\|_{L^{p_l}}\\ \label{eq.3A16} \lesssim& \prod_{l=1}^m \Big\| \sum_{k_l=1}^\infty \lambda_{l,k_l} \chi_{Q_{l,k_l}} \Big\|_{L^{p_l}}. \end{align} Once again, Lemma \ref{lm.2B00} was used in the last two inequalities.
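As a brief aside, we record the classical pointwise behaviour of the maximal function of a cube indicator, which underlies the integrability of the powers $(M\chi_{Q_{l,k_l}})^{\frac{n+N+1}{mn}}$ appearing above (this is a standard fact, stated here only for the reader's convenience):

```latex
% For a cube Q with center c_Q and side length \ell(Q),
M\chi_Q(x)\;\sim\;\left(\frac{\ell(Q)}{\ell(Q)+|x-c_Q|}\right)^{n},
\qquad x\in{\mathbb R}^n,
% and hence, for any \theta>0 and q\in(0,\infty) with \theta q>1,
\int_{{\mathbb R}^n}\big(M\chi_Q(x)\big)^{\theta q}\,dx
\;\sim\;\int_{{\mathbb R}^n}
\left(\frac{\ell(Q)}{\ell(Q)+|x-c_Q|}\right)^{n\theta q}dx
\;\sim\;\ell(Q)^{n}=|Q|,
```

since the integrand decays like $|x-c_Q|^{-n\theta q}$ at infinity. With $\theta=\frac{n+N+1}{mn}$ and $q=p_l$, the condition $\theta q>1$ holds as soon as $N$ is large, which is why these powers behave, in $L^{p_l}$-norm, like the indicator $\chi_Q$ itself.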
To deal with $G$, we use \eqref{eq.3B12} and estimate $G \lesssim G_1+G_2$, where \[ G_1 = \Big\|\sum_{k_1,\ldots,k_m=1}^\infty \Big(\prod_{l=1}^m \lambda_{l,k_l}\Big) \prod_{l=1}^m \left(M\chi_{Q_{l,k_l}}\right)^{\frac{n+N+1}{m n}} \Big\|_{L^p} \] and \[ G_2 = \Big\|\sum_{k_1,\ldots,k_m=1}^\infty \Big(\prod_{l=1}^m \lambda_{l,k_l}\Big) \Big( \prod_{l=1}^m \inf_{z \in R_{\vec{k}}^*} M\chi_{Q_{l,k_l}}(z)^{\frac{N-s}{m n}} \Big) (M\chi_{R_{\vec{k}}^*})^{\frac{n+s+1}{n}} \Big\|_{L^p}. \] Repeating the argument used to estimate $B$, and noting that $\frac{(n+s+1)p}{n}>1$ and $N\gg s$, we obtain \begin{equation}\label{eq.3A17} G \lesssim G_1+ G_2 \lesssim \prod_{l=1}^m \Big\| \sum_{k_l=1}^\infty \lambda_{l,k_l} \chi_{Q_{l,k_l}} \Big\|_{L^{p_l}}. \end{equation} Combining \eqref{eq.3A16} and \eqref{eq.3A17} yields \eqref{eq.PWEST}. This completes the proof of Proposition \ref{LM.Key-31} for the operator $\mathcal{T}_\sigma$ of type \eqref{eq.CalZygOPT}. \begin{remark} The techniques in this paper also work for CZ operators of non-convolution type; this recovers the results in \cite{HuMe12}. \end{remark} \section{The product type} Throughout this section, we denote by $\mathcal{T}_\sigma$ the operator defined in \eqref{eq.CalZygOPT-2} and prove Proposition \ref{LM.Key-31} for this operator. We need to establish some results analogous to Lemmas \ref{lem:160726-51} and \ref{lm.3A10}. \subsection{Fundamental estimates for the product type} Let $a_k$ be $(p_k,\infty)$-atoms supported in $Q_k$ for all $1\le k\le m$. Here and below $M^{(r)}$ denotes the power-maximal operator: $M^{(r)}f(x)=M(|f|^r)(x)^\frac{1}{r}$. Suppose $Q_1$ is the cube such that $\ell(Q_1) = \min\{\ell(Q_k) : 1\le k\le m\}$; then we have the following lemmas.
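Since the power-maximal operator $M^{(r)}$ appears throughout this section, we record two elementary properties for later use (both are standard consequences of the definition and of the Hardy-Littlewood maximal theorem):

```latex
% (i) on indicators, the r-th power commutes with M:
M^{(r)}\chi_Q=\big(M(\chi_Q^{\,r})\big)^{1/r}=\big(M\chi_Q\big)^{1/r};
% (ii) L^q-boundedness for q>r, by the boundedness of M on L^{q/r}:
\|M^{(r)}f\|_{L^q}
=\big\|M(|f|^r)\big\|_{L^{q/r}}^{1/r}
\lesssim \big\||f|^r\big\|_{L^{q/r}}^{1/r}
=\|f\|_{L^q},\qquad q>r.
```

In particular, the composition $M^{(m)}\circ T_{\sigma_l}$ is bounded on $L^q$ for every $q>m$ for which $T_{\sigma_l}$ is $L^q$-bounded; this is the form in which $M^{(m)}$ is used below.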
\begin{lemma} \label{lm.4A00} For all $x\in Q_1^{**}$, we have \begin{align}\label{eq.4A01} M_\phi\circ \mathcal{T}_\sigma(a_1,\ldots,a_m)(x)\chi_{Q_1^{**}}(x) \lesssim \prod_{l=1}^m M\chi_{Q_l}(x)^\frac{n+N+1}{mn} \left( 1+M^{(m)} \circ T_{\sigma_l}(a_l)(x) \right). \end{align} \end{lemma} \begin{proof} Fix $x\in Q_1^{**}$. We need to estimate \[ \Big| \int_{{\mathbb R}^n} \phi_t(x-y){\mathcal T}_\sigma(a_1,\ldots,a_m)(y)\,dy \Big| \lesssim \frac{1}{t^n} \int_{B(x,t)} |{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy \] for each $t\in (0,\infty)$. The proof of \eqref{eq.4A01} is mainly based on the boundedness of $\mathcal{T}_\sigma$ and the smoothness condition of each Calder\'on-Zygmund kernel in \eqref{eq.CalZygOPT-2}. Instead of considering the whole sum in \eqref{eq.CalZygOPT-2}, for notational simplicity, it is convenient to consider one term, i.e., \begin{equation}\label{eq.4A02} {\mathcal T}_\sigma(f_1,\ldots,f_m) = T_{\sigma_1}(f_1) \cdots T_{\sigma_m}(f_m), \end{equation} except when cancellation is used, in which case the entire sum is needed. We consider two cases: $t\le \ell(Q_1)$ and $t>\ell(Q_1)$. \textbf{Case 1: $t\le \ell(Q_1)$}. By the H\"older inequality and \eqref{eq.3D4}, we have \begin{align*} \frac{1}{t^n} \int_{B(x,t)} |{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy \lesssim& \prod_{l=1}^m \Big( \frac{1}{t^n} \int_{B(x,t)} |T_{\sigma_l}a_l(y)|^mdy \Big)^{\frac1m}. \end{align*} Now, we decompose the above product according to two sub-cases: $B(x,t)\cap Q_l^{**}=\emptyset$ or not. Then \begin{align*} &\prod_{l=1}^m \Big( \frac{1}{t^n} \int_{B(x,t)} |T_{\sigma_l}a_l(y)|^mdy \Big)^{\frac1m}\\ =& \prod_{l:B(x,t)\cap Q_l^{**}=\emptyset} \Big( \frac{1}{t^n} \int_{B(x,t)} |T_{\sigma_l}a_l(y)|^mdy \Big)^{\frac1m} \prod_{l:B(x,t)\cap Q_l^{**}\neq \emptyset} \Big( \frac{1}{t^n} \int_{B(x,t)} |T_{\sigma_l}a_l(y)|^mdy \Big)^{\frac1m}. \end{align*} For the first sub-case, we employ \eqref{eq.3D4}. For the second sub-case, we observe that the assumption $t\leq \ell(Q_1)\le \ell(Q_l)$ implies $B(x,t)\subset 3Q_l^{**}$. As a result, \begin{align*} \lefteqn{ \prod_{l=1}^m \Big( \frac{1}{t^n} \int_{B(x,t)} |T_{\sigma_l}a_l(y)|^mdy \Big)^{\frac1m} }\\ &\lesssim \prod_{l:B(x,t)\cap Q_l^{**}\neq \emptyset} \chi_{3Q_l^{**}}(x) M^{(m)} \circ T_{\sigma_l}(a_l)(x) \prod_{l:B(x,t)\cap Q_l^{**}=\emptyset} M\chi_{Q_l}(x)^\frac{n+N+1}{mn}.
\end{align*} Thus \begin{equation} \label{eq.4A03} \Big| \int_{{\mathbb R}^n} \phi_t(x-y){\mathcal T}_\sigma(a_1,\ldots,a_m)(y)\,dy \Big| \lesssim \prod_{l=1}^m M\chi_{Q_l}(x)^\frac{n+N+1}{mn} \left( 1+M^{(m)} \circ T_{\sigma_l}(a_l)(x) \right). \end{equation} \textbf{Case 2: $t>\ell(Q_1)$}. Now we can estimate \begin{align*} \frac{1}{t^n} \int_{B(x,t)} |{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy \lesssim& \frac{1}{|Q_1^*|} \int_{\mathbb{R}^n} |{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy\\ =& \frac{1}{|Q_1^*|} \int_{Q_1^*} |{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy\\ &+ \frac{1}{|Q_1^*|} \int_{\mathbb{R}^n\setminus Q_1^*} |{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy. \end{align*} By the H\"older inequality and \eqref{eq.3D4}, an argument similar to that for \eqref{eq.4A03} yields \begin{align} \notag \frac{1}{|Q_1^*|} \int_{Q_1^*} |{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy \lesssim& \prod_{l=1}^m \Big( \frac{1}{|Q_1^*|} \int_{Q_1^*} |T_{\sigma_l}a_l(y)|^m\,dy \Big)^{\frac1m} \\ \notag \lesssim& \prod_{l=1}^m \Big( \inf_{z \in Q_{1}^*} M\chi_{Q_{l}^{**}}(z)^{\frac{n+N+1}{mn}} + \inf_{z \in Q_{1}^*} M^{(m)} \circ T_{\sigma_l}(a_l)(z) \chi_{3Q_l^{**}}(z) \Big) \\ \label{eq.4A05} \lesssim& \prod_{l=1}^m M\chi_{Q_l}(x)^\frac{n+N+1}{mn} \left( 1+M^{(m)} \circ T_{\sigma_l}(a_l)(x) \right), \end{align} since $x\in Q_1^{**}$. For the second term, using the decay of $T_{\sigma_1}a_1(y)$ for $y\notin Q_1^*$ as in Lemma \ref{lm.3A1}, we obtain $$ \frac{1}{|Q_1^*|} \int_{{\mathbb R}^n \setminus Q_1^*} |{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy \lesssim \frac{1}{|Q_1^*|} \int_{{\mathbb R}^n \setminus Q_1^*} \frac{\ell(Q_1)^{n+N+1}}{|y-c_1|^{n+N+1}} \prod_{l=2}^m |T_{\sigma_l}a_l(y)|\,dy.
$$ We decompose ${\mathbb R}^n \setminus Q_1^*$ into dyadic annuli and estimate \begin{align*} & \frac{1}{|Q_1^*|} \int_{{\mathbb R}^n \setminus Q_1^*} |{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy \\ &\lesssim \sum_{j=1}^\infty 2^{j(-N-1)} \frac{1}{|2^jQ_1^*|} \int_{2^{j}Q_1^*} \chi_{2^{j}Q_1^*}(y) \prod_{l=2}^m |T_{\sigma_l}a_l(y)|\,dy\\ &\lesssim \sum_{j=1}^\infty 2^{j(-N-1)} \prod_{l=2}^m \Big( \frac{1}{|2^jQ_1^*|}\int_{2^jQ_1^*}|T_{\sigma_l}a_l(y)|^m\,dy \Big)^{\frac1m}\\ &\lesssim \sum_{j=1}^\infty 2^{j(-N-1)} \prod_{l=2}^m \Big( \inf_{z\in 2^jQ_1^*}(M\chi_{2^jQ_l^{**}})(z)^{\frac{n+N+1}{mn}} + \inf_{z\in 2^jQ_1^*} M^{(m)} \circ T_{\sigma_l}(a_l)(z) \chi_{2^{j+1}Q_l^{**}}(z) \Big), \end{align*} where we used \eqref{eq.3D4} in the last inequality. Since $M\chi_{2^jQ}\lesssim 2^{jn}M\chi_Q$, \[ \chi_{2^{j+1}Q_l^{**}}(x) \le (M\chi_{2^jQ_l^{**}})(x)^{\frac{n+N+1}{mn}} \lesssim 2^{\frac{j(n+N+1)}{m}} M\chi_{Q_l}(x)^{\frac{n+N+1}{mn}}. \] Inserting this inequality into the previous estimate, we obtain \begin{align} \notag &\frac{1}{|Q_1^*|} \int_{{\mathbb R}^n \setminus Q_1^*} |{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy \\ \notag &\lesssim \sum_{j=1}^\infty 2^{-j(\frac{n+N+1}{m}-n)} \prod_{l=1}^m \Big( M\chi_{Q_l}(x)^{\frac{n+N+1}{mn}} + M^{(m)} \circ T_{\sigma_l}(a_l)(x) M\chi_{Q_l}(x)^\frac{n+N+1}{mn} \Big) \\ \label{eq.4E00} &\lesssim \prod_{l=1}^m M\chi_{Q_l}(x)^\frac{n+N+1}{mn} \left( 1+M^{(m)} \circ T_{\sigma_l}(a_l)(x) \right), \end{align} since $N \gg n$. Combining \eqref{eq.4A03}--\eqref{eq.4E00} completes the proof of \eqref{eq.4A01}. \end{proof} \begin{lemma}\label{lm.4F01} Assume $x \notin Q_1^{**}$ and $c_1 \notin B(x,100n^2t)$. Then we have \begin{align*} \frac{1}{t^n} \int_{B(x,t)}|{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy \lesssim \prod_{l=1}^m M\chi_{Q_l}(x)^\frac{n+N+1}{mn} \left( 1+M^{(m)} \circ T_{\sigma_l}(a_l)(x) \right).
\end{align*} \end{lemma} \begin{proof} Fix any $x\notin Q_1^{**}$ and $t>0$ such that $c_1\notin B(x,100n^2t)$. We denote \begin{equation} \label{lm.4F02} J= \{ 2\le l\le m: x\notin Q_l^{**} \},\quad J_0 = \{l\in J : B(x,2t)\cap Q_l^*=\emptyset\},\quad J_1 = J\setminus J_0. \end{equation} As in the previous lemma, it is enough to consider the reduced form \eqref{eq.4A02} of $\mathcal{T}_\sigma$. From the H\"{o}lder inequality, we have \begin{align*} &\frac{1}{t^n} \int_{B(x,t)} |\mathcal{T}_\sigma(a_1,\ldots,a_m)(y)|dy\\ &\lesssim \| T_{\sigma_1}a_1\chi_{B(x,t)} \|_{L^\infty} \prod_{l\in J_0} \|T_{\sigma_l}a_l\chi_{B(x,t)}\|_{L^{\infty}}\\ &\quad \times \prod_{l\in J_1} \left( \frac{1}{|B(x,t)|} \int_{B(x,t)}|T_{\sigma_l}a_l(y)|^mdy \right)^{\frac{1}{m}} \prod_{l\notin J} \left( \frac{1}{|B(x,t)|} \int_{B(x,t)}|T_{\sigma_l}a_l(y)|^mdy \right)^{\frac{1}{m}}\\ &=: {\rm I}\times {\rm II}\times {\rm III}\times {\rm IV}. \end{align*} For $\rm I$, we notice that $Q_1^*\cap B(x,2t)=\emptyset$, since $x\notin Q_1^{**}$ and $c_1\notin B(x,100n^2t)$. So we have only to use the decay estimate for $T_{\sigma_1}a_1$ to get \[ {\rm I}= \|T_{\sigma_1}a_1\chi_{B(x,t)}\|_{L^\infty} \lesssim \left( \frac{\ell(Q_1)}{ |x-c_1|+\ell(Q_1) } \right)^{n+N+1}. \] For all $l\in J_1$, since $B(x,2t)\cap Q_l^*\neq\emptyset$, we have $t\gtrsim \ell(Q_l)$ and hence $Q_l^*\subset B(x,100n^2t)$. Therefore, \begin{equation} \label{eq.4E01} \left( \frac{1}{|B(x,t)|} \int_{B(x,t)}|T_{\sigma_l}a_l(y)|^mdy \right)^{\frac{1}{m}} \lesssim \left( \frac{|Q_l|}{|B(x,t)|} \right)^{\frac{1}{m}} \lesssim 1 \end{equation} for all $l\in J_1$.
Now combining the above inequality with the estimate for $\rm I$ yields \begin{equation} \label{eq.4B01} {\rm I}\times {\rm III} \lesssim \left( \frac{\ell(Q_1)}{ |x-c_1|+\ell(Q_1) } \right)^{\frac{n+N+1}{m}} \prod_{l\in J_1} \left( \frac{\ell(Q_1)}{|x-c_1|+\ell(Q_1)} \right)^{\frac{n+N+1}{m}}. \end{equation} As shown above, $Q_l^*\subset B(x,100n^2t)$ for all $l\in J_1$, which implies $|x-c_l|\lesssim t$. Furthermore, $c_1\notin B(x,100n^2t)$ means $t\lesssim |x-c_1|$, which yields $|x-c_l|\lesssim t\lesssim |x-c_1|$. Recalling $\ell(Q_1)\leq \ell(Q_l)$, we see that \[ \frac{\ell(Q_1)}{ |x-c_1|+\ell(Q_1) } \lesssim \frac{\ell(Q_l)}{ |x-c_l|+\ell(Q_l) }. \] From \eqref{eq.4B01}, we obtain \begin{align} {\rm I}\times {\rm III} \lesssim M\chi_{Q_1^{**}}(x)^{\frac{n+N+1}{m n}} \label{eq:161003-11} \prod_{l\in J_1} M\chi_{Q_l}(x)^\frac{n+N+1}{m n}. \end{align} Now, we turn to the estimates for ${\rm II}$ and ${\rm IV}$. For ${\rm II}$, we have only to employ the moment condition of $a_l$ to get \begin{align} \label{eq:161003-12} {\rm II} &= \prod_{l\in J_0} \|T_{\sigma_l}a_l \cdot \chi_{B(x,t)}\|_{L^\infty} \lesssim \prod_{l\in J_0} M\chi_{Q_l}(x)^{\frac{n+N+1}{n}}. \end{align} For ${\rm IV}$, since $x\in Q_l^{**}$ for $l\notin J$, we can estimate \begin{align} \label{eq:161003-13} {\rm IV} &\lesssim \prod_{l\notin J} M^{(m)} \circ T_{\sigma_l}(a_l)(x) \chi_{Q_l^{**}}(x). \end{align} Putting (\ref{eq:161003-11})--(\ref{eq:161003-13}) together, we conclude the proof of Lemma \ref{lm.4F01}. \end{proof} \begin{lemma}\label{lem:160722-3} Assume $x \notin Q_1^{**}$ and $c_1 \in B(x,100n^2t)$.
Then we have \begin{align} \notag & \frac{\ell(Q_1)^{s+1}}{t^{n+s+1}} \int_{Q_1^{*}}|{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy\\ \label{eq:160726-123} & \lesssim M\chi_{Q_1}(x)^{\frac{n+s+1}{n}} \prod_{l=1}^m \inf_{z\in Q_1^*} M\chi_{Q_l}(z)^\frac{n+N+1}{mn} \left( 1+M^{(m)} \circ T_{\sigma_l}(a_l)(z) \right). \end{align} \end{lemma} \begin{proof} It is enough to restrict $\mathcal{T}_\sigma$ to the form \eqref{eq.4A02}. By the H\"older inequality we have \begin{align*} & \frac{\ell(Q_1)^{s+1}}{t^{n+s+1}} \int_{Q_1^*}|{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy\\ \le& \frac{\ell(Q_1)^{n+s+1}}{t^{n+s+1}} \prod_{l=1}^m \Big( \frac{1}{|Q_1^*|} \int_{Q_1^*}|T_{\sigma_l}a_l(y)|^mdy \Big)^{\frac1m}\\ \lesssim& \frac{\ell(Q_1)^{n+s+1}}{t^{n+s+1}} \prod_{l=1}^m \Big( \inf_{z\in Q_1^*} M\chi_{Q_l}(z)^{\frac{n+N+1}{n}} + \inf_{z\in Q_1^*} M^{(m)} \circ T_{\sigma_l}(a_l)(z) \chi_{2Q_l^{**}}(z) \Big), \end{align*} where the last inequality is deduced from \eqref{eq.3D4}. Since $x\notin Q_1^{**}$ and $c_1\in B(x,100n^2t)$, we have $Q_1 \subset B(x,10000n^3t)$, which implies $ \ell(Q_1)/t\lesssim M\chi_{Q_1}(x)^\frac{1}{n} $. As a result, \begin{align*} & \frac{\ell(Q_1)^{s+1}}{t^{n+s+1}} \int_{Q_1^*}|{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy\\ &\lesssim M\chi_{Q_1}(x)^\frac{n+s+1}{n} \prod_{l=1}^m \inf_{z \in Q_1^*} M\chi_{Q_l}(z)^\frac{n+N+1}{mn} \left( 1+M^{(m)} \circ T_{\sigma_l}(a_l)(z) \right). \end{align*} This proves (\ref{eq:160726-123}). \end{proof} \begin{lemma}\label{lem:160722-4} Assume $x \notin Q_1^{**}$ and $c_1 \in B(x,100n^2t)$. Then we have \begin{align*} & \frac{1}{t^{n+s+1}} \int_{{\mathbb R}^n \setminus Q_1^*} |y-c_1|^{s+1}|{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy\\ & \lesssim M\chi_{Q_1}(x)^{\frac{n+s+1}{n}} \prod_{l=1}^m \inf_{z\in Q_1^*}M\chi_{Q_l}(z)^{\frac{n+N+1}{mn}} \left( 1+M^{(m)} \circ T_{\sigma_l}(a_l)(z) \right).
\end{align*} \end{lemma} \begin{proof} Using the decay of $T_{\sigma_1}a_1(y)$ for $y\notin Q_1^*$, we obtain \begin{align*} \lefteqn{ \frac{1}{t^{n+s+1}} \int_{{\mathbb R}^n \setminus Q_1^*}|y-c_1|^{s+1}|{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy }\\ &\lesssim \frac{1}{t^{n+s+1}} \int_{{\mathbb R}^n \setminus Q_1^*} |y-c_1|^{s+1} \frac{\ell(Q_1)^{n+N+1}}{|y-c_1|^{n+N+1}} \prod_{l=2}^m |T_{\sigma_l}a_l(y)|\,dy. \end{align*} By a dyadic decomposition of ${\mathbb R}^n \setminus Q_1^*$ as in the proof of Lemma \ref{lm.4A00}, we can estimate \begin{align*} &\frac{1}{t^{n+s+1}} \int_{{\mathbb R}^n \setminus Q_1^*}|y-c_1|^{s+1}|{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy \\ &\lesssim \frac{\ell(Q_1)^{s+1}}{t^{n+s+1}} \sum_{j=1}^\infty 2^{j(s-N-n)} \int_{2^{j}Q_1^*} \chi_{2^{j}Q_1^*}(y) \prod_{l=2}^m |T_{\sigma_l}a_l(y)|\,dy\\ &\lesssim \frac{\ell(Q_1)^{n+s+1}}{t^{n+s+1}} \sum_{j=1}^\infty 2^{j(s-N)} \prod_{l=2}^m \Big( \frac{1}{|2^jQ_1^*|}\int_{2^jQ_1^*}|T_{\sigma_l}a_l(y)|^m\,dy \Big)^{\frac1m}\\ &\lesssim \frac{\ell(Q_1)^{n+s+1}}{t^{n+s+1}} \sum_{j=1}^\infty 2^{j(s-N)} \prod_{l=2}^m \Big( \inf_{z\in 2^jQ_1^*}M\chi_{2^jQ_l^{**}}(z)^{\frac{n+N+1}{mn}} + \inf_{z\in 2^jQ_1^*} M^{(m)} \circ T_{\sigma_l}(a_l)(z) \chi_{2^{j+1}Q_l^{**}}(z) \Big), \end{align*} where we used \eqref{eq.3D4} in the last inequality. We now repeat the argument establishing \eqref{eq.4E00} to obtain \begin{align*} &\frac{1}{t^{n+s+1}} \int_{{\mathbb R}^n \setminus Q_1^*}|y-c_1|^{s+1}|{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy \\ &\lesssim \frac{\ell(Q_1)^{n+s+1}}{t^{n+s+1}} \prod_{l=1}^m \inf_{z\in Q_1^*}M\chi_{Q_l}(z)^{\frac{n+N+1}{mn}} \left( 1+M^{(m)} \circ T_{\sigma_l}(a_l)(z) \right). \end{align*} Moreover, the assumptions $x\notin Q_1^{**}$ and $c_1\in B(x,100n^2t)$ imply $ \frac{\ell(Q_1)}{t}\lesssim M\chi_{Q_1}(x)^{\frac1n}.
$ Therefore, \begin{align*} & \frac{1}{t^{n+s+1}} \int_{{\mathbb R}^n \setminus Q_1^*}|y-c_1|^{s+1}|{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy\\ &\lesssim M\chi_{Q_1}(x)^{\frac{n+s+1}{n}} \prod_{l=1}^m \inf_{z\in Q_1^*}M\chi_{Q_l}(z)^{\frac{n+N+1}{mn}} \left( 1+M^{(m)} \circ T_{\sigma_l}(a_l)(z) \right). \end{align*} This proves Lemma \ref{lem:160722-4}. \end{proof} \begin{lemma} \label{lm.4B02} For all $x\in {\mathbb R}^n$, we have \begin{align*} M_\phi\circ \mathcal{T}_\sigma(a_1,\ldots,a_m)(x) \lesssim& \prod_{l=1}^m M\chi_{Q_l}(x)^{\frac{n+N+1}{mn}} \left( 1+M^{(m)} \circ T_{\sigma_l}(a_l)(x) \right)\\ &+ M\chi_{Q_1}(x)^{\frac{n+s+1}{n}} \prod_{l=1}^m \inf_{z\in Q_1^*}M\chi_{Q_l}(z)^{\frac{n+N+1}{mn}} \left( 1+M^{(m)} \circ T_{\sigma_l}(a_l)(z) \right). \end{align*} \end{lemma} \begin{proof} If $x\in Q_1^{**}$, the desired estimate is a consequence of Lemma \ref{lm.4A00}. Fix $x\notin Q_1^{**}$. To estimate $M_{\phi}\circ \mathcal{T}_{\sigma}(a_1,\ldots,a_m)(x)$, we need to examine \[ \Big| \int_{{\mathbb R}^n} \phi_t(x-y){\mathcal T}_\sigma(a_1,\ldots,a_m)(y)\,dy \Big| \] for each $t\in (0,\infty)$. If $c_1 \notin B(x,100n^2t)$, then we make use of Lemma \ref{lm.4F01}; otherwise, when $c_1 \in B(x,100n^2t)$, we recall \eqref{eq.3C02} and then apply Lemmas \ref{lem:160722-3} and \ref{lem:160722-4} to obtain the required estimate in Lemma \ref{lm.4B02}. This completes the proof of the lemma. \end{proof} \subsection{The proof of Proposition \ref{LM.Key-31} for the product type} To proceed with the proof of \eqref{eq.PWEST}, we set \begin{equation*} A= \Big\|\sum_{k_1,\ldots,k_m=1}^\infty \Big(\prod_{l=1}^m \lambda_{l,k_l}\Big) M_\phi \circ {\mathcal T}_\sigma(a_{1,k_1},\ldots,a_{m,k_m})\Big\|_{L^p}. \end{equation*} For each $\vec{k}=(k_1,\ldots,k_m)$, we recall that $R_{\vec{k}}$ denotes the cube with the smallest side length among $Q_{1,k_1},\ldots,Q_{m,k_m}$.
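Before starting, we isolate the elementary summation fact that drives the dyadic estimates in this and the previous section (a two-line computation, recorded here only for convenience): whenever $N$ is large enough that $\delta:=\frac{n+N+1}{m}-n>0$,

```latex
\sum_{j=1}^{\infty} 2^{-j\left(\frac{n+N+1}{m}-n\right)}
=\sum_{j=1}^{\infty} 2^{-j\delta}
=\frac{2^{-\delta}}{1-2^{-\delta}}
=\frac{1}{2^{\delta}-1}<\infty .
```

This is exactly the convergence invoked, under the standing assumption $N\gg n$, each time a geometric series in $j$ is summed below.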
In view of Lemma \ref{lm.4B02}, we have \begin{equation}\label{161031-1} A\lesssim B:= \left\| \sum_{k_1,\ldots, k_m=1}^\infty \prod_{l=1}^m \lambda_{l,k_l} \left(M\chi_{Q_{l,k_l}}\right)^{\frac{n+N+1}{mn}} \left( 1+M^{(m)} \circ T_{\sigma_l}(a_{l,k_l}) \right) \right\|_{L^p}. \end{equation} In fact, our assumption on $s$ ensures $(n+s+1)p/n>1$, and hence we may employ the boundedness of $M$ to obtain \begin{align*} A&\lesssim B+ \left\| \sum_{k_1,\ldots,k_m=1}^\infty \left(M\chi_{R_{\vec{k}}^*}\right)^{\frac{n+s+1}{n}} \prod_{l=1}^m \lambda_{l,k_l} \inf_{z\in R_{\vec{k}}^*} \bigg(M\chi_{Q_{l,k_l}}\bigg) ^{\frac{n+N+1}{mn}} \left( 1+M^{(m)} \circ T_{\sigma_l}(a_{l,k_l}) \right) \right\|_{L^p}\\ &\lesssim B. \end{align*} So, our task is to estimate $B$. Here, we prepare the following lemma. \begin{lemma}\label{lm-161031-1} Let $p \in(0,\infty)$ and $\alpha>\max(1,p^{-1})$. Assume that $q \in (p,\infty] \cap [1,\infty]$. Suppose that we are given a sequence of cubes $\{Q_k\}_{k=1}^\infty$ and a sequence of non-negative $L^q$-functions $\{F_k\}_{k=1}^\infty$. Then \[ \Big\| \sum_{k=1}^\infty (M\chi_{Q_k})^\alpha F_k \Big\|_{L^p} \lesssim \Big\| \sum_{k=1}^\infty \chi_{Q_k} M^{(q)}F_k \Big\|_{L^p}. \] \end{lemma} \begin{proof} By Lemma \ref{lm.2A00} and the fact that \( M\chi_{Q} \lesssim \chi_{Q}+ \sum_{j=1}^{\infty} 2^{-jn}\chi_{2^{j}Q\setminus 2^{j-1}Q}, \) we have \begin{align*} \left\| \sum_{k=1}^\infty (M\chi_{Q_k})^\alpha F_k \right\|_{L^p} &\lesssim \left\| \sum_{j=0}^\infty \sum_{k=1}^\infty 2^{-\alpha jn} \chi_{2^jQ_k}F_k \right\|_{L^p}\\ &\lesssim \left\| \sum_{j=0}^\infty \sum_{k=1}^\infty 2^{-\alpha jn} \chi_{2^jQ_k} \left( \frac{1}{|2^jQ_k|} \int_{2^jQ_k}F_k(y)^qdy \right)^\frac{1}{q} \right\|_{L^p}. \end{align*} Choose $\beta$ with $\alpha>\beta>\max(1,\frac1p)$ and observe the trivial estimate \[ \chi_{2^jQ_k} \lesssim \left( 2^{jn}M\chi_{Q_k} \right)^{\beta}.
\] Now, Lemma \ref{lm.2B00} gives \begin{align*} \left\| \sum_{k=1}^\infty (M\chi_{Q_k})^\alpha F_k \right\|_{L^p} \lesssim& \left\| \sum_{j=0}^\infty \sum_{k=1}^\infty \chi_{Q_k} \left( \frac{2^{(\beta-\alpha)jqn}}{|2^jQ_k|} \int_{2^jQ_k}F_k(y)^qdy \right)^\frac{1}{q} \right\|_{L^p} \\ \lesssim& \Big\| \sum_{k=1}^\infty \chi_{Q_k} M^{(q)}F_k \Big\|_{L^p}, \end{align*} which yields the desired estimate. \end{proof} Lemma \ref{lm-161031-1} can be regarded as a substitute for Lemma \ref{lm.2A00}. Before applying Lemma \ref{lm-161031-1} to $B$, we observe \begin{align*} B \leq \prod_{l=1}^m \left\| \sum_{k_l=1}^\infty \lambda_{l,k_l} \left(M\chi_{Q_{l,k_l}}\right)^{\frac{n+N+1}{mn}} \left( 1+M^{(m)} \circ T_{\sigma_l}(a_{l,k_l}) \right) \right\|_{L^{p_l}}. \end{align*} Then, applying the Fefferman-Stein vector-valued inequality and Lemma \ref{lm-161031-1}, \begin{align*} &\left\| \sum_{k_l=1}^\infty \lambda_{l,k_l} \left(M\chi_{Q_{l,k_l}}\right)^{\frac{n+N+1}{mn}} \left( 1+ M^{(m)} \circ T_{\sigma_l}(a_{l,k_l}) \right) \right\|_{L^{p_l}}\\ &\lesssim \left\| \sum_{k_l=1}^\infty \lambda_{l,k_l} M \big( \chi_{Q_{l,k_l}^{**}}\big)^{\frac{n+N+1}{mn}} \right\|_{L^{p_l}} + \left\| \sum_{k_l=1}^\infty \lambda_{l,k_l} \left(M\chi_{Q_{l,k_l}}\right)^{\frac{n+N+1}{mn}} M^{(m)} \circ T_{\sigma_l}(a_{l,k_l}) \right\|_{L^{p_l}}\\ &\lesssim \left\| \sum_{k_l=1}^\infty \lambda_{l,k_l} \chi_{Q_{l,k_l}} \right\|_{L^{p_l}} + \left\| \sum_{k_l=1}^\infty \lambda_{l,k_l} \chi_{Q_{l,k_l}^{**}} M\circ M^{(m)} \circ T_{\sigma_l}(a_{l,k_l}) \right\|_{L^{p_l}}.
\end{align*} For the second term, we choose $q\in(m,\infty)$ and employ Lemma \ref{lm.2A00} together with the boundedness of $M$ and $T_{\sigma_l}$ to obtain \begin{align*} &\left\| \sum_{k_l=1}^\infty \lambda_{l,k_l} \chi_{Q_{l,k_l}^{**}} M\circ M^{(m)} \circ T_{\sigma_l}(a_{l,k_l}) \right\|_{L^{p_l}}\\ &\lesssim \left\| \sum_{k_l=1}^\infty \lambda_{l,k_l} \frac{\chi_{Q_{l,k_l}^{**}}}{|Q_{l,k_l}|^\frac{1}{q}} \left\|M\circ M^{(m)} \circ T_{\sigma_l}(a_{l,k_l})\right\|_{L^q} \right\|_{L^{p_l}} \lesssim \left\| \sum_{k_l=1}^\infty \lambda_{l,k_l} \chi_{Q_{l,k_l}} \right\|_{L^{p_l}}. \end{align*} As a result, \[ A\lesssim B\lesssim \prod_{l=1}^m \left\| \sum_{k_l=1}^\infty \lambda_{l,k_l} \chi_{Q_{l,k_l}} \right\|_{L^{p_l}}, \] which completes the proof of Proposition \ref{LM.Key-31}. \section{The mixed type} In this section, we prove Proposition \ref{LM.Key-31} for operators of type \eqref{eq.CalZygOPT-3}. The main techniques needed to deal with the operator $\mathcal{T}_\sigma$ of mixed type are combinations of those for the two previous types. We now establish some necessary estimates for $\mathcal{T}_\sigma$. For the mixed type, we need the following lemma, which can be shown in a way similar to Lemma \ref{lm.3A1}.\footnote{ The detailed proof is as follows. Fix any $y\in 2^{j+1}Q_1^*\setminus 2^jQ_1^*$. We use the notation $K^1(y,y_1,\ldots, y_m)$ as in the proof of Lemma \ref{lm.3A1}. Then for any $y_l\in Q_l$, $l=1,\ldots,m$, we have \begin{align*} | K^1(y,y_1,\ldots,y_m) | &\lesssim \left( \frac{ \ell(Q_1) } { |y-y_1| + \sum_{l\in \Lambda_j} |y-y_l| + \sum_{l\geq2} |y-y_l| } \right)^{n+N+1}\\ &\lesssim \left( \frac{ \ell(Q_1) } { |y-c_1| + \sum_{l\in \Lambda_j} |y-c_l| + \sum_{l\geq2} |y-y_l| } \right)^{n+N+1}. \end{align*} In fact, if $l\in \Lambda_j$, then $2^jQ_1^{**}\cap 2^jQ_l^{**}=\emptyset$, and hence $y\in 2^{j+1}Q_1^*$ implies $|y-y_l|\sim |y-c_l|$ for all $y_l\in Q_l$ for such $l$.
Of course, $|y-y_1|\sim |y-c_1|$ is clear since $y\notin 2^jQ_1^*$. Using this kernel estimate, we may prove the desired estimate. } \begin{lemma}\label{lm-161112-1} Let $\sigma$ be a Coifman-Meyer multiplier and let $a_l$ be $(p_l,\infty)$-atoms supported on $Q_l$ for $1\leq l\leq m$. Assume $\ell(Q_1)=\min{\{\ell(Q_l):l=1,\ldots, m\}}$ and write $\Lambda_j=\{l=1,\ldots,m: 2^jQ_1^{**}\cap 2^jQ_l^{**}=\emptyset\}$. Then for any $y\in 2^{j+1}Q_1^*\setminus 2^jQ_1^*$ we have \[ |T_\sigma(a_1,\ldots, a_m)(y)| \lesssim \left( \frac{\ell(Q_1)} { |y-c_1| + \sum_{l\in \Lambda_j} |y-c_l|} \right)^{n+N+1}. \] \end{lemma} \subsection{Fundamental estimates for the mixed type} Let $a_k$ be $(p_k,\infty)$-atoms supported in $Q_k$ for all $1\le k\le m$. Suppose $Q_1$ is the cube such that $\ell(Q_1) = \min\{\ell(Q_k) : 1\le k\le m\}$. For each $1\le g\le G$, let $Q_{l(g)}$ be the smallest cube among $\{Q_l\}_{l \in I_g}$ and let $m_g = \sharp I_g$ be the cardinality of $I_g$. Then we have the following analogues of Lemmas \ref{lm.4A00}--\ref{lm.4B02}. \begin{lemma} \label{lm.5A00} For all $x\in Q_1^{**}$, we have \begin{align} \notag &M_\phi\circ \mathcal{T}_\sigma(a_1,\ldots,a_m)(x)\chi_{Q_1^{**}}(x)\\ \label{eq.5A01} &\lesssim \prod_{g=1}^G \left( M\chi_{Q_{l(g)}}(x)^\frac{(n+N+1)m_g}{nm} M^{(G)} \circ T_{\sigma_{I_g}}(\{a_{l}\}_{l\in I_g})(x) + \prod_{l\in I_g} M\chi_{Q_l}(x)^\frac{n+N+1}{mn} \right). \end{align} \end{lemma} \begin{proof} Fix $x\in Q_1^{**}$. We need to estimate \[ \Big| \int_{{\mathbb R}^n} \phi_t(x-y){\mathcal T}_\sigma(a_1,\ldots,a_m)(y)\,dy \Big| \lesssim \frac{1}{t^n} \int_{B(x,t)} |{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy \] for each $t\in (0,\infty)$.
As in the previous section, it is enough to consider the following form: \begin{equation}\label{eq.CalZygOPT-31} {\mathcal T}_\sigma(f_1,\ldots,f_m) = \prod_{g=1}^{G} T_{\sigma_{I_g}}(\{f_l\}_{l \in I_g}), \end{equation} where $\{I_g\}_{g=1}^{G}$ is a partition of $\{1,\ldots,m\}$ with $1\in I_1$. By the H\"older inequality, we have \begin{equation} \label{eq.4B07} \frac{1}{t^n} \int_{B(x,t)}|{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy \lesssim \prod_{g=1}^G \Big( \frac{1}{t^n} \int_{B(x,t)} |T_{\sigma_{I_g}}(\{a_l\}_{l \in I_g})(y)|^{G}dy \Big)^{\frac1{G}}. \end{equation} For each $1\le g\le G$, we need to examine \[ \Big( \frac{1}{t^n} \int_{B(x,t)} |T_{\sigma_{I_g}}(\{a_l\}_{l \in I_g})(y)|^{G}dy \Big)^{\frac1{G}}. \] We consider two cases as in the proof of Lemma \ref{lm.4A00}. \textbf{Case 1: $t\le \ell(Q_{1})$}. We observe that \begin{align*} \frac{1}{t^n} \int_{B(x,t)}|{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy &\le \prod_{g: B(x,t)\cap Q_{l(g)}^{**}\neq\emptyset} \Big( \frac{1}{t^n} \int_{B(x,t)} |T_{\sigma_{I_g}}(\{a_l\}_{l \in I_g})(y)|^{G}dy \Big)^{\frac1{G}}\\ &\quad \times \prod_{g: B(x,t)\cap Q_{l(g)}^{**}=\emptyset} \Big( \frac{1}{t^n} \int_{B(x,t)} |T_{\sigma_{I_g}}(\{a_l\}_{l \in I_g})(y)|^{G}dy \Big)^{\frac1{G}}. \end{align*} When $B(x,t)\cap Q_{l(g)}^{**}\neq\emptyset$, we see that $x\in 3Q_{l(g)}^{**}$. This shows \begin{align}\label{161112-4} \lefteqn{ \prod_{g: B(x,t)\cap Q_{l(g)}^{**}\neq\emptyset} \Big( \frac{1}{t^n} \int_{B(x,t)} |T_{\sigma_{I_g}}(\{a_l\}_{l \in I_g})(y)|^{G}dy \Big)^{\frac1{G}} }\\ \nonumber &\lesssim \prod_{g: B(x,t)\cap Q_{l(g)}^{**}\neq\emptyset} \chi_{3Q_{l(g)}^{**}}(x) M^{(G)} \circ T_{\sigma_{I_g}}(\{a_l\}_{l\in I_g})(x).
\end{align} When $B(x,t)\cap Q_{l(g)}^{**}=\emptyset$, we may use \eqref{eq.3D4} to get \begin{equation}\label{161112-3} \prod_{g:B(x,t)\cap Q_{l(g)}^{**}=\emptyset} \Big( \frac{1}{t^n} \int_{B(x,t)} |T_{\sigma_{I_g}}(\{a_l\}_{l \in I_g})(y)|^{G}dy \Big)^{\frac1{G}} \lesssim \prod_{g: B(x,t)\cap Q_{l(g)}^{**}=\emptyset} \prod_{l\in I_g} M\chi_{Q_l}(x)^\frac{n+N+1}{mn}. \end{equation} The two estimates \eqref{161112-4} and \eqref{161112-3} yield the desired estimate in Case 1. \textbf{Case 2: $t> \ell(Q_{1})$}. We split \begin{align*} \lefteqn{ \frac{1}{t^n} \int_{B(x,t)}|{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy }\\ &\lesssim \frac{1}{|Q_1^*|} \int_{Q_1^*} |{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy + \frac{1}{|Q_1^*|} \int_{(Q_1^*)^c} |{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy. \end{align*} For the first term, an argument as in (\ref{161112-3}) yields \begin{align*} & \frac{1}{|Q_1^*|} \int_{Q_1^*} |{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy\\ &\lesssim \prod_{g=1}^G \left( \prod_{l\in I_g} M\chi_{Q_l}(x)^\frac{n+N+1}{mn} + \chi_{Q_{l(g)}^{**}}(x) M^{(G)} \circ T_{\sigma_{I_g}}(\{a_l\}_{l\in I_g})(x) \right). \end{align*} For the second term, by a dyadic decomposition of $(Q_1^*)^c$, \begin{align*} &\frac{1}{|Q_1^*|} \int_{(Q_1^*)^c} |{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy\\ &= \sum_{j=0}^{\infty} \frac{1}{|Q_1^*|} \int_{2^{j+1}Q_1^*\setminus 2^jQ_1^*} |T_{\sigma_{I_1}}(\{a_l\}_{l\in I_1})(y)| \prod_{g\geq 2} |T_{\sigma_{I_g}}(\{a_l\}_{l\in I_g})(y)|dy = \sum_{j=0}^\infty I_j. \end{align*} Now, we fix $j$ and estimate each $I_j$.
Letting $\Lambda_j=\{l=1,\ldots,m: 2^jQ_1^{**}\cap 2^jQ_l^{**}=\emptyset\}$ and using Lemma \ref{lm-161112-1}, for $y\in2^{j+1}Q_1^*\setminus2^jQ_1^*$ we obtain \begin{align*} |T_{\sigma_{I_1}}(\{a_l\}_{l\in I_1})(y)| &\lesssim \left( \frac{\ell(Q_1)}{|y-c_1|+\sum_{l\in I_1\cap \Lambda_j}|y-c_l|} \right)^{n+N+1}\\ &\lesssim 2^{-j(n+N+1)} \left( \frac{2^j\ell(Q_1)}{2^j\ell(Q_1)+\sum_{l\in I_1\cap \Lambda_j}|c_1-c_l|} \right)^{n+N+1} . \end{align*} We estimate this term further. If $l\in I_1\cap \Lambda_j$, then $|c_1-c_l|\sim |x-c_l|$, since $x\in Q_1^{**}$. On the other hand, if $l\in I_1\setminus \Lambda_j$, then $\chi_{2^jQ_l^{**}}(x)=\chi_{Q_1^{**}}(x)=1$, since $x\in Q_1^{**}$. So, we have \begin{align*} |T_{\sigma_{I_1}}(\{a_l\}_{l\in I_1})(y)| &\lesssim 2^{-j(n+N+1)} \prod_{l\in I_1\cap \Lambda_j} \left( \frac{2^j\ell(Q_l)}{|x-c_l|} \right)^\frac{n+N+1}{m} \prod_{l\in I_1\setminus \Lambda_j} \chi_{2^jQ_l^{**}}(x)\\ &\lesssim 2^{-j(n+N+1)} \prod_{l\in I_1\setminus \{1\}} M\chi_{2^jQ_l^{**}}(x)^\frac{n+N+1}{mn}\\ &\lesssim 2^{-j(n+N+1)} 2^{j\frac{n+N+1}{m}(m_1-1)} \prod_{l\in I_1} M\chi_{Q_l}(x)^\frac{n+N+1}{mn}. \end{align*} This and H\"{o}lder's inequality imply that \begin{equation}\label{161112-1} I_j \lesssim 2^{-j(N+1)} 2^{j\frac{n+N+1}{m}(m_1-1)} \prod_{l\in I_1} M\chi_{Q_l}(x)^\frac{n+N+1}{mn} \prod_{g\geq2} \left( \frac{1}{|2^jQ_1^*|} \int_{2^jQ_1^*} |T_{\sigma_{I_g}}(\{a_l\}_{l\in I_g})(y)|^Gdy \right)^\frac{1}{G}.
\end{equation} In the usual way, we claim that \begin{align}\label{161112-2} &\prod_{g\geq2} \left( \frac{1}{|2^jQ_1^*|} \int_{2^jQ_1^*} |T_{\sigma_{I_g}}(\{a_l\}_{l\in I_g})(y)|^Gdy \right)^\frac{1}{G}\\ &\lesssim 2^{j\frac{n+N+1}{m}(m-m_1)} \prod_{g\geq2} \left( M\chi_{Q_{l(g)}}(x)^\frac{(n+N+1)m_g}{nm} M^{(G)} \circ T_{\sigma_{I_g}}(\{a_l\}_{l\in I_g})(x) + \prod_{l\in I_g} M\chi_{Q_l}(x)^\frac{n+N+1}{mn} \right).\nonumber \end{align} To see this, for each $j$ we again distinguish two possibilities for $g$: either $2^jQ_1^{**}\cap 2^jQ_{l(g)}^{**}\neq \emptyset$ or not. In the first case, we note that $x\in Q_1^*\subset 2^jQ_{l(g)}^{**}$ and recall that $m_g=\sharp I_g$; hence \begin{align*} \lefteqn{ \left( \frac{1}{|2^jQ_1^*|} \int_{2^jQ_1^*} |T_{\sigma_{I_g}}(\{a_l\}_{l\in I_g})(y)|^Gdy \right)^\frac{1}{G} }\\ &\lesssim \chi_{2^jQ_{l(g)}^{**}}(x) M^{(G)} \circ T_{\sigma_{I_g}}(\{a_l\}_{l\in I_g})(x)\\ &\lesssim 2^{j\frac{m_g(n+N+1)}{m}} M\chi_{Q_{l(g)}}(x)^\frac{m_g(n+N+1)}{mn} M^{(G)} \circ T_{\sigma_{I_g}}(\{a_l\}_{l\in I_g})(x). \end{align*} In the second case, where $2^jQ_1^{**}\cap 2^jQ_{l(g)}^{**}=\emptyset$, we use \eqref{eq.3D4} to see \begin{align*} \lefteqn{ \left( \frac{1}{|2^jQ_1^*|} \int_{2^jQ_1^*} |T_{\sigma_{I_g}}(\{a_l\}_{l\in I_g})(y)|^Gdy \right)^\frac{1}{G} }\\ &\lesssim \prod_{l\in I_g} M\chi_{2^jQ_l^{**}}(x)^\frac{n+N+1}{mn} \lesssim 2^{j\frac{m_g(n+N+1)}{m}} \prod_{l\in I_g} M\chi_{Q_l}(x)^\frac{n+N+1}{mn}. \end{align*} These two estimates yield \eqref{161112-2}. Inserting \eqref{161112-2} into \eqref{161112-1}, we arrive at \[ I_j\lesssim 2^{-j(\frac{n+N+1}{m}-n)} \prod_{g=1}^G \left( M\chi_{Q_{l(g)}}(x)^\frac{m_g(n+N+1)}{mn} M^{(G)} \circ T_{\sigma_{I_g}}(\{a_l\}_{l\in I_g})(x) + \prod_{l\in I_g} M\chi_{Q_l}(x)^\frac{n+N+1}{mn} \right).
\] Taking $N$ sufficiently large, we may sum the above estimate over $j \in {\mathbb N}$ and obtain the desired estimate. This completes the proof of Lemma \ref{lm.5A00}. \end{proof} \begin{lemma}\label{lm.5A02} Assume $x \notin Q_1^{**}$ and $c_1 \notin B(x,100n^2t)$. Then we have \begin{align*} &\frac{1}{t^n} \int_{B(x,t)}|{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy\\ &\lesssim \prod_{g=1}^G \left( M\chi_{Q_{l(g)}}(x)^\frac{m_g(n+N+1)}{mn} M^{(G)} \circ T_{\sigma_{I_g}}(\{a_l\}_{l\in I_g})(x) + \prod_{l\in I_g} M\chi_{Q_l}(x)^\frac{n+N+1}{mn} \right). \end{align*} \end{lemma} \begin{proof} Fix $x\notin Q_1^{**}$ and $t>0$ such that $c_1\notin B(x,100n^2t)$. Let ${\mathcal T}_\sigma$ be the operator of type \eqref{eq.CalZygOPT-3}. We may consider the reduced form \eqref{eq.CalZygOPT-31} of $\mathcal{T}_{\sigma}$ and start from \eqref{eq.4B07}. We define \[ J= \{ g=2,\ldots, G: x\notin Q_{l(g)}^{**} \}, \quad J_0= \{ g\in J: B(x,2t)\cap Q_{l(g)}^*=\emptyset \}, \quad J_1=J\setminus J_0 \] and split the product as follows: \begin{align*} \frac{1}{t^n} \int_{B(x,t)}|{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy &\lesssim \left\| T_{\sigma_{I_1}}(\{a_l\}_{l\in I_1}) \chi_{B(x,t)} \right\|_{L^\infty} \prod_{g\in J_0} \left\| T_{\sigma_{I_g}}(\{a_l\}_{l\in I_g}) \chi_{B(x,t)} \right\|_{L^\infty}\\ &\quad \times \prod_{g\in J_1} \left( \frac{1}{|B(x,t)|} \int_{B(x,t)} |T_{\sigma_{I_g}}(\{a_l\}_{l\in I_g})(y)|^Gdy \right)^\frac{1}{G}\\ &\quad \times \prod_{g\in \{2,\ldots,G\} \setminus J} \left( \frac{1}{|B(x,t)|} \int_{B(x,t)} |T_{\sigma_{I_g}}(\{a_l\}_{l\in I_g})(y)|^Gdy \right)^\frac{1}{G}\\ &= {\rm I} \times {\rm II} \times {\rm III} \times {\rm IV}.
\end{align*} To estimate I, we further partition $I_1$: \begin{align*} I_1^0 &= \{l\in I_1 : x\notin Q_l^{**}, B(x,2t)\cap Q_l^*=\emptyset\}, \quad I_1^1 = \{l\in I_1 : x\notin Q_l^{**}, B(x,2t)\cap Q_l^*\ne\emptyset\},\\ I_1^2 &= I_1\setminus(I_1^0\cup I_1^1). \end{align*} Since $x\notin Q_1^{**}$ and $c_1\notin B(x,100n^2t)$, we can see that $1\in I_1^0$. From Lemma \ref{lm.3A1}, we deduce \begin{align*} &|T_{\sigma_{I_1}}(\{a_l\}_{l \in I_1})(y)|\lesssim \frac{\ell(Q_1)^{n+N+1}}{(\sum_{l\in I_1^0}|y-c_l|)^{n+N+1}} \lesssim \frac{\ell(Q_1)^{n+N+1}}{(\sum_{l\in I_1^0}|x-c_l|)^{n+N+1}} \\ &\lesssim \Big(\frac{\ell(Q_1)}{|x-c_1|+\ell(Q_1)}\Big)^{(m-m_1)\frac{n+N+1}{m}} \prod_{l\in I_1^0} \Big(\frac{\ell(Q_l)}{|x-c_l|+\ell(Q_l)}\Big)^{\frac{n+N+1}{m}} \prod_{l\in I_1^1} \Big(\frac{\ell(Q_1)}{|x-c_1|+\ell(Q_1)}\Big)^{\frac{n+N+1}{m}} \end{align*} for all $y\in B(x,t)$, where $m_1=|I_1|$ is the cardinality of the set $I_1$. As in the proof of Lemma \ref{lm.4F01} for the product type, if $x\notin Q_l^{**}$ and $B(x,2t)\cap Q_l^{*}\ne\emptyset$, then $|x-c_l|\lesssim t\lesssim |x-c_1|$. This observation implies \[ \frac{\ell(Q_1)}{|x-c_1|+\ell(Q_1)} \lesssim \frac{\ell(Q_l)}{|x-c_l|+\ell(Q_l)} \] for all $l\in I_1^1$. Therefore, we can estimate \[ |T_{\sigma_{I_1}}(\{a_l\}_{l \in I_1})(y)| \lesssim \Big(\frac{\ell(Q_1)}{|x-c_1|+\ell(Q_1)}\Big)^{(m-m_1)\frac{n+N+1}{m}} \prod_{l\in I_1^0\cup I_1^1}M\chi_{Q_l}(x)^{\frac{n+N+1}{mn}} \] for all $y\in B(x,t)$. Obviously, $1\lesssim M\chi_{Q_l}(x)$ for all $l\in I_1^2$, and hence we have \begin{equation} \label{eq.5B08} |T_{\sigma_{I_1}}(\{a_l\}_{l \in I_1})(y)| \lesssim \Big(\frac{\ell(Q_1)}{|x-c_1|+\ell(Q_1)}\Big)^{(m-m_1)\frac{n+N+1}{m}} \prod_{l\in I_1}M\chi_{Q_l}(x)^{\frac{n+N+1}{mn}} \end{equation} for all $y\in B(x,t)$, which gives the desired estimate for I.
For the third term III, we simply have \[ {\rm III} \leq \prod_{g\in J_1} M^{(G)} \circ T_{\sigma_{I_g}}(\{a_l\}_{l\in I_g})(x). \] So, we obtain \begin{align*} {\rm I}\times {\rm III} &\lesssim \prod_{l\in I_1}M\chi_{Q_l}(x)^\frac{n+N+1}{mn} \prod_{g\in J_1} \left( \frac{\ell(Q_1)} { \ell(Q_1)+ |x-c_1| } \right)^\frac{m_g(n+N+1)}{m} M^{(G)} \circ T_{\sigma_{I_g}}(\{a_l\}_{l\in I_g})(x)\\ &\lesssim \prod_{l\in I_1}M\chi_{Q_l}(x)^\frac{n+N+1}{mn} \prod_{g\in J_1} M\chi_{Q_{l(g)}}(x)^\frac{m_g(n+N+1)}{mn} M^{(G)} \circ T_{\sigma_{I_g}}(\{a_l\}_{l\in I_g})(x), \end{align*} since $g\in J_1$ implies $|x-c_{l(g)}|\lesssim|x-c_1|$. For the second term II, we use Lemma \ref{lm-161112-1} and an argument similar to the one for I to get \[ {\rm II}= \prod_{g\in J_0} \left\| T_{\sigma_{I_g}}(\{a_l\}_{l\in I_g}) \chi_{B(x,t)} \right\|_{L^\infty} \lesssim \prod_{g\in J_0} \prod_{l\in I_g}M\chi_{Q_l}(x)^\frac{n+N+1}{mn}. \] For the last term IV, we recall that $g\notin J$ means $x\in Q_{l(g)}^{**}$, and hence \[ {\rm IV} \lesssim \prod_{g\notin J} M\chi_{Q_{l(g)}}(x)^\frac{m_g(n+N+1)}{mn} M^{(G)} \circ T_{\sigma_{I_g}}(\{a_l\}_{l\in I_g})(x). \] Combining the estimates for I, II, III and IV, we complete the proof of Lemma \ref{lm.5A02}. \end{proof} \begin{lemma} \label{lm.5C05} Assume $x \notin Q_1^{**}$ and $c_1 \in B(x,100n^2t)$. Then we have \begin{align*} \lefteqn{ \frac{\ell(Q_1)^{s+1}}{t^{n+s+1}} \int_{Q_1^{*}}|{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy }\\ & \lesssim M\chi_{Q_1}(x)^{\frac{n+s+1}{n}} \prod_{g=1}^G \inf_{z\in Q_1^*} \!\! \left[ M\chi_{Q_{l(g)}}(z)^\frac{m_g(n+N+1)}{mn} M^{(G)} \circ T_{\sigma_{I_g}}(\{a_l\}_{l\in I_g})(z) + \prod_{l\in I_g} M\chi_{Q_l}(z)^\frac{n+N+1}{mn} \right] \!. \end{align*} \end{lemma} \begin{lemma} \label{lm.5C06} Assume $x \notin Q_1^{**}$ and $c_1 \in B(x,100n^2t)$.
Then we have \begin{align*} \lefteqn{ \frac{1}{t^{n+s+1}} \int_{(Q_1^{*})^c}|y-c_1|^{s+1}|{\mathcal T}_\sigma(a_1,\ldots,a_m)(y)|\,dy }\\ &\quad \lesssim M\chi_{Q_1}(x)^{\frac{n+s+1}{n}} \prod_{g=1}^G \inf_{z\in Q_1^*} \!\! \left[ M\chi_{Q_{l(g)}}(z)^\frac{m_g(n+N+1)}{mn} M^{(G)} \circ T_{\sigma_{I_g}}(\{a_l\}_{l\in I_g})(z) + \prod_{l\in I_g} M\chi_{Q_l}(z)^\frac{n+N+1}{mn} \right]\! . \end{align*} \end{lemma} The proofs of Lemmas \ref{lm.5C05} and \ref{lm.5C06} are very similar to that of Lemma \ref{lm.5A00}, so we omit the details here. \subsection{The proof of Proposition \ref{LM.Key-31} for the mixed type} Employing the above lemmas, we complete the proof of \eqref{eq.PWEST}. For each $\vec{k}=(k_1,\ldots, k_m)$, recall that $R_{\vec{k}}$ denotes the cube of smallest side length among $Q_{1,k_1},\ldots, Q_{m,k_m}$, and write $Q_{l(g),\vec{k}(g)}$ for the cube of smallest side length among $\{Q_{l,k_l}\}_{l\in I_g}$. Combining Lemmas \ref{lm.5A00}--\ref{lm.5C06}, we have the following pointwise estimate: \begin{align*} &M_\phi \circ {\mathcal T}_\sigma(a_{1,k_1},\ldots,a_{m,k_m})(x) \lesssim \prod_{g=1}^G b_{g,\vec{k}(g)}(x) + M\chi_{R_{\vec{k}}^*}(x)^{\frac{n+s+1}{n}} \prod_{g=1}^G \inf_{z\in R_{\vec{k}}^*} b_{g,\vec{k}(g)}(z),\\ &b_{g,\vec{k}(g)}(x) = M\chi_{Q_{l(g),\vec{k}(g)}^*}(x)^\frac{m_g(n+N+1)}{mn} M^{(G)} \circ T_{\sigma_{I_g}}(\{a_{l,k_l}\}_{l\in I_g})(x) + \prod_{l\in I_g} M\chi_{Q_{l,k_l}}(x)^\frac{n+N+1}{mn} \end{align*} for all $x\in {\mathbb R}^n$. As in the proof for the product type, we let \begin{equation*} A= \Big\|\sum_{k_1,\ldots,k_m=1}^\infty \Big(\prod_{l=1}^m \lambda_{l,k_l}\Big) M_\phi \circ {\mathcal T}_\sigma(a_{1,k_1},\ldots,a_{m,k_m})\Big\|_{L^p}.
\end{equation*} In view of $(n+s+1)p/n>1$, using Lemma \ref{lm.2B00} and H\"{o}lder's inequality, we see \begin{align*} A &\lesssim \prod_{g=1}^G \bigg\| \sum_{k_l\ge 1:l\in I_g} \bigg(\prod_{l\in I_g}\lambda_{l,k_l}\bigg) \bigg( \Big(M\chi_{Q_{l(g),\vec{k}(g)}^*}\Big)^\frac{m_g(n+N+1)}{mn} M^{(G)} \circ T_{\sigma_{I_g}}(\{a_{l,k_l}\}_{l\in I_g}) \\ & \qquad \qquad \qquad \qquad \qquad \qquad\qquad \qquad \qquad\qquad \qquad \qquad+ \prod_{l\in I_g} (M\chi_{Q_{l,k_l}})^\frac{n+N+1}{mn} \bigg) \bigg\|_{L^{q_g}}\\ &\lesssim \prod_{g=1}^G \Big( A_{g,1}+ A_{g,2} \Big), \end{align*} where $q_g\in (0,\infty)$ is defined by $ 1/q_g = \sum_{l\in I_g} 1/p_l $ and \begin{align*} A_{g,1}&= \left\| \sum_{k_l\ge 1 : l\in I_g} \left(\prod_{l\in I_g}\lambda_{l,k_l}\right) \Big(M\chi_{Q_{l(g),\vec{k}(g)}^*}\Big)^\frac{m_g(n+N+1)}{mn} M^{(G)}\circ T_{\sigma_{I_g}}(\{a_{l,k_l}\}_{l\in I_g}) \right\|_{L^{q_g}},\\ A_{g,2}&= \left\| \sum_{k_l\ge 1 : l\in I_g} \prod_{l\in I_g}\lambda_{l,k_l} (M\chi_{Q_{l,k_l}})^\frac{n+N+1}{mn} \right\|_{L^{q_g}}. \end{align*} For $A_{g,2}$, it suffices to apply Lemma \ref{lm.2B00} to obtain the desired estimate. For $A_{g,1}$, we take $r$ large and employ Lemma \ref{lm-161031-1} to obtain \[ A_{g,1} \lesssim \left\| \sum_{k_l\ge 1 : l\in I_g} \left(\prod_{l\in I_g}\lambda_{l,k_l}\right) \chi_{Q_{l(g),\vec{k}(g)}^*} M^{(r)}\circ M^{(G)} [T_{\sigma_{I_g}}(\{a_{l,k_l}\}_{l\in I_g})] \right\|_{L^{q_g}}.
\] Then it follows from Lemma \ref{lm.2A00} and \eqref{eq.3A3} that \begin{align*} A_{g,1} &\lesssim \left\| \sum_{k_l\ge 1 : l\in I_g} \left(\prod_{l\in I_g}\lambda_{l,k_l}\right) \frac{\chi_{Q_{l(g),\vec{k}(g)}^*}}{|Q_{l(g),\vec{k}(g)}|^{1/q}} \left\| \chi_{Q_{l(g),\vec{k}(g)}^*} M^{(r)}\circ M^{(G)} [T_{\sigma_{I_g}}(\{a_{l,k_l}\}_{l\in I_g})] \right\|_{L^q} \right\|_{L^{q_g}}\\ &\lesssim \left\| \sum_{k_l\ge 1 : l\in I_g} \left(\prod_{l\in I_g}\lambda_{l,k_l}\right) \chi_{Q_{l(g),\vec{k}(g)}^*} \inf_{z\in Q_{l(g),\vec{k}(g)}^*} \prod_{l\in I_g} M\chi_{Q_{l,k_l}}(z)^\frac{n+N+1}{mn} \right\|_{L^{q_g}}\\ &\leq \prod_{l\in I_g} \left\| \sum_{k_l=1}^\infty \lambda_{l,k_l} (M\chi_{Q_{l,k_l}})^\frac{n+N+1}{mn} \right\|_{L^{p_l}} \lesssim \prod_{l\in I_g} \left\| \sum_{k_l=1}^\infty \lambda_{l,k_l} \chi_{Q_{l,k_l}} \right\|_{L^{p_l}}, \end{align*} which completes the proof of Proposition \ref{LM.Key-31} for operators of mixed type. \section{Examples} We provide some examples of operators of the kinds discussed in this paper. All of the following are symbols of trilinear operators acting on functions on the real line; thus they are functions on $\mathbb R^3= \mathbb R\times \mathbb R\times \mathbb R$. The symbol $$ \sigma_1(\xi_1,\xi_2,\xi_3) = \frac{ (\xi_1+\xi_2+\xi_3)^2 }{ \xi_1^2+\xi_2^2+\xi_3^2} $$ is associated with an operator of type~\eqref{eq.CalZygOPT}.
The symbol \begin{align*} \sigma_2 (\xi_1,\xi_2,\xi_3) &= \frac{\xi_1^3}{(1+\xi_1^2)^{\frac 32}} \frac{1}{(1+\xi_2^2+\xi_3^2)^{\frac 32}} + \frac{1}{(1+\xi_1^2)^{\frac 32}} \frac{\xi_2^3}{(1+\xi_2^2+\xi_3^2)^{\frac 32}} \\ &\quad +\frac{1}{(1+\xi_1^2)^{\frac 32}} \frac{\xi_3^3}{(1+\xi_2^2+\xi_3^2)^{\frac 32}} - \frac{3\xi_1} {(1+\xi_1^2)^{\frac 32}} \frac{\xi_2\xi_3}{(1+\xi_2^2+\xi_3^2)^{\frac 32}} \\ &= \frac{(\xi_1+\xi_2+\xi_3)(\xi_1^2+\xi_2^2+\xi_3^2-\xi_1\xi_2-\xi_2\xi_3-\xi_3\xi_1)} {(1+\xi_1^2)^{\frac 32}(1+\xi_2^2+\xi_3^2)^{\frac 32}} \end{align*} provides an example of an operator of type~\eqref{eq.CalZygOPT-3}. Note that each term is given as a product of a multiplier in $\xi_1$ and a multiplier in $(\xi_2,\xi_3)$. The symbol \begin{eqnarray*} \sigma_3 (\xi_1,\xi_2,\xi_3) &= & \frac{ \xi_1^4}{(1+\xi_1^2)^2} \frac{\xi_2^2}{(1+\xi_2^2)^2} \frac{\xi_3} { (1+\xi_3^2)^2} - \frac{ \xi_1^4}{(1+\xi_1^2)^2} \frac{\xi_2}{(1+\xi_2^2)^2} \frac{ \xi_3^2} { (1+\xi_3^2)^2} \\ && \quad- \frac{\xi_1^2}{(1+\xi_1^2)^2}\frac{ \xi_2^4}{(1+\xi_2^2)^2} \frac{\xi_3}{ (1+\xi_3^2)^2} + \frac{ \xi_1}{(1+\xi_1^2)^2} \frac{ \xi_2^4}{(1+\xi_2^2)^2} \frac{\xi_3^2} { (1+\xi_3^2)^2} \\ && \quad+ \frac{\xi_1^2}{(1+\xi_1^2)^2}\frac{\xi_2} {(1+\xi_2^2)^2} \frac{ \xi_3^4}{(1+\xi_3 ^2)^2} - \frac{\xi_1}{(1+\xi_1^2)^2}\frac{\xi_2^2} {(1+\xi_2^2)^2} \frac{ \xi_3^4}{(1+\xi_3 ^2)^2}\\ &=&- \frac{\xi_1\xi_2\xi_3(\xi_1-\xi_2)(\xi_2-\xi_3)(\xi_3-\xi_1) (\xi_1+\xi_2+\xi_3)}{(1+\xi_1^2)^2(1+\xi_2^2)^2(1+\xi_3^2)^2} \end{eqnarray*} yields an example of an operator of type~\eqref{eq.CalZygOPT-2}. The next example, \begin{eqnarray*} \sigma_4 (\xi_1,\xi_2,\xi_3) &= & \frac{\xi_1\xi_2}{\xi_1^2+\xi_2^2+(\xi_1+\xi_2)^2} \cdot 1 - \frac{\xi_1\xi_2}{\xi_1^2+\xi_2^2+\xi_3^2}, \end{eqnarray*} shows that the integer $G(\rho)$ varies with $\rho$.
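The factorizations of $\sigma_2$ and $\sigma_3$ displayed above are elementary polynomial identities; a quick symbolic check (a sketch using Python's sympy, not part of the paper's argument) confirms them, along with the vanishing of $\sigma_4$ on the plane $\xi_1+\xi_2+\xi_3=0$:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('xi1 xi2 xi3', real=True)

# Numerator of sigma_2 over the common denominator (1+xi1^2)^{3/2}(1+xi2^2+xi3^2)^{3/2}
num2 = x1**3 + x2**3 + x3**3 - 3*x1*x2*x3
fact2 = (x1 + x2 + x3)*(x1**2 + x2**2 + x3**2 - x1*x2 - x2*x3 - x3*x1)
assert sp.expand(num2 - fact2) == 0

# Numerator of sigma_3 over the common denominator (1+xi1^2)^2 (1+xi2^2)^2 (1+xi3^2)^2
num3 = (x1**4*x2**2*x3 - x1**4*x2*x3**2 - x1**2*x2**4*x3
        + x1*x2**4*x3**2 + x1**2*x2*x3**4 - x1*x2**2*x3**4)
fact3 = -x1*x2*x3*(x1 - x2)*(x2 - x3)*(x3 - x1)*(x1 + x2 + x3)
assert sp.expand(num3 - fact3) == 0

# sigma_4 vanishes when xi3 = -(xi1 + xi2): the two denominators then coincide.
sigma4 = x1*x2/(x1**2 + x2**2 + (x1 + x2)**2) - x1*x2/(x1**2 + x2**2 + x3**2)
assert sp.simplify(sigma4.subs(x3, -(x1 + x2))) == 0
```

Since each factorization exhibits the factor $\xi_1+\xi_2+\xi_3$, the same substitution shows that $\sigma_2$ and $\sigma_3$ also vanish on this plane.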
Notice that all four examples satisfy $$ \sigma_1(\xi_1,\xi_2,\xi_3)=\sigma_2(\xi_1,\xi_2,\xi_3) =\sigma_3(\xi_1,\xi_2,\xi_3)=\sigma_4(\xi_1,\xi_2,\xi_3)=0 $$ when $\xi_1+\xi_2+\xi_3=0$. This yields condition \eqref{eq.TmCan} when $s=0$; see \cite{Our-next-paper}. For the case of $s\in \mathbb Z^+$, we may consider $\sigma_1{}^{s+1},\sigma_2{}^{s+1},\sigma_3{}^{s+1}$, for example. \begin{thebibliography}{999} \bibitem{CLMS} Coifman R. R., Lions P. L., Meyer Y., Semmes S., \emph{Compensated compactness and Hardy spaces}, J. Math. Pures Appl. (9) \textbf{72} (1993), no. 3, 247--286. \bibitem{CM2} Coifman R. R., Meyer Y., \emph{Commutateurs d'int\'egrales singuli\`eres et op\'erateurs multilin\'eaires}, Ann. Inst. Fourier, Grenoble \textbf{28} (1978), 177--202. \bibitem{CM3} Coifman R. R., Meyer Y., \emph{Au-del\`a des op\'erateurs pseudo-diff\'erentiels}, Ast\'erisque \textbf{57}, 1978. \bibitem{CG} Coifman R. R., Grafakos L., \emph{Hardy space estimates for multilinear operators, I}, Revista Mat. Iberoam. \textbf{8} (1992), no. 1, 45--67. \bibitem{Do} Dobyinski S., \emph{Ondelettes, renormalisations du produit et applications \`a certains op\'erateurs bilin\'eaires}, Th\`ese de doctorat, Math\'ematiques, Univ. de Paris 9, France, 1992. \bibitem{FS} Fefferman C., Stein E. M., \emph{$H^p$ spaces of several variables}, Acta Math. \textbf{129} (1972), 137--193. \bibitem{G} Grafakos L., \emph{Hardy space estimates for multilinear operators, II}, Revista Mat. Iberoam. \textbf{8} (1992), no. 1, 69--92. \bibitem{Our-next-paper} In preparation. \bibitem{GLKT01} Grafakos L., Kalton N., \emph{Multilinear Calder\'on--Zygmund operators on Hardy spaces}, Collect. Math. \textbf{52} (2001), no. 2, 169--179. \bibitem{GLLXW00} Grafakos L., Li X., \emph{Bilinear operators on homogeneous groups}, J. Oper. Th. \textbf{44} (2000), no. 1, 63--90. \bibitem{GrafakosTorresAdvances} Grafakos L., Torres R. H., \emph{Multilinear Calder\'on--Zygmund theory}, Adv. in Math.
{\bf 165} (2002), no. 1, 124--164. \bibitem{HartLu} Hart J., Lu G., \emph{$H^p$ Estimates for Bilinear Square Functions and Singular Integrals}, Indiana Univ. Math. J. \textbf{65} (2016), 1567--1607. \bibitem{HuMe12} Hu G. E., Meng Y., \emph{Multilinear Calder\'on--Zygmund operator on products of Hardy spaces}, Acta Math. Sinica \textbf{28} (2012), no. 2, 281--294. \bibitem{HuangLiu13} Huang J., Liu Y., \emph{The boundedness of multilinear Calder\'{o}n--Zygmund operators on Hardy spaces}, Proc. Indian Acad. Sci. Math. Sci. \textbf{123} (2013), no. 3, 383--392. \bibitem{Nakai-Sawano-2014} Nakai E., Sawano Y., \emph{Orlicz--Hardy spaces and their duals}, Sci. China Math. {\bf 57} (2014), no. 5, 903--962. \bibitem{Sawano13} Sawano Y., \emph{Atomic decompositions of Hardy spaces with variable exponents and its application to bounded linear operators}, Integr. Eq. Oper. Theory {\bf 77} (2013), 123--148. \bibitem{SteinHA} Stein E. M., \emph{Harmonic Analysis: Real-Variable Methods, Orthogonality, and Oscillatory Integrals}, Princeton Mathematical Series 43, Princeton University Press, Princeton, NJ, 1993. \end{thebibliography} \end{document}
\begin{document} \title{Blowup behaviour for the nonlinear Klein--Gordon equation} \author{Rowan Killip} \address{University of California, Los Angeles} \author{Betsy Stovall} \address{University of California, Los Angeles} \author{Monica Visan} \address{University of California, Los Angeles} \begin{abstract} We analyze the blowup behaviour of solutions to the focusing nonlinear Klein--Gordon equation in spatial dimensions $d\geq 2$. We obtain upper bounds on the blowup rate, both globally in space and in light cones. The results are sharp in the conformal and sub-conformal cases. The argument relies on Lyapunov functionals derived from the dilation identity. We also prove that the critical Sobolev norm diverges near the blowup time. \end{abstract} \maketitle \tableofcontents \section{Introduction.} We consider the initial-value problem for the nonlinear Klein--Gordon equation \begin{equation} \label{E:eqn} \begin{cases} u_{tt} - \Delta u + m^2 u = |u|^p u\\ u(0) = u_0,\ u_t(0) = u_1 \end{cases} \end{equation} in spatial dimensions $d\geq 2$ with $0<p<\frac4{d-2}$ and $m\in[0,1]$. Note that when $m=0$, this reduces to the nonlinear wave equation. We will only consider real-valued solutions to \eqref{E:eqn}; the methods adapt easily to the complex-valued case. This equation is the natural Hamiltonian flow associated with the energy \begin{equation} \label{E:energy} E_m(u) = \int_{{\mathbb{R}}^d} \tfrac12|\nabla_{t,x}u(t,x)|^2 + \tfrac{m^2}2 |u(t,x)|^2 - \tfrac1{p+2}|u(t,x)|^{p+2}\, dx. \end{equation} Both the linear and nonlinear Klein--Gordon equations enjoy finite speed of propagation (indeed, they are fully Poincar\'e invariant). For this reason, many statements (including the definition of a solution) are most naturally formulated in light cones. 
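As a consistency check on \eqref{E:energy}, one can verify symbolically that \eqref{E:eqn} is the Euler--Lagrange equation of the associated Lagrangian density. The sketch below (Python/sympy, purely illustrative and not part of the paper's argument; it works in one space dimension and assumes $u>0$ so that $|u|^p u = u^{p+1}$) recovers $u_{tt} - u_{xx} + m^2 u = u^{p+1}$:

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t, x = sp.symbols('t x')
m, p = sp.symbols('m p', positive=True)
u = sp.Function('u')(t, x)

# Lagrangian density: kinetic term minus the potential terms of the energy density
L = (sp.Rational(1, 2)*u.diff(t)**2 - sp.Rational(1, 2)*u.diff(x)**2
     - m**2/2*u**2 + u**(p + 2)/(p + 2))

eq = euler_equations(L, u, [t, x])[0]   # an Eq(expression, 0)

# The Euler--Lagrange expression equals -(u_tt - u_xx + m^2 u - u^(p+1))
target = -(u.diff(t, 2) - u.diff(x, 2) + m**2*u - u**(p + 1))
assert sp.simplify(eq.lhs - target) == 0
```

The same computation in $d$ dimensions simply adds one divergence term per spatial variable.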
\begin{definition}[Light cones] \label{D:light cones} Given $(t_0,x_0) \in {\mathbb{R}}_+ \times {\mathbb{R}}^d$, we write $$ \Gamma(t_0,x_0) := \{(t,x) \in [0,t_0) \times {\mathbb{R}}^d : |x - x_0| \leq t_0-t\} $$ to denote the backwards light cone emanating from this point. Given $u : \Gamma(t_0,x_0) \to {\mathbb{R}}$, we write $ u \in C_{t,\textrm{loc}} L^2_x(\Gamma(t_0,x_0))$ if, when we extend $u$ to be zero outside of $\Gamma(t_0,x_0)$, the function $t \mapsto u(t)$ is continuous in $L^2_x({\mathbb{R}}^d)$ on compact subintervals of $[0,t_0)$. We define $$ C_{t,\textrm{loc}}H^1_x(\Gamma(t_0,x_0)):=\{ u : u\in C_{t,\textrm{loc}} L^2_x(\Gamma(t_0,x_0)) \text{ and } \nabla u \in C_{t,\textrm{loc}} L^2_x(\Gamma(t_0,x_0))\}. $$ Finally, we write $u \in L^{\frac{p(d+1)}2}_{\textrm{loc}}(\Gamma(t_0,x_0))$ if $u \in L^{\frac{p(d+1)}2}_{t,x}[(K \times {\mathbb{R}}^d)\cap \Gamma(t_0,x_0)]$ for every compact time interval $K \subset [0,t_0)$. \end{definition} The dispersion relation for the linear Klein--Gordon equation is $\omega^2 = m^2+|\xi|^2$. In view of this, we adopt the notation \begin{equation}\label{E:jp def} \jp{\xi}_m := \sqrt{m^2+|\xi|^2}, \end{equation} by analogy with the widely-used $\jp{\xi} := \sqrt{1+|\xi|^2}$. With this notation, the solution of the linear Klein--Gordon equation with $u(0)=u_0$ and $u_t(0)=u_1$ is given by $$ {\mathcal{S}}_m(t)(u_0,u_1) = \cos(t\langle\nabla\rangle_m)u_0 + \langle\nabla\rangle_m^{-1}\sin(t\langle\nabla\rangle_m)u_1. $$ \begin{definition}[Solution] \label{D:solution} Let $(t_0,x_0) \in {\mathbb{R}}_+ \times {\mathbb{R}}^d$.
A function $u:\Gamma(t_0,x_0) \to {\mathbb{R}}$ is a \emph{(strong) solution} to \eqref{E:eqn} if $(u,u_t) \in C_{t,\textrm{loc}}[H^1_x \times L^2_x](\Gamma(t_0,x_0))$, $u \in L^{\frac{p(d+1)}2}_\textrm{loc}(\Gamma(t_0,x_0))$, and $u$ satisfies the Duhamel formula \begin{equation} \label{E:Duhamel} u(t) = {\mathcal{S}}_m(t)(u_0,u_1) + \int_0^t \langle\nabla\rangle_m^{-1}\sin(\langle\nabla\rangle_m(t-s))|u(s)|^pu(s)\, ds \end{equation} on $\Gamma(t_0,x_0)$. \end{definition} Strong solutions are known to be unique (cf. Proposition~\ref{P:uniqueness}) and so any initial data $(u_0,u_1)\in H^1_x\times L^2_x$ leads to a unique \emph{maximally extended} solution defined on the union of all light cones upon which a strong solution exists. This region of spacetime is called the \emph{domain of maximal (forward) extension}. When this is not $[0, \infty)\times{\mathbb{R}}^d$, it must take the form $$ \{ (t,x) : x\in {\mathbb{R}}^d \text{ and } 0 \leq t < \sigma(x) \} $$ where $\sigma:{\mathbb{R}}^d\to(0,\infty)$ is a 1-Lipschitz function. The surface $$ \Sigma = \{(\sigma(x),x): x \in {\mathbb{R}}^d\} $$ is called the \emph{(forward) blowup surface}. A point $t_0=\sigma(x_0)$ on the blowup surface is called \emph{non-characteristic} if $\sigma(x)\geq \sigma(x_0) - (1-\varepsilon)|x-x_0|$ for all $x$ and some $\varepsilon>0$. Otherwise the point is called \emph{characteristic}. With these preliminaries out of the way, let us now describe both the principal results and the structure of the paper. In Section~\ref{S:local theory}, we review the local well-posedness theory for our equation. Almost nothing in this section is new. However, we include full proofs, both for the sake of completeness and because in several places the exact formulation we require is not that which appears in the literature. Additionally, local well-posedness results are intrinsically lower bounds on the rate of blowup.
In this way, the results of Section~\ref{S:local theory} provide a counterpart to the upper bounds proved elsewhere in the paper. Sections~\ref{S:non+blow},~\ref{S:CC}, and~\ref{S:critical blowup} culminate in a proof that the critical Sobolev norm diverges as the blowup time is approached, at least along a subsequence of times. Here criticality is defined with respect to scaling. The nonlinear Klein--Gordon equation does not have a scaling symmetry, except when $m=0$. In the massless case the scaling symmetry takes the form $u(t,x)\mapsto u^\lambda(t,x) := \lambda^{2/p} u(\lambda t,\lambda x)$ and the corresponding scale-invariant spaces are $u\in \dot H^{s_c}({\mathbb{R}}^d)$ and $u_t \in \dot H^{s_c-1}({\mathbb{R}}^d)$ with $s_c := \frac{d}2 - \frac2p$. As blowup is naturally associated with short length scales (i.e., $\lambda\to\infty$) and the coefficient of the mass term shrinks to zero under this scaling, it is natural to regard $s_c$ as the critical regularity for \eqref{E:eqn} even when $m > 0$. \begin{theorem}\label{T:I:sc} Consider initial data $u_0\in H^1({\mathbb{R}}^d)$ and $u_1\in L^2({\mathbb{R}}^d)$ with $d \geq 2$ and suppose $p = \frac4{d-2s_c}$ with $\frac1{2d} < s_c < 1$. If the maximal-lifespan solution $u$ to \eqref{E:eqn} blows up forward in time at $0<T_* < \infty$, then $$ \smash{\limsup_{t \uparrow T_*}} \bigl\{\|u(t)\|_{\dot H^{s_c}_x} + \|u_t(t)\|_{H^{s_c-1}_x}\bigr\} = \infty. $$ When $s_c < \frac12$ we additionally assume that $u_0$ and $u_1$ are spherically symmetric. \end{theorem} By virtue of scale invariance, the blowup time can be adjusted arbitrarily without altering the size of the critical norm. As this indicates, the link between blowup and the critical norm is subtle. We note also the example of the mass-critical nonlinear Schr\"odinger equation for which blowup does occur despite the fact that the $L^2_x$-norm is a constant of motion! 
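The scaling arithmetic behind $s_c$ can be checked mechanically. The snippet below (Python/sympy, illustrative only) verifies, for the scaling $u^\lambda$ introduced above, that both sides of the massless equation pick up the same power of $\lambda$, and that invariance of the $\dot H^s$-norm of the rescaled data forces $s = \frac d2 - \frac 2p$:

```python
import sympy as sp

d, p, s = sp.symbols('d p s', positive=True)

# Under u^lam(t,x) = lam^(2/p) u(lam t, lam x):
# u_tt and Delta u each gain lam^(2/p + 2); |u|^p u gains lam^((p+1)*2/p).
lhs_power = 2/p + 2
rhs_power = (p + 1)*2/p
assert sp.simplify(lhs_power - rhs_power) == 0

# || lam^(2/p) f(lam .) ||_{\dot H^s}^2 = lam^(4/p + 2s - d) || f ||_{\dot H^s}^2,
# so the norm is invariant exactly when the exponent vanishes:
sc = sp.solve(sp.Eq(4/p + 2*s - d, 0), s)[0]
assert sp.simplify(sc - (d/2 - 2/p)) == 0
```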
We were prompted to investigate the behaviour of the critical norm by a recent paper of Merle and Rapha\"el, \cite{MerleRaphaelAJM}, who considered the nonlinear Schr\"odinger equation with radial data and $0<s_c<1$. They showed that the critical norm must blow up as a power of $|\log(T^*-t)|$. In \cite{MerleRaphaelAJM} a rescaling argument is used to show that if a blowup solution were to exist for which the critical norm did not diverge, then one could produce a second solution that is global in at least one time direction and has energy $E(u)\leq 0$. The impossibility of this second type of solution is then deduced via the virial argument. Because the second solution has poor spatial decay, the virial argument needs to be spatially localized and the resulting error terms controlled; this relies heavily on the radial hypothesis. Note that the roles of the symmetry assumption in \cite{MerleRaphaelAJM} and here are of a completely different character. As discussed in Section~\ref{S:local theory}, our equation is \emph{ill-posed} in $H_x^{s_c}\times H_x^{s_c-1}$ when $s_c<\frac12$, unless one imposes the restriction to radial data. The additional restriction to $s_c>\frac1{2d}$ in Theorem~\ref{T:I:sc} stems from the fact that we do not know if the equation is well-posed in $ H_x^{s_c}\times H_x^{s_c-1}$ for spherically symmetric data; see Section~\ref{S:local theory} for more details. To prove Theorem~\ref{T:I:sc} we argue in a broadly similar manner, showing that failure of the theorem would result in a semi-global solution to the limiting (massless) equation with energy $E(u)\leq 0$ and then arguing that such a solution cannot exist. In our setting we are able to handle arbitrary (nonradial) solutions when $1/2\leq s_c<1$ by employing a concentration-compactness principle for an inequality of Gagliardo--Nirenberg type. The requisite concentration-compactness result is obtained in Section~\ref{S:CC}.
The impossibility of semi-global solutions with energy $E(u)\leq 0$ to the massless equation is proved in Section~\ref{S:non+blow}; this relies on a space-truncated virial argument. In Section~\ref{S:other} we examine how the $L^2({\mathbb{R}}^d)$ norms of $u$ and $\nabla_{t,x} u$ behave near the blowup time. The arguments are comparatively straightforward applications of the virial argument; no spatial truncation is required. In the remaining three sections of the paper, we study the behaviour of $u$ and $\nabla_{t,x} u$ near individual points on the blowup surface, rather than integrated over all space as in Section~\ref{S:other}. This is a much more delicate matter. For the case of the nonlinear wave equation, this has been treated in a series of papers by Antonini, Merle, and Zaag; see \cite{AntoniniMerle,MerleZaagAJM,MerleZaagMA,MerleZaagIMRN}. All of these papers restrict attention only to cases where $s_c\leq \frac12$. In this paper we will extend their results to the Klein--Gordon setting, considering also the regime $\frac12 < s_c < 1$. The analysis of the nonlinear wave equation relies centrally on certain monotonicity formulae. In the papers mentioned above, these appear via rather \emph{ad hoc} manipulations mimicking earlier work of Giga and Kohn on the nonlinear heat equation \cite{GigaKohn:Indiana,GigaKohn:CPAM}. In Section~\ref{S:Lyapunov} we uncover the physical origins of these identities, finding that they are in fact close cousins of the dilation identity. This in turn indicates the proper analogues in the Klein--Gordon setting. The identities are then used in Sections~\ref{S:superconf blow} and~\ref{S:conf blow} to control the behaviour of solutions inside light cones. Section~\ref{S:superconf blow} treats the case $s_c>\frac12$ while Section~\ref{S:conf blow} covers $s_c\leq \frac12$. 
The following theorem captures the flavour of our results in the two cases: \begin{theorem}\label{T:I:cone} Let $d \geq 2$, $m \in [0,1]$, and $p = \frac4{d-2s_c}$ with $0< s_c < 1$. If $u$ is a strong solution to \eqref{E:eqn} in the light cone $\{(t,x):0 < t \leq T, |x|<t\}$, then $u$ satisfies \begin{equation}\label{E:I:u} \int_{|x|< t/2} |u(t,x)|^2\, dx \lesssim \begin{cases} t^{\frac{pd}{p+4}} & :\text{ if } s_c > \tfrac 12 \\ t^{2s_c} &:\text{ if } s_c \leq \tfrac 12 \end{cases} \end{equation} and \begin{equation}\label{E:I:grad u} \int_{t_0}^{2t_0} \!\!\! \int_{|x|<t/2} |\nabla_{\!t,x}u(t,x)|^2 \, dx\, dt \lesssim \begin{cases} 1 & :\text{ if } s_c > \tfrac 12 \\ t_0^{2s_c-1} &:\text{ if } s_c \leq \tfrac 12. \end{cases} \end{equation} \end{theorem} Note that the powers appearing in the two cases in RHS\eqref{E:I:u} and RHS\eqref{E:I:grad u} agree when $s_c = \frac12$. Note also that this theorem is best understood by considering the time-reversed evolution, that is, for initial data given at time $T$ and with $(0,0)$ being a point on the backwards blowup surface. The local well-posedness results in Section~\ref{S:local theory} (cf. Corollary~\ref{C:lwp loc}) show that these upper bounds on the blowup rate are sharp when $0<s_c\leq\frac12$. One peculiarity of the case $s_c=\frac12$ is that the massless equation is invariant under the full conformal group of Minkowski spacetime. For this reason we term this the \emph{conformal} case. Correspondingly, $s_c<\frac12$ and $s_c>\frac12$ will be referred to as the \emph{sub-} and \emph{super-conformal} cases, respectively. In the conformal case, the Lagrangian action is invariant under scaling and so the dilation identity takes the form of a true conservation law (cf. \eqref{E:m dilation}), while at other regularities it does not. 
The key dichotomy between $s_c \leq \frac12$ and $s_c>\frac12$ in the context of Theorem~\ref{T:I:cone} is not dictated directly by conformality, but rather by the scaling of the basic monotonicity formulae we use. The dilation identity scales as $s_c=\frac12$. As a consequence, we are able to obtain stronger results in the conformal and sub-conformal cases than in the super-conformal regime. Indeed, in these cases \eqref{E:I:grad u} can be upgraded to a \emph{pointwise in time} statement; see Theorem~\ref{T:cone bound subc}. Systematic consideration of all conformal conservation laws (cf. Section~\ref{S:Lyapunov}) does not lead to any monotonicity formulae scaling at a higher regularity, thereby suggesting that the dilation identity is still the best tool for the job when $s_c>\frac12$. The simplified version of our estimates given in Theorem~\ref{T:I:cone} only controls the size of the solution in the middle portion of the light cone, $\{|x| < t/2\}$. In truth, the estimates we prove give weighted bounds in the whole light cone; however, the weight decays rather quickly near the boundary of the light cone. If the point $(0,0)$ is not a characteristic point of the blowup surface, then simple covering arguments using nearby light cones show that the same estimates hold for the whole region $\{|x| <t\}$. In fact, when $s_c\leq \frac12$ our results precisely coincide with those proved by Merle and Zaag for the corresponding nonlinear wave equation in \cite{MerleZaagAJM,MerleZaagMA,MerleZaagIMRN}. (As mentioned previously, their works do not consider the case $s_c >\frac12$.) It turns out that it is possible to repeat the Merle--Zaag arguments virtually verbatim in the Klein--Gordon setting (with $s_c\leq \frac12$); however, this is not what we have done. While we do follow their strategy rather closely, the implementation is quite different.
We use the usual spacetime coordinates, as opposed to the similarity coordinates used by Giga and Kohn and again by Antonini, Merle, and Zaag. This makes the geometry of light cones much more transparent, which we exploit to obtain stronger averaged Lyapunov functionals (cf. \eqref{E:L flux ineq 2}), as well as to simplify the key covering argument (compare our passage from \eqref{almost} to \eqref{tada} with subsection~3.2 in \cite{MerleZaagIMRN}). \noindent{\bf Acknowledgements} The first author was supported by NSF grant DMS-1001531. The second author was supported by an NSF Postdoctoral Fellowship. The third author was supported by NSF grant DMS-0901166 and a Sloan Foundation Fellowship. \subsection{Preliminaries} We will be regularly referring to the spacetime norms \begin{equation}\label{E:qr def} \bigl\| u \bigr\|_{L^q_t L^{\vphantom{q}r}_{\vphantom{t}x}({\mathbb{R}}\times{\mathbb{R}}^d)} := \biggl(\int_{{\mathbb{R}}} \biggl[ \int_{{\mathbb{R}}^d} |u(t,x)|^r\,dx \biggr]^{\frac qr} \;dt\biggr)^\frac1q, \end{equation} with the obvious changes if $q$ or $r$ is infinity. We write $X\lesssim Y$ to indicate that $X\leq C Y$ for some implicit constant $C$, which may vary from place to place. Let $\varphi(\xi)$ be a radial bump function supported in the ball $\{ \xi \in {\mathbb{R}}^d: |\xi| \leq \tfrac {11}{10} \}$ and equal to $1$ on the ball $\{ \xi \in {\mathbb{R}}^d: |\xi| \leq 1 \}$. For each number $N \in 2^{{\mathbb{Z}}}$, we define the Littlewood--Paley projections \begin{align*} \widehat{P_{\leq N} f}(\xi) &:= \varphi(\xi/N) \hat f(\xi)\\ \widehat{P_{> N} f}(\xi) &:= (1 - \varphi(\xi/N)) \hat f(\xi)\\ \widehat{P_N f}(\xi) &:= (\varphi(\xi/N) - \varphi(2\xi/N)) \hat f(\xi) \end{align*} and similarly $P_{<N}$ and $P_{\geq N}$.
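Note that for each $\xi \neq 0$ the multipliers defining $P_N$ telescope:
$$
\sum_{N \in 2^{{\mathbb{Z}}}} \bigl(\varphi(\xi/N) - \varphi(2\xi/N)\bigr) = \lim_{N \to \infty} \varphi(\xi/N) - \lim_{N \to 0} \varphi(2\xi/N) = 1 - 0 = 1,
$$
so that $f = \sum_{N \in 2^{{\mathbb{Z}}}} P_N f$, with convergence in $L^2_x({\mathbb{R}}^d)$.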
We will use basic properties of these operators, including \begin{lemma}[Bernstein estimates]\label{Bernstein} For $1 \leq p \leq q \leq \infty$, \begin{align*} \bigl\| |\nabla|^{\pm s} P_N f\bigr\|_{L^p({\mathbb{R}}^d)} &\sim N^{\pm s} \| P_N f \|_{L^p({\mathbb{R}}^d)},\\ \|P_{\leq N} f\|_{L^q({\mathbb{R}}^d)} &\lesssim N^{\frac{d}{p}-\frac{d}{q}} \|P_{\leq N} f\|_{L^p({\mathbb{R}}^d)},\\ \|P_N f\|_{L^q({\mathbb{R}}^d)} &\lesssim N^{\frac{d}{p}-\frac{d}{q}} \| P_N f\|_{L^p({\mathbb{R}}^d)}. \end{align*} \end{lemma} Next, we recall some well-known elliptic estimates; see, for example, \cite[Ch.~7]{GilTru} or \cite[Ch.~8]{LiebLoss}. \begin{lemma}[Sobolev inequality for domains] \label{L:Sob domain} Let $d\geq 2$ and let $\Omega\subset {\mathbb{R}}^d$ be a domain with the cone property. Then $$ \|f\|_{L^q(\Omega)}\lesssim \|f\|_{H^1(\Omega)} $$ provided that $2\leq q<\infty$ if $d=2$ and $2\leq q\leq \frac{2d}{d-2}$ if $d\geq 3$. The implicit constant depends only on $d, q,$ and $\Omega$. \end{lemma} We will only be applying this lemma to balls, to exteriors of balls, and to the whole of ${\mathbb{R}}^d$; in each of these cases, the cone property automatically holds. \begin{lemma}[Poincar\'e inequality on bounded domains]\label{L:Poincare} Let $d\geq 2$ and let $\Omega\subset {\mathbb{R}}^d$ be a bounded domain. Then for any $f\in H^1_0(\Omega)$, $$ \|f\|_{L^2(\Omega)}\lesssim |\Omega|^{1/d} \|\nabla f\|_{L^2(\Omega)}. $$ \end{lemma} \begin{lemma}[Gagliardo--Nirenberg]\label{L:GN} Let $d\geq 2$, and let $0<p<\infty$ if $d=2$ and $0<p\leq\frac4{d-2}$ if $d\geq 3$. Then $$ \|f\|_{L^{p+2}}^{p+2} \lesssim \|f\|_{L^{\frac{pd}2}}^p \|\nabla f\|_{L^2}^2 \quad\text{and}\quad \|f\|_{L^{\frac{pd}2}} \lesssim \|f\|_{L^2}^{1-s_c} \|\nabla f\|_{L^2}^{s_c}. $$ Moreover, for any $R>0$, $$ \|f\|_{L^{p+2}(|x|\geq R)}^{p+2} \lesssim \|f\|_{L^{\frac{pd}2}(|x|\geq R)}^p \|\nabla f\|_{L^2(|x|\geq R)}^2.
$$ \end{lemma} \section{Local theory} \label{S:local theory} \subsection{Strichartz inequalities} \begin{lemma}[Strichartz inequality] \label{L:Strichartz} Fix a value of $m \in [0,1]$. Let $u$ be a solution to the inhomogeneous equation \begin{equation} \label{E:ivp} u_{tt} - \Delta u + m^2 u = F \quad \text{with} \quad u(0) = u_0 \quad \text{and}\quad u_t(0) = u_1 \end{equation} on the time interval $[0,T]$. Let $0 \leq \gamma \leq 1$, $2 < q,\tilde{q} \leq \infty$, and $2 \leq r, \tilde r < \infty$ be exponents satisfying the scaling and admissibility conditions: $$ \frac1q + \frac{d}r = \frac{d}2 - \gamma = \frac1{\tilde q'} + \frac{d}{\tilde r'} - 2 \qquad \text{and} \qquad \frac1q + \frac{d-1}{2r},\ \frac1{\tilde q} + \frac{d-1}{2\tilde r} \leq \frac{d-1}4. $$ Then \begin{equation} \label{E:wave strichartz} \begin{aligned} &\|\langle\nabla\rangle_m^{\gamma} u\|_{C_t L^2_x([0,T] \times {\mathbb{R}}^d)} + \|\langle\nabla\rangle_m^{\gamma-1}u_t\|_{C_t L^2_x([0,T] \times {\mathbb{R}}^d)} + \|u\|_{L^q_tL^r_x([0,T] \times {\mathbb{R}}^d)} \\ & \qquad \qquad \lesssim \|\langle\nabla\rangle_m^{\gamma}u_0\|_{L^2_x} + \|\langle\nabla\rangle_m^{\gamma-1} u_1\|_{L^2_x} + \|F\|_{L^{\tilde q'}_tL^{\tilde r'}_x([0,T] \times {\mathbb{R}}^d)}. \end{aligned} \end{equation} Here the implicit constant is independent of $m$ and $T$, but may depend on $d$, $\gamma$, $q$, $\tilde q$, $r$, $\tilde r$. \end{lemma} \begin{remark} We will make particularly heavy use of the following special case: \begin{equation} \label{E:H12 strichartz} \begin{aligned} &\|\langle\nabla\rangle_m^{\frac12}u\|_{C_tL^2_x([0,T] \times {\mathbb{R}}^d)} + \|\langle\nabla\rangle_m^{-\frac12}u_t\|_{C_tL^2_x([0,T] \times {\mathbb{R}}^d)} + \|u\|_{L^{\frac{2(d+1)}{d-1}}_{t,x}([0,T] \times {\mathbb{R}}^d)} \\ &\qquad\qquad \lesssim \|\langle\nabla\rangle_m^{\frac12} u_0\|_{L^2_x} + \|\langle\nabla\rangle_m^{-\frac12}u_1\|_{L^2_x} + \|F\|_{L^{\frac{2(d+1)}{d+3}}_{t,x}([0,T] \times {\mathbb{R}}^d)}.
\end{aligned} \end{equation} \end{remark} \begin{proof} The principle of stationary phase may be used to show that the linear operator $e^{it\langle\nabla\rangle_m}$ satisfies the dispersive estimate \begin{equation} \label{E:dispersive} \begin{aligned} \|e^{it\langle\nabla\rangle_m}P_N f\|_{L^{\infty}_x} &\lesssim N^d\bigl(1+\tfrac{|t|N^2}{\jp{N}_m}\bigr)^{-\frac{d-1}2}\|P_Nf\|_{L^1_x} \\ & \lesssim |t|^{-\frac{d-1}2}\bigl\langle N \bigr\rangle_m^{\frac{d+1}2}\|P_Nf\|_{L^1_x} , \end{aligned} \end{equation} where the implicit constants are independent of $m$. Combining this with the fact that $e^{it\langle\nabla\rangle_m}$ is an isometry on $L^2_x$, standard arguments (cf.\ \cite{tao:keel} and the references therein) give the Strichartz estimates \eqref{E:wave strichartz}. \end{proof} Using the Strichartz estimate, one can easily derive the following standard result: \begin{prop}[Uniqueness in light cones, \cite{Kapitanski94}] \label{P:uniqueness} Let $u$ and $\tilde u$ be two strong solutions to \eqref{E:eqn} on the backwards light cone $\Gamma(T,x_0)$. If $(u(0),u_t(0)) = (\tilde u(0),\tilde u_t(0))$ on $\{x:|x-x_0| \leq T\}$, then $u=\tilde u$ throughout $\Gamma(T,x_0)$. \end{prop} \begin{proof} To keep formulae within margins, we introduce the following notation: If $I \subset [0,\infty)$ is an interval, then we set $\Gamma_{I} := (I \times {\mathbb{R}}^d) \cap \Gamma(T,x_0)$. Let $\eta>0$ be a small constant to be chosen shortly. By Definition~\ref{D:solution}, we may write $[0,T] = \bigcup_{j=1}^{\infty} I_j$ with $$ \|u\|_{L^{\frac{p(d+1)}2}_{t,x}(\Gamma_{I_j})} + \|\tilde u\|_{L^{\frac{p(d+1)}2}_{t,x}(\Gamma_{I_j})} \leq \eta. $$ Next, by Lemma~\ref{L:Sob domain}, for each $t\in I_j$ we have \begin{align*} \|u(t)\|_{L_x^{\frac{2(d+1)}{d-1}}(|x-x_0|<T-t)}& + \|\tilde u(t)\|_{L_x^{\frac{2(d+1)}{d-1}}(|x-x_0|<T-t)}\\ &\lesssim \|u(t)\|_{H^1_x(|x-x_0|<T-t)}+ \|\tilde u(t)\|_{H^1_x(|x-x_0|<T-t)}. 
\end{align*} Thus, by the definition of strong solution, $$ \|u\|_{L^{\frac{2(d+1)}{d-1}}_{t,x}(\Gamma_{I_j})} + \|\tilde u\|_{L^{\frac{2(d+1)}{d-1}}_{t,x}(\Gamma_{I_j})}<\infty. $$ We now consider the difference $w = u - \tilde u$. By \eqref{E:wave strichartz}, finite speed of propagation, and H\"older's inequality, \begin{align*} \|w\|_{L^{\frac{2(d+1)}{d-1}}_{t,x}(\Gamma_{I_1})} &\lesssim \||u|^pu - |\tilde u|^p \tilde u\|_{L^{\frac{2(d+1)}{d+3}}_{t,x}(\Gamma_{I_1})} \\ &\lesssim \|(|u|^p+ |\tilde u|^p)(u-\tilde u)\|_{L^{\frac{2(d+1)}{d+3}}_{t,x}(\Gamma_{I_1})} \\ &\lesssim \eta^p \|w\|_{L^{\frac{2(d+1)}{d-1}}_{t,x}(\Gamma_{I_1})}. \end{align*} Choosing $\eta$ sufficiently small, we deduce that $w \equiv 0$ on $\Gamma_{I_1}$. An inductive argument yields $w \equiv 0$ on $\Gamma(T,x_0)$. \end{proof} When $m=0$ and $u$ has zero initial data, Harmse \cite{Harmse} proved that better estimates than those in Lemma~\ref{L:Strichartz} are possible. \begin{lemma}[Strichartz estimates for inhomogeneous wave] \label{L:inhomog st} Let $u$ be a solution to the initial-value problem $$ u_{tt} - \Delta u = F \qquad \text{with} \qquad u(0) = u_t(0) = 0 $$ on the interval $[0,T]$. Then \begin{equation} \label{E:inhomog st} \|u\|_{L^r_{t,x}([0,T]\times {\mathbb{R}}^d)} \lesssim \|F\|_{L^{\tilde r'}_{t,x}([0,T] \times {\mathbb{R}}^d)}, \end{equation} whenever $r$ and $\tilde r$ satisfy the scaling and acceptability conditions $ \frac1r + \frac1{\tilde r} = \frac{d-1}{d+1}$ and $\frac1r, \frac1{\tilde r} < \frac{d-1}{2d}$. In particular, \eqref{E:inhomog st} holds with $r = \frac{p(d+1)}2$ and $\tilde r' = \frac{p(d+1)}{2(p+1)}$, provided that $\frac1{2d} < s_c < \frac12$. \end{lemma} Finally, in the radial case, Strichartz estimates at lower regularity than those given in Lemma~\ref{L:Strichartz} are possible. \begin{lemma}[Radial Strichartz estimates for homogeneous wave] \label{L:radial wave} Let $\frac1{2d} < s_c < \frac12$ and let $p = \frac4{d-2s_c}$.
If $u_0 \in \dot{H}^{s_c}_x({\mathbb{R}}^d)$ and $u_1 \in \dot{H}^{s_c-1}_x({\mathbb{R}}^d)$ are radial, then the solution to the linear wave equation satisfies \begin{equation} \label{E:radial wave} \|{\mathcal{S}}_0(t)(u_0,u_1)\|_{L^{\frac{p(d+1)}2}_{t,x}} \lesssim \|u_0\|_{\dot{H}^{s_c}_x} + \|u_1\|_{\dot{H}^{s_c-1}_x}. \end{equation} \end{lemma} This estimate is implicit in \cite{KlainermanMachedon93} as discussed in \cite{LindbladSogge95}. We note that the bound \eqref{E:radial wave} is true for larger values of $s_c$ without the assumption of radiality (cf.~\eqref{E:wave strichartz}). \subsection{Well-posedness results} Global well-posedness and scattering for NLW with small initial data in critical Sobolev spaces is due to Lindblad and Sogge, \cite{LindbladSogge95}, in the super-conformal case ($\frac12 < s_c < 1$) and to Strauss, \cite{Strauss81}, in the conformal case ($s_c=\frac12$). In the sub-conformal case, the Lorentz symmetry may be used to construct examples which show that such a small data theory is impossible (cf.\ \cite{LindbladSogge95}). This motivates the consideration of \emph{radial} initial data when $s_c<\frac12$. However, even in order to construct global solutions in $L^{\frac{p(d+1)}2}_{t,x}$ from small radial data, we need to impose the additional condition $s_c > \frac1{2d}$. This originates in the fact that $\frac{p(d+1)}2$ with $p=\frac{4d}{d^2-1}$ (i.e. $s_c=\frac1{2d}$) corresponds to an endpoint in the cone restriction conjecture. Global well-posedness and scattering for NLW for $\frac1{2d}<s_c<\frac12$ with small radial data in critical Sobolev spaces may again be found in \cite{LindbladSogge95}. We summarize below the small data theory that we will use. \begin{prop}[Critical small data theory for wave] \label{P:Hsc sdt} Fix $d \geq 2$ and $\frac1{2d} < s_c < 1$ and let $p = \frac4{d-2s_c}$. Let $(u_0, u_1)\in \dot H^{s_c}_x\times \dot H^{s_c-1}_x$ with $u_0$ and $u_1$ radial when $\frac1{2d}<s_c<\frac12$. 
There exists $\eta_0$ depending on $d$ and $p$ so that if $\eta \leq \eta_0$ and $$ \|{\mathcal{S}}_0(t)(u_0,u_1)\|_{L^{\frac{p(d+1)}2}_{t,x}([0,T] \times {\mathbb{R}}^d)} \leq \eta $$ and additionally $$ \||\nabla|^{s_c-\frac12} {\mathcal{S}}_0(t)(u_0,u_1)\|_{L^{\frac{2(d+1)}{d-1}}_{t,x}([0,T] \times {\mathbb{R}}^d)} \leq \eta \qquad \text{if} \qquad \tfrac12\leq s_c<1, $$ then there is a unique solution $u$ to \eqref{E:eqn} with $m=0$ on $[0,T] \times {\mathbb{R}}^d$. Moreover, $u$ satisfies \begin{align} \|u\|_{L^{\frac{p(d+1)}2}_{t,x}([0,T] \times {\mathbb{R}}^d)} &\lesssim \eta \label{E:small data Lp}\\ \|\nabla_{t,x} u\|_{L^{\infty}_t \dot H^{s_c-1}_x([0,T] \times {\mathbb{R}}^d)} &\lesssim \|(u_0,u_1)\|_{\dot H^{s_c}_x \times \dot H^{s_c-1}_x}.\label{E:small data Hsc} \end{align} If in addition $(u_0,u_1) \in \dot H^1_x \times L^2_x$, then \begin{equation} \label{E:small data H1} \|\nabla_{t,x} u\|_{L^\infty_tL^2_x([0,T] \times {\mathbb{R}}^d)} \lesssim \|(u_0,u_1)\|_{\dot H^1_x \times L^2_x}. \end{equation} In particular, if $\|(u_0,u_1)\|_{\dot H^{s_c}_x \times \dot H^{s_c-1}_x} \leq \eta_1$ for some constant $\eta_1 = \eta_1(d,p) > 0$, then $u$ is global and obeys the estimates above on ${\mathbb{R}} \times {\mathbb{R}}^d$. \end{prop} \begin{proof} We begin by reviewing the proof of \eqref{E:small data Lp} and \eqref{E:small data Hsc} from \cite{LindbladSogge95} and then give the additional arguments needed to establish \eqref{E:small data H1}. Throughout the proof, all spacetime norms will be on $[0,T]\times{\mathbb{R}}^d$, unless we specify otherwise. Using Lemma~\ref{L:Strichartz} or Lemma~\ref{L:radial wave} (depending on $s_c$), we have \begin{align*} \|{\mathcal{S}}_0(t)(u_0,u_1)\|_{L^{\frac{p(d+1)}2}_{t,x}} &+ \||\nabla|^{s_c-\frac12}{\mathcal{S}}_0(t)(u_0,u_1)\|_{L^{\frac{2(d+1)}{d-1}}_{t,x}} \lesssim \|(u_0,u_1)\|_{\dot H^{s_c}_x \times \dot H^{s_c-1}_x}.
\end{align*} Thus, without loss of generality we may assume that \begin{equation} \label{E:eta lesssim Hsc} \eta_0 \lesssim \min\{1, \|(u_0,u_1)\|_{\dot H^{s_c}_x \times \dot H^{s_c-1}_x}\}. \end{equation} Let $v \mapsto \Phi_0(v)$ be the mapping given by $$ \Phi_0(v)(t) := {\mathcal{S}}_0(t)(u_0,u_1) + \int_0^t |\nabla|^{-1}\sin(|\nabla|(t-s))(|v|^p v)(s)\, ds. $$ We will use a contraction mapping argument to prove that $\Phi_0$ has a fixed point. We start with the case when $\frac1{2d} < s_c < \frac12$. We define $$ B := \Bigl\{v \in L^{\frac{p(d+1)}2}_{t,x}([0,T] \times {\mathbb{R}}^d) :\, \|v\|_{L^{\frac{p(d+1)}2}_{t,x}([0,T] \times {\mathbb{R}}^d)} \leq 2 \eta\Bigr\}. $$ By Lemma~\ref{L:inhomog st} and our hypotheses, for $v\in B$ we have \begin{align*} \|\Phi_0(v)\|_{L^{\frac{p(d+1)}2}_{t,x}} &\leq \|{\mathcal{S}}_0(t)(u_0,u_1)\|_{L^{\frac{p(d+1)}2}_{t,x}} + C_{d,p}\||v|^pv\|_{L^{\frac{p(d+1)}{2(p+1)}}_{t,x}} \\ &\leq \eta + C_{d,p}\|v\|_{L^{\frac{p(d+1)}2}_{t,x}}^{p+1}\\ &\leq \eta + C_{d,p}\eta^{p+1}. \end{align*} Thus for $\eta$ sufficiently small, $\Phi_0$ maps $B$ into itself. To see that $\Phi_0$ is a contraction on $B$ with respect to the metric given by $d(u,v) = \|u-v\|_{L^{\frac{p(d+1)}2}_{t,x}}$, we apply Lemma~\ref{L:inhomog st} and use H\"older's inequality: \begin{align*} \|\Phi_0(u) - \Phi_0(v)\|_{L^{\frac{p(d+1)}2}_{t,x}} &\leq C_{d,p} \||u|^pu - |v|^p v\|_{L^{\frac{p(d+1)}{2(p+1)}}_{t,x}} \\ &\leq C_{d,p} \||u|+|v|\|^p_{L^{\frac{p(d+1)}2}_{t,x}}\|u-v\|_{L^{\frac{p(d+1)}2}_{t,x}}\\ &\leq C_{d,p} \eta^p \|u-v\|_{L^{\frac{p(d+1)}2}_{t,x}}. \end{align*} Thus for $\eta$ sufficiently small, $\Phi_0$ is a contraction on $B$ and so it has a fixed point $u$ in $B$.
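We pause to verify that the exponent pair used in the two applications of Lemma~\ref{L:inhomog st} above is indeed admissible there: for $r = \frac{p(d+1)}2$ and $\tilde r' = \frac{p(d+1)}{2(p+1)}$,
$$
\frac1r + \frac1{\tilde r} = \frac{2}{p(d+1)} + 1 - \frac{2(p+1)}{p(d+1)} = 1 - \frac2{d+1} = \frac{d-1}{d+1},
$$
while the acceptability condition $\frac1r < \frac{d-1}{2d}$ amounts to $p > \frac{4d}{d^2-1}$, that is, to $s_c > \frac1{2d}$; the condition $\frac1{\tilde r} < \frac{d-1}{2d}$ is likewise easily checked in the range $\frac1{2d} < s_c < \frac12$.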
Moreover, by Lemma~\ref{L:Strichartz}, the fixed point satisfies \begin{align*} \|\nabla_{t,x} u\|_{C_t \dot H^{s_c-1}_x} &\lesssim \|(u_0,u_1)\|_{\dot H^{s_c}_x \times \dot H^{s_c-1}_x} + \||u|^pu\|_{L^{\frac{p(d+1)}{2(p+1)}}_{t,x}} \\ &\lesssim \|(u_0,u_1)\|_{\dot H^{s_c}_x \times \dot H^{s_c-1}_x} + \eta^{p+1}. \end{align*} The bound \eqref{E:small data Hsc} then follows from \eqref{E:eta lesssim Hsc}. We now turn to the case when $\frac12 \leq s_c < 1$. We define \begin{align*} B := \Bigl\{v \in L^{\frac{p(d+1)}2}_{t,x}([0,T] \times {\mathbb{R}}^d) :\, \|v\|_{L^{\frac{p(d+1)}2}_{t,x}} + \||\nabla|^{s_c-\frac12} v\|_{L^{\frac{2(d+1)}{d-1}}_{t,x}}\leq 3\eta\Bigr\}. \end{align*} By Lemma~\ref{L:Strichartz} and the fractional chain rule, for $v\in B$ we obtain \begin{align*} \|\Phi_0(v)\|_{L^{\frac{p(d+1)}2}_{t,x}} &+\||\nabla|^{s_c-\frac12}\Phi_0(v)\|_{L^{\frac{2(d+1)}{d-1}}_{t,x}} \\ &\leq \|{\mathcal{S}}_0(t)(u_0,u_1)\|_{L^{\frac{p(d+1)}2}_{t,x}} +\||\nabla|^{s_c-\frac12}{\mathcal{S}}_0(t)(u_0,u_1)\|_{L^{\frac{2(d+1)}{d-1}}_{t,x}} \\ &\quad+ C_{d,p}\||\nabla|^{s_c-\frac12}(|v|^pv)\|_{L^{\frac{2(d+1)}{d+3}}_{t,x}} \\ &\leq 2\eta + C_{d,p}\||\nabla|^{s_c-\frac12}v\|_{L^{\frac{2(d+1)}{d-1}}_{t,x}}\|v\|_{L^{\frac{p(d+1)}2}_{t,x}}^p \\ &\leq 2\eta + C_{d,p}\eta^{p+1}. \end{align*} Thus if $\eta$ is sufficiently small, $\Phi_0$ maps $B$ into itself. Since in this case we are considering $\frac4{d-1}\leq p<\frac4{d-2}$, by H\"older's inequality we have $$ \sup_{x_0 \in {\mathbb{R}}^d} \|v\|_{L^{\frac{2(d+1)}{d-1}}_{t,x}(\Gamma(T,x_0))} \lesssim T^{s_c-\frac12}\sup_{x_0 \in {\mathbb{R}}^d}\|v\|_{L^{\frac{p(d+1)}2}_{t,x}(\Gamma(T,x_0))} \lesssim T^{s_c-\frac12}\eta $$ for any $v \in B$. Thus we may consider the metric on $B$ given by $$ d(u,v) = \sup_{x_0 \in {\mathbb{R}}^d} \|u-v\|_{L^{\frac{2(d+1)}{d-1}}_{t,x}(\Gamma(T,x_0))}. 
$$ By \eqref{E:H12 strichartz}, finite speed of propagation, and H\"older's inequality, \begin{align*} \|\Phi_0(u)-\Phi_0(v)\|_{L^{\frac{2(d+1)}{d-1}}_{t,x}(\Gamma(T,x_0))} &\lesssim \||u|^pu -|v|^pv\|_{L^{\frac{2(d+1)}{d+3}}_{t,x}(\Gamma(T,x_0))}\\ &\lesssim \|(|u|+|v|)^p\|_{L^{\frac{d+1}2}_{t,x}(\Gamma(T,x_0))} \|u-v\|_{L^{\frac{2(d+1)}{d-1}}_{t,x}(\Gamma(T,x_0))} \\ &\lesssim \eta^p\|u-v\|_{L^{\frac{2(d+1)}{d-1}}_{t,x}(\Gamma(T,x_0))}, \end{align*} for each $x_0 \in {\mathbb{R}}^d$. Therefore if $\eta$ is sufficiently small, $\Phi_0$ is a contraction on $B$ with respect to the metric $d$ and so $\Phi_0$ has a fixed point $u$ in $B$. By Lemma~\ref{L:Strichartz} and the fractional chain rule, $u$ satisfies \begin{align*} \|\nabla_{t,x}u\|_{C_t\dot H^{s_c-1}_x} &\lesssim \|(u_0,u_1)\|_{\dot H^{s_c}_x \times \dot H^{s_c-1}_x} + \||\nabla|^{s_c-\frac12}(|u|^pu)\|_{L^{\frac{2(d+1)}{d+3}}_{t,x}} \\ &\lesssim \|(u_0,u_1)\|_{\dot H^{s_c}_x \times \dot H^{s_c-1}_x} + \eta^{p+1}. \end{align*} The bound \eqref{E:small data Hsc} then follows from \eqref{E:eta lesssim Hsc}. Finally, we turn to the persistence of regularity statement \eqref{E:small data H1}. For $\frac1{2d} < s_c < 1$, by \eqref{E:H12 strichartz}, the fractional chain rule, and \eqref{E:small data Lp}, \begin{align*} \|\nabla_{t,x} u\|_{C_tL^2_x} + \| |\nabla|^{\frac12}u \|_{L^{\frac{2(d+1)}{d-1}}_{t,x}} &\lesssim \|(u_0,u_1)\|_{\dot H^1_x \times L^2_x} + \| |\nabla|^{\frac12} (|u|^pu) \|_{L^{\frac{2(d+1)}{d+3}}_{t,x}} \\ &\lesssim \|(u_0,u_1)\|_{\dot H^1_x \times L^2_x} + \| |\nabla|^{\frac12}u \|_{L^{\frac{2(d+1)}{d-1}}_{t,x}}\|u\|_{L^{\frac{p(d+1)}2}_{t,x}}^p \\ &\lesssim \|(u_0,u_1)\|_{\dot H^1_x \times L^2_x} + \eta^p \| |\nabla|^{\frac12}u \|_{L^{\frac{2(d+1)}{d-1}}_{t,x}}. \end{align*} Therefore \eqref{E:small data H1} holds if $\eta$ is chosen sufficiently small. This completes the proof. 
\end{proof} The local existence theory for \eqref{E:eqn} in $H^1_x\times L_x^2$ is well-known (cf.\ \cite{GV:IHP89}, \cite{GV:MathZ85}); however, we need a result which is uniform in $m$. This is the topic of the following proposition: \begin{prop}[$H^1_x\times L_x^2$ local well-posedness for \eqref{E:eqn}] \label{P:lwp} Let $d\geq 2$, $m \in [0,1]$, $0 < s_c < 1$, and take $p = \frac4{d-2s_c}$. Let $u_0,u_1$ be initial data satisfying $$ \|u_0\|_{H^1_x} + \|u_1\|_{L^2_x} \leq M < \infty. $$ Then there exist $T\gtrsim_{p,d} \min\{ M^{-1/(1-s_c)}, M^{-p(d+1)/2}\}$, independent of $m$, and a unique solution $u$ to \eqref{E:eqn} on $[0,T]$. Furthermore, this solution satisfies \begin{align} \|\nabla_{t,x}u\|_{C_tL_x^2([0,T] \times {\mathbb{R}}^d)} &\lesssim M \label{E:H1lwp u in H1}\\ \|u\|_{C_tL^2_x([0,T] \times {\mathbb{R}}^d)} &\lesssim (1+T)M \label{E:H1lwp mass}\\ \|u\|_{L^{\frac{p(d+1)}2}_{t,x}([0,T] \times {\mathbb{R}}^d)} &\lesssim \max\{T^{1-s_c},T^{\frac2{p(d+1)}}\}M, \label{E:H1lwp u in Lq} \end{align} with the implicit constants depending only on $d,p$. \end{prop} \begin{remark} Well-posedness in $H^1_x\times L_x^2$ ensures that conservation of energy, which follows from an elementary computation for smooth, decaying solutions, continues to hold for solutions in $C_t(H^1_x\times L_x^2)$. \end{remark} \begin{proof}[Proof of Proposition~\ref{P:lwp}] Throughout the proof, all spacetime norms will be over the set $[0,T]\times{\mathbb{R}}^d$. We use the contraction mapping argument for the map $v \mapsto \Phi_m(v)$ given by $$ \Phi_m(v)(t) := {\mathcal{S}}_m(t)(u_0,u_1) + \int_0^t \langle\nabla\rangle_m^{-1}\sin\bigl((t-s)\langle\nabla\rangle_m\bigr)(|v|^pv)(s)\, ds. $$ Our analysis breaks into two cases.
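The dividing line between the two cases is dictated by the Sobolev embedding: as $p = \frac4{d-2s_c}$, for $d\geq 3$ we have
$$
s_c = \frac{d+2}{2d} \quad\Longleftrightarrow\quad p = \frac{4d}{(d-2)(d+1)} \quad\Longleftrightarrow\quad \frac{p(d+1)}2 = \frac{2d}{d-2},
$$
so that at or below this threshold the $L^{\frac{p(d+1)}2}_x$ norm is controlled directly by the $H^1_x$ norm; when $d=2$, only the second case occurs.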
If $\frac{d+2}{2d} < s_c < 1$ (that is, $\frac{4d}{(d-2)(d+1)}<p<\frac4{d-2}$), we define \begin{align*} B := \Bigl\{v \in C_tH^1_x([0,T] \times {\mathbb{R}}^d) : \, &\|\nabla_{t,x} v\|_{C_tL^2_x} + \|\langle\nabla\rangle_m^{\frac12}v\|_{L^{\frac{2(d+1)}{d-1}}_{t,x}} \\ &+ \|v\|_{L^{\frac{2p(d+1)}{p(d+1)(d-2) - 4d}}_tL^{\frac{p(d+1)}2}_x} \leq C_{d,p}M \Bigr\}. \end{align*} By H\"older's inequality, for $v \in B$ we have \begin{equation} \label{E:B subset Lq} \|v\|_{L^{\frac{p(d+1)}2}_{t,x}} \leq T^{1-s_c}\|v\|_{L^{\frac{2p(d+1)}{p(d+1)(d-2) - 4d}}_tL^{\frac{p(d+1)}2}_x}\lesssim_{d,p} T^{1-s_c} M. \end{equation} Using this together with Lemma~\ref{L:Strichartz} and the fractional chain rule, we obtain \begin{align*} &\|\nabla_{t,x}\Phi_m(v)\|_{C_tL^2_x} + \|\langle\nabla\rangle_m^{\frac12} \Phi_m(v)\|_{L^{\frac{2(d+1)}{d-1}}_{t,x}} + \|\Phi_m(v)\|_{L^{\frac{2p(d+1)}{p(d+1)(d-2) - 4d}}_tL^{\frac{p(d+1)}2}_x}\\ &\qquad \lesssim_{d,p} \|(u_0,u_1)\|_{H^1_x \times L^2_x} + \|\langle\nabla\rangle_m^{\frac12}(|v|^pv)\|_{L^{\frac{2(d+1)}{d+3}}_{t,x}} \\ &\qquad \lesssim_{d,p} M +\|\langle\nabla\rangle_m^{\frac12}v\|_{L^{\frac{2(d+1)}{d-1}}_{t,x}}\|v\|_{L^{\frac{p(d+1)}2}_{t,x}}^p\\ &\qquad \lesssim_{d,p} M + T^{(1-s_c)p}M^{p+1}. \end{align*} Thus for $T$ sufficiently small depending on $d$, $p$, and $M$, $\Phi_m$ maps $B$ into itself. Next, we will show that $\Phi_m$ is a contraction with respect to the metric given by \begin{equation}\label{ssdist} d(u,v) =\|u-v\|_{L^{\frac{2(d+1)}{d-1}}_{t,x}}. \end{equation} We start by noting that, by the fundamental theorem of calculus, \begin{equation} \label{E:B subset L2} \|v\|_{C_tL^2_x} \leq \|u_0\|_{L^2_x} + T\|v_t\|_{C_tL^2_x} \lesssim_{d,p} (1+T)M \end{equation} for any $v\in B$. Thus, by H\"older and Sobolev embedding, \begin{align*} \|v\|_{ L^{\frac{2(d+1)}{d-1}}_{t,x}} \lesssim T^{\frac{d-1}{2(d+1)}} \|v\|_{C_tH^1_x} \lesssim_{d,p} T^{\frac{d-1}{2(d+1)}} (1+T) M. 
\end{align*} To continue, we use \eqref{E:H12 strichartz}, \eqref{E:B subset Lq}, and H\"older's inequality to estimate \begin{align*} \|\Phi_m(u) - \Phi_m(v)\|_{L^{\frac{2(d+1)}{d-1}}_{t,x}} &\lesssim \||u|^p u - |v|^p v\|_{L^{\frac{2(d+1)}{d+3}}_{t,x}}\\ &\lesssim \|(|u|+|v|)^p\|_{L^{\frac{d+1}2}_{t,x}} \|u-v\|_{L^{\frac{2(d+1)}{d-1}}_{t,x}}\\ &\lesssim_{d,p} T^{(1-s_c)p} M^p\|u-v\|_{L^{\frac{2(d+1)}{d-1}}_{t,x}} \end{align*} for any $u,v \in B$. Therefore for $T$ sufficiently small, $\Phi_m$ is a contraction and consequently has a fixed point $u \in B$. Claims \eqref{E:H1lwp mass} and \eqref{E:H1lwp u in Lq} follow from \eqref{E:B subset L2} and \eqref{E:B subset Lq}, respectively. It remains to treat the case $0 < s_c \leq \frac{d+2}{2d}$. This time, we define $$ B := \Bigl\{v \in C_t H^1_x([0,T] \times {\mathbb{R}}^d) : \|\nabla_{t,x} v\|_{C_tL^2_x} +\|\langle\nabla\rangle_m^{\frac12}v\|_{L^{\frac{2(d+1)}{d-1}}_{t,x}} \leq C_{d,p}M\Bigr\}. $$ In this case $2 < \frac{p(d+1)}2 \leq \frac{2d}{d-2}$ and so, using H\"older, Sobolev embedding, and \eqref{E:B subset L2}, we obtain \begin{align} \label{E:B subset Lq small p} \|v\|_{L^{\frac{p(d+1)}2}_{t,x}} &\lesssim T^{\frac2{p(d+1)}}\|v\|_{C_t H^1_x} \lesssim_{d,p} T^{\frac2{p(d+1)}} (1+T) M \end{align} for any $v\in B$. Arguing as in the previous case, and substituting \eqref{E:B subset Lq small p} for \eqref{E:B subset Lq}, we derive \begin{align*} \|\nabla_{t,x}\Phi_m(v)\|_{C_tL^2_x} &+ \|\langle\nabla\rangle_m^{\frac12}\Phi_m(v)\|_{L^{\frac{2(d+1)}{d-1}}_{t,x}} \\ &\lesssim \|(u_0,u_1)\|_{H^1_x \times L^2_x} + \|\langle\nabla\rangle_m^{\frac12}v\|_{L^{\frac{2(d+1)}{d-1}}_{t,x}}\|v\|^p_{L^{\frac{p(d+1)}2}_{t,x}} \\ &\lesssim_{d,p} M + T^{\frac2{d+1}}(1+T)^{p}M^{p+1}. \end{align*} Thus if $T$ is sufficiently small, we again obtain that $\Phi_m$ maps $B$ into itself. The proof that $\Phi_m$ is a contraction on $B$ with respect to the metric \eqref{ssdist} is exactly the same as in the previous case. This completes the proof.
\end{proof} Because \eqref{E:eqn} obeys finite speed of propagation, we may localize in space in Proposition~\ref{P:lwp}. In this way we obtain the following scale-invariant lower bound on the blowup rate. \begin{corollary}[$H^1_\textrm{loc}\times L^2_\textrm{loc}$ local well-posedness and blowup criterion] \label{C:lwp loc} Let $d\geq 2$, $m \in [0,1]$, $0 < s_c < 1$, and take $p=\frac{4}{d-2s_c}$. Let $(u_0,u_1)$ be initial data satisfying $\|u_0\|_{H^1_\textrm{loc}} + \|u_1\|_{L^2_\textrm{loc}} \leq M < \infty$, where we define $$ \|f\|_{L^2_\textrm{loc}}^2 := \sup_{x_0 \in {\mathbb{R}}^d} \int_{|x-x_0|\leq 1} |f(x)|^2\, dx \qtq{and} \|f\|_{H^1_\textrm{loc}}^2 := \|f\|_{L^2_\textrm{loc}}^2 + \|\nabla f\|_{L^2_\textrm{loc}}^2. $$ Then there exist $T_0 > 0$, depending only on $d,p,M$, and a unique strong solution $u:[0,T_0] \times {\mathbb{R}}^d \to {\mathbb{R}}$ to \eqref{E:eqn} satisfying $$ \|u\|_{C_tH^1_\textrm{loc}([0,T_0] \times {\mathbb{R}}^d)} + \|u_t\|_{C_tL^2_\textrm{loc}([0,T_0] \times {\mathbb{R}}^d)} \lesssim M. $$ Furthermore, if $u$ blows up at time $0<T_* < \infty$ and $(T_*,x_0)$ lies on the forward-in-time blowup surface of $u$, then \begin{align} \label{E:H1loc lb} 1 \lesssim (T_*-t)^{-2s_c} \int_{|x-x_0| \leq T_*-t} |u(t,x)|^2 + (T_*-t)^2|\nabla_{t,x} u(t,x)|^2 \, dx \end{align} for all $t>0$ such that $T_*-1 \leq t < T_*$. The implicit constant depends only on $d,p$. \end{corollary} We note that in Theorem~$1'$ of \cite{MerleZaagAJM} and Theorem~1(ii) of \cite{MerleZaagMA}, Merle and Zaag claim that an alternative blowup criterion holds, namely, \begin{equation} \label{E:MerleZaag H1 loc blowup} 1 \lesssim \sup_{x_0 \in {\mathbb{R}}^d} (T_*-t)^{d-2s_c}\int_{|x-x_0|\leq 1}|u(t,x)|^2 + (T_*-t)^2|\nabla_{t,x}u(t,x)|^2\, dx. \end{equation} This lower bound is also repeated as equation (1.8) in \cite{MerleZaagIMRN}.
It seems that in all three instances this is essentially a typo, since \eqref{E:H1loc lb} is equivalent to the lower bound in self-similar variables given in Theorem~1 of \cite{MerleZaagAJM} and Theorem~1(i) of \cite{MerleZaagMA}, while \eqref{E:MerleZaag H1 loc blowup} is not. Moreover, the scaling argument that Merle and Zaag suggest to prove \eqref{E:MerleZaag H1 loc blowup} seems only to establish \eqref{E:H1loc lb}. It is not difficult to construct a counterexample to \eqref{E:MerleZaag H1 loc blowup}. For a general subluminal blowup surface $t=\sigma(x)$, Kichenassamy \cite{Kich} (see also \cite{KVblowup}) has constructed solutions with $u(t,x)\sim (\sigma(x)-t)^{-2/p}$. Whenever the blowup surface is smooth with non-zero curvature at the first blowup point, this is inconsistent with \eqref{E:MerleZaag H1 loc blowup}. We turn now to the proof of Corollary~\ref{C:lwp loc}. \begin{proof} Both conclusions may be proved by applying Proposition~\ref{P:lwp} to spatially truncated initial data and then invoking finite speed of propagation. Since the proof of the first conclusion is a little simpler, we give the details only for the second. We argue by contradiction. To this end, let $T_*-1 < t_0 < T_*$ and let $x_0\in {\mathbb{R}}^d$ be such that $(T_*,x_0)$ lies on the forward-in-time blowup surface of $u$. By space-translation invariance, we may assume $x_0=0$. Suppose that $u$ satisfies \begin{align}\label{uT*} (T_*-t_0)^{-2s_c} \int_{|x| \leq T_*-t_0} |u(t_0,x)|^2 + (T_*-t_0)^2|\nabla_{t,x} u(t_0,x)|^2 \, dx < \eta, \end{align} for some small constant $\eta$ to be determined in a moment. Now set $$ \tilde u(t,x):=(T_*-t_0)^{\frac2p}u(t_0+ (T_*-t_0) t, (T_*-t_0) x). $$ A simple computation shows that $\tilde u$ satisfies \eqref{E:eqn} with $m$ replaced by $\tilde m:=(T_*-t_0)m$. Moreover, as $(T_*,0)$ belongs to the forward-in-time blowup surface of $u$, we see that $(1,0)$ lies on the blowup surface of $\tilde u$. 
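For completeness, we record this computation: writing $\lambda := T_*-t_0$, so that $\tilde u(t,x) = \lambda^{\frac2p} u(t_0 + \lambda t, \lambda x)$, we find
$$
\tilde u_{tt} - \Delta \tilde u = \lambda^{\frac2p+2} \bigl(|u|^p u - m^2 u\bigr)(t_0+\lambda t, \lambda x) = |\tilde u|^p \tilde u - \tilde m^2 \tilde u,
$$
where we used that $\frac2p + 2 = \frac{2(p+1)}p$, so $\lambda^{\frac2p+2}|u|^pu = |\tilde u|^p\tilde u$, and that $\lambda^{\frac2p+2} m^2 u = (\lambda m)^2 \tilde u = \tilde m^2 \tilde u$.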
Changing variables, \eqref{uT*} becomes $$ \int_{|x| \leq 1} |\tilde u(0)|^2 + |\nabla_{t,x} \tilde u(0)|^2 \, dx < \eta. $$ Thus, by the dominated convergence theorem, there exists $0<\delta<\frac12$ such that \begin{equation} \label{E:u small 1} \int_{|x| \leq 1+\delta} |\tilde u(0,x)|^2 + |\nabla_{t,x} \tilde u(0,x)|^2 \, dx < 2\eta. \end{equation} To continue, we define $v_0$ and $v_1$ such that $v_0 = \tilde u(0)$ and $v_1 = \tilde u_t(0)$ on $|x| \leq 1+\delta$, $v_0 = v_1 = 0$ on $|x| \geq 2$, and \begin{equation}\label{E:tilde u small} \|v_0\|_{H^1_x}^2 + \|v_1\|_{L^2_x}^2 \lesssim \eta. \end{equation} (For example, one can take $v_0$ to be the harmonic function on the annulus $1+\delta<|x|<2$ that matches these boundary values.) For $\eta$ sufficiently small depending on $d,p, \delta$ (but not on $\tilde m\in[0,1]$), Proposition~\ref{P:lwp} yields a solution to the initial-value problem $$ v_{tt} - \Delta v + \tilde m^2 v = |v|^p v \qtq{with} v(0) = v_0 \qtq{and} v_t(0) = v_1 $$ on $[0,1+\delta]\times {\mathbb{R}}^d$. Thus by finite speed of propagation, $\tilde u$ may be extended to a strong solution on the backward light cone $\Gamma(1+\delta,0)$, which contradicts the fact that $(1,0)$ lies on the blowup surface of $\tilde u$. \end{proof} \begin{corollary}[$\dot H^1_x\times L_x^2$ blowup criterion] \label{C:lwp dot} Let $d\geq 3$, $m \in [0,1]$, $0 < s_c < 1$, and take $p=\frac4{d-2s_c}$. Given initial data $(u_0,u_1)\in \dot H^1_x\times L_x^2$, if the solution $u$ to \eqref{E:eqn} blows up at time $0<T_* < \infty$, then \begin{align} \label{E:H1dot lb} 1 \lesssim (T_*-t)^{2-2s_c}\int_{{\mathbb{R}}^d}|\nabla_{t,x}u(t,x)|^2\, dx \end{align} for all $t>0$ such that $T_*-1 \leq t < T_*$. The implicit constant depends only on $d,p$. \end{corollary} \begin{proof} Note that by H\"older's inequality and Sobolev embedding, $\dot H^1_x \subset H^1_\textrm{loc}$. The claim now follows from Corollary~\ref{C:lwp loc}. 
\end{proof} \begin{remark} Note that in two dimensions, $\dot H^1_x$ cannot be realized as a space of distributions. Moreover, it is not difficult to construct concrete initial data that show that \eqref{E:H1dot lb} does not hold: Given $R>1$, let $$ u_1:= 0 \qtq{and} u_0(x) := \begin{cases} 0 & : |x| > R \\[1mm] -\frac{\log(|x|/R)}{\sqrt{\log(R)}} & : 1 \leq |x| \leq R \\[2mm] \sqrt{\log(R)} & : |x| < 1.\end{cases} $$ Note that $\int |u_1|^2 + |\nabla u_0|^2\,dx \sim 1$. However, as $R\to\infty$ the corresponding solution blows up more and more quickly; indeed, by solving the ODE and using finite speed of propagation, we see that the lifespan cannot exceed $$ \int_{A}^\infty \bigl[ \tfrac{2}{p+2} \bigl(u^{p+2} - A^{p+2}\bigr)\bigr]^{-1/2} du \sim A^{-\frac{p}{2}} \quad\text{where $A:=\sqrt{\log(R)}$.} $$ \end{remark} The following result shows that blowup must be accompanied by the blowup of the $\dot H^1_x$ norm of $u$. In this sense, while non-quantitative, it is a strengthening of Corollary~\ref{C:lwp dot}, which only provides a lower bound on the full spacetime gradient of $u$. \begin{corollary}\label{C:ee blowup} Let $d \geq 2$, $m \in [0,1]$, and $0 < s_c < 1$. Set $p = \frac4{d-2s_c}$. Let $(u_0,u_1) \in H^1_x \times L^2_x$ and assume that the maximal-lifespan solution $u$ to \eqref{E:eqn} cannot be extended past time $0<T_* < \infty$. Then $$ \lim_{t \uparrow T_*} \|\nabla u(t)\|_{L^2_x} = \infty. $$ The same conclusion holds if the initial displacement $u_0$ merely belongs to $\dot H^1_x\cap \dot H^{s_c}_x$. \end{corollary} \begin{proof} By Proposition~\ref{P:lwp}, the solution $u$ can be extended as long as $(u,u_t)$ remains bounded in $H^1_x\times L^2_x$. Thus, as $u$ cannot be extended past time $0<T_*<\infty$, we must have \begin{align}\label{E:all to infty} \lim_{t \uparrow T_*} \bigl\{\|u(t)\|_{H^1_x}+ \|u_t(t)\|_{L_x^2} \bigr\}= \infty. \end{align} Let $\chi_R:=\phi(x/R)$ be a smooth cutoff to the ball of radius $R$.
Combining dominated convergence with Proposition~\ref{P:lwp}, we can find $R>10 T_*$ large enough so that initial data $\tilde u_0:=(1-\chi_R)u_0$ and $\tilde u_1:=(1-\chi_R)u_1$ lead to a solution $\tilde u$ up to time $2T_*$. Moreover, $\tilde u$ remains uniformly bounded in $H^1_x\times L_x^2$ on $[0,T_*]$ and so, by conservation of energy, the potential energy of $\tilde u$ is also bounded on $[0,T_*]$. By finite speed of propagation, the original solution $u$ agrees with $\tilde u$ on $[0, T_*]\times\{|x|\geq 3R\}$, and so inherits these bounds; in particular, \begin{align}\label{inherited bounds} \|u\|_{L_t^\infty L_x^2([0, T_*]\times\{|x|\geq 3R\})} + \|u\|_{L_t^\infty L_x^{p+2}([0, T_*]\times\{|x|\geq 3R\})}<\infty. \end{align} When $m>0$, conservation of energy and \eqref{E:all to infty} dictate \begin{align}\label{pe blowup} \lim_{t \uparrow T_*} \|u(t)\|_{L^{p+2}_x}^{p+2} =\infty. \end{align} Combining this with \eqref{inherited bounds}, we conclude \begin{align}\label{pe blowup'} \lim_{t \uparrow T_*} \|\chi_{6R}u(t)\|_{L^{p+2}_x}^{p+2} =\infty. \end{align} This conclusion also holds when $m=0$. Indeed, the argument above is applicable to all sequences $t_n \uparrow T_*$ for which $\|\nabla_{t,x} u(t_n)\|_{L^2_x}\to \infty$. On sequences where $\|\nabla_{t,x} u(t_n)\|_{L^2_x}$ is bounded, \eqref{E:all to infty} guarantees $\| u(t_n)\|_{L^2_x} \to \infty$. However, in this case \eqref{pe blowup'} follows by using the $L_t^\infty L_x^2$ estimate in \eqref{inherited bounds} and H\"older's inequality. Using the Gagliardo--Nirenberg inequality followed by Lemma~\ref{L:Poincare} (on the ball $\{|x|\leq 24R\}$), we obtain \begin{align*} \|\chi_{6R}u(t)\|_{L^{p+2}_x}^{p+2}&\lesssim \|\chi_{6R}u(t)\|_{L^2_x}^{p(1-s_c)} \|\nabla [\chi_{6R}u(t)]\|_{L^2_x}^{\frac{pd}2}\\ &\lesssim R^{p(1-s_c)}\|\nabla [\chi_{6R}u(t)]\|_{L_x^2}^{p+2}\\ &\lesssim R^{p(1-s_c)} \bigl[\|\nabla u(t)\|_{L_x^2}+ R^{-1}\|u(t)\|_{L^2(|x|\geq 3R)} \bigr]^{p+2}.
\end{align*} Combining this with \eqref{inherited bounds} and \eqref{pe blowup'}, we derive the claim. This completes the proof of the corollary for data $(u_0, u_1)\in H^1_x\times L^2_x$. For initial data $u_0\in \dot H^1_x\cap \dot H^{s_c}_x$ we observe that for $R>10T_*$ sufficiently large, the restriction of $u_0$ to the region $|x|\geq R$ is small in $H^1_{\textrm{loc}}$. Thus by Proposition~\ref{P:lwp}, the solution extends to the region $[0, 2T_*]\times\{|x|\geq 3R\}$ in the class $H^1_{\textrm{loc}}\times L^2_{\textrm{loc}}$. Now consider the solution $v$ with initial data $\chi_{10R} u_0$ and $\chi_{10R} u_1$. By applying the first version of this corollary, we see that $\|\nabla v(t)\|_{L^2_x}$ diverges as $t\to T_*$. Moreover, by the bounds on $v$ where $|x|\geq 3R$, this divergence must occur in the region $|x|\leq 6R$ where finite speed of propagation guarantees $v\equiv u$. \end{proof} We will also need a stability result for the nonlinear wave equation in the weak topology. \begin{lemma} \label{L:weak stability} Let $d\geq2$, $0 < s_c < 1$, and set $p = \frac4{d-2s_c}$. 
Let $\{m_n\}_{n\geq 1},\{\lambda_n\}_{n\geq 1} \subset [0,1]$ be sequences with $\lim m_n = \lim \lambda_n = 0$ and let $\{(u_0^{(n)}, u_1^{(n)})\}_{n\geq 1}$ be a sequence of initial data such that \begin{equation} \label{E:un to u at 0} \nabla u^{(n)}_0 \rightharpoonup \nabla u_0 \qtq{and} u^{(n)}_1 \rightharpoonup u_1 \qtq{weakly in $L^2_x$.} \end{equation} Assume also that the sequence $\{u^{(n)}\}_{n\geq 1}$ of solutions to $$ \partial_{tt} u^{(n)} - \Delta u^{(n)} + m_n^2 u^{(n)} = |u^{(n)}|^p u^{(n)} \qtq{on} [0,T) \times {\mathbb{R}}^d $$ with initial data $(u_0^{(n)}, u_1^{(n)})$ at time $t=0$ satisfies \begin{equation} \label{E:bounded sequences} \begin{aligned} \|\nabla_{t,x}u^{(n)}\|_{C_tL^2_x([0,T) \times {\mathbb{R}}^d)} &+ \||\nabla|^{s_c} u^{(n)}\|_{C_tL^2_x([0,T) \times {\mathbb{R}}^d)} \\ &+ \|\langle\nabla\rangle_{\lambda_n}^{s_c-1}u^{(n)}_t\|_{C_tL^2_x([0,T) \times {\mathbb{R}}^d)} \leq M<\infty. \end{aligned} \end{equation} Then the initial-value problem \begin{equation} \label{E:nlw} u_{tt} - \Delta u = |u|^p u \qtq{with} u(0) = u_0 \qtq{and} \partial_t u(0) = u_1 \end{equation} has a strong solution on $[0,T) \times {\mathbb{R}}^d$ with $(u,u_t) \in C_t[\dot H^1_x \times L^2_x] \cap C_t[\dot H^{s_c}_x \times \dot H^{s_c-1}_x]$. Furthermore, for each $t \in [0,T)$, we have \begin{equation} \label{E:un to u at t} (u^{(n)}(t),\partial_t u^{(n)}(t)) \rightharpoonup (u(t),\partial_t u(t)) \qtq{weakly in $\dot H^1_x \times L^2_x$.} \end{equation} Consequently, the limiting solution $u$ obeys the bounds \begin{equation} \label{E:stability bounds} \|\nabla_{t,x} u\|_{C_tL^2_x([0,T)\times {\mathbb{R}}^d)} + \|\nabla_{t,x} u\|_{C_t \dot H^{s_c-1}_x([0,T) \times {\mathbb{R}}^d)} \leq M. \end{equation} \end{lemma} \begin{proof} We will prove that there exists a time $0<t_0 < \min\{1,T\}$, depending only on $M$, such that $u$ exists up to time $t_0$ and satisfies \eqref{E:un to u at t} for each $t \in [0,t_0]$.
The lemma follows from this and a simple iterative argument. We will construct the solution $u$ on $[0, t_0]\times{\mathbb{R}}^d$ by gluing together solutions defined in light cones. To this end, let $x_0 \in {\mathbb{R}}^d$ and let $\phi$ be a smooth cutoff such that $\phi(x) = 1$ for $|x| \leq 1$ and $\phi(x) = 0$ for $|x| \geq 2$. For $j=0,1$ we define the initial data \begin{gather*} u_{x_0,j}(x) := \phi(x-x_0)u_j(x) \qtq{and} u^{(n)}_{x_0,j}(x) := \phi(x-x_0)u^{(n)}_j(x). \end{gather*} By \eqref{E:un to u at 0}, \begin{equation} \label{E:wk conv after loc} (u^{(n)}_{x_0,0},u^{(n)}_{x_0,1}) \rightharpoonup (u_{x_0,0},u_{x_0,1}) \qtq{weakly in $\dot H^1_x \times L^2_x$} \end{equation} and so, by Rellich--Kondrashov, \begin{equation} \label{E:H12 conv after loc} (u^{(n)}_{x_0,0},u^{(n)}_{x_0,1}) \to (u_{x_0,0},u_{x_0,1}) \qtq{in $\dot H^{\frac12}_x \times \dot H^{-\frac12}_x$.} \end{equation} Furthermore, by \eqref{E:bounded sequences}, \begin{equation} \label{E:H1 after loc} \|u_{x_0,0}\|_{H^1_x} + \|u_{x_0,1}\|_{L^2_x} + \|u^{(n)}_{x_0,0}\|_{H^1_x} + \|u^{(n)}_{x_0,1}\|_{L^2_x} \lesssim M. 
\end{equation} Thus, by Proposition~\ref{P:lwp} there exists a time $0<t_0< 1$, depending only on $M$, such that the solutions $u_{x_0}$ and $u_{x_0}^{(n)}$ to $$ \begin{cases} \partial_{tt}u_{x_0} - \Delta u_{x_0} = |u_{x_0}|^pu_{x_0} \\ u_{x_0}(0) = u_{x_0,0}, \quad \partial_t u_{x_0}(0) = u_{x_0,1} \end{cases} \quad \begin{cases} \partial_{tt}u^{(n)}_{x_0} - \Delta u^{(n)}_{x_0} + m_n^2 u^{(n)}_{x_0} = |u^{(n)}_{x_0}|^pu_{x_0}^{(n)} \\ u^{(n)}_{x_0}(0) = u^{(n)}_{x_0,0}, \quad \partial_t u^{(n)}_{x_0}(0) = u^{(n)}_{x_0,1} \end{cases} $$ exist on $[0,t_0]\times {\mathbb{R}}^d$ and satisfy the bounds \begin{align} \|\nabla_{t,x} u_{x_0}\|_{C_tL^2_x([0,t_0] \times {\mathbb{R}}^d)} + \|\nabla_{t,x} u_{x_0}^{(n)}\|_{C_tL^2_x([0,t_0] \times {\mathbb{R}}^d)} &\lesssim M \label{E:ux0 H1}\\ \|u_{x_0}\|_{L^{\frac{p(d+1)}2}_{t,x}([0,t_0] \times {\mathbb{R}}^d)} + \|u^{(n)}_{x_0}\|_{L^{\frac{p(d+1)}2}_{t,x}([0,t_0] \times {\mathbb{R}}^d)} &< \eta, \label{E:uunx0 Lq} \end{align} for a small constant $\eta>0$ to be determined in a moment. Throughout the remainder of the proof, all spacetime norms will be on $[0,t_0]\times {\mathbb{R}}^d$. By H\"older and Sobolev embedding, \begin{align*} \|u_{x_0}\|_{L^{\frac{2(d+1)}{d-1}}_{t,x}} + \|u^{(n)}_{x_0}\|_{L^{\frac{2(d+1)}{d-1}}_{t,x}} &\lesssim t_0^{\frac{d-1}{2(d+1)}}\Bigl(\|u_{x_0}\|_{C_tH^1_x} + \|u^{(n)}_{x_0}\|_{C_tH^1_x}\Bigr)\lesssim M.
\end{align*} By Lemma~\ref{L:Strichartz} (applied with $m = 0$), \eqref{E:H12 conv after loc}, \eqref{E:ux0 H1}, \eqref{E:uunx0 Lq}, and H\"older's inequality, \begin{align*} \|\nabla_{t,x}(u^{(n)}_{x_0} - & u_{x_0})\|_{C_t\dot H^{-\frac12}_x} + \|u^{(n)}_{x_0} - u_{x_0}\|_{L^{\frac{2(d+1)}{d-1}}_{t,x}}\\ &\lesssim \|(u^{(n)}_{x_0,0}-u_{x_0,0}, u^{(n)}_{x_0,1} - u_{x_0,1})\|_{\dot H^{\frac12}_x \times \dot H^{-\frac12}_x} + \|m_n^2u^{(n)}_{x_0}\|_{L^1_t\dot H^{-\frac12}_x} \\ &\qquad \qquad + \||u^{(n)}_{x_0}|^pu^{(n)}_{x_0} - |u_{x_0}|^pu_{x_0}\|_{L^{\frac{2(d+1)}{d+3}}_{t,x}}\\ & \lesssim \varepsilon_n + m_n^2 t_0 \|u^{(n)}_{x_0}\|_{C_t\dot H^1_x} + \eta^p\|u^{(n)}_{x_0} - u_{x_0}\|_{L^{\frac{2(d+1)}{d-1}}_{t,x}}\\ &\lesssim \varepsilon_n + m_n^2M + \eta^p\|u^{(n)}_{x_0} - u_{x_0}\|_{L^{\frac{2(d+1)}{d-1}}_{t,x}}, \end{align*} for some sequence $\varepsilon_n \to 0$. Thus, for $\eta$ sufficiently small, \begin{equation} \label{E:un to u locally} \|\nabla_{t,x}(u^{(n)}_{x_0} - u_{x_0})\|_{C_t\dot H^{-\frac12}_x} + \|u^{(n)}_{x_0} - u_{x_0}\|_{L^{\frac{2(d+1)}{d-1}}_{t,x}} \to 0 \qtq{as} n\to \infty. \end{equation} To conclude, by finite speed of propagation, the solution $u$ to \eqref{E:eqn} with $m=0$ exists and equals $u_{x_0}$ on $\Gamma(1,x_0)\cap([0,t_0] \times {\mathbb{R}}^d)$ for each $x_0 \in {\mathbb{R}}^d$. In particular, $u$ is a strong solution on $[0,t_0] \times {\mathbb{R}}^d$. Additionally, $u^{(n)} = u^{(n)}_{x_0}$ on $\Gamma(1,x_0) \cap ([0,t_0] \times {\mathbb{R}}^d)$ for each $x_0 \in {\mathbb{R}}^d$. Thus by \eqref{E:bounded sequences} and \eqref{E:un to u locally}, we obtain \eqref{E:un to u at t} for all $0\leq t\leq t_0$. This completes the proof. \end{proof} \section{Blowup of non-positive energy solutions of NLW}\label{S:non+blow} In this section we prove that non-positive energy solutions to the nonlinear wave equation blow up in finite time.
More precisely, we have \begin{theorem}[Non-positive energy implies blowup] \label{T:nonpos blowup} Let $\frac1{2d} < s_c < 1$ and set $p = \frac4{d-2s_c}$. Let $(u_0,u_1) \in (\dot{H}^1_x \times L^2_x) \cap (\dot{H}^{s_c}_x \times \dot{H}^{s_c-1}_x)$ be initial data, with $u_0$ and $u_1$ radial if $s_c < \frac12$. Assume that $(u_0,u_1)$ is not identically zero and satisfies $$ E(u_0,u_1) = \int_{{\mathbb{R}}^d} \tfrac12|\nabla u_0(x)|^2 + \tfrac12|u_1(x)|^2 - \tfrac1{p+2}|u_0(x)|^{p+2}\, dx \leq 0. $$ Then the maximal-lifespan solution to the initial-value problem $$ u_{tt} - \Delta u = |u|^p u \qtq{with} u(0) = u_0 \qtq{and} u_t(0) = u_1 $$ blows up both forward and backward in finite time. \end{theorem} We note that for solutions to \eqref{E:eqn} with $m>0$, finiteness of the energy dictates that $\|u_0\|_{L^2_x}$ also be finite. Indeed, because of the estimate (cf. Lemma~\ref{L:GN}) \begin{align}\label{E:GN} \|f\|_{p+2}^{p+2}\leq C_{\opt} \|f\|_{\frac{pd}2}^p \|\nabla f\|_2^2\lesssim \|f\|_{\dot H^{s_c}_x}^p \|f\|_{\dot H^1_x}^2, \end{align} the natural energy space for initial data is $(u_0,u_1) \in H^1_x \times L^2_x$. The constant $C_{\opt}$ depends only on $d,p$ and denotes the optimal constant in the first inequality in \eqref{E:GN}. In this case (that is, $u_0\in H^1_x$), the theorem is well-known and may be obtained by taking two time derivatives of $\|u(t)\|_{L_x^2}$ (cf. \cite{Glassey73}, \cite{PayneSattinger}, and the proof of Proposition~\ref{P:finite mass}). To handle data for which $\|u_0\|_{L_x^2}$ need not be finite, we adapt this argument by introducing a spatial truncation and then dealing with the resulting error terms. The larger class of initial data considered here is dictated by the needs of Section~\ref{S:critical blowup}. 
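For orientation, we record the formal convexity computation that drives the finite-mass case: if $u_0 \in L^2_x$ and $I(t) := \|u(t)\|_{L^2_x}^2$, then $I'(t) = 2\int u(t)u_t(t)\, dx$ and, using the equation and the conservation of energy,
$$
I''(t) = (p+4)\int_{{\mathbb{R}}^d} |u_t(t,x)|^2\, dx + p\int_{{\mathbb{R}}^d} |\nabla u(t,x)|^2\, dx - 2(p+2)E(u_0,u_1).
$$
When $E(u_0,u_1) \leq 0$, Cauchy--Schwarz then yields $I(t)I''(t) \geq \tfrac{p+4}4 I'(t)^2$, that is, $t \mapsto I(t)^{-p/4}$ is concave, from which finite-time blowup is deduced in \cite{Glassey73,PayneSattinger}. The truncated quantity $M(t)$ introduced below plays the role of $I(t)$ when $\|u_0\|_{L^2_x} = \infty$; note that \eqref{E:M''} reduces to the identity above when $\phi \equiv 1$.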
\begin{proof} We define $$ \phi(x) := \begin{cases} 1, &\qtq{if} |x| \leq 1;\\ 1-2(|x|-1)^2, &\qtq{if} 1 \leq |x| \leq \tfrac32; \\ 2(2-|x|)^2, &\qtq{if} \tfrac32 \leq |x| \leq 2; \\ 0,&\qtq{if} |x| \geq 2, \end{cases} $$ and $\phi^c := 1-\phi$. As $(u_0,u_1) \in (\dot{H}^1_x \times L^2_x) \cap (\dot H^{s_c}_x \times \dot H^{s_c-1}_x)$, there exists a radius $R > 0$ such that $$ \|\phi^c\bigl(\tfrac{\cdot}{R/4}\bigr) u_0\|_{\dot H^{s_c}_x} + \|\phi^c\bigl(\tfrac{\cdot}{R/4}\bigr)u_1\|_{\dot H^{s_c-1}_x} \leq \eta_1, $$ where $\eta_1$ is the small data threshold from Proposition~\ref{P:Hsc sdt}. Let $v$ denote the global solution to \begin{equation*} v_{tt} - \Delta v = |v|^p v \qtq{with} v(0,x) = \phi^c\bigl(\tfrac{x}{R/4}\bigr)u_0(x) \qtq{and} v_t(0,x) = \phi^c\bigl(\tfrac{x}{R/4}\bigr) u_1(x). \end{equation*} By Proposition~\ref{P:Hsc sdt}, we may take $R$ sufficiently large that \begin{equation} \label{E:v small} \|\nabla_{t,x}v\|_{C_tL^2_x({\mathbb{R}} \times {\mathbb{R}}^d)} + \|v\|_{C_t L^{\frac{pd}2}_x({\mathbb{R}} \times {\mathbb{R}}^d)} < \eta \end{equation} for a small constant $\eta>0$ to be determined later. By finite speed of propagation, $u = v$ where $|x| \geq R/2+|t|$, and so \begin{equation} \label{E:u small} \|\nabla_{t,x}u\|_{C_tL^2_x(\{|x| \geq R/2+|t|\})} + \|u\|_{C_t L^{\frac{pd}2}_x(\{|x| \geq R/2+|t|\})} < \eta. \end{equation} Next, since $E(u) \leq 0$ and $u$ is not identically zero, by \eqref{E:GN} we must have \begin{align*} 0\geq E(u) &= \int_{{\mathbb{R}}^d} \tfrac12|u_t(t,x)|^2 + \tfrac12|\nabla u(t,x)|^2 - \tfrac1{p+2}|u(t,x)|^{p+2}\, dx \\ &\geq \int_{{\mathbb{R}}^d} \tfrac12|u_t(t,x)|^2 + \tfrac12\bigl(1-\tfrac2{p+2}C_{\opt}\|u(t)\|_{L^{\frac{pd}2}_x}^p\bigr)|\nabla u(t,x)|^2\, dx \end{align*} and so, $$ \|u(t)\|_{L^{\frac{pd}2}_x}^p \geq \tfrac{p+2}2 C_{\opt}^{-1}. 
$$ Thus, using \eqref{E:u small} and taking $\eta$ small enough so that $\eta^p<\frac{p+2}4 C_{\opt}^{-1}$, we obtain \begin{equation} \label{E:u pd/2 big} \|u(t)\|_{L^{\frac{pd}2}_x(|x| \leq R/2+|t|)}^p \geq \tfrac{p+2}4 C_{\opt}^{-1} \end{equation} for each $t$ in the lifespan of $u$. Finally, letting $\chi$ denote a smooth cutoff that is equal to one on the ball $\{|x|\leq R/2+|t|\}$ and vanishes when $|x|\geq R+2|t|$, and using Gagliardo--Nirenberg followed by Lemma~\ref{L:Poincare}, H\"older, and \eqref{E:u small}, we obtain \begin{align*} \|u(t)\|_{L^{\frac{pd}2}_x(|x|\leq \frac R2 + |t|)}& \lesssim \|\chi u(t)\|_{L_x^2}^{1-s_c} \|\nabla [\chi u(t)]\|_{L_x^2}^{s_c} \\ &\lesssim (R+|t|)^{1-s_c}\|\nabla [\chi u(t)]\|_{L_x^2}\\ &\lesssim (R+|t|)^{1-s_c} \Bigl[ \|\nabla u(t)\|_{L^2_x} + \|u(t)\|_{L^{\frac{pd}2}_x(|x|\geq \frac R2 + |t|)}\|\nabla \chi\|_{L_x^{\frac{2pd}{pd-4}}} \Bigr]\\ &\lesssim (R+|t|)^{1-s_c}\|\nabla u(t)\|_{L^2_x} +\eta. \end{align*} Thus, invoking \eqref{E:u pd/2 big} and taking $\eta$ sufficiently small, we obtain \begin{equation} \label{E:u H1 big} \|\nabla u(t)\|_{L^2_x}\gtrsim (R+|t|)^{-1+s_c}, \end{equation} throughout the lifetime of $u$. Now we are ready to define our truncated `mass'. We set $$ M(t) := \int_{{\mathbb{R}}^d} \phi\bigl(\tfrac{x}{R+|t|}\bigr) |u(t,x)|^2\, dx. $$ It is easy to see that this quantity is finite throughout the lifetime of $u$. By \eqref{E:u pd/2 big}, it never vanishes, that is, $M(t)>0$ for all $t$ in the lifespan of $u$. We differentiate. 
Routine computations reveal that for $t \geq 0$, we have \begin{align} \notag &M'(t) = \int_{{\mathbb{R}}^d} -\tfrac{x}{(R+|t|)^2}\cdot \nabla \phi\bigl(\tfrac{x}{R+|t|} \bigr) |u(t)|^2\, dx + \int_{{\mathbb{R}}^d} 2\phi\bigl(\tfrac{x}{R+|t|}\bigr)u(t)u_t(t)\, dx, \\ \label{E:M''} &\begin{aligned} M''(t) &= -2(p+2)E(u) + \int_{{\mathbb{R}}^d} 4 \phi\bigl(\tfrac{x}{R+|t|} \bigr) |u_t(t)|^2 + p|\nabla_{t,x}u(t)|^2\, dx \\ &\qquad + \int_{{\mathbb{R}}^d} 2\phi^c\bigl(\tfrac{x}{R+|t|} \bigr)\Bigl[|\nabla_{t,x}u(t)|^2- |u(t)|^{p+2}\Bigr]\, dx \\ &\qquad + \int_{{\mathbb{R}}^d} \left[\tfrac{2x}{(R+|t|)^3} \cdot \nabla \phi\bigl(\tfrac{x}{R+|t|} \bigr) + \tfrac{x_ix_j}{(R+|t|)^4}\partial_i \partial_j \phi\bigl(\tfrac{x}{R+|t|} \bigr)\right] |u(t)|^2\, dx \\ &\qquad - \int_{{\mathbb{R}}^d} \tfrac2{R+|t|} \nabla\phi\bigl(\tfrac{x}{R+|t|} \bigr) \cdot \bigl[\tfrac{2x}{R+|t|} u_t(t) + \nabla u(t)\bigr]u(t) \, dx. \end{aligned} \end{align} We will seek an upper bound for $|M'(t)|$ and a lower bound for $M''(t)$. We will make repeated use of the following bound, which is a simple consequence of H\"older's inequality followed by \eqref{E:u small} and \eqref{E:u H1 big}: \begin{align}\label{E:mass annulus} \int_{R+|t| \leq |x| \leq 2(R+|t|)}\frac{|u(t,x)|^2}{(R+|t|)^2}\, dx &\lesssim (R+|t|)^{-2(1-s_c)}\|u\|_{L^{\frac{pd}2}_x(|x|\geq R+|t|)}^2\notag\\ &\lesssim\eta^2 \|\nabla u(t)\|_{L^2_x}^2. 
\end{align} Using Cauchy--Schwarz and the inequality $|ab|^{\frac12} + |cd|^{\frac12} \leq (|a|+|c|)^{\frac12} (|b|+|d|)^{\frac12}$, we estimate \begin{align*} |M'(t)| &\leq \biggl(\int_{{\mathbb{R}}^d} \tfrac{\varepsilon}8 |\nabla \phi\bigl(\tfrac{x}{R+|t|}\bigr)|^2|u(t)|^2\, dx \biggr)^{\!\frac12} \biggl(\int_{R+|t| \leq |x| \leq 2(R+|t|)} \tfrac{8|x|^2}{\varepsilon (R+|t|)^4} |u(t)|^2\, dx \biggr)^{\!\frac12} \\ &\quad + \biggl(\int_{{\mathbb{R}}^d}(1-\varepsilon)\phi\bigl(\tfrac{x}{R+|t|}\bigr) |u(t)|^2\, dx\biggr)^{\!\frac12} \biggl( \int_{{\mathbb{R}}^d} \tfrac4{1-\varepsilon}\phi\bigl(\tfrac{x}{R+|t|}\bigr)|u_t(t)|^2\, dx \biggr)^{\!\frac12}\\ &\leq \biggl(\int_{{\mathbb{R}}^d}\left[\tfrac\varepsilon8 |\nabla \phi\bigl(\tfrac{x}{R+|t|}\bigr)|^2 + (1-\varepsilon) \phi\bigl(\tfrac{x}{R+|t|}\bigr) \right] |u(t)|^2\, dx \biggr)^{\!\frac12} \\ &\quad \times \biggl( \int_{R+|t| \leq |x| \leq 2(R+|t|)} \tfrac{32}{\varepsilon(R+|t|)^2} |u(t)|^2\, dx + \int_{{\mathbb{R}}^d} \tfrac4{1-\varepsilon}\phi\bigl(\tfrac{x}{R+|t|}\bigr)|u_t(t)|^2\, dx \biggr)^{\!\frac12} \end{align*} for any $0 < \varepsilon < 1$. Since $\tfrac18 |\nabla \phi|^2 \leq \phi$, the square of the first factor in the product above is bounded by $M(t)$. Using \eqref{E:mass annulus} to estimate the first term in the second factor gives \begin{equation} \label{E:bound M'} |M'(t)|^2 \leq M(t) \left( C_{\varepsilon} \eta^2 \|\nabla u(t)\|_{L^2_x}^2 + \int_{{\mathbb{R}}^d} \tfrac4{1-\varepsilon}\phi\bigl(\tfrac{x}{R+|t|}\bigr)|u_t(t,x)|^2\, dx \right). \end{equation} We now turn to $M''(t)$. By \eqref{E:mass annulus}, \begin{align} \label{E:mass annulus'} \Biggl| \int_{{\mathbb{R}}^d}\Bigl[ & \tfrac{2x}{(R+|t|)^3}\cdot\nabla \phi\bigl(\tfrac{x}{R+|t|}\bigr)+\tfrac{x_ix_j}{(R+|t|)^4}\partial_i\partial_j\phi\bigl(\tfrac{x}{R+|t|} \bigr)\Bigr] |u(t,x)|^2\, dx \Biggr|\notag\\ &\qquad\lesssim \int_{R+|t| \leq |x| \leq 2(R+|t|)}\tfrac1{(R+|t|)^2}|u(t,x)|^2\, dx \lesssim \eta^2 \|\nabla u(t)\|_{L^2_x}^2.
\end{align} Next, by Lemma~\ref{L:GN} and \eqref{E:u small}, \begin{align} \label{E:p+2 annulus} \int_{{\mathbb{R}}^d} \phi^c\bigl(\tfrac{x}{R+|t|}\bigr)|u(t,x)|^{p+2}\, dx &\leq \|u(t)\|_{L_x^{p+2}(|x|\geq R+|t|)}^{p+2}\notag\\ &\lesssim\|u(t)\|_{L^{\frac{pd}2}_x(|x|\geq R+|t|)}^p\|\nabla u(t)\|_{L^2_x(|x|\geq R+|t|)}^2\notag\\ &\lesssim \eta^p \|\nabla u(t)\|_{L^2_x}^2. \end{align} Finally, by Young's inequality and \eqref{E:mass annulus}, \begin{align} \label{E:u nabla u annulus} \Biggl|\int_{{\mathbb{R}}^d} \tfrac2{R+|t|} & \nabla\phi\bigl(\tfrac{x}{R+|t|}\bigr) \cdot \Bigl[\tfrac{2x}{R+|t|} u_t(t,x) + \nabla u(t,x)\Bigr]u(t,x) \, dx\Biggr|\notag\\ &\leq \int_{R+|t| \leq |x| \leq 2(R+|t|)} \tfrac{C_{\varepsilon}}{(R+|t|)^2} |u(t,x)|^2\, dx + \int_{{\mathbb{R}}^d} \varepsilon |\nabla_{t,x} u(t,x)|^2\, dx \notag\\ & \leq C_{\varepsilon} \eta^2 \|\nabla u(t)\|_{L^2_x}^2 + \varepsilon \|\nabla_{t,x} u(t)\|_{L^2_x}^2, \end{align} for any $\varepsilon > 0$. Now let $\delta > 0$. Combining $E(u)\leq 0$, \eqref{E:mass annulus'}, \eqref{E:p+2 annulus}, and \eqref{E:u nabla u annulus} with the identity \eqref{E:M''}, and choosing $\varepsilon=\varepsilon(\delta)$ and then $\eta=\eta(\varepsilon)$ sufficiently small, we obtain \begin{equation}\label{E:M'' lb 1} \begin{aligned} M''(t) &\geq \int_{{\mathbb{R}}^d} 4 \phi\bigl(\tfrac{x}{R+|t|}\bigr) |u_t(t,x)|^2 \, dx + (p-\delta) \int_{{\mathbb{R}}^d} |\nabla_{t,x} u(t,x)|^2\, dx \\ & \geq \int_{{\mathbb{R}}^d} \tfrac4{1-2\varepsilon} \phi\bigl(\tfrac{x}{R+|t|}\bigr) |u_t(t,x)|^2\, dx + (p-\delta) \int_{{\mathbb{R}}^d} |\nabla u(t,x)|^2\, dx. \end{aligned} \end{equation} Combining \eqref{E:bound M'} and \eqref{E:M'' lb 1}, we get \begin{equation} \label{E:M'MM''} |M'(t)|^2 \leq cM(t) M''(t), \end{equation} for some constant $0 < c < 1$. Using this we will prove that $u$ blows up in finite time, forward in time; finite-time blowup backward in time follows from time-reversal symmetry. We argue by contradiction. 
Suppose that the solution $u$ may be continued forward in time indefinitely. First, we consider the case when $M'(0) > 0$. By \eqref{E:u H1 big} and \eqref{E:M'' lb 1}, $M''(t) > 0$ for all $t$ in the lifespan of $u$, and so $M'(t)>0$ for all $t>0$. Thus by \eqref{E:M'MM''}, $$ \frac{M'(t)}{M(t)} \leq c\frac{M''(t)}{M'(t)}. $$ Integrating both sides, we see that for $t \geq 0$ we have $$ \log \left( \frac{M(t)}{M(0)} \right) \leq c \log \left(\frac{M'(t)}{M'(0)} \right), $$ that is, $$ M'(0)M(0)^{-1/c} \leq M'(t)M(t)^{-1/c}. $$ Integrating a second time and recalling that $M(t)> 0$, we obtain $$ t M'(0) M(0)^{-1/c} \leq \frac{c}{1-c}(M(0)^{1-1/c} - M(t)^{1-1/c}) \leq \frac{c}{1-c}M(0)^{1-1/c}. $$ But this is impossible since the left-hand side grows linearly as $t \to \infty$, while the right-hand side is bounded. Thus we must have $M'(0) \leq 0$. More generally, if we suppose that $M'(t_0) \geq 0$ for some $t_0$ in the lifespan of $u$, then $M'(t) > 0$ for all $t > t_0$, since $M''(t)>0$ for all $t$ in the lifespan of $u$ (by \eqref{E:u H1 big} and \eqref{E:M'' lb 1}). Arguing as above, we again obtain a contradiction to indefinite forward-in-time existence of $u$. Thus we may assume that $M'(t) < 0$ for as long as $u$ exists, and therefore \begin{equation} \label{E:mass bounded} 0 < M(t) < M(0) \qtq{for all} t > 0. \end{equation} Furthermore, as $M'(t)$ stays negative, we must have by \eqref{E:M'' lb 1} that $$ |M'(0)| \geq \int_0^{\infty} M''(t) \,dt \gtrsim \int_0^{\infty} \|\nabla_{t,x} u(t)\|_{L^2_x}^2\, dt. $$ From this, we see that along some sequence $t_n \to \infty$, we have \begin{equation} \label{E:H1 to 0} \|\nabla u(t_n)\|_{L^2_x}^2 \to 0.
\end{equation} Next, using Gagliardo--Nirenberg followed by H\"older, \eqref{E:u small}, \eqref{E:u H1 big}, and \eqref{E:mass bounded}, we obtain \begin{align*} \|&u(t_n)\|_{L^{\frac{pd}2}_x(|x| \leq R+t_n)}\\ &\lesssim \bigl\|\phi(\tfrac{\cdot}{R+t_n}) u(t_n)\bigr\|_{L^2_x}^{1-s_c}\bigl\|\nabla\bigl[\phi(\tfrac{\cdot}{R+t_n}) u(t_n)\bigr]\bigr\|_{L^2_x}^{s_c}\\ &\lesssim M(t_n)^{\frac{1-s_c}2} \Bigl[\|\nabla u(t_n)\|_{L^2_x} + \|u(t_n)\|_{L^{\frac{pd}2}_x(|x| \geq R+t_n)} \|\nabla \phi(\tfrac{\cdot}{R+t_n})\|_{L_x^{\frac{2pd}{pd-4}}}\Bigr]^{s_c}\\ &\lesssim M(0)^{\frac{1-s_c}2} \Bigl[\|\nabla u(t_n)\|_{L^2_x} + \eta (R+t_n)^{-1+s_c}\Bigr]^{s_c}\\ &\lesssim M(0)^{\frac{1-s_c}2} (1+\eta)^{s_c}\|\nabla u(t_n)\|_{L^2_x}^{s_c} \to 0 \quad \text{as}\quad n\to \infty, \end{align*} which contradicts \eqref{E:u pd/2 big}. This completes the proof of the theorem. \end{proof} \section{Concentration compactness for a Gagliardo--Nirenberg inequality}\label{S:CC} In this section we develop a concentration compactness principle associated to the following Gagliardo--Nirenberg inequality (cf. Lemma~\ref{L:GN}) \begin{align}\label{E:GN1} \|f\|_{L_x^{p+2}}^{p+2} \lesssim \|f\|_{\dot H^{s_c}_x}^p \|f\|_{\dot H^1_x}^2. \end{align} More precisely, we prove \begin{theorem}[Bubble decomposition for \eqref{E:GN1}] \label{T:bubble gn} Fix a dimension $d \geq 2$ and an exponent $\frac4d \leq p < \frac4{d-2}$. Let $s_c = \frac{d}2 - \frac2p$. Let $\{f_n\}_{n\geq 1}$ be a bounded sequence in $\dot H^1_x \cap \dot H^{s_c}_x$. 
Then there exist $J^* \in \{0,1,\ldots\} \cup \{\infty\}$, nonzero functions $\{\phi^j\}_{j=1}^{J^*} \subset \dot H^1_x \cap \dot H^{s_c}_x$, $\{x_n^j\}_{j=1}^{J^*} \subset {\mathbb{R}}^d$, and a subsequence of $\{f_n\}_{n\geq 1}$ such that along this subsequence \begin{equation} \label{E:bubble gn} f_n(x) = \sum_{j=1}^J \phi^j(x-x_n^j) + r_n^J(x) \qtq{for each} 0 \leq J < J^*+1 , \end{equation} with \begin{equation}\label{E:rnJ wkly to 0} r_n^J(\cdot + x_n^J) \rightharpoonup 0 \qtq{weakly in} \dot H^1_x \cap \dot H^{s_c}_x \qtq{for each} 1 \leq J < J^*+1. \end{equation} Furthermore, along this subsequence, the $r_n^J$ satisfy \begin{align} \label{E:rnJ to 0} \lim_{J \to J^*} \limsup_{n \to \infty} \|r_n^J\|_{L^{p+2}_x} = 0, \end{align} and for each $0 \leq J < J^*+1$, we have the following: \begin{align} \label{E:H1 decoup} &\lim_{n \to \infty} \Bigl\{\|f_n\|_{\dot{H}_x^1}^2 - \Bigl[\sum_{j=1}^J \|\phi^j\|_{\dot{H}_x^1}^2 + \|r_n^J\|_{\dot{H}_x^1}^2 \Bigr]\Bigr\} = 0 \\ \label{E:Hsc decoup} &\lim_{n \to \infty} \Bigl\{\|f_n\|_{\dot{H}_x^{s_c}}^2 - \Bigl[\sum_{j=1}^J \|\phi^j\|_{\dot{H}_x^{s_c}}^2 + \|r_n^J\|_{\dot{H}_x^{s_c}}^2 \Bigr]\Bigr\} = 0 \\ \label{E:p+2 decoup} &\lim_{n \to \infty} \Bigl\{\|f_n\|_{L^{p+2}_x}^{p+2} - \Bigl[\sum_{j=1}^J \|\phi^j\|_{L^{p+2}_x}^{p+2} + \|r_n^J\|_{L^{p+2}_x}^{p+2}\Bigr]\Bigr\} = 0. \end{align} Finally, for each $j' \neq j$, we have \begin{equation} \label{E:GN orthog} \lim_{n \to \infty} |x_n^j - x_n^{j'}| = \infty. \end{equation} \end{theorem} There are many results of this type, beginning with the work \cite{Solimini} of Solimini on Sobolev embedding. The argument below is modeled on the treatment in \cite{ClayNotes}, with the main step being the following inverse inequality. \begin{prop}[Inverse Gagliardo--Nirenberg inequality] \label{P:inverse gn} Fix a dimension $d \geq 2$ and an exponent $\frac4d \leq p < \frac4{d-2}$. Let $s_c = \frac{d}2 - \frac2p$. 
Let $\{f_n\}_{n\geq 1} \subset \dot{H}^1_x({\mathbb{R}}^d) \cap \dot{H}_x^{s_c}({\mathbb{R}}^d)$ and assume that $$ \limsup_{n \to \infty} \|f_n\|_{\dot H_x^{s_c}}^2 + \|f_n\|_{\dot H_x^1}^2 = M^2 \qtq{and} \liminf_{n \to \infty} \|f_n\|_{L^{p+2}_x} = \varepsilon > 0. $$ Then there exist $\phi \in \dot H_x^1({\mathbb{R}}^d) \cap \dot H_x^{s_c}({\mathbb{R}}^d)$ and $\{x_n\}_{n\geq 1} \subset {\mathbb{R}}^d$ such that after passing to a subsequence, we have the following: \begin{align} \label{E:fn wkly to phi} f_n(\cdot + x_n) \rightharpoonup \phi \qtq{weakly in} &\dot H^1_x \cap \dot H^{s_c}_x\\ \label{E:inv st H1} \lim_{n \to \infty} \Bigl\{ \|f_n\|_{\dot H^1_x}^2 - \|f_n - \phi(\cdot - x_n)\|_{\dot H^1_x}^2 \Bigr\} &= \|\phi\|_{\dot H^1_x}^2 \gtrsim \varepsilon^2\bigl(\frac{\varepsilon}{M}\bigr)^{\alpha_1} \\ \label{E:inv st Hsc} \lim_{n \to \infty} \Bigl\{ \|f_n\|_{\dot H^{s_c}_x}^2 - \|f_n - \phi(\cdot - x_n)\|_{\dot H^{s_c}_x}^2 \Bigr\} &= \|\phi\|_{\dot H^{s_c}_x}^2 \gtrsim \varepsilon^2\bigl(\frac{\varepsilon}{M}\bigr)^{\alpha_2} \\ \label{E:inv st p+2} \lim_{n \to \infty} \Bigl\{\|f_n\|_{L^{p+2}_x}^{p+2} - \|f_n - \phi(\cdot-x_n)\|_{L^{p+2}_x}^{p+2} \Bigr\} &= \|\phi\|_{L^{p+2}_x}^{p+2} \gtrsim \varepsilon^{p+2}\bigl(\frac{\varepsilon}{M}\bigr)^{\alpha_3}, \end{align} for certain positive constants $\alpha_1,\alpha_2,\alpha_3$ depending on $d,p$. \end{prop} \begin{proof}[Proof of Proposition \ref{P:inverse gn}] By passing to a subsequence, we may assume that \begin{equation} \label{E:control norms fn} \|f_n\|_{\dot H^1_x}^2 + \|f_n\|_{\dot H^{s_c}_x}^2 \leq 2M^2 \qtq{and} \|f_n\|_{L^{p+2}_x} \geq \tfrac\varepsilon2 \qtq{for all $n$.} \end{equation} Note that by \eqref{E:GN1} we must have $\varepsilon \lesssim M$. 
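Indeed, combining \eqref{E:GN1} with \eqref{E:control norms fn} gives
$$
\bigl(\tfrac{\varepsilon}{2}\bigr)^{p+2} \leq \|f_n\|_{L^{p+2}_x}^{p+2} \lesssim \|f_n\|_{\dot H^{s_c}_x}^{p}\, \|f_n\|_{\dot H^1_x}^{2} \leq \bigl(2M^2\bigr)^{\frac{p+2}2} \lesssim M^{p+2}.
$$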
Now by \eqref{E:GN1}, \eqref{E:control norms fn}, and Bernstein's inequality, for all dyadic frequencies $N$ we have \begin{align*} \|P_N f_n\|_{L^{p+2}_x}^{p+2} &\lesssim \|P_Nf_n\|_{\dot H^{s_c}_x}^p \|P_N f_n\|_{\dot H^1_x}^2 \lesssim \min\{N^{-p(1-s_c)}, N^{2(1-s_c)}\}M^{p+2}. \end{align*} Thus, if we define $$ K = C\bigl(M\varepsilon^{-1}\bigr)^{\frac{p+2}{2p(1-s_c)}} $$ for a suitably large constant $C$, we obtain $$ \|P_{\leq K^{-p}} f_n\|_{L^{p+2}_x}^{p+2} + \|P_{\geq K^2} f_n\|_{L^{p+2}_x}^{p+2} \ll \varepsilon^{p+2}. $$ Hence, by the pigeonhole principle there exist dyadic frequencies $N_n$ satisfying $K^{-p} \leq N_n \leq K^2$ such that \begin{equation} \label{E:lb fN p+2} (\log K)^{-1} \varepsilon \lesssim \|P_{N_n} f_n\|_{L^{p+2}_x}. \end{equation} By passing to a subsequence, we may assume that $N_n = N$ for all $n$. By H\"older's inequality, the Sobolev embedding $\dot H^{s_c}_x \hookrightarrow L^{\frac{pd}2}_x$, and \eqref{E:control norms fn}, $$ \|P_N f_n\|_{L^{p+2}_x} \leq \|P_N f_n\|_{L^{\infty}_x}^{1-\frac{pd}{2(p+2)}}\|P_Nf_n\|_{L^{\frac{pd}2}_x}^{\frac{pd}{2(p+2)}} \lesssim M^{\frac{pd}{2(p+2)}} \|P_N f_n\|_{L^{\infty}_x}^{1-\frac{pd}{2(p+2)}}, $$ and so by \eqref{E:lb fN p+2}, there exists a sequence $\{x_n\} \subset {\mathbb{R}}^d$ such that $$ \bigl(\varepsilon^2 M^{-1-\frac{pd}{2(p+2)}}\bigr)^{\frac{p+2}{p(1-s_c)}} \lesssim \left(\frac{\varepsilon M^{-\frac{pd}{2(p+2)}}}{\log K}\right)^{\frac{2(p+2)}{4-p(d-2)}} \lesssim |P_N f_n(x_n)|. $$ We consider the sequence $f_n(\cdot+x_n)$. This sequence is bounded in $\dot H^1_x({\mathbb{R}}^d) \cap \dot H^{s_c}_x({\mathbb{R}}^d)$ by \eqref{E:control norms fn}, and so, after passing to a subsequence, there exists a weak limit $\phi \in \dot H^1_x({\mathbb{R}}^d) \cap \dot H^{s_c}_x({\mathbb{R}}^d)$ as in \eqref{E:fn wkly to phi}. The equalities in \eqref{E:inv st H1} and \eqref{E:inv st Hsc} are immediate. We now turn to the $L^{p+2}_x$ decoupling, \eqref{E:inv st p+2}. 
By \eqref{E:fn wkly to phi} and the Rellich--Kondrashov theorem, $f_n(\cdot + x_n) \to \phi$ in $L^2_\textrm{loc}$ and hence, after passing to a subsequence, almost everywhere. The equality in \eqref{E:inv st p+2} is then an immediate consequence of the Fatou lemma of Br\'ezis and Lieb; see \cite{BrezisLieb} or \cite{LiebLoss}. Finally, to obtain the lower bounds in \eqref{E:inv st H1}, \eqref{E:inv st Hsc}, and \eqref{E:inv st p+2}, we test $\phi$ against the function $k = P_N \delta_0$ ($\delta_0$ being the Dirac delta). We have $$ |\langle \phi , k \rangle| = \lim_{n \to \infty} \Bigl|\int_{{\mathbb{R}}^d} f_n(x +x_n) \overline{k(x)}\, dx\Bigr| = \lim_{n \to \infty} | P_N f_n(x_n)| \gtrsim \bigl(\varepsilon^2 M^{-1-\frac{pd}{2(p+2)}}\bigr)^{\frac{p+2}{p(1-s_c)}}. $$ Routine computations reveal that $$ \|k\|_{\dot H^{-s_c}_x} \sim N^{\frac d2-s_c}, \qquad \|k\|_{\dot H^{-1}_x} \sim N^{\frac d2-1}, \qquad \|k\|_{L^{\frac{p+2}{p+1}}_x} \sim N^{\frac d{p+2}}, $$ and since $K^{-p} \leq N \leq K^2$, the lower bounds follow. This completes the proof. \end{proof} We are now ready to prove Theorem~\ref{T:bubble gn}. \begin{proof}[Proof of Theorem~\ref{T:bubble gn}] To begin, we set $r_n^0 := f_n$. The identities \eqref{E:H1 decoup}, \eqref{E:Hsc decoup}, and \eqref{E:p+2 decoup} with $J=0$ are thus trivial. After passing to a subsequence, we may assume that $$ \lim_{n \to \infty} \|r_n^0\|_{\dot H^1_x}^2 + \|r_n^0\|_{\dot H^{s_c}_x}^2 = M_0^2 \qtq{and} \lim_{n \to \infty} \|r_n^0\|_{L^{p+2}_x} = \varepsilon_0. $$ By hypothesis $M_0 < \infty$, while by \eqref{E:GN} we have $\varepsilon_0 \lesssim M_0$. 
We now proceed inductively, assuming that a decomposition satisfying \eqref{E:bubble gn}, \eqref{E:rnJ wkly to 0}, \eqref{E:H1 decoup}, \eqref{E:Hsc decoup}, and \eqref{E:p+2 decoup} has been carried out up to some integer $J \geq 0$ and that the remainder satisfies $$ \lim_{n \to \infty} \|r_n^J\|_{\dot H^1_x}^2 + \|r_n^J\|_{\dot H^{s_c}_x}^2 = M_J^2 \qtq{and} \lim_{n \to \infty}\|r_n^J\|_{L^{p+2}_x} = \varepsilon_J, $$ with $\varepsilon_J \lesssim M_J$ (which follows from \eqref{E:GN}) and $M_J < M_0$ (which will be established below). If $\varepsilon_J = 0$, we stop, setting $J^* = J$. The relations \eqref{E:bubble gn} through \eqref{E:p+2 decoup} have thus been established; we will come to \eqref{E:GN orthog} in a moment. If $\varepsilon_J > 0$, we apply Proposition~\ref{P:inverse gn} to $\{r_n^J\}$, producing a sequence $\{x_n^{J+1}\}$ of points and a (subsequential) weak limit $$ r_n^J(\cdot + x_n^{J+1}) \rightharpoonup \phi^{J+1} \qtq{in} \dot H^1_x \cap \dot H^{s_c}_x. $$ Setting $r_n^{J+1} := r_n^J - \phi^{J+1}(\cdot - x_n^{J+1})$, we obtain \eqref{E:bubble gn} and \eqref{E:rnJ wkly to 0} with $J$ replaced by $J+1$. The identity in \eqref{E:inv st H1} is just $$ \lim_{n \to \infty} \Bigl\{\|r_n^J\|_{\dot H^1_x}^2 - \Bigl[\|\phi^{J+1}\|_{\dot H^1_x}^2 + \|r_n^{J+1}\|_{\dot H^1_x}^2\Bigr]\Bigr\} = 0. $$ Adding this to \eqref{E:H1 decoup} shows that this continues to hold at $J+1$: $$ \lim_{n \to \infty} \Bigl\{\|f_n\|_{\dot H^1_x}^2 - \Bigl[\sum_{j=1}^{J+1} \|\phi^j\|_{\dot H^1_x}^2 + \|r_n^{J+1}\|_{\dot H^1_x}^2\Bigr]\Bigr\} = 0. $$ To derive \eqref{E:Hsc decoup} and \eqref{E:p+2 decoup} with $J$ replaced by $J+1$, one argues similarly. 
Passing to a subsequence and applying \eqref{E:inv st H1}, \eqref{E:inv st Hsc}, and \eqref{E:inv st p+2}, we obtain \begin{align}\label{E:Sobolev decrement} M_{J+1}^2 &= \lim_{n \to \infty} \Bigl[\|r_n^{J+1}\|_{\dot H^1_x}^2 + \|r_n^{J+1}\|_{\dot H^{s_c}_x}^2\Bigr] \notag\\ &= \lim_{n \to \infty} \Bigl[\|r_n^J\|_{\dot H^1_x}^2 - \|\phi^{J+1}\|_{\dot H^1_x}^2 + \|r_n^J\|_{\dot H^{s_c}_x}^2 - \|\phi^{J+1}\|_{\dot H^{s_c}_x}^2\Bigr] \notag\\ &\leq M_J^2 - C\varepsilon_J^2\Bigl[\bigl(\tfrac{\varepsilon_J}{M_J}\bigr)^{\alpha_1} + \bigl(\tfrac{\varepsilon_J}{M_J}\bigr)^{\alpha_2}\Bigr] \end{align} and \begin{align}\label{E:p+2 decrement} \varepsilon_{J+1}^{p+2} = \lim_{n \to \infty} \|r_n^{J+1}\|_{L^{p+2}_x}^{p+2} = \lim_{n \to \infty} \Bigl[ \|r_n^J\|_{L^{p+2}_x}^{p+2} - \|\phi^{J+1}\|_{L^{p+2}_x}^{p+2} \Bigr] \leq \varepsilon_J^{p+2} - C\varepsilon_J^{p+2}\bigl(\tfrac{\varepsilon_J}{M_J}\bigr)^{\alpha_3}. \end{align} Either this process eventually stops and we obtain some finite $J^*$ or we set $J^* = \infty$. If we do have $J^* = \infty$, then \eqref{E:rnJ to 0} follows from \eqref{E:Sobolev decrement} and \eqref{E:p+2 decrement}. Finally, we prove the asymptotic orthogonality \eqref{E:GN orthog}. Let us suppose that this fails for some $j \neq j'$. We may assume that $j'>j$ and that $\lim_{n \to\infty}|x_n^j - x_n^{k}| = \infty$ for $j < k < j'$. Passing to a subsequence, we may assume that $\lim_{n \to \infty} (x_n^j - x_n^{j'}) = y$. We recall that $$ \phi^{j'} = \wklim_{n \to \infty} \, r_n^{j'-1}(\cdot + x_n^{j'}), $$ while $$ r_n^{j'-1} = r_n^j - \sum_{k=j+1}^{j'-1}\phi^{k}(\cdot - x_n^{k}).
$$ Therefore, \begin{align*} \phi^{j'} = \wklim_{n \to \infty} \Bigl\{ r_n^j(\cdot + x_n^{j'}) - \sum_{k=j+1}^{j'-1} \phi^{k}(\cdot + x_n^{j'} - x_n^{k}) \Bigr\} &= \wklim_{n \to \infty} \, r_n^j(\cdot + x_n^j - y) = 0, \end{align*} where we used $\lim_{n \to \infty} |x_n^{j'} - x_n^{k}| = \infty$ for all $j < k < j'$ in order to derive the second equality and \eqref{E:rnJ wkly to 0} to derive the third equality. But $\phi^{j'}$ cannot be $0$ in view of \eqref{E:inv st H1} and the fact that our inductive procedure stops once we obtain $\varepsilon_J = 0$. This completes the proof of Theorem~\ref{T:bubble gn}. \end{proof} \section{Blowup of the critical norm}\label{S:critical blowup} The goal of this section is to show that finite-time blowup of solutions to \eqref{E:eqn} is accompanied by subsequential blowup of their critical Sobolev norm. More precisely, we prove the following result, which is slightly more general than Theorem~\ref{T:I:sc} given in the Introduction. \begin{theorem} \label{T:liminf v2} Let $d \geq 2$, $m \in [0,1]$, and $\frac1{2d} < s_c < 1$. Set $p = \frac4{d-2s_c}$. Let $(u_0,u_1)$ be initial data for \eqref{E:eqn} satisfying $$ \|\langle\nabla\rangle_m u_0\|_{L^2_x} + \|u_1\|_{L^2_x} + \||\nabla|^{s_c}u_0\|_{L^2_x} \leq M<\infty, $$ with $u_0$ and $u_1$ radial if $s_c < \frac12$. Assume that the maximal-lifespan solution $u$ to \eqref{E:eqn} blows up forward in time at $0<T_* < \infty$. Then $$ \limsup_{t \uparrow T_*} \bigl\{\|u(t)\|_{\dot H^{s_c}_x} + \|u_t(t)\|_{H^{s_c-1}_x}\bigr\} = \infty. $$ \end{theorem} The remainder of the section is dedicated to the proof of the theorem. We assume by way of contradiction that \begin{equation} \label{E:lie} \|u\|_{L^{\infty}_t \dot H^{s_c}_x([0,T_*)\times {\mathbb{R}}^d)} + \|u_t\|_{L^{\infty}_t H^{s_c-1}_x([0,T_*) \times {\mathbb{R}}^d)} = K < \infty. \end{equation} By Corollary~\ref{C:ee blowup}, $$ \|\nabla_{t,x}u(t)\|_{L^2_x} \to \infty \qtq{as} t\to T_*.
$$ Thus we may choose a sequence of times $\{t_n\}_{n\geq 1}$ increasing to $T_*$ and satisfying $$ \|\nabla_{t,x}u(t_n)\|_{L^2_x} = \|\nabla_{t,x} u\|_{C_tL^2_x([0,t_n] \times {\mathbb{R}}^d)} \to \infty \qtq{as} n \to \infty. $$ Let $$ \lambda_n := (\|\nabla_{t,x} u(t_n)\|_{L^2_x})^{-\frac1{1-s_c}} \to 0 $$ and define $$ u^{(n)}(t,x) := \lambda_n^{\frac2p}u(t_n - \lambda_n t, \lambda_n x) \qtq{for all} (t,x) \in [0,T_n] \times {\mathbb{R}}^d, $$ where $T_n := \frac{t_n}{\lambda_n} \to \infty$. Then $u^{(n)}$ solves \begin{equation} \label{E:un soln} u^{(n)}_{tt} - \Delta u^{(n)} + m_n^2 u^{(n)} = |u^{(n)}|^p u^{(n)} \end{equation} on $[0,T_n] \times {\mathbb{R}}^d$ with $m_n := \lambda_n m \to 0$. Furthermore, by our choice of $t_n$ and $\lambda_n$, $u^{(n)}$ satisfies \begin{align} \label{E:un H1} \|\nabla_{t,x} u^{(n)}\|_{C_tL^2_x([0,T_n] \times {\mathbb{R}}^d)} = \|\nabla_{t,x} u^{(n)}(0)\|_{L^2_x} = 1 \end{align} and \begin{equation} \label{E:un Hsc} \begin{aligned} &\|u^{(n)}\|_{C_t \dot H^{s_c}_x([0,T_n] \times {\mathbb{R}}^d)} + \|\langle \nabla\rangle_{\lambda_n}^{s_c-1}\partial_t u^{(n)}\|_{C_t L_x^2([0,T_n] \times {\mathbb{R}}^d)}\\ &\qquad \qquad = \|u\|_{C_t\dot H^{s_c}_x([0,t_n] \times {\mathbb{R}}^d)} + \|u_t\|_{C_t H^{s_c-1}_x([0,t_n] \times {\mathbb{R}}^d)} \leq K. \end{aligned} \end{equation} (We note that the subscript $\lambda_n$ is essential in \eqref{E:un Hsc} because we need the weak limit of $\partial_t u^{(n)}(0)$ to belong to $\dot H^{s_c-1}_x$; cf. Lemma~\ref{L:weak stability}.) Finally, by conservation of energy, \begin{align} \label{E:un nrg} \int_{{\mathbb{R}}^d} \tfrac12|&\nabla_{t,x} u^{(n)}(0,x)|^2 - \tfrac1{p+2} |u^{(n)}(0,x)|^{p+2}\, dx \notag\\ &= \lambda_n^{2-2s_c} \int_{{\mathbb{R}}^d} \tfrac12|\nabla_{t,x}u(t_n,x)|^2 - \tfrac1{p+2}|u(t_n,x)|^{p+2}\, dx \notag\\ &\leq \lambda_n^{2-2s_c} \int_{{\mathbb{R}}^d} \tfrac12|\nabla_{t,x} u(0,x)|^2 + \tfrac{m^2}2 |u(0,x)|^2 - \tfrac1{p+2} |u(0,x)|^{p+2}\, dx \to 0. 
\end{align} Using this and \eqref{E:un H1}, for sufficiently large $n$ we obtain \begin{equation} \label{E:p+2 lb} \int_{{\mathbb{R}}^d} |u^{(n)}(0,x)|^{p+2}\, dx \geq 1. \end{equation} To continue, we will use Lemma~\ref{L:weak stability} and Theorem~\ref{T:bubble gn} to prove that under the assumption \eqref{E:lie}, we have the following: \begin{lemma} \label{L:bad limit} The sequence $\{u^{(n)}\}_{n\geq 1}$ gives rise to a nonzero solution $w$ to the nonlinear wave equation \eqref{E:eqn} with $m=0$ which is global forward in time, satisfies $(w,w_t) \in C_t ([0, \infty);\dot H^1_x \times L^2_x\cap \dot H^{s_c}_x \times \dot H^{s_c-1}_x)$, and has $E(w) \leq 0$. \end{lemma} By Theorem~\ref{T:nonpos blowup}, a solution $w$ as described in Lemma~\ref{L:bad limit} cannot exist. The restriction to radial data for $s_c<\frac12$ arises only from the use of this theorem. Thus, in order to conclude the proof of Theorem~\ref{T:liminf v2}, it remains to prove Lemma~\ref{L:bad limit}. To prove the lemma, we treat the sub- and super-conformal cases separately. We start with the sub-conformal case, where the radial assumption on the initial data allows for a simpler treatment. \begin{proof}[Proof of Lemma~\ref{L:bad limit} when $s_c < \frac12$] By hypothesis, in this case we have that $u_0$ and $u_1$ are radial, and so the $u^{(n)}$ are radial also. Using \eqref{E:un H1} and \eqref{E:un Hsc} and passing to a subsequence, we obtain a weak limit \begin{equation} \label{E:un wkly to w} (u^{(n)}(0),u^{(n)}_t(0)) \rightharpoonup (w_0,w_1) \qtq{in $[\dot H^1_x \cap \dot H^{s_c}_x] \times L^2_x$.} \end{equation} Additionally, by \eqref{E:un Hsc}, $w_1 \in \dot H^{s_c-1}_x$. As the embedding $\dot H^1_{\rm{rad}} \cap \dot H^{s_c}_{\rm{rad}} \hookrightarrow L^{p+2}_{\rm{rad}}$ is compact, \eqref{E:un wkly to w} dictates \begin{equation} \label{E:un to w p+2} u^{(n)}(0) \to w_0 \qtq{strongly in} L^{p+2}_x. \end{equation} By \eqref{E:p+2 lb}, $w_0$ is not identically 0. 
Finally, by \eqref{E:un nrg}, \eqref{E:un wkly to w}, and \eqref{E:un to w p+2}, we have \begin{align*} E(w_0,w_1) &= \int_{{\mathbb{R}}^d} \tfrac12 |\nabla w_0(x)|^2 + \tfrac12|w_1(x)|^2 - \tfrac1{p+2}|w_0(x)|^{p+2}\, dx \\ &\leq \lim_{n \to \infty} \int_{{\mathbb{R}}^d} \tfrac12|\nabla_{t,x}u^{(n)}(0,x)|^2 - \tfrac1{p+2}|u^{(n)}(0,x)|^{p+2}\, dx \leq 0. \end{align*} Now let $w$ be the solution to \eqref{E:eqn} with $m=0$ and initial data $(w_0,w_1)$ at time $t=0$. By Lemma~\ref{L:weak stability} and the fact that $T_n \to \infty$, we obtain that $w$ is global forward in time. Moreover it satisfies $(w,w_t) \in C_t ([0, \infty);\dot H^1_x \times L^2_x\cap \dot H^{s_c}_x \times \dot H^{s_c-1}_x)$ and $E(w) \leq 0$. This completes the proof of the lemma in the sub-conformal case. \end{proof} It remains to prove Lemma~\ref{L:bad limit} in the conformal and super-conformal cases; in this setting, we will substitute Theorem~\ref{T:bubble gn} for the compact radial embedding used in the sub-conformal case. \begin{proof}[Proof of Lemma~\ref{L:bad limit} when $s_c \geq \frac12$] Applying Theorem~\ref{T:bubble gn} to $\{u^{(n)}(0)\}_{n\geq 1}$ and passing to a subsequence, we obtain the decomposition $$ u^{(n)}(0) = \sum_{j=1}^J \phi^j_0(\cdot - x_n^j) + r_n^J \qtq{for all} 0 \leq J < J^*+1, $$ satisfying the conclusions of that theorem. By \eqref{E:p+2 lb} and \eqref{E:rnJ to 0} we must have $J^* \geq 1$. Using \eqref{E:rnJ wkly to 0} followed by \eqref{E:GN orthog}, for each $j$ we have \begin{equation} \label{E:phiJ0} \phi^j_0 = \wklim_{n \to \infty} \Bigl\{u^{(n)}(0,\cdot + x_n^j) - \sum_{k=1}^{j-1} \phi_0^k(\cdot - x_n^k + x_n^j)\Bigr\} = \wklim_{n \to \infty} u^{(n)}(0,\cdot + x_n^j) , \end{equation} where the weak limit is taken in $\dot H^1_x \cap \dot H^{s_c}_x$. 
Using \eqref{E:un H1} and passing to a subsequence, we may define \begin{equation} \label{E:phiJ1} \begin{aligned} \phi^j_1 &= \wklim_{n \to \infty} u^{(n)}_t(0,\cdot + x_n^j) = \wklim_{n \to \infty} \Bigl\{u^{(n)}_t(0,\cdot + x_n^j) - \sum_{k=1}^{j-1} \phi^k_1(\cdot - x_n^k + x_n^j)\Bigr\}, \end{aligned} \end{equation} where now the weak limits are taken in $L^2_x$. By \eqref{E:un Hsc}, we have $\phi^j_1 \in L^2_x \cap \dot H^{s_c-1}_x$ for all $1 \leq j < J^*+1$. By Lemma~\ref{L:weak stability} and the fact that $T_n \to \infty$, the solutions $w^j$ to $$ w_{tt}^j - \Delta w^j = |w^j|^pw^j \qtq{with} w^j(0) = \phi^j_0 \qtq{and} w_t^j(0) = \phi^j_1 $$ are global forward in time and satisfy $(w^j,w^j_t) \in C_t ([0, \infty);\dot H^1_x \times L^2_x\cap \dot H^{s_c}_x \times \dot H^{s_c-1}_x)$. Since the $\phi^j_0$ are all nonzero, the lemma will follow if we can prove that there exists $j_0$ such that $$ E(w^{j_0})=E(\phi^{j_0}_0,\phi^{j_0}_1) = \int_{{\mathbb{R}}^d} \tfrac12|\nabla\phi^{j_0}_0(x) |^2 +\tfrac12 |\phi^{j_0}_1(x)|^2 - \tfrac{1}{p+2}|\phi^{j_0}_0(x)|^{p+2}\, dx \leq 0. $$ Indeed, $w^{j_0}$ would then be the solution described in Lemma~\ref{L:bad limit}. Now, by \eqref{E:H1 decoup}, $$ \sum_{j=1}^{J^*} \|\nabla \phi^j_0\|_{L^2_x}^2 \leq \lim_{n \to \infty} \|\nabla u^{(n)}(0)\|_{L^2_x}^2, $$ and by the definition of $\phi^j_1$ (cf.\ the proof of \eqref{E:H1 decoup}), we have $$ \sum_{j=1}^{J^*} \|\phi^j_1\|_{L^2_x}^2 \leq \lim_{n \to \infty} \|u^{(n)}_t(0)\|_{L^2_x}^2. $$ Moreover, by \eqref{E:p+2 decoup} and \eqref{E:rnJ to 0}, $$ \sum_{j=1}^{J^*} \|\phi^j_0\|_{L^{p+2}_x}^{p+2} = \lim_{n \to \infty} \|u^{(n)}(0)\|_{L^{p+2}_x}^{p+2}. $$ Therefore, using \eqref{E:un nrg}, $$ \sum_{j=1}^{J^*} E(w^j) \leq \lim_{n \to \infty} \int_{{\mathbb{R}}^d} \tfrac12|\nabla_{t,x}u^{(n)}(0,x)|^2 - \tfrac1{p+2}|u^{(n)}(0,x)|^{p+2}\, dx \leq 0, $$ and so at least one $w^j$ must have non-positive energy. This completes the proof of the lemma. 
\end{proof} \section{Growth of other global norms}\label{S:other} \begin{proposition} \label{P:finite mass} Let $d \geq 2$, $m \in [0,1]$, and $0 < s_c < 1$. Set $p = \frac4{d-2s_c}$. Let $(u_0,u_1) \in H^1_x \times L^2_x$ and assume that the maximal-lifespan solution $u$ to \eqref{E:eqn} blows up forward in time at $0<T_* < \infty$. Then we have the pointwise in time bound \begin{equation} \label{E:ptwise L2 bound} \int_{{\mathbb{R}}^d} |u(t,x)|^2\, dx \lesssim (T_*-t)^{-\frac4p} \qtq{for all} 0 \leq t < T_*. \end{equation} Furthermore, if $I \subset [0,T_*)$ is an interval with $|I| \sim \dist(I,T_*)$, then we have the time-averaged bound \begin{equation} \label{E:avg H1 bound} \frac1{|I|} \int_I \int_{{\mathbb{R}}^d} |\nabla_{t,x} u(t,x)|^2\, dx\, dt \lesssim \dist(I,T_*)^{-\frac4p - 2}. \end{equation} The implicit constants in \eqref{E:ptwise L2 bound} and \eqref{E:avg H1 bound} may depend on $u$ but are independent of~$t$. \end{proposition} \begin{remark} The blowup rate of solutions to the ODE $v''+m^2v -|v|^pv=0$, namely, $v(t)\sim (T_*-t)^{-2/p}$, shows that the blowup rate in the proposition is sharp. On the other hand, solutions such as those constructed by Kichenassamy, whose blowup surface $t=\sigma(x)$ has a non-degenerate minimum at $T_*$, show that one cannot expect lower bounds of comparable size to the upper bounds given above. \end{remark} \begin{proof} We let $$ M(t) := \int_{{\mathbb{R}}^d} |u(t,x)|^2\, dx $$ denote the `mass'. We differentiate twice with respect to time and use the equation and integration by parts to see that \begin{align*} M'(t) &= \int_{{\mathbb{R}}^d} 2 u(t) u_t(t)\, dx\\ M''(t) &= \int_{{\mathbb{R}}^d} 2 |u_t(t)|^2 - 2|\nabla u(t)|^2 - 2m^2 |u(t)|^2 + 2|u(t)|^{p+2}\, dx\\ &= -2(p+2)E(u) + \int_{{\mathbb{R}}^d} (p+4) |u_t(t)|^2 + p|\nabla u(t)|^2 + p m^2 |u(t)|^2\, dx. \end{align*} By Proposition~\ref{P:lwp}, $M(t),M'(t),M''(t)$ are all finite for $0 \leq t < T_*$.
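For completeness, the second expression for $M''(t)$ follows from the first by substituting the conserved energy $E(u) = \int_{{\mathbb{R}}^d} \tfrac12 |u_t|^2 + \tfrac12 |\nabla u|^2 + \tfrac{m^2}2 |u|^2 - \tfrac1{p+2} |u|^{p+2}\, dx$: indeed,
$$
-2(p+2)E(u) = \int_{{\mathbb{R}}^d} -(p+2)|u_t|^2 - (p+2)|\nabla u|^2 - (p+2)m^2|u|^2 + 2|u|^{p+2}\, dx,
$$
and adding $\int_{{\mathbb{R}}^d} (p+4)|u_t|^2 + p|\nabla u|^2 + pm^2|u|^2\, dx$ leaves exactly the coefficients $2$, $-2$, $-2$, and $2$ appearing in the first expression.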
As the solution $u$ blows up at $T_*$, by Corollary~\ref{C:ee blowup} we must have $$ \lim_{t \uparrow T_*} \int_{{\mathbb{R}}^d} |\nabla u(t,x)|^2 \, dx = \infty. $$ Using this and the conservation of energy, we deduce that there exists $0<t_0<T_*$ such that $$ 2(p+2)E(u) \leq \tfrac{p}2\|\nabla u(t)\|_{L^2_x}^2 \qtq{for all} t_0<t<T_*. $$ Thus, \begin{equation} \label{E:M'' lb} M''(t) \geq \int_{{\mathbb{R}}^d} (p+4) |u_t(t,x)|^2 + \tfrac{p}2|\nabla u(t,x)|^2\, dx \qtq{for all} t_0 < t < T_* \end{equation} and so, by Cauchy--Schwarz, \begin{equation} \label{E:M'<MM''} |M'(t)|^2 \leq \frac4{p+4} M(t)M''(t) \qtq{for all} t_0 < t < T_*. \end{equation} From \eqref{E:M'' lb} we see that $M(t)\geq 0$ is strictly convex on $(t_0, T_*)$ and so vanishes at most once on this interval. Altering $t_0$ if necessary, we may thus assume $M(t)>0$ on $(t_0, T_*)$. This and \eqref{E:M'<MM''} show that $M(t)^{-\frac{p}4}$ is concave on $(t_0,T_*)$; indeed, $$ \partial_t^2 M(t)^{-\frac{p}4} = -\tfrac p4\bigl[M''(t)M(t) - \tfrac{p+4}4(M'(t))^2 \bigr]M(t)^{-\frac{p+8}4} \leq 0. $$ Therefore for all $t,T$ satisfying $t_0 < t \leq T < T_*$, we have $$ M(t)^{-\frac{p}4} \geq \frac{t-t_0}{T-t_0}M(T)^{-\frac{p}4} + \frac{T-t}{T-t_0}M(t_0)^{-\frac{p}4}\geq \frac{T-t}{T-t_0}M(t_0)^{-\frac{p}4}. $$ Letting $T\uparrow T_*$ and rearranging yields $$ M(t) \leq M(t_0)(T_*-t_0)^{\frac4p}(T_*-t)^{-\frac4p} \qtq{for all} t_0 < t < T_*. $$ This proves \eqref{E:ptwise L2 bound}, at least for $t_0 < t < T_*$. For $0\leq t\leq t_0$ this is trivial by the local-in-time continuity of $M$. We now turn to \eqref{E:avg H1 bound}. It suffices to consider intervals $I \subset (t_0,T_*)$ with $|I| \sim \dist(I,T_*)$. Let $\phi_I$ be a smooth cutoff with $\phi_I \equiv 1$ on $I$, $\supp \phi_I \subset [0,T_*)$, $|\supp \phi_I| \sim\dist(\supp \phi_I, T_*) \sim |I|$, and $|\phi_I''| \lesssim |I|^{-2}$.
Then by \eqref{E:M'' lb}, integration by parts, and \eqref{E:ptwise L2 bound}, we have \begin{align*} \int_I \int_{{\mathbb{R}}^d} |\nabla_{t,x} u(t,x)|^2\, dx\, dt &\lesssim \int_0^{T_*} \phi_I(t) M''(t)\, dt = \int_0^{T_*} \phi_I''(t)M(t)\, dt \\ &\lesssim |I||I|^{-2}\dist(I,T_*)^{-\frac4{p}} \sim |I|^{-\frac4{p} -1}. \end{align*} This completes the proof of the proposition. \end{proof} When $0 < s_c \leq \frac12$, the estimate \eqref{E:avg H1 bound} can be improved to a pointwise in time estimate, namely, \begin{equation} \label{E:temp cone bound} \int_{|x-x_0|<T_*-t} (T_*-t)^{2(1-s_c)}|\nabla_{t,x}u(t,x)|^2\, dx \lesssim 1. \end{equation} Inequality \eqref{E:temp cone bound} is proved for $m=0$ in \cite{MerleZaagAJM, MerleZaagMA}. For $0<m\leq1$, this is the content of Theorem~\ref{T:cone bound subc}. By the local well-posedness in Proposition~\ref{P:lwp} and finite speed of propagation, there exists $R>0$ such that \begin{equation} \label{E:global H1 outer space} \int_{|x|\geq R+T_*} |\nabla_{t,x} u(t,x)|^2\, dx \lesssim 1. \end{equation} Since $\{|x|\leq R+T_*\}$ is contained in the union of $C_{d,T_*,R}(T_*-t)^{-d}$ balls of radius $(T_*-t)$, \eqref{E:temp cone bound} implies that $$ \int_{|x|\leq R+T_*}|\nabla_{t,x}u(t,x)|^2\, dx \lesssim (T_*-t)^{-2(1-s_c)-d} = (T_*-t)^{-\frac4p-2}. $$ The estimate \eqref{E:avg H1 bound} follows by combining the inequality above with \eqref{E:global H1 outer space}. In Section~\ref{S:superconf blow} we prove averaged-in-time estimates inside light cones in the super-conformal case. However, when these are used to derive global in space bounds (in the manner just shown), the result is weaker than that given in \eqref{E:avg H1 bound}. \section{Lyapunov functionals}\label{S:Lyapunov} The most flexible way to describe conservation laws is in their microscopic form, that is, as the fact that a certain vector field is divergence-free in spacetime. 
Myriad consequences can then be derived by applying the divergence theorem, or, more generally, by pairing the vector field with the gradient of a function and integrating by parts. One of our goals in this section is to identify the underlying microscopic identities that yield the key monotonicity formulae in the analyses of Merle and Zaag. This points the way to the appropriate analogues for the results in the remaining sections. To simplify various expressions, for the remainder of the article, we will work in light cones $$ \{(t,x):0 < t \leq T, |x-x_0| < t\} \qtq{with} (T,x_0) \in {\mathbb{R}}_+\times{\mathbb{R}}^d, $$ rather than the backwards light cones discussed earlier. It is clear how to adapt Definition~\ref{D:solution} to this case. All the local theory results from Section~\ref{S:local theory} carry over by applying the time translation/reversal symmetry $u(t,x) \mapsto u(T-t,x)$. We begin with energy conservation: If $u$ is a solution to \eqref{E:eqn} and \begin{equation}\label{E:m E defn} \mathfrak{e}^0 := \tfrac12 u_t^2 + \tfrac12 |\nabla u|^2 + \tfrac{m^2}2 u^2 - \tfrac1{p+2} |u|^{p+2} \qtq{and} \vec\mathfrak{e} := - u_t \nabla u, \end{equation} then \begin{equation}\label{E:m E cons} \partial_t \mathfrak{e}^0 + \nabla \cdot \vec\mathfrak{e} = 0. \end{equation} The closest thing to a general procedure for discovering conservation laws is via Noether's theorem which makes the connection to (continuous) symmetries. The general nonlinear Klein--Gordon equation \eqref{E:eqn} has only the $\binom{d+2}{2}$-dimensional Poincar\'e group as symmetries; however, in the special case of $m=0$ and $p=4/(d-1)$ the symmetry group becomes the full $\binom{d+3}{2}$-dimensional conformal group (of $(d+1)$-dimensional \emph{spacetime}). Note that $p=4/(d-1)$ corresponds to $s_c=\frac12$, which explains the sub-/super-conformal nomenclature used in this paper. 
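As a simple illustration, \eqref{E:m E cons} is verified directly from the equation: differentiating the densities in \eqref{E:m E defn} gives
\begin{align*}
\partial_t \mathfrak{e}^0 + \nabla \cdot \vec\mathfrak{e} &= u_t u_{tt} + \nabla u \cdot \nabla u_t + m^2 u u_t - |u|^p u u_t - \nabla u_t \cdot \nabla u - u_t \Delta u \\
&= u_t \bigl( u_{tt} - \Delta u + m^2 u - |u|^p u \bigr) = 0.
\end{align*}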
While some elements of the conformal group fail to be true symmetries of the equation, the vestigial `conservation laws' that arise have proven to be very useful. The requisite computations are rather lengthy; nevertheless, the results are very neatly catalogued in the paper \cite{Strauss77} by Strauss. This paper also contains a proof that the only continuous symmetries are those described above, which is to say, they generate the full Lie algebra of Killing fields. Let us quickly review the list. \emph{Translations:} Time translation symmetry is responsible for the energy conservation \eqref{E:m E cons}, above. Spatial translation symmetry implies the conservation of momentum. Note that momentum conservation is of limited use, since it is not coercive. While energy is not coercive in the strictest sense in the focusing case, it is at least a scalar. \emph{Rotations:} Here we include the full group of spacetime rotations $SO(1,d)$, which includes both spatial rotations and Lorentz boosts. This produces a tensor of conserved quantities, of which the usual angular momentum is a part. Again their utility is limited because they are not coercive. \emph{Dilation:} By dilation, we mean rescaling both space and time. This gives rise to a very important conservation law: if \begin{equation}\label{E:frak d} \begin{aligned} \mathfrak{d}^0 &{}:= \hbox to 0.5em{\hss$t$\hss} \bigl[ \tfrac12 |\nabla u|^2 - \tfrac12 u_t^2 + \tfrac{m^2}2 u^2 - \tfrac1{p+2} |u|^{p+2} \bigr] + \bigl[ x\cdot\!\nabla u + t u_t + \tfrac{d-1}{2} u \bigr] u_t \\ \vec\mathfrak{d} &{}:= \hbox to 0.5em{\hss$x$\hss} \bigl[ \tfrac12 |\nabla u|^2 - \tfrac12 u_t^2 + \tfrac{m^2}2 u^2 - \tfrac1{p+2} |u|^{p+2} \bigr] - \bigl[ x\cdot\!\nabla u + t u_t + \tfrac{d-1}{2} u \bigr]\nabla u \end{aligned} \end{equation} then \begin{align}\label{E:m dilation} \partial_t \mathfrak{d}^0 + \nabla\! \cdot\!\;\! \vec\mathfrak{d} = \tfrac{p(d-1)-4}{2(p+2)} |u|^{p+2} + m^2 |u|^2. 
\end{align} This is an honest conservation law only in the conformally invariant case ($m=0$ and $p=\frac4{d-1}$). However, in the super-conformal case (i.e., $p > \frac4{d-1}$), both terms have the same sign; thus we obtain a monotonicity formula --- a Lyapunov functional! \emph{Conformal translations:} Recall that inversion in a cone, that is, $$ (t,x) \mapsto \bigl(\tfrac{t}{t^2 - |x|^2},\tfrac{x}{t^2 - |x|^2}), $$ is a conformal map of spacetime. This involution does not commute with translations; by forming commutators, we obtain a $(d+1)$-dimensional family of continuous symmetries (at least in the conformally invariant case). The resulting conservation laws are called conformal energy (relating to time translation) and conformal momentum (resulting from spatial translations). The conformal momentum lacks coercivity. The conservation of conformal energy reads as follows: if \begin{align}\label{E:frak q0} \mathfrak{k}^0 &:= (t^2+|x|^2) \mathfrak{e}^0 + 2tu_t(x\cdot\!\nabla u) + (d-1)tuu_t - \tfrac{d-1}2 u^2 \\ \vec\mathfrak{k} &:= -\bigl[ (t^2+|x|^2) u_t + 2t(x\cdot\!\nabla u)+(d-1)tu\bigr] \nabla u \notag \\ & \qquad\qquad\qquad - 2xt\bigl[ \tfrac12 u_t^2 - \tfrac12 |\nabla u|^2 - \tfrac{m^2}2 u^2 + \tfrac1{p+2} |u|^{p+2} \bigr] \label{E:frak qj} \end{align} then \begin{align}\label{E:m conf E} \partial_t \mathfrak{k}^0 + \nabla\! \cdot\!\;\! \vec\mathfrak{k} = t \tfrac{p(d-1)-4}{(p+2)} |u|^{p+2} + 2tm^2 |u|^2 . \end{align} This completes the list. We found $d+1$ translations, $\tbinom{d+1}{2}$ rotations, $1$ dilation, and $d+1$ conformal translations. These generate the promised $\binom{d+3}{2}$-dimensional group of conformal symmetries. We now turn to converting these microscopic conservation laws into integrated form. The key identities we need originate from energy conservation \eqref{E:m E cons} and the dilation identity \eqref{E:m dilation}. 
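It is worth recording the coefficient in \eqref{E:m dilation} in terms of $s_c$: since $p = \frac4{d-2s_c}$,
$$
p(d-1) - 4 = \frac{4(d-1)}{d-2s_c} - 4 = \frac{4(2s_c-1)}{d-2s_c},
$$
so the right-hand side of \eqref{E:m dilation} is non-negative precisely when $s_c \geq \tfrac12$, that is, in the conformal and super-conformal cases.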
The conformal energy identity \eqref{E:m conf E} has a very similar structure to the dilation identity \eqref{E:m dilation}; however, the extra factor of $2t$ on the right-hand side of \eqref{E:m conf E} makes it inferior for our purposes. Integrating the energy identity \eqref{E:m E cons} yields the family of well-known energy flux identities. The particular cases we need are the following: \begin{lemma}[Energy flux identity] \label{L:E flux} Let $u$ be a strong solution to \eqref{E:eqn} in the light cone \begin{equation}\label{E:basic cone} \Gamma := \bigl\{ (t,x): 0<t\leq T \text{ and } |x|<t\bigr\}. \end{equation} Then for all $0<t_0<t_1<T$, \begin{align} & \int_{|x| < t_1} \tfrac{t_1^2-|x|^2}{t_1} \mathfrak{e}^0(t_1,x) \,dx - \int_{|x| < t_0} \tfrac{t_0^2-|x|^2}{t_0} \mathfrak{e}^0(t_0,x) \,dx \notag \\ & \qquad = \int_{t_0}^{t_1} \!\!\! \int_{|x|<t} \tfrac14(1+\tfrac{|x|}t)^2\bigl[u_t(t,x)+u_r(t,x)\bigr]^2 + \tfrac14(1-\tfrac{|x|}t)^2\bigl[u_t(t,x)-u_r(t,x)\bigr]^2\notag \\ &\qquad \qquad + (1+\tfrac{|x|^2}{t^2})\bigl[ \tfrac12|\nabslash u(t,x)|^2 + \tfrac{m^2}2 |u(t,x)|^2 - \tfrac1{p+2}|u(t,x)|^{p+2} \bigr]\, dx\, dt. \label{E:E flux} \end{align} Here $u_r := \frac{x}{|x|}\cdot\nabla u$ and $\nabslash u := \nabla u - \frac{x}{|x|} u_r$ denote the radial and angular derivatives, respectively, and $$ \mathfrak{e}^0(t,x) = \tfrac12 |u_t(t,x)|^2 + \tfrac12 |\nabla u(t,x)|^2 + \tfrac{m^2}2 |u(t,x)|^2 - \tfrac1{p+2} |u(t,x)|^{p+2}, $$ as in \eqref{E:m E defn}. \end{lemma} \begin{proof} The identity follows easily from \eqref{E:m E cons} and integration by parts. First we define a function $\phi$ and a frustrum $F$ as follows: $$ F := \bigl\{ (t,x): t_0<t<t_1 \text{ and } |x|<t\bigr\} \quad\text{and}\quad \phi(t,x) = \begin{cases} \tfrac{t^2-|x|^2}{t} &:|x|<t \\ 0 &:|x|\geq t.
\end{cases} $$ Then integration by parts and \eqref{E:m E cons} show that \begin{align*} \iint_F \mathfrak{e}^0(t,x)\partial_t\phi(t,x) & + \vec\mathfrak{e}(t,x)\cdot \nabla\phi(t,x) \,dx\,dt \\ &=\int_{{\mathbb{R}}^d}\mathfrak{e}^0(t_1,x)\phi(t_1,x) -\mathfrak{e}^0(t_0,x)\phi(t_0,x)\, dx, \end{align*} which is equivalent to \eqref{E:E flux}. An alternate proof of \eqref{E:E flux} can be based on applying the divergence theorem to a family of concentric frustra inside $F$ with varying opening angle and then averaging over this family. This proof is more intuitive: on each frustrum we obtain the usual energy flux identity, namely, the energy at the top is equal to the energy at the bottom plus the energy flux out through the side of the frustrum. However, at the low regularity we are considering, this intermediate step is ill-defined: $\mathfrak{e}^0$ and $\vec\mathfrak{e}$ are merely $L^1_\textrm{loc}$. \end{proof} It is tempting (and not difficult) to run the same argument using the dilation identity; however, the result takes a more satisfactorily coercive form if we make a trivial modification. Here we mean trivial in a cohomological sense: observe that for any vector-valued function $\vec f:{\mathbb{R}}\times{\mathbb{R}}^d\to{\mathbb{R}}^d$ on spacetime, $( \nabla\cdot \vec f ,\ - \partial_t \vec f)$ is divergence free, by equality of mixed partial derivatives. This is quite different from \eqref{E:m E cons} or \eqref{E:m dilation}, which rely on the fact that $u(t,x)$ solves a PDE, namely, \eqref{E:eqn}. Specifically, defining \begin{align}\label{E:frak l} \mathfrak{l}^0 := \mathfrak{d}^0 + \tfrac{d-1}{4} \nabla\!\cdot\bigl(\tfrac{x}t u^2\bigr) \qtq{and} \vec\mathfrak{l} := \vec\mathfrak{d} - \tfrac{d-1}{4} \tfrac{\partial\ }{\partial t} \bigl(\tfrac{x}t u^2\bigr) \end{align} we deduce that \begin{align}\label{E:m l} \partial_t \mathfrak{l}^0 + \nabla\! \cdot\!\;\! \vec\mathfrak{l} = \partial_t \mathfrak{d}^0 + \nabla\! \cdot\!\;\! 
\vec\mathfrak{d} = \tfrac{p(d-1)-4}{2(p+2)} |u|^{p+2} + m^2 |u|^2. \end{align} To see the improvement of coercivity over the original dilation identity \eqref{E:m dilation}, we need to expand out the definition of $\mathfrak{l}^0$ and collect terms. This yields \begin{equation}\label{E:frak l0} \begin{aligned} \mathfrak{l}^0 &= \tfrac1{2t}\bigl|x\!\;\!\cdot\!\nabla u + t u_t + \tfrac{d-1}{2} u\bigr|^2 + \tfrac{t}2\bigl(|\nabla u|^2 - |\tfrac{x}t \cdot \nabla u|^2\bigr) - \tfrac{t}{p+2}|u|^{p+2} \\ & \qquad {} + \tfrac{d^2-1}{8t} u^2 + t \tfrac{m^2}{2} u^2; \end{aligned} \end{equation} indeed, the modification \eqref{E:frak l} was chosen precisely to complete the squares here and in \eqref{E:L bndry}. \begin{lemma}[Two dilation inequalities]\label{L:dilat id} Let $u$ be a strong solution to \eqref{E:eqn} in the light cone \eqref{E:basic cone}. Then \begin{equation}\label{E:L flux ineq} \begin{aligned} \int_{t_0}^{t_1} \!\!\! \int_{|x|<t} \! \tfrac{p(d-1)-4}{2(p+2)} & |u(t,x)|^{p+2} + m^2 |u(t,x)|^2 \, dx\,dt + \int_{|x| < t_0} \mathfrak{l}^0(t_0,x) \,dx \\ &\leq \int_{|x| < t_1} \mathfrak{l}^0(t_1,x) \,dx \end{aligned} \end{equation} for all $0<t_0<t_1\leq T$. Moreover, in the conformal case $p(d-1)=4$ we have \begin{equation}\label{E:L flux ineq 2} \begin{aligned} \!\!\! \int_{t_0}^{(1+\alpha)t_0} \!\!\! \int_{|x|<\alpha t} \! (t-|x|)^{d+1}|\nabla_{\!t,x} u(t,x)|^2 + (t-|x|)^{d-1} |u(t,x)|^2 dx\!\:dt \lesssim \alpha t_0^{d+1}\!, \end{aligned} \end{equation} uniformly for $0<\alpha\leq 1$ and $[t_0, (1+\alpha)t_0]\subset (0, T]$. The implicit constant depends on $d$, $T$, and the $H^1_x\times L^2_x$ norm of $(u(T),u_t(T))$ on the ball $\{|x|<T\}$. \end{lemma} \begin{proof} We begin with \eqref{E:L flux ineq}. If $u$ were $C^2$, then we could apply the divergence theorem on the frustrum $F=\{ (t,x): t_0<t<t_1 \text{ and } |x|<t\}$ to obtain \begin{align} &\int_{t_0}^{t_1} \!\!\! \int_{|x|<t} \!
\tfrac{p(d-1)-4}{2(p+2)} |u(t,x)|^{p+2} + m^2 |u(t,x)|^2 \, dx\,dt \label{E:L div thm} \\ {}={}& \int_{|x| < t_1} \mathfrak{l}^0(t_1,x) \,dx - \int_{|x| < t_0} \mathfrak{l}^0(t_0,x) \,dx + \int_{t_0}^{t_1} \!\!\! \int_{|x|=t} \vec\mathfrak{l}\cdot\tfrac{x}{|x|} - \mathfrak{l}^0 \ dS(x)\,dt. \notag \end{align} Here, $dS$ denotes surface measure on the sphere $\{|x|=t\}$, or, equivalently, $(d-1)$-dimensional Hausdorff measure. Note that although $(-1,x/|x|)$ is not a unit vector, this is compensated for by the fact that $dS(x)\,dt$ is $2^{-1/2}$ times $d$-dimensional surface measure on the cone. Thus, for $u\in C^2$ the inequality \eqref{E:L flux ineq} follows directly from \eqref{E:L div thm} by neglecting the manifestly sign-definite term \begin{equation}\label{E:L bndry} \int_{t_0}^{t_1} \!\!\! \int_{|x|=t} \vec\mathfrak{l}\;\!\cdot\tfrac{x}{|x|} - \mathfrak{l}^0 \ dS(x)\,dt = - \int_{t_0}^{t_1} \!\!\! \int_{|x|=t} \tfrac1t\bigl[x\cdot\!\nabla u + t u_t + \tfrac{d-1}{2} u\bigr]^2 \, dS(x)\,dt. \end{equation} To make this argument rigorous when $(u,u_t)$ is merely in $H^1_x\times L^2_x$, one can use the integration by parts technique of the previous lemma. This time, one chooses $\phi$ to be a mollified version of the characteristic function of the frustrum $F$. We turn now to \eqref{E:L flux ineq 2}. Again we simplify the presentation by assuming that the solution is $C^2$. In the conformal case, the coefficient of the potential energy term in \eqref{E:L div thm} is zero, so we rely instead on the positivity of \eqref{E:L bndry}. Applying the dilation identity \eqref{E:L div thm} to $u(t+s,x+y)$ with fixed $0<s<T$ and $|y|<s$, and using the fact that \begin{align*} \lim_{t\searrow s} \int_{|x-y|<t-s} \tfrac{t-s}{p+2} |u(t,x)|^{p+2}\,dx = 0 \quad \text{for all $0<s<T$}, \end{align*} by the definition of a strong solution, we obtain \begin{align*} \int_s^T \!\!\! 
\int_{|x-y|=t-s} \bigl[(x-y)\cdot\!\nabla u(t,x) + (t-s) u_t(t,x) + \tfrac{d-1}{2} u(t,x)\bigr]^2 \frac{dS(x)\,dt}{t-s} \lesssim 1, \end{align*} where the implicit constant depends on $T$ and the $H^1_x\times L^2_x$ norm of $(u(T),u_t(T))$ on the ball $\{|x|<T\}$. We will deduce \eqref{E:L flux ineq 2} by integrating this over all choices of $(s,y)$ in the region \begin{equation}\label{sy restr} R(\alpha):= \bigl\{ (s,y) : (1-\alpha)t_0 < s+|y| < (1+\alpha)^2 t_0 \ \text{and} \ |y| < s \bigr\}. \end{equation} As $R(\alpha)$ has volume $O(\alpha t_0^{d+1})$, we deduce \begin{align*} \iint_{R(\alpha)}\! \int_s^T \!\!\! \int_{|x-y|=t-s} \bigl[(x-y)\cdot\!\nabla u + (t-s) u_t + \tfrac{d-1}{2} u\bigr]^2 \frac{dS(x)\,dt}{t-s} \,dy\,ds \lesssim \alpha t_0^{d+1}. \end{align*} Next we replace the variable $x$ by $\omega\in S^{d-1}$ via $x=y+(t-s)\omega$ and then change variables a second time from $y$ to $x$ via $y=x-(t-s)\omega$. This yields \begin{align*} \int\!\!\!\int\!\!\!\int\!\!\!\int_{\Omega(\alpha)} \bigl[(t-s) \bigl(\omega\cdot\!\nabla u(t,x) &+ u_t(t,x)\bigr) + \tfrac{d-1}{2} u(t,x)\bigr]^2 (t-s)^{d-2} \,ds\,dS(\omega)\,dx\,dt\\ &\lesssim \alpha t_0^{d+1}, \end{align*} where the region of integration is $$ \Omega(\alpha) := \bigl\{ (s,\omega,x,t) : \omega\in S^{d-1}, \ 0< s < t < T, \ \text{and} \ \bigl(s,x-(t-s)\omega\bigr)\in R(\alpha) \bigr\}. $$ To find a lower bound for this integral, we replace $\Omega(\alpha)$ by a smaller region, namely, $$ \tilde\Omega(\alpha):= \{ (s,\omega,x,t) : \omega\in S^{d-1}, \ |x|<\alpha t, \ t_0<t<(1+\alpha)t_0, \ 0 < t-s < \tfrac{t-|x|}2 \}. $$ Verifying $\tilde\Omega(\alpha)\subseteq \Omega(\alpha)$ rests on two simple observations. First, $$ |x-(t-s)\omega| < s \iff 2\bigl(t-x\cdot\omega\bigr)(t-s) < t^2 -|x|^2 $$ and so $0<t-s< \tfrac{t-|x|}{2}$ implies $|x-(t-s)\omega| < s$ whenever $|x|< t$. 
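In detail, writing $s^2 = t^2 - 2t(t-s) + (t-s)^2$ and expanding the square,
$$
|x-(t-s)\omega|^2 < s^2 \iff 2(t-s)\bigl(t - x\cdot\omega\bigr) < t^2 - |x|^2,
$$
and when $0 < t-s < \tfrac{t-|x|}2$ and $|x|<t$ we indeed have $2(t-s)(t-x\cdot\omega) \leq 2(t-s)(t+|x|) < (t-|x|)(t+|x|) = t^2-|x|^2$.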
The second observation is that for $|x|< \alpha t$ and $t_0<t<(1+\alpha)t_0$, $$ s + |x - (t-s)\omega| \in \bigl[t-|x|, t+|x|\bigr] \subseteq \bigl((1-\alpha)t_0, (1+\alpha)^2t_0\bigr). $$
The decoupling of variables that is built into the structure of $\tilde\Omega(\alpha)$ makes the integral very easy to evaluate. Indeed, freezing $t$ and $x$ for a moment we have \begin{align*} \frac1{|S^{d-1}|} \int_{S^{d-1}} &\int_{(t+|x|)/2}^t \bigl[(t-s) \bigl(\omega\cdot\!\nabla u + u_t\bigr) + \tfrac{d-1}{2} u\bigr]^2 (t-s)^{d-2} \,ds\,dS(\omega) \\ &= \tfrac{1}{d+1} \bigl(\tfrac{t-|x|}{2}\bigr)^{d+1} \bigl[ u_t^2 + \tfrac1d |\nabla u|^2\bigr] + \tfrac{d-1}{d} \bigl(\tfrac{t-|x|}{2}\bigr)^{d} u u_t + \tfrac{d-1}{4} \bigl(\tfrac{t-|x|}{2}\bigr)^{d-1} u^2. \end{align*}
Thus the desired estimate \eqref{E:L flux ineq 2} follows easily by restoring the integrals over $t$ and $x$ and by noting that the quadratic form $$ \tfrac{1}{d+1} X^2 + \tfrac{d-1}{d} XY + \tfrac{d-1}{4} Y^2 $$ has negative discriminant, namely $\bigl(\tfrac{d-1}{d}\bigr)^2 - \tfrac{d-1}{d+1} = -\tfrac{d-1}{d^2(d+1)} < 0$, and so is positive definite. \end{proof}
Notice that the LHS\eqref{E:L div thm} is sign-definite for any $p\geq \frac{4}{d-1}$, thus providing a Lyapunov functional in that case. More precisely, Lemma~\ref{L:dilat id} shows that \begin{equation} \label{E:L(t)} L(t) := \int_{|x| < t} \mathfrak{l}^0(t,x) \,dx \end{equation} is an increasing function of time. Moreover, the expansion \eqref{E:frak l0} shows that this Lyapunov functional has good coercivity properties. Specializing to the conformally invariant case $m=0$ and $p=\frac{4}{d-1}$ we find precisely the Lyapunov functional used by Merle and Zaag in \cite{MerleZaagMA} in their treatment of this case (cf. Lemma~21 in \cite{MerleZaagMA}).
This is not immediately apparent because Merle and Zaag use similarity variables \begin{equation}\label{E:ss vars} w(s,y) := t^{-2/p} u(t,x) \qtq{with} y := x/t \qtq{and} s := \log(t), \end{equation} as advocated in earlier work \cite{GigaKohn:Indiana} of Giga and Kohn on the semilinear heat equation. In particular, the connection to the dilation identity does not seem to have been noted before. In Section~\ref{S:conf blow}, we will revisit the work of Merle and Zaag on the conformally invariant wave equation in the course of discussing analogous results for Klein--Gordon. The principal deviation from their argument is the use of the estimate \eqref{E:L flux ineq 2} appearing in Lemma~\ref{L:dilat id}, which should be compared with Proposition~2.4 in \cite{MerleZaagMA} and Proposition~4.2 in \cite{MerleZaagIMRN}. Like their estimates, \eqref{E:L flux ineq 2} was proved by averaging the identity \eqref{E:L div thm}. Here we see one advantage to working in the usual coordinates, namely, it makes it clear how to perform this averaging so as to obtain control over all directional derivatives. More specifically, the region $R(\alpha)$ appearing in \eqref{sy restr} was chosen to contain all spacetime points whose future light cones intersect the region of $(t,x)$ integration appearing in LHS\eqref{E:L flux ineq 2}.
\begin{lemma}\label{L:L>0} Let $d \geq 2$, $m \in [0,1]$, and $p = \frac4{d-2s_c}$ with $\tfrac12 \leq s_c < 1$. If $u$ is a strong solution to \eqref{E:eqn} in the light cone $\{(t,x):0 < t \leq T, |x|<t\}$, then $L(t)\geq 0$ for all $0 < t \leq T$. \end{lemma}
\begin{proof} The following argument appears also in \cite{AntoniniMerle}. Suppose by way of contradiction that $L(t_0)<0$ for some $t_0 \in (0,T]$.
By the dominated convergence theorem, \eqref{E:frak l0}, and our assumption that $u$ is a strong solution, there exists $0<\delta< t_0$ such that \begin{equation} \label{E:L delta} \begin{aligned} L_\delta(t) := \int_{|x| < t-\delta}& \tfrac1{2(t-\delta)}\bigl|x\!\;\!\cdot\!\nabla u + (t-\delta) u_t + \tfrac{d-1}{2} u\bigr|^2 + \tfrac{t-\delta}2\bigl(|\nabla u|^2 - |\tfrac{x}{t-\delta} \cdot \nabla u|^2\bigr)\\ &\qquad {} - \tfrac{t-\delta}{p+2}|u|^{p+2} + \tfrac{d^2-1}{8(t-\delta)} u^2 + (t-\delta) \tfrac{m^2}{2} u^2\, dx \end{aligned} \end{equation} is negative at time $t=t_0$. Letting $\mathfrak{l}_u^0$ denote the quantity in \eqref{E:frak l0} and $u^\delta(t) := u(t+\delta)$, we may write $$ L_\delta(t) = \int_{|x| < t-\delta} \mathfrak{l}^0_{u^\delta}(t-\delta,x)\, dx. $$ As $u^\delta$ is a strong solution in the light cone $\{(t,x):-\delta < t \leq T-\delta, |x|<t+\delta\}$, Lemma~\ref{L:dilat id} implies that $L_\delta$ is an increasing function of time. As $L_\delta(t_0)<0$ and $0<\delta <t_0$, we deduce that \begin{equation} \label{E:lim Ldelta neg} \lim_{t \searrow \delta} L_\delta(t) < 0. \end{equation} On the other hand, \eqref{E:L delta} gives \begin{equation} \label{E:Ldelta lb} L_\delta(t) \geq -\int_{|x| < t} \tfrac{t-\delta}{p+2} |u(t,x)|^{p+2}\, dx. \end{equation} As $u$ is a strong solution, Lemma~\ref{L:Sob domain} implies that $\|u(t)\|_{L^{p+2}(|x|<t)}$ is uniformly bounded for $t \in [\delta,T]$; the bound may depend on $\delta$, but this is irrelevant. Thus, by \eqref{E:Ldelta lb}, $$ \lim_{t \searrow \delta} L_\delta(t) \geq 0, $$ contradicting \eqref{E:lim Ldelta neg}. This completes the proof of the lemma. \end{proof} In the next section, we will see that our results in the super-conformal case (i.e., when $p > \frac{4}{d-1}$) are less complete than for the conformal and sub-conformal cases. Without going into any details of the analysis, we can already explain why this happens: scaling. 
Recall from the introduction that when $m=0$, the set of solutions to \eqref{E:eqn} is invariant under the scaling $u(t,x)\mapsto u^\lambda(t,x) := \lambda^{2/p} u(\lambda t,\lambda x)$. Recall also that the critical regularity $s_c$ is determined by invariance under this scaling, which leads to $s_c = \frac d2 - \frac2p$. Note that it is reasonable to neglect the mass term when computing the scaling of the equation, on account of it being subcritical when compared with the other terms. We can apply the same reasoning to the conservation of energy to see that it has $\dot H^1_x$ scaling, at least in the cases when $p\leq4/(d-2)$, which includes those discussed in this paper. When $p>4/(d-2)$ the derivative terms in the energy are subcritical relative to the potential energy, which means that it is more reasonable to assert that the energy has the scaling of $\dot H^s_x$ with $s=pd/[2(p+2)]$. (For such $p$, this $s$ is less than $s_c$, so the equation is supercritical relative to both terms in the energy.) The dilation identity has $\dot H_x^{1/2}$ scaling; notice, for example, that the components of $\mathfrak{d}$ resemble those of $\mathfrak{e}$ multiplied by length (or time, which has the same dimensionality). By comparison, the conformal energy scales as $L^2_x$ and this is why it is inferior for our purposes. Indeed, experience has shown that after coercivity, the utility of a conservation/monotonicity law is dictated by its proximity to critical scaling. Thus we obtain optimal results when $s_c=1/2$, but only weaker results at higher critical regularity. When $p<4/(d-1)$, that is, when $s_c < 1/2$, the equation is subcritical relative to the dilation identity. As we are working on a finite time-interval, this is a favourable situation. However, in this case the identity is no longer coercive, which is very bad news. As the dilation identity is local in space, we can produce a whole family of identities (with lower scaling regularity) by averaging translates. 
While the previously neglected coercivity \eqref{E:L bndry} will now produce an additional positive volume integral term, it is far from obvious that coercivity can be restored. Nevertheless, Antonini and Merle \cite{AntoniniMerle} demonstrated that there is a Lyapunov functional when $p<4/(d-1)$. In the limit $p\nearrow 4/(d-1)$, one recovers the functional used in the subsequent paper \cite{MerleZaagMA}. Given the connection of this limiting case to the dilation identity (the topic of Lemma~\ref{L:dilat id}), one would expect to find a connection for all $p<4/(d-1)$. Our next task is to explicate this connection. To do so, we need to begin with a slight detour. Let us consider \emph{complex}-valued solutions to \eqref{E:eqn} for a moment. In this case we pick up an additional symmetry, namely, phase rotation invariance; the class of solutions is invariant under $u(t,x)\mapsto e^{i\theta}u(t,x)$. This begets the law of charge conservation: If $\mathfrak{q}^0 = \bar u u_t$ and $\vec\mathfrak{q} = - \bar u \nabla u$, then \begin{gather}\label{E:m charge id} \partial_t \mathfrak{q}^0 + \nabla\! \cdot\!\;\! \vec\mathfrak{q} = |u_t|^2 - |\nabla u|^2 - m^2 |u|^2 + |u|^{p+2}. \end{gather} Strictly speaking, charge conservation corresponds to the imaginary part of this identity, for which the right-hand side vanishes. (For real-valued solutions, this is just $0=0$.) The real part of \eqref{E:m charge id} is a non-trivial identity, even in the case of real-valued solutions to \eqref{E:eqn}, although it is no longer a true conservation law. Like the dilation identity, this law has $\dot H_x^{1/2}$ scaling. Although it is incidental to the main themes of this section, let us pause to observe that the charge identity underpins the virial theorem for this system. Recall that the virial theorem (cf. 
\cite{Clausius} or \cite[\S 10]{LandauLif1}) shows the following: For a mechanical system whose potential energy is a homogeneous function of the coordinates (of $u$ in our case), the time-averaged potential and kinetic energies are in the proportion dictated by the homogeneity. The proof extends immediately to the case of potential energies that are a sum of terms with different homogeneities, as is the case for our equation:
\begin{lemma}[Virial identity]\label{L:EnEqPart} Let $u:[0,\infty)\times{\mathbb{R}}^d\to{\mathbb{R}}$ be a global strong solution to \eqref{E:eqn} with $u \in L^\infty_t H^1_x$ and $u_t\in L^\infty_t L^2_x$. Then \begin{align*} \lim_{T\to\infty}\frac1T \int_0^T \!\!\! \int_{{\mathbb{R}}^d} \! & |\nabla u(t,x)|^2 + m^2 |u(t,x)|^2 - |u(t,x)|^{p+2} \,dx\,dt \\ &= \lim_{T\to\infty} \frac1T \int_0^T \!\!\! \int_{{\mathbb{R}}^d} \! |u_t(t,x)|^2 \,dx\,dt. \end{align*} \end{lemma}
\begin{remark} In the lemma we assume that $\|(u(t),u_t(t))\|_{H^1 \times L^2}$ is bounded; it suffices that this norm is merely $o(t)$, as will become immediately apparent in the proof. \end{remark}
\begin{proof} Integrating \eqref{E:m charge id} over the space-time slab $[0,T]\times{\mathbb{R}}^d$ gives \begin{align*} \int_0^T \!\!\! \int_{{\mathbb{R}}^d} \! |u_t(t,x)|^2 - |\nabla u(t,x)|^2 - m^2 |u(t,x)|^2 &+ |u(t,x)|^{p+2} \,dx\,dt \\ &= \int_{{\mathbb{R}}^d} \! \mathfrak{q}^0(T,x) - \mathfrak{q}^0(0,x) \,dx. \end{align*} Observing that the right-hand side is $O(1)$ by hypothesis, the result follows by dividing by $T$ and rearranging a little. \end{proof}
We now return to the question of determining the link between the Lyapunov functional introduced in \cite{AntoniniMerle} and the dilation identity. It is natural to try averaging the dilation identity against a Lorentz-invariant function.
For a generic tensor $\mathfrak{z}$, integration by parts formally yields the following: \begin{equation}\label{E:general z id} \begin{aligned} \int_{{\mathbb{R}}^d} & \mathfrak{z}^0(t_1,x)\psi(t_1^2-|x|^2)\,dx - \int_{{\mathbb{R}}^d} \mathfrak{z}^0(t_0,x) \psi(t_0^2-|x|^2)\,dx \\ ={}& \int_{t_0}^{t_1} \! \int_{{\mathbb{R}}^d} [\partial_t \;\! \mathfrak{z}^0 + \nabla \! \cdot \!\;\! \vec \mathfrak{z} \,] \psi(t^2-|x|^2) + 2 [ t \;\! \mathfrak{z}^0 - x \cdot \vec \mathfrak{z} \,] \psi'(t^2-|x|^2) \,dx\,dt. \end{aligned} \end{equation}
We consider for a moment the case when $\mathfrak{z}=\mathfrak{d}$. Starting with \eqref{E:frak d}, a few elementary manipulations reveal \begin{equation}\label{E:td0-xd} \begin{aligned} t \;\! \mathfrak{d}^0 - x \cdot \vec \mathfrak{d} &= (t^2 - |x|^2) \bigl[ \tfrac12 |\nabla u|^2 - \tfrac12 {u_t}^2 + \tfrac{m^2}2 u^2 - \tfrac1{p+2} |u|^{p+2} \bigr] \\ &\qquad {} + \bigl[ x\cdot\!\nabla u + t u_t + \tfrac{d-1}{2} u \bigr] \bigl[ t u_t + x\cdot\!\nabla u\bigr]. \end{aligned} \end{equation}
Notice the similarity of the first term in square brackets to the right-hand side of the charge identity \eqref{E:m charge id}. More generally, if $\mathfrak{z}$ has $\dot H^{1/2}_x$ scaling (like $\mathfrak{d}$), then to obtain a formula with critical scaling, the function $\psi(t^2-|x|^2)$ should have dimensions of length to the power $2\alpha$ with $\alpha=\frac12-s_c$. That is, $\psi$ should be homogeneous of degree $\alpha$. By Euler's formula, this implies \begin{equation}\label{E:psi alpha-homo} \psi'(t^2-|x|^2) = \alpha (t^2-|x|^2)^{-1} \psi(t^2-|x|^2). \end{equation}
\begin{lemma}[Combined dilation + charge identity]\label{L:dilat id2} Assume $p< 4/(d-1)$ so that $\alpha:=\frac12-s_c > 0$ and let $u$ be a strong solution to \eqref{E:eqn} in the light cone \eqref{E:basic cone}. Let \begin{equation*} \mathfrak{z}^0 := \tfrac1{2t}\bigl|x\!\;\!\cdot\!\nabla u + t u_t + \tfrac2p u\bigr|^2 + \tfrac{t}2\bigl(|\nabla u|^2 - |\tfrac{x}t \!\cdot\!
\nabla u|^2\bigr) - \tfrac{t}{p+2}|u|^{p+2} + \bigl(\tfrac{m^2t}{2} + \tfrac{p+2}{p^2 t}\bigr) u^2. \end{equation*} Then \begin{align} \int_{|x|<t_1} & \mathfrak{z}^0(t_1,x) (t_1^2-|x|^2)^\alpha \,dx - \int_{|x|<t_0} \mathfrak{z}^0(t_0,x) (t_0^2-|x|^2)^\alpha\,dx \label{E:AM mono} \\ = & \int_{t_0}^{t_1} \!\!\!\int_{|x|<t} \!\! 2 \bigl| x\cdot\!\nabla u + t u_t + \tfrac2p u \bigr|^2 \alpha (t^2-|x|^2)^{\alpha-1} + m^2 u^2 (t^2-|x|^2)^\alpha dx\,dt \notag \end{align} for all $0<t_0<t_1\leq T$. \end{lemma}
\begin{proof} We will prove the identity in slightly greater generality, by employing \eqref{E:general z id} for a general $\psi(t^2-|x|^2)$ with $\psi$ homogeneous of degree $\alpha$ and with \begin{equation} \label{E:mcz defn} \begin{aligned} \mathfrak{z}^0 = \mathfrak{d}^0 + \alpha \mathfrak{q}^0 + \tfrac1p\nabla \! \cdot \! \vec f + \tfrac{2\alpha}{p} g \qtq{and} \vec\mathfrak{z} = \vec\mathfrak{d} + \alpha \vec\mathfrak{q} - \tfrac1p\partial_t \vec f . \end{aligned} \end{equation} Here $\mathfrak{d}$ and $\mathfrak{q}$ represent the tensors associated with the dilation and charge identities (as in \eqref{E:m dilation} and \eqref{E:m charge id}, respectively), while $$ \vec f(t,x) = \frac{x}t |u(t,x)|^2 \qtq{and} g(t,x) = t^{-1} |u(t,x)|^2. $$
A little patience is all that is required to show that this definition of $\mathfrak{z}^0$ agrees with that stated in the lemma. Notice that the $\vec f$ terms in \eqref{E:mcz defn} differ only in the prefactor from those appearing in \eqref{E:frak l}; equality of mixed partial derivatives shows that this term does not affect the divergence. It was chosen to complete squares in a formula below. The additional summand $g$ appearing in \eqref{E:mcz defn} has no analogue in our previous computations. It is not clear how one might intuit the introduction of this term; however, if one proceeds without it, then the left-over terms can be recognized as a complete derivative, which explains its inclusion \emph{a posteriori}.
With these preliminaries out of the way, applying \eqref{E:general z id} we obtain \begin{equation*} \begin{aligned} \int_{{\mathbb{R}}^d} \mathfrak{z}^0(t_1,x) & \psi(t_1^2-|x|^2)\,dx - \int_{{\mathbb{R}}^d} \mathfrak{z}^0(t_0,x) \psi(t_0^2-|x|^2)\,dx \\ &= \int_{t_0}^{t_1} \!\!\!\int_{{\mathbb{R}}^d} 2 \bigl| x\cdot\!\nabla u(t,x) + t u_t(t,x) + \tfrac2p u(t,x) \bigr|^2 \psi'(t^2-|x|^2) \,dx\,dt \\ & \qquad\qquad + \int_{t_0}^{t_1} \!\!\! \int_{{\mathbb{R}}^d} m^2 |u(t,x)|^2 \psi(t^2-|x|^2) \,dx\,dt. \end{aligned} \end{equation*} Specializing to \begin{equation*} \psi(t^2- |x|^2) = \begin{cases} (t^2 - |x|^2)^\alpha &: |x| < t \\ 0 &: |x|\geq t \end{cases} \end{equation*} we obtain the identity stated in the lemma. \end{proof}
Since the integrand on the right-hand side of \eqref{E:AM mono} is positive, this identity shows the monotonicity in time of the function \begin{equation} \label{E:Z(t)} Z(t) := \int_{|x|<t} \mathfrak{z}^0(t,x) (t^2-|x|^2)^\alpha \,dx. \end{equation} After switching to self-similar variables (cf. \eqref{E:ss vars}), this agrees with the Lyapunov functional introduced by Antonini and Merle in \cite{AntoniniMerle} and used subsequently in \cite{MerleZaagAJM}. As we have seen, this functional is less directly deducible from the dilation identity than that discussed in Lemma~\ref{L:dilat id}, which appeared in the later paper \cite{MerleZaagMA} of Merle and Zaag. Note that formally taking the limit $p\nearrow 4/(d-1)$ in Lemma~\ref{L:dilat id2} yields Lemma~\ref{L:dilat id}. While we find the physical meaning of these Lyapunov functionals difficult to properly understand when written in self-similar variables, the example discussed in Lemma~\ref{L:dilat id2} makes a good case for the utility of self-similar variables as a technique for finding such laws.
In the two key examples discussed in this section, the correct multiplier appears after switching to similarity variables and converting the resulting equation to divergence form; compare \cite[Eqn (4)]{MerleZaagAJM} and \cite[Eqn (1.22)]{GigaKohn:CPAM}. Next we give analogues of Lemma~\ref{L:L>0} and \eqref{E:L flux ineq 2} for the functional $Z$. \begin{corollary}\label{C:Z id} Let $d \geq 2$, $m \in [0,1]$, and $p = \frac4{d-2s_c}$ with $0< s_c < \tfrac12$. If $u$ is a strong solution to \eqref{E:eqn} in the light cone $\{(t,x):0 < t \leq T, |x|<t\}$, then $Z(t)\geq 0$ for all $0 < t \leq T$. Moreover, \begin{align}\label{E:smudge Z} \!\!\int_{t_0}^{2t_0} \!\!\!\!\int_{|x|<t} (t-|x|)^{d+2-2s_c} |\nabla_{t,x} u(t,x)|^2 + (t-|x|)^{d-2s_c} |u(t,x)|^2 \,dx\,dt \lesssim t_0^{d+1}, \! \end{align} uniformly for $0<t_0\leq\frac12T$. \end{corollary} \begin{proof} The proof that $Z(t)\geq 0$ is identical to that of Lemma~\ref{L:L>0}. To prove \eqref{E:smudge Z} we argue much as we did for \eqref{E:L flux ineq 2}. First we observe that applying Lemma~\ref{L:dilat id2} to the light cone with apex $(s,y)$ yields $$ \int_s^T \int_{|x-y|<t-s} \bigl| (x-y)\cdot\!\nabla u + (t-s) u_t + \tfrac2p u \bigr|^2 \bigl\{(t-s)^2-|x-y|^2\bigr\}^{\alpha-1} dx\,dt \lesssim 1, $$ where the implicit constant depends on $T$ and the $H^1_x\times L^2_x$ norm of $(u(T),u_t(T))$ on the ball $\{|x|<T\}$. Next we integrate this inequality over the region $|y|<s<2t_0$ and so deduce $$ \int\!\!\int\!\!\int\!\!\int \bigl| (x-y)\cdot\!\nabla u + (t-s) u_t + \tfrac2p u \bigr|^2 \bigl\{(t-s)^2-|x-y|^2\bigr\}^{\alpha-1} dx\,dt\,dy\,ds \lesssim t_0^{d+1}, $$ where the integral is over a region which contains $$ \Omega := \{ (s,y,t,x) : t_0<t<2t_0,\ \tfrac{t+|x|}{2} < s < t,\ |x-y| < t-s,\text{ and } |x|<t\}. $$ Freezing $(t,x)$ and integrating out $y$ and then $s$ produces the estimate \eqref{E:smudge Z}. 
\end{proof}
\section{Bounds in light cones: The super-conformal case}\label{S:superconf blow}
In this section, we consider the super-conformal case, $\frac12 < s_c < 1$. Little seems to be known about the behaviour of local norms for blowup solutions in this case. In particular, the work of Merle and Zaag \cite{MerleZaagAJM, MerleZaagMA, MerleZaagIMRN} only considers the conformal and sub-conformal cases ($0 < s_c \leq \frac12$). The majority of this section will be devoted to a proof of the following theorem:
\begin{theorem}[Mass/Energy bounds] \label{T:cone bound superc} Let $d \geq 2$, $m \in [0,1]$, and $p = \frac4{d-2s_c}$ with $\frac12 < s_c < 1$. If $u$ is a strong solution to \eqref{E:eqn} in the light cone $\{(t,x):0 < t \leq T, |x|<t\}$, then $u$ satisfies \begin{equation} \label{E:cone bound superc} \begin{aligned} \int_0^{T} \int_{|x|<t} \bigl(1-\tfrac{|x|}{t}\bigr)^2 & |\nabla_{t,x}u(t,x)|^2 + |u(t,x)|^{p+2}\, dx\, dt \lesssim 1, \end{aligned} \end{equation} as well as the pointwise in time bounds \begin{equation} \label{E:mass bound superc} \int_{|x|< t} |u(t,x)|^2\, dx \lesssim t^{\frac{pd}{p+4}} \qtq{and} \int_{|x|< t} |u(t,x)|^{\frac{p+4}2}\, dx \lesssim 1 \end{equation} for all $t \in (0,T]$. The implicit constants in \eqref{E:cone bound superc} and \eqref{E:mass bound superc} depend on $d$, $p$, $T$, $\|u(T)\|_{H^1_x(|x|<T)}$, and $\|u_t(T)\|_{L^2_x (|x|<T)}$. \end{theorem}
\begin{proof} We rely on the information provided by the Lyapunov functional $L$ introduced in the previous section. Specifically, we will make use of Lemmas~\ref{L:dilat id}~and~\ref{L:L>0}, which assert that \begin{equation} \label{E:L superc} L(t) := \int_{|x| < t} \mathfrak{l}^0(t,x)\, dx \end{equation} is a nonnegative increasing function of time. To keep formulae within margins, we will not keep track of the specific dependence on $T$ or $(u(T),u_t(T))$ in the estimates that follow. We first consider \eqref{E:cone bound superc}.
Combining Lemmas~\ref{L:dilat id}~and~\ref{L:L>0}, we immediately obtain \begin{align} \label{E:p+2 superc} \int_0^{T} \int_{|x|<t} |u(t,x)|^{p+2}\, dx\, dt &\lesssim L(T)\lesssim 1. \end{align} Thus there exists a sequence $t_n \searrow 0$ such that $$ \lim_{n \to \infty} t_n\|u(t_n)\|_{L_x^{p+2}(|x|<t_n)}^{p+2} = 0. $$ Therefore, if we define $$ g(t) := \int_{|x|<t} \tfrac{t^2-|x|^2}t \mathfrak{e}^0(t,x)\, dx, $$ then $$ \liminf_{n \to \infty} g(t_n) \geq 0. $$ Thus, combining Lemma~\ref{L:E flux} and \eqref{E:p+2 superc}, we obtain \begin{align*} \int_0^{T} \int_{|x| < t} &\tfrac14\bigl(1+\tfrac{|x|}t\bigr)^2\bigl[u_t(t,x)+u_r(t,x)\bigr]^2 + \tfrac14\bigl(1-\tfrac{|x|}t\bigr)^2\bigl[u_t(t,x)-u_r(t,x)\bigr]^2\\ &\qquad +\tfrac12\bigl(1+\tfrac{|x|^2}{t^2}\bigr)|\nabslash u(t,x)|^2 \, dx\, dt\\ & \leq g(T) + \int_0^{T} \int_{|x| < t} \tfrac1{p+2}\bigl(1+\tfrac{|x|^2}{t^2}\bigr)|u(t,x)|^{p+2}\, dx\, dt \lesssim 1. \end{align*} This bounds each of the derivatives of $u$ with respect to the frame $\nabslash$, $\partial_t + \partial_r$, and $\partial_t - \partial_r$, which spans all possible spacetime directions. The estimate for the last of these is the weakest, since it deteriorates near the edge of the cone, and so dictates the form of~\eqref{E:cone bound superc}. We turn now to \eqref{E:mass bound superc}. Observe that the first inequality follows from the second and H\"older's inequality. Thus, it remains to establish the second bound in \eqref{E:mass bound superc}. With the estimates at hand, there are several ways to proceed. The method we present below is informed by the needs of Section~\ref{S:conf blow}; in particular, it uses only the estimates available in that case. We first notice that, as a consequence of \eqref{E:frak l0}, \eqref{E:cone bound superc}, and the monotonicity of $L(t)$, \begin{align}\label{t0 bound} \int_{t_0}^{2t_0} \!\!\int_{|x|<t} \! 
\bigl|x\cdot \nabla u + tu_t +\tfrac{d-1}2u \bigr|^2\, dx\, dt &\leq \int_{t_0}^{2t_0} \!2tL(t)\, dt + \int_{t_0}^{2t_0} \!\!\int_{|x|<t} \!\tfrac{2t^2}{p+2}|u|^{p+2}\, dx\, dt\notag\\ &\lesssim t_0^2 \end{align} for all $0<t_0\leq \frac12 T$. Moreover, by H\"older and \eqref{E:cone bound superc}, we can control the desired quantity but only on average in time: \begin{align}\label{t0 bound q} \int_{t_0}^{2t_0}\!\!\! \int_{|x|<t} |u(t,x)|^{\frac{p+4}2}\, dx\, dt &\lesssim t_0^{\frac{p(d+1)}{2(p+2)}}\biggl[\int_{t_0}^{2t_0}\!\!\! \int_{|x|<t}|u(t,x)|^{p+2}\, dx\, dt\biggr]^{\frac{p+4}{2(p+2)}}\notag\\ &\lesssim t_0^{\frac{p(d+1)}{2(p+2)}}. \end{align} Next we will prove that the estimate \eqref{t0 bound q} together with the $L^2_{t,x}$ control \eqref{t0 bound} over the directional derivatives of $u$ inside the light cone imply the desired pointwise in time estimates. Observe that for a $C^1$ function $f:[1,2]\to {\mathbb{R}}$, \begin{align*} \sup_{1\leq \lambda\leq 2} |f(\lambda)| &\leq \int_1^2 |f(\lambda)| \,d\lambda + \int _1^2 |f'(\lambda)|\,d\lambda. \end{align*} The second part of \eqref{E:mass bound superc} follows by applying this to $f(\lambda):= \int_{|x|<t_0} |\lambda^{\frac{d-1}2} u(\lambda t_0, \lambda x)|^{\frac{p+4}2} \, dx$ and using Cauchy--Schwarz, \eqref{E:cone bound superc}, \eqref{t0 bound}, and \eqref{t0 bound q}. Indeed, \begin{align*} &\sup_{t_0\leq t\leq 2t_0} \int_{|x|<t} |u(t,x)|^{\frac{p+4}2}\, dx\\ &\quad \lesssim \sup_{1\leq \lambda\leq 2} |f(\lambda)|\\ &\quad\lesssim \frac1{t_0}\int_{t_0}^{2t_0} \!\!\!\int_{|x|<t} |u|^{\frac{p+4}2}\, dx\, dt\\ &\qquad +\frac1{t_0} \biggl(\int_{t_0}^{2t_0} \!\!\!\int_{|x|<t}\! |u|^{p+2}\, dx\, dt\biggr)^{\!1/2} \biggl(\int_{t_0}^{2t_0} \!\!\!\int_{|x|<t}\bigl|x\cdot \nabla u + tu_t +\tfrac{d-1}2u \bigr|^2dx\, dt\biggr)^{\!1/2}\\ &\quad \lesssim t_0^{\frac{p(d-1)-4}{2(p+2)}} + 1\\ &\quad \lesssim_T 1. \end{align*} The last inequality relies on the super-conformality hypothesis, namely, $p(d-1)>4$. 
This completes the proof of Theorem~\ref{T:cone bound superc}. \end{proof} We conclude this section with a corollary of Theorem~\ref{T:cone bound superc}. \begin{corollary} \label{C:cone bound superc} Let $d \geq 2$, $m \in [0,1]$, and $\frac12 < s_c < 1$. Set $p=\frac4{d-2s_c}$. Assume that there exists $0 < \varepsilon \leq 1$ such that $u$ is a strong solution to \eqref{E:eqn} in the cone $\{(t,x):0 < t \leq T, |x|<(1+\varepsilon)t\}$. Then for each $0 < t_0 \leq \frac T2$, \begin{equation} \label{E:dyadic bound superc} \begin{aligned} &\int_{t_0}^{2t_0} \int_{|x|<t} |\nabla_{t,x} u(t,x)|^2\, dx\, dt \lesssim 1, \end{aligned} \end{equation} with the implicit constant depending on $d$, $p$, $\varepsilon$, $T$, and the $H^1_x\times L_x^2$ norm of $(u(T), u_t(T))$ on the ball $\{|x|<T\}$. \end{corollary} We note that the assumption that $u$ is defined in the cone $\{(t,x):0 < t \leq T, |x|<(1+\varepsilon)t\}$ is equivalent to the assumption that $(0,0)$ is not a characteristic point of the (backwards in time) blowup surface of $u$. \begin{proof} We begin with a simple covering argument. There exist $N$, depending on $\varepsilon$ and $d$, and a set $\{x_j\}_{j=1}^N$ with $|x_j|<(1+\varepsilon)\frac{t_0}2$ such that $$ \{x :|x|<t_0\} \subset \bigcup_{j=1}^N \{x:|x-x_j|<(1-\tfrac\varepsilon2)\tfrac{t_0}2\}. $$ Therefore, \begin{align*} &\{(t,x):t_0 \leq t \leq 2t_0, \, |x|<t\}\\ &\qquad \subset \bigcup_{j=1}^N \{(t,x) :t_0 \leq t \leq 2t_0, \, |x-x_j|<(1-\tfrac\varepsilon2)\tfrac{t_0}2 + (t-t_0)\}\\ &\qquad \subset \bigcup_{j=1}^N \{(t,x):t_0 \leq t \leq 2t_0, \, |x-x_j| < (1-\tfrac\varepsilon{8})(t-\tfrac{t_0}2)\}. 
\end{align*} By assumption, $u$ is defined on each light cone $$ \{(t,x):\tfrac12t_0<t \leq T, \, |x-x_j| < t-\tfrac{t_0}2 \}, $$ so, by \eqref{E:cone bound superc}, \begin{align*} &\int_{t_0}^{2t_0} \int_{|x-x_j|<(1-\frac{\varepsilon}8)(t-\frac{t_0}2)}|\nabla_{t,x}u(t,x)|^2\, dx\, dt \\ &\qquad \lesssim_\varepsilon \int_{\frac{t_0}2}^{T} \int_{|x-x_j|<t-\frac{t_0}2} \bigl(1-\tfrac{|x-x_j|}{t-\tfrac{t_0}2}\bigr)^2|\nabla_{t,x}u(t,x)|^2\, dx\, dt \lesssim_\varepsilon 1. \end{align*} Summing this inequality over $1 \leq j \leq N$, we derive the claim. \end{proof} \section{Bounds in light cones: The conformal and sub-conformal cases}\label{S:conf blow} The goal of this section is to give pointwise in time upper bounds on the blowup rate of solutions to \eqref{E:eqn} in the conformal and sub-conformal cases, that is, when $0<s_c\leq \frac12$. \begin{theorem} \label{T:cone bound subc} Let $d \geq 2$, $m \in [0,1]$, $0 < s_c \leq \tfrac12$, and $p=\frac4{d-2s_c}$. If $u$ is a strong solution to \eqref{E:eqn} in the light cone $\{ (t,x) : 0<t\leq T \text{ and } |x|<t \}$, then \begin{equation} \label{E:cone bound subc} \int_{|x|<t/2} t^{-2s_c}|u(t,x)|^2 + t^{2(1-s_c)}|\nabla_{t,x}u(t,x)|^2\, dx \lesssim 1. \end{equation} The implicit constant depends on $d, s_c, T,$ and the $H^1_x\times L_x^2$ norm of $(u(T),u_t(T))$ on the ball $\{|x|<T\}$. \end{theorem} For the nonlinear wave equation, that is, \eqref{E:eqn} with $m=0$, this theorem was proved by Merle and Zaag in \cite{MerleZaagIMRN}, building on earlier work \cite{MerleZaagAJM,MerleZaagMA} that considered solutions defined in a spacetime slab. This result describes the behaviour of solutions near a general blowup surface $t=\sigma(x)$, as defined in the Introduction. In particular, in the case of a non-characteristic point, a simple covering argument yields \eqref{E:cone bound subc} with integration over the larger region $|x|<t$. 
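For orientation, we record how the parametrizations by $s_c$ and by $p$ used in this section correspond; this is simply arithmetic based on the relation $p=\frac{4}{d-2s_c}$:
$$ s_c = \frac d2 - \frac 2p, \qquad s_c = \tfrac12 \iff p = \tfrac{4}{d-1}, \qquad 0 < s_c < \tfrac12 \iff \tfrac4d < p < \tfrac{4}{d-1}. $$
In particular, the conformal case is $p=\frac4{d-1}$, while the sub-conformal range corresponds to $p<\frac4{d-1}$, where $\alpha=\frac12-s_c>0$.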
The arguments of Merle and Zaag adapt \emph{mutatis mutandis} to the Klein--Gordon equation \eqref{E:eqn}, since the mass term always appears with the helpful sign. However, our Lemma~\ref{L:dilat id} and Corollary~\ref{C:Z id} allow us to streamline the arguments of \cite{MerleZaagAJM}, \cite{MerleZaagMA}, and \cite{MerleZaagIMRN}. We focus first on the conformal case; the discussion of the sub-conformal case can be found at the end of this section. In the conformal case, our argument relies on \eqref{E:L flux ineq 2}, which gives control over all directional derivatives of the solution; this should be compared with Proposition~2.4 in \cite{MerleZaagMA} and Proposition~4.2 in \cite{MerleZaagIMRN}, which only provide control over a subset of directional derivatives. An immediate consequence of \eqref{E:L flux ineq 2} is \begin{align}\label{E:deriv on ball} \int_{t_0}^{2t_0}\!\!\! \int_{|x|<t-\frac1{10}t_0} |\nabla_{t,x} u|^2 + t_0^{-2} |u|^2 + t_0^{-2}\bigl|x\cdot\nabla u +tu_t +\tfrac{d-1}2 u\bigr|^2\, dx\, dt \lesssim 1, \end{align} uniformly in $0<t_0\leq \frac12 T$. Applying \eqref{E:L flux ineq 2} to a spacetime translate of our solution yields \begin{align}\label{E:deriv on alpha ball} \int_{t_0}^{(1+\alpha) t_0}\!\!\! \int_{|x-x_0|<\alpha t} \bigl|(x-x_0)\cdot\nabla u +tu_t +\tfrac{d-1}2 u\bigr|^2\, dx\, dt \lesssim \alpha t_0^2, \end{align} uniformly for $0<\alpha\leq \frac1{10}$, $|x_0|<\frac45 t_0$, and $[t_0,(1+\alpha)t_0]\subseteq (0,T]$. For comparison, see the proof of Proposition~3.1 in \cite{MerleZaagMA} and Proposition~4.2 in \cite{MerleZaagIMRN}. Next we transfer the estimate \eqref{E:deriv on ball} to a bound on the potential energy.
To do this, we will employ a translated version of the functional $$ L(t) = \int_{|x|<t} \mathfrak{l}^0(t,x)\, dx, $$ introduced in Section~\ref{S:Lyapunov}; recall that $\mathfrak{l}^0$ is defined as $$ \mathfrak{l}^0 = \tfrac1{2t}|x\cdot \nabla u + tu_t + \tfrac{d-1}2u|^2 + \tfrac{t}2(|\nabla u|^2 - |\tfrac{x}t \cdot \nabla u|^2) - \tfrac{(d-1)t}{2(d+1)}|u|^{\frac{2(d+1)}{d-1}} + \tfrac{d^2-1}{8t}u^2 + t\tfrac{m^2}2u^2. $$ By Lemmas~\ref{L:dilat id}~and~\ref{L:L>0}, $L$ is a nonnegative increasing function of time. Using this functional adapted to the cone $\{|x|<t-\frac1{10}t_0\}$, specifically the fact that $L\geq 0$, we deduce \begin{align}\label{E:pot on ball} \int_{t_0}^{2t_0}\!\!\! \int_{|x|<t-\frac1{10}t_0} |u(t,x)|^{\frac{2(d+1)}{d-1}}\, dx\, dt \lesssim \text{LHS\eqref{E:deriv on ball}} \lesssim 1, \end{align} uniformly in $0<t_0\leq \frac12 T$. To prove Theorem~\ref{T:cone bound subc}, we need to upgrade the averaged in time estimates obtained above to pointwise in time estimates. The first step is the following result: \begin{lemma}[Pointwise in time estimates on the mass and a critical norm] \label{L:ptwise mass subc}\leavevmode\\ Let $0<t_0\leq \frac12 T$. For $t \in [t_0, 2t_0]$ we have \begin{equation} \label{E:ptwise mass subc} \int_{|x|<t-\frac1{10}t_0} |u(t,x)|^2\, dx \lesssim t_0 \end{equation} and \begin{equation} \label{E:ptwise q subc} \int_{|x-x_0|<\alpha t} |u(t,x)|^{\frac{2d}{d-1}}\, dx \lesssim \alpha^{1/2}, \end{equation} whenever $0<\alpha\leq \frac1{10}$ and $|x_0|<\frac45 t_0$. \end{lemma} \begin{proof} The proof follows the argument used to establish \eqref{E:mass bound superc}. To derive \eqref{E:ptwise mass subc}, one uses the function $f(\lambda):= \int_{|x|<t_0} |\lambda^{\frac{d-1}2} u(\lambda t_0,\lambda x)|^2\, dx$, while for \eqref{E:ptwise q subc} one uses the original version of $f$ with $p=\frac{4}{d-1}$. 
We need two ingredients: The first ingredient is an integral bound over the directional derivatives of $u$ on the appropriate cone; the role of the first ingredient in the current setting is played by \eqref{E:deriv on ball} and \eqref{E:deriv on alpha ball}. The second ingredient we need is averaged in time estimates for the left-hand sides of \eqref{E:ptwise mass subc} and \eqref{E:ptwise q subc}; the role of the second ingredient is played by \eqref{E:deriv on ball} and \begin{align*} \int_{t_0}^{(1+\alpha)t_0} \!\!\! &\int_{|x-x_0|<\alpha t} |u(t,x)|^{\frac{2d}{d-1}}\, dx\, dt\\ &\lesssim t_0\alpha^{\frac{d}{d+1}}\biggl[\int_{t_0}^{(1+\alpha)t_0}\!\!\! \int_{|x-x_0|<\alpha t}|u(t,x)|^{\frac{2(d+1)}{d-1}}\, dx\, dt\biggr]^{\frac{d}{d+1}} \lesssim t_0\alpha^{\frac{d}{d+1}}, \end{align*} which follows from H\"older and \eqref{E:pot on ball}. \end{proof} The simple argument just used does not allow us to upgrade our integrated gradient or potential energy estimates to versions that are pointwise in time. We will instead employ a bootstrap argument close to that in the work of Merle and Zaag. The requisite smallness is provided by \eqref{E:ptwise q subc} by choosing $\alpha$ small enough. Combining this estimate with the Gagliardo--Nirenberg inequality gives \begin{align} \int_{|x-x_0|<r} & |u(t,x)|^{\frac{2(d+1)}{d-1}}\, dx \notag\\ &\lesssim\biggl[\int_{|x-x_0|<r} |u(t,x)|^{\frac{2d}{d-1}}\, dx\biggr]^{\frac2d} \int_{|x-x_0|<r} |\nabla u(t,x)|^2 + \tfrac1{r^2} |u(t,x)|^2\, dx \notag\\ &\lesssim \alpha^{1/d}\int_{|x-x_0|<r} |\nabla u(t,x)|^2 + \tfrac1{r^2} |u(t,x)|^2\, dx, \label{E:nonconc} \end{align} uniformly for $0<\alpha\leq \frac1{10}$, $r<\alpha t$, and $|x_0|<\frac45 t$. 
To obtain an inequality in the opposite direction, we use boundedness of the functional $L$ adapted to the cone $\{(s,y) : |y-x_0|<r+s-t\}$ together with the observation \begin{align*} \bigl(1-\tfrac{|x|^2}{t^2}\bigr) |\nabla_{t,x} u|^2 &\lesssim t^{-2}|x\cdot \nabla u + tu_t + \tfrac{d-1}2u|^2 + (|\nabla u|^2 - |\tfrac{x}t \cdot \nabla u|^2) + t^{-2}u^2\\ &\lesssim t^{-1} \mathfrak{l}^0 + |u|^{\frac{2(d+1)}{d-1}}. \end{align*} This gives \begin{align}\label{E:grad from pot} \int_{|x-x_0|<r}\bigl( 1-\tfrac{|x-x_0|^2}{r^2} \bigr) |\nabla_{t,x} u(t,x)|^2\, dx \lesssim \tfrac1r + \int_{|x-x_0|<r} |u(t,x)|^{\frac{2(d+1)}{d-1}}\,dx, \end{align} where the coefficient of $1/r$ depends on the norm of $(u(T),u_t(T))$ via the monotonicity of $L$. Combining \eqref{E:nonconc} and \eqref{E:grad from pot} yields \begin{align}\label{almost} \int_{|x-x_0|<\frac12 r} |\nabla u(t,x)|^2\, dx &\lesssim \tfrac1r + \alpha^{1/d}\int_{|x-x_0|<r} |\nabla u(t,x)|^2 + \tfrac1{r^2} |u(t,x)|^2\, dx, \end{align} which is not immediately amenable to bootstrap because the two regions of integration are different. To remedy this, we set $R=\frac35t$ and $r= \frac\alpha3(R-|x_0|)$ and apply the following averaging operator to both sides: $$ f(x_0) \mapsto \frac1{R^{d+2}} \int_{|x_0|<R} (R-|x_0|)^2 f(x_0) \,dx_0. $$ We note that $$ \bigl\{ (x_0,x) : |x_0|\!<\!R,\ |x-x_0| \!<\! \tfrac\alpha3(R-|x_0|) \bigr\} \subseteq \bigl\{ (x_0,x) : |x|\!<\!R,\ |x_0-x| \!<\! \tfrac\alpha2(R-|x|) \bigr\} $$ and $$ \bigl\{ (x_0,x) : |x_0|\!<\!R,\ |x_0-x|\! <\! \tfrac\alpha6(R-|x_0|) \bigr\} \supseteq \bigl\{ (x_0,x) : |x|\!<\!R,\ |x-x_0| \!<\! \tfrac\alpha7(R-|x|) \bigr\}, $$ and that $R-|x|\sim R-|x_0|$ on any of these sets.
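For the reader's convenience, here is a sketch of why the first of these containments holds (the second is checked in the same way); it uses only the triangle inequality and $\alpha\leq\frac1{10}$: $$ |x-x_0|<\tfrac{\alpha}{3}(R-|x_0|) \;\Longrightarrow\; R-|x| \geq R-|x_0|-|x-x_0| \geq \bigl(1-\tfrac{\alpha}{3}\bigr)(R-|x_0|) > 0, $$ so $|x|<R$, and then $$ |x_0-x| < \tfrac{\alpha}{3}(R-|x_0|) \leq \tfrac{\alpha/3}{1-\alpha/3}(R-|x|) \leq \tfrac{\alpha}{2}(R-|x|). $$ The same computation yields the two-sided comparison $R-|x|\sim R-|x_0|$.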
Using Fubini, we deduce \begin{align*} \int_{|x|<R} \bigl( 1 - \tfrac{|x|}R\bigr)^{d+2} &|\nabla_{t,x} u(t,x)|^2 \, dx\\ &\lesssim (\alpha R)^{-1} +\alpha^{1/d}\int_{|x|<R}\bigl( 1 - \tfrac{|x|}R\bigr)^{d+2} |\nabla u(t,x)|^2 \, dx \\ &\quad + (\alpha R)^{-2}\alpha^{1/d}\int_{|x|<R} \bigl( 1 - \tfrac{|x|}R\bigr)^{d} |u(t,x)|^2\, dx. \end{align*} Choosing $\alpha$ sufficiently small and recalling $R=\frac35t$ and \eqref{E:ptwise mass subc}, we obtain \begin{align}\label{tada} \int_{|x|<\frac35t} \bigl( 1 - \tfrac{5|x|}{3t}\bigr)^{d+2} |\nabla_{t,x} u(t,x)|^2 \, dx \lesssim t^{-1}, \end{align} which yields the requisite bound on the spacetime gradient of $u$. To finish the proof of \eqref{E:cone bound subc}, we merely note that the bound on the $L^2_x$ norm was obtained already in Lemma~\ref{L:ptwise mass subc}. This completes the proof of Theorem~\ref{T:cone bound subc} in the conformal (i.e., $s_c=1/2$) case. Our argument for the sub-conformal case is similar, but slightly simpler, with \eqref{E:smudge Z} taking over the role played above by \eqref{E:L flux ineq 2}. In particular we have the following analogue of \eqref{E:deriv on ball} \begin{align}\label{E:D ball} \int_{t_0}^{2t_0}\!\!\! \int_{|x|<t-\frac1{10}t_0} |\nabla_{t,x} u|^2 + t_0^{-2} |u|^2 + t_0^{-2}\bigl|x\cdot\nabla u +tu_t +\tfrac2p u\bigr|^2\, dx\, dt \lesssim t_0^{2s_c -1}. \end{align} From this and the fact that $Z\geq 0$, we deduce \begin{align}\label{E:P ball} \int_{t_0}^{2t_0}\!\!\! \int_{|x|<t-\frac1{10}t_0} |u(t,x)|^{p+2}\, dx\, dt \lesssim t_0^{2s_c -1}. \end{align} Using the same argument as in Lemma~\ref{L:ptwise mass subc} and~\eqref{E:mass bound superc}, modifying $f(\lambda)$ as needed, we obtain \begin{equation}\label{E:wk pt} \int_{|x|<\frac9{10} t} |u(t,x)|^2\, dx \lesssim t^{2 s_c} \qtq{and} \int_{|x|<\frac9{10} t} |u(t,x)|^{\frac{p+4}2}\, dx \lesssim t^{2s_c -1}. \end{equation} Let $\gamma:=\frac 12-s_c$. 
Using the boundedness of the functional $Z$ associated to the cone with apex $(t-r,x_0)$, we obtain \begin{align}\label{p4g} \int_{|x-x_0|<r} \!\bigl(1 - \tfrac{|x-x_0|^2}{r^2}\bigr)^{\gamma + 1}|\nabla_{t,x} u|^2 \, dx \lesssim r^{2s_c-2} + \int_{|x-x_0|<r} \!\bigl(1 - \tfrac{|x-x_0|^2}{r^2}\bigr)^{\gamma}|u|^{p+2} \, dx, \end{align} for all $|x_0|<\frac45 t$ and $0<r < \tfrac1{10} t$. This plays the role of \eqref{E:grad from pot}. In order to obtain an upper bound on the potential energy we use the Gagliardo--Nirenberg inequality: \begin{align*} \int_{|x-x_0|<r} & \bigl(1 - \tfrac{|x-x_0|^2}{r^2}\bigr)^{\gamma}|u|^{p+2} \, dx\\ &\lesssim \biggl[ \int_{|x-x_0|<r} |u|^{\frac{p+4}2} \, dx\biggr]^{\frac{8-2p(d-2)}{8-p(d-2)}} \biggl[ \int_{|x-x_0|<r} |\nabla u|^{2} + r^{-2} |u|^2 \, dx\biggr]^{\frac{pd}{8-p(d-2)}}. \end{align*} Incorporating \eqref{E:wk pt} we deduce \begin{align*} \int_{|x-x_0|<r} \bigl(1 - \tfrac{|x-x_0|^2}{r^2}\bigr)^{\gamma} &|u|^{p+2} \, dx \\ &\lesssim t^{(2s_c-1)\frac{8-2p(d-2)}{8-p(d-2)}}\biggl[ r^{-2} t^{2s_c} + \int_{|x-x_0|<r} |\nabla u|^{2} \, dx\biggr]^{\frac{pd}{8-p(d-2)}}. \end{align*} Combining this estimate with \eqref{p4g} yields \begin{align*} \int_{|x-x_0|<r/2} |\nabla_{t,x} u|^2 \, dx \lesssim r^{2s_c-2} + t^{2s_c-2} \biggl[ r^{-2} t^{2} + t^{2-2s_c} \!\!\int_{|x-x_0|<r} |\nabla u|^{2} \, dx\biggr]^{\frac{pd}{8-p(d-2)}}. \end{align*} The basic bootstrap relation follows by setting $r=\tfrac13(\frac45 t-|x_0|)$ and applying the averaging operator $$ f(x_0) \mapsto \frac1{t^{d+2}} \int_{|x_0|< \frac45 t} f(x_0) (\tfrac{4t}5 -|x_0| )^2 \,dx_0 $$ to both sides. 
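It may help to record the elementary absorption fact that will close the ensuing bootstrap (a standard observation, stated here in generic form): if $0<\theta<1$, $C\geq1$, and a finite quantity $Y\geq0$ satisfies $Y\leq C(1+Y)^{\theta}$, then $$ Y\geq 1 \;\Longrightarrow\; Y\leq C(2Y)^{\theta} \;\Longrightarrow\; Y^{1-\theta}\leq 2^{\theta}C \;\Longrightarrow\; Y\leq (2C)^{\frac{1}{1-\theta}}, $$ so in all cases $Y\leq(2C)^{\frac{1}{1-\theta}}$. Below, this is applied with $\theta=\frac{pd}{8-p(d-2)}$, which is smaller than one in the regime considered here.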
A little patience and Jensen's inequality then yield \begin{align*} \int_{|x|<\frac45 t} \bigl(1 - \tfrac{5|x|^2}{4t^2}\bigr)^{d + 2}& |\nabla_{t,x} u|^2 \, dx\\ &\lesssim t^{2s_c-2} \biggl[ 1+ t^{2-2s_c}\int_{|x|<\frac45 t} \bigl(1 - \tfrac{5|x|^2}{4t^2}\bigr)^{d + 2} |\nabla u|^{2} \, dx\biggr]^{\frac{pd}{8-p(d-2)}}, \end{align*} in much the same manner as in the conformal case. Since the last power here is smaller than one, this inequality yields $$ \int_{|x|<\frac45 t} \bigl(1 - \tfrac{5|x|^2}{4t^2}\bigr)^{d + 2} |\nabla_{t,x} u(t,x)|^2 \, dx \lesssim t^{2s_c-2}. $$ This immediately implies the estimate on the spacetime gradient stated in Theorem~\ref{T:cone bound subc} in the sub-conformal case. The stated estimate on the $L^2_x$-norm was given in \eqref{E:wk pt}. This completes the proof of Theorem~\ref{T:cone bound subc}. \end{document}
\begin{document} \author{Joshua Isralowitz} \address{Department of Mathematics, University at Albany, SUNY, Albany, NY, 12159 } \email{[email protected]} \maketitle \begin{abstract} In this paper we refine the recent sparse domination of the integrated $p = 2$ matrix weighted dyadic square function by T. Hyt\"{o}nen, S. Petermichl, and A. Volberg to prove a pointwise sparse domination of general matrix weighted dyadic square functions. We then use this to prove sharp two matrix weighted strong type inequalities for matrix weighted dyadic square functions when $1 < p \leq 2$. \end{abstract} \section{Introduction} Let $U$ be an a.e. positive definite $n \times n$ matrix valued function on $\mathbb{R}^d$ (that is, a matrix weight), and for a measurable $\mathbb{C}^n$ valued function $\V{f}$ on $\mathbb{R}^d$ define \begin{equation*} \|\V{f}\|_{L^p(U)} := \left(\int_{\R^d} \abs{U^\frac{1}{p} (x) \V{f}(x)}^p \, dx \right)^\frac{1}{p} \end{equation*} where $\abs{\V{e}}$ is the standard Euclidean norm on $\mathbb{C}^n$. We will say that a pair of matrix weights $U, V$ satisfies the matrix A${}_p$ condition if \begin{equation*} [U,V]_{\text{A}_p} := \sup_{\substack{I \subseteq \mathbb{R}^d \\ I \text{ is a cube }}} \Xint-_I \left( \Xint-_I \| V^{-\frac{1}{p}} (y) U ^\frac{1}{p} (x)\| ^{p'} \, dy \right)^\frac{p}{p'} \, dx < \infty \end{equation*} where $\Xint-_I$ refers to the unweighted average and $\|A\|$ is the standard matrix norm of an $n \times n$ matrix $A$. Clearly this condition reduces to the classical Muckenhoupt two weight A${}_p$ condition in the scalar setting (when $n = 1$). If $U = V$ then we will say $U$ is a matrix A${}_p$ weight if $[U]_{\text{A}_p} := [U,U]_{\text{A}_p} < \infty$. While it is known that most ``classical" operators from harmonic analysis (such as the maximal function, Calder\'{o}n-Zygmund operators, paraproducts, martingale transforms, square functions, etc.)
are bounded on $L^p(U)$ for matrix A${}_p$ weights $U$, it is difficult to determine the sharp dependence of such operators on $[U]_{\text{A}_p}$. In fact, the only two such operators for which sharp one matrix weighted norm inequalities for $p = 2$ are known are the dyadic square function, recently proved in \cite{HPV}, and the maximal function, proved in \cite{IKP} by slightly modifying the ideas in \cite{CG}. Furthermore, among these two operators, sharp one matrix weighted A${}_p$ bounds for $1 < p < \infty$ are only known for the maximal function, which were proved in \cite{IM} by slightly modifying the ideas in \cite{Go}. The purpose of this paper is to prove sharp strong type matrix weighted norm inequalities for the dyadic square function in the range $1 < p \leq 2$, providing the first sharp $p \neq 2$ estimates for a singular operator in the matrix weighted setting. Let $\mathscr D$ be a dyadic grid and let $\{h_J ^k\}$ for $k = 1, \ldots, 2^d - 1$ and $J \in \mathscr D$ be any Haar system on $\mathbb{R}^d$, meaning that $\{h_J ^k\}$ is an orthonormal system of $L^2(\mathbb{R}^d)$ with $h_J ^k$ supported on $J$, $\int_J h_J ^k(x) \, dx = 0 $, and each $h_J ^k$ constant on the dyadic children of $J$. Also, for a function $\V{f} : \mathbb{R}^d \rightarrow \mathbb{C}^n$ let \begin{equation*} \V{f}_J ^k := \int_J \V{f}(x) h_J ^k (x) \, dx \end{equation*} and define the matrix weighted dyadic square function $S_U \V{f} = S_{U, p} \V{f}$ by \begin{equation} S_{U, p} \V{f}(x) := \left( \sum_{J, k} \frac{\abs{U^\frac{1}{p} (x) \V{f}_J ^k}^2 1_J (x) }{|J|} \right)^\frac{1}{2}. \label{SquareFunction} \end{equation} For notational ease we will omit the dependence of $h_J ^k$ on $k$ and presume all sums involving Haar functions are taken over $k = 1, \ldots, 2^d - 1$.
Note that this operator is a natural substitute for the dyadic square function in the sense that if $S_d$ is the ordinary dyadic square function on scalar valued functions then \begin{equation*} \norm{S_d}_{L^p(v) \rightarrow L^p(u)} = \norm{S_{u, p}}_{L^p (v) \rightarrow L^p}.\end{equation*} To state our main result we need the following definition. We say that a matrix weight $U$ is matrix A${}_p ^\text{wk}$ if \begin{equation*} [U]_{\text{A}_p ^\text{wk}} := \sup_{\V{e} \in \mathbb{C}^n} \norm{\abs{U^\frac{1}{p} \V{e}}^p }_{\text{A}_\infty} < \infty. \end{equation*} It is easy to show (see \cite{Go} for example) that a matrix A${}_p$ weight is also a matrix A${}_p ^\text{wk}$ weight with $[U]_{\text{A}_p ^\text{wk}} \leq [U]_{\text{A}_p}$ and clearly in the scalar setting we have $[u]_{\text{A}_p ^\text{wk}} = [u]_{\text{A}_\infty}$. Our first result is the following. \begin{thm} \label{MainThmNormSquareFunction} If $U, V$ is a pair of matrix weights satisfying the matrix A${}_p$ condition, $V^{-\frac{p'}{p}}$ is a matrix A${}_{p'} ^ \text{wk}$ weight, and $1 < p \leq 2$, then the sharp estimate \begin{equation*} \|S_{U, p} \|_{L^p (V) \rightarrow L^p} \lesssim [U,V]_{\text{A}_p}^\frac{1}{p} [V^{-\frac{p'}{p}}]_{\text{A}_{p'} ^\text{wk}} ^\frac{1}{p} \end{equation*} holds. \\ Furthermore, if $2 < p < \infty$ we have the following (most likely not sharp) estimate: \begin{equation*} \|S_{U, p} \|_{L^p(V) \rightarrow L^p} \lesssim [U,V]_{\text{A}_p}^\frac{1}{p} [V^{-\frac{p'}{p}}]_{\text{A}_{p'} ^\text{wk}} ^\frac{1}{p} [U]_{\text{A}_p ^\text{wk}} ^{\frac{1}{2} - \frac{1}{p}}. \end{equation*} \end{thm} Note that this was proved when $p = 2$ in \cite{HPV} in the one weighted case and that sharpness when $1 < p \leq 2$ follows from the well known sharpness in the scalar setting (see \cite{LL,HL}).
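For completeness, the scalar identity noted at the start of this discussion can be verified directly: when $n = 1$ the factor $u^{\frac1p}(x)$ pulls out of the Haar sum, so that $$ \|S_{u,p} f\|_{L^p}^p = \int_{\R^d}\Bigl(\sum_{J,k}\frac{|u^{\frac1p}(x) f_J^k|^2\,1_J(x)}{|J|}\Bigr)^{\frac{p}{2}} dx = \int_{\R^d} u(x)\,\bigl(S_d f(x)\bigr)^p\, dx = \|S_d f\|_{L^p(u)}^p, $$ and taking suprema over $\|f\|_{L^p(v)}\leq 1$ gives the equality of operator norms.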
Also, note that while it is unlikely that Theorem \ref{MainThmNormSquareFunction} is sharp when $p > 2$, it is a natural bound, and in fact we will recover from the proof of Theorem \ref{MainThmNormSquareFunction} the current best mixed matrix weighted A${}_p$--A${}_\infty$ bound for a positive sparse operator $\tilde{\mathcal{S}}_{U}$ from \cite{CIM} (and thus the current best bound for CZOs via the sparse convex body domination theorem from \cite{NPTV}), namely \begin{equation*} \norm{\tilde{\mathcal{S}}_{U}}_{L^p(V) \rightarrow L^p} \lesssim [U,V]_{\text{A}_p} ^\frac{1}{p} [V^{-\frac{p'}{p}}] _{\text{A}_{p'} ^\text{wk}} ^\frac{1}{p} [U]_{\text{A}_p ^\text{wk}} ^\frac{1}{p'} \end{equation*} (see p. 9 for the definition of a positive sparse operator $\tilde{\mathcal{S}}_{U}$). We will now outline the arguments used to prove our main result. In the next section, we will modify the stopping time ideas from \cite{HPV} to prove a sparse domination of $S_{U, p}\V{f}(x)$ for all matrix weights $U, \ 1 < p < \infty,$ and measurable $\V{f}$. Note that unlike \cite{HPV}, which proved the sparse domination of an integrated version of $S_{U,2} \V{f}$, we will actually prove a sparse \textit{pointwise} domination of $S_{U, p}\V{f}(x)$. In the third section we will prove Theorem \ref{MainThmNormSquareFunction} by ``matrixizing" some of the ideas in \cite{C} to prove a matrix weighted Carleson embedding type theorem. Of particular novelty here is that we will use a matrix weighted ``stopping moment" decomposition, which to the author's knowledge is the first time such an argument has appeared in the matrix weighted setting. Note that a similar matrix weighted parallel corona decomposition argument should be possible (which in fact was used to prove a sharp version of Theorem \ref{MainThmNormSquareFunction} in the scalar $p > 2$ setting in \cite{LL}).
We end the introduction with an important point: as of this writing, it is unknown whether the Rubio de Francia extrapolation theorem holds in the matrix weighted setting. Namely, it is not known whether the boundedness of an operator $T$ on $L^2(U)$ for all matrix A${}_2$ weights $U$ implies the boundedness of $T$ on $L^p(U)$ for all matrix A${}_p$ weights $U$ and all $1 < p < \infty.$ Thus, unlike in the scalar setting, sharp estimates (or even just boundedness) of operators for $p \neq 2$ do not at this moment follow from sharp estimates of operators for $p = 2$. \section{Sparse domination of square functions} Before we state the main result of this section we will need to introduce some definitions and notation. First we will introduce the concept of a reducing matrix, whose importance was emphasized in \cite{Go} and which has since been shown to be vital in the theory of matrix weighted norm inequalities. Namely, for a matrix weight $U$, a cube $I$, and $\V{e} \in \mathbb{C}^n$ there exist positive definite matrices $\mathcal{U}_I, \mathcal{U}_I '$ where \begin{equation*} \abs{\mathcal{U}_I \V{e}} \approx \left(\Xint-_I \abs{U^\frac{1}{p} (x) \V{e}}^p \, dx \right)^\frac{1}{p}, \qquad \abs{\mathcal{U}_I ' \V{e}} \approx \left(\Xint-_I \abs{U^{-\frac{1}{p}} (x) \V{e}}^{p'} \, dx \right)^\frac{1}{p'} \end{equation*} where the implicit constant depends only on $n$. In particular, it is easy to see that \begin{equation*} [U,V]_{\text{A}_p} \approx \sup_{\substack{I \subseteq \mathbb{R}^d \\ I \text{ is a cube }}} \norm{\mathcal{U}_I \mathcal{V}_I '}^p. \end{equation*} \noindent Now let $\{\V{e}_j\}_{j = 1} ^n$ be any orthonormal basis of $\mathbb{C}^n$.
We will then use the following simple estimate without further mention throughout the rest of the paper: If $A$ is any $n \times n$ matrix then \begin{equation*} \|\mathcal{U} _I A\| ^p \approx \sum_{j = 1}^n \abs{\mathcal{U} _I A \V{e}_j} ^p \approx \sum_{j = 1}^n \Xint-_I \abs{U^\frac{1}{p} (x) A \V{e}_j} ^p \, dx \approx \Xint-_I \norm{U^\frac{1}{p} (x) A} ^p \, dx. \end{equation*} Let $\mathscr D$ be a dyadic grid. A collection $\ensuremath{\mathscr{L}}$ of dyadic cubes in $\mathscr D$ is sparse if for every $J \in \ensuremath{\mathscr{L}}$ \begin{equation*} \sum_{\substack{L \varsubsetneq J \\ L \in \ensuremath{\mathscr{L}}}} |L| \leq \frac12 |J|. \end{equation*} See \cite{LN} or \cite{NPTV} for more properties of sparse collections. \\ Given a sparse collection $\ensuremath{\mathscr{L}}$, define the ``sparse positive operator" $\tilde{S}_{U, \ensuremath{\mathscr{L}}} = \tilde{S}_{U, p, \ensuremath{\mathscr{L}}}$ by \begin{equation} \tilde{S}_{U, \ensuremath{\mathscr{L}}} \V{f} (x) := \left( \sum_{L \in \ensuremath{\mathscr{L}}} \innp{|\mathcal{U}_L \V{f}|} _L ^2 \norm {U^\frac{1}{p} (x) \mathcal{U}_L^{-1} } ^2 1_L (x) \right)^\frac{1}{2} \label{PositiveSparse}\end{equation} where $\innp{ }_L$ denotes the unweighted average over $L$.
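To orient the reader (our own remark, not needed in the sequel): in the scalar case $n=1$ one may take $\mathcal{U}_L=\langle u\rangle_L^{1/p}$, and the sparse positive operator collapses to a weighted version of the familiar scalar sparse square operator: $$ \tilde{S}_{u,\ensuremath{\mathscr{L}}} f(x) = \Bigl(\sum_{L\in\ensuremath{\mathscr{L}}}\langle|f|\rangle_L^{2}\, \langle u\rangle_L^{\frac{2}{p}}\, u^{\frac{2}{p}}(x)\,\langle u\rangle_L^{-\frac{2}{p}}\,1_L(x)\Bigr)^{\frac12} = u^{\frac1p}(x)\Bigl(\sum_{L\in\ensuremath{\mathscr{L}}}\langle|f|\rangle_L^{2}\,1_L(x)\Bigr)^{\frac12}. $$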
Furthermore, for any $J \in \ensuremath{\mathscr{L}}$ define the localized sparse positive operator $ \tilde{S}_{U, J, \ensuremath{\mathscr{L}}}$ by \begin{equation*} \tilde{S}_{U, J, \ensuremath{\mathscr{L}}} \V{f} (x) := \left( \sum_{\substack{L \in \ensuremath{\mathscr{L}} \\ L \subseteq J}} \innp{|\mathcal{U}_L \V{f}|} _L ^2 \norm{U^\frac{1}{p} (x) \mathcal{U}_L^{-1} } ^2 1_L (x) \right)^\frac{1}{2}\end{equation*} and similarly define $S_{U, J} \V{f}$ by \begin{equation*} S_{U, J} \V{f}(x) = \left( \sum_{\substack{L \in \mathscr D \\ L \subseteq J }} \frac{\abs{U^\frac{1}{p} (x) \V{f}_L}^2 1_L (x) }{|L|} \right)^\frac{1}{2}. \end{equation*} Finally, for $N \in \mathbb{N}, $ define \begin{equation} \label{SquareFunctionN} S_{U, N} \V{f}(x) = \left( \sum_{\substack{L \in \mathscr D \\ 2^{-N} \leq \ell(L) \leq 2^N }} \frac{\abs{U^\frac{1}{p} (x) \V{f}_L}^2 1_L (x) }{|L|} \right)^\frac{1}{2} \end{equation} where $\ell(L)$ is the sidelength of $L$, and define $S_{U, J, N}$ in an analogous way. The main result of this section is the following. \begin{thm} \label{SparseDomination} Let $S_{U, N} \V{f}$ denote the matrix weighted square function defined by \eqref{SquareFunctionN}. Then there exists a sparse collection $\ensuremath{\mathscr{L}}$ of dyadic cubes where for a.e. $x \in \mathbb{R}^d$ we have $S_{U, N} \V{f}(x) \lesssim \tilde{S}_{U, \ensuremath{\mathscr{L}}} \V{f}(x)$ with implicit constant independent of $\V{f}, x, N, $ and $ U$. \end{thm} \begin{proof} Let $S_d$ denote the unweighted dyadic square function with respect to $\mathscr D$. It is then enough to show that for each $J \in \mathscr D$ we can find a sparse collection $\ensuremath{\mathscr{L}}$ of dyadic subcubes of $J$ where $S_{U, J, N} \V{f}(x) \lesssim \tilde{S}_{U, J, \ensuremath{\mathscr{L}}} \V{f}(x)$, since then we can apply this to each $J \in \mathscr D$ with $\ell(J) = 2^N$.
Let $\ensuremath{\mathscr{J}}(J)$ be the collection of maximal dyadic subcubes $L$ of $J$ where \begin{equation*} \sum_{ J \supseteq I \supseteq L} \frac{|\mathcal{U}_J \V{f}_I|^2}{|I|} > \lambda \innp{|\mathcal{U}_J\V{f}|}_J ^2. \end{equation*} We claim that $\sum_{L \in \ensuremath{\mathscr{J}}(J)} |L| \leq \frac14 |J|$ for large enough $\lambda$. Indeed, if $x \in L \in \ensuremath{\mathscr{J}}(J)$ then \begin{align*} S_d (\mathcal{U}_J 1_J \V{f}) (x) ^ 2 & = \sum_{I \in \mathscr D} \frac{|\mathcal{U}_J (1_J\V{f})_I|^2}{|I|} 1_I (x) \\ & \geq \sum_{J \supseteq I \supseteq L} \frac{|(\mathcal{U}_J \V{f} 1_J)_I|^2}{|I|} \\ & = \sum_{ J \supseteq I \supseteq L} \frac{|\mathcal{U}_J \V{f}_I|^2}{|I|} \\ & > \lambda \innp{|\mathcal{U}_J \V{f}|}_J ^2, \end{align*} so that using the fact that $\|S_d\|_{L^{1} \rightarrow L^{1, \infty}} \leq C$ we get \begin{equation*} \sum_{L \in \ensuremath{\mathscr{J}} (J)} |L| = \abs{\bigcup_{L \in \ensuremath{\mathscr{J}}(J)} L } \leq |\{x : S_d(\mathcal{U}_J 1_J \V{f} ) (x) \geq \lambda ^\frac12 \innp{|\mathcal{U}_J \V{f}|}_J \}| \leq \frac{C}{\lambda^\frac12} |J|, \end{equation*} which clearly proves the claim. Let $\ensuremath{\mathscr{F}}(J)$ denote the collection of all $L \in \mathscr D(J)$ such that $L \not \subseteq Q$ for any $Q \in \ensuremath{\mathscr{J}}(J)$. Furthermore, abusing notation slightly, we will denote $\cup_{Q \in \ensuremath{\mathscr{J}}(J)} Q$ by $\cup\ensuremath{\mathscr{J}}(J)$. Fix $x \in J$ such that $U(x)$ is defined.
Then \begin{align*} \sum_{L \in \mathscr D(J)} \frac{\abs{U^\frac{1}{p} (x) \V{f}_L }^2 1_L (x) }{|L|} & = \sum_{L \in \ensuremath{\mathscr{F}}(J)} \frac{\abs{U^\frac{1}{p} (x) \V{f}_{L} } ^2 1_{L } (x)}{|L|} + \sum_{Q \in \ensuremath{\mathscr{J}}(J)} \sum_{L \in \mathscr D(Q)} \frac{\abs{U^\frac{1}{p} (x) \V{f}_L }^2 1_L (x) }{|L|} \\ & = A(x) + \sum_{Q \in \ensuremath{\mathscr{J}}(J)} \sum_{L \in \mathscr D(Q)} \frac{\abs{U^\frac{1}{p} (x) \V{f}_L }^2 1_L (x) }{|L|}. \end{align*} We estimate $A(x)$ by considering two cases. First assume $x \in \cup\ensuremath{\mathscr{J}}(J).$ Thus, if $x \in I$ for some $I \in \ensuremath{\mathscr{J}}(J)$ and $x \in L \in \ensuremath{\mathscr{F}}(J)$ then by definition of $\ensuremath{\mathscr{F}}(J)$ we have $J \supseteq L \varsupsetneq I$, so that \begin{align*} A(x) & \leq \norm{ U^\frac{1}{p} (x)\mathcal{U}_J ^{-1} }^2 1_J (x) \sum_{J \supseteq L \varsupsetneq I} \frac{\abs{ \mathcal{U}_J \V{f}_{L} } ^2 }{|L|} \\ & \leq \lambda \norm{ U^\frac{1}{p} (x) \mathcal{U}_J ^{-1}}^2 1_J (x) \innp{ | \mathcal{U}_J \V{f} |}_J ^2. \end{align*} On the other hand, if $x \not \in \cup \ensuremath{\mathscr{J}}(J)$ then we can pick a sequence of nested dyadic cubes $\{L_k ^x\} = \{L \in \ensuremath{\mathscr{F}}(J) : x \in L\} = \{L \in \mathscr D(J) : x \in L\}$. However, if \begin{equation*} \sum_k \frac{|\mathcal{U}_J \V{f}_{L_k ^x} |^2}{|L_k^x|} > \lambda \innp{|\mathcal{U}_J\V{f}|}_J ^2 \end{equation*} then obviously for some $k'$ we must have \begin{equation*} \sum_{ J \supseteq L \supseteq L_{k'} ^ x } \frac{|\mathcal{U}_J \V{f}_L|^2}{|L|} > \lambda \innp{|\mathcal{U}_J\V{f}|}_J ^2, \end{equation*} which means $x \in L_{k'} ^x \subseteq I$ for some $I \in \ensuremath{\mathscr{J}}(J)$, contradicting $x \not \in \cup \ensuremath{\mathscr{J}}(J)$.
Thus, \begin{align*} A(x) & \leq \norm{U^\frac{1}{p} (x) \mathcal{U}_J ^{-1}}^2 \sum_{L \in \ensuremath{\mathscr{F}}(J)} \frac{\abs{\mathcal{U}_J \V{f}_L }^2 1_{L} (x)}{|L|} \\ & \leq \norm{U^\frac{1}{p} (x) \mathcal{U}_J ^{-1}} ^2 \sum_{k} \frac{\abs{\mathcal{U}_J \V{f}_{L_k ^x} }^2 }{|L_k ^x|} \\ & \leq \lambda \norm{U^\frac{1}{p} (x) \mathcal{U}_J ^{-1}}^2 \innp{|\mathcal{U}_J\V{f}|}_J ^2 1_J (x). \end{align*} Putting this together, we get \begin{align*} \sum_{L \in \mathscr D(J)} \frac{\abs{U^\frac{1}{p} (x) \V{f}_L }^2 1_L (x) }{|L|} \leq \lambda \norm{U^\frac{1}{p} (x) \mathcal{U}_J ^{-1}}^2 \innp{|\mathcal{U}_J \V{f}|}_J ^2 1_J (x) + \sum_{Q \in \ensuremath{\mathscr{J}}(J)} \sum_{L \in \mathscr D(Q)} \frac{\abs{U^\frac{1}{p} (x) \V{f}_L }^2 1_L (x) }{|L|}. \end{align*} Finally set $\ensuremath{\mathscr{J}}_0(J) = \{J\}$ and for $k \in \mathbb{N}$ set $\ensuremath{\mathscr{J}}_k(J) = \{L \in \ensuremath{\mathscr{J}}(Q) : Q \in \ensuremath{\mathscr{J}}_{k-1}(J)\}$. If $\ensuremath{\mathscr{L}} = \cup_k \ensuremath{\mathscr{J}}_k(J)$ then $\ensuremath{\mathscr{L}}$ is sparse, which by iteration completes the proof of Theorem \ref{SparseDomination}. \end{proof} \section{Proof of Theorem \ref{MainThmNormSquareFunction}} \label{NormSquareSection} In this section we will prove Theorem \ref{MainThmNormSquareFunction} by utilizing Theorem \ref{SparseDomination}. Again fix $\lambda > 1$ to be determined momentarily.
Given $J \in \mathscr D$ let $\mathcal{G}(J)$ denote the set of maximal $L \in \mathscr D(J)$ such that either \begin{equation} \Xint-_L |\mathcal{U}_J \vec{f}|> \lambda \Xint-_J|\mathcal{U}_J \vec{f}| \label{StopOne} \end{equation} or \begin{equation} \| \mathcal{U}_L \mathcal{U}_J ^{-1} \| > \lambda. \label{StopTwo}\end{equation} We now prove that for $\lambda > 0$ large enough we have that \begin{equation} \label{sparseineq} \sum_{L \in \mathcal{G}(J)} |L| \leq \frac14 |J|. \end{equation} Let $\ensuremath{\mathscr{J}}_1(J)$ and $\ensuremath{\mathscr{J}}_2(J)$ denote those maximal cubes in $J$ satisfying \eqref{StopTwo} and \eqref{StopOne}, respectively. Then, as usual, \begin{equation*} \sum_{L \in \ensuremath{\mathscr{J}}_1(J)} |L| \lesssim \frac{1}{\lambda^p } \sum_{L \in \ensuremath{\mathscr{J}}_1(J)} \int_L \norm{U^\frac{1}{p} (x) \mathcal{U}_J ^{-1}} ^p \, dx \lesssim \frac{1}{\lambda^p } \int_J \norm{U^\frac{1}{p} (x) \mathcal{U}_J ^{-1}} ^p \, dx \lesssim \frac{1}{\lambda^p } |J| \end{equation*} and furthermore \begin{align*} \sum_{L \in \ensuremath{\mathscr{J}}_2(J)} |L| & \leq \frac{1}{\lambda} \sum_{L \in \ensuremath{\mathscr{J}}_2(J)} |L| \frac{ \Xint-_L | \mathcal{U}_J \vec{f}| }{\Xint-_J | \mathcal{U}_J \vec{f}|} \\ & \leq \frac{1 }{\lambda} |J|. \end{align*} This completes the proof for $\lambda$ large enough, since $\mathcal{G}(J) \subseteq \ensuremath{\mathscr{J}}_1(J) \cup \ensuremath{\mathscr{J}}_2(J)$. Now for fixed $N \in \mathbb{N}$ let \begin{equation*} \mathcal{G}_0 = \{J \in \mathscr D : \ell(J) = 2^{N}\}\end{equation*} and inductively define \begin{equation*} \mathcal{G}_{k + 1} = \{L \in \mathscr D : L \in \mathcal{G}(J) \text{ for some } J \in \mathcal{G}_k\}.
\end{equation*} If $\mathcal{E}(J)$ denotes the collection of all $L' \in \mathscr D(J)$ that are not contained in any $L \in \mathcal{G}(J)$ and $\mathcal{G}$ is the union \begin{equation*} \mathcal{G} = \bigcup_{k = 0}^\infty \mathcal{G}_k \end{equation*} then we clearly have \begin{equation} \bigcup_{J \in \mathcal{G}} \mathcal{E}(J) = \{J \in \mathscr D : \ell(J) \leq 2^{N}\} \label{DPart}\end{equation} for $\lambda > 0$ large enough since $J \in \mathcal{E}(J)$ for any $J \in \mathscr D$. Also, clearly an iteration of \eqref{sparseineq} gives us that for any $Q \in \mathcal{G}$ \begin{equation} \sum_{\substack{L \in \mathcal{G} \\ L \varsubsetneq Q}} |L| \leq \frac12 |Q| \label{SparseProp}. \end{equation} \noindent Furthermore, it is important to note that if $L \in \mathcal{E}(J)$ then both \eqref{StopOne} and \eqref{StopTwo} are false, so that \begin{equation} \Xint-_L |\mathcal{U}_L \vec{f}| \leq \|\mathcal{U}_L \mathcal{U}_J^{-1} \| \Xint-_L |\mathcal{U}_J \vec{f}| \leq \lambda ^2 \Xint-_J |\mathcal{U}_J \vec{f}|. \label{CoronaEq} \end{equation} We now state and prove a Carleson embedding type theorem for the type of operator used in the previous section, which will easily show Theorem \ref{MainThmNormSquareFunction}. Given nonnegative measurable functions $\{a_L (x) \}_{L \in \mathscr D}$ and $r > 0$, define $\tilde{S}_{U, a} \V{f} = \tilde{S}_{U, a, r, p} \vec{f}$ by \begin{equation*} \tilde{S}_{U, a} \V{f} (x) = \left(\sum_{L \in \mathscr D} a_L (x) { \innp{\abs{ \mathcal{U}_L \vec{f}}}_L ^r 1_L(x)} \right)^\frac{1}{r}. \end{equation*} \begin{thm} \label{MaxEmbThm} Let $1 < p \leq r$, let $V^{-\frac{p'}{p}}$ be a matrix A${}_{p'} ^ \text{wk}$ weight, and let \begin{align*}\|A\|_* = \sup_{J \in \mathscr D} \Xint-_J \left(\sum_{L \in \mathscr D(J)} a_L (x) { 1_L(x)} \right)^\frac{p}{r}\, dx.
\end{align*} Then \begin{equation*} \|\tilde{S}_{U, a} \|_{L^p(V) \rightarrow L^p} \lesssim [U, V]_{\text{A}_p} ^\frac{1}{p } [V^{-\frac{p'}{p}}]_{\text{A}_{p'} ^\text{wk}} ^\frac{1}{p} \|A\|_* ^\frac{1}{p}. \end{equation*} \end{thm} \begin{proof} Let \begin{equation*} F_J (x) = \left(\sum_{L \in \mathcal{E}(J)} a_L(x) { \innp{\abs{\mathcal{U}_L \vec{f}}}_L ^r} 1_L(x) \right)^\frac{1}{r}. \end{equation*} We first get a bound for $\|F_J\|_{L^p}$. Note that \begin{align*} F_J (x) &= \left(\sum_{L \in \mathcal{E}(J)} a_L(x) { \innp{ \abs{ \mathcal{U}_L \vec{f}}}_L ^r 1_L(x)} \right)^\frac{1}{r} \\ & \lesssim \innp{ \abs{ \mathcal{U}_J \vec{f}}}_J \left(\sum_{L \in \mathcal{E}(J)} a_L(x) 1_L(x) \right)^\frac{1}{r} \end{align*} by \eqref{CoronaEq} since $L \in \mathcal{E}(J)$. Thus, \begin{align*} \|F_J\|_{L^p} ^p & \lesssim \innp{ \abs{ \mathcal{U}_J \vec{f}}}_J ^p \int_J \left(\sum_{L \in \mathcal{E}(J)} a_L(x){1_L(x)} \right)^\frac{p}{r} \, dx \\ & \leq \|A\|_* |J| \innp{\abs{\mathcal{U}_J \vec{f}}}_J ^p . \end{align*} However, if \begin{equation*} \tilde{S}_{U, a, N} \vec{f}(x) = \left(\sum_{\ell(L) \leq 2^{N}} a_L(x) { \innp{\abs{ \mathcal{U}_L \vec{f}} }_L ^r 1_L (x)} \right)^\frac{1}{r} \end{equation*} then \eqref{DPart} gives us that \begin{equation*}\tilde{S}_{U, a, N} \V{f}(x) = \left(\sum_{J \in \mathcal{G}} F_J ^r (x) \right)^\frac{1}{r}. \end{equation*} Then using the fact that $p \leq r$, \begin{align} \|\tilde{S}_{U, a, N} \V{f} \|_{L^p} & = \left\|\left(\sum_{J \in \mathcal{G}} F_J ^r \right)^\frac{1}{r} \right\|_{L^p} \nonumber \\ & \leq \left\|\left(\sum_{J \in \mathcal{G}} F_J ^p \right)^\frac{1}{p} \right\|_{L^p} \nonumber \\ & = \left( \sum_{J \in \mathcal{G}} \|F_J \|_{L^p} ^p \right)^\frac{1}{p} \nonumber \\ & \lesssim \|A\|_*^\frac{1}{p} \left( \sum_{J \in \mathcal{G}} |J| \innp{\abs{\mathcal{U}_J \vec{f}}}_J ^p\right)^\frac{1}{p}.
\label{MaxEmbThmEst} \end{align} By the sharp reverse H\"{o}lder inequality for A${}_\infty$ weights, we can pick $\epsilon \approx [V^{-\frac{p'}{p}}]_{\text{A}_{p'}^\text{wk}}^{-1}$ small enough where \begin{align*} \Xint-_J |\mathcal{U}_J \vec{f}| & \leq \left(\Xint-_J \| \mathcal{U}_J V^{-\frac{1}{p}}\|^\frac{p - \epsilon }{p - \epsilon - 1} \right)^\frac{p - \epsilon - 1 }{p - \epsilon} \left(\Xint-_J |V^\frac{1}{p} \vec{f}|^{p - \epsilon}\right)^\frac{1}{p - \epsilon} \\ & \lesssim \left(\Xint-_J \| \mathcal{U}_J V^{-\frac{1}{p}} \|^{p'} \right)^\frac{1 }{p'} \left(\Xint-_J |V^\frac{1}{p} \vec{f}|^{p - \epsilon}\right)^\frac{1}{p - \epsilon} \\ & \lesssim [U, V]_{\text{A}_p} ^\frac{1}{p} \left(\Xint-_J |V^\frac{1}{p} \vec{f}|^{p - \epsilon}\right)^\frac{1}{p - \epsilon}. \end{align*} Now, for any nonnegative scalar Carleson sequence $(\tau_Q)$, if $$\|\tau_Q\|_* = \sup_{J \in \mathscr D} \frac{1}{|J|} \sum_{L \in \mathscr D(J)} \tau_L$$ then the standard proof of the (unweighted) dyadic Carleson embedding theorem and the well known ``$L^{1+\delta}$" maximal function bound for $\delta>0$ small tells us that for $q = 1+\delta$ \begin{align*} \sum_{Q \in \mathscr D} \tau_Q \innp{|f|}_Q ^q \leq \|\tau_Q\|_* \|M_d f\|_{L^q} ^q \lesssim \delta^{-1} \|\tau_Q\|_* \|f\|_{L^q} ^q. \end{align*} Applying this to the exponent $\frac{p}{p - \epsilon} > 1$, \eqref{MaxEmbThmEst} gives us \begin{align*} \norm{ \tilde{S}_{U, a, N} \V{f}}_{L^p} & \lesssim \|A\|_* ^\frac{1}{p} [U, V]_{\text{A}_p} ^\frac{1}{p} \left(\sum_{J \in \mathcal{G}} |J| \innp{ |V^\frac{1}{p} \vec{f}|^{p - \epsilon}}_J ^\frac{p}{p - \epsilon} \right)^\frac{1}{p} \\ & \lesssim \|A\|_*^\frac{1}{p} [U, V]_{\text{A}_p} ^\frac{1}{p} \epsilon ^{-\frac{1}{p}} \left(\int_{\R^d} (|V ^\frac{1}{p} \vec{f}(x)|^{p - \epsilon}) ^\frac{p }{p - \epsilon} \, dx \right)^\frac{1}{p} \\ & \lesssim \|A\|_* ^\frac{1}{p} [U, V]_{\text{A}_p} ^\frac{1}{p} [V^{-\frac{p'}{p}}]_{\text{A}_{p'} ^\text{wk}} ^{\frac{1}{p}}
\|\vec{f}\|_{L^p(V)}. \end{align*} Letting $N \rightarrow \infty$ in conjunction with the monotone convergence theorem completes the proof. \end{proof} Finally, to see how this proves Theorem \ref{MainThmNormSquareFunction} when $1 < p \leq 2$, set $r = 2$ and let $\ensuremath{\mathscr{L}} $ be a sparse collection. Set \[a_L(x) = \begin{cases} \norm{U^\frac{1}{p} (x) \mathcal{U}_L^{-1} }^2 & L \in \ensuremath{\mathscr{L}} \\ 0 & L \not \in \ensuremath{\mathscr{L}} \end{cases} \] Then since $\frac{p}{2} \leq 1$ \begin{align*} \sup_{J \in \mathscr D} \Xint-_J \left(\sum_{\substack {L \subseteq J \\ L \in \ensuremath{\mathscr{L}}}} { \norm{U^\frac{1}{p} (x) \mathcal{U}_L^{-1} }^2 1_L(x)} \right)^\frac{p}{2}\, dx & \leq \sup_{J \in \mathscr D} \frac{1}{|J|} \sum_{\substack {L \subseteq J \\ L \in \ensuremath{\mathscr{L}}}} \int_L \norm{U^\frac{1}{p} (x) \mathcal{U}_L^{-1} }^p \, dx \\ & \lesssim \sup_{J \in \mathscr D} \frac{1}{|J|} \sum_{\substack {L \subseteq J \\ L \in \ensuremath{\mathscr{L}}}} |L| \\ & = \sup_{J \in \mathscr D} \frac{1}{|J|} \sum_{L^* \subseteq J} \sum_{\substack {L \subseteq L^* \\ L \in \ensuremath{\mathscr{L}}}} |L| \\ & \leq \frac32 \sup_{J \in \mathscr D} \frac{1}{|J|} \sum_{L^* \subseteq J} |L^*| \leq \frac32 \end{align*} \noindent where here $\{L^*\}$ is the collection of maximal $L^* \in \ensuremath{\mathscr{L}}$ with $L^* \subseteq J$. Thus, if $\tilde{S}_{U, \ensuremath{\mathscr{L}}}$ is defined as in \eqref{PositiveSparse}, then Theorem \ref{MaxEmbThm} gives us that for $r = 2$\begin{equation*} \|\tilde{S}_{U, \ensuremath{\mathscr{L}}} \|_{L^p(V) \rightarrow L^p} = \|\tilde{S}_{U, a}\|_{L^p(V) \rightarrow L^p} \lesssim [U, V]_{\text{A}_p} ^\frac{1}{p} [V^{-\frac{p'}{p}}]_{\text{A}_{p'} ^\text{wk}} ^{\frac{1}{p}}.
\end{equation*} But Theorem \ref{SparseDomination} and the monotone convergence theorem then tell us that for any $\vec{f} \in L^p(V)$, we have \begin{align*} \| S_U \vec{f} \|_{L^p} & = \lim_{N \rightarrow \infty} \| S_{U, N} \vec{f} \|_{L^p} \\ & \lesssim \|\tilde{S}_{U, \mathscr{L}} \vec{f}\|_{ L^p} \\ & \lesssim [U, V]_{\text{A}_p} ^\frac{1}{p} [V^{-\frac{p'}{p}}]_{\text{A}_{p'} ^\text{wk}} ^{\frac{1}{p}} \| \vec{f}\|_{L^p(V)}. \end{align*} To prove Theorem \ref{MainThmNormSquareFunction} when $p > 2$, we argue as in \cite{CIM} and use a routine duality argument. In fact, we will prove a slightly stronger result. Given a sparse collection $\mathscr{L}$ and $r > 0$, define $\tilde{S}_U = \tilde{S}_{U, \mathscr{L}, r, p}$ by \begin{equation*} \tilde{S}_U \vec{f} (x) := \left( \sum_{L \in \mathscr{L}} \innp{|\mathcal{U}_L \vec{f}|} _L ^r \norm {U^\frac{1}{p} (x) \mathcal{U}_L^{-1} } ^r 1_L (x) \right)^\frac{1}{r}. \end{equation*} Assume $p > r$ so that $\frac{p}{r} > 1$. Then \begin{align*} \norm{\tilde{S}_U \vec{f} } _{L^p} & = \norm{\sum_{L \in \mathscr{L}} \innp{|\mathcal{U}_L \vec{f}|} _L ^r \norm {U^\frac{1}{p} \mathcal{U}_L^{-1} } ^r 1_L }_{L^{\frac{p}{r}}} ^\frac{1}{r} \\ & \lesssim \sup_{\|g\|_{L^{\frac{p}{p-r}}} \leq 1} \left( \sum_{L \in \mathscr{L}} |L| \innp{|\mathcal{U}_L \vec{f}|} _L ^r \innp{\norm {U^\frac{1}{p} \mathcal{U}_L^{-1} } ^r g }_L \right)^\frac{1}{r}.
\end{align*} However, as in the $1 < p \leq 2$ case, by the sharp reverse H\"{o}lder inequality for A${}_\infty$ weights, we can pick $\epsilon_1 \approx [V^{-\frac{p'}{p}}]_{\text{A}_{p'}^{\text{wk}}}^{-1}$ and $\epsilon_2 \approx [U]_{\text{A}_p^{\text{wk}}} ^{-1}$ such that \begin{equation*} \innp{|\mathcal{U}_L \vec{f}|} _L ^r \leq \innp{\|\mathcal{U}_L V^{-\frac{1}{p}}\|^{\frac{p-\epsilon_1}{p-\epsilon_1 - 1}}}_L ^{r\left(\frac{p-\epsilon_1 - 1}{p-\epsilon_1}\right)}\innp{|V^{\frac{1}{p}}\vec{f}|^{p-\epsilon_1}}_L ^{\frac{r}{p-\epsilon_1}} \lesssim [U,V]_{\text{A}_p} ^\frac{r}{p} \innp{|V^{\frac{1}{p}}\vec{f}|^{p-\epsilon_1}}_L ^\frac{r}{p-\epsilon_1} \end{equation*} and \begin{equation*} \innp{\norm {U^\frac{1}{p} \mathcal{U}_L^{-1} } ^r g }_L \leq \innp{\norm{U^\frac{1}{p} \mathcal{U}_L^{-1} } ^{r\left(\frac{p-\epsilon_2}{r-\epsilon_2}\right)}}_L ^\frac{r-\epsilon_2}{p-\epsilon_2} \innp{|g|^\frac{p-\epsilon_2}{p - r} }_L ^ \frac{p - r}{p-\epsilon_2} \lesssim \innp{|g|^\frac{p-\epsilon_2}{p - r} }_L ^ \frac{p - r}{p-\epsilon_2}. \end{equation*} If, as usual, \begin{equation*} E_L = L \backslash \bigcup_{\substack{L' \subsetneq L \\ L' \in \mathscr{L}}} L', \end{equation*} then the sets $\{E_L : L \in \mathscr{L}\}$ are pairwise disjoint and $|L| \leq 2 |E_L|$.
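The two facts just stated (disjointness of the sets $E_L$ and the measure bound $|L| \leq 2|E_L|$ for a sparse collection) can be checked numerically on a toy example. The following minimal Python sketch is illustrative only and is not from the paper: it uses a small hypothetical $\tfrac12$-sparse collection of dyadic intervals on $[0,1)$ and the same definition of $E_L$ as above.

```python
from fractions import Fraction

def E_measure(L, collection):
    # measure of E_L = L minus the union of members of the collection
    # strictly contained in L (it suffices to subtract the maximal ones,
    # which are pairwise disjoint in a dyadic family)
    a, b = L
    children = [Lp for Lp in collection
                if Lp != L and a <= Lp[0] and Lp[1] <= b]
    maximal = [c for c in children
               if not any(d != c and d[0] <= c[0] and c[1] <= d[1]
                          for d in children)]
    return (b - a) - sum(d[1] - d[0] for d in maximal)

# a hypothetical 1/2-sparse collection: each member's children in the
# collection cover at most half of it
sparse = [(Fraction(0), Fraction(1)),
          (Fraction(0), Fraction(1, 2)),
          (Fraction(0), Fraction(1, 4))]

measures = {L: E_measure(L, sparse) for L in sparse}

# the E_L are pairwise disjoint subsets of [0,1), so their total
# measure cannot exceed 1
assert sum(measures.values()) <= 1
# the sparseness bound |L| <= 2 |E_L| holds for every member
for L, eL in measures.items():
    assert L[1] - L[0] <= 2 * eL
print(measures)
```

Here the names `E_measure` and the particular collection `sparse` are our own; exact rational arithmetic via `Fraction` avoids any floating-point slack in the inequalities.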
We then have \begin{align*} [U,V]_{\text{A}_p} ^\frac{1}{p} & \left(\sum_{L \in \mathscr{L}} |L| \innp{|V^\frac{1}{p} \vec{f}|^{p-\epsilon_1}}_L ^\frac{r}{p-\epsilon_1} \innp{|g|^{\frac{p-\epsilon_2}{p - r}} }_L ^ {\frac{p - r}{p-\epsilon_2}}\right) ^\frac{1}{r} \\ & \lesssim [U,V]_{\text{A}_p} ^\frac{1}{p} \left(\sum_{L \in \mathscr{L}} \int_{E_L} \left(M_d |V^\frac{1}{p}\vec{f}|^{p-\epsilon_1} (x) \right) ^{\frac{r}{p-\epsilon_1}} \left( M_d |g|^{\frac{p-\epsilon_2}{p - r}} (x)\right)^ {\frac{p - r}{p-\epsilon_2}} \, dx \right) ^\frac{1}{r} \\ & \leq [U,V]_{\text{A}_p} ^\frac{1}{p} \left( \int_{\R^d} (M_d |V^\frac{1}{p}\vec{f}|^{p-\epsilon_1})^\frac{p}{p-\epsilon_1}\right)^\frac{1}{p} \left( \int_{\R^d} (M_d |g|^{\frac{p-\epsilon_2}{p - r}})^\frac{p}{p-\epsilon_2}\right)^{\frac{1}{r} - \frac{1}{p}} \\ & \lesssim [U,V]_{\text{A}_p} ^\frac{1}{p} \epsilon_1 ^{-\frac{1}{p}} \epsilon_2^{- \left(\frac{1}{r} - \frac{1}{p}\right)} \|\vec{f}\|_{L^p(V)} \|g\|_{L^{\frac{p}{p-r}}} ^{\frac{1}{r}}, \end{align*} where $M_d$ is the dyadic maximal function. This completes the proof. Notice that when $r = 1$ and $p > 1$ we get that \begin{equation*} \norm{\tilde{\mathcal{S}}_{U, 1}}_{L^p(V) \rightarrow L^p} \lesssim [U,V]_{\text{A}_p} ^\frac{1}{p} [V^{-\frac{p'}{p}}] _{\text{A}_{p'}^{\text{wk}}} ^\frac{1}{p} [U]_{\text{A}_p^{\text{wk}}} ^\frac{1}{p'}, \end{equation*} which, as mentioned before, coincides with the best known A${}_p$--A${}_\infty$ bound for sparse operators, since the sparse operator $\tilde{S}_{U}$ defined by \begin{align*} \tilde{S}_{U} \vec{f} (x) = \sum_{L \in \mathscr{L}} \abs{U^\frac{1}{p} (x) \innp{ \vec{f}}_L} 1_L(x) \end{align*} can be trivially dominated by $\tilde{\mathcal{S}}_{U,1}$. \end{document}
\begin{document} \begin{abstract} Suppose $\Sigma$ is a compact oriented surface (possibly with boundary) that has genus zero, and $L$ is a link in the interior of $(-1,1)\times \Sigma$. We prove that the Asaeda--Przytycki--Sikora (APS) homology of $L$ has rank $2$ if and only if $L$ is isotopic to an embedded knot in $\{0\}\times \Sigma$. As a consequence, the APS homology detects the unknot in $(-1,1)\times \Sigma$. This is the first detection result for generalized Khovanov homology that is valid on an infinite family of manifolds, and it partially resolves a conjecture in \cite{xie2020instantons}. Our proof differs from the previous detection results obtained via instanton homology because in this case, the second page of Kronheimer--Mrowka's spectral sequence is not isomorphic to the APS homology. We also characterize all links in product manifolds that have minimal sutured instanton homology, which may be of independent interest. \end{abstract} \title{Instanton homology and knot detection on thickened surfaces} \section{Introduction} Khovanov homology \cite{Kh-Jones} is a link invariant that assigns a homology group to every link in $S^3$. A generalization of Khovanov homology was introduced by Asaeda--Przytycki--Sikora \cite{APS} for links in the interior of $[-1,1]\times \Sigma$, where $\Sigma$ is a compact oriented\footnote{When $\Sigma$ is non-orientable, there is a unique $(-1,1)$--bundle over $\Sigma$ such that the total manifold is orientable, and the APS homology can be defined for links in this bundle. We will only consider the case when $\Sigma$ is oriented in this paper.} surface possibly with boundary. We will call this invariant the \emph{APS homology} and denote the APS homology of a link $L$ by $\APS(L)$. APS homology recovers Khovanov homology when $\Sigma$ is a disk or sphere.
Kronheimer and Mrowka \cite{KM:Kh-unknot} proved that Khovanov homology distinguishes the unknot from other knots and links (see also \cite{Hedden-unknot, Grigssby-Wehrli-color}). A natural question is whether the analogous result holds for APS homology. More generally, the following conjecture was stated in \cite{xie2020instantons}. \begin{conj} \label{conj_main} Suppose $\Sigma$ is a compact oriented surface and $L$ is a link in the interior of $[-1,1]\times \Sigma$. Then $\operatorname{rank}_{\mathbb{Z}/2}\APS(L;\mathbb{Z}/2)\ge 2$, and equality holds if and only if $L$ is isotopic to a knot embedded in $\{0\}\times \Sigma$. \end{conj} It follows immediately from the definition that for every knot $K$ embedded in $\{0\}\times \Sigma$, we have $\operatorname{rank}_{\mathbb{Z}/2}\APS(K;\mathbb{Z}/2)=2$; moreover, two knots $K_1,K_2\subset \{0\}\times \Sigma$ are isotopic if and only if their APS homologies have the same gradings. Therefore, if Conjecture \ref{conj_main} is true, then for every knot $K\subset \{0\}\times \Sigma$, the APS homology distinguishes (the isotopy class of) $K$ from other knots and links. The unknot detection theorem of Kronheimer--Mrowka gives a positive answer to Conjecture \ref{conj_main} when $\Sigma$ is a disk or sphere. Special cases of Conjecture \ref{conj_main} were also known when $\Sigma$ is an annulus \cite{XZ:excision} or a torus \cite{xie2020instantons}. The main result of this paper gives a positive answer to Conjecture \ref{conj_main} for all compact surfaces of genus zero, possibly with boundary. \begin{thm} \label{thm_main} Suppose $\Sigma$ is a compact oriented surface with genus zero. Let $L$ be a link in the interior of $[-1,1]\times \Sigma$. Then $\operatorname{rank}_{\mathbb{Z}/2}\APS(L;\mathbb{Z}/2)\ge 2$, and equality holds if and only if $L$ is isotopic to a knot embedded in $\{0\}\times \Sigma$.
\end{thm} Theorem \ref{thm_main} is the first detection result for generalized Khovanov homology that is valid on an infinite family of manifolds. Although our proof also uses instanton Floer homology and spectral sequences, several essential difficulties make the proof of Theorem \ref{thm_main} conceptually different from the earlier detection results in \cite{KM:Kh-unknot, XZ:excision, xie2020instantons}. First of all, when $\Sigma$ has genus zero and has at least three boundary components, the second page of Kronheimer--Mrowka's spectral sequence is \emph{not} isomorphic to APS homology. We will construct another spectral sequence, inspired by the work of Winkeler \cite{winkeler2021khovanov}, to relate the second page of Kronheimer--Mrowka's spectral sequence to APS homology. It is also significantly more difficult to compute the differentials on the first page of Kronheimer--Mrowka's spectral sequence in this more general setting. Although the isomorphism class of the underlying group can be computed by sutured homology (Proposition \ref{prop_SiHI_iso_SHI}), due to the lack of a strong enough naturality property, one cannot use it to compute the differential maps. We will develop a different argument by establishing an invariance property of instanton homology under diffeomorphisms (Proposition \ref{prop_diff_induces_id_on_II}) and using it to obtain a partial algebraic description of the differential map (Proposition \ref{prop_description_of_T(b)}). It turns out that the partial description of the differentials given by Proposition \ref{prop_description_of_T(b)} will be sufficient for the proof of the main theorem. More importantly, even after a relationship between instanton homology and APS homology is established, the desired detection result does not follow immediately from the available toolkit of instanton homology.
This is because when the Euler characteristic of $\Sigma$ is negative, the usual argument of estimating Thurston norms using instanton homology is no longer effective in restricting the isotopy class of $L$. We resolve this difficulty by establishing the following result on sutured instanton Floer homology, which may be of independent interest. \begin{thm} \label{thm_main_instanton} Suppose $\Sigma$ is a connected compact oriented surface with non-empty boundary, and $L$ is a link in the interior of $[-1,1]\times \Sigma$. If the singular sutured instanton homology $\SHI([-1,1]\times \Sigma, \{0\}\times \Sigma, L)$ has dimension no greater than $2$, then $L$ is isotopic to a knot in $\{0\}\times \Sigma$. \end{thm} The paper is organized as follows. Section \ref{sec_preliminaries} reviews some basic constructions and terminology that will be needed in this paper; Section \ref{sec_two_nonvanishing_grading} proves Theorem \ref{thm_main_instanton}; Sections \ref{sec_instanton_thickend_surface} and \ref{sec_instanton_for_links_at_0} establish various properties of instanton Floer homology that will be needed in the proof of Theorem \ref{thm_main}; and Section \ref{sec_proof_main} completes the proof of the main theorem. {\bf Acknowledgments.} We would like to thank Ciprian Manolescu for helpful comments, and thank Lvzhou Chen for helpful discussions. \section{Preliminaries} \label{sec_preliminaries} \subsection{Notation and conventions} In this paper, all manifolds and all maps between manifolds are smooth. If $M$ is an oriented manifold, we use $-M$ to denote the same manifold $M$ with the opposite orientation. We use ${\rm int}(M)$ to denote the interior of a manifold $M$. Unless otherwise specified, all submanifolds of $\mathbb{R}$ are endowed with the orientation inherited from the standard orientation of $\mathbb{R}$, and product manifolds are endowed with the product orientation. We view $S^1$ as the quotient of $[-1,1]$ by gluing $1$ and $-1$.
The standard orientation on $S^1$ is defined to be the push-forward of the standard orientation on $[-1,1]$. We will use $t_*\in S^1$ to denote the image of $1$ and $-1$ under the quotient map. The point $t_*\in S^1$ will often be used as a base point on $S^1$. We will use $A$ to denote the oriented annulus $[-1,1]\times S^1$. If $\Sigma$ is a compact oriented surface, we will call the manifold $(-1,1)\times \Sigma$ the \emph{thickened} $\Sigma$. \subsection{APS homology} \label{subsec_APS} This subsection briefly reviews the definition of APS homology from \cite{APS}. We will not discuss the gradings of APS homology here because they are not needed for Theorem \ref{thm_main}. For simplicity, we will focus on the case when $\Sigma$ is a compact oriented surface with genus zero. We also slightly modify the notation from \cite{APS} to make it more convenient for our proof of Theorem \ref{thm_main}. Suppose $C$ is an embedded closed $1$--manifold in $\Sigma$. Define a free abelian group $V(C)$ as follows: \begin{enumerate} \item If $\gamma$ is a contractible simple closed curve on $\Sigma$, define $V(\gamma)$ to be the free abelian group generated by $\mathbf{v}_+(\gamma)$ and $\mathbf{v}_-(\gamma)$, where $\mathbf{v}_+(\gamma)$ and $\mathbf{v}_-(\gamma)$ are formal generators associated with $\gamma$. \item If $\gamma$ is a non-contractible simple closed curve, let $\mathfrak{o}$, $\mathfrak{o}'$ be the two orientations of $\gamma$. Define $V(\gamma)$ to be the free abelian group generated by $\mathbf{w}_{\mathfrak{o}}(\gamma)$ and $\mathbf{w}_{\mathfrak{o}'}(\gamma)$, where $\mathbf{w}_{\mathfrak{o}}(\gamma)$ and $\mathbf{w}_{\mathfrak{o}'}(\gamma)$ are formal generators associated with $\gamma$.
\item In general, suppose the connected components of $C$ are $\gamma_1,\dots,\gamma_k$; define $$V(C)= \bigotimes_{i=1}^k V(\gamma_i).$$ \end{enumerate} Suppose a link $L$ in the interior of $[-1,1]\times \Sigma$ is given by a diagram $D$ on $\Sigma$ with $k$ crossings, and fix an ordering of the crossings. For $v=(v_1,v_2,\dots, v_k)\in \{0,1\}^k$, resolving the crossings of $D$ by the sequence of $0$--smoothings and $1$--smoothings (see Figure \ref{fig_01smoothing}) given by $v$ turns $D$ into an embedded closed $1$--manifold in $\Sigma$. Denote the resolved diagram by $D_v$. \begin{figure} \begin{tikzpicture} \draw[thick] (1,-1) to (-1,1); \draw[thick,dash pattern=on 1.3cm off 0.25cm] (1,1) to (-1,-1); \node[below] at (0,-1.1) {A crossing}; \draw[thick] (2,1) to [out=315,in=180] (3,0.3) to [out=0,in=225] (4,1); \draw[thick] (2,-1) to [out=45,in=180] (3,-0.3) to [out=0,in=135] (4,-1); \node[below] at (3,-1.1) {0-smoothing}; \draw[thick] (5,1) to [out=315,in=90] (5.7,0) to [out=270,in=45] (5,-1); \node[below] at (6,-1.1) {1-smoothing}; \draw[thick] (7,1) to [out=225,in=90] (6.3,0) to [out=270,in=135] (7,-1); \end{tikzpicture} \caption{Two types of smoothings}\label{fig_01smoothing} \end{figure} If a $0$--smoothing is changed to a $1$--smoothing at a given crossing, then the resolved diagram changes by merging two circles into one circle, or splitting one circle into two circles. Let $u\in \{0,1\}^k$ be the label of the new resolution; we define a map from $V(D_v)$ to $V(D_u)$ as follows. We can write $D_v = D_v' \cup C$ and $D_u = D_u' \cup C$, where $D_v'$ and $D_u'$ are the components of $D_v$ and $D_u$ that are modified by the smoothing change, and $C$ is the complement. We define a map from $V(D_v')$ to $V(D_u')$; this will induce a map from $V(D_v)$ to $V(D_u)$ by taking the tensor product with the identity map on $V(C)$. In the case that $D_v'$ contains two circles and $D_u'$ contains one circle, write $D_v'= \gamma_1\cup \gamma_2$ and $D_u'=\gamma$.
\begin{enumerate} \item If $\gamma_1$ and $\gamma_2$ are both contractible circles, the map is defined by \begin{align*} \mathbf{v}_+(\gamma_1) \otimes \mathbf{v}_+(\gamma_2) &\mapsto \mathbf{v}_+(\gamma), &\mathbf{v}_+(\gamma_1)& \otimes \mathbf{v}_-(\gamma_2) \mapsto \mathbf{v}_-(\gamma),\\ \mathbf{v}_-(\gamma_1) \otimes \mathbf{v}_+(\gamma_2)&\mapsto \mathbf{v}_-(\gamma), &\mathbf{v}_-(\gamma_1)& \otimes \mathbf{v}_-(\gamma_2) \mapsto 0. \end{align*} \item If $\gamma_1$ is contractible and $\gamma_2$ is non-contractible, the map is defined by $$ \mathbf{v}_+(\gamma_1)\otimes \mathbf{w}_{\mathfrak{o}}(\gamma_2) \mapsto \mathbf{w}_{\mathfrak{o}}(\gamma_2),\quad\quad \mathbf{v}_-(\gamma_1)\otimes \mathbf{w}_\mathfrak{o}(\gamma_2) \mapsto 0 $$ for every orientation $\mathfrak{o}$ of $\gamma_2$. \item If both $\gamma_1$ and $\gamma_2$ are non-contractible and $\gamma$ is contractible, then $\gamma_1$ and $\gamma_2$ must be parallel to each other, and we can identify the orientations of $\gamma_1$ with the orientations of $\gamma_2$. Let $\mathfrak{o}$ and $\mathfrak{o}'$ be the orientations of $\gamma_1$ and use the same notation to denote the corresponding orientations of $\gamma_2$. The merge map is defined by \begin{align*} \mathbf{w}_{\mathfrak{o}}(\gamma_1)\otimes \mathbf{w}_{\mathfrak{o}}(\gamma_2) &\mapsto 0, &\mathbf{w}_{\mathfrak{o}'}(\gamma_1)& \otimes \mathbf{w}_{\mathfrak{o}'}(\gamma_2) \mapsto 0,\\ \mathbf{w}_{\mathfrak{o}}(\gamma_1)\otimes \mathbf{w}_{\mathfrak{o}'}(\gamma_2) &\mapsto \mathbf{v}_-(\gamma), &\mathbf{w}_{\mathfrak{o}'}(\gamma_1)&\otimes \mathbf{w}_{\mathfrak{o}}(\gamma_2) \mapsto \mathbf{v}_-(\gamma). \end{align*} \item If $\gamma_1,\gamma_2,\gamma$ are all non-contractible, define the merge map to be zero.
\end{enumerate} In the case that $D_v'$ contains one circle and $D_u'$ contains two circles, write $D_v'= \gamma$ and $D_u'=\gamma_1\cup \gamma_2$. \begin{enumerate} \item If $\gamma_1$ and $\gamma_2$ are both contractible circles, then $\gamma$ is also contractible, and the map is defined by $$ \mathbf{v}_+(\gamma) \mapsto \mathbf{v}_+(\gamma_1)\otimes \mathbf{v}_-(\gamma_2)+ \mathbf{v}_-(\gamma_1)\otimes \mathbf{v}_+(\gamma_2), \quad \mathbf{v}_-(\gamma) \mapsto \mathbf{v}_-(\gamma_1)\otimes \mathbf{v}_-(\gamma_2). $$ \item If $\gamma_1$ is contractible and $\gamma_2$ is non-contractible, then $\gamma_2$ is parallel to $\gamma$. Identify the orientations of $\gamma$ with the orientations of $\gamma_2$. The split map is defined by $$ \mathbf{w}_{\mathfrak{o}}(\gamma) \mapsto \mathbf{v}_-(\gamma_1)\otimes \mathbf{w}_{\mathfrak{o}}(\gamma_2) $$ for every orientation $\mathfrak{o}$ of $\gamma$ and $\gamma_2$. \item If both $\gamma_1$ and $\gamma_2$ are non-contractible and $\gamma$ is contractible, then $\gamma_1$ and $\gamma_2$ must be parallel to each other, and we can identify the orientations of $\gamma_1$ with the orientations of $\gamma_2$. Let $\mathfrak{o}$ and $\mathfrak{o}'$ be the orientations of $\gamma_1$ and use the same notation to denote the corresponding orientations of $\gamma_2$. The split map is given by $$ \mathbf{v}_+(\gamma) \mapsto \mathbf{w}_{\mathfrak{o}}(\gamma_1)\otimes \mathbf{w}_{\mathfrak{o}'}(\gamma_2) + \mathbf{w}_{\mathfrak{o}'}(\gamma_1)\otimes \mathbf{w}_{\mathfrak{o}}(\gamma_2), \quad \mathbf{v}_-(\gamma) \mapsto 0. $$ \item If all of $\gamma_1,\gamma_2,\gamma$ are non-contractible, define the split map to be zero. \end{enumerate} The above construction defines a map $d_{vu}: V(D_v) \to V(D_u)$ whenever $u$ is obtained from $v$ by changing one coordinate from $0$ to $1$.
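The merge and split rules above are completely combinatorial, so the smallest case can be computed by hand or by machine. The following Python sketch is illustrative and not from the paper: it assumes a hypothetical one-crossing diagram of the unknot whose $0$--resolution has two contractible circles and whose $1$--resolution has one (the Khovanov setting, $\Sigma$ a disk), builds the two-term complex $V\otimes V \to V$ given by the merge map over $\mathbb{Z}/2$, and checks that the homology has total rank $2$, consistent with the rank of the APS homology of an embedded knot.

```python
import itertools

# basis convention for V(gamma), gamma contractible: 0 = v_plus, 1 = v_minus
# merge map on basis vectors, following the contractible-circle rule (mod 2):
#   v+ x v+ -> v+,  v+ x v- -> v-,  v- x v+ -> v-,  v- x v- -> 0
def merge(a, b):
    if (a, b) == (0, 0):
        return [0]            # v+
    if (a, b) in [(0, 1), (1, 0)]:
        return [1]            # v-
    return []                 # v- x v- maps to zero

# matrix of d: F_2^4 -> F_2^2 in the bases
# {v+ x v+, v+ x v-, v- x v+, v- x v-} and {v+, v-} (rows = domain vectors)
d = [[0, 0] for _ in range(4)]
for row, (a, b) in enumerate(itertools.product([0, 1], repeat=2)):
    for target in merge(a, b):
        d[row][target] ^= 1

def rank_mod2(mat):
    # Gaussian elimination over F_2
    rows = [r[:] for r in mat if any(r)]
    rank = 0
    ncols = len(mat[0]) if mat else 0
    for c in range(ncols):
        piv = next((r for r in rows if r[c]), None)
        if piv is None:
            continue
        rows.remove(piv)
        rows = [[x ^ y for x, y in zip(r, piv)] if r[c] else r for r in rows]
        rank += 1
    return rank

r = rank_mod2(d)      # rank of the merge map (= 2)
ker = 4 - r           # dim ker d:   homology at the 0-resolution
coker = 2 - r         # dim coker d: homology at the 1-resolution
print(ker + coker)    # total rank 2, as expected for the unknot
```

The helper names (`merge`, `rank_mod2`) are our own; with a single crossing the sign $(-1)^{\sum v_j}$ is invisible, and over $\mathbb{Z}/2$ signs play no role in any case.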
Let $e_i$ be the $i$--th standard basis vector of $\mathbb{Z}^k$. Define \begin{equation*} \CKh_\Sigma(L)=\bigoplus_{v\in \{0,1\}^k} V(D_v), \end{equation*} and define an endomorphism of $\CKh_\Sigma(L)$ by \begin{equation*} \mathcal{D}_\Sigma= \sum_i \sum_{u-v=e_i} (-1)^{\sum_{i<j\le k}v_j} d_{vu}. \end{equation*} \begin{thm}[\cite{APS}] The map $\mathcal{D}_\Sigma$ satisfies $\mathcal{D}_\Sigma^2=0$. The homology of $(\CKh_\Sigma(L),\mathcal{D}_\Sigma)$ does not depend on the diagram $D$ or the order of the crossings. \end{thm} The homology of $(\CKh_\Sigma(L),\mathcal{D}_\Sigma)$ is the APS homology of $L$. If $R$ is a commutative ring, the APS homology of $L$ with coefficients in $R$ is defined to be the homology of $\CKh_\Sigma(L)\otimes R$ with respect to the differential $\mathcal{D}_\Sigma\otimes \id_R$, and we denote this homology by $\APS(L;R)$. \subsection{Singular instanton homology} We give a brief review of the theory of singular instanton Floer homology developed in \cite{KM:Kh-unknot,KM:YAFT}. Let $Y$ be a closed oriented $3$--manifold, $L$ be a link in $Y$, and $\omega\subset Y$ be an embedded compact $1$--manifold such that $\omega \cap L = \partial \omega$. A closed, oriented, connected, embedded surface $\Sigma\subset Y$ is called \emph{non-integral} (with respect to the triple $(Y,L,\omega)$) if one of the following conditions holds: \begin{enumerate} \item The algebraic intersection number of $\Sigma$ and $L$ is odd. \item $\Sigma$ is disjoint from $L$, and the intersection number of $\Sigma$ and $\omega$ is odd. \end{enumerate} Note that when $\Sigma$ is disjoint from $L$, the mod $2$ intersection number of $\Sigma$ and $\omega$ is well-defined, so condition (2) above makes sense. The triple $(Y,L,\omega)$ is called \emph{admissible} if there exists a non-integral surface in every connected component of $Y$. Let $R$ be a ring.
\emph{Singular instanton homology} assigns an $R$--module $\II(Y,L,\omega,a;R)$ to every admissible triple $(Y,L,\omega)$ and a choice of ``auxiliary data'' $a$ on $(Y,L,\omega)$ (see \cite{KM:Kh-unknot}). The homology $\II(Y,L,\omega, a;R)$ is endowed with a relative $\mathbb{Z}/4$ grading. For different choices of auxiliary data $a_1$ and $a_2$, the homology groups $\II(Y,L,\omega,a_1;R)$ and $\II(Y,L,\omega,a_2;R)$ are isomorphic as $\mathbb{Z}/4$--relatively graded $R$--modules. The isomorphism is canonical up to an overall sign. We will often omit the auxiliary data $a$ from the notation and write the singular instanton homology group as $\II(Y,L,\omega;R)$. We always assume that some auxiliary data is chosen when defining singular instanton homology. If $\omega_1$ and $\omega_2$ have the same fundamental class in $H_1(Y,L;\mathbb{Z}/2)$, then $\II(Y,L,\omega_1;R)\cong \II(Y,L,\omega_2;R)$ as $\mathbb{Z}/4$--relatively graded modules. Suppose $(Y_1,L_1,\omega_1)$ and $(Y_2,L_2,\omega_2)$ are two admissible triples. A \emph{cobordism} from $(Y_1,L_1,\omega_1)$ to $(Y_2,L_2,\omega_2)$ is a quadruple $(W,S,\Omega,\tau)$ such that \begin{enumerate} \item $W$ is an oriented $4$--manifold with boundary. \item $(S,\partial S)\subset (W,\partial W)$ is a properly embedded surface such that the interior of $S$ is disjoint from $\partial W$. \item $(\Omega, \partial \Omega)\subset (W,S\cup \partial W)$ is a tamely embedded surface with corners such that the interior of $\Omega$ is disjoint from $S \cup \partial W$. \item $\tau$ is an orientation-preserving diffeomorphism from $\partial W$ to $(-Y_1)\sqcup Y_2$. \item $\tau(\partial S)= L_1\cup L_2$. \item $\tau(\Omega\cap \partial W) = \omega_1\cup \omega_2$. \end{enumerate} A cobordism $(W,S,\Omega,\tau)$ from $(Y_1,L_1,\omega_1)$ to $(Y_2,L_2,\omega_2)$ induces a homomorphism $$ \II(W,S,\Omega,\tau): \II(Y_1,L_1,\omega_1;R) \to \II(Y_2,L_2,\omega_2;R).
$$ The map $\II(W,S,\Omega,\tau)$ is only well-defined up to an overall sign. Therefore, all identities and commutative diagrams involving cobordism maps only hold up to sign unless specific choices of signs are made. If $(W,S,\Omega,\tau)$ and $(W',S',\Omega',\tau')$ are two cobordisms from $(Y_1,L_1,\omega_1)$ to $(Y_2,L_2,\omega_2)$ such that there exists a diffeomorphism $\eta$ from $(W,S,\Omega)$ to $(W',S',\Omega')$ with $\tau'\circ\eta = \tau$, then $\II(W,S,\Omega,\tau) = \pm \II(W',S',\Omega',\tau')$. From now on, we will omit $\tau$ from the notation when it is clear from context. \begin{rem} It is also possible to define a cobordism map when $S$ has normal self-intersection points and when $\Omega$ has a non-trivial intersection with $S$ in the interior. Since these are not needed in this paper, we will not discuss such constructions here. \end{rem} The cobordism maps satisfy a functoriality property in the following sense. \begin{enumerate} \item If $(W,S,\Omega)$ is a product cobordism, then $\II(W,S,\Omega) = \pm \id$. \item Suppose $(W_1,S_1,\Omega_1)$ is a cobordism from $(Y_1,L_1,\omega_1)$ to $(Y_2,L_2,\omega_2)$ and $(W_2,S_2,\Omega_2)$ is a cobordism from $(Y_2,L_2,\omega_2)$ to $(Y_3,L_3,\omega_3)$. Then the gluing of $(W_1,S_1,\Omega_1)$ and $(W_2,S_2,\Omega_2)$ along $Y_2$ defines a cobordism $(\hat{W},\hat{S},\hat{\Omega})$ from $(Y_1,L_1,\omega_1)$ to $(Y_3,L_3,\omega_3)$. If $Y_2$ has $n$ components and $W_1$, $W_2$ are connected, then $$ \II(\hat{W},\hat{S},\hat{\Omega}) = \pm 2^{n-1} \II(W_2,S_2,\Omega_2)\circ \II(W_1,S_1,\Omega_1). $$ \end{enumerate} \begin{exmp} \label{exmp_diff_induce_iso} If $\varphi$ is a diffeomorphism from $(Y_1,L_1,\omega_1)$ to $(Y_2,L_2,\omega_2)$ that preserves the orientations of $Y_i$, then $\varphi$ defines an isomorphism from $\II(Y_1,L_1,\omega_1)$ to $\II(Y_2,L_2,\omega_2)$ (up to sign) as follows.
Let $(W,S,\Omega)$ be the cobordism obtained by gluing $[-1,0] \times (Y_1,L_1,\omega_1)$ with $[0,1]\times (Y_2,L_2,\omega_2)$ via the map $$\id\times \varphi:\{0\}\times (Y_1,L_1,\omega_1)\to \{0\}\times (Y_2,L_2,\omega_2),$$ then the map induced by $\varphi$ is defined to be the cobordism map $\II(W,S,\Omega)$. If $\varphi$ is the composition of two diffeomorphisms $\varphi_1$ and $\varphi_2$, then the isomorphism defined by $\varphi$ on instanton homology is equal, up to sign, to the composition of the isomorphisms defined by $\varphi_1$ and $\varphi_2$. We will be mostly interested in the case when $\varphi$ is a self-diffeomorphism. If $\varphi$ is homotopic to the identity through self-diffeomorphisms of $(Y,L,\omega)$, then the map induced by $\varphi$ is $\pm \id$. This is because the cobordism that defines this map is diffeomorphic to the trivial product via a diffeomorphism that fixes the boundary. \end{exmp} If $L$ is oriented and $\partial \omega=\emptyset$, there is a special class of auxiliary data such that for all $a_1,a_2$ in this class, the isomorphism from $\II(Y,L,\omega,a_1)$ to $\II(Y,L,\omega,a_2)$ is canonically defined without sign ambiguity\footnote{This is achieved by specifying a canonical reducible connection on the bundle. For details, the reader may refer to \cite{KM:Kh-unknot}; see the paragraph containing Equation (12) and the discussion after Definition 3.7.}. In this case, we will always assume that the auxiliary data is chosen from this special class. If $\varphi:(Y_1,L_1,\omega_1)\to (Y_2,L_2,\omega_2)$ is a diffeomorphism preserving the orientations of $Y_i$ and $L_i$, then the map $\varphi$ induces an isomorphism from $\II(Y_1,L_1,\omega_1)$ to $\II(Y_2,L_2,\omega_2)$ without sign ambiguity. Suppose $(W,S,\Omega)$ is a cobordism from $(Y_1,L_1,\omega_1)$ to $(Y_2,L_2,\omega_2)$, where the $L_i$ are oriented and $\partial \omega_i =\emptyset$.
If $W$ is endowed with an almost complex structure such that $S$ is an almost complex submanifold and $\Omega\cap S=\emptyset$, there is a canonical choice of sign for the cobordism map $\II(W,S,\Omega)$; see \cite[Section 5.1]{KM:Kh-unknot}. When such cobordisms are glued in a way that is compatible with the almost complex structures, the functoriality property holds without sign ambiguity. \begin{rem} There is another setting where the sign of the cobordism map can be fixed. Suppose $(W,S,\Omega)$ is a cobordism from $(Y_1,L_1,\omega_1)$ to $(Y_2,L_2,\omega_2)$, where the $L_i$ are oriented and $\partial \omega_i =\emptyset$. If $Y_1\cong Y_2$, $W\cong [-1,1]\times Y_1$, $\Omega\cap S=\emptyset$, and $S$ is oriented, then there is a canonical choice of sign for the cobordism map $\II(W,S,\Omega)$. This construction is not needed in our paper. \end{rem} From now on, all instanton homology groups will be defined with $\mathbb{C}$ coefficients, and we will omit the coefficient ring from the notation. If $(Y,L,\omega)$ is given by the disjoint union of $(Y_1,L_1,\omega_1)$ and $(Y_2,L_2,\omega_2)$, then the K\"unneth formula (for $\mathbb{C}$ coefficients) gives a canonical isomorphism $$ \II(Y,L,\omega) \cong \II(Y_1,L_1,\omega_1)\otimes \II(Y_2,L_2,\omega_2). $$ \subsection{$\mu$ maps and excision} Suppose $(Y,L,\omega)$ is an admissible triple and $a$ is a fixed choice of auxiliary data. For $\alpha\in H_*(Y;\mathbb{Z})$, there is a homomorphism $$ \muu(\alpha):\II(Y,L,\omega,a)\to \II(Y,L,\omega,a) $$ on singular instanton homology. The $\mu$ map for instanton Floer homology was first introduced in \cite{donaldson1990polynomial, D-Floer}; for the definition of the $\mu$ map in the context of singular instanton Floer homology, the reader may refer to \cite[Section 2.3.2]{Street} and \cite[Section 2.1]{XZ:excision}.
There is a choice of an overall multiplicative constant factor in the definition of the $\mu$ maps; we follow the convention in \cite[Section 2.1]{XZ:excision} and choose the constant to be $-1/4$. Once the auxiliary data is fixed, the $\muu$ map is defined without sign ambiguity. Suppose $a_1$ and $a_2$ are two choices of auxiliary data for $(Y,L,\omega)$ and let $\varphi:\II(Y,L,\omega,a_1)\to \II(Y,L,\omega,a_2)$ be the canonical isomorphism. The map $\varphi$ is only defined up to an overall sign, and we choose an arbitrary sign for $\varphi$. Then $\varphi\circ \muu(\alpha) = \muu(\alpha)\circ \varphi$, and this equation holds without sign ambiguity. If $\alpha$ is contained in the push-forward of $H_*(Y\backslash L;\mathbb{Z})$ to $H_*(Y;\mathbb{Z})$, it is also customary in the literature to write $\muu(\alpha)$ as $\mu(\alpha)$. The $\muu$ maps satisfy the following properties. \begin{property} \label{property_mu_map_intertwine_with_cobordism} Suppose $(W,S,\Omega)$ is a cobordism between two admissible triples $(Y_1,L_1,\omega_1)$ and $(Y_2,L_2,\omega_2)$. Let $\alpha_1\in H_*(Y_1;\mathbb{Z})$, $\alpha_2\in H_*(Y_2;\mathbb{Z})$ be two homology classes whose images in $H_*(W;\mathbb{Z})$ coincide. Once a choice of sign for $\II(W,S,\Omega)$ is fixed, the following equation holds without sign ambiguity: $$ \II(W,S,\Omega)\circ \muu(\alpha_1) = \muu(\alpha_2)\circ \II(W,S,\Omega). $$ \end{property} \begin{cor} \label{cor_mu_map_eigenspace_preserved_by_cobordism} Let $(Y_1,L_1,\omega_1)$, $(Y_2,L_2,\omega_2)$, $(W,S,\Omega)$, $\alpha_1$, $\alpha_2$ be as in Property \ref{property_mu_map_intertwine_with_cobordism}. Then for every $\lambda\in \mathbb{C}$, the map $\II(W,S,\Omega)$ sends the generalized eigenspace of $\muu(\alpha_1)$ in $\II(Y_1,L_1,\omega_1)$ with eigenvalue $\lambda$ to the generalized eigenspace of $\muu(\alpha_2)$ in $\II(Y_2,L_2,\omega_2)$ with eigenvalue $\lambda$.
\end{cor} \begin{property} \label{property_linearity_and_commutativity_of_mu} \begin{enumerate} \item For all $\alpha,\alpha'\in H_*(Y;\mathbb{Z})$, we have $$ \muu(\alpha+\alpha') = \muu(\alpha) + \muu(\alpha'). $$ \item For $\alpha\in H_k(Y;\mathbb{Z})$, $\alpha'\in H_{k'}(Y;\mathbb{Z})$, we have $$ \muu(\alpha) \muu(\alpha')= (-1)^{kk'} \muu(\alpha') \muu(\alpha). $$ \end{enumerate} \end{property} It is often useful to represent $\alpha$ by embedded manifolds in $Y$. To simplify notation, if $\Sigma$ is a closed oriented surface embedded in $Y$, we will write $\muu([\Sigma])$ as $\muu(\Sigma)$. Similarly, if $p\in Y$ is a point, we will write $\muu([p])$ as $\muu(p)$. If $\alpha\in H_{k}(Y;\mathbb{Z})$, then $\muu(\alpha)$ has degree $4-k$ with respect to the relative $\mathbb{Z}/4$ grading. This observation and an elementary argument lead to the following lemma. \begin{lem}[{\cite[Lemma 3.14]{xie2020instantons}}] \label{lem_same_dim_opposite_eigenvalues} Suppose $\alpha\in H_{2}(Y;\mathbb{Z})$. Then for every $\lambda\in \mathbb{C}$, the dimension of the generalized eigenspace of $\muu(\alpha)$ on $\II(Y,L,\omega)$ with eigenvalue $\lambda$ is the same as the dimension of the generalized eigenspace with eigenvalue $-\lambda$. \end{lem} We will also need the following result about the eigenvalues of $\mu$ maps. \begin{prop}[{\cite[Corollary 7.2]{KM:suture}}, {\cite[Proposition 6.1]{XZ:excision}}] \label{prop_eigenvalues_bounded_by_Thurston_norm} Let $\II^{(2)}(Y,L,\omega)$ be the simultaneous generalized eigenspace of $\muu(p)$ with eigenvalue $2$ for all $p\in Y$.
Suppose $\Sigma$ is a non-integral surface of genus $g$ that intersects $L$ transversely at $n$ points with $2g-2+n\ge 0$. Then all eigenvalues of $\muu(\Sigma)$ on $\II^{(2)}(Y,L,\omega)$ are contained in the set $$\{2g-2+n-2k\,|\,k\in \mathbb{Z}, \, 0\le k\le 2g-2+n\}.$$ \end{prop} The above proposition motivated the following definition in \cite{XZ:excision}. \begin{defn} Let $(Y,L,\omega)$ be an admissible triple. Suppose $\Sigma$ is a closed oriented embedded surface in $Y$. Let $\Sigma_1,\dots,\Sigma_l$ be the connected components of $\Sigma$. Assume $\Sigma_i$ has genus $g_i$ and intersects $L$ transversely at $n_i$ points, and assume that $2g_i-2+n_i\ge 0$ for all $i$. Define $\II(Y,L,\omega|\Sigma)\subset \II^{(2)}(Y,L,\omega)$ to be the simultaneous generalized eigenspace of $$\muu(\Sigma_1),\dots,\muu(\Sigma_l)$$ with eigenvalues $$2g_1-2+n_1,\dots,2g_l-2+n_l.$$ \end{defn} Note that in the above definition, we do not require $\Sigma_i$ to be non-integral. This turns out to be convenient on some occasions. If $\Sigma$ is a torus that is disjoint from $L$ and intersects $\omega$ transversely at $1$ point, then by Proposition \ref{prop_eigenvalues_bounded_by_Thurston_norm} above, we have $\II(Y,L,\omega|\Sigma)= \II^{(2)}(Y,L,\omega)$. The group $\II(Y,L,\omega)$ satisfies the following property, which is usually referred to as the excision theorem. \begin{prop}[{\cite[Theorem 7.7]{KM:suture}, \cite[Proposition 6.4]{XZ:excision}}] \label{prop_excision} Suppose $(Y,L,\omega)$ is an admissible triple. Suppose $\Sigma_1,\Sigma_2\subset Y$ are two disjoint non-integral surfaces such that both of them have genus $g$, intersect $L$ transversely at $n$ points, are disjoint from $\partial \omega$, and intersect $\omega$ transversely at $m$ points. Suppose either $n=0$ and $m$ is odd, or $n$ is odd and $n\ge 3$.
Let $\varphi:\Sigma_1\to\Sigma_2$ be an orientation-preserving diffeomorphism that maps $\Sigma_1\cap L$ to $\Sigma_2\cap L$ and maps $\Sigma_1\cap \omega$ to $\Sigma_2\cap \omega$. Write $\Sigma_1\cup \Sigma_2$ as $\Sigma$. Let $(\widetilde{Y}, \widetilde{L}, \tilde{\omega})$ be the triple obtained by cutting $Y$ open along $\Sigma=\Sigma_1\cup \Sigma_2$ and gluing back the boundary components via the map $\varphi$, and let $\widetilde{\Sigma}\subset\widetilde{Y}$ be the image of $\Sigma$ after gluing. Then the above excision defines a cobordism $(W,S,\Omega)$ from $(Y,L,\omega)$ to $(\widetilde{Y},\widetilde{L},\tilde{\omega})$, which induces an isomorphism from $\II(Y,L,\omega|\Sigma)$ to $\II(\widetilde{Y},\widetilde{L},\tilde{\omega}| \widetilde{\Sigma}).$ \end{prop} We will also need the following computation of instanton Floer homology. Recall that $t_*\in S^1$ is the image of $\{-1,1\}\subset[-1,1]$ under the quotient map. \begin{prop}[{\cite[Proposition 7.8]{KM:suture}, \cite[Proposition 6.7]{XZ:excision}}] \label{prop_product_space_homology_rank_1} Suppose $\Sigma$ is a closed oriented surface of genus $g$, and let $p_1,\dots,p_n, q_1,\dots,q_m$ be distinct points on $\Sigma$. Let $(u,\partial u)\subset (\Sigma, \{p_1,\dots,p_n\})$ be an embedded compact $1$--manifold in $\Sigma$ whose interior is disjoint from $\{p_1,\dots,p_n,q_1,\dots,q_m\}$. Assume $2g-2+n\ge 0$, and assume that either $n=0$ and $m$ is odd, or $n$ is odd and $n\ge 3$. Then we have $$\II(S^1\times \Sigma, S^1\times \{p_1,\dots,p_n\}, S^1\times \{q_1,\dots,q_m\}\cup \{t_*\}\times u|\{t_*\}\times \Sigma)\cong \mathbb{C}.$$ \end{prop} \subsection{Annular instanton Floer homology} Recall that $A= [-1,1]\times S^1$ denotes the oriented annulus. Suppose $L$ is a link in the interior of $[-1,1]\times A$. The \emph{annular instanton homology} of $L$ was introduced in \cite{AHI}, and it is defined by the following steps.
\begin{enumerate} \item Fix an orientation on the standard $S^2$. Write $S^2$ as the union of two disks $D_1\cup D_2$ with $D_1\cap D_2\cong S^1$. \item Embed $[-1,1]\times [-1,1]$ into $D_1$ by a fixed orientation-preserving map. \item View $L\subset [-1,1]\times A = [-1,1]\times [-1,1]\times S^1$ as a link in $D_1\times S^1$ via the above embedding. \item Take two points $p,q$ in $D_2$, and let $c$ be an arc connecting $p$ and $q$ in $D_2$. \item Define $\AHI(L)$ to be the singular instanton homology $$\II\big(S^2\times S^1, L\cup (\{p,q\}\times S^1), c\times\{t_*\}\big).$$ \end{enumerate} Let $\Sigma = S^2\times\{t_*\}\subset S^2\times S^1$ be endowed with the same orientation as the standard $S^2$. It was proved in \cite[Proposition 4.5]{AHI} that the eigenvalues of $\muu(\Sigma)$ on $\AHI(L)$ are all integers. The decomposition of $\AHI(L)$ into the generalized eigenspaces of $\muu(\Sigma)$ defines a grading on $\AHI(L)$, which is called the f-grading. By Lemma \ref{lem_same_dim_opposite_eigenvalues} (and also by \cite[Proposition 4.5]{AHI}), for each $i\in \mathbb{Z}$, the component of $\AHI(L)$ with f-degree $i$ has the same dimension as the component of $\AHI(L)$ with f-degree $-i$. If $L_1$ and $L_2$ are two links in the interior of $[-1,1]\times A$ and $S$ is a link cobordism from $L_1$ to $L_2$, then $S$ defines a cobordism map from $\AHI(L_1)$ to $\AHI(L_2)$, which we denote by $\AHI(S)$. \subsection{Sutured instanton Floer homology for tangles} We will also need the sutured instanton Floer invariant for tangles, which was introduced in \cite{XZ:excision}. If $(M,\gamma)$ is a balanced sutured manifold and $T$ is a balanced tangle in $(M,\gamma)$, we will use $\SHI(M,\gamma,T)$ to denote the sutured instanton Floer homology of $(M,\gamma,T)$ as defined in \cite[Definition 7.6]{XZ:excision}. For the definition of balanced sutured manifolds and balanced tangles, we refer the reader to \cite[Section 7]{XZ:excision}.
When $T=\emptyset$, it is customary to denote $\SHI(M,\gamma,\emptyset)$ by $\SHI(M,\gamma)$. We remark here that $\SHI(M,\gamma,T)$ is only defined as an \emph{isomorphism class} of $\mathbb{C}$--vector spaces instead of an actual vector space. The reader may refer to \cite{baldwin2015naturality} for a general discussion of the naturality problem of sutured homology. We will be mostly interested in the case when $(M,\gamma)=([-1,1]\times \Sigma,\{0\}\times \partial \Sigma)$, where $\Sigma$ is a compact oriented surface with boundary, and $T$ is an embedded link in the interior of $M$. In this case, there is an explicit description of $\SHI(M,\gamma,T)$, given as follows. \begin{prop} \label{prop_SiHI_iso_SHI} Let $\Sigma$, $T$ be as above. Let $F$ be a compact surface such that there is an orientation-reversing diffeomorphism $\tau:\partial F\to \partial \Sigma$. We also assume that $F$ has no disk components. Let $R=F\cup_\tau \Sigma$, let $p$ be a point on $F$, and let $c$ be an arbitrary simple closed curve on $R$ such that there exists a simple closed curve $c'\subset F$ intersecting $c$ transversely at one point. Then we have \begin{align} &\SHI([-1,1]\times \Sigma,\{0\}\times \partial \Sigma,T) \label{eqn_SiHI_iso_SHI_0} \\ \cong &\II(S^1\times R, T, S^1\times \{p\}|\{t_*\}\times R) \label{eqn_SiHI_iso_SHI_1} \\ \cong &\II(S^1\times R, T, \{t_*\}\times c|\{t_*\}\times R), \label{eqn_SiHI_iso_SHI_2} \end{align} where $T$ is viewed as a subset of $S^1\times R$ via the embedding of $(-1,1)\times \Sigma$ into $S^1\times R$. \end{prop} Proposition \ref{prop_SiHI_iso_SHI} follows from standard excision arguments. We briefly review the proof here by listing a few references. When $F$ is connected, the fact that \eqref{eqn_SiHI_iso_SHI_1} and \eqref{eqn_SiHI_iso_SHI_2} are isomorphic and that their isomorphism class does not depend on the choice of $F$ follows verbatim from the proof of \cite[Theorem 4.4]{KM:suture}.
The case when $F$ is disconnected follows from the discussion in \cite[Theorem 4.4]{KM:suture}. The fact that \eqref{eqn_SiHI_iso_SHI_1} and \eqref{eqn_SiHI_iso_SHI_2} are both isomorphic to \eqref{eqn_SiHI_iso_SHI_0} follows from \cite[Remark 7.8]{XZ:excision} when $T=\emptyset$, and the same argument extends verbatim to the case when $T$ is a link in the interior of $M$. \section{Proof of Theorem \ref{thm_main_instanton}} \label{sec_two_nonvanishing_grading} The main purpose of this section is to establish Theorem \ref{thm_main_instanton}. The arguments in this section rely heavily on the techniques of sutured manifold decomposition. We will use the notation from \cite{Schar} for sutured manifolds and sutured hierarchies. \subsection{Product sutured manifold and gradings}\label{subsection: surfaces induce gradings on SHI} Suppose $\Sigma$ is a connected oriented surface with boundary. Let $(M,\gamma)$ be the sutured manifold given by $$(M,\gamma)=([-1,1]\times \Sigma,\{0\}\times \partial \Sigma).$$ Suppose $L$ is a link in the interior of $M$. Suppose $\alpha$ is an oriented properly embedded $1$--submanifold of $\Sigma$. We construct a ``grading'' on $\SHI(M,\gamma,L)$ associated to $\alpha$ as follows. Pick a connected oriented surface $F$ so that there is an orientation-reversing diffeomorphism $$\tau:\partial F\to \gamma,$$ and assume that the genus of $F$ is positive.
Pick a point $p\in F$ and an oriented properly embedded $1$--submanifold $\alpha'$ on $F\backslash\{p\}$ so that $\tau$ restricts to an orientation-reversing diffeomorphism $$\tau:\partial \alpha'\to \partial\alpha.$$ To construct the grading, write $R=\Sigma\cup_\tau F$ and recall that by Proposition \ref{prop_SiHI_iso_SHI}, $$\SHI(M,\gamma,L)\cong \II(S^1\times R, L, S^1\times \{p\}|\{t_*\}\times R).$$ Now we define \begin{equation}\label{eq: grading associated to al} \SHI(M,\gamma,L,\alpha,i)={\rm Eig}(\muu(S^1\times(\alpha\cup\alpha')),i)\cap \II(S^1\times R, L, S^1\times \{p\}|\{t_*\}\times R), \end{equation} where the intersection is taken as subspaces of $\II(S^1\times R, L, S^1\times \{p\})$, and ${\rm Eig}(\muu(S^1\times(\alpha\cup\alpha')),i)$ denotes the generalized eigenspace of the map $\muu(S^1\times(\alpha\cup\alpha'))$ with eigenvalue $i$. \begin{lem}\label{lem: grading is well defined} The isomorphism class of $\SHI(M,\gamma,L,\alpha,i)$ is independent of the choice of $F$ and $\alpha'$. \end{lem} \begin{proof} The proof that the isomorphism class of $\SHI(M,\gamma,L,\alpha,i)$ remains the same when $F$ is changed to $F\# T^2$ follows from the same argument as \cite[Theorem 4.4]{KM:suture}. To show that it is independent of the choice of $\alpha'$, we first claim that if $\beta$ is a non-separating simple closed curve on $R$ such that $p\notin \beta$ and $(S^1\times \beta) \cap L = \emptyset$, then the action of $\muu(S^1\times \beta)$ on $$\II(S^1\times R, L, S^1\times \{p\}|\{t_*\}\times R)$$ is nilpotent. To prove the claim, let $\beta'$ be a simple closed curve on $R$ intersecting $\beta$ transversely at one point. Then by the excision theorem (Proposition \ref{prop_excision}), we have \begin{align*} &\II(S^1\times R, L, S^1\times \{p\}|\{t_*\}\times R) \\ \cong &\II(S^1\times R, L, S^1\times \{p\}\cup \{t_*\}\times \beta'|\{t_*\}\times R) \otimes \II(S^1\times R, \emptyset, S^1\times \{p\}\cup \{t_*\}\times \beta'|\{t_*\}\times R).
\end{align*} The above isomorphism is defined by an excision cobordism map. By Proposition \ref{prop_eigenvalues_bounded_by_Thurston_norm}, the action of $\muu(S^1\times \beta)$ is nilpotent on $\II(S^1\times R, L, S^1\times \{p\}\cup \{t_*\}\times \beta'|\{t_*\}\times R)$ and on $\II(S^1\times R, \emptyset, S^1\times \{p\}\cup \{t_*\}\times \beta'|\{t_*\}\times R)$. Hence by Property \ref{property_mu_map_intertwine_with_cobordism}, the action of $\muu(S^1\times \beta)$ is also nilpotent on $\II(S^1\times R, L, S^1\times \{p\}|\{t_*\}\times R)$, and the claim is proved. Now suppose $\alpha''$ is another properly embedded oriented $1$--submanifold of $F$ such that $\partial \alpha'' = \partial \alpha'$ as oriented manifolds. Then $$[S^1\times (\alpha\cup\alpha')]=[S^1\times (\alpha\cup\alpha'')]+[S^1\times(\alpha'\cup-\alpha'')]\in H_2(S^1\times R).$$ Note that $S^1\times(\alpha'\cup-\alpha'')$ is homologous to a union of tori of the form $S^1\times \beta_i$, where each $\beta_i$ is a non-separating simple closed curve on $R$ such that $p\notin \beta_i\subset F$. By the previous discussion, we conclude that $\muu([S^1\times(\alpha'\cup-\alpha'')])$ is nilpotent. Therefore the desired result follows from Property \ref{property_linearity_and_commutativity_of_mu}. \end{proof} By \cite[Lemma 3.16]{xie2020instantons} (see also Lemma \ref{lem_grading_SiHI_range} below), all eigenvalues of $\muu(S^1\times(\alpha\cup\alpha'))$ on $\II(S^1\times R, L, S^1\times \{p\}|\{t_*\}\times R)$ are integers. Therefore we have \begin{lem} \label{lem_sum_gradings_SHI} $\dim \SHI(M,\gamma,L) = \sum_{i\in \mathbb{Z}} \dim \SHI(M,\gamma,L,\alpha,i). $ \qed \end{lem} We view the groups $\SHI(M,\gamma,L,\alpha,i)$ ($i\in \mathbb{Z}$) as a ``grading'' on $\SHI(M,\gamma,L)$ in the sense that they define a partition of the dimension of $\SHI(M,\gamma,L)$.
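As a degenerate sanity check of the grading just defined (an illustration of ours, not taken from the references): if $\alpha=\emptyset$, we may take $\alpha'=\emptyset$, so $\muu(S^1\times(\alpha\cup\alpha'))=\muu(0)=0$ by the linearity in Property \ref{property_linearity_and_commutativity_of_mu}. The generalized eigenspace of the zero map with eigenvalue $0$ is the whole space, so the grading is concentrated in degree $0$:
$$\SHI(M,\gamma,L,\emptyset,i)\cong\begin{cases} \SHI(M,\gamma,L), & i=0,\\ 0, & i\neq 0,\end{cases}$$
which is consistent with Lemma \ref{lem_sum_gradings_SHI}.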
\begin{defn} \label{def_groomed} We say an oriented properly embedded $1$--submanifold $\alpha\subset\Sigma$ is {\it groomed} if it satisfies one of the following two conditions. \begin{enumerate} \item [(1)] $\alpha$ has no closed component, and for every component $\sigma$ of $\partial \Sigma$, all intersections between $\alpha$ and $\sigma$ are of the same sign. \item [(2)] Every component of $\alpha$ is closed, and $\alpha$ consists of parallel, coherently oriented non-separating simple closed curves. \end{enumerate} \end{defn} \begin{rem} The term \emph{groomed} first appeared in \cite{G:Sut-2} to describe surfaces in $(M,\gamma)$. The $1$--manifold $\alpha$ is groomed in the sense of Definition \ref{def_groomed} if and only if $[-1,1]\times \alpha$ is a groomed surface in $([-1,1]\times \Sigma,\{0\}\times \partial \Sigma)$ in the sense of \cite{G:Sut-2}. \end{rem} The main result of this section is the following. \begin{prop}\label{prop: two non-zero gradings} Suppose $\Sigma$ is a compact connected oriented surface with non-empty boundary, and suppose $\alpha\subset \Sigma$ is groomed. Let $$(M,\gamma)=([-1,1]\times \Sigma,\{0\}\times \partial \Sigma).$$ Suppose $L\subset {\rm int}(M)$ is a non-empty link so that \begin{enumerate} \item $(M,\gamma)$ is $L$--taut, \item every product annulus in $M\backslash L$ is trivial (cf. \cite[Definition 4.1]{Schar}), \item every product disk in $M\backslash L$ is boundary parallel (cf. \cite[Definition 4.1]{Schar}). \end{enumerate} Then there are integers $i_+\neq i_-$ so that $$\SHI(M,\gamma,L,\alpha,i_+)\neq0{\rm~and~}\SHI(M,\gamma,L,\alpha,i_-)\neq0.$$ \end{prop} \begin{proof} If $L$ can be isotoped to a disjoint union of non-empty links $L_1\subset (0,1)\times \Sigma$ and $L_2\subset (-1,0)\times \Sigma$, then $(M,\gamma)$ is both $L_1$--taut and $L_2$--taut. By \cite[Theorem 7.12]{XZ:excision}, $\SHI(M,\gamma,L_1)$ and $\SHI(M,\gamma,L_2)$ are both non-vanishing.
Hence by the excision theorem and Property \ref{property_mu_map_intertwine_with_cobordism}, one only needs to prove the desired property for $L_1$ and $L_2$. As a result, we may assume without loss of generality that $L$ cannot be isotoped to a disjoint union of $L_1$ and $L_2$ as above. This is equivalent to the statement that there are no non-trivial horizontal surfaces in $(M,\gamma)$ disjoint from $L$ (cf. \cite[Definition 2.17]{juhasz2010polytope}). Let $S_{\alpha}=[-1,1]\times \alpha$. Let $N\subset \partial M$ be the union of $[-1,1]\times \partial \Sigma$ and a small tubular neighborhood of $\partial S_\alpha$. Applying \cite[Theorem 2.5]{Schar} to $S_\alpha$ itself and to $S_\alpha$ with the opposite orientation, we obtain two surfaces $S_+$ and $S_-$ inside $M$ so that the following conditions hold: \begin{itemize} \item [(i)] $\partial S_\pm \subset N$, the intersection $\partial S_\pm\cap ([-1,1]\times \partial \Sigma)$ only contains essential circles and essential arcs in $[-1,1]\times \partial \Sigma$, and $\partial S_\pm \cap R(\gamma) = (\pm S_\alpha) \cap R(\gamma)$.
\item [(ii)] There exist positive integers $k_{\pm}$ so that \begin{equation} \label{eqn_homology_S_pm} [S_{\pm},\partial S_{\pm}]=\pm[S_{\alpha},\partial S_{\alpha}]+k_{\pm}\cdot [R_+(\gamma),\partial R_+(\gamma)]\in H_2(M,N). \end{equation} \item [(iii)] $S_+\cap R_{\pm}(\gamma)=-S_-\cap R_{\pm}(\gamma)=\{\pm1\}\times\alpha.$ \item [(iv)] There are taut sutured manifold decompositions $$(M,\gamma,L)\stackrel{S_+}{\leadsto}(M_+,\gamma_+,T_+)~{\rm and~}(M,\gamma,L)\stackrel{S_-}{\leadsto}(M_-,\gamma_-,T_-).$$ \end{itemize} \begin{rem} In the original statement of \cite[Theorem 2.5]{Schar}, Equation \eqref{eqn_homology_S_pm} was replaced with the weaker statement $$[S_{\pm},\partial S_{\pm}]=\pm[S_{\alpha},\partial S_{\alpha}]+k_{\pm}\cdot [R_+(\gamma),\partial R_+(\gamma)]\in H_2(M,\partial M),$$ but the proof in \cite{Schar} actually constructs surfaces $S_{\pm}$ satisfying the stronger property. This observation was also utilized in the proof of \cite[Theorem 6.1]{juhasz2010polytope}. \end{rem} Note that the arc components of $S_+\cap ([-1,1]\times \partial\Sigma)$ come from the intersection points in $\partial\alpha\cap \partial\Sigma$. The assumption that $\alpha$ is groomed then implies that all arc intersections of $S_+$ with a given component of $[-1,1]\times\partial\Sigma$ are parallel and coherently oriented. For closed components of $S_+\cap ([-1,1]\times\partial \Sigma)$, if two such components are oriented oppositely, we can glue an annulus to them to remove both components without affecting the properties listed above. As a result, we can assume that for any component $\sigma$ of $\partial \Sigma$, the intersection $S_+\cap([-1,1]\times\sigma)$ consists of parallel and coherently oriented essential simple closed curves or arcs. The same statement applies to $S_-$ as well.
By \eqref{eqn_homology_S_pm}, we have $$[\partial S_{\pm}]=[\pm\partial S_{\alpha}]+k_{\pm}\cdot [\gamma]\in H_1(N).$$ By Condition (i) above, we have $\partial S_{\pm}\cap R(\gamma)=\{-1,1\}\times (\pm\alpha)$, so $\partial S_{\pm}$ can be obtained from the union of $\pm\partial S_{\alpha}$ and $k_{\pm}$ copies of $\gamma$ by oriented resolutions at their intersections. Suppose $F$ is an auxiliary surface for $(M,\gamma)$ and $\alpha'\subset F$ is chosen as in the construction of the grading associated to $\alpha$. Write $$R=\Sigma\cup F~{\rm and}~\widetilde{S}_{\alpha}=[-1,1]\times(\alpha\cup\alpha').$$ We extend $S_{\pm}$ to closed surfaces inside $S^1\times R$ as follows. Inside $[-1,1]\times F$, we perform a cut-and-paste surgery on $k_{\pm}$ parallel copies of $F$ and $[-1,1]\times\alpha'$, i.e., we cut these surfaces open along their intersections and reglue in an orientation-preserving way to obtain a properly embedded surface. Then we glue the resulting surface to $S_{\pm}$ via the identification of $[-1,1]\times\partial \Sigma$ and $[-1,1]\times\partial F$. Let $\widetilde{S}_{\pm}$ be the resulting properly embedded surface inside $[-1,1]\times R$. Then we have $$\partial \widetilde{S}_{\pm}\cap\{-1,1\}\times R=\pm\widetilde{S}_{\alpha}\cap\{-1,1\}\times R=\{-1,1\}\times(\alpha\cup\alpha').$$ As a result, when we glue $\{1\}\times R$ to $\{-1\}\times R$, the surfaces $\widetilde{S}_{\pm}$ are glued to become closed surfaces $\widebar{S}_{\pm}$ inside $S^1\times R$. Let $\widebar{S}_\alpha = S^1\times (\alpha\cup \alpha').$ Condition (ii) for $S_{\pm}$ and our construction of $\widebar{S}_{\pm}$ then imply that \begin{equation}\label{eq: relation between S_pm and S_al} [\widebar{S}_{\pm}]=\pm[\widebar{S}_{\alpha}]+k_{\pm}\cdot [R]\in H_2(S^1\times R). \end{equation} Recall that by definition, the action of $\muu(\{t_*\}\times R)$ on $\II(S^1\times R,L,S^1\times\{p\}|\{t_*\}\times R)$ has only one eigenvalue, namely $2g(R)-2$.
By Condition (iv) above and \cite[Theorem 2.14]{LXZ-unknot}, we have \begin{equation*}\begin{aligned} 0&\neq \SHI(M_{\pm},\gamma_{\pm},T_{\pm})\\ &\cong \II(S^1\times R,L,S^1\times\{p\}|\{t_*\}\times R)\cap\Eig(\mu^{orb}(\widebar{S}_{\pm}),|S_{\pm}\cap L|-\chi(\widebar{S}_{\pm}))\\ &\cong \II(S^1\times R,L,S^1\times\{p\}|\{t_*\}\times R)\cap\Eig(\mu^{orb}(\widebar{S}_{\alpha}),|S_{\pm}\cap L|-\chi(\widebar{S}_{\pm})+k_{\pm}\cdot \chi(R))\\ &=\SHI\bigg(M,\gamma,L,\alpha,\pm\big(|S_{\pm}\cap L|-\chi(\widebar{S}_{\pm})+k_{\pm}\cdot \chi(R)\big)\bigg). \end{aligned}\end{equation*} Hence if we take $$i_{\pm}=\pm\big(|S_{\pm}\cap L|-\chi(\widebar{S}_{\pm})+k_{\pm}\cdot \chi(R)\big),$$ then $$\SHI(M,\gamma,L,\alpha,i_{\pm})\neq0.$$ It remains to show that $i_+\neq i_-$. If $i_+=i_-$, then we have \begin{align} &(|S_{+}\cap L|-\chi(\widebar{S}_{+})+k_+\cdot \chi(R))=-(|S_{-}\cap L|-\chi(\widebar{S}_{-})+k_-\cdot \chi(R)) \nonumber \\ \Rightarrow~&|S_+\cap L|+|S_-\cap L|-\chi(\widebar{S}_{+})-\chi(\widebar{S}_{-})=-(k_++k_-)\cdot \chi(R) \nonumber \\ \Rightarrow~&|S_+\cap L|+|S_-\cap L|-\chi({S}_{+})-\chi({S}_{-})+|S_{+}\cap \gamma|=-(k_++k_-)\cdot \chi(R_+(\gamma)), \label{eqn_i+=i-_implied} \end{align} where in the last step we used the construction of $\widebar{S}_{\pm}$ to deduce that $$\chi(\widebar{S}_{\pm})=\chi(S_{\pm})-\frac{1}{2}|S_{\pm}\cap \gamma|+k_{\pm}\cdot \chi(F);$$ also by construction, we have $|S_+\cap\gamma|=|S_-\cap\gamma|$. Equation \eqref{eqn_i+=i-_implied} directly contradicts Lemma \ref{lem: juhasz's ineq} below. Hence the proof is finished. \end{proof} We postpone the proof of Lemma \ref{lem: juhasz's ineq} to Section \ref{subsec: key inequality}. In the next section, we use Proposition \ref{prop: two non-zero gradings} to prove Theorem \ref{thm_main_instanton}.
\subsection{Proof of Theorem \ref{thm_main_instanton} assuming Proposition \ref{prop: two non-zero gradings}} \begin{prop} \label{prop_SHI_rank_greater_than_2_if_no_trivial_product} Suppose $\Sigma$ is a compact connected oriented surface with non-empty boundary. Assume $\Sigma$ satisfies at least one of the following two conditions: \begin{enumerate} \item $\Sigma$ has at least three boundary components. \item $\Sigma$ has genus at least one. \end{enumerate} Let $(M,\gamma)=([-1,1]\times \Sigma,\{0\}\times \partial \Sigma).$ Suppose $L\subset {\rm int}(M)$ is a link so that $(M,\gamma)$ is $L$--taut, every product annulus in $M\backslash L$ is trivial, and every product disk in $M\backslash L$ is boundary parallel. Then $$\dim \SHI(M,\gamma,L)>2.$$ \end{prop} \begin{proof} Assume for contradiction that $\dim \SHI(M,\gamma,L)\le 2$. Take two oriented properly embedded $1$--manifolds $\alpha_1$ and $\alpha_2$ in $\Sigma$ as follows: \begin{enumerate} \item If $\Sigma$ has at least three boundary components, let $\alpha_1$ and $\alpha_2$ be two properly embedded arcs so that $\alpha_1\cap\alpha_2=\emptyset$ and $\partial \alpha_1\cup\partial \alpha_2$ intersects at least three boundary components of $\partial\Sigma$. \item Otherwise, $\Sigma$ has genus at least one, and we let $\alpha_1$ and $\alpha_2$ be a pair of simple closed curves that intersect transversely at one point. \end{enumerate} By construction, $\alpha_1$ and $\alpha_2$ are both groomed. Then by Proposition \ref{prop: two non-zero gradings}, for $j=1,2$, there are integers $i_{j,+}\neq i_{j,-}$ so that $$\SHI(M,\gamma,L,\alpha_j,i_{j,+})\neq0{\rm~and~}\SHI(M,\gamma,L,\alpha_j,i_{j,-})\neq0.$$ Note that we can choose suitable auxiliary data so that $S_{\alpha,j}=[-1,1]\times \alpha_j$ extends to closed surfaces $\widebar{S}_{\alpha,j}$ inside the same closure $Y=S^1\times (\Sigma\cup F)$ for both $j$.
Since the actions of $\muu(\widebar{S}_{\alpha,1})$ and $\muu(\widebar{S}_{\alpha,2})$ commute with each other, after swapping $i_{j,+}$ and $i_{j,-}$ if necessary, we may assume that $$\SHI(M,\gamma,L,(\alpha_1,\alpha_2),(i_{1,+},i_{2,+}))\cong \SHI(M,\gamma,L,(\alpha_1,\alpha_2),(i_{1,-},i_{2,-}))\cong\mathbb{C},$$ where $\SHI(M,\gamma,L,(\alpha_1,\alpha_2),(\lambda_1,\lambda_2))$ denotes the intersection of $$\SHI(M,\gamma,L,\alpha_1,\lambda_1) ~\text{and}~ \SHI(M,\gamma,L,\alpha_2,\lambda_2)$$ as subspaces of $\II(S^1\times (\Sigma\cup F), L, S^1\times \{p\}|\{t_*\}\times (\Sigma\cup F))$. Pick co-prime integers $r$ and $s$ so that $$r\,( i_{1,+}- i_{1,-})=s\,( i_{2,-}- i_{2,+}).$$ If $r>0$, let $r\cdot \alpha_1$ be $r$ parallel copies of $\alpha_1$; if $r<0$, let $r\cdot \alpha_1$ be $|r|$ copies of $-\alpha_1$. Define $s\cdot \alpha_2$ similarly. If $\alpha_1$ and $\alpha_2$ are arcs, let $\alpha_3$ be the union of $r\cdot \alpha_1$ and $s\cdot \alpha_2$. If $\alpha_1$ and $\alpha_2$ are simple closed curves, let $\alpha_3$ be the union of $r\cdot \alpha_1$ and $s\cdot \alpha_2$ with singularities resolved at all intersections. Then, for $i_3=r\,i_{1,+}+s\,i_{2,+}=r\,i_{1,-}+s\,i_{2,-}$, we have \begin{equation} \label{eq: only one grading} \SHI(M,\gamma,L,\alpha_3,i_3)\cong\mathbb{C}^2\cong \SHI(M,\gamma,L). \end{equation} Note that if $\alpha_1$ and $\alpha_2$ are simple closed curves that intersect transversely at one point, then $\alpha_3$ is a non-separating simple closed curve on $\Sigma$, which is groomed. If $\alpha_1$, $\alpha_2$ are arcs such that $\partial \alpha_1\cup \partial \alpha_2$ intersects each component of $\partial \Sigma$ with the same sign, then $\alpha_3$ is also groomed. If $\alpha_3$ is groomed, then \eqref{eq: only one grading} and the assumption that $\dim \SHI(M,\gamma,L)\le 2$ contradict Proposition \ref{prop: two non-zero gradings}. If $\alpha_3$ is not groomed, then both $\alpha_1$ and $\alpha_2$ are arcs.
Let $\alpha'_1={\rm sign}(r)\cdot \alpha_1$ and $\alpha'_2={\rm sign}(s)\cdot \alpha_2$. Then there is a unique component $\sigma$ of $\partial \Sigma$ that contains two points in $\partial \alpha_1\cup \partial \alpha_2$, and the intersections $\sigma \cap \alpha_1'$ and $\sigma\cap \alpha_2'$ have opposite signs. Suppose $q_1$ and $q_2$ are two intersections of $\sigma$ and $\alpha_3$ with opposite signs, and assume that $q_1$ and $q_2$ are adjacent on $\sigma$. Let $\delta$ be the part of $\sigma$ between $q_1$ and $q_2$ that does not contain any other intersections of $\sigma$ with $\alpha_3$. We can glue $\delta$ to $\alpha_3$ and push it into the interior of $\Sigma$ to obtain a new collection of arcs $\alpha_4$, as in Figure \ref{fig: alpha}, so that $\partial \alpha_3=\partial \alpha_4\cup\{q_1,q_2\}.$ \begin{figure}[h] \begin{overpic}[width=0.6\textwidth]{./figures/Alpha.pdf} \put(26,1){$\alpha_3$} \put(48,32){$\partial \Sigma$} \put(10,36){$q_1$} \put(27,36){$q_2$} \put(63,36){$q_1$} \put(80,36){$q_2$} \put(72,35){$\delta$} \put(80,1){$\alpha_4$} \end{overpic} \caption{Modifying $\alpha_3$ to obtain $\alpha_4$}\label{fig: alpha} \end{figure} Now we show that $\alpha_3$ and $\alpha_4$ induce the same grading on $\SHI(M,\gamma,L)$. Pick an auxiliary surface $F$ for $(M,\gamma)$ and pick a collection of arcs $\alpha_3'$ on $F$ so that $\partial \alpha_3'$ is glued to $\partial \alpha_3$. By Lemma \ref{lem: grading is well defined}, we can choose $\alpha_3'$ arbitrarily. Choose $\alpha_3'$ so that one component $\beta$ of $\alpha_3'$ connects $q_1$ and $q_2$ and is boundary parallel to $\delta$ in $F$. Now take $\alpha_4'=\alpha_3'\backslash\beta$. Using $\alpha_3'$ and $\alpha_4'$, we extend $[-1,1]\times \alpha_3$ and $[-1,1]\times \alpha_4$ to closed surfaces $S^1\times (\alpha_3\cup\alpha_3')$ and $S^1\times (\alpha_4\cup\alpha_4')$ inside the closure $S^1\times (\Sigma\cup F)$ of $(M,\gamma,L)$.
It is clear that $$[S^1\times (\alpha_3\cup\alpha_3')]=[S^1\times (\alpha_4\cup\alpha_4')]\in H_2(S^1\times (\Sigma\cup F)),$$ so the gradings associated to $\alpha_3$ and $\alpha_4$ are the same. This process can be repeated until we obtain a groomed $1$--manifold that induces the same grading on $\SHI(M,\gamma,L)$ as $\alpha_3$. By Lemma \ref{lem_sum_gradings_SHI}, Proposition \ref{prop: two non-zero gradings}, and \eqref{eq: only one grading}, this yields a contradiction. \end{proof} \begin{cor}\label{cor_contained_in_annulus} \label{cor: theorem 1.3} Suppose $\Sigma$ is a connected compact oriented surface with non-empty boundary. Let $(M,\gamma) = ([-1,1]\times \Sigma,\{0\}\times \partial\Sigma)$ and suppose $\dim \SHI(M,\gamma,L) \le 2$ for some link $L\subset {\rm int} (M)$. Then there exists an embedded annulus $N\subset \Sigma$ such that $L$ can be isotoped into $[-1,1]\times N$. \end{cor} \begin{proof} The statement is obvious if $L$ is contained in an embedded $3$--ball in $M$. From now on, we assume that $L$ is not contained in a $3$--ball; therefore $M$ is $L$--irreducible. Since $R(\gamma)$ is incompressible and norm-minimizing in $(M,\gamma)$, it is also $L$--incompressible and $L$--norm-minimizing. Therefore $(M,\gamma)$ is $L$--taut. If there is a non-trivial product annulus or a non-boundary-parallel product disk in $M\backslash L$, we may cut $M$ open along the annulus or disk and obtain another balanced sutured manifold $(M',\gamma')$. We know that $(M',\gamma')$ is also $L$--taut by \cite[Lemma 4.2]{Schar}. By Lemma \ref{lem_cutting_open_along_product_annulus_and_disk} below, $(M',\gamma')$ is also a product sutured manifold. We have $$\dim \SHI(M',\gamma',L) = \dim \SHI(M,\gamma,L) $$ because both sides of the equation can be defined using the same closure. We repeat this process until it cannot be continued; by \cite[Lemma 2.16]{juhasz2010polytope}, this process ends after finitely many steps.
By Proposition \ref{prop: two non-zero gradings} and the assumption that $\dim \SHI(M,\gamma,L) \le 2$, there is at most one component in the final stage that contains a non-empty subset of $L$. Let $$(\hat M,\hat \gamma)= ([-1,1]\times \hat \Sigma, \{0\}\times \partial\hat \Sigma)$$ be the connected component in the final stage that contains $L$. By Proposition \ref{prop_SHI_rank_greater_than_2_if_no_trivial_product}, $\hat \Sigma$ has genus zero and at most two boundary components. This implies that $\hat \Sigma$ is a disk or an annulus, and the desired result is proved. \end{proof} \begin{lem} \label{lem_cutting_open_along_product_annulus_and_disk} Suppose $\Sigma$ is a compact surface with non-empty boundary. \begin{enumerate} \item If $S$ is a properly embedded annulus in $[-1,1]\times \Sigma$ such that $S\cap \{\pm 1\}\times \Sigma$ is an embedded non-contractible circle on $\{\pm 1\}\times \Sigma$, then $(S,\partial S)$ is isotopic in $([-1,1]\times \Sigma, \{-1,1\}\times \Sigma)$ to an annulus of the form $[-1,1]\times \beta$, where $\beta$ is a non-contractible circle on $\Sigma$. \item If $S$ is a properly embedded disk in $[-1,1]\times \Sigma$ such that $S\cap \{\pm 1\}\times \Sigma$ is an embedded arc on $\{\pm 1\}\times \Sigma$, then $(S,\partial S)$ is isotopic in $([-1,1]\times \Sigma, \{-1,1\}\times \Sigma)$ to a disk of the form $[-1,1]\times \beta$, where $\beta$ is an arc on $\Sigma$. \end{enumerate} \end{lem} \begin{proof} For Part (1), write $S\cap \{1\}\times \Sigma = \{1\}\times \beta_+$ and $S\cap \{-1\}\times \Sigma = \{-1\}\times \beta_-$; then $\beta_+$ and $\beta_-$ are isotopic curves in $\Sigma$, and we may assume without loss of generality that $\beta_-=\beta_+$. By the assumptions, the embedding of $S$ in $[-1,1]\times \Sigma$ is $\pi_1$--injective. Let $\gamma$ be an essential arc on $\Sigma$ that is disjoint from $\beta_\pm$. Perturb $S$ so that it intersects $[-1,1]\times \gamma$ transversely.
Then every component of $S\cap ([-1,1]\times \gamma)$ is contractible in $[-1,1]\times \Sigma$, and thus is contractible in $S$. A standard innermost disk argument shows that $S$ can be isotoped so that it is disjoint from $[-1,1]\times \gamma$. As a result, we may cut the surface $\Sigma$ open along arcs until $\Sigma$ is an annulus and $\beta_\pm$ is a non-contractible simple closed curve on $\Sigma$. Now write $\Sigma$ as $B(2)\backslash {\rm int}\,B(1)$, where $B(r)$ is the disk of radius $r$ in $\mathbb{R}^2$ centered at $(0,0)$. Let $M_1$ and $M_2$ be the closures of the two components of $([-1,1]\times \Sigma)\backslash S$, where $M_1$ contains $[-1,1]\times \partial B(1)$ and $M_2$ contains $[-1,1]\times \partial B(2)$. The Sch\"onflies theorem implies that $M_1 \cup ([-1,1]\times B(1))$ is a ball. Let $T=[-1,1]\times \{(0,0)\}$; then $T$ is a tangle in $M_1 \cup ([-1,1]\times B(1))$, and $M_1$ can be viewed as the ball $M_1 \cup ([-1,1]\times B(1))$ with a tubular neighborhood of $T$ removed. Since the map $\pi_1(S)\to \pi_1([-1,1]\times \Sigma)$ induced by inclusion is an isomorphism, it follows from the Seifert--van Kampen theorem that the maps $\pi_1(S)\to \pi_1(M_1)$ and $\pi_1(S)\to \pi_1(M_2)$ induced by inclusions are also isomorphisms, so $\pi_1(M_1)\cong \pi_1(M_2) \cong \mathbb{Z}$. A standard argument using Dehn's lemma then implies that $T$ is the trivial tangle in the ball $M_1 \cup ([-1,1]\times B(1))$. So $M_1$ is diffeomorphic to $[-1,1]\times A$, and $S$ can be isotoped to a parallel copy of $[-1,1]\times \partial B(1)$. Therefore Part (1) of the lemma is proved. The proof of Part (2) follows from a similar (but simpler) argument, where we first cut $\Sigma$ open along arcs to reduce to the case where $\Sigma$ is a disk, and then apply the Sch\"onflies theorem. \end{proof} \begin{prop}\label{prop_AHI_rank_2} Suppose $L$ is a link in the thickened annulus $(-1,1)\times A$.
If $\dim\AHI(L)\le 2$, then $L$ is either the unknot or isotopic to a non-contractible circle in $\{0\}\times A$. \end{prop} \begin{proof} We can view the thickened annulus as the complement of an unknot $U\subset S^3$. According to \cite{AHI}*{Section 4.3}, we have $\AHI(L)\cong \II^\natural (L\cup U;p)$, where $p\in U$ is a base point. By \cite[Proposition 3.5]{xie2021meridian}, this implies that $\dim \AHI(L) \ge 2^{|L|}$, where $|L|$ is the number of components of $L$. Since we assume that $\dim\AHI(L)\le 2$, the link $L$ must have only one component. According to \cite{Xie-earring}*{Section 3}, there is an earring-removing exact triangle $$ \cdots \to \II(S^3,L\cup U \cup m_1\cup m_2, u_1\cup u_2 ) \to \II^\natural (L\cup U;p) \to \II^\natural (L\cup U;p) $$ where $m_1, m_2$ are small meridians around $L, U$ respectively, and $u_1, u_2$ are small arcs joining $m_1,m_2$ to $L, U$ respectively. By \cite{Xie-earring}*{Proposition 5.1}, we have $\II(S^3,L\cup U \cup m_1\cup m_2, u_1\cup u_2 ) \cong \KHI(L\cup U)$. Therefore the above exact triangle implies \begin{equation} \label{eqn_AHI_rank_2_proof1} \dim \KHI(L\cup U) \le 2\cdot\dim \II^\natural (L\cup U;p) =2\cdot\dim \AHI(L) \le 4. \end{equation} By \cite{LY-HeegaardDiagram}*{Proposition 3.14} we have \begin{equation} \label{eqn_AHI_rank_2_proof2} \dim \KHI(L\cup U) \ge 2 \cdot\dim \KHI (L). \end{equation} By \cite[Theorem 1.1]{KM:Alexander}, the graded Euler characteristic of $\KHI$ is given by the Alexander polynomial. Since $L$ is a knot, this implies that $\dim \KHI(L)$ is odd and $\dim \KHI(L\cup U)$ is even. Therefore \eqref{eqn_AHI_rank_2_proof1} and \eqref{eqn_AHI_rank_2_proof2} imply that $$\dim \KHI(L) = 1, \quad \dim \KHI(L\cup U) = 2 ~ \text{or} ~4.$$ If $\dim \KHI(L\cup U) = 2$, then by \cite{LXZ-unknot}*{Corollary 1.8}, $L\cup U$ is the unlink, and the desired result holds. From now on, we assume that $\dim \KHI(L\cup U) = 4$.
Since $\dim \KHI(L) = 1$, \cite{KM:suture}*{Proposition 7.16} implies that $L$ is an unknot when viewed as a knot in $S^3$. If the top f-grading of $\AHI(L)$ is zero, then by \cite{XZ:excision}*{Corollary 8.3}, $L$ is contained in a $3$--ball in the thickened annulus, so $L\cup U$ is the unlink and $\dim \KHI(L\cup U) = 2$, contradicting the assumption that $\dim \KHI(L\cup U) = 4$. Therefore, the top f-grading of $\AHI(L)$ is not zero. By the symmetry of the f-grading and the assumption that $\dim \AHI(L)\le 2$, the top f-grading component of $\AHI(L)$ must have dimension $1$. Therefore by \cite{XZ:excision}*{Corollary 8.4}, $L$ is a braid closure in the thickened annulus. Let $l>0$ be the strand number of the braid. Let $S^3(L\cup U)$ denote the complement of a tubular neighborhood of $L\cup U$ in $S^3$, and let $T^2_L, T^2_U\subset \partial\big( S^3(L\cup U)\big)$ be the two components corresponding to $L$ and $U$ respectively. Take $\gamma_{\mu}$ to be the union of two meridians of $L$ and two meridians of $U$. Then by the definition of $\KHI$ (cf. \cite{KM:suture}), we have $$\KHI(L\cup U)\cong \SHI(S^3(L\cup U),\gamma_{\mu}).$$ Let $D$ be an embedded disk bounded by $L$ in $S^3$. The surface $D\cap S^3(L\cup U)$ represents a homology class in $H_2(S^3(L\cup U),\partial S^3(L\cup U))$. Let $S$ be a surface representing the same homology class in $H_2(S^3(L\cup U),\partial S^3(L\cup U))$ so that $S$ is norm-minimizing, $\partial S\cap T^2_L$ is a longitude of $L$, and $\partial S\cap T^2_U$ consists of parallel and coherently oriented meridians of $U$. We can arrange the two components of $\gamma_{\mu}\cap T^2_U$ so that they are not separated by $\partial S\cap T^2_U$.
Note that since the linking number between $L$ and $U$ is non-zero, we must have $\chi(S)\leq 0$, and hence according to \cite[Corollary 6.3]{GL-decomposition}, we have two taut sutured manifold decompositions $$ (S^3(L\cup U),\gamma_{\mu})\stackrel{S}{\leadsto}(M_+,\gamma_+)~{\rm and~}(S^3(L\cup U),\gamma_{\mu})\stackrel{-S}{\leadsto}(M_-,\gamma_-). $$ As in \cite[Section 2]{GL-decomposition}, the surface $S$ induces a $\mathbb{Z}$--grading on $\KHI(L\cup U)=\SHI(S^3(L\cup U),\gamma_{\mu})$ after possible stabilizations (cf. \cite[Definition 2.23]{GL-decomposition}). Now since $L$ and $U$ have a non-zero linking number, \cite[Theorem 1.18]{GL-decomposition} applies and shows that $\KHI(L\cup U)$ detects the Thurston norm of the homology class $[S]\in H_2(S^3(L\cup U),\partial S^3(L\cup U))$. In \cite[Section 6]{GL-decomposition}, this result is proved by establishing the following statements: \begin{equation}\label{eq: KHI(L cup U)} \KHI(L\cup U,i_{t})\cong \SHI(M_+,\gamma_+),~\KHI(L\cup U,i_{b})\cong \SHI(M_-,\gamma_-), \end{equation} and \begin{equation}\label{eq: i_t-i_b} x([S])=-\chi(S)=i_t-i_b-1, \end{equation} where $i_t$ and $i_b$ are the top and bottom gradings of $\KHI(L\cup U)$ with respect to the grading associated with $S$, and $\KHI(L\cup U,i)$ denotes the component of $\KHI(L\cup U)$ that has grading $i$ with respect to the grading induced by $S$. Now we look at the balanced sutured manifolds $(M_+,\gamma_+)$ and $(M_-,\gamma_-)$. Since we have chosen the suture $\gamma_\mu$ in a way that the two components of $\gamma_{\mu}\cap T^2_U$ are not separated by $\partial S\cap T^2_U$, there are three components of $\gamma_+$ that are parallel to each other: one comes from $\partial S\cap T^2_U$, and two come from $\gamma_{\mu}\cap T^2_U$. The same statement holds for $\gamma_-$. Let $\gamma_{\pm}'$ be the suture $\gamma_{\pm}$ with two of the three parallel components removed.
From \cite[Proof of Theorem 3.1]{KM:Alexander}, we know that $$\SHI(M_{\pm},\gamma_{\pm})\cong\SHI(M_{\pm},\gamma_{\pm}')\otimes\mathbb{C}^2.$$ Note that Equation (\ref{eq: i_t-i_b}) implies that $i_t\neq i_b$, and hence from Equation (\ref{eq: KHI(L cup U)}) we know $$\bigg(\SHI(M_+,\gamma_{+}')\otimes\mathbb{C}^2\bigg)\oplus\bigg(\SHI(M_-,\gamma_{-}')\otimes\mathbb{C}^2\bigg)\hookrightarrow\KHI(L\cup U)\cong\mathbb{C}^4.$$ As a result, we must have $\SHI(M_{\pm},\gamma_{\pm}')\cong\mathbb{C}$. By \cite[Theorem 1.2]{GL-decomposition} (see also \cite{KM:suture}*{Theorem 6.1} for the case when $(M_\pm, \gamma_\pm')$ is a homology product), $(M_{\pm},\gamma_{\pm}')$ are both product sutured manifolds, and hence $U$ is the closure of a braid with axis $L$. Recall that we also proved that $L$ is a braid closure in the thickened annulus with strand number $l>0$. So $L\cup U$ is a mutually braided link. According to \cite{XZ:forest}*{Lemma 6.7}, if $l\ge 2$, then $$ \| (1-x)(1-y)\tilde{\Delta}_{L\cup U}(x,y)\|>4, $$ where $\tilde{\Delta}_{L\cup U}(x,y)$ denotes the multi-variable Alexander polynomial of $L\cup U$ and $\|\cdot \|$ denotes the sum of the absolute values of the coefficients of a polynomial. By \cite{LY_multi_Alexander}*{Theorem 1.4}, $\KHI(L\cup U)$ can be equipped with multi-gradings whose graded Euler characteristic recovers $(1-x)(1-y)\tilde{\Delta}_{L\cup U}(x,y)$. Therefore when $l\ge 2$, we have $$ 4=\dim \KHI(L\cup U) \ge \| (1-x)(1-y)\tilde{\Delta}_{L\cup U}(x,y)\|>4, $$ which yields a contradiction. As a consequence, we must have $l= 1$, so $L\cup U$ is the Hopf link, and the desired result is proved. \end{proof} \begin{proof}[Proof of Theorem \ref{thm_main_instanton}] Suppose $$ \dim_{\mathbb{C}}\SHI([-1,1]\times \Sigma, \{0\}\times \partial \Sigma, L)\le 2. $$ By Corollary \ref{cor_contained_in_annulus}, this implies that there exists an embedding of the annulus $\varphi:A\to \Sigma$ such that $L$ is contained in $(-1,1)\times \varphi(A)$ after isotopy.
If $\varphi(A)$ is contractible in $\Sigma$, we may find an embedded non-contractible annulus in $\Sigma$ that contains $\varphi(A)$, so we may assume without loss of generality that $\varphi(A)$ is non-contractible. Let $\varphi^{-1}(L)$ be the link in the interior of $[-1,1]\times A$ given by the pull-back of $L$ by $\id_{[-1,1]}\times \varphi$. By \cite[Lemma 3.10]{xie2020instantons} and Proposition \ref{prop_SiHI_iso_SHI}, $$ \AHI(\varphi^{-1}(L))\cong \SHI([-1,1]\times \Sigma, \{0\}\times \partial \Sigma, L). $$ By Proposition \ref{prop_AHI_rank_2}, we conclude that $\varphi^{-1}(L)$ is isotopic to a knot embedded in $\{0\}\times A$ via an isotopy in $(-1,1)\times A$. Therefore the theorem is proved. \end{proof} \subsection{Proof of Lemma \ref{lem: juhasz's ineq}}\label{subsec: key inequality} This subsection proves the key inequality Lemma \ref{lem: juhasz's ineq}, thus completing the proof of Proposition \ref{prop: two non-zero gradings} and Theorem \ref{thm_main_instanton}. Our argument follows the strategy of the proof of \cite[Theorem 6.1]{juhasz2010polytope}. \begin{lem}\label{lem: juhasz's ineq} Suppose $(M,\gamma)$ is a taut balanced sutured manifold and $L\subset {\rm int}(M)$ is a link so that $(M,\gamma)$ is $L$--taut and has no non-trivial horizontal surfaces (cf. \cite[Definition 2.17]{juhasz2010polytope}), non-trivial product annuli, or non-boundary-parallel product disks that are disjoint from $L$. Assume further that $H_2(M;\mathbb{Z})=0$. Suppose $S_+$ and $S_-$ are two surfaces so that the following hold: \begin{itemize} \item [(i)] $S_+\cap R(\gamma)=-S_-\cap R(\gamma)$. \item [(ii)] $[S_+,\partial S_+]=-[S_-,\partial S_-]\neq 0\in H_2(M,\partial M)$. \item [(iii)] There is a positive integer $k$ so that $$[S_+,\partial S_+]+[S_-,\partial S_-]=k\cdot [R_+(\gamma),\partial R_+(\gamma)]\in H_2(M,N(\gamma)).$$ \item [(iv)] The surfaces $S_{\pm}$ are both $x_L$-norm minimizing and $L$--incompressible (cf. \cite[Definition 1.2]{Schar}).
\end{itemize} Then we have the following inequality: $$\chi(S_+)+\chi(S_-)-|S_+\cap \gamma|-|L\cap S_+|-|L\cap S_-|<k\cdot \chi(R_+(\gamma)).$$ \end{lem} \begin{proof} Let $P$ be the result of the cut and paste surgery of $S_+\cup S_-$ described as follows: on $R(\gamma)$, Condition (i) above states that $\partial S_+\cap R(\gamma)=\partial S_-\cap R(\gamma)$, and we glue $\partial S_+\cap R(\gamma)$ to $\partial S_-\cap R(\gamma)$; away from $R(\gamma)$, perturb $S_+$ and $S_-$ so that they intersect transversely, cut them open along intersections, and re-glue in an orientation preserving way to resolve the intersections. By the first inequality on \cite[Page 27]{juhasz2010polytope}, we have $$\chi(S_+) + \chi(S_-) - |S_+\cap \gamma| \le \chi (P).$$ Hence it suffices to show that $$\chi(P)-|L\cap P|<k\cdot \chi(R_+(\gamma)).$$ To do this, first note that $$\partial[P]=\partial[S_+]+\partial[S_-]=\partial[S_+]+\partial[-S_+]+k\cdot\partial[R_+(\gamma)]=k\cdot[\gamma]\in H_1(A(\gamma);\mathbb{Z}).$$ Since $H_2(M)=0$, we know from Condition (iii) that $$[P]=k\cdot[R_+(\gamma)]\in H_2(M,A(\gamma);\mathbb{Z}).$$ From Condition (iv), we know that every closed component of $S_+\cap S_-$ is either essential on both $S_+$ and $S_-$ or inessential on both. We may assume without loss of generality that the intersection points of $S_+$ with every component of $L$ have the same sign. In fact, if a component of $L$ intersects $S_+$ at two consecutive points with opposite signs, we may attach a tube along the segment of $L$ between these two points to $S_+$ without changing the homology class of $S_+$ or the Thurston norms. The same statement holds for $S_-$. If $P$ contains a spherical component $Q$ that intersects $L$ transversely at no more than one point, then since $H_2(M;\mathbb{Z})=0$, the component $Q$ must be disjoint from $L$.
By a standard innermost circle argument, we can isotope $S_+$ and $S_-$ to eliminate this spherical component of $P$ without changing the intersection numbers with $L$. If $P$ contains a toroidal component $T$ that is disjoint from $L$, then after isotopy of $S_+$ and $S_-$ using a standard innermost circle argument, we can assume that $T$ consists of some annuli on $S_+$ and some annuli on $S_-$. Since $H_2(M;\mathbb{Z})=0$, any torus inside $M$ is homologically trivial. As a result, we can perform cut and paste surgeries to $S_+$ and $S_-$ to swap those annular parts on $T$. After the cut and paste surgeries, $S_{\pm}$ remain in the same homology classes and have the same generalized Thurston norms as before, but the closed component $T$ is merged into other components of $P$. If $P$ contains a spherical component $Q$ that intersects $L$ transversely at two points, then since $H_2(M;\mathbb{Z})=0$, the algebraic intersection number of $Q$ and $L$ is zero. Since we assume that the intersection points of $S_\pm$ and every component of $L$ have the same sign, one of the intersection points in $Q\cap L$ is contained in $S_+$, and the other is contained in $S_-$. After isotoping $S_+$ and $S_-$ using the innermost circle argument, we may assume that the decomposition of $Q$ by $S_+$ and $S_-$ consists of an even number of annuli and two disks. As a result, we can perform cut and paste surgeries to $S_+$ and $S_-$ to swap those annular parts on $Q$. After the cut and paste surgeries, $S_{\pm}$ remain in the same homology classes and have the same generalized Thurston norms as before, but the closed component $Q$ is merged into other components of $P$. If $P$ has a disk component $D$, then by the definition of $P$, $\partial D$ is contained in a small tubular neighborhood of $\gamma$, so $\partial D$ is parallel to a component of $\gamma$. Hence $R_{\pm}(\gamma)$ must both be disks as they are incompressible.
Since $(M,\gamma)$ is assumed to be taut, $M$ must be a $3$--ball, but then $(M,\gamma)$ cannot be $L$--taut. Hence, we may assume without loss of generality that every component of $P$ has a positive $L$--Thurston norm. So $$x_L(P)=|L\cap P|-\chi(P)~{\rm and~}x_L(R_+(\gamma))=x(R_+(\gamma))=-\chi(R_+(\gamma)).$$ The desired inequality is then equivalent to $$x_L(P)>k\cdot x(R_+(\gamma)).$$ Assume the contrary; namely, $x_L(P)\leq k\cdot x(R_+(\gamma)).$ Since we know that $$[P]=k\cdot [R_+(\gamma)]\in H_2(M,A(\gamma);\mathbb{Z})$$ and $R_+(\gamma)$ is norm-minimizing, we have $$k\cdot x(R_+(\gamma))\leq x(P)\leq x_L(P)\leq k\cdot x(R_+(\gamma)).$$ Since every component of $P$ has positive generalized Thurston norm and $H_2(M;\mathbb{Z})=0$, we conclude that $P$ has no closed components, $P\cap L= \emptyset$, and \begin{equation} \label{eqn_norm_of_P} x(P)=k\cdot x(R_+(\gamma)). \end{equation} Fix an arbitrary base point $x\in M\backslash P$ and define a function $\phi:M\backslash P\to \mathbb{Z}$ by taking $\phi(y)$ to be the algebraic intersection number of any path from $x$ to $y$ with $P$. Since $[P]=k\cdot [R_+(\gamma)]=0\in H_2(M,\partial M)$, the map $\phi$ is well defined. Since $P$ has no closed components, it follows that $P$ has the form $$P= P_1\cup\dots\cup P_{k},$$ where each $\partial P_i$ is parallel to $\gamma$. Since $M\backslash L$ does not contain non-trivial horizontal surfaces, we know that each $P_i$ is either parallel to $R_+(\gamma)$ or parallel to $R_-(\gamma)$ in $M\backslash L$. We can assume that $P_i$ is parallel to $R_+(\gamma)$ for $1\leq i\leq j$ and $P_i$ is parallel to $R_-(\gamma)$ for $j+1\leq i\leq k$. Let $J_j$ be the part of $M$ between $P_{j}$ and $P_{j+1}$, and assume without loss of generality that $\gamma\subset \partial J_j\cap A(\gamma)$. Then $(J_j,\gamma)$ is a balanced sutured manifold diffeomorphic to $(M,\gamma)$ and $L\subset {\rm int}(J_j)$.
Observe that the difference between $S_+\cup S_-$ and $P$ in the interior of $M$ is contained in a neighborhood of $S_+\cap S_-$, along which we performed the cut and paste surgery. As a result, every component of $S_+\cap J_j$ is either a product disk or a product annulus in $J_j$; product disk components of $S_+\cap J_j$ correspond to arc components of $S_+\cap S_-\cap {\rm int}(M)$, and product annulus components of $S_+\cap J_j$ correspond to circle components of $S_+\cap S_-\cap {\rm int}(M)$. Also note that these product disks and product annuli are disjoint from $L$. So by the assumption on $(M,\gamma,L)$, we know that $S_+\cap J_j$ can be isotoped into $\partial J_j$. Since $M\backslash J_j$ is a product, we can deform $S_+$ into $\partial M$ in $M$. This implies $$[\partial S_+]=0\in H_1(\partial M;\mathbb{Z}).$$ Since $H_2(M;\mathbb{Z})=0$, this contradicts Condition (ii). \end{proof} \section{Instanton homology for links in thickened surfaces} \label{sec_instanton_thickend_surface} Let $R$ be a closed, connected, oriented surface. In this section, we define and study the properties of an instanton Floer homology invariant for links in $(-1,1)\times R$. Recall that $t_*\in S^1$ denotes the image of $\{-1,1\}\subset[-1,1]$ under the quotient map. \begin{defn} Let $L$ be a link in $(-1,1)\times R$. Let $p$ be a point on $R$ that is disjoint from the projection of $L$ to $R$. Embed $L$ into $S^1\times R$ via the quotient map $[-1,1] \times R \to S^1\times R$. Define $$ \SiHI_{R,p}(L) = \II(S^1\times R, L, S^1\times \{p\}|\{t_*\}\times R). $$ \end{defn} \begin{rem} The group $\SiHI_{R,p}(L)$ is the same as the group $\SiHI(L)$ defined in \cite{xie2020instantons}. We need to study the properties of $\SiHI_{R,p}(L)$ with different surfaces $R$ in this paper, so we modify the notation to keep track of $R$ and $p$. Since the homology class $[S^1\times \{p\}]\in H_2(S^1\times R,L)$ is independent of $p$, the isomorphism class of $\SiHI_{R,p}(L)$ is independent of $p$.
\end{rem} Suppose $\Sigma$ is a compact connected oriented surface with $\partial \Sigma\neq \emptyset$, and $L$ is a link in the interior of $[-1,1]\times \Sigma$. Let $F$ be a connected oriented surface such that there is an orientation-reversing diffeomorphism $\tau: \partial F\to\partial \Sigma$, and let $R = \Sigma\cup_\tau F$. We also assume that $F$ is not a disk. Then by Proposition \ref{prop_SiHI_iso_SHI}, we have \begin{equation} \label{eqn_SiHI_iso_SHI} \SiHI_{R,p}(L) \cong \SHI([-1,1]\times \Sigma, \{0\}\times \partial \Sigma, L). \end{equation} Suppose $S$ is a link cobordism from $L_1$ to $L_2$ in the interior of $[-1,1]\times [-1,1]\times R$ whose projection to $R$ is disjoint from $p$. Then $$([-1,1]\times S^1\times R, S, [-1,1]\times S^1\times \{p\})$$ is a cobordism from $(S^1\times R,L_1,S^1\times \{p\})$ to $(S^1\times R,L_2,S^1\times \{p\})$. We will denote the induced cobordism map by $\SiHI_{R,p}(S)$. \subsection{Embedding of annulus in $R$} Let $\varphi:A\to R$ be an orientation-preserving embedding of the annulus $A$ in $R$ such that the image of $A$ is non-separating in $R$. Suppose $L$ is a link in the interior of $[-1,1]\times A$. We will abuse notation and use $L$ to denote its image in $[-1,1]\times R$. Let $p$ be a point in $R\backslash \varphi(A)$. The main result of this subsection is the following. \begin{prop} \label{prop_embedding_of_annulus_in_R} There exists a natural isomorphism from $\AHI(L)$ to $\SiHI_{R,p}(L)$. \end{prop} By definition, an isomorphism $\mathcal{I}: \AHI(L)\to \SiHI_{R,p}(L)$ is natural if it intertwines with link cobordism maps. More precisely, for every link cobordism $S$ from $L_1$ to $L_2$ in $[-1,1]\times [-1,1]\times A$, we require that $$ \mathcal{I}\circ \AHI(S) = \pm \SiHI_{R,p}(S)\circ \mathcal{I}.
$$ \begin{proof} Let $\varphi':A\to T^2$ be an embedding of $A$ in the torus such that the image is parallel to a non-trivial simple closed curve in $T^2$, and let $F'$ be the closure of $T^2\backslash \varphi'(A)$. Let $p'$ be a point on $F'$. By \cite[Lemma 3.10]{xie2020instantons}, $\AHI(L)\cong \SiHI_{T^2,p'}(L)$. By \eqref{eqn_SiHI_iso_SHI}, $\SiHI_{T^2,p'}(L)\cong \SHI([-1,1]\times A, \{0\}\times \partial A, L) \cong \SiHI_{R,p}(L)$. Therefore $\AHI(L)\cong \SiHI_{R,p}(L)$. All the isomorphisms above are given by excisions along surfaces disjoint from $(-1,1)\times A$, so the excision surfaces are disjoint from links and link cobordisms in $(-1,1)\times A$, and therefore all the isomorphisms above intertwine with link cobordism maps. \end{proof} \begin{cor} \label{cor_zero_maps_detected_by_AHI} Suppose $L_1,L_2$ are links in $(-1,1)\times A$. Let $\varphi$ be an orientation-preserving embedding of $A$ in $R$, and let $p$ be a point in $R\backslash \varphi(A)$. Then \begin{enumerate} \item A cobordism $S$ from $L_1$ to $L_2$ defines the zero map on $\SiHI_{R,p}(L_1)$ if and only if $S$ defines the zero map on $\AHI(L_1)$. \item Suppose $S_1,\dots,S_k$ are $k$ cobordisms from $L_1$ to $L_2$, and $\lambda_1,\dots,\lambda_k\in \mathbb{C}$. Then $\sum \lambda_i\AHI(S_i)=0$ for some choice of signs of the cobordism maps if and only if $\sum \lambda_i\SiHI_{R,p}(S_i)=0$ for some choice of signs of the cobordism maps. \end{enumerate} \end{cor} \subsection{Grading on $\SiHI$ from simple closed curves} Let $c$ be an oriented simple closed curve in $R$. The following result was proved in \cite{xie2020instantons}. \begin{lem}[{\cite[Lemma 3.16]{xie2020instantons}}] \label{lem_grading_SiHI_range} Suppose $S^1\times c$ intersects $L$ transversely at $n$ points. Then the eigenvalues of $\mu(S^1\times c)$ on $\SiHI_{R,p}(L)$ are contained in the set $\{-n+2i \mid i\in\mathbb{Z},\ 0\le i\le n\}$.
\end{lem} Lemma \ref{lem_grading_SiHI_range} motivates the following definition. \begin{defn} Define the $c$--grading on $\SiHI_{R,p}(L)$ to be the grading given by the generalized eigenspace decomposition of $\mu(S^1\times c)$. \end{defn} Suppose $S$ is a link cobordism. By Corollary \ref{cor_mu_map_eigenspace_preserved_by_cobordism}, for every oriented simple closed curve $c$ on $R$, the map $\SiHI_{R,p}(S)$ preserves the $c$--grading. \subsection{Maps induced by diffeomorphisms} Recall from Example \ref{exmp_diff_induce_iso} that diffeomorphisms on admissible triples define isomorphisms on instanton homology. In this subsection, we prove the following result for maps on $\SiHI_{R,p}$ induced by diffeomorphisms. \begin{prop} \label{prop_diff_induces_id_on_II} Suppose $R$ is a closed oriented surface. Suppose $N\subset R$ is a connected embedded compact surface with boundary, such that $\partial N$ has an even number of connected components. Assume that $R\backslash N$ is connected and has genus at least $3$. Let $L$ be a link in the interior of $(-1,1)\times N$. Let $p\in R$ be a point disjoint from the projection of $L$ to $R$. Let $\varphi:R\to R$ be a diffeomorphism that restricts to the identity on $N\cup \{p\}$. Then the diffeomorphism $\id_{S^1}\times \varphi$ induces $\pm \id$ on $\SiHI_{R,p}(L)$. \end{prop} We start by introducing the following notation. If $\omega$ is a closed $1$--manifold embedded in $R$, $L$ is a link in $(-1,1)\times R$, and $p\in R\backslash \omega$ is a point disjoint from the projection of $L$ to $R$, define $$ H(L,\omega,p)= \II(S^1\times R, L,\{t_*\}\times \omega \cup S^1\times \{p\}|\{t_*\}\times R). $$ By Proposition \ref{prop_product_space_homology_rank_1}, $H(\emptyset,\omega,p)\cong \mathbb{C}$.
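Note that the notation $H(L,\omega,p)$ extends the invariant from the previous section: taking $\omega=\emptyset$ and comparing with the definition of $\SiHI_{R,p}$, we have

```latex
H(L,\emptyset,p)
  = \II\big(S^1\times R,\, L,\, S^1\times \{p\} \,\big|\, \{t_*\}\times R\big)
  = \SiHI_{R,p}(L),
```

so statements proved for $H(L,\emptyset,p)$ translate directly into statements about $\SiHI_{R,p}(L)$.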
If $\varphi:R\to R$ is an orientation-preserving diffeomorphism that restricts to the identity on the union of $\omega$, $\{p\}$, and the projection of $L$ to $R$, we will use $\varphi_*$ to denote the automorphism of $H(L,\omega,p)$ induced by $\id_{S^1}\times \varphi$. \begin{lem} \label{lem_diff_induce_pm_id_with_empty_link} Suppose $\varphi:R\to R$ is an orientation-preserving diffeomorphism such that $\varphi(p)=p$ and $\varphi(\omega)=\omega$. Then $\varphi_*= \pm \id$ on $H(\emptyset,\omega,p)$. \end{lem} \begin{proof} We first consider the case when $\omega=\emptyset$. Let $P$ be the pair-of-pants cobordism from two circles to one circle. Topologically, $P$ is a sphere with $3$ interior disks removed. Taking the product of $P$ with $R$ gives a cobordism from $S^1\times R \sqcup S^1 \times R$ to $S^1\times R$. This cobordism defines a map $$\mathcal{P}_0: H(\emptyset,\emptyset,p)\otimes H(\emptyset,\emptyset,p)\to H(\emptyset,\emptyset,p).$$ By Proposition \ref{prop_excision}, $\mathcal{P}_0$ is an isomorphism. The map $\mathcal{P}_0$ can be defined without sign ambiguity because the singular set is empty and the cobordism has an almost complex structure defined by taking the product of an almost complex structure on $P$ with an almost complex structure on $R$. Since the singular set is empty, the map $\varphi_*$ is also defined without sign ambiguity. Moreover, we have \begin{equation} \label{eqn_clP_varphi_commutative_H(empty,empty,p)} \mathcal{P}_0(\varphi_*(x)\otimes \varphi_*(y)) = \varphi_*(\mathcal{P}_0(x,y)) \end{equation} for all $x,y\in H(\emptyset,\emptyset,p)$ because the map $\varphi$ lifts to a diffeomorphism $\id_P\times \varphi$ on the cobordism $P\times R$, and the almost complex structure is preserved up to isotopy under this diffeomorphism.
Since $H(\emptyset,\emptyset,p)\cong\mathbb{C}$, there is a unique non-zero element $u_0\in H(\emptyset,\emptyset,p)$ so that $\mathcal{P}_0(u_0\otimes u_0)=u_0.$ By \eqref{eqn_clP_varphi_commutative_H(empty,empty,p)}, we have $\varphi_*(u_0) = u_0$, so $\varphi_*=\id$ on $H(\emptyset,\emptyset,p)$. When $\omega\neq \emptyset$, the pair-of-pants cobordism defines a homomorphism $$\mathcal{P}_\omega: H(\emptyset,\omega,p)\otimes H(\emptyset,\omega,p)\to H(\emptyset,\emptyset,p).$$ The same almost complex structure as above defines a sign for $\mathcal{P}_\omega$, and we have $$\mathcal{P}_\omega(\varphi_*(x)\otimes \varphi_*(y)) = \varphi_*(\mathcal{P}_\omega(x,y))$$ for all $x,y\in H(\emptyset,\omega,p)$. Since $H(\emptyset,\omega,p)\cong\mathbb{C}$, there exist exactly two elements $u_\omega, -u_\omega \in H(\emptyset,\omega,p)$ such that $\mathcal{P}_\omega(u_\omega\otimes u_\omega) = u_0$. Therefore $\varphi_*(u_\omega) = \pm u_\omega$, and the lemma is proved. \end{proof} Now we prove a special case of Proposition \ref{prop_diff_induces_id_on_II} when $\varphi$ is given by a Dehn twist along a non-separating curve in $R\backslash N$. \begin{lem} \label{lem_diff_induces_id_on_II_Dehn_twist} Let $R, N, p$ be as in Proposition \ref{prop_diff_induces_id_on_II}. Suppose $\gamma$ is an oriented simple closed curve on $R\backslash N$ such that there exists a simple closed curve $\gamma'$ on $R\backslash N$ intersecting $\gamma$ transversely at one point. Let $D_\gamma$ be the Dehn twist along $\gamma$. Then Proposition \ref{prop_diff_induces_id_on_II} holds for $\varphi = D_\gamma$. \end{lem} \begin{proof} Let $\omega$ be a simple closed curve on $R$ that is disjoint from $p$, $\gamma$, $\gamma'$ and intersects every component of $\partial N$ transversely at one point. Such a curve exists because we assume both $N$ and $R\backslash N$ are connected and $\partial N$ has an even number of components.
Then we have the following commutative diagram (up to sign) $$ \begin{tikzcd} H(\emptyset,\omega,p)\otimes H(L,\omega,p) \arrow{r}\arrow{d}{\id\otimes (D_\gamma)_*} & H(L,\omega,p)\otimes H(\emptyset,\omega,p) \arrow{d}{\id\otimes (D_\gamma)_*} \\ H(\emptyset,\omega,p)\otimes H(L,\omega,p) \arrow{r} & H(L,\omega,p)\otimes H(\emptyset,\omega,p) , \end{tikzcd} $$ where the horizontal maps are given by excisions along the components of $S^1\times \partial N$ between two copies of $S^1\times R$. This diagram is commutative because the excision locus is pointwise fixed under the map $\id_{S^1}\times D_\gamma$, so the cobordisms defining the two compositions are diffeomorphic. All maps in the above diagram are isomorphisms, the two horizontal maps are identical, and by Lemma \ref{lem_diff_induce_pm_id_with_empty_link}, the right vertical map is $\pm \id$. Therefore the left vertical map is also $\pm \id$, which implies $(D_\gamma)_* = \pm \id$ on $H(L,\omega,p)$. Now consider the commutative diagram (up to sign) $$ \begin{tikzcd} H(\emptyset,\omega,p)\otimes H(L,\emptyset,p) \arrow{r}\arrow{d}{(D_\gamma)_*\otimes (D_\gamma)_*} & H(L,\omega,p) \arrow{d}{(D_\gamma)_*} \\ H(\emptyset,\omega,p)\otimes H(L,\emptyset,p) \arrow{r} & H(L,\omega,p), \end{tikzcd} $$ where the horizontal maps are given by excisions along $\{t_*\}\times R$. This diagram is commutative because $\id_{S^1}\times D_\gamma:S^1\times R \to S^1\times R$ lifts to a self-diffeomorphism on the cobordism that defines the horizontal maps. Since $(D_\gamma)_*=\pm \id$ on $H(\emptyset,\omega,p)$ and $H(L,\omega,p)$, we conclude that $(D_\gamma)_*= \pm \id$ on $ H(L,\emptyset,p)$.
\end{proof} \begin{proof}[Proof of Proposition \ref{prop_diff_induces_id_on_II}] Since the mapping class group is generated by Dehn twists, we only need to prove Proposition \ref{prop_diff_induces_id_on_II} when $\varphi$ is a Dehn twist along a simple closed curve $\gamma$ in $R\backslash (N\cup\{p\})$. Let $F=R\backslash N$. If $\gamma$ is non-separating in $F$, the desired result follows from Lemma \ref{lem_diff_induces_id_on_II_Dehn_twist}. If $\gamma$ is separating, at least one component of $F\backslash \gamma$ has genus at least $2$. Hence there exists an embedded compact surface $K$ in $F$ such that \begin{enumerate} \item $K$ is diffeomorphic to a sphere with four interior disks removed. \item $\partial K = \gamma\cup\gamma_1\cup \gamma_2\cup \gamma_3$ where $\gamma_1,\gamma_2,\gamma_3$ are non-separating in $F$. \end{enumerate} The lantern relation (see, for example, \cite[Proposition 5.1]{farb2011primer}) on $K$ then implies that the Dehn twist along $\gamma$ can be written as a composition of Dehn twists along non-separating curves on $F$. Hence the proposition is proved. \end{proof} \subsection{Pair-of-pants cobordisms} \label{subsec_pair_of_pants_cobordism} Suppose $L_1$ and $L_2$ are two links in $(-1,1)\times R$. Define $L_1\sqcup L_2$ to be the link obtained by isotoping $L_1$ to $(-1,0)\times R$ via the linear isotopy in the $(-1,1)$--component, isotoping $L_2$ to $(0,1)\times R$ via the linear isotopy in the $(-1,1)$--component, and then taking the union. Note that $L_1\sqcup L_2$ may not be isotopic to $L_2\sqcup L_1$ in general. The disjoint union operation is associative: the links $(L_1\sqcup L_2) \sqcup L_3$ and $L_1\sqcup (L_2\sqcup L_3)$ are canonically isotopic by a linear isotopy in the $(-1,1)$--component. We use this canonical isotopy to identify $(L_1\sqcup L_2) \sqcup L_3$ and $L_1\sqcup (L_2\sqcup L_3)$, and we will denote this link by $L_1\sqcup L_2\sqcup L_3$.
More generally, suppose $L_1,\dots,L_k$ are $k$ links in $(-1,1)\times R$; we define the disjoint union inductively by setting $$L_1\sqcup \dots \sqcup L_k = (L_1\sqcup \dots \sqcup L_{k-1})\sqcup L_k.$$ Excision along $\{t_*\}\times R$ gives an isomorphism $$ \mathcal{P}: \SiHI_{R,p}(L_1)\otimes \SiHI_{R,p}(L_2)\to \SiHI_{R,p}(L_1\sqcup L_2). $$ The map $\mathcal{P}$ is defined up to sign. It is natural and associative in the following sense: \begin{lem} \label{lem_associativity_naturality_clP} \begin{enumerate} \item Suppose $S_1$ is a link cobordism from $L_1$ to $L_1'$. Let $S_1'$ be the cobordism from $L_1\sqcup L_2$ to $L_1'\sqcup L_2$ defined by taking the disjoint union of $S_1$ with the product cobordism of $L_2$. Then $$\SiHI_{R,p}(S_1')\circ \mathcal{P}= \pm \mathcal{P}\circ \big(\SiHI_{R,p}(S_1)\otimes \id \big). $$ The analogous statement also holds for cobordisms on $L_2$. \item $\mathcal{P}(\mathcal{P}(x,y),z)= \pm \mathcal{P}(x,\mathcal{P}(y,z))$. \end{enumerate} \end{lem} \begin{proof} The cobordisms defining both sides of the equations are diffeomorphic. \end{proof} \begin{defn} \label{defn_clP} Define the isomorphism $$ \mathcal{P}: \SiHI_{R,p}(L_1)\otimes \SiHI_{R,p}(L_2)\otimes \dots \otimes \SiHI_{R,p}(L_k)\to \SiHI_{R,p}(L_1\sqcup \dots\sqcup L_k) $$ inductively by setting $$ \mathcal{P}(x_1\otimes\dots\otimes x_k) = \mathcal{P}(\mathcal{P}(x_1\otimes\dots\otimes x_{k-1}) \otimes x_k). $$ \end{defn} The links $L\sqcup \emptyset$ and $\emptyset\sqcup L$ are canonically isotopic to $L$ via linear isotopies on the $(-1,1)$ factor. Using these isotopies to identify $\SiHI_{R,p}(L)$ with $\SiHI_{R,p}(L\sqcup \emptyset)$ and $\SiHI_{R,p}( \emptyset\sqcup L)$, we have the following result.
\begin{lem} \label{lem_multiplicative_unit} There exists an element $ u_0\in \SiHI_{R,p}(\emptyset)$ such that $\mathcal{P}( u_0, x) = \pm x$ and $\mathcal{P}( x, u_0) = \pm x$ for all links $L$ disjoint from $S^1\times \{p\}$ and all $ x\in \SiHI_{R,p}(L)$. \end{lem} \begin{proof} Let $u_0\in \SiHI_{R,p}(\emptyset)$ be an element such that $\mathcal{P}( u_0, u_0) = \pm u_0$. Then the desired result follows by the associativity of $\mathcal{P}$ and the fact that $\mathcal{P}( u_0,-)$ and $\mathcal{P}(-, u_0)$ are both surjective. \end{proof} \section{Instanton homology for links in $\{0\}\times R$} \label{sec_instanton_for_links_at_0} \subsection{Notation} Suppose $R$ is a closed oriented surface with genus at least $3$. Let $p\in R$ be a fixed point. Suppose $C\subset R\backslash\{p\}$ is a closed $1$--manifold such that every non-contractible component is non-separating. Define a complex vector space $V_\mathbb{C}(C)$ as follows: \begin{enumerate} \item If $\gamma$ is a contractible simple closed curve on $R$, define $V_\mathbb{C}(\gamma)$ to be the space generated by $\mathbf{v}_+(\gamma)$ and $\mathbf{v}_-(\gamma)$, where $\mathbf{v}_+(\gamma)$ and $\mathbf{v}_-(\gamma)$ are formal generators associated with $\gamma$. \item If $\gamma$ is a non-contractible simple closed curve, let $\mathfrak{o}$, $\mathfrak{o}'$ be the two orientations of $\gamma$. Define $V_\mathbb{C}(\gamma)$ to be the space generated by $\mathbf{w}_{\mathfrak{o}}(\gamma)$ and $\mathbf{w}_{\mathfrak{o}'}(\gamma)$, where $\mathbf{w}_{\mathfrak{o}}(\gamma)$ and $\mathbf{w}_{\mathfrak{o}'}(\gamma)$ are formal generators. \item In general, suppose the connected components of $C$ are $\gamma_1,\dots,\gamma_k$; define $$V_\mathbb{C}(C)=\bigotimes_{i=1}^k V_\mathbb{C}(\gamma_i)$$ where the tensor product is taken over $\mathbb{C}$.
\end{enumerate} \begin{rem} Suppose $\Sigma$ is a surface with genus zero and there is an embedding $\iota: \Sigma\to R$ such that $\iota$ induces an injection on $\pi_1$. Then the space $V_\mathbb{C}(C)$ above coincides with $V(C)\otimes_\mathbb{Z} \mathbb{C}$, where $V(C)$ is the free abelian group defined in Section \ref{subsec_APS}. \end{rem} Recall that we view $S^1$ as the quotient space of $[-1,1]$.  We will use $0\in S^1$ to denote the image of $0\in[-1,1]$ in $S^1$. For simplicity, when $R$ and $p$ are clear from context, we will write $\SiHI_{R,p}(\{0\}\times C)$ as $\SiHI(C)$. \subsection{Isomorphism from $V_\mathbb{C}(C)$ to $\SiHI(C)$} \label{subsec_iso_V_to_SiHI} Let $R,p$ be as above. \begin{lem} \label{lem_SiHI(C)_iso_V(C)} Suppose $C$ is a closed $1$--manifold in $R\backslash\{p\}$ such that every non-contractible component is non-separating. Then $$\SiHI(C) \cong V_{\mathbb{C}}(C).$$ \end{lem} \begin{proof} Suppose $C$ has $k$ components $\gamma_1,\dots,\gamma_k$. Then $C$ is isotopic to $$(\{0\}\times \gamma_1)\sqcup(\{0\}\times \gamma_2)\sqcup \dots\sqcup (\{0\}\times \gamma_k).$$ By the excision theorem (see also Section \ref{subsec_pair_of_pants_cobordism}), this implies $\SiHI(C)\cong \bigotimes_{i=1}^k \SiHI(\gamma_i).$ Since every non-contractible component of $C$ is non-separating in $R$, by Proposition \ref{prop_embedding_of_annulus_in_R} and the computation of $\AHI$ in \cite[Example 4.2]{AHI}, we have $\dim \SiHI(\gamma_i)=2$. Hence $\dim_\mathbb{C} \SiHI(C) = 2^k = \dim_\mathbb{C} V_\mathbb{C}(C)$, and the lemma is proved. \end{proof} For the rest of this subsection, we define an explicit isomorphism from $V_\mathbb{C}(C)$ to $\SiHI(C)$ that realizes the isomorphism in Lemma \ref{lem_SiHI(C)_iso_V(C)}. The isomorphism is constructed by the following steps. \subsubsection{Step 1} \label{subsubsec_iso_V_to_SiHI_1} We first fix a basis for $\SiHI(\gamma)$ when $\gamma$ is contractible. Let $u_0\in \SiHI_{R,p}(\emptyset)$ be the element given by Lemma \ref{lem_multiplicative_unit}.
The element $u_0$ is unique up to sign. If $\gamma$ is a contractible simple closed curve on $\{0\}\times R$, then it bounds a unique disk $B$ in $\{0\}\times R$. The disk $B$ defines a link cobordism from the empty set to $\gamma$; denote this link cobordism by $D^+(\gamma)$. Let $E^+(\gamma)$ be the connected sum of $D^+(\gamma)$ with a standard torus in a small ball. Define $v_+(\gamma)\in \SiHI(\gamma)$ to be the image of $u_0$ under the cobordism map $\SiHI_{R,p}(D^+(\gamma))$, and define $v_-(\gamma)$ to be the image of $u_0$ under the map $\frac 12 \SiHI_{R,p}(E^+(\gamma))$. By definition, $v_+(\gamma)$ and $v_-(\gamma)$ are well defined only up to sign. \begin{lem} \label{lem_linear_independence_v_pm} $v_+(\gamma)$ and $v_-(\gamma)$ are linearly independent in $\SiHI(\gamma)$. \end{lem} \begin{proof} Since $\SiHI(\emptyset)\cong \mathbb{C}$ and $\SiHI(\gamma)\cong\mathbb{C}^2$, the lemma is equivalent to the statement that $\SiHI_{R,p}(D^+(\gamma))$ and $\SiHI_{R,p}(E^+(\gamma))$ are linearly independent maps from $\SiHI(\emptyset)$ to $\SiHI(\gamma)$. Since $\gamma$ is a contractible circle, we can find an embedded non-separating annulus in $R$ that contains $\gamma$, and view $D^+(\gamma)$ and $E^+(\gamma)$ as cobordisms between links in the thickened annulus. By Corollary \ref{cor_zero_maps_detected_by_AHI}, we only need to show that $\AHI(D^+(\gamma))$ and $\AHI(E^+(\gamma))$ are linearly independent as maps from $\AHI(\emptyset)$ to $\AHI(\gamma)$. In the notation of \cite{AHI}, these two cobordism maps are denoted by $\AHI(D^+)$ and $\AHI(\Sigma^+)$, and the desired result follows from \cite[Proposition 5.9]{AHI}. \end{proof} The mirror image of $D^+(\gamma)$ defines a cobordism from $\gamma$ to the empty set. We denote this cobordism by $D^-(\gamma)$. \begin{lem} \label{lem_SiHI_D-} The map $\SiHI_{R,p}(D^-(\gamma))$ takes $v_+(\gamma)$ to $0$ and takes $v_-(\gamma)$ to $\pm u_0$.
\end{lem} \begin{proof} This follows from the same argument as Lemma \ref{lem_linear_independence_v_pm}: we use Corollary \ref{cor_zero_maps_detected_by_AHI} to reduce to the case of $\AHI$, and then cite \cite[Proposition 5.9]{AHI}. The cobordism map defined by $D^-(\gamma)$ is denoted by $\AHI(D^-)$ in \cite{AHI}. \end{proof} \subsubsection{Step 2} \label{subsubsec_iso_V_to_SiHI_2} Now we fix a basis for $\SiHI(\gamma)$ when $\gamma$ is non-separating. Fix a pair of non-separating oriented simple closed curves $(\gamma_0,\gamma_0')$ on $R$ with algebraic intersection number $1$ that are disjoint from $p$. By Lemma \ref{lem_grading_SiHI_range}, the $\gamma_0'$--degrees of $\SiHI(\gamma_0)$ are contained in $\{-1,1\}$. By Lemma \ref{lem_SiHI(C)_iso_V(C)}, we have $\dim \SiHI(\gamma_0)=2$. Hence by Lemma \ref{lem_same_dim_opposite_eigenvalues}, for each $d\in\{1,-1\}$, the component of $\SiHI_{R,p}(\gamma_0)$ with $\gamma_0'$--degree $d$ is one-dimensional. Choose a non-zero element $w_0\in \SiHI(\gamma_0)$ that is homogeneous with respect to the $\gamma_0'$--degree. By Proposition \ref{prop_diff_induces_id_on_II}, this condition on $w_0$ does not depend on the choice of $\gamma_0'$. \begin{lem} \label{lem_ga0_to_ga_SiHI} Suppose $\gamma$ is an oriented non-separating simple closed curve on $R$ and let $\varphi:R\to R$ be a diffeomorphism fixing $p$ such that $\varphi(\gamma_0) = \gamma$. Let $\varphi_*:\SiHI(\gamma_0)\to \SiHI(\gamma)$ be the isomorphism induced by the diffeomorphism $\id_{S^1}\times \varphi$. Then the map $\varphi_*$ is independent of the choice of $\varphi$ up to sign. \end{lem} \begin{proof} Suppose $\varphi_1$ and $\varphi_2$ are two diffeomorphisms of $R$ such that $\varphi_i(p)=p$ and $\varphi_i(\gamma_0)=\gamma$. We need to show that $(\varphi_1)_* = \pm (\varphi_2)_*$.
The map $\varphi_1^{-1}\circ \varphi_2$ can be written as a composition of two diffeomorphisms $\phi_1$ and $\phi_2$, where $\phi_1$ is isotopic to the identity, and $\phi_2$ is the identity map on a neighborhood of $\gamma_0$. By Proposition \ref{prop_diff_induces_id_on_II}, the map defined by $\phi_2$ is $\pm \id$. Since $\phi_1$ is isotopic to the identity, it also induces $\pm \id$ on $\SiHI(\gamma_0)$ (see Example \ref{exmp_diff_induce_iso}). Therefore $(\varphi_1)_* = \pm (\varphi_2)_*$. \end{proof} \begin{defn} Suppose $\gamma$ is a non-separating curve on $R$ disjoint from $p$, and let $\mathfrak{o}$ be an orientation of $\gamma$. Let $\varphi$ be an orientation-preserving self-diffeomorphism of $R$ that takes $\gamma_0$ to $\gamma$ with orientation $\mathfrak{o}$, and let $\varphi_*$ be defined as in Lemma \ref{lem_ga0_to_ga_SiHI}. Define $w_{\mathfrak{o},w_0}(\gamma)\in \SiHI(\gamma)$ to be the image of $w_0$ under the map $\varphi_*$. \end{defn} By definition, $w_{\mathfrak{o},w_0}(\gamma)$ is well defined up to sign. \begin{lem} Suppose $\gamma$ is a non-separating curve on $R$ disjoint from $p$, and let $\mathfrak{o}$, $\mathfrak{o}'$ be the two orientations of $\gamma$. Then $w_{\mathfrak{o},w_0}(\gamma)$ and $w_{\mathfrak{o}',w_0}(\gamma)$ are linearly independent in $\SiHI(\gamma)$. \end{lem} \begin{proof} Let $\gamma'$ be a fixed oriented non-separating curve on $R$ that intersects $\gamma$ transversely at one point. Then by definition, $w_{\mathfrak{o},w_0}(\gamma)$ and $w_{\mathfrak{o}',w_0}(\gamma)$ are both homogeneous with respect to the $\gamma'$--degree, and their $\gamma'$--degrees have opposite signs. Therefore $w_{\mathfrak{o},w_0}(\gamma)$ and $w_{\mathfrak{o}',w_0}(\gamma)$ are linearly independent. \end{proof} \subsubsection{Step 3} Now we combine the results above to define an explicit isomorphism from $V_\mathbb{C}(C)$ to $\SiHI(C)$.
\begin{defn} \begin{enumerate} \item If $\gamma$ is a contractible simple closed curve on $R$, define $\theta_{w_0}(\gamma): V_\mathbb{C}(\gamma)\to \SiHI(\gamma)$ to be the linear map that takes $\mathbf{v}_+(\gamma)$ to $v_+(\gamma)$ and takes $\mathbf{v}_-(\gamma)$ to $v_-(\gamma)$. \item If $\gamma$ is a non-separating simple closed curve on $R$, define $\theta_{w_0}(\gamma): V_\mathbb{C}(\gamma)\to \SiHI(\gamma)$ to be the linear map that takes $\mathbf{w}_\mathfrak{o}(\gamma)$ to $w_{\mathfrak{o},w_0}(\gamma)$ for each orientation $\mathfrak{o}$ of $\gamma$. \end{enumerate} \end{defn} The maps $\theta_{w_0}(\gamma)$ are isomorphisms. They are well defined only up to changes of signs on the images of the standard basis of $V_\mathbb{C}(\gamma)$. When $\gamma$ is contractible, $\theta_{w_0}(\gamma)$ does not depend on the choice of $w_0$. Suppose $C\subset R\backslash\{p\}$ is a closed $1$--manifold such that every non-contractible component is non-separating. Let $\sigma$ be an ordering of the components of $C$ as $(\gamma_1,\gamma_2,\cdots,\gamma_k)$. Then the link $C$ is isotopic to $$(\{0\}\times\gamma_1)\sqcup \dots\sqcup (\{0\}\times \gamma_k)$$ via an isotopy of $(-1,1)\times R$ that preserves the projection to $R$ of every point. Any two such isotopies induce the same map from $\SiHI(C)$ to $$\SiHI_{R,p}((\{0\}\times\gamma_1)\sqcup \dots\sqcup (\{0\}\times \gamma_k))$$ because the diffeomorphisms defining the two maps are isotopic. \begin{defn} \label{defn_Theta} Define $$ \Theta_{w_0,\sigma} : V_\mathbb{C}(C) \to \SiHI(C) $$ by $\Theta_{w_0,\sigma} = \mathcal{P}\circ (\theta_{w_0}(\gamma_1)\otimes \dots \otimes \theta_{w_0}(\gamma_k))$, where we identify $\SiHI(C)$ with $\SiHI_{R,p}((\{0\}\times\gamma_1)\sqcup \dots\sqcup (\{0\}\times \gamma_k))$ using the isomorphism described above, and $\mathcal{P}$ is the map defined in Definition \ref{defn_clP}.
\end{defn} \subsection{Properties of $\Theta_{w_0,\sigma}$} The next two lemmas study the behavior of the map $\Theta_{w_0,\sigma}$ under changes of the ordering of the components of $C$. \begin{lem} \label{lem_change_order_with_contractible} Suppose $\gamma_1,\gamma_2$ are two disjoint curves on $R\backslash\{p\}$ such that $\gamma_1$ is contractible, and $\gamma_2$ is either contractible or non-separating. Let $x\in V_\mathbb{C}(\gamma_2)$ be an element of the standard basis. In other words, $x$ is of the form $\mathbf{w}_{\mathfrak{o}}(\gamma_2)$ if $\gamma_2$ is non-separating, and is of the form $\mathbf{v}_\pm(\gamma_2)$ if $\gamma_2$ is contractible. Let $C=\gamma_1\cup \gamma_2$, let $\sigma$ be the ordering $(\gamma_1,\gamma_2)$, and let $\sigma'$ be the ordering $(\gamma_2,\gamma_1)$. Then for each $\epsilon\in\{+,-\}$, $$ \Theta_{w_0,\sigma} \big( \mathbf{v}_\epsilon(\gamma_1) \otimes x\big) = \pm \Theta_{w_0,\sigma'} \big( x\otimes \mathbf{v}_\epsilon(\gamma_1) \big). $$ \end{lem} \begin{proof} Each side of the equation is given by a cobordism map from $$(S^1\times R, \emptyset, S^1\times \{p\} ) \sqcup (S^1\times R, \{0\}\times \gamma_2, S^1\times \{p\} )$$ to $$(S^1\times R, \{0\}\times C, S^1\times \{p\} )$$ applied to the element $u_0\otimes x\in \SiHI(\emptyset)\otimes \SiHI(\gamma_2)$. It is straightforward to verify that the two cobordisms are diffeomorphic relative to the boundary. \end{proof} \begin{lem} \label{lem_change_order_non_parallel} Suppose $\gamma_1,\gamma_2$ are two disjoint curves on $R$ such that $R\backslash(\gamma_1\cup \gamma_2)$ is connected. Let $\mathfrak{o}_1,\mathfrak{o}_2$ be fixed orientations of $\gamma_1,\gamma_2$. Let $C=\gamma_1\cup \gamma_2$, let $\sigma$ be the ordering $(\gamma_1,\gamma_2)$, and let $\sigma'$ be the ordering $(\gamma_2,\gamma_1)$.
Then there exists $\lambda\in\{\pm 1, \pm i\}$ such that $$ \Theta_{w_0,\sigma} \big( \mathbf{w}_{\mathfrak{o}_1}(\gamma_1) \otimes \mathbf{w}_{\mathfrak{o}_2}(\gamma_2)\big) = \lambda \cdot \Theta_{w_0,\sigma'} \big( \mathbf{w}_{\mathfrak{o}_2}(\gamma_2) \otimes \mathbf{w}_{\mathfrak{o}_1}(\gamma_1)\big). $$ \end{lem} \begin{proof} Let $\gamma_1'$ be an oriented simple closed curve that is disjoint from $\gamma_2$ and intersects $\gamma_1$ transversely at one point, and let $\gamma_2'$ be an oriented simple closed curve that is disjoint from $\gamma_1$ and intersects $\gamma_2$ transversely at one point. Let $\mathfrak{o}_1'$, $\mathfrak{o}_2'$ be the opposite orientations of $\mathfrak{o}_1$ and $\mathfrak{o}_2$ respectively. Then $\SiHI(C)$ is spanned by the images under $\Theta_{w_0,\sigma} $ of $$ \mathbf{w}_{\mathfrak{o}_1}(\gamma_1) \otimes \mathbf{w}_{\mathfrak{o}_2}(\gamma_2),\quad \mathbf{w}_{\mathfrak{o}_1'}(\gamma_1) \otimes \mathbf{w}_{\mathfrak{o}_2}(\gamma_2), \quad \mathbf{w}_{\mathfrak{o}_1}(\gamma_1) \otimes \mathbf{w}_{\mathfrak{o}_2'}(\gamma_2),\quad \mathbf{w}_{\mathfrak{o}_1'}(\gamma_1) \otimes \mathbf{w}_{\mathfrak{o}_2'}(\gamma_2),$$ and the four images have different gradings with respect to the pair $(\gamma_1',\gamma_2')$. The same statement holds for $\Theta_{w_0,\sigma'}$. Comparing the gradings of the images shows that $$ \Theta_{w_0,\sigma} \big( \mathbf{w}_{\mathfrak{o}_1}(\gamma_1) \otimes \mathbf{w}_{\mathfrak{o}_2}(\gamma_2)\big) = \lambda \cdot \Theta_{w_0,\sigma'} \big( \mathbf{w}_{\mathfrak{o}_2}(\gamma_2) \otimes \mathbf{w}_{\mathfrak{o}_1}(\gamma_1)\big) $$ for some non-zero constant $\lambda\in \mathbb{C}$ that is well-defined up to sign. There is an orientation-preserving diffeomorphism of $R$ that switches $(\gamma_1,\mathfrak{o}_1)$ and $(\gamma_2,\mathfrak{o}_2)$, so we must have $\lambda=\pm \lambda^{-1}$, that is, $\lambda^2=\pm 1$. Therefore $\lambda\in\{\pm 1, \pm i\}$.
\end{proof} We also need the following technical results. \begin{lem} \label{lem_cobordism_map_symmetry_1} Suppose $\gamma_1,\gamma_2$ are parallel non-separating simple closed curves on $R$. Let $C= \gamma_1\cup \gamma_2$, and let $S\subset R$ be the annulus in $R$ such that $\partial S = C$. Then $S$ defines a link cobordism from $\emptyset$ to $C$. Since $\gamma_1$ and $\gamma_2$ are parallel, we identify the orientations of $\gamma_1$ and $\gamma_2$. Let $\mathfrak{o},\mathfrak{o}'$ be the two orientations of $\gamma_i$ for $i=1,2$. Let $\sigma = (\gamma_1,\gamma_2)$. Then there exists $\lambda\in \mathbb{C}$ such that $$ \SiHI_{R,p}(S)(u_0) = \pm \Theta_{w_0,\sigma} \big( \lambda\cdot \mathbf{w}_{\mathfrak{o}}(\gamma_1) \otimes \mathbf{w}_{\mathfrak{o}'}(\gamma_2) \pm \lambda\cdot \mathbf{w}_{\mathfrak{o}'}(\gamma_1) \otimes \mathbf{w}_{\mathfrak{o}}(\gamma_2) \big). $$ \end{lem} \begin{proof} For each simple closed curve $\gamma'$ on $R$, the $\gamma'$--grading of the left-hand side of the equation is zero. Therefore, $\SiHI_{R,p}(S)(u_0)$ must have the form $$\pm \Theta_{w_0,\sigma} \big( \lambda_1 \cdot \mathbf{w}_{\mathfrak{o}}(\gamma_1) \otimes \mathbf{w}_{\mathfrak{o}'}(\gamma_2)+ \lambda_2 \cdot \mathbf{w}_{\mathfrak{o}'}(\gamma_1) \otimes \mathbf{w}_{\mathfrak{o}}(\gamma_2) \big).$$ We need to show that $\lambda_1=\pm \lambda_2$. Since $\gamma_1$ and $\gamma_2$ are parallel, there is an isomorphism $\iota: \SiHI(\gamma_1)\to \SiHI(\gamma_2)$ defined by the isotopy from $\gamma_1$ to $\gamma_2$. Consider the element $$ \xi = \mathcal{P}^{-1}\circ \SiHI_{R,p}(S)(u_0) \in \SiHI(\gamma_1)\otimes \SiHI(\gamma_2), $$ where we identify $\SiHI(\gamma_1\cup\gamma_2)$ with $\SiHI_{R,p}(\{0\}\times \gamma_1\sqcup \{0\}\times \gamma_2)$ as in Definition \ref{defn_Theta}.
Let $\phi:\SiHI(\gamma_1)\otimes \SiHI(\gamma_2)\to \SiHI(\gamma_2)\otimes \SiHI(\gamma_1)$ be the isomorphism that switches the two components. It suffices to show that \begin{equation} \label{eqn_phi(xi)=xi} \phi(\xi) = \pm (\iota\otimes \iota^{-1})\xi. \end{equation} Let $\hat{\mathcal{P}}: \SiHI_{R,p}(\gamma_1\cup \gamma_2)\to \SiHI(\gamma_1)\otimes \SiHI(\gamma_2)$ be the map induced by the reversed pair-of-pants cobordism. Recall that the excision theorem was proved by showing that $\mathcal{P}\circ \hat{\mathcal{P}} = c\cdot \id$ for some non-zero constant $c$ depending only on the genus of $R$. So we have $\mathcal{P} ^{-1} = c^{-1}\cdot \hat{\mathcal{P}}$. We claim that \begin{equation} \label{eqn_empty_to_two_parallel_inv_by_switch} (\iota\otimes \iota^{-1})\circ \hat{\mathcal{P}}\circ \SiHI_{R,p}(S) = \pm \phi \circ \hat{\mathcal{P}}\circ \SiHI_{R,p}(S) \end{equation} as maps from $\SiHI(\emptyset)$ to $\SiHI(\gamma_2)\otimes \SiHI(\gamma_1)$. Equation \eqref{eqn_empty_to_two_parallel_inv_by_switch} holds because the cobordisms defining the two sides of the equation are diffeomorphic. In fact, the cobordisms defining the two sides of \eqref{eqn_empty_to_two_parallel_inv_by_switch} are shown schematically in Figure \ref{fig_pair_of_pants} and Figure \ref{fig_pair_of_pants_switch} in the direction perpendicular to $R$, and the identity follows from the fact that there is a diffeomorphism from Figure \ref{fig_pair_of_pants} to Figure \ref{fig_pair_of_pants_switch} that restricts to the identity on the boundary. Equation \eqref{eqn_empty_to_two_parallel_inv_by_switch} immediately implies \eqref{eqn_phi(xi)=xi}, and the desired result is proved.
\end{proof} \begin{figure}[!htb] \begin{minipage}{0.48\textwidth} \centering \includegraphics[width=.3\linewidth]{./figures/pair_of_pants} \caption{} \label{fig_pair_of_pants} \end{minipage} \begin{minipage}{0.48\textwidth} \centering \includegraphics[width=.3\linewidth]{./figures/pair_of_pants_switch} \caption{} \label{fig_pair_of_pants_switch} \end{minipage} \end{figure} \begin{lem} \label{lem_cobordism_map_symmetry_2} Suppose $\gamma_1,\gamma_2$ are parallel non-separating simple closed curves on $R$. Let $C= \gamma_1\cup \gamma_2$, and let $S\subset R$ be the annulus in $R$ such that $\partial S = C$. Then $S$ defines a link cobordism from $C$ to $\emptyset$. Since $\gamma_1$ and $\gamma_2$ are parallel, we identify the orientations of $\gamma_1$ and $\gamma_2$. Let $\mathfrak{o},\mathfrak{o}'$ be the two orientations of $\gamma_i$ for $i=1,2$. Let $\sigma = (\gamma_1,\gamma_2)$. Then $$ \SiHI_{R,p}(S) \circ \Theta_{w_0,\sigma}( \mathbf{w}_{\mathfrak{o}}(\gamma_1) \otimes \mathbf{w}_{\mathfrak{o}'}(\gamma_2) )= \pm \SiHI_{R,p}(S) \circ \Theta_{w_0,\sigma}( \mathbf{w}_{\mathfrak{o}'}(\gamma_1) \otimes \mathbf{w}_{\mathfrak{o}}(\gamma_2) ). $$ \end{lem} \begin{proof} This follows from the same argument as Lemma \ref{lem_cobordism_map_symmetry_1} by reversing the directions of all cobordisms. \end{proof} \section{Proof of the main theorem} \label{sec_proof_main} Let $\Sigma$ be a fixed compact surface with genus zero. Recall that when $\Sigma\cong S^2$, Theorem \ref{thm_main} follows from Kronheimer--Mrowka's unknot detection result. Therefore we will always assume that $\partial \Sigma\neq \emptyset$ in this section. Let $F$ be a connected surface such that there is an orientation-reversing diffeomorphism $\tau: \partial F \to \partial \Sigma$. Fix the map $\tau$ and let $R = F\cup_\tau \Sigma$. Choose $F$ so that the genus of $R$ is at least $3$. Let $p$ be a fixed point on $F$.
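The genus condition on $R$ is easy to arrange. The following Euler characteristic count is a standard computation included here for convenience (it is not taken from the source); in it, $g_F$ denotes the genus of $F$ and $b$ the common number of boundary circles of $F$ and $\Sigma$.

```latex
% Gluing F and \Sigma along their b boundary circles gives
%   \chi(R) = \chi(F) + \chi(\Sigma)
%           = (2 - 2g_F - b) + (2 - 0 - b)
%           = 4 - 2g_F - 2b.
% Since R is closed, \chi(R) = 2 - 2g(R), and therefore
\[
  g(R) \;=\; g_F + b - 1 .
\]
% As \partial\Sigma \neq \emptyset forces b \ge 1, any choice of F with
% g_F \ge 3 guarantees g(R) \ge 3.
```

In particular, one may take $F$ to be any connected surface of genus at least $3$ with the required number of boundary circles.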
Let $C$ be an embedded closed $1$--manifold in $\Sigma$, and view it as a submanifold of $R$. Since $\Sigma$ has genus zero and $F$ is connected, every non-contractible component of $C$ is non-separating in $R$. So the discussion in Section \ref{sec_instanton_for_links_at_0} applies to $C$. Let $w_0$ be given as in Section \ref{subsec_iso_V_to_SiHI}. Suppose $b$ is an embedded disk in $\Sigma$ such that the interior of $b$ is disjoint from $C$ and the boundary of $b$ intersects $C$ in two arcs. Then surgery of $C$ along $b$ produces another embedded closed $1$--manifold on $\Sigma$, which we denote by $C_b$. See Figure \ref{fig:band_surgery} for an illustration of the surgery. We will call the disk $b$ the ``band'' \emph{attached} to $C$, and call the surgery from $C$ to $C_b$ the \emph{band surgery along $b$}. \begin{figure} \begin{overpic}[width=0.7\textwidth]{./figures/band_surgery} \put(2,-5){the manifold $C$} \put(43,-5){the band $b$} \put(77,-5){the manifold $C_b$} \end{overpic} \caption{Band surgery}\label{fig:band_surgery} \end{figure} The band surgery along $b$ defines a link cobordism from $C$ to $C_b$. Let $\SiHI(b)$ denote the induced cobordism map from $\SiHI(C)$ to $\SiHI(C_b)$. We will study the properties of the map $\SiHI(b)$. To start, consider the case where one of $C$ and $C_b$ has exactly one connected component. In the case that $C$ has one component $\gamma$ and $C_b$ has two components $\gamma_1$ and $\gamma_2$, let $\sigma$ denote the only possible ordering of the components of $C$, and let $\sigma_b$ be an arbitrary ordering of the components of $C_b$. We have the following results. \begin{lem} \label{lem_cobordism_SiHI_one_trivial_to_two_trivial} If $\gamma_1$ and $\gamma_2$ are both contractible circles, then $\gamma$ is also contractible.
The map $\SiHI(b)$ has the form \begin{align*} \Theta_{w_0,\sigma}\,\mathbf{v}_+(\gamma) &\mapsto \pm \Theta_{w_0,\sigma_b}\big(\mathbf{v}_+(\gamma_1)\otimes \mathbf{v}_-(\gamma_2) \big) \pm \Theta_{w_0,\sigma_b}\big( \mathbf{v}_-(\gamma_1)\otimes \mathbf{v}_+(\gamma_2)\big), \\ \Theta_{w_0,\sigma}\, \mathbf{v}_-(\gamma) &\mapsto \pm \Theta_{w_0,\sigma_b}\big(\mathbf{v}_-(\gamma_1)\otimes \mathbf{v}_-(\gamma_2)\big). \end{align*} \end{lem} \begin{proof} Recall from Section \ref{subsec_iso_V_to_SiHI} that $ \Theta_{w_0,\sigma_b}\big(\mathbf{v}_+(\gamma_1)\otimes \mathbf{v}_-(\gamma_2) \big)$ is given by the image of $$\SiHI(D^+(\gamma_1))(u_0)\otimes \SiHI(E^+(\gamma_2))(u_0)$$ under the pair-of-pants map $\mathcal{P}$. Therefore, $ \Theta_{w_0,\sigma_b}\big(\mathbf{v}_+(\gamma_1)\otimes \mathbf{v}_-(\gamma_2) \big)$ is defined by a cobordism $(\hat W,\hat S,\hat \Omega)$ from $(S^1\times R, \emptyset, S^1\times \{p\})\sqcup (S^1\times R, \emptyset, S^1\times \{p\})$ to $(S^1\times R, \{0\}\times (\gamma_1\sqcup \gamma_2), S^1\times \{p\})$. Let $S$ be the cobordism from $\emptyset$ to $\gamma_1\cup \gamma_2$ given by the disjoint union of $D^+(\gamma_1)$ and $E^+(\gamma_2)$. Then the cobordism $\hat S$ can be pushed into a tubular neighborhood of the boundary $(S^1\times R, \{0\}\times (\gamma_1\cup \gamma_2), S^1\times \{p\})$ to the position of $S$.
Since $\mathcal{P}(u_0\otimes u_0)=\pm u_0$, this proves $$ \Theta_{w_0,\sigma_b}\big(\mathbf{v}_+(\gamma_1)\otimes \mathbf{v}_-(\gamma_2) \big) = \pm \SiHI(S)(u_0).$$ Similarly, the following elements of $\SiHI(\gamma_1\cup\gamma_2)$: $$\Theta_{w_0,\sigma_b}\big( \mathbf{v}_-(\gamma_1)\otimes \mathbf{v}_+(\gamma_2)\big), \; \Theta_{w_0,\sigma_b}\big(\mathbf{v}_-(\gamma_1)\otimes \mathbf{v}_-(\gamma_2)\big), \; \SiHI(b)\circ \Theta_{w_0,\sigma}\,\mathbf{v}_+(\gamma) , \; \SiHI(b)\circ \Theta_{w_0,\sigma}\,\mathbf{v}_-(\gamma),$$ are all given by the image of $u_0$ under maps induced by suitable cobordisms between $\emptyset$ and $\{0\}\times (\gamma_1\cup \gamma_2)$. Since $\SiHI(\emptyset)\cong \mathbb{C}$, the desired lemma is equivalent to the corresponding linear relations on these cobordism maps. By Corollary \ref{cor_zero_maps_detected_by_AHI}, we only need to prove the analogous result in $\AHI$. Hence the result follows from \cite[Proposition 5.9]{AHI}. \end{proof} \begin{lem} \label{lem_cobordism_SiHI_non_contractible_split_one_contractible} If one of $\gamma_1,\gamma_2$ is contractible and the other is non-contractible, assume without loss of generality that $\gamma_1$ is contractible and $\gamma_2$ is non-contractible. Then $\gamma$ is isotopic to $\gamma_2$, and the orientations of $\gamma$ and $\gamma_2$ are canonically identified. Let $\mathfrak{o}$ be an arbitrary orientation of $\gamma$, and use the same notation to denote the corresponding orientation of $\gamma_2$. Then $\SiHI(b)$ has the form $$ \Theta_{w_0,\sigma}\,\mathbf{w}_{\mathfrak{o}} (\gamma)\mapsto \pm \Theta_{w_0,\sigma_b}(\mathbf{v}_-(\gamma_1)\otimes \mathbf{w}_{\mathfrak{o}}(\gamma_2)).
$$ \end{lem} \begin{proof} Similar to the argument of Lemma \ref{lem_cobordism_SiHI_one_trivial_to_two_trivial}, both sides of the equation can be interpreted as the images of suitable cobordism maps from $\SiHI(\gamma)$ to $\SiHI(\gamma_1\cup \gamma_2)$ applied to the element $w_{\mathfrak{o},w_0}(\gamma)\in \SiHI(\gamma)$ (see Section \ref{subsubsec_iso_V_to_SiHI_2} for the definition of $w_{\mathfrak{o},w_0}(\gamma)$). Therefore we only need to prove the equation for the cobordism maps. The result then follows from Corollary \ref{cor_zero_maps_detected_by_AHI} and \cite[Proposition 5.14]{AHI}. \end{proof} \begin{lem} \label{lem_SiHI_cobordism_one_trivial_two_nontrivial} Suppose both $\gamma_1$ and $\gamma_2$ are non-contractible and $\gamma$ is contractible. Then $\gamma_1$ and $\gamma_2$ are isotopic to each other, and the orientations of $\gamma_1$ and $\gamma_2$ are canonically identified. Let $\mathfrak{o},\mathfrak{o}'$ be the two orientations of $\gamma_1$, and use the same notation for the orientations of $\gamma_2$. Then there exists $\lambda_1\neq 0$ such that the map $\SiHI(b)$ has the form $$ \Theta_{w_0,\sigma}\,\mathbf{v}_+(\gamma) \mapsto \pm \lambda_1\Theta_{w_0,\sigma_b}\big(\mathbf{w}_\mathfrak{o}(\gamma_1)\otimes\mathbf{w}_{\mathfrak{o}'}(\gamma_2)\big)\pm \lambda_1 \Theta_{w_0,\sigma_b}\big( \mathbf{w}_{\mathfrak{o}'}(\gamma_1)\otimes \mathbf{w}_{\mathfrak{o}} (\gamma_2)\big), \quad \Theta_{w_0,\sigma}\,\mathbf{v}_-(\gamma)\mapsto 0. $$ \end{lem} \begin{proof} The image of $\Theta_{w_0,\sigma}\,\mathbf{v}_-(\gamma)$ is zero and the image of $\Theta_{w_0,\sigma}\, \mathbf{v}_+(\gamma)$ is non-zero by Corollary \ref{cor_zero_maps_detected_by_AHI} and \cite[Proposition 5.14]{AHI}. The image of $\Theta_{w_0,\sigma}\, \mathbf{v}_+(\gamma)$ has the desired form by Lemma \ref{lem_cobordism_map_symmetry_1}.
\end{proof} \begin{lem} \label{lem_cobordism_map_1_nontrivial_to_2_nontrivial} Suppose all of $\gamma,\gamma_1,\gamma_2$ are non-contractible, and let $N$ be a regular neighborhood of $b\cup \gamma$. Then $N$ is a sphere with three disks removed, and the three boundary circles of $N$ are parallel to $\gamma_1$, $\gamma_2$ and $\gamma$. The boundary orientation of $N$ defines an orientation on each of $\gamma_1, \gamma_2, \gamma$; denote these orientations by $\mathfrak{o}_1,\mathfrak{o}_2,\mathfrak{o}$ respectively, and denote their opposite orientations by $\mathfrak{o}_1',\mathfrak{o}_2',\mathfrak{o}'$. Then there exists a choice of $w_0$ such that $\SiHI(b)$ has the form \[ \Theta_{w_0,\sigma}\, \mathbf{w}_{\mathfrak{o}'}(\gamma)\mapsto \pm \lambda_2\, \Theta_{w_0,\sigma_b}\big(\mathbf{w}_{\mathfrak{o}_1}(\gamma_1)\otimes \mathbf{w}_{\mathfrak{o}_2}(\gamma_2)\big),\quad \Theta_{w_0,\sigma}\, \mathbf{w}_{\mathfrak{o}}(\gamma)\mapsto 0, \] for some constant $\lambda_2$. \end{lem} \begin{proof} By comparing gradings, the image of $\Theta_{w_0,\sigma}\, \mathbf{w}_{\mathfrak{o}'}(\gamma)$ must be a scalar multiple of $$\Theta_{w_0,\sigma_b}\big(\mathbf{w}_{\mathfrak{o}_1}(\gamma_1)\otimes \mathbf{w}_{\mathfrak{o}_2}(\gamma_2)\big),$$ and the image of $\Theta_{w_0,\sigma}\, \mathbf{w}_{\mathfrak{o}}(\gamma)$ must be a scalar multiple of $$\Theta_{w_0,\sigma_b}\big(\mathbf{w}_{\mathfrak{o}_1'}(\gamma_1)\otimes \mathbf{w}_{\mathfrak{o}_2'}(\gamma_2)\big).$$ We show that the rank of $\SiHI(b)$ must be strictly less than $2$. Since cobordism maps are equivariant with respect to orientation-preserving self-diffeomorphisms of $R$, we may assume without loss of generality that $\gamma$ and $b$ are given by Figure \ref{fig_cobordism_3_non_contractible}, where the boundaries of the black dots are non-contractible curves in $\Sigma$.
\begin{figure}[!htb] \begin{minipage}{0.48\textwidth} \centering \begin{overpic}[width=.8\linewidth]{./figures/cobordism_3_non_contractible} \put(2,4){$\gamma$} \put(48,9){$b$} \end{overpic} \caption{} \label{fig_cobordism_3_non_contractible} \end{minipage} \begin{minipage}{0.48\textwidth} \centering \includegraphics[width=.7\linewidth]{./figures/a_knot_on_sigma} \caption{A knot in $(-1,1)\times \Sigma$} \label{fig_a_knot_on_sigma} \end{minipage} \end{figure} Consider the knot $K$ in the interior of $[-1,1]\times \Sigma$ as shown in Figure \ref{fig_a_knot_on_sigma}. The fundamental class of $K$ is not in the image of any embedded annulus in $\Sigma$, so by Theorem \ref{thm_main_instanton}, we have $$ \dim \SHI([-1,1]\times \Sigma, \{0\}\times \partial\Sigma, K) > 2. $$ By \eqref{eqn_SiHI_iso_SHI}, this implies $\dim \SiHI_{R,p}(K)>2$. By Kronheimer--Mrowka's unoriented skein exact triangle \cite[Theorem 6.8]{KM:Kh-unknot}, there is a long exact sequence $$ \cdots \to \SiHI(\gamma) \overset{\SiHI(b)}{\longrightarrow} \SiHI(\gamma_1\cup \gamma_2) \to \SiHI_{R,p}(K) \to \SiHI(\gamma) \overset{\SiHI(b)}{\longrightarrow} \SiHI(\gamma_1\cup \gamma_2)\to \cdots. $$ Since $\dim\SiHI(\gamma_1\cup\gamma_2)=4$ and $\dim\SiHI_{R,p}(K)>2$, exactness shows that the rank of $\SiHI(b)$ cannot be $2$. By diffeomorphism invariance, either the image of $\Theta_{w_0,\sigma}\mathbf{w}_\mathfrak{o}(\gamma)$ is zero for all choices of $\gamma,\gamma_1,\gamma_2$, or the image of $\Theta_{w_0,\sigma}\mathbf{w}_{\mathfrak{o}'}(\gamma)$ is zero for all choices of $\gamma,\gamma_1,\gamma_2$. Note that in the definition of $w_0$, we may choose it to have grading $1$ or $-1$ with respect to the $\gamma_0'$--grading. If $w_0'$ is a different choice of $w_0$ with the opposite sign of $\gamma_0'$--grading, then $w_{\mathfrak{o},w_0}(\gamma)$ is a scalar multiple of $w_{\mathfrak{o}',w_0'}(\gamma)$. Therefore, we can always choose $w_0$ so that the image of $\Theta_{w_0,\sigma}\mathbf{w}_{\mathfrak{o}}(\gamma)$ is zero.
\end{proof} In the case that $C$ has two components $\gamma_1$ and $\gamma_2$ that are merged into one circle $C_b=\gamma$ after the surgery, let $\sigma$ be an arbitrary ordering of the components of $C$, and let $\sigma_b$ denote the only possible ordering of the components of $C_b$. We have the following results. \begin{lem} If both $\gamma_1$ and $\gamma_2$ are contractible circles, then $\gamma$ is also contractible. In this case, the map $\SiHI(b)$ has the form \begin{align*} \Theta_{w_0,\sigma}\big(\mathbf{v}_+(\gamma_1) \otimes \mathbf{v}_+(\gamma_2)\big) &\mapsto \pm \Theta_{w_0,\sigma_b}\,\mathbf{v}_+(\gamma), &\Theta_{w_0,\sigma}\big(\mathbf{v}_+(\gamma_1)& \otimes \mathbf{v}_-(\gamma_2)\big) \mapsto \pm \Theta_{w_0,\sigma_b}\,\mathbf{v}_-(\gamma) ,\\ \Theta_{w_0,\sigma}\big(\mathbf{v}_-(\gamma_1) \otimes \mathbf{v}_+(\gamma_2) \big)&\mapsto \pm \Theta_{w_0,\sigma_b}\, \mathbf{v}_-(\gamma), &\Theta_{w_0,\sigma}\big(\mathbf{v}_-(\gamma_1)& \otimes \mathbf{v}_-(\gamma_2)\big) \mapsto 0. \end{align*} \end{lem} \begin{proof} This follows from the same argument as Lemma \ref{lem_cobordism_SiHI_one_trivial_to_two_trivial}. \end{proof} \begin{lem} \label{lem_SiHI_cobordism_two_nontrivial_merge_to_trivial} If one of $\gamma_1,\gamma_2$ is contractible and the other is non-contractible, assume without loss of generality that $\gamma_1$ is contractible and $\gamma_2$ is non-contractible. Then $\gamma$ is isotopic to $\gamma_2$, and the orientations of $\gamma$ and $\gamma_2$ are canonically identified. Let $\mathfrak{o}$ be an arbitrary orientation of $\gamma$, and use the same notation to denote the corresponding orientation of $\gamma_2$. Then $\SiHI(b)$ has the form \[ \Theta_{w_0,\sigma}\big(\mathbf{v}_+(\gamma_1)\otimes \mathbf{w}_{\mathfrak{o}}(\gamma_2) \big)\mapsto \pm\Theta_{w_0,\sigma_b}\, \mathbf{w}_{\mathfrak{o}}(\gamma),\quad \Theta_{w_0,\sigma}\big(\mathbf{v}_-(\gamma_1)\otimes \mathbf{w}_{\mathfrak{o}}(\gamma_2)\big) \mapsto 0.
\] \end{lem} \begin{proof} This follows from the same argument as Lemma \ref{lem_cobordism_SiHI_non_contractible_split_one_contractible}. \end{proof} \begin{lem} If $\gamma_1$ and $\gamma_2$ are both non-contractible and $\gamma$ is contractible, then $\gamma_1$ and $\gamma_2$ must be isotopic. The orientations of $\gamma_1$ and $\gamma_2$ are canonically identified by the isotopy. Let $\mathfrak{o}$, $\mathfrak{o}'$ be the two orientations of $\gamma_1$, and use the same notation to denote the corresponding orientations of $\gamma_2$. Then $\SiHI(b)$ has the form \begin{align} \Theta_{w_0,\sigma}\big( \mathbf{w}_\mathfrak{o}(\gamma_1)\otimes \mathbf{w}_{\mathfrak{o}}(\gamma_2) \big)&\mapsto 0, & \Theta_{w_0,\sigma}\big(\mathbf{w}_{\mathfrak{o}'}(\gamma_1)& \otimes \mathbf{w}_{\mathfrak{o}'}(\gamma_2)\big)\mapsto 0, \label{eqn_SiHI_merge_two_nontrivial_to_one_trivial_1} \\ \Theta_{w_0,\sigma}\big(\mathbf{w}_\mathfrak{o}(\gamma_1)\otimes \mathbf{w}_{\mathfrak{o}'}(\gamma_2)\big) &\mapsto \pm \lambda_3\Theta_{w_0,\sigma_b}\, \mathbf{v}_-(\gamma), &\Theta_{w_0,\sigma}\big( \mathbf{w}_{\mathfrak{o}'}(\gamma_1)&\otimes \mathbf{w}_{\mathfrak{o}} (\gamma_2)\big)\mapsto \pm \lambda_3 \Theta_{w_0,\sigma_b}\, \mathbf{v}_-(\gamma) \label{eqn_SiHI_merge_two_nontrivial_to_one_trivial_2} \end{align} for some non-zero $\lambda_3$. \end{lem} \begin{proof} The image of each standard basis element under $\SiHI(b)$ is a linear combination of $ \Theta_{w_0,\sigma_b}\, \mathbf{v}_-(\gamma)$ and $\Theta_{w_0,\sigma_b}\, \mathbf{v}_+(\gamma)$. Consider the composition of $\SiHI(b)$ with the map induced by a band splitting the contractible circle $\gamma$ into two non-contractible circles. By Corollary \ref{cor_zero_maps_detected_by_AHI} and \cite[Proposition 5.9, Proposition 5.14]{AHI}, the composition map is zero. Since the splitting map kills $\Theta_{w_0,\sigma_b}\,\mathbf{v}_-(\gamma)$ but not $\Theta_{w_0,\sigma_b}\,\mathbf{v}_+(\gamma)$ by Lemma \ref{lem_SiHI_cobordism_one_trivial_two_nontrivial}, the image of $\SiHI(b)$ is spanned by $ \Theta_{w_0,\sigma_b}\, \mathbf{v}_-(\gamma)$.
The two terms in \eqref{eqn_SiHI_merge_two_nontrivial_to_one_trivial_1} are zero for grading reasons. To show that the two terms in \eqref{eqn_SiHI_merge_two_nontrivial_to_one_trivial_2} are equal up to sign, we compose the map with $\SiHI(D^-(\gamma))$ from Lemma \ref{lem_SiHI_D-} and invoke Lemma \ref{lem_cobordism_map_symmetry_2}. The right-hand sides of \eqref{eqn_SiHI_merge_two_nontrivial_to_one_trivial_2} are both non-zero by Corollary \ref{cor_zero_maps_detected_by_AHI} and \cite[Proposition 5.14]{AHI}. \end{proof} \begin{lem} \label{lem_SiHI_cobordism_two_nontrivial_one_nontrivial} Assume $w_0$ is chosen so that Lemma \ref{lem_cobordism_map_1_nontrivial_to_2_nontrivial} holds. If all of $\gamma,\gamma_1,\gamma_2$ are non-contractible, let $N$ be a regular neighborhood of $b\cup \gamma_1\cup\gamma_2$. Then $N$ is a sphere with three disks removed, and the three boundary components of $N$ are parallel to $\gamma_1$, $\gamma_2$, $\gamma$. The boundary orientation of $N$ defines an orientation on each of $\gamma_1, \gamma_2, \gamma$, and we denote these by $\mathfrak{o}_1,\mathfrak{o}_2,\mathfrak{o}$ respectively. Denote their opposite orientations by $\mathfrak{o}_1',\mathfrak{o}_2',\mathfrak{o}'$. Then $\SiHI(b)$ has the form \begin{align*} \Theta_{w_0,\sigma}\big( \mathbf{w}_{\mathfrak{o}_1'}(\gamma_1)\otimes \mathbf{w}_{\mathfrak{o}_2'}(\gamma_2) \big)&\mapsto \pm \lambda_4 \Theta_{w_0,\sigma_b}\,\mathbf{w}_{\mathfrak{o}}(\gamma), &\Theta_{w_0,\sigma}\big(\mathbf{w}_{\mathfrak{o}_1'}(\gamma_1)& \otimes \mathbf{w}_{\mathfrak{o}_2}(\gamma_2)\big)\mapsto 0,\\ \Theta_{w_0,\sigma}\big(\mathbf{w}_{\mathfrak{o}_1}(\gamma_1)\otimes \mathbf{w}_{\mathfrak{o}_2'}(\gamma_2)\big) &\mapsto 0, &\Theta_{w_0,\sigma}\big(\mathbf{w}_{\mathfrak{o}_1}(\gamma_1)&\otimes \mathbf{w}_{\mathfrak{o}_2}(\gamma_2) \big)\mapsto 0.
\end{align*} \end{lem} \begin{proof} Comparison of gradings shows that \begin{enumerate} \item the image of $\Theta_{w_0,\sigma}\big( \mathbf{w}_{\mathfrak{o}_1'}(\gamma_1)\otimes \mathbf{w}_{\mathfrak{o}_2'}(\gamma_2) \big)$ is a scalar multiple of $ \Theta_{w_0,\sigma_b}\,\mathbf{w}_{\mathfrak{o}}(\gamma)$, \item the image of $\Theta_{w_0,\sigma}\big( \mathbf{w}_{\mathfrak{o}_1}(\gamma_1)\otimes \mathbf{w}_{\mathfrak{o}_2}(\gamma_2) \big)$ is a scalar multiple of $ \Theta_{w_0,\sigma_b}\,\mathbf{w}_{\mathfrak{o}'}(\gamma)$, \item the images of $\Theta_{w_0,\sigma}\big(\mathbf{w}_{\mathfrak{o}_1'}(\gamma_1) \otimes \mathbf{w}_{\mathfrak{o}_2}(\gamma_2)\big)$ and $\Theta_{w_0,\sigma}\big(\mathbf{w}_{\mathfrak{o}_1}(\gamma_1)\otimes \mathbf{w}_{\mathfrak{o}_2'}(\gamma_2)\big)$ are zero. \end{enumerate} To show that the image of $\Theta_{w_0,\sigma}\big( \mathbf{w}_{\mathfrak{o}_1}(\gamma_1)\otimes \mathbf{w}_{\mathfrak{o}_2}(\gamma_2) \big)$ must also be zero when $w_0$ is chosen so that Lemma \ref{lem_cobordism_map_1_nontrivial_to_2_nontrivial} holds, consider the two bands in Figure \ref{fig_TQFT_two_holes}. Here, we adopt the convention that if $B$ is a standard disk in the plane, then the induced orientation on $\partial B$ is counter-clockwise. The boundaries of the black dots are non-contractible curves in $\Sigma$. Let $\delta_1,\delta_2$ be simple closed curves on $\Sigma$, let $\mathfrak{r}_1,\mathfrak{r}_2$ be the orientations of $\delta_1,\delta_2$, and let $b_1$, $b_2$ be two disjoint bands attached to $\delta_1\cup \delta_2$, as shown in Figure \ref{fig_TQFT_two_holes}. Let $\tau$ denote an arbitrary ordering of the components of $\delta_1\cup \delta_2$. Then by Lemma \ref{lem_cobordism_map_1_nontrivial_to_2_nontrivial}, the image of $\Theta_{w_0,\tau}(\mathbf{w}_{\mathfrak{r}_1}(\delta_1)\otimes \mathbf{w}_{\mathfrak{r}_2}(\delta_2))$ under $\SiHI(b_2)$ is zero.
By the functoriality of $\SiHI$, we have $\SiHI(b_1)\circ \SiHI(b_2) = \SiHI(b_2)\circ \SiHI(b_1)$ on $\SiHI(\delta_1\cup \delta_2)$, therefore $$ \SiHI(b_2)\circ \SiHI(b_1)\circ \Theta_{w_0,\tau}(\mathbf{w}_{\mathfrak{r}_1}(\delta_1)\otimes \mathbf{w}_{\mathfrak{r}_2}(\delta_2))=0. $$ It then follows from Lemma \ref{lem_cobordism_SiHI_non_contractible_split_one_contractible} that $$ \SiHI(b_1)\circ \Theta_{w_0,\tau}(\mathbf{w}_{\mathfrak{r}_1}(\delta_1)\otimes \mathbf{w}_{\mathfrak{r}_2}(\delta_2))=0. $$ Since there is an orientation-preserving diffeomorphism of $R$ that takes $(\delta_1,\mathfrak{r}_1,\delta_2,\mathfrak{r}_2,b_1)$ to $(\gamma_1,\mathfrak{o}_1,\gamma_2,\mathfrak{o}_2,b)$, we conclude that the image of $\Theta_{w_0,\sigma}\big(\mathbf{w}_{\mathfrak{o}_1}(\gamma_1)\otimes \mathbf{w}_{\mathfrak{o}_2}(\gamma_2)\big)$ under the map $\SiHI(b)$ is zero. \begin{figure} \begin{overpic}[width=0.4\textwidth]{./figures/TQFT_two_holes} \put(0,8){$\delta_1$} \put(12,30){$\delta_2$} \put(25,3){$b_1$} \put(50,10){$b_2$} \end{overpic} \caption{Two bands} \label{fig_TQFT_two_holes} \end{figure} \end{proof} \begin{lem} \label{lem_lambda1_lambda3_product} Fix a choice of $w_0$. Let $\lambda_1$ be the constant given by Lemma \ref{lem_SiHI_cobordism_one_trivial_two_nontrivial}, and let $\lambda_3$ be the constant given by Lemma \ref{lem_SiHI_cobordism_two_nontrivial_merge_to_trivial}. Then $\lambda_1\lambda_3=\pm 1$. \end{lem} \begin{proof} Consider the composition of two band surgeries, where the first surgery takes one contractible circle $\gamma$ to two non-contractible circles $\gamma_1$ and $\gamma_2$, and the second surgery is defined along the same band and takes $\gamma_1\cup \gamma_2$ back to $\gamma$.
By Corollary \ref{cor_zero_maps_detected_by_AHI} and \cite[Proposition 5.14]{AHI}, the composition map takes $v_-(\gamma)$ to $2v_-(\gamma)$ (see Section \ref{subsubsec_iso_V_to_SiHI_1} for the definition of $v_-(\gamma)$). Hence we must have $\lambda_1\lambda_3=\pm 1$. \end{proof} As a result, we may rescale $w_0$ so that $\lambda_1=\pm 1$ and $\lambda_3=\pm 1$. Now we study the band surgery map on $C$ when $C$ has an arbitrary number of components. \begin{notn} \label{notn_ordering_of_components} Fix an embedding of $\Sigma$ into $\mathbb{R}^2$. For each $1$--manifold $C$ embedded in $\Sigma$, choose an ordering of the components of $C$ as follows. Suppose $\gamma_1$ and $\gamma_2$ are parallel non-contractible circles in $C$. Since $\Sigma$ is embedded in $\mathbb{R}^2$, one of $\{\gamma_1,\gamma_2\}$ is inside the other; say $\gamma_1$ is on the inside and $\gamma_2$ is on the outside. Then we require that $\gamma_1$ appear before $\gamma_2$ in the ordering. When more than one ordering satisfies this condition, we choose one arbitrarily and fix the choice. Let $\sigma$ be the chosen ordering of the components of $C$ and let $\sigma_b$ be the ordering for $C_b$. \end{notn} Recall that the isomorphism $\Theta_{w_0,\sigma_b}:V_\mathbb{C}(C_b)\to \SiHI(C_b)$ is only defined up to sign changes on the images of the standard basis elements. We make an arbitrary choice of signs and use it to define the inverse map $\Theta_{w_0,\sigma_b}^{-1}$. Define \begin{equation} \label{eqn_defn_T(b)} T(b) =\Theta_{w_0,\sigma_b}^{-1} \circ \SiHI(b)\circ \Theta_{w_0,\sigma}: V_\mathbb{C}(C) \to V_\mathbb{C}(C_b). \end{equation} \begin{prop} \label{prop_description_of_T(b)} Write $C = C'\cup C_0$ and $C_b = C_b'\cup C_0$, where $C'$ and $C_b'$ consist of the components of $C$ and $C_b$ that intersect $b$, and $C_0$ consists of the components that are disjoint from $b$.
Then $T(b)$ is given by the tensor product of a homomorphism from $V_\mathbb{C}(C')$ to $V_\mathbb{C}(C_b')$ and the identity map on $V_{\mathbb{C}}(C_0)$, where the homomorphism from $V_\mathbb{C}(C')$ to $V_\mathbb{C}(C_b')$ is described by Lemmas \ref{lem_cobordism_SiHI_one_trivial_to_two_trivial} to \ref{lem_SiHI_cobordism_two_nontrivial_one_nontrivial}, up to multiplying the coefficients in those lemmas by integer powers of $i$. \end{prop} \begin{proof} By Lemma \ref{lem_change_order_with_contractible}, we can change the position of any contractible circle in $\sigma$ or $\sigma_b$ without affecting the statement of the proposition. Note that if $\gamma_1$ and $\gamma_2$ are a pair of disjoint, non-parallel, non-contractible circles on $\Sigma$, then the condition of Lemma \ref{lem_change_order_non_parallel} is satisfied. Hence by Lemma \ref{lem_change_order_non_parallel} and the associativity of $\mathcal{P}$, we may switch the order of a pair of consecutive non-parallel, non-contractible components in $\sigma$ or $\sigma_b$ without affecting the statement of the proposition. Therefore, we may assume without loss of generality that the components of $C'$ and $C_b'$ occupy consecutive positions in $\sigma$ and $\sigma_b$ respectively. The desired result then follows from Lemmas \ref{lem_cobordism_SiHI_one_trivial_to_two_trivial} to \ref{lem_SiHI_cobordism_two_nontrivial_one_nontrivial} and the associativity of $\mathcal{P}$. \end{proof} Now we prove a rank estimate that is central to the proof of Theorem \ref{thm_main}. \begin{prop} \label{prop_comparison_rank_APS_SiHI} Suppose $L$ is a link in the interior of $[-1,1]\times \Sigma$. Then $$\operatorname{rank}_{\mathbb{Z}/2} \APS(L;\mathbb{Z}/2)\ge \dim_\mathbb{C}\SiHI_{R,p}(L).$$ \end{prop} \begin{proof} Suppose the link $L$ is given by a diagram $D$ on $\Sigma$ with $k$ crossings. Fix an ordering of the crossings. For $v=(v_1,v_2,\dots, v_k)\in \{0,1\}^k$, let $D_v$ be the resolved diagram indexed by $v$.
Let $\mathcal{S}\subset \{0,1\}^k \times \{0,1\}^k$ be the set of pairs $(v,u)$ such that $u$ is obtained from $v$ by changing one entry from $0$ to $1$. For $(v,u)\in \mathcal{S}$, the change from $D_v$ to $D_u$ is given by a band surgery (see Figure \ref{fig_01smoothing} and Figure \ref{fig:band_surgery}). We use $b_{vu}$ to denote the band. By \cite[Proposition 6.9]{KM:Kh-unknot}, there is a spectral sequence abutting to $$ \II(S^1\times R, L,S^1\times \{p\}) $$ whose first page is $$ \bigoplus_{v\in\{0,1\}^k} \II(S^1\times R, \{0\}\times D_v,S^1\times \{p\}). $$ The differential on the first page is given by the sum of maps \begin{equation} \label{eqn_second_page_differential} \II(S^1\times R, \{0\}\times D_v,S^1\times \{p\}) \to \II(S^1\times R, \{0\}\times D_u,S^1\times \{p\}) \end{equation} for all $(v,u)\in \mathcal{S}$, where each map is equal to the cobordism map defined by $b_{vu}$ up to sign. According to the discussion in \cite{AHI}*{Section 5} (cf. also \cite{xie2020instantons}*{Lemma 5.4}), this yields a spectral sequence relating the eigenspaces of the $\mu$ maps. Therefore, we obtain a spectral sequence abutting to $\SiHI_{R,p}(L)$ whose first page is given by $$ E_1 = \bigoplus_{v\in\{0,1\}^k} \SiHI(D_v), $$ and the differential on the first page is equal to $$d_1 = \sum_{(v,u)\in \mathcal{S}}\SiHI(b_{vu})$$ for some choice of signs of the cobordism maps. The spectral sequence then implies \begin{equation} \label{eqn_rank_estimate_SiHI_APS_1} \dim_\mathbb{C} H(E_1,d_1)\ge \dim_\mathbb{C} \SiHI_{R,p}(L), \end{equation} where $H(\cdot)$ denotes the homology of the complex. Identify $E_1$ with $$\bigoplus_{v\in\{0,1\}^k} V_\mathbb{C}(D_v)$$ using the isomorphisms $$\Theta_{w_0,\sigma_v}: V_\mathbb{C}(D_v)\to \SiHI(D_v),$$ where $\sigma_v$ are the orderings of components given by Notation \ref{notn_ordering_of_components}, and $w_0$ is chosen so that Lemma \ref{lem_cobordism_map_1_nontrivial_to_2_nontrivial} holds.
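As an aside from the proof, the edge set $\mathcal{S}$ of the cube of resolutions is purely combinatorial: it consists of the pairs $(v,u)$ differing in exactly one entry, changed from $0$ to $1$. A minimal Python sketch (the function name is ours, not from the paper) enumerates it:

```python
from itertools import product

def hypercube_edges(k):
    """Return the set S of pairs (v, u) of 0/1-vectors of length k
    such that u is obtained from v by changing one entry from 0 to 1."""
    edges = []
    for v in product((0, 1), repeat=k):
        for i in range(k):
            if v[i] == 0:
                u = v[:i] + (1,) + v[i + 1:]
                edges.append((v, u))
    return edges

# for a diagram with k crossings there are k * 2**(k-1) such edges,
# one band surgery b_{vu} for each
assert len(hypercube_edges(3)) == 3 * 2 ** 2
```

Each edge carries one summand $\SiHI(b_{vu})$ of the differential $d_1$.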
By Lemma \ref{lem_lambda1_lambda3_product}, we may also rescale $w_0$ so that $\lambda_1=\pm 1$ and $\lambda_3=\pm 1$ in Lemmas \ref{lem_SiHI_cobordism_one_trivial_two_nontrivial} and \ref{lem_SiHI_cobordism_two_nontrivial_merge_to_trivial}. We now define a filtration on $(E_1,d_1)$ following the strategy of \cite[Section 6.2]{winkeler2021khovanov}. For each circle $\gamma$ on $\Sigma$, define a grading on $V_\mathbb{C}(\gamma)$ as follows. If $\gamma$ is a contractible circle, define the grading to be zero. If $\gamma$ is a non-contractible circle, recall that we view $\Sigma$ as a subset of $\mathbb{R}^2$. Define the grading of $\mathbf{w}_\mathfrak{o}(\gamma)$ to be $1$ if $\mathfrak{o}$ is the counter-clockwise orientation, and define the grading of $\mathbf{w}_\mathfrak{o}(\gamma)$ to be $-1$ if $\mathfrak{o}$ is the clockwise orientation. We adopt the convention that if $B$ is the standard disk in $\mathbb{R}^2$ then the induced orientation on $\partial B$ is counter-clockwise. It is then straightforward to check using Proposition \ref{prop_description_of_T(b)} that the map $d_1$ does not increase the grading, and the only terms in $d_1$ that decrease the grading are the terms with coefficients $\lambda_2$ and $\lambda_4$ given by Lemma \ref{lem_cobordism_map_1_nontrivial_to_2_nontrivial} and Lemma \ref{lem_SiHI_cobordism_two_nontrivial_one_nontrivial}. Let $(E_1',d_1')$ be the associated graded complex of $(E_1,d_1)$ with respect to the above filtration. By the spectral sequence associated with the filtration, we have \begin{equation} \label{eqn_rank_estimate_SiHI_APS_2} \dim_\mathbb{C} H(E_1',d_1')\ge \dim_\mathbb{C} H(E_1,d_1). \end{equation} The space $E_1'$ is isomorphic to $$\bigoplus_{v\in\{0,1\}^k} V_\mathbb{C}(D_v),$$ which is the same as the group $\CKh_\Sigma(L)\otimes \mathbb{C}$ from the definition of $\APS(L;\mathbb{C})$ in Section \ref{subsec_APS}.
By Proposition \ref{prop_description_of_T(b)}, the differential $d_1'$ coincides with the differential on $\CKh_\Sigma(L)\otimes \mathbb{C}$ up to multiplication by integer powers of $i$ on the images of the standard basis. This implies that there is a complex $$ (\hat{E}, \hat{d}) $$ with coefficients in $\mathbb{Z}[i]$, such that when reducing to $\mathbb{C}$ coefficients, the complex $(\hat{E}\otimes\mathbb{C}, \hat{d}\otimes \id_{\mathbb{C}})$ is isomorphic to $(E_1',d_1')$; when reducing to $\mathbb{Z}[i]/(1-i) \cong \mathbb{Z}/2$ coefficients, the complex $$(\hat{E}\otimes \big(\mathbb{Z}[i]/(1-i)\big), \hat{d}\otimes \id_{\mathbb{Z}[i]/(1-i)})$$ is isomorphic to the complex that defines $\APS(L;\mathbb{Z}/2)$. By the universal coefficient theorem, we have $$ \operatorname{rank}_{\mathbb{Z}/2} \APS(L;\mathbb{Z}/2) \ge \operatorname{rank}_{\mathbb{Z}[i]} H(\hat{E}, \hat{d}) = \dim_\mathbb{C} H(E_1',d_1'). $$ Hence the desired result follows by combining the above inequality with \eqref{eqn_rank_estimate_SiHI_APS_1} and \eqref{eqn_rank_estimate_SiHI_APS_2}. \end{proof} \begin{proof}[Proof of Theorem \ref{thm_main}] Suppose $L$ is a link in the interior of $[-1,1]\times \Sigma$ such that $\operatorname{rank}_{\mathbb{Z}/2}\APS(L;\mathbb{Z}/2)\le 2$. Let $F,R,p$ be as before. Then by Proposition \ref{prop_comparison_rank_APS_SiHI} and \eqref{eqn_SiHI_iso_SHI}, we have $$ \dim_{\mathbb{C}}\SHI([-1,1]\times \Sigma, \{0\}\times \Sigma, L)\le 2. $$ Now the theorem follows from Theorem \ref{thm_main_instanton}. \end{proof} \end{document}
\begin{document} \selectlanguage{english} \udk{519.21} \author{M.~Ilienko} \institution{National Technical University of Ukraine \lq\lq Igor Sikorsky Kyiv Polytechnic Institute\rq\rq, Kyiv} \position{associate prof. of the department of mathematical analysis and probability theory} \academictitle{PhD, associate prof.} \mail{[email protected]} \orcid{0000-0001-6982-6124} \author{A.~Polishchuk} \institution{Institute of Applied Physics, National Academy of Sciences of Ukraine, Kyiv} \position{junior researcher} \mail{[email protected]} \shortauthor{Ilienko~M., Polishchuk~A.} \capitalauthor{M.~ILIENKO, A. POLISHCHUK} \authoreng{Ilienko~M.~K., Polishchuk~A.Yu.} \title{On the convergence of Baum-Katz series for sums of linear 2-nd order autoregressive sequences of random variables} \shorttitle{BAUM-KATZ SERIES FOR 2-ND ORDER AUTOREGRESSIVE SEQUENCES \ldots} \titleeng{On the convergence of Baum-Katz series for sums of linear 2-nd order autoregressive sequences} \maketitle \received{.} \newcommand{\hm}[1]{#1\nobreak\discretionary{}{\hbox{\ensuremath{#1}}}{}} \newcommand{\mathsf{Leb}}{\mathsf{Leb}} \begin{abstract} We consider complete convergence and the closely related Hsu-Robbins-Erd\H{o}s-Spitzer-Baum-Katz series for sums whose terms are elements of a linear 2-nd order autoregressive sequence of random variables, and we prove sufficient conditions for the convergence of this series.
\keywords{linear 2-nd order autoregressive models, weighted sums, complete convergence, Hsu-Robbins-Erd\H{o}s series, Spitzer series, Baum-Katz series} \end{abstract} \noindent \paragraph{Introduction} On a common probability space $(\Omega, \mathcal{F},\mathbb{P})$ consider a linear 2-nd order autoregressive sequence of random variables (r.v.'s) $(\xi_{k}, \, k\geq1)$, which obeys the system of stochastic recurrence equations \begin{equation}\label{model} \xi_{-1}=0, \ \ \xi_{0}=0, \ \ \xi_{k}=a\xi_{k-1}+b\xi_{k-2}+\theta_{k}, \ \ k\ge 1, \end{equation} where $a$ and $b$ are some real constants and $(\theta_{k})$ is a sequence of independent copies of a r.v. $\theta$. Note that linear regression models of different types have been studied for a long time, and a multitude of publications contain various problems for regression sequences of r.v.'s and their extensions; see, for instance, \cite{b,mikosh} and the numerous references therein. For elements of the sequence \eqref{model} set \begin{equation*} S_n=\sum_{k=1}^n \xi_k, \ \ n\geq1, \end{equation*} and for any $\varepsilon>0$ consider the following series: \begin{equation}\label{BKseries} \sum_{n=1}^\infty n^{\frac{r}{p}-2}\mathbb{P} \Big\{\frac{|S_n|}{n^{1\!/\!p}}>\varepsilon \Big\}, \end{equation} where $0<p<2$ and $r\geq p$. In this paper we are interested in conditions for the convergence of this series. Hereinafter we refer to the series \eqref{BKseries} as the Baum-Katz series, although some other no less prominent authors were involved in introducing it. Historically, for a sequence $(X_{n}, \, n\geq1)$ of independent copies of a r.v. $X$, and $S_n=\sum_{k=1}^n X_k,$ $n\geq1$, the reduced version (the case $r=2p=2$) of the series \eqref{BKseries} initially arose in the paper by Hsu and Robbins along with the notion of complete convergence; see \cite{Hsu}.
In that paper, the authors proved that if $\mathbb E X^{2}<\infty$, then $\sum_{n=1}^\infty\mathbb{P} \Big\{\Big|S_n/n-\mathbb EX\Big|>\varepsilon \Big\} <\infty$, while the converse was provided by Erd\H{o}s; see \cite{Erd}. Note that, in view of the Borel-Cantelli Lemma, complete convergence implies almost sure convergence, and so is tightly connected with the Strong Law of Large Numbers. Further, Spitzer, see \cite{Sp}, showed that $\sum_{n=1}^\infty n^{-1}\mathbb{P} \Big\{\Big|S_n/n-\mathbb EX\Big|>\varepsilon \Big\} <\infty$ if and only if $\mathbb E|X|<\infty$. Note that the series \eqref{BKseries} covers Spitzer's case with $r=p=1$. Finally, for a sequence of independent copies of a r.v. $X$, Baum and Katz, see \cite{BK}, introduced the series \eqref{BKseries} and proved that it converges if and only if $\mathbb E|X|^{r}<\infty$, with $\mathbb E X=0$ when $r\geq1$. Since then these classical results have been generalized in several directions, including the Banach space setting (see, e.g., \cite{Huan19}). We refer to \cite{deli}, where a more detailed history of the topic is provided. Among all extensions we distinguish results concerning complete convergence and convergence of Baum-Katz series for weighted sums of independent r.v.'s, also known as rowwise independent random arrays (see, e.g., \cite{deli,HuMoritz,gut,Hu,Cai,ChenMaSung} and references therein). As for dependent patterns, in the paper \cite{ilienko21} necessary and sufficient conditions for the convergence of the series (\ref{BKseries}) were obtained in the case of a first-order autoregressive sequence of r.v.'s, i.e. with $b=0$ in \eqref{model}. In this paper we concentrate on sufficient conditions for the convergence of the series \eqref{BKseries} for sums of elements of the model \eqref{model}; under some simple assumptions imposed on $a$ and $b$ we obtain a Baum-Katz result similar to the independent case.
In our investigation we reduce the problem to the idea provided in \cite{ilienko21}, which in its turn was partially borrowed from \cite{gut}. \paragraph{Preliminaries} Consider the nonrandom recurrence sequence $(u_{n}, \, n\geq 1)$: \begin{equation}\label{uuu} u_{-1}=0, \ \ u_{0}=1, \ \ u_{n}=au_{n-1}+bu_{n-2}, \ \ n\geq 1. \end{equation} Evaluating \eqref{model} one has \begin{equation*} \xi_1=\theta_1=u_0\theta_1, \end{equation*} \begin{equation*} \xi_2=a\theta_1+\theta_2=u_1\theta_1+u_0\theta_2, \end{equation*} \begin{equation*} \xi_3=(a^2+b)\theta_1+a\theta_2+\theta_3=u_2\theta_1+u_1\theta_2+u_0\theta_3, \end{equation*} \begin{equation*} \ldots , \end{equation*} i.e. \begin{equation*} \xi_k=\sum_{l=1}^k u_{k-l}\theta_l, \ \ k\geq 1. \end{equation*} Now, for $n\geq 1 $ and $1\leq k \leq n$ set \begin{equation*} u(n-k)=\sum_{m=0}^{n-k} u_m. \end{equation*} Thus, \begin{equation*} S_n=\sum_{k=1}^n \xi_k = \sum_{k=1}^n \Big( \sum_{l=1}^k u_{k-l}\theta_l \Big)= \sum_{k=1}^n \Big(\sum_{m=0}^{n-k} u_m \Big) \theta_k=\sum_{k=1}^n u(n-k)\theta_k, \ \ n\geq1. \end{equation*} \paragraph{Main result} Let us immediately proceed to the main result of this paper. \begin{theorem}\label{theorem1} Let in \eqref{model} \begin{equation}\label{coef_assump} -1<b<1-|a|, \end{equation} and $0<p<2$, $r\geq p$. If $\mathbb E|\theta|^{r}<\infty$, where $\mathbb E \theta=0 $ whenever $r\geq1$, then for any $\varepsilon>0$, \begin{equation*} \sum_{n=1}^\infty n^{\frac{r}{p}-2}\mathbb{P} \Big\{\frac{|S_n|}{n^{1\!/\!p}}>\varepsilon \Big\} <\infty. \end{equation*} \end{theorem} \paragraph{Proof} In \cite{ilienko21} the analogue of Theorem 1 for a linear 1-st order autoregressive sequence of r.v.'s was proved in all details. We may adapt the proof of sufficiency to our case if we show that the values $u(n-k)$, $n\geq 1 $, $1\leq k \leq n$, are bounded in a similar way (compared with the $a(n,k)$'s in \cite{ilienko21}).
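As a sanity check (not part of the original argument), the weighted-sum representation $S_n=\sum_{k=1}^n u(n-k)\theta_k$ obtained in the Preliminaries can be verified numerically; the following Python sketch is illustrative only, with coefficients $a=0.5$, $b=0.3$ satisfying \eqref{coef_assump}:

```python
import random

def check_representation(a, b, n, seed=0):
    """Compare S_n from the recursion xi_k = a xi_{k-1} + b xi_{k-2} + theta_k
    with the weighted sum  sum_{k=1}^n u(n-k) theta_k."""
    rng = random.Random(seed)
    theta = [0.0] + [rng.gauss(0.0, 1.0) for _ in range(n)]  # theta_1..theta_n

    # S_n via the autoregressive recursion with xi_{-1} = xi_0 = 0
    xi_prev2 = xi_prev1 = 0.0
    S = 0.0
    for k in range(1, n + 1):
        xi = a * xi_prev1 + b * xi_prev2 + theta[k]
        S += xi
        xi_prev2, xi_prev1 = xi_prev1, xi

    # u_{-1} = 0, u_0 = 1, u_m = a u_{m-1} + b u_{m-2};  u(j) = u_0 + ... + u_j
    u = [1.0, a]
    for m in range(2, n):
        u.append(a * u[m - 1] + b * u[m - 2])
    U = [sum(u[:j + 1]) for j in range(n)]
    S_weighted = sum(U[n - k] * theta[k] for k in range(1, n + 1))
    return abs(S - S_weighted)

assert check_representation(0.5, 0.3, 50) < 1e-9
```

The two computations agree up to floating-point error, as the interchange of summation in the display above predicts.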
Introduce two real-valued matrices $$ M=\begin{pmatrix} 1 & 0 \\ 0 & 0 \\ \end{pmatrix}, \quad C=\begin{pmatrix} a & b \\ 1 & 0 \\ \end{pmatrix}. $$ Note that $C$ is a Frobenius matrix. Let $\lambda_1$ and $\lambda_2$ be its eigenvalues, i.e. the roots of the characteristic equation $\lambda^2-a\lambda-b=0$. Denote by $\nu_1$ and $\nu_2$ the multiplicities of $\lambda_1$ and $\lambda_2$ respectively. Set $$ \rho=\max\big\{|\lambda_1|,|\lambda_2|\big\} \quad \textrm{and} \quad \mu= \underset{1\leq k\leq 2}{\max} \{\nu_k: |\lambda_k|=\rho\}. $$ Obviously, in our case either $\mu=1$ or $\mu=2$. Moreover, assumption \eqref{coef_assump} implies that both roots $\lambda_1$ and $\lambda_2$ lie within the unit circle, that is, $\rho<1$. Observe that \begin{equation*} CM= \begin{pmatrix} a & 0 \\ 1 & 0 \\ \end{pmatrix}=\begin{pmatrix} u_1 & 0 \\ u_0 & 0 \\ \end{pmatrix}, \ \ C^2 M= \begin{pmatrix} a^2+b & 0 \\ a & 0 \\ \end{pmatrix}=\begin{pmatrix} u_2 & 0 \\ u_1 & 0 \\ \end{pmatrix}. \end{equation*} Further, \begin{equation*} C^3 M= \begin{pmatrix} a^3+2ab & 0 \\ a^2+b & 0 \\ \end{pmatrix}=\begin{pmatrix} u_3 & 0 \\ u_2 & 0 \\ \end{pmatrix}, \end{equation*} and so on. Using the method of mathematical induction it is easy to show that for any $s\geq 1$, \begin{equation*} C^s M= \begin{pmatrix} u_s & 0 \\ u_{s-1} & 0 \\ \end{pmatrix}. \end{equation*} For a square matrix $A=(a_{ij})_{i,j=1}^2$ with real entries, let $\|\cdot\|$ denote the matrix norm of the following form: $\|A\|=\Big(\sum_{i,j=1}^2 a_{ij}^2\Big)^{1/2}.$ According to a result by Koval', see \cite{ko} (see also Lemma 7.7.3 in \cite{b}), if $C$ is a Frobenius matrix, there exist some constants $c_2>c_1>0$ such that for any $s\geq1$, \begin{equation*} c_1 \cdot \rho^s \cdot s^{\mu-1} \leq \|C^s M\|\leq c_2 \cdot \rho^s \cdot s^{\mu-1}, \end{equation*} where $c_1$ and $c_2$ do not depend on $s$. 
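As an illustration (again outside the formal proof), the identity $C^sM$ above and the uniform boundedness of the partial sums $u_0+\dots+u_j$ under \eqref{coef_assump} are easy to confirm numerically; the values $a=0.5$, $b=0.3$ below are ours and give $\rho<1$:

```python
def u_seq(a, b, n):
    """u_{-1} = 0, u_0 = 1, u_m = a u_{m-1} + b u_{m-2}; returns [u_0, ..., u_n]."""
    u, prev = [1.0], 0.0          # prev holds u_{m-2} (initially u_{-1} = 0)
    for _ in range(n):
        u.append(a * u[-1] + b * prev)
        prev = u[-2]
    return u

def mat_mul(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(2)) for j in range(2)]
            for i in range(2)]

def C_pow_M(a, b, s):
    """Compute C^s M for the Frobenius matrix C and M = diag(1, 0)."""
    P = [[1.0, 0.0], [0.0, 0.0]]  # M
    for _ in range(s):
        P = mat_mul([[a, b], [1.0, 0.0]], P)
    return P

a, b = 0.5, 0.3                    # satisfies -1 < b < 1 - |a|
u = u_seq(a, b, 20)
for s in range(1, 21):
    P = C_pow_M(a, b, s)
    assert abs(P[0][0] - u[s]) < 1e-9 and abs(P[1][0] - u[s - 1]) < 1e-9

# the partial sums u(j) = u_0 + ... + u_j stay uniformly bounded since rho < 1;
# here they increase towards 1 / (1 - a - b) = 5
partial = [sum(u[:j + 1]) for j in range(len(u))]
assert max(abs(x) for x in partial) < 1 / (1 - a - b)
```

The first column of $C^sM$ indeed reproduces $(u_s, u_{s-1})^{\top}$, in line with the induction above.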
Now, since \begin{equation*} (C^0+C^1+C^2+\ldots+C^{n-k})M= \begin{pmatrix} u(n-k) & 0 \\ u(n-k-1) & 0 \\ \end{pmatrix}, \end{equation*} according to Koval's result, \begin{align} |u(n-k)|\leq & \sqrt{(u(n-k))^2+(u(n-k-1))^2} = \Big\|\sum_{l=0}^{n-k} C^{l}M \Big\| \leq \sum_{l=0}^{n-k} \Big\|C^{l}M \Big\| \leq \nonumber \\ & \leq c_2\sum_{l=0}^{n-k} \rho^l \cdot l^{\mu-1}, \label{inequal} \end{align} where $c_2$ is some positive constant. Now we distinguish between two cases. 1) $\lambda_1\neq\lambda_2$ (if $b\neq -a^2/4$). In this case $\mu=1$ and, according to \eqref{inequal}, \begin{equation*} |u(n-k)|\leq c_2\sum_{l=0}^{n-k} \rho^l= c_2 \frac{1-\rho^{n-k+1}}{1-\rho}, \ \ 1\leq k \leq n, \ \ n\geq 1. \end{equation*} 2) $\lambda_1=\lambda_2$ (if $b=-a^2/4$). In this case $\mu=2$ and, according to \eqref{inequal}, \begin{equation*} |u(n-k)|\leq c_2\sum_{l=1}^{n-k} l \rho^l\leq c_2\sum_{l=1}^{\infty} l \rho^l = c_2\frac{\rho}{(1-\rho)^2}, \ \ 1\leq k \leq n, \ \ n\geq 1. \end{equation*} Combining both cases, \begin{equation*} |u(n-k)|\leq L=const= \max\Big\{ \frac{c_2}{1-\rho}, \frac{c_2\rho}{(1-\rho)^2} \Big\}, \ \ 1\leq k \leq n, \ \ n\geq 1. \end{equation*} Now we briefly adapt the proof of the sufficiency part of Theorem 1 in \cite{ilienko21} to our case. As in \cite{ilienko21}, we first restrict our proof to the case of a symmetrically distributed r.v. $\theta$. Let us fix any $\varepsilon>0$ and apply an iteration of the Hoffmann-J{\o}rgensen inequality (see \cite{gut} or \cite{ilienko21}) with $s=t=n^{1/p}\varepsilon$. Thus, for $j\geq1$ there exist some constants $C_j$ and $D_j$ such that \begin{align} &\mathbb{P}\Big\{|S_n|>n^{1/p}\varepsilon\cdot 3^j\Big\} \leq \nonumber \\ &C_j \sum_{k=1}^n \mathbb{P}\Big\{ \Big|u(n-k)\theta_k \Big|>n^{1/p}\varepsilon\Big\}+D_j\Big(\mathbb{P}\Big\{|S_n|>n^{1/p}\varepsilon\Big\} \Big)^{2^j}.
\label{HJ1} \end{align} The sum in the first term of (\ref{HJ1}) can be estimated as follows: \begin{align*} &\sum_{k=1}^n \mathbb{P} \Big\{\Big|u(n-k) \theta_k\Big|>n^{1/p}\varepsilon\Big\}= \sum_{k=1}^n \mathbb{P} \Big\{|\theta_k|>\frac{n^{1/p}\varepsilon}{|u(n-k)|} \Big\}\leq \\ &\leq\sum_{k=1}^n \mathbb{P} \Big\{|\theta_k|>n^{1/p}\varepsilon L^{-1}\Big\}=n\mathbb{P} \Big\{|\theta|>n^{1/p}\varepsilon_2\Big\}, \end{align*} where $\varepsilon_2= \varepsilon L^{-1}$. Further, we refer to the corresponding estimates in \cite{ilienko21}. Now consider the second term in (\ref{HJ1}). According to the Markov inequality, for $r>p$ one has \begin{equation*} \mathbb{P}\Big\{|S_n|>n^{1/p}\varepsilon\Big\}\leq \frac{\mathbb E|S_n|^r}{(n^{1/p}\varepsilon)^r}. \end{equation*} Next we deal with $\mathbb E|S_n|^r$, distinguishing between the following cases. \textbf{1)} Let $0<r \leq 1$. Applying the $c_r$-inequality (see, for example, \cite{LBai}) with $c_r=1$ to $\mathbb E|S_n|^r$, one obtains \begin{equation*} \mathbb E|S_n|^r \leq \sum_{k=1}^{n} \mathbb E\Big|u(n-k) \theta_k \Big|^r= \sum_{k=1}^{n} \big|u(n-k) \big|^r \mathbb E |\theta_k|^r\leq \mathbb E|\theta|^r L^r n. \end{equation*} \textbf{2)} Let $r>1$. In this case we successively apply to $\mathbb E|S_n|^r$ the Marcinkiewicz-Zygmund inequality (see, for example, \cite{LBai}) and the following well-known inequality: for positive $a_i$, $1\leq i \leq n$, $n \in \mathbb N$, and $r>0$, \begin{equation*} (a_1^2+a_2^2+...+a_n^2)^{r/2}\leq n^{0\vee (r/2-1)} \sum_{i=1}^n a_i^{r}. \end{equation*} Thus, \begin{align*} &\mathbb E|S_n|^r\leq b_r \mathbb E\Big(\sum_{k=1}^{n} \Big(u(n-k) \theta_k \Big)^2 \Big)^{r/2} \leq b_r n^{0\vee (r/2-1)} \mathbb E \sum_{k=1}^{n} \big|u(n-k) \theta_k \big|^{r} = \\ &=b_r n^{0\vee (r/2-1)} \mathbb E|\theta|^{r} \sum_{k=1}^{n} \big|u(n-k) \big|^{r} \leq b_r n^{0\vee (r/2-1)} \mathbb E|\theta|^{r} L^r n=b_r n^{1\vee (r/2)} \mathbb E|\theta|^{r} L^r.
\end{align*} Here $b_r$ is some positive constant from the Marcinkiewicz-Zygmund inequality. Combining the above two cases, we arrive at the following bound: \begin{equation*} \mathbb E|S_n|^r \leq C(r) \mathbb E|\theta|^r {n^{1 \vee (r/2)}}, \end{equation*} with $C(r)=L^r$ or $b_r L^r$ depending on whether $0<r \leq 1$ or $r>1$. Now, to finish the proof one needs to literally follow the remaining steps in \cite{ilienko21}. \begin{example} If $\theta$ is a normally distributed r.v. with $\mathbb E \theta=0$, the model \eqref{model} represents the so-called Gaussian 2-Markov sequence of r.v.'s with constant coefficients. In this case the series \eqref{BKseries} converges provided that $0<p<2$, $r\geq p$ and $-1<b<1-|a|$. \end{example} \paragraph{Conclusions} In this paper, sufficient conditions for the convergence of the Baum--Katz series are considered for sums whose terms are elements of 2-nd order linear autoregressive sequences. Under some anticipated assumptions imposed on the coefficients of the autoregressive sequence, the obtained sufficient conditions are expressed as a moment assumption on the generating r.v.; the latter, in its turn, agrees with the classical Baum--Katz result in the independent case. We intentionally focused our attention on 2-nd order autoregressive sequences, avoiding the general $m$-th order case, since for $m=2$ the assumptions imposed on the coefficients of the sequence can be described in the simplest form. In prospect, however, by means of the same technique the problem may be generalized to $m$-th order autoregressive sequences for any $m\geq 2$. Moreover, we expect to also prove necessary conditions for the convergence of Baum--Katz series for such sequences. \end{document}
\begin{document} \title{Variants of Korselt's Criterion} \author[T. Wright]{Thomas Wright} \maketitle \begin{abstract} Under sufficiently strong assumptions about the first term in an arithmetic progression, we prove that for any integer $a$, there are infinitely many $n\in \mathbb N$ such that for each prime factor $p|n$, we have $p-a|n-a$. This can be seen as a generalization of Carmichael numbers, which are integers $n$ such that $p-1|n-1$ for every $p|n$. \end{abstract} \section{Introduction} Recall that a Carmichael number is a composite number $n$ for which $$a^n\equiv a\pmod n$$ for every $a\in \mathbb Z$. As Fermat's Little Theorem states that the above congruence holds whenever $n$ is prime, Carmichael numbers thus serve as the disproof of the converse of Fermat's Little Theorem. While the first Carmichael numbers were discovered in 1910 by R.D. Carmichael \cite{Ca}, the search for Carmichael numbers was aided by the discovery of a necessary and sufficient condition by A. Korselt \cite{Ko} in 1899: \begin{Ko} \textit{$n$ is a Carmichael number if and only if $n$ is squarefree and $p-1|n-1$ for each prime $p|n$.}\end{Ko} Much like the congruence in Fermat's Little Theorem above, the divisibility condition in Korselt's criterion is easily satisfied when $n$ is prime. Korselt's criterion is of fundamental importance in the study of Carmichael numbers, as it remains the primary tool used to prove statements about such numbers. Because of its importance in the study of such pseudoprimes, mathematicians have begun to ask what happens when one generalizes Korselt's criterion. The most obvious of these generalizations is to change the 1 to another number; it is this generalization which motivates the present paper: \begin{Mo} For any $a\neq 1$, are there infinitely many composite $n$ such that for every prime $p|n$, $p-a|n-a$? \end{Mo} The question is not new.
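Korselt's criterion, together with the generalized divisibility condition in the question above, is straightforward to test computationally. The following Python sketch (illustrative; the function names are ours) verifies the first few Carmichael numbers and, for $a=-1$, finds the first two examples, 399 and 935, which are the smallest Lucas--Carmichael numbers:

```python
def prime_factors(n):
    """Prime factorization of n as a dict {prime: exponent} (trial division)."""
    factors, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def is_korselt(n, a=1):
    """n composite, squarefree, and p - a | n - a for every prime p | n.
    With a = 1 this is Korselt's criterion; other values of a give the
    generalized condition from the motivating question."""
    f = prime_factors(n)
    if len(f) < 2 or any(e > 1 for e in f.values()):
        return False   # prime, prime power, or not squarefree
    return all((n - a) % (p - a) == 0 for p in f)

# the first Carmichael numbers ...
assert [n for n in range(2, 2000) if is_korselt(n)] == [561, 1105, 1729]
# ... and the first composites with p + 1 | n + 1 (the case a = -1)
assert [n for n in range(2, 1000) if is_korselt(n, a=-1)] == [399, 935]
```

For instance, $399 = 3\cdot 7\cdot 19$ and $400 = 4\cdot 100 = 8\cdot 50 = 20\cdot 20$, so each $p+1$ divides $n+1$.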
In their 1994 proof of the infinitude of Carmichael numbers, Alford, Granville, and Pomerance \cite{AGP} stated the following:\\ \textit{One can modify our proof to show that for any fixed nonzero integer $a$, there are infinitely many squarefree, composite integers $n$ such that $p-a$ divides $n-1$ for all primes $p$ dividing $n$. However, we have been unable to prove this for $p-a$ dividing $n-b$, for $b$ other than 0 or 1. Such questions have significance for variants of pseudoprime tests, such as the Lucas probable prime test, strong Fibonacci pseudoprimes, and elliptic pseudoprimes.}\\ Our question, then, is the specific case of $a=b$. For purposes of nomenclature, we will refer to a number $n$ for which $p|n$ implies $p-a|n-a$ as an \textit{$a$-Carmichael number}. A regular Carmichael number is then a 1-Carmichael number. There has been little progress on this problem since 1994. In fact, there was no progress at all until 2011, when Ekstrom, Pomerance, and Thakur \cite{EPT} gave a conditional proof of infinitude in the case of $a=b=-1$; this result was later proven unconditionally in a 2013 paper by the present author \cite{WrE}. As of now, however, $-1$ remains the only value of $a$ (besides 1 and 0) for which anything has been proven, even conditionally. In this paper, we use a conjecture of Heath-Brown to resolve the case of $p-a|n-a$ for every $a\in\mathbb Z$. As this divisibility condition is easily satisfied when $n$ is prime, our theorem can be seen as a reasonable generalization in the search for pseudoprimes. It is not clear that our results can be pushed to cases of $a\neq b$; this is in part because it is not clear for which cases of $a$ and $b$ one expects infinitely many $n$. In the next section, we will discuss the conjecture of Heath-Brown and our results. \section{Conjectures about primes in arithmetic progressions} The results of this paper will hinge upon the size of the first prime in an arithmetic progression.
The standing conjecture in the area was made by Roger Heath-Brown \cite{HB}, who claimed the following: \begin{varthmC} Let $(c,m)=1$. Then the smallest prime $p$ that is congruent to $c$ mod $m$ is $\ll m(\log m)^2$. \end{varthmC} Other versions of this conjecture have previously been used to prove results about Carmichael numbers and associated constructs. For instance, Banks and Pomerance \cite{BP} used a variant of this conjecture to prove that there are infinitely many Carmichael numbers in arithmetic progressions, and Ekstrom, Pomerance, and Thakur \cite{EPT} used another form of this conjecture to prove the aforementioned result about numbers $n$ for which $p|n$ implies $p+1|n+1$. In both cases, the authors actually used slightly weaker variants of the Heath-Brown conjecture; in fact, both papers showed that their results can be proven using either $m^{1+(\log m)^{\kappa-1}}$ for some $\kappa<1$ or $m^{1+\frac{\eta}{\log\log m}}$ for some $\eta>0$ as the upper bound. While the results in \cite{BP} and \cite{EPT} have been superseded in subsequent papers by Matom\"{a}ki and the present author (see \cite{Ma}, \cite{WrC}, and \cite{WrE}), the idea of using Heath-Brown's conjecture to prove results about Carmichael numbers comes from these works. In this paper, we stick primarily with the original Heath-Brown version of the conjecture, as it is not clear that our results would still hold with the variants mentioned above. We do afford ourselves the following relaxation of the conjecture: \begin{varthmC} \label{strong} Let $(c,m)=1$. Then there exists some constant $A$ such that the smallest prime $p$ that is congruent to $c$ mod $m$ is $\ll m(\log m)^A$. \end{varthmC} Our results would still hold if one assumed that the first prime in an arithmetic progression were $\ll m(\log m)^{\log \log m}$; however, we use the conjecture above for the purposes of transparency.
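For small moduli, the conjectured bound is easy to probe numerically. The following sketch (illustrative only, with naive trial-division primality testing) computes the least prime $p\equiv c\pmod m$ for every reduced residue $c$ mod a small $m$ and compares the worst case with Heath-Brown's bound $m(\log m)^2$.

```python
import math

def is_prime(n):
    # Naive trial division; adequate for this small demonstration.
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n**0.5) + 1))

def first_prime_in_ap(c, m):
    """Least prime p with p ≡ c (mod m); assumes gcd(c, m) = 1."""
    p = c
    while not is_prime(p):
        p += m
    return p

m = 31  # prime modulus, so every c in 1..30 is a reduced residue
worst = max(first_prime_in_ap(c, m) for c in range(1, m))
print(worst, m * math.log(m) ** 2)  # worst first prime vs. the conjectured bound
```

For $m=31$ the worst residue class is $c=1$, whose least prime is $311$, comfortably below $31(\log 31)^2 \approx 365$.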
It should be noted that all of these conjectures about the first prime in an arithmetic progression appear to be well beyond the scope of modern mathematics. Currently, the best bound for the first prime in an arithmetic progression mod $c$ is $\ll c^{5.2}$, which was proven by Xylouris \cite{Xy} in 2009. Even the assumption of the Generalized Riemann Hypothesis would only reduce this bound to $\ll c^{2+\epsilon}$. Regardless, if we are afforded the conjectures above, we are able to resolve the problem of $a$-Carmichael numbers completely. \begin{varthm1} Assume Conjecture \ref{strong}. Then for any $a$, there are infinitely many positive integers $n$ such that $p-a|n-a$ for each prime $p|n$. In fact, there exists a constant $C$ such that the number of $a$-Carmichael numbers up to $X$ is $$\gg X^{\frac{C}{(\log\log\log X)^2}}.$$ \end{varthm1} The proof of the main theorem is a combination of the ideas of \cite{AGP}, \cite{EPT}, and \cite{WrF}, as well as some new ideas. As in \cite{AGP}, we begin with an integer $L$ for which the maximum order of an element mod $L$ is small relative to $L$. Using this, we invoke the conjecture to find primes of the form $dk+a$ with relatively small $k$ for many of the $d|L$. From here, we establish that there exists a relatively small $k$ for which many of the $dk+a$ are prime simultaneously; then, we invoke the conjecture again to find that there exists a relatively small prime $P$ that is congruent to $a$ mod $kL$. With this setup, we prove that there exists some collection of these primes $dk+a$ whose product is congruent to 1 mod $P-a$; multiplying this product by $P$ then gives us an $a$-Carmichael number. Since there are infinitely many such $L$, there are infinitely many $a$-Carmichael numbers.
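The subset-product step in this outline can be illustrated in miniature. The toy brute force below (standing in for the quantitative van Emde Boas--Kruyswijk/Meshulam bound used later; all names are ours) searches for a nonempty subset of units mod $m$ whose product is $1$ mod $m$.

```python
from itertools import combinations

def subset_with_unit_product(elems, m):
    """Brute-force a nonempty subset of elems whose product is 1 mod m."""
    for size in range(1, len(elems) + 1):
        for subset in combinations(elems, size):
            prod = 1
            for x in subset:
                prod = prod * x % m
            if prod == 1:
                return subset
    return None

# Units 3, 5, 7 mod 8: no single element or pair works,
# but 3 * 5 * 7 = 105 ≡ 1 (mod 8).
print(subset_with_unit_product([3, 5, 7], 8))  # (3, 5, 7)
```

In the actual proof, the exhaustive search is replaced by the counting argument of Theorem 1, which guarantees such a subset exists whenever enough units are available.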
It is worth noting that there are several new ideas in this paper: \begin{itemize} \item In contrast to \cite{BP} and \cite{EPT}, we use the Heath-Brown Conjecture not once but twice; we first use it to find a suitable set of primes with which to work, and then we use it again to find another prime to append to this set in order to create an $a$-Carmichael number. \item Additionally, instead of simply taking $d$ to be any divisor of $L$, we first group the prime divisors of $L$ into sets of size $A+1$; we then take the $d$'s to be only those divisors of $L$ that can be written as products of some of these sets of $A+1$ primes. Doing this ensures that we do not double-count any of the primes of the form $dk+a$. \item Most importantly, for our various choices of $d$, we do not attempt to find a density of primes of the form $dk+a$. Instead, we only look to find a single prime of this form for each $d$. This is the key to our approach because it dramatically lessens our value for $k$; Heath-Brown's conjecture can be applied for $k$ as small as $\log^A d$, while density estimates can only be used if $k$ is at least $d^\epsilon$ (even under Montgomery's conjecture for primes in arithmetic progressions [MV, Conjecture 13.9]). Because $k$ is small, the order of $a$ mod $k$ will be small as well. \end{itemize} \section{Finding Primes: Conjecture Application \#1} As in other papers on Carmichael numbers, we begin by letting $\mathcal Q$ be a collection of primes $q$ for which $q-1$ is relatively smooth. More specifically, let $P(n)$ denote the largest prime factor of $n$, and let $1<\theta<2$. Then we define \[\mathcal Q=\left\{q\text{ prime}:\frac{y^\theta}{\log y}\leq q\leq y^{\theta},\ q\equiv -1 \pmod{\alpha},\ P(q-1)\leq y\right\}.\] It is known that there exist $Y$, $\gamma$ such that if $y>Y$ then $|\mathcal Q|\geq \gamma \frac{y^\theta}{\log y^\theta}$ (see [Wr2, Lemma 2.1]).
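A set of this shape can be computed directly for toy parameters. The sketch below (illustrative only: it omits the congruence condition $q\equiv-1\pmod{\alpha}$ and uses naive factorization) lists the primes $q$ in $[y^\theta/\log y,\,y^\theta]$ with $P(q-1)\leq y$.

```python
import math

def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n**0.5) + 1))

def largest_prime_factor(n):
    """P(n): the largest prime factor of n (with P(1) = 1 by convention here)."""
    largest, d = 1, 2
    while d * d <= n:
        while n % d == 0:
            largest = d
            n //= d
        d += 1
    return max(largest, n) if n > 1 else largest

def smooth_shifted_primes(y, theta):
    # Primes q in [y^theta / log y, y^theta] with P(q - 1) <= y.
    lo = math.ceil(y**theta / math.log(y))
    hi = math.floor(y**theta)
    return [q for q in range(lo, hi + 1)
            if is_prime(q) and largest_prime_factor(q - 1) <= y]

print(smooth_shifted_primes(10, 1.5))  # [17, 19, 29, 31]
```

For $y=10$, $\theta=1.5$, the prime $23$ is excluded because $P(22)=11>y$, while $17$, $19$, $29$, $31$ survive.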
From this, we construct $L$ to be the following: $$L=\prod_{q\in \mathcal Q}q.$$ The usual procedure in these cases is to examine the number of primes of the form $dk+a$ for the various $d|L$. From here, we then show that there are enough primes of this form to guarantee that there exists some $k$ for which there are many primes of this form. Here, though, we make an alteration to ensure that we are not counting the same primes multiple times. To this end, we define the following: Let us index the prime divisors of $L$ as $q_1,q_2,\ldots,q_{\omega(L)}$. We will assume that Conjecture \ref{strong} is true with exponent $A$. We then define $$Q_i=q_{\scriptscriptstyle{(A+1)(i-1)+1}}q_{\scriptscriptstyle{(A+1)(i-1)+2}}\cdots q_{\scriptscriptstyle{(A+1)i}}.$$ Note that for any such $Q_i$, we have $$Q_i> \frac{y^{(A+1)\theta}}{\log^{A+1} y}.$$ We will then consider only those divisors $d$ of $L$ that can be written as a product of $Q_i$'s. For a given $k$, let $$\mathcal P_k=\left\{p=dk+a:p\text{ prime},\ d|L,\ d=\prod_{j\in S}Q_j\text{ for some }S\subset \left\{1,2,\ldots,\left[\frac{\omega(L)}{A+1}\right]\right\}\right\}.$$ \section{Goals and Bounds} Our goal will be to show that there exists a $\mathcal P_k$ that is sufficiently large for our purposes. Here, sufficiently large will be defined in terms of the following theorem of van Emde Boas and Kruyswijk \cite{EK} and Meshulam \cite{Me}: \begin{theorem}\label{main} Let $n(L)$ denote the smallest number such that any set of at least $n(L)$ elements of $(\mathbb Z/L\mathbb Z)^\times$ must contain some subset the product of whose elements is 1 mod $L$. Let $\lambda(L)$ denote the maximal order (with regard to multiplication) of an element mod $L$. Then \begin{gather} n(L)<\lambda(L)\left(1+\log\left(\frac{L}{\lambda(L)}\right)\right). \end{gather} Moreover, let $r>t>n(L)$.
Then any set of $r$ elements of $(\mathbb Z/L\mathbb Z)^\times$ contains at least $\binom{r}{t}/\binom{r}{n(L)}$ distinct subsets of size at most $t$ and at least $t-n(L)$ whose product is 1 mod $L$. \end{theorem} A comprehensive proof of this theorem is given in [AGP, Proposition 1.2]; rather than reprint the proof here, we refer the interested reader to that paper. Obviously, this theorem indicates that it will be important for us to learn about the size of $\lambda(L)$. To this end, we note that $L$ consists of primes $q$ for which $q-1$ only has prime factors that are less than $y$. For a prime $r$, let $a_r$ be the largest power such that $r^{a_r}\leq y^\theta$. Then $$\lambda(L)\leq \prod_{r\leq y,\ r\text{ prime}}r^{a_r}\leq \prod_{r\leq y,\ r\text{ prime}}y^{\theta} \leq e^{2 y\theta}.$$ Using this observation to bound (1) gives the following: \begin{lemma} For $n(L)$ as above, $$n(L)\leq e^{3 y\theta}.$$ \end{lemma} \section{Counting Elements of $\mathcal P_k$} In order to determine the size of $\mathcal P_k$, we must now invoke Conjecture \ref{strong}. To this end, we have the following: \begin{theorem}\label{PKK} Assume Conjecture \ref{strong}. Then there exists an integer $k$ such that $$|\mathcal P_k|>\frac{2^\frac{\omega(L)}{A+1}}{(\log L)^A}.$$\end{theorem} We note that throughout the remainder of the paper, we will assume that $y$ and $A$ are large enough that the first prime in an arithmetic progression mod $L$ is strictly less than $L\log^A L$ (rather than $\leq$). \begin{proof} Assume the conjecture. Then for each $d$ that can be written as a product of $Q_i$'s, there exists a $k<\log^A L$ for which $dk+a$ is prime. Now, we must prove that none of these primes are double-counted (i.e. none of the primes are being counted as $dk+a$ for two different values of $d$). Let us assume that there exist $d_1,k_1,d_2,k_2$ such that $d_1k_1+a=d_2k_2+a$.
We know that there exists some $Q_i$ that divides $d_1$ but not $d_2$. So $Q_i$ must divide $k_2$. By the conjecture, $$k_2<\log^{A} d\leq \log^{A} L.$$ But for any $Q_i$, $$Q_i>\frac{y^{(A+1)\theta}}{\log^{A+1} y}>\log^{A+1-\epsilon} L,$$ so $Q_i$ cannot divide $k_2$, a contradiction. Since the number of possible $d$'s is $2^{[\frac{\omega(L)}{A+1}]}$ and every $k$ is $<\log^A L$, there must therefore exist a $k$ where $$|\mathcal P_k|\geq \frac{2^\frac{\omega(L)}{A+1}}{(\log L)^A}.$$ \end{proof} \section{Invoking the Conjecture Again} Now, we will use the Heath-Brown conjecture again to prove that there exists another prime which we can multiply by some of the primes in $\mathcal P_k$ to find an $a$-Carmichael number. \begin{lemma} \label{BigP} Assume Conjecture \ref{strong}. If $y$ and $L$ are sufficiently large, then for the $k$ chosen in Theorem \ref{PKK}, there exists a prime $P$ such that $P\equiv a$ (mod $Lk$) and $\frac{P-a}{Lk}\leq \log^{A} L$. \end{lemma} This follows immediately from the statement of the conjecture. Define $k'=\frac{P-a}{Lk}$. Then we have the following: \begin{theorem}\label{PK} For $k$ and $k'$ as above, there exists a subset of $\mathcal P_k$ whose product is 1 mod $Lkk'$. \end{theorem} \begin{proof} Recall that $$|\mathcal P_k|>\frac{2^\frac{\omega(L)}{A+1}}{(\log L)^A}.$$ Since $k\leq \log^{A} L$, $k'\leq \log^{A+1} L$ and $\lambda(L)\leq e^{2y\theta}$, it is clear that $$\lambda(kk'L)\leq \lambda(k)\lambda(k')\lambda(L)\leq \left(\log^{2A+1} L\right) e^{2y\theta},$$ which means $$n(kk'L)\leq e^{3y\theta}.$$ Hence, $$|\mathcal P_k|>n(kk'L),$$ and thus the proof follows from Theorem \ref{main}. \end{proof} Let $p_1,p_2,\ldots,p_s$ be a set of primes in $\mathcal P_k$ whose product is 1 mod $Lkk'$. We will denote $$n'=p_1p_2\cdots p_s.$$ From this, we finally prove the existence of an $a$-Carmichael number: \begin{theorem} Let $P$ and $n'$ be as above, and let $n=Pn'$.
Then $n$ is an $a$-Carmichael number.\end{theorem} \begin{proof} We know that $n'\equiv 1$ (mod $Lkk'$). Since $P\equiv a$ (mod $Lkk'$) by construction, we know that $Pn'\equiv a$ (mod $Lkk'$). So for every prime $p|n'$, we have $$p-a=dk\mid Lkk'\mid Pn'-a.$$ Moreover, for $P$, we know that $$P-a=Lkk'\mid Pn'-a.$$ Thus, $n$ is an $a$-Carmichael number.\end{proof} Since there exist infinitely many $L$ that can be constructed in this fashion, there are infinitely many $a$-Carmichael numbers. \section{Counting the $a$-Carmichael numbers} If we wish to count the number of $a$-Carmichael numbers up to $X$, we have the following: \begin{theorem} Assuming Conjecture \ref{strong}, there exists a constant $C$ such that the number of $a$-Carmichael numbers up to $X$ is $$\gg X^{\frac{C}{(\log\log\log X)^2}}.$$ \end{theorem} \begin{proof} In order to apply Theorem \ref{main}, we must choose $r$ and $t$ and find a bound for $n=n(Lkk')$ such that $n<t<r<|\mathcal P_k|$. As such, let \begin{gather*} r=\left(\frac{7}{4}\right)^{\frac{\omega(L)}{A+1}},\\ t=\left(\frac 32\right)^{\frac{\omega(L)}{A+1}}. \end{gather*} Since $n\leq e^{3y\theta}$ and $\omega(L)\geq \gamma y^\theta/\log y$, it is clear that both $n<\left(\frac 54\right)^{\frac{\omega(L)}{A+1}}$ and $n\leq \frac{1}{20}t$ hold for sufficiently large $y$.
According to Theorem \ref{main}, the number of subsets of $\mathcal P_k$ consisting of at most $t$ and at least $t-n(L)$ terms whose product is 1 mod $Lkk'$ is at least $$\binom{r}{t}\Big/\binom{r}{n}.$$ Recalling the standard bound that \[\left(\frac{u}{v}\right)^v\leq \binom{u}{v}\leq \left(\frac{ue}{v}\right)^v,\] we have that \begin{align*} \binom{r}{t}\Big/\binom{r}{n} &\geq \left(\left(\frac{7}{6}\right)^{\frac{\omega(L)}{A+1}} \right)^t \Big/\left(\left(\frac{7}{5}\right)^{\frac{\omega(L)}{A+1}} e \right)^{\frac{1}{20} t}\\ &\geq \left(\left(\frac{7}{6}\right)\Big/\left(\frac 75\right)^{\frac{1}{20}}\right)^{\frac{t\,\omega(L)}{A+1}} \left(\frac 1e\right)^{\frac{1}{20} t}\\ &\geq (1.1)^{\frac{t\,\omega(L)}{A+1}}\\ &\geq e^{\frac{t\log 1.1}{A+1} \left(\frac{\gamma y^\theta}{\log y}\right)}, \end{align*} where the last line comes from the fact that $\omega(L)\geq \frac{\gamma y^\theta}{\log y}$. Define \[X=Px^t,\] where $P$ is the prime given in Lemma \ref{BigP} and $x=L\log^{A+1}L$ is an upper bound for the primes in $\mathcal P_k$. We recall that $L\leq e^{\kappa y^\theta}$ for some $\kappa<1.02$ (see [RS, Theorems 7 and 8]). So \begin{gather} \label{g1} X=Px^t\leq (L \log^{2A+1}L)(L\log^{A+1} L)^t< e^{3y^\theta t}. \end{gather} Note that for sufficiently large $y$, \begin{gather}\label{g2} (\log\log\log X)^2\geq \log y. \end{gather} From (\ref{g1}) and (\ref{g2}), it follows that $$e^{ty^\theta\left(\frac{\log 1.1}{A+1}\right) \left(\frac{\gamma }{\log y}\right)}\gg X^{\left(\frac{\log 1.1}{3(A+1)}\right) \left(\frac{\gamma}{\left(\log\log\log X\right)^2}\right)}.$$ This is then a lower bound for the number of $a$-Carmichael numbers up to $X$, thereby proving the main theorem. \end{proof} \end{document}
\begin{document} \begin{center} {\Huge Soft Topology on Function Spaces} \textbf{\ Taha Yasin \"{O}ZT\"{U}RK$^{\mathrm{a}}$ \textbf{and }Sadi BAYRAMOV \textbf{$^{\mathrm{a}}$}} $^{\mathrm{a}}$\textit{Department of Mathematics$,$ Faculty of Science and Letters,}\\[0pt] \textit{Kafkas University$,$ TR-}$36100$\textit{\ Kars$,$ Turkey}\\[0pt] \textbf{e-mails: [email protected], [email protected] \ } \textbf{Abstract} \end{center} \begin{quotation} Molodtsov initiated the concept of soft sets in \cite{molod}. Maji et al. defined some operations on soft sets in \cite{maji2}. The concept of a soft topological space was introduced by several authors. In this paper, we introduce the concept of the pointwise topology on soft topological spaces and study the properties of soft mapping spaces. Finally, we investigate the relationships between some soft mapping spaces. \end{quotation} \noindent \textbf{Key Words and Phrases.} soft set, soft point, soft topological space, soft continuous mapping, soft mapping spaces, soft pointwise topology. \section{\protect INTRODUCTION} Many practical problems in economics, engineering, the environment, social science, medical science, etc.\ cannot be dealt with by classical methods, because classical methods have inherent difficulties. The reason for these difficulties may be the inadequacy of the parameterization tools of these theories. Molodtsov \cite{molod} initiated the concept of soft set theory as a new mathematical tool for dealing with uncertainties. Maji et al. \cite{maji},\cite{maji2} studied operations over soft sets. The algebraic structure of set theories dealing with uncertainties is an important problem. Many researchers have contributed towards the algebraic structure of soft set theory. Akta\c{s} and \c{C}a\u{g}man \cite{aktas} defined soft groups and derived their basic properties. U. Acar et al. \cite{acar} introduced initial concepts of soft rings. F. Feng et al.
\cite{feng} defined soft semirings and several related notions to establish a connection between soft sets and semirings. M. Shabir et al. \cite{shabir} studied soft ideals over a semigroup. Qiu Mei Sun et al. \cite{sun} defined soft modules and investigated their basic properties. C. Gunduz (Aras) and S. Bayramov \cite{gunduz1},\cite{gunduz} introduced fuzzy soft modules and intuitionistic fuzzy soft modules and investigated some of their basic properties. T. Y. Ozturk and S. Bayramov defined chain complexes of soft modules and their soft homology modules. T. Y. Ozturk et al. introduced the concept of inverse and direct systems in the category of soft modules. Recently, Shabir and Naz \cite{shabir2} initiated the study of soft topological spaces. Theoretical studies of soft topological spaces have also been carried out by several authors in \cite{shabir3},\cite{aygun},\cite{zorlutuna},\cite{min},\cite{cagman},\cite{hussain}. In \cite{bayramov}, soft point concepts different from those of the studies \cite{shabir3},\cite{aygun},\cite{zorlutuna},\cite{min},\cite{cagman},\cite{hussain} were given. In this study, the soft point concept of \cite{bayramov} is used. In the present study, the pointwise topology is defined on the space of soft continuous mappings and the properties of soft mapping spaces are investigated. Subsequently, the relations between some soft mapping spaces are given. \section{PRELIMINARIES} In this section we introduce the necessary definitions and theorems for soft sets. Molodtsov \cite{molod} defined the soft set in the following way. Let $X$ be an initial universe set and $E$ be a set of parameters. Let $P(X)$ denote the power set of $X$ and $A\subset E.$ \begin{definition} \noindent \textbf{\ }\cite{molod} A pair $(F,A)$ is called a soft set over $X,$ where $F$ is a mapping given by $F:A\rightarrow P(X)$. In other words, a soft set is a parameterized family of subsets of the set $X$.
For $e\in A,$ $F(e)$ may be considered as the set of $e$-elements of the soft set $(F,A),$ or as the set of $e$-approximate elements of the soft set. \end{definition} \begin{definition} \cite{maji2} For two soft sets $(F,A)$ and $(G,B)$ over $X$, $(F,A)$ is called a soft subset of $(G,B)$ if \noindent \textbf{\ } \begin{enumerate} \item $A\subset B$ and \item $\forall e\in A$, $F(e)$ and $G(e)$ are identical approximations. \noindent This relationship is denoted by $(F,A)\overset{\sim }{\subset }(G,B)$. Similarly, $(F,A)$ is called a soft superset of $(G,B)$ if $(G,B)$ is a soft subset of $(F,A)$. This relationship is denoted by $(F,A)\overset{\sim }{\supset }(G,B)$. Two soft sets $(F,A)$ and $(G,B)$ over $X$ are called soft equal if $(F,A)$ is a soft subset of $(G,B)$, and $(G,B)$ is a soft subset of $(F,A)$. \end{enumerate} \end{definition} \begin{definition} \noindent \cite{maji2} The intersection of two soft sets $(F,A)$ and $(G,B)$ over $X$ is the soft set $(H,C)$, where $C=A\cap B$ and $\forall e\in C$, $H(e)=F(e)\cap G(e)$. This is denoted by $(F,A)\overset{\sim }{\cap }(G,B)=(H,C)$. \end{definition} \begin{definition} \noindent\ \textbf{\ \cite{maji2} }The union of two soft sets $(F,A)$ and $(G,B)$ over $X$ is the soft set $(H,C)$, where $C=A\cup B$ and $\forall e\in C$, \begin{equation*} H(e)=\left\{ \begin{array}{ll} F(e), & \mathrm{if\ }e\in A-B, \\ G(e), & \mathrm{if\ }e\in B-A, \\ F(e)\cup G(e), & \mathrm{if\ }e\in A\cap B. \end{array} \right. \end{equation*} This relationship is denoted by $(F,A)\overset{\sim }{\cup }(G,B)=(H,C)$. \end{definition} \begin{definition} \textbf{\cite{maji2} }A soft set $(F,A)$ over $X$ is said to be a NULL soft set, denoted by $\Phi $, if for all $e\in A,$ $F(e)=\varnothing $ (the null set).
\end{definition} \begin{definition} \textbf{\cite{maji2} }A soft set $(F,A)$ over $X$ is said to be an absolute soft set, denoted by $\overset{\sim }{X}$, if for all $e\in A,$ $F(e)=X.$ \end{definition} \begin{definition} \cite{shabir2} The difference $(H,E)$ of two soft sets $(F,E)$ and $(G,E)$ over $X$, denoted by $(F,E)\backslash (G,E),$ is defined as $H(e)=F(e)\backslash G(e)$ for all $e\in E.$ \end{definition} \begin{definition} \cite{shabir2} Let $Y$ be a non-empty subset of $X$; then $\overset{\sim }{Y}$ denotes the soft set $(Y,E)$ over $X$ for which $Y(e)=Y$ for all $e\in E.$ In particular, $(X,E)$ will be denoted by $\overset{\sim }{X}.$ \end{definition} \begin{definition} \cite{shabir2} Let $(F,E)$ be a soft set over $X$ and $Y$ be a non-empty subset of $X.$ Then the sub soft set of $(F,E)$ over $Y$, denoted by $\left( ^{Y}F,E\right) ,$ is defined as follows: $^{Y}F(e)=Y\cap F(e),$ for all $e\in E.$ In other words, $\left( ^{Y}F,E\right) =\overset{\sim }{Y}\cap (F,E).$ \end{definition} \begin{definition} \cite{babitha} Let $(F,A)$ and $(G,B)$ be two soft sets over $X_{1}$ and $X_{2},$ respectively. The cartesian product $(F,A)\times (G,B)$ is defined by $(F\times G)_{(A\times B)}$, where \begin{equation*} (F\times G)_{(A\times B)}(e,k)=F(e)\times G(k),\text{ \ }\forall (e,k)\in A\times B. \end{equation*} According to this definition, the soft set $(F,A)\times (G,B)$ is a soft set over $X_{1}\times X_{2}$ and its parameter universe is $A\times B.$ \end{definition} \begin{definition} \cite{babitha} Let $(F_{1},E_{1})$ and $(F_{2},E_{2})$ be two soft sets over $X_{1}$ and $X_{2},$ respectively, and let $p_{i}:X_{1}\times X_{2}\rightarrow X_{i},$ $q_{i}:E_{1}\times E_{2}\rightarrow E_{i}$ be the projection mappings in the classical sense.
Then the soft mappings $\left( p_{i},q_{i}\right) ,$ $i\in \left\{ 1,2\right\} ,$ are called the soft projection mappings from $X_{1}\times X_{2}$ to $X_{i}$ and are defined by \begin{eqnarray*} \left( p_{i},q_{i}\right) \left( (F_{1},E_{1})\times (F_{2},E_{2})\right) &=&\left( p_{i},q_{i}\right) \left( \left( F_{1}\times F_{2}\right) ,\left( E_{1}\times E_{2}\right) \right) \\ &=&\left( p_{i}\left( F_{1}\times F_{2}\right) ,q_{i}\left( E_{1}\times E_{2}\right) \right) \\ &=&(F,E)_{i}. \end{eqnarray*} \end{definition} \begin{definition} \cite{shabir2} Let $\tau $ be a collection of soft sets over $X$; then $\tau $ is said to be a soft topology on $X$ if 1) $\Phi ,\overset{\sim }{X}$ belong to $\tau ;$ 2) the union of any number of soft sets in $\tau $ belongs to $\tau ;$ 3) the intersection of any two soft sets in $\tau $ belongs to $\tau .$ The triplet $\left( X,\tau ,E\right) $ is called a soft topological space over $X.$ \end{definition} \begin{definition} \cite{shabir2} Let $\left( X,\tau ,E\right) $ be a soft topological space over $X$; then the members of $\tau $ are said to be soft open sets in $X.$ \end{definition} \begin{definition} \cite{shabir2} Let $\left( X,\tau ,E\right) $ be a soft topological space over $X.$ A soft set $(F,E)$ over $X$ is said to be soft closed in $X$ if its relative complement $(F,E)^{\prime }$ belongs to $\tau $. \end{definition} \begin{proposition} \cite{shabir2} Let $\left( X,\tau ,E\right) $ be a soft topological space over $X.$ Then the collection $\tau _{e}=\{F(e):(F,E)\in \tau \}$, for each $e\in E,$ defines a topology on $X$.
\end{proposition} \begin{definition} \cite{shabir2} Let $\left( X,\tau ,E\right) $ be a soft topological space over $X$ and $(F,E)$ be a soft set over $X.$ Then the soft closure of $(F,E),$ denoted by $\overline{(F,E)},$ is the intersection of all soft closed supersets of $(F,E).$ Clearly $\overline{(F,E)}$ is the smallest soft closed set over $X$ which contains $(F,E).$ \end{definition} \begin{definition} \cite{shabir2} Let $x\in X$; then $(x,E)$ denotes the soft set over $X$ for which $x(e)=\{x\}$ for all $e\in E.$ \end{definition} \begin{definition} \cite{bayramov} Let $(F,E)$ be a soft set over $X$. The soft set $(F,E)$ is called a soft point, denoted by $\left( x_{e},E\right) ,$ if for the element $e\in E,$ $F(e)=\{x\}$ and $F(e^{\prime })=\varnothing $ for all $e^{\prime }\in E-\{e\}.$ \end{definition} \begin{definition} \cite{bayramov} Two soft points $\left( x_{e},E\right) $ and $\left( y_{e^{\prime }},E\right) $ over a common universe $X$ are said to be different if $x\neq y$ or $e\neq e^{\prime }.$ \end{definition} \begin{definition} \cite{bayramov} Let $\left( X,\tau ,E\right) $ be a soft topological space over $X.$ A soft set $(F,E)$ in $\left( X,\tau ,E\right) $ is called a soft neighborhood of the soft point $\left( x_{e},E\right) \in (F,E)$ if there exists a soft open set $(G,E)$ such that $\left( x_{e},E\right) \in (G,E)\subset (F,E)$. \end{definition} \begin{definition} \cite{gunduz2} Let $\left( X,\tau ,E\right) $ and $(Y,\tau ^{\prime },E)$ be two soft topological spaces and $f:\left( X,\tau ,E\right) \rightarrow (Y,\tau ^{\prime },E)$ be a mapping. If for each soft neighbourhood $(H,E)$ of $\left( f(x)_{e},E\right) $ there exists a soft neighbourhood $(F,E)$ of $\left( x_{e},E\right) $ such that $f\left( (F,E)\right) \subset (H,E),$ then $f$ is said to be soft continuous at $\left( x_{e},E\right) .$ If $f$ is soft continuous at every $\left( x_{e},E\right) $, then $f$ is called a soft continuous mapping.
\end{definition} \begin{definition} \cite{aygun} Let $\left( X,\tau ,E\right) $ be a soft topological space over $X.$ A subcollection $\beta $ of $\tau $ is said to be a base for $\tau $ if every member of $\tau $ can be expressed as a union of members of $\beta .$ \end{definition} \begin{definition} \cite{aygun} Let $\left( X,\tau ,E\right) $ be a soft topological space over $X.$ A subcollection $\delta $ of $\tau $ is said to be a subbase for $\tau $ if the family of all finite intersections of members of $\delta $ forms a base for $\tau .$ \end{definition} \begin{definition} \cite{aygun} Let $\left\{ (\varphi _{i},\psi _{i}):(X,\tau ,E)\rightarrow \left( Y_{i},\tau _{i},E_{i}\right) \right\} _{i\in \Delta }$ be a family of soft mappings, where $\left\{ \left( Y_{i},\tau _{i},E_{i}\right) \right\} _{i\in \Delta }$ is a family of soft topological spaces. Then the topology $\tau $ generated from the subbase $\delta =\left\{ (\varphi _{i},\psi _{i})^{-1}(F,E):(F,E)\in \tau _{i},\ i\in \Delta \right\} $ is called the soft topology (or initial soft topology) induced by the family of soft mappings $\left\{ (\varphi _{i},\psi _{i})\right\} _{i\in \Delta }.$ \end{definition} \begin{definition} \cite{aygun} Let $\left\{ \left( X_{i},\tau _{i},E_{i}\right) \right\} _{i\in \Delta }$ be a family of soft topological spaces. Then the initial soft topology on $X\left( =\tprod\nolimits_{i\in \Delta }X_{i}\right) $ generated by the family $\left\{ \left( p_{i},q_{i}\right) \right\} _{i\in \Delta }$ is called the product soft topology on $X.$ (Here, $\left( p_{i},q_{i}\right) $ is the soft projection mapping from $X$ to $X_{i},$ $i\in \Delta $.)
The product soft topology is denoted by $\tprod\nolimits_{i\in \Delta }\tau _{i}.$ \end{definition} \section{Soft Topology on Function Spaces} Let $\{(X_{s},\tau _{s},E)\}_{s\in S}$ be a family of soft topological spaces over the same parameter set $E.$ We define a family of soft sets over $\underset{s\in S}{\dprod }X_{s}$ as follows: if $F_{s}:E\rightarrow P(X_{s})$ is a soft set over $X_{s}$ for each $s\in S,$ then $\underset{s\in S}{\dprod }F_{s}:E\rightarrow P(\underset{s\in S}{\dprod }X_{s})$ is defined by $\left( \underset{s\in S}{\dprod }F_{s}\right) (e)=\underset{s\in S}{\dprod }F_{s}(e).$ Let us consider the topological product $\left( \underset{s\in S}{\dprod }X_{s},\underset{s\in S}{\dprod }\tau _{s},\underset{s\in S}{\dprod }E_{s}\right) $ of the family of soft topological spaces $\{(X_{s},\tau _{s},E)\}_{s\in S}.$ We take the restriction to the diagonal $\Delta \subset \underset{s\in S}{\dprod }E_{s}$ of each soft set $F:\underset{s\in S}{\dprod }E_{s}\rightarrow P(\underset{s\in S}{\dprod }X_{s}).$ Since there exists a bijection between the diagonal $\Delta $ and the parameter set $E,$ these restrictions are soft sets over $E$. Now, let us define the topology on $\left( \underset{s\in S}{\dprod }X_{s},E\right) .$ Let $\left( p_{s_{0}},1_{E}\right) :\left( \underset{s\in S}{\dprod }X_{s},\tau ,E\right) \rightarrow (X_{s_{0}},\tau _{s_{0}},E)$ be a projection mapping and consider the soft set $\left( p_{s_{0}},1_{E}\right) ^{-1}(F_{s_{0}},E)$ for each $(F_{s_{0}},E)\in \tau _{s_{0}}$. Then \begin{equation*} \left( p_{s_{0}},1_{E}\right) ^{-1}(F_{s_{0}},E)=\left( p_{s_{0}}^{-1}(F_{s_{0}}),E\right) =\left( F_{s_{0}}\times \underset{s\neq s_{0}}{\dprod }\widetilde{X}_{s},E\right) . \end{equation*} The soft topology generated from $\left\{ \left. \left( F_{s_{0}}\times \underset{s\neq s_{0}}{\dprod }\widetilde{X}_{s},E\right) \right\vert s_{0}\in S,\text{ }(F_{s_{0}},E)\in \tau _{s_{0}}\right\} $ as a soft subbase is denoted by $\tau =\underset{s\in S}{\dprod }\tau _{s}.$ \begin{definition} The soft topological space $\left( \underset{s\in S}{\dprod }X_{s},\tau ,E\right) $ is called the product of the family of soft topological spaces $\{(X_{s},\tau _{s},E)\}_{s\in S}.$ \end{definition} It is clear that each projection mapping $\left( p_{s},1_{E}\right) :\left( \underset{s\in S}{\dprod }X_{s},\tau ,E\right) \rightarrow (X_{s},\tau _{s},E)$ is soft continuous. Additionally, a soft base of the soft topology $\tau $ is formed by the soft sets \begin{equation*} \left\{ \begin{array}{c} \left( F_{s_{1}}\times \underset{s\neq s_{1}}{\dprod }\widetilde{X}_{s},E\right) \cap ...\cap \left( F_{s_{n}}\times \underset{s\neq s_{n}}{\dprod }\widetilde{X}_{s},E\right) \\ =\left( F_{s_{1}}\times ...\times F_{s_{n}}\times \underset{s\neq s_{1},...,s_{n}}{\dprod }\widetilde{X}_{s},E\right) \end{array} \right\} . \end{equation*} Let $\left( X,\tau ,E\right) $ be a soft topological space, $\{(Y_{s},\tau _{s},E)\}_{s\in S}$ be a family of soft topological spaces and $\left\{ \left( f_{s},1_{E}\right) :\left( X,\tau ,E\right) \rightarrow (Y_{s},\tau _{s},E)\right\} _{s\in S}$ be a family of soft mappings.
For each soft point $x_{e}\in \left( X,\tau ,E\right) ,$ we define the soft mapping $f=\underset{s\in S}{\bigtriangleup }f_{s}:\left( X,\tau ,E\right) \rightarrow \left( \underset{s\in S}{\dprod }Y_{s},\tau ,E\right) $ by $f(x_{e})=\left\{ f_{s}(x_{e})\right\} _{s\in S}=\left\{ (f_{s}(x))_{e}\right\} _{s\in S}.$ If $f:\left( X,\tau ,E\right) \rightarrow \left( \underset{s\in S}{\dprod }Y_{s},\tau ,E\right) $ is any soft mapping, then $f=\underset{s\in S}{\bigtriangleup }f_{s}$ is satisfied for the family of soft mappings $\left\{ f_{s}=p_{s}\circ f:\left( X,\tau ,E\right) \rightarrow (Y_{s},\tau _{s},E)\right\} _{s\in S}.$ \begin{theorem} $f:\left( X,\tau ,E\right) \rightarrow \left( \underset{s\in S}{\dprod }Y_{s},\tau ,E\right) $ is soft continuous if and only if $f_{s}=p_{s}\circ f:\left( X,\tau ,E\right) \rightarrow (Y_{s},\tau _{s},E)$ is soft continuous for each $s\in S$. \end{theorem} \begin{proof} $\Longrightarrow $ The proof is obvious. $\Longleftarrow $ Let $\left( F_{s_{1}}\times ...\times F_{s_{n}}\times \underset{s\neq s_{1},...,s_{n}}{\dprod }\widetilde{Y}_{s},E\right) $ be any basic soft open set of the product topology. Then \begin{eqnarray*} f^{-1}\left( F_{s_{1}}\times ...\times F_{s_{n}}\times \underset{s\neq s_{1},...,s_{n}}{\dprod }\widetilde{Y}_{s},E\right) &=&f^{-1}\left( p_{s_{1}}^{-1}(F_{s_{1}})\cap ...\cap p_{s_{n}}^{-1}(F_{s_{n}}),E\right) \\ &=&\left( f^{-1}p_{s_{1}}^{-1}(F_{s_{1}})\cap ...\cap f^{-1}p_{s_{n}}^{-1}(F_{s_{n}}),E\right) . \end{eqnarray*} Since the soft mappings $p_{s_{1}}\circ f,\ldots ,p_{s_{n}}\circ f$ are soft continuous, the soft set \begin{equation*} \left( f^{-1}p_{s_{1}}^{-1}(F_{s_{1}})\cap ...\cap f^{-1}p_{s_{n}}^{-1}(F_{s_{n}}),E\right) \end{equation*} is soft open. Thus, $f:\left( X,\tau ,E\right) \rightarrow \left( \underset{s\in S}{\dprod }Y_{s},\tau ,E\right) $ is soft continuous.
\end{proof} If $\left\{ f_{s}:\left( X_{s},\tau _{s},E\right) \rightarrow (Y_{s},\tau _{s}^{^{\prime }},E)\right\} _{s\in S}$ is a family of soft continuous mappings, then the soft mapping $\underset{s\in S}{\dprod }f_{s}:\left( \underset{s\in S}{\dprod }X_{s},\tau ,E\right) \rightarrow \left( \underset{ s\in S}{\dprod }Y_{s},\tau ^{^{\prime }},E\right) $ is soft continuous. Now, let the family of soft topological spaces $\{(X_{s},\tau _{s},E)\}_{s\in S}$ be pairwise disjoint, i.e., $X_{s_{1}}\cap X_{s_{2}}=\varnothing $ for each $s_{1}\neq s_{2}$. For a soft set $F:E\rightarrow \underset{s\in S }{\cup }X_{s}$ over $\underset{s\in S}{\cup }X_{s}$, define the soft set $\left. F\right\vert _{X_{s}}:E\rightarrow X_{s}$ by \begin{equation*} \left. F\right\vert _{X_{s}}(e)=F(e)\cap X_{s},\text{ }\forall e\in E, \end{equation*} and define the soft topology $\tau $ by \begin{equation*} (F,E)\in \tau \Longleftrightarrow \left( \left. F\right\vert _{X_{s}},E\right) \in \tau _{s}\text{ for each }s\in S. \end{equation*} It is clear that $\tau $ is a soft topology. \begin{definition} The soft topological space $\left( \underset{s\in S}{\cup }X_{s},\tau ,E\right) $ is called the soft topological sum of the family of soft topological spaces $\{(X_{s},\tau _{s},E)\}_{s\in S}$ and is denoted by $\underset{s\in S}{ \oplus }(X_{s},\tau _{s},E).$ \end{definition} Let $\left( i_{s},1_{E}\right) :\left( X_{s},\tau _{s},E\right) \rightarrow \underset {s\in S}{\oplus }(X_{s},\tau _{s},E)$ be the inclusion mapping for each $s\in S.$ Since \begin{equation*} \left( i_{s},1_{E}\right) ^{-1}(F,E)=\left( \left. F\right\vert _{X_{s}},E\right) \in \tau _{s},\text{ for }(F,E)\in \tau , \end{equation*} $\left( i_{s},1_{E}\right) $ is soft continuous. Let $\{(X_{s},\tau _{s},E)\}_{s\in S}$ be a family of soft topological spaces, $(Y,\tau ^{^{\prime }},E)$ be a soft topological space and $\left\{ f_{s}:(X_{s},\tau _{s},E)\rightarrow (Y,\tau ^{^{\prime }},E)\right\} _{s\in S}$ be a family of soft mappings.
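The restriction $\left. F\right\vert _{X_{s}}(e)=F(e)\cap X_{s}$ used in the definition of the soft topological sum can be modeled concretely with finite sets. The following Python sketch is our own illustration (the helper names are not from the text): a soft set $(F,E)$ is represented as a dictionary sending each parameter $e\in E$ to the subset $F(e)$.

```python
# A soft set (F, E) over a universe X is modeled as a dict e -> F(e), a subset of X.
# restrict implements the restriction F|_{X_s}(e) = F(e) ∩ X_s from the text.

def restrict(F, X_s):
    """Restriction of the soft set F to the piece X_s."""
    return {e: F[e] & X_s for e in F}

# Two disjoint pieces X_1, X_2 and a soft set over their union.
X1, X2 = frozenset({1, 2}), frozenset({3, 4})
F = {"a": frozenset({1, 3}), "b": frozenset({2, 3, 4})}

assert restrict(F, X1) == {"a": frozenset({1}), "b": frozenset({2})}
assert restrict(F, X2) == {"a": frozenset({3}), "b": frozenset({3, 4})}
```

In the sum topology, $(F,E)$ is soft open exactly when every restriction $\left( \left. F\right\vert _{X_{s}},E\right) $ is soft open in $(X_{s},\tau _{s},E)$.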
We define the function $f=\underset{s\in S} {\nabla }f_{s}:\underset{s\in S}{\oplus }(X_{s},\tau _{s},E)\rightarrow (Y,\tau ^{^{\prime }},E)$ by $f(x_{e})=f_{s_{0}}(x_{e})=\left( f_{s_{0}}(x)\right) _{e},$ where each soft point $x_{e}\in \underset{s\in S}{\oplus }(X_{s},\tau _{s},E)$ belongs to a unique soft topological space $(X_{s_{0}},\tau _{s_{0}},E).$ If $f:\underset{s\in S}{\oplus }(X_{s},\tau _{s},E)\rightarrow (Y,\tau ^{^{\prime }},E)$ is any soft mapping, then $ \underset{s\in S}{\nabla }f_{s}=f$ holds for the family of soft mappings $\left\{ f_{s}=f\circ i_{s}:(X_{s},\tau _{s},E)\rightarrow (Y,\tau ^{^{\prime }},E)\right\} _{s\in S}$. \begin{theorem} The soft mapping $f:\underset{s\in S}{\oplus }(X_{s},\tau _{s},E)\rightarrow (Y,\tau ^{^{\prime }},E)$ is soft continuous if and only if $f_{s}=f\circ i_{s}:(X_{s},\tau _{s},E)\rightarrow (Y,\tau ^{^{\prime }},E)$ is soft continuous for each $s\in S.$ \end{theorem} \begin{proof} $\Longrightarrow $ The proof is obvious. $\Longleftarrow $ Let $(F,E)\in \tau ^{^{\prime }}$ be a soft open set. The soft set $f^{-1}(F,E)$ belongs to the soft topology $\underset{s\in S}{ \oplus }\tau _{s}$ if and only if the soft set $\left( \left. f^{-1}(F)\right\vert _{X_{s}},E\right) $ belongs to $\tau _{s}.$ Since \begin{equation*} \left( \left. f^{-1}(F)\right\vert _{X_{s}},E\right) =i_{s}^{-1}\left( f^{-1}(F),E\right) =\left( i_{s}^{-1}\circ f^{-1}\right) (F,E)=f_{s}^{-1}(F,E)\in \tau _{s}, \end{equation*} $f$ is soft continuous. \end{proof} Let $\left\{ f_{s}:(X_{s},\tau _{s},E)\rightarrow (Y_{s},\tau _{s}^{^{\prime }},E)\right\} _{s\in S}$ be a family of soft continuous mappings.
We define the mapping $f=\underset{s\in S}{\oplus }f_{s}:\underset{s\in S}{\oplus } (X_{s},\tau _{s},E)\rightarrow \underset{s\in S}{\oplus }(Y_{s},\tau _{s}^{^{\prime }},E)$ by $f(x_{e})=f_{s_{0}}(x_{e}),$ where each soft point $ x_{e}\in \underset{s\in S}{\oplus }(X_{s},\tau _{s},E)$ belongs to a unique $ (X_{s_{0}},\tau _{s_{0}},E).$ It is clear that if each $f_{s}$ is soft continuous, then $f$ is also soft continuous. \begin{theorem} Let $\{(X_{s},\tau _{s},E)\}_{s\in S}$ be a family of soft topological spaces. Then \begin{equation*} \left( \underset{s\in S}{\dprod }X_{s},\tau _{e}\right) =\underset{s\in S}{ \dprod }\left( X_{s},\left( \tau _{s}\right) _{e}\right) \text{ and }\left( \underset{s\in S }{\oplus }X_{s},\tau _{e}\right) =\underset{s\in S}{\oplus }\left( X_{s},\left( \tau _{s}\right) _{e}\right) \end{equation*} hold for each $e\in E.$ \end{theorem} \begin{proof} We show that $\tau _{e}=\underset{s\in S}{\dprod }\left( \tau _{s}\right) _{e}$. Let us take any set $U$ from $\tau _{e}.$ By the definition of the topology $\tau _{e}$, there exists a soft open set \begin{equation*} \left( (F_{s_{1}},E)\times ...\times (F_{s_{n}},E)\times \underset{s\neq s_{1}...s_{n}}{\dprod }\widetilde{X}_{s}\right) \end{equation*} such that $U=\left( F_{s_{1}}(e)\times ...\times F_{s_{n}}(e)\times \underset{s\neq s_{1}...s_{n}}{\dprod }\widetilde{X}_{s}\right) $, and hence $U$ belongs to the topology $\underset{s\in S}{\dprod }\left( \tau _{s}\right) _{e}.$ Conversely, let $\left( U_{s_{1}}\times ...\times U_{s_{n}}\times \underset{ s\neq s_{1}...s_{n}}{\dprod }\widetilde{X}_{s}\right) \in \underset{s\in S}{ \dprod }\left( \tau _{s}\right) _{e}$. Then, by the definition of the topologies $\left( \tau _{s_{i}}\right) _{e}$, there exist soft open sets $ (F_{s_{1}},E),...,(F_{s_{n}},E)$ such that $ F_{s_{1}}(e)=U_{s_{1}},...,F_{s_{n}}(e)=U_{s_{n}}$.
Then \begin{equation*} U_{s_{1}}\times ...\times U_{s_{n}}\times \underset{s\neq s_{1}...s_{n}}{ \dprod }\widetilde{X}_{s}=F_{s_{1}}(e)\times ...\times F_{s_{n}}(e)\times \underset{s\neq s_{1}...s_{n}}{\dprod }\widetilde{X}_{s}\in \tau _{e}. \end{equation*} The statement for the topological sum is proved in the same way. \end{proof} Let $\left( X,\tau ,E\right) $ and $\left( Y,\tau ^{^{\prime }},E\right) $ be two soft topological spaces. $Y^{X}$ denotes the set of all soft continuous mappings from the soft topological space $\left( X,\tau ,E\right) $ to the soft topological space $\left( Y,\tau ^{^{\prime }},E\right) $, i.e., \begin{equation*} Y^{X}=\left\{ (f,1_{E}):\left( X,\tau ,E\right) \rightarrow \left( Y,\tau ^{^{\prime }},E\right) \mid (f,1_{E})\text{ is a soft continuous map}\right\} . \end{equation*} If $(F,E)$ and $(G,E)$ are two soft sets over $X$ and $Y$, respectively, then we define the soft set $\left( G^{F},E\right) $ over $Y^{X}$ as follows: \begin{equation*} G^{F}(e)=\left\{ (f,1_{E}):\left( X,\tau ,E\right) \rightarrow \left( Y,\tau ^{^{\prime }},E\right) \mid f\left( F(e)\right) \subset G(e)\right\} \text{ for each }e\in E. \end{equation*} Now, let $x_{\alpha }\in \left( X,\tau ,E\right) $ be any soft point. We define the soft mapping $e_{x_{\alpha }}:\left( Y^{X},E\right) \rightarrow \left( Y,\tau ^{^{\prime }},E\right) $ by $e_{x_{\alpha }}(f)=f(x_{\alpha })=\left( f(x)\right) _{\alpha }$. This mapping is called the evaluation map. For a soft set $(G,E)$ over $Y,$ we have $e_{x_{\alpha }}^{-1}(G,E)=(G^{x_{\alpha }},E)$. The soft topology generated by the soft sets $\left\{ (G^{x_{\alpha }},E)\mid (G,E)\in \tau ^{^{\prime }}\right\} $ as a subbase is called the pointwise soft topology and is denoted by $\tau _{p}.$ \begin{definition} $\left( Y^{X},\tau _{p},E\right) $ is called a pointwise soft function space (briefly $PISFS$).
\end{definition} \begin{remark} The evaluation mapping $e_{x_{\alpha }}:\left( Y^{X},\tau _{p},E\right) \rightarrow \left( Y,\tau ^{^{\prime }},E\right) $ is a soft continuous mapping for each soft point $x_{\alpha }\in \left( X,\tau ,E\right) $. \end{remark} \begin{proposition} A soft map $g:(Z,\eta ,E)\rightarrow \left( Y^{X},\tau _{p},E\right) $, where $(Z,\eta ,E)$ is a soft topological space, is a soft continuous mapping if and only if the soft mapping $e_{x_{\alpha }}\circ g:(Z,\eta ,E)\rightarrow \left( Y,\tau ^{^{\prime }},E\right) $ is a soft continuous mapping for each $x_{\alpha }\in \left( X,\tau ,E\right) .$ \end{proposition} \begin{theorem} If the soft topological space $\left( Y,\tau ^{^{\prime }},E\right) $ is a soft $T_{i}-$space for $i=0,1,2$, then the soft space $\left( Y^{X},\tau _{p},E\right) $ is also a soft $T_{i}-$space. \end{theorem} \begin{proof} The soft points of the soft topological space $\left( Y^{X},\tau _{p},E\right) $ are denoted by $\left( f_{\alpha },E\right) $, i.e., $f_{\alpha }(\beta )=\varnothing $ if $\beta \neq \alpha $, and $f_{\alpha }(\beta )=f$ if $\beta =\alpha .$ Now, let $f_{\alpha }\neq g_{\beta }$ be two soft points. Then $f\neq g$ or $\alpha \neq \beta $. If $f=g,$ then $\alpha \neq \beta $, so $\left( f(x)\right) _{\alpha }\neq \left( g(x)\right) _{\beta }$ in $\left( Y,\tau ^{^{\prime }},E\right) $ for each $x\in X.$ If $f\neq g,$ then $f(x^{0})\neq g(x^{0})$ for some $x^{0}\in X.$ Therefore, $\left( f(x^{0})\right) _{\alpha }\neq \left( g(x^{0})\right) _{\beta }$ is satisfied. In both cases, $\left( f(x^{0})\right) _{\alpha }\neq \left( g(x^{0})\right) _{\beta }$ in $\left( Y,\tau ^{^{\prime }},E\right) $ for at least one $x^{0}\in X.$ Since $\left( Y,\tau ^{^{\prime }},E\right) $ is a soft $T_{i}-$space, there exist soft open sets $ (F_{1},E),(F_{2},E)\in \tau ^{^{\prime }}$ for which the condition of the soft $ T_{i}-$space is satisfied.
Then the soft open sets $\left( F_{1}^{x_{\alpha }^{0}},E\right) =e_{x_{\alpha }^{0}}^{-1}(F_{1},E)$ and $\left( F_{2}^{x_{\beta }^{0}},E\right) =e_{x_{\beta }^{0}}^{-1}(F_{2},E)$ are neighbourhoods of the soft points $f_{\alpha }$ and $g_{\beta },$ respectively, for which the conditions of the soft $T_{i}-$space are satisfied. \end{proof} Now, we establish relationships between some function spaces. Let $ \{(X_{s},\tau _{s},E)\}_{s\in S}$ be a family of pairwise disjoint soft topological spaces, $\left( Y,\tau ^{^{\prime }},E\right) $ be a soft topological space, and let $\underset{s\in S}{\dprod }\left( Y^{X_{s}},\tau _{s_{p}},E\right) $ and $\underset{s\in S}{\oplus }(X_{s},\tau _{s},E)$ be the product and the sum of soft topological spaces, respectively. Define \begin{equation*} \nabla :\underset{s\in S}{\dprod }\left( Y^{X_{s}},\tau _{s_{p}},E\right) \rightarrow \left( Y^{\underset{s\in S}{\oplus }X_{s}},\left( \tau ^{^{\prime ^{\underset{s\in S}{\oplus }}}}\right) _{p},E\right) \end{equation*} by $\underset{s\in S}{\nabla }\left( \left\{ f_{s}\right\} \right) \left( x_{\alpha }\right) =f_{s_{0}}\left( x_{\alpha }\right) =\left( f_{s_{0}}\left( x\right) \right) _{\alpha }$ for all $\left\{ \left( f_{s},1_{E}\right) \right\} \in \underset{ s\in S}{\dprod }Y^{X_{s}}$ and all $x_{\alpha }\in \underset{s\in S}{\oplus }(X_{s},\tau _{s},E),$ where $ x_{\alpha }$ belongs to the unique space $(X_{s_{0}},\tau _{s_{0}},E).$ We define the mapping \begin{equation*} \nabla ^{-1}:\left( Y^{\underset{s\in S}{\oplus }X_{s}},\left( \tau ^{^{\prime ^{\underset{s\in S}{\oplus }}}}\right) _{p},E\right) \rightarrow \underset{s\in S}{\dprod }\left( Y^{X_{s}},\tau _{s_{p}},E\right) \end{equation*} by $\nabla ^{-1}(f)=\left\{ f\circ i_{s}=\left. f\right\vert _{X_{s}}:X_{s}\rightarrow Y\right\} \in \underset{s\in S}{\dprod }\left( Y^{X_{s}},\tau _{s_{p}},E\right) $ for each $f:\underset{s\in S}{\oplus } X_{s}\rightarrow Y$.
It is clear that the mapping $\nabla ^{-1}$ is the inverse of the mapping $\nabla .$ \begin{theorem} The mapping \begin{equation*} \nabla :\underset{s\in S}{\dprod }\left( Y^{X_{s}},\tau _{s_{p}},E\right) \rightarrow \left( Y^{\underset{s\in S}{\oplus }X_{s}},\left( \tau ^{^{\prime ^{\underset{s\in S}{\oplus }}}}\right) _{p},E\right) \end{equation*} is a soft homeomorphism in the pointwise soft topology. \end{theorem} \begin{proof} To prove the theorem, it is sufficient to show that the mappings $\nabla $ and $\nabla ^{-1}$ are soft continuous. For this, we need to show that the soft set $\nabla ^{-1}\left( e_{x_{\alpha }}^{-1}(F,E)\right) $ is a soft open set, where each $e_{x_{\alpha }}^{-1}(F,E)$ belongs to a soft subbase of the soft space $\left( Y^{\underset{s\in S}{\oplus }X_{s}},\left( \tau ^{^{\prime ^{\underset{s\in S}{\oplus }}}}\right) _{p},E\right) .$ We have \begin{equation*} e_{x_{\alpha }}^{-1}(F,E)=\left\{ f:\underset{s\in S}{\oplus } X_{s}\rightarrow Y\mid f\left( x_{\alpha }\right) \in (F,E)\right\} =\left\{ f:\underset{s\in S}{\oplus } X_{s}\rightarrow Y\mid f_{s_{0}}\left( x_{\alpha }\right) \in (F,E)\right\} . \end{equation*} Since \begin{eqnarray*} \nabla ^{-1}\left( e_{x_{\alpha }}^{-1}(F,E)\right) &=&\nabla ^{-1}\left( \left\{ f:\underset{s\in S}{\oplus } X_{s}\rightarrow Y\mid f_{s_{0}}\left( x_{\alpha }\right) \in (F,E)\right\} \right) \\ &=&\left\{ f_{s_{0}}:X_{s_{0}}\rightarrow Y\mid f_{s_{0}}\left( x_{\alpha }\right) \in (F,E)\right\} \times \underset{s\neq s_{0}}{\dprod }\left( Y^{X_{s}},\tau _{s_{p}},E\right) , \end{eqnarray*} the soft set $\nabla ^{-1}\left( e_{x_{\alpha }}^{-1}(F,E)\right) $ is soft open in the product space $\underset{s\in S}{\dprod }\left( Y^{X_{s}},\tau _{s_{p}},E\right) .$ Now, we prove that the mapping $\nabla ^{-1}:\left( Y^{\underset{s\in S}{ \oplus }X_{s}},\left( \tau ^{^{\prime ^{\underset{s\in S}{\oplus }}}}\right) _{p},E\right) \rightarrow \underset{s\in S}{\dprod }\left( Y^{X_{s}},\tau _{s_{p}},E\right) $ is soft continuous.
Indeed, each soft set $ \left( e_{x_{\alpha }}^{-1}\right) _{s_{0}}(F,E)\times \underset{s\neq s_{0}}{\dprod }\left( Y^{X_{s}},\tau _{s_{p}},E\right) $ belonging to the subbase of the product space $\underset{s\in S}{\dprod }\left( Y^{X_{s}},\tau _{s_{p}},E\right) $ satisfies \begin{equation*} \left( e_{x_{\alpha }}^{-1}\right) _{s_{0}}(F,E)\times \underset{s\neq s_{0}} {\dprod }\left( Y^{X_{s}},\tau _{s_{p}},E\right) =\left\{ \left\{ f_{s}\right\} \in \underset{s\in S}{\dprod }Y^{X_{s}}\mid f_{s_{0}}\left( x_{\alpha }\right) \in (F,E)\right\} . \end{equation*} Since the set \begin{eqnarray*} &&\left( \nabla ^{-1}\right) ^{-1}\left( \left( e_{x_{\alpha }}^{-1}\right) _{s_{0}}(F,E)\times \underset{s\neq s_{0}}{\dprod }\left( Y^{X_{s}},\tau _{s_{p}},E\right) \right) \\ &=&\nabla \left( \left( e_{x_{\alpha }}^{-1}\right) _{s_{0}}(F,E)\times \underset{s\neq s_{0}}{\dprod }\left( Y^{X_{s}},\tau _{s_{p}},E\right) \right) \\ &=&\left\{ \underset{s\in S}{\nabla }f_{s}:f_{s_{0}}\left( x_{\alpha }\right) \in (F,E)\right\} \end{eqnarray*} belongs to the subbase of the soft topological space $\left( Y^{\underset{s\in S} {\oplus }X_{s}},\left( \tau ^{^{\prime ^{\underset{s\in S}{\oplus } }}}\right) _{p},E\right) ,$ the mapping $\nabla ^{-1}$ is soft continuous. Thus, the mapping \begin{equation*} \nabla :\underset{s\in S}{\dprod }\left( Y^{X_{s}},\tau _{s_{p}},E\right) \rightarrow \left( Y^{\underset{s\in S}{\oplus }X_{s}},\left( \tau ^{^{\prime ^{\underset{s\in S}{\oplus }}}}\right) _{p},E\right) \end{equation*} is a soft homeomorphism. \end{proof} Now, let $\{(Y_{s},\tau _{s}^{^{\prime }},E)\}_{s\in S}$ be a family of soft topological spaces and $\left( X,\tau ,E\right) $ be a soft topological space.
We define the mapping \begin{equation*} \Delta :\underset{s\in S}{\dprod }\left( Y_{s}^{X},\tau _{s_{p}}^{^{\prime }},E\right) \rightarrow \left( \left( \underset{s\in S}{\dprod }Y_{s}\right) ^{X},\left( \underset{s\in S}{\dprod }\tau _{s}^{^{\prime }}\right) _{p},E\right) \end{equation*} by the rule $\Delta \left\{ f_{s}\right\} =\underset{s\in S}{\Delta }f_{s}$ for all $\left\{ f_{s}:X\rightarrow Y_{s}\right\} \in \underset{ s\in S}{\dprod }\left( Y_{s}^{X},\tau _{s_{p}}^{^{\prime }},E\right) .$ Let the mapping $\Delta ^{-1}:\left( \left( \underset{s\in S}{\dprod } Y_{s}\right) ^{X},\left( \underset{s\in S}{\dprod }\tau _{s}^{^{\prime }}\right) _{p},E\right) \rightarrow \underset{s\in S}{\dprod }\left( Y_{s}^{X},\tau _{s_{p}}^{^{\prime }},E\right) $ be given by \begin{equation*} \Delta ^{-1}(f)=\left\{ p_{s}\circ f=f_{s}:X\rightarrow Y_{s}\right\} \end{equation*} for each $f\in \left( \left( \underset{s\in S}{\dprod }Y_{s}\right) ^{X},\left( \underset{s\in S}{\dprod }\tau _{s}^{^{\prime }}\right) _{p},E\right) .$ It is clear that the mapping $\Delta ^{-1}$ is the inverse of the mapping $\Delta .$ \begin{theorem} The mapping \begin{equation*} \Delta :\underset{s\in S}{\dprod }\left( Y_{s}^{X},\tau _{s_{p}}^{^{\prime }},E\right) \rightarrow \left( \left( \underset{s\in S}{\dprod }Y_{s}\right) ^{X},\left( \underset{s\in S}{\dprod }\tau _{s}^{^{\prime }}\right) _{p},E\right) \end{equation*} is a soft homeomorphism in the pointwise soft topology. \end{theorem} \begin{proof} Since $\Delta $ is a bijective mapping, to prove the theorem it is sufficient to show that the mappings $\Delta $ and $\Delta ^{-1}$ are soft open. First, we show that the mapping $\Delta $ is soft open.
Let us take an arbitrary soft set \begin{equation*} \left( e_{x_{\alpha _{1}}^{1}}^{-1}\right) _{s_{1}}(F_{s_{1}},E)\times ...\times \left( e_{x_{\alpha _{k}}^{k}}^{-1}\right) _{s_{k}}(F_{s_{k}},E)\times \left( \underset{s\neq s_{1}...s_{k}}{\dprod }Y_{s}\right) ^{X} \end{equation*} from the base of the product space $\underset{s\in S}{\dprod }\left( Y_{s}^{X},\tau _{s_{p}}^{^{\prime }},E\right) .$ Since the soft set \begin{eqnarray*} &&\Delta \left( \left( e_{x_{\alpha _{1}}^{1}}^{-1}\right) _{s_{1}}(F_{s_{1}},E)\times ...\times \left( e_{x_{\alpha _{k}}^{k}}^{-1}\right) _{s_{k}}(F_{s_{k}},E)\times \left( \underset{s\neq s_{1}...s_{k}}{\dprod } Y_{s}\right) ^{X}\right) \\ &=&\{\left\{ f_{s}\right\} \mid f_{s_{1}}(x_{\alpha _{1}}^{1})\in (F_{s_{1}},E),...,f_{s_{k}}(x_{\alpha _{k}}^{k})\in (F_{s_{k}},E)\} \\ &=&(F_{s_{1}}^{x_{\alpha _{1}}^{1}},E)\times ...\times (F_{s_{k}}^{x_{\alpha _{k}}^{k}},E)\times \left( \underset{s\neq s_{1}...s_{k}}{\dprod } Y_{s}\right) ^{X} \end{eqnarray*} is soft open, $\Delta $ is a soft open mapping. Similarly, it can be proven that $\Delta ^{-1}$ is a soft open mapping.
Indeed, for each soft open set $e_{x_{\alpha }}^{-1}\left( (F_{s_{1}},E)\times ...\times (F_{s_{k}},E)\times \underset{s\neq s_{1}...s_{k}}{\dprod }Y_{s}\right) \in \left( \left( \underset{s\in S}{ \dprod }Y_{s}\right) ^{X},\left( \underset{s\in S}{\dprod }\tau _{s}^{^{\prime }}\right) _{p},E\right) ,$ we have \begin{eqnarray*} &&\Delta ^{-1}\left( e_{x_{\alpha }}^{-1}\left( (F_{s_{1}},E)\times ...\times (F_{s_{k}},E)\times \underset{s\neq s_{1}...s_{k}}{\dprod } Y_{s}\right) \right) \\ &=&\Delta ^{-1}\left( \left\{ f:X\rightarrow \underset{s\in S}{\dprod } Y_{s}\mid f(x_{\alpha })\in (F_{s_{1}},E)\times ...\times (F_{s_{k}},E)\times \underset{s\neq s_{1}...s_{k}}{\dprod }Y_{s}\right\} \right) \\ &=&\left\{ \{p_{s}\circ f\}\mid f(x_{\alpha })\in (F_{s_{1}},E)\times ...\times (F_{s_{k}},E)\times \underset{s\neq s_{1}...s_{k}}{\dprod }Y_{s}\right\} \\ &=&\left\{ \{p_{s}\circ f\}\mid p_{s_{1}}\circ f(x_{\alpha })\in (F_{s_{1}},E),...,p_{s_{k}}\circ f(x_{\alpha })\in (F_{s_{k}},E)\right\} . \end{eqnarray*} Hence this set is soft open, and the theorem is proved. \end{proof} Now, let $\left( X,\tau ,E\right) $, $\left( Y,\tau ^{^{\prime }},E\right) $ and $\left( Z,\tau ^{^{\prime \prime }},E\right) $ be soft topological spaces and $f:\left( Z,\tau ^{^{\prime \prime }},E\right) \times \left( X,\tau ,E\right) \rightarrow \left( Y,\tau ^{^{\prime }},E\right) $ be a soft mapping. Then the induced map $\overset{\wedge }{f}:X\rightarrow Y^{Z}$ is defined by $\overset{\wedge }{f}(x_{\alpha })\left( z_{\beta }\right) =f\left( z_{\beta },x_{\alpha }\right) $ for soft points $x_{\alpha }\in \left( X,\tau ,E\right) $ and $z_{\beta }\in \left( Z,\tau ^{^{\prime \prime }},E\right) $.
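At the level of the underlying maps, the correspondence $f\mapsto \overset{\wedge }{f}$ is ordinary currying. The following Python sketch is a purely set-theoretic illustration of ours (it ignores the soft topologies; the helper names are not from the text):

```python
# The exponential law E : Y^(Z×X) -> (Y^Z)^X is currying, and E^{-1} is uncurrying.

def curry(f):
    """E(f): x -> (z -> f(z, x))."""
    return lambda x: lambda z: f(z, x)

def uncurry(g):
    """E^{-1}(g): (z, x) -> g(x)(z)."""
    return lambda z, x: g(x)(z)

f = lambda z, x: (z, x)          # a map Z × X -> Y
g = curry(f)
assert g(1)(2) == f(2, 1)        # E(f)(x)(z) = f(z, x)
assert uncurry(g)(2, 1) == f(2, 1)
```

The content of the theorems in this section is that, under suitable hypotheses, these set-level bijections are soft homeomorphisms for the pointwise soft topologies.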
We define the exponential law \begin{equation*} E:Y^{Z\times X}\rightarrow \left( Y^{Z}\right) ^{X} \end{equation*} by using induced maps: $E(f)=\overset{\wedge }{f}$, i.e., $E(f)(x_{\alpha })\left( z_{\beta }\right) =f\left( z_{\beta },x_{\alpha }\right) =\overset{ \wedge }{f}(x_{\alpha })\left( z_{\beta }\right) .$ We define the mapping \begin{equation*} E^{-1}:\left( Y^{Z}\right) ^{X}\rightarrow Y^{Z\times X}, \end{equation*} which is the inverse of the mapping $E$, as follows: \begin{equation*} E^{-1}(\overset{\wedge }{f})=f,\text{ \ }E^{-1}(\overset{\wedge }{f})\left( z_{\beta },x_{\alpha }\right) =\overset{\wedge }{f} (x_{\alpha })\left( z_{\beta }\right) =f\left( z_{\beta },x_{\alpha }\right) . \end{equation*} Generally, in the pointwise soft topology, the mapping $E^{-1}\left( g\right) $ need not be soft continuous for a soft continuous map $g$. Let us give a solution of this problem under some conditions. \begin{theorem} Let $\left( X,\tau ,E\right) $, $\left( Y,\tau ^{^{\prime }},E\right) $ and $ \left( Z,\tau ^{^{\prime \prime }},E\right) $ be soft topological spaces such that the evaluation mapping $e:Y^{Z}\times Z\rightarrow Y,$ $e(f,z)=f(z),$ is soft continuous, where the function space $Y^{Z}$ carries the pointwise soft topology. Then, for each soft continuous mapping $\overset{\wedge }{g}:X\rightarrow Y^{Z},$ \begin{equation*} E^{-1}\left( \overset{\wedge }{g}\right) :Z\times X\rightarrow Y \end{equation*} is also soft continuous. \end{theorem} \begin{proof} By using the mapping \begin{equation*} 1_{Z}\times \overset{\wedge }{g}:Z\times X\rightarrow Z\times Y^{Z}, \end{equation*} we form the composition \begin{equation*} Z\times X\overset{1_{Z}\times \overset{\wedge }{g}}{\longrightarrow }Z\times Y^{Z}\overset{t}{\longrightarrow }Y^{Z}\times Z\overset{e}{\longrightarrow } Y. \end{equation*} Hence $e\circ t\circ \left( 1_{Z}\times \overset{\wedge }{g}\right) \in Y^{Z\times X}$, where $t$ denotes the switching mapping.
Let us apply the exponential law $E$ to $e\circ t\circ \left( 1_{Z}\times \overset{\wedge }{g} \right) $. For each soft point $x_{\alpha }\in \left( X,\tau ,E\right) $ and $z_{\beta }\in \left( Z,\tau ^{^{\prime \prime }},E\right) $, \begin{eqnarray*} \left\{ \left[ E\left( e\circ t\circ \left( 1_{Z}\times \overset{\wedge }{g} \right) \right) \right] \left( x_{\alpha }\right) \right\} \left( z_{\beta }\right) &=&\left( e\circ t\circ \left( 1_{Z}\times \overset{\wedge }{g} \right) \right) \left( z_{\beta },x_{\alpha }\right) \\ &=&e\circ t\left( z_{\beta },\overset{\wedge }{g}(x_{\alpha })\right) \\ &=&e\left( \overset{\wedge }{g}(x_{\alpha }),z_{\beta }\right) \\ &=&\left( \overset{\wedge }{g}(x_{\alpha })\right) \left( z_{\beta }\right) . \end{eqnarray*} Since $E\left( e\circ t\circ \left( 1_{Z}\times \overset{\wedge }{g}\right) \right) =\overset{\wedge }{g},$ we get $E^{-1}\left( \overset{\wedge }{g}\right) =e\circ t\circ \left( 1_{Z}\times \overset{\wedge }{g}\right) .$ Since the evaluation map $e$ and the switching map $t$ are soft continuous, $E^{-1}\left( \overset{ \wedge }{g}\right) $ is soft continuous. \end{proof} \section{\textbf{CONCLUSION}} \noindent We hope that the results of this study will be useful in the investigation of soft normed spaces and in further research. \noindent \end{document}
\begin{document} \title{Example of a diffeomorphism\ for which the special ergodic theorem doesn't hold} \begin{abstract} In this work we present an example of a $C^\infty$-diffeomorphism of a compact $4$-manifold that admits a global SRB measure but for which the special ergodic theorem does not hold. Namely, for this transformation there exist a continuous function $\varphi$ and a positive constant $\alpha$ such that the following holds: the set of initial points for which the Birkhoff time averages of the function~$\varphi$ differ from its $\mu$-space average by at least~$\alpha$ has zero Lebesgue measure but full Hausdorff dimension. \end{abstract} \section*{Introduction} Let us recall the basic concepts and the notion of the special ergodic theorem, and then state our result. For a more detailed survey of the subject we refer the reader to the work~\cite{KR} and to the references therein. \subsection{SRB measures and special ergodic theorems} Let $M$ be a compact Riemannian manifold (equipped with the Lebesgue measure), and let $f:M\to M$ be a transformation of $M$ with an invariant measure $\mu$. For a function $\varphi\colon M\to \mathbb R$ and a point $x\in M$ we denote the $n$-th time average of $\varphi$ at $x$ by \begin{equation} \varphi_n(x):=\frac{1}{n}\sum_{k=0}^{n-1} \varphi\circ f^k(x), \end{equation} and the space average of $\varphi$ by \begin{equation} \bar\varphi:=\int_M \varphi\,d\mu. \end{equation} \begin{definition} An invariant probability measure $\mu$ is called a \emph{(global) SRB measure} for $f:M\to M$ if for any continuous function $\varphi$ and for Lebesgue-almost every $x\in M$, the time averages of $\varphi$ at $x$ tend to the space average of $\varphi$: \begin{equation}\label{eq:time-averages} \lim\limits_{n\to\infty}\varphi_n (x)=\bar{\varphi}.
\end{equation} \end{definition} Taking a continuous test function $\varphi\in C(M)$ and any $\alpha\geqslant 0$, we define the set of ($\varphi,\alpha$)-nontypical points as \begin{equation*} K_{\varphi,\alpha}:=\left\{ x\in M\colon \varlimsup\limits_{n\to\infty} |\varphi_n(x)-\bar\varphi|>\alpha \right\}. \end{equation*} By definition, if $\mu$ is a global SRB measure, then $\mathop{\mathrm{Leb}}(K_{\varphi,0})=0$. \begin{definition}\label{def:spec-erg} Let $\mu$ be a global SRB measure of $f$. We say that the \emph{special ergodic theorem} holds for $(f,\mu)$ if for every continuous function $\varphi\in C(M)$ and every $\alpha>0$ the Hausdorff dimension of the set $K_{\varphi,\alpha}$ is strictly less than the dimension of the phase space: \begin{equation}\label{eq:set} \forall \varphi\in C(M), \alpha>0 \qquad \dim_H K_{\varphi,\alpha}<\dim M. \end{equation} \end{definition} Our interest in the special ergodic theorem is related to its possible applications to the study of perturbations of skew products, see, for example,~\cite{IKS}. In~\cite{IKS} the special ergodic theorem was proved for the doubling map of the circle, in~\cite{Saltykov} for linear Anosov diffeomorphisms of the two-dimensional torus, and in~\cite{KR} for all transformations for which the so-called dynamical large deviation principle holds (in particular, for all $C^2$-uniformly hyperbolic maps with a transitive attractor). \subsection{The counterexample} All known sufficient conditions for a dynamical system to satisfy the special ergodic theorem (SET) are quite restrictive. Thus one may expect that the SET does not hold for every system. The aim of the present work is to provide such an example. \begin{theorem*} There exists a $C^\infty$-diffeomorphism of a compact $4$-manifold that admits a global SRB measure, but for which the special ergodic theorem does not hold.
\end{theorem*} Our idea is to start with a set that has zero Lebesgue measure and full Hausdorff dimension, and then to construct a transformation that has it as a set of $(\varphi,\alpha)$-nontypical points for some test function $\varphi$ and some $\alpha>0$. For this purpose, we first describe a family of subsets of an interval that have zero Lebesgue measure and Hausdorff dimension $1$. We can construct only a discontinuous transformation for which a set from this family is $(\varphi,\alpha)$-nontypical, so we increase the dimension from $1$ to $4$, successively dealing with the lack of continuity and smoothness of the map. This is done in the next four sections. One can easily verify that the construction fails for a typical perturbation of the system, and our example has infinite codimension in the space of all $C^\infty$-diffeomorphisms of the manifold. The existence of an open set of diffeomorphisms that do not satisfy the SET is a challenging open problem. \subsection{Acknowledgements} The author is very grateful to V.~Kleptsyn for sharing useful ideas and for stimulating discussions. Despite his impact, V.~Kleptsyn declined to be one of the authors of the paper. The author is also grateful to I.~Binder, who taught him the method of regular measures, to Yu.~Ilyashenko for his attention to the work and for numerous valuable comments, and to A.~Kustarev for his kind consultations about connected sums. The author is thankful to Cornell University for its hospitality. The author was supported by the Chebyshev Laboratory (Department of Mathematics and Mechanics, St.-Petersburg State University) under the Russian Federation government grant 11.G34.31.0026, and partially by RFBR grant 10-01-00739-a and joint RFBR/CNRS grant 10-01-93115-CNRS-a.
\section{Dimension 1: discontinuous map of an interval} For every sequence $P=\{p_n\}\in (0,1)^{\mathbb N}$ consider the Cantor set $C_P$, obtained from the interval $I=[0,1]$ by the standard infinite procedure of consecutively deleting middle intervals. Namely, at step number $n$ we take all the intervals obtained as a result of the previous steps (\textit{intervals of rank $n$}) and delete their central parts of relative length $p_n$, see Figure~1. In the particular case when $p_n=1/3 \; \forall n\in\mathbb N$, we obtain the standard ``middle third'' Cantor set. \begin{center} \scalebox{1.2} { \begin{pspicture}(0,-1.108125)(8.582812,1.108125) \psline[linewidth=0.04cm](0.3009375,-0.0903125)(8.300938,-0.0903125) \psdots[dotsize=0.16](0.3009375,-0.0903125) \psdots[dotsize=0.16](8.280937,-0.0903125) \psdots[dotsize=0.16](3.3009374,-0.0903125) \psdots[dotsize=0.16](5.3009377,-0.0903125) \psdots[dotsize=0.16](1.5009375,-0.0903125) \psdots[dotsize=0.16](2.1009376,-0.0903125) \psdots[dotsize=0.16](6.5009375,-0.0903125) \psdots[dotsize=0.16](7.1009374,-0.0903125) \usefont{T1}{ptm}{m}{n} \rput(4.3723435,0.1396875){$p_1$} \psdots[dotsize=0.16](0.7609375,-0.0903125) \psdots[dotsize=0.16](1.0409375,-0.0903125) \psdots[dotsize=0.16](2.5609374,-0.0903125) \psdots[dotsize=0.16](2.8409376,-0.0903125) \psdots[dotsize=0.16](5.7609377,-0.0903125) \psdots[dotsize=0.16](6.0409374,-0.0903125) \psdots[dotsize=0.16](7.5609374,-0.0903125) \psdots[dotsize=0.16](7.8409376,-0.0903125) \psline[linewidth=0.08cm](1.1009375,-0.0903125)(1.5009375,-0.0903125) \psline[linewidth=0.08cm](2.1409376,-0.0903125)(2.5409374,-0.0903125) \psline[linewidth=0.08cm](2.8209374,-0.0903125)(3.2609375,-0.0903125) \psline[linewidth=0.08cm](5.3009377,-0.0903125)(5.7009373,-0.0903125) \psline[linewidth=0.08cm](6.0609374,-0.0903125)(6.4609375,-0.0903125) \psline[linewidth=0.08cm](7.1009374,-0.0903125)(7.5409374,-0.0903125)
\psline[linewidth=0.08cm](7.8809376,-0.0903125)(8.220938,-0.0903125) \psline[linewidth=0.08cm](0.3009375,-0.0903125)(0.7009375,-0.0903125) \usefont{T1}{ptm}{m}{n} \rput(4.3723435,-0.8803125){$p_2$} \usefont{T1}{ptm}{m}{n} \rput(4.3723435,0.9196875){$p_3$} \psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<-}(0.8809375,0.0296875)(4.1009374,0.8896875) \psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<-}(7.7009373,0.0296875)(4.6809373,0.8896875) \psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<-}(2.7209375,-0.0103125)(4.1209373,0.7496875) \psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<-}(5.9009376,-0.0103125)(4.6209373,0.7096875) \psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<-}(6.7809377,-0.1503125)(4.7409377,-0.9303125) \psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<-}(1.7809376,-0.1503125)(4.0609374,-0.9303125) \usefont{T1}{ptm}{m}{n} \rput(0.23234375,-0.3603125){$0$} \usefont{T1}{ptm}{m}{n} \rput(8.272344,-0.3603125){$1$} \end{pspicture} } \\ Fig. 1: first three steps of constructing the set $C_P$ \end{center} Let $\mathcal P\subset [0,1]^{\mathbb N}$ denote the set of sequences $P=(p_n)$ such that: (i) $\lim_{n\to\infty} p_n=0$; (ii) $\sum^\infty_{n=1} p_n=\infty$. \begin{lemma}\label{1} For every $P\in\mathcal P$ the set $C_P$ has zero Lebesgue $1$-measure and Hausdorff dimension $1$. \end{lemma} Though the Lemma is intuitively clear, we give an accurate proof. It makes use of the concept of $\gamma$-regular measures, which will be used several times in our proofs.
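Before the proof, the measure-zero half of the Lemma can be illustrated numerically. The sketch below is our own illustration (not part of the paper's argument): it takes $p_n=1/(n+1)$, which lies in $\mathcal P$, and checks that the total length $\prod_{j=1}^{n}(1-p_j)$ of the union of the rank-$n$ intervals tends to $0$; for this choice the product telescopes to $1/(n+1)$.

```python
import math

def total_length(n, p=lambda j: 1 / (j + 1)):
    """Lebesgue measure of the union of the 2^n intervals of rank n:
    prod_{j=1}^{n} (1 - p_j)."""
    return math.prod(1 - p(j) for j in range(1, n + 1))

# For p_j = 1/(j+1) the product telescopes: prod j/(j+1) = 1/(n+1) -> 0.
for n in (10, 100, 1000):
    assert abs(total_length(n) - 1 / (n + 1)) < 1e-12
```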
\begin{definition} A measure $\mu$ on a Riemannian manifold is called $\gamma$-regular if there exist constants $c,\delta>0$ such that \begin{equation}\label{eq:reg} \mu(U)<c\cdot|U|^\gamma \end{equation} for every measurable set $U$ with $|U|<\delta$ (here $|\cdot|$ stands for the diameter of a set). \end{definition} \begin{proposition}[\cite{Falc}, Mass Distribution Principle]\label{mdp} If $\text{supp }\mu\subset F$ for some probability $\gamma$-regular measure $\mu$, then $\dim_H F\geqslant \gamma$. \end{proposition} \begin{proof} For $\varepsilon$ small enough, the $\gamma$-volume of any cover of $F$ by balls of diameter less than $\varepsilon$ is uniformly bounded away from zero: $$ \sum |U_i|^\gamma \geqslant \frac{\sum \mu(U_i)}{c}\geqslant \frac{\mu(F)}{c}=\frac{1}{c}>0. $$ \end{proof} \begin{proof}[Proof of Lemma~\ref{1}] Let us prove that $C_P$ has Lebesgue measure zero. Indeed, the second property $\sum^\infty_{n=1} p_n=\infty$ implies that $$ \sum^\infty_{n=1} (-\ln (1-p_n))=\infty. $$ Therefore $\prod^\infty_{n=1} (1-p_n)=0$ and hence $\mathop{\mathrm{Leb}}(C_P)=0$. To prove that $\dim_H C_P=1$, consider the standard probability measure $\mu_{P}$ on $C_P$ (that is, the pullback of the Bernoulli $(1/2,1/2)$-measure under the natural encoding of $C_P$ by right-infinite sequences of zeroes and ones). Let us verify that for every $0<\gamma<1$ the measure $\mu_P$ is $\gamma$-regular. That is, we need to prove the existence of a constant $c$ such that~\eqref{eq:reg} holds for every sufficiently small interval $U$. Note that we may assume that $U$ is an interval of rank $n$ for some $n$. Indeed, let \begin{equation}\label{eq:lambdan} \lambda_n=\prod\limits_{j=1}^n \frac{1-p_j}{2} \end{equation} be the length of any interval of rank $n$. Then for all sufficiently large $n$ the ratio $\lambda_n/\lambda_{n+1}$ is less than $3$.
Hence any interval $U$ of length between $\lambda_{n+1}$ and $\lambda_n$ can be contracted and shifted to an interval of rank $n+1$, which changes the suitable value of the constant $c$ in~\eqref{eq:reg} by a factor of at most $3^\gamma$. Any interval of rank $n$ has length $\lambda_n$ and $\mu_P$-measure $\frac{1}{2^n}$. As a consequence of the first property $\lim_{j\to\infty} p_j=0$, there exists $n\in\mathbb N$ such that $2^{\gamma-1}<(1-p_j)^\gamma$ for every $j>n$. Starting with this number $n$, the sequence
\begin{equation}\label{eq:regconst}
\frac{\mu_P (U_n)}{|U_n|^\gamma}=\frac{2^{-n}}{\lambda_n^\gamma} =\frac{2^{n(\gamma-1)}}{\left(\prod_{j=1}^n (1-p_j)\right)^\gamma}
\end{equation}
decreases, and hence is bounded. This proves the $\gamma$-regularity of $\mu_P$. Therefore, by Proposition~\ref{mdp}, $\dim_H C_P \geqslant\gamma$ for every $\gamma<1$. The second conclusion of the lemma follows.
\end{proof}

Of course, taking any sequence $P\in\mathcal P$, one can easily construct a discontinuous map of the unit interval for which the set $C_P$ would be $(\varphi,\alpha)$-nontypical for some $\varphi\in C(I)$ and $\alpha>0$. Indeed, let $f$ send the set $C_P\setminus \{1\}$ to the left endpoint $0$ and $(I\setminus C_P)\cup \{1\}$ to the right endpoint $s_1=1$. Then the $\delta$-measure sitting at the right endpoint would be an SRB measure, and the set $C_P$ would play the role of the set $K_{\varphi,1}$ of $(\varphi,1)$-nontypical points (provided that the test function $\varphi$ takes the value $1$ at the left endpoint and $0$ at the right endpoint). But this map is discontinuous on the Cantor set! So, relying on this lemma and keeping in mind the mentioned lack of continuity, let us make the next step: construct an example of a map of the square $[0,1] \times [0,1]$ which is, in some sense, less discontinuous than the previous one, and still has a $(\varphi,1)$-nontypical set of full Hausdorff dimension.
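Both quantitative claims in the proof of Lemma~\ref{1} are easy to illustrate numerically. The following sketch is our own illustration, not part of the construction: it takes $p_n=1/(n+1)$, which clearly belongs to $\mathcal P$, and computes the total length $2^n\lambda_n=\prod_{j\leqslant n}(1-p_j)$ of the rank-$n$ intervals together with the regularity ratio from~\eqref{eq:regconst} for $\gamma=0.9$:

```python
def p(j):
    # a sequence from the class P: terms tend to 0, the series diverges
    return 1.0 / (j + 1)

def surviving_length(n):
    """Total length of the 2^n rank-n intervals: prod_{j<=n} (1 - p_j)."""
    total = 1.0
    for j in range(1, n + 1):
        total *= 1.0 - p(j)
    return total

def regularity_ratio(gamma, n):
    """mu_P(U_n)/|U_n|^gamma = 2^{-n}/lambda_n^gamma, lambda_n = prod (1-p_j)/2."""
    lam = surviving_length(n) / 2.0 ** n
    return 2.0 ** (-n) / lam ** gamma

print([surviving_length(n) for n in (10, 100, 1000)])        # tends to 0
print([regularity_ratio(0.9, n) for n in (5, 20, 80, 200)])  # stays bounded
```

The first list tends to $0$ (for this choice of $p_n$ the product telescopes to $1/(n+1)$), in line with $\mathop{\mathrm{Leb}}(C_P)=0$; the second stays bounded and is eventually decreasing, in line with the $\gamma$-regularity of $\mu_P$.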
\section{Dimension 2: a sieving construction}
Consider the square
$$
Y=\{(x,p): x,p\in [0,1]\}
$$
and let us describe a map $g$ on it. Fix the ``horizontal level''
\begin{equation}\label{eq:Ip}
I_p:=\{(x,p)\in Y \mid x\in [0,1]\}
\end{equation}
and split it into three parts:
$$I_p=I_{p,-1}\sqcup I_{p,0}\sqcup I_{p,1},$$
where
\begin{equation}\label{eq:Ipk}
\begin{aligned}
I_{p,-1}&:=[0;\frac{1-p}{2}),\\
I_{p,0}&:=[\frac{1-p}{2};\frac{1+p}{2}], \text{ and}\\
I_{p,1}&:=(\frac{1+p}{2};1]
\end{aligned}
\end{equation}
(i.e.\ the interval $I_{p,0}$ is the central part of length $p$). To define the transformation $g$, we introduce the function $q\colon [0,1]\to [0,1]$,
\begin{equation}\label{eq:q-def}
q(p)=p/(1+p)
\end{equation}
(so that $q(1/n)=1/(n+1)$), and mark the point $s_2=(1/2,1)$, which will be the support of the SRB measure. We define $g$ separately on the different parts of every horizontal level $I_p$. For $p=1$, $g(x,p)=s_2$. For $0\leqslant p<1$, we send the central part $I_{p,0}$ to the point $s_2$ and linearly stretch each of the other parts $I_{p,-1}$ and $I_{p,1}$ onto the whole level $I_{q(p)}$, see Figure~2. The shaded triangle in this figure goes to the fixed point $s_2$ under one iterate of the map $g$. This map is defined by the following formula:
\begin{equation}\label{eq:g-def}
g(x,p)=
\begin{cases}
(\frac{2}{1-p}x, q(p)), &x\in I_{p,-1}\\
s_2, &x\in I_{p,0}\\
(\frac{2}{1-p}(x-\frac{1+p}{2}), q(p)), &x\in I_{p,1}.
\end{cases}
\end{equation}
\begin{center}
\scalebox{1.2} {
\begin{pspicture}(0,-4.5629687)(9.982813,4.5629687)
\definecolor{color108b}{rgb}{0.8,0.8,0.8}
\psframe[linewidth=0.04,dimen=outer](9.039531,3.8445315)(1.0795312,-4.115469)
\rput{-180.0}(10.12,-0.2700627){\pstriangle[linewidth=0.03,dimen=outer,fillstyle=solid,fillcolor=color108b](5.06,-4.1150312)(7.94,7.96)}
\psline[linewidth=0.04cm](1.0795312,0.4445314)(9.02,0.46296865)
\psline[linewidth=0.08cm](1.0795312,-1.4554687)(9.039531,-1.4554687)
\psline[linewidth=0.08cm](1.0995312,0.4445314)(2.7995312,0.4445314)
\psline[linewidth=0.08cm](7.3,0.44296864)(9.02,0.44296864)
\usefont{T1}{ptm}{m}{n}
\rput(0.8123438,0.4345314){$p_0$}
\usefont{T1}{ptm}{m}{n}
\rput(0.6223438,-1.4454687){$q(p_0)$}
\psdots[dotsize=0.14](5.079531,3.8445315)
\usefont{T1}{ptm}{m}{n}
\rput(5.072344,4.0745316){$s_2$}
\usefont{T1}{ptm}{m}{n}
\rput(1.9023439,0.7345314){$I_{p_0,-1}$}
\usefont{T1}{ptm}{m}{n}
\rput(5.1023436,0.1945314){$I_{p_0,0}$}
\usefont{T1}{ptm}{m}{n}
\rput(8.382343,0.7345314){$I_{p_0,1}$}
\psline[linewidth=0.02cm,arrowsize=0.05291667cm 4.0,arrowlength=2.0,arrowinset=0.4]{->}(5.08,0.54296863)(5.079531,3.4645314)
\psline[linewidth=0.02cm,arrowsize=0.05291667cm 4.0,arrowlength=2.0,arrowinset=0.4]{->}(6.5,0.52296865)(5.259531,3.5045316)
\psline[linewidth=0.02cm,arrowsize=0.05291667cm 4.0,arrowlength=2.0,arrowinset=0.4]{->}(3.66,0.52296865)(4.959531,3.5045316)
\psline[linewidth=0.02cm,arrowsize=0.05291667cm 4.0,arrowlength=2.0,arrowinset=0.4]{->}(1.3195312,0.3445314)(1.48,-1.3570313)
\psline[linewidth=0.02cm,arrowsize=0.05291667cm 4.0,arrowlength=2.0,arrowinset=0.4]{->}(1.8795311,0.3645314)(4.84,-1.3970313)
\psline[linewidth=0.02cm,arrowsize=0.05291667cm 4.0,arrowlength=2.0,arrowinset=0.4]{->}(2.6395311,0.3645314)(8.12,-1.3770313)
\psline[linewidth=0.02cm,arrowsize=0.05291667cm 4.0,arrowlength=2.0,arrowinset=0.4]{->}(8.8,0.32296866)(8.58,-1.3570313)
\psline[linewidth=0.02cm,arrowsize=0.05291667cm
4.0,arrowlength=2.0,arrowinset=0.4]{->}(8.259532,0.3445314)(5.22,-1.3770313)
\psline[linewidth=0.02cm,arrowsize=0.05291667cm 4.0,arrowlength=2.0,arrowinset=0.4]{->}(7.5,0.34296864)(1.92,-1.3570313)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 4.0,arrowlength=2.0,arrowinset=0.4]{->}(1.1,3.5829687)(1.1,4.422969)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 4.0,arrowlength=2.0,arrowinset=0.4]{->}(8.58,-4.097031)(9.86,-4.097031)
\usefont{T1}{ptm}{m}{n}
\rput(0.93234366,4.3745313){$p$}
\usefont{T1}{ptm}{m}{n}
\rput(0.9323437,3.8345313){$1$}
\usefont{T1}{ptm}{m}{n}
\rput(9.012344,-4.365469){$1$}
\usefont{T1}{ptm}{m}{n}
\rput(5.012344,-4.3854685){$1/2$}
\usefont{T1}{ptm}{m}{n}
\rput(0.9723436,-4.285469){$0$}
\usefont{T1}{ptm}{m}{n}
\rput(9.6823435,-4.365469){$x$}
\end{pspicture}
}
\\
Fig. 2: the map $g$: one step of the sieving construction
\end{center}
Let
$$
K_2=\{(x,p)\mid \forall n \quad g^n(x,p)\ne s_2\}.
$$
Note that under iterates of the function $q$, see~\eqref{eq:q-def}, any point $p\in [0,1]$ tends to zero. Then our construction immediately implies that there is an alternative description of the set $K_2$:
$$
K_2=\{(x,p)\mid \mathop{\mathrm{dist}}(g^n(x,p),I_0)\to 0\}.
$$
So we have a mechanism that acts like an infinite sieve: a point passes down through it to the zero level (rather than being ``thrown out'' to the point $s_2$ under the iterates of $g$) if and only if its future orbit never enters the shaded triangle, which is the union of the central parts of the level intervals.
\begin{lemma}\label{2}
The $\delta$-measure sitting at the point $s_2$ is a global SRB measure for the map $g$. The set $K_2$ of the points that tend to the level $\{p=0\}$ under the iterates of $g$ has zero Lebesgue $2$-measure and Hausdorff dimension $2$.
\end{lemma}
\begin{proof}
First, let us prove that on every level $I_{p_0}$, $0<p_0\leqslant 1$, see~\eqref{eq:Ip}, Lebesgue-almost every point is thrown out to $s_2$ under some iterate of $g$. That is, the set $D_{p_0}:=I_{p_0}\cap K_2$ has $1$-dimensional Lebesgue measure $0$. By construction, $D_{p_0}=\{(x,p_0)\mid x\in C_{P_0}\}$, where $P_0=P_0(p_0)=(p_0, q(p_0),\dots, q^n(p_0),\dots)$. By Lemma~\ref{1}, it is sufficient to show that $P_0\in\mathcal P$. For $p_0=1$ we have $P_0=(1,1/2,1/3,1/4,\dots)\in \mathcal P$. For any $0<p_0<1$ we also have $P_0(p_0)\in\mathcal P$ due to the monotonicity of the function $q$. Indeed, if $1/(n+1)<p_0\leqslant 1/n$ for some $n\in\mathbb N$, then $1/(n+k+1)<q^k(p_0)\leqslant 1/(n+k)$ for every $k\in\mathbb N$. Therefore, as mentioned before, the sequence $P_0$ tends to $0$, and the sum of its components is infinite, just as for the harmonic series. Hence $P_0$ belongs to $\mathcal P$. The relation $\mathop{\mathrm{Leb}}_2(K_2)=0$ now follows from the Fubini theorem. Hence the $\delta$-measure at the point $s_2$ is a global SRB measure.

It remains to prove that $\dim_H K_2=2$. For every level $I_p$ denote by $\mu_p$ the measure constructed in the proof of Lemma~\ref{1} and supported on the Cantor set $D_p$ of the points that do not tend to $s_2$ under the iterates of $g$. Fix any $\gamma<1$. As was shown in the proof of Lemma~\ref{1}, all the measures $\mu_p$ for $0<p<1$ are $\gamma$-regular. Note also that the sequence of iterates $P_0(p_0)=(p_0, q(p_0),\dots, q^n(p_0),\dots)$ of the point $p_0=1$ componentwise majorizes the sequences of iterates for all other $p_0\in [0,1)$ due to the monotonicity of the function $q$. Hence, by~\eqref{eq:lambdan}, for any $n$ the length $\lambda_n$ of the intervals of rank $n$ for $P_0=P_0(p_0)$ monotonically increases as $p_0$ decreases, and by~\eqref{eq:regconst} the regularity constant $c=c(\gamma)$ can be chosen independently of $p$.
Then the measure
$$
\mu:=\int_0^1 \mu_p \,dm(p)
$$
is $(\gamma+1)$-regular with the same constant $c(\gamma)$ due to the Fubini theorem. Therefore, by Proposition~\ref{mdp}, $\dim_H K_2 \geqslant\gamma+1$ for every $\gamma<1$, and hence $\dim_H K_2=2$.
\end{proof}
The map $g$ is not continuous either, but its set of discontinuity now consists of only two segments formed by the endpoints of the intervals $I_{p,0}$, see~\eqref{eq:Ipk} (they form two sides of the shaded triangle in Figure~2). Note also that even the image of the map $g$ is disconnected (it consists of the point $s_2$ and the lower half of the square $Y$). To get rid of these problems we embed this construction into a flow. Namely, we consider a flow whose Poincar\'e map for some cross-section resembles the map $g$ described above.
\section{Dimension 3: a flow on a stratified manifold}
In this section we present a $3$-manifold and a flow on it. The time $1$ map of this flow does not satisfy the SET. The shortcoming is that the phase space is a stratified, rather than a genuine smooth, manifold. We start with a brief description of both. The idea is to avoid the discontinuity of the previous $2$-dimensional example by separating the images of the parts of $I_p$ by separatrices of saddles of a smooth flow.
\subsection{Heuristic description}
We consider a closed simply connected subset $T$ of a square and take its direct product with the interval $[0,1]$, thus obtaining a $3$-dimensional manifold with boundary, homeomorphic to a closed $3$-cube. Then we glue parts of the boundary of this manifold together and obtain a $2$-stratum $S_2$ (every point of this stratum has a neighborhood homeomorphic to three half-balls glued together along their boundary discs). Then we define a flow such that the Poincar\'e map for the cross-section $S_2$ is very similar to the map $g$, see \eqref{eq:g-def}, while the time $1$ map of the flow is continuous and displays the same asymptotic behaviour as the Poincar\'e map.
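The sieving mechanism rests on the slow decay of the level coordinate under the function $q$ from~\eqref{eq:q-def}. A small illustrative sketch (our own remark, not part of the construction: the closed form $q^n(p_0)=p_0/(1+np_0)$ is easily verified by induction) shows the harmonic-like behaviour used in the proof of Lemma~\ref{2}:

```python
def q(p):
    # the map q(p) = p/(1+p) from eq. (q-def); note q(1/n) = 1/(n+1)
    return p / (1.0 + p)

p0 = 0.5
orbit = []
p = p0
for n in range(1, 2001):
    p = q(p)
    orbit.append(p)

# the orbit decays like p0/(1 + n*p0): it tends to 0 while its sum diverges
print(orbit[-1], sum(orbit))
```

For $p_0=1/2$ the orbit after $2000$ steps has decayed only to $p_0/(1+2000p_0)\approx 5\cdot 10^{-4}$, while the partial sums grow logarithmically, just like the harmonic series.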
From now on we take $p\in [0,1/2]$ instead of $p\in [0,1]$ to avoid incorrect constructions for $p=1$. Now we pass to a rigorous description.
\subsection{Construction of a stratified manifold}
First we describe three curves in the square $R=\{(x,y)\mid x,y\in [-1,2]\}$. It is not necessary to define them by explicit formulas; it suffices to state the properties of these curves that we need. One can easily verify the existence of such curves. The curve $\gamma_{-1}$ starts at the point $(0,-1)$, ends at $(-1,0)$, and coincides with the straight segments $\{(0,-1+t)\}$ and $\{(-1+t,0)\}$, $t\in[0,\varepsilon]$, in some $\varepsilon$-neighborhoods of its endpoints. Analogously, the curve $\gamma_1$ starts at the point $(1,-1)$, ends at $(2,0)$, and coincides with the straight segments $\{(1,-1+t)\}$ and $\{(2-t,0)\}$, $t\in[0,\varepsilon]$, in some $\varepsilon$-neighborhoods of its endpoints, see Figure~3a. The curve $\gamma_0$ starts at the point $(-1,1)$ and ends at $(2,1)$. It is tangent to the line $\{(x,2)\}$ at the point $(1/2,2)$ and coincides with the straight segments $\{(-1+t,1)\}$ and $\{(2-t,1)\}$, $t\in[0,\varepsilon]$, in some $\varepsilon$-neighborhoods of its endpoints. Each of these three curves is assumed to be simple and $C^\infty$-smooth, to lie in the square $R$, and to have no intersections with the other two curves. For every $y_0\in (1,2)$ the curve $\gamma_0$ intersects the line $\{y=y_0\}$ at two points. We denote by $T\subset R$ the closed subset of the square bounded by these three curves and segments on the boundary of the square (see Figure~3a).
\begin{center}
\scalebox{1.2} {
\begin{pspicture}(0,-3.6229687)(7.2628126,3.6229687)
\definecolor{color3208g}{rgb}{0.6,0.6,0.6}
\pscustom[linewidth=0.02,fillstyle=gradient,gradlines=2000,gradbegin=color3208g,gradend=color3208g,gradmidpoint=1.0]
{
\newpath
\moveto(0.4609375,-3.1754687)
\lineto(0.4609375,-3.1254687)
\curveto(0.4609375,-3.1004686)(0.4609375,-3.0504687)(0.4609375,-3.0254688)
\curveto(0.4609375,-3.0004687)(0.4609375,-2.9504688)(0.4609375,-2.9254687)
\curveto(0.4609375,-2.9004688)(0.4609375,-2.8504686)(0.4609375,-2.8254688)
\curveto(0.4609375,-2.8004687)(0.4559375,-2.7554688)(0.4509375,-2.7354689)
\curveto(0.4459375,-2.7154686)(0.4409375,-2.6704688)(0.4409375,-2.6454687)
\curveto(0.4409375,-2.6204689)(0.4409375,-2.5704687)(0.4409375,-2.5454688)
\curveto(0.4409375,-2.5204687)(0.4459375,-2.4754686)(0.4509375,-2.4554687)
\curveto(0.4559375,-2.4354687)(0.4559375,-2.3954687)(0.4509375,-2.3754687)
\curveto(0.4459375,-2.3554688)(0.4409375,-2.3104687)(0.4409375,-2.2854688)
\curveto(0.4409375,-2.2604687)(0.4459375,-2.2154686)(0.4509375,-2.1954687)
\curveto(0.4559375,-2.1754687)(0.4609375,-2.1304688)(0.4609375,-2.1054688)
\curveto(0.4609375,-2.0804687)(0.4609375,-2.0304687)(0.4609375,-2.0054688)
\curveto(0.4609375,-1.9804688)(0.4609375,-1.9304688)(0.4609375,-1.9054687)
\curveto(0.4609375,-1.8804687)(0.4559375,-1.8354688)(0.4509375,-1.8154688)
\curveto(0.4459375,-1.7954688)(0.4409375,-1.7504687)(0.4409375,-1.7254688)
\curveto(0.4409375,-1.7004688)(0.4409375,-1.6504687)(0.4409375,-1.6254687)
\curveto(0.4409375,-1.6004688)(0.4409375,-1.5504688)(0.4409375,-1.5254687)
\curveto(0.4409375,-1.5004687)(0.4459375,-1.4554688)(0.4509375,-1.4354688)
\curveto(0.4559375,-1.4154687)(0.4609375,-1.3704687)(0.4609375,-1.3454688)
\curveto(0.4609375,-1.3204688)(0.4609375,-1.2704687)(0.4609375,-1.2454687)
\curveto(0.4609375,-1.2204688)(0.4759375,-1.1854688)(0.4909375,-1.1754688)
\curveto(0.5059375,-1.1654687)(0.5459375,-1.1554687)(0.5709375,-1.1554687)
\curveto(0.5959375,-1.1554687)(0.6459375,-1.1554687)(0.6709375,-1.1554687) \curveto(0.6959375,-1.1554687)(0.7459375,-1.1554687)(0.7709375,-1.1554687) \curveto(0.7959375,-1.1554687)(0.8459375,-1.1554687)(0.8709375,-1.1554687) \curveto(0.8959375,-1.1554687)(0.9459375,-1.1554687)(0.9709375,-1.1554687) \curveto(0.9959375,-1.1554687)(1.0409375,-1.1504687)(1.0609375,-1.1454687) \curveto(1.0809375,-1.1404687)(1.1259375,-1.1354687)(1.1509376,-1.1354687) \curveto(1.1759375,-1.1354687)(1.2259375,-1.1354687)(1.2509375,-1.1354687) \curveto(1.2759376,-1.1354687)(1.3209375,-1.1404687)(1.3409375,-1.1454687) \curveto(1.3609375,-1.1504687)(1.4059376,-1.1554687)(1.4309375,-1.1554687) \curveto(1.4559375,-1.1554687)(1.5059375,-1.1554687)(1.5309376,-1.1554687) \curveto(1.5559375,-1.1554687)(1.6009375,-1.1604687)(1.6209375,-1.1654687) \curveto(1.6409374,-1.1704688)(1.6809375,-1.1804688)(1.7009375,-1.1854688) \curveto(1.7209375,-1.1904688)(1.7609375,-1.2004688)(1.7809376,-1.2054688) \curveto(1.8009375,-1.2104688)(1.8409375,-1.2204688)(1.8609375,-1.2254688) \curveto(1.8809375,-1.2304688)(1.9209375,-1.2404687)(1.9409375,-1.2454687) \curveto(1.9609375,-1.2504687)(2.0009375,-1.2604687)(2.0209374,-1.2654687) \curveto(2.0409374,-1.2704687)(2.0759375,-1.2854687)(2.0909376,-1.2954688) \curveto(2.1059375,-1.3054688)(2.1359375,-1.3254688)(2.1509376,-1.3354688) \curveto(2.1659374,-1.3454688)(2.1909375,-1.3704687)(2.2009375,-1.3854687) \curveto(2.2109375,-1.4004687)(2.2309375,-1.4304688)(2.2409375,-1.4454688) \curveto(2.2509375,-1.4604688)(2.2709374,-1.4904687)(2.2809374,-1.5054687) \curveto(2.2909374,-1.5204687)(2.3109374,-1.5504688)(2.3209374,-1.5654688) \curveto(2.3309374,-1.5804688)(2.3459375,-1.6154687)(2.3509376,-1.6354687) \curveto(2.3559375,-1.6554687)(2.3659375,-1.6954688)(2.3709376,-1.7154688) \curveto(2.3759375,-1.7354687)(2.3859375,-1.7754687)(2.3909376,-1.7954688) \curveto(2.3959374,-1.8154688)(2.4059374,-1.8554688)(2.4109375,-1.8754687) 
\curveto(2.4159374,-1.8954687)(2.4209375,-1.9404688)(2.4209375,-1.9654688) \curveto(2.4209375,-1.9904687)(2.4259374,-2.0354688)(2.4309375,-2.0554688) \curveto(2.4359374,-2.0754688)(2.4409375,-2.1204689)(2.4409375,-2.1454687) \curveto(2.4409375,-2.1704688)(2.4409375,-2.2204688)(2.4409375,-2.2454689) \curveto(2.4409375,-2.2704687)(2.4409375,-2.3204687)(2.4409375,-2.3454688) \curveto(2.4409375,-2.3704689)(2.4459374,-2.4154687)(2.4509375,-2.4354687) \curveto(2.4559374,-2.4554687)(2.4609375,-2.5004687)(2.4609375,-2.5254688) \curveto(2.4609375,-2.5504687)(2.4609375,-2.6004686)(2.4609375,-2.6254687) \curveto(2.4609375,-2.6504688)(2.4609375,-2.7004688)(2.4609375,-2.7254686) \curveto(2.4609375,-2.7504687)(2.4609375,-2.8004687)(2.4609375,-2.8254688) \curveto(2.4609375,-2.8504686)(2.4609375,-2.9004688)(2.4609375,-2.9254687) \curveto(2.4609375,-2.9504688)(2.4609375,-3.0004687)(2.4609375,-3.0254688) \curveto(2.4609375,-3.0504687)(2.4609375,-3.1004686)(2.4609375,-3.1254687) \curveto(2.4609375,-3.1504688)(2.4609375,-3.1754687)(2.4609375,-3.1754687) } \pscustom[linewidth=0.02,fillstyle=gradient,gradlines=2000,gradbegin=color3208g,gradend=color3208g,gradmidpoint=1.0] { \newpath \moveto(6.4609375,-3.1554687) \lineto(6.4509373,-3.1154687) \curveto(6.4459376,-3.0954688)(6.4409375,-3.0504687)(6.4409375,-3.0254688) \curveto(6.4409375,-3.0004687)(6.4409375,-2.9504688)(6.4409375,-2.9254687) \curveto(6.4409375,-2.9004688)(6.4409375,-2.8504686)(6.4409375,-2.8254688) \curveto(6.4409375,-2.8004687)(6.4409375,-2.7504687)(6.4409375,-2.7254686) \curveto(6.4409375,-2.7004688)(6.4409375,-2.6504688)(6.4409375,-2.6254687) \curveto(6.4409375,-2.6004686)(6.4409375,-2.5504687)(6.4409375,-2.5254688) \curveto(6.4409375,-2.5004687)(6.4409375,-2.4504688)(6.4409375,-2.4254687) \curveto(6.4409375,-2.4004688)(6.4409375,-2.3504686)(6.4409375,-2.3254688) \curveto(6.4409375,-2.3004687)(6.4409375,-2.2504687)(6.4409375,-2.2254686) \curveto(6.4409375,-2.2004688)(6.4409375,-2.1504688)(6.4409375,-2.1254687) 
\curveto(6.4409375,-2.1004686)(6.4409375,-2.0504687)(6.4409375,-2.0254688) \curveto(6.4409375,-2.0004687)(6.4409375,-1.9504688)(6.4409375,-1.9254688) \curveto(6.4409375,-1.9004687)(6.4409375,-1.8504688)(6.4409375,-1.8254688) \curveto(6.4409375,-1.8004688)(6.4409375,-1.7504687)(6.4409375,-1.7254688) \curveto(6.4409375,-1.7004688)(6.4409375,-1.6504687)(6.4409375,-1.6254687) \curveto(6.4409375,-1.6004688)(6.4409375,-1.5504688)(6.4409375,-1.5254687) \curveto(6.4409375,-1.5004687)(6.4409375,-1.4504688)(6.4409375,-1.4254688) \curveto(6.4409375,-1.4004687)(6.4409375,-1.3504688)(6.4409375,-1.3254688) \curveto(6.4409375,-1.3004688)(6.4409375,-1.2504687)(6.4409375,-1.2254688) \curveto(6.4409375,-1.2004688)(6.4209375,-1.1704688)(6.4009376,-1.1654687) \curveto(6.3809376,-1.1604687)(6.3359375,-1.1554687)(6.3109374,-1.1554687) \curveto(6.2859373,-1.1554687)(6.2359376,-1.1554687)(6.2109375,-1.1554687) \curveto(6.1859374,-1.1554687)(6.1359377,-1.1554687)(6.1109376,-1.1554687) \curveto(6.0859375,-1.1554687)(6.0359373,-1.1554687)(6.0109377,-1.1554687) \curveto(5.9859376,-1.1554687)(5.9359374,-1.1554687)(5.9109373,-1.1554687) \curveto(5.8859377,-1.1554687)(5.8359375,-1.1554687)(5.8109374,-1.1554687) \curveto(5.7859373,-1.1554687)(5.7359376,-1.1554687)(5.7109375,-1.1554687) \curveto(5.6859374,-1.1554687)(5.6409373,-1.1604687)(5.6209373,-1.1654687) \curveto(5.6009374,-1.1704688)(5.5559373,-1.1754688)(5.5309377,-1.1754688) \curveto(5.5059376,-1.1754688)(5.4559374,-1.1754688)(5.4309373,-1.1754688) \curveto(5.4059377,-1.1754688)(5.3609376,-1.1804688)(5.3409376,-1.1854688) \curveto(5.3209376,-1.1904688)(5.2759376,-1.1954688)(5.2509375,-1.1954688) \curveto(5.2259374,-1.1954688)(5.1809373,-1.2004688)(5.1609373,-1.2054688) \curveto(5.1409373,-1.2104688)(5.1009374,-1.2254688)(5.0809374,-1.2354687) \curveto(5.0609374,-1.2454687)(5.0209374,-1.2604687)(5.0009375,-1.2654687) \curveto(4.9809375,-1.2704687)(4.9409375,-1.2804687)(4.9209375,-1.2854687) 
\curveto(4.9009376,-1.2904687)(4.8659377,-1.3054688)(4.8509374,-1.3154688) \curveto(4.8359375,-1.3254688)(4.8059373,-1.3454688)(4.7909374,-1.3554688) \curveto(4.7759376,-1.3654687)(4.7459373,-1.3904687)(4.7309375,-1.4054687) \curveto(4.7159376,-1.4204688)(4.6909375,-1.4504688)(4.6809373,-1.4654688) \curveto(4.6709375,-1.4804688)(4.6509376,-1.5104687)(4.6409373,-1.5254687) \curveto(4.6309376,-1.5404687)(4.6109376,-1.5704688)(4.6009374,-1.5854688) \curveto(4.5909376,-1.6004688)(4.5759373,-1.6354687)(4.5709376,-1.6554687) \curveto(4.5659375,-1.6754688)(4.5509377,-1.7104688)(4.5409374,-1.7254688) \curveto(4.5309377,-1.7404687)(4.5159373,-1.7754687)(4.5109377,-1.7954688) \curveto(4.5059376,-1.8154688)(4.5009375,-1.8604687)(4.5009375,-1.8854687) \curveto(4.5009375,-1.9104687)(4.4959373,-1.9554688)(4.4909377,-1.9754688) \curveto(4.4859376,-1.9954687)(4.4809375,-2.0404687)(4.4809375,-2.0654688) \curveto(4.4809375,-2.0904686)(4.4759374,-2.1354687)(4.4709377,-2.1554687) \curveto(4.4659376,-2.1754687)(4.4609375,-2.2204688)(4.4609375,-2.2454689) \curveto(4.4609375,-2.2704687)(4.4559374,-2.3204687)(4.4509373,-2.3454688) \curveto(4.4459376,-2.3704689)(4.4409375,-2.4204688)(4.4409375,-2.4454687) \curveto(4.4409375,-2.4704688)(4.4409375,-2.5204687)(4.4409375,-2.5454688) \curveto(4.4409375,-2.5704687)(4.4359374,-2.6204689)(4.4309373,-2.6454687) \curveto(4.4259377,-2.6704688)(4.4209375,-2.7204688)(4.4209375,-2.7454689) \curveto(4.4209375,-2.7704687)(4.4259377,-2.8154688)(4.4309373,-2.8354688) \curveto(4.4359374,-2.8554688)(4.4409375,-2.9004688)(4.4409375,-2.9254687) \curveto(4.4409375,-2.9504688)(4.4409375,-3.0004687)(4.4409375,-3.0254688) \curveto(4.4409375,-3.0504687)(4.4459376,-3.0954688)(4.4509373,-3.1154687) \curveto(4.4559374,-3.1354687)(4.4609375,-3.1604688)(4.4609375,-3.1754687) } \pscustom[linewidth=0.02,fillstyle=gradient,gradlines=2000,gradbegin=color3208g,gradend=color3208g,gradmidpoint=1.0] { \newpath \moveto(3.5609374,2.8245313) \lineto(3.6009376,2.8045313) 
\curveto(3.6209376,2.7945313)(3.6609375,2.7745314)(3.6809375,2.7645311) \curveto(3.7009375,2.7545311)(3.7359376,2.7345312)(3.7509375,2.7245312) \curveto(3.7659376,2.7145312)(3.7959375,2.6945312)(3.8109374,2.6845312) \curveto(3.8259375,2.6745312)(3.8559375,2.6545312)(3.8709376,2.6445312) \curveto(3.8859375,2.6345313)(3.9109375,2.6095312)(3.9209375,2.5945313) \curveto(3.9309375,2.5795312)(3.9509375,2.5495312)(3.9609375,2.5345314) \curveto(3.9709375,2.5195312)(3.9909375,2.4895313)(4.0009375,2.4745312) \curveto(4.0109377,2.4595313)(4.0309377,2.4295313)(4.0409374,2.4145312) \curveto(4.0509377,2.3995314)(4.0709376,2.3695312)(4.0809374,2.3545313) \curveto(4.0909376,2.3395312)(4.1109376,2.3045313)(4.1209373,2.2845314) \curveto(4.1309376,2.2645311)(4.1509376,2.2245312)(4.1609373,2.2045312) \curveto(4.1709375,2.1845312)(4.1909375,2.1445312)(4.2009373,2.1245313) \curveto(4.2109375,2.1045313)(4.2309375,2.0695312)(4.2409377,2.0545313) \curveto(4.2509375,2.0395312)(4.2659373,2.0045311)(4.2709374,1.9845313) \curveto(4.2759376,1.9645313)(4.2909374,1.9295312)(4.3009377,1.9145312) \curveto(4.3109374,1.8995312)(4.3309374,1.8695313)(4.3409376,1.8545313) \curveto(4.3509374,1.8395313)(4.3659377,1.8045312)(4.3709373,1.7845312) \curveto(4.3759375,1.7645313)(4.3909373,1.7295313)(4.4009376,1.7145313) \curveto(4.4109373,1.6995312)(4.4309373,1.6695312)(4.4409375,1.6545312) \curveto(4.4509373,1.6395313)(4.4759374,1.6045313)(4.4909377,1.5845313) \curveto(4.5059376,1.5645312)(4.5309377,1.5295312)(4.5409374,1.5145313) \curveto(4.5509377,1.4995313)(4.5709376,1.4695313)(4.5809374,1.4545312) \curveto(4.5909376,1.4395312)(4.6109376,1.4095312)(4.6209373,1.3945312) \curveto(4.6309376,1.3795313)(4.6509376,1.3495313)(4.6609373,1.3345313) \curveto(4.6709375,1.3195312)(4.6959376,1.2945312)(4.7109375,1.2845312) \curveto(4.7259374,1.2745312)(4.7559376,1.2545313)(4.7709374,1.2445313) \curveto(4.7859373,1.2345313)(4.8159375,1.2095313)(4.8309374,1.1945312) 
\curveto(4.8459377,1.1795312)(4.8759375,1.1545312)(4.8909373,1.1445312) \curveto(4.9059377,1.1345313)(4.9359374,1.1145313)(4.9509373,1.1045313) \curveto(4.9659376,1.0945313)(4.9959373,1.0745312)(5.0109377,1.0645312) \curveto(5.0259376,1.0545312)(5.0559373,1.0345312)(5.0709376,1.0245312) \curveto(5.0859375,1.0145313)(5.1159377,0.9945313)(5.1309376,0.9845312) \curveto(5.1459374,0.97453123)(5.1859374,0.95953125)(5.2109375,0.95453125) \curveto(5.2359376,0.94953126)(5.2809377,0.93953127)(5.3009377,0.9345313) \curveto(5.3209376,0.9295313)(5.3609376,0.9195312)(5.3809376,0.91453123) \curveto(5.4009376,0.90953124)(5.4409375,0.89953125)(5.4609375,0.89453125) \curveto(5.4809375,0.88953125)(5.5259376,0.87953126)(5.5509377,0.87453127) \curveto(5.5759373,0.8695313)(5.6259375,0.8645313)(5.6509376,0.8645313) \curveto(5.6759377,0.8645313)(5.7259374,0.8645313)(5.7509375,0.8645313) \curveto(5.7759376,0.8645313)(5.8259373,0.8595312)(5.8509374,0.8545312) \curveto(5.8759375,0.84953123)(5.9309373,0.84453124)(5.9609375,0.84453124) \curveto(5.9909377,0.84453124)(6.0459375,0.84453124)(6.0709376,0.84453124) \curveto(6.0959377,0.84453124)(6.1459374,0.84453124)(6.1709375,0.84453124) \curveto(6.1959376,0.84453124)(6.2459373,0.83953124)(6.2709374,0.83453125) \curveto(6.2959375,0.82953125)(6.3459377,0.82453126)(6.3709373,0.82453126) \curveto(6.3959374,0.82453126)(6.4259377,0.84453124)(6.4309373,0.8645313) \curveto(6.4359374,0.88453126)(6.4459376,0.9245312)(6.4509373,0.94453126) \curveto(6.4559374,0.96453124)(6.4609375,1.0095313)(6.4609375,1.0345312) \curveto(6.4609375,1.0595312)(6.4609375,1.1095313)(6.4609375,1.1345313) \curveto(6.4609375,1.1595312)(6.4609375,1.2095313)(6.4609375,1.2345313) \curveto(6.4609375,1.2595313)(6.4559374,1.3045312)(6.4509373,1.3245312) \curveto(6.4459376,1.3445313)(6.4409375,1.3895313)(6.4409375,1.4145312) \curveto(6.4409375,1.4395312)(6.4409375,1.4895313)(6.4409375,1.5145313) \curveto(6.4409375,1.5395312)(6.4409375,1.5895313)(6.4409375,1.6145313) 
\curveto(6.4409375,1.6395313)(6.4409375,1.6895312)(6.4409375,1.7145313) \curveto(6.4409375,1.7395313)(6.4409375,1.7895312)(6.4409375,1.8145312) \curveto(6.4409375,1.8395313)(6.4409375,1.8895313)(6.4409375,1.9145312) \curveto(6.4409375,1.9395312)(6.4409375,1.9895313)(6.4409375,2.0145311) \curveto(6.4409375,2.0395312)(6.4409375,2.0895312)(6.4409375,2.1145313) \curveto(6.4409375,2.1395311)(6.4409375,2.1895313)(6.4409375,2.2145312) \curveto(6.4409375,2.2395313)(6.4409375,2.2895312)(6.4409375,2.3145313) \curveto(6.4409375,2.3395312)(6.4409375,2.3895311)(6.4409375,2.4145312) \curveto(6.4409375,2.4395313)(6.4409375,2.4895313)(6.4409375,2.5145311) \curveto(6.4409375,2.5395312)(6.4409375,2.5895312)(6.4409375,2.6145313) \curveto(6.4409375,2.6395311)(6.4409375,2.6895313)(6.4409375,2.7145312) \curveto(6.4409375,2.7395313)(6.4409375,2.7795312)(6.4409375,2.8245313) } \pscustom[linewidth=0.02,fillstyle=gradient,gradlines=2000,gradbegin=color3208g,gradend=color3208g,gradmidpoint=1.0] { \newpath \moveto(0.4409375,2.8245313) \lineto(0.4409375,2.7745314) \curveto(0.4409375,2.7495313)(0.4409375,2.6995313)(0.4409375,2.6745312) \curveto(0.4409375,2.6495314)(0.4409375,2.5995312)(0.4409375,2.5745313) \curveto(0.4409375,2.5495312)(0.4409375,2.4995313)(0.4409375,2.4745312) \curveto(0.4409375,2.4495313)(0.4409375,2.3995314)(0.4409375,2.3745313) \curveto(0.4409375,2.3495312)(0.4409375,2.2995312)(0.4409375,2.2745314) \curveto(0.4409375,2.2495313)(0.4409375,2.1995313)(0.4409375,2.1745312) \curveto(0.4409375,2.1495314)(0.4409375,2.0995312)(0.4409375,2.0745313) \curveto(0.4409375,2.0495312)(0.4409375,1.9995313)(0.4409375,1.9745313) \curveto(0.4409375,1.9495312)(0.4409375,1.8995312)(0.4409375,1.8745313) \curveto(0.4409375,1.8495313)(0.4409375,1.7995312)(0.4409375,1.7745312) \curveto(0.4409375,1.7495313)(0.4409375,1.6995312)(0.4409375,1.6745312) \curveto(0.4409375,1.6495312)(0.4409375,1.5995313)(0.4409375,1.5745312) \curveto(0.4409375,1.5495312)(0.4409375,1.5095313)(0.4409375,1.4945313) 
\curveto(0.4409375,1.4795313)(0.4409375,1.4395312)(0.4409375,1.4145312) \curveto(0.4409375,1.3895313)(0.4409375,1.3395313)(0.4409375,1.3145312) \curveto(0.4409375,1.2895312)(0.4409375,1.2395313)(0.4409375,1.2145313) \curveto(0.4409375,1.1895312)(0.4409375,1.1395313)(0.4409375,1.1145313) \curveto(0.4409375,1.0895313)(0.4409375,1.0395312)(0.4409375,1.0145313) \curveto(0.4409375,0.9895313)(0.4409375,0.93953127)(0.4409375,0.91453123) \curveto(0.4409375,0.88953125)(0.4609375,0.8595312)(0.4809375,0.8545312) \curveto(0.5009375,0.84953123)(0.5459375,0.84453124)(0.5709375,0.84453124) \curveto(0.5959375,0.84453124)(0.6409375,0.83953124)(0.6609375,0.83453125) \curveto(0.6809375,0.82953125)(0.7259375,0.82453126)(0.7509375,0.82453126) \curveto(0.7759375,0.82453126)(0.8259375,0.82453126)(0.8509375,0.82453126) \curveto(0.8759375,0.82453126)(0.9259375,0.82453126)(0.9509375,0.82453126) \curveto(0.9759375,0.82453126)(1.0159374,0.82453126)(1.0309376,0.82453126) \curveto(1.0459375,0.82453126)(1.0859375,0.82453126)(1.1109375,0.82453126) \curveto(1.1359375,0.82453126)(1.1809375,0.82953125)(1.2009375,0.83453125) \curveto(1.2209375,0.83953124)(1.2659374,0.84453124)(1.2909375,0.84453124) \curveto(1.3159375,0.84453124)(1.3609375,0.84953123)(1.3809375,0.8545312) \curveto(1.4009376,0.8595312)(1.4409375,0.8695313)(1.4609375,0.87453127) \curveto(1.4809375,0.87953126)(1.5209374,0.88953125)(1.5409375,0.89453125) \curveto(1.5609375,0.89953125)(1.6009375,0.90953124)(1.6209375,0.91453123) \curveto(1.6409374,0.9195312)(1.6809375,0.9295313)(1.7009375,0.9345313) \curveto(1.7209375,0.93953127)(1.7559375,0.95453125)(1.7709374,0.96453124) \curveto(1.7859375,0.97453123)(1.8159375,0.9945313)(1.8309375,1.0045313) \curveto(1.8459375,1.0145313)(1.8759375,1.0345312)(1.8909374,1.0445312) \curveto(1.9059376,1.0545312)(1.9359375,1.0745312)(1.9509375,1.0845313) \curveto(1.9659375,1.0945313)(1.9959375,1.1145313)(2.0109375,1.1245313) \curveto(2.0259376,1.1345313)(2.0559375,1.1545312)(2.0709374,1.1645312) 
\curveto(2.0859375,1.1745312)(2.1109376,1.1995312)(2.1209376,1.2145313) \curveto(2.1309376,1.2295313)(2.1559374,1.2545313)(2.1709375,1.2645313) \curveto(2.1859374,1.2745312)(2.2109375,1.2995312)(2.2209375,1.3145312) \curveto(2.2309375,1.3295312)(2.2509375,1.3595313)(2.2609375,1.3745313) \curveto(2.2709374,1.3895313)(2.2909374,1.4195312)(2.3009374,1.4345312) \curveto(2.3109374,1.4495312)(2.3309374,1.4795313)(2.3409376,1.4945313) \curveto(2.3509376,1.5095313)(2.3759375,1.5345312)(2.3909376,1.5445312) \curveto(2.4059374,1.5545312)(2.4309375,1.5795312)(2.4409375,1.5945313) \curveto(2.4509375,1.6095313)(2.4709375,1.6395313)(2.4809375,1.6545312) \curveto(2.4909375,1.6695312)(2.5109375,1.7045312)(2.5209374,1.7245313) \curveto(2.5309374,1.7445313)(2.5459375,1.7845312)(2.5509374,1.8045312) \curveto(2.5559375,1.8245312)(2.5709374,1.8595313)(2.5809374,1.8745313) \curveto(2.5909376,1.8895313)(2.6059375,1.9145312)(2.6109376,1.9245312) \curveto(2.6159375,1.9345312)(2.6309376,1.9595313)(2.6409376,1.9745313) \curveto(2.6509376,1.9895313)(2.6659374,2.0245314)(2.6709375,2.0445313) \curveto(2.6759374,2.0645313)(2.6909375,2.0995312)(2.7009375,2.1145313) \curveto(2.7109375,2.1295311)(2.7309375,2.1595314)(2.7409375,2.1745312) \curveto(2.7509375,2.1895313)(2.7659376,2.2245312)(2.7709374,2.2445312) \curveto(2.7759376,2.2645311)(2.7909374,2.2995312)(2.8009374,2.3145313) \curveto(2.8109374,2.3295312)(2.8309374,2.3595312)(2.8409376,2.3745313) \curveto(2.8509376,2.3895311)(2.8709376,2.4195313)(2.8809376,2.4345312) \curveto(2.8909376,2.4495313)(2.9109375,2.4795313)(2.9209375,2.4945312) \curveto(2.9309375,2.5095313)(2.9509375,2.5395312)(2.9609375,2.5545313) \curveto(2.9709375,2.5695312)(2.9909375,2.6045313)(3.0009375,2.6245313) \curveto(3.0109375,2.6445312)(3.0359375,2.6745312)(3.0509374,2.6845312) \curveto(3.0659375,2.6945312)(3.0959375,2.7145312)(3.1109376,2.7245312) \curveto(3.1259375,2.7345312)(3.1609375,2.7495313)(3.1809375,2.7545311) 
\curveto(3.2009375,2.7595313)(3.2359376,2.7745314)(3.2509375,2.7845314) \curveto(3.2659376,2.7945313)(3.2959375,2.8045313)(3.3409376,2.8045313) } \psframe[linewidth=0.06,dimen=outer](6.4809375,2.8345313)(0.4309375,-3.2154686) \psline[linewidth=0.1cm](0.4409375,0.82453126)(1.0609375,0.82453126) \psline[linewidth=0.1cm](5.8409376,0.84453124)(6.4609375,0.84453124) \psbezier[linewidth=0.06](1.0409375,0.82453126)(2.8809376,0.84453124)(2.4609458,2.808614)(3.4609375,2.8045313)(4.4609294,2.8004484)(4.0609374,0.84453124)(5.8609376,0.84453124) \psline[linewidth=0.1cm](0.4409375,-1.1554687)(1.0609375,-1.1554687) \psline[linewidth=0.1cm](5.8409376,-1.1754688)(6.4609375,-1.1754688) \psline[linewidth=0.1cm](2.4609375,-3.2154686)(2.4609375,-2.5954688) \psline[linewidth=0.1cm](4.4609375,-3.1954687)(4.4609375,-2.5754688) \psbezier[linewidth=0.06](1.0409375,-1.1554687)(2.2609375,-1.1554687)(2.4609375,-1.3554688)(2.4609375,-2.6154687) \psbezier[linewidth=0.06](4.4609375,-2.5954688)(4.4609375,-1.3954687)(4.6809373,-1.1754688)(5.8609376,-1.1754688) \psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(0.4409375,2.7645311)(0.4409375,3.5245314) \psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(6.2609377,-3.1954687)(7.1809373,-3.1954687) \usefont{T1}{ptm}{m}{n} \rput(3.3923438,-3.4454687){$1/2$} \usefont{T1}{ptm}{m}{n} \rput(4.452344,-3.4454687){$1$} \usefont{T1}{ptm}{m}{n} \rput(6.412344,-3.4454687){$2$} \usefont{T1}{ptm}{m}{n} \rput(2.4123437,-3.4254687){$0$} \usefont{T1}{ptm}{m}{n} \rput(0.29234374,-3.4254687){$-1$} \usefont{T1}{ptm}{m}{n} \rput(0.25234374,-1.1854688){$0$} \usefont{T1}{ptm}{m}{n} \rput(0.27234375,0.7945312){$1$} \usefont{T1}{ptm}{m}{n} \rput(0.25234374,2.7945313){$2$} \psline[linewidth=0.02cm,linestyle=dashed,dash=0.16cm 0.16cm](3.4609375,-3.1554687)(3.4609375,2.8045313) \usefont{T1}{ptm}{m}{n} \rput(0.29234374,3.4345312){$y$} \usefont{T1}{ptm}{m}{n} \rput(6.9623437,-3.4254687){$x$} 
\usefont{T1}{ptm}{m}{n} \rput(2.7723436,1.3745313){$\gamma_0$} \usefont{T1}{ptm}{m}{n} \rput(2.6323438,-1.3254688){$\gamma_{-1}$} \usefont{T1}{ptm}{m}{n} \rput(4.432344,-1.3254688){$\gamma_{1}$} \usefont{T1}{ptm}{m}{n} \rput(3.7823439,0.57453126){$T$} \psdots[dotsize=0.16](0.4609375,0.82453126) \psdots[dotsize=0.16](0.4609375,-1.1554687) \psdots[dotsize=0.16](0.4409375,2.7845314) \psdots[dotsize=0.16](0.4609375,-3.1754687) \psdots[dotsize=0.16](2.4609375,-3.1954687) \psdots[dotsize=0.16](4.4609375,-3.1754687) \psdots[dotsize=0.16](3.4609375,-3.1954687) \psdots[dotsize=0.16](6.4609375,-3.1954687) \end{pspicture} } \\ Fig. 3a: the ``base'' subset $T$ of the square $R$ \end{center}
Let
\begin{equation}\label{eq:X-def}
X=T\times [0,1/2]\subset \{(x,y,p)\mid x,y\in [-1,2],\ p\in [0,1/2]\}.
\end{equation}
Denote the left and right parts of the boundary of the level
\begin{equation}\label{eq:Tp}
T_p:=T\times \{p\}
\end{equation}
by
\begin{equation}\label{eq:Epm}
\begin{aligned}
E_{p,-1}&:=\{(-1,y,p)\mid y\in [0,1]\} \text{ and}\\
E_{p, 1}&:=\{( 2,y,p)\mid y\in [0,1]\}.
\end{aligned}
\end{equation}
Keeping the same notation as before, we denote the lower boundary $y=-1$ of $T_p$ by
\begin{equation}\label{eq:Ip3dim}
I_p=\{(x,-1,p)\mid x\in [0,1]\},
\end{equation}
and split it into three parts
\begin{equation}\label{eq:Ipk3dim}
\begin{aligned}
I_{p,-1}&:=\{(x,-1,p)\mid x\in [0,\frac{1-p}{2})\},\\
I_{p,0}&:=\{(x,-1,p)\mid x\in [\frac{1-p}{2},\frac{1+p}{2}]\}, \text{ and}\\
I_{p,1}&:=\{(x,-1,p)\mid x\in (\frac{1+p}{2},1]\}
\end{aligned}
\end{equation}
as in~\eqref{eq:Ip} and~\eqref{eq:Ipk} (the only difference in notation is the additional coordinate $y$, subject to the condition $y=-1$). We take $X$ and, for every $p\in [0,1/2]$, glue both intervals $E_{p,-1}$ and $E_{p,1}$ linearly onto \textit{the same} interval $I_{q(p)}$, where the function $q$ is defined in~\eqref{eq:q-def}.
This equivalence is specified as follows:
\begin{equation}\label{eq:equiv3}
\begin{aligned}
(-1,y,p)&\equiv(y,-1,q(p)),\\
( 2,y,p)&\equiv(y,-1,q(p)).
\end{aligned}
\end{equation}
Thus we obtain a stratified $3$-manifold $\widetilde X$ with one $2$-dimensional stratum
\begin{equation}\label{eq:S_2}
S_2:=\cup_{p=0}^{1/2} I_{q(p)}=\{(x,-1,p)\mid x\in[0,1],\ p\in[0,1/3]\},
\end{equation}
which coincides (in $\widetilde X$) with both rectangles
\begin{equation}\label{eq:Fpm}
F_{\pm}:=\cup_{p=0}^{1/2} E_{p,\pm 1},
\end{equation}
see~\eqref{eq:Epm}, according to the equivalence~\eqref{eq:equiv3}.
\subsection{Construction of a flow}
To describe the flow on $\widetilde X$, we first describe a flow on $X$. This flow has two saddles $a_p$ and $b_p$ on each level $T_p$~\eqref{eq:Tp}. They lie at the intersection of the curve $\gamma_{p,0}=\gamma_0 \times \{p\}$ with the line $l_p=\{(x,y,p)\mid y=2-p/2\}$ on the level $T_p$; hence they depend smoothly on $p$ and collide at the point $(1/2,2,0)$ for $p=0$. Let
\begin{equation}\label{eq:Xpm}
\begin{aligned}
X^-&:=\{(x,y,p)\in X\mid y< 2-p/2\},\\
X^+&:=\{(x,y,p)\in X\mid y\geqslant 2-p/2\}.
\end{aligned}
\end{equation}
The set $X^-$ can be described as the ``part of $X$ that lies below the plane $L$ containing all the $a_p$'s and $b_p$'s''. The function $p$ is a first integral of the flow restricted to $X^-$, and the flow has no singular points on $X^-$. The left and right endpoints of the interval $I_{p,0}$ (see~\eqref{eq:Ip3dim},~\eqref{eq:Ipk3dim}) lie on the incoming separatrices of $a_p$ and $b_p$, respectively (for $p=0$ these endpoints collide, just as $a_0$ and $b_0$ do). The outgoing separatrices of $a_p$ and $b_p$ for the flow restricted to $X^-_p=X_p\cap X^-$ are the respective parts of $\gamma_{p,0}$. All other trajectories in $X^-_p$ go from $I_{p,\pm 1}$ to $E_{p,\pm 1}$ and from $I_{p,0}$ to the interval $E_{p,0}$ of the line $l_p$ between $a_p$ and $b_p$ (see Figures~3b,~3c).
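As a quick numerical illustration of the construction above (not part of the argument), the following sketch encodes the splitting of $I_p$ from~\eqref{eq:Ipk3dim} and the gluing of $E_{p,\pm 1}$ onto $I_{q(p)}$. The actual function $q$ of~\eqref{eq:q-def} is not reproduced in this section, so it enters only as a parameter; the particular map used below is a hypothetical placeholder.

```python
# Illustrative sketch.  Assumption: q is the function from eq. (q-def),
# which is not reproduced here, so it is passed in as a parameter.

def split_points(p):
    """Interior endpoints separating I_{p,-1}, I_{p,0}, I_{p,1} inside [0, 1]."""
    return (1 - p) / 2, (1 + p) / 2

def glue(point, q):
    """Gluing map: a point (x, y, p) of E_{p,-1} (x = -1) or E_{p,1} (x = 2)
    is sent linearly to the point (y, -1, q(p)) of the interval I_{q(p)}."""
    x, y, p = point
    assert x in (-1, 2) and 0 <= y <= 1 and 0 <= p <= 0.5
    return (y, -1, q(p))

# Sanity checks of the splitting: the middle piece I_{p,0} has length p,
# and at p = 0 it degenerates to the single point x = 1/2.
for p in (0.0, 0.25, 0.5):
    a, b = split_points(p)
    assert 0 <= a <= b <= 1 and abs((b - a) - p) < 1e-12
assert split_points(0.0) == (0.5, 0.5)

q = lambda p: p / 3  # hypothetical placeholder for the actual q
y, p = 0.3, 0.4
# Both vertical boundary intervals are glued to the SAME interval I_{q(p)}:
assert glue((-1, y, p), q) == glue((2, y, p), q) == (y, -1, q(p))
```

In particular, the last assertion is exactly the identification~\eqref{eq:equiv3}: the two vertical sides of the level $T_p$ become one interval in the lower boundary of the level $T_{q(p)}$.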
\begin{center} \scalebox{1.2} { \begin{pspicture}(0,-3.6359375)(8.45375,3.6359375) \definecolor{color4517g}{rgb}{0.6,0.6,0.6} \pscustom[linewidth=0.02,fillstyle=gradient,gradlines=2000,gradbegin=color4517g,gradend=color4517g,gradmidpoint=1.0] { \newpath \moveto(1.3009375,-3.1625) \lineto(1.3009375,-3.1125) \curveto(1.3009375,-3.0875)(1.3009375,-3.0375)(1.3009375,-3.0125) \curveto(1.3009375,-2.9875)(1.3009375,-2.9375)(1.3009375,-2.9125) \curveto(1.3009375,-2.8875)(1.3009375,-2.8375)(1.3009375,-2.8125) \curveto(1.3009375,-2.7875)(1.2959375,-2.7425)(1.2909375,-2.7225) \curveto(1.2859375,-2.7025)(1.2809376,-2.6575)(1.2809376,-2.6325) \curveto(1.2809376,-2.6075)(1.2809376,-2.5575)(1.2809376,-2.5325) \curveto(1.2809376,-2.5075)(1.2859375,-2.4625)(1.2909375,-2.4425) \curveto(1.2959375,-2.4225)(1.2959375,-2.3825)(1.2909375,-2.3625) \curveto(1.2859375,-2.3425)(1.2809376,-2.2975)(1.2809376,-2.2725) \curveto(1.2809376,-2.2475)(1.2859375,-2.2025)(1.2909375,-2.1825) \curveto(1.2959375,-2.1625)(1.3009375,-2.1175)(1.3009375,-2.0925) \curveto(1.3009375,-2.0675)(1.3009375,-2.0175)(1.3009375,-1.9925) \curveto(1.3009375,-1.9675)(1.3009375,-1.9175)(1.3009375,-1.8925) \curveto(1.3009375,-1.8675)(1.2959375,-1.8225)(1.2909375,-1.8025) \curveto(1.2859375,-1.7825)(1.2809376,-1.7375)(1.2809376,-1.7125) \curveto(1.2809376,-1.6875)(1.2809376,-1.6375)(1.2809376,-1.6125) \curveto(1.2809376,-1.5875)(1.2809376,-1.5375)(1.2809376,-1.5125) \curveto(1.2809376,-1.4875)(1.2859375,-1.4425)(1.2909375,-1.4225) \curveto(1.2959375,-1.4025)(1.3009375,-1.3575)(1.3009375,-1.3325) \curveto(1.3009375,-1.3075)(1.3009375,-1.2575)(1.3009375,-1.2325) \curveto(1.3009375,-1.2075)(1.3159375,-1.1725)(1.3309375,-1.1625) \curveto(1.3459375,-1.1525)(1.3859375,-1.1425)(1.4109375,-1.1425) \curveto(1.4359375,-1.1425)(1.4859375,-1.1425)(1.5109375,-1.1425) \curveto(1.5359375,-1.1425)(1.5859375,-1.1425)(1.6109375,-1.1425) \curveto(1.6359375,-1.1425)(1.6859375,-1.1425)(1.7109375,-1.1425)
\curveto(1.7359375,-1.1425)(1.7859375,-1.1425)(1.8109375,-1.1425) \curveto(1.8359375,-1.1425)(1.8809375,-1.1375)(1.9009376,-1.1325) \curveto(1.9209375,-1.1275)(1.9659375,-1.1225)(1.9909375,-1.1225) \curveto(2.0159376,-1.1225)(2.0659375,-1.1225)(2.0909376,-1.1225) \curveto(2.1159375,-1.1225)(2.1609375,-1.1275)(2.1809375,-1.1325) \curveto(2.2009375,-1.1375)(2.2459376,-1.1425)(2.2709374,-1.1425) \curveto(2.2959375,-1.1425)(2.3459375,-1.1425)(2.3709376,-1.1425) \curveto(2.3959374,-1.1425)(2.4409375,-1.1475)(2.4609375,-1.1525) \curveto(2.4809375,-1.1575)(2.5209374,-1.1675)(2.5409374,-1.1725) \curveto(2.5609374,-1.1775)(2.6009376,-1.1875)(2.6209376,-1.1925) \curveto(2.6409376,-1.1975)(2.6809375,-1.2075)(2.7009375,-1.2125) \curveto(2.7209375,-1.2175)(2.7609375,-1.2275)(2.7809374,-1.2325) \curveto(2.8009374,-1.2375)(2.8409376,-1.2475)(2.8609376,-1.2525) \curveto(2.8809376,-1.2575)(2.9159374,-1.2725)(2.9309375,-1.2825) \curveto(2.9459374,-1.2925)(2.9759376,-1.3125)(2.9909375,-1.3225) \curveto(3.0059376,-1.3325)(3.0309374,-1.3575)(3.0409374,-1.3725) \curveto(3.0509374,-1.3875)(3.0709374,-1.4175)(3.0809374,-1.4325) \curveto(3.0909376,-1.4475)(3.1109376,-1.4775)(3.1209376,-1.4925) \curveto(3.1309376,-1.5075)(3.1509376,-1.5375)(3.1609375,-1.5525) \curveto(3.1709375,-1.5675)(3.1859374,-1.6025)(3.1909375,-1.6225) \curveto(3.1959374,-1.6425)(3.2059374,-1.6825)(3.2109375,-1.7025) \curveto(3.2159376,-1.7225)(3.2259376,-1.7625)(3.2309375,-1.7825) \curveto(3.2359376,-1.8025)(3.2459376,-1.8425)(3.2509375,-1.8625) \curveto(3.2559376,-1.8825)(3.2609375,-1.9275)(3.2609375,-1.9525) \curveto(3.2609375,-1.9775)(3.2659376,-2.0225)(3.2709374,-2.0425) \curveto(3.2759376,-2.0625)(3.2809374,-2.1075)(3.2809374,-2.1325) \curveto(3.2809374,-2.1575)(3.2809374,-2.2075)(3.2809374,-2.2325) \curveto(3.2809374,-2.2575)(3.2809374,-2.3075)(3.2809374,-2.3325) \curveto(3.2809374,-2.3575)(3.2859375,-2.4025)(3.2909374,-2.4225) \curveto(3.2959375,-2.4425)(3.3009374,-2.4875)(3.3009374,-2.5125) 
\curveto(3.3009374,-2.5375)(3.3009374,-2.5875)(3.3009374,-2.6125) \curveto(3.3009374,-2.6375)(3.3009374,-2.6875)(3.3009374,-2.7125) \curveto(3.3009374,-2.7375)(3.3009374,-2.7875)(3.3009374,-2.8125) \curveto(3.3009374,-2.8375)(3.3009374,-2.8875)(3.3009374,-2.9125) \curveto(3.3009374,-2.9375)(3.3009374,-2.9875)(3.3009374,-3.0125) \curveto(3.3009374,-3.0375)(3.3009374,-3.0875)(3.3009374,-3.1125) \curveto(3.3009374,-3.1375)(3.3009374,-3.1625)(3.3009374,-3.1625) } \pscustom[linewidth=0.02,fillstyle=gradient,gradlines=2000,gradbegin=color4517g,gradend=color4517g,gradmidpoint=1.0] { \newpath \moveto(7.3209376,-3.1625) \lineto(7.3109374,-3.1225) \curveto(7.3059373,-3.1025)(7.3009377,-3.0575)(7.3009377,-3.0325) \curveto(7.3009377,-3.0075)(7.3009377,-2.9575)(7.3009377,-2.9325) \curveto(7.3009377,-2.9075)(7.3009377,-2.8575)(7.3009377,-2.8325) \curveto(7.3009377,-2.8075)(7.3009377,-2.7575)(7.3009377,-2.7325) \curveto(7.3009377,-2.7075)(7.3009377,-2.6575)(7.3009377,-2.6325) \curveto(7.3009377,-2.6075)(7.3009377,-2.5575)(7.3009377,-2.5325) \curveto(7.3009377,-2.5075)(7.3009377,-2.4575)(7.3009377,-2.4325) \curveto(7.3009377,-2.4075)(7.3009377,-2.3575)(7.3009377,-2.3325) \curveto(7.3009377,-2.3075)(7.3009377,-2.2575)(7.3009377,-2.2325) \curveto(7.3009377,-2.2075)(7.3009377,-2.1575)(7.3009377,-2.1325) \curveto(7.3009377,-2.1075)(7.3009377,-2.0575)(7.3009377,-2.0325) \curveto(7.3009377,-2.0075)(7.3009377,-1.9575)(7.3009377,-1.9325) \curveto(7.3009377,-1.9075)(7.3009377,-1.8575)(7.3009377,-1.8325) \curveto(7.3009377,-1.8075)(7.3009377,-1.7575)(7.3009377,-1.7325) \curveto(7.3009377,-1.7075)(7.3009377,-1.6575)(7.3009377,-1.6325) \curveto(7.3009377,-1.6075)(7.3009377,-1.5575)(7.3009377,-1.5325) \curveto(7.3009377,-1.5075)(7.3009377,-1.4575)(7.3009377,-1.4325) \curveto(7.3009377,-1.4075)(7.3009377,-1.3575)(7.3009377,-1.3325) \curveto(7.3009377,-1.3075)(7.3009377,-1.2575)(7.3009377,-1.2325) \curveto(7.3009377,-1.2075)(7.2809377,-1.1775)(7.2609377,-1.1725) 
\curveto(7.2409377,-1.1675)(7.1959376,-1.1625)(7.1709375,-1.1625) \curveto(7.1459374,-1.1625)(7.0959377,-1.1625)(7.0709376,-1.1625) \curveto(7.0459375,-1.1625)(6.9959373,-1.1625)(6.9709377,-1.1625) \curveto(6.9459376,-1.1625)(6.8959374,-1.1625)(6.8709373,-1.1625) \curveto(6.8459377,-1.1625)(6.7959375,-1.1625)(6.7709374,-1.1625) \curveto(6.7459373,-1.1625)(6.6959376,-1.1625)(6.6709375,-1.1625) \curveto(6.6459374,-1.1625)(6.5959377,-1.1625)(6.5709376,-1.1625) \curveto(6.5459375,-1.1625)(6.5009375,-1.1675)(6.4809375,-1.1725) \curveto(6.4609375,-1.1775)(6.4159374,-1.1825)(6.3909373,-1.1825) \curveto(6.3659377,-1.1825)(6.3159375,-1.1825)(6.2909374,-1.1825) \curveto(6.2659373,-1.1825)(6.2209377,-1.1875)(6.2009373,-1.1925) \curveto(6.1809373,-1.1975)(6.1359377,-1.2025)(6.1109376,-1.2025) \curveto(6.0859375,-1.2025)(6.0409374,-1.2075)(6.0209374,-1.2125) \curveto(6.0009375,-1.2175)(5.9609375,-1.2325)(5.9409375,-1.2425) \curveto(5.9209375,-1.2525)(5.8809376,-1.2675)(5.8609376,-1.2725) \curveto(5.8409376,-1.2775)(5.8009377,-1.2875)(5.7809377,-1.2925) \curveto(5.7609377,-1.2975)(5.7259374,-1.3125)(5.7109375,-1.3225) \curveto(5.6959376,-1.3325)(5.6659374,-1.3525)(5.6509376,-1.3625) \curveto(5.6359377,-1.3725)(5.6059375,-1.3975)(5.5909376,-1.4125) \curveto(5.5759373,-1.4275)(5.5509377,-1.4575)(5.5409374,-1.4725) \curveto(5.5309377,-1.4875)(5.5109377,-1.5175)(5.5009375,-1.5325) \curveto(5.4909377,-1.5475)(5.4709377,-1.5775)(5.4609375,-1.5925) \curveto(5.4509373,-1.6075)(5.4359374,-1.6425)(5.4309373,-1.6625) \curveto(5.4259377,-1.6825)(5.4109373,-1.7175)(5.4009376,-1.7325) \curveto(5.3909373,-1.7475)(5.3759375,-1.7825)(5.3709373,-1.8025) \curveto(5.3659377,-1.8225)(5.3609376,-1.8675)(5.3609376,-1.8925) \curveto(5.3609376,-1.9175)(5.3559375,-1.9625)(5.3509374,-1.9825) \curveto(5.3459377,-2.0025)(5.3409376,-2.0475)(5.3409376,-2.0725) \curveto(5.3409376,-2.0975)(5.3359375,-2.1425)(5.3309374,-2.1625) \curveto(5.3259373,-2.1825)(5.3209376,-2.2275)(5.3209376,-2.2525) 
\curveto(5.3209376,-2.2775)(5.3159375,-2.3275)(5.3109374,-2.3525) \curveto(5.3059373,-2.3775)(5.3009377,-2.4275)(5.3009377,-2.4525) \curveto(5.3009377,-2.4775)(5.3009377,-2.5275)(5.3009377,-2.5525) \curveto(5.3009377,-2.5775)(5.2959375,-2.6275)(5.2909374,-2.6525) \curveto(5.2859373,-2.6775)(5.2809377,-2.7275)(5.2809377,-2.7525) \curveto(5.2809377,-2.7775)(5.2859373,-2.8225)(5.2909374,-2.8425) \curveto(5.2959375,-2.8625)(5.3009377,-2.9075)(5.3009377,-2.9325) \curveto(5.3009377,-2.9575)(5.3009377,-3.0075)(5.3009377,-3.0325) \curveto(5.3009377,-3.0575)(5.3059373,-3.1025)(5.3109374,-3.1225) \curveto(5.3159375,-3.1425)(5.3209376,-3.1675)(5.3209376,-3.1825) } \pscustom[linewidth=0.02,fillstyle=gradient,gradlines=2000,gradbegin=color4517g,gradend=color4517g,gradmidpoint=1.0] { \newpath \moveto(4.4009376,2.8375) \lineto(4.4409375,2.8175) \curveto(4.4609375,2.8075)(4.5009375,2.7875)(4.5209374,2.7775) \curveto(4.5409374,2.7675)(4.5759373,2.7475)(4.5909376,2.7375) \curveto(4.6059375,2.7275)(4.6359377,2.7075)(4.6509376,2.6975) \curveto(4.6659374,2.6875)(4.6959376,2.6675)(4.7109375,2.6575) \curveto(4.7259374,2.6475)(4.7509375,2.6225)(4.7609377,2.6075) \curveto(4.7709374,2.5925)(4.7909374,2.5625)(4.8009377,2.5475) \curveto(4.8109374,2.5325)(4.8309374,2.5025)(4.8409376,2.4875) \curveto(4.8509374,2.4725)(4.8709373,2.4425)(4.8809376,2.4275) \curveto(4.8909373,2.4125)(4.9109373,2.3825)(4.9209375,2.3675) \curveto(4.9309373,2.3525)(4.9509373,2.3175)(4.9609375,2.2975) \curveto(4.9709377,2.2775)(4.9909377,2.2375)(5.0009375,2.2175) \curveto(5.0109377,2.1975)(5.0309377,2.1575)(5.0409374,2.1375) \curveto(5.0509377,2.1175)(5.0709376,2.0825)(5.0809374,2.0675) \curveto(5.0909376,2.0525)(5.1059375,2.0175)(5.1109376,1.9975) \curveto(5.1159377,1.9775)(5.1309376,1.9425)(5.1409373,1.9275) \curveto(5.1509376,1.9125)(5.1709375,1.8825)(5.1809373,1.8675) \curveto(5.1909375,1.8525)(5.2059374,1.8175)(5.2109375,1.7975) \curveto(5.2159376,1.7775)(5.2309375,1.7425)(5.2409377,1.7275) 
\curveto(5.2509375,1.7125)(5.2709374,1.6825)(5.2809377,1.6675) \curveto(5.2909374,1.6525)(5.3159375,1.6175)(5.3309374,1.5975) \curveto(5.3459377,1.5775)(5.3709373,1.5425)(5.3809376,1.5275) \curveto(5.3909373,1.5125)(5.4109373,1.4825)(5.4209375,1.4675) \curveto(5.4309373,1.4525)(5.4509373,1.4225)(5.4609375,1.4075) \curveto(5.4709377,1.3925)(5.4909377,1.3625)(5.5009375,1.3475) \curveto(5.5109377,1.3325)(5.5359373,1.3075)(5.5509377,1.2975) \curveto(5.5659375,1.2875)(5.5959377,1.2675)(5.6109376,1.2575) \curveto(5.6259375,1.2475)(5.6559377,1.2225)(5.6709375,1.2075) \curveto(5.6859374,1.1925)(5.7159376,1.1675)(5.7309375,1.1575) \curveto(5.7459373,1.1475)(5.7759376,1.1275)(5.7909374,1.1175) \curveto(5.8059373,1.1075)(5.8359375,1.0875)(5.8509374,1.0775) \curveto(5.8659377,1.0675)(5.8959374,1.0475)(5.9109373,1.0375) \curveto(5.9259377,1.0275)(5.9559374,1.0075)(5.9709377,0.9975) \curveto(5.9859376,0.9875)(6.0259376,0.9725)(6.0509377,0.9675) \curveto(6.0759373,0.9625)(6.1209373,0.9525)(6.1409373,0.9475) \curveto(6.1609373,0.9425)(6.2009373,0.9325)(6.2209377,0.9275) \curveto(6.2409377,0.9225)(6.2809377,0.9125)(6.3009377,0.9075) \curveto(6.3209376,0.9025)(6.3659377,0.8925)(6.3909373,0.8875) \curveto(6.4159374,0.8825)(6.4659376,0.8775)(6.4909377,0.8775) \curveto(6.5159373,0.8775)(6.5659375,0.8775)(6.5909376,0.8775) \curveto(6.6159377,0.8775)(6.6659374,0.8725)(6.6909375,0.8675) \curveto(6.7159376,0.8625)(6.7709374,0.8575)(6.8009377,0.8575) \curveto(6.8309374,0.8575)(6.8859377,0.8575)(6.9109373,0.8575) \curveto(6.9359374,0.8575)(6.9859376,0.8575)(7.0109377,0.8575) \curveto(7.0359373,0.8575)(7.0859375,0.8525)(7.1109376,0.8475) \curveto(7.1359377,0.8425)(7.1859374,0.8375)(7.2109375,0.8375) \curveto(7.2359376,0.8375)(7.2659373,0.8575)(7.2709374,0.8775) \curveto(7.2759376,0.8975)(7.2859373,0.9375)(7.2909374,0.9575) \curveto(7.2959375,0.9775)(7.3009377,1.0225)(7.3009377,1.0475) \curveto(7.3009377,1.0725)(7.3009377,1.1225)(7.3009377,1.1475) 
\curveto(7.3009377,1.1725)(7.3009377,1.2225)(7.3009377,1.2475) \curveto(7.3009377,1.2725)(7.2959375,1.3175)(7.2909374,1.3375) \curveto(7.2859373,1.3575)(7.2809377,1.4025)(7.2809377,1.4275) \curveto(7.2809377,1.4525)(7.2809377,1.5025)(7.2809377,1.5275) \curveto(7.2809377,1.5525)(7.2809377,1.6025)(7.2809377,1.6275) \curveto(7.2809377,1.6525)(7.2809377,1.7025)(7.2809377,1.7275) \curveto(7.2809377,1.7525)(7.2809377,1.8025)(7.2809377,1.8275) \curveto(7.2809377,1.8525)(7.2809377,1.9025)(7.2809377,1.9275) \curveto(7.2809377,1.9525)(7.2809377,2.0025)(7.2809377,2.0275) \curveto(7.2809377,2.0525)(7.2809377,2.1025)(7.2809377,2.1275) \curveto(7.2809377,2.1525)(7.2809377,2.2025)(7.2809377,2.2275) \curveto(7.2809377,2.2525)(7.2809377,2.3025)(7.2809377,2.3275) \curveto(7.2809377,2.3525)(7.2809377,2.4025)(7.2809377,2.4275) \curveto(7.2809377,2.4525)(7.2809377,2.5025)(7.2809377,2.5275) \curveto(7.2809377,2.5525)(7.2809377,2.6025)(7.2809377,2.6275) \curveto(7.2809377,2.6525)(7.2809377,2.7025)(7.2809377,2.7275) \curveto(7.2809377,2.7525)(7.2809377,2.7925)(7.2809377,2.8375) } \pscustom[linewidth=0.02,fillstyle=gradient,gradlines=2000,gradbegin=color4517g,gradend=color4517g,gradmidpoint=1.0] { \newpath \moveto(1.2809376,2.8375) \lineto(1.2809376,2.7875) \curveto(1.2809376,2.7625)(1.2809376,2.7125)(1.2809376,2.6875) \curveto(1.2809376,2.6625)(1.2809376,2.6125)(1.2809376,2.5875) \curveto(1.2809376,2.5625)(1.2809376,2.5125)(1.2809376,2.4875) \curveto(1.2809376,2.4625)(1.2809376,2.4125)(1.2809376,2.3875) \curveto(1.2809376,2.3625)(1.2809376,2.3125)(1.2809376,2.2875) \curveto(1.2809376,2.2625)(1.2809376,2.2125)(1.2809376,2.1875) \curveto(1.2809376,2.1625)(1.2809376,2.1125)(1.2809376,2.0875) \curveto(1.2809376,2.0625)(1.2809376,2.0125)(1.2809376,1.9875) \curveto(1.2809376,1.9625)(1.2809376,1.9125)(1.2809376,1.8875) \curveto(1.2809376,1.8625)(1.2809376,1.8125)(1.2809376,1.7875) \curveto(1.2809376,1.7625)(1.2809376,1.7125)(1.2809376,1.6875) 
\curveto(1.2809376,1.6625)(1.2809376,1.6125)(1.2809376,1.5875) \curveto(1.2809376,1.5625)(1.2809376,1.5225)(1.2809376,1.5075) \curveto(1.2809376,1.4925)(1.2809376,1.4525)(1.2809376,1.4275) \curveto(1.2809376,1.4025)(1.2809376,1.3525)(1.2809376,1.3275) \curveto(1.2809376,1.3025)(1.2809376,1.2525)(1.2809376,1.2275) \curveto(1.2809376,1.2025)(1.2809376,1.1525)(1.2809376,1.1275) \curveto(1.2809376,1.1025)(1.2809376,1.0525)(1.2809376,1.0275) \curveto(1.2809376,1.0025)(1.2809376,0.9525)(1.2809376,0.9275) \curveto(1.2809376,0.9025)(1.3009375,0.8725)(1.3209375,0.8675) \curveto(1.3409375,0.8625)(1.3859375,0.8575)(1.4109375,0.8575) \curveto(1.4359375,0.8575)(1.4809375,0.8525)(1.5009375,0.8475) \curveto(1.5209374,0.8425)(1.5659375,0.8375)(1.5909375,0.8375) \curveto(1.6159375,0.8375)(1.6659375,0.8375)(1.6909375,0.8375) \curveto(1.7159375,0.8375)(1.7659374,0.8375)(1.7909375,0.8375) \curveto(1.8159375,0.8375)(1.8559375,0.8375)(1.8709375,0.8375) \curveto(1.8859375,0.8375)(1.9259375,0.8375)(1.9509375,0.8375) \curveto(1.9759375,0.8375)(2.0209374,0.8425)(2.0409374,0.8475) \curveto(2.0609374,0.8525)(2.1059375,0.8575)(2.1309376,0.8575) \curveto(2.1559374,0.8575)(2.2009375,0.8625)(2.2209375,0.8675) \curveto(2.2409375,0.8725)(2.2809374,0.8825)(2.3009374,0.8875) \curveto(2.3209374,0.8925)(2.3609376,0.9025)(2.3809376,0.9075) \curveto(2.4009376,0.9125)(2.4409375,0.9225)(2.4609375,0.9275) \curveto(2.4809375,0.9325)(2.5209374,0.9425)(2.5409374,0.9475) \curveto(2.5609374,0.9525)(2.5959375,0.9675)(2.6109376,0.9775) \curveto(2.6259375,0.9875)(2.6559374,1.0075)(2.6709375,1.0175) \curveto(2.6859374,1.0275)(2.7159376,1.0475)(2.7309375,1.0575) \curveto(2.7459376,1.0675)(2.7759376,1.0875)(2.7909374,1.0975) \curveto(2.8059375,1.1075)(2.8359375,1.1275)(2.8509376,1.1375) \curveto(2.8659375,1.1475)(2.8959374,1.1675)(2.9109375,1.1775) \curveto(2.9259374,1.1875)(2.9509375,1.2125)(2.9609375,1.2275) \curveto(2.9709375,1.2425)(2.9959376,1.2675)(3.0109375,1.2775) 
\curveto(3.0259376,1.2875)(3.0509374,1.3125)(3.0609374,1.3275) \curveto(3.0709374,1.3425)(3.0909376,1.3725)(3.1009376,1.3875) \curveto(3.1109376,1.4025)(3.1309376,1.4325)(3.1409376,1.4475) \curveto(3.1509376,1.4625)(3.1709375,1.4925)(3.1809375,1.5075) \curveto(3.1909375,1.5225)(3.2159376,1.5475)(3.2309375,1.5575) \curveto(3.2459376,1.5675)(3.2709374,1.5925)(3.2809374,1.6075) \curveto(3.2909374,1.6225)(3.3109374,1.6525)(3.3209374,1.6675) \curveto(3.3309374,1.6825)(3.3509376,1.7175)(3.3609376,1.7375) \curveto(3.3709376,1.7575)(3.3859375,1.7975)(3.3909376,1.8175) \curveto(3.3959374,1.8375)(3.4109375,1.8725)(3.4209375,1.8875) \curveto(3.4309375,1.9025)(3.4459374,1.9275)(3.4509375,1.9375) \curveto(3.4559374,1.9475)(3.4709375,1.9725)(3.4809375,1.9875) \curveto(3.4909375,2.0025)(3.5059376,2.0375)(3.5109375,2.0575) \curveto(3.5159376,2.0775)(3.5309374,2.1125)(3.5409374,2.1275) \curveto(3.5509374,2.1425)(3.5709374,2.1725)(3.5809374,2.1875) \curveto(3.5909376,2.2025)(3.6059375,2.2375)(3.6109376,2.2575) \curveto(3.6159375,2.2775)(3.6309376,2.3125)(3.6409376,2.3275) \curveto(3.6509376,2.3425)(3.6709375,2.3725)(3.6809375,2.3875) \curveto(3.6909375,2.4025)(3.7109375,2.4325)(3.7209375,2.4475) \curveto(3.7309375,2.4625)(3.7509375,2.4925)(3.7609375,2.5075) \curveto(3.7709374,2.5225)(3.7909374,2.5525)(3.8009374,2.5675) \curveto(3.8109374,2.5825)(3.8309374,2.6175)(3.8409376,2.6375) \curveto(3.8509376,2.6575)(3.8759375,2.6875)(3.8909376,2.6975) \curveto(3.9059374,2.7075)(3.9359374,2.7275)(3.9509375,2.7375) \curveto(3.9659376,2.7475)(4.0009375,2.7625)(4.0209374,2.7675) \curveto(4.0409374,2.7725)(4.0759373,2.7875)(4.0909376,2.7975) \curveto(4.1059375,2.8075)(4.1359377,2.8175)(4.1809373,2.8175) } \psframe[linewidth=0.06,dimen=outer](7.3409376,2.8575)(1.2709374,-3.2125) \psline[linewidth=0.1cm](1.2809376,0.8375)(1.9009376,0.8375) \psline[linewidth=0.1cm](6.6809373,0.8575)(7.3009377,0.8575) 
\psbezier[linewidth=0.06](1.8809375,0.8375)(3.7209375,0.8575)(3.3009458,2.8215828)(4.3009377,2.8175)(5.300929,2.8134172)(4.9009376,0.8575)(6.7009373,0.8575) \psline[linewidth=0.1cm](1.2809376,-1.1425)(1.9009376,-1.1425) \psline[linewidth=0.1cm](6.6809373,-1.1625)(7.3009377,-1.1625) \psline[linewidth=0.1cm](3.3009374,-3.1825)(3.3009374,-2.5625) \psline[linewidth=0.1cm](5.3009377,-3.1825)(5.3009377,-2.5625) \psbezier[linewidth=0.06](1.8809375,-1.1425)(3.1009376,-1.1425)(3.3009374,-1.3425)(3.3009374,-2.6025) \psbezier[linewidth=0.06](5.3009377,-2.5825)(5.3009377,-1.3825)(5.5209374,-1.1625)(6.7009373,-1.1625) \psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(1.3009375,2.7575)(1.3009375,3.5175) \psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(6.9209375,-3.1825)(7.8409376,-3.1825) \usefont{T1}{ptm}{m}{n} \rput(0.52234375,1.8475){$2-p/2$} \usefont{T1}{ptm}{m}{n} \rput(5.3923435,-3.3925){$1$} \usefont{T1}{ptm}{m}{n} \rput(7.2523437,-3.4125){$2$} \usefont{T1}{ptm}{m}{n} \rput(3.1323438,-3.3725){$0$} \usefont{T1}{ptm}{m}{n} \rput(1.1323438,-3.4125){$-1$} \usefont{T1}{ptm}{m}{n} \rput(1.1323438,-1.1325){$0$} \usefont{T1}{ptm}{m}{n} \rput(1.1523438,0.8275){$1$} \usefont{T1}{ptm}{m}{n} \rput(1.1323438,2.8075){$2$} \psline[linewidth=0.02cm,linestyle=dashed,dash=0.16cm 0.16cm](1.3009375,1.8375)(7.9009376,1.8375) \usefont{T1}{ptm}{m}{n} \rput(1.1323438,3.4475){$y$} \usefont{T1}{ptm}{m}{n} \rput(7.722344,-3.4125){$x$} \psdots[dotsize=0.16](3.4009376,1.8375) \psdots[dotsize=0.16](5.2009373,1.8375) \psbezier[linewidth=0.06](4.6809373,-3.1625)(4.6809373,-0.5425)(5.0609374,1.4975)(5.1809373,1.8375)(5.3009377,2.1775)(5.4809375,2.6375)(5.7009373,2.8175) \psbezier[linewidth=0.06](3.8809376,-3.1425)(3.9009376,-0.5425)(3.5287697,1.4598192)(3.4009376,1.8375)(3.2731051,2.2151809)(3.0809374,2.6375)(2.8809376,2.8175) \psline[linewidth=0.1cm](4.6809373,-3.1825)(4.6809373,-2.5625) 
\psline[linewidth=0.1cm](3.8809376,-3.1825)(3.8809376,-2.5625) \psbezier[linewidth=0.04](5.0809374,-3.1425)(5.1009374,-0.7425)(5.2409377,-0.0425)(5.7009373,-0.1425)(6.1609373,-0.2425)(5.7809377,-0.5425)(7.2609377,-0.5425) \psbezier[linewidth=0.04](3.4809375,-3.1625)(3.4609375,-1.4425)(3.3810425,-0.06492391)(2.9009376,-0.1625)(2.4208326,-0.26007608)(2.7209375,-0.5625)(1.3009375,-0.5425) \psbezier[linewidth=0.04](3.6809375,-3.1425)(3.6809375,-1.1425)(3.4809375,0.9575)(3.1009376,0.8575)(2.7209375,0.7575)(3.1209376,0.2575)(1.3009375,0.2575) \psbezier[linewidth=0.04](4.8809376,-3.1625)(4.9209375,-1.1425)(5.1209373,0.9175)(5.4809375,0.8375)(5.8409376,0.7575)(5.5209374,0.2375)(7.2609377,0.2375) \psbezier[linewidth=0.04](4.0809374,-3.1625)(4.1009374,0.6375)(3.6809375,1.2575)(3.7009375,1.4575)(3.7209375,1.6575)(3.8009374,1.7175)(3.8809376,1.8375) \psbezier[linewidth=0.04](4.4809375,-3.1625)(4.5009375,0.6575)(4.9609375,1.1975)(4.9009376,1.4575)(4.8409376,1.7175)(4.7409377,1.7775)(4.7009373,1.8375) \psbezier[linewidth=0.04](4.2809377,-3.1425)(4.3009377,0.2375)(4.3009377,-0.3025)(4.3009377,1.8375) \usefont{T1}{ptm}{m}{n} \rput(7.682344,2.0475){$l_p$} \usefont{T1}{ptm}{m}{n} \rput(4.912344,2.0475){$b_p$} \usefont{T1}{ptm}{m}{n} \rput(3.7823439,2.0075){$a_p$} \usefont{T1}{ptm}{m}{n} \rput(3.6684375,-3.4225){\small $I_{p,-1}$} \usefont{T1}{ptm}{m}{n} \rput(4.3084373,-3.4225){\small $I_{p,0}$} \usefont{T1}{ptm}{m}{n} \rput(4.9084377,-3.4225){\small $I_{p,1}$} \usefont{T1}{ptm}{m}{n} \rput(7.6784377,-0.1425){\small $E_{p,1}$} \usefont{T1}{ptm}{m}{n} \rput(0.7784375,-0.2025){\small $E_{p,-1}$} \psline[linewidth=0.04cm,arrowsize=0.05291667cm 4.0,arrowlength=2.0,arrowinset=0.4]{->}(3.0809374,2.5575)(3.3009374,2.1975) \psline[linewidth=0.02cm,arrowsize=0.05291667cm 6.0,arrowlength=2.0,arrowinset=0.4]{->}(5.5009375,2.5775)(5.3609376,2.2975) \psline[linewidth=0.02cm,arrowsize=0.05291667cm 6.0,arrowlength=2.0,arrowinset=0.4]{->}(5.9009376,-0.2225)(6.0809374,-0.3825)
\psline[linewidth=0.02cm,arrowsize=0.05291667cm 6.0,arrowlength=2.0,arrowinset=0.4]{->}(2.8609376,1.1175)(2.5609374,0.9575) \psline[linewidth=0.02cm,arrowsize=0.05291667cm 6.0,arrowlength=2.0,arrowinset=0.4]{->}(5.5209374,1.3575)(5.8409376,1.0575) \psline[linewidth=0.02cm,arrowsize=0.05291667cm 6.0,arrowlength=2.0,arrowinset=0.4]{->}(4.3009377,0.1775)(4.3009377,0.5375) \psline[linewidth=0.02cm,arrowsize=0.05291667cm 6.0,arrowlength=2.0,arrowinset=0.4]{->}(4.6809373,0.1175)(4.7209377,0.4975) \psline[linewidth=0.02cm,arrowsize=0.05291667cm 6.0,arrowlength=2.0,arrowinset=0.4]{->}(3.9409375,0.2175)(3.9009376,0.4975) \psline[linewidth=0.02cm,arrowsize=0.05291667cm 6.0,arrowlength=2.0,arrowinset=0.4]{->}(3.7009375,0.1575)(3.6609375,0.5175) \psline[linewidth=0.02cm,arrowsize=0.05291667cm 6.0,arrowlength=2.0,arrowinset=0.4]{->}(4.8809376,0.0975)(4.9409375,0.4975) \psline[linewidth=0.02cm,arrowsize=0.05291667cm 6.0,arrowlength=2.0,arrowinset=0.4]{->}(5.1409373,0.0975)(5.2409377,0.5175) \psline[linewidth=0.02cm,arrowsize=0.05291667cm 6.0,arrowlength=2.0,arrowinset=0.4]{->}(3.4609375,0.1575)(3.3809376,0.4975) \psline[linewidth=0.02cm,arrowsize=0.05291667cm 6.0,arrowlength=2.0,arrowinset=0.4]{->}(2.8009374,-0.1825)(2.5809374,-0.3025) \psline[linewidth=0.02cm,arrowsize=0.05291667cm 6.0,arrowlength=2.0,arrowinset=0.4]{->}(5.8009377,-1.2825)(6.0609374,-1.2025) \psline[linewidth=0.02cm,arrowsize=0.05291667cm 6.0,arrowlength=2.0,arrowinset=0.4]{->}(3.0409374,-1.3825)(2.8009374,-1.2425) \psline[linewidth=0.1cm](3.4809375,-3.1825)(3.4809375,-2.5625) \psline[linewidth=0.1cm](3.6809375,-3.1825)(3.6809375,-2.5625) \psline[linewidth=0.1cm](4.0809374,-3.1825)(4.0809374,-2.5625) \psline[linewidth=0.1cm](4.2809377,-3.1825)(4.2809377,-2.5625) \psline[linewidth=0.1cm](4.4809375,-3.1825)(4.4809375,-2.5625) \psline[linewidth=0.1cm](4.8809376,-3.1825)(4.8809376,-2.5625) \psline[linewidth=0.1cm](5.0809374,-3.1825)(5.0809374,-2.5625) \psline[linewidth=0.1cm](1.3209375,0.2775)(1.9409375,0.2775) 
\psline[linewidth=0.1cm](1.3009375,-0.5225)(1.9209375,-0.5225) \psline[linewidth=0.1cm](6.6609373,0.2575)(7.2809377,0.2575) \psline[linewidth=0.1cm](6.6609373,-0.5225)(7.2809377,-0.5225) \psdots[dotsize=0.16](1.3009375,0.8575) \psdots[dotsize=0.16](1.3009375,2.8175) \psdots[dotsize=0.16](1.3009375,-1.1425) \psdots[dotsize=0.16](1.2809376,-3.1825) \psdots[dotsize=0.16](3.3009374,-3.1825) \psdots[dotsize=0.16](5.3009377,-3.1825) \psdots[dotsize=0.16](7.3209376,-3.1825) \psdots[dotsize=0.16](3.8809376,-3.1825) \psdots[dotsize=0.16](4.6809373,-3.1825) \psdots[dotsize=0.16](7.2809377,0.8575) \psdots[dotsize=0.16](7.2809377,-1.1625) \end{pspicture} } \\ Fig. 3b: the flow on the level $T_p$ for $0<p\leqslant 1/2$ \end{center} \begin{center} \scalebox{1.2} { \begin{pspicture}(0,-3.6459374)(8.435,3.6459374) \definecolor{color4300g}{rgb}{0.6,0.6,0.6} \pscustom[linewidth=0.02,fillstyle=gradient,gradlines=2000,gradbegin=color4300g,gradend=color4300g,gradmidpoint=1.0] { \newpath \moveto(1.2821875,-3.1525) \lineto(1.2821875,-3.1025) \curveto(1.2821875,-3.0775)(1.2821875,-3.0275)(1.2821875,-3.0025) \curveto(1.2821875,-2.9775)(1.2821875,-2.9275)(1.2821875,-2.9025) \curveto(1.2821875,-2.8775)(1.2821875,-2.8275)(1.2821875,-2.8025) \curveto(1.2821875,-2.7775)(1.2771875,-2.7325)(1.2721875,-2.7125) \curveto(1.2671875,-2.6925)(1.2621875,-2.6475)(1.2621875,-2.6225) \curveto(1.2621875,-2.5975)(1.2621875,-2.5475)(1.2621875,-2.5225) \curveto(1.2621875,-2.4975)(1.2671875,-2.4525)(1.2721875,-2.4325) \curveto(1.2771875,-2.4125)(1.2771875,-2.3725)(1.2721875,-2.3525) \curveto(1.2671875,-2.3325)(1.2621875,-2.2875)(1.2621875,-2.2625) \curveto(1.2621875,-2.2375)(1.2671875,-2.1925)(1.2721875,-2.1725) \curveto(1.2771875,-2.1525)(1.2821875,-2.1075)(1.2821875,-2.0825) \curveto(1.2821875,-2.0575)(1.2821875,-2.0075)(1.2821875,-1.9825) \curveto(1.2821875,-1.9575)(1.2821875,-1.9075)(1.2821875,-1.8825) \curveto(1.2821875,-1.8575)(1.2771875,-1.8125)(1.2721875,-1.7925)
\curveto(1.2671875,-1.7725)(1.2621875,-1.7275)(1.2621875,-1.7025) \curveto(1.2621875,-1.6775)(1.2621875,-1.6275)(1.2621875,-1.6025) \curveto(1.2621875,-1.5775)(1.2621875,-1.5275)(1.2621875,-1.5025) \curveto(1.2621875,-1.4775)(1.2671875,-1.4325)(1.2721875,-1.4125) \curveto(1.2771875,-1.3925)(1.2821875,-1.3475)(1.2821875,-1.3225) \curveto(1.2821875,-1.2975)(1.2821875,-1.2475)(1.2821875,-1.2225) \curveto(1.2821875,-1.1975)(1.2971874,-1.1625)(1.3121876,-1.1525) \curveto(1.3271875,-1.1425)(1.3671875,-1.1325)(1.3921875,-1.1325) \curveto(1.4171875,-1.1325)(1.4671875,-1.1325)(1.4921875,-1.1325) \curveto(1.5171875,-1.1325)(1.5671875,-1.1325)(1.5921875,-1.1325) \curveto(1.6171875,-1.1325)(1.6671875,-1.1325)(1.6921875,-1.1325) \curveto(1.7171875,-1.1325)(1.7671875,-1.1325)(1.7921875,-1.1325) \curveto(1.8171875,-1.1325)(1.8621875,-1.1275)(1.8821875,-1.1225) \curveto(1.9021875,-1.1175)(1.9471875,-1.1125)(1.9721875,-1.1125) \curveto(1.9971875,-1.1125)(2.0471876,-1.1125)(2.0721874,-1.1125) \curveto(2.0971875,-1.1125)(2.1421876,-1.1175)(2.1621876,-1.1225) \curveto(2.1821876,-1.1275)(2.2271874,-1.1325)(2.2521875,-1.1325) \curveto(2.2771876,-1.1325)(2.3271875,-1.1325)(2.3521874,-1.1325) \curveto(2.3771875,-1.1325)(2.4221876,-1.1375)(2.4421875,-1.1425) \curveto(2.4621875,-1.1475)(2.5021875,-1.1575)(2.5221875,-1.1625) \curveto(2.5421875,-1.1675)(2.5821874,-1.1775)(2.6021874,-1.1825) \curveto(2.6221876,-1.1875)(2.6621876,-1.1975)(2.6821876,-1.2025) \curveto(2.7021875,-1.2075)(2.7421875,-1.2175)(2.7621875,-1.2225) \curveto(2.7821875,-1.2275)(2.8221874,-1.2375)(2.8421874,-1.2425) \curveto(2.8621874,-1.2475)(2.8971875,-1.2625)(2.9121876,-1.2725) \curveto(2.9271874,-1.2825)(2.9571874,-1.3025)(2.9721875,-1.3125) \curveto(2.9871874,-1.3225)(3.0121875,-1.3475)(3.0221875,-1.3625) \curveto(3.0321875,-1.3775)(3.0521874,-1.4075)(3.0621874,-1.4225) \curveto(3.0721874,-1.4375)(3.0921874,-1.4675)(3.1021874,-1.4825) \curveto(3.1121874,-1.4975)(3.1321876,-1.5275)(3.1421876,-1.5425) 
\curveto(3.1521876,-1.5575)(3.1671875,-1.5925)(3.1721876,-1.6125) \curveto(3.1771874,-1.6325)(3.1871874,-1.6725)(3.1921875,-1.6925) \curveto(3.1971874,-1.7125)(3.2071874,-1.7525)(3.2121875,-1.7725) \curveto(3.2171874,-1.7925)(3.2271874,-1.8325)(3.2321875,-1.8525) \curveto(3.2371874,-1.8725)(3.2421875,-1.9175)(3.2421875,-1.9425) \curveto(3.2421875,-1.9675)(3.2471876,-2.0125)(3.2521875,-2.0325) \curveto(3.2571876,-2.0525)(3.2621875,-2.0975)(3.2621875,-2.1225) \curveto(3.2621875,-2.1475)(3.2621875,-2.1975)(3.2621875,-2.2225) \curveto(3.2621875,-2.2475)(3.2621875,-2.2975)(3.2621875,-2.3225) \curveto(3.2621875,-2.3475)(3.2671876,-2.3925)(3.2721875,-2.4125) \curveto(3.2771876,-2.4325)(3.2821875,-2.4775)(3.2821875,-2.5025) \curveto(3.2821875,-2.5275)(3.2821875,-2.5775)(3.2821875,-2.6025) \curveto(3.2821875,-2.6275)(3.2821875,-2.6775)(3.2821875,-2.7025) \curveto(3.2821875,-2.7275)(3.2821875,-2.7775)(3.2821875,-2.8025) \curveto(3.2821875,-2.8275)(3.2821875,-2.8775)(3.2821875,-2.9025) \curveto(3.2821875,-2.9275)(3.2821875,-2.9775)(3.2821875,-3.0025) \curveto(3.2821875,-3.0275)(3.2821875,-3.0775)(3.2821875,-3.1025) \curveto(3.2821875,-3.1275)(3.2821875,-3.1525)(3.2821875,-3.1525) } \pscustom[linewidth=0.02,fillstyle=gradient,gradlines=2000,gradbegin=color4300g,gradend=color4300g,gradmidpoint=1.0] { \newpath \moveto(7.3021874,-3.1525) \lineto(7.2921877,-3.1125) \curveto(7.2871876,-3.0925)(7.2821875,-3.0475)(7.2821875,-3.0225) \curveto(7.2821875,-2.9975)(7.2821875,-2.9475)(7.2821875,-2.9225) \curveto(7.2821875,-2.8975)(7.2821875,-2.8475)(7.2821875,-2.8225) \curveto(7.2821875,-2.7975)(7.2821875,-2.7475)(7.2821875,-2.7225) \curveto(7.2821875,-2.6975)(7.2821875,-2.6475)(7.2821875,-2.6225) \curveto(7.2821875,-2.5975)(7.2821875,-2.5475)(7.2821875,-2.5225) \curveto(7.2821875,-2.4975)(7.2821875,-2.4475)(7.2821875,-2.4225) \curveto(7.2821875,-2.3975)(7.2821875,-2.3475)(7.2821875,-2.3225) \curveto(7.2821875,-2.2975)(7.2821875,-2.2475)(7.2821875,-2.2225) 
\curveto(7.2821875,-2.1975)(7.2821875,-2.1475)(7.2821875,-2.1225) \curveto(7.2821875,-2.0975)(7.2821875,-2.0475)(7.2821875,-2.0225) \curveto(7.2821875,-1.9975)(7.2821875,-1.9475)(7.2821875,-1.9225) \curveto(7.2821875,-1.8975)(7.2821875,-1.8475)(7.2821875,-1.8225) \curveto(7.2821875,-1.7975)(7.2821875,-1.7475)(7.2821875,-1.7225) \curveto(7.2821875,-1.6975)(7.2821875,-1.6475)(7.2821875,-1.6225) \curveto(7.2821875,-1.5975)(7.2821875,-1.5475)(7.2821875,-1.5225) \curveto(7.2821875,-1.4975)(7.2821875,-1.4475)(7.2821875,-1.4225) \curveto(7.2821875,-1.3975)(7.2821875,-1.3475)(7.2821875,-1.3225) \curveto(7.2821875,-1.2975)(7.2821875,-1.2475)(7.2821875,-1.2225) \curveto(7.2821875,-1.1975)(7.2621875,-1.1675)(7.2421875,-1.1625) \curveto(7.2221875,-1.1575)(7.1771874,-1.1525)(7.1521873,-1.1525) \curveto(7.1271877,-1.1525)(7.0771875,-1.1525)(7.0521874,-1.1525) \curveto(7.0271873,-1.1525)(6.9771876,-1.1525)(6.9521875,-1.1525) \curveto(6.9271874,-1.1525)(6.8771877,-1.1525)(6.8521876,-1.1525) \curveto(6.8271875,-1.1525)(6.7771873,-1.1525)(6.7521877,-1.1525) \curveto(6.7271876,-1.1525)(6.6771874,-1.1525)(6.6521873,-1.1525) \curveto(6.6271877,-1.1525)(6.5771875,-1.1525)(6.5521874,-1.1525) \curveto(6.5271873,-1.1525)(6.4821873,-1.1575)(6.4621873,-1.1625) \curveto(6.4421873,-1.1675)(6.3971877,-1.1725)(6.3721876,-1.1725) \curveto(6.3471875,-1.1725)(6.2971873,-1.1725)(6.2721877,-1.1725) \curveto(6.2471876,-1.1725)(6.2021875,-1.1775)(6.1821876,-1.1825) \curveto(6.1621876,-1.1875)(6.1171875,-1.1925)(6.0921874,-1.1925) \curveto(6.0671873,-1.1925)(6.0221877,-1.1975)(6.0021877,-1.2025) \curveto(5.9821873,-1.2075)(5.9421873,-1.2225)(5.9221873,-1.2325) \curveto(5.9021873,-1.2425)(5.8621874,-1.2575)(5.8421874,-1.2625) \curveto(5.8221874,-1.2675)(5.7821875,-1.2775)(5.7621875,-1.2825) \curveto(5.7421875,-1.2875)(5.7071877,-1.3025)(5.6921873,-1.3125) \curveto(5.6771874,-1.3225)(5.6471877,-1.3425)(5.6321874,-1.3525) \curveto(5.6171875,-1.3625)(5.5871873,-1.3875)(5.5721874,-1.4025) 
\curveto(5.5571876,-1.4175)(5.5321875,-1.4475)(5.5221877,-1.4625) \curveto(5.5121875,-1.4775)(5.4921875,-1.5075)(5.4821873,-1.5225) \curveto(5.4721875,-1.5375)(5.4521875,-1.5675)(5.4421873,-1.5825) \curveto(5.4321876,-1.5975)(5.4171877,-1.6325)(5.4121876,-1.6525) \curveto(5.4071875,-1.6725)(5.3921876,-1.7075)(5.3821874,-1.7225) \curveto(5.3721876,-1.7375)(5.3571873,-1.7725)(5.3521876,-1.7925) \curveto(5.3471875,-1.8125)(5.3421874,-1.8575)(5.3421874,-1.8825) \curveto(5.3421874,-1.9075)(5.3371873,-1.9525)(5.3321877,-1.9725) \curveto(5.3271875,-1.9925)(5.3221874,-2.0375)(5.3221874,-2.0625) \curveto(5.3221874,-2.0875)(5.3171873,-2.1325)(5.3121877,-2.1525) \curveto(5.3071876,-2.1725)(5.3021874,-2.2175)(5.3021874,-2.2425) \curveto(5.3021874,-2.2675)(5.2971873,-2.3175)(5.2921877,-2.3425) \curveto(5.2871876,-2.3675)(5.2821875,-2.4175)(5.2821875,-2.4425) \curveto(5.2821875,-2.4675)(5.2821875,-2.5175)(5.2821875,-2.5425) \curveto(5.2821875,-2.5675)(5.2771873,-2.6175)(5.2721877,-2.6425) \curveto(5.2671876,-2.6675)(5.2621875,-2.7175)(5.2621875,-2.7425) \curveto(5.2621875,-2.7675)(5.2671876,-2.8125)(5.2721877,-2.8325) \curveto(5.2771873,-2.8525)(5.2821875,-2.8975)(5.2821875,-2.9225) \curveto(5.2821875,-2.9475)(5.2821875,-2.9975)(5.2821875,-3.0225) \curveto(5.2821875,-3.0475)(5.2871876,-3.0925)(5.2921877,-3.1125) \curveto(5.2971873,-3.1325)(5.3021874,-3.1575)(5.3021874,-3.1725) } \pscustom[linewidth=0.02,fillstyle=gradient,gradlines=2000,gradbegin=color4300g,gradend=color4300g,gradmidpoint=1.0] { \newpath \moveto(4.3821874,2.8475) \lineto(4.4221873,2.8275) \curveto(4.4421873,2.8175)(4.4821873,2.7975)(4.5021877,2.7875) \curveto(4.5221877,2.7775)(4.5571876,2.7575)(4.5721874,2.7475) \curveto(4.5871873,2.7375)(4.6171875,2.7175)(4.6321874,2.7075) \curveto(4.6471877,2.6975)(4.6771874,2.6775)(4.6921873,2.6675) \curveto(4.7071877,2.6575)(4.7321873,2.6325)(4.7421875,2.6175) \curveto(4.7521877,2.6025)(4.7721877,2.5725)(4.7821875,2.5575) 
\curveto(4.7921877,2.5425)(4.8121877,2.5125)(4.8221874,2.4975) \curveto(4.8321877,2.4825)(4.8521876,2.4525)(4.8621874,2.4375) \curveto(4.8721876,2.4225)(4.8921876,2.3925)(4.9021873,2.3775) \curveto(4.9121876,2.3625)(4.9321876,2.3275)(4.9421873,2.3075) \curveto(4.9521875,2.2875)(4.9721875,2.2475)(4.9821873,2.2275) \curveto(4.9921875,2.2075)(5.0121875,2.1675)(5.0221877,2.1475) \curveto(5.0321875,2.1275)(5.0521874,2.0925)(5.0621877,2.0775) \curveto(5.0721874,2.0625)(5.0871873,2.0275)(5.0921874,2.0075) \curveto(5.0971875,1.9875)(5.1121874,1.9525)(5.1221876,1.9375) \curveto(5.1321874,1.9225)(5.1521873,1.8925)(5.1621876,1.8775) \curveto(5.1721873,1.8625)(5.1871877,1.8275)(5.1921873,1.8075) \curveto(5.1971874,1.7875)(5.2121873,1.7525)(5.2221875,1.7375) \curveto(5.2321873,1.7225)(5.2521877,1.6925)(5.2621875,1.6775) \curveto(5.2721877,1.6625)(5.2971873,1.6275)(5.3121877,1.6075) \curveto(5.3271875,1.5875)(5.3521876,1.5525)(5.3621874,1.5375) \curveto(5.3721876,1.5225)(5.3921876,1.4925)(5.4021873,1.4775) \curveto(5.4121876,1.4625)(5.4321876,1.4325)(5.4421873,1.4175) \curveto(5.4521875,1.4025)(5.4721875,1.3725)(5.4821873,1.3575) \curveto(5.4921875,1.3425)(5.5171876,1.3175)(5.5321875,1.3075) \curveto(5.5471873,1.2975)(5.5771875,1.2775)(5.5921874,1.2675) \curveto(5.6071873,1.2575)(5.6371875,1.2325)(5.6521873,1.2175) \curveto(5.6671877,1.2025)(5.6971874,1.1775)(5.7121873,1.1675) \curveto(5.7271876,1.1575)(5.7571874,1.1375)(5.7721877,1.1275) \curveto(5.7871876,1.1175)(5.8171873,1.0975)(5.8321877,1.0875) \curveto(5.8471875,1.0775)(5.8771877,1.0575)(5.8921876,1.0475) \curveto(5.9071875,1.0375)(5.9371877,1.0175)(5.9521875,1.0075) \curveto(5.9671874,0.9975)(6.0071874,0.9825)(6.0321875,0.9775) \curveto(6.0571876,0.9725)(6.1021876,0.9625)(6.1221876,0.9575) \curveto(6.1421876,0.9525)(6.1821876,0.9425)(6.2021875,0.9375) \curveto(6.2221875,0.9325)(6.2621875,0.9225)(6.2821875,0.9175) \curveto(6.3021874,0.9125)(6.3471875,0.9025)(6.3721876,0.8975) 
\curveto(6.3971877,0.8925)(6.4471874,0.8875)(6.4721875,0.8875) \curveto(6.4971876,0.8875)(6.5471873,0.8875)(6.5721874,0.8875) \curveto(6.5971875,0.8875)(6.6471877,0.8825)(6.6721873,0.8775) \curveto(6.6971874,0.8725)(6.7521877,0.8675)(6.7821875,0.8675) \curveto(6.8121877,0.8675)(6.8671875,0.8675)(6.8921876,0.8675) \curveto(6.9171877,0.8675)(6.9671874,0.8675)(6.9921875,0.8675) \curveto(7.0171876,0.8675)(7.0671873,0.8625)(7.0921874,0.8575) \curveto(7.1171875,0.8525)(7.1671877,0.8475)(7.1921873,0.8475) \curveto(7.2171874,0.8475)(7.2471876,0.8675)(7.2521877,0.8875) \curveto(7.2571874,0.9075)(7.2671876,0.9475)(7.2721877,0.9675) \curveto(7.2771873,0.9875)(7.2821875,1.0325)(7.2821875,1.0575) \curveto(7.2821875,1.0825)(7.2821875,1.1325)(7.2821875,1.1575) \curveto(7.2821875,1.1825)(7.2821875,1.2325)(7.2821875,1.2575) \curveto(7.2821875,1.2825)(7.2771873,1.3275)(7.2721877,1.3475) \curveto(7.2671876,1.3675)(7.2621875,1.4125)(7.2621875,1.4375) \curveto(7.2621875,1.4625)(7.2621875,1.5125)(7.2621875,1.5375) \curveto(7.2621875,1.5625)(7.2621875,1.6125)(7.2621875,1.6375) \curveto(7.2621875,1.6625)(7.2621875,1.7125)(7.2621875,1.7375) \curveto(7.2621875,1.7625)(7.2621875,1.8125)(7.2621875,1.8375) \curveto(7.2621875,1.8625)(7.2621875,1.9125)(7.2621875,1.9375) \curveto(7.2621875,1.9625)(7.2621875,2.0125)(7.2621875,2.0375) \curveto(7.2621875,2.0625)(7.2621875,2.1125)(7.2621875,2.1375) \curveto(7.2621875,2.1625)(7.2621875,2.2125)(7.2621875,2.2375) \curveto(7.2621875,2.2625)(7.2621875,2.3125)(7.2621875,2.3375) \curveto(7.2621875,2.3625)(7.2621875,2.4125)(7.2621875,2.4375) \curveto(7.2621875,2.4625)(7.2621875,2.5125)(7.2621875,2.5375) \curveto(7.2621875,2.5625)(7.2621875,2.6125)(7.2621875,2.6375) \curveto(7.2621875,2.6625)(7.2621875,2.7125)(7.2621875,2.7375) \curveto(7.2621875,2.7625)(7.2621875,2.8025)(7.2621875,2.8475) } \pscustom[linewidth=0.02,fillstyle=gradient,gradlines=2000,gradbegin=color4300g,gradend=color4300g,gradmidpoint=1.0] { \newpath \moveto(1.2621875,2.8475) 
\lineto(1.2621875,2.7975) \curveto(1.2621875,2.7725)(1.2621875,2.7225)(1.2621875,2.6975) \curveto(1.2621875,2.6725)(1.2621875,2.6225)(1.2621875,2.5975) \curveto(1.2621875,2.5725)(1.2621875,2.5225)(1.2621875,2.4975) \curveto(1.2621875,2.4725)(1.2621875,2.4225)(1.2621875,2.3975) \curveto(1.2621875,2.3725)(1.2621875,2.3225)(1.2621875,2.2975) \curveto(1.2621875,2.2725)(1.2621875,2.2225)(1.2621875,2.1975) \curveto(1.2621875,2.1725)(1.2621875,2.1225)(1.2621875,2.0975) \curveto(1.2621875,2.0725)(1.2621875,2.0225)(1.2621875,1.9975) \curveto(1.2621875,1.9725)(1.2621875,1.9225)(1.2621875,1.8975) \curveto(1.2621875,1.8725)(1.2621875,1.8225)(1.2621875,1.7975) \curveto(1.2621875,1.7725)(1.2621875,1.7225)(1.2621875,1.6975) \curveto(1.2621875,1.6725)(1.2621875,1.6225)(1.2621875,1.5975) \curveto(1.2621875,1.5725)(1.2621875,1.5325)(1.2621875,1.5175) \curveto(1.2621875,1.5025)(1.2621875,1.4625)(1.2621875,1.4375) \curveto(1.2621875,1.4125)(1.2621875,1.3625)(1.2621875,1.3375) \curveto(1.2621875,1.3125)(1.2621875,1.2625)(1.2621875,1.2375) \curveto(1.2621875,1.2125)(1.2621875,1.1625)(1.2621875,1.1375) \curveto(1.2621875,1.1125)(1.2621875,1.0625)(1.2621875,1.0375) \curveto(1.2621875,1.0125)(1.2621875,0.9625)(1.2621875,0.9375) \curveto(1.2621875,0.9125)(1.2821875,0.8825)(1.3021874,0.8775) \curveto(1.3221875,0.8725)(1.3671875,0.8675)(1.3921875,0.8675) \curveto(1.4171875,0.8675)(1.4621875,0.8625)(1.4821875,0.8575) \curveto(1.5021875,0.8525)(1.5471874,0.8475)(1.5721875,0.8475) \curveto(1.5971875,0.8475)(1.6471875,0.8475)(1.6721874,0.8475) \curveto(1.6971875,0.8475)(1.7471875,0.8475)(1.7721875,0.8475) \curveto(1.7971874,0.8475)(1.8371875,0.8475)(1.8521875,0.8475) \curveto(1.8671875,0.8475)(1.9071875,0.8475)(1.9321876,0.8475) \curveto(1.9571875,0.8475)(2.0021875,0.8525)(2.0221875,0.8575) \curveto(2.0421875,0.8625)(2.0871875,0.8675)(2.1121874,0.8675) \curveto(2.1371875,0.8675)(2.1821876,0.8725)(2.2021875,0.8775) \curveto(2.2221875,0.8825)(2.2621875,0.8925)(2.2821875,0.8975) 
\curveto(2.3021874,0.9025)(2.3421874,0.9125)(2.3621874,0.9175) \curveto(2.3821876,0.9225)(2.4221876,0.9325)(2.4421875,0.9375) \curveto(2.4621875,0.9425)(2.5021875,0.9525)(2.5221875,0.9575) \curveto(2.5421875,0.9625)(2.5771875,0.9775)(2.5921874,0.9875) \curveto(2.6071875,0.9975)(2.6371875,1.0175)(2.6521876,1.0275) \curveto(2.6671875,1.0375)(2.6971874,1.0575)(2.7121875,1.0675) \curveto(2.7271874,1.0775)(2.7571876,1.0975)(2.7721875,1.1075) \curveto(2.7871876,1.1175)(2.8171875,1.1375)(2.8321874,1.1475) \curveto(2.8471875,1.1575)(2.8771875,1.1775)(2.8921876,1.1875) \curveto(2.9071875,1.1975)(2.9321876,1.2225)(2.9421875,1.2375) \curveto(2.9521875,1.2525)(2.9771874,1.2775)(2.9921875,1.2875) \curveto(3.0071876,1.2975)(3.0321875,1.3225)(3.0421875,1.3375) \curveto(3.0521874,1.3525)(3.0721874,1.3825)(3.0821874,1.3975) \curveto(3.0921874,1.4125)(3.1121874,1.4425)(3.1221876,1.4575) \curveto(3.1321876,1.4725)(3.1521876,1.5025)(3.1621876,1.5175) \curveto(3.1721876,1.5325)(3.1971874,1.5575)(3.2121875,1.5675) \curveto(3.2271874,1.5775)(3.2521875,1.6025)(3.2621875,1.6175) \curveto(3.2721875,1.6325)(3.2921875,1.6625)(3.3021874,1.6775) \curveto(3.3121874,1.6925)(3.3321874,1.7275)(3.3421874,1.7475) \curveto(3.3521874,1.7675)(3.3671875,1.8075)(3.3721876,1.8275) \curveto(3.3771875,1.8475)(3.3921876,1.8825)(3.4021876,1.8975) \curveto(3.4121876,1.9125)(3.4271874,1.9375)(3.4321876,1.9475) \curveto(3.4371874,1.9575)(3.4521875,1.9825)(3.4621875,1.9975) \curveto(3.4721875,2.0125)(3.4871874,2.0475)(3.4921875,2.0675) \curveto(3.4971876,2.0875)(3.5121875,2.1225)(3.5221875,2.1375) \curveto(3.5321875,2.1525)(3.5521874,2.1825)(3.5621874,2.1975) \curveto(3.5721874,2.2125)(3.5871875,2.2475)(3.5921874,2.2675) \curveto(3.5971875,2.2875)(3.6121874,2.3225)(3.6221876,2.3375) \curveto(3.6321876,2.3525)(3.6521876,2.3825)(3.6621876,2.3975) \curveto(3.6721876,2.4125)(3.6921875,2.4425)(3.7021875,2.4575) \curveto(3.7121875,2.4725)(3.7321875,2.5025)(3.7421875,2.5175) 
\curveto(3.7521875,2.5325)(3.7721875,2.5625)(3.7821875,2.5775) \curveto(3.7921875,2.5925)(3.8121874,2.6275)(3.8221874,2.6475) \curveto(3.8321874,2.6675)(3.8571875,2.6975)(3.8721876,2.7075) \curveto(3.8871875,2.7175)(3.9171875,2.7375)(3.9321876,2.7475) \curveto(3.9471874,2.7575)(3.9821875,2.7725)(4.0021877,2.7775) \curveto(4.0221877,2.7825)(4.0571876,2.7975)(4.0721874,2.8075) \curveto(4.0871873,2.8175)(4.1171875,2.8275)(4.1621876,2.8275) } \psframe[linewidth=0.06,dimen=outer](7.3021874,2.8375)(1.2521875,-3.2125) \psline[linewidth=0.1cm](1.2621875,0.8475)(1.8821875,0.8475) \psline[linewidth=0.1cm](6.6621876,0.8675)(7.2821875,0.8675) \psbezier[linewidth=0.06](1.8621875,0.8475)(3.7021875,0.8675)(3.2821958,2.8315828)(4.2821875,2.8275)(5.2821794,2.8234172)(4.8821874,0.8675)(6.6821876,0.8675) \psline[linewidth=0.1cm](1.2621875,-1.1325)(1.8821875,-1.1325) \psline[linewidth=0.1cm](6.6621876,-1.1525)(7.2821875,-1.1525) \psline[linewidth=0.1cm](3.2821875,-3.1725)(3.2821875,-2.5525) \psline[linewidth=0.1cm](5.2821875,-3.1725)(5.2821875,-2.5525) \psbezier[linewidth=0.06](1.8621875,-1.1325)(3.0821874,-1.1325)(3.2821875,-1.3325)(3.2821875,-2.5925) \psbezier[linewidth=0.06](5.2821875,-2.5725)(5.2821875,-1.3725)(5.5021877,-1.1525)(6.6821876,-1.1525) \psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(1.2821875,2.7675)(1.2821875,3.5275) \psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(6.9021873,-3.1725)(7.7821875,-3.1725) \usefont{T1}{ptm}{m}{n} \rput(5.353594,-3.3825){$1$} \usefont{T1}{ptm}{m}{n} \rput(7.233594,-3.4025){$2$} \usefont{T1}{ptm}{m}{n} \rput(3.1935937,-3.3825){$0$} \usefont{T1}{ptm}{m}{n} \rput(1.1135937,-3.4025){$-1$} \usefont{T1}{ptm}{m}{n} \rput(1.0735937,-1.1225){$0$} \usefont{T1}{ptm}{m}{n} \rput(1.0935937,0.8375){$1$} \usefont{T1}{ptm}{m}{n} \rput(1.0735937,2.8175){$2$} \usefont{T1}{ptm}{m}{n} \rput(1.1135937,3.4575){$y$} \usefont{T1}{ptm}{m}{n} \rput(7.7035937,-3.4025){$x$} 
\psdots[dotsize=0.16](4.2821875,2.8275)
\psline[linewidth=0.1cm](4.2621875,-3.1725)(4.2621875,-2.5525)
\psline[linewidth=0.1cm](4.0621877,-3.1725)(4.0621877,-2.5525)
\psbezier[linewidth=0.04](5.0621877,-3.1325)(5.0821877,-0.7325)(5.2221875,-0.0325)(5.6821876,-0.1325)(6.1421876,-0.2325)(5.7621875,-0.5325)(7.2421875,-0.5325)
\psbezier[linewidth=0.04](3.4621875,-3.1525)(3.4421875,-1.4325)(3.3622925,-0.054923914)(2.8821876,-0.1525)(2.4020827,-0.2500761)(2.7021875,-0.5525)(1.2821875,-0.5325)
\psbezier[linewidth=0.04](3.6621876,-3.1325)(3.6621876,-1.1325)(3.9221876,1.1075)(3.5421875,1.0475)(3.1621876,0.9875)(3.6421876,-0.1325)(1.3021874,0.0475)
\psbezier[linewidth=0.04](4.8621874,-3.1525)(4.9021873,-1.1325)(4.5421877,1.1275)(4.9021873,1.0475)(5.2621875,0.9675)(4.9021873,-0.2125)(7.2621875,0.0475)
\psbezier[linewidth=0.04](4.0621877,-3.1525)(4.0821877,0.6475)(4.1221876,2.0275)(3.8421874,2.1075)(3.5621874,2.1875)(3.5621874,0.1275)(1.2821875,0.5075)
\psbezier[linewidth=0.04](4.4621873,-3.1525)(4.4821873,0.6675)(4.3821874,2.2475)(4.6021876,2.0875)(4.8221874,1.9275)(5.4021873,0.1875)(7.2821875,0.4875)
\psbezier[linewidth=0.06](4.2621875,-3.1325)(4.2821875,0.2475)(4.2821875,0.6475)(4.2821875,2.8475)
\usefont{T1}{ptm}{m}{n}
\rput(4.193594,3.1175){$a_0=b_0$}
\usefont{T1}{ptm}{m}{n}
\rput(3.8296876,-3.4325){\small $I_{p,-1}$}
\usefont{T1}{ptm}{m}{n}
\rput(4.7296877,-3.4325){\small $I_{p,1}$}
\usefont{T1}{ptm}{m}{n}
\rput(7.6596875,-0.1325){\small $E_{p,1}$}
\usefont{T1}{ptm}{m}{n}
\rput(0.7596875,-0.1925){\small $E_{p,-1}$}
\psline[linewidth=0.02cm,arrowsize=0.05291667cm 6.0,arrowlength=2.0,arrowinset=0.4]{->}(5.8821874,-0.2125)(6.0621877,-0.3725)
\psline[linewidth=0.02cm,arrowsize=0.05291667cm 6.0,arrowlength=2.0,arrowinset=0.4]{->}(2.8421874,1.1275)(2.5421875,0.9675)
\psline[linewidth=0.02cm,arrowsize=0.05291667cm 6.0,arrowlength=2.0,arrowinset=0.4]{->}(5.5021877,1.3675)(5.8221874,1.0675)
\psline[linewidth=0.02cm,arrowsize=0.05291667cm
6.0,arrowlength=2.0,arrowinset=0.4]{->}(4.2821875,0.1875)(4.2821875,0.5475) \psline[linewidth=0.02cm,arrowsize=0.05291667cm 6.0,arrowlength=2.0,arrowinset=0.4]{->}(4.4621873,1.3275)(4.5021877,1.7075) \psline[linewidth=0.02cm,arrowsize=0.05291667cm 6.0,arrowlength=2.0,arrowinset=0.4]{->}(4.0421877,1.3875)(4.0021877,1.6675) \psline[linewidth=0.02cm,arrowsize=0.05291667cm 6.0,arrowlength=2.0,arrowinset=0.4]{->}(4.7421875,0.6875)(4.8221874,0.9875) \psline[linewidth=0.02cm,arrowsize=0.05291667cm 6.0,arrowlength=2.0,arrowinset=0.4]{->}(3.7421875,0.6475)(3.6621876,0.9875) \psline[linewidth=0.02cm,arrowsize=0.05291667cm 6.0,arrowlength=2.0,arrowinset=0.4]{->}(2.7821875,-0.1725)(2.5621874,-0.2925) \psline[linewidth=0.02cm,arrowsize=0.05291667cm 6.0,arrowlength=2.0,arrowinset=0.4]{->}(5.7821875,-1.2725)(6.0421877,-1.1925) \psline[linewidth=0.02cm,arrowsize=0.05291667cm 6.0,arrowlength=2.0,arrowinset=0.4]{->}(3.0221875,-1.3725)(2.7821875,-1.2325) \psline[linewidth=0.1cm](4.4621873,-3.1725)(4.4621873,-2.5525) \psline[linewidth=0.1cm](4.8621874,-3.1725)(4.8621874,-2.5525) \psline[linewidth=0.1cm](5.0621877,-3.1725)(5.0621877,-2.5525) \psline[linewidth=0.1cm](3.6621876,-3.1725)(3.6621876,-2.5525) \psline[linewidth=0.1cm](3.4621875,-3.1725)(3.4621875,-2.5525) \psline[linewidth=0.1cm](1.2821875,0.4875)(1.9021875,0.4875) \psline[linewidth=0.1cm](1.2621875,0.0475)(1.8821875,0.0475) \psline[linewidth=0.1cm](1.2621875,-0.5325)(1.8821875,-0.5325) \psline[linewidth=0.1cm](6.6621876,0.4675)(7.2821875,0.4675) \psline[linewidth=0.1cm](6.6621876,0.0275)(7.2821875,0.0275) \psline[linewidth=0.1cm](6.6621876,-0.5325)(7.2821875,-0.5325) \psdots[dotsize=0.16](1.2821875,2.8275) \psdots[dotsize=0.16](1.2621875,0.8475) \psdots[dotsize=0.16](1.2821875,-1.1325) \psdots[dotsize=0.16](1.2621875,-3.1725) \psdots[dotsize=0.16](3.2821875,-3.1525) \psdots[dotsize=0.16](5.2821875,-3.1525) \psdots[dotsize=0.16](7.2821875,-3.1725) \psdots[dotsize=0.16](4.2621875,-3.1725) 
\psdots[dotsize=0.16](7.2621875,-1.1525)
\psdots[dotsize=0.16](7.2621875,0.8675)
\end{pspicture} } \\
Fig. 3c: the flow on the level $T_p$ for $p=0$
\end{center}
We also assume that there are some neighborhoods $V_\pm$ of $F_\pm$, see~\eqref{eq:Fpm}, such that the generator of the flow is equal to $(0,\pm 1,0)$ in $V_\pm$ respectively, and a neighborhood $U$ of $S_2$ in $X$, see~\eqref{eq:S_2}, such that the generator is equal to $(1,0,0)$ in $U$. We have already claimed that all the trajectories in $X^-$ preserve the $p$-coordinate. The latter assumptions mean that all the trajectories that escape the entrance cross-section $I_p$ are ``locally vertical'' (preserving the $x$-coordinate), all the trajectories that come to the exit cross-sections $E_{p,\pm 1}$ are ``locally horizontal'' (preserving the $y$-coordinate), and, moreover, the velocity of the flow near $F_\pm$ and $S_2$ is locally constant and equal to $1$. These assumptions will allow us to descend the flow correctly ($C^\infty$-smoothly) to the glued manifold in the $4$-dimensional case, where we deal with a genuine (non-stratified) manifold.

The description of the flow restricted to $X^-$ is finished. All trajectories in $X^+$, except for the fixed points $a_p$ and $b_p$, $p\in [0,1/2]$, and the unique sink $s_3:=(1/2,2,1/2)$, start from the plane $L$ that divides $X^+$ and $X^-$, smoothly continue the corresponding trajectories from $X^-$ that come to this plane, and tend to the sink $s_3$.

The description of the flow on $X$ is finished. We descend it to $\widetilde X$ (without any worries about smoothness, because there is no smooth structure on the stratum $S_2$ anyway). Let $G\colon \widetilde X\to\widetilde X$ denote its time $1$ map.
\begin{lemma}\label{3}
The $\delta$-measure sitting at the point $s_3$ is a global SRB measure for the map $G$. The set $K_3$ of the points that do not tend to $s_3$ under iterates of $G$ has zero Lebesgue $3$-measure and Hausdorff dimension $3$.
\end{lemma}
\begin{proof}
The set of the points $(x,p)$ of the ``front side'' $\{y=0\}$ that do not tend to the point $s_3$ is the intersection $K_2'$ of the set $K_2$ from the previous section with the half-space $p\leqslant 1/2$. By Lemma~\ref{2}, the set $K_2'$ has Hausdorff dimension $2$ and zero Lebesgue $2$-measure. The set $K_3$ is the saturation of $K_2'$ by the trajectories of the flow. Therefore $K_3$ has zero Lebesgue $3$-measure by Fubini's theorem and Hausdorff dimension $3$ by Proposition~\ref{mdp}. The rest of the points obviously tend to $s_3$.
\end{proof}
The map $G$ is continuous, but, as we discussed before, after the gluing~\eqref{eq:equiv3} described above we obtain a stratified manifold. To get rid of the $2$-stratum $S_2$ we need to add one more dimension.
\section{Dimension 4: gluing up a genuine manifold}
We start with an informal description of a flow on a $4$-manifold with a piecewise smooth boundary, and then present the corresponding formulas.
\subsection{Heuristic description again}
Recall that $X=T\times [0,1]$ is the subset of the $3$-parallelepiped, before gluing the boundaries. Multiply it by the interval $[0,1]$, introducing a new coordinate $h$. Denote
\begin{equation}\label{eq:Mpm}
\begin{aligned}
M^-=X^-\times [0,1],\\
M^+=X^+\times [0,1].
\end{aligned}
\end{equation}
We define a flow in $M^-$ as follows: its trajectories start on each \textit{entrance square cross-section on the level $p$} (that is, $I_p$ multiplied by the $h$-interval $[0,1]$) and preserve the $h$-coordinate until they reach the boundary. The flow is actually the same as in $X^-$ in the previous section, with the condition $\dot h=0$ added to the generator. Almost all trajectories in $M^+$ tend to a unique sink $s_4$ (as in the previous section: with $M^+$ instead of $X^+$ and $s_4$ instead of $s_3$).
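The lift of the flow from $X^-$ to $M^-$ can be sketched in a few lines: one appends a zero $h$-component to the three-dimensional generator. The sketch below is only an illustration of this bookkeeping; `gen3` is a stand-in for the generator of the flow on $X^-$ from the previous section, which we do not reproduce here.

```python
def lift_generator(gen3):
    """Lift a 3-dimensional vector field (x, y, p) -> (vx, vy, vp) to M^-
    by imposing h' = 0, so that trajectories preserve the h-coordinate."""
    def gen4(x, y, p, h):
        vx, vy, vp = gen3(x, y, p)
        return (vx, vy, vp, 0.0)  # the h-component of the velocity vanishes
    return gen4

# e.g. near F_+ the 3-dimensional generator equals (0, 1, 0),
# so the lifted generator equals (0, 1, 0, 0) there
```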
Then we define the gluing of the boundaries of $X\times [0,1]$ using the new coordinate $h$: the \textit{left and right hand exit square cross-sections} (that are $E_\pm$ multiplied by the $h$-interval $[0,1]$) are contracted by a factor of three in the $h$-direction and are separated by gluing them to \textit{disjoint} parts of the entrance square cross-section on the level $q(p)$. Thus we obtain a flow that acts on a compact manifold with a piecewise smooth boundary. The time $1$ map of this flow is continuous and admits a global SRB measure sitting at the point $s_4$, but the SET does not hold for it. Then, by means of several simple tricks, we derive from this flow another one, which acts on a boundaryless manifold and whose time $1$ map is bijective.
\subsection{Construction of a manifold with a piecewise smooth boundary}
Denote the ``front'', or ``entrance'', cross-section by
$$
M_{-1}:=\{(x,-1,p,h)\mid x,h\in [0,1], p\in [0,1/2]\}.
$$
The entrance square cross-section on the level $p$
\begin{equation*}
X_p=\{(x,-1,p,h)\mid x,h\in [0,1]\}
\end{equation*}
is divided into three parts in the $x$-direction according to the splitting~\eqref{eq:Ipk}:
\begin{equation*}
\begin{aligned}
&X_{p,-1}=\{(x,-1,p,h)\mid x\in I_{p,-1},h\in [0,1]\};\\
&X_{p,0}=\{(x,-1,p,h)\mid x\in I_{p,0},h\in [0,1]\};\\
&X_{p,1}=\{(x,-1,p,h)\mid x\in I_{p,1},h\in [0,1]\}.
\end{aligned}
\end{equation*}
Let us also divide $X_p$ into three equal parts in the $h$-direction:
\begin{equation*}
\begin{aligned}
&Z_{p,-1}=\{(x,-1,p,h)\mid x\in [0,1], h\in [0,1/3)\};\\
&Z_{p,0}=\{(x,-1,p,h)\mid x\in [0,1], h\in [1/3,2/3]\};\\
&Z_{p,1}=\{(x,-1,p,h)\mid x\in [0,1], h\in (2/3,1]\}.
\end{aligned}
\end{equation*}
We also define the ``exit squares'' on the level $p$:
\begin{equation*}
\begin{aligned}
&A_{p,0}=\{(x,y,p,h)\mid (x,y,p)\in E_{p,0}, h\in [0,1]\},\\
&A_{p,\pm 1}=\{(x,y,p,h)\mid (x,y,p)\in E_{p,\pm 1}, h\in [0,1]\},
\end{aligned}
\end{equation*}
and glue the squares $A_{p,\pm 1}$ linearly to the rectangles $Z_{q(p),\pm 1}$ by the following equivalence:
\begin{equation}\label{eq:equiv4}
\begin{aligned}
&(-1,y,p,h)\equiv(y,-1,q(p),h/3);\\
&(2,y,p,h)\equiv(y,-1,q(p),1-(h/3)).
\end{aligned}
\end{equation}
Thus we obtain a genuine $C^\infty$-smooth manifold $M$ with a piecewise smooth boundary. Indeed, all we need is to verify the existence of local charts on the set $N\subset M_{-1}$,
\begin{equation}\label{eq:N}
N:=\{(x,-1,p,h)\mid x\in [0,1], p\in[0,1/3], h\in[0,1/3]\cup[2/3,1]\}.
\end{equation}
Note that one can continue the manifold $X\times [0,1]$ in the $x$-direction to the neighborhoods of $A_{p,\pm 1}$ and in the $y$-direction to the neighborhoods of $Z_{q(p),\pm 1}$ (in $\mathbb R^4$), and extend the equivalence~\eqref{eq:equiv4} by the following formulas (for $\varepsilon$ sufficiently close to $0$):
\begin{equation}\label{eq:equiv4ext}
\begin{aligned}
&(-1+\varepsilon,y,p,h)\equiv(y,-1-\varepsilon,q(p),h/3);\\
&(2-\varepsilon,y,p,h)\equiv(y,-1-\varepsilon,q(p),1-(h/3)).
\end{aligned}
\end{equation}
Then the local charts on $N$ descend from $\mathbb R^4$ to $M$ according to the latter equivalence.
\subsection{Construction of a flow}
In fact, we should add only a few words to the heuristic description of the flow to make it rigorous. As was mentioned before, the flow in $M^-$ is actually the same flow as described in the $3$-dimensional case, multiplied directly by the coordinate $h$ (which it preserves).
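The identification~\eqref{eq:equiv4} can also be written out as an explicit map. A minimal sketch (for illustration only): here `q` stands for the level map $q(p)$ of the construction and is simply passed in as a parameter.

```python
def glue(point, q):
    """Identify a point of an exit square A_{p,+-1} with its image on the
    entrance cross-section, following the equivalence (eq:equiv4).
    `q` is the level map q(p), supplied by the caller."""
    x, y, p, h = point
    if x == -1:                      # left exit square -> Z_{q(p),-1}: image h-range [0, 1/3]
        return (y, -1, q(p), h / 3)
    if x == 2:                       # right exit square -> Z_{q(p),1}: image h-range [2/3, 1]
        return (y, -1, q(p), 1 - h / 3)
    return point                     # points off the exit squares are not identified
```

The two images land in the disjoint outer thirds $Z_{q(p),\pm 1}$ of the entrance square, which is exactly the contraction by a factor of three and separation described in the heuristic part.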
We assumed before that, for the flow described in the $3$-dimensional case, there are some neighborhoods of $F_\pm$ and $S_2$ such that the generator of the flow is equal to $(0,\pm 1,0)$ in $V_\pm$ respectively, and is equal to $(1,0,0)$ in $U$. Thus, by the equivalence~\eqref{eq:equiv4ext}, there is a neighborhood of $N$ in $M$ such that the generator of the flow is identically equal to $(1,0,0,0)$ in this neighborhood. This means that the flow descends to $M$ $C^\infty$-smoothly.

Figure~4 displays the first return map to the ``front'' cubic cross-section $M_{-1}$ for points in $X_{p,-1}$ and $X_{p,1}$ that return to this cross-section (equivalently: for points whose $x$-coordinate is not central in terms of the map $g$, see Section~2.2). This map resembles a ``non-autonomous horseshoe map'': the $p$-levels are non-invariant and the expansion rate depends on the level.
\begin{center}
\scalebox{1.2}
{
\begin{pspicture}(0,-4.5289063)(12.722813,4.5089064)
\definecolor{color509b}{rgb}{0.6,0.6,0.6}
\definecolor{color97b}{rgb}{0.8,0.8,0.8}
\psline[linewidth=0.08cm](1.1395311,3.4689062)(4.539531,4.4689064)
\psline[linewidth=0.08cm](8.559532,3.448906)(11.959531,4.4489064)
\psline[linewidth=0.08cm](8.559532,-3.971094)(12.039531,-3.011094)
\psline[linewidth=0.04cm,linestyle=dashed,dash=0.16cm 0.16cm,arrowsize=0.05291667cm 4.0,arrowlength=2.0,arrowinset=0.4]{->}(1.1595312,-3.971094)(5.2795315,-2.7710938)
\psline[linewidth=0.08cm](11.979531,4.4689064)(11.999531,-3.0310938)
\psline[linewidth=0.08cm](4.5195312,4.4689064)(12.019531,4.4689064)
\psline[linewidth=0.03cm,linestyle=dashed,dash=0.16cm 0.16cm](4.5195312,0.48890612)(12.019531,0.48890612)
\psline[linewidth=0.06cm](1.099531,-2.0510938)(8.599532,-2.0510938)
\psline[linewidth=0.03cm,linestyle=dashed,dash=0.16cm 0.16cm](1.1795312,-2.0310938)(4.579531,-1.031094)
\pspolygon[linewidth=0.03,fillstyle=solid,fillcolor=color509b](9.89953,0.48890612)(6.5195312,-0.53109396)(8.579531,-0.55109394)(11.93953,0.48890612)
\psline[linewidth=0.06cm](1.1395311,-0.53109396)(8.639532,-0.53109396) \psline[linewidth=0.06cm](8.619531,-0.53109396)(11.999531,0.48890612) \psdots[dotsize=0.16](9.879532,0.48890612) \psdots[dotsize=0.16](11.979531,0.48890612) \psdots[dotsize=0.16](6.539531,-0.5110939) \psdots[dotsize=0.16](6.4195313,0.5089061) \psdots[dotsize=0.16](1.1595312,-0.53109396) \psdots[dotsize=0.16](8.579531,-0.55109394) \pspolygon[linewidth=0.03,fillstyle=solid,fillcolor=color97b](10.879531,-1.371094)(3.439531,-1.371094)(4.539531,-1.031094)(11.959531,-1.031094) \pspolygon[linewidth=0.03,fillstyle=solid,fillcolor=color509b](4.559531,0.48890612)(1.2195312,-0.5110939)(3.1195314,-0.5110939)(6.4395313,0.48890612) \pspolygon[linewidth=0.03,fillstyle=solid,fillcolor=color97b](8.599532,-2.0310938)(1.1595312,-2.0310938)(2.279531,-1.7110939)(9.679532,-1.6910938) \psline[linewidth=0.06cm](8.559532,-2.0310938)(11.959531,-1.031094) \psframe[linewidth=0.08,dimen=outer](8.619531,3.488906)(1.1195312,-4.0110936) \psdots[dotsize=0.16](3.0995314,-0.5110939) \psdots[dotsize=0.16](4.5195312,0.5089061) \psline[linewidth=0.06cm](4.7795315,-3.971094)(1.1795312,3.448906) \psline[linewidth=0.06cm](4.7795315,-3.951094)(8.599532,3.488906) \psline[linewidth=0.03cm,linestyle=dashed,dash=0.16cm 0.16cm](8.079531,-2.971094)(11.979531,4.4689064) \psline[linewidth=0.03cm,linestyle=dashed,dash=0.16cm 0.16cm](8.079531,-2.9910936)(4.539531,4.4889064) \psframe[linewidth=0.04,linestyle=dashed,dash=0.16cm 0.16cm,dimen=outer](12.019531,4.4889064)(4.5195312,-3.011094) \psdots[dotsize=0.16](1.1395311,-2.0510938) \psdots[dotsize=0.16](2.299531,-1.7110939) \psdots[dotsize=0.16](3.439531,-1.3910939) \psdots[dotsize=0.16](4.539531,-1.011094) \psdots[dotsize=0.16](11.979531,-1.031094) \psdots[dotsize=0.16](10.799531,-1.391094) \psdots[dotsize=0.16](9.659532,-1.7110939) \psdots[dotsize=0.16](8.579531,-2.0510938) \psline[linewidth=0.04cm,linestyle=dotted,dotsep=0.16cm](3.0995314,-0.53109396)(3.1195314,-3.931094) 
\psline[linewidth=0.04cm,linestyle=dotted,dotsep=0.16cm](6.539531,-0.55109394)(6.539531,-3.951094) \psdots[dotsize=0.16](6.539531,-3.971094) \psdots[dotsize=0.16](3.1195314,-3.971094) \psdots[dotsize=0.16](4.7795315,-3.971094) \usefont{T1}{ptm}{m}{n} \rput(4.882344,-4.3010936){$I_{p,0}$} \usefont{T1}{ptm}{m}{n} \rput(7.6023436,-4.2610936){$I_{p,1}$} \usefont{T1}{ptm}{m}{n} \rput(2.2223437,-4.3010936){$I_{p,-1}$} \usefont{T1}{ptm}{m}{n} \rput(5.2923436,-3.721094){$1/2$} \psline[linewidth=0.03cm,arrowsize=0.05291667cm 4.0,arrowlength=2.0,arrowinset=0.4]{->}(1.1595312,3.2689064)(1.1595312,4.328906) \psline[linewidth=0.03cm,arrowsize=0.05291667cm 4.0,arrowlength=2.0,arrowinset=0.4]{->}(8.179532,-3.9910936)(9.879532,-3.971094) \usefont{T1}{ptm}{m}{n} \rput(9.602344,-4.1810937){$x$} \usefont{T1}{ptm}{m}{n} \rput(0.9723436,4.2589064){$p$} \usefont{T1}{ptm}{m}{n} \rput(5.0723433,-2.6210938){$h$} \usefont{T1}{ptm}{m}{n} \rput(2.0123436,-3.441094){$1/3$} \psline[linewidth=0.034cm,linestyle=dotted,dotsep=0.16cm](2.299531,-1.6910938)(2.3,-3.5910938) \psline[linewidth=0.034cm,linestyle=dotted,dotsep=0.16cm](3.44,-1.5110937)(3.439531,-3.291094) \psdots[dotsize=0.16](2.279531,-3.6310937) \psdots[dotsize=0.16](3.439531,-3.311094) \usefont{T1}{ptm}{m}{n} \rput(3.1123435,-3.181094){$2/3$} \usefont{T1}{ptm}{m}{n} \rput(4.3723435,-2.801094){$1$} \usefont{T1}{ptm}{m}{n} \rput(0.8923437,-3.9810936){$0$} \usefont{T1}{ptm}{m}{n} \rput(8.612343,-4.2610936){$1$} \usefont{T1}{ptm}{m}{n} \rput(0.9323437,3.558906){$1$} \usefont{T1}{ptm}{m}{n} \rput(0.85234374,-0.5010939){$p_0$} \usefont{T1}{ptm}{m}{n} \rput(0.6223438,-2.0610938){$q(p_0)$} \psdots[dotsize=0.16](1.1595312,3.4689062) \psdots[dotsize=0.16](4.559531,-2.971094) \psdots[dotsize=0.16](8.579531,-3.971094) \psbezier[linewidth=0.1,arrowsize=0.05291667cm 5.0,arrowlength=1.0,arrowinset=0.4]{->}(3.7395313,-0.0310939)(3.82,-0.89109373)(5.179531,-1.6710938)(5.48,-1.8910937) \psbezier[linewidth=0.1,arrowsize=0.05291667cm 
5.0,arrowlength=1.0,arrowinset=0.4]{->}(9.43953,0.0289061)(8.9,-0.87109375)(8.51953,-0.85109395)(7.74,-1.2710937) \psbezier[linewidth=0.02,arrowsize=0.05291667cm 3.0,arrowlength=3.0,arrowinset=0.4]{->}(2.8395312,2.208906)(2.8395312,1.4089061)(3.5995314,0.4289061)(4.3595314,0.0289061) \psbezier[linewidth=0.02,arrowsize=0.05291667cm 3.0,arrowlength=3.0,arrowinset=0.4]{->}(9.839532,2.188906)(10.259532,1.6889061)(10.519531,0.84890616)(10.219532,0.0489061) \psbezier[linewidth=0.02,arrowsize=0.05291667cm 3.0,arrowlength=3.0,arrowinset=0.4]{->}(11.759532,-3.451094)(11.72,-2.3710938)(12.0,-1.1910938)(10.12,-1.1910938) \psbezier[linewidth=0.02,arrowsize=0.05291667cm 3.0,arrowlength=3.0,arrowinset=0.4]{->}(10.139532,-3.711094)(9.86,-2.8710938)(9.74,-1.8310938)(8.1,-1.8510938) \usefont{T1}{ptm}{m}{n} \rput(10.702343,-3.8810937){$Z_{q(p_0),0}$} \usefont{T1}{ptm}{m}{n} \rput(11.502344,-3.601094){$Z_{q(p_0),2}$} \usefont{T1}{ptm}{m}{n} \rput(2.8723438,2.4789062){$X_{p_0,0}$} \usefont{T1}{ptm}{m}{n} \rput(9.812344,2.418906){$X_{p_0,2}$} \psdots[dotsize=0.16](1.1595312,-3.971094) \end{pspicture} } \\ Fig. 4: first return map for the front cubic transversal $M_{-1}$ \end{center} Analogously to the $3$-dimensional case, we put in $M^+$, see~\eqref{eq:Mpm}, the unique sink $s_4=(1/2,2,1/2,1/2)$. This sink attracts all the trajectories of the flow as soon as they reach $M^+$, except for two surfaces of fixed points: $$ \{(x,y,p,h)\mid (x,y)=a_p, p\in [0,1/2], h\in [0,1]\} $$ and $$ \{(x,y,p,h)\mid (x,y)=b_p, p\in [0,1/2], h\in [0,1]\}. $$ Let $H\colon M\to M$ denote the time-$1$ map of the flow described above (it should be distinguished from the first return map displayed in Figure~4). \begin{lemma}\label{4} The $\delta$-measure sitting at the point $s_4$ is a global SRB measure for the map $H$. The set $K_4$ of points that do not tend to $s_4$ under iterations of $H$ has zero Lebesgue $4$-measure and Hausdorff dimension $4$.
\end{lemma} \begin{proof} The set $K_4$ is the cross product of $K_3$ and the interval $[0,1]$, modulo a set of Hausdorff dimension $3$ that does not affect its Lebesgue $4$-measure. Hence all the assertions of the lemma follow from the corresponding statements of Lemma~\ref{3}. \end{proof} \subsection{Proof of the Theorem} \begin{proof}[Proof of the theorem] The theorem can be easily deduced from Lemma~\ref{4}. \textbf{Step 1.} To construct an example of an endomorphism of a $4$-manifold with piecewise smooth boundary for which the SET fails, one takes the map $H$, which has a global SRB measure (the $\delta$-measure sitting at $s_4$), and any function $\varphi$ such that $\varphi(s_4)=0$ and $\varphi(t)=1$ for all $t\in K_4$. Then $K_{\varphi,1}=K_4$, and hence, by Lemma~\ref{4}, the SET does not hold for $H$. We then make three simple improvements. First, we construct a similar example on the manifold $\widetilde M$ with a $C^\infty$-smooth boundary. \textbf{Step 2.} The manifold $M$ has a piecewise smooth boundary. The non-smooth set of the boundary lies on the cross-section $M_{-1}$; it consists of the boundary (in $M_{-1}$) of the set $N$ of points that are glued by the equivalence~\eqref{eq:equiv4}, see~\eqref{eq:N}. One can then attach $4$-regions to a small neighbourhood of this set, bounded by $C^\infty$-hypersurfaces that link $C^\infty$-smoothly to the boundary of $M$ (see Figure~5). Thus we obtain a manifold $\widetilde M$ with a $C^\infty$-smooth boundary. The flow extends naturally to $\widetilde M$ by the same formula $(1,0,0,0)$ for the generator (in the natural chart in a neighborhood of $N$ in $\mathbb R^4$). Obviously, the SET still does not hold for the time-$1$ map $\widetilde H$ of this flow on $\widetilde M$.
\begin{center} \scalebox{1.2} { \begin{pspicture}(0,-2.65)(9.937813,2.64) \definecolor{color898b}{rgb}{0.6,0.6,0.6} \pscustom[linewidth=0.02,fillstyle=solid,fillcolor=color898b] { \newpath \moveto(3.9078126,1.25) \lineto(3.9178126,1.21) \curveto(3.9228125,1.19)(3.9278126,1.145)(3.9278126,1.12) \curveto(3.9278126,1.095)(3.9228125,1.045)(3.9178126,1.02) \curveto(3.9128125,0.995)(3.9078126,0.945)(3.9078126,0.92) \curveto(3.9078126,0.895)(3.9078126,0.845)(3.9078126,0.82) \curveto(3.9078126,0.795)(3.9078126,0.745)(3.9078126,0.72) \curveto(3.9078126,0.695)(3.9078126,0.645)(3.9078126,0.62) \curveto(3.9078126,0.595)(3.9078126,0.545)(3.9078126,0.52) \curveto(3.9078126,0.495)(3.9078126,0.445)(3.9078126,0.42) \curveto(3.9078126,0.395)(3.9078126,0.345)(3.9078126,0.32) \curveto(3.9078126,0.295)(3.9078126,0.245)(3.9078126,0.22) \curveto(3.9078126,0.195)(3.9078126,0.145)(3.9078126,0.12) \curveto(3.9078126,0.095)(3.9128125,0.05)(3.9178126,0.03) \curveto(3.9228125,0.01)(3.9278126,-0.035)(3.9278126,-0.06) \curveto(3.9278126,-0.085)(3.9278126,-0.135)(3.9278126,-0.16) \curveto(3.9278126,-0.185)(3.9278126,-0.235)(3.9278126,-0.26) \curveto(3.9278126,-0.285)(3.9228125,-0.33)(3.9178126,-0.35) \curveto(3.9128125,-0.37)(3.9078126,-0.415)(3.9078126,-0.44) \curveto(3.9078126,-0.465)(3.9078126,-0.515)(3.9078126,-0.54) \curveto(3.9078126,-0.565)(3.9078126,-0.615)(3.9078126,-0.64) \curveto(3.9078126,-0.665)(3.9128125,-0.715)(3.9178126,-0.74) \curveto(3.9228125,-0.765)(3.9278126,-0.815)(3.9278126,-0.84) \curveto(3.9278126,-0.865)(3.9328125,-0.89)(3.9378126,-0.89) \curveto(3.9428124,-0.89)(3.9578125,-0.875)(3.9678125,-0.86) \curveto(3.9778125,-0.845)(3.9928124,-0.805)(3.9978125,-0.78) \curveto(4.0028124,-0.755)(4.0178127,-0.71)(4.0278125,-0.69) \curveto(4.0378127,-0.67)(4.0528126,-0.63)(4.0578127,-0.61) \curveto(4.0628123,-0.59)(4.0728126,-0.55)(4.0778127,-0.53) \curveto(4.0828123,-0.51)(4.0978127,-0.475)(4.1078124,-0.46) \curveto(4.1178126,-0.445)(4.1328125,-0.41)(4.1378126,-0.39)
\curveto(4.1428127,-0.37)(4.1578126,-0.33)(4.1678123,-0.31) \curveto(4.1778126,-0.29)(4.1928124,-0.25)(4.1978126,-0.23) \curveto(4.2028127,-0.21)(4.2178125,-0.175)(4.2278123,-0.16) \curveto(4.2378125,-0.145)(4.2578125,-0.115)(4.2678127,-0.1) \curveto(4.2778125,-0.085)(4.2978125,-0.055)(4.3078127,-0.04) \curveto(4.3178124,-0.025)(4.3328123,0.01)(4.3378124,0.03) \curveto(4.3428125,0.05)(4.3578124,0.085)(4.3678126,0.1) \curveto(4.3778124,0.115)(4.4028125,0.145)(4.4178123,0.16) \curveto(4.4328127,0.175)(4.4578123,0.205)(4.4678125,0.22) \curveto(4.4778123,0.235)(4.4978123,0.265)(4.5078125,0.28) \curveto(4.5178127,0.295)(4.5378127,0.325)(4.5478125,0.34) \curveto(4.5578127,0.355)(4.5828123,0.38)(4.5978127,0.39) \curveto(4.6128125,0.4)(4.6378126,0.425)(4.6478124,0.44) \curveto(4.6578126,0.455)(4.6828127,0.48)(4.6978126,0.49) \curveto(4.7128124,0.5)(4.7378125,0.525)(4.7478123,0.54) \curveto(4.7578125,0.555)(4.7778125,0.585)(4.7878127,0.6) \curveto(4.7978125,0.615)(4.8278127,0.64)(4.8478127,0.65) \curveto(4.8678126,0.66)(4.9028125,0.68)(4.9178123,0.69) \curveto(4.9328127,0.7)(4.9678125,0.72)(4.9878125,0.73) \curveto(5.0078125,0.74)(5.0378127,0.755)(5.0478125,0.76) \curveto(5.0578127,0.765)(5.0828123,0.78)(5.0978127,0.79) \curveto(5.1128125,0.8)(5.1478124,0.82)(5.1678123,0.83) \curveto(5.1878123,0.84)(5.2228127,0.865)(5.2378125,0.88) \curveto(5.2528124,0.895)(5.2828126,0.92)(5.2978125,0.93) \curveto(5.3128123,0.94)(5.3428125,0.96)(5.3578124,0.97) \curveto(5.3728123,0.98)(5.4128127,0.99)(5.4378123,0.99) \curveto(5.4628124,0.99)(5.5078125,1.0)(5.5278125,1.01) \curveto(5.5478125,1.02)(5.5878124,1.04)(5.6078124,1.05) \curveto(5.6278124,1.06)(5.6678123,1.075)(5.6878123,1.08) \curveto(5.7078123,1.085)(5.7478123,1.095)(5.7678127,1.1) \curveto(5.7878127,1.105)(5.8278127,1.115)(5.8478127,1.12) \curveto(5.8678126,1.125)(5.9078126,1.135)(5.9278126,1.14) \curveto(5.9478126,1.145)(5.9878125,1.155)(6.0078125,1.16) \curveto(6.0278125,1.165)(6.0678124,1.175)(6.0878124,1.18) 
\curveto(6.1078124,1.185)(6.1478124,1.195)(6.1678123,1.2) \curveto(6.1878123,1.205)(6.2278123,1.215)(6.2478123,1.22) \curveto(6.2678127,1.225)(6.2978125,1.235)(6.3278127,1.25) } \psline[linewidth=0.06cm](2.0478125,1.25)(9.907812,1.25) \psline[linewidth=0.06cm](3.9278126,-2.49)(3.9078126,2.61) \psbezier[linewidth=0.08](9.887813,1.23)(8.987812,1.23)(8.867812,1.25)(6.6878123,1.23)(4.5078125,1.21)(3.9078126,-0.45)(3.9278126,-1.53)(3.9478126,-2.61)(3.9278126,-1.63)(3.9278126,-2.51) \psline[linewidth=0.03cm](4.5078125,0.25)(4.5078125,2.61) \psline[linewidth=0.03cm](5.1078124,0.83)(5.1078124,2.61) \psline[linewidth=0.03cm](5.7078123,1.11)(5.7078123,2.61) \psline[linewidth=0.03cm](6.3078127,1.23)(6.3078127,2.61) \psline[linewidth=0.03cm](6.9078126,1.25)(6.9078126,2.61) \psline[linewidth=0.03cm](7.4878125,1.23)(7.4878125,2.61) \psline[linewidth=0.03cm](8.087812,1.25)(8.087812,2.61) \psline[linewidth=0.03cm](8.687813,1.27)(8.687813,2.61) \psline[linewidth=0.03cm](9.287812,1.23)(9.287812,2.61) \psline[linewidth=0.03cm](3.2878125,-2.45)(3.3078125,2.61) \psline[linewidth=0.03cm](2.6878126,-2.45)(2.7078125,2.61) \psline[linewidth=0.03cm](2.0878124,-2.45)(2.1078124,2.61) \psline[linewidth=0.01cm,arrowsize=0.05291667cm 8.0,arrowlength=3.0,arrowinset=0.4]{->}(3.9078126,2.11)(3.9078126,2.35) \psline[linewidth=0.01cm,arrowsize=0.05291667cm 8.0,arrowlength=3.0,arrowinset=0.4]{->}(4.5078125,0.87)(4.5078125,1.11) \psline[linewidth=0.01cm,arrowsize=0.05291667cm 8.0,arrowlength=3.0,arrowinset=0.4]{->}(2.1078124,2.07)(2.1078124,2.31) \psline[linewidth=0.01cm,arrowsize=0.05291667cm 8.0,arrowlength=3.0,arrowinset=0.4]{->}(2.7078125,2.07)(2.7078125,2.31) \psline[linewidth=0.01cm,arrowsize=0.05291667cm 8.0,arrowlength=3.0,arrowinset=0.4]{->}(3.3078125,2.07)(3.3078125,2.31) \psline[linewidth=0.01cm,arrowsize=0.05291667cm 8.0,arrowlength=3.0,arrowinset=0.4]{->}(4.5078125,2.09)(4.5078125,2.33) \psline[linewidth=0.01cm,arrowsize=0.05291667cm 
8.0,arrowlength=3.0,arrowinset=0.4]{->}(5.1078124,2.09)(5.1078124,2.33) \psline[linewidth=0.01cm,arrowsize=0.05291667cm 8.0,arrowlength=3.0,arrowinset=0.4]{->}(5.7078123,2.09)(5.7078123,2.33) \psline[linewidth=0.01cm,arrowsize=0.05291667cm 8.0,arrowlength=3.0,arrowinset=0.4]{->}(6.3078127,2.07)(6.3078127,2.31) \psline[linewidth=0.01cm,arrowsize=0.05291667cm 8.0,arrowlength=3.0,arrowinset=0.4]{->}(6.9078126,2.07)(6.9078126,2.31) \psline[linewidth=0.01cm,arrowsize=0.05291667cm 8.0,arrowlength=3.0,arrowinset=0.4]{->}(7.4878125,2.09)(7.4878125,2.33) \psline[linewidth=0.01cm,arrowsize=0.05291667cm 8.0,arrowlength=3.0,arrowinset=0.4]{->}(8.087812,2.11)(8.087812,2.35) \psline[linewidth=0.01cm,arrowsize=0.05291667cm 8.0,arrowlength=3.0,arrowinset=0.4]{->}(8.687813,2.09)(8.687813,2.33) \psline[linewidth=0.01cm,arrowsize=0.05291667cm 8.0,arrowlength=3.0,arrowinset=0.4]{->}(9.287812,2.09)(9.287812,2.33) \psline[linewidth=0.01cm,arrowsize=0.05291667cm 8.0,arrowlength=3.0,arrowinset=0.4]{->}(5.1078124,1.13)(5.1078124,1.37) \psline[linewidth=0.01cm,arrowsize=0.05291667cm 8.0,arrowlength=3.0,arrowinset=0.4]{->}(3.9078126,0.49)(3.9078126,0.73) \psline[linewidth=0.01cm,arrowsize=0.05291667cm 8.0,arrowlength=3.0,arrowinset=0.4]{->}(3.2878125,-0.13)(3.2878125,0.11) \psline[linewidth=0.01cm,arrowsize=0.05291667cm 8.0,arrowlength=3.0,arrowinset=0.4]{->}(2.6878126,-0.51)(2.6878126,-0.27) \psline[linewidth=0.01cm,arrowsize=0.05291667cm 8.0,arrowlength=3.0,arrowinset=0.4]{->}(2.0878124,-0.95)(2.0878124,-0.71) \psbezier[linewidth=0.02,arrowsize=0.05291667cm 4.0,arrowlength=2.0,arrowinset=0.4]{->}(7.2678127,0.31)(6.6478124,0.95)(4.9078126,1.19)(4.2678127,0.67) \psbezier[linewidth=0.02,arrowsize=0.05291667cm 4.0,arrowlength=2.0,arrowinset=0.4]{->}(5.6278124,-1.11)(4.8078127,-0.87)(4.6278124,-0.59)(4.4078126,0.05) \psbezier[linewidth=0.02,arrowsize=0.05291667cm 4.0,arrowlength=2.0,arrowinset=0.4]{->}(0.8878125,1.75)(1.6478125,2.59)(2.1878126,1.83)(2.5678124,1.31) \usefont{T1}{ptm}{m}{n} 
\rput(0.9109375,1.66){} \usefont{T1}{ptm}{m}{n} \rput(0.92921877,1.32){$M_{-1}$} \usefont{T1}{ptm}{m}{n} \rput(6.3301563,-1.04){} \usefont{T1}{ptm}{m}{n} \rput(6.49,-1.32){$\partial\widetilde M$} \usefont{T1}{ptm}{m}{n} \rput(7.6834373,-0.22){$\widetilde M\setminus M$} \usefont{T1}{ptm}{m}{n} \rput(7.310625,0.2){} \end{pspicture} } \\ Fig. 5: slow flow on the smoothed manifold $\widetilde M$ \end{center} \textbf{Step 3.} The time-$1$ map $\widetilde H$ of the described flow is not invertible. To make it invertible, we use the ``slow-down procedure'' near the boundary of $\widetilde M$. Namely, we multiply the generator of the flow by a $C^\infty$-function that is equal to $0$ on the boundary $\partial \widetilde M$ of $\widetilde M$ and is positive in its interior $\mathop{\mathrm{Int}} \widetilde M$ (see Figure~5 again). For the function $\varphi$ described above, this procedure changes neither the Hausdorff dimension of the $(\varphi,1)$-nontypical set of the time-$1$ map of the flow nor the Lebesgue $4$-measure of the basin of attraction of $s_4$. Indeed, for every point of $\mathop{\mathrm{Int}} \widetilde M$, its positive semiorbit under the flow does not intersect the boundary of $\widetilde M$. Therefore, the $(\varphi,1)$-nontypical sets and the basins of attraction of $s_4$ for these two flows (the ``fast flow'' and the ``slow flow'') can differ only by subsets of $\partial \widetilde M$. But $\partial \widetilde M$ has dimension $3$ and affects neither the $4$-measure nor the Hausdorff dimension of larger sets (sets of Hausdorff dimension greater than $3$). We have constructed an example of a $C^\infty$-\textit{diffeomorphism} $H_{slow}$ of a compact $4$-manifold with smooth boundary for which the SET does not hold. \textbf{Step 4.} Note that in the previous example the boundary $\partial \widetilde M$ consists entirely of fixed points of $H_{slow}$ (one of these points, $s_4$, is the support of a global SRB measure).
To obtain an example on a boundaryless manifold, we consider the double of the manifold $\widetilde M$. Namely, we take two copies of $\widetilde M$ and glue the boundaries of these copies by the natural ``identity'' map. The two copies of the point $s_4$ are glued together. Our diffeomorphism $H_{slow}$ naturally extends to a diffeomorphism $\mathcal H$ of the new manifold $\mathcal M$ (because $\partial \widetilde M$ is fixed by $H_{slow}$). Obviously, $\mathcal H$ has a global SRB measure on $\mathcal M$ (the glued point $s_4$), but the SET does not hold for it, just as for $H_{slow}$. \end{proof} \subsection{Topological type of the manifold} Our construction shows that the $4$-dimensional manifold $M$ is homeomorphic to a neighborhood in $\mathbb R^4$ of a union of two circles. It is also homeomorphic to the direct product of a filled pretzel and an interval. The same can be said about $\widetilde M$. Hence, the manifold $\mathcal M$ is the double of the direct product of a filled pretzel and an interval. $\mathcal M$ can also be described as a connected sum of two copies of $S^3\times S^1$. Indeed, the filled pretzel is a connected sum of two solid tori $D^2\times S^1$. Hence, the direct product of a filled pretzel and an interval is homeomorphic to the connected sum of two copies of $D^3\times S^1$. But the doubling operation and the operation of taking a connected sum (of two equal manifolds) topologically commute. The double of $D^3\times S^1$ is obviously homeomorphic to $S^3\times S^1$ (since the double of every $n$-disk $D^n$ is $S^n$). Therefore, $\mathcal M$ is homeomorphic to a connected sum of two copies of $S^3\times S^1$. \begin{small} \begin{thebibliography}{99} \bibitem{Falc} {\sc K.~Falconer}. Fractal geometry: mathematical foundations and applications. \textit{New York}, 1990. \bibitem{IKS} {\sc Yu.~Ilyashenko}, {\sc V.~Kleptsyn}, {\sc P.~Saltykov}.
Openness of the set of boundary preserving maps of an annulus with intermingled attracting basins. \textit{Journal of Fixed Point Theory and Applications}, \textbf{3}:2 (2008), 449--463. \bibitem{KR} {\sc V.~Kleptsyn}, {\sc D.~Ryzhov}. Special ergodic theorem for hyperbolic maps. \textit{e-print}, arxiv.org/abs/1109.4060. \bibitem{Saltykov} {\sc P.~Saltykov}. Special ergodic theorem for Anosov diffeomorphisms on the 2-torus. \textit{Functional Analysis and its Applications}, \textbf{45}:1 (2011), 69--78. \bibitem{Young-SRB} {\sc L.-S.~Young}. What are SRB measures, and which dynamical systems have them? \textit{J. Statist. Phys.}, \textbf{108} (2002), 733--754. \end{thebibliography} \end{small} \end{document}
\begin{document} \begin{abstract} Using a cap product, we construct an explicit Poincar\'e duality isomorphism between the blown-up intersection cohomology and the Borel-Moore intersection homology, for any commutative ring of coefficients and second-countable, oriented pseudomanifolds. \end{abstract} \maketitle \section*{Introduction} Poincar\'e duality of singular spaces is the ``raison d'\^etre'' (\cite[Section 8.2]{FriedmanBook}) of intersection homology. It was proved by Goresky and MacPherson in their first paper on intersection homology (\cite{GM1}) for compact PL pseudomanifolds and rational coefficients, and extended to $\mathbb{Z}$ coefficients, under some hypothesis on the torsion part, by Goresky and Siegel in \cite{GS}. Friedman and McClure obtained this isomorphism for a topological pseudomanifold, from a cap product with a fundamental class, for any field of coefficients in \cite{FM}; see also \cite{FriedmanBook} for a commutative ring of coefficients with restrictions on the torsion. Using the blown-up intersection cohomology with compact supports, we have established in \cite{CST2} a Poincar\'e duality for any commutative ring of coefficients, without hypothesis on the torsion part, for any oriented paracompact pseudomanifold. Moreover, we also set up in \cite{CST5} a Poincar\'e duality between the blown-up intersection cohomology and the Borel-Moore intersection homology of an oriented PL pseudomanifold $X$. This paper is the ``cha\^inon manquant'' (the missing link): it establishes an explicit Poincar\'e duality isomorphism between the blown-up intersection cohomology and the Borel-Moore intersection homology, coming from a cap product with the fundamental class, for any commutative ring of coefficients and any second-countable, oriented pseudomanifold.
This allows the definition of an intersection product on the Borel-Moore intersection homology, induced from the Poincar\'e duality and a cup product, as in the case of manifolds, see \corref{cor:intersectionproduct} and \remref{rem:lastone}. Let us note that a Poincar\'e duality isomorphism cannot exist with an intersection cohomology defined as the homology of the dual complex of intersection chains, see \exemref{exam:pasdual}. In \secref{sec:back}, we recall basic background on pseudomanifolds and intersection homology. In particular, we present the complex of blown-up cochains, already introduced and studied in a series of papers \cite{CST6,CST7,CST4,CST1,CST2,CST5,CST3} (also called Thom-Whitney cochains in some works). \secref{sec:BM} contains the main properties of Borel-Moore intersection homology: the existence of a Mayer-Vietoris exact sequence in \thmref{thm:MV} and the recollection of some results established in \cite{CST5}. \secref{sec:Poincare} is devoted to the proof of the main result stated in \thmref{thm:dual}: the existence of an isomorphism between the blown-up intersection cohomology and the Borel-Moore intersection homology, by using the cap product with the fundamental class of a second-countable, oriented pseudomanifold, $$\mathcal D_{X}\colon {\mathscr H}^*_{\overline{p}}(X;R)\to {\mathfrak{H}}^{\infty,\overline{p}}_{n-*}(X;R),$$ for any commutative ring of coefficients. In \cite{CST5}, we proved it for PL pseudomanifolds, with a sheaf presentation of intersection homology. Here the duality is realized for topological pseudomanifolds, by a map defined at the chain-complex level by the cap product with a cycle representing the fundamental class. Notice also that the intersection homology of this work (\defref{def:lessimplexes}) is a general version, called tame homology in \cite{CST3} and non-GM in \cite{FriedmanBook}, which coincides with the original one for the perversities of \cite{GM1}.
Let us also observe that our definition of Borel-Moore intersection homology coincides with the one studied by G. Friedman in \cite{MR2276609} for perversities depending only on the codimension of strata. Homology and cohomology are considered with coefficients in a commutative ring, $R$. In general, we do not make them explicit in the proofs. For any topological space $X$, we denote by ${\mathtt c} X=X\times [0,1]/X\times\{0\}$ the cone on $X$ and by ${\mathring{\tc}} X=X\times [0,1[/X\times\{0\}$ the open cone on $X$. Elements of the cones ${\mathtt c} X$ and ${\mathring{\tc}} X$ are denoted $[x,t]$ and the apex is ${\mathtt v}=[-,0]$. \tableofcontents \section{Background}\label{sec:back} \subsection{Pseudomanifold} In \cite{GM1}, M. Goresky and R. MacPherson introduced intersection homology for the study of pseudomanifolds. Some basic properties of intersection homology, such as the existence of a Mayer-Vietoris sequence, do not require such a structure and exist for filtered spaces. \begin{definition} A \emph{filtered space of (formal) dimension $n$,} $X$, is a Hausdorff space endowed with a filtration by closed subsets, $$\emptyset=X_{-1}\subseteq X_0\subseteq X_1\subseteq\dots\subseteq X_n=X,$$ such that $X_n\backslash X_{n-1}\neq \emptyset$. The \emph{strata} of $X$ of dimension $i$ are the connected components $S$ of $X_{i}\backslash X_{i-1}$; we denote $\dim S=i$ and ${\rm codim\,} S=\dim X-\dim S$. The \emph{regular strata} are the strata of dimension $n$ and the \emph{singular set} is the subspace $\Sigma =X_{n-1}$. We denote by ${\mathcal S}_{X}$ (or ${\mathcal S}$ if there is no ambiguity) the set of non-empty strata. \end{definition} \begin{definition} An $n$-dimensional \emph{CS set} is a filtered space of dimension $n$, $X$, such that, for any $i\in\{0,\dots,n\}$, $X_i\backslash X_{i-1}$ is an $i$-dimensional topological manifold or the empty set.
Moreover, for each point $x \in X_i \backslash X_{i-1}$, $i\neq n$, there exist \begin{enumerate}[(i)] \item an open neighborhood $V$ of $x$ in $X$, endowed with the induced filtration, \item an open neighborhood $U$ of $x$ in $X_i\backslash X_{i-1}$, \item a compact filtered space $L$ of dimension $n-i-1$, whose cone ${\mathring{\tc}} L$ is endowed with the filtration $({\mathring{\tc}} L)_{i}={\mathring{\tc}} L_{i-1}$, \item a homeomorphism, $\varphi \colon U \times {\mathring{\tc}} L\to V$, such that \begin{enumerate}[(a)] \item $\varphi(u,{\mathtt v})=u$, for any $u\in U$, where ${\mathtt v}$ is the apex of the cone ${\mathring{\tc}} L$, \item $\varphi(U\times {\mathring{\tc}} L_{j})=V\cap X_{i+j+1}$, for all $j\in \{0,\dots,n-i-1\}$. \end{enumerate} \end{enumerate} The filtered space $L$ is called a \emph{link} of $x$. \end{definition} Except in the recollection of a previous result (see \propref{prop:supersuperbredon}), this work is only concerned with particular CS sets, the pseudomanifolds. \begin{definition} An $n$-dimensional \emph{pseudomanifold} is an $n$-dimensional CS set for which the link of a point $x\in X_i\backslash X_{i-1}$, $i\neq n$, is a compact pseudomanifold $L$ of dimension $n-i-1$. \end{definition} For a pseudomanifold, the formal dimension of the underlying filtered space coincides with the classical dimension of the manifold $X_{n}\backslash X_{n-1}$. In \cite{GM1}, the pseudomanifolds are assumed to have no strata of codimension~1. Here, we do not require this property. The class of pseudomanifolds is large enough to include (\cite{GM2}) complex algebraic or analytic varieties, real analytic varieties, Whitney and Thom-Mather stratified spaces, quotients of manifolds by compact Lie groups acting smoothly, Thom spaces of vector bundles over triangulable compact manifolds, suspensions of manifolds, etc. \begin{remark}\label{rem:plentyofdef} For the convenience of the reader, we first collect basic topological definitions.
A topological space $X$ is said to be \begin{enumerate} \item \emph{separable} if it contains a countable, dense subset; \item \emph{second-countable} if its topology has a countable basis; that is, there exists a countable collection $\mathcal U = \{U_{j} \mid j \in \mathbb{N}\}$ of open subsets such that any open subset of $X$ can be written as a union of elements of a subfamily of $\mathcal U$; \item \emph{hemicompact} if it is locally compact and there exists a countable sequence of relatively compact open subsets, $(U_{i})_{i\in \mathbb{N}}$, such that $\overline{U}_{i}\subset U_{i+1}$ and $X=\cup_{i}U_{i}$. \end{enumerate} To relate the hypotheses of the following results to those of previous works, we list some interactions between these notions. \begin{itemize} \item A second-countable space is separable and, for metric spaces, the two properties are equivalent (\cite[Theorem 16.9]{Wil}). \item A space is locally compact and second-countable if, and only if, it is metrizable and hemicompact, see \cite[Corollaire of Proposition 16]{MR0173226}. \end{itemize} \end{remark} As second-countability is a hereditary property, any open subset of a second-countable pseudomanifold is also second-countable. Moreover, we also know that a second-countable pseudomanifold is paracompact, separable, metrizable and hemicompact. \subsection{Intersection homology} We consider intersection homology relative to the general perversities defined in \cite{MacPherson90}. \begin{definition}\label{def:perversite} A \emph{perversity on a filtered space,} $X$, is a map, $\overline{p}\colon {\mathcal S}\to \mathbb{Z}$, defined on the set of strata of $X$ and taking the value~0 on the regular strata. Among them, let us mention the null perversity $\overline{0}$, constant with value~0, and the \emph{top perversity} defined by $\overline{t}(S)={\rm codim\,} S-2$ on singular strata. (This gives $\overline{t}(S)=-1$ for codimension~1 strata.)
For any perversity, $\overline{p}$, the perversity $D\overline{p}:=\overline{t}-\overline{p}$ is called the \emph{complementary perversity} of $\overline{p}$. The pair $(X,\overline{p})$ is called a \emph{perverse space}. For a pseudomanifold, we say \emph{perverse pseudomanifold}. \end{definition} \begin{example} Let $(X,\overline{p})$ be a perverse space of dimension $n$. \begin{itemize} \item An \emph{open perverse subspace $(U,\overline{p})$} is an open subset $U$ of $X$, endowed with the induced filtration and a perversity still denoted $\overline{p}$ and defined as follows: if $S\subset U$ is a stratum of $U$, such that $S\subset U\cap S'$ with $S'$ a stratum of $X$, then $\overline{p}(S)=\overline{p}(S')$. In the case of a perverse pseudomanifold, $(U,\overline{p})$ is again a perverse pseudomanifold. \item If $M$ is a connected topological manifold, the product $M\times X$ is a filtered space for the \emph{product filtration,} $\left(M \times X\right) _i = M \times X_{i}$. The perversity $\overline p$ induces a perversity on $M\times X$, still denoted $\overline p$ and defined by $\overline p(M \times S) = \overline p(S)$ for each stratum $S$ of $X$. \item If $X$ is compact, the open cone ${\mathring{\tc}} X $ is endowed with the \emph{conical filtration,} $\left({\mathring{\tc}} X\right) _i ={\mathring{\tc}} X_{i-1}$, $0\leq i\leq n+1$, where ${\mathring{\tc}} \,\emptyset=\{ {\mathtt v} \}$ is the apex of the cone. A perversity $\overline{p}$ on ${\mathring{\tc}} X$ induces a perversity on $X$ still denoted $\overline{p}$ and defined by $\overline{p}(S)=\overline{p}(S\times ]0,1[)$ for each stratum $S$ of $X$. \end{itemize} \end{example} \emph{For the rest of this section, we consider a perverse space $(X,\overline{p})$.} We now introduce a chain complex giving the intersection homology with coefficients in $R$, cf. \cite{CST3}.
\begin{definition}\label{def:regularsimplex} A \emph{regular simplex} is a continuous map $\sigma\colon\Delta\to X$ whose domain is a Euclidean simplex decomposed as a join, $\Delta=\Delta_{0}\ast\Delta_{1}\ast\dots\ast\Delta_{n}$, such that $\sigma^{-1}X_{i} =\Delta_{0}\ast\Delta_{1}\ast\dots\ast\Delta_{i}$ for all~$i \in \{0, \dots, n\}$ and $\Delta_n \ne \emptyset$. Given a regular Euclidean simplex $\Delta = \Delta_0 * \dots *\Delta_n$, we denote by ${\mathfrak{d}}\Delta$ the regular part of the chain $\partial \Delta$. More precisely, we set ${\mathfrak{d}} \Delta =\partial (\Delta_0 * \dots * \Delta_{n-1})* \Delta_n$, if $\dim(\Delta_n) = 0 $ and ${\mathfrak{d}} \Delta = \partial \Delta$, if $\dim(\Delta_n)\geq 1$. For any regular simplex $\sigma \colon\Delta \to X$, we set ${\mathfrak{d}} \sigma=\sigma_* \circ {\mathfrak{d}}$. Notice that ${\mathfrak{d}}^2=0$. We denote by ${\mathfrak{C}}_{*}(X;R)$ the complex of linear combinations of regular simplices (called finite chains) with the differential~${\mathfrak{d}}$. \end{definition} \begin{definition}\label{def:lessimplexes} The \emph{perverse degree of} a regular simplex $\sigma\colon\Delta=\Delta_{0}\ast \dots\ast\Delta_{n} \to X$ is the $(n+1)$-tuple, $\|\sigma\|=(\|\sigma\|_0,\dots,\|\sigma\|_n)$, where $\|\sigma\|_{i}= \dim \sigma^{-1}X_{n-i}=\dim (\Delta_{0}\ast\dots\ast\Delta_{n-i})$, with the convention $\dim \emptyset=-\infty$. For each stratum $S$ of $X$, the \emph{perverse degree of $\sigma$ along $S$} is defined by $$\|\sigma\|_{S}=\left\{ \begin{array}{cl} -\infty,&\text{if } S\cap \sigma(\Delta)=\emptyset,\\ \|\sigma\|_{{\rm codim\,} S},&\text{otherwise.} \end{array}\right. $$ A regular simplex $\sigma$ is \emph{$\overline{p}$-allowable} if \begin{equation*} \|\sigma\|_{S}\leq \dim \Delta-{\rm codim\,} S+\overline{p}(S), \end{equation*} for any stratum $S$.
A finite chain $\xi$ is \emph{$\overline{p}$-allowable} if it is a linear combination of $\overline{p}$-allowable simplices and of \emph{$\overline{p}$-intersection} if $\xi$ and its boundary ${\mathfrak{d}} \xi$ are $\overline{p}$-allowable. We denote by ${\mathfrak{C}}_{*}^{\overline{p}}(X;R)$ the complex of $\overline{p}$-intersection chains and by ${\mathfrak{H}}_{*}^{\overline{p}}(X;R)$ its homology, called \emph{$\overline{p}$-intersection homology}. \end{definition} If $(U,\overline{p})$ is an open perverse subspace of $(X,\overline{p})$, we define the \emph{complex of relative $\overline{p}$-intersection chains} as the quotient ${\mathfrak{C}}_{*}^{\overline{p}}(X,U;R)= {\mathfrak{C}}_{*}^{\overline{p}}(X;R)/ {\mathfrak{C}}_{*}^{\overline{p}}(U;R)$. Its homology is denoted ${\mathfrak{H}}_{*}^{\overline{p}}(X,U;R)$. Finally, if $K\subset U$ is compact, we have ${\mathfrak{H}}_{*}^{\overline{p}}(X,X\backslash K;R)= {\mathfrak{H}}_{*}^{\overline{p}}(U,U\backslash K;R) $ by excision, cf. \cite[Corollary 4.5]{CST3}. \begin{remark} This homology is called tame intersection homology in \cite{CST3}. As it is the only one we use in this work, for the sake of simplicity, we call it intersection homology. It coincides with the non-GM intersection homology of \cite{FriedmanBook} (see \cite[Theorem~B]{CST3}) and with intersection homology for the original perversities of \cite{GM1}, see \cite[Remark 3.9]{CST3}. The GM-perversities introduced by Goresky and MacPherson in \cite{GM1} depend only on the codimension of the strata and satisfy a growth condition, $\overline{p}(k)\leq \overline{p}(k+1)\leq \overline{p}(k)+1$, that implies the topological invariance of the intersection homology groups, provided there are no strata of codimension~1. In \cite{CST3}, we prove that a certain topological invariance is still verified within the framework of strata-dependent perversities.
For that, we consider the intrinsic space, $X^*$, associated to a pseudomanifold $X$ (introduced by King in \cite{MR800845}) and use pushforward and pullback perversities. In particular, if no singular stratum of $X$ becomes regular in $X^*$, we establish the invariance of intersection homology under a refinement of the stratification (\cite[Remark 6.14]{CST3}). \end{remark} \subsection{Blown-up intersection cohomology} Let $N_{*}(\Delta)$ and $N^*(\Delta)$ be the simplicial chains and cochains, with coefficients in $R$, of a Euclidean simplex $\Delta$. Given a face $F$ of $\Delta$, we write ${\mathbf 1}_{F}$ for the element of $N^*(\Delta)$ taking the value 1 on $F$ and 0 otherwise. We also denote by $(F,0)$ the same face viewed as a face of the cone ${\mathtt c}\Delta=[{\mathtt v}]\ast \Delta $ and by $(F,1)$ the face ${\mathtt c} F$ of ${\mathtt c} \Delta$. The apex is denoted $(\emptyset,1)={\mathtt c} \emptyset =[{\mathtt v}]$. If $\Delta=\Delta_{0}\ast\dots\ast\Delta_{n}$ is a regular Euclidean simplex, we set $${\widetilde{N}}^*(\Delta)=N^*({\mathtt c} \Delta_{0})\otimes\dots\otimes N^*({\mathtt c} \Delta_{n-1})\otimes N^*(\Delta_{n}).$$ A basis of ${\widetilde{N}}^*(\Delta)$ consists of the elements ${\mathbf 1}_{(F,\varepsilon)}={\mathbf 1}_{(F_{0},\varepsilon_{0})}\otimes\dots\otimes {\mathbf 1}_{(F_{n-1},\varepsilon_{n-1})}\otimes {\mathbf 1}_{F_{n}}$, where $\varepsilon_{i}\in\{0,1\}$ and $F_{i}$ is a face of $\Delta_{i}$ for $i\in\{0,\dots,n\}$ or the empty set with $\varepsilon_{i}=1$ if $i<n$. We set $|{\mathbf 1}_{(F,\varepsilon)}|_{>s}=\sum_{i>s}(\dim F_{i}+\varepsilon_{i})$, with $\varepsilon_{n}=0$. \begin{definition}\label{def:degrepervers} Let $\ell\in \{1,\ldots,n\}$.
The \emph{$\ell$-perverse degree} of ${\mathbf 1}_{(F,\varepsilon)}\in {\widetilde{N}}^*(\Delta)$ is $$ \|{\mathbf 1}_{(F,\varepsilon)}\|_{\ell}=\left\{ \begin{array}{ccl} -\infty&\text{if} & \varepsilon_{n-\ell}=1,\\ |{\mathbf 1}_{(F,\varepsilon)}|_{> n-\ell} &\text{if}& \varepsilon_{n-\ell}=0. \end{array}\right.$$ For a cochain $\omega = \sum_b\lambda_b \ {\mathbf 1}_{(F_b,\varepsilon_b) }\in{\widetilde{N}}^*(\Delta)$ with $\lambda_{b}\neq 0$ for all $b$, the \emph{$\ell$-perverse degree} is $$\|\omega \|_{\ell}=\max_{b}\|{\mathbf 1}_{(F_b,\varepsilon_b)}\|_{\ell}.$$ By convention, we set $\|0\|_{\ell}=-\infty$. \end{definition} Let $(X,\overline{p})$ be a perverse space and $\sigma\colon \Delta=\Delta_{0}\ast\dots\ast\Delta_{n}\to X$ a regular simplex. We set ${\widetilde{N}}^*_{\sigma}={\widetilde{N}}^*(\Delta)$. Given the inclusion of a face, $\delta_{\ell}\colon \Delta' \to\Delta$, we set $\partial_{\ell}\sigma=\sigma\circ\delta_{\ell}\colon \Delta'\to X$ with the induced filtration $\Delta'=\Delta'_{0}\ast\dots\ast\Delta'_{n}$. The \emph{blown-up complex} of $X$ is the cochain complex ${\widetilde{N}}^*(X;R)$ composed of the elements $\omega$ associating to each regular simplex $\sigma\colon \Delta_{0}\ast\dots\ast\Delta_{n}\to X$ an element $\omega_{\sigma}\in {\widetilde{N}}^*_{\sigma}$ such that $\delta_{\ell}^*(\omega_{\sigma})=\omega_{\partial_{\ell}\sigma}$, for any regular face operator $\delta_{\ell}\colon\Delta'\to\Delta$. The differential $d \omega$ is defined by $(d \omega)_{\sigma}=d(\omega_{\sigma})$. The \emph{perverse degree of $\omega$ along a singular stratum $S$} equals $$\|\omega\|_S=\sup\left\{ \|\omega_{\sigma}\|_{{\rm codim\,} S}\mid \sigma\colon \Delta\to X \; \text{regular such that } \sigma(\Delta)\cap S\neq\emptyset \right\}.$$ By setting $\|\omega\|_{S}=0$ for any regular stratum $S$, we get a map $\|\omega\|\colon {\mathcal S}\to \mathbb{N}$.
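To make \defref{def:degrepervers} concrete, we include a small computation. This is an illustration of ours, not taken from the cited sources; the simplices and filtration indices below are chosen only for this example.

```latex
% Illustrative computation of the \ell-perverse degree (example of ours).
\begin{example}
Take $n=1$ and the regular simplex $\Delta=\Delta_{0}\ast\Delta_{1}$, with
$\Delta_{0}=[e_{0},e_{1}]$ and $\Delta_{1}=[e_{2},e_{3}]$, so that
${\widetilde{N}}^*(\Delta)=N^*({\mathtt c}\Delta_{0})\otimes N^*(\Delta_{1})$.
The only relevant value is $\ell=1$, for which $n-\ell=0$.
\begin{itemize}
\item For $\omega_{1}={\mathbf 1}_{(\Delta_{0},1)}\otimes{\mathbf 1}_{\Delta_{1}}$,
we have $\varepsilon_{0}=1$, hence $\|\omega_{1}\|_{1}=-\infty$.
\item For $\omega_{2}={\mathbf 1}_{(\Delta_{0},0)}\otimes{\mathbf 1}_{\Delta_{1}}$,
we have $\varepsilon_{0}=0$, hence
$\|\omega_{2}\|_{1}=|{\mathbf 1}_{(F,\varepsilon)}|_{>0}
=\dim\Delta_{1}+\varepsilon_{1}=1+0=1$.
\end{itemize}
Thus the tensor factors carried by the cone direction, with
$\varepsilon_{n-\ell}=1$, kill the $\ell$-perverse degree, while the
remaining factors contribute through their dimensions.
\end{example}
```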
\begin{definition}\label{def:blowup} A \emph{cochain $\omega\in {\widetilde{N}}^*(X;R)$ is $\overline{p}$-allowable} if $\| \omega\|\leq \overline{p}$ and of \emph{$\overline{p}$-intersection} if $\omega$ and $d\omega$ are $\overline{p}$-allowable. We denote by ${\widetilde{N}}^*_{\overline{p}}(X;R)$ the complex of $\overline{p}$-intersection cochains and by ${\mathscr H}^*_{\overline{p}}(X;R)$ its homology, called \emph{blown-up $\overline{p}$-intersection cohomology} of $X$. \end{definition} Let us recall its main properties. First, the canonical projection ${\rm pr} \colon X\times\mathbb{R}\rightarrow X$ induces an isomorphism (\cite[Theorem D]{CST4}) \begin{equation}\label{pro} {\rm pr}^* \colon {\mathscr H}^*_{\overline{p}}(X;R)\rightarrow{\mathscr H}^*_{\overline{p}}(X\times \mathbb{R};R). \end{equation} Also, if $L$ is a compact pseudomanifold and $\overline{p}$ a perversity on the cone ${\mathring{\tc}} L$, inducing $\overline{p}$ on $L$, we have \cite[Theorem E]{CST4}: \begin{equation}\label{Cone} {\mathscr H}^k_{\overline{p}}({\mathring{\tc}} L;R) =\begin{cases} {\mathscr H}^k_{\overline{p}}(L;R), & \text{if }k\leq\overline p({\mathtt v}),\\ 0, & \text{if }k>\overline p({\mathtt v}), \end{cases} \end{equation} where ${\mathtt v}$ is the apex of the cone. If $k\leq\overline p({\mathtt v})$, the isomorphism ${\mathscr H}^k_{\overline{p}}({\mathring{\tc}} L;R)\cong {\mathscr H}^k_{\overline{p}}(L;R)$ is given by the inclusion $L \times ]0,1[ = {\mathring{\tc}} L \backslash\{{\mathtt v}\} \hookrightarrow {\mathring{\tc}} L$. \begin{definition}\label{def:Upetit} Let $\mathcal U$ be an open cover of $X$. A \emph{$\mathcal U$-small simplex} is a regular simplex, $\sigma\colon \Delta=\Delta_{0}\ast\dots\ast\Delta_{n}\to X$, such that there exists $U\in\mathcal U$ with ${\rm Im\,}\sigma\subset U$.
The \emph{blown-up complex of $\mathcal U$-small cochains of $X$ with coefficients in $R$,} written ${\widetilde{N}}^{*,\mathcal U}(X;R)$, is the cochain complex made up of elements $\omega$, associating to any $\mathcal U$-small simplex, $\sigma\colon\Delta= \Delta_{0}\ast\dots\ast\Delta_{n}\to X$, an element $\omega_{\sigma}\in {\widetilde{N}}^*(\Delta)$, so that $\delta_{\ell}^*(\omega_{\sigma})=\omega_{\partial_{\ell}\sigma}$, for any face operator, $\delta_{\ell}\colon \Delta'_{0}\ast\dots\ast\Delta'_{n}\to \Delta_{0}\ast\dots\ast\Delta_{n}$, with $\Delta'_{n}\neq\emptyset$. If $\overline{p}$ is a perversity on $X$, we denote by ${\widetilde{N}}^{*,\mathcal U}_{\overline{p}}(X;R)$ the complex of $\mathcal U$-small cochains verifying $ \|\omega\|\leq \overline{p}$ and $\|d \omega\|\leq\overline{p}$. \end{definition} \begin{proposition}{\cite[Corollary 9.7]{CST4}}\label{cor:Upetits} The restriction map, $\rho_{\mathcal U}\colon {\widetilde{N}}^{*}_{\overline{p}}(X;R) \to {\widetilde{N}}^{*,\mathcal U}_{\overline{p}}(X;R)$, is a quasi-isomorphism. \end{proposition} Finally, the blown-up intersection cohomology satisfies the Mayer-Vietoris property. \begin{proposition}{\cite[Theorem C]{CST4}}\label{thm:MVcourte} Let $(X,\overline{p})$ be a paracompact perverse space, endowed with an open cover $\mathcal U =\{W_{1},W_{2}\}$ and a subordinate partition of unity, $(f_{1},f_{2})$. For $i=1,\,2$, we denote by $\mathcal U_{i}$ the cover of $W_{i}$ consisting of the open subsets $(W_{1}\cap W_{2}, f_{i}^{-1}(]1/2,1]))$ and by $\mathcal U$ the cover of $X$, union of the covers $\mathcal U_{i}$.
Then, the canonical inclusions, $W_{i}\subset X$ and $W_{1}\cap W_{2}\subset W_{i}$, induce a short exact sequence, where $\varphi(\omega_{1},\omega_{2})=\omega_{1}-\omega_{2}$, $$ \xymatrix@C=4mm{ 0\ar[r]& {\widetilde{N}}^{*,\mathcal U}_{\overline{p}}(X;R)\ar[r] & {\widetilde{N}}^{*,\mathcal U_{1}}_{\overline{p}}(W_{1};R) \oplus {\widetilde{N}}^{*,\mathcal U_{2}}_{\overline{p}}(W_{2};R) \ar[r]^-{\varphi}& {\widetilde{N}}^{*}_{\overline{p}}(W_{1}\cap W_{2};R)\ar[r]& 0. }$$ \end{proposition} \section{Borel-Moore intersection homology}\label{sec:BM} In a filtered space $X$, locally finite chains are sums, perhaps infinite, $\xi=\sum_{j\in J}\lambda_{j}\sigma_{j}$, such that every point $x\in X$ has a neighborhood $U_{x}$ meeting the supports of only finitely many regular simplices $\sigma_{j}$ (see \defref{def:regularsimplex}) with non-zero coefficient $\lambda_{j}$. \begin{definition}\label{def:BM} Let $(X,\overline{p})$ be a perverse space. We denote by ${\mathfrak{C}}^{\infty,\overline{p}}_{*}(X;R)$ the complex of locally finite chains of $\overline{p}$-intersection with the differential ${\mathfrak{d}}$. Its homology, ${\mathfrak{H}}^{\infty,\overline{p}}_{*}(X;R)$, is called \emph{the locally finite (or Borel-Moore) $\overline p$-intersection homology.} \end{definition} We recall a characterization of locally finite $\overline{p}$-intersection chains. \begin{proposition}{\cite[Proposition 3.4]{CST5}}\label{prop:BMprojectivelim} Let $(X,\overline{p})$ be a perverse space. Suppose that $X$ is locally compact, metrizable and separable. Then, the complex of locally finite $\overline{p}$-intersection chains is isomorphic to the inverse limit of complexes, $${\mathfrak{C}}^{\infty,\overline{p}}_{*}(X;R) \cong \varprojlim_{K\subset X}{\mathfrak{C}}^{\overline{p}}_{*}(X,X\backslash K;R), $$ where the limit is taken over all compact subsets of $X$.
\end{proposition} Since locally finite chains in an open subset of $X$ are not necessarily locally finite in $X$, the complex ${\mathfrak{C}}^{\infty,\overline{p}}_{*}(-;R)$ is not functorial for the inclusions of open subsets. To get around this defect, we introduce a contravariant chain complex as in \cite{Bre}. In the context of intersection homology, a similar approach is taken by Friedman (see \cite[Section 2.3.2]{MR2276609}), the sheafification of the resulting complex being nothing but Deligne's sheaf of \cite{GM2}. So we set $${\mathfrak{C}}^{\infty,X,\overline{p}}_{*}(U;R):= \varprojlim_{K\subset U} {\mathfrak{C}}^{\overline{p}}_{*}(X,X\backslash K;R), $$ where $K$ runs over the family of compact subsets of $U$. An element $\alpha\in {\mathfrak{C}}^{\infty,X,\overline{p}}_{*}(U;R)$ is a family $\alpha=\langle\alpha_K\rangle_{K}$, indexed by the compact subsets of $U$, with $\alpha_K \in {\mathfrak{C}}_{*}^{\overline{p}}(X;R)$ and $\alpha_{K'} - \alpha_{K} \in {\mathfrak{C}}_{*}^{\overline{p}}(X\backslash K;R)$, if $K \subset K'$. In particular, $\alpha=0$ if, and only if, $\alpha_K \in{\mathfrak{C}}_{*}^{\overline{p}}(X\backslash K;R)$ for every $K$. For the construction of the projective limit, an exhaustive family of compact subsets suffices. Therefore, if $U$ is hemicompact, we may use a countable increasing sequence of compact subsets, $(K_{i})_{i\in\mathbb{N}}$, and get $\alpha=\langle \alpha_{i}\rangle_{i}$, with $\alpha_{i}\in {\mathfrak{C}}_{*}^{\overline{p}}(X;R)$ and $\alpha_{i+1} - \alpha_{i} \in {\mathfrak{C}}_{*}^{\overline{p}}(X\backslash K_{i};R)$. Given two open subsets $V \subset U \subset X$, we denote by \begin{equation}\label{equa:maps2} I^{X,\overline{p}}_{V,U}\colon \varprojlim_{K\subset U} {\mathfrak{C}}^{\overline{p}}_{*}(X,X\backslash K;R) \to \varprojlim_{K\subset V} {\mathfrak{C}}^{\overline{p}}_{*}(X,X\backslash K;R) \end{equation} the map induced by the identity.
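To illustrate the description of an element of ${\mathfrak{C}}^{\infty,X,\overline{p}}_{*}(U;R)$ as a coherent family $\langle\alpha_{i}\rangle_{i}$, here is a toy example of ours (not taken from \cite{CST5}), on the real line with its trivial filtration.

```latex
% Toy example (ours): the fundamental Borel-Moore cycle of the real line,
% written as a coherent family over the compact subsets K_i=[-i,i].
\begin{example}
Let $X=U=\mathbb{R}$, trivially filtered and with the zero perversity, so
that $\overline{0}$-intersection chains are ordinary singular chains. For
$K_{i}=[-i,i]$, set $\alpha_{i}=\sum_{j=-i-1}^{i}\sigma_{j}$, where
$\sigma_{j}\colon[0,1]\to\mathbb{R}$ is the affine simplex $t\mapsto j+t$.
Then ${\mathfrak{d}}\alpha_{i}=[i+1]-[-i-1]$ lies in
${\mathfrak{C}}_{0}^{\overline{0}}(X\backslash K_{i};R)$, and
$\alpha_{i+1}-\alpha_{i}=\sigma_{-i-2}+\sigma_{i+1}
\in{\mathfrak{C}}_{1}^{\overline{0}}(X\backslash K_{i};R)$.
Hence $\alpha=\langle\alpha_{i}\rangle_{i}$ is a cycle of
${\mathfrak{C}}^{\infty,X,\overline{0}}_{1}(\mathbb{R};R)$; it represents
the fundamental class generating
${\mathfrak{H}}^{\infty,\overline{0}}_{1}(\mathbb{R};R)\cong R$, although
$H_{1}(\mathbb{R};R)=0$ for ordinary (finite) singular homology.
\end{example}
```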
Via the maps $I^{X,\overline{p}}_{V,U}$, the complex ${\mathfrak{C}}^{\infty,X,\overline{p}}_{*}(-;R)$ thus defines a contravariant functor on the poset of open subsets of $X$. Moreover, it is an appropriate substitute for the study of locally finite $\overline p$-intersection homology, as the following result shows. \begin{proposition}\label{prop:BMprojectivelimU} Let $(X,\overline{p})$ be a locally compact, second-countable perverse space and $U \subset X$ an open subset. The natural restriction $I_{U}^{\overline{p}}\colon {\mathfrak{C}}^{\infty,\overline{p}}_{*}(U;R) \to {\mathfrak{C}}^{\infty,X,\overline{p}}_{*}(U;R)$ is a quasi-isomorphism. \end{proposition} \begin{proof} Let $(K_{i})_{i\in\mathbb{N}}$ be an increasing sequence of compact subsets of $U$, covering $U$ and cofinal in the family of compact subsets of $U$. Since the maps ${\mathfrak{C}}_{*}^{\overline{p}}(U,U\backslash K_{i+1})\to {\mathfrak{C}}_{*}^{\overline{p}}(U,U\backslash K_{i})$ and ${\mathfrak{C}}_{*}^{\overline{p}}(X,X\backslash K_{i+1})\to {\mathfrak{C}}_{*}^{\overline{p}}(X,X\backslash K_{i})$ are surjective, these two inverse systems verify the Mittag-Leffler condition. Thus the inclusions $ ({\mathfrak{C}}_{*}^{\overline{p}}(U,U\backslash K_{i}))_{i} \to ({\mathfrak{C}}_{*}^{\overline{p}}(X,X\backslash K_{i}))_{i} $ give a morphism of short exact sequences (\cite[Proposition 3.5.8]{MR1269324}): $$\xymatrix@C=4mm{ 0\ar[r]& \varprojlim^1_{i}{\mathfrak{H}}_{k+1}^{\overline{p}}(U,U\backslash K_{i}) \ar[d]\ar[r]& H_{k}(\varprojlim_{i}{\mathfrak{C}}_{*}^{\overline{p}}(U,U\backslash K_{i})) \ar[d]\ar[r]& \varprojlim_{i}{\mathfrak{H}}_{k}^{\overline{p}}(U,U\backslash K_{i}) \ar[d]\ar[r]& 0\\ 0\ar[r]& \varprojlim^1_{i}{\mathfrak{H}}_{k+1}^{\overline{p}}(X,X\backslash K_{i}) \ar[r]& H_{k}(\varprojlim_{i}{\mathfrak{C}}_{*}^{\overline{p}}(X,X\backslash K_{i})) \ar[r]& \varprojlim_{i}{\mathfrak{H}}_{k}^{\overline{p}}(X,X\backslash K_{i}) \ar[r]& 0. }$$ The result is now a consequence of the excision property.
\end{proof} The existence of a Mayer-Vietoris exact sequence in this context can be deduced from a sheaf-theoretic argument in the case of perversities depending only on the codimension of strata, as mentioned in \cite[Proof of Proposition 2.20]{MR2276609}. We provide below a direct proof for general perversities. \begin{theorem}\label{thm:MV} Let $(X,\overline{p})$ be a locally compact, second-countable perverse space and $\{U,V\}$ an open covering of $X$. Then we have a Mayer-Vietoris exact sequence, with coefficients in $R$, \begin{equation}\label{equa:MV} \xymatrix@C=4.9mm{ \dots \ar[r] & {\mathfrak{H}}^{\infty,\overline{p}}_{k}(X) \ar[r] & {\mathfrak{H}}^{\infty,\overline{p}}_{k}(V) \oplus {\mathfrak{H}}^{\infty,\overline{p}}_{k}(U) \ar[r] & {\mathfrak{H}}^{\infty,\overline{p}}_{k}(U\cap V) \ar[r] & {\mathfrak{H}}^{\infty,\overline{p}}_{k-1}(X) \ar[r] & \dots } \end{equation} \end{theorem} \begin{proof} As $U$ and $V$ are hemicompact, we choose sequences $(U_{i})_{i\in\mathbb{N}}$ and $(V_{i})_{i\in\mathbb{N}}$ of relatively compact open subsets of $U$ and $V$, respectively, such that $\overline{U}_{i}\subset U_{i+1}$, $\cup_{i\in\mathbb{N}}U_{i}=U$ and $\overline{V}_{i}\subset V_{i+1}$, $\cup_{i\in\mathbb{N}}V_{i}=V$. Let us notice that $(\overline{U}_{i}\cup \overline{V}_{i})_{i\in\mathbb{N}}$ and $(\overline{U}_{i}\cap \overline{V}_{i})_{i\in\mathbb{N}}$ are sequences of compact subsets such that $\overline{U}_{i}\cup \overline{V}_{i}\subset \overline{U}_{i+1}\cup \overline{V}_{i+1}$ and $\overline{U}_{i}\cap \overline{V}_{i}\subset \overline{U}_{i+1}\cap \overline{V}_{i+1}$, which are exhaustive for $U\cup V$ and $U\cap V$ respectively. As observed in the proof of \propref{prop:BMprojectivelimU}, the sequences $({\mathfrak{C}}_{*}^{\overline{p}}(X,X\backslash K_{i}))_{i\in\mathbb{N}}$ satisfy the Mittag-Leffler property, for $K_{i}=\overline{U}_{i}$, $\overline{V}_{i}$, $\overline{U}_{i}\cap \overline{V}_{i}$ or $\overline{U}_{i}\cup \overline{V}_{i}$.
Therefore, the short exact sequences $$0\to {\mathfrak{C}}_{*}^{\overline{p}}(X\backslash (\overline{U}_{i}\cup \overline{V}_{i})) \to {\mathfrak{C}}_{*}^{\overline{p}}( X\backslash \overline{U}_{i})\oplus {\mathfrak{C}}_{*}^{\overline{p}}(X\backslash \overline{V}_{i}) \to {\mathfrak{C}}_{*}^{\overline{p}}(X\backslash \overline{U}_{i})+ {\mathfrak{C}}_{*}^{\overline{p}}(X\backslash \overline{V}_{i}) \to 0 $$ induce the short exact sequence \begin{equation}\label{MV2} \def\scriptstyle{\scriptstyle} \xymatrix@C=4mm{ 0\ar[r]& \varprojlim_{i}{\mathfrak{C}}_{*}^{\overline{p}}(X, X\backslash (\overline{U}_{i}\cup \overline{V}_{i})) \ar[r]& \varprojlim_{i}{\mathfrak{C}}_{*}^{\overline{p}}(X, X\backslash \overline{U}_{i})\oplus \varprojlim_{i} {\mathfrak{C}}_{*}^{\overline{p}}(X, X\backslash \overline{V}_{i}) \ar[r]& \varprojlim_{i} {\mathfrak{Q}}_{*}^{\overline{p}}(X, \overline{U}_{i},\overline{V}_{i}) \ar[r]& 0} \end{equation} with $ {\mathfrak{Q}}_{*}^{\overline{p}}(X, \overline{U}_{i},\overline{V}_{i}) ={\mathfrak{C}}_{*}^{\overline{p}}(X)/({\mathfrak{C}}_{*}^{\overline{p}}(X\backslash \overline{U}_{i})+ {\mathfrak{C}}_{*}^{\overline{p}}(X\backslash \overline{V}_{i}))$. The long exact sequence associated to \eqref{MV2} gives \eqref{equa:MV}. Let us verify this. $\bullet$ First, by \propref{prop:BMprojectivelimU}, we have ${\mathfrak{H}}^{\infty,\overline{p}}_{k}(U\cup V) \cong H_{k}(\varprojlim_{i}{\mathfrak{C}}_{*}^{\overline{p}}(X, X\backslash (\overline{U}_{i}\cup \overline{V}_{i})))$, ${\mathfrak{H}}^{\infty,\overline{p}}_{k}(U) \cong H_{k}(\varprojlim_{i}{\mathfrak{C}}_{*}^{\overline{p}}(X, X\backslash \overline{U}_{i}))$ and ${\mathfrak{H}}^{\infty,\overline{p}}_{k}(V) \cong H_{k}(\varprojlim_{i}{\mathfrak{C}}_{*}^{\overline{p}}(X, X\backslash \overline{V}_{i}))$.
$\bullet$ As for the \emph{third term} of the sequence \eqref{MV2}, we proved in \cite[Proposition 4.1]{CST3} that the identity map on $X$ induces a quasi-isomorphism, ${\mathfrak{C}}^{\overline{p}}_{*}(X\backslash \overline{U}_{i}) + {\mathfrak{C}}^{\overline{p}}_{*}(X\backslash \overline{V}_{i}) \to {\mathfrak{C}}^{\overline{p}}_{*}(X\backslash (\overline{U}_{i}\cap \overline{V}_{i}))$. Therefore, it induces a quasi-isomorphism $$ {\mathfrak{Q}}_{*}^{\overline{p}}(X, \overline{U}_{i},\overline{V}_{i}) \to {\mathfrak{C}}^{\overline{p}}_{*}(X,X\backslash (\overline{U}_{i}\cap \overline{V}_{i})). $$ With the Mittag-Leffler property and \propref{prop:BMprojectivelimU}, the identity map also gives a quasi-isomorphism \begin{equation}\label{equa:MV3} \psi\colon \varprojlim_{i} {\mathfrak{Q}}_{*}^{\overline{p}}(X, \overline{U}_{i},\overline{V}_{i}) \to \varprojlim_{i} {\mathfrak{C}}^{\overline{p}}_{*}(X,X\backslash (\overline{U}_{i}\cap \overline{V}_{i})) = {\mathfrak{C}}^{\infty,X,\overline{p}}_{*}(U\cap V). \end{equation} \end{proof} The following properties have been proven in \cite{CST5}. \begin{proposition}{\cite[Proposition 3.5]{CST5}}\label{prop:foisR} Let $(L,\overline{p})$ be a compact perverse space. Then we have $${\mathfrak{H}}^{\infty,\overline{p}}_{k}(\mathbb{R}^m\times L;R) = {\mathfrak{H}}_{k-m}^{\overline{p}}(L;R).$$ \end{proposition} \begin{proposition}{\cite[Proposition 3.7]{CST5}}\label{prop:conefoisR} Let $L$ be a compact space and $\overline{p}$ be a perversity on the cone ${\mathring{\tc}} L$ of apex ${\mathtt v}$. Then we have $${\mathfrak{H}}^{\infty,\overline{p}}_{k}(\mathbb{R}^m\times {\mathring{\tc}} L;R)=\left\{ \begin{array}{ccl} 0&\text{if}&k\leq m+D\overline{p}({\mathtt v})+1,\\ {\mathfrak{H}}^{\overline{p}}_{k-m-1}(L;R) &\text{if}&k\geq m+ D\overline{p}({\mathtt v})+2. \end{array}\right.
$$ \end{proposition} \section{Poincar\'e duality}\label{sec:Poincare} \subsection{Fundamental class and cap product}\label{subsec:cap} Let $(X,\overline{p})$ be a perverse pseudomanifold of dimension $n$. Recall from \cite{GM1} that an $R$-orientation of $X$ is an $R$-orientation of the manifold $X^n:=X\backslash X_{n-1}$. For any $x\in X^n$, we denote by ${\mathtt{o}}_{x}\in H_{n}(X^n,X^n\backslash\{x\};R)={\mathfrak{H}}_{n}^{\overline{0}}(X,X\backslash\{x\};R)$ the associated local orientation. We know (see \cite{FM} or \cite[Theorem 8.1.18]{FriedmanBook}) that, for any compact $K\subset X$, there exists a unique element $\Gamma^{K}_{X}\in {\mathfrak{H}}_{n}^{\overline{0}}(X,X\backslash K;R)$ whose restriction equals ${\mathtt{o}}_{x}$ for any $x\in K$. These classes give a Borel-Moore homology class, called \emph{the fundamental class of $X$,} $$\Gamma_{X}=\langle \Gamma^{K}_{X}\rangle_{K}\in {\mathfrak{H}}^{\infty,\overline{0}}_{n}(X;R).$$ The fundamental classes are natural with respect to inclusions of open subsets of $X$. Given two open subsets $V \subset U \subset X$, the map induced in Borel-Moore homology by the identity \begin{equation}\label{equa:restr} I^{X,\overline{p}}_{V,U}\colon \varprojlim_{K\subset U} {\mathfrak{C}}^{\overline{p}}_{*}(X,X\backslash K;R) \to \varprojlim_{K\subset V} {\mathfrak{C}}^{\overline{p}}_{*}(X,X\backslash K;R) \end{equation} sends $\Gamma_{U}$ to $\Gamma_{V}$, see \cite[Theorem 8.1.18]{FriedmanBook}. Suppose that $X$ is equipped with two perversities, $\overline{p}$ and $\overline{q}$.
In \cite[Proposition 4.2]{CST4}, we prove the existence of a map \begin{equation}\label{equa:cupprduitespacefiltre} -\smile -\colon{\widetilde{N}}^k_{\overline{p}}(X;R)\otimes {\widetilde{N}}^{\ell}_{\overline{q}}(X;R)\to {\widetilde{N}}^{k+\ell}_{\overline{p}+\overline{q}}(X;R), \end{equation} inducing an associative and commutative graded product, called \emph{intersection cup product,} \begin{equation}\label{equa:cupprduitTWcohomologie} -\smile -\colon {\mathscr H}^k_{\overline{p}}(X;R)\otimes {\mathscr H}^{\ell}_{\overline{q}}(X;R)\to {\mathscr H}^{k+\ell}_{\overline{p}+\overline{q}}(X;R). \end{equation} Let us also mention, from \cite[Propositions 6.6 and 6.7]{CST4}, the existence of \emph{cap products,} \begin{equation}\label{equa:caphomology} -\frown - \colon \widetilde{N}_{\overline{p}}^{i}(X;R)\otimes {{\mathfrak{C}}}_{j}^{\overline{q}}(X;R) \to {{\mathfrak{C}}}_{j-i}^{\overline{p}+\overline{q}}(X;R) \end{equation} such that $ (\eta\smile \omega)\frown \xi=\eta\frown(\omega\frown\xi)$ and ${\mathfrak{d}}(\omega\frown \xi)=d\omega\frown\xi+(-1)^{|\omega|}\omega\frown {\mathfrak{d}} \xi $. Thus, this cap product induces a \emph{cap product} in homology, \begin{equation}\label{equa:caphomologyhomology} - \frown - \colon {\mathscr H}_{\overline{p}}^{i}(X;R)\otimes {{\mathfrak{H}}}_{j}^{\overline{q}}(X;R) \to {{\mathfrak{H}}}_{j-i}^{\overline{p}+\overline{q}}(X;R).
\end{equation} The map \eqref{equa:caphomology} can be extended to a map \begin{equation}\label{equa:capBM} -\frown - \colon \widetilde{N}_{\overline{p}}^{i}(X;R)\otimes {{\mathfrak{C}}}_{j}^{\infty,\overline{q}}(X;R) \to {{\mathfrak{C}}}_{j-i}^{\infty,\overline{p}+\overline{q}}(X;R) \end{equation} as follows:\\ given $\alpha\in {\widetilde{N}}^i_{\overline{p}}(X;R)$ and $\eta=\langle \eta_{K}\rangle_{K}\in {\mathfrak{C}}^{\infty,\overline{q}}_{j}(X;R)$, we set $\alpha\frown \eta=\langle \alpha\frown \eta_{K}\rangle_{K}\in {\mathfrak{C}}^{\infty,\overline{p}+\overline{q}}_{j-i}(X;R)$. This definition makes sense since the cap product \eqref{equa:caphomology} is natural. Moreover, from the compatibility with the differentials, we get an induced map, \begin{equation}\label{equa:caphomologyhomologyBM} - \frown - \colon {\mathscr H}_{\overline{p}}^{i}(X;R)\otimes {{\mathfrak{H}}}_{j}^{\infty,\overline{q}}(X;R) \to {{\mathfrak{H}}}_{j-i}^{\infty,\overline{p}+\overline{q}}(X;R). \end{equation} As $\Gamma_{X}\in {{\mathfrak{H}}}_{n}^{\infty,\overline{0}}(X;R)$, the cap product with the fundamental class gives a map, \begin{equation}\label{equa:dualmap} \mathcal D_{X}:=-\frown \Gamma_{X}\colon {\mathscr H}^*_{\overline{p}}(X;R)\to {\mathfrak{H}}^{\infty,\overline{p}}_{n-*}(X;R), \end{equation} that is the Poincar\'e duality map of the next theorem. Let us emphasize that this map already exists at the level of chain complexes. In this paradigm, Poincar\'e duality comes from a cap product with the fundamental class, $\Gamma_{X}$. Since the latter is a Borel-Moore homology class, we need to adapt \eqref{equa:caphomologyhomology}. In fact, there are two ways of proceeding. \begin{itemize} \item We may keep the blown-up cohomology, for which the cap product with $\Gamma_{X}$ gives a Borel-Moore homology class as in \eqref{equa:dualmap}. This is the approach of the present work, leading to \thmref{thm:dual} below.
\item We may also work with the blown-up cohomology with compact supports, for which the cap product with $\Gamma_{X}$ gives a (finite) homology class. That is, we extend \eqref{equa:caphomology} to \begin{equation}\label{equa:capcompact} -\frown - \colon \widetilde{N}_{\overline{p},c}^{i}(X;R)\otimes {{\mathfrak{C}}}_{j}^{\infty,\overline{q}}(X;R) \to {{\mathfrak{C}}}_{j-i}^{\overline{p}+\overline{q}}(X;R). \end{equation} This approach was used in \cite{CST2} and led to the Poincar\'e duality (\cite[Theorem B]{CST2}) $$-\frown \Gamma_{X}\colon {\mathscr H}^{i}_{\overline{p},c}(X;R)\xrightarrow{\cong} {\mathfrak{H}}_{n-i}^{\overline{p}}(X;R).$$ \end{itemize} \subsection{Main theorem} We prove the existence of an isomorphism between the Borel-Moore $\overline{p}$-intersection homology and the blown-up $\overline{p}$-intersection cohomology. \begin{theorem}\label{thm:dual} Let $(X,\overline p)$ be an $n$-dimensional, second-countable and oriented perverse pseudomanifold. The cap product with the fundamental class induces a Poincar\'e duality isomorphism $$\mathcal D_{X}\colon {\mathscr H}^*_{\overline{p}}(X;R) \xrightarrow{\cong} {\mathfrak{H}}^{\infty,\overline{p}}_{n-*}(X;R).$$ \end{theorem} The proof uses the following result. \begin{proposition}{\cite[Proposition 13.2]{CST5}}\label{prop:supersuperbredon} Let $\mathcal F_{X}$ be the category whose objects are (stratified homeomorphic to) open subsets of a given paracompact and separable CS set $X$ and whose morphisms are stratified homeomorphisms and inclusions. Let $\mathcal Ab_{*}$ be the category of graded abelian groups. Let $F^{*},\,G^{*}\colon \mathcal F_{X}\to \mathcal Ab_{*}$ be two functors and $\Phi\colon F^{*}\to G^{*}$ a natural transformation satisfying the conditions listed below. \begin{enumerate}[(i)] \item The functors $F^{*}$ and $G^{*}$ admit Mayer-Vietoris exact sequences and the natural transformation $\Phi$ induces a commutative diagram between these sequences.
\item If $\{U_{\alpha}\}$ is a disjoint collection of open subsets of $X$ and $\Phi\colon F^{*}(U_{\alpha})\to G^{*}(U_{\alpha})$ is an isomorphism for each $\alpha$, then $\Phi\colon F^{*}(\bigsqcup_{\alpha}U_{\alpha})\to G^{*}(\bigsqcup_{\alpha}U_{\alpha})$ is an isomorphism. \item If $L$ is a compact filtered space such that $X$ has an open subset stratified homeomorphic to $\mathbb{R}^i\times {\mathring{\tc}} L$ and, if $\Phi\colon F^{*}(\mathbb{R}^i\times ({\mathring{\tc}} L\backslash \{{\mathtt v}\}))\to G^{*}(\mathbb{R}^i\times ({\mathring{\tc}} L\backslash \{{\mathtt v}\}))$ is an isomorphism, then so is $\Phi\colon F^{*}(\mathbb{R}^i\times {\mathring{\tc}} L)\to G^{*}(\mathbb{R}^i\times {\mathring{\tc}} L)$. (Here, ${\mathtt v}$ is the apex of the cone ${\mathring{\tc}} L$.) \item If $U$ is an open subset of $X$ contained in a single stratum and homeomorphic to a Euclidean space, then $\Phi\colon F^{*}(U)\to G^{*}(U)$ is an isomorphism. \end{enumerate} Then $\Phi\colon F^{*}(X)\to G^{*}(X)$ is an isomorphism. \end{proposition} \begin{proof}[Proof of \thmref{thm:dual}] As any open subset $U\subset X$ is an oriented pseudomanifold, we may consider the associated homomorphism defined in \eqref{equa:dualmap}, $\mathcal D_{U}\colon {\mathscr H}^{k}_{\overline{p}}(U) \to {\mathfrak{H}}^{\infty,X,\overline{p}}_{n-k}(U)$, where we use the identification ${\mathfrak{H}}^{\infty,\overline{p}}_{n-k}(U) \xrightarrow{\cong} {\mathfrak{H}}^{\infty,X,\overline{p}}_{n-k}(U)$ given by \propref{prop:BMprojectivelimU}. Let $V\subset U\subset X$ be two open subsets of $X$, endowed with the pseudomanifold structure induced from $X$.
The canonical inclusion (see \eqref{equa:maps2}) and the cap product with the fundamental class give a commutative diagram, \begin{equation}\label{conm} \xymatrix{ {\mathscr H}^k_{\overline{p}}(U) \ar[d]_{\mathcal D_{U}} \ar[r]^-{(I^{\overline{p}}_{V,U})^*} & {\mathscr H}^k_{\overline{p}}(V) \ar[d]^{\mathcal D_{V}}\\ {\mathfrak{H}}_{n-k}^{\infty,X,\overline{p}}(U) \ar[r]^-{(I^{X,\overline{p}}_{V,U})^*} & {\mathfrak{H}}_{n-k}^{\infty,X,\overline{p}}(V). } \end{equation} We apply \propref{prop:supersuperbredon} to the natural transformation $\mathcal D_{U}\colon {\mathscr H}_{\overline{p}}^k(U) \to {\mathfrak{H}}_{n-k}^{\infty,X,\overline{p}}(U)$. The proof is reduced to the verification of its hypotheses. $\bullet$ First, let us notice that the conditions (ii) and (iv) are immediate. $\bullet$ \emph{Property} (i). Let $\mathcal U = \{W_1,W_2\}$ be an open covering of $X$. Mayer-Vietoris sequences are constructed in \propref{thm:MVcourte} and \thmref{thm:MV}. We build a morphism between them with the following diagram. In the first row, we keep the notation of \propref{thm:MVcourte}. As in the proof of \thmref{thm:MV}, we choose sequences $(U_{i})_{i\in\mathbb{N}}$ and $(V_{i})_{i\in\mathbb{N}}$ of relatively compact open subsets of $W_{1}$ and $W_{2}$, respectively, such that $\overline{U}_{i}\subset U_{i+1}$, $\cup_{i\in\mathbb{N}}U_{i}=W_{1}$ and $\overline{V}_{i}\subset V_{i+1}$, $\cup_{i\in\mathbb{N}}V_{i}=W_{2}$. The last row corresponds to \eqref{MV2}.
$$ \def\scriptstyle{\scriptstyle} \xymatrix@C=3.6mm{ 0\ar[r]& {\widetilde{N}}^{*,\mathcal U}_{\overline{p}}(X) \ar[r] & {\widetilde{N}}^{*,\mathcal U_{1}}_{\overline{p}}(W_{1}) \oplus {\widetilde{N}}^{*,\mathcal U_{2}}_{\overline{p}}(W_{2}) \ar[r] \ar@{}[dl]+<10pt>^{\fbox{\tiny{I}}}& {\widetilde{N}}^*_{\overline{p}}(W_{1}\cap W_{2}) \ar[r] \ar@{}[dl]+<10pt>^{\fbox{\tiny{II}}}& 0\\ & {\widetilde{N}}^*_{\overline{p}}(X) \ar[r] \ar[u]^{\rho} \ar[d]_{\frown \gamma_X}& {\widetilde{N}}^*_{\overline{p}}(W_{1}) \oplus {\widetilde{N}}^*_{\overline{p}}(W_{2}) \ar[r] \ar[u]_{\rho} \ar@{}[dl]+<10pt>^{\fbox{\tiny{III}}} \ar[d]^{\frown \gamma_{W_1} \oplus \frown \gamma_{W_2}}& {\widetilde{N}}^*_{\overline{p}}(W_{1}\cap W_{2}) \ar@{=}[u] \ar@{}[dl]+<10pt>^{\fbox{\tiny{IV}}} \ar[d]^{\frown \gamma_{W_1 \cap W_2}} &&\\ &{\mathfrak{C}}^{\infty,\overline{p}}_{n-*}(X) \ar[r] \ar[d]_{I^{\overline{p}}_{X}}& {\mathfrak{C}}^{\infty,\overline{p}}_{n-*}(W_{1}) \oplus {\mathfrak{C}}^{\infty,\overline{p}}_{n-*}(W_{2}) \ar[r] \ar@{}[dl]+<10pt>^{\fbox{\tiny{V}}} \ar[d]^{I^{\overline{p}}_{W_{1}}\oplus {I^{\overline{p}}_{W_{2}}}}& {\mathfrak{C}}^{\infty,\overline{p}}_{n-*}(W_{1}\cap W_{2}) \ar@{}[dl]+<10pt>^{\fbox{\tiny{VI}}} \ar[d]^{I^{\overline{p}}_{W_{1}\cap W_{2}}} &&\\ &{\mathfrak{C}}^{\infty,X,\overline{p}}_{n-*}(X) \ar[r] & {\mathfrak{C}}^{\infty,X,\overline{p}}_{n-*}(W_{1}) \oplus {\mathfrak{C}}^{\infty,X,\overline{p}}_{n-*}(W_{2}) \ar[r] \ar@{}[dl]+<10pt>^{\fbox{\tiny{VII}}}& {\mathfrak{C}}^{\infty,X,\overline{p}}_{n-*}(W_{1}\cap W_{2}) \ar@{}[dl]+<10pt>^{\fbox{\tiny{VIII}}}&&\\ 0\ar[r]& \varprojlim_{i}{\mathfrak{C}}_{n-*}^{\overline{p}}(X, X\backslash (\overline{U}_{i}\cup \overline{V}_{i})) \ar@{=}[u] \ar[r]& \varprojlim_{i}{\mathfrak{C}}_{n-*}^{\overline{p}}(X, X\backslash \overline{U}_{i})\oplus \varprojlim_{i}{\mathfrak{C}}_{n-*}^{\overline{p}}(X, X\backslash \overline{V}_{i}) \ar@{=}[u] \ar[r]& \varprojlim_{i} {\mathfrak{Q}}_{n-*}^{\overline{p}}(X, \overline{U}_{i},\overline{V}_{i})
\ar[u]_{\psi} \ar[r]&0. } $$ The maps $\rho$ denote restrictions and the cycles $\gamma_{Y}\in {\mathfrak{C}}^{\infty,\overline{0}}_{n}(Y)$ represent the fundamental class of $Y$, for $Y=X,\,W_{1},\,W_{2}, W_{1}\cap W_{2}$. First, we prove that the above diagram commutes. The square VII is clearly commutative. The vertical maps of squares I, II, V and VI are induced by restrictions. Thus these squares are commutative. By naturality of the fundamental classes, we may choose for $\gamma_{W_1}$ the restriction of $\gamma_X$ and similarly for $\gamma_{W_2}$ and $\gamma_{W_1\cap W_2}$. So, diagrams III and IV commute. The diagram VIII is commutative by construction of the map $\psi$, see \eqref{equa:MV3}. Let us observe that the columns of the diagrams I, II, V, VI, VII and VIII are quasi-isomorphisms. This is a consequence of \cite[Theorem B]{CST4}, \propref{prop:BMprojectivelimU} and \eqref{equa:MV3} in the proof of \thmref{thm:MV}, respectively. So, we get the following commutative diagram, $$ \def\scriptstyle{\scriptstyle} \xymatrix@C=6mm{ \dots\ar[r]& {\mathscr H}^k_{\overline{p}}(X) \ar[r] \ar[d]^{\mathcal D_X}& {\mathscr H}^k_{\overline{p}}(W_{1}) \oplus {\mathscr H}^k_{\overline{p}}(W_{2}) \ar[r] \ar[d]^{\mathcal D_{W_1} \oplus \mathcal D_{W_2}} & {\mathscr H}^k_{\overline{p}}(W_{1}\cap W_{2}) \ar[r] \ar[d]^{\mathcal D_{W_1 \cap W_2}} & {\mathscr H}^{k+1}_{\overline{p}}(X) \ar[r] \ar[d]^{\mathcal D_X} & \dots \\ \dots \ar[r] & {\mathfrak{H}}_{n-k}^{\infty,\overline{p}}(X) \ar[r] & {\mathfrak{H}}_{n-k}^{\infty,\overline{p}}(W_{1})\oplus {\mathfrak{H}}_{n-k}^{\infty,\overline{p}}(W_{2}) \ar[r] & {\mathfrak{H}}_{n-k}^{\infty,\overline{p}}(W_{1}\cap W_{2}) \ar[r] & {\mathfrak{H}}_{n-k-1}^{\infty,\overline{p}}(X) \ar[r]& \dots. } $$ \emph{$\bullet$ Property} (iii). We apply \eqref{pro}, \eqref{Cone}, \propref{prop:foisR} and \propref{prop:conefoisR}.
First, we have the isomorphism $ {\mathscr H}^k_{\overline{p}}(\mathbb{R}^i\times {\mathring{\tc}} L) = 0 = {\mathfrak{H}}_{n-k}^{\infty,\overline{p}}(\mathbb{R}^i\times {\mathring{\tc}} L)$ for $k > \overline p({\mathtt v})$, or equivalently $n-k < i + D \overline p({\mathtt v}) +2$. (Observe that $D\overline{p}({\mathtt v})=n-i-2-\overline{p}({\mathtt v})$.) Next, let $k \leq \overline p({\mathtt v})$. The following commutative diagram comes from \eqref{conm}. $$ \xymatrix{ {\mathscr H}^k_{\overline{p}}(\mathbb{R}^i\times {\mathring{\tc}} L) \ar[r] \ar[d]_{\mathcal D_{\mathbb{R}^i \times {\mathring{\tc}} L }}& {\mathscr H}^k_{\overline{p}}(\mathbb{R}^i \times L \times ]0,1[) \ar[d]^{\mathcal D_{\mathbb{R}^i \times L \times ]0,1[ }} \\ {\mathfrak{H}}_{n-k}^{\infty,\overline{p}}(\mathbb{R}^i \times {\mathring{\tc}} L) \ar[r] & {\mathfrak{H}}_{n-k}^{\infty,\overline{p}}(\mathbb{R}^i \times L \times ]0,1[). } $$ The right-hand column is an isomorphism by hypothesis. Since the horizontal maps are also isomorphisms, we deduce that $\mathcal D_{\mathbb{R}^i \times {\mathring{\tc}} L }$ is an isomorphism. This proves Property (iii). \end{proof} \begin{corollary}\label{cor:intersectionproduct} Let $(X,\overline p)$ be an $n$-dimensional, second-countable and oriented perverse pseudomanifold. The Borel-Moore intersection homology can be endowed with an intersection product, induced from Poincar\'e duality and the cup product.
\end{corollary} \begin{proof} The following commutative diagram, whose vertical maps are isomorphisms, defines the intersection product $\pitchfork$ from the cup product, \begin{equation}\label{equa:fork} \xymatrix{ {\mathscr H}^k_{\overline{p}}(X;R)\otimes {\mathscr H}^{\ell}_{\overline{q}}(X;R) \ar[r]^-{-\smile-}\ar[d]^{\cong}_{\mathcal D_X\otimes \mathcal D_X}& {\mathscr H}^{k+\ell}_{\overline{p}+\overline{q}}(X;R) \ar[d]^{\mathcal D_X}_{\cong} \\ {\mathfrak{H}}_{n-k}^{\infty,\overline{p}}(X;R)\otimes {\mathfrak{H}}_{n-\ell}^{\infty,\overline{q}}(X;R) \ar[r]^-{-\pitchfork-}& {\mathfrak{H}}_{n-k-\ell}^{\infty,\overline{p}+\overline{q}}(X;R). } \end{equation} \end{proof} \begin{remark} If $\overline{p}\colon \mathbb{N}\to \mathbb{Z}$ is a loose perversity as in \cite{MR2276609}, we have an inclusion of complexes $\iota\colon {\mathfrak{C}}_{*}^{\infty,\overline{p}}(X;R) \hookrightarrow I^{\overline{p}} {\mathfrak{C}}_{*}^{\infty}(X;R)$, where $I^{\overline{p}} {\mathfrak{C}}_{*}^{\infty}(X;R)$ denotes the chain complex studied by Friedman in \cite{MR2276609}. With the technique of the proof of \thmref{thm:dual}, using \propref{prop:supersuperbredon}, we also deduce that $\iota$ is a quasi-isomorphism. For the complex $I^{\overline{p}} {\mathfrak{C}}_{*}^{\infty}(X;R)$, the existence of a Mayer-Vietoris exact sequence follows from the fact that its homology comes from sheaf theory, and the computations involving a cone are done in \cite[Propositions 2.18 and 2.20]{MR2276609}. Thus, the blown-up intersection cohomology is Poincar\'e dual to the Borel-Moore intersection homology defined in \cite{MR2276609}, for any commutative ring of coefficients. \end{remark} \begin{remark}\label{rem:GJE} An intersection cohomology, denoted ${\mathfrak{H}}_{\overline{p}}^{*}(X;R)$, can also be defined from the linear dual, ${\mathfrak{C}}_{\overline{p}}^{*}(X;R)=\hom({\mathfrak{C}}^{\overline{p}}_{*}(X;R),R)$.
This is the point of view adopted in \cite{FriedmanBook}, \cite{FR1}, \cite{FM}. If $(X,\overline{p})$ is a locally $(\overline{p},R)$-torsion free perverse space, this cohomology is isomorphic to the blown-up cohomology with the complementary perversity, ${\mathfrak{H}}_{\overline{p}}^{*}(X;R)\cong {\mathscr H}^*_{D\overline{p}}(X;R)$, see \cite[Theorem F]{CST4}. (We refer the reader to \cite{GM1} for the condition on the torsion, noting that it is always satisfied if $R$ is a field.) Therefore, in this case, there is a Poincar\'e duality, ${\mathfrak{H}}_{\overline{p}}^{k}(X;R)\cong {\mathfrak{H}}_{n-k}^{\infty,D\overline{p}}(X;R)$, induced by a cup product. But, in contrast to the situation for the blown-up cohomology, such an isomorphism does not exist in general, as the following example shows. \end{remark} \begin{example}\label{exam:pasdual} Let $X=\Sigma\mathbb{R} P^3\setminus\{*\}$ be the complement of a regular point in the suspension of the real projective space $\mathbb{R} P^3$, together with the constant perversity equal to $1$, denoted $\overline{1}$. (Observe that $\overline{1}=D\overline{1}$.) Direct calculations from the Mayer-Vietoris exact sequences give the following as the only nonzero values of the homology and cohomologies: \begin{enumerate}[(i)] \item ${\mathscr H}^0_{\overline{1}}(X;\mathbb{Z})= \mathbb{Z}$ and ${\mathscr H}^3_{\overline{1}}(X;\mathbb{Z})= \mathbb{Z}_{2}$, \item ${\mathfrak{H}}_{1}^{\infty,\overline{1}}(X;\mathbb{Z})=\mathbb{Z}_{2}$ and ${\mathfrak{H}}_{4}^{\infty,\overline{1}}(X;\mathbb{Z})=\mathbb{Z}$, \item ${\mathfrak{H}}^{0}_{\overline{1}}(X;\mathbb{Z})=\mathbb{Z}$ and ${\mathfrak{H}}^{1}_{\overline{1}}(X;\mathbb{Z})=\mathbb{Z}_{2}$. \end{enumerate} We notice that ${\mathscr H}^k_{\overline{1}}(X;R) \cong {\mathfrak{H}}^{\infty,\overline{1}}_{4-k}(X;R)$, as stated in \thmref{thm:dual}, but ${\mathfrak{H}}^{1}_{\overline{1}}(X;\mathbb{Z})=\mathbb{Z}_{2}\not\cong {\mathfrak{H}}_{3}^{\infty,\overline{1}}(X;\mathbb{Z})=0$.
\end{example} \begin{remark}\label{rem:lastone} In the case of a compact oriented pseudomanifold, a diagram like \eqref{equa:fork} has already been introduced in \cite[Section~4]{CST7} for the definition of a product in intersection homology, for any ring of coefficients. For a PL pseudomanifold, a product of geometric cycles is defined by Goresky and MacPherson (\cite{GM1}) for a simplicial intersection homology. The relationship between \eqref{equa:fork} and the geometric product can be specified as follows. (For the sake of simplicity, we consider coefficients in a field.) \begin{itemize} \item From a diagram similar to \eqref{equa:fork}, Friedman and McClure define an intersection product on the intersection homology of a PL pseudomanifold, via a duality isomorphism (\cite{FM}) with the intersection cohomology ${\mathfrak{H}}_{\overline{p}}^{*}(X)$ recalled in \remref{rem:GJE}. In \cite[Theorem 1.3]{2018arXiv181210585F}, they connect it with the geometric product of \cite{GM1}. \item As recalled in \remref{rem:GJE}, the blown-up cohomology studied in this paper is connected to ${\mathfrak{H}}_{\overline{p}}^{*}(X)$ through a natural quasi-isomorphism, which is compatible with the algebra structures (see \cite[Corollary 4.4 and end of Section 4]{CST6}). \end{itemize} Consequently, the intersection product on ${\mathfrak{H}}_{\ast}^{\bullet}(X;R)$ defined by \eqref{equa:fork} coincides with the original geometric product of Goresky and MacPherson in the PL compact case. Finally, we mention the existence of a construction of the intersection product via a Leinster partial algebra structure, due to Friedman \cite{MR3857199}, who extends to the perverse setting the construction made by McClure on the chain complex of a PL manifold, see \cite{MR2255502}.
\end{remark} \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR } \providecommand{\MRhref}[2]{ \href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} } \providecommand{\href}[2]{#2} \end{document}
\begin{document} \runningauthor{Bartels, Stensbo-Smidt, Moreno-Mu\~noz, Boomsma, Frellsen, Hauberg} \twocolumn[ \aistatstitle{Adaptive Cholesky Gaussian Processes} \aistatsauthor{Simon Bartels \And Kristoffer Stensbo-Smidt \And Pablo Moreno-Mu\~noz} \aistatsaddress{University of Copenhagen \And Technical University of Denmark \And Technical University of Denmark} \aistatsauthor{Wouter Boomsma \And Jes Frellsen \And S\o{}ren Hauberg} \aistatsaddress{University of Copenhagen \And Technical University of Denmark \And Technical University of Denmark} ] \tikzset{external/force remake=false} \tikzexternaldisable \setlength{\figwidth}{.5\textwidth} \setlength{\figheight}{.25\textheight} \definecolor{color1}{rgb}{1,0.6,0.2} \definecolor{color0}{rgb}{0,0.4717,0.4604} \renewcommand{\vec}{\boldsymbol} \begin{abstract} We present a method to approximate Gaussian process regression models for large datasets by considering only a subset of the data. Our approach is novel in that the size of the subset is selected on the fly during exact inference with little computational overhead. From an empirical observation that the log-marginal likelihood often exhibits a linear trend once a sufficient subset of a dataset has been observed, we conclude that many large datasets contain redundant information that only slightly affects the posterior. Based on this, we provide probabilistic bounds on the full model evidence that can identify such subsets. Remarkably, these bounds are largely composed of terms that appear in intermediate steps of the standard Cholesky decomposition, allowing us to modify the algorithm to adaptively stop the decomposition once enough data have been observed.
\end{abstract} \section{Introduction} \begin{figure} \caption{The log-marginal likelihood as a function of the size of the training set for five random permutations of the \texttt{pm25} dataset.}\label{fig:evidence} \end{figure} The key computational challenge in Gaussian process regression is to evaluate the log-marginal likelihood of the $N$ observed data points, which is known to have cubic complexity \parencite{rasmussenwilliams}. It has been observed \parencite{Chalupka2013comparison} that the random-subset-of-data approximation can be a hard-to-beat baseline for approximate Gaussian process inference. However, the question of how to choose the size of the subset is non-trivial to answer. Here we make an attempt. We first make an empirical observation when studying the behavior of the log-marginal likelihood with an increasing number of observations. \cref{fig:evidence} shows this progression for a variety of models. We elaborate on this figure in \cref{subsec:intuition}, but for now note that after a certain number of observations, determined by model and dataset, the log-marginal likelihood starts to progress with a linear trend. This suggests that we may leverage this near-linearity to estimate the log-marginal likelihood of the full dataset after having seen only a subset of the data. However, as the point of linearity differs between models and datasets, this point cannot be set in advance but must be estimated on the fly. In this paper, we investigate three main questions, namely 1) how to detect the near-linear trend when processing data points sequentially, 2) when it is safe to assume that this trend will continue, and 3) how to implement an efficient stopping strategy, that is, one without too much overhead compared to the exact computation. We approach these questions from a (frequentist) probabilistic numerics perspective \parencite{hennigRSPA2015}.
By treating the dataset as a collection of independent and identically distributed random variables, we provide expected upper and lower bounds on the log-marginal likelihood, which become tight when the above-mentioned linear trend arises. These bounds can be evaluated with little computational overhead by leveraging intermediate computations performed by the Cholesky decomposition that is commonly used for evaluating the log-marginal likelihood. We refer to our method as \emph{Adaptive Cholesky Gaussian Process} (ACGP\xspace{}). Our approach has a complexity of $\mathcal{O}(M^3)$, where $M$ is the processed subset size, inducing an overhead of $\mathcal{O}(M)$ over the Cholesky decomposition. The main difference to previous work is that our algorithm does \emph{not necessarily} look at the whole dataset, which makes it particularly useful in settings where the dataset is so large that even linear-time approximations are not tractable. When a dataset contains a large amount of redundant data, ACGP\xspace allows the inference procedure to stop early, saving precious compute---especially when the kernel function is expensive to evaluate. \section{Background} \label{sec:background} We use a \textsc{python}-inspired index notation, abbreviating for example $[y_1, \ldots, y_{n-1}]^{\top}$ as $\bm{y}_{:n}$; observe that the indexing starts at $1$. We write $\operatorname{Diag}$ for the operator that sets all off-diagonal entries of a matrix to $0$. \subsection{Gaussian Process Regression} We start by briefly reviewing Gaussian process (GP) regression models and how they are trained (see \textcite[Chapters 2 and 5.4]{rasmussenwilliams}). We consider the training dataset $\mathcal{D} = \{\bm{x}_n, y_n\}^N_{n=1}$ with inputs $\bm{x}_n \in \mathbb{R}^{D}$ and outputs $y_n \in \mathbb{R}$. The inputs are collected in the matrix $\bm{X} = [\bm{x}_1, \bm{x}_2, \ldots, \bm{x}_N]^{\top} \in \mathbb{R}^{N\times D}$.
A GP $f \sim \mathcal{GP}(m(\bm{x}), k(\bm{x}, \bm{x}'))$ is a collection of random variables defined in terms of a mean function, $m(\bm{x})$, and a covariance function or \emph{kernel}, $k(\bm{x}, \bm{x}') = \operatorname{cov}(f(\bm{x}), f(\bm{x}'))$, such that any finite collection of these random variables has a joint Gaussian distribution. Hence, the prior over $\bm{f}\colonequals f(\bm{X})$ is $\mathcal{N}(\bm{f}; m(\boldsymbol X), \mK_\text{ff})$, where we have used the shorthand notation $\mK_\text{ff} = k(\bm{X}, \bm{X})$. Without loss of generality, we assume a zero-mean prior, $m(\cdot)\colonequals 0$. We will consider the observations $\bm{y}$ as being noise-corrupted versions of the function values $\bm{f}$, and we shall parameterize this corruption through the likelihood function $p(\bm{y} \operatorname{|} \bm{f})$, which for regression tasks is typically assumed to be Gaussian, $p(\bm{y}\operatorname{|}\bm{f}) = \mathcal{N}(\bm{y}; \bm{f}, \sigma^2\I)$. For such a model, the posterior over test inputs $\boldsymbol X_*$ can be computed in closed form: $p(\bm{f}_*\operatorname{|}\bm{y})= \mathcal{N}(\bm{m}_* , \boldsymbol S_*)$, where \begin{align*} \bm{m}_* &= k(\boldsymbol X_*, \boldsymbol X)\bm{K}^{-1}\bm{y} \quad \text{ and } \\ \boldsymbol S_* &= k(\boldsymbol X_*, \boldsymbol X_*) - k(\boldsymbol X_*, \boldsymbol X)\bm{K}^{-1}k(\boldsymbol X, \boldsymbol X_*) \end{align*} with $\bm{K}\colonequals \mK_\text{ff} + \sigma^2\I$. By marginalizing over the function values, we obtain the marginal likelihood, $p(\bm{y})= \int p(\bm{y}\operatorname{|}\bm{f})p(\bm{f}) d\bm{f}$, the de facto metric for comparing the performance of models in the Bayesian framework. While this integral is not tractable in general, it does have a closed-form solution for Gaussian process regression.
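The closed-form posterior equations above can be illustrated with a minimal numerical sketch (not the paper's implementation); the squared-exponential kernel and its lengthscale are assumptions chosen for the example:

```python
import numpy as np

def rbf(A, B, lengthscale=0.2):
    # Squared-exponential kernel k(x, x') = exp(-||x - x'||^2 / (2 l^2)).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_posterior(X, y, X_star, noise=1e-6):
    # m_* = k(X_*, X) K^{-1} y  and  S_* = k(X_*, X_*) - k(X_*, X) K^{-1} k(X, X_*)
    # with K = K_ff + sigma^2 I, mirroring the closed-form equations in the text.
    K = rbf(X, X) + noise * np.eye(len(X))
    K_star = rbf(X_star, X)
    m = K_star @ np.linalg.solve(K, y)
    S = rbf(X_star, X_star) - K_star @ np.linalg.solve(K, K_star.T)
    return m, S

X = np.linspace(0, 1, 5)[:, None]
y = np.sin(2 * np.pi * X[:, 0])
m, S = gp_posterior(X, y, X)
```

With negligible noise the posterior mean nearly interpolates the training targets, and the posterior variances stay non-negative up to round-off.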
Given the GP prior, $p(\bm{f}) = \mathcal{N}(\mathbf{0}, \mK_\text{ff})$, and the Gaussian likelihood, the log-marginal likelihood can be found to be \begin{align} \label{eq:log_marginal} \log p(\bm{y}) = -\frac{1}{2}\left(\log \det{2\pi\bm{K}} + \bm{y}^{\top}\bm{K}^{-1}\bm{y} \right)\, . \end{align} Evaluating this expression costs $\mathcal{O}(N^3)$ operations. \subsection{Background on the Cholesky decomposition} \label{sec:cholesky} Inverting covariance matrices such as $\bm{K}$ is a slow and numerically unstable procedure. Therefore, in practice, one typically leverages the Cholesky decomposition of the covariance matrices to compute the inverses. The Cholesky decomposition of a symmetric and positive definite matrix $\boldsymbol K$ is the unique lower\footnote{Equivalently, one can define $\boldsymbol L$ to be upper triangular such that $\boldsymbol K=\boldsymbol L^{\top} \boldsymbol L$.} triangular matrix $\boldsymbol L$ such that $\boldsymbol K=\boldsymbol L\boldsymbol L^{\top}$ \parencite[Theorem 4.2.7]{golub2013matrix4}. The advantage of having such a decomposition is that inversion with triangular matrices amounts to Gaussian elimination. There are different ways to compute $\boldsymbol L$. The Cholesky of a $1\times 1$ matrix is the square root of the scalar. For larger matrices, \begin{align} \label{eq:cholesky} \operatorname{chol}[\boldsymbol K]= \begin{bmatrix}\operatorname{chol}[\boldsymbol K_{:s,:s}] & \boldsymbol 0\\ \boldsymbol T & \operatorname{chol}\left[\boldsymbol K_{s:,s:}-\boldsymbol T\boldsymbol T^{\top}\right] \end{bmatrix}, \end{align} where $\boldsymbol T\colonequals \boldsymbol K_{s:,:s}{\operatorname{chol}[\boldsymbol K_{:s,:s}]}^{-\top}$ and $s$ is any integer between $1$ and the size of $\boldsymbol K$.
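Both the Cholesky-based evaluation of the log-marginal likelihood and the block recursion can be checked on a random positive definite matrix; the following is an illustrative sketch, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)
N, s = 6, 3
A = rng.standard_normal((N, N))
K = A @ A.T + N * np.eye(N)   # symmetric positive definite stand-in for K_ff + sigma^2 I
y = rng.standard_normal(N)

# Log-marginal likelihood via the Cholesky factor L, with K = L L^T:
L = np.linalg.cholesky(K)
logdet = 2.0 * np.sum(np.log(np.diag(L)))
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))  # K^{-1} y via two triangular solves
log_marginal = -0.5 * (logdet + N * np.log(2.0 * np.pi) + y @ alpha)

# Block recursion: T = K[s:, :s] chol(K[:s, :s])^{-T}, and the trailing block of L
# is the Cholesky of the downdated matrix K[s:, s:] - T T^T.
T = np.linalg.solve(L[:s, :s], K[:s, s:]).T
L22 = np.linalg.cholesky(K[s:, s:] - T @ T.T)
```

The off-diagonal block of `L` coincides with `T`, and its trailing block with `L22`, which is exactly the recursion the adaptive stopping rule later exploits.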
Hence, extending a given Cholesky factor to a larger matrix requires three steps: \begin{enumerate} \setlength\itemsep{0em} \item solve the linear equation system for $\boldsymbol T$, \item apply the downdate $\boldsymbol K_{s:,s:}-\boldsymbol T\boldsymbol T^{\top}$ and \item compute the Cholesky of the downdated matrix. \end{enumerate} An important observation is that $\boldsymbol K_{s:,s:}-\boldsymbol T\boldsymbol T^{\top}$ is the posterior covariance matrix $\boldsymbol S_*+\sigma^2\boldsymbol I$ when considering $\boldsymbol X_{s:}$ as test points. We will make use of this observation in \cref{sec:bound_realization}. The log-determinant of $\boldsymbol K$ can be obtained from the Cholesky factor using $\log\det{\boldsymbol K}=2\sum_{n=1}^{N}\log \boldsymbol L_{nn}$. A similar recursive relationship exists between the quadratic form $\vec y^{\top} \inv{\boldsymbol K}\vec y$ and $\inv{\boldsymbol L}\vec y$ (see appendix, \cref{eq:recursive_LES}). \subsection{Related work} Much work has gone into tractable approximations of the log-marginal likelihood. Arguably, the most popular approximation methods for GPs are inducing point methods \parencite{quionero2005unifying,snelson2006sparse,titsias2009variational,hensman2013gaussian,hensman2017variational,shi2020sparse,artemev2021cglb}, where the dataset is approximated through a set of pseudo-data points (inducing points), summarizing information from nearby data. Other approaches involve building approximations to $\boldsymbol K$ \parencite{Fine2001Lowrank,Rahimi2008RandomFeatures,LazaroGredilla2010SparseSpectrum,Harbrecht2012pivotedCholesky,wilson2015kernel,Rudi2017Falkon,wang2019exact} or aggregation of distributed local approximations \parencite{gal2014distributed,deisenroth2015distributed}.
One may also consider separately the approximation of the quadratic form via linear solvers such as conjugate gradients \parencite{hestenes1952methods,Cutajar2016Preconditioning} and the approximation of the log-determinant \parencite{Fitzsimons2017BayesianDeterminant,Fitzsimons2017EntropicDeterminant,Dong2017scalable}. Another line of research is scaling the hardware \parencite{Nguyen2019distributedGP}. All of the above-referenced approaches have computational complexity of at least $\mathcal{O}(N)$ (with the exception of \textcite{hensman2013gaussian}, which uses mini-batching). However, the size of a dataset is seldom a deliberately chosen value but rather the ad hoc end of the sampling procedure. The dependence on the dataset size implies that more data requires more computational budget, even though more data might not be helpful. This is the main motivation for our work: to derive an approximation algorithm whose computational complexity does not depend on redundant data. The work closest in spirit to the present paper is by \textcite{artemev2021cglb}, who also propose lower and upper bounds on the quadratic form and the log-determinant. There are a number of differences, however. Their bounds rely on the method of conjugate gradients, whereas we work directly with the Cholesky decomposition. Furthermore, while their bounds are deterministic, ours are probabilistic, which can make them tighter in certain cases, as they do not need to hold in all worst-case scenarios. This is also the main difference to the work of \textcite{hensman2013gaussian}. Their bounds allow for mini-batching, but they are inherently deterministic when applied with full batch size. \section{Methodology} \label{sec:methods} In the following, we sketch our method. Our main goal is to convey the idea and intuition. To this end, we use suggestive notation. We refer the reader to the appendix for a more thorough and formal treatment.
\subsection{Intuition on the linear extrapolation} \label{subsec:intuition} The marginal likelihood is typically presented as a joint distribution, but, using the product rule, one can also view it from a cumulative perspective as a sum of log-conditionals: \begin{align} \label{eq:log_marginal_alternative} \log p(\bm{y}) &= \sum_{n=1}^N \log p(y_n \operatorname{|} \bm{y}_{:n})\ . \end{align} With this equation in hand, the phenomenon in \cref{fig:evidence} becomes much clearer. The figure shows the value of \cref{eq:log_marginal_alternative} for an increasing number of observations $n$. When the plot exhibits a linear trend, it is because the summands $\log p(y_{n}\operatorname{|} \bm{y}_{:n})$ become approximately constant, implying that the model is not gaining additional knowledge. In other words, new outputs are conditionally independent given the output observations seen so far. The key problem addressed in this paper is how to estimate the full marginal likelihood, $p(\bm{y})$, from only a subset of $M$ observations. The cumulative view of the log-marginal likelihood in \cref{eq:log_marginal_alternative} is our starting point. In particular, we will provide probabilistic bounds, which are functions of seen observations, on the estimate of the full marginal likelihood. These bounds will allow us to decide, on the fly, when we have seen enough observations to accurately estimate the full marginal likelihood. \subsection{Stopping strategy} Suppose that we have processed $M$ data points with $N-M$ data points yet to be seen. We can then decompose \cref{eq:log_marginal_alternative} into a sum of terms which have already been computed and a remaining sum \begin{align*} \log p(\vec y) &= \underbrace{\sum_{n=1}^M \log p(y_{n}\mid \vec y_{:n})}_{\log p(\vec y_{\mathcal{A}}): \text{ processed}} + \underbrace{\sum_{n=M+1}^{N} \log p(y_{n}\mid \vec y_{:n})}_{\log p(\vec y_{\mathcal{B}} \mid \vec y_{\mathcal{A}}): \text{ remaining}}.
\end{align*} Recall that we consider the pairs $(\vec x_i, y_i)$ as independent and identically distributed random variables. Hence, we could estimate $\log p(\vec y_{\mathcal{B}} \mid \vec y_{\mathcal{A}})$ as $(N-M)\log p(\vec y_{\mathcal{A}})/M$. Yet this estimator is biased, since $(\vec x_{M+1}, y_{M+1}), \dots, (\vec x_N, y_N)$ interact non-linearly through the kernel function. Instead, we will derive unbiased lower and upper bounds, $\mathcal{L}$ and $\mathcal{U}$. To obtain unbiased estimates, we use the last $m$ processed points, such that conditioned on the points up to $s\colonequals M-m$, the expected value of $\log p(\vec y)$ can be bounded from above and below: \begin{align*} \mathbb{E}[\mathcal{L}\operatorname{|} \bm{X}_{:s}, \bm{y}_{:s}] \leq \mathbb{E}[\log p(\vec y)\operatorname{|} \bm{X}_{:s}, \bm{y}_{:s}] \leq \mathbb{E}[\mathcal{U} \operatorname{|} \bm{X}_{:s}, \bm{y}_{:s}], \end{align*} and the observations from $s$ to $M$ can be used to estimate $\mathcal{L}$ and $\mathcal{U}$. \cref{fig:method_sketch} shows a sketch of our approach. \begin{figure} \caption{Illustration of how ACGP\xspace proceeds during estimation of the full $\log p(\bm{y})$.}\label{fig:method_sketch} \end{figure} We can then detect when the upper and lower bounds are sufficiently near each other, and stop computations early when the approximation is sufficiently good. More precisely, given a desired relative error $r$, we stop when \begin{align} \label{eq:stop_cond_r} \frac{\mathcal{U}-\mathcal{L}}{2\min(|\mathcal{U}|, |\mathcal{L}|)}<r \quad\text{and}\quad \operatorname{sign}(\mathcal{U})=\operatorname{sign}(\mathcal{L})\, . \end{align} If the bounds hold, then the estimator $(\mathcal{L}+\mathcal{U})/2$ achieves the desired relative error (\cref{lemma:rel_err_bound} in the appendix). This is in contrast to other approximations, where one specifies a computational budget rather than a desired accuracy.
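The stopping rule in \cref{eq:stop_cond_r} can be sanity-checked in a few lines; the bound values below are made up for illustration and are not produced by ACGP's actual bound computation:

```python
import numpy as np

def should_stop(lower, upper, r):
    # Stop when the bounds share a sign and the half-gap, relative to the
    # smaller magnitude, is below the target relative error r.
    return np.sign(upper) == np.sign(lower) and \
        (upper - lower) / (2.0 * min(abs(upper), abs(lower))) < r

# Hypothetical bounds on log p(y) and a 5% relative-error target.
lower, upper, r = -1050.0, -1000.0, 0.05
estimate = (lower + upper) / 2.0
```

Once the rule fires, the midpoint estimator meets the relative-error target for every admissible value between the bounds; with a wider gap (say a lower bound of $-1500$) the rule keeps processing.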
\subsection{Bounds on the log-marginal likelihood} \label{sec:bounds} From \cref{eq:log_marginal}, we see that the log-marginal likelihood requires computing a log-determinant of the kernel matrix and a quadratic term. In the following we present upper and lower bounds for both the log-determinant ($\mathcal{U}_\text{D}$ and $\mathcal{L}_\text{D}$, respectively) and the quadratic term ($\mathcal{U}_\text{Q}$ and $\mathcal{L}_\text{Q}$). We will need the posterior equations for the observations, \latin{i.e.}\xspace, $p(y_n \operatorname{|} \vec y_{:n})$, and we will need them as functions of test inputs $\vec x_*$ and $\vec x_{*}'$. To this end, define \begin{align*} \bm{m}_*^{(n)}(\vec x_*)&\colonequals k(\vec x_*, \boldsymbol X_{:n})\inv{\boldsymbol K_{:n, :n}}\vec y_{:n} \\%\qquad \text{and}\\
\intertext{and} \begin{split} \bm{\Sigma}_*^{(n)}(\vec x_*, \vec x_*')&\colonequals k(\vec x_*, \vec x_*')+\sigma^2\delta_{\vec x_*,\vec x_*'}\\ &\phantom{\colonequals\ } -k(\vec x_*, \boldsymbol X_{:n})\inv{\boldsymbol K_{:n,:n}}k(\boldsymbol X_{:n}, \vec x_*'), \end{split} \end{align*} such that $p(y_n \operatorname{|} \vec y_{:n})=\mathcal{N}(y_n; \bm{m}_*^{(n)}(\vec x_n), \bm{\Sigma}_*^{(n)}(\vec x_n, \vec x_n))$, which allows us to rewrite \cref{eq:log_marginal_alternative} as \begin{align} \label{eq:log_marginal_elementwise} \begin{split} \log p(\vec y)&\propto \sum_{n=1}^N \log \bm{\Sigma}_*^{(n-1)}(\vec x_{n}, \vec x_{n})\\ &\quad +\sum_{n=1}^N \frac{(y_n-\bm{m}_*^{(n-1)}(\vec x_{n}))^2}{\bm{\Sigma}_*^{(n-1)}(\vec x_n, \vec x_n)}\, . \end{split} \end{align} This reveals that the log-determinant can be written as a sum of log posterior variances and that the quadratic form can be expressed as a sum of normalized square errors. Other key ingredients for our bounds are estimates of the average posterior variance and the average covariance.
Therefore define the shorthands \begin{align*} \boldsymbol V&\colonequals \operatorname{Diag} \left[ \boldsymbol \Sigma_*^{(s)}(\boldsymbol X_{s:M}, \boldsymbol X_{s:M}) \right] \\%& \text{and}
\intertext{and} \boldsymbol C&\colonequals \sum_{i=1}^{\frac{m}{2}} \boldsymbol \Sigma_*^{(s)}(\vec x_{s+2i}, \vec x_{s+2i-1})\vec e_{2i}\vec e_{2i}^{\top}\ , \end{align*} where $\vec e_j \in \mathbb{R}^{m}$ is the $j$-th standard basis vector. The matrix $\boldsymbol V$ is simply the diagonal of the posterior covariance matrix $\boldsymbol\Sigma_*$. The matrix $\boldsymbol C$ consists of every \emph{second} entry of the first off-diagonal of $\boldsymbol \Sigma_*$. These elements are placed on the diagonal, with every second element being $0$. The reason for taking every second element is of a theoretical nature, see \cref{remark:estimated_correlation} in the appendix. \subsubsection{Bounds on the log-determinant} Both bounds, lower and upper, use that $\log\det{\boldsymbol K}=\log\det{\boldsymbol K_{:s,:s}}+\log\det{\boldsymbol \Sigma_*^{(s)}(\boldsymbol X_{s:}, \boldsymbol X_{s:})}$, which follows from the matrix-determinant lemma. The first term is available from the already processed data points. It is the second term that needs to be estimated, which we approach from the perspective of \cref{eq:log_marginal_elementwise}. It is well established that, for a fixed input, more observations decrease the posterior variance, and this decrease cannot cross the threshold $\sigma^2$ \parencite[Question 2.9.4]{rasmussenwilliams}. This remains true when taking the expectation over the input. Hence, the average of the posterior variances for the inputs $\boldsymbol X_{s:M}$ is with high probability an overestimate of the average posterior variance for inputs with higher index.
This motivates our upper bound on the log-determinant: \begin{align} \mathcal{U}_\text{D} &= \log\det{\boldsymbol K_{:s,:s}} + (N-s)\mu_D, \label{eq:bound_ud} \\\mu_D&\colonequals \frac{1}{m} \sum_{i=1}^m \log\left(\boldsymbol V_{ii}\right). \notag \eqcomment{average log posterior variance} \end{align} To arrive at the lower bound on the log-determinant, we need an expression for how fast the average posterior variance can decrease, which is governed by the covariance between inputs. The variable $\rho_D$ measures the average covariance, and we show in \cref{thm:log_det_lower_bound} in the appendix that this overestimates the decrease per step with high probability. Since the decrease cannot cross $\sigma^2$, we introduce $\psi_D$ to denote the step at which this threshold would be crossed. \begin{align} \begin{split} \mathcal{L}_\text{D} &= \log\det{\boldsymbol K_{:s,:s}} +(N-\psi_D)\log \sigma^2\\ &\quad +(\psi_D-s)\left(\mu_D-\frac{\psi_D-s-1}{2}\rho_D\right)\label{eq:bound_ld} \end{split} \\\rho_D&\colonequals \frac{2}{m\sigma^4}\sum_{i=1}^{\frac{m}{2}}\boldsymbol C^2_{2i,2i} \notag \eqcomment{average square covariance} \\\psi_D&\colonequals \min\left(N, s+\left\lfloor\frac{\tilde{\mu}_D-\log\sigma^2}{\tilde{\rho}_D}+\frac{1}{2}\right\rfloor\right)\label{eq:variance_cutoff_step} \leqcomment{steps $\mu_D$ can decrease by $\rho_D$} \end{align} where variables with a tilde refer to a preceding estimate, that is, exchanging the indices $M$ for $M-m$ and $s$ for $s-m$. Both bounds collapse to the exact solution when $s=N$. The bounds are close when the average covariance between inputs, $\rho_D$, is small. This occurs, for example, when the average variance is close to $\sigma^2$, since the variance is an upper bound on the covariance. Another case where $\rho_D$ is small is when points are not correlated to begin with.
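The determinant splitting underlying both bounds, $\log\det\boldsymbol K=\log\det\boldsymbol K_{:s,:s}+\log\det\boldsymbol\Sigma_*^{(s)}(\boldsymbol X_{s:},\boldsymbol X_{s:})$, can be verified numerically on a random positive definite matrix; this is an illustrative sketch only:

```python
import numpy as np

rng = np.random.default_rng(1)
N, s = 7, 3
A = rng.standard_normal((N, N))
K = A @ A.T + N * np.eye(N)   # stand-in for the noisy kernel matrix

# The posterior covariance of the remaining points (noise included) is the
# Schur complement of the processed block:
Sigma_post = K[s:, s:] - K[s:, :s] @ np.linalg.solve(K[:s, :s], K[:s, s:])
lhs = np.linalg.slogdet(K)[1]
rhs = np.linalg.slogdet(K[:s, :s])[1] + np.linalg.slogdet(Sigma_post)[1]
```

The check also confirms the monotonicity used above: conditioning on the first $s$ points can only shrink the diagonal (the posterior variances).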
\subsubsection{Bounds on the quadratic term} Denote by $\vec r_{*}\colonequals \vec y_{s:}-\vec m_*^{(s)}(\boldsymbol X_{s:})$ the prediction errors (the residuals) when considering the first $s$ points as training set and the remaining inputs as test set. Analogously to the bounds on the log-determinant, one can show with the matrix inversion lemma that $\vec y^{\top}\inv{\boldsymbol K}\vec y=\vec y_{:s}^{\top}\inv{\boldsymbol K_{:s, :s}}\vec y_{:s}+\vec r_{*}^{\top}\inv{(\boldsymbol \Sigma_*^{(s)}(\boldsymbol X_{s:}, \boldsymbol X_{s:}))}\vec r_{*}$. Again, the first term will turn out to be already computed. With a slight abuse of notation, let $\vec r_{*}\colonequals \vec y_{s:M}-\vec m_*^{(s)}(\boldsymbol X_{s:M})$, that is, we consider only the first $m$ entries. Our lower bound arises from another well-known lower bound: $\vec a^{\top}\inv{\boldsymbol A}\vec a\geq 2\vec a^{\top}\vec b-\vec b^{\top}\boldsymbol A\vec b$ for all $\vec b$ (see for example \textcite{kim2018scalableStructureDiscovery,artemev2021cglb}). We write $\vec a^{\top} \inv{\boldsymbol A}\vec a$ as $\vec a^{\top}\operatorname{Diag}[\vec a]\inv{\left(\operatorname{Diag}[\vec a]\boldsymbol A\operatorname{Diag}[\vec a]\right)}\operatorname{Diag}[\vec a]\vec a$ and choose $\vec b\colonequals\inv{\operatorname{Diag}[\boldsymbol A]}\vec 1$.
The result, after some cancellations, is the following probabilistic lower bound on the quadratic term: \begin{align} \mathcal{L}_\text{Q} &= \vec y_{:s}^{\top}\inv{\boldsymbol K_{:s,:s}}\vec y_{:s} + (N-s)\left(\mu_Q-\max(0, \rho_Q)\right) \label{eq:bound_lq} \\\mu_Q&\colonequals \frac{1}{m}\vec r_{*}^{\top}\inv{\boldsymbol V} \vec r_{*} \notag \eqcomment{average calibrated square error}\\ \begin{split} \rho_Q&\colonequals \frac{N-s-1}{2m} \\ &\phantom{\colonequals\ }\cdot\sum_{j=\frac{s+2}{2}}^{\frac{M}{2}}\frac{\vec r_{*,2j}\vec r_{*,2j-1}\bm{\Sigma}_*^{(s)}(\vec x_{2j}, \vec x_{2j-1})}{\bm{\Sigma}_*^{(s)}(\vec x_{2j}, \vec x_{2j})\bm{\Sigma}_*^{(s)}(\vec x_{2j-1}, \vec x_{2j-1})} \notag \end{split} \leqcomment{calibrated error correlation} \end{align} Our upper bound arises from the element-wise perspective of \cref{eq:log_marginal_elementwise}. We assume that the expected mean square error $(y_n-\bm{m}_*^{(n-1)}(\vec x_n))^2$ decreases with more observations. However, though mean square error and variance both decrease, their expected ratio may increase or decrease, depending on the choice of kernel, dataset and number of processed points. Using the average error calibration with a correction for the decreasing variance, we arrive at our upper bound on the quadratic term: \begin{align} \mathcal{U}_\text{Q} &= \vec y_{:s}^{\top}\inv{\boldsymbol K_{:s,:s}}\vec y_{:s} + (N-s)\left(\mu_Q + \rho_Q'\right) \label{eq:bound_uq} \\\rho_Q'&\colonequals \frac{N-s-1}{m}\frac{1}{\sigma^4}\vec r_{*}^{\top}\boldsymbol C\inv{\boldsymbol V}\boldsymbol C\vec r_{*} \notag \leqcomment{square error correlation} \end{align} In the appendix (\cref{thm:quad_upper_bound}), we present a tighter bound, which uses a construction similar to that of the lower bound on the log-determinant, switching the form at a step $\psi$. Again, the bounds collapse to the true quantity when $s=N$.
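The splitting of the quadratic form, $\vec y^{\top}\inv{\boldsymbol K}\vec y=\vec y_{:s}^{\top}\inv{\boldsymbol K_{:s,:s}}\vec y_{:s}+\vec r_*^{\top}\inv{\boldsymbol\Sigma_*^{(s)}}\vec r_*$, and the classical lower bound $\vec a^{\top}\inv{\boldsymbol A}\vec a\geq 2\vec a^{\top}\vec b-\vec b^{\top}\boldsymbol A\vec b$ can likewise be checked numerically; the particular vector $\vec b$ below is one convenient choice for the example, not the paper's exact construction:

```python
import numpy as np

rng = np.random.default_rng(2)
N, s = 8, 5
A = rng.standard_normal((N, N))
K = A @ A.T + N * np.eye(N)   # stand-in for the noisy kernel matrix
y = rng.standard_normal(N)

# Residuals r_* of the remaining points and their posterior covariance.
m_post = K[s:, :s] @ np.linalg.solve(K[:s, :s], y[:s])
Sigma_post = K[s:, s:] - K[s:, :s] @ np.linalg.solve(K[:s, :s], K[:s, s:])
r = y[s:] - m_post

quad_full = y @ np.linalg.solve(K, y)
quad_split = y[:s] @ np.linalg.solve(K[:s, :s], y[:s]) + r @ np.linalg.solve(Sigma_post, r)

# Classical bound a^T A^{-1} a >= 2 a^T b - b^T A b, valid for any b.
b = r / np.diag(Sigma_post)   # one convenient choice of b for illustration
bound = 2.0 * r @ b - b @ Sigma_post @ b
```

The two evaluations of the quadratic form agree, and the bound never exceeds the residual term it approximates.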
The bounds will give good estimates when the average covariance between inputs is low or when the model can predict new data well, that is, when $\vec r_{*}$ is close to $\vec 0$. \subsection{Validity of bounds and stopping condition} \label{sec:main_theoretical_result} For the upper bound on the quadratic form, we need to make a (technical) assumption. It expresses the intuition that the (expected) mean square error should not increase with more data---a model should not become worse as its training set grows. It is possible to construct counter-examples where this assumption is violated: for example, when $\vec y\sim\mathcal{N}(\vec 0, \boldsymbol I)$ and $p(\vec f)=\mathcal{N}(\vec 0, \boldsymbol K)$, the posterior mean is with high probability no longer zero. However, our experiments in \cref{sec:experiments} indicate that this assumption is not problematic in practice. \begin{assumption} \label{assumption:targets} Assume that \begin{align*} \mathbb{E}\left[f(\vec x, \vec x')(y_j-m_{*}^{(j-1)}(\vec x))^2\mid \boldsymbol X_{:s}, \vec y_{:s}\right] \\\quad\leq \mathbb{E}\left[f(\vec x, \vec x')(y_j-m_{*}^{(s)}(\vec x))^2\mid \boldsymbol X_{:s}, \vec y_{:s}\right] \end{align*} for all $s\in\{1, \dots, N\}$ and for all $s <j\leq N$, where $f(\vec x, \vec x')$ is either $\frac{1}{\boldsymbol\Sigma_*^{(s)}(\vec x, \vec x)}$ or $\frac{\boldsymbol\Sigma_*^{(s)}(\vec x, \vec x')^2}{\sigma^4 \boldsymbol\Sigma_*^{(s)}(\vec x, \vec x)}$. \end{assumption} \begin{theorem} \label{thm:main} Assume that $(\vec x_1, y_1), \dots, (\vec x_N, y_N)$ are independent and identically distributed and that \cref{assumption:targets} holds.
For any $s \in \{1, \dots, N\}$, the bounds defined in \cref{eq:bound_ld,eq:bound_ud,eq:bound_uq,eq:bound_lq} hold in expectation: \begin{align*} \begin{split} \mathbb{E}[\mathcal{L}_D\mid \boldsymbol X_{:s}, \vec y_{:s}]&\leq \mathbb{E}[\log\det{\boldsymbol K} \mid \boldsymbol X_{:s}, \vec y_{:s}]\\ &\leq \mathbb{E}[\mathcal{U}_D\mid \boldsymbol X_{:s}, \vec y_{:s}] \end{split}\\ \intertext{and} \begin{split} \mathbb{E}[\mathcal{L}_Q\mid \boldsymbol X_{:s}, \vec y_{:s}]&\leq \mathbb{E}[\vec y^{\top}\inv{\boldsymbol K}\vec y \mid \boldsymbol X_{:s}, \vec y_{:s}]\\ &\leq \mathbb{E}[\mathcal{U}_Q\mid \boldsymbol X_{:s}, \vec y_{:s}]\ . \end{split} \end{align*} \end{theorem} The proof can be found in \cref{sec:main_theorem_proof}, and a sketch in \cref{sec:proof_sketch}. \begin{theorem} \label{thm:guaranteed_precision} Let $r>0$ be a desired relative error and set $\mathcal{U}\colonequals -\frac{1}{2}\left(\mathcal{L}_D+\mathcal{L}_Q+N\log2\pi\right)$ and $\mathcal{L}\colonequals -\frac{1}{2}\left(\mathcal{U}_D+\mathcal{U}_Q+N\log2\pi\right)$. If the stopping conditions hold, that is, $\operatorname{sign}(\mathcal{U})=\operatorname{sign}(\mathcal{L})$ and \cref{eq:stop_cond_r} is true, then $\log p(\vec y)$ can be estimated by $(\mathcal{U}+\mathcal{L})/2$ such that, under the condition $\mathcal{L}_D\leq \log(\det{\boldsymbol K}) \leq \mathcal{U}_D \text{ and } \mathcal{L}_Q \leq \vec y^{\top}\inv{\boldsymbol K}\vec y \leq \mathcal{U}_Q$, the relative error is smaller than $r$; formally: \begin{align} \label{eq:main_result} \left|{\log p(\vec y)-(\mathcal{U}+\mathcal{L})/2}\right| \leq r|{\log p(\vec y)}|. \end{align} \end{theorem} The proof follows from \cref{lemma:rel_err_bound} in the appendix.
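In code, combining the four bounds into the stopping check and the estimate of \cref{thm:guaranteed_precision} can be sketched as follows (a minimal sketch; the function name is ours, not from the paper's implementation):

```python
import math

def stop_and_estimate(L_D, U_D, L_Q, U_Q, N, r):
    """Combine bounds on log-det and quadratic form into bounds on log p(y).

    Returns the estimate (U + L) / 2 if the stopping conditions hold,
    otherwise None; the relative error is then at most r.
    """
    const = N * math.log(2 * math.pi)
    U = -0.5 * (L_D + L_Q + const)  # upper bound on log p(y)
    L = -0.5 * (U_D + U_Q + const)  # lower bound on log p(y)
    same_sign = (U > 0) == (L > 0) and U != 0 and L != 0
    if same_sign and (U - L) / (2 * min(abs(U), abs(L))) <= r:
        return (U + L) / 2
    return None
```

Note that the upper bound $\mathcal{U}$ on $\log p(\vec y)$ is built from the \emph{lower} bounds on log-determinant and quadratic form, because of the factor $-\frac{1}{2}$.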
\cref{thm:main} is a first step to obtain a probabilistic statement for \cref{eq:main_result}, that is, a statement of the form $\proba{\left|\frac{\log p(\vec y)-\frac{1}{2}(\mathcal{U}+\epsilon_{\mathcal{U},\delta}+\mathcal{L}-\epsilon_{\mathcal{L},\delta})}{\log p(\vec y)}\right| > r} \leq \delta$. In earlier work \parencite{bartels2021stoppedcholesky}, we have shown that such a statement can be obtained for the log-determinant. Theoretically, we can obtain such a statement using standard concentration inequalities and a union bound over $s$. In practice, however, the error-guarding constants $\epsilon$ would render the result trivial. A union bound can be avoided using Hoeffding's inequality for martingales \parencite{fan2012hoeffding}. However, this requires replacing $s\colonequals M-m$ by a stopping time independent of $M$, which we regard as future work. \subsection{Practical implementation} \label{sec:bound_realization} The proposed bounds turn out to be surprisingly cheap to compute. If we set the block size of the Cholesky decomposition to $m$, the matrix $\boldsymbol\Sigma_*^{(s)}$ is exactly the downdated matrix in Step~2 of the algorithm outlined in \cref{sec:cholesky}. Similarly, the expressions for the bounds on the quadratic form appear while solving the linear equation system $\inv{\boldsymbol L}\vec y$. A slight modification to the Cholesky algorithm is enough to compute these bounds on the fly during the decomposition with little overhead. The stopping conditions can be checked before or after Step~3 of the Cholesky decomposition (\cref{sec:cholesky}). Here, we explore the former option, since Step~3 is the bottleneck due to being less parallelizable than the other steps. Note that the definition of the bounds does not involve variables $\vec x, y$ that have not been processed. This allows an on-the-fly construction of the kernel matrix, avoiding potentially expensive kernel function evaluations.
Furthermore, it is \emph{not} necessary to allocate $\mathcal{O}(N^2)$ memory in advance; a user can specify a maximal number of processed datapoints, hoping that stopping occurs before hitting that limit. We provide the pseudo-code for this modified algorithm, our key algorithmic contribution, in \cref{sec:proof_sketch} (\cref{algo:chol_blocked,alg:bounds}). For technical reasons, the bounds we use in practice deviate in some places from the ones presented. We describe the details fully in \cref{sec:stopping}. Additionally, we provide a \textsc{Python} implementation of our modified Cholesky decomposition and scripts to replicate the experiments of this paper.\footnote{The code is available at the following repository: \url{https://github.com/SimonBartels/acgp}} \subsection{The cumulative perspective} Using Bayes' rule, we can write $\log p(\vec y)$ equivalently as \begin{align} \label{eq:app_log_marginal_alternative} &\log p(\vec y) = -\frac{1}{2}\left(\log\det{\boldsymbol K_N}+\vec y^{\top}\inv{\boldsymbol K_N}\vec y+N\log(2\pi)\right)= \sum_{n=1}^N \log p(y_{n}\mid \vec y_{:n-1}). \end{align} For each potential stopping point $t$, we can decompose \cref{eq:app_log_marginal_alternative} into a sum of three terms: \begin{align*} \log p(\vec y) &= \underbrace{\sum_{n=1}^s \log p(y_{n}\mid \vec y_{:n-1})}_{A: \text{ fully processed}} +\!\!\! \underbrace{\sum_{n=s+1}^{t}\!\!\! \log p(y_{n}\mid \vec y_{:n-1})}_{B: \text{ partially processed}} +\!\!\! \underbrace{\sum_{n=t+1}^{N}\!\!\! \log p(y_{n}\mid \vec y_{:n-1})}_{C: \text{ remaining}}, \end{align*} where $s<t$.
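The identity in \cref{eq:app_log_marginal_alternative} (joint Gaussian log-density equals the sum of one-dimensional conditionals) can be confirmed numerically with a small random example; this sketch is standalone and not tied to the paper's code:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 6
G = rng.standard_normal((N, N))
K = G @ G.T + N * np.eye(N)   # stands in for K_N (kernel matrix plus noise)
y = rng.standard_normal(N)

# joint log-density log N(y; 0, K)
sign, logdet = np.linalg.slogdet(K)
joint = -0.5 * (logdet + y @ np.linalg.solve(K, y) + N * np.log(2 * np.pi))

# sum of the one-dimensional conditionals log p(y_n | y_{:n-1})
total = 0.0
for n in range(N):
    Kp, k_n = K[:n, :n], K[n, :n]
    w = np.linalg.solve(Kp, k_n) if n else np.zeros(0)
    mean = w @ y[:n]                 # posterior mean given y_{:n-1}
    var = K[n, n] - w @ k_n          # posterior variance given y_{:n-1}
    total += -0.5 * (np.log(2 * np.pi * var) + (y[n] - mean) ** 2 / var)

assert np.isclose(joint, total)
```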
We will use the partially processed points between $s$ and $t$ to obtain unbiased upper and lower bounds on the expected value of $\log p(\vec y_{s+1:}\mid \vec y_{:s})$: \begin{align} \mathbb{E}[\mathcal{L}_t\mid\vec x_1,y_1,\dots, \vec x_{s},y_{s}] \leq A+ \mathbb{E}[B+C \mid \vec x_1,y_1,\dots, \vec x_{s},y_{s}] \leq \mathbb{E}[\mathcal{U}_t \mid \vec x_1,y_1,\dots, \vec x_{s},y_{s}]. \end{align} \subsection{General bounds} The posterior of the $n$th observation conditioned on the previous ones is Gaussian with \begin{align*} p(y_{n}\mid \vec y_{:n-1}) = & \mathcal{N}\left(m_*^{(n-1)}(\vec x_n), \boldsymbol\Sigma_*^{(n-1)}(\vec x_n, \vec x_n)\right) \\ m_*^{(n-1)}(\vec x_n) \colonequals& k(\vec x_n, \boldsymbol X_{:n-1})\inv{\boldsymbol K_{n-1}}\vec y_{:n-1} \\ \boldsymbol\Sigma_*^{(n-1)}(\vec x_n, \vec x_n) \colonequals& k(\vec x_n, \vec x_n) + \sigma^2 - k(\vec x_n, \boldsymbol X_{:n-1})\inv{\boldsymbol K_{n-1}}k(\boldsymbol X_{:n-1}, \vec x_n), \end{align*} where we assumed (w.l.o.g.) that $\mu_0(\vec x)\colonequals 0$. Inspecting these expressions, one finds that \begin{align} \log\det{\boldsymbol K_N}&=\sum_{n=1}^N \log\left(\boldsymbol\Sigma_*^{(n-1)}(\vec x_n, \vec x_n)\right), \\\vec y^{\top}\inv{\boldsymbol K_N}\vec y&=\sum_{n=1}^N\frac{\left(y_n-m_*^{(n-1)}(\vec x_n)\right)^2}{\boldsymbol\Sigma_*^{(n-1)}(\vec x_n, \vec x_n)}. \end{align} Our strategy is to find function families $u$ (and $l$) which upper- (and lower-) bound these summands in expectation: \begin{align*} l_{n,t}^d \leq_E \log \boldsymbol\Sigma_*^{(n-1)}(\vec x_n, \vec x_n)\leq_E u_{n,t}^d\\ l_{n,t}^q \leq_E \frac{\left(y_n-m_*^{(n-1)}(\vec x_n)\right)^2}{\boldsymbol\Sigma_*^{(n-1)}(\vec x_n, \vec x_n)}\leq_E u_{n,t}^q, \end{align*} where $\leq_E$ denotes that the inequality holds in expectation. We will choose the function families such that the unseen variables interact only in a \emph{controlled} manner.
More specifically, \begin{align} \label{eq:required_form} f^x_{n,t}(\vec x_n, y_n, \dots, \vec x_1, y_1)=\sum_{j=s+1}^n g_{t}^{f,x}(\vec z_n, \vec z_j; \vec z_1, \dots, \vec z_{s}), \end{align} with $f\in\{u,l\}$ and $x\in\{d,q\}$, where $\vec z_i\colonequals(\vec x_i, y_i)$. The effect of this restriction becomes apparent when taking the expectation: the sum over the bounds reduces to only two terms, a variance term and a covariance term. Formally, \begin{align} &\mathbb{E}\left[\sum_{n=s+1}^N f^x_{n,t}(\vec z_n, \dots, \vec z_1)\mid\sigma(\vec z_1, \dots, \vec z_{s})\right] \\&=\left(N-s\right)\mathbb{E}\left[g(\vec z_{s+1}, \vec z_{s+1}; \vec z_{1}, \dots, \vec z_{s})\mid\sigma(\vec z_1, \dots, \vec z_{s})\right] \notag \\&\quad+\left(N-s\right) \frac{N-s-1}{2} \mathbb{E}\left[g(\vec z_{s+1}, \vec z_{s+2}; \vec z_{1}, \dots, \vec z_{s})\mid\sigma(\vec z_1, \dots, \vec z_{s})\right]. \end{align} We can estimate this expectation from the observations obtained between $s$ and $t$: \begin{align} &\approx\frac{N-t}{t-s}\sum_{n=s+1}^t g(\vec z_n, \vec z_n; \vec z_{1}, \dots, \vec z_{s}) \\&\quad+\frac{2(N-t)}{t-s} \frac{N-s-1}{2} \sum_{i=1}^{\frac{t-s}{2}} g(\vec z_{s+2i}, \vec z_{s+2i-1}; \vec z_{1}, \dots, \vec z_{s}). \notag \end{align} \subsection{Bounds on the log-determinant} Since the posterior variance of a Gaussian process can never increase with more data, the average of the (log) posterior variances yields an estimator for an upper bound on the log-determinant. Hence, in this case we simply ignore the interaction between the remaining variables and set $g(\vec z_n, \vec z_i)\colonequals \delta_{ni}\log\left(\boldsymbol\Sigma_*^{(s)}(\vec x_n, \vec x_n)\right)$, where $\delta_{ni}$ denotes Kronecker's $\delta$. To obtain a lower bound, we use that for $c>0$ and $a\geq b\geq 0$ one has $\log\left(c+a-b\right)\geq\log\left(c+a\right)-\frac{b}{c}$, where the smaller $b$, the tighter the bound.
In our case $c=\noisef{\vec x_n}$, $a=\postkt{\vec x_n}{\vec x_n}$ and $b=\postkt{\vec x_n}{\estX[n-1]}\inv{\left(\postkt{\estX[n-1]}{\estX[n-1]}+\noisef{\estX[n-1]}\right)}\postkt{\estX[n-1]}{\vec x_n}$. Underestimating the eigenvalues of $\postkt{\estX[n-1]}{\estX[n-1]}$ by $0$, we obtain a lower bound in which each quantity can be estimated. Formally, for any $s\leq t$, \begin{align} &\log\left(\boldsymbol\Sigma_*^{(n-1)}(\vec x_n, \vec x_n)\right)\geq \log\left(\boldsymbol\Sigma_*^{(s)}(\vec x_n, \vec x_n)\right)-\sum_{i=s+1}^{n-1}\frac{\postkt{\vec x_n}{\vec x_i}^2}{\noisef{\vec x_n}\noisef{\vec x_i}}. \end{align} This bound can be worse than the deterministic lower bound $C_{\mathrm{Det}}$; whether it is depends on how large $n$ is, how large the average correlation is, and how small $C_{\mathrm{Det}}$ is. Denote by $\mu$ the estimator for the first addend and by $\rho$ the estimator for the second addend. We can determine the number of steps $p\colonequals n-s$ for which this bound is better by solving a quadratic inequality: \begin{align} p\left(\mu-\frac{p-1}{2}\rho\right)\geq p\,C_{\mathrm{Det}}. \end{align} The tipping point $\psi$ is \begin{align} \psi\colonequals\min\left(N, s+\left\lfloor\frac{\mu-C_{\mathrm{Det}}}{\rho}+\frac{1}{2}\right\rfloor\right). \end{align} Hence, for $n>\psi$ we set $l^d_n\colonequals C_{\mathrm{Det}}$. Observe that the smaller $\postkt{\vec x_j}{\vec x_{j+1}}^2$, the closer the bounds. This term represents the correlation of datapoints conditioned on the $s$ datapoints observed before. Thus, our bounds come together when incoming observations become independent conditioned on what was already observed. Essentially, $\postkt{\vec x_j}{\vec x_{j+1}}^2=0$ is the basic assumption of inducing input approximations \parencite{quionero2005unifying}. \subsection{Bounds on the quadratic form} For an upper bound on the quadratic form we apply a similar trick: \begin{align} \frac{x}{c+a-b}\leq \frac{x(c+b)}{c(c+a)}, \end{align} where $x\geq 0$.
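Both elementary inequalities---the logarithmic one used for the lower bound on the log-determinant and the ratio bound above---can be checked numerically over their stated domains ($c>0$, $a\geq b\geq 0$, $x\geq 0$); a quick standalone sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
c = rng.uniform(0.1, 2.0, size=10_000)       # noise variance, c > 0
a = rng.uniform(0.0, 5.0, size=10_000)       # variance term, a >= 0
b = a * rng.uniform(0.0, 1.0, size=10_000)   # correction term, 0 <= b <= a
x = rng.uniform(0.0, 3.0, size=10_000)       # squared residual, x >= 0

# lower-bound inequality for the log-determinant: log(c+a-b) >= log(c+a) - b/c
assert np.all(np.log(c + a - b) >= np.log(c + a) - b / c - 1e-12)

# upper-bound inequality for the quadratic form: x/(c+a-b) <= x(c+b)/(c(c+a))
assert np.all(x / (c + a - b) <= x * (c + b) / (c * (c + a)) + 1e-12)
```

The second inequality reduces to $(c+b)(c+a-b)\geq c(c+a)$, i.e.\ $b(a-b)\geq 0$, which holds precisely because $a\geq b\geq 0$.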
Further, we assume that in expectation the mean square error improves with more data; the following bound therefore holds only in expectation. Formally, \begin{align} \frac{\left(y_j-m_*^{(j-1)}(\vec x_j)\right)^2}{\boldsymbol\Sigma_*^{(j-1)}(\vec x_j, \vec x_j)}\leq_E \frac{\left(y_j-m_*^{(s)}(\vec x_j)\right)^2}{\noisef{\vec x_j}\,\boldsymbol\Sigma_*^{(s)}(\vec x_j, \vec x_j)}\left(\noisef{\vec x_j}+\sum_{i=s+1}^{j-1}\frac{\left(\postkt{\vec x_j}{\vec x_i}\right)^2}{\noisef{\vec x_i}}\right). \end{align} For a lower bound, observe that \begin{align} \vec y^{\top}\inv{\boldsymbol K}\vec y=\vec y_{:s}^{\top}\inv{\boldsymbol K_{s}}\vec y_{:s}+\left(\vec y_{s+1:N}-\vec m_*^{(s)}(\boldsymbol X_{s+1:})\right)^{\top}\inv{\boldsymbol Q_N}\left(\vec y_{s+1:N}-\vec m_*^{(s)}(\boldsymbol X_{s+1:})\right), \end{align} where $\boldsymbol Q_j \colonequals \postkt{\estX[j]}{\estX[j]}+\noisefm{\estX[j]}$ with $j\geq s+1$ denotes the posterior covariance matrix of $\estX[j]$ conditioned on $\boldsymbol X_{:s}$. We use a trick we first encountered in \textcite{kim2018scalableStructureDiscovery}: $\vec y^{\top}\inv{\boldsymbol A}\vec y\geq 2\vec y^{\top}\vec b-\vec b^{\top}\boldsymbol A\vec b$ for any $\vec b$. For brevity, introduce $\vec e\colonequals \vec y_{s+1:N}-\vec m_*^{(s)}(\boldsymbol X_{s+1:})$. After applying the inequality with $\vec b\colonequals \inv{\operatorname{Diag}[\boldsymbol Q_N]}\vec e$, we obtain \begin{align} 2 \sum_{n=s+1}^N \frac{\left(y_n-m_*^{(s)}(\vec x_n)\right)^2}{\boldsymbol\Sigma_*^{(s)}(\vec x_n, \vec x_n)} -\sum_{n,n'=s+1}^N\frac{y_n-m_*^{(s)}(\vec x_n)}{\boldsymbol\Sigma_*^{(s)}(\vec x_n, \vec x_n)}[\boldsymbol Q_N]_{nn'}\frac{y_{n'}-m_*^{(s)}(\vec x_{n'})}{\boldsymbol\Sigma_*^{(s)}(\vec x_{n'}, \vec x_{n'})}, \end{align} which is now in the form of \cref{eq:required_form}. Observe that the smaller the square error $(y_j-m_*^{(s)}(\vec x_j))^2$, the closer the bounds. That is, if the model fit is good, the quadratic form can be easily identified.
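The decomposition of the quadratic form above can be verified on a small random example, with a generic positive definite matrix standing in for $\boldsymbol K$ (a standalone sketch):

```python
import numpy as np

rng = np.random.default_rng(3)
N, s = 8, 3
G = rng.standard_normal((N, N))
K = G @ G.T + N * np.eye(N)          # stand-in for the (noisy) kernel matrix K
y = rng.standard_normal(N)

Ks = K[:s, :s]
C = K[s:, :s]                                   # cross-covariance block
Q = K[s:, s:] - C @ np.linalg.solve(Ks, C.T)    # posterior covariance Q_N
e = y[s:] - C @ np.linalg.solve(Ks, y[:s])      # residuals y_{s+1:} - m_*(X_{s+1:})

full = y @ np.linalg.solve(K, y)
split = y[:s] @ np.linalg.solve(Ks, y[:s]) + e @ np.linalg.solve(Q, e)
assert np.isclose(full, split)
```

This is the block matrix inversion lemma in action: the quadratic form splits into the already-computed part on $\vec y_{:s}$ plus a quadratic form of the residuals under the posterior covariance.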
\subsection{Using the Bounds for Stopping the Cholesky} \label{sec:stopping} \label{sec:bound_computation} We use the following stopping strategy: stop when the difference between the bounds becomes sufficiently small and their absolute value is far away from zero. More precisely, given deterministic bounds $\mathcal{L}\leq x\leq \mathcal{U}$ on a number $x$ with \begin{align} \frac{\mathcal{U}-\mathcal{L}}{2\min(|\mathcal{U}|,|\mathcal{L}|)}\leq r \text{ and } \label{eq:stopping_condition_1} \\\operatorname{sign}(\mathcal{U})=\operatorname{sign}(\mathcal{L}), \label{eq:stopping_condition_2} \end{align} the relative error of the estimate $\frac{1}{2}(\mathcal{U}+\mathcal{L})$ is less than $r$, that is, $\left|\frac{\frac{1}{2}(\mathcal{U}+\mathcal{L})-x}{x}\right|\leq r$. \begin{remark} \label{remark:autodiff} In our experiments, we do not use $\frac{1}{2}(\mathcal{U}+\mathcal{L})$ as the estimator, and instead use the biased estimator $(N-\tau)\frac{1}{\tau}\log p(\vec y_{:\tau})$. Since stopping occurs when log-determinant and quadratic form evolve roughly linearly, the two estimators are not far off each other. The main reason for using the biased estimator is of a technical nature; it is easier and faster to implement a custom backward function which can handle the in-place operations of our Cholesky implementation. \end{remark} \begin{remark} \label{remark:estimated_correlation} To estimate the average correlation between elements of the kernel matrix, we use all elements of the off-diagonal instead of only every second one. This has no effect on our main result, but it becomes important when developing PAC bounds. \end{remark} \begin{remark} \label{remark:advancing_bounds} The lower bound on the log-determinant and the upper bound on the quadratic form switch their form at a step $\psi$ (\cref{thm:log_det_lower_bound,thm:quad_upper_bound}).
Currently, to prove our results, this requires $\psi$ to be $\mathcal{F}_{s}$-measurable, and for that reason we use estimators based on inputs only up to index $s$ to define $\psi$. However, a PAC-bound proof would allow us to condition on the event that the estimators (plus some $\epsilon$) overestimate their expected values with high probability. Under that condition, we could use the true expected value (which is $\mathcal{F}_{s}$-measurable) to define $\psi$. Hence, in our practical implementation we use estimators based on inputs with indices up to $M$ to define $\psi$. \end{remark} The question remains how to use the bounds and the stopping strategy to derive an approximation algorithm. We transform the exact Cholesky decomposition for that purpose. For brevity, denote $\boldsymbol L_{s}\colonequals \operatorname{chol}[k(\boldsymbol X_{:s}, \boldsymbol X_{:s})+\noisef{\boldsymbol X_{:s}}]$ and $\boldsymbol T_{s}\colonequals k(\boldsymbol X_{s+1:}, \boldsymbol X_{:s}){\boldsymbol L_{s}}^{-\top}$. For any $s\in \{1,\dots, N\}$: \begin{align} \boldsymbol L_N= \begin{bmatrix}\boldsymbol L_{s} & \boldsymbol 0\\ \boldsymbol T_{s} & \operatorname{chol}\left[k(\boldsymbol X_{s+1:}, \boldsymbol X_{s+1:})+\noisef{\boldsymbol X_{s+1:}}-\boldsymbol T_{s}\boldsymbol T_{s}^{\top}\right] \end{bmatrix}. \end{align} One can verify that $\boldsymbol L_N$ is indeed the Cholesky factor of $\boldsymbol K_N$ by evaluating $\boldsymbol L_N\boldsymbol L_N^{\top}$. Observe that $k(\boldsymbol X_{s+1:}, \boldsymbol X_{s+1:})+\noisef{\boldsymbol X_{s+1:}}-\boldsymbol T_{s}\boldsymbol T_{s}^{\top}$ is the posterior covariance matrix of $\vec y_{s+1:}$ conditioned on $\vec y_{:s}$. Hence, in the step before the Cholesky of the posterior covariance matrix is computed, we can evaluate our log-determinant bounds. Similar reasoning applies to solving the linear equation system.
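The block identity for $\boldsymbol L_N$ can likewise be confirmed numerically, again with a generic positive definite matrix in place of $\boldsymbol K_N$ (a standalone sketch):

```python
import numpy as np

rng = np.random.default_rng(4)
N, s = 7, 3
G = rng.standard_normal((N, N))
K = G @ G.T + N * np.eye(N)          # stand-in for K_N = k(X, X) + noise

L_full = np.linalg.cholesky(K)
L_s = np.linalg.cholesky(K[:s, :s])
T = np.linalg.solve(L_s, K[:s, s:]).T           # T_s = K_{s+1:,:s} L_s^{-T}
S = np.linalg.cholesky(K[s:, s:] - T @ T.T)     # Cholesky of the posterior covariance
L_block = np.block([[L_s, np.zeros((s, N - s))], [T, S]])
assert np.allclose(L_full, L_block)
```

Since the Cholesky factor with positive diagonal is unique, the block construction must coincide with the factor computed in one go, which is exactly what makes early stopping a prefix of the exact computation.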
We can write \begin{align} \label{eq:recursive_LES} \vec \alpha_N= \begin{bmatrix} \vec \alpha_{s}\\ \inv{\operatorname{chol}\left[k(\boldsymbol X_{s+1:}, \boldsymbol X_{s+1:})+\noisef{\boldsymbol X_{s+1:}}-\boldsymbol T_{s}\boldsymbol T_{s}^{\top}\right]}\left(\vec y_{s+1:}-\boldsymbol T_{s}\vec \alpha_{s}\right) \end{bmatrix}. \end{align} Now observe that $\boldsymbol T_{s}\vec \alpha_{s}=\vec m_*^{(s)}(\boldsymbol X_{s+1:})$. Hence, before solving the lower equation system (and before computing the posterior Cholesky), we can compute our bounds on the quadratic form. There are different options to implement the Cholesky decomposition. We use a blocked, row-wise implementation \parencite{george1986parallelCholesky}. For a practical implementation see \cref{algo:chol_blocked} and \cref{alg:bounds}. \section{Experiments} \label{sec:experiments} We now examine the bounds and stopping strategy for ACGP\xspace{}. When running experiments without GPU support, all linear algebra operations are replaced with direct calls to the \textsc{OpenBLAS} library \parencite{wang2013OpenBLAS} for an efficient realization of \textit{in-place} operations. To still benefit from automatic differentiation, we used \textsc{PyTorch} \parencite{paszke2019pytorch} with a custom backward function for $\log p(\vec y)$ which wraps \textsc{OpenBLAS}. The details of our experimental setup can be found in \cref{sec:app_experimental_details}. \subsection{Performance on synthetic data} \label{sec:synthetic_experiment} ACGP\xspace{} stops the computation when the posterior covariance matrix of the remaining points conditioned on the processed points is essentially diagonal. This scenario occurs, for example, when using a squared exponential kernel with a long lengthscale and small observational noise on a densely sampled dataset. To test ACGP\xspace in this scenario, we sample a function from a GP prior with zero mean and a squared exponential kernel with length scale $\log\ell=-2$.
From this function, we uniformly sample $10^{12}$ observations $(\vec x, y)$ in the interval $[0,1]$ using an observation noise of $\sigma^2\colonequals 0.1$, that is, $y = f(\vec x) + \varepsilon$ with $\varepsilon\sim\mathcal{N}(0, 0.1)$; see \cref{fig:visualization}. This is a scenario where ACGP\xspace excels, since it does not need to load the dataset into memory in advance, whereas methods with at least linear complexity cannot even start the computation. \begin{figure} \caption{One of the sampled functions from the synthetic experiment in \cref{sec:synthetic_experiment}.} \label{fig:visualization} \end{figure} The task is to estimate the true $\log p(\vec y)$ of the full dataset, and we run ACGP\xspace{} with a relative error of $r=0.01$ and a blocksize of $1000$ to obtain this estimate. Since we cannot evaluate the actual $\log p(\vec y)$ for all $10^{12}$ observations, we use the predicted and actual $\log p(\vec y)$ for $10^4$ observations as a proxy for assessing the performance of ACGP\xspace{}. We repeat the experiment for 10 different random seeds. Recall that ACGP\xspace{} estimates $\mathbb{E}[\log p(\vec y)\mid \boldsymbol X_{:s}, \vec y_{:s}]$ as opposed to $\log p(\vec y)$ directly. Hence, there are two sources of error for ACGP\xspace{}: the deviation of $\log p(\vec y)$ from its expected value and the deviation of the empirical estimates from their expectations.\footnote{This shows the benefit of developing our theory further, to obtain probably-approximately-correct bounds. Such bounds introduce error-guarding constants to protect against fluctuations.} The average $\log p(\vec y_{:10^4})$ is $-2699.67\pm70.81$; given this variance relative to the mean, a relative error of $r=0.01$ is hard to achieve. When run on all $10^{12}$ observations, ACGP\xspace{} stops after processing just $4600 \pm 1562$ observations on average, obtaining an actual relative error of $0.047 \pm 0.034$ on the estimate of $\log p(\vec y)$.
To decrease this error, one can either decrease the specified relative error of ACGP\xspace or increase the blocksize, which leads to more stable predictions. For the experiments in the remainder of this paper, we choose the latter strategy and set the blocksize to $10^4$, which is also better suited for parallel computations. \input{sections/bound_plots.tex} \input{sections/hyperparameter_tuning.tex} \section{Conclusions} The Cholesky decomposition is the de facto way to invert matrices when training Gaussian processes, yet it tends to be treated as a black box. However, if one opens this black box, it turns out that the Cholesky decomposition computes the marginal log-likelihood of the full dataset and, crucially, in intermediate steps, the posteriors of unprocessed training data conditioned on the processed data. Making the community aware of this remarkable insight is one of the main contributions of our paper. Our main novelty is to use this insight to bound the (expected) marginal log-likelihood of the full dataset from only a subset. With only small modifications to this classic matrix decomposition, we can use these upper and lower bounds to stop the decomposition before all observations have been processed. This has the practical benefit that the kernel matrix $\boldsymbol{K}$ does not have to be computed prior to performing the decomposition, but can rather be computed on the fly. Empirical results indicate that the approach carries significant promise. In general, we find that exact GP inference leads to better-behaved optimization than approximations such as CGLB\xspace{} and inducing point methods, and that a well-optimized Cholesky implementation is surprisingly competitive in terms of performance. An advantage of our approach is that it is essentially parameter-free: the user has to specify only a requested numerical accuracy, and the computational demands are scaled accordingly.
Finally, we note that ACGP\xspace{} is complementary to much existing work, and should be seen as an addition to the GP toolbox rather than a substitute for existing tools. \subsubsection*{References} \printbibliography[heading=none] \appendix \onecolumn \renewcommand{\eqcomment}[1]{\\&\sslash\text{{\small\emph{#1}}}\notag} \section{Evolution of the log-marginal likelihood} \label{app:evidence} This section contains figures for the progression of the log-marginal likelihood for five different permutations of the same datasets as used in \cref{sec:bound_experiments} of the main paper. \cref{fig:evidence_rbf} shows the results for the squared exponential kernel (\cref{eq:app_kernel_se}) with $\theta\colonequals 1$ and $\sigma^2\colonequals 10^{-3}$, and \cref{fig:evidence_ou} shows the results for the Ornstein-Uhlenbeck kernel (\cref{eq:app_kernel_ou}) using the same parameters. \newcommand{\evidencefig}[3]{ \begin{minipage}[b]{.5\textwidth} \centering \includegraphics[width=0.95\textwidth]{./figs/llh_progression/llh_#1_tight.pdf} \subcaption{ Log-marginal likelihood evolution for #2. } \label{fig:evidence_#1} \end{minipage} } \begin{figure} \caption{The figure shows the log-marginal likelihood as a function of the size of the training set for the large datasets described in \cref{tbl:datasets}.} \label{fig:evidence_rbf} \end{figure} \begin{figure} \caption{The figure shows the log-marginal likelihood as a function of the size of the training set for the large datasets described in \cref{tbl:datasets}.} \label{fig:evidence_ou} \end{figure} \FloatBarrier{} \section{Experimental details} \label{sec:app_experimental_details} \begin{table}[htb] \centering \caption{ Overview of all datasets used for the experiments in \cref{sec:experiments}. The total dataset size (training and testing) is denoted $N$ and $D$ denotes the dimensionality.
} \begin{tabular}{lS[table-format=5]S[table-format=2]l} \toprule Key & {$N$} & {$D$} & Source \\\midrule \texttt{bike} & 17379 & 17 & \textcite{Fanaee2013BikeDataset}. \web{http://archive.ics.uci.edu/ml/datasets/bike+sharing+dataset} \\\texttt{elevators} & 16599 & 18 & \textcite{Camachol1998AileronsElevators}. \\\texttt{kin40k} & 40000 & 8 & \textcite{Schwaighofer2002kin40k}. \\\texttt{metro} & 48204 & 66 & No citation request. \web{http://archive.ics.uci.edu/ml/datasets/Metro+Interstate+Traffic+Volume} \\\texttt{pm25} & 43824 & 79 & \textcite{Liang2015pmDataset}. \web{http://archive.ics.uci.edu/ml/datasets/Beijing+PM2.5+Data} \\\texttt{poletelecomm} & 15000 & 26 & \textcite{Weiss1995Poletelecomm}. \\\texttt{protein} & 45730 & 9 & No citation request. \web{http://archive.ics.uci.edu/ml/datasets/Physicochemical+Properties+of+Protein+Tertiary+Structure} \\\texttt{pumadyn} & 8192 & 32 & No citation request. \web[this website]{https://www.cs.toronto.edu/~delve/data/pumadyn/desc.html} \\\bottomrule \end{tabular} \label{tbl:datasets} \end{table} For an overview of the datasets we use, see~\cref{tbl:datasets}. All datasets are normalized to zero mean and unit variance for each feature. We explore two different computing environments. For datasets smaller than \num{20000} data points, we ran our experiments on a single GPU. This is the same setup as in \textcite{artemev2021cglb}, with the difference that we use a \textsc{Titan RTX} whereas they used a \textsc{Tesla V100}. For datasets larger than \num{20000} datapoints, our setup differs from \textcite{artemev2021cglb}: we use only CPUs, on machines where the kernel matrix still fits fully into memory. Specifically, we used machines running Ubuntu 18.04 with 50 gigabytes of RAM and two \textsc{Intel Xeon E5-2670 v2} CPUs.
\subsection{Bound quality experiments} For CGLB\xspace, we compute the bounds with a varying number of inducing inputs $M \in \{512, 1024, 2048, 4096\}$ and measure the time it takes to compute the bounds. For ACGP\xspace, we set the blocksize to $m \colonequals 256\cdot 40=\num{10240}$, which is the default \textsc{OpenBLAS}\xspace block size on our machines times the number of cores. This ensures that the sample size for our bounds is sufficiently large for accurate estimation, and at the same time the number of page faults should be comparable to the default Cholesky implementation. We measure the elapsed time every time a block of data points is added to the processed dataset and the bounds are recomputed. We compare both methods using the squared exponential (SE) kernel and the Ornstein-Uhlenbeck (OU) kernel, \begin{align} k_\text{SE}(\vec x, \vec z)&\colonequals \theta \exp\left(-\frac{\norm{\vec x - \vec z}^2}{2\ell^2}\right) \label{eq:app_kernel_se} \\k_\text{OU}(\vec x, \vec z)&\colonequals \theta \exp\left(-\frac{\norm{\vec x - \vec z}}{\ell}\right), \label{eq:app_kernel_ou} \end{align} where we fix $\theta\colonequals 1$ and vary $\ell$ over $\log\ell\in \{-1,0,1,2\}$. We use a Gaussian likelihood and fix the noise to $\sigma^2\colonequals 10^{-3}$. \subsection{Hyper-parameter tuning} In this section, we describe our experimental setup for the hyper-parameter optimization experiments, which closely follows that of \textcite{artemev2021cglb}. We randomly split each dataset into a training set consisting of 2/3 of the examples and a test set consisting of the remaining third. We use a Mat\'ern-$\frac{3}{2}$ kernel function and L-BFGS-B as the optimizer with \textsc{SciPy} \parencite{Virtanen2020SciPy} default parameters if not specified otherwise. All algorithms are stopped at the latest after 2000 optimization steps, after 12 hours of compute time, or when optimization has failed three times.
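The two kernels in \cref{eq:app_kernel_se,eq:app_kernel_ou} translate directly into code; the following is an illustrative sketch (function names are ours), not the optimized implementation used in the experiments:

```python
import numpy as np

def k_se(X, Z, theta=1.0, ell=1.0):
    """Squared exponential kernel: theta * exp(-||x - z||^2 / (2 ell^2))."""
    d2 = np.sum((X[:, None, :] - Z[None, :, :]) ** 2, axis=-1)
    return theta * np.exp(-d2 / (2.0 * ell ** 2))

def k_ou(X, Z, theta=1.0, ell=1.0):
    """Ornstein-Uhlenbeck kernel: theta * exp(-||x - z|| / ell)."""
    d = np.sqrt(np.sum((X[:, None, :] - Z[None, :, :]) ** 2, axis=-1))
    return theta * np.exp(-d / ell)
```

Both take $n\times D$ and $m\times D$ input arrays and return the $n\times m$ kernel matrix; in the experiments, $\theta=1$ and $\ell$ is varied over $\log\ell\in\{-1,0,1,2\}$.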
We repeat each experiment five times with a different shuffle of the dataset and report the results in \cref{tbl:results,tbl:results_gpu}. For CGLB\xspace, it is necessary to decide on a number of inducing inputs. From the results reported by \textcite{artemev2021cglb}, it appears that using $M=2048$ inducing inputs yields the best trade-off between speed and performance; hence we use this value in our experiments. For the exact Cholesky and CGLB\xspace, the L-BFGS-B convergence criterion ``relative change in function value'' (\texttt{ftol}) is set to 0. For ACGP\xspace, we need to decide on both the desired relative error $r$ and the block size $m$. We successively decrease the optimizer's tolerance \texttt{ftol} as $(2/3)^{\text{restart}+1}$ and set $r$ to the same value. That is, regardless of whether the optimization of ACGP\xspace stopped successfully or for abnormal reasons, the optimization restarts aiming for higher precision. The effect of this is that, early in the hyper-parameter optimization, ACGP\xspace stops early, thus providing only an approximation to the optimal hyper-parameter values but also saving computation. With each restart, ACGP\xspace increases the precision, ensuring that we get closer and closer to the optimal hyper-parameter values at the expense of approaching the computational demand of an exact GP. The block size $m$ is set to the same value as for the bound quality experiments (\cref{sec:bound_experiments}), $40\cdot 256=\num{10240}$, which is the number of cores times the \textsc{OpenBLAS}\xspace block size. This ensures that the sample size for our bounds is sufficiently large for accurate estimation, and at the same time the number of page faults should be comparable to the default Cholesky implementation. Note that $m$ is a global parameter, independent of the dataset.
Hence, natural choices for both $r$ and $m$ are determined by parameters of standard software, which have sensible, machine-dependent default values. ACGP\xspace can therefore be considered parameter-free. Differing from the previous section, for ACGP\xspace we use the biased estimator $(N-M)\log p(\vec y_{:M})/M$ instead of $\mathcal{U}/2+\mathcal{L}/2$ to approximate $\log p(\vec y)$ when stopping. Since stopping occurs when log-determinant and quadratic form evolve roughly linearly, the two estimators are not far off each other. The main reason for using the biased estimator is of a technical nature: for auto-differentiation, it is easier and faster to implement a custom backward function which can handle the in-place operations of our Cholesky implementation. This custom backward function needs roughly twice the computation time of evaluating $\log p(\vec y)$, whereas the \textsc{Torch} default needs a factor of six. This shows that, when comparing to exact inference, auto-differentiation can be disadvantageous and make the Cholesky appear slower than it is. For CGLB\xspace, computation time is dominated not by the gradient but by the function evaluation itself. \section{Additional results} \label{sec:app_additional_results} In this section, we report additional results for the hyper-parameter tuning experiments (\cref{subsec:hyperparameters}) as well as plots showing the quality of the bounds on the log-determinant term, the quadratic term, and the log-marginal likelihood (see \cref{subsec:bound_evolution}). \subsection{Additional results for hyper-parameter tuning} \label{subsec:hyperparameters} Denote by $N_*$ the number of test instances, and by $\mu$ and $\sigma^2$ the mean and variance approximations of a method.
As performance metrics we use the root mean square error (RMSE) $$\sqrt{\frac{1}{N^*}\sum_{n=1}^{N^*} (y_n^*-\mu(\vec x_n^*))^2}\ ,$$ the negative log predictive density (NLPD) $$\frac{1}{2N^*}\sum_{n=1}^{N^*} \frac{(y_n^*-\mu(\vec x_n^*))^2}{\sigma^2(\vec x_n^*)}+\log\left(2\pi\sigma^2(\vec x_n^*)\right) \ ,$$ and the negative marginal log likelihood $-\log p(\vec y)$. \cref{tbl:results,tbl:results_gpu} summarize the results for each dataset, averaging over the outcomes of the final optimization step of each repetition. For each metric, we indicate whether a higher ($\uparrow$) or lower ($\downarrow$) value indicates a better result. The results for exact GP regression are marked in italics to emphasize that these are results we are trying to approach, not to beat. As the other methods are all approximations to the exact GP, there is little hope of achieving better performance. The best result among the approximation methods for each dataset is highlighted in bold. \begin{table}[htb] \centering \caption{Summary of the CPU hyper-parameter tuning results from \cref{sec:hyperparameter_experiments}. For each metric, we report its final value over the course of optimization. For SVGP, we did not compute the exact marginal log-likelihoods, to save cluster time. \label{tbl:results}} \input{sup/results_table_cpu.tex} \end{table} \begin{table}[htb] \centering \caption{Summary of the GPU hyper-parameter tuning results from \cref{sec:hyperparameter_experiments}. For each metric, we report its final value over the course of optimization. We did not compute the exact marginal log-likelihoods, to save cluster time. \label{tbl:results_gpu}} \input{sup/results_table_gpu.tex} \end{table} \subsection{Additional plots for hyper-parameter tuning} \label{subsec:hyperparameter_plots} The plots for the hyper-parameter optimization are shown in figures~\ref{fig:hyp_metro_lml}--\ref{fig:hyp_pumadyn_nlpd}.
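The two predictive metrics defined above translate directly into a few lines of numpy; this is our own sketch (variable names are ours), matching the formulas term by term.

```python
import numpy as np

def rmse(y_test, mu):
    """Root mean square error of the predictive mean over the test set."""
    return np.sqrt(np.mean((y_test - mu) ** 2))

def nlpd(y_test, mu, var):
    """Negative log predictive density for Gaussian predictions,
    averaged over the test set: mean of
    ((y - mu)^2 / var + log(2 pi var)) / 2."""
    return 0.5 * np.mean((y_test - mu) ** 2 / var + np.log(2 * np.pi * var))
```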
Each point in the plots corresponds to one accepted optimization step for the given method, and thus to a particular set of hyper-parameters during the optimization. In figures~\ref{fig:hyp_metro_rmse}--\ref{fig:hyp_pumadyn_rmse}, we show the root-mean-square error (RMSE) that each method obtains on the test set at each optimization step, and figures~\ref{fig:hyp_metro_nlpd}--\ref{fig:hyp_pumadyn_nlpd} show the same for NLPD. In figures~\ref{fig:hyp_metro_lml}--\ref{fig:hyp_pumadyn_lml}, we show the log-marginal likelihood, $\log p(\bm{y})$, that an exact GP would have achieved with the specific set of hyper-parameters at each optimization step for each method.
\begin{figure}
\caption{$\log p(\bm{y})$ for the \texttt{metro} dataset.}
\label{fig:hyp_metro_lml}
\caption{$\log p(\bm{y})$ for the \texttt{pm25} dataset.}
\label{fig:hyp_pm25_lml}
\caption{$\log p(\bm{y})$ for the \texttt{protein} dataset.}
\label{fig:hyp_protein_lml}
\caption{$\log p(\bm{y})$ for the \texttt{kin40k} dataset.}
\label{fig:hyp_kin40k_lml}
\caption{$\log p(\bm{y})$ for the \texttt{bike} dataset.}
\label{fig:hyp_bike_lml}
\caption{$\log p(\bm{y})$ for the \texttt{elevators} dataset.}
\label{fig:hyp_elevators_lml}
\caption{$\log p(\bm{y})$ for the \texttt{pole} dataset.}
\label{fig:hyp_pole_lml}
\caption{$\log p(\bm{y})$ for the \texttt{pumadyn32nm} dataset.}
\label{fig:hyp_pumadyn_lml}
\end{figure}
\begin{figure}
\caption{RMSE for the \texttt{metro} dataset.}
\label{fig:hyp_metro_rmse}
\caption{RMSE for the \texttt{pm25} dataset.}
\label{fig:hyp_pm25_rmse}
\caption{RMSE for the \texttt{protein} dataset.}
\label{fig:hyp_protein_rmse}
\caption{RMSE for the \texttt{kin40k} dataset.}
\label{fig:hyp_kin40k_rmse}
\caption{RMSE for the \texttt{bike} dataset.}
\label{fig:hyp_bike_rmse}
\caption{RMSE for the \texttt{elevators} dataset.}
\label{fig:hyp_elevators_rmse}
\caption{RMSE for the \texttt{pole} dataset.}
\label{fig:hyp_pole_rmse}
\caption{RMSE for the \texttt{pumadyn32nm} dataset.}
\label{fig:hyp_pumadyn_rmse}
\end{figure}
\begin{figure}
\caption{NLPD for the \texttt{metro} dataset.}
\label{fig:hyp_metro_nlpd}
\caption{NLPD for the \texttt{pm25} dataset.}
\label{fig:hyp_pm25_nlpd}
\caption{NLPD for the \texttt{protein} dataset.}
\label{fig:hyp_protein_nlpd}
\caption{NLPD for the \texttt{kin40k} dataset.}
\label{fig:hyp_kin40k_nlpd}
\caption{NLPD for the \texttt{bike} dataset.}
\label{fig:hyp_bike_nlpd}
\caption{NLPD for the \texttt{elevators} dataset.}
\label{fig:hyp_elevators_nlpd}
\caption{NLPD for the \texttt{pole} dataset.}
\label{fig:hyp_pole_nlpd}
\caption{NLPD for the \texttt{pumadyn32nm} dataset.}
\label{fig:hyp_pumadyn_nlpd}
\end{figure} \subsection{Additional plots for the bound quality experiments} \label{subsec:bound_evolution} \input{sections/appendix_bound_plots_stacked.tex} \section{Notation} We use a \textsc{python}-inspired index notation, abbreviating for example $[y_1, \ldots, y_{n}]^{\top}$ as $\bm{y}_{:n}$---observe that the indexing starts at 1. Indexing binds before any other operation, such that $\inv{\boldsymbol K_{:s, :s}}$ is the inverse of $\boldsymbol K_{:s, :s}$ and \emph{not} all elements up to $s$ of $\inv{\boldsymbol K}$. For $s \in \{1,\dots, N\}$, define $\mathcal{F}_s\colonequals\sigma(\vec x_1, y_1, \dots, \vec x_s, y_s)$ to be the $\sigma$-algebra generated by $\vec x_1, y_1, \dots, \vec x_s, y_s$. Compared to the main article, we change the letter $M$ to $t$. The motivation for the former notation is to highlight the role of the variable as a subset size, whereas in this part the focus is on its role as a stopping time. \section{Proof Sketch} \label{sec:proof_sketch} In this section of the appendix, we provide additional intuition on the theorems and proofs for the theory behind ACGP\xspace{}. \input{sections/method_long.tex} \input{sections/pseudo_code.tex} \input{sections/bound_code.tex} \section{Assumptions} \begin{assumption} \label{assume:exchangeability} \label{assumption:exchangeability} Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space and let $(\vec x_j, y_j)_{j=1}^N$ be a sequence of independent and identically distributed random vectors with $\vec x:\Omega\rightarrow\mathbb{R}^D$ and $y:\Omega\rightarrow\mathbb{R}$.
\end{assumption} \begin{assumption} \label{assume:expected_quadratic_form_assumptions} \label{assumption:expected_quadratic_form_assumptions} For all $s,i,j$ with $s< i\leq j\leq N$ and all functions $f(\vec x_j,\vec x_i;\vec x_1, \dots, \vec x_{s})\geq 0$, \begin{align} \label{eq:expected_quadratic_form_assumptions} \mathbb{E}\left[f(\vec x_j, \vec x_i)\left(y_j-\GPmean[j-1]{j}\right)^2\mid\mathcal{F}_{s}\right] \leq \mathbb{E}\left[f(\vec x_j, \vec x_i)\left(y_j-\GPmean[s]{j}\right)^2\mid\mathcal{F}_{s}\right] \end{align} where $f(\vec x_j, \vec x_i)\in\left\{\frac{1}{\GPvar{j}}, \frac{\postkt{\vec x_j}{\vec x_i}^2}{(\GPvar{j})\noisef{\vec x_j}\noisef{\vec x_i}}\right\}$. \end{assumption} That is, we assume that, in expectation, the estimator improves with more data. Note that $f$ cannot depend on any entries of $\vec y$. \section{Main Theorem} \label{sec:main_theorem_proof} This section restates \cref{thm:main} and connects the different proofs in the sections to follow. \begin{theorem} Assume that \cref{assume:exchangeability} and \cref{assume:expected_quadratic_form_assumptions} hold. For any even $m\in\{2, 4, \dots, N-2\}$ and any $s \in \{1, \dots, N-m\}$, the bounds defined in \cref{eq:bound_ud,thm:log_det_lower_bound,thm:quad_lower_bound,thm:quad_upper_bound} hold in expectation: \begin{align*} \mathbb{E}[\mathcal{L}_D\mid \mathcal{F}_{s}]\leq \mathbb{E}[\log(\det{\boldsymbol K}) \mid \mathcal{F}_{s}]\leq \mathbb{E}[\mathcal{U}_D\mid \mathcal{F}_{s}] \text{ and} \\\mathbb{E}[\mathcal{L}_Q\mid \mathcal{F}_{s}]\leq \mathbb{E}[\vec y^{\top}\inv{\boldsymbol K}\vec y \mid \mathcal{F}_{s}]\leq \mathbb{E}[\mathcal{U}_Q\mid \mathcal{F}_{s}]\ . \end{align*} \end{theorem} \begin{proof} Follows from \cref{thm:log_det_lower_bound,thm:quad_lower_bound,thm:quad_upper_bound,lemma:decreasing_expectation}.
\end{proof} \section{Proof for the Lower Bound on the Determinant} \input{sup/det_lower_bound_proof.tex} \section{Proof for the Upper Bound on the Quadratic Form} \input{sup/quad_upper_bound_proof.tex} \section{Proof for the Lower Bound on the Quadratic Form} \input{sup/quad_lower_bound_proof.tex} \section{Utility Proofs} \input{sup/utility_proofs.tex} \begin{lemma}[Link between the Cholesky and Gaussian process regression] \label{lemma:cholesky_and_gp_variance} Denote with $\boldsymbol C_N$ the Cholesky decomposition of $\boldsymbol K=\boldsymbol K_N+\sigma^2\boldsymbol I_N$, so that $\boldsymbol C_N\boldsymbol C_N^{\top}=\boldsymbol K$. The $n$-th diagonal element of $\boldsymbol C_N$, squared, equals $\GPvar[n-1]{n}$: \[ [\boldsymbol C_N]_{nn}^2=\GPvar[n-1]{n} \, . \] \end{lemma} \begin{proof} With abuse of notation, define $\boldsymbol C_1\colonequals \sqrt{k(\vec x_1, \vec x_1)+\sigma^2}$ and $$\boldsymbol C_N \colonequals \begin{bmatrix} \boldsymbol C_{N-1} & \vec 0 \\ \vec k_N^{\top}\boldsymbol C_{N-1}^{-\top} & \sqrt{k(\vec x_N, \vec x_N)+\sigma^2-\vec k_N^{\top}\inv{(\boldsymbol K_{N-1}+\sigma^2\boldsymbol I_{N-1})}\vec k_N} \end{bmatrix}.$$ We will show that the lower triangular matrix $\boldsymbol C_N$ satisfies $\boldsymbol C_N\boldsymbol C_N^{\top} = \boldsymbol K_N+\sigma^2\boldsymbol I_N$. Since the Cholesky decomposition is unique \parencite[Theorem~4.2.7]{golub2013matrix4}, $\boldsymbol C_N$ must be the Cholesky decomposition of $\boldsymbol K$. Furthermore, by definition of $\boldsymbol C_N$, $[\boldsymbol C_N]_{NN}^2=k(\vec x_N, \vec x_N)+\sigma^2-\vec k_N^{\top}\inv{(\boldsymbol K_{N-1}+\sigma^2\boldsymbol I_{N-1})}\vec k_N$. The statement then follows by induction.
To remain within the text margins, define $$x\colonequals \vec k_N^{\top}\boldsymbol C_{N-1}^{-\top}\boldsymbol C_{N-1}^{\!-1}\vec k_N+k(\vec x_N, \vec x_N)+\sigma^2-\vec k_N^{\top}\inv{(\boldsymbol K_{N-1}+\sigma^2\boldsymbol I_{N-1})}\vec k_N.$$ We want to show that $\boldsymbol C_N\boldsymbol C_N^{\top} = \boldsymbol K_N+\sigma^2\boldsymbol I_N$: \begin{align*} \boldsymbol C_{N}\boldsymbol C_{N}^{\top} &= \begin{bmatrix} \boldsymbol C_{N-1} & \vec 0 \\ \vec k_N^{\top}\boldsymbol C_{N-1}^{-\top} & \sqrt{k(\vec x_N, \vec x_N)+\sigma^2-\vec k_N^{\top}\inv{(\boldsymbol K_{N-1}+\sigma^2\boldsymbol I_{N-1})}\vec k_N} \end{bmatrix}\\ &\quad\cdot\begin{bmatrix} \boldsymbol C_{N-1}^{\top} & \boldsymbol C_{N-1}^{\!-1}\vec k_N \\ \vec 0^{\top} & \sqrt{k(\vec x_N, \vec x_N)+\sigma^2-\vec k_N^{\top}\inv{(\boldsymbol K_{N-1}+\sigma^2\boldsymbol I_{N-1})}\vec k_N} \end{bmatrix} \\&=\begin{bmatrix} \boldsymbol C_{N-1}\boldsymbol C_{N-1}^{\top} & \boldsymbol C_{N-1}\boldsymbol C_{N-1}^{\!-1}\vec k_N\\ \vec k_N^{\top} \boldsymbol C_{N-1}^{-\top} \boldsymbol C_{N-1}^{\top} & x \end{bmatrix} \\&=\begin{bmatrix} \boldsymbol K_{N-1} +\sigma^2\boldsymbol I_{N-1} & \vec k_N\\ \vec k_N^{\top} & x \end{bmatrix} \end{align*} The term $x$ can be simplified further, using $\boldsymbol C_{N-1}^{-\top}\boldsymbol C_{N-1}^{\!-1}=\inv{(\boldsymbol C_{N-1}\boldsymbol C_{N-1}^{\top})}$: \begin{align*} x&=\vec k_N^{\top}\boldsymbol C_{N-1}^{-\top}\boldsymbol C_{N-1}^{\!-1}\vec k_N+k(\vec x_N, \vec x_N)+\sigma^2-\vec k_N^{\top}\inv{(\boldsymbol K_{N-1}+\sigma^2\boldsymbol I_{N-1})}\vec k_N \\&=\vec k_N^{\top}\inv{(\boldsymbol K_{N-1}+\sigma^2\boldsymbol I_{N-1})}\vec k_N+k(\vec x_N, \vec x_N)+\sigma^2-\vec k_N^{\top}\inv{(\boldsymbol K_{N-1}+\sigma^2\boldsymbol I_{N-1})}\vec k_N \\&=k(\vec x_N, \vec x_N)+\sigma^2. \end{align*} \end{proof} \end{document}
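The lemma above admits a quick numerical sanity check; the following is our own sketch with an assumed RBF kernel (not part of the original derivation): the squared diagonal of the Cholesky factor of $K_N+\sigma^2 I_N$ matches the posterior predictive variance of each point conditioned on its predecessors.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 2))
sigma2 = 0.1  # noise variance (illustrative value)

# RBF kernel matrix -- an illustrative choice of kernel.
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-0.5 * sq)
A = K + sigma2 * np.eye(len(X))
C = np.linalg.cholesky(A)

# Posterior variance of point n given noisy observations at points 1..n-1.
for n in range(len(X)):
    k_n = K[n, :n]
    prior = K[n, n] + sigma2
    if n == 0:
        var = prior
    else:
        var = prior - k_n @ np.linalg.solve(K[:n, :n] + sigma2 * np.eye(n), k_n)
    # [C]_{nn}^2 equals the predictive variance, as in the lemma.
    assert np.isclose(C[n, n] ** 2, var)
```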
\begin{document} \title{Quantification and Characterization of Leakage Errors} \date{April 10, 2017} \author{Christopher J. Wood} \email{[email protected]} \affiliation{IBM T.J. Watson Research Center, Yorktown Heights, NY 10598, USA} \author{Jay M. Gambetta} \affiliation{IBM T.J. Watson Research Center, Yorktown Heights, NY 10598, USA} \begin{abstract} We present a general framework for the quantification and characterization of leakage errors that result when a quantum system is encoded in the subspace of a larger system. To do this we introduce new metrics for quantifying the coherent and incoherent properties of the resulting errors, and we illustrate this framework with several examples relevant to superconducting qubits. In particular, we propose two quantities, the leakage and seepage rates, which together with the average gate fidelity allow for characterizing the average performance of quantum gates in the presence of leakage, and we show how the randomized benchmarking protocol can be modified to enable the robust estimation of all three quantities for a Clifford gate set. \end{abstract} \maketitle \section{Introduction} \label{sec:intro} Accurate characterization of errors is critical for verifying the performance of quantum devices, and for prioritizing the methods of error correction that would improve it. In recent years there has been great progress in improving the performance of many types of qubits, with average gate fidelities exceeding $\sim\!\!\!0.999$ for 1-qubit and $\sim\!\!\!0.99$ for 2-qubit gates being reported in superconducting qubits~\cite{Barends2014nat,Sheldon2016pra2} and trapped ions~\cite{Benhelm2008nat,Ballance2016prl}. As thermal relaxation coherence times ($T_1$) increase, it becomes critical to quantify and measure the leading errors limiting further improvements in gate fidelities. In many quantum systems, so-called leakage errors are an important factor for further gate optimization.
These errors result from the state of a quantum system \emph{leaking out} of a pre-defined subspace to occupy an unwanted energy level. They are of particular importance in the context of fault-tolerant quantum error correction, as they require significant additional resources to correct in a fault-tolerant manner compared to standard errors, and can greatly impact the fault-tolerance threshold of certain codes~\cite{Aliferis2007qic,Fowler2013pra,Suchara2015qic}. Furthermore, even when very weak or short-lived, with the system returning to the desired subspace, these interactions can result in significant logical errors, such as the AC Stark shift observed in the control of superconducting qubits \cite{Motzoi2009prl,Lucero2010pra}. Leakage errors may be present in any quantum system where a qubit is encoded in a subspace of a larger quantum system, as is the case for many qubit architectures including superconducting qubits~\cite{Gambetta2017nqi}, quantum dots~\cite{Divincenzo2000nat}, and trapped ions~\cite{Haffner2008pr}. Though there has been significant interest in the characterization and suppression of leakage in quantum systems, current methods are highly architecture dependent~\cite{Lucero2010pra,Chow2010pra,Motzoi2009prl,Gambetta2011pra,Medford2013nat,Wardrop2014prb,Mehl2015prb2}, and while progress has been made (e.g., see \cite{Wallman2016njp,Chen2016prl}), there is not yet a general framework for quantifying and characterizing the relevant parameters of interest for an experimenter.
In this paper our goal is three-fold: in \cref{sec:leakage-errors,sec:coherent-leakage} we develop a unified framework for quantifying leakage errors which may occur in quantum systems; in \cref{sec:leakage-rb} we extend randomized benchmarking (RB)~\cite{Knill2008pra,Magesan2011prl} to a \emph{leakage randomized benchmarking} (LRB) protocol that estimates leakage errors in addition to average gate fidelities; in \cref{sec:leakage-models} we explore several canonical examples of leakage errors which may occur in quantum systems. To quantify leakage, in \cref{sec:leakage-errors} we introduce a measure of \emph{leakage in quantum states}, and for quantum gates we introduce two measures which we call the \emph{leakage rate} $(L_1)$ and the \emph{seepage rate} $(L_2)$. The leakage rate quantifies the average population lost from a quantum system of interest to states outside the computational subspace, while the seepage rate quantifies the return of population to the system from those states. To experimentally estimate these quantities we describe the leakage randomized benchmarking (LRB) protocol in \cref{sec:leakage-rb} and illustrate its application with a numerical simulation of a transmon superconducting qubit. We note that this protocol and a similar approach have recently been demonstrated experimentally in Ref.~\cite{McKay2016arx} and Ref.~\cite{Chen2016prl}, respectively. In \cref{sec:coherent-leakage} we discuss the special case of \emph{coherent leakage errors} and introduce measures for the \emph{coherence of leakage} in quantum states and channels. While these measures cannot be directly estimated using the LRB protocol, we prove bounds on these quantities in terms of leakage and seepage. We note that in previous work the combined leakage-seepage rate $L_1+L_2$ was referred to as a measure of coherence of leakage~\cite{Wallman2016njp}; however, this is a misnomer, as leakage and seepage can result from purely incoherent thermal relaxation processes.
We demonstrate and discuss this in \cref{sec:leakage-models}, along with several other examples of leakage models including logical leakage errors, unitary leakage, thermal leakage, and multi-qubit leakage. \subsection*{Comparison to previous work} There have been previous proposals for generalizing RB to account for leakage~\cite{Epstein2014pra,Chasseur2015pra,Chen2016prl}, and also for related benchmarking protocols that explicitly quantify leakage instead of average gate fidelity~\cite{Wallman2016njp}. The difference between our work and previous protocols is that ours is designed to address the following experimental considerations: \begin{enumerate} \item It allows for the robust estimation of the leakage rate $L_1$, the seepage rate $L_2$, and the average gate error $E$ of a Clifford gate set. \item It can be implemented in any system capable of implementing RB with only minor modification. \item The fitting model for parameter estimation from RB data is a single exponential decay model. \end{enumerate} The LRB protocol is essentially equivalent to the method recently used in \cite{Chen2016prl,McKay2016arx} for the characterization of leakage in a superconducting qubit. However, while that work relied on the assumption of a phenomenological decay model and direct measurement of leakage levels, we provide a more rigorous derivation of the decay model and discuss the assumptions required for its validity. Our method can also be implemented without direct measurement of the leakage levels. To contrast our approach with other previous work, the protocols of \cite{Epstein2014pra,Chasseur2015pra} consider the decay model derived from RB in the presence of leakage for a qubit or multi-qubit system, respectively, and hence satisfy condition 2. They fail condition 1, as they do not provide a means for estimating $L_1$ or $L_2$, only the gate error.
Further, since the resulting decay model is a bi-exponential in the single-qubit case and a multiple-exponential in the multi-qubit case, they fail to satisfy condition 3. The proposal for robust estimation of leakage in \cite{Wallman2016njp} satisfies conditions 2 and 3, and in particular the decay model used is equivalent to one part of our proposed protocol. However, it fails condition 1, as the characterization parameter reported by this protocol is equivalent to the sum of what we define as the leakage and seepage rates. It is critical to estimate these two quantities separately for the characterization of a quantum gate set in the presence of leakage, which we demonstrate with an example. \section{Quantifying Leakage Errors} \label{sec:leakage-errors} Leakage in a quantum system can be modeled by treating the system of interest as a subspace of a larger quantum system in which the full dynamics occur. We will call the $d_1$-dimensional subspace of energy levels in which the ideal dynamics occur the \emph{computational subspace}, labeled by $\2X_1$. The $d_2$-dimensional subspace of all additional levels that the system may occupy due to leakage dynamics will be called the \emph{leakage subspace}, labeled by $\2X_2$. Thus the full state space of the system is described by a $(d_1+d_2)$-dimensional direct-sum state space $\2X = \2X_1\oplus \2X_2$. We define the \emph{state leakage} ($L$) of a density matrix $\rho \in D(\2X)$ by \begin{equation} L(\rho) = \Tr[{\bb 1}_2 \rho] = 1-\Tr[{\bb 1}_1\rho], \label{eq:state-leakage} \end{equation} where ${\bb 1}_1$ and ${\bb 1}_2$ denote the projectors onto the subspaces $\2X_1$ and $\2X_2$, respectively. For a quantum state to exhibit leakage it must be introduced into the system by some physical process, for example thermal relaxation or imperfect control errors. In general we refer to any system dynamics which result in a change of the state leakage of a quantum system as a \emph{leakage error}.
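The state-leakage definition in \cref{eq:state-leakage} is a one-line computation; the following numpy sketch (our own, with an illustrative qutrit example) makes the basis-ordering convention explicit: the first $d_1$ levels span the computational subspace.

```python
import numpy as np

def state_leakage(rho, d1):
    """State leakage L(rho) = Tr[P2 rho] = 1 - Tr[P1 rho], where P1
    projects onto the first d1 levels (the computational subspace)."""
    return 1.0 - np.real(np.trace(rho[:d1, :d1]))

# Example: a qutrit (d1 = 2 computational levels, one leakage level)
# with 5% of the population leaked into |2>.
rho = np.diag([0.6, 0.35, 0.05])
assert np.isclose(state_leakage(rho, d1=2), 0.05)
```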
In the quantum circuit paradigm, imperfect quantum operations may be described mathematically by completely positive trace-preserving (CPTP) maps, and thus a leakage error is a special class of CPTP map that couples the computational and leakage subspaces. Let $\2E$ be a CPTP map describing a leakage error. We can quantify the leakage error in $\2E$ by how it changes the state leakage of an input state. However, unlike with state leakage, we will require two metrics to distinguish between leakage errors which transfer population \emph{to}, and population \emph{from}, the leakage subspace. We call these errors \emph{gate leakage} and \emph{gate seepage}, respectively. Much as the average gate fidelity quantifies typical gate errors, we will generally be interested in the average and worst-case gate leakage and seepage. Thus we define the \emph{leakage rate} $L_1$ and \emph{seepage rate} $L_2$ of a channel $\2E$ to be the average channel leakage and channel seepage, respectively: \begin{align} L_{1}(\2E) &= \int \! d\psi_1 L\Big(\2E(\ketbra{\psi_1}{\psi_1})\Big) =L\left(\2E\left(\frac{{\bb 1}_1}{d_1}\right)\right) \label{eq:leakage-rate} \\ L_{2}(\2E) &= 1-\int \! d\psi_2 L\Big(\2E(\ketbra{\psi_2}{\psi_2})\Big) = 1- L\left(\2E\left(\frac{{\bb 1}_2}{d_2}\right)\right) \nonumber \end{align} where the averages are over all states in the computational subspace $(\ket{\psi_1})$ and leakage subspace $(\ket{\psi_2})$, respectively. The worst-case gate leakage and seepage rates require maximizing, rather than averaging, over all input states. We note, however, that we may bound the worst-case quantities by the average rates, as was shown in \cite{Wallman2016njp}: \begin{align} d_1 L_1(\2E) \ge& L\big(\2E(\rho_1)\big) \label{eq:worst-p1}\\ d_2 L_2(\2E) \ge& 1-L\big(\2E(\rho_2)\big) \label{eq:worst-p2} \end{align} where $d_j$ is the dimension of $\2X_j$, and $\rho_1$ ($\rho_2$) is an arbitrary state in the computational (leakage) subspace.
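Because the averages in \cref{eq:leakage-rate} reduce to evaluating the channel on the maximally mixed states of the two subspaces, $L_1$ and $L_2$ are easy to compute for any channel given in Kraus form. The following is our own numpy sketch (the Kraus representation is an assumed input format, not part of the original text):

```python
import numpy as np

def apply_channel(kraus, rho):
    """Apply a CPTP map given by Kraus operators to a density matrix."""
    return sum(A @ rho @ A.conj().T for A in kraus)

def leakage_rates(kraus, d1, d2):
    """Average leakage rate L1 and seepage rate L2 of a channel on a
    (d1 + d2)-dimensional space, with the first d1 levels computational:
    L1 = Tr[P2 E(P1/d1)],  L2 = 1 - Tr[P2 E(P2/d2)]."""
    d = d1 + d2
    P1 = np.diag([1.0] * d1 + [0.0] * d2)
    P2 = np.eye(d) - P1
    L1 = np.real(np.trace(P2 @ apply_channel(kraus, P1 / d1)))
    L2 = 1.0 - np.real(np.trace(P2 @ apply_channel(kraus, P2 / d2)))
    return L1, L2
```

For example, a unitary that swaps levels $\ket1$ and $\ket2$ of a qutrit ($d_1=2$, $d_2=1$) gives $L_1=1/2$ and $L_2=1$, consistent with the unital relation $d_1 L_1 = d_2 L_2$ discussed below.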
In addition to characterizing the amount of leakage introduced by an imperfect gate, it is also necessary to characterize the performance of the gate within the computational subspace. A commonly used measure of gate error is the average gate infidelity $E=1-\overline{F}$, where $\overline{F}$ is the average gate fidelity \begin{equation} \overline{F}(\2E) = \int d\psi\bra{\psi} \2E(\ketbra{\psi}{\psi})\ket{\psi}. \end{equation} In the presence of leakage we define $\overline{F}$ by only averaging over \emph{states within the computational subspace}: \begin{align} \overline{F}(\2E) &= \int d\psi_1 \bra{\psi_1} \2E(\ketbra{\psi_1}{\psi_1})\ket{\psi_1} \label{eq:fid-def}\\ &= \frac{d_1F_{\text{\scriptsize{pro}}}(\2E)+1-L_1}{d_1+1} \label{eq:leakage-fid} \end{align} where we have expressed $\overline{F}$ in terms of the \emph{process fidelity} of $\2E$ with the identity channel on the computational subspace: \begin{align} F_{\text{pro}}(\2E) = \frac{1}{d_1^2} \Tr[({\bb 1}_1\otimes{\bb 1}_1)\2S_{\2E}] \end{align} where $\2S_{\2E}$ is the superoperator representation of the map $\2E$~\cite{Wood2015qic}. We suggest that the goal of a partial characterization protocol for leakage errors is to extract the three parameters $L_1$, $L_2$ and $E$. This is a major difference between our approach and the protocol in \cite{Wallman2016njp}, which only aims to learn a single parameter equivalent to the joint leakage-seepage rate $L_1+L_2$. Knowledge of the combined leakage rate is insufficient to accurately quantify the gate error, and in addition the relationship between $L_1$ and $L_2$ can vary greatly depending on the noise process causing leakage.
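The fidelity formula in \cref{eq:leakage-fid} can likewise be evaluated numerically. The sketch below is our own; it assumes a column-stacking superoperator convention, $\2S_{\2E}=\sum_k \bar A_k\otimes A_k$ for Kraus operators $A_k$, under which $\Tr[({\bb 1}_1\otimes{\bb 1}_1)\2S_{\2E}]=\sum_k|\Tr[{\bb 1}_1 A_k]|^2$.

```python
import numpy as np

def superoperator(kraus):
    """Column-stacking superoperator S = sum_k conj(A_k) kron A_k."""
    return sum(np.kron(A.conj(), A) for A in kraus)

def avg_gate_fidelity(kraus, d1, L1):
    """Average gate fidelity restricted to the computational subspace,
    via F = (d1 * F_pro + 1 - L1) / (d1 + 1)."""
    d = kraus[0].shape[0]
    P1 = np.diag([1.0] * d1 + [0.0] * (d - d1))
    S = superoperator(kraus)
    F_pro = np.real(np.trace(np.kron(P1, P1) @ S)) / d1**2
    return (d1 * F_pro + 1.0 - L1) / (d1 + 1.0)
```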
Three specific cases are: \emph{erasure leakage errors}, where $L_2=0$ and hence any leaked population is irretrievably lost; \emph{thermal relaxation leakage errors}, where $L_2 \gg L_1$ if the computational subspace is the low-energy subspace of the system; and \emph{unital leakage errors} (of which unitary leakage errors are a subset), where the leakage and seepage rates are no longer independent and satisfy the equation $d_1 L_1 = d_2 L_2$~\footnote{This result follows directly from the definitions of leakage and seepage. For a given CPTP map $\2E$ we have $d_1 L_1 (\2E)= \Tr[{\bb 1}_2 \2E({\bb 1}_1)] = \Tr[{\bb 1}_2 \2E({\bb 1})] - d_2(1-L_2(\2E))$. Now, if $\2E$ is unital then $\Tr[{\bb 1}_2 \2E({\bb 1})]=\Tr[{\bb 1}_2] =d_2$, and hence $d_1 L_1 (\2E)=d_2 L_2 (\2E)$.}. We elaborate on this when giving several example noise models in \cref{sec:leakage-models}. We also note that the definitions of leakage and seepage can be generalized to multiple leakage subspaces by further partitioning the leakage subspace. We discuss this in more detail in \cref{sec:multi-leakage}. \section{Characterizing Leakage Errors} \label{sec:leakage-rb} We now show how the randomized benchmarking (RB) protocol can be generalized to include estimation of the leakage and seepage rates in addition to the average gate fidelity for a Clifford gate set. We call this generalized protocol \emph{leakage randomized benchmarking} (LRB). The basic requirements for LRB are the implementation of a set of gates which form a 2-design on the computational subspace, the typical set being the Clifford gates, and the ability to perform a set of measurements $\{M_0,\hdots, M_{d_1-1}\}$ which may be used to estimate the populations of a set of basis states in $\2X_1$.
By summing over all measurements we may implement the projection $\sum_j M_j = {\bb 1}_1$, which allows for the estimation of the population in $\2X_1$, and hence of the state leakage $L$. In the following we will assume that this is done with respect to the computational basis. In this sense LRB is equivalent to RB, but with additional measurements and different post-processing of the acquired data. With this requirement the LRB protocol is as follows: \begin{enumerate} \item Choose a random sequence of $m$ Clifford gates $i_m = \2C_m \circ \hdots \circ \2C_1$ and compute the RB recovery operator $\2C_{m+1} = \2C_1^\dagger \circ\hdots \circ \2C_m^\dagger$ to obtain the RB sequence $i_m^\prime = \2C_{m+1}\circ i_m$. \item Prepare the system in an initial state $\rho_0=\ketbra{0}{0} \in D(\2X_1)$, apply the sequence $i_m^\prime$, and perform a measurement to obtain an estimate of the probabilities $p_j(i_m^\prime) = \Tr[M_j i_m^\prime(\rho_0)]$, where $j=0,\hdots, d_1-1$. \item Sum the probabilities $p_j(i_m^\prime)$ to obtain an estimate of the population in $\2X_1$: $p_{{\bb 1}_1}(i_m^\prime) = \sum_j p_j(i_m^\prime) = \Tr[{\bb 1}_1 i_m^\prime(\rho_0)]$. \item Repeat Steps 1--3 $K$ times to obtain an estimate of the average over all Clifford sequences $i_m$: $p_j(m)= \1E_{i_m^\prime}\left[ p_j(i_m^\prime)\right]$, $p_{{\bb 1}_1}(m)= \1E_{i_m^\prime}\left[ p_{{\bb 1}_1}(i_m^\prime)\right]$. \item Repeat Steps 1--4 for different sequence lengths $m$. \item Fit $p_{{\bb 1}_1}(m)$ to the decay model \begin{equation} p_{{\bb 1}_1}(m) = A + B \lambda_1^m \label{eq:lrb-leak} \end{equation} with $0\le A,B$ to obtain estimated values for $A$, $B$ and $\lambda_1$. Compute the estimate of the average leakage and seepage rates of the gate set as \begin{align} L_1(\2E) &= (1-A)(1-\lambda_1) \label{eq:fit-p1}\\ L_2(\2E) &= A(1-\lambda_1).
\label{eq:fit-p2} \end{align} Note that in practice one may put tighter bounds on the fit parameters based on estimates of the leakage rates using \begin{align} A &\approx \frac{L_2}{L_1+ L_2} \\ B &\approx \frac{L_1}{L_1+L_2} + \epsilon_{\text{spam}}\\ \lambda_1 &= 1 - L_1 - L_2. \end{align} \item Using the fitted value of $\lambda_1$, fit $p_0(m)$ to the decay model \begin{align} p_0(m) &= A_0 + B_0 \lambda_1^m + C_0 \lambda_2^m \label{eq:lrb-fid} \end{align} where $0\le A_0 \le A$, $0\le C_0 \le1$, $0\le A_0 +B_0+C_0 \le 1$, to obtain an estimate of the average gate fidelity of the gate set by \begin{align} \overline{F} &= \frac{1}{d_1}\big[(d_1-1)\lambda_2 + 1-L_1\big]. \label{eq:lrb-avg-fid} \end{align} If leakage is weak ($\lambda_2 \ll \lambda_1$, $B\ll A$), a more robust fit can be obtained by fitting directly to the standard RB model $p_0(m) = A_0 +C_0 \lambda_2^m$. \end{enumerate} See \cref{sec:lrb-decay-model} for the derivation of the decay models \cref{eq:lrb-leak,eq:lrb-fid} in Steps 6 and 7. The LRB protocol subsumes the various decay models previously presented in the literature. The modified RB decay model presented in \cite{Epstein2014pra} is equivalent to the model in \cref{eq:lrb-fid} for the case of leakage to a single level. The protocol presented in \cite{Wallman2016njp} is based on a 1-design average rather than a 2-design and is equivalent to our model in the case where the recovery operation is not included, hence replacing the sequence $i_m^\prime$ with $i_m$ in Steps 2--5 of the LRB protocol. In this case the resulting decay model is equivalent to the one given in \cref{eq:lrb-leak}. Finally, the phenomenological decay model assumed in \cite{Chen2016prl} is equivalent to an implementation of the LRB protocol with direct measurements of the leakage subspace, and thus our work provides a theoretical justification of the assumptions and validity of this model.
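The fitting step of the protocol, \cref{eq:lrb-leak} together with \cref{eq:fit-p1,eq:fit-p2}, can be sketched as follows; this is our own simple grid-scan least-squares fit (a stand-in for a full nonlinear curve fit), not the fitting code used for the simulations below.

```python
import numpy as np

def fit_lrb_decay(m, p1, grid=4000):
    """Fit p1(m) = A + B * lam**m by scanning lam and solving for (A, B)
    with linear least squares, then extract the leakage and seepage
    rates via L1 = (1 - A)(1 - lam) and L2 = A (1 - lam)."""
    m = np.asarray(m, dtype=float)
    best_err, best = np.inf, None
    for lam in np.linspace(1e-6, 1.0 - 1e-6, grid):
        X = np.column_stack([np.ones_like(m), lam ** m])
        coef, *_ = np.linalg.lstsq(X, p1, rcond=None)
        err = np.sum((X @ coef - p1) ** 2)
        if err < best_err:
            best_err, best = err, (coef[0], coef[1], lam)
    A, B, lam = best
    L1 = (1.0 - A) * (1.0 - lam)  # leakage rate estimate
    L2 = A * (1.0 - lam)          # seepage rate estimate
    return A, B, lam, L1, L2
```

On noiseless synthetic data generated from the model with $L_1=0.01$ and $L_2=0.04$ (so $\lambda_1=0.95$, $A=0.8$, $B=0.2$), the fit recovers both rates to within the grid resolution.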
In the case where one has the ability to directly measure populations of the leakage subspace, LRB Steps 1, 2, 4, 5 and 7 are implemented as is, but Step 3 is replaced with a measurement set chosen to correspond to estimates of the population of the leakage subspace with respect to some basis \[ \{N_j = \ketbra{j}{j} : j = d_1,\hdots, d_1 + d_2-1\}. \] Following this, Step 6 is replaced with fitting to the model \begin{align} p_{{\bb 1}_2}(m) &= 1-p_{{\bb 1}_1}(m)= 1-A-B\, \lambda_1^m. \label{eq:lrb-leak-direct} \end{align} This is of course most beneficial when the dimension of the leakage subspace is smaller than that of the computational subspace; in particular, when the leakage subspace is one-dimensional this gives a method of quickly estimating the leakage rates. \subsection{Simulation of LRB for a Superconducting Qubit} \label{sec:lrb-sim} We now demonstrate the application of the LRB protocol with a simulation of a superconducting qubit, and note that this protocol has also been implemented experimentally in Ref.~\cite{McKay2016arx}. A superconducting qubit is a weakly anharmonic oscillator which, to a good approximation, can be described by truncating the system to a three-dimensional Hilbert space. In this case the qubit computational subspace is spanned by the states $\{\ket0,\ket1\}$, and the leakage subspace is a single level $\ket2$. The Hamiltonian for the system in the frame rotating at the $E_1-E_0$ energy separation is given by \begin{align} H(t) =& H_0 + H_c(t) \\ H_0 =& -\delta \ketbra{2}{2} \\ H_c(t) =& \frac12\Omega_x(t) H_x + \frac12\Omega_y(t) H_y \\ H_x =& \ketbra01 + \sqrt2\ketbra12 + h.c.\\ H_y =& -i\ketbra01 -i\sqrt{2}\ketbra12 + h.c. \end{align} where $\delta$ is the anharmonicity and $H_c(t)$ is the time-dependent control Hamiltonian.
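The truncated three-level operators above are small explicit matrices; the following numpy sketch (ours) constructs them, with the anharmonicity set to an illustrative value matching the simulation parameters quoted below.

```python
import numpy as np

# Truncated 3-level transmon model; delta is the anharmonicity in
# angular-frequency units (illustrative value: delta/2pi = -300 MHz).
delta = -2 * np.pi * 300e6

ket = np.eye(3)
def op(i, j):
    """Outer product |i><j| on the truncated space."""
    return np.outer(ket[i], ket[j])

H0 = -delta * op(2, 2)                            # drift term
Hx = op(0, 1) + np.sqrt(2) * op(1, 2)
Hx = Hx + Hx.conj().T                             # + h.c.
Hy = -1j * op(0, 1) - 1j * np.sqrt(2) * op(1, 2)
Hy = Hy + Hy.conj().T                             # + h.c.
```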
We will consider a family of $y$-only DRAG-corrected pulse shapes~\cite{Gambetta2011pra} where the $x$-drive component $\Omega_x(t)$ is a truncated Gaussian pulse, and the $y$-drive component is given by the scaled derivative \begin{align} \Omega_y(t) & = -\frac{\alpha}{\delta} \frac{d}{dt} \Omega_x(t). \label{eq:drag} \end{align} To include the effects of thermal relaxation, we model dissipation of the system as cavity relaxation to an equilibrium state with average photon number $\overline{n}$. This is described by the Markovian photon-loss dissipator~\cite{Wiseman2009book} \begin{align} \2D_c &= \kappa\left(1+\overline{n}\right) D[a] + \kappa\,\overline{n}\, D[a^\dagger] \label{eq:diss}\\ D[a]\rho &= a\rho a^\dagger -\frac12\{a^\dagger a,\rho\}, \end{align} where $\kappa$ is the relaxation rate of the system, $\overline{n}$ is the average thermal photon number, and $a,a^\dagger$ are the truncated annihilation and creation operators. To compute the superoperator for a given control pulse we solve the Lindblad master equation \begin{equation} \frac{d\rho(t)}{dt} = - i [H(t),\rho] + \2D_c \rho \end{equation} over the duration of the time-dependent control pulse. For our simulation we compare the LRB estimates of the leakage rate, seepage rate, and average gate infidelity for a single-qubit Clifford gate set to the theoretical values computed directly from the Clifford gate superoperators. The noisy Clifford gate set was generated by simulating calibrated $\pm \pi/2$ $X$ and $Y$ rotation pulses for a transmon qubit with anharmonicity $\delta/2\pi = -300$~MHz, and thermal relaxation modeled as cavity dissipation with relaxation rate $\kappa = 10$~kHz and an average photon number at thermal equilibrium of $\overline{n}=0.01$. To simulate leakage that occurs during measurement of the qubit, we allow the system to evolve under the dissipator in \cref{eq:diss} for a typical measurement acquisition time of $5~\mu$s following the final pulse.
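The right-hand side of the master equation with the thermal photon-loss dissipator of \cref{eq:diss} is straightforward to code; the sketch below is our own (the $\kappa$ and $\overline{n}$ values in the usage note are illustrative), and a production simulation would hand this to an ODE integrator or a dedicated master-equation solver.

```python
import numpy as np

def dissipator(L, rho):
    """D[L] rho = L rho L^dag - (1/2){L^dag L, rho}."""
    LdL = L.conj().T @ L
    return L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)

def lindblad_rhs(rho, H, a, kappa, nbar):
    """d rho / dt for the master equation with the thermal photon-loss
    dissipator D_c = kappa (1 + nbar) D[a] + kappa nbar D[a^dag]."""
    drho = -1j * (H @ rho - rho @ H)
    drho = drho + kappa * (1 + nbar) * dissipator(a, rho)
    drho = drho + kappa * nbar * dissipator(a.conj().T, rho)
    return drho
```

With $\overline{n}=0$ and $H=0$, population decays from $\ket1$ to $\ket0$ at rate $\kappa$, and the right-hand side is traceless, as required for trace preservation.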
The $X$ and $Y$ pulses were simulated as truncated Gaussian pulses of lengths ranging from 8~ns to 30~ns with a 4~ns spacing before and after each pulse. We compare four different pulse types: \emph{GAUSS}, which has DRAG parameter $\alpha=0$; \emph{DRAG-F}, where $\alpha$ is optimized to maximize average gate fidelity ($\alpha \approx 0.5$); \emph{DRAG-L}, where $\alpha$ is optimized to minimize the leakage rate ($\alpha\approx 1$); and \emph{DRAG-Z}, which uses the leakage-optimized $\alpha$ from DRAG-L but also uses optimized $Z$-frame changes to maximize average gate fidelity as introduced in Ref.~\cite{McKay2016arx}. The LRB protocol was simulated using Clifford sequence lengths of $m=1, 101,201,\hdots, 3001$, averaging over 100 random seeds for each $m$. The results are shown in \cref{fig:lrb-leakage}. To illustrate the error in the estimates, \cref{fig:lrb-seeds} shows the fitted values and 95\% confidence intervals of the fits versus the number of random Clifford sequences (seeds) averaged over for each length $m$, for the case of the 14~ns pulse used in \cref{fig:lrb-leakage}. Examples of simulated LRB data used for fitting in \cref{fig:lrb-leakage} are shown in \cref{fig:lrb-data}. \begin{figure} \caption{Average gate infidelity (top), leakage rate (middle) and seepage rate (bottom) averaged over the single-qubit Clifford gates versus gate time of the component pulses. Data points are the fitted estimates from simulation of the LRB protocol with 100 seeds for each length-$m$ sequence, with shaded regions representing the 95\% confidence interval of the fit. The solid lines are theoretical values computed directly from the superoperators used for simulation, with the theoretical $T_1$-limit given by pure thermal relaxation noise only.} \label{fig:lrb-leakage} \end{figure} \begin{figure} \caption{Convergence of the estimates of average gate infidelity (top), leakage rate (middle) and seepage rate (bottom) versus the number of random Clifford sequences used to obtain the average.
Data points are the fitted estimates from simulation of the 16~ns component pulse from \cref{fig:lrb-leakage}.} \label{fig:lrb-seeds} \end{figure} \begin{figure*} \caption{Example decay data for the simulated LRB experiment in \cref{fig:lrb-leakage}.} \label{fig:lrb-data} \end{figure*} The results in \cref{fig:lrb-leakage} show good agreement between the LRB fitted estimates and the directly computed theoretical values for leakage, seepage and infidelity. We note that since the leakage rate is over an order of magnitude smaller than the average gate infidelity, we are able to fit the fidelity RB curve to a single exponential as with standard RB. We see that DRAG-Z and DRAG-F show the greatest improvement in gate infidelity, approaching the $T_1$ limit for longer pulses, with DRAG-Z only having an advantage for pulse times shorter than 14~ns. The DRAG-L pulse shows a marginal improvement over the GAUSS pulse. This is because the dominant error is a leakage-induced phase error, rather than the leakage rate itself, and hence is not corrected by the DRAG-L calibration. When comparing leakage rates, we see that for all pulses the leakage rates are typically 1 to 2 orders of magnitude below the infidelity. Both the DRAG-L and DRAG-Z pulses saturate the $T_1$ limit on the leakage rate $L_1$ for pulses longer than 14~ns, while the DRAG-F and GAUSS pulses only approach this limit for much longer pulses. For the case of seepage, as was discussed in \cref{sec:cavity-leakage}, we find that it is completely dominated by the thermal seepage due to $T_1$ relaxation. These simulations show that in the case of single-qubit gates in superconducting qubits, leakage-induced errors are much more important for device optimization than the leakage rates themselves.
Leakage and seepage rates are limited by $T_1$ relaxation; however, if minimizing leakage errors significantly below the average gate infidelity is a requirement for fault-tolerant codes, for example, then saturating this leakage $T_1$ limit in combination with the infidelity limit may be a desirable goal for control design. In this case simple half-DRAG pulses alone are not sufficient, and one must use other calibration methods that both remove the leakage-induced phase error and suppress leakage rates, such as the DRAG-Z pulse presented in Ref.~\cite{McKay2016arx}, which uses DRAG in combination with $Z$-rotations to remove phase errors. Another method is to use DRAG in combination with a detuned drive term as done in Ref.~\cite{Chen2016prl}. Theoretical approaches to systematically design new control pulses to achieve this have also been recently proposed~\cite{Ribeiro2017prx}. \subsection{Assumptions of the LRB Protocol} \label{sec:lrb-assumptions} The derivation of the LRB decay models in \cref{eq:lrb-leak,eq:lrb-fid} requires two key assumptions in addition to the requirements of RB. The first assumption is that when twirling over Clifford gates on the computational subspace there is enough randomness induced on the leakage subspace to implement a unitary 1-design average, thus depolarizing the conditional state on the leakage subspace. In many systems of interest, for example superconducting qubits, leakage is typically weak and restricted to a 1-dimensional subspace that is off-resonant with the computational subspace levels. In this case the depolarizing requirement is trivially satisfied, as depolarization is implemented by the random phases accrued due to off-resonant evolution. If the leakage subspace is not sufficiently depolarized then non-Markovian effects may appear when the leakage dynamics are coherent, due to coherences between the leakage and computational subspaces.
We discuss coherent leakage in greater detail in \cref{sec:coherent-leakage}, and explore an example of when this twirling assumption breaks down in \cref{sec:unitary-dlm}. The key feature of insufficient twirling of the leakage subspace is oscillations in the leakage decay model. We can see this, for example, in the LRB data for the 8~ns pulse decay curves in \cref{fig:lrb-data}, which was the gate set with the largest leakage rate in our simulation. In the case where the leakage subspace is only partially depolarized, these oscillations are damped out but may still bias the fitted leakage rate. \section{Coherent Leakage Errors} \label{sec:coherent-leakage} The leakage metrics in \cref{sec:leakage-errors} all measure incoherent properties of a quantum system. In particular, the population of the leakage subspace used as the definition of state leakage does not inform us about coherences that may exist between states in the computational and leakage subspaces. Our restriction to these leakage metrics is a practical one, as estimation of coherent properties of leakage is considerably more difficult. In the ideal case the LRB protocol acts to project out coherent leakage terms, and direct estimation requires the ability to directly measure coherences between the leakage and computational subspaces. Nevertheless, we now present a theoretical framework for quantifying coherent leakage errors and show how these quantities may be bounded by the average leakage quantities from \cref{sec:leakage-errors}; later, in \cref{sec:unitary-leakage}, we explore a simple example of a unitary coherent leakage error. \subsection{Coherence of Leakage} \label{sec:coherence-of-leakage} Consider a leakage system with state space $\2X= \2X_1\oplus\2X_2$, with identity operators ${\bb 1}_1$ and ${\bb 1}_2$ which project a state onto the computational and leakage subspaces respectively.
We can define a subspace consisting of all states of the form $\rho = (1-p_l)\rho_1 + p_l \rho_2$, which we call the \emph{incoherent leakage subspace (ILS)}, as such states are incoherent mixtures of states in the computational and leakage subspaces. The projector onto the ILS is given by the CPTP map \begin{align} \2P_I &= \2I_1 + \2I_2 \\ \2P_I(\rho) &= {\bb 1}_1\rho{\bb 1}_1 + {\bb 1}_2\rho{\bb 1}_2 \label{eq:ils-proj} \end{align} where $\2I_1,\2I_2$ are the identity projection channels for the computational and leakage subspaces respectively. For a state $\rho$, the component orthogonal to the ILS is traceless and consists of \emph{only} the coherent superposition terms between states in the computational and leakage subspaces. We will call this subspace the \emph{coherent leakage subspace (CLS)}. The projector onto the CLS is given by \begin{align} \2P_C &= \2I - \2P_I \\ \2P_C(\rho) &= {\bb 1}_1\rho{\bb 1}_2 + {\bb 1}_2\rho{\bb 1}_1 \label{eq:cls-proj} \end{align} where $\2I$ is the identity channel on the full Hilbert space. We may use the CLS projection to define a measure quantifying the coherence of leakage in a density matrix. We define the \emph{coherence of leakage} of a state $\rho$ to be \begin{equation} CL(\rho) = \| \2P_C(\rho)\|_1 = \| \rho - \2P_I(\rho) \|_1. \label{eq:leak-c} \end{equation} While we could use any suitable matrix norm, the choice of the 1-norm gives an operational interpretation of $CL(\rho)$ via Helstrom's theorem. For example, consider a pure state $\rho= \ketbra{\psi}{\psi}$ consisting of a superposition of states in the leakage and computational subspaces. If the leakage of $\rho$ is given by \begin{equation} L(\rho)=\bra\psi{\bb 1}_2\ket\psi=p_l \end{equation} we may write \begin{equation} \ket{\psi}=\sqrt{1-p_l}\ket{\psi_1}+\sqrt{p_l}\ket{\psi_2} \end{equation} where $\ket{\psi_j}\in\2X_j$. Hence we have \begin{align} \| \2P_C(\ketbra\psi\psi)\|_1 &= 2\sqrt{p_l(1-p_l)}.
\label{eq:cl-bound} \end{align} The norm in \cref{eq:cl-bound} equals 1 when $p_l=1/2$. As one might expect, this shows that our ability to distinguish coherent leakage from purely incoherent leakage is maximized when there is an equal superposition of states in $\2X_1$ and $\2X_2$. If $p_l=0$ or $1$, so that the state is entirely in the computational or leakage subspace, then there can be no coherences between the leakage and computational subspaces and $\| \2P_C(\rho)\|_1=0$. While the trace norm of the CLS projection has a useful operational interpretation, it cannot be directly estimated from measurements on the computational subspace alone. We can, however, prove that the right-hand side of \cref{eq:cl-bound} upper bounds the coherence of leakage of an arbitrary state. \begin{proposition} \label{prop:coherence-bound} Consider a density matrix $\rho\in \2L(\2X_1\oplus\2X_2)$. The coherence of leakage is upper bounded by \begin{equation} CL(\rho) \le 2\sqrt{p_l(1-p_l)} \nonumber \end{equation} where $p_l = L(\rho)$ is the leakage of $\rho$. \begin{proof} See \cref{proof:coherence-bound}.\end{proof} \end{proposition} As shown in \cref{eq:cl-bound}, the bound in \cref{prop:coherence-bound} is saturated for a pure state $\rho$. \subsection{Coherent Leakage Rates} \label{sec:gate-leakage} The definitions of leakage and seepage rates introduced in \cref{sec:leakage-errors} quantify the rates at which a CPTP error map $\2E$ increases or decreases the amount of state leakage of a given input state. We now consider how coherent leakage errors can be introduced into a system. Consider an arbitrary leakage error channel described by a CPTP map $\2E$. We may use the projectors for the ILS and CLS from \cref{eq:ils-proj,eq:cls-proj} to decompose $\2E$ into four channel components: \begin{equation} \2E = \2P_I\2E\2P_I + \2P_I\2E \2P_C + \2P_C\2E\2P_I+ \2P_C\2E\2P_C.
\label{eq:chan-leak-decomp} \end{equation} The first term $\2E_I\equiv\2P_I\2E\2P_I$ is the trace-preserving component of $\2E$, which we call the \emph{incoherent leakage component} of $\2E$. The remaining three terms all result in a traceless output operator. The incoherent leakage component may itself be expressed as a $2\times 2$ block mapping between the leakage and computational subspaces: \begin{align} \2E_I &= \2I_1 \2E \2I_1 + \2I_2 \2E \2I_1 + \2I_1 \2E \2I_2 +\2I_2 \2E \2I_2. \label{eq:incoh-decomp} \end{align} The TP property of $\2E$ allows us to write \cref{eq:incoh-decomp} in terms of the leakage and seepage rates $L_1,L_2$ as \begin{align} \2E_I & = (1-L_1)\2E_{11} + L_1 \2E_{21}+L_2\2E_{12} + (1-L_2) \2E_{22} \label{eq:incoh-leak-comp} \end{align} where \begin{equation} \2E_{ij} = \frac{\2I_i \2E \2I_j}{\Tr[{\bb 1}_i \2E({\bb 1}_j/d_j)]}. \end{equation} \cref{eq:incoh-leak-comp} shows that the incoherent leakage component $\2E_I$ is the only relevant term for estimating leakage and seepage rates as defined in \cref{sec:leakage-errors}, and under ideal situations the LRB protocol from \cref{sec:leakage-rb} projects an arbitrary error channel $\2E$ onto this component. If one does not project out the terms that allow for coherent leakage then, in analogy with our definitions of leakage and seepage rates, we can define two quantities to measure how much the coherence of leakage of a state is increased by leakage and seepage. We define the coherent leakage rate and coherent seepage rate to be \begin{align} CL_1(\2E) =& \int d\psi_1 CL\big(\2E(\ketbra{\psi_1}{\psi_1})\big)\\ CL_2(\2E) =& \int d\psi_2 CL\big(\2E(\ketbra{\psi_2}{\psi_2})\big) \end{align} respectively, where $\ket{\psi_j}\in\2X_j$.
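These definitions are straightforward to check numerically. The following is a qutrit sketch (our own illustration, with an assumed state leakage $p_l=0.2$) implementing the ILS projection of \cref{eq:ils-proj} and the coherence of leakage, verifying $CL = 2\sqrt{p_l(1-p_l)}$ for a pure state.

```python
import numpy as np

P1 = np.diag([1.0, 1.0, 0.0])   # 1_1: computational subspace {|0>,|1>}
P2 = np.diag([0.0, 0.0, 1.0])   # 1_2: leakage subspace {|2>}

def P_I(rho):
    """Projection onto the incoherent leakage subspace, Eq. (eq:ils-proj)."""
    return P1 @ rho @ P1 + P2 @ rho @ P2

def CL(rho):
    """Coherence of leakage: trace norm of the CLS component of rho."""
    delta = rho - P_I(rho)   # Hermitian, so 1-norm = sum of |eigenvalues|
    return np.abs(np.linalg.eigvalsh(delta)).sum()

p_l = 0.2   # assumed leakage of the pure test state
psi = np.sqrt(1 - p_l) * np.array([0.0, 1.0, 0.0]) \
    + np.sqrt(p_l) * np.array([0.0, 0.0, 1.0])
rho = np.outer(psi, psi.conj())
```

The ILS projection of any state has zero coherence of leakage by construction, while the pure superposition saturates the bound of \cref{prop:coherence-bound}.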
While these expressions can be evaluated exactly for simple examples (such as the unitary leakage example in \cref{sec:unitary-leakage}), for more complicated examples we can always upper bound them by the leakage and seepage rates of the channel $\2E$, as we prove in \cref{prop:coh-leakage-bound}. \begin{proposition} \label{prop:coh-leakage-bound} The coherent leakage rate $CL_1(\2E)$ and coherent seepage rate $CL_2(\2E)$ of a CPTP map $\2E$ are upper bounded by \begin{equation} CL_j(\2E) \le 2\sqrt{L_j(1-L_j)} \nonumber \end{equation} where $L_1, L_2$ are the leakage and seepage rates of $\2E$. \begin{proof} See \cref{proof:coh-leakage-bound}.\end{proof} \end{proposition} We note that if one attempts to implement the LRB protocol, but does not successfully project onto the incoherent leakage channel component, the resulting decay model may exhibit oscillations due to the coherent leakage terms. We explore this with a simple example in \cref{sec:unitary-dlm}. \section{Leakage Error Models} \label{sec:leakage-models} We now present some example models for leakage errors in quantum gates. These will cover simple logical models for erasure and depolarizing leakage for circuit simulations; dissipative leakage for modeling thermal leakage errors in physical qubits; and unitary leakage for leakage errors induced by quantum control. \subsection{Logical Leakage Errors} \label{sec:logical-leakage} \subsubsection{Erasure Error} \label{sec:erasure-error} The simplest model for a leakage-type error is an \emph{erasure error} where with some probability $p_l$ we completely \emph{lose} our qubit. This could, for example, correspond to an atom escaping a trap, or a photon escaping from a cavity or waveguide.
To model this in the leakage framework outlined in \cref{sec:leakage-errors} we can represent an erasure channel with erasure probability $p_l$ as a CPTP map \begin{equation} \2E(\rho) = (1-p_l)\rho + p_l \ketbra{\psi_2}{\psi_2} \label{eq:loss} \end{equation} where $\ket{\psi_2} \in \2X_2$ is a state in the leakage subspace. In this case the leakage dynamics are entirely incoherent and we can think of the leakage subspace as a 1-dimensional system which keeps track of the lost population. The leakage and seepage rates for the erasure channel $\2E$ are given by $L_1=p_l$ and $L_2=0$. If we measure the leakage after $m$ applications of the channel to an initial state $\rho$ in the computational subspace, then the leakage of the output state is given by \begin{equation} p_l(m) = L(\2E^{\circ m}(\rho)) = 1 - (1-p_l)^m. \end{equation} This shows that the state leakage approaches 1 as $m$ increases, and in the limit of infinitely many applications the population is contained entirely in the leakage subspace. \subsubsection{Depolarizing Leakage Extension} \label{sec:depol-leakage} Erasure errors are not a particularly realistic model for many architectures, such as spins, atomic systems, or superconducting qubits, where relaxation or other processes allow the higher energy levels to continue to interact with the computational subspace energy levels. Erasure errors can be viewed as the zero-seepage limit of a more general leakage model which we call \emph{depolarizing leakage}. Let $\2E_1$ be an arbitrary CPTP map on the computational subspace.
We define the \emph{depolarizing leakage extension} (DLE) of $\2E_1$ to be the channel \begin{equation} \2E_L = (1-L_1) \2E_1 + L_1 \2D_{21} + L_2\2D_{12} + (1-L_2)\2D_{2} \label{def:leakage-ext} \end{equation} where $L_1, L_2$ are the leakage and seepage rates for the extension, $\2D_j \equiv \2D_{jj}$, and $\2D_{ij}$ is the completely depolarizing map from $\2L(\2X_j)$ to $\2L(\2X_i)$ given by \begin{equation} \2D_{ij}(\rho) = \Tr[{\bb 1}_j \rho]\, \frac{{\bb 1}_i}{d_i}, \quad i,j\in \{1,2\}. \end{equation} The DLE channel $\2E_L$ is a purely incoherent leakage error as it removes all information about the leakage dynamics \emph{except} for the leakage and seepage rates. The simplicity of this model could prove useful as a channel extension for including the effects of leakage in full characterization protocols such as gate set tomography~\cite{Merkel2013pra,Dehollain2016njp}. The leakage channel components $\2D_{12}$, $\2D_{21}$ act to remove any coherence of leakage in an initial state, and in combination with the completely depolarizing component $\2D_{2}$ on the leakage subspace ensure that there are no memory effects in the leakage subspace dynamics. This ensures an exponential model for the state leakage under repeated applications of the DLE. \begin{lemma}\label{prop:dle-leak} Let $\2E_L$ be a DLE and $\rho\in D(\2X)$ be an initial state. Then the state leakage accumulation model due to repeated actions of $\2E_L$ is given by \begin{equation*} L(\2E_L^m(\rho)) = \frac{L_1}{L_1+L_2} -\left(\frac{L_1}{L_1+L_2}-p_l\right)(1-L_1-L_2)^m \end{equation*} where $p_l = L(\rho)$ is the state leakage of the initial state. \begin{proof}See \cref{proof:dle-leak}. \end{proof} \end{lemma} Notice that the leakage accumulation model in \cref{prop:dle-leak} is independent of the reduced dynamics of the map $\2E_1$ on the computational subspace.
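The accumulation model of the lemma can be verified numerically. A sketch for a qutrit DLE (our own illustration), taking $\2E_1$ to be the identity on the computational block and assumed rates $L_1 = 0.1$, $L_2 = 0.05$; with $L_2 = 0$ the same recursion reduces to the erasure accumulation model $1-(1-L_1)^m$.

```python
import numpy as np

d1, d2 = 2, 1
P1 = np.diag([1.0, 1.0, 0.0])
P2 = np.diag([0.0, 0.0, 1.0])
L1, L2 = 0.10, 0.05   # assumed leakage and seepage rates

def dle(rho):
    """DLE of Eq. (def:leakage-ext) with E_1 = identity on the computational block."""
    t1 = np.trace(P1 @ rho).real   # computational population
    t2 = np.trace(P2 @ rho).real   # leakage population
    comp_block = P1 @ rho @ P1
    return (1 - L1) * comp_block + L1 * t1 * P2 / d2 \
        + L2 * t2 * P1 / d1 + (1 - L2) * t2 * P2 / d2

rho = np.diag([1.0, 0.0, 0.0]).astype(complex)   # initial state in X_1, p_l = 0
m = 5
for _ in range(m):
    rho = dle(rho)
leak = np.trace(P2 @ rho).real
model = L1 / (L1 + L2) * (1 - (1 - L1 - L2) ** m)   # lemma with p_l = 0
```

The iterated leakage matches the closed-form model exactly, and trace preservation holds at every step.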
It \emph{only} depends on the leakage and seepage rates of $\2E_L$. Furthermore, since the leakage in the DLE is depolarizing, and hence purely incoherent, the coherence of leakage of the output state is always zero. \subsubsection{Depolarizing Leakage Model} \label{sec:depol-leakage-model} An important subset of DLE channels is the case where the computational subspace component $\2E_1$ is itself a depolarizing channel. We call these types of channels \emph{depolarizing leakage models} (DLMs) and they are described by the channel \begin{equation} \begin{aligned} \2E_{D} =& (1-L_1) \Big( \mu_1 \2I_1 + (1-\mu_1) \2D_{1} \Big) \\&+ L_1 \2D_{21} + L_2\2D_{12} + (1-L_2)\2D_{2} \end{aligned} \label{def:dle} \end{equation} where $1-\mu_1$ is the depolarizing probability of the $\2E_1$ component. A DLM can be constructed from an arbitrary channel by performing a twirl over the computational subspace while depolarizing the leakage subspace. Consider two independently chosen unitaries $U_1\in \1C_1$, $V_2 \in \1P_2$, where $\1C_1$ is a set of gates on the computational subspace which forms a unitary 2-design, and $\1P_2$ is a set of gates on the leakage subspace which forms a unitary 1-design. The DLM projection of an arbitrary CPTP map $\2E$ is given by the independent average over both these sets: \begin{align} \2E_D =& \frac{1}{|\1C_1||\1P_2|^2} \sum_{\2U_1\in \1C_1}\sum_{\2U_2,\2V_2\in \1P_2}(\2U_1+\2V_2)\circ \2E\circ (\2U^\dagger_1+\2U_2)\\ =& \2W_1(\2E) + \2D_1\2E\2D_2 + \2D_2 \2E \2D_1 + \2D_2 \2E\2D_2 \label{eq:dlm-proj} \end{align} where $\2D_j$ is the completely depolarizing channel on $D(\2X_j)$, and $\2W_1$ is the twirling superchannel acting on the computational subspace as: \begin{align} \2W_1(\2E_{11}) &= \mu_1 \2I_1 + (1-\mu_1) \2D_{11} \\ \mu_1 &= \frac{d_1\overline{F}(\2E_{11})-1}{d_1-1}.
\label{eq:twirl-param} \end{align} The utility of twirling in this manner is that the resulting channel $\2E_D$ will have the same average gate fidelity, leakage rate, and seepage rate as the original channel $\2E$: \begin{align} \overline{F}(\2E_D) =& \overline{F}(\2E)\\ L_1(\2E_D) =& L_1(\2E)\\ L_2(\2E_D) =& L_2(\2E). \end{align} One advantage of considering a DLM instead of a general DLE is that we can derive a simple expression for the fidelity decay model for repeated applications of a DLM. This is the decay model used for LRB, albeit in the absence of SPAM errors: \begin{align} \overline{F}(\2E_D^m) &= \frac{1}{d_1}\Big[ 1-p_l(m) + (d_1-1)(1-L_1)^m\mu_1^m\Big] \label{prop:dlm-fid} \end{align} where $p_l(m)$ is the DLE leakage accumulation model given in \cref{prop:dle-leak}. While relatively simple, the DLM is of practical interest as it is the ideal model that LRB attempts to twirl an arbitrary error channel into. \subsection{Unitary Leakage Model} \label{sec:unitary-leakage} While the DLE and DLM are useful logical models for considering leakage errors in quantum gates, we can consider more specific error models based on the control Hamiltonian used to generate the quantum gate. Unitary leakage is generated by a Hamiltonian term which couples states in the computational subspace and leakage subspace. The simplest such case is generated by an exchange interaction between a state in the computational subspace $(\ket1)$ and a state in the leakage subspace $(\ket2)$. In this case the interaction Hamiltonian is given by \begin{equation} H = \frac12\Big(\ketbra12 + \ketbra21\Big), \end{equation} and the resulting unitary leakage error after evolving under $H$ for time $t$ is given by \begin{align} U = e^{-i t H} &= {\bb 1} + \Big(\cos(t/2)-1\Big)\Big(\ketbra11 + \ketbra22\Big) \nonumber\\& \qquad- i\sin(t/2) \Big(\ketbra12 + \ketbra21\Big).
\label{eq:unitary-leakage} \end{align} In this case the leakage rate of the channel, and the state leakage of an initial state $\rho_1$, as a function of evolution time are given by \begin{align} L(\rho_1(t)) =& \sin^2\left(\frac{t}{2}\right) \bra{1}\rho_1\ket{1}\\ L_j(\2U(t)) =& \frac{1}{d_j}\sin^2\left(\frac{t}{2}\right), \quad j=1,2. \label{eq:unitary-L12} \end{align} As mentioned in \cref{sec:leakage-errors}, the leakage and seepage rates satisfy $d_2L_2 = d_1 L_1$. We find that for this interaction Hamiltonian the state leakage oscillates as a function of time, in marked contrast to the DLE leakage accumulation model in \cref{prop:dle-leak}. This is because the leakage error generates coherent Rabi oscillations between a state in the computational subspace and one in the leakage subspace. Accordingly, this type of leakage also generates coherence of leakage if the initial state has some population in $\ket{1}$ or $\ket{2}$. For example, if the initial state is $\rho_1(0) = \ketbra11$ we have that the coherence of leakage at time $t$ is given by \begin{equation} CL( \rho_1(t))= | \sin(t) |. \end{equation} Furthermore, the coherent leakage rate at time $t$ is given by \begin{align} CL_1(\2U(t))=&\frac{2 \left| \sin \left(\frac{t}{2}\right)\right| \Big[2-\big(1+\cos (t)\big) \left| \cos \left(\frac{t}{2}\right)\right|\Big]}{3 \big(1-\cos (t)\big)} \end{align} and the upper bound for $CL_1$ from \cref{prop:coh-leakage-bound} is \begin{equation} CL_1(\2U(t)) \le \frac{2}{d_1}\left|\sin\left(\frac{t}{2}\right)\right|\sqrt{d_1-\sin^2\left(\frac{t}{2}\right)}. \end{equation} \subsubsection{DLM of a Unitary Leakage Error} \label{sec:unitary-dlm} We can also consider the perfect depolarizing projection of the unitary error onto a DLM after a fixed time $\Delta t$.
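The rates in \cref{eq:unitary-L12} can be checked directly by exponentiating the exchange Hamiltonian (a sketch of our own, with an assumed evolution time $t=0.7$):

```python
import numpy as np

d1, d2 = 2, 1
# Exchange Hamiltonian H = (|1><2| + |2><1|)/2 on a qutrit
H = 0.5 * np.array([[0, 0, 0],
                    [0, 0, 1],
                    [0, 1, 0]], dtype=complex)
t = 0.7   # assumed evolution time

w, V = np.linalg.eigh(H)                              # H is Hermitian
U = V @ np.diag(np.exp(-1j * t * w)) @ V.conj().T     # U = exp(-i t H)

P1 = np.diag([1.0, 1.0, 0.0])
P2 = np.diag([0.0, 0.0, 1.0])
L1 = np.trace(P2 @ U @ P1 @ U.conj().T).real / d1     # leakage rate
L2 = np.trace(P1 @ U @ P2 @ U.conj().T).real / d2     # seepage rate
```

Both rates reproduce $\sin^2(t/2)/d_j$ and satisfy the constraint $d_1 L_1 = d_2 L_2$.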
In this case, inserting the expressions for $L_j$ from \cref{eq:unitary-L12} into the leakage accumulation model in \cref{prop:dle-leak} gives \begin{align*} p_l(m) =& \frac{d_2}{d_1+d_2} - \frac{d_2}{d_1+d_2}\left[1-\frac{d_1+d_2}{d_2d_1}\sin^2\left(\frac{\Delta t}{2}\right)\right]^m. \end{align*} If we suppose we have a qutrit leakage model ($d_1=2,d_2=1$) then this reduces to \begin{equation} p_l(m) = \frac13 - \frac13\left[1-\frac32\sin^2\left(\frac{\Delta t}{2}\right)\right]^m. \label{eq:qutrit-u-leak} \end{equation} Note that after projection onto a DLM this will no longer generate oscillations. What happens if we have an imperfect projection due to not perfectly implementing the required twirling procedure in \cref{eq:dlm-proj}? In this case coherences between the leakage and computational subspaces may survive and be observed as memory effects, resulting in oscillations in the leakage accumulation model. We consider this by computing a twirl over the Clifford group on the computational subspace, where each perfect Clifford gate is extended to act as the identity on the leakage subspace, $U = U_1 \oplus {\bb 1}_2$. This is followed by an imperfect depolarizing of the leakage subspace by a depolarizing channel with depolarizing probability $p$, \begin{equation} \2D(p) = (1-p) \2I + p (\2I_1 + \2D_2), \end{equation} where $\2I$ is the identity channel on the full Hilbert space, and $\2I_1, \2D_2$ are the identity channel and completely depolarizing channel on the computational and leakage subspaces respectively. The leakage accumulation due to repeated applications of the resulting imperfectly twirled DLM is shown for different values of the depolarizing strength $p$ in \cref{fig:coherent-unitary}. We find that when there is no depolarizing of the leakage subspace, coherent oscillations survive for some time before being damped out.
For a depolarizing strength of 10\% these oscillations are damped out; however, we still observe faster leakage accumulation than in the completely depolarized ideal case, which would lead to an overestimate of the leakage rate $L_1 = \frac12 \sin^2(\Delta t/2)$ in \cref{eq:qutrit-u-leak}. \begin{figure} \caption{State leakage accumulation of the imperfect DLM projection of the unitary leakage error channel in \cref{eq:unitary-leakage}.} \label{fig:coherent-unitary} \end{figure} \subsubsection{Weak Unitary Leakage} Now let us consider a general unitary error model, which is more applicable to many experimental scenarios where unitary leakage is introduced into a system by imperfections in a control Hamiltonian. In general the leakage due to a unitary term is given by \begin{equation} U(t) = \2T\exp\left(-i \int_0^t dt_1 H(t_1)\right) \end{equation} where $\2T$ is the time-ordering operator, and $H(t)$ is the time-dependent Hamiltonian the system evolves under. The leakage and seepage rates of this interaction are given by \begin{align} L_j(\2U(t)) =& \frac{1}{d_j}\Tr[{\bb 1}_2 U(t) {\bb 1}_1 U(t)^\dagger],\quad j=1,2. \end{align} By expanding $U(t)$ into a Dyson series, we may obtain a perturbative expansion for the leakage and seepage rates due to the unitary leakage error. In doing so we find that the second-order Dyson term is the leading-order contribution to the leakage and seepage rates, and is given by \begin{equation} L_j(\2U(t)) \approx \frac{t^2}{d_j} \Tr[{\bb 1}_2 \overline{H}(t) {\bb 1}_1 \overline{H}(t)] \end{equation} where $\overline{H}(t) = \frac{1}{t}\int_0^t dt_1 H(t_1)$ is the first-order average Hamiltonian of $H(t)$. Hence we can estimate the leakage rates due to a given control pulse by computing the average Hamiltonian over the pulse shape. To illustrate this, consider the transmon qubit system as used in \cref{sec:lrb-sim}. The first-order leakage rate contribution from an $X_{\pi/2}$ control pulse is given in \cref{fig:weak-unitary}.
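As a simple check of the second-order estimate, one can take a time-independent exchange Hamiltonian as a stand-in for the average Hamiltonian $\overline{H}$ (an assumption made purely for illustration); the exact rate is then $\sin^2(t/2)/d_1$ from \cref{eq:unitary-L12}, which the perturbative expression reproduces at short times.

```python
import numpy as np

d1 = 2
# Stand-in for Hbar(t): the exchange Hamiltonian (|1><2| + |2><1|)/2
Hbar = 0.5 * np.array([[0, 0, 0],
                       [0, 0, 1],
                       [0, 1, 0]], dtype=complex)
P1 = np.diag([1.0, 1.0, 0.0])
P2 = np.diag([0.0, 0.0, 1.0])

t = 0.05   # short-time regime where the second-order Dyson term dominates
L1_pert = (t ** 2 / d1) * np.trace(P2 @ Hbar @ P1 @ Hbar).real
L1_exact = np.sin(t / 2) ** 2 / d1
```

For $t=0.05$ the perturbative and exact rates agree to better than $0.1\%$.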
Here we compare the same DRAG pulse shapes used in \cref{eq:drag} with DRAG parameters $\alpha=0$ (Gaussian), $\alpha=0.5$ (Drag-F), and $\alpha=1$ (Drag-L) for a transmon with anharmonicity $\delta/2\pi = -300$~MHz. For the Drag-L pulse the first-order unitary leakage is 0. \begin{figure} \caption{First-order unitary leakage rate for an $X_{\pi/2}$ rotation pulse for a transmon qubit with anharmonicity $\delta/2\pi = -300$~MHz. The Drag-L pulse is not visible on the log plot as its first-order leakage rate is 0.} \label{fig:weak-unitary} \end{figure} \subsection{Lindblad Leakage Models} \label{sec:lindblad-leakage} In the open quantum systems framework, CPTP maps are generated by exponentiation of a Lindblad generator $\2E = e^{t(\2H+\2D)}$, where $\2H$ and $\2D$ are the generators of purely unitary and purely dissipative evolution respectively: \begin{align} \2H(\rho) =& -i [H,\rho] \label{eq:lind-unitary}\\ \2D(\rho) =& \sum_k \gamma_k \left( A_k\rho A_k^\dagger -\frac12\{A_k^\dagger A_k,\rho\} \right). \label{eq:lind-diss} \end{align} In a real experiment leakage will not generally be purely dissipative or purely unitary, but a combination of both; however, for calibration and gate optimization it is useful to estimate the relative contributions of the dissipative and unitary parts individually. In practice, the dissipative leakage contribution will be an \emph{always-on} process that is due to thermal relaxation or other incoherent interactions, and is typically beyond the experimenter's direct control. The unitary contribution, however, will typically be due to control errors which may be optimized, or interaction terms with neighboring systems which may be decoupled via control. Estimation of these two quantities may be achieved by considering a short-time expansion of $\2E$.
This is useful as, for many commonly used models of dissipation, the leakage contributions from unitary and dissipative processes are additive up to second order, which we prove in the following lemma. \begin{lemma} \label{prop:lindblad-leakage} Let $\2E$ be a CPTP channel with Lindblad generators $\2E=\exp\left(\Delta t(\2H +\2D)\right)$, where the dissipation operators $A_k$ are all raising or lowering operators: \[ A_{\pm k} = \sum_j \alpha_j \ketbra{j\pm k}{j}. \] To second order in $\Delta t$ we have that the leakage and seepage rates for $\2E$ are given by \begin{equation} L_j(\2E) = L_j(\2E_{\text{uni}}) +L_j(\2E_{\text{diss}}), \quad j=1,2 \end{equation} where $\2E_{\text{uni}}$ and $\2E_{\text{diss}}$ are the purely unitary and purely dissipative CPTP maps generated by $\2H$ and $\2D$ respectively. \begin{proof} See \cref{proof:lindblad-leakage}. \end{proof} \end{lemma} One could use \cref{prop:lindblad-leakage} as a coarse way to estimate the contribution of a unitary leakage error from an LRB experiment in the presence of an always-on thermal leakage error, for which the relevant dissipation parameters can generally be measured independently. If we return to the LRB simulation in \cref{sec:lrb-sim}, for example, dissipative effects were due to thermal $T_1$ relaxation. Using estimated values for $T_1$ and the average photon number of the system, we may compute the theoretical dissipative contribution under an appropriate relaxation model and subtract it from the LRB error estimates to obtain a coarse estimate for the unitary contribution. We now give some explicit examples of phenomenological dissipation models which may generate leakage. \subsubsection{Simple Dissipative Leakage} \label{sec:diss-leakage} Consider the CPTP map generated by a purely dissipative leakage model that couples a single state in the computational subspace ($\ket1$) with a single state in the leakage subspace ($\ket2$).
In this case our Lindblad dissipator consists of two operator terms \begin{align} A_{21} =& \ketbra21,\quad& A_{12} =& \ketbra12 \end{align} with corresponding rates $\gamma_1,\gamma_2$. The first term generates leakage from the $\ket1$ state to the $\ket2$ state at a rate $\gamma_1$, while the second term generates seepage from $\ket2$ to $\ket1$ at a rate $\gamma_2$. For this simple case the resulting error channel, given by the superoperator $\2S(t) = e^{t \2D}$, can be evaluated analytically. The leakage and seepage rates for this map as a function of time are given by \begin{align} L_1(\2E) =& \frac{\gamma_1}{d_1(\gamma_1+\gamma_2)} \left(1-e^{-t(\gamma_1+\gamma_2)}\right) \\ L_2(\2E) =& \frac{\gamma_2}{d_2(\gamma_1+\gamma_2)} \left(1-e^{-t(\gamma_1+\gamma_2)}\right). \end{align} If we have an initial state $\rho$, then the state leakage as a function of time is given by \begin{equation} \begin{aligned} L(\rho(t)) =& L\big(\rho(0)\big)+\\& \frac{\gamma_1\bra{1}\rho\ket{1} - \gamma_2\bra{2}\rho\ket{2}}{ \gamma_1+\gamma_2}\left[1- e^{-t(\gamma_1+\gamma_2)}\right]. \end{aligned} \end{equation} We can also consider more general dissipation models; however, in order to compute the leakage rates in these cases we will generally have to consider a short-time expansion of the dissipative superoperator. \subsubsection{Thermal Relaxation} \label{sec:cavity-leakage} Let us now consider a physically motivated example of leakage due to thermal relaxation in a harmonic, or weakly anharmonic, oscillator.
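The analytic rates above can be reproduced by integrating the two-operator dissipator numerically (a sketch of our own, with assumed rates $\gamma_1 = 0.3$, $\gamma_2 = 0.1$ in arbitrary units):

```python
import numpy as np

d1 = 2
g1, g2 = 0.3, 0.1   # assumed rates gamma_1, gamma_2
A21 = np.zeros((3, 3), dtype=complex); A21[2, 1] = 1.0   # |2><1|
A12 = A21.conj().T                                       # |1><2|
P1 = np.diag([1.0, 1.0, 0.0])
P2 = np.diag([0.0, 0.0, 1.0])

def D(A, rho):
    """Lindblad dissipator D[A]rho."""
    return A @ rho @ A.conj().T - 0.5 * (A.conj().T @ A @ rho + rho @ A.conj().T @ A)

# Euler-integrate rho' = g1 D[A21]rho + g2 D[A12]rho from the input 1_1/d_1
rho = (P1 / d1).astype(complex)
t, steps = 2.0, 4000
dt = t / steps
for _ in range(steps):
    rho = rho + dt * (g1 * D(A21, rho) + g2 * D(A12, rho))

L1_num = np.trace(P2 @ rho).real
L1_exact = g1 / (d1 * (g1 + g2)) * (1 - np.exp(-t * (g1 + g2)))
```

The integrated leakage agrees with the closed-form $L_1(\2E)$ to within the Euler discretization error.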
In such a system thermal relaxation to an equilibrium state is described by the Markovian photon-loss dissipator~\cite{Wiseman2009book} \begin{align} \2D_c &= \gamma_\downarrow D[a] + \gamma_\uparrow D[a^\dagger] \\ &= \kappa(1+\overline{n}) D[a] +\kappa \overline{n} D[a^\dagger] \label{eq:cavity-diss} \end{align} where we have defined the dissipation rate $\kappa = \gamma_\downarrow - \gamma_\uparrow \ge 0$, and the \emph{average photon number} of the oscillator \begin{equation} \overline{n} = \frac{\gamma_\uparrow}{\gamma_\downarrow-\gamma_\uparrow}. \end{equation} The state leakage of the cavity at thermal equilibrium is given by \begin{align} L(\rho_{eq})=& \left(\frac{\gamma_\uparrow}{\gamma_\downarrow}\right)^2 = \left(\frac{\overline{n}}{1+\overline{n}}\right)^2. \end{align} If we consider the error channel for evolution over a time $\Delta t$ such that $\kappa \Delta t \ll 1$, then to second order we find that the leakage and seepage rates are given by \begin{align} L_1(\2E) \approx& \kappa \overline{n}\,\Delta t\left[1 - (3+4\overline{n}) \kappa \Delta t\right] \\ L_2(\2E) \approx& \frac{2(1+\overline{n})\kappa \Delta t}{d_2}\left[1+(1-4\overline{n})\kappa \Delta t \right]. \end{align} Hence in the low photon limit ($\overline{n}\ll1$) we have that $L_2(\2E) \gg L_1(\2E)$. This is illustrated in \cref{fig:cavity-leakage}, where we plot the leakage and seepage rates vs equilibrium photon number for $\overline{n}=0$ to $0.1$ for values of $\kappa \Delta t = 10^{-4}, 10^{-3}, 10^{-2}$. \begin{figure} \caption{Plot of leakage (solid) and seepage (dashed) rates vs average photon number at thermal equilibrium for $T_1$ relaxation of a qubit encoded in the lowest two energy levels of a cavity. The leakage rate is much less than the seepage rate $(L_1 \ll L_2)$ across the plotted photon number range.} \label{fig:cavity-leakage} \end{figure} Note that for a true cavity $d_2 = \infty$, so we must truncate the dimension of the cavity to some reasonable number of excitations.
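As a numerical sanity check, the leading-order expressions above can be reproduced by exponentiating the truncated dissipator directly. The following sketch (assuming NumPy/SciPy, a three-level truncation of the cavity, and illustrative values $\overline{n}=0.05$, $\kappa\Delta t = 10^{-3}$) builds the photon-loss dissipator in the column-stacking convention and compares the resulting rates to the first-order terms:

```python
import numpy as np
from scipy.linalg import expm

d = 3                        # truncate the oscillator to three levels (d2 = 1)
n_bar, kdt = 0.05, 1e-3      # illustrative average photon number and kappa*dt
a = np.diag(np.sqrt(np.arange(1, d)), k=1)   # truncated lowering operator

def dissipator(A):
    """Column-stacking superoperator of D[A](rho) = A rho A^† - {A^†A, rho}/2."""
    AdA = A.conj().T @ A
    I = np.eye(d)
    return np.kron(A.conj(), A) - 0.5 * (np.kron(I, AdA) + np.kron(AdA.T, I))

# Thermal dissipator over a time step Delta t (kappa*dt absorbed into rates).
D = kdt * ((1 + n_bar) * dissipator(a) + n_bar * dissipator(a.conj().T))
S = expm(D)

P1, P2 = np.diag([1., 1., 0.]), np.diag([0., 0., 1.])
d1, d2 = 2, 1
vec = lambda r: r.reshape(-1, order='F')          # column stacking
unvec = lambda v: v.reshape(d, d, order='F')

L1 = np.trace(P2 @ unvec(S @ vec(P1 / d1))).real  # leakage rate
L2 = np.trace(P1 @ unvec(S @ vec(P2 / d2))).real  # seepage rate

# Leading-order predictions: L1 ~ kappa*n_bar*dt, L2 ~ 2(1+n_bar)*kappa*dt/d2
assert abs(L1 / (kdt * n_bar) - 1) < 0.01
assert abs(L2 / (2 * (1 + n_bar) * kdt / d2) - 1) < 0.01
```

Agreement to better than one percent is expected here, since the neglected second-order corrections are themselves of order $\kappa\Delta t$.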
In the low $\overline{n}$ limit we may truncate to a qutrit model ($d_2=1$). The rates shown in \cref{fig:cavity-leakage} are comparable to those expected for superconducting transmon qubits, which typically have average photon numbers in the range of $\overline{n}\approx 10^{-2}$ to $10^{-1}$. \subsection{Multiple Leakage Subspaces} \label{sec:multi-leakage} The leakage errors described in \cref{sec:leakage-errors} report an average rate for leakage between the computational subspace and the \emph{entire} leakage subspace. If the computational subspace corresponds to a composite system, for example an $n$-qubit system, there may be several different leakage rates to different levels in the leakage subspace, corresponding to leakage of each individual subsystem or to cross-system leakage across components. For example, in superconducting qubit systems there may be multiple different leakage rates due to frequency crowding in the off-resonant leakage levels and cross-talk in system control~\cite{Sheldon2016pra2,Takita2016prl}. In such situations useful characterization may require a more fine-grained approach. For composite systems the definitions for state leakage, leakage rates, and seepage rates defined in \cref{sec:leakage-errors} naturally generalize to describe leakage to multiple leakage subspaces by simply decomposing the leakage subspace into direct sum subspaces \begin{equation} \2X_2 = \2Y_{1} \oplus \hdots \oplus \2Y_{m}.
\end{equation} Using this decomposition we may define $m$ different measures of state leakage, leakage rates, and seepage rates, which are given by replacing the projector ${\bb 1}_2$ with the projector ${\bb 1}_{\2Y_j}$ onto $\2Y_j$ in \cref{eq:state-leakage,eq:leakage-rate}: \begin{align} L_{\2Y_j}(\rho) =& \Tr[{\bb 1}_{\2Y_j}\rho] \\ L_{1_{\2Y_j}}(\2E) =& \Tr\left[{\bb 1}_{\2Y_j} \2E\left(\frac{{\bb 1}_1}{d_1}\right)\right]\\ L_{2_{\2Y_j}}(\2E) =& \Tr\left[{\bb 1}_1 \2E\left(\frac{{\bb 1}_{\2Y_j}}{d_{\2Y_j}}\right)\right] \end{align} where $d_{\2Y_j}$ is the dimension of $\2Y_j$, and $j=1,\hdots, m$. The definitions of the total state leakage, leakage rate, and seepage rate for the full leakage subspace may be expressed in terms of these multi-rate definitions as \begin{align} L(\rho) &= \sum_{j=1}^m L_{\2Y_j}(\rho) \\ L_1(\2E) &= \sum_{j=1}^m L_{1_{\2Y_j}}(\2E) \\ L_2(\2E) &= \sum_{j=1}^m \frac{d_{\2Y_j}}{d_2}\,L_{2_{\2Y_j}}(\2E). \end{align} Let us consider a simple example of 2-qubit leakage where we define three leakage subspaces corresponding to only the first qubit leaking, only the second qubit leaking, and both qubits leaking. In this case the projector onto the computational subspace is given by the tensor product of the projectors for the computational subspaces of each of the qubits: ${\bb 1}_{\2X_1} = {\bb 1}_1\otimes{\bb 1}_1$. The projectors onto the leakage subspaces are then given by \begin{align} {\bb 1}_{\2Y_1} = {\bb 1}_2\otimes{\bb 1}_1 \\ {\bb 1}_{\2Y_2} = {\bb 1}_1\otimes{\bb 1}_2 \\ {\bb 1}_{\2Y_3} = {\bb 1}_2\otimes{\bb 1}_2 \end{align} and hence the projector onto the full leakage subspace is ${\bb 1}_{\2X_2} = {\bb 1}\otimes{\bb 1} - {\bb 1}_{\2X_1}$. In the most general case we could have that each of the $\2Y_j$ corresponds to a 1-dimensional subspace spanned by one of the leakage basis states of $\2X_2$. Note that under this assumption we are ignoring direct interactions between individual leakage subspaces --- any interacting subspaces in this sense should be considered as a single subspace.
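By linearity these decompositions are exact: the total leakage rate is the plain sum of the per-subspace leakage rates, while the total seepage rate is the dimension-weighted sum with weights $d_{\2Y_j}/d_2$. A minimal numerical sketch for the two-qutrit example (assuming NumPy; the random unitary simply stands in for an arbitrary noisy gate):

```python
import numpy as np

rng = np.random.default_rng(7)
# Two qutrits: each has computational levels {|0>,|1>} and one leakage level |2>.
p1 = np.diag([1., 1., 0.])   # single-qutrit computational projector
p2 = np.diag([0., 0., 1.])   # single-qutrit leakage projector

P1 = np.kron(p1, p1)                                   # joint computational subspace
PY = [np.kron(p2, p1), np.kron(p1, p2), np.kron(p2, p2)]
P2 = PY[0] + PY[1] + PY[2]                             # full leakage subspace
d1, d2 = 4, 5
dY = [int(round(np.trace(P).real)) for P in PY]        # subspace dimensions [2, 2, 1]

# A random unitary channel as a stand-in for a noisy gate.
G = rng.normal(size=(9, 9)) + 1j * rng.normal(size=(9, 9))
U, _ = np.linalg.qr(G)
E = lambda rho: U @ rho @ U.conj().T

L1  = np.trace(P2 @ E(P1 / d1)).real                   # total leakage rate
L2  = np.trace(P1 @ E(P2 / d2)).real                   # total seepage rate
L1Y = [np.trace(PY[j] @ E(P1 / d1)).real for j in range(3)]
L2Y = [np.trace(P1 @ E(PY[j] / dY[j])).real for j in range(3)]

assert np.isclose(L1, sum(L1Y))
assert np.isclose(L2, sum(dY[j] / d2 * L2Y[j] for j in range(3)))
```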
An important direction for future research is to develop characterization methods for these multi-qubit systems with multiple leakage subspaces. \section{Conclusion} \label{sec:conc} We have presented a framework for the quantification of leakage errors, both coherent and incoherent, in quantum systems, and a method for characterizing the average properties of leakage errors in a quantum gate set by leakage randomized benchmarking. These tools provide a means for evaluation of new methods for suppressing and correcting leakage errors in quantum systems. An important point in characterizing leakage errors in quantum gates is that two rates are required to specify average leakage dynamics: the leakage rate out of the computational subspace, and the seepage rate back into the computational subspace. We illustrated this with several examples demonstrating leakage mechanisms due to control errors and thermal relaxation processes. Further, leakage rates themselves are not necessarily predictive of the average performance of quantum gates as specified by the average gate fidelity. This is because leakage dynamics can induce logical errors within the computational subspace. This has been well documented, and typically results in a phase error due to population briefly spending time in an off-resonant leakage level before returning to the computational subspace by the end of a control pulse. This can be seen explicitly in our simulation of leakage randomized benchmarking for a superconducting qubit, where we contrasted Gaussian control pulses with two types of DRAG pulses designed to correct the phase error and to suppress leakage rates; this effect has also been studied in recent experimental works \cite{Chen2016prl,McKay2016arx}. The LRB protocol we presented provides a theoretical justification for the technique first used in Ref.
\cite{Chen2016prl}, and in particular we find that the key assumption for the validity of the decay model is that twirling the computational subspace also acts to completely depolarize the leakage subspace. If this assumption breaks down, then in the presence of coherent leakage errors, such as those from unitary dynamics, non-Markovian effects may manifest as oscillations in the LRB decay curves about the ideal exponential model. In the case of the simple example we considered, even though these oscillations are quickly damped out by partial depolarization, they could lead to an overestimate of the leakage and seepage rates of the system. While we focused on RB-based characterization methods in the present article, we comment briefly on full tomographic methods for gate characterization. Under the assumption that one cannot directly measure leakage levels, one cannot fully characterize average leakage dynamics by quantum process tomography. This is because at best one can only reconstruct the computational subspace channel component, which in the presence of leakage is not a trace-preserving map. From this channel component the leakage rate could in principle be determined by the sub-normalized trace of the reconstructed channel, but it is not obvious how to estimate the seepage rates. To estimate leakage and seepage in a tomographic setting one would have to use a sequence of gates to amplify a leakage decay model. This could be done using recent additions to the gate-set tomography protocol which use germ sequences of increasing length to amplify errors~\cite{Dehollain2016njp}. By combining gate-set tomography with the depolarizing leakage extension channel we developed in \cref{sec:depol-leakage} one could attempt to reconstruct the effective channel on the computational subspace along with the leakage and seepage rates in the case where GST seeds also act to implement a depolarizing channel on the leakage subspace. \acknowledgements We thank D.
McKay for helpful discussion and suggestions. This work was supported by ARO under contract W911NF-14-1-0124. \section*{Appendices} \begin{appendix} \section{Derivation of the LRB Decay Model} \label{sec:lrb-decay-model} Consider a leakage system with state space $\2X= \2X_1\oplus\2X_2$, and a unitary operator $U_1\in L(\2X_1)$ that acts on the computational subspace $\2X_1$. To extend $U_1$ to a unitary on $\2X$ we may add a unitary $U_2\in L(\2X_2)$ such that $U_{12}\equiv U_1\oplus U_2 $ is unitary. Ideally, up to a phase, this target extension will be the identity operator ($U_2={\bb 1}_2$), so that our intended interaction on $\2X_1$ acts trivially on $\2X_2$. Let $\2U_{12}$ represent the quantum channel for unitary evolution $\2U_{12}(\rho)\equiv U_{12}\rho U_{12}^\dagger$. The superoperator representation of $\2U_{12}$ in the column-stacking convention is given by \[ \2S_{\2U_{12}} = (U_1\oplus U_2)^*\otimes(U_1 \oplus U_2) = \2S_{\2U_1} +\2S_{\2U_2} + \2S_{\delta_{12}}, \] where $\2S_{\2U_1} = (U_1\oplus 0)^*\otimes (U_1\oplus 0)$ is a superoperator with support on $L(\2X_1)$ that acts unitarily on that subspace, and similarly for $\2S_{\2U_2}$ on $L(\2X_2)$, and \begin{equation} \2S_{\delta_{12}}= (U_1\oplus 0)^*\otimes(0\oplus U_2) + (0\oplus U_2)^*\otimes (U_1\oplus 0) \label{eq:delta-coh} \end{equation} is a superoperator component which acts on the CLS defined by the projector in \cref{eq:cls-proj} of the main text. Let $\2C_k$ be the noisy implementation of the extension $\2U_{12,k}$ of the Clifford gate $\2U_{1,k}$ on the computational subspace: \begin{align} \2C_k =& \2E \circ \2U_{12,k}\\ \2S_{\2C_k} =& \2S_{\2E} \circ (\2S_{\2U_{1,k}} + \2S_{\2U_{2,k}}+ \2S_{\delta_{12}}). \end{align} \begin{assumption} Here we have assumed a zeroth order approximation where the noise channel is approximately constant as a function of time and for each Clifford gate $\2C_k$.
\end{assumption} As with the case of standard randomized benchmarking, the decay model derived in this limit should be valid even for slightly gate dependent noise~\cite{Magesan2012pra}. Consider now the randomized benchmarking protocol of choosing a sequence \[ i_m = \2C_m \circ \hdots \circ \2C_1 \] of $m$ Clifford gates, where the order of composition is such that $\2C_1$ is applied to the system first. The $(m+1)$-th gate is chosen to be the usual recovery operation \[ \2C_{m+1} = \2E\circ \2R_{i_m} \] where on the computational subspace we have that the recovery operator satisfies \[ \2R_{1,i_m} = \2U_{1,1}^\dagger \circ \hdots \circ \2U_{1,m}^\dagger. \] The full RB sequence is then given by $i_m^\prime = \2C_{m+1}\circ i_m$. \begin{assumption} To further evaluate this sequence and extract the average leakage and seepage rates of the noisy gate set, we want to project $\2U_{12}$ onto the ILS defined in \cref{eq:ils-proj} so that $\2S_{\delta_{12}} \approx 0$, and \begin{equation} \2S_{\2U_{12,k}} \approx \2S_{\2U_{1,k}} +\2S_{\2U_{2,k}}. \label{eq:u-coherence-cancel} \end{equation} \end{assumption} One method of doing this is to consider averaging over a local phase on one of the subspaces, as was used in previous work \cite{Epstein2014pra,Chasseur2015pra,Wallman2016njp}. If one can implement the superoperator for $U_{-12} = (-U_1) \oplus U_2$, with a negative local phase on $U_1$ (or equivalently on $U_2$), then the average of the two superoperators is given by \begin{equation} \2S_{\overline{\2U}_{12}} \equiv \frac12\left(\2S_{\2U_{12}} +\2S_{\2U_{-12}} \right) = \2S_{\2U_1} +\2S_{\2U_2}. \end{equation} We note that it may be difficult to experimentally implement this local phase difference without control of the leakage subspace; however, in practice it appears to make little difference for the weak leakage rates demonstrated in \cite{Chen2016prl,McKay2016arx}.
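The cancellation of the coherent cross term under this local-phase average is an exact algebraic identity, which can be checked directly in the superoperator representation (a minimal NumPy sketch with Haar-random blocks $U_1$, $U_2$; the subspace dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
def rand_unitary(n):
    # QR of a complex Gaussian matrix yields a random unitary.
    Q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    return Q

d1, d2 = 2, 2
U1, U2 = rand_unitary(d1), rand_unitary(d2)
Z12, Z21 = np.zeros((d1, d2)), np.zeros((d2, d1))
emb1 = np.block([[U1, Z12], [Z21, np.zeros((d2, d2))]])   # U1 ⊕ 0
emb2 = np.block([[np.zeros((d1, d1)), Z12], [Z21, U2]])   # 0 ⊕ U2

S = lambda V: np.kron(V.conj(), V)   # column-stacking superoperator of V·V^†

S_plus  = S(emb1 + emb2)             # superoperator of U_12  = U1 ⊕ U2
S_minus = S(-emb1 + emb2)            # superoperator of U_-12 = (-U1) ⊕ U2

# Averaging the two local-phase variants cancels the coherent term S_delta.
assert np.allclose(0.5 * (S_plus + S_minus), S(emb1) + S(emb2))
```

The identity holds for any choice of blocks, since the sign flip on $U_1$ negates exactly the two cross terms in \cref{eq:delta-coh}.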
Using the assumption in \cref{eq:u-coherence-cancel} the superoperator for the RB sequence $i_m^\prime$ is given by \begin{align} \2S_{i_m^\prime} &= \2S_{\2E} (\2S_{\2R_{1,i_m}} + \2S_{\2U_{2,m+1}}) \hdots \2S_{\2E} (\2S_{\2U_{1,1}} + \2S_{\2U_{2,1}}). \end{align} The survival probability for an initial state $\rho$ and measurement of an operator $M$ for a gate sequence $i_m^\prime$ is given by \begin{align} P(1 | i_m^\prime, M,\rho) &= \dbra{M}\2S_{i_m^\prime}\dket{\rho} = \Tr[M^\dagger \,\2S_{i_m^\prime}(\rho)]. \label{eq:rb-survival-seq} \end{align} For a given length $m$ sequence of Clifford gates, the target decay model for randomized benchmarking is given by the average of \cref{eq:rb-survival-seq} over all such sequences $i_m^\prime$: \begin{align} P(1 | m, M,\rho) &\equiv \1E_{i_m^\prime}[P(1 | i_m^\prime, M,\rho) ] \nonumber\\ &= \dbra{M}\1E_{i_m^\prime}[\2 S_{i_m^\prime}]\dket{\rho}. \label{app-eq:survival} \end{align} To evaluate \cref{app-eq:survival} we may express $i_m^\prime$ in terms of unitaries $\2V_{j,k} = \2U_{j,1}^\dagger \hdots \2U_{j,k}^\dagger$ so that \begin{align} \2S_{i_m^\prime} &= \2S_{\2E} (\2S_{\2I_1} + \2S^\dagger_{\2V_{2,m+1}}) \times\nonumber\\&\qquad\prod_{k=1}^m \Big[ (\2S_{\2V_{1,k}} + \2S_{\2V_{2,k}})\2S_{\2E} (\2S_{\2V_{1,k}}^\dagger + \2S_{\2V_{2,k}}^\dagger) \Big]. \label{eq:s-rb-seq} \end{align} \begin{assumption} To proceed we must make another assumption: that we may \emph{independently average} over the sets $\{U_{1,k}\}$ and $\{U_{2,k}\}$, so that \begin{align} \1E_{i_m^\prime}[\2 S_{i_m^\prime}] &= \2S_{\2E} \left(\2S_{\2I_1} + \1E_{i_m^\prime} \left[ \2S^\dagger_{\2V_{2,m+1}} \right] \right) \label{eq:s-rb-average} \cdot\\&\quad \left( \1E_{i_m^\prime} \left[ (\2S_{\2V_{1,k}} + \2S_{\2V_{2,k}})\2S_{\2E} (\2S_{\2V_{1,k}}^\dagger + \2S_{\2V_{2,k}}^\dagger) \right] \right)^m.
\nonumber \end{align} \end{assumption} Now, since the Clifford group $\{\2U_{1,k}\}$ is a unitary 2-design we make use of the twirling identity \begin{align} \1E_{i_m^\prime} \left[\2S_{\2V_{1,k}} \2S_{\2E} \2S_{\2V_{1,k}}^\dagger \right] &= \2S_{\2W_1(\2E)} \end{align} where \begin{align} \2W_1(\2E) &= \mu_1 \2I_1 + (1-\mu_1) \2D_{1} \\ \mu_1 &= \frac{d_1\overline{F}_{\2E_{11}}-1}{d_1-1} \label{eq:twirl-param} \end{align} is the twirling superchannel acting on the computational subspace, $\2I_1$ is the identity projector on the computational subspace, and \begin{equation} \2D_{1}(\rho) = \Tr[{\bb 1}_1 \rho] \,\frac{{\bb 1}_1}{d_1} \end{equation} is the completely depolarizing channel on the computational subspace. Since the Clifford group is also a unitary 1-design we may also evaluate \begin{align} \1E_{i_m^\prime} \left[\2S_{\2V_{1,k}}\right] = \2S_{\2D_{1}}. \end{align} \begin{assumption}\label{assum4} We now need to make one final assumption: that the unitaries $\{U_{2,k}\}$ on the leakage subspace also average as a unitary 1-design, to ensure that we also have \begin{equation} \1E_{i_m^\prime} \left[\2S_{\2V_{2,k}} \right] = \2S_{\2D_{2}}, \label{eq:leakage-1d} \end{equation} where $\2D_{2}$ is the completely depolarizing channel on the leakage subspace.
\end{assumption} Under \cref{assum4} we have that the average superoperator in \cref{eq:s-rb-average} evaluates to \begin{align} \1E_{i_m^\prime}[\2S_{i_m^\prime} ] &= \2S_{\2E} \Big[ \2S_{\2W_1(\2E)} + \2S_{\2D_{1}} \2S_{\2E} \2S_{\2D_{2}} + \2S_{\2D_{2}} \2S_{\2E} \2S_{\2D_{1}} \nonumber\\&\qquad+ \2S_{\2D_{2}} \2S_{\2E} \2S_{\2D_{2}} \Big]^m \nonumber\\&=\2S_{\2E}\2S_{\2E_D}^m \end{align} where $\2E_D$ is given by \begin{align} \2E_D &= (1-L_1) \Big( \mu_1 \2I_1 + (1-\mu_1) \2D_{1} \Big) + L_1 \2D_{21} \nonumber\\&\qquad + L_2\2D_{12} + (1-L_2) \2D_{2} \label{app-eq:dlm} \end{align} and $L_1, L_2$ are the leakage and seepage rates of $\2E$ respectively, and we have defined completely depolarizing channels between the computational and leakage subspaces: \begin{equation} \2D_{ij}(\rho) = \Tr[{\bb 1}_j \rho] \,\frac{{\bb 1}_i}{d_i}, \quad i,j =1,2. \end{equation} To compute $\2S_{\2E_{D}}^m$ we let $\{A_j/\sqrt{d_1} : j=0,\dots,d_1^2-1\}$ be an orthonormal operator basis for $L(\2X_1)$, with $A_0 = {\bb 1}_1$. In the qubit case this could be the Pauli basis $\{{\bb 1}_1, X, Y, Z\}/\sqrt{2}$, for example. The superoperator representations of the identity and completely depolarizing channels may be expressed in this basis as \begin{equation*} \2S_{\2I_1} = \sum_{j=0}^{d_1^2-1} \frac{1}{d_1} \dketdbra{A_j}{A_j}, \qquad \2S_{\2D_{1}} = \frac{1}{d_1} \dketdbra{A_0}{A_0}. \end{equation*} Hence \cref{app-eq:dlm} may be written as $\2E_D^m = \2E_{L}^m + \2E_F^m$ where \begin{align*} \2E_F =& (1-L_1) \mu_1 (\2I_1-\2D_1)\\ \2E_L =&(1-L_1) \2D_{11} + L_1\2D_{21} + L_2 \2D_{12} + (1-L_2) \2D_{22} \end{align*} and we have used the fact that $\2S_{\2E_L}\2S_{\2E_F} = \2S_{\2E_F}\2S_{\2E_L} = 0$ to expand $(\2E_L + \2E_F)^m =\2E_L^m + \2E_F^m$.
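Because $\2E_L$ only transfers population between the two subspaces, its iterates reduce to powers of a $2\times 2$ stochastic matrix, which have a simple closed form: a rank-one fixed-point part plus a part decaying as $(1-L_1-L_2)^m$. A quick numerical check (NumPy; the rates and sequence length are arbitrary illustrative values):

```python
import numpy as np

L1, L2, m = 0.02, 0.05, 25
# Population-transfer matrix of E_L in the basis {1_1, 1_2}.
M = np.array([[1 - L1, L1],
              [L2, 1 - L2]])

# Closed form: fixed-point projector plus a part decaying as (1 - L1 - L2)^m.
fixed = np.array([[L2, L1], [L2, L1]]) / (L1 + L2)
decay = np.array([[L1, -L1], [-L2, L2]]) / (L1 + L2)
closed = fixed + (1 - L1 - L2) ** m * decay

assert np.allclose(np.linalg.matrix_power(M, m), closed)
```

The two matrices in the closed form are the spectral projectors of $M$ for the eigenvalues $1$ and $1-L_1-L_2$; they are idempotent and mutually annihilating, which is what makes the power collapse.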
Since the operators $\2D_{ij}$ preserve the span of $\dket{{\bb 1}_1}$ and $\dket{{\bb 1}_2}$, we can compute the $m$-th power of the superoperator for $\2E_L$ as a $2\times 2$ matrix \begin{align*} \2S_{\2E_L}^m &=\begin{pmatrix} 1-L_1 & L_1 \\ L_2 & 1-L_2\end{pmatrix}^m \\ =& \frac{1}{L_1+L_2} \begin{pmatrix} L_2 & L_1 \\ L_2 & L_1 \end{pmatrix} +\frac{(1-L_1-L_2)^m}{L_1+L_2}\begin{pmatrix} L_1 & -L_1 \\ -L_2 & L_2 \end{pmatrix} \end{align*} and hence for an initial state $\rho$ with $L(\rho)=p_l$ we have \begin{align} \2S_{\2E_L}^m (\rho) =& \left(\frac{L_2}{L_1+L_2}\right) \frac{{\bb 1}_1}{d_1} +\left(\frac{L_1}{L_1+L_2}\right)\frac{{\bb 1}_2}{d_2} \label{eq:slm-rho}\\& +\left(\frac{L_1}{L_1+L_2}-p_l\right)(1-L_1-L_2)^m \left( \frac{{\bb 1}_1}{d_1} -\frac{{\bb 1}_2}{d_2}\right). \nonumber \end{align} For the $\2E_F$ component we simply have \begin{equation} \2E_F^m = (1-L_1)^m \mu_1^m (\2I_1-\2D_1) \end{equation} and hence \begin{align} \2E_F^m(\rho) =& (1-L_1)^m \mu_1^m (1-p_l) \left(\rho_1 - \frac{{\bb 1}_1}{d_1}\right) \label{eq:sfm-rho} \end{align} where $\rho_1$ is defined by the projection onto the computational subspace $\2I_1(\rho) = (1-p_l)\rho_1$. Next, using the survival probability in \cref{app-eq:survival} we consider outcomes for a set of measurements $\{M_j\}$ that ideally form a PVM on the computational subspace $(M_j = \ketbra{j}{j})$. Using the expressions in \cref{eq:sfm-rho,eq:slm-rho} we have \begin{align} P(1 | m, M_j, \rho) &= A_j + B_j \lambda_1^m + C_j \lambda_2^m \end{align} where \begin{align} \lambda_1 & = 1-L_1-L_2\\ \lambda_2 & = (1-L_1)\mu_1\\ A_j & = \frac{1}{L_1+L_2}\Tr\left[M_j^\dagger\2E\left( L_2 \frac{{\bb 1}_1}{d_1}+L_1\frac{{\bb 1}_2}{d_2} \right)\right] \\ B_j & = \left(\frac{L_1}{L_1+L_2} -p_l\right) \Tr\left[M_j^\dagger \2E\left( \frac{{\bb 1}_1}{d_1}-\frac{{\bb 1}_2}{d_2} \right)\right]\\ C_j & = (1-p_l)\Tr\left[M_j^\dagger \2E\left(\rho_1 - \frac{{\bb 1}_1}{d_1}\right)\right].
\end{align} By setting $M_0=\rho_1$ as the ideal measurement, this gives the RB fidelity decay model for $j=0$. For the leakage model we must sum over the survival probabilities for the set of PVM measurements $\{M_j\}$. To allow for leakage in our measurement we assume a measurement leakage model given by \begin{equation} \sum_j M_j = (1-q_1){\bb 1}_1 + q_2 {\bb 1}_2 \end{equation} where $q_1$ and $q_2$ are the measurement leakage and seepage rates. Using this model for measurement leakage we have $\sum_j C_j = 0$, and hence the summed decay model is given by \begin{equation} P(1 | m, {\bb 1}_1, \rho) = A + B \lambda_1^m \end{equation} where \begin{align} A & = \frac{L_2}{L_1+L_2} +\frac{L_1 q_2 - L_2 q_1}{L_1+L_2} \\ B & = \frac{L_1}{L_1+L_2} - \frac{L_1(q_1+q_2)}{L_1+L_2} -p_l (1-q_1-q_2). \end{align} Let us define two error terms \begin{align} \epsilon_M &= q_1 + p_l (1-q_1-q_2)\\ \epsilon_Q &= L_1q_2-L_2q_1. \end{align} Then we may rewrite $A$ and $B$ as \begin{align} A & = \frac{L_2+\epsilon_Q}{L_1+L_2}, & B & = \frac{L_1-\epsilon_Q}{L_1+L_2} -\epsilon_M. \end{align} Hence our estimates of $L_1,L_2$ as computed from $A$ and $B$ are given by \begin{align*} L_1^{\text{est}}(A) &= (1-A)(1- \lambda_1) = L_1 -\epsilon_Q\\ L_2^{\text{est}}(A) &= A(1-\lambda_1) = L_2 + \epsilon_Q\\ L_1^{\text{est}}(B) &= B(1-\lambda_1) = L_1 -\epsilon_Q -\epsilon_M(L_1+L_2)\\ L_2^{\text{est}}(B) &= (1-B)(1-\lambda_1) = L_2 +\epsilon_Q +\epsilon_M(L_1+L_2) \end{align*} Hence the error due to using the approximate model is smaller using $A$ than $B$, and is given by \[ \mbox{Var}(L_{j}^{\text{est}}) = \epsilon_Q^2. \] \section{Proof of \cref{prop:coherence-bound}}\label{proof:coherence-bound} Consider the spectral decomposition of a state $\rho=\sum_a\lambda_a\ketbra{\Psi_a}{\Psi_a}$.
We can decompose each eigenstate $\ket{\Psi_a}$ as \begin{equation} \ket{\Psi_a}= \sqrt{1-p_a}\ket{\psi_{1,a}} +\sqrt{p_a}\ket{\psi_{2,a}} \nonumber \end{equation} where $0\le p_a\le 1$ is the leakage of the state $\ket{\Psi_a}$. The leakage of $\rho$ is then given by $p_l=L(\rho) = \sum_a \lambda_a p_a$, and the projection onto the CLS is \begin{equation} \2P_C(\rho) = \sum_a \lambda_a \sqrt{p_a(1-p_a)} \Big(\ketbra{\psi_{1,a}}{\psi_{2,a}} +\ketbra{\psi_{2,a}}{\psi_{1,a}} \Big). \nonumber \end{equation} Using the triangle inequality we have that the trace norm of $\2P_C(\rho)$ is upper-bounded by \begin{align} \| \2P_C(\rho)\|_1 &\le \sum_a \lambda_a \sqrt{p_a(1-p_a)} \left\|\ketbra{\psi_{1,a}}{\psi_{2,a}} +\ketbra{\psi_{2,a}}{\psi_{1,a}} \right\|_1 \nonumber\\ &\le \sum_a 2\lambda_a \sqrt{p_a(1-p_a)}. \nonumber \end{align} Now, by the concavity of $f(x)=\sqrt{x}$ we have \begin{align*} \sum_a 2\lambda_a \sqrt{p_a(1-p_a)} &\le 2\sqrt{\sum_a \lambda_a \,p_a(1-p_a)} \\ &= 2\sqrt{p_l - \sum_a \lambda_a p_a^2}, \end{align*} and by convexity of $g(x)=x^2$ we have \begin{equation} \sum_a \lambda_a p_a^2 \ge \left(\sum_a \lambda_a p_a\right)^2 = p_l^2, \end{equation} hence \begin{equation} 2\sqrt{p_l - \sum_a \lambda_a p_a^2} \le 2\sqrt{p_l(1-p_l)} \end{equation} and so $\|\2P_C(\rho)\|_1$ is upper-bounded by $2\sqrt{p_l(1-p_l)}$. \qed \section{Proof of \cref{prop:coh-leakage-bound}}\label{proof:coh-leakage-bound} To bound the coherent leakage rate we start with the coherence of leakage bound from \cref{prop:coherence-bound} for the output state $\2E(\ketbra{\psi_1}{\psi_1})$: \begin{align} CL_1(\2E) &= \int d\psi_1 \| \2P_C\2E(\ketbra{\psi_1}{\psi_1}) \|_1 \\ &\le 2\int d\psi_1 \sqrt{p_l(\psi_1)-p_l(\psi_1)^2} \end{align} where $p_l(\psi_1) \equiv L(\2E(\ketbra{\psi_1}{\psi_1}))$. Next, by the concavity of $\sqrt{x}$ we have \begin{align} CL_1(\2E) &\le 2\sqrt{\int d\psi_1 p_l(\psi_1)-\int d\psi_1 p_l(\psi_1)^2}\\ &= 2\sqrt{L_1(\2E)-\int d\psi_1 p_l(\psi_1)^2}.
\end{align} The remaining term can be rewritten as \begin{align} p_l(\psi_1)^2 &=\Tr[{\bb 1}_2\2E(\ketbra{\psi_1}{\psi_1})]^2\\ &=\Tr[{\bb 1}_2\otimes{\bb 1}_2 (\2E\otimes\2E)(\ketbra{\psi_1}{\psi_1}^{\otimes 2})]. \label{eq:pl-sq} \end{align} Using the result that the average over $\ketbra{\psi_1}{\psi_1}^{\otimes n}$ is given by \begin{equation} \int d\psi_1 \ketbra{\psi_1}{\psi_1}^{\otimes n} = \frac{\Pi_\text{sym}}{\Tr[\Pi_\text{sym}]} \end{equation} where $\Pi_\text{sym}$ is the projector onto the symmetric subspace of $\2X_1^{\otimes n}$, we may then evaluate the case $n=2$ to obtain \begin{equation} \int d\psi_1 \ketbra{\psi_1}{\psi_1}^{\otimes 2} = \frac{{\bb 1}_1\otimes{\bb 1}_1 + U_{\text{SWAP}_1}}{d_1(d_1+1)}.\label{eq:psi-sq-int} \end{equation} Let $\{A_j/\sqrt{d_1} : j=0,\dots,d_1^2-1\}$, with $A_0 = {\bb 1}_1$, be an orthonormal operator basis for $L(\2X_1)$. Then we may rewrite the SWAP unitary as \begin{equation} U_{\text{SWAP}_1} = \frac{{\bb 1}_1\otimes{\bb 1}_1}{d_1} + \sum_{j=1}^{d_1^2-1} \frac{A_j\otimes A_j}{d_1} \end{equation} and so \cref{eq:psi-sq-int} becomes \begin{equation} \int d\psi_1 \ketbra{\psi_1}{\psi_1}^{\otimes 2} = \frac{{\bb 1}_1}{d_1}\otimes\frac{{\bb 1}_1}{d_1} +\sum_{j=1}^{d_1^2-1} \frac{A_j\otimes A_j}{d_1^2(d_1+1)}. \label{eq:psi-sq-int-2} \end{equation} Hence, returning to \cref{eq:pl-sq}, we have \begin{align*} \int d\psi_1 p_l(\psi_1)^2 &= L_1(\2E)^2 +\sum_{j=1}^{d_1^2-1} \frac{\Tr[{\bb 1}_2 \2E(A_j)]^2}{d_1^2(d_1+1)}\\ &\ge L_1(\2E)^2. \end{align*} Thus we obtain the result \begin{align} CL_1(\2E) &\le 2\sqrt{L_1(\2E)-\int d\psi_1 p_l(\psi_1)^2}\\ &\le 2\sqrt{L_1(\2E)\big(1-L_1(\2E)\big)}. \end{align} The result for seepage follows the same argument. \qed \section{Proof of \cref{prop:dle-leak}}\label{proof:dle-leak} Let $\2E_1\in C(\2X_1)$ be a CPTP map, and let $\2E_L$ be the DLE of $\2E_1$.
The state leakage of an initial state $\rho$ after $m$ applications of $\2E_L$ is given by \begin{equation} L(\2E_L^m(\rho)) = \Tr[{\bb 1}_2 \2E_L^m(\rho)] = \Tr[\rho\,(\2E_L^\dagger)^m({\bb 1}_2)] \end{equation} where the adjoint channel $\2E_L^\dagger$ is given by \begin{equation} \2E_L^\dagger = (1-L_1) \2E_1^\dagger + L_1 \2D_{12} + L_2 \2D_{21} +(1-L_2)\2D_2. \end{equation} Since $\2E_1$ is TP, the adjoint channel $\2E_1^\dagger$ is unital on the computational subspace ($\2E_1^\dagger({\bb 1}_1)={\bb 1}_1$)~\cite{Wood2015qic}. Hence, since \begin{align*} \2E_L^\dagger\left(\alpha {\bb 1}_1 + \beta {\bb 1}_2\right) =& \big[(1-L_1)\alpha + L_1\beta\big]{\bb 1}_1 \\&+ \big[L_2 \alpha + (1-L_2)\beta\big]{\bb 1}_2, \end{align*} we can represent the superoperator for $\2E_L^\dagger$ with respect to the basis $\dket{{\bb 1}_1},\dket{{\bb 1}_2}$ as a $2\times 2$ matrix \begin{align} \2S_{\2E^\dagger_L} &=\begin{pmatrix} 1-L_1 & L_1 \\ L_2 & 1-L_2\end{pmatrix}. \end{align} Hence we can compute the $m^{\text{th}}$ power of $\2S_{\2E_L^\dagger}$, obtaining \begin{align*} \2S_{\2E_L^\dagger}^m =& \frac{1}{L_1+L_2} \begin{pmatrix} L_2 & L_1 \\ L_2 & L_1 \end{pmatrix} \\& +\frac{1}{L_1+L_2}\begin{pmatrix} L_1 & -L_1 \\ -L_2 & L_2 \end{pmatrix}(1-L_1-L_2)^m, \end{align*} and hence \begin{align*} \2S_{\2E_L^\dagger}^m ({\bb 1}_2) =&\left(\frac{L_1}{L_1+L_2}\right){\bb 1} - \frac{(1-L_1-L_2)^m}{L_1+L_2}(L_1{\bb 1}_1 - L_2{\bb 1}_2). \end{align*} Thus we have that \begin{align*} L(\2E_L^m(\rho)) =&\left(\frac{L_1}{L_1+L_2}\right) \\& - \left(\frac{L_1\Tr[{\bb 1}_1\rho] - L_2\Tr[{\bb 1}_2\rho]}{L_1+L_2}\right)(1-L_1-L_2)^m \\ =&\frac{L_1}{L_1+L_2} -\left(\frac{L_1}{L_1+L_2}-p_l\right)(1-L_1-L_2)^m \end{align*} where $p_l = L(\rho)$.
\qed \section{Proof of \cref{prop:lindblad-leakage}}\label{proof:lindblad-leakage} To prove the result of the second order expansion of $\2S = e^{\Delta t(\2H+\2D)}$ we must show that the cross terms vanish: \begin{equation} \dbra{{\bb 1}_2}(\2D\2H + \2H\2D)\dket{{\bb 1}_1}= \dbra{{\bb 1}_1}(\2D\2H + \2H\2D)\dket{{\bb 1}_2}=0. \end{equation} Now $\dbra{{\bb 1}_i}\2H\2D\dket{{\bb 1}_j} = \sum_k \dbra{{\bb 1}_i}\2H \2D[A_k]\dket{{\bb 1}_j}$, where we restrict ourselves to $k$-photon ladder operators of the form \begin{equation} A_k = \sum_{s} \alpha_s \ketbra{s\pm k}{s}. \end{equation} Since $A_k^\dagger A_k$ is diagonal we have that ${\bb 1}_i A_k^\dagger A_k {\bb 1}_j =0$ for $i\neq j$. Furthermore, we have that \begin{equation} A_k {\bb 1}_i A_k^\dagger {\bb 1}_j = \sum_s |\alpha_s|^2 \ketbra{s}{s} \end{equation} for some subset of indices $s$, and similarly for $A_k^\dagger {\bb 1}_i A_k{\bb 1}_j$. Using this and the property that $H$ is Hermitian, we have \begin{align} \Tr[A_k{\bb 1}_jA_k^\dagger{\bb 1}_i H] = \sum_s |\alpha_s|^2 \bra{s}H\ket{s} \in \bb R. \end{align} Finally, expanding out the original expression we have \begin{align*} \dbra{{\bb 1}_i}\2H \2D[A]\dket{{\bb 1}_j} =& i\Tr[A{\bb 1}_jA^\dagger{\bb 1}_i H] -i\Tr[H{\bb 1}_i A{\bb 1}_jA^\dagger] \nonumber\\ =& -2\text{Im}\Tr[A{\bb 1}_jA^\dagger{\bb 1}_i H] \\ =& 0 \\ \dbra{{\bb 1}_i}\2D[A]\2H\dket{{\bb 1}_j} =&-\dbra{{\bb 1}_j}\2H \2D[A]^\dagger\dket{{\bb 1}_i} \nonumber\\ =& -i\Tr[A^\dagger{\bb 1}_iA{\bb 1}_j H] +i\Tr[H{\bb 1}_j A^\dagger{\bb 1}_iA] \nonumber\\ =& 2\text{Im}\Tr[A^\dagger{\bb 1}_iA{\bb 1}_j H]\\ =& 0. \label{eq:dh-term} \qed \end{align*} \end{appendix} \end{document}
\begin{document} \noindent {\bf Free Malcev algebra of rank three.}\\ Alexandr Ivanovich Kornev \\ {Centro de Matem\'atica, Computa\c c\~ao e Cogni\c c\~ao,\\ Universidade Federal do ABC, Avenida dos Estados, 5001, \\ Bairro Bangu, Santo Andr\'e, SP, Brasil, CEP 09.210-580\\ phone +55(11)49968315 }\\ {\it e-mail: [email protected]} \title{\bf Free Malcev algebra of rank three.} \author{A.I.\ Kornev\footnote{Supported by FAPESP grant No. 2008/57680-5.}\\ Federal University of ABC (UFABC)\\ Brazil} \maketitle \begin{abstract} We find a basis for the free Malcev algebra on three free generators over a field of characteristic zero. The semiprimeness and speciality of this algebra are proved. Also, we prove the decomposability of this algebra into a subdirect sum of the free Lie algebra of rank three and the free algebra of rank three of the variety generated by a simple seven-dimensional Malcev algebra. These results were announced in~\cite{Kor}. \\ \end{abstract} The problem of finding a basis for a free algebra is important for many varieties. For free Malcev algebras this problem was posed by Shirshov in~\cite[Problem 1.160]{Dn}. For alternative algebras with three generators a similar problem was solved in~\cite{Il}. The basis of the free Malcev superalgebra on one odd generator was constructed in~\cite{Sh1odd}. Recall that a Malcev algebra is called special if it is a subalgebra of the commutator algebra $A^{-}$ of some alternative algebra $A.$ The question of the speciality of Malcev algebras was posed by Malcev in~\cite{Mal}. In the present paper we find a basis of the free Malcev algebra with three free generators, and prove the speciality of this algebra.
In addition, we prove the decomposition of this algebra into a subdirect sum of the free Lie algebra of rank three and the free algebra of rank three of the variety generated by a simple seven-dimensional Malcev algebra.\\ Shestakov~\cite{Sh2} in 1977 proved that a free Malcev algebra on $n>8$ generators over a commutative ring $\Phi$ is not semiprime provided $7!\ne 0$ in $\Phi$. Filippov~\cite{Fil4} in 1979 then proved that in fact a free Malcev $\Phi$-algebra on $n>4$ generators is not semiprime if $6\Phi \ne 0.$ We prove that the free Malcev algebra of rank three is semiprime. For brevity, we omit the brackets in left-normed terms of the form $(...((x_1x_2)x_3)...)x_n.$ In addition, we denote products of the form $axx...x$ by $ax^n.$\\ An algebra $M$ over a field $F$ which satisfies the following identities $$x^2=0,$$ $$J(x,y,xz)=J(x,y,z)x,$$ where $J(x,y,z)=xyz+zxy+yzx$, is called a Malcev algebra. In what follows, the characteristic of $F$ is assumed to be zero. Let $R_a$ be the operator of right multiplication by an element $a$ of the algebra $M$, and let $R(M)$ be the algebra generated by all $R_a$. We will use the following notation:\\ $$L_{a,b}=1/2(R_aR_b+R_bR_a),$$ $$[R_a,R_b]=R_aR_b-R_bR_a,$$ $$(a,b,c)=(ab)c-a(bc).$$ In addition, by $\Delta_a^i(b)$ we denote the operator of partial linearization defined in~\cite[Chapter 1, \S 4]{zsss}, by $Z(M)$ we denote the Lie center of a Malcev algebra $M$, and by $G(a,b,c,d)$ we denote the function defined in~\cite{sag} as follows: $$G(a,b,c,d)=J(ab,c,d)-bJ(a,c,d)-J(b,c,d)a.$$ Let $X=\{x,y,z\}$ and let $M_3$ be the free Malcev algebra on the set of free generators $X$. For brevity, expressions of the form $G(...(G(G(t,x,y,z),x,y,z)...),x,y,z)$ will be denoted by $tG^n$. \\ We denote by $Alt[X]$ the free alternative algebra generated by the set of free generators $X$, and by $Ass[X]$ the free associative algebra generated by $X$.
Furthermore, for $a,b \in Alt[X]$ we denote $a\circ b=1/2(ba+ab)$, and for $a\in Alt[X]$ we denote by $R^+_a$ the operator on the algebra $Alt[X]$ defined by $xR^+_a=x\circ a$ for any $x\in Alt[X]$. Next, $L^+_{a,b}$ is the operator on the algebra $Alt[X]$ defined by $L^+_{a,b}=R^+_aR^+_b-R^+_{a\circ b}$ for $a,b \in Alt[X]$. If $B$ is an alternative algebra, then by $B^{-}$ we denote the commutator algebra of the algebra $B$.\\ The main result is the following:\\ {\bf Theorem.} Let $M_3$ be the free Malcev algebra with free generators $X=\{x,y,z\}$. Let $\textbf{U}=\{J(x,y,z) G^k L_{x,x}^l L_{y,y}^m L_{z,z}^n L_{x,y}^p L_{x,z}^q L_{y,z}^r \mid k,l,m,n,p,q,r \in \mathbb{N}\cup \{0\}\}$. Then the set of vectors $\textbf{U}\cup \textbf{U}x\cup \textbf{U}y\cup \textbf{U}z\cup \textbf{U}xy\cup \textbf{U}xz\cup \textbf{U}yz$ forms a basis of the space $J(M_3,M_3,M_3).$ Furthermore, $M_3$ is special.\\ Any Malcev algebra $M$ and the algebra $R(M)$ satisfy the following identities:\\ \begin{equation}\label{a7} uL_{a,b}t=utL_{a,b}+uL_{at,b}-uL_{a,tb}. \end{equation} \begin{equation}\label{a1} (ab)(cd)=acbd+dacb+bdac+cbda, \end{equation} \begin{equation}\label{a4} G(t,a,b,c)=2(J(ta,b,c)+J(t,a,bc)), \end{equation} \begin{equation}\label{a3} G(t,a,b,c)=2/3(J(t,b,c)a+J(a,t,c)b+J(a,b,t)c-J(a,b,c)t), \end{equation} \begin{equation}\label{a2} 3J(wa,b,c)=J(a,b,c)w-J(b,c,w)a-2J(c,w,a)b+2J(w,a,b)c, \end{equation} \begin{equation}\label{a5} J(J(a,b,c),a,b)=-3J(a,b,c)(ab), \end{equation} \begin{equation}\label{a6} J(c,ba^{2k-1},b)=-J(c,b,a)a^{2k-2}b \quad (k\geq 1). \end{equation} The identity \eqref{a7} is the identity (12) of~\cite[\S 1]{fil} rewritten in our notation. The identities \eqref{a1}, \eqref{a4}, \eqref{a3}, \eqref{a2} and \eqref{a5} are proved in~\cite{sag}, and \eqref{a6} is the identity (5) of~\cite[\S 1]{fil}. Moreover, from the identity \eqref{a3} it is clear that the function $G$ is skew-symmetric in any two of its arguments. 
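The defining identities above can be observed concretely on the smallest non-Lie example of a special Malcev algebra: the commutator algebra $\mathbb{O}^-$ of the octonions, which are alternative. The following illustrative sketch (not part of the paper's argument) builds the octonions by the standard Cayley--Dickson doubling and checks, over integer coordinates, that $\mathbb{O}^-$ satisfies $x^2=0$ and $J(x,y,xz)=J(x,y,z)x$ while $J$ does not vanish identically.

```python
# Illustrative numerical check (not part of the paper's argument): the
# commutator algebra O^- of the octonions, an alternative algebra, is a
# non-Lie Malcev algebra, so it satisfies x^2 = 0 and J(x,y,xz) = J(x,y,z)x,
# with products read left-normed as in the text.  Octonions are built by the
# standard Cayley-Dickson doubling (a,b)(c,d) = (ac - d*b, da + bc*);
# integer coordinates keep every check exact.
import random

def conj(x):
    # Cayley-Dickson conjugation: conj((a,b)) = (conj(a), -b)
    if len(x) == 1:
        return x[:]
    n = len(x) // 2
    return conj(x[:n]) + [-t for t in x[n:]]

def add(x, y): return [u + v for u, v in zip(x, y)]
def sub(x, y): return [u - v for u, v in zip(x, y)]

def mul(x, y):
    # Cayley-Dickson product, doubling R -> C -> H -> O
    if len(x) == 1:
        return [x[0] * y[0]]
    n = len(x) // 2
    a, b, c, d = x[:n], x[n:], y[:n], y[n:]
    return sub(mul(a, c), mul(conj(d), b)) + add(mul(d, a), mul(b, conj(c)))

def comm(x, y):
    # the (anticommutative) product of the Malcev algebra O^-
    return sub(mul(x, y), mul(y, x))

def J(x, y, z):
    # J(x,y,z) = xyz + zxy + yzx, left-normed products
    return add(add(comm(comm(x, y), z), comm(comm(z, x), y)),
               comm(comm(y, z), x))

random.seed(0)
rnd = lambda: [random.randint(-5, 5) for _ in range(8)]
for _ in range(25):
    x, y, z = rnd(), rnd(), rnd()
    assert comm(x, x) == [0] * 8                        # x^2 = 0
    assert J(x, y, comm(x, z)) == comm(J(x, y, z), x)   # Malcev identity

# O^- is not a Lie algebra: J does not vanish identically
e1, e2, e4 = [0] * 8, [0] * 8, [0] * 8
e1[1] = e2[2] = e4[4] = 1
print(J(e1, e2, e4) != [0] * 8)   # True
```

In the same model one can also test, say, the identity \eqref{a1} or the skew-symmetry of $G$; such checks are, of course, no substitute for the proofs below.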
\\ {\bf Lemma 1.} Any Malcev algebra $M$ satisfies the following identities:\\ \begin{equation}\label{a10} J(a,b,c)L_{b,b}^ka=J(a,b,c)aL_{b,b}^k, \end{equation} \begin{equation}\label{a11} J(a,b,c)L_{a,a}^kL_{b,b}^l=J(a,b,c)L_{b,b}^lL_{a,a}^k, \end{equation} $$(ta)J(a,b,c)=$$ \begin{equation}\label{a8} -1/2J(a,t,c)ab+1/2J(a,t,b)ac-J(b,c,ta)a-3/2J(a,t,cb)a, \end{equation} \begin{equation}\label{a9} J(a,b,tac)=-1/2J(a,t,c)[R_a,R_b]+J(a,b,t)L_{a,c}. \end{equation} {\sc Proof}. The identity \eqref{a10} follows easily from the identity (21) of~\cite{Kuz}.\\ The identity \eqref{a11}. We apply the identities \eqref{a6} and \eqref{a10}: $$J(a,b,c)L_{a,a}^{k}L_{b,b}^{l}=-J(ba^{2k+1},b,cb^{2l-1})=-J(a,b,cb^{2l-1})L_{a,a}^{k}b=$$ $$-J(a,b,cb^{2l-1})bL_{a,a}^{k}=J(a,b,c)L_{b,b}^{l}L_{a,a}^{k}.$$ The identity \eqref{a8}. Applying the operator $\Delta_b^1(h)$ to the identity $bJ(a,b,c)=J(a,b,cb)$ we obtain $$hJ(a,b,c)=J(a,h,c)b+J(a,h,cb)+J(a,b,ch).$$ From the identity \eqref{a2}: $$hJ(a,b,c)=1/3J(a,h,c)b+J(a,h,cb)+1/3J(a,b,h)c-2/3J(b,c,h)a+1/3hJ(a,b,c).$$ That is, $$hJ(a,b,c)=1/2J(a,h,c)b-J(b,c,h)a+1/2J(a,b,h)c+3/2J(a,h,cb).$$ Replacing now $h$ by $ta$ we obtain the identity \eqref{a8}.\\ The identity \eqref{a9}. 
Applying the identity \eqref{a2} twice to the identity \eqref{a8}, we obtain the identity $$(ta)J(a,b,c)=-1/2J(a,t,c)ab-1/6J(a,b,t)ca-1/6J(a,t,c)ba-1/6J(t,b,c)aa+$$ $$+2/3J(a,b,c)ta-1/2J(a,b,t)ac.$$ From the identity \eqref{a2} we obtain $$J(a,b,tac)=2/3J(ta,c,a)b-2/3J(b,ta,c)a-1/3J(a,b,ta)c+1/3J(c,a,b)(ta)=$$ $$-2/3J(t,c,a)ab+4/9J(a,b,t)ca+4/9J(a,t,c)ba-2/9J(t,b,c)aa+$$ $$2/9J(a,b,c)ta+1/3J(a,b,t)ac-1/3(ta)J(a,b,c).$$ Applying now the previous identity, we obtain \eqref{a9}.$\blacksquare$ \\ {\bf Lemma 2.} For an arbitrary polynomial $f$ of degree $n$ from the subalgebra generated by the elements $a$ and $b$ of a Malcev algebra $M$, the following equalities hold:\\ 1) $J(a,b,J(a,f,c))+(-1)^nJ(a,f,J(a,b,c))=0,$\\ 2) $J(a,b,J(f,b,c))+(-1)^nJ(f,b,J(a,b,c))=0,$\\ 3) $J(a,b,fc)=(-1)^nfJ(a,b,c).$\\ {\sc Proof}. We shall prove all three statements by induction on $n$. For $n=1$ all the identities are obvious. We suppose that they hold for $n = k$ and prove them for $n = k+1$. Since $M$ is a binary Lie algebra, we can assume that $f = f_1a$ or $f = f_1b$. \\ 1) If $f=f_1a$, the proof is obvious. Let $f=f_1b$. Applying the operator $\Delta_c^1(f_1)$ to the identity $J(a,cb,c)=J(a,b,c)c$ we obtain $$J(a,f_1b,c)=J(a,f_1,cb)+J(a,b,c)f_1.$$ Next, using this equation and the induction hypothesis, we obtain $$J(a,b,J(a,f_1b,c))+(-1)^{k+1}J(a,f_1b,J(a,b,c))= J(a,b,J(a,f_1,cb))+J(a,b,J(a,b,c)f_1)+$$ $$+(-1)^{k+1}J(a,f_1,J(a,b,c)b)+(-1)^{k+1}J(a,b,J(a,b,c))f_1=0.$$ 2) The proof is similar to 1).\\ 3) Let $f=f_1a$. 
$$J(a,b,f_1ac)=J(a,b,J(f_1,a,c)-cf_1a-acf_1)=$$ $$-J(a,b,J(a,f_1,c))+(-1)^{k+1}(f_1J(a,b,c)a+J(a,b,c)af_1)=$$ $$-J(a,b,J(a,f_1,c))+(-1)^{k+1}(J(J(a,b,c),a,f_1)-af_1J(a,b,c))=(-1)^{k+1}f_1aJ(a,b,c).$$\\ If $f=f_1b$, then the arguments are similar and use the equality 2).$\blacksquare$\\ {\bf Corollary 1.} Under the conditions of Lemma 2 the following equalities hold:\\ \begin{equation}\label{a12} J(a,f,c)b-J(b,f,c)a=\frac {3(-1)^n+1}{2}fJ(a,b,c), \end{equation} \begin{equation}\label{a13} J(a,fb,c)=-J(a,f,c)b+(-1)^nfJ(a,b,c). \end{equation} {\sc Proof}. From Lemma 2: $J(a,b,fc)=(-1)^nfJ(a,b,c).$ From the identity \eqref{a2}: $$J(a,b,fc)=2/3J(f,c,a)b-2/3J(b,f,c)a-1/3J(a,b,f)c+1/3J(c,a,b)f.$$ Combining these, we obtain the first equality. The second equality follows from the first after the application of the identity \eqref{a2} to $J(a,fb,c)$. $\blacksquare$\\ {\bf Proposition 1.} Let $u\in J(M_3,M_3,M_3)$. Then there exist $\alpha_i\in F$ such that $$u=\sum_{i}\alpha_iJ(x,y,z)x_{i1}x_{i2}...x_{ik_i},$$ where $x_{ij}\in X, k_i\in \mathbb{N}\cup \{0\}.$\\ {\sc Proof.} In any Malcev algebra $M$ the following identity holds: $$J(a,b,c)abv=J(a,b,v)cba+J(a,b,c)vba+J(a,b,v)bca+$$ \begin{equation} +J(a,b,c)bva-J(a,b,v)abc-J(a,b,c)avb-J(a,b,v)acb.\tag{a} \end{equation} Indeed, applying the operator $\Delta_b^1(c)$ to the identity \eqref{a10} for $k=1$ we obtain $$J(a,b,c)cba+J(a,b,c)bca=J(a,b,c)acb+J(a,b,c)abc.$$ After the application of the operator $\Delta_c^1(v)$ we obtain the desired identity.\\ We prove the proposition by induction on the degree $d$ of the element $u$. Corollary 2 of~\cite[Chapter 1, \S 3]{zsss} implies the homogeneity of any variety of Malcev algebras over a field $F$. Therefore, the proposition is valid for $d=3$. Assume that the proposition is true for all $d\leq k$. 
From the identity \eqref{a2} and the induction hypothesis it follows that $u$ can be written in the form $$u=\sum_{i}\alpha_iJ(x,y,z)x_{i1}x_{i2}...x_{ik_i}v_i,$$ where $x_{ij}\in X, k_i\in \mathbb{N}\cup \{0\},v_i\in M_3.$ It is obvious that the elements $\overline{v}_i=v_i+J(M_3,M_3,M_3)$ of the Lie algebra $M_3/J(M_3,M_3,M_3)$ are linear combinations of the monomials of the form $\overline{y}_{i1} \overline{y}_{i2}...\overline{y}_{il}$, where $y_{ij}\in X$ and $l\in \mathbb{N}$. Hence $v_i$ can be written as $v_i=y_{i1}y_{i2}...y_{il}+u_i$, where $u_i\in J(M_3,M_3,M_3)$. From homogeneity it follows that the degrees of the elements $u_i$ do not exceed $k$; therefore, the induction hypothesis implies that $v_i$ can be represented as a linear combination of monomials of the form $t_{i1}t_{i2}...t_{is}$, where $t_{ij}\in X$ and $s\leq k$. Thus, to prove the proposition it is sufficient to show that expressions of the form $J(x,y,z)x_{1}x_{2}...x_{r}(wt)$, where $x_{j},t\in X$ and $w\in M_3$, belong to the subspace generated by the set $J(M_3,M_3,M_3)X$. We prove this by induction on $r$. From the identity \eqref{a8} it follows that this is true for $r=0$. Let $r=1$. There are two cases.\\ 1. $x_1=t$. Without loss of generality we can assume that $x_1=t=x$. We have $$J(x,y,z)x(wx)=J(J(x,y,z),x,wx)-(wx)J(x,y,z)x+(wxx)J(x,y,z)=$$ $$-J(J(x,y,z),x,w)x-(wx)J(x,y,z)x+(wxx)J(x,y,z).$$ From the identity \eqref{a8} it follows that the third term, and therefore the whole expression, lies in $J(M_3,M_3,M_3)X$.\\ 2. $x_1\ne t$. Without loss of generality, we may assume that $x_1=x, t=y$. Applying the identities \eqref{a1} and \eqref{a8}, we obtain $$J(x,y,z)x(wy)=J(x,y,z)wxy+yJ(x,y,z)wx+xyJ(x,y,z)w+wxyJ(x,y,z)=$$ $$J(x,y,z)wxy+yJ(x,y,z)wx+wxyJ(x,y,z)+1/2J(x,y,z)xyw-1/2J(x,y,z)yxw.$$ The identities \eqref{a8} and (a) imply the required result.\\ Let $r\geq 2$ and suppose that $J(x,y,z)x_1...x_{r-1}(wt)\in J(M_3,M_3,M_3)X$. We denote $J(x,y,z)x_1...x_{r-2}$ by $J_0$. 
The identity \eqref{a1} gives $$J_0x_{r-1}x_r(wt)=-wtJ_0x_{r-1}x_r-x_r(wt)J_0x_{r-1}-x_{r-1}x_r(wt)J_0+J_0x_r(x_{r-1}(wt))=$$ $$-wtJ_0x_{r-1}x_r+wtx_rJ_0x_{r-1}-x_{r-1}x_r(wt)J_0-J_0x_r(wtx_{r-1}). $$ This equality and the induction hypothesis imply the required result. $\blacksquare$ \\ {\bf Corollary 2.} Let $f,g,h$ be polynomials from the subalgebra generated by the elements $x$ and $y$ of the Malcev algebra, and let $\deg f=n$. Then \\ 1) $J(yx,f,z)=\frac{(-1)^n-1}{2}fJ(x,y,z),$\\ 2) $J(x,f,yz)=J(x,f,z)y-((-1)^n+1)fJ(x,y,z),$\\ 3) if $f$ has even degree, then $J(z,g,h)L_{f,y}=0.$\\ {\sc Proof}. 1) Applying the operator $\Delta_y^1(f)$ to the identity $J(yx,y,z)=J(x,y,z)y$ we obtain $$J(yx,f,z)=J(y,fx,z)+J(x,f,z)y-fJ(x,y,z).$$ From the identity \eqref{a13}: $$J(yx,f,z)=-J(y,f,z)x-(-1)^nfJ(x,y,z)+J(x,f,z)y-fJ(x,y,z).$$ Using \eqref{a12} we obtain 1).\\ 2) Applying the operator $\Delta_z^1(f)$ to the identity $J(x,zy,z)=J(x,y,z)z$ we obtain $$J(x,fy,z)+J(x,zy,f)=J(x,y,z)f.$$ Using now \eqref{a13}, we obtain the required result.\\ 3) We first show that $J(x,y,z)L_{f,y}=0.$ Combining \eqref{a12} and \eqref{a13} we obtain $$J(x,f,z)y=J(fx,y,z)+\frac{(-1)^n+1}{2}fJ(x,y,z).$$ Applying this equality and 2) of this corollary, we obtain $$J(x,y,z)fy=-J(x,f,z)yy+J(fx,y,z)y=-J(x,f,yz)y-2fJ(x,y,z)y+J(fx,y,yz).$$ That is, $$J(x,y,z)fy=J(x,f,yz)y-J(fx,y,yz).$$ Using the identities \eqref{a12} and \eqref{a13} we transform the right-hand side of the last equality: $$J(x,y,z)fy=J(y,f,yz)x+2fJ(x,y,yz)+J(y,fx,yz)=$$ $$-fJ(x,y,yz)+2fJ(x,y,yz)=-J(x,y,z)yf.$$ Then, from Proposition 1 it follows that $J(z, g, h)$ can be represented as $$\sum_{i}\alpha_iJ(z,x,y)x_{i1}x_{i2}...x_{ik_i},$$ where $k_i\in \mathbb{N}\cup \{0\}, \alpha_i \in F$. From homogeneity it follows that $x_{ij}\in \{x,y\}$. Thus, it is clear that $$J(z,g,h)=J(\pm \sum_{i}\alpha_izx_{i1}x_{i2}...x_{ik_i},x,y).$$ This implies the required assertion. 
$\blacksquare$\\ We recall that the Lie center $Z(M)$ of a Malcev algebra $M$ is the set of elements $c$ of the algebra $M$ such that $J(c,a,b)=0$ for any $a$ and $b$ from the algebra $M$. It is well known (see, for example,~\cite{sag}) that the Lie center of any Malcev algebra is an ideal and that $fg$ belongs to $Z(M)$ if $J(f,g,a)=0$ for any $a$. The central elements play an important role in the theory of Malcev algebras. Lemma 3 of~\cite[\S 3]{fil} implies that the elements $yx^{2k-1}(yx)$ are central for any $k>0$. The following assertion gives some central elements of a more general form. {\bf Corollary 3.} If $f$ and $h$ are polynomials of even degree from the subalgebra generated by the elements $x$ and $y$ of the Malcev algebra $M$, then $fh$ belongs to $Z(M).$\\ {\sc Proof.} We use induction on the degree $k$ of the polynomial $h$. When $k = 2$ the assertion obviously follows from 1) of Corollary 2. Now suppose that the statement is true when the degree is equal to $k = 2l$. We prove it for a polynomial $h_1$ of degree $k = 2l+2$.\\ 1) Let $h_1=hxy$. We rewrite the identity \eqref{a9} as $$J(z,x,txy)=1/2J(x,t,y)xz-1/2J(x,t,y)zx-J(x,z,t)L_{x,y}.$$ Applying the operator $\Delta_x^1(f)$ to this equation and replacing $t$ by $h$, we obtain $$J(z,f,hxy)+J(z,x,hfy)=1/2J(f,h,y)xz+1/2J(x,h,y)fz-$$ $$-1/2J(f,h,y)zx-1/2J(x,h,y)zf-J(f,z,h)L_{x,y}-J(x,z,h)L_{f,y}.$$ That is, $J(z,f,hxy)=J(z,x,h)L_{f,x}=0.$\\ 2) Let $h_1=hxx$. Applying the operator $\Delta_x^1(f)$ to the equation $J(z,x,txx)=J(z,x,t)xx$ and replacing $t$ by $h$, we obtain $$J(z,f,hxx)+J(z,x,hfx)+J(z,x,hxf)=J(z,f,h)xx+J(z,x,h)fx+J(z,x,h)xf.$$ Hence, $J(z,f,hxx)=J(z,x,h)L_{f,x}=0.$\\ The proofs of the cases $h_1=hyy$ and $h_1=hyx$ are similar. $\blacksquare$\\ {\bf Lemma 3.} In any Malcev algebra $M$ the following identities hold:\\ \begin{gather} J(aca^{2k+1}(R_cR_a)^{n-1},b,c)=J(a,b,c)L_{a,a}^{k}L_{a,c}^{n}, \label{a14} \\ J(aca^{2k+1},b,c)=J(a,b,c)L_{a,a}^{k}L_{a,c}, \quad k\geq 0. 
\tag{\ref{a14}$'$} \\ J(aca(R_cR_a)^{n-1},b,c)=J(a,b,c)L_{a,c}^{n}, \quad n\geq 1. \tag{\ref{a14}$''$} \end{gather} {\sc Proof.} We write the identity (6) of~\cite[\S 1]{fil}:\\ $$J(ca^{2k+2},b,c)=J(a,b,c)a^{2k}(ca)-J(a,b,c)a^{2k+1}c.$$ Applying \eqref{a8} we obtain $$J(aca^{2k+1},b,c)=-1/2J(a,ba^{2k},c)ac+1/2J(a,ba^{2k},c)ca+J(a,b,c)a^{2k+1}c.$$ That is, $J(aca^{2k+1},b,c)=J(a,b,c)L_{a,a}^{k}L_{a,c}$. Using the identities \eqref{a6} and \eqref{a9}: $$J(a,b,c)L_{a,a}^{k}L_{a,c}^{n}=J(aca^{2k+1},b,c)L_{a,c}^{n-1}=J(aca^{2k+1}(R_cR_a)^{n-1},b,c).$$ $\blacksquare$\\ We denote $L_{zy,zy}+L_{y,y}L_{z,z}-L_{y,z}^2$ by $D(z,y).$\\ {\bf Proposition 2.} Let $T=\{L_{x,x},L_{y,y},L_{z,z},L_{x,y},L_{x,z},L_{y,z},L_{x,zy}\}.$ For all $S_i, T_i$ from $T$ and for any $n$ from $\mathbb{N}\cup \{0\}$:\\ \begin{equation}\label{1prop2} J(x,y,z)T_1T_2...T_n[S_1,S_2]=0, \end{equation} \begin{equation}\label{2prop2} J(x,y,z)T_1T_2...T_nL_{y,zy}=0, \end{equation} \begin{equation}\label{3prop2} J(x,y,z)T_1T_2...T_nD(y,z)=0. \end{equation} {\sc Proof.} First we prove the equalities \eqref{1prop2}, \eqref{2prop2} and \eqref{3prop2} under the assumption that $S_i$ and $T_i$ belong to the set $T_0=T\backslash \{L_{x,zy}\}.$ \\ In what follows we will use the condition $\mathrm{char}\,F=0$ to collect similar terms arising after linearization. {\bf We prove the equality \eqref{1prop2} under the assumption that $S_i$ and $T_i$ belong to the set $T_0=T\backslash \{L_{x,zy}\}.$}\\ 1. We prove the identity \begin{equation} J(x,y,z)L_{x,x}^kL_{y,y}^lL_{z,z}^m[L_{x,x},L_{y,y}]=0. \tag{a} \end{equation} 1) $k=l=m=0$. This follows from the identity \eqref{a11}.\\ 2) $k=l=0$, $m\ne 0$. 
Applying \eqref{a6}, \eqref{a10} and again \eqref{a6} we have: $$J(x,y,z)L_{z,z}^mL_{x,x}L_{y,y}=J(x,y,z)L_{z,z}^mxxL_{y,y}=J(x,y,xz^{2m+1})xL_{y,y}=$$ $$J(x,y,xz^{2m+1})L_{y,y}x=J(x,xy^3,xz^{2m+1})$$ Applying \eqref{a6}, \eqref{a10} and again \eqref{a6} we have: $$ J(x,xy^3,xz^{2m+1})=J(x,xy^3,z)L_{z,z}^{m}x=J(x,xy^3,z)xL_{z,z}^{m}=$$ $$J(x,y,z)L_{y,y}xxL_{z,z}^{m}=J(x,y,z)L_{y,y}L_{x,x}L_{z,z}^{m}$$ Applying \eqref{a11}, \eqref{a6}, \eqref{a10} and again \eqref{a6} we have: $$J(x,y,z)L_{y,y}L_{x,x}L_{z,z}^{m}=J(x,y,z)L_{x,x}L_{y,y}L_{z,z}^{m}=J(x,y,z)L_{x,x}yyL_{z,z}^{m}=$$ $$J(yx^3,y,z)yL_{z,z}^{m}=J(yx^3,y,z)L_{z,z}^{m}y=J(yx^3,y,yz^{2m+1})$$ Applying \eqref{a6}, \eqref{a10} and again \eqref{a6} we have: $$J(yx^3,y,yz^{2m+1})=J(x,y,yz^{2m+1})L_{x,x}y=J(x,y,yz^{2m+1})yL_{x,x}= $$ $$J(x,y,z)L_{z,z}yyL_{x,x}=J(x,y,z)L_{z,z}^mL_{y,y}L_{x,x}.$$\\ 3) $k=m=0$, $l\ne 0$. It is obvious from \eqref{a11}.\\ 4) $k=0$, $l\ne 0$, $m\ne 0$. From the identity \eqref{a11}: $$J(x,y,z)L_{y,y}^lL_{z,z}^m[L_{x,x},L_{y,y}]=-J(x,y,yz^{2m+1}y^{2l-1})[L_{x,x},L_{y,y}]=0.$$ 5) $k\ne 0$, $l=m=0$. Follows from the identity \eqref{a11}.\\ 6) $k\ne 0$, $l=0$, $m\ne 0$. It is similar to the case 4).\\ 7) $k\ne 0$, $l\ne 0$, $m=0$. Follows from \eqref{a11}.\\ 8) $k\ne 0$, $l\ne 0$, $m\ne 0$. 
It is easy to see that $$J(x,y,z)L_{x,x}^kL_{y,y}^lL_{z,z}^m[L_{x,x},L_{y,y}]=$$ $$J(x,y,z)L_{x,x}^kL_{y,y}^lL_{z,z}^mL_{x,x}L_{y,y}-J(x,y,z)L_{x,x}^kL_{y,y}^lL_{z,z}^mL_{y,y}L_{x,x}.$$ Using \eqref{a6}, \eqref{a10} and \eqref{a11} we transform the first term: $$J(x,y,z)L_{x,x}^kL_{y,y}^lL_{z,z}^mL_{x,x}L_{y,y}=-J(x,xy^{2l+1}x^{2k-1},z)L_{z,z}^mL_{x,x}L_{y,y}=$$ $$=J(x,xy^{2l+1}x^{2k},xz^{2m+1})L_{y,y}=J(x,xy^{2l+1},xz^{2m+1}x^{2k})L_{y,y}=$$ $$J(x,y,xz^{2m+1}x^{2k})L_{y,y}^lxL_{y,y}=J(x,y,xz^{2m+1}x^{2k})L_{y,y}^{l+1}x=$$ $$J(x,xy^{2l+3}x^{2k},xz^{2m+1})=J(x,xy^{2l+3}x^{2k},z)L_{z,z}^mx=$$ $$-J(x,xy^{2l+3}x^{2k+1},z)L_{z,z}^m.$$ In the same way we can transform the second term of the commutator: $$-J(x,y,z)L_{x,x}^kL_{y,y}^lL_{z,z}^mL_{y,y}L_{x,x}=J(yx^{2k+3}y^{2l+1},y,z)L_{z,z}^m.$$ Applying \eqref{a6} to the sum of the two last equalities we obtain the required result. 2. We show that \begin{equation} J(x,y,z)L_{x,x}^kL_{y,y}^lL_{z,z}^m[L_{x,y},L_{x,x}]=0. \tag{b} \end{equation} 1) $k=l=m=0$. It follows from \eqref{a11} by linearization. 2) $m=0$. $$J(x,y,z)L_{x,x}^kL_{y,y}^l[L_{x,y},L_{x,x}]=J(x,y,zx^{2k}y^{2l})[L_{x,y},L_{x,x}]=0.$$ 3) $m\ne 0$, $k=l=0$. From the identity (a): $$J(x,y,z)L_{z,z}^m[L_{x,x},L_{y,y}]=0.$$ Using the operator $\Delta_x^1(y)$ we obtain $$J(x,y,z)L_{z,z}^m[L_{x,y},L_{x,x}]=0.$$ 4) $m\ne 0$ and $k+l>0$. Without loss of generality we can assume that, for example, $k\ne 0$. From the identities (a) and \eqref{a6}: $$J(x,y,z)L_{x,x}^kL_{y,y}^lL_{z,z}^m[L_{x,y},L_{x,x}]=-J(x,y,xz^{2m+1}x^{2k-1}y^{2l})[L_{x,y},L_{x,x}]=0.$$\\ 3. We now prove that \begin{equation} J(x,y,z)L_{x,x}^kL_{y,y}^lL_{z,z}^m[L_{x,y},L_{z,z}]=0.\tag{c} \end{equation} 1) $k=l=m=0$. 
From the identity (b) it follows that $J(x,y,z)[L_{x,z},L_{z,z}]=0.$ Applying the operator $\Delta_z^1(y)$ we obtain $$J(x,y,z)[L_{x,y},L_{z,z}]+2J(x,y,z)[L_{x,z},L_{y,z}]=0.$$ Applying the identity (\ref{a14}$''$) to each term of the commutator we compute $$J(x,y,z)[L_{x,z},L_{y,z}]=J(x,y,z)L_{x,z}L_{y,z}-J(x,y,z)L_{y,z}L_{x,z}=J(xzx,y,z)L_{y,z}-$$ $$-J(x,yzy,z)L_{x,z}=J(xzx,yzy,z)-J(xzx,yzy,z)=0.$$ Hence, $J(x,y,z)[L_{x,y},L_{z,z}]=0$.\\ 2) $k=l=0$, $m\ne 0$. From the identity (a) it follows that $J(x,y,z)L_{z,z}^m[L_{x,x},L_{z,z}]=0.$ Applying the operator $\Delta_x^1(y)$ we obtain $J(x,y,z)L_{z,z}^m[L_{x,y},L_{z,z}]=0.$\\ 3) $k=0$, $l\ne 0$, $m=0$. From the identity (a) it follows that $J(x,y,z)L_{y,y}^l[L_{x,x},L_{z,z}]=0.$ Applying the operator $\Delta_x^1(y)$ we have $J(x,y,z)L_{y,y}^l[L_{x,y},L_{z,z}]=0.$\\ 4) $k=0$, $l\ne 0$, $m\ne 0$. From the identity (a) it follows that $J(x,y,z)L_{y,y}^lL_{z,z}^m[L_{x,x},L_{z,z}]=0.$ Applying the operator $\Delta_x^1(y)$ we obtain $J(x,y,z)L_{y,y}^lL_{z,z}^m[L_{x,y},L_{z,z}]=0.$\\ 5) $k\ne 0$, $l=m=0$. From the identity (a) it follows that $J(x,y,z)L_{x,x}^k[L_{y,y},L_{z,z}]=0.$ Applying the operator $\Delta_y^1(x)$ we obtain $J(x,y,z)L_{x,x}^k[L_{x,y},L_{z,z}]=0.$\\ 6) $k\ne 0$, $l=0$, $m\ne 0$. From the identity (a) it follows that $J(x,y,z)L_{x,x}^kL_{z,z}^m[L_{y,y},L_{z,z}]=0.$ Applying the operator $\Delta_y^1(x)$ we have $J(x,y,z)L_{x,x}^kL_{z,z}^m[L_{x,y},L_{z,z}]=0.$\\ 7) $k\ne 0$, $l\ne 0$, $m=0$. 
$$J(x,y,z)L_{x,x}^kL_{y,y}^l[L_{x,y},L_{z,z}]=$$ $$J(x,y,z)L_{x,x}^kL_{y,y}^lL_{x,y}L_{z,z}-J(x,y,z)L_{x,x}^kL_{y,y}^lL_{z,z}L_{x,y}.$$ Applying the identities (b) and (\ref{a14}$'$) to the first term of the commutator we obtain $$J(x,y,z)L_{x,x}^kL_{y,y}^lL_{x,y}L_{z,z}=J(x,y,z)L_{x,x}^kL_{x,y}L_{y,y}^lL_{z,z}=J(xyx^{2k+1},y,z)L_{y,y}^lL_{z,z}.$$ From \eqref{a11} and \eqref{a6}: $$J(xyx^{2k+1},y,z)L_{y,y}^lL_{z,z}=J(xyx^{2k+1},y,z)L_{z,z}L_{y,y}^l=J(xyx^{2k+1},y,z)L_{z,z}yyL_{y,y}^{l-1}=$$ $$J(xyx^{2k+1},y,yz^3)yL_{y,y}^{l-1}=J(xyx^{2k+1},y,yz^3)y^{2l-1}=-J(xyx^{2k+1},y,yz^3y^{2l-1}).$$ From (\ref{a14}$'$), \eqref{a6} and (a): $$-J(xyx^{2k+1},y,yz^3y^{2l-1})=-J(x,y,yz^3y^{2l-1})L_{x,x}^kL_{x,y}=J(x,y,yz^3)yL_{y,y}^{l-1}L_{x,x}^kL_{x,y}=$$ $$J(x,y,z)L_{y,y}^lL_{z,z}L_{x,x}^kL_{x,y}.$$ By (a), the operator $L_{x,x}^k$ in the last expression commutes with $L_{y,y}^lL_{z,z}$, so the first term of the commutator equals $J(x,y,z)L_{x,x}^kL_{y,y}^lL_{z,z}L_{x,y}$, that is, the second term, and we obtain the required result. 8) $k\ne 0$, $l\ne 0$, $m\ne 0$. We use induction on $m$. For $m=0$ this statement was proved in 7). We suppose that $$ J(x,y,z)L_{x,x}^kL_{y,y}^lL_{z,z}^i[L_{x,y},L_{z,z}]=0$$ for all $i<m$. $$J(x,y,z)L_{x,x}^kL_{y,y}^lL_{z,z}^m[L_{x,y},L_{z,z}]=$$ $$J(x,y,z)L_{x,x}^kL_{y,y}^lL_{z,z}^mL_{x,y}L_{z,z}-J(x,y,z)L_{x,x}^kL_{y,y}^lL_{z,z}^{m+1}L_{x,y}.$$ We transform the first term of the commutator. From the induction hypothesis, (a), (\ref{a14}$'$) and \eqref{a6} we obtain $$J(x,y,z)L_{x,x}^kL_{y,y}^lL_{z,z}^mL_{x,y}L_{z,z}=J(x,y,z)L_{x,x}^kL_{y,y}^lL_{x,y}L_{z,z}^{m+1}=$$ $$J(x,y,z)L_{x,x}^kL_{x,y}L_{y,y}^lL_{z,z}^{m+1}=J(yx^{2k+1},y,z)L_{y,y}^lL_{z,z}^{m+1}=$$ $$J(yx^{2k+1},y,z)L_{z,z}^{m+1}L_{y,y}^l=-J(yx^{2k+1},y,yz^{2m+3}y^{2l-1})=$$ $$-J(x,y,yz^{2m+3}y^{2l-1})L_{x,x}^kL_{x,y}=J(x,y,z)L_{y,y}^lL_{z,z}^{m+1}L_{x,x}^kL_{x,y}=$$ $$J(x,y,z)L_{x,x}^kL_{y,y}^lL_{z,z}^{m+1}L_{x,y}.$$ This coincides with the second term of the commutator, and the required equality follows. 4. 
We now prove the identity \begin{equation} J(x,y,z)L_{x,x}^kL_{y,y}^lL_{z,z}^m[L_{x,y},L_{x,z}]=0 \tag{d} \end{equation} 1) $m=0.$ This follows from (c) using the operator $\Delta_z^1(x).$\\ 2) $l=0$. From the identity (c): $$J(x,y,z)L_{x,x}^kL_{z,z}^m[L_{y,y},L_{x,z}]=0.$$ Applying the operator $\Delta_y^1(x),$ we obtain the required result.\\ 3) $k=0$. The proof is similar to cases 1) and 2).\\ 4) $k\ne 0$, $l\ne 0$, $m\ne 0$. Using the identities (a), (c), (b) and (\ref{a14}$'$): $$J(x,y,z)L_{x,x}^kL_{y,y}^lL_{z,z}^mL_{x,y}L_{x,z}=J(x,y,z)L_{y,y}^lL_{x,y}L_{x,x}^kL_{z,z}^mL_{x,z}=$$ $$J(x,yxy^{2l+1},z)L_{x,x}^kL_{z,z}^mL_{x,z}=J(x,yxy^{2l+1},zxz^{2m+1}x^{2k})=$$ $$J(x,y,zxz^{2m+1}x^{2k})L_{y,y}^lL_{x,y}=J(x,y,z)L_{x,x}^kL_{z,z}^mL_{x,z}L_{y,y}^lL_{x,y}=$$ $$J(x,y,z)L_{x,x}^kL_{y,y}^lL_{z,z}^mL_{x,z}L_{x,y}.$$ This finishes the proof of the equality (d).\\ From the proven identities (a), (b), (c) and (d) it follows that $$J(x,y,z)L_{x,x}^kL_{y,y}^lL_{z,z}^m[S_1,S_2]=0, \textrm{ where } S_i \in T_0.$$ Hence, with the operator $\Delta_x^1(y)$ it is easy to prove the identity $$J(x,y,z)L_{x,y}^nL_{x,x}^kL_{y,y}^lL_{z,z}^m[S_1,S_2]=0,$$ using induction on $n$. Induction on $p$ and the operator $\Delta_z^1(y)$ give the identity $$J(x,y,z)L_{x,y}^nL_{y,z}^pL_{x,x}^kL_{y,y}^lL_{z,z}^m[S_1,S_2]=0.$$ Finally, induction on $q$ and the operator $\Delta_x^1(z)$ give the identity $$J(x,y,z)L_{x,y}^nL_{y,z}^pL_{x,z}^qL_{x,x}^kL_{y,y}^lL_{z,z}^m[S_1,S_2]=0.$$ This is the equality \eqref{1prop2} under the assumption that the $S_i$ and $T_i$ belong to the set $T_0=T\backslash \{L_{x,zy}\}.$ \\ {\bf We prove the equality \eqref{2prop2} under the assumption that the $T_i$ belong to the set $T_0=T\backslash \{L_{x,zy}\}.$}\\ That is, \begin{equation} J(x,y,z)L_{x,y}^nL_{x,z}^qL_{y,z}^pL_{x,x}^kL_{y,y}^lL_{z,z}^mL_{y,zy}=0. \tag{e} \end{equation} We use induction on $q$. 
First we consider the case $q=0.$ That is, \begin{equation} J(x,y,z)L_{x,y}^nL_{y,z}^pL_{x,x}^kL_{y,y}^lL_{z,z}^mL_{y,zy}=0 \tag{e$'$} \end{equation} 1) $n=k=0.$ This follows immediately from Proposition 1 and from equality 3) of Corollary 2.\\ 2) $n=0$, $k\ne 0$.\\ 2.a) $p=l=m=0.$ Applying the identity \eqref{a6} and the identities already proven in this proposition we obtain $$J(x,y,z)L_{x,x}^kyL_{y,z}=J(yx^{2k+1},y,z)L_{y,z}=J(yx^{2k+1},y,zyz)=J(x,y,zyz)L_{x,x}^ky=$$ $$J(x,y,z)L_{y,z}L_{x,x}^ky=J(x,y,z)L_{x,x}^kL_{y,z}y.$$ That is, $J(x,y,z)L_{x,x}^kL_{y,z}y-J(x,y,z)L_{x,x}^kyL_{y,z}=0.$ From the identity \eqref{a7}: $$0=J(x,y,z)L_{x,x}^kL_{y,z}y-J(x,y,z)L_{x,x}^kyL_{y,z}=J(x,y,z)L_{x,x}^kL_{y,yz}.$$ 2.b) $p\ne 0,$ $l=m=0$. Using the identity \eqref{a6} we obtain $$J(x,y,z)L_{y,z}^pL_{x,x}^kL_{y,zy}=1/2J(x,y,z)L_{x,x}^kyzL_{y,z}^{p-1}L_{y,zy}+1/2J(x,y,z)L_{x,x}^kzyL_{y,z}^{p-1}L_{y,zy}=$$ $$-1/2J(yx^{2k+1}L_{y,z}^{p-1}z,y,z)L_{y,zy}-1/2J(zx^{2k+1}L_{y,z}^{p-1}y,y,z)L_{y,zy}=0.$$ 2.c) If $p=0$ and $l\ne 0$, or $p=0$ and $m\ne 0$, it is sufficient to apply the identity \eqref{a6}.\\ 3) $n\ne 0$, $k\ne 0$. Using the identity \eqref{a14}: $$J(x,y,z)L_{x,y}^nL_{y,z}^pL_{x,x}^kL_{y,y}^lL_{z,z}^mL_{y,zy}=J(x,y,z)L_{x,x}^kL_{x,y}^nL_{y,z}^pL_{y,y}^lL_{z,z}^mL_{y,zy}=$$ $$J(xyx^{2k+1}(R_xR_y)^{n-1},y,z)L_{y,z}^pL_{y,y}^lL_{z,z}^mL_{y,zy}=$$ $$J(xyx^{2k+1}(R_xR_y)^{n-1}L_{y,z}^pL_{y,y}^lL_{z,z}^m,y,z)L_{y,zy}=0.$$ 4) $n\ne 0$, $k=0$. It is sufficient to apply (\ref{a14}$''$).\\ Applying the operator $\Delta_x^1(z)$ to the identity (e$'$) and using the induction hypothesis it is easy to prove the required identity (e). \\ {\bf We prove the equality \eqref{3prop2} under the assumption that the $T_i$ belong to the set $T_0=T\backslash \{L_{x,zy}\}.$}\\ 1) $n=p=k=l=m=0$. We transform the first term of this sum. 
Substituting $b$ for $t$ in the identity \eqref{a8} we obtain $(ba)J(a,b,c)=-1/2J(a,b,c)ab+1/2J(a,b,c)ba.$ Applying this identity to the first term of the sum $$J(x,y,z)D(y,z)=J(x,y,z)L_{yz,yz}+J(x,y,z)L_{y,y}L_{z,z}-J(x,y,z)L_{y,z}^2$$ we have: $$J(x,y,z)(yz)(yz)=1/2J(-xyz+xzy,y,z)(yz)=$$ $$ -1/4J(-xyz+xzy,y,z)yz+1/4J(-xyz+xzy,y,z)zy=$$ $$1/4(J(x,y,z)yzyz-J(x,y,z)yzzy-J(x,y,z)zyyz+J(x,y,z)zyzy)=$$ $$-1/4J(x,y,z)yyzz+1/2J(x,y,z)yL_{y,z}z-1/4J(x,y,z)yL_{z,z}y-1/4J(x,y,z)zL_{y,y}z-$$ $$-1/4J(x,y,z)zzyy+1/2J(x,y,z)zL_{y,z}y=-J(x,y,z)L_{y,y}L_{z,z}+J(x,y,z)L_{y,z}^2.$$ That is, $J(x,y,z)D(y,z)=0.$\\ 2) $n=p=l=m=0$, $k\ne 0.$ Apply to the identity $$J(x,y,z)L_{x,x}^kL_{y,y}=J(x,y,z)L_{y,y}L_{x,x}^k$$ the operator $\Delta_y^2(zy)$: $$J(x,y,z)L_{x,x}^kL_{zy,zy}+2J(x,zy,z)L_{x,x}^kL_{y,zy}=J(x,y,z)L_{zy,zy}L_{x,x}^k+2J(x,zy,z)L_{y,zy}L_{x,x}^k.$$ We show that the second term on each side of this equality is zero. Indeed, from the identity \eqref{a6} it follows that $J(x,zy,z)L_{x,x}^kL_{y,zy}=J(x,y,z)zL_{x,x}^kL_{y,zy}=J(zx^{2k+1},y,z)L_{y,zy}=0.$ Similarly, $J(x,zy,z)L_{y,zy}L_{x,x}^k=0.$ Therefore, $J(x,y,z)L_{x,x}^kL_{zy,zy}=J(x,y,z)L_{zy,zy}L_{x,x}^k.$ From the identities proven above and item 1) it follows that $$J(x,y,z)L_{x,x}^kD(y,z)=J(x,y,z)D(y,z)L_{x,x}^k=0,$$ as required. The remaining cases are proved similarly to the equality \eqref{2prop2}.\\ {\bf We prove the equality \eqref{1prop2} under the assumption that the $T_i$ belong to the set $T_0$ and $S_j$ belongs to the set $T$.}\\ That is, \begin{equation} J(x,y,z)T_1T_2...T_n[S_1,S_2]=0,\quad T_i \in T_0, \quad S_j \in T \tag{f} \end{equation} First suppose that none of the operators $T_i$ equals $L_{x,x}$. 
From the equalities proven above it follows that $$J(x,y,z)T_1T_2...T_n[L_{x,x},L_{y,y}]=0.$$ Apply the operator $\Delta_x^1(zy).$ We obtain $L_{x,y}\Delta_x^1(zy)=L_{zy,y},$ $L_{y,y}\Delta_x^1(zy)=0,$ $L_{z,z}\Delta_x^1(zy)=0$,\quad $L_{x,z}\Delta_x^1(zy)=L_{zy,z},$\quad $L_{y,z}\Delta_x^1(zy)=0$ and, therefore, $$J(x,y,z)T_1T_2...T_n[L_{x,zy},L_{y,y}]=0 \textrm{ for } T_i\ne L_{x,x}.$$ The proof of the identity (f) is obtained by induction on the degree of the operator $L_{x,x}$ using the operator $\Delta_{z}^1(x)$. Indeed, $L_{x,z}\Delta_{z}^1(x)=L_{x,x}$ and the remaining terms arising from the action of this operator are equal to zero by the previously proven identities. Applying to the resulting identity the corresponding operators $\Delta_{x_i}^1(x_j)$, where $x_i, x_j\in X$, we obtain the equality (f). \\ {\bf We prove all statements of this proposition without restrictions on $T_i$ and $S_j$.} We shall prove the conjunction of all three statements using induction on $l$, where $l$ is the number of operators $L_{x,zy}$ in the sequence $T_1, T_2,...,T_n$. For $l=0$ all three identities have already been proved. Assume now that all three identities hold for all $l\leq k$. We show that they hold for $l=k+1$. First, we note that the induction hypothesis for \eqref{1prop2} implies that if we have any sequence of operators from $T$ that contains not more than $k+1$ copies of $L_{x,zy}$, then we can permute these operators acting on $J(x,y,z)$. Suppose now that among $T_1, T_2,...,T_n$ there are $k+1$ copies of $L_{x,zy}$. Let $T_i=L_{x,zy}$. Replace it by the operator $L_{x,x}$. 
From the induction hypothesis: $$J(x,y,z)T_1T_2...L_{x,x}...T_n[S_1,S_2]=0.$$ Applying to this identity the operator $\Delta_{x}^1(zy)$ and taking into account the equalities $L_{x,x}\Delta_{x}^1(zy)=2L_{x,zy}$, $L_{x,zy}\Delta_{x}^1(zy)=L_{zy,zy}$, $L_{x,z}\Delta_{x}^1(zy)=2L_{z,zy}$, $L_{x,y}\Delta_{x}^1(zy)=2L_{y,zy}$, $L_{y,y}\Delta_{x}^1(zy)=L_{z,z}\Delta_{x}^1(zy)=L_{y,z}\Delta_{x}^1(zy)=0,$ and the equalities \eqref{2prop2} and \eqref{3prop2} for $l\leq k$, we obtain $$J(x,y,z)T_1T_2...L_{x,zy}...T_n[S_1,S_2]=0.$$ Similar arguments prove the equalities \eqref{2prop2} and \eqref{3prop2} for a sequence $T_1, T_2,...,T_n$ containing $k+1$ copies of $L_{x,zy}$. $\blacksquare$\\ {\bf Corollary 4.} Let $w=J(x,y,z)T_1T_2...T_n$, where $$T_i\in T=\{L_{x,x},L_{y,y},L_{z,z},L_{x,y},L_{x,z},L_{y,z},L_{x,zy}\}$$ and $k,l,m,n\in \mathbb{N}\cup \{0\}$. The following equalities hold:\\ \begin{equation}\label{b18} wL_{x,zy}=wL_{y,xz}=wL_{z,yx} \end{equation} \begin{equation}\label{b19} w[L_{x_1,x_2},R_{x_3}]=0, \quad x_i\in X \end{equation} $$J(x,y,z)L_{x,x}^kL_{y,y}^lL_{z,z}^m(xz)L_{x,xy}=$$ \begin{equation}\label{b20} =J(x,y,z)L_{x,x}^kL_{y,y}^lL_{z,z}^mxzL_{x,xy}=J(x,y,z)L_{x,x}^kL_{y,y}^lL_{z,z}^mzxL_{x,xy}=0. \end{equation} \begin{equation}\label{b21} J(x,y,z)L_{x,x}^kL_{y,y}^lL_{z,z}^mzx[R_x,L_{x,y}]=0. \end{equation} {\sc Proof.} The equality \eqref{b18}. From \eqref{2prop2}: $wL_{x,xy}=0$. Apply the operator $\Delta_{x}^1(z)$ to this equality. From \eqref{1prop2} we obtain $wL_{z,xy}+wL_{x,zy}=0$. That is, $wL_{z,yx}=wL_{x,zy}$. Similarly, the operator $\Delta_{y}^1(z)$, applied to the identity $wL_{y,yx}=0$, gives $wL_{z,yx}=wL_{y,xz}$.\\ The equality \eqref{b19}. The equalities $w[L_{x,y},R_x]=w[L_{x,y},R_y]=0,$ $w[L_{x,z},R_x]=w[L_{x,z},R_z]=0,$ $w[L_{y,z},R_y]=w[L_{y,z},R_z]=0$ follow from the identity \eqref{a7} and the equality \eqref{2prop2}. 
The equalities $w[L_{x,y},R_z]=w[L_{y,z},R_x]=w[L_{x,z},R_y]=0$ follow from the identity \eqref{a7} and from the equality \eqref{b18}.\\ The equality \eqref{b20}. From the equalities \eqref{b19} and \eqref{1prop2}: $$J(x,y,z)L_{y,y}^lL_{z,z}^mzL_{x,x}^kL_{x,y}=J(x,y,z)L_{x,y}L_{y,y}^lL_{z,z}^mzL_{x,x}^k.$$ Apply to this identity the operator $\Delta_{y}^2(xy)$. From \eqref{1prop2} and \eqref{2prop2} we have: $$J(x,xy,z)L_{y,y}^{l-i}L_{y,xy}L_{y,y}^{i-1}L_{z,z}^mzL_{x,x}^kL_{x,y}=J(x,y,xz)L_{x,y}L_{y,y}^{l-i}L_{y,xy}L_{y,y}^{i-1}L_{z,z}^mzL_{x,x}^k=0,$$ $$J(x,y,z)L_{y,y}^{l-i}L_{y,xy}L_{y,y}^{i-1}L_{z,z}^mzL_{x,x}^kL_{x,xy}=J(x,y,z)L_{x,xy}L_{y,y}^{l-i}L_{y,xy}L_{y,y}^{i-1}L_{z,z}^mzL_{x,x}^k=0,$$ $$J(x,y,z)L_{y,y}^{l-i}L_{xy,xy}L_{y,y}^{i-1}L_{z,z}^mzL_{x,x}^kL_{x,y}=J(x,y,z)L_{x,y}L_{y,y}^{l-i}L_{xy,xy}L_{y,y}^{i-1}L_{z,z}^mzL_{x,x}^k,$$ $$J(x,xy,z)L_{x,xy}L_{y,y}^{l}L_{z,z}^mzL_{x,x}^k=0.$$ Therefore, the application of the operator gives $$ J(x,xy,z)L_{y,y}^lL_{z,z}^mzL_{x,x}^kL_{x,xy}=0. $$ Furthermore, $$0=J(x,xy,z)L_{y,y}^lL_{z,z}^mzL_{x,x}^kL_{x,xy}=J(x,y,z)L_{y,y}^lL_{z,z}^mxzL_{x,x}^kL_{x,xy}=$$ $$-J(x,y,z)L_{y,y}^lL_{z,z}^mzxL_{x,x}^kL_{x,xy}+2J(x,y,z)L_{y,y}^lL_{z,z}^mL_{x,z}L_{x,x}^kL_{x,xy}=$$ $$-J(x,y,z)L_{y,y}^lL_{z,z}^mzxL_{x,x}^kL_{x,xy}=-J(x,y,z)L_{y,y}^lL_{z,z}^mzxx^{2k}L_{x,xy}=$$ $$-J(x,y,z)L_{y,y}^lL_{z,z}^mzx^{2k}xL_{x,xy}=-J(x,y,z)L_{y,y}^lL_{z,z}^mL_{x,x}^kzxL_{x,xy}=$$ $$J(x,y,z)L_{y,y}^lL_{z,z}^mL_{x,x}^kxzL_{x,xy}-2J(x,y,z)L_{y,y}^lL_{z,z}^mL_{x,x}^kL_{x,z}L_{x,xy}=$$ $$J(x,y,z)L_{y,y}^lL_{z,z}^mL_{x,x}^kxzL_{x,xy}.$$ That is, \begin{equation} J(x,y,z)L_{y,y}^lL_{z,z}^mL_{x,x}^kxzL_{x,xy}=-J(x,y,z)L_{y,y}^lL_{z,z}^mL_{x,x}^kzxL_{x,xy}=0. \tag {a} \end{equation} We now prove the equality $$J(x,y,z)L_{x,x}^kL_{y,y}^lL_{z,z}^m(xz)L_{x,xy}=0.$$ 1) $m>0$. 
Then $$J(x,y,z)L_{y,y}^lL_{z,z}^mL_{x,x}^k(xz)L_{x,xy}=-J(x,zy^{2l+1}z^{2m-1}x^{2k},z)(xz)L_{x,xy}=$$ $$-1/2J(x,zy^{2l+1}z^{2m-1}x^{2k},z)zxL_{x,xy}+1/2J(x,zy^{2l+1}z^{2m-1}x^{2k},z)xzL_{x,xy}=$$ $$1/2J(x,y,z)L_{y,y}^lL_{z,z}^mL_{x,x}^kzxL_{x,xy}-1/2J(x,y,z)L_{y,y}^lL_{z,z}^mL_{x,x}^kxzL_{x,xy}=0.$$ 2) $k>0$. It is similar to case 1).\\ 3) $k=l=m=0$. It is easy to see from \eqref{a8} that $$J(x,y,z)(xz)L_{x,xy}=1/2J(x,y,z)zxL_{x,xy}-1/2J(x,y,z)xzL_{x,xy}=$$ $$-J(x,y,z)xzL_{x,xy}=-J(x,xy,z)zL_{x,xy}=-J(x,xy,z)L_{x,xy}z=0.$$ 4) $k=m=0$, $l\ne 0$. From \eqref{a8} and \eqref{2prop2}: $$J(x,y,z)(zx)L_{y,y}^lL_{x,xy}=1/2J(x,y,z)xzL_{y,y}^lL_{x,xy}-1/2J(x,y,z)zxL_{y,y}^lL_{x,xy}=$$ \begin{equation} J(x,y,z)xzL_{y,y}^lL_{x,xy}.\tag{b} \end{equation} Applying the operator $\Delta_{z}^1(zx)$ to the identity $$J(x,y,z)L_{y,y}^lz=J(x,y,z)zL_{y,y}^l$$ we obtain $$J(x,y,zx)L_{y,y}^lz+J(x,y,z)L_{y,y}^l(zx)=J(x,y,z)(zx)L_{y,y}^l+J(x,y,zx)zL_{y,y}^l$$ and $$-J(x,y,z)L_{y,y}^lxzL_{x,xy}+J(x,y,z)L_{y,y}^l(zx)L_{x,xy}=$$ $$J(x,y,z)(zx)L_{y,y}^lL_{x,xy}-J(x,y,z)xzL_{y,y}^lL_{x,xy}.$$ From the identities (a) and (b) we obtain: $$J(x,y,z)L_{y,y}^l(zx)L_{x,xy}=0.$$ The identity \eqref{b21} follows from \eqref{b20} and \eqref{a7}. $\blacksquare$\\ {\bf Lemma 4.} In any Malcev algebra $M$ the following identities hold:\\ \begin{equation}\label{b24} t(yx)z+t(zy)x+t(xz)y=tL_{x,zy}+tL_{y,xz}+tL_{z,yx}-1/2J(x,y,z)t, \end{equation} $$G(t,x,y,z)+J(x,y,z)t=2/3(tyzx-txzy)+2/3(tzxy-tyxz)+$$ \begin{equation}\label{b25} +2/3(txyz-tzyx)+2/3tL_{x,zy}+2/3tL_{y,xz}+2/3tL_{z,yx}. \end{equation} \begin{equation}\label{b22} G(tx,x,y,z)=G(t,x,y,z)x-2J(x,y,z)(tx), \end{equation} \begin{equation}\label{b23} G(tx^2,x,y,z)=G(t,x,y,z)x^2+2J(x,t,z)L_{x,xy}+2J(x,y,t)L_{x,xz}. \end{equation} {\sc Proof.} The equality \eqref{b24}. 
From the identity \eqref{a1} we have: $$t(xy)z+t(yz)x+t(zx)y=zxyt+tzxy-tyzx-xt(yz)+xyzt+txyz-tzxy-yt(zx)+$$ $$yzxt+tyzx-txyz-zt(xy)=ty(zx)+tx(yz)+tz(xy)+J(x,y,z)t.$$ That is, $$t(xy)z+t(yz)x+t(zx)y=ty(zx)+tx(yz)+tz(xy)+J(x,y,z)t.$$ Hence \eqref{b24} follows.\\ The equality \eqref{b25}. From the identity \eqref{b24} and the identities \eqref{a1} and \eqref{a3} we obtain: $$3/2G(t,x,y,z)+3/2J(x,y,z)t=J(t,y,z)x+J(x,t,z)y+J(x,y,t)z-J(x,y,z)t=$$ $$tyzx+ztyx+yztx+xtzy+zxty+tzxy+xytz+txyz+ytxz=$$ $$tyzx-txzy+tzxy-tyxz+txyz-tzyx-t(xy)z+t(yz)x+t(zx)y=$$ $$tyzx-txzy+tzxy-tyxz+txyz-tzyx+tL_{x,zy}+tL_{y,xz}+tL_{z,yx}.$$ The equality \eqref{b22}. From the identities \eqref{a3}, \eqref{a4} and the definition of the function $G$: $$3/2G(tx,x,y,z)=J(tx,y,z)x+J(x,tx,z)y+J(x,y,tx)z-J(x,y,z)(tx)=$$ $$J(tx,y,z)x+J(yz,x,tx)-G(y,z,x,tx)-J(x,y,z)(tx)=$$ $$J(tx,y,z)x-J(yz,x,t)x+G(tx,x,y,z)-J(x,y,z)(tx)=$$ $$G(tx,x,y,z)-J(x,y,z)(tx)+1/2G(t,x,y,z)x,$$ whence the desired identity follows.\\ The identity \eqref{b23}. From the identities \eqref{a8}, \eqref{a3}, \eqref{a4} and the definition of the function $G$: $$(tx)J(x,y,z)=-1/2J(x,t,z)xy-1/2J(t,x,y)xz-J(y,z,tx)x-3/2J(x,t,zy)x=$$ $$1/2J(x,t,z)yx+1/2J(t,x,y)zx-J(y,z,tx)x-3/2J(x,t,zy)x-J(x,t,z)L_{x,y}-J(t,x,y)L_{x,z}=$$ $$1/2(J(x,t,z)y+J(t,x,y)z+J(t,x,zy))x-1/2G(t,x,y,z)x-J(x,t,z)L_{x,y}-J(t,x,y)L_{x,z}=$$ $$1/2(G(y,z,t,x)-2J(t,x,yz))x-1/2G(t,x,y,z)x-J(x,t,z)L_{x,y}-J(t,x,y)L_{x,z}=$$ $$-J(x,t,z)L_{x,y}-J(t,x,y)L_{x,z}-J(t,x,yz)x.$$ Further, from the identity \eqref{b22} we have: $$G(tx,x,y,z)=G(t,x,y,z)x-2J(x,y,z)(tx)=$$ $$G(t,x,y,z)x-2J(x,t,z)L_{x,y}-2J(x,y,t)L_{x,z}-2J(t,x,yz)x.$$ Thus, from this identity and the identity \eqref{a7} we obtain: $$G(tx^2,x,y,z)=G(tx,x,y,z)x-2J(x,tx,z)L_{x,y}-2J(tx,x,y)L_{x,z}-2J(tx,x,yz)x=$$ $$G(t,x,y,z)x^2-2J(x,t,z)L_{x,y}x-2J(x,y,t)L_{x,z}x-2J(t,x,yz)x^2+2J(x,t,z)xL_{x,y}+$$ $$+2J(t,x,y)xL_{x,z}+2J(t,x,yz)x^2=G(t,x,y,z)x^2+2J(x,t,z)L_{x,xy}+2J(x,y,t)L_{x,xz}.$$ This implies \eqref{b23}.
$\blacksquare$ \\ {\bf Lemma 5.} Let $T=\{L_{x,x},L_{y,y},L_{z,z},L_{x,y},L_{x,z},L_{y,z},L_{x,zy}\}$, $T_i \in T$ and $n\in \mathbb{N}\cup \{0\}.$ In the algebra $M_3$ the following relations hold:\\ \begin{equation}\label{b26} J(x,y,z)T_1T_2...T_nJ(x,y,z)=0, \end{equation} \begin{equation}\label{b27} J(x,y,z)T_1T_2...T_n(J(x,y,z)L_{x,x})=0, \end{equation} \begin{equation}\label{b28} J(x,y,z)T_1T_2...T_nx(J(x,y,z)x)=0. \end{equation} {\sc Proof.} We prove \eqref{b26}. We first prove that $$J(x,y,z)L_{x,x}^kL_{y,y}^lL_{z,z}^mJ(x,y,z)=0.$$ We denote $J(x,y,z)L_{x,x}^kL_{y,y}^lL_{z,z}^m$ by $v$. From the identity \eqref{b23}: $$G(vL_{x,x},x,y,z)-G(v,x,y,z)L_{x,x}=2J(x,v,z)L_{x,xy}+2J(x,y,v)L_{x,xz}.$$ If $k+m\ne 0$ and $k+l\ne 0$, then from the identities \eqref{b20} and \eqref{a5} we obtain: $$G(vL_{x,x},x,y,z)-G(v,x,y,z)L_{x,x}=$$ $$2J(x,J(x,y,z)L_{x,x}^kL_{y,y}^lL_{z,z}^m,z)L_{x,xy}+2J(x,y,J(x,y,z)L_{x,x}^kL_{y,y}^lL_{z,z}^m)L_{x,xz}=$$ $$-2J(x,J(x,xy^{2l+1}x^{2k-1}z^{2m},z),z)L_{x,xy}-2J(x,y,J(x,y,xz^{2m+1}x^{2k-1}y^{2l}))L_{x,xz}=$$ $$6J(x,y,z)L_{x,x}^kL_{y,y}^lL_{z,z}^m(xz)L_{x,xy}+6J(x,y,z)L_{x,x}^kL_{y,y}^lL_{z,z}^m(xy)L_{x,xz}=0.$$ Suppose, for example, that $k+l=0$, that is, $v=J(x,y,z)L_{z,z}^m.$ It is easy to see that $$J(x,y,J(x,y,z)L_{y,y}^m)L_{x,xz}=J(x,y,J(x,y,zy^{2m}))L_{x,xz}=3J(x,y,z)L_{y,y}^m(xy)L_{x,xz}=0.$$ That is, $J(x,y,J(x,y,z)L_{y,y}^m)L_{x,xz}=0.$ Applying the operator $\Delta_{y}^{2m}(z)$ to this identity we obtain $J(x,y,J(x,y,z)L_{z,z}^m)L_{x,xz}=0$. Further, $$G(vL_{x,x},x,y,z)-G(v,x,y,z)L_{x,x}=2J(x,v,z)L_{x,xy}+2J(x,y,v)L_{x,xz}=0.$$ Thus, we have $G(vL_{x,x},x,y,z)-G(v,x,y,z)L_{x,x}=0$ for all $v$ of the form $J(x,y,z)L_{x,x}^kL_{y,y}^lL_{z,z}^m$.
Using this equality and induction on $k+l+m$ we obtain: \begin{equation} J(x,y,z)L_{x,x}^kL_{y,y}^lL_{z,z}^mG=J(x,y,z)GL_{x,x}^kL_{y,y}^lL_{z,z}^m.\tag{a} \end{equation} We have: $$vL_{z,z}xyz=vzzxyz=-vzxzyz+2vzL_{x,z}yz=vzxyzz-2vzxL_{y,z}z+2vzL_{x,z}yz=$$ $$-vxzyzz+2vL_{x,z}yzz-2vzxL_{y,z}z+2vzL_{x,z}yz=$$ $$vxyzzz-2vxL_{y,z}zz+2vL_{x,z}yzz-2vzxL_{y,z}z+2vzL_{x,z}yz=$$ $$vxyzL_{z,z}-2vxL_{z,z}L_{y,z}+2vL_{x,z}yL_{z,z}-2vzxL_{y,z}z+2vzL_{x,z}yz=$$ $$vxyzL_{z,z}-2vxL_{z,z}L_{y,z}+2vL_{x,z}yL_{z,z}+2vxzL_{y,z}z-2vL_{x,z}zzy-$$ $$-4vL_{x,z}L_{y,z}z+4vzL_{x,z}L_{y,z}=$$ $$vxyzL_{z,z}-2vxL_{z,z}L_{y,z}+2vxzL_{y,z}z.$$ From \eqref{b21} it follows that the sum of the last two terms is 0. That is, $$vL_{z,z}xyz=vxyzL_{z,z}.$$ Similarly, the equalities $vL_{y,y}xyz=vxyzL_{y,y}$ and $vL_{x,x}xyz=vxyzL_{x,x}$ can be verified. Therefore, \begin{equation} J(x,y,z)L_{x,x}^kL_{y,y}^lL_{z,z}^mxyz=J(x,y,z)xyzL_{x,x}^kL_{y,y}^lL_{z,z}^m.\tag{b} \end{equation} Recall that we agreed to denote the action of the operator $t\mapsto G(t,x,y,z)$ by $tG$ and the composition of $n$ such operators by $tG^n$. From the identities \eqref{b25}, (a), (b) and \eqref{1prop2} for $v=J(x,y,z)L_{x,x}^kL_{y,y}^lL_{z,z}^m$ we have: $$vJ(x,y,z)=$$ $$vG+2/3(-vyzx+vxzy-vzxy+vyxz-vxyz+vzyx-vL_{x,zy}-vL_{z,yx}-vL_{y,xz})=$$ $$J(x,y,z)G L_{x,x}^kL_{y,y}^lL_{z,z}^m-2/3\big(J(x,y,z)yzx+J(x,y,z)xzy-J(x,y,z)zxy+$$ $$J(x,y,z)yxz-J(x,y,z)xyz+J(x,y,z)zyx+J(x,y,z)L_{x,zy}+J(x,y,z)L_{z,yx}+$$ $$+J(x,y,z)L_{y,xz}\big) L_{x,x}^kL_{y,y}^lL_{z,z}^m=J(x,y,z)J(x,y,z)L_{x,x}^kL_{y,y}^lL_{z,z}^m=0.$$ That is, $$J(x,y,z)L_{x,x}^kL_{y,y}^lL_{z,z}^mJ(x,y,z)=0.$$ We prove the identity $$J(x,y,z)L_{x,y}^nL_{x,x}^kL_{y,y}^lL_{z,z}^mJ(x,y,z)=0$$ by induction on $n$. For $n=0$ it is already proven.
Suppose that $$J(x,y,z)L_{x,y}^iL_{x,x}^kL_{y,y}^lL_{z,z}^mJ(x,y,z)=0,\quad i\leq n.$$ Applying the operator $\Delta_{x}^1(y)$ to this identity and using the induction hypothesis we obtain the required identity.\\ From this identity by induction on the degree of the operator $L_{y,z}$ using the operator $\Delta_{z}^1(y)$ we obtain: $$J(x,y,z)L_{y,z}^pL_{x,y}^nL_{x,x}^kL_{y,y}^lL_{z,z}^mJ(x,y,z)=0.$$ From this identity by induction on the degree of the operator $L_{x,z}$ using the operator $\Delta_{z}^1(x)$ we obtain: $$J(x,y,z)L_{x,z}^qL_{y,z}^pL_{x,y}^nL_{x,x}^kL_{y,y}^lL_{z,z}^mJ(x,y,z)=0.$$ From this identity by induction on the degree of the operator $L_{x,zy}$ using the operator $\Delta_{x}^1(zy)$ and the equalities \eqref{2prop2} and \eqref{3prop2} we obtain: $$J(x,y,z)L_{x,zy}^sL_{x,z}^qL_{y,z}^pL_{x,y}^nL_{x,x}^kL_{y,y}^lL_{z,z}^mJ(x,y,z)=0,$$ for all $k,l,m,n,p,q,s\in \mathbb{N}\cup \{0\}.$\\ Now we prove \eqref{b27}. First, we prove that $$J(x,y,z)L_{x,x}^kL_{y,y}^lL_{z,z}^m(J(x,y,z)L_{x,x})=0.$$ From \eqref{a7} and \eqref{2prop2} we obtain: $$J(x,y,z)L_{x,x}^kL_{y,y}^iL_{y,yx^2}=-J(x,y,z)L_{x,x}^kL_{y,y}^iL_{yx,yx}-J(x,y,z)L_{x,x}^kL_{y,y}^ixL_{y,yx}+$$ $$+J(x,y,z)L_{x,x}^kL_{y,y}^iL_{y,yx}x=-J(x,y,z)L_{x,x}^kL_{y,y}^iL_{yx,yx}+J(x,y,zL_{x,x}^kL_{y,y}^ix)L_{y,yx}=$$ $$-J(x,y,z)L_{x,x}^kL_{y,y}^iL_{yx,yx}.$$ Therefore, the application of the operator $\Delta_{y}^1(yx^2)$ together with \eqref{2prop2} and the equality \eqref{b26} gives: $$J(x,y,z)L_{x,x}^kL_{y,y}^lL_{z,z}^m(J(x,y,z)L_{x,x})=0.$$ Induction on the degree of the operator $L_{x,y}$ and application of $\Delta_{y}^1(x)$ gives: $$J(x,y,z)L_{x,y}^nL_{x,x}^kL_{y,y}^lL_{z,z}^m(J(x,y,z)L_{x,x})=0.$$ Induction on the degree of the operator $L_{y,z}$ and application of $\Delta_{z}^1(y)$ gives: $$J(x,y,z)L_{y,z}^pL_{x,y}^nL_{x,x}^kL_{y,y}^lL_{z,z}^m(J(x,y,z)L_{x,x})=0.$$ Induction on the degree of the operator $L_{x,z}$ and application of $\Delta_{z}^1(x)$ gives:
$$J(x,y,z)L_{x,z}^qL_{y,z}^pL_{x,y}^nL_{x,x}^kL_{y,y}^lL_{z,z}^m(J(x,y,z)L_{x,x})=0.$$ Induction on the degree of the operator $L_{y,xz}$, application of $\Delta_{y}^1(xz)$ and the equality \eqref{b18} give the equality \eqref{b27}.\\ We now prove \eqref{b28}. The following equality is obvious: $L_{z,yx}\Delta_{z}^1(zx)=L_{zx,yx}$. From \eqref{3prop2}: $$J(x,y,z)T_1T_2...T_iL_{zx,zx}=J(x,y,z)T_1T_2...T_iL_{x,x}L_{z,z}-J(x,y,z)T_1T_2...T_iL_{x,z}^2,$$ for all $T_j\in T, i\in \mathbb{N}\cup \{0\}.$ Applying the operator $\Delta_{z}^1(y)$ to this equality we obtain: $$J(x,y,z)T_1T_2...T_iL_{zx,yx}=J(x,y,z)T_1T_2...T_iL_{x,x}L_{z,y}-J(x,y,z)T_1T_2...T_iL_{x,z}L_{x,y}.$$ From \eqref{b26} and \eqref{1prop2} we obtain the identity: $$J(x,y,z)T_1T_2...T_mJ(x,y,z)=0,$$ where $T_i\in T\cup \{L_{x,zx}, L_{z,zx} \}.$ The application of the operator $\Delta_{z}^1(zx)$ to this identity gives: $$J(x,y,zx)T_1T_2...T_mJ(x,y,z)+J(x,y,z)T_1T_2...T_mJ(x,y,zx)=0,$$ for all $T_i\in T\cup \{L_{x,zx}, L_{z,zx} \}.$ Now write \eqref{b26} as follows: $$J(x,y,z)L_{z,yx}^sL_{x,z}^qL_{y,z}^pL_{x,y}^nL_{x,x}^kL_{y,y}^lL_{z,z}^mJ(x,y,z)=0,$$ for any $k,l,m,n,p,q,s\in \mathbb{N}\cup \{0\}.$\\ Applying to it the operator $\Delta_{z}^2(zx)$, grouping corresponding terms, and considering the previous identity, we obtain \eqref{b28}. $\blacksquare$\\ {\bf Lemma 6.} In the algebra $M_3$ the following relations hold:\\ \begin{equation}\label{b29} J(y,z,x(R_{zy}R_x)^n)=J(y,z,x)L_{x,zy}^n, \end{equation} \begin{equation}\label{b30} J(x,y,z)G^n=6^nJ(x,y,z)L_{x,zy}^n. \end{equation} {\sc Proof.} We first prove the identity $$J(x,x(R_{zy}R_x)^nR_{zy},y)=J(x,y,z)yxL_{x,zy}^n$$ by induction on $n$. For $n=0$ this identity is obvious. Suppose that it holds for $n=k$; we prove it for $n=k+1$.
Applying the operator $\Delta_{y}^1(zy)$ to the identity $$J(x,txy,y)=J(x,t,y)xy,$$ we obtain $$J(x,tx(zy),y)+J(x,txy,zy)=J(x,t,zy)xy+J(x,t,y)x(zy).$$ If $t=x(R_{zy}R_x)^kR_{zy},$ then $$J(x,tx(zy),y)=-J(x,txy,zy)+J(x,t,y)x(zy).$$ From the identity \eqref{a9} we obtain: $$J(x,tx(zy),y)=-J(x,txy,zy)+J(x,t,y)x(zy)=1/2J(x,t,y)[R_x,R_{zy}]+J(x,t,y)x(zy)=$$ $$J(x,t,y)L_{x,zy}=J(x,y,z)yxL_{x,zy}^{k+1}.$$ We now prove the identity \eqref{b29} by induction on $n$. Let $n=1$. Apply the operator $\Delta_{y}^1(zy)$ to the identity $$J(y,z,xyx)=J(y,z,x)L_{x,y}.$$ We have $J(y,z,x(zy)x)=J(y,z,x)L_{x,zy}$. Suppose the identity holds for $n=k$. We prove it for $n=k+1$. Substituting $a$ by $y$, $b$ by $z$ and $c$ by $x$ in the identity \eqref{a9}, applying the operator $\Delta_{y}^1(zy)$ to it and substituting $t$ by $x(R_{zy}R_x)^k$, we have $$J(z,y,t(zy)x)-J(z,y,t)L_{x,zy}=$$ $$-J(z,y,tyx)z+J(z,y,t)zL_{x,y}+1/2J(zy,t,x)[R_z,R_y]+1/2J(y,t,x)[R_z,R_{zy}]=$$ $$-1/2J(y,t,x)yz^2-1/2J(y,t,x)zyz-J(z,y,t)L_{x,y}z+$$ $$+J(z,y,t)zL_{x,y}+1/2J(zy,t,x)[R_z,R_y]+1/2J(y,t,x)[R_z,R_{zy}].$$ Let $t=x(R_{zy}R_x)^k$. From \eqref{b19} and the induction hypothesis: $$J(z,y,t(zy)x)-J(z,y,t)L_{x,zy}=-1/2J(y,t,x)yz^2+1/2J(y,t,x)zyz+1/2J(y,t,x)[R_z,R_{zy}].$$ From the proven identity, \eqref{2prop2}, and the identity \eqref{a14}: $$J(z,y,t(zy)x)-J(z,y,t)L_{x,zy}=1/2J(zy,y,x)xL_{x,zy}^kxyz^2-1/2J(zy,y,x)xL_{x,zy}^kxzyz+$$ $$+1/2J(zy,y,x)xL_{x,zy}^kx[R_z,R_{zy}]=1/2J(zy,y,x)L_{x,zy}^kxxyz^2-1/2J(zy,y,x)L_{x,zy}^kxxzyz+$$ $$+1/2J(zy,y,x)L_{x,zy}^kxx[R_z,R_{zy}]=1/2J(zy,y,x(zy)x^3(R_{zy}R_x)^{k-1})yz^2-$$ $$-1/2J(zy,y,x(zy)x^3(R_{zy}R_x)^{k-1})zyz-J(zy,y,x(zy)x^3(R_{zy}R_x)^{k-1})(zy)z=0.$$ We now prove the identity \eqref{b30} by induction on $n$. Let $n=1$.
From \eqref{a3}, \eqref{b24} and \eqref{b18} we obtain: $$G(J(x,y,z),x,y,z)=2/3J(J(x,y,z),y,z)x+2/3J(x,J(x,y,z),z)y+$$ $$+2/3J(x,y,J(x,y,z))z-2/3J(x,y,z)J(x,y,z)=$$ $$2J(x,y,z)(zy)x+2J(x,y,z)(xz)y+2J(x,y,z)(yx)z=$$ $$2J(x,y,z)L_{x,zy}+2J(x,y,z)L_{y,xz}+2J(x,y,z)L_{z,yx}=6J(x,y,z)L_{x,zy}.$$ Suppose the identity holds for $n=k$. We prove it for $n=k+1$. From the identities \eqref{a3}, \eqref{b18}, \eqref{b26} and \eqref{b29}: $$6^kG(J(x,y,z)L_{x,zy}^k,x,y,z)=2/3\cdot 6^k(J(J(x,y,z)L_{x,zy}^k,y,z)x+$$ $$J(x,J(x,y,z)L_{x,zy}^k,z)y+J(x,y,J(x,y,z)L_{x,zy}^k)z-J(x,y,z)L_{x,zy}^kJ(x,y,z))=$$ $$2/3\cdot 6^k(J(J(x(R_{zy}R_x)^k,y,z),y,z)x+J(x,J(x,y(R_{xz}R_y)^k,z),z)y+J(x,y,J(x,y,z(R_{yx}R_z)^k))z)=$$ $$2\cdot 6^k(J(x(R_{zy}R_x)^k,y,z)(zy)x+J(x,y(R_{xz}R_y)^k,z)(xz)y+J(x,y,z(R_{yx}R_z)^k)(yx)z)=$$ $$2\cdot 6^k(J(x(R_{zy}R_x)^k,y,z)(zy)x+J(x(R_{zy}R_x)^k,y,z)(xz)y+J(x(R_{zy}R_x)^k,y,z)(yx)z)=$$ $$2\cdot 6^k(J(x(R_{zy}R_x)^k,y,z)L_{x,zy}+J(x(R_{zy}R_x)^k,y,z)L_{y,xz}+J(x(R_{zy}R_x)^k,y,z)L_{z,yx})=$$ $$2\cdot 6^k(J(x(R_{zy}R_x)^k,y,z)L_{x,zy}+J(x(R_{zy}R_x)^k,y,z)L_{x,zy}+J(x(R_{zy}R_x)^k,y,z)L_{x,zy})=$$ $$6^{k+1}J(x,y,z)L_{x,zy}^{k+1}.\blacksquare $$\\ {\bf Lemma 7.} The set $$\textbf{U}\cup \textbf{U}x\cup \textbf{U}y\cup \textbf{U}z\cup \textbf{U}xy\cup \textbf{U}xz\cup \textbf{U}yz$$ spans the linear space $J(M_3,M_3,M_3)$ over the field $F.$\\ {\sc Proof.} We first prove that if $$w\in \textbf{U}=\{J(x,y,z) G^k L_{x,x}^l L_{y,y}^m L_{z,z}^n L_{x,y}^pL_{x,z}^q L_{y,z}^r|k,l,m,n,p,q,r \in \mathbb{N}\cup \{0\}\},$$ then \begin{equation}\label{b31} wG=6wL_{x,zy}=3w(xyz-zyx)=3w(yzx-xzy)=3w(zxy-yxz), \end{equation} \begin{equation}\label{b32} wxyz=1/6wG+wL_{y,z}x-wL_{x,z}y+wL_{x,y}z.
\end{equation} From the identity \eqref{b22}: $$G(wL_{x,x},x,y,z)-G(w,x,y,z)L_{x,x}=-2J(x,y,z)(wx)x-2J(x,y,z)(wL_{x,x}).$$ On the other hand, from the identity \eqref{a1}: $$(wx)(J(x,y,z)x)=wJ(x,y,z)xx+xwJ(x,y,z)x+J(x,y,z)xxw.$$ From \eqref{b26}, \eqref{b27}, \eqref{b28}: $$wxJ(x,y,z)x=-(wx)(J(x,y,z)x)=0.$$ Thus, $$G(wL_{x,x},x,y,z)-G(w,x,y,z)L_{x,x}=0$$ and the operators $\Delta_{x_1}^1(x_2)$, $x_i\in X$, give the identities: $$G(wL_{x_1,x_2},x,y,z)-G(w,x,y,z)L_{x_1,x_2}=0,$$ for all $x_i\in X.$ Since $w=J(x,y,z)T_1T_2...T_n$, where $$T_i\in T=\{L_{x,x},L_{y,y},L_{z,z}, L_{x,y}, L_{x,z}, L_{y,z},L_{x,zy}\}$$ and $n\in \mathbb{N}\cup \{0\}$, the equality \eqref{1prop2} implies that $w$ can be written in the form $$w=J(x,y,z)L_{x,zy}^kS_1S_2...S_{n-k},$$ where $S_i\in T\backslash \{L_{x,zy}\}.$ Therefore, using the proven identity and the identities \eqref{b30} and \eqref{b18} we obtain: $$G(w,x,y,z)=G(J(x,y,z)L_{x,zy}^kS_1S_2...S_{n-k},x,y,z)=$$ $$G(J(x,y,z)L_{x,zy}^k,x,y,z)S_1S_2...S_{n-k}=$$ $$6J(x,y,z)L_{x,zy}^{k+1}S_1S_2...S_{n-k}=6wL_{x,zy}=6wL_{y,xz}=6wL_{z,yx}.$$ From the identities \eqref{b25}, \eqref{b18}, \eqref{b26} and \eqref{b19}: $$G(w,x,y,z)=2/3(wyzx-wxzy)+2/3(wzxy-wyxz)+2/3(wxyz-wzyx)+$$ $$+2/3wL_{x,zy}+2/3wL_{y,xz}+2/3wL_{z,yx}-J(x,y,z)w=$$ $$2/3(wyzx-wxzy)+2/3(wzxy-wyxz)+2/3(wxyz-wzyx)+2wL_{z,yx}=$$ $$2/3(wyzx-wxzy)+2/3(wzxy-wyxz)+2/3(wxyz-wzyx)+1/3wG.$$ That is, $$wG=wxyz-wzyx+wyzx-wxzy+wzxy-wyxz=wxyz-wzyx-wyzx+2wL_{y,z}x+$$ $$+wxyz-2wxL_{y,z}-wzyx+2wzL_{y,z}+wxyz-2wL_{x,y}z=3wxyz-3wzyx.$$ The remaining equalities in \eqref{b31} can be proved similarly. Further, from \eqref{b31} and \eqref{b19}: $$wxyz=1/3wG+wzyx=1/3wG-wzxy+2wzL_{x,y}=$$ $$1/3wG+wxzy-2wL_{x,z}y+2wzL_{x,y}=1/3wG-wxyz+2wL_{y,z}x-2wL_{x,z}y+2wL_{x,y}z,$$ which implies \eqref{b32}.\\ Let $w\in J(M_3,M_3,M_3)$.
Proposition 1 implies that there exist $\alpha_i\in F$ such that $$w=\sum_{i}\alpha_iJ(x,y,z)x_{i1}x_{i2}...x_{ik_i},\quad x_{ij}\in X,\ k_i\in \mathbb{N}\cup \{0\}.$$ Therefore, it suffices to show that the polynomials of the form $J(x,y,z)x_{1}x_{2}...x_{k}$, $x_{i}\in X$, where $k\in\mathbb{N}\cup\{0\}$, are linear combinations of the polynomials from the set $$\textbf{U}\cup \textbf{U}x\cup \textbf{U}y\cup \textbf{U}z\cup \textbf{U}xy\cup \textbf{U}xz\cup \textbf{U}yz.$$ The proof is by an obvious induction on $k$, using the identities \eqref{b19} and \eqref{b32}.$\blacksquare$\\ {\sc Proof of the Theorem.} We prove the speciality of the algebra $M_3$; that is, we show that the free Malcev algebra on the free generators $X$ is isomorphic to the subalgebra of the algebra $(Alt[X])^{-}$ generated by the set $X$. Let $f$ be the canonical homomorphism $f:M_3 \longrightarrow (Alt[X])^{-}$. From Lemma 7 it follows that it suffices to show that the set $f(\textbf{U}\cup \textbf{U}x\cup \textbf{U}y\cup \textbf{U}z\cup \textbf{U}xy\cup \textbf{U}xz\cup \textbf{U}yz)$ is linearly independent in $Alt[X]$. We first prove the linear independence of the set $f(\textbf{U})$. As in~\cite{Il} we define in $Alt[X]$ the following subsets: $$ W_0=\{(x,y,z)(L^+_{x,y})^{n_1}(L^+_{y,z})^{n_2}(L^+_{z,x})^{n_3}(L^+_{x,x})^{n_4}(L^+_{y,y})^{n_5}(L^+_{z,z})^{n_6} (L^+_{x,[y,z]})^{n_7}\mid n_i\in \mathbb{N}\cup \{0\}\},$$ $$W_1=\{[w,a]\mid w\in W_0, a\in X\},$$ $$W_2=\{(w,x,y), (w,y,z), (w,z,x)\mid w\in W_0\},$$ $$W=W_0\cup W_1\cup W_2,$$ $$W'=\{w\circ (x,y,z)\mid w\in W_0\}.$$ We prove that for each $u$ from $\textbf{U}$ there exists $\alpha\in F$, $\alpha\neq 0$, such that $f(u)=\alpha w$, where $w\in W$.
Using the equality \eqref{1prop2} we can write $u$ in the form $$u=J(x,y,z)T_1...T_{n}T'_1...T'_{m},$$ where $T_i\in \{L_{x,y}, L_{x,z}, L_{y,z}, L_{x,zy} \}$ and $T'_j\in \{L_{x,x}, L_{y,y}, L_{z,z}\}.$ Denoting $J(x,y,z)T_1...T_{n}$ by $u_0$ we have $u=u_0T'_1...T'_{m}.$ First we prove the assertion for $u_0=J(x,y,z)T_1...T_{n}$ by induction on $n$. For $n=0$ it is obvious. Assume that the assertion is proved for $u=J(x,y,z)T_1...T_{n-1}$ and that $T_n=L_{x,y}.$ It is easy to show that then $$f: uL_{x,y}\longmapsto 1/2([[f(u),x],y]+[[f(u),y],x])=$$ $$1/2(f(u)xy-xf(u)y-y(f(u)x)+y(xf(u))+f(u)yx-yf(u)x-x(f(u)y)+x(yf(u)))=$$ $$-2f(u)L^+_{x,y}-2f(u)L^+_{y,x}.$$ The equalities (8), (9) and (24) of~\cite{Il} can be written in the following form $$(a,b,a)^+=0,$$ $$aL^+_{b,c}-aL^+_{c,b}=(b,a,c)^+,$$ $$(x_i,(x,y,z)T^+_1...T^+_n,x_j)^+=0,$$ for $x_i,x_j \in X$ and $T^+_s\in \{L^+_{x,y}, L^+_{x,z}, L^+_{y,z}, L^+_{x,zy} \},$ where $(a,b,c)^+$ denotes $aL^+_{b,c}.$ From these relations and from the induction hypothesis it follows that $f(u)L^+_{x,y}=f(u)L^+_{y,x}$. Therefore, $$f: uL_{x,y}\longmapsto -4f(u)L^+_{x,y}.$$ Similarly, $$f: uL_{x,zy}\longmapsto 4f(u)L^+_{x,[y,z]}.$$ From \eqref{b30} we obtain $$f: uG\longmapsto 24f(u)L^+_{x,[y,z]}.$$ Finally, a direct calculation gives, for any $t$ from $M_3$: $$f: tL_{x,x}\longmapsto [[f(t),x],x]=x(xf(t))+f(t)xx-xf(t)x-x(f(t)x)=$$ $$-4(f(t)\circ x\circ x-f(t)\circ (x\circ x))=-4f(t)L^+_{x,x}.$$ That is, $$f: tL_{x,x}\longmapsto [[f(t),x],x]=-4f(t)L^+_{x,x}.$$ From the proven identities and from the results of~\cite{Il} the linear independence of $f(\textbf{U})$ follows.\\ Note that for any $u\in \textbf{U}$ the following holds: $$f: uxy\longmapsto 4f(u)L^+_{x,y}+2(f(u),x,y).$$ This means that $f(F\textbf{U})\subseteq FW_0$ and the set $\textbf{U}$ is linearly independent.\\ Now suppose that there exist $u_i \in F\textbf{U}$ such that the following relation holds:
$$f(u_0+u_1x+u_2y+u_3z+u_4xy+u_5xz+u_6yz)=0.$$ Denoting $f(u_i)=w_i$ we obtain $$w_0+[w_1,x]+[w_2,y]+[w_3,z]+4w_4L^+_{x,y}+2(w_4,x,y)+4w_5L^+_{z,x}+2(x,z,w_5)+4w_6L^+_{y,z}+$$ $$+2(y,z,w_6)=(w_0+4w_4L^+_{x,y}+4w_5L^+_{z,x}+4w_6L^+_{y,z})+[w_1,x]+[w_2,y]+[w_3,z]+2(w_4,x,y)+$$ $$+2(x,z,w_5)+2(y,z,w_6).$$ Hence, from the linear independence of the set $W$, proven in~\cite{Il}, we have: $w_1=w_2=w_3=w_4=w_5=w_6=0$ and $w_0+4w_4L^+_{x,y}+4w_5L^+_{z,x}+4w_6L^+_{y,z}=0$. Therefore, $w_0=0$ and, by what was proven above, $u_i=0$, $i\in \{0,1,2,3,4,5,6\}$. \\ From Lemma 7 it now follows that the set $$\textbf{U}\cup \textbf{U}x\cup \textbf{U}y\cup \textbf{U}z\cup \textbf{U}xy\cup \textbf{U}xz\cup \textbf{U}yz$$ is a basis of the space $J(M_3,M_3,M_3)$. $\blacksquare$\\ {\bf Corollary 5.} Let $S$ be the free algebra of rank three of the variety generated by the simple seven-dimensional Malcev algebra. The free Malcev algebra $M_3$ of rank three is a subdirect sum of the free Lie algebra $L$ of rank three and the free algebra $S$.\\ {\sc Proof.} Let $K$ be the free algebra of the variety generated by the Cayley--Dickson algebra over a field $F$ with a set of free generators $X$. It is easy to see that the subalgebra of the Malcev algebra $K^-$ generated by $X$ is isomorphic to $S$. From Theorem 1 of~\cite{Il} we obtain that $M_3$ is a subalgebra of $K^-\oplus (Ass[X])^-$. The projection $K^-\oplus (Ass[X])^- \longrightarrow K^-$ induces a homomorphism $g_1: M_3\longrightarrow S$, which is obviously surjective. Similarly, the homomorphism $g_2: M_3\longrightarrow L$ induced by the projection $K^-\oplus (Ass[X])^- \longrightarrow (Ass[X])^-$ is surjective.$\blacksquare$\\ {\bf Corollary 6.} The free Malcev algebra of rank three is special.\\ {\sc Proof.} It follows from Corollary 5.$\blacksquare$\\ In~\cite{PolSh}, in particular, it was shown that the free algebra of the variety generated by the Cayley--Dickson algebra is prime.
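As an aside (not part of the argument above), the simple seven-dimensional Malcev algebra behind Corollary 5 arises as the commutator algebra of the trace-zero Cayley--Dickson (octonion) elements, and the Malcev identity $J(x,y,xz)=J(x,y,z)x$, with $J(x,y,z)=(xy)z+(yz)x+(zx)y$, can be checked numerically in $\mathbb{O}^-$. The following sketch is purely illustrative; all names in it are ad hoc, and the octonions are built by the standard Cayley--Dickson doubling of the quaternions.

```python
import numpy as np

# Illustrative numerical check: the Malcev identity J(x,y,xz) = J(x,y,z)x
# holds in the commutator algebra O^- of the octonions O, built from the
# quaternions H by Cayley-Dickson doubling.  All names here are ad hoc.

def qmul(p, q):
    """Hamilton product of quaternions p = (w, x, y, z)."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def qconj(p):
    """Quaternion conjugation: negate the imaginary part."""
    return p * np.array([1.0, -1.0, -1.0, -1.0])

def omul(u, v):
    """Octonion product via Cayley-Dickson: (a,b)(c,d) = (ac - d*b, da + bc*)."""
    a, b = u[:4], u[4:]
    c, d = v[:4], v[4:]
    return np.concatenate([qmul(a, c) - qmul(qconj(d), b),
                           qmul(d, a) + qmul(b, qconj(c))])

def br(u, v):
    """The Malcev product on O^-: the commutator [u, v] = uv - vu."""
    return omul(u, v) - omul(v, u)

def jac(x, y, z):
    """Jacobian J(x,y,z) = (xy)z + (yz)x + (zx)y in O^-."""
    return br(br(x, y), z) + br(br(y, z), x) + br(br(z, x), y)

rng = np.random.default_rng(0)
x, y, z = (rng.standard_normal(8) for _ in range(3))

# The Malcev identity holds, while J itself does not vanish identically,
# so O^- is a non-Lie Malcev algebra.
assert np.allclose(jac(x, y, br(x, z)), br(jac(x, y, z), x))
print("Malcev identity verified; ||J(x,y,z)|| =", float(np.linalg.norm(jac(x, y, z))))
```

Since the octonions are alternative and the commutator algebra of any alternative algebra is Malcev, the identity holds exactly; floating-point arithmetic confirms it up to rounding.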
From the result of~\cite{PolSh} and Corollary 5 one easily obtains\\ {\bf Corollary 7.} The free Malcev algebra of rank three is semiprime.\\ In conclusion, the author expresses his gratitude to I.~P.~Shestakov both for posing the problems and for his valuable remarks. The author is also very grateful to S.~V.~Pchelintsev for his invaluable help in preparing this article.\\ \end{document}